JackSpratts

Peer-To-Peer News - The Week In Review - September 13th, '08

Since 2002

"This conduct is hardly consistent with the vigorous price competition we hope to see in a competitive [wireless] marketplace." – Senator Herb Kohl, D-Wisconsin


"Should you go out and buy Vista today? Probably not. With Windows 7’s launch scheduled for early 2010, we’re actually closer to that date than we are to Vista’s launch. If you’ve ridden out the storm on XP so far, it probably isn’t worth investing in Vista for just a year and a half of use." – Will Smith


"From a commercial point of view, peer-to-peer provides access to more fans, on a global scale, than ever thought possible via traditional distribution methods." – Mark Meharry


"People wanna blame the decline of album sales on downloading - I think it’s actually the record companies’ fault." – Corey Taylor


September 13th, 2008




Committee Amends, Approves "Enormous Gift" to Big Content
Julian Sanchez

The Enforcement of Intellectual Property Rights Act of 2008, which was blasted by consumer groups and library associations this week as an "enormous gift" to the content industry, won the approval of the Senate Judiciary Committee this afternoon by a 14-4 vote. As first reported by Ars this morning, a series of amendments were added during committee mark-up, providing privacy safeguards for records seized under the law and stripping away several controversial provisions—though not the hotly contested section empowering the Justice Department to litigate civil infringement suits on behalf of IP owners.

One significant change to the proposed legislation addressed, at least in some small measure, a concern broached by Public Knowledge and other consumer groups in a letter to the Judiciary Committee yesterday. Though the amended bill still creates expanded provisions for civil forfeiture of property implicated in an IP infringement case—potentially including servers or storage devices containing the personal data of large numbers of innocent persons—lawmakers altered the bill's language to affirmatively require a court to issue a protective order "with respect to discovery and use of any records or information that has been impounded," establishing "procedures to ensure that confidential, private, proprietary, or privileged information contained in such records is not improperly disclosed or used." They did not, however, go so far as to immunize the data of "virtual bystanders" from seizure, as the letter had requested.

The forfeiture section was also modified to exclude, as grounds for seizure, the violation of the "anticircumvention" provisions of the Digital Millennium Copyright Act. The old language would have allowed for forfeiture of tools that could be used to circumvent digital rights management software.

Excised, as well, was language that would have barred the "transshipment" through the United States of IP infringing goods. Since different countries have different IP rules, this language would potentially have defined goods that were legal in both their country of origin and their final destination—because, for instance, differences in copyright terms allowed works to fall into the public domain overseas while still under copyright in the US—as contraband.

The amendments also added a seat for a representative of the Food and Drug Administration, as well as any "such other agencies as the President determines to be substantially involved in the efforts of the Federal Government to combat counterfeiting and piracy" on the "interagency intellectual property enforcement advisory committee" that the bill would create.

Two new provisions were tacked on to the end of the law. The first directs the Comptroller General to conduct a study of the impact of piracy on domestic manufacturers and develop recommendations for improving the protection of IP in manufactured goods. (Wouldn't it be better to do this sort of thing before enacting enforcement legislation?)

The second is a nonbinding "sense of congress" resolution stipulating that, while "effective criminal enforcement of the intellectual property laws against such violations in all categories of works should be among the highest priorities of the Attorney General," the AG should give priority, in cases of software piracy, to cases of "willful theft of intellectual property for purposes of commercial advantage or private financial gain," especially those "where the enterprise involved in the theft of intellectual property is owned or controlled by a foreign enterprise or other foreign entity." Which is to say, that copy of Photoshop you pulled off BitTorrent last week isn't on the top of the Justice Department's docket... yet.

Remaining intact was language that would give the Justice Department authority to pursue civil suits against IP infringers, awarding any damages won to the patent, copyright, or trademark holders. Critics have blasted this provision as a gift of free, taxpayer-funded legal services to content owners. The bill now goes to the full Senate, and must still be reconciled with its counterpart legislation in the House, which lacks the language deputizing the DoJ to bring suit on behalf of IP owners.
http://arstechnica.com/news.ars/post...g-content.html





Study Says Intellectual Property System Should Die
Ben Jones

A recently released study claims that the current ‘Intellectual Property’ situation in the world is not working well. Driven by a fear of losing out, and bolstered by an attitude that profit is the aim of IP, progress is hampered, not only in the entertainment industry but also in biotechnology, where medicines are sometimes restricted or withheld, causing deaths.

When we write about “Intellectual Property” and copyright, it is mostly related to the entertainment industry. However, the problems are much broader than some would expect. A study, published by non-profit group The Innovation Group (and released under a Creative Commons license no less), doesn’t pull many punches about IP. Right at the start, it addresses the cause of the problem as many see it, from biotechnology to the music industry.

Quote:
The current era of intellectual property is waning. It has been based on two faulty assumptions made nearly three decades ago: that since some intellectual property (IP) is good, more must be better; and that IP is about controlling knowledge rather than sharing it. These assumptions are as inaccurate in biotechnology – the field of science covered by this report – as they are in other fields from music to software.
The discussion throughout focuses on how this “Old IP” system harms innovation and consumers. It mentions how the music industry lobbies for higher penalties for copyright infringement while refusing to try out new business models, and how the movie industry tries to ban and restrict new technology until it realizes it can make money off it.

Perhaps even more concerning, when it comes to biotechnology – medication, treatments, equipment – withholding information or purposefully restricting it will lead to deaths. One example the paper gives on this topic is the lawsuit that 39 pharmaceutical companies brought against the South African government for trying to act effectively to deal with the HIV/AIDS crisis there. Such restrictions have undoubtedly hastened the deaths of thousands if not millions.

This study is not alone in stating the problems with patents in research and development. In August, Kenyan medicine-men revealed that they have kept their traditional practices to themselves because of the fear of patents. With the high costs and excessive paperwork, filing patents on the techniques is not feasible for them, according to a report in Business Daily Africa. They worry that companies that find the patent process trivial will patent their techniques and prevent them from being used.

With them on this is Pirate Party International, an umbrella group of the national Pirate Parties, which has said that biopatents are a source of concern and an area it hopes to change. Swedish Pirate Party Chairman Rick Falkvinge told TorrentFreak: “This shows yet again how Big Pharmacy practices are robbing people of their medicine; only now, they have managed to silence the critical word-of-mouth distribution of indigenous knowledge, through fear of monopolization of traditional medicine. It is high time for the patent system in general, and pharmacy patents in particular, to be exposed and abolished.”

Yet these arguments and studies appear to be falling on deaf ears. Today, a bill aimed at increasing the enforcement of these IP ‘rights’ still further – including the ability for the government to file civil IP complaints without a complaint from the IP holder – got its first reading in the US Senate’s Judiciary Committee. With only a few months left of this session of Congress, the lobby groups are almost certainly going all out to get it passed, despite strong opposition. Lost (or ignored) in this push is the intent of copyright and patents, which the US Constitution says is to promote progress; as the study shows, they no longer do.

It also goes without saying that despite this talk of ‘old IP’ and ‘new IP’, there are those that refuse to use the term at all.
http://torrentfreak.com/study-says-i...ld-die-080911/





U.S. Bounds Ahead on Broadband Proliferation

But it still has a long way to go.
Sean Michael Kerner

Though the U.S. still trails other parts of the world in deployment of high-speed broadband, all is not lost. According to the latest State of the Internet report from content delivery player Akamai, the nation's broadband penetration is on the rise.

Akamai found that U.S. broadband connections -- defined as connections at 5 megabits per second (Mb/sec) or faster -- grew in number by 29 percent, compared to the previous quarter.

"I think the U.S. growth rate is something we expected," David Belson, Akamai's director of market intelligence and author of the report, told InternetNews.com. "If you look at the money being spent to build out the fiber to the home infrastructure, and if you look at the competitive deals that are going on, vendors are trying hard to make it affordable and 'outspeed' each other."

Despite such efforts, the country still sits sixth on Akamai's list of the most widely broadband-enabled countries, with only 26 percent of U.S. Internet connections having been clocked at speeds of 5Mb/sec or greater. South Korea continues to hold the top spot, with 64 percent of its Internet users' connections at speeds of 5 Mb/sec or greater.

Belson isn't optimistic that the U.S. will catch up to South Korea any time soon, either.

"We'll expect to see connection speeds grow rapidly in the future since we're at only a quarter of the connections that we're seeing from the U.S. being at 5Mb/sec," Belson said. "There is still a long way to grow."

Belson attributed the large percentage of broadband deployments in South Korea to a combination of population density and government intervention. In South Korea, a large portion of the population lives in apartment buildings, which makes wiring large groups of people easier.

Likewise, its government has taken a proactive stance on rolling out high-speed connectivity. Whether the winner of the upcoming presidential elections in the U.S. will push for similar proliferation -- supporting much-discussed efforts like wiring rural communities -- remains uncertain, he added.

"Does the new leadership of the Unites States have the opportunity to put some money where their mouth is?" Belson said. "Absolutely. Will they fund rural broadband? Unlikely."

"Given the nature of the market, I don't think we'll see 60 to 70 percent high-speed broadband penetration in the U.S. for quite some time," he added.

Closer to home

While the U.S. as a whole continues to get faster, the State of California actually slipped in the rankings, despite its huge IT and high technology industries. Belson noted that California came in 21st in the nation, with its 7 percent growth rate over first quarter having been outpaced by other states' growing broadband infrastructures. In Akamai's last report, California ranked 17th.

Once again, the tiny state of Delaware led the U.S. in broadband penetration, a fact that Belson attributed to some of the same factors that also made South Korea the leader globally -- namely, comparatively high population density. About 66 percent of all traffic from Delaware came from broadband connections.

Akamai, a leader in Internet content distribution, is in prime position to assess the status of the U.S.'s infrastructure, since it has servers based at the edge of the Internet in locations across the country and throughout the world. It examined some 346 million unique IP addresses to compile its most recent report.

Its position also enables it to provide some insight into attack traffic that also radiates across the Internet. In the report, it said some 400 unique ports were targeted by attackers, representing a nearly 20-times increase over first quarter.

Though a greater number of ports were attacked, most of the attacks -- 85 percent -- hit only the top ten ports. The most-often attacked port during the quarter was TCP Port 445, with over 28 percent of all traffic. The port is often used for Windows SMB traffic, and has been targeted in the past by worms like Sasser as a means to propagate.

By the same token, Belson noted that most other ports experienced a relatively low volume of attacks.

"It's more spurious traffic or port scanning then being a well coordinated attack across more ports," Belson said.
http://www.internetnews.com/infra/ar...liferation.htm





Why Is the Internet So Infuriatingly Slow?

Plus, two horrible things your Internet service provider wants to do to make it speedier.
Chris Wilson

Everyone hates their Internet service provider. And with good cause: In the age of ubiquitous Internet access, Web service in America is still often frustratingly slow. Tired of being the villain, telecom companies have assigned blame for this problem to a new bad guy. He's called the "bandwidth hog," and it's his fault that streaming video on your computer looks more like a slide show than a movie. The major ISPs all tell a similar story: A mere 5 percent of their customers are using around 50 percent of the bandwidth—sometimes more during peak hours. While these "power users" are sharing three-gig movies and playing online games, poor granny is twiddling her thumbs waiting for Ancestry.com to load.

The ISPs are certainly correct that there's a problem: The current network in the United States struggles to accommodate everyone, and the barbarians at the gate—voice-over-IP telephony, live video streams, high-def movies—threaten to drown the grid. (This Deloitte report has a good treatment of that eventuality.) It's less clear that the telecom companies, fixated as they are on the bandwidth hogs, are doing a good job of managing the problem and planning for the future. The ISPs have put forward two big ideas, in recent months, about how to fix our bandwidth crisis. We can arrange these plans into two categories: horrible now and horrible later.

Plan One: Feed the meter. Category: Horrible now. In January, Time Warner announced it was rolling out an experimental plan in Beaumont, Texas, that charged users by the gigabyte. Thirty dollars would get you 5 gigabytes a month, while a $55 plan would get you 40. Each extra gigabyte over the limit costs a buck. In succeeding months, this data-capping idea has caught on. Comcast recently announced that it's drawing the line at 250 gigabytes per user per month. Once you've used that much bandwidth, you can get your account suspended.

A limit of 250 gigs a month is plenty enough for most of us, at least for now. Silicon Alley Insider has a nice rundown of what it would take to hit that limit, to the tune of two HD movies a day and a lot of gaming on the side. But that assumes your connection is speedy enough to stream high-quality video in the first place. It's a chicken-and-egg problem: People use less bandwidth when their connection is crawling from congestion.
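
To put those numbers in perspective, here is a back-of-the-envelope sketch in Python. The $30-for-5GB pricing with $1 per extra gigabyte and the 250GB cap come from the article above; the usage figures are invented for illustration.

Code:
def time_warner_bill(gb_used, included_gb=5, base_price=30, overage_per_gb=1):
    """Monthly cost under the Beaumont-style metered plan described above."""
    overage = max(0, gb_used - included_gb)
    return base_price + overage * overage_per_gb

def share_of_cap(gb_per_day, cap_gb=250):
    """Fraction of a 250 GB monthly cap that a given daily habit consumes."""
    return gb_per_day * 30 / cap_gb   # 1.0 means the cap is hit exactly

print(time_warner_bill(40))    # 30 + 35 * 1 = 65 dollars for 40 GB
print(share_of_cap(9))         # 270 / 250 = 1.08, just over the cap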

A reasonable argument can be made that this is a sound way to clear up congestion. It is rather unfair that people who barely use the Web have to pay the same or similar rates as people who use BitTorrent all day. The "meterists"—and there are a few of them out there—think systems like Time Warner's are inherently fairer, as they end the practice of forcing light users to subsidize heavy users. The rosiest scenarios even suppose that a pay-as-you-go Internet could give telecoms the financial incentive to expand their networks.

The criticism is easy to condense: No one joyrides in a taxi. A plan like this, as its many opponents have noted, will cramp the freewheeling, inventive nature of the Internet. The Internet owes its success to two pillars of human activity: masturbation and procrastination. (Seriously: We have the porn companies to thank for pioneering all sorts of technologies, from VHS to secure credit-card transactions online.) Is the Internet really the Internet if people don't use it to waste time?

Widespread deployment of capped or metered plans would also cripple businesses that have invested in high-bandwidth products, like videoconferencing. And if people start pinching bytes, it could also pose problems for security—if you hear the meter ticking, you'll probably be less eager to install large operating-system updates and new virus-definition files.

Beyond that, capping data transfer is simply a crude way to get people to curb their data appetites. Imposing limits on gigabytes per month is as sensible as replacing speed limits with a total number of miles you can drive in a given day. A more reasonable scenario—though one that's still decidedly unfun—would be to charge for Internet access as we charge for cell phones, running the meter during peak hours and letting people surf and download for free on nights and weekends, when there's far less competition for bandwidth.

Plan Two: Blame BitTorrent. Category: Horrible later. In addition to capping data transfer, Comcast is taking a second anti-hog initiative. Rather than charging more, the company plans to slow or cut off peer-to-peer traffic during peak times. Last October, the Associated Press caught Comcast deprioritizing traffic from BitTorrent and other file-sharing protocols. The company received a slap from the FCC for singling out a specific type of traffic, which violates the FCC's policy statement on network management. Comcast now says it will pursue a more compliant strategy that slows the connections of power users during peak times without singling out specific types of traffic. This tactic is similar to the more general practice of "traffic shaping": prioritizing data packets for applications like video that shouldn't lag at the expense of something like e-mail, which can wait in line an extra few seconds without anyone noticing—except that it's deprioritizing users, not data packets. (People who hate the concept of traffic shaping prefer to call this "throttling" or "choking.")
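
As a rough illustration of what "deprioritizing users, not data packets" could look like, here is a minimal Python sketch: subscribers who have moved the most data get a lower scheduling weight during peak hours, regardless of which protocol they are running. The peak window, threshold, and weights are invented for illustration and are not Comcast's actual parameters.

Code:
PEAK_HOURS = range(18, 23)   # assumed evening peak window
HEAVY_USER_GB = 10           # assumed per-day "power user" threshold

def scheduling_weight(gb_moved_today, hour):
    """Relative share of link capacity for one subscriber."""
    if hour in PEAK_HOURS and gb_moved_today > HEAVY_USER_GB:
        return 0.5           # heavy users get half weight at peak times
    return 1.0               # everyone else is untouched

def allocate(link_mbps, users, hour):
    """Split link capacity in proportion to each user's weight.
    users maps a subscriber name to GB moved so far today."""
    weights = {name: scheduling_weight(gb, hour) for name, gb in users.items()}
    total = sum(weights.values())
    return {name: link_mbps * w / total for name, w in weights.items()}

print(allocate(100, {"granny": 0.2, "power_user": 40.0}, hour=20))
# {'granny': 66.6..., 'power_user': 33.3...}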

This plan is "horrible later" because it fails to account for the natural evolution of the Web toward larger file sizes and higher bandwidth activities. While it isn't a God-given right to be able to downloaded pirated DVDs all day long, the ISPs should not adopt a long-term strategy that penalizes high-bandwidth activity. As FCC commissioner Robert M. McDowell pointed out in the Washington Post a few weeks ago, this is not the first time we've reached a crisis level of congestion. If Time Warner and Comcast had structured their networks around anti-bandwidth-hogging policies, say, 20 years ago, revolutionary services like YouTube and BitTorrent might not even exist.

Now let's take a step back and sympathize with the ISPs. On the one hand, power users and Web entrepreneurs brand them as anti-innovation for going after bandwidth hogs with regressive tactics. On the other, there are oodles of home users who get infuriated when it takes forever for a page to load in their browser. On top of that, they have to deal with net-neutrality advocates who often seem more interested in policing the ISPs than in proposing ways to fix our bandwidth crunch (though Columbia law professor and Slate contributor Tim Wu runs down some good possible fixes in this New York Times op-ed). So let's help the ISPs out and look at a few promising technologies that could help us all surf quickly and happily.

The high-fiber diet. If bandwidth demands do continue to scale, we could get to the point where anyone who wants a decent connection to watch a 100-gigabyte holographic movie—or whatever we're watching five years from now—will have to get a fiber-optic cable directly to their home. Verizon has bet on this solution with its FiOS service. These "fiber to the premises" connections are still very expensive and aren't yet widely deployed—and the commercials also make you want to retrofit your entire neighborhood with copper, just out of spite—but it looks as if they're only getting more necessary. (Some researchers believe that the same technology that may someday lead to invisibility cloaks might also be deployed to route fiber-optic signals through today's existing networks. That effort is fairly nascent.)

Cold, hard cache. Shortly before the start of the 2008 Olympics, some commentators feared the global network wouldn't be able to handle all the demand for streaming Web video. The fact that the Internet didn't "melt," as one ZDNet author feared, set tongues wagging about NBC's use of third-party "content-delivery networks." To deliver nonlive content, these companies can store popular content on many different servers around the country—a method of ensuring that data packets don't have to travel as far to reach their destination. In general, your machine will retrieve information much faster from a "nearby" server on the network than from one across the globe. If a copy of the movie you want is stored by your ISP on a local server, you'll both get it faster and hold up fewer people in the process. Just as NBC did, companies may need to turn to these content-delivery companies—essentially, large private networks—to help distribute both cached and live content. Still, it feels a little defeatist; taking customers off the public Internet is great for reducing congestion, but the fact that it's necessary is a problem we need to fix head-on, not work around.
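
A toy Python sketch of why edge caching helps: the client fetches from the nearest server that holds a copy and falls back to the distant origin otherwise. The server names and distances are made up.

Code:
ORIGIN = ("origin-datacenter", 8000)   # (name, rough km from this user)
EDGE_SERVERS = [("isp-local-cache", 40), ("regional-pop", 900)]

def pick_source(content_id, cached_at):
    """Choose the nearest server holding a copy; fall back to the origin."""
    candidates = [(name, km) for name, km in EDGE_SERVERS
                  if content_id in cached_at.get(name, set())]
    return min(candidates + [ORIGIN], key=lambda server: server[1])

cached_at = {"isp-local-cache": {"olympics-stream"}}
print(pick_source("olympics-stream", cached_at))  # ('isp-local-cache', 40)
print(pick_source("obscure-movie", cached_at))    # ('origin-datacenter', 8000)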
http://www.slate.com/id/2199368





Guiding Internet Traffic Can Dramatically Reduce P2P Downloading Time
ANI

Scientists at the University of Washington in Seattle have come up with a new scheme that can solve the problem of low connection speeds during peer-to-peer (P2P) downloading.

Such problems usually occur when Internet users download P2P content that is stored a long way from their homes, and the online links over thousands of kilometres get tied up delivering it.

The researchers say that their scheme called Proactive Provider Participation for P2P (P4P) can help ease the load.

Their idea is to take the help of Internet service providers (ISPs) in supplying P2P sites with data on the shortest routes between peers.

They also propose to involve network traffic reports that identify uncongested routes, reports New Scientist magazine.

While making a presentation at a Seattle conference on the internet last week, the researchers revealed that tests conducted by them suggested that P4P could cut the average trip of a P2P data packet from 1600 kilometres to just 250 km, reducing overall load by about 80 per cent.
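
The underlying idea is simple to sketch. Assuming the ISP publishes some notion of "distance" to each candidate peer (the real P4P interface is richer than this), a client can sort candidates by that hint instead of choosing them at random. A minimal Python sketch with hypothetical peers and distances:

Code:
def pick_peers(candidates, distance_hint_km, want=3):
    """Prefer the peers the ISP says are closest; unknown peers sort last."""
    ranked = sorted(candidates, key=lambda p: distance_hint_km.get(p, float("inf")))
    return ranked[:want]

hints = {"peer-a": 30, "peer-b": 250, "peer-c": 1600, "peer-d": 4000}
print(pick_peers(list(hints), hints))   # ['peer-a', 'peer-b', 'peer-c']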
http://www.thaindian.com/newsportal/..._10094243.html





DSL Is The New Dial-Up

At least according to one analysis firm...
Karl Bode

The second quarter was the worst quarter ever for DSL additions, with AT&T and Verizon barely adding 100,000 broadband customers, but collectively losing 220,000 DSL customers (130,000 for Verizon, 90,000 for AT&T). Why? The slow housing market plays a role, as does DSL users migrating to fiber. Verizon's neglect of DSL infrastructure and marketing is also a reason. But the baby bells are also hampered by DSL's slower speed in the face of 10-20Mbps cable, popular cable technologies like Powerboost, and VoIP bundles.

The shift makes DSL the new dial-up, according to a new report by stat farm Strategy Analytics, which examines the impact that last quarter's dismal showing in new telco DSL additions will have on the industry. "The Telcos' core DSL offerings are unable to compete effectively with Cable; they must step up their already frenetic fiber roll out to stay in the game," says Strategy Analytics researcher Ben Piper. "Indeed, we are starting to see DSL become the new Dial Up." In other words, despite the advice of weak-kneed investors, it's time to upgrade.

Cable appears to be better positioned because DOCSIS 3.0 upgrades are cheaper than running fiber (be it FTTN or FTTH), and MSOs are adding VOIP customers much more quickly than the telcos are adding video subscribers. But the bells' primary strength right now is wireless, and this fight could get much more interesting once they begin serious deployment of LTE wireless broadband (AT&T claims 20Mbps wireless by 2009, though that's optimistic). Verizon's focus on urban fiber (thanks to bendable fiber) could also speed up FiOS numbers.
http://www.broadbandreports.com/show...w-DialUp-97562





How the 'Net Works: an Introduction to Peering and Transit
Rudolph van der Berg

Whose pipes?

In 2005, AT&T CEO Ed Whitacre famously told BusinessWeek, "What they [Google, Vonage, and others] would like to do is to use my pipes free. But I ain't going to let them do that…Why should they be allowed to use my pipes?"

The story of how the Internet is structured economically is not so much a story about net neutrality, but rather it's a story about how ISPs actually do use AT&T's pipes for free, and about why AT&T actually wants them to do so. These inter-ISP sharing arrangements are known as "peering" or "transit," and they are the two mechanisms that underlie the interconnection of networks that form the Internet. In this article, I'll take a look at the economics of peering and transit in order to give you a better sense of how traffic flows from point A to point B on the Internet, and how it does so mostly without problems, despite the fact that the Internet is a patchwork quilt of networks run by companies, schools, and governments.

The basics

At the moment, the Internet consists of over 25,000 Autonomous Systems (AS). An Autonomous System can independently decide who to exchange traffic with on the 'Net, and it isn't dependent upon a third party for access.

Networks of Internet service providers, hosting providers, telecommunications monopolists, multinationals, schools, hospitals and even individuals can be Autonomous Systems; all you need is a single "AS number" and a block of provider-independent IP numbers. These can be had from a regional Internet registry (like RIPE, ARIN, APNIC, LACNIC and AFRINIC). Though one network may be larger or smaller than another, technically and economically they all have the same possibilities.

(Most organizations and individuals do not interconnect autonomously to other networks, but connect via an ISP. One could say that an end-user is "buying transit" from his ISP.)

In order to get traffic from one end-user to another end-user, these networks need to have an interconnection mechanism. These interconnections can be either direct between two networks or indirect via one or more other networks that agree to transport the traffic.

A <--> B (direct)
A <-->C<-->D<-->…<-->B (indirect)

Most network connections are indirect, since it is nearly impossible to interconnect directly with all networks on the globe. (The likes of FLAG and AT&T might come close, but even they can't claim global network coverage.) In order to make it from one end of the world to another, the traffic will often be transferred through several indirect interconnections to reach the end-user. The economic arrangements that allow networks to interconnect directly and indirectly are called "peering" and "transit":

• Peering: when two or more autonomous networks interconnect directly with each other to exchange traffic. This is often done without charging for the interconnection or the traffic.
• Transit: when one autonomous network agrees to carry the traffic that flows between another autonomous network and all other networks. Since no network connects directly to all other networks, a network that provides transit will deliver some of the traffic indirectly via one or more other transit networks. A transit provider's routers will announce to other networks that they can carry traffic to the network that has bought transit. The transit provider receives a "transit fee" for the service.

The transit fee is based on a reservation made up-front for the number of Mbps. Traffic from (upstream) and to (downstream) the network is included in the transit fee; when you buy 10Mbps/month from a transit provider you get 10 up and 10 down. The traffic can either be limited to the amount reserved, or the price can be calculated afterward (often leaving the top five percent out of the calculation to correct for aberrations). Going over a reservation may lead to a penalty.
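
The "top five percent left out" arrangement is usually implemented as 95th-percentile, or burstable, billing. A minimal Python sketch, with hypothetical rates and traffic samples:

Code:
def billable_mbps(samples):
    """Sort the usage samples and ignore the top five percent of them."""
    ordered = sorted(samples)
    index = max(int(len(ordered) * 0.95) - 1, 0)
    return ordered[index]

def monthly_cost(samples, committed_mbps=10, price_per_mbps=20, penalty_per_mbps=30):
    """The committed rate is billed flat; anything above it draws a penalty."""
    used = billable_mbps(samples)
    overage = max(0, used - committed_mbps)
    return committed_mbps * price_per_mbps + overage * penalty_per_mbps

readings = [3, 4, 5, 6, 7, 8, 9, 11, 12, 40]   # one big spike gets ignored
print(billable_mbps(readings))   # 12
print(monthly_cost(readings))    # 10*20 + 2*30 = 260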

When a network refuses to peer for another network, things can get ugly. I once heard the following anecdote at a RIPE meeting.

Allegedly, a big American software company was refused peering by one of the incumbent telco networks in the north of Europe. The American firm reacted by finding the most expensive transit route for that telco and then routing its own traffic to Europe over that link. Within a couple of months, the European CFO was asking why the company was paying out so much for transit. Soon afterward, there was a peering arrangement between the two networks.

Given the rules of peering, we can examine how an ISP will behave when trying to build and grow its network, customer base, revenues, and profits. To serve its customers, an ISP needs its own network to which customers connect. The costs of the ISP's network (lines, switches, depreciation, people, etc.) can be seen as fixed; costs don't increase when an extra bit is sent over the network compared to when there is no traffic on the network.

Traffic that stays on the ISP's network is the cheapest traffic for that ISP. In fact, it's basically free.

Peering costs a bit more, since the ISP will have to pay for a port and the line to connect to the other network, but over an established peering connection there is no additional cost for the traffic.

Transit traffic is the most expensive. The ISP will have to estimate how much traffic it needs, and any extra traffic will cost extra. If the ISP is faced with extra traffic (think large-scale P2P use), its first priority will be to keep the traffic on its own network. If it can't, it will then use peering, and as a last resort it will pay for transit.

Every ISP will need to buy some amount of transit to be able to interconnect with the entire world, and to achieve resilience, an ISP will choose more than one transit provider. Transit costs money, and as the ISP grows, its transit bill will grow, too. In order to reduce its transit bill, the ISP will look for suitable networks to peer with. When two networks determine that the costs of interconnecting directly (peering) are lower than the costs of buying transit from each other, they'll have an economic incentive to peer.

Peering's costs lie in the switches and the lines necessary to connect the networks; after a peering has been established, the marginal costs of sending one bit are zero. It then becomes economically feasible to send as much traffic between the two network peers as is technically possible, so when two networks interconnect at 1Gbps, they will use the full 1Gbps. But with transit, even though it is technically possible to interconnect at 1Gbps, if the transit-buying network has only bought 100Mbps, it will be limited to that amount. Transit will remain as a backup for when the peering connection gets disrupted. The money an ISP saves by peering will go into expanding the business.
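
The decision to peer can be sketched as a simple break-even calculation: peering pays off once the transit spend it avoids exceeds the fixed cost of the port and line to the other network. All figures below are hypothetical.

Code:
def monthly_savings(traffic_mbps, transit_price_per_mbps, peering_fixed_cost):
    """Positive result: the two networks have an economic incentive to peer."""
    avoided_transit = traffic_mbps * transit_price_per_mbps
    return avoided_transit - peering_fixed_cost

# 400 Mbps exchanged, transit at $12 per Mbps, $1,500/month for port and line:
print(monthly_savings(400, 12, 1500))   # 3300 -> peering pays off
# 50 Mbps exchanged in a city where transit is cheap:
print(monthly_savings(50, 4, 1500))     # -1300 -> keep buying transit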

Another important limitation of peering is that it is open only to traffic coming from a peer's end-users or from networks that have bought transit. A transit provider will not announce a route toward a network it peers with to other networks it peers with or buys transit from. If it did announce the route, it would be providing free transit over its network for its peers or, even worse, buying transit from another network and giving it away freely to a peer.
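
A minimal Python sketch of that export rule (a simplified model of a BGP export filter; the relationship labels and prefixes are just examples): routes learned from customers are announced to everyone, while routes learned from peers or upstream transit providers are announced only to customers.

Code:
def export_routes(routing_table, to_relationship):
    """routing_table: list of (prefix, learned_from) pairs, where learned_from
    is 'customer', 'peer' or 'upstream'."""
    if to_relationship == "customer":
        return routing_table                # customers may see everything
    # peers and upstreams only see routes that belong to our own customers
    return [route for route in routing_table if route[1] == "customer"]

table = [("10.0.0.0/8", "customer"),
         ("192.0.2.0/24", "peer"),
         ("198.51.100.0/24", "upstream")]

print(export_routes(table, "peer"))      # only the customer prefix is announced
print(export_routes(table, "customer"))  # the full table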

The higher up in the network hierarchy you are, the more networks you can reach without needing to pay someone else for transit. A network at the very top, which buys transit from no one yet still has access to the whole Internet, is sometimes said to be a Tier 1 network.

It's a common misconception that the benefit an ISP derives from peering depends upon the direction of the flow of traffic. According to this way of thinking, if YouTube peers with an ISP, this benefits YouTube more than it does the ISP (since YouTube sends so much data but receives comparatively little). But in practice, the flow of traffic is not an issue for an interconnect. Whether it goes to or from the network, companies still need the same Cisco equipment.

In practice, it is actually quite likely that the ISP side of an ISP-YouTube relationship would see the greatest savings both in absolute costs and as a percentage of total traffic costs. Most ISPs have less traffic (and buy less transit) than YouTube and its parent Google have. Their buying power therefore is less than that of YouTube/Google, so their price per Mbps/month for transit is likely to be higher. Given that the amount of traffic saved from transit is by definition equal for both YouTube and the ISP, it follows that the ISP is saving more money.

Hot potato, cold potato

Another source of contention and confusion is arguments between "hot potato" and "cold potato" routing. Hot potato routing is the practice of handing over traffic at the earliest convenience (hot, hot! Here, you take it!), while cold potato routing is where you hold onto traffic as long as you can before handing it over to another network.

There are long debates in the networking world about which of these is the best solution. Hot potato routing may overload a link to an interconnection point with many peers, or it might force a global network provider to carry traffic all the way from Europe to South America at its own cost if it has peered with another network, whereas it could have sold transit. Some transit providers have solved this problem by splitting their networks into several regional Autonomous Systems and only peering locally (not globally) with each of those AS numbers.

Cold potato routing may give the originating network greater control over quality, except that it is making a guess on the status of the network beyond its own routers. In a cold potato scenario, it's difficult to factor in changes that happen over time, as guesses are made based on the past. Hot potato routing, on the other hand, assumes that the other guy knows best how to route traffic on his network, and it also assumes that if the other network gets overloaded at a location, it will have the biggest incentive to upgrade or to restructure its interconnects.

Pay to peer?

Would it be advisable to pay for peering? There has been significant debate on whether it is beneficial to pay for peering, but I think that peering should typically be free. When two networks peer, they both save the same amount of traffic from transit.

As stated previously, the monetary benefits of not having to use transit depend upon the transit price that each network pays. The network that saves the least is the network that has the best transit deals. If, for both networks, a peering agreement is cheaper than buying transit, then the choice of who should pay for the peering agreement becomes completely arbitrary.

One could say that the network that saves more money should share the savings with the network that saves less, but on what basis? The peering in itself is already there. Paying money for it or sharing the benefits doesn't make it better. The only reason the smaller party pays more is because it is in a less fortunate position when it comes to buying transit. If, through renegotiation of transit contracts, it is all of a sudden better off, it would still be hard to convince the other network to reverse payments. Worse still, it would in fact be sponsoring the other network to attain even lower overall traffic costs. If the two networks at the same time compete for the same customers, it would now be sponsoring its competitor.

There might be situations where a peering might be beneficial to network A, but the savings are too little for network B. In such a case it might look good to A to pay B for a peering agreement to increase B's savings to such a level that both parties will profit. Though this might sound good at first, it could have unintended consequences for network A. If the traffic between the two networks grows to such a level that both parties benefit equally from the peering, B will still want to try to keep the payment for the peering; it's essentially free money.

Another problem with pay to peer is that networks would have an incentive to understate their transit costs in order to become a receiving party. This makes it less likely that both parties would reach a peering agreement, because one party is lying about its benefits and the other is not willing to pay. This is hard to check for either party. The best thing a network can do is hope that when it's economical for this network to peer for free, it is the same case for the other network. If not, the transaction costs of other arrangements are probably too high.

Peering Locations

Peering will happen at a location that is most convenient for both networks. When two networks decide to peer in one location, that location immediately becomes a valuable place at which to peer for other networks, too. This increase in value causes more and more networks to cluster together at certain locations. In the history of the internet, we can see that at first, these locations were at the sites where academic networks interconnected, and later on at large co-location facilities. In order to facilitate peering, Internet exchange points (IXPs) were established at those locations. In Europe these IXPs are typically not-for-profit associations, while in the USA they operate as private businesses.

Putting a single switch in between all the parties who want to interconnect makes it possible to reach all parties with one connection (public interconnect), instead of having to dedicate a line and a port on a switch for each interconnection. This does require IXPs to be neutral and uninvolved in the business of their customers; the process of peering and transit is up to the networks, and the IXP is just responsible for the technical functioning of the switch.

This doesn't mean, however, that peerings will take place only through the IXP. There will still be direct interconnects that bypass the exchange (known as private interconnects), where the exchange can act as a backup for that interconnect (and a transit connection often acts as a backup for that backup).

When more and more networks roll out in the location of the Internet exchange point, this location becomes valuable not only for peering, but also for buying and selling transit. This will attract transit providers to the location in order to peer with other networks that sell transit and also to try and sell transit to networks needing it. The increase in transit providers will cause more competition and, therefore, a lowering of transit costs, which will, in turn, increase the attractiveness of the location for other networks through the combination of more peers and lower transit costs.

As networks grow, some of them will exchange more and more traffic with networks that are not yet present at the local Internet exchange. If the costs of buying a direct connection to another location where networks are present is lower than the costs of transit, then the network will expand toward the low-cost location. This is quite clear in Europe, where medium and large networks will almost always be present at the IXPs of Amsterdam, London, Frankfurt, and Paris. In these cities, there are many networks to interconnect with and the price of transit is at its lowest.

The irony is that in some of these towns, transit prices have dropped to such lows that it's no longer economical for some smaller networks to interconnect at an IXP, since the transit fee saved is lower than the monthly fee for the IXP.

In a nutshell, the economics of interconnection are:

• Peer as much as you can, to avoid transit fees.
• Use the savings from peering to expand your business and network.
• Use the expansion of your business and network to become more attractive for others to peer with and to reach those that are attractive to peer with.
• Establish IXPs in order to further lower the costs of peering, to bring together as many networks as possible, and to create locations where there is competition between providers of transit.
• Repeat.

Transit economics

Providing transit has its own rationales and economic mechanisms. Transit providers charge transit fees in order to recoup their investment in the lines and switches that make up their networks. The price of transit will be a combination of the costs of running the network, plus the amount of transit the transit provider has bought, minus (maybe) the traffic that is destined directly for peers and customers of the transit provider.

Being a pure transit provider with only Autonomous Systems as customers puts a network in a weird spot. Such a network's business case is built on being the intermediary in the flow of traffic, so it tries to charge all of the other autonomous systems for their traffic. The problem for a pure transit provider is that its customers are always looking at ways to lower their transit fees, and lower transit fees can be had by switching to a competitor or by not using the transit provider at all. So disintermediating the transit provider is standard behavior for the transit provider's customers.

How can the transit provider prevent its customers from going to competitors or from cutting it out of the loop? The first way is to keep prices down. If a transit provider is the only provider of a link between Geneva and Amsterdam, it will have to be very aware that its price stays low. If it's too high, the customers may opt to cancel their transit contracts and either build their own links or compel a competitor to step into the market and start competing.

The other trick is to actively work to keep competitors from entering the market. How do you persuade people not to enter the market? By keeping margins low, even as growth rises. Fiber is a fixed-cost investment, because additional traffic can be carried at little or no extra cost. Though it's tempting to let profits rise with the growth of traffic, the network will actually have to lower its traffic price every month in order for margins to remain the same, thereby keeping intact the barrier to entry for a competing network.

A couple of cooperating ISPs can also be dangerous to the business plan of a pure transit player. These networks could cooperate in creating a backbone between their networks in order to carry traffic to and from each other's systems. For instance, Dutch, Belgian, French, and Swiss ISPs could work together and bypass a Trans-European transit provider. So a pure transit play is under constant threat even from existing customers who resell traffic.

An interesting tactic that I once heard about was from a content-heavy hosting provider who was trying to buy transit from residential ISPs. ISPs have a high inflow of traffic; hosting providers have high outbound traffic. Because incoming and outgoing traffic are bundled into the same price, the hosting provider rightly had determined that there would be ISPs willing to resell upstream capacity they didn't use. For the pure transit player this might be seen as a loss of income.

In the end, pure transit is debatable as a real business model. An average end-user is bound to its ISP by numerous switching costs (change of e-mail address, lack of knowledge, time, hassle, etc.), but this customer lock-in just does not apply to transit. The Border Gateway Protocol propagates a change in transit provider within seconds, globally. Autonomous Systems can switch within seconds and there is little a transit provider can do to differentiate itself from rivals. Add to this the effect of competitors and mutually assured destruction, and one can understand that there is not much money to be had in this business.

Tough at the top: a word about Tier 1 networks

Tier 1 networks are those networks that don't pay any other network for transit yet still can reach all networks connected to the internet. There are about seven such networks in the world. Being a Tier 1 is considered very "cool," but it is an unenviable position. A Tier 1 is constantly faced with customers trying to bypass it, and this is a threat to its business. On top of the threat from customers, a Tier 1 also faces the danger of being de-peered by other Tier 1s. This de-peering happens when one Tier 1 network thinks that the other Tier 1 is not sufficiently important to be considered an equal. The bigger Tier 1 will then try to get a transit deal or paid peering deal with the smaller Tier 1, and if the smaller one accepts, then it is acknowledging that it is not really a Tier 1. But if the smaller Tier 1 calls the bigger Tier 1's bluff and actually does get de-peered, some of the customers of either network can't reach each other.

If a network has end-users (consumers or businesses), it's probably in a better business position than a Tier 1 or a pure-play transit provider, since having end-users provides stability to a business. Autonomous Systems can switch within seconds, but end-users are stickier customers. Churn is less of a problem and revenues are therefore more stable and easier to base decisions on, since prices don't have to drop on a monthly basis. So an end-user business, combined with a bit of transit is, therefore, ideal for a network provider.

Can peering and transit lead to a steady state?

Economists often ask if peering and transit can lead to a steady state, i.e., a situation that can sustain itself by generating enough money for investments while also providing a dynamic and competitive environment.

I personally think the answer is yes. Experiences in recent years have shown a big boom and bust in long haul networks. However, I do believe these are the result of over-investment and not problems with the model of peering and transit. Five overprovisioned networks on the same route are too much for any business case. So yes, if investment is done prudently, and if the owners of transit networks understand that they will have to lower prices continuously or face mutually assured destruction, then it is possible to have a stable state.
http://arstechnica.com/guides/other/...-transit.ars/1





Google Backs Project to Connect 3bn to Net
Andrew Edgecliffe-Johnson

Google has thrown its weight behind ambitious plans to bring internet access to 3bn people in Africa and other emerging markets by launching at least 16 satellites to bring its services to the unconnected half of the globe.

The search engine has joined forces with John Malone, the cable television magnate, and HSBC to set up O3b Networks, named after the “other 3bn” people for whom fast fibre internet access networks are not likely to be commercially viable.

They will today announce an order for 16 low-earth orbit satellites from Thales Alenia Space, the French aerospace group, as the first stage in a $750m project to connect mobile masts in a swath of countries within 45 degrees of the equator to fast broadband networks.

Larry Alder, product manager in Google’s alternative access group, said the project could bring the cost of bandwidth in such markets down by 95 per cent. “This really fits into Google’s mission [to extend internet use] around the developing world,” he said.

The partners have so far injected about $20m each to raise $65m, including a smaller contribution from Allen & Company, the media advisory boutique.

Richard Cole, head of HSBC’s private equity group, which has already funded fixed-line communications projects in some emerging markets, said the bank would lead the search for further finance in the next two years – about 70 per cent of which will come from the debt markets.

Mike Fries, chief executive of Liberty Global, Mr Malone’s international cable company, said the three partners reserved the right to contribute further towards the extra $150m-$180m in equity financing required, but could bring in other backers.

Greg Wyler, the technology entrepreneur who founded O3b Networks, said its satellites would be operational by the end of 2010. Wireless spectrum required for the service had been secured through the International Telecommunication Union.

O3b, headquartered in Jersey, the Channel Islands tax haven, will focus on signing up communications operators, including clients of HSBC, in emerging markets across Africa, Asia, Latin America and the Middle East.

It could also supply satellite dishes in more developed markets such as Mexico, where bandwidth remains expensive in rural areas, Mr Alder said. Mr Fries said Liberty Global’s cable operations in Chile and Australia may become customers.
http://www.ft.com/cms/s/0/ee2f738c-7...nclick_check=1





'Tough Choices' for UK Broadband

The cost of taking fibre-based broadband to every UK home could top £28.8bn, says a report.

Compiled by the government's broadband advisory group, the report details the cost of the different ways to wire the UK for next generation broadband.

Another option, to take the fibres to street-level boxes, would only cost £5.1bn, it said.

Big differences in the cost of updating urban and rural net access will pose difficult choices, says the report.

High costs

In a statement Antony Walker, chief executive of the Broadband Stakeholder Group which drew up the report, said: "The scale of the costs involved means that the transition to superfast broadband will be challenging."

"We hope that this report will help to ensure an informed public debate on the key policy and regulatory decisions that lie ahead," he said.

The BSG report looks at the three most likely options for using fibre to boost the speed of the UK's broadband networks.

The cheapest option, at £5.1bn, is to take fibre only to the familiar street-level cabinets that act as a connection point between homes and exchanges. Beyond the cabinet to the home existing copper cables would be used. The BSG estimates that this system would permit speeds of 30-100 Megabits per second (Mbps).

The other two options involve taking fibre to homes via a shared or dedicated cable.

The BSG puts a £25.5bn price tag on the shared option which would see a small number of homes sharing the 2.5 Gigabits per second capacity of each line.

Giving every home or business its own dedicated cable is the most expensive option, said the BSG, and could cost up to £28.8bn. But it would mean each home would get up to 1Gbps.

But, warned the report, even these relatively simple choices conceal stark differences in the cost of taking fibre to different parts of the country.

For instance, it said that even the price of the cheapest fibre option is far higher than the amount telecoms firms have already spent cabling up the UK.

Also, it noted, taking fibre to homes in rural areas costs disproportionate amounts of money - essentially the more isolated a home the more it costs to take fibre to it.

The BSG estimates that getting fibre to the cabinets near the first 58% of households could cost about £1.9bn. The next 26% would cost about £1.4bn and the final 16% would cost £1.8bn.

The disparity in costs meant the UK faced some tough choices, said Mr Walker.

However, he added, enthusiasm for the take-up of broadband could make taking it to rural areas more palatable for telecoms firms.

"If operators could achieve a higher level of take-up in rural areas than we have predicted in our study, then the business case for deployment in those areas could improve significantly", said Mr Walker.
http://news.bbc.co.uk/1/hi/technology/7600834.stm





Clever Commercial, Comcast...But You're Wrong
Peter Glaskowsky

This post will no doubt confuse those who accused me of taking money from Comcast for writing last week's piece on Comcast's Internet usage cap.

If it helps them feel better, they have my permission to suppose that DirecTV offered me a larger bribe. It isn't true, but they don't seem to care about the truth, anyway.

But those of you who have read some of my even earlier posts may have noticed that I'm not exactly happy with Comcast, and that while I get my Internet access from Comcast, I actually get my TV service from DirecTV, a company I happen to like a lot. (Even though it disappoints me sometimes, I pay my DirecTV bill every month--and the company has never paid me a dime.)

So when Comcast picks a fight with DirecTV, I'm not just going to stand idly by.

In this case, it's a fight over which television provider offers more high-definition programming.

Comcast is currently running a clever commercial based on a fictitious game show called "You might think DirecTV has more HD than Comcast...but you're wrong."

In this show, contestants are asked whether Comcast or DirecTV offers more HD "choices" in a given place and time--for example, in Chicago at 7:12pm.

The answer, according to Comcast, is always Comcast. (I'm as shocked as you are!)

The trick here is that Comcast includes all of its On Demand content and comes up with the entirely artificial figure of 500 "choices." So this comparison has a factual basis...but it's still wrong.

It seems to me that the more relevant comparisons--the ones that would actually be useful to customers trying to choose between these services--involve the number of channels and the total amount of programming available on Comcast and DirecTV.

Based on my own research, the channel comparison goes overwhelmingly to DirecTV by a score of 88 to 35, for channels from external providers.

The 35 HD channels on Comcast's "All Channel" list for Cupertino, Calif., sorted by channel name:

A&E - HD, ABC Family - HD, AMC - HD, Animal Planet - HD, Cinemax - HD, CNN - HD, Discovery - HD, Discovery Science - HD, Disney - HD, ESPN - HD, ESPN2 HD, Food Network - HD, FSNBA , HBO - HD, HGTV - HD, KBCW - HD, KGO - (ABC), KNTV - (NBC), KPIX - (CBS), KQED - (PBS), KRON - (IND), KTVU - (Fox), MHD, MOJO HD, National Geography, NFL Network HD, Sci-Fi - HD, Showtime - HD, Starz! - HD, TBS HD, Theater HD, TLC - HD, TNT HD, Universal HD, VS/Golf HD

The 88 HD channels on DirecTV's "Premier" package plus local channels for the San Francisco Bay Area, also sorted by channel name:

A&E HD, ABC Family HD, Altitude HD, Animal Planet HD, Big Ten Network HD, Biography Channel HD, Bravo HD, Cartoon Network, Cinemax HD East, Cinemax HD West, CMT HD, CNBC HD+, CNN HD, CSN Bay Area HD, CSN Chicago HD, CSN Mid-Atlantic HD, CSN New England HD, CSTV HD, Discovery Channel HD, ESPN HD, ESPN2 HD, ESPNews HD, Fox Business Network HD, FSN Arizona HD, FSN Cincinnati HD, FSN Detroit HD, FSN Florida HD, FSN Midwest HD, FSN North HD, FSN Northwest HD, FSN Ohio HD, FSN Pittsburgh HD, FSN Prime Ticket HD, FSN Rocky Mountain HD, FSN South HD, FSN Southwest HD, FSN West HD, Fuel TV HD, FX HD, HBO HD East, HBO HD West, HD Theater, HDNet, History Channel HD, KBCW HD (Ind), KGO HD (ABC), KNTV HD (NBC), KPIX HD (CBS), KRON HD (Ind), KTVU HD (Fox), MASN HD, MSG HD, MSG PLUS HD, MTV HD, National Geographic Channel HD, NBA.TV HD, NESN HD, NFL Network HD, NHL Network HD, Planet Green HD, Sci-Fi Channel HD, Science Channel HD, Showtime 2 HD, Showtime Extreme HD, Showtime HD, Showtime HD West, Showtime Showcase HD, SNY HD, Speed Channel HD, Spike HD, SportSouth HD, SportsTime Ohio HD, Starz Comedy HD, Starz Edge HD, Starz HD East, Starz HD West, Starz Kids & Family HD, Sun Sports HD, TBS in HD, Tennis Channel HD, The Movie Channel HD, TLC HD, TNT HD, Toon Disney HD, USA Network HD, VERSUS HD/GOLF CHANNEL HD, VH1 HD, YES HD

If we throw in pay-per-view channels, the score tilts even further toward DirecTV. I can't find exact figures for this comparison, but it looks as if Comcast has, at most, only a few HD pay-per-view channels, while DirecTV has dozens. (DirecTV claims a total HD channel count over 130, but I can't figure out exactly where that number comes from.)

As for the comparison in programming, well, all those extra HD channels on DirecTV carry many programs per day and hundreds per month--each. Even if we throw in the on-demand programming from Comcast, it would lose by a landslide.

The cheap trick of making a comparison at exactly 7:12 p.m. doesn't mean anything to me because you can't watch 500 choices at once. I think the bottom line is simple: over the course of a day, week, or month, DirecTV delivers well over twice as much HD programming as Comcast.

DirecTV has its own on-demand service now, based on Internet delivery to DirecTV high-definition DVRs. If we counted that as well, it would only extend DirecTV's advantage. But I don't think that it should count--it's a different kind of service.

This does bring up an interesting point, though. DirecTV on-demand programming would count against Comcast's usage cap, whereas Comcast's On Demand service doesn't--a point made frequently in the comments for my post last week.

But that line of argument just doesn't work for me. Comcast On Demand doesn't travel over your Internet service at all; it comes in through the digital cable service. Both services may come into your home on the same cable, but they don't share bandwidth. This ought to be obvious--even if a customer is using all the bandwidth available from Comcast's Internet service, there's no interruption to Comcast cable TV service.

In fact, you don't even need to have Comcast Internet service to get Comcast On Demand. So of course it's true that Comcast On Demand programming doesn't count against the Comcast Internet usage cap.

This doesn't mean that Comcast is giving its On Demand service an unfair advantage. It's a classic fair advantage. Comcast deployed a cable infrastructure that has enough bandwidth to carry two services; the company is entitled to run two services and treat them as separate businesses.

Some people seem uncomfortable with the idea of businesses having rights, but this is equally a question of individual rights. Comcast has rights because Comcast's stockholders, managers, and employees have rights. In this case, these rights include setting the terms and conditions for the company's services. If it was your company, you'd insist on the same freedom.
http://news.cnet.com/8301-13512_3-10...=2547-1_3-0-20





The Meek Shall Inherit the Web

Computing: In future, most new internet users will be in developing countries and will use mobile phones. Expect a wave of innovation

THE World Wide Web Consortium (W3C), the body that leads the development of technical standards for the web, usually concerns itself with nerdy matters such as extensible mark-up languages and cascading style sheets. So the new interest group it launched in May is rather unusual. It will focus on the use of the mobile web for social development—the sort of vague concept that techie types tend to avoid, because it is more than simply a technical matter of codes and protocols. Why is the W3C interested in it?

The simple answer is that the number of mobile phones that can access the internet is growing at a phenomenal rate, especially in the developing world. In China, for example, over 73m people, or 29% of all internet users in the country, use mobile phones to get online. And the number of people doing so grew by 45% in the six months to June—far higher than the rate of access growth using laptops, according to the China Internet Network Information Centre.

This year China overtook America as the country with the largest number of internet users—currently over 250m. And China also has some 600m mobile-phone subscribers, more than any other country, so the potential for the mobile internet is enormous. Companies that stake their reputations on being at the technological forefront understand this. Last year Lee Kai-fu, Google’s president in China, announced that Google was redesigning its products for a market where “most Chinese users who touch the mobile internet will have no PC at all.”

It is not just China. Opera Software, a firm that makes web-browser software for mobile phones, reports rapid growth in mobile-web browsing in developing countries. The number of web pages viewed in June by the 14m users of its software was over 3 billion, a 300% increase on a year earlier. The fastest growth was in developing countries including Russia, Indonesia, India and South Africa.

Behind these statistics lies a more profound social change. A couple of years ago, a favourite example of mobile phones’ impact in the developing world was that of an Indian fisherman calling different ports from his boat to get a better price for his catch. But mobile phones are increasingly being used to access more elaborate data services.

A case in point is M-PESA, a mobile-payment service introduced by Safaricom Kenya, a mobile operator, in 2007. It allows subscribers to deposit and withdraw money via Safaricom’s airtime-sales agents, and send funds to each other by text message. The service is now used by around a quarter of Safaricom’s 10m customers. Casual workers can be paid quickly by phone; taxi drivers can accept payment without having to carry cash around; money can be sent to friends and family in emergencies. Safaricom’s parent company, Vodafone, has launched M-PESA in Tanzania and Afghanistan, and plans to introduce it in India.
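
The core of the service is strikingly simple: value moves between accounts on the strength of a text message, with the agent network handling cash in and cash out. As a rough, illustrative sketch only (M-PESA's real message formats, PIN checks and agent flows are not described here, so the command syntax and numbers below are invented), the basic transfer step might look like this in a few lines of Python:

    import re

    # Toy account ledger, balances in Kenyan shillings (invented numbers).
    BALANCES = {"+254700000001": 1500, "+254700000002": 200}

    def handle_sms(sender: str, text: str) -> str:
        """Parse a hypothetical 'SEND <amount> <recipient>' text and move funds."""
        match = re.fullmatch(r"SEND\s+(\d+)\s+(\+\d{9,15})", text.strip(), re.IGNORECASE)
        if not match:
            return "Unrecognised command"
        amount, recipient = int(match.group(1)), match.group(2)
        if BALANCES.get(sender, 0) < amount:
            return "Insufficient funds"
        BALANCES[sender] -= amount
        BALANCES[recipient] = BALANCES.get(recipient, 0) + amount
        return f"Sent {amount} KES to {recipient}"

    print(handle_sms("+254700000001", "SEND 500 +254700000002"))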

Similar services have also proved popular in South Africa and the Philippines. Mobile banking is now being introduced into the Maldives, a group of islands in the Indian Ocean where many people lost their life savings, held in cash, in the tsunami of December 2004.

For the W3C, M-PESA and its ilk are harbingers of far more sophisticated services to come. If mobile banking is possible using a simple system of text messages, imagine what might be possible with full web access. But it will require standards to ensure that services and devices are compatible. Stéphane Boyera, co-chair of the new W3C interest group, says its aim is to track the social impact of the mobile web in the developing world, to ensure that the web’s technical standards evolve to serve this rapidly emerging constituency.

The right approach, Mr Boyera argues, is not to create “walled gardens” of specially adapted protocols for mobile devices, but to make sure that as much as possible of the information on the web can be accessed easily on mobile phones. That is a worthy goal. But Ken Banks, the other co-chair of the W3C’s new interest group and the founder of kiwanja.net, which helps non-profit organisations exploit mobile technologies in the developing world, points out that simple services based on text messages are likely to predominate for some time to come, for several reasons. All mobile phones, however cheap, can send text messages. Mobile-web access requires more sophisticated handsets and is not always supported by operators. And users know what it costs to send a text message.

As countries work their way up the development ladder, however, the situation changes in favour of full mobile-web access. Jim Lee, a manager at Nokia’s Beijing office, says he was surprised to find that university students in remote regions of China were buying Nokia Nseries smart-phones, costing several months of their disposable income. Such handsets are status symbols, but there are also pragmatic reasons to buy them. With up to eight students in each dorm room, phones are often the only practical way for students to access the web for their studies. The handsets themselves are expensive, but operators often provide cheap data tariffs to attract new customers.

Xuehui Zhao, a recent graduate of the Anyang Institute of Technology in Henan province, explains that a typical monthly package for five yuan ($0.73) includes 10 megabytes of data transfer—more than enough to allow her to spend a couple of hours each day surfing the web and instant-messaging with friends. It is also much cheaper than paying 200 yuan per month for a fixed-broadband connection.

As this young generation of sophisticated mobile-web users grows up, what sort of new services will they want? Many NGOs and local governments are trying things out. Several examples were discussed at a workshop in June organised by the W3C in São Paulo, Brazil. The government of the Brazilian state of Paraná, for instance, is using text messages and voice-menu systems to notify the unemployed about job opportunities and farmers about agricultural prices.

But the workshop also highlighted the limits of what such efforts can achieve. It quickly became apparent that more or less identical services are being developed from scratch repeatedly in different parts of the world. There is clearly room for more co-ordination of such efforts, which is exactly what the W3C has in mind.

Furthermore, many clever systems are being developed by NGOs with no apparent interest in setting up commercial services. As Mr Boyera points out, this raises the issue of sustainability. What happens when the NGO’s funding runs out? One conclusion from the workshop was that promoting social development through the mobile web will mean engaging with businesses. Regulators can also help by fostering cheap mobile access.

The developing world missed out on much of the excitement of the initial web revolution, the dotcom boom and Web 2.0, largely because it did not have an internet infrastructure. But developing countries may now be poised to leapfrog the industrialised world in the era of the mobile web.
http://www.economist.com/science/tq/...ry_id=11999307





Telco to Fiber-Deploying Town: We Sue Because We Care
Nate Anderson

TDS Telecom, a telco with 3,500 employees and a presence in 30 states, is suing the town of Monticello, Minnesota, for trying to put in a fiber optic network of its own. Why would a company try to prevent a town from building itself a faster network? TDS tells us that it's really just looking out for the taxpayer (and its own infrastructure investment).

Not satisfied with the current DSL and cable offerings, Monticello hatched an ambitious plan to wire up its entire town with fiber, build an interconnect station, and allow ISPs to link up to the site and offer Internet access over the city-maintained fiber links. After a vote on the measure passed overwhelmingly last year, Monticello moved to break ground and was promptly sued by the local telephone provider, Bridgewater, a unit of TDS.

We've already covered the legal filings in that case (which is ongoing), but were also interested in hearing from TDS. Fiber backers see the lawsuit, and a recent announcement that TDS will install a fiber network of its own in town, as a strategy designed largely to prevent the Monticello experiment from being repeated across Minnesota ("See, you'll get sued, and neither of us wants that! Also, we're already building a fiber network, so no need to do it yourselves! Please stop thinking about it!"). But TDS insists it's in the right.

Andrew Petersen, the director of legislative and public relations for the company, told Ars in an e-mail that the company's "first" reason for filing the lawsuit was because such projects have failed in other communities and "we're hoping to prevent the citizens of Monticello from becoming the shareholders of a $25 million tax burden." Second on the list was the company's desire to "protect our corporate assets and investments in Monticello," and TDS believes that the city has crossed legal lines in starting the project.

When I spoke to Petersen by phone, he stressed that TDS was committed to its local communities; when it heard the voice of the people, it took action. Just as the city prepared to begin digging in May, TDS announced that it would wire Monticello for fiber and would do it by the end of the year. Given that the city had embarked on a multiyear process, at significant expense, to investigate and then approve such a project, why did TDS wait until all that time and energy had been invested in the idea before suing? Petersen says it's because TDS didn't yet know that people really wanted fiber; once the referendum was a success, the company moved quickly to give people what it now knew they wanted.

I asked Petersen to speak more generally about his company's objection to the city's plan, which would have kept the network open for hookups to any private ISP that wanted to provide service. But Petersen says that's not an attractive offer; a company like TDS really wants to own (and control) its own network so that it's certain of the network architecture and can address last-mile complaints from subscribers. Municipalities are better at "streets and snowplowing," Petersen says, than at highly technical network design and maintenance.

TDS has nine crews in town right now to build the 100-mile network; 20+ miles have already been laid. Meanwhile, the city has hardly been pleased with the entire process, and seems determined to pursue its original plan. While the main residential buildout is on hold during the court case, Monticello broke ground last week on a smaller fiber ring and an interconnection point for the network. This more limited project will link local government facilities.

In an August proposal, the city had asked TDS to partner on the work. While both entities were building their own networks, it hardly made sense to do the most expensive, most labor intensive work twice—tearing up the ground, inserting ducting, and running fiber. TDS said no, alleging that such cooperation could be anticompetitive, though it did offer free fiber links to all government buildings and redundant fiber links to schools (which are already served by a state system).

So the parallel fiber buildout goes on apace even as the parallel legal teams battle it out in a Minnesota courtroom. Such battles are being fought around the country, and most boil down to the limits of municipal services in each state. Has broadband become a utility service, so indispensable to the public that a city should at least have the right to offer it to everyone who pays taxes?

Christopher Mitchell, who works for the Institute for Local Self-Reliance and has been heavily involved in the Monticello fiber issue, argued in a piece for the local paper on July 31 that broadband is a utility and that local communities should be involved in providing it. "TDS, a phone company headquartered in Madison, has filed a complaint with the laughable charge that the fiber network is neither a 'utility' nor 'public convenience,'" he wrote. "Minnesota’s legislature has explicitly listed telecommunications and 'cable television and related services' in its definition of public utilities. Everyone who has ever used the Internet knows it is a public convenience.

"TDS cannot win this case, but it can stall Fibernet Monticello's start-up to buy time for its own hasty upgrades and attempts to lock subscribers into long-term contracts. The City must stand strong during these trying times. The landslide network referendum was not merely about having a faster network. It was about a faster network featuring local services and accountability to the community."

TDS, for its part, says the fiber build out shows its own accountability to the community, and Petersen argues that TDS isn't some unaccountable, much-loathed corporation. It has "extremely high customer satisfaction in all of our markets," he says.
http://arstechnica.com/news.ars/post...e-we-care.html





Senator Examining Rising Text Messaging Rates
Joelle Tessler

A key member of the Senate Judiciary Committee is asking the nation's top four wireless carriers to justify the ''sharply rising rates'' they charge people to send and receive text messages.

In letters to top executives at Verizon Wireless, AT&T Inc., Sprint Nextel Corp. and T-Mobile, Wisconsin Democrat Herb Kohl said Tuesday that he is concerned that rising text messaging rates reflect decreasing competition in the wireless business.

Kohl chairs the Judiciary Subcommittee on Antitrust, Competition Policy and Consumer Rights. His inquiry comes as European Commission regulators are threatening to impose a cap on roaming fees for text messages sent by Europeans traveling outside of their home nations, in an effort to force prices down by as much as 70 percent.

Kohl said he was concerned that consumers are paying more than 20 cents per message, up from 10 cents in 2005. This increase, he said, ''does not appear to be justified by rising costs in delivering text messages,'' which are small data files that are inexpensive for carriers to transmit.
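
To put the "inexpensive data files" point in perspective, a back-of-the-envelope calculation (assuming the full 140-byte payload of a single SMS and the 20-cent rate cited above; the actual cost to carriers of moving those bytes is far lower still) works out to a striking implied per-megabyte price:

    # Rough arithmetic only: the price per megabyte implied by a 20-cent text
    # message, assuming the maximum 140-byte payload of a single SMS.
    price_per_message = 0.20      # dollars
    payload_bytes = 140           # data carried by one SMS
    price_per_megabyte = price_per_message / payload_bytes * 1024 * 1024
    print(f"${price_per_megabyte:,.0f} per megabyte")  # roughly $1,500 per megabyte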

Kohl said he is particularly concerned that all four of the companies appear to have adopted identical price increases at nearly the same time. ''This conduct is hardly consistent with the vigorous price competition we hope to see in a competitive marketplace,'' he wrote.

Kohl also noted that these rate hikes have occurred during the industry's recent consolidation, which has reduced the number of national wireless carriers in the U.S. to four from six. That consolidation continues, he said, as the large national wireless carriers buy out smaller, regional competitors -- as evidenced most recently by Verizon Wireless' planned acquisition of Alltel Corp. for $5.9 billion plus the assumption of $22.2 billion in debt.

Verizon Wireless, a joint venture of Verizon Communications Inc. and Vodafone Group PLC, said it will respond to Kohl's letter once it has had a chance to review it. AT&T said it has received the letter and will respond accordingly, and Sprint said ''we look forward to responding to the Senator's inquiry about the text messaging options we offer our customers and we will fully cooperate with his request.''

T-Mobile, which is owned by Deutsche Telekom AG, did not immediately return calls seeking comment.
http://washingtontimes.com/news/2008...ssaging-rates/





System for Measuring Radio Audiences Faces Inquiry
Brian Stelter

The New York attorney general’s office said Tuesday that it was opening an investigation into the way that Arbitron, which measures audiences for radio stations, is deploying devices called personal people meters.

The devices, which people carry with them, can pick up radio signals. The main concern is that Arbitron, which is switching over from a system in which people keep personal diaries of their radio listening, does not plan to put personal people meters in the hands of enough minority listeners, which would skew the ratings for minority-oriented radio stations.

In a letter to Arbitron, which is based in New York City, Attorney General Andrew M. Cuomo said he was worried that the new system was “neither reliable nor fair, and may have a dramatically negative impact on minority broadcasting in New York.”

Arbitron produces radio ratings that are used to buy and sell advertising time. Despite resistance from some broadcasters, the company has introduced the personal people meter, a cellphone-size device that recognizes which station is being tuned in. Arbitron says the devices are more accurate than diaries, which are based on recall. The company estimates the audience size for radio stations by extrapolating from the results of its consumer sample.

Initial tests of the meter last year showed steep ratings declines for radio stations that cater to minority groups and younger listeners. James L. Winston, the executive director of the National Association of Black Owned Broadcasters, said the people meters were not measuring an adequate sample of African-Americans and Hispanics, thus putting stations with urban formats at a disadvantage.

“The sample size they are conducting their studies with is too small, and that causes their data to be unreliable,” Mr. Winston said.
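
Whatever the merits of the dispute, the arithmetic behind the complaint is straightforward: a panel-based estimate is just the sample share scaled up to the market, and its uncertainty grows as the panel shrinks. A rough sketch with made-up numbers (this is the standard sample-proportion calculation, not Arbitron's actual methodology):

    import math

    def estimate_audience(listeners_in_panel: int, panel_size: int, market_population: int):
        """Scale the panel share up to the market and attach a rough 95% interval."""
        share = listeners_in_panel / panel_size
        # Standard error of a sample proportion, then scaled to the market size.
        se = math.sqrt(share * (1 - share) / panel_size)
        estimate = share * market_population
        margin = 1.96 * se * market_population
        return estimate, margin

    # A niche station with a 2% share of a 10m-listener market:
    for panel in (500, 5000):
        est, moe = estimate_audience(int(0.02 * panel), panel, 10_000_000)
        print(f"panel={panel}: {est:,.0f} listeners +/- {moe:,.0f}")
    # The smaller panel gives the same point estimate but a far wider interval,
    # which is the substance of the broadcasters' sample-size complaint.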

In a statement on Tuesday, Arbitron maintained that the audience sample that had been selected to carry the people meters “fully represents the diversity of New York radio markets.” The company previously delayed the introduction of the meters in New York by nine months to improve the diversity of its sample, and it says that the sample is now broadly inclusive.

Minority broadcasters had encouraged the attorney general’s office and other government agencies to investigate Arbitron’s use of people meters. A coalition of broadcasters petitioned the Federal Communications Commission to intervene last week, and the commission is now conducting a public comment period.

Mr. Cuomo’s office is going further. On Tuesday it issued a subpoena to gather information about the people meters. In the letter to Arbitron, Mr. Cuomo wrote that the people meter system “appears to contain design flaws that may have a devastating impact on minority communities, broadcasters and businesses.”

Data from people meters is set to become the standard for buying and selling radio airtime in New York on Oct. 8, when the ratings for September will be released. Benjamin Lawsky, a special assistant to Mr. Cuomo, said it remained to be seen if the attorney general’s office would try to intervene before the October switch date.

Earlier this year, the Media Ratings Council, the agency that oversees measurement for the industry, refused to approve the people meter for use in the New York market. (It has approved the meter for use in Houston, one of the first cities in which it was tested.) The decision is now being reconsidered, and may be reversed before Oct. 8.

In a statement, Arbitron argued that the media industry should be concerned about the attempts to “supplant or short-circuit” accreditation for people meters.
http://www.nytimes.com/2008/09/10/bu...0arbitron.html





Broadcasters & Microsoft Zune Form "Tag" Team
FMQB

Nine radio broadcasters have announced their commitment to broadcasting with technology that includes "song tags," using existing broadcasting infrastructure. A song tag is an encrypted digital code identifying a specific song and embedded in an FM broadcast. When a listener hears a song on the radio that he or she likes and tags it, the code is stored on their MP3 player so they can purchase it later. More than 450 FM radio stations operated by Beasley, Bonneville, CBS Radio, Citadel, Clear Channel Radio, Cox Radio, Emmis, Entercom and Greater Media will be broadcasting the FM song tags. Hundreds of those are live today, with the remainder rolling out over the next several weeks.

The news comes in conjunction with Microsoft's announcement about a new set of features for all Zune media players that takes advantage of song tagging. On September 16, Microsoft will officially launch "Buy From FM," enabling users of its Zune player to tag a song from any radio broadcast and instantly purchase and download it. When the customer is in a Wi-Fi hot spot, the song can be immediately downloaded to the Zune device. If Wi-Fi is not available, the device will have a queue of songs ready to download once it is connected to a home computer. Zune is also expanding its device lineup with new 16GB and 120GB capacities as well as new blue-on-silver and all-black color schemes.
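
In outline, the workflow is: the tuner reads the song code from the broadcast, the device stores the tag, and the purchase happens whenever connectivity appears. A toy sketch of that queue logic (the tag format, store API and download step here are stand-ins invented for illustration, not Microsoft's actual implementation):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SongTag:
        track_id: str      # identifier carried in the station's broadcast data
        station: str

    @dataclass
    class Device:
        pending: List[SongTag] = field(default_factory=list)
        library: List[str] = field(default_factory=list)

        def tag(self, tag: SongTag, wifi_available: bool) -> None:
            """Store a tag; download immediately on Wi-Fi, otherwise queue it."""
            if wifi_available:
                self._download(tag)
            else:
                self.pending.append(tag)

        def sync(self) -> None:
            """Drain the queue once the device is connected (Wi-Fi or home PC)."""
            while self.pending:
                self._download(self.pending.pop(0))

        def _download(self, tag: SongTag) -> None:
            # Stand-in for the purchase-and-download step against the music store.
            self.library.append(tag.track_id)

    zune = Device()
    zune.tag(SongTag("song-123", "WXYZ-FM"), wifi_available=False)
    zune.sync()
    print(zune.library)  # ['song-123']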

"Radio is one of the primary ways people discover new music, which is why we have built an FM tuner into every Zune portable media player," said Chris Stephenson, GM of Global Marketing for Zune at Microsoft. "The leadership of these radio broadcasters has played an integral role in enabling millions of Zune users to tag and purchase songs directly from FM radio."

"The combination of encrypted digital code with Microsoft's Zune and the outstanding products our stations broadcast daily, we are now able to give consumers a fully-integrated digital experience," commented Bruce Reese, CEO of Bonneville International. "It offers instant gratification for our listeners who tune in daily to hear their favorite music and the possibility of new discoveries that happen when each song is played."

"Radio’s decision to push the digital envelope doesn’t mean that our analog broadcasts need to be left behind," added Clear Channel Radio President and CEO John Hogan. "Clear Channel Radio will have 450 stations live with RDS song tagging at launch. And we applaud Microsoft’s leadership and shared commitment to making cutting-edge entertainment experiences available to the masses."

"The connection between the music discovery radio has always provided, and its resulting sales, make this interactive radio feature a natural for the radio industry," said Bob Neil, President and CEO of Cox Radio.

"We are thrilled to be a part of bringing music tagging to Zune listeners,” commented Emmis Chairman and CEO Jeff Smulyan. "This is the next step in the evolution of radio: providing our listeners with the ability to download music instantly."

Entercom CEO David Field added, "We are moving toward a future where music discovery, purchase and fulfillment is a convenient and seamless experience."

"This innovative technology is yet another compelling example of how radio is embracing today's interactive world," said Greater Media President and CEO Peter Smyth. "We are thrilled to be able to offer our listeners the opportunity to further interact with our stations through tagging and ultimately purchase their favorite music."
http://fmqb.com/Article.asp?id=879146





Legal Digital Music is Commercial Suicide

Fans suffer as lawyers get rich
Michael Robertson

Opinion Lala, for those who don't know, is a free streaming music venture. Invested in by Warner Music Group to the tune of $20m, it streams about five million songs, but also offers 89 cent MP3 sales, and song rentals for 10 cents each. But why is almost nobody using their well-designed, expansive, free streaming service?

I'm not talking about the song rentals for 10 cents - we all knew that was a non-starter. But people aren't streaming songs even for free. While Imeem is streaming more than 1m sessions per day, on Lala only 25 daily listens will get your song into the weekly Top 10. The service just isn't attracting users at all, in spite of the marketing that major label WMG has committed to doing. Lala appears to be just another in a long list of industry endorsed companies that tries to make the labels happy - and in so doing, apparently forfeits its chance to build a user base or a business.

Over the last decade there seem to have been three broad categories of digital music companies:

Firstly, there are companies who actively court label endorsement, and don't do anything the labels don't like. Many agree to pay the labels big fees including substantial million dollar up-front payments. Many of these have raised substantial money, too. They get some nice press articles, but then quietly fade away. Companies with label money, or with label executives (instead of net people) running or influencing them, have never been able to attract a significant user base. Examples include Liquid Audio, a2b, Lala, Pandora, Wippit, Qtrax, Mashboxx, and Nokia's Comes With Music.

Secondly, there are companies who get sued by the labels after attracting a huge audience. They usually succumb to legal pressure and sign licences agreeing to pay the labels' big fees including substantial million dollar up-front payments. They may have audiences for their service but it's irrelevant, because the royalty structure ensures they will never turn a profit. Napster, Imeem and the upcoming MySpace music store fit in this category. At MP3.com we were profitable, but the portion of our business which served licensed music was never going to make any money.

Then there are the companies who have been sued, but are proceeding with the legal case rather than settling. If these companies lose their lawsuits they will likely go out of business because of draconian statutory damage rates, which ensure that even if their service is beneficial for the music industry, they are driven to bankruptcy with oversized damage awards. MP3tunes, Veoh, Multiply, Seeqpod and Playlist.com fall into this bracket (although my sources say that Playlist.com's VCs are pushing hard for a settlement, which would put them in category two).

Missing from this list is a fourth category, where a true partnership between net companies and the industry is negotiated. A partnership where the digital company provides some benefit to labels or publishers, and in return they get the ability to create a profitable business.

Go legal and die

The internet companies I talk to don't mind giving some direct benefit to music companies. What torpedoes that possibility is the big financial requests from labels for "past infringement", plus a hefty fee for future usage. Any company agreeing to these demands is signing their own financial death sentence.

The root cause is not the labels - chances are if you were running a label you would make the same demands, since the law permits it. The lack of clarity in the law is the real culprit - and it's the huge potential penalties that create an incentive for the big record labels' law firms to file lawsuits. Without clear laws and rulings from the court about what is permissible, every action touching a copyrighted work is a possible infringement, with a large financial windfall if the copyright owner can persuade a judge to agree.

Fortunately, there seems to be light at the end of the legal tunnel. Two recent US court rulings have added some clarity to several key copyright issues and both rulings were clear victories for the digital company wishing to interact with copyrighted works.

First there was Fox v Cablevision, where a commercial company wanted to provide a remote recording and playback service to its cable customers. Think of it as a centralised TiVo system. The appellate court reversed a lower court's ruling that such a service was a copyright infringement by Cablevision and that each playback was a public performance requiring a royalty payment, clearing the way for such a service.

Second was Io v Veoh, which found that Veoh, a YouTube-like service, was protected from financial claims resulting from hosting videos owned by others, because it was acting within a safe harbour of the DMCA. The DMCA offers protection to internet service providers for several actions, including storing material at the direction of the user and linking to works elsewhere.

Both these rulings were significant defeats for media companies and victories for consumers. The depth and detail of these rulings suggest that courts are gaining a deeper understanding of technology issues. It may be that in a few years there will be substantially more clarity on digital media issues, which will be a strong inducement for technology and media companies to create mutually beneficial partnerships rather than engage in costly legal battles.

We're not there yet, but if a few more courts conclude as the California court did in the Veoh case that the DMCA protects online services, this will dissuade media companies from their legal attacks and bring them to a negotiation table.
http://www.theregister.co.uk/2008/09..._music_models/





Free Music Downloads Without the Legal Peril
Roy Furchgott

EVERYBODY likes free music but nobody likes to be sued. For people seeking free music online, therein lies the rub.

It’s simple to get free music from online services like LimeWire, but it could also bring an unfriendly letter from a lawyer.

Dave Dederer feels your pain. As a songwriter and former guitarist for the Presidents of the United States of America, the owner of a record label and an Internet music entrepreneur, he is especially suited to assess the rights of artists, fans and distributors. After a close study of the laws that regulate his business, one thing is clear, he says: “It’s a swirling cesspool.”

The line between legal and illegal is murky even to Mr. Dederer. His online music service, nuTsie (an anagram of iTunes), pays licensing fees to artists. In the shifting legal landscape, however, someday that might not be enough, he said.

“They are cashing our checks, and that is all we feel like we can do for now,” he said. Such uncertainty may work in the favor of fans of free music. Because the law tends to lag behind the technological advances, there are gray areas where free music, while not clearly legal, is not clearly illegal. People can take advantage of these gaps with little fear of a lawsuit, but they will have to keep abreast of legal developments to make sure they remain on the right side of the line.

The (Shifting) Law of the Land

The legal concept regarding copying is called fair use. But what is fair to do without the copyright holder’s permission? The legal precedent that lets people transfer CDs to their iPods was established in Sony Corporation of America v. Universal City Studios, known as the Betamax case. Essentially, the ruling said that people could record copyrighted material for personal, noncommercial use.

But that’s where it gets tricky. Suppose you have a vinyl record and you want to hear it on your iPod. Does the recording have to come from your own album, or can you download a copy from LimeWire, which provides access to a whole world of legal and illegal content? After all, you have paid for the right to hear the song; does it matter where your specific copy comes from?

“It’s a gray area; there has never been a court case covering it. I would argue it’s fair use,” said Fred von Lohmann, a senior staff lawyer with the Electronic Frontier Foundation, a free speech advocacy group. He added, “I am willing to admit a court might see it differently.”

Because the rule is blurry, chances are low that you will be zinged for it. The Recording Industry Association of America and the record companies are going after the most egregious cases, in which people offer hundreds of copyrighted songs for downloading, they say.

Reports of the association suing a woman over 24 songs are technically correct, but it was because of a quirk of the law. The woman in question had more than 1,000 songs in her shared directory, the association says. Rather than file paperwork to prove ownership of all of them, the association sued over just a portion.

Where to Get It

Some free music sources are, without question, legitimate. Apple’s iTunes, which has helped to popularize paid downloading, gives away a song by a new artist every week. A different free iTunes song is given away at Starbucks stores each week, as well. Amazon offers free downloads on its MP3 page. Rolling Stone’s Web site has free MP3s on its Rock & Roll Daily page. Artistdirect.com offers more than 200 free downloads, as does MP3.com. Many smaller labels, like Sub Pop, Dischord and Fat Wreck Chords, among others, also give away music.

There are other sources for free, legal downloads besides individual labels. Creative Commons is a site that helps copyright holders decide which rights they want to share — for instance making songs free for personal use and distribution, but not for sampling or commercial use. The five-year-old organization said it had licensed about 1 million songs, and lists them at creativecommons.org/legalmusicforvideos. One user of Creative Commons, the eclectic radio station WFMU-FM, posts legal in-studio performances at freemusicarchive.org.

Another method that is unlikely to get you in trouble is recording songs from Internet radio. While trouble is unlikely, it can't be entirely ruled out, because some lawyers say it's legal under the Audio Home Recording Act of 1992, while others say the act specifically prohibits digital recording. Either way, if you record songs for your own use, no one will know you have them. Many online stations have high-quality audio feeds, including Mr. Dederer's company, nuTsie, as well as FlyTunes, SpiralFrog and Jango.

Recording songs from Internet radio requires a streaming audio recorder and a sound editor. Windows users can download Audacity free from Sourceforge.net. The application will record any audio that plays over your computer's speakers, save it and export it to iTunes or any other music player. Unfortunately, the simplest option for Mac users is more costly: WireTap Studio, from Ambrosia Software, has a 30-day free trial but then costs $69.
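
For readers comfortable with a little scripting, here is a rough alternative sketch: rather than capturing the sound card's output the way Audacity does, it simply saves the raw bytes of a direct audio stream to disk. The URL is a placeholder, and a station's terms of use still apply.

    import requests

    def record_stream(url: str, out_path: str, max_bytes: int = 50_000_000) -> None:
        """Write the audio stream to disk until max_bytes have been received."""
        with requests.get(url, stream=True, timeout=10) as resp:
            resp.raise_for_status()
            written = 0
            with open(out_path, "wb") as out:
                for chunk in resp.iter_content(chunk_size=8192):
                    if not chunk:
                        continue
                    out.write(chunk)
                    written += len(chunk)
                    if written >= max_bytes:
                        break

    # record_stream("http://example.com/stream.mp3", "capture.mp3")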

What You Can Do With It

Since the songs are free, it should be O.K. not only to download them to your PC or MP3 player, but also to upload them to sites where other people can retrieve them, right? Not according to Mark Fischer, a digital rights lawyer at Fish & Richardson, a law firm in Boston that specializes in intellectual property.

“The fact the artist is providing it for free doesn’t take away the copyright,” Mr. Fischer said. Just because the artists give it away does not mean that users can — unless the artist gives explicit permission to share the files.

But not so fast, says Mr. von Lohmann. One of the other tenets of fair use is economic harm. If you pass along a song that the artist has given away, the artist loses no income, so it could be fair use, he said.

It’s a case the recording industry group is unlikely to bring, because even the bands are usually unaware they need to give pass-around permission. Typical of many artists, the band Wilco, which has often offered giveaways, not only expects but encourages downloaders to share, even though it has never posted explicit permission.

“We hope you won’t sell it, but you can use it, listen to it, put it on an iPod, make a CD, share it with your friends, whatever,” said Tony Margherita, Wilco’s manager.

Granted, going semi-legit is not as easy as firing up LimeWire and taking all the songs you want. All of these gray-area sites require some sifting, choices are limited and sometimes a lot of work is required. You probably won’t find all of your old favorites by the biggest artists, but you won’t hear from their lawyers, either.
http://www.nytimes.com/2008/09/09/te...09myspace.html





Marillion to Put Album on File-Sharing Websites

Marillion will upload their new album to file-sharing websites
Veronica Schmidt

Just when record labels thought things couldn’t get any worse, the veteran British band Marillion have surpassed the efforts of Radiohead and Coldplay by announcing plans to put their new album on file-sharing sites.

The enemy of squeezed record labels, file-sharing sites allow people to upload music so others can copy it for free.

Amidst calls by the music industry for a crackdown on illegal music sharing, Marillion are set to muddy the waters by uploading their 15th album, Happiness is the Road, to sites such as The Pirate Bay, Mininova and LimeWire.

The band’s move will be legal as they own the copyright, but is likely to promote file sharing and lend a legitimacy to sites that regularly play host to illegal file sharing.

Marillion, who hit their chart peak during the mid-1980s, were one of the first bands to release music via their website in the 1990s. Today, keyboard player Mark Kelly defended their latest move: "While we don't condone illegal file-sharing, it's a fact of life that a lot of music fans do it.

"We want to know who our file-sharing fans are. If they like our new album enough, then we want to persuade them to at least come and see us on tour."

A string of high-profile acts have entered the fray over free music in the past few years. Radiohead asked fans to pay what they saw fit when they released their latest, critically-acclaimed album, In Rainbows, on their website. Coldplay gave away the first single from their latest album for free.

As artists, including heavyweights Madonna and Jay-Z, shift their focus to making money from live performances, record labels' sales have plummeted.

Today, Matt Phillips, director of communications for the British Phonographic Industry, which represents the British recorded music sector, said: "While any non-contracted artist is at liberty to give their own music away if that's what they want, we hope that music fans understand that it's not their right to take music for nothing without permission, and appreciate that not every artist is in the same position."

But Mark Meharry, founder of Music Glue, which is working with Marillion to put their album on peer-to-peer sites, said: "Fans that acquire music via peer-to-peer networks have been treated as thieves by the global recording industry.

"From a commercial point of view, peer-to-peer provides access to more fans, on a global scale, than ever thought possible via traditional distribution methods."
http://entertainment.timesonline.co....cle4724254.ece





Slipknot Frontman Says Labels Cause Piracy
enigmax

Slipknot vocalist and frontman Corey Taylor says it’s time for the music industry to stop taking legal action against downloaders. He feels it is the labels themselves who are to blame for online piracy, since the quality of released music is so bad that no-one wants to buy it.

Slipknot vocalist and frontman Corey Taylor has launched an attack on recording labels, saying that instead of spending their time chasing downloaders, they should use their resources to find bands that produce better music.

Taylor told Kerrang: “Why would you blame (people who download music)? Half the f**king albums that are out there are s**t. I don’t download, but at the same time, I don’t buy new music ’cause it all sucks. Okay, there’s a handful of bands that I buy, but other than that, I just buy old s**t because old s**t is good. Sorry!”

Taylor, who recently collected a Kerrang award on behalf of the band saying “I just showed up for the booze,” says that it’s not fair to blame the fall in album sales on file-sharers, and lays the blame squarely on the shoulders of the labels:

“People wanna blame the decline of album sales on downloading - I think it’s actually the record companies’ fault.”

Of course, lots of people blame the labels for piracy but Taylor believes they aren’t doing their job properly since they promote acts which aren’t up to standard, resulting in people feeling the acts simply aren’t worth the money.

“I think it’s the quality of the product. If record companies would stop giving any f**king mook (idiot) on the street with a fringe a record deal or their own record label, maybe you would sell more f**king albums, dips**ts.”
http://torrentfreak.com/slipknot-fro...piracy-080910/





Apple Admit Briton DID Invent iPod, But He's Still Not Getting Any Money
Daniel Boffey

Apple has finally admitted that a British man who left school at 15 is the inventor behind the iPod.

Kane Kramer, 52, came up with the technology that drives the digital music player nearly 30 years ago but has still not seen a penny from his invention.

And the father of three is so hard up he had to sell his home last year and move his family to rented accommodation.

Now documents filed by Apple in a court case show the US firm acknowledges him as the father of the iPod.

The computer giant even flew Mr Kramer to its Californian headquarters to give evidence in its defence during a legal wrangle with another firm, Burst.com, which claimed it held patents to technology in the iPod and deserved a cut of Apple’s £89billion profits.

Two years ago, Mr Kramer told this newspaper how he had invented the device in 1979 – when he was just 23.

His invention, called the IXI, stored only 3.5 minutes of music on to a chip – but Mr Kramer rightly believed its capacity would improve.

His sketches at the time showed a credit-card-sized player with a rectangular screen and a central menu button to scroll through a selection of music tracks – very similar to the iPod.

He took out a worldwide patent and set up a company to develop the idea.

But in 1988, after a boardroom split, he was unable to raise the £60,000 needed to renew patents across 120 countries and the technology became public property.

Apple used Mr Kramer’s patents and drawings to defend itself in the legal wrangle with Burst last September and he gave evidence under fire from Burst’s lawyers.

Mr Kramer, of Hitchin, Hertfordshire, said: ‘I was up a ladder painting when I got the call from a lady with an American accent from Apple saying she was the head of legal affairs and that they wanted to acknowledge the work that I had done.

‘I must admit that at first I thought it was a wind-up by friends. But we spoke for some time, with me still up this ladder slightly bewildered by it all, and she said Apple would like me to come to California to talk to them.

'Then I had to make a deposition in front of a court stenographer and videographer at a lawyers’ office. The questioning by the Burst legal counsel there was tough, ten hours of it. But I was happy to do it.’

The dispute between Apple and Burst.com has since been settled confidentially out of court.

Mr Kramer said: ‘To be honest, I was just so pleased that finally something that I had done which has been a huge success and changed the music industry was being acknowledged. I was really quite emotional about it all.’

He is now negotiating with Apple to gain some compensation from the copyright that he owns on the drawings.

But so far he has received only a consultancy fee for providing his expertise in the legal case.

A staggering 163million iPods have been sold since the device was launched by Apple in 2001.

Every minute, another 100 are snapped up worldwide, earning Apple an estimated £5.5billion last Christmas alone.

But Mr Kramer, in contrast, last year had to close his struggling furniture design business and move with his wife Lorraine and children, Jodi, nine, Luis, 14, and Lauren, 16, into rented accommodation.

‘I can’t even bring myself to buy an iPod for myself,’ he said. ‘Apple did give me one but it broke down after eight months.’

Mr Kramer, who organises the annual British Invention Show, is now working on an invention he claims will be bigger than the iPod.

Called Monicall, it will allow people to have phone calls recorded and emailed to the various parties as an audio file.

He said: ‘It will speed up business deals and provide a low-cost third-party witness to conversations and agreements.

‘A deal will be done on the phone and that is it – an audio file gets emailed over within 30 seconds.’
http://www.dailymail.co.uk/news/arti....html?ITO=1490





RealNetworks to Introduce a DVD Copier
Brad Stone

People have been avidly feeding music CDs into their computers for years, ripping digital copies of albums and transferring the files to their other computers and mobile devices.

This has not happened nearly as much with DVDs, for both practical and legal reasons. But that may soon change.

On Monday, RealNetworks, the digital media company in Seattle, will introduce RealDVD, a $30 software program for Windows computers that allows users to easily make a digital copy of an entire DVD — down to the extras and artwork from the box.

Robert Glaser, chief executive of RealNetworks, called it “a compelling and very responsible product that gives consumers a way to do something they have always wanted to do,” like make backup copies of favorite discs and take movies with them on their laptops when they travel.

But RealDVD is also sure to be a controversial product — one that will easily earn its maker the ire of Hollywood’s powerful and litigious movie studios.

Since the DVD format was introduced more than a decade ago, Hollywood has unremittingly sought to protect the DVD from the fate that befell the CD, which has no mechanism to prevent copying.

Pirate music services like Napster sparked the digital music revolution. The ability of regular consumers to make digital copies of CDs easily with their computers fed such services and, in Hollywood’s view, led to the weakening of the major music labels.

A vibrant movie rental market makes the threat of widespread DVD copying even more ominous. If people who lack technical knowledge can easily copy DVDs, Hollywood worries, they will stop buying DVDs and instead simply visit the local Blockbuster to “rent, rip and return.”

To stave off this outcome and protect what is now $16 billion in annual DVD sales, studios and consumer electronics companies have enveloped their discs with encryption that is intended to prevent copying.

They also regularly go to court to fight any company that offers software to break the encryption. More than five years ago, several studios and the Motion Picture Association of America sued 321 Studios, a company in St. Louis that had sold the popular program DVD X Copy. A judge ruled that the software violated the Digital Millennium Copyright Act, and the company closed in 2004.

Since then, anyone who wanted to make a backup copy of his “Star Wars” or “Lost” DVDs had to turn to free but illegal programs on the Web, with names like Handbrake and Mac the Ripper. These programs are hard to legally stop because they have many creators who are typically overseas and have few resources. They are used mostly by sophisticated Internet aficionados who may just as easily download movies directly from illicit file-sharing services.

Now RealNetworks believes that the industry’s legal stranglehold on DVD copying has begun to weaken. In March 2007, the DVD Copy Control Association, an alliance that licenses the encryption for DVDs, lost a lawsuit against Kaleidescape, a Silicon Valley start-up company that sells a $10,000 computer server that makes and stores digital copies of up to 500 films.

The DVD association has appealed the ruling. But Mr. Glaser thinks the decision has created the framework for a legal DVD copying product with built-in restrictions to prevent piracy.

The software, which will go on sale on Real.com and Amazon.com this month, will allow buyers to make one copy of a DVD, playable only on the computer where it was made. The user can transfer that copy to up to five other Windows computers, but only by buying additional copies of the software for $20 each. The software does not work on high-definition Blu-ray discs, which the movie industry has even more aggressively sought to protect from illicit copying.
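
Reduced to its essentials, the licensing rule described above is a device-binding check: the copy plays only on machines that hold one of a capped number of licenses. A toy sketch of that rule (the machine identifier, license store and cap handling here are invented for illustration; Real's actual DRM scheme is not described in this article):

    import hashlib
    import uuid

    MAX_LICENSED_MACHINES = 6  # the ripping PC plus up to five additional licenses

    def machine_id() -> str:
        """Derive a stable identifier for this computer (hashed MAC address)."""
        return hashlib.sha256(uuid.getnode().to_bytes(8, "big")).hexdigest()

    def may_play(copy_licenses: set, this_machine: str) -> bool:
        """Allow playback only on machines that hold a license for this copy."""
        return this_machine in copy_licenses

    def add_license(copy_licenses: set, new_machine: str) -> bool:
        """Bind another machine to the copy if the license cap has not been hit."""
        if new_machine in copy_licenses:
            return True
        if len(copy_licenses) >= MAX_LICENSED_MACHINES:
            return False
        copy_licenses.add(new_machine)
        return True

    licenses = {machine_id()}                 # created when the disc is first ripped
    print(may_play(licenses, machine_id()))   # True on this machine only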

“If you look at the functionality of the product, we have put in significant barriers so people don’t just take this and put it on peer-to-peer networks,” Mr. Glaser said. “I think we’ve been really respectful of the legitimate interests of rights holders.”

Bill Rosenblatt, editor of the online newsletter DRM Watch, said the future for RealDVD probably depends on the outcome of the Kaleidescape appeal. If a higher court reverses the decision and hands the movie industry a decisive victory over DVD copying technology, “Real will have to withdraw the product and could get sued,” he said.

RealNetworks began informing the studios of its new product late last week. Representatives for several major studios and the copy association declined to comment, saying they wanted to examine the software first.

However, one technology executive at a major studio, who did not want to be named because the matter is legally delicate, predicted there would be staunch resistance to RealDVD. He also questioned Real’s motives.

“When so much of their success depends on having reasonable relationships with content owners, you wonder why they would be quite so bold in doing this, unless they are desperate,” this executive said.

“Desperate” may not be quite the right word, but the company could use a hit product. RealNetworks was among the first in the mid-’90s to introduce software to play digital audio and video on the Web.

More recently, despite the steady growth of its Rhapsody subscription music service and RealArcade gaming service, RealNetworks has been eclipsed by other digital media companies, including Apple and Amazon. Its stock is down sharply over the last two years.

Mr. Glaser, however, thinks RealDVD will have widespread appeal, and he is already pondering its future. He says the software could eventually work across home networks and play movies on televisions, instead of just computer screens.

He also plans to solicit the cooperation of movie studios, which he says could sell digital copies of movies and TV shows to people who rip their own DVDs. So, for example, someone who copied the first season of “Mad Men” would be a prime candidate to buy and download Season 2.

“Once you give consumers a legitimate path, you can do all kinds of other interesting things with them,” he said.
http://www.nytimes.com/2008/09/08/technology/08dvd.html





Virginia Court Strikes Down Anti-Spam Law
AP

The Virginia Supreme Court declared the state's anti-spam law unconstitutional Friday and reversed the conviction of a man once considered one of the world's most prolific spammers.

The court unanimously agreed with Jeremy Jaynes' argument that the law violates the free-speech protections of the First Amendment because it does not just restrict commercial e-mails. Most other states also have anti-spam laws, and there is a federal CAN-SPAM Act as well.

The Virginia law ''is unconstitutionally overbroad on its face because it prohibits the anonymous transmission of all unsolicited bulk e-mails, including those containing political, religious or other speech protected by the First Amendment to the U.S. Constitution,'' Justice G. Steven Agee wrote.

In 2004, Jaynes became the first person in the country to be convicted of a felony for sending unsolicited bulk e-mail. Authorities claimed Jaynes sent up to 10 million e-mails a day from his home in Raleigh, N.C. He was sentenced to nine years in prison.

Jaynes was charged in Virginia because the e-mails went through an AOL server there.

The state Supreme Court last February affirmed Jaynes' conviction on several grounds but later agreed, without explanation, to reconsider the First Amendment issue. Jaynes was allowed to argue that the law unconstitutionally infringed on political and religious speech even though all his spam was commercial.

Jaynes' attorney, Thomas Wolf, has said sending commercial spam would still be illegal under the federal CAN-SPAM Act even if Virginia's law is invalidated. However, he said the federal law would not apply to Jaynes because it was adopted after he sent the e-mails that were the basis for the state charges.
http://www.nytimes.com/aponline/busi...-Spam-Law.html





Anti-Piracy Scam Emails Target BitTorrent Users
Ernesto

A new trend is surfacing, as spammers have sent out millions of emails targeting BitTorrent users. The emails, which claim to come from MediaDefender, warn recipients that they have been logged using BitTorrent and point them to an attachment supposedly containing evidence, which is in fact infected with a virus.

Over the years BitTorrent has attracted some shady figures. We’ve reported on malware ridden BitTorrent clients and media players, a BitTorrent site that infects its users with spyware, and several other scams.

Although most scams can be avoided easily when a few simple rules are followed, they still manage to trick thousands of novices every day - and this is not going to end anytime soon. Since BitTorrent has become more or less mainstream, with millions of users worldwide, it also proves an interesting target for email spammers.

The latest scam, unlike the others we have reported on before, is one that is sent by email. The email is disguised as a message from the anti-piracy company MediaDefender (using their logo etc.), and warns the recipient that his or her download behavior has been logged. The email has a report attached with more details about the infringed material, which turns out to be a virus.
http://torrentfreak.com/anti-piracy-...-users-080907/





OiNK Admin Charged With Conspiracy to Defraud
Ernesto

During October 2007, the popular BitTorrent tracker OiNK was shut down in a joint effort by Dutch and British law enforcement. Today, OiNK admin Alan Ellis has been charged with conspiracy to defraud. Charges against four OiNK uploaders will follow later today.

After extending the bail date five times, Cleveland [UK] police have announced the charges against OiNK administrator Alan Ellis.

Cleveland police initially stated that the charges against Alan would be announced during December 2007, but this was soon postponed for two months due to a lack of evidence, only to be postponed another four times.

Interestingly, the charges against Ellis are not related to copyright offenses. Instead, he has been charged with “conspiracy to defraud”. Further details about the charges are not available at the moment, but are likely to be released in the coming days. On 24th September, the case will be heard at a magistrates’ court.

Later today there will be more news regarding the charges (if there are any) against the four OiNK uploaders. Initially, six uploaders were arrested on suspicion of “Conspiracy to Defraud the Music Industry”, and other copyright offenses. However, two uploaders were released from further investigation in July.

The OiNK shutdown was an international operation, named “Operation Ark Royal”, and both British and Dutch police were involved. The police acted upon information fed to them by the IFPI and the BPI, two well known anti-piracy organizations who claimed that OiNK was a money machine.
http://torrentfreak.com/oink-admin-c...efraud-080910/





OiNK Uploaders Charged with Copyright Infringement
Ernesto

Today, after almost a year, the OiNK investigation came to an end. Earlier today we reported that OiNK administrator Alan Ellis was charged with “conspiracy to defraud”. Now, just hours later, the alleged uploaders have been charged with copyright infringement for uploading a single CD.

This May, five men and one woman were arrested for sharing music on OiNK. The suspects were taken in for questioning, and required to provide DNA samples and fingerprints.

Two months later, two of the six alleged uploaders were released from further investigation, but (at least) two of the remaining four have been charged today. The alleged uploaders were charged with copyright infringement for uploading a single CD. The “conspiracy to defraud” accusation appears to have been dropped, as it was not mentioned in the charges.

The case(s) will be heard in two weeks at a magistrates' court, after which there is a possibility they will be passed on to a Crown Court. TorrentFreak had the chance to talk to one of the charged uploaders. “I think it’s a sledgehammer to crack a walnut,” he said. The alleged uploader is convinced that he is being used to set an example.

It is indeed strange that thousands of UK residents get off with a friendly warning letter from their ISP, while the four OiNK uploaders are being charged for doing exactly the same thing. The fact that it is only one CD makes the case even more bizarre.

OiNK was one of the largest private BitTorrent trackers, hosting hundreds of thousands of torrents. The site was shut down in a joint effort by Dutch and British law enforcement in October 2007, based on intelligence from the IFPI and the BPI, two well known anti-piracy organizations.

The police have yet to release an official statement, so more details about the charges may become available in the coming days. Until then, the BPI told us it cannot comment on the case.
http://torrentfreak.com/oink-uploade...gement-080910/





High-Ranking Web Site Administrator Sentenced in Peer-to-Peer Piracy Crackdown
Press release

Daniel Dove, 26, formerly of Clintwood, Va., was sentenced by U.S. District Court Judge James P. Jones to 18 months in prison for his role as a high-ranking administrator of a peer-to-peer (P2P) Internet piracy group, Acting Assistant Attorney General Matthew Friedrich announced today. In addition, Dove was ordered to serve three years of supervised release and fined $20,000. A jury found Dove guilty of conspiracy and felony copyright infringement on June 26, 2008.

At trial, evidence was presented that proved Dove was an administrator for EliteTorrents.org, an Internet piracy site that, until May 25, 2005, was a source of infringing copyrighted works, specifically pre-release movies. Elite Torrents used BitTorrent P2P technology to distribute pirated works to thousands of members around the world. Evidence proved Dove was an administrator of a small but crucial group of Elite Torrents members known as "Uploaders," who were responsible for supplying pirated content to the group. Evidence presented at trial proved that Dove recruited members who had very high-speed Internet connections, usually at least 50 times faster than a typical high-speed residential Internet connection, to become Uploaders. The evidence also showed that Dove operated a high-speed server, which he used to distribute pirated content to the Uploaders.

Dove's conviction is the eighth resulting from Operation D-Elite, a federal crackdown against the illegal distribution of copyrighted movies, software, games and music over P2P networks employing the BitTorrent file distribution technology.

Operation D-Elite targeted leading members of a technologically sophisticated P2P network known as Elite Torrents. Prosecutors presented the jury evidence that, at its height, the Elite Torrents group attracted more than 125,000 members and facilitated the illegal distribution of approximately 700 movies, which were downloaded more than 1.1 million times. The evidence also established that massive amounts of high-value software, video games and music were made available to members of the Elite Torrents group. The wide variety of content selection included illegal copies of copyrighted works before they were available in retail stores or movie theaters.

The case was prosecuted by Trial Attorney Tyler G. Newby of the Criminal Division's Computer Crime and Intellectual Property Section and Assistant U.S. Attorney Jay V. Prabhu for the Eastern District of Virginia, with assistance from the U.S. Attorney's Office for the Western District of Virginia. The investigation was conducted by the FBI field offices in San Diego and Richmond, Va., with significant assistance from the CyberCrime Fraud Unit, Cyber Division at FBI Headquarters in Washington, D.C. The Motion Picture Association of America provided assistance to the D-Elite investigation.

SOURCE U.S. Department of Justice via PR Newswire





Massive Takedown of Anti-Scientology Videos on YouTube
Eva Galperin

Over a period of twelve hours, between this Thursday night and Friday morning, American Rights Counsel LLC sent out over 4,000 DMCA takedown notices to YouTube, all making copyright infringement claims against videos with content critical of the Church of Scientology. Clips included footage of Australian and German news reports about Scientology, A Message to Anonymous/Scientology, and footage from a Clearwater City Commission meeting. Many accounts were suspended by YouTube in response to multiple allegations of copyright infringement.

YouTube users responded with DMCA counter-notices. At this time, many of the suspended channels have been reinstated and many of the videos are back up. Whether or not American Rights Counsel, LLC represents the notoriously litigious Church of Scientology is unclear, but this would not be the first time that the Church of Scientology has used the DMCA to silence Scientology critics. DMCA complaints from the Church of Scientology shut down the YouTube channel of critic Mark Bunker in June 2008. Bunker’s account, XenuTV, was also among the channels shut down in this latest flurry of takedown notices.
http://www.eff.org/deeplinks/2008/09...videos-youtube





EA's Spore Hit by DRM Backlash
Texyt Staff

Spore, the hotly-anticipated new game from Electronic Arts is facing fierce criticism from consumers over its apparent use of unusually restrictive digital rights management (DRM) technology.

“I basically put down my hard-earned money and get punished for buying the game legally. The gaming industry needs to be sent a message that this is no way to treat their customers”, said one buyer who posted a review at Amazon.com

“[The DRM] means that you are actually renting the game, instead of owning it”, another claims.

The $50 game has received an overwhelming number of negative reviews on Amazon, many from users who admit that they have not purchased the game because of its DRM. Numerous potential buyers say they canceled pre-orders or plan to return the game.

Three activations

After it is first installed by the user, the SecuROM Product Activation DRM system used by Spore allows the game to be installed only two more times, users claim. An install is used up each time the game is installed on a different PC – or even in some cases when the user installs new hardware on the original PC. A new or re-installed operating system also forces the user to use up one more of the two remaining installations of the game.

If the game has already been installed three times, the user has to contact EA customer service to explain why he or she wants to install it again.

According to EA's online customer support, the actual limit is three “concurrently active” licenses. This page states that the licenses eventually expire, possibly after ten days, and implies that they can then be re-used. However, this appears to conflict with other statements from EA.
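
The interplay between the install limit and the possible license expiry is easier to see in a toy model. The following Python sketch is purely illustrative: it assumes the behavior users and EA's support page describe (three concurrently active licenses, expiry roughly ten days after activation, one slot consumed per machine) and is not EA's or Sony DADC's actual implementation; all names are hypothetical.

# Hypothetical model of the activation policy as described above.
# Assumptions: 3 concurrently active licenses, each expiring 10 days
# after activation; reinstalling on an already-licensed machine does
# not consume a new slot, but a new or substantially changed PC does.
from datetime import datetime, timedelta

MAX_ACTIVE = 3
LICENSE_LIFETIME = timedelta(days=10)

class ActivationServer:
    def __init__(self):
        self.active = {}  # machine_id -> time of activation

    def activate(self, machine_id, now=None):
        now = now or datetime.utcnow()
        # Expired licenses free their slots (per EA's support page).
        self.active = {m: t for m, t in self.active.items()
                       if now - t < LICENSE_LIFETIME}
        if machine_id in self.active:
            return True   # same PC, already licensed
        if len(self.active) >= MAX_ACTIVE:
            return False  # out of slots: contact customer service
        self.active[machine_id] = now
        return True

server = ActivationServer()
for pc in ["home-pc", "laptop", "work-pc", "new-gpu-pc"]:
    print(pc, server.activate(pc))
# Prints True for the first three machines, False for the fourth.

Under these assumptions, a fourth activation only succeeds once an earlier license has lapsed, which matches the reports that users must otherwise contact EA to explain why they want to install the game again.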

No Warning?

Customers have also complained that the product box contains no warning about the DRM technology used. Others complain that SecuROM is not mentioned in the End User License Agreement (EULA).

In fact, EA's EULA states “You may download the number of copies allowed by the software's digital rights management from an authorized source. However, you may use only one copy of the software on a single computer at any given time.”

EA claims that it does not mention SecuROM in Spore's license agreement because doing so would force it to create a slightly different version of its two-page license agreement for each product.

Resale and rental restriction claims

Other potential buyers charge that EA is using the limit of three product activations to prevent resale or rental of the game.

However, the EULA states: “You may make a permanent transfer [sic] all your rights to install and use the software to another individual or legal entity”.

These terms do appear to forbid rental of the game. While resale appears possible under the EULA, the limited number of activations automatically included could make the game less attractive as a second hand purchase.

SecuROM rootkit allegations

Many have claimed that SecuROM uses 'Root Kit' methods, more commonly used by viruses, trojans and other malware, to hide files from the user.

SecuROM is developed by Sony DADC, a subsidiary of Sony which manufactures optical disks.

“SecuROM does not use any root kit technology in its implementation”, Sony DADC says. Ars Technica's Ken Fisher examined an earlier SecuROM game, BioShock, that was also accused of containing a rootkit, and said he could find no sign of one. SecuROM itself also denies it uses a rootkit.

“SecuROM customers enjoy the benefits of extended shelf life, while maximizing profits through additional product sales”, Sony states.

“With SecuROM Product Activation, Sony DADC offers games and software publishers, plus online distributors, the possibility to apply just one single DRM solution to their content, regardless whether it is distributed via the physical or digital sales channel (e.g. Internet). SecuROM Product Activation enables content owners to apply different business models, such as 'Try & Buy' or 'Subscription' for publishers and online distributors of games and/or software. As a result publishers and online distributors can benefit from increasing customer loyalty and additional revenues”, Sony DADC says in a description of the application.

Links

A BioShock buyer describes how to obtain a refund for a game with SecuROM, on the grounds that it actually violates its own user license.
http://www.texyt.com/EA+Maxis+Spore+...h+Amazon+00124





Twittering From the Cradle
Camille Sweeney

IT would be easy to assume that the first month of Cameron Chase’s life followed the monotonous cycle of eat-sleep-poop familiar to any new parent. But anyone who has read his oft-updated profile on Totspot, a site billed as Facebook for children, knows better. Cameron, of Winter Garden, Fla., has lounged poolside in a bouncy seat with his grandparents, noted that Tropical Storm Fay passed by his hometown, and proclaimed that he finds the abstract Kandinsky print above his parents’ bed “very stimulating!”

Hailing from Winnipeg, Manitoba, Dominic Miguel Alexander Carrasco, 7 months old, uses his Totspot page to share his obsessions with his entourage. His fave nickname? Buddy or Big Boy. His fave book? “Green Eggs and Ham.” His fave food? Unsurprisingly, “mom’s milk.”

Of course, these busy social networkers don’t actually post journal entries or befriend playground acquaintances themselves. Their sleep-deprived parents are behind the curtain, shaping their children’s online identities even before they are diaper-free.

“It does feel a little funny to personalize it in his voice and be connecting to other babies as him,” said Kristin Chase, 29, Cameron’s mother, who updates his page at least every other day.

But considering that relatives clamor for updates, she enjoys being able to catalog 10-week-old Cameron’s doings in one Web-accessible place. “Knowing his daddy, it won’t be long before he’s blogging about himself anyway,” she said, referring to her husband, Nathan, 29. He joined Odadeo, a site still in beta that allows dads to blog on behalf of baby as well as meet other fathers.

Call it convenient. Call it baby overshare. But a host of new sites, including Totspot, Odadeo, Lil’Grams and Kidmondo, now offer parents a chance to forgo the e-mail blasts of, say, their newborn’s first trip home and instead invite friends and family to join and contribute to a network geared to connecting them to the baby in their lives.

“It’s an interesting model,” said Amanda Lenhart, a senior research specialist for the Pew Internet & American Life Project. “Everyone can decide how much or little they want to know about a baby, which avoids the situation of receiving a few too many e-mails about someone’s wonderful child, and parents can decide how much they want to share — in minimal or maximal ways.”

But does the world really need online social networking for babies?

A few entrepreneurs and many Web-forward parents think so. As of this month, Totspot has accumulated 15,000 users. Kidmondo and Lil’Grams, both started last year, each have thousands of users worldwide.

“We’re seeing a rising tide of parenting interest on social networks,” said Adam Ostrow, the editor of Mashable, a news blog about social networking sites. “Recently, I’ve noticed a lot of Facebook users are adding their children to their profiles.”

Mr. Ostrow sees parents’ posting on behalf of their children as a “natural evolution” of, say, Twittering about oneself.

“We are at a very pro-parenting moment in time,” said Pamela Paul, the author of the book “Parenting, Inc.” “It’s reflected in our offline culture and on the Web. We are all screaming about it at the top of our lungs.”

So much so that some early adopters have become ventriloquists for their children, even those too young to speak for themselves. With a quick glance at a cheerful profile, parents can also handpick their offspring’s playmates much like online daters choose companions.

While pregnant, Erin Carrasco, 25, mother of Dominic, friended other moms-to-be online, whose newborns became part of Dominic’s network on Totspot. And now on Thursdays, a few of those baby friends (and their moms) join Ms. Carrasco and Dominic offline in a group focusing on babies’ health.

Julie Ward, 38, is less adventurous about whom she friends online for her son Dixon. She keeps Dixon’s network restricted to people she already knows offline and uses his Totspot page to capture fleeting moments in his development.

One milestone: He gobbled up his first Oreo two weeks ago. “Even collecting the smallest details helps me remember the overall picture of this time of his life,” she said, “though my biggest concern apart from whether we can print all this out for Dixon in hard copy is about privacy, which so far seems O.K.”

Founders of this new breed of baby sites, for all their networking aspirations, allow parents to choose whom they want to invite to view and share information about their child. Totspot even has a feature that allows users to track who has visited their child’s Web page and when.

Daniel Hallac and his wife Carole, co-founders of Kidmondo, believe that someday children themselves will go to the site. “Our son Shaun is only a year and a half so he’s not all that interested yet,” Mr. Hallac said. “But we have a page on our site for our older son Davide, who is 6. He checks up on it a lot and loves to read his story. Sometimes he’ll say something like ‘How come you didn’t write about my baseball game yesterday?’”

These sites allow parents to create “attractive and compelling versions of a kid’s story,” said John Palfrey, a professor at Harvard Law School and an author of “Born Digital: Understanding the First Generation of Digital Natives.”

But Mr. Palfrey warns that parents posting the intimate details of their children’s lives need to ask not only who has access to this content, but also who owns it. “Whether or not they realize it as such,” he said, “parents are contributing to their child’s digital dossier. And, who sees that dossier later on may be of concern.”

Not to mention that children whose relatives have traded minutiae about everything from their burp frequencies to the very hour they first rolled over may be, once teenagers, awed — or embarrassed — by the level of detail in their ghostwritten bildungsroman.

Karen Kavanaugh, for one, hopes her 7-month-old daughter will one day find her Totspot page touching. “After all, I’m not raising a baby, I’m raising a woman,” said Ms. Kavanaugh, 35, of Hollywood, Fla. “I want to do that with dignity and respect and not put things online that she may later wish I never had.”
http://www.nytimes.com/2008/09/11/fashion/11Tots.html





‘Potter’ Author Wins Copyright Ruling
Larry Neumeister

The author of the “Harry Potter” series, J. K. Rowling, has won her claim that a fan violated her copyright with his plans to publish a Potter encyclopedia.

Judge Robert P. Patterson of Federal District Court said Ms. Rowling had proven that Steven Vander Ark’s “Harry Potter Lexicon” would cause her irreparable harm as a writer. He permanently blocked publication of the reference guide and awarded Ms. Rowling and her publisher $6,750 in statutory damages.

Ms. Rowling sued RDR Books last year to stop publication of material from the Harry Potter Lexicon Web site. Mr. Vander Ark, a former school librarian, runs the site, which is a guide to the seven Potter books and includes detailed descriptions of characters, creatures, spells and potions.

The small publisher was not contesting that the lexicon infringes upon Ms. Rowling’s copyright but argued that it was a fair use allowable for reference books. In his ruling, Judge Patterson noted that reference materials were generally useful to the public but that in this case, Mr. Vander Ark went too far.

“While the Lexicon, in its current state, is not a fair use of the Harry Potter works, reference works that share the Lexicon’s purpose of aiding readers of literature generally should be encouraged rather than stifled,” the judge said.

He added that he ruled in Ms. Rowling’s favor because the “Lexicon appropriates too much of Rowling’s creative work for its purposes as a reference guide.”

Anthony Falzone, who argued the case for RDR Books, said he had not yet seen the ruling and could not immediately comment. RDR’s publisher, Roger Rapoport, did not immediately return a telephone message for comment.

A spokeswoman for Ms. Rowling and her publisher, Warner Brothers Entertainment, did not immediately return a telephone message for comment.

Though Ms. Rowling had once praised the Web site, she testified earlier this year that the lexicon was nothing more than a rearrangement of her material.

She said she was so distressed at the prospect that it would be published that she had stopped work on a new novel. “It’s really decimated my creative work over the last month,” she said during the trial in April.

If the lexicon is published, she went on, “I firmly believe that carte blanche will be given to anyone who wants to make a quick bit of money, to divert some Harry Potter profits into their own pockets.”

Mr. Vander Ark, a devoted fan of Ms. Rowling, began work on his Web site in 1999 and released it in 2000.
http://www.nytimes.com/2008/09/09/bu...terweb.html?hp





The Harry Potter Decision, as Text
Groklaw

I'm getting quite a bit of email about the J.K. Rowling Harry Potter decision in the copyright infringement suit, and so I know you are interested in it. So here you go, the Order [PDF]. I've done it as text for you as well. And to help you understand even better, here's [PDF] what the fan site suggested to the judge for findings and order (http://www.groklaw.net/pdf/RDRPropFi...pdf), and here's what [PDF] Time Warner/Rowling thought would be appropriate. The latter document also has six exhibits, which I'll get for you too, if there is sufficient interest. But this is, I believe, enough for you to grasp why the judge made the decision he did.

It must have been a very hard decision, and Rowling certainly didn't win every claim. The court rejected the allegations of bad faith and willfulness, as I'll show you. I think you can see that just in the money damages awarded, which are very, very low, the absolute statutory minimum.

The fact is, Rowling led the defendant on with praise of his work; there was talk of him being the editor of the official encyclopedia, but then she changed her mind about having him or anyone else as editor and decided to do it herself. Her prior praise of his fan site weighed against her. But she did eventually tell him he had no role, and he went ahead anyway, with some marketing that the judge found misleading. So the question was, is it fair use? It certainly could have been, since a copyright owner can't totally control transformative derivative works, but where the defendant failed was in the how of it, how he went about it.

The impression I get from the Order is that if he'd been less of a fan and copied less and written more of his own words instead, it would have worked out better for him. The court, despite finding against fair use, found the defendant at the time had a reasonable belief that it was fair, and that shows me how close the call was, but in analyzing the four factors courts use for fair use determinations case by case, this judge decided it didn't pass ultimately.

But he managed to do so without, in my view, damaging the field for transformative fair use works. Let me show you what I mean. You'll see how carefully the judge annotates his ruling with prior case law, and if you wish to understand his decision, you really would have to read all the citations, because that is where the judge tells you why he decided each element.

The truth is, the defendant could have produced a lexicon, without all the copying, that might have survived challenge. This defendant put in a lot of effort and volunteer labor, and I'm guessing he's sorry now, and who can blame him? His draft for his book was more than 400 typed pages with 2,437 entries organized alphabetically. But there is nothing inappropriate in asking fan sites to learn where the line is. And while the Order includes an injunction against publishing the Lexicon, it wasn't clear to me whether this affects the website. The judge quotes from the Feist case, which tells us where the line is in copyright infringement cases, since not all copying is infringing:

To establish a prima facie case of copyright infringement, a plaintiff must demonstrate "(1) ownership of a valid copyright, and (2) copying of constituent elements of the work that are original."...

The element of copying has two components: first, the plaintiff must establish actual copying by either direct or indirect evidence; then, the plaintiff must establish that the copying amounts to an improper or unlawful appropriation.

There was admittedly a lot of copying, both word for word and in paraphrasing. So the question was, how much is too much? Footnote 13 shows that it wasn't that simple for the judge, because a novel and a lexicon are not the same genre:

The post-trial briefs of the parties both suggest that Ringgold's quantitative/qualitative approach is the applicable test for substantial similarity in this case, and the Court agrees. Since the original and secondary works are of different genres, the question of substantial similarity is difficult to examine using the other tests applied in this Circuit. See Castle Rock, 150 F.3d at 139 ("Because in the instant case the original and secondary works are of different genres and to a lesser extent because they are in different media, tests for substantial similarity other than the quantitative/qualitative approach are not particularly helpful to our analysis.").

The Harry Potter books are wholly original, not a mixture of original elements and uncopyrightable elements, so that means, under the law, the court states, that you get to copy less:

Where, as here, the copyrighted work is "wholly original," rather than mixed with unprotected elements, a lower quantity of copying will support a finding of substantial similarity. ...

And that's where the defendant fell on his sword, because there was a lot of copying, admittedly, and the judge's feeling was that it went over the quantitative line of what is acceptable under the law, even for a lexicon, particularly with respect to the companion books Rowling produced, not so much with the novels:

The quantitative extent of the Lexicon's copying is even more substantial with respect to Fantastic Beasts and Quidditch Through the Ages. Rowling's companion books are only fifty-nine and fifty-six pages long, respectively. The Lexicon reproduces a substantial portion of their content, with only sporadic omissions, across hundreds of entries.

Does that mean that if he had done a lexicon of just the novels, not the companion books, it might have passed muster? Maybe, particularly if it didn't include so much copying and if, instead of quoting or closely paraphrasing the author, the lexicon author had done his own original writing.

What made this case so difficult is that while a lexicon is a compilation of facts, and you can't copyright facts, Rowling's books are invented facts, hence original and inventive. So a compilation of her invented facts isn't the same as a compilation of facts about World War I or Elvis, or any other actual event or person. Rowling has sent a very clear message that she wants her stuff left alone. And the law, the judge points out, presumes that her original expression is entitled to protection up to a point.

What about fair use, though? Surely a lexicon is a transformative work. The defendant raised that defense, arguing that the ruling in an earlier case about a book of trivia from the Seinfeld TV series didn't apply here, because unlike in the Seinfeld case, there is no substantial similarity here. A lexicon isn't a novel. And the copied facts about the novels were not in the same order, for example. The judge didn't buy that argument:

Reproducing original expression in fragments or in a different order, however, does not preclude a finding of substantial similarity.

The law doesn't care what order the copying is in, if there is substantial copying in any order, it won't fly, and this is why the judge found it crossed the line:

Notwithstanding the dissimilarity in the overall structure of the Lexicon and the original works, some of the Lexicon entries contain summaries of certain scenes or key events in the Harry Potter series, as stated in the Findings of Fact. These passages, in effect, retell small portions of the novels, though without the same dramatic effect. In addition, the entries for Harry Potter and Lord Voldemort give a skeleton of the major plot elements of the Harry Potter series, again without the same dramatic effect or structure. Together these portions of the Lexicon support a finding of substantial similarity. To be sure, this case is different from Twin Peaks, where forty-six pages of the third chapter of a guidebook to the Twin Peaks television series were found to constitute "essentially a detailed recounting of the first eight episodes of the series. Every intricate plot twist and element of character development appear[ed] in the Book in the same sequence as in the teleplays." 996 F.2d at 1372-73 (supporting the Second Circuit's finding of comprehensive nonliteral similarity). Those "plot summaries" were far more detailed, comprehensive, and parallel to the original episodes than the so-called "plot summaries" in this case. Nonetheless, it is clear that the plotlines and scenes encapsulated in the Lexicon are appropriated from the original copyrighted works. See Paramount Pictures, F. Supp. 2d at 334 (noting that Twin Peaks was distinguishable but nonetheless applying its broader holding that "a book which tells the story of a copyrighted television series infringes on its copyright"). Under these circumstances, Plaintiffs have established a prima facie case of infringement.

But then there was another element, derivative work. Was the Lexicon a derivative work? Copyright owners have certain rights over derivative works. If you write a book that consists of annotations, revisions, modifications, and elaborations, it is a derivative work, the court explained. It's not derivative automatically, the ruling continued, just because it's based upon a preexisting work, because a parody will pass even though it is based on a preexisting work. The overarching concept is to protect the author's right to transform the work into a new medium. If a writer wants to turn her novel into a play or a movie, that right is hers. You'll notice in the news today that there is a new lawsuit filed alleging that Steven Spielberg failed to get permission to use the short story on which Rear Window was based and which he is alleged to have used for Disturbia, for example. But a lexicon isn't a movie or a play. It's a completely different thing. But here, once again, the substantial amount of copying was the elephant in the room, and Plaintiffs made that point:

Plaintiffs argue that, based on the Twin Peaks decision, "companion guides constitute derivative works where, as is the case here, they 'contain a substantial amount of material from [the underlying work].'"

But this judge disagreed:

By condensing, synthesizing, and reorganizing the preexisting material in an A-to-Z reference guide, the Lexicon does not recast the material in another medium to retell the story of Harry Potter, but instead gives the copyrighted material another purpose. That purpose is to give the reader a ready understanding of individual elements in the elaborate world of Harry Potter that appear in voluminous and diverse sources. As a result, the Lexicon no longer "represents [the] original work[s] of authorship." 17 U.S.C. § 101. Under these circumstances, and because the Lexicon does not fall under any example of derivative works listed in the statute, Plaintiffs have failed to show that the Lexicon is a derivative work.

He also ruled it was not an unauthorized abridgement, as claimed by plaintiffs. But what about fair use? Without it, there can be a real stifling of the incentive to create secondary works. And lexicons are valuable. So how does a court decide which matters most, the original author's monopoly rights, which the law views as also a public benefit if it provides an incentive to authors to publish, or the public's interest in having secondary works? Copyright law's goal is 'promoting the Progress of Science and useful Arts,' according to the US Constitution, so which competing interest works to accomplish that best in a fair use analysis of this case? And that's where the concept of transformative use comes in. If it's transformative, it adds something new, and the public benefits from the new work, which can add value to the original. Here's how this court reasoned:

The purpose of the Lexicon's use of the Harry Potter series is transformative. Presumably, Rowling created the Harry Potter series for the expressive purpose of telling an entertaining and thought provoking story centered on the character Harry Potter and set in a magical world. The Lexicon, on the other hand, uses material from the series for the practical purpose of making information about the intricate world of Harry Potter readily accessible to readers in a reference guide. To fulfill this function, the Lexicon identifies more than 2,400 elements from the Harry Potter world, extracts and synthesizes fictional facts related to each element from all seven novels, and presents that information in a format that allows readers to access it quickly as they make their way through the series. Because it serves these reference purposes, rather than the entertainment or aesthetic purposes of the original works, the Lexicon's use is transformative and does not supplant the objects of the Harry Potter works.

That is talking about the novels. But Rowling does companion books too, a kind of lexicon of her own. And there the transformative issue was less obvious:

Although the Lexicon does not use the companion books for their entertainment purpose, it supplants the informational purpose of the original works by seeking to relate the same fictional facts in the same way. Even so, the Lexicon's use is slightly transformative in that it adds a productive purpose to the original material by synthesizing it within a complete reference guide that refers readers to where information can be found in a diversity of sources.

The best evidence of the Lexicon's transformative purpose is its demonstrated value as a reference source....

The utility of the Lexicon, as a reference guide to a multi-volume work of fantasy literature, demonstrates a productive use for a different purpose than the original works.

After all, Rowling herself said she used the website as a resource, so she can hardly argue effectively that the work isn't transformative, since if it weren't, she wouldn't need to use it to check a fact. She could just check her companion books. The Lexicon occasionally does offer "new information, new aesthetics, new insights and understandings," the judge wrote, quoting from the standard in Castle Rock. However, its transformative character is diminished by inconsistency. And the massive copying came into play there:

A finding of verbatim copying in excess of what is reasonably necessary diminishes a finding of a transformative use....

Perhaps because Vander Ark is such a Harry Potter enthusiast, the Lexicon often lacks restraint in using Rowling's original expression for its inherent entertainment and aesthetic value.

Then there was the issue of commercial or noncommercial use. The website was free. The book was intended as a commercial venture. The general underlying concept is that the author of a creative work gets to make the money; significant commercial exploitation by others will be blocked by the courts. Fair use is supposed to be about the public's interest, not your pocketbook's. Here, the judge found that it counted only a little against the defendant, though, because he'd already determined that the lexicon would be of value to the public. But it did count some that the publisher hoped to be first in the market with such a lexicon, and the author herself had said she wanted to do one. The plaintiffs also argued bad faith and willfulness, but the judge wasn't persuaded. The defendant, he decided, had a reasonable belief that the use was fair. But what about the balance between how much was copied and the benefits of a transformative use?

In undertaking this inquiry, the Court bears in mind that "room must be allowed for judgment, and judges must not police criticism," or other transformative uses, "with a heavy hand." Chicago Bd. of Educ....

Obviously, a Lexicon has to use the original work quite a bit to be useful as a reference work, but the court felt that "there are a number of places where the Lexicon engages in the same sort of extensive borrowing that might be expected of a copyright owner, not a third party author". And you don't lose your copyright rights just because your work is wildly popular, the court points out. It reminds me of something Dylan said once, "Just because you like my stuff doesn't mean I owe you anything".

The 4th fair use factor is market harm. This was a biggie. "Plaintiffs presented expert testimony that the Lexicon would compete directly with, and impair the sales of, Rowling's planned encyclopedia by being first to market." However, the judge noted that case law doesn't necessarily give her that exclusive right. She doesn't get to be first, if a sufficiently transformative work gets published first:

Notwithstanding Rowling's public statements of her intention to publish her own encyclopedia, the market for reference guides to the Harry Potter works is not exclusively hers to exploit or license, no matter the commercial success attributable to the popularity of the original works. See Twin Peaks, 996 F.2d at 1377 ("The author of 'Twin Peaks' cannot preserve for itself the entire field of publishable works that wish to cash in on the 'Twin Peaks' phenomenon"). The market for reference guides does not become derivative simply because the copyright holder seeks to produce or license one.

I expect that sentence I've highlighted was a disappointment to Rowling. But US law allows for fair use. You can't prevent fair use here.

But was the Lexicon fair use or not? An expert testified that the Lexicon would diminish demand for the books, because it reveals key plot lines without any spoiler warnings. But the judge disagreed. Kids would read them both, for different reasons. Not so for the companion books, though. There it could indeed interfere. Why would you buy the companion books when they are almost all in the Lexicon?

So summing up the analysis of all the fair use factors, the judge found that this was not fair use:

The fair-use factors, weighed together in light of the purposes of copyright law, fail to support the defense of fair use in this case. The first factor does not completely weigh in favor of Defendant because although the Lexicon has a transformative purpose, its actual use of the copyrighted works is not consistently transformative. Without drawing a line at the amount of copyrighted material that is reasonably necessary to create an A-to-Z reference guide, many portions of the Lexicon take more of the copyrighted works than is reasonably necessary in relation to the Lexicon's purpose. Thus, in balancing the first and third factors, the balance is tipped against a finding of fair use. The creative nature of the copyrighted works and the harm to the market for Rowling's companion books weigh in favor of Plaintiffs. In striking the balance between the property rights of original authors and the freedom of expression of secondary authors, reference guides to works of literature should generally be encouraged by copyright law as they provide a benefit readers and students; but to borrow from Rowling's overstated views, they should not be permitted to "plunder" the works of original authors (Tr. (Rowling) at 62:25-63:3), "without paying the customary price" Harper & Row, 471 U.S. at 562, lest original authors lose incentive to create new works that will also benefit the public interest (see Tr. (Rowling) at 93:20-94:13).

In this instance, Rowling testified that she was disheartened by the Lexicon, and she wouldn't have the incentive to write her own if the market became filled with what she viewed as substandard encyclopedias of her works. Also, the charity would lose expected royalties, if she couldn't protect her own version of a lexicon. If this lexicon was permitted, others would follow. The judge took that seriously as a factor, but it was not the winning factor. It was the copying, the extent of it:

Ultimately, because the Lexicon appropriates too much of Rowling's creative work for its purposes as a reference guide, a permanent injunction must issue to prevent the possible proliferation of works that do the same and thus deplete the incentive for original authors to create new works.

As for damages, the court awarded the statutory minimum. First, the Lexicon hadn't been published yet, so there was no harm beyond the infringement. So, that meant $750.00 for each of the seven Harry Potter novels and each of the two companion books, for a total of $6,750.00. I can't have an opinion, since I haven't read either work and so can't judge the amount of copying, but I think it's clear that the judge put forth the effort to really try to get it right and to balance fairly everyone's interests, as defined by Copyright and case law.

Update: The lawyer who represented RDR, the defendant, has now offered a statement which you can find in Mark Hamblett's New York Law Journal article on Law.com:

Hammer, who represented RDR Books, said the decision was thoughtful and well crafted. "I'm sorry about the result, that the lexicon was not found to be sufficiently transformative," Hammer said. "But I am happy the judge endorsed the genre of reference works and companion books as valuable and important and that authors don't have an automatic right to control what's written about their works."

That's pretty much what I got from the ruling too, and I'm gratified to see that my assessment is on the mark.
http://www.groklaw.net/article.php?s...80909014304275





Proposed Copyright Law a 'Gift' to Hollywood, Info Groups Say
David Kravets

A dozen special-interest groups urged lawmakers Wednesday to squelch proposed legislation that for the first time would allow the U.S. Justice Department to prosecute civil cases of copyright infringement.

The Enforcement of Intellectual Property Rights Act, scheduled to be heard in the Senate Judiciary Committee on Thursday, also creates a Cabinet-level copyright-patent czar charged with creating a worldwide plan to combat piracy. The czar would "report directly to the president and Congress regarding domestic and international intellectual property enforcement programs."

The bill, nearly identical to a version the House passed last year, is strongly backed by the music and movie industries. The House and Senate versions encourage federal-state anti-piracy task forces and the training of other countries in IP enforcement and, among other things, institute an FBI piracy unit.

In a letter to the Judiciary Committee, the groups said granting the Justice Department the power to file civil lawsuits on behalf of Hollywood and others is "an enormous gift" to copyright holders.

"Movie and television producers, software publishers, music publishers, and print publishers all have their own enforcement programs," the letter said. "There is absolutely no reason for the federal government to assume this private enforcement role."

The dozen groups include American Association of Law Libraries, American Library Association, Consumer Federation of America, Consumers Union, Digital Future Coalition, Electronic Frontier Foundation, Essential Action, IP Justice, Knowledge Ecology International, Medical Library Association, Public Knowledge and Special Libraries Association.

The House version does not contain language granting the Justice Department the ability to sue copyright infringers. The department does prosecute criminal acts of infringement, although rarely.

If the Senate version becomes law, it is not immediately clear how the Justice Department's expanded powers would work in practice. For example, would the department assume the role of the Recording Industry Association of America, which has sued more than 30,000 people in the United States for copyright infringement since 2003?
http://blog.wired.com/27bstroke6/200...ed-copyri.html





‘Dangerous’ Box Office Numbers Reveal America’s Lack of Interest
John Cairns

WHERE THE HELL ARE YOU PEOPLE?? Why aren’t you at the local movie theater? What the heck is it that people pack these places during the summer when it is stinking hot and there’s plenty of outdoor activities to take advantage of, yet at the first sign of fall people run like hell from the theaters?!

Probably, no doubt, because Hollywood is driving people away with what they are serving up at the moment. This week, of course, they served up very little: one freaking new movie. And I know it’s back-to-school, and there are still people fleeing hurricanes and the like (the entire east coast was hit by tropical storm Hanna this weekend). And football season is on everywhere. But let’s face it: it’s easy to come up with an excuse not to go to the movies if the flicks being served up are so dog-awful.

The top flick in the land was the lone new release this week: Nicolas Cage’s piece of junk Bangkok Dangerous. Yet it only made something like $7.8 million. Not even eight freaking million dollars for a first-place movie!! Now that really is sad, folks. A showing like that makes the whole movie industry look like it’s in a depression.

This is definitely the worst weekend of the year at theaters so far. In fact, I’ve read this is the worst weekend at the box office in seven years.

What do you expect, though, when studios treat late August and early September as a dumping ground for all the crap movies that didn’t test well with audiences?! They’re basically conceding defeat before they even start! No wonder there’s no one paying these inflated movie prices, then, if that’s how Hollywood views the early fall period.

Here is how Rotten Tomatoes rated these latest bird-turd releases that rolled out the last three weeks. Bangkok Dangerous: 12 percent. Babylon A.D.: 7 percent. The House Bunny: 39 percent. Traitor: 53 percent. Death Race: 39 percent. Disaster Movie: A big, freaking zero percent!!! Now, Hamlet 2 did get 62 percent, but that was it. One freaking movie rated as “fresh” among the wide releases. No WONDER everyone is staying home.

If you want to see any movies that have a hope of being good, there’s always the Toronto International Film Festival, where everyone from Brad Pitt to Paris Hilton is hanging out. That’s the one place in North America where theaters are packed for these Oscar contenders.

Having been there before, though, I can say with certitude that not all these theaters in Toronto are packed. Some of these art-house and foreign flicks play to pretty empty houses. But the gala flicks all pack the joint.

At least there is SOME place on earth where people are still excited about the movies, instead of totally repelled. Seven freaking million for the top domestic box office movie: that is a joke, folks. Summer is DEFINITELY over and in the grave.

The grim final numbers from this joke of a weekend at theaters:

1. Bangkok Dangerous $7.8 million
2. Tropic Thunder $7.5 million
3. The House Bunny $5.9 million
4. The Dark Knight $5.7 million
5. Traitor $4.6 million
6. Babylon A.D. $4.0 million
7. Death Race $3.6 million
8. Disaster Movie $3.3 million
9. Mamma Mia! $2.7 million
10. Pineapple Express $2.4 million

So that about wraps up this dismal weekend. What else is there to add? Everyone has flown the coop. The cinema multiplex is a sad place to be these days.

There’s nowhere to go but up, right? Check back later this week for what is sure to be a big weekend with lots of new flicks on the way here at the Reject Report.
http://www.filmschoolrejects.com/new...ous-ly-low.php





HBO Offshoot Launches Web Video Series
Andrew Wallenstein

HBOlab, an experimental offshoot of the cable powerhouse focused on online programming, is launching a Web video series featuring a cast of the Internet's most popular entertainers.

Jessica Rose, star of the Web sensation "lonelygirl15," will be joined by top YouTube talent, including video bloggers known as sxePhil and KevJumba, for the scripted comedy "Hooking Up." Set to premiere October 1, the 10-episode series will be distributed on top video portals including YouTube and MySpace as well as a destination site, hookingupshow.com (http://www.hookingupshow.com).

"Hooking" could prove groundbreaking for the nascent webisode genre by amassing a sizable viewership, given its aggregation of Internet personalities who can promote the production to their devoted audiences of millions of young viewers.

For HBOlab, "Hooking" is an opportunity to take to the next level the knowledge the unit has gleaned regarding Internet video distribution.

"I think we're going to see a lot more hits than had we cast a bunch of funny people you didn't know," said Fran Shea, head of HBOlab.

YouTube content partnerships manager George Strompolos likened the genesis of "Hooking" to an organic strategy employed occasionally by his site's stars, who pop up in one another's videos to cross-pollinate their audiences.

"It has happened before, but there hasn't been a real production company with this level of support orchestrating everything," he said. "That will be the extra push it will need to get out there in a smart, fun way."

"Hooking" puts its Internet all-stars to work acting as students at a fictional university where the populace spends most of its time e-mailing, instant-messaging and Twittering, but always seems to be miscommunicating.

Hookingupshow.com will be supplemented with social-networking capabilities, and the series' characters will have their own Facebook pages. No advertising will be a part of the first season, though HBO could elect to bring sponsors on board should the series continue.

To date, HBOlab has concentrated on creating the online hub Runaway Box, a collection of comedy videos that will continue to operate. One Runaway player, Mike Polk, recently signed a script-development deal with HBO.

Registering among the 10 most subscribed channels on YouTube, Kevin Wu, aka KevJumba, and Philip DeFranco, aka sxePhil, are well-known as video bloggers but have not worked on a scripted production.

"It was a big risk to take two non-actors and put them in lead roles," Shea said.

Another top 10 YouTube star, Michael Buckley of the "What the Buck Show," not only appears in "Hooking" but also has signed a development deal with HBOlab that will put him in other projects created by the unit. Other Web stars with roles in "Hooking" include Kevin Nalts, Charles Trippy and Cory Williams.

"Hooking" is written and directed by Woody Tondorf, a HBOlab staffer who also has a part in the series.
http://www.washingtonpost.com/wp-dyn...090800727.html





Capturing the Moment (and More) via Cellphone Video
Anne Eisenberg

STILL keeping in touch with friends by texting? How old-fashioned. Some early adopters of technology are now using their mobile phones to send not typed words or photographs, but live video broadcasts. They’re streaming scenes from their daily lives — like trips to the mall, weddings, a new puppy’s antics or even a breaking news story that they happen upon.

“People have moved on from texting,” said Carla Thompson, senior analyst at the Guidewire Group, a marketing research firm in San Francisco. “Just typing in what you are doing is no longer enough. That’s why the field of live video streaming is burgeoning.”

People want to share visceral experiences immediately, Ms. Thompson said. But many cellphones, including the iPhone, do not yet permit live streaming, and many that do are not cheap — including the high-end Nokia N series of smartphones, for example, which can cost from about $400 to $895. Users whose mobile phone plans don’t include unlimited data streaming for Internet services will have to add the coverage, typically for about $15 to $20 a month.

Once they have the right phones and plans, users can aim their built-in cameras, press a few buttons and, with the right software, be broadcasting within seconds. Their videos can be seen on blogs, on social networking sites like Facebook or, among other places, on the Web sites of companies that provide the software and services for streaming, like Kyte (www.kyte.com) or Qik (qik.com).

“You can record whatever’s happening around you and send it back to wherever you’ve embedded your channel,” Ms. Thompson said. “You don’t have to set up a camera — it’s really instant.”

Viewers can respond immediately to videos, typing messages on their keyboards, for instance, and sending them along to a live session. The typed chat appears instantly at the bottom of viewers’ screens.

The relatively simple technology, which requires no television cameras or satellite links, has much potential, Ms. Thompson said, although the quality will vary when users stream live video, depending on the available bandwidth from the provider.

Still, the technology is appealing and easy to use, she said: “There’s no learning curve. People can pick it up right away.”

With Qik, people can use one of 40 or so Nokias, or some phones with the Windows mobile operating system, including the Motorola Q, the Sony Ericsson Xperia X1, the AT&T Tilt, the HTC Touch Dual and the Samsung Blackjack II.

Representative John Culberson, Republican of Texas, pairs video streaming with Twitter, the microblogging system that lets him broadcast messages of up to 140 characters to people in his network. He notifies his 3,000 or so Twitter followers when he is about to stream a video with his Nokia N95, so they can watch it live or later — at, for instance, Qik.

“I can talk directly to my constituents in real time without any filter,” Mr. Culberson said.

KCRW, the National Public Radio affiliate in Santa Monica, Calif., decided several months ago to try streaming video of live events from cellphones to its Web site. Anil Dewan, director of new media, said the station first used standard video camera-based footage. “The technology got in the way,” he said. “It didn’t capture the energy we wanted in a live event.”

Instead, the station made a relatively small investment, Mr. Dewan said: three Nokia N95 phones and three plans with AT&T allowing unlimited access to the company’s 3G network. The station signed up with Kyte and sent phones and staff members to the Democratic and Republican conventions to capture events.

“It’s been a tremendous success,” said Mr. Dewan, with more than 124,000 views of 67 convention clips.

“It’s a small, nimble technology,” he said of the streaming process. “You can record and upload quickly to our Web site using Kyte. The content is fed straight from cellphones to the Web site. No one has to encode or edit it.”

Daniel Graf, the chief executive of Kyte in San Francisco, said it allows viewers to respond not only with text but also with audio or video comments.

KCRW is using Kyte’s services on a trial basis, he said. In the future, commercial users will pay a flat fee based either on traffic or on a share of revenue; individuals are not charged for private use of the service.

Kevin Rose, founder of Digg, a Web news site, bought a Nokia N95 after seeing other people stream video from smartphones. He signed up with Qik, and particularly likes the interactive chat feature.

“It gives every viewer a chance for feedback,” he said.

Mr. Rose says he thinks the prospects for cellphone streaming are promising.

“I have a whole group of friends,” he said. “The second they see this technology, they want it.”
http://www.nytimes.com/2008/09/14/te...y/14novel.html





Like ‘The Real World,’ With More Computers
Stuart Elliott

FOR years, MTV has been bringing together eclectic groups of young adults to live together in loftlike spaces on the series “The Real World.” Now, with the backing of a major technology marketer, the network has gathered 16 youthful creative types in a loft in Brooklyn for a contest that can be watched on TV or online.

Beginning on Monday, MTV and its mtvU channel, which is aimed at college and university students, will join forces with Hewlett-Packard to present “Engine Room,” an original series that will follow the 16 contestants, divided into four teams, as they produce digital art using — of course — PCs, work stations, monitors and other products sold by H.P.

Episodes of “Engine Room” run from five to seven minutes each, and the series is scheduled to last seven weeks. At the end, one team will win prizes that include $400,000 in cash and a chance to program the giant MTV screen in Times Square for a night.

“Engine Room” follows a previous video series that H.P. sponsored with MTV, called “Meet or Delete.” It also comes after a video series for the back-to-school season, “Dorm Storm,” presented by Hewlett-Packard in partnership with Broadband Enterprises, an online video producer and distributor.

Those series are indicative of the increasing interest in video campaigns among large marketers, particularly to reach younger consumers who have demonstrated a willingness to watch clips on their computers, cellphones and other mobile devices.

“We don’t want it to be advertising; we want it to be real,” said David Roman, vice president for worldwide marketing communications at the personal systems group of H.P. in Cupertino, Calif.

“We’re learning as we go not to do so much talking about what we do but rather let people do things with the product,” Mr. Roman said. “That’s where the ‘wow factor’ comes from.”



Mr. Roman estimated that spending by Hewlett-Packard for the “Engine Room” initiative would be in “the tens of millions of dollars,” beginning with efforts that began months ago to recruit contestants on a Web site (mtvengineroom.com).

Almost 2,000 people from 122 countries submitted more than 20,000 original artworks, he said, to earn a chance to take part in the contest.

The teams of contestants are divided by the regions they come from: Asia-Pacific, Europe, Latin America and North America. They are visited during the competition by guests like the musician Moby; Kevin Smith, the movie director; and the British pop band the Ting Tings.

The digital creations of the teams were judged by a diverse panel that included musicians, filmmakers, museum curators, a physicist, a tattoo artist, critics and Pete Connolly, an art director from Goodby, Silverstein & Partners, the Hewlett-Packard creative agency.

A film that Mr. Connolly created for his résumé inspired a series of commercials for H.P. that features celebrities like Jay-Z, Jerry Seinfeld and Serena Williams. They are all seen from the neck down while demonstrating how they live their lives digitally.

“We take three, four months to create a commercial, and we saw kids creating parodies on YouTube” in a fraction of the time, said Nancy Reyes, an account director on the Hewlett-Packard account at Goodby, Silverstein in San Francisco, part of the Omnicom Group.

“The idea of giving these kids who are creating amazing things with computers a stage, a global stage, to show off their work seemed like a natural fit,” she added.

The decision to go back to MTV and mtvU — part of the MTV Networks division of Viacom — was based on what was deemed a good result for “Meet or Delete,” a dating series that was introduced in 2006. Episodes can still be watched online at meetordelete.com.

Programs like “Meet or Delete” and “Engine Room,” which are created for marketers and feature products interwoven into the plots, are becoming more popular among executives at TV networks. Such shows, known on Madison Avenue as branded entertainment, have to walk a fine line between entertaining audiences and pitching to them.

“Our audiences don’t wait for a TV show or a Web site to get good,” said Ross Martin, senior vice president for programming at mtvU in New York. “They don’t like it, they hit stop or they hit delete.”

Thus, “if we compromise the quality of the work or the integrity of the content,” he added, “we lose our credibility and we’re done.”



Advertisers like Ford Motor, General Electric, Microsoft and Yahoo have also worked with mtvU on branded-entertainment projects, Mr. Martin said.

A search on Google for information about the mtvU reality series for Ford, “College 500,” which matches two student teams in a cross-country race, turns up almost 2.8 million results.

Viewers will be able to watch the episodes of “Engine Room” on mtvU in the United States, on various MTV channels in other countries and online — in nine languages — at mtvengineroom.com.

As the editing of the episodes nears completion, Mr. Martin said he could not discuss which team won the competition.

However, he added, “I will tell you, the winning team won by a point after six challenges.”
http://www.nytimes.com/2008/09/11/bu...ia/11adco.html





VideoSurf Hopes to Ride Internet Video Wave
Michael Liedtke

There are plenty of places to watch online video, but still no easy way to find a particular clip without suffering through a lot of trial and error.

Now a startup led by a former Yahoo Inc. engineer is promising to simplify Internet video search with a complex technology that enables computers to recognize images, such as actors' faces, and index them scene-by-scene.

"We absolutely think that it's a leap forward in finding and discovering video on the Web," said Lior Delgo, who left Yahoo in 2006 to start VideoSurf. His previous startup was the travel search engine FareChase, which he sold to Yahoo in 2004 for an undisclosed amount.

VideoSurf showcased its search engine for the first time Wednesday at a conference hosted by the blog TechCrunch. A test version is up and running, but visitors still must submit an e-mail address and obtain a password to use it.

VideoSurf's "computer vision" is much different than other video search engines such as YouTube, Blinkx and EveryZing that read written tags assigned to video or scan transcripts of their spoken words in order to catalog clips.

"We have taught computers to see inside the videos," Delgo said.

The approach enables VideoSurf, which has indexed millions of clips from YouTube, Hulu and other popular video sites, to detect specific people appearing in a video, even if their names haven't been tagged to every scene. For instance, a search for Alec Baldwin might produce thumbnails from his work on the TV show "30 Rock" as well as his famous sales talk in the movie "Glengarry Glen Ross."
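VideoSurf has not published its algorithms, but the core idea of indexing a clip by what is visible in it, rather than by its tags, can be sketched in a few lines. The toy Python script below is only an illustration under assumptions of mine — the file name, the one-second sampling interval and the use of OpenCV's stock Haar-cascade face detector are placeholders, not VideoSurf's pipeline. It samples frames from a video, detects faces in each sampled frame and records the timestamps at which faces appear; a real system would go further and compute face signatures so they can be matched against named people.

# Minimal sketch of visual video indexing (illustrative only, not
# VideoSurf's actual method): sample frames, run a stock face detector,
# and record when faces appear.
import cv2  # OpenCV; the Haar cascade file ships with the library

def index_faces(video_path, sample_every_sec=1.0):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unknown
    step = max(int(fps * sample_every_sec), 1)
    index, frame_no = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_no % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                index.append({"time_sec": frame_no / fps, "num_faces": len(faces)})
        frame_no += 1
    capture.release()
    return index

if __name__ == "__main__":
    for hit in index_faces("clip.mp4"):  # "clip.mp4" is a placeholder path
        print("%.1fs: %d face(s)" % (hit["time_sec"], hit["num_faces"]))

Searching such an index then becomes a matter of looking up which clips, and which moments within them, contain the face or object being asked for.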

The clips can be seen on VideoSurf's own site, giving the San Mateo, Calif.-based startup a chance to make money from ads. For now, Delgo and VideoSurf's 21 other employees are getting by on $5.5 million provided by investors who include Al Gore and Joel Hyatt, who co-founded Current TV with the former vice president.
http://www.physorg.com/news140288631.html






Security Research Vid Shows "Virus" Infecting E-Voting Machines
Max sez

The Security Group at the University of California in Santa Barbara has released the video that shows the attacks carried out against the Sequoia voting system. I heard about the video when talking to some members of the group, but it was never made available to the public before. The video was shot as part of the Top-To-Bottom Review organized by the California Secretary of State. Even though the review was carried out in July 2007, the video has been posted only now, more than a year later (why?).

The video shows an attack in which virus-like software spreads across the voting system. The coolest part of the video is the one that shows how the "brainwashed" voting terminals can use different techniques to change the votes even when a paper audit trail is used. Pretty scary stuff. The video is proof that these types of attacks are indeed feasible and not just a conspiracy theory.

Also, the part that shows how the "tamperproof" seals can be completely bypassed is very funny (and disturbing at the same time).
http://www.boingboing.net/2008/09/08...search-vi.html





Threat to Computers for Industrial Systems Now Serious

Security researcher publishes code that gives hackers a back door into utility companies, water plants, and oil refineries in order to raise awareness of the vulnerabilities
Robert McMillan

A security researcher has published code that could be used to take control of computers used to manage industrial machinery, potentially giving hackers a back door into utility companies, water plants, and even oil and gas refineries.

The software was published late Friday night by Kevin Finisterre, a researcher who said he wants to raise awareness of the vulnerabilities in these systems, problems that he said are often downplayed by software vendors. "These vendors are not being held responsible for the software that they're producing," said Finisterre, who is head of research with security testing firm Netragard. "They're telling their customers that there is no problem, meanwhile this software is running critical infrastructure."

Finisterre released his attack code as a software module for Metasploit, a widely used hacking tool. By integrating it with Metasploit, Finisterre has made his code much easier to use, security experts said. "Integrating the exploit with Metasploit gives a broad spectrum of people access to the attack," said Seth Bromberger, manager of information security at PG&E. "Now all it takes is downloading Metasploit and you can launch the attack."

The code exploits a flaw in Citect's CitectSCADA software that was originally discovered by Core Security Technologies and made public in June. Citect released a patch for the bug when it was first disclosed, and the software vendor has said that the issue poses a risk only to companies that connect their systems directly to the Internet without firewall protection, something that would never be done intentionally. A victim would have to also enable a particular database feature within the CitectSCADA product for the attack to work.

These types of industrial SCADA (supervisory control and data acquisition) process control products have traditionally been hard to obtain and analyze, making it difficult for hackers to probe them for security bugs, but in recent years more and more SCADA systems have been built on top of well-known operating systems like Windows or Linux, making them both cheaper and easier to hack.

IT security experts are used to patching systems quickly and often, but industrial computer systems are not like PCs. Because downtime at a water plant or power system can lead to catastrophe, engineers can be reluctant to make software changes or even bring the computers off-line for patching.

This difference has led to disagreements between IT professionals like Finisterre, who see security vulnerabilities being downplayed, and industry engineers charged with keeping these systems running. "We're having a little bit of a culture clash going on right now between the process control engineers and the IT folks," said Bob Radvanovsky, an independent researcher who runs a SCADA security online discussion list that has seen some heated discussions on this topic.

Citect said that it had not heard of any customers who had been hacked because of this flaw, but in a statement released Tuesday the company said it plans to soon release a new version of CitectSCADA with new security features.

That release will come none too soon, as Finisterre believes that there are other, similar, coding mistakes in the CitectSCADA software.

And while SCADA systems may be separated from other computer networks within plants, they can still be breached. For example, in early 2003, a contractor reportedly infected the Davis-Besse nuclear power plant with the SQL Slammer worm.

"A lot of the people who run these systems feel that they're not bound by the same rules as traditional IT," Finisterre said. "Their industry is not very familiar with hacking and hackers in general."
http://www.infoworld.com/article/08/...rious_1.html





Verizon Tech Accused Of Making $220K In Sex Calls

Former employee said to have tapped into landlines of 950 customers to make 45,000 minutes worth of calls
Lee Kushnir & Jeff Capellini

A former Verizon technician racked up $220,000 in phone-sex calls by tapping into the land lines of nearly 950 customers, authorities charged on Tuesday.

Joseph Vaccarelli, 45, of Nutley, made approximately 5,000 calls, resulting in 45,000 minutes of call time, Bergen County Prosecutor John L. Molinelli said in a news release.

Vaccarelli placed the calls in about 30 municipalities in Bergen County, according to the release.

Verizon estimated that out of a 40-week period, Vaccarelli spent 15 weeks talking on 900 chat lines, authorities alleged.

Vaccarelli was charged with theft by deception and theft of services. He is scheduled to be arraigned Wednesday in Central Municipal Court.
http://wcbstv.com/watercooler/phone.....2.813865.html





Exclusive: Widespread Cell Phone Location Snooping by NSA?
Chris Soghoian

If you thought that the National Security Agency's warrantless wiretapping was limited to AT&T, Verizon and Sprint, think again.

While these household names of the telecom industry almost certainly helped the government to illegally snoop on their customers, statements by a number of legal experts suggest that collaboration with the NSA may run far deeper into the wireless phone industry. With over 3,000 wireless companies operating in the United States, the majority of industry-aided snooping likely occurs under the radar, with the dirty-work being handled by companies that most consumers have never heard of.

A recent article in the London Review of Books revealed that a number of private companies now sell off-the-shelf data-mining solutions to government spies interested in analyzing mobile-phone calling records and real-time location information. These companies include ThorpeGlen, VASTech, Kommlabs, and Aqsacom--all of which sell "passive probing" data-mining services to governments around the world.

ThorpeGlen, a U.K.-based firm, offers intelligence analysts a graphical interface to the company's mobile-phone location and call-record data-mining software. Want to determine a suspect's "community of interest"? Easy. Want to learn if a single person is swapping SIM cards or throwing away phones (yet still hanging out in the same physical location)? No problem.

In a Web demo (PDF) (mirrored here) to potential customers back in May, ThorpeGlen's vice president of global sales showed off the company's tools by mining a dataset of a single week's worth of call data from 50 million users in Indonesia, crunching it to try to discover small anti-social groups that only call each other.
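Strip away the product gloss and "small groups that only call each other" is a graph problem: build a graph whose nodes are phone numbers and whose edges are calls, then look for tiny connected components that are cut off from the rest of the network. The Python sketch below, using the networkx library, is my own illustration of that general technique, not ThorpeGlen's software; the record format and the size threshold are assumptions.

# Illustrative sketch (not ThorpeGlen's product): flag small, isolated
# clusters of phones that only call one another.
import networkx as nx

def find_isolated_groups(call_records, max_group_size=6):
    """call_records: iterable of (caller, callee) pairs, e.g. from billing logs."""
    graph = nx.Graph()
    graph.add_edges_from(call_records)
    suspicious = []
    for component in nx.connected_components(graph):
        # A small connected component is, by construction, a set of phones
        # whose members call each other and nobody else in the dataset.
        if 2 <= len(component) <= max_group_size:
            suspicious.append(sorted(component))
    return suspicious

if __name__ == "__main__":
    sample = [("A", "B"), ("B", "C"), ("C", "A"),                 # closed ring of three
              ("X", "Y"), ("Y", "Z"), ("Z", "W"), ("W", "V"),
              ("V", "U"), ("U", "T"), ("T", "S")]                 # part of a larger web
    print(find_isolated_groups(sample))   # -> [['A', 'B', 'C']]

On a real dataset of 50 million subscribers the same idea would be run over far more data and combined with location and SIM-change information, but the underlying analysis is no more exotic than this.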

Clearly, this is creepy, yet highly lucrative, stuff. The fact that human-rights abusing governments in the Middle East and Asia have deployed these technologies is not particularly surprising. However, what about our own human-rights-abusing government here in the U.S.? Could it be using the same data-mining tools?

To get a few answers, I turned to Albert Gidari, a lawyer and partner at Perkins Coie in Seattle who frequently represents the wireless industry in issues related to location information and data privacy.

When asked if there is a market for these kinds of surveillance data-mining tools in the U.S., Gidari told me: "Of course. It is a global market and these companies have partners in the U.S. or competitors."

The question is not if the government would like to use these tools--after all, what spy wouldn't want to have point-and-click real-time access to the location information on millions of Americans? The real mystery is how the heck the National Security Agency can legally get access to such large datasets of real-time location information and calling records. The answer to that, Gidari said, is the thousands of other, lesser-known companies in the wireless phone and communications industry.

The massive collection of customer data comes down to the interplay of two specific issues: First, thousands of companies play small, niche support roles in the wireless phone industry, and as such these firms learn quite a bit about the calling habits of millions of U.S. citizens. Second, the laws relating to information sharing and wiretapping specifically regulate companies that provide services to the general public (such as AT&T and Verizon), but they do not cover the firms that provide services to the major carriers or connect communications companies to one another.

Thus, while it may be impossible for the NSA to legally obtain large-scale, real-time customer location information from Verizon, the spooks at Fort Meade can simply go to the company that owns and operates the wireless towers that Verizon uses for its network and get accurate information on anyone using those towers--or go to other entities connecting the wireless network to the landline network. The wiretapping laws, at least in this situation, simply don't apply.

Gidari explained it as follows:

Networks are more and more disaggregated and outsourced, from customer service call centers overseas with full viewing access to data to key infrastructure components and processing. A single communication is handled by many more parties than the named provider today. Moreover, interoperability protocols include network identifiers--send a message from company A to company B and the acknowledgment of delivery may include location and other information. That's just the way the system is designed--location was about billing in the early years and no one bothered to undo the existing protocols when business models changed and interoperability became common practice or a myriad of new messaging companies came into being...So my point is that there are many access points--albeit less convenient than one-stop shopping at the big carriers--to get information including real-time data.

For example, if a Sprint Wireless customer in Virginia calls a relative in Montana--who is a customer of a small, regional landline carrier--information on the callers will spread far beyond just those two communications companies.

Sprint doesn't own any of its own cellular towers, and so TowerCo, the company that owns and operates the towers, of course, learns some information on every mobile phone that communicates with one of its towers. This is just the tip of the iceberg, though. There are companies that provide "backhaul" connections between towers and the carriers, providers of sophisticated billing services, outsourced customer-service centers, as well as Interexchange Carriers, which help to route calls from one phone company to another. All of these companies play a role in the wireless industry and have access to significant amounts of sensitive customer information, which, of course, can be obtained (politely, or with a court order) by the government.

With the passage of laws like the FISA Amendments Act and the USA Patriot Act, in most cases, requests for customer information come with a gag order, forbidding the companies from notifying the public, or the end users whose calling information is being snooped upon. Gidari summed it up this way:

So any entity--from tower provider, to a third-party spam filter, to WAP gateway operator to billing to call center customer service--can get legal process and be compelled to assist in silence. They likely don't volunteer because of reputation and contractual obligations, but they won't resist either.

Seeking clarification, I turned to Paul Ohm, a former federal prosecutor turned cyberlaw professor at the University of Colorado Law School and a noted expert on surveillance laws.

Before getting into the details of the issue, Ohm first outlined the basic problem of the various wiretap and surveillance laws; they are extremely confusing and few people fully understand them. The 9th Circuit Court of Appeals seemed to share Ohm's view, stating a few years ago that the Electronic Communications Privacy Act is a "complex, often convoluted area of the law" (United States v. Smith, 155 F.3d 1051).

Ohm then said that the "one thing I can say with confidence is that you are correct to note that the [Stored Communication Act's] voluntary disclosure prohibitions (in 18 USC 2702(a)) apply only to providers to the public."

After describing all the ways that the government could legally collect real-time data on millions of U.S. citizens, Gidari said that essentially, the existence of such a program would likely remain a secret (barring a whistle-blower or leaks to the press by government officials). Summing it up, he stated that:

Whether [a] vendor to a carrier to the public cooperates with agencies (either for a fee or by acquiescence in an order), is something you will not find out as FISA makes it so, regardless of whether the person is in the U.S. or communicating with a person abroad. Such means and methods largely are hidden.

However, if the existence of such a program were ever confirmed, Ohm said that Congress would not be too happy:

If [the sharing of data by niche telecom providers] is seen as allowing an end-around an otherwise clear prohibition in the SCA, Congress is likely to throw a fit when it is revealed and try to amend the law. DOJ is sensitive to this kind of thing (despite what the NSA wiretapping program would lead you to believe) and would probably try to avoid blatantly bypassing otherwise clear language in this way.
http://news.cnet.com/8301-13739_3-10030134-46.html





New court decision affirms that 4th Amendment protects location information

Government Must Get a Warrant Before Seizing Cell Phone Location Records

San Francisco - In an unprecedented victory for cell phone privacy, a federal court has affirmed that cell phone location information stored by a mobile phone provider is protected by the Fourth Amendment and that the government must obtain a warrant based on probable cause before seizing such records.

The Department of Justice (DOJ) had asked the federal court in the Western District of Pennsylvania to overturn a magistrate judge's decision requiring the government to obtain a warrant for stored location data, arguing that the government could obtain such information without probable cause. The Electronic Frontier Foundation (EFF), at the invitation of the court, filed a friend-of-the-court brief opposing the government's appeal and arguing that the magistrate was correct to require a warrant. Wednesday, the court agreed with EFF and issued an order affirming the magistrate's decision.

EFF has successfully argued before other courts that the government needs a warrant before it can track a cell phone's location in real-time. However, this is the first known case where a court has found that the government must also obtain a warrant when obtaining stored records about a cell phone's location from the mobile phone provider.

"Cell phone providers store an increasing amount of sensitive data about where you are and when, based on which cell towers your phone uses when making a call. Until now, the government has routinely seized these records without search warrants," said EFF Senior Staff Attorney Kevin Bankston. "This landmark ruling is hopefully only the first of many. Just as magistrates across the country have begun denying government requests to track cell phones in real-time without warrants, based on arguments first made by EFF, so too do we hope this decision will spark new scrutiny of the government's unconstitutional seizure of stored cell phone location records."

The American Civil Liberties Union (ACLU), the ACLU Foundation of Pennsylvania, and the Center for Democracy and Technology (CDT) joined EFF's brief.
http://www.eff.org/press/archives/2008/09/11





YouTube Bans Terrorism Training Videos
AAP

Terrorist training videos will be banned from appearing on YouTube, under revised guidelines being implemented by the popular video-sharing site.

The Google-owned portal will ban footage that advertises terrorism or extremist causes, and supporters of the change hope it will blunt al-Qaeda's strong online media campaign.

The move comes after pressure on the internet search engine from Connecticut Senator Joseph Lieberman.

In addition to the ban on terror training videos, the new YouTube guidelines include bans on videos that incite others to commit violent acts, videos on how to make bombs, and footage of sniper attacks.

The internet has become a powerful tool for terrorism recruitment. What was once conducted at secret training camps in Afghanistan is now available to anyone, anywhere because of the web.

Chatrooms are potent recruitment tools, but counterterrorism officials have found terrorist-sponsored videos are also key parts of al-Qaeda's propaganda machine.

"It's good news if there are less of these on the web," FBI spokesman Richard Kolko said. "But many of these jihadist videos appear on different websites around the world, and any time there is investigative or intelligence value we actively pursue it."

How to slit throats

There have been online terror-training videos ranging from how to slit a victim's throat and how to make suicide vests to how to make explosives from homemade ingredients and how to stalk people and ambush them, said Bruce Hoffman, a counterterrorism expert and professor at Georgetown University.

Hoffman said he does not know whether the videos were posted on YouTube, but they have been available at other sites online.

A year ago, a Homeland Security Department intelligence assessment said: "The availability of easily accessible messages with targeted language may speed the radicalisation process in the homeland for those already susceptible to violent extremism."

Recognising the growing threat of radicalisation, Lieberman - the Democrat-turned-independent who chairs the Senate Homeland Security and Governmental Affairs Committee - asked Google to ban videos from al-Qaeda and other Islamist terror groups.

He said the private sector also has a role in protecting the United States from terrorists. By banning these videos on YouTube, "Google will make a singularly important contribution to this important national effort," Lieberman wrote to Google's chairman and chief executive, Eric Schmidt, in May.

Dealing with extremists

Representatives of Google and YouTube would not respond to questions about Lieberman's appeal.

Despite the move, there is debate among radicalisation experts over whether shutting down extremist sites is the most effective way to counter the threat.

They say keeping them online allows analysts and investigators to monitor what is being said and in some cases who is saying it.

"The reality is by shutting it down, it is more or less a game of whack-a-mole: it pops up somewhere else," said Frank Ciluffo, homeland security director at George Washington University.

However, he said, forcing extremists to find other ways to post videos could give officials a better opportunity to monitor them.
http://news.sbs.com.au/worldnewsaust..._videos_557508





Google Military-Controlled Satellite Reaches Orbit, We Don't Feel Lucky

According to the company, the GeoEye-1 satellite is the highest resolution commercial satellite orbiting the planet right now. It reached orbit yesterday, but in reality, it's not an ordinary commercial satellite: it's fully controlled by the Department of Defense's U.S. National Geospatial-Intelligence Agency. And two guys named Larry and Sergey.

Part of the US National Geospatial-Intelligence Agency's NextView program, the SUV-sized GeoEye-1 launched yesterday on a Delta II 7326 rocket from Vandenberg Air Force Base in California—without exploding. Hours later, GeoEye's ground station in Norway confirmed that the rocket had delivered its payload right on target. The satellite was alive, fully armed and operational in its 423-mile orbit above the Earth.

Built by General Dynamics, the GeoEye-1 is equipped with a next-generation camera made by ITT. This camera can easily distinguish objects 16 inches long, with 11-bit-per-pixel color. In other words: this thing can see the color of your shorts. It will be up there, passing overhead and looking at your pants day after day, and it will keep doing that for more than ten years, its expected life.

Of course, there's nothing new here until you notice the huge Google logo on the rocket, signaling the fact that Sergey and Larry own the exclusive rights to the GeoEye-1 images. Yes, no other company will be able to access this information, only Google. And the images will be there, available to the public in Google Maps and Google Earth.

But don't fret, tin-foil hatters: thanks to US government regulations, Google won't be able to access the highest-resolution images. Sure, the other guys will, but then again, their big bad satellites can see closer than this one. Still, you can rest assured that your underpants will be safe from public scrutiny. For now. Unless you do like me and keep flashing them around.
http://news.cnet.com/8301-1023_3-100...=2547-1_3-0-20





German Government Tells Citizens Not to Use Google Chrome

Germany's Federal Office for Information Security says that Google's new browser Chrome "should not be used for surfing the Internet." The problem, according to a translation from Blogoscoped, is that joined with email and search, Chrome gives Google too much data about its users. The government also said Chrome should be avoided because it's still in beta. Here's the real deal, though: Germans hate Google because, like Microsoft with Windows and Apple with iTunes, it's a big American company that's so popular it seems like a monopoly. For those keeping score at home — or trying to use the Web in Germany — that rules out Chrome, Apple's Safari, Internet Explorer and Mozilla's Firefox, which runs on Google money. What's left? The Opera browser, conveniently built in Europe.
http://valleywag.com/5046665/german-...-google-chrome





Google Tightens Data Retention Policy — Again
Miguel Helft

Under pressure from regulators, policymakers and privacy advocates around the world, Google said late Monday that it would further tighten its data retention policy. In its official blog, the company said it would “anonymize” search records after 9 months, rather than the current 18 months.

Google has always kept logs of all queries conducted on its search engine, along with IP addresses — digital identifiers linking those searches to specific computers and Internet browsers. Before last year, Google retained those logs indefinitely. But in March of 2007, the company said it would begin anonymizing those logs after 18 months. Other search companies quickly followed suit, unveiling their own, more privacy-friendly policies.

Google’s move in March 2007 did not please all privacy advocates, and clearly, it was not enough to placate regulators, especially in Europe. In its blog post, Google said it adopted the tighter rules reluctantly, as data retention allows it to offer a better service for users. “While we’re glad that this will bring some additional improvement in privacy, we’re also concerned about the potential loss of security, quality, and innovation that may result from having less data,” the company said. And the company suggested that a further shortening of its data retention period would do little to protect users’ privacy.

Chris Hoofnagle, a privacy expert and senior fellow at the Berkeley Center for Law and Technology said the new policy was in line with Google’s approach to privacy. “Google has a vision for privacy where individuals will not hesitate to share even sensitive personal information in exchange for access to good products and services,” he said. “Key to achieving that vision is the removal of consequences for liberal sharing of personal data. Shortening the identifiable storage time reduces the risk of unintended, unforeseen uses of the data.”
http://bits.blogs.nytimes.com/2008/0...ref=technology





Security Expert: Google anonymization not Anonymous Enough
Ryan Paul

In response to regulatory pressure, Google has announced a new data retention policy that reduces the duration that user IP addresses are stored in the company's logs. Google claims that IP addresses are now anonymized after nine months instead of 18 months.

Google's data retention policies have been a topic of significant contention. The company has faced enormous pressure from the European Commission's Article 29 workgroup, which is tasked with monitoring data protection issues. Google decided to implement the 18-month cycle for IP anonymization last year after receiving criticism from EU officials. Google's latest move to cut the retention period to 9 months appears to be similarly motivated. If so, it seems to have at least partly satisfied EU officials: Justice Commissioner Jacques Barrot told Reuters that the nine-month policy was "a good step in the right direction," even though it falls short of the EU's recommended retention period of six months.

Google also submitted to the Article 29 working group an open letter which explains in detail the reasons why Google believes that log data needs to be retained. According to Google, the logs are used to combat click fraud and search poisoning, to improve the overall quality of search results, and to detect abusive exploitation of search results. One example that Google cites is the recent Santy search worm, which used search queries to locate vulnerable targets. Google used the logs to identify Santy attack patterns and then implemented a filter to block them.

Google also discusses the privacy implications of ad-supported services relative to conventional commercial services. Google acknowledges that it uses the log data to provide contextually relevant advertisements in order to make its service financially sustainable. The company contends that this business model offers a higher level of protection for consumer privacy than a conventional subscription-based business model.

"Google's search business is offered to the public for free, and is thus inherently superior from a privacy perspective to paid services because it does not require users' real names, billing addresses, credit card numbers or mandatory tax and accounting records," Google wrote in its letter to the Article 29 working group. "To support this free service, Google primarily relies on being able to serve relevant advertising to its users."

Although Google touts its plans for log anonymization as a major win for consumer privacy, some critics—such as security researcher Chris Soghoian—believe that Google's anonymization practices are inadequate and that the company's public statements are misleading.

We asked Google to explain how they anonymize the logs and got a response explaining that the exact method hasn't been determined yet, but that it will probably involve randomizing a few bits of the IP.

"We are still working on figuring out the anonymization algorithm we will use. After nine months, we will likely change some of the bits in the IP address in the logs (we have not yet determined how many); after 18 months we remove the last eight bits in the IP address and change the cookie information," a Google spokesperson told Ars. "We have focused on IP addresses, because we recognize that users cannot control IP addresses in logs. On the other hand, users can control their cookies. When a user clears cookies, s/he will effectively break any link between the cleared cookie and our raw IP logs once those logs hit the 9-month anonymization point. Moreover, we are continuing to focus on ways to help users exert better controls over their cookies."

Soghoian argues that removing the last eight bits doesn't provide adequate protection. As he points out, each truncated IP value in the database after 18 months would be associated with queries from a theoretical maximum of 255 users.
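To see why, it helps to look at what "removing the last eight bits" of an IPv4 address actually does. The short Python sketch below is only an illustration of the general technique — Google had not yet settled on its exact algorithm — but zeroing the final octet collapses every address in the same /24 block onto a single value, an anonymity set of at most 2^8 = 256 addresses.

# Illustration of last-octet IP truncation (a sketch of the general
# technique; Google had not finalized its actual anonymization method).
import ipaddress

def truncate_last_octet(ip_string):
    """Zero the last 8 bits: 203.0.113.42 -> 203.0.113.0."""
    ip = ipaddress.IPv4Address(ip_string)
    return ipaddress.IPv4Address(int(ip) & 0xFFFFFF00)

if __name__ == "__main__":
    original = "203.0.113.42"          # example address, not a real user
    anonymized = truncate_last_octet(original)
    print(original, "->", anonymized)
    # Every address from 203.0.113.0 to 203.0.113.255 maps to the same
    # anonymized value, so one truncated log entry still narrows a query
    # down to at most 256 possible machines.
    print(2 ** 8, "addresses share the value", anonymized)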

Although Google's approach would effectively make it impossible to detect the actual IP address behind individual queries, anyone with access to the data could still potentially use patterns in the queries associated with groups of IP addresses to ascertain the likely identity of the user behind a portion of them, in much the same way that attackers were able to do so with the search data that was accidentally leaked by AOL in 2006.

Google may have deflected a regulatory smack-down in the short term, but its privacy practices are still not up to the same standards as its competitors'. Microsoft, for instance, removes the entire IP address and all other identifiers after 18 months, and Ask.com launched a new feature last year that allows users to search anonymously.
http://arstechnica.com/news.ars/post...us-enough.html





Google Claims License to User Content in Multiple Products

Google asserts a right to use content from users in the terms of use for several of its products.
Grant Gross

Google last week removed some language in its Chrome browser's terms of service that gave the company a license to any material displayed in the browser, but that language remains in several other Google products, including its Picasa photo service and its Blogger service.

The language came from Google's universal terms of service, the default license agreement for Google products.

The provision in the license agreement states that Google users retain the copyright to the content they post into a Google product, but then says, "By submitting, posting or displaying the content you give Google a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive licence to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display."

Similar language exists in terms of service for Picasa, Blogger, Google Docs and Google Groups. It does not exist in Gmail and no longer exists in Chrome.

The provisions raise security as well as privacy questions, said Randy Abrams, director of technical education at Eset, a cybersecurity vendor. "I wouldn't do anything that was personally sensitive or security-sensitive with most any Google product," he said.

Google removed the language from Chrome one day after launching the browser, following an outcry about the copyright implications of it asserting a license for everything posted or displayed in the browser. In some cases, Google needs the license to display content, Mike Yang, Google's senior product counsel, said in a blog post.

"To be clear: our terms do not claim ownership of your content -- what you create is yours and remains yours," Yang wrote. "But in lawyer-speak, we need to ask for a 'license' (which basically means your permission) to display this content to the wider world when that's what you intend."

Several critics questioned whether that language was appropriate in other applications provided by Google.

Google seems to be going in two different directions with these licensing terms, Abrams said. "One thing is to abide by their 'do no evil' creed, but also claim as many rights as possible," he said. "This is a typical corporate response: Try to get as much as you can and back off if forced to."

Yang also noted that similar language exists in the terms of service at several Web sites, including Amazon.com, eBay and Facebook. And the next sentence in that copyright provision limits what Google could do with a picture posted on the Picasa service or a blog post in Blogger, he said. It reads as follows: "This licence is for the sole purpose of enabling Google to display, distribute and promote the Services and may be revoked for certain Services as defined in the Additional Terms of those Services."

Yang, in an interview, said terms of service that may be needed for a Web site to display content may not be appropriate for other applications. Google "goofed" in putting the copyright language in Chrome, and the company is reviewing that copyright language in some of its other products, he said.

"There's no intent on our part to assert any sort of license for all the stuff users push to and from the Internet," Yang said. "[The universal terms of service] is a pretty broad license, but only to the extent that we need it to provide you with the services."

However, the copyright terms that still exist in Picasa, Blogger and other Google applications would allow the company to use its customers' content to promote the Google service. That could allow Google to use the content in live product demonstrations, for example, or in some promotional materials, Yang said.

Asked whether Google could take user content and use it in an advertising campaign without their permission, Yang said internal Google policies would probably prevent the company from doing it. Google wouldn't sell user content without permission, he added.

Andrew Flusche, a Fredericksburg, Virginia, lawyer who focuses on copyright and other issues, questioned how internal Google policy would guarantee protection of the end-users.

"Google's internal policy can change any time; it's their policy," Flusche said. "The only protection users have is what the EULA [end user license agreement] says."

The user agreement could allow Google to "publish a full-color book of Picasa photos as a promotional product," he added.

Google is correct when it says many Web sites have similar copyright provisions, he added. "But that doesn't mean anything," Flusche said. "The terms are still unfavorable to users; that's the dynamic of a huge corporation and millions of end-users."

However, Google would most likely be careful with its use of user content to promote its products, given that there's little upside in doing so, said Josh King, vice president for business development and general counsel at Avvo.com, a legal advice site.

"While the rights they've reserved themselves are very broad, it's probably a case of their actual practice being more conservative," King said. "We just have to hope they maintain their stance of not being evil."
http://www.pcworld.idg.com.au/index.php?id=655778381





Google to Digitize Newspaper Archives
Miguel Helft

Google has begun scanning microfilm from some newspapers’ historic archives to make them searchable online, first through Google News and eventually on the papers’ own Web sites, the company said Monday.

The new program expands a two-year-old service that allows Google News users to search the archives of some major newspapers and magazines, including The New York Times, The Washington Post and Time, that were already available in digital form. Readers will be able to search the archives using keywords and view articles as they appeared originally in the print pages of newspapers.

Under the expanded program, Google will shoulder the cost of digitizing newspaper archives, much as the company does with its book-scanning project. Google angered some book publishers because it had failed to seek permission to scan books that were protected by copyrights. It will obtain permission from newspaper publishers before scanning their archives.

Google, based in Mountain View, Calif., will place advertisements alongside search results, and share the revenue from those ads with newspaper publishers.

Initially, the archives will be available through Google News, but the company plans to give newspapers a way to make their archives available on their own sites.

“This is really good for newspapers because we are going to be bringing online an old generation of contributions from journalists, as well as widening the reader base of news archives,” said Marissa Mayer, vice president for search products and user experience at Google.

But many newspaper publishers view search engines like Google as threats to their own business. Many of them also see their archives as a potential source of revenue, and it is not clear whether they will willingly hand them over to Google.

“The concern is that Google, in making all of the past newspaper content available, can greatly commoditize that content, just like news portals have commoditized current news content,” said Ken Doctor, an analyst with Outsell, a research company.

Google said it was working with more than 100 newspapers and with partners like Heritage Microfilm and ProQuest, which aggregate historical newspaper archives in microfilm. It has already scanned millions of articles.

Other companies are already working with newspapers to digitize archives and some sell those archives to schools, libraries and other institutions, helping newspapers earn money from their historical content.

The National Digital Newspaper Program, a joint program of the National Endowment for the Humanities and the Library of Congress, is creating a digital archive of historically significant newspapers published in the United States from 1836 to 1922. It will be freely accessible on the Internet.

Newspapers that are participating in the Google program say it is attractive.

“We wouldn’t be talking about digitization if Google had not entered this arena,” said Tim Rozgonyi, research editor at The St. Petersburg Times. “We looked into it years back, and it appeared to be exceedingly costly.”

Mr. Rozgonyi said that the newspaper might be able to generate additional revenue from the digital archives by producing historical booklets or commemorative front pages. But he said that increasing sales was not the primary objective of the digitization program.

“Getting the digitized content available is a wonderful thing for people of this area,” he said. “They’ll be able to go to our site or Google’s and tap into 100 years of history.”

Pierre Little, publisher of The Quebec Chronicle-Telegraph, which has been published since 1764 and calls itself “North America’s Oldest Newspaper,” said many readers visit the newspaper’s Web site to look for obituaries and conduct research on their ancestors.

“We could envision that thousands of families would be attracted to our archives to search for people who came over to the New World,” Mr. Little said. “We hope that will be a financial windfall for us.”

Brad Stone contributed reporting.
http://www.nytimes.com/2008/09/09/te.../09google.html





New E-Newspaper Reader Echoes Look of the Paper
Eric A. Taub

The electronic newspaper, a large portable screen that is constantly updated with the latest news, has been a prop in science fiction for ages. It also figures in the dreams of newspaper publishers struggling with rising production and delivery costs, lower circulation and decreased ad revenue from their paper product.

While the dream device remains on the drawing board, Plastic Logic will introduce publicly on Monday its version of an electronic newspaper reader: a lightweight plastic screen that mimics the look — but not the feel — of a printed newspaper.

The device, which is unnamed, uses the same technology as the Sony eReader and Amazon.com’s Kindle, a highly legible black-and-white display developed by the E Ink Corporation. While both of those devices are intended primarily as book readers, Plastic Logic’s device, which will be shown at an emerging technology trade show in San Diego, has a screen more than twice as large. The size of a piece of copier paper, it can be continually updated via a wireless link, and can store and display hundreds of pages of newspapers, books and documents.

Richard Archuleta, the chief executive of Plastic Logic, said the display was big enough to provide a newspaperlike layout. “Even though we have positioned this for business documents, newspapers is what everyone asks for,” Mr. Archuleta said.

The reader will go on sale in the first half of next year. Plastic Logic will not announce which news organization will display its articles on it until the International Consumer Electronics Show in Las Vegas in January, when it will also reveal the price.

Kenneth A. Bronfin, president of Hearst Interactive Media, said, “We are hopeful that we will be able to distribute our newspaper content on a new generation of larger devices sometime next year.” While he would not say what device the company’s papers would use, he said, “we have a very strong interest in e-newspapers. We’re very anxious to get involved.”

The Hearst Corporation, the parent of Hearst Interactive Media, owns 16 daily newspapers, including The Houston Chronicle, The San Antonio Express and The San Francisco Chronicle, and was an early investor in E Ink. The company already distributes electronic versions of some papers on the Amazon Kindle.

Newspaper companies have watched the technology closely for years. The ideal format, a flexible display that could be rolled or folded like a newspaper, is still years off, says E Ink. But it foresees color displays with moving images and interactive clickable advertising coming in only a few more years, according to Sriram K. Peruvemba, vice president for marketing for E Ink.

E Ink expects that within the next few years it will be able to create technology that allows users to write on the screen and view videos. At a recent demonstration at E Ink’s headquarters here, the company showed prototypes of flexible displays that can create rudimentary colors and animated images. “By 2010, we will have a production version of a display that offers newspaperlike color,” Mr. Peruvemba said.

If e-newspapers take off, the savings could be hefty. At The San Francisco Chronicle, for example, print and delivery amount to 65 percent of the paper’s fixed expenses, Mr. Bronfin said.

With electronic readers, publishers would also learn more about their readers. With paper subscriptions, newspapers know what address has received a copy and not much else. About customers picking up a copy at the newsstand, they know nothing.

With an electronic device, newspapers can determine who is reading the paper, and even which articles are being read. Advertisers would be able to understand their audience and direct advertising to their likeliest customers.

While this raises privacy concerns, “these are future possibilities which we will explore,” said Hans Brons, chief executive of iRex Technologies in Eindhoven, the Netherlands.

IRex markets the iLiad, an 8.5 by 6.1-inch electronic reader that can be used to receive electronic versions of the newspaper Les Echos in France and NRC Handelsblad in the Netherlands.

The iRex, Kindle and eReader prove the technology works. The big question for newspaper companies is how much people will pay for a device and the newspaper subscription for it.

Papers face a tough competitor: their own Web sites, where the information is free. And they have trained a generation of new readers to expect free news. In Holland, the iLiad comes with a one-year subscription for 599 euros ($855). The cost of each additional year of the paper is 189 euros ($270). NRC offers just one electronic edition of the paper a day, while Les Echos updates its iRex version 10 times a day.

A number of newspapers, including The New York Times, offer electronic versions through the Kindle device; The Times on the Kindle costs $14 a month, similar to the cost of other papers. “The New York Times Web site started as a replica of print, but it has now evolved,” said Michael Zimbalist, vice president for research and development operations at The New York Times Company. “We expect to experiment on all of these platforms. When devices start approximating the look and feel of a newspaper, we’ll be there as well,” Mr. Zimbalist said.

Most electronic reading devices use E Ink’s technology to create an image. Unlike the liquid-crystal displays of computer monitors and televisions, electronic paper does not need a backlight, keeps its image even when the power source runs down, and looks brighter, not dimmer, in strong light. It also draws little power from the device’s battery.

Plastic Logic’s first display, while offering a screen size that is 2.5 times larger than the Kindle’s, weighs just two ounces more and is about one-third the Kindle’s thickness.

It uses a flexible, lightweight plastic, rather than glass, a technology first developed at Cambridge University in England. Plastic Logic, based in Mountain View, Calif., was spun off from that project.
http://www.nytimes.com/2008/09/08/technology/08ink.html





Paper Concedes Outdated Link
Bloomberg News

The Tribune Company said Tuesday that a link to the six-year-old article on the UAL Corporation’s 2002 bankruptcy filing had appeared on the South Florida Sun-Sentinel’s Web site before another news organization mistakenly presented the article as new.

Traffic in the newspaper’s database pushed a link to the old article to the most-viewed section of the Web site’s business page early on Sept. 7 and it was picked up by a Google search agent, Tribune said.

Tribune said on Monday that the article had never appeared on the Web site. An erroneous report from Income Securities Advisors Inc. caused a 76 percent drop in shares of UAL, the parent of United Airlines, before trading was halted Monday. An Income Securities summary appeared on the Bloomberg terminal, and Bloomberg News published its own headline before correcting it. UAL, based in Chicago, issued a statement Monday to assure investors it had not filed for bankruptcy.

United demanded a retraction from The Sun-Sentinel and said it was beginning an investigation. Tribune owns The Chicago Tribune and The Sun-Sentinel, which is based in Fort Lauderdale, Fla.
http://www.nytimes.com/2008/09/10/bu...dia/10ual.html





London Stock Exchange Crippled by System Outage
Daisy Ku and Dominic Lau

The London Stock Exchange (LSE.L: Quote, Profile, Research, Stock Buzz) suffered its worst systems failure in eight years on Monday, forcing the world's third largest share market to suspend trading for about seven hours and infuriating its users.

The problem occurred on what could have been one of London's busiest trading days of the year, as markets rebounded worldwide following the U.S. government's decision to bail out mortgage companies Fannie Mae (FNM.N: Quote, Profile, Research, Stock Buzz) and Freddie Mac (FRE.N: Quote, Profile, Research, Stock Buzz).

"We have the biggest takeover in the history of the known world ... and then we can't trade. It's terrible," one trader said.

The Johannesburg Stock Exchange, which uses the LSE's trading platform TradElect, also suspended trading.

"This halt today clearly has once again damaged (the LSE's) reputation as a leading exchange, especially on a day like today, highlighting that it may have been unable to handle the volumes this morning," added another trader.

The exchange would not say whether volume was the issue and declined to give details on what had caused the problem. But angry customers were demanding an explanation.

"We want answers as to how this happened in the first place and reassurances that it will not happened again," said Angus Rigby, chief executive of brokerage TD Waterhouse.

The LSE, the world's number-three exchange by traded volume in the first half of this year, opened for trading as usual at 0700 GMT, but connectivity problems left some brokers unable to trade. It was then forced to suspend trading to ensure some market players were not disadvantaged.

The Exchange finally got trading going at 1500 GMT -- half an hour before it was due to close.

"We had to sit on our hands and wait for fragmented and at times ambiguous announcements as to when the LSE would be up and running ... It's ironic that since 2:30 today we've been able to trade, and get a fair market value on Barclays or any other ADR (American depository receipt) on the New York Exchange, but not on the LSE," said Rigby.

The UK Financial Services Authority, in its Financial Risk Outlook 2008 report, says the risk of such infrastructure failures is growing with the rise of electronic trading and straight-through processing.

Systems Upgrades

The LSE plans a series of system upgrades and is migrating Italian equities to its trading platform TradElect this month.

Monday's trading suspension was the longest suffered by the exchange since April 5, 2000, when problems with an older trading system led to an eight-hour suspension.

On June 17, the Milan Stock Exchange, which the LSE acquired in October 2007, suspended trading due to technical difficulties. On November 7 last year the LSE itself experienced a connectivity problem with its real-time market data system Infolect which connects to TradElect.

The outage came at an embarrassing time as the LSE fights new entrants. In a letter to the Financial Times on Monday, LSE Chief Executive Clara Furse defended the exchange's position, describing TradElect, which the bourse introduced last year, as "the cutting edge".

Nasdaq OMX Europe, a cash equity platform set up by transatlantic exchange group Nasdaq OMX (NDAQ.O: Quote, Profile, Research, Stock Buzz) to rival European bourses such as LSE and Deutsche Boerse (DB1Gn.DE: Quote, Profile, Research, Stock Buzz), will start on September 26. NYSE Euronext (NYX.N: Quote, Profile, Research, Stock Buzz)(NYX.PA: Quote, Profile, Research, Stock Buzz) said it will launch a pan-European market in November.

The LSE faces growing competition from new entrants and its share price has fallen sharply as a result this year.

Turquoise, a cash equities trading venue backed by nine investment banks, as well as Chi-X Europe, owned by Nomura (8604.T: Quote, Profile, Research, Stock Buzz) and investment banks, are both gaining market share. Both said on Monday they were trading normally.

The LSE outage coincided with a system failure at the Intercontinental Exchange (ICE.N: Quote, Profile, Research, Stock Buzz) (ICE), which shut trade across London commodity markets for more than an hour. According to an ICE official there was no apparent link between the two.

(Additional reporting by Simon Falush and David Sheppard; Editing by Andrew Callus and David Holmes)
http://www.reuters.com/article/ousiv...01084620080908





Exclusive Interview: Microsoft Admits What Went Wrong with Vista, and How They Fixed It
Will Smith

We sat down with Microsoft to hear the company’s side of the Vista story. What lessons have been learned following the worst Windows launch in the company’s history? Is Microsoft doing enough to regain PC users’ faith?

Way back in January 2007, after years of hype and anticipation, Microsoft unveiled Windows Vista to a decidedly lukewarm reception by the PC community, IT pros, and tech journalists alike. Instead of a revolutionary next-generation OS that was chock-full of new features, the Windows community got an underwhelming rehash with very little going for it. Oh, and Vista was plagued with performance and incompatibility problems to boot.

Since then, the PC community has taken the idea that Vista is underwhelming and turned it into a mantra. We’ve all heard about Vista’s poor network transfer speeds, low frame rates in games, and driver issues—shoot, we’ve experienced the problems ourselves. But over the last 18 months, Vista has undergone myriad changes, including the release of Service Pack 1, making the OS worth a second look. It’s time we determine once and for all whether we should stick with XP for the next 18 months while we wait for Windows 7. But before we answer that question, let’s review exactly what’s wrong with Windows Vista.

What Went Wrong with Vista’s Launch?

We’ve seen worse launches over the years, but not from a multibillion-dollar product that was a half-decade in the making. Here are the seven biggest contributors to Vista’s dud of a debut

Instability

At launch, we complained that Vista was significantly less stable than its predecessor. We experienced more hard locks, crashes, and blue screens in the first weeks of use than we had in the entire year prior using XP. Sadly for Microsoft, our experience was shared by many early Vista users.

The problems weren’t limited to high-end, bleeding-edge hardware, either. People with pedestrian, nonexotic hardware configs reported crashes, instability, and general wonkiness with Vista on laptops and desktops, in homebuilt rigs and OEM machines, and in PCs that originally shipped with XP. Considering that improved stability was one of the biggest promises Microsoft made for Vista, users were understandably upset.

Incompatibility

Microsoft didn’t make any big promises about application compatibility, and it’s a damn good thing. If a desktop application didn’t follow Vista’s rules for behavior, Vista wouldn’t let it run. The program would fail to load, crash on use, or eat the user’s data, depending on the development infraction. And to be clear, we’re not talking about shareware apps created by some dude in his basement, we’re talking about Acrobat Reader, iTunes, Trillian, and dozens of other programs, not even counting the antivirus programs that are rarely compatible with a new OS.

Getting hardware working could be just as challenging. If you had one of the millions of perfectly serviceable, but suddenly incompatible printers or scanners, you probably felt pretty raw. We know we did.

Additionally, if you needed to connect to a VPN (virtual private network) that wasn’t supported by Vista’s built-in client, you were probably out of luck. Vista shipped without support from major VPN vendors, including Cisco, leaving work-at-home types out in the cold.

The massive number of compatibility problems ensured that every user would be touched by at least one disappointment.

Performance

We would expect a new version of Windows to be slower than the previous one, given immature drivers and new features that drain CPU cycles and absorb memory. However, the performance differential has always been less than 10 percent in the past and only really evident in hardware-intensive apps, such as games.

At Vista’s launch, our tests revealed worse-than-expected performance in many different tasks and applications. Gaming performance suffered notably; using drivers from the launch time frame, our tests showed as much as a 20 percent performance difference between Vista and XP on the same machine. But that wasn’t the worst of it.

Even common tasks suffered. Large network file transfers took a ludicrous amount of time, even on systems hardwired to gigabit networks. On affected machines, Vista could take days to transfer a full gigabyte of data! While that was a worst-case scenario, many users complained that file transfers took twice as long to complete in Vista as in XP.

User Account Control

Vista brought marked improvements to the overall security of Windows, one of the few areas in which the OS actually lived up to Microsoft’s promises. Unfortunately, one of the mechanisms that helps enable that security comes at a high cost—it’s incredibly annoying.

That’s right, we’re talking about User Account Control, aka UAC. Even if you don’t know what it’s called, if you’ve used Vista, you’re undoubtedly aware that you need to prepare your clicking finger when the desktop darkens and your trusty PC starts asking whether you really meant to install that application you just double-clicked. UAC prompts you whenever an app tries to write to an area of your hard disk or registry that Windows finds suspicious. This seems like a good thing, right? It would be, except UAC prompts every time the installer does something suspicious. We’ve had Vista prompt us no fewer than five times before completing installs it questioned.

The problem is compounded by the fact that those five prompts look and behave differently, even though they’re all asking for basically the same thing: permission to write to a protected area of your system. To make matters even worse, none of the UAC prompts actually tells power users what the app is doing. When you click that Allow button, all you’re doing is adding a speed bump to whatever malware you might be installing.

Executed properly, UAC could have been a savior for people wont to install every application they find. Unfortunately, the UAC prompts quickly become so annoying that most users either disable them (the power-user option) or mindlessly click Allow (the mom option).
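For the curious, here is roughly what that elevation dance looks like from a program's point of view. The following is a minimal Python sketch of our own, not anything Microsoft ships: it uses the stock Win32 calls IsUserAnAdmin and ShellExecuteW (via ctypes) to check whether a process is already elevated and, if not, to relaunch itself with the "runas" verb, which is what raises the darkened-desktop UAC consent prompt the article describes.

import ctypes
import sys

def is_elevated() -> bool:
    # True if this process already has administrator rights.
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        # ctypes.windll only exists on Windows; elsewhere the question doesn't apply.
        return False

def relaunch_elevated() -> None:
    # The "runas" verb asks the shell to restart this script elevated,
    # which is what triggers the UAC consent dialog on Vista.
    params = " ".join('"{}"'.format(arg) for arg in sys.argv)
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, params, None, 1)

if __name__ == "__main__":
    if is_elevated():
        print("Elevated: writes to protected folders and registry keys will now succeed.")
    else:
        print("Not elevated: asking UAC for consent...")
        relaunch_elevated()

Run it from a standard account and you'll see both branches; the point is simply that any write to a protected location has to pass through that one consent step, which is why a chatty installer can trip it several times in a row.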

Activation

Activation has been a hassle since Microsoft first included it with Windows XP. Microsoft’s never really honored its stated 90-day limit for discarding activation information either. After installing the OS once or twice, you inevitably have to call some poor sap manning the activation hotline to enable Windows. What bothers us about Vista is the inclusion of the Windows Genuine Advantage software, which periodically checks in with Microsoft to ensure that the copy of Windows you’ve already activated remains genuine.

That’s all well and good, unless something confuses WGA. Unfortunately, just about everything confuses WGA. It could be something as simple as a BIOS reset that sets the clock back a few years. Or it could be that Microsoft’s entire activation process shuts down for a few hours—like it did last August. But at least Microsoft curbs piracy of Vista and other activated software by treating its customers like criminals, right? Well, not so much. Hacked versions of Vista that simply bypass activation are available on BitTorrent sites around the world.

Version Overload

In the old days, there were two distinct versions of Windows: one for home users and one for corporate users. For home, you bought Windows 98; IT departments bought Windows NT (at least the serious ones did). With Windows XP, this trend continued, despite the fact that both the home and enterprise OSes used the same core.

With Vista, the old home and enterprise distinctions went out the window, as Microsoft added three more versions of Windows, removing crucial features like the 3D UI from the low-end release and forcing power users who want access to both work-friendly and enthusiast features to shell out for the $400 Ultimate edition. To help justify that exorbitant price, Microsoft promised Ultimate Extras, the first of which didn’t materialize until months after launch, and those that did appear were disappointing. A bad Texas Hold ’Em game, a backup utility that should have been included in every box, and support for other languages do not “ultimate extras” make.

Oh, and if you used Windows XP Professional at home and wanted to upgrade to a less-expensive home version of Vista, you were out of luck. The only upgrade path that worked from XP Pro to anything with Media Center capability was the spendy Ultimate edition.

‘One More Thing’

If the last eight years of watching Steve Jobs smugly introduce “one more thing” have taught us anything, it’s that no matter how technically sound (or alternately, how fatally flawed) a product is, every major release desperately needs one or two supersexy features to incite lust in geeks everywhere. Every time Jobs rolls out a new product, he teases the audience with a feature or two that you simply cannot wait to use. These features not only leave customers clamoring for the new product but also give those pesky users sitting on the fence a rationale for upgrading. While Vista had the technical chops in the form of the Aero renderer to deliver some potentially astounding apps, Microsoft’s best effort was Flip3D—a gimpy knock-off of a feature that OS X implemented infinitely better.

Aside from that, most of the apps included with Vista are rote updates of their forebears—from Movie Maker to Photo Gallery. There’s very little that’s new, even when the apps themselves are brand-new (see Windows Mail). Worse than nothing new, there’s not much in a stock Windows install to inspire anyone—even the stereotypical dullard PC user.

An Exercise in Angering Potential Customers: DirectX 10

Vista was supposed to mark the launch of a new revolution in PC gaming, spearheaded by the full might of Microsoft as manifested in the Games for Windows initiative. With promises of everything from a fully fledged online matchmaking experience (a la Xbox Live) and easier installations to (most importantly) a host of killer AAA titles, Games for Windows looked poised to really challenge console dominance and modernize the PC as a gaming platform.

What Games for Windows actually did was tie the DirectX 10 API to Vista simply to drive sales of the OS. The first Vista-exclusive AAA Games for Windows title was a downright geriatric port of Halo 2, a game that originated with the first Xbox and doesn’t use DirectX 10! To add insult to injury, there was no technical reason for a three-year-old ported Xbox game to be Vista-only. True, the community quickly released a patch that opened the door for XP gamers, but we still can’t understand who possibly thought this was a good idea.

Microsoft continued down the suicidal Vista-only path for one more release, Shadowrun. Despite innovative gameplay and cross-platform support for its Xbox counterpart, the Vista-only release was enough to doom FASA Interactive, the studio that created the game.

The Benchmarks

We take a quantitative look at Vista and XP performance to determine exactly what penalty, if any, you pay when you upgrade to Windows Vista

To test Vista versus XP performance, we built what we think is a fairly middle-of-the-road rig—an Intel Q6600 quad core with 2GB of memory and a GeForce 8800 GTS videocard. We then ran a battery of benchmarks in three different OS environments: XP with Service Pack 3, Vista sans Service Pack 1 (with modern Nvidia drivers installed), and Vista with SP1. Our tests measure everything from overall system performance to network speed to gaming prowess.

Overall Performance

Unsurprisingly, Windows XP remains faster in almost all of our standard system benchmarks. More noteworthy is how SP1 has improved Vista’s performance, narrowing the gap between that OS and XP in key tests and even allowing Vista to surpass XP in our MainConcept encoder test.

Unfortunately for Vista, our desktop benchmarks do reveal areas where Vista continues to suffer substantial performance hits compared to XP, namely in ProShow and Quake 4. We’ve talked to the ProShow developers, and they don’t know what causes the slowdown with their app in Vista, but they’re investigating. We attribute the Quake 4 performance hit to poor OpenGL drivers in Vista.

As we mentioned before, we’re perfectly willing to sacrifice a few percentage points of performance from an operating system upgrade. However, the difference between Vista SP1 and XP SP3 in ProShow and Quake 4 reaches a dismal 10 to 25 percent.

Gaming

We didn’t include any DirectX 10 games in our tests simply because no DirectX 10 games were around when Vista launched, and DirectX 10 graphics still aren’t supported on Windows XP. Our basic system benchmarks already include a pair of games, FEAR and Quake 4, but we tossed in an additional round of 3DMark06 to further assess Vista’s gaming prowess.

The results were informative. Aside from the already noted Quake 4/OpenGL deficiency, Vista performed admirably both with and without SP1, turning in scores equivalent to XP’s. This tells us that the poor gaming performance we saw in the early days of Vista was more the result of immature drivers than issues with the OS. Of course, Microsoft can still be blamed for shoddy coordination with the graphics-card makers at the time of Vista’s launch.

Network Transfer Speed

Our final set of benchmarks tests networking performance. We set up the fastest NAS box we’ve ever tested, the QNAP TS-109 Pro, and ran our standard network storage benchmarks on it. While we saw the same stunning performance inadequacies from pre-SP1 Vista that we observed at the OS’s launch, SP1 and the subsequent updates seem to have solved most of those issues. The minor gaps of a few seconds that do exist between XP and Vista SP1 are explained by the fact that XP closes the file transfer window before the transfer is confirmed, while Vista waits until it has checked the copied file.
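For reference, the timings in the table below are simple wall-clock measurements of how long a copy takes to complete. This Python sketch is our own illustration of the idea, not Maximum PC's actual benchmark harness; the share path and test files are placeholders you would point at your own NAS and data.

import shutil
import time
from pathlib import Path

# Placeholder paths, for illustration only; point these at a real share and real test data.
NAS_SHARE = Path(r"\\nas\benchmark")
TEST_FILES = {
    "small file set": Path(r"C:\bench\small_files"),
    "large file": Path(r"C:\bench\large.iso"),
}

def time_copy(src: Path, dst_dir: Path) -> float:
    # Wall-clock seconds until the copy call returns -- roughly the moment
    # a user would see the progress window close.
    start = time.perf_counter()
    if src.is_dir():
        shutil.copytree(src, dst_dir / src.name, dirs_exist_ok=True)
    else:
        shutil.copy2(src, dst_dir / src.name)
    return time.perf_counter() - start

if __name__ == "__main__":
    for label, path in TEST_FILES.items():
        print("{}: {:.1f} s to NAS".format(label, time_copy(path, NAS_SHARE)))

Measuring "until the call returns" is exactly why the XP and Vista numbers aren't perfectly comparable: the two operating systems declare a transfer finished at slightly different points.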

The Takeaway

With the exception of a couple of outlier applications, Vista’s performance is within striking distance of XP’s. Thanks largely to a series of performance enhancements and SP1, Vista has closed the gap in many areas where it was deficient. We’re willing to overlook the poor OpenGL gaming performance simply because there aren’t many OpenGL games coming out, and the ProShow problem appears to be an isolated incident.

Overall System Performance
(Windows XP SP3 / Windows Vista Launch / Windows Vista SP1)
Premiere Pro CS3 (sec): 924 / 960 / 960
Photoshop CS3 (sec): 133 / 136 / 139
ProShow (sec): 963 / 1214 / 1275
MainConcept (sec): 1881 / 1822 / 1814
Quake 4 (fps): 143.5 / 126.5 / 125.8
FEAR (fps): 65 / 65 / 65

Best scores are bolded. These are our standard system benchmarks, with one exception: we ran the games at 1920x1200 resolution, with 4x AA and 16x anisotropic filtering on FEAR, and no AA and no anisotropic filtering on Quake 4.

Gaming Performance
(Windows XP SP3 / Windows Vista Launch / Windows Vista SP1)
3DMark06 Game 1 (fps): 29 / 28 / 28
3DMark06 Game 2 (fps): 26 / 26 / 26

Network Transfer Speeds
(Windows XP SP3 / Windows Vista Launch / Windows Vista SP1)
Network - Small to NAS (sec): 38 / 48 / 43
Network - Small from NAS (sec): 39 / 68 / 42
Network - Large to NAS (sec): 139 / 181 / 144
Network - Large from NAS (sec): 140 / 271 / 142

Best scores are bolded. Test system consists of a stock-clock Q6700 processor on an EVGA 680i motherboard.

Microsoft Concedes Vista Launch Problems

Abandoning the pretense that Vista is the perfect OS, Microsoft reps sat down with us to discuss the OS’s problems in a (kind of) frank conversation

We were surprised when Microsoft reps agreed to discuss Vista’s launch problems and what the company has done to fix them: surprised not only that they were speaking to us at all, but that they agreed to answer our questions with candor. Our initial conversation occurred in June and set the stage for the article you’re reading. This dialogue also marked the first time in eight years that we had a private conversation with any Microsoft employee without a PR manager present.

The answers we got during this mid-June background conversation were brutally honest: Our source, a high-ranking Windows product manager, conceded that Microsoft botched the Vista launch. He added that the company’s biggest concern wasn’t the OS but rather the eroded faith in Microsoft’s flagship product among users of all types and experience levels.

Our conversation was refreshingly frank, and no topic appeared off limits. To wit:

• Our Microsoft source blamed bad drivers from GPU companies and printer companies for the majority of Vista’s early stability problems.
• He described User Account Control as poorly implemented but defended it as necessary for the continued health of the Windows platform.
• He admitted that spending the money to port DirectX 10 to Windows XP would have been worth the expense.
• He assailed OEM system builders for including bad, buggy, or just plain useless apps on their machines in exchange for a few bucks on the back end.
• He described the Games for Windows initiative as a disaster, with nothing more than 64-bit compatibility for games to show for years of effort.
• He conceded that Apple appeals to more and more consumers because the hardware is slick, the price is OK, and Apple doesn’t annoy its customers (or allow third parties to).

Yes, the June conversation was dazzlingly candid, and we were looking forward to an equally blunt follow-up meeting—a scheduled late-July on-the-record interview with Erik Lustig, a senior product manager responsible for Windows Fundamentals. But then the universe as we know it returned to normal, and Microsoft became Microsoft again. Our interview with Lustig was overseen by a PR representative and was filled with the type of carefully measured language that we’ve come to expect from Microsoft when discussing “challenges.” A “challenge” is Microsoftese for anything that isn’t going according to the company’s carefully choreographed plans. In the text that follows, we’ve combined the information conveyed during the mid-June background conversation with decoded translations of the “on the record” conversation we had in July. The contrast between the two interviews is stunning.

We herewith give you a snapshot of Microsoft’s take on Vista launch problems.

Stability

According to now-public internal Microsoft memos, 18 percent of all Vista crashes reported during the months immediately following its launch were due to unstable Nvidia graphics card drivers.

Microsoft has never issued any public comment concerning who’s to blame for the driver crashes, but during our background conversation, our source conceded that hardware OEMs were writing WDDM (Windows Display Driver Model) drivers for a moving target during Vista’s beta and release-candidate periods. Our source told us that because of low-level OS changes, hardware vendors didn’t have sufficient time to develop and test their drivers. This mirrors what Steven Sinofsky, the head of the Windows team, said in an interview with Cnet earlier this year: “The schedule challenges that we had, and the information disclosure weren’t consistent with the realities of the project, which made it all a much trickier end point when we got to general availability in January.”

Launch problems aside, once Vista is updated with SP1, it seems much more reliable than it was early on. The Maximum PC Lab isn’t equipped for long-term stability testing, but in our anecdotal experience, Vista’s stability problems are largely fixed, even on somewhat exotic hardware. Whether Vista is more stable than WinXP really depends on the actual hardware configuration you’re using more than anything else.

Compatibility

While discussing this story on background, Microsoft placed blame for incompatible software and hardware on its third-party partners. However, during our on-the-record chat, Lustig simply said, “I honestly don’t have the exact numbers for that,” in reference to the ratio of crashes attributed to Microsoft versus third-party entities.

Regardless, we’re well aware that Microsoft had been talking to hardware and software developers about Vista compatibility issues since the 2005 Meltdown, Microsoft’s annual gaming conference. At that conference, Microsoft informed game developers that they needed to write apps that behaved well, or they would face problems with Vista. The requirements were, for the most part, simple: caveats like not writing to C:\Program Files\ or C:\Windows\.
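To make the "behave well" rule concrete, here's a small Python sketch of our own (the application name is made up) showing the distinction Vista cares about: per-user data belongs under the user profile, not next to the executable under Program Files, where writes from a non-elevated process either fail or get quietly redirected by Vista's file virtualization.

import os
from pathlib import Path

APP_NAME = "ExampleApp"  # hypothetical application name, used purely for illustration

def good_settings_dir() -> Path:
    # Per-user location: writable without elevation, so saving settings never trips UAC.
    base = os.environ.get("APPDATA", str(Path.home()))
    path = Path(base) / APP_NAME
    path.mkdir(parents=True, exist_ok=True)
    return path

def bad_settings_dir() -> Path:
    # The old habit Vista punishes: writing next to the program itself.
    # From a standard-user process this fails or gets silently redirected
    # to the per-user VirtualStore, a common source of "missing settings" bugs.
    return Path(os.environ.get("PROGRAMFILES", r"C:\Program Files")) / APP_NAME

if __name__ == "__main__":
    print("Write per-user data here:", good_settings_dir())
    print("Not here:                ", bad_settings_dir())

Apps written to this convention ran cleanly on Vista from day one; apps that kept scribbling into their own install folder were the ones that broke.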

It’s also important to note a shameful truth that everyone in the PC industry is aware of but rarely discusses: When a new OS comes out, third-party vendors will often withhold compatibility support in order to drive sales of new units, turning the cost of supporting a new OS from a liability into a source of revenue. The same goes for software like antivirus utilities and some CD/DVD burning apps, both of which hook into the OS very closely.

Security

The statistics on Vista’s security record are clear: Vista is the most secure version of Windows to date. Nonetheless, Lustig said that Microsoft made “changes that have had some short-term ramifications that we’ve worked very hard the last year and a half, and through Service Pack 1, to address.” Some of these changes may have had unintended negative consequences, but Vista has suffered fewer security defects than any previous version of Windows. In short, sometimes you just have to give up flexibility for security. As Lustig told us, “I believe that those changes are going to be a fundamental basis for the integrity of the platform.” We agree.

Gaming Performance

During our initial June interview, Microsoft blamed unoptimized videocard drivers for poor gaming performance. To confirm this, we tested both the launch version of Vista and the post-SP1 version of Vista with current Nvidia drivers. Our gaming tests showed only the most negligible performance differences between the two OS builds, confirming that Vista itself was not to blame for early game performance issues. Rather, those earliest Vista videocard drivers were the culprits. Indeed, now 18 months after its launch, Vista’s performance is within striking distance of WinXP’s in almost every test we ran.

The Impact of SP1

Because Vista’s first Service Pack significantly improved the struggling OS, we were surprised that Microsoft didn’t tack a Second Edition label on it, a la Windows 98. Providing measurable improvements in performance and stability, Service Pack 1 should have been Vista’s saving grace. No? Lustig told us that despite significant improvements in most of Vista’s deficient areas, “there is a lot of leftover concern [about Vista] based on information folks have heard anecdotally.”

Quite an admission. Lustig continued, “The challenge for Microsoft isn’t necessarily continuing to take the feedback and improving the product—we’ve been doing that since launch and will continue to. The challenge is getting the message out that we’ve listened, we’ve made very positive changes, we’re seeing very positive results from the changes we’ve made, and there’s enough value in the product.”

Maximum PC's Final Word on Vista

After spending the last six weeks getting down and dirty with the OS—on multiple hardware configurations, in both 64-bit and 32-bit flavors, and on mobile and desktop systems—we’re willing to give it a second chance. There are still tons of things about the OS we’re not happy with—starting with the now-$350 Ultimate SKU and working down from there—but from a performance, stability, and security standpoint, we’re satisfied with where Vista is today. You no longer need to sacrifice performance or stability if you want to run the latest version of Windows.

If you already have Vista, there’s no reason not to use it, but should you go out and buy Vista today? Probably not. With Windows 7’s launch scheduled for early 2010, we’re actually closer to that date than we are to Vista’s launch. If you’ve ridden out the storm on XP so far, it probably isn’t worth investing in Vista for just a year and a half of use.
http://www.maximumpc.com/article/fea...failure_launch





The Jaunty Jackalope Hops Aboard Ubuntu’s Ark
Ashlee Vance

Microsoft has spent years battling an Apple. Now it must go up against a Jaunty Jackalope as well.

The Jaunty Jackalope moniker is the latest animal-themed name used by Canonical, a maker of open source software, to describe an upcoming version of Ubuntu, its flavor of the Linux operating system. Other names used for previous releases of Ubuntu have included Hardy Heron, Dapper Drake and Breezy Badger. While the names may seem silly, they reflect part of the culture that has helped Ubuntu become a legitimate player in both the desktop and server operating system markets.

The geek elite use Linux, which is an operating system built with open source software that serves the same basic functions as Microsoft’s Windows or Apple’s Mac OS X. Of late, members of that geek elite have tended to choose Ubuntu as their favorite version of Linux. (There are hundreds, if not thousands, of variations on Linux, each with its own collection of applications and features.)

In December, Google will host a developer conference around Ubuntu at its Mountain View, Calif., headquarters. One of the main topics of discussion should be Jaunty Jackalope, which will likely ship next April in final form.

Canonical expects this version of the operating system to boast improvements in the speed at which the software boots up. “Let’s see if we can make booting or resuming Ubuntu blindingly quick,” wrote Canonical’s chief executive Mark Shuttleworth, in a note to developers.

In addition, Canonical plans on catering to the “cloud,” where users tap applications stored on central servers rather than firing up something like Microsoft Office right on their desktop. Shuttleworth was very vague about how Canonical intends to ride the cloud but said the company is after “weblications.”

The rabid interest in Ubuntu by both software developers and technology managers has helped the operating system come out of nowhere to rival long-standing Linux operating systems built by Red Hat and Novell.

To be sure, Red Hat remains the dominant version of Linux picked up by large companies. But the grass-roots interest in Ubuntu has opened some doors for the operating system. For example, Google uses a customized version of the software called Goobuntu for internal operations. In addition, PC giant Dell now offers Ubuntu as an option on some desktop and laptop machines.

Besides Ubuntu’s popularity with the tech crowd, the South African-born Shuttleworth is a big reason for the software’s success.

In 1999, VeriSign bought Shuttleworth’s company, Thawte (pronounced “thought”), for $575 million. (Shuttleworth used $20 million of that money to purchase a trip to the International Space Station in 2002.)

With Shuttleworth’s fortune backing Canonical, the company can battle against giants such as Microsoft and Apple without fearing for its near-term survival.

Shuttleworth concedes that the goofy names (a jackalope, after all, is a mythical creature) are a personal indulgence.

“No excuse, I’m afraid,” he said in an interview conducted via e-mail. “I deserve the blame for this. The buck / drake / eft / fawn / heron / ibex / jackalope stops here, so to speak. We learned a while ago that our sanity depended on making the names alphabetical, so the next one will be the K* K* but beyond that, it’s not a sophisticated process.”

“K,” he added, “is going to be very, very hard.”

Beyond keeping the Canonical folks sane, you can argue that the names help separate Ubuntu from the crowd.

“I think anything that’s remotely entertaining keeps people interested,” said Dave Rosenberg, the co-founder of another oddly named open source player, MuleSource.

Canonical is expected to release a version of Ubuntu called Intrepid Ibex next month.
http://bits.blogs.nytimes.com/2008/0...d-ubuntus-ark/





Researchers Find Racism Translates to Virtual Worlds as Well
John Timmer

It's easy to develop a confusing picture of what goes on inside multiuser virtual worlds, such as Second Life and World of Warcraft. Some reports suggest that virtual reality enables people to escape from social interactions they otherwise find difficult; others highlight how users of virtual worlds find them satisfying because of the rich social interactions they enable. Some researchers at Northwestern University looked into just how well real-life social influences translate to the virtual realm and discovered one that does: racism.

The authors used two different instances of social manipulation that are known to work well in the real world. The first is the "foot in the door" (FITD) approach, in which a small, easily accomplished favor is asked. These tend to make the person who granted the favor happy about their cooperation, and more likely to agree to further requests, even if they require more effort.

The second method, called "door in the face" (DITF), accomplishes the same thing using a different approach. The initial request, instead of being easy to handle, involves an extensive effort on the part of the person asked. Usually, that request is declined, but it makes people more likely to agree to a further, less time-intensive request. Instead of being inwardly-focused, the DITF method depends largely on a person's perception of the individual or organization making the request; the more responsible and credible they seem, the more likely the second request will be agreed to.

The researchers added a second layer on top of these two methods of manipulation by using avatars with skin tones set at the two extremes of light and dark that the environment, There.com, allows. This let them check whether another pervasive social influence, racism, holds sway in the virtual world.

The tests involved the ability of There.com users to instantly teleport to any location in the game. The control condition, and the second request for both the FITD and DITF approaches, was a teleport to a specific location to take part in a screenshot. For FITD, the first, easy request was a screenshot in place. For DITF, the initial request involved a series of screenshots around the virtual world that might take as much as two hours.

A total of 416 There.com users were approached at random. Somewhat amusingly, about 20 of those approached for each test did something unexpected. For FITD, they simply teleported away before the question could be completed. Even more oddly, over 20 people agreed to spend a few hours taking screenshots with random strangers.

It turns out that social manipulation works just as well in virtual worlds as it does in the real one, with one very significant caveat. The FITD approach, which depends on people feeling good about themselves, increased cooperation on the second request from roughly 55 percent to 75 percent. DITF did even better, boosting the fraction of those who agreed to the second request to over 80 percent—but only if the avatar making the request was white. If that avatar was black, the response dropped to 60 percent, which was statistically indistinguishable from the control.
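As a rough sanity check on those figures, a standard two-proportion z-test tells you whether gaps of that size could plausibly be chance. The Python sketch below is our own illustration, not the authors' analysis; the per-condition group size of 80 is an assumption (the study reports 416 users spread across conditions, not exact cell counts), and the rates are the rounded percentages quoted above.

from math import sqrt, erfc

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    # Two-sided z-test for the difference between two agreement rates.
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability under the normal curve
    return z, p_value

# Group size of 80 per condition is an assumption for illustration;
# the rates are the rounded percentages reported in the study.
n = 80
control = int(0.55 * n)      # ~55 percent agreed in the control condition
ditf_white = int(0.80 * n)   # ~80 percent agreed after a white avatar's DITF request
ditf_black = int(0.60 * n)   # ~60 percent agreed after a black avatar's DITF request

print("White-avatar DITF vs control:", two_proportion_z(ditf_white, n, control, n))
print("Black-avatar DITF vs control:", two_proportion_z(ditf_black, n, control, n))

With numbers in that ballpark, the white-avatar gap comes out highly significant while the black-avatar gap does not, which lines up with the authors' description of the latter as statistically indistinguishable from the control.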

Since the DITF method depends on subjects' perception of the one doing the asking, the obvious conclusion is that black avatars are viewed as less appealing than white ones. The virtual world not only recapitulates social manipulation, but also social problems. The judgment directed towards the avatar's color is even more surprising, given that There.com allows its users to change their avatar's appearance instantly.

The authors don't seem to know whether to celebrate the finding, since it opens up new avenues for pursuing social research, or to condemn the fact that racism has been dragged from the real world into virtual ones. They recognize that there is an alternate interpretation, namely that people judge users for having chosen a black avatar rather than for being black, but they don't find that alternative any more appealing.
http://arstechnica.com/news.ars/post...s-as-well.html





Pot Users Share High Times Online
David Sarno

By the time we began the interview, Bong Rip had absorbed quite a bit of THC.

I’d been watching the 30-year-old host of “BongTV,” a live Internet show that features Mr. Rip traveling around the Southland in his ’88 Rolls-Royce limo, rapping with guests and friends — and smoking more pot than I thought was possible. On screen, he’d made short work of four big joints, demonstrated repeatedly and convincingly why his name is Bong Rip and otherwise had not gone three minutes without a quick lungful from his glass pipe.

I was surprised he could maintain consciousness, let alone speak. But this is what he does every day — live, on the Internet — from 4:20 p.m. to 4:20 a.m.

“It’s like a virtual party, right in your computer,” he told me with impressive coherence. “I have over 100 people watching that I take with me in my limousine — they don’t make a mess, they don’t cause any trouble and they don’t smoke my weed.”

“BongTV” has a small but dedicated following — Bong Rip calls them the Stoner Army. People watch the show on multi-way video chat services Stickam.com and UserPlane.com. Because it’s live, viewers get to chat, joke and toke with Bong Rip in real time. But a warning to those easily offended: This show exists only because there are no ratings on the Internet.

Die-hard fans are awarded ranks by Gen. Rip for being loyal, reading about marijuana legalization and helping him advertise his show around the Web. You can ascend from private to captain, be named senator or governor (OK, those two are not military ranks, but, geez, don’t kill Bong’s buzz!), and if you really impress him, he’ll make you a major.

Whenever Bong decides it’s time to light up, he calls out “420 in the chat room if you’re smoking!” Within moments, dozens of viewers have eagerly chimed in. “420!” they type, and they’re not just posturing: among the 10 or 20 viewers running their own webcams, a healthy number can be seen reaching for an implement and joining right in.

Once a largely invisible subculture, the pot community has harnessed online video and social networking to “come out of the grow closet” and into the open. Not to be left out of the Web 2.0 movement, like-minded users (that’s “users” in both senses) are taking advantage of the Internet to connect and socialize semi-anonymously, and from a distance.

“These are people who are really looking for a venue to express how much they love it, or love growing it or just like having it around,” said Dave Warden, who hosts “The Weed Report,” an online video magazine on which Warden visits Los Angeles dispensaries, glass galleries and his own home, reviewing oddly named strains of cannabis along the way. The snappily edited show is entertaining and even contains some pretty funny sketches by Warden, who was formerly the Gadget Guy on the DIY Network’s “Lawn Care Workshop” and has several Hollywood directing and producing credits to his name.

“The Weed Report” routinely scores tens of thousands of views on YouTube and other sites and even has its own bong-making company as a sponsor. The show’s modest success spurred Warden to create a kind of pot video social network, in which aficionados from several countries have uploaded 125 pot-related videos of various levels of sophistication (read: Most are nothing more interesting than people getting high).

Many of the videos on theweedreport.com come from Canada, where the legal climate is considerably more relaxed. “If there was a marijuana video war, the Canadians would be winning,” Warden said. Warden pointed to online pot-smoking shows like “Chronic604” (where a bunch of guys smoke in different places), “Baked in BC” (where a bunch of pretty girls do) and Pot TV, the leading online cannabis network.

Pot TV is a large repository of videos, including serial shows, pot documentaries and snippets of “real” TV segments on legalization and politics. The site is run by Greg “Marijuana Man” Williams and produced by Mark Emery, the publisher of Canada’s Cannabis Culture magazine and one of the pot world’s most celebrated and notorious figures. Emery, Williams and colleague Michelle Rainey were arrested in 2005 on drug trafficking and money laundering charges related to their online seed-selling business. The case has become a rallying point in the pot world since the trio was arrested in Canada for allegedly breaking U.S. law. They’re awaiting extradition hearings.

Williams, who hosts “The Grow Show” on Pot TV, agreed that the Web has been a watershed for pot culture. “The revolution may not be televised,” he quipped, “but it is alive and kicking on YouTube.

“People do this because they know what they are doing is not really going to hurt them or anyone who might follow their lead. If you have a law that needs constant enforcement and it is openly defied, it is a pretty good indication that the people do not want that law.”

Speaking of laws, one might wonder how illegal it is to post videos of oneself smoking pot — with or without a California prescription, which Warden and Bong Rip say they have.

Well, the Drug Enforcement Administration and the Los Angeles Police Department agree that, barring obviously felonious activity like dealing or possession of large amounts, this kind of stuff is small potatoes.

“If it’s just a person smoking marijuana in a residential-looking environment, there’s really no law violation there,” said LAPD Sgt. Kevin Kurzhals of the narcotics division. “We legally couldn’t just break down the door and do anything.”

Special Agent Sarah Pullen of the DEA offered a similar statement. “The DEA’s focus is to pursue those traffickers that have the biggest impact. We typically don’t go after that level of user.”

So it appears that our pot video stars are safe for now, which is a good thing for Bong Rip, who seems almost more addicted to broadcasting himself to the Stoner Army than he is to any banned substance. As my interview with Bong concluded, he offered me an unexpected bonus.

“What’s your first name again, man?”

“David ..... ”

“Dave, you are now officially a major in the Stoner Army as recognized by Bong Rip and the Stoner Army worldwide.”

Major Dave ..... I could get used to that.


Times staff writer Charlie Amter contributed to this report.
http://latimesblogs.latimes.com/webs...ers-share.html

















Until next week,

- js.



















Current Week In Review





Recent WiRs -

September 6th, August 30th, August 23rd, August 16th, August 9th, August 2nd

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black