P2P-Zone  

Old 15-05-13, 07:56 AM   #1
JackSpratts
 
 
Join Date: May 2001
Location: New England
Posts: 10,013
Default Peer-To-Peer News - The Week In Review - May 18th, '13

Since 2002
"The numbers in our investigation suggest that previously reported magnitudes in game piracy are too high." – Anders Drachen


"At least once a month my wife and I jump in our car and drive until cell service drops off and spend the weekend engaged with all things analog." – Evan Sharp


"Even we won’t be able to figure out where files sent to us come from. If anyone asks us, we won’t be able to tell them." – Amy Davidson

May 18th, 2013




BitTorrent Study Challenges Videogame Piracy Misconceptions
Olivia Solon

A large-scale analysis of BitTorrent file-sharing of videogames has shown that the number of illicit digital copies is not as high as reported by industry trade organisations.

Anders Drachen from the Department of Communication and Psychology at Aalborg University and the PLAIT Lab at Northeastern University, together with Robert Veitch from the Department of IT Management at Copenhagen Business School, analysed the file-sharing of some 173 computer games over a three-month period between 2010 and 2011.

They set out to study videogame piracy because "despite the substantial debate about digital game piracy, there is minimal objective information available about the relative magnitude of piracy, or its distribution across different countries nor across game titles or game genres". Both sides of the debate agree that game piracy is common, but the numbers vary dramatically between reports. The Entertainment Software Association claims that it had tracked almost 10 million illegal downloads of around 200 games in December 2009. Meanwhile TorrentFreak reported 18.14 million downloads for the five most downloaded PC games on BitTorrent in 2010, with a further 5.34 million downloads of the five most downloaded console games.

During the three-month period, the team tracked BitTorrent file sharing, looking at games for 14 different platforms including PC, Xbox 360, PS3, Wii, DS, iOS/Mac and PSP.

The BitTorrent protocol breaks files down into pieces and distributes them across the network. A metadata file contains information such as the tracker server's Uniform Resource Identifier (URI) and cryptographic hashes of the pieces; the tracker server lists all the users making the file available. BitTorrent search engines such as The Pirate Bay host the metadata files, providing a search capability for peers. People access files using client-side BitTorrent software, which then reassembles the different parts of the file.
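As a rough illustration of the metadata format described above, here is a minimal sketch (all field values are hypothetical) of bencoding a .torrent-style dictionary and deriving the torrent's SHA-1 "info hash", the identifier trackers and peers use to refer to a file:

```python
import hashlib

def bencode(value):
    # Minimal bencoder covering the four bencode types:
    # ints, byte strings, lists, and dicts (keys must be bytes, sorted).
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        return b"d" + b"".join(
            bencode(k) + bencode(v) for k, v in sorted(value.items())
        ) + b"e"
    raise TypeError(f"cannot bencode {type(value)}")

# A toy .torrent-style metadata dict (tracker URI and file name invented).
meta = {
    b"announce": b"http://tracker.example.org/announce",
    b"info": {
        b"name": b"example-game.iso",
        b"piece length": 262144,                       # 256 KiB pieces
        b"pieces": hashlib.sha1(b"piece0").digest(),   # SHA-1 per piece
        b"length": 262144,
    },
}

# The info hash is the SHA-1 of the bencoded "info" dict.
info_hash = hashlib.sha1(bencode(meta[b"info"])).hexdigest()
print(info_hash)
```

The info hash, not the file name, is what uniquely identifies a torrent to trackers and peers.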

The research team developed a custom web crawler that periodically issued queries to a BitTorrent search engine. Once it located the metadata files, the crawler obtained the tracker server URIs before issuing an HTTP "GET" request to obtain a list of IP addresses for peers participating in the sharing. From this they compiled a list of 173 game titles. Games being distributed legally via BitTorrent were not included. Nor were those torrents relating to a game that did not contain the full game. They took all of the torrent data and consolidated it to give the total number of peers for each game, their platform and the region.
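The tracker query step can be sketched as follows. This is a minimal illustration, not the researchers' actual crawler; the tracker URI and peer ID are invented, and the "response" is synthetic rather than fetched over the network. It builds the announce-style HTTP "GET" URL and decodes the compact peer list a tracker returns (six bytes per peer):

```python
import struct
from urllib.parse import quote_from_bytes, urlencode

def announce_url(tracker_uri, info_hash, peer_id, port=6881):
    # info_hash is the raw 20-byte SHA-1 digest and must be percent-encoded
    # byte-for-byte; the remaining parameters are plain key=value pairs.
    qs = "info_hash=" + quote_from_bytes(info_hash, safe="")
    qs += "&" + urlencode({
        "peer_id": peer_id, "port": port,
        "uploaded": 0, "downloaded": 0, "left": 0,
        "compact": 1,   # ask the tracker for the compact peer list format
    })
    return f"{tracker_uri}?{qs}"

def parse_compact_peers(blob):
    # Compact response: 6 bytes per peer -- 4 for the IPv4 address,
    # 2 for the port in network (big-endian) byte order.
    peers = []
    for i in range(0, len(blob), 6):
        ip = ".".join(str(b) for b in blob[i:i + 4])
        (port,) = struct.unpack(">H", blob[i + 4:i + 6])
        peers.append((ip, port))
    return peers

# Synthetic two-peer tracker response (no network traffic in this sketch).
blob = bytes([10, 0, 0, 1]) + struct.pack(">H", 6881) \
     + bytes([192, 168, 1, 5]) + struct.pack(">H", 51413)
print(parse_compact_peers(blob))

url = announce_url("http://tracker.example.org/announce",
                   b"\x12" * 20, "crawler-0001")
```

Collecting the IP addresses from such responses is what lets a crawler count peers per torrent without downloading any content itself.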

They found that 12.6 million unique peers from 250 regions were sharing illicit copies of games, including Fallout: New Vegas, Darksiders, Tron Evolution, Call of Duty: Black Ops, Starcraft 2 and The Sims 3: Late Night. The 10 most popular game titles during the period drove 42.7 percent of unique peers (5.37 million) on BitTorrent, and just 20 countries accounted for 76.7 percent of the total file-sharing activity. The most activity (relative to population size) came from Romania, Croatia, Greece, Portugal and Hungary.
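A quick arithmetic check shows the reported figures are internally consistent (values below are the article's, not new data):

```python
# Figures as reported: 12.6 million unique peers, 42.7 percent of which
# were driven by the 10 most popular titles.
total_peers = 12.6e6
top10_share = 0.427

top10_peers = total_peers * top10_share
print(round(top10_peers / 1e6, 2))  # close to the reported 5.37 million
```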

Drachen told Wired.co.uk: "This is definitely not a case of developing countries vs. industrial countries but much more diverse. While we can only speculate about the underlying causal factors of the patterns we observe, I suspect that they are pretty complex."

When looking at those games that made their debut on BitTorrent during the three-month period (as opposed to being there already), the team noticed that they experienced a rapid increase in popularity within the first few days, before gradually declining. This implies that any snapshot analysis of BitTorrent activity, where data are collected over a very short time-period, can be incredibly misleading -- any accurate figures must take into account the variations in activity over time.
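A toy illustration of why snapshots mislead, using made-up daily peer sets for a newly released title:

```python
# Hypothetical daily sets of peer IDs for a newly released title:
# a surge on release day, then a decline (purely illustrative values).
daily_peers = [
    {"a", "b", "c", "d", "e"},   # day 0: release surge
    {"c", "d", "e", "f", "g"},   # day 1
    {"f", "g", "h"},             # day 2
    {"h", "i"},                  # day 3: tail
]

snapshot = len(daily_peers[3])                # a one-day "snapshot" measure
cumulative = len(set().union(*daily_peers))   # unique peers over the window
print(snapshot, cumulative)                   # the snapshot badly undercounts
```

Depending on which day the snapshot is taken, the same title looks either wildly popular or nearly dead; only the deduplicated count over the whole window is stable.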

When it comes to the number of unique peers per game, it is evident that the most popular genres were RPGs (18.9 percent), action-adventure (15.9 percent), third-person shooters (12.7 percent) and racing (9.3 percent). The team also found a correlation between a game's review score on Metacritic and the volume of BitTorrent activity related to that game.
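Such a correlation can be computed as a Pearson coefficient; the sketch below uses invented score/peer pairs purely for illustration, not the study's data:

```python
import math

# Hypothetical (Metacritic score, unique peers) pairs -- illustrative only.
scores = [60, 70, 75, 82, 90]
peers = [40_000, 90_000, 120_000, 300_000, 550_000]

def pearson(xs, ys):
    # Pearson r: covariance divided by the product of standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(scores, peers), 2))
```

For these made-up numbers the coefficient comes out strongly positive, the same qualitative pattern the researchers report: better-reviewed games attract more file-sharing activity.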

Drachen said in a press release: "First and foremost, P2P game piracy is extraordinarily prevalent and geographically distributed [at least it was during the period analysed]. However, the numbers in our investigation suggest that previously reported magnitudes in game piracy are too high. It also appears that some common myths are wrong, e.g. that it is only shooters that get pirated, as we see a lot of activity for children's and family games on BitTorrent for the period we investigated."

The study notes that the industry is moving towards online platforms for major commercial and casual games in the hope that they will reduce piracy.

You can read the full study here.
http://www.wired.co.uk/news/archive/...t-gaming-study





Judge Dismisses Film Company's Lawsuit Against Local Defendants

Voltage Pictures said 34 people pirated 'Maximum Conviction'; district judge rules the defendants were unfairly lumped together
Sanne Specht

A federal judge last week took aim at "reverse class action lawsuits" while dismissing a Los Angeles-based movie company's lawsuit that claimed dozens of Jackson County "John Doe" defendants had pirated one of its movies off the Internet.

Salem attorney Carl Crowell in February filed a lawsuit on behalf of Voltage Pictures LLC in U.S. District Court in Medford, seeking up to $180,000 in damages from each of 34 defendants accused of pirating the 2012 movie "Maximum Conviction" off the Internet.

Statewide, Voltage alleged, nearly 600 people had participated in copyright infringement.

The company sought $30,000 for the alleged infringement, and an additional $150,000 from each defendant in statutory damages should there be a finding of willful conduct.

U.S. District Court Judge Ann Aiken last week dismissed the case, ruling the movie company had unfairly lumped the defendants together in a "reverse class action suit" to save more than $200,000 in court costs, and possibly to intimidate the defendants into paying $7,500 for allegedly illegally viewing a $10 video.

Voltage sought a jury trial, alleging the unnamed defendants used their computers to illegally copy and distribute the 2012 movie "Maximum Conviction," which was directed by Keoni Waxman and stars Steven Seagal and Steve Austin. Voltage said the unnamed local defendants resided in Medford, Talent, Central Point, Shady Cove, Klamath Falls and Brookings.

The defendants were not identified by name at the time of the initial filing because Voltage had only their Internet protocol addresses. Voltage later subpoenaed the defendants' Internet service providers, which include Charter Communications, Clearwire Corp., CenturyLink, Embarq Corp. and Frontier Corp., to obtain the defendants' names.

"... the manner in which plaintiff is pursuing the Doe defendants has resulted in $123,850 savings in filing fees alone," Aiken wrote, adding that defendants were being subjected to a "lack of fundamental fairness."

Voltage's suit stated the video industry has tried to capitalize on Internet technology and reduce costs to consumers through legitimate and legal venues such as Netflix, Hulu and Amazon Prime. But, it maintained, people continue to "steal motion pictures and undermine the efforts of creators through their illegal copying and distribution of motion pictures" through peer-to-peer networks such as BitTorrent, which connects computers through its system.

Whenever people download the BitTorrent application, they become both a user and a distributor, the suit alleges.

Aiken agreed technologies such as BitTorrent are "anonymous and stealthy tools" that allow for large-scale copyright infringement. But she said not every defendant named in Voltage's suit had equal culpability. Some had unsecured IP addresses, others allowed only downloading and prohibited uploading, while others were associated with institutional accounts such as businesses or schools with numerous users, she said.

Voltage's suit states it is a "common misunderstanding that people involved in motion pictures are already wealthy" and that the perception is that "the end product, such as a DVD, only costs very little to make."

In fact, the suit says, there are "countless expenses and labors," including writers, staff, construction workers and others involved in making the final product.

Aiken said cases such as the one filed by Voltage allow plaintiffs to "use the courts' subpoena powers to troll for quick and easy settlements." A sample demand letter sent to defendants associated with IP addresses threatens severe punitive damages and a not-so-subtle implication that liability is a foregone conclusion, Aiken said.

The letter asks $7,500 as a settlement offer, and said that amount would increase up to $150,000 if the recipient did not agree to prompt payment. It also makes threats against attempts to delete files, asserting costs associated with such actions will be added to the assessment. Aiken characterized the letter as an exorbitant extortion tactic.

"Accordingly, plaintiff's tactic in these BitTorrent cases appears to not seek to litigate against all the Doe defendants, but to utilize the court's subpoena powers to drastically reduce litigation costs and obtain, in effect, $7,500 for its product, which in the case of Maximum Conviction, can be obtained for $9.00 on Amazon for the Blu-Ray/DVD combo or $3.99 for a digital rental," Aiken wrote.

Crowell refused comment to the Mail Tribune in February and did not return calls Friday.
http://www.mailtribune.com/apps/pbcs...40311/-1/rss01





“Six Strikes” Anti-Piracy Outfit Loses Company Status, Faces Penalties
Ernesto

The Center for Copyright Information, a partnership between the RIAA, MPAA and several major Internet providers, has had its company status revoked. The CCI, which leads the “six strikes” anti-piracy scheme in the US, has violated state law and is unable to conduct any official business anywhere in the United States. In addition, the outfit faces civil penalties and risks losing its name to a third-party company.

During the summer of 2011 the MPAA and RIAA teamed up with five major Internet providers in the United States, announcing their “six strikes” anti-piracy plan.

The parties founded the Center for Copyright Information (CCI) and a few months later they started a non-profit company with the same name in Washington, D.C.

After more than a year of delays the CCI finally launched its Copyright Alert System during February. But just when it appeared the group was on the right track, it met another roadblock.

According to the District of Columbia Department of Consumer and Regulatory Affairs (DCRA), the company leading the six-strikes program has had its status revoked. This pretty much means that the company is unable to conduct any official business anywhere in the United States.

The revocation means that CCI’s articles of organization are void, most likely because the company forgot to file the proper paperwork or pay its fees.

“If entity’s status is revoked then articles of incorporation / organization shall be void and all powers conferred upon such entity are declared inoperative, and, in the case of a foreign entity, the certificate of foreign registration shall be revoked and all powers conferred hereunder shall be inoperative,” the DCRA explains.

Unfortunately for the CCI, the DCRA doesn’t have a strike based system and the company is now facing civil penalties and fines.

It appears that the company's status was revoked last year, which means that other businesses now have the option to take over the name. That would be quite an embarrassment, to say the least, and also presents an opportunity to scammers.

“When a Washington DC corporation is revoked by the DCRA, its name is reserved and protected until December 31st of the year the corporation is revoked. After December 31st, other business entities may use the corporations name,” the DCRA explains on its website.

Technically the CCI could have started a new corporation under a different name but this seems unlikely. TorrentFreak was able to confirm that at least one of the participants in the Copyright Alert System paid a substantial amount of money to the revoked company last year.

As with any other company, CCI will be able to have its company status reinstated after fulfilling its obligations. A source connected to the Center for Copyright Information informs TorrentFreak that the proper paperwork has now been filed. This most likely means that the DCRA will update the company’s status in the near future.

Finally, it will be interesting to see if this situation holds consequences for the anti-piracy warnings that are supposedly being sent out at the moment – the Internet seems strangely devoid of U.S. subscribers in receipt of any.
http://torrentfreak.com/six-strikes-...alties-130515/





Ethically Handicapped Prenda’s “Boss” Paul Duffy Signs A New Batch of Extortion Letters
SJD

Many remember that less than 3 years ago an infamous scumbag Steve “Lightspeed” Jones, a pornographer who specializes in “barely legal” genre (i.e. he recruits and films very young girls), articulated the “troll credo” that would become a modus operandi of the sleaziest porno extortionists:

People aren’t embarrassed when their neighbors find out they downloaded a few songs, but illegally trading midget, tranny, facials, and teen porn content? There is some news worth keeping from the wife, kids, parents, and neighbors.

Please feel free to continue to compare this to the RIAA…

Steve Lightspeed


(He said this in the context of hiring John Steele.)

I heard stories about troll harassers/collectors (not only Prenda’s, but Lipscomb’s, for example) threatening to tell relatives, neighbors, and colleagues that the victim is being sued in connection to an illegal download of pornography. Along these lines, Lipscomb’s collectors inflicted more harm upon citizenry than anyone else — see Fantalis’s story.

Yet I never saw these threats explicitly written in a demand letter — until yesterday. No one else but Prenda came up with a new sleaze at a time when the entire gang, including the ethically handicapped attorney who signed it (Paul Duffy), pleaded the Fifth and was referred to the authorities for criminal investigation (as a matter of fact, Duffy pleaded the Fifth twice). Last week people started receiving new letters, this time not from the involuntarily dissolved Duffy Law Group (as in April), not from fake/shell corporations, but from the “Anti-Piracy Law Group,” the latest Prenda reincarnation. An explicit threat to call one’s neighbors was added to this masterpiece of douchebaggery (emphasis is mine):

[...] The purpose of this step is to gather evidence about who used your Internet account to steal from our client [sjd: never mind that this case is about hacking, not copyright infringement]. The list of possible suspects includes you, members of your household, your neighbors (if you maintain an open wi-fi connection) and anyone who might have visited your house. In the coming days we will contact these individuals to investigate whether they have any knowledge of the acts described in my client’s prior letter. [...]

Anything goes if it helps to scare an uninformed extortion target:

[...] Internet is full of stories of people being brought to court by our firm, incurring significant legal fees and suffering large judgments [...]

I don’t know what part of their bodies these guys use for thinking: to see what kind of stories people will find, try to google “Anti-Piracy Law Group,” or visit antipiracylawgroup.com (copy and paste to make sure that this is real).

If I were not a relatively modest kind, I would tell you what to do with such a letter. But you know it without me if you spend an hour surfing the “Internet full of stories.”

By the way: the lopsided second page is not a result of faulty scanning. This is exactly how the original printed letter looks. Also, we probably have a new definition of “Chutzpah,” since the letters are dated 5/7/2013 — the very next day after Judge Wright’s smackdown.
Good news

I want to finish on a lighter note.

I hope that everyone is familiar with Friday’s surprise interview that John Steele gave to ArsTechnica. It does not make sense to discuss the things this narcissistic megalomaniac said on the record. I keep wondering if this pretentious paltry creature understands the extent of the damage he inflicts upon himself and his buddies when he opens his mendacious mouth in public. Funny enough, Jason Sweet used Steele’s words from this interview to argue against Prenda in the evening of the exact same Friday!

While the entire interview is good news overall, there is more to it: while John struggles with mastering a delicate art of shutting-the-fuck-up, some people are doing their job in silence. And some of them visit this site in the line of their duty:

I like it. I like it a lot.
http://fightcopyrighttrolls.com/2013...rtion-letters/





Canadian Anti-Piracy Outfit Pirates Photos for its Website
Ernesto

Canadian anti-piracy company Canipre has been teaming up with film studios to hunt down and sue alleged BitTorrent pirates. They want to change people’s attitudes toward piracy and make a few bucks in the process. However, it appears that the attitude change should start closer to home, as their own website blatantly uses photos that have been ripped off from independent photographers.

Copyright is a double-edged sword, and those who sharpen one side often get cut by the other. We see it happening time and time again with lawyers, lawmakers, anti-piracy groups and copyright holders.

The U.S. Copyright Group, for example, ripped off the website of a competitor. They copied the design and code of the Copyright Enforcement Group and passed it off as their own. Only when we called them out on it did they remove all “infringing” content.

In Canada a similar situation is unfolding at the moment. Anti-piracy group Canipre, who work with the makers of the Hurt Locker as did the U.S. Copyright Group, have been busted ripping off the work of independent photographers.

Their dark-themed website features images that originate from several photographers, but they all have one thing in common – they are being used without permission. A classic mistake, but one that should never have been made by a company that takes the moral high ground when it comes to piracy.

Just a few days ago Canipre’s boss defended their plan to sue thousands of BitTorrent pirates by claiming that they want to change people’s attitudes. In addition, they proudly use the ironic slogan “they all know it’s wrong and they’re still doing it.”

Since Vice Canada broke the news a few hours ago nearly all the photos have been removed. However, there are still screenshots that should be good for an interesting court battle, or perhaps more appropriately, a settlement of a few thousand dollars.

Steve Houk, who took one of the photos in question (a self-portrait), contacted Canipre about the blatant infringement, looking for compensation.

“I sent them an e-mail via their website. I identified the image, told them that it is my creative property under copyright and requested that they either remove the image from their site or compensate me for its use.”

“I also told them that it was disheartening to see a company that champions intellectual property rights to pirate someone else’s creative work,” Houk notes.

Canipre quickly took on the role of “innocent” infringer and blamed their web design firm for obtaining the photos. The design firm allegedly took content from an image bank, but that would have to be a rogue outfit as Houk never sold away his rights.

In any case, Canipre is of course ultimately responsible for the content that appears on their company website, just like they hold an Internet subscriber responsible for the infringing behavior of their neighbors.

In addition to Houk, Vice also got in touch with photographers Sascha Pohflepp and Brian Moore. Both confirmed that their work was used on Canipre’s website without permission.

“That’s amazing. No, I did not give them permission as far as I know. Go get ‘em,” Moore responded.

So there we have it once again. An outfit that targets copyright infringers is actively infringing copyright themselves.

They are so incompetent, and probably blinded by the dollar signs in their eyes, that they can’t even put a website together without breaking the law themselves – the same copyright law they use to go after movie pirates.
http://torrentfreak.com/canadian-ant...ebsite-130515/





Swedish Prosecutor Asks Court to Block File-Sharer Pirate Bay

Swedish prosecutors have launched a new attempt to close down the popular file-sharing site The Pirate Bay, asking a court to block internet addresses used to access the site.

The Pirate Bay, one of the world's biggest file-sharing sites, allows individuals to download movies, music and games without paying. The global entertainment industry has repeatedly tried to block its activities.

Swedish prosecutors have asked the court to de-register the domain names "piratebay.se" and "thepiratebay.se". Sweden's Internet Infrastructure Foundation registers and administers the country's ".se" internet addresses.

"They have been used as a means to commit crime," prosecutor Fredrik Ingblad said on Wednesday.

"There is a significant criminal copyright infringement which causes a great deal of damage to many."

The Pirate Bay uses a domain name registered in Sint Maarten, a Dutch territory in the Caribbean. The Internet Infrastructure Foundation said de-listing the ".se" domain names would not stop people using the site.

"People who want to use the site would get to it anyway," Maria Ekelund, a spokeswoman for the Foundation, said.

In 2009, a court in Sweden - where The Pirate Bay was founded in 2003 - fined and sentenced to jail three men then behind the site for breaching copyright, in a case brought by firms including Sony, Universal Music and EMI.

The site continued to operate after the convictions, despite police seizing the servers used by The Pirate Bay. The site is now run by an unknown group.

(Reporting by Simon Johnson; Editing by Janet Lawrence)
http://www.reuters.com/article/2013/...94E0XR20130515





Records Labels Prepare Massive ‘Pirate Site’ Domain Blocking Blitz
Andy

In their ongoing battle against websites said to infringe music copyrights, record labels have initiated a fresh wave of actions aimed at forcing UK ISPs to carry out domain blocking. This third wave is set to be the biggest so far, affecting as many as 25 domains and including some of the world’s largest torrent sites and file-hosting search engines. Furthermore, the BPI – the entity coordinating the action – will ask the courts to block the US-based music streaming operation Grooveshark.

In early July 2012, a music industry insider informed TorrentFreak that music licensing group PPL had begun polling its members on the issue of piracy.

On behalf of the BPI and by extension the major recording labels in the UK, PPL asked its members whether they had licensed any music to a range of torrent sites including KickAssTorrents, H33T and Fenopy. By February 2013 their motivations were confirmed when the High Court ordered ISPs to block all three sites.

Now, ten months after their initial survey and three months after their latest court success, we can confirm that the BPI have just initiated their most ambitious domain blocking initiative yet. Yesterday on behalf of the BPI, PPL sent out a request to its members, similar in most key respects to the one sent last year.

“Over the past years, UK music labels have innovated to build one of the most vibrant digital music sectors in the world. However, the growth of digital music in the UK is held back by a raft of illegal businesses commercially exploiting music without a licence from the copyright holders,” the communication begins.

“In considering what next steps to take, BPI would like to know if any PPL record company members have, in the UK, licensed their recorded music to the operators of the below websites,” it continues.

PPL-BPI

What follows is a list containing some of the world’s largest BitTorrent, file-hosting, and MP3 search engine websites. A second industry source informs TorrentFreak that the BPI does indeed intend to have the sites blocked via upcoming action in the High Court.

BitTorrent

1337x currently has 484,000 torrents in its database. The site is special since it’s in the minority of public torrent sites that also operate their own tracker. At the turn of 2013 it was the 6th most popular torrent site in the world.

BitSnoop is a torrent indexing site that currently has a massive 19.9 million torrents in its database. When it comes to DMCA notices the site is more transparent than most. BitSnoop says it has complied with 789,303 takedown notices since December 2011 and even publishes league tables of the senders. According to the list the BPI have sent none, which is interesting since they have sent more than 300,000 complaints to Google about BitSnoop.

ExtraTorrent is the 5th largest torrent and 9th largest file-sharing related website in the world. It also claims to be DMCA compliant but that hasn’t stopped the BPI sending close to 200,000 takedowns directly to Google.

Two years ago Isohunt became the first torrent search engine to implement a keyword filter to block infringing content on behalf of the MPAA. It is the 4th largest torrent site in the world and is subject to continuing legal action in the United States. BPI member companies have sent more than 310,000 takedown requests to Google.

TorrentReactor re-entered the Top 10 torrent websites chart this year after a brief hiatus. The site claims compliance with both the DMCA and its European equivalent.

The BPI’s list is long and goes on to include TorrentCrazy, Monova, Torrentdownloads and TorrentHound, and with the word ‘torrent’ cropping up a few times one might presume that these sites are all fundamentally the same. However, there is a surprise inclusion in the list.

Torrentz is the 3rd largest torrent site in the world but it differs from the other sites in the list in an important way. Torrentz is a meta-search engine: a search engine that searches other search engines. Furthermore, not only is it fully compliant with the DMCA and its European equivalent, but Torrentz carries absolutely no torrents whatsoever.
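A meta-search engine of this kind can be sketched in a few lines; the backends below are invented stand-ins for real torrent indexes, and the titles and hashes are made up:

```python
# Invented backends standing in for real torrent indexes; each returns
# (title, info-hash) pairs for a query.
def search_site_a(query):
    return [("Example Game", "hash-aaa"), ("Example Game 2", "hash-bbb")]

def search_site_b(query):
    return [("Example Game", "hash-aaa"), ("Example Game DLC", "hash-ccc")]

def meta_search(query, backends):
    # A meta-search engine holds no torrents itself: it fans the query out
    # to other indexes and merges the results, deduplicating by info hash.
    seen, merged = set(), []
    for backend in backends:
        for title, info_hash in backend(query):
            if info_hash not in seen:
                seen.add(info_hash)
                merged.append((title, info_hash))
    return merged

results = meta_search("example game", [search_site_a, search_site_b])
print(results)
```

Because the site stores only pointers returned by other indexes, blocking it removes no torrents from the network, which is what makes its inclusion in the list notable.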

File-Hosting search engines

The attack on torrent search engines is only the beginning. The BPI is also looking to target other sites that don’t carry any of their own material but index content located on other sites.

Filestube, a site that indexes content on a few dozen external file-hosting sites, has been subjected to a massive DMCA notice campaign in recent months. Google says it has received 4.45 million takedown notices from 2,2650 copyright holders. Other similar sites included on the BPI list are Filecrop, Filetram and Rapidlibrary.

Music streaming and MP3

Although it is already the subject of a domain block in Denmark, the inclusion of Grooveshark in the BPI’s list comes as something of a surprise. Isohunt aside, the company’s management have a much higher public profile than any other site in the list and could conceivably turn up in the UK High Court to fight any blocking attempt.

The list winds up with a range of MP3 download/search engine type operations that have grown in popularity during recent months. BeeMP3, Dilandau, MP3juices, MP3lemon, MP3raid and MP3skull have all featured heavily in Google’s Transparency Report, probably due to their ease of use and crowd-pleasing search results.

Abmp3, Bomb-mp3, Emp3world and Newalbumreleases complete the list.

PPL members are being asked to respond directly to the BPI’s legal department by May 21 informing the music group of any licensing deals in place – presumably the BPI wish to avoid potential embarrassment in the High Court.
http://torrentfreak.com/records-labe...-blitz-130515/





Bumbling ASIC Heralds New Internet Censorship Era
Bernard Keane

ASIC has been revealed as the agency behind the blocking of a Melbourne education website, using a hitherto-unused internet censorship power.

An inept regulator exercising a hitherto-unused internet censorship power has been revealed as the source of the accidental blocking of a Melbourne education website.

IT industry news site Delimiter has revealed that the Australian Securities and Investments Commission was behind the blocking of the Melbourne Free University website and more than 1000 other sites in early April, when it sought to block a website suspected of engaging in fraud using a power under s.313 of the Telecommunications Act.

The reason for the blocking of the site in April has remained a mystery, but Delimiter’s Renai LeMay pursued the issue and eventually unearthed from Broadband Minister Stephen Conroy’s office the fact that a broad power under the Telecommunications Act had been used.

Under s.313, a carrier or carriage service provider must:

” … give officers and authorities of the Commonwealth and of the States and Territories such help as is reasonably necessary for the following purposes: enforcing the criminal law and laws imposing pecuniary penalties; assisting the enforcement of the criminal laws in force in a foreign country; protecting the public revenue; safeguarding national security.”

ASIC in effect used this power to censor the internet, in the course of which over 1000 sites unconnected to the target site were blocked, including Melbourne Free University, which was told nothing by authorities or its ISP about why it had been blocked.

ASIC is one of Australia’s most inept regulators, with a string of courtroom defeats marking its efforts to enforce corporate law. Despite its record of bumbling, last year ASIC used the Joint Committee on Intelligence and Security’s inquiry into data retention to demand an expansion of its power to intercept internet and phone communications.

ASIC’s use of the s.313 power opens the possibility of a de facto internet filter scheme with less oversight than the filter originally proposed by Stephen Conroy in the government’s first term. As LeMay correctly notes, a filter comprised of individual requests from a variety of regulators asserting they are “enforcing criminal laws” or “safeguarding national security” is harder to monitor or hold to account. As Melbourne Free University discovered, it is also very difficult for businesses and organisations accidentally blocked to discover who has blocked them or why.

In Tuesday’s budget, the government announced its abandonment of the internet filter scheme would enable a saving of several million dollars. It has been replaced with a “voluntary” filter scheme limited to sites identified by Interpol. That filter is a minimal one compared both with the original Conroy proposal, which would have targeted a broader range of allegedly “illegal” content under Australian laws, and with the one available via s.313, which is driven purely by regulators’ internal interpretations of what counts as “enforcing criminal law” or “safeguarding national security”.

However, all are easily evaded using virtual private networks or routing software like Tor. The safest bet for all Australian internet users is to use basic, freely available software to encrypt and redirect their internet usage to avoid filters and data retention regimes.
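By way of illustration of that advice, and assuming a Tor client is installed and listening on its default local SOCKS port (9050), traffic can be redirected past a carrier-level filter on a per-command basis; the URLs here are only examples:

```shell
# Fetch a page through a locally running Tor client rather than
# directly through the ISP (Tor's SOCKS proxy defaults to port 9050).
# --socks5-hostname also sends DNS lookups through the proxy, so the
# ISP never sees which name was resolved.
curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/

# torsocks wraps an arbitrary program so its TCP connections and DNS
# lookups are routed through Tor instead of the ISP's network.
torsocks wget -q -O - https://example.com/
```

Because the filter described in the article operates at the ISP, either approach causes blocked requests to exit the Tor network elsewhere, outside the filter's reach.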
http://www.crikey.com.au/2013/05/16/...ensorship-era/





TRA Denies Jail and Fine Up to Dh1 Million for Using Skype in UAE

The Telecommunications Regulatory Authority (TRA) has denied an earlier media report that UAE residents could be fined and imprisoned for using Skype.

A report said today that some of the services Skype offers - like making telephone calls - need a licence from the TRA, and that violators could face a fine of up to Dh1 million as well as imprisonment.

“TRA did not state the mentioned penalties… TRA confirms that there are a number of factual inaccuracies, for instance, the provisions of article (71) do not apply in this case,” the regulatory body said in a statement.

Some of the services that Skype offers - like making telephone calls - need a licence from the UAE Telecommunications Regulatory Authority (TRA), according to Majid Al Masmar, Acting Director General of TRA.

According to a report in the Arabic daily Emarat Al Youm, the authority said that failure to obtain a licence will subject a user to the amended Article 71 of the Telecommunications Act.

This includes imprisonment for a period not exceeding two years and/or a fine of not less than Dh50,000 and not more than Dh1 million.

Al Masmar confirmed that the services which Skype provides require a licence from the TRA.

In case of failure to obtain such a licence, the defaulter will be subject to Communications Law No. (3) of 2003 and its penalties will apply.

Al Masmar clarified that downloading Skype onto one’s computer does not give subscribers the right to utilise services offered without necessary licences.

The report quotes Al Masmar as pointing out that attempts to override the ban on unauthorised Skype services have increased recently.

However, neither etisalat nor du has submitted an official request to the TRA to allow Skype services in the UAE.

“There is a big difference for subscribers between getting Skype services formally through the two licensed operators and illegally downloading the software via the internet, as a large number of people currently do,” he said.

He explained that if etisalat and du offered these services, it would mean allowing the use of Skype to make both voice and video calls from smartphone to smartphone, with the approval of the TRA.
http://www.emirates247.com/business/...05-12-1.506105





A Saudi Arabia Telecom's Surveillance Pitch
Moxie Marlinspike

Last week I was contacted by an agent of Mobily, one of two telecoms operating in Saudi Arabia, about a surveillance project that they’re working on in that country. Having published two reasonably popular MITM tools, it’s not uncommon for me to get emails requesting that I help people with their interception projects. I typically don’t respond, but this one (an email titled “Solution for monitoring encrypted data on telecom”) caught my eye.

I was interested to know more about what they were up to, so I wrote back and asked. After a week of correspondence, I learned that they are organizing a program to intercept mobile application data, with specific interest in monitoring:

• Mobile Twitter
• Viber
• Line
• WhatsApp

I was told that the project is being managed by Yasser D. Alruhaily, Executive Manager of the Network & Information Security Department at Mobily. The project’s requirements come from “the regulator” (which I assume means the government of Saudi Arabia). The requirements are the ability to both monitor and block mobile data communication, and apparently they already have blocking set up. Here’s a sample snippet from one email:

From: Yasser Alruhaily <…….. .. .@mobily.com.sa>

Date: Thursday, May 2, 2013 1:04 PM

Subject: Re: As discussed last day .further discussion

we are working in defining a way to deal with all such requirements from regulator and it is not only for Whatsapp, it is for whatsapp, line, viber, twitter etc..

So, what we need your support in is the following:

• is there any technical way that allow for interception these traffic?
• Is there any company or vendor could help us on this regard?
• is there any telecom company they implement any solution or workaround?


One of the design documents that they volunteered specifically called out compelling a CA in the jurisdiction of the UAE or Saudi Arabia to produce SSL certificates that they could use for interception. A considerable portion of the document was also dedicated to a discussion of purchasing SSL vulnerabilities or other exploits as possibilities.

Their level of sophistication didn’t strike me as particularly impressive, and their existing design document was pretty confused in a number of places, but Mobily is a company with over $5 billion in revenue, so I’m sure that they’ll eventually figure something out.

What’s depressing is that I could have easily helped them intercept basically all of the traffic they were interested in (except for Twitter – I helped write that TLS code, and I think we did it well). They later told me they’d already gotten a WhatsApp interception prototype working, and were surprised by how easy it was. The bar for most of these apps is pretty low.
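One standard defense against the compelled-CA attack described in their design document is certificate pinning: the client refuses any certificate whose fingerprint doesn’t match a value it already knows, no matter which authority signed it, so a certificate minted by a coerced CA is rejected even though it chains to a trusted root. A minimal sketch in Python (the certificate bytes below are stand-ins, not real DER data):

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def is_pinned(der_cert: bytes, pinned_fp: str) -> bool:
    """Accept the peer only if its certificate matches the pinned
    fingerprint, regardless of which CAs vouch for it."""
    return fingerprint(der_cert) == pinned_fp

# Stand-in for the DER certificate a client would pull out of the TLS
# handshake, e.g. via ssl.SSLSocket.getpeercert(binary_form=True).
legit_cert = b"stand-in bytes for the real server certificate"
PINNED_FP = fingerprint(legit_cert)  # normally shipped inside the app

assert is_pinned(legit_cert, PINNED_FP)
# A certificate from a compelled CA validates against the CA system,
# but fails the pin check:
assert not is_pinned(b"stand-in bytes for a compelled-CA cert", PINNED_FP)
```

The trade-off is operational: the pinned fingerprint must be updated in the client whenever the server’s key rotates, which is why pinning tends to appear in first-party apps rather than general-purpose browsers.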

In The Name Of Terror

When they eventually asked me for a price quote, and I indicated that I wasn’t interested in the job for privacy reasons, they responded with this:

I know that already and I have same thoughts like you freedom and respecting privacy, actually Saudi has a big terrorist problem and they are misusing these services for spreading terrorism and contacting and spreading their cause that’s why I took this and I seek your help. If you are not interested than maybe you are on indirectly helping those who curb the freedom with their brutal activities.

So privacy is cool, but the Saudi government just wants to monitor people’s tweets because… terrorism. The terror of the re-tweet.

But the real zinger is that, by not helping, I might also be a terrorist. Or an indirect terrorist, or something.

While this email is obviously absurd, it’s the same general logic that we will be confronted with over and over again: choose your team. Which would you prefer? Bombs or exploits. Terrorism or security. Us or them. As transparent as this logic might be, sometimes it doesn’t take much to convince oneself that the profitable choice is also the right choice.

If I absolutely have to frame my choices as an either-or, I’ll choose power vs. people.

Culture Over Time

I know that, even though I never signed a confidentiality agreement, and even though I simply asked questions without signaling that I wanted to participate, it’s still somewhat rude of me to publish details of correspondence with someone else.

I’m publishing this correspondence with Mobily anyway, not only because it’s substantially more rude of them to be engaged in massive-scale eavesdropping of private communication, but because I think it’s part of a narrative that we need to consider. What Mobily is up to is what’s currently happening everywhere, and we can’t ignore that.

Over the past year there has been an ongoing debate in the security community about exploit sales. For the most part, the conversation has focused on legality and whether exploit sales should be regulated.

I think the more interesting question is about culture: what do we in the hacker community value and prioritize, and what is the type of behavior that we want to encourage?

Let’s take stock. One could make the case that the cultural origins of exploit sales are longstanding. Since at least the 90’s, there has been an underlying narrative within the hacker community of not “blowing up” or “killing” bugs. A tension against that discipline began with the transition from a “hacker community” to a “security industry,” and the unease created by that tension peaked in the early 2000’s, manifested most clearly by the infamous AntiSec movement.

Fundamentally, AntiSec tried to reposition the “White Hat” vs “Black Hat” debate by suggesting that there are no “White Hats,” only “Green Hats” – the color of money.

As someone who also regretted what money had done to the hacker community, I was largely sympathetic with AntiSec. If I’m really honest with myself, though, my interest in the preservation of 0day was also because there was something fun about an insecure internet at the time, particularly since that insecurity predominately tended to be leveraged by a class of people that I generally liked against a class of people that I generally disliked.

In short, there was something about not publishing 0day that signaled affiliation with the “hacker community” rather than the “security industry.”

The Situation Today

In many ways, it’s possible that we’re still largely operating based on those original dynamics. Somewhere between then and now, however, there was an inflection point. It’s hard to say exactly when it happened, but these days, the insecurity of the internet is now more predominantly leveraged by people that I dislike against people that I like. More often than not, that’s by governments against people.

Simultaneously, the tension between “0day” vs “publish” has largely transformed into “sell secretly” vs “publish.” In a sense, the AntiSec narrative has undergone a full inversion: this time, there are no “Black Hats” anymore, only “Green Hats” – the color of money.

There are still outliers, such as Anonymous (to the extent that it’s possible to be sympathetic with an unguided missile), but what’s most significant about their contribution is that they’re not using 0day at all.

Forgetting the question of legality, I hope that we can collectively look at this changing dynamic and perhaps re-evaluate what we culturally reward. I’d much rather think about the question of exploit sales in terms of who we welcome to our conferences, who we choose to associate with, and who we choose to exclude, than in terms of legal regulations. I think the contextual shift we’ve seen over the past few years requires that we think critically about what’s still cool and what’s not.

Maybe this is an unpopular opinion and the bulk of the community is totally fine with how things have gone (after all, it is profitable). There are even explicitly patriotic hackers who suggest that their exploit sales are necessary for the good of the nation, seeing themselves as protagonists in a global struggle for the defense of freedom, but having nothing to do with these ugly situations in Saudi Arabia. Once exploits are sold to US defense contractors, however, it’s very possible they could end up delivered directly to the Saudis, where it would take some even more substantial handwaving to think that they’ll serve in some liberatory way.

For me at least, these changes have likely influenced what I choose to publish rather than hold, and have probably caused me to spend more time attempting to develop solutions for secure communication than the type of work I was doing before.

It’s Happening

Really, it’s no shock that Saudi Arabia is working on this, but it is interesting to get fairly direct evidence that it’s happening. More to the point, if you’re in Saudi Arabia (or really anywhere), it might be prudent to think about avoiding insecure communication tools like WhatsApp and Viber (TextSecure and RedPhone could serve as appropriate secure replacements), because now we know for sure that they’re watching.

For the rest of us, I hope we can talk about what we can do to stop those who are determined to make this a reality, as well as the ways that we’re already inadvertently a part of that reality’s making.
http://www.thoughtcrime.org/blog/saudi-surveillance/





Phone Records of Journalists Seized by U.S.
Charlie Savage and Leslie Kaufman

Federal investigators secretly seized two months of phone records for reporters and editors of The Associated Press in what the news organization said Monday was a “serious interference with A.P.’s constitutional rights to gather and report the news.”

The A.P. said that the Justice Department informed it on Friday that law enforcement officials had obtained the records for more than 20 telephone lines of its offices and journalists, including their home phones and cellphones. It said the records were seized without notice sometime this year.

The organization was not told the reason for the seizure. But the timing and the specific journalistic targets strongly suggested they are related to a continuing government investigation into the leaking of information a year ago about the Central Intelligence Agency’s disruption of a Yemen-based terrorist plot to bomb an airliner.

The disclosures began with an Associated Press article on May 7, 2012, breaking the news of the foiled plot; the organization had held off publishing it for several days at the White House’s request because the intelligence operations were still unfolding.

In an angry letter to Attorney General Eric H. Holder Jr. on Monday, Gary Pruitt, the president and chief executive of The A.P., called the seizure a “massive and unprecedented intrusion” into its news gathering activities.

“There can be no possible justification for such an overbroad collection of the telephone communications of The Associated Press and its reporters,” he wrote. “These records potentially reveal communications with confidential sources across all of the news gathering activities undertaken by The A.P. during a two-month period, provide a road map to A.P.’s news gathering operations, and disclose information about A.P.’s activities and operations that the government has no conceivable right to know.”

The development represents the latest collision of news organizations and federal investigators over government efforts to prevent the disclosure of national security information, and it comes against a backdrop of an aggressive policy by the Obama administration to rein in leaks. Under President Obama, six current and former government officials have been indicted in leak-related cases so far, twice the number brought under all previous administrations combined.

Justice Department regulations call for subpoenas for journalists’ phone records to be undertaken as a last resort and narrowly focused, subject to the attorney general’s personal signoff. Under normal circumstances, the regulations call for notice and negotiations, giving the news organization a chance to challenge the subpoena in court.

The Justice Department referred questions about the subpoena to a spokesman for Ronald C. Machen Jr., the United States attorney for the District of Columbia, who was assigned by Mr. Holder last June to lead one of two major leak investigations. Those inquiries came amid a Congressional uproar over several disclosures of national security information in the media.

“We must notify the media organization in advance unless doing so would pose a substantial threat to the integrity of the investigation,” Mr. Machen’s spokesman, William Miller, said.

“Because we value the freedom of the press,” Mr. Miller added, “we are always careful and deliberative in seeking to strike the right balance between the public interest in the free flow of information and the public interest in the fair and effective administration of our criminal laws.”

But First Amendment experts and free press advocates portrayed the move as shocking in its breadth.

The Newspaper Association of America issued a statement saying: “Today we learned of the Justice Department’s unprecedented wholesale seizure of confidential telephone records from the Associated Press. These actions shock the American conscience and violate the critical freedom of the press protected by the U.S. Constitution and the Bill of Rights.”

A spokeswoman for Dow Jones, which owns The Wall Street Journal, said the company was concerned about the “broader implications” of the action.

Jay Carney, a White House spokesman, said the White House was not involved in the subpoena. “Other than press reports, we have no knowledge of any attempt by the Justice Department to seek phone records of the A.P.,” he said, adding “we are not involved in decisions made in connection with criminal investigations.”

The Justice Department did not respond to a question about whether a similar step was taken in the other major government leak investigation Mr. Holder announced last June. It is believed to be focused on a New York Times reporter, David E. Sanger, and his disclosures in articles and in a book about a joint American-Israeli effort to sabotage Iranian nuclear centrifuges with the so-called Stuxnet virus.

David McCraw, a lawyer for The New York Times, said, “We’ve had no contact from the government of any sort.”

Mr. Holder announced the two special leak investigations in June amid calls in Congress for a crackdown on leaks after a spate of disclosures about the bomb plot, cyberwarfare against Iran, Mr. Obama’s procedures for putting terrorism suspects on a “kill list,” and the raid that killed Osama bin Laden. The revelations had been published by The New York Times, The A.P. and in several books.

Republicans accused the administration of deliberately leaking classified information, jeopardizing national security in an effort to make Mr. Obama look tough in an election year — a charge the White House rejected. But some Democrats, too, said the leaking of sensitive information had gotten out of control.

Mr. Holder’s move at the time was sharply criticized by Republicans as not going far enough. They wanted him to appoint an outside special counsel, and a Senate resolution calling for a special counsel was co-sponsored by 29 Republican senators.

On Monday, however, after The A.P. disclosed the seizure of the records, some Republican leaders criticized the administration as going too far. Michael Steel, a spokesman for House Speaker John A. Boehner, said: “The First Amendment is first for a reason. If the Obama Administration is going after reporters’ phone records, they better have a damned good explanation.” And Doug Heye, a spokesman for Representative Eric Cantor of Virginia, the majority leader, linked the revelation to a brewing controversy over the targeting of Tea Party groups for greater scrutiny by the Internal Revenue Service, saying “these new revelations suggest a pattern of intimidation by the Obama administration.”

The A.P. said Monday that it first learned of the seizure of the records last Friday afternoon when its general counsel, Laura Malone, received a letter from Mr. Machen, the United States attorney. The letter to Mr. Holder said the seizure included “all such records for, among other phone lines, an A.P. general phone number in New York City as well as A.P. bureaus in New York City, Washington, D.C., Hartford, Connecticut, and at the House of Representatives.”

The Associated Press is a nonprofit global news cooperative owned by its American newspaper and broadcast members.

Charlie Savage reported from Washington, and Leslie Kaufman from New York. Christine Haughney contributed reporting from New York.
https://www.nytimes.com/2013/05/14/u...zed-by-us.html





White House Pushes for Media Shield Law
Charlie Savage

Under fire over the Justice Department’s use of a broad subpoena to obtain calling records of Associated Press reporters in connection with a leak investigation, the Obama administration sought on Wednesday to revive legislation that would provide greater protections to reporters in keeping their sources and communications confidential.

President Obama’s Senate liaison, Ed Pagano, on Wednesday morning called the office of Senator Charles E. Schumer, Democrat of New York, and asked him to reintroduce a version of a bill that he had pushed in 2009 called the Free Flow of Information Act, a White House official said.

The bill would create a federal media shield law, akin to ones most states already have, giving journalists some protections from penalties for refusing to identify confidential sources in federal law enforcement proceedings, and generally enabling journalists to ask a federal judge to quash subpoenas for their phone records.

Hours later, Attorney General Eric H. Holder Jr. appeared before the House Judiciary Committee for a hearing that covered a wide range of topics but repeatedly returned to the A.P. phone records. Lawmakers from both parties sought to grill him over why federal investigators secretly used a subpoena this year to obtain a broad swath of toll records — logs of calls sent and received — for several A.P. bureaus and reporters, without advance notice.

“These requests appear to be very broad and intersect important First Amendment protections,” said the committee’s chairman, Representative Robert W. Goodlatte, Republican of Virginia. “Any abridgment of the First Amendment right to the freedom of the press is very concerning.”

Mr. Holder, however, repeatedly noted that he had recused himself because the F.B.I. had interviewed him as one of the officials who knew the information that was leaked to The A.P., which is believed to be about the foiling of a bombing plot involving the Yemen branch of Al Qaeda in the spring of 2012. The decision to approve the subpoena was made by his deputy, James M. Cole.

“I was not the person who was involved in that decision,” he said.

That answer, versions of which he gave in response to multiple questions from Republicans about the leak investigation, did not satisfy committee members, several of whom said they wanted Mr. Cole to appear before the committee and answer questions. Mr. Holder, however, cautioned that since the investigation was continuing, Mr. Cole might not be able to discuss the issue.

Mr. Holder also said that he did not put his recusal in writing, which drew widespread criticism from the lawmakers. Later in the hearing, he said that he had decided to examine whether it would be a better policy to always record when he was transferring his powers to his deputy for a specific matter.

The top Democrat on the committee, Representative John Conyers of Michigan, noted that he had sponsored a version of the Free Flow of Information Act that passed the House twice when it was under Democratic control. He said he would reintroduce his version, too, and he said he hoped that Republicans — who until recently had called for more aggressive investigations of leaks — would support it.

The version the Obama administration is seeking to revive, however, is the one that was chiefly sponsored by Mr. Schumer, which was negotiated between the newspaper industry and the White House. It was approved by the Senate Judiciary Committee in a bipartisan 15-to-4 vote in December 2009. But while it was awaiting a floor vote in 2010, a furor over leaking arose after WikiLeaks began publishing archives of secret government documents, and the bill never received a vote.

In a statement confirming that he would reintroduce the legislation, Mr. Schumer referred to the controversy over the subpoena of A.P. calling records, saying: “This kind of law would balance national security needs against the public’s right to the free flow of information. At minimum, our bill would have ensured a fairer, more deliberate process in this case.”

It is not clear whether such a law would have changed the outcome of the subpoena involving The A.P.

The 2009 legislation would have created a presumption that when the government was seeking calling records from a telephone carrier, the news organization would be notified ahead of time, allowing it to fight the subpoena in court. But the bill would also have allowed the government to seek a 45-to-90-day delay in notification if a court determined that such notice would threaten the integrity of the investigation.

Under the bill, the scope of protection for reporters would vary according to whether it was a civil case, an ordinary criminal case or a national security case.

The greatest protection would be given to civil cases, in which litigants seeking to force reporters to testify or trying to obtain their calling information would be required to show why their need for the information outweighed the public’s interest in unfettered news gathering.

Ordinary criminal cases would work in a similar fashion, except the burden would be on the reporter seeking to quash the subpoena to show by a “clear and convincing” standard that the public interest in the free flow of information should prevail over the needs of law enforcement.

Cases involving the disclosure of classified information would be more heavily tilted toward the government. Judges could not quash a subpoena through a balancing test if prosecutors presented facts showing that the information sought might help prevent a terrorist attack or other acts likely to harm national security.

In his testimony, Mr. Holder said he supported Mr. Schumer’s bill.

“There should be a shield law with regard to the press’s ability to gather information and to disseminate it,” he said. “The focus should be on those people who break their oath and put the American people at risk, not reporters who gather this information.”
https://www.nytimes.com/2013/05/16/u...ield-bill.html





Cops Should Get Warrants to Read Your E-Mail, Attorney General Says
David Kravets

Attorney General Eric Holder became the White House’s highest ranking official to support sweeping privacy protections requiring the government, for the first time, to get a probable-cause warrant to obtain e-mail and other content stored in the cloud.

“It is something that I think the Department will support,” Holder testified before the House Judiciary Committee, when questioned about the Justice Department’s position.

Last month, the Senate Judiciary Committee approved a package that nullifies a provision of federal law allowing the authorities to acquire a suspect’s e-mail or other stored content from an internet service provider without showing probable cause that a crime was committed if the content is 180 days or older.

Under the current law, the 1986 Electronic Communications Privacy Act, the government can obtain e-mail without a warrant as long as the data has been stored on a third-party server — the cloud — for 180 days or more. The government only needs to show, often via an administrative subpoena, that it has “reasonable grounds to believe” the information would be useful to an investigation.

Holder, who was speaking at a Justice Department oversight hearing, said that warrants are unnecessary for non-criminal investigations.

Holder’s thinking is a sea change from the department’s position two years ago, when it testified that warrants should never be required.

Initially, ECPA provided privacy to users, but that privacy protection eroded as technology advanced and people began storing e-mail and documents on servers for longer periods, sometimes indefinitely. The act was adopted at a time when e-mail wasn’t stored on servers for a long time, but instead was held briefly on its way to the recipient’s inbox. E-mail more than 6 months old was assumed abandoned.

Elana Tyrangiel, the Justice Department’s acting assistant attorney general for the Office of Legal Policy, testified before a House subcommittee in March that the 180-day rule “no longer made sense.”

Regardless of Holder’s support, lawmakers have been debating change to ECPA for years. Similar legislation to the Senate version is pending a hearing before the House Judiciary Committee.

Still, not all Obama administration officials are on board.

Mary Jo White, the Securities and Exchange Commission’s new chair, wrote the Senate Judiciary Committee last month that ECPA reform would hinder the government’s “ability to protect investors.”
http://www.wired.com/threatlevel/201...mail-warrants/





FBI’s Latest Proposal for a Wiretap-Ready Internet Should Be Trashed
Julian Sanchez

The FBI has some strange ideas about how to “update” federal surveillance laws: They’re calling for legislation to penalize online services that provide users with too much security.

I’m not kidding. The proposal was revealed in The Washington Post last week — and a couple days ago, a front-page story in The New York Times reported the Obama administration is preparing to back it.

Why? Federal law enforcement agencies like the FBI have long feared their wiretap capabilities would begin “going dark” as criminals and terrorists — along with ordinary citizens — shift from telephone networks, which are required to be wiretap-ready under the 1994 Communications Assistance for Law Enforcement Act (CALEA), to the dizzying array of online communications platforms available today.

While it’s not yet clear how dire the going-dark scenario really is, the statutory “cure” proposed by the FBI — with fines starting at $25,000 a day for companies that aren’t wiretap capable — would surely be worse than the disease.

The FBI’s misguided proposal would impose costly burdens on thousands of companies (and threaten to entirely kill those whose business model centers on providing highly secure encrypted communications), while making cloud solutions less attractive to businesses and users. It would aid totalitarian governments eager to spy on their citizens while distorting business decisions about software design. Perhaps worst of all, it would treat millions of law-abiding users with legitimate security needs as presumed criminals — while doing little to hamper actual criminals.

It Stifles Innovation

The FBI’s plan would effectively make an entire category of emerging secure platforms — such as the encrypted voice app Silent Circle or the Dropbox-like cloud storage service SpiderOak — illegal overnight. Such services protect user confidentiality by ensuring that not even the company’s employees can access sensitive data; only the end users retain the encryption keys needed to unlock their content.

This is hugely attractive for users who might otherwise be wary about relying on cloud services — whether they’re businesses negotiating multi-million dollar mergers, lawyers and therapists handling confidential documents, activists in authoritarian states, or just couples looking to back up their newborn photos.
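The end-to-end model described above is easy to see in miniature: the client encrypts before anything leaves the device, so the provider stores only ciphertext it cannot read. The sketch below uses a toy SHA-256 counter-mode keystream purely for illustration; it is not a vetted cipher, and a real service would use an audited construction:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustration only,
    NOT a vetted cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def client_encrypt(key: bytes, plaintext: bytes):
    """Encrypt on the client; returns (nonce, ciphertext) for upload."""
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def client_decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same keystream reverses the encryption."""
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

# The provider only ever sees (nonce, ciphertext); the key never
# leaves the client, so a wiretap at the server yields nothing readable.
user_key = secrets.token_bytes(32)
server_storage = client_encrypt(user_key, b"confidential merger terms")

assert server_storage[1] != b"confidential merger terms"
assert client_decrypt(user_key, *server_storage) == b"confidential merger terms"
```

This is exactly the property a server-side wiretap interface cannot coexist with: to comply, the provider would have to hold the keys, which is what these services are designed not to do.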

But if the FBI gets its way, companies won’t be able to adopt that “end to end” encryption model, or offer their users the security it provides. A wiretap interface is essentially an intentional security vulnerability, as network engineer Susan Landau points out — which means requiring companies to be wiretap-capable is also mandating them to design less secure services.

That comes with a potentially large economic downside — and not just to cloud companies: If cloud providers can’t promise iron-clad confidentiality, corporations may well keep operating their own outdated systems, even though shifting to a secure cloud solution would be more efficient and less expensive.

It’s Tech-Ignorant

Typically, the FBI claims that it just wants internet platforms to be subject to the same requirements as phone networks (which are already wiretap-accessible to it under CALEA).

But as a group of renowned computer scientists point out in an important new paper, “Going Bright: Wiretapping without Weakening Communications Infrastructure,” this misleading analogy ignores key differences between the architectures of these networks.

For one, online platforms are altered and updated far more frequently than phone networks — and there are a hell of a lot more online services than there are phone carriers. That means an interception mandate imposes a greater burden on a larger number of much smaller firms.

It also means that as platforms evolve, the code firms deploy to provide wiretap functionality is bound to have vulnerabilities. This provides hackers with ample incentive to simultaneously compromise an entire user base — and the sweetly ironic prospect of doing so through a law enforcement interface would be irresistible to them.

More fundamentally, the internet is a decentralized packet-switched network that operates very differently from a centrally-switched phone network — and many types of online communication follow the same design principle. For example, video-chat services like Skype rely on a peer-to-peer design that doesn’t require a centralized hub to route calls. Because it doesn’t depend on a single company’s servers to handle all the traffic, this architecture makes the service resilient and easier to scale — and more difficult for repressive regimes to block.

But the lack of a central hub also makes peer-to-peer communications inherently trickier to intercept. And threatening hefty fines for companies that can’t reliably provide access to user communications could easily deter companies from choosing the approach, even when it makes the most sense on economic or engineering grounds.

Instead of being decided by what’s best for the vast majority of users, communications architectures would be determined by what makes things easiest for law enforcement – essentially trading away the benefits of the many for easier surveillance of the rare and tiny fraction of users who might be criminals.

That’s utterly at odds with the spirit of permissionless innovation that has made the internet such a spectacular engine of economic and cultural growth.

And Ironically, It Won’t Really Protect Us

But if slowing innovation and weakening security is the price of catching terrorists and child pornographers, isn’t it a price worth paying?

Not if it doesn’t work.

Once it’s clear that online companies can’t promise true security, the most sophisticated and dangerous criminals will simply implement their own client-side encryption. DIY encryption may be too difficult or inconvenient for ordinary users, who benefit from services that take the hassle out of security — but the criminals the FBI is most interested in will doubtless find it worth the extra trouble.
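How low is that bar? The following toy sketch (Python standard library only; a one-time pad stands in purely for illustration, where real tools would use a vetted cipher such as AES) shows the shape of client-side encryption: the key is generated and kept on the client, so the provider only ever stores ciphertext it cannot read.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR two equal-length byte strings (one-time-pad style)."""
    return bytes(a ^ b for a, b in zip(data, key))

# Client side: generate a key the provider never sees, encrypt locally.
plaintext = b"merger terms: strictly confidential"
key = secrets.token_bytes(len(plaintext))   # stays on the client
ciphertext = xor_bytes(plaintext, key)

# Provider side: all it ever receives or stores is the ciphertext.
# Served with a wiretap order, it could hand over only this.
stored_on_server = ciphertext

# Client side again: only the holder of the key can decrypt.
recovered = xor_bytes(stored_on_server, key)
assert recovered == plaintext
```

The point is not this particular scheme but the asymmetry it illustrates: a determined criminal needs only a few lines of code to put content beyond a provider’s reach, while a wiretap mandate burdens everyone else.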

As security researcher Matt Blaze and Susan Landau noted here in Wired, criminals, rival nation states, and rogue hackers routinely seek out and exploit vulnerabilities in our computers and networks … much faster than we can fix them. We don’t need to add wiretapping interfaces as new and “particularly juicy” targets to this cybersecurity landscape.

What we need to do is urge the FBI to find other ways to gather the evidence it needs — approaches that don’t indiscriminately compromise user security and online innovation. Instead of looking to Congress to add new vulnerabilities, the Bureau could focus on becoming better hackers of existing systems (for example, by exploiting bugs as backdoors).

In short, the FBI proposal is all cost for little to no benefit. The Obama administration needs to dump this ill-conceived scheme on the trash heap where it belongs.
http://www.wired.com/opinion/2013/05...nd-ridiculous/





Concerns Arise on U.S. Effort to Allow Internet ‘Wiretaps’
Somini Sengupta

Surveillance can be a tricky affair in the Internet age.

A federal law called the Communications Assistance for Law Enforcement Act allows law enforcement officials to tap a traditional phone, as long as they get approval from a judge. But if communication is through voice over Internet Protocol technology — Skype, for instance — it’s not as simple.

That conversation doesn’t pass through a central hub controlled by the service provider. It is encrypted — to varying degrees of protection — as it travels through the Internet, from the caller’s end to the recipient’s.

The Federal Bureau of Investigation has made it clear it wants to intercept Internet audio and video chats. And that, according to a new report being released Friday by a group of technologists, could pose “serious security risks” to ordinary Internet users, giving thieves and even foreign agents a way to listen in on Americans’ conversations, undetected.

The 20 computer experts and cryptographers who drafted the report say the only way that companies can meet wiretap orders is to re-engineer the way their systems are built at the endpoints, either in the software or in users’ devices, in effect creating a valuable listening station for repressive governments as well as for ordinary thieves and blackmailers.

“It’s a single point in the system through which all of the content can be collected if they can manage to activate it,” said Edward W. Felten, a computer science professor at Princeton and one of the authors of the report, released by the Center for Democracy and Technology, an advocacy group in Washington.

“That’s a security vulnerability waiting to happen, as if we needed more,” he said.

The report comes as federal officials say they are close to reaching consensus on the F.B.I.’s longstanding demand to be able to intercept Internet communications. Under the plan, companies that are unable to modify their operations to comply with new wiretap orders would be subject to a fine. The White House has yet to review it.

Neither the F.B.I. nor White House officials have provided technical details of how the Web service providers would comply.

Law enforcement officials regularly seek information from Web companies about the communications of their users, from e-mail messages to social network posts and chats.

Microsoft, which owns Skype, reported receiving 4,713 requests in 2012 from law enforcement, which covered just over 15,000 Skype accounts; the company said it released only “noncontent data, such as a Skype ID, name, e-mail account, billing information and call detail records” if an account is connected to a telephone number.

Skype is a Luxembourg company, even after its acquisition by Microsoft, of Redmond, Wash. United States wiretap law does not apply to the company.

Along with Mr. Felten, who served as a technologist with the Federal Trade Commission until recently, the report’s authors include the cryptographer Bruce Schneier and Phil Zimmermann, who created what has become the most widely used software to keep e-mails private.
https://www.nytimes.com/2013/05/17/b...-wiretaps.html





Interpol Filter Scope Creep: ASIC Ordering Unilateral Website Blocks
Renai LeMay

The Federal Government has confirmed its financial regulator has started requiring Australian Internet service providers to block websites suspected of providing fraudulent financial opportunities, in a move which appears to also open the door for other government agencies to unilaterally block sites they deem questionable in their own portfolios.

The news came tonight in a statement issued by the office of Communications Minister Stephen Conroy, following a controversial event in April which saw some 1,200 websites wrongfully blocked by several of Australia’s major Internet service providers.

On April 12, Melbourne publication the Melbourne Times Weekly reported that more than 1,200 websites, including one belonging to independent learning organisation Melbourne Free University, might have been blocked by “the Australian Government”. At the time, Melbourne Free University was reportedly told by its ISP, Exetel, that the IP address hosting its website had been blocked by Australian authorities. The block lasted from April 4 until April 12.

Subsequently, the US-based Electronic Frontier Foundation issued a media release linking the issue to the Labor Federal Government’s various Internet filtering initiatives, especially the voluntary filtering scheme currently implemented by a number of major ISPs including Telstra, Optus and Vodafone.

In November last year, Communications Minister Stephen Conroy formally dumped the Government’s highly controversial mandatory Internet filtering scheme, instead throwing his support behind a much more limited scheme under which Australian ISPs voluntarily implement a filter that Telstra, Optus and one or two other ISPs had already put in place. Vodafone has also implemented the filter, and the process is also believed to be under way at other ISPs such as iiNet.

The ‘voluntary’ filter only blocks a set of sites which international policing agency Interpol has verified contain “worst of the worst” child pornography — not the wider Refused Classification category of content which Conroy’s original filter had dealt with. The instrument through which the ISPs are blocking the Interpol list of sites is Section 313 of the Telecommunications Act. Under the Act, the Australian Federal Police is allowed to issue notices to telcos asking for reasonable assistance in upholding the law. It is believed the AFP has issued such notices to Telstra and Optus to ask them to filter the Interpol blacklist of sites.

The use of the Section 313 notices in this manner is believed to be the first occasion when the legislation has been interpreted to allow the Australian Federal Police to request ISPs to block website addresses. Some ISPs have questioned the legality of the use of the legislation in this manner, with some — such as one ISP believed to be major telco TPG — going so far as to refuse to follow the AFP’s requests to block websites.

Over the past week, a number of different Federal Government agencies involved in Internet regulation, including the Attorney-General’s Department, the Australian Federal Police and the Australian Communications and Media Authority, have denied involvement in the April block. However, tonight Senator Conroy’s office revealed that the incident that resulted in Melbourne Free University and more than a thousand other sites being blocked originated from a different source — financial regulator the Australian Securities and Investments Commission.

On 22 March this year, ASIC issued a media release warning consumers about the activities of a cold-calling investment scam using the name ‘Global Capital Wealth’, which ASIC said was operating several fraudulent websites — www.globalcapitalwealth.com and www.globalcapitalaustralia.com. In its release on that date, ASIC stated: “ASIC has already blocked access to these websites.”

The regulator today did not immediately respond to a request for comment clarifying that statement, but Conroy’s office tonight confirmed the agency had, as the Australian Federal Police has previously done for the limited Interpol filter, issued a notice under Section 313 of the Telecommunications Act for “an IP address that was linked to a fraud website” — presumably the websites belonging to the group describing itself as Global Capital Wealth.

“ASIC believed that the website in question was operating in breach of Australian law, specifically section 911a of the Corporations Act 2001,” Conroy’s office said. “Under Section 313 of the Telecommunications Act, websites that breach Australian law can be blocked.”

“Melbourne Free University’s website was hosted at the same IP address as the fraud website, and was unintentionally blocked. Once ASIC were made aware of what had happened, they lifted the original blocking request. The government is working with enforcement agencies to ensure that Section 313 requests are properly targeted in future.”

Anomalies in the website block occurred, according to Conroy’s office, because of the differing nature of the methods which the two agencies — ASIC and the AFP — have used in their Section 313 notices. Users who attempt to access websites blocked under the AFP’s limited child abuse filtering scheme are directed to a website notifying them that the site has been blocked and how they can, if necessary, appeal such a block. However, ASIC’s process merely blocked the websites suspected of hosting fraudulent material, leaving visitors to sites such as Melbourne Free University’s in the dark as to what had happened. In addition, the AFP process blocks actual website addresses — whereas the ASIC process blocks IP addresses.
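The address-versus-IP distinction is what produced the collateral damage: on commercial shared hosting, many unrelated sites sit behind one IP address, so an IP-based block sweeps them all up. A minimal sketch (entirely hypothetical domains and documentation-range addresses) of the difference:

```python
# Hypothetical shared-hosting map: several unrelated sites on one IP.
hosting = {
    "fraud-site.example":      "203.0.113.7",
    "free-university.example": "203.0.113.7",  # same shared host
    "unlucky-blog.example":    "203.0.113.7",  # same shared host
    "unrelated-site.example":  "198.51.100.9",
}

def blocked_by_url(site: str, url_blacklist: set) -> bool:
    """AFP-style block: matches only the listed website address."""
    return site in url_blacklist

def blocked_by_ip(site: str, ip_blacklist: set) -> bool:
    """ASIC-style block: matches every site on a listed IP address."""
    return hosting[site] in ip_blacklist

url_blacklist = {"fraud-site.example"}
ip_blacklist = {hosting["fraud-site.example"]}

url_blocked = [s for s in hosting if blocked_by_url(s, url_blacklist)]
ip_blocked = [s for s in hosting if blocked_by_ip(s, ip_blacklist)]

print(url_blocked)  # just the target site
print(ip_blocked)   # the target plus every innocent co-hosted site
```

One listed IP here takes down three sites, two of them innocent — the same mechanism, at larger scale, that reportedly swept up more than 1,200 sites alongside the fraud sites ASIC targeted.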

ASIC’s use of Section 313 of the Telecommunications Act in this manner appears to be the first known occasion on which the agency — or any agency other than the AFP — has done so, and appears to open the door for any Federal Government department or agency to request that Australian ISPs block websites believed to contain illegal material.

However, some segments of Australia’s technical and legal communities have long harboured concerns about using the legislation in this manner.

In contrast with Labor’s previous mandatory Internet filtering policy (which was to have been administered by the Australian Communications and Media Authority and which was dumped last year) there is currently no known civilian oversight of the Section 313 notifications scheme, no method of appeal and no way of ascertaining whether and why sites have been blocked under the legislation.

There is no mechanism in place to ensure that owners of web sites who have those sites blocked by Section 313 notices — deliberately or inadvertently, as happened with the Melbourne Free University case — are notified of the reason their sites have been blocked.

Furthermore, Section 313 of the Telecommunications Act does not specifically deal with certain breaches of the law. In fact, it only requires that ISPs give government officers and authorities (such as police) reasonable assistance in upholding the law. Because of this, there appears to be nothing to stop the Australian Federal Police, ASIC or any other agency from issuing much wider notices under the Act to ISPs, requesting they block categories of content which may be technically illegal in Australia but not blocked yet.

A number of sites which were on the borderlines of legality — such as sites espousing a change of legislation regarding euthanasia, for example — were believed to be included as part of the blacklist associated with the Federal Government’s much wider mandatory filtering policy. It is not clear what safeguards exist to prevent the Section 313 notification scheme from being extended to such extra categories of content.

Because of this, the usage of Section 313 of the Telecommunications Act which ASIC applied in March appears to represent something of a “back door” for Australian authorities to request web sites be blocked from viewing by Australians — but with no oversight of the process, no appeals mechanism, and no transparency to the public or interaction with the formal justice system.

Long-time readers of Delimiter will note that I have for several years been warning that if the Australian Federal Police started using Section 313 of the Telecommunications Act to block child abuse websites, there would be nothing to stop that newly re-interpreted legislation from being used by the AFP or other agencies to block whatever other websites they felt like on the day.

In fact, I remember getting into a very loud and angry argument with then-Internet Industry Association chief executive Peter Coroneos — who helped develop the Interpol filter/AFP process — about the potential for scope creep once Section 313 of the Act started to be used in this manner. I hope Coroneos will now admit that he helped open Pandora’s Box for Government Internet filtering.

It is very easy to foresee that other Federal Government agencies would like to follow the example set by ASIC and quietly use Section 313 notices to block other sites on the borderlines of legality. The Department of Health and Ageing may like to block pro-euthanasia sites, for example, or sites promoting illegal drug use. The Australian Taxation Office may like to block sites promoting methods of tax evasion. The Department of Defence may like to block sites which expose details of Australian military misconduct. And so on. The list is endless, and I am sure that there are at least a couple of agencies closely examining what ASIC has done here, with a view to potentially doing the same in their own portfolios in future. Hell, the ASIC case may just be the tip of an existing iceberg; the example where someone actually got caught, because of a false positive.

The questions about the lack of transparency and oversight involved in such a process should be obvious to all concerned. It is very close to a universally accepted truth that the Australian public does not want government authorities to be able to unilaterally order websites blocked from Australian view without (at least) oversight of that process and a robust appeals process.

There are also questions here about how such a process may interplay with the existing courts system. I would ask, for example, whether ASIC has actually concluded a legal case against the individuals behind the ‘Global Capital Wealth’ sites which it ordered blocked in March. If it has not, one wonders whether it is exceeding its authority in ordering those sites offline. The evidence collected by the regulator may, after all, not support its case that the sites are fraudulent. Where is the line? We’ve seen law enforcement authorities come unstuck in their accusations before, after all. That’s why Australia has a courts system — so that the claims of law enforcement can be tested, and not just taken for granted.

Let me finish this article by noting how disappointed I am in the personal integrity of all of the government public servants who allowed this situation to come about. In the course of my investigations into this matter over the past week, I contacted three of the key Federal Government departments and agencies concerned with Internet regulation — the ACMA, the Attorney-General’s Department, and the AFP.

In each case, each agency explicitly denied responsibility for the action which led to Melbourne Free University being unfairly blocked in April. However, in each case, each agency implicitly had knowledge of what had happened, but was unwilling to comment further on the issue. ASIC’s action has also completely blindsided Australia’s telcos, most of whom, having just gotten used to the Interpol filter, are right now wondering what the hell is happening and why they’re now being told by the financial regulator to filter a whole new category of content.

Eventually, Communications Minister Stephen Conroy came clean on the issue — most likely because I signalled I was determined to get to the bottom of the matter, and would pursue it through Freedom of Information requests if necessary, as I have done with previous government Internet filtering efforts.

However, coming clean on this kind of issue — unilateral government censorship of Australia’s Internet access, behind closed doors and with zero public transparency — is a little like owning up to being a serial philanderer. It lets people know what type of person you are, but it doesn’t solve the problem, and it won’t stop people feeling cheated.

The Australian public overwhelmingly rejected the Labor Federal Government’s previous attempt at a universal Internet filter. Now that filter is back: But it’s on questionable legal ground, it’s being done behind closed doors by anonymous public servants (remind you of the data retention process?), it’s already resulting in massive false positives and there’s no notification or appeals mechanism. Wonderful. But then again, don’t we trust the Government? Don’t we?
http://delimiter.com.au/2013/05/15/i...ebsite-blocks/





Bloomberg Admits Terminal Snooping
Amy Chozick

Reporters at Bloomberg News were trained to use a function on the company’s financial data terminals that allowed them to view subscribers’ contact information and, in some cases, monitor login activity in order to advance news coverage, more than half a dozen former employees said.

More than 315,000 Bloomberg subscribers worldwide use the terminals for instant market news, trading information and communication. Reporters at Bloomberg News, a separate division from the terminal business, were nonetheless told to use the terminals to get an edge in the competitive world of financial journalism where every second counts, according to these people, who spoke on the condition of anonymity because of the company’s strict nondisclosure agreements.

The company acknowledged that at least one reporter had gained access to information on Goldman Sachs after the bank complained to the company last month. On Sunday, Ty Trippet, a Bloomberg spokesman, said that “reporters would not have been trained to improperly use any client data.”

Matthew Winkler, editor in chief of Bloomberg News, underscored that the practice was at one time commonplace. In an editorial published on Bloomberg View late Sunday night, he said the practice of allowing reporters access to limited subscriber information dated back to the inception of the news arm of the giant financial information company founded by Michael R. Bloomberg.

“The recent complaints relate to practices that are almost as old as Bloomberg News,” Mr. Winkler said. “Some reporters have used the so-called terminal to obtain, as The Washington Post reported, ‘mundane’ facts such as logon information.”

It was a striking admission from the man who wrote “The Bloomberg Way: A Guide for Reporters and Editors,” considered among the quintessential handbooks on ethical business reporting.

In his editorial, Mr. Winkler apologized for the practices that had taken place in the newsroom for decades. “Our clients are right,” he said. “Our reporters should not have access to any data considered proprietary. I am sorry they did. The error is inexcusable.”

Bloomberg’s more than 2,400 journalists go through hours of compulsory training on how to use the superfast data-splicing terminals, and several former employees said that training included informal tips on how to use a function called UUID to locate sources who were also subscribers.

The sheer amount of data available on the terminals created a dynamic in the Bloomberg newsroom in which some reporters favored breaking news over strict subscriber confidentiality, former reporters said.

“There was always a discussion in the newsroom of how to use the terminals to break news,” said one former Bloomberg journalist. “That’s where it gets nuanced because I’m sure that in encouraging people to break news, Matt did not mean in this way,” this person added, referring to Mr. Winkler.

On Friday, Mr. Winkler reminded reporters of the company’s policy that prohibits journalists from discussing nonpublic Bloomberg documents and proprietary information about the company and its clients in their reporting. Last month, he contacted Goldman Sachs to apologize after the bank had complained about the reporting technique.

Bloomberg reporters also are accused of monitoring JPMorgan Chase executives’ login information last summer, when the bank suffered a multibillion-dollar trading loss, according to people briefed on the situation. The bank never formally complained to Bloomberg representatives about the practice.

The Federal Reserve and Treasury Department are also investigating whether reporters tracked employees. Bloomberg terminals sit in the highest echelons of power — including central banks, rival news organizations, Congress and even the Vatican.

Daniel L. Doctoroff, chief executive of Bloomberg L.P., said that making limited customer data available to reporters was a “mistake” and that it would not happen again. The company said the functions that led to the controversy had been disabled in the newsroom last month. The company also appointed a senior executive to the newly created role of client data compliance officer. (Mr. Bloomberg stepped back from the company’s day-to-day operations when he became mayor of New York.)

Bloomberg executives have not denied that they knew some reporters turned to the terminals to monitor when subscribers, who are mostly traders and finance executives, had logged on. On less frequent occasions, reporters also monitored chats between those subscribers and customer service representatives. Reporters could not see a subscriber’s specific securities, trades or which news articles they had read.

Mr. Winkler did not expand on who may have been affected. He said the practices were a legacy left over from when reporters were considered part of the sales operation. Nearly 85 percent of the company’s $7.9 billion in 2012 revenue came from its terminal business.

The news operation was assembled in the 1990s primarily as a way to sell more terminals. Reporters regularly accompanied sales representatives to sell subscribers on the wonders of the terminal, the desktop computers that provide a constant stream of headlines and data and sit upon many traders’ desks.

The company has said the close relationship between journalists and the sales team meant there was a reason to allow reporters access to limited subscriber data to help with customer service and to customize news to subscribers’ needs.

The UUID function at the center of the current controversy provided background on an individual subscriber, including contact information, and when the subscriber had last logged on. An internal Bloomberg review, conducted after Goldman Sachs complained last month that a reporter had inquired about a partner’s employment status after tracking the executive on UUID, revealed that “several hundred” reporters had used the technique. Mr. Trippet, the company spokesman, said no reporters had been fired.

In 2011, Erik Schatzker, a host of Bloomberg Television’s “Market Makers” show, said on the broadcast that he had used a terminal subscriber’s data to report on a finance executive. The episode set off concerns inside the newsroom.

After Mr. Schatzker made the remarks, which were first reported by BuzzFeed on Saturday, Bloomberg conducted an internal review. Executives thought the terminal functions that allowed reporters to see subscriber data had been disabled, said one person briefed on the review.

On Friday, Mr. Doctoroff also tried to ease subscribers’ concerns. “Reporters only have access to the same customer relationship data available to our clients,” he said, adding, “Client trust is our highest priority.”
https://www.nytimes.com/2013/05/13/b...-snooping.html





Bloomberg's Top Editor Calls Client Data Policy 'Inexcusable'

Matthew Winkler, editor-in-chief of Bloomberg News, apologized on Monday for allowing journalists "limited" access to sensitive data about how clients used Bloomberg terminals, saying it was "inexcusable", but that important customer data had always been protected.

His statement came as the European Central Bank said it was in "close contact with Bloomberg" about any possible breaches in the confidentiality of data usage. The U.S. Federal Reserve and the Bundesbank, Germany's central bank, said they were examining whether there could have been leaks. A source briefed on the situation said the U.S. Treasury Department was looking into the question as well.

The practice of giving reporters access to some data considered proprietary - including when a customer looked into broad categories such as equities or bonds - came to light in media reports last week. In response, the parent company, Bloomberg LP, said it had restricted such access last month after Goldman Sachs Group Inc (GS.N) complained.

Winkler, in an editorial posted on Bloomberg.com, said: "Our reporters should not have access to any data considered proprietary. I am sorry they did. The error is inexcusable."

Goldman flagged the matter to Bloomberg after the bank found that journalists had access to more information than it had known and argued the information was sensitive and should not be seen by reporters.

The news triggered fears at Wall Street firms about the privacy of sensitive data, as well as at the Fed and other U.S. government departments that use Bloomberg terminals.

In the editorial, Winkler sought to clarify what exactly Bloomberg journalists could see. He said they had access to a user's login history, as well as "high-level types of user functions on an aggregated basis, with no ability to look into specific security information."

He said the practice dates back to the early days of Bloomberg News in the 1990s, when reporters used the terminal to find out what kind of news coverage customers wanted.

"As data privacy has become a central concern to our clients, we should go above and beyond in protecting data, especially when we have even the appearance of impropriety," Winkler wrote. "And that's why we've made these recent changes to what reporters can access."

BLOOMBERG'S BOOK

Data security was an issue that company founder Michael Bloomberg wrestled with in his 1997 book, "Bloomberg by Bloomberg." In general, he wrote, restricting access to proprietary information can be an ineffective exercise.

Often "the whole data security issue is overblown at most corporations that think they have a lot to guard," wrote Bloomberg, who has been mayor of New York City since 2002. "Pilferage and leakage are costs of doing business. Live with them. While some restrictions make sense, many are ridiculous."

In his statement, Winkler emphasized that Bloomberg News "has never compromised the integrity of that data in our reporting" and said Bloomberg journalists are subject to standards that are among the most stringent in the business.

"At no time did reporters have access to trading, portfolio, monitor, blotter or other related systems," he said. "Nor did they have access to clients' messages to one another. They couldn't see the stories that clients were reading or the securities clients might be looking at."

Even though the information available to Bloomberg reporters was limited, senior Goldman executives argued that a trader could profit just by knowing what type of securities high-profile users were looking at, or what questions a government official raised with Bloomberg's help desk, people with direct knowledge of their views said.

The issue made people inside the bank uncomfortable with even the Bloomberg marketing and sales team's access to information, the sources said.

In disclosing the new restrictions set last month, Chief Executive Daniel Doctoroff said Bloomberg had created the position of client data compliance officer to ensure that its news operations never have access to confidential customer data.

Closely held Bloomberg, which competes with Thomson Reuters (TRI.TO) (TRI.N), the parent of Reuters News, gets the bulk of its revenue from terminal sales to financial institutions.

Bloomberg has more than 315,000 terminal subscribers globally, with each Bloomberg terminal costing more than $20,000 a year. Last year it posted revenue of $7.9 billion.

In a statement on Friday, Thomson Reuters said its news division operates "completely independently, with reporters having no access to non-public data on its customers, especially any data relating to its customers' use of its products or services."

(Reporting By Frank McGurty; Editing by Ciro Scotti and Leslie Gevirtz)
http://www.reuters.com/article/2013/...94C0LD20130513





The Bloomberg 'Snooping Scandal' is Completely Overblown

Bloomberg was using pretty standard 'big data' on users to get a slight edge. It's exactly what Wall Street tries to do
Heidi Moore

Bloomberg News has been accused of violating the privacy of its users by collecting their personal contact information and usage practices to fuel its stream of financial news. After a few days of controversy, the news company has backed down, calling its practices both, in origin, "as old as Bloomberg itself" and, in the present day, "inexcusable".

The reaction from Wall Street traders is more often along the lines of "meh". One trader, as an example, noted, "[The journalists] are doing functions that any other Bloomberg users would do." Few people predict that any users will leave Bloomberg over this.

Still, among the chattering classes, as far as journalism stories go, this seems like a juicy one. Bloomberg is an industry leader in financial news, and its consistent scoops are the frustration of rivals like Dow Jones and Reuters, who struggle to lure financial clients in the same way that Bloomberg does, lining up nearly all of Wall Street to pay $20,000 a year for each of its trading terminals. Its rivals haven't succeeded nearly as well.

Bloomberg is also, importantly, something of a frustration to financial firms, including Goldman Sachs, whose public relations teams delight in complaining about the aggressiveness of Bloomberg reporters. Bloomberg has also moved onto the turf of some of these financial firms, providing research and opinion that can often compete with the daily punditry those firms sell. It also has a service, Bloomberg Tradebook, that acts as a broker for trades.

It is the height of irony that those financial firms, who make their living by collecting and slicing data on client trades, are complaining that Bloomberg was making use of some data on their traders.

So what did Bloomberg reporters do? Reporters allegedly had enough access to see a client's contact information and the last time that client logged into a Bloomberg terminal. This could, in turn, allow them to call sources at home or suspect that an important Wall Street figure had been fired.

To a consumer, this seems like a violation. Many of us are able to conduct our lives with little realization of how much information Big Data has on us, and that's a good thing: if we knew how much information we were giving away, we might end up too paralyzed to do much of anything at all.

Yet we know, on some level, that our privacy is an illusion. Facebook collects your searches, as you tap in the names of everyone from frenemies to exes; Google tracks your YouTube viewing to trace every cat video that crosses your screen; when you give your zip code to a cashier, you're actually giving his company the path to your home address and personal mailbox. Your phone calls are at the disposal of the federal government, as are your emails.

On Wall Street, this scrutiny is magnified. The "data" aspect of Wall Street is married to its "social" aspect. In simple terms, this means two things: the first is that people talk, and will always talk, no matter what regulators do to prevent them. Talking is a way of getting information, both on Wall Street and in reporting. The second aspect is that the goal of working in finance is to amass as much information as possible and then use it to make money.

It's hard out there for investors. In a world where 85% of hedge funds and 65% of mutual funds underperform the market, data – and talk – provides an edge. The ability to collect and slice data is what makes financial companies useful to their clients. This data can come in the form of research, or it can come through an analysis of trading activity.

But many at Bloomberg didn't feel as if they were doing anything particularly wrong when they were, in the company's lingo, "harnessing the power of the terminal". In this sense, Bloomberg reporters acted like traders – albeit traders of information rather than stocks or bonds.

So it's important to separate some key points: were Bloomberg reporters illegally or unethically using the information available to them? And if so, did it drive their journalism? What is the pragmatic effect of the terminal quasi-scandal?

On the first point, there seems to be no evidence of unethical behavior from reporters, at least. It's easy to see that if the information was made available to Bloomberg reporters as part of their work tools, they wouldn't question its use. Wall Street, which uses every available bit of information itself, understands this. As one Wall Street trader put it to me: "it's not difficult to understand why someone with access to that information would use it. Bloomberg captures everything."

On the second point – that of driving journalism – it's likely that the claims of a Bloombergian information monopoly are highly overblown. Any decent financial reporter would be dubious that Bloomberg reporters could have gained much high-quality journalism from the terminal alone. There's no evidence that really valuable scoops – such as those on mergers and acquisitions – could have come from mining terminal information. Despite the hype, the information available from the terminal was poor relative to what a reporter would need to actually construct any kind of useful story. It could provide leads, perhaps, but not replace the hard work of reporting.

Bloomberg's genuinely award-winning journalistic work – on health reform and other issues – was based on shoe-leather reporting and did not and could not have come through mining the terminal. Amusingly, JP Morgan complained that Bloomberg reporters used terminal information to judge that some traders had been let go after the London Whale debacle. Bloomberg also first reported that the multibillion-dollar London Whale trade even existed, which is a much bigger and more important story, and was clearly not information that could be gathered from a terminal.

So why did Bloomberg apologize? The key was in editor-in-chief Matthew Winkler's message about the "appearance of impropriety". Bloomberg started out as a sales business, with a small sideline in news. Winkler joined to create a full-fledged news business, and Bloomberg, over the past five years, has transitioned to embracing a full news operation complete with veterans from the Wall Street Journal and the New York Times. For a sales business, access to customer information is essential. For a news business, it looks like prying.

Bloomberg is, however, above all a data company. There is something that Bloomberg does differently from other news organizations.

Customers of Bloomberg sign up specifically to make money by trading. They know they are living in a Big Data world, where their usage is tracked, because they use the Bloomberg terminal for just that purpose: to track the markets and each other. They are endlessly interested in obscure uses of the terminal. Every month, Bloomberg Markets, the company's magazine for the financial industry, includes features on little-known terminal commands. Bloomberg mines every kind of data, and that is why customers are willing to fork over the money for a terminal. They also know that they themselves become a kind of Soylent Green to provide data to others. This is largely fine to them because Bloomberg cannot interfere with their trades. Banks have far more to fear from the prying of other financial companies than they do from any data provider.

Then there is the other aspect: the social one.

What makes the terminals so popular on Wall Street is not just their comprehensive data, but their social value. The Bloomberg terminal is like the Facebook of Wall Street; a social tool that connects the financial world. The trading terminal has one major advantage: Instant Bloomberg, a widely popular chat/messaging system that Bloomberg sells as instant access to important people, and which logs 200m messages – or 15m to 20m chats – a day among 310,000 subscribers. Here's the ad copy on it, which gives you a sense of what Bloomberg is really selling through its terminals: influence.

"Your Bloomberg Professional service account gives you immediate membership in a community of 310,000 of the world's most influential decision makers. We connect you with this robust network spanning finance, business and government – for a distinct advantage when generating ideas, conducting research and finding trading partners."

Bloomberg reporters, as well as Bloomberg clients, could make use of chat. Because of this, it's easy to see why Bloomberg reporters would use login or contact information: largely, they understood the terminal as a social tool as well as an analytical one. If Facebook tells you the last time your friends logged in, why is it wrong for a Bloomberg terminal to tell you when your contacts last logged in? Many would say it is not.

Either way, it is unlikely that this quasi-scandal, while it keeps media pundits busy, will ever really impact Bloomberg's business.

Wall Street never stops doing business with you because you know too much; in fact, it's quite the opposite. The more you know, the more like the all-knowing Borg that you seem, the more people will flock to your door hoping to get some of the information you hold. This is why Goldman Sachs' "vampire squid" years, in which it became famous for knowing everything about every business, only helped the firm's profits.

Bloomberg has made its service an inevitability on Wall Street; and if its rivals are honest, they will admit that they are kicking themselves for not having done the same.
http://www.guardian.co.uk/commentisf...ndal-overblown





The New Yorker: Introducing Strongbox
Amy Davidson

This morning, The New Yorker launched Strongbox, an online place where people can send documents and messages to the magazine, and we, in turn, can offer them a reasonable amount of anonymity. It was put together by Aaron Swartz, who died in January, and Kevin Poulsen. Kevin explains some of the background in his own post, including Swartz’s role and his survivors’ feelings about the project. (They approve, something that was important for us here to know.) The underlying code, given the name DeadDrop, will be open-source, and we are very glad to be the first to bring it out into the world, fully implemented.

Strongbox is a simple thing in its conception: in one sense, it’s just an extension of the mailing address we printed in small type on the inside cover of the first issue of the magazine, in 1925, later joined by a phone number (in 1928—it was BRyant 6300) and e-mail address (in 1998). Readers and sources have long sent documents to the magazine and its reporters, from letters of complaint to classified papers. (Joshua Rothman has written about that history and the magazine’s record of investigative journalism.) But, over the years, it’s also become easier to trace the senders, even when they don’t want to be found. Strongbox addresses that; as it’s set up, even we won’t be able to figure out where files sent to us come from. If anyone asks us, we won’t be able to tell them.

How does that work? The graphic below maps it out; multiple computers, thumb drives, encryption, and Tor are all involved. We’ll be looking forward to what we find in Strongbox, with the same curiosity our first editors had almost ninety years ago.
http://www.newyorker.com/online/blog...ring-tool.html





Bunny

Bunny is intended to act as a layer 1/2 technology that attempts to hide its wireless mesh communication traffic. Bunny wraps all data in and out in a layer of obfuscation. It does this by passively listening to the local wireless traffic and building a model of 'average' traffic. Using this model, it then hides small snippets of data within various fields of the 802.11 protocol that are either poorly defined or prone to contain data that mutates a lot. These fields include, but are not limited to, vendor data, data packets of encrypted networks, and duration fields.
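The technique described above can be sketched in a few lines of Python. This is a toy illustration only, not Bunny's actual code: the sample vendor fields, the one-byte snippet index, and the chunking scheme are all assumptions made up for the example.

```python
# Toy sketch of Bunny-style obfuscation: model "average" traffic from
# observed frame-field lengths, then split a covert payload into
# snippets sized so each one blends in with a normal-looking field.
from collections import Counter

def build_model(observed_vendor_fields):
    """Model 'average' traffic as the most common vendor-field length."""
    lengths = Counter(len(f) for f in observed_vendor_fields)
    return lengths.most_common(1)[0][0]

def embed(payload: bytes, typical_len: int):
    """Split the payload into snippets no larger than the typical field
    size, reserving one leading byte per snippet as a sequence index."""
    chunk = max(1, typical_len - 1)
    return [bytes([i]) + payload[i * chunk:(i + 1) * chunk]
            for i in range((len(payload) + chunk - 1) // chunk)]

def extract(fields):
    """Reassemble the payload from received snippets, in index order."""
    ordered = sorted(fields, key=lambda f: f[0])
    return b"".join(f[1:] for f in ordered)

# Two observed vendor-specific fields, both 6 bytes long.
observed = [b"\x00\x50\xf2\x01\x01\x00", b"\xdd\x09\x00\x10\x18\x02"]
typical = build_model(observed)
snippets = embed(b"covert message", typical)
assert extract(snippets) == b"covert message"
```

The real tool injects such snippets into live 802.11 frames via a monitor-mode interface; this sketch only shows the model/chunk/reassemble idea.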

For a full, whitepaper-style description of Bunny, see proposal.txt.

You need a monitor/injection-capable wireless chipset. Please check the aircrack website for compatible cards.
https://github.com/mothran/bunny#readme





High Court Rules Video to be Taken Down from YouTube, Facebook

Eoin McKeogh took the case after a video clip accused him of taxi fare evasion

The High Court has ordered internet companies to permanently remove on a worldwide basis a video clip falsely accusing an Irish student of taxi fare evasion.

Mr Justice Michael Peart made the order in a case brought by Eoin McKeogh, 23, a DCU student, against YouTube, Google, Facebook and a number of websites over the video and accompanying material.

However, he said it was not clear whether this was possible, or how it might be done.

The judge ordered that experts for Mr McKeogh meet with experts for the internet companies on how to go about taking down the material permanently on a worldwide basis.

The experts should be nominated within 14 days and the meeting between them take place within the following fortnight.

When reports have been prepared and exchanged, the matter can come back before the court to "consider the position which emerges", the judge said.

The material wrongly identified Mr McKeogh as a man leaving a taxi without paying the fare in Monkstown, Dublin.

Mr McKeogh sought a mandatory injunction requiring the internet companies to permanently remove the video and other material.

Mr Justice Peart previously found Mr McKeogh was grossly defamed in the video because he was incontrovertibly not the person in it.

He gave this ruling when refusing a separate bid by Mr McKeogh to stop newspapers identifying him.

In dealing with his injunction application against the internet companies today, pending the full hearing of his action for damages over the matter, the judge said that after the taxi driver involved posted the video asking if anyone could identify the person in his cab, one person wrongly named Mr McKeogh, who was in Japan at the time.

There followed, the judge said, "a miscellany of the most vile, crude, obscene and generally obnoxious comments" about Mr McKeogh on YouTube and Facebook.

The clip went viral and "all manner of nasty and seemingly idle minds got to work and, as seems to happen with apparent impunity nowadays on social media sites", posted whatever vile and abusive thing first came into their "vacant, idle and meddlesome heads", the judge said.

Mr McKeogh, having returned from Japan, tried to have the material removed, which was no easy task even though Google, in all its guises, and Facebook claim that their sites contain an effective self-delete facility to allow him to remove it, he said.

Mr McKeogh attempted to do so before taking his court action, but the self-delete facility was limited in scope; the material, for instance, remains viewable and accessible from abroad.

While the defendants had argued Mr McKeogh should not get an injunction because he had not revealed in his initial court papers that he tried the self-delete facility, the judge was satisfied it did not affect the genuineness of his claim.

Given Mr McKeogh's clear innocence, it was a surprise to the judge that the defendants did not assist him more in getting the material removed.
http://www.rte.ie/news/2013/0516/450...from-websites/





Netflix Cuts Back On Expiration Dates After 'Streamaggedon'

Company says that to combat confusion, it has altered its API to prevent third-party tools from broadcasting potentially inaccurate expiration dates for streaming movies.
Steven Musil

In the wake of disappointment and confusion caused by Netflix's "streamaggedon" movie purge, the rental service has made changes to its API that will make it harder for third-party tools to determine when titles will expire.

The revelation late last month that hundreds of classic movies, including Woody Allen's "Stardust Memories" and the James Bond hits "Dr. No" and "Goldfinger," would soon vanish from movie fans' instant streaming queues caused a minor uproar that some in the media dubbed "streamaggedon." A Netflix spokesperson said that both the number of titles and the level of studio involvement were inaccurate and instead said that the purge was part of a normal ebb and flow on the streaming service due to licensing contracts for exclusive content.

To prevent such confusion in the future, Netflix announced late Monday that it will alter the programming interface to prevent the movie expiration dates from showing up in third-party tools such as InstantWatcher.com, which provides a searchable listing of the on-demand Netflix movies. The change means that one of InstantWatcher.com's most popular features, known as "Expiring Soon," will no longer work.

Information listed in such tools is often inaccurate due to short-notice changes in content availability, Netflix said late Monday on its developers blog:

Starting today, we will no longer provide expiration dates for any of our titles in the public API. We will continue to publish the field to the REST API and the catalog index file to minimize the likelihood of breaking applications that use it, although all titles will now have "1/1/2100" as the date value.

We are making this change because the expiration date can be inaccurate as a result of frequent, often last minute, changes in content flow.
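For third-party developers, the practical consequence of the announcement above is that the "1/1/2100" date is now a sentinel meaning "no expiration published". A minimal client-side sketch, assuming a hypothetical 'M/D/YYYY' string field (the field name and format here are illustrative, not Netflix's actual REST schema):

```python
from datetime import date

# Sentinel value Netflix says it will publish for all titles.
SENTINEL = date(2100, 1, 1)

def parse_expiration(value: str):
    """Parse an 'M/D/YYYY' expiration string; return None when the
    sentinel date indicates no real expiration is available."""
    month, day, year = (int(part) for part in value.split("/"))
    parsed = date(year, month, day)
    return None if parsed == SENTINEL else parsed

assert parse_expiration("1/1/2100") is None
assert parse_expiration("6/30/2013") == date(2013, 6, 30)
```

A tool like InstantWatcher.com's "Expiring Soon" list would see every title map to None under this scheme, which is exactly why the feature stops working.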


However, the service promises that users will still have access to each movie's streaming expiration date via each individual title's page.
http://news.cnet.com/8301-1023_3-575...streamaggedon/





Sandvine: Netflix Owns One-Third of North American Traffic at Peak, Has Doubled its Mobile Share in 12 months
Emil Protalinski

Once again, broadband Internet service tracking firm Sandvine has released its latest report for North America, and once again, Netflix is ruling Internet usage. This time, however, there is a small indication that Netflix is gaining on the mobile Web as well.

Sandvine says that the dominance of real-time entertainment is due “in large part” to the continued market leadership of Netflix. The company’s streaming service accounts for almost a third of peak downstream traffic on fixed networks:

As you can see, during the Internet's peak period, Netflix accounts for 32.3 percent of downstream traffic in North America — much more than any other single site or service. It's worth noting that Netflix has maintained this one-third mark for a few years now, and its share actually decreased recently.

Yet Sandvine says this doesn’t matter:

While we observed that their share of traffic decreased by a fraction of a percent since our 2H2012 study, it should not be interpreted as a decline in the popularity of the service at the expense of their competitors. In fact, competing pay-video services such as Amazon (1.31%) and HBO Go (0.34%) saw their relative share decline in a greater amount than that of Netflix.

The below pie graph puts the streaming firm’s first place status into better perspective.

No, the real story here is that Netflix's North American mobile data usage share almost doubled from 2.24 percent to 3.98 percent in the last 12 months. Sandvine believes that this number will increase going forward and that longer-form video as a whole will become more commonplace on mobile networks in North America.

Here’s the same breakdown as above, but for mobile:

Watching TV episodes and movies on your mobile screen is less than ideal, but it is becoming more and more viable. As smartphones grow in size and connections become faster, and as tablets become more ubiquitous, Netflix could find itself taking on YouTube in a few years.

By the next report, we wouldn’t be surprised if it muscled its way past Google Play and SSL traffic to move into the top five for downstream mobile traffic.
http://thenextweb.com/insider/2013/0...-in-12-months/





ABC to Live-Stream Its Shows via App
Brian Stelter

This week ABC will quietly revolutionize its app for iPhones and iPads with a button called “live.” Users around New York and Philadelphia will be able to live-stream all the programming from ABC’s local stations there, the first time that any major broadcaster has turned on such a technology.

The functionality will be featured at ABC’s upfront presentation for advertisers on Tuesday. It is, among other things, an attempt to keep up with the rapidly changing expectations of television viewers.

It also reflects the increasing role that subscriber fees play in the broadcasting business: the live stream will be available only to paying subscribers of cable and satellite providers, even though the stations’ signals are available free over the public airwaves.

ABC, a unit of the Walt Disney Company, said the live stream would be available in the other six cities where it owns stations sometime this summer. It is also in talks with the companies that own ABC’s more than 200 affiliates to make the “live” button work in their markets.

ABC finished the first of its affiliate deals, with Hearst Television, on Sunday afternoon; it said the live streams would work in Hearst’s 13 markets, including Boston and Pittsburgh, in the coming months.

The mobile app may prod the other broadcasters to follow ABC, much as they did seven years ago after the network started to stream full episodes of shows the morning after their TV premieres. ABC had originally planned to introduce a live-streaming feature for its apps in 2014, but decided to speed up that process this year.

“We keep a very close eye on consumer demand,” said Anne Sweeney, the president of the Disney-ABC Television Group, which includes the broadcast network. “We watch how people are behaving with their devices, and we really felt that we needed to move faster.”

Internally the project was code-named Project Acela, a reference to the high-speed train between Boston and Washington. A team led by Albert Cheng, Ms. Sweeney’s executive vice president for digital media, was given a deadline of May 14, the date of the ABC upfront. While Apple devices came first, other phones and tablets will be supported in the coming months, Mr. Cheng said. Securing the necessary rights from programming providers was laborious, but ABC will be able to stream all of its stations’ local newscasts, syndicated talk shows like “Katie,” and national series like “Grey’s Anatomy.”

The live-stream functionality comes at a time when ABC and its broadcast rivals are trying to keep the attention of audiences that are increasingly turning to cable channels and Internet streaming services like Netflix.

It gives ABC another talking point about how it is adapting to audience preferences; in this case, viewers will be able to carry “Good Morning America” with them as they move around the house in the morning, or tune into a weekend basketball game while out with friends. The live stream will work anywhere in a local market, the same way an old-fashioned TV antenna would.

During a demonstration of the app in her New York office on Friday, Ms. Sweeney said she was struck by how personalized television becomes when it is live-streamed to a person’s phone.

The app is also an implicit rebuttal to Aereo, the start-up backed by Barry Diller that is being sued by major station owners for streaming their signals to paying subscribers in New York. Ms. Sweeney reiterated her view that Aereo is illegal but said the plans for the app’s live-stream feature predated the service.

The app, to be named Watch ABC, in line with Disney's existing Watch Disney and Watch ESPN apps, will allow users to watch ABC shows on demand, as the network's previous app did. In the future, ABC will withhold its most recent TV episodes from the free versions of Hulu and ABC.com, further limiting access to paying subscribers of cable and satellite providers.

The mobile live stream will not carry the same ads as the television broadcast; instead, it will include the same sorts of digital ads as on ABC.com. This is in part because the Nielsen Company is not able to measure mobile viewing of live television yet.

"What you see here is the same live programming," Mr. Cheng said as he used the app, "but what we are doing during the commercial break is actually inserting new ads into the stream."

Over time, live-streaming of ABC stations could cannibalize big-screen viewing of those stations, but ABC could make up the difference through streaming ads. Disney’s chief executive, Robert A. Iger, pointed out this month that an increase in online advertising partly compensated for declines in TV ad revenue in the first quarter of the year.

Transmitting television via live stream requires new deals with traditional distributors, like Comcast, DirecTV and Verizon FiOS, and with the owners of ABC’s affiliates. Gaining Hearst’s backing ahead of Tuesday’s upfront was important to ABC because it lent some local support to the app effort.

David Barrett, the chief executive of Hearst Television, said in a statement on Sunday that his company, recognizing “that consumers want the ability to view our stations’ programming on any device that has a screen,” was eager to work with ABC on the app.

Some station owners may bristle at ABC’s arrangement, however, given the other mobile television efforts that are under way. In some cases, these efforts require a miniature antenna, or a dongle, to be plugged into the phone.

A technology company called Syncbak has a live-streaming app for phones that does not require a dongle, but currently, it can carry only local programming, not syndicated or national programming.

CBS took a minority stake in Syncbak last month, stoking talk that it might use the technology to live-stream the stations it owns.

The Fox network, a unit of the News Corporation, is also known to be working on live-streaming functionality for its stations, though it is not expected to be available soon.
https://www.nytimes.com/2013/05/13/b...ogramming.html





T-Mobile Drops Anti-Net Neutrality Lawsuit Filed By MetroPCS, Leaving Verizon on its Own
Adi Robertson

Cellphone carriers have generally met net neutrality proposals with varying levels of hostility, but Verizon and MetroPCS have been particularly belligerent: in 2011, they sued to overturn the FCC's then-newly adopted Open Internet rules. Since then, the two have consistently argued in court against the rules, which they've said undermine the freedom to run their networks as they see fit. But as T-Mobile finalizes its merger with MetroPCS, it's decided it doesn't want an old lawsuit to come with its new spectrum. In a court statement filed today, T-Mobile has moved to dismiss its claims in the appeal.

T-Mobile's decision to back out doesn't mean the suit is over. Verizon will continue its litigation, though the court document indicates that it knows about the move and will make no attempt to stop it. And the Open Internet rules themselves haven't stopped companies from pushing the boundaries of what constitutes blocking the competition. AT&T has maintained that it was in the right to block FaceTime over its network, and Comcast started favoring its own Xfinity TV app even after the rules took effect. But for now, Verizon is alone among US carriers in its legal challenge.
http://www.theverge.com/2013/5/17/43...rality-lawsuit





Lawmakers Push for Sale of Government Airwaves
Gautham Nagesh

Lawmakers and regulators are at odds over the best way to satisfy the public’s growing demand for wireless data. Both have made finding more spectrum to expand mobile broadband networks a priority, but members of Congress are pushing for the immediate sale of a valuable chunk of federal airwaves, while the Obama administration appears more concerned with long-term planning.

Most stakeholders agree on one key point: The growing consumer demand for online video and other mobile applications has created a significant burden for wireless carriers, who claim their networks are straining to meet capacity. Those networks run on slices of spectrum, or airwaves, that the wireless carriers purchase at auction from the Federal Communications Commission for their exclusive use.

The various bands of spectrum have different characteristics depending on their frequency. Bands at lower frequencies are capable of covering much longer distances and going through walls, making them best suited for mobile uses, such as cellphone networks. The spectrum between 400 megahertz and 3 gigahertz is generally considered the most valuable, with all cellphone bands falling within that range.

Much of the spectrum not already sold to wireless companies or broadcasters is held by the government. The band of federal spectrum considered most valuable by the wireless industry is the 1755-1850 MHz band, more specifically the lower 25 MHz from 1755 to 1780. That band is used for cellular networks in some other countries and is technically appealing to the wireless industry for a host of reasons.

According to a report from the National Telecommunications and Information Administration, 19 U.S. government agencies currently deploy a wide variety of communications and surveillance tools in the 1755-1850 MHz band. Those include law enforcement surveillance tools, satellite dishes, drones, tactical radios and a host of other military uses.

The difficulty and cost of moving those federal users is uncertain and varies greatly depending on the use. Satellites can remain in the air for 20 years to 25 years, while other equipment potentially could be shifted to different bands of spectrum or replaced in a shorter time frame. The NTIA’s report estimates that it would take a decade and at least $18 billion to clear all federal users from the 1755-1850 band so it could be auctioned off.

“The hard truth is that there might be government users in that space that you just might not be able to move out for a while,” a Senate Democratic aide said. “They might need to be there for the next 10 to 15 years. That’s just the reality.”

But the prospect of waiting a decade is unacceptable to the wireless industry, which believes that next year’s spectrum auction — in which airwaves relinquished by television broadcasters will be sold to wireless carriers — is unlikely to attract enough broadcaster participation to satisfy its appetite. Several policymakers agree, as evidenced by the FCC’s announcement earlier this year that it will seek to auction the 1755-1780 MHz block at next year’s auction.

Rep. Doris Matsui, D-Calif., and then-Rep. Cliff Stearns, R-Fla., offered a bill last year directing the government to auction off the 1755-1780 block. Sen. John Thune, R-S.D., the ranking member of the Senate Commerce Committee, offered a similar amendment to last year’s defense authorization bill. Neither provision became law, but legislation isn’t necessary; Congress has already authorized the FCC to sell that spectrum as needed.

“Our nation is facing a looming spectrum crunch,” Matsui said. “In the short term, I believe pairing the 1755-1780 band with the 2155-2180 MHz band makes sense not just for revenue purposes but also for spurring innovation and consumer demand in our ever-growing digital economy.”

Passing legislation on the issue would provide federal agencies with a deadline as well as some form of carrot or stick to encourage government users to vacate the spectrum as expeditiously as possible. Supporters of auctioning the 1755-1780 MHz band argue that the NTIA’s cost estimate is inflated and that more detailed work and research is needed to determine the actual costs.

“All of this is completely doable under existing law,” the Senate Democratic aide said. “There’s a belief amongst some in commercial industry that, absent legislative mandate, government agencies won’t [vacate the spectrum]. But the president is firmly behind trying to make this happen and pushing it.”

Government users counter that their networks are programmed to run across the entire 95 MHz from 1755 to 1850, so depriving them of the lower 25 MHz would still require the replacement or reconfiguration of most equipment. The Obama administration has instead proposed an expanded regime of spectrum sharing between government and private industry, based largely on the way federal agencies currently use spectrum.

Unlike private companies, federal users never actually hold their own spectrum; they are instead given permission to use certain frequencies after coordinating with the NTIA. That means the various government networks must coexist in the same or adjacent bands without interference. Administration officials have suggested that a similar setup involving multiple commercial carriers may be more efficient than licensing the spectrum to one company for exclusive use.

Both the Pentagon and wireless industry have balked in the past at the notion of sharing airwaves, with the former concerned about potential interference with critical systems and the latter nervous about building its networks on spectrum it doesn’t own. Both sides view sharing as a fallback option, in contrast to the administration’s apparent preference for the approach.

The four major wireless carriers recently began preliminary testing with the Pentagon on potential sharing solutions, with particular focus on finding a way for commercial carriers to coexist in the 1755-1780 MHz band with government satellites. Allowing satellites to continue operating in the band while clearing other government users is expected to save as much as $3 billion from the relocation costs.

The Pentagon estimates the cost of vacating the entire 1755-1850 MHz band at roughly $12 billion. A Defense Department spokesman acknowledged that testing is under way with the major wireless carriers to examine the feasibility of sharing the spectrum while preserving certain vital government programs such as drones and air combat training systems.

“The department cannot provide details or speculate on the outcome of this effort,” the spokesman said. “However, it is important to note that this is a very good example of the cooperation that is occurring between the DOD and the commercial wireless industry.”

“We must carefully examine the benefits of both clearing and sharing spectrum over the long term,” Matsui said. “We must work to find a timely solution that addresses both our economic needs and our national security challenges.”

Republicans have expressed a broad preference for auctioning spectrum rather than sharing it among carriers, and they are particularly sensitive to concerns regarding critical defense systems. Should agencies drag their feet on vacating some or all of the 1755-1850 MHz block, GOP lawmakers will likely be quick to criticize the administration.

“Sharing sounds great, and there are various sharing technologies, but operating WiFi devices [in the same band] is very different than air combat training sharing spectrum with 4G broadband,” another Senate aide said. “That’s never been done before and probably shouldn’t be the first-choice option to pursue.”
http://www.rollcall.com/news/lawmake...-224636-1.html





Samsung Testing 5G Phones with 1 Gbps Download Speed
Rodney

So you’re still trying to decide whether to go to LTE/4G on your next handset or, worse yet, your area doesn’t even offer you the choice and you’re stuck on 3G for the foreseeable future? Well, I have some good news for you: Samsung is now testing 5G, so you can get even further behind!

For anyone who still thinks Samsung isn’t an innovative company or that they’re just Apple copycats, this should serve to remind you just how impressive their technological teams really are. Samsung was a key developer in both the 3G/4G and wireless charging standards and they’re leading the pack on 5G.

To put things in a little perspective for you, the maximum speed 4G can achieve is about 75 Mbps (which you and I are never likely to see), so this is a huge jump.

Of course, rolling out a cellular network is a slow and extremely expensive process, so telcos are going to want to keep 4G around for a long time to recoup their investment (and these are early days in 5G development), so don’t expect to see 5G phones any time soon. Nonetheless, in five years or so we might be getting more than a gigabit per second on our handsets, outside Wi-Fi range, and it won’t be Apple we have to thank for it.
http://www.androidanalyse.com/samsun...ownload-speed/





Zact Tries to Out-Uncarrier T-Mobile with Customizable Mobile Plans

Plans and parental controls could cut total costs by more than $1,000, but you'll have to settle for last year's phones
Dieter Bohn

A new wireless carrier called "Zact" is launching today, and though it's piggybacking on top of Sprint's wireless network, the company swears up and down that it's not an MVNO (Mobile Virtual Network Operator). Zact's goal is to out-T-Mobile T-Mobile as the "Uncarrier" by offering no-contract service plans that are customizable to an incredibly granular level. It's a compelling idea in a sea of restrictive and annoying service plan options — but for now it's probably only going to appeal to the traditional pre-paid customer.

Zact is offering two low-end Android handsets for sale at unsubsidized prices: the LG Viper 4G LTE for $399 and the LG Optimus Elite for $199, both running older versions of Android. These devices have special software built deeply into the OS that allows them to dynamically change your service plan on the go. You pay a flat fee of $4.99 per device, and then choose your minutes, texts, and data packages as you need them. You can select amounts down to very specific levels; the typical "steps" between plans are measured in dollars and cents instead of tens of dollars.

"Good plans, terrible devices"

The idea is that you only pay for what you're actually going to use as you use it, and Zact will refund you for whatever you don't use at the end of the month. In theory, this can result in radically lower bills, or at least bills that accurately reflect your actual usage instead of your best guess. As you use your phone, an app will pop up as necessary for you to add more minutes or data, or you can modify your plan whenever you like.

Zact also allows for incredibly granular parental controls: you can manage another device on your plan directly from your phone and disallow data, texting, and even app usage on other devices. Zact is also offering app-specific plans: if you'd like an email plan or a Facebook plan instead of a generic data bucket, you can do that. These features all come thanks to Zact's deep integration with Android, which allows it to recognize which apps are trying to launch or use data and whitelist or blacklist them as appropriate.
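The per-app gating described above can be pictured with a short sketch. This is purely illustrative Python under assumed names (`DataPolicy`, `may_use_data`), not Zact's or ItsOn's actual software:

```python
class DataPolicy:
    """Hypothetical per-device policy for app-level data access."""

    def __init__(self):
        self.allowed = set()       # app-specific plans, e.g. an "email plan"
        self.blocked = set()       # apps a parent has disallowed
        self.generic_data = False  # whether a generic data bucket is active

    def allow_app(self, app):
        self.allowed.add(app)
        self.blocked.discard(app)

    def block_app(self, app):
        self.blocked.add(app)
        self.allowed.discard(app)

    def may_use_data(self, app):
        # A blocked app never gets data; otherwise an app needs either
        # its own plan or an active generic data bucket.
        if app in self.blocked:
            return False
        return app in self.allowed or self.generic_data


policy = DataPolicy()
policy.allow_app("email")   # an "email plan" instead of a generic bucket
policy.block_app("games")   # parental control: no data for games
print(policy.may_use_data("email"))    # True
print(policy.may_use_data("games"))    # False
print(policy.may_use_data("browser"))  # False until a generic bucket is bought
```

The blacklist-beats-whitelist ordering in `may_use_data` mirrors the parental-control behaviour the article describes: a disallowed app stays blocked even if other plans are active.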

"If it sounds confusing, that's probably because it is at first"

If it sounds confusing, that's probably because it is a little confusing — at least to people who are used to more traditional plans with big buckets of minutes and data. CEO Greg Raleigh believes that the new pricing model should become the new standard. T-Mobile's Un-Carrier plans were "a good kind of half-step," Raleigh says, "but at the end of the day, what is it really? It's just a rebranding or remarketing of the no-contract concept."

The company behind Zact is called ItsOn, and Raleigh insists that the service that powers Zact isn't the same thing as a traditional, pre-paid MVNO. "An MVNO would take the same box as the carrier would buy and then create a network with the same offers. That is not what we do, we have a virtual service that has 10 times the flexibility," he says, "I would not spend my time on an MVNO."

"I would not spend my time on an MVNO."

That may indeed be true from a technical perspective, as Zact doesn't need to build out the infrastructure that most MVNOs do, but the end result for consumers will essentially be the same: the company currently only offers low-end Android devices dependent on Sprint's largely 3G network. However, Raleigh notes that ItsOn will announce "top tier carrier deals" later this year, offering the same flexible plans. Zact could theoretically offer higher-end Android phones too; Raleigh says the custom software necessary for Zact to work is compatible with Jelly Bean.

Until then, Zact is meant to be a "lighthouse," as Raleigh puts it. Raleigh's goal is to prove that consumers can save money (an average of $1,126 over two years, the company claims) without carriers losing profits. It's a lofty goal, but the thing about lighthouses is that they tend to get abandoned if no ships sail into their waters.
http://www.theverge.com/2013/5/13/43...mobile-service





EE and Ipsos MORI Face Privacy Backlash Over Mobile Data Analysis

A deal that allows a market research firm to offer insights based on the mobile data of EE's 27m customers has been compared to the "Snooper's Charter"
Pete Swabey

A new service from EE and market research company Ipsos MORI that analyses mobile phone usage data has been compared to the controversial "Snooper's Charter".

Earlier this year, the two companies signed an agreement that will allow Ipsos MORI to analyse anonymised mobile phone usage data from the telco, which has 27 million customers, and offer insights to businesses and local authorities.

The data that Ipsos MORI would be able to analyse includes individual users' locations to the nearest 100 metres.

The Sunday Times reported yesterday that experts have compared the service to the Communications Data Bill, or "Snooper's Charter", which proposed giving law enforcement agencies access to phone and web usage data.

The paper also reported that the Metropolitan Police had been in discussions with Ipsos MORI about using the service, but ceased talks after it was contacted by journalists.

Ipsos MORI has dismissed The Sunday Times's coverage as "misleading".

"Ipsos MORI only receives anonymised data without any personally identifiable information on an individual customer," it said in a statement. "We do not have access to any names, personal address information, nor postcodes or phone numbers."

"We can see the volume of people who have visited a website domain, but we cannot see the detail of individual visits, nor what information is entered on that domain."

"We only ever report on aggregated groups of 50 or more customers."
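The reporting threshold Ipsos MORI describes is easy to picture in code. The sketch below is a hypothetical illustration of the stated rule (only groups of 50 or more customers are ever reported), not the company's actual pipeline; the function and variable names are invented:

```python
# Suppress any reporting cell smaller than the stated minimum group size.
MIN_GROUP_SIZE = 50  # threshold stated by Ipsos MORI

def aggregate_visits(visit_counts, k=MIN_GROUP_SIZE):
    """Return only the (domain, count) pairs whose group meets the threshold."""
    return {domain: n for domain, n in visit_counts.items() if n >= k}

counts = {"news.example": 1200, "niche.example": 12, "shop.example": 50}
print(aggregate_visits(counts))
# {'news.example': 1200, 'shop.example': 50}
```

Dropping the 12-visitor cell is the point of the rule: small groups are exactly where "aggregate" data starts to identify individuals.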

In an email to Information Age, Ipsos MORI CEO Ben Page said that the backlash against the service has been exacerbated by public scepticism towards big business.

"Interestingly, on Twitter at least, I have found many people OK once the actual details have been explained," he wrote.

EE has yet to respond to an invitation to comment.

Mobile telecommunications providers are increasingly trying to find ways to make money from the considerable amount of data they collect about their customers.

Last year, O2-parent Telefonica announced the launch of a new service, called Smart Steps, that would allow retailers to analyse footfall in their stores by using customers' anonymised GPS data.

In an interview with Campaign India earlier this year, analytics firm SapientNitro said that it was working with Vodafone in the UK to help the company "productise" its data.

"If you look at any telecom carrier, they know your and my movements. All they have to do is track us," SapientNitro's India marketing chief told the magazine. "This will be a very good revenue model for an operator."

As the Sunday Times' reaction to the EE / Ipsos MORI partnership demonstrates, however, there is a high degree of sensitivity about the impact such services may have on customers' privacy.

In particular, many experts question providers' ability to keep the data anonymous. Researchers have shown that user identities can often be reconstructed by cross-referencing with other data sets.

A paper published in October last year showed that 80% of mobile users could be "precisely identified" by comparing their location data with their social graph – i.e. their Facebook friends.
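A toy sketch can show why such cross-referencing works. The code below is a hypothetical illustration, far simpler than the cited paper's method: it matches an "anonymised" trace of visited network cells against the known hangouts of candidate identities taken from a public social graph. All names and cell IDs are made up:

```python
def best_match(anon_trace, candidate_profiles):
    """Return the candidate whose known locations overlap the trace most."""
    def overlap(name):
        return len(set(anon_trace) & set(candidate_profiles[name]))
    return max(candidate_profiles, key=overlap)

# Hypothetical data: cell IDs seen in an "anonymised" trace, and the
# cells each candidate is known to frequent.
trace = ["cell_17", "cell_03", "cell_42", "cell_17"]
profiles = {
    "alice": ["cell_17", "cell_42", "cell_99"],
    "bob":   ["cell_88", "cell_03"],
}
print(best_match(trace, profiles))  # 'alice' (2-cell overlap vs bob's 1)
```

Even this crude overlap score singles out one candidate; with hundreds of location points per user, real re-identification attacks have far more signal to work with.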
http://www.information-age.com/techn...-data-analysis





AT&T's Stephenson: Content Players Will Subsidize Consumer's Data
Sue Marek

Just days after reports circulated that ESPN was in talks with a Tier 1 wireless operator about subsidizing consumer access to its content via mobile, AT&T (NYSE: T) Chairman and CEO Randall Stephenson told investors at the J.P. Morgan Global Technology, Media and Telecom Conference that he expects content and app developers to soon introduce new business models that will allow customers to access their content without racking up high data-usage bills.

Although Stephenson didn't elaborate on the details of these new models, he suggested advertising may be one option. "There will be models that emerge where they defray consumer charges by paying it themselves or by advertising," he said.

Just last week, the Wall Street Journal reported that ESPN was in discussions with at least one large U.S. carrier about subsidizing wireless access to its content. These talks could potentially lead to a "toll-free" data plan, thus opening up new revenue streams for both parties. However, nothing official has been announced.

Stephenson also said that the second quarter looks much better for AT&T's wireless business, and he hinted that new products or services will be introduced in the third quarter. He also said that AT&T Mobility will not see much growth from new business initiatives in 2013; however, he expects the connected car arena to become "meaningful" in 2015.

In February, General Motors announced it will replace Verizon Wireless' (NYSE:VZ) service with AT&T Mobility's service in its OnStar offering beginning in 2014. As part of the deal, AT&T will provide service to GM's OnStar in Chevrolet, Buick, GMC and Cadillac vehicles. AT&T will power OnStar's existing calling and monitoring services, and will support a new suite of infotainment services like streaming audio, web access, applications, and even video for backseat passengers. AT&T will also power GM's in-vehicle Wi-Fi hotspots and voice calling services.

During his appearance, Stephenson talked at length about the transition to LTE and how AT&T expects to see traffic on its HSPA network decline, resulting in little capacity investment in 3G. Because LTE is about 50 percent more capital efficient than 3G, Stephenson expects a big downward trend in capital expenditures for the company.

He also expects handset prices to decline. He explained that, if cloud storage becomes popular, consumers will need less storage and possibly less processing power on their phones, thereby reducing the prices of those devices.
http://www.fiercewireless.com/story/...ata/2013-05-15





New Android Boss Finally Reveals Plans for World’s Most Popular Mobile OS
Steven Levy

For the past few years, Sundar Pichai has been part of a tag-team routine staged at Google’s annual I/O developer conference. Pichai, a Googler since 2004, would present on behalf of Google’s Chrome division, including its browser and cloud-based operating system. His counterpart was Andy Rubin, head of Google’s Android division. As Android grew into the world’s most popular mobile OS (it’s now on 750 million devices worldwide, with 1.5 million new activations every day), people wondered what sense it made for Google to have two operating systems. Meanwhile, Rubin was the unofficial king of I/O.

That won’t be the case this year. In March, Google announced Rubin was stepping down from Android to pursue unspecified moon shots elsewhere in the company. Pichai would take over Rubin’s duties at Android. He immediately went from being an important Google executive (in addition to Chrome, he was also in charge of Google’s apps efforts) to perhaps the most pivotal member of Larry Page’s “L-team” of top executives. So far Pichai, a 40-year old grad of the fabled Indian Institute of Technology and later Stanford, has kept his head down and refused all press. But as this week’s I/O event approached, he granted WIRED his first interview since taking over Android.

WIRED: The Android handover from Andy Rubin to you seemed sudden and mysterious to us on the outside. Was it long in the works?
PICHAI: I only got to know towards the end of the process of Andy deciding to step back. It played out rapidly over the couple of weeks prior to the actual announcement. I am passionate about computing, and so to me it was very exciting to be in a position where I could make an impact on that scale.

Now that you’re in this new position, have your views evolved in terms of the coexistence of Chrome and Android?
I don’t think my views have changed much. Android and Chrome are both large, open platforms, growing very fast. I think that they will play a strong role, not merely exist. I see this as part of friendly innovation and choice for both users and developers.

But can’t it be confusing having two operating systems?
Users care about applications and services they use, not operating systems. Very few people will ask you, “Hey, how come MacBooks are on Mac OS X and iPhone and iPad are on iOS? Why is this?” They think of Apple as iTunes, iCloud, iPhoto. Developers are people, too. They want to write applications one time, but they also want choice. What excites me in this new role is that I can try to do the right thing for users and developers — without worrying about the fact that we have two things. We embrace both and we are continuing to invest in both. So in the short run, nothing changes. In the long run, computing itself will dictate the changes. We’re living through a pivotal moment. It’s a world of multiple screens, smart displays, with tons of low-cost computing, with big sensors built into devices. At Google we ask how to bring together something seamless and beautiful and intuitive across all these screens. The picture may look different a year or two from now, but in the short term, we have Android and we have Chrome, and we are not changing course.

Still, it’s a huge use of resources to have two operating systems as opposed to one. This has to be an issue you wrestle with.
It’s a fair question. We want to do the right things at each stage, for users and developers. We are trying to find commonalities. On the browser layer, we share a lot of stuff. We will increasingly do more things like that. And maybe there’s a more synergistic answer down the line.

As Android’s new head, what do you see as the biggest challenge?
First let me talk about the opportunities. The scale and scope is even bigger than what I had internalized. The momentum — in terms of new phones and new tablets — is breathtaking. I see huge opportunity, because it is just shocking how much of the world doesn’t have access to computing. In his book Eric [Schmidt] talks about the next 5 billion [the people on earth who aren’t connected to the internet who soon will be]. That’s genuinely true and it excites me. One of the great things about an open system like Android is it addresses all ends of the spectrum. Getting great low-cost computing devices at scale to the developing world is especially meaningful to me.

Now what about the challenges?
Here’s the challenge: without changing the open nature of Android, how do we help improve the whole world’s end-user experience? For all users, no matter where they are or what phone or tablet they are buying.

What does that mean when a company like Facebook comes out with Home, which changes that experience?
It’s exciting that Facebook thought of Android first in this case. Android was intended to be very customizable. And we welcome innovations. As for the specific product, my personal take on it is that time will tell. To Mark [Zuckerberg], people are the center of everything. I take a slightly different approach. I think life is multifaceted: people are a huge part of it, but not the center and be-all of everything.

Some people worry that Google might respond to Facebook Home by blocking this kind of approach in a future release.
We want to be a very, very open platform, but we want a way by which end users are getting a good experience overall. We have to figure out a way to rationalize things, and do it so that it makes sense for users and developers. There’s always a balance there. It’s no different from the kind of decisions that Facebook has to make about its own platform. But right now, we don’t plan to make any changes — we are excited they’ve done good work.

Hold on. You’re saying that you like innovation like Home–but at some point in the future you might decide that an invasive software approach like this isn’t good for users and can’t be done in a future Android release?

No. Let me clarify. Users get to decide what apps and what choices they want. Some users really want this. We don’t want to get in the way of that. [But] in the end, we have to provide a consistent experience. As part of that, with every release of Android, we do go through changes. So we may make changes over time. But if this is what users want, I think Facebook will be able to do it. We want it to be possible for users to get what they want.

What about something more drastic like Kindle Fire, which actually forks the Android experience into something quite different?
Under the rules of the license, Amazon can do that. In general, we at Google would love everyone to work on one version of Android, because I think it benefits everyone better. But this is not the kind of stuff we’re trying to prevent. Our focus is not on Facebook Home or Kindle Fire. Computing is going through a once in a lifetime explosion. Our opportunity is making sure that this works well for people and solves important problems for them. For example, you are going to have computing which can potentially warn you before you have a heart attack.

Is it a problem for Google that Samsung is so dominant, and makes almost all the money on the platform?

I realize this gets played up in the press a lot. Samsung is a great partner to work with. We work with them on pretty much all our important products. Here’s my Samsung Galaxy S4. [Pichai holds up the phone.]

How’s that eye-tracking thing working out?
I actually never used it. Look, Samsung plays a critical role in helping Android be successful. To ship great experiences, you need hardware and software together. The relationship is very strong on a day-to-day basis and on a tactical basis. So I’m not that concerned. Historically the industry has had long stable structures. Look at Microsoft and Intel. They were very codependent on one another, but it served both of them well. When I look at where computing needs to go, we need innovation in displays, in batteries. Samsung is a world leader in those technologies.

One benefit of Samsung being so dominant is that you don’t hear much concern that Google might show favoritism to Motorola, which it now owns.
For the purposes of the Android ecosystem, Motorola is [just another] partner.

What’s the future of Google-branded hardware?
You will see a continuation of what we have tried to do with Nexus and Chromebooks. Any hardware projects we do will be to push the ecosystem forward.

One reason that people think that Chrome might step back in favor of Android is that the Open Web might not be able to deliver what users need on their devices. As head of Chrome you have promoted the vision of cloud-based apps, based on technologies like HTML 5, saying that they will be as powerful and fast as native apps written to run directly on specific machines. But last year Mark Zuckerberg said that Facebook’s biggest mistake was trying to use HTML 5 and the open web for its mobile apps. He said it simply didn’t have the quality and speed to serve his users. Was that a blow to your vision of Chrome?
I think the reality is a bit different. I managed Chrome and apps even before Android. Some of our large applications are now written directly to the device — for instance, we have native Gmail apps. But I disagree with the opinion that all of Facebook’s mobile issues can be blamed on HTML 5. I just don’t think that was true. There are other companies with very successful apps that have taken an HTML 5 approach on mobile and done really well. For instance, a lot of magazines have switched from native back to HTML 5 for their mobile apps. Financial Times did it, and they’ve blogged that their user engagement and traction have increased significantly. It’s the reverse of what Facebook said. And this is the beauty. Each developer’s needs are unique.

In terms of numbers, Android sells more than Apple, but Apple makes more money from its platform. Is your mandate to generate more revenue from Android?
We’re very comfortable with our business model. All our core services (Search, YouTube, Maps, etc.) are used on phones, and Android helps people to use those services. So fundamentally there’s a business model there. And services, like Google Play, are obviously a source of revenue. We saw payouts to developers on Play quadruple in 2012. I think we are barely getting started. We’re in the early beginnings of a sea change in computing. Think about education and enterprise — incredible opportunities. We’re much more focused now on the consumer end of the experience, but we think the right things will happen from a business sense.

Were you surprised to see a Firefox OS?
Not at all. The web is an important platform, and I don’t think it’s going to change ’til I die. It’s another reason why if we don’t do Chrome OS, someone else will.

A lot of people have complained about Android’s update process. How does Google make sure that people will get updated with the latest version?
We are thinking about how to make Android handle updates better. We see ways we can do this. It’s early days. We’re talking with our partners and working our way through it. We need time to figure out the mechanics, but it’s definitely an area of focus for me and for the team.

What can we expect from I/O this year?
It’s going to be different. It’s not a time when we have much in the way of launches of new products or a new operating system. Both on Android and Chrome, we’re going to focus this I/O on all of the kinds of things we’re doing for developers, so that they can write better things. We will show how Google services are doing amazing things on top of these two platforms.

As Android head, what are your marching orders from Larry Page?
Larry wants to make sure we are driving innovation and doing amazing things for users and developers. That’s what I want too. So there’s a melding of minds; his marching orders are, “Please go and do Google-scale things.”

Finally, you had a pretty full plate with Chrome and Apps, and now you’re handling the world’s biggest phone platform in addition. How are you managing?
I have a secret project which adds four hours every day to the 24 hours we have. There’s a bit of time travel involved.
http://www.wired.com/business/2013/0...s-for-android/





Lawmakers Show Concerns About Google’s New Glasses
Claire Cain Miller

Eight members of Congress on Thursday formally demanded that Google address a range of privacy concerns about its new wearable technology device, Google Glass.

The letter, addressed to Larry Page, Google’s chief executive, outlined eight questions for Google and asked for a response by June 14.

“We are curious whether this new technology could infringe on the privacy of the average American,” the letter said. “Because Google Glass has not yet been released and we are uncertain of Google’s plans to incorporate privacy protections into the device, there are still a number of unanswered questions.”

The glasses, which are not yet for sale to the public, connect to the Internet and allow people to do things like take photographs, record and watch video, send text messages, post to social media sites and read text snippets. They have already raised privacy concerns on issues like unwanted recording.

The request, from the Bipartisan Congressional Privacy Caucus, came as Google held its annual I/O developers conference in San Francisco, where it showed off Glass, gave software developers information about how to build apps for the device and introduced seven new apps, including ones from Facebook, Twitter and CNN.

The group, for which Representative Joe Barton, Republican of Texas, is a co-chairman, asked questions including how Google would collect and store data from the devices, how it would ensure that it did not unintentionally collect private data, how Google would protect the privacy of people not using Glass when they are with people using it and whether the device would have facial recognition technology.

Steve Lee, director of product management for Google Glass, addressed the facial recognition question in a statement.

“We’ve consistently said that we won’t add new face recognition features to our services unless we have strong privacy protections in place,” he said.

Google has faced punishments over privacy violations with past products, including a settlement with the Federal Trade Commission over a social networking tool and another one with 38 states over data collection during its Street View mapping project.

In a session at the conference, Mr. Lee addressed other concerns in the letter. Google followed all its privacy and data collection policies with Glass, he said, and built social cues into the device to help prevent certain privacy violations. For instance, users have to press a button or speak to Glass to take a photograph or record video, and look directly at whatever they are shooting.

Still, one developer said he had already built an app for Glass that enables users to take a photograph with a wink.

In a statement, Chris Dale, a Google spokesman, said, “We are thinking very carefully about how we design Glass because new technology always raises new issues.”

He added that Google was slowly selling early versions of the device, which cost $1,500, to people who sign up for them, “to ensure that our users become active participants in shaping the future of this technology.”
https://www.nytimes.com/2013/05/17/t...gle-glass.html





Google Buys a Quantum Computer
Quentin Hardy



Google and NASA are forming a laboratory to study artificial intelligence by means of computers that use the unusual properties of quantum physics. Their quantum computer, which performs complex calculations thousands of times faster than existing supercomputers, is expected to be in active use in the third quarter of this year.

The Quantum Artificial Intelligence Lab, as the entity is called, will focus on machine learning, which is the way computers take note of patterns of information to improve their outputs. Personalized Internet search and predictions of traffic congestion based on GPS data are examples of machine learning. The field is particularly important for things like facial or voice recognition, biological behavior, or the management of very large and complex systems.

“If we want to create effective environmental policies, we need better models of what’s happening to our climate,” Google said in a blog post announcing the partnership. “Classical computers aren’t well suited to these types of creative problems.”

Google said it had already devised machine-learning algorithms that work inside the quantum computer, which is made by D-Wave Systems of Burnaby, British Columbia. One could quickly recognize information, saving power on mobile devices, while another was successful at sorting out bad or mislabeled data. The most effective methods for using quantum computation, Google said, involved combining the advanced machines with its clouds of traditional computers.

Google and NASA bought the machine in cooperation with the Universities Space Research Association, a nonprofit research corporation that works with NASA and others to advance space science and technology. Outside researchers will be invited to the lab as well.

This year D-Wave sold its first commercial quantum computer to Lockheed Martin. Lockheed officials said the computer would be used for the test and measurement of things like jet aircraft designs, or the reliability of satellite systems.

The D-Wave computer works by framing complex problems in terms of optimal outcomes. The classic example of this type of problem is figuring out the most efficient way a traveling salesman can visit 10 customers, but real-world problems now include hundreds of such variables and contingencies. D-Wave’s machine frames the problem in terms of energy states, and uses quantum physics to rapidly determine an outcome that satisfies the variables with the least use of energy.
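The energy-state framing described above can be illustrated classically. The sketch below uses plain simulated annealing on a made-up six-city tour; it is not D-Wave's quantum annealing, only the same "settle into a low-energy state" idea, with all coordinates and parameters invented for illustration:

```python
import math
import random

# Hypothetical city coordinates; the tour length plays the role of "energy".
cities = [(0, 0), (1, 5), (2, 2), (5, 1), (6, 4), (3, 6)]

def energy(route):
    """Total tour length, the quantity we want to minimize."""
    return sum(math.dist(cities[route[i]], cities[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def anneal(steps=20000, temp=5.0, cooling=0.9995, seed=1):
    random.seed(seed)
    route = list(range(len(cities)))
    e = energy(route)
    best, best_e = route[:], e
    for _ in range(steps):
        i, j = random.sample(range(len(cities)), 2)
        route[i], route[j] = route[j], route[i]      # propose a swap
        new_e = energy(route)
        # Accept downhill moves always; accept uphill moves with a
        # probability that shrinks as the "temperature" cools.
        if new_e < e or random.random() < math.exp((e - new_e) / temp):
            e = new_e
            if e < best_e:
                best, best_e = route[:], e
        else:
            route[i], route[j] = route[j], route[i]  # undo the swap
        temp *= cooling
    return best, best_e

route, length = anneal()
print(round(length, 2))
```

The appeal of annealing hardware is that real problems have hundreds of variables, where this kind of stochastic search on conventional machines becomes slow.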

In tests last September, an independent researcher found that for some types of problems the quantum computer was 3,600 times faster than traditional supercomputers. According to a D-Wave official, the machine performed even better in Google’s tests, which involved 500 variables with different constraints.

“The tougher, more complex ones had better performance,” said Colin Williams, D-Wave’s director of business development. “For most problems, it was 11,000 times faster, but in the more difficult 50 percent, it was 33,000 times faster. In the top 25 percent, it was 50,000 times faster.” Google declined to comment, aside from the blog post.

The machine Google and NASA will use makes use of the interactions of 512 quantum bits, or qubits, to determine optimization. They plan to upgrade the machine to 2,048 qubits when this becomes available, probably within the next year or two. That machine could be exponentially more powerful.
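The "exponentially more powerful" language comes from how qubit state spaces grow: n qubits can exist in a superposition over 2^n basis states. A quick back-of-the-envelope check (this is an intuition about state-space size, not a guarantee of speedup on any given problem):

```python
# n qubits span 2**n basis states, so state space -- not raw clock
# speed -- grows exponentially with qubit count. Compare the current
# 512-qubit machine with the planned 2,048-qubit upgrade.
for n in (512, 2048):
    digits = len(str(2 ** n))  # decimal digits of 2**n
    print(f"{n} qubits -> 2^{n} is roughly 10^{digits - 1} basis states")
```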

Google did not say how it might deploy a quantum computer into its existing global network of computer-intensive data centers, which are among the world’s largest. D-Wave, however, intends eventually for its quantum machine to hook into cloud computing systems, doing the exceptionally hard problems that can then be finished off by regular servers.

Potential applications include finance, health care, and national security, said Vern Brownell, D-Wave’s chief executive. “The long-term vision is the quantum cloud, with a few high-end systems in the back end,” he said. “You could use it to train an algorithm that goes into a phone, or do lots of simulations for a financial institution.”

Mr. Brownell, who founded a computer server company, was also the chief technical officer at Goldman Sachs. Goldman is an investor in D-Wave, as is Jeff Bezos, the founder of Amazon.com. Amazon Web Services is another global cloud, which rents data storage, computing, and applications to thousands of companies.

This month D-Wave established an American company, considered necessary for certain types of sales of national security technology to the United States government.
http://bits.blogs.nytimes.com/2013/0...ntum-computer/





Paul Otellini's Intel: Can the Company That Built the Future Survive It?

As the CEO steps down, he leaves the Intel machine poised to take on the swarming ecosystem of competitors who make smartphone chips.
Alexis C. Madrigal

Forty-five years after Intel was founded by Silicon Valley legends Gordon Moore and Bob Noyce, it is the world's leading semiconductor company. While almost every similar company -- and there used to be many -- has disappeared or withered away, Intel has thrived through the rise of Microsoft, the Internet boom and the Internet bust, the resurgence of Apple, the laptop explosion that eroded the desktop market, and the wholesale restructuring of the semiconductor industry.

For 40 of those years, a timespan that saw computing go from curiosity to ubiquity, Paul Otellini has been at Intel. He's been CEO of the company for the last eight years, but close to the levers of power since he became then-CEO Andy Grove's de facto chief of staff in 1989. Today is Otellini's last day at Intel. As soon as he steps down at a company shareholder meeting, Brian Krzanich, who has been with the company since 1982, will move up from COO to become Intel's sixth CEO.

It's almost certain that the chorus of goodbyes for Otellini will underestimate his accomplishments as the head of the world's foremost chipmaker. He's a company man who is not much of a rhetorician, and the last few quarters of declining revenue and income have brought out detractors. They'll say Otellini did not get Intel's chips into smartphones and tablets, leaving the company locked out of computing's fastest-growing market. They'll say Intel's risky, capital-intensive, vertically integrated business model doesn't belong in the new semiconductor industry, and that the loose coalition built around ARM's phone-friendly chip architecture has bypassed the once-invincible Intel along with its old WinTel friends, Microsoft, Dell, and HP.

And yet, consider the case for Otellini. Intel generated more revenue during his eight-year tenure as CEO than it did during the rest of the company's 45-year history. If it weren't for the Internet bubble-inflated earnings of the year 2000, Otellini would have presided over the generation of greater profits than his predecessors combined as well. As it is, the company machinery under him spun off $66 billion in profit (i.e. net income), as compared with the $68 billion posted by his predecessors. The $11 billion Intel earned in 2012 easily beats the sum total ($9.5 billion) posted by Qualcomm ($6.1), Texas Instruments ($1.8), Broadcom ($0.72), Nvidia ($0.56), and Marvell ($0.31), not to mention its old rival AMD, which lost more than a billion dollars.

Of course, Otellini has both his predecessors' ambition and inflation to thank for his gaudy numbers, but he kept Intel a powerhouse. Under his watch since 2005, it created the world's best chips for laptops, assumed a dominant position in the server market, vanquished long-time rival AMD, retained a vertically integrated business model that's unique in the industry, and maintained profitability throughout the global economic meltdown. The company he ran was far larger, more complex, and more global than anything Bob Noyce and Gordon Moore could have imagined when they founded it in 1968. And the business environment was certainly no easier than any encountered by the other four Intel CEOs. Yet he delivered quarter after quarter of profits alongside increasing revenue. In the last full year before he ascended to chief executive, Intel generated $34 billion in sales. By 2012, that number had grown to $53 billion.

"By all accounts, the company has been incredibly successful during his tenure on the things that made them Intel," said Stacy Rasgon, a senior analyst who covers the semiconductor industry at Sanford C. Bernstein. "Tuning the machine that is Intel happened very well under his watch. They've grown revenues a ton and margins are higher than they used to be."

Even Otellini's natural rival, former AMD CEO Hector Ruiz, had to agree that Intel's CEO "was more successful than people give him credit for."

But, oh, what could have been! Even Otellini betrayed a profound sense of disappointment over a decision he made about a then-unreleased product that became the iPhone. Shortly after winning Apple's Mac business, he decided against doing what it took to be the chip in Apple's paradigm-shifting product.

"We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we'd done it," Otellini told me in a two-hour conversation during his last month at Intel. "The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."

It was the only moment I heard regret slip into Otellini's voice during the several hours of conversations I had with him. "The lesson I took away from that was, while we like to speak with data around here, so many times in my career I've ended up making decisions with my gut, and I should have followed my gut," he said. "My gut told me to say yes."

In person, Otellini is forthright and charming. For a lifelong business guy, his affect is educator, not salesman. He is the kind of guy who would recommend that a junior colleague read a book like Scale and Scope, a 780-page history of industrial capitalism. To his credit, he fired back responses to nearly all my questions about his tenure, company, and industry at a dinner during CES in Las Vegas and later at Intel's headquarters. And when he wasn't going to answer, he didn't duck, but repelled: "I'm not going to talk about that."

On stage, however, during the heavily produced keynote talks CEOs are now required to give, Otellini's persona and company do not inspire legions of cheering fans. When he steps on stage, there is no Jobsian swell of emotion, no one screams out, "We love you, Paul!" And yet, this is the outfit that pushes the leading edge of chip innovation. They are the keepers of (Gordon) Moore's Law, ensuring that the number of transistors on an integrated circuit continues to double every couple years or so. If Otellini's CV is lacking a driverless car project or rocketship company, it may be because the technical challenges Intel faces require a different kind of corporation and leader.
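The doubling cadence the article cites is easy to turn into arithmetic. A minimal sketch (the two-year period and starting count are illustrative assumptions; real cadences have varied over the decades):

```python
# Back-of-the-envelope Moore's Law projection: if transistor counts
# double every `doubling_period` years, growth is 2**(years / period).
def project_transistors(start_count, years, doubling_period=2.0):
    return start_count * 2 ** (years / doubling_period)

# Illustrative: a 1-million-transistor chip, 20 years (10 doublings) on.
print(project_transistors(1_000_000, 20))  # -> 1024000000.0
```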

"He's a super low-key guy. He's not a Steve Jobs. He's not a Bill Gates. But his contribution has been just as big," said the new president of Intel, Renee James, who has worked with Otellini for 15 years.

His management secret was his own exemplary drive, discipline, and humility. He came in early, worked hard, and demanded excellence of himself. "He didn't yell and scream. He never dictated. He never asked me to come in on a Sunday. He never asked me to stay late on a Friday. But he had this way of getting you to rise to the occasion," said Navin Shenoy, who served as Otellini's chief-of-staff from 2004 to 2007. "He'd challenge you to do something that we'd all be proud of."

Peter Thiel might complain that the Valley hasn't invented rocket packs and flying cars because investors and entrepreneurs have been focused on frivolous nonsense. But Paul Otellini's Intel spent $19.5 billion on R&D during 2011 and 2012. That's $8 billion more than Google. And a substantial amount of Intel's innovation comes from its manufacturing operations, and Intel spent another $20 billion building factories during the last two years. That's nearly $40 billion dedicated to bringing new products into being in just two years! These investments have continued because of Otellini's unshakeable faith that eventually, as he told me, "At the end of the day, the best transistors win, no matter what you're building, a server or a phone." That's always the strategy. That's always the solution.

Intel's kind of business and Otellini's brand of competent, quiet management are not in fashion in Silicon Valley right now. And yet, almost no one can claim the Valley more than Otellini. Every day for four decades -- in a career that spans the entirety of the PC era -- Intel's Santa Clara headquarters have been the center of his working world.

As we stood outside Otellini's corner cubicle, marked by a makeshift waiting room with a television, a couple of display cases, and a plucky plant, I asked him to reflect on what the end might feel like. "It is strange. I've been pinning this badge on every day for 40 years," he said. "But I won't miss the commute from San Francisco." After making thousands of trips down 101 and racking up 1.2 million miles on United through hundreds of trips around the world, he seemed ready to stop going.

The Many Computer Revolutions

Despite the $53 billion in revenue and all the company's technical and business successes, the question on many a commentator's mind is, Can Intel thrive in the tablet and smartphone world the way it did during the standard PC era?

The industry changes ushered in by the surge in these flat-glass computing devices can be seen two ways. Intel's James prefers to see the continuities with Intel's existing business. "Everyone wants the tablet to be some mysterious thing that's killing the PC. What do you think the tablet really is? A PC," she said. "A PC by any other name is still a personal computer. If it does general purpose computing with multiple applications, it's a PC." Sure, she admitted, tablets are a "form factor and user modality change," but tablets are still "a general purpose computer."

On the other hand, the industry changes that have surrounded the great tablet upheaval have been substantial. Consumer dollars are flowing to different places. Instead of Microsoft's operating system dominating, Apple and Google's do. The old-line PC makers have struggled, while relative upstarts such as Samsung and Amazon have pushed millions of units.

The chip challenges are different as well. Rather than optimizing for the maximum computational power of a device, it's energy efficiency that's most important. How much performance can a processor deliver per watt of power it sucks from a too-small battery?

The semiconductor industry itself has seen perhaps even larger changes. In the early days of Silicon Valley, chipmakers had their foundries right there in the Valley, hence the name. During the 1980s, Japanese chipmakers battled American ones, beating them badly until Intel turned the tide in the latter half of the decade. The factories moved out of the valley, domestically to places like Chandler, Arizona, and Folsom, California, as well as to Asia, mostly Taiwan.

Meanwhile, each generation of chips got technically more challenging and the foundries required to build them got more expensive. Chipmakers needed to sell massive amounts of chips in order to make up the huge capital equipment costs. The industry became cruelly cyclical, booming and busting with a regularity that defied managerial skill. For all those reasons and more, during the last twenty years, the chipmaking industry has been consolidating. Almost all semiconductor companies are now "fabless," choosing to outsource the production of their silicon to Taiwan Semiconductor Manufacturing Company (TSMC), United Microelectronics Corporation (UMC), or GlobalFoundries, a venture backed by the United Arab Emirates. The new fabless chip designers don't have to build plants, which allows them to have more stable businesses, but they lose the ability to gain competitive advantage by tweaking production lines. The transition to this state of affairs killed off many companies and allowed others to thrive.

Add it all up and there are only a few chipmakers left standing: the aforementioned contract manufacturers like TSMC, Samsung, and, of course, Intel.

These two structural trends at the consumer and industry levels intersect at a formerly obscure British company called ARM Holdings. Originally founded as a partnership between Acorn Computers (remember them?), VLSI (remember them?), and Apple, ARM now just creates and licenses the chip architectures that other companies tweak and have manufactured. In a sense, they sell a chip "starter kit" that companies like Apple, Qualcomm, Broadcom, Marvell, and Nvidia build upon to create their own products.

Chips based on the ARM intellectual property are generally not as high-performance as Intel's, but they're fantastically energy efficient. While ARM did make chips for Apple's ill-fated Newton device, in the early 2000s, ARM became the dominant architecture supplier to the so-called "embedded" market. These chips are not general computing devices, but have specific jobs in (for example) cars, hard drives, and factories. This specialization is also one of the reasons that ARM chips are cheap. An Intel microprocessor could sell for $100. ARM-based chips might sell for $10, and often less than a dollar. In the first quarter of this year, 2.6 billion chips using ARM's architecture were shipped.

The two key attributes of ARM's architecture -- energy efficiency and low cost -- developed before cell phones, but they were exactly what mobile designers were looking for. As the smartphone market exploded, so did ARM's share price as investors realized what a key node ARM had become in the burgeoning computer-on-glass phone and tablet market.

For companies who are trying to decide whether to go with Intel or an ARM-licensee, it's a bit like being asked whether you'd rather deal with Switzerland or the Aztec empire. "With ARM, when you are tired of Qualcomm you can go to NVIDIA or another company," Linley Gwennap, the boss of the Linley Group, a research firm, told The Economist last year. "But in Intel's case, there's nobody else on its team."

ARM-based designs are now found in more than 95 percent of smartphones. ARM may not be dominant in the way Intel is dominant in PCs, but the system they underpin is.

Simon Segars is the man who will have to deal with the fallout from all of ARM's successes. He begins as the new CEO of the company on July 1. I met him after he spoke on a panel about "multi-industry business ecosystems" at the Parc 55 hotel in the heart of San Francisco. He was tall and genial, happy to patiently and thoroughly explain why ARM had found itself in possession of so many friends and so much good fortune.

"I can genuinely say that our approach is to work within an ecosystem that is a healthy ecosystem. By that I mean the people in it are making money from what they do," he said. "We get questions on a regular basis, Why don't you quadruple your royalty rates? Because you're so strong, what are your customers going to do? We could do that and we could probably enjoy some more revenue for some time, but our customers would go off and do something else or have less healthy businesses. If we tried to extract lots of money out of the ecosystem, we'd have less companies supporting the ARM architecture and that would limit where it could go."

ARM is a company that finds itself in the right place at the right time with a philosophy of innovation that lots of companies want to believe in.

"Through the '90s and early 2000s, we saw an explosion in the number of people who could build a chip. That led to a lot of innovation and all the electronic devices that we see today," Segars said. "The role we've played is providing this core building block, this microprocessor, that many of these devices require. We've provided that in a very cost-effective way to anybody who wanted it. And that's allowed people to put intelligence into devices that they couldn't have afforded to do because they would have had to do it all themselves."

The Mobile Mystery: What Did Otellini See and When Did He See It?

Many of the structural changes that occurred in these industries now seem predictable. It feels like somebody else could have positioned Intel differently to take advantage of these trends. At the very least, Otellini should have seen where the changes were leading the silicon world.

And the thing is, he did. He just wasn't able to get the Intel machine turning fast enough. "The explosion of low-end devices, we kinda saw as a company and for a variety of reasons weren't able to get our arms around it early enough," he admitted.

It was Otellini, after all, who had made the call to start developing the very successful low-power Atom processor for mobile computing applications. And it was Otellini who, upon ascending to the throne, drew a diagram that I'll call the Otellini Corollary to Moore's Law at the company's annual Strategic Long Range Planning Process meeting, or SLRP. He duplicated it for me in an appropriately anonymous Intel conference room, calling it half-jokingly "the history of the computer industry in one chart."

On the Y-axis, we have the number of units sold in a year. On the X-axis, we have the price of the device, beginning with the $10,000 IBM PC at the far left and extending to $100 on the far right. Then, he drew a diagonal line bisecting the axes. As Otellini sketched, he talked through the movements represented in the chart. "By the time the price got to $1000, sort of in the mid-90s, the industry got to 100 million units a year," he said, circling the $1k. "And as PCs continued to come down in price, they got to be an average price of 600 or 700 dollars and we got up to 300 million units." He traced the line up to his diagonal line and drew an arrow pointing to a dot on the line. "You are here," he said. "I don't mean just phones, but mainstream computing is a billion units at $100. That's where we're headed."

"What I told our guys is that we rode all the way up through here, but what we needed to do was very different to get to [a billion units]... You have to be able to build chips for $10 and sell a lot of them."

"This is what I had to draw to get Intel to start thinking about ultracheap," Otellini concluded.
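The quoted points on Otellini's sketch can be tabulated. A rough reconstruction using only the figures given in the text (the table layout and the implied-market-size column are my own reading, not something Otellini drew):

```python
# The data points Otellini cites for his price-vs-units chart.
# Prices are average selling prices; units are annual industry volume.
points = [
    (1_000, 100_000_000),    # mid-90s: ~100M units/yr at ~$1,000
    (650,   300_000_000),    # ~$600-700 average price, ~300M units/yr
    (100, 1_000_000_000),    # the projected endpoint: a billion units
]
for price, units in points:
    print(f"${price:>5,} -> {units / 1e6:,.0f}M units/yr, "
          f"implied ~${price * units / 1e9:,.0f}B market")
```

One thing the arithmetic makes visible: the implied dollar volume stays within the same order of magnitude even as average prices collapse, which is one way to read Otellini's argument for chasing volume at ultracheap prices.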

"How well do you think Intel is thinking about ultracheap?" I asked.

"Oh they got it now," he said, to the laughter of the press relations crew with us. "I did this in '05, so it's [been more than] seven years now. They got it as of about two years ago. Everybody in the company has got it now, but it took a while to move the machine."

It took a while to move the machine. The problem, really, was that Intel's x86 chip architecture could not rival the performance per watt of power that designs licensed from ARM based on RISC architecture could provide. Intel was always the undisputed champion of performance, but its chips sucked up too much power. In fact, it was only this month that Intel revealed chips that seem like they'll be able to beat the ARM licensees on the key metrics.

No one can quite understand why it's taken so long. "I think Intel is still suffering from the inability of this very fine company to enter a new major segment that changes the game," Magnus Hyde, former head of TSMC North America, told me. "That's been a problem before Paul, been a problem during Paul, and will probably be a problem going forward. They have all the things they need on the paper: the know-how, the customers, the cash to take over whatever they need. But somehow a little piece is missing."

"This is a company with 100,000 employees with a 40-year legacy. They are unbelievably good at what they do. No one can touch them," said Rasgon, the analyst. "There is a certain degree of arrogance that goes along with that."

"As CEO, that's your job: steer [the ship]," he continued. "It doesn't necessarily mean [Otellini had] a failure of vision, but he couldn't get the ship to turn."

Ruiz, who led AMD's last battle with Intel while he was CEO from 2002 to 2008, told me he thought Intel's mobile progress had been slowed by their concentration on his company. "The focus the company has had for the past three decades on squashing AMD caused them to lose sight of the important trends towards mobility and low power," he said. "They should have focused more on their customers and the future than on trying to outdo AMD."

Some people seem to think someone else could have done better. And it's nice to believe in the transformative leader. Call it the Fire-the-Coach Fallacy. Sometimes, installing a new leader of an organization leads to better performance. But far more often, as some simple Freakonomics blogpost would tell you, we overestimate the importance of changing the coach or the CEO. It's not that CEOs are not important, but the preexisting conditions within and surrounding a company are just more important.

Unlike a lot of leaders, Otellini seems aware of this fact. "Intel's culture is blessedly not the culture of a CEO, nor has it ever been," he told me. "It's the Intel culture."

Otellini, of course, knew the Intel culture well. It had formed the substrate of his entire career. Starting out in finance in 1974, he'd worked his way up the chain on the business side of the operation, eventually landing the key gig of managing Intel's IBM account in 1983. It was right before Intel abandoned the memory business. He'd worked closely with Andy Grove, watching how he processed information, managed, and made decisions. He'd spent two years in the executive suite with Craig Barrett, watching him steer Intel in the rocky days after the Internet bust.

The Intel culture has been remarkably successful, of course. But it has also shown a resistance to change. They've managed to successfully surf massive transitions like getting out of the memory business in 1985 to focus on microprocessors or retaining a leading position in the move from desktop processors to laptops, but the same focus and scale that make Intel so powerful also prevent it from changing tacks quickly. If you've got 4,000 PhDs and 96,000 other people working for you, it's hard to turn on a dime.

Perhaps, though, the transformation that Otellini began in 2005 will finally be complete during Brian Krzanich's tenure. Intel's technical lead, perfectionism, and scale will create amazing chips at prices that cause phone and tablet makers to give up their commitments to the ARM ecosystem.

"They already have products in the marketplace that are competitive and I would not be surprised if they had best-in-class products in a few years," Rasgon said. "What they are doing on the [manufacturing] process has really driven that."

Otellini sees an analogy to the current situation in Intel's performance with Centrino laptop chips. "Intel made the big bet. [Chief Product Officer] Dadi [Perlmutter] and I made the big bet in 2001 to bet on mobile. This was when the desktop was 80 percent of all PCs, maybe 90 percent, and growing unabated, and notebooks were luggables," Otellini said. "And we thought that there was an argument about what a computer could be and that led to what would become Centrino."

Centrino chips won over Apple's Steve Jobs because the silicon was so good they could not be ignored. "The head-to-head comparison of an Intel-based notebook and an Apple notebook was night and day in terms of performance, battery life, etc," he said. "That's what got their attention."

And if Apple -- so notoriously anti-Intel that a 1996 Mac commercial showed a burning Intel mascot -- could come to love Intel processors, couldn't all the current ARM licensees see the blue Intel light?

A Battle of Innovation Cultures: The Lab Vs. The Ecosystem

Silicon Valley has been, rightly or wrongly, synonymous with innovation for four decades. Now, it's as much a notion as a place. When Paul Otellini joined Intel in 1974, a year of bloodletting at the company that also saw two of its future CEOs hired (Otellini and his predecessor Craig Barrett), the peninsula south of San Francisco and the Santa Clara Valley had merged in the American mind into the crucible for the future. Though Intel would only make $20 million that year, it was clear that these chips, and their tendency to get cheaper so quickly, were a new force unto the world. The whole enterprise was shaped by individual humans, structured by capitalism, and aided by Cold War R&D money, but the effects of all this memory and computation, its exponentiality, were hard to predict. A story led the New York Times business section a couple years later with the banner headline, "Revolution in Silicon Valley." The subheadline read, "'The basic thing that drives technology is the desire to make money,' says one executive. Now, where can they use the technology?"

Think of that as a kind of ur-mainstream media Silicon Valley story. It's got all the elements: an early reference to the orchards that used to exist, "low-slung" buildings as the unlikely seat of revolution, hot consumer products, hypercompetitive industries, massive innovation, great men, something like a formulation of Moore's Law, and the exceptionalist sense that this could only happen in this one place in California.

There are two conflicting narratives about all this Silicon Valley innovation. On the one hand, there is the notion that Silicon Valley is an ecosystem of entrepreneurs and inventors, financiers and researchers. Companies can break up and reassemble. Spinoffs can pop out of larger corporations. Startups can disrupt whole industries. Competitors can cooperate and then compete and then cooperate. And when you add up all these risk-taking, failure-forgiving people, the sum is greater than the parts. Fundamental to this notion is the idea that innovation happens best in networks of firms and individuals, in an ecosystem (a word that itself gained credence thanks, in part, to Stanford ecologist Paul Ehrlich in the late 1960s).

On the other hand, we have Intel. Intel structured and thought of itself like a research laboratory, according to long-time Silicon Valley journalist Michael S. Malone, in his 1985 book, The Big Score. "The image of a giant research team is important to understanding the corporate philosophy Intel developed for itself," Malone wrote. "On a research team, everybody is an equal, from the project director right down to the person who cleans the floors: each contributes his or her expertise toward achieving the final goal of a finished successful product."

Malone went on to say that the culture of Intel was not that of a bunch of loosey-goosey risk takers, but true believers, almost robotic in their dedication to Intel's goals. "Intel was in many ways a camp for bright young people with unlimited energy and limited perspective," he continued. "That's one of the reasons Intel recruited most of its new hires right out of college: they didn't want the kids polluted by corporate life... There was also the belief, the infinite, heartrending belief most often found in young people, that the organization to which they've attached themselves is the greatest of its kind in the world; the conviction they are part of a team of like-minded souls pushing back the powers of darkness in the name of all mankind."

This is a very different vision of innovation. This is an army of people tightly coordinated, highly organized, and hardened by faith. It was this side that competitors and suppliers have long encountered and complained about (sometimes appealing to the regulatory authorities).

"They are tough to deal with. I know some of the executives privately and they say, 'We're not really nice people to deal with.' They admit it. And it's true," Magnus Hyde, former head of Taiwan Semiconductor North America, told me. "They are really nasty when you get into negotiations."

And as for this whole "failure's cool!" mantra that seems to re-echo around Silicon Valley, Intel's Andy Grove enshrined what he called "creative confrontation," which encouraged and rewarded people for getting after each other over flagging performance or mistakes.

Taken as a whole, Intel is a self-contained research, development, and deployment machine. That is not an ecosystem. Though obviously Intel has many partners with whom it makes money and has good relationships, on the leading edge of innovation, Intel goes it alone.

Time and again, this strategy has worked as almost all of their competitors have fallen by the wayside. Intel is the only chip company in the world that's been able to hang on to its vertically integrated business model. "They have these methods, these Intel methods, that have worked very well for them," Hyde said.

The way Otellini vanquished AMD is a classic example of the Intel way. AMD had always played Brooklyn to Intel's Manhattan. Otellini himself had offers from both companies coming out of business school, and the competition remained fierce all the way until he took the reins. AMD was resurgent then. They had beat Intel to market with excellent 64-bit chips that were perceived to provide more performance for less money than Intel's processors. AMD's stock was on a climb that would take it to dizzying heights. By the end of 2008, Intel had destroyed AMD's momentum and sent the company into a tailspin. Finally, in early 2009, AMD spun out its fabrication facilities, exiting the chipmaking game. It was TKO in the longest-running bout in Silicon Valley. "They buried AMD," Rasgon put it bluntly.

Of course, there were several ugly court battles about Intel's hardball tactics in keeping AMD out of more machines. Intel eventually paid AMD $1.25 billion to settle the case in late 2009.

What's clear is that when Intel has a single competitor to focus on, they are hard to beat. "The thing about Intel is that we always come back," Otellini told me. "We put resources on it. We get focused. And watch out." They outinnovate, outmanufacture, and outcompete any company that comes into their sights.

Which brings us back to the question of mobile, the space that has eluded Intel for a decade. What's fascinating is that it's a battle between Intel and a swarm of companies licensing chip designs from a relatively small IP company, ARM. Intel has bulk and strength, but they've come up against that other model of innovation: the ecosystem. It's two ideas about how Silicon Valley works locked in combat. If you're the swarm, with Qualcomm as the queen bee, the question is: How do you hold the coalition together?

If you're Intel, which fly do you fire the shotgun at? Not ARM, that's for sure.

"ARM is an architecture. It's a licensing company," Otellini said. "If I wanted to compete with ARM, I'd say let's license Intel architecture out to anyone that wants it and have at it and we'll make our money on royalties. And we'd be about a third the size of the company."

"It's important for me, as the CEO, that I tell our employees who it is that we have to compete with and who we're focused on, and I don't want them focused on ARM. I want them focused on Qualcomm or Nvidia or TI," he continued. "Or if someone like Apple is using ARM to build a phone chip, I want our guys focused on building the best chip for Apple, so they want to buy our stuff."

I asked ARM's Segars about what I'd heard from Otellini, namely that Intel would beat the individual members of his coalition because they make the best transistors, and that would ultimately carry the day.

"There is a long track record of Intel investing very heavily on the leading edge of technology and implementing innovations of process technologies ahead of everybody else. That is a statement of fact and nobody would dispute that," Segars responded. "The transistors are, of course, important. The way in which the transistors are used is very important and really what the explosion of the technology space over the last couple of decades has shown is that there is a need to innovate and you can't focus innovation in just one company. If all the world's chips came from one vendor, whether it's Intel or anybody else, naturally that's going to limit innovation because there are only so many people and there will be a philosophy that's followed."

But Otellini, or Krzanich, can't focus Intel on ARM's "intangible" rhetoric. The questions industry watchers should be asking, Otellini said, are these: "Do you think Intel can beat Qualcomm? Do you think Intel can beat Nvidia? Do you think Intel can compete with Samsung?"

The answer might be yes, Intel can compete with each one, but maybe not with them all.

Or, maybe, the great machine will dominate once again. "If I'm looking out five, ten years, they could potentially bury everybody else," said Rasgon, the analyst.
http://www.theatlantic.com/technolog...le-war/275825/





Disruptions: Even the Tech Elites Leave Gadgets Behind
Nick Bilton

If you were to meet 32-year-old Robin Sloan of San Francisco, you might think him a Luddite unable to get his head around new technologies. He owns an old Nokia phone with one main application: making phone calls. He takes notes using a pen and paper notepad. And he reads books printed on paper.

But Mr. Sloan is far from a Luddite. He used to work at Twitter as a media manager, teaching news outlets to use the hottest social media tools. Before that he was with Current TV as an online strategist, inventing the future of digital journalism.

Yet last year, as he set out to write his first book, “Mr. Penumbra’s 24-Hour Bookstore,” he found his iPhone and other technologies were getting in the way of his productivity, so he simply got rid of them. “I found it was more important and more productive for me to be daydreaming and jotting down notes,” he said. “I needed my idle minutes to contribute to the story I was doing, not checking my e-mail, or checking tweets.”

Even in Silicon Valley, Mr. Sloan has company.

As every aspect of our daily lives has become hyperconnected, some people on the cutting edge of tech are trying their best to push it back a few feet. Keeping their phone in their pocket. Turning off their home Wi-Fi at night or on weekends. And reading books on paper, rather than pixels.

I’ve experienced this, too.

Two years ago, when the iPhone and iPad were spiking in popularity, when I dined with other technology bloggers and reporters we enthusiastically passed our phones around the table, showing off the latest app or funny YouTube clip.

Now, even as our gadgets can hold more apps and stream faster videos, when I’m at dinner with technologists we play a new game. Attendees happily place their smartphones in a stack in the middle of the table, and the first person who touches his or her phone before the meal is over has to pay the bill.

Some couples who work in tech seem to be trying to step back the most.

“At least once a month my wife and I jump in our car and drive until cell service drops off (yes, this is possible) and spend the weekend engaged with all things analog,” Evan Sharp, a founder of Pinterest, said — on e-mail. “We read, we walk all over the California hills, we cook, we meet people who don’t work in technology.”

Other couples have told me of a “no gadgets in the bedroom” rule. (Kindles are sometimes an exception.) Some say they leave their phones at home when they go for Sunday brunch. Rather than take a picture of their bacon and eggs to post to Instagram, they can now enjoy each other’s company, and do that strange thing called talking.

There could even be a business model in products that encourage us to step away from our gadgets.

Last Tuesday, Penguin Press published “The Pocket Scavenger,” a book both physical and digital that encourages readers to go on an unusual scavenger hunt, collecting random objects, drawing and smudging on the book’s pages, then documenting them later with a smartphone.

“We’re not going to get rid of technology,” said Keri Smith, the author. “I feel like we’ve lost touch with noticing smells and tactile sensations, and I’d just like to offer some kind of antidote to what’s out there.”

As for Mr. Sloan, who has since published his book, he said his break from technology was a resounding success. He still checks his e-mail, but not while he’s getting coffee with someone or going for a stroll.

Although he isn’t rushing off to buy the next iPhone, he said he wouldn’t rule it out. But he would use such a device differently than he did before downgrading his cellphone.

“It sounds silly because we all used to do this all the time, but after getting rid of my smartphone I am now so much more comfortable just leaving the house without any phone at all,” he said. “I feel like I kind of learned how to do that again, and I would do the same thing if I had a fancy new smartphone too.”
http://bits.blogs.nytimes.com/2013/0...adgets-behind/





How UNSW Creates the World's Best Hackers
Ben Grubb

The University of NSW is known for producing some of Australia's top lawyers, doctors and accountants. But the 64-year-old institution is now gaining a reputation for excelling in what is often viewed as anti-establishment – hacking.

Growing reputation: UNSW students excelled in an anti-establishment hacking challenge. Photo: Erin Jonasson

It's noon on Tuesday and a group of four students are hovering over their laptops having just bunkered down in a room at UNSW's Kensington campus, where they're going to be for the next 24 hours hacking into IT systems. Computer cables, power boards, water bottles and brown paper bags full of food are spread across the table.

Looking over the students' shoulders is their admired IT security lecturer and mentor, who likes to distance himself from being called an academic, laughs off the suggestion he's a "hackademic" and doesn't want his photo taken for this article for undisclosed reasons.

By noon on Wednesday the students have barely had any sleep. This is because they have just finished participating in the 2013 Cyber Security Challenge, which Telstra, along with government agencies, held this week. It's the second year the challenge has been held, and Telstra says it holds it to help identify potential candidates for IT security roles, which are in demand.

The challenge involves students, from universities and TAFE colleges nationwide, competing in a non-stop 24-hour "capture the flag" contest, where they test the security of a fictitious company's IT systems, aiming to get the most points.

During the 24 hours they are required to conduct "penetration testing" on the company's web apps, network and product, as well as give advice in easy-to-understand "grandma" language.

A penetration test is used to evaluate the security of a computer system or network by simulating a hacker attack.

Forty-three groups, each made up of four students from 20 institutions, participated and it proved so popular that Telstra had to cap the number of teams per school to three. What may have attracted the budding young hackers was the first prize, a trip to the infamous Black Hat IT security conference in Las Vegas.

UNSW entered three groups of four students in the challenge, placing first, second and third. The winning group also won last year.

Some members of the winning team have also come first in other Australian security challenges, such as the Ruxcon capture the flag contest held in Melbourne.

So what is it about UNSW that is cultivating some of the nation's most promising young "white hat" hackers?

Fionnbharr Davies is the IT security lecturer and mentor to the UNSW students.

Unlike other, more serious academics who teach at the university, Davies lists as a joke on his staff profile that he is a "Professional Suit Wearer" who teaches "Haqr techniques", "HTML" and "Thwarting the lizard people threat".

A 27-year-old who works full-time at Azimuth Security, Davies co-lectures part-time alongside Brendan Hopper, 28, who works at the Commonwealth Bank as a penetration tester.

Many of their students have gone on to work at large security companies such as Stratsec (now BAE Systems) and Securus Global, while some pupils are doing internships at the likes of Google.

Theo Julienne, one of Davies' students and a member of the Cyber Security Challenge winning team, said that what made him a good white hat hacker was Davies' unconventional teaching practices. Julienne's teammate Karla Burnett agreed. Both students said Davies' courses were practical and hands-on, rather than all about theory.

"If companies are doing stupid things that do not make sense he criticises them in lectures," Burnett said. "He won't try and be polite about stuff."

Julienne added: "[Davies] doesn't screw around with theory stuff. He tells us this is how it works and this is what you do in a company to secure it."

Although working in the security industry pays more than lecturing, Davies said he lectures because it is fun.

"I've actually not been paid twice because I was too lazy to fill out the forms because I'm not motivated by money," he said.

"It's incredibly rewarding to teach students and see them do really well... My students winning first, second and third place [this week] is great."

He said his courses are very different from the typical IT courses at other universities.

"It's a really big difference from your [average computer] science degree courses, because generally when you are going to them you start doing artificial intelligence [AI].

"You get told here's a problem, here's the algorithm you should use to solve it, and go and solve it using that algorithm.

"So essentially this [security] part of the course I teach is like a mini thesis. We drop the students in the deep end and go, 'All right, you have to come up with your own project. It's going to be difficult and we'll judge you half on it.'

"A lot of people then immediately drop out of the course after the first or second week when they realise it's going to be a ton of work. A lot of university students don't want to do a ton of work."

Davies said he was shocked when he saw the way students at other universities were taught IT security.

"They're all taught by these academics who have never hacked a thing in their life," he said. "The students are good, it's just the teachers ...

"Talking to some of the other students at other universities I was actually quite appalled at the sort of things they were taught. Like, it's not even real computer science."

Davies said about 60 per cent of his course focused on projects, while the other 40 per cent was based on a "war game" challenge like the one held by Telstra and government agencies.

"So students write rootkits," he said. "At the courses running at the moment ... students are designing rootkits for Mac OS X, Android and Linux."

A "rootkit" is a stealthy type of software designed to hide the existence of certain processes or programs from normal methods of detection. It is often malicious and used by hackers.

"People are writing exploits in the course and doing large security projects you might normally see proper security researchers doing [in the real world]," Davies said. "But students can do them because they're intelligent and motivated."

To defend against hackers, Burnett said, one needed to think like one, and that was what Davies taught students to do – by thinking "offensive" rather than "defensive".

"You need to know how a hacker is thinking if you want to defend against them," Burnett said.

Davies, when asked if he believed students should report vulnerabilities they find or stumble across on companies' IT systems, said: "We say that you should do whatever you want with the exploit. It's your vulnerability, you found it, it's your thing. You have no obligation to report it at all. In fact, reporting it can get you into a lot of trouble."

Davies pointed to the case of Australian security expert Patrick Webster, who received a knock on the door from police and a legal letter after he told First State Super he had found a flaw exposing the personal details of its 770,000 members.

First State Super disabled his superannuation account with them, asked to check his computers and said he may have been liable for any costs to fix the security breach he reported to them.

"There is always the chance that a company will turn around and bite you," Davies said.

"That kind of shit happens all the time and is why I actually recommend not doing responsible disclosure. I don't like the term and if anything you should sell the vulnerability.

"Sell it to anyone [you can find] because they're the ones who will then deal with the company... Or, if you're not going to reveal it, go through a third party like CERT Australia."

Matt Barrie, who teaches cryptography and network security at the University of Sydney, said his course was also very hands-on, and to an extent unconventional, but not as much as Davies'.

"We get people to analyse things like Wi-Fi, find out how it can be hacked and what the problems are," he said.

Barrie also conducts war games and gives students cryptographic puzzles to crack.

But unlike Davies, Barrie believes universities are "getting fairly practical" with their IT security courses.

"Some of my graduates have joined the Defence Signals Directorate and have become key people at a firm called Stratsec, which recently got sold to [Defence contractor] BAE Systems," Barrie said.

Asked if he thought he could be creating the next generation of black hat hackers, Davies said: "I have absolutely no fears I'm creating some black hat army. If anything, these students are going to go on to become really good security researchers.

"I think if people were going to be black hats they'd be black hats by the time they hit university."
http://www.smh.com.au/it-pro/securit...510-2jdbt.html





Top U.S. Admiral Puts Cyber Security On the Navy's Radar
John O'Callaghan

Cyber security and warfare are on par with a credible nuclear deterrent in the defense priorities of the United States, the U.S. Navy's top admiral said on Monday, after the Pentagon accused China of trying to hack into its computer networks.

Admiral Jonathan Greenert, the chief of naval operations, told Reuters the defense department's cyber program had continued unabated despite the political gridlock about the U.S. budget deficit and enforced spending cuts in other areas.

"The level of investment that we put into cyber in the department is as protected or as focused as it would be in strategic nuclear," Greenert said in an interview in Singapore just before the start of the three-day Reuters Cybersecurity Summit in Washington (link.reuters.com/dam97t).

"It's right up there, in the one-two area, above all other programs."

For the U.S. Navy, cyber security is critical because its ability to coordinate ships, planes and personnel depends heavily on computer networks and satellites.

"We've got to understand how to defend them, how to exploit them ourselves and how to, as necessary, be able to do offensive effects," said Greenert, who will attend this week's IMDEX Asia maritime defense show in Singapore.

"Many people who look at the future of warfare say it's bound to start in cyber. The first thing you'd want to do is shut down their sensors, interrupt their power grid, confuse them ... and presumably guard against that kind of thing and recognize if it's starting."

The U.S. Navy has enjoyed advantages in traditional sea, undersea and air warfare but times have changed, he said.

"In the cyber domain, a lot of people - civilian hackers, anybody - can get into this," Greenert said.

The Pentagon, in its annual report to Congress on China, cited "state-sponsored industrial and technical espionage" to benefit Chinese defense industries, military planners and government leaders.

Last week's report, for the first time accusing the Chinese of trying to break into U.S. defense computer networks, said the cyber snooping was a "serious concern" that pointed to a greater threat because the "skills required for these intrusions are similar to those necessary to conduct computer network attacks".

Beijing dismissed the report as groundless. The People's Daily, regarded as a mouthpiece of the Chinese Communist Party, said later that "the United States is the real 'hacking empire' and has an extensive espionage network".

Greenert, a member of the Joint Chiefs of Staff, said the U.S. military felt the need to go public about China due to a "threshold of frustration".

Beyond China, he said Iran has a "deliberate and emerging" cyber capability, Russia is "very advanced" and North Korea is "more in the development stage".

HIGH TECH AND OFF THE GRID

The security of satellites is paramount as they underpin nearly all U.S. military functions with communications, target and weather data, along with warning of missile launches.

Washington is keeping a close eye on Chinese activities in space after an intelligence report last year raised concerns about China's growing ability to disrupt U.S. military and intelligence satellites.

"You want to develop satellites that have a cryptological effect or impact so that they are hardened against jamming," Greenert said.

As a contingency in case satellites are jammed, contaminated with a computer virus or hit by a missile or a directed-energy weapon, he said the Navy was going "back to the future" by looking at high-frequency relays used during the Cold War.

"That part of the electromagnetic spectrum is still out there. It's not as well traveled as it used to be so it's actually cleaner," he said. "We're actively working on that."

The energy blasted from ships by radar and satellite systems is "like a beacon", Greenert said, making reduction of the electronic "signature" a key part of the Navy's cyber strategy.

Using radar in targeted patterns, changing frequencies and using lower energy levels and shorter pulses are all part of the plan to stay stealthy - along with shutting down the systems quickly when in "mission control" mode.

"It's like quitting smoking. You've got to learn to get off this addiction to constant information to and from," Greenert said. "Going off the grid can be a good thing."

(Editing by Ron Popeski)
http://www.reuters.com/article/2013/...94C0B320130513





LulzSec Hackers Sentenced to Between One and Three Years, Accessory to 32 Months

Sentences total six years in prison.
Dan Raywood

Three members of the hacktivist group LulzSec have been sentenced to a total of six years in prison.

Members Ryan Ackroyd, Jake Davis and Mustafa al-Bassam had been charged with attacks on the Serious Organised Crime Agency (SOCA), Sony, Nintendo, 20th Century Fox and governments and police forces in a 50-day spree in the summer of 2011.

Davis was sentenced to 24 months in a young offender's institution and he will serve half of the sentence. Al-Bassam received a 20-month sentence, suspended for two years and 300 hours unpaid work. Ackroyd was given a 30-month sentence; he will serve half.

A fourth man, Ryan Cleary, also pleaded guilty to possession of child abuse images following a second arrest on October 4, 2012. He will be sentenced at a separate hearing.

Among its other activities, LulzSec also put a fake front page on The Sun newspaper claiming News International chief executive Rupert Murdoch had died, and in its final act it published the details of 500,000 users. Ackroyd was known as 'Kayla', while Davis was the main spokesman known as 'Topiary'.

Ackroyd was arrested in September 2011, while Davis was charged with unauthorised computer access and conspiracy to carry out a distributed denial-of-service (DDoS) attack in August 2011. Due to his age, al-Bassam was not named until he pleaded guilty.

Both men were named by the FBI when LulzSec leader Sabu was revealed to have been working as an informant for the bureau in March 2012.

Ryan Cleary, who was arrested in June 2011, was not a recognised member of the group but did own a botnet of 100,000 PCs that was reported to have been used by the group. He was sentenced to 32 months in jail and will be required to serve half of his sentence.

Judge Deborah Taylor said that "the name LulzSec encapsulates your desires to cause embarrassment and disruption, while keeping your own identities hidden", and accused the four of each playing a role during a seven-month online campaign, "using your technical abilities to cause catastrophic losses for amusement".
http://www.scmagazine.com.au/News/34...32-months.aspx





Real Estate or Utility? Surging Data Center Industry Blurs Boundaries
James Glanz

The trophy high-rises on Madison, Park and Fifth Avenues in Manhattan have long commanded the top prices in the country for commercial real estate, with yearly leases approaching $150 a square foot. So it is quite a Gotham-size comedown that businesses are now paying rents four times that in low, bland buildings across the Hudson River in New Jersey.

Why pay $600 or more a square foot at unglamorous addresses like Weehawken, Secaucus and Mahwah? The answer is still location, location, location — but of a very different sort.

Companies are paying top dollar to lease space there in buildings called data centers, the anonymous warrens where more and more of the world’s commerce is transacted, all of which has added up to a tremendous boon for the business of data centers themselves.

The centers provide huge banks of remote computer storage, and the enormous amounts of electrical power and ultrafast fiber optic links that they demand.

Prices are particularly steep in northern New Jersey because it is also where data centers house the digital guts of the New York Stock Exchange and other markets. Bankers and high-frequency traders are vying to have their computers, or servers, as close as possible to those markets. Shorter distances make for quicker trades, and microseconds can mean millions of dollars made or lost.

When the centers opened in the 1990s as quaintly termed “Internet hotels,” the tenants paid for space to plug in their servers with a proviso that electricity would be available. As computing power has soared, so has the need for power, turning that relationship on its head: electrical capacity is often the central element of lease agreements, and space is secondary.

A result, an examination shows, is that the industry has evolved from a purveyor of space to an energy broker — making tremendous profits by reselling access to electrical power, and in some cases raising questions of whether the industry has become a kind of wildcat power utility.

Even though a single data center can deliver enough electricity to power a medium-size town, regulators have granted the industry some of the financial benefits accorded the real estate business and imposed none of the restrictions placed on the profits of power companies.

Some of the biggest data center companies have won or are seeking Internal Revenue Service approval to organize themselves as real estate investment trusts, allowing them to eliminate most corporate taxes. At the same time, the companies have not drawn the scrutiny of utility regulators, who normally set prices for delivery of the power to residences and businesses.

While companies have widely different lease structures, with prices ranging from under $200 to more than $1,000 a square foot, the industry’s performance on Wall Street has been remarkable. Digital Realty Trust, the first major data center company to organize as a real estate trust, has delivered a return of more than 700 percent since its initial public offering in 2004, according to an analysis by Green Street Advisors.

The stock price of another leading company, Equinix, which owns one of the prime northern New Jersey complexes and is seeking to become a real estate trust, more than doubled last year to over $200.

“Their business has grown incredibly rapidly,” said John Stewart, a senior analyst at Green Street. “They arrived at the scene right as demand for data storage and growth of the Internet were exploding.”

While many businesses own their own data centers — from stacks of servers jammed into a back office to major stand-alone facilities — the growing sophistication, cost and power needs of the systems are driving companies into leased spaces at a breakneck pace.

The New York metro market now has the most rentable square footage in the nation, at 3.2 million square feet, according to a recent report by 451 Research, an industry consulting firm. It is followed by the Washington and Northern Virginia area, and then by San Francisco and Silicon Valley.

A major orthopedics practice in Atlanta illustrates how crucial these data centers have become.

With 21 clinics scattered around Atlanta, Resurgens Orthopaedics has some 900 employees, including 170 surgeons, therapists and other caregivers who treat everything from fractured spines to plantar fasciitis. But its technological engine sits in a roughly 250-square-foot cage within a gigantic building that was once a Sears distribution warehouse and is now a data center operated by Quality Technology Services.

Eight or nine racks of servers process and store every digital medical image, physician’s schedule and patient billing record at Resurgens, said Bradley Dick, chief information officer at the company. Traffic on the clinics’ 1,600 telephones is routed through the same servers, Mr. Dick said.

“That is our business,” Mr. Dick said. “If those systems are down, it’s going to be a bad day.”

The center steadily burns 25 million to 32 million watts, said Brian Johnston, the chief technology officer for Quality Technology. That is roughly the amount needed to power 15,000 homes, according to the Electric Power Research Institute.

Mr. Dick said that 75 percent of Resurgens’s lease was directly related to power — essentially for access to about 30 power sockets. He declined to cite a specific dollar amount, but two brokers familiar with the operation said that Resurgens was probably paying a rate of about $600 per square foot a year, which would mean it is paying over $100,000 a year simply to plug its servers into those jacks.

While lease arrangements are often written in the language of real estate, “these are power deals, essentially,” said Scott Stein, senior vice president of the data center solutions group at Cassidy Turley, a commercial real estate firm. “These are about getting power for your servers.”

One key to the profit reaped by some data centers is how they sell access to power. Troy Tazbaz, a data center design engineer at Oracle who previously worked at Equinix and elsewhere in the industry, said that behind the flat monthly rate for a socket was a lucrative calculation. Tenants contract for access to more electricity than they actually wind up needing. But many data centers charge tenants as if they were using all of that capacity — in other words, full price for power that is available but not consumed.

Since tenants on average tend to contract for around twice the power they need, Mr. Tazbaz said, those data centers can effectively charge double what they are paying for that power. Generally, the sale or resale of power is subject to a welter of regulations and price controls. For regulated utilities, the average “return on equity” — a rough parallel to profit margins — was 9.25 percent to 9.7 percent for 2010 through 2012, said Lillian Federico, president of Regulatory Research Associates, a division of SNL Energy.
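The arithmetic behind that claim can be made concrete with a small sketch. The figures below are hypothetical, chosen only to illustrate the pricing model the article describes (a tenant contracting for roughly twice the capacity it actually draws, while being billed at the full contracted rate):

```python
# Illustrative sketch of flat-rate capacity pricing vs. metered billing.
# All numbers are assumptions for illustration, not from the article.

contracted_kw = 100      # capacity the tenant contracts for
actual_kw = 50           # typical draw: about half of contracted (per the article)
utility_rate = 0.10      # $/kWh the data center pays its utility (assumed)
hours = 730              # approximate hours in a month

# Flat-rate model: tenant is billed as if it used its full contracted capacity.
billed = contracted_kw * hours * utility_rate

# The data center's actual electricity cost reflects only what was consumed.
cost = actual_kw * hours * utility_rate

markup = billed / cost   # effectively double the underlying power cost
```

Under these assumed numbers the tenant pays $7,300 for power that cost the operator $3,650, which is the "effectively charge double" dynamic Mr. Tazbaz describes.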

But the capacity pricing by data centers, which emerged in interviews with engineers and others in the industry as well as an examination of corporate documents, appears not to have registered with utility regulators.

Interviews with regulators in several states revealed widespread lack of understanding about the amount of electricity used by data centers or how they profit by selling access to power.

Bernie Neenan, a former utility official now at the Electric Power Research Institute, said that an industry operating outside the reach of utility regulators and making profits by reselling access to electricity would be a troubling precedent. Utility regulations “are trying to avoid a landslide” of other businesses doing the same.

Some data center companies, including Digital Realty Trust and DuPont Fabros Technology, charge tenants for the actual amount of electricity consumed and then add a fee calculated on capacity or square footage. Those deals, often for larger tenants, usually wind up with lower effective prices per square foot.

Regardless of the pricing model, Chris Crosby, chief executive of the Dallas-based Compass Datacenters, said that since data centers also provided protection from surges and power failures with backup generators, they could not be viewed as utilities. That backup equipment “is why people pay for our business,” Mr. Crosby said.

Melissa Neumann, a spokeswoman for Equinix, said that in the company’s leases, “power, cooling and space are very interrelated.” She added, “It’s simply not accurate to look at power in isolation.”

Ms. Neumann and officials at the other companies said their practices could not be construed as reselling electrical power at a profit and that data centers strictly respected all utility codes. Alex Veytsel, chief strategy officer at RampRate, which advises companies on data center, network and support services, said tenants were beginning to resist flat-rate pricing for access to sockets.

“I think market awareness is getting better,” Mr. Veytsel said. “And certainly there are a lot of people who know they are in a bad situation.”

The soaring business of data centers is exemplified by Equinix. Founded in the late 1990s, it survived what Jason Starr, director of investor relations, called a “near death experience” when the Internet bubble burst. Then it began its stunning rise.

Equinix’s giant data center in Secaucus is mostly dark except for lights flashing on servers stacked on black racks enclosed in cages. For all its eerie solitude, it is some of the most coveted space on the planet for financial traders. A few miles north, in an unmarked building on a street corner in Mahwah, sit the servers that move trades on the New York Stock Exchange; an almost equal distance to the south, in Carteret, are Nasdaq’s servers.

The data center’s attraction for tenants is a matter of physics: data, which is transmitted as light pulses through fiber-optic cables, can travel no faster than about a foot every billionth of a second. So being close to so many markets lets traders operate with little time lag.

As Mr. Starr said: “We’re beachfront property.”
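The physics behind "beachfront property" can be sketched with a quick propagation-delay calculation. The fiber speed and the distances below are illustrative assumptions, not figures from the article:

```python
# Rough sketch of why traders colocate near exchanges. Assumptions:
# light in vacuum travels ~299,792 km/s, and signals in optical fiber
# propagate at roughly 2/3 of that (refractive index ~1.47).

C_KM_PER_S = 299_792.0
FIBER_INDEX = 1.47  # typical for glass fiber

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay over fiber, in microseconds."""
    fiber_speed = C_KM_PER_S / FIBER_INDEX  # ~204,000 km/s in glass
    return distance_km / fiber_speed * 1e6

# A server in the same building (~0.1 km of cable) vs. one ~50 km away:
for km in (0.1, 50.0):
    print(f"{km:>5} km of fiber: ~{one_way_delay_us(km):.1f} microseconds one way")
```

Tens of microseconds either way is negligible to a person but decisive between competing trading systems, which is why proximity to the exchange servers commands a premium.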

Standing before a bank of servers, Mr. Starr explained that they belonged to one of the lesser-known exchanges located in the Secaucus data center. Multicolored fiber-optic cables drop from an overhead track into the cage, which allows servers of traders and other financial players elsewhere on the floor to monitor and react nearly instantaneously to the exchange. It all creates a dense and unthinkably fast ecosystem of postmodern finance.

Quoting some lyrics by Soul Asylum, Mr. Starr said, “Nothing attracts a crowd like a crowd.” By any measure, Equinix has attracted quite a crowd. With more than 90 facilities, it is the top data center leasing company in the world, according to 451 Research. Last year, it reported revenue of $1.9 billion and $145 million in profits.

But the ability to expand, according to the company’s financial filings, is partly dependent on fulfilling the growing demands for electricity. The company’s most recent annual report said that “customers are consuming an increasing amount of power per cabinet,” its term for data center space. It also noted that given the increase in electrical use and the age of some of its centers, “the current demand for power may exceed the designed electrical capacity in these centers.”

To enhance its business, Equinix has announced plans to restructure itself as a real estate investment trust, or REIT, which, after substantial transition costs, would eventually save the company more than $100 million in taxes annually, according to Colby Synesael, an analyst at Cowen & Company, an investment banking firm.

Congress created REITs in the early 1960s, modeling them on mutual funds, to open real estate investments to ordinary investors, said Timothy M. Toy, a New York lawyer who has written about the history of the trusts. Real estate companies organized as investment trusts avoid corporate taxes by paying out most of their income as dividends to investors.

Equinix is seeking a so-called private letter ruling from the I.R.S. to restructure itself, a move that has drawn criticism from tax watchdogs.

“This is an incredible example of how tax avoidance has become a major business strategy,” said Ryan Alexander, president of Taxpayers for Common Sense, a nonpartisan budget watchdog. The I.R.S., she said, “is letting people broaden these definitions in a way that they kind of create the image of a loophole.”

Equinix, some analysts say, is further from the definition of a real estate trust than other data center companies operating as trusts, like Digital Realty Trust. As many as 80 of its 97 data centers are in buildings it leases, Equinix said. The company then, in effect, sublets the buildings to numerous tenants.

Even so, Mr. Synesael said the I.R.S. has been inclined to view recurring revenue like lease payments as “good REIT income.”

Ms. Neumann, the Equinix spokeswoman, said, “The REIT framework is designed to apply to real estate broadly, whether owned or leased.” She added that converting to a real estate trust “offers tax efficiencies and disciplined returns to shareholders while also allowing us to preserve growth characteristics of Equinix and create significant shareholder value.”
https://www.nytimes.com/2013/05/14/t...oundaries.html





Our Intellectual Property Laws Are Out of Control

Intellectual property law is supposed to spur experimentation, PM contributor and Instapundit blogger Glenn Reynolds writes, not deter it. But the patent and copyright laws of yesteryear are ill-equipped for the world of 2013.
Glenn Harlan Reynolds

"Too much of a good thing," Mae West supposedly said, "can be wonderful." Is that true? Maybe in some cases, but probably not where patents and copyrights are concerned.

Just look at the controversy over a recent ruling that made unlocking your cellphone a felony punishable by five years in prison and $500,000 in fines. This twist on 1998's Digital Millennium Copyright Act (DMCA) has encouraged people to rethink what, exactly, intellectual property laws should protect, and to wonder if they've gone too far. I think the answer is yes, and that a look back at the constitutional roots of our patent and copyright system can offer some useful guidelines.

Software is becoming the most valuable part of many physical goods. For a Blu-ray disc, that's obvious: The intellectual property—the movie—matters more than the physical medium. But these days, even cars and airplanes depend as much on their software as on their steel. With that in mind, companies have pushed for ever-greater protections. Because the DMCA makes it illegal to circumvent software encryption, some DIY car repairs could potentially be judged illegal—the software may be encrypted!

Intellectual property law is supposed to promote experimentation, not hold it back. A similar problem in 17th-century England led to the precursor of our own system of patents and copyrights. In those days British monarchs often granted monopolies to courtiers in exchange for money or political support. The holder had the exclusive right to sell a product, anything from playing cards to French perfume. These unpopular arrangements were political payoffs, not rewards for introducing new products. And the abuses got so bad that in 1624 Parliament passed a law banning monopolies except as a reward for inventors.

Fast-forward to the drafting of the United States Constitution and you find similar thinking. Thomas Jefferson opposed all government-granted monopolies, but James Madison argued that while monopolies generally are bad, there is a place for patents and copyrights. In the end, the Patent and Copyright Clause (Article I, Section 8) empowered Congress "[t]o promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."

The idea was that innovators would be rewarded with a short-term monopoly on their work. Afterward it would enter the public domain, hopefully sparking further creations or discoveries. In the early days the Constitution's "limited times" were quite limited: 14 years for patents; 14 years, plus a potential 14-year renewal term, for copyrights. And patents were strictly scrutinized to ensure that they represented real inventions. (Jefferson himself, when he was secretary of state, served as a patent examiner, so important did he consider this task.)

Nowadays the limited times aren't so limited. Copyright has been extended to the life of the author plus 70 years; corporate works (with no living person as "author") get a 120-year term. Patents are good for just 20 years, but there's far less scrutiny to ensure that they represent something truly new—a lot of "nuisance patents" are filed to provide bargaining chips rather than to protect actual creativity. Also, influential companies often get Congress to extend their own patent rights through special legislation. Does a century-plus exclusive right encourage invention more than a 28-year exclusive right? It's doubtful.

The DMCA's rules make things worse by interfering with the repair or repurposing of electronic goods after they have been sold. Some companies are even trying to apply that kind of thinking to nondigital products. The Supreme Court just took a small, positive step in the case of Kirtsaeng v. John Wiley, where it protected the right to resell books bought overseas. The publisher had argued, essentially, that you might own a book you bought, but the publisher still controlled its resale.

Ownership ought to mean something. When you buy a smartphone or an automobile, it should be yours, and companies shouldn't be able to leverage their intellectual property rights in software to keep you from unlocking, repairing, modifying, or reselling it as you see fit. Intellectual property is a good thing, all right. But it turns out that too much of it isn't wonderful at all.
http://www.popularmechanics.com/tech...ntrol-15467970





Newegg Nukes “Corporate Troll” Alcatel in Third Patent Appeal Win This Year

Company takes hard line on patent cases, and keeps winning.
Joe Mullin

In 2011, Alcatel-Lucent had American e-commerce on the ropes. The French telecom had sued eight big retailers and Intuit, saying that their e-commerce operations infringed Alcatel patents; one by one, they were folding. Kmart, QVC, Lands' End, and Intuit paid up at various stages of the litigation. Just before trial began, Zappos, Sears, and Amazon also settled. That left two companies holding out: Overstock.com and Newegg, a company whose top lawyer had vowed never to settle with patent trolls.

Then things started going badly for the plaintiff. Very badly. Instead of persuading the East Texas jury to award Alcatel the tens of millions it was asking for ($12 million from Newegg alone), the company got a verdict of non-infringement. And the jury invalidated the claims of the one patent Alcatel had argued throughout the trial was key to modern e-commerce: US Patent No. 5,649,131.

Alcatel-Lucent was scrambling. The company's patent-licensing operations were contentious but lucrative, and it surely had plans to move on from those eight heavyweights to sue many more retailers. The '131 patent, titled simply "Communications Protocol" and related to "object identifiers," was its crown jewel.

Alcatel-Lucent went all out on appeal. With upwards of $19 billion in revenue, the company was easily able to amass legal firepower not available to your average patent troll. For its appeal, it hired WilmerHale, the kind of top law firm that handles high-stakes appeals, with lawyers who have their own Wikipedia pages.

Last Friday, May 10, Alcatel-Lucent's appeals lawyer, Mark Fleming, made his case to a three-judge panel. Representing Newegg and Overstock was Ed Reines, the same attorney who helped Newegg blow apart the infamous "online shopping cart" patent in an appeal ruled on a few months ago.

Federal Circuit judges typically take months, and occasionally years, to review the patent appeals that come before them. Briefs in this case were submitted last year, and oral arguments were heard last Friday. Yet the three-judge panel upheld Newegg's win without comment, in just three days.

Yesterday morning, Reines sent Newegg's legal team an e-mail: "I'm pleased to report we received a summary affirmance. That is extraordinary to my eye given the stakes and complexity of the matter."

"Mark Fleming is phenomenal, a great lawyer," said Lee Cheng, Newegg's chief legal officer, in an interview with Ars. "I heard him argue. But even he can't make bad facts go away."

Alcatel-Lucent dropped the case over its other two patents, desperate to get back the '131 patent that Newegg and Overstock had killed at trial. “If they had been able to revive this patent, the litigation machine would have continued on," Reines told Reuters after the win.

"There's good news and there's bad news," said Cheng in an interview with Ars. "The good news is, we won this case on every point. The bad news is, we're running out of lawsuits. There are fewer trolls for us to fight. I've spent a lot of time over the last seven years figuring out what to do with these guys. There are strategies I think would be really neat and effective that I literally can't execute. I can't make good law because I don't have any appellate cases left. They [the trolls] are dismissing cases against us before any dispositive motions."

Newegg has already won two other patent appeals this year: from Kelora Systems and Soverain Software. Even though Alcatel-Lucent has billions in revenue from real businesses, when it comes to patent battles, Cheng doesn't see them as being so different. Since Alcatel is asserting patents in markets it's nowhere near actually participating in, he sees them as a kind of "corporate troll."

"It's an operating company that happens to hold a patent," said Cheng. "But it does nothing at all to bring the benefit of that patent to society."

Mark Griffin, Overstock's general counsel and Cheng's co-defendant in this case, began to see Alcatel-Lucent in the same terms. "They took an old cell phone patent and tried to say that it applied to the Internet," Griffin told Bloomberg. His company is looking to take a harder line against trolls, as Newegg has. "The general counsels of publicly traded companies have started to wake up—we don't feed trolls," Griffin said.

Still, fighting them off was no easy task. Alcatel acquired Lucent in 2006, and the French telecom used the intellectual property it acquired to kick off one of the most aggressive and wide-ranging patent licensing campaigns in history, embracing the "patent troll" model even as it maintained operating businesses in other realms.

"These are the Bell Labs patents," Cheng said. "It is truly, truly tragic how the mighty have fallen. This company was once the pride of American innovation, a company that had its roots in Thomas Edison. And it ended up selling off its patents for a few bucks. What Alcatel-Lucent did was really offensive." He continued:

They systematically sent thousands of letters out saying, "Hey, we own 27,000 patents, and here are some patents we think you infringe." They had a whole licensing group whose job was to monetize these patents, by threatening litigation, and in some cases litigating. It didn't actually matter if you did your own analysis and got back to them and said, "Hey guys, we actually think we don't infringe." The response was something to the effect of, well, we have 27,000 patents—and you probably infringe something, so give us a licensing fee.

At trial in East Texas, Cheng took the stand to tell Newegg's story. Alcatel-Lucent's corporate representative, at the heart of its massive licensing campaign, couldn't even name the technology or the patents it was suing Newegg over.

"Successful defendants have their litigation managed by people who care," said Cheng. "For me, it's easy. I believe in Newegg, I care about Newegg. Alcatel Lucent, meanwhile, they drag out some random VP—who happens to be a decorated Navy veteran, who happens to be handsome and has a beautiful wife and kids—but the guy didn't know what patents were being asserted. What a joke.

"Shareholders of public companies that engage in patent trolling should ask themselves if they're really well-served by their management teams," Cheng added. "Are they properly monetizing their R&D? Surely there are better ways to make money than to just rely on litigating patents. If I was a shareholder, I would take a hard look as to whether their management was competent."
http://arstechnica.com/tech-policy/2...-this-year/#p3





U.S. Now Paints Apple as ‘Ringmaster’ in Its Lawsuit on E-Book Price-Fixing
Edward Wyatt and Nick Wingfield

The e-mail, from Steve Jobs of Apple to James Murdoch of News Corporation, reads as if one old sport were trying to cajole another into joining a caper: “Throw in with Apple and see if we can all make a go of this to create a real mainstream e-books market at $12.99 and $14.99.”

According to the Justice Department, that e-mail is part of the evidence that Apple was the “ringmaster” in a price-fixing conspiracy in the market for e-books, a more direct leadership role than originally portrayed in the department’s April 2012 antitrust lawsuit against Apple and five publishing companies.

In its suit, the government said that Apple and the publishers conspired to fix e-book prices as part of a scheme to force Amazon to raise its e-book price from a uniform $9.99 to the higher level noted by Mr. Jobs in the e-mail, which publishers wanted. That, the department said, resulted in higher prices to consumers and ill-gotten profits for Apple and its partners.

The e-mail was released on Tuesday as part of the government’s filing before the trial in the case, set to begin on June 3 in New York.

Two days after Mr. Jobs’s e-mail to Mr. Murdoch, HarperCollins, the publishing company owned by News Corporation, signed an agreement with Apple to force all sellers of electronic books to adopt the new pricing model, the government said.

Apple is the only defendant left in the lawsuit after five publishing companies — Hachette, HarperCollins, Macmillan, Penguin and Simon & Schuster — agreed last year and earlier this year to settle the charges.

Tom Neumayr, a spokesman for Apple, said the company did not conspire to fix prices on e-books.

“We helped transform the e-book market with the introduction of the iBookstore in 2010, bringing consumers an expanded selection of e-books and delivering innovative new features,” Mr. Neumayr said. “The market has been thriving and innovating since Apple’s entry, and we look forward to going to trial to defend ourselves and move forward.”

The Justice Department’s latest filings in the case also paint a picture of an Apple willing to use its power in mobile apps to strong-arm reluctant partners. That is especially evident in the accusations the department makes about Apple’s dealings with Random House, the last major publisher to resist striking an e-books deal with Apple.

In July 2010, Mr. Jobs, Apple’s former chief executive, told the chief executive of Random House, Markus Dohle, that the publisher would suffer a loss of support from Apple if it held out much longer, according to an account of the conversation provided by Mr. Dohle in the filing. Two months later, Apple threatened to block an e-book application by Random House from appearing in Apple’s App Store because it had not agreed to a deal with Apple, the filing said.

After Random House finally agreed to a contract on Jan. 18, 2011, Eddy Cue, the Apple executive in charge of its e-books deals, sent an e-mail to Mr. Jobs attributing the publisher’s capitulation, in part, to “the fact that I prevented an app from Random House from going live in the app store,” the filing reads.

The newly released documents also quote David Shanks, chief executive of Penguin, as saying that Apple was the “facilitator and go-between” for the publishing companies in arranging the agreement.

And the documents quote Mr. Dohle as saying that an Apple executive counseled him that the publishing company could threaten to withhold e-books from Amazon to force Amazon to accept the higher prices. Random House was not named as a defendant in the lawsuit.

The price-fixing suit charges that Apple advised publishers to move from a wholesale pricing model, which let retailers charge what they wanted, to a system that allowed publishers to set their own e-book prices, a model known as agency pricing.

The publishers said Amazon was pricing e-books below their actual cost, putting financial pressure on the publishers that they said would drive them out of business. The dispute underscored the extent to which competition from digital retailers like Amazon was transforming the traditional book industry.

Three of the publishers, HarperCollins, Simon & Schuster and Hachette, settled with the government immediately. Penguin, Macmillan and Apple originally decided to fight the charges. But in December, to clear the way for its merger with Random House, Penguin settled, followed by Macmillan in February.

The settlements call for the publishers to lift restrictions imposed on discounting and other promotions by e-book retailers. The companies are also prohibited from entering into new agreements with similar restrictions until December 2014.

The publishers must also notify the government in advance about any e-book ventures they plan with each other, and they are prohibited for five years from agreeing to any kind of so-called most-favored-nation clause with any retailer, which establishes that no other retailer is allowed to sell e-books for a lower price.

Edward Wyatt reported from Washington and Nick Wingfield from Seattle.
https://www.nytimes.com/2013/05/15/t...ce-fixing.html





France Weighs 'Culture Tax' for Apple, Google Products

President Francois Hollande will decide by the end of July whether France should impose new taxes on technology giants like Apple and Google to finance cultural projects, a move that could feed into an anti-business image days after a spat with Yahoo!.

The Socialist government asked former Canal Plus (VIV.PA) CEO Pierre Lescure to find new ways of funding culture during an economic downturn, in line with France's "cultural exception" argument that such projects must be shielded from market forces.

While far from becoming laws, the proposals could worsen tension between France and technology giants after Industry Minister Arnaud Montebourg blocked an attempt by Yahoo! (YHOO.O) to buy a majority stake in French video clip site Dailymotion.

The run-in reignited a debate on state intervention in the economy, angered the firm's French parent company and exposed discord between Montebourg and Finance Minister Pierre Moscovici, who denied having approved the move.

Lescure's report said taxes on sales of smartphones and tablets, such as Apple's (AAPL.O) iPhone and iPad and devices running Google's (GOOG.O) Android, could help fund culture because consumers were spending more money on hardware than on content.

The proposed tax would mirror fees already paid by television users, TV and radio broadcasters and Internet service providers to fund art, cinema and music in France, but which Google, Apple and Amazon (AMZN.O) are now exempt from paying.

"Companies that make these tablets must, in a minor way, be made to contribute part of the revenue from their sales to help creators," Culture Minister Aurelie Filipetti told journalists.

Hollande's office said in a statement that he wanted lawmakers to review legislation based on the report's recommendations by the summer. Parliament goes into recess at the end of July and returns in mid- to late September.

Filipetti added that the "culture tax", which she said would be "minimal and widely distributed", was likely to be included in a budget law to be submitted to parliament in November.

French officials are also pushing to ensure that French cultural products, and notably the audiovisual sector, remain exempt from free trade rules during talks on a planned trade agreement between the European Union and the United States.

U.S. President Barack Obama is travelling to Europe next month to launch the talks, which would create the world's largest free-trading bloc if they were successful.

After months of public criticism of Google, France in January abandoned trying to make it pay newspapers for publishing links to articles, settling for a pledge by Google to invest 60 million euros in a fund to support news production.

In Germany, publishers Bertelsmann SE and Axel Springer (SPRGn.DE) also backed down from a similar push after a court let Google publish links and previews to their content for free.

Last October, Google denied a newspaper report saying it received a 1 billion euro tax claim from the French state.

In Britain, Google has stirred the ire of lawmakers for the way it pays almost no income tax on billions of dollars of UK sales each year.

(Reporting by Nicholas Vinocur and Marine Pennetier, editing by Mark Heinrich)
http://in.reuters.com/article/2013/0...94C0AQ20130513





New Apps Bring Live Music Into the Home
Natasha Baker

Music lovers wanting to hear live tunes without going to a concert or club or leaving the comfort of their home can use new computer apps that connect them to events in venues around the world.

Although event-streaming sites like Livestream offer some live concerts, these apps are tailored exclusively for music and try to capture the atmosphere and social aspects of a live show.

"Going to a concert is about the music. But it's also about the shared experience of watching that music with all these other people," said Judy Estrin, the chief executive of Redwood City, California-based company Evntlive.

Evntlive streams concerts of well known and indie artists on its namesake Web app, which was launched last month and is accessible on mobile devices.

Some concerts can be viewed for free. But for others, there is a fee of between $2.99 and $5.99. The company works directly with musicians and venues to stream the shows.

Concert Window, which relaunched its Web app to make it accessible on mobile devices, streams approximately 25 live concerts each week, with fees ranging from free to $5.

The New York City-based company provides equipment such as cameras to artists and controls the broadcast remotely. Two thirds of the profits go to musicians.

Both Evntlive and Concert Window offer social features so music fans can chat with others in real-time, learn more about an artist, or watch related videos about the performer.

"It's about redefining what live performance means online, as opposed to saying it's just a linear experience where all you want to do is capture it and stream it," said Estrin.

Spacebar, a new app for the iPhone, focuses exclusively on streaming audio. Any musician with an iPhone can stream a performance on the app, whether they're in a concert hall or at home. The first five minutes of streaming is free and then it costs 99 cents. Fans can also tip artists.

"We are mainly targeting the artists that headline in the small clubs. Small- to medium-sized clubs that fill 100 to 200 people," said Gregory Miller, co-founder and executive producer of Spacebar, based in San Francisco.

About a dozen live concerts are accessible through the app each week.

Evntlive also has on-demand content from previously recorded concerts. Concert Window plans to launch on-demand content in the near future.

Dan Gurney, the chief executive of Concert Window, said online concerts are the next best thing to being at a live concert.

"There's nothing better than kicking back on your couch after a long day of work and enjoying some live music," he said.

(Editing by Patricia Reaney and Philip Barbara)
http://www.reuters.com/article/2013/...94C0V120130513





Two Musical Minds Seek a Different Kind of Mogul
Jenna Wortham

The record producer Jimmy Iovine and his business partner Dr. Dre have a keen eye for talent. After all, Mr. Iovine discovered Dr. Dre when he was just Andre Young, and between them, the two have jump-started the careers of stars ranging from Lady Gaga to 50 Cent to the Black Eyed Peas.

Now they think they can help create the next Steve Jobs.

The music moguls, who founded the wildly popular Beats headphone business, are giving $70 million to the University of Southern California to create a degree that blends business, marketing, product development, design and liberal arts. The gift is relatively modest, as donations to universities go. But the founders’ ambitions are lofty, as they explained in an interview Monday in the elaborate presidential dining room on the lush U.S.C. campus.

“If the next start-up that becomes Facebook happens to be one of our kids, that’s what we are looking for,” said Mr. Iovine, an energetic 60-year-old dressed in his trademark uniform of T-shirt and fitted jeans, faded baseball hat and blue-tinted eyeglasses.

Like many celebrities, Mr. Iovine and Dr. Dre have been seduced by the siren call of the tech world, which has lured celebrities like Justin Bieber, Tyra Banks and Leonardo DiCaprio to finance a start-up or develop their own idea. They have had more success than most with Beats, a private company that they say makes $1 billion in sales annually.

Still, the world of academia is far from familiar to Mr. Iovine and Dr. Dre. Neither went to college. And during the interview, Mr. Iovine confessed more than once that he was “out of my depth” when it came to discussing details of the program. He referred those questions to Erica Muhl, dean of the university’s fine arts school, who will be the inaugural director of the program and in charge of devising the curriculum, selecting professors and reviewing applications.

Dr. Dre, 48, svelte and relaxed in black jeans and a black sweater that just barely concealed a faded forearm tattoo, has an easy rapport with Mr. Iovine, and was content to let him do most of the talking. The rapper nodded often, ate chocolate chip cookies with evident pleasure, and chimed in occasionally. When he did, he chose his words carefully.

As a rapper, Dr. Dre was lauded for his blend of agile West Coast lyrics and rich, blunt beats; asked if he ever expected as a young performer that he would help start a university program, he leaned forward and laughed long and hard.

“Never in a million years,” he said.

But he and Mr. Iovine are betting that their instinct and keen ears — which helped Mr. Iovine find and sign Dr. Dre who, in turn, ferreted out Snoop Dogg and Eminem when they were budding musicians — will help them find future chief executives.

It doesn’t matter whether it is finding the next Gwen Stefani, whom he signed at 19, or recruiting and nurturing the next Marissa Mayer, Mr. Iovine said.

“Talent is talent,” he said.

The details of the four-year program, officially the U.S.C. Jimmy Iovine and Andre Young Academy for Arts, Technology and the Business of Innovation, are still being completed. The first class of 25 students will enter in fall 2014, selected for their academic achievement, the university said, as well as their ability for “original thought.”

At the core of the curriculum is something called “the Garage,” which will require seniors to essentially set up a business prototype. It appears to be inspired by technology incubators like Y Combinator and universities like Stanford that encourage students to develop and pitch start-up ideas as class assignments.

“I feel like this is the biggest, most exciting and probably the most important thing that I’ve done in my career,” Dr. Dre said.

Part of the endowment includes several full scholarships, he said, to help financially disadvantaged students “go on to do something that could potentially change the world.”

Still, the $70 million endowment, to which Mr. Iovine and Dr. Dre contributed equally, does not rank high among gifts to universities; for example, in 2012, Stanford raised over $1 billion from donors, $304.3 million of which was designated for research and programs.

U.S.C. has received larger gifts from other donors in recent years. But Rae Goldsmith, the vice president for resources of the Council for Advancement and Support of Education, which tracks donations above $100 million to colleges and universities, said that regardless of its size, the donation was meaningful because it was rare for donors to establish new departments and courses of study.

“This kind of gift can be helpful in achieving one overall goal of the institution,” she said. “It’s certainly noteworthy.”

In the rarefied tech world, $70 million is a drop in the bucket. Last May, Evernote, a note-taking app, raised the same amount in a round of venture capital.

But C. L. Max Nikias, the university’s president, said the size of the gift would fully support the new program, and would leave a legacy that would “make a difference in society.”

The idea for the program came to Mr. Iovine and Dr. Dre not long after creating the Beats company, when they found themselves with a problem familiar to Silicon Valley entrepreneurs: the rapidly depleting reservoir of potential employees, including software engineers and marketing savants.

“It came out of us trying to find people to work for us,” Mr. Iovine said.

They hope that the program will supply not only future employees for Beats’ current business, but also for a new venture, a streaming music service, Beats Music, that is expected to make its debut later this year.

Mr. Iovine compared their thinking to the approach to a typical business problem of “how do we make the best product?”

“In this case,” he said, “the kids are the product.”

Mr. Iovine said that over the course of their partnership, he has run many other ideas by Dr. Dre.

“Usually Dre is like ennhhhhhh,” he said, mimicking the sound of a bleating buzzer used to signify halftime or a wrong answer during a game show. But when it came to this idea, Dr. Dre’s curiosity was piqued.

“The last time he reacted like that was Beats,” Mr. Iovine said.

The university has played an important role in both Mr. Iovine’s and Dr. Dre’s lives. Mr. Iovine’s daughter completed her undergraduate studies there; on Friday, he is delivering the class of 2013 commencement speech. Perhaps more important, the school is fewer than a dozen miles from where Dr. Dre grew up in Compton.

Mr. Iovine acknowledged that their plan was ambitious but said the pair were not afraid to take risks.

“We have no idea where this is going,” he said.

Dr. Dre said, “It’s definitely a steppingstone to something.” And Mr. Iovine jumped in, finishing the thought, “We’re not quite sure what it is.”
https://www.nytimes.com/2013/05/15/t...c-program.html

Until next week,

- js.

Current Week In Review

Recent WiRs -

May 11th, May 4th, April 27th, April 20th

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, comments, questions etc. in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.

"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black