Old 19-03-08, 07:54 AM   #2
JackSpratts
 

Put Young Children on DNA List, Urge Police
Mark Townsend and Anushka Asthana

• 'We must target potential offenders'
• Teachers' fury over 'dangerous' plan

Primary school children should be eligible for the DNA database if they exhibit behaviour indicating they may become criminals in later life, according to Britain's most senior police forensics expert.

Gary Pugh, director of forensic sciences at Scotland Yard and the new DNA spokesman for the Association of Chief Police Officers (Acpo), said a debate was needed on how far Britain should go in identifying potential offenders, given that some experts believe it is possible to identify future offending traits in children as young as five.

'If we have a primary means of identifying people before they offend, then in the long-term the benefits of targeting younger people are extremely large,' said Pugh. 'You could argue the younger the better. Criminologists say some people will grow out of crime; others won't. We have to find who are possibly going to be the biggest threat to society.'

Pugh admitted that the deeply controversial suggestion raised issues of parental consent, potential stigmatisation and the role of teachers in identifying future offenders, but said society needed an open, mature discussion on how best to tackle crime before it took place. There are currently 4.5 million genetic samples on the UK database - the largest in Europe - but police believe more are required to reduce crime further. 'The number of unsolved crimes says we are not sampling enough of the right people,' Pugh told The Observer. However, he said the notion of universal sampling - everyone being forced to give their genetic samples to the database - is currently prohibited by cost and logistics.

Civil liberty groups condemned his comments last night, likening them to an excerpt from a 'science fiction novel'. One teaching union warned that it was a step towards a 'police state'.

Pugh's call for the government to consider options such as placing primary school children who have not been arrested on the database is supported by elements of criminological theory. A well-established pattern of offending involves relatively trivial offences escalating to more serious crimes. Senior Scotland Yard criminologists are understood to be confident that techniques are able to identify future offenders.

A recent report from the think-tank Institute for Public Policy Research (IPPR) called for children to be targeted between the ages of five and 12 with cognitive behavioural therapy, parenting programmes and intensive support. Prevention should start young, it said, because prolific offenders typically began offending between the ages of 10 and 13. Julia Margo, author of the report, entitled 'Make me a Criminal', said: 'You can carry out a risk factor analysis where you look at the characteristics of an individual child aged five to seven and identify risk factors that make it more likely that they would become an offender.' However, she said that placing young children on a database risked stigmatising them by identifying them in a 'negative' way.

Shami Chakrabarti, director of the civil rights group Liberty, denounced any plan to target youngsters. 'Whichever bright spark at Acpo thought this one up should go back to the business of policing or the pastime of science fiction novels,' she said. 'The British public is highly respectful of the police and open even to eccentric debate, but playing politics with our innocent kids is a step too far.'

Chris Davis, of the National Primary Headteachers' Association, said most teachers and parents would find the suggestion an 'anathema' and potentially very dangerous. 'It could be seen as a step towards a police state,' he said. 'It is condemning them at a very young age to something they have not yet done. They may have the potential to do something, but we all have the potential to do things. To label children at that stage and put them on a register is going too far.'

Davis admitted that most teachers could identify children who 'had the potential to have a more challenging adult life', but said it was the job of teachers to support them.

Pugh, though, believes that measures to identify criminals early would save the economy huge sums - violent crime alone costs the UK £13bn a year - and significantly reduce the number of offences committed. However, he said the British public needed to move away from regarding anyone on the DNA database as a criminal and accepted it was an emotional issue.

'Fingerprints, somehow, are far less contentious,' he said. 'We have children giving their fingerprints when they are borrowing books from a library.'

Last week it emerged that the number of 10 to 18-year-olds placed on the DNA database after being arrested will have reached around 1.5 million this time next year. Since 2004 police have had the power to take DNA samples from anyone over the age of 10 who is arrested, regardless of whether they are later charged, convicted, or found to be innocent.

Concern over the issue of civil liberties will be further amplified by news yesterday that commuters using Oyster smart cards could have their movements around cities secretly monitored under new counter-terrorism powers being sought by the security services.
http://www.guardian.co.uk/society/20...stice.children





Meet the new Dieb, same as the old Dieb

Interesting Email from Sequoia
Ed Felten

A copy of an email I received has been passed around on various mailing lists. Several people, including reporters, have asked me to confirm its authenticity. Since everyone seems to have read it already, I might as well publish it here. Yes, it is genuine.

====

Sender: Smith, Ed [address redacted]@sequoiavote.com
To: felten@cs.princeton.edu, appel@princeton.edu
Subject: Sequoia Advantage voting machines from New Jersey
Date: Fri, Mar 14, 2008 at 6:16 PM

Dear Professors Felten and Appel:

As you have likely read in the news media, certain New Jersey election officials have stated that they plan to send to you one or more Sequoia Advantage voting machines for analysis. I want to make you aware that if the County does so, it violates their established Sequoia licensing Agreement for use of the voting system. Sequoia has also retained counsel to stop any infringement of our intellectual properties, including any non-compliant analysis. We will also take appropriate steps to protect against any publication of Sequoia software, its behavior, reports regarding same or any other infringement of our intellectual property.

Very truly yours,
Edwin Smith
VP, Compliance/Quality/Certification
Sequoia Voting Systems

[contact information and boilerplate redacted]
http://www.freedom-to-tinker.com/?p=1265





Plan for Voting Machine Probe Dropped after Lawsuit Threat
Diane C. Walsh

Union County has backed off a plan to let a Princeton University computer scientist examine voting machines where errors occurred in the presidential primary tallies, after the manufacturer of the machines threatened to sue, officials said today.

A Sequoia executive, Edwin Smith, put Union County Clerk Joanne Rajoppi on notice that an independent analysis would violate the licensing agreement between his firm and the county. In a terse two-page letter Smith also argued the voting machine software is a Sequoia trade secret and cannot be handed over to any third party.

Last week Rajoppi persuaded the statewide clerk's association to have an independent study of the machines done by Edward Felten, a professor of computer science and public affairs at Princeton University. The Constitutional Officers Association of New Jersey called for the independent review to ensure the integrity of the election process.

Sequoia maintains the errors, which were documented in at least five counties, occurred due to mistakes by poll workers. The firm, which is based in Colorado, examined machines in Middlesex County and concluded that poll workers had pushed the wrong buttons on the control panels, resulting in errors in the numbers of ballots cast.

But officials found it odd that such an error never occurred before and the clerk's association wanted further testing.

On the advice of the county's attorneys, however, Rajoppi said today she must forgo all plans for an independent analysis.

That upset Penny Venetis, a Rutgers University law professor representing a group of activists trying to have electronic voting machines scrapped.

"We shouldn't have a corporation dictating how elections are run in the state," Venetis said. "If an elected official believes there was an anomaly and the matter has to be investigated, then the official should be able to consult with computer experts without interference."

The Union County clerk said she intends to write to the state Attorney General's Office again in hopes of convincing the state to call for an independent study. The attorney general oversees the election process.
http://www.nj.com/news/index.ssf/200...threatens.html





Evidence of New Jersey Election Discrepancies
Ed Felten

Press reports on the recent New Jersey voting discrepancies have been a bit vague about the exact nature of the evidence that showed up on election day. What has the county clerks, and many citizens, so concerned? Today I want to show you some of the evidence.

The evidence is a “summary tape” printed by a Sequoia AVC Advantage voting machine in Hillside, New Jersey when the polls closed at the end of the presidential primary election. The tape is timestamped 8:02 PM, February 5, 2008.

The summary tape is printed by poll workers as part of the ordinary procedure for closing the polls. It is signed by several poll workers and sent to the county clerk along with other records of the election.

Let me show you closeups of two sections of the tape. (Here’s the full tape, in TIF format.)

You can see the vote totals on this machine for each candidate. On the Democratic side, the tally is Obama 182, Clinton 179. On the Republican side it’s Giuliani 1, Romney 13, McCain 40, Paul 3, Huckabee 4.

Above is the “Option Switch Totals” section, which shows the number of times each party’s ballot was activated: 362 Democratic and 60 Republican.

This doesn’t add up. The machine says the Republican ballot was activated 60 times; but it shows a total of 61 votes cast for Republican candidates. It says the Democratic ballot was activated 362 times; but it shows a total of 361 votes for Democratic candidates. (New Jersey has a closed primary, so voters can cast ballots only in their own registered party.)

What’s alarming here is not the size of the discrepancy but its nature. This is a single voting machine, disagreeing with itself about how many Republicans voted on it. Imagine your pocket calculator couldn’t make up its mind whether 1+13+40+3+4 was 60 or 61. You’d be pretty alarmed, and you wouldn’t trust your calculator until you were very sure it was fixed. Or you’d get a new calculator.
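The tape's failed sanity check is ordinary addition, which is what makes it so striking. A minimal sketch of the cross-check, using the figures from the tape described above:

```python
# Figures transcribed from the Hillside summary tape described above.
candidate_votes = {
    "Democratic": {"Obama": 182, "Clinton": 179},
    "Republican": {"Giuliani": 1, "Romney": 13, "McCain": 40,
                   "Paul": 3, "Huckabee": 4},
}
ballot_activations = {"Democratic": 362, "Republican": 60}

# In a closed primary, the sum of a party's candidate votes should equal
# the number of times that party's ballot was activated.
for party, votes in candidate_votes.items():
    total = sum(votes.values())
    status = "OK" if total == ballot_activations[party] else "MISMATCH"
    print(f"{party}: {total} votes vs "
          f"{ballot_activations[party]} activations -> {status}")
```

Run against the tape's own numbers, both parties come up inconsistent (361 votes vs 362 activations on the Democratic side, 61 vs 60 on the Republican side), which is exactly the discrepancy described above.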

This wasn’t an isolated instance, either. In Union County alone, at least eight other AVC Advantage machines exhibited similar problems, as did dozens more machines in other counties.

Sequoia, the vendor, is trying to prevent any independent investigation of what happened.
http://www.freedom-to-tinker.com/?p=1266





The New Iron Curtain

Why I can never go to America again
vetinarii

Since 2004, all visitors to the USA, even from countries that are normally considered among its closest friends, are fingerprinted on entry to the country.

Think about that for a moment. Fingerprinting. Why do that?

Well, you might want to create an infallible way of checking that a person who leaves the country is the same as the person who entered it under that name. I could understand that. But that's not the reason, because nobody rechecks the prints when we leave. And they certainly don't destroy the record after we've gone.

Or you might want to compare the prints against a database of people who have, for instance, been deported from the country or tried to enter it illegally. Again -- if that were the purpose, I'd have no problem with it. In that case, what you would do would be to get the visitor to place their finger on a scanner, then run the print against the established database. No need to store the visitor's record at all. But no, that doesn't happen either: the prints are collected and kept, apparently forever, by the KGB -- sorry, I mean DHS.

So what exactly are they doing with those prints?

Initially, the story was that they would be used solely for immigration control. These records were, we were reassured, incompatible with law enforcement databases. That reassurance lasted all of two years. And now visitors' fingerprints are indeed compared against the FBI's database. Way to make us feel welcome.

The only conclusion that seems to make sense is that, from the day I next set foot on US soil, my prints will routinely be scanned against every crime scene in America.

What's so frightening about that? If I don't do anything bad I have nothing to fear, right? And even if someone did make a mistake, surely I'd be safe if I wasn't even in the country at the time.

Historically, fingerprints worked well in crime detection because they were compared against those of a relatively small population, people who were already shortlisted as potential suspects. Generally, detectives would check records from people who'd previously been charged and/or convicted of similar crimes in the same general area. That's not a bad way of making a shortlist.

But now, with the integration of databases, that shortlist has grown, and at the same time it's getting ever easier to get onto it. The police generally fingerprint everyone they arrest, even if they are then released without charge -- and as far as I can tell, the records once collected are never, ever destroyed. Worse, if CSI and its relatives are to be believed, prints from crime scenes are now routinely compared against those taken from non-criminals, such as military personnel. I don't know if anyone's done a study into the connection between fingerprint identification and the ever-growing proportion of ex-service-people convicted of crimes, but I think it'd be an interesting subject.

If you're a foreigner, merely stepping off the plane is grounds for something that I can only see as a kind of low-grade arrest. The DHS's database is already larger than the FBI's, and the DHS itself is surprisingly coy about what it does with all that information. All I can find are vague platitudes about "making America safer". How, exactly, they do this is of course secret.

But the sordid truth about fingerprints is that they are not nearly as infallible as they're made out to be. The (very few) systematic trials of fingerprint identification have shown a frighteningly high level of false-positive identifications (where a fingerprint expert pronounced a match wrongly). And yet the vast majority of people -- even those in law enforcement who really should know better -- still believe that fingerprint identification is the gold standard of positive ID.
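The author's worry about a per-year chance of a wrongful match can be made concrete with a back-of-the-envelope calculation. Both rates below are illustrative assumptions, not figures from the article:

```python
# Illustrative assumptions only -- neither number comes from the article.
fp_rate = 1e-4            # assumed false-positive rate per print comparison
searches_per_year = 5000  # assumed crime-scene prints run against the database yearly

# Chance that an innocent person in the database is "matched" to at
# least one crime scene in a year: 1 - (1 - p)^n
p_at_least_one = 1 - (1 - fp_rate) ** searches_per_year
print(f"{p_at_least_one:.1%}")  # roughly 39% under these assumed rates
```

Even a seemingly tiny per-comparison error rate compounds quickly once every scene print is run against everyone in the database.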

Worse: the US government has made it clear in recent years that being outside the country is no defence against its law-enforcement agencies. Last December, a senior lawyer for the US government told the Court of Appeal in London that the US Supreme Court had sanctioned kidnapping foreign nationals if they were wanted for crimes in the USA. No more messing about with judicial procedures and extradition agreements: if they want me, they can and will come and snatch me off any street in the world.

So what does that leave me with? A significant percentage chance, per year, of being "infallibly" identified as having been at the scene of some crime, at which point there's nothing standing between me and officially-sanctioned abduction. And, of course, absolutely no guarantee of access to a lawyer or court at any point of the process.

No thanks. My only defence, such as it is, is to keep my name the hell out of that database.

The USA is, of course, perfectly right to apply reasonable measures to defend its borders and its people. And I'm still, for the moment, at liberty not to go there.

What makes me sad is that, for the first thirty years of my life, I was accustomed to seeing travel grow easier. I revelled in the way the world was opening up to me. There was so much to do. In America there are dozens of sights I'd still love to see, friends I'd love to visit, places to walk or climb or just gape. I took it for granted that I'd always be able to visit the Grand Canyon, or San Francisco, or Seattle, Shiloh, Boston, Harper's Ferry, Jamestown, Yellowstone, Philadelphia, the Alamo on my next visits.

But now, as far as I'm concerned, those places might as well not exist any more. They're lost to me, probably forever. (Ironically, however, I'm now free to visit Hungary or Poland more or less at will.)

Technology has been perverted. Instead of making life easier, now it's used to erect new barriers and create new hazards. And that, I think, is a damn' shame.
http://www.thisisby.us/index.php/con...w_iron_curtain





Wiretapping's True Danger

History says we should worry less about privacy and more about political spying.
Julian Sanchez

As the battle over reforms to the Foreign Intelligence Surveillance Act rages in Congress, civil libertarians warn that legislation sought by the White House could enable spying on "ordinary Americans." Others, like Sen. Orrin Hatch (R-Utah), counter that only those with an "irrational fear of government" believe that "our country's intelligence analysts are more concerned with random innocent Americans than foreign terrorists overseas."

But focusing on the privacy of the average Joe in this way obscures the deeper threat that warrantless wiretaps pose to a democratic society. Without meaningful oversight, presidents and intelligence agencies can abuse -- and repeatedly have abused -- their surveillance authority to spy on political enemies and dissenters.

The original FISA law was passed in 1978 after a thorough congressional investigation headed by Sen. Frank Church (D-Idaho) revealed that for decades, intelligence analysts -- and the presidents they served -- had spied on the letters and phone conversations of union chiefs, civil rights leaders, journalists, antiwar activists, lobbyists, members of Congress, Supreme Court justices -- even Eleanor Roosevelt and the Rev. Martin Luther King Jr. The Church Committee reports painstakingly documented how the information obtained was often "collected and disseminated in order to serve the purely political interests of an intelligence agency or the administration, and to influence social policy and political action."

Political abuse of electronic surveillance goes back at least as far as the Teapot Dome scandal that roiled the Warren G. Harding administration in the early 1920s. When Atty. Gen. Harry Daugherty stood accused of shielding corrupt Cabinet officials, his friend FBI Director William Burns went after Sen. Burton Wheeler, the fiery Montana progressive who helped spearhead the investigation of the scandal. FBI agents tapped Wheeler's phone, read his mail and broke into his office. Wheeler was indicted on trumped-up charges by a Montana grand jury, and though he was ultimately cleared, the FBI became more adept in later years at exploiting private information to blackmail or ruin troublesome public figures. (As New York Gov. Eliot Spitzer can attest, a single wiretap is all it takes to torpedo a political career.)

In 1945, Harry Truman had the FBI wiretap Thomas Corcoran, a member of Franklin D. Roosevelt's "brain trust" whom Truman despised and whose influence he resented. Following the death of Chief Justice Harlan Stone the next year, the taps picked up Corcoran's conversations about succession with Justice William O. Douglas. Six weeks later, having reviewed the FBI's transcripts, Truman passed over Douglas and the other sitting justices to select Secretary of the Treasury (and poker buddy) Fred Vinson for the court's top spot.

"Foreign intelligence" was often used as a pretext for gathering political intelligence. John F. Kennedy's attorney general, brother Bobby, authorized wiretaps on lobbyists, Agriculture Department officials and even a congressman's secretary in hopes of discovering whether the Dominican Republic was paying bribes to influence U.S. sugar policy. The nine-week investigation didn't turn up evidence of money changing hands, but it did turn up plenty of useful information about the wrangling over the sugar quota in Congress -- information that an FBI memo concluded "contributed heavily to the administration's success" in passing its own preferred legislation.

In the FISA debate, Bush administration officials oppose any explicit rules against "reverse targeting" Americans in conversations with noncitizens, though they say they'd never do it.

But Lyndon Johnson found the tactic useful when he wanted to know what promises then-candidate Richard Nixon might be making to our allies in South Vietnam through confidant Anna Chennault. FBI officials worried that directly tapping Chennault would put the bureau "in a most untenable and embarrassing position," so they recorded her conversations with her Vietnamese contacts.

Johnson famously heard recordings of King's conversations and personal liaisons with various women. Less well known is that he received wiretap reports on King's strategy conferences with other civil rights leaders, hoping to use the information to block their efforts to seat several Mississippi delegates at the 1964 Democratic National Convention. Johnson even complained that it was taking him "hours each night" to read the reports.

Few presidents were quite as brazen as Nixon, whom the Church Committee found had "authorized a program of wiretaps which produced for the White House purely political or personal information unrelated to national security." They didn't need to be, perhaps. Through programs such as the National Security Agency's Operation Shamrock (1947 to 1975), which swept up international telegrams en masse, the government already had a vast store of data, and presidents could easily run "name checks" on opponents using these existing databases.

It's probably true that ordinary citizens uninvolved in political activism have little reason to fear being spied on, just as most Americans seldom need to invoke their 1st Amendment right to freedom of speech. But we understand that the 1st Amendment serves a dual role: It protects the private right to speak your mind, but it serves an even more important structural function, ensuring open debate about matters of public importance. You might not care about that first function if you don't plan to say anything controversial. But anyone who lives in a democracy, who is subject to its laws and affected by its policies, ought to care about the second.

Harvard University legal scholar William Stuntz has argued that the framers of the Constitution viewed the 4th Amendment as a mechanism for protecting political dissent. In England, agents of the crown had ransacked the homes of pamphleteers critical of the king -- something the founders resolved that the American system would not countenance.

In that light, the security-versus-privacy framing of the contemporary FISA debate seems oddly incomplete. Your personal phone calls and e-mails may be of limited interest to the spymasters of Langley and Ft. Meade. But if you think an executive branch unchecked by courts won't turn its "national security" surveillance powers to political ends -- well, it would be a first.
http://www.latimes.com/news/printedi...9139.story?rss





DHS Data Mining–It’s as Bad as You Thought
looseheadprop

The Department of Homeland Security just sent a report to Congress about its data mining activities. This is the third such report as required under Section 806 of the Federal Agency Data Mining Reporting Act of 2007.

Under the Act, DHS was compelled to go back and report on its data mining activities in 2006, in addition to a previous report covering its data mining activities in 2007.

It seems as though Congress did not like the previous reports and thought that DHS was using a definition of data mining that was too narrow, one that might have excluded too many DHS programs.

So Congress, in House Report 109-609, gave DHS a detailed definition of what it considers data mining, to be added on top of the definition that DHS was already using.

.... a query or search or other analysis of 1 or more electronic databases, whereas (A) at least 1 of the databases was obtained from or remains under the control of a non federal entity, or the information was acquired initially by another department or agency of the Federal Government for purposes other than intelligence or law enforcement; (B) a department or agency of the Federal Government or non federal entity acting on behalf of the Federal Government is conducting the query, or search or other analysis to find a predictive pattern indicating terrorist or criminal activity; and (C) the search does not use a specific individual person's identifiers to acquire information concerning that individual.
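The three-prong test reads as a simple conjunction, and can be paraphrased as a predicate. This is a loose restatement for illustration only, not the statutory language, and the parameter names are my own shorthand:

```python
# A loose paraphrase of the House Report 109-609 three-prong definition.
# Parameter names are invented shorthand, not terms from the report.
def is_reportable_data_mining(db_nonfederal_or_repurposed: bool,
                              seeks_predictive_pattern: bool,
                              starts_from_named_individual: bool) -> bool:
    """Prongs (A) and (B) must both hold, and per prong (C) the search
    must NOT begin from a specific individual's identifiers."""
    return (db_nonfederal_or_repurposed            # prong (A)
            and seeks_predictive_pattern           # prong (B)
            and not starts_from_named_individual)  # prong (C)

# A pattern-based cargo-screening query over commercial records would qualify:
print(is_reportable_data_mining(True, True, False))  # True
# A lookup keyed to one named suspect would not:
print(is_reportable_data_mining(True, True, True))   # False
```

Under this broader reading, pattern-hunting programs fall inside the definition even when no particular individual is queried, which would explain why programs like ADVISE and DARTTS surfaced in the redone report.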

DHS has apparently gone back to the drawing board and is taking another crack at the 2007 report, using the newer definition.

And guess what they found? Yep, a whole bunch of activities that DHS had not reported to Congress as being data mining turned out to be.....wait for it.....data mining! Who'da thunk it?

The not previously reported data mining includes an inbound and outbound cargo analysis program, ADVISE (Analysis, Dissemination, Visualization, Insight and Semantic Enhancement program pilot), and ICE's DARTTS program (Data Analysis and Research for Trade Transparency System). [Anybody interested in money laundering in the wake of the Spitzer scandal really wants to read this link, it is full of examples of how the Bank Secrecy Act reporting requirements actually work in the field.]

Anyway, DHS should have done a Privacy Impact Assessment of these programs to determine a) whether they infringed on people's privacy, and b) what mitigation measures could be taken to ameliorate that infringement.

Since DHS didn't include these programs in its reporting, you already guessed that it didn't do the privacy assessment, right?

Sigh. None of this actually surprised anyone here, did it?

DHS promises to go back and do the privacy assessments and to be good little boys and girls in future, but basically, this is Congress catching them red handed.
http://firedoglake.com/2008/03/19/dh...s-you-thought/





Do Americans Care About Big Brother?
Massimo Calabresi

Pity America's poor civil libertarians. In recent weeks, the papers have been full of stories about the warehousing of information on Americans by the National Security Agency, the interception of financial information by the CIA, the stripping of authority from a civilian intelligence oversight board by the White House, and the compilation of suspicious activity reports from banks by the Treasury Department. On Thursday, Justice Department Inspector General Glenn Fine released a report documenting continuing misuse of Patriot Act powers by the FBI. And to judge from the reaction in the country, nobody cares.

A quick tally of the record of civil liberties erosion in the United States since 9/11 suggests that the majority of Americans are ready to trade diminished privacy, and protection from search and seizure, in exchange for the promise of increased protection of their physical security. Polling consistently supports that conclusion, and Congress has largely behaved accordingly, granting increased leeway to law enforcement and the intelligence community to spy and collect data on Americans. Even when the White House, the FBI or the intelligence agencies have acted outside of laws protecting those rights — such as the Foreign Intelligence Surveillance Act — the public has by and large shrugged and, through their elected representatives, suggested changing the laws to accommodate activities that may be in breach of them.

Civil libertarians are in a state of despair. "People don't realize how damaging it is to a democratic society to allow the government to warehouse information about innocent Americans," says Mike German, national security counsel at the American Civil Liberties Union.

Or do they? In all the examples of diminished civil liberties, there are few, if any, where the motivating factor was something other than law and order or national security. There are no scandalous examples of the White House using the Patriot Act powers for political purposes or of individual agents using them for personal gain. The Justice IG report released Thursday, for example, examined some 50,000 National Security Letters issued in 2006 to see whether the FBI misused that specialized kind of warrantless subpoena. The IG found some continuing abuse of the power, but blamed it for the most part on sloppiness and bad management, not nefarious intent. In a press release accompanying the report, Fine said, "The FBI and Department of Justice have shown a commitment to addressing these problems."

There may, nonetheless, be reasons to feel wary of the civil liberties vs. security trade-off into which Americans have bought. If the misuse documented in the Justice IG report stems from incompetence, Americans may not be getting the security they bargain for in sacrificing their civil liberties. It's also possible the Justice IG may yet find among the abused Patriot Act powers examples of an FBI agent stalking his girlfriend or doing a favor for a political operative friend. Fine is still preparing a report on the illegal use of "exigent letters" in unauthorized demands for records from business.

For now, however, civil libertarians will have to continue to argue that the danger lies not in how the government's expanded powers are being used now, but how they might be used in the future. "The government can collect information about the average citizen without any concern for their rights, but the citizen can't find out what the government is doing, and that's inimical to government of we the people," says the ACLU's German. So far, that argument hasn't convinced the people.
http://www.time.com/time/nation/arti...722537,00.html





Time Magazine Invents Facts to Claim that Americans Support Bush's Domestic Spying Abuses
Glenn Greenwald

(updated below)

No matter how corrupt and sloppy the establishment press becomes, they always find a way to go lower. Time Magazine has just published what it purports to be a news article by Massimo Calabresi claiming that "nobody cares" about the countless abuses of spying powers by the Bush administration; that "Americans are ready to trade diminished privacy, and protection from search and seizure, in exchange for the promise of increased protection of their physical security"; and that the case against unchecked government surveillance powers "hasn't convinced the people." Not a single fact -- not one -- is cited to support these sweeping, false opinions.

Worse still -- way worse -- this "news article" decrees the Bush administration to be completely innocent, even well-motivated, even in those instances where technical, irrelevant lawbreaking has been found, as it proclaims:

In all the examples of diminished civil liberties, there are few, if any, where the motivating factor was something other than law and order or national security.

Does Calabresi or his Time editors have the slightest idea how secret, illegal spying powers have been used, towards what ends they've been employed and with what motives? No, they have absolutely no idea. Not even members of Congressional Intelligence Committees know, because the Bush administration has kept all of that concealed. So Time just makes up facts to defend the Bush administration with wholly baseless statements that one would expect to come pouring out of the mouths only of Dana Perino and Bill Kristol -- the "motivating factor" for secret, illegal spying was nothing "other than law and order or national security."

This article literally has more factual errors -- pure, retraction-level falsehoods -- than it has paragraphs. It makes Joe Klein look like a knowledgeable and conscientious surveillance expert. It's one of the most falsehood-plagued articles I've seen in quite some time. Let's count the ways this article makes demonstrably false assertions:

(1) Time claims that "nobody cares" about the Government's increased spying powers and that "polling consistently supports that conclusion." They don't cite a single poll because that assertion is blatantly false.

Just this weekend, a new poll released by Scripps Howard News Service and Ohio University proves that exactly the opposite is true. That poll shows that the percentage of Americans who believe the Federal Government is "very secretive" has doubled in the last two years alone (to 44%) and that "nearly nine in 10 say it's important to know presidential and congressional candidates' positions on open government when deciding who to vote for."

The same poll also found that 77% of Americans believe that "the federal government opened mail and monitored phone calls of people in the U.S. without first getting permission from a federal judge," and 64% believe "that the federal government has opened mail or monitored telephone conversations involving members of the news media." Only a small minority (20%) believe that the Federal Government is "Very Open" or "Somewhat Open." Exactly as was true for The Politico's very untimely article last week falsely claiming that Americans are increasingly supporting the Iraq War again -- on the very day that a new USA Today poll showed that Americans overwhelmingly favor unconditional timetables for withdrawal -- Time today asserts a falsehood that is squarely negated by a poll released the day before.

The proposition that "polls consistently" find that Americans don't mind incursions into their civil liberties is a rank falsehood. From a December, 2005 CNN poll, days after the NSA scandal was first disclosed:

Nearly two-thirds said they are not willing to sacrifice civil liberties to prevent terrorism, as compared to 49 percent saying so in 2002.

More importantly, ever since it was revealed that the Bush administration has been spying on Americans without the warrants required by law, polls have consistently shown that huge numbers of Americans -- usually majorities -- oppose warrantless spying, exactly the opposite of what Time just claimed.

Much of the polling on warrantless eavesdropping occurred throughout 2006 when the NSA scandal was being debated. Here's what a Quinnipiac poll concluded:

By a 76-19 percent margin, American voters say the government should continue monitoring phone calls or e-mail between suspected terrorists in other countries and people in the U.S., according to a Quinnipiac University national poll released today. But voters say 55-42 percent that the government should get court orders for this surveillance.

Voters in "purple states," 12 states in which there was a popular vote margin of 5 percentage points or less in the 2004 Presidential election, plus Missouri, considered the most accurate barometer of Presidential voting, want wiretap warrants 57 - 39 percent.

Red states, where President George W. Bush's margin was more than 5 percent in 2004, disagree 51 - 46 percent with the President that the government does not need warrants. Blue state voters who backed John Kerry by more than 5 percent want warrants 57 - 40 percent, the independent Quinnipiac (KWIN-uh-pe-ack) University poll finds.

A total of 57 percent of voters are "extremely" or "quite" worried that phone and e-mail taps without warrants could be misused to violate people's privacy. But 54 percent believe these taps have prevented some acts of terror.

"Don't turn off the wiretaps, most Americans say, but the White House ought to tell a judge first. Even red state voters, who backed President Bush in 2004, want to see a court okay for wiretaps," said Maurice Carroll, Director of the Quinnipiac University Polling Institute.

From the beginning, pluralities in the vast majority of states -- 37 out of 50 -- believed the President "clearly" broke the law with his NSA spying. A CBS poll found that Americans believe (51-43%) that "the President does not have the legal authority to authorize wiretaps without a warrant to fight terrorism." And back when Russ Feingold introduced his resolution to censure the President for breaking the law in spying on Americans, a plurality of Americans supported censure of Bush despite the fact that Feingold was virtually alone among political figures in advocating it. And most Americans opposed immunity for telecoms accused of breaking the law in how they spied on Americans:

Opposition to immunity is widespread, cutting across ideology and geography. Majorities of liberals, moderates, and conservatives agree that courts should decide the outcomes of these legal actions (liberals: 64% let courts decide, 26% give immunity; moderates: 58% let courts decide, 34% give immunity; conservatives: 50% let courts decide, 38% give immunity).

As is so often true, the facts are exactly the opposite of what Time, in defending the Bush administration, tells its readers. Can one find polls in which pluralities of Americans support warrantless eavesdropping and other secret spying programs? If one looks hard enough for polls emphasizing "spying on terrorists," perhaps one can, but Time's assertion that "polling consistently supports the conclusion" that Americans want to give up civil liberties for security is patently false.

(2) This is Time's next claim:

Even when the White House, the FBI or the intelligence agencies have acted outside of laws protecting those rights -- such as the Foreign Intelligence Surveillance Act -- the public has by and large shrugged and, through their elected representatives, suggested changing the laws to accommodate activities that may be in breach of them.

Have Calabresi and his editors been on vacation for the last four months? During that time, there has been a protracted, bitter debate in Congress over the President's demands for permanent, warrantless eavesdropping powers and amnesty for telecoms which broke the law in spying on Americans. It provoked filibusters and all sorts of obstructionism in the Senate, and House Democrats -- including virtually every conservative "Blue Dog" -- just chose warrantless eavesdropping and telecom amnesty as the issue on which to defy, for the first time ever, the President's national security orders.

Additionally, while it is true that the GOP-led Congress largely endorsed every one of the President's policies, including his lawbreaking, the American voting public threw the Republicans out of power in 2006. When Democrats, once in power, began copying their behavior in endorsing even the President's illegal behavior, their approval ratings plummeted. Just last week, they refused to give legal sanction to the President's illegal spying; demanded that the lawsuits arising from that spying proceed; and even passed a bill requiring a full-scale investigation into what the President did when spying on Americans for all those years. These events were bizarrely ignored by Time because they negate the narrative they want to push.

(3) Time's defense of the Bush administration -- that "law and order or national security" has motivated even the illegal spying -- is perhaps most indefensible of all. The administration has blocked every Congressional and judicial attempt to investigate how it has used these spying powers. Thus, nobody has any idea what has motivated the spying or what the level of abuse is.

As Julian Sanchez wrote in a superb Op-Ed in the Los Angeles Times this weekend, the Federal Government abused its warrantless spying power for decades -- to spy on political opponents and other dissidents -- but nobody had any idea that was going on until the Church Committee conducted a full-fledged investigation. As Sanchez wrote:

If you think an executive branch unchecked by courts won't turn its "national security" surveillance powers to political ends -- well, it would be a first.
We have had no investigation into how the Bush administration has used these spying powers. There has been no Church Committee, no intensive media investigation, no judicial process. The only "investigations" into any of these surveillance activities have come from the executive branch itself. All we have are slothful, government-worshiping reporters like Calabresi and Time editors who sit back content in their own ignorance, having no idea how the Bush administration used its spying powers, citing their own total ignorance as proof that the Government did nothing wrong -- they did everything for our own Good, for our Protection.

Time's vouching for the Good Motives of the Bush administration is completely false for a separate reason. Even with as little as we know about what they've done, there most certainly are examples of politically-motivated spying, even though Calabresi and his editors are apparently unaware of them. From Democracy Now in 2006:

Earlier this week, the Servicemembers Legal Defense Network released documents showing that the Pentagon conducted surveillance on a more extensive level than first reported late last year. De-classified documents show that the agency spied on "Don't Ask, Don't Tell' protests and anti-war protests at several universities around the country. They also show that the government monitored student e-mails and planted undercover agents at at least one protest.

But the Pentagon has not released all information on its surveillance activities. The American Civil Liberties Union recently filed a federal lawsuit to force the agency to turn over additional records. The lawsuit charges that the Pentagon is refusing to comply with Freedom of Information Act requests seeking records on the ACLU, the American Friends Service Committee, Greenpeace, Veterans for Peace and United for Peace and Justice, as well as 26 local groups and activists.
Even NBC reported previously:

A year ago, at a Quaker Meeting House in Lake Worth, Fla., a small group of activists met to plan a protest of military recruiting at local high schools. What they didn't know was that their meeting had come to the attention of the U.S. military.

A secret 400-page Defense Department document obtained by NBC News lists the Lake Worth meeting as a "threat" and one of more than 1,500 "suspicious incidents" across the country over a recent 10-month period. . . .

The Defense Department document is the first inside look at how the U.S. military has stepped up intelligence collection inside this country since 9/11, which now includes the monitoring of peaceful anti-war and counter-military recruitment groups.

Are Time reporters and editors just blissfully ignorant of these incidents or do they conceal them because they negate their clean, crisp storyline?

(4) The whole Time article is based upon one of the most pervasive journalistic fallacies: namely, that the choices the establishment press makes as to what it will cover and not cover are reflective of what "Americans" generally care about. Thus, Calabresi begins the article by listing a whole series of recent revelations about the Bush administration's ever-increasing Surveillance State powers and abuses and concludes: "to judge from the reaction in the country, nobody cares."

But the only ones who "don't care" are establishment media outlets like Time, not the "ordinary Americans" on whose behalf they always fantasize that they speak. It's the media that has ignored those stories.

Here is a Nexis count of how much media coverage certain stories have received over the last 30 days, including the Surveillance State stories which Calabresi cites as proof that Americans don't care about their constitutional liberties:

* "Spitzer and prostitutes" -- 2,323 results

* "Spitzer and Kristen" -- 1,087 results

* "Obama and Rezko" -- 1,263 results

* "Obama and Jeremiah Wright" -- 466 results

* "Wall Street Journal and data mining" -- 9 results

* "FBI and National security letters" -- 149 results

* "Intelligence Oversight Board" -- 21 results

This is what establishment journalists like Calabresi always do. Their industry obsesses over the most vapid, inconsequential chatter. They ignore the stories that actually matter. And then they claim that Americans only care about vapid gossip and not substantive issues -- and point to their own shallow coverage decisions as "proof" of what Americans care about. That thought process was vividly evident with their obsession with the Edwards hair "story," when they all chattered about it endlessly, promoted it in headlines, and then, when criticized for that, claimed that it was obviously something Americans were interested in, pointing to their own media fixation as proof that Americans cared.

The Time Magazines of the world ignore stories about Bush's abuses of spying powers. Therefore, Americans don't care about such abuses. That's the self-referential, self-loving rationale on which this entire article is based. And the whole article is filled with demonstrable falsehoods, all in service of arguing that the Bush administration has done nothing wrong, and even if they did, Americans don't mind at all.


UPDATE: Yet another serious factual error in Calabresi's article that I neglected to mention:

There are no scandalous examples of the White House using the Patriot Act powers for political purposes or of individual agents using them for personal gain.

Has Time ever heard of the U.S. Attorneys scandal, which just resulted in the filing of a Congressional lawsuit to compel recalcitrant Bush aides to comply with Subpoenas? From Harper's Scott Horton on Saturday:

This was largely part of an effort to disguise the obvious fact that the dismissals were the implementation of a political plan which had been formulated in the White House, largely under the guidance of Karl Rove. They were also designed to disguise the fact that an elaborate scheme had been concocted to circumvent the process through which candidates are reviewed and confirmed by the Senate using a secret amendment to the USA PATRIOT Act.

It's not surprising that this scandal would be whitewashed from the pages of Time, in light of what its Managing Editor, Rick Stengel, decreed last year while on The Chris Matthews Show:

Mr. STENGEL: I am so uninterested in the Democrats wanting Karl Rove, because it is so bad for them. Because it shows business as usual, tit for tat, vengeance. That's not what voters want to see.

Ms. BORGER: Mm-hmm.

MATTHEWS: So instead of like an issue like the war where you can say it's bigger than all of us, it's more important than politics, this is politics.

Mr. STENGEL: Yes, and it's much less. It's small bore politics.

The principal theme of Time Magazine appears to be that corruption and even blatant lawbreaking by the Bush administration is a total non-story, something that nobody cares about and therefore shouldn't be investigated or reported (Joe Klein's first reaction in Time following disclosure of the NSA scandal was to defend the lawbreaking and sternly warn Nancy Pelosi and Democrats generally that they had better not object to the warrantless spying program or else they would be (justifiably) out of power forever).

Identically, Calabresi's declaration that the FBI's unquestionably illegal use of NSL powers under the Patriot Act was harmless and benign because the Bush DOJ said so is equally gullible and dishonest. As Patrick Meighan pointed out in comments:

In other words, we know that the Justice Department has not intentionally abused its unchecked investigative powers because the Justice Department looked at the Justice Department and decided that the Justice Department did not intentionally abuse its unchecked investigative powers.

In 2008, that's what's supposed to pass for checks and balances.

It is not surprising that this is the view of Bush followers, but it's also the predominant view of our ornery watchdog journalists as well. The Founders envisioned that the media would be the watchdog over government deceit and corruption, but nobody is more aggressive in dismissing concerns of government lawbreaking and deceit than the Time Magazines of our country. That's their primary function.
http://www.salon.com/opinion/greenwa...ime/index.html





MI5 Seeks Powers to Trawl Records in New Terror Hunt

Counter-terrorism experts call it a 'force multiplier': an attack combining slaughter and electronic chaos. Now Britain's security services want total access to commuters' travel records to help them meet the threat
Gaby Hinsliff

Millions of commuters could have their private movements around cities secretly monitored under new counter-terrorism powers being sought by the security services.

Records of journeys made by people using smart cards that allow 17 million Britons to travel by underground, bus and train with a single swipe at the ticket barrier are among a welter of private information held by the state to which MI5 and police counter-terrorism officers want access in order to help identify patterns of suspicious behaviour.

The request by the security services, described by shadow Home Secretary David Davis last night as 'extraordinary', forms part of a fierce Whitehall debate over how much access the state should have to people's private lives in its efforts to combat terrorism.

It comes as the Cabinet Office finalises Gordon Brown's new national security strategy, expected to identify a string of new threats to Britain - ranging from future 'water wars' between countries left drought-ridden by climate change to cyber-attacks using computer hacking technology to disrupt vital elements of national infrastructure.

The fear of cyber-warfare has climbed Whitehall's agenda since last year's attack on the Baltic nation of Estonia, in which Russian hackers swamped state servers with millions of electronic messages until they collapsed. The Estonian defence and foreign ministries and major banks were paralysed, while even its emergency services call system was temporarily knocked out: the attack was seen as a warning that battles once fought by invading armies or aerial bombardment could soon be replaced by virtual, but equally deadly, wars in cyberspace.

While such new threats may grab headlines, the critical question for the new security agenda is how far Britain is prepared to go in tackling them. What are the limits of what we want our security services to know? And could they do more to identify suspects before they strike?

One solution being debated in Whitehall is an unprecedented unlocking of data held by public bodies, such as the Oyster card records maintained by Transport for London and smart cards soon to be introduced in other cities in the UK, for use in the war against terror. The Office of the Information Commissioner, the watchdog governing data privacy, confirmed last night that it had discussed the issue with government but declined to give details, citing issues of national security.

Currently the security services can demand the Oyster records of specific individuals under investigation to establish where they have been, but cannot trawl the whole database. But supporters of calls for more sharing of data argue that apparently trivial snippets - like the journeys an individual makes around the capital - could become important pieces of the jigsaw when fitted into a pattern of other publicly held information on an individual's movements, habits, education and other personal details. That could lead, they argue, to the unmasking of otherwise undetected suspects.

Critics, however, fear a shift towards US-style 'data mining', a controversial technique using powerful computers to sift and scan millions of pieces of data, seeking patterns of behaviour which match the known profiles of terrorist suspects. They argue that it is unfair for millions of innocent people to have their privacy invaded on the off-chance of finding a handful of bad apples.

'It's looking for a needle in a haystack, and we all make up the haystack,' said former Labour minister Michael Meacher, who has a close interest in data sharing. 'Whether all our details have to be reviewed because there is one needle among us - I don't think the case is made.'

Jago Russell, policy officer at the campaign group Liberty, said technological advances had made 'mass computerised fishing expeditions' easier to undertake, but they offered no easy answers. 'The problem is what do you do once you identify somebody who has a profile that suggests suspicions,' he said. 'Once the security services have identified somebody who fits a pattern, it creates an inevitable pressure to impose restrictions.'

Individuals wrongly identified as suspicious might lose high-security jobs, or have their immigration status brought into doubt, he said. Ministers are also understood to share concerns over civil liberties, following public opposition to ID cards, and the debate is so sensitive that it may not even form part of Brown's published strategy.

But if there is no consensus yet on the defence, there is an emerging agreement on the mode of attack. The security strategy will argue that in the coming decades Britain faces threats of a new and different order. And its critics argue the government is far from ready.

The cyber-assault on Estonia confirmed that the West now faces a relatively cheap, low-risk means of warfare that can be conducted from anywhere in the world, with the power to plunge developed nations temporarily into the stone age, disabling everything from payroll systems that ensure millions of employees get paid to the sewage treatment processes that make our water safe to drink or the air traffic control systems keeping planes stacked safely above Heathrow.

And it is one of the few weapons which is most effective against more sophisticated western societies, precisely because of their reliance on computers. 'As we become more advanced, we become more vulnerable,' says Alex Neill, head of the Asia Security programme at the defence think-tank RUSI, who is an expert on cyber-attack.

The nightmare scenario now emerging is its use by terrorists as a so-called 'force multiplier' - combining a cyber-attack to paralyse the emergency services with a simultaneous atrocity such as the London Tube bombings.

Victims would literally have nowhere to turn for help, raising the death toll and sowing immeasurable panic. 'Instead of using three or four aircraft as in 9/11, you could do one major event and then screw up the communications network behind the emergency services, or attack the Underground control network so you have one bomb but you lock up the whole network,' says Davis. 'You take the ramifications of the attack further. The other thing to bear in mind is that we are ultimately vulnerable because London is a financial centre.'

In other words, cyber-warfare does not have to kill to bring a state to its knees: hackers could, for example, wipe electronic records detailing our bank accounts, turning millionaires into apparent paupers overnight.

So how easy would it be? Estonia suffered a relatively crude form of attack known as 'denial of service', while paralysing a secure British server would be likely to require more sophisticated 'spy' software which embeds itself quietly in a computer network and scans for secret passwords or useful information - activating itself later to wreak havoc.

Neill said that would require specialist knowledge to target the weakest link in any system: its human user. 'You will get an email, say, that looks like it's from a trusted colleague, but in fact that email has been cloned. There will be an attachment that looks relevant to your work: it's an interesting document, but embedded in it invisibly is "malware" rogue software which implants itself in the operating systems. From that point, the computer is compromised and can be used as a platform to exploit other networks.'

Only governments and highly sophisticated criminal organisations have such a capability now, he argues, but there are strong signs that al-Qaeda is acquiring it: 'It is a hallmark of al-Qaeda anyway that they do simultaneous bombings to try to herd victims into another area of attack.'

The West, of course, may not simply be the victim of cyber-wars: the United States is widely believed to be developing an attack capability, with suspicions that Baghdad's infrastructure was electronically disrupted during the 2003 invasion.

So given its ability to cause as much damage as a traditional bomb, should cyber-attack be treated as an act of war? And what rights under international law does a country have to respond, with military force if necessary? Next month Nato will tackle such questions in a strategy detailing how it would handle a cyber-attack on an alliance member. Suleyman Anil, Nato's leading expert on cyber-attack, hinted at its contents when he told an e-security conference in London last week that cyber-attacks should be taken as seriously as a missile strike - and warned that a determined attack on western infrastructure would be 'practically impossible to stop'.

Tensions are likely to increase in a globalised economy, where no country can afford to shut its borders to foreign labour - an issue graphically highlighted for Gordon Brown weeks into his premiership by the alleged terrorist attack on Glasgow airport, when it emerged that the suspects included overseas doctors who entered Britain to work in the NHS.

A review led by Homeland Security Minister Admiral Sir Alan West into issues raised by the Glasgow attack has been grappling with one key question: could more be done to identify rogue elements who are apparently well integrated with their local communities?

Which is where, some within the intelligence community insist, access to personal data already held by public bodies - from the Oyster register to public sector employment records - could come in. The debate is not over yet.
http://www.guardian.co.uk/uk/2008/ma...rity.terrorism





International Cyber-Cop Unit Girds for Uphill Battles
Layer 8

A group of international cyber cops is ramping up plans to fight online crime across borders.

The unit, known as the Strategic Alliance Cyber Crime Working Group, met this month in London and is made up of high-level online law enforcement representatives from the FBI, Australia, Canada, New Zealand, and the United Kingdom. One of the main goals of the group, which was founded in 2006, is to fight cyber crime in a common way by sharing intelligence, swapping tools and best practices, and strengthening and synchronizing their respective laws.

And it has its work cut out for it.

The Government Accountability Office last year said there is concern about threats that nation-states and terrorists pose to our national security through attacks on US computer-reliant critical infrastructures and theft of our sensitive information.

For example, according to the US-China Economic and Security Review Commission report, Chinese military strategists write openly about exploiting the vulnerabilities created by the U.S. military’s reliance on advanced technologies and the extensive infrastructure used to conduct operations.

Also, according to FBI testimony, terrorist organizations have used cybercrime to raise money to fund their activities. Despite the reported loss of money and information and known threats from adversaries, there remains a lack of understanding about the precise magnitude of cybercrime and its impact because cybercrime is not always detected or reported.

The group hopes to address some of those problems. At the London meeting, participating countries outlined ways to share forensic tools, possibilities for joint training, and strategies for a public awareness campaign to help reduce cyber crime. According to the FBI, the group is one outgrowth of the larger Strategic Alliance Group—a formal partnership between these nations dedicated to tackling larger global crime issues, particularly organized crime.

The group so far has:

• Collectively developed a comprehensive overview of the transnational cyber threat—including current and emerging trends, vulnerabilities, and strategic initiatives for the working group to pursue (note: the report is available only to law enforcement);

• Set up a special area on Law Enforcement Online, the FBI’s secure Internet portal, to share information and intelligence;

• Launched a series of information bulletins on emerging threats and trends (for example, it drafted a bulletin recently describing how peer-to-peer, or P2P, file sharing programs can inadvertently leak vast amounts of sensitive national security, financial, medical, and other information);

• Began exploring an exchange of cyber experts to serve on joint international task forces and to learn each other’s investigative techniques firsthand; and

• Shared training curriculums and provided targeted training to international cyber professionals.

The GAO noted cybercrime laws vary widely across the international community. For example, Australia enacted its Cybercrime Act of 2001 to address this type of crime in a manner similar to the US Computer Fraud and Abuse Act. In addition, Japan enacted the Unauthorized Computer Access Law of 1999 to cover certain basic areas similar to those addressed by the U.S. federal cybercrime legislation.

Countries such as Nigeria with minimal or less sophisticated cybercrime laws have been noted sources of Internet fraud and other cybercrime. In response, they have looked to the examples set by industrialized nations to create or enhance their cybercrime legal framework. A proposed cybercrime bill, the Computer Security and Critical Information Infrastructure Protection Bill, is being debated before Nigeria's General Assembly.

Because political or natural boundaries are not an obstacle to conducting cybercrime, international agreements are essential to fighting it. For example, in November 2001, the United States and 29 other countries signed the Council of Europe's Convention on Cybercrime as a multilateral instrument to address the problems posed by criminal activity on computer networks. Nations supporting this convention agree to have criminal laws within their own nation to address cybercrime, such as hacking, spreading viruses or worms, and similar unauthorized access to, interference with, or damage to computer systems. It also enables international cooperation in combating crimes such as child sexual exploitation, organized crime, and terrorism through provisions to obtain and share electronic evidence.

The U.S. Senate ratified this convention in August 2006. As the 16th of 43 countries to support the agreement, the United States agrees to cooperate in international cybercrime investigations.

The governments of European countries such as Denmark, France, and Romania have ratified the convention. Other countries including Germany, Italy, and the United Kingdom have signed the convention although it has not been ratified by their governments. Non-European countries including Canada, Japan, and South Africa have also signed but not yet ratified the convention, the GAO report said.

In the US alone, the GAO said the annual loss due to computer crime was estimated to be $67.2 billion for US organizations, according to a 2005 FBI survey. The estimated losses associated with particular crimes include $49.3 billion in 2006 for identity theft and $1 billion annually due to phishing. These projected losses are based on direct and indirect costs that may include actual money stolen, estimated cost of intellectual property stolen, and recovery cost of repairing or replacing damaged networks and equipment.

Meanwhile the Strategic Alliance Cyber Crime Working Group will meet again in May, to bring together legal and legislative experts from the five countries to talk about common challenges, differing approaches, and potential ways to streamline investigations and harmonize laws on everything from data retention standards to privacy requirements, the FBI said.
http://www.networkworld.com/community/node/26144





Estonia Calls for EU Law to Combat Cyber Attacks

Estonia has called on the European Union to make cyber attacks a criminal offence to stop Internet users from freezing public and private Web sites for political revenge.

Estonian President Toomas Hendrik Ilves said he believed the Russian government was behind an online attack on Estonia over its decision to move a Red Army monument from a square in the capital Tallinn. Russia has denied any involvement.

The decision triggered two nights of rioting by mainly Russian-speaking protesters, who argued that the Soviet-era memorial was a symbol of sacrifices made during World War Two.

The rioting coincided with repeated requests to Web sites, forcing them to crash or freeze. Network specialists said at the time at least some of the computers used could be traced to the Russian government or government agencies.

"Russian officials boasted about having done it (cyber attacks) afterwards -- one in a recent interview a month and a half ago saying we can do much more damage if we wanted to," he told Reuters in an interview.

The European Commission has sole right to initiate EU law and its Information Society and Media Commissioner Viviane Reding agreed action was needed.

"What happened in Estonia should be a wake-up call for Europe. Cyber attacks on one member state concern the whole of Europe. They must therefore receive a firm European response," Reding told Reuters from Budapest.

Reding said that last November she proposed setting up a new European telecoms market authority.

NATO also has opened a cyber defence "centre of excellence" in Estonia to study solutions to combating online attacks.

Mock cyber attacks on Estonia's new online voting system have given the country a better idea of how to handle a real attack when one comes, Ilves said.

"Other (EU) member states helped in fending off the attacks by siphoning off some of the attacks," Ilves said.
http://www.financialmirror.com/more_...%20/%20Telecom





Serious RFID Vulnerability Discovered

A group of digital security researchers at a Dutch university has discovered a major security flaw in a popular RFID tag; the discovery can have serious commercial and national security implications; as important as the discovery itself was how the researchers handled the situation

RFID technology is gaining new adopters, and some governmental organizations now develop policies to push for an even faster adoption of the technology (see HSDW story). This story is not going to help this trend: A week ago researchers and students of the Digital Security group of the Radboud University Nijmegen discovered a serious security flaw in a widely used type of contactless smartcard, also called RFID tag. It concerns the Mifare Classic RFID card produced by NXP (formerly Philips Semiconductors). Earlier, German researchers Karsten Nohl and Henryk Plötz pointed out security weaknesses of these cards. Worldwide around one billion of these cards have been sold. This type of card is used for the Dutch "ov-chipkaart" (the RFID card for public transport throughout the Netherlands) and public transport systems in other countries (for instance, the subway in London and Hong Kong). Mifare cards are also widely used as company cards to control access to buildings and facilities. All this means that the flaw has a broad impact. Because some cards can be cloned, it is in principle possible to access buildings and facilities with a stolen identity. This has been demonstrated on an actual system. In many situations where these cards are used there will be additional security measures; it is advisable to strengthen these where possible.

The Digital Security group found weaknesses in the authentication mechanism of the Mifare Classic. In particular:

1. The working of the CRYPTO1 encryption algorithm has been reconstructed in detail

2. There is a relatively easy method to retrieve cryptographic keys, which does not rely on expensive equipment

Combining these ingredients, the group succeeded in mounting an actual attack, in which a Mifare Classic access control card was successfully cloned. In situations where there are no additional security measures, this would allow unauthorized access by people with bad intentions.

Background

The Mifare Classic is a contactless smartcard developed in the mid-1990s. It is a memory card which offers some memory protection. The card is not programmable. The cryptographic operations it can perform are implemented in hardware, using a so-called linear feedback shift register (LFSR) and a "filter function." The encryption algorithm this implements is a proprietary algorithm CRYPTO1 which is a trade secret of NXP. The security of the card relies in part on the secrecy of the CRYPTO1 algorithm, an approach known as "security by obscurity."
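The LFSR-plus-filter construction can be sketched in a few lines of Python. The 48-bit register width matches published descriptions of CRYPTO1, but the tap positions and the filter function below are illustrative placeholders, not the real (then-secret) CRYPTO1 parameters.

```python
# Conceptual sketch of an LFSR-based stream cipher in the style of
# CRYPTO1. The taps and filter here are made up for illustration;
# only the 48-bit register size matches the real design.

TAPS = [0, 5, 9, 10, 12, 14, 15, 17, 19, 24, 25, 27, 29, 35, 39, 41, 42, 43]

def lfsr_step(state):
    """Shift the 48-bit register one step, feeding back the XOR of the taps."""
    feedback = 0
    for t in TAPS:
        feedback ^= (state >> t) & 1
    return ((state >> 1) | (feedback << 47)) & ((1 << 48) - 1)

def filter_output(state):
    """Nonlinear filter: combine a few state bits into one keystream bit.
    (Illustrative only; the real filter is more involved.)"""
    a = (state >> 3) & 1
    b = (state >> 17) & 1
    c = (state >> 31) & 1
    return (a & b) ^ c

def keystream(state, n):
    """Produce n keystream bits from an initial register state."""
    bits = []
    for _ in range(n):
        bits.append(filter_output(state))
        state = lfsr_step(state)
    return bits

print(keystream(0x123456789ABC, 8))
```

Because both the feedback and the filter are fixed, deterministic functions of the register state, knowing the state at any moment lets an attacker reproduce all future keystream bits — which is why recovering the algorithm and key is so damaging.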

Mifare Classic cards are typically used for authentication. Here the goal is that two parties prove who they are. This is done by demonstrating that they know some common secret information, a so-called shared secret (cryptographic) key. Both parties, in this case the Mifare card and the card reader, carry out certain operations and then check each other's results to be sure of whom they are dealing with. Authentication is needed to control access to facilities and buildings, and Mifare cards are commonly used for this purpose. Successful authentication is also a prerequisite to reading or writing part of the memory of the Mifare Classic. The card's memory is divided into sectors, each protected by two cryptographic keys. Proper key management is a subject in its own right. Roughly speaking, there are two possibilities:

1. All cards and all card readers used for some application have the same keys for authentication. This is common when cards are used for access control

2. Each card has its own cryptographic keys. To check the keys of a card, the card reader should then first determine which card it is talking to and then look up or calculate the associated key(s). This is called key diversification. It is claimed that this approach is used for the Dutch public transport card.
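Key diversification is typically implemented by deriving each card's key from a master secret and the card's unique ID. The sketch below uses HMAC-SHA256 truncated to the Mifare Classic's 6-byte key length; the master key and derivation scheme are hypothetical, since the actual algorithm used by any given deployment is not public.

```python
import hashlib
import hmac

MASTER_KEY = b"example-master-key"  # hypothetical; real systems guard this closely

def diversified_key(card_uid: bytes) -> bytes:
    """Derive a per-card key from the master key and the card's UID.
    HMAC-SHA256 truncated to 48 bits (6 bytes) to match the Mifare
    Classic key length; illustrative scheme only."""
    return hmac.new(MASTER_KEY, card_uid, hashlib.sha256).digest()[:6]

# Two different card UIDs yield two unrelated keys, so cloning one
# card's key does not compromise the rest of the fleet.
k1 = diversified_key(bytes.fromhex("04a23bc1"))
k2 = diversified_key(bytes.fromhex("04a23bc2"))
assert k1 != k2
```

The reader looks up or recomputes the key from the UID it reads, which is why a break of a single diversified card is far less damaging than a break of a shared-key system.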

Now, the Digital Security group found weaknesses in the authentication mechanism of the Mifare Classic. In particular:

1. The working of the CRYPTO1 encryption algorithm has been reverse engineered, and the group developed its own implementation of the algorithm

2. The group found a relatively easy method to retrieve cryptographic keys, which does not rely on expensive equipment

To reverse engineer the CRYPTO1 encryption algorithm the group used flawed authentication attempts. If one does not precisely follow the rules of the prescribed protocol, one can obtain some information about the way it works. Combining such information, it was possible to reconstruct the algorithm. Once the algorithm is known, one can find out the keys that are used by a so-called brute force attack, that is, simply trying all possible keys. In this case the keys are 48 bits long. Trying all the keys then requires around nine hours on advanced equipment, according to the recent TNO report 34643, "Security Analysis of the Dutch OV-chipkaart," published 26 February 2008.
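The brute-force idea itself is simple, as the toy sketch below shows on a deliberately small 16-bit keyspace; scaling the same loop to all 2^48 keys is what took around nine hours on specialized hardware. The "secret" and the check function here are stand-ins, not CRYPTO1.

```python
def brute_force(check, key_bits):
    """Try every possible key until check(key) accepts it.
    Feasible here only because the toy keyspace is tiny."""
    for key in range(1 << key_bits):
        if check(key):
            return key
    return None

SECRET = 0x2F1A  # toy 16-bit "key" for demonstration
found = brute_force(lambda k: k == SECRET, 16)
print(hex(found))  # 0x2f1a

# Back-of-the-envelope scaling: the 48-bit CRYPTO1 keyspace is
# 2**32 times larger than this 16-bit toy keyspace.
print(2**48 // 2**16)  # 4294967296
```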

Here too, however, certain flaws in the authentication protocol could be exploited, as the group discovered. This led members of the digital security group to the second point: there is a way to relatively easily retrieve the key without carrying out a lengthy brute force attack. This can be done by first carrying out many failed authentication attempts, which do provide some information. Storing the results of this in a big table, one can look for a match and retrieve the key. The table only has to be constructed once, and can be prepared in advance by repeatedly running the CRYPTO1 algorithm on a fixed input. The group's proof-of-concept demonstration of this attack still required many authentication attempts once this table had been constructed. Recording these attempts took several hours, but could be carried out by a hidden antenna to eavesdrop on a card reader. It seems that the complexity can be further reduced, possibly dramatically so, making the attack much simpler.
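The table-based shortcut is a classic time-memory tradeoff. The toy sketch below illustrates the principle on a stand-in cipher with a small keyspace: run the cipher once for every key in a one-time precomputation, index keys by the output they produce, and later recover a key with a single lookup instead of a fresh search. The cipher and keyspace here are illustrative, not CRYPTO1.

```python
def toy_cipher(key):
    """Stand-in keystream generator (not CRYPTO1): a fixed function
    of the key, observable by an eavesdropper."""
    return (key * 2654435761) & 0xFFFF

KEY_BITS = 16

# One-time precomputation: map every possible output back to a key
# that produces it. The real attack builds (much larger) tables for
# CRYPTO1 outputs over the 48-bit keyspace, once, in advance.
table = {}
for key in range(1 << KEY_BITS):
    table.setdefault(toy_cipher(key), key)

observed = toy_cipher(0x1234)   # output captured from a card reader
recovered = table[observed]     # single lookup, no brute force at attack time
assert toy_cipher(recovered) == observed
```

The cost of the search is paid once when the table is built; every subsequent key recovery is nearly free, which is what makes the attack practical against fielded systems.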

Once the secret cryptographic key is retrieved, there will be possibilities for abuse. How severe these possibilities are will depend on the situation. If all cards share the same key, then the system will be extremely vulnerable. This may be the case if cards are used for access control to buildings and facilities, both in the private and public sector. There is however no information on how common this is. For such a setting the group demonstrated an actual attack, where a card of, say, an employee can be cloned by bumping into that person with a portable card reader. The person whose identity is being stolen may then be completely unaware that anything has happened. In a situation in which diversified keys are used, abuse will be more difficult, but not impossible. No actual attacks have been demonstrated for such a scenario.

At the technical level there are currently no known countermeasures. Shielding cards when they are not in use, for example, in a metal container, reduces the risk of an attacker secretly reading out a card. When the card is being used, however, it is still possible to eavesdrop on the communication with a hidden antenna near the access point. Strengthening of traditional access control measures is therefore advisable. Access to sensitive facilities will (or should) be protected by several protection mechanisms anyway, of which the RFID tag is only one.

The Dutch group's hacking of the RFID card is not the first such attempt. In December 2007 Karsten Nohl and Henryk Plötz announced that they had reconstructed CRYPTO1 at a hackers' conference in Berlin. The Dutch group has been in touch with them, and the group's work builds on their results. Nohl and Plötz kept some information about CRYPTO1 to themselves. To reverse engineer CRYPTO1, they carried out a physical attack in which they studied the layout of the hardware implementing the algorithm on an actual Mifare Classic chip. Their approach is completely different from the Dutch group's approach, as the latter only exploited weaknesses of the protocol and did not look at the hardware implementation.

The Dutch researchers say they face a dilemma: When discovering a security flaw there is a question of how to handle this information. Immediate publication of the details can encourage attacks and do serious damage. Keeping the flaw secret for a long period may mean that necessary steps to counter the vulnerability are not taken. It is common practice in the security community to try to strike a balance between these concerns, and reveal flaws after some delay. This is the approach the group has taken. On Friday 7 March the government was informed, because national security issues might be at stake. On 8 March, experts of the Dutch Signals Security Bureau (NBV) of the General Intelligence and Security Service (AIVD) visited Nijmegen to assess the situation, and concluded that the approach the digital security group demonstrated was an effective attack. On 9 March, NXP was informed, and on Monday, 10 March, Trans Link Systems (the company developing the Dutch public transport card). The group spoke to representatives of both companies about the technical details, and is collaborating with them to analyze the impact and think of possible countermeasures. On 12 March, minister Ter Horst informed the Dutch Parliament of the problem.
http://hsdailywire.com/single.php?id=5765





How to Hack RFID-Enabled Credit Cards for $8

A number of credit card companies now issue credit cards with embedded RFIDs (radio frequency ID tags), with promises of enhanced security and speedy transactions.

But on today's episode of Boing Boing tv, hacker and inventor Pablos Holman shows Xeni how you can use about $8 worth of gear bought on eBay to read personal data from those credit cards -- cardholder name, credit card number, and whatever else your bank embeds in this manner.

Fears over data leaks from RFID-enabled cards aren't new, and some argue they're overblown -- but this demo shows just how cheap and easy the "sniffing" can be.

This episode is part of our ongoing series of interviews with some of the thinkers, hackers, and tinkerers at the O'Reilly Emerging Technology conference this year.
http://tv.boingboing.net/2008/03/19/...-an-rfide.html





State Agency Moves to Plug USB Flash Drive Security Gap
Brian Fonseca

Security officials are issuing USB flash drives to workers in the state of Washington's Division of Child Support as part of a new security procedure established to eliminate the use of nonapproved thumb drives by workers collecting and transporting confidential data.

The state has so far distributed 150 of 200 SanDisk Corp. Cruzer Enterprise thumb drives to unit supervisors in the division who manage collections teams in 10 field offices, said officials (see also "Review: 7 secure USB drives").

Brian Main, the division's data security officer, said the new drives promise to help officials keep better track of mobile data by integrating them with Web-based management software that can centrally monitor, configure and prevent unauthorized access to the miniature storage devices.

"We do periodic risk analysis of our systems, and one of the things that came up is the use of thumb drives -- they were everywhere," said Main. "We had a hard time telling which were privately owned and which were owned by the state." He also said that officials had difficulty keeping track of what data was stored on the workers' thumb drives.

Main said the division plans to manage and back up the new drives using SanDisk's Central Management & Control server software, which will soon be installed at the division's headquarters in Olympia. The software, which relies on a Web connection to directly communicate with agents on the tiny flash drives, can also remotely monitor and flush any lost drives, he said.

Each field office will run a copy of the software to handle localized management needs, he said.

Officials in the division's training operations will get Cruzer Enterprise devices with 4GB of memory to store large presentations and screenshots. Enforcement personnel will get devices that store 1GB, Main said.

Main said the division first looked at Verbatim America LLC's thumb drives in its effort to improve security but ultimately turned to the SanDisk technology because of its support for Microsoft Corp.'s Windows Vista operating system.

Cruzer Enterprise provides 256-bit AES encryption and requires users to create a password upon activation. The device automatically deletes all of its content once someone has tried 10 times to access it using incorrect passwords. Main said the self-encrypting capability removes the "human component" from managing confidential data, a key feature for the agency.
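The lockout policy described above amounts to a failure counter that triggers a wipe. The sketch below is a behavioral toy model of that policy, not SanDisk's actual firmware logic, and the file name is made up.

```python
class SecureDrive:
    """Toy model of a lockout-and-wipe policy like the one described
    for the Cruzer Enterprise: 10 consecutive wrong passwords destroy
    the stored data. Behavioral sketch only, not real firmware."""

    MAX_ATTEMPTS = 10

    def __init__(self, password):
        self._password = password
        self._failures = 0
        self.data = {"case_records.csv": b"confidential"}  # hypothetical contents

    def unlock(self, attempt):
        if attempt == self._password:
            self._failures = 0          # a success resets the counter
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.data.clear()           # device self-wipes
        return False

drive = SecureDrive("correct horse")
for _ in range(10):
    drive.unlock("guess")
assert drive.data == {}  # contents destroyed after 10 bad tries
```

The design tradeoff is deliberate: a stolen drive yields nothing to a password-guessing attacker, at the cost of losing the data if the owner forgets the password, which is why central backup via the management server matters.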

The Division of Child Support collects about $700 million annually in child-support payments from noncustodial parents. The agency, part of the state's Department of Social and Health Services, manages 350,000 active child-support cases annually, noted Main.

Sensitive data transported by off-site workers includes tax documents, employer records, criminal histories and federal passport data of some agency clients, Main said. At the least, he noted, the drives include the names, dates of birth and Social Security numbers of children serviced by the agency.

The state began rolling out the Cruzer drives late last year after recalling the thumb drives used by workers. Most of those had been purchased independently by the employees, causing myriad problems for security personnel, Main said. The new policy requires workers to use the drives supplied by the agency. Main said he eventually plans to destroy all existing thumb drives collected as part of the security policy change.

Most companies are too enamored of the convenience, portability and low cost of USB flash drives to consider their threat to security, said Larry Ponemon, chairman of Ponemon Institute LLC, a Traverse City, Mich.-based research firm.

"I think a lot of organizations are asleep at the switch. They don't see this as a huge problem, and it obviously has the potential to be the mother of all data-protection issues," said Ponemon. "A lot of organizations believe if you have a good [security] policy and you educate people and ask them to be good, that's sufficient. The reality is, thumb drives create a lot of uncertainty because they contain an enormous amount of information."

A December 2007 survey of 691 IT security practitioners by Ponemon Institute asked respondents if they believed most employees would report a lost laptop or memory stick. While 78% said that employees would likely notify IT about a lost laptop, only 25% expected that workers would report a lost USB flash drive.

"The general perception is no one will report a lost USB memory stick because they're so cheap -- and the embarrassment factor. It's hard to even know all the different instances where information [on them] is lost or stolen," remarked Ponemon.

The agency is in talks with ControlGuard to deploy the security provider's Endpoint Access Manager Server and Endpoint Agents across its network. Access Management Server sends security policy information from a central location to agents installed at specific data points to enforce protection and monitor activities. Main said the technology would allow his office to restrict authentication and control data output access on PCs, hard drives and printers.
http://www.computerworld.com/action/...&intsrc=kc_top





Second Mass Hack Exposed
Shaun Nichols

Hot on the heels of a recent hack in which 10,000 sites were compromised, researchers have disclosed a new large-scale attack.

Researchers at McAfee estimated that the attack has been active for roughly one week, and in that time frame has managed to place itself on roughly 200,000 web pages.

Most of the infected pages are running the phpBB forum software, said McAfee. The compromised pages are embedded with a Javascript file that links to the site hosting the attack.

Rather than attempt to exploit browser vulnerabilities, the attack attempts to trick a user into manually launching its malicious payload.

"This contrasts [Thursday’s] attack in that the vast majority of those were active server pages (.ASP)," explained McAfee researcher Craig Schmugar on a company blog posting.

"The ASP attacks are different than the phpBB ones in that the payload and method are quite different. Various exploits are used in the ASP attacks, where the phpBB ones rely on social engineering."

The infected pages bring up what appears to be a pornographic web site. Upon loading the page, a 'fake codec' social engineering attack is attempted. The user is told that in order to view the movie on the page, a special video codec must be installed.

The user then downloads a trojan program which installs a malware package on the user's system, then delivers a fraudulent error message telling the user that the supposed codec could not be installed.
http://www.itnews.com.au/News/72214,...k-exposed.aspx





Advanced Software Identifies Complex Cyber Network Attacks

By their very nature networks are highly interdependent and each machine’s overall susceptibility to attack depends on the vulnerabilities of the other machines in the network; new software allows IT managers to address this problem

A chain is only as strong as its weakest link, and a computer network is only as secure as the least-secure computer attached to it. Researchers at George Mason University’s Center for Secure Information Systems have developed new software that can reduce the impact of cyber attacks by identifying the possible vulnerability paths through an organization’s networks. By their very nature networks are highly interdependent and each machine’s overall susceptibility to attack depends on the vulnerabilities of the other machines in the network. Attackers can thus take advantage of multiple vulnerabilities in unexpected ways, allowing them incrementally to penetrate a network and compromise critical systems. In order to protect an organization’s networks, it is necessary to understand not only individual system vulnerabilities, but also their interdependencies. “Currently, network administrators must rely on labor-intensive processes for tracking network configurations and vulnerabilities, which requires a great deal of expertise and is error prone because of the complexity, volume and frequent changes in security data and network configurations,” says Sushil Jajodia, university professor and director of the Center for Secure Information Systems. “This new software is an automated tool that can analyze and visualize vulnerabilities and attack paths, encouraging ‘what-if analysis’.”

The software developed at Mason, CAULDRON, allows for the transformation of raw security data into roadmaps that allow users to proactively prepare for attacks, manage vulnerability risks and have real-time situational awareness. CAULDRON provides informed risk analysis, analyzes vulnerability dependencies and shows all possible attack paths into a network. In this way, it accounts for sophisticated attack strategies that may penetrate an organization’s layered defenses. CAULDRON’s intelligent analysis engine reasons through attack dependencies, producing a map of all vulnerability paths that are then organized as an attack graph that conveys the impact of combined vulnerabilities on overall security. To manage attack graph complexity, CAULDRON includes hierarchical graph visualizations with high-level overviews and detail drilldown, allowing users to navigate into a selected part of the big picture to get more information. “One example of this software in use is at the Federal Aviation Administration. They recently installed CAULDRON in their Cyber Security Incident Response Center and it is helping them prioritize security problems, reveal unseen attack paths and protect across large numbers of attack paths,” says Jajodia. “While currently being used by the FAA and defense community, the software is applicable in almost any industry or organization with a network and resources they want to keep protected, such as banking or education.”
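The core idea of chaining vulnerabilities into attack paths can be sketched as path enumeration over a directed graph, where nodes are machines and an edge means an attacker who controls one machine can exploit a vulnerability to reach another. The network and the tiny engine below are hypothetical illustrations, not CAULDRON's actual model or code.

```python
# Minimal attack-graph sketch in the spirit of CAULDRON (illustrative
# only): enumerate every way an attacker can chain exploits from an
# entry point to a critical asset.

EDGES = {  # hypothetical network: "A can exploit its way onto B"
    "internet":   ["web-server"],
    "web-server": ["app-server", "mail-server"],
    "mail-server": ["app-server"],
    "app-server": ["database"],
    "database":   [],
}

def attack_paths(graph, src, target):
    """Enumerate all simple (cycle-free) paths from src to target."""
    paths, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # never revisit a machine on the same path
                stack.append((nxt, path + [nxt]))
    return paths

for p in attack_paths(EDGES, "internet", "database"):
    print(" -> ".join(p))
```

Even this toy graph shows the point of the analysis: patching the web server cuts off every path at once, whereas patching the mail server leaves the database reachable — the kind of prioritization an attack-graph tool automates at scale.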
http://hsdailywire.com/single.php?id=5779





Stanford Researchers Developing 3-D Camera With 12,616 Lenses
Dan Stober

The camera you own has one main lens and produces a flat, two-dimensional photograph, whether you hold it in your hand or view it on your computer screen. On the other hand, a camera with two lenses (or two cameras placed apart from each other) can take more interesting 3-D photos.

But what if your digital camera saw the world through thousands of tiny lenses, each a miniature camera unto itself? You'd get a 2-D photo, but you'd also get something potentially more valuable: an electronic "depth map" containing the distance from the camera to every object in the picture, a kind of super 3-D.

Stanford electronics researchers, led by electrical engineering Professor Abbas El Gamal, are developing such a camera, built around their "multi-aperture image sensor." They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than pixels in standard digital cameras. They've grouped the pixels in arrays of 256 pixels each, and they're preparing to place a tiny lens atop each array.

"It's like having a lot of cameras on a single chip," said Keith Fife, a graduate student working with El Gamal and another electrical engineering professor, H.-S. Philip Wong. In fact, if their prototype 3-megapixel chip had all its micro lenses in place, they would add up to 12,616 "cameras."

Point such a camera at someone's face, and it would, in addition to taking a photo, precisely record the distances to the subject's eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes.

But there are a number of other possibilities for a depth-information camera: biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings.

The technology is expected to produce a photo in which almost everything, near or far, is in focus. But it would be possible to selectively defocus parts of the photo after the fact, using editing software on a computer.

Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities. "People are coming up with many things they might do with this," Fife said. The three researchers published a paper on their work in the February edition of the IEEE ISSCC Digest of Technical Papers.

Their multi-aperture camera would look and feel like an ordinary camera, or even a smaller cell phone camera. The cell phone aspect is important, Fife said, given that "the majority of the cameras in the world are now on phones."

Here's how it works:

The main lens (also known as the objective lens) of an ordinary digital camera focuses its image directly on the camera's image sensor, which records the photo. The objective lens of the multi-aperture camera, on the other hand, focuses its image about 40 microns (a micron is a millionth of a meter) above the image sensor arrays. As a result, any point in the photo is captured by at least four of the chip's mini-cameras, producing overlapping views, each from a slightly different perspective, just as the left eye of a human sees things differently than the right eye.
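The distance recovery rests on classic stereo triangulation: depth equals focal length times baseline divided by disparity, the apparent shift of a point between two overlapping views. The numbers in the sketch below are illustrative — the baseline is derived from the article's figures (256 pixels per array in a 16-by-16 grid at 0.7 microns gives an 11.2-micron array pitch), and the focal length is a made-up placeholder, not the sensor's published specification.

```python
def depth_from_disparity(focal_um, baseline_um, disparity_um):
    """Classic stereo triangulation; all lengths in micrometers.
    Smaller disparity between two views means a more distant object."""
    return focal_um * baseline_um / disparity_um

focal = 500.0     # hypothetical micro-lens focal length (um)
baseline = 11.2   # spacing between adjacent arrays: 16 pixels * 0.7 um

# Each 0.7 um step of disparity (one pixel's worth) halves or doubles
# the inferred depth in this toy configuration.
for disparity in (0.7, 1.4, 2.8):
    depth = depth_from_disparity(focal, baseline, disparity)
    print(f"disparity {disparity} um -> depth {depth:.0f} um")
```

Smaller pixels resolve disparity more finely, which is why the article notes that shrinking pixels below the lens's optical limit still buys more depth information even when it no longer buys a sharper 2-D image.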

The outcome is a detailed depth map, invisible in the photograph itself but electronically stored along with it. It's a virtual model of the scene, ready for manipulation by computation. "You can choose to do things with that image that you weren't able to do with the regular 2-D image," Fife said. "You can say, 'I want to see only the objects at this distance,' and suddenly they'll appear for you. And you can wipe away everything else."

Or the sensor could be deployed naked, with no objective lens at all. By placing the sensor very close to an object, each micro lens would take its own photo without the need for an objective lens. It has been suggested that a very small probe could be placed against the brain of a laboratory mouse, for example, to detect the location of neural activity.

Other researchers are headed toward similar depth-map goals from different approaches. Some use intelligent software to inspect ordinary 2-D photos for the edges, shadows or focus differences that might infer the distances of objects. Others have tried cameras with multiple lenses, or prisms mounted in front of a single camera lens. One approach employs lasers; another attempts to stitch together photos taken from different angles, while yet another involves video shot from a moving camera.

But El Gamal, Fife and Wong believe their multi-aperture sensor has some key advantages. It's small and doesn't require lasers, bulky camera gear, multiple photos or complex calibration. And it has excellent color quality. Each of the 256 pixels in a specific array detects the same color. In an ordinary digital camera, red pixels may be arranged next to green pixels, leading to undesirable "crosstalk" between the pixels that degrades color.

The sensor also can take advantage of smaller pixels in a way that an ordinary digital camera cannot, El Gamal said, because camera lenses are nearing the optical limit of the smallest spot they can resolve. Using a pixel smaller than that spot will not produce a better photo. But with the multi-aperture sensor, smaller pixels produce even more depth information, he said.

The technology also may aid the quest for the huge photos possible with a gigapixel camera—that's 140 times as many pixels as today's typical 7-megapixel cameras. The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip.

The second benefit involves chip architecture. With a billion pixels on one chip, some of them are sure to go bad, leaving dead spots, El Gamal said. But the overlapping views provided by the multi-aperture sensor provide backups when pixels fail.

The researchers are now working out the manufacturing details of fabricating the micro-optics onto a camera chip.

The finished product may cost less than existing digital cameras, the researchers say, because the quality of a camera's main lens will no longer be of paramount importance. "We believe that you can reduce the complexity of the main lens by shifting the complexity to the semiconductor," Fife said.
http://news-service.stanford.edu/new...20-031908.html





Understanding Anonymity and the Need for Biometrics
Mark A. Shiffrin and Avi Silberschatz

Every time we leave our homes, we enter a world dominated by strangers and anonymity. Although facial or voice recognition may help us authenticate a few of those we encounter, what about the many people we don't know? In particular, how do we authenticate ourselves to each other when we need to know who we are dealing with?

Confusing privacy with anonymity has delayed implementation of robust, virtually tamper-proof biometric authentication to replace paper-based forms of ID that neither assure privacy nor reliably prove identity. The debate over Real ID and sensitivity to creation of any form of national ID reveal a fear that anything that identifies us to others will intrude on privacy. This has led to a preoccupation with forms of ID rather than the fundamental question of how we can reliably identify ourselves to each other. This is a crucial issue: We live in a society where we are often unknown to the people we encounter, including people who need to know exactly who they are dealing with.

While anonymity implies privacy, it does not confer it. We delude ourselves into thinking we have privacy if the person next to us doesn't know our name. If we use cash and avoid technological conveniences such as credit cards and windshield-mounted RFID devices to pay highway tolls, we may think we are going about life anonymously. We are allowing ourselves to believe that our public acts, how we communicate to others by word or deed in public space, are now somehow private.

In the tight-knit communities in which people used to live, people presumed that neighbors always knew whenever someone ventured outside of his or her front door, because everyone knew each other and could see public conduct. In the global virtual neighborhood, we now live among strangers. We may have anonymity as we encounter people who are not familiar with us, but it is only an illusion that public acts are now private.

Outside our homes, we have always lived in a public space where our open acts are no longer private. Anonymity has not changed that, but has provided an illusion of privacy and security. A credit card, rather than a shopkeeper, might record our purchases. Or, the RFID chip in our EZ pass might recognize that we cross a bridge at a given moment, instead of a toll taker. But these are records of public acts in which we openly engage in a public space with no reasonable expectation of confidentiality.

In public space, we engage in open acts where we have no expectation of privacy, as well as private acts that cannot take place within our homes and therefore require authenticating identity to carve a sphere of privacy. Such private acts might involve receiving medical treatment or conducting financial transactions. Individuals have a strong interest in maintaining control of treatment records that we rightly consider confidential, and knowing that finances cannot be misappropriated or snooped without consent.

The false privacy of anonymity allows others to steal what remains private to us in public space. Personal identity is unique and should remain in our control. Our lives outside our homes include not only open acts, but also those private transactions that have to take place in space we cannot control.

The lack of reliable authentication becomes a threat to control of our own identity and confidential information, because it enables others to take advantage of living among strangers to assume a false identity undetected. Strangers can falsely assume our identities when they steal identifying information like social security or credit card numbers. They can also threaten our personal, economic and national security when they garb themselves in legitimacy by forging ID or misusing someone else's ID with or without that person's collusion.

Biometric authentication has a role in maintaining and defending our control of our own identity and personal data. This emerging technology makes it virtually impossible to assume someone else's unique identity. It is a way of providing the same kind of security in the virtual neighborhood that we once had in rooted neighborhoods, where the uniqueness of individual identity was assured by neighbors authenticating each other through facial recognition.

We have to expect that people will see us when we are in public and that our open public acts will be just that. But we have to worry that, in an anonymous world without authenticated identity, privacy will be violated when others can assume our identifying characteristics and take control of transactions and interactions outside the home that are indeed personal and unique to us. This is a threat to the sphere of privacy we take with us outside our homes, including not only our interest in maintaining control of our names and reputations, but also of transactions and records that are highly confidential to us. Authenticated identity can address this threat, as well as the threat posed to society by strangers exploiting the vulnerability of anonymity to assume false identity.
http://www.thestandard.com/news/2008...eed-biometrics





Growth of Facial Recognition Biometrics, I

More and more private and government organizations are turning to facial-recognition biometrics (just think DMVs), but privacy concerns slow broader adoption

After a driver sits for a photo at the Illinois Secretary of State office to renew a license, officials use facial-recognition technology to give the resulting image a close look. First, state officials verify that the face matches the images on previous licenses issued under the driver’s name. The second, more extensive run-through determines whether the same face appears on other Illinois driver’s licenses with different names. Washington Technology's Alice Lipowicz writes that since starting the program in 1999, the state has uncovered more than 5,000 cases of multiple identity fraud, according to Beth Langen, policy and program division administrator at the Illinois Secretary of State office. The state pays Digimarc Corp. about 25 cents per license for the service, she said. “We are very pleased. It is a fraud for which we have no other tool” to combat, Langen said.
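The one-to-many check Langen describes can be sketched as a duplicate scan over stored face templates: flag any pair of licenses whose faces match but whose names differ. Everything below (the toy feature vectors, the threshold, the function names) is invented for illustration, not Digimarc's actual system:

```python
import math

# Toy license records: (name, face feature vector). Real systems derive
# these vectors from facial measurements; the numbers here are made up.
licenses = [
    ("Alice Smith", [0.61, 0.32, 0.85]),
    ("Bob Jones",   [0.10, 0.95, 0.40]),
    ("Al Smyth",    [0.62, 0.31, 0.86]),  # same face, different name
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_duplicate_identities(records, threshold=0.05):
    """One-to-many scan: flag pairs whose faces match but whose
    names differ -- the multiple-identity fraud case."""
    flagged = []
    for i, (name_a, face_a) in enumerate(records):
        for name_b, face_b in records[i + 1:]:
            if name_a != name_b and distance(face_a, face_b) < threshold:
                flagged.append((name_a, name_b))
    return flagged

print(find_duplicate_identities(licenses))  # [('Alice Smith', 'Al Smyth')]
```

The quadratic pairwise scan is fine for a sketch; a real system with millions of licenses would index the vectors for approximate nearest-neighbor search instead.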

In twenty states, about 40 percent of the nation’s drivers will undergo such facial-recognition database checks when they renew their licenses. It is just one indication that after years of ups and downs, facial-recognition technology in government agencies is gaining momentum on several fronts. Facial-image-matching applications have been available for more than a decade but are just beginning to attain widespread use in government. Using captured facial images that are adjusted for lighting, the technology extracts data from the image -- such as the length of a nose or a jaw line -- and uses an algorithm to compare the data from one image to other images. Facial recognition got off to a bad start when tested at the Super Bowl in Tampa, Florida, in 2001. Surveillance images of faces from the crowd generated so many false positives that the test was deemed a failure. Experts concede there still are high error rates if facial recognition is applied to images taken under less-than-ideal conditions. That type of application also spurs the greatest concern about privacy and civil rights violations.
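The matching step itself reduces to comparing extracted feature vectors. A minimal one-to-one verification sketch, with made-up feature names and an arbitrary threshold standing in for a production algorithm:

```python
import math

# Hypothetical facial measurements normalized to [0, 1]; real systems
# extract many such features after correcting for lighting and pose.
FEATURES = ("nose_length", "jaw_width", "eye_spacing")

def extract(measurements):
    """Turn a dict of named measurements into an ordered feature vector."""
    return [measurements[f] for f in FEATURES]

def verify(enrolled, probe, threshold=0.1):
    """One-to-one match: accept if the probe image's features are
    within `threshold` (Euclidean distance) of the enrolled template."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(enrolled, probe)))
    return d <= threshold

enrolled = extract({"nose_length": 0.42, "jaw_width": 0.77, "eye_spacing": 0.55})
probe    = extract({"nose_length": 0.44, "jaw_width": 0.75, "eye_spacing": 0.56})
print(verify(enrolled, probe))  # True
```

The threshold is the tuning knob behind the error rates the article cites: lowering it reduces false accepts at the cost of more false rejects.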

Now, however, facial recognition is considered reliable in environments in which the lighting, facial expression, angle of the head, and distance of the subject from the camera can be controlled, and interference from hats, sunglasses, and such can be minimized. The most recent test results announced in March 2007 by the National Institute of Standards and Technology (NIST) showed error rates of 1 percent or less, a huge improvement compared with previous tests. Spending for 2008 on contracts related to facial recognition is estimated at $400 million, said Peter Cheesman, a spokesman at International Biometric Group (IBG), a New York-based consulting firm. That includes $254 million for civilian agencies, $68 million for law enforcement, and about $75 million for surveillance and access control, he said. State driver’s license bureaus are in the forefront. The twenty or so state motor vehicle departments that have facial-recognition systems or are in the process of implementing them typically perform one-to-one and one-to-many matches within their states.

Growth in such applications is continuing, driven by concerns about identity theft and fraud. Along with Colorado, Illinois, Iowa, Kentucky, Wisconsin, Washington, and many others, Oregon is the latest state to install facial recognition. “Doing facial matching in state motor vehicle departments is acceptable, logical and inexpensive. More states will move toward it,” said Raj Nanavati, partner at IBG.
http://hsdailywire.com/single.php?id=5762





Voice Biometrics Gaining a Foothold

Philips and PerSay combine encryption software with technology that manages users' "voiceprints" and speech verification; both potential customers and privacy advocates say they like it

With the help of encryption, voice biometrics technology has taken a big step forward in strengthening its privacy and security measures, according to the Information and Privacy Commissioner of Ontario. The major advancements have come from Europe, where Netherlands-based electronics giant Philips has taken its biometric encryption technology and applied it to Israel-based PerSay’s "voiceprint" and speech-verification products. According to Ontario privacy commissioner Ann Cavoukian, the combination of these technologies has ushered in a new layer of privacy and security. “In the past, voice biometrics has basically been conducted in the clear and it hasn’t been encrypted,” Cavoukian said. “So when your voiceprint is sent across the network and back to the server, the information could be vulnerable. Now, you can replace that with a highly protected system that will give you the benefits of voice biometrics, but with enhanced privacy and security.”

IT World Canada's Rafael Ruffolo writes that biometric encryption is a process that securely binds a PIN or a cryptographic key to a biometric -- a physical characteristic such as a fingerprint, retina, palm print, or voiceprint. Cavoukian referred to biometric encryption as a positive-sum technology and encouraged any organization sitting on the fence about voice biometrics to consider adopting it with this new encryption system. “Based on these developments, I’d encourage anyone that is considering voice biometrics to look at the Philips/PerSay model and explore the encryption technology,” Cavoukian said. “I could see why people would hold back until there was a viable encryption system. But now I’m truly delighted because nothing could be more superior than biometric encryption.”

One application for the technology involves remote voice authentication. In standard remote authentication systems, a customer’s voiceprint is collected at a terminal and subsequently sent to a processing server, which compares the voiceprint with a stored template/biometric before sending it back to the terminal for authentication. With biometric encryption, however, the process is altered and the biometrically encrypted template is sent to the terminal, as opposed to sending the voiceprint out to the server. As a result, no audio is ever sent over the network. Michiel van der Veen, general manager at Philips priv-ID Biometrics, said that creating better privacy technologies will help speed up the penetration of biometric solutions within the commercial market. And because of how convenient the technology can be, he said, biometrics will play an increasingly large role in the average consumer’s life. “If you start thinking about using sensitive biometric information in all kinds of applications, it means that your biometric identity is exposed in all kinds of commercial solutions and can suddenly become available to a whole lot of people,” van der Veen said. “The current solutions already respect privacy and adhere to strict guidelines. But when you add privacy solutions like we are offering today, then you basically make privacy inherent. No matter who is using the solution, you will be able to guarantee the data and the voiceprint are not misused for other purposes.”
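The article does not detail the Philips/PerSay scheme, but the general idea of binding a cryptographic key to a noisy biometric can be illustrated with the classic fuzzy-commitment construction. Everything here is a toy: a 3x repetition code stands in for a real error-correcting code, and the bit strings are invented.

```python
import hashlib
import secrets

REP = 3  # each key bit is repeated 3 times (toy error-correcting code)

def encode(key_bits):
    return [b for b in key_bits for _ in range(REP)]

def decode(codeword):
    # Majority vote over each group of REP bits corrects isolated flips.
    return [int(sum(codeword[i:i + REP]) * 2 > REP)
            for i in range(0, len(codeword), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(key_bits, biometric_bits):
    """Store (commitment, key hash); neither reveals the raw biometric."""
    commitment = xor(encode(key_bits), biometric_bits)
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return commitment, digest

def unlock(commitment, digest, noisy_biometric):
    """Recover the key from a fresh, slightly noisy biometric reading."""
    key_bits = decode(xor(commitment, noisy_biometric))
    ok = hashlib.sha256(bytes(key_bits)).hexdigest() == digest
    return key_bits if ok else None

key = [1, 0, 1, 1]
bio = [secrets.randbelow(2) for _ in range(len(key) * REP)]
commitment, digest = enroll(key, bio)

noisy = list(bio)
noisy[2] ^= 1  # one flipped bit: a slightly different reading of the same voice
print(unlock(commitment, digest, noisy))  # [1, 0, 1, 1]
```

A close-enough reading recovers the key; a different speaker's bits decode to the wrong key and the hash check fails, which is why the stored commitment can travel the network without exposing the voiceprint.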

Cavoukian said that the most remarkable aspect of combining biometric encryption and voice recognition was the technical challenges both Philips and PerSay were able to overcome. “What often happens is we see degradation in performance or a loss of accuracy because encrypting the voiceprint gives you far less information to work with,” she said. “The beauty of this is that not only were they successful in applying biometric encryption to voice, but we also noticed that there was no degradation of the voice technology either.” One of the biggest markets for voice biometrics is the financial sector, where banks are offering more and more of their services via the telephone. Cavoukian said increased privacy measures for these voice-authenticated systems would be a perfect fit. “This encryption technology would be ideally suited for sensitive tasks such as banking, checking your market account, or trading over the phone,” she said. “This is really just the beginning of the many possibilities I see for biometric technology.”
http://hsdailywire.com/single.php?id=5761





Identifying Manipulated Images

New tools that analyze the lighting in images help spot tampering.
Erica Naone

Photo-editing software gets more sophisticated all the time, allowing users to alter pictures in ways both fun and fraudulent. Last month, for example, a photo of Tibetan antelope roaming alongside a high-speed train was revealed to be a fake, according to the Wall Street Journal, after having been published by China's state-run news agency. Researchers are working on a variety of digital forensics tools, including those that analyze the lighting in an image, in hopes of making it easier to catch such manipulations.

Tools that analyze lighting are particularly useful because "lighting is hard to fake" without leaving a trace, says Micah Kimo Johnson, a researcher in the brain- and cognitive-sciences department at MIT, whose work includes designing tools for digital forensics. As a result, even frauds that look good to the naked eye are likely to contain inconsistencies that can be picked up by software.

Many fraudulent images are created by combining parts of two or more photographs into a single image. When the parts are combined, the combination can sometimes be spotted by variations in the lighting conditions within the image. An observant person might notice such variations, Johnson says; however, "people are pretty insensitive to lighting." Software tools are useful, he says, because they can help quantify lighting irregularities--they can give solid information during evaluations of images submitted as evidence in court, for example--and because they can analyze more complicated lighting conditions than the human eye can. Johnson notes that in many indoor environments, there are dozens of light sources, including lightbulbs and windows. Each light source contributes to the complexity of the overall lighting in the image.

Johnson's tool, which requires an expert user, works by modeling the lighting in the image based on clues garnered from various surfaces within the image. (It works best for images that contain surfaces of a fairly uniform color.) The user indicates the surface he wants to consider, and the program returns a set of coefficients for a complex equation that represents the surrounding lighting environment as a whole. That set of numbers can then be compared with results from other surfaces in the image. If the results fall outside a certain variance, the user can flag the image as possibly manipulated.
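As a rough stand-in for what such a tool computes (Johnson's actual model is a richer spherical-harmonic representation), one can fit a first-order lighting model per surface and compare the fitted coefficients. The normals, light values, and tolerances below are invented for illustration.

```python
# Model the lighting on a surface as I = a + L.n (ambient term plus one
# directional light acting on the surface normal n), fit the coefficients
# [a, Lx, Ly, Lz] per surface by least squares, and compare surfaces.

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_lighting(normals, intensities):
    """Least-squares fit of [ambient, Lx, Ly, Lz] via the normal equations."""
    rows = [[1.0] + list(n) for n in normals]
    RtR = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    RtI = [sum(r[i] * I for r, I in zip(rows, intensities)) for i in range(4)]
    return solve(RtR, RtI)

def render(normals, coeffs):
    a, lx, ly, lz = coeffs
    return [a + lx * n[0] + ly * n[1] + lz * n[2] for n in normals]

# Two surfaces lit by the same environment should yield near-identical
# coefficients; a spliced-in region lit from elsewhere would not.
light = [0.2, 0.7, 0.1, 0.6]   # ambient + direction, chosen arbitrarily
normals_a = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.6, 0.8, 0), (0, 0.6, 0.8)]
normals_b = [(0.8, 0, 0.6), (0, 1, 0), (1, 0, 0), (0, 0, 1), (0.6, 0, 0.8)]
coeffs_a = fit_lighting(normals_a, render(normals_a, light))
coeffs_b = fit_lighting(normals_b, render(normals_b, light))
print(all(abs(x - y) < 1e-6 for x, y in zip(coeffs_a, coeffs_b)))  # True
```

The "variance" check the article mentions corresponds to the final comparison: if coefficient vectors from two surfaces disagree by more than some tolerance, the image is flagged.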

Hany Farid, a professor of computer science at Dartmouth College, who collaborated with Johnson in designing the tool and is a leader in the field of digital forensics, says that “for tampering, there’s no silver bullet.” Different manipulations will be spotted by different tools, he points out. As a result, Farid says, there's a need for a variety of tools that can help experts detect manipulated images and can give a solid rationale for why those images have been flagged.

Neal Krawetz, who owns a computer consulting firm called Hacker Factor, presented his own image-analysis tools last month at the Black Hat 2008 conference in Washington, DC. Among his tools was one that looks for the light direction in an image. The tool focuses on an individual pixel and finds the lightest of the surrounding pixels. It assumes that light is coming from that direction, and it processes the image according to that assumption, color-coding it based on light sources. While the results are noisy, Krawetz says, they can be used to spot disparities in lighting. He says that his tool, which has not been peer-reviewed, is meant as an aid for average people who want to consider whether an image has been manipulated--for example, people curious about content that they find online.
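The per-pixel idea Krawetz describes can be sketched directly: for each interior pixel, take the offset of its brightest neighbor as the local light direction, then summarize the directions to see whether one region disagrees with the rest. The grid and the summary step below are invented for illustration.

```python
# Offsets (dy, dx) of the eight neighbors of a pixel.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def light_directions(image):
    """Map each interior pixel to the offset of its brightest neighbor,
    treated as the direction the light is coming from."""
    h, w = len(image), len(image[0])
    dirs = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dy, dx = max(NEIGHBORS, key=lambda d: image[y + d[0]][x + d[1]])
            dirs[(y, x)] = (dy, dx)
    return dirs

# A gradient that brightens toward the right (with a slight downward tilt
# to break ties): every interior pixel should point down-right.
image = [[10 * x + y for x in range(5)] for y in range(5)]
dirs = light_directions(image)
print(set(dirs.values()))  # {(1, 1)}
```

On a real photograph the per-pixel directions are noisy, as Krawetz notes; the signal comes from color-coding or tallying them per region and looking for a patch whose dominant direction disagrees with the rest of the image.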

Cynthia Baron, associate director of digital media programs at Northeastern University and author of a book on digital forensics, is familiar with both Krawetz's and Farid's work. She says that digital forensics is a new enough field of research that even the best tools are still some distance away from being helpful to a general user. In the meantime, she says, "it helps to be on the alert." Baron notes that, while sophisticated users could make fraudulent images that would evade detection by the available tools, many manipulations aren't very sophisticated. "It's amazing to me, some of the things that make their way onto the Web and that people believe are real," she says. "Many of the things that software can point out, you can see with the naked eye, but you don't notice it."

Johnson says that he sees a need for tools that a news agency, for example, could use to quickly perform a dozen basic checks on an image to look for fraud. While it might not catch all tampering, he says, such a tool would be an important step, and it could work "like an initial spam filter." As part of developing that type of tool, he says, work needs to be done on creating better interfaces for existing tools that would make them accessible to a general audience.
http://www.technologyreview.com/Infotech/20423/?a=f





Pleasing Google's Tech-Savvy Staff

Information Officer Finds Security in Gadget Freedom of Choice
Vauhini Vara

How do you run the information-technology department at a company whose employees are considered among the world's most tech-savvy?

Douglas Merrill, Google Inc.'s chief information officer, is charged with answering that question. His job is to give Google workers the technology they need, and to keep them safe -- without imposing too many restrictions on how they do their job. So the 37-year-old has taken an unorthodox approach.

Unlike many IT departments that try to control the technology their workers use, Mr. Merrill's group lets Google employees download software on their own, choose between several types of computers and operating systems, and use internal software built by the company's engineers. Lately, he has also spent time evangelizing to outside clients about Google's own enterprise-software products -- such as Google Apps, an enterprise version of Google's Web-based services including email, word processing and a calendar.

Mr. Merrill, who has surfer-length hair and follows a T-shirt dress code, studied social and political organization at the University of Tulsa in Tulsa, Okla., and then went on to earn master's and doctorate degrees in psychology from Princeton University. His education in IT came largely from jobs as an information scientist at RAND Corp., senior manager at Price Waterhouse and senior vice president at Charles Schwab & Co. He joined Google in late 2003.

We sat down with Mr. Merrill to talk about Google's approach to IT. Excerpts:

The Wall Street Journal: What's the structure of the IT organization at Google?

Mr. Merrill: We're a decentralized technology organization, in that almost everyone at Google is some type of technologist. At most organizations, technology is done by one organization, and is very locked-down and very standardized. You don't have the freedom to do anything. Google's model is choice. We let employees choose from a bunch of different machines and different operating systems, and [my support group] supports all of them. It's a little bit less cost-efficient -- but on the other hand, I get slightly more productivity from my [Google's] employees.

WSJ: How do you support all of those different options effectively?

Mr. Merrill: We offer a lot more self-service. For example, let's say you want a new application to do something. You could take your laptop to a tech stop [areas in Google offices where workers can get technical support], but you can also go to an internal Web site where you download it and install the software. We allow all users to download software for themselves.

WSJ: Isn't that a security risk?

Mr. Merrill: The traditional security model is to try to tightly lock down endpoints [like computers and smartphones themselves], and it makes people sleep better at night, but it doesn't actually give them security. We put security into the infrastructure. We have antivirus and antispyware running on people's machines, but we also have those things on our mail server. We have programs in our infrastructure to watch for strange behavior. This means I don't have to worry about the endpoint as much. The traditional security model didn't really work. We had to find a new one.

WSJ: You depend in large part on open-source software or software that's built internally. What are some examples? What are the benefits?

Mr. Merrill: We do buy software where it makes sense to -- for example, we have a general ledger [accounting software] from Oracle; Oracle did a good job. Where it makes more sense to buy, we'll buy; where it makes more sense to build our own, we'll build. An example: Our [customer-relationship management] software is tightly integrated with our ad system, so we had to build our own.

We also believe there should be competition -- for instance, in operating systems, because different operating systems do different things well. We run search off of Linux. We run the Summer of Code where we pay college students to work on open-source projects that they think are useful.

WSJ: What's driving the "consumerization" of tech in the enterprise, where companies are borrowing tech ideas from the consumer Internet?

Mr. Merrill: Fifteen years ago, enterprise technology was higher-quality than consumer technology. That's not true anymore. It used to be that you used enterprise technology because you wanted uptime, security and speed. None of those things are as good in enterprise software anymore [as they are in some consumer software]. The biggest thing to ask is, "When consumer software is useful, how can I use it to get costs out of my environment?"

Google Apps is hosted on my infrastructure, and [the Premier Edition] costs roughly $50 a seat. You can go from an average of 50 megabytes of [email] storage to 10 gigabytes and more. There's better response time, you can reach email from anywhere in the world, and it's more financially effective.

WSJ: When you make that pitch to other CIOs, what are they most skeptical about?

Mr. Merrill: When I talk to Fortune 100 CIOs, they want to understand, "What is your security model? Is it really as reliable? What's the catch?"

The answer is, I had to build this massive infrastructure to run Google, so adding all the enterprise data isn't a big deal. I already had to build security standards because search logs are really private. Very few [Google employees] have access to consumer data, [and those who do] have to go through background checks. We have a rich relationship with the security community -- so when people find problems, they tell us. We have more than 150 security engineers who do nothing but security. We don't have a security priesthood: Every engineer is trained. We use automated tools that check every engineer's code.

We're able to invest in information security in a way that most people aren't. We did it because of search. In some sense, Google Apps is just a byproduct.
http://online.wsj.com/article/SB120578961450043169.html?mod=googlenews_wsj





U.S. Adapts Cold-War Idea to Fight Terrorists
Eric Schmitt and Thom Shanker

In the days immediately after the attacks of Sept. 11, 2001, members of President Bush’s war cabinet declared that it would be impossible to deter the most fervent extremists from carrying out even more deadly terrorist missions with biological, chemical or nuclear weapons.

Since then, however, administration, military and intelligence officials assigned to counterterrorism have begun to change their view. After piecing together a more nuanced portrait of terrorist organizations, they say there is reason to believe that a combination of efforts could in fact establish something akin to the posture of deterrence, the strategy that helped protect the United States from a Soviet nuclear attack during the cold war.

Interviews with more than two dozen senior officials involved in the effort provided the outlines of previously unreported missions to mute Al Qaeda’s message, turn the jihadi movement’s own weaknesses against it and illuminate Al Qaeda’s errors whenever possible.

A primary focus has become cyberspace, which is the global safe haven of terrorist networks. To counter efforts by terrorists to plot attacks, raise money and recruit new members on the Internet, the government has mounted a secret campaign to plant bogus e-mail messages and Web site postings, with the intent to sow confusion, dissent and distrust among militant organizations, officials confirm.

At the same time, American diplomats are quietly working behind the scenes with Middle Eastern partners to amplify the speeches and writings of prominent Islamic clerics who are renouncing terrorist violence.

At the local level, the authorities are experimenting with new ways to keep potential terrorists off guard.

In New York City, as many as 100 police officers in squad cars from every precinct converge twice daily at randomly selected times and at randomly selected sites, like Times Square or the financial district, to rehearse their response to a terrorist attack. City police officials say the operations are believed to be a crucial tactic to keep extremists guessing as to when and where a large police presence may materialize at any hour. “What we’ve developed since 9/11, in six or seven years, is a better understanding of the support that is necessary for terrorists, the network which provides that support, whether it’s financial or material or expertise,” said Michael E. Leiter, acting director of the National Counterterrorism Center.

“We’ve now begun to develop more sophisticated thoughts about deterrence looking at each one of those individually,” Mr. Leiter said in an interview. “Terrorists don’t operate in a vacuum.”

In some ways, government officials acknowledge, the effort represents a second-best solution. Their preferred way to combat terrorism remains to capture or kill extremists, and the new emphasis on deterrence in some ways amounts to attaching a new label to old tools.

“There is one key question that no one can answer: How much disruption does it take to give you the effect of deterrence?” said Michael Levi, a fellow at the Council on Foreign Relations and the author of a new book, “On Nuclear Terrorism.”

The New Deterrence

The emerging belief that terrorists may be subject to a new form of deterrence is reflected in two of the nation’s central strategy documents.

The 2002 National Security Strategy, signed by the president one year after the Sept. 11 attacks, stated flatly that “traditional concepts of deterrence will not work against a terrorist enemy whose avowed tactics are wanton destruction and the targeting of innocents.”

Four years later, however, the National Strategy for Combating Terrorism concluded: “A new deterrence calculus combines the need to deter terrorists and supporters from contemplating a W.M.D. attack and, failing that, to dissuade them from actually conducting an attack.”

For obvious reasons, it is harder to deter terrorists than it was to deter a Soviet attack.

Terrorists offer no obvious targets for American retaliation in the way that Soviet cities, factories, military bases and silos did under the cold-war deterrence doctrine. And it is far harder to pinpoint the location of a terrorist group’s leaders than it was to identify the Kremlin offices of the Politburo bosses, making it all but impossible to deter attacks by credibly threatening a retaliatory attack.

But over the six and a half years since the Sept. 11 attacks, many terrorist leaders, including Osama bin Laden and his deputy, Ayman al-Zawahri, have successfully evaded capture, and American officials say they now recognize that threats to kill terrorist leaders may never be enough to keep America safe.

So American officials have spent the last several years trying to identify other types of “territory” that extremists hold dear, and they say they believe that one important aspect may be the terrorists’ reputation and credibility with Muslims.

Under this theory, if the seeds of doubt can be planted in the mind of Al Qaeda’s strategic leadership that an attack would be viewed as a shameful murder of innocents — or, even more effectively, that it would be an embarrassing failure — then the order may not be given.

Senior officials acknowledge that it is difficult to prove what role these new tactics and strategies have played in thwarting plots or deterring Al Qaeda from attacking. Senior officials say there have been several successes using the new approaches, but many involve highly classified technical programs, including the cyberoperations, that they declined to detail.

They did point to some older and now publicized examples that suggest that their efforts are moving in the right direction.

George J. Tenet, the former director of the Central Intelligence Agency, wrote in his autobiography that the authorities were concerned that Qaeda operatives had made plans in 2003 to attack the New York City subway using cyanide devices.

Mr. Zawahri reportedly called off the plot because he feared that it “was not sufficiently inspiring to serve Al Qaeda’s ambitions,” and would be viewed as a pale, even humiliating, follow-up to the 9/11 attacks.

And in 2002, Iyman Faris, a naturalized American citizen from Kashmir, began casing the Brooklyn Bridge to plan an attack and communicated with Qaeda leaders in Pakistan via coded messages about using a blowtorch to sever the suspension cables.

But by early 2003, Mr. Faris sent a message to his confederates saying that “the weather is too hot.” American officials said that meant Mr. Faris feared that the plot was unlikely to succeed — apparently because of increased security.

“We made a very visible presence there and that may have contributed to it,” said Paul J. Browne, the New York City Police Department’s chief spokesman. “Deterrence is part and parcel of our entire effort.”

Disrupting Cyberprojects

Terrorists hold little or no terrain, except on the Web. “Al Qaeda and other terrorists’ center of gravity lies in the information domain, and it is there that we must engage it,” said Dell L. Dailey, the State Department’s counterterrorism chief.

Some of the government’s most secretive counterterrorism efforts involve disrupting terrorists’ cyberoperations. In Iraq, Afghanistan and Pakistan, specially trained teams have recovered computer hard drives used by terrorists and are turning the terrorists’ tools against them.

“If you can learn something about whatever is on those hard drives, whatever that information might be, you could instill doubt on their part by just countermessaging whatever it is they said they wanted to do or planned to do,” said Brig. Gen. Mark O. Schissler, director of cyberoperations for the Air Force and a former deputy director of the antiterrorism office for the Joint Chiefs of Staff.

Since terrorists feel safe using the Internet to spread ideology and gather recruits, General Schissler added, “you may be able to interfere with some of that, interrupt some of that.”

“You can also post messages to the opposite of that,” he added.

Other American efforts are aimed at discrediting Qaeda operations, including the decision to release seized videotapes showing members of Al Qaeda in Mesopotamia, a largely Iraqi group with some foreign leaders, training children to kidnap and kill, as well as a lengthy letter said to have been written by another terrorist leader that describes the organization as weak and plagued by poor morale.

Dissuading Militants

Even as security and intelligence forces seek to disrupt terrorist operations, counterterrorism specialists are examining ways to dissuade insurgents from even considering an attack with unconventional weapons. They are looking at aspects of the militants’ culture, families or religion, to undermine the rhetoric of terrorist leaders.

For example, the government is seeking ways to amplify the voices of respected religious leaders who warn that suicide bombers will not enjoy the heavenly delights promised by terrorist literature, and that their families will be dishonored by such attacks. Those efforts are aimed at undermining a terrorist’s will.

“I’ve got to figure out what does dissuade you,” said Lt. Gen. John F. Sattler, the Joint Chiefs’ director of strategic plans and policy. “What is your center of gravity that we can go at? The goal you set won’t be achieved, or you will be discredited and lose face with the rest of the Muslim world or radical extremism that you signed up for.”

Efforts are also under way to persuade Muslims not to support terrorists. It is a delicate campaign that American officials are trying to promote and amplify — but without leaving telltale American fingerprints that could undermine the effort in the Muslim world. Senior Bush administration officials point to several promising developments.

Saudi Arabia’s top cleric, Grand Mufti Sheik Abdul Aziz al-Asheik, gave a speech last October warning Saudis not to join unauthorized jihadist activities, a statement directed mainly at those considering going to Iraq to fight the American-led forces.

And Abdul-Aziz el-Sherif, a top leader of the armed Egyptian movement Islamic Jihad and a longtime associate of Mr. Zawahri, the second-ranking Qaeda official, has just completed a book that renounces violent jihad on legal and religious grounds.

Such dissents are serving to widen rifts between Qaeda leaders and some former loyal backers, Western and Middle Eastern diplomats say.

“Many terrorists value the perception of popular or theological legitimacy for their actions,” said Stephen J. Hadley, Mr. Bush’s national security adviser. “By encouraging debate about the moral legitimacy of using weapons of mass destruction, we can try to affect the strategic calculus of the terrorists.”

Denying Support

As the top Pentagon policy maker for special operations, Michael G. Vickers creates strategies for combating terrorism with specialized military forces, as well as for countering the proliferation of nuclear, biological or chemical weapons.

Much of his planning is old school: how should the military’s most elite combat teams capture and kill terrorists? But with each passing day, more of his time is spent in the new world of terrorist deterrence theory, trying to figure out how to prevent attacks by persuading terrorist support networks — those who enable terrorists to operate — to refuse any kind of assistance to stateless agents of extremism.

“Obviously, hard-core terrorists will be the hardest to deter,” Mr. Vickers said. “But if we can deter the support network — recruiters, financial supporters, local security providers and states who provide sanctuary — then we can start achieving a deterrent effect on the whole terrorist network and constrain terrorists’ ability to operate.

“We have not deterred terrorists from their intention to do us great harm,” Mr. Vickers said, “but by constraining their means and taking away various tools, we approach the overall deterrent effect we want.”

Much effort is being spent on perfecting technical systems that can identify the source of unconventional weapons or their components regardless of where they are found — and letting nations around the world know the United States has this ability.

President Bush has declared that the United States will hold “fully accountable” any nation that shares nuclear weapons with another state or terrorists.

Rear Adm. William P. Loeffler, deputy director of the Center for Combating Weapons of Mass Destruction at the military’s Strategic Command, said Mr. Bush’s declaration meant that those who might supply arms or components to terrorists were just as accountable as those who ordered and carried out an attack.

It is, the admiral said, a system of “attribution as deterrence.”
http://www.nytimes.com/2008/03/18/washington/18terror.html?hp





Wikileaks Releases Early Atomic Bomb Diagram
anonymous

Wikileaks has released a diagram of the first atomic weapon, as used in the Trinity test and subsequently exploded over the Japanese city of Nagasaki, together with an extremely interesting scientific analysis. Wikileaks has not been able to fault the document or find reference to it elsewhere. Given the high quality of other Wikileaks submissions, the document may be what it purports to be, or it may be a sophisticated intelligence agency fraud, designed to mislead the atomic weapons development programs of countries like Iran. The neutron initiator is particularly novel. "When polonium is crushed onto beryllium by explosion, reaction occurs between polonium alpha emissions and beryllium leading to Carbon-12 & 1 neutron. This, in practice, would lead to a predictable neutron flux, sufficient to set off device."
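The initiator chemistry quoted above is the standard beryllium (α,n) reaction. As a sketch of the nuclear bookkeeping, assuming the polonium isotope is Po-210 (the usual initiator choice, not stated in the document), the two steps balance as:

```latex
\[
{}^{210}_{84}\mathrm{Po} \;\rightarrow\; {}^{206}_{82}\mathrm{Pb} + {}^{4}_{2}\alpha,
\qquad
{}^{9}_{4}\mathrm{Be} + {}^{4}_{2}\alpha \;\rightarrow\; {}^{12}_{6}\mathrm{C} + {}^{1}_{0}n
\]
```

Mass numbers (9 + 4 = 12 + 1) and charges (4 + 2 = 6 + 0) balance in the second reaction, which is why crushing the polonium onto beryllium yields the predictable neutron flux the document describes.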
http://hardware.slashdot.org/hardwar.../1345226.shtml





Milestones

The ICBM Turns 50

A cheerful note on a grim anniversary: They still haven't been fired.
Tim Cavanaugh

To the voluminous list of ironies that attended the Cold War doctrine of mutually assured destruction, we can add one more. On its 50th birthday, the intercontinental ballistic missile, that once-commanding symbol of the apocalypse, has become a national security underdog, a defense system whose future is uncertain, whose ranks are dwindling and whose utility in the 21st century is in serious question. That might gladden aging peaceniks whose Volvos sported "Nuclear weapons: May they rust in peace" bumper stickers during the Reagan era, but these days hawks and doves are equally likely to regard the ICBM with suspicion.

Consider the numbers. From a 1969 peak of 1,054, the Air Force now fields 450 missiles. Within the last three years the United States has retired 100 ICBMs, including the entire run of Peacekeepers, which began life as the controversial "MX" missile in the '70s. Mighty Vandenberg Air Force Base, where the first nuclear-tipped Atlas rocket facilities were built in 1958, lives on as a spaceport and missile testing facility, but today 22 square miles of mostly undeveloped coastal land in Santa Barbara County look more like a lost opportunity in real estate than an urgent military asset. The last Titan II rocket (decommissioned from missile duty in 1987) took off from Vandenberg in 2003, carrying a payload for the Defense Meteorological Satellite Program; the three-stage Minuteman (1962- ) is now the only land-based ICBM in the U.S. arsenal. Much of the action in America's ongoing wars is conducted by unmanned aerial vehicles, and the Air Force is engaged in various great debates about next-generation weapons, including the very interesting question of whether piloted fighters and bombers have any future. How can the ICBM help but seem like the last Hula Hoop in the age of the RipStik?

During a recent visit to Vandenberg to help mark the semi-centennial of nuclear-tipped missiles, Maj. Gen. Thomas F. Deppe made a compelling case for the ICBM. Wearing boots and digital camouflage and speaking without notes or coffee in a windowless office, the burly vice commander of Air Force Space Command at Colorado's Peterson Air Force Base acknowledged the waning of the fleet but pointed out that the ICBM remains a vital deterrent, at least to clearly delineated state-to-state war: "The beauty of the ICBM is that it tremendously complicates matters for any adversary attacking this country."

Is that true, though? After all, the nuclear umbrella doesn't seem to have complicated the first foreign assault on U.S. soil of the 21st century. But Deppe, who began his Air Force career as an enlisted instrumentation technician in 1967 and has worked in missiles for most of his adult life, points not to the attacks that occurred on Sept. 11, 2001, but to the many that didn't occur in the 50 years before that. "The lesson of the Cold War is that strategic deterrence works," Deppe said. "There are a number of nations, and unfortunately that number is on the rise, that are developing nuclear capability, that have ballistic missile capability that can reach this country. The question of deterrence, and how much is enough, goes back to my earliest years in the Air Force. And really, it's impossible to measure how much is enough. You'll know if you don't have enough, but you'll never know if you have too much. Is 450 the right number? Apparently it is, because we're deterring aggressors. But is 449 not enough?"

Don't expect to find out any time soon. The Air Force is completing a $7-billion upgrade of its Minuteman assets, a "nosecone to nozzle" spiffing up that will keep the missile in place until about 2030. What will come after that? Strategic Command has been considering the possibility of conventional ICBMs for years. In planning for an eventual Minuteman replacement, the Air Force is looking for smarter, more accurate delivery systems, but it is not ignoring the continuing value of being able to deliver nasty surprises from outer space. "The ICBM remains the single most prompt weapon we have," Deppe noted. "It can reach out and touch somebody anywhere in the world in 45 minutes."

Which lends one cheerful note to this grim anniversary: In all these years, the things still haven't been used. Unlike carrier fleets or rapid-deployment forces, the ICBM was not about power projection or foreign intervention but about persuading a lethal adversary not to attack the U.S. The strangest possible outcome of mutually assured destruction was the one that came to pass: Two political and economic systems competed without coming to blows, and the better system prevailed. That's no less astounding now than it was in the '90s -- or for that matter the '50s, when those missileers first went underground with their little keys, awaiting orders that never came.
http://www.latimes.com/news/printedi...,7443763.story





Far Out! Peace Symbol Turns 50



A new book — out in April — traces the origin and history of the peace sign

Baby Boomers may recall it through a swirl of tear gas, scrawled on walls, on signs in marches and silent sit-ins, or on the helmet covers of weary Vietnam soldiers.

The peace sign, which turns 50 in April, was introduced in a calmer Britain in 1958 to promote nuclear disarmament, and spread fast as times got tense.

Since its inception, it has been revered as a sign of our better angels and cursed as the "footprint of the American chicken."

The symbol that helped define a generation is less evident now, but it is far from forgotten. After what it went through, how could it be?

National Geographic Books is out with "Peace: The Biography of a Symbol," by Ken Kolsbun and Michael Sweeney, which traces the simple symbol from its scratched-out origins based on the semaphore flag positions for N and D (nuclear disarmament) to the influence it had, and retains, in social movements.



While the book details how the symbol came to be and how it spread, it focuses more on the backdrop of the peace movement generally, from its antecedents in the McCarthyism of the 1950s to nuclear proliferation, Vietnam, Kent State and the 1968 Chicago Democratic Convention to its later promotions of other causes.

It has become "a rallying cry for almost any group working for social change," the authors write.

The book is enhanced by numerous photos, some chillingly familiar, some simply nostalgic.

Who can forget the frantic teenager kneeling over the fallen student at Kent State University? Or the student sticking a flower in the barrel of a National Guard rifle? Or the whaling ship bearing down on a Greenpeace raft? Or Woodstock?

The symbol itself was created by a British pacifist textile designer, Gerald Holtom, who initially considered using a cross but got an icy reception from some of the churches he sought as allies.

So on a wet, chilly Good Friday — April 4, 1958 — the symbol as we know it made its debut in London's Trafalgar Square where thousands gathered to support a "ban the bomb" movement and to make a long march to Aldermaston, where atomic weapons research was being done.

While Holtom designed the symbol, the U.S. Patent and Trademark Office ruled in 1970 that it is in the public domain. It was quickly commercialized, showing up, among other places, on packages of Lucky Strike cigarettes, but also on a 1999 postage stamp after a public vote to pick 15 commemoratives to honor the 1960s.

Kolsbun is a jack of many trades that include longtime and enthusiastic peace activism, a propensity that shows through. Sweeney is a professor of journalism at Utah State University.



If you recall the mood and times of the '60s and 1970s, the book will take you back. Depending on your level of enthusiasm then, you might imagine a whiff of tear gas. Or recall the better times of the 1967 Summer of Love, which a lot of GIs remember another way.

Holtom clung to his pacifist beliefs to the end, which came on Sept. 18, 1985, at age 71. His will requested that his grave marker be carved with two of his peace symbols, inverted.

For reasons unclear, the authors write, they aren't inverted. They're exactly the way he made them.

Maybe that's why.
http://today.msnbc.msn.com/id/23677930/





Arthur C. Clarke, 90, Science Fiction Writer, Dies
Gerald Jonas



Arthur C. Clarke, a writer whose seamless blend of scientific expertise and poetic imagination helped usher in the space age, died early Wednesday in Colombo, Sri Lanka, where he had lived since 1956. He was 90.

Rohan de Silva, an aide, confirmed the death and said Mr. Clarke had been experiencing breathing problems, The Associated Press reported. He had suffered from post-polio syndrome for the last two decades.

The author of almost 100 books, Mr. Clarke was an ardent promoter of the idea that humanity’s destiny lay beyond the confines of Earth. It was a vision served most vividly by “2001: A Space Odyssey,” the classic 1968 science-fiction film he created with the director Stanley Kubrick and the novel of the same title that he wrote as part of the project.

His work was also prophetic: his detailed forecast of telecommunications satellites in 1945 came more than a decade before the first orbital rocket flight.

Other early advocates of a space program argued that it would pay for itself by jump-starting new technology. Mr. Clarke set his sights higher. Borrowing a phrase from William James, he suggested that exploring the solar system could serve as the “moral equivalent of war,” giving an outlet to energies that might otherwise lead to nuclear holocaust.

Mr. Clarke’s influence on public attitudes toward space was acknowledged by American astronauts and Russian cosmonauts, by scientists like the astronomer Carl Sagan and by movie and television producers. Gene Roddenberry credited Mr. Clarke’s writings with giving him courage to pursue his “Star Trek” project in the face of indifference, even ridicule, from television executives.

In his later years, after settling in Ceylon (now Sri Lanka), Mr. Clarke continued to bask in worldwide acclaim as both a scientific sage and the pre-eminent science fiction writer of the 20th century. In 1998, he was knighted by Queen Elizabeth II.

Mr. Clarke played down his success in foretelling a globe-spanning network of communications satellites. “No one can predict the future,” he always maintained. But as a science fiction writer he couldn’t resist drawing up timelines for what he called “possible futures.” Far from displaying uncanny prescience, these conjectures mainly demonstrated his lifelong, and often disappointed, optimism about the peaceful uses of technology — from his calculation in 1945 that atomic-fueled rockets could be no more than 20 years away to his conviction in 1999 that “clean, safe power” from “cold fusion” would be commercially available in the first years of the new millennium.

Popularizer of Science

Mr. Clarke was well aware of the importance of his role as science spokesman to the general population: “Most technological achievements were preceded by people writing and imagining them,” he noted. “I’m sure we would not have had men on the Moon,” he added, if it had not been for H. G. Wells and Jules Verne. “I’m rather proud of the fact that I know several astronauts who became astronauts through reading my books.”

Arthur Charles Clarke was born on Dec. 16, 1917, in the seaside town of Minehead, Somerset, England. His father was a farmer; his mother a post office telegrapher. The eldest of four children, he was educated as a scholarship student at a secondary school in the nearby town of Taunton. He remembered a number of incidents in early childhood that awakened his scientific imagination: exploratory rambles along the Somerset shoreline, with its “wonderland of rock pools”; a card from a pack of cigarettes that his father showed him, with a picture of a dinosaur; the gift of a Meccano set, a British construction toy similar to American Erector Sets.

He also spent time, he said, “mapping the moon” through a telescope he constructed himself out of “a cardboard tube and a couple of lenses.” But the formative event of his childhood was his discovery, at age 13 — the year his father died — of a copy of Astounding Stories of Super-Science, then the leading American science fiction magazine. He found its mix of boyish adventure and far-out (sometimes bogus) science intoxicating.

While still in school, he joined the newly formed British Interplanetary Society, a small band of sci-fi enthusiasts who held the controversial view that space travel was not only possible but could be achieved in the not-so-distant future. In 1937, a year after he moved to London to take a civil service job, he began writing his first science fiction novel, a story of the far, far future that was later published as “Against the Fall of Night” (1953).

Mr. Clarke spent World War II as an officer in the Royal Air Force. In 1943 he was assigned to work with a team of American scientist-engineers who had developed the first radar-controlled system for landing airplanes in bad weather. That experience led to Mr. Clarke’s only non-science fiction novel, “Glide Path” (1963). More important, it led in 1945 to a technical paper, published in the British journal Wireless World, establishing the feasibility of artificial satellites as relay stations for Earth-based communications.

The meat of the paper was a series of diagrams and equations showing that “space stations” parked in a circular orbit roughly 22,240 miles above the equator would exactly match the Earth’s rotation period of 24 hours. In such an orbit, a satellite would remain above the same spot on the ground, providing a “stationary” target for transmitted signals, which could then be retransmitted to wide swaths of territory below. This so-called geostationary orbit has been officially designated the Clarke Orbit by the International Astronomical Union.
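The arithmetic behind that paragraph follows from Kepler's third law: a satellite whose orbital period equals Earth's rotation period must sit at radius r = (GM·T²/4π²)^(1/3). A minimal sketch of the calculation (the constants below are standard reference values, not figures from the article):

```python
import math

# Standard values (assumed here, not taken from the article):
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.1              # sidereal day -- Earth's true rotation period, s
R_EQUATOR = 6378.137e3   # Earth's equatorial radius, m

# Kepler's third law solved for the orbital radius of a satellite
# whose period matches Earth's rotation.
r = (GM * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

altitude_km = (r - R_EQUATOR) / 1000.0
altitude_miles = altitude_km / 1.609344

print(f"altitude: {altitude_km:,.0f} km ({altitude_miles:,.0f} miles)")
```

This lands at roughly 35,800 km, about 22,240 miles, matching the figure Clarke used. Note that using the sidereal day (23 h 56 m) rather than the 24-hour solar day mentioned in the article is what gives the precise altitude; a satellite keeping station over one spot must match Earth's rotation relative to the stars, not the Sun.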

Decades later, Mr. Clarke called his Wireless World paper “the most important thing I ever wrote.” In a wry piece entitled “A Short Pre-History of Comsats, Or: How I Lost a Billion Dollars in My Spare Time,” he claimed that a lawyer had dissuaded him from applying for a patent. The lawyer, he said, thought the notion of relaying signals from space was too far-fetched to be taken seriously.

But Mr. Clarke also acknowledged that nothing in his paper — from the notion of artificial satellites to the mathematics of the geostationary orbit — was new. His chief contribution was to clarify and publicize an idea whose time had almost come: it was a feat of consciousness-raising of the kind he would continue to excel at throughout his career.

A Fiction Career Is Born

The year 1945 also saw the start of Mr. Clarke’s career as a fiction writer. He sold a short story called “Rescue Party” to the same magazine — now re-titled Astounding Science Fiction — that had captured his imagination 15 years earlier.

For the next two years Mr. Clarke attended King’s College, London, on the British equivalent of a G.I. Bill scholarship, graduating in 1948 with first-class honors in physics and mathematics. But he continued to write and sell stories, and after a stint as assistant editor at the scientific journal Physics Abstracts, he decided he could support himself as a free-lance writer. Success came quickly. His primer on space flight, “The Exploration of Space,” became an American Book-of-the-Month Club selection.

Over the next two decades he wrote a series of nonfiction bestsellers as well as his best-known novels, including “Childhood’s End” (1953) and “2001: A Space Odyssey” (1968). For a scientifically trained writer whose optimism about technology seemed boundless, Mr. Clarke delighted in confronting his characters with obstacles they could not overcome without help from forces beyond their comprehension.

In “Childhood’s End,” a race of aliens who happen to look like devils imposes peace on an Earth torn by Cold War tensions. But the aliens’ real mission is to prepare humanity for the next stage of evolution. In an ending that is both heartbreakingly poignant and literally earth-shattering, Mr. Clarke suggests that mankind can escape its suicidal tendencies only by ceasing to be human.

“There was nothing left of Earth,” he wrote. “It had nourished them, through the fierce moments of their inconceivable metamorphosis, as the food stored in a grain of wheat feeds the infant plant while it climbs towards the Sun.”

The Cold War also forms the backdrop for “2001.” Its genesis was a short story called “The Sentinel,” first published in a science fiction magazine in 1951. It tells of an alien artifact found on the Moon, a little crystalline pyramid that explorers from Earth destroy while trying to open. One explorer realizes that the artifact was a kind of fail-safe beacon; in silencing it, human beings have signaled their existence to its far-off creators.

Enter Stanley Kubrick

In the spring of 1964, Stanley Kubrick, fresh from his triumph with “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb,” met Mr. Clarke in New York, and the two agreed to make the “proverbial really good science fiction movie” based on “The Sentinel.” This led to a four-year collaboration; Mr. Clarke wrote the novel and Mr. Kubrick produced and directed the film; they are jointly credited with the screenplay.

Many reviewers were puzzled by the film, especially the final scene in which an astronaut who has been transformed by aliens returns to orbit the Earth as a “Star-Child.” In the book he demonstrates his new-found powers by detonating from space the entire arsenal of Soviet and United States nuclear weapons. Like much of the plot, this denouement is not clear in the film, from which Mr. Kubrick cut most of the expository material.

As a fiction writer, Mr. Clarke was often criticized for failing to create fully realized characters. HAL, the mutinous computer in “2001,” is probably his most “human” creation: a self-satisfied know-it-all with a touching but misguided faith in his own infallibility.

If Mr. Clarke’s heroes are less than memorable, it’s also true that there are no out-and-out villains in his work; his characters are generally too busy struggling to make sense of an implacable universe to engage in petty schemes of dominance or revenge.

Mr. Clarke’s own relationship with machines was somewhat ambivalent. Although he held a driver’s license as a young man, he never drove a car. Yet he stayed in touch with the rest of the world from his home in Sri Lanka through an ever-expanding collection of up-to-date computers and communications accessories. And until his health declined, he was an expert scuba diver in the waters around Sri Lanka.

He first became interested in diving in the early 1950s, when he realized that he could find underwater, he said, something very close to the weightlessness of outer space. He settled permanently in Colombo, the capital of what was then Ceylon, in 1956. With a partner, he established a guided diving service for tourists and wrote vividly about his diving experiences in a number of books, beginning with “The Coast of Coral” (1956).

Of his scores of books, some, like “Childhood’s End,” have been in print continuously. His works have been translated into some 40 languages, and worldwide sales have been estimated at more than $25 million.

In 1962 he suffered a severe attack of polio. His apparently complete recovery was marked by a return to top form at his favorite sport, table tennis. But in 1984 he developed post-polio syndrome, a progressive condition characterized by muscle weakness and extreme fatigue. He spent the last years of his life in a wheelchair.

Clarke’s Three Laws

Among his legacies are Clarke’s Three Laws, provocative observations on science, science fiction and society that were published in his “Profiles of the Future” (1962):

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

“The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”

“Any sufficiently advanced technology is indistinguishable from magic.”

Along with Verne and Wells, Mr. Clarke said his greatest influences as a writer were Lord Dunsany, a British fantasist noted for his lyrical, if sometimes overblown, prose; Olaf Stapledon, a British philosopher who wrote vast speculative narratives that projected human evolution to the farthest reaches of space and time; and Herman Melville’s “Moby-Dick.”

While sharing his passions for space and the sea with a worldwide readership, Mr. Clarke kept his emotional life private. He was briefly married in 1953 to an American diving enthusiast named Marilyn Mayfield; they separated after a few months and were divorced in 1964, having had no children.

One of his closest relationships was with Leslie Ekanayake, a fellow diver in Sri Lanka, who died in a motorcycle accident in 1977. Mr. Clarke shared his home in Colombo with his friend’s brother, Hector, his partner in the diving business; Hector’s wife, Valerie; and their three daughters.

Mr. Clarke reveled in his fame. One whole room in his house — which he referred to as the Ego Chamber — was filled with photos and other memorabilia of his career, including pictures of him with Yuri Gagarin, the first man in space, and Neil Armstrong, the first man to walk on the moon.

Mr. Clarke’s reputation as a prophet of the space age rests on more than a few accurate predictions. His visions helped bring about the future he longed to see. His contributions to the space program were lauded by Charles Kohlhase, who planned NASA’s Cassini mission to Saturn and who said of Mr. Clarke, “When you dream what is possible, and add a knowledge of physics, you make it happen.”

At the time of his death he was working on another novel, “The Last Theorem,” Agence France-Presse reported. “‘The Last Theorem’ has taken a lot longer than I expected,” the agency quoted him as saying. “That could well be my last novel, but then I’ve said that before.”
http://www.nytimes.com/2008/03/19/books/19clarke.html



















Until next week,

- js.



















Current Week In Review





Recent WiRs -

March 15th, March 8th, March 1st, February 23rd

Jack Spratts' Week In Review is published every Friday. Submit letters, articles, press releases, questions and comments in plain text English to jackspratts (at) lycos (dot) com. Submission deadlines are Thursdays @ 1400 UTC. Please include contact info. The right to publish all remarks is reserved.


"The First Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public."
- Hugo Black