Wednesday, September 27, 2006

"Holy sh#t -- maybe biology doesn't suck!"


So said my brother David (MIT math major), on forwarding me this animation of the inner life of a cell.

Quantum Computing Since Democritus: Lecture 3

Gödel, Turing, and Friends. Another whole course compressed into one handwaving lecture. (This will be a recurring theme.)

Monday, September 25, 2006

Is there no other?


O Achilles of Arkansas, O bane of Foxes and Roves, O solitary warrior among Democrats: dasher of hopes, prince of platitudes, felatee of Jewesses, belated friend of Tutsis, toothless tiger of climate change, greatest of all living Americans: how shall we summon thee back?

Admissions unhooked

An anonymous indie-cinema-loving hermit friend from Amsterdam sends me an article in this week's Economist entitled "Poison Ivy: Not so much palaces of learning as bastions of privilege and hypocrisy" (unfortunately, only available to subscribers). The article is a summary of an excellent Wall Street Journal series by Daniel Golden (again, unfortunately, only available to subscribers), which I've been following with great interest. Golden has also put out a book about this topic, called The Price of Admission ("How America's Ruling Class Buys Its Way into Elite Colleges -- and Who Gets Left Outside the Gates"), which I just ordered from Amazon. In the meantime, I'll simply quote a few passages from the Economist piece:
Mr Golden shows that elite universities do everything in their power to admit the children of privilege. If they cannot get them in through the front door by relaxing their standards, then they smuggle them in through the back. No less than 60% of the places in elite universities are given to candidates who have some sort of extra "hook", from rich or alumni parents to "sporting prowess". The number of whites who benefit from this affirmative action is far greater than the number of blacks...

Most people think of black football and basketball stars when they hear about "sports scholarships". But there are also sports scholarships for rich white students who play preppie sports such as fencing, squash, sailing, riding, golf and, of course, lacrosse. The University of Virginia even has scholarships for polo-players, relatively few of whom come from the inner cities...


What is one to make of [Senate Majority Leader Bill] Frist, who opposes affirmative action for minorities while practising it for his own son?


Two groups of people overwhelmingly bear the burden of these policies -- Asian-Americans and poor whites. Asian-Americans are the "new Jews", held to higher standards (they need to score at least 50 points higher than non-Asians even to be in the game) and frequently stigmatised for their "characters" (Harvard evaluators persistently rated Asian-Americans below whites on "personal qualities"). When the University of California, Berkeley briefly considered introducing means-based affirmative action, it rejected the idea on the ground that "using poverty yields a lot of poor white kids and poor Asian kids".

The article ends with the hope that "America's money-addicted and legacy-loving universities can be shamed into returning to what ought to have been their guiding principle all along: admitting people to university on the basis of their intellectual ability."

I harped on this issue in one of my very first posts, almost a year ago. I don't know what else to say. If idealism won't goad us Americans (yes, I'm still an American) into overhauling our crooked, anti-intellectual admissions system, then maybe it will help to see just how absurd that system looks to the rest of the world.

Sunday, September 24, 2006

Mistake of the Week: "The Future Is In X"

One of the surest signs of the shnood is the portentous repetition of the following two slogans:
Biology will be the physics of the 21st century.

The future of the world is in China and India.
Let me translate for you:
You know the field of Darwin, Pasteur, and Mendel, the field that fills almost every page of Science and Nature, the field that gave rise to modern medicine and transformed the human condition over the last few centuries? Well, don't count it out entirely! This plucky newcomer among the sciences is due to make its mark. Another thing you shouldn't count out is the continent of Asia, which is situated next to Europe. Did you know that China, far more than a source of General Tso's Chicken, has been one of the centers of human civilization for 4,000 years? And did you know that Gandhi and Ramanujan both hailed from a spunky little country called India? It's true!
Let me offer my own counterslogans:
Biology will be the biology of the 21st century.

The future of China and India is in China and India, respectively.

Thursday, September 21, 2006

Only eight annoying questions to go

A month ago, I posed the following as the 10th most annoying question in quantum computing:
Given an n-qubit pure state, is there always a way to apply Hadamard gates to some subset of the qubits, so as to make all 2^n computational basis states have nonzero amplitudes?
Today Ashley Montanaro and Dan Shepherd of the University of Bristol sent me the answer, in a beautiful 4-page writeup that they were kind enough to let me post here. (The answer, as I expected, is yes.)

This is a clear advance in humankind's scientific knowledge, which is directly traceable to this blog. I am in a good mood today.

The obvious next question is to find an α>0 such that, for any n-qubit pure state, there's some way to apply Hadamards to a subset of the qubits so as to make all 2^n basis states have |amplitude| at least α. Clearly we can't do better than α=sin^n(π/8). Montanaro and Shepherd conjecture that this is tight.
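For the concretely-minded, here's a toy brute-force check of the quantitative question (my own illustration, not Montanaro and Shepherd's argument): for a small random state, it tries every subset of qubits, applies Hadamards to that subset, and reports the best achievable minimum |amplitude| next to the conjectured floor sin^n(π/8).

```python
# Toy brute-force sketch: maximize the smallest |amplitude| over all
# choices of a Hadamarded subset of qubits, for a random n-qubit state.
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def apply_hadamards(state, subset, n):
    """Apply H to the qubits listed in `subset` of an n-qubit state vector."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, H if q in subset else I2)
    return op @ state

def best_min_amplitude(state, n):
    """Maximize, over subsets S, the smallest |amplitude| after Hadamarding S."""
    best = 0.0
    for r in range(n + 1):
        for subset in itertools.combinations(range(n), r):
            best = max(best, np.abs(apply_hadamards(state, set(subset), n)).min())
    return best

n = 3
rng = np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
print("best achievable min |amplitude|:", best_min_amplitude(psi, n))
print("conjectured floor sin^n(pi/8):  ", np.sin(np.pi / 8) ** n)
```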

What's the motivation? If you have to ask...

Tuesday, September 19, 2006

Quantum Computing Since Democritus: Lecture 2

Cardinals, ordinals, and more. A whole math course compressed into one handwaving lecture, and a piping-hot story that's only a century old.

Monday, September 18, 2006

Yau strikes back

Along with his law firm. You can read his side of the Poincaré story at doctoryau.com.

(Hey, passing along press releases sent to me by law firms sure is easy! I wonder why more media outlets don't do exactly the same thing.)

Saturday, September 16, 2006

How to rig an election

My friend Alex Halderman is now after bigger fish than copy-"protected" music CD's. Watch this video, in which he, Ed Felten, and Ariel Feldman demonstrate how to rig a Diebold voting machine (and also watch Alex show off his lock-picking skills). Reading the group's paper, one becomes painfully aware of a yawning cultural divide between nerds and the rest of the world. Within the nerd universe, that voting machines need to have a verifiable paper trail, that they need to be open to inspection by researchers, etc., are points so obvious as to be scarcely worth stating. If a company (Diebold) refuses to take these most trivial of precautions, then even without a demonstration of the sort Alex et al. provide, the presumption must be that their machines are insecure. Now Alex et al. are trying to take what's obvious to nerds into a universe -- local election boards, the courts, etc. -- that operates by entirely different rules. Within this other universe, the burden is not on Diebold to prove its voting machines are secure; it's on Alex et al. to prove they're insecure. And even if they do prove they're insecure -- well, if it weren't for those pesky researchers telling the bad guys how to cheat, what would we have to worry about?

So, how does one bridge this divide? How does one explain the obvious to those who, were they capable of understanding it, would presumably have understood it already? I wish I had an easy answer, but I fear there's nothing to do but what Alex, Ed, and Ariel are doing already -- namely, fight with everything you've got.

Thursday, September 14, 2006

PHYS771 Quantum Computing Since Democritus

That, for better or worse, is the name of a course I'm teaching this semester at the University of Waterloo. I'm going to post all of the lecture notes online, so that you too can enjoy an e-learning cyber-experience in my virtual classroom, even if you live as far away as Toronto. I've already posted Lecture 1, "Atoms and the Void." Coming up next: Lecture 2.

Intellectual funnel cake

It's the Science Blog Carnival! Come see me hawking my wares, alongside Peter Woit, Dave Bacon, and dozens of other clowns.

Monday, September 11, 2006

Obligatory retrospective

I woke up at my normal time -- probably around 2PM -- in my room at Berkeley's International House, to find an avalanche of email: from a fellow grad student, urging everyone to check the news; from Christos Papadimitriou, reminding us that we have a community here, and communities can comfort; from Luca Trevisan, announcing that the class that he taught and I TA'ed would be canceled, since on a day like this it was impossible to think about algorithms. I then clicked over to news sites to find out what had happened.

After confirming that my friends and family were safe, I walked over to my office in Soda Hall, mostly to find people to talk to. Technically I had office hours for the algorithms class that afternoon, but I didn't expect students actually to come. Yet come they did: begging for hints on the problem set, asking what would and wouldn't be on the test, pointing to passages in the CLRS textbook that they didn't understand. I pored over their textbook, shaking my head in disbelief, glancing up every minute or so at the picture of the burning buildings on the computer screen.

That night there was a big memorial service in Sproul Plaza. When I arrived, a woman offered me a candle, which I took, and a man standing next to her offered me a flyer, which I also took. The flyer, which turned out to be from a socialist organization, sought to place the events of that morning in "context," describing the World Trade Center victims as "mostly white-collar executives and those who tried to save them."

After a few songs and eulogies, a woman got up to explain that, on this terrible day, what was really important was that we try to understand the root causes of violence -- namely poverty and despair -- and not use this tragedy as a pretext to start another war. The crowd thunderously applauded.

While the speeches continued, I got up and wandered off by myself in the direction of Bancroft Way. Much as I did the year before, when the area around Telegraph was festooned with Nader for President posters, I felt palpably that I wasn't living in an outcomes-based region of reality. The People's Republic of Berkeley was proving to be a staunch ally of the Oilmen's Oligarchy of Crawford, undermining the only sorts of opposition to it that had any possibility of succeeding.

I decided to forget about politics for a while and concentrate exclusively on research. I can't say I succeeded at this. But I did pass my prelim exam three days later (on September 14), and a few weeks afterward proved the quantum lower bound for the collision problem.


Note: Feel free to post your own retrospective in the comments section. Andris Ambainis has already done so.

Sunday, September 10, 2006

When modular arithmetic was a STOC result

So, it seems the arXiv is now so popular that even Leonhard Euler has contributed 25 papers, despite being dead since 1783. (Thanks to Ars Mathematica for this important news item, as well as for the hours of procrastination on my part that led to its rediscovery.) Since I'd long been curious about the mathematical research interests of the nonliving, I decided to check out Leonhard's most recent preprint, math.HO/0608467 ("Theorems on residues obtained by the division of powers"). The paper starts out slow: explaining in detail why, if a mod p is nonzero, then a^2 mod p, a^3 mod p, and so on are also nonzero. By the end, though, it's worked out most of the basics of modular arithmetic, enough (for example) to analyze RSA. Furthermore, the exposition, while "retro" in style, is sufficiently elegant that I might even recommend acceptance at a minor theory conference, even though the basic results have of course been known for like 200 years.
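If you want the paper's main facts in modern dress, here they are in a few lines (my own illustration; the prime p and base a are arbitrary choices): for a prime p and a number a not divisible by p, every power of a is a nonzero residue mod p, and a^(p-1) ≡ 1 (mod p), which is essentially the ingredient that RSA's correctness rests on.

```python
# Powers of a mod p never vanish, and a^(p-1) = 1 (mod p): Fermat's little theorem.
p, a = 11, 7
residues = [pow(a, k, p) for k in range(1, p)]
print(residues)                      # [7, 5, 2, 3, 10, 4, 6, 9, 8, 1]
assert 0 not in residues             # no power of a is divisible by p
assert pow(a, p - 1, p) == 1         # a^(p-1) = 1 (mod p)
```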

Oh -- you say that Mr. E's papers were as difficult and abstract for their time as Wiles's and Perelman's papers are for our own time? BULLSHIT. Reading the old master brings home the truth: that, for better and worse, math has gotten harder. Much, much harder. And we haven't gotten any smarter.

Friday, September 08, 2006

Reasons to believe II: quantum edition

At Greg Kuperberg's request, I've decided to follow my Ten Reasons To Believe P!=NP with...

Thirteen Reasons Why I'd Be Surprised If Quantum Computing Were Fundamentally Impossible

So that there's no question about exactly where I stand, I'll start out by repeating, for the ten billionth time, the Official Scott Aaronson Quantum Computing Position Statement.
  • It's entirely conceivable that quantum computing will turn out to be impossible for a fundamental reason.

  • This would be much more interesting than if it's possible, since it would overturn our most basic ideas about the physical world.

  • The only real way to find out is to try to build a quantum computer.

  • Such an effort seems to me at least as scientifically important as (say) the search for supersymmetry or the Higgs boson.

  • I have no idea -- none -- how far it will get in my lifetime.
I now offer thirteen arguments to support the above views.
  1. The Obvious Argument. Quantum mechanics has been the foundation for all non-gravitational physics since 1926. Hoping that it would "just go away" has been one of the most consistently losing strategies in the history of science. If physicists and engineers didn't take quantum mechanics seriously as a description of the world, they wouldn't have been able to invent the laser, transistor, or classical computer. For that matter, they wouldn't be able to explain why all the atoms in the universe don't instantly disintegrate. Now, if you start with quantum mechanics, and write down the model of computation that directly flows from it, what do you end up with? BQP: Bounded-Error Quantum Polynomial-Time.

  2. The Experimental Argument. Ten years ago, one wouldn't have been able to do much more than mount a general defense of quantum mechanics. But by now, liquid-NMR quantum computers have been built that not only factored 15 into 3 x 5 with small probability of error, but also searched 8-item databases. I've seen some of the machines that performed these staggering computational feats right here in Waterloo; they look like big-ass cylinders with the word "Bruker" on them. Seriously, while liquid-NMR (at least for now) doesn't seem to be scalable, there's been lots of recent work on solid-state NMR, photonics, and ion traps, all of which (if I'm not mistaken) are up to at least 3 qubits. While I don't think the experimentalists are anywhere close to succeeding, these are smart people who haven't been sitting on their asses (or if they have, then no doubt hard at work at a lab table or something).

  3. The Better-Shor-Than-More Argument. Why do skeptics always assume that, if quantum mechanics turns out to be only approximate, then whatever theory supersedes it will reinstate the Extended Church-Turing Thesis? Why isn't it just as likely, a priori, that the new theory would yield even more computational power than BQP? This isn't merely a logical point: to the extent that people have tried to propose serious alternatives to quantum mechanics (where "serious" means "agreeing with known experiments"), those alternatives often do involve more computational power than BQP.

  4. The Sure/Shor Argument. If you believe quantum mechanics is going to break down before nontrivial quantum computing becomes possible, then you must believe there's some point where it will break down -- some level of size, or complexity, or whatever, at which it will cease to be a useful description of the world. What is that point? In other words, where is the line -- possibly a fuzzy, asymptotic, resource-dependent line -- that puts the quantum states that have already been observed on one side, and the quantum states that arise in Shor's factoring algorithm on the other? In a paper I wrote three years ago, I called such a line a "Sure/Shor separator," and challenged skeptics to come up with some example of what it might be. I even tried to get the ball rolling by studying such separators myself. My idea was that having a Sure/Shor separator could motivate further research: once they knew where the "barrier" was, the experimentalists could set to work trying to cross it; then, if they succeeded, the skeptics could come back with a new barrier, and so on. Unfortunately, no skeptic has yet risen to the challenge. It's not hard to see why: if you start with the many-particle entangled states that have already been observed (for example, by the Zeilinger group and by Ghosh et al.) and then throw in a few closure properties, you quickly end up with -- well, the set of all quantum states. Coming up with a "reasonable" set of states that includes Sure states but doesn't include Shor states turns out to be an extremely hard problem.

  5. The Linearity Argument. In my experience, at least 70% of all objections to quantum computing boil down to the idea that a quantum computer would be a "souped-up analog computer" -- a machine that would store information not in voltage differences or the positions of pulleys, but instead in exponentially-small amplitudes. From this idea it follows readily that, just as "old-school" analog computers have always run up against scalability problems, so too will quantum computers. To see why the analogy fails, think about classical probabilities. If you flip a coin a thousand times, you'll end up with a probability distribution over outcomes that requires real numbers of order 2^-1000 to describe. Does it follow from this that classical probabilistic computers are really analog computers in disguise, or that classical probability theory must be a mere approximation to some deeper, underlying theory? Of course not -- for, unlike voltages or pulleys, probabilities evolve in time by means of norm-preserving linear transformations, which are insensitive to small errors. Well, quantum amplitudes also evolve by means of norm-preserving linear transformations, and this is what makes them behave like probabilities with respect to error, and not like the state variables of an analog computer. (A toy numerical illustration of this point appears right after this list.)

  6. The Fault-Tolerance Argument. Among the many nontrivial consequences of this linearity, there's one that probably counts as a separate argument: the Threshold Theorem. This theorem states that even if a quantum computer is subject to noise, we can still use it to do universal computation, provided we have parallel processing and a supply of fresh qubits, and provided the error rate is at most ε per qubit per time step, for some constant ε>0 independent of the length of the computation. The original lower bound on ε was about 10^-6, but recently Knill and others have brought it up to 1-3% under plausible assumptions. Many quantum computing researchers talk about this theorem as the knight in shining armor who rode in unexpectedly to vindicate all their hopes. They're entitled to do so, but to me, the theorem has always felt more like a beautiful, detailed working-out of something that couldn't possibly have been false. (And not just because it's a theorem.)

  7. The What-A-Waste Argument. Why do I say that the threshold theorem "couldn't possibly have been false"? Well, suppose quantum mechanics were an accurate description of reality, yet quantum computing was still impossible for some fundamental reason. In that case, we'd have to accept that Nature was doing a staggering amount of quantum computation that could never be "extracted," even in principle. Indeed, even assuming that life is (and always will be) confined to the vicinity of one planet, the resulting computational waste would make the waste of 10^11 uninhabited galaxies look like chickenfeed. I don't deny that such a possibility is logically consistent, but my complexity-theoretic instincts rebel against it.

  8. The Non-Extravagance Argument. In my opinion, if quantum computers could solve NP-complete problems in polynomial time, then there really would be grounds for regarding them as physically extravagant. Like coming up with theories that allow causality violations and superluminal signalling, coming up with models of computation that can simulate NP, #P, and PSPACE is just too easy. It's not interesting. The interesting task is to come up with a model of computation that's stronger than the usual ones (P, BPP, and P/poly), but not so strong that it encompasses NP-complete problems. If it weren't for BQP, I don't think I'd have any clear idea of what such a model could look like. (Sure, we have problems and complexity classes below NP, but that's different from a full-fledged model of computation.)

  9. The Turn-The-Tables Argument. If building quantum computers that outperform classical ones is fundamentally impossible, then it must be possible to write classical computer programs that efficiently simulate any quantum system found in Nature. And yet, even though this way of looking at the question is perfectly equivalent, there's a reason quantum computing skeptics avoid it. This is that, as soon as you frame the issue this way, they (the skeptics) are the ones who look like wild-eyed technological optimists -- believing we'll be able to simulate superconductors and quark-gluon plasmas on an ordinary desktop PC! The "staid," "conservative" position is that such a simulation won't be possible -- or, equivalently, that the systems being simulated have more computational power than the PC doing the simulating.

  10. The Island-In-Theoryspace Argument. String theorists have been ridiculed for claiming that string theory is "too beautiful to be wrong." But as Peter Woit points out in his fascinating new book, this is not at all a bad argument. It's a fine argument; the real question is whether string theory -- with its perturbation series, ten dimensions of which six are compactified for unknown reasons, landscape of vacua, etc. -- really is as beautiful as its proponents think it is. At the risk of breaking my vow, let me hasten to say that I'm in no position to judge. What I do know is that there's something mathematically unique about quantum mechanics: how it takes advantage of special properties of the L2 norm that fail for other p-norms, how the parameter-counting for mixed states that works perfectly with complex numbers fails with real numbers and quaternions, and so on. Crucially, it seems all but impossible to change quantum mechanics while retaining its nice properties. More so than general relativity or any other theory we have, quantum mechanics gives every indication of being an island in theoryspace.

  11. The Only-Game-In-Town Argument. However one feels about the alternatives to string theory -- loop quantum gravity, spin foams, twistors, and so on -- at least each one has a "developer base," a community of physicists who are actively trying to make it work. By contrast, I don't know of any picture of the world in which quantum computing is impossible, that's being actively developed by any research community anywhere. (Gerard 't Hooft and Stephen Wolfram are not research communities.) All the skepticism of quantum computing that I'm aware of is purely negative in character.

  12. The Historical Argument. If the above arguments are sound, then why haven't people already succeeded in building quantum computers? It's been what, ten years already? Some historical perspective might be helpful here: in Samuel Johnson's The History of Rasselas, Prince of Abissinia, written in 1759, Johnson has one of his characters give a correct explanation of why heavier-than-air flying machines should be physically possible, and then build a test plane that promptly plummets into a lake. Johnson was safe in ridiculing the idea; it would be another 144 years before Kitty Hawk. Closer to our topic, Darwin wrote in his autobiography about an eccentric loon of his acquaintance, who dreamed of building an engine to automate routine human thought. Though the loon -- a certain Charles Babbage -- hadn't run afoul of any fundamental theory, his proposal to build a classical computer was a century ahead of its time. Since the 1600's, science has often been generations ahead of technology. History gives us no reason at all to assume that a technology will be discovered to be compatible with known laws of physics at about the same time as it becomes possible to implement.

  13. The Trademark-Twist Argument. This last argument is the hardest one to articulate, but possibly the most compelling to my mind. In my view, Nature has been telling us, over and over and over, that our everyday intuitions will match the physical world if and only if we first apply a little "twist" to them. Often this twist involves an unusual symmetry, or switching from the L1 to the L2 norm, or inserting negative or complex numbers where our intuition says that only nonnegative real numbers would make sense. We see such a twist in special relativity, in the metric that's not positive definite but instead has a (-1,1,1,1) signature. We see it in the -1 phase that the universe picks up when you swap a fermion with its identical twin. We see it in the fact that, to rotate an electron back to where it was, you have to turn it not 360° but 720°. We see it in the Dirac equation. We see it, of course, in quantum mechanics itself. And what is BQP, if not P (or equivalently, we believe, BPP) with Nature's trademark little twist? (A two-line check of the 360°/720° claim also appears right after this list.)
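As promised in Argument 5, here's a toy numerical illustration (my own sketch, not a proof): a tiny error in a quantum state is merely carried along by norm-preserving linear evolution, modeled here by a fixed random unitary, while the same-sized error in an "analog" variable that gets multiplied by 1.01 at every step blows up exponentially.

```python
# Norm-preserving linear evolution never amplifies an error; an expanding
# "analog" update does.
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# A random unitary, from the QR decomposition of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
noisy = psi + 1e-6 * rng.normal(size=dim)      # slightly mis-prepared copy

analog, noisy_analog = 1.0, 1.0 + 1e-6         # "analog computer" state variable

for _ in range(1000):
    psi, noisy = Q @ psi, Q @ noisy            # unitary step: error norm unchanged
    analog, noisy_analog = 1.01 * analog, 1.01 * noisy_analog

print("error after 1000 unitary steps:", np.linalg.norm(psi - noisy))  # still of order 1e-6
print("error after 1000 analog steps: ", abs(analog - noisy_analog))   # ~ 1e-6 * 1.01**1000
```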
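And here's the promised check of Argument 13's 360°/720° claim (again my own illustration): rotating a spin-1/2 particle about the z-axis by an angle θ is exp(-iθσ_z/2), which is diagonal since σ_z is; at θ=2π it equals -I, and only at θ=4π does it return to +I.

```python
# A 360-degree rotation of a spin-1/2 particle flips the sign of its state;
# only a 720-degree rotation brings it all the way back.
import numpy as np

def rotate_spin_half(theta):
    """exp(-i*theta*sigma_z/2) for the diagonal Pauli matrix sigma_z = diag(1, -1)."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(+1j * theta / 2)])

print(np.round(rotate_spin_half(2 * np.pi), 12))   # -Identity: one full turn flips the sign
print(np.round(rotate_spin_half(4 * np.pi), 12))   # +Identity: two full turns restore the state
```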

Wednesday, September 06, 2006

The quantum-complexity bathroom reader

A reader named Lewis K. wrote in to ask for a "brief list of required reading for someone with a normal CS degree under his belt who wants to be taken to the research front in quantum complexity." Alright then:

[Deutsch] [Bernstein-Vazirani] [BBBV] [Simon] [Shor] [Grover] [BBHT] [BBCMW] [Ambainis] [Watrous] [ANTV] [Fortnow-Rogers] [Abrams-Lloyd] [Childs et al.] [DMV] [EHK] [BJK] [Gottesman] [KKR] [Marriott-Watrous]

(Sprinkle in some textbooks, survey articles, and course lecture notes to taste.)

Commenters will boil me alive for leaving out huge swaths of the field, and they'll be right. I've merely listed some papers that had a definite impact on how I, personally, attack problems. But hey, I'm the one you asked. So print 'em out, take 'em to the toilet, and sit there for a long time. When you're finished, you won't be at the "research front" -- for that you obviously have to read my papers -- but hopefully you'll have seen enough to visit the big bad arXiv on your own. Happy Hadamards!

Monday, September 04, 2006

Reasons to believe

More often than I can remember, I've been asked some form of the following question: "If you computer scientists can't prove P=NP or P!=NP, then why aren't we justified in believing whichever one we want? And why is the 'consensus' that P!=NP anything more than a shared prejudice -- something you repeat to each other so your work won't seem irrelevant?"

It's time to assume the mantle of Defender of the Faith. I'm going to give you ten arguments for believing P!=NP: arguments that are pretty much obvious to those who have thought seriously about the question, but that (with few exceptions) seem never to be laid out explicitly for those who haven't. You're welcome to believe P=NP if you choose. My job is to make you understand the conceptual price you have to pay for that belief.

Without further ado:
  1. The Obvious Argument. After half a century, we still don't know any algorithm for an NP-complete problem that runs in subexponential time. For Circuit-SAT, the canonical NP-complete problem, we don't know any algorithm essentially better than brute-force search. In math, if decades of research fail to turn up an object, and there's no a priori reason to suppose the object exists, it's usually a good strategy to conjecture that the object doesn't exist. We can all list counterexamples to this thesis, but the examples confirming it are much more numerous (though usually less famous, for obvious reasons).

  2. The Empirical Argument. While the argument based on decades of mathematical work can stand on its own, in the case of P versus NP we also have half a century of evidence from the computer industry. In a few cases -- like linear programming and primality testing -- people wanted fast ways to solve a problem in practice, and they came up with them, long before the problem was proved to be tractable theoretically. Well, people certainly want fast ways to solve NP-complete problems in practice, and they haven't been able to invent them. The best-known satisfiability algorithms -- such as DPLL, GSAT, and Survey Propagation -- work surprisingly well on certain instance distributions, but croak (for example) on instances derived from factoring or automated theorem proving. (A bare-bones sketch of DPLL appears right after this list.)

  3. The Bayesian Argument. Why can't we turn the last two arguments on their heads, and say that, if our failure to find a fast SAT algorithm is evidence that P!=NP, then our failure to prove P!=NP is likewise evidence that P=NP? The answer is, because lower bounds are harder to prove than upper bounds. Assuming P=NP, it's difficult to come up with a good reason why an efficient algorithm for NP-complete problems wouldn't yet have been discovered. But assuming P!=NP, we understand in great detail why a proof hasn't yet been discovered: because any proof will need to overcome specific and staggering obstacles. It will need to "know" how 3SAT differs from 2SAT, how quadratic programming differs from linear programming, and how approximating set cover within o(log|S|) differs from approximating it within log|S|. It will need to "look inside" computations in a way that doesn't relativize. It will need to argue that NP-complete problems are hard, not because they look like random Boolean functions, but because they don't look like random Boolean functions. While we have no reason to think such a proof is impossible -- indeed, we have proofs satisfying some of the desiderata -- we do have reason to think it will be extremely difficult.

    Whatever your "naïve prior probability" was that P=NP, the above considerations, together with Bayes' Rule, suggest revising it downward.

  4. The Multiple-Surprises Argument. Here's a point that's not often stressed: for P to equal NP, not just one but many astonishing things would need to be true simultaneously. First, factoring would have to be in P. Second, factoring would have to be as hard as breaking one-way functions. Third, breaking one-way functions would have to be as hard as solving NP-complete problems on average. Fourth, solving NP-complete problems on average would have to be as hard as solving them in the worst case. Fifth, NP would have to have polynomial-size circuits. Sixth, NP would have to equal coNP. And so on. Any one of these statements, by itself, would overturn much of what we think we know about complexity.

  5. The Hierarchy Argument. This argument goes back to the early days of P versus NP. We know that P is strictly contained in EXP by the time hierarchy theorem. It follows that either P is strictly contained in NP, or NP is strictly contained in PSPACE, or PSPACE is strictly contained in EXP. Likewise, since NL is strictly contained in PSPACE=NPSPACE by the space hierarchy theorem, either NL is strictly contained in P, or P is strictly contained in NP, or NP is strictly contained in PSPACE. But if some of these separations hold, then why not all of them? To put the point differently, we know that collapse is not the general rule of the Complexity Zoo: even between P and EXP, there really are infinitely many distinct species. Indeed for some pairs of species, like E and PSPACE, we know they're not equal even though we don't know if either one contains the other! The burden of evidence, then, is on those who believe that two seemingly-distinct species are the same, not on those who believe they're different.

  6. The Known-Algorithms Argument. We do have nontrivial efficient algorithms for several problems in NP, such as matching, stable marriage, minimum spanning tree, matrix inversion, planarity testing, and semidefinite programming. But every one of these algorithms depends, in a crucial way, on some special combinatorial or algebraic structure of the problem being solved. Is this just a fancy way of repeating that we don't know yet how to solve NP-complete problems? I don't think it is. It's possible to imagine a situation where we knew "generic" techniques for achieving exponential speedups, which worked for objects as complicated as Turing machines, and the only problem was that we didn't yet know how to apply those techniques to prove P=NP. But this is nothing like the actual situation.

  7. The Known-Lower-Bounds Argument. It could be that the dream of proving superpolynomial lower bounds on circuit size is no more than that: a pipe dream. But the fact remains we can prove superpolynomial lower bounds, albeit in weaker models of computation that are easier to analyze. To give some examples, superpolynomial lower bounds have been proven on the sizes of resolution proofs, monotone circuits, constant-depth circuits, read-once branching programs, and multilinear formulas.

  8. The Self-Referential Argument. If P=NP, then by that very fact, one would on general grounds expect a proof of P=NP to be easy to find. On the other hand, if P!=NP, then one would on general grounds expect a proof of P!=NP to be difficult to find. So believing P!=NP seems to yield a more 'consistent' picture of mathematical reality.

  9. The Philosophical Argument. If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps," no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. It's possible to put the point in Darwinian terms: if this is the sort of universe we inhabited, why wouldn't we already have evolved to take advantage of it? (Indeed, this is an argument not only for P!=NP, but for NP-complete problems not being efficiently solvable in the physical world.)

  10. The Utilitarian Argument. Suppose you believe P!=NP. Then there are only two possibilities, both of which are deeply gratifying: either you're right, or else there's a way to solve NP-complete problems in polynomial time. (I realize that I've given a general argument for pessimism.)
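As promised in Argument 2, here's a bare-bones sketch of DPLL (my own toy version; real solvers add clause learning, branching heuristics, watched literals, and much else). The point is just to make concrete what the algorithm does: run unit propagation to a fixed point, and when it gets stuck, branch on an unassigned variable and backtrack on contradiction.

```python
# Toy DPLL. A clause is a list of nonzero ints; -x means "variable x negated".
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                                # unit propagation to fixed point
        changed = False
        remaining = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                          # clause already satisfied
            live = [l for l in clause if abs(l) not in assignment]
            if not live:
                return None                       # clause falsified: backtrack
            if len(live) == 1:                    # unit clause forces an assignment
                assignment[abs(live[0])] = live[0] > 0
                changed = True
            remaining.append(clause)
        clauses = remaining
    if not clauses:
        return assignment                         # everything satisfied
    var = next(abs(l) for clause in clauses       # branch on an unassigned variable
               for l in clause if abs(l) not in assignment)
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None                                   # both branches failed: unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))          # e.g. {1: True, 3: True, 2: False}
```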
There are several questions that the above arguments don't pretend to address: first, why is P versus NP a reasonable question? Second, even if P!=NP, why should we expect there to be a proof in ZF set theory? Third, even if there is a proof, why should we expect it to be within reach of the human intellect? I'm really not cut out for this C. S. Lewis role, but look for further installments of Mere Complexity as the need arises...

Sunday, September 03, 2006

If challenge is what you seek

From left: Amnon Ta-Shma, your humble blogger, David Zuckerman, Adi Akavia, Adam Klivans. Behind us: the majestic mountains of Banff, Canada, site of yet another complexity workshop, which I just returned from a couple days ago, after which I immediately had to move out of my apartment, which explains the delay in updating the blog. Thanks to Oded Regev for the photo.

A few highlights from the workshop:
  • Rahul Santhanam presented a proof that for every fixed k, there exists a language in PromiseMA with no circuits of size n^k. This is a problem I spent some time on last year and failed to solve.

  • Dmitry Gavinsky discussed the question of whether quantum one-way communication complexity can be exponentially smaller than randomized two-way communication complexity. Richard Cleve has a candidate problem that might yield such a separation.

  • Ryan O'Donnell presented a proof that one can decide, using poly(1/ε) queries, whether a Boolean function is a threshold function or is ε-far from any threshold function. This is much harder than it sounds.

  • I took a gondola to the top of Sulphur Mountain, where the above photo was taken. While walking amidst some slanty rocks, I slipped and twisted my ankle. I was hobbling around for several days afterward, but seem to be OK now.
Overwhelming everything else, alas, was a memorial session for Misha Alekhnovich. Misha, who loved extreme sports, went on a whitewater kayaking trip in Russia a month ago. At a dangerous bend in the river, his three companions apparently made it to shore safely, while Misha did not. He was 28, and was to have been married a few days from now.

Misha and I overlapped as postdocs at IAS, and I wish I'd gotten to know him better then. From the conversations we did have, it was clear that Misha missed Russia and wanted to go back as soon as possible. The truth, though, is that I knew Misha less on a personal level than through his groundbreaking work, and particularly his beautiful paper with Razborov, where they show that the Resolution proof system is not automatizable unless FPT = W[P]. I still find it incredible that they were able to prove such a thing.

Lance has already discussed the memorial session, in which Eli Ben-Sasson and Sasha Razborov offered their personal remembrances, while Toni Pitassi and Russell Impagliazzo gave talks about Misha's work, emphasizing how the P versus NP question always lay just beneath the surface. It occurred to me that an outsider might find these talks odd, or even off-putting. Here we were, at a memorial for a dead colleague, talking in detail about the definition of automatizability and the performance of the DPLL algorithm on satisfiable CNF instances. Personally, I found it moving. At a funeral for a brilliant musician, would one discuss his "passion for music" in the abstract without playing any of his songs?

The tragic loss of Misha has reinforced a view I've long held: that if challenge is what you seek, then the thing to do is to tackle difficult open problems in math and computer science (or possibly physics). Unlike the skydiver, the kayaker, or the mountain-climber, the theorem-prover makes a permanent contribution in the best case, and is down a few months and a few hundred cups of coffee in the worst case. As for physical challenges, walking around heavily-populated tourist areas with slanty rocks has always presented more than enough of them for me.