Are Computers Ready to Create Mathematical Proofs?

michael posted more than 10 years ago | from the GIGO dept.

Education 441

DoraLives writes "Interesting article in the New York Times regarding the quandary mathematicians are now finding themselves in. In a lovely irony reminiscent of the torture, in days of yore, that students were put through when it came to using, or not using, newfangled calculators in class, the Big Guys are now wrestling with a very similar issue regarding computers: 'Can we trust the darned things?' 'Can we know what we know?' Fascinating stuff."

441 comments

Rumsfeld, anyone? (5, Funny)

dolo666 (195584) | more than 10 years ago | (#8787939)

> 'Can we know what we know?' Fascinating stuff.

Reminds me of Rumsfeld [about.com] ... "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know."

Re:Rumsfeld, anyone? (0)

jpBabelFish (768685) | more than 10 years ago | (#8788007)

As we have known, because there is knowns which is known, something always is interesting in me, a report that it did not happen,; As for us there are times when we have known that you have known. In addition as for us you know that it is the unknown which is known; Namely we have known that it is a certain thing which we do not know. But and -- where it has unknown unknown as for us those where we do not know that you do not know.

Rumsfeld, anyone?-Overload. (3, Funny)

Anonymous Coward | more than 10 years ago | (#8788037)

???
BOOM!

"Cleanup in aisle 10"

I enjoy watching him squirm. (0, Offtopic)

Anonymous Coward | more than 10 years ago | (#8788148)

Rumsfeld is the only senior staff member who seems to have a pulse. He is always lively and joking in his press briefings. I may not agree with some of his decisions or policies, but at least I can listen to him without falling asleep. I wish more politicians were not robots. Every time I hear a snippet of a Bush speech it sounds so fucking robotic, almost on the same level as Gore. Rumsfeld is always trying to squirm his way out of a direct answer to pretty much every question the press asks him. It's hilarious.

Re:I enjoy watching him squirm. (1, Insightful)

Anonymous Coward | more than 10 years ago | (#8788205)

What I find interesting about him is that if you ask a stupid question, he'll make sure you know it was a stupid question.

Prior to September 11, that attitude meant his days were numbered. After September 11, that attitude shows he's in charge.

Re:Rumsfeld, anyone? (2, Funny)

kfg (145172) | more than 10 years ago | (#8788267)

Are you sure?

KFG

They have already proven.... (-1, Troll)

Anonymous Coward | more than 10 years ago | (#8787940)

(at least WRT Linux) that you get what you pay for. ;)

Re:They have already proven.... (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#8788128)

Astute observation, compadre!

Hi! My name is p00p, [slashdot.org] and I'm an independent consultant whose job it is to check out operating systems; I detect weaknesses for a living, which is why I am particularly glad to have Open Sores crap like Linux on the streets.. but enough about my boring job, I can tell that you are here for the Report ! With no further ado, let me break it down for ya, homez.

I've checked it once, I've checked it a million times - the numbers don't lie, folks, it appears that Linux on the desktop is an utter failure right out the gate. GNOME is still a floundering fudgepack dependent on the dying kludge-fuck Xfree86, [xfree86.orghttp] and there's no light at the end of that tunnel as we all know!

KDE follows right behind, with a hideous mess built on anti-speed-demon Trolltech's QT toolkit, also filtered through Xfree just like GNOME. Ouch. Like the name, Trolltech, but the toolkit is a boner. Sorry guys! I know you all tried really hard and probably gave up maybe three or four hours of watching gay pr0n to pump that code out , but it looks like the only slots and signals people want are the ones that pay out big bucks in Vegas.

Security is still an issue, but you'd expect that from any amateur Open Sores project. Linux ain't so great, and it appears that the automatic updating mechanism in 90% of Linux installations is nonexistent or broken.. this is probably because Communism is more effective as a political philosophy than programming paradigm. And even then it never did work right, now did it?

Moving along, let's look at the appz people want [emphasis on "want" - editor] and see how many have been ported to Linux. Counting the Gimp (a pity vote) and WordPerfect (oh wait - that's dead) we have a grand total of ONE - I repeat ONE semi-popular app. Mozilla is a useless pig, else we'd be delighted to have it aboard just to give a semblance of competition to Mcrosoft. Maybe thirty years from now 'Zilla will be back to take charge eh! ;)

Lastly, and perhaps most significantly, it appears that among the children and unemployed hobbyists who currently form the bulk of Open Sores "developers" (term used loosely, no offense intended to legitimate software engineers!) there is a large homosexual contingent that is increasing every year.
The conclusion is inescapable: folks, Linux is officially gay!

This important announcement was brought you by p00p! [slashdot.orgp00p]
"Happy New Year and don't choke on my oversized donkey dong please, Linux!"

note: I realize that Apple is even more gay, but Apple's gayness comes at a significant price which many welfare-scamming bottom-feeders of homosexual orientation are unwilling to pay, hence the continuing focus on Linux afficionados and their OS of choice. Thanks for reading, and I'll be back next week. Cheerio!

Can someone elaborate on... (2, Interesting)

Anonymous Coward | more than 10 years ago | (#8787944)

...precisely why it's so hard to see that a pyramid of cannonballs is an optimal stack? This seems almost axiomatic.

I guess the obvious Monte Carlo simulation doesn't constitute "proof," but still, what exactly is the big question here?
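
For what it's worth, the numerical check is easy, which is exactly why it's suggestive rather than a proof. A minimal sketch (the FCC lattice below is the pyramid packing; the sample count and seed are arbitrary choices):

import math, random

# Kepler's pyramid is the face-centered-cubic (FCC) lattice: sphere centers at
# the corners and face centers of a unit cube, radius chosen so neighbors touch.
centers = [(0, 0, 0), (0, .5, .5), (.5, 0, .5), (.5, .5, 0)]
r2 = (math.sqrt(2) / 4) ** 2  # squared sphere radius

def covered(p):
    # Is the random point inside some sphere? Use periodic wrapping of the cell.
    for c in centers:
        d2 = sum(min(abs(a - b), 1 - abs(a - b)) ** 2 for a, b in zip(p, c))
        if d2 <= r2:
            return True
    return False

random.seed(1)
n = 200000
hits = sum(covered((random.random(), random.random(), random.random()))
           for _ in range(n))
print(hits / n)                      # ~0.74: the fraction of space filled
print(math.pi / (3 * math.sqrt(2)))  # exact FCC density, ~0.74048

Estimating the density this way convinces you, but it proves nothing about the infinitely many other possible arrangements, which is the actual question.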

Re:Can someone elaborate on... (5, Informative)

stephentyrone (664894) | more than 10 years ago | (#8788019)

It's not hard to *see*. It's hard to *prove*. Very little of mathematics is consumed with proving deep, mystical statements that no one would ever anticipate to be true. Much (maybe most) of mathematics is built around proving (relatively) obvious things. Why bother? Because sometimes relatively obvious things turn out to be false, and there's no way to know that they won't until you've proved them true. In general, showing that discrete (or semi-discrete) phenomena are optimal is fairly tricky; you can't just appeal to calculus to optimize some function. You often have to somehow break the search space up into a bunch of disjoint cases that span all the possibilities, and be able to prove that they span all possibilities. Then, if you're lucky, you can use some kind of calculus-type argument on the continuous spaces you're left with.

Re:Can someone elaborate on... (2, Insightful)

baywulf (214371) | more than 10 years ago | (#8788039)

Sometimes a precise mathematical analysis can lead to startling insights. Perhaps there are multiple solutions equally optimal. Perhaps it gives a general algorithm that works for other shapes.

Re:Can someone elaborate on... (1)

Guppy06 (410832) | more than 10 years ago | (#8788139)

Most people just get hung up on the term "brass monkey" and snicker.

Re:Can someone elaborate on... (3, Informative)

uncleO (769165) | more than 10 years ago | (#8788166)

What if the stack is very large, for instance? Is it still obvious that the pyramid is best? Consider a similar problem, that of packing squares in a rectangle. When the rectangle is small, it is obvious that the best way is to pack them together starting from one corner, leaving a little wastage on the opposite two sides. It seems strange, but it is true, that when the rectangle is very large, this strategy is not optimal. More squares can be packed when they are not packed in such a regular pattern. This is the main difficulty with proofs of this type. It can be easy to prove that a given strategy is not optimal, but not so simple to show that no better strategy can exist.

Re:Can someone elaborate on... (2, Interesting)

JoranE (722214) | more than 10 years ago | (#8788183)

Definition of optimal: At least as good as EVERY other possible configuration of (in this case infinitely many) oranges in 3-space.

If by "see" you mean "conjecture" then, yes, it is easy to see. If by "see" you mean "prove", and thereby catch a glimpse into the fundemental reasons WHY this is the best possible stacking, then it is exceedingly difficult.

How would you go about designing a proof that would take into account every single one of the infinitely many configurations of oranges? Could you even list them? Indeed, I read somewhere that Hales said that the hardest part was reducing all the infinitely many configurations into something like 5,000 cases.

Focus on the infinite part of this problem: we're looking for an arrangement of infinitely many oranges in space such that we maximize the density of the oranges. Now, is it obvious to you that an (infinite) pyramid is denser than packing them somehow into a sphere-ish shape (with infinite radius of course)? It certainly isn't to me...

I'm not sure if this is very convincing, but it's the best I can do without really researching the problem.

Re:Can someone elaborate on... (1)

the_2nd_coming (444906) | more than 10 years ago | (#8788194)

dude...obvious things are the hardest to prove.

prove to me that 0 < 1.

that proof takes about 15-20 lines of crap just to prove that 0 < 1.

you're a physics student, aren't you?

Re:Can someone elaborate on... (1)

the_2nd_coming (444906) | more than 10 years ago | (#8788228)

damn HTML.

Obvious things are hardest to prove.

prove to me that 0 is less than 1.

that proof takes about 15 - 20 lines of crap.

pretty long just to prove something that is so obvious.

you're a physics person, aren't you?
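
For the curious, one standard route (a sketch from the ordered-field axioms; the 15-20 line version spells out the case analysis in full):

$$\forall x:\; x^2 \ge 0 \quad (\text{case split on } x \ge 0 \text{ vs. } x < 0); \qquad 1 = 1^2 \ge 0; \qquad 1 \ne 0 \;\Rightarrow\; 0 < 1.$$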

Re:Can someone elaborate on... (0)

conan776 (723791) | more than 10 years ago | (#8788234)

Don't ask me. AFAICT a recursive algorithm would get this right away.
Is zero oranges the most dense? Duh, then:
while forever do {
pack another orange.
if (not a pyramid shaped packing) goto kepler sux;
}
kepler sux: you weren't sposed to be able to get here!
OK, where's my nobel prize in mathematics??

Re:Can someone elaborate on... (1)

davidoff404 (764733) | more than 10 years ago | (#8788239)

Actually a couple of months ago a team of physicists from, I believe, Stanford proved that optimal packing is achieved with oblate spheroids once uniformity of the elements' orientation is dropped as a requirement (obviously, since that wouldn't matter with spheres).

piss? (-1, Troll)

Anonymous Coward | more than 10 years ago | (#8787948)

ooh ehy i gotsa take a frosty piss while the malt runs down my legs... ahhhhh

p00p on y'all!!! [slashdot.org]

I don't care (1)

fresh27 (736896) | more than 10 years ago | (#8787951)

I'll use a calculator as long as it's easier, screw accuracy.

Re:I don't care (3, Interesting)

Vancorps (746090) | more than 10 years ago | (#8787979)

That can actually be a good point: when you make a task faster and easier, you can sacrifice a little accuracy here and there.

When the results become too inaccurate, you have to change your tools. So unless accuracy is key, as with NASA's calculations and the like, you work on making a computer do as much as possible to eliminate human error, and humans can be there to adjust the almost mythical fudge factor.

Re:I don't care (1)

colmore (56499) | more than 10 years ago | (#8788108)

Yes, but these are proofs, not engineering calculations; there's no such thing as a margin of error when your possible answers are "yes" and "no".

While in practice there is a grey area (the intuition of a good mathematician can still be correct even if he or she makes a few small errors in explaining it rigorously), a formal proof should still aspire to a standard of perfect rigor.

Re:I don't care (2, Insightful)

Vancorps (746090) | more than 10 years ago | (#8788165)

That is certainly true, so if a computer screws up a portion in the middle and the answer still comes out correct, then the end result is the same.

Another post mentioned that computers could come up with the proofs but that they would be too hard for a human to verify. At what point can we start to both trust the output and build upon that trusted output by further trusting future output? It could be an interesting time, in which we don't understand how things are done but are able to do them nonetheless. It all comes down to the question: how much do you want to trust a machine?

poop (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8787953)

hooray for poop

Create vs. Verify (5, Informative)

Squeamish Ossifrage (3451) | more than 10 years ago | (#8787954)

The headline does a slight disservice in describing the article that way: Whether or not computers can create proofs isn't an issue. The problem comes when the resulting proof is too involved to be verified by a human, and so the computer's work has to be trusted.

Re:Create vs. Verify (1)

Dr Tall (685787) | more than 10 years ago | (#8788000)

Not knowing much about this stuff, I'm kind of confused. What good is a proof that is too complicated to be understood? Can it still be useful without any understanding of its basis other than that HAL said so?

Re:Create vs. Verify (4, Insightful)

Xilo (741751) | more than 10 years ago | (#8788196)

Involved, not complicated - like the Four Color [Conjecture]. No one could figure out a way to actually prove it, so someone (or a group of someones) wrote a program to systematically determine all the possible arrangements of regions in a simplified series of maps, and then figure out how to color each of those maps. The involved part was... well, all of it. It wasn't necessarily very complicated, just labor-intensive. Computers are perfectly suited for tedium.
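
A toy version of that tedium, just to show the shape of it (this backtracking search over a four-region map is mine; the actual Appel-Haken proof machine-checked nearly 2,000 "unavoidable" configurations):

# Exhaustive backtracking search for a 4-coloring of a map given as an
# adjacency list mapping each region to its neighbors.
def four_color(adj, colors, v=0):
    if v == len(adj):
        return dict(colors)          # every region colored successfully
    for c in range(4):
        if all(colors.get(u) != c for u in adj[v]):
            colors[v] = c
            found = four_color(adj, colors, v + 1)
            if found:
                return found
            del colors[v]            # backtrack
    return None

# Four mutually adjacent regions: the worst case, forcing all four colors.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(four_color(adj, {}))           # e.g. {0: 0, 1: 1, 2: 2, 3: 3}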

Re:Create vs. Verify (4, Insightful)

Llywelyn (531070) | more than 10 years ago | (#8788220)

Sometimes it is enough to know that something is *true* or *false* without having to know the details of the in-between steps.

I'll give some trivial examples to illustrate:

For a famous example: it would provide a great deal of peace of mind if we could prove that P != NP. It wouldn't matter that we understand the proof so much as that it is *provably true*. If, on the other hand, it is proven false, that is just as important (if not more so); while an understanding of the proof might lead more easily to examples, we would know for certain that trusting public key encryption over the long run would be a Bad Idea(TM), and that it is just a matter of time before a polynomial time algorithm is developed.

(Not that such an algorithm will necessarily be fast, mind you, but we would know it exists.)

For another example: there are certain things that can be inferred if the Poincare Conjecture is true for N=3. If we can prove the Poincare Conjecture is true (and it is now thought that it might be), it means things to physicists, even if we don't know why it is true.

The bigger question here is "can we trust it if we can't verify it by hand."

Re:Create vs. Verify (4, Insightful)

Vancorps (746090) | more than 10 years ago | (#8788014)

If humans created the computer to do the task, should they not trust that it would do the task and do it well? Perhaps, perhaps not.

A computer could quite easily come up with a very complex answer, simply because it can do more calculations in a given second than a human can. Of course, humans take in many more variables at a given moment, so in that sense the comparison runs the other way, but I'm sure you get my point.

I think you test it with progressively more varied problems; if the answers come out precise and accurate, then you can build your level of trust in the system. Kind of like the process of getting users to trust saving to the server after a bad crash wiped everything because the previous admin was a moron.

indeed (4, Insightful)

rebelcool (247749) | more than 10 years ago | (#8788091)

On a similar topic, today I attended a lecture by Tony Hoare [microsoft.com] on compilers that can generate verified code and on tools that guide the human programmer into designing programs that can be easily validated (from the compiler's stance). One very good question raised afterwards was: how do you know you can trust the compiler generating the verified program?

Though Dr. Hoare danced around that question a little, presumably that aspect of the project would have to be done by hand, a monumental task to say the least.

Re:Create vs. Verify (1)

Ichijo (607641) | more than 10 years ago | (#8788122)

The problem comes when the resulting proof is too involved to be verified by a human, and so the computer's work has to be trusted.

When something is too involved for a human to accomplish -- isn't this what we have computers for? So the solution is simple. Produce another system to verify the first system's proof. Ideally, this second system should be built by a group completely separate from the first so as not to inherit any design flaws or false assumptions.

The same methodology could be applied to electronic voting. Everyone votes twice, once on each of two machines from two different manufacturers, and at the end of the day the results are tallied separately and compared.
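
A trivial sketch of that discipline (the two routines here are stand-ins I made up; the point is only that independently built implementations catch each other's bugs):

import random

# Implementation #1: selection sort, written from scratch.
def selection_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        j = min(range(i, len(xs)), key=xs.__getitem__)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

# Implementation #2: the library's sort, built by a separate group.
# Agreement over many random inputs builds confidence; any disagreement
# proves that at least one of the two is wrong.
random.seed(0)
for _ in range(1000):
    data = [random.randint(-99, 99) for _ in range(random.randint(0, 20))]
    assert selection_sort(data) == sorted(data)
print("1000 random trials agree")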

Re:Create vs. Verify (1)

NanoGator (522640) | more than 10 years ago | (#8788157)

"The problem comes when the resulting proof is too involved to be verified by a human, and so the computer's work has to be trusted."

Just to clarify: do you mean trusting the computer's accuracy in moving the bits around, or trusting that the human who wrote the program didn't make any mistakes?

Re:Create vs. Verify (1)

the_2nd_coming (444906) | more than 10 years ago | (#8788255)

a proof that is too complex for a human to understand or verify?

is it really a proof then? shouldn't a proof need to be humanly understandable and verifiable?

could you not just break it up into lemmas?

Sorry..... (-1, Offtopic)

Spyro VII (666885) | more than 10 years ago | (#8787955)

I don't really feel like registering to the NYTimes just to read an article... I'm not paranoid... I'm just lazy...

'Can we trust the darned things?' (4, Funny)

FFFish (7567) | more than 10 years ago | (#8787958)

Depends whether it's a Pentium with an FDIV bug, I imagine...

Re:'Can we trust the darned things?' (0)

Anonymous Coward | more than 10 years ago | (#8788077)

Intel, the microchip giant, uses proof-checking software to check algorithms in its chips, in the hope of avoiding glitches like one in the original 1994 Pentium that caused numbers to divide incorrectly.

Can we trust the software that tells us the chip design is sound?

Re:'Can we trust the darned things?' (1, Interesting)

Anonymous Coward | more than 10 years ago | (#8788178)

True. Any argument is only as good as its assumptions.

You may find the following project interesting:
ACL2 [utexas.edu] is a Common Lisp-based programming language and a system for proving properties about code. It has been used to prove correctness properties about microprocessors (microcode), compilers, and application code. Notably, I believe it is currently being used to prove certain properties about the Java Sandbox.

When the FDIV bug was reported, AMD hired its creators to mechanically verify the correctness of its floating point microcode on a model of the K5 developed in the system. Details [utexas.edu]

In fact, they were unable to verify its correctness, as it wasn't correct. Fortunately, the code could be changed prior to chip fabrication, saving AMD a lot of money and a lot of embarrassment.
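
For reference, the original FDIV flaw could be seen with a single division; the classic check (using the widely circulated 1994 test values) is:

# On a flawed Pentium this printed roughly 256, because 4195835/3145727 came
# out wrong in the low-order digits; any correct FPU prints 0.0.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)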

Will this complicate what we can understand? (3, Interesting)

Anonymous Coward | more than 10 years ago | (#8787959)

I think the issue will become: can we learn anything on our own? We don't want to rely on impossibly long proofs. My calc teacher hates us using calculators; she thinks it will be the end of calculus as we know it.

Re:Will this complicate what we can understand? (1)

iminplaya (723125) | more than 10 years ago | (#8788034)

My calc teacher hates us using calculators...

So do I. They should be banned from school, just like guns. It's one thing to use them in business for quick work. They have no place in school where you're supposed to be learning how to add.

Re:Will this complicate what we can understand? (1)

Knetzar (698216) | more than 10 years ago | (#8788073)

After algebra, one should do most calculations with variables in the result. The area of a circle with a radius of 3 is simply 9*pi; on a test, why does it matter if the student plugs the number in to get a decimal value?

Re:Will this complicate what we can understand? (1)

pastafazou (648001) | more than 10 years ago | (#8788155)

Simple. The student should know the value of pi to a certain number of significant figures. With a calculator, they're just typing in 3*pi and getting an answer with 8+ significant digits. Without a calculator, they're actually calculating the answer as close as they can based on how many significant digits they know. That's a better test of who's mastering mathematics.

Re:Will this complicate what we can understand? (0)

Anonymous Coward | more than 10 years ago | (#8788198)

Here is a compromise:

I will gladly abstain from using a calculator the day I am not given pages and pages of redundant math assignments to do.

Re:Will this complicate what we can understand? (1)

iminplaya (723125) | more than 10 years ago | (#8788173)

The area of a circle with a radius of 3 is simply 9*pi, on a test why does it matter if the student plugs the number in to get a decimal number?

Because you're just repeating the formula. The answer is a single number, real or otherwise. I need to know the actual area of the circle, not the formula for figuring it out.

Re:Will this complicate what we can understand? (1)

Absurd Being (632190) | more than 10 years ago | (#8788203)

Sure, I'll just leave behind 8*pi*e^(jpi/6) * (3*10^8)^3 /(1.602*10^-19 * (1.5)^.5 *2^.5*(4pi*10^-12^2)) as my answer. Or I could just give you a number. It's all constants and variables, just a boatload of them together.

Re:Will this complicate what we can understand? (1)

jdhutchins (559010) | more than 10 years ago | (#8788238)

Beyond algebra, the point of the test is not to test arithmetic skills, it's to test whatever area you're learning. By the time you learn algebra, you've spent roughly 6-8 years learning arithmetic, so there's not much point in beating it out of you any more. You know how to do it, it's just a matter of how many careless mistakes you make.

For some tests, you shouldn't be allowed to use a calculator (knowing basic sine and cosine values, for example, or numerical integration without a calculator). Other than that, you're just seeing how few careless mistakes the student is making over stuff they've already been tested on, which is not always the point of the test.

Re: . . . As we know it. (4, Interesting)

Bastian (66383) | more than 10 years ago | (#8788244)

I think the "as we know it" is the key word here. It creates a new juggling act that teachers have to deal with. On one hand, calculators are extremely useful for the things at which humans are error prone. If students can use calculators or Mathematica or something, they can check their arithmetic much more reliably, which is great for isolating problems with learning concepts from basic mathematical errors. Granted, some people don't see this as a bonus since it's good to be able to do arithmetic reliably without aid. For smaller numbers I agree, but in the real world people use calculators for arithmetic with numbers that have a lot of sig figs. Making students do this arithmetic by hand is just distracting them from learning the concepts they are supposed to be learning.
On the other hand, many students are prone to using these devices and applications as crutches and try to get away with doing things like using their calculator's implementation of Newton's Method instead of solving the problem themselves.

Some professors have found solutions to this problem, others haven't. When I was at college, I think our math department had achieved a pretty good level of harmony with Mathematica - we were expected to do a lot of stuff by hand or in our heads - Gaussian elimination, for example - but in order to make the math seem useful, we were also expected to be able to solve real-world problems with the stuff we learned. Not contrived "real-world" problems from your high-school textbook, but stuff like interpreting large and dirty scientific datasets where the specific technique we would have to use to solve the problem was something we could figure out, but not something that had been explicitly laid out by the textbook or in lecture. We had to apply the concepts we had learned to figure out the problem, but there was no way we were going to chug out that arithmetic by hand - when was the last time you tried to work on a 16x16 matrix using a pencil and paper? How about a 100x100 one?
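
For scale, the 100x100 case is a one-liner with a numerical library; a sketch with random data (just to show the size is trivial for a machine):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))  # a dense 100x100 linear system
b = rng.standard_normal(100)
x = np.linalg.solve(A, b)            # LU factorization: Gaussian elimination, in effect
print(np.allclose(A @ x, b))         # True: the residual is at machine precision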

Well... (1)

-kertrats- (718219) | more than 10 years ago | (#8787961)

Well, we certainly can't trust them to let us read the article. Anyone got a registration handy?

Re:Well... (1)

Russellkhan (570824) | more than 10 years ago | (#8788038)

How about this: a Google partner link [nytimes.com] to the story - no registration required.

I think so, yes. (3, Informative)

James A. M. Joyce (764379) | more than 10 years ago | (#8787966)

I mean, we've used computers to prove much of boolean and linear algebra. The most famed result in the field is that of the Robbins Conjecture [google.com] , proven entirely by computer. The computer produced a very "inhuman" proof...
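
For reference (my recollection of the statement): a Robbins algebra has a commutative, associative operation $\lor$ and a unary operation $n$ satisfying the single Robbins equation

$$n\bigl(n(x \lor y) \lor n(x \lor n(y))\bigr) = x,$$

and the conjecture, settled by McCune's EQP prover in 1996, was that every such algebra is a Boolean algebra.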

Re:I think so, yes. (5, Insightful)

Quill345 (769162) | more than 10 years ago | (#8788048)

Automated theorem provers have been around for a long time, if you can express your thoughts using first order logic. Here's a program from 1986... lisp code [cuny.edu]
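
For the flavor of it, here is a minimal propositional resolution prover (my own sketch, far simpler than a real first-order system like the one linked above):

from itertools import combinations

# Clauses are frozensets of literals; literals are nonzero ints, -n negating n.
def resolvents(c1, c2):
    for lit in c1:
        if -lit in c2:
            yield frozenset((c1 - {lit}) | (c2 - {-lit}))

def prove(clauses):
    # Refutation: saturate under resolution, hunting for the empty clause.
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:            # empty clause: contradiction derived
                    return True
                new.add(r)
        if new <= clauses:           # saturated without contradiction
            return False
        clauses |= new

# To prove q from (p -> q) and p, add the negated goal and refute:
# clauses {-p, q}, {p}, {-q}, with p = 1 and q = 2.
print(prove([{-1, 2}, {1}, {-2}]))   # True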

It was a big help with the 4-color map proof... (5, Informative)

The Beezer (573688) | more than 10 years ago | (#8787976)

and there are already programs out there that help with this. Here's [gatech.edu] one, for example...

Are Computers Ready to Create Mathematical Proofs? (-1)

Anonymous Coward | more than 10 years ago | (#8787986)

Are Computers Ready to Create Mathematical Proofs?

NO

Re:Are Computers Ready to Create Mathematical Proo (2, Informative)

superpulpsicle (533373) | more than 10 years ago | (#8788055)

The question is whether the people setting out to create mathematical proofs are ready themselves.

So many times I see people use programs like MAPLE to show something mathematical, and it ends up a disaster.

The problem is the brain in the chair, not the brain in the machine.

Re:Are Computers Ready to Create Mathematical Proo (0)

Anonymous Coward | more than 10 years ago | (#8788071)


YES

Next thing you know.... (1, Funny)

Anonymous Coward | more than 10 years ago | (#8788004)

there will be a computer that will automatically post on slashdot.

Text of the article (2, Informative)

Anonymous Coward | more than 10 years ago | (#8788009)


In Math, Computers Don't Lie. Or Do They?
By KENNETH CHANG

Published: April 6, 2004

A leading mathematics journal has finally accepted that one of the longest-standing problems in the field -- the most efficient way to pack oranges -- has been conclusively solved.

That is, if you believe a computer.

The answer is what experts -- and grocers -- have long suspected: stacked as a pyramid. That allows each layer of oranges to sit lower, in the hollows of the layer below, and take up less space than if the oranges sat directly on top of each other.

While that appeared to be the correct answer, no one offered a convincing mathematical proof until 1998 -- and even then people were not entirely convinced.

For six years, mathematicians have pored over hundreds of pages of a paper by Dr. Thomas C. Hales, a professor of mathematics at the University of Pittsburgh.

But Dr. Hales's proof of the problem, known as the Kepler Conjecture, hinges on a complex series of computer calculations, too many and too tedious for mathematicians reviewing his paper to check by hand.

Believing it thus, at some level, requires faith that the computer performed the calculations flawlessly, without any programming bugs. For a field that trades in dispassionate logic and supposedly unambiguous truths and falsehoods, that is an uncomfortably gray in-between.

Because of the ambiguities, the journal, the prestigious Annals of Mathematics, has decided to publish only the theoretical parts of the proof, which have been checked in the traditional manner. A more specialized journal, Discrete and Computational Geometry, will publish the computer sections.

The decision represents a compromise between wholehearted acceptance and rejection of the computer techniques that are becoming more common in mathematics.

The debate over computer-assisted proofs is the high-end version of arguments over using calculators in math classes -- whether technology spurs greater achievements by speeding rote calculations or deprives people of fundamentals.

"I don't like them, because you sort of don't feel you understand what's going on," said Dr. John H. Conway, a math professor at Princeton. But other mathematicians see a major boon: just as the computers of today can beat the grand masters of chess, the computers of tomorrow may be able to discover proofs that have eluded the grandest of mathematicians.

The packing problem dates at least to the 1590's, when Sir Walter Raleigh, stocking his ship for an expedition, wondered if there was a quick way to calculate the number of cannonballs in a stack based on its height. His assistant, Thomas Harriot, came up with the requested equation.
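
(The equation Harriot produced is, as I recall, the square pyramidal number: a square-based stack $n$ cannonballs high contains $\sum_{k=1}^{n} k^2 = n(n+1)(2n+1)/6$ balls.)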

Years later, Harriot mentioned the problem to Johannes Kepler, the astronomer who had deduced the movement of planets. Kepler concluded that the pyramid was most efficient. (An alternative arrangement, with each layer of spheres laid out in a honeycomb pattern, is equally efficient, but not better.) But Kepler offered no proof.
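
(Kepler's pyramid corresponds to the face-centered-cubic packing, whose density, meaning the fraction of space filled, is $\pi/(3\sqrt{2}) \approx 0.74048$; that this cannot be beaten is what Dr. Hales proved.)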

A rigorous proof, a notion first set forth by Euclid around 300 B.C., is a progression of logic, starting from assumptions and arriving at a conclusion. If the chain is correct, the proof is true. If not, it is wrong.

But a proof is sometimes a fuzzy concept, subject to whim and personality. Almost no published proof contains every step; there are just too many.

The Kepler Conjecture is also not the first proof to rely on computers. In 1976, Dr. Wolfgang Haken and Dr. Kenneth Appel of the University of Illinois used computer calculations in a proof of the four-color theorem, which states that any map needs only four colors to ensure that no adjacent regions are the same color.

The work was published -- and mathematicians began finding mistakes in it. In each case, Dr. Haken and Dr. Appel quickly fixed the error. But, "To many mathematicians, this left a very bad taste," said Dr. Robert D. MacPherson, an Annals editor.

To avoid a repetition, the Annals editors wanted a careful, complete review of Dr. Hales's proof. "But that's not how things turned out," Dr. MacPherson said. "It was a disappointment for all of us." The first group recruited to review the proof spent several years on it, but gave up a year ago, exhausted. Everything checked by the reviewers, led by Dr. Gabor Fejes Toth of the Hungarian Academy of Sciences, turned out to be correct. But the prospect of reviewing every calculation proved too daunting.

Dr. MacPherson likened the process to proofreading the listings in a phone book.

"Everywhere they looked in the phone book, it was right," he said, "and they looked in a lot of places."

The Annals considered publishing the paper with a disclaimer of sorts: This proof has been mostly, but not entirely, checked.

Dr. Conway of Princeton, despite his dislike of computer proofs, felt that a disclaimer cast unfair aspersions. "I was very unhappy about it," he said. The proof, Dr. Conway said, included more documentation than usual, including dated records of each step that Dr. Hales and a graduate student, Samuel P. Ferguson, had performed.

Faced with such criticism, the Annals editors reconsidered the disclaimer and sent the paper to another mathematician. The new reviewer agreed that the theoretical underpinnings were sound, and the editors arrived at a Solomon-like decision: they split Dr. Hales's paper in two.

"The part that's going in The Annals of Mathematics is a proof," Dr. MacPherson said. "We feel he has made a serious contribution to mathematics."

In a new policy, The Annals has decided that computer-assisted proofs have merit, but the journal will accord them a lower status than traditional proofs, regarding them more like laboratory experiments that provide supporting evidence.

"In some cases, it's not a good idea to verify computer proofs," Dr. MacPherson said. "It took the effort of many mathematicians for many years, and nothing came out of it."

Even in traditional proofs, reviewers rarely check every step, instead focusing mostly on the major points. In the end, they either believe the proof or not.

"It's like osmosis," said Dr. Akihiro Kanamori, a mathematics professor at Boston University who writes about the history of mathematics. "More and more people say it's a proof and you believe them."

That is why an earlier proof of the Kepler Conjecture, first offered eight years before Dr. Hales's, is rarely talked about these days. Dr. Wu-Yi Hsiang of the University of California at Berkeley claimed he had a proof in 1990, and in 1993 he published an article that he now calls an outline, not a complete proof.

Mathematicians sharply criticized it, saying that it contained holes of logic that they did not think Dr. Hsiang could fill.

The level of rigor and detail that mathematicians have demanded of proofs has waxed and waned over the centuries. Major mathematical fields of the 1700's and 1800's like calculus and topology developed without rigorous proofs.

"For quite some time in mathematics, arguments were basically descriptive," Dr. Kanamori said. "People would give what we would now call informal arguments."

But as mathematicians spun out ever more intuitive arguments, some "proofs" turned out to be false, leading them to shore up the foundations of their earlier work. Mathematicians continue to debate how much rigor a proof requires and whether too much emphasis on details stifles creativity.

Dr. Hsiang did not publish his complete proof until 2002, and it appeared as a book rather than in a peer-reviewed journal. Few have bothered to give the book a close read.

"Hsiang has not such a good track record," said Dr. Frank Quinn, a mathematics professor at Virginia Tech. "I don't want to spend time proving it's wrong."

Dr. Hsiang counters that his proof offers deeper insight and that others' understanding of his techniques is inadequate.

Some believe that computers, the source of the debate in Dr. Hales's proof, will actually quiet the debate over proofs. Instead of serving just as a tool for calculations, as in Dr. Hales's proof, computers can also be used to discover new proofs.

Mathematicians like Dr. Larry Wos of Argonne National Laboratory use "automated reasoning" computer programs: they enter axioms and the computer sifts through logical possibilities in search of a proof. Because of the huge number of possibilities, a human still needs to tell the computer where to search.

"The human mind will never be replaced," Dr. Wos said, butthe advantage of computers is their lack of preconceptions. "They can follow paths that are totally counterintuitive," he said.

The software also fills in the tedious work, he said, giving the mathematician more time to contemplate other problems, and it generates as much or as little detail as a mathematician desires. "They are fully rigorous and fully formal in the best sense of the word," Dr. Wos said. "They tell you exactly how each step was obtained."

In 1996, Dr. Wos and a colleague, Dr. William McCune, used the software to prove a previously unsolved problem known as the Robbins Conjecture.

In a 2003 book, "Automated Reasoning and the Discovery of Missing and Elegant Proofs," Dr. Wos described new proofs and more elegant versions of known proofs discovered by computers.

Intel, the microchip giant, uses proof-checking software to check algorithms in its chips, in the hope of avoiding glitches like one in the original 1994 Pentium that caused numbers to divide incorrectly.

Dr. Hales has embarked on a similar project called Flyspeck -- the letters F, P and K stand for "formal proof of Kepler" -- to put to rest any last doubts about the computer proof.

Current software, however, cannot handle anything nearly as complex as the Kepler Conjecture. Dr. Hales estimates that it will take 20 years of cumulative effort by a team of mathematicians to complete.

As for his 1998 proof of the Kepler Conjecture, Dr. Hales said that final publication, after a review process, originally expected to last a few months, would be almost anticlimactic. "For me, the big moment was when I completed the proof," Dr. Hales said. "And I don't think anything will change when I see it in print."

Re:Text of the article (-1, Troll)

Anonymous Coward | more than 10 years ago | (#8788224)

Troll post. MOD DOWN

But as mathematicians spun out ever more intuitive arguments, some "proofs" turned out to be false, leading them to shore up the foundations of their earlier work. Mathematicians, via gay butt sex , continue to debate how much rigor a proof requires and whether too much emphasis on details stifles creativity.

I see no good reason why not.... (4, Interesting)

kommakazi (610098) | more than 10 years ago | (#8788016)

Computers are a human creation... it's not a matter of whether we can trust the computer, but rather whether we can trust that the people who built the computer and coded the software it runs knew what they were doing and didn't make any errors. Computers can only do what we tell them to... so really it was humans who indirectly made the proofs by producing a system capable of doing so. All it really boils down to is whether the folks who made the system and its software knew what they were doing, and whether they made any errors.

as for me why it is not, you do not see the good r (0)

jpBabelFish (768685) | more than 10 years ago | (#8788035)

The people who the computer is made, the software that cord/code do those which run the mistake which the computer had done and did not make know, thing we who are compilation of the human can rely on the computer, but on the other hand us you rely on problem it is possible, how, it is not problem?.. with that. The computer so... but as for indirectly making evidence with the compilation of the system which really so can do being the human we saying to those it is possible not to be. Everything where that boils really the system was made, whether or not how people who and that software knew and are made the mistake which is those which have been done or it is not, and.

Re:I see no good reason why not.... (1)

wmspringer (569211) | more than 10 years ago | (#8788036)

And if you trust that the computer didn't break...though I suppose you can always get the software and run it on your old computer.

I had a calculator once that started giving wrong answers in the middle of a math test; very inconvenient :-p

Re:I see no good reason why not.... (1)

wmspringer (569211) | more than 10 years ago | (#8788131)

er, owN computer, sorry

mod parent up, please (1)

mantera (685223) | more than 10 years ago | (#8788107)

Most insightful comment I have seen in days.

Re:I see no good reason why not.... (1)

iminplaya (723125) | more than 10 years ago | (#8788112)

It boils down to the old cliche, "Garbage in, garbage out."

I always loved this picture (1)

DanThe1Man (46872) | more than 10 years ago | (#8788021)

Do you mean that this picture will be true someday [google.com]? It does seem a lot friendlier than SKYNET.

no (2, Funny)

sporty (27564) | more than 10 years ago | (#8788024)

no.

q.e.d.

Re:no (0)

Anonymous Coward | more than 10 years ago | (#8788078)

You're one of those people who thought computers would never be able to play chess, right?

Theorem Provers (4, Informative)

bsd4me (759597) | more than 10 years ago | (#8788025)

Theorem provers [wikipedia.org] have been around for a long time. A net search should turn up a ton of hits. The key is to implement a system that can be verified by hand, and then build on it.
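
A sketch of that idea (in the spirit of LCF-style provers generally, not any particular system): keep the trusted kernel tiny and hand-checkable, and let everything else produce theorems only through it.

class Theorem:
    """Carries a proposition; constructible only via the kernel rules below."""
    def __init__(self, prop, _token):
        assert _token is _KERNEL, "theorems come only from inference rules"
        self.prop = prop

_KERNEL = object()

# The entire trusted base: two rules small enough to verify by hand.
def refl(x):
    # Axiom scheme: x = x.
    return Theorem(("eq", x, x), _KERNEL)

def sym(thm):
    # Rule: from a = b, infer b = a.
    tag, a, b = thm.prop
    assert tag == "eq"
    return Theorem(("eq", b, a), _KERNEL)

# Untrusted code may build arbitrarily elaborate derivations, but any Theorem
# it ends up holding was necessarily produced by the rules above.
print(sym(refl("x")).prop)  # ('eq', 'x', 'x')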

new facet of an old issue (5, Insightful)

colmore (56499) | more than 10 years ago | (#8788027)

20th century mathematics has seen some pretty amazing things, but at the same time, there are very real questions as to what constitutes "proof" any more.

Consider this: the Riemann Hypothesis has been tested against trillions of nontrivial zeros of the zeta function, and it has held true in every case. (If you want an explanation of the problem, look elsewhere; I don't have the time.)

Now that means that it's *probably* true, but nobody accepts that as mathematical proof.

On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) runs to tens of thousands of pages across many different journals. Given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.

What's the difference between these two cases really? What's the difference between these and relying on computer proofs that are, again, *probably* right?

In this light, the math of the late 19th and early 20th centuries was something of a golden age: modern standards of logical rigor were in place, but the big breakthroughs still used elementary enough techniques that the proofs could be written in only a few pages, and the majority of mathematically literate readers could be expected to follow along. These days proofs run to hundreds of pages, and only a handful of hyper-specialized readers can be expected to understand them, much less review them.
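
The "tested against trillions of zeros" point is easy to reproduce in miniature; a sketch using mpmath (the range and step size are my own choices):

from mpmath import siegelz

# Sign changes of the Riemann-Siegel Z function mark zeta zeros lying exactly
# on the critical line; the first is near t = 14.13.
ts = [k * 0.1 for k in range(1, 501)]
vals = [siegelz(t) for t in ts]
print(sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0))  # 10 zeros below t = 50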

Re:new facet of an old issue (2, Interesting)

qbwiz (87077) | more than 10 years ago | (#8788225)

On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) is tens of thousands of pages in many different journals. given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.

Wouldn't this be a good use for the computer, then - attempting to verify these disjointed proofs and combine them into one unified whole?

Re:new facet of an old issue (0)

Anonymous Coward | more than 10 years ago | (#8788261)

why smug bastards?

Humans for humans (1)

forgetful (725420) | more than 10 years ago | (#8788029)

Computers are the compass; they are not the terrain!

Calculators (1)

OrthodonticJake (624565) | more than 10 years ago | (#8788045)

I don't know about computers making proofs. I'm no expert, but AFAIK they only know what we tell them to know. Then again, if we use the same logic processes found in computers in real life, then there should be no problem... until we think up loopier ways of thinking. I don't look forward to the day when my math teacher will say "OK, everyone back to addition. We've found a bug." On another note, calculators are awesome. My math teachers yell at me for doing ridiculously easy problems on them (simple addition and the like), but I keep shrugging them off. But the other day, my math teacher told us that there was a kid (no names mentioned) who got a test problem wrong because of an addition mistake. There is no arguing that the kid would have benefited from using a calculator on her/his simple addition. Score number 2 for slacking! I rest my long-winded case.

maturity (2, Insightful)

jdkane (588293) | more than 10 years ago | (#8788050)

whether technology spurs greater achievements by speeding rote calculations or deprives people of fundamentals.

As far as the calculator example goes, I believe a person should understand the fundamentals before using the calculator. Don't give a power tool to a kid. Somebody with an understanding of the fundamentals can wield the tool correctly and wisely, whereas some cowboy is just dangerous. The old saying comes to mind: "know just enough to be dangerous." As for those oranges in the article, somebody had better figure out a way to confirm the computer's answer without having to go through the exact same meticulous steps. Don't put your trust in technology without the proof to back it up, because technology built by people is prone to error. Now, I'm sure the orange problem won't cause harm to anybody, but I hope I never read such an article about a nuclear power plant!

Do you trust mathematics, or not? (0)

mkorman (740943) | more than 10 years ago | (#8788051)

I haven't read the article, but...

It always seemed to me that the real question here is whether or not we trust mathematics, not computers. After all, if we can prove that theorem provers work correctly, then it follows that the proofs they generate will be correct. Humans write the theorem provers, so that problem is certainly comprehensible.

I don't see what the big deal is (4, Interesting)

ameoba (173803) | more than 10 years ago | (#8788057)

I don't see what the big deal is; not only is the general problem of proving mathematical statements undecidable (even without considering Godel's theorem), but even solvable problems require a lot of human intervention to get solved. Most problems (i.e., examples out of math textbooks) aren't going to come up with a proof in any reasonable amount of time if you simply drop them into a theorem prover and push "go".

"Automated theorem proving" is less an automatic process (like you'd get with an automated production line) and more of a mechanical assistance to the job (like using a fork-lift to move heavy things faster than you could by hand).

Not only does it take work to convert a problem into a good representation, but you then have to structure the problem statement in such a way that a theorem prover can make optimal use of it. Oftentimes, upon following the output, you're forced to prove lemmas (sub-proofs).

Then, when you finally get a proof, you get the joy of trying to simplify it to something that -can- be understood by a person; again, this is part of the process that can't really be automated well.

change the title (4, Insightful)

NotAnotherReboot (262125) | more than 10 years ago | (#8788058)

Change the title to: "Are Computers Able to Verify Mathematical Proofs Beyond All Doubt?"

actually, no (1)

Lupus Rufus (11262) | more than 10 years ago | (#8788221)

Verification of mathematical proofs is left up to the referees of a paper. No one (at the moment) is suggesting that computers are able to perform the verification job of the referees; all the computer was able to do in this case was essentially trot out a (big) number of cases and verify certain computable propositions. It was the checking of those verifications that stumped the referees, but, as those verifications formed an essential part of the proof of Kepler's Conjecture itself, their removal made the proof incomplete. Perhaps an even more appropriate title would be "Are Computers Ready to Assist in Proving Mathematical Theorems?"

Godel? (2, Informative)

Anonymous Coward | more than 10 years ago | (#8788069)

Doesn't the Godel incompleteness theorem say they can't?

Re:Godel? (1)

Sir Pallas (696783) | more than 10 years ago | (#8788161)

No. Q.E.D. It says that a non-trivial axiomatic system (that is, one that can represent a structure as complex as the natural numbers) cannot be both complete and consistent. But there are some things that can be proved in special, limited cases; and we err on the side of consistency over completeness.

Re:Godel? (1)

bsd4me (759597) | more than 10 years ago | (#8788232)

Doesn't the Godel incompleteness theorem say they can't?

Yes and no. Can a program terminate (give yes or no) for an arbitrary statement? Nope. Can a program terminate for a restricted set of possible statements? Yup.

There are practical consequences of this. (Disclaimer: memory hazy, but I considered grad school on the subject.) Type inference for the higher-order lambda calculus is (probably) undecidable. This basically means that we can't type certain forms of computer programs. If you limit the input to let-bound polymorphism, then there is a decidable type system (such as Hindley-Milner), and we get a plethora of powerful programming languages (ML, Haskell, etc.).

Registration free link anyone? (-1, Offtopic)

Hangin10 (704729) | more than 10 years ago | (#8788086)

Does anyone have a registration free link to the article?

Ask and you shall receive (0)

Anonymous Coward | more than 10 years ago | (#8788105)

No reg link [nytimes.com]

Wait, I know this one... (4, Funny)

bomb_number_20 (168641) | more than 10 years ago | (#8788090)

The answer is left as an exercise for the reader.

making hard things easy... (3, Funny)

Polo (30659) | more than 10 years ago | (#8788096)

Mathematics is just Symbol Manipulation. I suspect computers are pretty good at that.

Also, chess is just Pattern Matching... I don't know if humans have the edge there or not. ;)

Damn, I should have submitted this (2, Insightful)

Pedrito (94783) | more than 10 years ago | (#8788102)

My father sent this to me first thing this morning. I told him that I didn't think rigorous mathematical proofs should be based on software, either in whole or in part. All software of any complexity is inherently buggy. Not that traditional proofs are flawless by nature; that's why they have peer review. But peer review of software still isn't always sufficient.

Another issue is that you then exclude any mathematicians who aren't also fairly adept programmers from really understanding your proof.

All of this said, computers are necessary to do math these days and I think mathematicians should make use of them. I just don't believe we've reached a level of maturity in software development that meets the stringent requirements of mathematical proofs.

Why not.. (3, Interesting)

Sir Pallas (696783) | more than 10 years ago | (#8788103)

...if you can prove the program. I know that a lot of people look down on axiomatic semantics and model checkers, but I also know some people who started in that area a long time ago and still believe in it, and even more who are trying to get faculty appointments out of this rebounding field. If you can prove a program does what you expect it to, and it, in turn, can be shown to prove what you really wanted, I don't see the problem. Maybe some of our theoretical mathematicians just need a dose of practical computer science.

Well can we trust humans? (1)

bigsexyjoe (581721) | more than 10 years ago | (#8788124)

The article says that published proofs generally skip steps. I'd bet that humans make more mistakes than the computer. In fact, Wiles's proof of Fermat's Last Theorem originally had a mistake, which was fixed later.

The secret proof to life... (-1, Offtopic)

SunPin (596554) | more than 10 years ago | (#8788137)

42!

What else is there to know?

The stack might get a bit deep, but... (4, Funny)

Odin's Raven (145278) | more than 10 years ago | (#8788140)

'Can we trust the darned things?' 'Can we know what we know?'

The obvious solution is to have the computer create a new proof that shows that the algorithm it used to create the original proof is, in fact, correct.

And to prove that the proof of the proof can be trusted, have the computer create a proof of the proof of the proof.

And to prove that the proof of the proof of the proof can be trusted, ...

A rather timely Slashdot fortune (2, Funny)

Anonymous Coward | more than 10 years ago | (#8788146)

What I see on the bottom of the page as I read this topic discussion is:
The truth of a proposition has nothing to do with its credibility. And vice versa.

Wrong Summary (2, Informative)

DougMackensie (79440) | more than 10 years ago | (#8788149)

This story and this issue are not about whether the mathematical community trusts a computer-created proof. The issue is whether the community can trust the human behind the computer to create a program/system that is "flawless enough". Issues and bugs may arise, and the community can't be sure that such issues will 1. be found and 2. not be severe enough to affect the validity of the proof.

4 color map problem (1)

stox (131684) | more than 10 years ago | (#8788167)

Wasn't the 4 color map problem solved in a similar fashion, in the late 1970s, by Kenneth Appel and Wolfgang Haken? This is nothing new; there are just some problems you are not going to solve by hand.

Verification of computer proofs is a pain... (3, Interesting)

briaydemir (207637) | more than 10 years ago | (#8788171)

The problem with computer generated proofs is that in order to trust the result of the computer, you have to trust:

  1. The program(s), as source code, that generated the proofs.
  2. The compiler(s) that turned the source code into something executable.
  3. The hardware that ran the programs.

And of course, our understanding of the hardware depends on how accurate our understanding of the laws of physics is. Any mistake in the source code, the compiler, or the hardware, and the proof produced is potentially incorrect. That's an awful lot of stuff you have to check just to make absolutely sure the computer is correct. Then, consider how many bug-free pieces of software you've encountered. ... Yeah, I can see why mathematicians would not trust computer-generated proofs.

Of course, people are not infallible either, but that's well known and expected. It's all about how much uncertainty people are willing to accept.

Re:Verification of computer proofs is a pain... (1)

Sir Pallas (696783) | more than 10 years ago | (#8788191)

And if the computer screws up, blame becomes a slightly fuzzy concept.
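
[Editor's note: modern proof assistants attack exactly this trust problem by shrinking what must be trusted to a small proof-checking kernel; the programs that search for a proof can be arbitrarily buggy, because the kernel re-checks everything they produce. A minimal illustrative sketch in Lean 4 -- a hypothetical example of a machine-checked proof, not anything from the article:]

<ecode>
-- Only Lean's small kernel must be trusted to accept these proofs;
-- the tactic scripts that found them do not have to be.

-- A trivially checkable fact, true by computation:
theorem two_plus_two : 2 + 2 = 4 := by rfl

-- A proof by induction: 0 + n = n for every natural number n.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => simp [Nat.add_succ, ih]
</ecode>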

The dawn of a new age of Math... and Science! (1)

themaddone (180841) | more than 10 years ago | (#8788175)

Math is constantly limited by the need for rigor at every step. That's not a bad thing, but imagine the potential. A mathematician (or even a semi-informed layman) would be able to input a logical statement and have the computer determine its validity, special cases, problems, and so forth.

The hard part would become formulating the statements. For science, the impact would be immeasurable. Ask the computer if it can be done, and get an answer. Knowledge could increase exponentially! (No, really -- once upon a time only advanced mathematicians knew calculus, but now we learn it in high school. Just wait until warp theory is an entry-level college engineering course.)
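
[Editor's note: for propositional logic this is actually decidable today, by brute force over truth assignments. A toy sketch in Python -- hypothetical code, and nothing like this exists for mathematics as a whole, where validity is undecidable:]

<ecode>
from itertools import product

def is_tautology(formula, num_vars):
    """Brute-force validity check for a propositional formula.

    `formula` is a function of num_vars booleans; the statement is
    valid (a tautology) iff it is true under every assignment.
    """
    return all(formula(*values)
               for values in product([False, True], repeat=num_vars))

# Peirce's law ((p -> q) -> p) -> p, with a -> b encoded as (not a) or b.
peirce = lambda p, q: (not ((not ((not p) or q)) or p)) or p
print(is_tautology(peirce, 2))                     # True: valid
print(is_tautology(lambda p, q: (not p) or q, 2))  # False: p -> q alone is not
</ecode>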

a mathematician's perspective (5, Interesting)

jockeys (753885) | more than 10 years ago | (#8788187)

as a graduate in the field of mathematics, i spent a large portion of my five undergraduate years doing proofs. there are a great many ways to prove things, each applicable in some situations and not in others. (e.g. using inductive proofs for numeric theorems is all well and good, but completely useless for any sort of ring-theory or spatial proof)

there are also several levels of depth for proofs, ranging from "i've found a counter example so i can write the whole thing off as garbage" to "i have exhaustively and rigorously proved this starting with the basic axioms of number theory and worked my way on up"

the latter is really the only acceptable way to prove anything seriously. sometimes, when you are reworking an already-done proof to illustrate a point, other mathematicians will allow a bit of latitude when it comes to cutting corners, but for a proof as far-reaching as the one in the article, i would only be interested in a "rigorous" proof -- that is, one that starts with the foundational tenets of mathematics and combines those to form and prove other postulates, and so on. very much a form of abstraction, not unlike large development projects.

the problem arises when one (or several) humans have to be able to objectively check the whole thing. to use my prior example of a large development project: no one developer at microsoft understands the whole of windows. it's too big for a single human to understand. each developer knows what he needs to do to complete his part, and so on and so forth.

traditionally, for proofs, a single mathematician (or a small group) would hammer out the whole proof, so the level of complexity remained at a human-understandable level, even if tedious. my concern, as a mathematician, with an automated solution is the rapidly growing order of complexity needed to properly back up increasingly complex proofs. as stated in the article, it's like trying to proofread a phonebook -- only, for every element of the proof (a particular listing), there are several branching layers of complexity underneath (fundamental, underlying proofs many layers deep), so the whole thing grows in an exponential fashion. obviously this approach will only remain human-checkable for relatively small problems. (programmers: think of some horrible nondeterministic-polynomial problem like the "traveling salesman" problem. systems, like humans, are a finite resource, and if you increase the size of the problem, the complexity quickly grows far beyond your ability to compute. large proofs suffer from the same difficulties, even if not quite as concrete and pronounced as NP algorithms.)

in closing, i would have to agree that such proofs, no matter the effort and computing time put into them, really should not be viewed as being as rigorous as those provable "by hand" and human-understandable, even if the computer has arrived at a satisfactory conclusion, because we have no way of KNOWING that the computer has built up the proof correctly, except that it says it has.
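
[Editor's note: to make the parent's traveling-salesman aside concrete, here is a hypothetical toy sketch of the brute-force approach, whose cost grows factorially with the number of cities -- the same kind of blowup the parent worries about in deeply nested proofs:]

<ecode>
from itertools import permutations
from math import factorial

def shortest_tour_length(dist):
    """Brute-force TSP: try every ordering of the cities.

    dist[i][j] is the distance from city i to city j; the tour
    starts and ends at city 0. Cost: (n-1)! orderings for n cities.
    """
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

for n in (5, 10, 15, 20):
    print(f"{n} cities -> {factorial(n - 1):,} tours to check")
</ecode>

At 20 cities that is already over 10^17 tours; exhaustive checking stops being an option long before the problems get interesting.]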

We Trust Every[One|Thing] just after we verify (1, Insightful)

sPaKr (116314) | more than 10 years ago | (#8788243)

We trust everyone -- trust, but verify. Why should machines be any different? If I ask you to do a math problem, do I trust you to give the correct answer? Sure I do, but I still verify: I perform the calculation myself, and let others perform it too. If we all come up with the same result, we most likely have the correct answer. We must be careful not to let the problem dictate a single solution method, since that lets us all make the same mistakes. As the SETI@home and distributed-cracking people found out, not everyone can be trusted, so the only thing to do is have many people test the same key space. No one should trust a single answer; develop consensus instead.
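
[Editor's note: a hypothetical minimal sketch of the parent's redundancy idea -- compute the same quantity by genuinely independent methods and only accept the answer when they agree:]

<ecode>
def sum_loop(n):
    """Sum 1..n the slow, obvious way."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """Sum 1..n via Gauss's closed form."""
    return n * (n + 1) // 2

def verified_sum(n):
    """Accept the result only if the independent methods agree."""
    a, b = sum_loop(n), sum_formula(n)
    if a != b:
        raise RuntimeError(f"methods disagree for n={n}: {a} vs {b}")
    return a

print(verified_sum(1_000_000))
</ecode>

The catch, as the parent notes, is independence: if both methods share a flawed assumption (or a flawed compiler), agreement proves nothing.]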

Mathematics is Human (2, Troll)

Traser (60664) | more than 10 years ago | (#8788250)

Mathematics is one of the most intensely human of human endeavours. Everything in it is a production of the human mind. Yes, the real world can sometimes lead us into an interesting area of inquiry, but at its core the uncovering of truths from axioms is a human endeavour.

A computer can be a useful tool (I'll be doing computational graph theory this summer), but it is not human. It does not have the ability to hold the possibilities of ideal forms within it and understand. It does not think.

The use of numeric methods on applied problems, or symbolic methods on pure ones, is good and useful, but it does not constitute proof.

A human being, given an understanding of the underlying mathematics, must be able to go through the proof step by step, and see that, from the givens, the conclusion is inevitable.

I don't accept the Four-Colour theorem as proven true. I strongly suspect it to be so, but my suspicion does not truth make.

The Riemann hypothesis, on the other hand, is much, much further from being proved than the Four-Colour Theorem. Yes, millions of zeroes have been checked... but there are infinitely many zeroes, and all it takes for it to be false is for ONE of those zeroes to lie off the Re(s) = 1/2 line of the complex plane.

If I were giving odds, then millions divided by infinity is awfully close to zero.
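
[Editor's note: for reference, the standard statement the parent is invoking, in LaTeX -- the zeta function and the claim that all its non-trivial zeros sit on the critical line:]

<ecode>
% The Riemann hypothesis: every non-trivial zero of the analytically
% continued zeta function lies on the critical line Re(s) = 1/2.
\[
  \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\operatorname{Re}(s) > 1),
\]
\[
  \zeta(\rho) = 0,\ \rho \text{ non-trivial} \;\Longrightarrow\; \operatorname{Re}(\rho) = \tfrac{1}{2}.
\]
</ecode>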
