# Are Computers Ready to Create Mathematical Proofs?

#### michael posted about 10 years ago | from the GIGO dept.

441
DoraLives writes *"Interesting article in the New York Times regarding the quandary mathematicians are now finding themselves in. In a lovely irony reminiscent of the torture, in days of yore, that students were put through when it came to using, or not using, newfangled calculators in class, the Big Guys are now wrestling with a very similar issue regarding computers: 'Can we trust the darned things?' 'Can we know what we know?' Fascinating stuff."*

## Rumsfeld, anyone? (5, Funny)

## dolo666 (195584) | about 10 years ago | (#8787939)

> 'Can we know what we know?' Fascinating stuff.

Reminds me of Rumsfeld [about.com] ... "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know."

## Re:Rumsfeld, anyone? (0)

## jpBabelFish (768685) | about 10 years ago | (#8788007)

## Rumsfeld, anyone?-Overload. (3, Funny)

## Anonymous Coward | about 10 years ago | (#8788037)

BOOM!

"Cleanup in aisle 10"

## I enjoy watching him squirm. (0, Offtopic)

## Anonymous Coward | about 10 years ago | (#8788148)

## Re:I enjoy watching him squirm. (1, Insightful)

## Anonymous Coward | about 10 years ago | (#8788205)

Prior to September 11, that attitude meant his days were numbered. After September 11, that attitude shows he's in charge.

## Re:Rumsfeld, anyone? (2, Funny)

## kfg (145172) | about 10 years ago | (#8788267)

KFG

## They have already proven.... (-1, Troll)

## Anonymous Coward | about 10 years ago | (#8787940)

## Re:They have already proven.... (-1, Flamebait)

## Anonymous Coward | about 10 years ago | (#8788128)

## Can someone elaborate on... (2, Interesting)

## Anonymous Coward | about 10 years ago | (#8787944)

I guess the obvious Monte Carlo simulation doesn't constitute "proof," but still, what exactly is the big question here?
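A Monte Carlo run makes the distinction concrete: it yields statistical evidence, never a proof. A minimal sketch (the estimator and its tolerances are illustrative, not from the article):

```python
import random

def estimate_pi(samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling the unit square and counting points
    that land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

# The estimate converges toward pi, but no finite number of samples
# ever rules out a counterexample: evidence, not proof.
print(estimate_pi(100_000))
```

No matter how many samples you draw, the result only narrows a confidence interval; it never closes the logical gap that a proof does.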

## Re:Can someone elaborate on... (5, Informative)

## stephentyrone (664894) | about 10 years ago | (#8788019)

## Re:Can someone elaborate on... (2, Insightful)

## baywulf (214371) | about 10 years ago | (#8788039)

## Re:Can someone elaborate on... (1)

## Guppy06 (410832) | about 10 years ago | (#8788139)

## Re:Can someone elaborate on... (3, Informative)

## uncleO (769165) | about 10 years ago | (#8788166)

## Re:Can someone elaborate on... (2, Interesting)

## JoranE (722214) | about 10 years ago | (#8788183)

If by "see" you mean "conjecture" then, yes, it is easy to see. If by "see" you mean "prove", and thereby catch a glimpse into the fundamental reasons WHY this is the best possible stacking, then it is exceedingly difficult.

How would you go about designing a proof that would take into account every single one of the infinitely many configurations of oranges? Could you even list them? Indeed, I read somewhere that Hales said that the hardest part was reducing all the infinitely many configurations into something like 5,000 cases.

Focus on the infinite part of this problem: we're looking for an arrangement of infinitely many oranges in space such that we maximize the density of the oranges. Now, is it obvious to you that an (infinite) pyramid is denser than packing them somehow into a sphere-ish shape (with infinite radius of course)? It certainly isn't to me...

I'm not sure if this is very convincing, but it's the best I can do without really researching the problem.
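For a sense of what is being compared, the densities of the two arrangements discussed above have standard closed forms; a quick check (the constants are well-known results, the script is just illustrative):

```python
import math

# Fraction of space filled by the spheres in each arrangement.
pyramid = math.pi / math.sqrt(18)  # face-centred cubic: the grocer's stack
cubic = math.pi / 6                # spheres stacked directly on top of each other

print(f"pyramid stacking: {pyramid:.4f}")  # about 0.7405, the Kepler bound
print(f"simple cubic:     {cubic:.4f}")    # about 0.5236
```

The hard part of the conjecture is not computing these two numbers but ruling out every other arrangement, including irregular ones with no repeating pattern at all.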

## Re:Can someone elaborate on... (1)

## the_2nd_coming (444906) | about 10 years ago | (#8788194)

prove to me that 0 < 1

that proof takes about 15 - 20 lines of crap just to prove that 0 < 1.

you're a physics student, aren't you?

## Re:Can someone elaborate on... (1)

## the_2nd_coming (444906) | about 10 years ago | (#8788228)

Obvious things are hardest to prove.

prove to me that 0 is less than 1.

that proof takes about 15 - 20 lines of crap.

pretty long just to prove something that is so obvious.

you're a physics person, aren't you?
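The parent's point can be made concrete in a proof assistant; a sketch in Lean 4, where the axiomatic legwork lives inside library lemmas so the statement checks in one line (the lemma names assume a recent Lean 4 toolchain):

```lean
-- A machine-checked proof that 0 < 1 over the natural numbers.
-- The 15-20 lines of axiomatic work are packaged in the library lemma.
example : 0 < 1 := Nat.zero_lt_one

-- Unfolded one step: 0 < 1 is by definition 0 + 1 ≤ 1.
example : 0 < 1 := Nat.le_refl 1
```

Expanding those library lemmas all the way down to the axioms of arithmetic recovers exactly the page of "crap" the parent post describes.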

## Re:Can someone elaborate on... (0)

## conan776 (723791) | about 10 years ago | (#8788234)

## Re:Can someone elaborate on... (1)

## davidoff404 (764733) | about 10 years ago | (#8788239)

## piss? (-1, Troll)

## Anonymous Coward | about 10 years ago | (#8787948)

p00p on y'all!!! [slashdot.org]

## I don't care (1)

## fresh27 (736896) | about 10 years ago | (#8787951)

## Re:I don't care (3, Interesting)

## Vancorps (746090) | about 10 years ago | (#8787979)

When the results become too inaccurate, you have to change your tools. So unless accuracy is key, as with NASA's calculations and the like, you work on making the computer do as much as possible to eliminate human error, with humans there to adjust the almost mythic fudge factor.

## Re:I don't care (1)

## colmore (56499) | about 10 years ago | (#8788108)

While in practice there is a grey area (the intuition of a good mathematician can still be correct even if he or she makes a few small errors in explaining it rigorously), a formal proof should still aspire to a standard of perfect rigor.

## Re:I don't care (2, Insightful)

## Vancorps (746090) | about 10 years ago | (#8788165)

Another post mentioned that computers could come up with the proofs, but that they would be too hard for a human to verify. At what point can we start to both trust the output and build upon that trusted output by further trusting future output? It could be an interesting time, in which we don't understand how things are done but are able to do them nonetheless. It all comes down to the question: how much do you want to trust a machine?

## poop (-1, Offtopic)

## Anonymous Coward | about 10 years ago | (#8787953)

## Create vs. Verify (5, Informative)

## Squeamish Ossifrage (3451) | about 10 years ago | (#8787954)

Whether computers can create proofs isn't an issue. The problem comes when the resulting proof is too involved to be verified by a human, and so the computer's work has to be trusted.

## Re:Create vs. Verify (1)

## Dr Tall (685787) | about 10 years ago | (#8788000)

## Re:Create vs. Verify (4, Insightful)

## Xilo (741751) | about 10 years ago | (#8788196)

Involved, not complicated - like the Four Color [Conjecture]. No one could figure out a way to actually prove it, so someone (or a group of someones) wrote a program to systematically determine all the possible arrangements of regions in a simplified series of maps, and then figure out how to color each of those maps. The involved part was

## Re:Create vs. Verify (4, Insightful)

## Llywelyn (531070) | about 10 years ago | (#8788220)

I'll give some trivial examples to illustrate:

For a famous example, it would provide a great deal of peace of mind if we could prove that P != NP. It wouldn't matter that we understand that proof so much as that it is *provably true*. If, on the other hand, it is proven false, that is just as important (if not more so); and while an understanding of the proof might lead more easily to examples of such, we would know (for certain) that trusting public key encryption over the long run would be a Bad Idea(TM), and that it is just a matter of time before a polynomial time algorithm is developed.

(Not that such an algorithm will necessarily be fast, mind you, but we would know it exists.)

For another example: there are certain things that can be inferred if the Poincaré Conjecture is true for N=3. If we can prove the Poincaré Conjecture is true (and it is now thought that it might be), it means things to physicists, even if we don't know why it is true.

The bigger question here is "can we trust it if we can't verify it by hand."

## Re:Create vs. Verify (4, Insightful)

## Vancorps (746090) | about 10 years ago | (#8788014)

A computer can come up with a very complex answer quite easily, simply because it can do more calculations in a given second than a human can. Of course humans take in a lot more variables at a given time, so in that sense the comparison runs the other way, but I'm sure you get my point.

I think you test it with progressively more varied problems; if the answers come out precise and accurate, then you can build your level of trust in the system. Kind of like the process of getting users to trust saving to the server after a bad crash wiped everything because the previous admin was a moron.

## indeed (4, Insightful)

## rebelcool (247749) | about 10 years ago | (#8788091)

Though Dr. Hoare danced around that question a little, presumably that aspect of the project would have to be done by hand, a monumental task to say the least.

## Re:Create vs. Verify (1)

## Ichijo (607641) | about 10 years ago | (#8788122)

When something is too involved for a human to accomplish -- isn't this what we have computers for? So the solution is simple. Produce another system to verify the first system's proof. Ideally, this second system should be built by a group completely separate from the first so as not to inherit any design flaws or false assumptions.

The same methodology could be applied to electronic voting. Everyone votes twice, once on each of two machines from two different manufacturers, and at the end of the day the results are tallied separately and compared.

## Re:Create vs. Verify (1)

## NanoGator (522640) | about 10 years ago | (#8788157)

> "The problem comes when the resulting proof is too involved to be verified by a human, and so the computer's work has to be trusted."

Just to clarify, do you mean the computer's accuracy in moving the bits around, or do you mean the human who wrote the program didn't make any mistakes?

## Re:Create vs. Verify (1)

## the_2nd_coming (444906) | about 10 years ago | (#8788255)

is it really a proof then? shouldn't a proof need to be human-understandable and verifiable?

could you not just break it up into lemmas?

## Sorry..... (-1, Offtopic)

## Spyro VII (666885) | about 10 years ago | (#8787955)

## 'Can we trust the darned things?' (4, Funny)

## FFFish (7567) | about 10 years ago | (#8787958)

## Re:'Can we trust the darned things?' (0)

## Anonymous Coward | about 10 years ago | (#8788077)

> Intel, the microchip giant, uses proof-checking software to check algorithms in its chips, in the hope of avoiding glitches like one in the original 1994 Pentium that caused numbers to divide incorrectly.

Can we trust the software that tells us the chip design is sound?

## Re:'Can we trust the darned things?' (1, Interesting)

## Anonymous Coward | about 10 years ago | (#8788178)

You may find the following project interesting:

It [acl2] [utexas.edu] is a Common Lisp-based programming language and a system for proving properties of code. It has been used to prove correctness properties of microprocessors [microcode], compilers, and application code. Notably, I believe it is currently being used to prove certain properties of the Java Sandbox.

When the FDIV bug was reported, AMD hired its creators to mechanically verify the correctness of its floating point microcode on a model of the K5 developed in the system. Details [utexas.edu]

In fact, they were unable to verify its correctness, as it wasn't correct. Fortunately, the code could be changed prior to chip fabrication, saving AMD a lot of money and a lot of embarrassment.

## Will this complicate what we can understand? (3, Interesting)

## Anonymous Coward | about 10 years ago | (#8787959)

## Re:Will this complicate what we can understand? (1)

## iminplaya (723125) | about 10 years ago | (#8788034)

> My calc teacher hates us using calculators...

So do I. They should be banned from school, just like guns. It's one thing to use them in business for quick work. They have no place in school, where you're supposed to be learning how to add.

## Re:Will this complicate what we can understand? (1)

## Knetzar (698216) | about 10 years ago | (#8788073)

## Re:Will this complicate what we can understand? (1)

## pastafazou (648001) | about 10 years ago | (#8788155)

## Re:Will this complicate what we can understand? (0)

## Anonymous Coward | about 10 years ago | (#8788198)

I will gladly abstain from using a calculator the day I am not given pages and pages of redundant math assignments to do.

## Re:Will this complicate what we can understand? (1)

## iminplaya (723125) | about 10 years ago | (#8788173)

> The area of a circle with a radius of 3 is simply 9*pi, on a test why does it matter if the student plugs the number in to get a decimal number?

Because you're just repeating the formula. The answer is a single number, real or otherwise. I need to know the actual area of the circle, not the formula for figuring it out.
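The two answers being argued over, side by side (trivial, but it shows exactly what the calculator contributes):

```python
import math

r = 3
coefficient = r * r          # the symbolic answer: area = 9 * pi
decimal = math.pi * r ** 2   # the plugged-in answer: 28.2743...

print(coefficient, decimal)
```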

## Re:Will this complicate what we can understand? (1)

## Absurd Being (632190) | about 10 years ago | (#8788203)

## Re:Will this complicate what we can understand? (1)

## jdhutchins (559010) | about 10 years ago | (#8788238)

For some tests, you shouldn't be allowed to use a calculator (knowing basic sine and cosine values, for example, or numerical integration without a calculator). Other than that, you're just seeing how few careless mistakes the student is making over stuff they've already been tested on, which is not always the point of the test.

## Re: . . . As we know it. (4, Interesting)

## Bastian (66383) | about 10 years ago | (#8788244)

On the other hand, many students are prone to using these devices and applications as crutches and try to get away with doing things like using their calculator's implementation of Newton's Method instead of solving the problem themselves.

Some professors have found solutions to this problem, others haven't. When I was at college, I think our math department had achieved a pretty good level of harmony with Mathematica: we were expected to do a lot of stuff by hand or in our heads (Gaussian elimination, for example), but in order to make the math seem useful, we were also expected to be able to solve real-world problems with the stuff we learned. Not contrived "real-world" problems from your high-school textbook, but stuff like interpreting large and dirty scientific datasets, where the specific technique we would have to use was something we could figure out, but not something that had been explicitly laid out by the textbook or in lecture. We had to apply the concepts we had learned to figure out the problem, but there was no way we were going to chug out that arithmetic by hand. When was the last time you tried to work on a 16x16 matrix using a pencil and paper? How about a 100x100 one?
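The technique mentioned above scales exactly as badly by hand as described. A minimal sketch of Gaussian elimination with partial pivoting (illustrative code, not from the post): fine at 3x3 on paper, and exactly what you hand to the machine at 100x100.

```python
def gaussian_solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting.
    a is a list of n rows of n floats; b is a list of n floats."""
    n = len(a)
    # Build the augmented matrix (copies, so the inputs are untouched).
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry.
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        # Eliminate this column from the rows below.
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# A 3x3 system is pencil-and-paper work; the same code handles 100x100.
print(gaussian_solve([[2.0, 1.0, -1.0],
                      [-3.0, -1.0, 2.0],
                      [-2.0, 1.0, 2.0]],
                     [8.0, -11.0, -3.0]))  # x=2, y=3, z=-1, up to rounding
```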

## Well... (1)

## -kertrats- (718219) | about 10 years ago | (#8787961)

## Re:Well... (1)

## Russellkhan (570824) | about 10 years ago | (#8788038)

## I think so, yes. (3, Informative)

## James A. M. Joyce (764379) | about 10 years ago | (#8787966)

## Re:I think so, yes. (5, Insightful)

## Quill345 (769162) | about 10 years ago | (#8788048)

## It was a big help with the 4-color map proof... (5, Informative)

## The Beezer (573688) | about 10 years ago | (#8787976)

## Are Computers Ready to Create Mathematical Proofs? (-1)

## Anonymous Coward | about 10 years ago | (#8787986)

> Are Computers Ready to Create Mathematical Proofs?

NO

## Re:Are Computers Ready to Create Mathematical Proo (2, Informative)

## superpulpsicle (533373) | about 10 years ago | (#8788055)

So many times I see people use programs like MAPLE to show something mathematical, and it ends up a disaster.

The problem is the brain in the chair, not the brain in the machine.

## Re:Are Computers Ready to Create Mathematical Proo (0)

## Anonymous Coward | about 10 years ago | (#8788071)

YES

## Next thing you know.... (1, Funny)

## Anonymous Coward | about 10 years ago | (#8788004)

## Text of the article (2, Informative)

## Anonymous Coward | about 10 years ago | (#8788009)

In Math, Computers Don't Lie. Or Do They?

By KENNETH CHANG

Published: April 6, 2004

A leading mathematics journal has finally accepted that one of the longest-standing problems in the field -- the most efficient way to pack oranges -- has been conclusively solved.

That is, if you believe a computer.

The answer is what experts -- and grocers -- have long suspected: stacked as a pyramid. That allows each layer of oranges to sit lower, in the hollows of the layer below, and take up less space than if the oranges sat directly on top of each other.

While that appeared to be the correct answer, no one offered a convincing mathematical proof until 1998 -- and even then people were not entirely convinced.

For six years, mathematicians have pored over hundreds of pages of a paper by Dr. Thomas C. Hales, a professor of mathematics at the University of Pittsburgh.

But Dr. Hales's proof of the problem, known as the Kepler Conjecture, hinges on a complex series of computer calculations, too many and too tedious for mathematicians reviewing his paper to check by hand.

Believing it thus, at some level, requires faith that the computer performed the calculations flawlessly, without any programming bugs. For a field that trades in dispassionate logic and supposedly unambiguous truths and falsehoods, that is an uncomfortably gray in-between.
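One reason such calculations can be trusted at all is interval arithmetic, which Dr. Hales's proof used to turn floating-point computation into rigorous bounds. A minimal sketch (a toy class of my own; a real implementation would also direct the hardware rounding mode so the endpoints themselves are rigorous):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] guaranteed to contain the true value."""
    lo: float
    hi: float

    def __add__(self, other):
        # The sum of two enclosures encloses the sum of the true values.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's range is bounded by the four endpoint products.
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

x = Interval(1.0, 1.1)
y = Interval(-0.5, 0.2)
print(x + y, x * y)
```

Instead of a single rounded number, every step of the computation carries a provable enclosure, which is what lets an inequality checked by machine count as checked at all.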

Because of the ambiguities, the journal, the prestigious Annals of Mathematics, has decided to publish only the theoretical parts of the proof, which have been checked in the traditional manner. A more specialized journal, Discrete and Computational Geometry, will publish the computer sections.

The decision represents a compromise between wholehearted acceptance and rejection of the computer techniques that are becoming more common in mathematics.

The debate over computer-assisted proofs is the high-end version of arguments over using calculators in math classes -- whether technology spurs greater achievements by speeding rote calculations or deprives people of fundamentals.

"I don't like them, because you sort of don't feel you understand what's going on," said Dr. John H. Conway, a math professor at Princeton. But other mathematicians see a major boon: just as the computers of today can beat the grand masters of chess, the computers of tomorrow may be able to discover proofs that have eluded the grandest of mathematicians.

The packing problem dates at least to the 1590's, when Sir Walter Raleigh, stocking his ship for an expedition, wondered if there was a quick way to calculate the number of cannonballs in a stack based on its height. His assistant, Thomas Harriot, came up with the requested equation.

Years later, Harriot mentioned the problem to Johannes Kepler, the astronomer who had deduced the movement of planets. Kepler concluded that the pyramid was most efficient. (An alternative arrangement, with each layer of spheres laid out in a honeycomb pattern, is equally efficient, but not better.) But Kepler offered no proof.

A rigorous proof, a notion first set forth by Euclid around 300 B.C., is a progression of logic, starting from assumptions and arriving at a conclusion. If the chain is correct, the proof is true. If not, it is wrong.

But a proof is sometimes a fuzzy concept, subject to whim and personality. Almost no published proof contains every step; there are just too many.

The Kepler Conjecture is also not the first proof to rely on computers. In 1976, Dr. Wolfgang Haken and Dr. Kenneth Appel of the University of Illinois used computer calculations in a proof of the four-color theorem, which states that any map needs only four colors to ensure that no adjacent regions are the same color.

The work was published -- and mathematicians began finding mistakes in it. In each case, Dr. Haken and Dr. Appel quickly fixed the error. But, "To many mathematicians, this left a very bad taste," said Dr. Robert D. MacPherson, an Annals editor.

To avoid a repetition, the Annals editors wanted a careful, complete review of Dr. Hales's proof. "But that's not how things turned out," Dr. MacPherson said. "It was a disappointment for all of us." The first group recruited to review the proof spent several years on it, but gave up a year ago, exhausted. Everything checked by the reviewers, led by Dr. Gabor Fejes Toth of the Hungarian Academy of Sciences, turned out to be correct. But the prospect of reviewing every calculation proved too daunting.

Dr. MacPherson likened the process to proofreading the listings in a phone book.

"Everywhere they looked in the phone book, it was right," he said, "and they looked in a lot of places."

The Annals considered publishing the paper with a disclaimer of sorts: This proof has been mostly, but not entirely, checked.

Dr. Conway of Princeton, despite his dislike of computer proofs, felt that a disclaimer cast unfair aspersions. "I was very unhappy about it," he said. The proof, Dr. Conway said, included more documentation than usual, including dated records of each step that Dr. Hales and a graduate student, Samuel P. Ferguson, had performed.

Faced with such criticism, the Annals editors reconsidered the disclaimer and sent the paper to another mathematician. The new reviewer agreed that the theoretical underpinnings were sound, and the editors arrived at a Solomon-like decision: they split Dr. Hales's paper in two.

"The part that's going in The Annals of Mathematics is a proof," Dr. MacPherson said. "We feel he has made a serious contribution to mathematics."

In a new policy, The Annals has decided that computer-assisted proofs have merit, but the journal will accord them a lower status than traditional proofs, regarding them more like laboratory experiments that provide supporting evidence.

"In some cases, it's not a good idea to verify computer proofs," Dr. MacPherson said. "It took the effort of many mathematicians for many years, and nothing came out of it."

Even in traditional proofs, reviewers rarely check every step, instead focusing mostly on the major points. In the end, they either believe the proof or not.

"It's like osmosis," said Dr. Akihiro Kanamori, a mathematics professor at Boston University who writes about the history of mathematics. "More and more people say it's a proof and you believe them."

That is why an earlier proof of the Kepler Conjecture, first offered eight years before Dr. Hales's, is rarely talked about these days. Dr. Wu-Yi Hsiang of the University of California at Berkeley claimed he had a proof in 1990, and in 1993 he published an article that he now calls an outline, not a complete proof.

Mathematicians sharply criticized it, saying that it contained holes of logic that they did not think Dr. Hsiang could fill.

The level of rigor and detail that mathematicians have demanded of proofs has waxed and waned over the centuries. Major mathematical fields of the 1700's and 1800's like calculus and topology developed without rigorous proofs.

"For quite some time in mathematics, arguments were basically descriptive," Dr. Kanamori said. "People would give what we would now call informal arguments."

But as mathematicians spun out ever more intuitive arguments, some "proofs" turned out to be false, leading them to shore up the foundations of their earlier work. Mathematicians continue to debate how much rigor a proof requires and whether too much emphasis on details stifles creativity.

Dr. Hsiang did not publish his complete proof until 2002, and it appeared as a book rather than in a peer-reviewed journal. Few have bothered to give the book a close read.

"Hsiang has not such a good track record," said Dr. Frank Quinn, a mathematics professor at Virginia Tech. "I don't want to spend time proving it's wrong."

Dr. Hsiang counters that his proof offers deeper insight and that others' understanding of his techniques is inadequate.

Some believe that computers, the source of the debate in Dr. Hales's proof, will actually quiet the debate over proofs. Instead of serving just as a tool for calculations, as in Dr. Hales's proof, computers can also be used to discover new proofs.

Mathematicians like Dr. Larry Wos of Argonne National Laboratory use "automated reasoning" computer programs: they enter axioms and the computer sifts through logical possibilities in search of a proof. Because of the huge number of possibilities, a human still needs to tell the computer where to search.

"The human mind will never be replaced," Dr. Wos said, but the advantage of computers is their lack of preconceptions. "They can follow paths that are totally counterintuitive," he said.

The software also fills in the tedious work, he said, giving the mathematician more time to contemplate other problems, and it generates as much or as little detail as a mathematician desires. "They are fully rigorous and fully formal in the best sense of the word," Dr. Wos said. "They tell you exactly how each step was obtained."
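The "enter axioms and search" loop can be sketched in miniature. This toy forward-chainer over propositional Horn clauses is far simpler than the resolution engines Dr. Wos actually used, but it shows the shape of the search:

```python
def forward_chain(facts, rules):
    """Tiny forward-chaining reasoner over Horn clauses.
    facts: set of atoms known true; rules: list of (premises, conclusion).
    Repeatedly fires any rule whose premises all hold until nothing new
    can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"p"}, "q"),
         ({"q", "r"}, "s"),
         ({"s"}, "t")]
derived = forward_chain({"p", "r"}, rules)
print(sorted(derived))  # ['p', 'q', 'r', 's', 't']
```

The human contribution the article describes is choosing the axioms and steering the search; the machine's contribution is firing the rules exhaustively and recording exactly how each fact was obtained.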

In 1996, Dr. Wos and a colleague, Dr. William McCune, used the software to prove a previously unsolved problem known as the Robbins Conjecture.

In a 2003 book, "Automated Reasoning and the Discovery of Missing and Elegant Proofs," Dr. Wos described new proofs and more elegant versions of known proofs discovered by computers.

Intel, the microchip giant, uses proof-checking software to check algorithms in its chips, in the hope of avoiding glitches like one in the original 1994 Pentium that caused numbers to divide incorrectly.

Dr. Hales has embarked on a similar project called Flyspeck -- the letters F, P and K stand for "formal proof of Kepler" -- to put to rest any last doubts about the computer proof.

Current software, however, cannot handle anything nearly as complex as the Kepler Conjecture. Dr. Hales estimates that it will take 20 years of cumulative effort by a team of mathematicians to complete.

As for his 1998 proof of the Kepler Conjecture, Dr. Hales said that final publication, after a review process, originally expected to last a few months, would be almost anticlimactic. "For me, the big moment was when I completed the proof," Dr. Hales said. "And I don't think anything will change when I see it in print."

## Re:Text of the article (-1, Troll)

## Anonymous Coward | about 10 years ago | (#8788224)

## I see no good reason why not.... (4, Interesting)

## kommakazi (610098) | about 10 years ago | (#8788016)

## as for me why it is not, you do not see the good r (0)

## jpBabelFish (768685) | about 10 years ago | (#8788035)

## Re:I see no good reason why not.... (1)

## wmspringer (569211) | about 10 years ago | (#8788036)

I had a calculator once that started giving wrong answers in the middle of a math test; very inconvenient

## Re:I see no good reason why not.... (1)

## wmspringer (569211) | about 10 years ago | (#8788131)

## mod parent up, please (1)

## mantera (685223) | about 10 years ago | (#8788107)

## Re:I see no good reason why not.... (1)

## iminplaya (723125) | about 10 years ago | (#8788112)

## I always loved this picture (1)

## DanThe1Man (46872) | about 10 years ago | (#8788021)

## no (2, Funny)

## sporty (27564) | about 10 years ago | (#8788024)

q.e.d.

## Re:no (0)

## Anonymous Coward | about 10 years ago | (#8788078)

## Theorem Provers (4, Informative)

## bsd4me (759597) | about 10 years ago | (#8788025)

Theorem provers [wikipedia.org] have been around for a long time. A net search should turn up a ton of hits. The key is to implement a system that can be verified by hand, and then build on it.
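That "small verified core, then build" design is how modern proof assistants work: a tiny trusted kernel checks every step, no matter how the proof was found. A sketch in Lean 4 (hypothetical theorem names, standard tactics):

```lean
-- Each theorem below is validated by the same small trusted kernel,
-- whether it was written by hand or found by automation.
theorem my_add_zero (n : Nat) : n + 0 = n := rfl

theorem my_congr (a b : Nat) (h : a = b) : a + 1 = b + 1 := by
  rw [h]
```

Because only the kernel has to be trusted, the tactics and search procedures layered on top can be arbitrarily clever without enlarging the trusted base.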

## new facet of an old issue (5, Insightful)

## colmore (56499) | about 10 years ago | (#8788027)

consider this: the hypothesis of the famous Riemann Zeta problem has been tested for trillions of different solutions, and it has held true in every case. (If you want an explanation of the Zeta problem, look elsewhere, I don't have the time)

Now that means that it's *probably* true, but nobody accepts that as mathematical proof.
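The same gap between checking and proving shows up in any conjecture you can test case by case. A sketch using the Goldbach conjecture rather than the zeta function (my substitution, chosen because it is easy to test): the loop below succeeds, yet proves nothing about the next even number.

```python
def is_prime(n):
    """Trial-division primality test: slow but obviously correct."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def goldbach_holds(n):
    """Check that the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Exhaustively checking every even number up to a bound is evidence,
# not a proof: the conjecture could still fail at a case never reached.
assert all(goldbach_holds(n) for n in range(4, 10_000, 2))
print("verified up to 10,000; still an open conjecture")
```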

On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) runs to tens of thousands of pages in many different journals. Given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.

What's the difference between these two cases really? What's the difference between these and relying on computer proofs that are, again, *probably* right?

In this light, the math of the late 19th and early 20th centuries was something of a golden age: modern standards of logical rigor were in place, but the big breakthroughs still used elementary enough techniques that the proofs could be written in only a few pages, and the majority of mathematically literate readers could be expected to follow along. These days proofs run to hundreds of pages, and only a handful of hyper-specialized readers can be expected to understand them, much less review them.

## Re:new facet of an old issue (2, Interesting)

## qbwiz (87077) | about 10 years ago | (#8788225)

> On the other hand, the classification problem for finite simple groups has been rigorously solved, but the collected proof (done in bits by hundreds of mathematicians working over 30 years) is tens of thousands of pages in many different journals. Given the standards of review, it is a virtual certainty that there is an error somewhere in there that hasn't been found. So, again, the solution to this problem is *probably* right, but it has been accepted as solved.

Wouldn't this be a good use for the computer, then: attempting to verify these disjointed proofs and combine them into one unified whole?

## Re:new facet of an old issue (0)

## Anonymous Coward | about 10 years ago | (#8788261)

## Humans for humans (1)

## forgetful (725420) | about 10 years ago | (#8788029)

## Calculators (1)

## OrthodonticJake (624565) | about 10 years ago | (#8788045)

## maturity (2, Insightful)

## jdkane (588293) | about 10 years ago | (#8788050)

> whether technology spurs greater achievements by speeding rote calculations or deprives people of fundamentals.

As far as the calculator example goes, I believe a person should understand the fundamentals before using the calculator. Don't give a power tool to a kid. Somebody with an understanding of the fundamentals can wield the tool correctly and wisely, whereas some cowboy is just dangerous. The old saying comes to mind: "know just enough to be dangerous." As for those oranges in the article, somebody had better figure out a way to confirm the computer's answer without having to go through the exact same meticulous steps. Don't put your trust in technology without the proof to back it up, because technology built by people is prone to error. I'm sure the orange problem won't cause harm to anybody, but I hope I never read such an article about a nuclear power plant!

## Do you trust mathematics, or not? (0)

## mkorman (740943) | about 10 years ago | (#8788051)

It always seemed to me that the real question here is whether or not we trust mathematics, not computers. After all, if we can prove that theorem provers work correctly, then it follows that the proofs they generate will be correct. Humans write the theorem provers, so that problem is certainly comprehensible.
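One way the field makes "prove the prover" tractable is the LCF architecture: keep the trusted core tiny and force every theorem through it, so auditing a few lines of kernel vouches for everything derived from them. A hypothetical Python sketch of the idea (the `Theorem` class and rule names are illustrative, not any real system's API):

```python
# Toy LCF-style proof kernel: theorems can only be minted by the
# kernel's inference rules, so trusting the proofs reduces to
# trusting this small core. Real provers (HOL, Coq) use the same
# architecture with a far richer logic.

_KERNEL = object()  # private token: only kernel rules may build theorems

class Theorem:
    def __init__(self, prop, _token=None):
        # Outside code cannot forge a Theorem; it lacks the token.
        if _token is not _KERNEL:
            raise ValueError("theorems may only be built by kernel rules")
        self.prop = prop

def refl(x):
    # Kernel rule: for any term x, "x = x" is a theorem.
    return Theorem(("eq", x, x), _token=_KERNEL)

def sym(thm):
    # Kernel rule: from "a = b" conclude "b = a".
    kind, a, b = thm.prop
    assert kind == "eq"
    return Theorem(("eq", b, a), _token=_KERNEL)

t = sym(refl("x"))
print(t.prop)  # prints ('eq', 'x', 'x')
```

Verifying this kernel by hand is feasible precisely because it is small; the mountain of derived proofs inherits its correctness.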

## I don't see what the big deal is (4, Interesting)

## ameoba (173803) | about 10 years ago | (#8788057)

"Automated theorem proving" is less an automatic process (like you'd get with an automated production line) and more of a mechanical assistance to the job (like using a fork-lift to move heavy things faster than you could by hand).

It takes work not only to convert a problem into a good representation, but also to structure the problem statement so that a theorem prover can make optimal use of it. Often, while following the output, you're forced to prove lemmas (sub-proofs) yourself.

Then, when you finally get a proof, you get the joy of trying to simplify it to something that -can- be understood by a person; again, this is part of the process that can't really be automated well.
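As a small illustration of that division of labor, here is what a machine-checked lemma looks like in a proof assistant such as Lean (the example lemma is mine, not from the thread): the human supplies the statement and proof term, and the machine verifies every step down to the axioms.

```lean
-- The human states the goal and names the justification;
-- the kernel mechanically checks that the term proves the statement.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The forklift analogy holds: the machine does none of the deciding, but it makes the checking fast and certain.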

## change the title (4, Insightful)

## NotAnotherReboot (262125) | about 10 years ago | (#8788058)

## actually, no (1)

## Lupus Rufus (11262) | about 10 years ago | (#8788221)

## Godel? (2, Informative)

## Anonymous Coward | about 10 years ago | (#8788069)

## Re:Godel? (1)

## Sir Pallas (696783) | about 10 years ago | (#8788161)

## Re:Godel? (1)

## bsd4me (759597) | about 10 years ago | (#8788232)

> Doesn't the Godel incompleteness theorem say they can't?

Yes and no. Can a program terminate (give a yes or no) for an arbitrary statement? Nope. Can a program terminate for a restricted set of statements? Yup.

There are practical consequences of this. (Disclaimer: memory hazy, but I considered grad school on the subject.) Type inference for the higher-order lambda calculus is undecidable, which basically means that we can't automatically type certain forms of computer programs. If you limit the input to let-bound polymorphism, you get a decidable type system (such as Hindley-Milner), and with it a plethora of powerful programming languages (ML, Haskell, etc.).

## Registration free link anyone? (-1, Offtopic)

## Hangin10 (704729) | about 10 years ago | (#8788086)

## Ask and you shall receive (0)

## Anonymous Coward | about 10 years ago | (#8788105)

## Wait, I know this one... (4, Funny)

## bomb_number_20 (168641) | about 10 years ago | (#8788090)

## making hard things easy... (3, Funny)

## Polo (30659) | about 10 years ago | (#8788096)

Also, chess is just Pattern Matching... I don't know if humans have the edge there or not.

## regfree link (3, Informative)

## werdnapk (706357) | about 10 years ago | (#8788097)

## Damn, I should have submitted this (2, Insightful)

## Pedrito (94783) | about 10 years ago | (#8788102)

Another issue is that you're then excluding any mathematicians who aren't also fairly adept programmers from really understanding your proof.

All of this said, computers are necessary for doing math these days, and I think mathematicians should make use of them. I just don't believe software development has reached a level of maturity that meets the stringent requirements of mathematical proofs.

## Why not.. (3, Interesting)

## Sir Pallas (696783) | about 10 years ago | (#8788103)

## well it looks like a quandary all right.. (-1, Troll)

## Anonymous Coward | about 10 years ago | (#8788109)

## Well can we trust humans? (1)

## bigsexyjoe (581721) | about 10 years ago | (#8788124)

## The secret proof to life... (-1, Offtopic)

## SunPin (596554) | about 10 years ago | (#8788137)

What else is there to know?

## The stack might get a bit deep, but... (4, Funny)

## Odin's Raven (145278) | about 10 years ago | (#8788140)

The obvious solution is to have the computer create a new proof that shows that the algorithm it used to create the original proof is, in fact, correct.

## A rather timely Slashdot fortune (2, Funny)

## Anonymous Coward | about 10 years ago | (#8788146)

## Wrong Summary (2, Informative)

## DougMackensie (79440) | about 10 years ago | (#8788149)

## 4 color map problem (1)

## stox (131684) | about 10 years ago | (#8788167)

## Verification of computer proofs is a pain... (3, Interesting)

## briaydemir (207637) | about 10 years ago | (#8788171)

The problem with computer-generated proofs is that in order to trust the result of the computer, you have to trust:

- the source code of the proof program,
- the compiler that translated it, and
- the hardware that ran it.

And of course, our understanding of the hardware depends on how accurate our understanding of the laws of physics is. Any mistake in the source code, compiler, or hardware, and the proof produced is potentially incorrect. That's an awful lot of stuff you have to check just to make absolutely sure the computer is correct. Then consider how many bug-free pieces of software you've encountered. ... Yeah, I can see why mathematicians would not trust computer-generated proofs.

Of course, people are not infallible either, but that's well known and expected. It's all about how much uncertainty people are willing to accept.

## Re:Verification of computer proofs is a pain... (1)

## Sir Pallas (696783) | about 10 years ago | (#8788191)

## The dawn of a new age of Math... and Science! (1)

## themaddone (180841) | about 10 years ago | (#8788175)

The hard part would become formulating the statements. For science, the benefit would be immeasurable: ask the computer whether something can be done, and get an answer. Knowledge could increase exponentially! (No, really -- once upon a time, only advanced mathematicians knew calculus, but now we learn it in high school. Just wait until warp theory is an entry-level college engineering course.)

## a mathematician's perspective (5, Interesting)

## jockeys (753885) | about 10 years ago | (#8788187)

there are also several levels of depth for proofs, ranging from "i've found a counter example so i can write the whole thing off as garbage" to "i have exhaustively and rigorously proved this starting with the basic axioms of number theory and worked my way on up"

the latter is really the only acceptable way to prove anything seriously. sometimes when you are reworking an already-done proof to illustrate a point, other mathematicians will allow a bit of latitude when it comes to cutting corners, but for a proof as far-reaching as the one in the article, i would only be interested in a "rigorous" proof, that is, one that starts with the foundational tenets of mathematics and combines those to form and prove other postulates, etc. very much a form of abstraction, not unlike large development projects.

the problem arises when one (or several) humans have to be able to objectively check the whole thing. to use my prior example of a large development project, no one developer at microsoft understands the whole of windows. it's too big for a single human to understand. each developer knows what he needs to do to complete his part, and so on and so forth.

traditionally, for proofs, a single mathematician (or a small group) would hammer out the whole proof, so the level of complexity remained at a human-understandable level (even if tedious). my concern, as a mathematician, with using an automated solution would be the rapidly growing order of complexity needed to properly back up increasingly complex proofs. as stated in the article, it's like trying to proofread a phonebook. (only, you must also consider that for every element of the proof (a particular listing) there are several branching layers of complexity (fundamental, underlying proofs many layers deep) underneath. this gets more complicated in an exponential fashion.) obviously this approach will only remain human-checkable for relatively small problems. (programmers: think of some horrible nondeterministic-polynomial problem like the "traveling salesman" problem. systems, like humans, are a finite resource, but if you increase the size of the problem, the complexity will quickly grow far beyond your ability to compute. large proofs suffer from the same difficulties, if not quite as concrete and pronounced as NP algorithms.)
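The traveling-salesman comparison can be made concrete with a brute-force sketch (Python; the function names and distance matrix are mine, for illustration). With n cities there are (n-1)! tours to enumerate, so each added city multiplies the work, just as each layer of lemmas multiplies what a checker must read:

```python
# Brute-force TSP: exhaustively enumerate every tour and keep the best.
# Exact, like a proof by cases, but the case count explodes factorially.
from itertools import permutations

def tour_length(tour, dist):
    # Length of the closed tour, including the edge back to the start.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def brute_force_tsp(dist):
    # Fix city 0 as the start and try every ordering of the rest:
    # (n-1)! candidate tours, checked exhaustively.
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p, dist))
    return (0,) + best, tour_length((0,) + best, dist)

# A 4-city example (symmetric distance matrix, made up for illustration).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
tour, length = brute_force_tsp(dist)
print(tour, length)
```

Four cities mean only 6 tours; forty cities would mean more tours than atoms in the observable universe, which is the flavor of growth large proofs exhibit.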

in closing, i would have to agree that computer proofs, no matter the effort and computing time put into them, really should not be viewed as being as rigorous as those provable "by hand" and human-understandable, even if the computer has arrived at a satisfactory conclusion, because we have no way of KNOWING that the computer has built up the proof correctly, except that it says it has.

## We Trust Every[One|Thing] just after we verify (1, Insightful)

## sPaKr (116314) | about 10 years ago | (#8788243)

## Mathematics is Human (2, Troll)

## Traser (60664) | about 10 years ago | (#8788250)

A computer can be a useful tool (I'll be doing computational graph theory this summer), but it is not human. It does not have the ability to hold the possibilities of ideal forms within it and understand. It does not think.

The use of numeric methods on applied problems, or symbolic methods on pure problems, is good and useful, but it does not constitute proof.

A human being, given an understanding of the underlying mathematics, must be able to go through the proof step by step and see that, from the givens, the conclusion is inevitable.

I don't accept the Four-Colour Theorem as proven true. I strongly suspect it to be so, but my suspicion does not truth make.

The Riemann hypothesis, on the other hand, is much, much further from being proved than the Four-Colour Theorem. Yes, millions of zeroes have been checked... but there are infinitely many zeroes, and all it takes for it to be false is for ONE of those zeroes to fall off the Re(s) = 1/2 line in the complex plane.

If I were giving odds, then millions divided by infinity is awfully close to zero.