
The End of Mathematical Proofs by Humans?

timothy posted about 9 years ago | from the somehow-I-doubt-it dept.

Math 549

vivin writes "I recall how I did a bunch of Mathematical Proofs when I was in high school. In fact, proofs were an important part of Math according to the CBSE curriculum in Indian Schools. We were taught how to analyze complex problems and then break them down into simple (atomic) steps. It is similar to the derivation of a Physics formula. Proofs form a significant part of what Mathematicians do. However, according to this article from the Economist, it seems that the use of computers to generate proofs is causing mathematicians to 're-examine the foundations of their discipline.' Critics of computer-aided proofs, however, say that the proofs are hard to verify due to the large number of steps and hence may be inherently flawed. Defenders point out that there are non-computer-aided proofs that are also rather large and unverifiable, like the Classification of Finite Simple Groups. Computer-aided proofs have been instrumental in solving some vexing problems like the Four Color Theorem."



FP! (-1, Offtopic)

Anonymous Coward | about 9 years ago | (#12151954)

Linux Rocks!

Re:FP! (-1, Offtopic)

Anonymous Coward | about 9 years ago | (#12152020)

Offtopic but true! Props to you, AC.

Re:FP! (-1, Offtopic)

Anonymous Coward | about 9 years ago | (#12152034)

"Linux Rocks!" is modded as Offtopic; come to think of it, "Windows Rocks!" would probably be modded as "Troll" ;)

Science by AI (2, Insightful)

MaDeR (826021) | about 9 years ago | (#12151955)

I think that in the far future all science will be done by AI, because knowledge will be too complicated and complex to understand for even the most intelligent human, whether chemically boosted, genetically engineered, or stuffed with chips.

Re:Science by AI (5, Interesting)

FleaPlus (6935) | about 9 years ago | (#12152004)

This reminds me of a Nature paper from last year:

Functional genomic hypothesis generation and experimentation by a robot scientist [nature.com]

The question of whether it is possible to automate the scientific process is of both great theoretical interest and increasing practical importance because, in many scientific areas, data are being generated much faster than they can be effectively analysed. We describe a physically implemented robotic system that applies techniques from artificial intelligence to carry out cycles of scientific experimentation. The system automatically originates hypotheses to explain observations, devises experiments to test these hypotheses, physically runs the experiments using a laboratory robot, interprets the results to falsify hypotheses inconsistent with the data, and then repeats the cycle. Here we apply the system to the determination of gene function using deletion mutants of yeast (Saccharomyces cerevisiae) and auxotrophic growth experiments. We built and tested a detailed logical model (involving genes, proteins and metabolites) of the aromatic amino acid synthesis pathway. In biological experiments that automatically reconstruct parts of this model, we show that an intelligent experiment selection strategy is competitive with human performance and significantly outperforms, with a cost decrease of 3-fold and 100-fold (respectively), both cheapest and random-experiment selection.

New Scientist also had an article on it: "Robot scientist outperforms humans in lab." [newscientist.com]

Re:Science by AI (4, Insightful)

Zork the Almighty (599344) | about 9 years ago | (#12152009)

Well, at least in regards to math, I strongly doubt that this will ever be the case. Mathematics is developed over decades and centuries. With a few notable exceptions, it doesn't just fall out of the sky in textbook form. Most areas of math started out as a gigantic mess (e.g. calculus, linear algebra, even geometry), and it has taken the work of countless researchers, authors, and teachers to distill and refine it. This process will continue, and it is inevitable that the subjects which baffle us today will be hammered out and taught to grade school students eventually. Well-developed theory makes mathematics easier, and this in turn fuels new discoveries.

Re:Science by AI (0)

Anonymous Coward | about 9 years ago | (#12152026)

By that time, I think your "most intelligent human" will be indistinguishable from the computer, given the stuffing with chips that you referred to.

I imagine both will be a pretty tangled network of flesh&blood (natural or genetically engineered) and more traditional hardware.

Yesterday's news? (0, Troll)

nextdoornerd (845674) | about 9 years ago | (#12152080)


First, the story has already been discussed on technocrat.net days ago (well, with 5 comments or so, not really "discussed" as such :))

Second, I have to run to lunch now, boys... I'm eager to wrestle over my master's subject with the whole /. crowd... Oh, the sanity :)

Anyway, more to the point: it's not really AI, man. (At least not nowadays.) Just some wonderfully pure math represented in (most cases) painfully obfuscated, say, ML code >:\ (oh functionals, how I love thee... let me count the ways... uhm... zero? Ahemm.)

I dress myself (-1, Offtopic)

Anonymous Coward | about 9 years ago | (#12151956)


Critics Reaction... (4, Insightful)

Suhas (232056) | about 9 years ago | (#12151958)

...From TFA if a computer is used to make this reduction, then the number of small, obvious steps can be in the hundreds of thousands--impractical even for the most diligent mathematician to check by hand. Critics of computer-aided proof claim that this impracticability means that such proofs are inherently flawed.

So basically what they are saying is that if the proof is too long to be checked, then it is flawed? WTF?

The Economist discussing math.... (0)

Anonymous Coward | about 9 years ago | (#12151977)

is like Microsoft discussing computer security.

If it were a math journal publishing this paper it might have some credibility.

But considering the source, it's just so a bunch of MBAs can feel smarter than PhD Mathematicians for a while.

Another Lomborg case? (2, Interesting)

orzetto (545509) | about 9 years ago | (#12152133)

The Economist is the same magazine that supported Bjørn Lomborg [economist.com], thereby proving their utter incompetence in environmental science. They defended him in spite of clear and very detailed indications [lomborg-errors.dk] from the scientific community that he was nuts.

I suppose they know economics when they talk about it, but they have demonstrated an unfortunate habit of pontificating on things they don't have a clue about. I for one think they burned a lot of karma.

Re:Critics Reaction... (5, Insightful)

damieng (230610) | about 9 years ago | (#12151981)

Yes, that's right.

If you can't independently examine and verify your "proof" then how can it be considered proof of anything?

Re:Critics Reaction... (3, Insightful)

wwahammy (765566) | about 9 years ago | (#12151982)

Flawed in the sense that it can't be peer reviewed to be "proven." It could be true, but because it can't be independently verified, it's not a proof, because you can't prove it. Now whether this is truly a situation where a proof is unprovable, or just people reacting to the thought of their profession being eliminated by technology, is another debate entirely.

Re:Critics Reaction... (1)

oneandoneis2 (777721) | about 9 years ago | (#12151984)

...too long to be checked, then it is flawed?...

You could be on to something here: That rule might apply to software too.

How many lines of code do they reckon make up the Windows source code..? :o)

Re:Critics Reaction... (1, Insightful)

Anonymous Coward | about 9 years ago | (#12151998)

A proof that can't be checked is definitely flawed.

Re:Critics Reaction... (0)

Anonymous Coward | about 9 years ago | (#12152017)

Yeah, why not use a computer to check it?

Re:Critics Reaction... (1)

Zork the Almighty (599344) | about 9 years ago | (#12152032)

If it can't be checked, it's not really a "proof", is it? Now granted, that leaves a lot of room for leeway. For example, one could construct an algorithm for the problem and then prove that the algorithm is correct. There are other, more inventive possibilities too.

Re:Critics Reaction... (1)

Tomfrh (719891) | about 9 years ago | (#12152088)

So the proof only exists if you observe it? That reminds me of a certain cat experiment...

Re:Critics Reaction... (4, Insightful)

Zork the Almighty (599344) | about 9 years ago | (#12152116)

Whether or not something is a proof is entirely our distinction to make. We choose the axioms on which the proof is based. To paraphrase Bill Klem (a famous umpire): when asked whether a pitch was a ball or a strike, "It isn't anything until I call it".

Re:Critics Reaction... (3, Insightful)

MonMotha (514624) | about 9 years ago | (#12152083)

A mathematical proof is nothing but a means of convincing someone (other mathematicians) that what you assert is indeed correct, given a set of presumably known, accepted-as-true, or otherwise previously proven "things". The whole point of a proof is that someone else knows wtf it is talking about.

I could assert that 2+2=4. You may believe me, but have I really proven it to you? Not yet, so you don't need to believe me. If I instead say that the cardinality of the union of two disjoint sets, each of cardinality 2, is 4, then I've (sort of) shown you that 2+2=4, assuming you accept my definitions of disjoint sets, set union, and set cardinality (which presumably I've proven elsewhere, or taken from some other source). Now do you believe me that 2+2=4?

I could assert anything. You may or may not know if it's true. A proof is just something to back up my assertion and convince you that I'm right. Hence, if a proof is unintelligible, it's pretty darn worthless.
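The 2+2=4 example above is exactly the kind of claim a proof assistant can settle mechanically: the checker's kernel plays the role of the skeptical reader who accepts only the agreed definitions. A minimal sketch in Lean (here the proof is pure definitional unfolding):

```lean
-- 2 + 2 = 4 holds by computation on the definition of Nat addition,
-- so `rfl` (reflexivity) is accepted by the kernel as a complete proof.
example : 2 + 2 = 4 := rfl
```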

Creativity (5, Insightful)

Daxx_61 (828017) | about 9 years ago | (#12151961)

Much of mathematics isn't just grunt power, there is also a lot of creative work going on there. Without humans to drive the computers doing the work in the right directions, it could take a long time before a computer would be able to get its proof - it simply doesn't know what it is looking for.

I, for one, welcome our new robotic theorem-proving overlords.

Re:Creativity (0)

Anonymous Coward | about 9 years ago | (#12152036)

Precisely. The area of mathematics (even pointed to in the article) is called "experimental mathematics". You will only really see it in the area of discrete/finite mathematics. But the interesting stuff is still hard to do.

For me it was useful in showing what was possible and what wasn't. We would run programs to find "things" in projective spaces. Even if we did find something, you would still be stuck in the position of 1) seeing if it is new, 2) proving that it is new (subtle but different). (Think of the symmetry groups of objects. If it has a big group then you would find quadrillions of these things; the smaller the group, the fewer of them there are, and hence the harder they are to find.)

Generally, exhaustive computer searches are impossible for "interesting" problems (you know, given the amount of time before the sun goes medieval on our arses). To make the searches feasible you need to think about the assumptions you need (starting configurations, symmetries, etc.) and then perform the search. The data you get back will give you some data points on where to do the next search. Hence "experimental mathematics".

Hmpth (1)

headkase (533448) | about 9 years ago | (#12152041)

It's more like 90% pattern matching, 5% positive feedback effector systems and 5% adaptive selection of all engrams*.

Source [bartleby.com]
* A physical alteration thought to occur in living neural tissue in response to stimuli, posited as an explanation for memory.
Um. Yeah. Or something like that.

Re:Creativity (4, Insightful)

Zork the Almighty (599344) | about 9 years ago | (#12152092)

Without humans to drive the computers doing the work in the right directions, it could take a long time before a computer would be able to get its proof - it simply doesn't know what it is looking for.

To some extent that's true, except in areas where human understanding has reduced mathematical proof to a mechanical process. For example, verifying algebraic identities, or even geometric proofs. A more advanced example is the Risch algorithm for elementary integration. It amounts to a proof that an integral either is or is not expressible in terms of elementary functions. Eventually we come to understand an area to such an extent that we can implement mechanical algorithms and move on. The proper role of the computer is to carry out these algorithms, so that we can use them to discover something else.
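The "mechanical process" for verifying algebraic identities can be sketched as randomized evaluation: two polynomial expressions that agree at many random points are, with overwhelming probability, the same polynomial (the Schwartz-Zippel idea). A toy sketch, with made-up example identities:

```python
import random

def probably_equal(f, g, trials=50, bound=10**9):
    """Randomized polynomial identity test: evaluate both sides at
    random integer points; any disagreement disproves the identity,
    and agreement everywhere makes equality overwhelmingly likely."""
    for _ in range(trials):
        x = random.randint(-bound, bound)
        if f(x) != g(x):
            return False  # found a counterexample: definitely not an identity
    return True

# (x + 1)^3 = x^3 + 3x^2 + 3x + 1 holds; the off-by-one version fails.
assert probably_equal(lambda x: (x + 1) ** 3,
                      lambda x: x ** 3 + 3 * x ** 2 + 3 * x + 1)
assert not probably_equal(lambda x: (x + 1) ** 3,
                          lambda x: x ** 3 + 3 * x ** 2 + 3 * x)
```

A computer algebra system does this symbolically instead, but even the randomized version shows why the verification step can be handed to a machine.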

R. Daneel? Is that you? (0)

Anonymous Coward | about 9 years ago | (#12151963)

I for one welcome our new artificial über-geek overlords.

If computers could write proofs... (4, Insightful)

John Allsup (987) | about 9 years ago | (#12151965)

Short, sweet, beautiful proofs of interesting and useful theorems? I would welcome them with open arms.

As a tool to produce vast quantities of precise logical porridge quickly, computers have no equal in today's world, yet that is not what real mathematical proofs should be about.

Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. This is where computers will always be weaker.

Re:If computers could write proofs... (2, Insightful)

Dancin_Santa (265275) | about 9 years ago | (#12151989)

Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight. This is where computers will always be weaker.

So you're saying that even a theoretical sufficiently advanced computer would be unable to match the logic and creativity of a human being? I think a simple brute force counter written in Mathematica (unlimited integer lengths) whose output was executed by a CPU would prove you wrong.

Computers can separate wheat from chaff. That's what AI is all about. No, seamless human interaction is still a ways off, but as for number crunching, the ability to compare large, disparate sets, and logical "thinking", computers are vastly better suited to the task than humans in every respect.

Re:If computers could write proofs... (4, Funny)

Hideyoshi (551241) | about 9 years ago | (#12151997)

Computers can separate wheat from chaff. That's what AI is all about.

Which explains the glorious successes that subject has enjoyed in the last few decades...

Re:If computers could write proofs... (1)

Anonymous Coward | about 9 years ago | (#12152013)

Your rip on AI would be applauded, except that AI is used in so much everyday computing that it's amazing it still gets a bad rap.

Everything from load balancing to spam filtering to high-speed 3d graphics displaying uses AI to some extent. We used to call them expert systems, but you don't hear that term so much anymore because almost every system is an expert system of some kind.

clever - yes, short - no (1)

S3D (745318) | about 9 years ago | (#12152046)

Mathematical proofs should show short, clever ways of connecting otherwise disparate concepts that are only obvious in hindsight.

Modern mathematics is very complex; important and useful theorems proven recently have huge and difficult proofs, built from a great many steps. Important results rarely have short proofs.
Take a look at the concepts used in one of the most famous proofs of the XX century: Fermat's last theorem [wikipedia.org]. Its proof followed from the Taniyama-Shimura theorem [wikipedia.org], which establishes a non-obvious connection between quite different areas of mathematics. The proof is very long and complex, but of great importance for mathematics, and may be related to cryptography and even physics. Such proofs are also well beyond the power of near-future computers.

Re:If computers could write proofs... (1)

shanen (462549) | about 9 years ago | (#12152075)

Sometimes they do. In fact, some of the earliest theorem proving programs found new, short, and even elegant proofs of old theorems. However, after that the field didn't seem to advance very much for quite a while... Perhaps there have been some big breakthroughs lately? At least that's what this discussion seems to suggest.

Actually, I was officially a math major for about a year (before my transition to the much easier computer science). Some parts of it were pretty simple, but the proofs always threw me. I'd spend a couple of pages proving what I thought was the crucial step, and old Professor Bichteler would tell me that it was obvious, but the critical part was over here in the line between these two other steps, where I'd incorrectly thought it was obvious... Also, I had a tendency to prove almost everything by contradiction, and the professors wanted direct proofs. Which leads back to the original topic, since many computerized theorem-proving systems have favored proof by contradiction.

(However, the real reason I didn't become a better mathematician was probably the forced retirement of Professor Greenwood. He was a seriously great teacher, and I've always felt that a couple more classes with him would have made the difference...)

If there are no proofs (0)

willmeister (709686) | about 9 years ago | (#12151968)

then my brother won't have to suffer through what I had to in High School Geometry. Long Live Proofs

Seems simpler to prove proofs-by-computer (3, Insightful)

Anonymous Coward | about 9 years ago | (#12151969)

Simply prove the program correct; and all its outputs will be correct?

To do that.... well, just make sure the program was designed by a correct computer.

Re:Seems simpler to prove proofs-by-computer (5, Informative)

rxmd (205533) | about 9 years ago | (#12152156)

Three major obstacles with this approach (which has been tried BTW):
  • Logical problems with proofs for correctness. For example, it has been proven that no program can prove itself correct.
  • Correctness proofs are hard to do and incredibly tedious. Have you ever tried it? And no, you can't have a program do them, because you'd have to prove this program correct, which sends you right back to square #1.
  • You'd have to prove all sorts of other factors correct, including the operating system and hardware your program is running on. This leads to another set of interesting problems, including that "correct hardware" is useful only as a theoretical concept. What's a "correct computer" if there's a small probability that bits will spontaneously flip in memory, for example?
In short: while it might seem elegant to prove the prover, then have everything else proved by this prover, this approach has little value in practice.

Does it really work? (1, Funny)

janek78 (861508) | about 9 years ago | (#12151971)

Computer generated proofs really do seem like a real time saver. Provided that they work. Now, if we could only prove that... Let me think, how about building this atomic superpowered robotic monster to prove that computer generated proofs work? Or perhaps we could use those computers to prove it themselves!

Godel/Turing/Cohen... (-1)

ivano (584883) | about 9 years ago | (#12151976)

...have pretty much *proved* that you need humans to do mathematics. It's one of the few occupations on this planet that have been proven to exist :)


Re:Godel/Turing/Cohen... (2, Informative)

willmeister (709686) | about 9 years ago | (#12151996)

I have no doubt that in 50 years (given current progress), creativity as we know it will be commonly implemented in advanced computers. Your brain is nothing more than a biologically advanced computer, complete with binary nerve signals and the works.

Why can't human brain functions be copied and improved upon in circuitry, other than the currently prohibitive costs involved and our inability to accurately determine the exact neuron mapping of anyone's brain? Even the dynamic neuron connections could potentially be emulated or mimicked by a computer.

Re:Godel/Turing/Cohen... (0)

Anonymous Coward | about 9 years ago | (#12152090)

You have no doubt because you're obviously clueless about principles of computers' operation.

Re:Godel/Turing/Cohen... (3, Insightful)

Hideyoshi (551241) | about 9 years ago | (#12152018)

No, they didn't "prove" any such thing - you've been reading too much Roger Penrose if you think so. There's absolutely no evidence that human minds have magical access to truths which formal systems don't; there once was a time when it was thought "obvious" that parallel lines could never meet, and it's still hard for many people to believe that there is no such thing as universal time.

Re:Godel/Turing/Cohen... (0)

Anonymous Coward | about 9 years ago | (#12152044)

Um, and indeed it's true that parallel lines never meet - unless you want to bend (pun intended, but it's true literally) the definition of parallel.

Parallel lines - those whose (minimal) distance from each other is the same at all points - indeed do not have some places where they touch and others where they don't.

If some silly mathematician wants to publish a paper saying that he projected those lines onto some curved space so now they touch - yet he still wants to call them parallel - that's a linguistics problem, not a math problem.

Re:Godel/Turing/Cohen... (0, Troll)

ivano (584883) | about 9 years ago | (#12152065)

>> There's absolutely no evidence that human minds have magical access to truths which formal systems don't

What does Penrose have to do with it? His little "quantum computers in our brain" idea has nothing to do with what I said. I gave you my sources: Cohen, Turing, and Godel. That's it. Ask a logician if you don't believe me. All of these guys showed the limitations of formal systems (my bit about humans is irrelevant - and not very funny to begin with).


Re:Godel/Turing/Cohen... (0)

Anonymous Coward | about 9 years ago | (#12152119)

I think the point is that our brain is, in fact, a formal system -- just a very big one. All one needs to do is implement our brain as a computer, and we should be able to prove anything with that computer that we could with a human.


Re:Godel/Turing/Cohen... (0)

Anonymous Coward | about 9 years ago | (#12152081)

Your ignorance shows - of course they did prove it, and you would know it if you spent more time checking your facts than ranting on /. They proved that certain things *cannot* be proven in formal systems and that certain problems cannot be solved by computers/Turing machines.

Maths and computers? Who'd have thought (0)

Anonymous Coward | about 9 years ago | (#12151979)

Wow, you mean computers can do maths? Maybe now someone can include a calculator with my favourite shipping of XP, or... maybe we could use math in programming?!! omfgbbq..

Uh, I doubt it. (1, Insightful)

Anonymous Coward | about 9 years ago | (#12151983)

Since you're going to need a human to write the proof of correctness for the program that wrote the other proofs, aren't you now?

Anyway, the entire point of a proof is to be read by humans. This isn't like, say, a computer program, whose output only a machine needs to check. A mathematical proof is something humans create in order to convince other humans of things beyond all doubt. The four color theorem proof was pretty closely watched; I guarantee you if you just go up to someone and say "statement blah because my expert system verifies it to be the case" there is going to be some fucking doubt still there.

Re:Uh, I doubt it. (1)

g1t>>v (121036) | about 9 years ago | (#12152060)

And what if you go up to someone and say "statement blah because here's a (very large) proof that my expert system generated, but which can be verified with a simple program which is just a few pages long"? Remember that *finding* a proof is hard (and hence requires human creativity / AI stuff) but *checking* that a proof is correct is easy (just a bit tedious when the proof is very large; and who better to trust with that job than a computer?).
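This asymmetry (finding is hard, checking is mechanical) is easy to make concrete: a complete checker for a toy proof format, where every line is either a listed axiom or follows from two earlier lines by modus ponens, fits in a dozen lines. The encoding below is invented for illustration, not any real proof system's format:

```python
# Each proof line is a formula plus a justification: the string "axiom",
# or ("mp", i, j) meaning line i proved X and line j proved ("->", X, Y).
def check_proof(axioms, proof):
    """Return True iff every line is justified by the axioms or by
    modus ponens applied to earlier lines."""
    derived = []
    for formula, just in proof:
        if just == "axiom":
            ok = formula in axioms
        else:
            _, i, j = just
            premise, implication = derived[i], derived[j]
            ok = implication == ("->", premise, formula)
        if not ok:
            return False
        derived.append(formula)
    return True

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
proof = [
    ("p", "axiom"),
    (("->", "p", "q"), "axiom"),
    ("q", ("mp", 0, 1)),          # modus ponens: p, p->q |- q
    (("->", "q", "r"), "axiom"),
    ("r", ("mp", 2, 3)),          # modus ponens: q, q->r |- r
]
assert check_proof(axioms, proof)
```

The checker is trivially auditable even when the proof it checks runs to millions of lines; that is the whole argument for machine-checked proofs.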

The best math is always elegant. (5, Interesting)

Pants75 (708191) | about 9 years ago | (#12151988)

That's what my math teacher always said, back in school. I'm sure he would extend that statement to include proofs, given that they are also just math.

What about Fermat's last theorem? Fermat wrote in the margin of his notebook that he had a proof, but it was too large to fit there, so he'd write it on the next page. Trouble was, the next page was missing from the book.

The modern proof of FLT took hundreds of pages of dense math and went through some math concepts that AFAIK hadn't even been invented in Fermat's time.

What was Fermat's proof (if it existed)? It would surely have been far more elegant than the modern version.

That doesn't make the modern version wrong, just less pure, I feel.

The problem with modern computer aided proofs is they allow the proof to become unwieldy and overly verbose, compared to what it would have to be if just a human produced it.

Such is progress I guess.

Re:The best math is always elegant. (3, Funny)

Craigj0 (10745) | about 9 years ago | (#12152015)

Perhaps his proof was flawed and he realised it so he ripped the last page out?

Re:The best math is always elegant. (1)

Pants75 (708191) | about 9 years ago | (#12152024)

Yeah, that's always been a big assumption about Fermat.

I'd love someone who could test it to get hold of a copy though.


Computer _aided_ proofs (4, Insightful)

irexe (567524) | about 9 years ago | (#12151991)

Computer proofs, like the graph-coloring proof, are not proofs that are completely generated by a computer. The computer is merely used to brute-force a fairly large number of 'special' cases which together account for all cases. The construction of the proof method is and will remain human work, lest we create AI that matches our own intelligence. In short, they are computer-aided proofs only.

Further and more importantly, at this point we do not have and are not likely to have a machine that can prove any provable theorem (and fyi, not all truths in mathematics are provable!).
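The brute-force half of such computer-aided proofs is ordinary search. As an illustration of the kind of per-case check a program can grind through (the graph below is a made-up example, not one of the actual Four Color Theorem configurations), here is a backtracking test of whether a given adjacency structure is colorable with k colors:

```python
def colorable(adj, k=4):
    """Backtracking check that the graph given by adjacency lists
    can be properly colored with k colors."""
    n = len(adj)
    color = [None] * n
    def place(v):
        if v == n:
            return True  # every vertex colored with no conflicts
        for c in range(k):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if place(v + 1):
                    return True
        color[v] = None  # undo and backtrack
        return False
    return place(0)

# A wheel with 5 outer vertices (hub 0): 4 colors suffice, 3 do not.
wheel = {0: [1, 2, 3, 4, 5],
         1: [0, 2, 5], 2: [0, 1, 3], 3: [0, 2, 4],
         4: [0, 3, 5], 5: [0, 4, 1]}
assert colorable(wheel, 4) and not colorable(wheel, 3)
```

The odd wheel needs all four colors, which the search confirms by exhausting every 3-coloring; the real proof's reducibility checks are far more involved, but equally mechanical.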

Re:Computer _aided_ proofs (2, Informative)

g1t>>v (121036) | about 9 years ago | (#12152070)

Then you'd better check your facts... there must be dozens (if not hundreds - erhhh, speaking of checking one's facts :-) ) of systems in which mathematical theorems are routinely proved (HOL, PVS, Coq, Mizar, to name just a few)!

very long proofs (0)

bkubi (834026) | about 9 years ago | (#12152000)

can lead to different results than shorter ones, but both are correct. And this inherent failure of mathematics can be used to create new mathematics. Anyone interested should read my favorite science fiction author: Greg Egan - Luminous. I really like the book.

Re:very long proofs (0)

StrawberryFrog (67065) | about 9 years ago | (#12152028)

can lead to different results than shorter ones, but both are correct.

That to me is an extraordinary claim, and I'd like to see proof of it.

Anyone interested, read my favorite scince fiction author:

Much as I like Greg Egan, fiction is never a proof of anything. Because it's, you know, not actual.

Are computer-aided proofs really proof? (3, Insightful)

Mattb90 (666532) | about 9 years ago | (#12152001)

During my preparation for my interview to study Mathematics at Cambridge last year, one of the discussion topics that came up was computer-aided proof. Between the interview-experienced teacher, a school colleague who was also applying, and myself, we came to the idea that computer-aided proof might not be proof at all, because proof for Mathematicians is the ability to reproduce the workings by yourself - it might take a very long time, but the idea should be that a human could derive them in order for them to be considered true proofs of human concepts. Whether many in the Mathematical community in England take this view, I won't know until the end of the year when I hope to start my course - but based on this debate, it wouldn't surprise me if quite a few did.

Re:Are computer-aided proofs really proof? (1)

ockegheim (808089) | about 9 years ago | (#12152127)

Accepting a computer-generated proof too complex for one mathematician to reproduce would require an element of faith. A non-mathematician like me must have faith in the existing system of peer review. Whether mathematicians would be comfortable in a quasi-theological role is a different matter.

No progress without understanding (5, Insightful)

Freaky Spook (811861) | about 9 years ago | (#12152002)

I remember how much I hated learning algebra, trig, calculus etc & how much the theory sucked. I never saw any point to it & loved it when I discovered my TI-83 could do pretty much everything.

Although I discovered easier ways to do the arithmetic, I still knew the underlying theory of the equations & what the numbers were actually doing, not just what a computer was telling me.

Students should learn this; these are the basic building blocks of a science that dictates pretty much everything on this planet & although they won't have a use for everything they are taught, they will have enough knowledge to "problem solve", which is what most of high school maths is designed to do: it trains our brains to think logically & be able to work out complex problems.

How are people going to be able to further physics, medicine, biology if they get into their respective tertiary courses without understanding the basic principles of all science & have to learn it all over again??

Or what about when computers just won't work & things have to be done by hand??

It's fair to integrate computers into maths, but not at the expense of the theory that makes us understand how things work; we should not put all our faith in technology just because it's the easy thing to do.

What about feigenbaum constant? (0)

Anonymous Coward | about 9 years ago | (#12152003)

Doesn't the story go that the Feigenbaum constant [wolfram.com] was discovered because he moved away from computers, not onto them? Or was that Lotfi Zadeh [seattlerobotics.org] and fuzzy logic?

Re:What about feigenbaum constant? (2, Interesting)

Zork the Almighty (599344) | about 9 years ago | (#12152072)

I believe that Feigenbaum's constant was discovered with the aid of a programmable pocket calculator. He noticed something, and used the computer to check it out. I think that of all the things that computers can do for mathematicians, this is the most valuable. You can ask "what about this" and the computer will do the grunt work. It's made a grad student's life much easier I'm sure :)
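That calculator-style experiment is easy to reproduce today. The sketch below (iteration counts and the derivative step are guesses, not tuned values) tracks the superstable parameters of the logistic map x -> r*x*(1-x) and takes ratios of successive gaps, which approach Feigenbaum's delta of about 4.669:

```python
def f_iter(r, x, k):
    """Apply the logistic map x -> r*x*(1-x) k times."""
    for _ in range(k):
        x = r * x * (1 - x)
    return x

def superstable(r_guess, period):
    """Newton's method for the r where x = 1/2 lies on a cycle of the
    given period, i.e. a root of g(r) = f^period(1/2) - 1/2."""
    r = r_guess
    for _ in range(60):
        g = f_iter(r, 0.5, period) - 0.5
        h = 1e-8  # step for the numerical derivative (a guess, not tuned)
        dg = (f_iter(r + h, 0.5, period) - f_iter(r - h, 0.5, period)) / (2 * h)
        r -= g / dg
    return r

# Superstable parameters for periods 1 and 2 are known in closed form.
rs = [2.0, 1 + 5 ** 0.5]
for n in range(2, 8):
    # Extrapolate a starting guess using the expected geometric scaling.
    guess = rs[-1] + (rs[-1] - rs[-2]) / 4.67
    rs.append(superstable(guess, 2 ** n))

# Ratios of successive parameter gaps approach Feigenbaum's delta.
deltas = [(rs[i - 1] - rs[i - 2]) / (rs[i] - rs[i - 1])
          for i in range(2, len(rs))]
print(round(deltas[-1], 3))  # close to 4.669
```

This is exactly the "computer does the grunt work" pattern: the human notices the scaling, the machine pins down the numbers.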

Theorem Proving (2, Interesting)

NitsujTPU (19263) | about 9 years ago | (#12152025)

Theorem proving is NP Complete.

FOL Theorem provers jump through a number of hoops to make the whole bit a little more practical, but realistically speaking, having a computer that just runs through mathematical proofs in the manner that a human does is a long way off.

An interesting thing about the article is that the first proof was done with an FOL prover... it was a long, non-intuitive proof, but an FOL prover did produce it.

The second was done with code a human had to write. There is no computer replacing the activity of a human in that case that I can see. It was merely code to brute-force the problem from what I can tell, but I didn't read the guy's article (from what I can see, it hasn't reached publication yet anyway).
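For the propositional fragment at least, the core machinery of such provers is small enough to sketch: a naive resolution loop that saturates a clause set and reports a refutation if it derives the empty clause. This is nowhere near a practical FOL prover (no unification, no indexing), just the underlying idea:

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses (sets of literals; '-' = negation)."""
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("-") else "-" + lit
        if neg in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {neg})))
    return out

def unsatisfiable(clauses):
    """Saturate under resolution; deriving the empty clause refutes the set."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True   # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False          # saturated without contradiction
        clauses |= new

# Prove p, p -> q |- q by refuting the clauses {p}, {-p, q}, {-q}.
assert unsatisfiable([{"p"}, {"-p", "q"}, {"-q"}])
assert not unsatisfiable([{"p"}, {"-p", "q"}])
```

The blowup the parent mentions is visible even here: saturation generates clauses combinatorially, which is why real provers spend most of their machinery on pruning.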

Re:Theorem Proving (0)

Anonymous Coward | about 9 years ago | (#12152121)

Theorem proving is NP-complete.

You think so? Prove it. Or show us a proof.

Elegant proofs (2, Interesting)

erikkemperman (252014) | about 9 years ago | (#12152027)

I'm certainly not a very gifted mathematician, but I'd like to think my grasp on it is at least adequate to make this point: from the proofs I've seen and understood, it seems that while the shorter and more elegant proofs may not strictly be more "truthful" than complicated ugly ones, the important thing is they are so much easier to understand, verify, and explain to others. For instance, I remember being especially charmed by Cantor's diagonal argument and will not forget that particular method of reasoning for the rest of my life.

But maybe there's a point where simple proofs just won't do it; if there were a 4-line proof of Fermat's last conjecture, I'm sure someone would have found it before Andrew Wiles proved the thing in something like 80 pages... In such cases, computer-aided reasoning is really the only way to go, I'd say. It's probably easier to verify a proof checker than a proof.
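Cantor's diagonal argument mentioned above is mechanical enough to sketch in code. Here is a finite toy version in Python (my own illustration; the real argument works on infinite sequences, but the flip-the-diagonal step is the same):

```python
# Finite toy version of Cantor's diagonal argument: given any list of
# n binary sequences of length n, flip the i-th entry of the i-th row
# to build a sequence that differs from every row, so it is not listed.
def diagonalize(rows):
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [[0, 0, 0],
        [1, 1, 0],
        [0, 1, 1]]
print(diagonalize(rows))  # -> [1, 0, 0]
assert diagonalize(rows) not in rows  # differs from row i at position i
```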

unverifiable (2, Interesting)

StrawberryFrog (67065) | about 9 years ago | (#12152038)

However, critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence, may be inherently flawed.

Nonsense. If people can't verify proofs, then what you need to do is to verify them by computer :)

I'm not entirely joking here - so long as the verification program is written independently of the theorem-proving program, and it can reliably distinguish between valid and invalid logical inferences, what's the problem? It's simple automated testing, really.
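For what it's worth, such an independent checker can be very small. A toy sketch in Python, with a proof format I made up for the example (a step is accepted if it is an axiom or follows from already-established facts by modus ponens):

```python
# Toy proof checker: facts are strings or ("->", P, Q) implication
# tuples; a proof is a list of steps to establish, in order. A step
# passes if it is already known, or if P and ("->", P, step) are known.
def check(axioms, proof):
    known = set(axioms)
    for step in proof:
        if step in known:
            continue
        if any(("->", p, step) in known for p in known):
            known.add(step)  # step follows by modus ponens
        else:
            return False
    return True

axioms = {"A", ("->", "A", "B"), ("->", "B", "C")}
assert check(axioms, ["B", "C"])   # B from A, then C from B
assert not check(axioms, ["C"])    # C is not derivable in one step
```

A real checker would support more inference rules, but the principle is the same: it only ever verifies, never searches.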

Seems the computer is wrong (-1)

MLopat (848735) | about 9 years ago | (#12152045)

Forgive me if I don't remember the theorem correctly, but the article links to a statement of the theorem that says

"The four color theorem states that every possible geographical map can be colored with at most four colors in such a way that no two adjacent regions receive the same colour. Two regions are called adjacent if they share a border segment, not just a point."

Well if you look at this political map of the United States [mapsofworld.com] it seems to me that Wyoming (or one of its surrounding states) would necessitate a fifth color.

So did the computer prove something true that isn't or is the theorem in the link provided simply stated wrong?

Re:Seems the computer is wrong (1)

PakProtector (115173) | about 9 years ago | (#12152073)

You should look at that map again. By simply eyeballing it and moving my mouse around Wyoming, I, a mere man, can easily tell it holds true.

Re:Seems the computer is wrong (1)

tgv (254536) | about 9 years ago | (#12152077)

No, it doesn't. Color Montana red, Idaho blue, Utah red, Colorado blue, Nebraska red, South Dakota blue, North Dakota yellow, and you only need a fourth color for Wyoming. Purple, perhaps...

No, this theorem is pretty valid, and it is intuitively understandable. The only problem is that the only existing proof is rather long...

Re:Seems the computer is wrong (1)

MLopat (848735) | about 9 years ago | (#12152079)

Okay, so we would simply need a political border where one area is enclosed in a perfect square by other square areas. Simply put, the theorem is only true provided someone doesn't draw a map with that configuration.

Re:Seems the computer is wrong (0)

Anonymous Coward | about 9 years ago | (#12152120)

Get yourself a bit of paper and some coloured pens and play around for a bit.

Any map (on a flat bit of paper) can be coloured that way.

Now - it does NOT mean that given a pre-coloured map, you can't add areas that you can't colour. It just means that for those maps there was a DIFFERENT colouring which CAN be extended. You'll need to go back and recolour.

But it does seem to be true, after you play around with it for a while. It's not possible (on a flat bit of paper) to make an area which touches four other areas each of which touch each other.
Trying to do it, at some point you find that you have completely surrounded one of them, and you can then re-use that colour for your next area.

Re:Seems the computer is wrong (0)

Anonymous Coward | about 9 years ago | (#12152089)

Nope, you're just a fucking idiot.

I'm not going to answer for you, but please think.

Just because the state is surrounded by 6 states (which I'm guessing is your point) does not mean that 6 colours are necessary.

For instance without considering the other states, these 6 could be represented by just 3 colours, while still following the rules.

Now think about what you have just said.

Please think.

Re:Seems the computer is wrong (0)

Anonymous Coward | about 9 years ago | (#12152093)

Check it again. If you use four colors and analyze Wyoming, you assign one color to Wyoming and therefore have three colors to assign to boundary states. But the point is that the boundary states are not all adjacent to each other; therefore, you can use the color of a non-adjacent state. The only problem with applying this idea to political maps is that not all states are contiguous (and thus nonplanar), so the four-color theorem, while interesting, cannot be applied to real-world political maps.

Re:Seems the computer is wrong (0)

Anonymous Coward | about 9 years ago | (#12152131)

Wyoming - Blue
Montana - Red
Idaho - Yellow
Utah - Red
Colorado - Yellow
Nebraska - Red
South Dakota - Yellow
North Dakota - Blue

I don't really see your point.
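That coloring can also be checked mechanically. A quick Python sketch (state abbreviations and the adjacency list are transcribed by hand from the map, counting only shared border segments):

```python
# Verify the proposed coloring of Wyoming and its neighbours:
# no two states sharing a border segment may get the same color.
edges = [
    ("WY", "MT"), ("WY", "ID"), ("WY", "UT"), ("WY", "CO"),
    ("WY", "NE"), ("WY", "SD"),
    ("MT", "ID"), ("MT", "ND"), ("MT", "SD"),
    ("ID", "UT"), ("UT", "CO"), ("CO", "NE"),
    ("NE", "SD"), ("SD", "ND"),
]
color = {"WY": "Blue", "MT": "Red", "ID": "Yellow", "UT": "Red",
         "CO": "Yellow", "NE": "Red", "SD": "Yellow", "ND": "Blue"}
assert all(color[a] != color[b] for a, b in edges)
print(len(set(color.values())))  # -> 3: three colors suffice here
```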

Nothing too new? (2, Informative)

xiaomonkey (872442) | about 9 years ago | (#12152050)

Using computer programs to prove theorems in mathematics is as old as the field of artificial intelligence itself. In fact, some of the initial excitement around AI originated from the existence of such programs as Newell and Simon's Logic Theorist and its successor, the General Problem Solver [wikipedia.org] .

IIRC - back in the early days of AI (1960s), some people in the field thought that in relatively short order computers would be a major source of such mathematical proofs. It hasn't happened yet, but that doesn't mean that it won't eventually.

Consider the source (-1, Troll)

Qrlx (258924) | about 9 years ago | (#12152053)

The Economist on mathematics?

What does The Economist know? It's a right-wing rag.

Wasn't always that way... sadly they're not what they used to be.

In terms of Slashdot, they would be posting at a 2. They've been around the block, but they're nobody special. Sorta like me. :)

Re:Consider the source (0)

Anonymous Coward | about 9 years ago | (#12152067)

That's true, they couldn't even spell Haken's name right. Also, it's unbelievable to write an article about mathematicians being replaced by computers without some reference to Doron Zeilberger.

Re:Consider the source (4, Funny)

Trogre (513942) | about 9 years ago | (#12152101)

What does The Economist know? It's a right-wing rag.

What does Slashdot know? It's a left-wing rag.

Re:Consider the source (1, Insightful)

Anonymous Coward | about 9 years ago | (#12152153)

If by left-wing you mean "gun-toting, Europe-hating, and neoliberal", then I agree with you.

Re:Consider the source (0, Offtopic)

arodland (127775) | about 9 years ago | (#12152114)

The Economist doesn't correspond to either of your pitiful, disgusting "wings", and that is good.

Re:Consider the source (2, Informative)

l-ascorbic (200822) | about 9 years ago | (#12152123)

A right wing rag that backed Clinton and Kerry (albeit grudgingly), supports gay marriage, legalisation of drugs, gun control, abolishment of the death penalty..?

Re:Consider the source (4, Insightful)

sien (35268) | about 9 years ago | (#12152125)

Umm. So does this mean right wingers can't do maths?

Secondly, the claim that a magazine that opposes the death penalty and supports gay marriage is a right-wing rag (which presumably you meant in US terms) is kinda amusing.

The Economist, correctly stated, is a liberal magazine. It supports liberal economics and liberal social policy. Unfortunately the word 'liberal' in the US has been badly distorted.

Re:Consider the source (0)

Anonymous Coward | about 9 years ago | (#12152129)

They've been around the block, but they're nobody special. Sorta like me.

By virtue of your posting here on /., I have a hard time believing you've "been around the block". I have a hard enough time believing that you've been out of your parents' basement, much less the shower or even clean clothes.

Re:Consider the source (0)

Anonymous Coward | about 9 years ago | (#12152134)

IMHO the whole article was to get MBAs to feel smarter and/or more important than mathematicians for a while.

Stupid post name (1, Insightful)

Anonymous Coward | about 9 years ago | (#12152058)

Who the hell came up with this stupid name for the post?

The article is about computer-aided proof, where the human makes the essential proof decisions while leaving to the computer the tasks of properly formulating and performing very simple, yet laborious, steps (which would otherwise draw the human's attention away from the essential stuff).

You **cannot** actually prove a really complex theorem in a fully automated way, because the reasoning is non-monotonic, which implies huge computational complexities. Machines are not yet good at being creative.

Blowhard critics could use a logic course... (4, Insightful)

geekpuppySEA (724733) | about 9 years ago | (#12152096)

However, critics of computer-aided proofs say that the proofs are hard to verify due to the large number of steps and hence, may be inherently flawed.

Because if there's one thing that humans are better at than computers, it's performing large numbers of repeated steps. Flawlessly.

here y'all go again, panicking... (3, Insightful)

Fry-kun (619632) | about 9 years ago | (#12152115)

To the opponents of computer-aided proof (with their hard-to-check argument), I would say this:
It's easy to check a proof. It's hard to come up with a proof. Computers are great at checking proofs - all the program needs to do is verify whether the steps are logically correct or there's a discrepancy. Coming up with a proof, on the other hand, is a very hard task (being NP-complete, unless defined in a certain way) and thus usually requires a human (or sometimes, a lot of humans) to work on the problem.
A computer would not be able to come up with new principles of mathematics in order to tackle a given problem; it would only try to use every trick that had been discovered up to the point the program was created (of course that doesn't have to be the case, but my point is that human intervention would be required to "teach" the computer about the new concept so that it would try to use it in the proof).
That is not to say that computers are useless in proofs. Obviously, they're often used as assistants in proving something-or-other, but there's also a direction in computer science where your computer would take a program that you wrote in a certain manner, and prove certain properties about it, e.g. that it is not possible to get out of array bounds in your C program...
time to sleep

A proof needs to be intuitive (4, Informative)

nigham (792777) | about 9 years ago | (#12152135)

If you take a grad school AI course, they'll make you do proofs the way a computer does it... maybe using propositional logic. The idea is to break up the problem into a set of statements that looks quite ridiculous (e.g. NOT engine AND train AND NOT moving), and then take pairs of these statements and mix and match. The result is that you determine your sequence of steps by simple trial and error, combining the propositional connectives (AND, NOT etc.) and the variables (train etc.). Once you generate a proof, it's just a list of such statements which evaluates to FALSE or TRUE, but if you want to understand the proof, it's hopeless.

I doubt the human proof system will go away completely - even if we can check nasty theorem proofs using computers, we still need humans to sit and explain what they mean.
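The mixing-and-matching of statement pairs described above is the resolution rule. A minimal Python sketch (the clause encoding is my own):

```python
# Toy propositional resolution: clauses are frozensets of literals,
# with "~p" denoting the negation of "p". Resolving two clauses on a
# complementary pair of literals yields a new, combined clause.
def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for lit in c1:
        if neg(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {neg(lit)}))
    return out

# "engine -> moving" is the clause {~engine, moving}; resolving it
# with {engine} yields {moving}.
r = resolve(frozenset({"~engine", "moving"}), frozenset({"engine"}))
assert frozenset({"moving"}) in r
```

A resolution prover just applies this step exhaustively, which is exactly why its output is a correct but unilluminating list of intermediate clauses.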

Change roles? (1)

archeopterix (594938) | about 9 years ago | (#12152138)

Defenders of the same point out that there are non computer-aided proofs that are also rather large and unverifiable, like the Classification of Simple Finite Groups.
That's why a different approach seems more appealing to me - humans write the proofs, computers check them. This approach is more promising, since verifying proofs is computable (well, except for some weirder logic systems), generating proofs is not (well, except for some weirder logic systems that are not very expressive).

Such proof verifiers already exist today, but they are too stupid to be used to verify non-trivial theorems. They don't understand such paper-savers as "... other cases follow from this one" or "without loss of generality...". Of course, to allow such statements and still honestly check the proof, the verifier would have to generate some trivial parts of the proof itself - this is what human readers of mathematical papers do routinely. Are we thus back to square one (verification -> generation)? Well - proving the lemmas may be hard, but I think it's still much simpler than generating non-trivial proofs.
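For a flavour of what "fully explicit" means here, a machine-checked proof in a proof assistant such as Lean names every fact it uses; nothing is left for the reader to fill in:

```lean
-- Every step is a reference to an explicit lemma; there is no
-- "without loss of generality" unless it is itself proved.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```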

Subject (1)

Legion303 (97901) | about 9 years ago | (#12152140)

Until computers can routinely beat human chess players, pure mathematicians will still have jobs. Human intuition is generally better than computer brute-force when it comes to logical problems like these.

There's always room for creativity. (1)

bombadier_beetle (871107) | about 9 years ago | (#12152148)

Just because a tool is powerful, or its inner workings incomprehensible, does not mean that it cannot be used creatively or towards creative ends. In fact, one might assert the opposite.

The one main advantage of computers... (1)

boris_the_hacker (125310) | about 9 years ago | (#12152149)

... is that they allow you to make mistakes faster.

I am currently hand-proving some logic for my PhD and it is a pain in the arse. I also have a tool that I have written to do the same thing.

I still have to prove the logic is correct (by hand) and that the tool implements the proof search correctly (trickier) so I can then use the tool to solve bigger problems.

Some of the proofs the program generates in under a minute take me days to do (50+ pages of PDF generated from LaTeX).

Through the development of the tool, I could check against my hand-written proofs whether it was generating correct ones. Sometimes it would get things wrong, and at other times it would do better than me.

Computers are useful but only when we hardcode them with understanding that we have developed.