
Physicist Proposes New Way To Think About Intelligence

samzenpus posted about a year ago | from the look-at-the-big-brain-on-brad dept.

Science 233

An anonymous reader writes "A single equation grounded in basic physics principles could describe intelligence and stimulate new insights in fields as diverse as finance and robotics, according to new research, reports Inside Science. Recent work in cosmology has suggested that universes that produce more entropy (or disorder) over their lifetimes tend to have more favorable properties for the existence of intelligent beings such as ourselves. A new study (pdf) in the journal Physical Review Letters led by Harvard and MIT physicist Alex Wissner-Gross suggests that this tentative connection between entropy production and intelligence may in fact go far deeper. In the new study, Dr. Wissner-Gross shows that remarkably sophisticated human-like "cognitive" behaviors such as upright walking, tool use, and even social cooperation (video) spontaneously result from a newly identified thermodynamic process that maximizes entropy production over periods of time much shorter than universe lifetimes, suggesting a potential cosmology-inspired path towards general artificial intelligence."


Oh, he's back from his tour of the universes? (3, Insightful)

Anonymous Coward | about a year ago | (#43514883)

How was the weather?

Re:Oh, he's back from his tour of the universes? (1)

wisnoskij (1206448) | about a year ago | (#43516073)

Why is this modded down?

It is the correct response: insightful sarcasm aimed at this "scientist's" complete lack of supporting evidence. Because, of course, we have not discovered one, let alone many, intelligent species in other universes. We do not even know if any other universes ever existed or ever will.

Relevant xkcd (5, Insightful)

Karganeth (1017580) | about a year ago | (#43514907)

Re:Relevant xkcd (1)

fearofcarpet (654438) | about a year ago | (#43515105)

From TFA:

To the best of our knowledge, these tool use puzzle and social cooperation puzzle results represent the first successful completion of such standard animal cognition tests using only a simple physical process. The remarkable spontaneous emergence of these sophisticated behaviors from such a simple physical process suggests that causal entropic forces might be used as the basis for a general—and potentially universal—thermodynamic model for adaptive behavior.

So, yah, XKCD nailed it... clearly trying to maximize the overall diversity of accessible future paths of their worlds.


Re:Relevant xkcd (-1)

Anonymous Coward | about a year ago | (#43515269)

Mods: feel free to stop by xkcd and let him know how insightful he is, but the link itself is entirely derivative.

Re:Relevant xkcd (2)

wonkey_monkey (2592601) | about a year ago | (#43515303)

Mods: feel free to stop by xkcd and let him know how insightful he is, but the link itself is entirely derivative.

There's no need to throw a wobbly just because you didn't think of it first. Maximize your disorder, dude!

Re:Relevant xkcd (2)

mrego (912393) | about a year ago | (#43515509)

Isn't this the same as IDIC, "Infinite Diversity in Infinite Combinations"? Increased entropy?

Re:Relevant xkcd (0, Insightful)

cyborg_monkey (150790) | about a year ago | (#43515519)

Oh thank god you linked xkcd... I was worried that no one would do so for this story. by the way, you're a fuckwit.

Re:Relevant xkcd (3, Informative)

saveferrousoxide (2566033) | about a year ago | (#43515569)

Oh you're a fan of webcomics? Here, have a Penny Arcade. [penny-arcade.com]

Re:Relevant xkcd (1)

epine (68316) | about a year ago | (#43516137)

http://xkcd.com/793/ [xkcd.com]

My field is <mate selection>, my complication is <social transactions in symbolic discourse>, my simple system is <you> and the only equation I need is <you're not getting any>. Thanks for offering to prime my pump with higher mathematics. But you know, if you'd like to collaborate on a section on this intriguing technique of speaking in angle brackets to deliver a clue where no clue has gone before, perhaps we should meet for coffee—if you can restrain yourself from dismantling the social milieu long enough to drain your mug.

Pauses to observe patiently as the word "milieu" penetrates into physicist's long-forgotten amygdala with the deep impact of an entire bottle of earthquake pills, whose fine print reads "not effective on physicists(*)" with a footnote (in even smaller print) reading "unless first assailed with angle agonists of his own devising".

nintendo! (2, Interesting)

Anonymous Coward | about a year ago | (#43514921)

Interesting idea. http://techcrunch.com/2013/04/14/nes-robot/

That guy basically took a random generator and 'picked' good results to build on; the input, however, is essentially chaos.

Nonsense (0)

Anonymous Coward | about a year ago | (#43514925)

Humans are soulless meat computers. Intelligence is just a byproduct of electrical signals in a random mesh of electrons. Everything that can be discovered has already been discovered.

Re:Nonsense (0)

Anonymous Coward | about a year ago | (#43514977)

Jeioc mod osdife oiefm oeifm o foieam sa mdiofpsdkelka f dsimf om. lkjdsf! LKf! lkdjfoidj!

And if you think I'm trolling on purpose, realize that it's just the electrons; it could not have happened any other way, and I could not have written anything other than what I did - we're just soulless meat computers. Oh, and don't worry about what conclusion you draw from considering this - it's basically a random event anyway, so you're just under the illusion of concluding anything... and under the illusion of actually having an illusion :)

when I want to maximize entropy ... (2)

fche (36607) | about a year ago | (#43514941)

... I burn stuff. Now I can feel smarter about it. Win!

Re:when I want to maximize entropy ... (0)

Anonymous Coward | about a year ago | (#43515165)

So intelligence is a method the universe uses to maximize the rate at which entropy happens. In that case there's no way to overcome AGW as it is something that the universe has already set in motion to maximize entropy. Burn things, burn fuel, consume radioactive material in reactors, generate as much or as little heat as you want, the whole thing is just to reach the point of maximum entropy.

In that case I'll stop feeling bad if I leave the TV on when I go out or if I have to make two trips in the car when one should have done but I forgot something. It's just the universe maximizing entropy.

Re:when I want to maximize entropy ... (1)

Hentes (2461350) | about a year ago | (#43515531)

The point of the paper is that intelligent behaviour maximizes long-term, not immediate, entropy gain.

Re:when I want to maximize entropy ... (1)

fche (36607) | about a year ago | (#43515589)

(Isn't the heat-death of the universe a process that results in maximal long-term entropy growth?)

Re:when I want to maximize entropy ... (0)

Anonymous Coward | about a year ago | (#43515881)

Not only that, but *the longer, the better*. It maximizes the length of that term too.

Re:when I want to maximize entropy ... (3, Interesting)

femtobyte (710429) | about a year ago | (#43515895)

The point in the paper that addresses the "burn shit to be smart!" concept is that the "intelligence" is operating on a simplified, macroscopic model of the world, which doesn't pay attention to the microscopic entropy of chemical bonds (increased by setting stuff on fire). In this simplified "critter-scale" world, shorter-term entropy gain *is* the driving compulsion. The toy model "crow reaching food with a stick" example wasn't driven by the crow thinking "gee, if I don't eat now, I'll be dead next year, so I'd better do something about that." Instead, the problem was "solved" by the crow maximizing entropy a few seconds ahead --- e.g. it moves to reach the stick, because there are a lot more system states available if the stick can be manipulated instead of just lying in the same place on the ground. The "intelligent behavior" only needs to maximize entropy on the time-scale associated with completing the immediate task --- a few seconds --- rather than "long term" considerations about nutritional needs.

Re:when I want to maximize entropy ... (1)

gtall (79522) | about a year ago | (#43515917)

Speak for yourself. When I barbecue a marshmallow, I rather enjoy the immediate entropy gain.

Intelligence a man made idea. (3, Interesting)

jellomizer (103300) | about a year ago | (#43514949)

Intelligence was invented by man, as a way to make us seem better than the other animals in the world.
Then we further classified it down so we can rank people.

So it isn't surprising that if we want to find intelligent life outside of Earth, we need to change the rules again; we also need to change the rules of what intelligence is, given that we have created technology that emulates or exceeds us in many of the areas we use to classify intelligence.

Intelligence is a man-made measurement, and I expect it will always be in flux. However, you shouldn't dismiss ideas, or automatically accept them as good, just because of a number granted by a fluctuating scale.

Re:Intelligence a man made idea. (-1)

Anonymous Coward | about a year ago | (#43514991)

Go back to your cave.

Re:Intelligence a man made idea. (-1)

Anonymous Coward | about a year ago | (#43515001)

Intelligence is the ability to write "penis" on the Internet.

Re:Intelligence a man made idea. (0)

Anonymous Coward | about a year ago | (#43515121)

Meta-level recursive definition. Congrats.

Re:Intelligence a man made idea. (1)

JustOK (667959) | about a year ago | (#43515133)

and yet all you can do is type it.

Re:Intelligence a man made idea. (0)

Anonymous Coward | about a year ago | (#43515155)

All concepts were invented by man, that doesn't mean they aren't useful.

Re:Intelligence a man made idea. (1)

jellomizer (103300) | about a year ago | (#43515295)

True,
But some concepts are based on more solid definitions, where we can measure the thing the same way every time.

Re:Intelligence a man made idea. (4, Interesting)

Intrepid imaginaut (1970940) | about a year ago | (#43515359)

We are better than other animals in the world. By any objective measure we can move faster, go higher, lift more weight, survive in more hostile environments, and a great deal more using our intelligence. There's no animal that can do something better than we can, with a few exceptions like tortoises with very long lifespans, but we'll get there too. Now whether or not that means we are more worthy in some objective way is a totally different question.

Re:Intelligence a man made idea. (5, Interesting)

femtobyte (710429) | about a year ago | (#43515429)

For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.

-- Douglas Adams

Re:Intelligence a man made idea. (1)

lightknight (213164) | about a year ago | (#43515555)

Thank you. ;-)

Re:Intelligence a man made idea. (0)

Anonymous Coward | about a year ago | (#43516119)

Wow!! I was completely unaware that ol' Doug was capable of communicating with dolphins!!

Re:Intelligence a man made idea. (2, Insightful)

Anonymous Coward | about a year ago | (#43515533)

We are better than other animals in the world. By any objective measure we can move faster, go higher, lift more weight, survive in more hostile environments, and a great deal more using our intelligence. There's no animal that can do something better than we can, with a few exceptions like tortoises with very long lifespans, but we'll get there too. Now whether or not that means we are more worthy in some objective way is a totally different question.

Better? Not really. More resourceful? Yes, definitely. And we have to be. Without the use of tools, we'd still be stuck in the Serengeti, treed by lions and tigers. Because, as a species, we are physically weak (probably more so today than 100,000 years ago, but still weak compared to an orangutan in how much we can lift), slow (the fastest man can outrun a horse, but no one could outrun a cheetah on the straightaway), and able to tolerate only a small range of temperatures (without clothes, we wouldn't survive a winter outside the tropics). And go higher, or deeper, for that matter? Only because of tools. We can't fly on our own, we need to bring oxygen with us to high altitudes, we can't hold our breath for any appreciable time, and we need tools to survive depths that other mammals handle with no problem.

Take away our ability to make tools, and man is easy prey to the rest of the animal kingdom.

Re:Intelligence a man made idea. (0)

Anonymous Coward | about a year ago | (#43516085)

In Street Fighter, the scrub labels a wide variety of tactics and situations “cheap.” This “cheapness” is truly the mantra of the scrub.

If you beat a scrub by throwing projectile attacks at him, keeping your distance and preventing him from getting near you—that’s cheap. If you throw him repeatedly, that’s cheap, too. We’ve covered that one. If you block for fifty seconds doing no moves, that’s cheap. Nearly anything you do that ends up making you win is a prime candidate for being called cheap. -- Sirlin

Tigers are scrubs. GG noobcat. Learn2Firearms.

Re:Intelligence a man made idea. (1)

Anonymous Coward | about a year ago | (#43515785)

The main and singular difference is our physical capacity for speech and the communication of abstract ideas that it allows.

A very tiny number of humans have figured things out, and it was our capacity for speech that let them communicate what they figured out to the rest of us.

Re:Intelligence a man made idea. (3, Insightful)

gtall (79522) | about a year ago | (#43515959)

BS. Take your basic household feline. It's tricked its owners into feeding, watering, and petting it. Hell, it has even tricked them into taking out the dooty. No other life form comes close to that kind of intelligence.

Re:Intelligence a man made idea. (0)

Anonymous Coward | about a year ago | (#43515419)

Sanctimony was invented by man, as a way to make them seem better than other men in the world.
Then we further classified it down so we can rank people.

FTFY.

time (0)

Anonymous Coward | about a year ago | (#43514955)

"much shorter than universe lifetimes"
Good, because my attempt that took much longer than universe lifetimes was going a bit too slow.

Link to article (3, Informative)

zrbyte (1666979) | about a year ago | (#43514965)

Re:Link to article (1)

Anonymous Coward | about a year ago | (#43515335)

Thanks. I had a glance at it, and to me it seems that, while it's an interesting angle on the problem, this is completely dependent on the choice of the entropic force function(s). In AI we call that a fitness function, and the problem-solving ability of an AI system depends almost completely on choosing "the right one". The examples you see in the video don't reveal the entropic forcing function(s) or how they came up with them.

So I guess this is really just an interesting new angle on the problem. They basically showed it is possible to model such a system using a formula based on entropy. Finding the entropic forcing function(s) is left as an exercise for the reader.
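The dependence on the objective function is easy to demonstrate: the same greedy search produces completely different "behaviour" depending on which fitness function you hand it. A minimal sketch (hypothetical code for illustration, not anything from the paper; all names are invented):

```python
import random

def hill_climb(fitness, start, neighbours, steps=100, rng=None):
    """Generic greedy search: behaviour is determined entirely by `fitness`.
    Proposes a random neighbour each step and keeps it only if it improves."""
    rng = rng or random.Random(0)
    x = start
    for _ in range(steps):
        n = rng.choice(neighbours(x))
        if fitness(n) > fitness(x):
            x = n
    return x

# Integer line with two neighbours per state.
neighbours = lambda x: [x - 1, x + 1]

# Same search procedure, two different objective functions:
assert hill_climb(lambda x: -abs(x - 7), 0, neighbours) == 7   # seeks 7
assert hill_climb(lambda x: -abs(x + 3), 0, neighbours) == -3  # seeks -3
```

Swap in a poorly chosen fitness function and the search does nothing useful, which is exactly the point: the interesting work is hidden in the choice of the forcing function.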

Am I missing something? (2, Insightful)

fuzzyfuzzyfungus (1223518) | about a year ago | (#43514973)

This looks eerily like a physicist who has just opened a biology textbook and is now restating the idea that 'intelligence' is the product of an evolutionary selection process because it's a uniquely powerful solution to the class of problems that certain ecological niches pose and is now attempting to add equations....

Is there something that I'm missing, aside from the 'being alive means grabbing enough energy to keep your entropy below background levels' and the 'we suspect biological intelligence of having evolved because it provides a fitness advantage in certain ecological niches' elements?

Re:Am I missing something? (5, Insightful)

Trepidity (597) | about a year ago | (#43515071)

This is what it seems to be from a quick read. It would also explain why he would publish an AI paper in a physics journal, rather than in, you know, an AI journal: probably because he was hoping to get clueless physicists who aren't familiar with existing AI work as the reviewers.

Which isn't to say that physicists can't make good contributions to AI; a number have. But the ones who have an impact and provide something new: 1) explain how it relates to existing approaches, and why it's superior; and 2) publish their work in actually relevant journals with qualified peer reviewers.

Re:Am I missing something? (0)

Anonymous Coward | about a year ago | (#43515091)

Is there something that I'm missing, aside from the 'being alive means grabbing enough energy to keep your entropy below background levels'

Assuming the summary isn't entirely unrelated to the root article:

a newly identified thermodynamic process that maximizes entropy production over periods of time much shorter than universe lifetimes

Suggesting that the purpose of intelligence in this man's random musings might be to increase the background levels of entropy for your own benefit. This is also the first time I've seen something that appears to be a 'scientific' mandate to nuke Venus.

Re:Am I missing something? (5, Informative)

LourensV (856614) | about a year ago | (#43515753)

Suggesting that the purpose of intelligence in this man's random musings might be to increase the background levels of entropy for your own benefit.

That's close, I think. I am not a physicist and I skimmed the equations, but here's my take on what they're proposing. Physical systems have states, which can be described by a state vector. The state of these systems evolves according to some set of rules that describes how the state vector changes over time. They've built a simulator in which the probability of a certain state transition is computed by looking at how many different paths (in state space, i.e. future histories of the system) are possible from the new state, in such a way that the system tries to maximise the number of possibilities for the future. In one example, they have a particle that moves towards the centre of a box, because from there it can move in more directions than when it's close to a wall.

They then set up two simple models mimicking two basic intelligence tests, and find that their simulator solves them correctly. One is a cart with a pendulum suspended from it, which the system moves into an upright position because from there it's easiest (cheapest energetically, I gather) to reach any other given state. The other is an animal intelligence test, in which an animal is given some food in a space too small for it to reach, and a tool with which the food can be extracted. In their simulation, the "food" is indeed successfully moved out of the enclosed space, because it's easier to do various things with an object when it's close compared to when it's in a box. However, in neither case does the algorithm "know" the goal of the exercise. So they've shown that they've invented a search algorithm that can solve two particular problems, problems which are often considered tests of intelligence, without knowing the goal.

Then, they use this to support the hypothesis that intelligence essentially means maximising future possibilities. Another way of saying this, I think, is that an intelligent creature will seek to maximise the amount of power it has over its environment, and they've translated that concept into the language of physics. That's an intriguing concept, relating to the concept of liberty, power struggles between people at all scale levels, scientific and technological progress, and so on. I can't imagine this idea being new though. So it all hinges on to what extent this simulation adds anything new to that discussion.

On the face of it, not much. You might as well say that they've found two tests for which the solution happens to coincide with the state that maximises the number of possible future histories. The only surprising thing then is that their stochastically-greedy search algorithm (actually, without having looked at the details, I wouldn't be surprised if it turned out to be yet another variation of Metropolis-Hastings with a particular objective function) finds the global solution without getting stuck in a local optimum, which could be entirely down to coincidence. It's easy to think of another problem that their algorithm won't solve, for example if the goal were to put the "food" into the box, rather than taking it out. Their algorithm will never do that, because that would increase the future effort necessary to do something with it. Of course, you might consider that pretty intelligent, and many young humans would certainly agree, although their parents might not. It would be interesting to see how many boxed objects you need before the algorithm considers it more efficient to leave them neatly packaged rather than randomly strewn about the floor, if that happens at all.

There's another issue in that the examples are laughably simple. While standing upright allows you to do more different things, no one spends their lives standing up, because it costs more energy to do that as a consequence of all sorts of random disturbances in the environment. The model ignores this completely. Similarly, you could argue that since in the simulation (unlike in the actual animal experiment) there is no reward for using the object, expending the energy to get it out of its box is not very intelligent at all.

Conclusion, interesting idea, but in its present state, not much more than that.

Re:Am I missing something? (0)

Anonymous Coward | about a year ago | (#43515107)

This looks eerily like a physicist who has just opened a biology textbook and is now restating the idea that 'intelligence' is the product of an evolutionary selection process because it's a uniquely powerful solution to the class of problems that certain ecological niches pose and is now attempting to add equations....

Do you even understand a single equation in the paper? Your statement is similar to someone, when being shown how gravity and laws of motion may explain the Earth's own rotation and its orbit around the sun, said "This looks like a physicist opened the window from his lab and noticed sunrise and seasons, and is now attempting to add equations..."

Re:Am I missing something? (4, Insightful)

geek (5680) | about a year ago | (#43515195)

I grew up right next to the Lawrence Livermore National Laboratory. My dad and the vast majority of my friends' moms and dads worked there for a long time as physicists. Being around these people for 35 years has taught me something: they are morons. They know physics, but literally nothing else besides, of course, math.

It's one of those strange situations where they can be utterly brilliant in their singular field of study but absolutely incompetent at literally everything else. I've known guys with IQs in the 160s who couldn't for the life of them live on their own, for their inability to cook or clean or even drive a car. I know one who was 45 years old and had never had a driver's license. His wife drove him everywhere, or he walked (occasionally the bus if the weather was poor). He didn't do this for ideological reasons like climate change, blah blah; he did it because he couldn't drive. He failed the driver's test for years until he gave up trying.

Whenever a physicist starts talking about something other than physics, I typically roll my eyes and ignore them. It's just intellectual masturbation on their part.

Re:Am I missing something? (3, Interesting)

Anonymous Coward | about a year ago | (#43515411)

This. I've been a physics student for a third of my life and I've come to the conclusion that I cannot live with other physicists for precisely this reason. Poked my nose into the maths & compsci faculty for a bit, but they were no better.
In any case, in this concrete situation: the paper mentioned in TFA gives us not even one hint on how to construct an AI and is chock-full of absurd simplifications of a complicated system.

Bill Burr (2)

justthinkit (954982) | about a year ago | (#43515873)

Bill Burr, on one of his MMPC, talked about trying to learn Spanish. At first he was cursing that it was his third try and what was wrong with him. Then later he said that it just came down to the fact that he didn't really need to learn it. Europeans need to know multiple languages. Americans don't. Doesn't mean we are stupid, incompetent, etc. Affects whether we learn language number two, though.

We live in a society defined by division of labor. The physicist figured that out, as have many video game addicts.

When P-man walks he gets to think about his theories more, he gets necessary exercise, and he gets his chore done in about the same amount of time. And he simply isn't interested in most of the stuff that we rush around doing. He doesn't particularly want or need a cell phone, and for sure not a tablet. TV is low bandwidth, high noise -- easily filtered out with the convenient OFF button. Shopping is a once-a-week thing that someone else does...no need to duplicate effort. Same with laundry, with those two large machines doing most of the work.

It is called the simple life. And it kind of rocks.

...says the part-time physics guy [just-think-it.com] .

Re:Am I missing something? (5, Funny)

Anonymous Coward | about a year ago | (#43515201)

I think the problem of uninformed physicists has been addressed by proper scientific research before:

http://www.smbc-comics.com/?id=2556 [smbc-comics.com]

Re:Am I missing something? (0)

Anonymous Coward | about a year ago | (#43515227)

From the look of it you could summarise their argument as "an intelligent system moves towards the point where it can change things the most". They have some simple examples to back this up. For example, a cart controlling the base of an inverted pendulum will move to keep the pendulum upright, because the upright position allows the most options for where it can move next.

It's an interesting observation but nothing exceptional.

Re:Am I missing something? (5, Informative)

femtobyte (710429) | about a year ago | (#43515407)

Yes, what you're missing is the entire point of the paper. Here's my attempt at a quick summary:
Suppose you are a hungry crow. You see a tasty morsel of food in a hollow log (that you can't directly reach), and a long stick on the ground. The paper poses an answer to the question: what general mechanism would let you "figure out" how to get the food?

Many cognitive models might approach this by assuming the crow has a big table of "knowledge" that it can logically manipulate to deduce an answer: "stick can reach food from entrance to log," "I can get stick if I go over there," "I can move stick to entrance of log," => "I can reach food." This paper, however, proposes a much more general and simple model: the crow lives by the rule "I'll do whatever will maximize the number of different world states my world can be in 5 seconds from now." By this principle, the crow can reach a lot more states if it can move the stick (instead of the fewer states where the stick just sits in the same place on the ground), so it heads over towards the stick. Now it can reach a lot more states if it pokes the food out of the hole with the stick, so it does. And now, it can eat the tasty food.

The paper shows a few different examples where the single "maximize available future states" principle allows toy models to "solve" various problems and exhibit behavior associated with "cognition." This provides a very general mechanism for cognition driving a wide variety of behaviors, that doesn't require the thinking critter to have a giant "knowledge bank" from which to calculate complicated chains of logic before acting.
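The "maximize available future states" rule is simple enough to sketch in code. Here is a toy grid-world version of the principle (purely illustrative and of my own construction; the paper's models are continuous-state, and every name and parameter below is invented):

```python
from collections import deque

def reachable_states(grid, start, horizon):
    """Count distinct cells reachable from `start` within `horizon` moves,
    by breadth-first search (0 = open cell, 1 = obstacle)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if d == horizon:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return len(seen)

def best_move(grid, pos, horizon=5):
    """The critter's whole 'cognitive' rule: step to the neighbouring cell
    that maximizes the number of world states reachable within `horizon`."""
    r, c = pos
    options = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                and grid[nr][nc] == 0):
            options.append(((nr, nc), reachable_states(grid, (nr, nc), horizon)))
    return max(options, key=lambda t: t[1])[0]
```

On an open 5x5 grid, an agent on the edge steps toward the interior, because interior cells leave more future states reachable: `best_move([[0]*5 for _ in range(5)], (0, 2))` picks `(1, 2)`. No goal, knowledge bank, or chain of logic is encoded anywhere.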

Re:Am I missing something? (0)

Anonymous Coward | about a year ago | (#43515757)

You know I looked at that paper and went "TL;DR". Then I came to comments and found a hundred people criticizing this work who had obviously gone "TL;DR" too.

Then I saw your post where you actually RTFAed and explained what it said.

I thank you, sir. With this in mind I may actually read the article later ;)

Re:Am I missing something? (1)

phantomfive (622387) | about a year ago | (#43515489)

More strictly speaking, they are talking about the idea of 'will' (that is my understanding). How does the computer, or a human, decide what to do, or indeed choose to do anything? Why do humans care at all, and how can we make computers care?

The idea is that the urge to resist entropy yields a competitive advantage and leads to intelligence. They built some software to demonstrate this, but I can't tell if the source code was released (it seems like it wasn't, but I don't have a subscription to find out).

Their software is really impressive if it does what it says. They claim it was able to make money trading stocks without being instructed to do so. If so, it's smarter than I am.

Re:Am I missing something? (1)

femtobyte (710429) | about a year ago | (#43515743)

The idea is that the urge to resist entropy yields a competitive advantage and leads to intelligence.

Actually, the opposite: "intelligence" functions by seeking to maximize entropy. Note, however, we are talking about an approximate "macroscopic scale" entropy of large-scale objects in the system rather than the "microscopic" entropy of chemical reactions, so "intelligence" isn't about intentionally setting as many things as you can on fire ("microscopic" entropy maximization). So, the analogue statement to "all the gas molecules in a room won't pile up on one side" is "an intelligent critter won't want to get backed into a corner" --- in both cases, the "system" works to maximize "entropy," or the number of available future states (a lot less possibilities for where molecules can be if they're all confined to half the room; a lot less places where a critter can go if stuck in a corner).

Re:Am I missing something? (1)

phantomfive (622387) | about a year ago | (#43515847)

I'm not sure that's right; if you watch the video in the summary, all the examples tend towards more order. They actually have an example where two critters end up in a corner.

Re:Am I missing something? (1)

femtobyte (710429) | about a year ago | (#43515951)

My "critter not going into a corner" example was based on the first toy model in the paper, a particle that drifts towards the center of a box when "driven" by entropy maximization. In some of the more "advanced" examples, there are more complex factors coming into play that may maximize entropy by "ending up in a corner," depending on how the problem is set up. However, if you read the paper (instead of just glancing at videos), the mathematical formalism that drives the model is all about maximizing entropy.

Re:Am I missing something? (1)

phantomfive (622387) | about a year ago | (#43516041)

Ah, you're right, good call.

My somewhat pedantic but sincere question (1)

srussia (884021) | about a year ago | (#43516065)

Actually, the opposite: "intelligence" functions by seeking to maximize entropy.

Don't you mean that intelligence "functions by seeking to maximize the entropic gradient"?

Choice (1, Interesting)

rtb61 (674572) | about a year ago | (#43514995)

Intelligence, the ability to delve into the past and reach into the future, in order to craft the present and manipulate the probability of eventualities. The greater the ability the greater the intellect, the power of choice.

proper way to think about intelligence: (-1)

Anonymous Coward | about a year ago | (#43514997)

Part of an administration of any sort: dumb
Everyone else: smart

Maintaining disorder (2)

Dan East (318230) | about a year ago | (#43515027)

It appears to me that the algorithm is trying to maintain entropy or disorder, or at least keep open as many pathways to various states of entropy as possible. In the physics simulations, such as balancing and using tools, this essentially means that it is simply trying to maximize potential energy (in the form of stored energy due to gravity or repulsive fields - gravity in the balancing examples, and repulsive fields in the "tools" example).

While this can be construed as "intelligence" in these very specific cases, I don't think it is nearly as generalized or multipurpose as the author makes it out to be.

Re:Maintaining disorder (0)

Anonymous Coward | about a year ago | (#43515161)

Evocative of the philosophy of Leto Atreides II

More than one (1)

JayInPlano (1865346) | about a year ago | (#43515039)

I am always amused when I see these posts on intelligence as if it were a singular thing. There are many types of intelligence, from scientific (i.e. medicine, mathematics) to creative (i.e. literature, music, sculpting) to social (i.e. business, politics) and athletic (i.e. sports, recreation). No one person can be good at all of these types of intelligence, but collectively we can. Or put another way: we all saw Deep Blue win at chess and Watson at Jeopardy, but I don’t think either would do so well playing Texas hold’em against some tournament champions.

Re:More than one (2)

plover (150551) | about a year ago | (#43515231)

I am always amused when I see these posts on $(any_topic) that reveal the inability of the poster to recognize things happen on a variety of levels.

In this case, intelligence refers not to the subset of humans capable of posting on slashdot, or playing music. Intelligence in this context refers to the evolution of a brain capable of making decisions based on stimuli as well as experience. An earthworm would qualify as intelligent. It takes a whole lot more steps to get from amino acid soup to an earthworm's level of intelligence than it would to get from an earthworm-sized brain to a human brain.

You're being so literal it's constraining your thinking. It's like you have your own personal grammar nazi that keeps you from seeing a bigger picture. That's especially dangerous on slashdot where the "editors" rarely choose the right words. Learn to expand and adapt.

Re:More than one (1)

oh_my_080980980 (773867) | about a year ago | (#43515387)

It always amuses me that people who have no background in a particular subject, like these physicists, feel compelled to publish a paper on said topic.

Then I have to laugh when lemmings herald the paper as insightful.

Re:More than one (1)

plover (150551) | about a year ago | (#43515465)

I think we both agree that their sample size of one planet is not statistically significant. If he wants to be taken seriously, this guy should be funding the hell out of SETI.

Hahahahahahaha, sorry, that last sentence was too hard to type without laughing.

Re:More than one (1)

Jmc23 (2353706) | about a year ago | (#43515913)

You're not very intelligent if you can't see how someone could be intelligent in all these fields.

you mean minimize entropy? (0)

Anonymous Coward | about a year ago | (#43515065)

"Allowing the rod to fall would drastically reduce the number of remaining future histories, or, in other words, lower the entropy of the cart-and-rod system."

  increase the entropy.

"Keeping the rod upright maximizes the entropy. It maintains all future histories that can begin from that state, including those that require the cart to let the rod fall."

    minimizes the entropy.

Spherical cows! (4, Funny)

femtobyte (710429) | about a year ago | (#43515081)

You can tell this is a physicist's paper. It lacks spherical cows, but only because the toy models were set up in 2D. So, instead, we get a crow, chimpanzee, or elephant approximated by circular disks.

Wonder how correct his predictions will prove to b (1)

Camembert (2891457) | about a year ago | (#43515089)

I skimmed through the article. The idea of entropy as a driving factor for intelligence is certainly novel (to my knowledge); I haven't even encountered it in science fiction stories! But, while it is interesting on his small test set, I really wonder about the author's extrapolations. Intelligence and free will seem much more complex than a thermodynamic optimisation. Perhaps, just perhaps, his idea is part of the very first steps from matter towards life and intelligence, but much more research needs to be done.

Re:Wonder how correct his predictions will prove t (1)

oh_my_080980980 (773867) | about a year ago | (#43515251)

Try nonexistent. The authors have ZERO background in intelligence research. Basically they are proposing intelligent design....

Re:Wonder how correct his predictions will prove t (1)

Camembert (2891457) | about a year ago | (#43515413)

I agree with your statement. However, as I wrote, it could (maybe!) be that thermodynamics works in a way that improves the conditions for going from dead matter to life and intelligence, compared to pure chaos.

Re:Wonder how correct his predictions will prove t (1)

femtobyte (710429) | about a year ago | (#43515479)

The paper has nothing to do with the "design" and/or evolution of intelligence. It proposes a general mechanism by which "intelligent" brains may be able to "figure out" how to perform a wide variety of tasks (by maximizing future available states based on a simplified internal world model). Plain old evolutionary selective pressures would favor critters with brains good at carrying out this type of cognition.

Re:Wonder how correct his predictions will prove t (0)

Anonymous Coward | about a year ago | (#43515289)

Not pictured in the article: cognitive scientists the world over laughing at him over a cup of coffee.

Too simple theory (0)

Anonymous Coward | about a year ago | (#43515135)

A learning process still needs machinery, though: a brain and body, or a computer. I.e. something that can actually balance a stick or walk on two legs, either real or simulated. So there is no intelligence out of thin air. The described maximum-entropy principle does seem to apply to the learning process, though (children do break things apart, no?).
Organisms capable of learning develop because of the evolutionary benefit compared to those that have no intelligence and do not need to learn.


Intelligence is inherited (1)

Quakeulf (2650167) | about a year ago | (#43515237)

Just like any other physical trait.

Re:Intelligence is inherited (1)

oh_my_080980980 (773867) | about a year ago | (#43515327)

Actually it's not.

It's Intelligent Design! (-1, Redundant)

maroberts (15852) | about a year ago | (#43515249)

God created a system which would over time maximise intelligence and thus produce man.

This is so sad (1, Interesting)

rpresser (610529) | about a year ago | (#43515319)

The universe developed intelligence as a way of making entropy wind down faster ... which will destroy all intelligence ... which is a tragedy because the winding down was necessary to create us ... and the universe WANTED TO SEE US SUFFER.

Re:This is so sad (1)

Anonymous Coward | about a year ago | (#43515671)

Please, do not anthropomorphise the Universe, he hates it.

Re:This is so sad (0)

Anonymous Coward | about a year ago | (#43515701)

the third law of thermodynamics states that for a closed system, entropy stays constant or increases. life just decreases it locally - somewhere else, entropy has to increase to offset that local decrease

Re:This is so sad (0)

Anonymous Coward | about a year ago | (#43515811)

The entropy of a perfect crystal at absolute zero is exactly equal to zero.

Is the Third Law of Thermodynamics. I believe you are referring to the Second Law of Thermodynamics which is so damned famous you really should remember its ordinal number.

Re:This is so sad (0)

Anonymous Coward | about a year ago | (#43516089)

the universe WANTED TO SEE US SUFFER

Being meguca is suffering.

Re:This is so sad (1)

gtall (79522) | about a year ago | (#43516099)

That might explain why the Universe is trying to kill us. Those asteroids periodically buzzing the Earth were sent there, the Universe's aim is just a bit off. Sooner or later, it will get the target sighted in. Sometimes it is in the form of Gaea who periodically tosses an earthquake, or when she's really pissy, a super volcano...just for a little recreational resurfacing.

The Universe hates intelligence, we're all dead.

pretty sure (1)

ObjectiveSubjective (2828749) | about a year ago | (#43515365)

this is a load of horse shit invented just so nerds can do something trivial and think they are doing something monumental and amazing instead

I'm not sleeping in.. (2)

xtal (49134) | about a year ago | (#43515475)

I'm maintaining the maximum number of possible outcomes for the day, in harmony with the laws of nature. :)

If Human Intelligence is so valuable... (1)

onebeaumond (1230624) | about a year ago | (#43515495)

Why don't other animals have it? The answer is it just can't compete, in an evolutionary sense, with other phenotypes (like instinct). Or, put more simply: "He who hesitates is lunch". Evolution can certainly be modeled as a system that maintains entropy, but I just don't see this abstraction being all that useful in explaining intelligence.

Sheldon? (0)

prefec2 (875483) | about a year ago | (#43515553)

Since when is Sheldon Cooper allowed to post online?

Silly paper that completely misses the point (5, Informative)

mTor (18585) | about a year ago | (#43515577)

Here's a review of this paper by a researcher who actually works in the field of AI and cognitive psychology:

Interdisciplinitis: Do entropic forces cause adaptive behavior? [wordpress.com]

Few choice quotes:

Physicists are notorious for infecting other disciplines. Sometimes this can be extremely rewarding, but most of the time it is silly. I've already featured an example where one of the founders of algorithmic information theory completely missed the point of Darwinism; researchers working in statistical mechanics and information theory seem particularly susceptible to interdisciplinitis. The disease is not new, it formed an abscess shortly after Shannon (1948) founded information theory. The clarity of Shannon's work allowed metaphorical connections between entropy and pretty much anything. Researchers were quick to swell around the idea, publishing countless papers on "Information theory of X" where X is your favorite field deemed in need of a more thorough mathematical grounding.

and after he explains what the paper's about and how utterly empty it is, he offers some advice to authors:

By publishing in a journal specific to the field you are trying to make an impact on, you get feedback on whether you are addressing the right questions for your target field instead of simply whether others in your field (i.e. other physicists) think you are addressing the right questions. If your results get accepted then you also have more impact, since they appear in a journal that your target audience reads, instead of one your field focuses on. Lastly, it is a show of respect for the existing work done in your target field. Since the goal is to set up a fruitful collaboration between disciplines, it is important to avoid E.O. Wilson's mistake of treating researchers in other fields as expendable or irrelevant.

Re:Silly paper that completely misses the point (2)

Black Parrot (19622) | about a year ago | (#43515905)

and after he explains what the paper's about and how utterly empty it is, he offers some advice to authors:

By publishing in a journal specific to the field you are trying to make an impact on, you get feedback on whether you are addressing the right questions for your target field instead of simply whether others in your field (i.e. other physicists) think you are addressing the right questions.

The authors were just trying to maximize the number of possible future states for their idea.

Nature abhors a vacuum. (1)

OldCodger (2479044) | about a year ago | (#43515627)

He's just repeating the old adage - best demonstrated by Gary Larson - that nature abhors a vacuum. :)

second law of thermodynamics? (0)

Anonymous Coward | about a year ago | (#43515705)

This is just a restatement of the second law of thermodynamics with some thoughts about how that constrains life/evolution.

The comments referring to Gary Larson capture the biased nature of the idea. Nothing to see here folks, move along.

I have an untidy office (1)

Alain Williams (2972) | about a year ago | (#43515725)

Lots of entropy here, more than most! Does that mean that I am super intelligent?

Santa Fe Institute (1)

RockGrumbler (1795608) | about a year ago | (#43515733)

I don't have the background to judge the novelty of this approach. But the quote from the Santa Fe Institute fellow would imply that the chaos/complexity folks find it interesting.

Common sense formalized (0)

Anonymous Coward | about a year ago | (#43515745)

This paper formalizes something that until now we viewed as common sense.
Intelligent agent:
- is creative
- thinks outside of the box
- comes up with solutions that are unexpected
- is novel
- surprises you with their behavior

Dumb agent:
- follows rigid rules
- thinks only within rigid set rule-set (thinks inside the box)
- comes up with obvious solutions
- is repetitive
- is very predictable in their behavior

Intelligent agent traits involve more entropy than the dumb agent. If you have two adaptive paths, one that will maximize the entropy and a second that doesn't, the one that will maximize entropy will produce a more intelligent agent (maybe not in the next generation, but eventually). High entropy = open mind. Low entropy = closed mind.

This has interesting implications for every day life. If you choose a low entropy career (assembly line worker), your intelligence will suffer, compared to a choice of a high entropy career (researcher).

Low entropy environments are predictable; high entropy environments are more unpredictable. I would say that unpredictability is a good way to judge the level of intelligence. Take 2 agents: the one who can successfully predict the actions of the other is more intelligent. I can predict most of the actions of my 5 yo daughter (for now), which makes me more intelligent. She can not predict my actions to the same degree (yet).

A deeper philosophical question is the level of unpredictability. If someone is so unpredictable that their actions seem crazy ... are they really crazy? Or are they simply a misunderstood genius? Maybe their ideas will be understood only by later generations? What is the difference between complexity and randomness? After all, I can make an agent that has very high entropy (make it jump around randomly), and yet it will not be as intelligent as an agent who is complex (but doesn't have the same level of entropy). High entropy isn't the only thing that makes someone intelligent. High entropy has to be bounded by adaptive behavior, otherwise it's useless. A mad scientist may have very high entropy (discovers a new form of energy generation) but very poor adaptive behavior (is jobless/homeless because of poor hygiene and social skills). Some things like tying shoes and brushing teeth have to be somewhat low entropy (repetitive) just to satisfy the criteria of basic day-to-day survival.

May apply to "life" rather than "intelligence" (1)

littlewink (996298) | about a year ago | (#43515983)

In any case both require clear definition, but it would appear that this paper applies to "life" rather than the more restrictive "intelligence".

IMO "intelligence" is primarily the ability to refer to things outside the here and now, the property that linguists call displacement [wikipedia.org]. See Derek Bickerton's "Adam's Tongue" [google.com] for details.

maximizes number of future states (1)

johnrpenner (40054) | about a year ago | (#43516123)

the article is self-contradictory — it says "It actually self-determines what its own objective is," said Wissner-Gross. "This [artificial intelligence] does not require the explicit specification of a goal".

this is not true, because it then goes on to say, "trying to capture as many future histories as possible".

so there IS a goal — it maximizes the number of future states — exactly the same way a negaMax search can maximize the mobility parameter in a chess engine search.

in other words, this is a lot of hype — defining intelligence as the maximization of future states (i.e. greater entropy), and running a classic negaMax search on that basis is what is going on here.

it's a novel way to go about things, but redefining the terms doesn't actually make anything new in the sensational way this article claims.
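That "implicit goal" reading can be sketched in a few lines (a hypothetical toy game, my own illustration, not the article's code): score each move by the count of future histories it keeps reachable, exactly like a mobility term in a game-tree search.

```python
# A tiny abstract game: each state lists the states reachable in one move.
GAME = {
    "a": ["b", "c"],
    "b": ["d"],            # cramped: only one follow-up
    "c": ["e", "f", "g"],  # roomy: three follow-ups
    "d": [], "e": [], "f": [], "g": [],
}

def future_histories(state, depth):
    """Count the distinct move sequences of length <= `depth` from `state`."""
    if depth == 0 or not GAME[state]:
        return 1
    return sum(future_histories(m, depth - 1) for m in GAME[state])

def best_move(state, depth=2):
    """Pick the move that keeps the most future histories open."""
    return max(GAME[state], key=lambda m: future_histories(m, depth - 1))

print(best_move("a"))  # → c  (3 follow-ups kept open, versus 1 for b)
```

The "goal-free" agent still has a perfectly explicit objective function; it's just "maximize the count of future histories" rather than a task-specific score.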
