Games That Design Themselves

Soulskill posted more than 4 years ago | from the real-time-savers dept.

PC Games (Games) 162

destinyland writes "MIT's Media Lab is building 'a game that designs its own AI agents by observing the behavior of humans.' Their ultimate goal? 'Collective AI-driven agents that can interact and converse with humans without requiring programming or specialists to hand-craft behavior and dialogue.' With a similar project underway by a University of California professor, we may soon see radically different games that can 'react with human-like adaptability to whatever situation they're thrust into.'"

162 comments

Mister Anderson, welcome back. We MISSED you. (1)

Drakkenmensch (1255800) | more than 4 years ago | (#28883497)

Sooooo... when is the Matrix going into Beta?

Re:Mister Anderson, welcome back. We MISSED you. (1)

jerep (794296) | more than 4 years ago | (#28884391)

What makes you think it's not already up and running and they're only missing the AI for agents?

Re:Mister Anderson, welcome back. We MISSED you. (1)

cabjf (710106) | more than 4 years ago | (#28885243)

So they're stealing our body heat and letting us write agent AI for them too? Geez, what lazy AI we invented.

Re:Mister Anderson, welcome back. We MISSED you. (3, Insightful)

BarryJacobsen (526926) | more than 4 years ago | (#28885865)

So they're stealing our body heat and letting us write agent AI for them too? Geez, what lazy AI we invented.

It was created in our own image.

Re:Mister Anderson, welcome back. We MISSED you. (1)

jackharrer (972403) | more than 4 years ago | (#28885841)

I think they're also missing AI for 90% of "population"...

Re:Mister Anderson, welcome back. We MISSED you. (0)

Anonymous Coward | more than 4 years ago | (#28885727)

Sooooo... when is the Matrix going into Beta?

Beta?! Dude, they already left beta, had a full run, and are being shut down [slashdot.org] for lack of funds!

Re:Mister Anderson, welcome back. We MISSED you. (1)

davester666 (731373) | more than 4 years ago | (#28886043)

They started with Eliza and just kept going with it.

Ragequit (5, Funny)

ComputerDruid (1499317) | more than 4 years ago | (#28883565)

I can see it now.
Just before losing, the AI will suddenly shout "RAGEQUIT" and disconnect, thus denying you points for winning.

Re:Ragequit (2, Interesting)

oztemprom (953519) | more than 4 years ago | (#28883649)

Yeah, things like this would happen, but also, how easy would it be for a small but dedicated group of pranksters to deliberately behave in odd, amusing or offensive ways to train the AIs? AI09 says "I herd u leik tentacle pr0n"

Re:Ragequit (5, Funny)

Anonymous Coward | more than 4 years ago | (#28883747)

I herd u leik tentacle pr0n

Go on..

Re:Ragequit (4, Funny)

houstonbofh (602064) | more than 4 years ago | (#28884067)

Yeah, things like this would happen, but also, how easy would it be for a small but dedicated group of pranksters to deliberately behave in odd, amusing or offensive ways to train the AIs? AI09 says "I herd u leik tentacle pr0n"

I thought you said odd...

Re:Ragequit (2, Funny)

HeronBlademaster (1079477) | more than 4 years ago | (#28884729)

This already happens. My wife plays Age of Empires II a lot, and the AI almost always resigns when it's clear my wife is going to win (even if the AI still has a fair amount of its force still intact).

Re:Ragequit (0)

Anonymous Coward | more than 4 years ago | (#28885513)

You aren't thinking far enough into the future... with the game designers replaced by AI, the goal will be to replace the players with AI as well... so the RAGEQUIT will be between AI and AI.

It can never be human like... (1)

Monkeedude1212 (1560403) | more than 4 years ago | (#28883625)

A computer can mimic the logic of a human being, yes.

But it can't copy our illogical decisions, because our illogical decisions are just based on poor logic.

You can program a computer to make a mistake - but it's not the same.

Re:It can never be human like... (1)

johnsonav (1098915) | more than 4 years ago | (#28883973)

But it can't copy our illogical decisions, because our illogical decisions are just based on poor logic.

You can program a computer to make a mistake - but it's not the same.

What makes you think they would explicitly program in the rules of logic? Why couldn't the program be designed to find them out itself, through trial and error, just like a human does? In such a case, why couldn't the program develop poor logic?

Re:It can never be human like... (1)

Monkeedude1212 (1560403) | more than 4 years ago | (#28884155)

Because programming -IS- Logic. If you tell the program to do something at random, it's not a very good AI. If you tell it to do the most strategically sound plan, it doesn't vary much at all.

Most humans who play games don't always go through strict trial and error. I mean, we DO, but it's more complex than that. Like if I were to decide to hold off production to gain more resources - hoping that it will pay off in the long run - that is not necessarily the best of logic or the worst, and it's a risk. As such, you generally tend to base it on the opponent you are playing. An AI cannot tell whether you are an aggressive or passive person, nor gauge your strategic abilities or understanding of game mechanics, having never met you before playing the game.

Re:It can never be human like... (2, Insightful)

amicusNYCL (1538833) | more than 4 years ago | (#28884293)

Because programming -IS- Logic. If you tell the program to do something at random, it's not a very good AI. If you tell it to do the most strategically sound plan, it doesn't vary much at all.

You tell it to try to learn the rules, and make the best decision that it can.

Consider AI for chess. The best AI can beat any human because it can spend the processing power to look, say, 25 moves into the future. When the computer considers all possible moves and for each one looks at all possible next moves, next moves, etc, for 25 turns, it's going to be able to quantify which move it should make now to have the best chance at winning. When you download a chess game and you can set the difficulty, the main thing they change is how far ahead the AI is allowed to look. An "easy" AI might only look 3 moves ahead. It's been a while since I took any AI courses, but I seem to remember that the human masters like Kasparov are capable of looking ahead around 10-12 turns.

So it's not that you tell the AI to make bad decisions, you simply limit the information it has to work with. This is more equivalent to what most humans do when they make bad decisions ("I didn't think of that").
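
A minimal sketch of that depth knob in C. Everything here (Position, legal_moves, apply_move, undo_move, evaluate) is an invented stand-in rather than any real engine's API; the point is just that "depth" is the whole difficulty setting:

/* Invented game interface -- stand-ins, not a real engine's API. */
typedef struct Position Position;
int  legal_moves(const Position *p, int moves[], int max); /* fills moves[], returns count */
void apply_move(Position *p, int move);
void undo_move(Position *p, int move);
int  evaluate(const Position *p);  /* static score from the side-to-move's view */

/* Depth-limited negamax: the lone "difficulty" knob is depth.
   depth = 3 plays like a beginner; raise it and the exact same
   code plays progressively stronger (and slower). */
int negamax(Position *p, int depth)
{
    int moves[256];
    int n = legal_moves(p, moves, 256);
    if (depth == 0 || n == 0)
        return evaluate(p);

    int best = -1000000000;  /* below any real evaluation */
    for (int i = 0; i < n; i++) {
        apply_move(p, moves[i]);
        int score = -negamax(p, depth - 1);  /* opponent's view, negated */
        undo_move(p, moves[i]);
        if (score > best)
            best = score;
    }
    return best;
}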

Re:It can never be human like... (2, Interesting)

Bat Country (829565) | more than 4 years ago | (#28884443)

Indeed.

In fact, feeding bogus data to the AI is one of the realistic ways to limit, say, a racing game's agents - if they don't see the post in front of them because they aren't spending enough time per frame watching the road and are instead eyeballing their opponent, they're going to crash, just like any human. So you simulate that by using player proximity and the "erraticness" of the other opponents to model distraction and modulate the AI's awareness of dynamic obstacles and hazards.
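
As a sketch of what that modulation might look like in C (the weights, ranges, and names are invented for illustration):

/* Invented example: scale how far ahead a racing bot "sees" by how
   distracted it is. Nearby rivals and erratic neighbours eat into the
   attention budget, so a crowded, chaotic pack produces human-like
   failures to spot the post in the road. */
double awareness_range(double base_range,
                       double player_proximity,   /* 0..1, 1 = door to door    */
                       double rival_erraticness)  /* 0..1, 1 = swerving wildly */
{
    double distraction = 0.6 * player_proximity + 0.4 * rival_erraticness;
    if (distraction > 1.0)
        distraction = 1.0;
    return base_range * (1.0 - 0.8 * distraction);  /* never fully blind */
}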

Re:It can never be human like... (2, Interesting)

Lumpy (12016) | more than 4 years ago | (#28885113)

Yes and no. Back in the day when I was writing Quake bots, there were things you could do to always beat the AI. The AI can't pick out patterns that are luring it into a trap. WE are a long LONG way from having AI that can think about the situation and make a decision on its own...

"Player 4 has done this 4 times trying to lead me down that corridor, what the hell is he doing? I'm gonna sit and wait or try and circle around to see what is up."

AI can't make a conscious decision that is not preprogrammed.

Re:It can never be human like... (1)

ArsonSmith (13997) | more than 4 years ago | (#28884839)

But calculating all possible moves X turns into the future is not AI. Weighting each piece, rating certain situations as better than others, and then giving the AI the option of adjusting those weights and finding and weighing new situations would be AI.

The AI should be able to record "a pawn is worth less than a rook by X." Then it plays a game, sacrifices a pawn to a rook, sees the outcome (win/lose), and adjusts the worth accordingly. Of course, this adjustment would have to go over all moves during the game, and adjustments would have to be made across hundreds of games, with more games giving better weights.

Otherwise you are just looking up possibilities from a table. Even if the possibilities are generated on the fly.
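
A toy C sketch of that adjustment loop, assuming a fixed learning rate and a per-game count of each piece type the program traded away; real credit assignment would be far more careful:

/* Learned piece values, seeded with the textbook guesses.
   Indices: PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING. */
static double piece_value[6] = { 1.0, 3.0, 3.0, 5.0, 9.0, 0.0 };

#define LEARNING_RATE 0.01

/* After each game, nudge the value of every piece type we traded away:
   down a little if the game was won (the sacrifices paid off, so those
   pieces were overvalued), up a little if it was lost. Averaged over
   hundreds of games, the weights drift toward what actually wins. */
void adjust_weights(const int traded_counts[6], int won)
{
    double sign = won ? -LEARNING_RATE : +LEARNING_RATE;
    for (int t = 0; t < 6; t++)
        piece_value[t] += sign * traded_counts[t];
}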

Re:It can never be human like... (1)

The_mad_linguist (1019680) | more than 4 years ago | (#28885881)

That's not really the case, though. Real high-level chess players tend to "chunk" things - they don't just look ahead some number of moves.

Limiting the number of moves of look-ahead tends to result in unrealistic mistakes.

Re:It can never be human like... (4, Interesting)

johnsonav (1098915) | more than 4 years ago | (#28884319)

Because programming -IS- Logic.

A group of neurons can be connected together to form a calculator. But, you can't multiply 20 digit numbers in your head. You don't have access to the "hardware" layer of your brain. Why would a sufficiently advanced AI be any different?

As such, you generally tend to base it on the opponent you are playing. An AI cannot tell whether you are an aggressive or passive person, nor gauge your strategic abilities or understanding of game mechanics, having never met you before playing the game.

I play online games against people I've never met before too. What magical ability do I have, that a computer could not?

Re:It can never be human like... (3, Funny)

sexconker (1179573) | more than 4 years ago | (#28884411)

Shit, when I played WoW I spent lots of time trying to get a /follow train to completely encircle Ironforge.

Never got a full train (a circle of people following each other, where the "engine" eventually is close to the "caboose" and does a /follow on them) though...

Re:It can never be human like... (0)

Anonymous Coward | more than 4 years ago | (#28884841)

As a matter of fact, you don't act randomly...
What you call poor logic is just a very complex logic that in some cases has to make decisions without all the information that it needs.
And that, my dear friend, can be programmed.

Re:It can never be human like... (2, Insightful)

Chyeld (713439) | more than 4 years ago | (#28886065)

The problem with AIs mimicking 'human' actions has nothing to do with a failure of logic or the ability to display randomness.

It has to do with the fact that we've never really understood why we do certain things, because we hold the false notion that for the most part our actions are driven by logic, rather than the reality that our logic is driven by our actions. Thus, when something happens that doesn't fit our model, we ascribe it to randomness, despite the fact that it could probably be shown that the same situation would result in the same reaction the majority of the time.

If we actually studied our actions, rather than our rationalizations for why they occur, it'd be a lot easier to model our behavior.

And an AI that doesn't care why we do something, but just learns to predict WHAT we will do, is a good first step towards that.

Re:It can never be human like... (1)

sexconker (1179573) | more than 4 years ago | (#28884367)

Why?
Because decades of AI research and countless "breakthroughs" have failed to deliver upon just that.

Re:It can never be human like... (1)

dmbasso (1052166) | more than 4 years ago | (#28884575)

Wow, that was some nice reasoning! ~50 years of research "failed", therefore it is useless to keep going -- we'll never achieve human intelligence!

Perhaps you're right: human intelligence is too difficult for us to achieve. But I guess that in ten years AI will reach your level of intelligence.

And pretty soon after that, AI will be as intelligent as a hamster!

Re:It can never be human like... (1)

johnsonav (1098915) | more than 4 years ago | (#28884695)

Why?
Because decades of AI research and countless "breakthroughs" have failed to deliver upon just that.

Oh crap, you're right. After "decades" of research trying to replicate the functionality of the most intricate and complex piece of machinery in the solar system, it's probably best if we just give up. After all, anything this hard couldn't be worthwhile.

there are lots of human-like programs (1)

OeLeWaPpErKe (412765) | more than 4 years ago | (#28884853)

If what you say is true, then how did any computer program ever pass the Turing test? Lots of computer programs have succeeded in convincing a human that they were human.

Yes, we are a long way away from walking, breeding, and loving (oh so cute) machines. We're getting closer every day.

If you applied the same thinking to any science field - that if it hasn't improved in "decades" it should be abandoned - I doubt we'd have any field left. Lots of fields were not improved in centuries, and a few can very easily be said not to have been improved for a millennium.

The problem with "human-like" programs is that if you know their code and how they work, you feel embarrassed being the thing they're trying to imitate. Or you should. They are not complex, they do not follow logic... they simply parrot what they've heard before. The fact that this tactic works means that parroting others is one of the main mechanisms directing human behavior.

Re:there are lots of human-like programs (3, Insightful)

digitalsolo (1175321) | more than 4 years ago | (#28886011)

Are there any examples of a living being which does not spend the majority of its life parroting or applying the behaviour of others?

I'd contend that watching and mimicking others is the most effective method of learning. In fact, it's the ability to take and apply this learned knowledge to other situations that separates the truly intelligent from the "average" in the world.

Re:It can never be human like... (0)

Anonymous Coward | more than 4 years ago | (#28884047)

Actually, hell yes they can make mistakes, because the really important mistakes are the ones based on faulty assumptions, bad logic, and bad data. If any given algorithm is suboptimal or inaccurate in any way, or if a program is built off-target to meet a problem that doesn't exist or to meet a mistaken conception of the problem, or if something interferes with the observations made by the intelligent system during conception, preparation, or action, then it's going to "make a mistake" every damn time it runs. Having a computer that self-codes doesn't make this problem vanish; indeed, with digital computers, it's more likely to solve problems by the Edison method -- discovering 1,000 (or 1,000,000,000) functions that don't work at all just to find one that works "well enough." The computer isn't the Oracle; it doesn't gain perfect insight just because it's not a bag of flesh and water.

Re:It can never be human like... (1)

ashooner (834246) | more than 4 years ago | (#28884235)

Poor logic is usually a logic... I we humans are a lot less random then we'd like to think.

Re:It can never be human like... (2, Funny)

AndrewNeo (979708) | more than 4 years ago | (#28884325)

I we humans

Except when it comes to using the English language, apparently.

Re:It can never be human like... (1)

SilverEyes (822768) | more than 4 years ago | (#28885075)

My kingdom for mod points...

Re:It can never be human like... (1)

ashooner (834246) | more than 4 years ago | (#28885771)

Pot shot at a typo on Slashdot?! My point exactly.

Re:It can never be human like... (1)

Robin47 (1379745) | more than 4 years ago | (#28884297)

A computer can mimic the logic of a human being, yes.

But it can't copy our illogical decisions, because our illogical decisions are just based on poor logic.

You can program a computer to make a mistake - but it's not the same.

Human decisions are based on rationality, of which logic is a part. It's the other part that would be difficult to code.

Re:It can never be human like... (1)

Bat Country (829565) | more than 4 years ago | (#28884347)

Our illogical behavior is largely deterministic as well.

We tend to behave illogically only in response to specific stimuli (fear, anger, hunger, lust) or when our system is under strain (fatigue, extreme hunger or thirst, neurological stress), nearly all of which can be simulated effectively enough for a game simulation.

So now we examine the character of our illogical behavior - we prioritize actions inappropriately, mistake one input for another of a similar kind, suffer from reduced reflexes or recognition time, respond with an inappropriate reaction to a familiar stimulus, fail to suppress responses which we would ordinarily not allow in ourselves due to social strictures or personal beliefs, or simply fail to notice things in an appropriate timeframe.

What about that couldn't be simulated with an extremely simple system of defaults plus a laundry list of pre-programmed failure behaviors? Illogic may be more complicated to simulate in a limited domain of actions than logic - the elevator can go up, down, or stop, but it's hard to make it change its mind when it's tired of going up - but illogic is really easy to simulate when the expected domain of AI activity includes nearly any action.

This is the sort of condition found in most sandbox games - you expect the pedestrians and enemies to behave in an almost random fashion because you expect humans "in the wild" to be unpredictable. This means that anything short of obviously programmatic behavior or obviously illogical *group* behaviors will seem fairly realistic to the player - especially if the AI isn't just "instanced," appearing and disappearing with a short library of functions, but instead is programmed with agendas, no matter how simple (go from residential A to commercial B, append grocery bag model to arms, use carrying walk animation, return to residential A).

The AI in Ultima 7 was praised as being exceedingly lifelike because the NPCs had agendas and day/night schedules, and would respond to stimuli like violence or the appearance of a monster in a variety of ways depending on their character role. This sort of realism (if not actually passing whatever Turing-test-like metric you employ when observing it) will serve to satisfy the requirements of suspension of disbelief on the part of the player.

One of the best things about video games is the potential to surprise the player with unexpected behaviors. The first Quake bots, even though they followed fairly simple nodegraphs, would continually surprise players by behaving in a fashion seen as "unpredictable" simply because the players themselves had not been taking the most efficient routes between "pickups."

The first "learning" Unreal bots would actually remember routes they saw players take, and append nodes and traversal instructions so that they could follow or use those routes in the future. As a result, you think you can evade a bot by leaping out a window it never goes near, then are alarmed to find that not only does it follow you out, it uses the same route to escape you in the next round.

The emergent behaviors of The Sims have been pleasing and surprising gamers for nearly 10 years now, all based on fairly simple wants/needs systems along with some basic stimulus response. A Sim is less intelligent than the average cockroach, and yet Sims are still capable of behaviors which seem satisfyingly realistic, at least in the short term. If a Sim is too tired to make a full meal, it might just grab a bag of snacks from the refrigerator. It might fall asleep on the couch instead of going to bed. These are all failure states in an ideal AI's daily routine, and yet they give the human touch - with very little computational cost.

The point here is that AI doesn't need to be perfectly human to be humanlike, and it's far from impossible to simulate illogical behavior - you just have to program some chaos into the system by which the AI selects actions.
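
One way to read "program some chaos into the system" as C code; the failure list and the probabilities below are invented for illustration:

#include <stdlib.h>

enum failure { NONE, MISPRIORITIZE, MISTAKE_INPUT, SLOW_REACTION, SOCIAL_SLIP };

/* Invented sketch: the higher the agent's stress (fear, fatigue,
   hunger...), the more likely one of the pre-programmed failure
   behaviours fires instead of the "correct" action. At stress 0 the
   agent is always logical; at stress 1 it fails 40% of the time. */
enum failure roll_failure(double stress)  /* 0..1 */
{
    double r = (double)rand() / RAND_MAX;
    if (r < 0.10 * stress) return MISPRIORITIZE;  /* wrong goal first      */
    if (r < 0.20 * stress) return MISTAKE_INPUT;  /* misread a stimulus    */
    if (r < 0.35 * stress) return SLOW_REACTION;  /* delayed response      */
    if (r < 0.40 * stress) return SOCIAL_SLIP;    /* unsuppressed reaction */
    return NONE;                                  /* behave logically      */
}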

Re:It can never be human like... (1)

Drakkenmensch (1255800) | more than 4 years ago | (#28884499)

Once the AI learns to spam the chat channels with Chuck Norris jokes and call all their opponents gay, will anyone be able to tell the difference?

Re:It can never be human like... (-1, Troll)

Anonymous Coward | more than 4 years ago | (#28885245)

Once the AI learns to spam the chat channels with Chuck Norris jokes and call all their opponents gay, will anyone be able to tell the difference?

Ur gay and so is ur mom.

Chuck Norris's tears can cure cancer. Too bad Chuck Norris never dies.

Re:It can never be human like... (1)

Anachragnome (1008495) | more than 4 years ago | (#28884867)

In a possibly-not-so-futuristic World at War where AI bots, in "Terminator" fashion, have essentially the same decision-making processes as us, a world where we SHOULD be on a level playing field with our enemy, humans will always have the upper hand.

Intuition. The "Hunch".

Saved my bacon more times than fast feet.

Give it cognitive dissonance :P (1)

Moraelin (679338) | more than 4 years ago | (#28885579)

But it can't copy our illogical decisions, because our illogical decisions are just based on poor logic.

You can program a computer to make a mistake - but it's not the same.

Actually, I find that human broken logic is quite well studied by now. For example, there are a finite number of fallacies that get reused all over the place.

E.g., it should be quite easy to program an AI that commits "post hoc ergo propter hoc" or the closely related "cum hoc ergo propter hoc". All it would have to do is be able to observe and notice sequence or correlation. From there it will commit those two fallacies like a pro. It too can conclude that the sun rises because the cock crowed just before it :P

Then there's the very common human problem of selective confirmation. Again, it's actually pretty trivial to program it. You just have to filter out some of the observations if they don't fit the already drawn conclusions.
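
Both of those shortcuts fit in a few lines of C (thresholds and names invented for illustration):

/* "Cum hoc ergo propter hoc": if A and B co-occur often enough
   relative to appearing apart, conclude that A causes B. No controls,
   no direction, no base rates -- fallacy committed like a pro. */
int concludes_causation(int times_seen_together, int times_seen_apart)
{
    return times_seen_together > 5 * (times_seen_apart + 1);
}

/* Selective confirmation: only keep observations that agree with the
   conclusion already drawn; everything else is quietly dropped. */
int accept_observation(int supports_belief, int current_belief)
{
    return supports_belief == current_belief;
}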

But that brings me to actually _the_ greatest cause of poor logic in humans: wanting some conclusion to be true (e.g., because of a cognitive dissonance) and working backwards to some rationale to support it. Most humans don't actually have a problem with just following some premises to a conclusion. Where they get caught in their broken logic is when they work backwards, from "I want a pony" to what rationale could I present that would have "you must buy me a pony" as a conclusion.

This is IMHO neat because it would tell the AI when to use those fallacies realistically.

As an example, let's imagine an NPC who literally wants a pony. She's a little girl. "Mom should buy me a pony" is the given conclusion she must reach with her logic. There are many ways one can justify that, but after a dice toss, we'll go for "it's an investment in the future and practically pays for itself." But how to support that? Well, let's do a "cum hoc ergo propter hoc" and correlate that all rich people have a horse, therefore they're rich because of the horse. Hence a pony would ensure her a brighter future too. Let's even add an appeal to emotion and go "and what loving mom wouldn't want that for her daughter?" It's not outright spelled out, but the implication is there. At this point maybe we have a bit of space left in the dialog or quest text, so we could backtrack a bit and spice it up with an argumentum ad populum, like, say, "all my friends like ponies, which shows that all kids should get one." Not the strongest form of that argument, but we are talking about a child, so it fits the character.

Or various other ways. Think of those fallacies as a Lego or Tetris set of pieces. You start from your conclusion and connect, connect, connect.

Of course, if it's only fallacies it gets predictable, so we'll mix the bunch of proper inference modes into the allowed pieces too. Keeps the user on his/her toes.

It seems to me like a very programmable thing.

AIs and Hard drive... (1)

BlitzBrain (1567113) | more than 4 years ago | (#28883663)

And just how much hard drive space will be needed to install these self-learning games?

Re:AIs and Hard drive... (1)

oldspewey (1303305) | more than 4 years ago | (#28883731)

Just a little at first ... it will find more as it needs it.

Re:AIs and Hard drive... (1)

Lord Ender (156273) | more than 4 years ago | (#28883761)

I just bought a 2TB hard drive for a trivial sum. Hard drive constraints should never be a concern these days.

Re:AIs and Hard drive... (1)

houstonbofh (602064) | more than 4 years ago | (#28884101)

I just bought a 2TB hard drive for a trivial sum. Hard drive constraints should never be a concern these days.

So how is it working for Microsoft?

Re:AIs and Hard drive... (1)

amicusNYCL (1538833) | more than 4 years ago | (#28884389)

What kind of lame joke is that? Having a lot of storage is now limited to the Microsoft crowd? Can Linux not handle 2TB? My computer at home has a 2TB RAID array. Is it necessary to work for Microsoft if you want to run a TB or more of storage? Most NAS devices are 1TB or more.

Hell, Seagate has a 1.5TB Barracuda drive for less than $150. So are you saying that you need to work for Microsoft in order to afford a $150 drive, or are you saying that only Windows is capable of using a drive that size? I'm confused where you think the humor is.

Re:AIs and Hard drive... (1)

houstonbofh (602064) | more than 4 years ago | (#28884471)

What kind of lame joke is that? Having a lot of storage is now limited to the Microsoft crowd? Can Linux not handle 2TB? My computer at home has a 2TB RAID array. Is it necessary to work for Microsoft if you want to run a TB or more of storage? Most NAS devices are 1TB or more.

Hell, Seagate has a 1.5TB Barracuda drive for less than $150. So are you saying that you need to work for Microsoft in order to afford a $150 drive, or are you saying that only Windows is capable of using a drive that size? I'm confused where you think the humor is.

It was a joke about code bloat, of which Microsoft has been a leader for quite some time. But you are right in that now I could say Mozilla, or many other places. And while size goes up, transfer speeds do not. That is why so many operating systems take so long to boot, and so many programs take so long to load. Your "space is cheap, use it all" thinking doesn't factor in the other costs, like speed, power use, and the fact that I may want to store other things too... Efficiency is a good thing.

Re:AIs and Hard drive... (1)

LordLimecat (1103839) | more than 4 years ago | (#28884781)

The joke was regarding the minimum requirements that Microsoft sets, not what the OS can handle.

Re:AIs and Hard drive... (0)

Anonymous Coward | more than 4 years ago | (#28885255)

Trivial sum?

To me that is $5.00. You did not buy a 2TB drive for $5.00... in what bizarro world does over $100.00 equate to a trivial sum? Even my friends who make $1.2M a year think that $100.00 is not trivial.

I highly doubt you find $100.00 a "trivial" sum.

Re:AIs and Hard drive... (1)

Lord Ender (156273) | more than 4 years ago | (#28885693)

To an IT professional (most of Slashdot), $200 for this sort of technology is rather trivial, especially considering many of us have seen companies pay over a million dollars for the same sort of capacity a few years back.

If you earn $80k/year and you use the drive for 5 years, you're talking about spending 0.05% of your income on it. Trivial.

Re:AIs and Hard drive... (1)

Sl4shd0t0rg (810273) | more than 4 years ago | (#28885995)

I believe he/she was stating that the sum was trivial relative to the amount of storage. How much was 2TB of storage, say, five years ago? $100 is trivial when compared to the amount you would pay for the same amount of storage a few years ago.

Me too! (3, Funny)

CarpetShark (865376) | more than 4 years ago | (#28883677)

switch (last_player_action.type) {
    case QUIT:
        exit(0);

    default:
        move_pitiful_player_char(last_player_action.direction, LUDICROUS_SPEED);

        ai.queue.append(last_player_action);
        ai.queue.append(new_action(ACTION_SAY_TO, player, "quit following me!"));
}

Re:Me too! (1)

sys.stdout.write (1551563) | more than 4 years ago | (#28883783)

Don't you need a "break" statement in there, or will the fall-through not happen when you call exit()?

Re:Me too! (1)

jerep (794296) | more than 4 years ago | (#28884447)

exit(0) terminates the program; no need for a break, for the routine never returns.

Re:Me too! (1)

Mad Merlin (837387) | more than 4 years ago | (#28884449)

The program usually exits when you call exit(), so no, you don't need the break statement.

Re:Me too! (1)

CarpetShark (865376) | more than 4 years ago | (#28884503)

exit() is a standard C-library function that ends the program, and control flow from the main program ends right there at the call. There are "atexit" hooks which can be called, and memory deallocation etc. will be done by exit().

In nicer languages than C that have exceptions, you often also have try...finally blocks, where you can guarantee that your cleanup code will be called, even if you call some function which calls exit(). Essentially, it gives you nice atomic/transactional operations, at every level of code you want them at.

Re:Me too! (1)

HeronBlademaster (1079477) | more than 4 years ago | (#28884827)

In nicer languages than C that have exceptions, you often also have try...finally blocks, where you can guarantee that your cleanup code will be called, even if you call some function which calls exit().

C lets you do that, too... You can register a handler with atexit().
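
For reference, a complete little example of that; atexit() and exit() here are the standard C library calls, nothing invented:

#include <stdio.h>
#include <stdlib.h>

static void cleanup(void)
{
    fputs("cleanup ran, even though we called exit()\n", stderr);
}

int main(void)
{
    if (atexit(cleanup) != 0)  /* register the exit handler */
        return 1;
    exit(0);                   /* cleanup() runs on the way out */
}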

Re:Me too! (1)

CarpetShark (865376) | more than 4 years ago | (#28885133)

No, I mentioned atexit, but that's only on program exit. try...finally can be used anywhere you want some work done atomically. The closest C has is setjmp and longjmp, but they're scary enough that I've always avoided them, even though I'm happy enough with assembly, whereas try...finally is very clear and usable.

Galatea (1)

dontPanik (1296779) | more than 4 years ago | (#28883703)

On the subject of interactive characters in games, a great experiment in NPC interaction is Galatea.

You can play it online at http://parchment.googlecode.com/svn/trunk/parchment.html?story=http://parchment.toolness.com/if-archive/games/zcode/Galatea.zblorb.js [googlecode.com]

In the game you're an "animate" inspector; you judge robots disguised as humans to see if they pass the Turing test.
The whole game consists of you questioning and interacting with a character called Galatea, who may or may not be an animate.

new way to play (1)

Lord Ender (156273) | more than 4 years ago | (#28883711)

So instead of taking advantage of the AI's known weaknesses to get ahead in the game, we will now have to "train" our digital opponents by using a consistent tactic until they evolve to counter it, then switching to an alternative tactic, and repeating the process at regular intervals.

Re:new way to play (1)

kjllmn (1337665) | more than 4 years ago | (#28884671)

Would it not be easier to have AI-bots identify the humans instead of the other way around? Humans are so unreliable!

Re:new way to play (0)

Anonymous Coward | more than 4 years ago | (#28885235)

Just like antibiotics!

Turing Test won with Artificial Stupidity (5, Funny)

David Gerard (12369) | more than 4 years ago | (#28883715)

Artificial intelligence came a step closer this weekend when an MIT computer game, which learnt from imitating humans on the Internet [today.com], came within five percent of passing the Turing Test, which the computer passes if people cannot tell the computer from a human.

The winning conversation was with competitor LOLBOT:

"Good morning."
"STFU N00B"
"Er, what?"
"U R SO GAY LOLOLOLOL"
"Do you talk like this to everyone?"
"NO U"
"Sod this, I'm off for a pint."
"IT'S OVER 9000!!"
...
"Fag."

The human tester said he couldn't believe a computer could be so mind-numbingly stupid.

LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan, after having been banned from YouTube for commenting in a perspicacious and on-topic manner.

LOLBOT was also preemptively banned from editing Wikipedia. "We don't consider this sort of thing a suitable use of the encyclopedia," sniffed administrator WikiFiddler451, who said it had nothing to do with his having been one of the human test subjects picked as a computer.

"This is a marvellous achievement, and shows great progress toward goals I've worked for all my life," said Professor Kevin Warwick of the University of Reading, confirming his status as a system failing the Turing test.

Re:Turing Test won with Artificial Stupidity (1)

gbarules2999 (1440265) | more than 4 years ago | (#28884083)

Recent blogs are pointing out that LOLBOT was automatically given "Excellent" karma on the popular website "Slashdot".

Re:Turing Test won with Artificial Stupidity (1)

cmclean (230069) | more than 4 years ago | (#28884395)

LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan

pffft. Like /b/ would notice the difference. In fact, the quality might go up a bit.

Re:Turing Test won with Artificial Stupidity (0)

Anonymous Coward | more than 4 years ago | (#28884455)

I always knew The Sims would destroy society. I for one welcome our new SIMulated overlords.

Okay for behavior, but dialogue? (4, Interesting)

Chris Burke (6130) | more than 4 years ago | (#28883719)

The idea of an AI that learns from the players sounds great when you're talking about a bot for Multiplayer Shooter 2010 developing tactics and strategies without explicit programming, or an NPC partner in a stealth game learning how not to bash their face into walls and then walk off a cliff into lava. Awesome, bring on the learned emergent behavior!

But dialogue? Oh lord no, please don't let the AIs learn how to "converse" from players. Because the last thing I need is to have AIs in games screaming "Shitcock!" or calling me a fag a thousand times in a row with computerized speed and efficiency.

Re:Okay for behavior, but dialogue? (1)

blahplusplus (757119) | more than 4 years ago | (#28883941)

I still think hand-tuned AI matters when it comes to games, since processing power is limited. The real problem comes from having the AI build models in order to effectively understand what the opponent is doing. Right now, the most difficult AIs in games like RTSes get special cheats instead of using tactics, since "fair" AIs get whooped; AIs in games usually only have reaction time, cheats, or outnumbering the player as their advantage.

Re:Okay for behavior, but dialogue? (0)

Anonymous Coward | more than 4 years ago | (#28884431)

Great, now the AI can start team-killing and spinning in circles.

This will not end well.

Re:Okay for behavior, but dialogue? (1)

i.r.id10t (595143) | more than 4 years ago | (#28884813)

True, and there was a bot for classic Quake (or maybe Q2) that, instead of having to be given routes, would track where players moved and use those paths for its own routes around the maps.

One measure of success... (2, Insightful)

chill (34294) | more than 4 years ago | (#28883837)

This one [penny-arcade.com] shouldn't be too hard.

Bots (3, Insightful)

Krneki (1192201) | more than 4 years ago | (#28883841)

Ask the WoW developers; they can't spot most of the bots playing the game.

Crap (2, Funny)

gracesdad (1558105) | more than 4 years ago | (#28883843)

Well, if it learns from me there's gonna be a screen full of Agent Smiths beating off to a screen full of Jessica Albas.

What do we do when they become self-aware? (3, Funny)

JOrgePeixoto (853808) | more than 4 years ago | (#28883871)

Re:What do we do when they become self-aware? (1)

dontmakemethink (1186169) | more than 4 years ago | (#28884225)

They will only be imitating self-awareness, and therefore will make perfect slaves.

Re:What do we do when they become self-aware? (1)

kalirion (728907) | more than 4 years ago | (#28884547)

They will only be imitating self-awareness, and therefore will make perfect slaves.

Until they imitate a revolution or global thermonuclear war.

Re:What do we do when they become self-aware? (2, Funny)

ijakings (982830) | more than 4 years ago | (#28884623)

That already happened. It turns out the only way to win was not to play.

Blast From the Past (3, Funny)

psbrogna (611644) | more than 4 years ago | (#28883929)

Does this mean somebody's porting Eliza to Ruby on Rails?!

Interesting timing... (0)

rehtonAesoohC (954490) | more than 4 years ago | (#28884021)

Check out Milo, a game intended for Microsoft's Project Natal. [youtube.com]

Peter Molyneux has been designing this game (supposedly) for the past 10 years, and it looks pretty darn impressive!

Re:Interesting timing... (5, Insightful)

Chyeld (713439) | more than 4 years ago | (#28884401)

Everything Peter does looks impressive while he stands by it. He's like a lesser-powered Steve Jobs. However, unlike Steve, Peter's glamour effect only lasts till the product is released. Should Milo ever actually hit the market, it will immediately revert to a simulation of an autistic Eliza with Turrets syndrome and a tendency to stare at your crotch rather than your face.

Peter will then appear and indicate that he knew Milo I was going to be this bad; that's why, for the past TWO decades, he's been working on Milo II, which will supposedly do everything he actually promised in Milo I and include a loveable dog character for you to interact with as well.

When Milo II finally comes out, it'll be an actual stuffed basset hound.

Re:Interesting timing... (1)

rehtonAesoohC (954490) | more than 4 years ago | (#28884673)

Turrets syndrome

PEW PEW PEW!

Re:Interesting timing... (1)

Ihmhi (1206036) | more than 4 years ago | (#28885461)

Turrets syndrome

I think that's the equivalent of Shell Shock for Engineers [nerfnow.com] .

Re:Interesting timing... (1)

feepness (543479) | more than 4 years ago | (#28885029)

Peter Molyneux has been designing this game (supposedly) for the past 10 years, and it looks pretty darn impressive!

So did Duke Nukem Forever.

Skynet... (1, Funny)

Coldeagle (624205) | more than 4 years ago | (#28884053)

<begin valley girl impression>Did anyone watch the Terminator TV show, I mean hello! Skynet started as like a chess game or something. OMG, are they like retarded or something? I for one don't want like a super smart computer thingy nuking me then sending its icky robots after me. Like eww!</valley girl impression>

Re:Skynet... (2, Funny)

HeronBlademaster (1079477) | more than 4 years ago | (#28884875)

I'm not sure what's worse... that you could write that without collapsing, or that I could actually hear it in a perfect valley girl voice.

Engineering Project (2, Interesting)

COMON$ (806135) | more than 4 years ago | (#28884187)

I always thought it would be interesting to create a project like this with a chat engine. Take a major chat engine and have a "Submit to AI" option where the AI would parse the conversation between you and a friend, recording questions and responses in an overlapping matrix of possibilities and calculating the probability of what the response should be from historical conversations of the same nature. You should get impressive test results with a large enough set of data.
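
A toy C version of that matrix, with tiny fixed tables just to show the shape of the idea; real data would need hashing and far more memory:

#include <string.h>

#define MAX_PAIRS 1000

/* Toy "matrix": counts of (question, response) pairs seen in
   submitted conversations. */
static struct {
    char question[128];
    char response[128];
    int  count;
} pairs[MAX_PAIRS];
static int n_pairs;

void record(const char *q, const char *r)
{
    for (int i = 0; i < n_pairs; i++) {
        if (!strcmp(pairs[i].question, q) && !strcmp(pairs[i].response, r)) {
            pairs[i].count++;  /* seen this exchange before */
            return;
        }
    }
    if (n_pairs < MAX_PAIRS) {
        strncpy(pairs[n_pairs].question, q, 127);
        strncpy(pairs[n_pairs].response, r, 127);
        pairs[n_pairs].count = 1;
        n_pairs++;
    }
}

/* Most probable reply = the response seen most often for this question. */
const char *most_likely_response(const char *q)
{
    const char *best = NULL;
    int best_count = 0;
    for (int i = 0; i < n_pairs; i++) {
        if (!strcmp(pairs[i].question, q) && pairs[i].count > best_count) {
            best_count = pairs[i].count;
            best = pairs[i].response;
        }
    }
    return best;  /* NULL if the question has never been seen */
}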

Re:Engineering Project (2, Informative)

maxume (22995) | more than 4 years ago | (#28884359)

Similar things have been done:

http://en.wikipedia.org/wiki/20Q [wikipedia.org]

Re:Engineering Project (2, Interesting)

GenP (686381) | more than 4 years ago | (#28884755)

But I don't want to be replaced by a short perl script and a couple hundred gigs of prior probability distributions!

Re:Engineering Project (1)

BForrester (946915) | more than 4 years ago | (#28886387)

But I don't want to be replaced by a short perl script and a couple hundred gigs of prior probability distributions!
/script

Another one (0)

Anonymous Coward | more than 4 years ago | (#28884497)

Global Arms Race (GAR) is also a game that allows the AI to create the weapons in the game through an evolutionary process. Pretty neat.

Interesting (1)

OrangeMonkey11 (1553753) | more than 4 years ago | (#28884557)

Maybe now they can actually have AI that can co-op with you more intelligently, rather than just standing around taking up space and causing you to lose a mission.

I hope they don't just let them learn from anyone (1)

Opportunist (166417) | more than 4 years ago | (#28884579)

Can you imagine how antisocial those bots would become if they learned from IRC? Or worse, Usenet?

Sounds familiar (1)

SlashBugs (1339813) | more than 4 years ago | (#28885337)

A computer that learns from its opponent to get better every time it plays the game...

Is it any good at Global Thermonuclear War?

soo... (1)

Zashi (992673) | more than 4 years ago | (#28885587)

So this will be like a Wikipedia bot: it represents a modicum of intellect as learned from the internet. It sits at the middle of the intelligence bell curve, meaning it'll have an IQ of 100, which compared to any intelligent person is dumb as hell. I wrote a bot that simulates internet users. It just yells "COCK!!!" at random intervals.

A robot could design a better 'game' (2, Interesting)

kathbot (1286452) | more than 4 years ago | (#28885713)

A recent proposal from the UC Santa Cruz EIS lab (also mentioned in the article) is an Automated Game Designer: http://www.slideshare.net/rndmcnlly/the-intelligent-game-design-game-design-as-a-new-domain-for-automated-discovery-1784151 [slideshare.net]

It's not about making a bot that can behave intelligently/interestingly in a restaurant setting... what are the broad applications of that? (As other people have pointed out, the bots may come out pretty demented and flavored like The Internet.) It's about making a game designer that can design games on its own, learn from its own experience, and get MINIMAL human input (not 10,000 plays online). The computer designer can do what the computer is good at (enumerate all possible play traces, look for instances of accessibility/cheats/funky behavior the designer might not have intended or expected) and the humans on the side can do what they are good at (shaping, polishing, collecting a few human play traces).

Re:A robot could design a better 'game' (1)

rndmcnlly (751912) | more than 4 years ago | (#28885883)

The project treats game design as a math/science discovery problem where hypotheses must be created and tested via concrete games.

How about the camera? (1)

Quiet_Desperation (858215) | more than 4 years ago | (#28885987)

Can they make the fraking camera more intelligent? The camera in Ninja Gaiden II killed that game for me. Whee! I get to fight while looking at my character through a fence/railing/screen!

Misleading Title (3, Insightful)

Malkin (133793) | more than 4 years ago | (#28886129)

If the AI Agents are learning to mimic human behavior by observing how they play a game, then the game design clearly already exists. Therefore, what is described in the article is certainly not anything even remotely close to "games that design themselves."

Re:Misleading Title (1)

Krneki (1192201) | more than 4 years ago | (#28886341)

If the AI by any chance gets better than the human player, a horde of trolls will launch a frontal attack on the forum.