Credit where credit is due.
It still sucks though. Just not as much.
We're not going to make any progress here as long as you refuse to address the only claim I've made. To be honest, at this point, I can only assume you're trolling.
I think you're the one who seems to expect a Doom bot to have solved all of AI forever.
No, I'm saying that the article and a lot of readers seem to believe that a significant problem has been solved, which, obviously, has not been solved.
You seem to want to believe there is far more to this project than is actually there. Wishful thinking is fine, but don't pretend that it's reality.
The fact remains that statements like "the AI learned to play the game from only screen data" are completely false. It had far more than data from the screen buffer available to it.
We're not arguing over what "learned to play the game" means, we're arguing over the claim that the program learned having access only to the screen buffer, which is completely false.
Why does that matter? Because the uninteresting part, they trained a NN by traditional means, isn't what people are claiming. They're claiming that a much more difficult problem was solved.
Take a look at this, disturbing, belief:
This bot was just shown a video of what's happening and then learned how to play exactly like a human player would.
That's an actual quote from someone here. You'll find similar statements all over this thread. That is, apparently, what people believe. You cannot argue that this is true in any way. It's a completely false statement. The claim made here is so far removed from reality that I can't believe you think I'm splitting hairs!
This isn't akin to a "God of the gaps" argument, nor are we talking about strong AI, consciousness, or anything like that. This is a bullshit article conning uninformed people into believing (and spreading the false belief) that some advancement was made that has not, in fact, been made.
Can you still stand behind the above quote? If so, how? If an actual advancement were made that allowed the program to learn to play solely from feedback from the display buffer, would you say that it had already been done, referencing the paper here? If not, why do you insist on promoting this absurdly false belief?
When a human learns to play Doom, he starts knowing the success criteria
I couldn't disagree more. New players can, and do, learn how to play from either seeing someone play or by interacting with the game. They don't need to be told any details. They can learn how to play exclusively through visual (or visual and auditory) feedback. They can learn how keys and doors work, that they can kill monsters, and that they progress by finding the exit in each level all on their own. On health, they don't need to be told that taking less damage is better than taking more damage.
They can also set their own goals and define their own success criteria. They might want to kill all the monsters on each level, complete levels as fast as possible, complete levels using only a specific weapon, without taking damage, etc. They can determine, on their own, what is and is not successful play even if that differs from the developer's intentions.
Identifying or defining success criteria is something that this program does not, and can not, do. However, it is strongly implied (if not explicitly stated) in the article and countless comments here. The program did not, in any way, learn to play the game exclusively through feedback from the screen buffer. The most important feedback mechanism, an evaluation of successful play, is provided in addition to the screen buffer data.
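To make the distinction concrete, here's a minimal, hypothetical sketch of where the success criterion lives in a typical deep-RL setup. None of these names come from the paper; the point is simply that the reward function is hand-written by the researchers and handed to the agent alongside the screen data, not discovered from it:

```python
# Hypothetical sketch (not the paper's code). The agent only ever *sees*
# the screen buffer, but the reward below is a hand-coded success
# criterion supplied by the humans -- a separate, non-visual feedback
# channel the agent never had to figure out for itself.

def reward_for(event):
    """Hand-coded success criteria, written by the researchers."""
    rewards = {"kill": 100.0, "medkit": 25.0, "took_damage": -25.0}
    return rewards.get(event, -1.0)  # small living penalty otherwise

def train_step(agent, frame, event):
    # frame: raw pixels; event: game state the reward function inspects.
    action = agent.act(frame)
    agent.learn(frame, action, reward_for(event))
    return action
```

A human player gets nothing like `reward_for`; they invent their own.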
This isn't a narrow or nuanced point. I'm not splitting hairs here. The difference between what was actually done, and what people believe was done, is astronomical.
Consider the claims I quoted above like "The point is that the AI learned to play the game from only screen data. No maps, no preset strategy, just visual data" and "This bot was just shown a video of what's happening and then learned how to play exactly like a human player would." How could you say that claims like this are anything but completely false after reading the paper? They strongly claim a much more significant achievement than was actually accomplished.
Who said it didn't use the screen buffer?
My point was that it didn't learn to play the game exclusively using feedback from the screen buffer, like the magical thinkers here seem to believe.
It's not difficult. The implication here, from the article and summary, is that the program learned to play the game using only feedback from the display buffer. That is, quite obviously, false.
As I pointed out earlier, it did not, and can not, determine success criteria. That is the assumption you see endlessly here, implicit in absurd statements like "the computer in this case is still learning through visual feedback only", "The point is that the AI learned to play the game from only screen data. No maps, no preset strategy, just visual data", "This bot was just shown a video of what's happening and then learned how to play exactly like a human player would.", and a host of other, similar, nonsense statements. That would indeed be an impressive accomplishment. That, quite obviously, didn't happen.
This is, as I've said, no different than any other NN project. To claim otherwise is absurd.
Why this is controversial is beyond me. It's not exactly complicated.
Nonsense. In Wolfenstein, for example, a 1D depth buffer is NECESSARY to paint partially eclipsed sprites, and to avoid painting completely eclipsed sprites.
This is how it works: On the first pass, a ray is cast for each vertical column on the display. When a wall is encountered, the distance to the wall and the position of the ray on the wall are determined and used to draw a vertically-centered column scaled by distance. The distance is stored in a buffer the width of the screen. (A Depth Buffer!)
On the second pass, sprites are scaled and drawn. For each column of the sprite, the buffer is consulted. If the depth of the sprite is greater than the depth of the buffer, the column is not drawn.
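The two passes above can be sketched like this (a toy illustration of the idea, not the actual Wolfenstein 3D source):

```python
# Toy sketch of the 1D depth buffer described above.
# Pass 1 (the ray caster) has already produced one wall distance per
# screen column; pass 2 consults it before drawing each sprite column.

def draw_sprites(wall_depths, sprite_columns):
    """wall_depths: per-column wall distance from pass 1 (the depth buffer).
    sprite_columns: list of (column, depth) pairs for sprite slices.
    Returns the columns actually drawn."""
    zbuffer = list(wall_depths)
    drawn = []
    for col, depth in sprite_columns:
        # Draw only if the sprite slice is nearer than the wall hit in
        # that column; otherwise it is eclipsed and skipped.
        if depth < zbuffer[col]:
            drawn.append(col)
    return drawn
```

For example, with walls at distance 5 everywhere except a near pillar at distance 2 in column 3, a sprite at distance 3 spanning columns 2–4 gets columns 2 and 4 drawn while column 3 is clipped by the pillar: exactly the partial eclipsing the buffer exists for.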
How the hell do you think it worked? Magic?
Yes, it plays with only screen data. It did not learn how to play using only that data. Just like a zillion similar NN projects before it.
Sorry if this hurts your religious beliefs, but reality is indifferent to your fantasies.
We're watching new myths and religions form around pseudo-scientific ideas (like the simulation hypothesis) and science fiction based beliefs about the current and future state of artificial intelligence.
As you point out, to believe someone could "break out" of the simulation seems to imply a deeply inconsistent metaphysic. I expect this to change as these odd beliefs evolve into something more coherent.
It's like watching UFO cults develop all over again.
Now, who on Slashdot thinks Small Wonder was a documentary?
What about my post was false?
This toy did not "teach itself" to play the game using only feedback from the screen buffer. That is a very simple and obvious fact. How you can believe otherwise is astonishing. Read the damn paper.
I know. No one likes to have their silly fantasies shattered by the cold light of reality, but enough is enough. Face facts, read the paper, and get over it. The last thing we need is another technology-based religion like the "less wrong" group or the Kurzweil cultists.
Looks like someone didn't read the paper. It's free, so it's not like it'll cost you anything to read it.
Here's a helpful hint: When you hear that an AI "taught itself" anything, it's guaranteed to be bullshit for the foreseeable future. Simple things like determining success criteria are so far beyond what so-called AI can actually do that it's laughable to say it "taught itself" anything.
They trained a NN just like we've been training NNs for ages. It's about as interesting as an undergrad's NN project.
Wishful thinking from the futurists?
The silly belief here is that the program "taught itself" to play using only feedback from the display buffer. That is so obviously untrue that I can't understand how anyone could believe it, let alone repeat it!
Read the paper. It's no different than the average undergrad's NN project.
Of course it has a depth buffer! Even Wolfenstein had a depth buffer. How the hell do you think they painted the sprites, magic?
Read the paper.
"Oh what wouldn't I give to be spat at in the face..." -- a prisoner in "Life of Brian"