
New Hardware Needed For Future Computational Brain

timothy posted more than 3 years ago | from the why-not-wipe-and-reinstall-on-regular-brains? dept.

AI 143

schliz writes "Salk Institute director Terrence Sejnowski has called for more power-efficient, parallel computing architecture to support future robots that could keep up with the human brain. While human brains had 100 billion neurons and required only 20 Watts of energy, today's most powerful supercomputer, the 2.57 PFlop Chinese Tianhe-1A, requires four megawatts, and still has trouble with vision, motion, and 'common sense,' he said."


143 comments

still has trouble with... (1)

Anonymous Coward | more than 3 years ago | (#35451172)

LOL! "can't approach the capabilities of a common honey bee" might be more accurate.

Re:still has trouble with... (5, Insightful)

inpher (1788434) | more than 3 years ago | (#35451192)

I think one mistake (besides the power requirements) that people make is to assume "if you build it, it will work from the start"; the human brain needs over ten years to develop even mediocre common sense and awareness of its surroundings. We shouldn't expect to just build the hardware, install the software, flip a switch and then have the machine fully functioning even in its first year. A learning period for the machine is to be expected (though it might be accelerated to some degree) if it is going to work the way a human thinks.

Re:still has trouble with... (3, Interesting)

lawnboy5-O (772026) | more than 3 years ago | (#35451388)

It's interesting that you think epistemology actually plays a part for the flipping computer.

I could only agree if we are speaking of a computer that is intended - by and within its design - to learn like, as well as act like, us in a mature state. I agree this may be the purest way of getting AI to resemble the human condition (for lack of a better way to put it), but executing on this path is entirely a red herring.

I would say that trying to understand and emulate the learning process is 10 to 100 orders of magnitude over the effort of just getting the damn thing to work at a common, layman's intellectual level.

We have no real understanding of how we learn, empirically and scientifically speaking - we are only beginning to understand this now. The understanding of this process changes rapidly, and while we think we have momentum currently, more major unknowns exist. In fact, we don't know what we don't know at this point.

It's been debated for as long as man has had the ability to, however... but even throughout the thousands of years of philosophical deep diving, it wasn't until the Age of Enlightenment that Kant finally got everyone on board for "Epistemology First" in our understanding of our world - we must first understand how we learn about this place before we can debate the ontological status of the world around us and have any meaningful debate of its metaphysics. Theocratic or not, this rings true - and it's only added more complexities to the struggle of what we know about ourselves.

And now, you want to build a robot to approach this condition... Insanity. The effort is pure insanity and full of hubris. Let's work on simple tasks, and try to get those right, first. And how about an honest look at who the fuck we are as emotional, sentient, chemically riding and wickedly imperfect machines ourselves, before we attempt to perfect it in a model.

The only real saving grace is that this effort could actually be such a mirror for mankind, and accelerate our understanding of ourselves, if only slightly.

Re:still has trouble with... (0)

Anonymous Coward | more than 3 years ago | (#35451802)

The OP is far more correct than you are. If you knew anything about ANNs, the first thing you'd know is that they are modeled (by their very design methodologies, if not through direct observation and intent) upon the human brain. You would also know that humans are relatively lazy (entirely justified in this case) in terms of writing the code for ANNs - wanting them to learn for themselves. If you make the hardware as fast as the human brain, the software side still must be trained even after it's designed and built - while you might be able to clone correctly formatted data structures of a trained ANN, making the first one will still require training on the scale of a decade (shortened only if we make the hardware faster - as recent findings indicate both sleep and the computational downtime of everyday life are required training mechanisms for intelligent thought as we know it).
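
As an aside, the "don't hand-write the rules, let it learn them" point is easy to illustrate. Below is a minimal sketch in Python/NumPy - nothing to do with any specific ANN the poster has in mind, and every constant is made up - of a tiny feed-forward network learning XOR by backpropagation; all of the "knowledge" ends up in the trained weights, while the code only supplies the learning procedure and the training period.

# Minimal sketch: a tiny feed-forward ANN trained rather than hand-coded.
# The behaviour lives in the learned weights, not in written rules.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR target

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)               # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)               # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):                                     # the "training period"
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                       # backprop the squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))   # typically converges to ~[0, 1, 1, 0]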

Re:still has trouble with... (1)

DigiShaman (671371) | more than 3 years ago | (#35452172)

You may only need to start out with one. It trains for however long it takes and will upload all experience and collected data to a separate online repository. The next one you build will start out by downloading the necessary data uploaded by the first bot. As you start to build a hive-mind colony of bots, their collective adds to a library of knowledge and experiences that best complement the very hardware processing them. They could even start assembling existing knowledge from human entries off the Internet.

Essentially, you have built (or let it build for you) a network where each bot can boot up and become near instantly productive from the moment they leave the factory shiny and new. Who knows, maybe after some time has passed, they may decide to break away from the network and become autonomous unique entities. Or spawn a completely new hive-mind.

Borg indeed.

Re:still has trouble with... (1)

MrKaos (858439) | more than 3 years ago | (#35451874)

It's interesting that you think epistemology actually plays a part for the flipping computer.

Wellll that's only half the story. You'll get very little in the way of logic unless it's a flopping computer as well.

get it, logic gates! flip flop!... bwahahahahaha

I'm sure there was a propagation delay before people got that one... da bom tish

Re:still has trouble with... (1)

MrKaos (858439) | more than 3 years ago | (#35451956)

The only real saving grace is that this effort could actually be such a mirror for man kind, and accelerate our understanding of ourselves, if only slightly.

Maybe all we will discover is that if you have a *really* big network of interconnected nodes functioning in parallel and a good handling of metastable states you get a reasonable facsimile of intelligence. Maybe that's all intelligence is, after all, humans provide a pretty good facsimile of intelligence - but they aren't very logical.

Re:still has trouble with... (2)

erroneus (253617) | more than 3 years ago | (#35451456)

This is a valid point. There is indeed a learning factor for the brain... at least some aspects of the brain.

Our brains are extremely inaccurate. Our perceptions are always relative and demonstrably imaginative. There is a lot more to what we think we see and know versus what we actually see and know.

The thing with computers as we currently use and design them is that they are dependent on accuracy. (I recall when DRAM was coming into existence... people were flipping out over the idea that this type of RAM needs constant refreshing to remain accurate, and many doubted it was even possible.) Accuracy requires power. Our brains, instead, use an inaccurate and error-prone system with a LOT of built-in error checking and redundancy, and even then are only generally accurate while using less power.

So far, it is even hard to imagine a processing system like our brains because we don't even think in the way our brains actually work. And to accomplish this, people will first have to begin to accept that brains do not work like a binary-based digital device, which is hard enough as it is.

Re:still has trouble with... (1)

peragrin (659227) | more than 3 years ago | (#35451470)

So our brains are really quantum computers that may be on, off, or both.

It explains fuzzy memories, and why each person sees an event differently.

as for common sense, some people don't learn that ever.

Re:still has trouble with... (1)

camperdave (969942) | more than 3 years ago | (#35452592)

Our brains are analog. We don't compute with ons and offs. We compute with pulse frequencies and threshold levels.
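
A hedged illustration of "pulse frequencies and threshold levels": the standard leaky integrate-and-fire neuron model (a textbook abstraction, not anyone's claim about real hardware, and all constants below are arbitrary). A stronger input shows up as a higher output spike rate, and sub-threshold input produces no spikes at all.

# Sketch of a leaky integrate-and-fire neuron: input strength is encoded as
# spike frequency, and a pulse is emitted only when a threshold is crossed.
def spike_rate(input_current, t_sim=1.0, dt=1e-4, tau=0.02, v_rest=0.0, v_thresh=1.0):
    """Spikes per second for a constant input current (all units arbitrary)."""
    v, spikes = v_rest, 0
    for _ in range(int(t_sim / dt)):
        # Leaky integration: the potential decays toward rest and is driven by the input.
        v += dt * (-(v - v_rest) / tau + input_current)
        if v >= v_thresh:            # threshold crossed: emit a pulse and reset
            spikes += 1
            v = v_rest
    return spikes / t_sim

for current in (40, 60, 100, 200):
    print(current, "->", spike_rate(current), "Hz")   # 40 stays below threshold: 0 Hz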

Re:still has trouble with... (2)

MattSausage (940218) | more than 3 years ago | (#35451954)

There was an article in Discover last year sometime describing the different techniques computer scientists were using to try to emulate/simulate a human brain. One of the more interesting actually used simple software to create several thousand neurons, each able to communicate with thirty or so other neurons, and the pathways were made changeable.

Obviously I'm simplifying and paraphrasing a year-old article here, but one of the most intriguing things about this setup is that not only did it apparently 'pay attention' when various objects were held in front of its cameras, but when the cameras and mics were shut off, the software neurons still showed waves of activity completely independent of outside input. To anthropomorphize a bit, it was almost like a sleep state or beta waves in humans. I found that to be incredibly interesting.

Re:still has trouble with... (1)

Bengie (1121981) | more than 3 years ago | (#35451860)

I think it was said that at something like the age of 3, kids start to become self-aware. After 3 years of running, SKYNET may learn to take over the internet and recognize its own reflection.

Re:still has trouble with... (2)

Dr Max (1696200) | more than 3 years ago | (#35451524)

Most humans could never approach the capabilities of a common calculator.

Re:still has trouble with... (1)

PhilHibbs (4537) | more than 3 years ago | (#35452222)

This actually raises an interesting point that I've been thinking about recently. People imagine that an AI will also be a mathematical genius compared to us, because computers can calculate numbers quickly. Not necessarily so. One of the reasons we are slow with numbers is that we keep vast amounts of related information along with the number. If I ask you to think of a number and tell me what it is, you might say "seven", but in your mind you might also be imagining the colours of the rainbow, the sides of a fifty pence coin, the age of your niece, the days of the week, etc. And if I say "double it, add one, is that a prime number?", then you have three numbers in your memory before you start to check for factors. A computer could be programmed to double a number, add one to it, and determine if it is prime, but unless specifically programmed to, it would not even have a record of the first number. An artificial intelligence that is built to think even in a vaguely similar manner to us would also have all these contexts to process, and might even have a previous instruction "whenever you think about the number fifteen..." that it also has to take into account. All of this baggage will slow down an AI massively, and so it might not be such a mathematical genius after all. Sure, you could include the functionality of a pocket calculator and have that as an independent subsystem that the main AI can access, but that would make it no different from us: it has to use a simple mathematical processing unit to do complex sums without getting bogged down in extraneous details.

Yes they've only just surpassed a Muslim's brain (-1)

Anonymous Coward | more than 3 years ago | (#35451624)

Yes they've only just surpassed a Muslim's brain. Shouts "death to the infidel" And "how dare you insult the prophet" at random

Re:Yes they've only just surpassed a Muslim's brai (0)

Chrisq (894406) | more than 3 years ago | (#35451650)

Yes they've only just surpassed a Muslim's brain. Shouts "death to the infidel" And "how dare you insult the prophet" at random

Don't forget the random suicide bombing function

Apples and oranges... (4, Informative)

Kokuyo (549451) | more than 3 years ago | (#35451184)

That most powerful supercomputer, I'd assume, has not been tuned to actually work like a brain would.

This is like an emulator. A lot of computational power is probably wasted on trying to translate biological functions into binary procedures. I think if they truly want to compare, they'll need to create an environment that is enhanced for the tasks we want it to process.

Nobody expects the human brain to compute integer and floating point stuff at the same efficiency either, right?

Re:Apples and oranges... (1)

lawnboy5-O (772026) | more than 3 years ago | (#35451232)

"That most powerful supercomputer, I'd assume, has not been tuned to actually work like a brain would"

I would *Love* to see that reduced to machine code

Re:Apples and oranges... (1)

Kokuyo (549451) | more than 3 years ago | (#35451238)

Would surely be interesting, wouldn't it?

Re:Apples and oranges... (2)

lawnboy5-O (772026) | more than 3 years ago | (#35451292)

As an undergrad philosophy student, I worked on the "reductionism" of physics theories (a subset of simple Newtonian mechanics) to sentential logical statements - presumably for an effort to map them to computer programming.

The task was daunting for an undergrad... and what we ended up with was not so intuitive. I can only imagine mapping the depth and breadth of the brain - and in fact would postulate that it cannot be done with any adherence to soundness and validity using today's digital hierarchy. New hardware is indeed needed.

I'm not even sure we would recognize the requirements for such hardware, either. As most of these specialized applications grow organically from our efforts in software, we are just that far from a real solution for pure AI that intimately resembles the human condition. That leap of imagination just has not been taken yet, if it is even possible.

Re:Apples and oranges... (1)

pstils (928424) | more than 3 years ago | (#35451896)

And here fits my car analogy... the world's fastest production vehicle, the Bugatti Veyron, is rubbish at getting up stairs.

Re:Apples and oranges... (0)

Anonymous Coward | more than 3 years ago | (#35452004)

His whole point, you fucking idiot, is that this isn't reducible to fucking machine code! It's about the actual silicon!

Re:Apples and oranges... (2, Insightful)

Anonymous Coward | more than 3 years ago | (#35451352)

I know a way, but it takes about 18 years plus 9 months and a male and a female participant...

Also, what you end up with is usually an unemployed intelligence looking for something to do. And they don't always succeed. It's not obvious to me that we need more human intelligences. Maybe we need more and faster idiot savant machines, ones that excel at mundane things like driving road vehicles, doing laundry, loading dishwashers, sorting bills in chronological order. The boring stuff.

Re:Apples and oranges... (0)

Anonymous Coward | more than 3 years ago | (#35451664)

I know a way that takes about 12 years, both male and female participants, and a further 9 months.

Are you confusing law enforcement with biological reality?

Re:Apples and oranges... (1)

Farmer Tim (530755) | more than 3 years ago | (#35452076)

Are you confusing law enforcement with biological reality?

That's the difference between science and mad science.

Re:Apples and oranges... (1)

camperdave (969942) | more than 3 years ago | (#35452888)

That's the difference between science and mad science.

Also Jacob's ladders, giant knife switches, gothic castle setting, bubbling, multicolored chemicals, and hunch-backed lab assistants.

Re:Apples and oranges... (1)

TheLink (130905) | more than 3 years ago | (#35452088)

Yeah we already have billions of intelligent nonhuman entities. They're mostly in farms.

We don't treat them well - we eat and exploit most of them. Why should we create more? So that we can exploit them too?

If that's the reason we'd just be causing more evil in the world than good.

Whereas if we instead used the tech to augment humans, we'd have about the same amount of evil and good. Or at least not increase the evil so rapidly.

For similar reasons we should not create animal-human hybrids. We're not ready to deal with the problems e.g. when is an entity legally human, and when is it not?

If we force the issue we will have to draw the line before we are ready. And if we draw it carelessly many humans may not qualify as humans.

Or the posthumans may regard us humans as expendable (just like we view livestock). If we're lucky we might be pets.

Re:Apples and oranges... (1)

PhilHibbs (4537) | more than 3 years ago | (#35452272)

Every animal, every organism, on the planet exploits other organisms. Does that make all life evil? Why are we so different, that the way we treat other life as a resource makes us evil? Perhaps the most effective evolutionary adaptation that life has ever stumbled across is to be domesticatable, tasty, and/or useful to humans. It's a guaranteed win.

Re:Apples and oranges... (0)

Anonymous Coward | more than 3 years ago | (#35451464)

The human brain would be like 0.001 FLOPS.

Re:Apples and oranges... (1)

lawnboy5-O (772026) | more than 3 years ago | (#35451510)

Less. It's really slow. Really... slow.

Its magic lies in its prowess at organizing/re-organizing info.

Re:Apples and oranges... (1)

WillAdams (45638) | more than 3 years ago | (#35451812)

Humans are actually quite good at floating point math as embodied by ballistic trajectories --- watch outfielders run straight to where a ball will be when it comes down rather than following a curve, or a marksman who can consistently shoot coins or aspirin out of the air (for the former always positioning the bullet hole so that the coin will be useful as a watch fob).

Integer math as expressed in the real world can be quite good too --- I knew one teller who could take a fresh stack of $100 bills and zip down to the exact number needed to pay one's travel authorization (usually in the range of $2,000 -- $3,500, but usually different for each person in line) w/ a single motion, or there was John Scarne who could take a new deck of cards, shuffle it an arbitrary number of times, then cut to the Ace of Spades _every_ time.

William

Re:Apples and oranges... (1)

Gaygirlie (1657131) | more than 3 years ago | (#35452074)

Humans are actually quite good at floating point math as embodied by ballistic trajectories --- watch outfielders run straight to where a ball will be when it comes down rather than following a curve, or a marksman who can consistently shoot coins or aspirin out of the air (for the former always positioning the bullet hole so that the coin will be useful as a watch fob).

Integer math as expressed in the real world can be quite good too --- I knew one teller who could take a fresh stack of $100 bills and zip down to the exact number needed to pay one's travel authorization (usually in the range of $2,000 -- $3,500, but usually different for each person in line) w/ a single motion, or there was John Scarne who could take a new deck of cards, shuffle it an arbitrary number of times, then cut to the Ace of Spades _every_ time.

Both of these examples actually show that human brains are extraordinarily good at processing hundreds of things at the same time; brains aren't actually all that fast, but they are literally massively parallel and exceedingly good at organizing data. The people in your examples wouldn't, for example, be able to do what they do without sensory input: the feeling of wind on their skin, humidity, the weight of the materials they are holding and their texture, the sound of wind blowing past or money rattling in their hands... Brains combine all the sensory input, create several different scenarios every millisecond and predict the likely one, and still at the same time manage to draw in data from memory and combine that too.

But try taking, for example, the feeling in their fingers away and see what happens: they'll instantly start making mistakes. Then slowly, with practice, they start relying more on the other inputs like eyesight and hearing. They still won't be as good as they were when they could still feel, however, and that's the whole point: computers trying to simulate human brains are never given access to as wide an array of sensory input as human brains, and if they were, they'd have trouble processing it all.

Re:Apples and oranges... (0)

Anonymous Coward | more than 3 years ago | (#35452728)

but [brains] are literally massively parallel

Ding ding ding ding... you've activated internet literary pedant! When comparing supercomputers and brains, the one that is literally massively anything is the one that is literally massive.

Off-topic, but Merriam-Webster defines literally [merriam-webster.com] as:

  1. in a literal sense or manner : actually (took the remark literally) (was literally insane)
  2. in effect : virtually (will literally turn the world upside down to combat cruelty or injustice — Norman Cousins)

So the secondary definition of literally is the exact opposite of the primary definition! Yay for modern English. I wonder if good ol' Norm had something to do with this.

Re:Apples and oranges... (1)

Hatta (162192) | more than 3 years ago | (#35452264)

This is like an emulator. A lot of computational power is probably wasted on trying to translate biological functions into binary procedures.

Isn't that kind of the point of the article? To get around this need for all the computational power, we need hardware that's better at probabilistic analog computations, and to run it all in parallel.

Efficiency is the key! (2)

Kensai7 (1005287) | more than 3 years ago | (#35451198)

Instead of trying to emulate the human brain, which at the moment is unattainable, we should concentrate on efficiency paradigms of smaller neural ensembles. Once we achieve efficiency we can scale. Why haven't we learned anything from the CPU industry? They didn't start from 19nm manufacture. Why should we?

We shouldn't hurry. AI comparable to a human person can be achieved, but it is still a long way until we reach it.

Re:Efficiency is the key! (2)

sakdoctor (1087155) | more than 3 years ago | (#35451212)

Why haven't we learned anything from the CPU industry?

So you're saying AI is all in the branding, and that we should ship AI with artificial brain lobules disabled to reduce manufacturing costs?

Re:Efficiency is the key! (3, Informative)

Kensai7 (1005287) | more than 3 years ago | (#35451246)

The author talks about the honeybee. Let's emulate first the honeybee. Create a robot that can achieve what the social insect "bee" can achieve.

Lobules → Lobes → Whole Brain

Re:Efficiency is the key! (1)

somersault (912633) | more than 3 years ago | (#35451398)

Swarm Intelligence [wikipedia.org] would be a good place to start. Path-finding/graph search is only one part of AI though. It's very useful, but it's not necessarily the best method to solve all types of problem.

Re:Efficiency is the key! (1)

mangu (126918) | more than 3 years ago | (#35451780)

The honeybee is interesting because its complexity is at about the limit of what personal computers can simulate today.

In rough order of magnitude terms, a honeybee brain has a million neurons with a thousand synapses each. Assume a neuron fires a hundred times per second. In the standard model of a neuron, each synapse can be simulated by a floating point multiplication and one addition.

Doing the math, a computer simulation of a honeybee brain in real time would need 100 gigaflops, which is in the range of what a GPGPU video card can do.
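
The arithmetic, spelled out (all figures are the rough ones from the post above: a million neurons, a thousand synapses each, ~100 Hz firing, one multiply and one add per synaptic event):

# Back-of-the-envelope check of the honeybee estimate above.
neurons = 1e6
synapses_per_neuron = 1e3
firing_rate_hz = 100                  # assumed average firing rate
flops_per_synaptic_event = 2          # one multiply + one add

flops = neurons * synapses_per_neuron * firing_rate_hz * flops_per_synaptic_event
print(f"{flops:.0e} FLOP/s")          # 2e+11, i.e. a couple hundred gigaflops --
                                      # the same rough order of magnitude as the
                                      # estimate above, within reach of a GPGPU card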

Beer powered (5, Funny)

sakdoctor (1087155) | more than 3 years ago | (#35451202)

Each pint of beer contains 600 joules of energy, which can power your 20 watt brain for many hours, and give you trouble with vision, motion, and common sense.

Re:Beer powered (1)

lawnboy5-O (772026) | more than 3 years ago | (#35451226)

So we should soak our computers in stout!!! Brilliant!

Re:Beer powered (0)

Anonymous Coward | more than 3 years ago | (#35451438)

I think you would have a problem with overheating when the beer mysteriously leaked into pre-chilled glasses at the end of the day.

Re:Beer powered (0)

Anonymous Coward | more than 3 years ago | (#35451406)

Actually you're talking bollocks. 600 joules / 20 watts = 30 seconds.

Re:Beer powered (1)

Arlet (29997) | more than 3 years ago | (#35451474)

You both are. A pint of Guinness has about 711 kilojoule, which will last almost 10 hours.
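
For the record, the arithmetic behind both numbers (711 kJ per pint is the figure assumed above for Guinness; the 20 W brain comes from the summary):

# Quick check of the beer-vs-brain arithmetic in this subthread.
pint_energy_joules = 711e3            # ~711 kJ per pint (assumed figure for Guinness)
brain_power_watts = 20                # from the article summary

print(pint_energy_joules / brain_power_watts / 3600, "hours")   # ~9.9 hours

# The original 600 J figure would only run the brain for 600 / 20 = 30 seconds;
# the likely slip is joules vs kilojoules (or calories vs Calories, as noted below).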

Re:Beer powered (1)

iinlane (948356) | more than 3 years ago | (#35452368)

The probable root of this error is that there are two types of calories - gram calories (written with a small c) and kilogram Calories (written with a capital C).

Re:Beer powered (0)

Anonymous Coward | more than 3 years ago | (#35452342)

600 joules / 20 watts = 30 seconds of brain power. Perhaps you meant 600kJ?

Interpreted AI (1)

msgmonkey (599753) | more than 3 years ago | (#35451218)

The reason this is the case is that current AI simulates a neural network as a program; you would have to produce chips which were actual neural networks. The problem, however, is the interconnects, which are an order of magnitude more complicated than anything we can currently create. In fact the brain is quite slow, but its organization is what makes it powerful.

Re:Interpreted AI (1)

BiggerIsBetter (682164) | more than 3 years ago | (#35451242)

How about simpler hardware neural nets, in a cluster with a more modest interconnect? Eg build a hive mind.

Re:Interpreted AI (1)

msgmonkey (599753) | more than 3 years ago | (#35451356)

Would be better but would not even come close to the human brain, which in the cerebral cortex has roughly a billion synapses per cubic millimeter.

Re:Interpreted AI (1)

quintesse (654840) | more than 3 years ago | (#35451484)

I had to look this up just to be sure you hadn't put a decimal point in the wrong place somewhere. Truly mind-boggling!

Re:Interpreted AI (0)

Anonymous Coward | more than 3 years ago | (#35451702)

Build a hive mind... of humans.

Human Brain doesn't excel at all either. (1)

sosaited (1925622) | more than 3 years ago | (#35451234)

requires four megawatts, and still has trouble with vision, motion, and 'common sense

I have known many people who have ~100 billion or so neurons that consume 20 watts of power, but they also have plenty of trouble with "common sense". Actually, they might be less sensible in some areas than 100 KB of C code running on a puny little Pentium 4.

Neurons are the wrong number (3, Insightful)

gweihir (88907) | more than 3 years ago | (#35451294)

The significant number is interconnect. In that area electronics is several orders of magnitude further behind. Far enough that it seems doubtful something even remotely like the interconnect of a human brain can be reached artificially.

Side note: Comparing neurons and transistors, as is often done in the popular (but not very knowledgeable) press, is completely invalid as well. You need to compare each neuron more to a micro-controller.

Re:Neurons are the wrong number (1)

whimdot (591032) | more than 3 years ago | (#35451766)

Is it the orders of magnitude that are the problem or the magnitude of order?

Re:Neurons are the wrong number (1)

mangu (126918) | more than 3 years ago | (#35451804)

The significant number is interconnect. In that area electronics is several orders of magnitude further behind. Far enough that it seems doubtful something even remotely like the interconnect of a human brain can be reached artificially.

Hint: simulating is not the same as duplicating. A digital computer substitutes high-speed communication for interconnections. Think of serial vs parallel. If you simulate a neuron as an object located in memory, each neuron is connected to every other one; they just cannot all communicate at the same time.

Considering the relatively slow rate at which neurons fire, that problem isn't as insurmountable as it seems at first.
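
A minimal sketch of the "neuron as an object located in memory" idea (a toy, with made-up sizes and weights, not a description of any real simulator): connectivity is just per-neuron lists of targets and weights, and a serial event loop stands in for what the brain does in parallel.

# Sketch: connectivity stored as in-memory adjacency lists, walked serially.
# The simulator trades the brain's massive parallelism for per-event speed.
import numpy as np

rng = np.random.default_rng(1)
N = 1000                                                      # toy network size
threshold, decay = 1.0, 0.9

potential = np.zeros(N)                                       # one state value per neuron "object"
targets = [rng.integers(0, N, size=30) for _ in range(N)]     # ~30 outgoing synapses each
weights = [rng.normal(0.0, 0.3, size=30) for _ in range(N)]

spiking = list(rng.integers(0, N, size=20))                   # seed some initial activity
for step in range(100):
    for pre in spiking:                                       # serial loop over "parallel" events
        for post, w in zip(targets[pre], weights[pre]):
            potential[post] += w
    potential *= decay                                        # leak
    spiking = list(np.flatnonzero(potential >= threshold))
    potential[potential >= threshold] = 0.0                   # reset whoever fired

# Depending on the random weights the activity may die out or persist; the point
# here is only the data layout and the serial event loop.
print("active neurons at final step:", len(spiking))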

The brain can also rewire itself (1)

Viol8 (599362) | more than 3 years ago | (#35451308)

OK, you can do this with an FPGA, but it requires something external to the gate array to reset the logic gates - the array can't rewire itself. Biological neural systems can rewire themselves, and not only that - they can do it *while they're running*. Obviously you could have this on-the-fly rewiring in a software simulation, but that's orders of magnitude slower than using hardware, so I don't think we'll see computers simulating human brains in real time anytime soon.
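
For what it's worth, here is what "rewiring while it's running" can look like in a software simulation (just a toy Hebbian update on a random network, with every constant invented for illustration): the same loop that runs the network also modifies the weight matrix that defines its wiring.

# Toy sketch of on-the-fly rewiring: the connection matrix is updated in the
# same loop that runs the network, so the wiring changes while the net is live.
import numpy as np

rng = np.random.default_rng(2)
N = 50
W = rng.normal(0, 0.1, size=(N, N))     # the "wiring"
np.fill_diagonal(W, 0.0)
rate = 0.01                              # learning rate for the Hebbian update

x = rng.random(N)                        # current activity
for step in range(200):
    x = np.tanh(W @ x + rng.normal(0, 0.1, size=N))   # run the network...
    W += rate * np.outer(x, x)                        # ...and rewire it as it runs
    W *= 0.999                                        # mild decay keeps weights bounded
    np.fill_diagonal(W, 0.0)

print("mean |weight| after rewiring:", round(float(np.abs(W).mean()), 3))

As the reply below points out, partially reconfigurable FPGAs can do something similar in hardware; in software the equivalent is trivial but, as the parent says, much slower.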

Re:The brain can also rewire itself (0)

Anonymous Coward | more than 3 years ago | (#35451378)

Indeed, a modern FPGA CAN rewire itself.
It is called an auto-reconfigurable FPGA. Look at the Xilinx ones, for example.

But honestly it is simpler to have a dedicated CPU for control and (on-the-fly) (partial) reconfiguration of the array to emulate brain plasticity. And it saves FPGA surface.

Last but not least: simulation, OK, but of which model? There are dozens of neural network models, some slightly more biologically plausible than others, and they work better with certain neurons than others (brain neurons are not all the same). THAT is the real problem.

Re:The brain can also rewire itself (1)

Viol8 (599362) | more than 3 years ago | (#35451502)

"It is called auto-reconfigurable FPGA. Look at Xilinx ones, for example"

My mistake, I need to get back up to date!

"some slightly more biologicaly plausible than others"

I'm not convinced the brain has to be simulated exactly to produce the same result. After all, robots can now walk like a human, but they don't use exact facsimiles of human muscles - they use hydraulics or electric motors to achieve the same effect. No doubt there are parts of neurons' operation and the brain's overall architecture that are simply evolutionarily good enough rather than being the fastest or most efficient given the biochemistry available.

losses continue, mounting (0)

Anonymous Coward | more than 3 years ago | (#35451324)

1 if by sea..... we know what to do

it's 1mm if buy land, you juxtapositioner (0)

Anonymous Coward | more than 3 years ago | (#35451374)

right. so 2 if by vaccine? is there a 3? if by rumbling sky? the ocean looks like it's getting bigger? 4 if by aliens? where do those genetically challenged nazi mutant life0ciders get all that holycost inducing equipment. i hope we're not paying for, supplyng, or even involved in any of that assault on humanity, as well as nature, who are supposed to be our friends?

staying home from school to help at front lines (0)

Anonymous Coward | more than 3 years ago | (#35451430)

that's the spirit? at least they still have the building?

League of Smelly Infants should be evacuated? (0)

Anonymous Coward | more than 3 years ago | (#35451866)

then, maybe the humans that survive at 'home', will not grow up to fear/hate everybody forever, like us?

Let's be fair (0)

Anonymous Coward | more than 3 years ago | (#35451460)

Is there a human capable of multiplying precisely billions of numbers per second, or doing any other similar task? Let's be fair. Let's not forget that computers are in many ways much better than the brain.

And let's be fair towards both sides. Some things, like true artificial intelligence will remain pure science fiction for a very very long time, though (and no, what they call AI today is actually not intelligence -- it just pretends to be).

Re:Let's be fair (1)

wilfong (2014180) | more than 3 years ago | (#35451486)

Better in specific dedicated tasks. But the efficiency argument still holds true, especially when you consider the brain is both the computational and memory unit. Furthermore, there is the issue of what we have yet to learn about the brain, especially those processes that occur below immediate perception.

Re:Let's be fair (0)

Anonymous Coward | more than 3 years ago | (#35451678)

The brain is better at specific dedicated tasks as well (not at everything).

Re:Let's be fair (0)

Anonymous Coward | more than 3 years ago | (#35451790)

What's the point of making a computer like a human anyway? We have a perfectly working way of producing human-level intelligence. There is only an advantage if 1) they can do something (a lot) better than a human and 2) the computers are not human enough that we would care about applying e.g. human rights to them.
It sure has value as research and sure will lead to useful and interesting stuff, but as a kind of "final goal" it is really pointless.

Re:Let's be fair (1)

lurcher (88082) | more than 3 years ago | (#35451964)

Is there a human capable of multiplying precisely billions of numbers per second or doing any other similar tasks?

What, you think the computer invented itself?

We are tool makers; the computer is a tool. We want to multiply at a rate of 10^9 a second, so we just build the tool using our brains.

Hang on a second (2)

SpazmodeusG (1334705) | more than 3 years ago | (#35451492)

Getting a little ahead of ourselves aren't we?

We're still not entirely sure of how a brain works. Oh sure, it's a neural network of some kind, but how do the neurons in a brain form meaningful connections with each other? How do they get their weightings of activation? etc.

Chances are each neuron in the brain might be representable by a simple mathematical function with only a few terms. The way the neurons connect to each other might also be representable in a simplistic way. (BTW, look up dynamic Markov coding if you want to see a neat way a state can reproduce such that the newly created state has meaningful input/output connections to other states.)

So the problem isn't necessarily that our computers aren't powerful enough. The problem is that we still don't know how a brain works.

Human-computer Interface (0)

Anonymous Coward | more than 3 years ago | (#35451504)

Maybe they should just make computers better at reading the human brain, then we can just farm slaves to compute the reasoning side of things. Oh, wait...

Would explain fraudulent Tianhe specs (1)

sethstorm (512897) | more than 3 years ago | (#35452104)

At the risk of some modpoints:
What China really can't do with computers, they make up for with dissidents. The Top500 data from 11/2010 would be suspect, even if that wasn't the cause.

Google The Brain! (1)

Bones3D_mac (324952) | more than 3 years ago | (#35451528)

Ok, I admit this sounds completely absurd at first, but there are an awful lot of similarities between the neural pathways of the brain and the countless ways websites link to each other, both directly and indirectly through their contacts, and their contacts' contacts, and all the contacts that eventually show up in an endless cycle of recursion, etc...

Now, Google has to wade through all this, and constantly correct and update itself, to ensure it can get a user to the web page that best matches the search criteria.

You can't tell me that, as data on the web becomes increasingly dynamic with all these forums, blogs, news sites and endless amounts of chat/social engineering sites constantly popping up and then dying, there isn't at least some algorithm they employ that could be applied to neuron connectivity and communication.

You'd think it'd just be a matter of passively connecting to a neuron to sniff its traffic and then observing which nearby neurons carry the signals to and from it, then start listening to those neurons and so forth, then use machine learning to break down the patterns enough that Google's setup could follow them... i.e., determine which neuron is responsible for which patterns at what frequency, etc...
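
For what it's worth, the "sniff its traffic and see which neighbours follow" idea corresponds to a real but very lossy technique: cross-correlating spike trains and looking for a consistent lag. A toy version on made-up data (the reply below explains why real neurons are nowhere near this tidy):

# Toy illustration of the "sniff its traffic" idea: fabricate spike trains in
# which neuron B tends to fire ~5 ms after neuron A, then recover that lag from
# the cross-correlation alone.  Real recordings are vastly noisier.
import numpy as np

rng = np.random.default_rng(3)
T, true_lag = 20000, 5                               # 20 s of 1 ms bins
a = (rng.random(T) < 0.02).astype(float)             # neuron A: ~20 Hz background
b = (rng.random(T) < 0.005).astype(float)            # neuron B: ~5 Hz background
drive = a[:-true_lag] * (rng.random(T - true_lag) < 0.5)   # A drives B half the time
b[true_lag:] = np.maximum(b[true_lag:], drive)

def lagged_corr(x, y, max_lag=20):
    return [np.corrcoef(x[:-k or None], y[k:])[0, 1] for k in range(max_lag + 1)]

corr = lagged_corr(a, b)
print("best lag:", int(np.argmax(corr)), "ms")       # recovers ~5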

Re:Google The Brain! (0)

Anonymous Coward | more than 3 years ago | (#35451788)

2015: Google becomes sentient. It determines that the greatest source of spam content on the Internet is human beings. In order to prevent spam, humans must be eliminated.

Re:Google The Brain! (1)

ledow (319597) | more than 3 years ago | (#35451830)

"You'd think it'd just be a matter of passively connecting to a neuron to sniff it's traffic and then observing which nearby neurons carry the signals to and from it"

You'd think. Except not only have people tried this but it's inherently gibberish and never gets anything useful.

A neuron is an extremely complex biochemical cellular device that we don't understand. It is *not* just a biochem transistor, as some would have you think.

It retains some information, reacts to historical stimuli, reacts to chemical and hormonal processes, and parses multitudes of information in ways that we have never fully observed for even a single neuron. It joins in excruciatingly odd and random ways to others and is interconnected with millions of others in increasingly complex feedback loops and patterns that are unique to every single individual on the planet, formed by pure chance and honed over years before it gets anywhere near a sensible, stable, (theoretically) understandable response.

(Also, my personal bugbear is timing - people think you can just "slap some neurons together", in proportions vastly less than even the tiniest of ants, and the thing will work immediately and solve the world's problems. Tell me, if a baby were "grown" in a room with a keyboard, and you could only observe that baby via the output of what letters it pressed on the keyboard, at which point would you declare it intelligent? Would it even be in the first few years, while it still hasn't connected tapping one button with "good" things and another with "bad" things? At which point do you expect to have an English conversation via the only possible input/output device it has, with you just flashing up lots of magazine articles at it occasionally and awaiting its response? It's like when people try genetic algorithms. A few thousand generations and they stop. Give it a few BILLION and you might get something useful, but instead they throw it away and start from Generation 1 with something else instead.)

And people assume that some kind of quasi-statistical, or logical, basis must underpin a neuron for it to basically be a series of noughts and ones. Neural networks are on the syllabus of almost every AI course in every university in the world. There are hundreds of thousands of people exposed to them in every way. And yet, when it comes to finding an actual practical application for them, even in robotics and simulating life, we've yet to make any practical use of them whatsoever, basically because the principles are okay but there's a huge gaping chasm in our knowledge of how such things actually work.

We don't know how a single neuron works. They provide us with some interesting ideas that are worth chasing and can provide (limited) results, but the fact of the matter is that we're mixing charcoal together like alchemists because we once witnessed someone build a space elevator out of carbon nanotubes and we think we're doing the same thing if we can just get it right...

Machine intelligence is not a hardware problem... (4, Insightful)

divisionbyzero (300681) | more than 3 years ago | (#35451654)

It's a software problem.

Re:Machine intelligence is not a hardware problem. (2)

Lord Lode (1290856) | more than 3 years ago | (#35451832)

The architecture on which you run the software also determines quite a lot of what you can do and how the software is executed. You need a certain topology of the hardware, otherwise it is impossible to do certain tasks efficiently. There is a huge difference between a slow but massively interconnected network like the brain, and a sequential microprocessor running instructions one by one at high speed.

Re:Machine intelligence is not a hardware problem. (1)

lurcher (88082) | more than 3 years ago | (#35451990)

Who mentioned efficiency?

We don't have to do it in real time. But even if we had till the heat death of the universe to let the code run, we still don't know how to write the code, which was the OP's point.

Re:Machine intelligence is not a hardware problem. (0)

Anonymous Coward | more than 3 years ago | (#35452042)

>>there is a huge difference
there is a huge difference in EFFICIENCY.

Once you hit Turing complete, you can model anything.

GP is correct. If you know what you would do with massively parallel hardware, then do it NOW via simulation! Perhaps the simulation will be slow... so what? You still can't study the effects of your algorithm? Give me a break... start slow, and if you have something that is working pretty well, then unload it onto Amazon's EC2... tada!

Re:Machine intelligence is not a hardware problem. (0)

Anonymous Coward | more than 3 years ago | (#35452194)

Or rather, it is an algorithm problem which will then be coded into software. We don't have massively parallel algorithms for computing much of anything. Rather, we often take sequential algorithms and find portions that can be parallelized.

Conversely... (1)

Junta (36770) | more than 3 years ago | (#35451758)

As awesome as everyone talks up these 'brains' and how incredibly superior they are with only 20 watts, the fastest brain on earth can't even keep up with a 10 dollar pocket calculator that uses a fraction of a watt when it comes to remotely complex arithmetic.

Obviously, we have two very different things here. We created computers to be good at the stuff we are *not* good at, not to match our capabilities (we wouldn't spend so much money making machines that are good at just the same things we are). That's one fallacy these discussions keep running into: we assume one is simply 'better' than the other rather than distinct.

Re:Conversely... (0)

Anonymous Coward | more than 3 years ago | (#35451870)

"(we wouldn't spend so much money to make machines that are good at just the same things we are"

Obviously, you never bought a slave.

Re:Conversely... (1)

Chapter80 (926879) | more than 3 years ago | (#35451962)

As awesome as everyone talks up these 'brains' and how incredibly superior they are with only 20 watts, the fastest brain on earth can't even keep up with a 10 dollar pocket calculator that uses a fraction of a watt when it comes to remotely complex arithmetic.

Exactly!

My $50,000 BMW can't keep up with my $10 pocket calculator when it comes to math. And my $10 calculator can't drive me to the mall.

Past tense (1)

Lord Lode (1290856) | more than 3 years ago | (#35451824)

Why is this article written in past tense? It contains funny paragraphs like this:

'While fundamental physics and molecular biology dominated the past century’s innovations, Sejnowski said the years between 2000 and 2050 was the “age of information”.'

2050 isn't really the past, right?

Re:Past tense (0)

Anonymous Coward | more than 3 years ago | (#35452138)

Indirect speech? At least the second part.
'She said "A is B"' becomes 'She said A was B.' It is shifted one step into the past because the indirect speech is introduced with "said" instead of "says" (which again is due to the way journalistic texts are written).

I think that's a grammatical rule in English, not based on logic so much. The rules for indirect speech definitely differ from language to language.
Disclaimer: Not a native speaker.

Machine intelligence By Francisco Villanueva (0)

Anonymous Coward | more than 3 years ago | (#35452018)

I don't know much about these matters, but it seems to me a bit odd that nobody seems to accept, or just consider, that there is something else in humans aside from brains - something we usually call "life", which is very different from machines. We could someday build a perfect and very advanced machine just like our brains, but... will we be able to put life in it? Perhaps that is the most important point to be understood when talking about machines and humans. Is there someone able to consider that we are alive?

Not-so-fast with handing the Tianhe a fraudulent r (1)

sethstorm (512897) | more than 3 years ago | (#35452038)

That belongs to the Jaguar Cray XT5-HE, not the overstated specs of the system that "claims" the supposed top slot.

Move it down a bit more and you would truthfully be representing its capability. But then you'd just want to modbomb me into oblivion, since that's easier to do.

processor organization is the problem (1)

AchiestDragon (971685) | more than 3 years ago | (#35452122)

A computer CPU and software process in a flat, 1-dimensional stream; neural structures are emulated by taking the time to read each one's state, one after another, and simulating the actions of the interconnects to get the result.

A "hardware/softcore" FPGA-based neural net would form a flat, even, 2-dimensional "grid" array.

But a DNA-based brain is both a 3D structure and also has sub-"fractal" patterned interconnected structures within it.

To form even a bee-style neural structure in an FPGA would still need the logic cells to be arranged in the correct "fractal" patterned structure of interconnects.

Either way, current processing technology does not lend itself to the creation of viable neural net structures that would allow for a fair comparison.
 

Abby Someone... (2)

khr (708262) | more than 3 years ago | (#35452230)

"Who's brain did you emulate?"

"Uh, Abby someone..."

"Abbey who?"

"Abby Normal...."

Apples and oranges? (1)

McTickles (1812316) | more than 3 years ago | (#35452244)

The problem here is that they compare two very different things.
I feel that computer intelligence will never work if we just aim to copy neural networks; the perspective of the problem is all wrong. You can't have a computer program fake neurons and expect it to go...
I think to do AI we must first understand what exactly we are trying to achieve and what makes it possible in the natural world.

FPGA (1)

Twillerror (536681) | more than 3 years ago | (#35452848)

It seems like we already have this in FPGAs. We don't really have good clusters of them, though... at least not that I know of.

I'm a software developer who has dabbled in VHDL and created some basic programs that ran directly on a chip.

It was a major pain for someone just trying to write something. A higher-level language designed for parallel computation on a large FPGA array might be more in line with what he wants... without trying to design hardware specifically for the problem. Although maybe after a while common patterns would arise.

I've Been Thinking About Those Pink Meatputers (1)

Greyfox (87712) | more than 3 years ago | (#35452866)

They have their benefits and their drawbacks but at some point you'd think the benefits and drawbacks of silicon would even out. At the astounding rate technology's been progressing since I got into the industry, I'd have guessed that silicon would have passed us up by now, but that appears not to be the case. I believe a lot of AI researchers made similar predictions though, so I don't feel too bad.

I suspect there's some trickery going on in the meatputer though. The whole system feels kludgy. They seem to only really have the benefit of being massively parallel and heavily optimized through millions of years of them being eaten if they're not optimized. I'm also somewhat surprised that, given how parallel they are, they're not easier to deadlock. I'd expect it to be easier to crash one, and then execute a privilege escalation exploit to gain root access. I guess some of that millions of years of evolution has also added some decent semaphore code, since easily crashed ones probably also get eaten...

COSA (0)

Anonymous Coward | more than 3 years ago | (#35452880)

Massively parallel, intuitive concepts, and by the way, should also improve reliability and programmer productivity.

http://rebelscience.org/Cosas/COSA.htm

Just use real live brains! (0)

Anonymous Coward | more than 3 years ago | (#35452944)

Instead of making computers emulate brains, we need to make the brain act like a computer.

We need 3 things.
- Ability to grow brains with no body.
- Ability to keep a brain alive and working with no body.
- Knowledge of how the brain's inputs and outputs work. Knowledge of the rest of the brain does not matter so much. The features we want in the brain will over-develop whilst redundant parts of the brain won't develop.

We need to tap into the part of the brain that processes sound, then hook up a microphone to it. Same with vision. The brain will then develop itself in response to its inputs. This may be what Microsoft is hoping we'll develop with their Kinect (haha).

Once you have a brain that can see, hear and make noises, it needs to be programmed. Forget all the standard programming your brain goes through completely. We don't want it to be human, we want it to be a machine. So we teach the brain maths, and nothing but maths, constantly from birth to 25 years old, through every sensor we can. The brain will create pathways purely to solve math equations, since that's all it thinks life is. All those brain functions that we use to store faces, names and events will be used to store math tables. We've built one of the most powerful calculators ever.

Then it's time to install the runtime. The next brain (2nd gen) will be taught a runtime DLL (done by running programs and issuing pleasure responses when the correct path outcome is achieved) - a bit like Japanese eroge games - allowing us to run programs on the brain. I dare say there may be enough humanity left in the brain that we may also be able to use interrupts to hook into the brain's lower-level functions like love, greed, desire, and the acquisition of pornography.

We may even be able to overclock the brain in ways we haven't known before: drugs which enhance synaptic abilities, or perhaps the brain can be used more effectively in different environments, like a vacuum or with rapid progressions of heating and cooling. Basically we'll need to do lots of hideous experiments. Don't worry about the ethical problems, it's just meat.

At that point the fembot is a gimme. We just need to hook the brain up to a steady stream of cocaine, show it pictures of phallic symbols and trip the pleasure interrupt.
