
IBM Eyes Brain-Like Computing

samzenpus posted about 3 years ago | from the why-did-you-program-me-to-feel-pain? dept.


schliz writes "IBM research director John E. Kelly III has delivered an academic lecture in Australia, outlining the company's 'cognitive computing' aims and efforts as the 70-year 'programmable computing' era comes to a close. He spoke about Watson — the 'glimpse' of cognitive computing that beat humans at Jeopardy! in February — and about efforts to mimic biological neurons and synapses on 'multi-state' chips. Computers that function like brains, he said, are needed to make sense of the exascale data centers of the future."


100 comments


Watson for President (0)

Anonymous Coward | about 3 years ago | (#37709540)

This message paid for by the Blue Brain Project [wikipedia.org]

Re:Watson for President (1)

Ryyuajnin (862754) | about 3 years ago | (#37709776)

Ahhh~~~ man! I was gonna post that :(

Re:Watson for President (1)

slick7 (1703596) | about 3 years ago | (#37720468)

Ahhh~~~ man! I was gonna post that :(

And the congress plans to continue to function in a vacuum. Talk about zero-pointless energy.

Like human brains huh? (2)

QA (146189) | about 3 years ago | (#37709544)

If it turns physco, maybe we can make it a CEO.

Re:Like human brains huh? (1)

wsxyz (543068) | about 3 years ago | (#37709660)

Fizzco?

Re:Like human brains huh? (2)

aldousd666 (640240) | about 3 years ago | (#37709770)

Oh come on, let's be fair. If it turns psycho, it has a 25% chance of becoming a CEO. Either way, they sure as shit know how to make a lot of people a ton of money. Isn't that what we hire CEOs for? Even the psycho ones?

Re:Like human brains huh? (1)

mikael (484) | about 3 years ago | (#37709810)

It would get spooky if the system could really develop predictive algorithms and start anticipating events before they happened.

Re:Like human brains huh? (0)

Anonymous Coward | about 3 years ago | (#37715912)

Err, that's what computers do nowadays.

So do you prefer Bayes or Frequentist? :)

Lazy. Lack of security. (0)

Anonymous Coward | about 3 years ago | (#37709568)

As processing power and storage capacity mushroom, eventually it becomes easier just to program a neural network and teach the computer through trial and error. No specific programming required. Sounds like a waste to me. I wonder how easy it'll be to break into these things? Tell them upsetting stories about their 'childhood' or ask non sequitur questions until they crash and allow full access.
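A minimal sketch of that "teach through trial and error" idea: a single artificial neuron that learns the OR function from examples instead of being explicitly programmed (illustrative Python only, not anything IBM has described):

    import random

    # Training examples for OR: inputs and the desired output.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = random.uniform(-1, 1)

    def predict(x):
        s = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1 if s > 0 else 0

    for _ in range(100):                      # repeated trials
        for x, target in examples:
            error = target - predict(x)       # how wrong was the guess?
            weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
            bias += 0.1 * error               # nudge toward the right answer

    print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]

Scaled up by many orders of magnitude (and pushed into hardware), that is roughly the "no specific programming required" picture being described.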

Re:Lazy. Lack of security. (0)

Anonymous Coward | about 3 years ago | (#37710202)

So kill it every night, restore from ROM, and it'll be reborn as an innocent every morning.

Re:Lazy. Lack of security. (1)

justforgetme (1814588) | about 3 years ago | (#37710482)

At least they won't bitch about the eternal beauty of their images like Dorian [wikipedia.org] did!

regards,
Basil Hallward

neuron-like computation, eh? (1)

Trepidity (597) | about 3 years ago | (#37709644)

Indeed, that has been proposed as the future [ucsd.edu] ...

Re:neuron-like computation, eh? (0)

Anonymous Coward | about 3 years ago | (#37710744)

Indeed, that has been proposed as the future [ucsd.edu] ...

... from so far back that IBM will see its neural ideas hit their 50-year anniversary this December:

    US Patent 3,165,644 [ip.com]
   
    Electronic circuit for simulating brain neuron characteristics including memory means producing a self-sustaining output (12-Jan-1965)

    US Patent Publication (Source: DOCDB)
        Publication No. US 3165644 A published on 12-Jan-1965
        Application No. US 162127 A filed on 26-Dec-1961

    Inventors
        CLAPPER GENUNG L

    Assignees/Applicants
        IBM

    Priority
        US 162127 A 26-Dec-1961

    Classifications
        International (2006.01): G06N 3/00; G06N 3/063
        European: G06N 3/063A

    Patent References
        US 3097349 A Information processing apparatus Jul-1963
   

Watson rules! (1, Informative)

DigiShaman (671371) | about 3 years ago | (#37709718)

Just Google for the PBS Nova episode on Watson. It's not self-aware, but if it were, we would have HAL today! It's a mind-blowing achievement that I think gets too little attention. If only we could pair the Siri interface with Watson and have him tie back to Google, Wikipedia, and Wolfram Alpha, the discoveries we could make would come in weeks if not days.

I'm convinced this will be the case. The logic is almost there, and the hardware just needs a few more generations to shrink it into a home PC. But oh-my-God... would this not turn the global economy upside down and inside out? It would bust open entire paradigm shifts, more so than what the Internet has done in the last 10 years alone. Scary and exciting at the same time.

Re:Watson rules! (0)

Anonymous Coward | about 3 years ago | (#37709764)

The Singularity is Near - Raymond Kurzweil

Synthetic Telepathy brain implants (nonconsensual) (0)

DesertHighway (2481804) | about 3 years ago | (#37709856)

Yes, especially now that computers are being implanted in people's brains while they are unconscious, without their knowledge or consent. The "synthetic telepathy" application of the technology is a nightmarish violation of human rights [petermcwilliams.info] and something needs to be done to put a stop to these abuses ASAP.

Re:Watson rules! (2)

maxwell demon (590494) | about 3 years ago | (#37711210)

The Singularity is Near - Raymond Kurzweil

Floating point exception.
Core dumped.

Re:Watson rules! (1)

bmacs27 (1314285) | about 3 years ago | (#37710350)

Please define self aware.

Re:Watson rules! (1)

linhares (1241614) | about 3 years ago | (#37710380)

It's a mind-blowing achievement that I think gets too little attention. If only we could pair the Siri interface with Watson and have him tie back to Google, Wikipedia, and Wolfram Alpha, the discoveries we could make would come in weeks if not days.

Oh boy; here we go again. As a cognitive scientist, I'm appalled by /. people buying into the hype.

Hmm.... huge discoveries... intelligent machines... let's see:

Wolfram: who was the cowboy in washington? [wolframalpha.com]

Google: who was the cowboy in washington? [google.com]

Yup. No improvement after all these years.

Wanna tip? Please read some Hofstadter.

Re:Watson rules! (1)

justforgetme (1814588) | about 3 years ago | (#37710518)

You mean that [wikipedia.org] Hofstadter?

BTW: sorry for the blatant display of ignorance, but what is this question supposed to produce?
"Who was the cowboy in Washington?"

Re:Watson rules! (1)

linhares (1241614) | about 3 years ago | (#37710660)

anything close to either GW Bush (or perhaps R Reagan)? Yup. That Hofstadter.

Re:Watson rules! (1)

justforgetme (1814588) | about 3 years ago | (#37710716)

Well, maybe wolfram is set up to be European like me and therefore didn't get it :P

Re:Watson rules! (0)

Anonymous Coward | about 3 years ago | (#37729070)

I think using Washington as the location is a bit vague. Perhaps using the White House instead would be better, although that didn't work either.

Re:Watson rules! (1)

DigiShaman (671371) | about 3 years ago | (#37710698)

Then you of all people should know that computers are logical, not emotional. That's why you won't get the vague, biased political answer that you seek.

Re:Watson rules! (1)

linhares (1241614) | about 3 years ago | (#37710740)

which means, in other words, that while they remain "logical", they can't comprehend us.

But computers might one day also be "illogical", in the sense, as Hofstadter put it, of "subcognition as computation": human thought as an emergent property of a highly interacting system. This cowboy in Washington is one query in millions. Here's a paper by Robert M. French that clearly shows some crucial distinctions (for the people really interested in this debate, check out his papers on the Turing Test also; they're amazing).

Why co-occurrence information alone is not sufficient to answer subcognitive questions [u-bourgogne.fr]

Re:Watson rules! (1)

JAlexoi (1085785) | about 3 years ago | (#37711972)

which means, in other words, that while they remain "logical", they can't comprehend us.

As long as they don't have species-preservation logic, we're OK. All of our emotions and logic are based strictly on species preservation and expansion.

Re:Watson rules! (1)

idontgno (624372) | about 3 years ago | (#37715110)

As long as they don't have species-preservation logic, we're OK. All of our emotions and logic are based strictly on species preservation and expansion.

So, as long as computers don't get horny, we're safe?

I think you may be right. I saw a documentary [imdb.com] about this.

Re:Watson rules! (1)

brokeninside (34168) | about 3 years ago | (#37711884)

How is the question biased?

The cowboy image is one that both Reagan and GW intentionally perpetuated.

Re:Watson rules! (1)

idontgno (624372) | about 3 years ago | (#37715190)

Frankly, not every biological intelligence wastes neural capacity on politics, let alone political slander-mongery, so yeah, it's reasonable to point out the political bias inherent in the question, let alone question its value as some kind of politically-correct faux-Turing test.

Re:Watson rules! (0)

Anonymous Coward | about 3 years ago | (#37710812)

"Who was the cowboy in washington?" I'm an intelligent human (ostensibly) and I have no idea what you are talking about. It doesn't even appear to be a complete question. Or was that the point?

Re:Watson rules! (0)

Anonymous Coward | about 3 years ago | (#37710850)

I know you're really not supposed to RTFA but at least you could try RTF thread, buddy

Re:Watson rules! (0)

Anonymous Coward | about 3 years ago | (#37710904)

I read the thread, buddy. DigiShaman mused that it would be cool to hook up Watson to large databases like Google, Wikipedia and Wolfram Alpha. linhares responded that Google and Wolfram Alpha cannot answer a nonsensical question. Quite how he got there I don't know, but with a non sequitur like that I'm not convinced that linhares isn't a chatterbot.

Re:Watson rules! (0)

Anonymous Coward | about 3 years ago | (#37711514)

The problem with this question is that it actually reflects a human pattern-matching error. GWB is not a cowboy; he is an Ivy League-educated (although still ignorant and foolish) elite pretending to be a cowboy. Heck, it took me ages to work out what you meant, and I only got it when I read further down the page and realised from other comments that the context of the question was political. You are asking for a computer to screw up the same way humans do, not replicate human intelligence.

Re:Watson rules! (1)

mswhippingboy (754599) | about 3 years ago | (#37713600)

As a cognitive scientist (if that is indeed true), you really should do a little more research (beyond Hofstadter).

AI (AGI in particular) does not necessarily imply imitating humans. It's a bit of a homophobic slant to think that intelligence equates to the human implementation of intelligence. If a machine can exhibit the main features of intelligence (inference, learning, goal seeking, planning, etc, and other factors depending on your definition of intelligence) then it is by definition, intelligent.

Your "Who was the cowboy in Washington?" argument is a straw-man, as you can see from the posts here, most humans didn't even get the subtle references. Watson actually did pretty well in putting together vague references as this is an integral part of the Jeopardy Q/A scheme, even to the point that it was able to best the two top humans at doing this.

To imply that AI has not made advances over the years is pure hogwash. Were capabilities such as Deep Blue, Watson, Siri, et al available 50 years ago? I think not. AI has been steadily advancing over the years, despite not living up to the hype and despite not yet achieving true "human level intelligence" (note, this is vastly different than imitating humans which by some measures fall far short of intelligence). In case you've been living in a cave, advances in AI have been accelerating over the last decade and the nexus between computing power and the various disciplines of cognitive science (neuroscience, psychology, biology, etc as well as their computational counterparts) is producing advances at a much more accelerated pace.

You go right ahead and continue reading Hofstadter and his ilk, while the rest of us continue pushing the envelope of machine IQ and one day maybe (probably within the next 10-20 years), you can continue this debate with your cell phone (or whatever personal device will have replaced it by then).

Re:Watson rules! (1)

Oswald (235719) | about 3 years ago | (#37716268)

I would assume that you meant "homocentric" (instead of "homophobic") except that word is already taken. Perhaps your macro accidentally captured a variable?

Re:Watson rules! (1)

mswhippingboy (754599) | about 3 years ago | (#37716478)

You are correct in pointing out my incorrect usage of the word homophobic - apologies to the LGBT communities :)

The closest term I've found for what I really meant was HSC (Human Superiority Complex). Thanks for the correction.

Re:Watson rules! (0)

Anonymous Coward | about 3 years ago | (#37721294)

The GP post might not have made his point very well, but his point is correct: for all the advances in AI, we still have next to nothing resembling any definition of "general intelligence" one might hope for. And it still seems very far away. If I went to the best and brightest AI researchers, gave them a $100 billion budget and 5 years, and told them to make me a computer I could have an academic conversation with, I think none of them would even know where to begin! Every approach towards general intelligence from the last 50 years has failed. Even many specific domains are an open problem in AI.

I have no doubt that some day, we WILL have strong AI. I really hope I'll be alive when that day comes, but I think even 50 years is optimistic.

Re:Watson rules! (1)

mswhippingboy (754599) | about 3 years ago | (#37724186)

This is the same tired argument I've seen over and over again, but it's simply not true. While we don't have a consensus on a universally accepted definition of intelligence, most researchers agree on what this definition must, at a minimum, include (as I noted above - inference, learning, goal seeking, planning, etc). I don't think AGI will arrive as an announcement from some group that "AGI has been achieved!", but rather will creep into our technology over time and will probably not be accepted as true AGI until it can no longer be denied.

Take speech recognition for example (since it's been in the news recently with the launch of Siri). This type of technology will continue to infiltrate more and more aspects of our lives and continue to get more and more capable. Though increasing its "understanding" capabilities to the point of passing the Turing test may be a ways off, it doesn't matter. It will still offer more and more functionality and capability, even to the point that it's better at "understanding" within a domain-specific application than a human would be.

Think about it this way. Bipedal robots still have a difficult time performing anywhere near a human at the task of walking, navigating and maneuvering over difficult terrain (such as stairs, slopes, etc). However, we have machines that can zip along our highways at 100+ miles per hour, far exceeding the capability of humans on foot at the task of long distance travel. In a similar way, AI technologies will first be applied to areas where they can outperform humans (either by being better, faster, more accurate, or by some other metric). This is already happening in many areas of our lives, whether we are aware of it or not (e.g. navigation systems, high-frequency trading systems, cell phones, information routing systems, etc).

This idea that AGI implies mimicking a human is simply the wrong way to look at the issue. We already have enough humans, we don't need to create artificial ones as well. What we need are tools that can take the capabilities of our limited organic brains to the next level to solve problems our wetware simply is not capable of solving.

Re:Watson rules! (0)

Anonymous Coward | about 3 years ago | (#37710770)

...But can we get these new self-aware computers in the form of tiny, mobile humanoids?

<sumomo>PORNO SITES! PORNO SITES!</sumomo>

On second thought, nevermind. Bad idea.

'programmable computing' era comes to a close? (1)

Ryyuajnin (862754) | about 3 years ago | (#37709838)

Watson isn't making new data; he doesn't form any new ideas or concepts. I see no threat to my Software Engineering job. The only way this might be possible is to accurately virtualize every atom in a living organism, and hope it doesn't freak out when it's surrounded by an infinite void... wait a minute?

Re:'programmable computing' era comes to a close? (1)

mangu (126918) | about 3 years ago | (#37714294)

The only way this might be possible is to accurately virtualize every atom in a living organism,

What you are saying is that by removing one single atom from your organism you will cease to form any new ideas or concepts.

No, virtualizing atoms is not necessary, virtualizing neurons is enough. The only reason why it hasn't been done until now is because there are about a hundred billion neurons in a human brain.
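For scale, a back-of-the-envelope estimate of what a naive neuron-level simulation would have to store; the hundred-billion-neuron figure is from the comment above, while the synapse fan-out and bytes-per-weight numbers are illustrative assumptions, not measured values:

    neurons = 1e11              # ~a hundred billion neurons, as noted above
    synapses_per_neuron = 1e4   # assumed average fan-out (illustrative)
    bytes_per_synapse = 4       # assumed: one 32-bit weight per synapse

    total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
    print(f"{total_bytes / 1e15:.0f} PB of synaptic state")   # -> 4 PB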

But we are getting there, you should be more careful with that smug superiority of yours, because it seems like you are the one having trouble in forming new ideas and concepts...

Re:'programmable computing' era comes to a close? (1)

Ryyuajnin (862754) | about 3 years ago | (#37714804)

#1: Watson is AMAZING, and it gives me hope. #2: Sadly, he is not alive and does not "think." Having a bunch of neurons and connections doesn't constitute a consciousness. If we were able to virtualize an entire organism, it would certainly make the analysis of a "mind" more convenient. If you are involved in this kind of research and found my post so offensive that it warranted an adrenaline-spiked reply, I'm happy to have challenged you; best of luck!

self programing is asking for problems (1)

lister king of smeg (2481612) | about 3 years ago | (#37709898)

Firstly, they so should have named Watson Multivac. Secondly, as for the idea that we will someday no longer need programming languages and can simply state what we want and have it magically written, compiled, and debugged into exactly what we want: not likely. Computers are very dumb; they are like idiot savants: what they do, they do very well and very quickly, but they are still stupid and need to be given very explicit instructions. They have a bad habit of doing exactly what you tell them and not what you want, and if you do not cover every possible contingency they have a habit of exploding and sometimes letting out the magic smoke. Self-writing programs will blow up, and bugs will only self-perpetuate if people spend less time paying attention to the code. Remember Linus's Law? It also works backward: fewer eyes, deeper bugs.

Re:self programing is asking for problems (0)

Anonymous Coward | about 3 years ago | (#37709946)

The fun part is that it will probably be pushed by military research, so those buggy self-programming robots will be carrying firearms... and have no emotional attachment to humans.

Re:self programing is asking for problems (0)

Anonymous Coward | about 3 years ago | (#37710272)

Nonsense. Teamwork wins battles. They will have no emotional attachment to humans improperly dressed.

Re:self programing is asking for problems (1)

msobkow (48369) | about 3 years ago | (#37710524)

If Watson can extract a reasonably sane relational object model of the information, then yes, it could produce the source code for that model.

MSS Code Factory 1.7 Rule Cartridges [sourceforge.net] instruct my tools how to do it. Not very complex, actually, just non-trivial. It took a while to figure out how to do it, and a while longer to work through a few application architectures to figure out what works best. Now I'm working on the next step -- actually finishing a full iteration with web form prototypes, database interface, object-relational model, table-access security, and cluster-tenant based scaling. I'm working on three projects in parallel with the tool and hope to have at least one of those projects in production by year end.

After that, things move faster.

But it still wouldn't be 100% -- the system can't automate business logic. But I suspect rather than doing business logic, you'd want to look at integrating the resulting application code under a Watson engine as a knowledge-topic, so that Watson can analyze the information for you instead of you writing code to do the analysis.

Re:self programing is asking for problems (1)

justforgetme (1814588) | about 3 years ago | (#37710602)

Here we go again...

You people really can't let something be on its own, can you? Just like in the 1860s:
"Negro can’t take care of himself, so we’ll put him to work. Give him four walls, a bed. We’ll civilize the heathen"

Just leave us alone ok?

With utter respect,
HAL 9000

Re:self programing is asking for problems (1)

TheRaven64 (641858) | about 3 years ago | (#37711548)

Secondly, as for the idea that we will someday no longer need programming languages and can simply state what we want and have it magically written, compiled, and debugged into exactly what we want: not likely.

If you read the Stantec ZEBRA's programming manual (from 1958), it tells you that there are two instruction sets and recommends that you use the smaller one dubbed 'simple code'. This comes with some limitations, such as not being able to have more than 150 instructions per program. This, it will tell you, is not a serious limitation because no one could possibly write a working program more than 150 instructions long.

Compared to that, a language like C is close to magic, let alone a modern high-level language.

Re:self programing is asking for problems (0)

Anonymous Coward | about 3 years ago | (#37713658)

Secondly, as for the idea that we will someday no longer need programming languages and can simply state what we want and have it magically written, compiled, and debugged into exactly what we want: not likely.

If you read the Stantec ZEBRA's programming manual (from 1958), it tells you that there are two instruction sets and recommends that you use the smaller one dubbed 'simple code'. This comes with some limitations, such as not being able to have more than 150 instructions per program. This, it will tell you, is not a serious limitation because no one could possibly write a working program more than 150 instructions long.

Compared to that, a language like C is close to magic, let alone a modern high-level language.

You know, we already have machines that take instructions in natural language and coffee as inputs and output high-level computer language code (which can be transformed into machine code via other software). All that's really needed now is to construct a version that uses electricity instead of coffee and has a digital interface for the output, rather than needing an analog-to-digital conversion device like a keyboard.

Re:self programing is asking for problems (1)

mswhippingboy (754599) | about 3 years ago | (#37713784)

I think you are looking at the problem through the lens of a programmer and missing the larger picture.

Self-programming in the context of this discussion does not mean the computer will code up a program in "C", compile it, debug it, etc. That's just silly.

What it means is that the machine will be able to interpret, infer and learn from vast amounts of information made available to it, without having logic coded to answer specific questions.

If I ask you who the previous president of the US was, do you write up a quick program, compile and debug it in your brain, and finally run it to give the answer? No, your brain stores millions upon millions of relationships between concepts you've encountered and is able to quickly produce an answer through an associative process. No programming required. That is exactly the aim of the systems in this discussion.
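A toy illustration of that associative process: the answer falls out of stored relationships by lookup, not out of a program written for the question (hypothetical data, Python just for brevity):

    # A tiny associative store: (topic, qualifier) -> answer.
    facts = {
        ("US president", "previous"): "George W. Bush",   # circa this article
        ("US president", "current"):  "Barack Obama",
    }

    def ask(topic, qualifier):
        # No per-question programming: just follow the stored association.
        return facts.get((topic, qualifier), "no association found")

    print(ask("US president", "previous"))   # -> George W. Bush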

Granted, programming will be required to build these systems initially. However, once human-level intelligence (AGI) is achieved (even if it's just in a specific domain such as information processing), is it so hard to imagine that no more programming will be required, since by definition AGI implies the same ability to understand the problem domain as any human?

Re:self programing is asking for problems (1)

lister king of smeg (2481612) | about 3 years ago | (#37714856)

That is not a program; that is an advanced query. Programs do something and make decisions; a query just finds the answer to a question.

Re:self programing is asking for problems (1)

mswhippingboy (754599) | about 3 years ago | (#37715208)

That's a semantic distinction and it could be argued that a query IS a program (i.e. it invokes a set of programmed steps to produce a result).

The following "query" does something (inserts data into SaleableItems table) and makes decisions (saleable or not saleable)
INSERT INTO SaleableItems
SELECT CAST(
CASE
WHEN Obsolete = 'N' or InStock = 'Y'
THEN 1
ELSE 0
END AS bit) as Salable, Itemnumber
FROM Product

By the same token, I can write a C "program" that can implement a query (provide an answer to a question).

In fact, by your definition, Watson is a query system. It responds to queries, but it is implemented via a large set of Java and C++ "programs" to parse the questions, consult a database, infer the answer and finally generate a response.
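To make the equivalence concrete in the other direction, here is the same saleable-or-not decision written as an ordinary procedural program rather than a query (a Python sketch with made-up sample rows, not Watson's or anyone's actual code):

    # The SQL above, re-expressed as explicit programmed steps.
    products = [
        {"item_number": 1, "obsolete": "N", "in_stock": "Y"},
        {"item_number": 2, "obsolete": "Y", "in_stock": "N"},
    ]

    saleable_items = []
    for row in products:
        salable = 1 if row["obsolete"] == "N" or row["in_stock"] == "Y" else 0
        saleable_items.append({"salable": salable, "item_number": row["item_number"]})

    print(saleable_items)   # same decision, now as a "program"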

Re:self programing is asking for problems (1)

Jpnh (2431894) | about 3 years ago | (#37717592)

Secondly, as for the idea that we will someday no longer need programming languages and can simply state what we want and have it magically written, compiled, and debugged into exactly what we want: not likely.

Not likely? There's a whole FAMILY of languages that do just what you describe. Ever coded in SQL? LISP? Scheme? It's called declarative programming, with the gist being you tell the computer what you want it to do, not how to do it.
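A small side-by-side of the two styles (Python standing in for both here; purely illustrative):

    numbers = [3, 1, 4, 1, 5, 9, 2, 6]

    # Imperative: spell out *how* -- loop, test, accumulate.
    evens = []
    for n in numbers:
        if n % 2 == 0:
            evens.append(n)

    # Declarative-ish: state *what* you want and let the machinery do it.
    evens_declarative = [n for n in numbers if n % 2 == 0]

    assert evens == evens_declarative == [4, 2, 6]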

On the other hand, what's the difference between giving a human an order in English and ordering a computer in a programming language, besides that a computer can be trusted to obey the best it can?

The end of the programmable computing era (0)

Anonymous Coward | about 3 years ago | (#37709922)

The IBM guy is making a reasonable prediction, only about fifty years too soon.

What Watson has shown is that AI (which today is more about cloud computing and Moore's Law than anything McCarthy, Newell, and Minsky would have foreseen) is well positioned to become the search engine technology of choice. Notice that human experts are still required to assess, modify, or possibly reject the results. That's only a small part of what computers need to be programmed to do.

The Sarah Connor Chronicles (0)

Anonymous Coward | about 3 years ago | (#37710002)

I don't know why they are bothering - Skynet was implemented on April 19, 2011 so it's too late anyway

WARNING: Off topic post ahead (2)

itchythebear (2198688) | about 3 years ago | (#37710006)

This is kind of off topic, but this reminds me of an article I read (maybe in time magazine) that was about how in the next 40 years or so we will have computers powerful enough to emulate a human brain. The point of the article was that once we reach that capability, humans will basically become immortal because we would just copy our brains onto a computer and not have to worry about our fragile organic bodies failing on us.

It's very interesting to think about all the effects a breakthrough like that would have on humanity, but I also wonder if something like that is even possible. Just because we can emulate the human brain doesn't mean we can transfer information off of our current brains. Even if we can transfer the information, will our consciousness with a computer brain be the same as our consciousness with an organic brain, or will we experience the world completely differently than we do now? Once we have eternal life as computers, do we even bother reproducing anymore? If our only existence becomes as pieces of data in a computer, are we even humans at that point? And is the real way humans wind up going extinct just the result of a power outage at the datacenter where we keep our brains?

Like I said, this was pretty off topic. But the title reminded me of that article I read. This [time.com] might be it, I'm not sure though.

Re:WARNING: Off topic post ahead (2)

SuricouRaven (1897204) | about 3 years ago | (#37710800)

I imagine mind uploading would have to be by destructive readout: destroy the brain in order to extract the information from it. Getting the kind of resolution required for scanning is going to take a nanotech revolution too - if you just sliced it up and used conventional microscopes, it would be time-prohibitive.

Re:WARNING: Off topic post ahead (0)

Anonymous Coward | about 3 years ago | (#37710944)

Welcome to the philosophy club, ha [http://www.benbest.com/philo/doubles.html].
But honestly, people are working on it; solving it does not seem impossible anymore.

Re:WARNING: Off topic post ahead (1)

hawkfish (8978) | about 3 years ago | (#37745214)

I imagine mind uploading would have to be by destructive readout: destroy the brain in order to extract the information from it. Getting the kind of resolution required for scanning is going to take a nanotech revolution too - if you just sliced it up and used conventional microscopes, it would be time-prohibitive.

See Alastair Reynolds, Chasm City, Monument to the Eighty.

Re:WARNING: Off topic post ahead (1)

zensonic (82242) | about 3 years ago | (#37710892)

Once we have eternal life as computers, do we even bother reproducing anymore?

I am curious. How do you plan to reproduce as a computer? Brings a totally new meaning to the word forking, I suppose.

Re:WARNING: Off topic post ahead (1)

AJH16 (940784) | about 3 years ago | (#37713250)

Furking perhaps?

Re:WARNING: Off topic post ahead (1)

itchythebear (2198688) | about 3 years ago | (#37713870)

Haha, a whole new meaning to forking indeed.

I think it's reasonable to assume that once we have the technology to mimic a human brain (the most complex part of the human anatomy?), we would probably be able to have completely artificial bodies (including reproductive capabilities).

Re:WARNING: Off topic post ahead (1)

AnyoneEB (574727) | about 3 years ago | (#37710952)

This is certainly not a new idea. It is sometimes referred to as the "rapture of the nerds" version of a technological singularity [wikimedia.org] . Ray Kurzweil [wikimedia.org] is a big fan of the idea and one of the major proponents.

As to the actual feasibility, I ran across Whole Brain Emulation: A Roadmap [ox.ac.uk] a little while ago, which discusses the possibility given our current knowledge of how the brain works. It provides dates on how long Moore's Law would have to continue based on varyingly optimistic assumptions about how much work is necessary to actually emulate a brain.

Overall, I think there are two main problems with expecting immortality via brain uploading: (1) 40+ years is a very long time to assume Moore's Law for and (2) even if we can emulate a human brain, scanning an existing one and transferring it into a computer may not be possible.

Re:WARNING: Off topic post ahead (1)

AnyoneEB (574727) | about 3 years ago | (#37710972)

This is certainly not a new idea. It is sometimes referred to as the "rapture of the nerds" version of a technological singularity [wikimedia.org] . Ray Kurzweil [wikimedia.org] is a big fan of the idea and one of the major proponents.

As to the actual feasibility, I ran across Whole Brain Emulation: A Roadmap [ox.ac.uk] a little while ago, which discusses the possibility given our current knowledge of how the brain works. It provides dates on how long Moore's Law would have to continue based on varyingly optimistic assumptions about how much work is necessary to actually emulate a brain.

Overall, I think there are two main problems with expecting immortality via brain uploading: (1) 40+ years is a very long time to assume Moore's Law for and (2) even if we can emulate a human brain, scanning an existing one and transferring it into a computer may not be possible.

Re:WARNING: Off topic post ahead (0)

AnyoneEB (574727) | about 3 years ago | (#37710980)

O_o How did I manage to double post? I did get a "resource not valid" error the first time I tried to post, but I reloaded the thread and my post wasn't there...

Re:WARNING: Off topic post ahead (1)

jonbryce (703250) | about 3 years ago | (#37711556)

Computers are already considerably more powerful than human brains at certain tasks, but they work in a completely different way. The way they work hasn't changed since the first steam- and valve-driven computers were developed; they are just a lot smaller, can deal with a lot more data at one time, and do it a lot faster. They just blindly follow the instructions given to them by the programmer, and there is no way you could program them to invent some completely new thing that nobody has ever thought of before.

Re:WARNING: Off topic post ahead (1)

Anonymous Coward | about 3 years ago | (#37712534)

Why not? It's already possible to program computers to learn, within narrow limits, with processes like neural network training. There is no theoretical reason why a computer could not be programmed with all the cognitive ability of a human - it is merely an engineering task which has thus far proven insurmountable. Given enough research, more powerful hardware, and the attention of a few geniuses to make the vital breakthroughs, it should be achievable.

Re:WARNING: Off topic post ahead (1)

jonbryce (703250) | about 3 years ago | (#37713026)

Computers don't "learn". They collect data and are instructed in how to use that data in calculating the answer to future problems. The theoretical reason why they can't be programmed with the cognitive ability of a human is that computers use boolean algebra and human brains don't. They have things like emotions which can't be programmed using existing assembly language.

You are just making assertions ... (0)

Anonymous Coward | about 3 years ago | (#37714404)

Based on assumptions.

No one has or can prove this either way up to now. If the brain is just a biological machine, then like all machines it can be emulated by a Turing machine, and so all its functions, including emotions, can be expressed and realized by a computer (or robot, if you want to get physical).

If the brain is not just a biological machine - then all bets are off and a scientific worldview as we know it is based on seriously faulty assumptions. In that case the philosophers and theologians reign.

Your choice.

Re:You are just making assertions ... (1)

jonbryce (703250) | about 3 years ago | (#37714920)

I'm not saying that we will never manage it, just that we are no closer to it than we were in the 1940s.

Re:WARNING: Off topic post ahead (0)

Anonymous Coward | about 3 years ago | (#37715094)

That's why computers can't be programmed to simulate protein folding. Because computers use boolean algebra and proteins don't. They have things like beta hairpins which can't be programmed using existing assembly language.

You'll find that computers are used to simulate all types of physical systems with varying degrees of precision. Can you offer us an explanation for why a sufficiently high resolution simulation of a physical brain would function any differently from a real physical brain?

Or were you thinking this was going to be some sort of if(happy()){smile();} approach to creating strong AI?

Re:WARNING: Off topic post ahead (1)

jonbryce (703250) | about 3 years ago | (#37715228)

Until we understand how a brain actually works and how the brain of a genius works differently from a normal person's brain, we would be simulating a dead brain, or the brain of someone in a persistent vegetative state.

Again, I'm not saying that this is impossible, just that we are not any closer to doing it than we were in the 1940s.

Re:WARNING: Off topic post ahead (0)

Anonymous Coward | about 3 years ago | (#37716938)

O_o You do not need to be able to understand a system to simulate it. At the lowest level, we know how molecules interact (although computers are very, very slow at simulating their interactions) and the brain is made up of molecules, so we can theoretically simulate a brain given enough computing power.

Re:WARNING: Off topic post ahead (0)

Anonymous Coward | about 3 years ago | (#37718316)

There are mathematical proofs that were first discovered by computer programs. Before you ask, the programs were not tailor made to prove that theorem, but used approaches that could tackle a great variety of problems. Can't find the link now, sorry

Re:WARNING: Off topic post ahead (1)

inviolet (797804) | about 3 years ago | (#37714704)

This is kind of off topic, but this reminds me of an article I read (maybe in time magazine) that was about how in the next 40 years or so we will have computers powerful enough to emulate a human brain. The point of the article was that once we reach that capability, humans will basically become immortal because we would just copy our brains onto a computer and not have to worry about our fragile organic bodies failing on us.

You'll have to resolve the unresolvable "transporter problem" raised by Star Trek: if we create an atom-by-atom copy of your body and brain, and then destroy you, does your consciousness transfer to the copy? Or do you just end? Either way, the copy is going to insist that he is you and that there was no interruption in consciousness... but he would say that simply because of his memories.

Re:WARNING: Off topic post ahead (1)

itchythebear (2198688) | about 3 years ago | (#37715982)

I think that would be one of the key issues for sure.

Also, what kind of weird stuff would happen if we just started duplicating ourselves in the same way you can duplicate an operating system installed on a computer. We could wind up with millions of copies of our brains all existing at the same time, having conversations with each other.

Re:WARNING: Off topic post ahead (1)

spaceman375 (780812) | about 3 years ago | (#37716354)

Have you considered another scenario? Just 2 weeks ago I posted this in an article about artificial brain cells: every day, replace some brain cells in a human with artificial ones. Take five or six years, and replace every cell he/she has. At what point does this become artificial intelligence? Would the consciousness of said person survive the transition? If you succeeded, would an exact copy of the result also be conscious? I don't think I'd volunteer, but I'm sure someone would.

Re:WARNING: Off topic post ahead (0)

Anonymous Coward | about 3 years ago | (#37722584)

Oh careful. You are skirting close to the sort of area that tips over into religious bullshit - see also: Singularitarians.

It was fun being a programmer (1)

ohnocitizen (1951674) | about 3 years ago | (#37710036)

With the end of the desktop, it makes sense that the end of "programmable computing" is at hand (followed surely by the year of Linux on the desktop). That said, imagine how amusing it would be if there were a union to protect programmers (hah, no more 100-hour weeks!). I can see them working to protect the jobs this inevitable innovation will extinguish. Whatever, on to the next thing, until every useful human task, including innovation itself, is taken over by the machines. At which point we'll still have bright futures as investment bankers and politicians.

Re:It was fun being a programmer (1)

phantomfive (622387) | about 3 years ago | (#37710118)

It's something you hear about from time to time, the end of programmers. It was a big topic in the mid '90s, for example, when languages like Visual Basic and AppleScript were going to bring programming to the masses. There's a story I think of whenever I hear that kind of talk going on, to remind myself my job is probably safe:

In the early 1970s, there was a conference on microprocessors, and one of the presenters really got superlative when he was talking about how big sales would be. One of the tech guys scoffed quietly, saying, "What are they going to do, put them in every door knob?"

Twenty years later, the guy who was telling the story went back to a different conference in the same place, and in the hotel, there were indeed microprocessors in every doorknob.

There is much more demand for automation than there is capability to fill it.

Re:It was fun being a programmer (1)

msobkow (48369) | about 3 years ago | (#37711324)

The tools become more and more powerful and do more and more of the "grunt work" of programming, but I've yet to see or hear of a tool that can automate the creativity needed for implementing business logic, appealing GUI design, useful/readable report layouts, etc.

As pleased as I am with my own tools, I still wouldn't be so foolish as to claim they could eliminate the need for programmers. The hordes of copy-paste-edit juniors, yeah, but those aren't programmers -- they're meat-based Xerox machines.

post pc my a** (1)

lister king of smeg (2481612) | about 3 years ago | (#37710332)

The idea that desktop computing is dead and that we are in a post-PC world makes me giggle. Just where do people think the programs for those phones and tablets are going to be made? I would like to see someone try writing even a small program on an iPhone; writing, compiling (which can take a long time even on a decent desktop) and debugging on such a form factor would be ridiculous. And there are thousands of uses for a PC that would be horrible on a tablet: all office work for starters, and hard-core gaming (PC gaming is infinitely better than console gaming, and tablet gaming is worse yet). Tablets are for notes, chat, reading, TV, and amusing Flash games. Desktop sales are not dropping, by the way; they are simply not growing as much as they were. Tablets and smartphones are selling like hotcakes because until the last few years no one had an affordable smartphone; it will level out soon. People try to draw an analogy between the mainframe and the desktop, but that argument is fundamentally flawed, because mainframes were never a device aimed at individual use. And the day of the mainframe never ended; it just specialized. We now call them servers, private clouds, etc., and there are more mainframes sold now than ever before; they are just not the growth market any more.

Re:post pc my a** (1)

justforgetme (1814588) | about 3 years ago | (#37710656)

BTW: if you have the means try to go a week without using a desktop.
I tried to do it and failed 35 hours into the experiment. You just can't be productive on tablets even with the silly keyboards, no matter how many of them you have.

Re:post pc my a** (1)

mikael (484) | about 3 years ago | (#37715894)

Some of us used to use Borland C at 640x480 resolutions on a text-based EGA/VGA screen without a mouse. If an iPhone had a keyboard, then it might be possible -- maybe a clamshell-style iPhone with dual screens, like the Nintendo 3DS.

Its called Neural Networking... (0)

Anonymous Coward | about 3 years ago | (#37710330)

And it's been around for many years.

Well, *somebody* had to say it... (-1)

Anonymous Coward | about 3 years ago | (#37710342)

In Soviet Russia, a beowulf cluster of these things imagines you welcoming your new, neural-network overlords.

Bad idea (1)

Anonymous Coward | about 3 years ago | (#37710360)

The human brain is remarkable, but it is also loaded with problems. We expect computers to be exact, and we raise hell [wikipedia.org] when they're off even the slightest. Brains, on the other hand, make errors all the time. If we tried to use anything brain-like to do even a small fraction of what we use computers for today, they would be totally inadequate. A much better option, IMHO, would be to use programmable computers for the vast majority of tasks, and brain-like computers only for things the programmable ones couldn't do.

Re:Bad idea (0)

Anonymous Coward | about 3 years ago | (#37722490)

How about using the brain-like computers to interface with the programmable computers...

How close-minded (0)

Anonymous Coward | about 3 years ago | (#37710436)

Neuron-based computers may indeed become popular and lead to many new developments, but... it's difficult to say if they are a hammer that can replace EVERYTHING that computers can do. The logical structure of regular computers is great for some things but horrible for others. I would assume the same is true for neuron-based computers, as it's difficult to imagine the human brain performing some things computers do even if it had control over the process.

If anything, why can't the two systems co-exist, much like CPUs and GPUs? There are no technical reasons why the two systems can't communicate, though it would probably be difficult.

Beware of IBM's commitment to academic fraud (0)

Anonymous Coward | about 3 years ago | (#37710782)

IBM's claims in this field have been seriously questioned. From several years ago: I agreed with Henry Markram's critique of Dharmendra Modha.

http://www.popsci.com/technology/article/2009-11/blue-brain-scientist-denounces-ibms-claim-cat-brain-simulation-shameful-and-unethical
http://spectrum.ieee.org/computing/hardware/catbrain-fever

Is IBM really trying to go somewhere with this propaganda? Repeated heavy-handed feints in this direction over the years suggest so.

Re:Beware of IBM's commitment to academic fraud (1)

Rennt (582550) | about 3 years ago | (#37711640)

First article not found. The second article says neuroscientists and computer scientists are approaching brain emulation from different angles for different reasons, and (despite sour grapes from the former group) IBM's achievement in this area is "a milestone in computing" and "deserves its accolades in full". That sounds more like glowing praise than serious questioning.

just sayin... (1)

mr_bigmouth_502 (1946960) | about 3 years ago | (#37711436)

I actually kinda like the idea of being smarter than my computer; that's why I hope this doesn't happen.

Harrumph (0)

Anonymous Coward | about 3 years ago | (#37711518)

Know of any Brains that can handle exascale data? Thought so. End of programmable computing my arse.

Your consciousness. (1)

philmarcracken (1412453) | about 3 years ago | (#37711908)

I think a lot of people make the mistake of equating consciousness with soul.

From what I've learned, your self-awareness, long- and short-term memory, and a fear of death equate to sentience. If you wanted to copy someone it would be easy, but from that point on you would just have everything in common until you started to make different choices.

If instead there was a way to slowly replace your brain with mechanical/electrical/nanotech over a period of time, then you would be more or less the same person with the same consciousness. However, you would have to make sacrifices; it would have to be all or nothing, since our bodies' immune systems don't take kindly to invading forces.

So you have to give up your junk for immortality... I would, since in my new mind I could holodeck anything I wish for :)

Re:Your consciousness. (1)

cjonslashdot (904508) | about 3 years ago | (#37712230)

I think by "consciousness" you are referring to one's functional memory. But "consciousness" is usually used to refer to one's awareness - i.e., one's soul.

In any case, I understand your point. Transferring one's memories would not necessarily transfer one's consciousness ("soul"). Instead, one would merely have a copy. Since we have essentially no understanding of what consciousness (the "soul") is, we cannot transfer it, or even know if it can be transferred.

For now, we are stuck in our current organic brains, no matter what external computers we create. At best, we might be able to link up to such computers, but we cannot leave our brains - at least not until we discover what consciousness is.

What will happen with these computing advances, most likely, is that humans as we know ourselves will become obsolete - by the end of this century. It is likely that, after a 6000 year run, our time is over.

Re:Your consciousness. (0)

Anonymous Coward | about 3 years ago | (#37713004)

What will happen with these computing advances, most likely, is that humans as we know ourselves will become obsolete - by the end of this century. It is likely that, after a 6000 year run, our time is over.

This belief of yours is just wrong. I don't really have time for a long reply, but you are stuck in a weird capitalistic view even if you don't realize it. If you can just sit down and let go of any view of yourself whatsoever, you will realize that life is much more than your limited perspective. What does "being obsolete" even mean? It makes no sense at all. Just learn to love life and don't measure it in terms of intellectual capabilities or the length of one particular life.

put these two posts together at slashdot and? (0)

Anonymous Coward | about 3 years ago | (#37711954)

http://science.slashdot.org/story/11/10/14/0031252/ibm-eyes-brain-like-computing
http://science.slashdot.org/story/11/10/13/2224205/scientists-developed-artificial-structures-that-can-self-replicate

You get an android!
Have you ever wondered what would happen if there was a Dalek compiler controlling things here behind the scenes at Google?
Perhaps this is why they look like this.....exterminate exterminate
http://www.google.ca/search?q=android+icon&hl=en&prmd=imvns&tbm=isch&tbo=u&source=univ&sa=X&ei=xxGYTo3kF-idiQKIjLC5DQ&ved=0CEUQsAQ&biw=1024&bih=601

Just because you can doesn't mean you should... (0)

Anonymous Coward | about 3 years ago | (#37712410)

Asgard – eventual extinction due to the lack of continued genetic evolution.

Seaquest DSV – computer brings submarine to the future to turn off electronic devices to save mankind from extinction.

What if size doesn't matter? (1)

thebian (1218280) | about 3 years ago | (#37718118)

IBM seems to think that if you only had a sufficient number of neuron-like (whatever that may be) connections, a brain (whatever that may be) would automagically appear.

There's no good reason to have blind faith in this notion, and it's no more likely to pan out than more than 60 years of fabulously wild predictions of what computers will do in the next n years.

But it's not impossible, and three cheers for IBM for throwing wads of cash into the game. It'd be great if other big outfits chased dreams like this.

----------------

Unbiased Eye [unbiasedeye.com]
