
Kurzweil on the Future

CmdrTaco posted more than 6 years ago | from the brain-upgrades-now-please dept.

Biotech 300

dwrugh writes "With these new tools, [Kurzweil] says, by the 2020s we'll be adding computers to our brains and building machines as smart as ourselves. This serene confidence is not shared by neuroscientists like Vilayanur S. Ramachandran, who discussed future brains with Dr. Kurzweil at the festival. It might be possible to create a thinking, empathetic machine, Dr. Ramachandran said, but it might prove too difficult to reverse-engineer the brain's circuitry because it evolved so haphazardly. 'My colleague Francis Crick used to say that God is a hacker, not an engineer,' Dr. Ramachandran said. 'You can do reverse engineering, but you can't do reverse hacking.'"


Obfuscation (4, Interesting)

kalirion (728907) | more than 6 years ago | (#23651157)

How is haphazardly hacked together code any harder to reverse engineer than intentionally obfuscated code? We know the latter isn't a problem for a determined hacker....

Re:Obfuscation (3, Informative)

ShadowRangerRIT (1301549) | more than 6 years ago | (#23651279)

Because haphazardly hacked together code is usually full of bugs and design limitations, while obfuscated code is simply rearranged good code? Integrating with buggy, poorly written code is not my cup of tea.

Re:Obfuscation (3, Funny)

dintech (998802) | more than 6 years ago | (#23652015)

According to the summary you do it every day with everyone you meet!

Re:Obfuscation (1)

Jesus_666 (702802) | more than 6 years ago | (#23652065)

I knew that introversion is the only right way!

By 1980, we'll be flying cars (Kurz is a crackpot) (0)

Anonymous Coward | more than 6 years ago | (#23652825)

The guy used to be okay, then he went cuckoo for Cocoa Puffs. Jeane Dixon is more often correct.

Re:Obfuscation (2, Insightful)

robertjw (728654) | more than 6 years ago | (#23652983)

Because haphazardly hacked together code is usually full of bugs and design limitations, while obfuscated code is simply rearranged good code? Integrating with buggy, poorly written code is not my cup of tea.
Yes, because we all know that obfuscated code NEVER has any bugs or design limitations. If the Microsoft document format and Windows File Sharing can be reverse engineered I'm sure anything can.

Re:Obfuscation (4, Interesting)

PainMeds (1301879) | more than 6 years ago | (#23651367)

I think his point is the belief that hacked-together code is more nonsensical and therefore more difficult to reverse engineer. It's like the difference between tracing back Ethernet cables in a clean colo facility vs. tracing back cables at something like MAE-East, which (the last time I looked, at least) largely resembled a post-apocalyptic demilitarized zone. At least that's how he seems to view hackers. I personally see hackers as codifying something even more beautiful, logical, and well-articulated than the mundane corporate programmer, delivering a much higher level of intelligence and complexity than most could understand. Either way, you end up with the conclusion that it's a real pain in the ass to hack something that someone smarter than you wrote. In line with the quote, if God is a hacker, then you'd expect him to be one of the super geniuses and not some poor yahoo not quite knowing what he was doing.

Apply that to the brain, and we're worlds behind. Considering we're still on the binary system, while DNA uses a four-letter instruction code, I think Crick grasped the complexity of the human body much better than Kurzweil seems to. Comparing the brain to a computer chip is, I think, a grossly unbalanced parallel.

Re:Obfuscation (0, Flamebait)

VeNoM0619 (1058216) | more than 6 years ago | (#23651979)

God? Hacker? First off, I don't understand the connection between God and being a "computer" expert. Second, this also assumes a scientist believes in God rather than evolution/big bang, etc., where we merely "happened" and weren't designed.

To bring religion into the field of biology... not unheard of, but not recommended either.

Re:Obfuscation (2, Insightful)

bigstrat2003 (1058574) | more than 6 years ago | (#23652155)

It's a metaphor. The literal meaning is irrelevant; the point is that the scientist thinks our biology is rather haphazard and jumbled, not well-structured. I think you're reading the statement too literally.

Re:Obfuscation (2, Interesting)

PainMeds (1301879) | more than 6 years ago | (#23652219)

Crick was a militant atheist, but he had at least a few philosophical feelings about God in his later years. I recall a quote of him talking about how we couldn't explain the origin of life, calling it "nothing short of a miracle". What his mixed signals tell me is that at heart he was a scientist, and wasn't prepared to make a biased judgment in either direction on whether God did or did not exist - because he likely knew that either position could bias his work. He was an interesting fellow to study, at least. Far more multidimensional than the dry scientists of this age... and I suspect he's answered any of his questions about God by now.

Re:Obfuscation (1)

offput (961196) | more than 6 years ago | (#23652469)

I'm an atheist and I still had to facepalm over this. The comment about "God" being a hacker is an analogy. One that fits evolution very well, since humans were "hacked" together one evolutionary change at a time.

Re:Obfuscation (1)

louzer (1006689) | more than 6 years ago | (#23652329)

Base 2 is more efficient than Base 4 for storing information if the cost of a b-state circuit is proportional to b.
Proof: []
Source: Hacker's Delight by Henry S. Warren, Jr.
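
The claim follows from a radix-economy argument, which is easy to check numerically. A sketch (the helper names are mine, not Warren's): under the stated cost model, base 2 never does worse than base 4, and base 3 is actually the optimum among integer bases.

```python
import math

def num_digits(n, b):
    """Number of base-b digits needed to write the positive integer n."""
    d = 0
    while n > 0:
        n //= b
        d += 1
    return d

def cost(n, b):
    """Warren's cost model: a b-state circuit costs b per digit stored."""
    return b * num_digits(n, b)

for n in (100, 10**6):
    print(n, {b: cost(n, b) for b in (2, 3, 4)})

# The continuous version of the cost is (b / ln b) * ln n, minimized at
# b = e ~ 2.718, so base 3 is the best integer base under this model:
print(min(range(2, 11), key=lambda b: b / math.log(b)))  # prints 3
```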

Re:Obfuscation (1)

PainMeds (1301879) | more than 6 years ago | (#23652509)

That's very weak. From the link:

Suppose you are building a computer and you are trying to decide what base to use to represent integers.

So this (very short, uncited, non-whitepaper) document claims to be a proof, yet it is hardly relevant to DNA, since it assumes that 1. you're building a computer, and 2. your primary interest is storage. In the setting of DNA, there are likely many reasons beyond storage and circuitry that four bases would make more sense. This is the kind of closed-mindedness Crick didn't want to assume in his presumptions about God, I think.

Re:Obfuscation (1)

louzer (1006689) | more than 6 years ago | (#23653013)

Four bases make sense because:
1. DNA is read in triplets.
2. There are 20 amino acids in use by nature to build nano-machinery that sustains life.
3. 4^3 > 20. So it leaves enough space for punctuation in the genetic code, like start and stop codons.
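
The arithmetic above checks out; a quick sketch (the stop codons listed are the standard-genetic-code ones):

```python
from itertools import product

bases = "ACGU"
codons = ["".join(c) for c in product(bases, repeat=3)]
print(len(codons))   # 4**3 = 64 possible triplets

stop_codons = {"UAA", "UAG", "UGA"}   # standard genetic code
sense_codons = [c for c in codons if c not in stop_codons]
# 61 sense codons encode 20 amino acids, so most amino acids have
# several synonymous codons (the code is degenerate), with room for
# the start/stop "punctuation" the parent mentions.
print(len(sense_codons), ">", 20)
```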

Re:Obfuscation (3, Insightful)

Jesus_666 (702802) | more than 6 years ago | (#23652345)

A hack can be beautiful or ugly. A hack that uses a property of the programming language in a clever way to achieve a speedup is beautiful, but a hack that relies on the processor violating its own spec in a certain way is ugly, especially if the programmer who wrote it didn't bother documenting what he did or why. Not only is the latter incredibly fragile, you also can't just take the spec and understand it - you have to know what's not in the spec to be able to fully grok it.
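
As a hypothetical example of the "beautiful" kind: the well-known n & (n - 1) trick relies only on documented binary arithmetic, yet gives a branch-free power-of-two test:

```python
def is_power_of_two(n):
    """n & (n - 1) clears the lowest set bit, so the result is 0
    exactly when n has a single set bit, i.e. n is a power of two."""
    return n > 0 and (n & (n - 1)) == 0

print([n for n in range(1, 20) if is_power_of_two(n)])  # [1, 2, 4, 8, 16]
```

It is clever, but it stays inside the spec and is trivially documentable - the opposite of the fragile, undocumented kind.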

Also note how "quickly hacked together" usually implies that conceptual and code-level cleanliness were forgone in favor of development-time savings. A dynamic webpage that consists of PHP and HTML happily mixed together with constructs like <?php if($d = $$q2) { ?> <30-lines-of-html />* is as unreadable as the worst spaghetti code, but a web dev who needs to deliver a prototype within hours might actually do that. He quickly hacks things together, (ab)using PHP's preprocessor nature in order to deliver quickly, even though not even he will be able to maintain the document afterwards.

The human brain consists of patch after randomly applied patch. I'd think that it would be the equivalent of a byzantine system with twenty different coding styles, three different OOP implementations and a dependency tree that includes four versions of the same compiler because parts of the code require a specific version. The code works, it's reasonably fast but it's miles from being clean and readable.

* Yes, it is supposed to have an assignment and a variable variable in the if block.

Re:Obfuscation (1)

Empiric (675968) | more than 6 years ago | (#23652543)

Perhaps you didn't get your requirements document in on time.

Or lacked sufficient organizational authority to get it placed as top priority...

On the upside, I think we've definitively resolved the question of whether, once someone brilliant produces Hard AI, there'll be a bunch of also-rans complaining, "Well, yeah, but his code sucks..."

Re:Obfuscation (1)

somersault (912633) | more than 6 years ago | (#23653031)

I personally see hackers as codifying something even more beautiful, logical, and well-articulated than the mundane corporate programmer, delivering a much higher level of intelligence and complexity than most could understand
As someone who used to hack away at stuff rather than design it first (I still usually do, though for anything fairly complex I sometimes plan it out on paper a bit), and who once had his code called "twisted-hacked" by another coder (Brazilian or German, I can't remember; it was 8 years ago), I can't say I agree that hacked-together code is particularly more beautiful, logical or well-articulated. I'm still not sure whether to be proud that my twisted-hacked code works, or ashamed that I'm a bit of a self-taught cowboy when it comes to coding, despite having done CompSci at university since then, where I should have picked up some nice boring safe habits - I do exception checks and have always run validation on my inputs, at least. I get way more kicks out of working out how to do something with fewer lines and less logic flow, whereas hacked-together stuff will usually have a lot of unnecessary detritus. Though I suppose if you then shorthand your code so much that it gets obfuscated and confusing again - especially with no comments to explain what is going on, or a reader who has no clue about language-specific operators like $_ in Perl - it can get back to seeming 'hacked together'.

Re:Obfuscation (1)

jeiler (1106393) | more than 6 years ago | (#23651413)

The comparison is not to intentionally obfuscated code, but to organized and documented code. Haphazard code is quite a bit more difficult to reverse engineer than clearly written code.

Re:Obfuscation (1)

cnettel (836611) | more than 6 years ago | (#23651525)

Intentional obfuscation at any great scale tends to show clear patterns. Having something in the range of a couple of GB of data, knowing that it is basically just random junk that happened to pass most of the regression tests for each new version, and then trying to find out what it all does -- that's what you are facing with the human genome.

Re:Obfuscation (5, Insightful)

mrbluze (1034940) | more than 6 years ago | (#23651633)

How is haphazardly hacked together code any harder to reverse engineer than intentionally obfuscated code? We know the latter isn't a problem for a determined hacker....

Nonetheless there is something to what Kurzweil says, futurist (or in my language 'bullshit-artist') though he is.

The brain is probably impossible to 'reverse-engineer', not because of its evolution but because to come up with a brain you need 9 months of in-utero development followed by years of environmental formation, nurturing and so forth, by which time the brain is so complex and fragile that analyzing it adequately becomes practically impossible.

I mean, take the electroencephalogram (EEG). It gives us so little information it's laughable. Electrical signals from the cortex mask those of deeper structures, and still we just end up with an average of countless signals. Every other form of brain monitoring is also fuzzy. Invasive monitoring of brain function is problematic because it damages the brain, and the brain adapts (probably) in different ways each time. Sure, we can probably get some of the information we are after, but the entire brain is, I would suggest, too big a task.

We can, however, use the same principles that exist in the brain to mimic its functionality. But that is ultimately a new invention and not a replica of a brain, even if it does manage to demonstrate consciousness.


poverty of expectations (4, Funny)

Bastard of Subhumani (827601) | more than 6 years ago | (#23651191)

we'll be adding computers to our brains and building machines as smart as ourselves.
Sigh, talk about picking the low-hanging fruit...

Re:poverty of expectations (2, Informative)

krog (25663) | more than 6 years ago | (#23651227)

A singularly worthless comment.

Re:poverty of expectations (1)

MagdJTK (1275470) | more than 6 years ago | (#23651285)

Fair point --- I'd be incredibly disappointed if we could only make a machine as clever as you.

Re:poverty of expectations (1)

peragrin (659227) | more than 6 years ago | (#23651721)

It is 2008. By 2020 the majority of computers will probably still be running some form of Windows, and thus be dumber than the sum IQ of all NASCAR fans.
I hold no hope for AI for a long time.

Re:poverty of expectations (1)

OldeTimeGeek (725417) | more than 6 years ago | (#23652225)

I know that this is horribly off topic, but what's up with the "sum IQ of all NASCAR fans"? Is it because it started in the Southern United States, or is it that racing fans in general are perceived to be a bit slow in the intelligence department?

Re:poverty of expectations (1)

peragrin (659227) | more than 6 years ago | (#23652843)

While some of the engineers and designers for the cars are brilliant in a hacker kind of way, the average NASCAR fan in particular is not overly bright. These are the kind of people who lose hands by holding onto firecrackers as they go off; the kind of people the Darwin Awards were designed to showcase.

While I consider anyone who watches a car drive around an oval for 6 hours to have questionable hobbies, NASCAR fans in particular aren't so bright.

It's one thing if the race course itself is a variable, or you're going for 10 seconds of raw speed, but tell me, what is so interesting about driving for 5 hours? I do it several times a year; it's boring as all hell and I have to worry about cops, accidents, and idiot drivers.

Re:poverty of expectations (1)

somersault (912633) | more than 6 years ago | (#23653109)

Probably just that among actual racing fans, racing around in a big figure 0 where you only turn left is quite dull. Sure they're going pretty fast, and there are some interesting tactical concepts, but NASCAR fans are probably just in it for the crashes anyway :p

Re:poverty of expectations (1)

MarkvW (1037596) | more than 6 years ago | (#23652729)

Yeah, think about seeding planets with custom-coded biomechanical stuff! Or . . . being able to build and control a structure bigger than universes so that we can perceive things on an infinitely larger scale! Ahhhh . . . if we only can avoid extinction. . .

Adding computers to our brains? (3, Insightful)

wcrowe (94389) | more than 6 years ago | (#23651211)

Taking into consideration computer security issues, I think I'll pass.

Re:Adding computers to our brains? (2, Funny)

mrbluze (1034940) | more than 6 years ago | (#23651299)

This serene confidence is not shared by neuroscientists
Or anyone else who isn't on sedatives.

Re:Adding computers to our brains? (3, Funny)

Gazzonyx (982402) | more than 6 years ago | (#23651915)

Taking into consideration computer security issues, I think I'll pass.
Yeah... and you don't even want to know about 'Patch Tuesday'...

Re:Adding computers to our brains? (1)

Bodrius (191265) | more than 6 years ago | (#23652669)

You're missing the potential:
- If we embed computers into our brains, we'll open the door to security hacks.
- If that is possible, there will be master brain hackers.
- If there is a master brain hacker, there will be Naked Female Robots jumping from buildings []

For the average Slashdot Id, the trade off is probably worth it.

wetware security (4, Insightful)

gobbo (567674) | more than 6 years ago | (#23653135)

Taking into consideration computer security issues, I think I'll pass.
Why? There are already trojan horses for the brain, like religion; worms, like jingles and product design; and zero-day exploits like money, not to mention rootkits like crack. I'd say the average brain is worse off than an unpatched WinXP install hooked up to broadband with no firewall.

Without the ability to install properly open code, I suggest a good security patch, like zen, or some other semi-mystical skepticism.

Wouldn't that *help*? (3, Interesting)

DriedClexler (814907) | more than 6 years ago | (#23651283)

it might prove too difficult to reverse-engineer the brain's circuitry because it evolved so haphazardly. "My colleague Francis Crick used to say that God is a hacker, not an engineer," Dr. Ramachandran said.
I'd always thought that this constraint would *help* because you know, in advance, that the solution to the problem is constrained by what we know about how evolution works. You have to start with the simplest brain that could have genetic fitness enhancement, and then work up from there, making sure each step performs a useful cognitive function.

Furthermore, looking at the broader picture, I was reading an artificial intelligence textbook that was available online (sorry, can't recall it offhand) which said that current AI researchers have shifted their focus from "how do humans think?" to "how would an optimally intelligent being think?" and therefore "what is the optimal rational inference you can make, given data?" In that paradigm, the brain is only important insofar as it tracks the optimal inference algorithm, and even if we want to understand the brain, that gives us another clue about it: to improve fitness, it must move closer to what the optimal algorithm would get from that environment.
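
A minimal sketch of that "optimal rational inference given data" idea, assuming a Bayesian reading of it (the toy coin example and all names are mine, not from the textbook):

```python
# Grid approximation of Bayesian inference for a coin's bias.
thetas = [i / 100 for i in range(101)]      # candidate biases 0.00 .. 1.00
prior = [1 / len(thetas)] * len(thetas)     # uniform prior

def update(prior, heads, tails):
    # Bayes' rule: posterior is proportional to prior * likelihood.
    post = [p * (t ** heads) * ((1 - t) ** tails)
            for p, t in zip(prior, thetas)]
    z = sum(post)                           # normalizing constant
    return [p / z for p in post]

posterior = update(prior, heads=8, tails=2)
mean = sum(t * p for t, p in zip(thetas, posterior))
print(round(mean, 2))   # close to the exact Beta(9, 3) mean of 0.75
```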

Re:Wouldn't that *help*? (4, Interesting)

cnettel (836611) | more than 6 years ago | (#23651457)

Any work with genetic algorithms shows that you can very easily get a solution containing a lot of crud. Pruning too heavily on fitness in each generation will give you far from optimal results. The point is that each incremental change in our evolutionary history did NOT improve fitness. It just didn't hurt it enough, and might have combined with another change to increase it later on.

It all boils down to the result that an intelligent organism capable of building a social, technological civilization could have been quite different from us. Even if it looked like us, the details as well as the overall layout of the brain could presumably have been quite different while still giving an equivalent fitness. A simulation reproducing everything is not feasible, so how do we find out which elements are really relevant?
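
The crud effect is easy to reproduce with a toy genetic algorithm (the parameters and names here are mine): only part of the genome affects fitness, selection rejects harmful changes but is blind to neutral ones, and junk accumulates without ever being selected for.

```python
import random
random.seed(0)

GENOME, FUNCTIONAL = 40, 10        # only the first 10 bits affect fitness

def fitness(g):
    return sum(g[:FUNCTIONAL])

# Start from a population of "perfect" genomes with a clean junk region.
pop = [[1] * FUNCTIONAL + [0] * (GENOME - FUNCTIONAL) for _ in range(20)]

for _ in range(500):
    parent = random.choice(pop)
    child = parent[:]
    child[random.randrange(GENOME)] ^= 1    # one random bit flip
    if fitness(child) >= fitness(parent):   # reject only harmful changes
        pop[random.randrange(len(pop))] = child

junk = sum(g[FUNCTIONAL:].count(1) for g in pop)
print(all(fitness(g) == FUNCTIONAL for g in pop), junk)
```

Fitness stays at the maximum the whole time, yet the neutral region fills with random junk - which is exactly why the surviving genome is not a clean record of what matters.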

Re:Wouldn't that *help*? (1)

DriedClexler (814907) | more than 6 years ago | (#23652093)

The point is that each incremental change in our evolutionary history did NOT improve fitness. It just didn't hurt it enough
Oh, maybe Richard Dawkins could stop perpetuating that misconception then.

Re:Wouldn't that *help*? (1)

Xiaran (836924) | more than 6 years ago | (#23652363)

Where has Richard Dawkins claimed otherwise?

Re:Wouldn't that *help*? (1)

maxume (22995) | more than 6 years ago | (#23651783)

Non-advantageous mutations can persist and spread in a population. Two mutations that are not advantageous could combine to create an advantage dozens of generations after they first arose. A simple "fitness" model is not adequate to model evolution.

Re:Wouldn't that *help*? (1)

BrotherBeal (1100283) | more than 6 years ago | (#23652089)

Not to get all postmodern, but what do you mean by "optimal" and "intelligent" in the latter part of your post? Those terms are too important to throw around without clear definitions. We're talking about intelligence, an incredibly ill-specified topic, and I suspect that this "optimal intelligence" approach is, at best, a misrepresentation of current AI research (probably on the authors' part). Optimal is in the eye of the beholder, and I see no reason to suspect that there is anything out there "better" than the "best" human mind. However, the firmware in my printer may very well disagree, and mock me in ways far too subtle for my feeble, meat-based mind to comprehend. Without a clear definition of optimal - something you could express algorithmically and plug into a fitness function of some sort - it seems it would be very difficult to show real results in AI. I agree that an evolutionary approach is probably the best way to build up true intelligence, since there's just too much information to synthesize in the short term. Think about the AI in A Mind Forever Voyaging, for example - maybe simulations like that are the best way to build an AI we can relate to as something other than a tool.

Right on track... (1)

morgan_greywolf (835522) | more than 6 years ago | (#23651307)

Right on track, maybe a little slow....The Terminator was sent back from the year 2029...

Kurzweil Talk in Cambridge, MA (4, Interesting)

yumyum (168683) | more than 6 years ago | (#23651323)

I attended a talk by Kurzweil a couple of weeks ago at the Broad Institute in Cambridge, MA. Absolutely fascinating what he foresees in the near future (~20 years). I believe it is 2028 when he believes a machine will pass the Turing Test. Even sooner, he predicts that we will have nanobots roaming around inside our bodies, fixing things and improving on our inherent deficiencies. Very cool. He also addressed a similar complaint about being able to reverse-engineer the brain, but it was of the nature that we may not be smart enough to do so. I (and he, of course) doubt that that is the case. Kurzweil thinks of the brain as a massively parallel system, one that has very low signaling rate (neuron firing) compared to a CPU which it overcomes by the massive number of interconnections. It will definitely be a big problem to solve, but he is confident that it will be solved.

Re:Kurzweil Talk in Cambridge, MA (2, Interesting)

VeNoM0619 (1058216) | more than 6 years ago | (#23652167)

Kurzweil thinks of the brain as a massively parallel system, one that has very low signaling rate (neuron firing) compared to a CPU which it overcomes by the massive number of interconnections.
Mod me offtopic or whatever, I don't care, but I've been thinking about this for a few weeks. If our brains are so well interconnected, how is it that we instantly die if a bullet merely passes through it and destroys a few of those connections? We can shoot bullets through most parts of a computer and more than likely only a piece of it will be damaged (I have never done this, but in theory we could just reroute the processing through the non-damaged parts, correct?).

How is it that we can have brain damage and "destroy" some parts of the brain, but the minute we pass something through it physically, the entire thing ceases to function instantly, instead of certain areas slowly fading away? I'm sure there is a simple answer and this may well be a stupid question, but it has been making me curious.

Re:Kurzweil Talk in Cambridge, MA (3, Interesting)

yumyum (168683) | more than 6 years ago | (#23652907)

Whether one dies from an injury depends on the amount and the kind of damage. A famous example of non-lethal brain injury is Phineas Gage []. Also look up lobotomy.

Re:Kurzweil Talk in Cambridge, MA (3, Informative)

ColdWetDog (752185) | more than 6 years ago | (#23652917)

If our brains are so well interconnected, how is it that we instantly die if a bullet merely passes through it and destroys a few of those connections?

1) You don't die "instantly" unless the damage is very extensive. In particular, autonomous functioning of the brainstem can persist after massive "upper" brain damage.

2) You can damage large parts of the brain and have the function rerouted - sometimes. There are large, apparently "silent" parts of the brain that you can remove without (apparent) problems. Read some of Oliver Sacks's stuff for some interesting insights into how the brain works on a macroscopic basis.

Have you accepted Google as your personal search engine?

Re:Kurzweil Talk in Cambridge, MA (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23652547)

Ten years ago he was saying 2010.

Kurzweil's timelines have no foundation in reality, and never have.

mid-age life crisis (4, Informative)

peter303 (12292) | more than 6 years ago | (#23651335)

Kurzweil's predictions will come to pass, but not on the time-scale he envisions - probably centuries. He has been hoping for personal immortality through technology and takes over 200 anti-aging pills a day.

Re:mid-age life crisis (4, Interesting)

yumyum (168683) | more than 6 years ago | (#23651519)

He has been fairly accurate about his past predictions, so I don't think centuries will be what it takes. Basically, he sees medical/biomedical advances starting to heat up and exponentially grow just like the electronics industry has.

IMO, a very controversial prediction of his is that around 15 years from now, we will start to increase our life expectancy by one year, every year, a rate he also sees taking on exponential growth...

Re:mid-age life crisis (4, Insightful)

SatanicPuppy (611928) | more than 6 years ago | (#23652515)

Wishful thinking has made fools of better thinkers.

Biomedical advances will never increase at the same rate as computer technology, simply because experimenting with silicon (or whatever) doesn't have any health and safety issues tied to it, much less any potential moral backlash (barring real AI, which I think is farther away as well).

It takes 15 years, sometimes, for a useful drug with no proven side effects to make it to market. Even if we made theoretical breakthroughs today, it'd be a decade or more before they could be put into practice on a meaningful scale, assuming that they were magically perfect and without flaws/side-effects/whatever.

It's very dangerous to look at our advances in computer technology and try to apply those curves to other disciplines. It's equally ridiculous to assume that the rate of increase will remain the same with no compelling evidence to support the assertion. In terms of computers and biotech, we're still taking baby steps, and while they seem like a big deal, we still have a long way to go.

Re:mid-age life crisis (1)

yumyum (168683) | more than 6 years ago | (#23653149)

I look at it as the accretion of information, something that Kurzweil keeps coming back to. It is our ability to discover, and discoveries tend to snowball in a cascading effect. There is plenty of research being done at the chemical level that does not require your 15-year testing phase. In the talk I attended, he pointed out that most drug research has been scattershot: try this, try that. Recently, with improved technology, including computing power, that approach is starting to change into a more nuanced one as we continue to increase our knowledge of the chemical processes at work within our bodies.

Re:mid-age life crisis (2, Informative)

Sabathius (566108) | more than 6 years ago | (#23651609)

Centuries? This assumes a linear progression. We are talking about the singularity - which, in case you haven't noticed, is happening in an exponential manner.

I suggest you take a look at his actual research before you say such things. Here's a link to a presentation he recently did at TED: []

Re:mid-age life crisis (3, Insightful)

elrous0 (869638) | more than 6 years ago | (#23652659)

Oh, bullshit. The "singularity" isn't happening in anything close to an exponential manner. Technological advances in many fields have essentially stagnated in recent decades (transportation, space travel, power generation, etc.). In other fields we are progressing, but hardly at an "exponential" rate (medicine, biology, etc.). Communications is the only field that has progressed at anything close to an exponential rate over the last few decades.

We're not even CLOSE to anything resembling some magical singularity.

Re:mid-age life crisis (4, Interesting)

Jeff DeMaagd (2015) | more than 6 years ago | (#23651669)

He has been hoping for personal immortality through technology and takes over 200 anti-aging pills a day.

Which is pretty funny, given that dietary supplements haven't been found to be very useful on the whole.

He really doesn't seem to look any younger or stay the same age either. He does look a bit better than smokers of his age, but not by a whole lot, in my opinion.

Re:mid-age life crisis (1)

RicktheBrick (588466) | more than 6 years ago | (#23652949)

"...the Singularity, that revolutionary transition when humans and/or machines start evolving into immortal beings with ever-improving software." How do we know that it has not already occurred? Immortality would be boring; there is nothing that would keep someone entertained that long. Therefore we die and are reincarnated to keep life interesting. If we were sure about this, it would make things boring too, so we will always have a fear of death to keep up our interest in our present life. It is just like gambling: it is not very interesting unless one is playing for high stakes.

Re:mid-age life crisis (1)

vertinox (846076) | more than 6 years ago | (#23653039)

He really doesn't seem to look any younger or stay the same age either. He does look a bit better than smokers of his age, but not by a whole lot, in my opinion.

According to him his Type II Diabetes appears to have gone away for the time being... At least the symptoms part of it.

Silliness (4, Insightful)

Junior J. Junior III (192702) | more than 6 years ago | (#23651369)

You can't reverse-hack? Who says?

You can reverse engineer anything. Whether it has a well-thought out design or not, its functions can be analyzed and documented and re-implemented and/or tweaked.

If anything, the timetable may be in question, but not whether it can be done. I have no doubt it can be done; it's just a matter of how long it'll take given the right resources, the right talent, the right technology, and the right knowledge.

Granted, I'm just an idiot posting on slashdot, and not an inventor or neuroscientist, but I still think I'm right on this.

Re:Silliness (1)

mrbluze (1034940) | more than 6 years ago | (#23651417)

You can't reverse-hack? Who says?
Kurzweil says. Oh, hands-on-heads too, by the way.

Re:Silliness (1)

SatanicPuppy (611928) | more than 6 years ago | (#23652621)

His problem is that he's confusing form and function.

I've met many a hack I couldn't figure out: a massive tangle of unreplicable crap. But I can still make something new that is functionally identical, and that is what would need to be accomplished to replicate the brain.

I see wishful thinking on two fronts. One, he wants to be immortal and touch the "weak godhood" of the Singularity. But two, he still wants to be "unique" in having this wonderful un-hackable brain. I think those two ideas contradict each other.

Re:Silliness (0)

Anonymous Coward | more than 6 years ago | (#23652921)

yes, given enough time and resources you can do practically anything short of violating physical laws. but his point is that reverse-engineering the human brain would be a royal PITA.

Re:Silliness (1)

eggstasy (458692) | more than 6 years ago | (#23653001)

It's basic high-school math that not every function has an inverse function.
We can understand "engineering" in this context, as opposed to "hacking", as the careful planning of a system so that we use "well-behaved" functions, thereby maximizing our future ability to refactor said system, and minimizing the losses of potentially useful information throughout the various operations...
For instance, a properly engineered graphics application will let you zoom in and out of a picture without permanently transforming the source data.
Some older, or "hackier" applications would use the now-shrunk data for the enlargement operation, resulting in a morass of useless pixels.
Nature is lazy, it follows the path of least resistance, so its output tends to be like this, building upon previous work rather than starting from scratch.
There's a whole book out there about "unintelligent design", detailing why we cannot be the creation of a perfect divine entity since we're so poorly engineered. Wikipedia has some info on it as well.
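The lossy-zoom point can be made concrete with a toy sketch (my own, not from any real graphics application): a shrink that discards samples has no inverse, so an app that overwrites its source with the shrunk data can never get the detail back, while one that keeps the source can always re-derive any zoom level.

```python
# Toy model of the zoom example above (hypothetical, not a real app's API).
# A 1-D list of "pixels" stands in for image data.

def shrink(pixels, factor):
    """Keep every factor-th sample; the rest is discarded for good."""
    return pixels[::factor]

def enlarge(pixels, factor):
    """Repeat each sample; cannot reinvent what shrink() threw away."""
    return [p for p in pixels for _ in range(factor)]

source = [10, 20, 30, 40, 50, 60, 70, 80]

round_trip = enlarge(shrink(source, 2), 2)
print(round_trip)            # [10, 10, 30, 30, 50, 50, 70, 70]
print(round_trip == source)  # False: shrink() has no inverse
```

The "engineered" application keeps `source` around and re-runs every transform from it; the "hacky" one overwrites `source` with the shrunk list and is stuck with the morass of repeated pixels.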

Nah (5, Insightful)

Hoplite3 (671379) | more than 6 years ago | (#23651385)

AI is our generation's flying car. It's what we see in the future, not what will be. Instead of the flying car, we got the internet. It isn't very picturesque (especially over at, but it is cool.

The future will be like that: something people aren't really predicting. Something neat, but not flashy.

Alternatively, the future will be the "inverse singularity" -- you know, instead of the Vinge godlike AI future singularity of knowledge, there could be a singular event that wipes out civilization. We certainly have the tools to do that today.

Re:Nah (3, Insightful)

vertinox (846076) | more than 6 years ago | (#23651985)

AI is our generation's flying car. It's what we see in the future, not what will be.

I don't know. I think AI is economically easier and more desirable to achieve than a flying car.

The internet was predicted around the same time and no one really paid attention, but because it was economically viable and actually desirable (no drunk drivers or grandmas driving 300 mph into buildings like a missile), it came about.

Secondly, AI in any form is desirable: from something as simple as filtering data, to more advanced tasks like picking stocks, to the final goal of actually being a company's CEO, which is what many companies are investing in right now.

Of course no one is building a super-intelligent CEO-in-a-box as of now, but many companies are developing programs that are borderline AI for choosing their best investments, especially the larger financial firms that manage mutual funds.

Now they don't call them AI at this point, but they are approaching it, and I would wager that when it becomes viable, people will be building MBAs-in-a-box to make strategic decisions.

Re:Nah (1)

Rogerborg (306625) | more than 6 years ago | (#23652671)

The current stock pickers are Artificially Dumb, since the main driver (avoiding big losses) is predicting just how irrational all the other stock pickers - meat or silicon - are going to be. That's not really a direction that we want to explore without a Common Sense Override, since sooner or later we'll hit on the perfect feedback loop which will sell the entire global economy for $3.50 to an Elbonian day trader.

Why must you piss on my parade, sir? (1)

elrous0 (869638) | more than 6 years ago | (#23652525)

I hate naysayers such as you. Now if you'll excuse me, my robotic butler is informing me that my space elevator car to moonbase 23 has arrived.

Re:Nah (1)

bornyesterday (888994) | more than 6 years ago | (#23652563)

How is AI, a concept that dates back to the early 1900s, "our generation's"? In both cases we have early experimental versions of AI and flying cars. Neither is anything close to what science fiction writers and fans have envisioned for the last 100 years.

Re:Nah (1)

Yvanhoe (564877) | more than 6 years ago | (#23652847)

We have flying cars. They are just oddities that are not economically viable. Hell, we even have jetpacks and privately funded space travel.
The difference from most other predictions is that AI doesn't have to become economically viable. You only need one, somewhere, to cause a technological disruption.

Deja vu (1)

Hatta (162192) | more than 6 years ago | (#23651411)

I said pretty much the same thing Ramachandran said in a Kurzweil-related thread yesterday. Funny how that works.

Where's my f'ing flying car dip$%^* (2, Funny)

coren2000 (788204) | more than 6 years ago | (#23651429)

Where's my motherf'ing flying car?

Re:Where's my f'ing flying car dip$%^* (2, Informative)

Idiomatick (976696) | more than 6 years ago | (#23651953)

There have been flying cars... I hate it when people use this example. We have had the technology to make flying cars for a long time. The reason you don't see them is that they are expensive. If you think a Hummer gets bad mileage, a flying car gets much worse: since it doesn't use wings for lift (the type envisioned by most people), you need to expend many times more fuel than a plane, which uses too much already. Then of course you need to expend lots of effort making it lightweight, and it's doomed to being a one-seater unless you make it bigger than a regular car. If a product is doomed to lose money with total certainty, why would any company make it? R&D alone costs millions, so you can't make it just for novelty's sake. It is NOT a technological problem; it's economic.
AI is completely different. The cost is in computing power, not dollars, and computing power is being driven down by forces around the world funneling billions into computers. AI is also used around the world, since it can be developed incrementally rather than leaping straight to Turing-ready; it's used in many decision-making computer systems, which again have billions of dollars funneled into them. So we are constantly improving AI already. As for chips in our brains, there again are supporting technologies to work this out: there was an article about mind-reading robots a few days ago, study of the brain is big, cellphones drive miniaturization, and mind-controlled limbs are coming out. Sure, it is more difficult, but the forces of capitalism are on our side, and capitalism tends to get its way.
  That said, I think his timescale is way off. We may have computers as fast as the brain by 2029, but they'll need to become more commonplace before we could spend that power playing with and testing AI, so I'd say the late 2030s. Chips in the brain have one obvious setback: like genetic modification, the government will surely stand in the way of the science. If people aren't comfortable with the idea, it'll get bogged down in testing phases, and after that it won't get enough funding, because there probably isn't a big market outside /. for putting chips in your brain. So I'd say we are a while off from seeing that. Immortality might be easier than a chip in the head, ironically, because it makes fewer people feel queasy.

Re:Where's my f'ing flying car dip$%^* (1)

coren2000 (788204) | more than 6 years ago | (#23652323)

There have been flying cars .... I hate people using this.
I hate it when people take things too seriously... but anyway, you'll notice I wasn't asking about flying cars in general; I was asking where *MY* flying car is.

Technology promised us a flying car in every garage and hasn't yet delivered... I don't really care that it's an economic issue rather than a tech issue (actually it is a tech issue: the tech isn't good enough to be cheap, but why quibble).

Am I really to believe Kurzweil (whose books get really boring as he starts making bad predictions) that I'll have brain implants before a flying car? I think not.

Will people move data like in Johnny Mnemonic in their heads? (1)

Joe The Dragon (967727) | more than 6 years ago | (#23651431)

Will people move data like in Johnny Mnemonic in their heads?

Re:Will people move data like in Johnny Mnemonic i (2, Funny)

Yetihehe (971185) | more than 6 years ago | (#23651731)

With current technology I can very easily move some 20-30 GB in my stomach (2 GB microSD cards in a small pill-like protective case). But the download is then a little shitty...
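For fun, the "bandwidth" of that scheme is easy to estimate. A quick sketch, taking the 30 GB figure from the post; the 24-hour transit time is my assumed number, not the poster's:

```python
# Back-of-envelope sneakernet throughput for swallowed microSD cards.
capacity_gb = 30      # total capacity, from the comment above
transit_hours = 24    # assumed gut transit time (my figure)

bits = capacity_gb * 8 * 10**9     # decimal gigabytes to bits
seconds = transit_hours * 3600
throughput_mbps = bits / seconds / 10**6

print(f"{throughput_mbps:.2f} Mbit/s")  # about 2.78 Mbit/s sustained
```

Respectable sustained throughput for 2008; the latency, of course, is what makes the download shitty.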

Re:Will people move data like in Johnny Mnemonic i (1)

socialhack (890471) | more than 6 years ago | (#23652533)

You know that form factor is up to 8 GB now.

Kurzweil? (1)

phtpht (1276828) | more than 6 years ago | (#23651449)

Was Mulder also there?

I like my brain as it is... (1)

multi-flavor-geek (586005) | more than 6 years ago | (#23651451)

All of the bi-polar, latent-hallucination-filled, occasionally freezing and rebooting, hacked, whacked, twacked mess it is. I wouldn't change it for the world. It took me thirty-odd years to figure out what to do with it, but suddenly I found that out of this mess I can get creativity! Now I am doing marketing for a nightclub (with no experience, no less) and have started my own magazine! Not going to post a link, as I don't want GoDaddy taking a shit on me quite yet... that, and they may notice the string of noscript tags that eliminate all of their ads. But back to my occasionally wandering brain: I am going to leave it alone. I like it here, and there is one thing you will never be able to program into a computer: experience. You can make a computer emulate empathy, but you cannot make it learn past a series of yes & no questions into the wonderful world of angels dancing on the heads of pins.


Great, something else to patch (1)

PainMeds (1301879) | more than 6 years ago | (#23651491)

I have a hard enough time keeping my PS3, Macbook Pro, iPhone, and desktop machine at the latest patch level. Thanks, but no thanks.

Adding computers to my brain? (1)

schnikies79 (788746) | more than 6 years ago | (#23651495)

No thanks!

One of the great things about current technology is the ability to unplug, and sometimes I want to do just that.

Sounds a little like.... (1)

SGDarkKnight (253157) | more than 6 years ago | (#23651521)

A claim that was made saying we would no longer need "paper" by the year 2000. Even with this guy's previous track record on the sudden internet boom, the computer chess champion, etc... c'mon now, adding computers to the brain? Don't get me wrong, I think it will eventually happen, just not by 2020. At one point the article says "by the 2020s" and in another part it says he made a $10k bet that it would happen by 2029. I know that's still the 2020s, but it seems a little too early for those types of advancements. It just seems to me that for any significant progress to be made in neurotechnology specifically, in the area of "adding computers to the brain", especially with all the hoops you have to jump through before you can even get to the human testing phase, 2020 is way too soon. 2029 might be a better bet IMO -- oh wait, Dr. Kurzweil already made that bet. I think he's going to be off by at least another 10 to 20 years before we have any Johnny Mnemonics running around (and yes, I know the movie takes place in the year 2021).

Ugh (0)

Anonymous Coward | more than 6 years ago | (#23651571)

Before we get to Ray Kurzweil's plan for upgrading the "suboptimal software" in your brain, let me pass on some of the cheery news he brought to the World Science Festival last week in New York.

Do you have trouble sticking to a diet? Have patience. Within 10 years, Dr. Kurzweil explained, there will be a drug that lets you eat whatever you want without gaining weight.
What's that then, eating a tonne of snow a day? Cyanide, maybe? Rohypnol, and they put your fat ass on a treadmill? On the plus side, I now don't need to waste any more time reading the rest of the article (yes, I'm new here). "Futurist"? Hah!

Adaptation (4, Insightful)

jonastullus (530101) | more than 6 years ago | (#23651773)

IANANS (I Am Not A Neuroscientist), but as with other approaches to interfacing the human brain with peripherals, it seems to work really well to let the brain do the hard interfacing work.

So, as haphazardly as the brain structures, memory storage, sensory input, etc. might have evolved, it might still be flexible enough to figure out a sufficiently simple interface with anything you might connect to it. Given a smart training of finding the newly connected "hardware", it might be possible to interface with some really interesting brain extensions.

The complexity and the abstractness of the extension might be limited by the very pragmatic learning approach of the brain, making it more and more difficult to learn the interface if the learning progress is too abstract/far away for the brain to "stay interested". Though maybe with sufficiently long or "intense" sensory deprivation that could be extended a bit.

My problem with the argument of the "haphazard" structure of the brain is that it could have been used to deny the possibility of artificial limbs or artificial eyes, which both seem to work pretty well. Sure, these make use of already pre-existing interfaces in the brain, but as far as I know (not very far) the brain is incredibly malleable at least during the first 3 years of childhood.

So, as ethically questionable as that may sound to us, it might make sense to implant such extensions in newborn babies and let them interface to them in the same way they learn to use their eyes, coordinate their limbs and acquire language.

Good times ;)

Re:Adaptation (1)

mapsjanhere (1130359) | more than 6 years ago | (#23652665)

My worry would be that our brain interfaces too well with the new gadgets.
"How did he die?"
"His spatial-awareness co-processor ran out of battery, and he ran into a wall."
Adding extra-human capabilities without turning a human into a human-machine hybrid, each depending on the other for survival, sounds like the true challenge. And that's not even looking into the ethical challenges of preventing Borg-like control via the add-ons. "Don't fret about the elections, Mr. President. The new Citizen 2.1 firmware rev has your reelection secured by a 58.2% margin."

My biggest problem with Kurzweil (4, Insightful)

jeiler (1106393) | more than 6 years ago | (#23651785)

He not only makes predictions about technology (which is a feasible endeavor, though fraught with difficulties), but also about the universe that the technology will interact with. Predicting that brain scan technology will improve is (pardon the pun) a no-brainer. Predicting that we will map out hundreds of specialized areas within the brain is a prediction that is completely off the wall, because we don't know enough about brain function to know if all areas are specialized.

Re:My biggest problem with Kurzweil (0, Troll)

bornyesterday (888994) | more than 6 years ago | (#23652611)

The fun thing about predicting the future is that people only pay attention when you get it right.

Balls of crystal (3, Insightful)

sm62704 (957197) | more than 6 years ago | (#23651801)

by the 2020s we'll be adding computers to our brains and building machines as smart as ourselves

As a cyborg myself, I don't see any sane person adding a computer to his brain for non-medical uses.

I was going to say that sane people don't undergo surgery for trivial reasons, then I thought of liposuction and botox for rich morons, and LASIK for baseball players without myopia. I don't see any ethical surgeons doing something as dangerous as brain surgery for anything but the most profound medical reasons, like blindness or deafness.

As to the "as smart as ourselves", the word "smart" has so many meanings that you could say they already are and have been since at least the 1940s: "1. to be a source of sharp, local, and usually superficial pain, as a wound." Drop ENIAC on your foot and see how it smarts. "7. quick or prompt in action, as persons." By that definition a pocket calculator is smarter than a human.

Kurzweil has been saying this since the 1970s, only then it was "by the year 2000".

We don't even know what consciousness is. How can you build a machine that can produce something you don't understand?

Re:Balls of crystal (1)

yumyum (168683) | more than 6 years ago | (#23652043)

Kurzweil has been saying this since the 1970s, only then it was "by the year 2000".
Do you have a reference for this?

Re:Balls of crystal (1)

sm62704 (957197) | more than 6 years ago | (#23652375)

No, and in fact I could be wrong. I don't even remember where I read it; the seventies were a long time ago.

The ultimate in interface design (1, Interesting)

thesandtiger (819476) | more than 6 years ago | (#23651861)

All concerns of security aside, I do think that sophisticated direct brain connections with computers will be coming along pretty soon - think along the lines of what they're doing now with robotic limbs and such. It absolutely won't surprise me if within a few more years (5-10?) that kind of stuff, an artificial limb being controlled by the brain exactly as a natural limb is, is completely commonplace. And direct brain control is the best interface around, really.

For me, the huge thing will be when I'm able to control a computer's inputs directly with my brain to do tasks I currently do now. For instance, while I'm a decent typist, it's still much slower than my thoughts, and I will often race well ahead of what I'm able to type while I'm writing. I'd love it if I could interface directly and just think out what I want to say and then edit out all of the noise. I've got ideas for images I want to create but, unfortunately, I've not got the steady hands necessary to translate those images from my mind to paper or screen. A direct interface might, if advanced enough, allow me to at least put the basics of an idea out there and then repair it later.

I'd love it if I could interact with objects around me as well - for instance, at university I have a swipe card that lets me into the research lab I work in, but it'd be much better if doors and elevators and such could know I'm there, know I'm me, and make a judgment - "Oh, she's alone, she's authorized to enter, open the doors" or "Oh, she's authorized to be here, but she's with someone else, so I will ask her to verify their guest status" or even "She's authorized and not alone, but she's activated a panic button, so I'll alert security, record the scene" or whatever. Basically, a smart environment with my implants acting as the key.

None of that seems particularly unrealistic to me - yes, it'd require a lot of training/calibration to get things working accurately, but it all seems reasonable at this point. I'm not asking for Neuromancer-like "jacking in" or anything - I mean, visual implants would be great, and I can think of thousands of things I'd do with them - but for right now I'd settle for much of what I've described. Heck, I'd settle for implants that'd only let me do what I can already do with a mouse, keyboard, microphone and camera - I can think of lots of neat tricks that could be done to make my life easier like that.

No, thanks! (1)

gnupun (752725) | more than 6 years ago | (#23651863)

I don't want the govt to read or control my thoughts. Yet another day the elites want to attack the weak, common people and steal what little they have (their minds).

Computers in Brains (1)

salveque (1221584) | more than 6 years ago | (#23651961)

I remember seeing an article about houses that controlled the lives of their inhabitants for them. It wouldn't work, because people aren't willing to give up control.

It's one of the things I always think of when I hear `technological singularity': people won't be willing to give up control. They want to be the smartest. So they won't make machines that are smarter than them: they'll make themselves smarter.

Not only could basic arithmetic functions and memory be built in, but also internet connectivity and even interfacing with body-protecting nanobots (personal control sidesteps the privacy problems). They could be designed to pass on to offspring (sperm and ova would carry them), which would be necessary because it makes them permanent and secure in the perception of the general world.

The implications would be massive. We'd probably see the disappearance of most other computers, decentralization (why live nearby when we can VoIP in our heads?), longer lifespans, and many other things.

This is of course overlooking ethics. Should we really mess with our bodies like this?

microsoft (0)

Anonymous Coward | more than 6 years ago | (#23651991)

Once we are able to connect our brains up to computers, would you really trust yours to run on Windows? I dread to think what would happen if you downed a pint of something it didn't know about.

It would bring a new meaning to BSOD.

Braiiiinns! (0)

Anonymous Coward | more than 6 years ago | (#23652055)

Won't zombies love us even more then, because of the crunchiness of our brains?

It's inevitable (1)

Besna (1175279) | more than 6 years ago | (#23652223)

Sort of like a perfect playoff run in the NBA. You can be for or against it--it will happen either way.

Worst summary in a long time (1)

ceoyoyo (59147) | more than 6 years ago | (#23652385)

And there have been some remarkably bad ones.

What new tools? What festival? Why doesn't Kurzweil get a first name while Ramachandran gets not only a first name but an initial? Has Ray dropped the Ray?

You don't really need to reverse engineer it... (2, Insightful)

tmosley (996283) | more than 6 years ago | (#23652493)

The brain is an adaptive system. Provide it with a stimulus, and it will reprogram itself. How do you think the monkey learned how to use the robotic arm? Did they hack into the neurons and input code to work a third arm?

No, the monkey's brain spontaneously created the neural network to control it. Sentient beings aren't computers, at least not in the conventional sense, because they reformat themselves to process new data (learning), and even to process new types of inputs. One might be able to build a computer advanced enough to handle this level of functionality, but once it is built, you won't be programming it with code. Instead, you'll be teaching it just like you do a child.
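As a toy illustration of that point (my own sketch, nothing like real neural circuitry): a system that only gets an error signal back from a new "limb" can tune itself to drive it, without anyone reverse engineering its internals.

```python
# Hypothetical adaptive controller: a single weight 'w' is the "wiring"
# between intent and a new limb. Feedback alone tunes it; nobody ever
# inspects or reprograms the internals directly.

def adapt(target_gain, steps=200, lr=0.1):
    w = 0.0                          # the limb starts out unresponsive
    for _ in range(steps):
        command = 1.0                # intended movement
        actual = w * command         # what the limb actually does
        error = target_gain * command - actual
        w += lr * error * command    # delta rule: adjust from feedback
    return w

print(round(adapt(2.5), 3))  # 2.5: converges to the gain the limb needs
```

The monkey's brain is doing something vastly more complicated, but the same in spirit: a stimulus plus a feedback loop, not injected code.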

Changing ideas (1)

Bender0x7D1 (536254) | more than 6 years ago | (#23652573)

From TFA:

Two decades ago he predicted that "early in the 21st century" blind people would be able to read anything anywhere using a handheld device. In 2002 he narrowed the arrival date to 2008. On Thursday night at the festival, he pulled out a new gadget the size of a cellphone, and when he pointed it at the brochure for the science festival, it had no trouble reading the text aloud.

I'm guessing that 20 years ago he was thinking of a handheld device that would actually allow blind people to literally "see" the text, not have it read to them. In 1976 Kurzweil invented the Kurzweil Reading Machine, which could read text aloud to the blind; it covered an entire tabletop. With an exponential decrease in size, this would have been projected to become a handheld device in the early 90s. So why add the extra 10-20 years to the prediction?

I'm guessing, and I could be wrong, that he added the extra time to allow for the development of the required neural link for visualizing the text. So, this really isn't the device he envisioned, but a simpler concept that does a similar thing. Kind of like a rocket belt is like a jet pack, but doesn't let you fly from New York to L.A. at 300 MPH.
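That "early 90s" projection is easy to sanity-check. A rough sketch, where the 1976 date comes from the comment but the device volumes and the two-year halving period are my own assumed numbers, chosen only to show the shape of the argument:

```python
import math

# All three constants below are assumptions, not Kurzweil's figures.
tabletop_litres = 100.0   # assumed volume of the 1976 tabletop machine
handheld_litres = 0.2     # assumed volume of a handheld device
halving_years = 2.0       # assumed rate: volume halves every two years

halvings = math.log2(tabletop_litres / handheld_litres)
handheld_year = 1976 + halvings * halving_years
print(int(handheld_year))  # 1993, i.e. "early 90s" under these assumptions
```

So the extra decade or two in his prediction does look like it was budgeted for something beyond mere shrinkage, which fits the neural-link guess above.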

God is just protecting his intellectual property (1)

genner (694963) | more than 6 years ago | (#23652931)

just like everyone else.

The future... (2, Informative)

religious freak (1005821) | more than 6 years ago | (#23653099)

The future Conan???

PS Anyone having trouble getting their rightful Karma bonuses despite still having 'excellent' Karma?