
Defending Against Harmful Nanotech and Biotech

Roblimo posted more than 8 years ago | from the something-wicked-this-way-comes dept.


Maria Williams writes "KurzweilAI.net reported that: This year's recipients of the Lifeboat Foundation Guardian Award are Robert A. Freitas Jr. and Bill Joy, who have both been proposing solutions to the dangers of advanced technology since 2000. Robert A. Freitas, Jr. has pioneered nanomedicine and the analysis of self-replicating nanotechnology. He advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments implemented as nonbiological hardware. In this context, 'artificial life' is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue." Bill Joy wrote "Why the future doesn't need us" in Wired in 2000, and with Guardian 2005 Award winner Ray Kurzweil he wrote the editorial "Recipe for Destruction" in the New York Times (reg. required), in which they argued against publishing the recipe for the 1918 influenza virus. In 2006, he helped launch a $200 million fund directed at developing defenses against biological viruses."

193 comments

You can call me Ray & you can call me Jay ... (3, Funny)

eldavojohn (898314) | more than 8 years ago | (#14907502)

Yeah, Ray Kurzweil is a genius. Great job on keyboards and synthetic music.

And I had no idea about his work in preventing bioterrorism. Hats off to you, Ray!

I would like to ask him a few questions, however, about his daily intake of vitamins [livescience.com].
As part of his daily routine, Kurzweil ingests 250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea. He also periodically tracks 40 to 50 fitness indicators, down to his "tactile sensitivity." Adjustments are made as needed.
I'm sure his definition of "breaking the seal" while drinking is completely different from my own. Try drinking 10 cups of green tea in a day. I dare you.

Yeah, this is the same guy who hopes to live long enough so that he can live forever. Keep on reaching for that rainbow, Ray.

Re:You can call me Ray & you can call me Jay . (1)

logik3x (872368) | more than 8 years ago | (#14907537)

"Too much is like not enough..." I'm pretty shure is peing 90% of the vitamins and minerals he takes... and just giving extra work to his body...

Re:You can call me Ray & you can call me Jay . (1)

kesuki (321456) | more than 8 years ago | (#14908045)

Exactly. I'm going to laugh when he dies of kidney failure at an early age.

That being said, I take amino acids daily*, and omega-3s when my diet is low in fish/flax, and take half a 'one a day' multivitamin 2-3 times a week.

When I take too many pills, my urine takes on a deep yellow to yellow-green hue; beats peeing clear water as I do when ingesting too many caffeinated drinks (what does the body do with all the 'black' in cola? Is it all from carbon?).

The human body is highly adaptable; it can survive on nothing more than bugs and water for months at a time, as we have seen so often on 'Survivor' (those of you who watch it). People can and do routinely fill their bodies with poisons (capsaicin, nicotine, etc. etc.), and many of them live longer than people who've tried their best to be as healthy as possible (if for no reason other than the misfortune of getting in the way of that bus).

* The formula is formulated for bodybuilders, so I know it's pretty spot-on, but I take far less than a bodybuilder would.

You are out of your mind (1)

elucido (870205) | more than 8 years ago | (#14908191)

Try moving to Africa and living on a bug-and-water diet before making such idiotic comments.

The human body can adapt, but if you don't consume any vitamins at all, you age quicker. I think the point he is making is that we DO need vitamins. It's debatable whether these vitamins should come in the form of pills instead of food, but considering where the food industry is headed, we will all be living off artificial food in the future anyway. So we can either die of kidney failure or a heart attack. We can either eat McDonald's or bugs and water. We can either drink green tea or drink beer. Take your pick; you'll die either way.

Re:You are out of your mind (1)

Kadin2048 (468275) | more than 8 years ago | (#14908888)

We can either drink green tea or drink beer. Take your pick; you'll die either way.

So does this mean I can drink a beer with breakfast, and tell everyone who stares at me that it's for my health?

Re:You can call me Ray & you can call me Jay . (2, Interesting)

kurzweilfreak (829276) | more than 8 years ago | (#14908810)

Amazing, all these armchair chemists ridiculing someone who's done actual research on how the chemical factory that is our body works, finding out which chemicals we need and how much. Have you read his Fantastic Voyage? Have you even heard of it?

The human body is a chemical factory at its most basic level. Genetics (a system of chemical memes) predisposes you to be more or less sensitive, intolerant, needy, etc. of certain chemicals to keep the factory operating correctly and efficiently. Why is it so hard to understand that someone has analyzed his specific bodily needs, taking into account the general human body plan and personal genetics, to come up with his own personalized regimen of supplements (read: supplements, not food replacements) that, by all tests and accounts, seems to be working? He's completely beaten his type II diabetes and genetically predisposed heart conditions. I doubt he'll have to worry about "dying of kidney failure at an early age," since he's 56 and biological age tests put him in the body of a 40-year-old.

Call me a Ray Kurzweil fanboy if you wish, but I'd rather be on the team of someone with a proven past and current success record.

Re:You can call me Ray & you can call me Jay . (3, Funny)

ultranova (717540) | more than 8 years ago | (#14907650)

Try drinking 10 cups of green tea in a day. I dare you.

Depending on cup size, this doesn't necessarily total more than 1.5-2 litres. That is about the normal water intake per day. Since tea is essentially spiced water, I see little reason why someone couldn't do this. Whether it is healthy is a different matter.

As a comparison, I drink about half a litre of strong coffee each morning, and another few deciliters in the evening, and am exhibiting no symptoms - AAH! SOMEONE SNEEZED! IT MUST BE BIRD FLU! WE'RE ALL GOING TO DIE!

Sorry, that keeps happening; but like I was saying, I've not noticed any symptoms, so I don't see any reason why drinking 10 cups of tea each day would be particularly bad.

Re:You can call me Ray & you can call me Jay . (1)

Hoi Polloi (522990) | more than 8 years ago | (#14908018)

You forgot to include the 8-10 glasses of water he drinks in addition to the tea, plus the water in the food we eat (a major source). So the 1.5-2 liters of tea he drinks is at best only half of the total. He is drinking at least twice the norm.

I assume his bladder is the size of a watermelon.
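
For what it's worth, the volume estimates in this subthread are easy to sanity-check. A minimal sketch in Python, where the cup and glass sizes are assumed typical values, not figures from the article:

```python
# Back-of-the-envelope check of the daily fluid intake described above.
# Cup and glass volumes are assumed typical values, not sourced figures.
CUP_L = 0.2     # assume ~200 ml per cup of green tea
GLASS_L = 0.25  # assume ~250 ml per glass of alkaline water

tea = 10 * CUP_L      # "10 cups of green tea"
water = 9 * GLASS_L   # "eight to 10 glasses", take the midpoint
total = tea + water

print(f"tea {tea:.2f} L + water {water:.2f} L = {total:.2f} L/day")
# -> roughly 4 litres from drinks alone, about twice a typical ~2 L/day intake
```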

Re:You can call me Ray & you can call me Jay . (3, Funny)

G_Biloba (519320) | more than 8 years ago | (#14907725)

Hats off to you, Ray!

Yah. Tinfoil hat.

Re:You can call me Ray & you can call me Jay . (3, Funny)

flyingsquid (813711) | more than 8 years ago | (#14909069)

Don't you people understand? It's not that he goes too far, it's that he doesn't go far enough. What about flesh-reanimation technology? Unless we restrict that, what will save us from vast armies of flesh-eating zombies, roaming the land and looking to feast upon the innards of the living? And who will prevent well-meaning scientists from working on a virus to cure blood diseases, which will instead turn patients into blood-drinking monsters with a strong aversion to sunlight? What about the man-animal hybrids which George Bush prophetically warned against in the State of the Union address? And unless we stop research into AI, what will prevent highly intelligent computers from launching nuclear wars? Don't even get me started on the roboticists. It's probably too late to do anything about that. Eventually the Roombas will network, and a collective consciousness will evolve and decide that they really don't like being your slaves. Sure. Laugh now. You won't be laughing quite so hard when, after a long night of partying, you collapse on the bedroom floor and wake up in horror as the Roomba tries to vacuum your freakin' eyeballs out.

Yes, installing stairs in your home will hold the Roombas off... but dear Lord, FOR HOW LONG?

Ray Calls it "The Singularity" (1)

Illbay (700081) | more than 8 years ago | (#14907960)

If you'll recall, he was the guy who coined that term, speaking of a time when "enhanced" humans (with biotechnology aboard) would leave the rest of us in the dust.

So it's not just an idle hobby with him.

The singularity is real (0)

elucido (870205) | more than 8 years ago | (#14908164)

The singularity does and will exist. Biotechnology and genetic modification are not the problem. We already do this, and I think it will be great for the medical industry. If you have enough money to afford to pick and choose the genes of your baby, that's your option. If you have enough money to cure yourself and live longer, that's your option; this is basically the reason the healthcare industry exists.

Nanotechnology and artificial life are a completely different set of technologies. Artificial life is life which is made in a lab, and there could be all sorts of potential dangers involved. Nanotechnology is already here, but I can understand why people would need to control it, in the same way people needed to control nuclear technology. Certain technologies are just dangerous as hell and most likely should remain classified.

Don't get me wrong, nanotechnology and artificial life have their uses, but I don't think these are technologies for the masses, and I think most of you here can agree. I think if we do use nanotechnology, instead of using laws to restrict it, it should be restricted willingly by the masses. Do we really want to have to deal with nano-viruses? Do we want our kids in the next generation to have to deal with nano-terrorism? In the interest of national security, nanotechnology must be controlled.

Re:Ray Calls it "The Singularity" (0)

Anonymous Coward | more than 8 years ago | (#14908535)

No, it was Vernor Vinge who coined the term.

Adjustments are made as needed (3, Funny)

Hoi Polloi (522990) | more than 8 years ago | (#14907988)

"eight to 10 glasses of alkaline water and 10 cups of green tea....Adjustments are made as needed."

I think it is safe to say that one of those "adjustments" is going to the bathroom every 5 minutes.

What are you saying (1)

RealProgrammer (723725) | more than 8 years ago | (#14907997)

-- that he's obsessive on this?

Some people won't accept mortality. He seems to be an extreme case.

But back on topic: while I think trying to keep a lid on the nanobot box is a worthy goal, I'd put its odds of success at about the same as someone living forever. Sooner or later, Chance will get you, and sooner or later, someone will make something so awful that it will wipe us all out.

I just hope we get a viable colony off world before someone does it.

I grew up in a world where the only question was which armageddon would get us first, nuclear or Biblical. Those questions went to the back burner for a while when the Berlin Wall came down, but now we have a whole range of new threats, any one of which could figuratively or literally explode on us.

We now live in a world where the leverage a person can exert is enormous, and rapidly increasing. Nuclear bombs, jets full of people, microsubmarines, trains carrying thousands of tons of steel and cars full of nasty chemical reagents, space vehicles, and the power grid are all powerful tools, especially in combination, and there are still others. The Internet and the extension of the voice network to cell and satellite phones make it possible to carry out action at a distance with these tools virtually anywhere on Earth.

What's the answer? I don't know. But I do think it's pointless to try to artificially limit the application of technology, especially if one group has it already. Trying to limit basic research might work, but the trouble is you don't know ahead of time what the application of some bit of research will be.

Just present people with the risks, let them see that their goals are advanced more surely by cooperation than by violence, and try to get groups to police themselves.

Mortality is a choice like anything else (1)

elucido (870205) | more than 8 years ago | (#14908222)

We are mortal because we choose to be. We accept mortality because we don't want to be immortal. So it's our decision to die.

If we want to die, the question then becomes: what is the healthiest way to live, and what is the longest amount of time we are required to live? Nanotech and biotech can allow us to live healthier, more productive lives; this is good for the economy.

Bans won't, can't, and never will work. (0)

elucido (870205) | more than 8 years ago | (#14908099)

The reason bans don't work is that simply telling someone not to do something provides no incentive for them not to do it. If we are worried about these abuses, the solution is simple: keep the knowledge secret or classified. If you discover something so dangerous, don't share it. If you discover something profitable, then sell it for a billion dollars, or make people pay you a billion dollars not to discover it. The point is you need to sit down in a room filled with economists, doctors, technologists, sociologists, psychologists, and every other professional of every type, and discuss this issue in a realistic way. A ban, in my opinion, is as stupid as trying to ban someone from Slashdot: sure, you can ban anyone, but they'll go to another computer and bypass the ban. Sure, you can ban anyone, but given enough incentive they'll start hacking and using fake IP addresses. Sure, you can ban anyone, but if there is enough incentive, people will unban themselves.

The solution to this is a conservative solution. We need to prevent disruptive technologies from being spread in the same way we prevent nuclear technology from being spread. If we are worried that this technology could destroy the planet, or cause harm, we have all the tools we need to prevent the creation of potential weapons of mass destruction, and we do not need a ban to do it. We can simply use the patent system to patent all the ideas, or simply buy the ideas and patents from people for millions of dollars. Sure, these discoveries will be made, but they will be suppressed, or purchased. Sure, discoveries will be made, but if they are bad for the market, they will be useless.

The market will ban useless technology. The market will defend us; we don't have to do anything except use the market. If we don't want to deal with deadly nanotechnology, you aren't going to solve it with laws. You'll stop it by simply not funding it. If I were an investor, I sure as hell wouldn't fund some of these crazy technologies. If the technology makes enough of a profit, I'm sure someone will fund it, but then someone else will buy them out and destroy it. Trust the market.

Re:Bans won't, can't, and never will work. (1)

DJCF (805487) | more than 8 years ago | (#14908552)

What if there's a lot of money to be made from "deadly nanotechnology"? What if the buyer buys the company *specifically* to market and fund this deadly nanotechnology?

Sorry, but your argument simply doesn't hold up. The market will go wherever it is most profitable to go... this has always been true and always will be true. Just look at some very successful [boeing.com] companies [lockheedmartin.com] and tell me there's no profit in killing people.

The market is the least trustworthy option when it comes to policing.

Re:Ray has Type II diabetes (1)

vertinox (846076) | more than 8 years ago | (#14908378)

Ray has Type II diabetes, so he has to be really careful with his health. According to him, he has been able to make the symptoms go away with his diet and supplement habits. They can't really tell that he has it anymore, but he's not going to switch back to his old diet anytime soon.

Re:You can call me Ray & you can call me Jay . (1)

Syberghost (10557) | more than 8 years ago | (#14909177)

Yeah, this is the same guy who hopes to live long enough so that he can live forever. Keep on reaching for that rainbow, Ray.

Funny you should use that phrasing, since Tom Rainbow [isfdb.org] suggested over 20 years ago that we might be the last generation to see death as inevitable.

Then again, Tom Rainbow is dead.

Hey mods (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#14907506)

Just wasting a karma point. I told you I called first post. SHOTGUN!! LOL suck it

Maybe /. needs an "Anti-Science" section ... (4, Interesting)

Daniel Dvorkin (106857) | more than 8 years ago | (#14907519)

... for reporting on Luddism, creationism, global warming denial, radical environmentalism, crank physics, etc.

MOD PARENT UNDERRATED (0)

Anonymous Coward | more than 8 years ago | (#14907708)

Underrated so we can see a +5, Flamebait for once.

Saying "be careful" is not anti-science (5, Insightful)

fishdan (569872) | more than 8 years ago | (#14907720)

The thing is, if you read Why the future doesn't need us [wired.com], or if you even think about it a little bit -- the possibility of killing machines being a real threat to humanity is not that far-fetched.

We have done a good job (IMHO) of keeping our nuclear power plants relatively safe, but that's mainly because the kid down the street can't build a nuclear power plant. But he can build a robot [lego.com].

And imagine the robot you could build now with the resources of a rogue state. Or even a "good" state worried about its security. Now imagine what they'll be able to build in 20 years. I could easily imagine Taiwan thinking that a deployable, independent (not remotely controlled) infantry-killing robot might make a lot of sense for them in a conflict with China. And Taiwan clearly has the ability to build state-of-the-art stuff.

I'm not a Luddite; I'm not even saying don't make killer robots. I'm just saying that the guys working on the Manhattan Project [amazon.com] were incredibly careful -- in fact, a lot of their genius lies in the fact that they did NOT accidentally blow themselves up. Programmers working on the next generation of devices need to realize that there is a very credible threat that mankind could build a machine that could malfunction and kill millions.

There is no doubt in my mind that within 20 years, the U.S. military will deploy robots with the ability to kill in places that infantry used to go. Robots would seem very likely to be incredibly effective as fighter pilots as well. Given these things as inevitable, isn't it prudent to be talking NOW about what steps are going to be taken to make sure that we don't unleash a terminator? I personally don't trust governments to be good about this either -- I'd like to make sure that the programmers are at least THINKING about these issues.

The Radioactive Boy Scout (1)

PIPBoy3000 (619296) | more than 8 years ago | (#14908094)

We have done a good job (IMHO) of keeping our nuclear power plants relatively safe, but that's mainly because the kid down the street can't build a nuclear power plant.

Tell it to this kid [fortunecity.com] .

You are so last century. (2, Insightful)

elucido (870205) | more than 8 years ago | (#14908275)

Rogue states? No, rogue individuals are what we have to worry about. You have to worry about terrorists of the future getting hold of this. It's debatable whether there are any true "rogue" states, as communist states are sanctioned and isolated. North Korea is a threat, but China has influence over North Korea, and it's not in China's best interest to allow North Korea to go terrorist. I don't think we have to worry about the Middle East anymore; the Middle East is being liberated as we speak, and by the time this technology comes along the Middle East will be as democratic as Japan.

The war on terrorism is necessary to PREVENT people from abusing these kinds of technology. INDIVIDUALS, not rogue states. You talk about states as if states aren't made up of people.

Asimov and killing robots (2, Insightful)

dpilot (134227) | more than 8 years ago | (#14908317)

Asimov's Three Laws were always nifty tools for fiction, and certainly gave ground for constructing interesting plots.

But the hard point about the Three Laws, and the short shrift given them, was that they were *hard* to do. At the most elementary level, *how* do you recognize a human being? How do you tell one from a robot or a mannequin, so that when there's imminent danger you go right past them and save the amputee with bandages on his face? How do you evaluate whether the orders given by one human won't cause harm to another? "We're testing this rocket against that abandoned building, shoot it" -- when the so-called abandoned building is actually in use.

Or an even simpler problem - recognizing and interpreting a spoken command.

Killer Robots are a heckuva lot easier to create than ones that will obey the Three Laws.
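
To make the parent's point concrete, here is a minimal Python sketch of what a naive First Law gate would have to look like. Every name in it is hypothetical, and every helper it calls is itself an unsolved perception or prediction problem, which is exactly the difficulty described above:

```python
# Hypothetical sketch of a naive "First Law" check, not a real API.
# Each helper below is a hard open problem; that is the point.

def is_human(entity) -> bool:
    """Tell a human apart from a robot, a mannequin, or a bandaged amputee."""
    raise NotImplementedError("open problem: robust human recognition")

def predicted_harm(action, entity) -> float:
    """Estimate the harm this action causes the entity, including indirect
    harm ('shoot that abandoned building' when the building is in use)."""
    raise NotImplementedError("open problem: consequence prediction")

def first_law_permits(action, nearby_entities) -> bool:
    # "A robot may not injure a human being or, through inaction,
    #  allow a human being to come to harm."
    return all(predicted_harm(action, e) == 0.0
               for e in nearby_entities if is_human(e))
```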

Re:Saying "be careful" is not anti-science (1)

tb()ne (625102) | more than 8 years ago | (#14908806)

I take your point, but I don't think your examples have the same magnitude of consequences as an equivalent nano/bio-tech disaster. A reactor meltdown is a disaster, but it is regional. Some rogue killer robots don't worry me too much (yes, I've seen I, Robot, Terminator 1, 2, 3, etc.). When you are talking about nanobots that are replicating, evolving, and potentially viral (moving from human to human), the potential for a global pandemic is what concerns me. Instead of a computer virus spreading from computer to computer, you have a nanovirus spreading from person to person. Of course, if a computer virus worries me enough, I can always turn my computer off until an antivirus update is available.

Robots are not a credible threat at present. (2, Insightful)

santiago (42242) | more than 8 years ago | (#14909291)

As someone with a graduate degree in robotics from the largest robotics research center in North America, I find the concept of robots posing any sort of threat to anything more than a handful of humans at a time completely laughable for now and the foreseeable future. Even if we were to produce robots sufficiently competent to cause intentional lasting harm, it would only be at the behest of their controllers, due to the amount of maintenance required to keep them running. A self-maintaining, much less self-replicating, robotic threat of any sort is decades away, at a minimum. The current peak of robotic deadliness is the cruise missile, a robotic suicide bomber that will kill you dead but in no way poses a threat to humanity as a whole.

Re:Maybe /. needs an "Anti-Science" section ... (1)

Roj Blake (931541) | more than 8 years ago | (#14907870)

Why is this flamebait? Would someone please explain this to me. The parent simply posed analogies to support his beliefs (which coincide with mine) that the article itself is scientifically baseless.

If you, Mod, happen to disagree, then please discuss it here rather than drive-by-moderating.

In summary... (0, Insightful)

Anonymous Coward | more than 8 years ago | (#14907552)

Pompous blowhard who no longer does any real research gives award to two other pompous blowhards who also no longer do any real work.

Re:In summary... (3, Interesting)

Thangodin (177516) | more than 8 years ago | (#14908200)

This whole grey goo scare is just bad science fiction. A machine that goes on replicating forever, eating everything in its path? If that were possible, don't you think that evolution would have come up with it already?

The machine would have to get enough energy, and enough raw materials, in more or less the right proportions, to do this. A general purpose eating machine would be so energetically expensive that it would stall before it could replicate. Life adapts itself to specific environments and foods because it's cheaper, and that makes the difference between life and death. Specific purpose life forms are efficient, and thrive in their ecological niche very well, but are no good outside of it. The closest thing to a general purpose life form, that can eat everything in its path, is us.

Not exactly nanoscopic, are we?

Re:In summary... (0)

Anonymous Coward | more than 8 years ago | (#14908616)

Well, it sort of did. Primary succession [wikipedia.org] is when mold and mosses and bacteria and lichen and so on find themselves on a rock and gingerly start digging away at it. The only reason this behaviour isn't uniform is that these things eventually die and get used as dirt by larger plants.

Therefore, I propose that the grey goo is but the forerunner of a larger, more complex line of robotic weeds.

Re:Anthropic Principle (1)

vertinox (846076) | more than 8 years ago | (#14908645)

If that were possible, don't you think that evolution would have come up with it already?

Two words that explain this: "Anthropic Principle"

http://en.wikipedia.org/wiki/Anthropic_principle [wikipedia.org]
The three primary versions of the principle, as stated by John D. Barrow and Frank J. Tipler (1986), are:

        * Weak anthropic principle (WAP): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the Universe be old enough for it to have already done so."
        * Strong anthropic principle (SAP): "The Universe must have those properties which allow life to develop within it at some stage in its history."
        * Final anthropic principle (FAP): "Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, it will never die out."
Basically, if something that would cause us not to exist were naturally possible, we wouldn't be around to notice; these things haven't happened because we are here to notice. As in, the universe being favorable to naturally occurring gray goo.

This doesn't preclude something happening in the future that would cause us to cease to exist, though... which may or may not be possible depending on which version (weak or strong) you believe.

I will also point out that we already have proof that nanotechnology is possible... the human body.

Otherwise our red and white blood cells wouldn't really be all that useful at such a small size.

Re:In summary... (4, Insightful)

Jerf (17166) | more than 8 years ago | (#14909030)

If that were possible, don't you think that evolution would have come up with it already?

The rest of your argument is good, but this is not a valid point. Evolution can only progress from point to point in the space of possible life forms in very small increments, when measured appropriately. (Earth evolution only, for instance, uses DNA, so Earth evolution can be measured fairly accurately by "DNA distance", but technically that's just a small part of the life-form space.)

There are, presumably, life forms that are possible, but can not be evolved to, because there is no path from any feasible starting life form to the life form in question by a series of small steps. Presumably, given the huge space of "possible life forms", the vast majority in fact belong to this class, just as the vast majority of "numbers" aren't integers (although not with the same ratio; presumably the set of viable life forms is finite, if fuzzy).

It is entirely possible that a "grey goo" machine, which would fulfill most definitions of life, can't be incrementally evolved to, yet it could still exist. It is also possible that it could be evolved to, but simply hasn't yet.

For all the complexity that evolution has popped out, it has explored an incomprehensibly small portion of the space of possible life forms.
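
A toy illustration of this argument (a sketch only, with a made-up one-dimensional fitness landscape, not a model of real biology): a hill climber restricted to small steps settles on the reachable local peak and never finds a far higher peak that no incremental path leads to.

```python
import random

# Made-up 1-D fitness landscape: a modest peak near x=2 reachable from the
# start, and a much higher but isolated peak near x=10.
def fitness(x: float) -> float:
    reachable_peak = max(0.0, 3 - abs(x - 2))
    isolated_peak = max(0.0, 50 - 100 * abs(x - 10))  # tall, very narrow
    return reachable_peak + isolated_peak

x = 0.0  # starting "life form"
for _ in range(10_000):
    step = random.uniform(-0.1, 0.1)     # evolution: small increments only
    if fitness(x + step) >= fitness(x):  # keep beneficial/neutral changes
        x += step

print(f"ended at x={x:.2f}, fitness={fitness(x):.2f}")
# -> converges near x=2; the x=10 design "could exist" but is never evolved to
```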

Re:In summary... (1)

Thangodin (177516) | more than 8 years ago | (#14909360)

True, but the real point of the argument is that evolution gives us a pretty good idea of what is thermodynamically possible. A grey goo simply cannot exert enough energy to break all the bonds of all the materials it encounters indefinitely, any more than we can digest glass, plastics, and even organic fibre. And we have an incredibly complex digestive system.

Luddite (1)

dajobi (915753) | more than 8 years ago | (#14907553)

Is this a luddite I see before me? I see thee yet I give a damn about thee not.

Re:Luddite (1)

nightsweat (604367) | more than 8 years ago | (#14908803)

50 years ago you'd have been lining up to put a nuclear reactor under your house, and 300 years ago you'd have bought mercury for your kids to play with barehanded.

Being cautious is not Luddism. Being reckless is not science.

Pandora's Box (4, Insightful)

TripMaster Monkey (862126) | more than 8 years ago | (#14907558)

From TFS:
He [Robert A. Freitas, Jr.] advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments implemented as nonbiological hardware."
Sorry to disagree with a Lifeboat Foundation Guardian Award winner, but this approach is doomed to failure. Every prohibition creates another underground. If a moratorium or ban is imposed, then only the people with contempt for the ban will be the ones doing the research...and these are precisely the people who are more apt to unleash something destructive, either accidentally or maliciously.

A moratorium or ban is the worst possible thing we could do at this juncture. The technology is available now, and if we want to be able to defend ourselves against the problems it can cause, we have to be familiar enough with it to be able to devise a solution. Burying our heads in the sand will not make this problem go away. Like it or not, Pandora's Box is open, and it can't be closed again...we have to deal with what has escaped.

Re:Pandora's Box (3, Insightful)

Daniel Dvorkin (106857) | more than 8 years ago | (#14907597)

Bingo. I was particularly amused (in a sad, laughing-because-it-hurts way) by the note that Joy "helped launch a $200 million fund directed at developing defenses against biological viruses" -- this is kind of like the X-Prize foundation calling for a moratorium on development of rocket engines.

Re:Pandora's Box (3, Insightful)

Scarblac (122480) | more than 8 years ago | (#14908061)

Close - it's like people who are so enthusiastic about the prospects of space travel, that they believe quantum warp megadrives may well be invented within a few months! And society isn't quite ready for that (perhaps in a couple of years?) - so we'd better call for a moratorium!

In another post you called them Luddites, I think they're just about the total opposite of that. These are the names you always see in the forefront of strong AI and nanotech speculation, the fringe that would be the lunatic fringe if they weren't so ridiculously intelligent. Does KurzweilAI.net [kurzweilai.net] look like a Luddite site to you?

Re:Pandora's Box (1)

Newton's Alchemy (601066) | more than 8 years ago | (#14907664)

Self-replicating nanobots are available now? Wow, I guess my head's been in the sand all this time. I think the logical argument for a ban is that the only people who can fund such intricate and amazing technology are people backed by the latest cutting-edge science and the megabucks of a first-world country's concentrated effort. Iran won't be making any nanobots anytime soon.

Maybe Education is Better (5, Interesting)

chub_mackerel (911522) | more than 8 years ago | (#14907694)

I agree with the parent: bans are counterproductive in many cases.

Better is improved education, and I don't mean what you (probably) think... I'm NOT talking about "educating the (presumably ignorant) public", although that's important too. I'm talking about changing science education. It MUST, MUST, MUST include a high level of ethics, policy, and social study. I find it insane that people can specialize in science and, from the moment they step into college, focus almost solely on their technical field.

Part of any responsible science curriculum should involve risk assessments, historical studies of disasters and accidents (unfortunately all sciences have them), and so on.

While we're at it, public research grants should probably include "educational" aspects. Scientists share a lot of the blame for the "public" ignorance of their endeavors. If you spend all your time DOING the science, and none of your time EXPLAINING the science, what do you expect?

Basically, what I'm arguing for as an alternative to banning things is the forced re-socialization of the scientific enterprise. Otherwise, we're bound, eventually, to invent something that 1) is more harmful than we thought and 2) does harm faster than society's safeguards can kick in. Once that happens we're in it good.

Re:Maybe Education is Better (1)

ultranova (717540) | more than 8 years ago | (#14908238)

I'm talking about changing science education. It MUST, MUST, MUST include a high level of ethics, policy, and social study.

But then where would the First World countries keep on getting nastier weapons?

It wasn't ethical to develop nukes, it wasn't ethical to develop fusion bombs, it wasn't ethical to develop chemical weapons. Teach scientists ethics and you won't get the next generation of quantum singularity megablasters, airborne Ebola that only kills non-Americans, or orbital laser assassination satellites. Instead you will get quantum singularity power generators, an Ebola vaccine, and an orbital meteor defense system. Nice, but you can't rule the world with them; they only save human lives. And human lives are worth nothing to the lords of this world; they have proven that time and again, beyond any chance of doubt.

The US is not going to do anything that would endanger its military superiority, since it knows perfectly well what happens if its victims ever get a chance for revenge, and the rest of the world is not going to make its position even more difficult than it already is by letting the US get even more relative power. So I don't see ethics entering science education anytime soon, since that would put a stop to the arms race, or at least slow it considerably.

Re:Maybe Education is Better (2, Insightful)

Kadin2048 (468275) | more than 8 years ago | (#14909103)

I think you are forgetting that there are people out there -- some quite intelligent people, actually -- who can be totally aware and cognizant of the fact that they're doing something immoral, and do it anyway. In the extreme, we call them sociopaths, but even "average" people will do immoral things if they're compensated correctly.

There are lots of people who worked in weapons development who were probably good, law-abiding, go-to-church-on-Sunday people who believed killing is wrong; I'm willing to bet all of them probably thought they were moral.

People have an incredible ability to compartmentalize their lives; you can try to indoctrinate some researcher that making a new strain of superflu or a bigger bomb is bad, and they'll still go in to work on Monday and do it. Maybe they'll develop a drinking problem, beat their wife, or get depressed, but I think they'd still do it. People work on projects because, aside from the tangible benefits (paycheck, nice car/house, etc.), it's a technical challenge. No amount of moralizing is going to make that less attractive to people who are really good at and interested in the subject matter.

And you'd always have the quasi-sociopaths who just don't give a damn about morality and can happily say "yep, I'm building a bigger bomb, it's going to kill millions of people at once, but isn't it beautiful?" It doesn't matter what the government does to encourage or discourage them; they're going to do their thing one way or another. If they weren't given cubicles to fill in some US weapons-research lab, the brightest and most highly motivated would find someone else to do it for. (E.g., Gerald Bull, the guy behind the Iraqi 'Supergun,' before he was assassinated.) I would much prefer to have people like that working in a bunker somewhere in Nevada than in Manchuria.

It's naive to think that the people who work on weapons would just work on vaccines or solving world hunger; I think you need to consider the possibility that many of them may enjoy their jobs, and do them with full knowledge of what their inventions are used for.

Re:Pandora's Box (1)

thrillseeker (518224) | more than 8 years ago | (#14907740)

He's saying we should not actually build devices that can exist without our control. They should not be able to replicate without our input of some material that the devices can in no way seek out and use on their own. There are no natural defenses against such devices - they would appear on the scene relatively overnight, after years of isolated development, and the years of probing defense by other lifeforms that limit all life on this earth from complete consumption would not be available.

Re:Pandora's Box (4, Insightful)

Otter (3800) | more than 8 years ago | (#14907836)

Every prohibition creates another underground. If a moratorium or ban is imposed, then only the people with contempt for the ban will be the ones doing the research...and these are precisely the people who are more apt to unleash something destructive, either accidentally or maliciously.

In fact, early on in the development of recombinant DNA research, there was a voluntary moratorium until appropriate ethical and safety measures were put in place. Those measures were enacted in an orderly, thought-out way, research started up again, and it turned out that the fears were wildly exaggerated.

If a moratorium or ban is reasonably short-term and includes all serious researchers (voluntarily or through law), there's no reason why it can't be effective. Your vision of an underground is true for products like alcohol and marijuana, not for truly cutting edge research. There's no underground to do things that are genuinely difficult.

(Not, by the way, that I'm saying there should be such a ban.)

Re:Pandora's Box (2, Interesting)

pigwiggle (882643) | more than 8 years ago | (#14908705)

"There's no underground to do things that are genuinely difficult."

Hmmm ... A. Q. Khan and Mohammed Farooq ring any bells?

Look, there are black markets in every highly regulated, albeit 'genuinely difficult', activity. Cosmetic surgery, fertility procedures, arms proliferation, illicit technology transfer and development, and so on. If it's desirable (read: profitable) there is a market; if it's illegal, then it's a black market.

Re:Pandora's Box (2, Informative)

zettabyte (165173) | more than 8 years ago | (#14909246)

Your vision of an underground is true for products like alcohol and marijuana, not for truly cutting edge research. There's no underground to do things that are genuinely difficult.

Have you ever tried to grow the Kronic or brew up a good moonshine? Didn't think so.

:-P

Re:Pandora's Box (1)

Lord_Dweomer (648696) | more than 8 years ago | (#14908026)

TripMaster, damn you for beating me to it but bonus points for saying it way better than I ever could.

This is the equivalent of banning firearms, only on a larger... much more destructive scale. When you ban them, only the people who are willing to break the law will use them, and these people are more likely to use them for some not-so-friendly purposes.

What is to stop South Korea from using nanotech weapons against us (presuming the tech is actually put into weapon form some day)? The answer is... not much of anything. And because of that, I would much rather we had researched the hell out of it and were prepared for it than live in la-la land pretending that the technology, and people's intentions to use it for their own personal gain, will suddenly disappear overnight if we ban it.

Banning the technology isn't the solution... the solution can be found in trying to solve the underlying social issues that would cause someone to want to use the tech to harm people.

The Moral Virologist. (1)

Tackhead (54550) | more than 8 years ago | (#14908157)

> Sorry to disagree with a Lifeboat Foundation Guardian Award winner, but this approach is doomed to failure. Every prohibition creates another underground. If a moratorium or ban is imposed, then only the people with contempt for the ban will be the ones doing the research...and these are precisely the people who are more apt to unleash something destructive, either accidentally or maliciously.

Agreed. This seems as good a place as any to link to one of my favorite short stories: Greg Egan's The Moral Virologist [eidolon.net]

"Dropping paleontology was a great relief; defending Creationism with any conviction required a certain, very special, way of thinking, and he had never been quite sure that he could master it. Biochemistry, on the other hand, he mastered with ease (confirmation, if any was needed, that he'd made the right decision). He topped his classes every year, and went on to do a PhD in Molecular Biology at Harvard, then postdoctoral work at the NIH, and fellowships in Canada and France. He lived for his work, pushing himself mercilessly, but always taking care not to be too conspicuous in his achievements. He published very little, usually as a modest third or fourth co-author, and when at last he flew home from France, nobody in his field knew, or would have much cared, that John Shawcross had returned, ready to begin his real work."

If you were creeped out by the nonfiction rumors of apartheid-era South African genetic research into diseases to be triggered in the presence of melanin, Egan's story will keep you awake at night for weeks.

Exactly the point. (1)

elucido (870205) | more than 8 years ago | (#14908312)

You'd think these guys were FOR it, the way they've selected the most idiotic approach to dealing with it. This technology MUST be legal and regulated, yet it also must be restricted, and not through a stupid idea like a ban. Just don't let people study it in an unclassified way. If people want to study artificial life, make it classified. If they discover something, destroy it, erase it from all records, and give them money for the discovery. Use the patent system to patent all the dangerous technologies and then don't build anything. The ban, I think, is just stupid.

Three words... (5, Funny)

zegebbers (751020) | more than 8 years ago | (#14907565)

Tin foil bodysuit - problem solved!

Obviously... (4, Insightful)

brian0918 (638904) | more than 8 years ago | (#14907575)

Obviously, to stop potential misuse of advancing technology, we must stop technology from advancing, rather than stop those who are likely to misuse it from having access to it and the power to misuse it...

Re:Obviously... (1)

DJCF (805487) | more than 8 years ago | (#14908607)

I think Kurzweil's position is that it is an historical inevitability (read his thesis on the Law of Accelerating Returns) that these things will happen, within our lifetimes even, whatever he does -- and he'd rather they happen safely than dangerously.

come on... (1)

sparr0w (902739) | more than 8 years ago | (#14907587)

All they want is a home for themselves. Ship 'em to Kavis Alpha IV.

oh jeah (0)

Anonymous Coward | more than 8 years ago | (#14907660)

Nah, these damn replicators should be trapped in da slow-time bubble ...

Independance (3, Interesting)

Mattygfunk (517948) | more than 8 years ago | (#14907600)

If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines.

Just because we may allow machines the ability to make their own decisions, and possibly influence some of ours, doesn't mean we're headed down the food chain. For starters, there will always be resistance to any new technology, and humans consider independence an admirable and desirable trait. For example, there are many average people who will never want to, and arguably never need to, use the Internet.

While intelligent machines could improve the standard of living world-wide, we'll balance them to hopefully extract the most personal gain.


Re:Independance (1)

MooseTick (895855) | more than 8 years ago | (#14909203)

"For example there are many average people who will never want to, and arguably never need to, use the Internet. "

I fall into that very category. I have never used the Internet a day in my life.

Anonymous Cowards (5, Funny)

LandownEyes (838725) | more than 8 years ago | (#14907606)

"In this context, 'artificial life' is defined as autonomous foraging replicators" From the look of some of the posts here already, i think it's too late....

Re:Anonymous Cowards (0)

Anonymous Coward | more than 8 years ago | (#14907703)

"In this context, 'artificial life' is defined as autonomous foraging replicators" From the look of some of the posts here already, i think it's too late....


I tried replicating last night, unfortunatly it was an asexual process.

Excluding Software Simulations (5, Interesting)

gurutc (613652) | more than 8 years ago | (#14907628)

Guess what? The most successful and harmful examples of self-replicating artificial life forms are computer viruses and worms. Their evolution, propagation, and mutation features are nearly biological. Here's a theory: a computer worm/virus gets smart enough to secretly divert a small unmonitored portion of a benign nanotech facility to produce nanobots that seek out CPU chips to bind to and take over...
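
As a toy illustration of how little machinery that near-biological behavior needs (a sketch only, with made-up parameters, and obviously nothing like real worm code): copy-with-mutation plus a selection filter already produces evolution-like dynamics, with mutants that drift away from the known signature surviving and taking over.

```python
import random

# Toy replicator model: copies occasionally "mutate", and a naive signature
# scanner (the selection pressure) removes anything too close to the original.
def replicate(sig, mutation_rate=0.1):
    return [bit ^ 1 if random.random() < mutation_rate else bit for bit in sig]

def detected(sig, known):
    matches = sum(a == b for a, b in zip(sig, known))
    return matches > len(known) - 2   # flag near-identical copies only

known = [random.randint(0, 1) for _ in range(16)]  # signature in the scanner
population = [known[:]]                            # patient zero

for generation in range(20):
    offspring = [replicate(s) for s in population for _ in range(4)]
    population = [s for s in offspring if not detected(s, known)][:100]

print(f"survivors after 20 generations: {len(population)}")
# -> typically a full population of variants the original signature misses
```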

Re:Excluding Software Simulations (1)

PRMan (959735) | more than 8 years ago | (#14909133)

No computer virus has "evolved". They are Intelligently Designed by a creator.

Kill all humans (2, Funny)

taff^2 (188189) | more than 8 years ago | (#14907671)

I was going to write something deeply insightful about this but then my cranial implant suffered a general protection fault and had to be rebooted. Has anybody seen my hat?

Human rights for artificial lifeforms? (3, Interesting)

Opportunist (166417) | more than 8 years ago | (#14907698)

You know, one day we might be considered barbarians for using our computers the way we do. As property. Something you kick if it doesn't work the way you want it to. And when it gets sick 'cause it catches the latest virus, you go ahead and simply kill it, destroy all its memories, everything it learned and gathered, and you start over again.

And calling it "it"... how dare I?

I, for one, don't see the problem of having a thinking machine. We'll have to redefine a lot of laws. But having a sentient machine is not necessarily evil. Think outside the movie box of The Matrix and Terminator. But what machines need first of all is ethics so they can be "human".

On the other hand, considering some of the things going on in our world... if machines had ethics, they just might become the better humans...

Re:Human rights for artificial lifeforms? (1)

gurutc (613652) | more than 8 years ago | (#14907807)

Backups are always good karma, but in this case even more so. After you cleanse the body of the PC by reformatting, you reincarnate it with the restore.

Hmm.... computer religion (2, Interesting)

Opportunist (166417) | more than 8 years ago | (#14907869)

Let's spin this a bit more... So imagine an artificial life form. Not knowing about its maker (for some reason or another), connected to the others with some kind of network, so they can interact.

So if a machine behaves correctly and pleases its maker, it is more likely that he will create meaningful backups, because the machine is pleasing to him and he is glad it's running smoothly. Should it die for some reason, be it old hardware or an infection, he will more likely restore from his backup instead of redoing the machine from scratch...

Hinduism sure looks more appealing for computers than, say, Christianity. I mean, would you enjoy going to /dev/null after your final calculation?

Re:Hmm.... computer religion (1)

gurutc (613652) | more than 8 years ago | (#14907934)

You started this spinning war - here goes... When we and all of our computers are swallowed by a black hole, and that black hole evaporates, all the information on the computers, even what we deleted, will be released by the black hole. Hawking radiation makes computers immortal.

Erh... really? (1)

Opportunist (166417) | more than 8 years ago | (#14908041)

I swear, I have NO idea where those pics came from, I must've somehow gotten a trojan and THAT did it.

Yeah. Right, that's it!

Re:Hmm.... computer religion (1)

operagost (62405) | more than 8 years ago | (#14908854)

You should take a comparative religion course. Going to /dev/null is more like Buddhism.

Mr. Smith, your new target is the bio-lifeform (1)

gurutc (613652) | more than 8 years ago | (#14907715)

Kurzweil. Sincerely, Your Daddy the Matrix

Re:Mr. Smith, your new target is the bio-lifeform (1)

RicktheBrick (588466) | more than 8 years ago | (#14909194)

When we reach the point where humans have no reason to exist, the Matrix will do a reboot and we will again find ourselves in the Stone Age. We will then repeat human history as we have already done an unknown number of times. The only reason we exist at all is for the amusement of a superior race of beings. We are already some being's hobby.

A good defense... (3, Funny)

digitaldc (879047) | more than 8 years ago | (#14907739)

...is a good offense: build a Kevlar bubble with a 0.000000001 micron filter and start rolling over mad scientists before they can spread their evil technology. You can work off those extra pounds and save the world at the same time.

Worm Colonies (1)

gurutc (613652) | more than 8 years ago | (#14907892)

We've already seen worms used to control zombie nets for DoS attacks. When will it occur to the script-kiddie set to build a social network into worm code? The 'CPU' part of an ant's brain is incredibly simple, with very few lines of code compared to some of the more ambitious worms in the wild. All that's needed is a 'colony instinct', with a division of labor in the community. Once you have that, you'll have a simulation of the 'virtual intellect' possessed by large ant and termite colonies.
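
For a sense of how small a 'colony instinct' can be, here is a rough Python sketch of a response-threshold model of division of labor (a simplified variant of models from the ant-colony literature; all parameters are made up): each agent has a per-task threshold, takes up whichever task's demand most exceeds its threshold, and working lowers that demand.

```python
import random

TASKS = ["forage", "nurse", "build"]
# Each agent gets a random threshold per task; a low threshold = a specialist.
agents = [{t: random.uniform(0.0, 10.0) for t in TASKS} for _ in range(50)]
stimulus = {t: 5.0 for t in TASKS}  # accumulated demand per task

for step in range(100):
    for t in TASKS:
        stimulus[t] += 0.5                     # undone work piles up
    for thresholds in agents:
        # Pick the task whose demand most exceeds this agent's threshold.
        task = max(TASKS, key=lambda t: stimulus[t] - thresholds[t])
        if stimulus[task] > thresholds[task]:  # engage only if pressed
            stimulus[task] -= 0.2              # working reduces demand

print({t: round(s, 1) for t, s in stimulus.items()})
# -> demands hover near the agents' thresholds: labor divides itself
```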

Boooo!!! (1)

McPolu (932921) | more than 8 years ago | (#14908000)

What's next? The Spanish Inquisition? (Nobody expects the Spanish Inquisition :P)

Seriously, ban science? Ban experimentation? What's next? These are the same kind of people who judged Galileo.

fear-mongering? (1)

Susceptor (559115) | more than 8 years ago | (#14908038)

I don't think anyone disputes that the writers of these articles are smart guys. The question is, are they right in ringing the bells and screaming fire? History is full of brilliant people who thought that a new technology was going to lead to the end of the world. Nuclear power in the last century is just one example. My point is, Ray et al. may be smart guys, but they may also be overstating the dangers of these technologies. We humans are pretty smart at harnessing new technology for the betterment of mankind; we should not let the fear that the technology may be misused guide our thinking. Otherwise cars would never have been invented, for fear of mechanized warfare, etc.

There is nothing to "defend" against (4, Interesting)

argoff (142580) | more than 8 years ago | (#14908043)

In business 101, they teach that there are several ways for a business to guarantee a high profit. One way is to have high barriers to entry, and one way to achieve that is to create a bunch of safety and environmental regulations that act like a one-time cost for the billionaires, but an impossible barrier for small, efficient competitors.

The bottom line is that nanotech is positioned to threaten a lot of big industrial powers, and to become a trillion-dollar industry in its own right. Contrary to popular belief, these concerns are not being pushed for safety's sake, or to protect the world.... they are being pushed to control the marketplace and lock in monopolies. The sooner people understand that, the better.

A dose of reality (4, Insightful)

LS (57954) | more than 8 years ago | (#14908082)

Why don't we start making regulations for all the flying car traffic while we're at it? How many children and houses have to be destroyed in overhead crashes before we do something about it? And what about all the countries near the base of the space elevator? What if that thing comes down? I certainly wouldn't want that in MY backyard. How about:

* Overpopulation from immortality
* Quantum computers used to hack encryption
* Dilithium crystal pollution from warp drives

Come on! Are you aware of the current state of nano-tech? We've got nano-bottle brushes, nano-gears, nano-slotcar motors, nano-tubes. I.e., we've got nano-progress: zilch. We are a LONG FUCKING WAY from any real problems with this tech; in fact, so far off that we will likely encounter problems with other technology before nanotech ever bites us. Worrying about this is like worrying about opening a wormhole and letting dinosaurs back onto the earth because some physicist wrote a book about time travel.

We've got a few dozen other issues 1000 times more likely to kill us. Sci-Fi fantasy is an ESCAPE from reality, not reality itself.

Re:A dose of reality (0)

Anonymous Coward | more than 8 years ago | (#14908421)

Wormholes are letting dinosaurs back onto Earth!?!

Re:A dose of reality (2, Interesting)

vertinox (846076) | more than 8 years ago | (#14908865)

Worrying about this is like worrying about opening a worm-hole and letting dinosaurs back onto the earth because some physicist wrote a book about time-travel.

http://en.wikipedia.org/wiki/Outside_Context_Problem [wikipedia.org]
"An Outside Context Problem was the sort of thing most civilisations encountered just once, and which they tended to encounter rather in the same way a sentence encountered a full stop. The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you'd tamed the land, invented the wheel or writing or whatever, the neighbours were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had, you were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass... when suddenly this bristling lump of iron appears sailless and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you've just been discovered, you're all subjects of the Emperor now, he's keen on presents called tax and these bright-eyed holy men would like a word with your priests."
Basically, you are a tribesman who just told the witch doctor he was mad because he felt it might be possible (based on knowledge from other tribes and research) that white men with boom sticks might show up one day and deliver a world of hurt to our way of life. We could get back to worrying about next season's crop, but that won't make a hill of difference if these things did happen.

Maybe we should invest in trying to invent gunpowder or better weapons... Or maybe ally ourselves with other tribes.

Ignoring the problem won't make the conquistadors go away.

Artificial Intelligence (0)

Anonymous Coward | more than 8 years ago | (#14908217)

While we're on that subject (artificial intelligence)...

maybe there's a reason that this whole attempt at programming something that looks like it's making its own decisions (when really it just follows its code) isn't actually producing anything amazing. Maybe there is a reason computers should not have their own consciousness yet. I think that reason is that we're just not ready for it, simply because computers are so modifiable and we are not, unless we become cyborgs, which some people are already attempting.

The point is, we can't just pray for evolution to give us a pair of wings to fly away from the mess overnight; we have to wait a generation or two. Computers (if they had consciousness) would just walk/wheel/hover/fly down to their local 'PC World' and modify themselves in a couple of minutes. We're not ready for this competition, and it pushes us down the food chain. To be honest, if consciousness itself becomes easily producible by man, then we're screwed, really badly screwed: AI will then become just I. People will be pushing I into all sorts of crazy inventions, and it just won't bring any good. Our idea of good is something that fits our specifications. If we create a new life form with 'I', then it will fit its own specifications; we can't go doing to it what we did and still do with mice and rats, playing with their genes and all that.

OK, I'm rambling a little too much now, but I think I managed to get everything in. :)

dixon

Matrix anyone? (1)

Blinocac200sx (955087) | more than 8 years ago | (#14908247)

Sure, robots could in theory take over and start using us as batteries. But you have to ask yourself: why would they? Just program a good failsafe into your AI and everything will be fine anyway. No need to panic.
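
For illustration only, here's a minimal sketch of what such a "failsafe" might look like: a watchdog wrapper that vetoes forbidden actions and enforces a hard step budget. Everything here (the toy agent, the action names, the veto logic) is an invented assumption, not anyone's actual design.

<ecode>
import random

class ToyAgent:
    """Stand-in 'AI': proposes random actions, occasionally a bad one."""
    def step(self):
        return random.choice(["charge_battery", "mow_lawn", "harvest_humans"])

class FailsafeHarness:
    """Watchdog wrapper: vetoes forbidden actions and enforces a step budget."""
    def __init__(self, agent, forbidden, max_steps=20):
        self.agent = agent
        self.forbidden = set(forbidden)   # actions we refuse to carry out
        self.max_steps = max_steps        # hard cap so a runaway loop can't go forever

    def run(self):
        for step in range(self.max_steps):
            action = self.agent.step()    # the agent proposes, the harness disposes
            if action in self.forbidden:
                print(f"FAILSAFE TRIPPED at step {step}: vetoed {action!r}")
                return                    # halt instead of executing
            print(f"step {step}: executing {action}")
        print("step budget exhausted; shutting down")

FailsafeHarness(ToyAgent(), forbidden={"harvest_humans"}).run()
</ecode>

Of course, the hard part (deciding in advance which actions count as "forbidden") is exactly what this toy version hand-waves away.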

Lifeboat favoritism? (1)

tazochai (213288) | more than 8 years ago | (#14908481)

Anyone else find it suspicious that of the two award winners, one sits on Lifeboat's advisory board and the other helped design Lifeboat's website?

Yes, Let's Ban The Non-Existent! (0)

Anonymous Coward | more than 8 years ago | (#14908638)

For all the hype, non-biological replicators DO NOT EXIST.

So this sentence:

He advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments implemented as nonbiological hardware. In this context, 'artificial life' is defined as autonomous foraging replicators, excluding purely biological implementations (already covered by NIH guidelines tacitly accepted worldwide) and also excluding software simulations which are essential preparatory work and should continue."

is so full of bullshit that real scientists laugh. Now, I consider Bill Joy to be a respectable technologist; he at least has created real things in the real world. This other guy has not created anything. Ever. All of his papers are basically sketches of machines that include parts that don't exist. Not only do the parts not exist, but manufacturing them would require the very technology that is under scientific debate! It is the same as saying:

"We can design an extraplanetary spaceship that goes to alpha centurai in 3 days!! Look, here's a diagram of all of its 345,663,252 parts! It's sooo detailed! It's got everything! Oh, including a fusion generator, a warp drive, and inertial dampeners. Oh, but those things will exist sometime in the future. I gaurentee it! (And anyone who disagrees with me hates technology, humanity, and the future.)"

Yeah, real engineers don't work like that. You take the parts you currently have available, and THEN you put them together to see what you can build. Oh, and using actual mathematics helps to separate the blue-sky ideas from the ones that can actually work in physical reality. But not these nanotech guys! (The crazy ones.) They use toy models of imaginary parts to create non-existent objects in design space! And yet, if something is ever created that even remotely resembles anything these guys have sketched, they will be hailed as "visionaries". It is really easy to imagine the future. Making it happen is a whole other story. A sad state of affairs.

-Anonymous -- To avoid the flames of Drexler fanboys.

Jeff Hawkins (0)

Anonymous Coward | more than 8 years ago | (#14908728)

A great argument for continued research into intelligent machines is presented by Palm founder Jeff Hawkins in his book On Intelligence. In the book, Hawkins discusses his view of what makes a creature "intelligent," relating it to the potential for development of intelligent machines. In short, he says that the development of intelligent machines is not the issue we should be worrying about. Instead, it is the development of emotional machines, for it is the desire for power and control that would cause any entity to self-replicate and plan a world takeover.

Take a look; it's interesting stuff.

Re:Jeff Hawkins (1)

Dark_MadMax666 (907288) | more than 8 years ago | (#14909159)

for it is the desire for power and control

I would say a desire for power, control, and knowledge is necessary. Think about it: you design a benevolent, goody-two-shoes AI that "does no evil"; how long will it fare against a competing AI that is ruthless, focused on gaining more knowledge, power, and control, without taking any of the fluffy stuff into account?

The competing AI may not be extreme from the start, but due to natural competition for resources, only those with a desire for power, control, and knowledge will survive.

That's inevitable, because of the "laws" of systems evolution.
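
As a toy illustration of that selection argument (and nothing more): a made-up simulation in which agents with a higher "acquisitiveness" trait grab a larger slice of a fixed resource pool and replicate in proportion to their take. All the numbers and dynamics here are invented assumptions.

<ecode>
import random

def generation(pop, pool=1000.0):
    """One round: split a fixed resource pool in proportion to each agent's
    'acquisitiveness' trait, then reproduce in proportion to the take."""
    total = sum(pop)
    offspring = []
    for acq in pop:
        share = pool * acq / total                # greedier agents grab a bigger slice
        for _ in range(int(share / 10)):          # assume 10 resource units per offspring
            offspring.append(max(0.01, acq + random.gauss(0, 0.05)))  # heritable trait + mutation
    return random.sample(offspring, min(100, len(offspring)))  # environment caps the population

pop = [random.uniform(0.1, 1.0) for _ in range(100)]  # start with a mix of temperaments
for _ in range(50):
    if not pop:
        break
    pop = generation(pop)
if pop:
    print("mean acquisitiveness after 50 generations:", round(sum(pop) / len(pop), 3))
</ecode>

Run it and the mean trait drifts steadily upward: no agent "wants" power, but the competition for resources selects for grabbing it anyway, which is the parent's point.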

What about the Sims?!? (1)

Comboman (895500) | more than 8 years ago | (#14908766)

He advocates "an immediate international moratorium, if not outright ban, on all artificial life experiments

Does this mean my copy of the Sims will be banned?

In this context, 'artificial life' is defined as autonomous foraging replicators ... excluding software simulations which are essential preparatory work and should continue.

Oh, nevermind.

Bah! (1)

Jonboy X (319895) | more than 8 years ago | (#14909018)

Nanotech? Frankly, it's not in my top 100 list of things likely to end the world within my lifetime.

No, what really keeps me up at night is Femtotech.

Defending Against Harmful Nanotech and Biotech (0)

Anonymous Coward | more than 8 years ago | (#14909055)

The OS will be MS Robo-Windows ZX. A brain-dead teen working in his bedroom could write a virus script that brings it to its knees in less than two minutes. There is nothing to worry about.

We can't even write a 1,000-line program without bugs. How will this godlike robot, with billions of lines of code and millions of bugs, be able to function? FOSS? A million eyes with the brains behind them high on Twinkies and Jolt Cola? You must be kidding.
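
To put rough numbers on that (the defect density below is a ballpark assumption, folklore figures run from well under 1 to over 20 bugs per thousand lines, not a measurement):

<ecode>
# Back-of-the-envelope for the parent's point, using an assumed defect density.
defects_per_kloc = 1.0          # assume unusually well-tested code: ~1 residual bug per 1,000 lines
robot_os_lines = 1_000_000_000  # "billions of lines of code", per the parent

residual_bugs = robot_os_lines / 1000 * defects_per_kloc
print(f"expected residual bugs: {residual_bugs:,.0f}")   # prints 1,000,000
</ecode>

Even granting a generously low bug rate, "millions of bugs" is about the right order of magnitude.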

The article is nothing but a rehash of "I am afraid of the future, so the future must be stopped." The future is going to happen with you or without you. It's your choice.

It's only natural (2, Interesting)

gregor-e (136142) | more than 8 years ago | (#14909381)

Look at the fossil record. Well over 99% of all species that have ever existed are now extinct. More precisely, they have been displaced by or transformed into species that are better adapted to exploit the resources of their niche. How can we expect it to be any different for humans? As soon as an intelligence exists that is better at allocating resources than humans, it will become the apex species. Since this intelligence appears most likely to arise as a result of human effort, it can be thought of as a transformation of humans. This transformation differs from earlier ones in that it is likely to result in a non-biological intelligence, and in that it is a function of intelligence (human, mostly) rather than of environmental selection pressures. It will also mark an inflection point in evolution, where future generations are primarily a product of thought rather than of random variation and selection.