
Vinge and the Singularity

michael posted more than 13 years ago | from the i-can't-do-that,-dave dept.

News 163

mindpixel writes: "Dr. Vinge is the Hugo award-winning author of the 1992 novel "A Fire Upon the Deep" and the 1981 novella "True Names." This New York Times piece (registration required) does a good job of profiling him and his ideas about the coming "technological singularity," where machines suddenly exceed human intelligence and the future becomes completely unpredictable." Nice story. And if you haven't read True Names, get a hold of a copy; plenty of used ones out there.


Slightly OT (1)

Bob McCown (8411) | more than 13 years ago | (#2177238)

Years ago (early 90's) I started a GURPS space campaign, spent weeks setting up the scenario, etc. A week before the game was going to begin, a friend of mine (one of the people that was going to play in the game) hands me "A Fire Upon the Deep" to read. As I started reading, it sounded vaguely familiar. Turns out my scenario for the GURPS game was remarkably close to the plot of the book. QUICK! To the re-write cave, Robin!

Why does a machine need to be conscious? (2)

joshv (13017) | more than 13 years ago | (#2177243)

Everyone seems to be all wrapped up in 'consciousness' and 'emotion'. Machines must certainly have these things to take our place, right? Nope. All they need is the capability to reproduce themselves and to do it more frequently or more efficiently than the biological systems that came before.

Something as simple as a self-replicating nano-bot (whatever that is) that consumes oxygen for energy could end up being the only non-plant form of life on the planet if it replicated out of control and drove oxygen levels below those needed to sustain animal life.

Currently machines do replicate and improve themselves, with the help of humans. Over time the amount of help they need is continually decreasing. I do not think that machines will need to be as intelligent as humans to decrease the amount of human assistance required for replication to near zero.


Re:Smartness is Overrated (2)

Bearpaw (13080) | more than 13 years ago | (#2177244)

The people who run the world -- to whatever extent anyone runs it -- would no doubt be pleased that you think that the people that you've been led to believe run the world aren't smart.

(I mean, that's if they had any reason to really care about your (or my) opinion. Which they probably don't, except perhaps as just another tiny part of the masses.)

And the point isn't that supersmart machines would necessarily want to run the world, it's that it's hard to guess what they would want. Or why they should care if what they want happens to be at odds with what we might want. Why would what we want be at all relevant to them?

Huh? (2)

dennism (13667) | more than 13 years ago | (#2177245)

Where machines suddenly exceed human intelligence and the future becomes completely unpredictable.

It's funny to see someone predicting the future and at the end of their prediction ruling out the possibility of future predictions.

My prediction: That this prediction will end up like the majority of predictions -- wrong.

Re:Why emotion? (2)

Sloppy (14984) | more than 13 years ago | (#2177246)

Emotion, for the most part, is a chemical reaction to events, that's all.

Emotions are much more than just chemical reactions. Chemical reactions are just how the human brain happens to implement emotions. Emotions have function and behavioral consequences (e.g. you lust for a female, so you sneak up behind her, restrain her, and hump her -- oops, I mean -- you talk to her and find out her astrological sign and phone #) and that behavior has emerged through (and been shaped by) the evolutionary process. Emotions do things useful for continued survival of the genes that program the chemical processes that implement the emotions, it's not just some weird byproduct.

An AI that is created through an evolution-like process (and there is a very reasonable chance that this is how the first AI will be made) will benefit from the behavior-altering characteristics of emotions, so they will probably emerge. Sure, they won't be implemented as chemical processes (well, I guess that depends on how future computers work ;-) but they'll be there.


Re:The Singularity and Computational Efficiency (2)

Sloppy (14984) | more than 13 years ago | (#2177247)

human beings, with their neurons clicking away at petacycles per second, can only do arithmetic extremely poorly, at less than a flop!

Mathematics as we know it has only been around for a couple thousand years (and was pretty darned simple until just a few hundred years ago), but humans have been around for hundreds of thousands of years. This means that the ability to do arithmetic quickly simply isn't something that humans need in order to survive, thus evolutionary forces have not optimized our hardware for doing it.

If you want AIs that are fast at arithmetic, evolve them in a virtual environment where arithmetic ability is an important selection criterion.
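That setup can be sketched as a toy genetic algorithm (every name and number below is my own invention, purely for illustration): individuals are candidate "brains" of the form f(x, y) = a*x + b*y, and the selection criterion is how closely they compute addition, so the population drifts toward a = b = 1.

```python
import random

# Toy "virtual environment" where arithmetic ability is the selection
# criterion. An individual is a pair (a, b); perfect addition is (1, 1).

def fitness(ind, trials=20):
    a, b = ind
    err = 0.0
    for _ in range(trials):
        x, y = random.uniform(-10, 10), random.uniform(-10, 10)
        err += abs((a * x + b * y) - (x + y))  # distance from true addition
    return -err  # higher fitness = smaller arithmetic error

def evolve(pop_size=50, generations=100):
    pop = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection: keep best half
        children = [(a + random.gauss(0, 0.1),      # reproduction with mutation
                     b + random.gauss(0, 0.1)) for a, b in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges toward (1.0, 1.0)
```

Nobody would evolve addition this way in practice, of course; the point is just that the environment's fitness function, not the substrate, decides what the evolved system gets fast at.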


Re:"A Fire..." and Anachronistic Commentary (3)

Sloppy (14984) | more than 13 years ago | (#2177248)

The concept of an email virus would not be some grand prophetic vision in 1992.

I don't think many people back then had any idea that it would suddenly become "normal" for people to execute untrusted data with full privileges. The concept is still mind-boggling even today, let alone in 1992.

OTOH, it's more of a social issue than a technological one. I guess it doesn't take much vision to realize: People are stupid.


Re:Flawed assumptions? (2)

HiThere (15173) | more than 13 years ago | (#2177249)

If you would read the original paper, you would note that Vinge postulates several differing ways in which superhuman intelligence could be achieved. Some of them are similar to the net with added computational clusters. For these to be successful, it seems to me there are no technical problems that currently need to be solved. The problems are more organizational and economic.

Consider, e.g., a large company that implemented an internal copy of the net. Now it has its network servers, attached, but there's this problem of locating the information that is being sought. So it implements xml based data descriptions, and an indexing search engine. And, as computers get more powerful, it uses a distributed-net approach to do data-mining, with a neural net seeking the data, and people telling it whether it found what they wanted, or to look again. As time goes by, the computer staff tunes this to optimize storage, up-time, etc. The staff trains it to present them the information they need. It learns to recognize which kinds of jobs need the same information at the same time, which need it after a time delay, etc. And then it starts predicting what information will be asked for so that it can improve its retrieval time. ...

Of the entire network, only the people are separately intelligent, but the network is a lot more intelligent than any of its components, including the people. The computers may never become separately intelligent. But the network sure would.

Still, I expect that eventually the prediction process would become sufficiently complete that it would also predict what the response to the data should be. So it could predict what data the next person should need. So it could predict what answer the next person should give. So ...
So if anybody called in sick, or went on vacation, the network would just heal around them. And eventually...

Caution: Now approaching the (technological) singularity.

Talk - an early form of instant messaging? (2)

Raphael (18701) | more than 13 years ago | (#2177255)

Quote from the article:

The idea for "True Names" came from an exchange he had one day in the late 1970's while using an early form of instant messaging called Talk.

Is it just me, or did anyone else pause for a second after reading that sentence? As far as I remember, most of the operating systems that had access to the Internet had some form of a "talk" program. This includes all UNIX-like operating systems that I tried, such as Ultrix, SunOS, Solaris, HP-UX, A/UX, AIX and now Linux, but also some IBM 3090 mainframes (although these were batch-processing machines, there was also a way to talk to other users).

The term "instant messaging" was coined much later: only a few years ago, when Windows started to invade all desktops and AOL started promoting its AIM. Seeing "talk" defined as "an early form of instant messaging" just looks... strange to me.

Re:We've already been through a singularity (2)

dutky (20510) | more than 13 years ago | (#2177256)

Actually, there is another (semi-)recent event that more closely resembles Vinge's singularity, where our own artifacts have overtaken us by one means or another: the rise of corporations.

Corporations are an artifact of our legal systems and have steadily grown in power and efficacy since they were first conceived several hundred years ago. At this point they are self-sustaining and self-reproducing, even pursuing their own agendas that have only a tangential relationship to individual human agendas.

I think it is interesting to note, however, that corporations are not, by almost any measure, smarter than individual humans; quite the opposite (consider well known sayings about the I.Q. of a mob or design by committee). The issue isn't whether our creations become more intelligent than us, but whether they become more potent than us.

Corporations have become more potent than individual humans because 1) they can amass far larger fortunes (in terms of manpower, money, land, or almost any other measure) than an individual, and 2) they are, essentially, immortal and, to a large extent, unkillable (while the law may, technically, be empowered to disband a corporation, in practice this is nearly impossible). Corporations are essentially god-like: omnipotent (if not omniscient) and immortal, invulnerable to almost any harm, complete with their own mysterious motives and goals.

So, if we accept that the singularity has already occurred, we might ask why we aren't more aware of its aftereffects. The answer, of course, is that the corporations don't want us to be aware, and are doing everything in their considerable power to obscure the effects of the singularity. Life goes on as normal, as far as lowly humans are concerned, because it would be terribly inconvenient for the corporations if it didn't (modulo pollution, environmental destruction and a moderate amount of human suffering and exploitation).

The Singularity and Computational Efficiency (5)

RobertFisher (21116) | more than 13 years ago | (#2177258)

Vinge is not the only one to notice that the rate of growth in computer devices, if extrapolated for a few decades, will eventually exceed the capacity of the human brain, both in terms of storage capability and in terms of processing speed. Indeed, this very notion forms the basis of many of Joy's and Kurzweil's recent discussions.
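For concreteness, here is the back-of-the-envelope version of that extrapolation. The brain estimate (~10^16 ops/sec), the starting point, and the doubling time are all my own assumptions, not figures from the post:

```python
# Naive Moore's-law extrapolation: how long until machine throughput
# matches a rough estimate of the brain's raw processing rate?

brain_ops = 1e16        # assumed brain processing rate, ops/sec
machine_ops = 1e9       # ~1 GFLOPS, roughly a high-end machine circa 2001
doubling_years = 1.5    # assumed Moore's-law doubling time

years = 0.0
while machine_ops < brain_ops:
    machine_ops *= 2
    years += doubling_years

print(years)  # → 36.0 with these assumptions
```

Which is exactly the "few decades" figure, and exactly the extrapolation the rest of this comment argues is beside the point: matching raw capacity says nothing about matching algorithmic efficiency.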

However, in doing this extrapolation, one is making a few assumptions. Most notable is the assumption that one can teach a computer how to "think" using some (probably very complex) set of algorithms with computational efficiency comparable to the human brain's, if one indeed had a computer with similar processing and storage ability as the human brain. That logic is quite flawed, due to the assumption of computational efficiency.

What do I mean by computational efficiency? Roughly speaking, the relative performance of one algorithm to another. For instance, in talking about the singularity (as Vinge puts it), one often neglects to notice the fact that human beings, with their neurons clicking away at petacycles per second, can only do arithmetic extremely poorly, at less than a flop! Logical puzzles often similarly vex humans (witness the analytic portions of the GRE!), where they also perform incredibly poorly. Significantly, human beings are very computationally inefficient at most tasks involving higher brain functions. We might process sound and visual input very well and very quickly, but most higher brain functions are very poor performers indeed.

One application of a similar train of logic is that human beings are the only animals known to be capable of performing arithmetic. Therefore, if one had a computer comparable to the human brain, one could do arithmetic. Heck, by this logic, we're only 50 years away from using computers to do integer addition!

The main point here is that, with regards to developing a "thinking" machine, WE MIGHT VERY WELL have the brute force computational resources available to us today. The hardware is not the limitation, so much as our ability to design the software with the complex adaptive ability of the human brain.

Just WHEN we will be able to develop that software, no one can really say, since it is really a fundamental flaw in our approaches, rather than in our devices. (It is similar to asking when physicists will be able to write down a self-consistent theory of everything. No one can say.) It could happen in a decade or two, or it could take significantly longer than 50 years. It all depends on how clever we are in attacking the problem.

Diaspora by Greg Egan (1)

McFarlane (23995) | more than 13 years ago | (#2177259)

let me plug the novel Diaspora by Greg Egan as an interesting look at what the singularity will mean to the future of humanity - the history of the rest of time reduced to handy pocket novel size

Re:Flawed assumptions? (2)

iapetus (24050) | more than 13 years ago | (#2177260)

Very interesting, but it still doesn't address the question of whether artificial intelligence that approaches human intelligence, let alone surpasses it, is possible. A lot of the ideas a century ago about what the future would contain in 100 years were wrong. In fact the same is true for much shorter periods of time.

Yes, technology will advance in the next X years, but to assume that a necessary part of that advancement is the creation of a machine that is more intelligent than a human is just plain ridiculous. Some would argue that a machine intelligence of that nature is absolutely impossible in the first place (not that I agree with them, but there are rational arguments that suggest this).

I'm basing my view on the state of AI and what we can expect in the future on the results of research I've seen and carried out at some of the top AI departments in the world, so I think I've got a fairly good grasp of the subject matter, and I am 100% happy to say that faster computers will not give us any form of machine intelligence.

Re:Flawed assumptions? (3)

iapetus (24050) | more than 13 years ago | (#2177261)

The world becomes stranger faster, every year.

But very rarely in the ways you expect. Look at the predictions people were making for life in the year 2000 back in 1800, or 1900, or 1950, or even 1990. You'll see that a lot of it didn't happen. Some did, and some things that people hadn't even considered happened as well. But a lot of it just didn't take place.

Regardless of whether advancement takes place, the link that Vinge assumes between computer hardware performance and computer intelligence does not exist. If true machine intelligence comes about within the next thirty years it will not be as a direct result of improved hardware performance. There aren't any systems out there that aren't intelligent now but would be if we could overclock their processors to 150GHz.

Flawed assumptions? (5)

iapetus (24050) | more than 13 years ago | (#2177262)

Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years.

Progress in computer hardware has followed this curve and continues to do so. Progress in computer intelligence however, hasn't. Computers are still stupid. They can now be stupid more quickly. This isn't going to produce super-human intelligence any time soon.

Dr Vinge reminds me somewhat of that most mocked of AI doomsayers, Kevin Warwick.

Re:Talk - an early form of instant messaging? (1)

Galahad (24997) | more than 13 years ago | (#2177264)

Ah, but tis true. Welcome to a new way of looking at the world.

Now that we have defined that equivalence, are there any IM patents that need busting?

Re:The Singularity and Computational Efficiency (2)

fluffhead (32589) | more than 13 years ago | (#2177267)

Vinge does allude to this in the Singularity paper:
But it's much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment.

#include "disclaim.h"
"All the best people in life seem to like LINUX." - Steve Wozniak

ray? (1)

Mr. Quick (35198) | more than 13 years ago | (#2177269)

did ray kurzweil just bogart this dude's idea for his new book?

Re:ray? (1)

Mr. Quick (35198) | more than 13 years ago | (#2177270)

hey jackass, why don't you look down the page some more?
you would have seen this.

Re:ray? (1)

Mr. Quick (35198) | more than 13 years ago | (#2177271)

you're a prick!

Re:Flawed assumptions? (2)

ajs (35943) | more than 13 years ago | (#2177272)

The idea is that *all* technology is asymptotic. Yes, computer *speed* is a simple (non-asymptotic, so far) progression. AI seems to have gone nowhere (except where we redefine the term), but the IMPACT on our culture and our world has been on a curve, the function of which is only starting to become evident. Think about what a man from 1800 would say about our world (it would probably involve lots of screaming). Now think about how someone from 1900 would feel (not a WHOLE lot different, but the acceptance of, if not comfort with, electricity is there at least). Now, think about someone from 1950 (you actually HAVE little radios that you can talk to people through? you can travel to Japan HOW fast? old people get replacement WHATS?)

Technology in genetics, networking, materials science and electrical engineering is progressing at a frightening rate. Soon, we'll be able to construct useful, microscopic machines; implanted computers; and who knows what else.

The world becomes stranger faster, every year.

Aaron Sherman (

Across Realtime and the signularity (3)

ajs (35943) | more than 13 years ago | (#2177273)

So, this idea is introduced in book 2 (or 3, depending on how you count) of "Across Realtime", a novel of his (it was originally 2 short novels and a novella, I think).

The idea is that technology progression is asymptotic, and will eventually reach the point where one day of technological progress is equal to all that of human history, and then, well... there's the next day. He doesn't cover exactly what it is, because by definition, we don't know yet. But, it's catastrophic in the novel. A good read (actually the first part, which basically just introduces the "Bauble", is a good read alone).

He sort of refined the idea into something maintainable in Fire Upon the Deep by introducing the concept of the Slow Zone which acts as a kind of buffer for technology. If things in the Beyond get too hairy, the Slow Zone always remains unaffected, and civilization can crawl back up out of the "backwaters" (e.g. our area of the galaxy).

He's a good author, and I love his take on things like cryptography, culture (A Deepness in the Sky), religion, USENET (Fire Upon the Deep), Virtual Reality and cr/hacker culture (True Names).

Aaron Sherman (

Re:The Singularity and Computational Efficiency (1)

Mr. Slippery (47854) | more than 13 years ago | (#2177274)

The hardware is not the limitation, so much as our ability to design the software with the complex adaptive ability of the human brain.

It may well be that we'll never be able to design such software.

However, we could evolve it. Using genetic algorithms and other "evolutionary" programming approaches seems to me the most promising route.

Tom Swiss | the infamous tms |

Human intelligence? (4)

Hard_Code (49548) | more than 13 years ago | (#2177277)

I'm still waiting for humans to exceed human intelligence...we're all so obsessed about what the "robots" will do in the future when they get smarter than us. The present sucks already.

scold-mode: off

We've already been through a singularity (2)

Hydrophobe (63847) | more than 13 years ago | (#2177278)

The human race has already been through a singularity. Its aftermath is known as "civilization", and the enabling technology was agriculture, which first made it possible for humans to gather in large permanent settlements.

There are a few living humans who have personally lived through this singularity... stone-age peoples in the Amazon and Papua New Guinea abruptly confronted by it. For the rest of the human race it was creeping and gradual, but it still fits the definition of a singularity: the "after" is unknowable and incomprehensible to those who live in the "before".

Re:Singularity, SETI and the Fermi Paradox (2)

Hydrophobe (63847) | more than 13 years ago | (#2177279)

There are other possibilities as well for SETI's lack of success. Our solar system and our planet may be fairly unusual in some ways:

  • the Sun is constant (not a variable star)
  • the Sun is a single star (not a binary or multiple star)
  • the presence of Jupiter in its current location (large planet gravitationally deflects and sweeps away comets and small asteroids that cause catastrophic extinctions)
  • nearly circular orbits (many of the extrasolar planets that have been discovered are in highly eccentric orbits)
  • the presence of the Moon (large satellite that causes tides, which are important in the development of terrestrial life)
  • plate tectonics (present on Earth but not on Venus, may be crucial)
  • the positioning of the Earth in the "habitable zone" of the solar system (5-10% closer or farther to the Sun, and advanced life wouldn't develop)
  • the positioning of the Sun in the "habitable zone" of the galaxy (too much farther out and metals are too sparse and element ratios are unsuitable, too much closer and you run the risk of mass extinctions from supernovas and the like)

Probably bacteria-like life is extremely common, but advanced intelligent life might in fact be somewhat rarer than was once thought.

virtual reality progress = ghost planet (3)

Hydrophobe (63847) | more than 13 years ago | (#2177280)

Another strong possibility (for lack of SETI) is that intelligent races prefer virtual reality to real reality, in much the same way that the human race prefers to sit inside watching TV instead of going outside for a walk in the woods and grasslands where we evolved.

When we have better than Final Fantasy rendering in real time, most of the human race will probably choose to spend most of the day living and interacting there, in virtual-reality cyberspace... in much the same way that many of us today spend most of the day in an office environment, living and creating economic value in ways incomprehensible to our hunter and farmer ancestors.

When this happens, the planet may seem empty in many ways... in much the same way that suburban streets in America seem empty to a third-world visitor used to bustling and noisy street life.

This phase (the human race moves into and settles cyberspace, becomes less visible in the physical world) is not the same as the Singularity. For one thing, it is not at all dependent on future advances in artificial intelligence... we just need ordinary number-crunching computers a few orders of magnitude faster than today's.

If the AI naysayers are right, and machines never get smart enough, then the Singularity will never happen... but the "ghost planet" scenario will inevitably happen in our lifetime... either as a result of progress, or as the unhappy result of plague or nuclear war.

Re:get a copy...if you can (1)

puterGeezer (66610) | more than 13 years ago | (#2177281)

Google turned up this:

True Names - the novel by Vernor Vinge

Re:Deepness in the Sky - Focus (2)

alispguru (72689) | more than 13 years ago | (#2177285)

The scariest part of Deepness for me was his idea of Focus - a biotechnology for inducing hacker trance artificially and indefinitely in humans. Focus was the basis of the power of the bad guys in the novel - their automated systems had super-human reasoning abilities because they were based on networks of Focused humans and computers.

We had better hope that AI (and hence the Singularity) is indeed possible, because if it isn't, Focus is almost certainly possible, and with it tyranny on a scale we can barely imagine.

Singularity is "Rapture for Nerds" (1)

meehawl (73285) | more than 13 years ago | (#2177286)

Ken MacLeod noted this in a Salon article.

Re:Respect Copyright (1)

BlueUnderwear (73957) | more than 13 years ago | (#2177287)

Didn't you just violate their copyright yourself by publishing that notice? I'd think "all materials" means just that: all materials on the site, including the copyright notice. Now, where can I report you to the NYTimes?

True Names Re-issue keeps getting delayed (2)

billstewart (78916) | more than 13 years ago | (#2177288)

If you look at True Names in Amazon, you'll see that it's going to be reissued Real Soon Now, with a bunch of introductory essays on various topics, as True Names and the Opening of the Cyberspace Frontier. Friends of mine wrote some of the essays, so I've been interested in getting a copy. Unfortunately, it's been going to come out Real Soon Now for about 5 years, and every 6-12 months the publication date slips another 6-12 months - This time for Sure! Some of the essays that were cutting-edge when they were written are going to start to look like old science-fiction by the time they actually get published...

Re:The Singularity and Computational Efficiency (1)

Demiah (79313) | more than 13 years ago | (#2177289)

Thanks for the post, and while I (naturally) agree with your conclusion that AI is a software rather than a hardware problem, your observation that human beings, with their neurons clicking away at petacycles per second, can only do arithmetic extremely poorly, at less than a flop, only describes the calculations we carry out consciously. This doesn't really apply to the autistic lightning calculators - or even to us when we're doing the equivalent of calculus to, say, catch a ball or drive a car. Trying to think about what you're doing under those circumstances tends to make the task quite a bit harder.

(Is consciousness over-rated? :)

Is there anyone out there who knows more maths than me who's willing to tell me what my brain can do that a neural net of sufficient size can't?

All I have to say to Vinge is... (1)

SteveC (79927) | more than 13 years ago | (#2177290)


I like his books, but his predictions about the future are about as likely as those from the 50's stating that we would all have our own flying vehicles by now.

Re:Huh? (1)

awaterl (85528) | more than 13 years ago | (#2177291)

The word 'paradoxon' has a nice ring to it, and looks as though its root word is 'paradox'. What exactly does it mean?

Great teacher (1)

d2ksla (89385) | more than 13 years ago | (#2177292)

I took his microcomputer architecture class at SDSU back in '93. He was probably the best teacher I've had so far, being really clear and logical. Not to mention a hardcore assembly programmer, having us do labs using the multitasking mini-OS he had written in 68K assembly...

Re:Deepness in the Sky - Focus (2)

Ronin441 (89631) | more than 13 years ago | (#2177293)

The scariest part for me was that Focus is a plot device to let the author talk about us. The Focused people were, as you mention, hackers, and they were slaves. The point (for me, at least) is not that some super-biotech could be created to convert humans into willing slaves -- it's that we hackers already willingly enslave ourselves. Our central philosophy puts our focus on doing the work first, and being paid for the work second. As long as our employer continues to give us interesting puzzles to solve, and interesting tools to solve those puzzles with, we will be his willing slaves.

Scares the shit out of me.

The non-Singularity (2)

Keelor (95571) | more than 13 years ago | (#2177294)

A great book, and a (somewhat humorous) look at what might happen if the Singularity cannot be reached, is A Deepness in the Sky. The common counter-argument to the future of incredibly intelligent AI is that we can't even write a word processor without bugs right now -- well, Deepness takes that idea and runs with it. In that future, AI, FTL travel, and all those fun science fiction ideas were never realized. Instead, people have to deal with spending years going between stars, isolated civilizations rising and collapsing over and over again, and 10,000 years of legacy code. The hero of the book gets much of his power from the fact that he actually understands a decent amount of the legacy code.

Vinge has made it fairly clear that he doesn't think that Deepness is where society is going--he seems fairly confident that we'll reach the Singularity.


Re:Flawed assumptions? (2)

Steeltoe (98226) | more than 13 years ago | (#2177295)

so I'm fairly confident I'll live to see computers at least as intelligent as I am. And I'm 54.

Well, that doesn't say much. Because either A) you're not very bright or B) you live a very safe and healthy life, so you expect it to be looong. ;-)

But seriously, don't you think there's a huge step from building an artificial neurological brain to making it actually work? We may imitate some internal processes in the neurons, but the brain has a huge and complex architecture suited to human activity and the human body. I believe it can be done, roughly, but if it's going to be in MY lifetime there'll have to be HUUGE advances soon.

I don't believe these AIs will be comparable to humans that soon though. Much of human thinking is not logical at all. If we were to only live perfectly logical lives, I think I'd vote myself out of "humanity". Because much of our joy and fun is not logical at all.

Then again, it all really depends what you mean by intelligence too. That's just another can of worms, making such statements completely arbitrary.

- Steeltoe

Re:Vinge's Singularity is AI Doc Numero Uno! (1)

topos (102469) | more than 13 years ago | (#2177296)

Maybe you would be better off writing science fiction. It's easier than creating a "superintelligence beyond any human IQ".

And if you're really serious about this, remember that a lot of clever people have tried this before you, and utterly failed.

Good luck anyway!

Re:Knowledge Crash (2)

Rand Race (110288) | more than 13 years ago | (#2177297)

Roger MacBride Allen's Hunted Earth series volume one: The Ring of Charon. Volume 2 is The Shattered Sphere. It's been a while since I read them and I don't remember the knowledge crash stuff, but they are pretty good hard scifi IIRC.

Allen also wrote a book called The Modular Man, about a man who downloads his personality into a robotic vacuum cleaner, that is excellent and deals with many of the same concepts Dr. Vinge is talking about.

Knowledge Crash (4)

Satai (111172) | more than 13 years ago | (#2177298)

One of the more interesting ideas I've read about is the Knowledge Crash. I'm not entirely sure how feasible a theory it is, but it was proposed in a science fiction series I read a while back. (The first book was called something about Charon... Ring of Charon? Moon of Charon? Something like that.)

The idea is, basically, that every year it costs more to educate someone. In order to be able to expand our collective knowledge, or even to utilize the machines and operate the systems of the present, it will cost a certain amount of money in the education process.

In addition, we can quantify the amount of output a single human creates in his or her lifetime. For instance, if she works for thirty years at a power plant or something, we can determine the value that she has contributed to society.

As systems become more complex, more education is required. The education costs more money. At some point, if this continues unchecked, we will be faced with a situation where the cost of education exceeds the value brought as a result of that education.

That's called the Knowledge Crash. (Or it was in the books.)
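The arithmetic behind the idea can be sketched in a few lines. This is a toy model with invented numbers and growth rates (nothing from the books themselves): if the cost of training a worker compounds faster than the value that worker returns over a career, a crossover eventually arrives no matter how large the initial gap.

```python
# Toy model of the "Knowledge Crash": education cost compounds faster
# than the lifetime value a trained worker contributes. All starting
# values and growth rates are invented for illustration.
def crash_year(cost0=50_000.0, cost_growth=0.06,
               output0=1_000_000.0, output_growth=0.02):
    """First year from now when training cost exceeds lifetime output."""
    cost, output = cost0, output0
    for year in range(1, 1000):
        cost *= 1 + cost_growth       # education gets pricier every year
        output *= 1 + output_growth   # worker value grows more slowly
        if cost > output:
            return year
    return None  # no crash within the horizon

# Even a 20x head start for output is eaten by a 4% annual growth gap.
print(crash_year())
```

The point of the sketch is only that the crash is driven by the *difference* in growth rates, not the starting values; whether real education costs and real productivity behave this way is exactly what the comment doubts.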

While I'm not convinced that this is true, it's certainly an interesting theory. It seems to me that, on average, this can't happen, as one of the points of creating more and more complicated (generic) systems is to facilitate simpler and simpler controls, and thus dumber and dumber operators. While the creators of those systems may have 'crashed knowledge,' it seems that the whole point of that would be to hurl some value at the workers.

But then you have to consider that, inherent in the value of a designer, the ease of use is part of the entire value analysis versus education, and then that'll crash...

Re:Flawed assumptions? (2)

(void*) (113680) | more than 13 years ago | (#2177299)

But seriously, don't you think there's a huge step from building an artificial neurological brain to making it actually work. We may imitate some internal processes in the neurons, but the brain has a huge and complex architecture suited for human activity and body. I believe it can be done, roughly, but if it's going to be in MY lifetime there'll have to be HUUGE advances soon.

While skepticism is a fine sentiment, I can't help noticing that you are making more assumptions than Vinge is. Sure, we will be able to simulate or imitate the brain roughly -- but I think it is a stretch to demand that consciousness come only from detailed imitation. It may be that roughly is enough. The brain is also a finite machine - we will soon be able to build electronics that exceed its capacity.

Singularity, SETI and the Fermi Paradox (1)

jfaughnan (115062) | more than 13 years ago | (#2177301)

One of the best pieces of indirect evidence for the inevitability of the Singularity is the Fermi paradox, and, to a lesser extent, SETI's lack of success (to date).

Fermi showed that, given reasonable assumptions, we ought to expect "ET" to be ubiquitous. Since extraterrestrials are not all around us, this suggests either that technologic civilizations are exquisitely rare or that they rapidly lose behaviors like migration and radio communication. By rapidly I mean within two to three hundred years.

The Singularity is the kind of event that would do that. If technologic civilizations always progress to a Singularity they may well lose interest in minor details like reproduction and out migration. Among other things they would operate on very different time scales from pre-Singular civilizations.

See also [] .

John Faughnan

Misconception about Vinge's Singularity (1)

yooden (115278) | more than 13 years ago | (#2177304)

Vinge does not require the advancement of computers to a point at which they are regarded intelligent. This is only one of several possibilities mentioned in his paper [] .

Other possibilities include:

  • "Waking up" of computer networks.
  • Humans using sophisticated HCI [] . (e.g. Vinge's Focused, Stephenson's Drummers)
  • Genetically altered humans. (Card's Descolada?)

Deepness in the Sky (2)

yooden (115278) | more than 13 years ago | (#2177306)

'Deepness' [] is the Prequel to 'Fire upon the Deep' and even better. Read it first.

While there is more discussion about non-human intelligence in 'Fire', the actual impact of Vinge's idea is greater in 'Deepness', where his excellent world-building skill is used to create the best traditional SF I know.

Both 'Deepness' and 'Fire' also feature some really neat alien races.

Re:I don't buy it. (2)

small_dick (127697) | more than 13 years ago | (#2177308)

> Artificial intelligence will never have emotions.

Snicker. I think Dr. Vinge is right...and I think it is scary. If you are familiar with electronics, think about how a diode avalanches.

If he is correct, AI could well "avalanche" past what evolution gave us in a very, very short period of time.

Humans learn at a given pace. We are nearly helpless at birth, yet can be a "MacGyver" in our twenties and thirties, able to make a helicopter gunship from nothing but baling wire and duct tape (on tv anyway). That's a 20-30 year span, or nearly a quarter of our lives to reach our maximum potential.

Who is to say an AI system could not, at some point, triple its cognitive abilities in a 100 ns time slice?
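The arithmetic behind that "avalanche" intuition is easy to sketch. This is a toy illustration only; the tripling factor comes from the post above, not from any real model of cognition:

```python
# Toy sketch of the "avalanche" intuition: if each improvement cycle
# multiplies capability by a constant factor, passing any fixed
# threshold takes remarkably few cycles.
def cycles_to_exceed(threshold, factor=3.0, start=1.0):
    """Count improvement cycles until capability exceeds threshold."""
    capability, cycles = start, 0
    while capability <= threshold:
        capability *= factor
        cycles += 1
    return cycles

# Tripling every 100 ns slice: a billionfold gain takes under 20 cycles,
# i.e. on the order of 2 microseconds of "subjective" self-improvement.
print(cycles_to_exceed(1e9))
```

Compare that with the 20-30 years of human learning the comment describes; the contrast between the two time scales is the whole point of the avalanche analogy.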

And to think I didn't take his class cuz some lamer told me he was a "hard ass" -- rats. That's what I get for listening to lamers. SDSU has so many wonderful Professors...Vinge, Baase, Carroll. Great University, great professors, great memories.

Treatment, not tyranny. End the drug war and free our American POWs.

Smartness is Overrated (2)

YIAAL (129110) | more than 13 years ago | (#2177309)

Oh, NOOO! When machines become supersmart, they'll run the world, because right now the smartest people are the ones who're running the world, and supersmart machines will be even -- oh, wait, never mind. In the words of Jurgen, cleverness is not on top, and never has been.

Re:Smartness is Overrated (2)

YIAAL (129110) | more than 13 years ago | (#2177310)

It continues to be unclear to me just exactly how smartness = world domination. Experience would seem to indicate that very high intelligence is in fact associated with a decreasing likelihood of achieving substantial political power. Abuse is all very fine, but I'd appreciate seeing a mechanism here, not handwaving.

Re:Deepness in the Sky == Unix (1)

Ella the Cat (133841) | more than 13 years ago | (#2177311)

In that book, set 10,000 years from now, the human interstellar calendar puts year 0 at Jan 1 1970 (which most people equate with the first landing on another planet). Yay!

Re:Registration required? (1)

krogoth (134320) | more than 13 years ago | (#2177312)

or, if you prefer:
l/p: fuckyou69g, fuckyourself

Re:Hmm yes (1)

krogoth (134320) | more than 13 years ago | (#2177313)

No chance. Remember, nothing is ever removed from slashdot, it's only added.

I respect their copyright... (2)

clary (141424) | more than 13 years ago | (#2177314)

...not reading the damned article at all. There are enough freely browsable sources of information that I do not need the online New York Times. Once in a while I send the NYT an email telling them so.

Hmm... I wonder, should I have an ethical dilemma reading commentary by people who have read the article in violation of copyright? I think not, since I have entered into no agreement with the NYT.

"General" Human Intelligence not Necessary (3)

clary (141424) | more than 13 years ago | (#2177315)

Progress in computer hardware has followed this curve and continues to do so. Progress in computer intelligence however, hasn't. Computers are still stupid. They can now be stupid more quickly. This isn't going to produce super-human intelligence any time soon.

We don't necessarily need to crack the strong AI problem to push us into a singularity. Exponential progress in technological capability in general will do the trick, once we hit the elbow of the curve, if it has one (which is a bit tricky to see from this side).

Because of stupid, but fast, computers, we are headed toward being able to hack our DNA (and/or proteins). This will certainly produce incremental gains in lifespan and health...perhaps it will produce dramatic ones.

Because of stupid, but fast, computers, we can simulate physical processes to enable us to engineer better widgets. Perhaps this will make routine space travel economical.

Because of stupid, but fast, computers, we are heading toward having the bulk of human knowledge instantly available to anyone with a net connection. How will this leverage technical progress?

Two things (3)

Dr. Spork (142693) | more than 13 years ago | (#2177316)

One thing I don't get is why something that's very intelligent would be inherently unpredictable. Should Christians think that because the God they believe in is supposed to be supremely intelligent His actions are totally unpredictable by us? Might he send the pious to hell and the wicked to heaven? I don't see much of a relationship between intelligence and predictability. The most unpredictable people I know are dumb.

Another thing has to do with this "let's fear AI" genre of SciFi in general. Why does no one challenge the assumption that when artificial creatures develop intelligence and a personality, that personality will inevitably be indifferent, power-hungry and cold? Isn't it just as easy to imagine that artificially intelligent creatures/machines will strike us as being neurotically cautious, or maybe friendly to the point of being creepy? Maybe they'll become obsessed with comedy or math or music. Or video games.

Realistically, I think the first machines which we take to be intelligent will be very good at means-to-ends reasoning, but will not be able to deliberate about ends (i.e. why one sort of outcome should be preferable to another). I would argue that even we humans can't really deliberate about ends. At some point we hit some hard-wired instincts. Why, for example, is it better that people are happy rather than suffering? The answer is just a knee-jerk reaction by us, not some sort of a reasoned conclusion.

When we create AI we will have the luxury of hard-wiring these instincts into intelligent machines (without some parameters specifying basic goals, nothing could be intelligent, not even we). Humans and animals are basically built with a set of instincts designed to make them survive and fuck and make sure the offspring survive. There is no reason to think AI creatures would necessarily have these instructions as basic. I'm sure we could think of much more interesting ones. The consequence is that AI creatures might be more intelligent than we are, but in no way sinister.

As opposed to THIS future... (1)

ave19 (149657) | more than 13 years ago | (#2177318)

...which is utterly predictable?


ARGH! It's a rogue Boomer! (1)

Maran (151221) | more than 13 years ago | (#2177319)

Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years.

So we're still on track for Bubblegum Crisis [] then ^_^


Re:Flawed assumptions? (1)

Jasonv (156958) | more than 13 years ago | (#2177320)

Reminds me of my favorite Picasso quote:

"Computers are useless; they can only answer questions."

Official Flame Thread (1)

fm6 (162816) | more than 13 years ago | (#2177321)

We've been here before [] .

I haven't got the stomach to read any Vinge philosophizing this morning, but Michael's characterization of Vinge and the "Singularity" doesn't jibe with what I remember from his work. Deepness is written around the assumption that there is a fundamental limit to intelligence (the "weak AI" [] school of thought).

The last Vinge book I liked at all was Marooned in Realtime. This contained ideas I disagreed with, but they didn't totally infest the story. It's been a while since I read it, but I distinctly recall it has the Singularity happening because of a rise in human intelligence, mainly due to people becoming more and more skilled in using computers. Both Deepness and Marooned depict people as a crucial element in any kind of real intelligence.

I dimly recall an essay at the end of Marooned in which he describes the Singularity, not in terms of any specific technology, but rather as the inevitable result of scientific and technical progress. He argues that if you view this progress as a curve, it resembles an asymptotic curve like

y = 1/(n - x)

When we reach the singularity in this graph, SOMETHING WONDERFUL will happen.
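The distinctive feature of that curve is that, unlike an exponential, it diverges at a *finite* time x = n. A quick toy evaluation (the value n = 2050 here is arbitrary, not anything Vinge published):

```python
# The hyperbolic curve y = 1/(n - x) blows up at the finite time x = n,
# which is what distinguishes "singularity"-style growth from mere
# exponential growth. n = 2050 is an arbitrary placeholder.
def y(x, n=2050.0):
    return 1.0 / (n - x)

for x in (2000, 2040, 2049, 2049.9, 2049.99):
    print(x, y(x))  # y grows without bound as x approaches n
```

Each step closer to n multiplies y rather than adding to it, which is exactly the behavior being criticized below as an over-interpretation of a fitted curve.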

I'm sorry. This is pseudoscientific mysticism. You can't reduce all human achievement to a damn curve. Even if you could, its interpretation is hardly obvious. Maybe the Singularity is when the world shrinks to the size of a pea [] .

That sort of misuse of extrapolation and interpolation seemed to be pretty popular during the 80s. Laffer was only an economist (though obviously not a good one). But Vinge is a mathematician by training! He has no excuse.


Re:Flawed assumptions? (2)

herwin (169154) | more than 13 years ago | (#2177322)

iapetus wrote: "Progress in computer hardware has followed this curve and continues to do so. Progress in computer intelligence however, hasn't. Computers are still stupid. They can now be stupid more quickly. This isn't going to produce super-human intelligence any time soon."

The problem from the perspective of a working neuroscientist is that we don't yet understand how the brain is intelligent. On the other hand, things are starting to fall into place. For example, we have a hint of why neural synchronization occurs in the brain, because we're beginning to realize that time synchrony is something many neurons are very good at detecting. We're also beginning to understand memory formation in the cortex. It seems to involve the creation of clusters of synapses, and those clusters get activated by time-synched signals. There's some evidence for analog computation, and there's some evidence for almost quantum computation. So we're beginning to understand how to build a brain. That seems to be the hump, so I'm fairly confident I'll live to see computers at least as intelligent as I am. And I'm 54.

Re:Across Realtime and the signularity (2)

maetenloch (181291) | more than 13 years ago | (#2177324)

Across Realtime is one of my all-time favorite SF novels. In it he introduces 'bobbles' (stasis fields) and the 'technological singularity'. What's interesting is that rather than just invent some new technology and go Oooh Ahhh over it, he lets the story follow how people quickly adapt the new technology and start playing with it. What's amazing is that he wrote this back in the early 80's, and yet nothing he wrote about computers seems dated - that's foresight!

Vinge's Singularity is AI Doc Numero Uno! (4)

Mentifex (187202) | more than 13 years ago | (#2177325)

Technological Singularity by Vernor Vinge -- available online at ing.html [] -- is the scariest and yet most inspiring document that I have ever read on Artificial Intelligence. AI is being implemented slowly but surely on SourceForge at [] , in JavaScript for Web migration and in Forth for robots, evolving towards full civil rights on a par with human beings and towards a superintelligence beyond any human IQ, as described so eerily and scarily by Vinge. It used to be that I did not like Vinge's science fiction, but right now I am thoroughly enjoying A Deepness in the Sky by Vinge.

books (1)

XyouthX (194451) | more than 13 years ago | (#2177327)

Marooned in Realtime is an amazing short story, but I can also highly recommend reading A Deepness in the Sky and A Fire Upon the Deep. They both depict a very dynamic human future and bring out some interesting theories very vividly.

Re:Fire Upon the Deep (1)

Ssolstice (198935) | more than 13 years ago | (#2177328)

As far as I could tell, the "fire" was the god-like AI that existed in the transcend, but was wreaking destruction in the "deep", or gravitationally dense areas of the galaxy.

Respect Copyright (1)

sasha328 (203458) | more than 13 years ago | (#2177330)

I think the previous post needs to be removed because it is copyrighted []

All materials contained on this site are protected by United States copyright law and may not be reproduced, distributed, transmitted, displayed, published or broadcast without the prior written permission of The New York Times Company. You may not alter or remove any trademark, copyright or other notice from copies of the content.

The following paragraph in the copyright notice does not allow you to republish it:

However, you may download material from The New York Times on the Web (one machine readable copy and one print copy per page) for your personal, noncommercial use only.

Re:Vinge's Singularity is AI Doc Numero Uno! (1)

closedpegasus (212610) | more than 13 years ago | (#2177333)

There's a huge wealth of singularity-related material here [] on Ray Kurzweil's [] site, including Vinge's "The Technological Singularity [] ". Also on the site is Ray's precis to his next book, called "The singularity is near." It expands on his earlier views of "the law of accelerating returns" from his last book "the age of spiritual machines".

50 years of solitude (1)

snStarter (212765) | more than 13 years ago | (#2177334)

"Heck, by this logic, we're only 50 years away from using computers to do integer addition!"

Well....I think the real important concept is that we might be 50 years away from computers who can invent FOR THEMSELVES how to do integer addition.

That's the key for AI.

Re:The Singularity and Computational Efficiency (1)

Glabrezu (215236) | more than 13 years ago | (#2177335)

Take into account that in Kurzweil's view, the singularity is not only a matter of better hardware, but of development in all areas of research. So, according to his point of view, hardware and software will both get caught up in that singularity.

I do believe that the singularity actually has a more specific name: the development of artificial human-level intelligence. If humans ever get there, then we can say that the objective of humanity has probably been accomplished: developing something that is better than us at that which makes us different from, and superior to, the rest of the species, and therefore leaving the world in a better situation than when we found it ;) (so making BP happy). We have improved ourselves. If we do get to human-level AI, then humanity is obsolete, and more than being afraid we should be proud (in Spanish we could say it is "el fin de la humanidad", with "el fin" in both senses: the end and the goal).

Human beings are very inefficient at a lot of computation tasks, but a digital AI could be improved much faster than our biological brains. We could try some optimization, and if suddenly we go nuts we could restore the backup ;).

People are extremely afraid of being surpassed; they would prefer (like Joy) not to develop some areas of research because that may make humans an obsolete technology. Well, in my opinion we are just some very small part of a higher scheme (designed by whom? who knows, maybe evolution), and if we can make something that is actually better and more fit than humans, please do it.


Re:Singularity, SETI and the Fermi Paradox (1)

praedor (218403) | more than 13 years ago | (#2177337)

Naw. Assuming (won't happen) that a machine intelligence takes over from biological intelligence on some alien planet, all that would mean is that instead of the ET biological creatures showing up all over, personally or via radio, you would run across the machine creations of this biological ET race. The absence, thus far, of any recognizable ET signals doesn't rule out ET in favor of the so-called "singularity"; it just indicates that no one is broadcasting in a way that our paltry antennae are able to detect.

WE aren't broadcasting in a way that our own detection methods could detect beyond a few light-years - our signals are too weak and non-directional, and our detectors are, well, right now an OLD Arecibo antenna that can ONLY detect a signal SPECIFICALLY sent in our direction by a powerful transmitter. Why would we think this is what an alien race would do when we ourselves aren't doing this and have no plans to do this on any continuous basis? Hell, it would require decades of dedicated, directional, powerful transmission by an alien race in order for our attempts to detect them to bear fruit.

Who's to say that some alien race didn't do this for a thousand years...but the signals passed us by during the reign of the dinosaurs?

In any case, if the "singularity" happens as a rule, then my question is where are the ET machines? THEY should be ubiquitous now...yet they are not. That's your answer. Singularity doesn't happen.

Re:The Singularity and Computational Efficiency (2)

praedor (218403) | more than 13 years ago | (#2177339)

I disagree. Humans and other animals may be poor (relatively) at doing paper-and-pencil mathematics, but they are quite good and fast with innate math. Huh? Well, tossing a basketball through a hoop requires unconscious calculation to make the muscles add the correct energy to the throw; it must be pushed in the correct direction to make up for player movement relative to the hoop, etc. A lion, alternatively, must do the calculation of an efficient pursuit trajectory when prey bolts. A lion doesn't run to the prey, it predicts and compensates for the movement/running of the prey to form an intercept course.

This happens all the time and unconsciously with ALL creatures with a brain. It does involve math and it is automatic. Not too bad.

Then there is the difference between a machine calculating a formula and the human that formula HAD to derive from. No machine creates new formulas or mathematics. They ONLY calculate that which humans, in their creativity, slow as it may be, are able to devise. Quantum math, relativity, calculus... humans are slow to calculate the answers but very good at coming up with the formulations and rules.

Re:Hmm yes (1)

AndroidCat (229562) | more than 13 years ago | (#2177341)

spammers: email me directly at root@ and cc to root@

Now that's an interesting thought: does Code Red's random number generator ever come up with Code Red, meet Code Red?

Don't laugh, $cientology was looking for the eevil Major Domo, who operated a FTP site with cult secrets at

And getting back on topic, when True Names came out, the few copies around did a tour lasting a year or so. (And yes, I got it back.) And I added Alan to the security of SISPG's encrypted message store on PSBGM. (1981) We read True Names, and recognized it.

Hexapodia is the key! (1)

AndroidCat (229562) | more than 13 years ago | (#2177342)

Old school? Certainly after the Great Renaming.

And please recall the name of the space craft: Out of Band II. It's full of in-jokes, like "Sandor Arbitration Intelligence at the Zoo", can we say Henry @ utzoo?

The .. prequel .. A Deepness in the Sky (Deep novels) suffers from Bloody Fat Book syndrome and Cast of Thousands. A good read, but some of the tech is just too advanced for the zone.

Re:Vinge's Singularity is AI Doc Numero Uno! (1)

AndroidCat (229562) | more than 13 years ago | (#2177343)

Oh please don't cry, it's just a web-jazzed version of Pyroto Mountain.

Do not offend TSOTL!

Re:get a copy...if you can (1)

mfarah (231411) | more than 13 years ago | (#2177346)

_True Names... And Other Dangers_ and the issue of Analog magazine that _True Names_ was published in (I can't remember the issue now) are regularly sold on eBay (at final auction prices ranging from 20 to 30 bucks in most cases), so I suggest you set up a "saved search" at the site and let it warn you about auctions related to Vernor Vinge.

By The way: To clear up some confusion: _True Names_ is a novella. It was published in that Analog magazine I mentioned, and it was published along with four short stories in the book _True Names... And Other Dangers_. (two of those stories, _Bookworm, Run!_ and _Long Shot_ are pretty interesting).

Also, a new book is going to be published, called _True Names and the Opening of the Cyberspace Frontier_, which reportedly will contain the novella (but not the short stories!). Now... the "is going to be published soon" status hasn't changed since at least February 2000 (yes, more than a year and a half), and I have no idea why the delay.

Death to Vermin.

Re:Registration required? Why not! (2)

MarkLR (236125) | more than 13 years ago | (#2177347)

What is so hard about respecting a minimal request by the publishers of the article? Just register a name with them and read it off their site. The more people do things like this (repost entire articles), the more likely the NY Times is to stop providing articles on their web site.

Re:I don't buy it. (1)

Cryptimus (243846) | more than 13 years ago | (#2177350)

If we as a species ever develop compelling virtual reality, we'll lock ourselves inside it and never come out. Why is 60% of internet traffic expended on sexual content? (Yeah, I'm guessing here but I'll bet the number is startlingly higher than you'd expect.) Basically, we're just a species of complete wankers.

get a copy...if you can (1)

borroff (267566) | more than 13 years ago | (#2177352)

Actually, I've been trying to find a used copy of True Names for years, with not much luck. Used copies run $90+ on Amazon and Barnes and Noble.

Re:Succinctly (2)

Rogerborg (306625) | more than 13 years ago | (#2177357)

  • Automatons will not be able to easily jump beyond logic by themselves, people will always be needed to teach them how.

Hmm. The trouble is that very, very few humans can actually make intuitive leaps. I can think of the guy (or gal) who figured out fire, Da Vinci, Edison, Einstein, a handful of others. Most of us just make tiny tweaks to other people's ideas.

Bizarrely, given sufficient processing power, it might be more efficient to produce a speculating machine (that can design a random device from the atomic level up, with no preconceptions of usage, then try and find a use for it), rather than try and identify humans who can actually come up with ideas that are genuinely new.

Succinctly (4)

Rogerborg (306625) | more than 13 years ago | (#2177358)

The most succinct Vinge quote [] that I can think of is:

  • To the question, "Will there ever be a computer as smart as a human?" I think the correct answer is, "Well, yes. . . very briefly."

How would we know? (1)

Atreides4 (309781) | more than 13 years ago | (#2177359)

How would we know if we created a machine more intelligent than a human? We have the Turing test to see if it's as intelligent as a human, but how can a human tell if we've got something more intelligent than us? The tools for measuring human intelligence are unreliable even on humans, much less computers. Would we be able to tell by its achievements, or its writings?

Predictability and Unpredictability (1)

Strangely Unbiased (313686) | more than 13 years ago | (#2177361)

If computers did rule the logic world, the way humans do now, wouldn't the opposite happen (the future becoming entirely predictable)? I mean, since logic will be processed using circuits, wouldn't the quantum mumbo-jumbo on which the laws of uncertainty are based, and which probably adds some randomness in the human brain, be overridden? Have you ever seen a computer behave weirdly because of a group of molecules behaving slightly differently (this is possible, but astronomically unlikely)? On the contrary, in the human brain (which works using chemical reactions, much more vulnerable to subtle random changes) this is much more likely to change the result. If computers do get smarter than humans, wouldn't we be able to follow a set of rules to predict an outcome?

Anyway, all this 'the future is predictable' talk is nonsense, IMHO. The only way to completely predict output/reaction is to have the input beforehand. And since the input is ever changing, it looks like the only way to achieve that is to have an identical universe (shifted slightly into the future) with you.

But I'm not really into this kind of stuff, so if you are more into it than I am, please publish your educated guess (that is what we're all posting here, isn't it?).

cool singularity links (5)

jparp (316662) | more than 13 years ago | (#2177365)

Ray Kurzweil seems to be making Vinge's singularity his life's work:

And then there's the non-profit corporation, the Singularity Institute for artificial Intelligence, which is determined to bring the Singularity about as soon as possible:
There are a lot of good Vinge links on that page too, btw

Singinst seems to be the brainchild of this guy:,1282,43080,00.html
who has a lot of interesting docs here:

Don't miss, the FAQ on the meaning of life, it's great reading.

Unpredictable future (2)

OpenSourced (323149) | more than 13 years ago | (#2177367)

where machines suddenly exceed human intelligence and the future becomes completely unpredictable

I thought the future was already unpredictable.

About the intelligent machines, I think the error is falling into the "biology" trap. Our whole perception system is conditioned by the ideas of "survival", "advancement", "power", "consciousness", among others. Those come from our setup as living entities, trapped in a limited-resources environment, having to compete for those resources. The fact that a machine is intelligent won't make it conscious, or interested in survival or power. There is no obvious relation. If you were to menace a machine more intelligent than you with cutting the power supply, it would be perhaps politely interested, but not more. That is, if the development of the machine is made through "traditional" procedures. I would be wary of genetic-algorithm-type development. That could create a thinking and competitive machine :o)

There are things that we cannot even imagine. One of them is the workings of our own brains. Another is how a thinking machine would act. Of course, some are more interesting to write a book about than others. But it isn't SF for me; more like fantasy.


Re:Huh? (1)

2nd_metaman (412245) | more than 13 years ago | (#2177368)

It seems like you did not get his point.

You are right about one thing: what he is saying is a kind of paradox: he predicts that the future will be unpredictable.

That's all he really does. Everything else he says in his paper just serves to underline that fact.

And just for that basic assumption, he shows that he really has something to say about the future. Or did you ever come across anybody in the "business of predicting the future" who was as honest as Mr. Vinge?

Re:The Singularity and Computational Efficiency (1)

hereticmessiah (416132) | more than 13 years ago | (#2177370)

Are we talking J. Average Person like you or me, or are we talking about what human beings are really capable of?

In addition, remember that the system doesn't have to be 'pure'. It can have specialised components which are capable of doing the mathematics quickly.

Hmm yes (2)

Ubi_UK (451829) | more than 13 years ago | (#2177376)

I guess you are right.
Now that I have seen my error, can I correct it by withdrawing my post? Can anyone tell me how?
(This is not intended as a troll)

Registration required? (5)

Ubi_UK (451829) | more than 13 years ago | (#2177377)

SAN DIEGO -- VERNOR VINGE, a computer scientist at San Diego State University, was one of the first not only to understand the power of computer networks but also to paint elaborate scenarios about their effects on society. He has long argued that machine intelligence will someday soon outstrip human intelligence.

But Dr. Vinge does not publish technical papers on those topics. He writes science fiction.

And in turning computer fact into published fiction, Dr. Vinge (pronounced VIN-jee) has developed a readership so convinced of his prescience that businesses seek his help in envisioning and navigating the decades to come.

"Vernor can live, as few can, in the future," said Lawrence Wilkinson, co-founder of Global Business Network, which specializes in corporate planning. "He can imagine extensions and elaborations on reality that aren't provable, of course, but that are consistent with what we know."

Dr. Vinge's 1992 novel, "A Fire Upon the Deep" (Tor Books), which won the prestigious Hugo Award for science fiction, is a grand "space opera" set 40,000 years in a future filled with unfathomable distances, the destruction of entire planetary systems and doglike aliens. A reviewer in The Washington Post called it "a wide-screen science fiction epic of the type few writers attempt any more, probably because nobody until Vinge has ever done it well."

But computers, not aliens, were at the center of the work that put Dr. Vinge on the science fiction map -- "True Names," a 30,000-word novella that offered a vision of a networked world. It was published in 1981, long before most people had heard of the Internet and a year before William Gibson's story "Burning Chrome" coined the term that has come to describe such a world: cyberspace.

For years, even as its renown has grown, "True Names" has been out of print and hard to find. Now it is being reissued by Tor Books in "True Names and the Opening of the Cyberspace Frontier," a collection of stories and essays by computer scientists that is due out in December.

"True Names" is the tale of Mr. Slippery, a computer vandal who is caught by the government and pressed into service to stop a threat greater than himself. The story portrays a world rife with pseudonymous characters and other elements of online life that now seem almost ho-hum. In retrospect, it was prophetic.

"The import of `True Names,' " wrote Marvin Minsky, a pioneer in artificial intelligence, in an afterword to an early edition of the work, "is that it is about how we cope with things we don't understand."

And computers are at the center of Dr. Vinge's vision of the challenges that the coming decades will bring. A linchpin of his thinking is what he calls the "technological singularity," a point at which the intelligence of machines takes a huge leap, and they come to possess capabilities that exceed those of humans. As a result, ultra-intelligent machines become capable of upgrading themselves, humans cease to be the primary players, and the future becomes unknowable.

Dr. Vinge sees the singularity as probable if not inevitable, most likely arriving between 2020 and 2040.

Indeed, any conversation with Dr. Vinge, 56, inevitably turns to the singularity. It is a preoccupation he recognizes with self-effacing humor as "my usual shtick."

Although he has written extensively about the singularity as a scientific concept, he is humble about laying intellectual claim to it. In fact, with titles like "Approximation by Faber Polynomials for a Class of Jordan Domains" and "Teaching FORTH on a VAX," Dr. Vinge's academic papers bear little resemblance to the topics he chooses for his fiction.

"The ideas about the singularity and the future of computation are things that basically occurred to me on the basis of my experience of what I know about computers," he said.

"And although that is at a professional level, it's not because of some great research insight I had or even a not-so-great research insight I had. It's because I've been watching these things and I like to think about where things could go."

Dr. Vinge readily concedes that his worldview has been shaped by science fiction, which he has been reading and writing since childhood. His dream, he said, was to be a scientist, and "the science fiction was just part of the dreaming."

Trained as a mathematician, Dr. Vinge said he did not begin "playing with real computers" until the early 1970's, after he had started teaching at San Diego State. His teaching gradually shifted to computer science, focusing on computer networks and distributed systems. He received tenure in 1977.

"Teaching networks and operating systems was a constant source of story inspiration," Dr. Vinge said. The idea for "True Names" came from an exchange he had one day in the late 1970's while using an early form of instant messaging called Talk.

"Suddenly I was accosted by another user via the Talk program," he recalled. "We chatted briefly, each trying to figure out the other's true name. Finally I gave up and told the other person I had to go -- that I was actually a personality simulator, and if I kept talking, my artificial nature would become obvious. Afterwards I realized that I had just lived a science fiction story."

Computers and artificial intelligence are, of course, at the center of much science fiction, including the current Steven Spielberg film, "A.I." In the Spielberg vision, a robotic boy achieves a different sort of singularity: parity with humans not just in intelligence but in emotion, too. "To me, the big leap of faith is to make that little boy," Dr. Vinge said. "We don't have evidence of progress toward that. If it ever happens, there will be a runaway effect, and getting to something a whole lot better than human will happen really fast."

How fast? "Maybe 36 hours," Dr. Vinge replied.

Dr. Vinge's own work has yet to make it to the screen, although "True Names" has been under option for five years. "It's been a long story of my trying to convince studio executives to really consider the work seriously because it seemed so far out," said David Baxter, a Hollywood writer and producer who is writing the screenplay with Mark Pesce, co-creator of Virtual Reality Modeling Language, or VRML. "But as time has passed, the world has started to match what was in the book."

In the meantime Dr. Vinge has been providing scenarios in the corporate world as well. He is one of several science fiction writers who have worked with Global Business Network in anticipating future situations and plotting strategies for several major companies.

Mr. Wilkinson, the co-founder of Global Business Network, said that Dr. Vinge's work with the group provided "an unbelievably fertile perspective from which to look back at and reunderstand the present."

"It's that ability to conceptualize whole new ways of framing issues, whole new contexts that could emerge," Mr. Wilkinson said. "In the process he has contributed to the turnarounds of at least two well-known technology companies."

Dr. Vinge, shy and reserved, is hardly a self-promoter. He scrupulously assigns credit to others whenever he can. And although he insists that much of his work is highly derivative, his fans do not necessarily share that view.

"The thing that distinguishes Vernor is he's a scientist and all of his stuff makes sense," Mr. Baxter said. "It's all grounded in the here and now."

Dr. Vinge is now a professor emeritus at San Diego State, having retired to devote his time to his writing and consulting. Over lunch at a restaurant not far from the university, he described a story he was working on.

"Well, there's a recovering Alzheimer's patient," Dr. Vinge began, before being interrupted and asked how one could be a recovering Alzheimer's patient.

His eyes brightened. "You can't," he said, and a sly smile crossed his face. "Yet."

Re:"General" Human Intelligence not Necessary (1)

WM_NCDESTROY (451999) | more than 13 years ago | (#2177378)

Because of stupid, but fast, computers, we are heading toward having the bulk of human knowledge instantly available to anyone with a net connection. How will this leverage technical progress?
Yeah, unless stupid but rich lobbyists have their way and everything becomes pay-per-view.

Re:The Singularity and Computational Efficiency (2)

Strange Ranger (454494) | more than 13 years ago | (#2177379)

I'm not sure I agree that AI is a software problem, because I don't see how regular human intelligence is a software problem. There is no software that comes with a newborn. A newborn is a complex system that comes out of the womb ready to learn. It's already thinking. You could argue that it has an OS -- instincts, genetic instructions -- but what if there were a hardware copy of a baby made in silicon (or whatever)? If it was constructed properly, it should pop out of the vat ready to learn.

I guess I'm arguing that intelligence is a function of pathway complexity and self-referentiality (real word?).
Maybe if we build it right -- complex enough circuitry/pathways and enough self-referential ability, able to modify itself and its external environment, e.g. alter its own version of programmable logic controllers and move a Coke bottle with a robotic arm [yes, I did say "programmable", but I didn't say "fully pre-programmed"] -- maybe, like a newborn, if we build it right and simulate a little evolution along the way, the intelligence will come.
I think the challenge is not coding intelligence, which sounds impossible to me, but building something that facilitates intelligence developing on its own -- again, like a newborn. Not software, but hardware that has the "complex adaptive ability of the human brain".

Granted the first one would be no more intelligent than a rabid gerbil, but that's a good start.

Why emotion? (1)

jrcarter1000 (455806) | more than 13 years ago | (#2177380)

Something I always thought was sort of strange is that people assume when machines suddenly become conscious, they will also develop emotion. Emotion, for the most part, is a chemical reaction to events, that's all. So how would a computer suddenly learn to hate humans, or develop the desire to take over the world, just because it's self-aware? Even if you use the argument that they will build upon themselves, why would a machine teach itself to hate, or become greedy? It would still make those decisions based on logic. So, if the computer knows that it is beyond humans, why would it logically expend its energy to wipe out or rule something that doesn't matter to it?

Well, (2)

dublisk (456374) | more than 13 years ago | (#2177381)

If the computers ever get angry at us, we can always just block out the sun by scorching the sky, thus rendering the computers powerless while the humans sit pretty.

Already been done (1)

FrankHaynes (467244) | more than 13 years ago | (#2177385)

You've made my point for me. To expand a bit: machines created with a.i. will be imbued with those qualities of which we have knowledge and which we are capable of imbuing into them.

If Vinge puts that we are capable of creating evil machines, then we should be just as capable of creating kind, helpful machines.

Of course, this has been done already. It's called making a baby. They make us laugh, cry, clean up the chair and carpet, are generally unpredictable until they develop personalities...some become mass murderers, others become prize-winning scientists. Who knew?

If we feel the need to create silicon-based Humans, it might be wise to understand better the carbon-based ones first. Maybe that's the point.

If such a machine makes a mistake, does that make it 'artificial stupidity'?? :)

"A Fire..." and Anachronistic Commentary (1)

sequit (468961) | more than 13 years ago | (#2177386)

The New York Times article doesn't mention this in its brief discussion of "A Fire Upon the Deep," but part of the story revolves around an old-school USENET-style galactic discussion forum/communications medium. It even successfully predicts something like email virii but falls short of predicting an M1/M2 scheme as good as Slashdot's. Kudos!

Re:Predictability and Unpredictability (1)

johnsjs (471712) | more than 13 years ago | (#2177388)

Granted, computer hardware will continue to be fairly predictable (not necessarily a safe assumption; it would be easy enough to build quantum uncertainty into a system if you wanted to). Nonetheless, it is actually chaos theory that will make super-intelligent systems unpredictable. Not that the hardware will be unpredictable, but that beyond a certain level of complexity the software will become unpredictable. We already have software systems with undocumented functionality that the designers and coders included without realising it. You only need a few simple rules, carefully chosen, and systems can soon escalate beyond computational analysis.

The whole massive-processing argument relates to the 'strong' school of AI. 'Weak' AI, i.e. intelligence as an emergent and unexpected phenomenon, does not require this processing power. Think about it this way: a beetle has very little processing power compared to the box you're reading this on, but write a program for your machine to navigate its way across a room to a food source by calculation, and you don't need a lot of obstacles for the beetle to make your gigahertz machine look stupid. On the other hand, give a ZX81 a couple of rules:
Move towards the food; if you hit something, go left 1 foot, then move towards the food
and nine times out of ten, with random obstacles, it will give the beetle a run for its money (assuming all contestants have an equal mode of transport etc.). My point is not that ZX81s are better, simply that I suspect we won't even know the systems that reach self-awareness first; they will be one of those big complicated monsters that has been added to, modified, and probably runs on a COBOL core. Government, university, corporate: it's irrelevant. These systems have evolved from silicon to somewhere beyond single cells in 50 years; it took us 3 billion years for that bit, and the last bit was rather quicker.
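The two rules above can be sketched in a few lines. This is a hypothetical illustration of the comment's reactive-agent idea, not anything from the original post: the grid, the coordinates, and the choice of "+y" as the sidestep direction are all invented.

```python
# Two-rule "ZX81 beetle": (1) step towards the food; (2) if the step is
# blocked, sidestep one square "left" (+y here), then resume rule 1.

def step_towards(pos, goal):
    """One greedy step along the axis with the larger remaining distance."""
    x, y = pos
    gx, gy = goal
    dx = (gx > x) - (gx < x)
    dy = (gy > y) - (gy < y)
    if abs(gx - x) >= abs(gy - y):
        return (x + dx, y)
    return (x, y + dy)

def navigate(start, food, obstacles, max_steps=200):
    """Follow the two rules until the food is reached or steps run out."""
    pos = start
    for _ in range(max_steps):
        if pos == food:
            return True
        nxt = step_towards(pos, food)
        if nxt in obstacles:              # rule 2: blocked -> sidestep
            nxt = (pos[0], pos[1] + 1)
            if nxt in obstacles:
                return False              # boxed in; the beetle wins this round
        pos = nxt
    return False

print(navigate((0, 0), (5, 0), {(2, 0), (3, 0)}))  # prints True
```

As the commenter suggests, this blind rule pair clears simple random obstacle fields but can be defeated by a concave trap, which is exactly the "nine times out of ten" caveat.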

Dr Vinge may be right, he may be wrong, but I think that while humans may inherit the stars, they may be life, but not as we know it.

Re:Smartness is Overrated (1)

threadworm (472508) | more than 13 years ago | (#2177389)

Think 'distribution of resources', Mr/Mrs smart-alecky attorney person. Right now, that process occurs in analogue at the hands of people who 'aren't clever'. Supersmart machines will change that into an increasingly higher digital-like resolution, making small refinements to the global resource network that will multiply any given individual's share of resources by an unguessably large amount.

It is a simple thing to posit that there are abundant resources for everyone on the planet. There is enough of each element (hydrogen, neon, lead, etc.) for each person to accomplish each desire. The difficulty each of us deals with every day is acquiring, refining, processing and building with these elements. Super-smart machines would be so smart that they could easily restructure the flow of materials and information to the utmost degree of optimisation.

But all of this marvellous utopian refinement would take place in a matter of hours at the singularity. A little while later, those same machines would have upgraded and optimised themselves to an unknowable level of sophistication and cognitive capability. This timeline is what is being referred to in the article; note that Dr Vinge suggests a complete revolution of human society could occur in perhaps 36 hours! The singularity sits at the end of an exponential curve, flat for much of its length and then shooting sky high, perhaps almost perpendicularly, towards the end. The horizontal axis represents time and the vertical axis represents technological sophistication.
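The flat-then-vertical curve being described is ordinary exponential growth, and a toy calculation makes the shape obvious. The 20-year doubling period below is an invented, illustrative parameter, not a figure from Vinge or the article.

```python
# Toy illustration of the curve: exponential growth looks nearly flat for
# most of its history, then shoots almost straight up near the end.
# DOUBLING_YEARS is an invented, illustrative parameter.

DOUBLING_YEARS = 20

def sophistication(year):
    """Relative technological sophistication after `year` years."""
    return 2 ** (year / DOUBLING_YEARS)

for year in (0, 100, 200, 280, 300):
    print(year, sophistication(year))
# 300 years in, the last 20 years add as much as all of history before them.
```

Plotted on linear axes, the first two-thirds of this curve hugs the floor while the final stretch dominates, which is the "flat, then sky high" picture the comment paints.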

These machines have been prophesied by one of the most suitable men on the planet to consider such things. You, on the other hand, have merely been trained in sophistry. Please keep your sharp tongue inside your mouth next time!

Re:Flawed assumptions? (2)

threadworm (472508) | more than 13 years ago | (#2177390)

Heh, the flawed assumptions are yours, I assure you. Dr. Vinge is considering technological sophistication from the Stone Age to the present in his theory, not computational power or any other specific measure.

Consider that for perhaps millions of years we had fire and spears as our main tools. Then agriculture, then metallurgy, then language, communication, etc. Each epoch is marked by revolutions in technological sophistication, and each epoch shift occurs more and more rapidly, in an accelerating, exponential fashion. Consider the advances of the last 100 years to see my point.

In fact, the last great technological revolution has been the global information network that we are currently using to discuss the topic. Born less than 30 years ago, it has already saturated the planet, becoming nearly ubiquitous to the segment of the population at the front of the wave.

Re:Deepness in the Sky == Unix (1)

Admiral Naismith (472968) | more than 13 years ago | (#2177391)

Yes, the secret is out -- Vernor is a UNIX user. In fact, I had a chance to talk geek-speak with him at Worldcon last year (my wife ran the SFFWA hospitality suite, so we got all kinds of wonderful visitors in the evenings). While we were munching pizza he told me that he used a customized version of EMACS to write all his novels. Name another author today who can make that claim! :-) We also decorated the suite with some large plush spiders and a "web" of wire and little tiny lights to celebrate his Hugo-nominated novel. He was amused.

super intelligence (1)

polyscience (472973) | more than 13 years ago | (#2177392)

Ha, to be ruled by the super-intelligent or the super-stupid ("did you hear what our president said, he must be crazy" - David Byrne, Talking Heads). Hard choice to make... not.