
Memristor Minds, the Future of Artificial Intelligence

Soulskill posted more than 4 years ago | from the boldly-stated-claims dept.

Robotics 184

godlessgambler writes "Within the past couple of years, memristors have morphed from obscure jargon into one of the hottest properties in physics. They've not only been made, but their unique capabilities might revolutionize consumer electronics. More than that, though, along with completing the jigsaw of electronics, they might solve the puzzle of how nature makes that most delicate and powerful of computers — the brain."


184 comments

Oblig. wiki-link (4, Informative)

Eudial (590661) | more than 4 years ago | (#28658591)

Re:Oblig. wiki-link (-1, Troll)

Anonymous Coward | more than 4 years ago | (#28658681)

Thank you for providing this link that any one of us who needed it could have found for ourselves. It was in such an obscure corner of the internet that it surely would have been at worst the second place we looked for information. You should surely be rewarded with a +5 informative for this useful service.

Re:Oblig. wiki-link (1)

calzakk (1455889) | more than 4 years ago | (#28659015)

I found the link useful :-)

Instead of going to Wikipedia and typing the word into the search box and hitting Go... I just clicked the link!

Re:Oblig. wiki-link (1)

General Wesc (59919) | more than 4 years ago | (#28659329)

I pressed ^t, typed in 'wp memristor' (actually it was either 'wp ^v' or 'wp [middle-click]'), and pressed Enter. Bookmark keywords are your friend.

This is Slashdot. I think we should assume people are capable of locating Wikipedia on their own.

Re:Oblig. wiki-link (0)

Anonymous Coward | more than 4 years ago | (#28660049)

I'm new here. What's this "Wikipedia" and where do I find it?

Re:Oblig. wiki-link (0)

Anonymous Coward | more than 4 years ago | (#28658921)

A memristor (/ˈmɛmrɪstər/; "memory resistor") is any of various kinds of passive two-terminal circuit elements that maintain a functional relationship between the time integrals of current and voltage.

That explains it. Thanks.
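For anyone who wants that definition made concrete, here is a minimal numerical sketch (Python) of the linear ion-drift model usually used to describe HP's TiO2 device. All parameter values are illustrative, not taken from the article; the point is only that the resistance depends on the charge that has passed through the device, and that the changed resistance persists when the bias is removed.

    # Minimal linear ion-drift memristor sketch. All numbers are illustrative, not measured.
    R_on, R_off = 100.0, 16e3   # limiting resistances of the fully doped / undoped film (ohms)
    k = 1e4                     # made-up constant coupling charge through the device to its state
    x = 0.1                     # state variable: doped fraction of the film, between 0 and 1

    def memristance(x):
        return R_on * x + R_off * (1 - x)

    def apply(volts, seconds, dt=1e-5):
        # Integrate the state while a constant voltage is applied: dx/dt is proportional to
        # the current, so x tracks the time integral of current (the charge) -- the 'memory'.
        global x
        for _ in range(int(seconds / dt)):
            i = volts / memristance(x)
            x = min(1.0, max(0.0, x + k * i * dt))

    print("before:", round(memristance(x)), "ohms")
    apply(+1.0, 0.05)   # a positive 'write' pulse drives dopants and lowers the resistance
    print("after write pulse:", round(memristance(x)), "ohms")
    apply(0.0, 1.0)     # no bias: no current, so the state and the resistance are retained
    print("after sitting unpowered:", round(memristance(x)), "ohms  <- unchanged: that's the memory")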

I'm always taken back by this (2, Interesting)

msgmonkey (599753) | more than 4 years ago | (#28658665)

That we've developed a whole industry based on an incomplete model. I wonder how things would have developed if the memristor had existed 30 years ago. Exciting times, as a lot of things will be re-examined.

Re:I'm always taken back by this (5, Informative)

Anonymous Coward | more than 4 years ago | (#28658711)

Probably nothing significant, seeing as you can emulate exactly what a digital memristor does with 6 transistors and some electricity always applied. Memristors in CPU/logic would not be viable because of their low wear cycles and very high latencies. It would make for some nice multi-terabyte sized USB sticks though.

As for its analog uses, Skynet comes to mind...

Re:I'm always taken back by this (5, Insightful)

Marble1972 (1518251) | more than 4 years ago | (#28658785)

Probably nothing significant, seeing as you can emulate exactly what a digital memristor does with 6 transistors

Exactly right.

It's not a hardware breakthrough that'll create a true AI - it's an algorithm breakthrough that's required. Faster computers might be nice - but it'll always come down to the algorithm.

And actually the sooner we create Skynet - the better the chance we have to beat it. Because if we wait too long - that super fast hardware it will be running could make it too hard to beat. ;)

Re:I'm always taken back by this (1)

joshier (957448) | more than 4 years ago | (#28658823)

You go to the power outlet and you unplug it. And anyway, why all this Skynet shit? There are closer dangers to us than that.

Re:I'm always taken back by this (0)

Anonymous Coward | more than 4 years ago | (#28658887)

Skynet spreads onto almost every computer in the world (ie. there is no central core).

Trying to turn it off (ie. kill it) is what prompts it to defend itself by firing the nuclear missiles it has control of, triggering a counter-attack which kills most humans.

Yes, there are closer dangers but even so, I still have John Connor on speed-dial.

Re:I'm always taken back by this (1)

ScrewMaster (602015) | more than 4 years ago | (#28659233)

Skynet spreads onto almost every computer in the world (ie. there is no central core).

That was just the rubbish from the third movie, which I'm personally trying to forget. It was also ridiculous: once Skynet nuked everyone and in the process shut down all power distribution and communications networks, all those millions of computers running some little bit of Skynet would be turned off and isolated anyway. Killing us would have been instant suicide for Skynet.

The original 80's vision of Skynet as a vast artificial intelligence living in a cavern somewhere running on its own power supply still makes a lot more sense.

Re:I'm always taken back by this (0)

Anonymous Coward | more than 4 years ago | (#28659457)

They said Skynet began learning at a geometric rate when connected to too many computers, and became self aware. Following that I don't think there was much to learn. Hence, it wouldn't need so many. It also has neural net processors at its disposal.

But yeah, T3 was expensive special effects and a weak story. It doesn't matter though, they could rewrite history so that T3 never happened. (going back in time to kill the producers, maybe)

Re:I'm always taken back by this (2, Interesting)

madkow (518822) | more than 4 years ago | (#28658893)

And actually the sooner we create Skynet - the better the chance we have to beat it. Because if we wait too long - that super fast hardware it will be running will could make it too hard to beat. ;)

Or the better the chance we have to learn to live with it. James Hogan's 1979 book "The Two Faces of Tomorrow" details a plan to deliberately goad a small version of a self-aware computer (named Spartacus) into self-defense before they built the big version. When Spartacus learned that humans were even more frail than he and equally motivated by self-preservation, he chose to unilaterally lay down arms.

And how exactly do you exactly plan (1, Insightful)

msgmonkey (599753) | more than 4 years ago | (#28658901)

to implement a proper neural network on a von Neumann-type architecture? It's like trying to fit a square into a circle. So the developments have been in making special processors that work closer to real neurons but are still digital. Memristors allow them to get closer to the real thing. Like the article states, they did n't even have the tools to test these because of their analogue nature, so we're at the beginning here.

The purpose here is n't to get faster hardware; a computer can add two numbers together orders of magnitude faster than a person, but try and get a computer to tell if a picture I give it is of a male or a female, or if there is even a person in the picture at all. It does n't matter how fast your hardware is, your bubble sort is always going to suck vs a quicksort.
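The sorting analogy is easy to make concrete. Below is a rough timing sketch (Python; the array size is chosen arbitrarily) comparing a quadratic bubble sort against the built-in O(n log n) sort; no constant-factor hardware speedup closes a gap that grows with n.

    import random, time

    def bubble_sort(a):
        a = list(a)
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    data = [random.random() for _ in range(5000)]

    t0 = time.perf_counter(); bubble_sort(data); t1 = time.perf_counter()
    t2 = time.perf_counter(); sorted(data); t3 = time.perf_counter()

    print(f"bubble sort: {t1 - t0:.3f}s   built-in O(n log n) sort: {t3 - t2:.3f}s")
    # Doubling n roughly quadruples the first figure but barely moves the second, which is
    # the point: a better algorithm beats a constant-factor hardware speedup.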

Re:And how exactly do you exactly plan (0)

Anonymous Coward | more than 4 years ago | (#28659063)

not. With the O. Only remove the O when you are combining not with the word before it.

Re:And how exactly do you exactly plan (0)

Anonymous Coward | more than 4 years ago | (#28659071)

your bubble sort is always going to suck vs a quick sort.

Not if your list/array is already sorted (worst case for qsort, best case for bubblesort).
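For the curious, that edge case is easy to check by counting comparisons (a sketch in Python, using the deliberately naive first-element-pivot quicksort that the classic worst case assumes):

    def bubble_comparisons(a):
        a, comps = list(a), 0
        for _ in range(len(a)):
            swapped = False
            for j in range(len(a) - 1):
                comps += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:          # early exit: a single pass suffices on sorted input
                break
        return comps

    def naive_quicksort_comparisons(a):
        # First-element pivot: the textbook worst case when the input is already sorted.
        if len(a) <= 1:
            return 0
        pivot, rest = a[0], a[1:]
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return len(rest) + naive_quicksort_comparisons(left) + naive_quicksort_comparisons(right)

    data = list(range(500))   # already sorted; kept small to stay inside the recursion limit
    print("bubble sort comparisons:", bubble_comparisons(data))               # about n
    print("naive quicksort comparisons:", naive_quicksort_comparisons(data))  # about n*n/2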

Re:And how exactly do you exactly plan (1)

Marble1972 (1518251) | more than 4 years ago | (#28660165)

it's like trying to fit a square into a circle.

I agree - but so is running particle simulation software on today's architecture. Sure, it would be better if we could have a (parallel) processor per particle, calculating all the forces impacting upon it down to Planck-length precision. But the point is, with our limited multiprocessing power we can achieve very good results in a more or less timely fashion.

My main point being - if we had the algorithm - it could be proven on today's architecture. After all - you can demonstrate a learning architecture with just matchboxes & jelly beans [metafilter.com], and that's damned slow (but tasty!)

Take the person/picture problem and reduce the picture down to 50x50 32-bit colour pixels... that's not really a lot of data to process to get a result. Training takes longer of course... but you can at least prove your algorithm within a reasonable timeframe (your PhD thesis perhaps) ;)
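The matchbox learner referenced above is Donald Michie's MENACE: one matchbox per game state, a bead per legal move, draw a bead at random to choose the move, then add or remove beads depending on the outcome. A stripped-down sketch of the same bead mechanism on a hypothetical one-state game (the payoff probabilities and bead counts below are made up for illustration):

    import random

    beads = {"A": 10, "B": 10}   # one 'matchbox' for a single game state, beads for two moves

    def won(move):
        # Hypothetical game: move A wins 70% of the time, move B only 30%.
        return random.random() < (0.7 if move == "A" else 0.3)

    for game in range(2000):
        # Draw a bead at random: moves with more beads are proportionally more likely.
        move = random.choices(list(beads), weights=list(beads.values()))[0]
        if won(move):
            beads[move] += 3     # reinforce the move that led to a win
        elif beads[move] > 1:
            beads[move] -= 1     # punish a loss, but never empty the box

    print(beads)   # after training, nearly all the beads sit on move "A"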

Re:I'm always taken back by this (2, Interesting)

Requiem18th (742389) | more than 4 years ago | (#28658943)

I don't know; with a 10,000-write limit, if my brain was made of memristors I'd be terribly mortified.

Re:I'm always taken back by this (3, Informative)

Sponge Bath (413667) | more than 4 years ago | (#28659341)

This [inist.fr] talks about neuronal replacement. It looks like your brain may have a write limit; it just automatically replaces worn-out bits.

Re:I'm always taken back by this (1)

Hurricane78 (562437) | more than 4 years ago | (#28658993)

Except if what I and many other people think is true: That the only difference between our spiking neural nets and Skynet is the processing power.

Re:I'm always taken back by this (1, Insightful)

4D6963 (933028) | more than 4 years ago | (#28659099)

the only difference between our spiking neural nets and Skynet is the processing power.

No, repeat after me: Putting a shitload of neural networks on a supercomputer won't create a strong AI.

A very common persistent misconception.

Re:I'm always taken back by this (4, Insightful)

ceoyoyo (59147) | more than 4 years ago | (#28659853)

"Repeat after me" is really annoying. If you're going to be that irritating you'd better have some pretty strong evidence to back yourself up. Where is it?

Re:I'm always taken back by this (1)

4D6963 (933028) | more than 4 years ago | (#28660037)

You want evidence that something infeasible (assuming you won't consider that we currently can't put enough neural nets in a supercomputer powerful enough to satisfy your demand) is impossible? I want some evidence that filling a 1-billion-cubic-meter bag with spiders won't turn it into an evil blob-like insect overlord. Where is it?

However, if by 'evidence' you mean 'reason', then I'll tell you that researchers spent decades putting ever-increasing shitloads of neural networks on ever more powerful supercomputers and that nothing in particular ever came out of it, and that neural networks are just algorithms anyway, so throwing a whole bunch of power at them won't solve it, as we know what neural networks can or cannot do. If neural networks could possibly create some sort of strong AI, even if it was only a bug-like AI, we'd know it by now. Don't be fooled by the association made with actual neurons. If you're interested in AI because of HAL 9000 and the like, then don't get into the field of AI, because there's nothing but everlasting disappointment for you.

Strong AI is to computer research as instant matter teleportation or (backwards) time travel are to astrophysics, nice in scifi, but never gonna happen.

Re:I'm always taken back by this (3, Insightful)

ceoyoyo (59147) | more than 4 years ago | (#28660187)

Ah yes, the "our computers are incredibly powerful and we've tried it and it didn't work so the whole class of solutions is obviously ruled out" argument.

Before you make (extremely condescending) statements that something is impossible, you should at least make sure you qualify your terms properly.

"I think it's very unlikely that using current neural network algorithms on computers with current or near future capacities will produce a strong AI" would be a good start.

We certainly do not know what the limits of "neural networks" (as a general class of algorithms) are. We also don't have anything like the computing power to properly simulate a neural network with a capacity where we'd expect to see "intelligence."

You might be correct. Then again, you may well not be. Even if you are, the only people who will listen to posts like yours are people who already agree with you.

wrong level of complexity (2, Insightful)

reiisi (1211052) | more than 4 years ago | (#28659289)

Kludge a lot of state machines together and you can simulate stack machines to a certain limit.

Kludge a lot of context free grammars together and you can simulate a context-sensitive grammar within certain limits. But it takes infinite stack, or, rather, infinite memory to actually build a context-sensitive grammar out of a bunch of context-free grammar implementations.

Intelligence is at least at the level one step beyond -- unrestricted grammar.

(Yeah, I'm saying we seem to have infinite tape and infinite stack, even though mortality is a little hard to see beyond.)
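The hierarchy being invoked is the Chomsky hierarchy, and the textbook separating examples are easy to sketch. No finite automaton recognizes a^n b^n (it has no unbounded counter), one counter or stack handles a^n b^n but not a^n b^n c^n, and so on up the ladder. An illustrative sketch (Python):

    import re

    def dfa_like_attempt(s, max_n=5):
        # A regex over fixed strings is a finite automaton: it can only approximate
        # a^n b^n up to a baked-in bound.
        return re.fullmatch("|".join("a" * n + "b" * n for n in range(max_n + 1)), s) is not None

    def anbn(s):
        # One counter (a degenerate stack) recognizes a^n b^n exactly -- context-free territory.
        n = len(s)
        return n % 2 == 0 and s == "a" * (n // 2) + "b" * (n // 2)

    def anbncn(s):
        # a^n b^n c^n is context-sensitive: no single pushdown stack recognizes it,
        # but with random-access memory the check is trivial.
        n = len(s)
        return n % 3 == 0 and s == "a" * (n // 3) + "b" * (n // 3) + "c" * (n // 3)

    print(dfa_like_attempt("a" * 6 + "b" * 6))       # False: past the bound, the 'DFA' fails
    print(anbn("a" * 6 + "b" * 6))                   # True
    print(anbncn("aaabbbccc"), anbncn("aabbbccc"))   # True False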

Re:wrong level of complexity (1)

CarpetShark (865376) | more than 4 years ago | (#28660001)

Intelligence is at least at the level one step beyond -- unrestricted grammar.

I don't see how you can claim unrestricted grammar when every language I know of uses the concepts of nouns, verbs, etc. Surely an unrestricted grammar would mean completely alien languages which are in no way directly translatable into other human languages.

Re:I'm always taken back by this (1)

LuxMaker (996734) | more than 4 years ago | (#28659089)

And actually the sooner we create Skynet - the better the chance we have to beat it. Because if we wait too long - that super fast hardware it will be running will could make it too hard to beat. ;)

And why did this suddenly make the tune "Just beat it" play in my mind? Michael Jackson vs. Skynet?

Re:I'm always taken back by this (1)

TapeCutter (624760) | more than 4 years ago | (#28659217)

"It's not a hardware breakthrough that'll create a true AI - it's an algorithm breakthrough that's required."

On the contrary, I think you need an algorithmic breakthrough to understand the brain, but you don't need a new algorithm to create a brain [bluebrain.epfl.ch]. Humans have built and used many things well before they had a theoretical basis for how they worked; for example, people were using levers to build pyramids long before Archimedes came along and gave us the "lever algorithm".

Re:I'm always taken back by this (1)

Burnhard (1031106) | more than 4 years ago | (#28659347)

it's an algorithm breakthrough that's required

That is, of course, making the assumption that intelligence doesn't require Consciousness and that Consciousness can be captured in an algorithm. Two somewhat dubious prerequisites.

Re:I'm always taken back by this (1)

The End Of Days (1243248) | more than 4 years ago | (#28659769)

Why dubious? There's no evidence either way, so poo-pooing the possibility doesn't make you smarter, it just makes you close-minded.

Re:I'm always taken back by this (1)

Burnhard (1031106) | more than 4 years ago | (#28659985)

so poo-pooing the possibility doesn't make you smarter, it just makes you close-minded.

As there's no evidence either way, metaphysical doubt, it seems to me, is quite a sensible position.

Re:I'm always taken back by this (1)

divisionbyzero (300681) | more than 4 years ago | (#28659389)

Right. AI is a software problem, not a hardware problem. That's not to say that current hardware could run the software should it ever be devised, but once we know what the software is we can build the hardware that will run it. So, how do we come up with the software if we don't have the hardware to run it? It's called philosophy.

Re:I'm always taken back by this (1)

ceoyoyo (59147) | more than 4 years ago | (#28659833)

You're still thinking about simulating intelligence using a standard computer, in which case you're right, you need the right algorithm.

What they're proposing is not to simulate a brain but to build one. There is no algorithm. It might be sensitive to how you wire things up, but probably not excessively so, otherwise it would be very difficult to evolve working brains. The key is getting the right components to build the thing out of.

Re:I'm always taken back by this (2, Interesting)

peragrin (659227) | more than 4 years ago | (#28658811)

Of course, if you multiply the 100 million or more transistors in a current CPU by 6 you don't have any kind of problem, do you? Of course, a memristor is closer in design to a permanent RAM disk. You can turn off the system as much as you want, but it instantly restores you right from where you left off.

Now that it is proven all that matters is figuring out how best to use it and what limitations it has.

That's what I meant. (0)

msgmonkey (599753) | more than 4 years ago | (#28658843)

I wasn't talking about replacing transistors with memristors; I'm talking about a completely different paradigm. Making neural networks using digital electronics doesn't work well at all, in the same way that trying to do a simple operation like adding two numbers using neural networks is very difficult. The memristors we have now are just the beginning; they haven't been developed nearly enough. Give it 30 years and we should have something much more advanced.

What we have on our desktops today are just glorified calculators. In the future we could have digital-analogue hybrid CPUs; we're reaching the limits of digital CPUs, but we haven't even started exploring proper neural-network-type processors (except for ones based on digital circuits).

Re:That's what I meant. (1)

ardor (673957) | more than 4 years ago | (#28659027)

First, we don't have glorified calculators on our desktops. Calculators usually aren't Turing complete; PCs are. As for the neural networks, while there are many problems that need to be overcome, digital technology isn't one of them.

Not really Turing complete. (1)

reiisi (1211052) | more than 4 years ago | (#28659153)

Effectively Turing complete within a certain range of speeds and requirements for state memory.

But the tape is finite.

So, yes, glorified calculating machines. (The boundary between the two is not as clearly defined as you assert.)

Re:I'm always taken back by this (3, Interesting)

GigaplexNZ (1233886) | more than 4 years ago | (#28659309)

Memristors in CPU/logic would not be viable because of their low wear cycles and very high latencies.

That's a current manufacturing limitation, not something inherent to what a memristor is. Had these been discovered much sooner, we would be much better at manufacturing them and they probably would have made a significant impact.

Re:I'm always taken back by this (1)

jerep (794296) | more than 4 years ago | (#28658767)

we've developed a whole industry based on an incomplete model

Wait, you mean this is the first time this has happened? I thought schools were the first to do that.

Don't forget also (0)

Anonymous Coward | more than 4 years ago | (#28658773)

religion.

Re:Don't forget also (2, Insightful)

CarpetShark (865376) | more than 4 years ago | (#28658789)

(Without contempt or disrespect) religion is a great example of how far you can get with an incomplete model. Enlightenment, which some would argue is the highest human state, is taught with nothing more than vague contradictions that hint at a different way of thinking. Most religions use similar techniques to some extent, and I suppose most education must to some degree as well.

That said, I think religion could not have come first, as it's basically a specialised educational system. Besides, you can't teach religion before you teach words, objects, etc.

Re:Don't forget also (0)

Anonymous Coward | more than 4 years ago | (#28659191)

But you can learn words/objects without someone teaching them.

Re:Don't forget also (1)

CarpetShark (865376) | more than 4 years ago | (#28659957)

Depends on your definition of teaching. Most of the education world would include being a role model, providing examples, as a type of teaching too.

enlightenment (1)

reiisi (1211052) | more than 4 years ago | (#28659197)

Some people believe that, in true religion, enlightenment is the realization of a rational basis to existence.

That is, half of enlightenment is the realization of the rational basis, and the other half is the realization that mortality pushes that rational basis ultimately beyond (mortal) human reach.

There seems to be some division as to whether giving up on understanding is preferred, since mortality is an absolute limit.

And there seems to be some further division as to whether mortality is really an absolute limit.

(And I see a metaphor here in the infinite tape of a Turing machine.)

The first time -- (1)

reiisi (1211052) | more than 4 years ago | (#28659173)

Adam and Eve.

Or, if you don't get the reference, us.

Humans have been doing this as far back as there have been humans. It is one of the things which sets us apart from the other animals. Or, it might be argued that this is just another way of looking at the only thing that separates us from the other animals.

Re:I'm always taken back by this (3, Informative)

Yvanhoe (564877) | more than 4 years ago | (#28658845)

No. This is a lot of gross exaggeration.
Our computers are Turing-complete. Point me to something that is missing in this before I get excited. This new component may have great applications, but it will "only" replace some existing components and functions. It is great to have it, but it is not something essentially missing.

Practically Turing complete. (2, Insightful)

reiisi (1211052) | more than 4 years ago | (#28659247)

Woops. Posted this below in the wrong sub-thread. Oh, well, post it here, too, with this mea culpa.

Not until we have infinite tape and infinite time to process the tape are our computers truly Turing complete.

Moore boasted that technology would always be giving us just enough more tape. I'm not so sure we should worship technology, but so far the tech has stayed a little ahead of the average need.

Anyway, this new tech may provide a way to extend the curve just a little bit further, keep our machines effectively Turing complete for the average user for another decade or so.

Or not. If Microsoft goes down, the average user may soon realize he has been seriously duped about computational needs.

Electrical Memristors Don't Exist Yet (5, Informative)

indigest (974861) | more than 4 years ago | (#28658731)

From the article:

What was happening was this: in its pure state of repeating units of one titanium and two oxygen atoms, titanium dioxide is a semiconductor. Heat the material, though, and some of the oxygen is driven out of the structure, leaving electrically charged bubbles that make the material behave like a metal.

The memristor they've created depends on the movement of oxygen atoms to produce the memristor-like electrical behavior. Purely electrical components such as resistors, capacitors, inductors, and transistors only rely on the movement of electrons and holes to produce their electrical behavior. Why is this important? The chemical memristor is an order of magnitude slower than the theoretical electrical equivalent, which no one has been able to invent yet.

I think the memristor they've created is a great piece of technology and will certainly prove useful. However, it is like calling a rechargeable chemical battery a capacitor. While both are useful things, only one is fast enough for high speed electronics design for applications like the RAM they mentioned. On the other hand, a chemical memristor could be a flash memory killer if they can get the cost down (which I doubt to happen any time soon).

Re:Electrical Memristors Don't Exist Yet (1)

j0hnyquest (1571815) | more than 4 years ago | (#28658799)

I completely agree. Right now, the memristor is more of something theoretical physicists splooge over rather than something that will be anywhere near useful in the foreseeable future. I believe it's like a lot of the quantum innovations currently being worked on (read IEEE Potentials, there are a bunch)... lots of potential, but nothing concrete yet.

You're right of course (1)

msgmonkey (599753) | more than 4 years ago | (#28658937)

but on the other hand a neuron works with electrochemical signaling and the design seems to be quite good :)

Re:Electrical Memristors Don't Exist Yet (1)

Hurricane78 (562437) | more than 4 years ago | (#28658997)

How about an optical memristor?

Why focus on hopefully-soon-outdated technology? :)

Practically Turing complete. (1)

reiisi (1211052) | more than 4 years ago | (#28659227)

Not until we have infinite tape and infinite time to process the tape are our computers truly Turing complete.

Moore boasted that technology would always be giving us just enough more tape. I'm not so sure we should worship technology, but so far the tech has stayed a little ahead of the average need.

Anyway, this new tech may provide a way to extend the curve just a little bit further, keep our machines effectively Turing complete for the average user for another decade or so.

Or not. If Microsoft goes down, the average user may soon realize he has been seriously duped about computational needs.

Re:Electrical Memristors Don't Exist Yet (0)

Anonymous Coward | more than 4 years ago | (#28659513)

I think the memristor they've created is a great piece of technology and will certainly prove useful. However, it is like calling a rechargeable chemical battery a capacitor. While both are useful things, only one is fast enough for high speed electronics design for applications like the RAM they mentioned.

The memristor is an analog device. Achieving a function providing some infinitesimal value between zero and one is a complex algorithm overlaid on binary. That implies a time component. Doing this in a relational array takes even more time.

In short, as far as encoding information state goes, the memristor is potentially an order of magnitude more powerful.

Re:Electrical Memristors Don't Exist Yet (1)

tennin (527730) | more than 4 years ago | (#28659823)

The charge carriers are still electrons. Oxygen vacancy migration tunes the distance the electrons must tunnel. At the extremely small scale where memristance becomes dominant, the speed of ion diffusion seems negligible.

Why not renaming it to memistor? (0)

Anonymous Coward | more than 4 years ago | (#28658763)

Putting "mr" in a word can lead to pronunciation difficulties, just google for words containing "mr" then exclude all abbreviations of mister to find how rarely the sequence it's used. Renaming it to "memistor" would help greatly. Also, the wikipedia page for memristor already contains a reference to memistor.

Re:Why not renaming it to memistor? (1)

Anne Thwacks (531696) | more than 4 years ago | (#28658805)

Putting "mr" in a word can lead to pronunciation difficulties,

For whom? Think of it as mem'ristor. There, how hard is that? It is true that the pronunciation of the letter "r" is quite different in Sierra Leone and Japan, but it's hardly a major problem, and the presence of the "m" in front of it isn't a problem for anyone I know.

The idea that the devices are a "major breakthrough" is a problem, though - how do these differ from any number of other devices producing "negative resistance" through phase change? As others have pointed out, it's slow and awkward to use.

A viable memristor-based FPGA might be interesting, and more practical than other memory applications, I guess. Probably more patentable than chocolate-chip cookies as well (but not by a lot - it is clearly obvious to those "sufficiently skilled in the art", i.e. me).

Re:Why not renaming it to memistor? (1)

svunt (916464) | more than 4 years ago | (#28658983)

Putting "mr" in a word can lead to pronunciation difficulties, just google for words containing "mr" then exclude all abbreviations of mister to find how rarely the sequence it's used. Renaming it to "memistor" would help greatly. Also, the wikipedia page for memristor already contains a reference to memistor.

The 'm' and 'r' are in different syllables, so it's really not an issue. I assume you can handle 'Tim Robbins', so you can handle 'memristor'.

Re:Why not renaming it to memistor? (1)

Twinbee (767046) | more than 4 years ago | (#28660085)

Perhaps he thought it was pronounced "me mristor", as in: "Oill get me mristor out torday!".

Not nature... (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#28658769)

There is no explanation for the creation of the human brain because God made man.

Thanks.

Re:Not nature... (0)

Anonymous Coward | more than 4 years ago | (#28658797)

Citation Needed.

Re:Not nature... (0, Troll)

beelsebob (529313) | more than 4 years ago | (#28658867)

[The Bible]

What's needed is not a citation, it's a proof, or at least a tiny amount of evidence.

Tha's goint to be the NEXT BIG THING (1)

12357bd (686909) | more than 4 years ago | (#28658787)

in the computer world.

The question is: will we see the result in our lives?

I really wish so, but success has stalled computer innovation. Thirty years ago we expected to be able to talk to our machines; now these advances can finally make it possible. Will the industry and economics be able to adapt to make it possible within our lifetimes?

Re:Tha's goint to be the NEXT BIG THING (3, Informative)

Yvanhoe (564877) | more than 4 years ago | (#28658861)

AI needs new algorithms to progress. Electronics will not change the way we program computers. They are already Turing complete; a new component adds nothing to the realm of what a device can compute. Expect a revolution in electronics, but IT people will not see a single difference (except maybe a slight performance improvement).

Re:Tha's goint to be the NEXT BIG THING (1)

avg_joe_01 (756831) | more than 4 years ago | (#28658927)

Turing complete

You keep using that word. I do not think it means what you think it means.

Re:Tha's goint to be the NEXT BIG THING (1)

4D6963 (933028) | more than 4 years ago | (#28659143)

Huh??! Please explain how this was used incorrectly in the GP post, and then explain what it means to you.

Re:Tha's goint to be the NEXT BIG THING (1)

ceoyoyo (59147) | more than 4 years ago | (#28659977)

Our computers are not Turing complete. To be so, they would require infinite memory and infinite time.

"Turing complete" is one of those things know-it-alls say like "correlation does not imply causation." The vast majority of statements on Slashdot that contain those phrases are made by people who do not have more than a very superficial knowledge of either one.

Which is a pity in this information age. Even Wikipedia has both right: Turing Complete [wikipedia.org] , Correlation Does Not Imply Causation [wikipedia.org] .
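A minimal sketch of the distinction, for the record: a real computer can simulate a Turing machine only until its finite tape runs out, which is exactly the gap between "Turing complete" on paper and in practice. (The machine below is a made-up toy that just writes 1s forever.)

    # A tiny Turing machine: (state, symbol) -> (write, move, next state).
    RULES = {("go", 0): (1, +1, "go")}   # write a 1, step right, stay in the same state

    def run(tape_cells):
        tape, head, state, steps = [0] * tape_cells, 0, "go", 0
        while (state, tape[head]) in RULES:
            write, move, state = RULES[(state, tape[head])]
            tape[head] = write
            head += move
            steps += 1
            if not 0 <= head < len(tape):
                return f"halted after {steps} steps: ran off the {tape_cells}-cell tape"
        return f"halted normally after {steps} steps"

    print(run(10))     # the abstract machine never halts; the finite one always does
    print(run(1000))   # more memory only postpones the same outcome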

Re:Tha's goint to be the NEXT BIG THING (2, Insightful)

12357bd (686909) | more than 4 years ago | (#28658931)

Old designs were not fully explored, i.e. Turing's 'intelligent or trainable' [alanturing.net] machines. This kind of electronics can make those old concepts viable; that's IMO the NEXT BIG THING, not just algorithms (looped circuitry is not hard to simulate, but it is hard to predict).

The von Neumann architecture of our 'computers' was just one possibility - not the only one or the best, just the convenient one. New hardware processing abilities could lead to new kinds of machinery: maybe not 'programmable' in the current sense of the word, but 'trainable'.

Re:Tha's goint to be the NEXT BIG THING (1)

4D6963 (933028) | more than 4 years ago | (#28659129)

but the succes has stalled computer innovation

No, reality got in the way. As much as you may want to have a HAL 9000 in your computer, it's not going to happen, because as far as we know it might just be theoretically impossible to create something like that.

Thirty years ago we expected to be able to talk to our machines, now those advances can make it finally possible.

No it's not. What makes you think it's gonna help with anything you talk about? That's typical of throwing the word "neuron" into a technology story: as soon as you do, you have a bunch of readers peeing their pants fantasising about HAL 9000/Skynet/whatever else you people think is a cool sci-fi example of strong AI.

Re:Tha's goint to be the NEXT BIG THING (1)

Baron_Yam (643147) | more than 4 years ago | (#28659653)

I can point to a common example of a machine that produces intelligence... the human brain.

So, given that nature did it once I'm confident it's not theoretically impossible.

Re:Tha's goint to be the NEXT BIG THING (1)

4D6963 (933028) | more than 4 years ago | (#28659955)

A machine is a device. A device is a human invention. Men didn't invent brains.

Besides by theoretically impossible I was talking about doing it algorithmically.

Re:Tha's goint to be the NEXT BIG THING (1)

Baron_Yam (643147) | more than 4 years ago | (#28660069)

You're talking religion, not science.

The brain is an electro-chemical machine 'designed' by random chance controlled by natural selection.

The algorithms are in there, and there is no reason to believe they can't be copied by man, or even that we might not figure them out ourselves.

If the algorithms were theoretically impossible then your brain wouldn't exist.

Not Nature...REVISITED...perfect sense... (1)

griffinfinity (121020) | more than 4 years ago | (#28658879)

Interesting information...

The only bone I have to pick over this is: what is 'artificial' in regards to intelligence? Given that mankind has been on a mission to recreate 'itself' since day one, and that everything we do is in support of that, there is nothing artificial at all about what we seek to fashion. We are not replicators on Kirk's tugboat; we are those who seek to become that which we cannot find, the Creator. I dunno, wasn't it Waters who mumbled something about it all making perfect sense?

Artificial intelligence? (5, Insightful)

pieterh (196118) | more than 4 years ago | (#28658903)

The amazing thing is that we consider individual brains to be "intelligent" when it seems pretty clear we're only intelligent as part of a social network. None of us are able to live alone, work alone, think alone. The concept of "self" is largely a deceit designed to make us more competitive, but it does not reflect reality.

So how on earth can a computer be "intelligent" until it can take part in human society, with the same motivations and incentives: collect power, knowledge, information, friends, armies, territories, children...

Artificial intelligence already exists and it's called the Internet: it's a technology that amplifies our existing collective intelligence, by letting us connect to more people, faster, cheaper, than ever before.

The idea that computers can become intelligent independently and in parallel with this real global AI is insane, and it has always been. Computers are already part of our AI.

Actually, the telegraph was already a global AI tool.

But, whatever, boys with toys...

Re:Artificial intelligence? (5, Insightful)

dcherryholmes (1322535) | more than 4 years ago | (#28658979)

But I could stick you on a deserted island all by yourself and you would still be intelligent, right? I'm not denying that we are deeply social creatures, nor that a full definition of an organism must necessarily include a description of its environment. But I think you are confusing the process by which we become intelligent with intelligence itself.

Re:Artificial intelligence? (1)

DMoylan (65079) | more than 4 years ago | (#28659919)

but we are social animals. a simple illness or injury will kill a lone human unable to feed themselves temporarily whereas a human in most societies will be cared for until they are literally back on their feet.

but that is a fully developed adult isolated on an island. a human is intelligent not solely because of their genetics but of the society they grow up in. look at the numerous cases where children have been reared by animals. http://en.wikipedia.org/wiki/Feral_child#Documented_cases [wikipedia.org]

most of those have terrible difficulty adapting at a later than normal age to functioning in human societies. most have difficulty in communicating beyond the most basic needs. we learn so much in those crucial first few years. fire, wheel, levers

i would say that if you took 1000 children (pre school) and put them on that large island in such a way that most would be able to feed themselves you would not end up with intelligence for quite a while. how many generations would it take before language and tool use would be considered intelligent?

i still think turing's test is the best baseline for recognition of artificial intelligence. yet chatbots fool people every day. http://en.wikipedia.org/wiki/Chatterbot#Malicious_chatterbots [wikipedia.org]

are they intelligent? perhaps we just need a new definition of what is artificial intelligence.

my 2c

Re:Artificial intelligence? (1)

dcherryholmes (1322535) | more than 4 years ago | (#28659993)

I'm not disputing anything that you wrote. I'm simply saying that the quality we loosely describe as "intelligence" can inhere in an isolated autonomous unit, even if it could never have naturally arisen that way. Therefore, I conclude that it would be possible to have AI in a set-top box. Or at least, there's nothing in our natures that refutes it.

Re:Artificial intelligence? (2, Insightful)

hitmark (640295) | more than 4 years ago | (#28659001)

and here i keep observing that the overall intelligence in a room drops by the square of the number of people in said room...

Re:Artificial intelligence? (4, Insightful)

Hurricane78 (562437) | more than 4 years ago | (#28659011)

None of us are able to live alone, work alone, think alone.

Did you come up with this because of your own ability to do so?
Because except for reproduction, we can easily survive our whole life alone.
Sure it will be boring. But it works.

The idea that computers can become intelligent independently and in parallel with this real global AI is insane, and it has always been.

Says who? You, because you need to base your arguments on it? ^^
You will see it happening in your lifetime. Wait for it.

Re:Artificial intelligence? (0)

Anonymous Coward | more than 4 years ago | (#28659037)

Yeah right... And from what age? Provide a small child with all the food and shelter it requires, but no human interaction, and see how long they survive. I am assuming you mean you could live life all alone, after you were raised by your parents, taught language and how to prepare your food and shelter etc etc etc.

Re:Artificial intelligence? (0)

Anonymous Coward | more than 4 years ago | (#28659033)

The amazing thing is that we consider individual brains to be "intelligent"

No it's not.

...when it seems pretty clear we're only intelligent as part of a social network.

If you take someone's social network away and they're dumb, then I'd suggest they were dumb in the first place.

The concept of "self" is largely a deceit designed to make us more competitive, but it does not reflect reality.

I'm actually making this post. Really.

Artificial intelligence already exists and it's called the Internet

No it's not. The internet is called the Internet.

it's a technology that amplifies our existing collective intelligence

Er... what? You copy that from a magazine cover?

The idea that computers can become intelligent independently and in parallel with this real global AI is insane, and it has always been.

What if they were attached to another identical internet? That would imply that "your" idea is insane too.

But, whatever, boys with toys...

I'm not sure what you mean. Your vagina got in the way.

Re:Artificial intelligence? (1)

Paradigma11 (645246) | more than 4 years ago | (#28659077)

Actually, we usually don't consider brains intelligent; we consider humans intelligent. As for needing a society to be intelligent, well, you need all kinds of stuff to be able to show intelligent behaviour (air is a biggie). I do not think that considering ourselves as distinct parts of reality is false. These problems only originate because of Western philosophy's use of language and set theory in this regard. An Indian philosopher once said: all problems of Western philosophy exist because you are able to say that "something is" without adding an attribute like blue.

Re:Artificial intelligence? (0)

Anonymous Coward | more than 4 years ago | (#28659155)

Your mom called. She wants her bong back.

Re:Artificial intelligence? (1)

4D6963 (933028) | more than 4 years ago | (#28659165)

The amazing thing is that we consider individual brains to be "intelligent" when it seems pretty clear we're only intelligent as part of a social network. None of us are able to live alone, work alone, think alone. The concept of "self" is largely a deceit designed to make us more competitive, but it does not reflect reality.

No, you're completely wrong. It's sufficiently obvious why that I don't feel the need to elaborate.

Actually, the telegraph was already a global AI tool.

No, it's called a network. You seem to fail to see the difference between a network and an intelligence. I don't think you know what intelligence means.

Re:Artificial intelligence? (1)

mugurel (1424497) | more than 4 years ago | (#28659239)

I believe that what the original poster wanted to express is that it makes no sense to talk of intelligent behavior without a context and, in the limit, a social framework. An action of an individual can be called intelligent in such a social framework, but if you take away that context, what criteria are left to judge the action as an intelligent one?

The point is that the meaning of the predicate "intelligent" is very complex. Unlike "rational", which depends only on (roughly) your definition of goals and actions.

Re:Artificial intelligence? (0)

Anonymous Coward | more than 4 years ago | (#28659897)

Until the tubes can pass the Turing test, calling them AI is incorrect in my book, literally.

Free transistors (3, Informative)

w0mprat (1317953) | more than 4 years ago | (#28658975)

Transistors are naturally analog; it's only that we force them to be digital. If we are prepared to accept more probabilistic outputs then there are massive gains to be had http://www.electronista.com/articles/09/02/08/rice.university.pcmos/ [electronista.com]. Work is being done with analog computing too.

I think memristors will be complementary to existing technology rather than a revolution on their own, yet analog transistors would have George Boole flip-flopping between orientations in his grave.
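The probabilistic-CMOS idea in that link is roughly: let the low-order bits be wrong some of the time and spend the saved energy elsewhere, because many workloads only care about aggregate or perceptual results. A toy illustration (the 5% flip rate and operand ranges are arbitrary):

    import random

    def noisy_add(a, b, flip_prob=0.05, noisy_bits=4):
        # Add two integers, but let each of the lowest `noisy_bits` bits of the result
        # flip with probability `flip_prob`, standing in for an energy-starved adder.
        s = a + b
        for bit in range(noisy_bits):
            if random.random() < flip_prob:
                s ^= 1 << bit
        return s

    random.seed(1)
    xs = [random.randrange(1000, 2000) for _ in range(10000)]

    acc = 0
    for x in xs:
        acc = noisy_add(acc, x)

    print(f"exact mean {sum(xs) / len(xs):.2f}   noisy mean {acc / len(xs):.2f}")
    # The aggregate barely moves even though individual additions were unreliable --
    # the kind of error-tolerant workload where probabilistic hardware can trade accuracy for energy.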

Re:Free transistors (1)

Twinbee (767046) | more than 4 years ago | (#28660051)

I think even with 1000x performance, it will be hard to return to analog. There's something about the 100% copyability of data, determinism and exactness of digital which analog can't hope to achieve.

Maybe 1,000,000x would win me over, however...

whatever (2, Interesting)

jipn4 (1367823) | more than 4 years ago | (#28659009)

In the 1970's, the big breakthrough was supposedly tunnel diodes, a simpler and smaller circuit element than the transistor. Do our gadgets now run on tunnel diodes? Doesn't look like it to me.

Re:whatever (0)

Anonymous Coward | more than 4 years ago | (#28659667)

In the 1970's, the big breakthrough was supposedly tunnel diodes, a simpler and smaller circuit element than the transistor. Do our gadgets now run on tunnel diodes? Doesn't look like it to me.

http://en.wikipedia.org/wiki/Tunnel_diode [wikipedia.org]

They were in use in the '60s, and invented and first manufactured in the '50s. They don't exhibit gain. Really, your comment makes no sense whatsoever.

It was 1960s, and they were quickly obsoleted (2, Informative)

Kupfernigk (1190345) | more than 4 years ago | (#28659681)

The Esaki (tunnel) diode is a two-terminal device which basically exists in two states (I am simplifying, I know) at two different currents. Its weakness is that (a) it requires a current source to keep it in one or the other state, and (b) both input (changing state) and output need amplifying devices. As soon as CMOS became fast enough, things like tunnel diodes were dead in the water, because a CMOS transistor does its own amplifying and requires almost no power to keep in one state rather than the other.

Therefore, a device which requires effectively no power to keep in one of two states, and has much greater speed than either flash or magnetic domains would be a step forward compared to the current state of the art.

Could this explain memory loss in old age? (1)

abhikhurana (325468) | more than 4 years ago | (#28659029)

If the brain were indeed made of memristors and these had finite write cycles, could it be that once we have reached these write cycles, the memristors stop being of any use? Of course, the brain would try to minimise damage to memristors by spreading the data around, but you would eventually reach a limit and the same memristors would be overwritten again and again, until you start reaching the write limit for some of them, which might explain why we start losing memory after reaching our 30s or so.

I suppose the way to check it, potentially, would be to see if people who have impaired senses (e.g. someone who is deaf or dumb etc.) show better brain function in old age, as they didn't have as much data to store as someone who was getting data from all the senses.
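The 'spreading the data around' idea is what flash controllers call wear levelling, and its payoff is easy to quantify in a toy model. The sketch below uses the 10,000-cycle endurance figure quoted upthread purely for illustration:

    ENDURANCE = 10_000   # writes a single cell survives before wearing out (illustrative)
    CELLS = 100          # number of cells in the toy memory

    def writes_until_first_failure(spread):
        wear = [0] * CELLS
        writes = 0
        while True:
            target = writes % CELLS if spread else 0   # round-robin vs hammering one cell
            wear[target] += 1
            writes += 1
            if wear[target] >= ENDURANCE:              # this write just wore a cell out
                return writes

    print("no wear levelling:", writes_until_first_failure(False), "writes before a cell dies")
    print("round-robin levelling:", writes_until_first_failure(True), "writes before a cell dies")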

Like we need more Artificial Intelligence... (1)

3seas (184403) | more than 4 years ago | (#28659137)

... don't we have enough people producing this already?

all thanks to Chua ! :-) (0)

Anonymous Coward | more than 4 years ago | (#28659203)

For 50 years, electronics engineers had been building networks of dozens of transistors - the building blocks of memory chips - to store single bits of information without knowing it was memristance they were attempting to simulate.

  1. invent the new name for old things,
  2. invest in advertisement until it sticks,
  3. soon you can claim (tacitly at first) you invented it,
  4. PROFIT !

What stupid engineers they were, designing ever-improving memory chips for half a century and not knowing they should call them "memristors". Memristors, memristors, memristors... That's what's inside your RAM, HD, flash, all thanks to Chua! :-)

K.L.M.
