
Neuroscientists At MIT Developing DNI

Zonk posted more than 8 years ago | from the jack-in-and-tune-out dept.

Science 126

coolphysco1010 wrote to discuss the possible development of a direct neural interface, à la 'The Matrix', that could eventually allow for instant object recognition. From the article: "Now, neuroscientists in the McGovern Institute at MIT have been able to decipher a part of the code involved in recognizing visual objects. Practically speaking, computer algorithms used in artificial vision systems might benefit from mimicking these newly uncovered codes ... In a fraction of a second, visual input about an object runs from the retina through increasingly higher levels of the visual stream, continuously reformatting the information until it reaches the highest purely visual level, the inferotemporal (IT) cortex. The IT cortex identifies and categorizes the object and sends that information to other brain regions."
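The pipeline the summary describes (retina, then successively higher visual areas, then IT) can be caricatured in a few lines. This is a toy sketch, not the study's model: the layer sizes, random weights, and rectifying nonlinearity are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [64, 48, 32, 16]  # "retina" -> two mid-level stages -> "IT"
weights = [rng.normal(scale=0.3, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def visual_stream(image_vec):
    """Reformat a flattened 'image' through successive stages and return
    the final IT-level code that a downstream area could classify."""
    x = image_vec
    for w in weights:
        x = np.maximum(w @ x, 0.0)  # each stage re-encodes its input
    return x

it_code = visual_stream(rng.normal(size=64))
print(it_code.shape)  # (16,)
```

The point of the cascade is only that each stage produces a new encoding of the same object; the article's claim is that the code at the last stage is readable enough to identify the object.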


126 comments


Imagine the possibilities... (3, Funny)

theufo (575732) | more than 8 years ago | (#14014706)

for adult entertainment.

Re:Imagine the possibilities... (4, Funny)

exaviger (928938) | more than 8 years ago | (#14014711)

wohoo, alter my IT neurons to think my girl friend looks like buffy :)

Re:Imagine the possibilities... (1)

ozmanjusri (601766) | more than 8 years ago | (#14014774)

alter my IT neurons to think my girl friend looks like buffy

Well, it's a better ambition than "Thing" from the Addams Family, I suppose.

Re:Imagine the possibilities... (1)

speed_of_light (930261) | more than 8 years ago | (#14015203)

Linux users could be made to believe that they're using Windows... Oh wait, that's not possible. They'd know something was up, because the system would crash and set itself on fire as an act of mercy to itself.

Re:Imagine the possibilities... (1)

CapnGrunge (233552) | more than 8 years ago | (#14016433)

You misspelt Willow

Re:Imagine the possibilities... (-1, Troll)

Anonymous Coward | more than 8 years ago | (#14014745)

Ironically, at almost 6am, a friend and I were sitting here watching Tomb Raider while smoking crippy. The discussion went as follows:

[crippyfriend] lets watch tomb raider on showtime. Angelina Jolie is hot

[me] yeah she is hot lets look at her titties through google

[crippyfriend] dude, is she hot enough that you would eat her snatch right after brad pitt nutted in it?

[me] no

[crippyfriend] yeah you would..

[me] would i be entitled to half her money after i did that?

[crippyfriend] no but you wouldn't be able to eat her snatch otherwise.

[me] I still wouldn't do it. that's sick. no CLAAAAAM CHOOOOWDAAAAAA for me

[crippyfriend] how about Jennifer Anderson?

[me] lets see if we can find her titties online too

[crippyfriend] brad pitt is a lucky fucker, he dumped Anderson for Jolie. I wish I was presented with choices like that in life.

[me] yeah that would be a tuff decision.

[crippyfriend] that lucky bastard

[me] maybe in 20 years there will be a VR type of deal where that very senario could come to fruition.

[crippyfrien] oh yeah?

[me] let me check slashdot..

Re:Imagine the possibilities... (5, Funny)

Cruithne (658153) | more than 8 years ago | (#14014748)

Now were you R'ingTFA, or were you looking at the woman in the red dress?

Re:Imagine the possibilities... (1)

value_added (719364) | more than 8 years ago | (#14014765)

It must have been the woman. The article describes researchers busy showing pictures of toys and yams to monkeys and looking for some kind of response.

Ignoring how depressing a Day in the Life of a Research monkey must be, I'm wondering why they wouldn't opt for something more stimulating [lhup.edu] ?

Re:Imagine the possibilities... (1)

AnonymousYellowBelly (913452) | more than 8 years ago | (#14014869)

"Look back..." (goatse appears)
"Arghhhh!!!!"

Troll: "hehe, another /.er bites the dust"

Re:Imagine the possibilities... (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#14014759)

Yeah, while the government has you shackled to a gang and working hard on the global plantation, you get the nice "adult entertainment". Enjoy.

Re:Imagine the possibilities... (0)

Anonymous Coward | more than 8 years ago | (#14014896)

AutoSlashPost

FINALLY! I've been waiting for this forever, the ultimate micro$oft killer!

I'll bet mac os can interface with my visual cortex without causing me to reboot!

End AutoSlashPost

MIT Scientists More Ethical than Korean Scientists (0)

Anonymous Coward | more than 8 years ago | (#14016395)

According to an article [wsj.com] in The Wall Street Journal, Dr. Woo Suk Hwang attained international fame by successfully cloning a human embryo, but he accomplished his feat by pressuring a lab worker into donating her own eggs. Consequently, Gerald Schatten, a cell biologist at the University of Pittsburgh, has severed his ties with Dr. Hwang, citing gross breaches of ethics.

Below is the full article in the event that the above link is inaccessible.

U.S. Scientist Quits Stem-Cell Alliance
By a WALL STREET JOURNAL Staff Reporter
November 12, 2005; Page A5A

A prominent U.S. scientist is withdrawing from an international collaboration to create human embryonic stem cells.

Gerald Schatten, a cell biologist at the University of Pittsburgh, said he was severing all collaborations with the laboratory of Dr. Woo Suk Hwang of Seoul University.

Dr. Hwang, a veterinarian, has drawn international applause for leading the first effort to clone human embryos and extract their stem cells. Last month, he announced the formation of the World Stem Cell Foundation, an international alliance aimed at spreading that technology.

Dr. Schatten, who was to have led the organization's board of directors, says he is now severing collaboration with Dr. Hwang, due to questions over the source of human eggs used in a 2004 cloning project, and errors in a 2005 paper coauthored by the scientists.

A 2004 news report in the journal Nature said at least one female laboratory worker had provided eggs for the project, an allegation that Dr. Hwang has denied on several occasions. Under U.S. rules, collecting eggs from women working on a cloning project would be considered unethical. In the original paper, published by the journal Science last year, the scientists said the eggs all came from anonymous donors.

Sweet mother of brain implants. (3, Funny)

Mecdemort (930017) | more than 8 years ago | (#14014716)

I'm going to be first in line for the new computer interface brain implants. Hopefully they don't run windows.

Re:Sweet mother of brain implants. (2, Interesting)

annex1 (920373) | more than 8 years ago | (#14014721)

Consider me second in line. I can't tell you how much of an improvement for the species this will be. Better yet, the blind may finally have a hope of receiving ACTUAL replacement vision, not a poor substitute. :D

Re:Sweet mother of brain implants. (1)

exaviger (928938) | more than 8 years ago | (#14014724)

hmmm, I guess that is more of a noble cause than making my girlfriend look like Buffy.

Re:Sweet mother of brain implants. (2, Funny)

Mecdemort (930017) | more than 8 years ago | (#14014734)

Indeed, I was hinting at a new high of laziness that could be achieved. No longer would you have to move your fingers to use the computer. It would be possible to actually vegetate.

Re:Sweet mother of brain implants. (0)

Anonymous Coward | more than 8 years ago | (#14014804)

So, could Windows be considered LSD 2.0?

Re:Sweet mother of brain implants. (1)

Frogbert (589961) | more than 8 years ago | (#14014852)

As a nerd I am often on the cusp of technology and I am generally not afraid of picking up first generation products. However I think I can confidently say that when it comes to brain implants I'm going to wait a bit for the dust to settle before taking the plunge.

Re:Sweet mother of brain implants. (2, Interesting)

Artifakt (700173) | more than 8 years ago | (#14014914)

I'm thinking of a Walter Jon Williams novel, 'Hardwired'. The protagonist has cyber-eyes, and bought them just when the company had gotten most of the bugs out, but while they were still trying unusual features like sepia-tone overlays, film noir settings, and the like to see where consumer interest lay. They then dropped all these features to make cheaper, more basic designs they could sell to the broadest possible market. I'm with Alexander Pope on this one:

"Be not the first by whom the new are tried -
Nor yet the last to lay the old aside."

Re:Sweet mother of brain implants. (1)

EnsilZah (575600) | more than 8 years ago | (#14015019)

In brain-implanted America, Windows runs YOU!
=\

Re:Sweet mother of brain implants. (3, Informative)

Anonymous Coward | more than 8 years ago | (#14015028)

They don't work well. I was involved in artificial vision implant work a few years ago, and the current spread from the electrodes is too large to stimulate the small numbers of neurons necessary to give anything other than a "light blob". The lab put a 5x5 grid of electrodes in a human eye, and the patient reported what they could perceive from current applied to it; the surgery was only done on people about to have the eye removed anyway for medical reasons, so there was little risk of hurting a good eye.

If the electrodes could be improved, with work like that of David Edell at http://www.ninds.nih.gov/funding/research/npp/sow/N01-NS-2-2347SOW.pdf [nih.gov] , then it might become possible to actually provide the shape of a letter to an artificial eye. But not even tapping the brain directly will get past the need to localize current for the nerves, and fascinating things happen around smaller and smaller electrodes that make it very tough. It's also delicate, expensive work, with lots of need for using lab animals and careful data and record keeping to be sure you're measuring what you think you're measuring. The concept of giving someone a neural jack that provides high-bandwidth computer data of any sort is, and remains, utter fiction.
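The current-spread problem described above can be sketched numerically. This is a back-of-envelope toy, not the lab's model: the 1/r falloff, the threshold, and every constant are invented for illustration.

```python
import numpy as np

def activated_fraction(current_uA, threshold_uA, positions_um):
    """Fraction of neurons whose local stimulus exceeds threshold, assuming
    a simple 1/r falloff from a point electrode at the origin."""
    r = np.linalg.norm(positions_um, axis=1)
    local = current_uA / np.maximum(r, 1.0)  # cap the singularity at r=0
    return (local > threshold_uA).mean()

rng = np.random.default_rng(3)
# 2000 neurons scattered in a 1 mm cube around the electrode (units: um).
neurons = rng.uniform(-500, 500, size=(2000, 3))

# Raising the current recruits everything nearby as well: there is no
# setting that activates a small, specific group at a distance.
fractions = [activated_fraction(i, 0.5, neurons) for i in (50, 200, 1000)]
print(fractions)
```

The monotone jump from a tiny fraction to essentially the whole population is the "light blob" problem in miniature: selectivity, not raw drive, is what's missing.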

Surprisingly, the implants for artificial hearing work very well: having the auditory nerve laid out, low frequency to high frequency, along the bony tube of the cochlea helps localize the current to just the nerves you want to hit with each electrode.
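The tonotopic trick described above (frequencies laid out low-to-high along the cochlea) is roughly how implant processors assign channels. A minimal sketch, with the electrode count and frequency range chosen arbitrarily (real devices use tuned maps such as Greenwood's function):

```python
import numpy as np

def electrode_for_frequency(freq_hz, n_electrodes=22, f_lo=200.0, f_hi=8000.0):
    """Map a frequency to an electrode index (0 = low-frequency end),
    using log-frequency spacing along the array."""
    f = np.clip(freq_hz, f_lo, f_hi)
    frac = (np.log(f) - np.log(f_lo)) / (np.log(f_hi) - np.log(f_lo))
    return int(round(frac * (n_electrodes - 1)))

print(electrode_for_frequency(250))   # near the low-frequency (apical) end
print(electrode_for_frequency(4000))  # near the high-frequency (basal) end
```

Because each electrode sits next to the stretch of nerve it is meant to drive, even coarse current localization yields usable percepts, unlike the visual case above.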

Re:Sweet mother of brain implants. (1)

RockModeNick (617483) | more than 8 years ago | (#14016058)

My girlfriend has a brain implant to treat her dystonia: a timing device in her chest, with a wire running up her neck into her skull and brain, delivers direct stimulation to certain neurons and reduces involuntary muscle contractions.

Windows brain implants (1)

Esteanil (710082) | more than 8 years ago | (#14015337)

Imagine the possibilities: now even the Slashdot crowd can get girls, just hack her brain ;-)

Re:Sweet mother of brain implants. (0)

Anonymous Coward | more than 8 years ago | (#14015859)

Thank you for the amusing and, more importantly, strikingly original joke, user 930017. I see you've already made yourself at home. Mindless expression of group think is the way we do things around here. Picking on MS is the Slashdot equivalent of picking on the nerdy guy in class, but that's what it takes to fit in. Nothing wrong with that.

Very well. Carry on. Enjoy your first week.

Poor Monkeys (3, Insightful)

el americano (799629) | more than 8 years ago | (#14014725)

I don't want to see the results when they start trying to recreate those neural patterns in the monkeys' brains. Honestly, to say that observing these kinds of patterns brings us any closer to injecting images directly into the brain, when we have so little technology to do that (knives and chemicals, basically), is ludicrous. I suppose the writer, rather than the scientists, can probably take all the credit for that exaggeration.

Re:Poor Monkeys (0)

Anonymous Coward | more than 8 years ago | (#14015151)

Please. Knives and chemicals? Your over-simplification may sound cute and clever, but it does a disservice to the human endeavor that goes into these projects. You forgot to mention nuclear magnetic resonance imagers (MRI), microelectromechanical systems (MEMS) manufacturing, supercomputers and computer-aided design, video editors and processors, etc. I'd say what little technology we have (superconductors, microchips, and reliable human brain maps) is more than adequate for the task.

Re:Poor Monkeys (1)

ehrichweiss (706417) | more than 8 years ago | (#14015361)

Well, I have first-hand experience from experimenting with a device that makes you feel motion [slashdot.org], which had me seeing flashes of light when I turned the levels all the way up. Admittedly that's not injecting images into my visual cortex, but other than being hit in the head REALLY hard, I've never seen such a thing before.

Considering that they've been injecting audio into our brains [google.com] (yes, I have one of those too) for ages now I don't see that they have much choice but to finish developing the technologies.

Re:Poor Monkeys (1)

el americano (799629) | more than 8 years ago | (#14016293)

Not into your brain. It says here [neurophone.com] that it stimulates "a tiny organ in the inner ear" that is sensitive to ultrasonic sound. I think it would be interesting to try to stimulate nerve endings directly, whether for sight or sound, but that was not this experiment.

I also find it interesting that they call it an ultrasonic neural stimulation instrument for brain entrainment(?). Was there some reason they can't call it a hearing aid or artificial hearing? That would be the best way to sell it, so I can only assume it has some serious limitations.

Re:Poor Monkeys (1)

Korean Elvis (930353) | more than 8 years ago | (#14015541)

Remarkably, the classifier found that just a split second's worth of the neural signal contained specific enough information to identify and categorize the object, even at positions and sizes the classifier had not previously "seen."

so NOT known ojbects and KNOWN objects has similarly patterns for neural signals in monkey brains for conclusions! SO find repeatables on nerual patterns is keys i htink on market producables.

Chair (3, Funny)

GloomE (695185) | more than 8 years ago | (#14014727)

I'm looking to purchase a dentist chair.
Hole in the headrest preferable.

Re:Chair (1)

InterestingX (930362) | more than 8 years ago | (#14015860)

It's not the hole in the headrest I'm worried about.

It's about time! (3, Insightful)

Anonymous Coward | more than 8 years ago | (#14014746)

The implications for using this technology to cure blindness (one day, obviously not immediately) are wonderful! This is the kind of thing science was really meant for - helping humans live better lives. Kudos to MIT!

Re:It's about time! (2, Informative)

Yvanhoe (564877) | more than 8 years ago | (#14014944)

Already, a wide range of blindness can be treated with implants: either a CCD array placed inside the retina or, in case of damage to the optic nerve, a camera wired to the visual cortex. Right now some blind people see (a low-res, b&w image, but they see nonetheless) thanks to implants.

link [seeingwithsound.com]
other link [brown.edu]

But yes, with the technology presented in the article, I suppose one could even cure blindness in people with a damaged visual cortex.

Re:It's about time! (1)

eokyere (685783) | more than 8 years ago | (#14015107)

yeah, may be after we finally create those damn nanobots :P

Re:It's about time! (5, Informative)

Anonymous Coward | more than 8 years ago | (#14015291)

Kudos to MIT!

Every visual neuroscientist, ever, has been working on "deciphering part of the code involved in recognizing visual objects." Poggio and DiCarlo's contribution is mostly that they were able to record from a large number of neurons simultaneously in the inferotemporal cortex (IT). It's a logical (but interesting, to be sure) progression of work that has been done for decades in IT -- most of that work done elsewhere.

Neural prosthetics and DNI are the bullshit that people trot out to make neuroscience interesting to the public. It's worth pointing out that neither of the named scientists in this work raises the possibility, and in fact, other than the abstract, there's nothing that even hints at the idea. These guys aren't working on a DNI. They're doing basic science. Years, decades down the road, some engineers might take the work that built on Poggio and DiCarlo's work and turn it into a DNI. Or at least, we can so hope.

Name a university, and the odds are they'll have some basic science research underway with as much potential for the betterment of society as this stuff. So when you say "kudos to MIT" like this, remember that you're praising their PR department, not their scientists.

Re:It's about time! (1)

tgv (254536) | more than 8 years ago | (#14015690)

Mod parent up, since its parent is rather misleading for the uninformed. I would like to add that most of the knowledge on visual pathways comes from live monkey research, and is only partially known to translate to human vision.

Matrix? (3, Interesting)

Auckerman (223266) | more than 8 years ago | (#14014762)

The article reads more like they are reverse-engineering the pattern recognition systems the brain uses as it sees and interprets objects, which sounds closer to the movie Brainstorm [imdb.com].

Re:Matrix? (2, Informative)

Tune (17738) | more than 8 years ago | (#14014784)

Or Strange Days [imdb.com] , which could be considered a (lose) remake of Brainstorm.

Well, actually the article focuses on intercepting the sensory data and making sense of it. I believe scientists have for some time been able to make sense of basic sensory data; stuff like using a cat's eye to produce webcam-quality images. This research seems directed at interpreting the signals at a much deeper level.

Though very interesting, it's still a one-way extraction process (i.e. *not* synthesis), which is completely unrelated to anything I saw in The Matrix. But I may have stumbled into an excuse to watch that movie again ;-)

Re:Matrix? (1, Offtopic)

bsartist (550317) | more than 8 years ago | (#14015374)

Or Strange Days, which could be considered a (lose) remake of Brainstorm.

Sigh. Loose is the opposite of tight. Lose is the opposite of win. What's so damn difficult about this?

Re:Matrix? (2, Funny)

Haydn Fenton (752330) | more than 8 years ago | (#14015616)

Sigh.
Looser is what grammar and spelling Nazis should be. Loser is what grammar and spelling Nazis are.
What's so damn bad about making a mistake so minor that anybody with an IQ higher than a banana's can still understand it?
;-)

Re:Matrix? (1)

joemawlma (897746) | more than 8 years ago | (#14015892)

::keanu voice:: "WHOA"

No 12 monkeys (4, Informative)

noc_man (917321) | more than 8 years ago | (#14014766)

I read an article many years ago about them doing this to live human patients. Via a fiber-cable brain wetware implant, a blind man was able to discern colors and rudimentary objects. He did have a short seizure during the interview; however, once the subject got past that, he immediately requested that the researchers continue.
Unfortunately this was so long ago that I cannot remember the magazine or relocate the article. But googling "artificial vision" turns up a few pieces of the history, and HowStuffWorks has a full set of details:

http://health.howstuffworks.com/artificial-vision.htm [howstuffworks.com]

Re:No 12 monkeys (3, Interesting)

Timeburn (19302) | more than 8 years ago | (#14014829)

IIRC, it was in Wired, circa 1999 or 2000. The article covered research in South America (banned in the US), on a patient who had lost his vision but whose optic nerve was intact. They interfaced directly into the nerve, stimulating it manually at first (this is when the seizure occurred).

The project was apparently quite successful, as the patient was able to move about the facility, pick up a phone from a desk, and even drive a car around the parking lot. Fairly low-res input, but enough to see shapes and movement.

Don't know what's happened with the project since, nor can I find the original article right at the moment, but it definitely sounded promising.

Re:No 12 monkeys (3, Informative)

groomed (202061) | more than 8 years ago | (#14014855)

The Vision Quest [wired.com] in Wired 10.09 of September 2002.

The article, as well as the feasibility of Dr. Dobelle's research (he died in 2004), is sketchy at best. Apply a truckload of salt.

Fabulous (2, Funny)

Centurix (249778) | more than 8 years ago | (#14014768)

Maybe they could simulate the feeling of taking a really great dump.

Re:Fabulous (1, Funny)

Anonymous Coward | more than 8 years ago | (#14014840)

Looks like some fool marked you as troll, but I understand. I took a dump tonight that made my fucking legs ache. Now that's reality!

Re:Fabulous (1)

BiggerIsBetter (682164) | more than 8 years ago | (#14015013)

There's nothing as over-rated as bad sex,
And there's nothing as under-rated as a good dump.

Re:Fabulous (0)

Anonymous Coward | more than 8 years ago | (#14015027)

You could just buy a dildo.

Just recordings (2, Interesting)

venicebeach (702856) | more than 8 years ago | (#14014772)

Seems to me they are just recording from IT neurons. There's no input to the cortex. I haven't read the Science paper (is it out yet?), but it really seems like they are just analyzing the firing patterns of IT neurons while the monkey looks at objects. Nothing new here technology-wise.

Re:Just recordings (2, Insightful)

venicebeach (702856) | more than 8 years ago | (#14014791)

OK, I just read the Science article. What's interesting about it is that they got recordings from a large population of neurons in IT during object recognition and have some cool analyses of the kinds of information that can be extracted from the recordings, e.g. how large a population of neurons you need to accurately identify the object, how well the neurons discriminated among the categories and generalized across the same image at different sizes and positions, etc.

It's important to remember that these monkeys were trained on a limited stimulus set, so it's not that you can tell what the monkey is looking at by looking at the recordings without knowing it is one of these pre-trained items.
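The generalization analysis discussed in this thread can be caricatured like this. It is an assumed toy setup, not the paper's code: a nearest-template decoder is built from simulated responses at one position and tested at a second position whose tuning is only partly shared.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_objects = 150, 10

# Tuning at the trained position (A) and at a new position (B); B's tuning
# is correlated with A's but not identical, mimicking partial invariance.
tuning_a = rng.normal(size=(n_objects, n_neurons))
tuning_b = 0.6 * tuning_a + 0.4 * rng.normal(size=tuning_a.shape)

def decode(patterns, templates):
    """Assign each response pattern to the nearest template."""
    d = ((patterns[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Noisy trials at position B, decoded against templates learned at A.
test = tuning_b + rng.normal(scale=0.3, size=tuning_b.shape)
acc = (decode(test, tuning_a) == np.arange(n_objects)).mean()
print(f"cross-position accuracy: {acc:.2f}")
```

The decoder succeeds only to the extent that the position-B code overlaps the position-A code, which is exactly the kind of invariance the recordings were analyzed for, and exactly why an untrained stimulus set would break it.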

Re:Just recordings (4, Informative)

FleaPlus (6935) | more than 8 years ago | (#14014816)

This reminded me of the research by Quian Quiroga et al in which they performed single-neuron recordings from MTL (upstream of IT, if I recall correctly) in humans. In that study they found neurons which would respond selectively to particular objects, such as Jennifer Aniston, Halle Berry, and the Sydney Opera House. Here's the abstract:

R. Quian Quiroga, L. Reddy, G. Kreiman, C. Koch & I. Fried, "Invariant visual representation by single neurons in the human brain." [caltech.edu] Nature (2005) 435, 1102-1107.

It takes a fraction of a second to recognize a person or an object even when seen under strikingly different conditions. How such a robust, high-level representation is achieved by neurons in the human brain is still unclear. In monkeys, neurons in the upper stages of the ventral visual pathway respond to complex images such as faces and objects and show some degree of invariance to metric properties such as the stimulus size, position and viewing angle. We have previously shown that neurons in the human medial temporal lobe (MTL) fire selectively to images of faces, animals, objects or scenes. Here we report on a remarkable subset of MTL neurons that are selectively activated by strikingly different pictures of given individuals, landmarks or objects and in some cases even by letter strings with their names. These results suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.

Re:Just recordings (0)

Anonymous Coward | more than 8 years ago | (#14014877)

uhhuh, but reread the experiment, and then try and draw out the same conclusion...

Nothing new here (1)

Tune (17738) | more than 8 years ago | (#14014793)

At the end of the article:

It was quite surprising that so few IT neurons (several hundred out of millions) for such a short period of time contained so much precise information.

That *is* an interesting result, since (computer) neural net research generally tends to favour designs with complete overkill in the number of neurons.

"If we could record a larger population of neurons simultaneously, we might find even more robust codes hidden in the neural patterns and extract even fuller information," Poggio said.
...Then again, they seem to be on the ever-familiar track toward the bigger-is-better dead end. So indeed, nothing new here...
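The "few neurons, much information" point can be illustrated with a toy decoder (assumed setup, not the study's analysis): read out object identity from progressively larger random subsets of simulated neurons.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_objects, noise = 300, 12, 2.0

# Each object has a fixed tuning pattern across the population; repeated
# noisy trials simulate recordings.
tuning = rng.normal(size=(n_objects, n_neurons))
trials = np.repeat(np.arange(n_objects), 20)
resp = tuning[trials] + rng.normal(scale=noise, size=(trials.size, n_neurons))

# Decode with nearest-template matching on neuron subsets of growing size.
accs = {}
for n in (5, 25, 100, 300):
    idx = rng.choice(n_neurons, size=n, replace=False)
    sub_r, sub_t = resp[:, idx], tuning[:, idx]
    d = ((sub_r[:, None, :] - sub_t[None, :, :]) ** 2).sum(axis=2)
    accs[n] = (d.argmin(axis=1) == trials).mean()
    print(f"{n:3d} neurons: accuracy {accs[n]:.2f}")
```

When tuning is distributed across the population, accuracy climbs quickly with subset size, so a few hundred neurons can already carry most of the recoverable information, which is the surprise the comment above reacts to.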

We have non-invasive signal injection technology (2, Informative)

lightyear4 (852813) | more than 8 years ago | (#14015653)


We already have something called transcranial magnetic stimulation. See:

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1300793 [ieee.org]
http://groups.csail.mit.edu/vision/medical-vision/surgery/tms.html [mit.edu] -- most relevant to discussion, has section on visual signal injection
http://www.biomag.hus.fi/tms/ [biomag.hus.fi]
http://www.mp.uni-tuebingen.de/mp/index.php?id=94 [uni-tuebingen.de]
http://en.wikipedia.org/wiki/Transcranial_magnetic_stimulation [wikipedia.org]
http://pni.unibe.ch/TMS.htm [unibe.ch]

Allergies... (1)

HonkyLips (654494) | more than 8 years ago | (#14014782)

I just finished reading "Neuromancer" (for about the 10,000th time) and so this seems pretty cool. But I've always wondered how they deal with the potential for allergies... from what I understand, in theory you can become allergic to basically anything at any time without warning, so you wouldn't want to get a fancy new implant only to die of anaphylactic shock while watching porn...

Re:Allergies... (1)

Conch (52381) | more than 8 years ago | (#14014790)

Let's hope that I don't become allergic to stupid comments, because then I would die of anaphylactic shock reading Slashdot.

Hmm . . . (0)

TheScorpion420 (760125) | more than 8 years ago | (#14014801)

Said it before and I'll say it again: bring on the drunken boxing! Hmm, maybe the complete Kama Sutra... But definitely the drunken boxing.

sorry to dash your hopes, but... (5, Informative)

Xochi77 (629021) | more than 8 years ago | (#14014812)

I am not at MIT, but I can tell you this ain't about to happen any time soon.

I am working on optical neuron-computer interfaces, and this is probably the most efficient and direct route for reading neurons. I know of researchers who can also stimulate neurons to fire via light, so in principle we could build a complete neuro-optical computer tomorrow... if neurons were not complete bastards to work with.

You see, they just don't like to stay in place. Where I do research, they often build tiny fences to keep them put, but even then, they go shooting their axons anywhere they feel like, with no concern for the feelings of the researcher.

We also grow neurons on microchip surfaces, which allows for high-speed, high-resolution stimulation and reading of single-neuron activity, but only in two dimensions, which is excellent for the retina etc.

But neuron-chip or old-fashioned neuron-electrode interfaces are hard to place, and optical reading of neurons still has bugs to sort out (I'd guess 4-10 more years of basic research). Whenever you see those cool brain-scan pics from MRI etc., remember their resolution is on the order of millimeters, and that's a lot of complexity lost.

http://www.biochem.mpg.de/mnphys/ [biochem.mpg.de] has a nice review of the problems involved, if you like hardcore solid-state chemistry, silicon physics, and neurobiology.

Re:sorry to dash your hopes, but... (3, Informative)

FleaPlus (6935) | more than 8 years ago | (#14014834)

What are your thoughts on using autonomously adjusting electrodes [caltech.edu] to deal with the problem of neurons shifting about? Granted, the current systems are rather bulky, but much more compact ones are under development.

Re:sorry to dash your hopes, but... (5, Informative)

Xochi77 (629021) | more than 8 years ago | (#14014857)

Cute, but check out http://scholar.google.com/url?sa=U&q=http://www.bioon.com/biology/UploadFiles/200502/20050216034722562.pdf [google.com], "Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex". Also, I foresee the development of light-gated ion channels, such as the one I mentioned before, that can be opened at various wavelengths, much like GFP has been mutated from green across the whole blue-red spectrum. Genetically specified reading of neurons has further to go, but it will happen, and soon I think. In the end, why hack into the brain to insert electrodes and chips etc. when two-photon microscopy can see through tissue?

Re:sorry to dash your hopes, but... (1)

Roxton (73137) | more than 8 years ago | (#14015470)

Hear, hear. The potential of two-photon microscopy in the real-time detection of low-level neuronal physiology is, I dunno, exciting.

Re:sorry to dash your hopes, but... (-1)

Anonymous Coward | more than 8 years ago | (#14014841)

Shift key motherfucker. Do you use it?

Re:sorry to dash your hopes, but... (1)

Xochi77 (629021) | more than 8 years ago | (#14014868)

STFU. better?

Re:sorry to dash your hopes, but... (0, Troll)

Hamstij (831222) | more than 8 years ago | (#14014879)

While you're at it, learn the difference between their and they're.

Re:sorry to dash your hopes, but... (0, Offtopic)

Xochi77 (629021) | more than 8 years ago | (#14014886)

60d d4mn 17, 1 h473 6r4mm4r n4z15. 1f y0u h4v3 50m37h1n6 u53fu| 70 54y, 7h3n 54y 17, d0n7 n17p1ck 57uff y0u 0bv10u5|y und3r574nd

OMFG (3, Funny)

russint (793669) | more than 8 years ago | (#14014830)

this could make LSD obsolete!

Re:OMFG (1)

Xarius (691264) | more than 8 years ago | (#14014950)

I know you're joking, but in most dystopian sci-fi I've seen or read, the future "drug" of choice is some kind of technological device. Especially in the anime (nerd alert!) Serial Experiments Lain.

Re:OMFG (1)

akheron01 (637033) | more than 8 years ago | (#14015183)

Ahhh, and the device he is referring to is of course... Accela!

According to Wikipedia:
Not a chemical drug, but rather a nanomachine delivered in pill form. It increases the processing speed of the brain, making its user feel that time has slowed down.

Re:OMFG (1)

Thing 1 (178996) | more than 8 years ago | (#14015173)

At first I read this and was nodding in agreement, "Yes, I can see that if they develop this, we'll have no more need for monitors."

Then I remembered my college days. (It's early this morning...)

Re:OMFG (1)

rabel (531545) | more than 8 years ago | (#14015305)

Yeah, but have you used your direct neural interface, ON WEED?

Re:OMFG (0)

Anonymous Coward | more than 8 years ago | (#14015415)

Wonderful, so even when I'm in virtual reality somebody's avatar will be slouching around saying "Man, I am so HIGH right now!"

should be a lot of brain articles... (1)

superwiz (655733) | more than 8 years ago | (#14014875)

in the next few days. The SfN (Society for Neuroscience) meeting just started in Washington.

Vision has been done for a long time (1)

prestwich (123353) | more than 8 years ago | (#14014878)

I remember reading a long time ago (at least 10 years, perhaps 20?) about direct stimulation of the visual cortex;
at the time they were just doing a few blobs intended to help the blind.

This looks like it is moving a bit further up the chain, which should be interesting;
in the end it is just vision, however.

Resistance is futile... (0)

Anonymous Coward | more than 8 years ago | (#14014894)

...eventually we will all be assimilated.

What if... (2, Interesting)

nsasch (827844) | more than 8 years ago | (#14014918)

What if the IT cortex was bypassed? The computer would get or simulate the input, recognize and categorize the object, and send that data directly to the other parts of the brain. Now the human doesn't see the ball, but knows there's a ball in front of them, and that it's red, and about the size of their head, etc. (all the details), but doesn't see it; they just have a "feeling" that a ball is there.

Cyborg possibilities (1)

Nerdposeur (910128) | more than 8 years ago | (#14014992)

That's a really interesting idea - bypass the senses and go directly to the meaning portions of the brain. I wonder if you'd be able to tell your own conclusions from what was being fed to you?

This kind of stuff both excites and scares me. Whenever I hear about electronics/brain interaction (like the story about monkeys moving robotic arms using their brains [bbc.co.uk]), I think of the cyborg possibilities. The most interesting one to me is the ability to supplement your own faulty memory with a hard drive and your own thinking power with a processor. You'd take a little snapshot of every person you met and file it away with their name, never to be forgotten. If you needed to calculate something, you'd just run it through your stored equations.

Think about what school would do for you! If I could remember all the science, history and literature I've been taught and forgotten, I'd be a much more educated guy than I am now.

And of course you could add senses that humans don't have - more visual spectra, a magnetic sense or "location" sense based on GPS, etc.

What makes all this less exciting - besides welcoming our new cyborg overlords - is remembering how unreliable technology is. After seeing my roommate organize his life with PalmPilots, which invariably broke, I decided to stick with writing things down in a little notepad. It never crashes. Given the headaches we've all experienced with computer problems, imagine how you'd feel if a whole section of your brain quit working.

Re:Cyborg possibilities (1)

EnsilZah (575600) | more than 8 years ago | (#14015047)

A few other possibilities:

- Sonar-vision; I think seeing in 3D would help us understand the world better.
- Telepathy, if you get the input/output working and hook up an antenna. Watch out for spam, though.
- Computer-aided augmentation, helping the brain with functions it's not very good at, like calculating large numbers.
- Image streaming from your eyes: photography, movies, like in one of William Gibson's books (I think it was Mona Lisa Overdrive). You wouldn't need to be an artist to recreate an image you have in your mind.

Probably not likely to have any of that any time soon though. =\

Re:Cyborg possibilities (1)

PlusFiveTroll (754249) | more than 8 years ago | (#14015732)

- Image streaming from your eyes: photography, movies, like in one of William Gibson's books (I think it was Mona Lisa Overdrive). You wouldn't need to be an artist to recreate an image you have in your mind.

Please explain how you would uninstall the SonyBMG DRM root kit then!

Re:Cyborg possibilities (1)

jerometremblay (513886) | more than 8 years ago | (#14016467)

lobotomy

Re:Cyborg possibilities (2, Interesting)

Kjella (173770) | more than 8 years ago | (#14015182)

The most interesting one to me is the ability to supplement your own faulty memory with a hard drive and your own thinking power with a processor. You'd take a little snapshot of every person you met and file it away with their name, never to be forgotten. Think about what school would do for you! If I could remember all the science, history and literature I've been taught and forgotten, I'd be a much more educated guy than I am now.

Not to go all Trinity on you, but why limit it to your own experiences? Basically, say you wanted to recall the text of something you've never read: the HDD could supply it. That is simply of the request-response variety. You could do searches in information bases you've never read. You could do two-way communication to make drill-downs. Let's say you were looking at a bird; you could supply information to the base, and the base might ask "questions" like color, size, beak, feathers, legs, and sound to your brain to pull the information you want. The whole of Wikipedia could easily fit on such a HDD. Sure, there'd be a lot of trial and error here, but this data could be gathered from everyone carrying it to improve the interface. It's more a matter of whether the human mind could keep up, or whether it'd go wacko from all this information at its fingertips. Then you could really talk about information overload.

Re:Cyborg possibilities (0)

Anonymous Coward | more than 8 years ago | (#14015696)

In fact, forget about those elctrodes and hard drives. The whole Internet will be your brain. Imagine that!

Re:What if... (2, Insightful)

kko (472548) | more than 8 years ago | (#14015007)

Yeah, cool, well, actually getting any sort of input to the brain seems to be a big part of, if not THE actual problem.

Jeez.

feedback loop? (1)

Loconut1389 (455297) | more than 8 years ago | (#14014919)

But what if we're already in the matrix.. that would be a direct neural interface inside a direct neural interface... Talk about a mind bender.. or should I say 'spoon bender'?

Re:feedback loop? (2, Funny)

CptNerd (455084) | more than 8 years ago | (#14015408)

But what if we're already in the matrix.. that would be a direct neural interface inside a direct neural interface... Talk about a mind bender.. or should I say 'spoon bender'?

Or more likely just Bender: "We're boned!"

15 years ago (1)

xmedar (55856) | more than 8 years ago | (#14015004)

I discussed this with Peter Donaldson of the Neurological Prosthesis Unit in South London. It's like packet switching: the British invent it but don't fund it, and the Americans take it on, fund it, and get all the money and glory. Oh well.

*Insert obligatory RIAA/DRM comment here* (0)

Anonymous Coward | more than 8 years ago | (#14015058)

You know it's going to happen.

I hope Sony doesn't get a hold of this. (1)

frogstar_robot (926792) | more than 8 years ago | (#14015075)

One of the original Memory Stick ads had a guy with a card slot in the back of his head and a Stick about to be placed in it. At the time, I thought Sony was subconsciously telegraphing where they'd like all of this to go. These days, I'm certain. What's really scary is that the sheeple will go for "Neural Rights Management" if it means they get to watch Survivor 15.

Re:I hope Sony doesn't get a hold of this. (0)

Anonymous Coward | more than 8 years ago | (#14015112)

Actually, Sony has a patent for that kind of thing, despite the lack of tech to back it up. It was on /. at one point, search for it.

Not really "à la The Matrix"... (1)

Biomechanical (829805) | more than 8 years ago | (#14015104)

Disclaimer: I am currently drunk, so the following comments may seem a little more disjointed than usual.

I remember when I was playing Shadowrun, and delved into Cyberpunk 2020, and loving the idea of having a character who could directly interface to a computer - in Shadowrun it was via a "datajack", located directly behind the ear and mounted in the hard skull tissue for maximum anchorage.

The idea is not new. I remember reading about a guy called "Jerry" who'd had a special series of wires - I think fibre optic - running into his head and connected into his brain, set up some time around the mid seventies.

With these systems running to a relatively small computer on his belt, he was able to discern largish, monochrome coloured objects from a certain distance and within a postcard-sized field of view. In the seventies.

Jerry's problem was not that the computer had power supply problems, or its actual size, but the software running the system was not very sophisticated and couldn't create very accurate distinctions between, say, a man and the hat on his head.

Fortunately, Jerry had been born with sight, so he could distinguish between certain shapes that he'd seen before.

Expand the idea. Give direct neural input to the parts of the brain that receive sound and touch. You now have the "Matrix" of Shadowrun - shapes, sounds, rudimentary touch, and sight.

I saw that someone else in this forum mentioned Windows, probably as a joke. The key thing is that if you write the input controls in just the right way, you don't need an OS to run it. The brain is an extraordinary parallel processing computer. Before you pick up that cup of coffee/soft drink/alcohol, think about all the various thoughts that just went through your brain for the simple task of picking up the cup.

Look for $object. $object located $here. $closest appendage ==...

There is so much happening in your brain, even just reading this post, that it would be difficult to write down every step you do even when you're simply pressing a button on the keyboard.

The key to creating a DNI is not thinking about the real-time OS to use (because no computer runs like our brain), or what protocol to use (because the protocols are all dependent on the OSes we use on computers), but thinking of the optimal way to get the information from out here to in here, inside your head.

Once you do that, you can create input from anywhere, which is both a scary and exciting thought.

We could have movies that literally immerse us in the action/drama/comedy/what ever, computer games that simulate the game creator's idea of a gaming universe and lets us run wild, or security systems that let us roam like wild viruses or worms through a system...

Do you know what I think is the only real problem standing in the way of expanding this tech to the point where we can watch movies, program, play games, or surf the web with our brains?

The interface? Our brains are meat and chemicals, and can be mapped. The input? I say "that's a red ball", and immediately you see it as a red ball, and then, when you're actually looking at the object, you decide for yourself whether or not it looks red. The connections? Optic fibre or copper? We'll sort out a simple, clean way to hook up without hurting ourselves.

The big problem I see is bandwidth. Let's go back to the idea before of how much parallel processing goes on in the brain. Try opening up ten applications on your computer - separate ones - and then telling them to perform CPU intensive tasks. Notice the slow down?

This is the same as someone doing one task, and you come along and ask them to do another. They may be able to do it, but it'll be a little harder to do that as well as the other ten, twenty, or even thirty things they're already doing, both consciously and subconsciously.

Motor actions, input-output actions and reactions - touch/taste/feel/smell/sight actions and reactions - plus thinking about what they have to do next, and worrying that they've finished the last task that they started.

At the same time that we're trying to figure out how to interface a computer directly to our brains, we should also be trying to figure out how to make computers that run parallel tasks as well as we do. Sure, they're sure as shit faster on individual tasks, but PCs really suck at doing multiple tasks - and by multiple, I mean anywhere from forty to eighty jobs at once.

I've run out of things to say. Elucidate on your own.

Boon for Camouflage (2, Interesting)

schwit1 (797399) | more than 8 years ago | (#14015186)

Could you test potential camouflage patterns with this and find which cause the most difficulty in visually deciphering? Or one day have computers generate camouflage on the fly based upon the surroundings.

Code Talkers (2, Interesting)

Doc Ruby (173196) | more than 8 years ago | (#14015225)

It's not a "code". There's no objective reality that the brain is decoding for mere "referential integrity". The brain is organizing its responses to incoming sensory info, in a feedback loop with itself, including resonating "memory" response signals. Sure, object representations are recognized as repeats of previous object representations, and dispatched to brain areas sensitized to those representations. But it's not like objects outside the body have standard codes, the same from person to person, like say insulin has in our DNA. That would be way too static for us to survive in this changeable world. We're making it up as we go along, and living in the reality we generate. The closer our mind's model matches the world we encounter, the smarter we are.

Re:Code Talkers (0)

Anonymous Coward | more than 8 years ago | (#14015763)

Yeah, in a way, the closer our mind's model matches the real world, the smarter we are. But just what is the real world? Humans don't have the ability to sense magnetism, radiation, gravity (at least, directly), various light wavelengths, sound frequencies, etc. I think humans, although the smartest creatures we know of, are very primitive when you take into account the 'missing' parts. There are certain animals which can smell or hear things from immense distances.

It's well known (at least, amongst people in the field) that under deep levels of hypnosis, humans can sense (in touch form) things that aren't touching them, without hearing or seeing them. I'm sure Derren Brown and other hypnotists have done it on TV before, and you've probably seen them. You may also know about people who "see" numbers, or "taste" words; I'm not sure what the condition is called, but certain senses get mixed with others (in the seeing-numbers case, it would be the part of the brain involved with sight merging with the part responsible for mathematics processing). When this "disorder" occurs in someone with other "disorders", they gain the ability to do truly amazing and mind-boggling things. There are savants, who have the extraordinary ability to calculate huge and complex maths problems way faster than computers. They can count objects in mere milliseconds; there's even one person in the world who can read, and memorise, two pages of a book in seconds. He manages to read one page with his left eye and one with his right eye. Somewhere in our brains we have the capacity to do a scary amount of processing.

There are also other things, which I don't personally believe (only due to the small amount of evidence in their favour), but I've read about a Russian girl who can see in X-ray, being able to find illnesses (such as cancer) simply by looking at people. We can do so much with our minds, most of which we don't even know about, or have the "lucidity" to achieve.

When you take all this kind of stuff into account, along with what could be achieved with direct brain/neuron interfacing, it's immensely scary/interesting/exciting. Add GPS, a dictionary of each language in the world, translators, calculators, night vision, etc. to our brains, along with a physical "cyborg" body, such as the Japanese invention to give extra strength to old people, mixed with some nanotechnology, and we become smarter than anybody in the world could imagine with decent clarity. Yet that's still an immensely small amount of knowledge compared to what's still out there. Being able to sense EVERYTHING, EVERYWHERE, EVERY TIME would be the ultimate knowledge, but in that case we wouldn't need it, since we'd know all. Which would bring us onto things like the meaning of life and other stuff. Whether it's too much information for us to handle (as some philosophers argue), or we'd know so much we'd have nothing left to do, or something else, we would go insane. So whilst the future, with its Matrix-like games, simulations or "virtual" lives, seems hellishly fun, ignorance, and even stupidity, may be better after all.

Anybody who wants to drill a hole in my skull. . . (1)

Fantastic Lad (198284) | more than 8 years ago | (#14015247)

is not my friend.

The world is already a giant hologram where you can do or undo whatever you feel like.

Plugging your head into an artificial world is like wanting to play Space Invaders on a simulated computer interface inside a game of Quake. No thank-you. We already have the perfect interface out here, where the graphics and sound are of the highest quality and there is no chumpy 'Save' button to make things boring. And there are plenty of cheat keys in the structure of reality if you have the courage to seek them.

Anyway, people who crave to 'plug in' are kind of creepy. Being around them makes me vaguely worried that at any moment I'll get shot in the back with a big plasma gun.

-FL

MIT still behind the times (1, Insightful)

Anonymous Coward | more than 8 years ago | (#14015386)

I'm getting a bit tired of MIT getting press for research that has already been done years ago. In this case in particular, see the Dobelle Institute: here [cbsnews.com], here [erc-assoc.org], and here [artificialvision.com], for instance.

Seriously. Don't exacerbate the inflated delusions of these guys by pretending that their research is unique or "cutting-edge". Expect more from them.

it's not DNI at all (2, Informative)

darkeye (199616) | more than 8 years ago | (#14015400)

as they don't actually connect to the neurons, but read the neuron activity patterns, probably through fast MRI scanners. And there's no feedback either; they don't send any data to the neurons (other than through the natural eye of the monkey in the tests).

Why the Matrix? (1)

TheDracle (836105) | more than 8 years ago | (#14015627)

I'm not really as concerned with the implications of being able to inject images into the human brain, though that may be somewhat useful. The visual cortex likely has many subtle differences between human and chimpanzee brains as well, so this set of technology is likely to be much more difficult to translate for human use. What's interesting is the claim that an incredibly complicated set of algorithms, evolved over billions of years in our brains, can be reverse engineered.

In AI we've yet to find what algorithms are responsible for consciousness, for visual recognition, and a myriad of other problems. They're all just sitting in our brains, likely on the lowest level of the neuron, waiting for us to extract them. This has infinitely more applications than forcing images into people's brains.

Book Flashback... (1)

Khyber (864651) | more than 8 years ago | (#14015848)

Wow this reminds me of a book I read called Nanotime. Pretty killer read, too. I just hope we don't have to be fully awake when they insert this into our brains, like in the book!

dated information (1)

FlippyTheSkillsaw (533983) | more than 8 years ago | (#14015980)

Almost identical information is located in one of my college psychology textbooks from 2002. The book is called Sensation and Perception, but I'm sure that just about any textbook on the subject of neurophysiology would cover this. The research involved is probably a few years older than the book.

"Recording the activity of hundreds of IT neurons produced a large database of IT neural patterns generated in response to each object under many different conditions."

This translates to a cortical probe (maybe sub-cortical) sticking out the back of a monkey's head with leads that go into the brain. Charming. One of my biology professors does research on frogs in a similar way, and the frogs don't seem to mind.
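For anyone curious what "a large database of IT neural patterns" buys you, the population-decoding idea can be sketched as a toy simulation: each object evokes a characteristic firing-rate pattern across many neurons, and a simple readout recovers object identity from a noisy single trial. Everything here is made up for illustration; the neuron count, the two object classes, the noise level, and the nearest-centroid readout are stand-ins, not the actual analysis from the research.

```python
import random

random.seed(0)
N_NEURONS = 100

# Assume each object evokes a characteristic mean firing rate per neuron.
prototypes = {
    "face":  [random.uniform(0, 20) for _ in range(N_NEURONS)],
    "house": [random.uniform(0, 20) for _ in range(N_NEURONS)],
}

def record_trial(obj, noise=3.0):
    """Simulate one noisy single-trial population response to an object."""
    return [r + random.gauss(0, noise) for r in prototypes[obj]]

def decode(pattern):
    """Read out object identity by nearest prototype (squared distance)."""
    def dist(obj):
        return sum((p - q) ** 2 for p, q in zip(pattern, prototypes[obj]))
    return min(prototypes, key=dist)

# Decode 100 fresh noisy trials and measure accuracy.
correct = sum(decode(record_trial(obj)) == obj
              for obj in ["face", "house"] * 50)
print("decoding accuracy:", correct / 100.0)
```

Even this crude readout decodes nearly every trial, which is roughly the point of the MIT result: the information is there in the population pattern, waiting for a classifier.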

Audio Object Recognition (1)

Bombula (670389) | more than 8 years ago | (#14016035)

Could parallel research and development enable recognition of audio 'objects' as well? It might help open the stubborn door to voice recognition.

In the short term, I suspect there would be more immediate applications for voice recognition than for visual object recognition, though I am still pulling for these guys if it leads to cars driving themselves.

I'm currently doing similar work... (2, Interesting)

mr. squishie (726877) | more than 8 years ago | (#14016540)

...but on human subjects using fMRI. This research really isn't related to the Matrix or DNIs directly; it's about seeing whether or not electrical signals from the brain contain enough information for a classifier (ironically, in our case, artificial neural networks) to distinguish between subjective cognitive states.

Considering the progress we've made in distinguishing cognitive states (is this person looking at a face, a house, a squirrel, etc.?) in human subjects using fMRI (an extremely noisy dataset), I'm not surprised that they found there's enough information in a few neurons to perform classification.

Really, the best pop-sci term to describe this would be "mind reading": the high-level goal is to have a function that transforms physical space to some sort of cognitive space. I guess you could say it's the "I" of the I/O DNI in the matrix.
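The classifier setup described in this comment can be sketched with a single logistic unit, about the simplest possible "artificial neural network", trained to tell two simulated cognitive states apart from noisy voxel patterns. The voxel count, the two-state "face vs. house" setup, and the noise level are all invented for illustration; real fMRI decoding uses thousands of voxels and proper cross-validation.

```python
import math
import random

random.seed(1)
N_VOXELS = 50

# Assume each cognitive state has a characteristic voxel activation map.
face_map  = [random.uniform(-1, 1) for _ in range(N_VOXELS)]
house_map = [random.uniform(-1, 1) for _ in range(N_VOXELS)]

def scan(state_map, noise=0.5):
    """Simulate one noisy fMRI 'scan' of a voxel activation map."""
    return [v + random.gauss(0, noise) for v in state_map]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))   # clip for numerical stability
    return 1.0 / (1.0 + math.exp(-z))

# Build a labeled dataset: label 1 = "face", label 0 = "house".
data = ([(scan(face_map), 1) for _ in range(100)]
        + [(scan(house_map), 0) for _ in range(100)])
random.shuffle(data)

# Train one logistic unit by stochastic gradient descent.
w, b, lr = [0.0] * N_VOXELS, 0.0, 0.1
for _ in range(20):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

acc = sum(predict(x) == (y == 1) for x, y in data) / len(data)
print("training accuracy:", acc)
```

The "transform physical space to cognitive space" framing is exactly what the learned weights are: a map from voxel activations to a state label.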
