
Human Eyes as Digital Cameras?

Cliff posted more than 11 years ago | from the right-out-of-a-cyberpunk-novel dept.

It's funny.  Laugh. 45

Mad Dog Kenrod asks: "A recent ad campaign for a digital camera had the slogan (something like) 'imagine being able to take a picture from your head and show it to people' - it was basically showcasing how small the camera was. This got me thinking: most people simply want to 'snap what they see'. Given that the human eye already has a very workable lens, and a retina which (I assume) is similar in technology to a digital camera, how feasible would it be to 'tap into' the optic nerve (not the brain, because by then the 'image' is probably something else entirely) and turn the signals from all those rods and cones into pixels?"

"Given we can do C.A.T. scans, would it even be feasible to do this from outside the head (say, with sufficient miniaturization, from the arm of your glasses)?

Of course, you would lack other things like zooms and filters and even an ability to 'frame' the picture (and there'd be problems for people with eye disease), but I propose that, for the majority of us who just want to quickly 'snap what we see', this would make for the smallest, lightest camera possible.

I know nothing about what would be involved in making this happen, so I would be interested in people's thoughts."


45 comments


Have to say it... (0, Funny)

GreyWolf3000 (468618) | more than 11 years ago | (#5639368)

You don't want to see the world the way I see it :P

Re:Have to say it... (1)

PD (9577) | more than 11 years ago | (#5639452)

Why? Are you severely astigmatic?

April trolls day (2, Insightful)

missing000 (602285) | more than 11 years ago | (#5639491)

I have to say that I always thought of 1/4 as "april fools day", not "april troll's day"

Re:April trolls day (1)

Mr Z (6791) | more than 11 years ago | (#5641041)

I've always thought of 1/4 as "one fourth" or January 4th, depending on the context...

Re:April trolls day (1)

missing000 (602285) | more than 11 years ago | (#5641069)

I've always thought of 1/4 as "one fourth" or January 4th, depending on the context...

Hmm... I assume from the above that you suffer from an "American education"?

Most of the world uses D/M/Y, but clearly that's not acceptable to you, I guess.

Re:April trolls day (1)

Mr Z (6791) | more than 11 years ago | (#5641130)

Where it matters, I write the date unambiguously. For example, YYYY-MMM-DD, where MMM is a 3-character abbreviation.

Re:April trolls day (1)

MattCohn.com (555899) | more than 11 years ago | (#5641379)

And the best way to write it is

Y/M/D

It sorts alphabetically into the correct date order (provided you zero-pad the month and day).
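
For example (a minimal sketch in plain Python; the dates are my own made-up examples, not from the thread), zero-padded Y/M/D strings sort lexicographically straight into chronological order:

    # Zero-padded Y/M/D strings: alphabetical order == chronological order.
    dates = ["2003/04/01", "2002/12/31", "2003/01/04"]
    assert sorted(dates) == ["2002/12/31", "2003/01/04", "2003/04/01"]
    print(sorted(dates))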

eventually... (1)

greywire (78262) | more than 11 years ago | (#5639392)

it would be possible.

But much sooner than that, perhaps really soon if not already, you could simply build a digital camera into, say, a pair of sunglasses...

Not another Ask Slashdot... (0, Troll)

NickV (30252) | more than 11 years ago | (#5639394)

that can be answered by popping the question into Google and looking at the first 10 results..

God I hate slashdot!

Nearly Impossible (5, Informative)

lliiffee (413498) | more than 11 years ago | (#5639409)

Just a few cells back from the retina, the visual signals have already been 'encoded' in a way that would make a straight pixel map hard to obtain. (Each neuron here corresponds to a weird Gaussian thing centered on a given point.) Furthermore, the signals aren't sent down a single neat channel; they go all over the place, all willy-nilly. Theoretically, these things could be overcome, but the most serious problem is that our eyes at any given moment only look at a tiny, tiny bit of space. The illusion of a continuous field of vision is created by the brain in an amazing process which is not very well understood.
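
As a rough illustration of that encoding (a toy Python sketch with made-up parameters, not a model of any real cell), each output neuron reports something like a difference-of-Gaussians weighted sum of the light around one point, rather than a single pixel value:

    import numpy as np

    # Toy center-surround receptive field: a difference of Gaussians (DoG).
    # One cell's output is a single weighted sum over the image, not a pixel.
    def dog_response(image, cx, cy, sigma_center=1.0, sigma_surround=3.0):
        ys, xs = np.indices(image.shape)
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        center = np.exp(-d2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
        surround = np.exp(-d2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
        return float(np.sum(image * (center - surround)))

    image = np.random.rand(64, 64)            # stand-in for light hitting the retina
    print(dog_response(image, cx=32, cy=32))  # one cell's output: just one number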

Re:Nearly Impossible (1)

PD (9577) | more than 11 years ago | (#5639476)

Easy solution: don't tap the optic nerve. Tap the retina. See the post about the cat elsewhere in the comments for this story.

Re:Nearly Impossible (3, Interesting)

MacJedi (173) | more than 11 years ago | (#5639727)

The problem is that if you tap in at that point (and let's pretend that you could sink enough electrodes into the retina; if you're tapping in at that level you'd have to hit a significant percentage of the cells), the raw image would be very poor. You'd have to do all the processing yourself, in hardware, and the required processing is not fully understood.

I'd suggest that you'd be better off letting the brain do most of the processing and taking output from the visual cortex. I believe there has been some success doing this with blind persons. Tapping into the optic nerve is a tempting compromise, but remember that the optic nerve is made up of roughly a million axons. I doubt a simple cuff electrode would do the trick; you'd need the firing rates for each one (or at least some large percentage of the axons), and this is beyond the current state of the art, afaik.

At any rate, the cat example you're citing was for tapping into the thalamus. That's about smack dab in the middle of the brain. Some of the computation has been done by that point and some hasn't, so that might be a good compromise.

It's important to realize that there is computation done at virtually every step of the path from retina to the visual cortex. There is no passive transmission of data (afaik) so each part is important.

/joeyo

Re:Nearly Impossible (1)

blahlemon (638963) | more than 11 years ago | (#5640611)

There is also the problem that the image at the back of the eye is inverted, so every image recovered would need to be flipped.

It would probably be easier to monitor the activity in the visual portion of the brain and translate that activity into an image than to try to understand the mess of nerves in the optic nerve bundle.
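
(For what it's worth, the flipping step itself would be trivial; a minimal sketch in Python, assuming the recovered picture is just a 2D array of pixel values:)

    import numpy as np

    recovered = np.random.rand(480, 640)  # hypothetical decoded image, upside down and mirrored
    upright = np.rot90(recovered, 2)      # a 180-degree rotation puts it the right way up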

Re:Nearly Impossible (1)

KDan (90353) | more than 11 years ago | (#5641122)

Wow, that's a really big problem. They haven't solved that yet, you know. Whenever you get an upside-down picture, the print shops have to throw it away cause it's useless. It would take years on a massive parallel cluster to flip the image back...

Daniel

Already being worked on (link included) (4, Informative)

rask22 (144831) | more than 11 years ago | (#5639418)

There has already been research in this area using cats. The researchers were able to reconstruct images of what the cat was actually seeing. Pretty amazing stuff if you ask me.

link: http://www.berkeley.edu/news/media/releases/99legacy/10-15-1999.html

Re:Already being worked on (link included) (1)

Lendrick (314723) | more than 11 years ago | (#5640040)

Interesting stuff...

Here's a question, though. When you dream, do the images that you're dreaming go through the thalamus as well?

Re:Already being worked on (link included) (1)

itwerx (165526) | more than 11 years ago | (#5640521)

Here is a functional link [berkeley.edu]

But if it was just me I'd probably just use a number drill (very small drill bit) and a wire-wrap tool...

(Now the Karma question - is this a Funny, Informative, Troll? :)

Re:Already being worked on (link included) (1)

bill_mcgonigle (4333) | more than 11 years ago | (#5641146)

They're actually tapping into the brain there, where the image reconstruction occurs. A deep-brain implant is somewhat less fun to think about than a nerve tap, but we'll have 'em in 30 years [kurzweilai.net] anyway.

Perception (3, Interesting)

xyzzy (10685) | more than 11 years ago | (#5639422)

The problem is that you'd probably get a *shitty* picture. Or at best, it wouldn't reproduce "what you saw" any more than a regular camera does.

The majority of what you "see" is there precisely because of the post-processing done by your brain, as well as by your eye and optic nerve. This happens both at the optical level (shading, motion, etc.) and because your brain applies all kinds of cognitive processes to the visual signal. The eye isn't simply a passive sensor like a CCD.

Re:Perception (1)

Associate (317603) | more than 11 years ago | (#5641080)

True that. I read somewhere, possibly here, that there is around a 50% data loss from the eye to the brain. The brain extrapolates the rest.
Besides, slide shows from my vision would be cut short by someone yelling FOCUS!!

Re:Perception (1)

zaqattack911 (532040) | more than 11 years ago | (#5655432)

Don't fucking say "true that" .

God I hate that shit. speak english you maggot

Re:Perception (1)

n2dasun (467303) | more than 11 years ago | (#5656628)

Calm down. He/she didn't mean it.

Re:Perception (1)

Associate (317603) | more than 11 years ago | (#5658179)

God I hate that shit. speak english you maggot
Sure, I'll stop, as soon as you learn to capitalize and punctuate. :P

Eye is part of the visual SYSTEM (1)

bill_mcgonigle (4333) | more than 11 years ago | (#5639438)

Assuming this isn't a dumb april fools joke (are they lame this year, or what?)...

No.

What you see is the result of a whole lot of post-processing by a supercomputer called 'your brain'. The input from the optic nerve is quite inferior to the image you see.

For instance, your digital camera would have a blind spot [colostate.edu] in every picture. It's also upside down, and probably non-uniform in its curvature.

Re:Eye is part of the visual SYSTEM (0)

Anonymous Coward | more than 11 years ago | (#5640666)

Well, they've apparently done it (http://www.berkeley.edu/news/media/releases/99legacy/10-15-1999.html)... so how can you be so sure?

Re:Eye is part of the visual SYSTEM (1)

bill_mcgonigle (4333) | more than 11 years ago | (#5641034)

Right, they've tapped into the brain there.

not there yet (1)

daveilers (251819) | more than 11 years ago | (#5639448)

Here [sabac.co.yu] you can see where they were on this not so long ago.

Short version: they hooked up to 177 cells about halfway down a cat's optic path and were able to reconstruct images/movies from the signals they received. One problem is how hard it is to connect to all the nerves without disrupting their message. The other problem is that the image information changes as it moves from the eye to the brain, getting processed along the way, so they were only able to interpret it at one particular spot on the path to the brain.

You can see pictures here [harvard.edu].
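
For a feel of how that interpretation step can work, here is a toy sketch of the linear-decoding idea (Python, with entirely made-up data; this is not the actual method or code from the study): fit a linear map from recorded firing rates back to pixel values, then apply it to the responses for a frame the decoder never saw.

    import numpy as np

    # Toy linear decoding: learn rates -> pixels from a "training movie",
    # then reconstruct a new frame from the cell responses alone.
    rng = np.random.default_rng(0)
    n_frames, n_cells, n_pixels = 500, 177, 16 * 16

    frames = rng.random((n_frames, n_pixels))          # stimulus movie, flattened frames
    encoder = rng.normal(size=(n_pixels, n_cells))     # unknown "retina/LGN" encoding
    rates = frames @ encoder + 0.1 * rng.normal(size=(n_frames, n_cells))

    # Ridge-regularized least squares gives the decoder matrix.
    lam = 1e-2
    decoder = np.linalg.solve(rates.T @ rates + lam * np.eye(n_cells), rates.T @ frames)

    test_frame = rng.random(n_pixels)
    reconstruction = (test_frame @ encoder) @ decoder      # decode from responses only
    print(np.corrcoef(test_frame, reconstruction)[0, 1])   # original vs. decoded similarity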

Pr0n (0)

termos (634980) | more than 11 years ago | (#5639512)

Porn will never be the same again... after shooting it with grandma's eyes!

amazingly.... (0)

Anonymous Coward | more than 11 years ago | (#5639521)

This april fools ask slashdot is actually less retarded than a non-april fools one.

The eye is an engineer's worst nightmare. (1)

t (8386) | more than 11 years ago | (#5639561)

Imagine that you are an Electrical Engineer and you are given a new camera technology. The specs are that the red, green, and blue sensors are randomly distributed. The distribution of sensors is also non-uniform spatially: most are in the middle. The number of sensors for each color also varies. The responses from the sensors overlap in irregular ways; no two sensors have exactly the same response to the same stimulus. Oh yeah, the signals from the sensors are unmapped: we have no idea which signals belong to which sensors. The sensors also return signals independently of each other, so unlike a typical digital camera with a shutter, you have to integrate the signal over a period of time. Unfortunately, the assembly that holds the sensors is also jiggling constantly (saccades). And one more, just to piss you off: the assembly is filled with water, so light entering it will bend depending on the color of the light. You'll have to correct that distortion. Did I mention that the "grayscale" content of the image comes from a whole different set of sensors?
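
To make that concrete, here is a toy sketch (Python, with invented numbers for everything) of what those specs amount to: scattered, unevenly distributed sensors of unequal types, each integrating its own noisy signal over a time window, with no global shutter and no map of which output belongs to which sensor.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors = 5000

    # Non-uniform spatial layout: most sensors cluster near the center (fovea).
    positions = rng.normal(loc=0.0, scale=0.2, size=(n_sensors, 2)).clip(-1, 1)

    # Random, unequal mix of the three color sensor types.
    sensor_type = rng.choice(["L", "M", "S"], size=n_sensors, p=[0.6, 0.3, 0.1])

    # Each sensor integrates its own noisy signal over time; no global shutter.
    def integrate(scene_intensity, t_window=0.05, dt=0.001):
        steps = int(t_window / dt)
        samples = scene_intensity + 0.05 * rng.normal(size=(steps, n_sensors))
        return samples.mean(axis=0)

    scene = rng.random(n_sensors)    # stand-in for the light landing on each sensor
    readout = integrate(scene)
    print(readout.shape, sensor_type[:5], positions[:2])  # unmapped signals: which is which?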

Mod parent up (1)

archnerd (450052) | more than 11 years ago | (#5659895)

Well put.

Re:The eye is an engineer's worst nightmare. (1)

More Karma Than God (643953) | more than 11 years ago | (#5674544)

Easy solution.

Implement a neural network to transform these disorderly impulses into a meaningful image.

Pincushion problem (1)

crmartin (98227) | more than 11 years ago | (#5639614)

It could be done (someone else mentioned the cat experiments) but long before you got the resolution of the eye, you'd run into something we used to call the "pincushion problem" -- by the time you've got enough electrodes to capture the information, you no longer have the tissue of interest, you have a pincushion -- and pincushions don't act like normal tissue.

But let's assume you did it somehow (nanotech, maybe -- everyone knows nanotech can do ANY magic desired). The eye isn't really like a digital camera at all: each of the sensory cells has a photosensitive pigment called rhodopsin, which is bleached by exposure to light, changing the cell's electrical properties in the process. (What I recall is that the bleaching sets off a chemical cascade that changes the cell's membrane potential, rather than liberating electrons directly, but look it up as I'm not certain.) This degree of bleaching is what produces the signal, which gets encoded as a series of pulses on the optic nerve fibers.

This sounds like a digital signal, but it's more complicated than that, because the rhodopsin regenerates slowly -- this is why it takes minutes to get your night vision back after exposure to light. The cells in the retina communicate among themselves as well. The result is that the signal from the retina is ... well, weird.

The point is that while you might be able to make it work (if you solve the pincushion problem), you wouldn't gain much, because the eye is a crappy camera. It's the signal processing afterwards that's good. If you want a really good representation of what the "eye" sees -- or rather, what the brain sees -- you ought to use a good digital camera and try to figure out the signal processing instead.
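
As a rough sketch of that bleach-and-regenerate behaviour (Python, with rate constants I made up purely for illustration), even a simple first-order model shows the fast bleaching under light and the minutes-long recovery in the dark:

    import numpy as np

    # Toy photopigment model: light bleaches pigment quickly, regeneration is slow.
    def simulate_pigment(light, k_bleach=0.5, k_regen=0.01, dt=0.1):
        """light: intensity over time; returns the unbleached pigment fraction."""
        p = 1.0                      # fraction of pigment still unbleached
        history = []
        for intensity in light:
            bleach = k_bleach * intensity * p
            regen = k_regen * (1.0 - p)
            p += dt * (regen - bleach)
            p = min(max(p, 0.0), 1.0)
            history.append(p)
        return np.array(history)

    t = np.arange(0, 600, 0.1)                  # ten simulated minutes
    light = np.where(t < 60, 1.0, 0.0)          # bright light for 1 minute, then darkness
    pigment = simulate_pigment(light)
    print(pigment[int(60 / 0.1)], pigment[-1])  # mostly bleached vs. mostly recovered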

It's all relative (1)

Jahf (21968) | more than 11 years ago | (#5639695)

The way I remember it from biology (which is a stretch at this point ;), each person interprets the information from the cones and rods differently.

In other words, each picture taken off the optic nerve would be relative to the person who saw it.

We learn to associate a color with the information we get, but one person might see "red" when a cone is active, another might see "red" when a rod is active.

If you could tap into the light coming directly into the eye, maybe, but that is a hardware mod, not a signal tap, and I don't see it being taken very well.

Maybe if you could create individual filters for each person easily it would work ... but really, I don't think I -want- people to have access to what I see :)

Re:It's all relative (1)

Grishnakh (216268) | more than 11 years ago | (#5640062)

So does this mean that people who get eyeball or brain transplants are going to see things in weird colors?

Re:It's all relative (1)

softwave (145750) | more than 10 years ago | (#5643398)

Or more importantly, Jahf, does this mean that the color known as "red" could be associated by someone else with what I associate with "green", for example?

I've been pondering this for quite some time. I don't know if I'm making myself clear. But in other words, could it be that "my" red = "your" green or some other color??

Re:It's all relative (1)

einstein (10761) | more than 10 years ago | (#5645585)

I'm red-green color-blind: Yes.

Re:It's all relative (0)

Anonymous Coward | more than 11 years ago | (#5652045)

No, you idiot. To me, the color gray looks like "foo". His brain interprets gray as "bar", but because he was told in school that the gray lollipop (ugh!) that he saw as bar was this thing called gray, and I was told that the same lollipop (that I see as foo) was gray, we can't communicate this effectively. It's an age-old philosopher's problem having to do with a generalization of point-of-view bias. It also applies to other senses, as well as to more complex ideas, like right and wrong. I too am colorblind, by the way: blue/purple and yellow/green. Those look like 2 colors to me, not 4.

bionic eyes (2, Funny)

cryptozoologist (88536) | more than 11 years ago | (#5640041)

in the book cyborg by martin caidin (the book the $6e+6 man was based on) our hero had a camera for an eye instead of the spiffy telephoto ir one he had on tv. he had to pop it out to change the film. today he could just stick a usb cable in his eye. also the bionic woman's ear would have ogg playback capability.

Here... (1)

M.C. Hampster (541262) | more than 11 years ago | (#5640777)


Why don't you lie down right here and I'll give it a try? *pulling out a scalpel*

EyeglassCam (1)

andrewski (113600) | more than 11 years ago | (#5641697)

It is conceivable that eyeglasses could be made, a la the Tom Cruise-era Mission: Impossible. They wouldn't be the greatest, but you could record what the eyeglass-wearer sees.

Personally... (1)

68K (234318) | more than 10 years ago | (#5644500)

...I think whoever cracks this one is going to die richer than Bill Gates. The number of pictures I'd take on a summer day's walk around town looking at the barely-dressed ladies would necessitate a 20GB hard disk stuffed up my ass. :-)

(Of course, doing it with a camera hidden behind some sunglasses would be a good start.)

And I'm sure there'd be significant applications in the medical and military fields. I've been thinking how cool this would be for years...

68K.

Surely only of limited use... (1)

biglig2 (89374) | more than 11 years ago | (#5651352)

I would say that only spies, perverts, etc. who wanted totally concealed cameras would find that mounting a camera on your glasses wasn't many orders of magnitude better. And slivers of slow glass will be easier for that. ;-)

Anyhow, the technology to do this through glasses will be needed to enable all the various fabulous things we will do once glasses become a favoured computer interface; HUD overlays, for example, will gain tremendously from knowing what it is you are seeing.

As Scott Adams puts it, we all want Terminator-style targeting boxes that will automatically lock onto any salesmen we meet while shopping. Also Predator-style night vision. (Well, more like in the computer game than the film, I suppose.)

Voice recognition to replace the keyboard and eye movement tracking to replace the mouse, those are the UI of the future.

All this stuff exists in bulky military hardware now, so give it a few years and it will filter down to consumers. There are already prototype systems for cars that give you enhanced night vision.

But wait. There's more. =) (1)

dlcantrell (573793) | more than 11 years ago | (#5654605)

What if someone could intercept those images? Talk about "Being John Malkovich"! That's the last thing I need: someone getting hold of pictures of me doing obscene things while dressed like Scooby Doo. Talk about humiliating!

Well I Got Mad Believing CIA Had Fixed A Camera (0)

Anonymous Coward | more than 11 years ago | (#5670740)

It's true, and I ended up in psychiatric care.
What you see is a private thing, and if someone else can see what you see, it is just horrible part of the time. But at other times it was fun... swimming and being sure that it could be seen. I hope to God that this notion leaves me... reading all these posts has helped. Thanks, and I am not joking.