Slashdot: News for Nerds

High-Speed Video Free With High-Def Photography

kdawson posted more than 4 years ago | from the obvious-once-you-think-of-it dept.

Science 75

bugzappy notes a development out of the University of Oxford, where scientists have developed a technology capable of capturing a high-resolution still image alongside very high-speed video. The researchers started out trying to capture images of biological processes, such as the behavior of heart tissue under various circumstances. They combined off-the-shelf technologies found in standard cameras and digital movie projectors. What's new is that the picture and the video are captured at the same time on the same sensor. This is done by allowing the camera's pixels to act as if they were part of tens, or even hundreds, of individual cameras taking pictures in rapid succession during a single normal exposure. The trick is that the pattern of pixel exposures keeps the high-resolution content of the overall image, which can then be used as-is, to form a regular high-res picture, or be decoded into a high-speed movie. The research is detailed in the journal Nature Methods (abstract only without subscription).

75 comments

How long (2, Funny)

Anonymous Coward | more than 4 years ago | (#31166140)

How long before it is used for porn?

Re:How long (1)

derGoldstein (1494129) | more than 4 years ago | (#31166366)

Just hit the "Pause" button.

How long? That's what she said! (-1, Redundant)

Anonymous Coward | more than 4 years ago | (#31166374)

Specifically for porn with massive, uncircumsized, erect nigger dicks with cum bubbles dripping down from the head.

Re:How long (0)

Anonymous Coward | more than 4 years ago | (#31167646)

Uhm, RTFA - it was developed to observe biological processes...

Re:How long (1)

cgenman (325138) | more than 4 years ago | (#31171570)

It has already happened, kind of. Esquire [gizmodo.com] has done cover shoots on a video camera, then selected individual frames to pull out for photos and the cover.

Of course, what the article is talking about is changing how high-speed photography happens in order to get high-speed video on the same chip... essentially, dividing each CCD into 16 interleaved regions, and firing those off sequentially to form 16 frames of video or 1 image. There is some image degradation inherent in what they're talking about doing, of course. Each frame of video is going to be 1/16th the resolution it would otherwise have, and the overall image exposure time will be longer than it would have been if they had fired all at once. But it is a nifty trick to better utilize these CCD beasts we've made.
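As a rough sketch of the idea (the interleaving pattern and array sizes here are my own guesses, not taken from the paper), the same raw exposure can be sliced into 16 low-res sub-frames:

```python
import numpy as np

# Toy sensor tiled into 4x4 = 16 interleaved pixel groups. Each group gets
# its own 1/16th of the exposure window, so one readout decodes either as
# 16 low-res video frames or stays as a single full-res still.
H, W, T = 16, 16, 4
raw = np.random.rand(H, W)                   # the single captured exposure

frames = [raw[r::T, c::T] for r in range(T) for c in range(T)]

assert len(frames) == T * T                  # 16 frames of "video"...
assert frames[0].shape == (H // T, W // T)   # ...each 1/16th the pixels
```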

I read the title as "High-Def Pornography"... (4, Funny)

HouseOfMisterE (659953) | more than 4 years ago | (#31166172)

I think it's past my bedtime.

Re:I read the title as "High-Def Pornography"... (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31166202)

first reply!

Re:I read the title as "High-Def Pornography"... (0)

Anonymous Coward | more than 4 years ago | (#31166242)

Haha me too. (And it's past my bedtime too).

Re:I read the title as "High-Def Pornography"... (0)

Anonymous Coward | more than 4 years ago | (#31166464)

Me too, but then again it was 4:20 not too long ago...

Re:I read the title as "High-Def Pornography"... (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31166528)

I think it's past my bedtime.

The old, worn out, tired, repetitive, unoriginal redundant "I can't read correctly" Slashdot meme. And this shit keeps getting modded Funny. How many more iterations of this already-not-very-funny joke need to occur before you assholes realize it's not all that funny?

Q: How come there's no truly clever or truly witty humor on Slashdot?
A: Because shit like this keeps getting modded up.

Re:I read the title as "High-Def Pornography"... (1)

HouseOfMisterE (659953) | more than 4 years ago | (#31170886)

Q: How come there's no truly clever or truly witty humor on Slashdot?

A: Because shit like this keeps getting modded up.

O: Slashdot seems to have an open door policy for @$$#0l3$, though. ;)

interlacing (2, Interesting)

Anonymous Coward | more than 4 years ago | (#31166182)

Sounds like they have a high resolution image sensor but the timing of the data samples from certain groups of pixels is staggered. Sort of like how one frame of interlaced NTSC DVD video can represent a single "high resolution" 720x480 image, or a series of two 720x240 images 1/60th second apart.
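The NTSC analogy above can be sketched in a few lines of NumPy (sizes are illustrative, not from the paper):

```python
import numpy as np

# One interlaced 720x480 frame: even rows exposed at t = 0, odd rows
# 1/60 s later. The same buffer reads as one "high resolution" still
# or as two half-height fields captured at different instants.
frame = np.arange(480 * 720).reshape(480, 720)

still = frame                 # full 720x480 image
field_t0 = frame[0::2]        # even rows, first instant
field_t1 = frame[1::2]        # odd rows, 1/60 s later

assert field_t0.shape == field_t1.shape == (240, 720)
```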

Re:interlacing (4, Insightful)

MrNaz (730548) | more than 4 years ago | (#31166424)

Yea that's the first thing I thought as well; the principle is similar to video interlacing from back in the day, except that this is more sophisticated, and could conceivably be used to capture extremely high definition, extremely high framerate footage.

If you apply this technology to high grade 50mpix Hasselblad sensors, you could conceivably achieve frame rates of thousands of frames per second in 2k or even 4k resolution using gear that costs under $100k. Currently, that sort of photography is limited to national science bodies and multi-million dollar budgets. Being able to do that sort of thing for under 6 figures would open up HUGE research possibilities for university science labs and other relatively fund-poor institutions.

Re:interlacing? (1)

N Monkey (313423) | more than 4 years ago | (#31166760)

Yea that's the first thing I thought as well; the principle is similar to video interlacing from back in the day, except that this is more sophisticated, and could conceivably be used to capture extremely high definition, extremely high framerate footage.

I could only read the abstract, but this just seems to be the reverse of Frameless Rendering: Double Buffering Considered Harmful [nus.edu.sg] which relates to rendering 3D graphics in scattered sets of pixels.

Re:interlacing (2, Interesting)

Rockoon (1252108) | more than 4 years ago | (#31166820)

I think you've missed the point.

Use the high-frame-rate camera to take a high-frame-rate video, or use it to take a high resolution picture, but you can't take a high-frame-rate high-resolution video.

The idea is that the light-sensitive components have a minimum response time that is too large to capture high-frame-rate digital data without tricks. So engineers, being what they are, use separate groups of them with staggered capture times in order to achieve high frame rates. In the simplest case there would be only two groups of sensors, probably called odd and even, which would allow double the frame rate of that minimum response time.

What these blokes have noted is that the groups of sensors which capture a single frame are stippled across the capture device, so if the capture times were not staggered then the effective resolution would be higher. Essentially they are un-staggering the capture times post-capture in order to achieve that high resolution, meaning that you cannot have both at the same time.

The most they can save appears to be 50%: the cost of a regular high resolution capture device, which they didn't get with their high-frame-rate device purchase.

Re:interlacing (0)

Anonymous Coward | more than 4 years ago | (#31166922)

Yes, but I didn't miss the point. Video is far lower resolution than photo, so assuming you want a 1080HD video at 1920x1080, which is 2mpix, a 30fps 50mpix sensor could, in theory, do (50/2)*30 FPS, or 750fps. If you drop the res to HD720 then the potential framerate goes up further.
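The parent's arithmetic checks out as a pixel-bandwidth argument (a back-of-the-envelope sketch, not a claim about any specific sensor):

```python
# A sensor reading 50 Mpix at 30 fps has ~1500 Mpix/s of readout bandwidth.
# Spending that bandwidth on smaller frames raises the achievable frame
# rate proportionally.
sensor_mpix, base_fps = 50, 30
bandwidth = sensor_mpix * base_fps            # Mpix per second

fps_1080 = bandwidth / (1920 * 1080 / 1e6)    # ~723 fps at full 1080p
fps_720 = bandwidth / (1280 * 720 / 1e6)      # ~1627 fps at 720p

assert 700 < fps_1080 < 750
assert fps_720 > fps_1080                     # lower res, higher frame rate
```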

Re:interlacing (4, Informative)

infolation (840436) | more than 4 years ago | (#31167038)

The technology they're using, which can derive high-resolution frames by comparing several successive frames or by analyzing the rolling-shutter effect of CMOS cameras, is actually already well established in film visual effects.

Visual effects technology company 'The Foundry' have done quite a lot of research into this area already.

Their Furnace F_SmartZoom [thefoundry.co.uk] tool uses motion estimation techniques to analyse successive film frames to derive single frames of higher resolution than any one of the moving frames. And their Rolling Shutter [thefoundry.co.uk] tool uses local motion estimation algorithms to analyse the staggered frames output by CMOS cameras to reconstruct them into complete un-staggered frames.

It's very interesting that the scientists in Oxford are exploiting this side effect of CMOS cameras by combining both these technologies to derive high resolution, un-blurred frames from multiple CMOS images.

As a side-note, District 9 was shot on the Red camera (a CMOS camera that exhibits this rolling shutter effect), and a lot of Image Engine's post-production work on that film required this sort of analysis so that staggered frames could be reconstructed to enable 3-D motion tracking for the insertion of CG into live action plates.

Re:interlacing (1)

dosowski (15924) | more than 4 years ago | (#31168308)

Use the high-frame-rate camera to take a high-frame-rate video, or use it to take a high resolution picture, but you can't take a high-frame-rate high-resolution video.

So it's the Heisenberg Photography Principle?

Re:interlacing (1)

drkim (1559875) | more than 4 years ago | (#31196126)

No...
It's the Lincoln/"fooling the people" effect. :)

Re:interlacing (2, Informative)

scdeimos (632778) | more than 4 years ago | (#31178224)

The idea is that the light sensitive components have a minimum response time that is too large to capture high frame-rate digital data without tricks.

It's not actually a minimum response time issue, at least not from a CCD sensor point of view (as opposed to CMOS sensors you tend to see in consumer-level digital video and photography products).

"Traditional" high-speed photography with CCD sensors usually works by lighting the scene with high-intensity light sources so that the sensors are able to gather enough photons within the short exposure times to be "useful." Have a look around GooTube for things like the "SawStop demo" on the Discovery Time Warp program for a good example of this.

If you look at a single pixel element on a CCD sensor it's essentially a photon well - it receives photons from the environment and converts them to an electric charge. Assuming the electronics reading charges out of the CCD sensor are good enough, a single photon striking a pixel element would be detectable, thus it's not really a pixel-related minimum response time issue.

The conventional electronics used in the "read out" process of a CCD sensor essentially do the following: they enable a "row" of pixel elements and clock the electric charges across the "columns" by using something akin to a bucket brigade network. The charge from the column getting clocked off the side of the sensor is read by an ADC (analogue-digital converter) and stored in a digital buffer (RAM) before being sent to the host device. Each row is "clocked out" and read in this fashion, then the whole CCD sensor is shorted to reset any residual charges ready for the next exposure. Any response time issues are in the clocking-out process, since the weakest link in the chain will be the time needed by the ADC to capture and convert a single charge.

The proposed technique changes the read-out process in several ways, vastly increasing the complexity of the CCD sensor's bucket brigade network and reset electronics in the process. Say, for example, the sensor is set up as an array of 2x2 elements (the article proposes 4x4 elements). The read-out process needs to read out pixels in four phases: even columns on even rows, odd columns on even rows, even columns on odd rows, odd columns on odd rows. That sounds complex already, right? But it's worse: because the sensor will essentially be exposed continuously, you also need to reset the charges in those groups individually, otherwise you'll get residual charge build-up that skews the data over time. If you don't, all pixel elements will eventually read as full charges.

Electronics complexity issues aside, I'm wondering how useful this technique will be for high-speed scientific research. When looking at the resultant high-speed video, each frame will be offset slightly in both the horizontal and vertical directions (1/2 pixel in a 2x2 network, 1/4 pixel in a 4x4 network). To some degree this can be corrected using sub-pixel blending, but that will introduce errors into the frames, thus reducing their utility. Nonetheless, it sounds like a very interesting technique.
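The four-phase read-out described above can be made concrete with boolean masks (a sketch of the 2x2 case only, not the actual sensor electronics):

```python
import numpy as np

# 2x2 phased read-out: four masks, one per (row parity, column parity).
# Each phase would be clocked out and reset independently; together the
# phases cover every pixel exactly once, so no group is read twice and
# no pixel is left to accumulate residual charge.
H, W = 4, 6
rows, cols = np.indices((H, W))
phases = [(rows % 2 == r) & (cols % 2 == c) for r in (0, 1) for c in (0, 1)]

coverage = sum(m.astype(int) for m in phases)
assert (coverage == 1).all()    # the four phases partition the sensor
```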

Re:interlacing (2)

Ihmhi (1206036) | more than 4 years ago | (#31168816)

Hasselblad sensors

There's seriously something called a Hasselblad sensor? That is fucking awesome. That sounds like something off of Babylon 5. "Incoming enemy fighters on the Hasselblad sensors!"

Re:interlacing (0)

Anonymous Coward | more than 4 years ago | (#31172608)

Hasselblad is a camera brand, and they make medium-format digital cameras (huge compared to most). A "Hasselblad sensor" is not a special type of sensor, any more than the image sensors in Canon, Nikon, Sony, Pentax, and other cameras are.

Representative sample (3, Funny)

LordLucless (582312) | more than 4 years ago | (#31166204)

As I read this, there are three comments. Two are about porn. Slashdot in a nutshell.

Re:Representative sample (2, Funny)

Anonymous Coward | more than 4 years ago | (#31166238)

As I read this, there are three comments. Two are about porn. Slashdot in a nutshell.

Actually, only one of those three was about porn. The other two (and this one) are just offtopic. So that makes Slashdot 25% horny, 25% pedantic, and 75%-100% offtopic.

Re:Representative sample (0)

Anonymous Coward | more than 4 years ago | (#31166306)

Technically, taking into account post #2's title, the parent's assessment is correct.

Re:Representative sample (1)

MoeDumb (1108389) | more than 4 years ago | (#31166246)

I noticed 5 comments and came to see if the first one would be serious or about porn. Expectation confirmed.

Re:Representative sample (1)

derGoldstein (1494129) | more than 4 years ago | (#31166376)

And then your post complains about Slashdot. Add that to the nutshell.

Re:Representative sample (1)

noidentity (188756) | more than 4 years ago | (#31172410)

As I read this, there are three comments. One is about porn, and another is about Slashdot itself. Slashdot in a nutshell.

so, basically.... (0)

Anonymous Coward | more than 4 years ago | (#31166266)

a new optimized/specialized OS for a digital camera...

I've actually thought about this... (5, Interesting)

pushing-robot (1037830) | more than 4 years ago | (#31166318)

...and how eventually cameras will not have a "shutter" as we know it but will simply keep track of how each pixel was illuminated at each moment in time. Of course, shutterless sensors are already in widespread use; we call them "eyes", and they have the same benefits that TFA describes: Your brain can observe low-detail fast-moving objects and high-detail static objects at the same time without having to reconfigure anything. Consequently, shutterless cameras would have the side benefit of better approximating biological vision.

The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor, so that the zoom, exposure time, and focus can be changed in post-processing (as well as a lot of other cool stuff).

Re:I've actually thought about this... (0)

Anonymous Coward | more than 4 years ago | (#31166372)

There are already shutterless cameras. They're called video cameras...

Re:I've actually thought about this... (0)

Anonymous Coward | more than 4 years ago | (#31166418)

They still have a shutter speed.

Re:I've actually thought about this... (1)

derGoldstein (1494129) | more than 4 years ago | (#31166480)

But the shutter speed is done in software (or by adjusting the signal processors, which are in hardware, but it's still "programming", rather than a mechanical mechanism). The term "shutter speed" is becoming vestigial.

Some stills cameras do too, but.... (2, Interesting)

N Monkey (313423) | more than 4 years ago | (#31166928)

There are already shutterless cameras. They're called video cameras...

Some stills cameras, e.g. on phones, are shutterless as well, but often have some interesting artefacts [flickr.com] .

In this case it is probably due to the high level of correlation between pixel position and "shutter" time. I'm guessing (judging only by the abstract) that in the paper they are using a pseudo-random pattern for the pixel sampling, which would trade these weird effects for 'noise' that would be less obvious.

Re:Some stills cameras do too, but.... (1)

jesset77 (759149) | more than 4 years ago | (#31166992)

Some stills cameras, e.g. on phones, are shutterless as well, but often have some interesting artefacts [flickr.com] .

Ha, sweet; but that's not the most illustrative image from that set. I prefer This one [flickr.com] .. you know, since hardware folk might mistake your linked image with some new, weed-whacker style floppy propeller system. :3

In the photo I linked, I love how the "pseudoblades" also have well-defined shadows xD

Re:Some stills cameras do too, but.... (1)

derGoldstein (1494129) | more than 4 years ago | (#31167178)

you know, since hardware folk might mistake your linked image with some new, weed-whacker style floppy propeller system.

Well I just looked at your image and I think I'm looking at some matter-displacement, levitation/suspension technology that involves volume-altering materials... and possibly LSD.

Re:Some stills cameras do too, but.... (1)

asvravi (1236558) | more than 4 years ago | (#31171752)

This is a common weaving artefact in cheap cameras that use CMOS sensors instead of the higher quality CCD sensors. CMOS sensors do not have the so called "global shutter" mechanism - so the exposure and the serial scan out of pixels both take place simultaneously, which creates this effect. In contrast, CCD sensors have a global shutter, which allows the exposure to complete before serially scanning out the pixels, thus protecting the image from a "rolling" exposure.

A good descriptive article here - http://www.dvxuser.com/jason/CMOS-CCD/ [dvxuser.com]

Re:Some stills cameras do too, but.... (1)

crossmr (957846) | more than 4 years ago | (#31169744)

I'm never flying again.

Re:I've actually thought about this... (1)

nmos (25822) | more than 4 years ago | (#31166446)

and how eventually cameras will not have a "shutter" as we know it but will simply keep track of how each pixel was illuminated at each moment in time.

I believe most non-SLR digital cameras already do without a physical shutter. I've always thought that for these cameras it might be useful to break up a typical exposure into multiple shorter exposures and just stack the resulting images using the differences between frames to detect noise and blur due to camera shake etc.

Re:I've actually thought about this... (3, Funny)

derGoldstein (1494129) | more than 4 years ago | (#31166458)

This entire field can easily be extrapolated. First, the shutter is a mechanical component that isn't required -- every portable computer has a video camera that can take still images. The reason we still have shutters in high-end cameras is because of the way sensors are currently designed, and the fact that modern DSLRs are basically upgraded film SLRs.
And what about the lens? If the sensors are omnidirectional and can simply keep reporting their state at a high frequency, the "lens" (its optical purpose) can be done in software. You just need a high density of sensors and the ability to process the information fast enough.
Obviously, the individual sensors can't be truly omnidirectional, but rather their visibility angle would depend on the geometry of the surface they're placed on -- which could be a hemisphere, or even an almost complete sphere. As you mentioned, the angle of light would still be relevant, but this would be done on an individual sensor basis -- rather than one lens orchestrating the entire image.

There, we solved it. Engineers, get to work!

Re:I've actually thought about this... (0)

Anonymous Coward | more than 4 years ago | (#31166572)

While the shutter isn't needed for more than keeping dust away from the sensor, the mirror which directs the light to your eye still is.

Re:I've actually thought about this... (1)

derGoldstein (1494129) | more than 4 years ago | (#31167050)

That's being done away with as well. The newer LCDs reproduce a better representation of the photo you're going to shoot than the eyepiece. There are also digital eyepieces which replicate the image on a tiny projector. The term "Single Lens Reflex" will soon be done away with.

Re:I've actually thought about this... (1)

dunkelfalke (91624) | more than 4 years ago | (#31169138)

Sorry but no. The resolution of an eyepiece is higher, and the reaction speed is the speed of light, so there is no lag.

Re:I've actually thought about this... (1)

derGoldstein (1494129) | more than 4 years ago | (#31170496)

Light speed: You're right. If you react fast enough to notice the difference between the highest speed optical sensors, and the lens reflection, then yes, you do indeed require light speed. And congratulations on having super powers.

Resolution: If you got a 1080p image into that tiny projector, would that be enough? How about a 4k image? At what point would the resolution be high enough?

Both of these characteristics will improve with time -- it won't be long before the clunky reflex mechanism is out.

Re:I've actually thought about this... (2, Informative)

dunkelfalke (91624) | more than 4 years ago | (#31170850)

A high resolution optical sensor delivers a shitload of data - 20 and more megabytes for every frame. The processing of the data from the Bayer matrix (we won't take the Foveon into account for the sake of the argument) and resizing also takes time. You need at least 60 fps to get rid of lag while moving. Have fun processing 1.2 terabytes per second.

Re:I've actually thought about this... (1)

dunkelfalke (91624) | more than 4 years ago | (#31170954)

Gigabytes, sorry.

Re:I've actually thought about this... (0)

Anonymous Coward | more than 4 years ago | (#31167616)

I think we still have shutters to use the flash.

Re:I've actually thought about this... (1)

John Hasler (414242) | more than 4 years ago | (#31168096)

> ...the "lens" (its optical purpose) can be done in software.

No it can't. The pixels have no information about the direction from which each photon arrived. Without a lens, each of your pixels will receive photons from every point in the scene with no way to sort them out.

Re:I've actually thought about this... (1)

derGoldstein (1494129) | more than 4 years ago | (#31170700)

No it can't. The pixels have no information about the direction from which each photon arrived. Without a lens, each of your pixels will receive photons from every point in the scene with no way to sort them out.

...and 3 lines later in the post I said: "the angle of light would still be relevant, but this would be done on an individual sensor basis -- rather than one lens orchestrating the entire image".
A bit like a fly's eye -- every sensor would only report on a single angle. Place an array of these sensors on a hemisphere (or an only slightly convex/concave surface), make it dense enough, and you *could* do the rest in software.

Re:I've actually thought about this... (1)

Arthur Grumbine (1086397) | more than 4 years ago | (#31166466)

The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor...

I'm usually one of the last to predict the limits of future technology, but I cannot imagine how this could ever be any more practical/useful than capturing/tracking every hundredth, or thousandth, or even millionth photon. I think maybe there's a lack of perspective as to the insane frequency at which photons contact a given surface area.

Re:I've actually thought about this... (1)

madddddddddd (1710534) | more than 4 years ago | (#31166576)

"ultimate dream" wasn't a fair enough qualifier for you?

Re:I've actually thought about this... (2, Informative)

Anonymous Coward | more than 4 years ago | (#31166610)

It depends. In good lighting you don't need to register all photons. However, in a dark room or for watching the night sky, each photon counts. Here is an informative article: http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html
The human eye can actually register a flash of about 90 photons (10% of them will reach the retina, so about 9 photons is enough to activate receptors). The sensitivity also depends on the wavelength.

Re:I've actually thought about this... (1)

derGoldstein (1494129) | more than 4 years ago | (#31167002)

I can't reach the article you posted for some reason, but from what I recall there are several species that have eyes that are designed for low-light *and* high frequency -- owls are one example. If you want resolution, then hawks are able to see a moving target, the size of a mouse, from a mile away.

Of course, by giving any example found in nature you're setting the bar pretty high. If we wanted to replicate the functionality of the simplest plankton (like a Picoplankton), we'd probably need to construct a building-sized bio-processing facility.

Re:I've actually thought about this... (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31166618)

I think maybe there's a lack of perspective as to the insane frequency at which photons contact a given surface area.

So it's about as often as a Mexican breeds or a nigger commits a crime, then?

Re:I've actually thought about this... (3, Informative)

EdZ (755139) | more than 4 years ago | (#31167218)

It's of massive value in astronomy. And it's exactly what superconducting image sensors [esa.int] do.

Re:I've actually thought about this... (1, Interesting)

Anonymous Coward | more than 4 years ago | (#31166494)

There have already been several adaptive sensor/camera designs and prototypes proposed that adjust the integration (shutter) time independently for each pixel on the sensor so that no pixel is saturated (maxed out). Consequently knowing the per-pixel integration time and sensor value allows you to reconstruct a high dynamic range image. This design seems to be the application of the idea of binning (which has been used for noise reduction and improved dynamic range when coupled with a spatially varying attenuation filter) but instead using it to integrate over staggered intervals.
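A minimal sketch of that per-pixel integration idea (assuming an idealized linear sensor; all numbers are invented for illustration):

```python
import numpy as np

# Each pixel stops integrating before it saturates and records its own
# integration time t; dividing the stored value by t recovers relative
# radiance, giving high dynamic range from a fixed-depth pixel well.
radiance = np.array([0.1, 1.0, 10.0, 100.0])     # scene, spanning 30 dB
full_well = 1.0

t = np.minimum(0.9 * full_well / radiance, 1.0)  # per-pixel shutter time
value = radiance * t                             # no pixel saturates

assert (value <= full_well).all()
assert np.allclose(value / t, radiance)          # radiance recovered
```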

Re:I've actually thought about this... (1)

Mashdar (876825) | more than 4 years ago | (#31168248)

My eyes have shutters. You must look pretty weird.

About your holographic idea, Heisenberg (or even De Broglie) might have something to say about that (something about the difficulty of identifying both the location and the momentum of a particle).

Also (being from /. I have not read TFA), are they talking about staggering pixel capacitor charging across rows? Up until this point, has everyone really been doing this row-by-row? This seems sub-optimal for high-def images of moving objects, anyway.

Re:I've actually thought about this... (0)

Anonymous Coward | more than 4 years ago | (#31170738)

My eyes have shutters.

Trying to compete with BadAnalogyGuy? Your eyelids are much more like a lens cap.

Re:I've actually thought about this... (1)

complete loony (663508) | more than 4 years ago | (#31168330)

The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor

That's already been done to some extent. Put a grid of small lenses in front of your sensor and you can trace each ray back through the main lens and reconstruct a lower resolution image from any point of view, or focal length, that would have been viewable from any point within the volume of the camera.

Re:I've actually thought about this... (2, Interesting)

gillbates (106458) | more than 4 years ago | (#31170166)

The overwhelming majority of digital cameras do not have a shutter. You do realize that clicking sound comes not from a shutter, but from a small speaker, right?

I'm honestly sorry I didn't patent this technique back in 2005 when I was working with digital image sensors, but suffice to say, it's been known about and used in industry for quite some time. Engineers have always known there was a tradeoff between image resolution and frame rate, and this appears a rather obvious compromise. An image sensor chip has a limited bandwidth for reading out pixels, so naturally the frame rate is a function of the image pixel count.

Most image sensors can be reconfigured rather quickly, perhaps even between frames. This technique is hardly worth a patent, as it's obvious to anyone who's ever had to make a tradeoff between frame rate and light sensitivity, or frame rate and resolution. For video, there's the standard D1 resolution of 720 by 480. For stills, the whole resolution of the sensor is used. So obvious that it is hard to consider it novel enough to patent.

Re:I've actually thought about this... (1)

pushing-robot (1037830) | more than 4 years ago | (#31175580)

I guess I should have said "frameless" cameras (it was two in the morning...).

Many cameras today don't have a physical shutter, but they still work (to my knowledge) by exposing the sensor to light for a significant fraction of a second, then reading the cumulative charge on the sensor elements and "flushing" the elements, and then starting over again.

A "frameless" camera is an oversampled camera; instead of exposing the sensor for ten milliseconds or more at a time, you would take readings more than once per millisecond. To generate an image, you downsample some of the sub-millisecond "frames" into a composite image. The frames would be averaged to reduce noise, and "exposure time" could be changed after the fact by adding or removing micro-frames.

Granted, it's not much different from how current digital video sensors operate, except in scale. You could do some fancy stuff with noise reduction and image stabilization (among other things) if you had sub-millisecond sensor data.
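A toy version of that oversampled, "frameless" capture (the scene value and noise model are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Record many short, noisy micro-frames of a static scene instead of one
# long exposure. "Exposure time" is then just how many micro-frames you
# average in post, and averaging more of them reduces read noise.
scene = 0.5
micro = scene + 0.1 * rng.standard_normal(1000)  # 1000 micro-frame readings

short_exp = micro[:10].mean()    # short exposure, chosen after capture
long_exp = micro.mean()          # long exposure from the very same data

assert abs(long_exp - scene) < 0.05   # 1000-sample mean sits near the scene
```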

Re:I've actually thought about this... (1)

noidentity (188756) | more than 4 years ago | (#31172518)

The ultimate dream would be a truly holographic sensor that records exactly where, when, and at what angle each photon hit the sensor, so that the zoom, exposure time, and focus can be changed in post-processing (as well as a lot of other cool stuff).

No, the ultimate sensor would record the quantum wavefunctions of the "photons", rather than the collapse of them. Then you could... well, I'm not sure what that would allow, but it's clearly capturing more information.

Re:I've actually thought about this... (1)

steelfood (895457) | more than 4 years ago | (#31174000)

That'd be great for the purposes of mimicking biological eyes, i.e. a sensor for a general-purpose robot to keep data rates down. However, it wouldn't be good for photography or cinematography as an art at all. The idea is to capture both the minute details and the fast motion at the same time to create an experience that's a little different from reality. It's like HDR or UV photography. Our eyes don't have that kind of capability, yet we still would want to take such pictures.

Please help (0)

Anonymous Coward | more than 4 years ago | (#31166706)

Will someone please rewrite the summary for those of us who aren't Xhibit?

Re:Please help (0)

Anonymous Coward | more than 4 years ago | (#31166948)

Will someone please rewrite the summary for those of us who aren't Xhibit?

Will someone please rewrite your comment for those of us who don't know what the fuck Xhibit is?

Sounds familiar (1)

gringer (252588) | more than 4 years ago | (#31166866)

This sounds like something I remember a flatmate talking about previously; there is a free software program that did this. You took a few low-resolution pictures, ran them through the program, and got out a high resolution image. The same can be done with a video (as the low-resolution pictures).

I can't recall the name of the program, will have a hunt for it.

Re:Sounds familiar (1)

carlvlad (942493) | more than 4 years ago | (#31166984)

You mean like image interpolation ?

Re:Sounds familiar (0)

Anonymous Coward | more than 4 years ago | (#31167676)

What you're thinking of is called super-resolution:
http://en.wikipedia.org/wiki/Super-resolution
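A minimal shift-and-add illustration of super-resolution (idealized: exact half-pixel offsets and no noise, which real implementations must estimate):

```python
import numpy as np

# Four low-res frames, each sampling a different 2x2 sub-pixel phase of
# the same scene, interleave back onto the fine grid to recover the full
# image. Real super-resolution must first estimate these offsets.
scene = np.arange(64, dtype=float).reshape(8, 8)    # "ground truth"

lows = {(r, c): scene[r::2, c::2] for r in (0, 1) for c in (0, 1)}

recon = np.empty_like(scene)
for (r, c), frame in lows.items():
    recon[r::2, c::2] = frame

assert np.array_equal(recon, scene)
```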

Re:Sounds familiar (1)

John Hasler (414242) | more than 4 years ago | (#31168292)

> You took a few low-resolution pictures, ran them through the program, and
> got out a high resolution image.

This is not the same thing at all.

Re: Sounds familiar (1)

ArundelCastle (1581543) | more than 4 years ago | (#31169210)

Sounds like this development would greatly improve a 2008 Casio camera a friend told me about a couple weeks ago. 6MP, with full res shots going into the buffer @ 60fps before you fully press the shutter button. Up to 1200 fps (tiny) video.
Hate to sound like a shill, but "high-resolution still image alongside very high-speed video" describes this pretty well, depending on your definition of "high" at least.

http://www.exilim.com/intl/ex_f1/features1.html [exilim.com]
http://www.casio.com/products/Cameras/EXILIM_High-Speed/EX-F1/ [casio.com]
http://gizmodo.com/383843/casio-exilim-ex+f1-slow+mo-super-cam-full-review-verdict-totally-unique-shockingly-powerful [gizmodo.com]

Free as in Free Links Please (OH MY A FREE LINK!) (1)

flaptrap (1038180) | more than 4 years ago | (#31167174)

Is Slashdot not about open access? I read enough complaints from scientist bloggers about having to be on campus, in the office, or else pay for articles they have a subscription to.

A little research back to the researchers themselves could doubtless second-source the information; I regularly see the authors post the articles themselves, or at least an informative link: http://www.isis-innovation.com/licensing/3268.html [isis-innovation.com]

They just re-invented interlacing (0)

Anonymous Coward | more than 4 years ago | (#31167396)

... just when we are about to finally get rid of that horrible hack, it makes a comeback

Firmware update (1)

JobyOne (1578377) | more than 4 years ago | (#31199044)

So when do I get a firmware update to turn my HandyCam into a high-speed video monster?