
Sharpest Images With "Lucky" Telescope

kdawson posted more than 7 years ago | from the palomar-ascendent dept.


igny writes "Astronomers from the University of Cambridge and Caltech have developed a new camera that gives much more detailed pictures of stars and nebulae than even the Hubble Space Telescope, and does it from the ground. A new technique called 'Lucky imaging' has been used to diminish atmospheric noise in the visible range, creating the most detailed pictures of the sky in history."


Yawn (0)

Anonymous Coward | more than 7 years ago | (#20458271)

So what is so special? Many telescopes can resolve better than the Hubble in the visible range.

I'd only be impressed if they somehow made a ground telescope that could resolve in the infrared better than the Hubble.

Re:Yawn (0)

Anonymous Coward | more than 7 years ago | (#20458335)

The article says the images formed by this process are not just better than Hubble in the visible range, it claims that they are the most detailed ever. Seems pretty impressive to me.

Re:Yawn (3, Informative)

QuickFox (311231) | more than 7 years ago | (#20458475)

According to the second article on that page, it's the other way around:

Images from ground-based telescopes are invariably blurred out by the atmosphere. Astronomers have tried to develop techniques to correct the blurring called adaptive optics but so far they only work successfully in the infrared where the smearing is greatly reduced.

Re:Yawn (1)

hackus (159037) | more than 7 years ago | (#20458893)

Not likely to happen anytime soon.

As everyone knows, and for those that do not, infrared wavelengths are absorbed by water vapor.

Keeps us toasty at night, but sadly blocks the infrared for observations at the same time.

-Hack

Re:Yawn (3, Informative)

Cecil (37810) | more than 7 years ago | (#20459307)

Actually, near infrared is not blocked by water vapor; in fact, water vapor is extremely transparent to near-infrared light, even more so than to visible light. That's why satellites can use infrared to see through clouds, and also why adaptive optics work so well in the near-infrared range.

Far infrared is a different story, and you're absolutely correct there.

Re:Yawn (3, Informative)

0123456789 (467085) | more than 7 years ago | (#20460285)

Adaptive optics works so well in the IR due to the wavelength dependence of the Fried parameter, r0, and hence of the effects of Kolmogorov turbulence. The blurring is weaker in the IR, hence it's easier to correct.


See here [noao.edu] , for example, for more information.


There are wavelength ranges in the NIR where the atmosphere is indeed transparent (J,H and K bands, for example); but the atmosphere is opaque at most NIR wavelengths (and, even at those IR wavelengths where the atmosphere is transparent, the transmittance is lower than at visible or radio wavelengths). See here [caltech.edu] for more info.

Lucky Imaging (5, Insightful)

dlawson (209945) | more than 7 years ago | (#20458319)

First post, huh.
This technique is often used by amateur astrophotographers using newer CCD cameras and even webcams. Astronomy Picture Of the Day http://antwrp.gsfc.nasa.gov/apod/astropix.html [nasa.gov] is a great site to see this stuff. I haven't checked Google's pictures, but I am sure that there would be a number of them there, too.
The quality of some of these photos is amazing.
davel

Re:Lucky Imaging (1)

ackthpt (218170) | more than 7 years ago | (#20458829)

CCD cameras need not all cost £££ or $$$. I'm in the midst of converting a Philips SPC900NC to an astro imaging camera. Alas, I don't think I'll finish in time for a trip with the scope to high elevation next weekend.

Re:Lucky Imaging (5, Interesting)

Anonymous Coward | more than 7 years ago | (#20459537)

Apologies for not having an account - but I would really like to ask a question for someone who understands the process.

the wikipedia entry on this subject http://en.wikipedia.org/wiki/Lucky_imaging [wikipedia.org] states that new procedures take, '... advantage of the fact that the atmosphere does not "blur" astronomical images, but generally produces multiple sharp copies of the image'.

Does the correction algorithm apply a single vector to each image (i.e. the entire frame is shifted as a unit) to produce the composite, or is a vector field applied to every pixel point in the image to shift the pixels individually toward their correct centres? Also, if it is pointwise, what type of transform is being applied: affine, perspective, etc.?

   

Re:Lucky Imaging (1)

YttriumOxide (837412) | more than 7 years ago | (#20462273)

Please mod parent up so that someone knowledgeable can answer the question - I don't know the answer myself, but would love to, and the Anonymous Coward score of zero means many people may miss this great question.

If they can do this from earth... (4, Interesting)

Ant P. (974313) | more than 7 years ago | (#20458327)

...can the same be applied in space telescopes to get rid of the interference of the gas clouds they're looking at?

Re:If they can do this from earth... (5, Funny)

Anonymous Coward | more than 7 years ago | (#20458387)

Well sure. All you have to do is bounce your laser off of those gas clouds to find out how to compensate for them. That should only take a couple hundred or a couple of thousand years with a laser that would consume more power than all of the Earth uses. Oh, and you better hope that that gas cloud doesn't change in the transit time.

Re:If they can do this from earth... (2, Insightful)

vtcodger (957785) | more than 7 years ago | (#20462375)

***Well sure. All you have to do is bounce your laser off of those gas clouds to find out how to compensate for them.***

That's adaptive optics. 'Lucky imaging' looks to be something different. Sounds like Lucky Imaging tries to catch and merge portions of the image that occasionally, by chance, make it through the ever-changing atmosphere with minimal distortion.

But I think that the answer to the original question is probably still 'No'. It doesn't sound like Lucky Imaging per se is an answer to the question "How can I see objects obscured by cosmic gas clouds?"

Re:If they can do this from earth... (4, Insightful)

Aesir1984 (1120417) | more than 7 years ago | (#20458485)

The distortion they are trying to get rid of is caused by motion of the air in the atmosphere. It's similar to the waves and blurring you see looking across a parking lot on a hot day. They put space telescopes out of the atmosphere to get above these effects. The objects they're looking at don't have this problem because the thing being imaged is what is giving off the light; it's not something between the source and the viewer, like the atmosphere is, and so it does not cause diffraction to the same extent. I would expect that this technique works rather well for bright objects, however I wouldn't expect it to work well for the very dim objects that the Hubble is normally tasked to look at. In order for them to use this technique they have to take many images per second. For very dim objects this would only mean a few photons per picture, not nearly enough to figure out if this image is any sharper than any other. So we won't be able to get rid of space telescopes or adaptive optics just yet.

No, and this is why. (2, Interesting)

edunbar93 (141167) | more than 7 years ago | (#20460953)

Interstellar gas clouds are pretty static. You would have to take one image every, say, year or maybe 100 years to really get any difference in the image quality. Whereas the earth's atmosphere produces an effect almost exactly the same as if you were to look at the bottom of a swimming pool, and in about the same timeframe.

No, the images we get right now from space telescopes are the best we can get at any given epoch, and that's just the way it is.

But surely... (0)

click2005 (921437) | more than 7 years ago | (#20458353)

"Amateur Lucky Imaging is popular because the technique is so cheap and effective. The low cost means that we could apply the process to telescopes all over the world."

can't they use the same techniques with the HST itself?

Re:But surely... (4, Insightful)

ScrewMaster (602015) | more than 7 years ago | (#20458373)

I'm just blowing smoke here, but it seems to me that a technique designed to compensate for atmospheric distortion might not be all that useful when there's no atmosphere.

Re:But surely... (1, Interesting)

click2005 (921437) | more than 7 years ago | (#20458459)

The technique takes the clearer portions from many images and merges them. The article says that some portions are less smeared than others but doesn't say if the atmosphere was also magnifying the target or not. I know astronomers have used gravity from intervening distant objects to magnify other distant objects, so couldn't a similar technique be used there?

Re:But surely... (1)

Tablizer (95088) | more than 7 years ago | (#20460011)

[use it on Hubble also] I'm just blowing smoke here, but it seems to me that a technique designed to compensate for atmospheric distortion might not be all that useful when there's no atmosphere.

Drats! There goes my thesis.
     

Re:But surely... (4, Informative)

drudd (43032) | more than 7 years ago | (#20458507)

As the previous poster noted, there isn't any atmosphere and thus the technique isn't useful for HST.

Additionally, while they don't mention details in the article, I presume they have a specially designed camera. This is an old technique, but it's generally limited to very bright objects due to something called readout noise. Basically all CCDs produce an additional signal due to the process of reading out the data. This limits the effectiveness of repeated short observations to sources which are much brighter than this noise, since the readout noise accumulates with every additional frame read out.

To image distant galaxies you typically have to take exposures of one to several hours, and thus this technique isn't useful.

Doug
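A rough back-of-the-envelope sketch of the readout-noise argument above, in Python; the photon rate, integration time and noise figure are invented for illustration and are not from the article or the real camera:

    import math

    # Invented, illustrative numbers -- not from the article.
    source_rate = 2.0      # photo-electrons/s from a faint source on one pixel (assumed)
    exposure    = 3600.0   # total integration time, in seconds
    read_noise  = 10.0     # electrons RMS added by each CCD readout (assumed)
    frame_rate  = 20.0     # short-exposure frames per second, as quoted in TFA

    signal = source_rate * exposure   # total collected signal is the same either way

    # One long exposure pays the readout penalty once.
    snr_long = signal / math.sqrt(signal + read_noise**2)

    # Rapid frames pay it once per frame; the readout contributions add in quadrature.
    n_frames  = exposure * frame_rate
    snr_short = signal / math.sqrt(signal + n_frames * read_noise**2)

    print(f"one 1-hour exposure    : SNR ~ {snr_long:.1f}")    # ~84
    print(f"{int(n_frames)} frames at 20 fps: SNR ~ {snr_short:.1f}")   # ~2.7

On these made-up numbers the faint source is easily detected in one long exposure but is buried by accumulated readout noise at 20 frames per second, which is why the high-speed approach traditionally needed bright targets (or the low-noise chip discussed further down the thread).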

Re:But surely... (0)

Anonymous Coward | more than 7 years ago | (#20458679)

Actually you are mistaken. If you research it a bit, the reason for the excitement is that the CCD imager used is capable of working with low-contrast images at high speed. It was only recently created. Pretty cool stuff.

Re:But surely... (5, Informative)

hazem (472289) | more than 7 years ago | (#20458685)

Additionally, while they don't mention details in the article, I presume they have a specially designed camera.

They are using a new kind of CCD that somehow lowers the noise floor. Details are at:
http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/LI_Why%20Now.htm [cam.ac.uk]

In fact this site (same basic place) is much more informative than the press release and answers a lot of questions:
http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/index.htm [cam.ac.uk]

Re:But surely... (2, Informative)

andersa (687550) | more than 7 years ago | (#20461537)

To sum up, the problem is readout noise. The faster you read out the CCD, the more noise you get. When you image a faint object the readout noise exceeds the signal level. The reason amateur astronomers can use this technique anyway is because they are imaging bright objects (like planets), so the signal is easily discernible from the readout noise.

Now there is a new type of CCD with a built-in signal multiplier that amplifies each pixel's charge before the readout step. You can simply select an appropriate multiplier that gives pixel values falling nicely in the middle of the register width, and when you read out the value, the readout noise can simply be subtracted away because you know that it will be much less than the signal value you are looking at.
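A minimal sketch of why multiplying the charge before readout helps, with an invented gain and noise figure, and ignoring shot noise and the multiplication excess-noise factor:

    # Invented, illustrative numbers -- not the real camera's specifications.
    signal_e   = 5.0     # photo-electrons collected in one short exposure (assumed)
    read_noise = 10.0    # electrons RMS added at the readout stage (assumed)
    em_gain    = 300.0   # on-chip electron-multiplication gain (assumed)

    # Conventional CCD: 5 e- of signal is buried under 10 e- of readout noise.
    snr_plain = signal_e / read_noise                  # ~0.5

    # Multiplying CCD: the charge is amplified *before* the noisy readout stage,
    # so, referred back to the input, the readout noise is divided by the gain.
    snr_em = signal_e / (read_noise / em_gain)         # ~150

    print(f"conventional CCD SNR: {snr_plain:.2f}")
    print(f"multiplying CCD SNR : {snr_em:.0f}")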

Re:But surely... (1)

BattleApple (956701) | more than 7 years ago | (#20458563)

Possibly, if the HST were as large and powerful as the land-based telescopes.

"The images space telescopes produce are of extremely high quality but they are limited to the size of the telescope," Dr Mackay added. "Our techniques can do very well when the telescope is bigger than Hubble and has intrinsically better resolution."

Re:But surely... (1)

CalSolt (999365) | more than 7 years ago | (#20460669)

Well shit, I'm waiting for the array of 1,000 foot telescopes on the moon. THEN we'll be doing some serious astronomy.

awesome... (0)

Anonymous Coward | more than 7 years ago | (#20458357)

gonna get me some new posters to fa..look at!

ummm... (0, Offtopic)

rstruzik (900860) | more than 7 years ago | (#20458363)

using 'Blue Peter' technology

sounds painful

Re:ummm... (1)

German_Dupree (1099089) | more than 7 years ago | (#20458523)

That's what happens when you play with string...

Blue Peter for non-Brits (4, Informative)

ackthpt (218170) | more than 7 years ago | (#20459139)

using 'Blue Peter' technology

Blue Peter [bbc.co.uk] is a BBC children's show. Blue Peter technology is effectively something so simple a child could do it.

Re:Blue Peter for non-Brits (1)

poena.dare (306891) | more than 7 years ago | (#20460163)

Just wait... they'll be using "squint" and "move the menu back and forth at arms length" technology next.

Re:Blue Peter for non-Brits (1)

monk.e.boy (1077985) | more than 7 years ago | (#20461657)

Did anyone understand that comment?

Re:Blue Peter for non-Brits (1)

Cruise_WD (410599) | more than 7 years ago | (#20462169)

Have you never watched a person with failing eyesight try and read the small-print on a menu (or anything else)?

Well, I thought it was a funny comment, anyway.

Exposure Time? (3, Insightful)

MonorailCat (1104823) | more than 7 years ago | (#20458385)

TFA states that the camera takes 20 frames per second. Aren't most exposures of deep space objects on the order of seconds or minutes (or longer)? Seems like 1/20th of a sec wouldn't cut it for all but the brightest objects.

Re:Exposure Time? (4, Informative)

gardyloo (512791) | more than 7 years ago | (#20458449)

Add up 1000 of those frames, and you have a 50 second exposure.

Re:Exposure Time? (1)

ackthpt (218170) | more than 7 years ago | (#20459301)

Use something like Registax [astronomie.be] .

Re:Exposure Time? (1)

PhunkySchtuff (208108) | more than 7 years ago | (#20462499)

No, you don't. There is a certain threshold for signals to be recognised above noise in an image sensor. If the signals you're trying to detect are too weak, many rapid samples will only give you more noise.

Re:Exposure Time? (1)

QuickFox (311231) | more than 7 years ago | (#20458567)

Seems like 1/20th of a sec wouldn't cut it for all but the brightest objects.
One of the short texts below the two initial articles says that it's a new camera capable of detecting individual photons:

This new camera chip is so sensitive that it can detect individual particles of light called photons even when running at high speed. It is this extraordinary sensitivity that makes these detectors so attractive for astronomers.
Unfortunately it doesn't give any details on how much light is needed compared to other techniques.

Compared to adaptive optics? (5, Informative)

kebes (861706) | more than 7 years ago | (#20458409)

One of the main limitations to ground-based optical telescopes (and one of the reasons that Hubble gets such amazing images) is that the atmosphere generates considerable distortion. Random fluctuations in the atmosphere cause images to be blurry (and cause stars to twinkle, of course). The technique they present appears to be taking images at very high speed. They developed an algorithm that looks through the images, and identifies the ones that happen to not be blurry (hence "lucky"). By combining all the least blurry images (taken when the atmosphere just happened to be not introducing distortion), they can obtain clear images using ground-based telescopes (which are bigger than Hubble, obviously). I imagine the algorithm they've implemented tries to use sub-sections of images that are clear, to get as much data as possible.

Overall, a fairly clever technique. I wonder how this compares to adaptive optics [wikipedia.org] , which is another solution to this problem. In adaptive optics, a guide laser beam is used to illuminate the atmosphere above the telescope. The measured distortion of the laser beam is used to distort the imaging mirror in the telescope (usually the mirror is segmented into a bunch of small independent sub-mirrors). The end result is that adaptive optics can essentially counter-act the atmospheric distortion, delivering crisp images from ground telescopes.

I would guess that adaptive optics produces better images (partly because it "keeps" all incident light, by refocusing it properly, rather than letting a large percentage of image acquisitions be "blurry" and eventually thrown away), but adaptive optics are no doubt expensive. The technique presented in TFA seems simple enough that it could be added to just about any telescope, increasing image quality at a sacrifice in acquisition time.
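For the curious, here is a stripped-down sketch of that select-and-stack loop in Python/NumPy. The peak-pixel sharpness metric, the 10% cut and the single cross-correlation shift per frame are illustrative assumptions, not details taken from the Cambridge paper:

    import numpy as np

    def sharpness(frame):
        # Crude Strehl-like score: a frame caught in a calm moment has a
        # tighter, brighter stellar core, so its peak pixel value is higher.
        return float(frame.max())

    def lucky_stack(frames, keep_fraction=0.1):
        """Keep the sharpest fraction of short exposures, align and co-add them."""
        ranked = sorted(frames, key=sharpness, reverse=True)
        best = ranked[: max(1, int(len(ranked) * keep_fraction))]

        # Align each kept frame to the sharpest one with a single global shift,
        # found from the peak of the FFT cross-correlation, then sum.
        reference = best[0]
        ref_fft = np.fft.fft2(reference)
        stack = np.zeros_like(reference, dtype=float)
        for frame in best:
            corr = np.fft.ifft2(ref_fft * np.conj(np.fft.fft2(frame)))
            dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
            stack += np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        return stack / len(best)

Tools mentioned elsewhere in this thread, such as Registax, add refinements (sub-pixel alignment, per-region processing, post-stack sharpening), but the core idea is this loop.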

Re:Compared to adaptive optics? (1)

Cadallin (863437) | more than 7 years ago | (#20458599)

Both are employed pretty heavily by advanced "Amateur" astronomers. I put amateur in quotes because people at the high end of the hobby may have setups costing $50,000-$100,000 or more, going up to as much as people are willing to spend. There are several companies (http://www.sbig.com/ [sbig.com] for example) that specialize in producing imaging equipment and software for these setups. It's pretty amazing what these people are able to do.

Re:Compared to adaptive optics? (4, Informative)

Phanatic1a (413374) | more than 7 years ago | (#20458769)

ObRTFA: RTFA. It's not used *instead* of adaptive optics, it's used together with adaptive optics.

The camera works by recording the images produced by an adaptive optics front-end at high speed (20 frames per second or more). Software then checks each one to pick the sharpest ones. Many are still quite significantly smeared but a good percentage are unaffected. These are combined to produce the image that astronomers want. We call the technique "Lucky Imaging" because it depends on the chance fluctuations in the atmosphere sorting themselves out.

Re:Compared to adaptive optics? (2, Informative)

edunbar93 (141167) | more than 7 years ago | (#20460731)

ObRTFA: RTFA. It's not used *instead* of adaptive optics, it's used together with adaptive optics.

No, they propose that it be used together with adaptive optics. The research that was done to produce this press release was actually done at the Mount Palomar observatory, which was completed in 1947 [caltech.edu] and most certainly does not feature adaptive optics.

From the article:

The technique could now be used to improve much larger telescopes such as those at the European Southern Observatory in Chile, or the Keck telescopes in the top of Mauna Kea in Hawaii. This has the potential to produce even sharper images.

(Emphasis mine)

Re:Compared to adaptive optics? (1)

garvon (32299) | more than 7 years ago | (#20459027)

http://www.ast.cam.ac.uk/~optics/Lucky_Web_Site/LI_vs_AO.htm [cam.ac.uk]
Is an article comparing Lucky to Adaptive. It looks like Lucky can work with dimmer objects then the Adaptive.
   

Re:Compared to adaptive optics? (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#20461967)

than, shitcock

Re:Compared to adaptive optics? (1)

jstott (212041) | more than 7 years ago | (#20459519)

Overall, a fairly clever technique. I wonder how this compares to adaptive optics, which is another solution to this problem.

The two techniques are unrelated; either one or both at the same time can be used to improve the images. Actually, the sample images from the original article were taken through a telescope (Palomar) using basic adaptive optics to improve the image before the "lucky" software even saw the data.

As you suggest, this also works with sub-sections of the image. I saw this same technique described at a conference about five years ago for imaging through terrestrial turbulence (I think it was funded by the US Army), and they were using image sub-sections at the time; having seen the videos they had, subsectioning is clearly both possible and effective.

-Jonathan

Re:Compared to adaptive optics? (1)

Tablizer (95088) | more than 7 years ago | (#20459909)

Random fluctuations in the atmosphere cause images to be blurry ... The technique they present appears to be taking images at very high speed. They developed an algorithm that looks through the images, and identifies the ones that happen to not be blurry (hence "lucky"). By combining all the least blurry images ... they can obtain clear images using ground-based telescopes

Actually, this is kind of what the human eye/brain does when observing through telescopes. Human observers have generally been able to discern details better than (regular) astrophotography. This is because the brain remembers the details of the sharpest moments.

However, I see a potential downside of the Lucky technique. For dim objects you need a lot of light, or samples in this case. If you toss the majority of them because they are blurry, then you don't get a lot of samples. Hubble can keep just about every sample. Thus, such a technique may work better for obtaining clarity of brighter objects than for obtaining the details of faint objects.
     

Re:Compared to adaptive optics? (1)

TheMCP (121589) | more than 7 years ago | (#20460431)

Adaptive optics does not require a guide laser. The system often works by identifying an object in the image which is essentially small: a far away star that will register as more or less a point source, for example. It then uses that as its guide, and distorts the mirror to minimize the image of that object: essentially, to focus it. If that object is "focused", then objects near it generally are too.

Using this methodology, a large ground based telescope can easily achieve better imaging than the Hubble, and this has been true for a decade or so. The Hubble is more famous, however, because NASA often uses it to make pretty pictures and puts them out as press releases, while most large ground based telescopes are used to do actual science and the results are a lot of un-photogenic numbers.
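As a toy illustration of that "sharpen the guide star" idea, here is a tiny simulated closed loop in Python. Real adaptive-optics systems use wavefront sensors and reconstructors rather than blind hill-climbing, and every number below is invented, but image-sharpening loops of this general kind do exist:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy model: the atmosphere adds an unknown phase error over a few actuator
    # zones, and the guide star's peak brightness falls off with the residual error.
    n_act = 8
    atmosphere = rng.normal(0.0, 1.0, n_act)       # unknown wavefront error (made up)

    def guide_star_peak(actuators):
        residual = atmosphere - actuators           # what the mirror fails to cancel
        return np.exp(-np.sum(residual**2))         # 1.0 would be perfectly focused

    # Closed loop: nudge each actuator in whichever direction brightens the star.
    actuators = np.zeros(n_act)
    step = 0.1
    for _ in range(200):
        for i in range(n_act):
            for delta in (+step, -step):
                trial = actuators.copy()
                trial[i] += delta
                if guide_star_peak(trial) > guide_star_peak(actuators):
                    actuators = trial

    print(f"peak before correction: {guide_star_peak(np.zeros(n_act)):.3f}")
    print(f"peak after correction : {guide_star_peak(actuators):.3f}")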

You Too Can Get Lucky. (5, Informative)

Erris (531066) | more than 7 years ago | (#20458431)

DIY [cam.ac.uk] .

Re:You Too Can Get Lucky. (2, Informative)

[rvr] (93881) | more than 7 years ago | (#20459019)

This is indeed no news to amateur astronomers. This technique has been used extensively by planetary imagers in recent years to take amazing photos of Jupiter, Mars and Saturn. The basic tools are a good webcam to take AVI files and Registax to process the frames. Take a look at Damian Peach's best images [damianpeach.com].

As for pro, there is even an article in Wikipedia about it: Lucky imaging [wikipedia.org] : "Lucky imaging was first used in the middle 20th century, and became popular for imaging planets in the 1950s and 1960s (using cine cameras or image intensifiers). The first numerical calculation of the probability of obtaining lucky exposures was an article by David L. Fried in 1978."

In order to throw away many frames and retain only those of high quality, you'd better have a bright object or a big telescope. In this case, the astronomers were able to image a faint nebula.

Re:You Too Can Get Lucky. (0)

Anonymous Coward | more than 7 years ago | (#20460067)

What? A twitter [slashdot.org] post without "M$ Windoze" Inconceivable!

Performance vs. Adaptive Optics (0)

Anonymous Coward | more than 7 years ago | (#20458441)

Can anyone comment as to whether this method would be superior to the adaptive optics currently planned for the Thirty Meter Telescope (of which Caltech is a participant)? Or would this potentially be used as a supplement to the adaptive optics rather than a low-cost substitute?

Re:Performance vs. Adaptive Optics (1)

gardyloo (512791) | more than 7 years ago | (#20458493)

Can anyone comment as to whether this method would be superior [...]
You must be new here.

Re:Performance vs. Adaptive Optics (1)

Aesir1984 (1120417) | more than 7 years ago | (#20458591)

It's not really superior, it's just cheaper. If you're only looking at bright objects (planets, Messier objects, nearby galaxies, and some stellar remnants) this method would work very well. The problem comes when you try to take a picture of dim objects, because you have to have enough light caught by the optics in the exposure time (1/20th of a second in their example) to get an image good enough to tell if it's a "lucky" image or not. If you're looking at something really dim then those few photons you'll get in that amount of time just won't cut it. One black picture is just as sharp as another as far as the computer algorithm is concerned.
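To put rough numbers on that, here is the arithmetic with purely invented photon rates (the real rates depend on the target, the telescope aperture and the filter, none of which are specified here):

    frame_rate  = 20.0       # frames per second, as quoted in TFA

    bright_rate = 50_000.0   # photons/s reaching the detector from a bright planet (assumed)
    faint_rate  = 4.0        # photons/s from a very faint galaxy (assumed)

    print(f"bright target: {bright_rate / frame_rate:,.0f} photons per frame")   # 2,500
    print(f"faint target : {faint_rate / frame_rate:.1f} photons per frame")     # 0.2

With a fraction of a photon per frame there is nothing for a sharpness ranking to grab onto, which is exactly the point about one black picture looking as sharp as another.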

Re:Performance vs. Adaptive Optics (0)

Anonymous Coward | more than 7 years ago | (#20458905)

In the article, they mention that this technology would best be suited to large ground telescopes. What I infer from this is that, due to the lesser light-gathering ability of smaller space telescopes, this technology would be less suited to them, but a suitably large mirror/lens/whatever-is-used-in-large-telescopes could probably achieve significant enough light-gathering capability to make for a good picture even with fast shutter speeds.

Re:Performance vs. Adaptive Optics (0)

Anonymous Coward | more than 7 years ago | (#20462537)

You infer wrong. This technique has zero application in space telescopes because they SIMPLY DON'T NEED atmosphere compensation tech.

Spider-sense (5, Interesting)

BitwizeGHC (145393) | more than 7 years ago | (#20458447)

That is really quite amazing, and reminds me a bit of the jumping spiders whose retinas vibrate to increase their optic resolution.

Re:Spider-sense (0)

Anonymous Coward | more than 7 years ago | (#20458639)

jumping spiders whose retinas vibrate to increase their optic resolution.

Well that's me tiptoeing nervously round my own home for the next week.

Re:Spider-sense (1)

QuickFox (311231) | more than 7 years ago | (#20458681)

Don't worry, they'll only jump on you when you're asleep.

Errors? (1)

WPIDalamar (122110) | more than 7 years ago | (#20458473)

Since it's running through a computer algorithm and piecing many together, and isn't just a single "lucky" picture, I wonder how much error is introduced by the algorithm. I mean sure, an algorithm like this might work well most of the time, but what happens when it produces an image that looks clear, but isn't accurate?

Re:Errors? (0)

Anonymous Coward | more than 7 years ago | (#20458713)

"...what happens when it produces an image that looks clear, but isn't accurate. "

This is a real possibility. Although probability is against this happening very often, or seriously obscuring the reality of what we're imaging, it is quite possible that simple common patterns of image noise could show up very often on these lucky images because they pass as "sharp" segments. But the more complex the pattern, the less probable it becomes. Reducing this effect would require a better algorithm that is able to correctly identify what is really there, and what is artifact.

Re:Errors? (1)

Score Whore (32328) | more than 7 years ago | (#20458721)

Assuming the noise is random, and the object isn't, then these should be pretty close to what the same picture taken with the same optical equipment in a vacuum would produce, with a slight bias towards the center of the noise. That is, if your noise is evenly distributed between 0.0 and 5.0, then averaging a few hundred slices through time would result in a fixed offset of 2.5 across the board, which can simply be subtracted and so is effectively the same as no noise.

There are things this wouldn't be useful for. Mainly anything that might be changing quickly. Like a light chart. Or some objects that emit in wavelengths that are absorbed by the atmosphere.
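A quick simulation of the parent's example, assuming (as the parent does) additive noise uniformly distributed between 0 and 5: the average converges on the true value plus a constant 2.5 offset that can be subtracted, while the remaining scatter shrinks roughly as one over the square root of the number of frames.

    import numpy as np

    rng = np.random.default_rng(0)
    true_pixel = 10.0   # the "real" sky value at one pixel (arbitrary)

    for n_frames in (1, 100, 10_000):
        noise = rng.uniform(0.0, 5.0, size=n_frames)     # per-frame noise, as in the parent
        averaged = (true_pixel + noise).mean()
        expected_scatter = 5.0 / np.sqrt(12 * n_frames)  # std. dev. of the mean of U(0, 5)
        print(f"{n_frames:6d} frames: average = {averaged:.3f} "
              f"(true 10.0 + 2.5 bias, scatter ~ {expected_scatter:.3f})")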

Re:Errors? (1)

Max Littlemore (1001285) | more than 7 years ago | (#20458743)

The principle is that by taking lots of pictures of the same thing, you can correct the error. The larger the sample you take, the closer you get to the true image. For error to be amplified you would almost need the same random dust particle arrangement from the telescope to the edge of the atmosphere in a significant sample of the images, which is very unlikely.

Of course you probably understand that.

what happens when it produces an image that looks clear, but isn't accurate.

In answer to your actual question, there are two possibilities. One is that life goes on as normal, the other is that it doesn't. Given the gravity of the consequences of even a small error in these images, we must be completely sure of their accuracy.

I therefore nominate you to travel to the cat's eye nebula to check. In return I promise to do everything in my power to ensure that the language is still alive on your return, so that someone can understand your report.

Dr. Mackay? (3, Informative)

comrade k (787383) | more than 7 years ago | (#20458481)

Dr Craig Mackay is happy to be contacted directly for interviews
Man, the whole Stargate franchise has been really going down the drain since they cancelled SG-1.

Government gets lucky. (0)

Anonymous Coward | more than 7 years ago | (#20458499)

"The researchers, from the University of Cambridge and the California Institute of Technology (Caltech), used a technique called "Lucky Imaging" to take the most detailed pictures of stars and nebulae ever produced - using a camera based on the ground. "

This also works when looking the other way.

Re:Government gets lucky. (1)

Artifakt (700173) | more than 7 years ago | (#20459839)

This is probably an adaptation of something the U.S.'s Office of Strategic Reconnaissance has been using for the last 10 years or so to do just that. (But remember - If I actually knew that for sure, I couldn't say anything).

Re:Government gets lucky. (1)

Tablizer (95088) | more than 7 years ago | (#20460049)

This is probably an adaptation of something the U.S.'s Office of Strategic Reconnaissance has been using for the last 10 years or so to do just that. (But remember - If I actually knew that for sure, I couldn't say anything).

You might be right. The "sky spies" may have had this technology for many decades, but couldn't publicize it so that the enemy wouldn't have it also. It's a shame that there may be many such military tech advances that we can't use for science. Somebody in the Pentagon probably said, "oh shit, the cat's out of the bag" when it was (re)discovered by civilians.
       

Many amateurs already do this (4, Informative)

szyzyg (7313) | more than 7 years ago | (#20458549)

There are several pieces of software which do some parts of this - Registax is what I use, but amateurs usually only have enough aperture to make this work for bright objects like planets. You can take a good quality webcam (the top-of-the-line Philips webcams are the best bang for your buck), record some video of a planet through a telescope and then pick out the least distorted images before adding them together to create the final image. Now, the trick is getting the best measurement of which images are undistorted, and getting enough light in each frame while keeping the exposure time short enough to beat the atmosphere.

Look at the planetary images here [imeem.com] for my attempts at this technique.

Re:Many amateurs already do this (1)

Tsiangkun (746511) | more than 7 years ago | (#20459443)

nice work, some incredible shots in that collection.

Re:Many amateurs already do this (1)

edunbar93 (141167) | more than 7 years ago | (#20460853)

The difference is the resolution of the camera. The Philips ToUcam can produce movies at 640x480, whereas I would expect that the cameras they were using in this research produce research-grade resolutions in excess of 1280x1024, which is no small feat to get working at 30fps. Also, to make this work with a ToUcam requires very bright objects and/or very large telescopes.

However, it's worth noting that amateurs' results today are typically much better than those of professional astronomers 30 or even 20 years ago, and are no doubt the inspiration for this new camera.

Comparison to hubble... (5, Informative)

Anonymous Coward | more than 7 years ago | (#20458579)

TFA mentions that they can achieve images better than Hubble. The sample image they show [cam.ac.uk] , of the Cat's Eye Nebula, isn't as sharp as the Hubble image of the same object [esa.int] .

Probably they can push their technique harder than this initial image suggests (it was mainly comparing the "lucky" image with a conventional, blurry, ground-based image)... But I just thought it would be good to show Hubble's pictures alongside.

Re:Comparison to hubble... (0)

Anonymous Coward | more than 7 years ago | (#20459283)

I was going to mention the same thing. While the Hubble image is indeed many times sharper, perhaps by "can achieve" they were simply conjecturing that a ground-based telescope could be designed to fully exploit the benefits of this post processing technique, so that superior resolution to Hubble is possible (at reduced cost no less).

Re:Comparison to hubble... (1)

Bemopolis (698691) | more than 7 years ago | (#20459525)

Yes, it would take a lot more explanation on their part to convince me that that image is sharper than the corresponding HST image. This could be the result of press office "manglage" (which, sadly, I have experienced firsthand). What I do know is that once they can remove all atmospheric effects from the 200-inch telescope, then the pictures would be twice as sharp as HST, since it is half its size (92 inches).

But my eyes tell me they haven't removed all atmospheric effects yet, and their words aren't convincing.

Re:Comparison to hubble... (1)

JavaManJim (946878) | more than 7 years ago | (#20459679)

Thank you for the Hubble version of the Cat's Eye Nebula picture.

1. The Slashdot introduction summary should say not as good as Hubble (HST). Instead it mistakenly says better than Hubble.

2. The TFA linked site should (chuckle) show Hubble pictures along with the other ground based pictures.

God bless John Grunsfeld and the other NASA space walking astronauts who fix HST. Also the vast supporting cast for those missions.

Thanks for the update.
Jim

Lucky Telescope (0, Offtopic)

s0m3body (659892) | more than 7 years ago | (#20458637)

Just give me a dope and you won't believe what my resolution is !

Ugg, Background (1)

Nutty_Irishman (729030) | more than 7 years ago | (#20458651)

You would think that someone who developed a state-of-the-art method to remove noise and distortions from atmospheric images would think twice about using a salt-and-pepper bitmap background.

50,000 times cheaper, so what (1)

schwit1 (797399) | more than 7 years ago | (#20458663)

Technology improves over time and it gets cheaper. The HST is 20 years old, and the technology to design and build it is even older. New inventions will come along in the next decades to make Lucky seem overpriced. But that doesn't stop people from deploying it now.

Not convinced by TFA (4, Interesting)

Oligonicella (659917) | more than 7 years ago | (#20458709)

Just went and looked up the Cat's Eye Nebula as taken by the Hubble. Lot more detail. What gives? Someone able to explain that, please?

link (1, Informative)

Anonymous Coward | more than 7 years ago | (#20458837)

Hubble: Cat's Eye Nebula [hubblesite.org]

Re:Not convinced by TFA (1)

darkmeridian (119044) | more than 7 years ago | (#20459757)

This is all a guess.

The Hubble images probably resolve fainter objects but the Lucky images are sharper. Sharpness means resolution of distinct objects is better. The Hubble may see more while the Lucky sees them sharper but misses out on faint objects. The big question for me is how good the Earth-based telescope is at picking up faint images, which appears to be Hubble's strength. The Hubble Telescope can peer at an object for hours at a time with an open aperture. A ground-based telescope cannot because the Earth turns.

I am also going to guess the Hubble image was processed more heavily after the fact.

Re:Not convinced by TFA (0)

Anonymous Coward | more than 7 years ago | (#20460793)

Fixing your knowledge:
- The Earth rotates in 24h
- Hubble orbits in 90 min
so Hubble cannot peer "hours at a time", but ground telescopes can.

Fixing your lack of imagination:
- apertures can be closed and reopened.

Re:Not convinced by TFA (1, Informative)

Anonymous Coward | more than 7 years ago | (#20461615)

Fixing your knowledge:
- The Earth rotates in 24h
- Hubble orbits in 90 min
so Hubble cannot peer "hours at a time", but ground telescopes can.
Hubble can actually produce million-second-long exposures [hubblesite.org]. That's 400 orbits, but stacking 21-minute exposures.

Re:Not convinced by TFA (1)

Oligonicella (659917) | more than 7 years ago | (#20462151)

Your 'knowledge' fix was explained away subsequently as baseless. Your aperture 'fix' is nonsense as both can do that. Good thing you posted AC.

Basically TFA is loaded. The photos they provide pale in comparison to Hubble shots. At the moment, the article is tripe. Let's see if things improve.

Let's see it beat Hubble at: (4, Interesting)

tjstork (137384) | more than 7 years ago | (#20458759)

I would think that before the scientists claim victory over Hubble, let's see their camera best some of Hubble's best work:

http://hubblesite.org/ [hubblesite.org]

There's a number of excellent Hubble images of just about everything in our solar system to the most distant galaxies.

I would put my money on Hubble, for two reasons.

First, the averaging algorithm is not without its flaws. They make the assumption that by averaging out a bunch of images, you eliminate distortion. For this to work, you have to assume that the probability of a particular pixel being in the right spot is higher as the distortion would essentially be random, and that could theoretically not be the case. If the distortion is completely random, then averaging a set of images would essentially lose the pixel that is being pushed around its "real" spot by the atmosphere, and you can actually see that, as the corrected images still look muddy compared to their HST or even adaptive optic counterparts.

Secondly, the atmosphere doesn't just distort light, it also filters it. You can use averaging to remove distortion "noise", but there's really no way to ascertain what information was removed by the atmosphere.

The bottom line is, yes, you can get some pretty good results with averaging software, but, if you have money to spend, the best images are going to be space based, and it's still going to cost a billion dollars. Given the promise the heavens hold for the advance of human understanding, let alone essentially infinite resources, one only hopes that policy makers will not be misled by the outrageous claim that one can get the best images from the ground. You can't. HST should not be thought of as an aberration made obsolete by adaptive optics or low-budget averaging. Low-budget averaging and adaptive optics really need to be thought of as getting by until we can put larger and better visible-wavelength telescopes into space.

Imagine what a Mt. Palomar-sized mirror could do if it were in space!

Re:Let's see it beat Hubble at: (1)

SoupIsGood Food (1179) | more than 7 years ago | (#20461907)

There's also the issue of deep sky surveys - these require looooooooooooong exposures. If you park your butt in front of a telescope in your backyard for a look through it, you will see more detail the longer you're parked. Your eye is able to pick out more information with longer exposure. So it is with imaging. Yeah, a really big mirror like on Palomar means you don't need to spend as much time imaging the same section of space as a smaller scope to capture equivalent detail, but here's the deal: the HST can look at the same thing for 20 minutes at a time, eking out every last little photon it can from the scene. Then it will do it all over again on the next orbit. And again, and again... each 20-minute pass adding to the cumulative light captured, until it's equivalent to an eleven day exposure. You're not going to get lucky enough to get eleven days worth of distortion-free viewing from a terrestrial scope without inviting in all sorts of software artifacts, sorry.

So, Lucky Imaging's a great step forward for some dirtside scope applications... less so for others, and not a replacement at all for a decent space-based instrument.

Surprised this hasn't been asked yet here (0, Redundant)

earthforce_1 (454968) | more than 7 years ago | (#20458799)

Is the algorithm used to pick the best image, or part of an image open source?

Re:Surprised this hasn't been asked yet here (1)

Tablizer (95088) | more than 7 years ago | (#20459983)

Is the algorithm used to pick the best image, or part of an image open source?

Ah, the Google Web-Search Telescope.
   

Is the Crusade Over? (0)

Anonymous Coward | more than 7 years ago | (#20459119)

I mean the crusade to save the Hubble.

Funny how something that had the scientific community up in arms and invectives flying at NASA and the Bush Administration (what's new eh?) is now a moot point. Twice as sharp, waaay cheaper. Time to put away the banners, boys!

Interesting but picture quality unjustified (4, Insightful)

bit01 (644603) | more than 7 years ago | (#20459231)

The technique they're using, while interesting, needs more justification.

I'm wary when I see people doing any selection on random data because there's the problem of selection bias; throwing away the hundred results that don't match what they want and keeping the one that does. Just getting an image that seems plausible is not good enough.

Their quality measure [cam.ac.uk] isn't one I'd use. They should be comparing the technique-plus-low-resolution-optics against high-resolution-optics directly. That is, doing image differencing of images taken at the same time and seeing what differences there are. They may well have good reason for assuming it's all okay but until somebody does that test they cannot assume they've removed all the variability that the atmosphere provides; there could be all sorts of hidden biases due to various atmospheric, molecular and statistical effects.

---

"Intellectual Property" is unspeak. All inventions are the result of intellect. A better name is ECI - easy copy items.

don't worry (1)

Lehk228 (705449) | more than 7 years ago | (#20459625)

we're taking pretty pictures of the sky not doing brain surgery.

Re:Interesting but picture quality unjustified (1)

thePig (964303) | more than 7 years ago | (#20460353)

An interesting point -
Since we are doing science, is it a good idea to throw nonconformist images away as improper?
Are we not bringing our own bias to this? If we are only looking at what we expect to find and throw away the unexpected, wouldn't science take a hit?

Re:Interesting but picture quality unjustified (1)

edunbar93 (141167) | more than 7 years ago | (#20460931)

Well, I guess that all depends on whether or not you can *prove* that the data you throw away is worth throwing away. If you take a thousand images of exactly the same thing over the course of an hour, and keep only the best images, that's a little different than taking toxicology readings from a thousand different patients and keeping only the best results. The Cat's Eye Nebula isn't going to change measurably from our perspective over the course of an hour. If you keep careful documentation on what you do to the images during processing (after all, the results must be reproducible), that should keep problems to a minimum.

However, there's also a term called "overprocessing", by which some amateurs can create images with detail that doesn't actually exist in the original images. The same careful documentation should keep that problem to a minimum as well.

Common use with amateurs, but has issues (3, Informative)

edremy (36408) | more than 7 years ago | (#20459339)

As many have pointed out, there are a whole pile of applications that do the same thing for amateur telescopes. I've taken my Dad's 40-year-old 6" Dynascope, fixed up the motor drive, bought a $60 webcam (Philips SPC900), adapter and UV filter and gotten some quite nice photos of the Sun, the Moon, Jupiter and Saturn by capturing a few thousand frames and running them through Registax. (I'm working on Mars and Uranus- a whole lot harder with a small scope from a suburban backyard.)

I'm curious though about how they deal with some of the "features" you get to see with this technique. It's *very* easy to stack a few hundred images, run Registax's sharpening filter and get some interesting pictures of stuff that doesn't really exist. I'm not sure I really trust the fine detail in my photos- unless I see it in another taken a few hours later it may well not be real.

Would they be able to try this with hubble? (0)

Anonymous Coward | more than 7 years ago | (#20459543)

Imagine, say, taking a movie using Hubble's CCD camera as Hubble very slowly rotates across a field of view, and then using the slightly different positioning and color difference of the CCD pixels to extrapolate the color of the slice that a given CCD pixel would now cover.

inverse process also useful (1, Funny)

Anonymous Coward | more than 7 years ago | (#20459773)

I've found the inverse process to work well on pictures of my wife ;)

Google Earth & satellite pics (1)

Circlotron (764156) | more than 7 years ago | (#20459779)

I hope the next time the pics used by Google Earth are updated they use something like this, assuming it is applicable.

They've got two things going on at once here (1, Insightful)

Anonymous Coward | more than 7 years ago | (#20460263)

You can't tell at all how much good the "lucky" camera is doing, although it is a tried and true technique. Notice the before pictures in each case are without adaptive optics and without the "lucky" camera. The "after" images have both, and EACH is likely to improve the image quality. I'd bet the adaptive optics is doing most of it, but it's a pretty shoddy way to present the data.

Space-based telescopes aren't dead yet... (2, Informative)

XNormal (8617) | more than 7 years ago | (#20460775)

Even if this technique can eventually produce better pictures at lower cost, it is still limited to wavelengths that can penetrate the atmosphere. Some of the most exciting recent discoveries are in infrared (Spitzer) and X-ray (Chandra). The next big telescope (James Webb Space Telescope) is also for infrared.

Does everything have to include jackassery? (1)

Quiet_Desperation (858215) | more than 7 years ago | (#20460905)

and it's 50,000 times cheaper than Hubble

That's a bit of a cheap shot. Hubble has been in operation for 17 years and has been a vital research tool. The tech for this new technique is, well, NEW.

i invented the lucky telescope concept in 1995. (4, Interesting)

geowiz (571903) | more than 7 years ago | (#20461109)

I invented this process in 1995. Here is my original post on the sci.image.processing newsgroup. My old email address is no longer active; the new one is geopiloot at mindspring.com (reduce the number of o's in "pilot" to one). It was ironic that many people jumped out to say it wouldn't work at the time. It does work, and it works well. In fact most of the additive image processing now done by amateur astronomers everywhere using PC software is based on my invention, which I did not patent.

George Watson

From: George Watson (71360.2455@CompuServe.com)
Subject: virtual variable geometry telescope
Newsgroups: sci.image.processing
Date: 1995/12/11

Has anyone implemented a virtual variable geometry telescope using
only a CCD attached to a normal non variable telescope?

It would work like this:

Take extremely short duration images from the CCD at a frequency
faster than the frequency of atmospheric distortion (1/60 sec I have
read is the minimal needed timeslice for physically corecting
atmospheric distortion in real time so maybe an exposure of 1/120 sec
would be short enough).

Choose via computer a high contrast image as a reference image.

Continue to take rapid short duration images and keep only the high
contrast ones with that have minimal displacement/offset from the
reference image.

Sum each of those acceptable images to a storage that will become the
final image.

What you should end up with is a final image that has minimal
atmospheric based distortion because all the low contrast and non
matching images will have been discarded.

Obviously you build an image over a longer period of time than with
real time optical correction but at perhaps lower cost.

Anyone know whether this has been proposed/done or researched?

--
George Watson

The opinions expressed here are those of the fingers
of George Watson only; not those of George Watson himself.

Please reply via this newsgroup. No Email unless requested,
Thanks.



coherent averaging (1)

Ixlr8 (63315) | more than 7 years ago | (#20461481)

The technique resembles coherent averaging: you know there is a still image which is degraded by atmospheric noise. By sampling over and over again and averaging, the noise (only Gaussian!) is removed. It goes with the square root of the number of samples used: i.e. for each quadrupling of samples, half of the noise remains.

Of course there are other sources of error whose contributions also determine your detection limit.