
Robotic Camera Extension Takes Gigapixel Photos

timothy posted more than 5 years ago | from the ale-and-hugin-together-robotically dept.

Robotics 102

schliz writes "Scientists at Carnegie Mellon University have developed a device that lets a standard digital camera take pictures with a resolution of 1-gigapixel (1,000-megapixels). The Gigapan is a robotic arm that takes multiple pictures of the same scene and blends them into a single image. The resulting picture can be expanded to show incredible detail."
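For scale, a rough back-of-the-envelope calculation (my own numbers, not from the article): how many frames a fixed-megapixel camera needs to tile a gigapixel mosaic, given some overlap between neighbouring shots for stitching.

```python
import math

def frames_needed(target_px, frame_px, overlap=0.3):
    """Frames needed to tile target_px pixels, when each frame contributes
    only its non-overlapping fraction (overlap applies in each dimension,
    so the effective area per frame scales by (1 - overlap)**2)."""
    effective = frame_px * (1 - overlap) ** 2
    return math.ceil(target_px / effective)

# A 1-gigapixel mosaic from a 5-megapixel point-and-shoot,
# with 30% overlap between neighbouring frames:
print(frames_needed(1_000_000_000, 5_000_000))  # 409
```

So a few hundred exposures per panorama, which is exactly the kind of tedium a robotic arm is good for.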

102 comments

Not so novel (5, Interesting)

Anonymous Coward | more than 5 years ago | (#23446944)

Seth Teller at MIT EE was doing this 8 years ago. Check out his City project.

Re:Not so novel (2, Insightful)

2.7182 (819680) | more than 5 years ago | (#23446996)

How true. Another example of people in computer vision with no ideas stealing from people who don't sell themselves well enough. What a sad subject computer vision has become. See Adam Kropp's paper with Seth Teller here on spherical mosaics. [mit.edu]

Re:Not so novel (1, Funny)

The End Of Days (1243248) | more than 5 years ago | (#23447186)

Yeah, it's a real shame that people don't take the time to find out every single idea anyone has ever had so they don't duplicate things. What a sad world.

Re:Not so novel (4, Insightful)

2.7182 (819680) | more than 5 years ago | (#23447226)

Well, it's a good idea to check before submitting for a patent, since there is prior art.

Also, the MIT work is well known to anyone in this area. It's not that hard to google some of the keywords and get the MIT page. The CMU people either knew and ignored it, or they simply didn't do what most of the scientists at their institution usually do, which is read the standard conference papers in computer vision, and browse the web (just a little!!). It's not as if the MIT work was published in some obscure place.

Re:Not so novel (1, Funny)

yfarren (159985) | more than 5 years ago | (#23447628)

How does this get modded Insightful when it is nothing but a troll? Seriously, mods, the article doesn't say anything about a patent. This is just some guy trolling, trying to start an argument about something wholly unrelated to the use/interest of using a regular camera to get a higher-resolution photo. No one but the poster mentioned patents.

That's a troll. Look at it, see its warts and throw it back under the bridge it came from.

Re:Not so novel (1)

yfarren (159985) | more than 5 years ago | (#23447766)

Anyone know if there is a way to check if the first guy who modded GP Insightful was the same guy who modded (my own) parent "Troll"?

Re:Not so novel (0)

Anonymous Coward | more than 5 years ago | (#23447934)

Your post has even less to do with the topic, so I believe your own definition of trolling would fit it nicely.

Re:Not so novel (0)

Anonymous Coward | more than 5 years ago | (#23447984)

Considering I was the one who modded your post Troll (logged in, of course), I can assure you that it wasn't.

But you're an obvious Troll. You're just trying to start some argument about the Slashdot moderating community which is wholly unrelated to the use/interest of using a regular camera to get a higher resolution photo.

I looked at it, saw its warts, and threw it back under the bridge it came from.

Re:Not so novel (1)

davolfman (1245316) | more than 5 years ago | (#23448316)

I thought the original was scans from air reconnaissance film, not stitched digital. Or am I thinking of yet another giant picture project?

Re:Not so novel (4, Funny)

Clover_Kicker (20761) | more than 5 years ago | (#23448636)

You know what they say, a month in the lab can often save an afternoon in the library.

Re:Not so novel (0)

Anonymous Coward | more than 5 years ago | (#23451272)

However, at the end of the month you've got a better understanding of the possibilities and limitations of the techniques, and some working hardware or software to build on.

Re:Not so novel (1)

iamhassi (659463) | more than 5 years ago | (#23449192)

"it's a real shame that people don't take the time to find out every single idea anyone has ever had so they don't duplicate things"

I know! It's not like you can just do a "search" of the "internet" for words like "1 gigapixel" and get results! [google.com].

say... that would be useful, if there was a website where you could search the entire internet with just a few keywords... someone should invent that! I'd use it every day!



but seriously, this isn't newsworthy at all. It would have been five years ago, when 6-megapixel cameras were $1,000+, but now I can buy a 10-megapixel camera from Walmart for $150 [walmart.com]. Give me $15,000 and I can do the same thing.

Re:Not so novel (1)

Pemdas (33265) | more than 5 years ago | (#23455474)

Strangely, people seem to be focused on the "OMG it's something that people know how to do already" and missed out on the price point. Yeah, given $15,000, I could develop something that does this by putting together panotools and a commercially available robot arm; that's not the point. The point is, it costs under $300, and that's pretty cool.

Ironically, the people claiming these researchers haven't done their homework seem to have not bothered to look into what claims to novelty are being made, something which is pretty trivial to do. Yes, this builds on a large body of work in photo stitching and superresolution. This is a problem because...?

AFAICT, no single part of this system is radically new or different, but as a whole, it's a pretty neat new thing. It's an interesting, integrated technology that's being made available to the average Joe.

They seem to be the same people that developed the Qwerk, which is a low-cost Linux PC with integrated hardware for controlling robots. Same MO: cheap, integrated stuff to push robotic technology to the larger community.

So, please, get off your high horses, people, and stop accusing people of laziness or dishonesty without first giving some modicum of effort to understanding what it is you're talking about.

Re:Not so novel (0)

Anonymous Coward | more than 5 years ago | (#23449762)

I am acquainted with Randy Sargent, the head of this project, and I have to say he is a really smart guy. He built a cool vision system and has done some nice robotics work. He was in industry for a while, though, and may not have been aware of this. I think he is an honest guy. That being said, there are A LOT of people working on this project, and someone must have known.

Re:Not so novel (2, Interesting)

GiMP (10923) | more than 5 years ago | (#23447120)

I believe that Steve Mann [wikipedia.org] of wearable computing fame was the first to create an algorithm for photo stitching [wearcam.org].

Re:Not so novel (0)

Anonymous Coward | more than 5 years ago | (#23447182)

That's insane. Photo-stitching has been done for as long as cameras have been around, from analog to digital images, and at almost no point can one person be said to have made a significant scientific change in the idea. It's pretty obvious. Kind of like one-click.

Guys like Steve Mann think of something obvious and are so convinced that they are god's gift to the world that they claim credit for it themselves.

We Did It in 1990 (5, Interesting)

Doc Ruby (173196) | more than 5 years ago | (#23447368)

I worked for a SF area startup in 1990 that produced and sold cameras for "digital prepress" [accessmylibrary.com] (later called "desktop publishing", and now just "publishing" ;) that had the highest resolution around, to compete with drum scanners [wikipedia.org] that were then the expensive industry standard equipment.

We took a 512x512 Hitachi video sensor with a 2x2 C-M/Y-K mask repeated over it, for initial 1Kx1Kx40bit images that we derived from DSP on the intensity of the color-masked pixels. Then we physically stepped the sensor through 8x8 subpixel shifts, subsampling each pixel 64x. We ran the resulting 320MB raw composite files through a bank of multiple 25MFLOPS DSPs (interconnected and logic-accelerated by a fat FPGA) to produce 4Kx4Kx36bit 72MB files. In 1990 that was an awesome achievement.
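The subpixel-stepping idea generalizes to the shift-and-add superresolution trick. Here's a toy numpy sketch of just that interleaving step (a 2x2 stepping grid instead of the 8x8 described, and none of the color-mask demosaicing or DSP filtering the real pipeline did):

```python
import numpy as np

def shift_and_add(frames, factor):
    """Naive shift-and-add superresolution: each low-res frame was
    captured with a known subpixel offset (dy, dx), in units of
    1/factor of a pixel; interleave the samples onto a finer grid."""
    h, w = frames[0][1].shape
    hi = np.zeros((h * factor, w * factor))
    for (dy, dx), img in frames:
        hi[dy::factor, dx::factor] = img
    return hi

# Toy 2x2 sensor stepped through a 2x2 subpixel grid (4 frames),
# mimicking the 8x8 stepping described above at a smaller scale.
# Each frame is filled with a marker value so we can see where it lands.
frames = [((dy, dx), np.full((2, 2), 10 * dy + dx))
          for dy in range(2) for dx in range(2)]
hi = shift_and_add(frames, 2)
print(hi.shape)  # (4, 4)
```

The stepping turns four 2x2 frames into one 4x4 grid, the same way 64 shifted 512x512 frames become a much denser image.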

We poured dramatic engineering work into that platform, which replaced a $150K drum scanner with a $30K PC (on DOS or Win3.0, or plus optional $5K Mac with its GUI including Photoshop 1.0). We had to deal with DSP for micropositioning the video sensor quickly (using feedback data from a laser/interferometer), with new color spaces (I was part of the JPEG org that produced the image format), with custom interconnects at blazing bandwidth, with parallel multiprocessing at then-supercomputer speeds written in C on DOS, and even with the physics of the light variably distorted by turbulence in the air between the camera and scanned slides, heated by the hot lights necessary for exposures fast enough to allow 64 frames and rescan before the sensor wiggled.

All for a 16Mpxl camera that's now beaten by big sensors on handheld consumer devices for under $2K (in 2008, not 1990, dollars). But I can proudly say that we beat them by almost 20 years.

Re:We Did It in 1990 (0)

Anonymous Coward | more than 5 years ago | (#23447594)

Yeah, but Kropp created an omnidirectional image in an automated fashion using a robot arm.

Re:We Did It in 1990 (1)

ettlz (639203) | more than 5 years ago | (#23447714)

I must ask... did you test it on Lenna?

Re:We Did It in 1990 (2, Interesting)

Doc Ruby (173196) | more than 5 years ago | (#23449770)

Not really, though we did have a Lenna slide kicking around. We primarily used a Kodak test slide of color bars/wheels and greyscale gradients, and one image of a European/American looking blonde on Kodak slide and one image of a young Japanese looking woman on Fuji slide. We had different colorspaces for US/Europe and Japan, because Fuji film had a larger green dynamic range supposedly because Japanese people have more acute green-band vision (though I've never independently verified that).

Once the camera was initially calibrated we'd use a test target to test how well I'd calibrated the film recorder. We printed a slide of Lenna, then scanned and reprinted it a few times through our DSP convergence algorithm, adjusting the film recorder's colorspace instead of the camera. Then we doubled a Lenna scan/print over a purely photographic repro of Lenna and scanned that subtractive image, calibrating the camera until it converged.

Re:We Did It in 1990 (1)

rastoboy29 (807168) | more than 5 years ago | (#23450412)

That's really cool.  Can I ask you a question?  Not knowing anything about that industry--what was it your machine took pictures _of_?

Re:We Did It in 1990 (1)

Doc Ruby (173196) | more than 5 years ago | (#23453346)

Mostly we scanned photos printed from film, or directly scanned the film slides themselves (two different kinds of film, different scanning params). The scans were used to import the photos into magazines and newspapers. At the time (1990) there were no hirez portable digital cameras for direct digital photography of suitable quality for publishing. They used these $150K+ drum scanners, which were whirling cylinders with the photos taped on, scanned in successive lines with an A/D sensor head (like a printer in reverse), which we replaced with something that could be lit and positioned like a real camera, with real Nikon 35mm lenses, put on a tripod, direct digitization of a live scene (though its exposures took too long for anything but a "still life").

We did also sell some to some corporation that claimed to be using them to aerially map agriculture, because we had such excellent discrimination among minute shades of green ("Fuji mode" dedicated something like 16bits of 36 per pixel to green, not the 8bits of the standard 24bit color). I think they were monitoring drug farms, ID'ing different crops by their color signature, because we didn't market to that industry and we never heard from them again after they took delivery - not even for support or upgrades. But I was impressed that our engineering was so robust that the micropositioning of the sensor tested OK for stability even mounted in an aircraft, due to the laser/interferometer feedback DSP. And we got to write some early tile stitching 2D DSP algorithms, that I always wanted to port to the FPGA where they were a more natural fit. But I think that they just flew film cameras, possibly 70mm movie cameras for analog oversampling - our camera was uniquely qualified to use some simple optics to scan the 70mm frames into our 35mm image plane - then scanned the frames later. They should have bought the film recorder SW I made, because that device would have scanned a filmstrip, with colorspace conversions etc.

The main guy designing it was a true genius, the secondary guy working the nightshift was one of the cleverest and most enthusiastic hackers (in any engineering medium) I've ever met, and the engineer in charge was really a mathematician working with these brand-new DSPs which finally put linear algebra onto a cheap chip fast enough for realtime Fourier analyses. I was lucky to be the kid hired with few skills and no engineering "discipline", who these pros just sent careening around the lab cross-pollinating each of those main brains by asking dumb questions and trying to get the whole technical vision into my unimprinted brain from its various parts, which forced them to sync to each other without battling each other's egos. A lot of the projects we worked on were entirely new models for programming, especially in the multiproc interops and data distribution strategies far beyond the digital cameras that were 20 years ahead of their time, but all focused on delivering a working product, so they were reality-driven though visionary. Some new tech, like the IBM/Toshiba/Sony CellBE uP, puts on silicon what we breadboarded back then, but still lacks the SW platforms and tools to make it programmable. And also some of the HW integration we made work with discrete components at relatively slow (MHz, not GHz) clocks is still waiting to be either integrated in a single device or even just glued together on a single fast interconnect.

Maybe someday I'll start up my own version of that skunkworks, if there's still some of the old vision left to implement. But I'll need some kid who's willing to work the dayshift and the nightshift to keep up with the big brains they're keeping synced.

Sorry for the long brag, but remembering the pictures released the whole movie.

Re:Not so novel (1)

sjs132 (631745) | more than 5 years ago | (#23447748)

There is also a program that you can do this with... You take a series of pics, and it adjusts the focal point as it puts them all together to get super fine detail out of a "normal" camera. Lots of folks use it for pictures of their mineral and rock specimens... The name of the program escapes me at the moment; you may find more info at www.mindat.org in the forums and/or info on the pictures if you look up a mineral and check out some of the cool pics.

I like the folks that do this, it makes for some super wallpaper. :)

Re:Not so novel (0)

Anonymous Coward | more than 5 years ago | (#23448130)

IIRC the human eye makes rapid, almost imperceptible movements too (no, I'm not referring to REM).
It's probably an advantage, since it makes it easier to determine contours and probably helps with resolution too. So really not a novel idea. :D

ALE (4, Interesting)

pipatron (966506) | more than 5 years ago | (#23446988)

Also check out Anti-Lameness Engine, http://auricle.dyndns.org/ALE/ [dyndns.org] which does exactly the same thing, but you have to provide your own arm.

Re:ALE..aint seen nuthin yet (0)

Anonymous Coward | more than 5 years ago | (#23450304)

Watch out, new ultra high rez pix of Paris Hilton are headed straight at ya! LOL

Any superresolution software for average Joe? (3, Interesting)

grimJester (890090) | more than 5 years ago | (#23447024)

Is there any superresolution software good enough that I could, for example, take twenty blurry pics with my phone and merge them to a single sharp one?

Re:Any superresolution software for average Joe? (5, Informative)

Sitnalta (1051230) | more than 5 years ago | (#23447110)

Yep. Open Adobe Photoshop and go to File -> Automate -> Photomerge. Then simply point it to the folder containing your photo array.

Re:Easter Eggs and bunnies (1)

Technician (215283) | more than 5 years ago | (#23448794)

Follow the link in the article to the photo. Scroll down to the other photos. Look at the Mad Hatter's photo. At first it doesn't look like much, but it has been photoshopped to include lots of hidden stuff. If you have trouble, zoom in on the sidewalk cracks to get a start. There are at least 2 bunnies in each sidewalk joint. Have fun.

I found the egg in the basket with bunnies painted on it. I still need to find the purple bunny.

Re:Easter Eggs and bunnies (1)

sumdumass (711423) | more than 5 years ago | (#23451630)

I can find all of them but the camouflaged bunny. The closest I can come is the camo-colored fabric in the bottom right-hand corner saying something about the first 20 emails being entered into a drawing for an Easter gift. I'm sure it's too late for that, but I can't find the camo bunny. BTW, I can tell you where the purple bunny is.

Re:Any superresolution software for average Joe? (1)

loraksus (171574) | more than 5 years ago | (#23450280)

Or, instead of paying $700 for the latest version of Photoshop, use Autostitch [cs.ubc.ca].
It's free and not half bad (ILM even uses it).

Re:Any superresolution software for average Joe? (1)

Phat_Tony (661117) | more than 5 years ago | (#23454618)

Autostitch looks interesting for stitching, but unless you used your phone cam with forethought of stitching a panorama, and shot your subject with a dozen overlapping closeups carefully arranged to cover your intended field for panorama stitching, it's not going to help here. I take it the grandparent poster has a bunch of pictures of essentially the same composition, that are all blurry. Photoshop will auto-align these for you, but adjusting a bunch of aligned blurry pictures to increase the apparent resolution isn't so easy.

Look at a program called PhotoAcute [photoacute.com], which is geared specifically toward stacking images with identical framing to increase clarity: by pixel averaging to reduce noise, by depth-of-field stacking to reduce out-of-focus blurriness, or both.

Re:Any superresolution software for average Joe? (1)

darenw (74015) | more than 5 years ago | (#23448730)

There's the "drizzling" technique in astronomy. I don't have my notes on it handy, but Google it and you will find it.

Re:Any superresolution software for average Joe? (1)

Fishbulb (32296) | more than 5 years ago | (#23450410)

Yes, Hugin [sourceforge.net].

Of other interest is the PanoTools Wiki [panotools.org].

Note however, that you can't make cake from crap. 'Garbage in, garbage out' as the saying goes. The whole concept of a camera on your phone, to me, is like having a television on your fridge.

Re:Any superresolution software for average Joe? (1)

Ceriel Nosforit (682174) | more than 5 years ago | (#23451400)

It's called image stacking, and is used a lot in astronomy. There exists freeware, or open source, which will do it for you.

Re:Any superresolution software for average Joe? (1)

4D6963 (933028) | more than 5 years ago | (#23453110)

Is there any superresolution software good enough that I could, for example, take twenty blurry pics with my phone and merge them to a single sharp one?
I don't know what pics made by your cell phone look like, but I'd say they probably don't have a lot of aliasing going on, which, if I'm not mistaken, is necessary to apply superresolution techniques. The article, and even CMU's page on the topic, is very light on details, but it seems to be more about panoramas than true superresolution techniques, that is, recovering aliased high-frequency components by comparing many aliased images.

So in that sense, sure, you can make panoramas out of your blurry cell phone pictures, but without an optical zoom you won't get a much better resolution.

But can't you infinitely zoom into a normal photo? (5, Funny)

CRCulver (715279) | more than 5 years ago | (#23447040)

After all, they do it all the time on CSI.

Re:But can't you infinitely zoom into a normal pho (1)

tomhudson (43916) | more than 5 years ago | (#23447098)

You need to load the AgentDeckard module into your BladeRunnerCam.

Wow (2, Insightful)

Sitnalta (1051230) | more than 5 years ago | (#23447062)

So, basically it can do the exact same thing as Photoshop, except with the added expense and complications of a robotic arm. Way to go, Carnegie Mellon.

Re:Wow (2)

ColdWetDog (752185) | more than 5 years ago | (#23447202)

So, basically it can do the exact same thing as Photoshop, except with the added expense and complications of a robotic arm. Way to go, Carnegie Mellon.

Which is rather ironic, since the Photomerge routine in Photoshop CS3 is quite adept at taking multiple hand-shot images and stitching them together. Traditionally, photographers have used leveling tripods and paid careful attention to exposures. While this can lead to better results than a hand shot and stitch, the latter is awfully good. The intelligence of the Photomerge algorithm (and other standalone products) is so good that you do not need to have the images precisely registered in space. Using a leveling tripod is not particularly difficult, so I don't see where the "robot" helps.

And Gigapixel images? A bit of overkill if you want to show them to "everyone" (i.e. on the Web). Nothing to see here, move along.

Re:Wow (1)

FrenchSilk (847696) | more than 5 years ago | (#23447376)

Well, as I said in my earlier post, it is next to impossible to do this without a robot if you are using a long lens to take a large array of photographs that are closely spaced. And as to viewing gigapixel images, there are numerous ways to do it. You can use Zoomify, as in this image: http://www.donfrenchphotography.com/Zoomify/HalfDome2D.htm [donfrenchphotography.com] where you can zoom in close enough to see people standing on top of Half Dome. Or go to the gigapan site (http://www.gigapan.org) to see many examples of gigapixel images.

Re:Wow (0)

Anonymous Coward | more than 5 years ago | (#23447302)

The big difference is that it is next to impossible to hand-hold a camera with a 400 mm lens, for example, and take a 2 dimensional array of photographs that are displaced from one another by a few degrees. The robot, on the other hand, can move the camera just the right amount both horizontally and vertically.
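The step sizes involved are easy to put numbers on (hypothetical figures of my own, using the standard rectilinear field-of-view formula, not anything from TFA):

```python
import math

def fov_deg(sensor_mm, focal_mm):
    """Angular field of view for a given sensor dimension and focal length,
    using the rectilinear formula fov = 2 * atan(sensor / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Horizontal FOV of a 400 mm lens on a full-frame (36 mm wide) sensor,
# and the pan step that keeps ~30% overlap between neighbouring frames:
h = fov_deg(36, 400)
step = h * 0.7
print(round(h, 2), round(step, 2))  # 5.15 3.61
```

A pan step of about 3.6 degrees, repeated precisely across a 2D grid, is exactly the kind of motion that's miserable by hand and trivial for a robot.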

Re:Wow (1)

solitas (916005) | more than 5 years ago | (#23450240)

You don't want to move the camera horizontally & vertically - more like PIVOT it around the focal point of the lens so you don't get any perspective offsets (i.e. what your two, or more, eyes do to get binocular/multiocular vision).

1000 megapixel? (4, Funny)

duguk (589689) | more than 5 years ago | (#23447084)

From TFS:

take pictures with a resolution of 1-gigapixel (1,000-megapixels)

Where's the other 24 megapixels? ;)

caveat: yes, i know. don't start. it was a joke. don't link to wikipedia to explain, besides; xkcd explained it better. [xkcd.com]

Re:1000 megapixel? (0)

Anonymous Coward | more than 5 years ago | (#23447838)

But 2008 *is* a leap year, so 1 gigapixel = 1000 megapixels. Read your own damn link.

Link? (1)

Patentmat (846401) | more than 5 years ago | (#23447104)

To some sample pictures?

Re:Link? (1)

Jeff321 (695543) | more than 5 years ago | (#23447122)

The article links to a sample located at http://www.gigapan.org/ [gigapan.org]

Re:Link? (1)

penguin king (673171) | more than 5 years ago | (#23448518)

That is actually a fairly awesome photo if you look at it. Given the scope of the photo, the fact that there is enough resolution to zoom in far enough to read the time on the clock (on that little tower in the upper corner) and also make out the individual Roman numerals on its face is fairly cool! In fact you can also see into the individual windows... hrmm, privacy debate anyone :p

Some Links (3, Informative)

Rufus211 (221883) | more than 5 years ago | (#23447128)

This is a pretty cool project, and I actually saw it when I was at CMU a bit ago (and was wondering what the hell it was).

There's a CMU press release [cmu.edu] about it.
The site with all the pictures is http://www.gigapan.org/ [gigapan.org]
You can see the hardware here [charmedlabs.com].

The only problem with this, and any other multi-picture stitching, is that you get obvious stitching problems when there is any movement in the scene, like the trolley in the middle of this scene [gigapan.org].

Movement indeed (1)

Fortran IV (737299) | more than 5 years ago | (#23447232)

For some real vehicle distortion, remember Scanner Photography? [awardspace.com] Michael Golembewski built high-res cameras out of flatbed scanners; the model he described on this site took a 115-megapixel image--with each exposure.

Re:Some Links (0)

Anonymous Coward | more than 5 years ago | (#23447458)

Followed the link, and the link to the South Bank picture...see those legs under the stairway? Interesting...4 shoes.

Re:Some Links (1)

Skapare (16644) | more than 5 years ago | (#23447770)

How about some real images instead of some plug-in program? The web already supports image files, so there is no excuse for forcing flash on people. Once Firefox developers figure out how to display video, then we can finally get rid of flash.

Re:Some Links (0)

Anonymous Coward | more than 5 years ago | (#23449522)

Movement could be prevented in many scenes by taking the photograph N times, and eliminating anything that appears in N photos. You would also then have the opportunity to remove things which ruin a shot, or keep things which make it great.
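That's essentially median stacking. A minimal numpy sketch of the idea (toy data, assuming the frames are already aligned):

```python
import numpy as np

# Five aligned exposures of the same static scene; a "pedestrian"
# (value 255) walks through a different pixel in each frame.
scene = np.zeros((5, 4, 4))
for i in range(5):
    scene[i, 0, i % 4] = 255

# The per-pixel median keeps the static background and rejects
# anything that appears in only a minority of the frames.
clean = np.median(scene, axis=0)
print(clean.max())  # 0.0
```

Every transient pixel gets voted out, which is why the trick also works for erasing tourists from landmark shots.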

Higher Resolution != Higher Quality (5, Insightful)

Manip (656104) | more than 5 years ago | (#23447168)

I think this just proves that higher resolution doesn't result in a higher quality photo.

If you look at the entire photo it doesn't look any better than a regular photo even if it contains much more information.

For years now there has been a push to larger and larger resolution photos with people often mistaking this with "quality."

All a higher resolution really allows you to do is zoom in more after a certain point. Which is awesome from a photo editing point of view, but for most people unimportant.

What you really want to be focusing on is the lens quality, zoom quality (lol Digital Zoom), noise, and other characteristics of the camera (e.g. ISO rating).

So it is great that they spent lots of time doing this, but it isn't all that interesting to average Joes or even serious photographers. We all really want better quality pictures, not bigger ones.

Re:Higher Resolution != Higher Quality (1)

religious freak (1005821) | more than 5 years ago | (#23447300)

Ain't that the truth? I learned this the hard way when I gave up my wonderful Canon (which was just too old) for a crappy Casio. Went with the higher megapixels without truly educating myself first.

Re:Higher Resolution != Higher Quality (5, Informative)

RichardKaufmann (204326) | more than 5 years ago | (#23447362)

You're confusing three different aspects of quality:

1. Resolution (the number of pixels in an image, here increased by stitching overlapping images)
2. Dynamic range, color fidelity, noise (the quality of a particular pixel). This can be somewhat ameliorated by HDR photography or just averaging identical shots (all with no moving subjects and a sturdy tripod). Google Photomatix for details.
3. Whether the shot is interesting, well composed, in focus, without motion blur, etc. Panorama photography is most interesting for its artistic potential; more pixels is just a delightful side effect.

#1 and #2 can be addressed by money and a willingness to prostrate yourself to the camera gods. #3 requires talent!
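The frame-averaging trick in point 2 is easy to demonstrate with synthetic data: averaging N frames with independent noise cuts the standard deviation by roughly sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)
true = np.full((100, 100), 128.0)

# Sixteen "identical" exposures, each with independent sensor noise:
frames = true + rng.normal(0, 10, size=(16, 100, 100))

single_noise = frames[0].std()
stacked_noise = frames.mean(axis=0).std()
# With N = 16 frames, noise drops by about sqrt(16) = 4x: ~10 -> ~2.5
print(round(single_noise, 1), round(stacked_noise, 1))
```

The same sqrt(N) logic is why astronomers stack dozens of subexposures rather than take one long one.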

And to put a final nail in the megapixel coffin: check out http://www.luminous-landscape.com/essays/Equivalent-Lenses.shtml [luminous-landscape.com] (particularly Nathan Myhrvold's comments) for a discussion of how sensor size and f-stop place an upper bound on resolution irrespective of sensor density. Physics can be a pain sometimes!
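That physical bound can be sketched with the usual Airy-disk numbers (my own rough criterion of two pixels per Airy-disk diameter; the exact cutoff depends on the criterion you pick):

```python
import math

def diffraction_limited_mp(sensor_w_mm, sensor_h_mm, f_number, wavelength_nm=550):
    """Rough upper bound on useful megapixels: the Airy disk diameter
    (~2.44 * lambda * N) sets the smallest resolvable spot, so pixels
    finer than about half that diameter add no real detail."""
    spot_um = 2.44 * (wavelength_nm / 1000) * f_number  # Airy disk, microns
    pixel_um = spot_um / 2                              # ~Nyquist sampling
    px_w = sensor_w_mm * 1000 / pixel_um
    px_h = sensor_h_mm * 1000 / pixel_um
    return px_w * px_h / 1e6

# Full-frame sensor (36 x 24 mm) at f/8, green light:
print(round(diffraction_limited_mp(36, 24, 8)))  # 30
```

So for a single exposure, no amount of sensor density buys you past roughly 30 MP at f/8 on a full-frame sensor; stitching sidesteps the limit by using many exposures, each individually within it.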

Re:Higher Resolution != Higher Quality (2, Insightful)

v1 (525388) | more than 5 years ago | (#23447882)

When you get to the highest resolution the image is terrible. It looks like a muddy smoothing of a blocky JPEG. You can't call this a gigapixel image any more than you can call a 640x480 TIFF expanded to 400,000x250,000 pixels (in all its blocky glory) a gigapixel image.

I was expecting to see good quality all the way down to the highest zoom, something like Google Earth quality for the most part. They don't let you just keep zooming in past the point where the resolution has hit the wall like this does.

They have no business calling this a gigapixel image; everyone who reads that expects that to be the effective resolution. In reality it speaks more to the pixel count than the content.

Re:Higher Resolution != Higher Quality (1)

loraksus (171574) | more than 5 years ago | (#23449974)

At a certain point, air turbulence, haze, lens quality, etc. affect your image, and nothing can avoid that.
What should be done is to limit the zoom to that point (even if the image can technically zoom in further).

Take for example, this pic (it's big - and I sort of hate my webhost, so feel free)
http://vehiclehitech.com/pictures/!%20Photography/2008-02-24-KelownaSkylinePanorama-attempt1.jpg [vehiclehitech.com]
I shot it with a $100 lens, so at 100% it doesn't look too great (the jpeg artifacts don't help either)
But at around 35% (if you're able to easily scale it in your browser), it looks pretty decent, right?

Re:Higher Resolution != Higher Quality (1)

v1 (525388) | more than 5 years ago | (#23452416)

My point, though, is not complaining about the actual quality, but about the advertised quality. That image you linked to is a 46-megapixel image. At your recommended 35% scaling to look clear, it's a 5.6-megapixel image. Take that scaled image, export it, then open it and zoom it back up to 46 megapixels and smooth it, and it looks about as good as the original 46-megapixel image. That illustrates the point that you can make a higher-megapixel image from a lower-megapixel source, but that doesn't improve it, and it's somewhat dishonest to describe it at a resolution that clearly exceeds the actual resolution represented in the image.

I don't believe either of those two examples could honestly be described as more than 6-megapixel images.

That gigapixel camera could be modified to just blow up the image to 10 gigapixels, smooth it, and call it a 10-gigapixel image, and I'd cry fraud.
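The arithmetic behind that 35% figure, using the parent's own numbers: pixel count scales with the square of the linear scale factor.

```python
def scaled_megapixels(mp, linear_scale):
    """Downscaling an image by a linear factor s leaves mp * s**2 pixels."""
    return mp * linear_scale ** 2

# The 46-megapixel panorama viewed at 35% linear scale:
print(round(scaled_megapixels(46, 0.35), 1))  # ~5.6
```

That's why a modest-looking scale factor wipes out most of the advertised pixel count.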

Re:Higher Resolution != Higher Quality (1)

loraksus (171574) | more than 5 years ago | (#23449764)

The thing is... higher resolution - and by that I mean a larger number of pixels AND a lens that can actually give you enough detail to utilize them - will improve pictures.
Too many lenses nowadays are crap: details are blurred and muddy even before you zoom in. It's difficult to get a decently sharp picture, even if you shoot in RAW and do a fair bit of post.

This detail is (imho) important in landscapes, and having this detail is an important part in getting pictures to "pop". Portraits too, but obviously this thing isn't usable for that.

This also has the side effect of getting you a fairly decent, fairly cheap fisheye "lens". Take a number of pictures and you'll have decent FOV and decent quality.
Your entry-level wide-angle for a Canon DSLR (the EF-S 10-22mm) goes for roughly $500, and it's a bit of a crapshoot as to what you get (some on Amazon complain about it, some say it's great).
Moving up in fisheyes is extremely expensive (and there really aren't many choices for DSLRs anyway - at least in the Canon lineup).

Taking a few extra shots and merging them with autostitch (which, btw, is free and a kick ass app) will get you a pretty decent fisheye look for just a couple extra megs and a bit of cpu time.
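For a rough sense of how much field of view a stitched row buys you: the combined horizontal FOV of a row of overlapping frames is the per-shot FOV times the number of shots, minus the overlap between neighbours. A quick sketch (the numbers and function name are just illustrative):

```python
def stitched_fov(shots, fov_per_shot_deg, overlap_deg):
    """Horizontal field of view of one row of overlapping frames, in degrees."""
    return shots * fov_per_shot_deg - (shots - 1) * overlap_deg

# Four frames from a lens with ~63 degrees of horizontal FOV, each
# overlapping its neighbour by 20 degrees, cover a very wide field:
print(stitched_fov(4, 63, 20))  # 192
```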

Re:Higher Resolution != Higher Quality (1)

SoupIsGoodFood_42 (521389) | more than 5 years ago | (#23450024)

No affordable lens around has the resolution I want for landscapes. Photo stitching like this is the only way I can get the images I want.

misuse of word resolution (3, Insightful)

cinnamon colbert (732724) | more than 5 years ago | (#23447214)

To my understanding, resolution refers to a mapping of the object onto the image - resolution is a ratio of 1 cm in the object to x cm in the image.
It has nothing to do with pixels per image, although with more pixels you can cover more of the object at the same resolution.

Re:misuse of word resolution (0)

Anonymous Coward | more than 5 years ago | (#23447332)

No, what you're talking about is the "scale" of the image.

Re:misuse of word resolution (2, Informative)

vandoravp (709954) | more than 5 years ago | (#23450140)

Sort of. Resolution is the fineness of information that can be resolved given a fixed size. Keep in mind that number of pixels is not exactly image size, which can be described in cm or inches depending on the context. For example, a 10in by 10in image at 300dpi will have a much higher density of information when compared to a 10in by 10in image at 72dpi. So, these gigapixel images are potentially much higher resolution, depending on the physical dimensions it is presented at. Compared to most graphics on the web, these images are insanely high resolution. However, if they were printed out at 300dpi, their resolution would be the same as any typical printout, except the prints would be physically gigantic. You are right though that "pictures with a resolution of 1-gigapixel" is inaccurate, so far as it is missing scale, as noted by the sib post, though it is implicitly compared to typical photo scales.
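To make the pixels-versus-scale distinction concrete: pixel dimensions only become a resolution (dpi) once you fix a physical output size, and vice versa. A small sketch (the function name is mine, just for illustration):

```python
def print_size_inches(width_px, height_px, dpi):
    """Physical print dimensions, in inches, for a pixel count at a given dpi."""
    return width_px / dpi, height_px / dpi

# A ~1-gigapixel panorama (43000 x 22000 px) printed at a typical 300 dpi
# comes out physically gigantic:
w, h = print_size_inches(43000, 22000, 300)
print(f"{w:.0f} x {h:.0f} inches")  # 143 x 73 inches
```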

Big Deal! (4, Funny)

Evildonald (983517) | more than 5 years ago | (#23447290)

I've been doing this for years with a film camera and then sticky-taping all the photos together. Then when I want to "zoom in", I just move my head closer to the picture.

Another entry into the market is always welcome (4, Informative)

AsmordeanX (615669) | more than 5 years ago | (#23447328)

That's not really all that new. Motorized panorama heads have been around for a long time. People have even built them from Lego Technic.

As an avid pano/gigapixel photographer myself, I'm interested in any new entry into the excessively priced head market. I'm using a Kaidan QuickPan Pro that cost me $400 a few years ago. An automated system would be very nice, but the cost is usually horrific. I've even had a head custom-built at one point.

As for the use, I like to take big pictures. I have a 6ft x 3ft print hanging on my wall. The print is 400dpi, taken from a 43000x22000 image (just shy of 1 gigapixel). People see the picture and say it looks nice, then walk a little closer, and closer, and closer. Pretty soon they are standing 4" away, excitedly reading the serial number on the front of a train car that is only 2" across on the print.

Re:Another entry into the market is always welcome (1)

v1 (525388) | more than 5 years ago | (#23447894)

what kind of a printer does it take to print a high res monster like that?

Though most real people are unlikely to afford such a camera, is there any way to borrow/lease/rent one?

Re:Another entry into the market is always welcome (2, Informative)

Skapare (16644) | more than 5 years ago | (#23448666)

what kind of a printer does it take to print a high res monster like that?

This kind [epson.co.uk].

Re:Another entry into the market is always welcome (1)

AsmordeanX (615669) | more than 5 years ago | (#23448752)

For the camera I use a Canon 300D (recently upgraded to a 450D) and a 300mm lens. I've seen people do mosaics with a 600L lens to create a virtual 50mm lens whose output would crash most PCs just trying to load it, let alone actually stitch it.

I love PTGui

Re:Another entry into the market is always welcome (1)

Skapare (16644) | more than 5 years ago | (#23448832)

Has Canon finally added auto-bracketing in the 450D?

Re:Another entry into the market is always welcome (1)

SoupIsGoodFood_42 (521389) | more than 5 years ago | (#23450136)

It's not fully automatic -- you still need to fire the shutter 3 times, but otherwise, yes. At least on the 400D.

Re:Another entry into the market is always welcome (0)

Anonymous Coward | more than 5 years ago | (#23451466)

I used to set the shutter to serial shooting when I used automatic bracketing on my old 400D. Then I could just press and hold the shutter while the camera took three bracketed photos. No need to fire the shutter thrice.

Re:Another entry into the market is always welcome (1)

Skapare (16644) | more than 5 years ago | (#23451772)

I want to minimize any vibration in taking the shots. Once the mirror is up, what I want is for it to just do all the bracketed shots. I could use a shutter cable or whatever. This will be on a tripod. I just want those shots to all be framed as close alike as possible.

Re:Another entry into the market is always welcome (1)

niceone (992278) | more than 5 years ago | (#23452516)

On my 350d if you set it to continuous shooting and AEB it takes all three shots when you press the shutter button once. But it does put the mirror down between shots, and setting the Mirror Lockup custom function doesn't seem to work well with AEB - there's no way to avoid the mirror going back down between shots.

That's great for still life (1)

davidwr (791652) | more than 5 years ago | (#23447342)

It doesn't work well for action shots though. Well, I guess it could if you eliminated the "single camera" requirement.

Besides, once we get to 10,000 dpi at about 12 bits per color, we will be as sharp as film. A 150-megapixel, normal-sized, 36-bit camera is probably 4-6 years away in the sub-$500 consumer market, sooner in the professional market.

Of course, for normal consumer 4x6 prints with no cropping, you don't need nearly that level of detail.
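The 4x6 claim checks out with simple arithmetic: at a photo-quality 300 dpi, a borderless 4x6 print needs only about 2 megapixels. A quick sketch (the function name is just for illustration):

```python
def megapixels_for_print(width_in, height_in, dpi):
    """Megapixels needed to fill a print of the given size at the given dpi."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

# A 6x4-inch print at 300 dpi is 1800 x 1200 pixels:
print(megapixels_for_print(6, 4, 300))  # 2.16
```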

point it (0)

Anonymous Coward | more than 5 years ago | (#23447520)

right at Uma Thurman, please.

This and tourist remover... (1, Interesting)

Anonymous Coward | more than 5 years ago | (#23447602)

Combine this with tourist remover http://www.snapmania.com/info/en/trm/ (take several pictures of the same scene, and only use the bits which are stationary).

That would make large pics without the motion distortion.

pics or didn't happen (0)

Anonymous Coward | more than 5 years ago | (#23448046)

I can't wait to download a ~3gb sample image.

Peeping tom... (0)

Anonymous Coward | more than 5 years ago | (#23448172)

There is a guy masturbating in one of the windows when fully zoomed in. In the dormitory top floor, at the first (left) 3 windows across, it is the right most window.

I guess gigapixel photos are good for something...

You can read the magazine covers... (2, Funny)

dogdick (1290032) | more than 5 years ago | (#23448386)

thats pretty amazing...
There is also a woman blowing a guy in one of the windows.

Redundancy? (1)

PingPongBoy (303994) | more than 5 years ago | (#23448570)

Incredible detail or incredible redundancy?

When I think of detail, I think of zoom. Multiple pictures can help define some fuzzy areas, and assuming the subject doesn't change, correct for atmospheric distortion. However, as the naked eye staring at a distant object can't quite make out what is being seen, a whole bunch of fuzzy snapshots aren't going to give any big confidence improvement.

Aperture synthesis [wikipedia.org] involves simultaneous processing of light to zoom by adding light that is in the same phase [wikipedia.org]. A digital camera cannot match this kind of zooming by redundant images, which basically contain much of the same information, thus eating up a hard drive for no real gain.

I haven't read about the theory behind the digital camera used to make the multiple pictures so this may be what they're doing. If the camera is used to take many pictures over a prolonged time where the angle of the sun gradually varies, a kind of zoom may be achieved. The robot arm is intriguing--if the light source is constant and a static subject is only a few arm's lengths away, a magnified view of sorts might be achievable. I'm skeptical.

Autostitch does it automatically (2, Interesting)

electrostatic (1185487) | more than 5 years ago | (#23449214)

Autostitch [cs.ubc.ca] "is the world's first fully automatic 2D image stitcher." The order in which you take the photos is not important, just that you cover everything and that there is plenty of overlap. You don't have to worry about keeping the camera horizontal -- it will rotate individual shots as needed. And you can ZOOM in on certain shots for more detail. I've used it to merge 154 shots into one panorama. Free.

Autostitch for higher res inside consumer cameras? (1)

ErkDemon (1202789) | more than 5 years ago | (#23457232)

Wow, that looks fun!

So I guess, if you wanted to use multiple images to boost the resolution of a scene past the sensor resolution of your camera, you'd take a stack of near-identical photos and use an editing package to blow them all up by a factor of, say, five or ten, so that each photo shows big square single-colour blocks corresponding to single pixels in the originals. Then, once the software has found the best-fit alignment and rotation for the different images, the overlapping transparent blocks should give you "sub-block" detail, corresponding to sub-pixel detail at the original resolution?

That sounds like an interesting way of taking ultra-high-resolution pictures of small objects without needing hideously expensive lenses and intense lighting. You could even make a custom tripod mount attachment that lets you lock the camera sensor at a fixed distance, and then rotate or slide the camera while you fire off pics.

Ideally, I guess you'd have a "smart" camera with a piezo device that could deliberately waggle the sensor between shots so that you could lock the camera completely and have the camera's own onboard processor fire off a volley of shots and then assemble a higher-res image all by itself (after a wait).

I wonder how long it'll be before this starts turning up on Canon and Sony "consumer" cameras and on cameraphones as a standard feature? It's probably a lot cheaper than improving the optics or the sensors. Downside: you have a moving part to go wrong.

More thoughts... if you were building the thing into the camera itself, I guess you could have the lens tilt so that the image is panned diagonally across the sensor. That'd give guaranteed extra resolution without rotating the image. I don't know whether you'd be able to do this accurately to sub-pixel resolution, or whether you'd just use "noise" to pan diagonally and hope for the best. Or just hope that the photographer has a wobbly tripod! :)
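The sub-pixel idea above can be sketched in miniature. In this idealized 1-D model, each low-resolution exposure samples the scene at a different sub-pixel offset, and interleaving the frames recovers the finer signal; a real camera would also have to estimate the shifts and undo optical blur, which this toy deliberately skips (all names are illustrative):

```python
def downsample(signal, factor, offset):
    """Model a coarse sensor: keep every `factor`-th sample, starting at `offset`."""
    return signal[offset::factor]

def interleave(frames):
    """Merge sub-pixel-shifted low-res frames back into one high-res signal."""
    merged = []
    for samples in zip(*frames):
        merged.extend(samples)
    return merged

scene = [3, 1, 4, 1, 5, 9, 2, 6]        # the "true" high-res scene
frames = [downsample(scene, 2, 0),      # exposure at offset 0
          downsample(scene, 2, 1)]      # exposure shifted by half a coarse pixel
print(interleave(frames))  # [3, 1, 4, 1, 5, 9, 2, 6] -- the scene, recovered
```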

Gigapixel picture != Gigapixel camera (2, Insightful)

SamP2 (1097897) | more than 5 years ago | (#23449520)

True, you can stitch together a lot of component pictures to achieve an arbitrary size and resolution. You can legitimately call the result a gigapixel picture (if it reaches the resolution requirement, of course). But that shouldn't be confused with a gigapixel camera, and in and of itself it is not actually that impressive.

For example, if you take the entire photoset of Google Earth, you'd probably get a few peta-pixels worth of data. Ultimately, it all boils down to how much of that data is needed at any given time. You might need a low-detail, large-area image (e.g. view of Earth as a whole), or a high-detail, small-area image of your backyard. In either case, you wouldn't need more than at most a few dozen megapixels at any given time. It's unlikely anyone ever needs more than that size, whether they are studying galaxies or atoms, because the more detail you need, the less physical area you need covered, and vice versa.

Similar to Microscope Technology (1)

Sir Holo (531007) | more than 5 years ago | (#23450642)

There are microscopes that play similar tricks. We have a couple that will compose images from 3x3 mosaics taken automatically by the camera.

Cameras that automatically do sub-pixel shifts between frames (for resolution) and that do frame-shifts (for large images) are commonly available in the marketplace.

Some others will instead bracket focus and automagically composite an image with a huge apparent depth-of-focus.

13 gigapixels > 1 gigapixel (0)

Anonymous Coward | more than 5 years ago | (#23450924)

http://www.harlem-13-gigapixels.com/

Where not to use your Gigapan (1)

jgs (245596) | more than 5 years ago | (#23452656)

Tangentially related, here's [andycarvin.com] a blog entry by an NPR staffer who was harassed and threatened with arrest for using a Gigapan in Union Station.

NPR reporters almost arrested for using Gigapan (1)

GrahamIX (300998) | more than 5 years ago | (#23455248)

Meanwhile, some NPR reporters using the new gigapan camera were almost arrested for taking pictures with it at Union Station.

www.andycarvin.com [andycarvin.com]

Another symptom of the knee-jerk reaction against anything and everything unfamiliar in the War on Terror.