Software Technology

Photosynth Demo 204

A couple of days ago Microsoft Live Labs released a demo of their new Photosynth software on the web. Photosynth allows the aggregation of social picture networks (a la Flickr) into a completed image in addition to allowing a level of depth to image browsing previously unavailable. There is also a very impressive video of the demo available.
This discussion has been archived. No new comments can be posted.
  • but I couldn't... 30 seconds of ads at the beginning, then the phrase "through an acquisition".

    typical microsoft "innovation"
    • Enh, so it wasn't Microsoft that did the innovative work. It's still a damn impressive demo (although you know what they say about demos...); you're missing out. (Ad was annoying, though.)
      • Video looks cool yes but it will never take off.

        The video only looks cool because their source photos are carefully chosen.
        They didn't send a n00b out to take the photos.
        • Re: (Score:3, Informative)

          by xeromist ( 443780 )
          You must not have seen the whole thing. The cathedral was assembled from images available from the internet taken by hundreds of different people and cameras.
          • Re: (Score:3, Interesting)

            by cheater512 ( 783349 )
            Which were then manually screened to weed out the crap ones.
          • Re: (Score:3, Interesting)

            by SpryGuy ( 206254 )
            A friend of mine asked, "Doesn't that violate about a billion copyrights?"

             I shrugged. Can someone take my photos on Flickr and use them to create new content without my approval?

        • "They didnt send a n00b out to take the photos." They got the photos (they said) by searching for "Notre Dame" on Flikr, including plenty of photos taken by "n00bs".
        • I doubt it'll take off in its current form, but I do wonder why they can get such great performance from the internet, zooming and manipulating huge numbers of photos in 3D, while Aperture (and its light table) is slow and laggy. I think it should be incorporated into other software.
    • Re: (Score:2, Informative)

      by Anonymous Coward
      Better link to the video demo:
      http://www.ted.com/index.php/talks/view/id/129 [ted.com]
      • that's the same crap ad-infested garbage hype video as the one on youtube.
        • mod parent -1 moron (Score:1, Informative)

          by Anonymous Coward
          The link that the GP posted was to the same video, but it splits the presentation into three parts (Advertisement, Enter Seadragon, The Photosynth Experience) that you can easily skip between. The good part starts at about 30 seconds into the clip.
          • Re: (Score:2, Funny)

            by froggero1 ( 848930 )
            ...watched the whole thing now... couldn't find the aforementioned "good part" anywhere.
            • When I were a lad we'd use good old fashioned scissors and sticky tape to make pictures that looked a lot better than this blurry computerized gibberish.
          • Also, the video on the TED site can be enlarged and I believe is higher resolution.


            TED [ted.com] is definitely a site worth visiting anyway, as this presentation is probably among the less interesting ones you can watch there. More people should check it out.

        • by froggero1 (848930) on 13:55 Wednesday 06 June 2007 (#19416443)
          (http://talsma.ca/)

          that's the same crap ad-infested garbage hype video as the one on youtube.
          --
          The only maxim of a free government ought to be to trust no man living with power to endanger the public liberty.
          Thank you for your crap ad-infested comments. Your advertisements (sig and site link) are longer than your comment itself.
           
    • by Bananatree3 ( 872975 ) on Wednesday June 06, 2007 @05:14PM (#19416677)
      I decided to wade through the hype/ads/blah, and came across a really cool piece of software. It takes thousands of Flickr images and stitches them into a 3-dimensional mosaic, all just through software. No special on-site 3D imaging hardware, just a program compiling everyday images of something. It does this through some very advanced image recognition. If you can brave the ads, it IS worth it.
      • Sounds like an application of autostitch [cs.ubc.ca]. The downloadable demo version is pretty neat and fun to play with, if you have overlapping scenery photos, for example.
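        (For a feel of what autostitch-style stitching involves, here is a minimal sketch using OpenCV's high-level stitcher, which does the feature matching, warping and blending internally. The file names are placeholders, and this is only an illustration of the general approach, not autostitch itself.)

```python
# Sketch: stitch overlapping photos into a panorama with OpenCV's built-in
# stitcher (feature detection, matching, warping and blending under the hood).
# The input file names are placeholders.
import cv2

images = [cv2.imread(p) for p in ("left.jpg", "middle.jpg", "right.jpg")]
stitcher = cv2.Stitcher_create()            # PANORAMA mode by default
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed, status code:", status)
```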
        • Re: (Score:3, Informative)

          by spoco2 ( 322835 )
          No, it's far more advanced than that: its recognition can match objects that aren't from the same set of photos, or even photos at all; some can be diagrams or drawings, for example.

          The part that blew me away is the SeaDragon technology behind the image/information scaling portion of things... now that is just incredible... check out a talk/demo at TED in March 2007 by Blaise Aguera y Arcas of Microsoft [videosift.com], just amazing stuff.
      • Re: (Score:3, Insightful)

        by ozbird ( 127571 )
        "Windows XP SP2 and Vista Only

        The Photosynth technology preview runs only on Windows XP SP2 and Windows Vista.

        If you feel you've reached this message in error, you can try anyway."

        Wow, another innovative product from Microsoft.
      • Re: (Score:3, Informative)

        by naoursla ( 99850 )
        Microsoft also worked with the BBC [bbc.co.uk] to produce this collection [live.com] of Photosynths of several well known places in Britain.
        • Quote

          Windows XP SP2 and Vista Only

          The Photosynth technology preview runs only on Windows XP SP2 and Windows Vista.

          If you feel you've reached this message in error, you can try anyway.

          Unquote

          Typical...
          • Re: (Score:3, Interesting)

            by naoursla ( 99850 )
            Yep. I run Windows 2003 Server at work and it doesn't work on that either. I am pretty sure the Photosynth team wants it to run on more platforms. This is still a new product that is barely out of the research stage.
    • by kahei ( 466208 )

      Yes, they funded this innovation by buying equity.

      C'mon, learn how it works. It's the system you live in.
      • Uh, the innovation had already occurred by the time of purchase of equity. "Someone else's equity at the time" is perhaps what you meant, but I don't see MS investing in companies whose research is still at the 'idea' stage.
  • Huh? (Score:3, Funny)

    by DragonWriter ( 970822 ) on Wednesday June 06, 2007 @04:50PM (#19416367)

    Photosynth allows the aggregation of social picture networks (a la Flickr) into a completed image in addition to allowing a level of depth to image browsing previously unavailable.


    That appears to be syntactically tolerable English. Semantically, though, WTF?
    • Re: (Score:3, Funny)

      by Timesprout ( 579035 )
      Let me translate.

      Pretty pictures.
    • Re:Huh? (Score:5, Informative)

      by RealGrouchy ( 943109 ) on Wednesday June 06, 2007 @05:03PM (#19416529)

      Photosynth allows the aggregation of social picture networks (a la Flickr) into a completed image in addition to allowing a level of depth to image browsing previously unavailable.
      That appears to be syntactically tolerable English. Semantically, though, WTF?

      This lets you take all sorts of pictures of your room, and will automatically assemble them into a 3D environment. It will assemble your photos to look like an RPG, instead of a slideshow.

      Using the example in the video...there are hundreds of online collections of people's photos of Notre Dame cathedral. Each photo is of a different part of it, from a slightly different angle.

      This software takes all those different photos and assembles them into a 3D representation of Notre Dame cathedral, where you can look at any of the individual photos.

      In addition, if someone identifies one of the saints in a statue on the cathedral, when you take a photo of it and your photo is added to the collection with the software, your photo will also have that saint identified--thereby enhancing the data contained in your photo.

      - RG>
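      (To make the annotation-transfer idea concrete, here is a minimal, hypothetical sketch using OpenCV: it matches features between an already-labelled photo and a new one, then maps the labelled region across. It treats the facade as roughly planar so a single homography suffices; the file names, box and thresholds are made up, and Photosynth's real pipeline is of course far more sophisticated.)

```python
# Hypothetical sketch of the "tag transfer" idea: if a region is annotated in
# one photo, find the same region in another photo of the same scene and carry
# the label over. File names, box and thresholds are placeholders.
import cv2
import numpy as np

def transfer_annotation(annotated_path, new_path, box):
    """box is (x, y, w, h) around the annotated statue in the first photo."""
    img_a = cv2.imread(annotated_path, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(new_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Ratio-test matching (Lowe's test) between the two photos.
    matches = [m for m, n in cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
               if m.distance < 0.75 * n.distance]
    if len(matches) < 10:
        return None  # not enough overlap to claim they show the same thing

    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Map the annotated box's corners into the new photo.
    x, y, w, h = box
    corners = np.float32([[x, y], [x + w, y],
                          [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H)  # where the same saint appears

# e.g. transfer_annotation("tourist_photo.jpg", "my_photo.jpg", (120, 80, 60, 90))
```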
      • Re:Huh? (Score:4, Interesting)

        by timeOday ( 582209 ) on Wednesday June 06, 2007 @05:18PM (#19416729)
        I don't think this technology has that much to do with social picture networks in particular, and I'm not sure using it to index images is all that compelling. What would be more useful is inputting some images from different angles (or a video) and getting back a .3ds texture-mapped geometric model. Reconstruction of geometry from imagery has been a big research topic for ages, but I'm not aware of any effective, user-friendly software to do it.
        • by kjart ( 941720 )
          I see no reason why you both can't be right.
      • by K8Fan ( 37875 )

        It would be helpful if cameras had GPS location and direction metadata to give the software a starting point.

    • You know what's strange is that I thought the same thing before I watched the video demo, and now having watched it, the sentence makes decent sense.

      WTFV.
      • Re: (Score:3, Insightful)

        by nine-times ( 778537 )

        Yeah, it's pretty decently cool, too. Personally, I thought the magazine and the car ad with highly detailed information "printed" really small was as interesting a concept as anything-- it looked like it might provide a reading experience that would make sense for an online magazine, and the small print bends the concept of your printable space in an interesting way. So long as there are sufficient hints that the tiny text was there, it would allow you to put a lot of information into a small "space".

        Th

  • Can we get an editor who doesn't post/write press releases too? We're geeks, we know about blogs, you can't bullshit us with your PR so quit trying.

    It's insulting when an article like this appears and SCREAMS "We were paid for it".

    Either write like a human being or stop trying to impress us, because you can't do both.
    • Oh that explains it. I was wondering why I watched 10 seconds of a person looking at a large amount of pictures, like it solved something important.
      • Re:press release (Score:4, Insightful)

        by koreth ( 409849 ) * on Wednesday June 06, 2007 @05:10PM (#19416617)
        I don't get the point of that part either, but keep watching. A couple minutes into it he moves on to the real meat of the demo, and it's pretty astonishing. I won't spoil it except to say that if I'd seen it in a sci-fi movie I'd probably have dismissed it as very cool-looking but totally unrealistic.
  • by L. VeGas ( 580015 ) on Wednesday June 06, 2007 @04:59PM (#19416491) Homepage Journal

    Photosynth allows the aggregation of social picture networks (a la Flickr) into a completed image in addition to allowing a level of depth to image browsing previously unavailable.
    Slashdot summary entices the accumulated aggravation of social comment communities (a la Digg) into an aggregated juxtaposition while interspersing levels of irritation heretofore unimaginable
  • by kiwicmc ( 93934 ) on Wednesday June 06, 2007 @05:10PM (#19416619)
    Unlike the first set of posters, I managed to get over my self-importance and watched a couple of seconds of BMW ads to see the actual video.

    I liked the initial viewing of the large quantity of hi-res images and the smooth zoom. The aggregation of many thousands of Flickr images of Notre Dame (including one of a poster on a wall) into a 3-D image was fantastic.

    C
    • Re: (Score:2, Insightful)

      by Threni ( 635302 )
      > The aggregation of many thousands of Flickr images of Notre Dame (including one of a poster on a wall) into a 3-D image was
      > fantastic.

      Yeah, that's got to be running on a bog-standard Vista install, hasn't it. I agree with the guy - I can't think of a better way to read a newspaper than to pan around and zoom in on a huge monitor in my front room. And I can't wait to see what happens to this system when it's attacked by spammers creating fictional spaces. What's to stop people from adding the world
      • Well the fact that DN3D takes place in Los Angeles, and the previous Duke Nukems took place on some kind of weird factory planet, with no ladders and a lot of mysteriously unsupported floating platforms.
    • YouTube videos don't have ads, but you're not the first one to mention this in this thread. Weird. Were you on the other site? Why would Microsoft have ads in their own tech demo?

    • by trawg ( 308495 )

      I liked the initial viewing of the large quantity of hi-res images and the smooth zoom. The aggregation of many thousands of Flickr images of Notre Dame (including one of a poster on a wall) into a 3-D image was fantastic.

      Yeh - but now I'm scratching my head wondering how they do that.

      Don't get me wrong, I'm blown away by the impressiveness of it, but I want to know (even roughly) how it's done. I can't for the life of me figure out how you can take random photos off the internet, throw them into some software, and have it churn out a 3D map based on nothing more than the photos.

      Seriously, it's so awesome that I almost can't believe they really did it. I would love to even just get a vague idea of what they're doing to make

      • Re: (Score:2, Interesting)

        by Korvar ( 937226 )

        Step 1) Get lots of photos of a given subject

        Step 2) Process these photos and find "similar points"

         Step 3) Start correlating points on separate photographs

        With enough points in common on two or more photographs, you can begin to get an idea of the 3D relationship between the points, and also the cameras taking the photographs.

        There are applications that allow you to do Step 2 manually (the clearest example of the process I found was http://www.3dphoto.dk/UK/technique-UK.htm [3dphoto.dk]), but Photosynth appear
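         (For anyone who wants to experiment, below is a minimal sketch of steps 2 and 3 for a single pair of photos, using OpenCV's SIFT features and basic two-view geometry. The file names and the guessed camera intrinsics are placeholders, and a real system like Photosynth solves this jointly over thousands of photos with bundle adjustment rather than one pair at a time.)

```python
# Rough sketch of steps 2 and 3 for just two photos (hypothetical file names,
# assumed camera intrinsics). Not Photosynth's actual pipeline.
import cv2
import numpy as np

img1 = cv2.imread("photo1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo2.jpg", cv2.IMREAD_GRAYSCALE)

# Step 2: detect distinctive points and compute descriptors for them.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Step 3: correlate points between the photographs (Lowe's ratio test).
matches = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# With enough correspondences, recover the relative camera geometry...
K = np.array([[1000.0, 0, img1.shape[1] / 2],   # guessed intrinsics
              [0, 1000.0, img1.shape[0] / 2],
              [0, 0, 1]])
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# ...and triangulate the inlier matches into sparse 3D structure (up to scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inliers = mask.ravel().astype(bool)
pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
cloud = (pts4d[:3] / pts4d[3]).T              # N x 3 point cloud
print(cloud.shape)
```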

  • One step forward! (Score:4, Interesting)

    by Sectrish ( 949413 ) on Wednesday June 06, 2007 @05:13PM (#19416665) Homepage
    At least now someone at Microsoft seems to know _what_ to buy; this is some pretty amazing technology. I just hope that someday it will be available for other OSes too.
    • Re:One step forward! (Score:4, Interesting)

      by evohe80 ( 737760 ) on Wednesday June 06, 2007 @06:19PM (#19417471)
      One thing that amazes me about Microsoft is how, having so many bright people at MS Research, most of their stuff is so bad and/or lacks innovation. (I know part of this came from some other company they bought, but some of it is original from MS; I've read a paper related to this technology.)

      Every single paper I've seen from MS research is great. Well done!

      (from someone developing computer vision on linux)
      • by kjart ( 941720 )

        One thing that amazes me about Microsoft is how, having so many bright people at MS Research, most of their stuff is so bad and/or lacks innovation.

        Seriously? It's the same in any industry. Just look at the 'concept' cars released by major car manufacturers - the actual cars made seldom have more than a passing resemblance to those concepts. Making a sweet prototype is not nearly the same as making something for mass consumption.

    • It's kind of amusing how the original research demo [washington.edu] is in Java, so it runs on anything. The microsoft demo [live.com] of course is Windows XP/Vista only. At least they ported the plugin from ActiveX to work in Firefox.
    • Re:One step forward! (Score:5, Informative)

      by TheTranceFan ( 444476 ) on Wednesday June 06, 2007 @09:53PM (#19419309) Homepage
      Microsoft didn't buy Photosynth. It bought Seadragon. The Photosynth client is indeed built on Seadragon's client, but the idea behind Photosynth (which was a joint University of Washington/Microsoft Research project called PhotoTourism) significantly predated the Seadragon acquisition, and there was a working client. When Microsoft decided to reimplement the client as a technology preview, that's when the Seadragon team and client came into the picture.

      That said, Seadragon's technology is great. It's a fantastically smooth way to browse arbitrarily large images or collections of images, and it was a good acquisition indeed.

      (I was an engineer on the Photosynth team.)
  • by toQDuj ( 806112 ) on Wednesday June 06, 2007 @05:16PM (#19416697) Homepage Journal
    This zoom-ability of the first part has a lot in common with the ideas behind Jef Raskin's The Humane Environment http://en.wikipedia.org/wiki/Archy/ [wikipedia.org].

    The second part, however, shows marvellous stuff. Especially if he did what I think he did: search for patterns in images and compare them across photos to identify unique objects, collecting a library of images of each single object.

    This guy and supposedly his group shouldn't work for Microsoft in my opinion, but would perhaps feel more at home in a fundamental science laboratory. But I think my opinion on this is slightly partial.

    B.
    • Re: (Score:2, Informative)

      The project was demonstrated on the Research Channel at the beginning of the year.

      Microsoft bought out a company that had written the non-3D part of Photosynth, and student(s) at the University of Washington wrote the rest, if I remember correctly. At the time they didn't work for Microsoft.
    • This guy and ... his group shouldn't work for Microsoft

      Someone else pointed out that the actual work was done outside of M$, but I agree that it's a shame they were bought up. Expect this to be crushed instead of landing on your desk.

    • by K8Fan ( 37875 )

      This guy and supposedly his group shouldn't work for Microsoft in my opinion, but would perhaps feel more at home in a fundamental science laboratory.

      Like it or not, Microsoft Research is doing very serious computer science, and has a lot of top people being allowed to really push the envelope.

    • by dave1g ( 680091 )
      Uh, it's not like the poor guy is stuck hacking on Office and Windows.

      http://research.microsoft.com/research/default.aspx [microsoft.com]

      Microsoft does fund real research just like any other research lab.
  • by tygerstripes ( 832644 ) on Wednesday June 06, 2007 @05:39PM (#19417003)
    I can honestly say, without hyperbole, that this is the first time all those promises of what the web can really do - interconnectivity, automatic synaptic contextual linking, user generated content, and god-damned cleverness - have finally come together into something which is un-fucking-believable!!

    All those next-stage, new-wave, super-hyped ideas that generated enough excitement to get a survivable user-base just kind of passed me by, because they only ever seemed to be minor amplifications of what we already had. But this... this is something totally new. And utterly, utterly incredible!

    I'm so excited by this it's making me feel sick! TECHNOLOGY! INTERWEB! I take it all back - forgive me for my lack of faith! I LOVE YOU!

    And by the way, that "content only limited by how many pixels are on the screen" idea has been a long time coming, and I'm deeply happy that someone's solved it. I could never understand why we use raster-imaging for computer games because it's a squillion times quicker than ray-tracing, but nobody had applied the same idea to other applications. Now I feel justified in wondering, and I'm so pleased with the result!

    • by adisakp ( 705706 ) on Wednesday June 06, 2007 @08:10PM (#19418475) Journal
      I could never understand why we use raster-imaging for computer games because it's a squillion times quicker than ray-tracing, but nobody had applied the same idea to other applications.

      I don't think that basic rasterizing engines are the limit. The limit is that the source data for all these pictures are tens or hundreds of gigabytes (and in the future, conceivably terabytes). Somewhere in the assembly and cross-correlation of all this data, they have to be generating LOD's (levels of detail) and dynamically loading / managing MIP-maps to keep the loaded dataset to a reasonable level. This is the hard part since "reasonable level" for loaded imageset size is probably currently a couple hundred megabytes or much less. You can probably load more data into RAM but try maintaining a 60FPS refresh with a gigabyte of textures - especially on a laptop or basic computer.

      Once you've done this you can use a variety of display techniques... the main reason to use basic texture-mapping / flat rasterization is that the sources are photos, which are basically pre-lit "flat" textures.

      However, if you can generate a 3-D model and can separate lighting / color information (perhaps using combinations of day and night pictures or varying lighting from different photographs), it would then be possible to perform simple ray-tracing or other hybrid renderers -- think how cool it would look to have a dynamic artist's sketchpad with these images "penciled" in realtime. There are already high-frame-rate (near-realtime) ray tracing demos out there for Cell and x86 that render moving images at a lower resolution for higher interactive frame rates and then, when not moving, render high-quality image stills that are quite impressive.
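      (As a toy illustration of the LOD/mip-map point: pre-compute a pyramid of progressively halved versions of each photo, then load only the level whose resolution roughly matches the pixels the image actually occupies on screen. A rough sketch with made-up file name and sizes; viewers like Seadragon stream tiles of such pyramids rather than whole levels.)

```python
# Toy LOD sketch: build a mip-style pyramid per image, then pick the coarsest
# level that still covers the image's on-screen footprint, keeping the working
# set small. File name and sizes are placeholders.
import cv2

def build_pyramid(path, min_side=64):
    level = cv2.imread(path)
    pyramid = [level]
    while min(level.shape[:2]) > min_side * 2:
        level = cv2.pyrDown(level)           # halve each dimension
        pyramid.append(level)
    return pyramid                           # [full res, 1/2, 1/4, ...]

def pick_level(pyramid, onscreen_width):
    # Coarsest level that is still at least as wide as its on-screen footprint.
    for img in reversed(pyramid):
        if img.shape[1] >= onscreen_width:
            return img
    return pyramid[0]

pyr = build_pyramid("notre_dame_0001.jpg")
thumb = pick_level(pyr, onscreen_width=180)     # zoomed way out: tiny level
closeup = pick_level(pyr, onscreen_width=1600)  # zoomed in: near full res
```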
    • by trawg ( 308495 )
      This is basically exactly how I felt when I saw Google's StreetView thing. I can't believe it's been trumped by Microsoft. Nice to see them innovating with some really seriously impressive stuff.
      • I've been imagining, and trying to figure out how to do, a combination of the two.
        Have a truck drive around photographing everything, and run the photos through software to generate the 3D model. Now we see - in practically the same week - both parts of that in place. Just string the two together, throw in publicly accessible photos, crunch a few terabytes, and we'll have one of the coolest applications EVAR.
  • by Ided ( 978291 ) on Wednesday June 06, 2007 @05:45PM (#19417101)
    This software is absolutely amazing, especially when you consider the programmatic side of this. People bashing this without actually watching the video AND playing with the operating demo are really missing out. You don't have to like it but at least have a reason that shows some form of intelligence. Not just "the intro was poorly done".
    • This software is absolutely amazing, especially when you consider the programmatic side of this.
      People bashing this without actually watching the video AND playing with the operating demo are really missing out. You don't have to like it but at least have a reason that shows some form of intelligence. Not just "the intro was poorly done".

      "But look, you found the notice didn't you?"

      "Yes," said Arthur, "yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying Beware of the Leopard."

  • by Lord Satri ( 609291 ) <alexandrelerouxNO@SPAMgmail.com> on Wednesday June 06, 2007 @06:06PM (#19417317) Homepage Journal
    Right here [slashdot.org].
  • Just looking at that (Score:3, Interesting)

    by goldcd ( 587052 ) on Wednesday June 06, 2007 @06:15PM (#19417413) Homepage
    rather fabulous demo, I realize that it would tie in beautifully with the surface computing MS showed last week (which was lovely as a tech demo, with little immediate use).
    Vista is 'nice' but it's just a progression of what we already know - these tech demos give me a big warm fuzzy futuristic feeling inside :)
    If nothing else it shows that MS is innovating again (at last) - the ball's back with Apple and Google now - "Make me more impressed!"
  • by Anonymous Coward
    No, the demo is not rigged (and it's about 11 months old).

    The whole thing is based on SIFT keypoints http://www.cs.ubc.ca/~lowe/keypoints/ [cs.ubc.ca]. These are very powerful and work indeed as shown in the video/demo. Check autopano-sift http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ [tu-berlin.de] for a real application using them.

    There is only a little problem: M$ cannot use SIFT commercially. The licence says "for research purposes only" and US Patent 6,711,293, Assignee: The University of British Columbia, protects SIF
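    (For reference, extracting the SIFT keypoints mentioned above takes only a few lines with a recent OpenCV build, where SIFT is available in the main module; the image name below is a placeholder.)

```python
# Sketch: extract and visualize SIFT keypoints with OpenCV.
# The image file name is a placeholder.
import cv2

img = cv2.imread("cathedral.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint has a position, scale and orientation, plus a 128-dimensional
# descriptor, which is what actually gets matched between photos.
print(len(keypoints), descriptors.shape)
out = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("cathedral_keypoints.jpg", out)
```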
  • Data aggregation (Score:4, Interesting)

    by jemenake ( 595948 ) on Wednesday June 06, 2007 @06:25PM (#19417539)
    Near the end of his presentation, the guy sums up the technology as taking all of these separate images from various sources on the net and figuring out how they all interlink to present a larger, more coherent picture. He got applause.

    My first thought was about the U.S. government's "total information awareness" project, where they're trying to take lots of separate pieces of info (which are already available to law enforcement) and interlink them all together to provide a more coherent picture... but most people consider that to be evil.

    Granted, the government isn't doing it with vacation photos, but the idea, of finding pieces of data that are related and finding out *how* they're related, is the same. The difference in people's reaction to it, I can only attribute to the fact that people see the Photosynth guy as good, and the government as evil. But I don't agree that the goodness or evilness of an action is solely determined by the goodness or evilness of who's doing it. The U.S. gov't tries this and fails. It expects that it can invade foreign countries and install friendly governments and torture people because it's "the good guys", yet the Soviet Union did those same things during the Cold War and we admonished them for it because they were "the bad guys".

    So, where am I going with this rant? My point is this: You can't blame somebody for connecting the dots. In fact, that seems to be one of the things that we, as humans, are particularly good at. So, if you think that this photosynth thing is fine, then I think you've got to grant that the TIA project is fine. Now, you could argue that some particular bits of information shouldn't be available, but the piecing it together to form a more coherent picture... I can't come up with an argument against it that I consider defensible. Sure, it makes me uncomfortable, but that's not an "argument".
    • Re: (Score:3, Insightful)

      by dabraun ( 626287 )

      So, if you think that this photosynth thing is fine, then I think you've got to grant that the TIA project is fine.

      Technology is a tool. It is great to use hammers to build houses. It is not great to use hammers to bludgeon people's skulls. In no way does thinking photosynth is fine imply that TIA is fine - the fact that they (may) require the same technology to be possible does not in any way make them morally equivalent.
  • Vast Desktop... (Score:5, Interesting)

    by Slur ( 61510 ) on Wednesday June 06, 2007 @06:43PM (#19417711) Homepage Journal
    Actually, as I looked at the demo, I couldn't help feeling like all that virtual space was looking like a damn nice desktop environment. Nevermind the part of the demo with a flat-on scrolly-zoomy desktop, as nice as that would be (Seems obvious in a way too... And wouldn't it be nice if Leopard had that instead of "Spaces" ?). But imagine the notion of opening up an application and instead of just popping up a new window it creates a new space - within the desktop virtual space - and brings you into it. You can always pull back and move around to another window or workspace, but while in it you'd be totally immersed.

    I dunno, I just like the notion of immersive environments, especially for conceptual learning. I think we're going to see a prevalence of this kind of interface in the near future.
    • Actually, as I looked at the demo, I couldn't help feeling like all that virtual space was looking like a damn nice desktop environment.

      I thought so too. This was a nice example of how you can handle lots of visual data via a limited screen, and one solution I'm very familiar with are virtual desktops. I couldn't stand using a computer without them, as you can easily focus on one task in one desktop. Windows and OS X in comparison look incredibly messy, as they attempt to present all of the computer's capabilities at once. I actually prefer a relatively small screen, and I've come to greatly appreciate the idea that computers let you wor

  • PhotoSynth was previewed and available months and months ago, like a year almost.

    The real news story today is about using Silverlight technology in a new Live project.

    Today's MS story was about "Windows Live PhotoZoom", a set of features for managing photos with Silverlight, using some of the original PhotoSynth technologies.

    http://www.liveside.net/blogs/main/archive/2007/06/06/windows-live-photodoom-alpha-silverlight-powers-new-microsoft-live-labs-project.aspx [liveside.net]

    Ya, PhotoSynth is a cool technology, but not exac
  • The video is very nice and seems to show off Silverlight's canvas function pretty well, if that is indeed Silverlight. The developer seems to have a very good artist's eye in the way the photos are pleasingly laid out.

    I confess I had to watch the video without sound in my office but if as people are saying the image warping is automated, then it sounds very much like work done by Paul Haeberli of Silicon Graphics and posted in his Grafica Obscura [graficaobscura.com] notebook. He calls it image merging via a projective warp [graficaobscura.com] and
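    (Haeberli-style projective-warp merging can be sketched in a few lines with OpenCV: estimate a homography from matched features, warp one photo into the other's frame, and paste the second on top. The file names are placeholders and this is only an approximation of the technique described in the linked notebook.)

```python
# Sketch: merge two overlapping photos by warping one onto the other's image
# plane with an estimated homography (a projective warp). File names are
# placeholders; no blending is done, the second photo is simply pasted on top.
import cv2
import numpy as np

img1 = cv2.imread("view_a.jpg")
img2 = cv2.imread("view_b.jpg")
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)

matches = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp photo 1 into photo 2's frame (canvas wide enough to hold both),
# then drop photo 2 on top where it belongs.
h2, w2 = img2.shape[:2]
canvas = cv2.warpPerspective(img1, H, (w2 * 2, h2))
canvas[:h2, :w2] = img2
cv2.imwrite("merged.jpg", canvas)
```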
