


A Warming Planet Can Mean More Snow

iabervon Re:Science or Religion? (1136 comments)

It used to be the case that scientists had a good theory about what weather there would be in different seasons, and this theory was usually right. They couldn't predict daily weather all that well, but they could predict that you could reasonably grow oranges in Florida without worrying about it being colder than Maine for a week and snowing a month later, and they could tell you that there would be snow in Vancouver and not in Dallas.

Now conditions are outside the boundaries that climate models are based on, and scientists really have no clue any more. And it's not just the scientific climate models that don't apply; common sense and experience are no longer relevant, because we don't have history that tells us what happens in this environment, measured, anecdotal, or otherwise. In all of our past experience, the arctic wind has blown eastwards around the pole. Then one year it blows across the pole into Europe. Two years later, it blows across the pole into North America. Is this going to be a regular occurrence? Nobody knows.

To the extent that climate change has a falsifiable hypothesis, it is the null hypothesis that is being rejected. That is, you can ask: is the environment now following the patterns we have previously observed? We find that we are observing patterns that we had not observed previously, including some that we would have noticed had they occurred at any point in a substantial stretch of the past. On the other side, we've previously been able to demonstrate enough of an understanding of climate to know how to build houses and what crops to plant where. But the evidence that you should build houses in Florida to keep heat out and houses in Maine to keep heat in is getting less certain. The issue is not that scientists know that something bad is going to happen; it's that nobody has any clue whether something bad is going to happen, even after taking into account that some bad things never happened before, because the situation is just different in some measurable ways.

Personally, my guess is that the planet has major negative feedback, or it wouldn't have stayed in a reasonably narrow range of climates long enough for life to get this far. More greenhouse gasses in the atmosphere will trigger more cooling by some other mechanism, which might be okay or might be all of the continents turning into highly-reflective deserts instead of light-absorbent arable land. We really can't make an accurate prediction.

more than 4 years ago

Seasonal Flu Shots Double Risk of Getting Swine Flu, Says New Study

iabervon Correlation is obvious (258 comments)

There's got to be a significant correlation between having the seasonal flu vaccine recommended for you, and being exposed to swine flu. Surely we should expect that people who choose to get seasonal flu shots do so in part because they're more likely than average to come down with the flu if they don't get a vaccine. Being at high risk for exposure to the flu is a clear mediating factor in leading to both getting every available flu shot and coming down with any strain that goes around that there isn't a vaccine for.

To put it another way, we vaccinate some people to keep them from spreading the flu. If there's a link between getting the vaccine and getting the flu if you don't get the vaccine, then we're vaccinating the right people, and we should go on vaccinating them. (But it's worth making sure people know that they can't act like they're immune to the flu this year.)

Of course, the study could have found an actual danger to the vaccine, but we can't tell until the peer review is complete; peer review is where people will come to some sort of consensus on what the risk is that this value should be compared to.

more than 5 years ago

Designer Accused of Copying His Own Work By Stock Art Website

iabervon Re:hit them back (380 comments)

That's why there's a legal system to which the two parties present their evidence before a judgement is rendered. If this guy can actually present the evidence he says (here) he has, he should win in court. If he's lying here, he should still go to court, and lose big. (Or, more likely, they should go to court-backed mediation, where they can show their evidence to a mediator who can make a decision if it's obvious and make it stick if they both accept it.)

more than 5 years ago

Living Free With Linux, Round 2

iabervon What's with the numbered versions of Ubuntu? (936 comments)

He makes a good point about the weird Ubuntu version scheme: a new user is likely to think that you could update from version 8.04 to 8.10 as an ordinary incremental change. But an expert would know that 8.04 and 8.10 are actually dates ('08.April and '08.October), and everybody actually calls them Hardy and Intrepid whenever they're saying anything that might be useful information. For that matter, the recent code names actually tend to give accurate suggestions about the sort of release it will be, with the LTS one suggesting robustness and the others suggesting ambition of various sorts. (Are you sure you want to move from something Hardy to something Intrepid? On the one hand, you get new stuff; on the other hand, it won't live as long.)

more than 5 years ago

The ASP.NET Code Behind Whitehouse.gov

iabervon Anyone try a DNS lookup? (143 comments)

$ host www.whitehouse.gov
www.whitehouse.gov is an alias for www.whitehouse.gov.edgekey.net.
www.whitehouse.gov.edgekey.net is an alias for e2561.b.akamaiedge.net.

Reducing their bandwidth and server load is just not a big deal. (See Akamai and note that the whole site takes the path that the "image" request takes in that diagram.)

about 6 years ago

Git Adoption Soaring; Are There Good Migration Strategies?

iabervon Re:My (short) experience with git so far (346 comments)

You can probably simplify the git workflow a lot.

You don't need to update at all until you're done with your branch. If there were any benefit to updating regularly, you could get exactly the same effect at the end by rebasing a dozen times against progressively newer commits in the upstream history. The exception, of course, is if someone has done something you want to use, but you can just update to the point you require. SVN doesn't have a command for "updating all the way is too hard, update only a little bit at a time", which gets people in the habit of updating all the time.

If this is your whole workflow, you shouldn't have any of your own changes on the master branch (except, of course, when you send out your work branch and then get it back in the next pull), so you shouldn't need to fix anything there; in fact, the "pull" should just say "fast-forward" and give you exactly what is in the main repository. In fact, you can skip having a master branch at all, and just rebase against "origin/master" (which is the state of the main repository the last time you looked).

You should use "git commit" all the time. Until you push, you can revise commits after you make them. As soon as you've done any work you wouldn't want to redo, commit it. Then use "git commit --amend" when you do more. Eventually, do another amend to write a real message, inspect the change with "git show", then fetch, rebase, make sure you still like it, and then push.

I generally work with two branches, one which contains just whatever I've written as I write it, with tons of commits with the message "more stuff" or "fixes", and the other formed by getting a diff between origin and the junk branch and applying only those parts that are actually all one logical change and all good, and committing it with a good message. I repeat this until my good branch contains everything worthwhile, and then I dump the branch that's only got unneeded debugging statements, whitespace changes, and so forth.
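The commit-early, amend-as-you-go habit described above can be sketched in a throwaway repository (file names and commit messages here are made up for illustration):

```shell
# scratch repo so the sketch is self-contained
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name "You"

# commit as soon as there's work you wouldn't want to redo,
# with a placeholder message
echo 'first pass' > notes.txt
git add notes.txt
git commit -q -m 'more stuff'

# keep folding further work into that same commit
echo 'second pass' >> notes.txt
git add notes.txt
git commit -q --amend -m 'more stuff'

# eventually amend once more to write the real message, then inspect
git commit -q --amend -m 'notes: describe the feature properly'
git show --stat
# (in a clone you would now: git fetch && git rebase origin/master && git push)
```

Everything up to the push is local, so the history can be rewritten freely until then.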

about 6 years ago

Alan Cox Leaves Red Hat

iabervon Re:There is speculation... (163 comments)

For that matter, why should Red Hat fund development on the sorts of thing that Alan Cox works on, if hardware vendors are willing to fund it? Intel can even get developers internal documentation and (most importantly) face time with hardware designers who can explain things that they didn't think to document (or that they documented in a huge specification that's too big to find the little detail in).

There's no reason for Red Hat to have a collection of kernel developers working on stuff that Red Hat doesn't need more than anybody else does.

about 6 years ago

Game Devs Warming Up To More Mature-Rated Games On the Wii

iabervon Re:Crossplatform (129 comments)

I think you're missing the fact that the Wiimote is sufficiently different from other systems' controllers that most games released for both of them will be terrible on one or the other. Trying to sell to the massive Wii install base isn't going to be easy if you're trying to compete with games that are less awkward and more fun on the Wii.

On the other hand, there's no reason there couldn't be a good Wii GTA, except that it would be much more disturbing than other console versions. In ordinary GTA games, your character does all sorts of bad things while you sit around pushing buttons on a controller. In a proper Wii version, you'd be miming doing the bad things yourself, which will seem a whole lot worse.

more than 6 years ago

Should the United States' New CTO Really Be a CIO?

iabervon Re:I object to the question (243 comments)

I'd actually say that the US needs a CTO, who would be responsible for identifying the most promising technologies to subsidize the development of. Someone who can look at the range of technologies that are under development, and decide whether it makes sense to go for more efficient production of renewable hydrocarbon fuels for cars and fuel-efficient hybrids, or whether we'd do better to produce more electricity in the midwest, transport it as compressed hydrogen gas, turn it back into electricity in the high-population areas, and drive plug-in electrics. And are we going to use enough more oil to make it worthwhile to put research into cleaner and safer processing, or should we focus on not using it in the first place? OSHA, the EPA, and the DoE each have their own priorities, and there isn't anyone in the position to find a balance between them and get useful things done.

On the other hand, the government needs someone to fix the government's information technology problems, which is an entirely different job, and seems to be what Obama's looking for. I'm not sure why this post isn't called "Secretary of Information Technology", actually, which would be more in line with other executive branch government posts and distinguish clearly between the two possible job descriptions.

more than 6 years ago


iabervon hasn't submitted any stories.



1000th post

iabervon iabervon writes  |  more than 11 years ago

I've now made 1000 slashdot posts. My 1000th post was even sufficiently good to get moderated up and show evidence of being read by quite a number of people (I, in fact, saved the 1000th one for something I thought would be worth the distinction).

Now I'm curious as to my posting rate. I can't seem to find information like when my account was created, nor can I find a way to look at my really old posts. I'm fairly certain I'm below one/day, though, which is reasonable. Amazing how they build up over time. It's funny; I don't really think of myself as an early adopter, but as time goes on, I become a relatively earlier adopter of everything I've adopted. Obviously, time works like that, but what I remember is how much I missed at the beginning.


UI thoughts

iabervon iabervon writes  |  more than 12 years ago

This is based on reading When Good User Interfaces go Crufty. I didn't think I'd be able to find a comment again in the discussion.

Saving is now an operation that users care about. It's actually more like committing changes. Users expect, from their computer use, to have the times they saved as times they can revert to (at least the most recent one). But they also don't necessarily have the habit of saving often (and they shouldn't save often, by the meaning they're using; they should only save when they've completed something). The correct behavior is to keep track of changes since the last save and apply the most recent set when the program starts up. You can quit, crash, lose power, whatever, and it will come back where you were. But if you (e.g.) copy the file somewhere else, it won't be different, because you haven't saved it. And there's no need, on a modern OS, to actually ever write the in-progress version with file operations: you can memory map it and the OS will write it to the disk as needed, avoiding the possibility of the program crashing with unsaved work.

While I'm on the topic, undo past saves is a useful idea (as is multiple versions), but it should stick to the session, not to the document. "Sure, I'll delete the secret parts of the document and send it to you." The user-level concept of a file has only a single version, with nothing but the current contents. Everything else is magic that the user expects to be personal.

Programs should quit when you have closed all of the windows, and not at any other time. If each window is a document, it quits when you're no longer working on any documents. If there's a window for other stuff, you'd have to close that, too (e.g., in an IDE, you'd have to close the IDE as well as the documents). You can't close the IDE part unless you've closed all of the documents (although, possibly, if you try to close it, don't use it, and close the last document, it should go away, so you can do things in any order).

Programs which can handle multiple documents should recognize when the program is already running and open the document in the existing instance when you open a new document. This is often supported, but not frequently enough that users tend to do it instead of using "Open". Note that this applies just as much to the command line as to GUIs; why can't I do "emacs <file>" in a shell instead of ^X^F and then finding the file? (Actually, I can. But it was complicated to set up.)
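For emacs specifically, the setup the parenthetical presumably alludes to is the server/client pair; a minimal sketch (the file name is a placeholder, and the init-file location varies):

```shell
# run one emacs that listens for edit requests
# (or put (server-start) in your ~/.emacs instead)
emacs --daemon

# "opening" a file from any shell now reuses the running instance;
# -n returns immediately instead of waiting for the buffer to be closed
emacsclient -n somefile.txt
```
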

Remembering inodes isn't right, but there's a similar idea: hard links. You have two directory entries for the same storage. Of course, there are a number of questions remaining. If you move your document to a new name and create a new document with the old name, which do you get? This is essentially the situation you get when saving later versions of documents (except that the old version is also removed). Hard links would tend to follow the versions. You probably want to add a flag on files to say that it is "weak", such that, if there are no non-weak links to the file, it goes away, unlinking all of the weak links (unless any of them are open, perhaps); that would let you purge documents which are in the "recent" list but have been removed. It would make sense to have symlinks in your recent list as well, though, so that, if the original is gone but there's a document with that name still there, you get that. (Doesn't help for documents which have been moved and then replaced without notice, though).
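The move-and-replace question above is easy to try with ordinary hard links and symlinks (file names are arbitrary):

```shell
cd "$(mktemp -d)"                 # scratch directory
echo 'version 1' > doc.txt
ln doc.txt recent.hard            # hard link: a second name for the same storage
ln -s doc.txt recent.sym          # symlink: a pointer to the *name*

mv doc.txt old-doc.txt            # move the document away...
echo 'version 2' > doc.txt        # ...and create a new one at the old name

cat recent.hard                   # prints "version 1": follows the original storage
cat recent.sym                    # prints "version 2": follows whatever now owns the name
```

This is exactly the split described above: a "recent documents" entry kept as a hard link tracks the moved original, while one kept as a symlink picks up the replacement.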



iabervon iabervon writes  |  more than 12 years ago

I wish people on slashdot wouldn't talk about patents without a basic understanding of how to read a patent. The abstract is an abstract, and is always going to be extremely broad. That's not what the claim is on. Ditto the title. If you can describe your invention in a few words, it's almost certainly obvious.

The patent applies to things under the claims. The claims (and supporting documents) are the important parts. The rest are not really anything more than search keywords.


cvs fun

iabervon iabervon writes  |  more than 12 years ago

If you do:

cvs diff -u -D yesterday -D today <file>

cvs will tell you what has changed in that file in the past day.

If you do:

cvs diff -u -d yesterday today <file>

cvs will tell you:

cvs diff: I know nothing about yesterday
cvs diff: I know nothing about today

I think this should be song lyrics, but I can't come up with other lines.


My experience with Linux

iabervon iabervon writes  |  more than 12 years ago

(This is a response to an InformationWeek article. I couldn't get the comment system to work, so I posted it here instead, since I'd written it)

I switched to Linux in early '96, when Windows 3.1 wasn't meeting my needs. I think that Linux is a good choice at the point when you're going to have to put in retraining effort anyway. So long as it works, keep the same operating system. But remember that MicroSoft doesn't release new versions of most things; they change enough things that more training is required in order to use anything new.

As you said, the change between Windows 95 and Windows 98 was difficult for some users. On the other hand, since I started using Linux, there has not been a change of that sort to anything I've used, with the sole exception of Netscape.

This isn't to say that the OS hasn't made any progress; it's just that everything I was used to has continued to work as I expected. All of the progress has been improvements which do not interfere with existing behavior: devices which didn't work now work; the system is faster and more responsive; the graphics have improved.

I think Linux is a good idea because it is Free Software, which means that you'll never have to get a new version of anything unless you want to, and because it has gotten sufficiently advanced that there will probably be ways to do everything your company wants to do. I wouldn't switch anything over to Linux if it is currently working, but I would use Linux instead of a new Windows version on new systems.


Slashdot annoying ads?

iabervon iabervon writes  |  more than 12 years ago

Is it just me, or are the ads just like they've always been again? I got the new pages for a bit, and then it went back to the old way.

Maybe it's just the "lite" pages.


I scared myself last night

iabervon iabervon writes  |  more than 12 years ago

So last night I was working on a virtual machine I've been thinking about. I took a "Beta" simulator (for a class at MIT) that I'd written a while ago, and converted it to using structs for values instead of longs. This amounts mostly to tracking down every place that a value is used (in both the simulator and the assembler) and using the value field of the variable, plus dealing more carefully with storage, and also writing functions for dealing with the simulated memory, since that's now more complicated.

I got it all written, and debugged it with some difficulty (as I'd forgotten how most of the code worked, and, when I could understand the code at a glance, I didn't have any larger-scale information, like, say, where I'd put various functions), and actually got it all working exactly like it had been before, except with the ability to eventually attach the information of my choice to every single value in the system.

I added it to my CVS repository. Then I removed a few things that I hadn't meant to add. Then I removed all of the C files. Oops.

I stared at it for a bit, and then copied the emacs backup files over, thinking I'd lost a single revision of each, which wouldn't be that much, since I'd been debugging. I set about redoing all of the remaining modifications.

Turns out that my emacs doesn't overwrite the backup files under whatever circumstances I was in, which meant that the files I had were all the originals. I then proceeded to make the sum of every single change I'd made in the previous 3 hours, by going through each file and changing everything that was not how it should be. In the process I moved and updated a function I had been considering but hadn't changed.

When I tried compiling it, it turned out that I'd forgotten to include a header file in each of the files, and I'd forgotten three ampersands.

I did the whole thing half-asleep (having already been sufficiently sleepy to delete all of the files). It took me slightly more than 45 minutes. I don't really think I could have done it if it hadn't been right after, or if I'd had to do it from scratch. But it was like if I'd taken an exam, and then had to retake it immediately afterwards, and I'd been able to give the answers word for word, except that I forgot to put my name on the pages. This from someone who can't normally remember more than one thing at a time.


Taking credit for other people's code

iabervon iabervon writes  |  about 13 years ago

It is odd that people think you'd get in trouble in the industry for taking credit for other people's code. At my job, we use CVS, and CVS $Author$ keywords give you credit for any code you modify. Of course, CVS (w/ emacs) will do things like color your files based on who wrote the current version of each line.


Random thought

iabervon iabervon writes  |  about 13 years ago

# Add a file to HP/UX /var/log/omena
if uname | grep -qi 'hp/ux'; then
  mv /var/log/omena /tmp/omena
  for i in `cat $1`; do
    if [ `cat $1 /tmp/omena | grep -c $i` = 1 ]; then
      echo $i >> /var/log/omena
    fi
  done
fi


What people keep missing about Linux

iabervon iabervon writes  |  about 13 years ago

Linux is like, for example, CRT technology. It's not a product; it is a technology which is in a part of products available from a number of sources. It is provided by a community of researchers, who work on the improvements they or their employers consider important.

Linux, itself, is not user-friendly; it is full of exposed wires and dials, like a CRT. Of course, you can get a case which has plugs in the back and knobs and buttons on the front: the system libraries and system utilities.

With these, you have a system which is about the equivalent of a monitor; it is reasonably friendly and easy to use, but it doesn't do anything in particular. Next you need stuff to attach to it: more libraries, standard programs, desktop environment, etc.

The user-friendliness work that's being done now is as related to Linux as VCR interface design is to CRT technology. MicroSoft essentially only sells the equivalent of TVs with integrated VCRs; Linux proper does, in fact, compete with Windows, but only with a relatively small part of it, and that is not the part that users interact with directly. And it is not, by itself, a product, so it is not marketed, nor is it (by itself) even usable by anyone but serious hobbyists. Of course, its features matter to the end user, much as the end user cares about picture quality and interference with a TV. It will not "go out of business" or "fail"; rather it may go unused for certain applications.

The real question people mean to ask is: will anyone even produce desktop software using Linux that will become widely-used? My guess is that, before MicroSoft's grip on computer users is loosened that much, the desktop metaphor will be dead. At some point, some other pattern of interaction will come into style, and MS will find that most of Windows and their other software is now outdated.


User Interface design ideas

iabervon iabervon writes  |  about 13 years ago

I am designing for the case where every user has a "home" machine, which has the user's files as well as the applications the user wants to use. An individual user might have multiple home machines, but has a separate identity for each. Users are sitting at either the home machine or a remote machine.

All of the configuration should be on the home machine, because the user is likely to sit down at a new remote machine frequently (i.e., go to a friend's house, and log in from there). The home machine should, however, be able to chose different configurations based on the remote machine in use, since a user will likely want a different UI depending on the speed of the connection, the capabilities of the remote machine, etc. Remote machines should be identified both individually and by negotiated features.

The interface layer will present to the application a set of capabilities which may be used. Without some capabilities, the application may refuse to run (without graphics, an image viewer will not work), may run with limited functionality (an image editor might still let you scale or crop an image unseen), or may simply skip using the capabilities (an HTML viewer will use a label instead of graphics, or just ignore sounds).

The interface layer is intended to be "narrow", such that relaying instructions over a network connection is quick. This means both that simple commands must remain simple (e.g., "display this text"), and that a "smart" implementation must be able to indicate its cleverness (e.g., if multiple independent buffers are supported on the remote end, the application must be able to use this to switch between buffers without repainting each one).


Dang W3C...

iabervon iabervon writes  |  about 13 years ago

I wish the W3C would quit breaking HTML. You'd think they'd do sensible things like not deprecating all ways of achieving a semantically meaningful effect. I just ran into this issue:

You used to be able to have an ordered list which started at whatever value you wanted. This is important if, for example, you're listing items which are in a zero-based array. Conventionally, HTML lists start with 1, so a list of items in a Java array really needs to have the starting number set; otherwise it is misleading.

Here is what the HTML 4 specification says about this situation: "Deprecated". Never mind that this has a semantic meaning, is the only way to continue a list across intervening text (as is the standard format for, say, the field of linguistics), and cannot sensibly be done with stylesheets.
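Concretely, the deprecated feature is the start attribute on ol; a scratch file showing the zero-based-array case (file name and contents are made up):

```shell
# HTML 4 deprecated the start attribute, but it is exactly what a
# zero-based listing needs
cat > /tmp/array-list.html <<'EOF'
<p>Elements of the array:</p>
<ol start="0">
  <li>args[0], the first element</li>
  <li>args[1]</li>
</ol>
EOF
grep -c '<li>' /tmp/array-list.html    # prints 2
```

Without start="0", the first item would render as "1." and silently misnumber every index.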

It is particularly odd that the W3C seems to do a good job with many of their standards, but their most heavily-used standard is regularly broken, such that it totally fails to serve its original purpose and cannot be used in the way it is intended.


The missing "cheese" scene

iabervon iabervon writes  |  more than 13 years ago

"Frodo, where can Gandalf be? He should be here by now."
"It seems odd that he'd disappear on a ten minute walk. Wait, do you think he meant 'Bree', with two 'e's?"
"But that's a third of the way to Rivendell! And there's a perfectly good inn right here in Hobbiton!"
"But I don't think Gandalf likes soft cheese. I think he wanted us to leave the Shire."
"Leave the... hey, look at that big guy in a black cloak. I wonder if he'd like some cheese."


Lord of the Rings thought

iabervon iabervon writes  |  more than 13 years ago

Did other people notice previously that Frodo got the Ring from Bilbo on his birthday?


Amusing idea for LOTR movie (mild spoilers)

iabervon iabervon writes  |  more than 13 years ago

It would be really amusing if, after a while of being faithful to the world and to the story, the movies suddenly diverged entirely, while remaining faithful to the world.

The Company gets to Lothlórien as expected, Frodo offers the Ring to Galadriel... and she takes it and crushes Mordor.

Then the second movie involves the forces of good, including reformed orcs and trolls and the Brown Riders, fighting the dark elves of Loth Morgul and The Dark Lady.

Then the third movie involves getting the Ring, now recovered, through Mordor with just about everyone on the side of good from the previous movie trying to take it for themselves.


Peer-to-peer review journals

iabervon iabervon writes  |  more than 13 years ago

The current situation in academia is silly. People write papers, which may be of variable quality. In order to weed out bad ones, and, more importantly, fix ones that have problems, they are reviewed by anonymous other people in the same field, who point out problems and suggest changes. Then they are published in journals, which are read by people in the field. After this, the journals end up with the publication rights. This made sense when only journals could easily publish things, so neither the author nor the peers would have much use for the publication rights anyway. But now it is easy to publish things yourself, so the main work the journals do that the authors cannot do easily is arranging the peers.

I would like to set up a P2P system for this task. An author would submit a paper to the system, which would randomly choose reviewers, who would then get the paper and a pseudonym for giving responses. Reviewers, either anonymous or not, could sign statements about the paper which include system-wide reasons the paper is worthwhile (e.g., "This paper is correct", "This paper is inspirational", "This paper should be considered"). Once the anonymous review period had ended (either the deadline passes or all of the reviews have been submitted for a final version), the paper is published if it meets the publication criteria; it has attached the system-defined signed statements which demonstrate it as worthy of publication, and it thus appears on people's lists of new papers. People could also add statements saying what they think of the paper, so that other people could essentially search for the papers a given author likes.

It would be easy to have a web-of-trust mechanism for the creation of accounts which can publish papers, can be chosen as reviewers, and can make statements in their own name. The anonymizing protocol doesn't have to be particularly robust, since the authors tend to not want to know who the reviewers are, and could probably figure it out by analyzing the style if they wanted to. The protocol would presumably avoid giving a paper to its own author, or someone from the same organization.

It seems like it would be possible to do all this with a relatively simple web+P2P app. I may sit down and write something of the sort at some point.


Kitten News

iabervon iabervon writes  |  more than 13 years ago

Foreign body removed from domestic cat.

It's really impressive how cute a cat is doing nothing in particular after you've been worried about him for a while. Compared to throwing up or hiding, sitting on a futon and washing is so cute.


Don't do this (cont)

iabervon iabervon writes  |  more than 13 years ago

I'm having some sort of odd dream. There's some music playing, and the sound of breaking glass. I realize it's a dream, and try to wake up. Then I'm having a different odd dream. My roommate is standing over me, and I'm sitting in what seems to be a dishwasher. After continuing to try to wake up for a bit, I realize that it's not a dream any more. I am, in fact, sitting in the dishwasher, and my roommate is now removing a two-foot piece of glass (with only a tiny bit of blood on the very end) from my shoulder.

Turns out that the music was, in fact, playing on the TV, and the breaking glass was real; it wasn't dishes (which I thought at first), but the window by the sink, which I broke with the back of my head. Walking away from the sink, I'd passed out and fallen backwards into the dishwasher, which was fortunately nearly empty and can evidently support my weight.

I then drank a glass of sugar-water, ate my stew (which had finished microwaving by that point and was getting cold), started to get cold when I had removed my overshirt (since it might have broken glass on it) and was sitting next to the broken window, moved to the other room (without much difficulty), ate a bunch of gummy bears, and drank a mug of hot tea and a bottle of iced tea. By this point, I was feeling pretty much fine, and managed to go out that evening without any ill effects. The only damage seems to be a small puncture wound on my shoulder and the broken window.


Don't do this

iabervon iabervon writes  |  more than 13 years ago

My house had a party on Saturday. That morning I got up late, had a big breakfast, and then started to clean up to get ready. By the time I was done, it was time for the party (I got distracted for a while). So I didn't get a second meal, although I ate a bunch of snack and dessert food during the party.

Sunday morning, I slept late, ate another reasonably-sized breakfast, and then hung around the house, but didn't have lunch, since it was getting close to time for dinner. As I was getting ready to make dinner, my roommate cut his finger on a can. Blood doesn't bother me, but wounds make me a little light-headed, and I was also really low on food by this point. So I'm feeling dizzy, and I go and sit on the couch. But that isn't going to get me any food, so I get up after a bit when I'm not feeling quite as bad, and I put a can of stew in the microwave, and then I go and wash out the can for recycling.


Messing with Povray

iabervon iabervon writes  |  more than 13 years ago

I've been messing with povray, just looking at the examples (so far), and I've noticed that the samples don't include any spaces; just objects.

I think it lacks the right camera for spaces. I've been thinking about the vision AI problem, and have realized a few things that I haven't noticed in the books I've seen (admittedly, I haven't really read that much of many books, but...): you have two eyes. They angle outward, so they can see different things, as well as being in slightly different places. You move your eyes around without thinking about it, so you see in a bunch of directions at almost the same time. You also move your head nearly without thinking about it, and your neck is not pivoting around the focal point of either of your eyes, which means that you see things from a number of slightly different positions.

Altogether, this explains why a picture of a place is barely recognizable and why it is hard to work out much 3-d information from it, in part why the vision problem is totally impossible as generally assumed, why it's really hard to avoid getting totally confused in FPSes, and why plays don't videotape well.

So I've been trying to figure out how you could make a picture that was locally consistent (preserves angles at every point), and contains the information you ought to get out of looking around in a room.
