Read on for some of the most interesting comments and exchanges on a handful of yesterday's Slashdot posts (on the age of the Universe, virtual desktops in OS X, trick photography on the Reuters wire, and AOL's latest privacy gaffe) in today's Backslash summary.
Regarding yesterday's story on a recalculation of the Hubble constant, which suggests the Universe is considerably older than the conventional figure of about 14 billion years, reader Toby Haynes (tjwhaynes) writes
I love it when I see reports like this. Stating that the age of the universe is 15.8 billion years gives the impression that this figure is accurate to around 1 percent or better. The error bars on this sort of figure are probably closer to +/- 2 billion years or more, implying that the 99th-percentile answer is somewhere in the range of 12-20 billion years. Most of the "measurements" over the last 20 years fit into that range. There is a tendency for more recent publications to fall into the 14-16 billion year range, and that may simply be a reflection that that is the "accepted" answer.
Ten years ago I worked on a team measuring the Hubble Constant using Radio Telescope data; in fact, it was the same group that came up with the 42 km/s/Mpc value which caused all the Douglas Adams H2G2 references (that was shortly before I joined). There was a lot of controversy over the value of the Constant back then, and it is still a hot topic. At the time, the Hubble Constant was thought to have a value anywhere from 30 km/s/Mpc up to 120 km/s/Mpc. The smaller the value of the Hubble Constant, the older the Universe. Having a smaller value was desirable because it meant that the Universe was old enough to account for the oldest objects observed (about 16 billion years old). Think about that.
One of the points that struck me then was that the value of the Hubble Constant tended to come out higher when measured with "more local" techniques, and lower when the techniques relied on more distant objects. The Radio Telescope data gave us measurements based on objects around or beyond a redshift of 1 (or, to put it another way, the clusters of galaxies observed were about half the current age of the universe when the light left them).
Anyway, we'll be seeing more measurements of the Hubble Constant for many more years. Just remember the error bars!
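The inverse relation Haynes describes (smaller Hubble Constant, older Universe) comes from the Hubble time, t = 1/H0, which sets the rough age scale. A minimal unit-conversion sketch, ignoring the matter and dark-energy corrections a real age estimate needs:

```python
# Hubble time t = 1/H0: the rough age scale of the Universe.
# (A real age estimate also depends on the matter/dark-energy content.)
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Convert a Hubble constant in km/s/Mpc to a Hubble time in Gyr."""
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC  # H0 in units of 1/s
    return 1.0 / h0_per_sec / SEC_PER_GYR

# The range Haynes mentions, from the Douglas Adams-friendly 42
# up to the high end of the old controversy:
for h0 in (30, 42, 72, 120):
    print(f"H0 = {h0:3d} km/s/Mpc  ->  t = 1/H0 ~ {hubble_time_gyr(h0):5.1f} Gyr")
```

H0 = 30 gives roughly 33 billion years while H0 = 120 gives about 8, which is why a smaller constant was "desirable" when the oldest observed objects appeared to be around 16 billion years old.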
Reader habig disagrees, writing
No, the startling thing about recent cosmological work is that we do know this number to the percent level. The flagship for this new "precision cosmology" is the WMAP [nasa.gov] results [nasa.gov]. The number is weighing in at 13.7 +/- 0.2 billion years. Take a look at the tables of cosmological parameters in this paper and the carefully calculated error bars.
This particular press release's sweeping claims do overreach, as nicely summarized by Michael Richmond in a post above. M33 isn't at a cosmological distance; the observations being done by this project help pin down the lower rungs of the distance ladder, from which you can figure out distances to far-off galaxies and calculate numbers to compare independently against the microwave background fits. These results are one of many such distance calibrations, and have to be factored in statistically with the others. On the whole, several other means of figuring out cosmological parameters (such as the age of the Universe) agree with the WMAP results within errors. You only get TFA's 15% increase if that is the only measurement you use to calibrate distances, throwing out all the rest.
To that, Haynes replies
Chewing through that paper (an interesting one, by the way) shows that those error bars are based on analysis of the data after processing. Those error bars on the age of the universe therefore assume that the removal of foreground sources and of fluctuations due to the Sunyaev-Zel'dovich effect has been done absolutely correctly. No attempt (that I can see) has been made to model the errors arising from that procedure. That alone suggests there are systematic effects not accounted for in those results.
I'm extremely skeptical of a lot of error bars on a lot of data. Confusion is a huge topic in radio astronomy (and I don't mean the chaotic, running-around, headless-chicken type of confusion), and I see paper after paper that doesn't really understand it, deal with it, or present any full explanation of how errors in confusion analysis would propagate into the answers.
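habig's point that independent calibrations "have to be factored in statistically with the others" amounts, in its simplest form, to an inverse-variance weighted average: a measurement with loose error bars barely moves the combined answer. A sketch with invented numbers (illustrative placeholders, not the actual published estimates):

```python
# Inverse-variance weighted average: each measurement is weighted by
# 1/sigma^2, so loose error bars contribute very little.
# (age in Gyr, 1-sigma error) -- invented values for illustration only.
measurements = [(13.7, 0.2), (15.8, 2.0), (13.9, 0.6)]

weights = [1.0 / sigma ** 2 for _, sigma in measurements]
combined = sum(w * age for (age, _), w in zip(measurements, weights)) / sum(weights)
combined_sigma = (1.0 / sum(weights)) ** 0.5

print(f"combined: {combined:.2f} +/- {combined_sigma:.2f} Gyr")
```

With these numbers the single loose 15.8 +/- 2.0 estimate pulls the tight 13.7 +/- 0.2 result up by only a few hundredths of a billion years, which is habig's point: you only get a 15% jump in the age if you throw the other calibrations away.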
Of the several announcements from Apple's Worldwide Developers Conference yesterday, the most controversial seemed to be the introduction of "Spaces," an implementation of virtual desktops for the next version of Mac OS X, Leopard. Reader bandrzej welcomed the introduction of virtual desktops, but pointed a finger at Apple for taking so long to introduce them:
About time with the virtual desktops! Took them long enough... all other major *nix-based window managers have them. Makes their "photocopying" comment at WWDC seem double-edged, eh?
mblase has a mitigation defense for Apple's tardiness, writing
In all fairness, Leopard's Spaces implementation looks like a quantum improvement on the other virtual desktop managers I've used. (Granted, it's been a while since I tried any, since I was never very satisfied.) None of the other VDMs I recall were quite "Mac-like" enough — by that, I don't mean flashy and animated, but easy to use and understand.
They borrowed some design ideas from Expose, it looks like; you can view all four of your desktops at once; you can drag-and-drop windows from one to the other; and they all use the same Dock instead of using different Docks for each desktop, which is the one thing I always wanted.
Reader CatOne mostly agrees and adds some details:
I've played with Spaces briefly; it's nice.
You can configure as many virtual desktops as you want: the default is 4 (2x2), but you can add rows or columns as you see fit. I went to 16 (4x4) and that was fine... I don't know whether 36 or, heck, 81 would be manageable. I'm sure it would be RAM-heavy ;-)
The ability to bind applications to individual "spaces" is nice, as is the ability to drag windows between them dynamically. Clicking on an application icon automatically moves you to the appropriate space; this should mean much less of the "where is that damn window? it's buried!" that I still experience, even on my 30" Cinema Display. I thought that would be enough screen for it to stop happening; instead, all I have now is *huge* browser and mail windows.
Is it a quantum leap in virtual desktop managers? No. But switching between them is quick, efficient, and easy (you can use control-space # to go to a given space, or control-arrow key)... so it really just gives you a desktop many times your actual screen space; that's what it feels like. None of the cube effects a la You! desktops, which are slow and mostly eye-candy-esque.
On the disclosure by America Online that the company had inadvertently released more than a half million customer search records stripped of names but not otherwise sanitized (and thereby possibly exposing individuals to snooping), reader ivan256 wants to know
To that question, reader schwaang writes
Why were you ever under the delusion that aggregate data about your searches would be kept private? You don't even have an implied right to privacy when you send un-encrypted data across the internet. Not only are people stupid if they're upset about this, they're stupid if they're surprised.
Calling this a consumer rights issue is a joke. There are no rights involved here other than ones that people made up after the fact because they were irrationally upset.
"information about the searches you perform through the AOL Service and how you use the results of those searches;"
And then it says:
"AOL will only share your AOL Member information with third parties to provide products and services you have requested, or when we have your consent"
"Keep reading," says ivan256:
Get down to the part about AOL Search, which has additional privacy terms. It is implied that they have your consent unless you opt out of the data collection.
While some commenters scoffed at privacy concerns in aggregated, semi-anonymized data, reader geekotourist says it's time to revisit "personally identifying information."
When AOL apologized today, the spokesperson said, "Although there was no personally-identifiable data linked to these accounts, we're absolutely not defending this."
Back in January, in the story on how the DoJ demands and gets ISP data, AOL spokesman Andrew Weinstein had said, "We did not comply with the request made in the subpoena. Instead, we gave the Department of Justice a list of aggregate anonymous search terms that did not include results or any personally identifiable information."
AOL: you need to rethink the phrase "personally identifiable," because it doesn't seem to mean what you think it means. You're hiding behind one technical definition of PII, without concern for whether the results actually contain PII. If you're releasing results with personally identifying information in them, then you cannot say you're not releasing PII. I'd written in January, "I question this assumption by Yahoo, AOL, etc. that search terms, by themselves, have no privacy considerations because they've been separated from personal info. What if the search itself contains personal information? Are the search companies deleting the timestamps and randomizing the order of the search terms themselves? Because otherwise I could see personal info showing up." Obviously, half a year later, they still think that replacing a name with a number takes away the PII. They need to have a talk with, say, the Census Bureau about why it will withhold data about groups of businesses in a region. Grouped data can easily become PII data if you can tease out characteristics. And AOL didn't even group the data!
As always, relevant quotes from the best.essay.evar on why privacy is a fundamental human right: "If information that is actually about someone else is wrongly applied to us, if wrong facts make it appear that we've done things we haven't, if perfectly innocent behavior is misinterpreted as suspicious because authorities don't know our reasons or our circumstances, we will be at risk of finding ourselves in trouble in a society where everyone is regarded as a suspect. By the time we clear our names and establish our innocence, we may have suffered irreparable financial or social harm..."
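geekotourist's point that "the search itself contains personal information" is easy to demonstrate: even with account names replaced by numbers, query strings can carry phone numbers, Social Security numbers, addresses, and so on. A toy scan (the log entries and patterns are invented for illustration; real PII detection is far harder than two regexes):

```python
import re

# An "anonymized" search log: names replaced by numeric IDs, but the
# query text itself still leaks. All entries below are made up.
log = [
    (10001, "best pizza near me"),
    (10002, "what is ssn 078-05-1120 used for"),
    (10002, "reverse lookup 404-555-0123"),
]

# Two crude PII-shaped patterns; a real scanner needs far more.
pii_patterns = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

hits = [(user, kind)
        for user, query in log
        for kind, pat in pii_patterns.items()
        if pat.search(query)]

for user, kind in hits:
    print(f"user {user}: query contains a {kind}-like string")
```

Note that both flagged queries belong to the same numeric ID, which is exactly how "anonymous" records become a profile of one identifiable person.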
Yesterday's post about news agency Reuters' admission that it ran a digitally manipulated photo depicting the effects of Israeli bombing in Lebanon drew more than 500 comments. Joining many others in pointing out the obvious manipulation of the photograph, reader plover wants to know "Is Reuters complicit?"
The photo was so obviously manipulated as to be laughable. Anyone who's ever used the Clone Brush tool would immediately recognize it as having been manipulated, and anyone who's completely unfamiliar with digital photography would still question the regularity of the blobs of smoke.
Sure, this photographer is at fault, and you can make assumptions about his political motives for Photoshopping this image. But what's worse is how did Reuters let such a piece of crap into the system? The guys on SomethingAwful [somethingawful.com] or Worth 1000 [worth1000.com] all do a much better job, and that's just for the glory of the contest. They're not trying to pass their stuff off as "news." Even the guys at Fark [fark.com] aren't this bad (not even Heamer :-) No, this Photoshop was of "The Daily Show" quality — comically bad.
The most charitable conclusion I can come up with is that Reuters isn't actually looking at the images that come in the door. The alternative is that someone at Reuters shared the photographer's political agenda and lacked the good sense to reject a picture whose Photoshopping was so obvious. Neither conclusion is good news for Reuters.
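plover's "anyone who's ever used the Clone Brush tool" observation has a mechanical basis: copy-stamping leaves exactly repeated pixel data, which is what the simplest forensic checks look for. A toy sketch on a synthetic "image" (a grid of random numbers standing in for pixels; real forensics must also cope with resampling and JPEG noise):

```python
import random

# Build a synthetic 32x32 "image" of random pixel values...
random.seed(7)
SIZE, TILE = 32, 4
img = [[random.randrange(256) for _ in range(SIZE)] for _ in range(SIZE)]

# ...then fake a clone-tool stamp: copy an 8x8 block from (2,2) to (20,20).
for dy in range(8):
    for dx in range(8):
        img[20 + dy][20 + dx] = img[2 + dy][2 + dx]

def find_duplicate_tiles(img, tile=TILE):
    """Flag pairs of byte-identical tile-by-tile patches."""
    seen, matches = {}, []
    n = len(img)
    for y in range(n - tile + 1):
        for x in range(n - tile + 1):
            patch = tuple(tuple(row[x:x + tile]) for row in img[y:y + tile])
            if patch in seen:
                matches.append((seen[patch], (y, x)))
            else:
                seen[patch] = (y, x)
    return matches

matches = find_duplicate_tiles(img)
print(f"{len(matches)} duplicated tiles, e.g. {matches[0][0]} == {matches[0][1]}")
```

Every matched pair here sits at the same fixed displacement, which is the signature a human eye picks up as "regular blobs": the same patch repeated again and again at one offset.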
Piling on one last insult, Megane writes
It was done so badly that I could tell it was clone tooled by looking at the thumbnail of the picture.
Many thanks to the readers (especially those quoted above) whose comments informed each of these discussions.