grcumb writes "Looks like the folks at Google have made good on their promise to release the Android 4.0 source code. Android software engineer Jean-Baptiste Queru writes: "Hi! We just released a bit of code we thought this group might be interested in. Over at our Android Open-Source Project git servers, the source code for Android version 4.0 (Ice Cream Sandwich) is now available."
"This is actually the source code for version 4.0.1 of Android, which is the specific version that will ship on the Galaxy Nexus, the first Android 4.0 device. In the source tree, you will find a device build target named "full_maguro" that you can use to build a system image for Galaxy Nexus. Build configurations for other devices will come later."
grcumb writes "The Economist magazine is running a brief profile of Digicel, a 'minnow' in the wireless telecoms market that has distinguished itself by setting up shop in some of the most unlikely (and dangerous) markets in the world, including Haiti and Papua New Guinea, whose capital, Port Moresby, has one of the highest murder rates in the world.
"If you just focus on risk, you can't do a thing," said Digicel's billionaire president Denis O'Brien in a 2008 Forbes profile. But O'Brien's small-market revolution should teach us another lesson, too: Traditional economic analysis doesn't work when it comes to communications. Telecommunications is a supply-driven economy. If you build it — no matter where you build it — they will come.
Now, if someone could just teach the North American telcos this...."
grcumb writes "In a recent article on Alternet, Annalee Newitz reports that our perception of the typical anonymous poster as a fat, half-naked basement dweller with a grudge is nearly 100% wrong. Virgil Griffith's WikiScanner site exposes the surprising truth: The majority of dishonest edits and omissions on Wikipedia derive from corporate and government IP addresses. In Annalee's words: 'It turns out that the people who are hiding behind anonymity online for nefarious or selfish reasons are not little guys in pajamas but the very bastions of accountability that haters of the Web have deified.'"
grcumb writes "Pearl Jam reports that their live webcast from Lollapalooza was censored by AT&T. The statement on the band's website outlines their concerns in the context of the ongoing Net Neutrality 'debate':
"AT&T's actions strike at the heart of the public's concerns over the power that corporations have when it comes to determining what the public sees and hears through communications media.
"Aspects of censorship, consolidation, and preferential treatment of the internet are now being debated under the umbrella of "NetNeutrality." Check out The Future of Music or Save the Internet for more information on this issue.
It's refreshing to see that at least some of our media darlings have a clue about what this debate is about."
grcumb writes "Le Monde has published a story claiming that French defence officials have asked all senior functionaries in the French government to stop using BlackBerry wireless mobile devices. Fears that the US-based mail servers supporting the service could lead to systematic eavesdropping by US intelligence agencies led to the drastic move. From the AP story:
"It's not a question of trust," Mr. Lasbordes told The Associated Press. "We are friends with the Americans, the Anglo-Saxons, but it's economic war."
Research In Motion, makers of the BlackBerry device, claim they couldn't read the emails even if they wanted to: "No one, including RIM, has the ability to view the content of any data communication sent using the BlackBerry Enterprise Solution,"
Apparently, nobody at RIM has ever worked at the NSA."
About once a month or so, I'm tempted to dump 25 bucks on Flickr to upgrade to a 'Pro' account, just so I can plop more than 200 photos into that particular bucket. I admit I've been on the cusp a couple of times.
But I never do. The plain fact is that Flickr is a terrible photo viewing interface.
A bright white background is possibly the worst neutral background they could have chosen. White washes out colours and destroys one of the things that I personally love best: subtle shading on very dark and earth-toned pictures. It's got to the point where a lot of self-respecting photographers actually have a 'View on Black' link, pointing to one of several services that do nothing other than render the very same photo with a dark background. The difference is stunning.
But Flickr, in its infinite marketing wisdom, would rather emulate Google's 'any colour as long as it's white' mantra. In Google's case, there's wisdom in the approach; they are a utility, like power or water, not a creative service. Flickr does not benefit in the least from an engineer's design sense, and it's high time someone told them that.
One Hundred's Spartan
When viewing photos in groups - or any aggregation, for that matter - one is usually presented with a hodge-podge of 100 pixel thumbnails. Viewing photo sets is even worse: the screen is filled with a patchwork quilt of arbitrarily cropped 75×75 pixel postage stamps. No, wait, I take that back. Postage stamps are larger.
I can't imagine a worse fate for any decent photo. To be reduced to a smudge of light among dozens or hundreds of others on a glaring white page. I'm not sure even Ansel Adams could survive that.
Of course, there are some photos that do just fine in such an environment. Too often, they're from the 'Ooh Shiny!' school of art. To everyone's credit, some genuinely lovely photos can be found, if you know where to look. But they're lovely in spite of Flickr, not because of it.
There are any number of technical arguments for crowding dozens of blots of colour together and calling them a collection, but none of them wash when it comes to aesthetics, or even usability, for that matter.
Flickr's groups are subject to the same AOL-ish devaluation that most large scale communities suffer from. The absolute preciousness of users who trawl through other galleries, bestowing silly trophy and ribbon icons on pretty photos in a desperate attempt to burnish their collective karma by associating with only the best types... it's off-putting in a way that I'd rather not characterise in a public medium.
Let's just leave it at this: Any group of more than a few dozen people who are mostly unknown to one another can never merit the descriptor 'exclusive'.
Worst of all, Flickr is a vortex. It's a gravity well whose debris can be found throughout the Web, but which is entirely self-referential. Once you're in there, you don't come out. I've had over 14,000 visitors to my main photo stream, yet a mere 18 referrals from Flickr show up in my imagicity.com server logs. People who use Flickr don't go elsewhere.
Flickr, in other words, is good for Flickr. Any benefit that derives to individual photographers seems to be purely coincidental.
All of this isn't Flickr's fault, per se. The fault lies in our technical inability to render - and more importantly, to manage - images efficiently through a standard GUI, and to share them effectively.
It seems almost paradoxical. Digital technology has allowed revolutionary advances in photography. It has made possible the one thing that I love more than anything else: the ability to draw with light rather than pigment. Sometimes when I'm engrossed in my work I find myself getting almost drunk on colour. There is nothing more rewarding than watching a well-built slide show wash the room with light and shape, to see human vision captured, distilled and transformed in the process.
It astounds me, therefore, how poorly most websites handle photos.
But this is the environment that Flickr has chosen. With few tools to effectively deal with social economies of scale, people are left to their own devices, so they crowd together (as people always do), creating cacophony where contemplation might once have been. Flickr has embraced (in the embarrassing cloying-college-drinking-buddy sense of the word) conventional wisdom with regard to UI, and has spent all its effort on the engineering challenge of handling photos in volume. They've tacked on a few trendy bloggy/webbish bits, like tagging with keywords and location data, but done nothing whatsoever to innovate how photos are viewed.
And that, it seems to me, should be the very essence of innovation where photography is concerned.
I won't demur for a moment if you counter that thumbnails are a necessary evil, that larding a page up with binaries slows down load times, that we're unfortunately bound by the lowest common denominator where display and download capacity are concerned. Nor will I argue if you express admiration for their ability to handle the data volumes that they do. Just storing and serving up 2 billion photos is a decidedly non-trivial task.
But let's be clear here: I expect more from Flickr. I judge them by a higher standard.
They want to set themselves apart? Then let them deal intelligently - dare I say it? creatively - with their popularity. The engineering challenge is interesting; I'll be the first to admit it. But dammitall, this is a photography site. It's for creative people. Is it too much to ask that they should actually take a little of their revenue and use it for basic research and innovation? Where's the research into lossless compression, peer-to-peer content distribution, point-and-click monitor calibration, optimal display environments, click-and-drag online image resizing? Where's the community for UI geeks?
How many of Flickr's 10-30 million monthly visitors have paid accounts there? My guess would be: Several. Surely some of that revenue could go into renewal, exploration and invention.
Perhaps it's no surprise that Flickr founders Caterina Fake and Stewart Butterfield left Yahoo! just as soon as they reasonably could. I don't doubt for a moment that they've thought a great deal more about these issues than I have. Perhaps they'll be the ones who manage to pull a rabbit or two out of their digital cap.
Sometimes you have to destroy the document in order to save it....
I give up. I can't support OpenOffice Writer any more, and it's nobody's fault but their own. For anything more than simple tasks, the application is terrible. Their only saving grace is that Microsoft Office has its own brand of polished turd, named Word. Collectively, they are racing to the bottom of a decade-long decline in usability.
No, that's too generous. The thing is, they're at the bottom. They are useless for any but the most trivial tasks, and the most trivial tasks are better accomplished elsewhere, anyway.
Yes, I'm ranting. Let's put this into a proper context:
I hate word processors. For any but the simplest tasks, their interfaces are utterly ridiculous. I haven't liked a word processing interface since WordPerfect circa version 5, and if I had my own way, I'd author all my documents in either emacs or vi, depending on the circumstances.
Why do word processors suck so badly? Mostly, it's because of the WYSIWYG approach. What You See Is What You Get, besides being one of the most ghastly marketing acronyms to see the light of day in the digital era, is ultimately a lie. It was a lie back in the early 1990s when it first hit the mainstream, and it remains a lie today. The fact of the matter is that trying to do structuring, page layout and content creation at the same time is a mug's game. Even on a medium as well understood as paper, it's just too hard to control all the variables with the tools available and still have a comprehensible interface.
But the real sin that word processors are guilty of is not that they're trying to do WYSIWYG - okay, it is that they're trying to do WYSIWYG, but the way they go about it makes it even worse. Rather than insisting that the user enter data, structure it and then lay it out, they cram everything into the same step, short-circuiting each of those tasks, and in some cases rendering them next to impossible to achieve.
Writing, then structuring, then formatting a document (or even just doing each through its own interface) is easier to learn and easier to accomplish than the all-in-one approach we use today. For whatever reason, though, we users are deemed incapable of creating a document without knowing what it's going to look like right now, and for our sins, that's what we've become. And so we are stuck with word processors that are terrible at structuring and page layout as well as being second-rate text authoring interfaces. They do nothing well, and many things poorly, in no small part because of the inherent complexity of trying to do three things at once.
It doesn't help that their technical implementation is poor. The Word document format is little better than a binary dump of memory at a particular moment in time. For our sins, OpenOffice is forced to work with that as well, in spite of having the much more parse-worthy ODF at its disposal these days.
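And "parse-worthy" is no exaggeration. An ODF file is just a ZIP archive of well-formed XML, so even a stdlib-only script can pull the text out of one. The sketch below is illustrative: it assumes the standard `content.xml` entry and ignores XML namespaces for brevity.

```python
# Minimal sketch: an .odt file is a ZIP archive whose document body
# lives in a well-formed XML file named content.xml.
import zipfile
import xml.etree.ElementTree as ET

def odt_text(path):
    """Return the concatenated text content of an ODF document."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("content.xml"))
    # Walk every element and collect its text, namespaces ignored.
    return "".join(root.itertext())
```

Try doing the equivalent against a binary memory dump of a .doc file.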
There's no changing any of this, of course. The horse is miles away, and anyway the barn burned down in the previous millennium. The document format proxy war currently underway at the ISO is all the evidence I need to know that I'll be dealing with stupid, stupid, stupid formatting issues for years to come. I will continue to be unable to properly structure a document past about the 80th percentile, which is worse than not at all. I will continue to deal with visual formatting as my only means to infer context and structure, leaving me with very little capacity to do anything useful with the bloody things except to print them out and leave them on someone's desk.
Maybe I'll just stop using them at all. Maybe I'll just start doing everything on the web and never print again. I'm half serious about this, actually. At least on the Web, the idea that content and presentation are separate things isn't heresy. At least on the Web, I can archive, search, contextualise, comment, plan, structure and collaborate without having to wade through steaming piles of cruft all the time.
At least on the Web, I can choose which steaming piles I step into.
I'm going to start recommending people stop using Word as an authoring medium. There are far better, simpler tools for every task, and the word processor has been appropriate for exactly none of them for too long now. Sometimes you have to destroy the document in order to save it.
The Debian OpenSSL vulnerability apparently went unnoticed for well over a year. Why is it that crackers and script kiddies never found it and/or exploited it?
Over the weekend, I've been thinking about last week's disclosure concerning Debian's OpenSSL package, which in effect stated that all keys and certificates generated by this compromised code have been trivially crackable since late 2006.
There's a pretty good subjective analysis of the nature of the error on Ben Laurie's blog (thanks, Rich), and of course the Debian crew itself has done a fairly good job of writing up the issue.
The scope of this vulnerability is pretty wide, and the ease with which a weak key can be compromised is significant. Ubuntu packaged up a weak key detector script containing an 8MB data block which, I'm told, included every single possible key value that the Debian OpenSSL package could conceivably create.
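That tiny search space is easy to sketch. The snippet below is illustrative only: Python's `random.Random` stands in for OpenSSL's PRNG, and the point is that once the entropy-mixing line was removed, the process ID was essentially the only seed material left, capped at 32,768 on a typical Linux system of the era.

```python
# Illustrative sketch only: random.Random stands in for OpenSSL's
# PRNG; the process ID is the sole remaining seed material, which
# is effectively what the Debian patch accidentally arranged.
import random

PID_MAX = 32768  # default Linux PID ceiling at the time

def weak_keygen(pid):
    """Generate 'key material' seeded by nothing but the PID."""
    rng = random.Random(pid)
    return rng.getrandbits(128)

# Precompute every key any affected machine could possibly emit.
all_possible = {weak_keygen(pid) for pid in range(1, PID_MAX)}
print(len(all_possible))  # at most 32,767 candidates to brute-force
```

An 8MB data block of precomputed keys, as in Ubuntu's detector, is exactly this kind of exhaustive enumeration, repeated per key type and architecture.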
The question that kept cropping up for me is: This one-line code change apparently went unnoticed for well over a year. Why is it that crackers and script kiddies never found it and/or exploited it? Numerous exploits on Microsoft Windows would have required far more scrutiny and creativity than this one. Given the rewards involved for 0-day exploits, especially in creating platforms for cross-site scripting attacks, why is it nobody bothered to exploit this?
My hypothesis - sorry, my speculation - is this: People at every stage of the production process and everywhere else in the system trusted that the others were doing their job competently. This includes crackers and others with a vested interest in compromising the code. I should exclude from this list those who might have a reasonable motivation to exploit the vulnerability with stealth and to leave no traces. If, however, even they didn't notice the danger presented by this tiny but fundamental change in the code base, well, my point becomes stronger.
The change itself was small, but not really obscure. It was located, after all, in the function that feeds random data into the encryption process. As Ben Laurie states in his blog, if any of the OpenSSL members had actually looked at the final patch, they would almost certainly have noticed immediately that it was non-optimal.
In all this time, apparently, nobody using Debian's OpenSSL package has actually (or adequately) tested to see whether the Debian flavour of OpenSSL was as strong as it was supposed to be. That level of trust is nothing short of astounding. If in fact malware authors were guilty of investing the same trust in the software, then I'd venture to state that there's a fundamental lesson to be learned here about human nature, and learning that lesson benefits the attacker far more than the defender:
Probe the most trusted processes first, because if you find vulnerabilities, they will yield the greatest results for the least effort.
P.S. Offhand, there's one circumstance that I think could undermine the credibility of this speculation, and that's if there's any link between this report of an attack that compromised not less than 10,000 servers and the recent discovery of the Debian OpenSSL vulnerability.