
Comments


Google Android — a Universe of Incompatible Devices

jilles FUD (636 comments)

This kind of criticism has been popping up repeatedly regarding Android. Most of these reports are speculative and seem to be ignoring the facts, which are that:

1) There are hardly any vendor-specific Android SDKs; everybody gets their SDK from Google. Apparently this is not causing any compatibility problems between the included emulator and actual devices. If it were an issue, people would be downloading vendor-specific APIs to work around it; they are not. It's a non-issue. It just works.
2) Most speculative pieces like the hardly original one cited here on compatibility come without any concrete examples whatsoever: which popular Android applications are actually problematic? Where are the hordes of disgruntled users? What's the actual technical analysis of the underlying causes? Where are the device specific applications?
3) Barring documented differences, the Android platform is actually backwards compatible. So if you want to target Android 1.6-2.1, don't use any features introduced after 1.6 or make the use of those features optional.
4) Even the first available Android device, the G1, has been updated to the latest Android version. Not right away, of course, but the fact is that most Android devices on the market are on 1.6 or newer, either because they shipped that way or because they have been upgraded at some point.
5) The predominant application development platform on Android is Java. What you think you know about compatibility on native platforms simply does not apply to a proper Java platform covered in unit tests, like Android. By and large, backwards compatibility is a complete non-issue; see point 2. If you have evidence suggesting otherwise, share it. If something is not backwards compatible, your unit tests fail and you fix the problem. It's that simple.
6) Most other vendors address this issue by not licensing their platforms to others (e.g. RIM), by shipping only a handful of devices (Apple), or by regularly breaking compatibility (MS). Given the competition, Google is actually doing pretty well shipping a platform that runs on dozens of devices from dozens of vendors. Windows Mobile is the closest thing in terms of breadth, and we all love Windows Mobile for its excellent compatibility track record, right? (NOT :-) ). The failure of other vendors to address this issue is what has been driving Android's growth in the past year.
7) Of course there are bad devices out there, and vendors with bad software update policies. SE shipping a 1.6 device at this point in time is illustrative of their poor strategy; their inability to get this device out the door is testimony to their incompetence, and their declining market share is well deserved. Don't blame Google for that, though.
8) The practice of forking code, which is what some vendors do, is bad for compatibility and time to market. This is true for any piece of software. If you are going to get an Android device, make sure it is running Android 2.x and that the vendor in question has a track record of supporting their devices in the field with updates. Extensive vendor or operator specific customizations mean significant delays between getting updates on your device and increased dependence on a probably not so competent development team.
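Point 3 above, sketched in code: gate newer features at run time so one binary targets Android 1.6 through 2.1. The `apiLevel` field here is a stand-in for `android.os.Build.VERSION.SDK_INT` (1.6 = API 4, 2.0 = API 5, 2.1 = API 7), since the real constant requires the Android SDK; the backend names are purely illustrative.

```java
// Sketch: make a post-1.6 feature optional instead of mandatory.
// `apiLevel` stands in for android.os.Build.VERSION.SDK_INT.
public class FeatureGate {
    static int apiLevel = 4; // pretend we are running on Android 1.6 (Donut)

    static String pickBackend() {
        if (apiLevel >= 5) {
            return "eclair-feature";  // use the API introduced in 2.0
        }
        return "donut-fallback";      // the feature degrades gracefully on 1.6
    }
}
```

On a real device the same check would read `Build.VERSION.SDK_INT`, which is exactly how version-gated features are written against the official SDK.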

more than 4 years ago

Cygwin 1.7 Released

jilles Re:Compared to MingW, (203 comments)

What do you mean, small subset? You can compile and install KDE and X.org on Cygwin. Pointless, I agree, but most commonplace Unix command-line stuff compiles and works on Cygwin. I used to depend on Cygwin to make life a bit more tolerable when I was still stuck with Windows laptops for development. There's something called PuTTYcyg, a fork of PuTTY with some Cygwin support compiled in, that really improves the terminal experience (bash in a DOS window kind of sucks).

I never noticed any performance issues, but then I don't do much performance-critical work on the command line: a few simple grep, ls, and wc commands and the odd bash script to automate things.

more than 4 years ago

Making Sense of the Cellphone Landscape

jilles Re:We need a Debian Atp-Get model for phones (185 comments)

Well, Maemo is essentially a Debian derivative with the fully functional Debian package-management tools installed and configured to use Nokia's software repositories for over-the-air apt-get updates and upgrades (i.e. no need to flash the device with new firmware; you get updates as they are made available). You can install a package from the officially supported list (i.e. no hacks needed) to get a root shell, after which you can modify sources.list to, for example, add one of the several repositories with free (OSS) goodies, or even your own repository (which is really nice if you are developing for the device).

http://repository.maemo.org/
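For illustration, the over-the-air updates work through ordinary apt sources. A sketch of what the entries in a Maemo device's sources.list look like (the distribution and component names here are assumptions for illustration; check repository.maemo.org for the real ones):

```
# Illustrative Maemo apt sources; "fremantle" was the Maemo 5 codename.
deb http://repository.maemo.org/ fremantle free non-free
deb http://repository.maemo.org/extras/ fremantle free
```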

This is right now the only device that is truly open to modification and usable as an actual phone at the same time. There are many Linux phones on the market, but most are either intended for developers and barely functional, or intended for end users and completely locked down (e.g. pretty much any Android phone). The N900 is not locked down: it comes with officially supported root access, an excellent Linux-based SDK, an excellent Mozilla-based browser, excellent multimedia and multitasking support, and it is a pretty good phone too.

Disclaimer: I work for Nokia, but just check the many independent reviews for the more or less unanimous enthusiasm about what this phone can do.

more than 4 years ago

The NoSQL Ecosystem

jilles Re:hmm (381 comments)

That's just another way of saying SQL databases are a poor match for the requirements big websites face. SQL databases used at scale almost always throw characteristic features like transactions, joins, or even ACID out the window in order to scale. Once you start doing that, SQL databases just become a really complicated way to store stuff. The one database that is really popular on big websites is MySQL, which built its popularity as a non-transactional database. While most common features have been bolted on since, the proper way to use MySQL at large scale still involves not relying on them. The way sites like Facebook, eBay, etc. use it is as a dumb key-value store. Apparently, Amazon does not use database transactions. That's remarkable for an e-commerce site with over a billion dollars in revenue that handles millions of financial transactions per day; I'm pretty sure they would use database transactions if it were feasible. Instead, they handle transactions at the application level.
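A minimal sketch of that "dumb key-value store" usage pattern: the application reads and writes opaque values by primary key only, so any backend (MySQL, memcached, a distributed hash table) can sit behind the same interface. The in-memory map below is a stand-in for illustration, not how any of the named sites actually implement it.

```java
import java.util.HashMap;
import java.util.Map;

// The application only ever gets and puts by key: no joins, no transactions,
// which is why the backend behind this interface is interchangeable.
interface KeyValueStore {
    String get(String key);
    void put(String key, String value);
}

// Stand-in backend; a real deployment would talk to MySQL, memcached, etc.
class InMemoryStore implements KeyValueStore {
    private final Map<String, String> data = new HashMap<>();
    public String get(String key) { return data.get(key); }
    public void put(String key, String value) { data.put(key, value); }
}
```

Anything the schema would normally enforce (consistency between keys, multi-row updates) has to be handled at the application level, which is exactly the trade-off described above.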

So the problem is not APIs but the fact that the underlying technology can't live up to the requirements. Never mind what is possible in theory, because that's not worth shit in practice. There are now several non-SQL storage systems under development, designed from the ground up to be scalable, with all sorts of desirable qualities regarding data integrity and a growing number of people relying on them in real-world situations.

about 5 years ago

How Nokia Learned To Love Openness

jilles Re:Playing to Apple's weakness (180 comments)

You might make the case that the N900 already has the better hardware when you compare it to the iPhone. And for all the people dismissing Nokia as just a hardware company: there is a ton of non-trivial Nokia IPR in the software stack as well (admittedly not all OSS) that provides lots of advantages in performance and energy efficiency, excellent multimedia support (something a lot of smartphones are really bad at), hardware acceleration, etc. Essentially, most vendors ship different combinations of chips from a very small set of companies, so from that point of view it doesn't really matter what you buy. The software on top makes all the difference, and the immaturity of newer platforms such as Android can be a real deal-breaker when it comes to, e.g., battery life, multimedia support, or support for peripherals. There's a difference between running Linux on a phone and running it well. Nokia has invested heavily in the latter and employs masses of people who specialize in tweaking hardware and software to get the most out of the device.

But the real beauty of the N900 for the Slashdot crowd is simply that it doesn't require hacks or cracks: Nokia actively supports and encourages hackers with features, open source developer tools, websites, documentation, sponsoring, etc. Google does that to some extent with Android, but the OS is off limits for normal users. Apple actively tries to stop people from bypassing the App Store and is pretty hostile to attempts to modify the OS in ways they don't like. Forget about other platforms. Palm technically uses Linux, but they are still keeping even their JavaScript + HTML API away from users; it might as well be completely closed source. You wouldn't know the difference.

On the other hand, the OS on the N900 is Debian-based. As on Debian, the package manager is configured via /etc/apt/sources.list, which apt-get reads, and apt-get and dpkg work just as you would expect on any decent Debian distribution. You have root access, so you can modify any file, including sources.list. Much of Ubuntu actually compiles with little or no modification, and most of the problems you are likely to encounter relate to the small screen size. All it takes to get to that software is pointing your phone at the appropriate repositories. At one point there was even a Nokia-sponsored Ubuntu port to ARM, so there is no lack of stuff you can install, including stuff that is pretty pointless on a smartphone (like large parts of KDE). But hey, you can do it! Games, productivity tools, you name it: there is probably some geek out there who managed to get it to build for Maemo. If you can write software, package it as a Debian package, and cross-compile it to ARM (using the excellent OSS tooling, of course), there's a good chance it will just work.

So, you can modify the device to your liking at a level no other mainstream vendor allows. Having a modifiable Debian Linux system with free access to all of the OS, on top of what is essentially a very compact touch-screen device complete with multiple radios (Bluetooth, 3G, WLAN), sensors (GPS, motion, light, sound), graphics hardware, and a DSP, should be enough to make any self-respecting geek drool.

Now with the N900 you get all of that, shipped as a fully functional smartphone with all of the features Nokia phones are popular for, such as excellent voice quality and phone features, decent battery life (of course, with all the radios turned on and video and audio playing nonstop, your mileage may vary), great build quality and form factor, good support for Bluetooth and other accessories, etc. It doesn't get more open in the current phone market, and this is still the largest mobile phone manufacturer in the world.

In other words, Nokia is sticking out its neck for you by developing and launching this device and platform while proclaiming it to be the future of Nokia smartphones. It is risking a lot here, because there are lots of parties in the market that are in the business of denying developers freedom and securing exclusive access to mobile phone software. If you care about this, vote with your feet and buy this or similarly open devices (suggestions, anyone?) from operators that support you instead of preventing you from doing so. If Nokia succeeds here, that's a big win for the OSS community.

Disclaimer: I work for Nokia and am merely expressing my own views, not representing my employer in any way. That said, I rarely actively promote any of our products, and I choose to do so with this one for one reason: I believe every single word of it.

more than 5 years ago

XHTML 2 Cancelled

jilles Re:CSS 3 spec (222 comments)

That's the whole problem. All the experts work for the browser vendors, and the W3C never had any business overriding them. CSS3 will never fully happen (i.e. be both standardized and widely implemented). But of course the relevant bits have long been implemented, and now they await standardization. It would be nice if W3C bureaucracy could catch up here.

Basically, what's wrong here is that after an agile start in the nineties, the W3C turned into yet another standards body. Essentially, for most of the past ten years they have done nothing relevant; most of the good stuff on the web today bypassed their processes (AJAX, HTML5, JavaScript, the DOM). At some point XHTML was hijacked by the Semantic Web crowd, and it was essentially, and deservedly, put out of its misery today. They never produced standards or products worth reporting here. Meanwhile, browser vendors had to organize outside the W3C to get some progress going; current HTML5 is the result of that. Anything else ongoing in the W3C is pretty much irrelevant (unless you are part of the Semantic Web crowd). CSS3 is a good example of why standardize-first, implement-later is a bad idea.

more than 5 years ago

How Software Engineering Differs From Computer Science

jilles Old debate (306 comments)

I think, more specifically, software engineering is an empirical discipline. All successful approaches in this field (scientific and practical) are about empirically measuring and adjusting what is going on, rather than using mathematical models to analyze or predict things. This puts software engineering more in the realm of the social sciences than that of mathematics. As a consequence, the current practice of old-fashioned mathematicians-turned-computer-scientists teaching software engineering in universities produces mediocre results, since they typically lack the background for empirical methods. In other words, they are effectively amateurs, lacking both relevant experience in actually practicing software engineering on large systems (at least, I observed this when studying CS) and the scientific background needed to study software engineering empirically.

Luckily this has been changing. Most of the better universities now have software engineering researchers who actually have a clue about their topic. Industrial practice is gradually progressing as well (although the number of cocky CS college dropouts ignoring 40 years of accumulated wisdom remains a problem). Methodologies like Scrum and other agile approaches are solidly based on measuring what is going on and applying practices that have been demonstrated to actually work (as opposed to being predicted or assumed to work by some mathematician). Sadly, most companies practicing these methodologies don't do so rigorously and only pay lip service to the whole notion. All the good software engineers I know combine excellent technical skills with good empirical and people skills. It's all about measuring what is going on, rather than assuming or deducing from mathematical models. Good SEs use test suites, profilers, and other means to find out how good or bad their code is.

So, maintainability is a function of how messy the software is (easy to measure with all sorts of metrics), how well the code is covered by tests (easy to measure), the number of bugs per kloc (easy to measure), and indeed the experience of the people maintaining the software (not that difficult to gauge either). A typical mismanaged project will have some junior software engineer messing with the code until it sort of works, lots of reported bugs, poor test coverage, and a manager who doesn't act on any of the things he or she should be measuring.

One of the interesting things about large open source projects is their Darwinian nature, which weeds out the bad ones. There is a lot to learn from how successful OSS projects operate; many best practices find their origin in such projects. Things like version control, bug tracking, and unit-testing frameworks are experimented with a lot in the OSS world. For example, Mozilla pioneered Bugzilla and was also quite early in adopting continuous integration and automated tests; they developed a lot of technology and practices just to support knowing how they were doing, and most of that is now widely used across the industry. Ubuntu is doing similar things currently, and projects like Eclipse maintain a very high pace of development with very consistent quality under circumstances most companies would be unable to handle.

more than 5 years ago

Comparing the Size, Speed, and Dependability of Programming Languages

jilles Re:Pet peeve (491 comments)

Yes, of course you are right. And more recently we actually have languages with coherent and consistent behavior across implementations (somewhat of a novelty; just look at C implementations). There are several Ruby interpreters that run the Rails framework now. The fastest one (or so it is often claimed) is JRuby, which runs on top of the Java virtual machine, which in turn has quite a few implementations on a wide variety of hardware, optimized for anything from embedded devices to many-core CPUs, with impressive levels of compatibility given these differences. So saying language X is faster than language Y is a fairly meaningless statement these days. Faster on what, under what conditions, at what required level of stability, and with what kind of benchmark? Most C programs are fast, except they have all but one core of the CPU idling, because threading and concurrency are very hard to bolt onto a C program. Which is why some performance-critical messaging servers are written in languages like Erlang.
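To make the multi-core point concrete, here is a hedged sketch (plain Java 8 standard library, nothing Rails- or Erlang-specific) of spreading a computation across all available cores: one method call on the JVM, versus serious surgery to bolt the same thing onto an existing C program.

```java
import java.util.stream.LongStream;

public class ParallelSum {
    // parallel() runs the reduction on the common fork/join pool,
    // which sizes itself to the number of available cores.
    static long sumTo(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }
}
```

Whether this is actually faster than the sequential version depends on the workload size, which is exactly the "faster on what, under what conditions" caveat above.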

Most C programmers believe C is the closest thing to native code aside from assembler. That would be correct if you ignored 40 years of progress in the hardware world, but it is downright naive in light of modern x86 processors. Technically, x86 is no longer a native instruction set but a virtual machine language that happens to have an in-hardware translation to the 'real' instruction set. Like all such translations, it comes at a cost in lost flexibility, and indeed performance, but it is worth it to avoid rewriting all those pesky compilers. So rather than shatter the assumptions C programmers make, hardware vendors actually support them by sacrificing transistors. The only real difference between the Java and C execution models (aside from better-defined semantics in Java) is that Java does the translation in software, at run time, taking into account the performance characteristics of both the running program and the underlying hardware. That, in a nutshell, is why the LLVM project exists: to do the same for C programs (and indeed many other languages, including Java).

Of course, you have to take into account the levels of abstraction and indirection provided by application frameworks as well. Those come at a cost, and that cost is typically high in scripting languages (but you get so much in return). Java is often unfairly compared to C; I say unfairly because it is usually a comparison of application frameworks rather than language implementations. Under 'laboratory' conditions Java is quite fast, even comparable to C, and applies all the same performance optimizations (and then some). Except nobody programs Java that way (apart from some dude at work who managed to produce the suckiest Java class ever, but that's a different story). Similarly, C mostly lacks the rich frameworks that are common in the Java world. Transcode a C program to Java and you end up with a code base that is still pretty fast (e.g. Quake 2 has a Java port). Stupid people in both camps believe it is an either-or decision between the two and that the outcome is somehow inherent to the language. Clever engineers know that 95% of their code is not performance-critical at all (i.e. the CPU is idling most of the time), and that it makes a hell of a lot of sense to do whatever is necessary to get the remaining 5% performing as well as possible, provided the cost in stability, productivity, security, portability, etc. is low enough. That's why server-side C is not a commonly required job skill.

That's why Apple used the LLVM platform when porting Mac OS X to mobile phones, and why Google chose to implement a heavily customized Java VM on top of familiar C/C++ components to emulate that success. Good engineers know when to choose what. Both the iPhone and Android are excellent achievements that move beyond the stupid status quo of wondering which language is better.

more than 5 years ago

MS Word 2010 Takes On TeX

jilles academia is already using word ... (674 comments)

... unless you narrow it down to the mathematics, physics, and computer science departments, of course. BTW, I fully sympathize with those inclined to define it as such :-).

All that stuff about typography is bullshit. The only real reason LaTeX is used there is formulas, which is a niche feature that lacks relevance outside the exact sciences. You might find the occasional sociologist, anthropologist, or biologist using LaTeX, but by and large they are anomalies in a population that is mostly ignorant of the 'joys' of compiling and debugging your tables, references, and whatnot, and of giving up the many comforts that come with a decent word processor, none of which are particularly well cared for by your average programmer's editor (which tends to be optimized for different purposes).
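For what it's worth, the formula support really is the sticking point. A representative (purely illustrative) display equation that takes one line of markup in LaTeX but a fight with an equation editor elsewhere:

```latex
% One line of markup, professional output; no dialog boxes involved.
\begin{equation}
  \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, \mathrm{d}x
\end{equation}
```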

Now there will be a ton of uber-geeks itching to mention a feature or two that is their biggest reason for sticking with LaTeX, but the truth is that this crowd is simultaneously too conservative to switch to anything else and too uninvolved in OSS to step forward and fix one of the many OSS word-processor alternatives so that it can do whatever it is that makes LaTeX work for them. Apparently, nobody who cares enough to recognize the many flaws of LaTeX-related workflows is competent or willing to step forward and fix them. Guilty as charged myself, as well (obligatory disclaimer for sitting on my ass here).

OpenOffice? Crappy compared to both MS Office and LaTeX. When given the choice, I choose something else (I seem to default to Word, but just because it is easy), or LaTeX if I have to. AbiWord: a niche product that doesn't do most of the things a scientist would need; great if MS Write/WordPad (same thing, really) would have done the job as well. What else is there? KOffice: nice, ambitious, and notoriously nearly done but never actually usable, ever since people started working on it (10-12 years ago?).

Consequently, we're stuck with a software system put together by a nearly retired professor and author of excellent books on computer science, whose career peaked in the seventies, i.e. some 30-40 years ago, and which has had no updates worth writing home about since before I started studying computer science in the mid-nineties. Nothing against Donald Knuth (great achievements, and I would have been honored to have been taught by the man). But shit, the world has moved on. Is LaTeX really the best mankind can do when it comes to writing articles and theses?

more than 5 years ago

HP Accused of Illegal Exportation To Iran

jilles what export restrictions? (287 comments)

Export controls are largely ineffective and easily bypassed with proxies and intermediate partners. I've heard of several separate cases where people working for multinationals in Iran simply use a middleman to get their hands on any equipment they need. They 'advise' their customers that in order to operate foo, they would need x units of component bar, which they are sadly unable to deliver due to export restrictions, and basically bar 'magically' appears on site when needed, no questions asked.

The basic problem is that the US exports to most countries in the world, and re-exporting from those countries to the ones the US doesn't want to supply is basically not controlled at all (except by local legislation, e.g. in NATO member states). So all you need is a willing middleman in any country the US can export to. The middleman is not in US jurisdiction and is under no obligation to even inform US trading partners of an intent to re-export. Likewise, US trading partners have no interest whatsoever in knowing what happens to their stuff after they ship it.

Naturally, most US companies maintain good and extensive relations with such middlemen, providing them with support, logistics, etc.

more than 5 years ago

Why Use Virtual Memory In Modern Systems?

jilles Re:You mean physical memory right :-) (983 comments)

On most Windows PCs you are better off maxing out the memory on the motherboard (which you can do quite cheaply) and telling Windows never to waste your time copying bits back and forth between the paging file and RAM (which it will otherwise happily do for no good reason at all). Think of a PC with 1.5 GB of RAM, 1 GB free, and the damn OS swapping your applications to disk so that each Alt-Tab results in lots of disk activity, which is especially painful on otherwise fast and capable laptops with slow disks. 1.5 GB is plenty for Java development in Eclipse while running Firefox and a few office applications at the same time.

It depends on your usage pattern, but I've done this on a laptop with 1.5 GB and got a huge performance boost. The flip side is that the OS gets a bit more rude when you actually run out of memory; on the other hand, that saves you some time as well, because the alternative is that everything gets very, very slow. Just keep an eye on your memory usage and close some applications when you are closing in on the limit.

Macs are different. I have 4 GB in mine, and the OS is generally very well behaved with the page file. With Linux, I guess it depends. Basically, if you run a server, a page file is a waste of time (either you have enough RAM or you don't, and when you don't, you are in trouble).

more than 5 years ago

