
Programming As If Performance Mattered

simoniker posted more than 10 years ago | from the my-programming-can-reverse-time dept.

Programming

Junks Jerzey writes "Saw the essay 'Programming as if Performance Mattered', by James Hague, mentioned at the Lambda the Ultimate programming language weblog. This is the first modern and sensible spin on how optimization has changed over the years. The big 'gotcha' in the middle caught me by surprise. An inspiring read." Hague begins: "Are particular performance problems perennial? Can we never escape them? This essay is an attempt to look at things from a different point of view, to put performance into proper perspective."


615 comments

jihad (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#9070655)

join us in jihad [anti-slash.org]

Good coding example - How things are done right! (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#9070656)

That's quite a facile editorial, but you can't expect better from normal users. My screenshot looks better than yours, Evolution is better than KMail, GNOME looks more polished than KDE, and so on. I do use XChat, Abiword, Rhythmbox... usually you get stuff like this from normal users. And this is ok, since you can't blame them for stuff they simply don't know about or don't have the slightest knowledge of.

Such editorials are hard to take seriously, since they are built on basically NO deeper knowledge of the matter. Most people I have met so far are full of prejudices and seek excuses or explanations for why they prefer one over the other, while in reality they haven't the slightest clue what parameters they are comparing.

If people do like the fancy ICONS over the functionality then that's quite ok, but that's absolutely NO framework for doing such comparisons.

I come from the GNOME architecture and have spent the last 5 years on it. I have also spent a lot of time (nearly 1 year now, if I sum everything up) on the KDE 3.x architecture, including the latest KDE 3.2 (please note I still use GNOME and am tracking CVS for the 2.6 release myself).

Although I call myself a GNOME veteran, I am not shy about criticising GNOME, and I do this in public as well. Ok, I have been told by a couple of people that if I don't like GNOME I should simply switch, and so on. But these are usually people who have tunnel vision and do not want to see or understand the problems around GNOME.

Speaking as a developer with nearly 23 years of programming experience behind me, I can tell you that GNOME may look polished at first glance, but on a second look it isn't.

Technically, GNOME is quite a messy architecture with a lot of unfinished, half-polished and half-working stuff inside. Examples include the broken gnome-vfs, half-implementations of things (GStreamer is still only half implemented in GNOME, if you can call it an implementation at all), rapid changes that make it hard for developers to catch up, and never-ending bughunting. It is questionable whether some of this can simply be fixed with patches; what's really required is a public discussion about the framework itself.

Sure, GNOME will become better, but the time developers spend fixing all this stuff is time KDE spends really improving with needed features. We on GNOME are only walking in circles without real progress in true usability (not that farce where people talk to one person and then to the next). Real usability here means using the features provided by the architecture: when I as a scientist want to do UML work, I should actually find an application written for that framework that can do it. When I look over at the KDE architecture, then as strange as it sounds, I find more of these needed tools than I can find on GNOME. This continues in many areas, where I find more scientific software to do my work, and software that works reliably and doesn't crash, misbehave or behave unexpectedly.

Comparing Nautilus with Konqueror is pure nonsense; comparing GNOME with KDE is even bigger nonsense. If we get a team of developers around a table to discuss all the crap we find in KDE and GNOME, then I can tell you from my own experience that GNOME will fail horribly here.

We still have many issues in GNOME which are framework related. We now have the new Fileselector, yet the apps still act differently from one another. Some still have the old Fileselector, some the new one, and in some apps the new Fileselector appears differently than in other apps that use the same Fileselector code, and so on. When people talk about polish and consistency, I like to ask: what kind of consistency and polish is this? We still have a couple of different ways to open a Window in GNOME:

- GTK-Application-Window,
- BonoboUI Window,
- GnomeUI Window,

Then, a lot of stuff inside GNOME is hardcoded UI, some uses *.glade files (not to mention that GLADE, the interface builder, is still not aware of the new widgets in GTK, nor even of the deprecated ones), then we have *.xml files for BonoboUI windows, etc. As you can see, it's a pain to maintain all this junk, and these are just a little spot on the entire mountain; I could bring up countless more examples. Sure, these things are being worked on. No doubt, but as I said, they WORK on it; this means there is NO real progress for the future, since people write new apps for GNOME, probably using the old API, and then they need to change huge parts of their code just to adopt the new API, rather than working on the application itself to bring it forward with better features the user needs.
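For illustration, here is roughly what those three coexisting window styles look like in code. This is a rough sketch from memory of the GTK+ 2 / libgnomeui / libbonoboui APIs; treat the exact headers and signatures as assumptions, not gospel.

#include <gtk/gtk.h>      /* plain GTK+ */
#include <gnome.h>        /* GnomeApp (libgnomeui) */
#include <libbonoboui.h>  /* BonoboWindow (libbonoboui) */

static void open_three_kinds_of_windows(void)
{
    /* 1. Plain GTK application window. */
    GtkWidget *w1 = gtk_window_new(GTK_WINDOW_TOPLEVEL);

    /* 2. GnomeUI window (GnomeApp). */
    GtkWidget *w2 = gnome_app_new("myapp", "My App");

    /* 3. BonoboUI window. */
    GtkWidget *w3 = bonobo_window_new("myapp", "My App");

    gtk_widget_show(w1);
    gtk_widget_show(w2);
    gtk_widget_show(w3);
}

Three APIs, three widget hierarchies, three ways for menus and toolbars to end up subtly different.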

Why do I say these things in public and still use GNOME? Well, when I started, I was developing stuff using the Motif widgetset, and at that time, around 1999, KDE and GNOME looked quite similar in features. So I decided to work on and for GNOME, although I am not quite happy with many so-called 'solutions' inside GNOME, and I think we need to discuss them (in whatever place) so that people who want to contribute to GNOME know where the problems are and how we can solve them (if possible).

From my personal experience, KDE is far, far superior to GNOME when it comes to technical aspects. Even if there are a few menu entries too many, or the toolbar in Konqueror is overblown, these are all cosmetic things that can be changed if needed (and if the developers think it's a good thing). But looking at the number of KDE users and the applications that have sprung up out of nothing, I believe there are a lot of people simply happy with KDE as it is now.

If they change the Fileselector in KDE, then it's inherited by the other applications. So the author doesn't need to change huge swaths of code, since they simply inherit it. If someone changes the Addressbook object, then it's inherited by other applications; same for the Clock, Bookmarks, etc. The Fileselector looks the same in all apps, the toolbars and menus look the same in all apps, and so on. They have quite nifty features that I am missing in GNOME. Even nowadays I ask myself whether the developers working on GNOME are still on track with what the user really wants, or whether they are caught in tunnel vision here, building something no-one can really use.

When I hear people talking about all these cool usability studies SUN made, I have to smile, since those studies are a few years old by now. And SUN has already started working on their GLASS desktop based on JAVA (no, it's not GNOME based). The reason SUN still works on GNOME, as far as I was told, is that they had a 5-year contract with GNOME to do so. Anyway, you can't depend on old usability tests. To guarantee true usability, these tests need to be redone every now and then, as a sort of quality assurance that the stuff is still on track and truly usable for people.

Usable not as in which button to press; usable as in 'can I find the apps I need to do my business work', or 'can I copy files and subdirs from FTP and have the stuff arrive correctly on my Desktop' (gnome-vfs still horribly fails here).

Well, I think people should really write articles based on these things, since they are elementary for a desktop. Nevertheless, I do believe that both sides, KDE and GNOME, work hard on their desktops, but truly I believe KDE makes better steps forward, and imo in the right direction too. Even alternatives such as MorphOS or XFCE are far more user-oriented and friendlier to use than what GNOME offers today.

> Perhaps GNOME is a bloody mess inside and KDE is a
> masterpiece, but does that really matter to the user?

Yes, it does matter. Today we place the stones for the road of tomorrow, and we should decide wisely which stones we lay on that road. Should we go for an inferior desktop which stagnates because developers are messing around in the framework, or should we go for a technically superior desktop? Yes, USERS do care a lot, and it matters a lot to them as well. These users want polished applications, applications that are tightly integrated, that share one database for their addressbooks and one database for their bookmarks; they simply want to put all their addresses in one database and be sure they can use them in their Word-like application (form letters), in their cellphone syncing app, in their Palm or PocketPC syncing app, in their email client, and so on. It matters a lot to the user whether he can reliably use an FTP client or filemanager to copy a bunch of files from A to B without worrying whether the stuff arrived correctly or not. And yes, it matters a lot to the user whether he can be sure that new applications can be developed rapidly (even by himself) in a short time by reusing objects. And yes, it also matters to us all that a nice desktop is being used which works reliably in all areas and guarantees new applications, since we want to demonstrate to the outside world (non-Linux people) how far Linux and the desktop really are. How can we demonstrate to the world outside that KDE is in many areas far superior even to WindowsXP (in desktop functionality) if we show people what nice icons GNOME has, and as soon as they start using it they figure out that it's a mess?

And yes, there is nothing wrong with KDE being similar to Windows. I want Windows for Linux. At least it offers me a cool desktop with similar functionality and cool stuff. Hell, I don't even come from Windows; I used to be an AmigaOS person before that.

Better a Windows-like look and behavior than a desktop that fits nowhere, where people and industry need to spend hours and probably millions of dollars teaching their people how to do the simplest things. Now tell your customers who pay for your service how to use gconftool-2, for example. They will chop your testi**es off and put them in a jar of alcohol.

> the difference is that Gconfig it is aimed for advanced
> users and Kpanel for general use.

And here is the problem. GNOME these days aims for the inexperienced users. Quite a contradiction to the aims of GNOME, don't you think? The most important settings are simply hidden behind GConf (and it's GConf, not 'Gconfig'; better you go learn some basics before lecturing knowledgeable people about the differences).

> I don't want I windows on Linux, the reason I use Linux is
> to get off MS, and what about Mac users who don't like
> Windows and want something else?.

Honestly, KDE is closer to both of them than GNOME. KDE offers the MacOSX-style menu system (top menu), KDE has a cool Liquid theme, and KDE can look quite close to anything you like. It can even look like MorphOS.

But back to a normal conversation. You should look back to the mid-80's and compare things today. Most desktop solutions are all the same:

- Window
- Window can be moved,
- Icon on Desktop,
- Icon on Desktop can be moved,
- Filemanager,
- Filemanager can do things,
- Panel, Toolbar, Top Menu

So saying that Mac users won't like KDE is plain stupid, just as stupid as saying Windows users don't like KDE, etc. There can also be people who do like GNOME; there is no problem with that. But we should clearly look for the superior desktop solution, and it should be clear even to you that KDE is technically FAR superior. It's so much superior that comparing KDE and GNOME is plain wrong. It's like comparing a Ferrari with an Austin Mini.

> I do feel that Gnome is more likely to be successful on
> the corporate workstation than KDE

I don't believe so. Even corporate people have eyes in their heads and a brain they can use. Once they have spent some time with Linux and know more about the technical stuff, and probably about the two desktops, they will decide wisely. I recently had a conversation with someone who wanted to switch his entire company (1200 desktops) to GNOME, but they decided to use KDE after they figured out how messy GNOME really is.

> because there are less option to fiddle around with and it
> seems simplier to get things done

What things do you think they get done that simply? I would like to know a couple of examples of things you can do more simply on GNOME than on, e.g., KDE. But be that as it may, this still doesn't change the broken-framework issue, which is basically the be-all and end-all for a desktop. No matter how few options you have, no matter how clean you assume the desktop to be, no matter how polished or nice you find it yourself, it still won't change the broken junk inside it. As many people have already explained (and elaborated correctly), GNOME will take years (IF EVER) to reach the quality of KDE.

Forget the ugly icons, forget the bazillion menu entries, and forget all the tons of options. These are all things you can change easily and quickly. Unfortunately, you can't change the broken stuff in GNOME that quickly. I wish it were possible, but as sad and realistic as it sounds, it won't happen.

> sure stock Gnome isn't as polished as KDE, but Ximian
> Gnome is. Gnome 2.6 looks like it might just Gnome that
> extra bit of polish that it needs as well.

Yeah, but the rest remains GNOME, the same incomplete and unfinished framework. Ximian GNOME may be a name in the public eye, but new apps need to be developed as well, and that's still the same problem as with stock GNOME. You still deal with the issues I have described above.

We need a stable desktop, a desktop with a good framework, nice applications, and the certainty that rapid application development is possible. A Ximian GNOME won't change anything here.

> Computer users usually don't know much about computers, I
> can't imagine a customer trying to find and specific option
> here.

Excuse me, but why do these people want to use Linux then? If they have no clue what they are doing, they had better head off and use Windows. Every farmer can help with Windows, every neighbour can, and even every WalMart store can help these people with Windows-related questions. Why do they want to bother with Linux then?

People unfamiliar with computers make their first contact through Windows. They learn to use it, they use it fine, and strangely they get their stuff done the way they like, and Windows is overblown with configuration options.

Even my sister is far better with Microsoft Word than I ever was or ever will be (not to mention that I am not interested either). But you see that people, as inexperienced as they are, are usually willing to learn and do it. They learn from mistakes and don't make them again the next time.

Every now and then my sister comes to me and tells me that her printer doesn't work. Hell, it's easier for me, as an administrator and longtime Linux user, to fix her 1-second printer problem on Windows than on Linux. Windows is dead simple and yet full of configuration stuff. People not interested in config stuff won't fiddle with it either.

Even cars, video recorders, cellphones, PDAs, DVD burners and MP3 players are getting more features and things. And when I see people talking about technical stuff, they usually go for the things with many options, because they think that's appropriate for the price.

Anyway, you should read my comments carefully. All the options, icons and many menu entries you can IGNORE, since these are things you can easily CHANGE. Changing all that stuff in KDE is far easier than fixing the broken framework in GNOME.

> Yes people are willing to learn, but they are more worried
> aboyt getting their work done as fast as posible, less
> clicks, less options, just do what they need.

Ok, and what WORK do these people get done with GNOME that they can not get done with KDE?

To my knowledge they can get the same work done with KDE as they would with GNOME. So far we hopefully agree.

Now let's take a look beyond the tunnel (tunnel vision is kinda pointless here).

Say a person wants to get REAL work done. Say he or she wants to do some astrological stuff. Where will they get the software to do it? GNOME doesn't offer such software, so he or she can't even start to work.

Say people come into #gnome every day complaining there is no CD-burning application like K3b; how can they get their work done if the application is missing?

Say people want to do presentations like PowerPoint; where is the application on GNOME so these people can get the work done?

Say people want to do 3D work for their mechanical engineering course; where do they get the application for GNOME?

Say people want to do UML for their university course; where do they get that program for GNOME? DIA? Hell, I am a practical example that DIA is unusable for getting anything done.

Now where is the software on GNOME to get exactly that work done? Looking over at KDE, the software already exists.

Ok, I am not blaming GNOME for not having all this. NO. But I wanted to make you understand that a good framework is required to guarantee rapid application development. Rapid application development means users do not need to wait 2 years to get their work done, since they already have the software today, and that software keeps being improved. The developers have no problem changing parts of their code to fit the framework, since the framework on KDE is already in very good condition. They concentrate on the fun stuff, improving and enhancing their applications, rather than fixing things or getting their app to understand a newly changed API.

You know, a good Framework means that you can quickly develop programs. Programs that people can use to get serious work done.

I always wished GNOME had a great development framework like KDE has, but sadly it hasn't, and this is what I would like people to understand. There is no point blaming one desktop and favoring another just for the icons or for the themes (as this editorial does); it matters more that we have a good framework for the future and a guarantee that apps get written in quantity.

This is all I wanted to say, nothing more, nothing less. If you are not willing to understand this (or not able to, due to limited knowledge), then this is your problem, not mine. I took quite a lot of time to explain these things to you. By now everyone else reading this should have understood the points.

Let me give you a few examples of what I consider a broken framework:

a) Implementing new features, but only halfway. Adding GStreamer to GNOME, for example, is indeed a nice thing, but adding it to only half of the apps and skipping the others is a bug.

b) Fixing half of the stuff in apps. Say you commit a patch that fixes 2 dialogs in Nautilus but leaves the others as they were 2 years before; that is imo a bug. It makes using the app become, well, ugly.

c) Offering multiple ways to open a Window in GNOME is a bug: GTK+, GnomeUI, BonoboUI. This leads to inconsistency and total clutter.

d) Writing a new Fileselector but having the default apps use a mixture of old and new Fileselectors is imo a bug. By the way, why should a developer waste time fixing all the old and new file dialogs? If the stuff were properly written, you would simply inherit the new Fileselector without noticing it; it would simply be there. Here is proof that not everything in GNOME 2.0 was rewritten: much of the stuff was simply ported from 1.x.

e) When copying files via Nautilus (say from ftp.gnome.org), and you copy a subdir which includes MORE directories and files from that FTP to your Desktop, you get stuff like

(copying file 98 of 23)

or get 0-byte files copied from that FTP to your Desktop; this is a bug.

f) Gnomifying OpenOffice is an even bigger bug. The entire OpenOffice framework is based on the StarOffice foundation classes (their own widgetset). Gnomifying all this is simply an idiotic task and leads to fragmentation in the code. Again, they will do this work only halfway: only what you see will be changed, not the rest. So the result is a mixture of old code and newly changed user experience.

g) Hardcoded UI is a bug (at least under GNOME); it leaves no room for UI designers to fix things without coding skills. Where should they start? In the hardcoded stuff? In the *.glade stuff? In the *.xml stuff?

h) Having every app do its own bookmarks system is a bug; there is no central bookmarks solution. Same for addressbooks, etc.

i) Calling out a bounty and having people 'tweak and fiddle' Evolution support into the panel Calendar is a bug and not a feature. A feature would be changing the Calendar object so that when it is inherited into other applications, all of them benefit from the Evolution support, and not just one.

----
And yes, what you write is indeed also a big problem (at least your text is partially right): a lot of undocumented API changes, and a lot of undocumented changes in general.

E.g., I wrote a little application which uses a GtkCombo. I was under the assumption that I was using a good API from GTK, and then one day they changed the widget and marked it DEPRECATED, and this in an app I had only just written.

The changes are quite big, and I feel quite frustrated having to adapt to the new widgets. These are not trivial changes; the changes I have to make are quite big and will take me a couple of days, days during which I have to stay motivated to do the work. Now, instead of improving my application, I need to fiddle around removing the old stuff, going through 10 source files to take it out. Not to mention that I also need to rewrite huge chunks of code just to fix it.

While the old GtkCombo allowed me to simply attach a GList to it (my 'History' function is based on a GList which contains 5 list entries with data attached), I now need to create an entire TreeModel and populate that tree with these values.
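For illustration, roughly what that migration looks like. This is a sketch from memory of the GTK+ 2.x API; treat the exact calls as assumptions:

#include <gtk/gtk.h>

static void build_history_combo(void)
{
    /* Old way: attach a GList of strings to a GtkCombo (now deprecated). */
    GList *history = NULL;
    history = g_list_append(history, (gpointer)"first entry");
    history = g_list_append(history, (gpointer)"second entry");
    GtkWidget *combo = gtk_combo_new();
    gtk_combo_set_popdown_strings(GTK_COMBO(combo), history);

    /* New way: build a model-backed GtkComboBox instead. */
    GtkWidget *box = gtk_combo_box_new_text();
    gtk_combo_box_append_text(GTK_COMBO_BOX(box), "first entry");
    gtk_combo_box_append_text(GTK_COMBO_BOX(box), "second entry");
}

(gtk_combo_box_new_text() hides the list model behind a convenience wrapper; building a full GtkTreeModel by hand is more code still.)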

What I am doing here is changing a well-thought-out interface (which I spent hours figuring out before) into a new interface, tweaking and fiddling the stuff to make it fit. Which then leaves other parts of my code slightly less optimal than I had in mind before.

Why so frustrated, and why attack my person? Do you fear that I could be right and you not? Your reply is far too ridiculous, and only an attempt to publicly destroy my credibility rather than a sign of willingness to accept the criticism as I write it (since it is right), start a discussion with the community, and get these problems solved. People like you are more likely to attack those who bring up constructive criticism and feedback than to be truly willing to change things.

> What are you actually trying to say. Does every
> application need to add GStreamer? Why don't you specify
> precisely what features are not implemented and state
> where GNOME has stated officially or unofficially not to
> implement it.

Not every application needs to embed GStreamer; that's pure rubbish. But the audio stuff should, and should do so correctly. Right now in GNOME we deal with direct Xine calls and GStreamer calls. Developers have been choosing Xine in many tools (Totem and Rhythmbox) for stability reasons, because GStreamer is still unfinished, with no stable API and no general stability. They do still offer the possibility of including the GStreamer stuff, but what is the benefit when it locks up during playback or simply doesn't play back at all? Go and look at the code yourself if you don't trust my words. Let's continue with the new GMixer: it was hyped as now supporting GStreamer, but it doesn't. When I select 'alsasink' in GConf-Editor, I would like to be able to mix the ALSA streams, and not get a dialog saying the sound device can not be found. The reason it can not be found is that it still expects the OSS emulation in ALSA to be active, so it mixes only the OSS part of ALSA and not the native part. I thought it used GStreamer here, so I would assume it uses the right sinks and right devices to mix. Just one example.

> Still short on specifics. If you are such a great software
> developer and you claim to have been working for GNOME but
> yet you have not been able to solve even one of the
> problems you whine about years on end.

The problem here is that you can't simply send in bugfixes or patches when your gut tells you that the whole thing is plain wrong and needs to be redone correctly. See it like a house where you keep gluing stuff on: a bit here, a bit there, a bit in another place. You see that what you are doing makes no sense, but you continue anyway, because you can't convince the other owners that it would be better to tear down the entire house and start from scratch.

> Where are all the bug reports you have filed? Where are
> all the patches you are submitted that the GNOME 'people'
> have refused to commit. Please give us more facts, and
> soon.

They are either on bugzilla.gnome.org or made their way into the applications, in cases where they were accepted. I know you are trying to nitpick here, but you won't be successful. For further information you can look into the ChangeLogs. But this isn't your point at all; you will reply and tell me that you weren't able to find anything (many others have tried this before). I think you should get out of your tunnel vision and your evangelism and start looking at the real problems. Guess why there aren't any changes in GNOME: because people and developers fear making these changes or raising constructive criticism, because it ends in things like this: ignorance, elitism, tunnel vision, and even worse, namecalling.

> How about you give an example of the clutter that this
> causes. Are you complaining that GTK+ has only one way to
> open a window, or that GnomeUI has only one way to open a
> window or that GTK, GnomeUI and BonoboUI altogether have
> three ways to open a window?

The problem here is interoperability with the rest of GNOME. Try opening a couple of applications on your GNOME desktop: say one program written using GnomeUI, one written in GTK+, one written in BonoboUI. Now go to:

Desktop-Preferences -> Menus&Toolbars

and fiddle around with the values there. You will see that some programs immediately change their toolbar and menu behaviour, some do not, some change their appearance only after being closed and reopened, and some do not react to these settings at all. Just one example, only to satisfy your questions here.

Technically they are a pain to maintain too, especially for UI people, those who go from app to app fixing all the paddings, the layout of buttons and widgets, etc. There is a big difference between using one GUI designer or one system to change all this, and needing to learn 3-4 different but commonly used ways to do it.

It is problematic having hardcoded UI in the code (which requires a UI expert to learn to program to fix these things), or using GLADE (which can create *.glade files or simply embeddable code), or using BonoboUI with its *.xml files. They are all totally different: different attributes, different behaviour, etc. And yet I see people using all of these in their own apps over and over again. Sure, core developers may use the correct way for their upcoming products, but not the new developers who start working on their apps. They use GLADE to build the interface but forget, or don't know, that GLADE isn't aware of the newly DEPRECATED widgets or of the new widgets introduced lately. Go and look yourself.

> just tells me that you don't know or understand sh*t about
> things in GNOME as you claim all the time. It also brings
> doubt your claimed knowledge about software development
> because I don't need to remind you of what happens during
> API changes such as the one going on in the FileSelector.

No, I was demonstrating how good the KDE framework is for these kinds of things. They change the Fileselector object once (regardless of what changes they make), and it's automatically inherited by the other apps. Meanwhile in GNOME they now offer 2 Fileselectors, the old and the new one. The different API is a problem here, but it's also a sign that the stuff is simply an artifact from GTK 1 and GNOME 1 times. If it were a total rewrite, as you want to make me believe, then this would have been introduced far earlier.

What I am also talking about is the consistent look of these Fileselectors. The new Fileselector offers this stupid 'expander' widget, where you first get a location bar (at least some apps show this) and then need to press the expander to get the rest of the files shown. Too much magic and too many 'usability experts' have once again made a huge mess of a simple fileselector. Jesus, we have used fileselectors for 20 years and longer; they do what the name says, showing files and directories where we can simply dive in and do the task. No magic techno stuff that requires 3 mouseclicks to actually do what I want.

> This one and all the rest of your 'examples' are just too
> funny. You have just gone to 'bugzilla' and copied things
> over.

No, I didn't copy them, but it's fine by me that you confirm these problems exist. Now we get GNOME 2.6 in a couple of days, and these problems have been there since GNOME 2.0 or even earlier, who actually knows. You seriously want to go enterprise with these problems? And when will they get fixed? Is it even possible to fix these issues?

> Unless you can show an official or unofficial policy
> from GNOME not to fix these issues, they are moot. Just
> because they are not fixed when you want does not make the
> framework broken.

As long as these things are not fixed, and it's unknown whether they can be fixed at all, then yes, I do tend to say they are broken. People who write an FTP client for GNOME use an alternative library for these things, since they can not reliably use the ones offered by the GNOME framework due to these errors. People writing a web browser, for example, can't use the HTTP backend of gnome-vfs because it doesn't work reliably, not doing redirects for URLs, etc. Sure, there are always bugs in such projects. I am the last person to claim there aren't bugs; every bigger project has a lot of them, and this is natural and just the way it is.

But here are the fundamental problems I see in GNOME. They spend too much time rewriting stuff over and over again, wanting to do everything the right way (excuse me, there is no such thing as the right way; there is just one way and another way, but the right way doesn't exist). You need to finish a project and then head over to the next one, making sure that with each new version of GNOME the stuff you offer people is less painful and more usable.

Nautilus used to show signs of becoming, well, not as crappy a filemanager as it used to be, and now it has been changed into a spatial filemanager. This is a drastic change for the users. While making such drastic changes in the behavior of Nautilus, they forget to fix the other things, due to lack of resources. Imo it would have been better to fix gnome-vfs and all the other tiny bits and bytes rather than rewrite stuff that had been written before and had shown signs of working. This goes on and on in GNOME, with still no point where we as users can say: look, the evolution of this software is finished, we can leave it that way and continue working on other bits. No, they are busy throwing out concepts and rewriting them over and over again. And all the developers outside, working on their own software, need to play catch-up to keep their apps following the new changes. Instead of doing the fun stuff, continuing to improve their apps, they are stuck in all these messy changes.

And hey, this is just my very own opinion. That's why I wholeheartedly welcome the Quality Assurance team in KDE. They will clearly signal the developers: 'hey, what are you doing now?'

> I'm sure I'm not the only one tired of seeing all these
> verbose spillage of fud from you every time a GNOME
> article shows up.

Whatever you think. There are people out there who agree with me, and there are people out there who agree with you. That's life, but I do see a reason to make people understand these things before writing editorials like this one. A good solid framework and nice applications are important.

People come back over and over with their handful of applications they like to use, and others come back with the same old junk over and over. In the same way, I come back with the same points, to make people understand the problems here.

GNOME has copied a lot of stuff from MacOSX and Windows in the past months and years. Sadly, the wrong bits were copied.

One last thing to add from my side that people do not think about: KDE already offers all these things. Two years ago, when I used KDE 3.x, I already noticed a lot of stuff in KDE that was missing, and still is missing, in GNOME.

I do know that one day someone will fix the broken gnome-vfs. But when? As long as these things are not working properly, people use other libraries to solve the problems they need to solve. GNOME may (or may not) get all these things one day, say 2 years from now. But KDE had exactly these things 2 years ago already. There is a development difference of 4 years between the two desktop solutions. While GNOME is catching up to what KDE offered 2 years ago, KDE continues to quickly expand in all areas; the applications it offers keep growing, and new applications can easily be developed in a short timeframe.

You should take these things into account too when writing such editorials, not just look at fancy icons and compare two screenshots. I would say the same things about GNOME if GNOME were the one far ahead of KDE. Although even KDE is lacking a bunch of things that I would like to see improved:

- Split applications out into their own modules as in GNOME, rather than putting everything into kdelibs, kdebase, kdeutils, etc.
- A cleaner layout of headers in the include directory, as GNOME does (by default, and not as a per-distro workaround during install).
- Make sure the .po files come with the module rather than in a separate huge translation tarball.

Here, again, I like the way GNOME does it. As you can see, neither of the two is perfect.

Of course it is. (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#9070661)

You know, the problem with jumping in just to say 'First Post!' is that your comment invariably suffers from a lack of content.

Damn! (4, Funny)

Spruce Moose (1857) | more than 10 years ago | (#9070662)

If only I had written my first post program with performance in mind, I might not have failed it!

Re:Damn! (2, Funny)

nacturation (646836) | more than 10 years ago | (#9070766)

Next up... Slashdot: Posting as if Karma Mattered

me too... (0, Offtopic)

interactive_civilian (205158) | more than 10 years ago | (#9070867)

Yeah...my first version of "Hello World" could also have used a bit more optimization...

;p

speed/easy coding (3, Interesting)

rd4tech (711615) | more than 10 years ago | (#9070665)

The golden rule of programming has always been that clarity and correctness matter much more than the utmost speed. Very few people will argue with that

Really? How about the server side of things?

Shameless bragging: Why don't you take a look at my page to get a whole new view on performance?

Re:speed/easy coding (1)

Anonymous Coward | more than 10 years ago | (#9070796)

yes even on the server. fast but broken is useless.

otoh, sometimes (not usually) fast *is* important, and you have to design for it.

btw, weird comment about cpuedge. Why do you think a few tuned libs would give anyone 'a whole new view on performance'? same old same old.

Re:speed/easy coding (4, Insightful)

ArbitraryConstant (763964) | more than 10 years ago | (#9070823)

On the server side security is an issue (also on the client side, clearly). If your code isn't clear and correct, the number of bugs is likely to be higher than average, and bugs lead to exploits. Your libraries may be well written, I don't know specifically. It's possible to do both, just hard.

Re:speed/easy coding (3, Insightful)

edwdig (47888) | more than 10 years ago | (#9070857)

I'd say his points are more true on the server side than the client side.

Say you're a large business, and you have a mix of client-side and server-side applications, both with significant processing time requirements. Which do you spend more time optimizing?

In this scenario, you're going to have a large number of client machines and a small number of servers. If servers need a little more power, you can upgrade the machine without too much disruption or money spent. The upgrade will benefit all users of the system. In this case, it's more cost effective to upgrade the server than it is to pay developers to optimize the hell out of the code.

The client machines are a different story. There are a lot of machines in use. Upgrading any one of them will only help the user of that computer; optimizing the code will help every user. In this case, paying a developer to optimize your code will be a lot cheaper than doing a company-wide hardware upgrade.

This is all of course assuming you're designing things well in the first place. Of course you should do things like use a quicksort (or whatever may be more appropriate in the case at hand) instead of a bubble sort. The point is, it's not worth spending days to get the last 1% of performance.
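To make that concrete, a minimal sketch in C of "call the library's sort instead of hand-rolling a bubble sort" (the comparator convention is the standard qsort one):

#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort: negative, zero or positive, like strcmp. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    int v[] = { 42, 7, 19, 3, 23 };
    size_t n = sizeof v / sizeof v[0];

    /* O(n log n) on average, already debugged, one line to call. */
    qsort(v, n, sizeof v[0], cmp_int);

    for (size_t i = 0; i < n; i++)
        printf("%d ", v[i]);
    putchar('\n');
    return 0;
}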

Followed your link (3, Insightful)

ccoakley (128878) | more than 10 years ago | (#9070913)

1. Is that your sig, or is that part of your comment? If it is part of your comment, please explain why it would give me a whole new view on performance. If it's your sig, then it's spooky how it relates to the topic.

2. Assuming your stuff is good, when are you going to code up SHA-1 (*MY* favorite hash)?

3. On the server side of things, I would argue that correctness is more important than elsewhere. If an app crashes 1 in 100 times for a desktop user, the developer blames Windows and the user is satisfied (don't flame me on this, please). On the server, if the app crashes 1 in 100 times, it may bring down the transactions of hundreds of users, making things very bad for the developer. For non-crash correctness problems, consider a bug which makes a minor but cumulative error across subsequent runs. That would likely be disastrous in the server situation.

As for clarity: find me one developer who has taken over a project and never complained about the quality of the inherited code. Seriously. (That's not directed at the parent.)

The question I always ask is (5, Insightful)

Anonymous Coward | more than 10 years ago | (#9070667)

Is the time it takes me to do the performance optimization worth it in time or money?

Re:The question I always ask is (2, Informative)

rpozz (249652) | more than 10 years ago | (#9070697)

Performance can be quite a major thing if you're doing a lot of forking/threading (i.e. in a daemon). If you create 100 threads, any memory leaks or bottlenecks are multiplied 100 times.

However, a 0.1s delay after clicking an 'OK' button is perfectly acceptable. It all depends on what you're coding.

Re:The question I always ask is (1)

dhalgren99 (708333) | more than 10 years ago | (#9070883)

Yeah, tell that to my manager...

Re:The question I always ask is (4, Insightful)

irokitt (663593) | more than 10 years ago | (#9070740)

Probably not, but if you are working on an open source project, we're counting on you to make it faster and better than the hired hands at $COMPANY. That's what makes OSS what it is.

Re:The question I always ask is (2, Insightful)

prockcore (543967) | more than 10 years ago | (#9070770)

Is the time it takes me to do the performance optimization worth it in time or money.

The question I ask is: can the server handle the load any other way? As far as my company is concerned, my time is worth nothing; they pay me either way. The only issue is, and will always be: will it work? Throwing more hardware at the problem has never solved a single performance problem, ever.

We've been slashdotted twice. The pipeline involves a database request, a SOAP interaction, a custom Apache module, and an XSLT transform.

Our server never even came close to its breaking point. I attribute it to optimizing for performance.

Re:The question I always ask is (0)

techno-vampire (666512) | more than 10 years ago | (#9070807)

Throwing more hardware at the problem has never solved a single performance problem, ever.

Tell that to Micro$oft. Every version of Windows since 95 has needed more memory and speed and still works slower.

Re:The question I always ask is (3, Insightful)

bm_luethke (253362) | more than 10 years ago | (#9070791)

"Is the time it takes me to do the performance optimization worth it in time or money."

To a certain extent. I've seen that excuse used for some pretty bad/slow code out there.

Writing efficient and somewhat optimised code is like writing readable, extensible code: if you design and write with that in mind, you usually get 90% of it done for very little (if any) extra work. Bolt it on later and you usually get a mess that doesn't actually do what you intended.

A good programmer should always keep both clean code and fast code in mind while writing software.

What annoys me (4, Insightful)

Anonymous Coward | more than 10 years ago | (#9070672)

is that ms word 4 did all I need, and now the newest office is a thousand times the size and uses so much more cpu and ram but does no more.

a sad inditement

Re:What annoys me (1, Offtopic)

Planesdragon (210349) | more than 10 years ago | (#9070756)

is that ms word 4 did all I need

And you aren't still using it why?

(hint--your answer is the reason why MS 4 doesn't do all you need.)

the newest office is a thousand times the size and uses so much more cpu and ram but does no more.

Wrong. Office does a LOT more. Tasks that used to require running a separate process now run idly in the background, multiple times in between keystrokes. If my PC falls behind my typing now, I know there's a problem, not just that I'm typing too fast.

Of course, the world would be a cleaner place if MS made up their minds between "easy to use" and "powerful", instead of trying to be both and failing miserably at each.

Re:What annoys me (5, Insightful)

DrEasy (559739) | more than 10 years ago | (#9070853)

And you aren't still using it why? (hint--your answer is the reason why MS 4 doesn't do all you need.)
Or maybe because you are forced to upgrade to read files that were created with a more recent version?

Re:What annoys me (2, Insightful)

Bush Pig (175019) | more than 10 years ago | (#9070879)

Probably the only thing Word 4 doesn't do that he needs is read the Word 97 (or whatever) files that other people keep sending him.

Re:What annoys me (5, Funny)

Anonymous Coward | more than 10 years ago | (#9070758)

> a sad inditement

Well, it does have a spell checker now...

Its not just MS (0)

Anonymous Coward | more than 10 years ago | (#9070773)

Performance is an issue for us too.

I just want a GNU distro that runs as fast as windows 98. Debian based. And a pony.

A Poem (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#9070675)

~-~-~-

I truly feel it is my calling
To go throughout this land installing
Gloryholes in all the stalls
Of the USA

So if you hear my labored sawing
Through yonder stall partition walling
Poke ol' Norton through the hole
and say "Neighbor, good day!"

And don't be shy
Or try to lie
Because it's plain as day
Dear friend, you're reading Slashdot
And therefore you are gay

Managed environments (4, Funny)

Nick of NSTime (597712) | more than 10 years ago | (#9070678)

I code in managed environments (.NET and Java), so I just let some mysterious thing manage performance for me.

Re:Managed environments (1)

rpozz (249652) | more than 10 years ago | (#9070711)

I code in managed environments (.NET and Java), so I just let some mysterious thing manage performance for me.

.NET/Java help with managing performance issues to a certain extent; however, if you happen to create a large number of threads in Java, you will see quite a major speed decrease. Running in a VM isn't an excuse for inefficient code (in fact it can make things even worse in some cases).

Re:Managed environments (1)

Erratio (570164) | more than 10 years ago | (#9070760)

Oh come on... .NET or Java programs running slow? That's impossible. Next thing you're going to tell me is that they take up unnecessary resources too.

Re:Managed environments (1)

asb (1909) | more than 10 years ago | (#9070792)

however if you happen to create a large number of threads in Java, you will get quite a major speed decrease

No shit, Sherlock? How about this: "if you happen to fork a large number of processes in C you will get quite a major speed decrease."

There is no silver bullet that will let you write efficient programs. Not creating enough threads can be just as bad a performance bottleneck. The performance issues in Java are the same issues as in other languages. In the end you just have to know what you are doing, and what the APIs you are using are doing.

Re:Managed environments (4, Informative)

metlin (258108) | more than 10 years ago | (#9070786)

Contrary to popular belief, managed code environments do optimize code a whole lot more than you would think!

Joe Beda [eightypercent.net], the guy from Microsoft behind Avalon, had a discussion on Channel9 where he talked about why managed code is not that bad a thing after all.

Like I mentioned in an earlier post, managed code helps optimize the code for some of the bad programmers out there who cannot do it themselves, and it takes care of a lot of exceptions and other "troublesome" things :) So, in the long run, it may not be that bad a thing after all.

There are two facets to optimization - one is optimization of the code per se, and the other is the optimization of the project productivity - and I think managed code environments do a fairly good job of the former and a very good job of the latter.

My 0.02.

Funny thing about performance (5, Interesting)

ObviousGuy (578567) | more than 10 years ago | (#9070683)

You can spend all your time optimizing for performance, and when you finally release your product, your competitor, whose main objective was to get the product out the door faster and who uses a slower algorithm, is already first in mindshare with your customers. Not only that: the processors you thought you would be targeting are already a generation behind, and the algorithm that was going to hold back your competition runs perfectly fast on the new processors.

Performance gains occur at the hardware level. Any tendency to optimize prematurely ought to be avoided, at least until after v1.0 ships.

Re:Funny thing about performance (1, Funny)

Anonymous Coward | more than 10 years ago | (#9070701)

Yeah. I bet you use bubblesort too...

Re:Funny thing about performance (2, Interesting)

ObviousGuy (578567) | more than 10 years ago | (#9070706)

No, I use the language's sort routine. This typically means quicksort or heapsort.

Do you code all your own algorithms?

Re:Funny thing about performance (5, Insightful)

corngrower (738661) | more than 10 years ago | (#9070724)

Any tendency to optimize prematurely ought to be avoided, at least until after v1.0 ships.


Assuming there is a second version, which there may not be because potential customers found that the performance of v1.0 sucked.

Re:Funny thing about performance (1)

Erratio (570164) | more than 10 years ago | (#9070802)

The release of v1.0 is of course a somewhat arbitrary milestone. I think what is implied is that the program should originally be written with clarity and reusability in mind; then, after an initial version is completed, any bottlenecks should be optimized (for me that happens well before v1). Saving half a second in a routine that runs once in a while can be delayed for a long time, if not indefinitely, but code that is passed through a million times every time the program is used will benefit immensely from even the slightest tweak.

Re:Funny thing about performance (2, Insightful)

naden (206984) | more than 10 years ago | (#9070866)

Assuming there is a second version, which there may not be because potential customers found that the performance of v1.0 sucked.

Better a version 1.0 that sucked than none at all.

And funny how Microsoft seems to release so many crappy 1.0 releases yet usually ends up clawing back to become the market leader.

Re:Funny thing about performance (0)

Anonymous Coward | more than 10 years ago | (#9070736)

You can spend all your time optimizing for performance and when you finally release your product, your competition whose main objective was to get the product out the door faster, who uses a slower algorithm, is already first in mindshare with your customers.

Either that or you are successful and make lots of money at which point Microsoft decides they will take your customers by writing a faster app using their undisclosed API calls.

Re:Funny thing about performance (4, Insightful)

metlin (258108) | more than 10 years ago | (#9070755)

Well said.

However, I will dispute the claim that performance gains happen only at the hardware level - although programmers cannot really optimize every tiny bit, there is no harm in encouraging good programming.

The thing is that a lot of programmers today have grown NOT to respect the need for performance - they just assume that upcoming systems will have really fast processors and infinite amounts of RAM and disk space, and they write shitty code.

I agree that, as Knuth said, premature optimization is the root of all evil. However, writing absolutely non-optimized code is evil in itself - when a simple problem can be simplified in order and time, it's criminal not to :)

A lot of times, programmers (mostly the non-CS folks who jumped on the programming bandwagon) write really bad code, leaving a lot of room for optimization. IMHO, this is a very bad practice, something we have not really been paying much attention to because we always have faster computers coming.

Maybe we will never hit the hardware barrier, but if we ever do, I'm sure this will show through.

Re:Funny thing about performance (4, Interesting)

techno-vampire (666512) | more than 10 years ago | (#9070868)

The thing is that a lot of programmers today have grown NOT to respect the need for performance - they just assume that upcoming systems will have really fast processors and infinite amounts of RAM and disk space, and they write shitty code.

That's not the only reason. Programmers usually get to use fast machines with lots of RAM and disk space, and they often end up writing programs that need everything those machines have.

Back in the DOS days, I worked on a project that had a better way of doing things. We had one machine of reasonable speed as the testbed. Its memory wasn't specially optimized, since we didn't expect our customers to know how to do that, and the programs we were writing couldn't rely on expanded or extended memory. If what you wrote wouldn't run on that machine, it didn't matter how well it worked on your own machine; you had to tweak it to use less memory.

Re:Funny thing about performance (2, Informative)

Anonymous Coward | more than 10 years ago | (#9070763)

Obviously you have never done any programming with respect to cryptography. Optimization is *_N-E-V-E-R_* done in the hardware!!! The difference between using a good algorithm and a crappy one is the difference between 2 days for the program to run and fifty trillion centuries (literally). Hardware upgrades are merely incremental. Moore's law says speed doubles every 18 months, but doubling is a tiny incremental increase; if you want an exponential/logarithmic change, you have to use software. I'm not talking about "oh, twice as fast as a year ago", but "10,000 times as fast as that other software" or "1 billion times as fast".
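To make that concrete, a small sketch in C (with toy 64-bit numbers rather than real crypto bignums): computing base^exp mod m by repeated multiplication takes exp steps, while square-and-multiply takes about log2(exp) steps. That gap is algorithmic; no hardware upgrade closes it.

#include <stdint.h>
#include <stdio.h>

/* Naive modular exponentiation: O(exp) multiplications. */
static uint64_t modexp_naive(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t r = 1;
    for (uint64_t i = 0; i < exp; i++)
        r = (r * base) % m;
    return r;
}

/* Square-and-multiply: O(log exp) multiplications. */
static uint64_t modexp_fast(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t r = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            r = (r * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return r;
}

int main(void)
{
    /* Same answer; about 1,000,000 steps versus about 20. */
    printf("%llu\n", (unsigned long long)modexp_naive(7, 1000000, 1000003));
    printf("%llu\n", (unsigned long long)modexp_fast(7, 1000000, 1000003));
    return 0;
}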

Method of Payment... (1)

inf0rmer (545195) | more than 10 years ago | (#9070686)

What, you mean I don't get paid per line of code I write?

Longer code is faster code (1, Funny)

Anonymous Coward | more than 10 years ago | (#9070695)

Unroll those loops by hand. You'll get a little bump in speed. And a little bump in your pocketbook.
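For instance, a hand-unrolled sum in C (a sketch; whether this actually beats the compiler's own unrolling is very much machine- and compiler-dependent):

#include <stddef.h>

/* Sum an array four elements at a time, using four accumulators so the
   additions can overlap. Assumes n is a multiple of 4 for brevity; a
   real version would handle the remainder. */
long sum_unrolled(const int *a, size_t n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return s0 + s1 + s2 + s3;
}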

Re:Longer code is faster code (0)

Anonymous Coward | more than 10 years ago | (#9070820)

I want more of Duff's device. :)
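For readers who haven't seen it: Duff's device interleaves a switch with an unrolled copy loop, jumping into the middle of the loop body to handle the leftover count % 8 elements. A sketch of the classic form (the 1983 original copied to a fixed I/O register; this variant is memory-to-memory and assumes count > 0):

#include <stddef.h>

void duff_copy(char *to, const char *from, size_t count)
{
    size_t n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}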

A little bump in your pants (0)

Anonymous Coward | more than 10 years ago | (#9070873)

bump in your pants whaaa!!!

the software taketh what the hardware giveth. (4, Insightful)

equex (747231) | more than 10 years ago | (#9070690)

i remember times when 7.14mhz and 256k ram was enough to drive a multitasking windowed os (amiga).
ive seen glenz vectors and roto-zoomers on the commodore 64.
modern os's, especially windows, seem super-sluggish when you see what is possible on those old computers if you just care to optimize the code to the max.

Re:the software taketh what the hardware giveth. (2, Interesting)

neil.orourke (703459) | more than 10 years ago | (#9070782)

But the great demos on the Amiga and C64 never hit the OS.

Have a look at some of the PC demos that boot from DOS and take over the machine (eg. www.scene.org) and tell me that they aren't just as amazing.

If everyone paid attention in english class... (2, Informative)

fervent_raptus (664099) | more than 10 years ago | (#9070704)

this slashdot post would read:

I just finished reading the essay "Programming as if Performance Mattered", by James Hague. The essay covers how compiler optimization has changed over the years. If you get bored, keep reading; there's a big 'gotcha' in the middle. Hague begins: "Will performance issues haunt us forever? This essay puts performance analysis in perspective."

I think... (2, Interesting)

rms_nz (196697) | more than 10 years ago | (#9070707)

...it would have been better for him to show the run times of all the versions of his program, to show us what difference each of the changes made...

Make the common case fast (2, Interesting)

DakotaSandstone (638375) | more than 10 years ago | (#9070709)

Yes, yes, yes. Do optimize. But, come on people, do we really need to turn that nice readable device init code that only executes once into something like:
for (i=0,j=0,init();i!=initDevice(j);j++,writeToLog()) ;

Sheesh!
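For contrast, a readable equivalent of what that one-liner appears to do. init(), initDevice() and writeToLog() are the hypothetical functions from the example, and I'm assuming initDevice() returns 0 once a device initializes:

/* Hypothetical declarations, matching the one-liner above. */
extern void init(void);
extern int  initDevice(int n);
extern void writeToLog(void);

void setup_device(void)
{
    init();
    int device = 0;
    /* Try successive device numbers until one initializes, logging
       each failed attempt along the way. */
    while (initDevice(device) != 0) {
        device++;
        writeToLog();
    }
}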

Re:Make the common case fast (0)

CosmeticLobotamy (155360) | more than 10 years ago | (#9070816)

Why not? If it takes you more than a second to read that and figure out what it does, you're in the wrong business.

Why are you using i to compare initDevice(j) to 0?

And to preempt the likely response, yes, I am in the wrong business.

You don't optimize, that's the job of the compiler (2, Insightful)

Anonymous Coward | more than 10 years ago | (#9070710)

If you write clear and simple code, the compiler or interpreter does all the other work. It will automatically remove unused code and simplify complex segments. As long as your code is not unnecessarily convoluted, the machine's optimizations are often better than the human brain's. It's like register allocation: you don't do that by hand. That would be crazy! Some poor fools 20 years ago had to do it by hand, and they came up with an algorithm that the computer now just runs for you.

That's the difference between modern languages and more archaic ones. Sure you can't get the "absolute best" most optimized optimization, but you're probably going to get a better optimization than you can think of just from the interpreter/compiler doing its job.

The only thing that really needs optimization is streamlining data structures because the compiler can't predict what part of the data structure isn't used during runtime. You just ned to make sure you use the right data structure for the job and put the basic pen-and-paper (optimized) algorithm down in plain code. No strange hacker tricks needed.
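A small sketch of that last point, with invented names: the same lookup done against a flat vector and against a std::map. No compiler will turn one into the other; picking the structure is the programmer's job.

#include <map>
#include <string>
#include <utility>
#include <vector>

typedef std::vector<std::pair<std::string, int> > Flat;
typedef std::map<std::string, int> Indexed;

// O(n): scans every entry until it finds the key.
int lookup_flat(const Flat& v, const std::string& key) {
    for (size_t i = 0; i < v.size(); ++i)
        if (v[i].first == key) return v[i].second;
    return -1;
}

// O(log n): the container does the searching.
int lookup_indexed(const Indexed& m, const std::string& key) {
    Indexed::const_iterator it = m.find(key);
    return it != m.end() ? it->second : -1;
}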

Re:You don't optimize, that's the job of the compi (0)

Anonymous Coward | more than 10 years ago | (#9070828)

Unless the compiler is VC++ 5.0 -- try looking at an asm dump from that puppy. Ouch.

Re:You don't optimize, that's the job of the compi (1)

neil.orourke (703459) | more than 10 years ago | (#9070846)

No matter how good the compiler is, it can't possibly compensate for poor program design.

Right at the outset, you need to decide if the program is performance critical or not. Take, for example, this code:

class fred {
public:                     // q is public so that both styles below compile
    int q;

    void setQ(int newQ)     // trivial setter: just stores its argument
    {
        q = newQ;
    }
};

fred myFred;

Now, which is going to execute faster:
myFred.setQ(10);

or

myFred.q=10;

Using your super optimising compiler, how is it going to know the best way of setting q in myFred? It can't, because no compiler could make an assumption like that.

Re:You don't optimize, that's the job of the compi (1)

prockcore (543967) | more than 10 years ago | (#9070864)

So long as your code is not unnecessarily convoluted often the machine optimizations are better than the human brain optimizations.

That's not what optimising is. It is a logic problem, one that a computer cannot solve. It is organization; it involves using your mind. It is making sure that your code isn't doing more work than it should. Restructuring code to remove redundant operations. Finding a better way to do the things you need to do.

Claiming that a compiler can do a better job of optimizing than a human is exactly the same thing as claiming that a computer makes a better opponent in UT2k3 than a human.

Humans have creativity on their side.

Re:You don't optimize, that's the job of the compi (4, Insightful)

techno-vampire (666512) | more than 10 years ago | (#9070894)

If you write clear and simple code the compiler or interpreter does all the other work.

I remember looking over something once that was clear, simple and very slow. It was a set of at least twenty if statements, testing the input and setting a variable. The input was tested against values in numeric order, and the variable was set the same way. Not even else-ifs, so the code had to go through every statement no matter the value. I re-wrote it as a single if, testing whether the input was in the appropriate range and calculating the variable's value. No compiler is going to do that. Brute force can be clear, simple and slow.
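A hypothetical reconstruction of that pattern (names and ranges invented):

// Before: every if is evaluated on every call, even after a match.
int classify_slow(int input) {
    int result = 0;
    if (input >= 0 && input < 10) result = 1;
    if (input >= 10 && input < 20) result = 2;
    if (input >= 20 && input < 30) result = 3;
    /* ...and so on, for twenty-odd statements... */
    return result;
}

// After: one range test and a calculation.
int classify_fast(int input) {
    if (input < 0 || input >= 200) return 0;  // outside every range
    return input / 10 + 1;                    // values follow numeric order
}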

Re:You don't optimize, that's the job of the compi (0)

Anonymous Coward | more than 10 years ago | (#9070896)

I agree with your statement.

In my opinion there are two ways to optimize code. The bad way: optimizing line by line, writing ASM, etc. And the good way: using better program design and algorithms.

Of course the "bad way" can be good in certain specific occasions (scientific, embedded, etc.), but I would say that in general it's just a waste of time. For starters, the compiler takes into account things like the number of registers and parallel pipelining that we cannot optimize manually in C.

Also, it can often be cheaper to just buy a 200MHz-faster processor than to spend piles of cash optimizing software to the last bit and maintaining it. Of course, this is not so true for widely distributed desktop products, but it is for large in-house or consulting projects.

Don't agree (4, Interesting)

Docrates (148350) | more than 10 years ago | (#9070718)

While the author's point seems to be that optimization and performance are not all that important, and that you can achieve better results with how you do things and not what you use, I tend to disagree with him.

The thing is, in real life applications, playing with a Targa file is not the same as service critical, 300 users, number crunching, data handling systems, where a small performance improvement must be multiplied by the number of users/uses, by many many hours of operation and by years in service to understand its true impact.

Just now I'm working on an econometric model for the Panama Canal (they're trying to make it bigger and need to figure out if it's worth the effort/investment) and playing with over 300 variables and 100 parameters to simulate dozens of different scenarios can make any server beg for more cycles, and any user beg for a crystal ball.

Re:Don't agree (0)

Anonymous Coward | more than 10 years ago | (#9070785)

The thing is, in real life applications, playing with a Targa file is not the same as service critical, 300 users, number crunching, data handling systems, where a small performance improvement must be multiplied by the number of users/uses, by many many hours of operation and by years in service to understand its true impact.

Holy run-on sentence, Batman!

Re:Don't agree (4, Insightful)

mfago (514801) | more than 10 years ago | (#9070787)

Not what I got out of it at all, rather:

Clear concise programs that allow the programmer to understand -- and easily modify -- what is really happening matter more than worrying about (often) irrelevant details. This is certainly influenced by the language chosen.

e.g. I'm working on a large F77 program (ugh...) that I am certain would be much _faster_ in C++ simply because I could actually understand what the code was doing, rather than trying to trace through tens (if not hundreds) of goto statements. Not to mention actually being able to use CS concepts developed over the past 30 years...

Re:Don't agree (1)

bloosqr (33593) | more than 10 years ago | (#9070832)

It's possible, but Fortran has a lot of things going for it; if nothing else, our compilers are still pretty bad. I use C++ for all our numerical work, with "C-like" numerics using GSL/BLAS wherever possible, and have a lot of simple benchmarks of parts of our code using profiling. One thing I've noticed using the Intel compilers is that the compiler has a much easier time vectorizing parts of the code whenever possible. The other irony of using C++ is that it's pretty easy to "abstract" the code away: it's easier to see what the code is supposed to be doing than what it is actually doing, let alone the issues of having routines hiding in operators, destructors, constructors, etc.

Re:Don't agree (1)

fbform (723771) | more than 10 years ago | (#9070805)

author's point seems to be that optimization and performance are not all that important, and that you can achieve better results with how you do things and not what you use

I have to disagree with the author too, but on a different point. What the author has done is write some image-processing code, manually profile it (recognizing that several pixels get the value FF00FF looks like profiling to me), and then modify the algorithm (making those pixels transparent) to make it faster.

I would say that any such change to the algorithm, made after learning the nature of the data, is bound to beat blind compiler-level or language-level optimization that's oblivious to the data.

Re:Don't agree (1)

avelth (39410) | more than 10 years ago | (#9070808)

Actually, his point seems to be that optimization doesn't necessarily rely on your choice of tools. He wrote (and optimized) his program in a VM environment, and his performance was pretty good.

He even goes about it in a rational way:

1-write the program correctly (includes functional testing)
2-test with an eye on performance
3-make changes
4-goto 2

Notice he didn't obsess about performance in 1.

The Longhorn developers... (4, Funny)

ErichTheWebGuy (745925) | more than 10 years ago | (#9070719)


should really read that essay! Maybe then we wouldn't need [slashdot.org] dual-core 4-6 GHz CPUs and 2GB ram to run their new OS.

Re:The Longhorn developers... (1)

Linsaran (728833) | more than 10 years ago | (#9070772)

But won't someone please think of the little codelings! All that legacy code they'd have to delete to make Longhorn run efficiently -- it'd be Codeocide. We'd have millions of lines of little codelings whose legacy ancestors were deleted 'cause they're no longer needed. Won't someone please think of all the poor little codelings! Without their elders around to tell them about how they had to climb uphill both ways up the system bus to query Interrupt 9, we'll have 1337 h4xor gangs of code all over the place, selling their spam and spreading their viri everywhere.

Re:The Longhorn developers... (1)

NegativeK (547688) | more than 10 years ago | (#9070795)

should really read that essay! Maybe then we wouldn't need dual-core 4-6 GHz CPUs and 2GB ram to run their new OS.

Good lord, that article was _written_ for the general /. non-RTFA-readership.

I hate to spoil the article for people (hint, hint), but Longhorn in Erlang would be scary at best. At worst... well, let's just say that we'd have to wait for the Earth Simulator to come out before running it.

Or a computer that can handle Doom 3.

Re:The Longhorn developers... (1)

TrancePhreak (576593) | more than 10 years ago | (#9070861)

I guess that's why it currently runs just fine on slightly above average hardware right now.... AND IT'S IN DEBUG MODE.

Re:The Longhorn developers... (0)

Anonymous Coward | more than 10 years ago | (#9070915)

YOU INSENSITIVE CLOD!!!! It was a joke. Funny. Haha. LOL. Laugh.... Sheesh, some people!!!

Premature Optimization (4, Insightful)

Godeke (32895) | more than 10 years ago | (#9070725)

One of the concepts touched upon is the idea that optimization is only needed after profiling. Having spent the last few years building a system that sees quite a bit of activity, I have to say that we have only had to optimize three times over the course of the project.

The first was to get a SQL query to run faster: a simple matter of creating a view and supporting indexes.

The second was also SQL related, but on a different level: the code was making many small queries to the same data structures. Simply pulling the relevant subset into a hash table and accessing it from there fixed that one.

The most recent one was more complex: it was similar to the second SQL problem (lots of high-overhead small queries) but with a more complex structure. We built an object to cache the data in a set of hashes and "emulated" the MoveNext/EOF() ADO-style access the code expected.

We have also had minor performance issues with XML documents we throw around, may have to fix that in the future.

Point? None of this is "low level optimization": it is simply reviewing the performance data we collect on the production system to determine where we spend the most time and making high level structural changes. In the case of SQL vs a hash based cache, we got a 10 fold speed increase simply by not heading back to the DB so often.

Irony? There are plenty of other places where similar caches could be built, but you won't see me rushing out to do so. For the most part performance has held up in the face of thousands of users *without* resorting to even rudimentary optimization. Modern hardware is scary fast for business applications.
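A minimal sketch of the cache idea described in the second case, with a hypothetical db_query() standing in for the data-access layer and a std::map standing in for the hash table:

#include <map>
#include <string>
#include <vector>

std::string db_query(int key);  // hypothetical: one database round trip per call

class RowCache {
    std::map<int, std::string> rows_;
public:
    // One pass up front pulls the relevant subset out of the database.
    void load(const std::vector<int>& keys) {
        for (size_t i = 0; i < keys.size(); ++i)
            rows_[keys[i]] = db_query(keys[i]);
    }
    // After that, lookups never leave local memory.
    const std::string& get(int key) const {
        return rows_.find(key)->second;  // assumes key was loaded
    }
};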

Re:Premature Optimization (1)

prockcore (543967) | more than 10 years ago | (#9070801)

The first was to get a SQL query to run faster: a simple matter of creating a view and supporting indexes.

Every database programmer out there is cringing right now. Throwing indexes at a poorly designed table is not going to solve your problem. You'll find that as your table grows, inserts will start to bog the table down heavily.

Eventually you'll find that a simple insert locks up the table for several seconds, and the requests will start to pile up.

By all means, don't spend days optimizing useless things, but spending a few hours planning good table structure will save you a lot of headaches a few years down the line.

Re:Premature Optimization (1)

G-funk (22712) | more than 10 years ago | (#9070907)

The first was to get a SQL query to run faster: a simple matter of creating a view and supporting indexes.

Ah, views.... I miss real databases.... Stupid cheap bastards and their MySQL....

Performance, shmerformance (1)

jargoone (166102) | more than 10 years ago | (#9070731)

The First Rule of Program Optimization: Don't do it.
The Second Rule of Program Optimization -- For experts only: Don't do it yet.
-- Michael Jackson (not the molester one)

Performance is relative (4, Interesting)

jesup (8690) | more than 10 years ago | (#9070743)

66 fps on a 3 GHz machine, doing a 600x600 simple RLE decode...

Ok, it's not bad for a language like Erlang, but it's not exactly fast.

The big point here for the author is "it's fast enough". Lots of micro- (and macro-) optimizations are done when it turns out they aren't needed. And writing in a high level language you're comfortable in is important, if it'll do the job. This is a good point.

On the other hand, even a fairly naive implementation in something like C or C++ (and perhaps Java) would probably have achieved the goal without having to make 5 optimization passes (and noticeable time examining behavior).

And even today, optimizations often do matter. I'm working on code that does pretty hard-real-time processing on multiple threads and keeps them synchronized while communicating with the outside world. A mis-chosen image filter or copy algorithm can seriously trash the rest of the system (not overlapping DMAs, inconvenient ordering of operations, etc.). The biggest trick is knowing _where_ they will matter, and generally writing not-horrible-performance (but very readable) code as a matter of course, as a starting point.

Disclaimer: I was a hard-core ASM & C programmer who for years beta-tested 680x0 compilers by critiquing their optimizers.

Re:Performance is relative (1)

Jeffrey Baker (6191) | more than 10 years ago | (#9070799)

I agree with your view here. The author claims "hey, I made these great speedups with only a few passes at the high level", but a Targa decoder in C on a 3,000,000,000Hz P4 would probably have run in 10 milliseconds, even with the obvious implementation and no manual optimizations. I bet you could get it under 1 millisecond by using SSE2/SSE/MMX/3DNow/AltiVec or whatever vector unit was at hand.

Re:Performance is relative (1)

Lazy Jones (8403) | more than 10 years ago | (#9070854)

The big point here for the author is "it's fast enough".

Fast enough for what? For an amateur programmer coding for himself, in the language he likes best. I certainly wouldn't buy a code library written by that guy ;-)

(disclaimer: I'm a "1985 cycle counting programmer")

Writing (1)

jetfuel (755102) | more than 10 years ago | (#9070753)

"Are particular performance problems perennial?"
...
"point of view, to put performance into proper perspective."
Brought to you by the "Alliteration is the Only Literary Device I Ever Learned" School of Writing.

He forgot to mention stability. (1)

Phidoux (705500) | more than 10 years ago | (#9070768)

The golden rule of programming has always been that clarity and correctness matter much more than the utmost speed.

In the "real world", not only is correctness and clarity more important than speed, but so is stability.

Re:He forgot to mention stability. (0)

Anonymous Coward | more than 10 years ago | (#9070811)

umm, how exactly do you think stability is not part of correctness?

Depends on your target (4, Insightful)

KalvinB (205500) | more than 10 years ago | (#9070771)

Working on a heavily math-based application, speed is necessary to the point that the user is not expected to wait a significant amount of time without something happening. I have a large background in game programming on crap systems, and it comes in handy. My tolerance for delays goes to about half a second for a complete operation. It doesn't matter how many steps are needed to perform the operation; it all has to be done in less than half a second on a 1200MHz system. My main test of performance is seeing how long it takes Mathematica to spit out an answer compared to my program. Mathematica brags about being the fastest and most accurate around.

When operations take several seconds, a user gets annoyed. The program is perceived to be junk, and the user begins looking for something else that can do the job faster. It doesn't matter whether productivity is actually enhanced. It just matters that it's perceived to be enhanced, or that the potential is there.

You also have to consider if the time taken to complete an operation is just because of laziness. If you can easily make it faster, there's little excuse not to.

For distributed apps you have to consider the cost of hardware. It may cost several hours of labor to optimize, but it may save you the cost of a system or a few.

In the world of games, half a second per operation works out to 2 frames per second, which is far from acceptable. Users expect at minimum 30 frames per second. It's up to the developer to decide what's the lowest system they'll try to hit that target on.

You have to consider the number of users that will have that system vs the amount it will cost to optimize the code that far.

In terms of games you also have to consider that time wasted is time possibly better spent making the graphics look better. You could have an unoptimized mesh rendering routine, or a very fast one and time left over to apply all the latest bells and whistles the graphics card has to offer.

There are countless factors in determining when something is optimized enough. Games more so than apps. Sometimes you just need to get it out the door and say "it's good enough."

Ben

Re:Depends on your target (0)

Anonymous Coward | more than 10 years ago | (#9070793)

I like your style of developing for slower machines.

Atari... (1)

foog (6321) | more than 10 years ago | (#9070777)

Is that the same James Hague that wrote articles for ANTIC and Analog back in the era of the Atari 8-bits?

But in some cases performance counts (2, Interesting)

jbms (733980) | more than 10 years ago | (#9070779)

As another user commented, server software can benefit greatly from a large variety of optimizations, since better performance translates directly into supporting more users on fewer/cheaper servers.

Optimizations also have significant effect in software designed to perform complex computations, such as scheduling.

Also, the trend of ignoring performance considerations with the claim that modern hardware makes optimization obsolete is precisely what leads to the trend, particularly in Microsoft software, of each revision being significantly slower than the last.

Article puts it all in perspective (4, Funny)

Debian Troll's Best (678194) | more than 10 years ago | (#9070789)

I'm currently completing a degree in computer science, with only a few courses left to take before I graduate. Man, I wish I had read that article before last semester's 'Systems Programming and Optimization' course! It really puts a lot of things into perspective. So much of a programmer's time can get caught up in agonizing over low-level optimization. Worse than that are the weeks spent debating language and design choices with fellow programmers in a team. More often than not, these arguments boil down to personal biases against one language or another, due to perceived 'slowness', rather than issues such as 'will this language allow better design and maintenance of the code', or 'is a little slow actually fast enough'?

A particular illustration of this was in my last semester's 'Systems Programming and Optimization' course. The professor set us a project where we could choose an interesting subsystem of a Linux distro, analyze the code, and point out possible areas where it could be further optimized. I'm a pretty enthusiastic Debian user, so I chose to analyze the apt-get code. Our prof was very focused on low-level optimizations, so the first thing I did was to pull apart apt-get's Perl codebase and start to recode sections of it in C. At a mid-semester meeting, the professor suggested that I take it even further, and try using some SIMD/MMX calls in x86 assembly to parallelize package load calls.

This was a big ask, but me and my partner eventually had something working after a couple of weeks of slog. By this stage, apt-get was *flying* along. The final step of the optimization was to convert the package database to a binary format, using a series of 'keys' encoded in a type of database, or 'registry'. This sped up apt-get a further 25%, as calls to a machine-readable-only binary registry are technically superior to old fashioned text files (and XML was considered too slow)

Anyway, the sting in the tail (and I believe this is what the article highlights) was that upon submission of our project, we discovered that our professor had been admitted to hospital to have some kidney stones removed. In his place was another member of the faculty...but this time, a strong Gentoo supporter! He spent about 5 minutes reading over our hand-coded x86 assembly version of apt-get, and simply said "Nice work guys, but what I really want to see is this extended to include support for Gentoo's 'emerge' system...and for the code to run on my PowerMac 7600 Gentoo PPC box. You have a week's extension'

Needless to say, we were both freaking out. Because we had focused so heavily on optimization, we had sacrificed a lot of genericity in the code (otherwise we could have just coded up 'emerge' support as a plug-in for 'apt-get'), and also we had tied it to Intel x86 code. In the end we were both so burnt out that I slept for 2 days straight, and ended up writing the 'emerge' port in AppleScript in about 45 minutes. I told the new prof to just run it through MacOnLinux, which needless to say, he wasn't impressed with. I think it was because he had destroyed his old Mac OS 8 partition to turn it into a Gentoo swap partition. Anyway, me and my partner both ended up getting a C- for the course.

Let this be a lesson...read the article, and take it in. Optimization shouldn't be your sole focus. As Knuth once said, "premature optimisation is the root of all evil". Indeed Donald, indeed. Kind of ironic that Donald was the original professor in this story. I don't think he takes his work as seriously as he once did.

Re:Article puts it all in perspective (1)

jcain (765708) | more than 10 years ago | (#9070860)

Great story. It really shows the tradeoff you make when doing serious low level optimizations.

I had a similar situation where I lost some points on a CS exam, even though I followed the instructions to the letter. After typing in the code and running it (the exam was hand-written), it worked perfectly and did exactly what it was supposed to. I went up and asked the professor why I had lost points, and he said it was because the linked list the program used was empty when my program exited. I told him that it didn't matter, since the linked list would be destroyed when the program exited either way. He still didn't give me credit.

Re:Article puts it all in perspective (2)

Xoro (201854) | more than 10 years ago | (#9070877)

Oh, come on. Who modded this up? Funny, I could see, but "Interesting"?

The final step of the optimization was to convert the package database to a binary format, using a series of 'keys' encoded in a type of database, or 'registry'.

It's a joke.

If feature X were important, we'd code in Y (2, Offtopic)

wintermute42 (710554) | more than 10 years ago | (#9070803)

The economist Brian Arthur is one of the proponents of the theory of path dependence [bearcave.com]. In path dependence, something is adopted for reasons that might be determined by chance (e.g., the adoption of MS-DOS) or by some related feature (C became popular in part because of UNIX's popularity).

The widespread use of C and C++, languages without bounds checking in a world where we can afford bounds checking, is not so much a matter of logical decision as of history. C became popular; C++ evolved from C and provided some really useful features (objects, expressed as classes). Once C++ started to catch on, people used C++ because others used it, and an infrastructure developed (e.g., compilers, libraries, books). In short, the use of C++ is, to a degree, a result of path dependence. Once path-dependent characteristics start to appear, choices are not necessarily made on technical virtue. In fact, one could probably say that the times when we make purely rational, engineering-based decisions (feature X is important, so I'll use language Y) are outweighed by the times when we decide on other criteria (my boss says we're gonna use language Z).

optimize with discretion (2, Insightful)

kaan (88626) | more than 10 years ago | (#9070818)

All projects are an exercise in scheduling, and something is always bound to fall off the radar given real-world time constraints. In my experience, the right thing to do is get a few smart people together to isolate any problem areas of the product and try to determine whether that code might produce performance bottlenecks in high-demand situations. If you find any warning areas, throw your limited resources there. Don't fret too much about the rest of the product.

In the business world, you have to satisfy market demands and thus cannot take an endless amount of time to produce a highly optimized product. However, unless you are Microsoft, it is very difficult to succeed by quickly shoving a slow pile of crap out the door and calling it "version 1".

So where do you optimize? Where do you concentrate your limited amount of time before you miss the window of opportunity for your product?

I know plenty of folks in academia who would scoff at what I'm about to say, but I'll say it anyway: just because something could be faster doesn't mean it has to be. If you could spend X hours or Y days tweaking a piece of code to run faster, would it be worth it? Not necessarily. It depends on several things, and there's no really good formula; each case ought to be evaluated individually. For instance, if you're talking about a nightly maintenance task that runs between 2am and 4am when nobody is on the system, resource consumption doesn't matter, etc., then why bother making it run faster? If you have an answer, then good for you, but maybe you don't, and you should thus leave that two-hour maintenance task alone and spend your time doing something else.

For people who are really into performance optimization, I say get into hardware design or academia, because the rest of the business world doesn't really seem to make time for "doing things right" (just an observation, not my opinion).

One thing new programmers often miss (2, Insightful)

xant (99438) | more than 10 years ago | (#9070819)

Less code is faster than more code! Simply put, it's easier to optimize if you can understand it, and it's easier to understand if there's not so much of it. But when you optimize code that didn't really need it, you usually add more code; more code leads to confusion and confusion leads to performance problems. THAT is the highly-counterintuitive reason premature optimization is bad: It's not because it makes your code harder to maintain, but because it makes your code slower.

In a high-level interpreted language with nice syntax -- mine is Python, not Erlang, but the same arguments apply -- it's easier to write clean, lean code. So high-level languages lead to (c)leaner code, which is faster code. I often find that by choosing the right approach and implementing it in an elegant way, I get performance far better than I was expecting. And if what I was expecting would have been "fast enough", I'm done -- without optimizing.

Why aren't optimized algorithms best practices? (3, Interesting)

ObviousGuy (578567) | more than 10 years ago | (#9070827)

You would think that with all the years put into developing computer languages, as well as the decades of software engineering, these algorithms and techniques would have made their way into best practices.

This, of course, has already begun, with many frequently used algorithms like sorting or hashing being made part of the language core libraries. But beyond that, duplication of effort still occurs much more often than it should.

This is one instance where Microsoft has really come through. Their COM architecture allows for inter-language reuse of library code. By releasing a library which is binary compatible across different languages, as well as backwards compatible with itself (v2.0 supports v1.9), the COM object architecture takes much of the weight of programming difficult and repetitive tasks out of the hands of programmers and into the hands of library maintainers.

This kind of separation of job function allows library programmers the luxury of focusing on optimizing the library. It also allows the client programmer the luxury of ignoring that optimization and focusing on improving the speed and stability of his own program by improving the general structure of the system rather than the low level mundanities.

Large libraries like Java's and .Net's as well as Smalltalk's are all great. Taking the power of those libraries and making them usable across different languages, even making them scriptable would bring the speed optimizations in those libraries available to everyone.

alternate article for Java programmers (1)

next1 (742094) | more than 10 years ago | (#9070835)

Programming: As If Performance Mattered

You need optimisation here: (0)

Anonymous Coward | more than 10 years ago | (#9070839)

Database applications.

The potential for a SQL statement to go tragically awry, hanging the user session and sending the CPU to 100%, is significant. In a well-designed database, it won't happen too often, but it will happen often enough for you to need a good DBA close at hand to deal with it.

It's probably at its worst with Oracle, which now possesses a black box called the Cost-Based Optimiser. This little piece of voodoo uses a large range of metrics to decide on an execution plan for your query, and woe betide you if it gets it wrong - you'll be tearing your hair out trying to persuade it to do things differently.

Mind you, I've also seen programmers run cartesian joins against two tables that had several million rows each. But that's not optimisation; it's trying to decide what to hit them with.

Efficiency is what separates man from monkeys (2, Funny)

yuriismaster (776296) | more than 10 years ago | (#9070841)

If you compare a man to a monkey today, you see cognitive differences (granted, some similarities too) that tend to separate these two related species. When a monkey attempts to open a locked box, he will try many times to pry the box open, bash at it with his fists, and the like. A man, however, quickly realizes the box is locked and uses a tool to break the lock. While the monkey's strategy is simple and will _eventually_ get the box open, the man's strategy is more complex, but much more efficient.

Same goes with programming. If you have to search through a massive sorted database, no skilled programmer alive would use a linear traversal (simple, yet inefficient); they would use a binary search (more complex, yet far more efficient).

So when customers pay for you to get that locked box open, whose strategy will you choose?
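A quick sketch of that tradeoff (invented example): both functions return the index of key in a sorted vector, or -1 if absent.

#include <vector>

// O(n): the monkey's approach -- simple, and eventually correct.
int linear_search(const std::vector<int>& v, int key) {
    for (size_t i = 0; i < v.size(); ++i)
        if (v[i] == key) return (int)i;
    return -1;
}

// O(log n): the tool-user's approach -- halve the range each step.
int binary_search(const std::vector<int>& v, int key) {
    int lo = 0, hi = (int)v.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (v[mid] == key) return mid;
        if (v[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}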

Code tweaking (5, Insightful)

Frequency Domain (601421) | more than 10 years ago | (#9070849)

You get way more mileage out of choosing an appropriate algorithm, e.g., an O(n log n) sort instead of O(n^2), than out of tweaking the code. Hmmm, kind of reminds me of the discussion about math in CS programs.

Every time I'm tempted to start micro-optimizing, I remind myself of the following three simple rules:

  • 1) Don't.
  • 2) If you feel tempted to violate rule 1, at least wait until you've finished writing the program.
  • 3) Non-trivial programs are never finished.
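A sketch of the algorithm-over-tweaks point above (invented example): no amount of inner-loop tweaking rescues the O(n^2) version on large inputs.

#include <algorithm>
#include <vector>

// O(n^2): fine for a handful of elements, hopeless for a million --
// no matter how carefully the inner loop is micro-optimized.
void insertion_sort(std::vector<int>& v) {
    for (size_t i = 1; i < v.size(); ++i)
        for (size_t j = i; j > 0 && v[j - 1] > v[j]; --j)
            std::swap(v[j - 1], v[j]);
}

// O(n log n): the appropriate algorithm, straight from the library.
void fast_sort(std::vector<int>& v) {
    std::sort(v.begin(), v.end());
}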

Optimize the algorithm (1)

jgardn (539054) | more than 10 years ago | (#9070862)

One of the golden rules that Python is founded on is optimizing the algorithm and letting the compilers/interpreters do all the rest. Python, like Perl and most other high-level languages, allows you to tweak the code and try out many different algorithms without worrying about pointers and useless trivia like how much memory is available to your process.

If you master the algorithm, and you still want more speed, you can go ahead and usually cut run times by half or more by implementing it in C. But it takes a lot of work to get it right in C, and once in C, it isn't very friendly to tweaking.

About the compiler / interpreter doing the real optimizations - psyco is freely available and regularly gets close to C speeds for python. The new parrot compiler, when it comes out, will redefine the speed barriers for all high level languages ported to it.

I work at a company where we have half C developers and half Perl developers. The Perl developers code circles around the C developers, getting projects done within time constraints and with fewer bugs. The Perl programs are easier to maintain and modify. However, everyone always wants to push code into C. It's amazing when I sit down and plan out projects and show them: "If we implemented it in Perl, we would be done in 1/4 the time with 1/4 the number of bugs and 100% of the features." But the retort comes: "But C will be 3 times faster!" And I respond, "Buy 3 times as many machines to run the Perl code on for all I care. It'll still be cheaper to do it in Perl, because hardware costs far less than the developers', testers', and managers' time!" Hey, as long as they sign my paycheck, I'll do what they ask. But if they still want to live in the 80's, that's their problem.

Painful P-ful Post (4, Funny)

Dominic_Mazzoni (125164) | more than 10 years ago | (#9070865)

Proper programming perspective? Please. People-centered programming? Pretty pathetic.

Programmer's purpose: problem-solving. Programmers prefer power - parallelizing, profiling, pushing pixels. Programmers prefer Pentium PCs - parsimonious processing power. Pentium-optimization passes Python's popularity.

Ponder.

[Previous painful posts: P [slashdot.org] , D [slashdot.org] ]

Optimizations in the Real World (2, Informative)

NegativeK (547688) | more than 10 years ago | (#9070880)

Optimization isn't really a hard topic. Should a programmer spend days nitpicking fifty lines of code that won't be used frequently? No. When initially writing code, should someone use Bogosort [wikipedia.org] instead of Quicksort [wikipedia.org] ? I'll let you figure that one out.
My biggest (reasonable) beef in the optimization area is software bloat. Programs like huge office suites containing excessive, poorly implemented crap that people won't use really tick me off. KISS. Even the stuff that has to be complicated.

Of course, I'll always be a sucker for tweaking code for the fun of it, when I have the time. =)

Optimizations are a varied lot (2, Interesting)

corngrower (738661) | more than 10 years ago | (#9070885)

Oftentimes, to get improved performance, you need to examine the algorithms used. At other times, and on certain CPU architectures, the things that slow your code down can be very subtle.

If your code must process a large amount of data, look for ways of designing your program so that you process the data serially. Don't bring large amounts of data in from a database or data file all at once if you don't have to. Once you are no longer able to contain the data in physical memory, and the program starts using 'virtual' memory, things slow down real fast. I've seen architects forget about this, which is why I'm writing this reminder.
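A minimal sketch of that serial-processing shape, with a hypothetical process_chunk() standing in for the real work: the file streams through a fixed-size buffer instead of being loaded whole.

#include <cstddef>
#include <fstream>
#include <vector>

void process_chunk(const char* data, std::size_t n);  // hypothetical

void process_file_serially(const char* path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> buf(1 << 20);  // 1 MB working set, regardless of file size
    while (in) {
        in.read(&buf[0], (std::streamsize)buf.size());
        std::streamsize got = in.gcount();
        if (got > 0)
            process_chunk(&buf[0], (std::size_t)got);
    }
}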

On the other hand, I've worked on a C++ project where, in a certain segment of the code, it was necessary to write our own container class to replace one of the std:: classes for performance on the SPARC architecture. Using the std:: container caused the subroutines to nest deeply enough that the CPU registers needed to be written out to slower memory. The effect was enough to be quite noticeable in the app.

With today's processors, to optimize for speed you have to think about memory utilization, since running within cache is noticeably faster than from main memory. Things are not as clear-cut, as far as speed optimization goes, as they once were.

Performance oriented coding in real life... (1, Interesting)

Anonymous Coward | more than 10 years ago | (#9070892)

While I agree that it is silly to spend eternity optimizing small routines that trivially achieve the required level of performance for their intended purpose, a lot of scientific packages and simulation software absolutely demand serious performance optimization. The mantra about premature optimization being the root of all evil is only true if optimization doesn't save you significant development and testing time. If you have a simulation application that requires an hour to do enough work to complete a minimal function/correctness test, spending time optimizing the code pays off immediately if you can cut your test times to half an hour, for example.

I've worked on a lot of software packages that run calculations for days and/or weeks at a time on parallel computers. You always want to start out with fast sequential algorithms and data structures before you get into using multiple processors. Parallelism is inherently more complex, so it's often worth it to squeeze the most you can out of a single thread/process before going to multiple threads/processors.

While desktop apps typically consume a negligible amount of CPU time/resources, apps that are candidates for running on parallel computers, clusters, or big SMP machines are inherently more costly to run in both CPU time and users' wall-clock time, so those applications don't fall within the same logic that trivial apps like image decompression/formatting do.

Performance, an aspect of design and understanding (2, Insightful)

StevenMaurer (115071) | more than 10 years ago | (#9070899)

This article seems to be something that I learned twenty years ago... performance is an aspect of good design.

That is why I insist on "optmization" in the beginning. Not peephole optimization - but design optimization. Designs (or "patterns" in the latest terminology) that are fast are also naturally simple. And simple - while hard to come up with initially - is easy to understand.

But that's also why I discount any "high level language is easier" statement, like this fellow makes. It is significantly harder to come up with a good architecture than learning to handle a "hard" language. If you can't do the former (including understanding the concepts of resource allocation, threads, and other basic concepts), you certainly aren't going to do the latter. Visual Basic is not an inherently bad language because you can't program well in it. It just attracts bad programmers.

And that goes the same for many of the newer "Basics": these "managed languages" that make it so that people can "code" without really understanding what they're doing. Sure, you can get lines of code that way. But you don't get a good product.

And then the whole thing falls apart.

Bad performance is built in. (2, Insightful)

BigZaphod (12942) | more than 10 years ago | (#9070912)

There seem to be two basic causes of bad performance:

1. Mathematically impossible to do it any other way.
2. Modularity.

Of course crap code/logic also counts, but it can be rewritten.

The problem with modularity is that it forces us to break certain functions down at arbitrary points. This is handy for reusing code, of course, and it saves us a lot of work. It's the main reason we can build the huge systems we build today. However, it comes with a price.

While I don't really know how to solve this practically, it could be solved by writing code that never ever calls other code. In other words, the entire program would be custom-written from beginning to end for this one purpose. Sort of like a novel which tells one complete story and is one unified and self-contained package.

Programs are actually written more like chapters in the mother of all choose-your-own-adventure books. Trying to run the program causes an insane amount of page flipping for the computer (metaphorically and actually :-))

Of course this approach is much more flexible and allows us to build off of the massive code that came before us, but it is also not a very efficient way to think about things.

Personally, I think the languages are still the problem because of where they draw the line for abstractions. It limits you to thinking within very small boxes and forces you to express yourself in limited ways. In other words, your painting can be as big as you want, but you only get one color (a single return value in many languages). It is like we're still stuck at the Model T stage of language development -- it comes in any color you want, as long as it's black!

A few points that come to mind... (2, Interesting)

ivec (61549) | more than 10 years ago | (#9070916)

- Decoding an RLE data buffer falls short of impressive as a benchmark. RLE was designed as a simple and specific (generally inefficient) compression approach for age-old hardware (i.e. 8MHz, not the 333MHz of the base system used here).
How about JPEG or PNG?
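For reference, a minimal run-length decoder of the general count-byte/value-byte kind under discussion; this is a generic sketch, not the article's exact Targa variant.

#include <cstddef>
#include <vector>

// Expands pairs of (count, value) bytes: in[i] copies of in[i + 1].
std::vector<unsigned char> rle_decode(const unsigned char* in, std::size_t n) {
    std::vector<unsigned char> out;
    for (std::size_t i = 0; i + 1 < n; i += 2)
        out.insert(out.end(), in[i], in[i + 1]);
    return out;
}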

- The author actually spent several iterations optimizing this Erlang code. And these optimizations required handling special cases. (So performance eventually did matter to the author?) Now, would a 'first throw' implementation in C/C++ have been written faster while immediately performing better than the Erlang version? (simpler code)

- I agree that compiled/interpreted code performance matters less and less, because processors are so much more powerful. For instance, the processing for RLE decompression should in any case be negligible wrt the memory or disk I/O involved.
What is becoming increasingly important, however, is the data structures and algorithms that are used. In this perspective, C++ still shines, thanks to the flexibility that its algorithms and containers library provides.
C++ offers both a high level of abstraction (working with containers) and the ability to convert to a different implementation strategy with ease -- if and when profiling demonstrates a need.
For large system and library development, the strong static typing of C++ is also a real plus (it doesn't matter to me whether it is faster than dynamic typing or not).
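A small sketch of that "convert with ease" point (invented example): code written against a typedef can switch containers with a one-line change once profiling demonstrates the need.

#include <list>
#include <numeric>
#include <vector>

typedef std::vector<int> Samples;  // swap in std::list<int> if profiling shows
                                   // mid-sequence insertion dominates

// Works unchanged with either container: only the typedef decides.
int total(const Samples& s) {
    return std::accumulate(s.begin(), s.end(), 0);
}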

I totally agree that performance should not be a concern during program implementation (other than avoiding 'unnecessary pessimization', which involves the KISS principle and knowledge of language idioms). Optimization should only be performed where the need for a speed-up has been demonstrated.
Other than saying "wow this interpreted language runs damn fast on current hardware", this article does a poor job at making any relevant point.

radix omnia malorum prematurae optimisatia est -- Donald Knuth
