
Will Pervasive Multithreading Make a Comeback?

kdawson posted more than 6 years ago | from the let-it-be dept.


exigentsky writes "Having looked at BeOS technology, it is clear that, like NeXTSTEP, it was ahead of its time. Most remarkable to me is the incredible responsiveness of the whole OS. On relatively slow hardware, BeOS could run eight movies simultaneously while still being responsive in all of its GUI controls, and launching programs almost instantaneously. Today, more than ten years after BeOS's introduction, its legendary responsiveness is still unmatched. There is simply no other major OS that has pervasive multithreading from the lowest level up (requiring no programmer tricks). Is it likely, or at least possible, that future versions of Windows or OS X could become pervasively multithreaded without creating an entirely new OS?"

657 comments

It makes sense with multi-core cpus (5, Informative)

Thaidog (235587) | more than 6 years ago | (#19869977)

OSes like BeOS and Zeta are ahead of their time. With 8-core CPUs coming out soon, it just makes sense with this technology... no programming tricks are needed.

Re:It makes sense with multi-core cpus (1)

GizmoToy (450886) | more than 6 years ago | (#19870007)

It's true. Unfortunately, it seems that a pretty significant rewrite of the current OSes would be required to achieve this level of responsiveness. Since it hasn't been done to date, here almost 10 years later, my hope is not high for such a feature any time soon.

Re:It makes sense with multi-core cpus (4, Interesting)

Anonymous Coward | more than 6 years ago | (#19870159)

Too true. The Linux kernel beats the BeOS kernel in threading benchmarks, but the entire BeOS GUI stack (kernel, display, windowing, controls) was designed with multithreading in mind. X/KDE/GTK et al. are relics based on 1986-era computing.

Re:It makes sense with multi-core cpus (1)

wmeyer (17620) | more than 6 years ago | (#19870011)

And neither BeOS nor Zeta remains available.

It's a pity, as BeOS was pretty stunning on a 400MHz Celeron.

Re:It makes sense with multi-core cpus (4, Informative)

tolan-b (230077) | more than 6 years ago | (#19870207)

Haiku is coming along very nicely though, and it's open source.

Re:It makes sense with multi-core cpus (4, Funny)

LiquidCoooled (634315) | more than 6 years ago | (#19870413)

Haiku from BeOS
Multitasking all programs without delay
Open source victory

Re:It makes sense with multi-core cpus (4, Funny)

Your.Master (1088569) | more than 6 years ago | (#19870499)

Five, Seven, and Five That's how a Haiku should go Not like you did it

Re:It makes sense with multi-core cpus (2, Insightful)

LiquidCoooled (634315) | more than 6 years ago | (#19870549)

ffs, nitpicking git :P
v2:

Haiku from BeOS
Multitasking all programs no delay
Open source for the win

(5-7-5 syllables)

Re:It makes sense with multi-core cpus (1, Insightful)

Anonymous Coward | more than 6 years ago | (#19870585)

How on earth did you fit "multitasking all programs no delay" into seven syllables?

Re:It makes sense with multi-core cpus (1)

LiquidCoooled (634315) | more than 6 years ago | (#19870649)

Cos I'm crap and fail at English obviously.
Good job I am a software developer and not an English teacher isn't it?

Re:It makes sense with multi-core cpus (1)

Your.Master (1088569) | more than 6 years ago | (#19870555)

That would have made more sense if I had remembered to change "HTML Formatted" to "Plain Old Text"

Five, Seven, and Five
That's how a Haiku should go
Not like you did it

Re:It makes sense with multi-core cpus (0)

Anonymous Coward | more than 6 years ago | (#19870567)

Anyone got a torrent or something of Zeta 1.5? Seems impossible to obtain, legally or otherwise

That's nothing... (4, Funny)

Anonymous Coward | more than 6 years ago | (#19869983)

Back in the OS/2 days, we could format 72 floppies simultaneously with no slowdown to our 14.4 connections!

Amiga beat them all (4, Informative)

Anonymous Coward | more than 6 years ago | (#19870265)

Seriously, back in the mid-1980s I used to love putting PC and Mac owners to shame by showing them literally dozens of open, active graphics applications displaying animations, while formatting a floppy disk and downloading a file online, and still having a normal, responsive system with no hiccups, all on a computer with only 128MB RAM.

Amiga was a multi-tasking, multi-threaded OS, with multiple processors (graphics and I/O were separate co-processors operating on opposite clock cycles from the CPU, and the graphics co-processor could be dynamically loaded with special executable code).

It was so far ahead of its time that people today still don't believe it existed in the 80's when I tell them about it.

But just because it was better than everything else did not assure its success. A concept the BeOS fanbois might be familiar with.

Re:Amiga beat them all (5, Insightful)

nogginthenog (582552) | more than 6 years ago | (#19870397)

128MB? In the mid 80s? Maybe you mean 4Mb :-)

Re:Amiga beat them all (0)

Anonymous Coward | more than 6 years ago | (#19870517)

See?!?! No one believes me. ;-)

That should have been a K, not an M. And actually it should have said 256KB, with an option to expand it to 512KB. And that included space for the OS! Oh, and I forgot to mention that it was a fully 32-bit OS and applications back when most everything else was still 8-bit code.

http://en.wikipedia.org/wiki/Amiga [wikipedia.org]

Re:Amiga beat them all (5, Informative)

GreggBz (777373) | more than 6 years ago | (#19870523)

Hey, I'm all for Amigas, but in the mid-Eighties, if you had 128MB of RAM and were downloading a file online, you must have been from the future.
What the heck are you talking about?

Just to be a little more correct here, I'm no hardware engineer but will try to be far more accurate.

The Amiga had a great messaging system in its OS; you could easily pass messages to other windows and programs in Intuition. Further, you had all that ARexx stuff, and you could script programs to interact very easily with it. Basically, every program could listen on its own ARexx socket for commands from other programs. Of course, there was the poor (read: no) memory protection, which made things very unstable if you did not know what you were doing. Despite all this cool stuff, the OS was actually the weakest link. It was rushed. I remember reading specs on the original intended, but non-implemented, file system, and it was about as robust as a single-user file system could possibly get.

You also had preemptive multitasking (not merely cooperative) and a fantastic unified memory architecture with a very fast blitter. Another nice thing was that the kernel was contained in ROM, so it booted quicker than any other platform of its day, and still faster than most today. And all those chips played nice and were synced to an internal clock that ran on NTSC (or PAL) timings. This, of course, meant that interrupts worked seamlessly, and the chipset was handily compatible with video signals from television equipment. That last thing turned into an incredible boon for the entire film and television industry.

The strength of the Amiga was its bus and its architecture. They absolutely nailed so many things in its design; it really was a thing of beauty.

Re:Amiga beat them all (3, Informative)

Anonymous Coward | more than 6 years ago | (#19870683)

Corrected the memory size in another reply. The base system had 256 kilobytes of RAM. Sorry for the mix-up, I'm so used to putting MB after memory sizes. ;-)

As for downloading files online, back then "online" meant downloading from BBS systems. The closest thing to the internet back then for the average consumer was FidoNet.

http://en.wikipedia.org/wiki/Fidonet [wikipedia.org]

And yes, the lack of an MMU, as well as a lack of FPU, in the CPUs used in the early models was a shame. But it did keep the price of the system within reach of the average Joe Computer Geek.

Re:Amiga beat them all (1, Insightful)

Anonymous Coward | more than 6 years ago | (#19870759)

It also wasn't pre-emptive, nor did it have protected memory. It also didn't keep up with progress; when it ultimately died there really wasn't anything special to it, and the various recreation projects are struggling to retrofit more modern concepts into the basic design.

Question... (1, Funny)

Anonymous Coward | more than 6 years ago | (#19869993)

The obvious lack of software etc. notwithstanding, would Slashdot agree that BeOS was the best OS of its time?

Re:Question... (5, Funny)

cmowire (254489) | more than 6 years ago | (#19870049)

BeOS was like JFK.

They both got gunned down before we could possibly see any downsides to them.

There were a few architectural decisions in BeOS that I felt would have resulted in great amounts of pain and suffering 10 years later.

Re:Question... (1)

ragahast (879945) | more than 6 years ago | (#19870155)

But their plan was to do a ground-up rewrite of the OS every several years, so they could have corrected past mistakes. Too late now, of course...

Re:Question... (5, Insightful)

cmowire (254489) | more than 6 years ago | (#19870227)

I believe that's covered by "There were a few architectural decisions in BeOS that I felt would have resulted in great amounts of pain and suffering 10 years later."

Rewriting things from the ground up, without acceptable justification, has never been an effective strategy.

Re:Question... (0)

Adult film producer (866485) | more than 6 years ago | (#19870249)

BeOS was like JFK.

The both got gunned down before we could possibly see any downsides to them.


Well technically JFK was not gunned down. He committed suicide in the limousine. A lot of idiot conspiracy theorists have tried to shift the blame to gangsters, the mafia and the CIA. There's even a bunch of fringe wackos that blame a library clerk for what happened. Don't let those people fool you.

Re:Question... (2, Funny)

CRCulver (715279) | more than 6 years ago | (#19870299)

Source available under a Free Software license is, I should think, a prerequisite to be in the running for "best OS of its time". That's why the Hurd was the best OS of the BeOS era.

Re:Question... (0)

Anonymous Coward | more than 6 years ago | (#19870703)

You can't be serious.
Enough of this magical open source pixie dust bullshit. Open isn't inherently "better" than closed. More desirable, perhaps, depending. Many couldn't care less. But if you're basing your claim solely on the availability of the source, and throwing performance, responsiveness, application support, ease of use, etc. out the window, to be quite frank, your assessment is as baseless as it is useless.

Re:Question... (2, Interesting)

Bryan Ischo (893) | more than 6 years ago | (#19870305)

It was not. The five hours I spent trying to get a simple modem to work in BeOS, with no OS diagnostics to guide me and very poor support from Be the company, were all the proof I needed that BeOS wasn't all it was hyped up to be. I can understand that Be did not have the resources to support every piece of hardware under the sun, but I found it inexcusable that their support for diagnostics was so incredibly rudimentary. At the time, with Linux (this was 1999 or so), if I had a problem with some hardware I could either read the source (OK, Be could never match this since it was proprietary and closed-source, so that's not quite fair), or look at the copious amount of system logging that would generally point to the problem (stuff in dmesg, kernel logs, /var/log/messages, lots of tools and documentation to help me out). With BeOS, I was getting pop-up dialogs that just said stuff like "Error 0xFFFFFFFF occurred", with absolutely no useful information whatsoever. It was impossible for me to diagnose the problem no matter how hard I might try, because the operating system just wasn't going to give me enough information to go on.

Also, Be the company didn't respond at all to my requests for help with this. They provided zero technical support to me. Emails went unanswered.

Maybe BeOS had some nice architecture, but there is more to an OS than its handling of threads - much, much more, and I think that BeOS was not even close to ready for prime time. And the developers clearly had glossed over many aspects of an operating system (such as the aforementioned error diagnostics) to get to the pretty demos that the OS was capable of.

yup BeOS rocked. (1)

tempest69 (572798) | more than 6 years ago | (#19870309)

BeOS was brutal..

I still get weird clicks when my XP box plays MP3s. My iMac (Core 2 Duo, 3GB RAM) gets a bit flaky when it gets busy, and it will lag a bit when asked to move things around.

When I completely blasted my Be box, it still managed to keep the MP3s from sounding like garbage. It was freaky smooth to deal with.. I still think of the BeBox when things get weird.. Shame that it got killed the way it did.

Storm

Re:Question... (0)

Anonymous Coward | more than 6 years ago | (#19870595)

Hell no, AIX hands down.

Microsoft's plan is to keep adding cores... (5, Funny)

Joce640k (829181) | more than 6 years ago | (#19869997)

Microsoft's plan is for us to keep adding CPU cores in the hope that at least one of them won't be deadlocked at any given moment in time.

Re:Microsoft's plan is to keep adding cores... (2, Insightful)

Anonymous Coward | more than 6 years ago | (#19870037)

Microsoft's plan is for us to keep adding CPU cores in the hope that at least one of them won't be deadlocked at any given moment in time.

Nice try with the /.-friendly, but ultimately meaningless and ignorant, tirade. CPU cores don't get deadlocked; threads in a cyclic wait pattern get deadlocked. It doesn't matter which core they run on. You could have a million cores, but if two threads are deadlocked, you're still screwed as far as the program goes. And the article was about BeOS, not microsoft!

Re:Microsoft's plan is to keep adding cores... (0)

Anonymous Coward | more than 6 years ago | (#19870161)

You're right. Instead of a "deadlock", the post should have mentioned the core being consumed by an aggressive malware "job", or an anti-malware "job", or a spyware job... you were right to scold him for such technical inaccuracy.

Re:Microsoft's plan is to keep adding cores... (1)

Joce640k (829181) | more than 6 years ago | (#19870439)

You could have a million cores, but if two threads are deadlocked, you're still screwed as far as the program goes.

Um, yes. Single programs can still crash, obviously. I'm guessing that's why Microsoft added an "Open Explorer windows in separate processes" option to Windows.

What I'm talking about is when the whole machine freezes for a few seconds because a hard disk needs to spin up or because you inserted a DVD. Stuff like that. What exactly is going on there?

And the article was about BeOS, not microsoft!

Ummmm, the subby asked if other OSes would ever get proper multithreading. I assumed he meant mainstream OSes.

PS: Writing "Microsoft" with a small 'm'? How childish is that...?

Re:Microsoft's plan is to keep adding cores... (1)

misleb (129952) | more than 6 years ago | (#19870123)

It isn't the CPU that gets deadlocked. It is usually two or more processes waiting on each other for events or resources. So no matter how many cores you have, if explorer.exe deadlocks, it deadlocks. No number of cores will fix that. Fortunately you can usually just kill the offending processes, unless it happens with some system-critical process or inside the kernel or something.

-matthew
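A minimal sketch of the cyclic wait being described, written in modern C++ purely for illustration (the two mutexes and function names are invented; this is not how explorer.exe is structured):

    #include <mutex>
    #include <thread>

    std::mutex a, b;

    // Thread 1 takes a then b; thread 2 takes b then a. If each grabs its first
    // lock before the other grabs its second, both wait forever: a cyclic wait
    // that no amount of extra CPU cores can break.
    void worker1_deadlock_prone() {
        std::lock_guard<std::mutex> la(a);
        std::lock_guard<std::mutex> lb(b);   // may wait on worker2 forever
    }
    void worker2_deadlock_prone() {
        std::lock_guard<std::mutex> lb(b);
        std::lock_guard<std::mutex> la(a);   // may wait on worker1 forever
    }

    // The usual fix: take both locks in one agreed-upon order (or atomically).
    void worker_safe() {
        std::scoped_lock both(a, b);          // C++17; std::lock(a, b) pre-17
        // ... touch the shared state here ...
    }

    int main() {
        std::thread t1(worker_safe), t2(worker_safe);
        t1.join();
        t2.join();
    }

Once the cycle has formed, killing one of the stuck threads (or the whole process) is the only way out, which is exactly the explorer.exe situation above.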

Re:Microsoft's plan is to keep adding cores... (2, Informative)

Urusai (865560) | more than 6 years ago | (#19870269)

Part of the problem is that Windows was originally a cooperative multitasking environment (like MacOS). When they added real threading (in Windows 95, I think), each application was still single threaded, which meant having the GUI and underlying processing on the same thread, making responsiveness sucky. They never bothered making the OS interface (Explorer) multithreaded, which is why on XP you can still crash Explorer and thus your entire desktop (although Explorer restarts after a few seconds).

My experiences with Linux show it suffers big time from process hogs, especially IO process hogs, such as when you copy large directories, even with the low-latency desktop kernel options enabled, so don't think it's just a Windows problem.


Multithreaded won't be optional any more. (5, Insightful)

cmowire (254489) | more than 6 years ago | (#19870021)

Given that most machines are already starting to come with 2 cores by default, and you can fit 8 cores (2 CPUs) in a nice desktop package, it's pretty clear that it's going to be a requirement.

It's not entirely the operating system's fault. The biggest advance of BeOS wasn't necessarily just that the kernel was designed to multithread nicely; Be also did their best to force you to write multithreaded code when you wrote a Be application.

I suspect that the first thing that's going to become clearly a performance bottleneck is the applications. And that's not going to be fun, because there are a lot of applications out there and you can't just magically recompile them with threads turned on and see much difference. You need to synchronize the data structures for multiple threads touching them at the same time and split things up so that you can actually keep a decent number of cores busy. This is not trivial when you are talking about an app that somebody wrote single-threaded in the mid-'90s without any notion that threads might be useful later.
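To make that concrete, here is a rough sketch (in modern C++, with invented names and sizes) of what "splitting things up" can look like for even a trivial loop: each worker gets its own chunk and its own partial result, so the only synchronization left is the final join:

    #include <algorithm>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Split a formerly single-threaded summation loop across n_threads workers.
    // Each worker writes only to its own slot of 'partial', so no lock is needed
    // on the hot path; the join() is the only synchronization point.
    long long parallel_sum(const std::vector<int>& data, unsigned n_threads) {
        std::vector<long long> partial(n_threads, 0);
        std::vector<std::thread> workers;
        const std::size_t chunk = data.size() / n_threads + 1;

        for (unsigned i = 0; i < n_threads; ++i) {
            workers.emplace_back([&, i] {
                const std::size_t begin = std::min<std::size_t>(i * chunk, data.size());
                const std::size_t end   = std::min<std::size_t>(begin + chunk, data.size());
                partial[i] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();
        return std::accumulate(partial.begin(), partial.end(), 0LL);
    }

    int main() {
        std::vector<int> data(1000000, 1);
        const unsigned n = std::max(1u, std::thread::hardware_concurrency());
        return parallel_sum(data, n) == 1000000 ? 0 : 1;
    }

Retrofitting this into a real mid-'90s codebase is much harder, because the shared data structures were never designed to be touched from two places at once.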

Re:Multithreaded won't be optional any more. (3, Insightful)

larien (5608) | more than 6 years ago | (#19870127)

Multithreaded CPUs are becoming more and more common, yes; look at Sun's Niagara with 8 cores & 4 threads per core (looks like 32 CPUs from the OS...). In the consumer desktop space, Intel/AMD both have 4-core CPUs either in the market or coming soon.

As for applications: if you're running 5 applications, multiple cores will help without recompiling, assuming the kernel's scheduler is reasonably sane, and kernel writers are getting smarter at writing different schedulers. If you are running one single-threaded app, multiple cores aren't going to help you much at all. Of course, the other advantage of multi-threading apps (even on a single core) is that if the app is blocking on one thing (I/O is most common for blocking), the other threads can carry on doing work.

Re:Multithreaded won't be optional any more. (1)

cmowire (254489) | more than 6 years ago | (#19870203)

Sure, but most desktops don't run more than one or two apps at a time. So, 2-4 cores is all that you get "for free" without new apps. Sure, if I'm building a web server application, it'll scale much more gracefully, but it already scales rather gracefully.

The big problem is that most single-threaded apps *can* be made multithreaded and *can* be optimized for a more "modern" architecture. In some cases, you can also work around various hidden latencies in modern hardware (like cutting the problem set into smaller chunks that can be split across multiple cores and also can fit better into the cache) at the same time.

The second implication of moving to many cores is that some level of NUMA is inevitable, and there are far fewer NUMA-cognizant coders than SMP-cognizant coders.

Re:Multithreaded won't be optional any more. (3, Interesting)

ShieldW0lf (601553) | more than 6 years ago | (#19870643)

Sure, but most desktops don't run more than one or two apps at a time. So, 2-4 cores is all that you get "for free" without new apps. Sure, if I'm building a web server application, it'll scale much more gracefully, but it already scales rather gracefully.

Are you serious? The idea is to have all your programs running all the time, and to interact with them whenever you want with instantaneous response. Not to mention that most apps people run nowadays either are servers (P2P, LAN shares, etc.), clients that sit around listening to servers (IM), or clients querying them regularly (email). And the progression is towards having personal servers that you can connect to using either a local or remote client.

The next generation of computing is going to come from the vast multitude of developers who are accustomed to writing client-server applications applying what they know to computers that behave like a server cluster. They are better equipped to approach the problems and rewards of this architectural progression than the guy who has been working in the traditional application space. Now, that's a generalization that's full of exceptions, but it'll still be proven true on the wider scale.

Re:Multithreaded won't be optional any more. (2, Informative)

kripkenstein (913150) | more than 6 years ago | (#19870369)

Multithreaded won't be optional any more.[...] Given that most machines are already starting to come default with 2 cores, and you can fit 8 cores (2 CPUs) in a nice desktop package, it's pretty clear that it's going to be a requirement.
Sure, the trend towards more cores does imply that an inherently multithreaded OS makes more sense. But on the other hand, the main advantage heard about such pervasive multithreading is 'better responsiveness', and I am not sure that modern OSes are 'unresponsive' - current Linux desktops seem very responsive even when running multiple apps (except Firefox, btw, which locks up often for a second or two on intensive websites. Annoying, but still the best browser out there.)

So, I am not convinced a rewrite of an OS just to add pervasive multithreading is a good idea. Anyhow, for those interested in that concept, there is Haiku [haiku-os.org], which is the FOSS OS inspired by BeOS. Looks like they are making nice progress (but nothing you'd want as your main productivity OS just yet).

Re:Multithreaded won't be optional any more. (2, Interesting)

TheRaven64 (641858) | more than 6 years ago | (#19870695)

How many applications do you run that peg a single core at 100%? I use three:
  • GCC.
  • Final Cut Express.
iTunes used to be on that list when ripping CDs, but since my last upgrade the CD drive has become the bottleneck. GCC doesn't need to be multithreaded, because I can always add a -j option to my make command and run one instance of it on each core (and a floating spare one or two for when one CPU is waiting for I/O). Final Cut can consume pretty much as much CPU power as it's possible to throw at it, but anything involving video is an embarrassingly parallel problem (decompose along the time axis or into macroblocks as you wish).

There is no reason to add support for SMP machines to any program that only uses a fraction of a single core's power. If you're doing something in the background then it might be worth spawning off a worker thread to keep the UI responsive, but most other things are better handled with co-routines, which are much easier to reason about (hence the fact that pretty much every GUI toolkit uses some form of them).

When you are not performing embarrassingly parallel computations, threads aren't such a good idea, since you end up with a lot of synchronisation issues that can be avoided by moving to an asynchronous model such as that used by Erlang.
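For what it's worth, the "spawn a worker so the UI stays responsive" pattern mentioned above can be sketched with nothing but standard C++ (no particular GUI toolkit assumed; the slow_job function and timings are made up):

    #include <chrono>
    #include <future>
    #include <iostream>
    #include <thread>

    // Stand-in for a long-running task that would otherwise freeze the event loop.
    int slow_job() {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        return 42;
    }

    int main() {
        // Run the work on a background thread; the "UI loop" below keeps going.
        std::future<int> result = std::async(std::launch::async, slow_job);

        while (result.wait_for(std::chrono::milliseconds(100)) != std::future_status::ready) {
            std::cout << "still responsive: redraw, handle input, ...\n";
        }
        std::cout << "background job finished: " << result.get() << '\n';
    }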

No Maybe Yes (1, Insightful)

nukem996 (624036) | more than 6 years ago | (#19870029)

Windows: no, they'd have to do a rewrite, which they already tried doing and failed at. OS X: possibly, but FreeBSD would have to do it first. Linux: yes, but it probably won't happen until the 2.8 kernel, since it would rework how the kernel works while making sure it still runs on older hardware. glibc would also have to include support, but it's much more likely on GNU/Linux, being that it's all open source and thus anyone can work on it.

Re:No Maybe Yes (3, Interesting)

Anonymous Coward | more than 6 years ago | (#19870147)

Well, it's not really an OS issue. Sure the OS has to provide some underpinnings so that the programmers can take advantage of it. But I think most of it is already mature enough for applications to use it. Why don't they use what is already there? I mean everybody whines about how unresponsive X is. Until X is rewritten to be multi-threaded, you won't see the UI responsiveness that you see in BeOS. On your typical Linux box, X is the real bottleneck. There is no point in rewriting QT or any UI toolkit until X is fixed. You won't be able to replicate the multiple videos trick unless X is fixed first and then the applications are modified to use multi-threading to its fullest.

Re:No Maybe Yes (5, Informative)

someone300 (891284) | more than 6 years ago | (#19870497)

X is being fixed, thankfully (finally). There are a lot of interesting projects, including but not limited to Xegl. Xegl is the long-term goal of the X server and pretty much reduces the X server to a tiny part of the system, basically mediating the input devices, rotation and display management and TCP/over-the-wire GL, if I understand correctly, by using the Embedded GL specifications.

Re:No Maybe Yes (0)

Anonymous Coward | more than 6 years ago | (#19870295)

the *BSD std libraries have reentrant versions of all legacy single-threaded functions.

Re:Puh-lease (0)

Anonymous Coward | more than 6 years ago | (#19870435)

Windows, as it stands today with its Windows NT origins, developed by VMS developers, has always had multi-threading as part of the OS. *nix, on the other hand, has only had multi-threading as a pathetic add-on library long after the kernel was developed without multi-threading in mind. *nix has far more catching up to do than Windows, full stop.

I find it amusing when hearing people whine about how hard multi-threading and synchronization is. It just illustrates your weaknesses in coding. If I am hiring on a project that requires multi-threading (pretty much everything these days) I'll put Linux fan bois at the bottom of the resume pile.

Re:Puh-lease (3, Interesting)

Gothmog of A (723992) | more than 6 years ago | (#19870639)

What you say may have been true some years ago. Nowadays Linux is far more advanced technically than Windows with respect to multi-threading and even more multi-processor / multi-core support.

E.g. gcc does thread-safe initialization of local static variables; Visual C++ does not. Linux runs on machines with up to 4096 processors; Windows does not. Linux can be run tickless (to some extent); Windows cannot. Linux has support for the SUSv3 realtime API with nanosecond-resolution timers; Windows has nothing comparable. Linux will shortly have the new Completely Fair Scheduler (CFS), where a user reported that the system is still quite usable with 32k busy threads running in parallel; Windows would not be.
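The local-static point is easy to demonstrate; a minimal sketch (the function and the string it returns are invented for illustration):

    #include <string>
    #include <thread>

    // With gcc (and, later, any C++11-conforming compiler) the initialization of
    // a function-local static runs exactly once, even if several threads hit the
    // function at the same moment. No explicit locking is needed here.
    const std::string& app_name() {
        static const std::string name = "threading-demo";
        return name;
    }

    int main() {
        std::thread t1([] { (void)app_name(); });
        std::thread t2([] { (void)app_name(); });
        t1.join();
        t2.join();
    }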

Re:No Maybe Yes, sentance stucture (0)

Anonymous Coward | more than 6 years ago | (#19870471)

"." they separate the sentences. Usa "." to break up the ideas into pieces that humans can follow.

I hope so (4, Interesting)

datapharmer (1099455) | more than 6 years ago | (#19870033)

I still hate that BeOS went belly up. It was a great operating system but was crushed before it ever got very far. The hardware support was also amazing: it would run winmodems and other Windows-only hardware. I've never tried writing an operating system, but I hope some of the features from BeOS make it into Linux/OS X. One interesting thing to note is that Be was originally a Mac alternative and was only later moved to x86.

Another cool operating system to check out is MenuetOS [menuetos.net]... it is written entirely in assembly! Very fast boot times, and the GUI and everything fits easily on a floppy!

Re:I hope so (3, Interesting)

Bryan Ischo (893) | more than 6 years ago | (#19870363)

Well, just for another perspective to balance this out, I found BeOS' hardware support to be pretty poor, and the operating system pretty much left you high and dry if your hardware wasn't perfectly supported. To wit, I tried to get a modem to work with BeOS back in the day (1999 or so) and if I recall correctly (it's been a long time), I was getting very generic error dialogs ("Error 0xFFFFFFFF occurred") with no other useful diagnostics whatsoever. I vaguely remember playing with some settings and getting rid of the messages, but the modem never worked. The operating system would "think" it was working (no error messages, the OS would show that I had connected to the ISP), but it would never transmit any data. There were literally ZERO tools to help me diagnose this, and the OS refused to give me ANY information at all on what was going on.

I distinctly remember thinking that it was very, very much like Windows in this regard. Linux was awesome because the operating system could give you a wealth of information about what it was doing, so that if you put time into it, you could diagnose and fix pretty much any problem. The tools were there for you. With BeOS and Windows, where the tools and logging would be, was simply a big empty void. There was nothing you could do if your hardware was not perfectly supported. You could not figure out what was wrong. The operating system had no facilities to support any kind of diagnosis of the problem.

I never expected BeOS to support every piece of hardware out there. But then again, since it was such a new and unsupported OS, I *did* expect it to provide tools to the user to let them solve problems. But BeOS didn't, and for this reason, I think that it was not a very good OS. Sure it had nice pretty demos, but I'm guessing that Be the company focused all of their efforts on the code paths necessary to enable the pretty demos, and left all of the other critically useful (and underrated) aspects of the operating system unimplemented.

Perhaps with enough time, they could have addressed that. But BeOS, as it was, was only a cute toy, in my opinion.

Threading isn't any easier when it is pervasive (5, Interesting)

mwadams (520080) | more than 6 years ago | (#19870053)

It isn't really the pervasive multithreading that does the job on responsiveness for BeOS, nor does having the "two threads per window" thing (which I think is what the poster is referring to in terms of "pervasive multithreading") avoid "programmer's tricks"; in fact, you have to be just as careful as if you were developing on Windows and spun up a background thread. One issue for BeOS developers was the amount of hard thinking you had to do to perform simple tasks in a pervasively multi-threaded environment, when you're still having to deal with all the pitfalls of lock-based programming.

However, taking only a few cycles to spin up or kill a thread (rather than the 10,000 plus it takes Windows), or perform a context switch, is a significant help. (There used to be an interesting article benchmarking those things on the Be website, but I can't find it any more).

MS have also added some more interesting stuff to the scheduler in Vista, which helps with uninterrupted sound or movie playback, so at least some of that stuff is possible without a complete redesign.
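For anyone who wants to sanity-check the spin-up claim on their own machine, here is a crude way to measure it (a sketch only: it times create-plus-join of an empty standard C++ thread, which is not exactly what Be's old benchmark page measured, and the numbers vary wildly by OS and hardware):

    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        constexpr int kIterations = 1000;
        const auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kIterations; ++i) {
            std::thread t([] {});   // empty body: we only pay spawn + teardown cost
            t.join();
        }
        const auto elapsed = std::chrono::steady_clock::now() - start;
        std::printf("%.1f microseconds per spawn+join\n",
                    std::chrono::duration<double, std::micro>(elapsed).count() / kIterations);
    }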

Re:Threading isn't any easier when it is pervasive (2, Informative)

PCM2 (4486) | more than 6 years ago | (#19870321)

MS have also added some more interesting stuff to the scheduler in Vista, which helps with uninterrupted sound or movie playback, so at least some of that stuff is possible without a complete redesign.

Really? Man, tell that to my box running Vista Media Center. Media Center has a helpful (cough) habit of capturing the mouse cursor to the screen running the Media Center app. Hit the Windows key to break out of it and your video playback is interrupted for as much as 20 seconds while Windows struggles to switch screens and render the Start menu. What's more, despite the fact that Vista has been re-engineered to support multiple sound output devices, it is not possible to assign one particular device to Media Center. In other words, you cannot force Media Center to always use your SPDIF output for sound and then use the computer speakers for other apps. You MUST specify SPDIF as the default sound device for the entire OS if you want Media Center to output sound in that way. It's clear that, for as powerful and multi-thread capable as modern hardware may be, Vista Media Center was written with the assumption that your PC will become a single-purpose appliance. It's kinda pathetic.

Re:Threading isn't any easier when it is pervasive (1)

dreamchaser (49529) | more than 6 years ago | (#19870705)

I never have any stuttering or pausing problems with video playback on Vista via MC or any other means. I think you might have some other problem because it's certainly not Media Center causing it.

Re:Threading isn't any easier when it is pervasive (1)

Lost Engineer (459920) | more than 6 years ago | (#19870459)

The Vista part is true, but it also takes 3 times as much CPU to play a video as it did under XP. 3% to play the video, 3% to let Aero have a realtime preview of your video, and 4% to make sure you are not copying the video. Despite claims by Microsoft and others, the DRM eats cycles even when playing non-protected video. Also that process is bound to the same CPU as the video playing so good luck spreading out the load, should you ever need to.

I understand the need for DRM to play Blu-ray or whatever, but WTF. Why are my pirated DivX videos taking so many cycles, heating up my laptop, and forcing fans to come on everywhere?

Cool trick for anyone experiencing this though. Use VLC for unprotected stuff. It will automatically disable Aero (current version at least), which could be seen as a bad thing, but I like it. Cycles and power used go back to XP levels. Who needs Aero when you're playing video anyways? If you want it back you can always use WMP.

Re:Threading isn't any easier when it is pervasive (1)

oggiejnr (999258) | more than 6 years ago | (#19870613)

I believe that the main reason for the increase is that, until DX10 cards (which support GPU multitasking) become commonplace, WMPlayer by default disables the hardware overlay to ensure that the desktop does not switch to Aero Basic every time a video is played, as happens with VLC. By disabling the hardware overlay in VLC you get the same experience as in WMP.

Not gonna happen on Mac until... (1, Interesting)

Anonymous Coward | more than 6 years ago | (#19870071)

...QuickTime and the AppKit become thread-safe. Which might be a priority for Apple, but then again might not, given that they've had multicore machines for a long time. Cocoa doesn't lend itself well to the UNIX approach of multiple processes, so if we really want to take advantage of multiple cores, Apple's going to need to seriously step up their multithreading support.

The AppKit docs are riddled with notes like "on the main thread" or "some thread will receive this notification". Maybe Leopard will change that.

Will Bit-Slice architecture return? (1, Offtopic)

BanjoBob (686644) | more than 6 years ago | (#19870073)

I would like to see bit-slice systems return. I had an AMD system built around four of the 2900 4-bit slices and an old TTL Xerox Alto.

I like them because you could microcode these to act like a whole range of different machines. Intel, Motorola, Signetics (2650), Mesa, etc. They were a lot of fun, fast and resource efficient.

Yes, they were power hungry because of all the TTL but a single computer could be configured to be many different machines depending on what you wanted.

Re:Will Bit-Slice architecture return? (1)

C.A. Nony Mouse (860026) | more than 6 years ago | (#19870729)

You can't possibly be serious.

Programming a bitslice machine to faithfully emulate a modern processor architecture, including behavior in the face of exceptions (yes, that is a requirement if any non-trivial software is to run unchanged), is bloody hard. In fact, even programming one modern processor to emulate another 100% correctly at the ISA level is hard enough. 95% is easy, 98% is manageable, but ...

... and of course, the performance level per watt even with current technology is nowhere close to what can be achieved with a custom design. Granted, there are applications that fit well in a Xilinx chip, but emulating a Pentium isn't one of them.

Re:Will Bit-Slice architecture return? (1)

Anne Thwacks (531696) | more than 6 years ago | (#19870737)

Why have a slice when you can have the whole cake?

Get an FPGA and design your own cpu - then reprogram it on the fly to be another CPU!

Yes, one minute it's a VAX, then the next it's a Sparc. Then MIPS, then ARM (ARM is quite cute really) and then your own architecture (or maybe a DEC10 or CDC7600). It's easy :-) it's simple. Just do it (TM).

Yes I have tried. After several years, I decided it was better just to buy a Niagara and have done with it. Sure I could do 10% better than Sun's entire hardware development team, given enough time, but I have a life!

Hint: don't try CDC7600 first!

BeOS rocked! (4, Interesting)

Anonymous Coward | more than 6 years ago | (#19870081)

A few years ago, on a dual Celeron 366MHz with 256MB of RAM, I went out of my way to attempt to crash BeOS. I opened about 120 OpenGL demos with only a minor decrease in performance. After inheriting that mainboard, processors and RAM from my uncle and then increasing the RAM to 512MB, the same test ground both FreeBSD and Linux to a halt.

Re:BeOS rocked! (1)

Ant P. (974313) | more than 6 years ago | (#19870507)

Heh. I think I got (on a P4 + r200) about 20 glxgears windows running before Linux/X started doing strange things.

We had different programmers 10 years ago (1)

mazphil57 (792004) | more than 6 years ago | (#19870091)

Today's programmers are not trained to write efficient (i.e. massively parallel) code using good tools, or even to make good technical decisions. The goal of "cheapest available coders" won, so now they will need to develop AI programs to generate this kind of code because today's group of lowest-cost "programmers" certainly cannot do it.

Re:We had different programmers 10 years ago (4, Insightful)

bratboy (649043) | more than 6 years ago | (#19870279)

Bah. Today's programmers aren't better or worse than they were ten years ago - they're just distributed differently. Programming video games on a console is an exercise in frustration: poor tools, worse documentation, highly constrained memory / CPU / IO / bus, multiple threads utilizing multiple specialized processors, microcode, assembly, etc. Ditto for cell phones. Not so for business applications.

So yes, if you mean "developers of business applications aren't generally hardcore down to the metal programmers," then I'd agree with you. John Carmack and Michael Abrash would be bored out of their skulls working on UI issues for Quicken 2008. And, given their aesthetic sensibilities, they wouldn't necessarily be the best choices (just *try* to balance your checkbook).

But if you mean that great programmers are no longer among us, then I'd say that you should change jobs, because it's more likely that they're simply not around *you*.

Re:We had different programmers 10 years ago (4, Insightful)

dc29A (636871) | more than 6 years ago | (#19870469)

Bah. Today's programmers aren't better or worse than they were ten years ago - they're just distributed differently.
I am not so sure. I remember my first C++ class in college; we didn't touch C++ for at least half the semester (well, almost). We learned the basics of OOP, and the rest of the time was spent learning how compilers compile code. We also learned a lot of assembly. Hell, in mainframe assembly class we wrote an entire assembler. Bonus points were given to people who used their own assembler to generate the code for the assignment.

While C++, assembly and C might no longer be "cool", they definitely teach people how to write optimal code, how to debug efficiently, and how to understand a wide variety of computing concepts.

The same college today is too busy teaching C# and Java. While those languages are nice and all, not teaching low-level C, C++ and assembly IMO leads to sloppy coders: people who don't understand the byte code generated, people who don't mind wasting system resources because hey... the garbage collector will take care of it.

I was nearly crucified when I suggested to my boss that we recode a piece of an application in C so it scales better than the current shitty VB COM version. He just looked through me and said: add another server! A lot of today's code is written by people who don't even understand how the code is getting executed.

Re:We had different programmers 10 years ago (3, Interesting)

kz45 (175825) | more than 6 years ago | (#19870755)

"I was nearly crucified when I suggested my boss to recode a piece of an application in C so it scales better than the current shitty VB COM version. He just looked through me and said: add another server! Lot of today's code is written by people who don't even understand how the code is getting executed"

Was it more cost effective to have a programmer recode it in C (which includes the required maintenance) or use the less optimal but easier to maintain VB COM? I'm all for using C over C#, Java, and VB, but sometimes you need to look at the situation from a business standpoint.

Re:We had different programmers 10 years ago (0)

Anonymous Coward | more than 6 years ago | (#19870543)

GET OFF MY LAWN!!!

Better than xubuntu (3, Informative)

fishthegeek (943099) | more than 6 years ago | (#19870105)

for older (P2 & P3) laptops. I have the opportunity several times a year to receive old laptops to use to teach my students with. Whenever I need to, I use BeOS Max on the machines and it is just amazing to watch how efficient and responsive BeOS really is.

Check out BeOS Max [beosmax.org]

BeOS is still a lot of fun on older hardware.

I don't get it (4, Insightful)

nanosquid (1074949) | more than 6 years ago | (#19870167)

The ability to play eight movies simultaneously is a bad way of determining OS thread performance. Most modern operating systems have efficient, low-overhead threads. How well they play multiple videos depends much more on the display pipeline, the codec, and how the players adapt to load. To say anything about system performance, you'd need to know frame rate, resolution, codec, postprocessing options, etc.

Overall, I really don't see anything in BeOS that you don't get as well or better in a modern Linux system. BeOS has some efficiency gains from having been developed from the ground up with little need for backwards compatibility, but that's probably also why it wasn't successful in the market. And threading and scheduling in particular are highly efficient and mature in Linux.

(Note that OS X is basically a hacked NeXTStep; the NeXTStep kernel is Mach, the same kernel that is the basis of the GNU Hurd.)

Re:I don't get it (1)

rivimey (534327) | more than 6 years ago | (#19870421)

I'm not supporting parallel movie players as a benchmark, but "most operating systems have efficient, low overhead threads" - Ha! All I can say is you have never seen "efficient threads". Not one of the major OSes has truly efficient threading. For that you have to look at the real-time kernels, where it really counts, and to projects like JCSP (http://en.wikipedia.org/wiki/JCSP), where both thread startup and context switches take a few hundred cycles, not tens of thousands.

Haiku (4, Funny)

Keruo (771880) | more than 6 years ago | (#19870169)

Is not haiku(beos) open source?
Take the features and port to linux.
New scheduler rules them all.
Speed improvements would increase the desktop performance.
As they would increase performance with services.

Uh, IRIX anyone? (2, Funny)

pathological liar (659969) | more than 6 years ago | (#19870175)

Aside from having "legendary responsiveness", from a single CPU box to SMP monstrosities, you could even guarantee disk/cpu/whatever throughput.

A lot of the old unixes had "legendary responsiveness"; you are not a unique and beautiful snowflake.

BeOS fanboys are funny.

Re:Uh, IRIX anyone? (0)

Anonymous Coward | more than 6 years ago | (#19870297)

BeOS ran on commodity hardware, not $10,000 workstations and multi-million-dollar supercomputers. I bet UNICOS had really great response time, too, but a Cray was hardly the desktop machine of its day, even if modern desktops can outperform some of the original "supercomputers." It's a totally unfair comparison.

Re:Uh, IRIX anyone? (1)

pathological liar (659969) | more than 6 years ago | (#19870403)

"Today, more than ten years after BeOS's introduction, its legendary responsiveness is still unmatched."


What IRIX (and everything else) ran on is irrelevant, that statement is simply wrong.

Re:Uh, IRIX anyone? (3, Insightful)

Lost Engineer (459920) | more than 6 years ago | (#19870519)

Yeah, but yesterday's supercomputers are today's commodity machines. The last IRIX "super"-computer I used had 16 processors with a uniform memory architecture. We're quickly approaching that level on commodity hardware. My el-cheapo box has 2 processors with a uniform cache-coherent memory architecture.

What I'm getting at here is that perhaps we could look to the past for some ideas about multi-threading, and IRIX is not a bad choice at all, particularly since it was Unix-derived, like the Linux we use now, whereas BeOS is not.

Tried (for Windows) and killed (4, Interesting)

gnetwerker (526997) | more than 6 years ago | (#19870185)

Recall that this was the effect of Intel's NSP (the ill-named "Native Signal Processing"), a real-time multi-thread scheduler inserted at the device-driver level of Windows. Combined with something called VDI (Video Direct Interface), which allowed applications to bypass the Microsoft GDI graphics layer in certain ways, this allowed multiple video, graphics, and audio streams, mixed and synchronized, on circa-1993 computers, something largely not even possible today. While NSP was intended primarily for media streams, its technology was broadly applicable to more responsive and vivid interfaces.

The result was Microsoft's threat to cut off Intel from future Windows development and specifically to withhold 64-bit support from Itanium, to more publicly support AMD (which they did, for a while), and to threaten any OEMs using the code with withdrawal of Microsoft software support. Much of this was detailed in the Microsoft antitrust trial and the accompanying discovery documents. Under this pressure, Intel abandoned the software, transferring VDI to Microsoft (it formed the core of what was later called DirectX) and outright killing NSP. Andy Grove admitted to Fortune magazine: "We caved." (http://cyber.law.harvard.edu/msdoj/transcript/summaries2.html)

This is not to suggest that this was the best or only way to do this, or that others haven't done it and done it well. But despite the best efforts of Linus and friends, Windows remains the dominant desktop OS, and Windows continues to be built on a base of 1970s-era operating system principles. Microsoft has built, and continues to build, substantial barriers to anyone trying to substantially modify the behaviour of Windows at the HAL/device layer. Whether VMware and equivalent virtualization technologies are finally a camel's nose under the tent edge remains to be seen. But as long as Windows remains the dominant desktop OS, you can expect the desktop to lag 10-15 years (at best) behind the state of the art in OS, GUI, and real-time developments.

Re:Tried (for Windows) and killed (4, Insightful)

CajunArson (465943) | more than 6 years ago | (#19870331)

Windows continues to be built on a base of 1970s-era operating system principles.


Thank Gawd Linux isn't using any relic of an OS [wikipedia.org] that started in the 1970's as its base! No, no, all 100% 21st-century clean legacy-free implementation there.

On a more serious note, I used BeOS myself back in the day. It was definitely more responsive than Win98 was, but not everything was perfect either. The networking implementation absolutely sucked. Oh, it had lots of threads; it's just that the threads were not all that beneficial to actual performance. The networking stack and some other forms of processing in the system that handle streams of many relatively similar tasks would probably parallelize better via a pipeline scheme where parallelism is achieved by having independent stages of the pipeline run in parallel (much as CPUs break up the task of executing instructions into a pipeline). The type of parallelism that works best can depend on the application, and the one-size-fits-all philosophy is not usually correct no matter what the solution is.

Re:Tried (for Windows) and killed (1)

Man On Pink Corner (1089867) | more than 6 years ago | (#19870463)

NSP was a terrible idea, in pretty much any respect you care to consider. If you think Winmodems suck now, take one with you on your next trip back to 1994 and see how you like it.

NSP would have kneecapped the entire PC games industry, and it would have strangled the emerging multiplayer genres in the crib. It was a craven attempt by Intel to market general-purpose CPUs against dedicated audio and communications hardware. You can get away with that now that everybody has more CPU cores than they know what to do with, but dedicated hardware was actually needed back then. No sane developer who wasn't on Intel's payroll would have welcomed any artificial market forces that threatened to marginalize it in favor of host-based signal processing.

Intel backed down because a lot of people, not just Microsoft, screamed bloody murder.

Yes (5, Interesting)

MarkPNeyer (729607) | more than 6 years ago | (#19870205)

I'm a CS grad student at the University of North Carolina. I've never used BeOS, but I'm confident that responsiveness will increase, because the work I'm doing right now is intended to address this very issue.

The thing that makes multithreaded programming so difficult is concurrency control - it's extremely easy for programmers to screw up lock-based methods, deadlocking the entire system. There are newer methods of concurrency control that have been proposed, and the most promising method (in my opinion) is 'Software Transactional Memory', which makes it almost trivial to convert correct sequential code to code that is thread-safe. Currently, there are several 'High Performance Computing Languages' in development, and to my knowledge, they all include transactional memory.

The incredible difficulties involved in making chips faster are precipitating a shift to multicore machines. The widespread prevalence of these machines, coupled with newer concurrency control techniques will undoubtedly lead to an increase of responsiveness.
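As a hint of what this looks like for the programmer, here is a sketch using the transactional-memory extension that later appeared in GCC (compile with -fgnu-tm; the bank-account example and amounts are invented, and this is not the poster's research code). The point is that the atomic block replaces manual lock acquisition, so there is no lock ordering to get wrong and no deadlock:

    // Sketch only: requires a GCC build with transactional memory support,
    // compiled with -fgnu-tm.
    #include <thread>

    int balance_a = 100;
    int balance_b = 0;

    void transfer(int amount) {
        // Everything in this block executes atomically with respect to other
        // transactions; the runtime detects conflicts and retries as needed.
        __transaction_atomic {
            balance_a -= amount;
            balance_b += amount;
        }
    }

    int main() {
        std::thread t1([] { for (int i = 0; i < 50; ++i) transfer(1); });
        std::thread t2([] { for (int i = 0; i < 50; ++i) transfer(1); });
        t1.join();
        t2.join();
        return (balance_a + balance_b == 100) ? 0 : 1;
    }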

Re:Yes...well, maybe eventually... (3, Interesting)

kimanaw (795600) | more than 6 years ago | (#19870409)

Unfortunately, STM is very resource heavy and very slow. Yes, it abstracts away lots of issues, but that abstraction comes at a significant cost. In most instances, STM is slower than "classic" locking schemes until 10+ cores are available. (FYI: University of Rochester [rochester.edu] has a nice bibliography for STM info)

If/when the CPU designers currently screaming "more threads, more threads!" at us coders get around to implementing efficient h/w transactional memory, painless fine grain parallelism may become a reality. Until then, STM may be fine for very large applications on systems with huge memories and lots of cores, but probably isn't an option for the average desktop.

But STM does present some intriguing possibilities for distributed parallel environments (think STM + DSM).

Re:Yes (2, Interesting)

rivimey (534327) | more than 6 years ago | (#19870563)

The best way, in my opinion, for people to create an application that uses concurrency is to design it that way. I know that sounds trite, but it's true. A simple example: if you start with a very large number of parallel processes and wish to create a sequential version of them, the solution is so simple we delegate it to the OS run-time in the form of the scheduler. If you have a single sequential process and wish to create a large number of parallel processes, the problem is so difficult that, in the general case, you can't (although some compilers manage some parts of the job, and some processors manage some parts).

The formalism that has proved itself time and time again in getting parallel design right is Hoare's CSP, which promoted the idea of autonomous processes sending and receiving discrete messages to each other. The reasons for this include:

  • A process' memory (state) cannot be changed without its explicit say-so (because messages must be accepted, not just sent).
  • Various properties ensure "WYSIWYG", or compositional, programming: if you put two processes together that have been independently tested, you can be sure that their behaviour doesn't change just because you've put them together. This is not true of pthreads/winthreads (in general).
  • Because there is a formalism (CSP) behind implementations such as JCSP (http://en.wikipedia.org/wiki/JCSP), there are clear program transformational rules, which help in many ways to make programs safer.

Do have a look...

One last point. Once you have a somewhat threaded[1] system, UI responsiveness is, on modern systems, mostly a function of program size. Large programs (including the OS) find it very difficult to be responsive because the CPU is being asked to access items all over memory. The reason that is bad is that a memory access that misses the CPU cache incurs an enormous penalty, maybe as much as 1000 CPU cycles, during which the processor is often twiddling its thumbs. Reducing code bloat is essential to improve this, not increasing the number of threads.

[1] That is, tasks that take noticeable time are separated out.
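A toy illustration of the message-passing style being described, in C++ rather than occam or JCSP (the Channel class here is a simplified, buffered stand-in, not a strict CSP rendezvous):

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Minimal one-way channel: the sender and receiver never touch each other's
    // state directly, they only exchange discrete messages.
    template <typename T>
    class Channel {
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable cv_;
    public:
        void send(T v) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
            cv_.notify_one();
        }
        T receive() {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            T v = std::move(q_.front());
            q_.pop();
            return v;
        }
    };

    int main() {
        Channel<int> ch;
        std::thread producer([&] { for (int i = 1; i <= 5; ++i) ch.send(i); ch.send(-1); });
        std::thread consumer([&] {
            for (int v = ch.receive(); v != -1; v = ch.receive())
                std::cout << "got " << v << '\n';
        });
        producer.join();
        consumer.join();
    }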

Re:Yes (1)

TheRaven64 (641858) | more than 6 years ago | (#19870745)

Most of your locking problems disappear when you use an asynchronous model to design your application. I've written code in Erlang that scales cleanly up to 64 CPUs (the most I had access to) without any problems. It also hides latency nicely, which will become important as NUMA machines become more common (AMD machines are all NUMA masquerading as UMA).

Proof MS set computer industry back (3, Insightful)

Tony (765) | more than 6 years ago | (#19870219)

I think both NeXTStep and BeOS are living (dead) proof that Microsoft set the computer industry back over a decade. It wasn't until MS-Windows 2k that MS-Windows was even close to NeXTStep in features, and the cost was a lack of simplicity. (The only downside to the NeXT: Netware networking sucked. But Netware networking sucked on everything but DOS, so I guess it's no surprise.)

Same with BeOS. It had many features, including stability, ease-of-use, and responsiveness that MS-Windows can't seem to find today. Granted, neither can GNU/Linux or Mac OSX, but since they are hardly the predominant OS, I can't really fault them to the same extent.

Anyway, it's an old rant. Never mind the ravings of an oldster who never got over the sopranoing Microsoft gave DR-DOS. Those like me are just bitter our careers turned from fun and interesting to tedious and dull because of Microsoft. Y'all go on and play with your shiny new toys. No, really, don't mind me. I'm just gonna sit up here on my porch and get rip-roaring drunk and talk about the old days, whether anybody's listening or not.

Re:Proof MS set computer industry back (1)

Billly Gates (198444) | more than 6 years ago | (#19870749)

Linux is not where the industry is heading, and the need for one solid platform is becoming less important as it is replaced with open internet standards like XML, Ajax, Java, Apache, etc.

Why can't we learn from the past? (0)

Anonymous Coward | more than 6 years ago | (#19870245)

This is just another example of wonderful old technology that has gotten lost instead of getting incorporated into the current technology.

It makes old folks boring, but when they go on and on about how wonderful something was, in many cases they are right. Burroughs processor architecture... Multics security... AppleTalk networking's ease of use... why can't the new stuff be as good as the old stuff, rather than being a quantum leap ahead in some aspects but a quantum leap behind in others?

As nearly as I can tell, the computer industry has the worst case of Not Invented Here I've ever seen. Somehow it seems as if we lovingly keep all the worst crap from the past, but completely throw out the good stuff.

Multithreaded Windows (5, Funny)

Tablizer (95088) | more than 6 years ago | (#19870307)


[BSOD] [BSOD] [BSOD]
[BSOD] [BSOD] [BSOD]

Clarify the claims a bit (0)

Anonymous Coward | more than 6 years ago | (#19870317)

Eight 160x100 videos simultaneously?

Time to load applications (2, Insightful)

LiquidCoooled (634315) | more than 6 years ago | (#19870357)

The time to load apps is still rooted in the size of the exe and the work needed to run it.

Old systems didn't have bloat because characters were bytes and graphical entities were flat bitmaps.
Nowadays we have jpeg encoded resources and double byte strings and all sorts of other magical crap.
Programs were (mostly) written for one language and didn't need to adapt themselves to multiple systems.

I bet if you tried to work inside the restrictions of older systems, programs would fly along now; startup times would be low, response times would be low.

Just because we have faster systems does not mean we can add more bloat.

Ummm... (3, Insightful)

Bluesman (104513) | more than 6 years ago | (#19870425)

The big advantage with threads is that the TLB doesn't have to be flushed on a context switch, since they share the same address space. This has great performance advantages over processes, but you lose all of the advantages of protected virtual memory, hence the need for locks, mutexes, critical sections, etc. Threads are actually a step backward from a reliability/security standpoint.
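A small illustration of why that shared address space forces the locking mentioned above (a sketch only; the counter and loop counts are invented). With a plain long the two threads race and the result is undefined; std::atomic, a mutex, or a critical section restores the isolation that separate processes would have had for free:

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<long> counter{0};   // with a plain 'long' this would be a data race

    int main() {
        auto bump = [] { for (int i = 0; i < 100000; ++i) ++counter; };
        std::thread t1(bump), t2(bump);
        t1.join();
        t2.join();
        std::printf("%ld\n", counter.load());   // reliably 200000 only because of the atomic
    }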

BeOS was a single-user system, if I recall, so that partially reduces the need for the security features that having multiple processes provide.

But beyond that, modern OS's seem to offer a lot more flexibility. They have processes if you want separation of address space, shared memory if you need better performance for communication between threads, threading if you want a shared address space, and user-level threading libraries for the ultimate in performance if you're willing to spend the time to code it properly.

Being able to watch eight movies at a time is a neat trick, but it's not particularly useful, especially when we'll soon have processors with a ridiculous amount of cores on them. With a large number of cores, the overhead of a process context switch is hardly more than that of a thread, since a CPU intensive process can run on its own core.

I think the future of OS's is more likely to be in micro-kernel architectures that can move processes around efficiently to balance the processing load between many CPUs. Or a hybrid microkernel/monolithic architecture that could run the big kernel on one CPU for tasks that require responsiveness, and the rest of the kernel processes balanced between remaining CPU's for throughput.

Re:Ummm... (1)

Billly Gates (198444) | more than 6 years ago | (#19870741)

I always wondered what the difference between processes and threads was, and you summed it up. Thank you.

Also I am toying with kidbasic for a project of mine.

Is High Performance Computing Really the Goal? (3, Insightful)

Proudrooster (580120) | more than 6 years ago | (#19870709)

Ask yourself this question: "Is high-performance computing really the goal?" Or is herding the consumer to newer, shinier hardware the goal? The amount of computing power found in a typical Pentium III computer sitting out on someone's curb far exceeds the needs of most users.