
Multicore Requires OS Rework, Windows Expert Says

timothy posted more than 3 years ago | from the ok-let's-split-up dept.

Operating Systems

alphadogg writes "With chip makers continuing to increase the number of cores they include on each new generation of their processors, perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft. The current approach to harnessing the power of multicore processors is complicated and not entirely successful, he argued. The key may not be in throwing more energy into refining techniques such as parallel programming, but rather rethinking the basic abstractions that make up the operating systems model. Today's computers don't get enough performance out of their multicore chips, Probert said. 'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked. Probert made his presentation at the University of Illinois at Urbana-Champaign's Universal Parallel Computing Research Center."


Fist post! (-1, Troll)

Jimbookis (517778) | more than 3 years ago | (#31561506)

Fist post!

Re:Fist post! (4, Funny)

SilverEyes (822768) | more than 3 years ago | (#31561588)

Fist post!

I come to /. to read tech news... not to see people fisting.

Re:Fist post! (0, Funny)

Anonymous Coward | more than 4 years ago | (#31561622)

Seems like you've come to the wrong place.

Re:Fist post! (4, Funny)

Jeremi (14640) | more than 4 years ago | (#31561776)

I come to /. to read tech news... not to see people fisting.

Well, I came here to see the fisting. And frankly, so far this site has been a real disappointment.

Re:Fist post! (1)

Sarten-X (1102295) | more than 3 years ago | (#31561592)

Here you go: R r

Copy & paste as needed.

This is new?! (4, Insightful)

DavidRawling (864446) | more than 3 years ago | (#31561518)

Oh please, this has been coming for years now. Why has it taken so long for the OS designers to get with the program? We've had multi-CPU servers for literally decades.

Re:This is new?! (1)

bondsbw (888959) | more than 4 years ago | (#31561604)

Just because OS designers milk every cycle from every CPU doesn't mean web browser designers will.

Re:This is new?! (5, Insightful)

PhunkySchtuff (208108) | more than 4 years ago | (#31561662)

Since when have OS designers optimised their code to milk every cycle from the available CPUs? They haven't, they just wait for hardware to get faster to keep up with the code.

Re:This is new?! (5, Insightful)

Cryacin (657549) | more than 4 years ago | (#31561812)

For that matter, since when have software vendors been willing to pay architects/designers/engineers etc. to optimise their software to milk every cycle from the available CPUs and provide useful output with the minimum of effort? They don't, they just wait for hardware to get faster to keep up with code.

The only company that I have personally been exposed to that gives half a hoot about efficient performance is Google. It annoys me beyond belief that other companies think it's acceptable to make the user wait for minutes whilst the system recalculates data derived from a large data set, and to do those calculations multiple times just because a binding gets invoked.

Re:This is new?! (2, Insightful)

Sir_Sri (199544) | more than 4 years ago | (#31561650)

Ya, but those cases, as he reasonably explains, tend to get specialized development (say, scientific computing) or separate processes, or (while he doesn't explain it) a lot of server stuff is embarrassingly (or close to) parallel.

I can sort of see them not having a multi-processor OS just waiting for the consumer desktop - server processors are basically cache with some processor attached, whereas desktop processors are architected differently, and who knew for sure what the multicore world would look like in detail (or, more relevantly, what it will look like with 4, 8, 16 or whatever cores). How will those cores be connected? How symmetric/asymmetric will they be? Right now OSes are built around two big asymmetric processors (CPU and GPU) and several smaller specialized ones (networking, sound, etc.). Some of those architecture things *could* be fairly fundamental to the design you want to use, and there's no point investing huge development time trying to build software for hardware which doesn't exist and may never exist.

I'm not sure about his proposed architecture. It doesn't sound easily backwards compatible (but I might be wrong there), and there's a certain simplicity to a 'reserve one core for the OS, application developers can manage the rest themselves' sort of model, like consoles.

Re:This is new?! (0)

Anonymous Coward | more than 4 years ago | (#31561706)

The ONLY differences between server and consumer processors are ECC, maybe some platform management modes/instructions, and multi-processor interconnects.

And quite frankly, most of that at this point in time is feature elimination for lower-priced markets rather than physically different dies. In fact, back in the Pentium 2/3 and Athlon XP/MP era, both companies' 'single processor' processors were available for a time with the embedded logic of their higher-end SMP counterparts. I have an ASUS dual-processor Slot 1 motherboard that used consumer processors in a dual-processor configuration. Additionally, there were early Athlon XPs that could run as MPs, either natively or with a pin mod (I can't remember which, although I remember AMD was quick to stop it not long after it was publicized).

Point being 'architected differently' isn't true and hasn't been for probably a dozen processor generations or more.

Re:This is new?! (2, Interesting)

drsmithy (35869) | more than 4 years ago | (#31561906)

It doesn't sound easily backwards compatible (but I might be wrong there), and there's a certain simplicity to 'reserve one core for the OS, application developers can manage the rest of them themselves' sort of model like consoles.

Those curious about what life would be like with application developers managing system resources should try firing up an old copy of Windows 3.1 or MacOS and running 10 or so applications at the same time.

I can only assume TFA is an atrociously bad summary of what he's actually proposing, because it sounds way too boneheaded for someone in that position to be seriously suggesting.

Re:This is new?! (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31561712)

multi-CPU != multi-core. Operating systems have been designed for the pains involved in optimizing multi CPU servers (with separate FSBs). Multi-core systems take care of that on their own (as do multi-CPU systems with the same FSB). The big deal with multi-CPU systems is sequencing. The big deal with multi-core systems is cache. Multi-core systems have been grinding to a halt primarily due to poor cache performance (which is why there are now L3 caches).

Re:This is new?! (4, Insightful)

Jeremi (14640) | more than 4 years ago | (#31561730)

Why has it taken so long for the OS designers to get with the program?

Coming up with a new OS paradigm is hard, but doable.

Coming up with a viable new OS that uses that paradigm is much harder, because even once the new OS is working perfectly, you still have to somehow make it compatible with the zillions of existing applications that people depend on. If you can't do that, your shiny new OS will be viewed as an interesting experiment for the propeller-head set, but it won't ever get the critical mass of users necessary to build up its own application base.

So far, I think Apple has had the most successful transition strategy: Come up with the great new OS, bundle the old OS with it, inside an emulator/sandbox, and after a few years, quietly deprecate (and then drop) the old OS. Repeat as necessary.

Re:This is new?! (0)

Anonymous Coward | more than 4 years ago | (#31561750)

Because it's a very hard problem to solve.

Re:This is new?! (1)

stms (1132653) | more than 4 years ago | (#31561940)

Why?! We're talking about the same Microsoft here, right?

Re:This is new?! (3, Informative)

Bengie (1121981) | more than 4 years ago | (#31561966)

Developing server apps to run in parallel is easy; client software is hard. Many times, the cost of syncing threads is greater than the work you get from them, so you leave it single-threaded. The question is, how do you design a framework/API that is very thread-friendly, makes sure everything runs in the order expected, and is still easy for bad programmers to take advantage of?

The biggest issue with developing async-threaded programs is logical dependencies that don't allow one part to be loaded/processed before another. If, from square one, you develop an app to take advantage of extra threads, it may be less efficient, but more responsive. Most programmers I talk to have issues trying to understand the interweaving logic of multi-threaded programming.

I guess it's up to MS to make an easy-to-use, idiot-proof threaded framework for crappy programmers to use.
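
As an illustration (a minimal C++ sketch using std::async as a stand-in, not any actual Microsoft framework), this is the flavor of task-based API that hides the thread management: the caller queues independent work items and only blocks where it actually needs a result, so everything still comes back in a predictable order.

    #include <future>
    #include <iostream>
    #include <vector>

    // Stand-in for any CPU-heavy job.
    int expensive_work(int chunk)
    {
        return chunk * chunk;
    }

    int main()
    {
        std::vector<std::future<int>> tasks;
        for (int chunk = 0; chunk < 8; ++chunk)
            tasks.push_back(std::async(std::launch::async, expensive_work, chunk));

        int total = 0;
        for (auto& t : tasks)    // results are consumed in submission order,
            total += t.get();    // so "runs in the order expected" comes for free

        std::cout << total << '\n';
        return 0;
    }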

waiting (5, Insightful)

mirix (1649853) | more than 3 years ago | (#31561520)

'Why should you ever, with all this parallel hardware, ever be waiting for your computer?'

Because I/O is always going to be slow.

Re:waiting (4, Insightful)

DavidRawling (864446) | more than 4 years ago | (#31561638)

Well, with the rise of the SSD, that's no longer as much of a problem. Case in point - I built a system on the weekend with a 40GB Intel SSD. Pretty much the cheapest "known-good" SSD I could get my hands on (ie TRIM support, good controller) at AUD $172, roughly the price of a 1.5TB spinning rust store - and the system only needs 22GB including apps.

Windows boots from end of POST in about 5 seconds. 5 seconds is not even enough for the TV to turn on (it's a Media Center box). Logon is instant. App start is nigh-on instant (I've never seen Explorer appear seemingly before the Win+E key is released). This is the fastest box I've ever seen, and it's the most basic "value" processor Intel offers - the i3-530, on a cheap Asrock board with cheap RAM (true, there's a slightly cheaper "bargain basement" CPU in the G6950 or something). The whole PC cost AUD800 from a reputable supplier, and I could have bought it for $650 if I'd wanted to wait in line for an hour or get abused at the cheaper places.

Now, Intel are aiming to saturate SATA-3 (600MBps) with the next generation(s) of SSD, or so I'm told. Based on what I've seen - it's achievable, at reasonable cost, and it's not only true for sequential read access. So if the IO bottleneck disappears - because the SSD can do 30K, 50K, 100K IO operations per second? Yeah, I think it's reasonable to ask why we wait for the computer.

Not that I think a redesign is necessary for the current architectures - Windows, BSD, Linux all scale nicely to at least 8 or 16 logical CPUs in the server world, so the 4, 6 or 8 on the desktop isn't a huge problem. But in 5 years when we have 32 CPUs on the desktop? Maybe. Or maybe we'll just be using the same apps that only need 1 CPU most of the time, and using the other 20 CPUs for real-time stuff (Real voice control? Motion control and recognition?)

Re:waiting (0, Troll)

feepness (543479) | more than 4 years ago | (#31561716)

(I've never seen Explorer appear seemingly before the Win+E key is released).

And I soooo wanted to give your post geek creed... :(

Re:waiting (1)

WhatAmIDoingHere (742870) | more than 4 years ago | (#31561814)

You realize he's talking about Windows Explorer and not Internet Explorer, right?

Re:waiting (0)

Anonymous Coward | more than 4 years ago | (#31561946)

Safe! That was close.

Re:waiting (1)

tagno25 (1518033) | more than 4 years ago | (#31561882)

Not that I think a redesign is necessary for the current architectures - Windows, BSD, Linux all scale nicely to at least 8 or 16 logical CPUs in the server world, so the 4, 6 or 8 on the desktop isn't a huge problem. But in 5 years when we have 32 CPUs on the desktop?

IIRC, Linux can scale up to 256 AMD CPUs or 32 Intel CPUs and 1024 CPUs in a cluster (possibly more by now)

Re:waiting (0)

Anonymous Coward | more than 4 years ago | (#31561922)

I hope you don't write to that disk very much or you'll be reinstalling it in about a month.

Re:waiting (1)

lennier (44736) | more than 4 years ago | (#31562044)

That's my big worry with SSDs at the moment. How long is their useful write-cycle life compared to good old magnetic HD?

Re:waiting (0)

Anonymous Coward | more than 4 years ago | (#31561962)

Where did you find an Intel SSD for $175 AUD?

Re:waiting (1)

Spikeles (972972) | more than 4 years ago | (#31562054)

Oh geez, I dunno... maybe a lot of places [staticice.com.au]?

Re:waiting (1)

toastar (573882) | more than 4 years ago | (#31561964)

i3-530
40GB SSD

You just described my new laptop.

My workstation's an X5550 with a real hard drive (15k).

I only wish I could find an i7 MB with a SAS controller; I may have to buy a used one :(

Re:waiting (2, Insightful)

Jimbookis (517778) | more than 4 years ago | (#31561714)

Nature abhors a vacuum. It seems that no matter how much compute power you have, something will always want to snaffle it up. I have a dual Pentium D at work running WinXP with 3GB of RAM. The proprietary 8051 compiler toolset is god-awful slow (and pegs one of the CPUs) compiling even just a few thousand lines of code (tens of seconds, with lots of GUI seizures), because I think for some reason the compiler and IDE are running a crapload of inefficient Python in the backend. Don't even get me started on how long it takes to upload the frickin' binary to the target over JTAG. My debug cycles take far too long. My point is that the compilation of my code base should be done literally in the blink of an eye, but the developers saw fit to use a framework that depends on brute CPU power to do relatively simple stuff. A colleague writes VB.net apps too, and sometimes it's like being back in 1989, watching .NET draw all the elements of the GUI on the screen when you open it or change tabs. Fsck knows how this has come to pass in 2010 and why it's acceptable. So really, blame the programmers for making your beast of a PC slow and keeping you waiting around. This notion of massive language abstraction, wanting to use scripting languages ('coz it's easier, apparently), and just-in-time this and that is what is slowing computers down. And hard disks.

Re:waiting (1)

KibibyteBrain (1455987) | more than 4 years ago | (#31561718)

But that's exactly where changing OS architectures and APIs can help. Right now the default behavior is to start a worker thread of some type that blocks on IO requests and then reports back. Most apps in the wild don't even bloody do this; they just have a few threads do everything, and some even have the main app loop block on IO (let's all pretend we don't see our app windows grey out several times a day!).
We've argued for decades that this was a programmer issue, but that sort of pedantic criticism has accomplished nothing. Arguing that if all programmers were as smart as you, we'd have no problems has been a bad attitude from the OS folks.

Maybe it's time we abandoned the current basic Win32/stdlib-type APIs for ones that are more IO-friendly, so that the easiest and most lazy-friendly way to write code is also the best practice for parallelism, and not the other way around, which has failed miserably.
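
As a rough sketch of the pattern the parent describes (plain C++ threads, not any particular replacement API; post_to_ui_thread is a hypothetical stand-in for whatever message-posting call a real GUI toolkit provides), the blocking read happens on a worker and the UI loop only ever sees a finished result:

    #include <chrono>
    #include <cstdio>
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <thread>

    // Hypothetical stand-in: a real toolkit would marshal this back onto its
    // serial message queue so the result is handled as just another event.
    void post_to_ui_thread(const std::string& text)
    {
        std::printf("document loaded, %zu bytes\n", text.size());
    }

    void open_document_async(const std::string& path)
    {
        // Fire and forget: the caller (the UI thread) returns to its event loop immediately.
        std::thread([path] {
            std::ifstream in(path, std::ios::binary);   // the slow, blocking part
            std::ostringstream buf;
            buf << in.rdbuf();
            post_to_ui_thread(buf.str());               // hand the finished data back
        }).detach();
    }

    int main()
    {
        open_document_async("example.txt");                     // path is illustrative
        std::this_thread::sleep_for(std::chrono::seconds(1));   // crude stand-in for the event loop
        return 0;
    }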

Re:waiting (1)

InsurrctionConsltant (1305287) | more than 4 years ago | (#31561796)

Maybe it's time ... the easiest and most lazy-friendly way to write code is also the best practice for parallelism.

Bingo.

Re:waiting (1)

EvilIdler (21087) | more than 4 years ago | (#31561836)

But why should we ever have to wait for the computer without it popping up some indicator that it is doing I/O, and what sort? One of the most common things I see people frustrated with is the cursor going into hourglass mode (and the program not updating its display) without them knowing why. They yell that the computer is slow and want a new one. The old one might not even be loaded with malware, and the program they were using keeps doing the same thing on the new computer.

Like others have suggested, something like Apple's GCD would help. Any self-contained task which can be split off into its own thread will be, with very little code around it. Fork off that document opening, and return to the main program showing a list of tasks it has initiated.
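
For a concrete taste of the GCD style, here's a minimal sketch against Apple's libdispatch C API (it only builds where <dispatch/dispatch.h> is available, and the "document" work is just a placeholder): the open is tossed onto a global queue, and the completion hops back to the main queue where a real app would update its list of in-flight tasks.

    #include <dispatch/dispatch.h>
    #include <cstdio>

    // Runs back on the main queue once the document is ready.
    static void update_task_list(void*)
    {
        std::puts("document ready");
    }

    // Runs on a background (global) queue: the slow parsing / disk I/O lives here.
    static void open_document(void*)
    {
        std::puts("opening document off the main thread...");
        dispatch_async_f(dispatch_get_main_queue(), nullptr, update_task_list);
    }

    int main()
    {
        dispatch_async_f(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                         nullptr, open_document);
        dispatch_main();   // park the main thread and service the main queue
    }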

Re:waiting (1)

timmarhy (659436) | more than 4 years ago | (#31561984)

It's not hard to do this now under Windows APIs. It's just that no one does. Maybe if university comp sci and IT degrees joined the 21st century, you'd have more graduates who knew this.

Because (1)

the eric conspiracy (20178) | more than 3 years ago | (#31561522)

Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked.

Because it might be waiting for I/O.

Re:Because (3, Insightful)

Anonymous Coward | more than 3 years ago | (#31561568)

Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked.

Because it might be waiting for I/O.

That's no reason for the entire GUI to freeze on Windows when you insert a CD.

Re:Because (1)

toastar (573882) | more than 4 years ago | (#31561982)

That's no reason for the entire GUI to freeze on Windows when you insert a CD.

lol, so true

I hate to say it, but... (0, Troll)

bennomatic (691188) | more than 3 years ago | (#31561530)

...I do a lot more waiting on my XP machine than on my Mac. Almost identical hardware, but when I'm opening an XLS file, Outlook and Word grind to a halt on the PC. Sometimes, closing a window locks up the whole system for 30 seconds. Shutting down takes an eternity, but the only thing worse than that is how slow the system gets after I leave it running for more than 4 days straight.

My Mac, on the other hand, can stay running for months at a time, and maybe once a month I have to force quit an application. But even then, it's to access that application, not anything else.

Re:I hate to say it, but... (1, Insightful)

Anonymous Coward | more than 3 years ago | (#31561590)

I noticed the same on my mac. With a set of eight CPU graph meters in the menu bar, they're almost always evenly pitched anywhere from idle to 100%, with a few notable exceptions like second life, some photoshop filters, and firefox of all things.

When booted into Win, more often than not I have two cores pegged high, and the others idle. Getting even use out of all cores is the exception, not the rule.

Re:I hate to say it, but... (3, Interesting)

drsmithy (35869) | more than 4 years ago | (#31562016)

I noticed the same on my mac. With a set of eight CPU graph meters in the menu bar, they're almost always evenly pitched anywhere from idle to 100%, with a few notable exceptions like second life, some photoshop filters, and firefox of all things.
When booted into Win, more often than not I have two cores pegged high, and the others idle. Getting even use out of all cores is the exception, not the rule.

This is pretty much completely down to the application mix. Windows has no trouble whatsoever scheduling processes and threads to max out 8 (or 16, or whatever) CPUs, but if the applications are only coded to have, say, 1 or 2 "processing" threads, then there's nothing the OS can do to change that.

Re:I hate to say it, but... (4, Insightful)

GIL_Dude (850471) | more than 4 years ago | (#31561802)

Are you running a 9 year old version of OSX too, or are you comparing a two generation old Windows version to a nice new Mac version? It really sounds like you are comparing apples (snicker) to oranges. After all, both Vista and Windows 7 have no problem running for a long, long time between reboots and don't get slow during that time.

Re:I hate to say it, but... (1, Troll)

ceoyoyo (59147) | more than 4 years ago | (#31561834)

Starting out slow isn't really a solution to the "getting slow" problem.

Re:I hate to say it, but... (1, Insightful)

Grem135 (1440305) | more than 4 years ago | (#31561862)

Wow, another Mac fanboy comparing his nice shiny new Mac to an outdated and replaced (two times over) operating system. I bet he will say his iPad will outperform a netbook too. Though the netbook can multitask, run virtually any Windows app, has WiFi, can connect an external DVD drive, and (gasp) can be a color e-book reader just like the iPad!!

Re:I hate to say it, but... (1)

Corporate Troll (537873) | more than 4 years ago | (#31561960)

If it truly is almost identical hardware, I'd say that your XP installation has a problem.

IT MIGHT BE WAITING (0)

Anonymous Coward | more than 3 years ago | (#31561534)

FOR IO.

DON'T LAND THERE.

The problem isn't even that simple (5, Insightful)

indrora (1541419) | more than 3 years ago | (#31561542)

The problem is that most (if not all) peripheral hardware is not parallel in many senses. Hardware in today's computers is serial: You access one device, then another, then another. There are some cases (such as a few good emulators) which use multi-threaded emulation (sound in one thread, graphics in another), but fundamentally the biggest performance kill is the final IRQs that get called to process data. The structure of modern-day computers must change to take advantage of multicore systems.

Re:The problem isn't even that simple (0)

Anonymous Coward | more than 4 years ago | (#31561790)

Also, some tasks just cannot be parallelized.

Multithreading is the problem, not the answer (3, Interesting)

Anonymous Coward | more than 4 years ago | (#31561816)

The Problem with Threads [berkeley.edu] (UC Berkeley's Prof Edward Lee)
How to Solve the Parallel Programming Crisis [blogspot.com]
Half a Century of Crappy Computing [blogspot.com]

The computer industry will have to wake up to reality sooner or later. We must reinvent the computer; there is no getting around this. The old paradigms from the 20th century do not work anymore because they were not designed for parallel processing.

Re:The problem isn't even that simple (1)

lennier (44736) | more than 4 years ago | (#31562052)

Hardware in today's computers is serial: You access one device, then another, then another.

So you don't have packets coming in/out on sound, network, multiple screens, mouse, keyboard, USB drive, webcam, and hard drive simultaneously?

That kernel architect (1, Funny)

Anonymous Coward | more than 3 years ago | (#31561548)

is Probertly right.

Re:That kernel architect (0, Offtopic)

SilverEyes (822768) | more than 3 years ago | (#31561602)

is Probertly right.

is Probertly right?

Grand Central? (3, Insightful)

volfreak (555528) | more than 3 years ago | (#31561554)

Isn't this the reason Apple rolled out Grand Central in Snow Leopard? If so, it seems it's not THAT hard to do - at least not that hard for a non-Windows OS.

Re:Grand Central? (1, Insightful)

larry bagina (561269) | more than 4 years ago | (#31561632)

With .net it should be trivial. Seems more like an education/cultural problem than a technical one.

Re:Grand Central? (2, Insightful)

jonwil (467024) | more than 4 years ago | (#31561850)

The overhead of systems like .NET is part of WHY we have a problem with excessive CPU usage in the first place.

Why? (1)

DoofusOfDeath (636671) | more than 3 years ago | (#31561556)

Why should you ever, with all this parallel hardware, ever be waiting for your computer?

I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem? Or because concurrent computations require coordination at certain points, which is an issue that doesn't exist with single-threaded systems, and it's therefore wishful thinking to assume you'll get linear scaling as you add more cores?

Re:Why? (1)

coolsnowmen (695297) | more than 4 years ago | (#31561746)

Except correct prioritization should allow the user to still do things while "background" processing is done. No one said it needed to be optimal (though the word optimal is meaningless w/o a given criterion); they are just saying it could be better for the user.

Re:Why? (1)

masterzora (871343) | more than 4 years ago | (#31561770)

I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem?

That only means we can't get an absolutely optimal solution in polynomial time. Fortunately, we are able to get a solution arbitrarily close to optimal in polynomial time. Find the correct balance of time vs. optimality and BAM that NP-completeness isn't really a huge concern.

Or because concurrent computations require coordination at certain points, which is an issue that doesn't exist with single-threaded systems, and it's therefore wishful thinking to assume you'll get linear scaling as you add more cores?

Now you're just putting words into his mouth. Nobody's expecting linear scaling, here! That is an entirely different question.

Current architecture flawed but workable BUT.... (4, Interesting)

syousef (465911) | more than 3 years ago | (#31561558)

...the implementation sucks.

Why for example does Windows Explorer decide to freeze ALL network connections when a single URN isn't quickly resolved? Why is it that when my USB drive wakes up, all explorer windows freeze? If you are trying to tell me there's no way using the current abstractions to implement this I say you're mad. For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is. You're left piecing together what has and hasn't been moved. File requests make up a good deal of what we're waiting for. It's not the bus or the drives that are usually the limitation. It's the shitty coding. I can live with a hit at startup. I can live with delays if I have to eat into swap. But I'm sick and tired of basic functionality being missing or broken.

Re:Current architecture flawed but workable BUT... (0)

Anonymous Coward | more than 4 years ago | (#31561696)

>For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is...

Agreed, I can't believe that's still a problem in Win7.

However, you can get around it by just using the built-in RoboCopy instead of Explorer in situations where this is a potential problem. I'm sure there are also 3rd party GUI extensions that do the same thing but RoboCopy is good enough.

Re:Current architecture flawed but workable BUT... (0)

Anonymous Coward | more than 4 years ago | (#31561830)

For Windows 2K/XP you can use SuperCopier (http://supercopier.sfxteam.org/). It replaces the standard copy/move dialog with its own; supports pause/resume; can recover from most failures gracefully; not too bad.

Doesn't support x64 but.

Edit: Support is now there for Vista/7 and X64

Re:Current architecture flawed but workable BUT... (1)

tux0r (604835) | more than 4 years ago | (#31561916)

I find Teracopy [codesector.com] to be an excellent Windows copy dialog replacement. Gives you per-file and per-batch progress bars, better filename collision options, and a status list. Good from XP through to Win7.

Re:Current architecture flawed but workable BUT... (5, Insightful)

Threni (635302) | more than 4 years ago | (#31561772)

Windows explorer sucks. It always just abandons copies after a fail - even if you're moving thousands of files over a network. Yes, you're left wondering which files did/didn't make it. It's actually easier to sometimes copy all the files you want to shift locally, then move the copy, so that you can resume after a fail. It's laughable you have to do this, however.

But it's not a concurrency issue, and neither, really, are the first 2 problems you mention. They're also down to Windows Explorer sucking.

Re:Current architecture flawed but workable BUT... (1)

The MAZZTer (911996) | more than 4 years ago | (#31561870)

Power users use robocopy.exe to copy lots of files. Shows you progress without taking 5 minutes beforehand to count all the files first, automatically retries failed transfers, control over files to transfer or not based on name/size/date/etc criteria, support to sync or mirror folders, etc.

Re:Current architecture flawed but workable BUT... (0)

cdrnet (1582149) | more than 4 years ago | (#31561874)

That's called a transaction, and it's a good thing (except the part where it doesn't tell you exactly why it failed). I'd hate it if it didn't behave like that.

Re:Current architecture flawed but workable BUT... (1)

The MAZZTer (911996) | more than 4 years ago | (#31561852)

Often times Explorer will hang while waiting for I/O over the network to complete. Usually when I accidentally drag some files briefly over a folder symlinked to a network folder. Other times when I'm just scrolling down a list of folders on a remote machine I get lots of hitching. The drives are slow but this is really no excuse for the poor performance on THIS machine. This is Windows 7 btw.

Re:Current architecture flawed but workable BUT... (4, Insightful)

Kenz0r (900338) | more than 4 years ago | (#31561950)

I wish I could mod you higher than +5, you just summed up some of the things that bother me most about the OS that is somehow still the most popular desktop OS in the world.

To anyone using Windows (XP, Vista or 7) right now: go ahead and open up an Explorer window, and type in ftp:// [ftp] followed by any URL.
Even when it's a name that obviously won't resolve, or an IP on your very own local network for a machine that just doesn't exist, this'll hang your Explorer window for a couple of solid seconds. If you're a truly patient person, try doing that with a name that does resolve, like ftp://microsoft.com [microsoft.com]. Better yet, try stopping it... say goodbye to your explorer.exe.

This is one of the worst user experiences possible, all for a mundane task like using ftp. And this has been present in Windows for what, a decade?

Re:Current architecture flawed but workable BUT... (1)

shoehornjob (1632387) | more than 4 years ago | (#31561952)

AAH, you must be using Vista (the dysfunctional redneck of operating systems). Some of this was actually fixed (gasp) in Windows 7. I'm still amazed that stuff actually works. For example, if you use the search panel at the bottom of the Start menu, it actually finds the program you want without going through a bunch of useless menus. This is especially helpful to me as I support a bunch of end users who don't know their computers. If you get a 404 error, sometimes the "diagnose" button will actually restore your connection. I've even seen it recover from a 169.x.x.x IP address (kiss of death for tech support). Anyway, get Win 7 and you'll be happy.

Re:Current architecture flawed but workable BUT... (2, Informative)

drsmithy (35869) | more than 4 years ago | (#31562038)

For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is.

You can as of Vista.

Dumb programmers (2, Insightful)

Sarten-X (1102295) | more than 3 years ago | (#31561578)

You wait because some programmer thought it was more important to have animated menus than a fast algorithm. You wait because someone was told "computers have lots of disk space." You wait because the engineers never tested their database on a large enough scale. You wait because programmers today are taught to write everything themselves, and to simply expect new hardware to make their mistakes irrelevant.

Re:Dumb programmers (0)

Anonymous Coward | more than 4 years ago | (#31561634)

Claiming X programmers are dumb doesn't make you any smarter. Go write an operating system then you can talk. This isn't even about programmers today, this is about programmers in the 1980s when the operating systems we currently use were first designed.

Re:Dumb programmers (0)

Anonymous Coward | more than 4 years ago | (#31561720)

(replying to myself)
To tell the truth, what you just described has nothing to do with the problem we are discussing anyway. It doesn't even make sense, there's no reason one would have to choose between animations and a fast algorithm. I don't think you're even a programmer....

Re:Dumb programmers (1)

Sarten-X (1102295) | more than 4 years ago | (#31561864)

From TFA:

The problem is today's desktop programs don't use the multiple cores efficiently enough, Probert said.

If you have a program that needs to do X things at once, it'll run best on multiple cores. You can't avoid that fact. TFA makes references to eliminating the abstraction of a process, giving each program only a single core. My point is that that won't help at all if the idiotic programmers of this world [thedailywtf.com] still can't write a decent program.

It doesn't matter if you can run 50 things at once, if the programs you use are written poorly. They'll still try to run in a single thread, doing a single thing, and preferring shiny buttons over actual function. I/O will have to wait for screen updates. Until I start seeing consistently better programs, I'll continue to assume programmers are dumb.

It's great to push for better scheduling, however you want to do it, but having both better scheduling and improved efficiency will still be better. I personally think that redesigning the OS concepts is a poor choice for improvement.

Re:Dumb programmers (2, Insightful)

Anonymous Coward | more than 4 years ago | (#31561676)

Not true. You wait because management fast-tracks stuff out the door without giving developers enough time to code things properly, and management ignores developer concerns in order to get something out there now that will make money at the expense of the end user. I have been coding a long time and have seen this over and over. Management doesn't care about customers or let developers code things correctly - they only care about $$$$$$$

Re:Dumb programmers (1)

Sarten-X (1102295) | more than 4 years ago | (#31561896)

I agree, management's got a lot to do with it. I have more of a personal vendetta against bad programmers, but I'm open to diversion. Who hires the programmers with half a bachelor's degree, because they're cheaper? Who pushes for a shiny UI in favor of extensive testing? Bad developers write bad code, and bad management encourages it.

Luckily OSX Already Has MultiCore Tech (1, Informative)

Shuh (13578) | more than 4 years ago | (#31561624)

It's called Grand Central Dispatch. [wikipedia.org]

Re:Luckily OSX Already Has MultiCore Tech (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31561752)

And if you knew what it did, you'd know it isn't going to help.

Re:Luckily OSX Already Has MultiCore Tech (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31561754)

It's called Grand Central Dispatch. [wikipedia.org]

Despite having a name and a Wikipedia page, it's not doing a good enough job.

Re:Luckily OSX Already Has MultiCore Tech (2, Insightful)

Sc4Freak (1479423) | more than 4 years ago | (#31561822)

I'm not sure I get it - GCD just looks like a threadpool library. Windows has had a built-in threadpool API [microsoft.com] that's been available since Windows 2000, and it seems to do pretty much the same thing as GCD.
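
For comparison, a bare-bones sketch of that Win32 call (QueueUserWorkItem, available since Windows 2000); the work item itself is just a placeholder.

    #include <windows.h>
    #include <cstdio>

    // The pool invokes this on one of its worker threads.
    static DWORD WINAPI crunch(LPVOID param)
    {
        std::printf("working on item %d\n", static_cast<int>(reinterpret_cast<INT_PTR>(param)));
        return 0;
    }

    int main()
    {
        for (INT_PTR i = 0; i < 4; ++i)
            QueueUserWorkItem(crunch, reinterpret_cast<LPVOID>(i), WT_EXECUTEDEFAULT);

        Sleep(1000);   // crude: give the pool threads time to run before the process exits
        return 0;
    }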

Re:Luckily OSX Already Has MultiCore Tech (0)

Anonymous Coward | more than 4 years ago | (#31561912)

It's obviously broken... (not the link)

Re:Luckily OSX Already Has MultiCore Tech (1)

PhrostyMcByte (589271) | more than 4 years ago | (#31561972)

Sort of. It's a little higher-level and integrates better with the languages.

The real equivalents for Windows are being introduced with Visual Studio 2010 with the Concurrency Runtime for VC++ and the Parallel Framework for .NET. From what I've seen of GCD, these go a few steps past it and provide a pretty extensive set of operations that easily differentiate it from simple thread pooling.
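
For reference, a tiny sketch of that Concurrency Runtime style (PPL's concurrency::parallel_for, shipped with Visual Studio 2010); the loop body is purely illustrative.

    #include <ppl.h>
    #include <cstddef>
    #include <vector>

    int main()
    {
        std::vector<double> data(1000000, 1.0);

        // Iterations are spread across the runtime's scheduler rather than hand-rolled threads.
        concurrency::parallel_for(std::size_t(0), data.size(), [&](std::size_t i) {
            data[i] = data[i] * 2.0 + 1.0;
        });

        return 0;
    }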

Re:Luckily OSX Already Has MultiCore Tech (1)

ceoyoyo (59147) | more than 4 years ago | (#31561992)

It's a system level thread pool library, along with a nice interface for sending off little bits of code to the pool.

Re:Luckily OSX Already Has MultiCore Tech (1)

PhrostyMcByte (589271) | more than 4 years ago | (#31561846)

What he is trying to do is get enough cores on a CPU so that each thread or process can run on its own core. He essentially wants to remove the scheduler from the OS, so that there would be no time slices -- stuff would just run straight with no context switching. This is entirely different from GCD, which is an implementation of task-based parallelism backed by thread pools.

This would really only work on CPUs with a few thousand cores, and even then the CPUs would need to have some very intelligent power management for cores that aren't being used, or are in use but waiting on something like I/O.

Re:Luckily OSX Already Has MultiCore Tech (0)

Anonymous Coward | more than 4 years ago | (#31562030)

Grand Central is a nice way to tide things over for a while, but not a satisfactory answer to the problem. I've been having the honor of interacting with some of the finest minds now working on the problem of multicore and massive parallelism, and everyone is still struggling with it. And yes, there are plenty of Macs around, and they'd be building off of Grand Dispatch if it really was a great answer to the question.

When I was last interested in GC, Apple hadn't released technical docs; I've been skimming them over just now and it seems unwieldy and just plain ugly -- it abstracts things away from the hardware, while creating a multi-level design that's much more confusing than need be. (Not to mention, they'll never make it cross-platform.)

One design I've recently been introduced to pools the actual hardware threads and uses a caller/callee hierarchical relationship for establishing the distribution of work processes. Although I have questions about that design as well, it is much cleaner than GC and far more intuitive. I think it has some small chance of leading to about as good of a solution as we'll get in the near future, whereas GC seems like a very slapped-together dead end. I'd skip the braggadocio about GC if I was you -- at least around anybody who's actively working on solutions to the problem.

reinventing the wheel (4, Interesting)

pydev (1683904) | more than 4 years ago | (#31561670)

Microsoft should go back and read some of the literature on parallel computing from 20-30 years ago. Machines with many cores are nothing new. And Microsoft could have designed for it if they hadn't been busy re-implementing a bloated version of VMS.

Very weak presentation (1)

gtoomey (528943) | more than 4 years ago | (#31561726)

This is a very weak talk to give at a university. Rather than talking about 'parallel programming' and adding an "It Sucks" button, I would expect a discussion of CSP http://en.wikipedia.org/wiki/Communicating_sequential_processes [wikipedia.org] or perhaps hard real-time to guarantee responsiveness. This is the indoctrination you get when you work for Microsoft: you start spruiking low-level marketing mumbo-jumbo to a very technical audience.

10^10 CPUs and I still have to wait ... (1)

mi (197448) | more than 4 years ago | (#31561728)

... for NFS to give up on a disconnected server... By the original design and the continuing default settings, the stuck processes are neither killable nor interruptible. You can reboot the whole system, but you can't kill one process.

Hurray for the OS designers!

A more basic question (1)

michaelmalak (91262) | more than 4 years ago | (#31561736)

I have a more basic question.

Computers past and present -- Atari 8-bit, Atari ST, iPhone -- have had "instant on", so why does Windows not have this yet? This goes back to the lost decade [slashdot.org]. What has Microsoft been doing since XP was released?

Re:A more basic question (1)

ceoyoyo (59147) | more than 4 years ago | (#31562000)

XP had a suspend option, didn't it? And Vista and Windows 7 definitely do.

Whether it works or not is another question.

Microkernel? (1)

Enry (630) | more than 4 years ago | (#31561768)

I'm not an OS designer, so I'll admit to possibly being wrong.

Doesn't a microkernel split parts of the kernel into individual processes? In the case of a multicore system, different parts of the OS can be running on different cores at the same time. So inserting a CD doesn't cause the display to freeze, since each is running on a different core.

Re:Microkernel? (1)

MichaelSmith (789609) | more than 4 years ago | (#31561810)

I agree that a microkernel gives you architectural advantages, but I don't believe you should have to do that to avoid a display freeze when a CD is inserted.

Re:Microkernel? (1, Interesting)

Anonymous Coward | more than 4 years ago | (#31561878)

As someone who has tried to make Minix 3 suck less: microkernel doesn't imply well suited to multiprocessing, but it can help. Minix 3, for example, has disk drivers, network, filesystem, etc. as separate processes, but because so many operations depend on the file server, and the file server implementation is mostly synchronous and single-threaded, IO will cause the entire system to appear to lock up. It would be possible to fix this, of course, but it's not necessarily easy.

Re:Microkernel? (2, Interesting)

Amanieu (1699220) | more than 4 years ago | (#31561994)

Actually, most current monolithic kernels are multithreaded, so they can have one thread working on reading that CD while another thread handles user input, etc. The only difference from microkernels is that it's all in a single address space.

4096 processors not enough? (1, Insightful)

macemoneta (154740) | more than 4 years ago | (#31561798)

The largest single system image I'm aware of runs Linux on a 4096-processor SGI machine with 17TB of RAM [google.com]. Maybe he means that Windows needs rework?

BeOS was doing it... (1)

mister_playboy (1474163) | more than 4 years ago | (#31561884)

BeOS was working on this well before multicore CPUs were the norm on the desktop, and the level of responsiveness it managed on hardware that is stone-age tech by today's standards was extremely impressive. Haiku will be picking up where BeOS left off, but it's got a lot of catching up to do on the details, big and small, to become an everyday user's system.

Yet another innovative player that Microsoft extinguished, and the whole tech world is worse off because of it. :(

Duh (3, Funny)

Waffle Iron (339739) | more than 4 years ago | (#31561910)

Why should you ever, with all this parallel hardware, ever be waiting for your computer?'

For a lot of problems, for the same reason that some guy who just married 8 brides will still have to wait for his baby.

Re:Duh (1)

Aphoxema (1088507) | more than 4 years ago | (#31561986)

Why should you ever, with all this parallel hardware, ever be waiting for your computer?'

For a lot of problems, for the same reason that some guy who just married 8 brides will still have to wait for his baby.

Of course, he'll be able to get 8 babies at once, assuming none of the processes crash during the computation.

The problem: the event-driven model (4, Informative)

Animats (122034) | more than 4 years ago | (#31561934)

A big problem is the event-driven model of most user interfaces. Almost anything that needs to be done is placed on a serial event queue, which is then processed one event at a time. This prevents race conditions within the GUI, but at a high cost. Both the Mac and Windows started that way, and to a considerable extent, they still work that way. So any event which takes more time than expected stalls the whole event queue. There are attempts to fix this by having "background" processing for events known to be slow, but you have to know which ones are going to be slow in advance. Intermittently slow operations, like a DNS lookup or something which infrequently requires disk I/O, tend to be bottlenecks.
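
A schematic sketch of that workaround: a potentially slow operation (DNS here) is pushed off the serial event queue and its result comes back later as just another event. Event, EventQueue, and resolve_host are hypothetical stand-ins rather than any particular toolkit's API.

    #include <chrono>
    #include <cstdio>
    #include <string>
    #include <thread>

    // Hypothetical stand-ins for whatever the real GUI framework provides.
    struct Event {
        std::string name;
        std::string payload;
    };

    struct EventQueue {
        // Assumed: in a real toolkit this is a thread-safe enqueue onto the serial UI queue.
        void post(const Event& e) { std::printf("event: %s %s\n", e.name.c_str(), e.payload.c_str()); }
    };

    // Assumed: a blocking DNS lookup that may take seconds; faked here.
    static std::string resolve_host(const std::string&) { return "192.0.2.1"; }

    void handle_open_url(EventQueue& ui, const std::string& host)
    {
        // Instead of resolving inline (and stalling every event queued behind this one),
        // hand the lookup to a worker and let the result arrive later as a normal event.
        std::thread([&ui, host] {
            std::string addr = resolve_host(host);   // the potentially slow part
            ui.post({"dns-resolved", addr});         // the GUI stays responsive meanwhile
        }).detach();
    }

    int main()
    {
        EventQueue ui;
        handle_open_url(ui, "example.org");
        std::this_thread::sleep_for(std::chrono::seconds(1));   // crude stand-in for the event loop
        return 0;
    }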

Most languages still handle concurrency very badly. C and C++ are clueless about concurrency. Java and C# know a little about it. Erlang and Go take it more seriously, but are intended for server-side processing. So GUI programmers don't get much help from the language.

In particular, in C and C++, there's locking, but there's no way within the language to even talk about which locks protect which data. Thus, concurrency can't be analyzed automatically. This has become a huge mess in C/C++, as more attributes ("mutable", "volatile", per-thread storage, etc.) have been bolted on to give some hints to the compiler. There's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency.
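
To make that point concrete, a small illustrative C++ fragment (Account is a made-up example): nothing in the language ties the mutex to the fields it is supposed to guard, so the association lives only in a comment and in caller discipline, which is exactly why automatic analysis is so hard.

    #include <mutex>
    #include <string>

    struct Account {
        std::mutex m;            // by convention only: "protects" balance and owner below
        double balance = 0.0;
        std::string owner;
    };

    void deposit(Account& a, double amount)
    {
        std::lock_guard<std::mutex> lock(a.m);   // a disciplined caller takes the lock
        a.balance += amount;
    }

    void rename_owner(Account& a, const std::string& name)
    {
        a.owner = name;          // compiles fine yet races silently: nothing tells the
    }                            // compiler that a.m was supposed to be held here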

We need better hard-compiled languages that don't punt on concurrency issues. C++ could potentially have been fixed, but the C++ committee is in denial about the problem; they're still in template la-la land, adding features few need and fewer will use correctly, rather than trying to do something about reliability issues. C# is only slightly better; Microsoft Research did some work on "Polyphonic C#" [psu.edu] , but nobody seems to use that. Yes, there are lots of obscure academic languages that address concurrency. Few are used in the real world.

Game programmers have more of a clue in this area. They're used to designing software that has to keep the GUI not only updated but visually consistent, even if there are delays in getting data from some external source. Game developers think a lot about systems which look consistent at all times, and come gracefully into synchronization with outside data sources as the data catches up. Modern MMORPGs do far better at handling lag than browsers do. Game developers, though, assume they own most of the available compute resources; they're not trying to minimize CPU consumption so that other work can run. (Nor do they worry too much about not running down the battery, the other big constraint today.)

Incidentally, modern tools for hardware design know far more about timing and concurrency than anything in the programming world. It's quite possible to deal with concurrency effectively. But you pay $100,000 per year per seat for the software tools used in modern CPU design.

McVoy's foresight (0)

Anonymous Coward | more than 4 years ago | (#31562006)

Larry McVoy argued for a pretty fundamental shift about 7 years ago. Think what you will of the guy and his company, I'm just saying...

Sadly, I can not find a link right now.

The way computers operate is to blame (1)

master_p (608214) | more than 4 years ago | (#31562014)

The real reason behind the problem is that the way a computer operates is totally inappropriate for parallelism. The concept of data moving through a bus to a processing core is totally at odds with parallelism.
We do see tremendous parallelism around us. Why? Because, in the real world, there is no bus to move the data over, and there is no central core! In the real world, each object is its own CPU! If reality were like a computer, all objects would have to be moved to a special place in order to be processed!
If we could take a hint from nature...in our bodies, it's not data that are moved around, it's commands that travel on our "buses", i.e. our nervous system!

Clearly this is a windows issue to note... (1)

3seas (184403) | more than 4 years ago | (#31562026)

....for those buying Windows 7.

And is this not an admission of Microsoft's continued failure to properly support the hardware it runs on?

I joke at work that the reason I sometimes have to select tools twice in AutoCAD is that each of the dual processors figures the other one is doing it, but when I pick the tool twice, they run out of excuses and do it... mostly.

Now I know... it's not a joke...

Would Plan 9 suit the bill? (1)

MagikSlinger (259969) | more than 4 years ago | (#31562056)

Plan 9 was designed around the idea of completely separate processes that could be running on separate CPUs. Why not start there?