
Apple Freezes Snow Leopard APIs

kdawson posted more than 5 years ago | from the see-you-at-the-base-camp dept.

Operating Systems

DJRumpy writes in to alert us that Apple's new OS, Snow Leopard, is apparently nearing completion. "Apple this past weekend distributed a new beta of Mac OS X 10.6 Snow Leopard that altered the programming methods used to optimize code for multi-core Macs, telling developers they were the last programming-oriented changes planned ahead of the software's release. ...Apple is said to have informed recipients of Mac OS X 10.6 Snow Leopard build 10A354 that it has simplified the... APIs for working with Grand Central, a new architecture that makes it easier for developers to take advantage of Macs with multiple processing cores. This technology works by breaking complex tasks into smaller blocks, which are then... dispatched efficiently to a Mac's available cores for faster processing."
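For readers wondering what that block-based model looks like in code, here is a minimal sketch in C based on the GCD calls Apple later documented (dispatch_get_global_queue and dispatch_apply); the "work" done inside the block is a made-up stand-in, not anything from the beta itself.

// Sketch of block-based dispatch with Grand Central Dispatch's C API.
// Builds on Mac OS X 10.6 with a blocks-capable compiler (e.g. clang -fblocks).
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t n = 8;
    double *results = malloc(n * sizeof *results);

    /* A concurrent global queue; the runtime decides how many cores to use. */
    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* dispatch_apply submits n blocks and waits for all of them; the
       iterations are spread across the machine's available cores. */
    dispatch_apply(n, q, ^(size_t i) {
        results[i] = (double)i * (double)i;   /* stand-in for a real chunk of work */
    });

    for (size_t i = 0; i < n; i++)
        printf("%zu -> %f\n", i, results[i]);
    free(results);
    return 0;
}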


Title is misleading (-1, Offtopic)

tyrione (134248) | more than 5 years ago | (#27919337)

I'll leave it up to the author to see how it's incorrect.

Why is multicore programming so hard? (2, Insightful)

Anonymous Coward | more than 5 years ago | (#27919351)

Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

And I heard functional languages like Lisp/Haskell are good at these multi-core tasks, is that true?

Re:Why is multicore programming so hard? (5, Informative)

A.K.A_Magnet (860822) | more than 5 years ago | (#27919525)

Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

The problem is shared memory, not multi-processor or multi-core itself. Graphics cards have dedicated memory or reserve a chunk of the main memory.
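To make that concrete, here's a minimal pthreads sketch (hypothetical names, nothing platform-specific) of why shared memory is the hard part: without the mutex, the two threads race on the shared counter and updates get lost.

// Minimal illustration of the shared-memory problem with POSIX threads.
// Without the mutex, the two threads race on `counter` and updates get lost.
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* remove this lock/unlock pair and the */
        counter++;                    /* final count is usually well short of 2,000,000 */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* 2000000 with the lock held */
    return 0;
}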

And I heard functional languages like Lisp/Haskell are good at these multi-core tasks, is that true?

It is true, because they privilege immutable data structures which are safe to access concurrently.

Re:Why is multicore programming so hard? (5, Insightful)

Trepidity (597) | more than 5 years ago | (#27919579)

And I heard functional languages like Lisp/Haskell are good at these multi-core tasks, is that true?

It is true, because they privilege immutable data structures which are safe to access concurrently.

Only partly true. Even in pure functional languages like Haskell, the functional-programming dream of automatic parallelization is nowhere near here yet; in theory the compiler could just run a bunch of thunks of code in parallel, or speculatively, or whatever it wants, but in practice the overhead of figuring out which ones are worth splitting up has doomed all the efforts so far. It does make some kinds of programmer-specified parallelism easier; probably the most interesting experiments in that direction, IMO, are Clojure [clojure.org]'s concurrency primitives (Clojure's a Lisp-derived language with immutable data types, targeting the JVM).

Lisp, FWIW, doesn't necessarily privilege immutable data structures, and isn't even necessarily used in a functional-programming style; "Lisp" without qualifiers often means Common Lisp, in which it's very common to use mutable data structures and imperative code.

Re:Why is multicore programming so hard? (5, Interesting)

A.K.A_Magnet (860822) | more than 5 years ago | (#27919655)

I know it was only partially true; I should remember not to be too lazy when posting on /. :)

Note that I was not talking about automatic parallelization, which is indeed possible only with pure languages (and GHC is experimenting with it), but simply about the fact that it is easier to parallelize an application with immutable data structures, since you need to care a lot less about synchronization. For instance, the Erlang actor model (also found in other languages, like Scala on the JVM) still requires the developer to define the tasks to be parallelized, yet immutable data structures make the developer's life a lot easier with respect to concurrent access and usually provide better performance.

My "It is true" was referring to "functional languages" which do usually privilege immutable data structures, not to Haskell or Lisp specifically (which as you said has many variants with mutable structures focused libraries). As you said, Clojure is itself a Lisp-1 and it does privilege immutable data structures and secure concurrent access with Refs/STM or agents. What is more interesting in the Clojure model (compared to Scala's, since they are often compared even though their differences, as functional languages and Java challengers on the JVM) is that it doesn't allow unsafe practices (all must be immutable except in variables local to a thread, etc).

Interesting times on the JVM indeed.

Re:Why is multicore programming so hard? (5, Interesting)

Trepidity (597) | more than 5 years ago | (#27919713)

Yeah that's fair; I kind of quickly read your post (despite it being only one sentence; hey this is Slashdot!) so mistook it for the generic "FP means you get parallelization for free!" pipe dream. :)

Yeah, I agree that even if the programmer has to specify parallelism, having immutable data structures makes a lot of things easier to think about. The main trend that still seems to be in the process of being shaken out is to what extent STM will be the magic bullet some people are proposing it to be, and to what extent it can be "good enough" as a concurrency model even in non-functional languages (e.g. a lot of people are pushing STM in C/C++).
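For a rough idea of what "STM in C/C++" boils down to, here's a tiny sketch of the optimistic read-compute-retry loop at the heart of the idea, using GCC's __sync compare-and-swap builtin; a real STM tracks whole read and write sets across many variables, so this only shows the single-word flavor of it.

// The flavor of STM's optimistic concurrency in plain C: read a snapshot,
// compute a new value, and retry if another thread changed it meanwhile.
// Real STM generalizes this to whole read/write sets of many variables.
#include <stdio.h>

static long balance = 100;

static void deposit(long amount) {
    long old, updated;
    do {
        old = __sync_fetch_and_add(&balance, 0);   /* atomic read of current value */
        updated = old + amount;                    /* the "transaction body" */
    } while (!__sync_bool_compare_and_swap(&balance, old, updated));  /* commit or retry */
}

int main(void) {
    deposit(42);
    printf("balance = %ld\n", balance);
    return 0;
}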

Re:Why is multicore programming so hard? (2, Insightful)

Lumpy (12016) | more than 5 years ago | (#27920093)

Only the CRAPPY video cards use any of the main memory. Honestly, with how cheap real video cards are, I can't believe anyone would intentionally use a memory-sharing video card.

It's like the junk winmodems of yore. DON'T BUY THEM.

Re:Why is multicore programming so hard? (3, Informative)

AndrewNeo (979708) | more than 5 years ago | (#27920779)

Unfortunately that's not the issue at hand. You're referring to the video card using system RAM for its own use, but the issue they're talking about (which only occurs in the 32-bit world, not 64-bit, due to the MMU) is that, to address the memory on the video card, it has to be put into the same 32-bit addressable space as the RAM, which cuts into how much of the RAM you can actually use. At least, that's how I understand it works.

Re:Why is multicore programming so hard? (2, Informative)

moon3 (1530265) | more than 5 years ago | (#27920155)

Heavily parallelized tasks can also be accelerated by utilizing CUDA and your GPU; even the cheap GPUs of today have some 128-512 SPU cores.

What do you think the GPU-driven supercomputer fuss is all about?

Fag crashes trolley while texting (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#27920661)

http://abcnews.go.com/US/story?id=7561561&page=1 [go.com]

Well MBTA, I hope your liberal Jim Crow hiring practices were worth $9.6M in damaged property and 46 lawsuits...

Re:Why is multicore programming so hard? (1)

mdwh2 (535323) | more than 5 years ago | (#27920881)

The problem is shared-memory, not multi-processor or core itself. Graphics card have dedicated memory or reserve a chunk of the main memory.

Surely it's just as feasible with CPU SMP to reserve a chunk of memory for each thread, instead of sharing memory? AIUI that's a standard way of avoiding locking issues when doing SMP programming.

Re:Why is multicore programming so hard? (1)

A.K.A_Magnet (860822) | more than 5 years ago | (#27920949)

We call that processes and the fork model.
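In other words, something like this minimal sketch: after fork() each process has its own copy of memory, so there is nothing to lock, but also nothing shared to communicate through without pipes or the like.

// The fork model: after fork(), parent and child have separate copies of
// memory, so the child's writes are invisible to the parent (no locking
// needed, but also no cheap sharing).
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 1;
    pid_t pid = fork();
    if (pid == 0) {            /* child */
        x = 99;                /* modifies the child's private copy only */
        _exit(0);
    }
    waitpid(pid, NULL, 0);     /* parent waits for the child */
    printf("parent still sees x = %d\n", x);   /* prints 1 */
    return 0;
}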

Re:Why is multicore programming so hard? (1)

Tenebrousedge (1226584) | more than 5 years ago | (#27919533)

No, they haven't, with few exceptions. Doing multiple things at the same time isn't really the issue here; we're trying to figure out how to effectively split one task between multiple 'workers'. Video games are one of the harder places to try to apply this technique, because they run in real time and are also constantly responding to user input. Video encoding is the opposite. One of the big problems with multicore is coordinating the various worker threads.

You could learn a lot by taking the time to read the Wikipedia article on multicore.

Re:Why is multicore programming so hard? (1)

DJRumpy (1345787) | more than 5 years ago | (#27919883)

I see a lot of talk about programming data structures, but what if they're tackling this at a much lower level? Taken to a simple extreme description, each core is simply processing a single task at a time for X number of clock cycles or less (although I understand they can process multiple tasks via pipelining or some such). There is already a balancing act between the CPU and memory, as it has to sync I/O between the two due to differing clock speeds. What if they are doing something similar here?

By that I mean tackling this at a low level, so that the bits it breaks up among the worker 'cores' are extremely small pieces of instruction out of the larger whole: not whole chunks of data structures or routines themselves, but rather individual instructions.

Apologies if I'm butchering this horribly, as I'm not a low-level programmer. I feel like I'm stumbling about verbally, trying to figure out how to say it.

Re:Why is multicore programming so hard? (3, Informative)

DrgnDancer (137700) | more than 5 years ago | (#27920213)

I'm by no means a multiprocessing expert, but I suspect the problem with your approach is in the overhead. Remember that the hardest part of multiprocessing, as far as the computer is concerned, is making sure that all the right bits of code get run in time to provide their information to the other bits of code that need it. The current model of multi-CPU code (as I understand it) is to have the programmer mark the pieces that are capable of running independently (either because they don't require outside information, or they never run at the same time as other pieces that need the information they access/provide) and tell the program when to spin off these modules as separate threads and where it will have to wait for them to return information.
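In pthreads terms, that hand-annotated model looks roughly like the sketch below (the names are hypothetical): the programmer decides where a piece can be split off (pthread_create) and where its result is needed (pthread_join).

// The hand-annotated model described above: the programmer decides which
// piece can run independently (create) and where the result is needed (join).
#include <pthread.h>
#include <stdio.h>

static void *independent_piece(void *arg) {
    long *sum = arg;
    for (long i = 0; i < 1000; i++)
        *sum += i;             /* touches only its own data, so it's safe to split off */
    return NULL;
}

int main(void) {
    long partial = 0;
    pthread_t t;
    pthread_create(&t, NULL, independent_piece, &partial);  /* "spin off" point */

    /* ... main thread does other work here ... */

    pthread_join(t, NULL);     /* "wait for it to return information" point */
    printf("partial sum = %ld\n", partial);
    return 0;
}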

What you're talking about would require the program to break out small chunks of itself, more or less as it sees fit, whenever it sees an opportunity to save some time by running in parallel. This first requires the program to have some level of analytical capability for its own code (let's say we have two if statements, one right after the other: can they be run concurrently, or does the result of the first influence the second? What about two function calls in a row?). The program would also have to erect mutex locks around each piece of data it uses, just to be sure that it doesn't cause deadlocks if it misjudges whether two particular pieces of code can in fact run simultaneously.

It also seems to me (again, I'm not an expert) that you'd spend a lot of time moving data between CPUs. As I understand it, one of the things you want to avoid in parallel programming is having a thread "move" to a different CPU. This is because all of the data for the thread has to be moved from the cache of the first CPU to the cache of the second, a relatively time-consuming task. Multicore CPUs share level 2 cache, I think, which might alleviate this, but the stuff in level 1 still has to be moved around, and if the move is off-die, to another CPU entirely, then it doesn't help. In your solution I see a lot of these moves being forced. I also see a lot of "Chunk A and Chunk B provided data to Chunk C. Chunk A ran on CPU1, Chunk B on CPU2, and Chunk C has to run on CPU3, so it has to get the data out of the caches of the other two CPUs".

Remember that data access isn't a flat speed. L1 is faster than L2, which is much faster than RAM, which is MUCH faster than I/O buses. Any time data has to pass through RAM to get to a CPU, you lose time. With lots of little chunks running around getting processed, the chances of having to move data between CPUs go up a lot. I think you'd lose more time on that than you'd gain over just letting the bits all run on the same CPU.

Re:Why is multicore programming so hard? (1)

DJRumpy (1345787) | more than 5 years ago | (#27920349)

I think I understand what you're saying, and it makes sense.

A = 1
A = A + 1
If you ran the first line on Core 1 and the second on Core 2, how would it know that the second line would need to be processed after the first (other than its place in the code itself)?

I wonder if they are using this parallel processing only for isolated threads, then? I thought any modern OS already did this. Does anyone know exactly how they are tweaking the OS to better multitask among cores (in semi-technical layman's terms)? I wonder if this technology is being openly discussed in detail around the net: not guesswork, but actual technical knowledge as to what they've done.

Re:Why is multicore programming so hard? (1)

DrgnDancer (137700) | more than 5 years ago | (#27920793)

On top of the fact that Core 2 needs to somehow know not to run the instruction until after Core 1 runs its preceding instruction, you're also moving the value of A from Core 1 to Core 2. Normally, when one instruction follows another and the same variable is used, that variable's value is cached in CPU level 1 cache. It's almost instantly accessed. In your example you have to move it between Cores; that means it has to go out to CPU level 2 cache from Core 1's L1, and back into Core 2's L1 so it can be accessed for the instruction. That's going to take A LOT more time than simply running the ADD () on the same core would have. Now imagine that Core 2 is busy and you had to move to Core 3 (on a different chip). Since Cores 1 and 3 don't share L2 cache, the value of A has to move through RAM to get to the new core. LOTS slower.

The way it mostly works now, unless things have improved in the last few years since I last looked at this seriously, is that I, as a programmer, have to determine what parts are safe to run in parallel. The most common example you see is very large matrix calculations. The larger matrix can be broken down into a number of equal-sized smaller matrices, which are then sent to the other CPUs as threads. Since they are all working on different pieces, none of them care what the others are doing. The master thread then simply waits for all of the sub-threads to give it their answers and combines them into the final answer. Obviously this is a pretty specialized algorithm (though more common than you'd think), but it serves as a simple example. The matrix calculations are very computationally intensive, so the overhead of moving data around is more than made up for by the time saved dividing the task up.

Basically what you do as a programmer is go through your code (or more properly your algorithm; you should do this as part of your design) and figure out places that will (a) be helped by concurrency and (b) are safe to run concurrently. By safe it's generally meant that the same pieces of data won't be accessed at the same time. This could be because the threads are working on different pieces of data (like in the matrix example), or because you've somehow ensured that the data is protected. By "helped by concurrency" it's generally meant that the overhead of moving data between CPUs (and managing the threads in general) is less than the time saved by breaking the task out.
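As a rough illustration of the matrix example above, here's a sketch in C with pthreads (dimensions and names made up): each worker gets its own band of rows, so the threads never touch the same output data, and the master simply joins them all.

// Sketch of the "split a big matrix across threads" pattern: each worker
// computes its own band of rows of a matrix-vector product, so no two
// threads ever write the same output element.
#include <pthread.h>
#include <stdio.h>

#define N        512
#define NTHREADS 4

static double A[N][N], x[N], y[N];

struct band { int first_row, last_row; };

static void *worker(void *arg) {
    struct band *b = arg;
    for (int i = b->first_row; i < b->last_row; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += A[i][j] * x[j];
        y[i] = sum;            /* each thread writes only its own rows of y */
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    struct band bands[NTHREADS];

    for (int t = 0; t < NTHREADS; t++) {
        bands[t].first_row = t * (N / NTHREADS);
        bands[t].last_row  = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, worker, &bands[t]);
    }
    for (int t = 0; t < NTHREADS; t++)   /* master waits; here the combined */
        pthread_join(tid[t], NULL);      /* answer is already assembled in y */

    printf("y[0] = %f\n", y[0]);
    return 0;
}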

Hopefully someone will correct me if I've flummoxed this too badly; I took a class on this, but it was 3-4 years ago.

Re:Why is multicore programming so hard? (1)

tepples (727027) | more than 5 years ago | (#27920009)

Video games are one of the harder places to try to apply this technique to, because they run in real time and are also constantly responding to user input. Video encoding is the opposite.

Since when does encoding of live video not need to run in real time? An encoder chain needs to take the (lightly compressed) signal from the camera, add graphics such as the station name and the name of the speaker (if news) or the score (if sports), and compress it with MPEG-2. And it all has to come out of the pipe to viewers at 60 fields per second without heavy latency.

Re:Why is multicore programming so hard? (1)

Tenebrousedge (1226584) | more than 5 years ago | (#27920115)

I didn't specify live video encoding. That sentence does not make sense if interpreted to be referring to live video encoding. I would be remarkably misinformed to have used live video encoding as an example of something that does not run in real time. Live video encoding is not often encountered in a desktop PC environment, and I would go so far as to say that the majority of video broadcasts are not live.

I am somewhat confused as to why you're talking about live video encoding. Does this relate to multicore processing in some way?

Also, your sig is misleading. Most PCs have VGA or DVI-I output abilities, and the conversion to the RCA connectors requires no special electronics. To imply some sort of inability or incompatibility between PCs and SDTVs is...strange, and increasingly irrelevant as PCs and TVs both are moving towards HDMI (or DVI, which is electrically compatible).

Heard of a webcam? (1)

tepples (727027) | more than 5 years ago | (#27920989)

I didn't specify live video encoding.

Your wording gave off the subtext that you thought live video encoding was commercially unimportant. I was just trying to warn you against being so dismissive.

Live video encoding is not often encountered in a desktop PC environment

Citation needed [wikipedia.org] .

I would go so far as to say that the majority of video broadcasts are not live.

And you'd be right, but tell that to my sports fan grandfather or my MSNBC-loving grandmother.

Most PCs have VGA or DVI-I output abilities, and the conversion to the RCA connectors requires no special electronics.

Most PCs won't go lower than 480p[1] at 31 kHz horizontal scan rate, and they output RGB component video. SDTVs need the video downsampled to 240p or 480i at 15.7 kHz, and most also need red, green, and blue signals to be multiplexed into composite video (or S-Video if you're lucky). Every game console since the Atari 2600 can reduce its scan rate to match that of an SDTV; most desktop PCs cannot, at least without an external adapter [sewelldirect.com] or an aftermarket video card.

[1] In the "DOS style" text mode, the PC goes down to 400p, but that's it.

Re:Why is multicore programming so hard? (1)

xouumalperxe (815707) | more than 5 years ago | (#27919623)

Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

Yeah, and they sync an unknown (but often quite large) number of "cores" (i.e., the shaders, etc. in the GPU) quite easily too.

Of course, the only reason it's so easy for video game programmers is that raster graphics are one of the easiest things ever to parallelize (since pixels rarely depend on other pixels), and APIs like OpenGL and Direct3D make the parallelism completely transparent. If they had to program each individual pixel pipeline by hand, we'd still be stuck with CPU rendering. The purpose of Grand Central is to, hopefully, make CPU parallelism as painless a process as GPU parallelism (which is hyperbole on my part, since CPU parallelism will typically hit resource contention a lot more often).

I reckon that functional languages as a whole are pretty good for parallel programming provided they're purely functional (i.e., no side effects). Even a naive "first come, first served" policy to assign free cores to function calls would keep all those cores reasonably balanced. In practical terms, I don't know exactly how that goes, though (but Erlang reportedly has brilliant parallelism support).

Re:Why is multicore programming so hard? (2, Informative)

mdwh2 (535323) | more than 5 years ago | (#27920833)

Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

Not really - although it's easy to use both the CPU and the GPU, normally this would not be at the same time. What's been going on "forever" is that the CPU would do some stuff, then tell the GPU to do some stuff, then the CPU would carry on.

What you can do is have a thread for the CPU stuff updating the game world, and then a thread for the renderer, but that's more tricky (as in, at least as difficult as any CPU multiprocessing), and hasn't been done by all games "forever". (There are also a few GPU functions that can be run in parallel, but these can be tricky to do right.)

The main area of multiprocessing is on the GPU itself - the reason that's easy is because graphics rendering is an embarrassingly parallel problem (plus the graphics card/libraries will do all the setting up of the threads for you - it's harder to do that for CPUs, because the needs for CPU SMP vary for each program).

G5? (4, Interesting)

line-bundle (235965) | more than 5 years ago | (#27919361)

what is the status of 10.6 on the PowerPC G5?

Re:G5? (1, Informative)

Anonymous Coward | more than 5 years ago | (#27919429)

As I understand it, there is no 10.6 on PPC.

Re:G5? (1, Informative)

Anonymous Coward | more than 5 years ago | (#27919447)

Rumour has it that the betas are Intel only. No official word from Apple.

Re:G5? (0)

Ash-Fox (726320) | more than 5 years ago | (#27919529)

what is the status of 10.6 on the PowerPC G5?

Abandoned technology, it is the Apple way.

Re:G5? (1)

A12m0v (1315511) | more than 5 years ago | (#27919571)

Blame it on IBM and Motorola (Freescale). They couldn't provide Apple with the chips it needed.

Re:G5? (1)

jbolden (176878) | more than 5 years ago | (#27919841)

They refused to provide custom chips without large binding orders. They were willing to provide Apple with chips just not à la carte.

Living in the past (1, Insightful)

Cheech Wizard (698728) | more than 5 years ago | (#27919585)

Alas, as when Apple stopped putting floppy drives in Macs, others followed. Those who wish to stay with old technology have that choice. I think I have a buggy whip here if you need one... ;)

Re:Living in the past (2, Interesting)

evohe80 (737760) | more than 5 years ago | (#27919677)

I just hope the optical drive goes the same way on notebooks. Most people use it very few times a year (not more than 4 or 5 in my case), and it is more than 250 g (~half a pound) to carry every time the notebook is moved.

Re:Living in the past (0)

Cheech Wizard (698728) | more than 5 years ago | (#27919909)

Yup - I have a notebook with an Optical Drive that I've had for about 2 years. I bet I haven't used that drive more than a couple of times.

Re:Living in the past (1)

tepples (727027) | more than 5 years ago | (#27920023)

I just hope the Optical Drive goes the same way on notebooks. Most people use it very few times a year (not more than 4 or 5 in my case)

From your comment I guess that either you don't watch a lot of DVD movies, or you are given DVDs as a gift 4 or 5 times a year and live in a country that lacks a DMCA counterpart so that you can rip the DVDs to your computer.

Re:Living in the past (2, Insightful)

Hurricane78 (562437) | more than 5 years ago | (#27920295)

Just tell me this: how is an average user without a DVD/CD drive going to install an OS? Even I have problems with this, and I am pretty experienced.
(Booting from a USB stick never quite worked. Also, I already need the one that I have for keyfile storage.)

Re:Living in the past (1)

amliebsch (724858) | more than 5 years ago | (#27920755)

You get yourself a USB optical drive. As an added bonus, you can schlep it around to any other computers that might have need of it.

Re:Living in the past (1)

evohe80 (737760) | more than 5 years ago | (#27920767)

I didn't mean that optical drives should be banned... I meant external drives should be the way to go, especially in laptops under 15". Few netbooks have an optical drive, and they sell like hotcakes, which I think proves my point.

Certainly some people watch DVDs on their laptops, but they are not that many (from my sample, only about 1 in 10), especially on small ones. There are also legal stores where you can buy movies online (iTunes, Amazon, ...).
Many users don't ever install an OS anyway, and the ones that do don't usually install it more than once every few months. All the other components on my laptop are used constantly or are physically small (say, the DVI output), but the optical drive is most of the time just additional weight.

Re:Living in the past (1)

alexandre_ganso (1227152) | more than 5 years ago | (#27920319)

No, he is probably downloading stuff like everyone else and his sister.

Seriously. The same thing that happened to audio CDs is going to happen to DVDs. They will become obsolete as bandwidth keeps increasing.

Re:Living in the past (5, Insightful)

Ash-Fox (726320) | more than 5 years ago | (#27919749)

Alas, as when Apple stopped putting floppy drives in Macs, others followed.

Not really; PCs had floppy drives for many more years. It was only when DVD writers became standard that floppies stopped appearing in new models.

Also, what other PC manufacturers even use PPC?

Re:Living in the past (0, Offtopic)

jedidiah (1196) | more than 5 years ago | (#27920051)

The problem with floppies was what you would replace them with.

Until a standardized external expansion bus took hold (namely USB), you had no way to plug in an alternate boot device to a PC in a predictable way.

Back in the day, you couldn't even take for granted that you could boot off of a CD.

Re:Living in the past (1)

drinkypoo (153816) | more than 5 years ago | (#27920569)

Back in the day, you couldn't even take for granted that you could boot off of a CD.

You mean, you couldn't take for granted that you could boot your PC off a CD. EVERYONE else had it well figured out. These days you can't take for granted that you can boot from USB, still! Some motherboards still bone it.

Re:Living in the past (0)

Anonymous Coward | more than 5 years ago | (#27920339)

Also, what other PC manufacturers even use PPC?

PPC machines range from embedded systems up to big IBM boxen. The problem is that MS Windows doesn't run on them, so there was no real critical mass to move away from the x86 abomination.

Re:Living in the past (3, Insightful)

Anonymous Coward | more than 5 years ago | (#27920361)

The Rewritable CD drive is not what killed off the floppy. The USB stick did.

Re:Living in the past (1)

cabjf (710106) | more than 5 years ago | (#27920417)

Maybe not PC manufacturers, but all the current consoles have PPC derived processors.

Re:Living in the past (1)

LanMan04 (790429) | more than 5 years ago | (#27920545)

I believe the Xbox 360 has a PPC CPU.

Re:Living in the past (0)

Anonymous Coward | more than 5 years ago | (#27920663)

I believe the Xbox 360 has a PPC CPU.

Not just "a" PPC cpu, but 3 of them!

Re:Living in the past (1)

mdwh2 (535323) | more than 5 years ago | (#27920941)

Alas, as when Apple stopped putting floppy drives in Macs, others followed

A common myth, but not supported by evidence. Some PC manufacturers were still shipping PCs with floppies years later. What actually happened was that over the years, various computer manufacturers dropped floppy drives, and it's impossible to claim that one caused the others to do so (and this seems an unlikely claim anyway - when CD/DVD writers and USB drives were commonplace, it was obvious there was no need for floppies - we don't need Apple to tell us this).

I might as well claim the Amiga CDTV (which dropped the floppy drive years earlier) caused Apple and others to follow.

Re:G5? (1)

Richard_at_work (517087) | more than 5 years ago | (#27919813)

It's working well, thanks for asking...

Thanks for Playing (4, Informative)

Anonymous Coward | more than 5 years ago | (#27920167)

I'm one of the seed testers, and even posting anonymously, I am concerned not to violate Apple's NDA. So, I'll put it like this: I have 2 PPC machines and an Intel machine. I have only been able to get the SL builds to work on the Intel machine due, I'm pretty sure, to no fault of my own.

Re:G5? (4, Informative)

chabotc (22496) | more than 5 years ago | (#27921299)

Snow Leopard is going to be the first version of Mac OS X that only runs on Intel Macs, so I'm afraid you're going to be stuck on plain old Leopard.

What's up with the punctuation (0, Offtopic)

emj (15659) | more than 5 years ago | (#27919395)

Why... is there... there so much... punctionations in the summary?

Re:What's up with the punctuation (5, Funny)

Anonymous Coward | more than 5 years ago | (#27919407)

Perhaps the editor doesn't know how to edit?

Oh wait, kdawson, never mind.

Re:What's up with the punctuation (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#27919423)

k... daw... son... m'kay?

Re:What's up with the punctuation (3, Informative)

Fex303 (557896) | more than 5 years ago | (#27919427)

Why... is there... there so much... punctionations in the summary?

Because the summary is directly quoting the article and using ellipses [wikipedia.org] to indicate that certain parts of the quotation have been omitted. Usually there would be a space on either side of the ellipsis when this is done, but this is /., so I'll let this one slide.

Re:What's up with the punctuation (0, Interesting)

Anonymous Coward | more than 5 years ago | (#27919681)

Shouldn't the ellipsis be in between square brackets, though? (From what I've learned, ALL edits to a quote should be within square brackets, to indicate that it's not part of the original quote... this includes changing "he" to "[name]", "[sic]", etc.)

Re:What's up with the punctuation (4, Informative)

jbolden (176878) | more than 5 years ago | (#27919851)

No, an ellipsis is not a change to the text but a deletion from the text.

Re:What's up with the punctuation (1, Interesting)

MisterSquid (231834) | more than 5 years ago | (#27920203)

There has been a slight shift in the adding of ellipses to passages to indicate omission. In a text that has ellipses in the text itself (for example, Pynchon's Gravity's Rainbow), some scholars use square-bracketed ellipses to indicate omission. In general, the use of bracketed ellipses redundantly and unambiguously signals editorialization.

That is, until some clever writer begins including square-bracketed ellipses in his or her text [. . .].

Re:What's up with the punctuation (3, Funny)

MightyYar (622222) | more than 5 years ago | (#27920999)

That is, until some clever writer begins including square-bracketed ellipses in his or her text \[. . .\].

We just need escape codes :)

Re:What's up with the punctuation (0, Offtopic)

johnlenin1 (140093) | more than 5 years ago | (#27920359)

Absolutely. When I worked on the editorial staff of an academic journal, all ellipses not present in the original text were to be enclosed in brackets.

Re:What's up with the punctuation (4, Informative)

noundi (1044080) | more than 5 years ago | (#27919483)

Because it's a quote. You see, there are rules to any language, and one of them in the English language concerns quoting. When you quote a source, the text written must match the source word for word. When the quote contains text unnecessary to the topic at hand, you cut out that part and replace it with three periods. This indicates that there's a piece missing from the original quote, in case, e.g., someone questions the quote at hand. So you see, quoting is not interpreting, and must, at all times, match the source word for word.

Turn to side B for the next lesson.

Re:What's up with the punctuation (1, Offtopic)

Trepidity (597) | more than 5 years ago | (#27919555)

Slashdot is kind of an in-between case, though; when the editors post a story by submitters, it's sort of formatted as if they're "quoting" the submitters, but it's not quite like quoting a book or speech or something. It's expected that when submitting a piece to a site with editors (assume for the sake of argument we can call Slashdot editors that), that your text might be, well, edited before publication. The Economist, for example, edits letters to the editor before publication for style and brevity, without using ellipses to indicate where they removed text.

Re:What's up with the punctuation (0, Offtopic)

noundi (1044080) | more than 5 years ago | (#27920307)

You're semi-right. It's not Slashdot; it's kids on the internet who haven't yet learned these rules. Also, since ignoring these rules requires less work, they tend to become infectious to others. As for internet lingo, in all fairness, I don't really care if someone uses bold instead of italic when emphasising a word, or even spells incorrectly. These are acceptable mistakes. But misquoting is unacceptable, and it calls into question the whole reliability of the text. To me, as a reader, using quotation incorrectly would be as confusing as swapping two letters with each other. Some rules can be bent; some rules can't. If you want people to read what you write, it's a good idea not to lie, even unintentionally.

Re:What's up with the punctuation (0, Offtopic)

joeharrison (723042) | more than 5 years ago | (#27919611)

Why... is there... there so much... punctionations in the summary?

Because it was written by William Shatner.

Re:What's up with the punctuation (0, Offtopic)

daveime (1253762) | more than 5 years ago | (#27919649)

I'm ... sure that ... even ... William ... Shatner ... can use ... a spellchecker.

Punctionation ... indeed ... !!!

Re:What's up with the punctuation (1, Funny)

Ihmhi (1206036) | more than 5 years ago | (#27919893)

kdawson is... doing... his Captain... Kirk... impression. Mr. Taco, Warp Factor... 10.

Snow leopard is such an apt codename (3, Funny)

BadAnalogyGuy (945258) | more than 5 years ago | (#27919401)

Spread your tiny wings and fly away,
And take the snow back with you
Where it came from on that day.
The one I love forever is untrue,
And if I could you know that I would
Fly away with you.

In a world of good and bad, light and dark, black and white, it remains very hopeful that Apple still sees itself as a beacon of purity. It pushes them to do good things to reinforce their own self-image.

I can't wait to try this latest OS!

Re:Snow leopard is such an apt codename (5, Funny)

Anonymous Coward | more than 5 years ago | (#27919459)

I almost threw up.

Re:Snow leopard is such an apt codename (0, Offtopic)

MrMista_B (891430) | more than 5 years ago | (#27919581)

I threw up in your mouth a little.

Re:Snow leopard is such an apt codename (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#27919981)

I returned the favor.

Re:Snow leopard is such an apt codename (-1, Offtopic)

Whiney Mac Fanboy (963289) | more than 5 years ago | (#27919539)

You don't think it's an apt name because snow leopards can't roar?

(I'll let you figure out the analogy).

Mod parent +5 Satire (-1, Offtopic)

MrMista_B (891430) | more than 5 years ago | (#27919569)

olololo

Re:Snow leopard is such an apt codename (0, Offtopic)

DNS-and-BIND (461968) | more than 5 years ago | (#27920097)

+5 insightful for this schmaltzy piece of crap? "Beacon of purity" *gag* *puke*. Just goes to show you: it's OK to be profane and irreverent, but when Something Important comes along, all of a sudden reverence is right back in fashion.

Excuse me, sir... (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#27920175)

But are you by chance kneeling before a glory hole somewhere in San Franscisco right now?

Re:Snow leopard is such an apt codename (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#27920245)

Modded up by the fanbois, I see.

Meh, Apple hardware is nothing special. In fact, I have had more problems with my Apple hardware than any other electronic equipment I have owned over the last 25 years.

OS X is nothing special either. It has problems like any OS and some things are just plain horrible design (*cough* Finder *cough*)...

Freezes Snow Leopard APIs (1)

orta (786013) | more than 5 years ago | (#27919413)

Chuckles. Maybe we could be looking at a sneaky release for WWDC (early June).

Any details on Grand Central? (0)

Anonymous Coward | more than 5 years ago | (#27919545)

I still don't get what Grand Central does. Does it require any different programming style or libs? Does it work automatically?

Re:Any details on Grand Central? (2, Funny)

A12m0v (1315511) | more than 5 years ago | (#27919591)

it works automagically
this is Apple after all
MAGIC!

Re:Any details on Grand Central? (2, Insightful)

Raffaello (230287) | more than 5 years ago | (#27921303)

Double bind here: those who speak do not know; those who know do not speak - they're under NDA.

Upgrade (-1, Redundant)

djfake (977121) | more than 5 years ago | (#27919737)

It just sends shivers down my spine, a new Mac OS to spend my money on and make my computing experience even more complete! Oh thank you Apple!

Why rush to use all the cores? (5, Interesting)

BlueScreenOfTOM (939766) | more than 5 years ago | (#27919871)

Alright guys, I know the advantages (and challenges) of multi-threading. With almost all new processors coming with > 1 core, I can tell there's now a huge desire to start making apps that can take advantage of all cores. But my question is why? One thing I love about my quad-core Q6600 is the fact that I can be doing so many things at once. I can be streaming HD video to my TV while simultaneously playing DOOM, for example. However, when I fire up a multithreaded app that takes all 4 of my cores and I start doing something heavy, like video encoding for example, everything tends to slow down like it did back when I only had one core to play with. Yeah, my encoding gets done a lot faster, but honestly I'd rather it take longer than make my computer difficult to use for any period of time...

I realize I can throttle the video encoding to a single core, but I'm just using that as an example... if all apps start using all cores, aren't we right back where we started, just going a little faster? I love being able to do so much at once...

Doom is a GBA game (1)

tepples (727027) | more than 5 years ago | (#27920059)

One thing I love about my quad-core Q6600 is the fact that I can be doing so many things at once. I can be streaming HD video to my TV while simultaneously playing DOOM, for example.

Doom can run on a Game Boy Advance [idsoftware.com] , rendering in software on a 16.8 MHz ARM7 CPU. You could emulate the game and your quad-core wouldn't break a sweat.

if all apps start using all cores, aren't we right back where we started, just going a little faster?

That's what developers want: the ability to use all the cores for a task where the user either isn't going to be doing something else (like on a server appliance) or has another device to pass the time (like a GBA to run Doom).

Re:Doom is a GBA game (1)

BlueScreenOfTOM (939766) | more than 5 years ago | (#27920901)

Yeah, I was using playing Doom as an example. Replace Doom with just about any other application that isn't a total resource hog. The specific application I'm using for my example isn't part of the point.

I am a developer and I know what developers want. I'm not saying I don't see an advantage to using all cores some of the time, but what I took from the description (no, I didn't RTFA, this is /. after all) is that Apple is trying to make it easier for developers to do something that is typically considered difficult to "get right": multithreading. The point I'm trying to make is that I don't want everything to be multi-threaded. I see why this is useful for some applications, but I don't want this to be a widespread practice.

Let me go at this from two angles. First, as a developer, in my specific job, while I can write multithreaded apps, I typically don't for two reasons: first, it's more complex to write and to understand, not just for me but for anyone else maintaining my code, and second because we tend to write many small components that do little bits of work and run them on the same machine, so we're making good use of all of our processors/cores anyway. I'm not talking about GUIs (these apps are non-interactive services/daemons), and my apps tend to lend themselves to a single-threaded frame of mind anyway, but what I'm trying to say is that here is a case where I'm getting the most out of our hardware without unnecessarily complicating things. This leads me to my second angle, which is that of a user. I already covered this above... I like multi-tasking, and I generally prefer lots of tasks that only use a single thread to having a single process run a little bit faster.

So yes, sometimes developers want the ability to use all cores for a long-running non-interactive task, and that's fine and it does lend itself to some situations. But I don't know that I want this to become the standard, which is, perhaps, what Apple is trying to push towards.

Just nice it. (1)

tepples (727027) | more than 5 years ago | (#27921057)

The point I'm trying to make is that I don't want everything to be multi-threaded.

Then use your operating system's process manager to "nice" (deprioritize) the apps that you don't want to be multithreaded.

Re:Why rush to use all the cores? (1)

MtViewGuy (197597) | more than 5 years ago | (#27920199)

Remember, video encoding requires tremendous amounts of CPU power in the encoding process, far more so than audio encoding. That's why, when Pixar renders the images for their movies, they use thousands of Apple Xserve blade servers running in massively parallel fashion to do rendering at reasonable speeds.

We can make all the Beowulf cluster jokes on this forum, :-) but one reason why Beowulf was developed was the ability to synchronize hundreds to thousands of machines in a massively parallel fashion to speed up data processing, essentially creating a supercomputer setup "on the cheap." I remember reading about a biotech company using 1,000 small tower desktops powered by the Intel Pentium III 800 MHz CPU all synced together with Beowulf to do DNA modeling.

Nice (1)

AlpineR (32307) | more than 5 years ago | (#27920271)

On a UNIX system (like Mac OS X) you should be able to "nice" the low-priority processes to give them less attention. If I'm running a twelve-hour, max-the-CPU simulation and I want to play a game while I'm waiting, I nice the simulation to a low priority. That way it yields most of the CPU to the game while I'm playing, yet runs at full dual-core speed when I'm not.
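For reference, the same deprioritization can also be requested from inside the program itself; this is just a sketch of the POSIX setpriority() call, roughly equivalent to launching the job with nice -n 19 (the "simulation" is a placeholder).

// Lowering a long-running job's priority from inside the program itself,
// the programmatic counterpart of launching it with `nice -n 19`.
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* 19 is the lowest priority; who = 0 means the calling process. */
    if (setpriority(PRIO_PROCESS, 0, 19) != 0)
        perror("setpriority");

    /* ... twelve-hour, max-the-CPU simulation would go here ... */
    printf("crunching at low priority\n");
    return 0;
}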

I'm not sure this is actually working in Mac OS X 10.5, though. Since I got my dual-core system, the activity monitors don't seem to show that nice is having the expected effect. I'm not sure if that's a problem with the monitor or with the OS. Hopefully 10.6 will be nicer.

Re:Why rush to use all the cores? (4, Interesting)

jedidiah (1196) | more than 5 years ago | (#27920275)

Yup. If applications start getting too good at being able to "use the whole machine" again, then that's exactly what they will try to do. The fact that they really can't is a really nice "release valve" at this point. As an end user managing load on my box, I actually like it better when an app is limited to only 100% CPU (IOW, one core).

Re:Why rush to use all the cores? (2, Insightful)

DaleGlass (1068434) | more than 5 years ago | (#27921219)

That only works because you have few cores.

Once we get to the point where a consumer desktop has 32 cores, you're not going to be able to use even half of that CPU by running independent tasks simultaneously. You'll need to have apps that can take advantage of many cores. The more cores you have, the more power a single core application fails to take advantage of.

Re:Why rush to use all the cores? (0)

Anonymous Coward | more than 5 years ago | (#27920483)

Why should people who prefer to get things done faster have to wait?

Re:Why rush to use all the cores? (4, Interesting)

slimjim8094 (941042) | more than 5 years ago | (#27920707)

You ought to be able to set your program to only run on certain processors. I know Windows has this feature (Set Affinity in Task Manager), so I assume Linux/Mac do as well.

I'd recommend putting heavy tasks on your last core or two, and anything you particularly care about on the second core - leave the first for the kernel/etc.
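On Linux the same thing can be done in code with sched_setaffinity(), as in the sketch below; as far as I know, Mac OS X only exposes looser affinity hints, so treat this as Linux-specific.

// Pinning the current process to cores 2 and 3 on Linux, the in-code
// equivalent of Task Manager's "Set Affinity".
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);          /* allow core 2 */
    CPU_SET(3, &set);          /* allow core 3 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* ... heavy work now stays off cores 0 and 1 ... */
    return 0;
}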

Re:Why rush to use all the cores? (2, Informative)

ducomputergeek (595742) | more than 5 years ago | (#27920963)

One area: graphics rendering. And I'm not talking about games, but Lightwave et al., especially when one is rendering a single very large image (say, a billboard). Currently most renderers allow splitting that frame across several machines/cores, where each one renders a smaller block and the larger image is then reassembled. However, not all the rendering engines out there allow splitting a single frame. Also, if the render farm is currently tasked with another project (animation) and you need to render, I could see where 4 cores acting as one, using all the available RAM, is going to be a tad faster than splitting into separate tasks and rendering on each core with limited RAM.

i don't know you but... (5, Funny)

Anonymous Coward | more than 5 years ago | (#27920019)

I always read it as "Slow Leopard"

Why would my Mom upgrade to Snow Leopard? (2, Interesting)

Corrado (64013) | more than 5 years ago | (#27920431)

My biggest problem with this upgrade is that it seems more like a Windows Service Pack than a true Mac OS X upgrade. Are we going to have to pay for "new APIs" and "multi-core processing"?

How does all this help the average user (i.e. my Mom)? WooHoo! They are building a YouTube app and you can record directly off the screen! Big whoop. You can do that today without too much trouble with third party applications. Is the Mac OS X user interface and built-in apps already so perfect that they can't find things to improve?

I'm usually a pretty big Mac fan-boy, but I just can't seem to get excited about this one. Hell, I'm even thinking (seriously) about ditching my iPhone and getting a Palm Pre. Sigh... how the world is changing. Has Apple lost its mojo?

Re:Why would my Mom upgrade to Snow Leopard? (1)

HogGeek (456673) | more than 5 years ago | (#27920625)

I doubt she will be motivated to...

I think some of the changes affect the corporate user more than they do the home user.

From what I've read, the mail, calendar and contacts apps now communicate with MS Exchange (using the ActiveSync technology Apple licensed from MS for use in the iPhone).

While I'm sure there are other changes, I think those are some of the "bigger" ones that a lot of people have been waiting for, myself included...

Re:Why would my Mom upgrade to Snow Leopard? (1)

dzfoo (772245) | more than 5 years ago | (#27920631)

>> Is the Mac OS X user interface and built-in apps already so perfect that they can't find things to improve?

I thought that concentrating on performance optimizations and stability was an improvement to the current version.

      -dZ.

Re:Why would my Mom upgrade to Snow Leopard? (0)

Anonymous Coward | more than 5 years ago | (#27921005)

Yes, it is. We have wanted this update since 10.3.

Re:Why would my Mom upgrade to Snow Leopard? (1)

PrescriptionWarning (932687) | more than 5 years ago | (#27920695)

By marketing it as a completely new version, they're likely to make more sales of the expensive hardware they bundle with the OS to people who want to upgrade but think buying the $100 upgrade would be too difficult. Of course, even paying for a $100 upgrade from Leopard doesn't make a whole lot of sense. Myself, I'm still on Tiger with my Mini, which means I can't even try to develop for the iPhone, so I may just end up jumping to Snow Leopard just because; that is, unless I can nab a copy of Leopard for much cheaper than $100.

Re:Why would my Mom upgrade to Snow Leopard? (1)

xenolion (1371363) | more than 5 years ago | (#27920791)

From what you're saying there, it looks like the same thing PC makers and Microsoft do. A new OS may be hard for the regular user to install, so they just buy a new machine, increasing sales of new hardware and of the OS. I'm going to be truthful here: I don't own a Mac yet. I'm waiting to see where this OS goes and how it turns out before I buy one new, or even just a used one because someone wants the new OS on the newest hardware.

Re:Why would my Mom upgrade to Snow Leopard? (0)

DavidChristopher (633902) | more than 5 years ago | (#27920849)

http://www.apple.com/macosx/snowleopard/ [apple.com]

It's mostly under the hood, but these are the kinds of under-the-hood changes that application developers will take to quickly - such as Grand Central, OpenCL and the new QuickTime. The performance optimizations are what will really drive this product, provided they live up to expectations. Each subsequent release of Mac OS X has felt faster (to many, including myself). As to Mom's Mac? She probably won't need this update, but it wouldn't be a bad idea to get it anyway... :)

Re:Why would my Mom upgrade to Snow Leopard? (1, Interesting)

MightyYar (622222) | more than 5 years ago | (#27921063)

My biggest problem with this upgrade is that it seems more like a Windows Service Pack than a true Mac OS X upgrade.

I much prefer frequent, incremental updates. The $100 that Apple charges for the OS is peanuts compared to the amount of use it gets.

Maybe you like the MS upgrade cycle, but look at all the bad press they get for it... you can hardly blame Apple for wishing to avoid that.

Re:Why would my Mom upgrade to Snow Leopard? (1, Flamebait)

pandrijeczko (588093) | more than 5 years ago | (#27921159)

Maybe you like the MS upgrade cycle, but look at all the bad press they get for it... you can hardly blame Apple for wishing to avoid that.

Erm, so what is this Windows XP installation that I have been using since XP Service Pack 1 that I have *incrementally upgraded* through to Service Pack 3 with all the additional Microsoft updates then?

I'm no MS fanboi by any means; I use mostly (incrementally upgradeable) Gentoo Linux. But I wish you Apple fanbois would occasionally go read a technical book or something, so that you can at least have some degree of intelligent conversation with those of us who do.

Re:Why would my Mom upgrade to Snow Leopard? (1)

jcnnghm (538570) | more than 5 years ago | (#27921193)

I think every major version is a service pack, except Apple charges $150 for it, and changes the API enough that you can't run new software. I wanted to run XCode on my 10.4 laptop, so I had to go buy a 10.5 upgrade, even though it didn't have any new features I actually cared about. I still think it should have been a $30 minor feature pack, not a whole OS.

I think it's the most annoying part about Apple. They definitely seem to nickel and dime you, especially by not shipping with a full-screen media player, just the crippled version of quicktime.

upgrade price (0)

Anonymous Coward | more than 5 years ago | (#27921295)

My biggest problem with this upgrade is that it seems more like a Windows Service Pack than a true Mac OS X upgrade. Are we going to have to pay for "new APIs" and "multi-core processing"?

Well, the first update to Mac OS X (10.0 -> 10.1) was free, so it's not without precedent.

Apple is also offering the Mac Box Set [apple.com], which has Mac OS X (10.5) along with the latest iLife and iWork. You're getting all three for less than it would cost to get them individually.

It could be anywhere between $0 and the traditional price for these things (~US$ 130). We'll find out in a few weeks' time.
