
Intel Reveals More Larrabee Architecture Details

CmdrTaco posted more than 6 years ago | from the switching-from-binary-to-trinary dept.

Intel 123

Ninjakicks writes "Intel is presenting a paper at the SIGGRAPH 2008 industry conference in Los Angeles on Aug. 12 that describes features and capabilities of its first-ever forthcoming many-core architecture, codenamed Larrabee. Details unveiled in the SIGGRAPH paper include a new approach to the software rendering 3-D pipeline, a many-core programming model and performance analysis for several applications. Initial product implementations of the Larrabee architecture will target discrete graphics applications, support DirectX and OpenGL, and run existing games and programs. Additionally, a broad potential range of highly parallel applications including scientific and engineering software will benefit from the Larrabee native C/C++ programming model."


kathleen fent reveals boobs (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#24465069)

seconds later, I vomited and decided not to hook up on craigslist nsa anymore.

creators reveal more planet/population rescue inf. (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#24465097)

you call this 'weather'? the lights are coming up all over now. conspiracy theorists are being vindicated. some might choose a tin umbrella to go with their hats. the fairytail is winding down now. let your conscience be yOUR guide. you can be more helpful than you might have imagined. there are still some choices. if they do not suit you, consider the likely results of continuing to follow the corepirate nazi hypenosys story LIEn, whereas anything of relevance is replaced almost instantly with pr ?firm? scriptdead mindphuking propaganda or 'celebrity' trivia 'foam'. meanwhile; don't forget to get a little more oxygen on yOUR brain, & look up in the sky from time to time, starting early in the day. there's lots going on up there.

http://news.google.com/?ncl=1216734813&hl=en&topic=n
http://www.nytimes.com/2007/12/31/opinion/31mon1.html?em&ex=1199336400&en=c4b5414371631707&ei=5087%0A
http://www.nytimes.com/2008/05/29/world/29amnesty.html?hp
http://www.cnn.com/2008/US/06/02/nasa.global.warming.ap/index.html
http://www.cnn.com/2008/US/weather/06/05/severe.weather.ap/index.html
http://www.cnn.com/2008/US/weather/06/02/honore.preparedness/index.html
http://www.nytimes.com/2008/06/01/opinion/01dowd.html?em&ex=1212638400&en=744b7cebc86723e5&ei=5087%0A
http://www.cnn.com/2008/POLITICS/06/05/senate.iraq/index.html
http://www.nytimes.com/2008/06/17/washington/17contractor.html?hp
http://www.nytimes.com/2008/07/03/world/middleeast/03kurdistan.html?_r=1&hp&oref=slogin
http://biz.yahoo.com/ap/080708/cheney_climate.html

is it time to get real yet? A LOT of energy is being squandered in attempts to keep US in the dark. in the end (give or take a few 1000 years), the creators will prevail (world without end, etc...), as it has always been. the process of gaining yOUR release from the current hostage situation may not be what you might think it is. butt of course, most of US don't know, or care what a precarious/fatal situation we're in. for example; the insidious attempts by the felonious corepirate nazi execrable to block the suns' light, interfering with a requirement (sunlight) for us to stay healthy/alive. it's likely not good for yOUR health/memories 'else they'd be bragging about it? we're intending for the whoreabully deceptive (they'll do ANYTHING for a bit more monIE/power) felons to give up/fail even further, in attempting to control the 'weather', as well as a # of other things/events.

http://www.google.com/search?hl=en&q=weather+manipulation&btnG=Search
http://video.google.com/videosearch?hl=en&q=video+cloud+spraying

dictator style micro management has never worked (for very long). it's an illness. tie that with life0cidal aggression & softwar gangster style bullying, & what do we have? a greed/fear/ego based recipe for disaster. meanwhile, you can help to stop the bleeding (loss of life & limb);

http://www.cnn.com/2007/POLITICS/12/28/vermont.banning.bush.ap/index.html

the bleeding must be stopped before any healing can begin. jailing a couple of corepirate nazi hired goons would send a clear message to the rest of the world from US. any truthful look at the 'scorecard' would reveal that we are a society in decline/deep doo-doo, despite all of the scriptdead pr ?firm? generated drum beating & flag waving propaganda that we are constantly bombarded with. is it time to get real yet? please consider carefully ALL of yOUR other 'options'. the creators will prevail. as it has always been.

corepirate nazi execrable costs outweigh benefits
(Score:-)mynuts won, the king is a fink)
by ourselves on everyday 24/7

as there are no benefits, just more&more death/debt & disruption. fortunately there's an 'army' of light bringers, coming yOUR way. the little ones/innocents must/will be protected. after the big flash, ALL of yOUR imaginary 'borders' may blur a bit? for each of the creators' innocents harmed in any way, there is a debt that must/will be repaid by you/us, as the perpetrators/minions of unprecedented evile, will not be available. 'vote' with (what's left in) yOUR wallet, & by your behaviors. help bring an end to unprecedented evile's manifestation through yOUR owned felonious corepirate nazi glowbull warmongering execrable. some of US should consider ourselves somewhat fortunate to be among those scheduled to survive after the big flash/implementation of the creators' wwwildly popular planet/population rescue initiative/mandate. it's right in the manual, 'world without end', etc.... as we all ?know?, change is inevitable, & denying/ignoring gravity, logic, morality, etc..., is only possible, on a temporary basis. concern about the course of events that will occur should the life0cidal execrable fail to be intervened upon is in order. 'do not be dismayed' (also from the manual). however, it's ok/recommended, to not attempt to live under/accept, fauxking nazi felon greed/fear/ego based pr ?firm? scriptdead mindphuking hypenosys.

consult with/trust in yOUR creators. providing more than enough of everything for everyone (without any distracting/spiritdead personal gain motives), whilst badtolling unprecedented evile, using an unlimited supply of newclear power, since/until forever. see you there?

"If my people, which are called by my name, shall humble themselves, and pray, and seek my face, and turn from their wicked ways; then will I hear from heaven, and will forgive their sin, and will heal their land."

meanwhile, the life0cidal philistines continue on their path of death, debt, & disruption for most of US. gov. bush denies health care for the little ones;

http://www.cnn.com/2007/POLITICS/10/03/bush.veto/index.html

whilst demanding/extorting billions to paint more targets on the bigger kids;

http://www.cnn.com/2007/POLITICS/12/12/bush.war.funding/index.html

& pretending that it isn't happening here;

http://www.timesonline.co.uk/tol/news/world/us_and_americas/article3086937.ece
all is not lost/forgotten/forgiven

(yOUR elected) president al gore (deciding not to wait for the much anticipated 'lonesome al answers yOUR questions' interview here on /.) continues to attempt to shed some light on yOUR foibles. talk about reverse polarity;

http://www.timesonline.co.uk/tol/news/environment/article3046116.ece

Good news (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#24465101)

This is good news for Mac mini and MacBook users.

Well, if Apple doesn't discontinue the Mac mini, that is.

Re:Good news (5, Insightful)

morgan_greywolf (835522) | more than 6 years ago | (#24465271)

This is good news for Mac mini and MacBook users.

How so? Has Apple announced that it will adopt Larrabee for the Mac Mini or the MacBook? No. All you have are rumors and speculation by MacRumors and Ars Technica. When Apple says they will adopt the Larrabee GPU, then you can say that it is good news for Mac users of any stripe. Until then, it's just Intel news, not Apple news.

Re:Good news (1)

Yvan256 (722131) | more than 6 years ago | (#24465327)

I think it depends on how much Larrabee will cost. However, from what we know so far, Apple seems to be heading toward multi-CPU architectures, so using Larrabee would make sense.

Re:Good news (4, Informative)

gnasher719 (869701) | more than 6 years ago | (#24465403)

I think it depends on how much Larrabee will cost. However, from what we know so far, Apple seems to be heading toward multi-CPU architectures, so using Larrabee would make sense.

Larrabee draws somewhere between 150 and 300 watts, so MacBooks and Mac minis are not likely to use it. The Mac Pro, on the other hand, possibly could.

Re:Good news (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24465473)

Damn! You could power a 10-20 processor ARM or PPC multiprocessor unit with that. And the architecture wouldn't suck cowboy neal's sweaty balls.

Re:Good news (0)

Anonymous Coward | more than 6 years ago | (#24465985)

How's ARM at floating point vector math these days?

Re:Good news (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24466163)

With the vector floating point (VFP) coprocessor it's not too shabby.

Re:Good news (2, Interesting)

Yvan256 (722131) | more than 6 years ago | (#24466063)

The power brick for my Core 2 Duo Mac mini is somewhere around 80 Watts I think. And I'd assume the actual usage is lower than that. Let's say 50~60 Watts for the whole computer (CPU, GPU, hard drive, optical drive, RAM, FireWire, USB, etc).

If Larrabee takes 150~300 Watts, then it's just insane, no matter how many cores it has.

Re:Good news (1)

negRo_slim (636783) | more than 6 years ago | (#24468105)

Mac minis integrate 2.5 inch hard disk drives (ATA in the G4 models and SATA in the Intel models), CPUs and other components originally intended for mobile devices such as laptops, unlike regular desktop computers, which use lower-cost but less compact and less power-efficient components. These mobile components help lower power consumption: according to data on the Apple web site, first-generation PowerPC Mac minis consume 32 to 85 Watts, while later Intel Core machines consume 23 to 110 Watts. By comparison, a contemporary Mac Pro with quad-core 2.66 GHz processors consumes 171 to 250 Watts.

Re:Good news (2, Informative)

Creepy (93888) | more than 6 years ago | (#24470915)

They've stated that it will be a 150W+ chip on a PCI Express 2 card, as I recall, and it is intended as a GPU, though it will be fully programmable and have CPU capability (so when not doing GPU stuff, it could serve as extra CPUs). It is intended to compete in the high-end graphics market.

Essentially, it's a clutch of high-performance software vector units in parallel with a bunch of CPUs. Graphics scale with each added processor because it is a software-driven architecture, whereas traditional GPUs don't scale because they have a fixed-function pipeline (if everything were written for shaders, I would think they would scale). One of the things Intel is touting is binned rendering (aka chunked or tile rendering), which breaks the frame into tiles, stores a front-to-back list of polygons in off-chip memory, and sizes the tile buffer to fit in cache. Technically, this should be no faster than z-buffering, but I believe they're sorting and ray casting, and in a brute-force sort of way that is faster than z-buffering. What I don't get here is how they get "2-7x the performance" when they have the extra sort step.
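
To make the binning idea concrete, here is a minimal C++ sketch of sorting triangles into per-tile lists before rasterising each tile from a cache-sized buffer. It is purely illustrative -- the Triangle struct, tile size and screen dimensions are made up, and the actual per-tile rasterisation is elided -- not Intel's renderer:

    #include <vector>
    #include <algorithm>

    struct Triangle { float x0,y0, x1,y1, x2,y2; float z; };   // screen-space triangle, z = depth for sorting

    const int SCREEN_W = 1920, SCREEN_H = 1080;
    const int TILE = 64;                                        // 64x64 pixel tiles
    const int TILES_X = (SCREEN_W + TILE - 1) / TILE;
    const int TILES_Y = (SCREEN_H + TILE - 1) / TILE;

    // One polygon list ("bin") per tile, kept in off-chip memory.
    std::vector<Triangle> bins[TILES_X * TILES_Y];

    void binTriangles(const std::vector<Triangle>& tris)
    {
        for (size_t i = 0; i < tris.size(); ++i) {
            const Triangle& t = tris[i];
            // Bounding box of the triangle in tile coordinates, clamped to the screen.
            int minTx = std::max(0, (int)std::min(std::min(t.x0, t.x1), t.x2) / TILE);
            int maxTx = std::min(TILES_X - 1, (int)std::max(std::max(t.x0, t.x1), t.x2) / TILE);
            int minTy = std::max(0, (int)std::min(std::min(t.y0, t.y1), t.y2) / TILE);
            int maxTy = std::min(TILES_Y - 1, (int)std::max(std::max(t.y0, t.y1), t.y2) / TILE);
            // Append the triangle to every tile its bounding box overlaps.
            for (int ty = minTy; ty <= maxTy; ++ty)
                for (int tx = minTx; tx <= maxTx; ++tx)
                    bins[ty * TILES_X + tx].push_back(t);
        }
    }

    bool frontToBack(const Triangle& a, const Triangle& b) { return a.z < b.z; }

    void renderTile(int tileIndex)
    {
        std::vector<Triangle>& bin = bins[tileIndex];
        std::sort(bin.begin(), bin.end(), frontToBack);   // the extra sort step mentioned above
        // ...rasterise the bin into a tile-sized buffer that fits in cache...
    }

Because each tile's bin is independent, every core can take a tile of its own, which is where the "scales with each added processor" claim comes from.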

By the way, if you look at CPUs, Intel's Core 2 line has five power designations:
X - Extreme: > 75W
E - Standard Desktop: 55-75W
T - Standard Mobile: 25-55W
L - Low Voltage: 15-25W (their name - they mean low power)
U - Ultra Low Voltage: < 15W

According to Wikipedia [wikipedia.org] the mini uses mobile processors (the T designation). Max power consumption of most laptops is 80W, so it is likely your mini maxes at 80W.

Re:Good news (4, Funny)

oldhack (1037484) | more than 6 years ago | (#24465335)

This is good news for Mac mini and MacBook users. But I can't stand them.

Re:Good news (3, Insightful)

morgan_greywolf (835522) | more than 6 years ago | (#24465459)

Is it not also good news for Windows users, Linux users, and *BSD users? I mean, it's likely that these OSes will also be made to make use of Larrabee when the technology is released, right? Yet, it's not news for any of those platforms or Apple users unless/until those platforms are able to make use of the new GPU technology. Everything else is just speculation, especially so for Apple, which might easily decide not to use Larrabee. Since Apple is the only legit supplier of Mac OS X hardware, it's definitely not news for Apple users until Apple says it is. OTOH, Windows, Linux and *BSD users can get their hardware from any supplier.

Re:Good news (0, Offtopic)

oldhack (1037484) | more than 6 years ago | (#24465487)

I was actually trolling...

Re:Good news (1)

lorenzo.boccaccia (1263310) | more than 6 years ago | (#24465435)

Also, it seems unlikely, as those things burn a lot of watts. However, Apple's ways are obscure to mortals. Intel said that this thing supports Apple CL, so maybe this could end up in a workstation.

waaa (-1)

Anonymous Coward | more than 6 years ago | (#24465145)

In China, Intel Reveals More Larrabee Architecture Details

get it? Like corporate spying... clone it... ah well, at least it was funny in my head :)

Good old SIGGRAPH (5, Insightful)

Gothmolly (148874) | more than 6 years ago | (#24465189)

With the supposed death of Usenet, the closing of PARC, and the general Facebookification of the Internet, it's nice to see a bunch of nerds get together and geek out simply for the sake of it.

All right STOP TURNING JAPANESE (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#24465279)

You go blind if you keep that on

Re:All right STOP TURNING JAPANESE (1)

morgan_greywolf (835522) | more than 6 years ago | (#24465369)

You know I'm turning Japanese, you know I'm turning Japanese [youtube.com], you know I think so. (FWIW, the parent is probably referring to the word 'Facebookification', because he apparently believes that English is slowly adopting the Japanese practice of positional grammar by verbifying nouns and nounifying verbs.)

Re:Good old SIGGRAPH (5, Informative)

TheRaven64 (641858) | more than 6 years ago | (#24465317)

Unlike, say, any other academic conference, where exactly the same thing happens. People don't go to SIGGRAPH for the sake of it; they go because it's the main conference of the ACM Special Interest Group on GRAPHics, and getting a paper accepted there earns people in the graphics field a lot of respect. Many of the other ACM SIG* conferences are similar, and most other academic conferences are similar in form, but typically smaller.

Re:Good old SIGGRAPH (2, Interesting)

Duncan3 (10537) | more than 6 years ago | (#24470479)

Other areas of CS have multiple conferences throughout the year. Graphics has only one, and that's SIGGRAPH. If your paper is not accepted at SIGGRAPH, you are considered to have done nothing worthwhile that year. You could win every special effects award in Hollywood, but no SIGGRAPH paper = no cred.

That's just how it works.

Re:Good old SIGGRAPH (2, Informative)

TheRaven64 (641858) | more than 6 years ago | (#24470891)

Not really. A lot of good papers go to IEEE Visualisation and a few other conferences. Outside the US, Eurographics is pretty well respected too. SIGGRAPH is the largest conference, and probably has the highest impact factor, but it's certainly not the only one people care about.

Re:Good old SIGGRAPH (1)

Guy Harris (3803) | more than 6 years ago | (#24467625)

the closing of PARC

Eh? [parc.com]

Trying to fight the trend toward specialization? (4, Insightful)

yoinkityboinkity (957937) | more than 6 years ago | (#24465249)

With more and more emphasis going toward GPUs and other specialized processors, I wonder if this is to try to fight that trend and have Intel processors able to handle the whole computer again.

Re:Trying to fight the trend toward specialization (3, Informative)

TheRaven64 (641858) | more than 6 years ago | (#24465415)

It almost certainly won't work. In the past, there has been a swing between general and special purpose hardware. General purpose is cheaper, special purpose is faster. When general purpose catches up to 'fast enough', the special purpose dies. The difference now is that 'cheap' doesn't just mean 'low cost'; it also means 'low power consumption,' and special-purpose hardware is always lower power than general-purpose hardware used for the same purpose (and can be turned off completely when not in use).

If you look at something like TI's ARM cores, they have a fairly simple CPU and a whole load of specialist DSPs and DSP-like parts that can be turned on and off independently.

Re:Trying to fight the trend toward specialization (5, Interesting)

Kjella (173770) | more than 6 years ago | (#24465587)

It almost certainly won't work. In the past, there has been a swing between general and special purpose hardware.

Except with unified shaders and earlier variations, the GPU isn't that "special purpose" anymore. It's basically an array of very small processors that individually are fairly general. Sure, they won't be CPUs, but I wouldn't be surprised if Intel could specialize their CPUs and make them into a competitive GPU. At the very least, good enough to eat a serious chunk of the graphics market from the bottom up, as they're already big on integrated graphics.

Intel's integrated graphics sells ATI and Nvidia. (2, Insightful)

Futurepower(R) (558542) | more than 6 years ago | (#24465923)

Your comment, "... as they're already big on integrated graphics." is true for some values of "big". Intel has been big in integrated graphics the way a dead whale is big on the beach.

Basically, once you discover what Intel graphics has not been able to do, you buy an ATI or Nvidia graphics card.

Not everybody plays 3D games (1)

Joce640k (829181) | more than 6 years ago | (#24466105)

More people in the world need Intel level graphics than need ATI/NVIDIA. This is borne out in sales numbers - Intel is the #1 graphics chip maker and has been so for many years.

Intel graphics has been TERRIBLE. (1)

Futurepower(R) (558542) | more than 6 years ago | (#24466605)

Intel graphics has been TERRIBLE. We buy ATI video adapters (about $20) [ewiz.com] to put in business computers we build. (We've never bought from eWiz.com, or the particular video cards shown. That is just an example.)

Re:Intel graphics has been TERRIBLE. (1)

Hairy Heron (1296923) | more than 6 years ago | (#24467065)

Because what you do is clearly representative of what the rest of the world does, right?

Re:Intel graphics has been TERRIBLE. (1)

BrentH (1154987) | more than 6 years ago | (#24467163)

Exactly.

Re:Intel graphics has been TERRIBLE. (1)

RightSaidFred99 (874576) | more than 6 years ago | (#24470063)

Why? It's easy enough to say you do this, but _why_ do you do this? Are your business users game testers? Or are you buying 2-year-old motherboards with 2-year-old integrated graphics? Even 965G chipsets should be plenty adequate for business use.

We felt so abused by the previous chipsets... (1)

Futurepower(R) (558542) | more than 6 years ago | (#24471511)

We've had a lot of problems with Intel graphics software. You are correct, however, that we haven't tested the latest offerings from Intel. We felt so abused by the previous chipsets that we have had no desire to test the new software.

The last video driver we tested was version 14311 for the 945 chipset. It had a LOT of problems. There was a LOT of denial by Intel [intel.com] that there were problems.

So, I would be very interested to know: Is the video in the 965 chipset better? Is the software trouble-free? How about rotated vertically on a 1920 x 1200 monitor?

Re:Not everybody plays 3D games (0)

Anonymous Coward | more than 6 years ago | (#24467123)

Just because lots of people use it doesn't mean it's any good; take a look at Windows or your local Big-Ass Telco...

Re:Not everybody plays 3D games (1, Informative)

Anonymous Coward | more than 6 years ago | (#24470533)

Nobody is arguing that Intel makes good graphics hardware. They make adequate graphics hardware that the majority use without problems.

Go to any non-gaming office building and tell me how many Intel graphics vs Nvidia and ATI you find. I am willing to bet that most, if not all, of them will be Intel.

Re:Intel's integrated graphics sells ATI and Nvidi (2)

EvilNTUser (573674) | more than 6 years ago | (#24470675)

And once you discover what kind of driver support they offer, you go right back to Intel.

The new Intel G45 chipset recently made me order a new motherboard just to replace my video card. It's "fast enough", one might say...

Personally, I can't wait to get all that proprietary crap out of my kernel. Shouldn't have fallen for the temptation in the first place.

Re:Trying to fight the trend toward specialization (1, Interesting)

Austerity Empowers (669817) | more than 6 years ago | (#24466219)

Except the part where GPUs have 256-512 bit wide, 2GHz + dedicated memory interfaces and Intel processors are...way, way less. Add that to the ability to write tight code on a GPU that efficiently uses caching and doesn't waste a cycle, compared to the near impossibility of writing such code on the host processor which you share with an OS and other apps... meh.

There might be some good stuff that can be done with this architecture, but I am not convinced it's a competitor to GPUs pound for pound. You have to really believe ray-tracing is the future, and that some of the multi-texturing shenanigans that drive memory bandwidth in GPUs are in the past. That's a big leap of faith. I'd prefer to believe that once they build it, we'll find a great use for it.

Still it's nice to see something new happening.

Re:Trying to fight the trend toward specialization (1)

adisakp (705706) | more than 6 years ago | (#24468823)

Except with unified shaders and earlier variations, the GPU isn't that "special purpose" anymore. It's basically an array of very small processors that individually are fairly general.

Even with all the advances in shaders, GPUs are not quite general-purpose, for several reasons. There is hardcoded data-fetch logic (yes, there is some support for more arbitrary memory reads, but those are limited and take a fairly big performance hit). GPUs also have poor performance for dynamic branching -- sure, they support it, but if all the pixels in a subset (i.e. the whole bank of little GPU cores processing a fragment) don't take the same branches, your performance is hosed. The bank size is usually 8 or 16 cores working on a rectilinear fragment of adjacent pixels.

Intel's approach is not very GPU-like at all. Indeed, it's more similar to the Cell SPUs (internal ring bus included), but instead of using DMA to access memory and individual memory work areas, it has direct access to memory (with hardware prefetching) and a large shared L2.
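
To illustrate the divergence penalty described above, here is a toy C++ sketch of a 16-wide lockstep bank: when lanes disagree on a branch, both paths are executed for every lane and a per-lane mask selects the result, so a divergent branch costs roughly the sum of both paths. This is a conceptual model only, not how any real GPU is actually programmed:

    #include <cstdio>

    const int LANES = 16;                     // one "bank" of lockstep cores

    // Both branch bodies run for the whole group; the mask picks the result per lane.
    void shadeGroup(const float in[LANES], float out[LANES])
    {
        bool mask[LANES];
        for (int i = 0; i < LANES; ++i)
            mask[i] = (in[i] > 0.5f);         // the branch condition, evaluated per lane

        float thenResult[LANES], elseResult[LANES];
        for (int i = 0; i < LANES; ++i)       // "then" path, executed for every lane
            thenResult[i] = in[i] * 2.0f;
        for (int i = 0; i < LANES; ++i)       // "else" path, also executed for every lane
            elseResult[i] = in[i] + 1.0f;

        for (int i = 0; i < LANES; ++i)       // select: only now does the mask matter
            out[i] = mask[i] ? thenResult[i] : elseResult[i];
    }

    int main()
    {
        float in[LANES], out[LANES];
        for (int i = 0; i < LANES; ++i) in[i] = i / float(LANES);
        shadeGroup(in, out);
        printf("lane 3 -> %f\n", out[3]);
        return 0;
    }

If every lane takes the same branch, a real GPU skips the untaken path entirely; the cost only shows up when the pixels in a bank disagree.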

Re:Trying to fight the trend toward specialization (1)

serviscope_minor (664417) | more than 6 years ago | (#24467045)

General purpose is cheaper, special purpose is faster.

Only sort of. Special purpose is often cheaper, hence the profusion of ASICs. General purpose is more flexible, and so more desirable as a result. Also, special purpose is only cheaper if "general purpose" isn't quite up to the task. Special purpose is also only cheaper if you're doing it all the time.

For instance, on the low end, MP3 players often have (had?) MP3 decoder ASICs, because it was too expensive to perform on the very small CPU. On a PC, there's no point. Even though using an ASIC would be cheaper for just decoding MP3s (it would use less power, free up the CPU, etc.), it isn't worth it, since the CPU is fast enough and doesn't spend all its time playing MP3s.

Re:Trying to fight the trend toward specialization (5, Interesting)

Churla (936633) | more than 6 years ago | (#24465503)

I don't think so. I think the fact is that with the right architecture (which Intel is trying to get into place), which exact core on which processor handles a specific task should become less and less relevant.

What this technology will hopefully provide is the ability to have a more flexible machine which can task cores for graphics, then re-task them for other needs as they come up. Your serious gamers and rendering heads will still have high-end graphics cards, but this would allow more flexibility for "generic" business-build PCs.

Re:Trying to fight the trend toward specialization (1)

gbjbaanb (229885) | more than 6 years ago | (#24466797)

What'll be more interesting is if it fragments the PC market.

If you want a super-fast ray-tr, erm, protein folding application you need one with the Larrabee chipset. If you want to play the latest game you'll need a traditional PC + graphics card. Would it be possible that business PCs turn to Larrabee and home PCs stick with current architectures?

Ray Tracing -v- Rasterization (0)

mosel-saar-ruwer (732341) | more than 6 years ago | (#24465289)


Neither the summary nor TFA itself mentions the words "Ray Tracing" or "Rasterization" [slashdot.org] .

Am I missing something here?

Re:Ray Tracing -v- Rasterization (4, Informative)

Anonymous Coward | more than 6 years ago | (#24465411)

No, because the article is about Intel explaining that the purpose of Larrabee is NOT to be specialised like that. It's meant to be a completely programmable architecture that you can use for rasterization, ray tracing, folding, superPi or whatever else you want to program onto it.
Basically, they're trying to say "it's not REALLY a GPU as such, it's actually a really fat, very parallel processor. But you can use it as a GPU if you really want to".

I beg to differ. (1)

mosel-saar-ruwer (732341) | more than 6 years ago | (#24467885)


The biggest debate in all of graphics-dom [graphixery?] for the last six months or a year has been Ray Tracing -vs- Rasterization.

So what happened?

I just don't understand how you can have an article about next-generation GPU tech and not ask whether the logic gates & data busses are going to be optimized for Ray Tracing or for Rasterization or for both [which would require at least twice the silicon, if not twice the wattage and twice the heat dissipation].

Has Intel completely abandoned the idea of optimizing silicon for Ray Tracing, and returned to what essentially amounts to software-based graphics, or is there a "Field Programmable" aspect to Larrabee which would allow someone [the programmer who writes the driver, maybe?] to choose how he wants the silicon to be optimized?

Re:I beg to differ. (3, Insightful)

TheRaven64 (641858) | more than 6 years ago | (#24468091)

This is SIGGRAPH. They've been having the 'ray tracing versus rasterisation' debate for about three decades there. If you put anything definitive into your paper then you are likely to get a reviewer who is in the other camp, and get your paper rejected. If you say 'speeds up all graphics techniques and even some non-graphics ones' then all of your reviewers will be happy.

Oh. (1)

mosel-saar-ruwer (732341) | more than 6 years ago | (#24468917)


My bad - when something is this irrational, I guess the first suspicion should be politics - instead, I had simply assumed incompetence [or insouciance or absence of inquisitiveness] on the part of the author.

I will work to up my cynicism.

Intel Releases Profit Rpt (-1, Offtopic)

angeln123 (1337625) | more than 6 years ago | (#24465319)

A best website for IT man! http://pccity.myhosting247.com/ [myhosting247.com]

OpenGL (0)

B5_geek (638928) | more than 6 years ago | (#24465383)

I get a warm-fuzzy feeling seeing that OpenGL isn't dead. I was first and best impressed with it when I played Neverwinter Nights, so why hasn't it caught on more? Why don't more open source games use it (as opposed to reusing the Quake engine)?

Re:OpenGL (5, Informative)

TheRaven64 (641858) | more than 6 years ago | (#24465475)

The Quake engine uses OpenGL (or its own software renderer, but I doubt anyone uses that anymore), so games based on it do use OpenGL. Most open source games that use 3D use it, as do most OS X games, and quite a lot of console games. OpenGL ES is supported on most modern mobile phone handsets (all Symbian handsets, the iPhone and Android) and the PS3. I don't know why you'd think OpenGL was dead or dying - it's basically the only way of writing portable 3D code that you want to benefit from hardware acceleration at the moment.

Re:OpenGL (4, Interesting)

Ed Avis (5917) | more than 6 years ago | (#24465621)

The Quake engine uses OpenGL (or its own software renderer, but I doubt anyone uses that anymore),

Isn't the point of Larrabee to change that? With umpteen Pentium-compatible cores, each one beefed up with vector processing instructions, software rendering might become fashionable again.

Re:OpenGL (2, Informative)

Anonymous Coward | more than 6 years ago | (#24465749)

You still need an API - which OpenGL provides. On the hardware side of things, few chips actually implement the (idealized) state machine that OpenGL specifies, it's always a driver in between that translates the OpenGL model to the chip model.

Re:OpenGL (3, Informative)

TheRaven64 (641858) | more than 6 years ago | (#24467369)

OpenGL is just an abstraction layer. Mesa implements OpenGL entirely in software. Implementing it 'in hardware' doesn't really mean 'in hardware' either, it means implementing it in software for a coprocessor that has an instruction set better suited to graphical operations than the host machine.

Sure, you could write your own rasteriser for Larrabee, but it wouldn't make sense to do so. If you use an off-the-shelf one then a lot more people are likely to be working on optimising it. And if you're implementing an off-the-shelf rasteriser, then implementing an open specification like OpenGL for the API makes more sense than making everyone learn a new one, and means that there's already a load of code out there that can make use of it.

Re:OpenGL (1)

mikael (484) | more than 6 years ago | (#24469627)

If you are going to replace the GPU for rendering, then you are going to have to replace the Z-buffer (for depth testing), triangle rasterisation with hardware texture mapping and fragment shaders, hardware shadow mapping, and vertex shaders (for character animation). All of this should also work with multi-sampling at HDTV resolutions (2048 x 1500+ pixels). Someone made the observation that the average pixel gets rendered at least seven times, so that would have to be taken into account. The only alternative to all of this is ray tracing, which is the direction Intel is trying to push the industry in.
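
For a sense of what the Z-buffer item on that list looks like when done in software, here is a minimal depth-tested pixel write (an illustrative fragment with made-up buffer names, ignoring multi-sampling entirely):

    #include <vector>
    #include <limits>

    const int W = 1920, H = 1080;
    std::vector<float>    zbuffer(W * H, std::numeric_limits<float>::max()); // "far plane" everywhere
    std::vector<unsigned> color(W * H, 0);                                   // packed RGBA

    // Write a shaded pixel only if it is closer than what is already there.
    inline void writePixel(int x, int y, float z, unsigned rgba)
    {
        int idx = y * W + x;
        if (z < zbuffer[idx]) {       // the depth test a GPU does in fixed-function hardware
            zbuffer[idx] = z;
            color[idx]   = rgba;
        }
    }

    int main()
    {
        writePixel(10, 10, 0.25f, 0xffffffffu);   // nearer fragment wins
        writePixel(10, 10, 0.75f, 0xff0000ffu);   // farther fragment is rejected
        return 0;
    }

The seven-times overdraw figure means this little branch runs several times per pixel per frame, which is exactly the kind of work the fixed-function hardware currently absorbs.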

Re:OpenGL (1)

Ed Avis (5917) | more than 6 years ago | (#24469983)

I was just prompted by what the other poster mentioned about Quake. The old Quake rendering engine was optimized like crazy for Pentium processors. Larrabee is going to be a phalanx of them (well, not quite the original Pentium, something a bit better). The Quake code might run on it rather well, with each CPU rendering a small tile of the display. Of course, Quake's visual effects are hardly state-of-the-art nowadays, but it would be an interesting hack.

Re:OpenGL (0)

Anonymous Coward | more than 6 years ago | (#24467187)

Actually, most consoles DON'T use OpenGL; they use their own implementation that may (or may not) be somewhat similar to it.
Consoles typically let you deal directly with the hardware (through their own APIs) instead of OpenGL.

Re:OpenGL (1)

Austerity Empowers (669817) | more than 6 years ago | (#24466301)

There's a difference between the Quake engine and OpenGL. OpenGL is just a graphics library; it pretty much just outputs primitives.

The Quake engine manages meshes, does collision detection, handles all the mess of drawing the right textures for the right models, managing lighting etc.

If there were an OSI model for graphics, OpenGL would be layer 4, and the Quake Engine would be layer 5/6.

Larrabee is a GPU (1)

Samy Merchi (1297447) | more than 6 years ago | (#24465431)

"its first-ever forthcoming many-core architecture, codenamed Larrabee" The Core architecture has duos and quads. Nehalem is just about to launch, going up to octocores at least. The point of the article eluded me until I went to Wikipedia and discovered that the Larrabee being talked about is a *GPU* rather than a CPU. Could have used that information somewhere in the original post.

Believe It When I See It (1, Insightful)

shplorb (24647) | more than 6 years ago | (#24465433)

Bearing in mind all the other promises Intel has made about their previous graphics offerings, I'm rather inclined to think that once again this will underwhelm. Especially considering all the crap that's been coming out of Intel about real-time raytracing. (It's always been just around the corner because rasterisation always gets faster.)

That's not to say that it isn't an interesting bit of tech, but from what I've seen so far it looks like the x86 version of Cell. Of course, it's a PC part and won't be showing up in any consoles anytime soon, so as a console developer it doesn't really do anything for me. I'm mostly interested in how they'll handle memory bandwidth.

I also expect that nVidia will put out something within 12 months that will stomp its guts out.

Re:Believe It When I See It (4, Insightful)

TheRaven64 (641858) | more than 6 years ago | (#24465515)

I think Larrabee is quite believable. They are quoting performance numbers that make sense and a power consumption of 300W. The only unbelievable idea is that a component that draws 300W could be a mass-market part in an era when computers that draw over 100W total are increasingly uncommon and handhelds (including mobile phones) are the majority of all computer sales, with laptops coming in second and desktops third.

Not at all like Cell BE (2, Informative)

Mathinker (909784) | more than 6 years ago | (#24466671)

> so far it looks like the x86 version of Cell

Then you missed the fact that the article says it uses a coherent 2-level cache for inter-core communications; the Cell BE is quite exotic in that it uses DMA transfers and has no memory coherency between the SPEs.

The article doesn't explicitly state that the Larrabee cores are homogeneous, but I would be surprised if they weren't; the Cell cores are somewhat heterogeneous if you want to use the PowerPC core to squeeze the last drop of processing power out of it.

You are correct in that Intel appears to have copied the ring network of the Cell BE, although I don't understand why they need it in addition to the coherent cache. Oh, well, guess I'll have to wait until the paper really hits the public.
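
The practical difference being described can be sketched as follows: with a coherent cache, a core just dereferences shared memory, while an SPE-style core must first stage data into its private local store with an explicit transfer. This is a conceptual illustration only -- dma_get here is a stand-in function I made up, not the Cell's real DMA API:

    #include <cstring>
    #include <cstddef>

    // Coherent-cache model (Larrabee-style): any core can read shared data directly;
    // the cache hardware keeps every core's view consistent.
    float sumShared(const float* shared, size_t n)
    {
        float s = 0.0f;
        for (size_t i = 0; i < n; ++i)
            s += shared[i];              // plain loads, no explicit transfer step
        return s;
    }

    // Local-store model (Cell-SPE-style sketch): data must be staged into a small
    // private buffer by an explicit DMA-like copy before it can be touched.
    const size_t LOCAL_WORDS = 4096;
    float localStore[LOCAL_WORDS];

    void dma_get(void* dst, const void* src, size_t bytes)   // stand-in, not a real Cell call
    {
        std::memcpy(dst, src, bytes);
    }

    float sumViaLocalStore(const float* mainMemory, size_t n)
    {
        float s = 0.0f;
        for (size_t done = 0; done < n; done += LOCAL_WORDS) {
            size_t chunk = (n - done < LOCAL_WORDS) ? (n - done) : LOCAL_WORDS;
            dma_get(localStore, mainMemory + done, chunk * sizeof(float));  // stage in
            for (size_t i = 0; i < chunk; ++i)
                s += localStore[i];
        }
        return s;
    }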

software mode the return? (1)

Z80a (971949) | more than 6 years ago | (#24465461)

There are like a LOT of computers with really good CPUs and really weak video chips, like laptops and Dell computers.

Why not just do a software-mode driver for them?

That would probably make the 3D gaming market a bit bigger without forcing people to buy a 3D accelerator card (a thing that is kinda impossible to do on most laptops).

Re:software mode the return? (1)

ZeroExistenZ (721849) | more than 6 years ago | (#24465783)

without forcing people to buy a 3D accelerator card (a thing that is kinda impossible to do on most laptops)

Who forced you to buy a more 3D-oriented graphics card?

These days you can pick your system according to your usage: if you want to play a lot of games, do a lot of encoding or work a lot with media, you'll get a more advanced graphics card and are willing to make a bigger investment in it. If you don't, you're perfectly fine with an integrated graphics card. The choice is there, and it's to be made by you. You're not "forced".

Why not just do a software-mode driver for them?

What would software be able to do that a 3D graphics card is purpose-built to do? You're going to emulate, in software, 3D operations you otherwise have a dedicated chip for, and then compare the performance in frames rendered? I believe DirectX allows transparent use of "software"/"hardware" acceleration.

You can read up more on it on Wikipedia [wikipedia.org]. It sounds silly to implement another redundant driver to get the same or even worse results.

Re:software mode the return? (1)

mdwh2 (535323) | more than 6 years ago | (#24465905)

There are software drivers, but I suspect they'd tend to be far slower than even cheap graphics hardware.

In fact one of the reasons why Intel integrated graphics are so slow is because they do some things (like vertex shaders) in software.

Re:software mode the return? (1)

Narishma (822073) | more than 6 years ago | (#24466649)

Because software mode is even slower than the crappy Intel chips.

Just imagine... (1)

oodaloop (1229816) | more than 6 years ago | (#24465623)

..a, uh, beowulf cluster...I just can't put my heart into it anymore!

Re:Just imagine... (1)

inasity_rules (1110095) | more than 6 years ago | (#24466423)

Is this a new slashdot meme? Never finishing the jokes?

Ok well, there was an AI bot running on a larrabee, an Irish Priest and a Soviet Russian who walked into a bar. The Irish priest orders a scotch. Then suddenly...

Re:Just imagine... (1)

oodaloop (1229816) | more than 6 years ago | (#24469349)

...the priest says with a slur, "In Soviet Russia, larabee bot overlords welcome YOU!"

There, fixed that for you.

Yes, but what is the extent of Larrabee? (4, Interesting)

Futurepower(R) (558542) | more than 6 years ago | (#24465667)

Today at a coder's party we had a discussion about Intel's miserable corporate communications.

Intel's introduction of "Larrabee" is an example. Where will it be used? Only in high-end gaming computers and graphics workstations? Will Larrabee provide video adapters for mid-range business desktop computers?

I'm not the only one who thinks Intel has done a terrible job communicating about Larrabee. See the ArsTechnica article, Clearing up the confusion over Intel's Larrabee [arstechnica.com] . Quote: "When Intel's Pat Gelsinger finally acknowledged the existence of Larrabee at last week's IDF, he didn't exactly clear up very much about the project. In fact, some of his comments left close Larrabee-watchers more confused about the scope and nature of the project than ever before."

The Wikipedia entry about Larrabee [wikipedia.org] is somewhat helpful. But I don't see anything which would help me understand the cost of the low-end Larrabee projects.

Re:Yes, but what is the extent of Larrabee? (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24467993)

As soon as it actually exists somewhere other than Intel's laboratories, they're usually pretty forthcoming on details (to the point we even have specs on how to use their graphics hardware, which is more than we can say for e.g. nVidia.)

OTOH, Larrabee is still Labware, and should be thought of as such. Unless you're willing to sign away your life in NDAs, don't expect to know too much yet.

I thought it was marketing (0)

Anonymous Coward | more than 6 years ago | (#24470021)

In order to get a product well hyped, sometimes it is best to keep the product secret, until it is close enough to being released, that it can assuredly be released before the hype has died down. This is something Apple seems to understand.

Intel Marketing: ArsTechnica art. about confusion? (1)

Futurepower(R) (558542) | more than 6 years ago | (#24470793)

If Intel is doing that, the company isn't doing a good job. Instead it is getting publicity from ArsTechnica about how it is communicating in a confused way.

Frist p8sot (-1, Troll)

Anonymous Coward | more than 6 years ago | (#24465719)

reaper Nor Do the Un1ted States of

One, Two, Four, Many? (1)

splutty (43475) | more than 6 years ago | (#24465863)

This only goes to show that the people at Intel really can't count..

(Firmly tongue in cheek, of course :)

Re:One, Two, Four, Many? (1)

dapsychous (1009353) | more than 6 years ago | (#24466171)

Maybe they're all from that tribe in the Amazon that only has three number words. One, some, and many?

C++ programming Model (0)

Anonymous Coward | more than 6 years ago | (#24466147)

*Additionally, a broad potential range of highly parallel applications including scientific and engineering software will benefit from the Larrabee native C/C++ programming model."*

Can someone explain how the C/C++ programming model is compatible with a many-core system?

This is the most interesting statement, but I am completely unaware of how it is even somewhat true. In my experience the C/C++ languages have little to no native support for multiple threads (short of some enhancements in the new standard, which nothing supports yet). All multi-threaded C/C++ programs rely heavily on OS-specific extensions, not on any programming model in the languages...

Re:C++ programming Model (1)

Wesley Felter (138342) | more than 6 years ago | (#24466463)

Clearly Intel means C/C++ with pthreads or Win32 threads. Also, C++ is getting thread support in C++0x.
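
In other words, the "model" is just ordinary threaded C/C++. A minimal pthreads sketch of splitting a data-parallel loop across cores -- nothing Larrabee-specific, simply the kind of code presumably meant by "native C/C++ programming model":

    #include <pthread.h>
    #include <cstdio>

    const int N_THREADS = 4;          // would be "number of cores" on a many-core part
    const int N         = 1 << 20;
    float a[N], b[N], c[N];

    struct Range { int begin, end; };

    void* addSlice(void* arg)         // each thread works on its own slice of the arrays
    {
        Range* r = (Range*)arg;
        for (int i = r->begin; i < r->end; ++i)
            c[i] = a[i] + b[i];
        return NULL;
    }

    int main()
    {
        pthread_t threads[N_THREADS];
        Range ranges[N_THREADS];
        int chunk = N / N_THREADS;

        for (int t = 0; t < N_THREADS; ++t) {
            ranges[t].begin = t * chunk;
            ranges[t].end   = (t == N_THREADS - 1) ? N : (t + 1) * chunk;
            pthread_create(&threads[t], NULL, addSlice, &ranges[t]);
        }
        for (int t = 0; t < N_THREADS; ++t)
            pthread_join(threads[t], NULL);

        printf("c[0] = %f\n", c[0]);
        return 0;
    }

Scaling this up is mostly a matter of raising N_THREADS and keeping the slices independent; the language itself stays plain C/C++.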

Re:C++ programming Model (0)

Anonymous Coward | more than 6 years ago | (#24466653)

how will scientific programming benefit from anything designed around C? Scientific programming is done in fortran. All of it. Not a single line is written in C.

Re:C++ programming Model (2, Informative)

robthebloke (1308483) | more than 6 years ago | (#24469863)

One language that is being used in the scientific community right now is CUDA - which runs on a GPU and is C-based.

In addition, Fortran-to-C tools have been around for some years. To say that Fortran is the only scientific language is BS. R [r-project.org], S-Plus [insightful.com], Octave [wikipedia.org], Matlab [mathworks.com], Perl and CUDA [nvidia.com], to name a few. Taking R as an example - it provides a code interface that allows you to write optimised C/C++ routines and use them from the language itself.

Re:C++ programming Model (0)

Anonymous Coward | more than 6 years ago | (#24470843)

So my scientific code that happens to be coded up in C++ doesn't exist according to you?

Idiot.

Grandpa, you're so 1977... (1)

mangu (126918) | more than 6 years ago | (#24470949)

Scientific programming is done in fortran. All of it. Not a single line is written in C.

It's true that by the time C became popular, there already existed many libraries in Fortran that had been tested and debugged, so no new development on those lines was needed. But most new scientific software is being written in C. There are a few diehards who will never admit it, but for all practical purposes Fortran is dead as a development language.
 

Changing the playground... (5, Interesting)

ponos (122721) | more than 6 years ago | (#24466221)

What most people don't seem to realize is that Larrabee is not about winning the 3D performance crown. Rather, it is an attempt to change the playground: you aren't buying a 3D card for games. You are buying a "PC accelerator" that can do physics, video, 3D sound, Dolby decoding/encoding, etc. Instead of just having SSE/MMX on the chip, you now get a completely separate chip. AMD and NVIDIA already try to do this with their respective efforts (CUDA, etc.), but Larrabee will be much more programmable and will really pwn for massively parallel tasks. Furthermore, you can plug in as many Larrabees as you want, no need for SLI/Crossfire. You just add cores/chips like we now add memory.

P.

Re:Changing the playground... (1)

iansmith (444117) | more than 6 years ago | (#24469083)

The need for SLI/Crossfire exists because the bandwidth needed for multiple cards to work on a frame buffer is too high for even the newest PC memory bus.

Intel's cards are not going to be able to get around this, so we will most likely add a third method of card interconnect to the mess.

nVidia CUDA for HPC (0)

Anonymous Coward | more than 6 years ago | (#24466789)

960 cores, 4 teraflops, 400 GB/s memory bandwidth in a 1U rackmount: nVidia Tesla S1070 [nvidia.com]

Re:nVidia CUDA for HPC (1)

robthebloke (1308483) | more than 6 years ago | (#24469927)

.... and it is also very restrictive in *how* you can access data, not to mention that all 960 cores have to run the same instructions. If you happen to have code that can be parallelised, but not in a way that suits a GPU (i.e. separate tasks, not hundreds of instances of the same task), then Larrabee starts to sound quite exciting.

What does this do (for graphics) better than GPU? (1)

LotsOfPhil (982823) | more than 6 years ago | (#24466931)

Larrabee looks very interesting for scientific computing, but what makes it better for graphics than an ATI/NVIDIA GPU?

Re:What does this do (for graphics) better than GP (1)

Hanners1979 (959741) | more than 6 years ago | (#24467439)

Its architecture could (potentially) make for better multi-GPU solutions (i.e. with a shared frame buffer across all cores instead of x amount of RAM per GPU), and the use of tile-based rendering has enough efficiency benefits to make it interesting.

It's way too early to say whether it'll even be equivalent performance-wise to AMD's and NVIDIA's GPU designs in Larrabee's release time frame, and it'll be very dependent on its compiler and drivers, but as a concept right now it's hugely interesting in a number of ways as a pure graphics architecture alone.

Re:What does this do (for graphics) better than GP (1)

robthebloke (1308483) | more than 6 years ago | (#24470235)

I'm more of a 'high-end' graphics coder (read: not games). Let's say we want to do some complex soft-body animation. We need to be able to access a coherent data structure that represents the entire geometry mesh to do that. You can't do that on the GPU - triangles and vertices are all you get.

Let's say we want to use one of the numerous deformation techniques that do not work by transforming normals via a matrix (i.e. they must re-compute the normals because the deformation is non-linear - FFDs, for example). You have only been able to do that on the GPU since the GeForce 8800 came out (it requires geometry shaders) - and even then it isn't perfect (it can't obey soft/hard edges, and there are a few other things I'd like to do).

Let's say you want to start doing global illumination rendering (of which ray tracing is one technique - but not the only, or the most useful, one); you'll need access to a scene database - which you can't get on a GPU.

I've never really liked the design of the GPUs we currently have (though I realise why they've evolved the way they have) - it all feels like nasty hack after nasty hack (and it changes with every new GeForce card). I've always wished that instead of a GPU we just had an add-in, highly vectorised CPU which we could use for anything. There are literally hundreds of things I can see this being useful for.
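
As a concrete example of the whole-mesh access being described: recomputing smooth vertex normals after a non-linear deformation needs every triangle that touches each vertex, which a triangles-in/pixels-out pipeline doesn't expose. A rough CPU-side sketch (my own simplification, ignoring hard edges and crease angles):

    #include <vector>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 sub(const Vec3& a, const Vec3& b)   { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
    Vec3 cross(const Vec3& a, const Vec3& b) { Vec3 r = { a.y*b.z - a.z*b.y,
                                                          a.z*b.x - a.x*b.z,
                                                          a.x*b.y - a.y*b.x }; return r; }

    struct Mesh {
        std::vector<Vec3> positions;   // deformed positions (after the FFD or soft-body step)
        std::vector<int>  indices;     // 3 ints per triangle
        std::vector<Vec3> normals;     // one per vertex, to be rebuilt
    };

    // Every triangle contributes its face normal to its three vertices, then each
    // vertex normal is renormalised -- a whole-mesh gather/scatter, not per-triangle work.
    void recomputeNormals(Mesh& m)
    {
        m.normals.assign(m.positions.size(), Vec3());
        for (size_t t = 0; t + 2 < m.indices.size(); t += 3) {
            int i0 = m.indices[t], i1 = m.indices[t+1], i2 = m.indices[t+2];
            Vec3 n = cross(sub(m.positions[i1], m.positions[i0]),
                           sub(m.positions[i2], m.positions[i0]));
            m.normals[i0].x += n.x; m.normals[i0].y += n.y; m.normals[i0].z += n.z;
            m.normals[i1].x += n.x; m.normals[i1].y += n.y; m.normals[i1].z += n.z;
            m.normals[i2].x += n.x; m.normals[i2].y += n.y; m.normals[i2].z += n.z;
        }
        for (size_t v = 0; v < m.normals.size(); ++v) {
            Vec3& n = m.normals[v];
            float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
            if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        }
    }

Trivial on a CPU with the whole mesh in memory; awkward on a pipeline that only ever hands you one triangle at a time.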

Potentially, but... (1)

Balau (1286776) | more than 6 years ago | (#24467007)

Does this mean that Norton will scan my drive in 3D?

Seriously, manymanycores architectures are nice for public servers that are coded very well. Potentially able to serve N clients at once, the machines running Larrabees will usually bottleneck somewhere else.

For the desktop user, manymanycores mean that the main window will move smoothly in the foreground while anti-blackware, indexes and updates consume the background.

For the power gamer, even manymanycores won't be enough. There's no such thing as "enough".

But can intel make good drivers as there on board (1)

Joe The Dragon (967727) | more than 6 years ago | (#24467091)

But can Intel make good drivers, given that their onboard ones suck?

Their onboard video cards look good on paper but then come in dead last next to NVIDIA and ATI onboard video, and that is without using SidePort RAM. ATI's new onboard video can use SidePort RAM.

Re:But can intel make good drivers as there on boa (0)

Anonymous Coward | more than 6 years ago | (#24467167)

Last I heard, Tom Forsyth and Michael Abrash were writing the graphics "drivers". So I expect good things.

Re:But can intel make good drivers as there on boa (0)

Anonymous Coward | more than 6 years ago | (#24467681)

You keep on using that word, I do not think it means what you think it means.

Gettin' a kick . . . (1)

Quixote (154172) | more than 6 years ago | (#24467237)

I live in Larrabee, IA [google.com] and I'm getting a kick out of these replies . . .

Anandtech has an excellent article (3, Informative)

Vaystrem (761) | more than 6 years ago | (#24467511)

That is much more detailed than the one linked in the article summary. It can be found here. [anandtech.com]

Re:Anandtech has an excellent article (1)

Funk_dat69 (215898) | more than 6 years ago | (#24470407)

Wow, I always thought Anand was a bit of an Intel fanboy, but does he ever gush over Intel in that one.

Some fun out of context quotes:

"adds a level of value to the development community that will absolutely blow away anything NVIDIA or AMD can currently (or will for the foreseeable future) offer."

"Larrabee could help create a new wellspring of research, experimentation and techniques for real-time graphics, the likes of which have not been seen since the mid-to-late 1990s."

"Larrabee is stirring up in the Old Guard of real-time 3D computer graphics, having icons like Michael Abrash on the team will help "

I read Anand quite a bit, but I think he missed the 'objective' aspect of journalism just a bit in this one.

Niagara similarities... (0)

Anonymous Coward | more than 6 years ago | (#24468633)

FTA: "The Larrabee architecture supports four execution threads per core with separate register sets per thread. This allows the use of a simple efficient in-order pipeline, but retains many of the latency-hiding benefits of more complex out-of-order pipelines when running highly parallel applications."

Funny how Intel made fun of Sun's Niagara, yet the above appears to be designed very similarly, if not exactly the same.

Modern 486 (1)

Nom du Keyboard (633989) | more than 6 years ago | (#24469413)

I wonder how a 486 core would perform on a modern fab process. After all, that chip had a modest I/D cache, single-instruction-per-cycle performance for many instructions, and integrated floating point - all with a tiny transistor budget by modern standards.