INicheI writes
"According to Intel, the plans for a release of a 2GHz Xeon for dual processor servers have been cancelled. Instead Intel is planning to debut a 2.2GHz chip codenamed "Prestonia" that will be ready the first quarter of 2002. I would love to see Quake running on a 4.4GHz computer."
Just Quake? (Score:1)
Re:Just Quake? (Score:2)
Just think of how bad the bottleneck would be at the video card. Heck, with a config like this, software mode would be better than hardware mode.
Re:Just Quake? (Score:1)
It had to be said eventually...
Re:Just Quake? (Score:1)
"4.4GHz"? First of all, Quake doesn't support SMP. And MHz has the exact same meaning as MIPS: Meaningless Indicator of Processor Speed.
Well, PCI runs at 33MHz, giving a maximum potential transfer rate of 133MB/s. So if the processor did nothing but pump data to the graphics board, you would get 101.47 FPS at 1280x1024x8. But with hardware rendering you only have to transfer a bunch of coordinates, enabling much higher speeds than with software rendering.
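The arithmetic behind that 101.47 figure can be sketched in a few lines (a back-of-the-envelope check, assuming the quoted 133MB/s means 133,000,000 bytes/s):

```python
# Frames per second a 33MHz, 32-bit PCI bus could push to the video card
# if the CPU did nothing but copy finished frames (pure software rendering).

PCI_BYTES_PER_SEC = 133_000_000      # quoted peak PCI bandwidth, 133MB/s
FRAME_BYTES = 1280 * 1024 * 1        # 1280x1024 at 8 bits (1 byte) per pixel

fps = PCI_BYTES_PER_SEC / FRAME_BYTES
print(f"{fps:.2f} FPS")              # 101.47 FPS, matching the post
```

With hardware rendering you ship vertices instead of finished pixels, so the same bus goes much further.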
Re:Just Quake? (Score:1)
Most graphics cards nowadays are in an AGP slot; around here I can't even buy a PCI one, so those 133MB/s are way off. AGP 4x gives 1GB/s, and even if you only get 2x it's still 512MB/s.
Re:Just Quake? (Score:1)
It's still sitting in a box along with a Gravis UltraSound and a Number Nine video card.
Quake? (Score:3, Funny)
Re:Quake? (Score:2)
That's not enough to run Pong... (Score:2, Interesting)
There was a discussion about this on the MAME discussion board (www.mame.net) saying that it would probably require 5GHz machines to run a simulation of the circuits using the schematics of the game. Pong is an analog game; it has no microprocessor and no ROM. So emulating it is mighty difficult!
Re:That's not enough to run Pong... (Score:2)
It actually took an entire night for the simulation to get to the game's title screen and fully render it. I can't remember the exact link for the guy's write-up on what he did (he even had screenshots), but I am pretty sure he did it about 3-6 years ago. I'd guess he ran the simulation on roughly a Pentium 100MHz. So, uhhh, maybe 5GHz is a bit low, if you actually want the game to be playable.
Still, if you really want perfect emulation of a game, it's probably the best way to go, simulating the actual PCBs. Of course, you could always collect the real thing, which people do. Check eBay for such sales.
Re:That's not enough to run Pong... (Score:2)
There are now open source implementations of the 6502 circuit diagrams.
4.4 ghz ? lame (Score:1)
4.4 ghz marketing soundin phrase (Score:2)
Re:4.4 ghz marketing soundin phrase (Score:1)
Uhhh... 4.4ghz... no. (Score:1)
They get more work done faster, but not twice as fast as a single chip.
Re:Uhhh... 4.4ghz... no. (Score:2)
So they're like resistors (in that they add in series but not in parallel)?
Rt = (R1 * R2) / (R1 + R2), so it would run at (2.2 * 2.2) / (2.2 + 2.2) = 4.84 / 4.4 = 1.1 GHz. Damn.
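For what the joke is worth, the parallel-resistor arithmetic checks out (clock speeds don't actually combine this way, of course):

```python
def parallel(r1, r2):
    # Two "resistors" in parallel: Rt = (R1 * R2) / (R1 + R2)
    return (r1 * r2) / (r1 + r2)

print(parallel(2.2, 2.2))   # 1.1 -- two 2.2GHz chips "run at" 1.1GHz
```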
Re:Uhhh... 4.4ghz... no. (Score:2)
Re:Uhhh... 4.4ghz... no. (Score:1)
Re:Uhhh... 4.4ghz... no. (Score:1, Informative)
Recipe:
Write a program that operates on a dataset just smaller than 2x your CPU's L2 cache size. Time it on your single CPU box.
Add the second CPU, and break the program into two threads, one operates on the first half of the dataset, the other on the second half. Time it.
I'm sure a similar parallelization has just happened to occur at some point in the history of computing... =)
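The recipe above is about hardware caches and real CPUs, so any high-level sketch is only structural; here is one possible shape of it in Python (using processes rather than threads, since CPython's GIL would serialize pure-Python threads anyway):

```python
import multiprocessing as mp

def crunch(chunk):
    # Stand-in for "operate on your half of the dataset":
    # any CPU-bound kernel over the chunk works here.
    return sum(x * x for x in chunk)

def crunch_split(data, nworkers):
    # Break the dataset into contiguous chunks so each worker's share
    # can stay resident in its own CPU's L2 cache.
    size = len(data) // nworkers
    chunks = [data[i * size:(i + 1) * size] for i in range(nworkers)]
    with mp.Pool(nworkers) as pool:
        return sum(pool.map(crunch, chunks))

if __name__ == "__main__":
    data = list(range(100_000))
    # Time crunch(data) on one CPU vs crunch_split(data, 2) on two.
    assert crunch_split(data, 2) == crunch(data)
```

Timing each call (e.g. with time.perf_counter) on a one- vs two-CPU box is left to the reader, as the original recipe intends.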
What's up with summing the processors speeds? (Score:1)
Re:What's up with summing the processors speeds? (Score:1)
what? (Score:1)
2 * 2.2 GHz computers in parallel != 4.4 GHz. Dumbass.
Stupid Question (Score:2)
Re:Stupid Question (Score:3, Informative)
There are several advantages to a setup as described in this story ... a dual-processor Xeon can have benefits on the desktop. Of course, I'd never push a Xeon processor in this environment, as I honestly don't think it will be the overall best solution in the near future. With dual Athlons and Durons on the horizon, I'd take a closer look at them before considering a dual Xeon system, if only for the price aspect. However, I will attempt to explain why the Xeon architecture is superior to a standard Pentium III and why it potentially matters on the desktop.
Intel produces a version of the Pentium II and III called the "Xeon", which contains up to 2 megabytes of L2 cache. The Xeon is used frequently in servers as it supports 8-way multi-processing, but on the desktop the Xeon does offer considerable speed advantages over the standard Pentium III when large amounts of data are involved. Basically, the larger the working set of an application, that is, the amount of code and data in use at any given time, the larger the L2 cache needs to be. To keep costs low, Intel and AMD have both actually DECREASED the sizes of their L2 caches in newer versions of the Pentium III and Athlon, which I believe is a mistake. (AMD is working on this in the new chips: new technology will be used to increase the size of the L2 cache while retaining the full data-shuttle flexibility.)
The top-level cache, the L1 cache, is the most crucial, since it is accessed first for any memory operation. The L1 cache uses extremely high-speed memory (which has to keep up with the internal speed of the processor), so it is very expensive to put on chip and tends to be relatively small; it has grown from 8K in the 486 to 128K in the Athlon. The next step is the decoder, and this is one of the two major flaws of the P6 family. The 4-1-1 rule prevents more than one "complex" instruction from being decoded each clock cycle. Much like the U-V pairing rules for the original Pentium, Intel's documents contain tables showing how many micro-ops are required by every machine language instruction, and they give guidelines on how to group instructions.
Unlike main memory, the decoder is always in use. Every clock cycle, it decodes 1, 2, or 3 machine language instructions. This limits the throughput of the processor to at most 3 times the clock speed. For example, a 1 GHz Pentium III can execute at most 3 billion instructions per second, or 3000 MIPS. In reality, most programmers and most compilers write code that is less than optimal, and which is usually grouped for the complex-simple-complex-simple pairing rules of the original Pentium. As a result, the typical throughput of a P6 family processor is more like double the clock speed, e.g. 2000 MIPS for a 1 GHz processor. You'll notice that the Athlon outperforms the P3 family in this regard by a large margin.
Re:Stupid Question (Score:1, Informative)
If you need a frame pointer because you have a variable-sized array on the stack or use alloca(), you are left with 5 registers, with another one needlessly clobbered as soon as you need a division (EDX) or a modulo (EAX) by a non-power of 2. The other 3 are clobbered as soon as you use a string instruction...
Bottom line: with so few registers you have to keep a lot of live variables on the stack, spilling and reloading them like mad. Of course every spill is a store.
Also, when performing an operation between memory and a register, you have 2 possibilities: either load into a free register and then perform a register-to-register operation, which is good for the decoders but may indirectly cause another spill because temporary registers are so hard to find, or use a memory-to-register operation, which takes 2 micro-ops and can only be issued by the first decoder.
Actually the rules for the Pentium were simpler and more general: never use a register-to-memory operation, but do use memory-to-register, especially if you can pair them, since in this case you win on register pressure and code size without any execution time or decoding penalty.
Actually AMD's choice to extend the architecture to 16 registers is quite clever and solves a lot of the spill/reload problems: the increase from 8 to 16 is in practice often an increase from 6 to 14 or 5 to 13, multiplying the number of free registers by about 2.5. This is enough to solve the spill/reload problems on many, but not all, algorithms (with 64-bit addressing you try to keep more addresses in registers for code size reasons, while 32-bit addresses can easily be embedded in code).
I have hand-coded some critical subroutines myself on machines with 8 general-purpose registers (x86), a hybrid 8/16 arrangement (the 68k family; the address/data split is sometimes annoying, but at least any register can be used as an index for addressing, and with scaled indexing from the 68020 onwards it becomes quite nice), 16 general-purpose registers (VAX and IBM mainframes; the fact that R0 can't be used for addressing on the latter is irrelevant in practice) and 32 (PPC mostly, with some MIPS and Alpha). I can say that x86 is by far too limited, while I hardly ever ran out of registers on the other architectures. 16 registers with a rich set of addressing modes is fine, although RISC machines with 32 registers and fewer addressing modes are actually slightly easier to handle.
Bottom line: the x86 architecture sucks and requires astronomical bandwidth to the cache to run fast because it seriously lacks registers (and don't get me started on the floating-point stack).
"Xeon" sounds cool, but... (Score:1)
If you want a multiprocessing server in Q1 2002, the chips to buy are AMD's. By then, 3 or 4 mobos that support dual processors will be available. Load up on DDR and you'll be able to host anything.
Dual Xeon 1.7 Vs Dual Athlon 1.2 link (Score:1)
I know there will be some of you who'll say "Mah mama told me to not buy no AMD." But for the rest of us, this will be a no-brainer. For the difference in chip prices you will be able to pay for most of the 4 GB of DDR that AMD mobos will support. Or maybe yo' mama told you to send your money to Rambus...
FUD (Score:2)
The perfect example of FUD: "Well, what if your fan popped off!" The reality is that, while it is impressive that Intel has systems to thwart that, how many times, apart from on initial delivery, does your fan fall off? As another poster mentioned, it's quite the opposite: those things are a huge bitch to get off. I'd swear I was going to crack several motherboards in my efforts to get those suckers off.
That Tom's thing, while 100% extremist and pandering to Intel, was fascinating as I didn't know that the heat dissipation was so rapid.
Forgetting something? (Score:1)
A camel, a red mozillasaur and a penguin walk into a bar...
ahhh.......nevermind
At what point... (Score:2)
And besides, would Quake have the texture mapping to really utilize it? Or the polygon count?
Really, what I'd like to see is a 'make buildworld' or 'make bzImage' on it. That'd be a good dipstick to jam in the ol' engine.
Re:At what point... (Score:1)
And.. Yes, I have to say it.. Damn, I'd love to see a beowulf cluster of those!
Re:At what point... (Score:1)
If you have Quake, try installing the ut2 mod and timing it with a demo... you will see that a 150 fps card will probably come down to 40-50 fps sometimes.
Re:At what point... (Score:1)
Re:At what point... (Score:2, Funny)
Oh well. At least I can't hear very well either.
Re:At what point... (Score:2, Insightful)
The difference here is that you're talking about a constant 24fps or 30fps (film vs. NTSC -- those numbers aren't exactly right, because most film projectors open the shutter 2-3 times per frame, making an apparent 48-72fps, while NTSC is interlaced, making an apparent 60fps) with motion blurring and other movement artifacts that make frames flow together. For a video game (quake, for instance), you're talking an average fps, meaning that if you're getting an average of 30fps, you're very likely going to drop down into the teens when you run into heavy action. 60fps is the "sweet spot", since you should still stay above 30fps even in heavy action. That said, there are no motion blur effects with video games (well, yet anyway -- when 3dfx tried to do that, they ended up getting an average of 3-4fps), which means that you need a higher fps just to see smooth motion. In other words, the point of having 100+fps in a video game, average case, is to make the worst case still look smooth.
Anyway, once you can achieve an average fps of 100+, it's time to start turning that detail level up. A GeForce 3 may scream with nearly 200fps in Q3A, in 640x480x16bpp with all the details turned down, and even get a decent 80fps or so with higher detail, but the next-gen games are going to be clocking in much lower, simply due to the fact that they are so graphically rich. What that means is that video accelerators will need to continue to improve, so that we can hit the 100+fps mark on these newer, higher-detail games, so that the generation after that can go back down to 30fps with even more detail, and so on.
Re:At what point... (Score:3, Insightful)
I'm getting awfully tired of people mixing up the fps of normal video and first-person 3D games. Having played Quake and its offspring online for 4 years now, I can say without a shred of doubt that Q3 offers such playability that telling apart 40fps and a lunatic 125fps (which, btw, is optimal for the Quake3 engine physics, which alone would be enough reason for some people to go for 125) is relatively easy.
Where you easily notice it is in quick glances backwards, i.e. when in midair you lay a quick glance at what's right behind you and turn right back. Such rapid motions, and smoothness during them, are actually rather essential in Quake-like games (once you get past the shooting-everything-that-moves-along-with-your-teammates phase).
In other words, the usual rant is that when looking at a screen the human eye cannot distinguish frame rates over 20 (...50, depending on who's ranting), but they usually neglect that with first-person 3D games it's the whole world ahead of you that keeps turning all around, and in a very quick fashion at that. We're not talking about the rotation of some teapot in a 3D animation. And what's worse, it's usually people with zero or very little experience in 3D gaming. The kind of "I've played through Quake in hard-core mode, I know what I'm talking about". Those people have very little idea how competitive things have gotten in the online gaming scene.
I can't understand why people also forget that 20 FPS means a frame every 50ms. Not directly comparable, but still, anyone (experienced) who has played on the net and on a LAN knows there's a huge difference between 50ms and 10ms.
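The fps-to-milliseconds conversion being used here is just 1000/fps:

```python
def frame_time_ms(fps):
    # Milliseconds between successive frames at a given frame rate.
    return 1000.0 / fps

for fps in (20, 40, 60, 125):
    print(f"{fps:>3} fps -> {frame_time_ms(fps):.1f} ms/frame")
# 20 fps is a 50.0ms gap between frames; 125 fps (the figure quoted
# for Quake3 engine physics) is 8.0ms.
```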
Besides, try telling some sprint athlete that whether his shoes weigh 10 grams more or less makes jack difference.
Re:At what point... (Score:2)
Re:At what point... (Score:1)
some people claim they can't see the difference much past 30, and there are a select few people that can tell the difference above 60
and don't get me started on refresh rates
Re:At what point... (Score:1)
Re:At what point... (Score:1)
Re: (Score:1)
Re:At what point... (Score:2, Informative)
what is it good for? (Score:3, Funny)
For an ordinary PC consumer? And let's not talk about Quake for this. Everyone knows that nobody can really see the difference between 40fps and 100+fps, so 3D gaming really is a shuck. Between today's modern graphics cards and even mid-range CPUs there's enough computing power to do high-quality 3D rendering at high frame rates. I haven't upgraded my system (AMD K6-2 450) in two years, mainly because I've never found a good reason to do so. It does everything I need to do.
What the hardware industry needs is a new killer app like DOOM was in the early 1990's. DOOM may have made Id software millions, but it made Intel and its cohorts billions in hardware upgrades. If all you want is word processing, spreadsheets, and a few games here and there, nobody in his right mind would use a gigahertz-class processor, unless MS and Intel are in cahoots in making Office XP the bloated monster it is! :)
Video editing is a killer app (Score:2, Insightful)
What the hardware industry needs is a new killer app like DOOM was in the early 1990's.
This new app is video editing. After Effects filters run slowly because they have to run 60 times for each second of video. The sheer amount of data involved still makes video compression a very tedious process, even after spatial and temporal downsampling (i.e. cutting resolution and fps).
Re:Video editing is a killer app (Score:2)
Re:Video editing is a killer app (Score:2)
the cost of a digital cam-corder with which you can transfer and edit video is still high; let's not forget that, for the home user, there really isn't a ton of video editing to be had that I can imagine...
Your imagination is failing :^).
Digital camcorders are cheap: I just bought a Sony TRV900 for $1500, and the TRV900 is close to the top-of-the-line consumer machine out there. For not much more you can buy the same equipment that Spike Lee uses to make his films (Canon GL-1 and XL-1s). Single-CCD MiniDV camcorders go for $800 on the low end. All of these machines have Firewire interfaces.
On the computer side, Firewire cards are $60-$100. If you have a Mac like my TiBook it's standard. (Macs also come with NLE software bundled- iMovie. Not as powerful as Premiere but just fine for home movies.) Disk and memory are so cheap today it's scary. I bought a DV <-> analog bridge for $250 so I can go either way with tapes.
What am I going to use this stuff for? Well, I have a new baby. I'm putting together a video of him for relatives who aren't going to get to see him, such as his uncle who's stationed in Hawaii with the Army. Another copy is going to his birthmother. (He's adopted)
I've been working on video of my wife's grandmother's farm as well. She's a tough old lady, but she is old and we don't know how much longer the farm will be in the family.
What else? Well, the TRV900 has the ability to do both time lapse and very high shutter speed shots; I'm having fun filming storms and water droplets. When the current ban on general aviation VFR flights is lifted, I'm going to do aerial still and video shots of the area.
Video editing is never going to be immense on the level of word processing, but I think we'll be seeing a lot more of it in the future. Look at the huge growth in the digital camera market.
Eric
Re:Video editing is a killer app (Score:2)
Faster is better... until I can do raytracing/radiosity/caustics/etc. in realtime, I won't be satisfied.
For pc-emulation (Score:3, Informative)
Having an obscenely fast PC might make it possible to run Windows under Linux, and still have Windows, including DirectX, run with enough performance to do some serious gaming.
Re:For pc-emulation (Score:2, Interesting)
Re:For pc-emulation (Score:2)
Re:what is it good for? (Score:3, Informative)
This is true but you've missed the point ... FPS is a measurement of *average* framerate. Ultra fast cards are an attempt to raise the *worst case* performance of the card not the average case. A mere side-effect of this is raising the FPS.
Re:what is it good for? (Score:2, Insightful)
We should play a game of Quake 3: I'll set my fps to max and you can set your max fps to 40.
I like seeing my fps in Quake above 100. Anything less and you can see a statistical drop in my accuracy. There's a reason companies like ATI and NVIDIA are in the business they are in: fps matter. Heck, a TNT2 can pull 40fps; why do you think people like GeForce3 cards so much?
Re:what is it good for? (Score:3, Insightful)
        40Hz   100Hz
Turn      6      15
Aim       3       8
Fire      1       2
...frames in which to perform the operation. Those couple of extra frames for aiming actually do make a difference. I actually got noticeably better at Quake (particularly with the rail gun) when moving to a faster card and an optical mouse.
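The parent's table is consistent with each action having a fixed real-time window, the frame counts being roughly floor(fps x window). A sketch with hypothetical window lengths (my guesses, chosen to reproduce the table, not measured Quake values):

```python
def frames_available(fps, window_ms):
    # Whole frames rendered during an action window of window_ms milliseconds.
    return int(fps * window_ms / 1000)

# Hypothetical action windows: turn ~150ms, aim ~80ms, fire ~25ms.
for action, window_ms in (("Turn", 150), ("Aim", 80), ("Fire", 25)):
    print(action, frames_available(40, window_ms), frames_available(100, window_ms))
# Turn 6 15 / Aim 3 8 / Fire 1 2, matching the table
```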
Back on topic, this is good for databases. Faster processors means fewer processors, less cache contention, good for all concerned really. And this is a really good move on Intel's part - rather than support a
Mind you, it'd cost the same as my house.
Dave
Re:what is it good for? (Score:2)
In a way it's a pity for Intel that memory is now so cheap. If things had remained how they were in 1996, 'RAM doubler' type stuff which compresses pages of memory would now be commonplace. That would _really_ munch processor cycles...
Re:what is it good for? (Score:2)
Ethernet is going through this same cycle right now. Most cheap Ethernet cards have stopped doing their address table lookups (for MAC addresses), checksumming, and all the rest of the compute-intensive stuff on the chip, and have offloaded it into the driver. This has happened gradually over the last 6 years or so. Now, with 10Gb Ethernet coming out, you can't process that stuff fast enough on the CPU, so it's moving back onto the chip along with other software like the TCP/IP stack. Ethernet is so entrenched in the marketplace that manufacturers are guaranteed high volumes and can afford it.
3D processors right now are far ahead of what you can do with current CPUs because of the limitations of memory bandwidth, so I don't think it's a task that's likely to make another loop through the cycle in the near future, but it will probably happen eventually. The real question is: what cool technology that now requires additional hardware will soon become available to mainstream users? Video decoders seem to be in the software stage of the loop already, and are starting to move back into hardware by being embedded in regular video cards. Real-time video encoding and editing could be the next big thing, though...
Re:what is it good for? (Score:2)
This way each processor board could be tasked to handle whatever it was connected to. Adding CPUs to a system for actual processing would mean adding a processor board. Adding functionality to a processor board would mean adding the appropriate connector for whatever you wanted it to do.
This way we're not building as much specialized hardware to do a specific task. The bus could really be a half-smart switch instead to eliminate contention between processor boards. With general purpose processors running on the processor boards, specific OS functionality could easily be offloaded to individual processor boards, eliminating the need for the processor boards that actually host the OS to spend as much time manipulating dumb hardware.
Re:what is it good for? (Score:2)
Speed is very nice, but.... (Score:3, Interesting)
All this speed is encouraging programmers to be lazy: rather than writing good code that runs fast, they rely on the hardware being fast.
Just a bit of a rant...
Re:Speed is very nice, but.... (Score:2)
Everyone knows that hand-optimized code runs faster than compiler-generated code... Linux was reportedly sped up 40% by hand-optimizing 15% of it. Even a GREAT algorithm, left unoptimized, will not be as fast as hand-optimized code. But having a faster processor allows you to work with higher- and higher-level algorithms: the code may not run as fast, but when speeds are exceptionally high to begin with, you'll notice little difference.
So programmers may get lazy... who cares? Their code will follow established algorithms and be easier to modify.
Quake and CPUs (Score:2)
Spurred by AMD and IBM? (Score:3, Interesting)
Since 1991, IBM and Motorola have collaborated on the PowerPC architecture for personal computers, workstations, servers and embedded processor product lines, yet have developed and marketed separate implementations. Driven by the tremendous success of PowerPC microprocessors in the embedded marketplace, the companies announced in September 1997 the start of a joint effort to further enhance the PowerPC architecture to ensure greater consistency in their future embedded products. The Book E project and the new architecture it defines is the result of that effort.
With the chips being 64bit and fully capable of supporting multiple cores, it could give IBM servers and workstations a boost. The chip architecture wars are about to start to hit another exciting stretch, as long research programs begin to produce results for Intel, AMD, and AIM. 2002 should be a big year.
IMHO... (Score:1)
4:20 tech reporter (Score:2)
"This is more than likely about shortening the qualification cycle and saving the customers a bit of money"
That man is living large at 4:20.
Saving us money? Whatever. It's a damn corporation in trouble. The last thing they're thinking about is saving us money. Hell, forget 4:20, the man's on crack.
Re:4:20 tech reporter (Score:2)
In a mission-critical system (say, a large database for orders) you cannot, under any circumstances, have that system go down unexpectedly. And if it does, it needs to be brought back up immediately.
A corporate customer would rather spring for the Xeons and not have to worry about the system, rather than fill a server with cheaper Athlons and have to keep a few on hand in the slight chance that one of them ups and melts.
You've gotta remember, you get what you pay for. OTOH, you or I are just fine with Athlons, Durons, Celerons, etc. They're cheaper, they're reliable enough for a home user, and they're more widely available.
not 4.4 GHz (Score:1)
But still lots of people think it is the same...
Re:not 4.4 GHz (Score:1)
Maybe the public'll believe that a 20 stage pipeline is twice as fast as a 10 stage...
Your MHz may vary's gonna have a whole new meaning...
I'd like to see (Score:2)
2 x 2.2 = 4.4? How about... (Score:2)
Of course now the machine has been partitioned, so it's not quite that large, but at least there is still a "256 GHz" partition.
Keep in mind that Origin [sgi.com] is not a cluster, but a huge mother of a single-image machine. No backplane, but instead a mesh of CrayLink/NUMAlink cables interconnecting the CPU, I/O, and Router modules. My favorite part, though, is that with the addition of a "Graphics Brick" it becomes an Onyx. Add up to 16 G-Bricks!
So you could emulate Pong 102 times (Score:2)
This line is only here so that this otherwise acceptable post, passes the compression filter test.
Quake + Dual Procs (Score:1)
Now, from reading around, Quake3 was/is supposed to have support for SMP (read this slashdot article [slashdot.org]). Is this confined to the linux version or is there something I was doing wrong?
Re:Quake + Dual Procs (Score:1)
John Carmack on dual CPU's [shugashack.com]
I guess the poster will have to wait a while for Quake on a 4.4GHz computer.
Re:Quake + Dual Procs (Score:1)
ANYONE got a Kyro II video card with a dual-processor board? This should make the Kyro II board totally outperform the latest and greatest GeForce3!!!
Think about it... OpenGL calls handled by a dedicated CPU. The Kyro II would love that for sure.
Re:Quake + Dual Procs (Score:1)
Otherwise you'd just see the increase you got from a 500 to a 667.
News: (Score:5, Funny)
Re:News: (Score:2)
Remember, because Windows XP uses the Windows 2000 code base, it will take better advantage of the features of the Xeon CPU than Windows 95/98/ME, which are more optimized for Pentium II/III/Celeron CPUs.
Re:News: (Score:2)
Ditto here! I just finished booting the first beta and it's running spiffily on my P3-450.
Next I'm going to try the other three beta releases that were mailed during boot-up!
Why would you need this system? (Score:1)
Re:Why would you need this system? (Score:1)
http://www.aspenleaf.com/distributed/distrib-pr
Any one of those would be fine.
Estonia? (Score:1)
The naming of new products is getting more and more difficult. I read that Honda had launched a new compact car named Fitta. Nothing wrong with that, except it means 'cunt' in Swedish. :-)
Let's Misuse Langauge (Score:2, Funny)
"The (130-nanometer) process is ramping like a hose," said Frank Spindler, vice president of Intel's mobile products group.
What the hell does that mean? How does a hose ramp?!!
This must be the same guy who decided Pentium II made sense.
I tried to explain to my history teacher that King Henry the VIII is the same as King Henry Jr. the VI. She didn't buy it.
What a stupid name (Score:1)
Sounds like something from Flintstones 90210...
512KB Cache? (Score:1)
I hope I'm wrong here and that this is 1MB L2 cache vs 512KB L1 cache.
What's 2.2 GHz really worth (Score:1)
Uses (Score:2)
The graphics of a game will not benefit from a 2.2Ghz over a 1.4Ghz processor as most of the work is offloaded to the video card.
Same with sound.
However, artificial intelligence will have that many more cycles to use searching state-space trees. That much more time to prune and anneal its way to a better solution. More time to make more realistic artificial intelligence.
That's one thing that you CAN'T throw too many cycles at. The more cycles, the better the results. It's that simple.
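State-space search with pruning, the kind of work described here, is a good example of something that soaks up every cycle you give it. A minimal alpha-beta sketch (the toy game tree is made up for illustration, not taken from any real game AI):

```python
def alphabeta(node, alpha, beta, maximizing):
    # Minimax with alpha-beta pruning over a nested-list game tree:
    # leaves are numeric scores, interior nodes are lists of subtrees.
    if not isinstance(node, list):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:        # prune: opponent never allows this branch
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# The maximizer picks the subtree whose worst case is best.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # 3
```

Extra cycles go straight into searching deeper trees before the frame deadline, which is exactly the "more cycles, better results" point.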
Justin Dubs
Re:Uses (Score:2)
Completely true. However, ask a producer if they want to lose 10% of their bragging rights worth of frame rate or polygon count in return for an AI improvement that they won't be able to comprehend, and they'll stab you in the back and leave for 20% more stock options at another company. No, wait, first they'll tell you to forget the AI, then they'll stab you in the back...
Re:Uses (Score:2)
Then, I would think, even the clueless producers would agree.
You see, my thinking is thus:
Right now they delegate around 10% of CPU time to AI, maximum. But as graphics and sound begin to require less and less CPU time due to specialized peripheral hardware, they'll be able to delegate more and more to AI. And physics. I can't forget physics.
I guess I eventually see physics and AI being offloaded to specialized hardware as well. Some kind of state-specification meta language which can be compiled and uploaded to a state-space searching chip on specialized hardware.
Same with physics and collision-detection calculations. These would probably be even easier to offload to separate hardware. Oh, that would be heaven. Having a real, dedicated CPU to use for AI calculations. Mmmmmmm....
Justin Dubs
Aliens look out! (Score:1)
two little points. (Score:1)
1. I don't think Quake will run significantly faster on the new Intel than on my Athlon 1.4 with a GeForce 2. As somebody has already pointed out, processor speed isn't everything. So stay calm, and don't trash your new computers because of a vague promise of processing power.
2. Could it be that Intel cancelled the release because of their massive heat problems, now forgoing the new Xeon and choosing to wait for a new, probably cooler, architecture?
Dual XEON vs. Dual Athlon - Misinformed! (Score:1, Interesting)
First off, Hyper-Threading (hmm, I guess that's really two words).
It IS true that AMD's chips have a slightly faster ALU/FPU, but that's because they have a shorter execution pipe. Once Intel starts pushing the clock speed of the P4 core up to something reasonable, the loss in instructions per clock cycle will not matter, since the clock speed will be doubled. The problem with this is that morons will always compare an AMD CPU running at a much lower clock speed with a higher-clocked Intel chip and deduce that AMD CPUs are better. NOT TRUE! They're completely different architectures; the P4 is pretty much a 786 as far as I'm concerned.
As for AMD's "MP" line... DON'T BUY them! If you seriously need an SMP server/workstation and your budget only allows an x86 processor, you'll want to go with Intel. Why, you ask? Simple: they're reliable to 9 decimal places. This is especially noticeable in the way ALL of Intel's cores handle heat distribution.
No matter how hard you try, you can't get an Intel P4 to malfunction due to operating heat and although you can stop a P3, you'd be hard pressed to permanently damage the core.
There's also the issue of complex ALU/FPU operations... The P4 core can actually use SSE2 at run-time even if the code is written using plain ol' x86. And a lot of MMX instruction calls can simply be replaced with a corresponding SSE2 instruction call with NO other changes to the code.
Another thing that makes it better is the MUCH higher memory bandwidth. DDR is just plain sad compared to the throughput you can get using SSE2 and RAMBUS in concert. You can actually exceed the rated bandwidth limit for RAMBUS using both of them together (I've done it).
And finally, all Intel cores are capable of being updated/modified at boot time using microcode... This means that if Intel locates a CPU bug/bottleneck/other issue, they can essentially correct it every time you reboot. This has its limitations, but it's a far cry from AMD's solution (they have none!).
Anyway, do what you want with this...
In the end Intel has the better x86 cores, and there's always IA64 if you really have a need for a multi-processor x86-compatible server or workstation.
Terrorist tools! (Score:1)
"I would love to see... (Score:1)
by that time Quake V will be out, and it too will run like crap!
(ain't technology great!?)
(I remember my 486DX 25MHz coming to a -grinding- halt trying to render the last level/boss in Doom II when the "respawn balls" really start flying...)
Ban Faster CPUs (Score:1)
;)
Erm.. we're not far from 2.5GHz... should I panic? (Score:2)
Should I be worried?
matthew
8085 (Score:2, Insightful)
Question! (Score:2)
Ugh (Score:2)
Call me picky, but I hate this misconception.
Re:just incase no-one else has posted this info (Score:1)
Re:in no way would it be "4.4GHz computer" (Score:1)
Without it, when could we say "Beowulf?"
Re:Please inform me! (Score:2)
The P4 has proven to be really good at executing SSE2 float instructions - but is a server ever likely to execute a single SSE2 float instruction? Any floating point operations at all?
Every Intel CPU since the 386 has been tagged by both Intel and the press as for being used in servers only. It's simply a way of saying "the price is outrageous at the moment."
The P4 has also proven to have a really weak integer part.
If "really weak" means "faster than just about anything out there," then, yes, I will agree. You might mean "really weak" as a way of saying "not as good as it theoretically could be," but it's still damn fast.
Re:And people gave me a hard time... (Score:2)