
AMD Launches Fastest Phenom Yet, Phenom II X4 980

samzenpus posted more than 3 years ago | from the new-and-improved dept.

AMD

MojoKid writes "Although much of the buzz lately has revolved around AMD's upcoming Llano and Bulldozer-based APUs, AMD isn't done pushing the envelope with their existing processor designs. Over the last few months AMD has continued to ramp up frequencies on their current bread-and-butter Phenom II processor line-up to the point where they're now flirting with the 4GHz mark. The Phenom II X4 980 Black Edition marks the release of AMD's highest clocked processor yet. The new quad-core Phenom II X4 980 Black Edition's default clock on all four of its cores is 3.7GHz. Like previous Deneb-based Phenom II processors, the X4 980 BE sports a total of 512K of L1 cache with 2MB of L2 cache, and 6MB of shared L3 cache. Performance-wise, for under $200, the processor holds up pretty well versus others in its class and it's an easy upgrade for AM2+ and AM3 socket systems."


207 comments


Wait a second... (2)

mr_stinky_britches (926212) | more than 3 years ago | (#36030670)

I just bought a 6-core AMD chip a week ago. Where is the x6 version of this baby?

Re:Wait a second... (3, Informative)

Anonymous Coward | more than 3 years ago | (#36030694)

A 6-core is slower per core than a 4-core simply because of the thermal envelope.

A 6-core is superior if you need to use more than 4 cores at the same time.

Re:Wait a second... (0)

Anonymous Coward | more than 3 years ago | (#36030738)

I'm with mr_stinky_britches--I do a lot of scientific computing where I utilize all six of my cores. Besides, I have them OC'd to *at least* 3.6 GHz. So I won't be upgrading any time soon...

Re:Wait a second... (0)

Anonymous Coward | more than 3 years ago | (#36031080)

I'm with mr_stinky_britches--I do a lot of scientific computing where I utilize all six of my cores. Besides, I have them OC'd to *at least* 3.6 GHz. So I won't be upgrading any time soon...

If you do scientific computing, you should value your data and never overclock. Unless you have some independent way of validating the results, it's just not a good idea no matter how stable you think the OC is.

Re:Wait a second... (1)

Skarecrow77 (1714214) | more than 3 years ago | (#36031256)

Isn't that what Prime95 is for?

Re:Wait a second... (2, Insightful)

Anonymous Coward | more than 3 years ago | (#36031760)

Prime95, in this context, is for convincing 0v3rcl0ckz0r kiddiez that their massive overclock is stable even though it's a terrible stability test. A prime number search program is not exactly the world's best method of achieving full test coverage of a CPU, no matter what a billion leetboy forums may tell you.

Just for example, according to its webpage, prime95 only uses 32MB of memory, which means it basically runs from cache on any modern CPU. Which in turn means you're not really exercising memory access much at all. Guess what's really, really important to test if you want to know how stable your system is, especially given that modern CPUs have integrated memory controllers? (Some overclockers are more sane and only do multiplier overclocking, but the focus of most is speed at any cost and the memory gets it too, and if they rely on prime95, well... not good.)
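
Just to make "exercising memory access" concrete, here's the sort of thing I mean -- a toy cache-busting pattern test, entirely my own sketch (not Prime95, not MemTest, and nowhere near full coverage either; kill it with Ctrl-C):

<ecode>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define WORDS (64UL * 1024 * 1024)   /* 512MB of 64-bit words: far larger than any L3 cache */

int main(void)
{
    uint64_t *buf = malloc(WORDS * sizeof(uint64_t));
    if (!buf) { perror("malloc"); return 1; }

    for (unsigned pass = 0; ; pass++) {
        /* Write an address-dependent pattern across the whole buffer... */
        for (uint64_t i = 0; i < WORDS; i++)
            buf[i] = (i * 2654435761ULL) ^ pass;

        /* ...then read it all back. Any mismatch means the memory path
           (controller, bus, DIMMs) corrupted data at this clock/voltage. */
        for (uint64_t i = 0; i < WORDS; i++) {
            if (buf[i] != ((i * 2654435761ULL) ^ pass)) {
                fprintf(stderr, "corruption at word %llu on pass %u\n",
                        (unsigned long long)i, pass);
                return 1;
            }
        }
        printf("pass %u clean\n", pass);
    }
}
</ecode>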

And then there's the issue that as a program which does nothing but manipulate large integer numbers, prime95 probably isn't touching anything other than the integer ALUs. Maybe MMX/SSE if you're lucky. So huge chunks of the CPU's datapath go untested.

Another consequence of that limited memory use is that it probably doesn't thrash the TLBs much, which means that OS pagefault handlers are rarely called, which means you're not testing the stability of all the VM machinery.

I could go on. Next to no I/O or interaction with peripherals. So on and so forth. Prime95 has a reputation vastly in excess of its true usefulness.

But the real problem is this:

Say you're doing something like the GP: using your computer to do scientific calculations which have to be right. You want to overclock, but because the results matter you want to find software which can help you validate that your computer is so stable that there's no chance of a crash. Or worse: silent data corruption. (Which I've personally observed when overclocking. Not fun when you don't discover it until after it's trashed a lot of data.)

Problem is, there is literally no end-user software which is an adequate stress test for this purpose. The only known way to get that kind of reassurance is to use factory automated test equipment (ATE). ATEs don't typically run software on the CPU under test. Instead, they make use of special test mode circuitry to quickly perform direct pass/fail tests on most circuits in the chip at any desired voltage/frequency/temperature operating point.

Well designed ATE tests can cover essentially every circuit. The factory uses ATE testers both to identify rejects and bin good chips into speed grades, but they're also the only way to be truly sure that an overclock will be 100% stable.

But you can't buy factory ATEs, and you can't get them to tell you the ATE test data for your chip, beyond a guarantee that it passed at the frequency they sold it to you as.

Which is why, if you're doing work which is important, such as scientific research, you damn well shouldn't overclock no matter how safe you think it is.

Re:Wait a second... (5, Informative)

WuphonsReach (684551) | more than 3 years ago | (#36031810)

Prime95, in this context, is for convincing 0v3rcl0ckz0r kiddiez that their massive overclock is stable even though it's a terrible stability test. A prime number search program is not exactly the world's best method of achieving full test coverage of a CPU, no matter what a billion leetboy forums may tell you.

Eh... Prime95 is a darned sight better than a simple memory test, because it actually *does* stress the CPU and L1/L2 cache as well as the RAM. Plus it keeps track of whether the calculations are correct.

That's exactly the tactic you'd better take if you're going to "do scientific calculations which have to be right". You run the calculation and either you have built-in checks or you do the calculation twice, on two different machines, and compare the results. (Surprise surprise, guess how Mersenne.org checks that the turned-in results are correct?)
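
A minimal sketch of that run-it-twice tactic, assuming a deterministic workload (the hash kernel below is just a stand-in for the real calculation):

<ecode>
#include <stdio.h>
#include <stdint.h>

/* Stand-in for the real calculation: any deterministic, CPU-heavy
   kernel will do. This one hashes a long arithmetic sequence. */
static uint64_t workload(void)
{
    uint64_t h = 14695981039346656037ULL;
    for (uint64_t i = 0; i < 200000000ULL; i++) {
        h ^= i * 2654435761ULL;
        h *= 1099511628211ULL;
    }
    return h;
}

int main(void)
{
    uint64_t a = workload();
    uint64_t b = workload();   /* identical inputs: must produce identical bits */

    if (a != b) {
        fprintf(stderr, "MISMATCH: %016llx vs %016llx -- silent corruption!\n",
                (unsigned long long)a, (unsigned long long)b);
        return 1;
    }
    printf("both runs agree: %016llx\n", (unsigned long long)a);
    return 0;
}
</ecode>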

I've been using Prime95 ever since it came out. I've personally seen it find RAM that is slightly dodgy on timing where other tools like MemTest86 gave the RAM a free pass. In one case, the RAM was GEIL and was mislabeled with a faster CL value than it actually could handle (naughty GEIL, or it might have been counterfeit). Let Prime95 run for 24-48 hours with no errors, and you've got a pretty good assurance that there are no issues with the timings or the memory / CPU. (Doesn't do jack to test the disk / video, but there are other tools for that.)

Now, you complain that it's not a comprehensive tool. Have you *ever* seen a case where a CPU was bad or dodgy, Prime95 did not throw an error, and you caught the problem some other way -- something specifically wrong with the CPU / cache / RAM?

And frankly, there have always been those who think product X is a magic bullet. Your rant is misplaced.

Re:Wait a second... (3, Interesting)

Skarecrow77 (1714214) | more than 3 years ago | (#36031888)

I've had prime95 catch errors on overclocks that passed -everything- else.

You know what? In every one of those instances, it was right. If I kept running at a speed that passed everything else but failed Prime95, I'd eventually run into random errors, sudden unrepeatable crashes, or other mysterious problems.

I've never had any issues with any overclock that passed 24 hours of prime, including distributed computing projects where they'll yell at you if you're returning bad data (i.e. aren't passing the redundancy tests).

Re:Wait a second... (1, Insightful)

Anonymous Coward | more than 3 years ago | (#36031112)

Bah, the speeders are wasting their money. Cores all the way. Half a GHz doesn't make up for two cores. Once you've seen a kernel build on six cores, it's hard to go back to start-stop. Even if you are a dumb Windows user, you will enjoy the shovelware being unable to slow you down.

Re:Wait a second... (1)

Skarecrow77 (1714214) | more than 3 years ago | (#36031260)

unless you still use software which runs its main loop in a single thread, with only relatively minor tasks spun off to other threads. Like, say, just about any game on the market.

Re:Wait a second... (5, Informative)

m.dillon (147925) | more than 3 years ago | (#36031012)

The Phenom II X6 chips already run at 3.7 GHz when 3 or fewer cores are in use (that's what the automatic turbo feature does), so the 980's ability to run 4 cores at 3.7 GHz is only a minor improvement, since it basically has no turbo mode. The X6 will win for any concurrent workloads that exercise all six CPUs. Intel CPUs also sport a turbo mode that works similarly.
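
To put numbers on that, a toy model of the two clocking schemes (the X6 figures assume an 1100T at 3.3 GHz base / 3.7 GHz Turbo CORE; other models differ):

<ecode>
#include <stdio.h>

/* X6: 3.7 GHz with 3 or fewer cores busy, base clock otherwise.
   X4 980: 3.7 GHz flat, no turbo. Clocks assumed as noted above. */
static double x6_ghz(int busy)     { return busy <= 3 ? 3.7 : 3.3; }
static double x4_980_ghz(int busy) { (void)busy; return 3.7; }

int main(void)
{
    for (int busy = 1; busy <= 4; busy++)
        printf("%d core(s) busy: X6 %.1f GHz, X4 980 %.1f GHz\n",
               busy, x6_ghz(busy), x4_980_ghz(busy));
    /* ...and past 4 busy cores, the X6 is the only one still in the game. */
    return 0;
}
</ecode>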

The biggest issue w/ AMD is memory bandwidth. For some reason AMD has fallen way behind Intel in that regard. This is essentially the only reason why Intel tends to win on benchmarks.

However, you still pay a big premium for Intel, particularly Sandy Bridge chipsets, and you pay a premium for SATA-III, whereas most AMD mobos these days already give you SATA-III @ 6Gbit/s for free. Intel knows they have the edge and they are making people pay through the nose for it.

Personally speaking, the AMD Phenom II X6 is still my favorite CPU for the price/performance and wattage consumed.

-Matt

Re:Wait a second... (1)

mr_stinky_britches (926212) | more than 3 years ago | (#36031840)

+1 for you sir

Wait for Bulldozer (3, Insightful)

rwade (131726) | more than 3 years ago | (#36030676)

I'll be waiting for the dust to clear with Bulldozer before I make a commitment for my next build. No reason to buy a $200 Phenom II X4 980 now when there is no application that needs that much power. If you buy a Sandy Bridge or a higher-end AM3 board/processor now, your average gamer or office worker won't be able to max it out for years -- unless he does video editing or extensive Photoshop work, or has to get his DVD rips down to a 10-minute rip instead of a 15-minute rip per feature film...

Might as well wait for the dust to clear or for prices to fall.

Re:Wait for Bulldozer (2)

mr_stinky_britches (926212) | more than 3 years ago | (#36030706)

I don't know, man, $200 sounds like a steal. Last time I checked, those Intel i3's and i5's were in the same range! We live in some crazy times IMO.

Re:Wait for Bulldozer (1)

rwade (131726) | more than 3 years ago | (#36030724)

If $200 is a steal today, wouldn't $150 be a better deal 4 months from now? Saving 25% on one of the most expensive components of a computer that I'll have for three or four years seems like a worthwhile bargain for waiting a few months...

Re:Wait for Bulldozer (2)

uncanny (954868) | more than 3 years ago | (#36030970)

well hell, you could just get a p4 chip for like $20, oh the savings!

Re:Wait for Bulldozer (2)

LordLimecat (1103839) | more than 3 years ago | (#36031352)

It has hyperthreading goodness and mmm have you ever had eggs fried on your processor?

Re:Wait for Bulldozer (0)

tomhudson (43916) | more than 3 years ago | (#36030834)

The i5 spanked the X4, for only $10 more. This actually should be an ad for Intel (disclaimer - posted from my AMD laptop).

Re:Wait for Bulldozer (2)

mr_stinky_britches (926212) | more than 3 years ago | (#36030852)

I doubt it spanks this X4. Lies.

Re:Wait for Bulldozer (1)

Runefox (905204) | more than 3 years ago | (#36030918)

It doesn't keep up with the Sandy 2500K, which can be had for around $200, depending on where you look. Then again, as the article suggests, it's better for an upgrade than a new system, so existing AMD users should be able to appreciate some tangible gains. Too late for me, though, I recently jumped ship from an Athlon X2 6000+ to the Sandy 2500K and I'm blown away by the performance.

Re:Wait for Bulldozer (1)

mr_stinky_britches (926212) | more than 3 years ago | (#36030984)

Who does a CPU-only upgrade these days? I think most people just wait until it's time for a [mostly] whole new machine.

Though I don't know many people IRL who still build their own systems... I kind of get a sick satisfaction from it though, so I will continue to do so :)

AAAAanyways, cheers.

Re:Wait for Bulldozer (1)

ShakaUVM (157947) | more than 3 years ago | (#36031064)

>>Who does a CPU-only upgrade these days?

Well, AMD is better about keeping sockets around longer than Intel, which seems to go through new ones every six weeks.

My last machine was built in Dec 2004; I upgraded the CPU (to an AMD X2 4800+) in Jan 2007, which gave it enough life to make it to April 2011, when I upgraded to Sandy Bridge.

Was easily the longest I've ever used the same motherboard, though admittedly it was on the leading edge of PCI-E and other features.

Re:Wait for Bulldozer (1)

Hydian (904114) | more than 3 years ago | (#36031222)

Nobody on Intel with their new sockets every year or so, that's for sure.

I upgraded my AMD CPU not too long ago. No different than upgrading your video card.

Re:Wait for Bulldozer (0)

Anonymous Coward | more than 3 years ago | (#36031386)

I did.

I upgraded an AM2 2800+ X2 chip to a Phenom AM3 820 X4 for $100, keeping the AM2 motherboard and RAM. Considering I was maxing out the X2 on a consistent basis (it was starting to become a bottleneck), the 2 extra cores and 50% speed boost per core are a very nice upgrade. I was mostly looking for more throughput (i.e. cores!).

The 3-year-old machine got upgraded and will now probably run until it dies.

Re:Wait for Bulldozer (1)

WuphonsReach (684551) | more than 3 years ago | (#36031906)

Nah, I never upgrade just the CPU without also upgrading the MB/RAM too. If you do all three at the same time, you end up with a MB/CPU/RAM that can be re-purposed for other things (or a less demanding user). If you upgrade one thing at a time, you're left with a pile of spare parts that is basically worthless. Plus, memory types have changed so often that you basically have to upgrade CPU/MB/RAM at the same time anyway.

Most of the time, unless you went *super* cheap on the initial CPU purchase, the most power you'd get from an upgrade in the last few years is a measly 20-30%. My 2.5GHz Phenom II X4 is not so far behind newer (under $200) CPUs that it's worth throwing it out and dropping in a new one.

And if I was considering upgrading to say a 3.2GHz Hex-core, I'd want a new motherboard anyway to take advantage of USB 3.0. And probably DDR3 RAM instead of DDR2.

Re:Wait for Bulldozer (0)

tomhudson (43916) | more than 3 years ago | (#36031334)

Read the article - the i5-2500 beat it in every test, sometimes by as much as 50%, while using only 2/3 as much electricity. Spend the extra $10-$25 for the i5 - it'll pay for itself in energy savings in less than a year, and you'll have a faster machine.

Re:Wait for Bulldozer (1, Troll)

Antisyzygy (1495469) | more than 3 years ago | (#36031598)

You'll also be paying your dollar to a company that routinely did things worthy of anti-trust lawsuits, effectively screwed Nvidia, and routinely cannot perform worth a shit in their own graphics technology.

Re:Wait for Bulldozer (3, Insightful)

gregrah (1605707) | more than 3 years ago | (#36031724)

Those power consumption benchmarks look a little suspect to me. I've got a Phenom II 720 (3 cores @ 2.8 GHZ) with 95 watt TDP, and the total system consumption at idle is about 65 watts. I'm not sure how they are managing to pull down almost double that with an Athlon II (also a 95 watt CPU) in the test system they used - unless a) they turned off the power management settings in the BIOS, or b) they are using some ridiculous 1000W PSU that is totally inefficient at lower loads.

Anyway - assuming that I leave my machine running for 8 hours a day on average, and the overwhelming majority of the time the CPU is at near-idle loads (i.e. consuming 65W), with electricity costing about $0.12 per kWh, I figure that it probably costs me about $24 per year in electricity. If I could shave off 1/3 of the electricity cost, I would only be saving $8 a year. After the 3 years that it takes me to make up that $25 difference, I'm probably in need of a new CPU anyway.

Also - while I haven't spent much time pricing motherboards recently - when I last checked I found that AMD motherboards tend to be cheaper than Intel motherboards, and also that AMD integrated graphics were considerably stronger, allowing me to get by without a discrete graphics card. Furthermore, if I wanted to upgrade my CPU now to the latest and greatest, I could do so without replacing my motherboard and buying new memory - whereas if I had bought an LGA 1156 motherboard a year ago it would now be obsolete.

In other words - I agree with you that with Intel you'll have a faster and more power efficient machine, but I'm not so sure that you'll end up saving any money.

Re:Wait for Bulldozer (1)

jedidiah (1196) | more than 3 years ago | (#36031056)

Yeah, but what about heat? My last Intel CPU ran like a toaster oven and had a fan mount that was very awkward to deal with.

Re:Wait for Bulldozer (1)

nanoflower (1077145) | more than 3 years ago | (#36031258)

That hasn't been a problem since the Core 2 processors started coming out. Those old P4s were barn burners (as in they could literally cause a barn to burn down) but the Core 2 duos and Sandy Bridge run fairly cool.

Re:Wait for Bulldozer (1)

rhook (943951) | more than 3 years ago | (#36031666)

I wouldn't say the Core 2 Duo is a cool running chip. I can show you a laptop that has extensive warping on the bottom due to the heat produced by a T7400.

Re:Wait for Bulldozer (2, Informative)

tomhudson (43916) | more than 3 years ago | (#36031298)

The i5-2500 is not only faster, it uses a LOT less electricity [hothardware.com]. The Phenom uses slightly more than 50% more electricity:

X4 - 157 to 252 watts
i5-2500 - 91 to 164 watts.

In other words, it will cost between $20 (normal use, cheap electricity) and $140 (24/7, expensive electricity) per year extra. Spending the extra $10 to get the faster i5 is a no-brainer.
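
For what it's worth, here's the arithmetic behind that range, with my own guesses for usage hours and electricity rates plugged in (only the watt figures above come from the review):

<ecode>
#include <stdio.h>

int main(void)
{
    /* System power deltas from the figures above (idle and load). */
    double delta_idle = 157.0 - 91.0;    /* 66 W */
    double delta_load = 252.0 - 164.0;   /* 88 W */

    /* "Normal use, cheap electricity": ~8 h/day at ~$0.10/kWh (my guess). */
    double low  = delta_idle * 8  * 365 / 1000.0 * 0.10;
    /* "24/7, expensive electricity": flat out at ~$0.18/kWh (also a guess). */
    double high = delta_load * 24 * 365 / 1000.0 * 0.18;

    printf("extra cost per year: $%.0f to $%.0f\n", low, high);  /* ~$19 to ~$139 */
    return 0;
}
</ecode>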

Re:Wait for Bulldozer (0, Flamebait)

Antisyzygy (1495469) | more than 3 years ago | (#36031636)

Depends on your principles. A) Preferring to pay more to lower your energy usage, or B) not being willing to pay a company your dollar for doing things more than worthy of anti-trust investigations and/or lawsuits on a regular basis. Over a year, 140 dollars is nothing, even for someone in poverty (which I am, with less than 15000 a year). It's also a small pittance of a fraction of the kWh being used by unclean energy. Most of the unclean energy you use comes from the products you consume, including gas, plastics and food. There has to be something you dropped money on this year worth more than 140 dollars that was actually more useless than giving your 140 to the electric company over choosing AMD. Mine would probably be beer.

Re:Wait for Bulldozer (4, Informative)

m.dillon (147925) | more than 3 years ago | (#36031070)

Well, also remember that Intel has something like 6 (or more) different incompatible CPU socket types in its lineup now, which means you have no real ability to upgrade in place.

AMD is all AM2+ and AM3, and all current CPUs are AM3. This socket format has been around for several years. For example, I was able to upgrade all of my old AM2+ Phenom I boxes to Phenom II simply by replacing the CPU, and I can throw any CPU in AMD's lineup into my AM3 mobos. I only have one Phenom II X6 machine right now, but at least four of my boxes can accept that chip. That's a lot of upgrade potential on the cheap.

This will change; AMD can't stick with the AM3 form factor forever (I think the next gen will in fact change the socket), but generally speaking AMD has done a much better job on hardware longevity than Intel has. It isn't just a matter of the price of the CPU. I've saved thousands of dollars over the last few years by sticking with AMD.

SATA-III also matters a lot for a server now that SATA-III SSDs are in mass production. Now a single SSD can push 300-500 MBytes/sec of effectively random I/O out the door without having to resort to non-portable/custom-driver/premium-priced PCIe flash cards. Servers can easily keep gigabit pipes full now and are rapidly approaching 10GigE from storage all the way to the network.

-Matt

Re:Wait for Bulldozer (2, Insightful)

Anonymous Coward | more than 3 years ago | (#36031442)

This will change, AMD can't stick with the AM3 form factor forever (I think the next gen will in fact change the socket), but generally speaking AMD has done a much better job on hardware longevity than Intel has.

Oh yes, AMD is wonderful about keeping sockets around for a long time. The move to 64-bit CPUs only involved four sockets (754, 939, 940, AM2), three of which (754, 939, 940) were outdated in short order.

Re:Wait for Bulldozer (1)

WuphonsReach (684551) | more than 3 years ago | (#36031886)

754 was the budget socket. No bets there. If you bought a 754-based system and expected upgrades, you did not do your homework. Budget systems are designed for people who buy a cheap machine and treat it like a black box.

939 was the single-CPU version, 940 was the dual-CPU setup and no CPU that fit in those sockets supported DDR2. Unlike Intel's chips at the time, AMD's memory controllers were *inside* the CPU. To support DDR2, they had to break compatibility at the socket level due to electrical / circuitry issues. Maybe it was a bit of short-sightedness not planning ahead to allow 939/940 sockets to talk to DDR2 memory, but on the flip side, having the memory controller inside the CPU sped things up a lot. But there was also a lot of warning about the coming socket change (I still have a dual-940 Opteron running) and the move to new memory was going to require a new motherboard anyway.

So, the first real socket swap was the move to AM2 so that they could support DDR2. Then came AM2+ and then AM3. And there's some possible mix-and-match between the sockets and CPUs. Mostly it depends on what type of memory the motherboard supports and whether the CPU supports that type of memory (the controller is still inside the CPU).

The other side of the issue is "who the frick actually only upgrades a CPU these days"? CPU/MB/RAM have always been tightly bound, and the base-speed CPU for $X is generally no less than 30-40% slower than the top-end CPU that will fit into the motherboard (and that ratio keeps shrinking). And if you do swap out the CPU, you're left with a $100 paperweight which will be a PITA to offload at an auction site. Better to spend the extra $50 when you purchase the initial machine to get the fastest CPU before the price/performance curve takes a sharp bend. The only upgrade that has made sense for a while is to fill only half the RAM slots at the start, then add more RAM later. Then you're at least not left with obsolete parts sitting around, clogging up drawers or inventory.

Re:Wait for Bulldozer (2)

elashish14 (1302231) | more than 3 years ago | (#36031744)

Bulldozer will be AM3+ but it has very good forwards and backwards compatibility with AM3. http://en.wikipedia.org/wiki/AM3%2B [wikipedia.org]

Re:Wait for Bulldozer (2)

petteyg359 (1847514) | more than 3 years ago | (#36030722)

rwade said:
there is no application that needs that much power

So, just because you don't plan on buying it means that a significant portion of software simply doesn't exist? I think your logic is broken; you should look for a new one.

Re:Wait for Bulldozer (1)

rwade (131726) | more than 3 years ago | (#36030740)

I'm not saying don't buy the power -- I think that's pretty obvious from even a careless reading of my comment.

But to clarify for you -- I'm saying, don't buy the power today for applications not available today because you can get the same amount of power in a few months for a 25% discount. Even then, the applications will probably not be there yet...

Re:Wait for Bulldozer (1)

MoonBuggy (611105) | more than 3 years ago | (#36030948)

The chip doesn't cost that much in the first place, though. I'm not sure I'd go as far as to estimate 25% off in a few months anyway, but even if that is the drop we see, $50 is not an especially significant amount of cash to most people (not the ones in the market for a fairly high-end new machine, anyway). When I see people running out to buy $1200 'extreme edition' chips, I certainly wonder whether they need that extra few percent in performance enough to justify adding the price of an entire high-end laptop to the build, but when you're talking about price differences that would only pay for a copy of Portal 2, a bit of future-proofing doesn't seem too big a waste.

Re:Wait for Bulldozer (3, Insightful)

swb (14022) | more than 3 years ago | (#36031306)

My sense is that people who actually *use* a computer also install dozens of applications and end up with complicated and highly tailored system configurations that are time consuming to get right and time consuming to recreate on a new system.

The effort to switch to a new system tends to outweigh the performance improvement and nobody does it until the performance improvement makes it really worthwhile (say, Q6600 to a new i5 or i7).

I've found that because I end up maintaining a system for a longer period, it pays to buy power today for applications very likely to need or use it in the lifetime of the machine. Avoid premature obsolescence.

Re:Wait for Bulldozer (1)

cynyr (703126) | more than 3 years ago | (#36031650)

libx264 seems to do a good job of using up all the CPU I can throw at it (a 1055T X6 now). Emerge does a decent job as well.

Re:Wait for Bulldozer (2)

the linux geek (799780) | more than 3 years ago | (#36030760)

Bulldozer is looking increasingly underpowered compared to Sandy Bridge, with some benchmarks indicating potentially worse performance per cycle than the existing K10.5 core.

This [arstechnica.com] thread has some interesting information on possible BD performance.

Uh...this is 301 posts of Intel fans vs AMD fan (3, Insightful)

rwade (131726) | more than 3 years ago | (#36030812)

This [arstechnica.com] thread has some interesting information on possible BD performance.

.....

This is 301 posts of back-and-forth that looks to be basically speculation. Prove me wrong by quoting specific statements from people who have benched the [unreleased] Bulldozer. Because otherwise, this link is basically a bunch of AMD fanboys fighting against Intel fanboys. But prove me wrong...

Re:Uh...this is 301 posts of Intel fans vs AMD fan (0, Troll)

mr_stinky_britches (926212) | more than 3 years ago | (#36030866)

TL:DR

cool story bro!

Re:Uh...this is 301 posts of Intel fans vs AMD fan (2)

rwade (131726) | more than 3 years ago | (#36030922)

The guy's trying to prove a point that Bulldozer -- which is, again, unreleased -- is looking underpowered, and he's doing it by pointing to a message board full of fanboy speculation. It's 8 pages of posts. I'm basically calling BS on the guy's suggestion that BD is looking underpowered -- frankly, no one but AMD knows anything about BD's performance.
No.
One.
At.
All.

It is all speculation...based on what? There's all this crap in here about AMD being the reason that we're not still using 2GHz Pentium4s, blah, blah, blah.

Re:Uh...this is 301 posts of Intel fans vs AMD fan (1)

the linux geek (799780) | more than 3 years ago | (#36030964)

A PCMark bench is cited that shows a ~15% increase over the X6 while adding two cores (33%). That implies BD has either a lower clock speed or lower instructions-per-cycle.

I fully acknowledge that this is largely rooted in speculation at this point, but what's come out so far isn't encouraging - the openbenchmarking results based on an engineering sample, for instance, showed performance that was not substantially improved from the Magny-Cours processor.

Re:Uh...this is 301 posts of Intel fans vs AMD fan (1)

DurendalMac (736637) | more than 3 years ago | (#36030980)

It would be an engineering sample, which typically has much lower clocks than what ships out in the final product. We'll have to wait and see.

Re:Wait for Bulldozer (1)

phizi0n (1237812) | more than 3 years ago | (#36031004)

I've bought exclusively AMD CPUs for the past decade because they have good budget models, and I'm looking forward to seeing what Bulldozer can do, but I'm curious about the FPU performance since they are cutting the number of FPUs in half. Bulldozer seems like an architecture targeted at servers and virtual machines, upping the ALU count while cutting the FPU count.

Re:Wait for Bulldozer (4, Insightful)

ShakaUVM (157947) | more than 3 years ago | (#36030840)

>>I'll be waiting for the dust to clear with Bulldozer before I make a commitment for my next build.

I agree. The Phenom II line is just grossly underpowered compared to Sandy Bridge:
http://www.anandtech.com/bench/Product/288?vs=362 [anandtech.com]

The i5 2500K is in the same price range, but is substantially faster. Bulldozer ought to even out the field a bit, but then Intel will strike back with their shark-fin Boba FETs or whatever (I didn't pay much attention to the earlier article on 3D transistors.)

And then on the high-ish end, AMD has nothing to compete against the i7 2600K. And it's not really that much more expensive (+$100) for the 15% extra gain in performance. It's not like their traditional $1000 high end offerings.

Re:Wait for Bulldozer (1)

Anonymous Coward | more than 3 years ago | (#36031204)

And then on the high-ish end, AMD has nothing to compete against the i7 2600K. And it's not really that much more expensive (+$100) for the 15% extra gain in performance. It's not like their traditional $1000 high end offerings.

That's mostly because it's not their high end offering; the true high end processor in the Sandy Bridge generation isn't on sale yet. Intel uses sockets to discriminate between lowend/mainstream and high end CPUs. The high end sockets have more memory and IO bandwidth than the mainstream.

The sockets for Sandy Bridge family CPUs are:
Socket LGA1155 = mainstream i3/i5/i7
Socket LGA2011 = high end i7

The i7-2600K is a LGA1155 processor.

LGA2011 CPUs will be available sometime this year. LGA2011 has four DDR3 memory channels instead of two, QPI system interconnect instead of DMI (faster), and 32 lanes of PCI Express for dual X16 graphics slots (1155 has 16 lanes). The CPUs will have four or six cores, up to 15MB L3 cache, and no integrated graphics (unlike the LGA1155 SB models). The $1K "extreme" model should have 6 cores at 3.3 GHz with up to 3.9 GHz turbo, according to Wikipedia.

And that's AMD's problem in a nutshell... they can't charge much over $200 for any of their CPUs because they can't match the best of Intel's mainstream products, let alone Intel's high end.

Re:Wait for Bulldozer (4, Interesting)

Kjella (173770) | more than 3 years ago | (#36031270)

And then on the high-ish end, AMD has nothing to compete against the i7 2600K. And it's not really that much more expensive (+$100) for the 15% extra gain in performance. It's not like their traditional $1000 high end offerings.

Intel essentially skipped a cycle on the high end because they were completely uncontested anyway. The last high-end socket was LGA 1366, then we've had two midrange sockets in a row with LGA 1156 and LGA 1155. Late this year we'll finally see LGA 2011, the high end Sandy Bridge. Expect another round of $999 extreme edition processors then - with six cores, reportedly.

Re:Wait for Bulldozer (1)

kevinmenzel (1403457) | more than 3 years ago | (#36030916)

Believe me, there are applications that can make good practical use of that much power. Go record an orchestra in 24/192 and get back to me on how nothing needs something like that.

Re:Wait for Bulldozer (1)

WuphonsReach (684551) | more than 3 years ago | (#36031830)

No reason to buy a $200 Phenom II X4 980 now when there is no application that needs that much power.

Wow, such a narrow world view.

There are a lot of applications out there where single-core speed matters. And $200 is chump change for a CPU that is at the upper end of the speed range. It wasn't that many years ago that a *dual* core CPU was considered affordable once they dropped below $300 (and it was a happy day when they got below $200).

And no, you wouldn't buy this for the average office worker who only does word processing. You use powerful CPUs for the workers who need the raw CPU power in order to get their jobs done (developers, database admins, simulations, modeling, etc.). Then you move their 18-24 month old machines (which are probably dual/quad core) to the regular office workers.

(And for the killer application that needs that much single-core speed? Dwarf Fortress. Or any other application that is single-threaded and consumes lots of CPU power.)

Weird Benchmarks: Crysis at 800x600 resolution??? (0)

rwade (131726) | more than 3 years ago | (#36030708)

Anyone notice the weird benchmarks TFA uses for the gaming performance evaluation? TFA [hothardware.com] compares several processors against the X4 980 by running a pair of games at low resolution with minimal quality settings to "isolate out the graphics card":

we drop the resolution to 800x600, and reduce all of the in-game graphical options to their minimum values to isolate CPU and memory performance as much as possible. However, the in-game effects, which control the level of detail for the games' physics engines and particle systems, are left at their maximum values, since these actually do place some load on the CPU rather than GPU.

I find it hard to believe that the guys at "hothardware.com" know enough about 3d game architecture to have any understanding of what places a load on the CPU and what places a load on the GPU. Anyone have any thoughts?

Re:Weird Benchmarks: Crysis at 800x600 resolution (0)

Anonymous Coward | more than 3 years ago | (#36030748)

It's pretty well known that most games max out the GPU at higher resolutions.

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

rwade (131726) | more than 3 years ago | (#36030776)

But is it also well known that increasing resolutions and knocking off all the effects does nothing to the CPU performance?

Re:Weird Benchmarks: Crysis at 800x600 resolution (3, Interesting)

Mia'cova (691309) | more than 3 years ago | (#36030942)

Once the GPU is maxed-out, there's nothing more for the CPU to do. If you're running at 30 FPS at high-res, the CPU might be at 30%. At that point, any number of different CPUs will have identical benchmark results. When you drop the load off the GPU, the CPU hits 100% usage and you can compare 150 fps to 160 fps, for example. This is a very simple and typical way to benchmark CPUs for gaming perf. Reviews and reviewers (such as myself) have been doing this for 10+ years, since the very first 3D accelerators came to the gaming market.
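
The 150-vs-160 example drops straight out of a toy max(cpu, gpu) frame model; all the millisecond figures below are invented:

<ecode>
#include <stdio.h>

/* Toy model of a game frame: CPU and GPU work overlap, so the frame
   rate is set by whichever side is slower. */
static double fps(double cpu_ms, double gpu_ms)
{
    double frame = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
    return 1000.0 / frame;
}

int main(void)
{
    /* High res: the GPU needs 33 ms/frame, so CPUs taking 6.2 ms and
       6.7 ms per frame look identical -- both "benchmark" at 30 fps. */
    printf("1920x1200: cpuA %3.0f fps, cpuB %3.0f fps\n",
           fps(6.2, 33.0), fps(6.7, 33.0));

    /* 800x600: GPU time collapses to ~3 ms and the CPU gap finally shows. */
    printf(" 800x600:  cpuA %3.0f fps, cpuB %3.0f fps\n",
           fps(6.2, 3.0), fps(6.7, 3.0));
    return 0;
}
</ecode>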

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

wagnerrp (1305589) | more than 3 years ago | (#36031052)

Who cares? All you're concerned about is removing the video card as a factor in the benchmark. It gives you a baseline from which to compare one processor to another, even if the absolute performance values themselves are otherwise meaningless. That's the entire concept of a synthetic benchmark.

Re:Weird Benchmarks: Crysis at 800x600 resolution (0)

Anonymous Coward | more than 3 years ago | (#36030820)

AFAICT, the thinking is that running a resource intensive game at GPU-indifferent settings will allow you to focus on CPU usage.

(fwiw i'm not into benchmarking/overclocking, but I think this is how the benchmark is to be interpreted - which I think is a fairly standard way of doing things for a CPU)

Re:Weird Benchmarks: Crysis at 800x600 resolution (2)

Anthony Mouse (1927662) | more than 3 years ago | (#36030974)

That seems like a stupid way to benchmark. It encourages people to be misinformed by thinking that they can get better frame rates by buying a faster CPU even though under real world conditions the game will be GPU bound and the CPU is irrelevant. Why not stick to benchmarking using applications that are actually CPU bound under normal usage?

Re:Weird Benchmarks: Crysis at 800x600 resolution (2)

smash (1351) | more than 3 years ago | (#36031160)

Not stupid at all. It shows that if your video card is not a factor, or you upgrade to an adequate video card when one is available, the better CPU to buy is X.

This is common practice that has been used for at least a decade now.

Furthermore -- it's a SYNTHETIC BENCHMARK. No one wants to play Crysis at 800x600, but it's a tool we can use to measure CPU performance. No one wants to buy a PC solely to sit and calculate prime numbers all day either, but StressPrime and other stuff is used for benchmarking too.

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

Anthony Mouse (1927662) | more than 3 years ago | (#36031312)

Not stupid at all. It shows that if your video is not a factor or you upgrade to an adequate video card when one is available,the better cpu to buy is X.

Assuming that GPUs are available that are so fast they move the bottleneck back to the CPU, the way to do it would then be to use one of those GPUs and use normal resolutions. If even the fastest GPUs still result in the GPU as the bottleneck, just find a different benchmark. There is no lack of synthetic benchmarks that someone could use without misleading people into thinking they need a super fast CPU for a heavily GPU-bound game.

This is especially true now that integrated GPUs are becoming respectable. Someone on a budget may be very interested to know whether AMD CPU + integrated AMD GPU is faster than Intel CPU + integrated Intel GPU, but you don't get an accurate picture if you skew the settings to make it CPU bound.

Re:Weird Benchmarks: Crysis at 800x600 resolution (2)

smash (1351) | more than 3 years ago | (#36031864)

Well yes, if such GPUs were available today, sure. They're not. However, the gaming benchmark is not useless, because it's a real-world mix of code in a typical app the CPU might be used to run. You're looking at the benchmarks expecting them to be some absolute result. They're not. You have to use your head and interpret the results, as with any experiment. A benchmark isn't a "you need to buy this CPU for this game" statement. It's a performance indication on a particular code path.

Synthetic benchmarks might give you a number to compare CPUs with, but if they're not doing real-world tasks (or even better, the exact tasks you intend to use the box for, such as running a game if you're a gamer) they are easily cheated on -- the CPU vendor can simply spend transistors on optimizing for the instructions most commonly used in the benchmark.

In summary: without thoughtful analysis, all benchmark results are useless. Also, no one benchmark should be taken as "authoritative". Compare multiple benchmarks, pay special attention to those that run code/apps similar to what you will be running, and make your choice that way.

Don't rely on a single number to do it for you.

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

0123456 (636235) | more than 3 years ago | (#36031296)

That seems like a stupid way to benchmark.

Why would you use a GPU-limited benchmark when comparing performance of different CPUs? That would be retarded.

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

Anthony Mouse (1927662) | more than 3 years ago | (#36031384)

That's the point. Find the thing people actually do which is CPU-bound, don't just fudge a GPU-limited thing until you get different numbers for different CPUs.

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

0123456 (636235) | more than 3 years ago | (#36031468)

That's the point. Find the thing people actually do which is CPU-bound, don't just fudge a GPU-limited thing until you get different numbers for different CPUs.

Nothing that the average person does is CPU-bound if they have a fast CPU; most of the time it will be idling. The closest they'll get is gaming when not GPU-bound, which they may not do today, but they will when they replace their GPU in two years.

Ultimately if you want the fastest CPU then you want to run things that are CPU-bound. If you just play crappy console game ports and run them so they're GPU-bound then you'll do fine with a dual-core in most cases.

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

sheddd (592499) | more than 3 years ago | (#36031754)

Generally true; handbrake is the only app I routinely use that maxes out all 8 cores on my box.

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

AdamHaun (43173) | more than 3 years ago | (#36030874)

It's not uncommon. Other benchmarking sites do it too.

Re:Weird Benchmarks: Crysis at 800x600 resolution (1)

myoparo (933550) | more than 3 years ago | (#36030968)

For testing a CPU in a video game, it's traditional to run the benchmark at a low resolution in order to help ensure that the CPU is the bottleneck and not the graphics card. Compared to the processor, more strain is placed on the graphics card/GPU as the resolution is increased.

It has been this way for a very, very long time.

Re:Weird Benchmarks: Crysis at 800x600 resolution (0)

Anonymous Coward | more than 3 years ago | (#36031628)

It has been standard practice for the 10+ years I've looked at benchmarks. And it works.
e.g. Using the same CPU, the differences between two video cards get smaller as you drop the resolution.

wireless toad (0)

Anonymous Coward | more than 3 years ago | (#36030732)

the toads are wireless! i repeat, the toads are wireless!

Today, the complexity of numbering continues... (1)

Super Dave Osbourne (688888) | more than 3 years ago | (#36030850)

Ten years ago it was confusing enough, with what could be seen as reliable AMD parts at 1.4 GHz here and 1.23 GHz there, followed by name changes to meaningless marketing numbers and names. So I'll stay ignorant, simply ignore these 'breakthrough' numbers, and buy product instead of specifications.

Re:Today, the complexity of numbering continues... (1)

myoparo (933550) | more than 3 years ago | (#36030996)

I agree -- the numbering and naming schemes in use nowadays are ridiculous and sometimes hard to decipher. In fact, ever since they stopped putting the clock speed next to the processor name, it's been confusing.

It's too bad we can't revert to the old usage where it's just the processor name + clock speed... with the addition of how many cores. Yes, it was never a perfect system (not all makes of processor have equal performance at a given frequency), but it sure as hell was better than how they do it now.

Re:Today, the complexity of numbering continues... (5, Informative)

gman003 (1693318) | more than 3 years ago | (#36031420)

I won't talk about Intel's system, but AMD is actually relatively straightforward:

First comes the family name. For desktops, this is usually either "Athlon II" or "Phenom II". The only real difference between them is the amount of cache.

Then comes the core count - X2, X3, X4 or X6. Completely self-explanatory.

This is followed by a number that essentially stands in for the clock speed. Higher-clocked processors have higher numbers, lower-clocked processors have lower numbers.

Finally, certain processors have "Black Edition" appended, which simply means that the multiplier is unlocked, greatly easing overclocking.
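
Applied to the chip in TFA, the scheme decodes like this (a toy illustration of the fields, not any real tool or official data format):

<ecode>
#include <stdio.h>

struct amd_name {
    const char *family;   /* cache tier: "Athlon II" vs "Phenom II"    */
    int cores;            /* the X-number                              */
    int model;            /* stand-in for clock speed; higher = faster */
    int black_edition;    /* 1 = unlocked multiplier                   */
};

int main(void)
{
    struct amd_name chip = { "Phenom II", 4, 980, 1 };

    printf("%s X%d %d%s: %d cores, %s multiplier\n",
           chip.family, chip.cores, chip.model,
           chip.black_edition ? " BE" : "",
           chip.cores, chip.black_edition ? "unlocked" : "locked");
    return 0;
}
</ecode>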

Re:Today, the complexity of numbering continues... (1)

myoparo (933550) | more than 3 years ago | (#36031832)

That's not too bad, actually -- but why have a number that stands in for the clock speed instead of just using the clock speed itself? It makes comparing CPUs of different brands difficult, because there is absolutely no correlation between the "stand-in" numbers.

At least when things were always done in MHz, it was relatively easy to approximate how fast two chips were compared to one another, within the same family line or even amongst different manufacturers, provided you were at least somewhat familiar with the performance of the product lines in question.

Are the stand-in numbers of today just some fancy marketing gimmick, or do they really have some deep-down meaning? I guess in the end it doesn't matter too much, as long as there are hardware review sites to point people in the right direction. Still, not having to look everything up all the time would be nice.

3700 megahertz? (1)

cpu6502 (1960974) | more than 3 years ago | (#36030966)

Bah.

My ten-year-old CPU does 3100 megahertz. Things have slowed dramatically since the 80s and 90s, when speeds increased from approximately 8 megahertz (1980) to 50 megahertz (1990) to 2000 megahertz (2000). If the scaling had continued, we would be at ~20,000 by now. Oh well. So much for Moore's Observation.

Re:3700 megahertz? (1)

Anonymous Coward | more than 3 years ago | (#36030992)

Moore's Law is about the number/price of transistors, not clock speed.

Re:3700 megahertz? (4, Insightful)

DurendalMac (736637) | more than 3 years ago | (#36031008)

So clock speed means everything when comparing different CPUs and not their raw performance. Got it.

Furthermore, there is no 10-year-old CPU that runs at 3GHz unless you did some absurd overclocking.

Re:3700 megahertz? (3, Interesting)

timeOday (582209) | more than 3 years ago | (#36031736)

So clock speed means everything when comparing different CPUs and not their raw performance. Got it.

Not exactly, but close for single-core performance. The "MHz Myth" is largely a myth itself. As this table [theknack.net] shows, per-MHz single-core performance between the infamously bad (even at the time) P4 and the current best (Core i7) has improved only by a factor of less than 2.6 since October 2004 (when the Pentium 3.6 EE was released)!

Perhaps more importantly, the ratio between the most productive (per-MHz) chip from 2004 (Athlon64 2.6) and the most productive on the chart now is a mere 1.6! That's a 60% improvement in almost 7 years!

That is a joke. For reference, we went from the Pentium 100 (March 1994) to the Pentium 200 (June 1996) - approximately a 100% improvement in a little over 2 years.

So, no, improvements in instructions per cycle are not even close to keeping pace with what improvements in MHz used to give us. (And if you looked at instructions per cycle per transistor, it would be abysmal - which is another way of saying Moore's law is hardly helping single-threaded performance any more).
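
Annualizing those two ratios, just to compare like with like (back-of-the-envelope only; compile with -lm):

<ecode>
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* 1.6x per-MHz productivity over ~7 years, vs the P100 -> P200
       clock doubling over ~2.25 years, as compound annual rates. */
    double recent = pow(1.6, 1.0 / 7.0)  - 1.0;   /* ~0.07 */
    double golden = pow(2.0, 1.0 / 2.25) - 1.0;   /* ~0.36 */

    printf("2004-2011 per-MHz gain: ~%.0f%% per year\n", recent * 100);
    printf("1994-1996 clock gain:   ~%.0f%% per year\n", golden * 100);
    return 0;
}
</ecode>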

Re:3700 megahertz? (2)

AdamHaun (43173) | more than 3 years ago | (#36031850)

I'm sure you didn't mean it quite this way, but a 60% improvement in the amount of work done per clock cycle is some pretty impressive engineering...

Re:3700 megahertz? (2)

timeOday (582209) | more than 3 years ago | (#36031870)

Well, that's the problem... since hitting the MHz wall, it's taking more and more heroic efforts to achieve any speedup in single-core performance. (In fact if I'm not mistaken, the most-productive-per-cycle core on that chart is a couple years old.) But I agree, it's not that engineers are getting dumber or anything like that. It's getting harder, and progress has become slow.

Re:3700 megahertz? (3, Insightful)

smash (1351) | more than 3 years ago | (#36031898)

Don't forget that IPC isn't the be-all and end-all. If you're stalled due to cache misses, then IPC goes out the window. Modern CPUs have much more cache and much faster buses to main memory than we had in 2004. That is a large reason why they're faster. They also have additional instructions that can do more work per instruction -- so comparing IPC from CPUs released today to CPUs released last decade is even more meaningless.

Re:3700 megahertz? (0)

Anonymous Coward | more than 3 years ago | (#36031016)

Pretty sure it was half of that in the year 2000. Then multiply 3700 x 4 and we're scaling pretty well still.

Re:3700 megahertz? (1)

rubycodez (864176) | more than 3 years ago | (#36031250)

Er, your 2.0GHz P4 of August 2001 is overclocked to 3.1 GHz? Or are you just confused and babbling?

Re:3700 megahertz? (4, Informative)

LordLimecat (1103839) | more than 3 years ago | (#36031394)

Moore's observation was about transistor count, not MHz, core count, speed, wattage, FLOPS, bogomips, or anything else.

Another ho-hum processor from AMD :( (1)

supremebob (574732) | more than 3 years ago | (#36030972)

Is anyone else disappointed that AMD's fastest desktop processor can barely keep up with Intel's midrange Sandy Bridge Core i5 processors in most applications? Sure, AMD's processors are still a great value, but it seems like they fall further behind with their performance parts every year.

I just hope that the performance improvements for Bulldozer are all they're cracked up to be.

ANYONE who buys AMD is a newb. (-1)

Anonymous Coward | more than 3 years ago | (#36030986)

Those scores are pathetic. Did you notice a simple Intel i5 beats it in performance, and uses less power?

AMD is so far behind Intel they won't catch up in 20 years.
My 4-core Intel destroys my friend's 6-core AMD, and it's a good metaphor for AMD's entire line. Useless.

"... holds up pretty well"? (2)

macraig (621737) | more than 3 years ago | (#36031154)

Ummm, against what, my obsolete Phenom (I) X4 9850? Funny how true fanbois can read the same review as an objective person and walk away with entirely different conclusions, eh?

The AnandTech review [anandtech.com] was even less forgiving of AMD's underdog status, and basically recommended passing -- either waiting for the allegedly awesome new Bulldozer line or jumping ship for Intel. Hell, when Sandy Bridge both outperforms AND underconsumes (power), you oughtta be seriously questioning that underdog affection. I certainly am.

Re:"... holds up pretty well"? (0)

Anonymous Coward | more than 3 years ago | (#36031466)

Funny how someone with another bias than mine can read the same reviews as a person with my personal bias and walk away with entirely different conclusions, eh?

Funny how egocentric retards always call their own personal bias "objective", as if it were an absolute.
Newsflash: if you understood something about physics, neurology, information propagation, etc., you'd know that something absolute -- like "objectivity" -- is indeed physically impossible, and socially absurd.
It's just what we call the "bias" of the world view that is compatible with our own inner model of it. (Which for cattle is that of their opinion maker, shared with their herd.)

Get off your high horse, and accept that the world doesn't revolve around you and your little personal world view. (Or anyone else's.)

For the record: no, I'm not defending fanbois, idiot. I'm saying you are just as delusional and egocentric as them. In fact you're imitating them. The only operation you perform is negation/mirroring relative to a view that is close to my view/bias.

CAPTCHA: "imbecile". How fitting.

Re:"... holds up pretty well"? (0)

Anonymous Coward | more than 3 years ago | (#36031798)

Cause you are the imbecile? I love how people talk about getting off the high horse, while riding it all the way down.

We are no longer chasing the Phantom x86... (2, Insightful)

Anonymous Coward | more than 3 years ago | (#36031156)

Read this excerpt from an AMD management blog:

"Thanks to Damon at AMD for this link to a blog from AMD's Godfrey Cheng.
We are no longer chasing the Phantom x86 Bottleneck. Our goal is to provide good headroom for video and graphics workloads, and to this effect “Llano” is designed to be successful. To be clear, AMD continues to invest in x86 performance. With our “Bulldozer” core and in future Bulldozer-based products, we are designing for faster and more efficient x86 performance; however, AMD is seeking to deliver a balance of graphics, video, compute and x86 capabilities and we are confident our APUs provide the best recipe for the great majority of consumers. "

People, read between the lines.
What he is saying is that they can no longer compete with Intel on speed and have decided to concentrate on balance at the low-end price points.
The days of the CPU wars are in fact over, and Intel has won with Sandy Bridge.
Yes, I have been an AMD-only fan for years, but you have to face the reality that times have changed permanently in Intel's favor and AMD's days are numbered.
Why else do you think Bulldozer is over a year late?
Oh, and AMD is ripe for a buyout right now, and there are rumors.
When AMD fails, Intel will have a monopoly, and the consumer will lose in the end.
Sad but true.

Re:We are no longer chasing the Phantom x86... (1)

rubycodez (864176) | more than 3 years ago | (#36031254)

Nonsense, there are other CPU vendors for the new age of mobile computing. I for one welcome our non-x86 overlords.

Re:We are no longer chasing the Phantom x86... (1)

ChrisMaple (607946) | more than 3 years ago | (#36031490)

Who would be foolish enough to buy Sanders' folly? The company struggles to make a profit in the shadow of Intel's superior technology. In the US, TI, Micron, Qualcomm and Broadcom are bigger. TI has been in the business before and knows better. The other three don't do the same sort of thing, so it wouldn't be a good match. If a foreign company bought AMD, Intel would feel no (antitrust) compunction against lowering prices to the point that the new owner would lose heaps of money. That leaves PC manufacturers who'd like an in-house CPU supplier. Dell's already denied the rumor, and (in my estimate) HP is deeply in bed with Intel and might have culture clash problems with AMD. Apple doesn't seem like a good fit. I suppose Acer is a possibility.

Re:We are no longer chasing the Phantom x86... (1)

mr_stinky_britches (926212) | more than 3 years ago | (#36031878)

please...anything but monopoly....


I am surprised... (1)

wpiman (739077) | more than 3 years ago | (#36031400)

They released their fastest processor? Wow-- unusual for chip makers to make improvements to speed and design.

Waiting for Bulldozer (1)

tyrione (134248) | more than 3 years ago | (#36031406)

Show me Bulldozer and then we'll talk.
