
AMD Considered GDDR5 For Kaveri, Might Release Eight-Core Variant

timothy posted about 3 months ago | from the things-that-might-have-been dept.


MojoKid writes "Of all the rumors that swirled around Kaveri before the APU debuted last week, one of the more interesting bits was that AMD might debut GDDR5 as a desktop option. GDDR5 isn't bonded in sticks for easy motherboard socketing, and motherboard OEMs were unlikely to be interested in paying to solder 4-8GB of RAM directly. Such a move would shift the RMA responsibilities for RAM failures back to the board manufacturer. It seemed unlikely that Sunnyvale would consider such an option, but a deep dive into Kaveri's technical documentation shows that AMD did indeed consider a quad-channel GDDR5 interface. Future versions of the Kaveri APU could potentially also implement 2x 64-bit DDR3 channels alongside 2x 32-bit GDDR5 channels, with the latter serving as a framebuffer for graphics operations. The other document making the rounds is AMD's software optimization guide for Family 15h processors. This guide specifically shows an eight-core Kaveri-based variant attached to a multi-socket system. In fact, the guide goes so far as to say that these chips in particular contain five links for connection to I/O and other processors, whereas the older Family 15h chips (Bulldozer and Piledriver) only offer four HyperTransport links."


news for nerds (0)

Anonymous Coward | about 3 months ago | (#46011417)

finally something I care to read about

Re:news for nerds (2)

ls671 (1122017) | about 3 months ago | (#46011441)

Great! Can you explain to me why "GDDR5 isn't bonded in sticks for easy motherboard socketing" ?

Re:news for nerds (0)

Anonymous Coward | about 3 months ago | (#46011571)

Can you explain to me why CPUs (also BGA) "[aren't] bonded in sticks for easy motherboard socketing"?

Re:news for nerds (0)

Anonymous Coward | about 3 months ago | (#46012933)

What's wrong with a ZIF socket?

Packaging != modular design (2)

unixisc (2429386) | about 3 months ago | (#46014317)

CPUs - the last I saw - were PGAs; except for embedded systems, I'm not aware of many BGA-based CPUs. DDR3 onwards changed its packaging from TSOP to BGA due to the excess pin count: on TSOP, only a larger package would work, but then the length of the wire bonds would become a factor in the speed of the sub-system (CPU to DDR3). Also, while TSOP is cheaper than BGA for lower pin counts, when the pin counts become comparable - around 50 - the equation flips: BGAs become cheaper than 56-pin TSOPs. As a result, memory started getting packaged in BGA, which was already in use by flash memory, especially in portable applications such as cell phones, PDAs and so on.

But even those DDR3 modules are available on sticks, even in their BGA packages.

Re:news for nerds (2)

davester666 (731373) | about 3 months ago | (#46011605)

Motherboard manufacturers see the profit margins Apple has with RAM that can't be increased in a number of their high-end models, and they want in on that action.

Re:news for nerds (4, Informative)

SuricouRaven (1897204) | about 3 months ago | (#46011651)

Because it's electrically so delicate that you can't keep bit sync when shoving such high frequencies through a slot connector. The price of higher bandwidth, in both the analog and digital senses.

Re:news for nerds (2)

hairyfeet (841228) | about 3 months ago | (#46012743)

Sure can. You see, currently all GDDR5 is sold to GPU OEMs, and they haven't used sockets on graphics cards since the mid '90s.

As for TFA, they sold some HD43xx boards with a small amount of dedicated RAM, and since AMD makes their own board chipsets I see no reason why they couldn't sell a board or two with 512MB to, say, 2GB of dedicated GDDR5. Let's face it, guys: you can only go so far with an APU before it would make more sense to have a dedicated GPU instead, but with AMD having the same chips that are in the new consoles this might be a good option for a budget gaming rig.

The key will be whether they can get the GDDR5 at a price that still lets the board make sense versus just buying, say, an HD7750 card, but if they could get the boards made so a kit would cost around $250-$270 (currently their quad APU kits sell for around $200 sans HDD) this could be a really compelling option for those on a budget who want to game.

Re:news for nerds (2)

unixisc (2429386) | about 3 months ago | (#46014197)

Great! Can you explain to me why "GDDR5 isn't bonded in sticks for easy motherboard socketing" ?

The reason is that they are used exclusively for video cards/GPUs and are not meant to be accessed directly by the CPU. In the case of integrated video on motherboards, they're not used in the first place. In the case of video cards, they are soldered right onto the card - video cards don't have slots because then you'd have video cards going into PCIe slots, the cards would have slots of their own, and the height of the GDDR5 modules would potentially block other motherboard slots that may be needed. In other words, configuration complications.

Short answer: GDDR5 is not meant to be socketed onto motherboards.

Latency vs bandwidth (5, Informative)

evilsofa (947078) | about 3 months ago | (#46011433)

DDR3 is low latency, low bandwidth. GDDR5 is high latency, high bandwidth. Low latency is critical for CPU performance while bandwidth doesn't matter as much. On video cards, GPUs need high bandwidth but the latency doesn't matter as much. This is why gaming PCs use DDR3 for system RAM and GDDR5 on their video cards. Video cards that cut costs by using DDR3 instead of GDDR5 take a massive hit in performance. The XBox One and PS4 use GDDR5 shared between the CPU and GPU, and as a result have the rough equivalent of a very low-end CPU paired with a mid-range GPU.

Re:Latency vs bandwidth (2)

symbolset (646467) | about 3 months ago | (#46011473)

We had this argument long ago, and decided we don't like Rambus not because they don't have some good tech, but because they are fucktards.

Re:Latency vs bandwidth (2)

dagamer34 (1012833) | about 3 months ago | (#46011493)

The Xbox One doesn't use GDDR5 but DDR3 memory with 32MB eSRAM. They made the decision early on to have 8GB of RAM (regardless of type) before they knew it would be feasible to have 8GB GDDR5 memory at an easily attainable cost.

Re:Latency vs bandwidth (4, Interesting)

Anonymous Coward | about 3 months ago | (#46011497)

False, XBox one uses pure DDR3.

It is also one of the key reasons why many games on XBox one cannot do 1080p (that, and the lack of ROPs - PS4 having twice as many ROPs for rasterization)

XBox One tries to "fix" the RAM speed by using embedded SRAM on-chip as a cache for the DDR3 for graphics. It remains to be seen how well the limitations of DDR3 can be mitigated. Early games are definitely suffering from "developer cannot be assed to do a separate implementation for Xbox One".

Kaveri, while related to the chips inside the consoles, is a decisively lower-performing part. Kaveri includes 8 CUs. XBox One has 14 CUs on die, but two of those are disabled (to improve yields), so 12. PS4 has 20 CUs on die, with again two CUs disabled to improve yields, so 18.

On the other hand, Kaveri has far better CPU cores (the console chips feature fairly gimpy Jaguar cores, though both consoles feature 8 of those cores vs 4 on Kaveri).

Any integrated graphics setup that uses DDR3 is bound to be unusable for real gaming. Kaveri has a good integrated graphics setup compared to the competition, but it is far behind what the new consoles feature - boosting it with GDDR5 without also at least doubling the CU count wouldn't do much. Either way, it really isn't usable for real gaming. It beats the current top offering from Intel, but that's a bit like winning in the Special Olympics when compared to real graphics cards (even ~$200-250 midrange ones).

Re:Latency vs bandwidth (2, Informative)

Anonymous Coward | about 3 months ago | (#46011517)

Somewhat false. Latency is approximately the same for DDR3 vs GDDR5, at least in terms of nanoseconds from request to response. GDDR5 is impractical for use with CPUs due to the need to solder it to the board and its high power consumption (enough to need cooling). And since CPUs don't need that much bandwidth anyway, it would be of little use in addition.

Also, the Xbox One uses DDR3, and makes up for the lack of bandwidth with four channels and 32MB of ESRAM for graphics use.

Re:Latency vs bandwidth (4, Insightful)

Sockatume (732728) | about 3 months ago | (#46011663)

Latency in cycles is higher for GDDR5, but the clock speed's a lot faster, isn't it? As the real-time latency is the product of the number of cycles and the length of a cycle, I think it's pretty much a wash.
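
To put rough numbers on that: wall-clock latency is just cycles divided by clock rate. The sketch below uses made-up but plausible round figures (not datasheet values) purely to illustrate the arithmetic.

```c
/* Illustrative only: CAS latency in wall-clock time is cycles / clock rate.
 * The timing numbers below are hypothetical round figures, not datasheet values. */
#include <stdio.h>

int main(void) {
    /* DDR3-1600: I/O bus ~0.8 GHz, CAS latency ~11 cycles (assumed) */
    double ddr3_ns  = 11.0 / 0.800;   /* cycles / GHz -> ns */
    /* GDDR5 at 5 GT/s: command clock ~1.25 GHz, CAS latency ~15 cycles (assumed) */
    double gddr5_ns = 15.0 / 1.250;

    printf("DDR3  CAS latency: ~%.1f ns\n", ddr3_ns);   /* ~13.8 ns */
    printf("GDDR5 CAS latency: ~%.1f ns\n", gddr5_ns);  /* ~12.0 ns */
    return 0;
}
```

With numbers in that ballpark the wall-clock difference largely washes out, which is the point being made above.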

Re:Latency vs bandwidth (1)

Firethorn (177587) | about 3 months ago | (#46012723)

Indeed. All the variations of memory today seem to have about the same latency. RAM that cycles faster simply takes more cycles to start returning requests, though once it starts it IS faster.

Re:Latency vs bandwidth (0)

Anonymous Coward | about 3 months ago | (#46013031)

They use similar DRAM cells inside, so of course the latency is similar. The difference is how wide the internal bus is, how many banks there are (bandwidth), and how it is accessed.

The APU is already socketed (vs. a soldered-down GPU), so I'm not sure whether that alone would hurt GDDR5 signal quality.
Getting GDDR5 on a stick is a chicken-and-egg problem. Unless you get big guys like Intel also asking for it, that's unlikely to happen soon.

Re:Latency vs bandwidth (1)

arbiter1 (1204146) | about 3 months ago | (#46011961)

One point that seems to be missed is "implement 2x 64-bit DDR3 channels alongside 2x 32-bit GDDR5 channels". So that's two half-width channels for GDDR5; it wouldn't help much if each is only a 32-bit channel. Keep in mind the memory bus width of a GPU is 256-bit and higher.
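
For a rough sense of scale, peak theoretical bandwidth is just bus width in bytes times the effective transfer rate. The data rates in this sketch are assumptions chosen for illustration, not AMD's figures.

```c
/* Peak theoretical bandwidth = (bus width in bits / 8) * effective data rate (GT/s).
 * Transfer rates below are assumed for illustration only. */
#include <stdio.h>

static double gbytes_per_s(int bus_bits, double gtransfers_per_s) {
    return bus_bits / 8.0 * gtransfers_per_s;
}

int main(void) {
    printf("2x 64-bit DDR3-2133   : %.1f GB/s\n", gbytes_per_s(128, 2.133)); /* ~34.1 */
    printf("2x 32-bit GDDR5 @5GT/s: %.1f GB/s\n", gbytes_per_s(64,  5.0));   /*  40.0 */
    printf("256-bit GDDR5 @5GT/s  : %.1f GB/s\n", gbytes_per_s(256, 5.0));   /* 160.0 */
    return 0;
}
```

So a pair of 32-bit GDDR5 channels would edge out dual-channel DDR3, but it is nowhere near the bus of a discrete card, which is the point above.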

Re:Latency vs bandwidth (1)

wagnerrp (1305589) | about 3 months ago | (#46012833)

Remember, these are CPUs with integrated GPUs, and there are several benchmarks that show nearly linear improvement in graphics performance on AMD APUs as you increase the memory bus speed. They are severely bottle-necked by a mere dual-channel DDR3 controller.

Re:Latency vs bandwidth (3, Informative)

Bengie (1121981) | about 3 months ago | (#46013611)

DDR3 and GDDR5 have nearly the same latency when measured in nanoseconds. When measured in clock cycles GDDR5 is higher latency, but it has more cycles. This has to do with the external interface, which is more serial in nature than the internal one. The internal data path is quite wide, but the external data path is not, because traces on a motherboard are expensive. To crank up the bandwidth, you increase the frequency. If the internal frequency remains fixed and the external frequency goes up, the external "latency" (in cycles) seemingly goes up.

AMD could do a 24 core desktop chip right now (1, Interesting)

symbolset (646467) | about 3 months ago | (#46011437)

They don't care to because it would cut into their server revenue where margins are higher. Personally I think that really sucks. Intel is the same way. Maybe the migration to mobile where we don't have these margin protection issues is a good thing.

Re:AMD could do a 24 core desktop chip right now (1)

guacamole (24270) | about 3 months ago | (#46011467)

Why do you need so many cores on the desktop though?

Re:AMD could do a 24 core desktop chip right now (2)

symbolset (646467) | about 3 months ago | (#46011523)

I like to use Blender to compose my videos.

Re:AMD could do a 24 core desktop chip right now (0)

Anonymous Coward | about 3 months ago | (#46011753)

Composing videos using a software + CPU combo is actually a painfully slow way to do it.

Some of the fastest video editing machines have embedded software and run on discrete logic instead of a general-purpose CPU.

Re:AMD could do a 24 core desktop chip right now (3, Informative)

symbolset (646467) | about 3 months ago | (#46011767)

Fine. You do it your way. I want this technological achievement so I can do it mine.

Re:AMD could do a 24 core desktop chip right now (1)

Sockatume (732728) | about 3 months ago | (#46012439)

You know what a fast, multi-core parallelised CPU optimised for video encoding looks like?

A GPU.

Re:AMD could do a 24 core desktop chip right now (1)

dreamchaser (49529) | about 3 months ago | (#46011901)

It depends on what you're doing. In my case I'm often running 4-6 VMs all busy doing various tasks. Having multiple cores helps greatly, as does having a lot of RAM. Any use case that benefits from multiple threads can, if the software is written properly, take advantage of multiple cores.

Re:AMD could do a 24 core desktop chip right now (1)

wagnerrp (1305589) | about 3 months ago | (#46012855)

I'll give you the RAM, considering the isolation of a VM necessitates all those shared libraries be wastefully loaded and cached multiple times, but why would running 4-6 VMs result in high CPU load compared to not running VMs and just having a monolithic server?

Re:AMD could do a 24 core desktop chip right now (1)

dreamchaser (49529) | about 3 months ago | (#46013453)

It depends on what the VMs are doing. Having more physical processors to run threads on helps a lot in my case, where they usually involve running multiple hosts simulating client networks, with a few of said VMs doing lots of packet processing.

Re:AMD could do a 24 core desktop chip right now (2, Interesting)

Anonymous Coward | about 3 months ago | (#46011509)

No, they don't do it because it considerably raises the cost of the chip and it doesn't help improve the "average" user's workload. Many core processors have a bunch of inherent complexity dealing with sharing information between the cores or sharing access to a common bus where the information can be transferred between processes. There are tradeoffs to improving either scenario.

Making the interconnection between cores stronger means that you have a high transistor count and many layers in silicon. Even if you do this, the scenarios in which the cores need to intercommunicate means that either the processes are being dispatched to the processor and the processor has a scheduling algorithm built in (which has its own issues), or you have a new set of instructions allowing software to dispatch code from cpu to cpu. Then race conditions and all sorts of other nonsense are inherently in the CPU itself and you have a whole bunch of problems trying to write code for the mess.

Even if you go the above route, you still have to get the data you process out of the CPU and into RAM. Then you have a bus contention problem: how can multiple cores access different sections of RAM simultaneously? Is the CPU responsible for settling a conflict when two different cores want to write to the same section of RAM at the same time? These issues are largely easily settled with only 2, 3, or 4 cores now (and currently the responsibility for preventing these scenarios is on the OS not the CPU), but they would explode with a 24 core (or more).

It's currently possible to build a CPU with several hundred cores inside (and both Intel and AMD have done it). No one has settled the issues above to make it practical, or invented the new software paradigm to make it easy to fix them.

Re:AMD could do a 24 core desktop chip right now (1)

symbolset (646467) | about 3 months ago | (#46011565)

No, they do it because it would compete with their $1000 server chips. ARM is about to give them some correction.

I've been an AMD booster from way back, but they think they're still competing with Intel and that is a serious mistake.

Re:AMD could do a 24 core desktop chip right now (0)

Anonymous Coward | about 3 months ago | (#46011675)

AMD doesn't even make a 24-core server chip. They could, theoretically, make a 16-core native chip that could be combined into a 32-core multi-chip module (MCM) package. This would involve substantial expense for both R&D (the chip would need at least a little development, even though they have a core to put in it), and masks for the foundry. Figure at least millions, but likely tens of millions of dollars.

The chip would have essentially no use for a desktop user in either 16 or 32 core configuration. So, would the server market have enough demand for a chip like that to offset the costs of development plus the costs of production? I wouldn't really bet on it right now. At least they could do it with reasonable power consumption, so it's not impossible, just rather unlikely.

Of course, to address your final point, if they still thought they were competing with Intel, they would already be doing this. Maybe once they have their big cores working again, but that's a year from now in the most optimistic case.

Re:AMD could do a 24 core desktop chip right now (1)

citizenr (871508) | about 3 months ago | (#46011933)

I think they know; they do have an ARM license for the 64-bit stuff and will make ARM64-based Opterons.

Re:AMD could do a 24 core desktop chip right now (1)

symbolset (646467) | about 3 months ago | (#46011953)

That is going wrong for all the right reasons. It still ends in death.

Re:AMD could do a 24 core desktop chip right now (0)

Anonymous Coward | about 3 months ago | (#46011513)

Applications and consumer operating systems cannot benefit from high core counts right now. They would be selling idle cores to the masses.

Right now in consumer computers, 6-8 threads is the practical maximum you'll need for the foreseeable future. Intel's 6-core, 12-thread CPUs are already overkill for anything other than high-end workstation use, and workstations have used server hardware (chipsets, processors) in the past already, so going to 12 or more threads with midrange server hardware is a non-issue.

Re:AMD could do a 24 core desktop chip right now (1)

symbolset (646467) | about 3 months ago | (#46011629)

We're about 10 years past that argument, and we keep buying more. Stop giving us more and we stop buying your stuff.

Re:AMD could do a 24 core desktop chip right now (1)

Bengie (1121981) | about 3 months ago | (#46013659)

Increasing the core count reduces per core performance and increases total power consumption and production cost. With most cores idle, it is a bad thing to have too many cores.

Re:AMD could do a 24 core desktop chip right now (3, Interesting)

TheLink (130905) | about 3 months ago | (#46011639)

They don't care because a desktop with a 24 core AMD CPU is likely to be slower than a 4 core Intel CPU for most popular _desktop_ tasks which are mostly single threaded. They're already having problems competing with 8 core CPUs, adding more cores would either make their chips too expensive (too big) or too slow (dumber small cores).

Sad truth is for those who don't need the speed a cheap AMD is enough - they don't need the expensive ones. Those who want the speed pay more for Intel's faster stuff. The 8350 is about AMD's fastest desktop CPU for people who'd rather not use 220W TDP CPUs, and it already struggles to be ahead of Intel's mid range for desktop tasks: http://www.anandtech.com/bench/product/697?vs=702 [anandtech.com]

A few of us might regularly compress/encode video or use 7zip to compress lots of stuff. But video compression can often be accelerated by GPUs (and Intel has QuickSync but quality might be an issue depending on implementation). The rest of the desktop stuff that people care about spending $$$ to make faster would be faster on an Intel CPU.

A server with 24 cores will be a better investment than a desktop with 24 cores.

Re:AMD could do a 24 core desktop chip right now (1)

symbolset (646467) | about 3 months ago | (#46011713)

No, AMD's issue is that they underestimate what is required to take the desktop, and they underestimate the tech Intel has in reserve. They have management control issues, partner communication issues, and their recent layoffs have left their message garbled. I'd love to be able to unleash AMD's potential on an unsuspecting world, but I'm unlikely to be given the chance. I would call a press conference and say "Hey, Krzanich: bite me!" and unleash a 16-core desktop CPU with a 192-core GPU. I would open a division in Valve and open the tap: what do you need?

And if I were Intel and trying to compete, I would quit with the herpaderp Windows crap, since obviously that isn't moving units.

Re:AMD could do a 24 core desktop chip right now (1)

Sockatume (732728) | about 3 months ago | (#46011731)

Where else are Intel going to go? AMD's got the console space wrapped up this generation, and the Steambox isn't far enough along to make for a solid revenue stream. That leaves low-power applications where they're making progress but not yet ready to dive in. Like it or not, Intel are going to be a Windows and MacOS house for another five years or so.

Re: AMD could do a 24 core desktop chip right now (0)

Anonymous Coward | about 3 months ago | (#46012419)

Dude, step back from the fanboi punchbowl for a bit... Drink some water, have a granola bar, go for a walk. AMD will still be there when you finish, and a well rested cheerleader can do some more impressive routines.

Re:AMD could do a 24 core desktop chip right now (1)

MachineShedFred (621896) | about 3 months ago | (#46012661)

Oh, Windows is moving units for Intel, as Intel pretty much owns the enterprise. Walk into pretty much any Fortune-100, and you'll see the blue logo stickers on practically every single laptop and desktop in the place, and the data center will be floor to ceiling with Xeon except specialty stuff (SPARC boxes, IBM pSeries, mainframes). Thin Clients running Linux? Likely on Atom, but AMD is making a bit of an inroad there.

Don't fool yourself - big business is still suffering from 20 years of Microsoft lock-in, and Intel rules that particular roost for performance and manageability through their AMT and vPro chipset tech.

vPro alone will save the company I work for an estimated $1M a year in tech dispatches and eliminated remote control agent licensing, and that doesn't even include the increased uptime of not having to wait for the tech to travel to the site to do the job.

Re:AMD could do a 24 core desktop chip right now (2)

hairyfeet (841228) | about 3 months ago | (#46012939)

But the "dirty little secret" that doesn't get brought up enough frankly is a lot of those "single threaded loads" is as rigged as quack.exe was back in the day [theinquirer.net] thanks to every Intel compiler made since 2002 being made to put out crippled code for any chip that Intel doesn't want to push. Oh and for those that use the "Intel just knows their own chips and optimizes for them" excuse that lie has been disproved and the proof was the last gen Pentium 3. You see the last gen P3 was curbstomping the Netburst P4s in early benchmarks, yet when the cripple compiler comes out? Suddenly the very same Netburst chips are winning by 30%!

And the bitch is that any of these so-called review sites could test for rigging trivially, but they won't for fear of losing Intel advertising revenue. To see if a program is rigged all one has to do is run the code on a Via CPU. Via CPUs allow one to softmod the CPUID, so if you change the CPUID from "CentaurHauls" to "GenuineIntel" and suddenly the chip scores 20%-30%+ higher on the test? Then the program has been rigged by ICC, simple as that.
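
For reference, the vendor-string check itself is easy to inspect from code. This is only a sketch of reading CPUID leaf 0 with GCC's <cpuid.h>; it reads the string but obviously can't change it - soft-modding the reported vendor is the VIA-specific trick described above.

```c
/* Sketch: read the CPUID vendor string (leaf 0) with GCC's <cpuid.h>.
 * CPU-dispatching code commonly branches on "GenuineIntel"; VIA parts
 * report "CentaurHauls" unless the CPUID is soft-modded. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;

    /* The 12-byte vendor string is returned in EBX, EDX, ECX (in that order). */
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';

    printf("CPUID vendor string: %s\n", vendor);
    printf("Would a vendor-gated fast path be taken: %s\n",
           strcmp(vendor, "GenuineIntel") == 0 ? "yes" : "no");
    return 0;
}
```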

All of these sites like Anandtech and Tom's have more than enough money to pick up a Via chip and keep it around for testing, but they won't bite the hand that feeds, so everyone should consider their tests tainted and as worthless as Quake tests were with the rigged drivers. It would be nice if someone would run some real tests so we could see real numbers. It wouldn't be hard to do, as GCC is free and there are plenty of FOSS programs like Firefox one could compile with GCC to give accurate tests, but so far no review site will do this for fear of pissing off Intel. How they didn't get busted for antitrust is beyond me; this is every bit as bad as "Windows isn't done until Lotus won't run", but they were allowed to bribe AMD to the tune of 2 billion to drop the lawsuit and with it the investigation.

Re:AMD could do a 24 core desktop chip right now (0)

Anonymous Coward | about 3 months ago | (#46013387)

You think the apache webserver and other open source software is compiled on the ICC? Most of them run faster on Intel. Maybe you can do a linux kernel compile to compare? Or how about FreeBSD?

Fact is Intel CPUs are faster. They are more expensive, but AMD has nothing at the high end that beats them and they have to resort to "overclocking"/"P4ing" (220W TDP!) to get faster speeds.

AMD has fallen so far behind that Intel are doing the Usain Bolt and slowing down to look back.

But please do buy as many AMD CPUs as you can afford. Someone has to do it so that I don't have to.

Hopefully if they survive long enough AMD will eventually get their act together and come up with something impressive, just like they did in the Opteron days.

Re:AMD could do a 24 core desktop chip right now (2)

K. S. Kyosuke (729550) | about 3 months ago | (#46012947)

The rest of the desktop stuff that people care about spending $$$ to make faster would be faster on an Intel CPU.

You mean something like this [semiaccurate.com]? (Or similar solutions for other languages?)

Re:AMD could do a 24 core desktop chip right now (0)

Anonymous Coward | about 3 months ago | (#46013127)

java on GPU = desktop stuff? LOL. You seriously think typical desktop java stuff will run faster on a GPU? I bet this is for those people who want to do a lot of the OpenCL number crunching stuff but only know or prefer to write it in java (or other languages) - so stuff like this helps them. It sure isn't going to accelerate some random java desktop bloatware.

But I hope you fanbois keep drinking the koolaid, because someone has to be buying those AMD CPUs. Otherwise AMD might die and Intel will start screwing the rest of us more.

Re:AMD could do a 24 core desktop chip right now (1)

K. S. Kyosuke (729550) | about 3 months ago | (#46014179)

You seriously think typical desktop java stuff will run faster on a GPU?

Oracle certainly thinks that. Ditto for servers. I bet that they have more heads than the two of us to think about the problem, and they seem already hell-bent on implementing just that. Now I kindly suggest that you sit back and leave that to the experts.

Re:AMD could do a 24 core desktop chip right now (1)

Bengie (1121981) | about 3 months ago | (#46015327)

The issue with Java on GPUs is that GPUs hate objects and love structs. GPUs puke when random memory access is involved. Right now, Java treats everything as an object, even things that would be structs elsewhere. Java will need to be revamped a bit to work on GPUs. Get ready to start doing buffers and structs in Java; it'll be like C.

Re:AMD could do a 24 core desktop chip right now (1)

Lonewolf666 (259450) | about 3 months ago | (#46011771)

An 8-core Steamroller would be an improvement too, now that computer games finally start scaling well with multiple cores. I might even be willing to re-purpose a server part for my next desktop, even if it is a tad more expensive.

If AMD does not bother though, the Xeon E3-1230 v3 from Intel looks nice as well, only the mainboards that actually support ECC RAM are a bit expensive.

Re:AMD could do a 24 core desktop chip right now (2)

symbolset (646467) | about 3 months ago | (#46011833)

Your problem - and Intel and AMD have this problem also - is thinking the specifics of the device have something to do with its utility. They don't. What matters is how the thing enables people to do what they want to do, and we passed that level of utility in devices a decade ago. I can be telepresent with my children without resorting to either Intel or AMD technologies - all day. Compared to exclusively special uses of spreadsheets and slideshow programs, compatibility differentials of Office suites, that is a win.

It's about the people, not the thing. Wrap your head around that. Make it your religion.

Re:AMD could do a 24 core desktop chip right now (1)

Lonewolf666 (259450) | about 3 months ago | (#46011893)

For some applications, in particular games, performance still matters. My current PC will run older games just fine, but some newer releases tend to demand a fairly powerful machine.

For example, I might be interested in Star Citizen but Chris Roberts has already announced that he is aiming for maximum eye candy on high end machines. It is unlikely that my current machine will handle that game, even on low settings.

If the applications you run do well on a machine from a decade ago, fine. But that is not the case for everybody.

Re:AMD could do a 24 core desktop chip right now (1)

symbolset (646467) | about 3 months ago | (#46011929)

No, it is not different. How people feel about your game is the freaking point. Making people care about your virtual world is the goal. You are an author not unlike all the other authors that ever were.

Re:AMD could do a 24 core desktop chip right now (1)

Sockatume (732728) | about 3 months ago | (#46011969)

I'm not one who buys into the whole "we should be designing games for $2000 systems" madness, but it's still obvious that improved technical performance is something that game creators want and can exploit to improve their art and design, much as film-makers like having new varieties of lens, film, and camera, new techniques for practical effects, new rigging systems and dollies, and so on.

You couldn't do the gameplay of something like MGS3, for instance, on an original PlayStation. For that gameplay to work you need a lot of variation in the shape and nature of the environment and you need the environment to be large. It's not that the game's full of whizz-bang visual effects, but drawing even crude grass and trees and characters eats up enormous amounts of polygons. You could barely do it on a PS2, and it turns out that a game with those creative goals is hugely improved by the locked framerate and boosted resolution that it got when it was re-released on the systems that came afterwards.

Re:AMD could do a 24 core desktop chip right now (1)

Lonewolf666 (259450) | about 3 months ago | (#46012613)

A good point from the perspective of a game designer, and I support the sentiment.

But most of us are consumers most of the time. Even those of us who work on one or two community software projects will typically use a bunch of software we had no part in making. Which means taking the software as it is; if its creators went for a design that requires a beefy PC, either you have one or you don't use that particular software.

Re:AMD could do a 24 core desktop chip right now (1)

edxwelch (600979) | about 3 months ago | (#46012767)

Maybe they should start with the server version though, since AMD currently has no server chip with 24 Steamroller cores.

Re:AMD could do a 24 core desktop chip right now (0)

Anonymous Coward | about 3 months ago | (#46015431)

Because the users they are selling these desktop processors to don't need 24 cores. They're already paying for more cores than they really need, why add 20 more so that a mere fraction of a percent of people can benefit?

For people with special needs, they sell systems that have 24 cores. Naturally, you'll pay more for these.

Was that intentional? (0)

Anonymous Coward | about 3 months ago | (#46011475)

"Kaveri" means friend in Finnish.

Re:Was that intentional? (0)

Anonymous Coward | about 3 months ago | (#46011739)

i doubt so, master of teräs käsi.

if intentional, then why the fuck is it an all-in-one. would make sense for an add on board.

Re: Was that intentional? (0)

Anonymous Coward | about 3 months ago | (#46014445)

It is also the name of a major river in India.

Of course, that would miss the point (4, Interesting)

guacamole (24270) | about 3 months ago | (#46011481)

The whole point of AMD APUs is low cost gaming. That is, lower cost than buying a dedicated GPU plus a processor. Many already argue that you don't save much by buying an APU. A cheap Pentium G3220 with an AMD Radeon 7730 costs the same as the A10 Kaveri APU, and will give a better frame rate. Even if the Kaveri APU prices come down, the savings will be small. If you have to buy the GDDR5 memory, there won't be any savings. It's understandable that AMD didn't take that route.

Re:Of course, that would miss the point (1)

Anonymous Coward | about 3 months ago | (#46011551)

Also, benchmarks have already proven that Kaveri couldn't utilize GDDR5 well enough. Memory speeds over 2133 MHz no longer improve benchmarks on it, so while "bog standard" 1600 MHz DDR3 will leave Kaveri's GPU somewhat memory starved, 2133 MHz is already enough for it (upping that to higher frequencies through overclocking the memory alone doesn't help).

Besides, Kaveri could just go for four DDR3 memory channels. The chip supposedly can do it, it's just that motherboards available right now can't.

Now in the future APUs may get to a point where they would greatly benefit from an on-motherboard block of GDDR5 for the GPU, but Kaveri ain't it - it is off by about a factor of 3 or so. 8 CUs is just far too little... it would need something like 20-24 CUs to start getting into the range of high-end *laptop* GPUs and a good level for today's high-end gaming. Then it'd need faster CPU cores (or 8 cores instead of 4, if we assume that games really will multithread - which is still a bit hit and miss). That would be a Very Big Chip and probably wouldn't be available anywhere near current price points.

Kaveri has a good balance - a budget CPU with a reasonable low-end GPU tacked on. It could use a bit more memory bandwidth, but it isn't terribly RAM-bandwidth-starved. It is hopelessly outclassed by any setup with a reasonable midrange gaming card, but the price reflects that.

Re:Of course, that would miss the point (1)

Lonewolf666 (259450) | about 3 months ago | (#46011793)

Besides, Kaveri could just go for four DDR3 memory channels. The chip supposedly can do it, it's just that motherboards available right now can't.

It would also require a new and presumably more expensive socket, and motherboards would always need four DDR3 sockets to provide the full bandwidth - no more cheap mini boards with only two sockets.

Overall, I'm not sure if it would be much cheaper than GDDR5 on the mainboard.

Re:Of course, that would miss the point (1)

mrchaotica (681592) | about 3 months ago | (#46012487)

motherboards would always need four DDR3 sockets to provide the full bandwidth - no more cheap mini boards with only two sockets.

Mini-ITX and smaller boards could just use SO-DIMMs.

Re:Of course, that would miss the point (1)

K. S. Kyosuke (729550) | about 3 months ago | (#46012901)

Now in the future APUs may get to a point where they would greatly benefit from an on-motherboard block of GDDR5 for the GPU, but Kaveri ain't it - it is off by about a factor of 3 or so. 8 CUs is just far too little...

Why don't you simply get a discrete card and use the APU CUs for what they're supposedly good for, that is, random-access-ish general-purpose computation, as opposed to streamed-access shading and texturing? I mean, the fact that the APU is also reasonably competent as an iGPU is nice, but...

Re:Of course, that would miss the point (2)

girlintraining (1395911) | about 3 months ago | (#46012205)

The whole point of AMD APUs is low cost gaming.

Sigh. You people with your myopic vision. If AMD consigned itself to your view of what it should do, it'll be dead in another 5-7 years. Let's take a look at what Intel offers: Higher performance. Lower energy consumption. Less heat. Smaller die size. In fact, you'd be hard pressed to find anything AMD has in its favor from an engineering standpoint. So what does AMD have that's keeping it in business? Cost. AMD offers a lower price point for economy systems.

But that's not where the profit is. That's not what's going to take AMD into the mid-21st century. If AMD sticks to that line of thinking, it'll go the way of Cyrix... and for exactly the same reason. AMD can't invest in a new fab plant because its cash reserves are too low, whereas Intel's pile of gold just keeps growing.

The only way to swing AMD back into the mid and high-end market is parallelization -- multiple chips, embedding everything into everything, and cutting costs everywhere it can... because it can't shrink the CPU and it can't do a damn thing about energy consumption. Their only avenue of escape is a radical re-think of the chipset, integration, and anything it can do to boost profits to make that leap forward in miniaturization to catch up with Intel. And it will not be easy. Many market analysts have suggested AMD throw in the towel.

That's the position AMD has to play. Of course, your myopic view only sees an engineering problem to solve. You have absolutely no comprehension of the business considerations behind the move... which is probably why you insist they're missing the boat. They are trying to claw their way into the mid-range market and undercut Intel. This is one of the select few ways they're going to do it. Unfortunately, the cost of commodity components to get this Hindenburg off the ground isn't going down -- AMD's position is shrinking and if they don't leverage everything now, they're going to find themselves in an untenable market position and will simply fold, leaving Intel as the only major player left in the market.

And if that happens... we're all fucked.

Re:Of course, that would miss the point (1)

Daniel Hoffmann (2902427) | about 3 months ago | (#46012311)

" So what does AMD have that's keeping it in business? "
A licensing agreement with Intel for the x64 instruction set also helps. Every Intel chip sold gives AMD a few bucks and every AMD chip sold gives Intel a few bucks (because of the licensing of the x86 instruction set). I'm not sure on the specifics, but I assume this to be a lot of money.

Re:Of course, that would miss the point (1)

guacamole (24270) | about 3 months ago | (#46013417)

Sigh. You people with your myopic vision. If AMD consigned itself to your view of what it should do, it'll be dead in another 5-7 years. Let's take a look at what Intel offers: Higher performance. Lower energy consumption. Less heat. Smaller die size. In fact, you'd be hard pressed to find anything AMD has in its favor from an engineering standpoint. So what does AMD have that's keeping it in business? Cost. AMD offers a lower price point for economy systems.

I am sorry, but I think you don't realize that at this point AMD can't compete even on the low end. A 300-400 dollar BestBuy special laptop with an Intel i3 still beats AMD in speed and power efficiency. How much lower can AMD go in prices? They can't. They don't even have the volume to get there. They can integrate all they want, but that doesn't change the fact the AMD offerings are woefully uncompetitive at almost any price level. Forget Intel. After 6 months of hyping their product, the flagship Kaveri A10 can't even decidedly beat AMD's _last generation_ Richland A10.

Re:Of course, that would miss the point (1)

aliquis (678370) | about 3 months ago | (#46013937)

the fact the AMD offerings are woefully uncompetitive at almost any price level

They simply aren't.

If they were, then Sony and Microsoft wouldn't have opted to put tens of millions of their solutions in their consoles.

Also, for the older A10s faster memory really helped with graphics performance, and the fact that the consoles have a different memory configuration likely helps them perform better than the current desktop parts.

It's nothing new that GPUs like having massive memory bandwidth. Sure, the one in the APU is pretty small, but it would likely still enjoy better memory access than a CPU (or multiple CPUs) would get. Also, who knows: maybe with higher memory bandwidth they would have put a more capable GPU part in the APU too, but as it is, what's worth putting in there may be limited by the memory access.

Re:Of course, that would miss the point (1)

0123456 (636235) | about 3 months ago | (#46015341)

If they were, then Sony and Microsoft wouldn't have opted to put tens of millions of their solutions in their consoles.

Intel probably aren't much interested in the tiny margins in the console market, particularly when they'd have to throw a lot of R&D into building a competitive GPU.

That doesn't make AMD any more competitive in PCs.

Re:Of course, that would miss the point (1)

guacamole (24270) | about 3 months ago | (#46013447)

Midrange market? HAHA

Intel Core i5, mobile or desktop, is the midrange market. NONE of AMD's processors, APUs or FX, can match that in speed or power efficiency, not even the new Kaveri APU.

Re:Of course, that would miss the point (1)

0123456 (636235) | about 3 months ago | (#46014665)

Intel have plenty of CPUs that can compete with AMD's prices at the low end. The only thing AMD have is better graphics, which is why they keep pushing APUs, even though there's a tiny market for them... they're basically for people who want to play games, but don't want to play them enough to buy a proper GPU.

It would be a very viable company if they could just dump the CPU side and concentrate on graphics.

Re:Of course, that would miss the point (1)

tlhIngan (30335) | about 3 months ago | (#46015499)

That's the position AMD has to play. Of course, your myopic view only sees an engineering problem to solve. You have absolutely no comprehension of the business considerations behind the move... which is probably why you insist they're missing the boat. They are trying to claw their way into the mid-range market and undercut Intel. This is one of the select few ways they're going to do it. Unfortunately, the cost of commodity components to get this Hindenburg off the ground isn't going down -- AMD's position is shrinking and if they don't leverage everything now, they're going to find themselves in an untenable market position and will simply fold, leaving Intel as the only major player left in the market.

And if that happens... we're all fucked.

I'm sure Intel realizes it too, because they've avoided a lot of anti-trust because of AMD. Heck, Intel probably foisted Sony and Microsoft onto AMD to provide AMD with some steady cashflow for the next 8+ years.

To Intel, AMD's the perfect competitor - not too big, in just enough trouble to not be an issue. But big enough to matter and to provide competition in the eyes of government, keeping regulators and other stuff off Intel's back.

And Intel probably has an "AMD rescue" package available which can involve burying millions of AMD processors to boost their sales.

Because if AMD fails - all of AMD's patents are most likely NOT going to be given to Intel. Instead, they'll probably be spread out among dozens of entities, and what was once a lucrative zero-cost cross-licensing with AMD becomes an expensive, lawyer filled license negotiation.

You have to think similar things happen with odd competition, like Apple iAds. It's probably supported by Google to justify the purchase of AdMob (the DoJ said iAds provides sufficient competition in mobile advertising). Of course, it really doesn't, and Apple has quit promoting it for some time, yet it lives on. Probably some Google cash is paying someone to keep it alive.

Re:Of course, that would miss the point (1)

Kartu (1490911) | about 3 months ago | (#46012507)

Uh oh. Yet it is much cheaper than i5 (+ premium on mainboard), but does much better in games.
http://www.anandtech.com/show/7643/amds-kaveri-prelaunch-information [anandtech.com]

Re:Of course, that would miss the point (1)

guacamole (24270) | about 3 months ago | (#46013509)

NO ONE buys an i5 CPU to use its integrated graphics for games. The Intel HD Graphics are a bonus for people who don't play games. Add a discrete GPU to an i5-based PC as well as to an AMD PC, and it will be game over for ANY AMD-based system, not just Kaveri-based. As for the Kaveri A10, which costs $170 online, the lowest as of now, you can beat it with a $69 Intel Haswell Pentium G3220 and a discrete graphics card like a $100 Radeon 7730.

Re:Of course, that would miss the point (2)

edxwelch (600979) | about 3 months ago | (#46012973)

The thing is, the real Kaveri star is the A8-7600, not the A10 models. The A8-7600 is only $119 yet thrashes any Intel APU in gaming.

Re:Of course, that would miss the point (1)

guacamole (24270) | about 3 months ago | (#46013647)

The A8 makes sense for a very low-end system. If I were building a PC, whether for playing games or not, I might have considered the A8 or A10-6800K (last-generation A10) if I was looking for a $130 processor. It looks like they both have compute power somewhere in the neighborhood of the Intel i3-4150 but somewhat better graphics.

Re:Of course, that would miss the point (2)

hairyfeet (841228) | about 3 months ago | (#46013101)

And anyone who went that route would frankly be short-sighted and foolish and would regret it later. The simple fact is that more and more games are taking advantage of multiple cores, and a dual core will be a bottleneck pretty quickly on any of the new games coming out. Already many of the bigger games like BioShock Infinite, Far Cry 3, Dirt 3 and the like will run better on a quad than a dual, and that number is going nowhere but up.

As someone who has been selling the APUs since Llano, the advantages are threefold. One, you can buy a nice budget system that will game OOTB and add a dedicated GPU down the line, making the system more affordable to more folks. Two, if you stick with AMD and buy one of the dedicated chips that support zerotech, like a 7xxx chip or above, you can significantly lower your power usage when you aren't gaming, as zerotech lets the dedicated GPU drop down to a couple of watts when not required while the APU takes care of hardware acceleration normally handled by the dedicated card. And three, these APUs also make great HTPC chips; with hardware acceleration for pretty much every major format out there you can get smooth 1080p playback even when using a tuner for recording or while multitasking.

As for TFA, while I can't see a point in embedding 8GB of GDDR5 on a board for one of these, I CAN see putting a smaller amount, say 512MB, to give the APU a nice high-bandwidth buffer to improve performance.

Re:Of course, that would miss the point (1)

guacamole (24270) | about 3 months ago | (#46013581)

The problem is that AMD cores aren't that great. For one, each AMD module has two integer cores but only one FPU. Despite that, they call it "dual core", even though for floating point work the AMD architecture sounds awfully like hyperthreading. And so to me it's no surprise that an Intel i3 (2 real cores, but 4 logical because of hyper-threading) can challenge 4- and 6-core AMD CPUs.

Re:Of course, that would miss the point (1)

Bengie (1121981) | about 3 months ago | (#46014115)

It has one 256-bit FPU that can split to handle any mix of FPU work between both cores. It can do 2x 128-bit, or 128-bit and 2x 64-bit, or 128-bit and 4x 32-bit, or 2x 64-bit and 4x 32-bit, etc. Right now only AVX instructions can use the full 256 bits, so anything SSE or narrower doesn't have to "share" the FPU. Each core is actually able to pipeline up to 4x 32-bit operations, even if not using SIMD. No matter how you look at it, it can handle 256 bits worth of FPU calculations. The FPU eats up a lot of transistors and is one of the least-used parts. Even in heavy FPU code, there are enough memory stalls or branching logic that it's not in use all that often. There is little reason to have two full 256-bit FPU units.

Code that works best with heavy FPU usage typically works well on GPUs. Thus, the APU. Need a mix of high throughput and low latency? The APU has you covered. With about 10ns latency to the IGP compared to 10-100 microseconds for discrete GPUs, many workloads can notice a throughput increase.
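
To make the widths concrete, here is a minimal sketch of the 128-bit SSE vs 256-bit AVX operations being described, using standard intrinsics. How a Bulldozer/Steamroller module actually schedules these onto its shared FPU is up to the hardware and isn't modeled here.

```c
/* Sketch of the operand widths being discussed (build with -mavx).
 * Only illustrates 128-bit SSE vs 256-bit AVX operations; module
 * scheduling onto the shared FPU is not modeled. */
#include <stdio.h>
#include <immintrin.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float r[8];

    /* Two 128-bit SSE adds: each handles 4 floats; one such op from each
     * core of a module can use half of the 256-bit FPU at the same time. */
    __m128 lo = _mm_add_ps(_mm_loadu_ps(a),     _mm_loadu_ps(b));
    __m128 hi = _mm_add_ps(_mm_loadu_ps(a + 4), _mm_loadu_ps(b + 4));
    _mm_storeu_ps(r,     lo);
    _mm_storeu_ps(r + 4, hi);

    /* One 256-bit AVX add: 8 floats at once, needing the full shared FPU. */
    __m256 wide = _mm256_add_ps(_mm256_loadu_ps(a), _mm256_loadu_ps(b));
    _mm256_storeu_ps(r, wide);

    printf("r[0]=%.0f r[7]=%.0f\n", r[0], r[7]);  /* both print 9 */
    return 0;
}
```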

Re:Of course, that would miss the point (1)

Bengie (1121981) | about 3 months ago | (#46013783)

The whole point of AMD APUs is low cost gaming. That is, lower cost than buying a dedicated GPU plus a processor. Many already argue that you don't save much by buying an APU. A cheap Pentium G3220 with an AMD Radeon 7730 costs the same as the A10 Kaveri APU, and will give a better frame rate. Even if the Kaveri APU prices come down, the savings will be small. If you have to buy the GDDR5 memory, there won't be any savings. It's understandable that AMD didn't take that route.

AMD is aiming for "good enough", and they did a great job. Per thread, AMD is now on par with Intel's Haswell and has an integrated GPU that can cover the 80/20 rule for games. The only issue I personally have is that AMD's current Kaveri offerings are limited to a 2 module(4 core) setup that consumes about 50% more power than 2 core(4 Hyper-thread) Haswell while idle and about 100% more power under pure CPU load. Since I will have a discrete GPU, I see no benefit to consuming that much more power. We're talking about a 20watt difference idle and about a 50watt difference under load. And they're about the same price.

Re:Of course, that would miss the point (1)

guacamole (24270) | about 3 months ago | (#46013929)

I don't understand what you mean by "per thread". AMD claims the A10 is a four-core CPU, but each module has only one FPU despite presenting two logical cores, which sounds awfully like hyper-threading to me; Intel was more honest and called the i3 a 2-core CPU with hyper-threading. Basically, AMD overstates how many real cores its processors have, but this strategy seems to work since the web forums are filled with fanboys who think that more cores is always better.

This is why benchmarks show AMD can't beat Intel CPUs in single-threaded applications. If one thread is all you can use, then Intel will be faster. If you can use more threads, you need 8 threads or more before you start beating even Intel's i5.

Re:Of course, that would miss the point (1)

Bengie (1121981) | about 3 months ago | (#46014389)

AMD has two separate core logics with their own execution units that only share the FPU. They have their own L1 caches and a bunch of other things. The FPU can handle 256 bits worth of FPU calculations, split in almost any configuration of x87, SSE, and AVX. The only real time you see contention for the FPU is when doing 2x 128-bit SSE instructions on a single core or a single 256-bit AVX instruction.

Intel's hyper-threading shares nearly everything, right down to the integer execution units and L1 cache. Intel made a highly pipelined CPU with a lot of execution units, then made it present itself as two logical CPUs, since many of those execution units would otherwise sit idle because of memory stalls or because a single thread can't pipeline enough instructions in parallel to use them all.

Intel's design has a better best case and a worse worst case; AMD is more middle of the road. Both designs increase throughput per transistor and reduce power per transistor on average.

A huge benefit of Intel's design is that if you disable HT, or the OS puts one of the logical cores to sleep, you can make use of the entire CPU, making single-threaded performance strong, as you have a single core with a massive L1 cache and a ton of pipelinable execution units.

But remember, Intel optimized for throughput, so that large L1 cache comes with a 2-cycle latency, but can handle 128-bit loads per cycle. If you're going to optimize for single-threaded performance, make use of those extra registers in 64-bit mode and help hide that latency. When hyper-threading is enabled, it helps hide single-threaded latency by interleaving logical threads against the cache, making it mostly seem like 1-cycle latency. Higher throughput, but higher latency.

Re:Of course, that would miss the point (2)

Lonewolf666 (259450) | about 3 months ago | (#46014945)

The 45W Kaveris are interesting, as they show a nice improvement in performance/watt - the new "sweet spot" is not in the top models but in the somewhat slower A8-7600 (3.1-3.3 GHz CPU speed).

I wonder how a 4 module (8 core) FX on that basis would perform and at which TDP. For software that scales well with many cores, it might be a good buy.

The point is MANTLE (1)

ducomputergeek (595742) | about 3 months ago | (#46015059)

I'm shopping for a new gaming computer on a budget. And even models shipping with this APU still usually have an R9 270X dedicated card as well, for a price point of about $850 USD.

Where this gets interesting is if Mantle gets widely adopted. Suddenly it can treat those 6 or 8 GCN CUs on the APU as additional GPU processing power to be used in the queue. While maybe not as powerful as a second video card, it should give a boost in performance at no additional cost.

Of course, that's assuming game developers start using Mantle...

Eight Cocks Up Your Ass (-1)

Anonymous Coward | about 3 months ago | (#46011511)

One for every year of Obama! Hope you like cock!

Re: Eight Cocks Up Your Ass (-1)

Anonymous Coward | about 3 months ago | (#46011803)

Obama is a worthless black Muslim trying to take over America with Obama care. Of course I don't like the guy. Never voted for him, but I still for him because there are so many dumb fucks in this country (or the politically incorrect truth, black people voted for him just because he was black, so he won)

Re: Eight Cocks Up Your Ass (-1)

Anonymous Coward | about 3 months ago | (#46011917)

Trying to take over? Trying?! Dude, he's taken over, thanks to those dumb as fucks who voted for Blackie just because he wasn't Bushie. Enjoy affordable care! Affordable to everyone except us!!

A Better Explanation At Anandtech (5, Informative)

Anonymous Coward | about 3 months ago | (#46011561)

Anandtech's writeup [anandtech.com] (which Hothardware seems to be ripping off) has a much better explanation of what's going on and why it matters.

Let me be very clear here: there's no chance that the recently launched Kaveri will be capable of GDDR5 or 4 x 64-bit memory operation (Socket-FM2+ pin-out alone would be an obvious limitation), but it's very possible that there were plans for one (or both) of those things in an AMD APU. Memory bandwidth can be a huge limit to scaling processor graphics performance, especially since the GPU has to share its limited bandwidth to main memory with a handful of CPU cores. Intel's workaround with Haswell was to pair it with 128MB of on-package eDRAM. AMD has typically shied away from more exotic solutions, leaving the launched Kaveri looking pretty normal on the memory bandwidth front.

It's also worth noting that the Anandtech article implies that AMD is still on the fence on Kaveri APUs with more memory bandwidth, and that it may be something they do if there's enough interest/feedback about it.

Re:A Better Explanation At Anandtech (1)

Dan Askme (2895283) | about 3 months ago | (#46011751)

Cheers for the better article.
It wasn't just me who thought the author of the hothardware article didn't have a clue?

Re:A Better Explanation At Anandtech (0)

Anonymous Coward | about 3 months ago | (#46012627)

I saw a slide elsewhere which claimed to show the usual improvement for an APU with faster memory, followed by diminishing returns at the top end.
It went 1333, 1600, 1866, 2133, 2400, 2500.
2133 and 2400 showed a 3 FPS increase each, 2500 only 1 FPS, but that is a misreading of the data, as the last interval was 100MHz, not 267MHz.
Except for dedicated overclockers, I do not think running memory at 2666 is practical for now, though the near future may produce 2666 or 2933 DDR3.
What is certain is that DDR4 is just around the corner and will debut at 2400 and quickly hit 3000, so AMD may do a quick respin for a DDR4 controller, with any other easy mods that real usage suggests, giving a nice performance lift in the future.
I wonder if Mantle and HSA will smooth out and possibly reduce memory bandwidth requirements in the meantime, where they are used.

no wonder there's shortages? starving babys etc... (-1)

Anonymous Coward | about 3 months ago | (#46011849)

'Research conducted by the British charity Oxfam has concluded that the combined wealth of the world's 85 richest people is equivalent to that owned by the bottom half - in wealth terms —of the world's population.'

Re:no wonder there's shortages? starving babys etc (0)

Anonymous Coward | about 3 months ago | (#46011883)

Tax the rich! Feed the poor! Wait, Britain already does.

The rest of the world needs to play catch-up and join civilization already.

'other half' shrinking while getting fatter (0)

Anonymous Coward | about 3 months ago | (#46011945)

85 'familires' control at least 1/2 of everything & we get (to rent) the rest from sub-royals?

Re:'other half' shrinking while getting fatter (0)

Anonymous Coward | about 3 months ago | (#46011979)

Rent? There is no ownership! All land is rented from the eminent domain of the government. Feudalism lives on, in property taxes. Pay up, motherfucker.
