AMD, IBM Announce Transistor Advances

Jugalator writes: "AMD announces it has built a CMOS transistor with the highest switching speed in semiconductor history. The transistors are manufactured with 0.015-micron (15-nanometer) technology and allow a twenty-fold increase in transistors per chip, with a ten-fold increase in performance, compared to the transistors in use today. So far AMD has only produced a prototype, and larger-scale production is not planned until 2009 at the earliest. AMD will announce further information about its semiconductor research at the 2001 International Electron Devices Meeting today, December 4." schongo sent in a note about IBM's double-gate transistor. This and the recent Intel announcement are all tied to the International Electron Devices Meeting.
  • It's good to see Intel's primary competition coming up with innovations like these. Continued development on the part of AMD is really the key to keeping Intel from dominating the entire market.
  • Heat dissipation? (Score:4, Interesting)

    by Marx_Mrvelous ( 532372 ) on Tuesday December 04, 2001 @11:29AM (#2653876) Homepage
    I wonder what temperatures these will function under. Personally, I want to see light-based chips, due to what I hope will be a huge reduction of heat loss.

    Then again, on cold winter days it's nice to have a 900MHz space heater.
    • I like the heat - it keeps my room nice and warm.

      What I don't like is the white noise generator I have for a fan (7200 RPM Volcano 6 Cu+) on top of the CPU (Athlon 1.4 GHz). I've missed phone calls and the doorbell ringing because of that fan.
      • I don't understand this position. Do you pay your electric bill? Your heating bill? Electric heat is generally the most expensive way to heat your living space, and using a CPU instead of an electric heater has to be less efficient still.

        I understand that it is probably tongue-in-cheek, but still, you are paying the electric bill for that heat, and that cost is significant if the heat is enough to warm your room.
        • How can it be inefficient? Apart from the odd bit of sound, nearly all of the electricity going in is going to end up as heat, one way or another. It's almost as efficient as an electric heater.
          • Because you want to heat the air (or, actually your body) to keep you warm, not heat the floor under the CPU. Sure it comes out as heat, but if it gets conducted away through the floor, you don't feel it. An electric heater radiates a substantial portion in the infrared, so that you can feel it, and has a wide open grille to circulate room air through it. Its design is not constrained by EMI requirements to keep apertures minimized.

            Conduction through air is pretty lousy. That CPU can get pretty hot, and heat up the air nearby without heating up the air near you very much at all (the temperature gradient is steep). You need good *convection* to stir that hot air around to warm up the room as a whole, or *radiation* to pass through the air and warm the objects in the room.
            • You need good *convection* to stir that hot air around to warm up the room as a whole

              My computer is right next to me on the desk. The 7200 RPM fan is enough to stir up quite a bit of heat around me. Over a few hours of being in my room with the door shut the temperature rises significantly throughout the room.

              I don't use the computer just to heat the room, it's just a side effect.
              • My point is only that that "side effect" is actually only a symptom of a similar "side effect" on your electric bill. So it makes you feel warm, cozy, and AMD-loving, but leaves a hole in your wallet, whether you realize it or not.

                If you had a CPU that weren't so power-hungry, you could put on a sweater to keep warm, and keep your electric bill low.
                • A few years ago I had an Intel P166MMX which did the same thing (in a smaller room).

                  Any idea how much less power a good Intel CPU uses? It would have to be considerably less to make the higher costs of Intel (CPU and motherboard) worthwhile.
    • I don't actually use radiators any more... I heat my room with PCs. God bless AMD
    • Re:Heat dissipation? (Score:2, Interesting)

      by rossy ( 536408 )
      I'm guessing that at these small geometries, the heat per transistor is smaller due to the small currents involved. However the thermal density and switching speeds might wipe out all this gain. It's heat density per unit area that is making these chips efficient space heaters. I too like light-based chips. However there is a significant amount of energy required to lase the light in a laser diode. This keeps getting more efficient. I expect as Infiniband technologies pick up, there will be more and more folks playing with light, and we will start seeing some inroads in this area. I have stock in Corning. I figure they know glass cookware, so they should do OK making glass cable.
      • I thought Corning was nearly insolvent?

        p.s. Making glass cable is easy. I made some last night. Gather a hot furnace bit on the end of a punti, grab the tip of the hot glass with a pair of pinchers and run, RUN, RUN! We just call it stringer.
      • Corning spun off their Pyrex cookware and glass dishes, etc., to form WorldKitchen, Inc. [worldkitchen.com]. So they don't know as much about glass cookware as they know about optical fiber.
    • Oops.. I think that should have read "a great reduction of heat production." But you guys all get the idea :)
    • When a light beam is used to modify another light beam, it's usually done with some material that reacts to light.
      The material is heated by the light, so it generates heat.
      Light doesn't interact with light directly, so a light-based chip would really be a light / non-linear material / light chip.
      Usually the interesting effects in the materials used to modulate light are only "second order" effects, which means you have to use quite intense light to get anything useful.
      Intense light --> heat.
  • ...will continue to be right for a while. ;) (See Moore's Law [tuxedo.org] if you're unfamiliar. :)
  • But with all of the recent advances in technology of this nature, I doubt that we will ever see this hit the market. With an expected public release date of 2009 (at the earliest...), I would have to think that something a bit more advanced and more easily produced would be widely available and send this cool little advancement into obsolescence...
    • But with all of the recent advances in technology of this nature, I doubt that we will ever see this hit the market.
      Actually, this stuff probably will hit the market. It takes time for any new invention to get to a usable point, and the fact that they've got transistors working right now means they're well on their way to making microprocessors on this stuff. From my understanding of any technology, the products on the market are usually using the underlying technology developed several years ago; it takes that long to implement the product. Which is exactly why exotic items such as fiber optic interconnects and quantum computing will take a while to trickle down to the desktop. Making microprocessors is tough, especially with the kinds of sizes we're dealing with.
    • I have no idea why they keep posting articles like this to Slashdot. Chip design has always been about slow improvement; by the time this actually hits the streets, it just won't be that impressive.
      • lol. In each article they post like this, there are always 3 or 4 posts saying "why do we post this crap? we'll never see it and it'll never make it to production".

        Yeah, about 5% of things make it to production on this site... but otherwise there'd be like 3 or 4 posts a day on Slashdot... now that wouldn't be very interesting, would it? Besides, if you don't like posts about AMD and the like, just turn them off in the user settings. : )
  • by dfeldman ( 541102 ) on Tuesday December 04, 2001 @11:32AM (#2653893) Homepage
    As is par for the course with AMD, this advance is an impressive improvement over what their competitors at Intel are doing (especially lately). As is also par for the course, though, these transistors produce a great deal of heat. One of my co-workers once worked at another semiconductor firm that experimented with a similar technology, and said that the heat generated by these things is astronomical. (That should come as no surprise to overclockers, who know that the faster you run it, the bigger your heat problems become.)

    It is pretty obvious that AMD has some big heat issues. After all, Tom's Hardware was able to cook an AMD CPU and motherboard all at once just by removing the heatsink from the chip. Heat is a serious concern with these things.

    However, I am optimistic that AMD can solve whatever problems there are with this technology and bring it to the consumer eventually. Hopefully that will happen before Intel uses their size and budget to crush AMD permanently.

    df
    • Heat dissipation is an unfortunate (in this case) law of nature here (also called the laws of thermodynamics).

      The way transistors work just happens to waste a bit of energy when conducting (and not conducting, too). New technologies, including feature-size shrinkage and SOI, WILL help the power dissipation issue. At this feature size they _could_ put fewer transistors per square mm, get less power density, and still have more transistors than today, to decrease the heat output. But this won't happen, since people want max performance. They will always go for max performance within their heat and power budget (which is governed by air cooling right now).
      Now, reducing the power, or Vdd, will also reduce power density, but this requires tighter control over threshold voltage, and thus the oxide interface and possibly a thinner gate oxide. New technologies will answer these problems too.

      This all takes a long time. For now, just go buy one of those Sun systems that runs at 50W! :-)
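
      A rough sketch of how those knobs trade off, using the standard dynamic-power approximation P ~ alpha * C * Vdd^2 * f (all numbers below are made-up illustrative values, not figures for any real chip):

      # Back-of-the-envelope dynamic power: activity factor * switched
      # capacitance * supply voltage squared * clock frequency.
      # All values here are illustrative guesses, not measured data.
      def dynamic_power(alpha, c_farads, vdd, freq_hz):
          return alpha * c_farads * vdd ** 2 * freq_hz

      baseline = dynamic_power(alpha=0.1, c_farads=40e-9, vdd=1.75, freq_hz=1.4e9)
      lower_vdd = dynamic_power(alpha=0.1, c_farads=40e-9, vdd=1.2, freq_hz=1.4e9)
      print(f"baseline ~{baseline:.0f} W, lower Vdd ~{lower_vdd:.0f} W "
            f"({(1 - lower_vdd / baseline) * 100:.0f}% less from Vdd alone)")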
    • Tom's Hardware was able to cook an AMD CPU and motherboard all at once just by removing the heatsink from the chip.

      And if you had a clue, you would know that the manual for AMD Athlon chips states that, without a heatsink, the chip will fry in eight seconds. The same would happen with an Intel CPU, or a G4. Running any chip outside its environment (say, without a heatsink) will not be good for it. This isn't due to any inherent 'heat problem'. With a heatsink, an Athlon runs at about 50K below a P3.
      • I think if you look at the datasheet for the G4 you will find that it has the lowest power dissipation of all the CPU ICs out there. From my research, the heaters are (excluding Sun): the Compaq Alpha, the AMD K5/K6 (they vary by process used), the Intel Pentium III/Celeron (vary by process used), and the Motorola PPC line. All the processors follow similar power/speed curves, with the Motorola PPC being the coolest, as it is a RISC CPU and has slower clock speeds for what appears to be similar performance (based on market acceptance, not benchmarks).
    • by Greyfox ( 87712 )
      Liquid Nitrogen is CHEAP! Shouldn't take too much to design a case cooled with the stuff! And as an added bonus, the cloud of vapor your system releases when you fire up a spreadsheet recalc will be WAY COOL!
      • Re:Bah! (Score:2, Funny)

        by rossy ( 536408 )
        So, let me see.. I'm moving into my new house in 2010, called the power company, called the phone company, called the cable company (broadband), and the gas company... say gas company, can I have both hot and cold gases?
    • Heat is proportional to power consumed, and power is proportional to voltage squared and frequency. The problem isn't heat though, exactly, it's heat density. A hot plate makes about 10 watts/cm^2. A P-III makes about 30. If current trends continue (faster switching speeds, smaller transistors, with only slight decreases in voltage differential), then a 15nm process will produce more heat per area than a jet engine.

      Yes, heat is a problem :)

      Expect airconditioning to be a standard feature on new computers soon.
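
      For a sense of scale, here's that comparison as rough numbers (the wattages and areas below are ballpark guesses for illustration, not datasheet values):

      # Rough power density in W/cm^2; all wattages and areas are ballpark
      # guesses, not datasheet figures.
      chips = {
          "hot plate":        (1000.0, 100.0),   # watts, cm^2
          "P-III (~1 GHz)":   (30.0,   1.0),
          "Athlon (1.4 GHz)": (70.0,   1.3),
      }
      for name, (watts, area_cm2) in chips.items():
          print(f"{name:18s} ~{watts / area_cm2:5.0f} W/cm^2")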
  • ten-fold increase ... not planned until 2009 at the earliest

    Sounds like the rate of performance increase is starting to drop. Isn't it supposed to double every 18 months? Shouldn't we then expect a 25-fold increase between now and 2009? (2^(7 years * 12 months / 18 months))

    Hope they have some other tricks to make chips faster!
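
    The arithmetic checks out, at least, assuming a clean 18-month doubling and counting it as 7 years, as above:

    # Expected improvement under an 18-month doubling cadence over 7 years.
    years, months_per_doubling = 7, 18
    doublings = years * 12 / months_per_doubling
    print(f"{doublings:.2f} doublings -> about {2 ** doublings:.0f}x")
    # ~4.67 doublings -> ~25x, versus the ten-fold figure in the announcement.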
  • And this [theregister.co.uk] is interesting too - "Rambus founder's Matrix unveils first 3D memory chip" (The Register [theregister.co.uk]). Interesting stuff; it kinda reminds me of Orson Scott Card's multidimensional memory in "Children of the Mind".
  • Follow the link [slashdot.org] (from this article Intel Cites Breakthrough In Transistor Design [slashdot.org])

    Sorry, that's it, no more accurate predictions until next year!

    Ok, so I successfully predicted this, what does it say? The game of one-upsmanship is to reassure investors that R&D proceeds during uncertain economic times? Though we aren't selling much, we're preparing for the future, just like our competition is? Sounds good to me.

  • Just an FYI: 3.3 terahertz is a 303-femtosecond period. How many electrons can you move around in 303 femtoseconds? If the clock is running at 3.3 THz (assuming this is not a clockless chip), it leaves about 102 femtoseconds for a rise-time measurement, and 101 femtoseconds for a fall-time measurement (looking at the pulse). Projecting further, I would expect the zero-to-50% rise time to be about 50 femtoseconds. I'm supposed to know this, but can't recall the rule of thumb for the oscilloscope bandwidth I would need to look at this; I'm guessing I would like more than 4 terahertz... I'd really be comfortable with about 8-20 terahertz of bandwidth on the scope... or something that could digitize the signal at 20 terahertz or so (20 THz = 50 fs). I'm sure this analysis is flawed, but I just like to imagine that I could actually look at a 100 fs rise time with a scope in about 5-10 years. -- Ross
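
    A quick check of those numbers, using the common 0.35 / t_rise (10-90%) rule of thumb for oscilloscope bandwidth (treat the exact figures as rough estimates):

    # Period of a 3.3 THz switching rate, and the scope bandwidth needed to
    # resolve edges of a given rise time (classic BW ~ 0.35 / t_rise rule).
    f_switch = 3.3e12                       # Hz
    period_fs = 1e15 / f_switch             # ~303 fs, as stated above
    print(f"period ~{period_fs:.0f} fs")
    for t_rise_fs in (100, 50):             # assumed 10-90% rise times
        bw_thz = 0.35 / (t_rise_fs * 1e-15) / 1e12
        print(f"t_rise = {t_rise_fs} fs -> roughly {bw_thz:.1f} THz of scope bandwidth")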
  • Anybody have any idea where this puts the projected number of transistors per cm2?
  • by autopr0n ( 534291 ) on Tuesday December 04, 2001 @12:04PM (#2654060) Homepage Journal
    Uh, I don't see why this is considered revolutionary. More's law states that chip density doubles every 18-24 months.

    Well, 2009 is in 8 years, or 4 doublings if you're going by the 24-month rule. Top-of-the-line chips now are minted at 130 nanometers. Double once and you get 65, double again and you get 32.5, and double the final time and you get 16 nanometers... and the AMD transistor is 15. Go by the 18-month rule and you get a bit more than 5 doublings.

    In other words, while it's great that they haven't hit the wall yet, this is really all they're telling us. CPU speed has been improving predictably for decades, and this is no exception.

    If they'd announced that these transistors were going to be used in new Athlons in Q1 2002, it might actually have been news :P
    • by Anonymous Coward
      And an AC has to point out basic math:

      130 to 65 is four times the density. Likewise with 65 to 32 and 32 to 16. Of course, you could also go by the twelve-month rule.
    • Uh, I don't see why this is considered revolutionary. More's law states that chip density doubles every 18-24 months.

      First, it's Moore's Law. Second, calling it a law is ridiculous, because it's entirely dependent on continuing R&D, as well as bringing such R&D into production. Taking it for granted that you'll have your 40GHz CPU in 9 years is really quite naive.

      Personally, I can't wait until Moore's Law fails (either by falling short of or totally surpassing the prediction), so that people stop using it to downplay the really quite amazing research and amount of work that goes into bringing about such results.

      Thanks,

      Mike.

      • First, Moore's Law refers to the doubling of the density, not the speed of the chip. So your "40GHz" example doesn't fly.
        Second, it is commonly referred to as a law, it wasn't the original poster's idea.
        Third, once we get past silicon for chips, I think Moore's Law is out the window.
        • First, Moore's Law refers to the doubling of the density, not the speed of the chip. So your "40GHz" example doesn't fly.

          I understand that Moore's law as he stated it applies to the number of transistors on a chip, but also that the general geek public takes Moore's Law to apply to the speed of a processor, which remarkably also seems to follow the 18-24 month rule.

          Second, it is commonly referred to as a law, it wasn't the original poster's idea.

          I haven't blamed the original poster for coining the term, but I can see how it appears so since I said it after the slightly anal spelling correction.

          Third, once we get past silicon for chips, I think Moore's Law is out the window.

          I agree, and getting away from silicon is just one variable that can make the whole idea useless. Others include quantum computing devices coming into play, 3D chip design, and clockless chips (referring to the speed-doubling corollary to this "law").

          Thanks,

          Mike.

      • Personally, I can't wait until Moore's Law fails (either by falling short of or totally surpassing the prediction), so that people stop using it to downplay the really quite amazing research and amount of work that goes into bringing about such results.

        Ahhh... how soon they forget.

        The first real wave of RISC CPUs did shatter Moore's law. Performance jumped from 4 MIPS to 12 MIPS. Prices dropped from $70,000 to $10,000. It was truly cool.

        Of course, in the following years CISCs have mostly died off (seen any new ones launched? I have seen a lot die), except the x86, and to a lesser extent the 370/390/whatever-z-or-x-name-they-have-now; and the x86 has even caught up to the RISCs, and in almost all cases passed them (it's amazing what you can do with 10x the R&D budget...).

        I think there have been a few places where things underperformed, like from the 386 to the 486 maybe, or from the 486 to the P1 (and I think that was due to the length of time between the 486 and the P1), but I forget exactly when, mostly because nobody kept pointing it out like Sun did with "RISC is better, eat our dust DEC".

        If you look at the SRAM market you will find similar events.

        Still as a long term thing Moore's "Law" is amazingly accurate. I think because it functions as a goal for R&D managers. If they can't keep up with the "Law" they tell their bosses that they need more money or the company will die, if they have managed to keep up with it they don't fight as hard for the budget (or they do, but they have less ammo to fight with). I know R&D budget isn't everything, but it is a powerful force.

        • Ahhh... how soon they forget

          The first real wave of RISC CPUs did shatter Moore's law. Performance jumped from 4 MIPS to 12 MIPS. Prices dropped from $70,000 to $10,000. It was truly cool.

          That IS pretty cool, but what I consider to be the "shattering" of a given law is when the majority no longer considers it valid, and that's unfortunately not the case.

          I think because it functions as a goal for R&D managers

          I blame the marketing department, which creates the consumer demand by telling us that buying a 2GHz Pentium IV will make the internet faster.

          No offense to Mr. Moore, who probably never anticipated that his comment would be taken so seriously or adhered to, but it'd be nice if the focus was taken away from how many clocks per second or transistors there are and put on what is done with each tick (or, per recent news, whether the ticks are necessary at all), and whether each transistor is used efficiently (FPGAs, anyone?). I hope to see a market where results from open, approved benchmarking methods based on real criteria are the main selling point. Guess I shouldn't hold my breath, eh? :o)

    • Except that feature size is linear while area is square. Naively, then (assuming components per area is nearly exactly inversely proportional to the square of the feature size), to double the component count, the feature size has to decrease by a factor of the square root of 2. Assuming 4 'doublings' in 8 years, that gives a feature size around 30nm or so (5 gives about 24nm). So 15 is a little ahead of the curve by that measurement. Of course, component count _isn't_ exactly inversely proportional to the square of the feature size, so they'll probably be right around par.

      And who said two wrongs don't make a right?
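
      A sketch of that scaling argument, starting from the 130 nm figure quoted above (purely illustrative):

      # Feature size after n density doublings, assuming components per area
      # scale inversely with the square of the feature size.
      from math import sqrt

      start_nm = 130.0
      for doublings in (4, 5):
          feature_nm = start_nm / sqrt(2) ** doublings
          print(f"{doublings} doublings -> ~{feature_nm:.0f} nm")
      # 4 doublings -> ~32 nm, 5 -> ~23 nm; a 15 nm transistor is ahead of that curve.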
  • who thinks, "Good, now we can overclock our "CMOS" and get to a less than 5 second boot time."

    Only down side I see is needing a CMOS heat sync and a 2 1/4inch, 5K rpm fans to keep the thing cool.

    Think about it, seriously for a minute:
    We have heat syncs and fans on; processors, chipsets, powersupplies, memory (sometimes), case fans, hard drives (on occasions)...what is missing?

    Oh, yeah, we need some of this action on the CMOS!!!

    OOoooKaaayyyy, works for me.

    Cheers,

    Moose.
    • CMOS (Complementary Metal-Oxide-Silicon) refers to the construction of the transistor, not its possible applications in PC BIOSes.
      • Actually (IIRC) fuzz6y,

        CMOS stands for Crystal Metal Oxide Semi-conductor.

        Crystal (not the little hamburger thingies) or Crystalline...same-same.

        Of course, you may very well be correct also.
        My acronyms definition came from a computer definition book I found one day, circa 1991'ish.

        Same thing with the .gif extension. Some people call it "Jif" as in the peanut butter (why? Dunno); I've said "Gif" as in gift without the t. Same vein as jpeg/jpg being pronounced "Jay-Fif"... that one still boggles the mind, but the metadata/internal code is "JFIF"... I believe.

        Dang, waaaay too much info. Did not mean to babble on so much.

        cheers,

        Moose

        .
  • You will invent a transistor that reflects the size of your penis
  • I believe AMD / IBM should develop this to the point where it could be released in, say, 3 or 4 years. Right now the chip market is moving to the 0.13-micron mark. Imagine an Athlon with a 0.015-micron setup. AMD would have such an advantage over Intel at that point that it would not be funny. If they don't implement this until 2009, then Intel might beat them to the punch by developing their own 0.015-micron technology, or just paying IBM cash for the designs.

    I can see it now: a dual AMD Athlon system running 10 GHz per processor on a 0.015-micron chip scale. Now that's speed.

    my 2 cents plus 2 more
  • Faster (Score:2, Funny)

    by Anonymous Coward
    This press report is overrated. I have created a car that can travel 10000 miles powered only by a bowl of salsa. The secret is an engine that is 4000 times faster than what is currently available, using a technique called "warp drive." I expect to mass-produce these vehicles in 2061.

    - Zephram
  • by jdc180 ( 125863 )
    Semiconductor companies are constantly announcing "breakthroughs". In my eyes, I don't see them as major breakthroughs, simply evolutionary steps. McDonald's constantly refines its processes, changes recipes, etc., but they don't run a commercial unless they come up with a new sandwich. Develop new materials besides Si and SiO2 and I might just buy some stock!
    • AMD only announced this because Intel made their lame "breakthrough" announcement last week. Intel stock went up, AMD stock went down. So AMD had to pull out one of their many similar "breakthrough" technologies and put it on the table to make their stock go back up. Announcements like this have NOTHING to do with technology and everything to do with impressing the really amazingly stupid market analysts.
      • Now people, I give to us all: the transistor! My silicon is better, faster and able to do xx calcs in x seconds... Hmmm, sounds like AMD/Intel are trying to outdo each other in market ploys instead of REAL WORLD designing!

        How fast any device is or can be in the PC world is mainly determined by the whole system's architecture, and NOT the CPU alone. Design a GOOD 64-bit bus, run the damn thing AT the CPU clock and PORT the useless data not related to ops back to the system cache, then give the CPU freedom to calc the needed ops and route that data to the card/bus that makes use of it, while also keeping the bus at its rated throughput and not giving in to high-latency junk ram(bus)... etc. Proper extensions should be used, and exclusives should be relegated to some form of low-order cache for second-tier processing AFTER the main (core) job is completed. Processing power and throughput occur only when the clock cycles are utilized properly and not clogged up with silly instructions forcing CPU time to drop from executing a dump of some irrelevant hex format, or an exclusion that is not even being currently used, but stored... for what, CPU posterity?

        Since I am NOT a programmer, but I do run servers and yes, LANs too... not to mention Windoze; I use ALL my apps at the same time, and never 1 then 2 then 3 and so on... throughput is MAX USE, MAX TIME and at MAX SPEED... THAT, my friends, IS throughput, not raw CPU clock cycles!

        And by the way... two separate CPUs running at the same speed do NOT make 4.4 GHz! The sum of BOTH processors is STILL 2.2 GHz! If you added a clock that both CPUs used simultaneously and together, THEN you would get that clock speed, but NEVER when there are two processors side by side, doing separate tasks. "Clock doubling" by pairing CPUs would be great in theory, it's just not reality. If that were the case, does my DMA timer in my "ancient" Zenith TurboSport now run twice as fast and allocate DMA timing faster as well? What works for one "must" work for the other as well; otherwise, that theory is junque!
  • by Anonymous Coward on Tuesday December 04, 2001 @12:29PM (#2654167)
    While it's wonderful that they can create a 300fs inverter, you also have to consider that they have yet to prove that they can actually mass-produce these structures to get adequate yields. This is not a trivial operation. Bell Labs, IBM, Intel, and AMD have all announced ultra-small and/or ultra-fast transistor structures, but they all admit that they are far from mass-producing them on a wafer/die.

    Also, the rest of the componentry in a computer or other electronic structure, and how it will communicate all of these calculations, will be a problem too. Already, integrated-circuit I/O circuits are having trouble transporting data back and forth on a PCB.

    ALSO, consider that the photolithography tools that are supposed to support the next generation of smaller structures are already off track. 157nm lithography tools have been delayed due to development and financial difficulties (see SiliconStrategies.com [siliconstrategies.com]). My personal guess is that the vertical MOSFETs will be the winners in the short term because, until they get other factors in line, they will have to make do with what they've got, though *again* the additional processing required for the wafer will impact yields, so it will be an expensive technology to implement either way.
  • by Lumpy ( 12016 ) on Tuesday December 04, 2001 @12:46PM (#2654232) Homepage
    This is great: we have transistors that are faster than anything else humans have developed so far.

    But this really doesn't give us any leap in abilities. What about massive parallel processing? What is holding the human race back from creating a chip that is basically 16 or 32 separate but equal processors?

    Linear processing is fine for things like calculators and basic tasks. When do we get our hands on some real leaps in processing?

    Does anyone know of any links that point to research in this?
    • But this really doesn't give us any leap in abilities. What about massive parallel processing? What is holding the human race back from creating a chip that is basically 16 or 32 separate but equal processors?

      The same thing that is keeping us from using massive Beowulf-type clusters (or other parallel processing systems) more...

      ... better compilers able to take advantage of the technology, and that know WHEN AND HOW to take advantage of it.
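
      For what it's worth, the hand-written version of what such a compiler would have to do automatically looks something like this (the workload and the 16-way split are made up purely for illustration):

      # Minimal sketch of splitting an embarrassingly parallel job across
      # 16 workers; the per-chunk "work" below is a stand-in.
      from multiprocessing import Pool

      def crunch(chunk):
          return sum(x * x for x in chunk)   # stand-in for real per-chunk work

      if __name__ == "__main__":
          data = list(range(1_000_000))
          chunks = [data[i::16] for i in range(16)]   # 16 "separate but equal" slices
          with Pool(processes=16) as pool:
              total = sum(pool.map(crunch, chunks))
          print(total)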
  • It's quite in line with Moore's law, which states that the number of transistors per chip will double every 18 to 24 months, and therefore expects this number to be 2^(8*12/24) = 16 to 2^(8*12/18) = 40 times larger in 8 years. AMD promises a 20-fold increase, which is within this range.
  • ...For optical processors, when we won't need any transistors. I really don't think that a 0.015-micron process will be anything competitive in 7 years. If they already have the process figured out, you'd think they'd be manufacturing them. Why not release it?
  • With Cyrix gone, and AMD kind of falling behind Intel and now resorting to tactics similar to the ones Cyrix used before they went under (marketing their CPUs at a higher apparent clock speed than they actually run), does anyone think AMD will be around in 2009 to capitalize on this research?
  • Recently, everyone has been saying that Moore's law won't hold out much longer. And they've been saying it ever since he postulated it. I don't think it'll stop any time soon.

    The 386s, when matured, were built on a 1-micron process, had 275,000 transistors, and ran at 33 MHz. Now, on a 0.18-micron process, we have chips with 42 million transistors running at 2 gigahertz.

    So, by shrinking the transistors to roughly 1/6th of their former size, we got 153 times the packing ability and 60 times the frequency. And the transistors they're talking about here are only 1/10th the size of the current "high-tech" transistors. That means we could pack over 100 times more transistors on a package, and run them 100 times faster. Not bad. But I suppose they'll need a safety device to shut them down if the flow of coolant ever stops. : )
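
    A quick check of that arithmetic (transistor counts and clocks as quoted above; the projection naively assumes density scales with the inverse square of feature size):

    # Observed 386 -> 2 GHz era scaling, plus a naive projection to 15 nm.
    old_nm, new_nm = 1000, 180            # 1 micron vs 0.18 micron
    transistors = 42_000_000 / 275_000    # ~153x more transistors
    clock = 2000 / 33                     # ~61x the clock rate (MHz)
    density = (old_nm / new_nm) ** 2      # ~31x from the shrink alone
    projected = (new_nm / 15) ** 2        # ~144x more density again at 15 nm
    print(f"{transistors:.0f}x transistors, {clock:.0f}x clock, "
          f"{density:.0f}x from shrink alone, next shrink ~{projected:.0f}x")
    # The gap between ~31x and ~153x presumably came from bigger dies and denser layouts.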

    steve
  • I can't wait to see this stuff lithographed on the Silicon-on-arsenide (sp?) technology.
  • While suffering from overheating and architecture flaws as old as the 1970s, the x86 might last into the 2010s or 2020s, but it requires more and more expensive processes, innovations in the construction of transistors, and other envelope-pushing procedures. Meanwhile other architectures are enjoying higher instructions-per-clock, far less power consumption and heat production, and greater vectorization.

    By the '50s, the x86 by Intel/AMD/whoever else will be a memory. The "other" major platform seems to have less of a problem with switching to new architectures every few years, whenever it becomes practical. Will Wintel users be lucky enough to have "Moore-compliant" emulation of the Pentium XIXX and the next CPU they're forced onto?
  • Intel's new terahertz transistor promises smaller, faster and lower-power. AMD's new transistor is smaller and faster, but did nothing much about the power. :(

    Wouldn't it be just too much to hope for, that they both try to incorporate similar ideas from each other into their newest products? Imagine, much smaller, much faster *and* lower power consumption. To me, that sounds like a great idea. 'S a pity that our "compete at all costs" system won't really allow this to happen, though *sigh*
  • Why is it that whenever I buy a computer, something like this happens where it will be obsolete in a few years? Nothing like buying a $2k paperweight!
  • AMD announces it has built a CMOS transistor

    This is a good effort, considering there is no such thing as a CMOS transistor... CMOS refers to two transistors hooked up so the output can swing fully high and fully low. Making this out of one transistor should be a lot bigger news than this article does justice to...

    Stupid chip manufacturers don't know what they are talking about...
  • Imagine a beowulf cluster of those! ;P

    BlackGriffen
  • It will still take Windows 2009 five minutes to boot.
  • All those claims may sound good now, but if it will take /at least/ 8 years before production begins, I wonder whether by that time we will have made enough advances to reach that performance level already.

    It's very unusual for companies to research things they don't plan to produce within 5 years...

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...