
Could HP Beat Moore's Law?

CmdrTaco posted more than 7 years ago | from the i'll-believe-it-when-it-boots dept.


John H. Doe writes "A new type of nano-scale architecture developed in the research labs of Hewlett-Packard could beat Moore's Law and advance the progress of microprocessor development three generations in one hit. The new architecture uses a design technique that will enable chip makers to pack eight times as many transistors as is currently possible on a standard 45nm field programmable gate array (FPGA) chip."


176 comments


Maybe (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17646168)

They could beat off Moore. He is still alive, and would probably shoot his sperm in his keyboard.

I dunno... (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#17646520)

He is pretty old. He may be shootin' blanks at this point.

Moore's law is not about inefficient FPGA intercon (4, Insightful)

chriss (26574) | more than 7 years ago | (#17646180)

Since the wiring in an FPGA is not fixed, it has to integrate more flexible routing resources. According to TFA this routing takes up 80% to 90% of the silicon, leading to a much worse ratio of wiring to logic transistors compared to "normal" chips. HP is developing something it calls a "field programmable nanowire interconnect" (FPNI), which consumes a lot less space. So they are not beating Moore's law; they are improving chip area use in FPGAs to approach what today's dies with fixed routing already achieve.

And even if you are desperately seeking more efficient FPGAs, you'd have to be patient. TFA mentions that they are targeting a 25-fold increase in packing density compared to today's 45nm chips by 2020. That's thirteen years, which in Moore's-law steps means about eight 18-month periods, each doubling density. My math may be flawed, but shouldn't that mean that by then we'd have 2^8 = 256 times today's density in the normal process?
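The compounding in that paragraph can be checked directly; a quick sketch, assuming a strict 18-month doubling period (the years and horizon are taken from the comment itself):

```python
# Moore's-law-style compounding: density doubles every 18 months.
years = 2020 - 2007          # TFA's target year minus the article's year
months = years * 12
doublings = months / 18      # doubling periods in thirteen years
density_factor = 2 ** int(doublings)

print(doublings)             # about 8.67 periods
print(density_factor)        # 2**8 = 256, matching the poster's estimate
```

So at the historical doubling rate, fixed-routing chips would indeed be roughly 256 times denser by 2020, dwarfing the 25-fold FPGA improvement.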

Re:Moore's law is not about inefficient FPGA inter (1)

martyros (588782) | more than 7 years ago | (#17646246)

So, who uses FPGAs in a big way? Whom is this likely to affect?

Re:Moore's law is not about inefficient FPGA inter (4, Informative)

quarrel (194077) | more than 7 years ago | (#17646310)

Xilinx is the world's largest producer of FPGAs.

Their biggest customer? Cisco. (by far)

The big-iron routing guys use heaps of them in high-end devices.

--Q

Re:Moore's law is not about inefficient FPGA inter (3, Informative)

TheRaven64 (641858) | more than 7 years ago | (#17646340)

Anyone who wants a low-volume run of custom chips. For runs up to a few thousand, FPGAs are cheaper than ASICs (and have the advantage of being firmware-upgradable). If you don't need latest-process speed or power efficiency then FPGAs are likely to be good enough. Take a look here [xilinx.com] for some of the people who use them.

For instance, the Open Graphics Project (2, Interesting)

Lonewolf666 (259450) | more than 7 years ago | (#17646562)

See http://wiki.duskglow.com/tiki-index.php?page=Open-Graphics [duskglow.com].
The development board is going to use an FPGA, because a custom chip design would be too expensive. Later, they plan to produce it as an ASIC to improve the price/performance ratio. With better FPGAs, they could stick with the FPGA for the end-user version, which would help reduce investment costs.
Quote about the ASIC design:
RTL for the ASIC will be released under a dual license (GPL and proprietary). There will be a time-delay on some parts (to deal with investor concerns over the $millions necessary to invest in fabrication), but once the investment is recouped, the code will be released. (We need a law firm to escrow the RTL for us, pro bono.)

Re:Moore's law is not about inefficient FPGA inter (2, Insightful)

zeldor (180716) | more than 7 years ago | (#17646684)

There are lots of uses for FPGAs in radar processing and image recognition; you can even run small floating-point kernels REALLY fast on FPGAs if done correctly.
Granted, on most of them you have to know Verilog or VHDL, but there are a couple of companies with fully functional C/Fortran programming environments that compile all the way down onto an FPGA. Using those, general codes can run faster on FPGAs.
Plus they are really low power: a room full of general-purpose computers delivering a teraflop takes large amounts of power, while FPGA-based systems take roughly 1/20th the watts.

Re:Moore's law is not about inefficient FPGA inter (1)

AKAImBatman (238306) | more than 7 years ago | (#17647120)

granted on most of them you have to know verilog or vhdl to use them

JHDL [jhdl.org] is my favorite alternative to these languages. Rather than embedding the behavior in the language itself (which I personally think is the source of most confusion and poor HDL design) JHDL provides you with Java APIs that can be used to construct the circuit.

It works surprisingly well, in part because circuit design is more object oriented to begin with. Just like in good OOP design, you want your circuits to be simple, black-box designs that will always produce output Y for input X. More complex circuits can be designed by simply "snapping together" smaller circuit Objects to create larger, more complex entities.
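That "snapping together" style can be illustrated with a tiny sketch. This is not JHDL's real API (JHDL is a Java library); it's a hypothetical Python analogue of the idea that composite circuits are built by wiring smaller black-box gate objects together:

```python
# Hypothetical sketch (NOT JHDL's actual API): structural, object-oriented
# circuit description, where composite circuits are built from gate objects.

class Gate:
    def eval(self, *inputs):
        raise NotImplementedError

class And(Gate):
    def eval(self, a, b):
        return a & b

class Xor(Gate):
    def eval(self, a, b):
        return a ^ b

class HalfAdder(Gate):
    """Composite circuit: snaps together two smaller gate objects."""
    def __init__(self):
        self.sum_gate = Xor()
        self.carry_gate = And()

    def eval(self, a, b):
        # Output Y for input X, always: (sum, carry)
        return self.sum_gate.eval(a, b), self.carry_gate.eval(a, b)

ha = HalfAdder()
print(ha.eval(1, 1))   # (0, 1): sum 0, carry 1
```

A full adder would then be composed from two HalfAdder objects plus an Or, and so on upward: the same black-box discipline at every level.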

Re:Moore's law is not about inefficient FPGA inter (1)

Manchot (847225) | more than 7 years ago | (#17646916)

FPGAs are sweet. In addition to what the sibling posts have said, FPGAs are great for prototyping, because programs running on them can be implemented so quickly and easily. Finite state machines are a cinch on FPGAs, which makes them perfect for embedded systems. Plus, when programming them, there's the added benefit that you don't have to worry about the complexity of an actual processor or microcontroller: no stacks, no instruction sets, no interrupts, etc. Obviously it comes with a trade-off of processing power, but it can often be worth it.

Re:Moore's law is not about inefficient FPGA inter (0)

Dark_MadMax666 (907288) | more than 7 years ago | (#17646282)

Exactly. Also, FPGAs have a rather specialized use, as for the majority of applications such flexibility is not really needed. In the end they do essentially the same thing (Turing machines can emulate other Turing machines, remember), and the tradeoffs needed to make hardware flexible IMHO are just not worth it for general computing. Better to make a faster, more densely packed chip than a reconfigurable one.

Re:Moore's law is not about inefficient FPGA inter (1)

fireboy1919 (257783) | more than 7 years ago | (#17646982)

Also FPGA have a rather specialized use, as for majority of applications such flexibility is not really needed. ...for general computing.

I don't think you quite understand the meaning of "specialized." More general is sort of the opposite of more specialized. FPGAs are less specialized than pretty much everything else.

I imagine they'll end up seeing a lot of neat uses when they're cheap and small enough to replace MCUs - which are currently what people use when they want to do generic processing of various things from various locations.

Re:Moore's law is not about inefficient FPGA inter (1)

Dark_MadMax666 (907288) | more than 7 years ago | (#17647446)

No, it's you who didn't quite understand: microprocessors are already flexible enough by their nature (hence my reference to Turing machines emulating other Turing machines). The need to actually have flexibility in hardware is narrow and specific (specialized), since in the majority of cases doing the "flexible" part in software makes a lot more sense.

I am not sure what you mean by "generic" processing, as a standard CPU is pretty generic. Now if you're talking about microcontrollers, DSPs, ASICs, etc., FPGAs make sense there. But those are specialized uses.

Re:Moore's law is not about inefficient FPGA inter (1)

dave420 (699308) | more than 7 years ago | (#17648018)

They are more generalised chips, yes, but that makes their use more specialised :) The two are not mutually exclusive. Just as, say, your pair of glasses is pretty specialised - they fit on your face and hold your lenses - that's all they do. However the Optician has a pair of adjustable specs, which can fit on anyone, and hold any lenses. That pair of specs is far less specialised in ability, but is far more specialised in usage.

Re:Moore's law is not about inefficient FPGA inter (1)

zeldor (180716) | more than 7 years ago | (#17646468)

I always thought 'fold' meant "doubled this many times" or 2^24 in this case.

Re:Moore's law is not about inefficient FPGA inter (1)

chriss (26574) | more than 7 years ago | (#17646676)

Me bad. TFA:

... by 2020 using 4.5nm wires it should be possible to pack in the same amount of transistors in a space of just 4% of what is currently possible on a 45nm

So it should say 25 times as dense.

What REALLY matters (0)

Anonymous Coward | more than 7 years ago | (#17646542)

That is all well and good but will it ultimately result in faster pr0n downloads for me?

Re:Moore's law is not about inefficient FPGA inter (1)

rm999 (775449) | more than 7 years ago | (#17646898)

I entirely agree. Moore's law is about general ICs, not FPGAs.

But that does not mean this is insignificant. FPGAs are extremely useful in many applications, but cost and transistor count hold them back for a lot of others. An increase in transistor density by 3 orders of magnitude is significant enough that it could make FPGAs a viable option for a lot more people.

Too bad the article made no mention of the effect on cost ;)

Re:Moore's law is not about inefficient FPGA inter (1)

Bobby Mahoney (1005759) | more than 7 years ago | (#17647726)

They're more like... guidelines! ARGHH MATEY!!!

Obligatory (0, Redundant)

N8F8 (4562) | more than 7 years ago | (#17646198)

Can You Imagine a Beowulf Cluster of These?

Re:Obligatory (2, Informative)

AKAImBatman (238306) | more than 7 years ago | (#17646636)

Can You Imagine a Beowulf Cluster of These?

Yes, actually. [uni-sb.de]

The RPU is a fully programmable ray tracing hardware architecture, with support for programmable material, geometry and lighting. The RPU combines the efficiency of GPUs with the advantages of ray tracing. The instruction set of the RPU is GPU-like, which is optimal for shading purposes. In addition the RPU supports fast ray traversal through a k-D tree using a dedicated hardware unit, and recursive function calls, useful for recursive ray tracing. To increase efficiency, 4 rays are always handled in a packet, and multi-threading allows for high utilization of the hardware units.

A working prototype of this hardware architecture has been developed based on FPGA technology. The ray tracing performance of the FPGA prototype running at 66 MHz is comparable to the OpenRT ray tracing performance of a Pentium 4 clocked at 2.6 GHz, even though the memory bandwidth available to our RPU prototype is only about 350 MB/s. These numbers show the efficiency of the design, and one might estimate the performance levels reachable with today's high-end ASIC technology. High-end graphics cards from NVIDIA provide 23 times more programmable floating point performance and 100 times more memory bandwidth than our prototype. The prototype can be parallelized across several FPGAs, each holding a copy of the scene. A setup with two FPGAs delivering twice the performance of a single FPGA is running in our lab. Scalability up to 4 FPGAs has been tested.

BTW, am I the only one who thinks it darn cool that the SaarCor team does their work in JHDL rather than VHDL or (ugh) Verilog? I wonder if the RPU is also JHDL?

Why a law (4, Insightful)

gravesb (967413) | more than 7 years ago | (#17646202)

I never understood why it was called a law. It was an incredibly accurate prediction, but there was nothing holding it there. I would think that any dramatic increase in technology would lead to a jump larger than Moore's law.

Re:Why a law (2, Informative)

fitten (521191) | more than 7 years ago | (#17646264)

It's a prediction and actually a self-fulfilling one, to some degree. In fact, it's as much, or more, about economics than technology. If you look, the original wording even states "cost". Upgrade too fast and you'll go broke because people won't upgrade with you that fast (they'll start skipping 'generations' in their upgrades).

Re:Why a law (1)

Vreejack (68778) | more than 7 years ago | (#17647188)

Natural laws are based on observation. Moore's law has held to this observation (with some slight tinkering) for decades, now.

Natural laws are not only useful for their predictive feature but for the fact that their existence cries out for an explanation. The parent refers to a popular one: that Moore's law is a self-fulfilling prophecy because of social interactions. That probably makes it less reliable than g = -9.8 m/s^2, but everyone seems to know that and knows how much faith to put in it.

Re:Why a law (1)

Veetox (931340) | more than 7 years ago | (#17647444)

Why not call it a "theory"? It is a manner of interaction (http://en.wikipedia.org/wiki/Theory) that has been observed, and that could still be falsified, but hasn't yet. Perhaps it isn't a law, because it isn't empirical fact? HP could have put us in a place to falsify Moore's Law; it only remains to be seen how the new technology is really implemented and distributed. But I won't hold my breath: Moore's "Law" does have a strong history behind it.

Re:Why a law (1)

khallow (566160) | more than 7 years ago | (#17647822)

Name a natural law that isn't a theory. Why should we single out Moore's Law for special treatment?

Re:Why a law (3, Funny)

Colonel Angus (752172) | more than 7 years ago | (#17646280)

Sounds better than Moore's Prediction?

Re:Why a law (1)

archen (447353) | more than 7 years ago | (#17646812)

Depends on how literally you take the term "law". It's about as law-like as, say, Murphy's law. Let's not even get started on how people misquote the law in ways that have nothing to do with transistors...

Re:Why a law (1)

kabocox (199019) | more than 7 years ago | (#17646954)

I never understood why it was called a law. It was an incredibly accurate prediction, but there was nothing holding is there. I would think that any dramatic increase in technoloby would lead to a jump larger than Moore's law.

Shh, it's just a trend. It could have been wrong, or we could have hit a physical limit. One day we will. I like to think of Moore's law as more of a goal post for the electronics industry. They have to double every 12-18 months because of Moore's law. Could this mindset actually work in other fields? What if we had a Ford's law that car mpg doubles every 1-5 years? There isn't anything mystical or magical about the IT industry or Moore's law. (Moore's law is more that someone pointed out the trend we were achieving and we've kept it up. That we've kept it up is the surprise.)

Re:Why a law (4, Funny)

Junior J. Junior III (192702) | more than 7 years ago | (#17646990)

I'm waiting for the /. article in which it's announced that some school board has declared that Moore's "Law" is really only a Theory, and should be taught alongside "intelligent design" courses which demonstrate how highly specialized researchers and engineers colloquially known as "gods of tech" design and build denser integrated circuit chips using computer-assisted methodologies. These things don't manifest out of the ether, and they don't evolve themselves, people.

Umm, chips ARE designed... (1)

PRMan (959735) | more than 7 years ago | (#17648262)

Am I missing something here?

Re:Why a law (1)

mwvdlee (775178) | more than 7 years ago | (#17647140)

It's called a law because of the way it is formulated.

If it were written as "processing speed could increase two-fold about every 18 months in the forseeable future", it would've been called a prediction. Since it is written in an unambiguous way, leaving no margin of interpretation, it's called a law.

Re:Why a law (1)

cafucu (918264) | more than 7 years ago | (#17647206)

Hey, if somebody can screw up something good, they always will. It's just Murphy's Law.

Re:Why a law (1)

ZirbMonkey (999495) | more than 7 years ago | (#17647686)

The law itself is just a mathematical equation: N(t) = N(0)*2^(t/24), where N(t) is the number of transistors at month t in the future. The laws of gravity, momentum, thermodynamics, and such are called laws not because they hold some sort of sacred truth about the universe; they are laws because they can be expressed as a math equation. For example, the "Ideal Gas Law" in chemistry is PV=nRT. But this law only holds true for an "ideal gas," which doesn't truly exist. A lot of gases behave according to this equation under moderate conditions, but at extreme conditions, like near absolute zero or in a plasma, the "law" doesn't work.
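That equation, with its 24-month doubling period (matching the Wikipedia formulation quoted earlier in the thread), is trivial to evaluate:

```python
# Moore's law as stated in the comment: N(t) = N(0) * 2**(t/24),
# with t in months, i.e. the transistor count doubles every 24 months.

def transistors(n0, months):
    return n0 * 2 ** (months / 24)

print(transistors(1000, 24))   # 2000.0: one doubling after 24 months
print(transistors(1000, 48))   # 4000.0: two doublings after 48 months
```

Like the ideal gas law, it's exact as an equation and only approximate as a description of reality.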

Re:Why a law (1)

markbo (313122) | more than 7 years ago | (#17647690)

I never understood why it was called a law. It was an incredibly accurate prediction, but there was nothing holding is there.


All the so-called "Laws" can be described in exactly that way. The Law of Gravity is also nothing more than an incredibly accurate prediction, waiting for the time a pencil rolling off a desk falls upwards...

Or an ax head floats? (1)

PRMan (959735) | more than 7 years ago | (#17648314)

http://www.lavistachurchofchrist.org/Pictures/Treasures%20of%20the%20Bible%20(Divided%20Kingdom)/target9.html

Although, that may have something to do with molecular cohesion instead.

Re:Why a law (1)

AndersOSU (873247) | more than 7 years ago | (#17647834)

Because it's better than evolution?

Moore's Law (2, Informative)

shirizaki (994008) | more than 7 years ago | (#17646214)

http://en.wikipedia.org/wiki/Moore's_law [wikipedia.org]


The number of transistors on an integrated circuit for minimum component cost doubles every 24 months.

Wait a second... (5, Funny)

awing0 (545366) | more than 7 years ago | (#17646244)

HP has research labs? Honestly, I thought they were an ink company. Damn, and I was getting quite used to mocking their "Invent" logo.

Re:Wait a second... (0)

Anonymous Coward | more than 7 years ago | (#17646476)

HP has research labs? I thought they were a spyware company, spying on its own board of directors and on reporters.

So, high ink price is explained (1)

Rastignac (1014569) | more than 7 years ago | (#17646572)

High ink price gives a lot of money to spend in the labs. Very high ink price gives great findings in the labs. We customers pay the labs thanks to the expensive inks.

Re:So, high ink price is explained (5, Funny)

Overzeetop (214511) | more than 7 years ago | (#17647420)

Then how come Epson hasn't found a cure for cancer, solved world hunger, and figured out how to bring peace to the world? God knows they charge enough for ink to do all of that in a fiscal year (well, at least 2 out of 3, and the last one probably involves nuking from orbit, just to be sure).

Re:Wait a second... (1)

Hoi Polloi (522990) | more than 7 years ago | (#17646664)

They need these chips to enforce bogus DMCA restrictions on their ink cartridges... oops, I meant make smarter ink carts that can keep track of how much ink is left.

Re:Wait a second... (1)

smellsofbikes (890263) | more than 7 years ago | (#17648322)

I know you're joking, but back in the day, HP Labs division used to be awesome, where everyone at HP wanted to work, just like PARC used to be for Xerox. I'm glad to know that something still exists there, although at this point it's like the convulsive twitches of a cat that just got hit by a car.

hrmmm (1)

genrader (563784) | more than 7 years ago | (#17646284)

I think that even if they were to jump ahead, in the long run the development here would lag behind and even out, thus equating to what it would be if Moore's law had been followed exactly.

The Singularity is Near... (2, Insightful)

PHAEDRU5 (213667) | more than 7 years ago | (#17646292)

Moore's "Law" is actually a prediction that's been remarkably accurate.

I think, though, that's what happening here is employing the technology is causing positive feedback loops in the design and development of the technology, which is accelerating the improvement of the technology.

It's only going to get faster from here. Human consciousness executing on "silicon" by 2030.

Welcome to the singularity.

Re:The Singularity is Near... (1)

Maximum Prophet (716608) | more than 7 years ago | (#17646798)

It has to be here by 2038...


For the non-UNIX geeks, that's when UNIX's time runs out; the equivalent of Y2K, except much worse.
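The 2038 date comes straight from the width of the counter: a signed 32-bit time_t counts seconds since 1970-01-01 UTC and tops out at 2^31 - 1. A quick check of where that lands:

```python
# The Y2038 problem: a signed 32-bit time_t holds seconds since the
# Unix epoch (1970-01-01 UTC) and overflows one second past 2**31 - 1.
from datetime import datetime, timezone

last_second = 2**31 - 1                                  # 2147483647
rollover = datetime.fromtimestamp(last_second, tz=timezone.utc)
print(rollover)   # 2038-01-19 03:14:07+00:00
```

One second later, a 32-bit counter wraps to a negative value, which naive code interprets as a date in 1901.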

Re:The Singularity is Near... (1)

badpazzword (991691) | more than 7 years ago | (#17647338)

It has to be here by 2038...
Consider the IPv6 switch, and you'll see it has to be here much before 2038.

Calculating 5 years for mass technology production, 10 years for broad 64-bit processor usage, 5 extra years to port all the 32-bit code to 64-bit, and considering normal people are unlikely to care about this before December 31st, 2037 (approximately when it may hit the media) -- well, I'm afraid we'll solve this thing in a rush, as usual.

Re:The Singularity is Near... (1)

TheWoozle (984500) | more than 7 years ago | (#17646918)

Oh, really? Human brain activity is non-deterministic and sometimes unreliable. Exactly how does this translate to any kind of logic-based, deterministic system?

The 'singularity' is a particularly foolish pipe dream.

Re:The Singularity is Near... (1)

xtal (49134) | more than 7 years ago | (#17646962)

Exactly how does this translate to any kind of logic-based, deterministic system?

You can't connect a non deterministic system to a logic-based one? What happens when I program a computer?

The brain is a pile of connected goo. Extremely well connected goo, but connected goo that we will either model the underlying principles of, or connected goo we will just clone verbatim in silicon. Resistance is futile.

Re:The Singularity is Near... (1)

TheWoozle (984500) | more than 7 years ago | (#17647540)

No. When you understand why we can't perfectly predict the weather, you will understand why we will never be able to replicate brain function.

Resistance is not necessary, the "singularity" is a fairy tale.

Re:The Singularity is Near... (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17647952)

Yes, a simulation will never give identical results to the real chaotic system. The reason is that this is an example of two chaotic systems (one real, one simulated) which have small differences in their initial state (imperfect measurements) and equations (imperfect modelling). It is in the nature of such chaotic systems for their state to diverge.

However, this is irrelevant to the point you are trying to make, as it doesn't stop us from simulating a brain at all. To go back to the weather simulation analogy: you may not be able to predict whether it is going to rain at spot X, but if the weather simulation is good enough it will still show you general weather patterns much like those in the real world.

It's the same for a simulated brain. If the simulation is good enough, it will be conscious.

Re:The Singularity is Near... (2, Interesting)

Stefanwulf (1032430) | more than 7 years ago | (#17648148)

I understand why we can't predict the weather.
I understand why we can't _predict_ brain function.

I don't understand why that means we can't build a new brain that will simply remain equally unpredictable.
Just because a system is chaotic doesn't make it impossible to construct.

Re:The Singularity is Near... (0)

Anonymous Coward | more than 7 years ago | (#17647638)

There's STILL no evidence that non-determinism is a fundamental trait of the human thought process. To date, this notion is just our own hubris, a claim to unpredictability founded merely on the obvious fact that we are not smart enough to predict ourselves.

Besides that, though, the mathematical properties of a non-deterministic machine are essentially equivalent to the properties of a deterministic one. Even further, we can easily create true digital randomness if needed.

The brain is a mere machine, or a network of machines if you will. To the extent that it is magical, it's based on the same magic as other systems with emergent complexity. To emulate a brain in silicon may turn out to be less practical than some expect, but there's absolutely no inherent reason why it cannot be done on some form of technological device.

Re:The Singularity is Near... (1)

spun (1352) | more than 7 years ago | (#17648318)

Prove that human brain activity is non-deterministic. I have a good friend who is getting his PhD in neuroscience, and from conversations I've had with him, I'd say it's pretty damn deterministic. Human consciousness does not exist outside the laws of nature. It is not a special type of process unlike any other. It is as amenable to simulation as any other process in the universe, and like any other process, it can be modelled to any arbitrary level of verisimilitude by throwing more computational power at it. With quantum computing on the horizon, we could be looking at modelling the quantum state of every atom in your brain.

Sorry if this threatens your ego-image of what consciousness is, conflicts with what your spirit guides, shamans, or priests have told you, or makes you feel in any way less special. Your ego-self can no more know itself than a knife can cut itself. Your spirit guides, shamans and priests are wrong. You aren't any more or less special than any other piece of matter in the universe. Have a nice day.

Math says: yes. (4, Informative)

Just Some Guy (3352) | more than 7 years ago | (#17646312)

The mean value theorem shows that if the average rate is x, and the instantaneous rate ever goes below x, then it must necessarily also be above x sometimes. Put another way, progress will sometimes be faster than other times.
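For reference, the mean value theorem being invoked here states, for f continuous on [a, b] and differentiable on (a, b):

```latex
\exists\, c \in (a, b) \quad \text{such that} \quad f'(c) = \frac{f(b) - f(a)}{b - a}
```

Strictly, the theorem guarantees the instantaneous rate *equals* the average rate somewhere; the sometimes-below-implies-sometimes-above claim also needs the rate to vary (if f' dips below the average and the average is the mean of f', f' must exceed it elsewhere).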

Re:Math says: yes. (5, Funny)

MicktheMech (697533) | more than 7 years ago | (#17646374)

The mean value theorem: Because common sense just isn't good enough for mathematicians.

Re:Math says: yes. (1)

pfft (23845) | more than 7 years ago | (#17647276)

The mean value theorem says that progress will sometimes proceed at exactly the average rate. Your statement is certainly also true (assuming the speed of progress is continuous...), but I don't see the mean value theorem being particularly helpful in proving it.

6 to 1 (3, Insightful)

guysmilee (720583) | more than 7 years ago | (#17646330)

As a rule of thumb I was told ... an FPGA normally uses 6 gates for every 1 gate used by a custom ASIC chip ... so a 5 million gate design would require an FPGA with 30 million gates ...

This may have changed over the years ... but I'd like to know how this announcement changes that heuristic ...

Re:6 to 1 (1)

AKAImBatman (238306) | more than 7 years ago | (#17646862)

As a rule of thumb i was told ... an fpga normally uses 6 gates to 1 gate used by a custom ASIC chip ... so a 5 million gate chip would require a FPGA with 30 million gates ...

Pardon me if I'm speaking out of turn, but don't you mean transistors, not gates? In theory, the gate count should remain the same between the two, with most differences being accounted for by designing gates out of different gates. (e.g. Using NAND to create all other gates.) Or are you referring to a formula for translating the FPGA market-speak into real numbers?

Again, pardon me if I'm misunderstanding. I don't work with these chips nearly as much as I'd like to.

Re:6 to 1 (2, Informative)

guysmilee (720583) | more than 7 years ago | (#17647570)

Not true ... because of timing requirements ... if one gate is used, it may rule out using others because of how the gates are connected ... i.e. picking one gate and one route may not allow certain other gates to be connected ... so the 6-to-1 ratio refers to "wasted gates" ... I believe. This is because not all gates are directly connected to each other ...

If this new technology allows more routes ... I believe you will get less gate waste ...

I'm just a software dev ... so I could be wrong ... but this is my understanding ...

Re:6 to 1 (1)

AKAImBatman (238306) | more than 7 years ago | (#17648162)

i.e. picking one gate and 1 route may not allow certain gates to be connected ... so the 6 to 1 ratio refers to "wasted gates" ... I believe.

I see what you mean. Generally, FPGA devs are always talking about reworking your design to eliminate as many wasted gates as possible. (The ISE tools help with this, IIRC.) Xilinx claims that their compilers are smart enough to rework your design automatically for a high rate of utilization.

Of course, proper utilization is partly a function of which FPGA you use. Most FPGAs have a lot of common circuits built in so that the general logic isn't wasted by those circuits. So doing an analysis of your design can help you pick the proper chip to get the necessary results. 6 to 1 is probably a bit pessimistic for a well-optimized design, but may hold true for a general design.

BTW, you may want to revise your example downward next time. 30 million gates is a LOT of gates. Very few devices offer that level of configurability. Comparing 50,000 to 300,000 gates is a bit more reasonable. :)

Re:6 to 1 (2)

dextromulous (627459) | more than 7 years ago | (#17648294)

i.e. picking one gate and 1 route may not allow certain gates to be connected ... so the 6 to 1 ratio refers to "wasted gates" ... I believe. This is because all gates are not all directly connected to each other ...

FPGAs use lookup tables to simulate gates: See here for a description of a basic Configurable Logic Block [wikipedia.org]

If this new technology allows more routes ... i believe you will get less gate waste ...

This is true. However, there is more at stake than simply wasted gates. The performance of an FPGA is related to how much delay the signals are subject to from input to output. By cutting down on the number of gates used solely to pass a signal along, you cut down on unnecessary delay.
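The lookup-table point above can be made concrete. A 2-input LUT is nothing but four configuration bits indexed by the inputs; reprogramming the bits turns the same hardware into any 2-input gate (a sketch of the principle, not any vendor's actual CLB):

```python
# A 2-input LUT: 4 configuration bits, indexed by the input pair.
# Loading different bits makes the same hardware compute different gates.

def lut2(config_bits, a, b):
    """Return config_bits[b*2 + a], the stored output for inputs (a, b)."""
    return config_bits[(b << 1) | a]

AND_BITS = [0, 0, 0, 1]   # truth table for AND: only (1, 1) -> 1
XOR_BITS = [0, 1, 1, 0]   # same LUT hardware, reconfigured as XOR

print(lut2(AND_BITS, 1, 1))   # 1
print(lut2(XOR_BITS, 1, 1))   # 0
```

Real CLBs use wider LUTs (4 or 6 inputs) plus flip-flops, but the idea is the same: "gates" in an FPGA are stored truth tables, and the interconnect routes signals between them.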

2008 (5, Insightful)

mastershake_phd (1050150) | more than 7 years ago | (#17646332)

HP Engineers Defy Moore's Law, New Nano-Chip Prototype in 2008

They haven't even made a chip yet.

Re:2008 (1)

jlf278 (1022347) | more than 7 years ago | (#17646706)

>>HP Engineers Defy Moore's Law, New Nano-Chip Prototype in 2008
>> They haven't even made a chip yet.

The summary is titled "Could HP Beat Moore's Law?" - highlighting the article's significance in answering this hypothetical with a resounding Maybe!

Re:2008 (1)

mastershake_phd (1050150) | more than 7 years ago | (#17646826)

The summary is titled "Could HP Beat Moore's Law?" - highlighting the article's significance in answering this hypothetical with a resounding Maybe!

Well, this being Slashdot, I assumed the title declared it had already happened and didn't bother reading it.

Re:2008 (1)

carpe_noctem (457178) | more than 7 years ago | (#17647242)

They're planning on breaking the inverse moore's law, which states that:

"If a tech company announces a big breakthrough, which they claim will be available to consumers in 18-24 months, then the probability of the breakthrough becoming vaporware will approach 1."

Mixed legal priorities... (5, Funny)

creimer (824291) | more than 7 years ago | (#17646462)

Maybe HP should focus on beating the illegal wiretapping case before they break another law? They're not Microsoft, you know.

And your point is ...... (0)

Anonymous Coward | more than 7 years ago | (#17646960)

That HP should field their chip researchers to beat the rap? Though courts seem boring places to me, I'd like to be a fly on the wall when that case comes up!

In other news, HP legal department invent new heuristic chip architecture based on rat's brains. "The development was comparatively simple", says Attorney Splitz, "but we are still having trouble defining the intellectual property rights ....."

It could be stacked to 3D (1)

Kim0 (106623) | more than 7 years ago | (#17646508)

http://memory.oyhus.no/ [oyhus.no]

By using that technique, that programmable logic could be thousands of times more powerful without increasing the space it takes.

Kim0

Re:It could be stacked to 3D (0)

Anonymous Coward | more than 7 years ago | (#17647698)

The problem with FPGAs isn't so much the package size; it's mostly the routing speed, how much heat can be dissipated (which limits the logic you can have), I/O counts, and increasingly signal-integrity issues with simultaneous switching of I/O.

Very few of these can easily be solved with die stacking.

What? What? (4, Insightful)

Mike1024 (184871) | more than 7 years ago | (#17646518)

OK, the actual paper's here [iop.org] (full text freely available).

As far as I can tell this has nothing to do with standard processors and everything to do with FPGAs.

It seems what they propose is: Instead of the FPGA configuration bits being done with gates on the silicon wafer, why not perform configuration by configuring the metal-to-metal interconnects? After all, if the metal layers are thick compared to the interconnects between them, you can blow connections you don't need like blowing a fuse. By removing the FPGA configuration bits from the silicon wafer, they can save a lot of space, leading to higher speeds and lower costs.
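A crude way to picture the interconnect half of this (my own sketch, not HP's actual FPNI scheme): a crossbar of crossing wires whose junctions can be individually switched on to route signals, so configuration lives in the wiring layer rather than in silicon:

```python
# Toy sketch (assumed, not HP's actual FPNI design): a crossbar
# interconnect as a grid of horizontal (row) and vertical (column)
# wires whose junctions can be individually "closed" to route signals.

class Crossbar:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.on = set()          # set of configured (row, col) junctions

    def connect(self, row, col):
        self.on.add((row, col))  # close one junction, like blowing an anti-fuse

    def route(self, row_signals):
        """Each column output is the OR of the row signals at its closed
        junctions (wired-OR behaviour, a common simplification)."""
        return [any(row_signals[r] for r in range(self.rows)
                    if (r, c) in self.on)
                for c in range(self.cols)]

xb = Crossbar(3, 2)
xb.connect(0, 1)      # route row 0 onto column 1
xb.connect(2, 0)      # route row 2 onto column 0
assert xb.route([1, 0, 0]) == [False, True]  # only row 0 is high
```

The point of moving this configuration off the silicon wafer is that the transistors underneath are freed up for actual logic.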

They have a clever way of arranging such a system, which should be easy to fabricate.

What Moore's law is supposed to have to do with this I don't know.

Michael

Re:What? What? (1)

stevesliva (648202) | more than 7 years ago | (#17647204)

As far as I can tell this has nothing to do with standard processors and everything to do with FPGAs.

It seems what they propose is: Instead of the FPGA configuration bits being done with gates on the silicon wafer, why not perform configuration by configuring the metal-to-metal interconnects? After all, if the metal layers are thick compared to the interconnects between them, you can blow connections you don't need like blowing a fuse. By removing the FPGA configuration bits from the silicon wafer, they can save a lot of space, leading to higher speeds and lower costs.
Aha, thanks for digging through the awful press for the real story. If the interconnect is non-volatile and reprogrammable, then there are likely memory implications as well.

Re:What? What? (1)

poot_rootbeer (188613) | more than 7 years ago | (#17647346)

What Moore's law is supposed to have to do with this I don't know.

Hey, at least they correctly identified Moore's Law as having something to do with the number of transistors on a chip, and not CPU clock speed or some other factor which contributes to performance but was never spoken to by Moore himself.

and? (1)

oudzeeman (684485) | more than 7 years ago | (#17646524)

It's not like Moore's Law is a law of physics (like the speed of light). It's more like an observation.
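For what it's worth, the "three generations in one hit" arithmetic from the summary falls straight out of treating the observation as one doubling per generation (a rough sketch; the ~2-year period is just the commonly quoted figure):

```python
# The "8x the transistors = three generations" arithmetic from the
# summary: Moore's observation is roughly a doubling of transistor
# count per generation, so an 8x density gain skips 3 doublings.

import math

density_gain = 8
generations_skipped = math.log2(density_gain)
assert generations_skipped == 3.0

# At ~2 years per doubling (a common reading of the observation),
# that's roughly 6 years of scaling from one architectural change.
years_per_generation = 2
assert generations_skipped * years_per_generation == 6.0
```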

Of course (3, Funny)

Billosaur (927319) | more than 7 years ago | (#17646532)

If they wait for it in a dark alleyway with a lead pipe and stay very, very quiet...

Re:Of course (0)

Anonymous Coward | more than 7 years ago | (#17646634)

Improving FPGA packing density is not going to help conventional computing. At least not in the short run. FPGA is mostly used in research and not in the field.

Re:Of course (1)

busdriverneal (1003974) | more than 7 years ago | (#17646638)

this means that new printer drivers will weigh in around three gigs.. thanks hp!

FPGA and Moores Law? (2, Insightful)

Stevecrox (962208) | more than 7 years ago | (#17646656)

I thought FPGAs were a common microcontroller that *could* be altered to run as a microprocessor. You can configure FPGAs to run as a microcontroller and you can get microprocessors to act like a microcontroller, but they are not the same thing. Most FPGAs run at far lower clock frequencies and far lower transistor densities when compared to your desktop CPU. This isn't because one is better than the other; it's because they are designed for different purposes. Getting more transistors on a chip is great for your smartphone but doesn't mean much for your desktop.

I just don't see how this would allow Moore's law to be broken. The largest FPGA I have been taught about (and gotten to use) had 22,000 transistors on it; I thought your average CPU was supposed to have billions.

Re:FPGA and Moores Law? (1)

oracleofbargth (16602) | more than 7 years ago | (#17646992)

I just don't see how this would allow Moore's law to be broken. The largest FPGA I have been taught about (and gotten to use) had 22,000 transistors on it; I thought your average CPU was supposed to have billions.


I think that the whole point of this new technology is that it allows FPGA transistor density to approach much closer to the density found in a static ASIC, since current FPGA chips waste about 80% to 90% of their space with interconnects and signal routing paths.

I don't know if this would allow for faster frequencies on the FPGA, but it is a definite improvement for transistor count.

Re:FPGA and Moores Law? (5, Informative)

Colourspace (563895) | more than 7 years ago | (#17647194)

Hi, I work as a field apps engineer for a large FPGA manufacturer. The interconnect lengths account for a large proportion of the delay between each configurable logic cell (LE in our terminology), so shortening the interconnect is useful not only from a transistor-count view but also from an upper-performance-limit view. As for the first poster: the largest current FPGAs (Altera's Stratix III, Xilinx's Virtex-5 series) have multiple millions of transistors (sorry, can't be bothered to look up the exact figures).

However, the flexibility of an FPGA is not just that it can be configured like a microprocessor (though it can; see Altera's NIOS II) but that it can act like almost any digital logic you wish to conceive of. Want an FFT function? Don't write it in C/C++; describe it in hardware. That's much, much faster than code, and getting on for an order of magnitude or more faster than current DSP chips.

To do this, the simplest architectural element is a Logic Element (in Altera technology at least). This usually (but not always; different vendors have their own twist on these) consists of a 4-input lookup table and an associated programmable logic register. Combining a number of these LEs through the routing can create sequential or purely combinatorial logic functions. On top of this, many vendors also include special blocks for on-chip RAM or ROM, and now commonly DSP multipliers. Of course, RAM/ROM and multipliers can theoretically also be built from discrete LEs, but this can be inefficient, so dedicated blocks are used. The latest Altera Stratix III family uses ALMs (Adaptive Logic Modules), which are slightly larger than an LE but allow more functions to be implemented in one ALM than in an LE, potentially reducing the number of logic levels needed to provide any given function; in turn this can increase system throughput and therefore performance.

The largest FPGA announced is the Stratix III EP3S340, which contains 340K equivalent LEs, or if you prefer, 340K programmable registers (for simplicity). You should ignore exact gate-count comparisons between vendors, as these are usually marketing figures. Some will include the gates used to configure the FPGA as well as the usable ones accessible for general logic functions, and so can skew the figures somewhat.

Re:FPGA and Moores Law? (3, Informative)

AKAImBatman (238306) | more than 7 years ago | (#17647394)

The largest FPGA I have been taught about (and gotten to use) had 22,000 transistors on it, I thought your average CPU was supposed to have billions.

You are seriously behind the times, my friend. Xilinx's smallest offerings provide ~20,000 gates, while their largest offerings offer millions of gates placed on a chip of over 1.1 billion transistors [sda-asia.com] .

22K transistors is solidly inside CPLD territory these days. :)

Re:FPGA and Moores Law? (5, Informative)

greenrom (576281) | more than 7 years ago | (#17647406)

FPGAs are not microcontrollers. They are programmable logic devices. You can use an FPGA to implement a microcontroller, a microprocessor, or any other logic device.

You probably wouldn't be able to put the latest Xeon processor on an FPGA, but to say that they are far slower and smaller than modern processors is inaccurate. There are plenty of FPGAs that can handle signals in excess of 1GHz, and a 22,000-transistor FPGA is a VERY small FPGA.

Many custom chips, including custom processors, are first developed and tested on FPGAs before they become ASICs. In fact, you can give your FPGA design files to an IBM or a TI, and they'll gladly turn them into an ASIC for you -- for a fee. Often, FPGAs are used in designs without ever going to an ASIC. Generally, the only reason you build an ASIC is that the per-chip cost is much cheaper; heat and performance are usually secondary considerations. There is, however, a big up-front cost to doing an ASIC, so for low-volume parts or designs that might need to be upgraded or fixed later, FPGAs are generally the better option.

There's also a middle ground -- so called "hard copy" FPGAs. This is when you give your design files to Xilinx or Altera with a big check, and they sell you special FPGAs that are guaranteed to work with your design (but not necessarily other designs). In exchange, you get the chips a lot cheaper and they can also disable parts of the chip your design doesn't use to reduce power consumption. The FPGA manufacturers benefit by being able to sell chips that would otherwise be defective but are suitable for certain designs.
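The FPGA-vs-ASIC economics above boil down to a simple break-even calculation. A back-of-the-envelope sketch (all figures are made-up illustrative assumptions, not real quotes from any vendor):

```python
# Back-of-the-envelope FPGA vs. ASIC cost crossover. All numbers
# below are invented for illustration -- real NRE and unit costs
# vary enormously by process node, volume, and vendor.

asic_nre       = 500_000.0   # one-time mask/engineering cost (assumed)
asic_unit_cost = 10.0        # per-chip cost in production (assumed)
fpga_unit_cost = 60.0        # per-chip cost, no NRE (assumed)

def total_cost_asic(volume):
    return asic_nre + asic_unit_cost * volume

def total_cost_fpga(volume):
    return fpga_unit_cost * volume

# Break-even volume: the point where the ASIC's NRE has been
# amortized by its cheaper per-unit cost.
break_even = asic_nre / (fpga_unit_cost - asic_unit_cost)
assert break_even == 10_000.0

assert total_cost_fpga(5_000) < total_cost_asic(5_000)    # low volume: FPGA wins
assert total_cost_asic(50_000) < total_cost_fpga(50_000)  # high volume: ASIC wins
```

Which is exactly why low-volume or still-evolving designs tend to ship on FPGAs.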

Re:FPGA and Moores Law? (1)

AKAImBatman (238306) | more than 7 years ago | (#17647850)

For those of you who didn't quite follow greenrom's excellent (but rather technical) post, he's basically saying that doing a task in hardware is faster than doing it in software. What FPGAs allow you to do is to create nearly any form of hardware device just by uploading a new design. While you can use this ability to create a new CPU, it's likely to be much slower than a regular CPU. Thus FPGAs are more useful for hardware like network routers, graphics chip research, codecs, and other highly specialized hardware designs.

In fact, a common FPGA design is to have a regular CPU built into the FPGA chip which can then interface with whatever hardware you upload to the reconfigurable portion of the chip. This combination makes for the ultimate microcontroller as you get the performance of an ASIC CPU (a non-reconfigurable silicon chip) combined with the flexibility of a fully reconfigurable FPGA.

For example, here's a PowerPC chip with reconfigurable capabilities:

http://www.xilinx.com/products/silicon_solutions/fpgas/virtex/virtex4/capabilities/powerpc.htm [xilinx.com]

Again, the market for these chips is very specialized, but the potential uses are practically limitless. You can basically implement any form of coprocessor you can possibly imagine, as long as it fits inside the available FPGA space.

another article (0)

Anonymous Coward | more than 7 years ago | (#17646690)

<plug>we had another article [technologyreview.com] yesterday.</plug>

More Ink? (1)

true_hacker (969330) | more than 7 years ago | (#17646852)

Someone tell HP that a faster chip != more printer cartridges sold

Moore's Law is part marketing hype (2, Interesting)

macurmudgeon (900466) | more than 7 years ago | (#17647086)

One of the reasons that Moore's law has so accurately predicted the continual doubling of storage and speed is that it offers companies an excellent guideline for product roll-out. It's a self-fulfilling prophecy. Customers expect computers to get more-bigger-better-faster at that rate, so companies have a production target. That provides a much more stable product ecosystem than one that is marked by a punctuated equilibrium of sudden large advances followed by unpredictable periods of status quo.

Re:Moore's Law is part marketing hype (1)

khallow (566160) | more than 7 years ago | (#17647988)

Shared expectations aren't "marketing hype" especially expectations that are proven out over more than four decades. I think of it more as a development model that does what you say it does.

Packin' transistors on a FPGA.... (0)

Anonymous Coward | more than 7 years ago | (#17647136)

Packin' transistors on a FPGA
I fought the law and I won
I fought the law and I won

I needed more power for my PC
I fought the law and I won
I fought the law and I won

Moore's law will still hold. (1)

Type-E (545257) | more than 7 years ago | (#17647166)

Even if they had the technology of 3 generations ahead, they would still release the chip at Moore's Law's pace to get the most revenue out of it.

big deal! (1)

clydemaxwell (935315) | more than 7 years ago | (#17647170)

But... companies beat Moore's law all the time. The reason it seems like such a hard-and-fast law is that they typically restrict themselves to its proposed schedule so as not to shoot themselves in the foot; it would be too hard to compete with everyone releasing everything they develop immediately.

8x density, 20x more heat.. (1)

Khyber (864651) | more than 7 years ago | (#17647200)

As if HP's shit didn't run hot enough as it is.

They really need to focus on better cooling before they go anywhere. Damned laptops overheat daily because of the crap cooling systems in them.

HP Breaking Yet Another Law (2)

virtigex (323685) | more than 7 years ago | (#17647348)

Those scoundrels at HP are doing it again. They probably managed to do this by tapping Moore's phone line or something.

Could this seriously boost OGP? (1)

DoofusOfDeath (636671) | more than 7 years ago | (#17647634)

The Open Graphics Project http://lists.duskglow.com/mailman/listinfo/open-graphics [duskglow.com] is an attempt to make an open-source-hardware graphics card, so that we don't have to wrestle with companies like nVidia (ok, Intel) or ATI (ok, AMD) to get decent open-source drivers.

The OGP cards use FPGAs, which is the technology that HP's work (hopefully) innovates. I wonder if this advance will make OGP's cards much more competitive with nVidia/ATI cards? Heck, maybe HP would even consider showcasing its technology using the OGP project.

The baseline prediction (1)

symbolset (646467) | more than 7 years ago | (#17647642)

The expected technology leap is given as a difference from this trend:

International Technology Roadmap for Semiconductors [itrs.net]

You can read more about it at the ITRS website. [itrs.net]

A quick scan of the website reveals this interesting image [itrs.net]. The observant will note that, with the current news, progress is already ahead of their curve.

Wow this is great news... (2)

jeffeb3 (1036434) | more than 7 years ago | (#17647768)

Soon we will have even faster, smaller graphing calculators with horrible user interfaces! SWEET.

/. Drinking Game (1)

tcoop25 (808696) | more than 7 years ago | (#17647880)

Take a shot every time the word nano makes it into a Slashdot article.

Engrish (2)

hotdiggitydawg (881316) | more than 7 years ago | (#17648028)

A number type of nano-scale architecture developed...
Mi scusi? No habla Engrish... Seriously Taco, got editing skills? The whole summary is a direct cut-and-paste of the first paragraph of TFA, grammatical errors and all. Perhaps "A number of types of nano-scale architectures developed..." would've made more sense.

Possibilities on the base of this development (1)

roboboyanuj (1023885) | more than 7 years ago | (#17648116)

Another interesting thing that comes to mind is "chaos" chips, which can rearrange their architecture to become something else, i.e., the possibility of evolving chips. This research could greatly boost the chances of that happening in the future.

We all know where this leads... (1)

Tmack (593755) | more than 7 years ago | (#17648222)

Suuuurre... HP "invented" a breakthrough in new chip design that will launch them 3 generations ahead. We all know they have just been studying that chip from a certain android that came back from the future. Soon they will announce AI, then SkyNet will launch and begin to take over, and then we will have a nuclear holocaust and will be fighting the very machines we invented!! Then the earth will be crawling with robots that have thick Austrian accents and like to shoot people. Destroy the chip now before its too late!!!!

Tm
