
The Ultimate Limit of Moore's Law

kdawson posted more than 4 years ago | from the double-double dept.

Science 418

BuzzSkyline writes "Physicists have found that there is an ultimate limit to the speed of calculations, regardless of any improvements in technology. According to the researchers who found the computation limit, the bound 'poses an absolute law of nature, just like the speed of light.' While many experts expect technological limits to kick in eventually, engineers always seem to find ways around such roadblocks. If the physicists are right, though, no technology could ever beat the ultimate limit they've calculated — which is about 10^16 times faster than today's fastest machines. At the current Moore's Law pace, computational speeds will hit the wall in 75 to 80 years. A paper describing the analysis, which relies on thermodynamics, quantum mechanics, and information theory, appeared in a recent issue of Physical Review Letters (abstract here)."
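
A quick sanity check of the 75-to-80-year figure (my own arithmetic, not from the paper): a factor of 10^16 is about 53 doublings, so at one doubling every 18 months or so you land right around 80 years. A minimal sketch, with the doubling period and class name as my own assumptions:

public class MooresLawWall {
    public static void main(String[] args) {
        double speedupNeeded = 1e16;   // claimed gap between today's machines and the ultimate limit
        double yearsPerDoubling = 1.5; // assumed Moore's-law pace, not a figure from the paper
        double doublings = Math.log(speedupNeeded) / Math.log(2.0); // ~53 doublings
        System.out.printf("Doublings needed: %.1f%n", doublings);
        System.out.printf("Years until the wall: %.0f%n", doublings * yearsPerDoubling);
    }
}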

418 comments

Transistors Per IC and Planck Time (5, Informative)

eldavojohn (898314) | more than 4 years ago | (#29737671)

Intel co-founder Gordon Moore predicted 40 years ago that manufacturers could double computing speed every two years or so
by cramming ever-tinier transistors on a chip.

That's not exactly correct. Moore's Law (more an observation than a law, really) reads as follows in the original article [intel.com]:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.

All he's concerned with is how many components can fit on a single integrated circuit. One can see this propagated to processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras, but his observation itself is about the number of transistors -- not speed.

The title should be "The Ultimate Limit of Computing Speed" not Moore's Law.

Furthermore, we've always had the Planck time [wikipedia.org] as a lower bound on the duration of a single operation, and the smallest interval of time we've measured so far is about 10^26 Planck times. So essentially they've raised that lower bound, and it's likely that further discoveries will raise it even more. I guess our kids and grandchildren have their work cut out for them.
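
For the curious, a back-of-the-envelope check of those numbers (mine, not from TFA): the Planck time sqrt(hbar*G/c^5) is about 5.4e-44 s, and an attosecond-scale measurement (~1e-17 s) is then roughly 10^26 Planck times, matching the figure above. The class name and the "shortest measured" value are illustrative:

public class PlanckTimeCheck {
    public static void main(String[] args) {
        double hbar = 1.0545718e-34;     // reduced Planck constant, J*s
        double G = 6.67430e-11;          // gravitational constant, m^3 kg^-1 s^-2
        double c = 2.99792458e8;         // speed of light, m/s
        double planckTime = Math.sqrt(hbar * G / Math.pow(c, 5));
        double shortestMeasured = 1e-17; // s, roughly the attosecond scale (illustrative)
        System.out.printf("Planck time: %.2e s%n", planckTime);
        System.out.printf("Shortest measured interval in Planck times: %.1e%n",
                shortestMeasured / planckTime);
    }
}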

Re:Transistors Per IC and Planck Time (4, Insightful)

phantomfive (622387) | more than 4 years ago | (#29737929)

Basically he is assuming that eventually we will develop quantum computing, and he bases his calculation on the theory of how fast a quantum event can take place. The problem is, given all we don't actually know about quantum mechanics, and all we don't know about the very small, all it would take is a single observation to throw this minimum out the window.

In theory, it is nice to make theoretical limits. In practice, the limits are sometimes nothing more than theoretical. We don't know how to make smaller-than-quantum computers yet, but we also don't know how to make quantum computers yet. So this could be a prediction like every other prediction of the end of Moore's law, some of which were based on stronger reasoning than this argument. Interesting argument to make, though.

Re:Transistors Per IC and Planck Time (-1, Troll)

retchdog (1319261) | more than 4 years ago | (#29738299)

How can you have stronger reasoning than something that's based on the limits of what modern physics can understand (thermodynamics and quantum mechanics)? We have developed quantum computers.

There's skepticism, and then there is metaphysical woowoo babble. You are generating the latter. Kill yourself.

Re:Transistors Per IC and Planck Time (4, Insightful)

phantomfive (622387) | more than 4 years ago | (#29738449)

You are generating the latter. Kill yourself.

Great argument, you're a regular Cyrano there.

How can you have stronger reasoning than something that's based on the limits of what modern physics can understand

Does this even need to be said? Einstein did it: he took some observations and extrapolated them to show that modern physics was not entirely correct (that is, what was modern physics at the time). Indeed, all scientific theory can only be based on what we've observed. Thus, new observations make for new theory, or corrections in old theory. As we continue to make more observations, for example with the LHC, theory will continue to evolve. Surely even someone of your eloquence can see this.

Re:Transistors Per IC and Planck Time (2, Insightful)

WH44 (1108629) | more than 4 years ago | (#29738467)

How can you have stronger reasoning than something that's based on the limits of what modern physics can understand (thermodynamics and quantum mechanics)? We have developed quantum computers.

The previous limits he is referencing were also based on the limits of what modern physics could understand - just making a faulty assumption. He's questioning the assumptions here, too.

There's skepticism, and then there is metaphysical woowoo babble. You are generating the latter. Kill yourself.

He is generating the former. Take your own advice.

Re:Transistors Per IC and Planck Time (4, Funny)

Nefarious Wheel (628136) | more than 4 years ago | (#29738421)

"In theory, there is no difference between theory and practice. In practice, there is." - Yogi Berra (iirc)

It's not a law!!! (0)

Anonymous Coward | more than 4 years ago | (#29738199)

Sorry parent for hijacking, but need to troll a bit here...

It's not a law!

Re:Transistors Per IC and Planck Time (0, Offtopic)

Shakrai (717556) | more than 4 years ago | (#29738241)

I guess our kids and grandchildren have their work cut out for them.

Don't worry, they'll be too busy paying back all the money we've borrowed over the last few decades to worry about how fast they can make their computers ;)

Re:Transistors Per IC and Planck Time (0, Offtopic)

HornWumpus (783565) | more than 4 years ago | (#29738383)

Don't you worry.

By then the dollar will have inflated and shrunk the national debt to less than the price of an ounce of gold.

The real suckers are the ones buying T-bills.

WHAT!! (2, Funny)

cryoman23 (1646557) | more than 4 years ago | (#29737693)

so in 80 years my computer's processors won't be able to get any faster... :( oh well, then I guess it's time to CLUSTER!

Re:WHAT!! (4, Funny)

outsider007 (115534) | more than 4 years ago | (#29737755)

I plan on setting up server farms in parallel dimensions

Re:WHAT!! (1)

zig43 (1422373) | more than 4 years ago | (#29738367)

Why not go all out and send your process through a wormhole that returns the result before you sent it?

Re:WHAT!! (1)

RichardJenkins (1362463) | more than 4 years ago | (#29737867)

I would expect that (if this is true) the exponential rate of performance increase from processor to processor will slow, or to put it another way: you'll see diminishing returns on processor improvements that will keep pushing that 80-year figure further into the future.

Is anyone who understands this sort of stuff commenting? This sounds like it'll be regarded as quite an important discovery.

Re:WHAT!! (1)

sexconker (1179573) | more than 4 years ago | (#29738015)

Rate of performance (transistor count) increase is geometric.

Performance (transistor count) is exponential.

Re:WHAT!! (1)

oldhack (1037484) | more than 4 years ago | (#29738133)

Won't work. A Beowulf cluster at that point will be abhorrent to nature, so nature will one way or another screw with it.

Efficiency (5, Insightful)

truthsearch (249536) | more than 4 years ago | (#29737763)

So we'll have to wait another 75 years before management lets us focus on application efficiency instead of throwing hardware at the performance problems? Sigh...

Re:Efficiency (1)

belthize (990217) | more than 4 years ago | (#29737819)

Meh, I'll be dead by then anyway.

Thank God, I don't think I could deal with kids preening about their new zaptastic whiz bang they just bought off New Egg 70+ years from now as if they had invented the damn thing.

If by some chance I am alive then, I better have made a killing off of NewEgg's IPO.

Re:Efficiency (0)

Anonymous Coward | more than 4 years ago | (#29738071)

I haven't read the article, but can't you split the problem in half and have two machines work on it?

Re:Efficiency (2, Interesting)

Kjella (173770) | more than 4 years ago | (#29738119)

So we'll have to wait another 75 years before management lets us focus on application efficiency instead of throwing hardware at the performance problems? Sigh...

No, you still won't be doing performance optimizations if that's not what makes the most money...

Re:Efficiency (0)

Anonymous Coward | more than 4 years ago | (#29738195)

Oh, this terrible, awful, suffocating convenience.
But seriously, you sound like you work in a field where computers have largely ceased being the bottleneck. Since the user bottlenecks parts of the system, application development has reached a plateau where the rise in computing power keeps up with demand.
Quit your web or application development job and get into a heavy computing field like atmospheric science, bioinformatics or search, where the amount of data easily keeps up with Moore's law and throwing more hardware at the problem isn't a solution but a necessity.

Re:Efficiency (0)

Anonymous Coward | more than 4 years ago | (#29738349)

The last time I used that old "throw more hardware at it" line at my company it didn't go over really well...

Re:Efficiency (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29738455)

Why does this shit always get +5 insightful? It mostly comes from non-programmers who have no idea what they're saying.

Maybe by then (0)

Anonymous Coward | more than 4 years ago | (#29737767)

Maybe by then they will have invented a computer with more than one processor.

The problem is... (2, Funny)

FunkyRider (1128099) | more than 4 years ago | (#29737775)

Whether today's teenagers, or tomorrow's engineers, are capable of building such a machine. IMO all they know is EMO and shit.

Re:The problem is... (3, Interesting)

sznupi (719324) | more than 4 years ago | (#29738607)

As countless such laments throughout recorded history have shown, worries about the intellectual demise of the youth are greatly overblown.

Might Prove A Vinge novel correct? (3, Interesting)

adriccom (44869) | more than 4 years ago | (#29737779)

about the nature of computation and lightspeed and the like as explored in the wonderful novel A Fire Upon The Deep (Zones of Thought) [amazon.com]

in which the universe has depth and the depth determines how fast things can go including neural tissue, computation, and intergalactic travel. I have long suspected that Earth is towards the shallow end ...

!speed (1)

sjfoland (1565277) | more than 4 years ago | (#29737807)

The article uses speed and number of transistors interchangeably, which is misleading. From what I can tell, they are talking about chips with 10^16 billion transistors on them, not chips clocking at 4x10^16 GHz, which is what most people think of when they hear "speed".

Passing the buck (5, Funny)

suso (153703) | more than 4 years ago | (#29737835)

Eh, let's get to the singularity first; then we'll let the robots take care of the problem.

No growth can go on forever (3, Insightful)

jopet (538074) | more than 4 years ago | (#29737843)

and exponential growth can go on for just a comparatively short time. This should be self-evident, but for some reason people seem to ignore it. Especially people who call themselves journalists or economists.

Re:No growth can go on forever (2, Interesting)

vertinox (846076) | more than 4 years ago | (#29738211)

and exponential growth can go on for just a comparatively short time. This should be self-evident, but for some reason people seem to ignore it. Especially people who call themselves journalists or economists.

As far as we know the expansion of the universe and entropy will go on forever.

I doubt that is what you mean though...

Self-contained systems do have limits, unless of course they are self-recursive and holographic.

Like fractals and information...

Economies and ecosystems are not.

Form over function (1)

Calmiche (531074) | more than 4 years ago | (#29737851)

I figure it will be sort of like the netbook war of today. Manufacturers will realize that there isn't much of a way to get faster, so they will start concentrating on design, reliability and lifespan. It will probably be a golden age in computing.

I'm just waiting for a peta-hertz computer with a 500 exabyte hard-drive able to do universe simulations in real time that will fit in my pocket, go 100 years on a charge and be indestructible.

Re:Form over function (2, Funny)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29738007)

Just remember, though, that performing any universe simulations that evolve to include copyrighted works will be a capital offense by the time such hardware is available...

Re:Form over function (1)

sexconker (1179573) | more than 4 years ago | (#29738275)

Make sure to NOT handle the recursion!


public static void simulate() {
    Universe universe = getUniverse();

    for (BigInteger i = BigInteger.ZERO;
         i.compareTo(universe.getNumOfObjects()) < 0;
         i = i.add(BigInteger.ONE)) {
        // Refuse to simulate the object that is this simulation itself --
        // that recursion is exactly what must NOT be handled.
        if (universe.getObject(i) == THIS)
            bailOut();
        else
            simulateObject(universe.getObject(i));
    }

    System.exit(42);
}

Re:Form over function (1)

sexconker (1179573) | more than 4 years ago | (#29738335)

I went Java to throw in the BigInteger class (big numbers are always fun to play with), and I preferred "System.exit" to "return".

I haven't touched Java in ages though, so the first line originally came out all fucked up.

Oh well.

What is the limit? (3, Interesting)

Hatta (162192) | more than 4 years ago | (#29737853)

So what is that limit? What units would you express such a limit in? The fundamental unit of information is a bit; what is the fundamental unit of computation? Would you state the rate in "computations per second"? "Computations per second per cm^3"? "Computations per second per gram"?

I checked out the pdf of the paper, and didn't see any numerical limit stated, just equations.
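
One answer that predates this paper: the Margolus-Levitin bound is usually quoted in operations per second per joule -- a system with energy E can pass through at most 2E/(pi*hbar) distinct states per second, roughly 6x10^33 operations per second per joule. That's background context, not the exact bound derived in TFA; a minimal sketch with an illustrative energy value:

public class OpsPerJoule {
    public static void main(String[] args) {
        double hbar = 1.0545718e-34; // reduced Planck constant, J*s
        double energy = 1.0;         // joules devoted to computation (illustrative)
        double maxOpsPerSecond = 2.0 * energy / (Math.PI * hbar); // Margolus-Levitin bound
        System.out.printf("Upper bound: %.2e elementary operations per second per joule%n",
                maxOpsPerSecond);
    }
}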

Re:What is the limit? (4, Interesting)

HTH NE1 (675604) | more than 4 years ago | (#29738173)

A more practical question: how many bits does my encryption key need now to make brute-force cracking impractical for the fastest computer possible in this Universe (i.e. probability of finding the key within my remaining lifespan under 0.0001% (1 in a million))?

And not involving a system that reduces my lifespan, such as one failed attempt kills me, smart-ass.

Re:What is the limit? (3, Funny)

Idiomatick (976696) | more than 4 years ago | (#29738303)

You could move everyone on Earth at near light speed while you sit in a pod in space. You will die while only a few seconds pass for them, so you won't need a very big key; also, your lifespan was not reduced.

Re:What is the limit? (3, Insightful)

jonbryce (703250) | more than 4 years ago | (#29738397)

Given that brute force attacks scale to multiple processors better than just about any other task, I don't think there is a limit.

Re:What is the limit? (0)

Anonymous Coward | more than 4 years ago | (#29738409)

Easy. One failed attempt kills your information. No shortening of your lifespan involved. :P

Re:What is the limit? (1)

Gudeldar (705128) | more than 4 years ago | (#29738437)

If you use a symmetric key that is the same size as the message (i.e. a one-time pad) then you can be sure no one will ever crack it without the key.

Re:What is the limit? (4, Informative)

SchroedingersCat (583063) | more than 4 years ago | (#29738627)

From Wikipedia [wikipedia.org]: "a computer the size of the entire Earth, operating at the Bremermann's limit could perform approximately 10^75 mathematical computations per second. If we assume that a cryptographic key can be tested with only one operation, then a typical 128 bit key could be cracked in 10^-37 seconds. However, a 256 bit key (which is already in use in some systems) would take about a minute to crack. Using a 512 bit key would increase the cracking time to 10^71 years, but only halve the speed of encryption."

You see - the system that threatens to reduce your lifespan is a much faster way to acquire that key.
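
Taking the quoted 10^75 operations per second at face value and assuming one key test per operation, the order-of-magnitude arithmetic behind those numbers looks like this (my own sketch, class name and all):

import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;

public class BruteForceAtLimit {
    public static void main(String[] args) {
        BigDecimal opsPerSecond = new BigDecimal("1e75"); // Bremermann-limit rate quoted above
        int[] keySizes = {128, 256, 512};
        for (int bits : keySizes) {
            BigDecimal keyspace = new BigDecimal(BigInteger.ONE.shiftLeft(bits)); // 2^bits keys
            BigDecimal seconds = keyspace.divide(opsPerSecond, MathContext.DECIMAL64);
            System.out.printf("%d-bit key: ~%.1e seconds to exhaust the keyspace%n",
                    bits, seconds.doubleValue());
        }
    }
}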

Are there really limits? (2, Insightful)

wb8wsf (106309) | more than 4 years ago | (#29737865)

Though there might be a limit on how fast a computation can go, I would think that parallel systems will boost that far beyond whatever limit there may be. If we crash into a boundary, multiple systems--or hundreds of thousands of them--will continue the upward trend.

I suppose there is also the question of whether 10^16 more computing power "ought to be enough for anybody". ;-)

Re:Are there really limits? (4, Insightful)

Anonymous Coward | more than 4 years ago | (#29737979)

Parallel computing won't help.
There's a limit to how fast your compute subsystems can exchange data as well.

No worry, we'll never get close (0)

onyxruby (118189) | more than 4 years ago | (#29737869)

With the overhead of DRM and other measures that constantly suck up CPU cycles, we'll never get close to the limit. Can we get a new Moore's Law, one that includes the DRM tax on our CPU cycles?

Re:No worry, we'll never get close (1)

sqlrob (173498) | more than 4 years ago | (#29737935)

There's a corollary somewhere that the speed of software halves every 18 months; it's been around for years.

Proprietary journals (1)

musides (127384) | more than 4 years ago | (#29737901)

What an intriguing idea. The article really whetted my curiosity. Then to find, as is all too common with scientific journals, that I can't read the damned paper itself without "buying" it. How anticlimactic.

Does anyone have further insight into their ideas?

Re:Proprietary journals (0)

Anonymous Coward | more than 4 years ago | (#29737969)

"What an intriguing idea. The article really whet my curiosity. Then to find, as is all too common for scientific journals, that I can't read the damned paper itself without "buying" it. How anti-climatic."

Complain that it was paid for with your tax dollars and demand a free copy.

"Does anyone have further insight into their ideas?"

Hard limit + hard limit = hard conclusion.

Human Brain Anyone ? (0)

Anonymous Coward | more than 4 years ago | (#29737915)

I wonder if someone would calculate the real capacity of a human brain and compare it to this limit. I'm assuming we are quite far from the limit, which would mean there is still room to evolve, and all the ideas about computers getting smarter than man would get a new twist. Since the maximum computational ability is limited, the outcome is not as straightforward as most SciFi novels portray.

Parallel processing... (1)

argent (18001) | more than 4 years ago | (#29737941)

We're already hitting clock speed "brownouts" and using parallel processing to get around them. To really tell where the limits are, you need to look at how small you can make a processor (best case, something like one bit per Planck length) and how much latency you can afford as information propagates from processor to processor at the speed of light or less.
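
To put a rough number on the latency point: light covers only about 10 cm during one cycle of a 3 GHz clock, so the physical spread of a parallel machine eats directly into its effective speed. A quick sketch (the clock rates and class name are illustrative):

public class LightPerCycle {
    public static void main(String[] args) {
        double c = 2.99792458e8; // speed of light in vacuum, m/s
        double[] clockRatesHz = {1e9, 3e9, 1e10};
        for (double f : clockRatesHz) {
            System.out.printf("%.0e Hz clock: light travels %.3f m per cycle%n", f, c / f);
        }
    }
}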

Not as scary as it sounds (1)

Stratoukos (1446161) | more than 4 years ago | (#29737943)

The summary makes it sound like in 80 years there will be no room for improvement and everyone will just have to make do with what they have.

If I understand correctly, the limit is on performance per volume (performance density?). I imagine that in 80 years most computational resources will be networked somehow. This means that if I required more processing power than is technically possible in a single computer, I could just use someone else's idle processor.

Re:Not as scary as it sounds (1)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29738033)

The efficiency of using somebody else's idle processor is necessarily bounded by the speed of light.

The further away your neighbor's blob of idle computronium is, the higher the latency you incur by using it.

I can't believe it (0)

Anonymous Coward | more than 4 years ago | (#29737949)

"At the current Moore's Law pace, computational speeds will hit the wall in 75 to 80 years"

So soon? I'm dumping all my tech stocks! So computer chip speeds will max out at the speed of a room full of human brains. Outside of attempting to model the Universe what do you need that much power for? Not graphics, games or robotics. Realtime sequencing DNA would be a breeze long before you'd max the limit so what practical use are we missing out on?

Re:I can't believe it (0)

Anonymous Coward | more than 4 years ago | (#29738257)

So soon? I'm dumping all my tech stocks! So computer chip speeds will max out at the speed of a room full of human brains. Outside of attempting to model the Universe what do you need that much power for? Not graphics, games or robotics. Realtime sequencing DNA would be a breeze long before you'd max the limit so what practical use are we missing out on?

Is this movie [wikipedia.org] really so old that people have forgotten it?

Reminds me of a joke (4, Interesting)

jcoy42 (412359) | more than 4 years ago | (#29737961)

A scientist and an engineer are led into a room. They are asked to stand on one side. On the opposite side is Treasure (or delicious cake if you please).

They are told that they may have the prize if they can reach it; however, they may never go more than half the distance between them and it.

The scientist balks, claiming it is obviously impossible since he can NEVER reach the prize, and leaves the room. The engineer shrugs, walks halfway to the prize 10 times or so, says "close enough" and takes it.

So I guess we'll just see, eh?

Re:Reminds me of a joke (3, Funny)

Anonymous Coward | more than 4 years ago | (#29738097)

And a mathematician would stand for a moment, calculate the limit, and then run full speed into the wall.

Re:Reminds me of a joke (0)

Anonymous Coward | more than 4 years ago | (#29738137)

So the tester uses ambiguous language and you use that to try to prove an unrelated point?
If by "you" they meant any part of the body, then the engineer moving his hand all the way to the cake disregards the rules.
Here, here's an example: http://xkcd.com/169/

Re:Reminds me of a joke (1)

IorDMUX (870522) | more than 4 years ago | (#29738253)

Very true.

... though every version I have ever heard of the joke/tale replaced "Treasure" with "attractive woman". This was partially due to the (sadly) all-male makeup of my graduate-level EE classes.

Re:Reminds me of a joke (2, Insightful)

nick_davison (217681) | more than 4 years ago | (#29738277)

And were the engineer a hacker, he'd pick up the scientist, carry him halfway across the room, set him down and say, "Your turn."

The game changing hackers are the ones who don't listen to the conventional logic of the time and figure out how to wander along a totally different axis that the "experts" hadn't thought of yet.

Look at Wolfenstein/Doom. 3D graphics "weren't possible" on home computers at the time. John Carmack turned it into a 2D solution and solved it anyway. Perhaps not perfect in every regard, but still a hell of a lot better than what anyone else was managing.

Nick's law: at least every 18 months, someone else will declare a limit to Moore's law [and turn out to be wrong].

With our current understanding of transistor science, I'm sure their point is a wonderful one. Problem is, with enough money behind finding the solution, someone'll come up with another axis to wander along that'll continue the advances. But don't feel bad, I'm sure plenty of people thought cart science had reached its theoretical peak and man would never move faster than horses were capable of, too.

Re:Reminds me of a joke (0)

Anonymous Coward | more than 4 years ago | (#29738321)

The scientist only makes it halfway out of the room though.

Re:Reminds me of a joke (0)

Anonymous Coward | more than 4 years ago | (#29738457)

Actually, an accountant was with them too. He walked straight to the cake, and called it a rounding error.

You mean mathematician instead of scientist (1)

g2devi (898503) | more than 4 years ago | (#29738573)

The scientist would not give up so easily.

The scientist would simply say that the wave function of the cake already overlaps with his wave function and take the cake.

Re:Reminds me of a joke (0)

Anonymous Coward | more than 4 years ago | (#29738575)

The cake is a lie

Yeah, except for that quantum mechanics thing (0)

Anonymous Coward | more than 4 years ago | (#29737967)

Seriously, this may be the currently known limit but I imagine there are more than a few things that will be discovered in the next 80 years.

Besides that, quantum computing will very likely obsolete the way we currently calculate how fast something is.

Electricity cost comes first... (3, Interesting)

RyanFenton (230700) | more than 4 years ago | (#29738003)

At the current rate of progress, so to speak, no one will be able to afford a computer that runs 10^16 times faster than current systems. Even as a gamer, I'm already leery of buying any of the newer video cards and CPU setups, after reviewing the cost in electricity needed to run them for a year compared to my existing system - they use somewhere around 4 times the electricity!

I can understand fitting more transistors onto a chipset, and more chipsets onto a system, but even with nanotech and similar technologies, I don't see much chance for each transistor to use proportionally less electricity to allow 10^16 more of them to be running at once. You'd have to run a conductance cable to the sun to get that kind of power.

Ryan Fenton

Re:Electricity cost comes first... (2, Informative)

anonymousbob22 (1320281) | more than 4 years ago | (#29738291)

Actually, many of the newer components available are far more efficient than their predecessors in terms of power usage.
Compare Intel's newer processors to the Pentium 4 and you'll see gains in both computing power and power efficiency [wikipedia.org]

Anyone else get the feeling... (3, Interesting)

Anonymous Coward | more than 4 years ago | (#29738013)

that the ultimate limit is the processes that the universe itself uses to "compute" its own state? That we can only ever asymptotically approach this limit? Once we hit the limit, our computations cease being simulations and become reality.

Computing to what end (0)

belthize (990217) | more than 4 years ago | (#29738049)

After reading some of the replies and thinking about the limit, I started wondering exactly what problems exist that would demand computational power more than 10^16 times what we have now.

I'd be interested in hearing of a problem that can be posited now but can't be solved in a reasonable amount of time (say a few days) with that much computational power. I'm sure there are mathematical oddities or encryption schemes that can chew up all free cycles, but it doesn't seem like raw computation is the limiting factor for most problems.

Long before then, it seems I/O bottlenecks are going to be a much bigger issue for any *interesting* problems.

Does this mean no warp drive? (1)

filesiteguy (695431) | more than 4 years ago | (#29738051)

I figure that - even if we find the dilithium crystals - we'd need really fast computers to handle space flights, transporter beams, instant food generators, doors that go "shh!" and warp drive.

I guess it is all just fiction after all.

Re:Does this mean no warp drive? (1)

Idiomatick (976696) | more than 4 years ago | (#29738401)

If you ignore the teleporter and holodeck, I'm sure everything in Star Trek could be run on a decently optimized modern computer.

Shannon's law again ? (0)

Anonymous Coward | more than 4 years ago | (#29738089)

If you believe this then I have a truckload of 33.6 kilobaud modems that will be of use to you.

10^16 times faster? (0, Troll)

gestalt_n_pepper (991155) | more than 4 years ago | (#29738103)

Finally, Windows will run fast enough to be useful.

Re:10^16 times faster? (1)

PRMan (959735) | more than 4 years ago | (#29738227)

Finally, Windows will run fast enough to be useful.

Yeah, but Windows 79 is releasing next week, and I hear it's kind of slow on only 500 exabytes of RAM...

Re:10^16 times faster? (1)

Snarkalicious (1589343) | more than 4 years ago | (#29738311)

Perhaps. But, unfortunately, this means that all jokes about Crysis are, in point of fact, tragic mockeries of all those who own the game and will NEVER be able to play it at max settings.

Nothing interesting in article. (2, Informative)

line-bundle (235965) | more than 4 years ago | (#29738109)

I RTFA but there is nothing in the article. Only talk of 75 years...

I remember one way to get an upper limit on frequency is using the equation E=hf, the Planck-Einstein relation. For a given amount of energy you can only get so much frequency. But this was a million years ago in my physics class.
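
For example (my own numbers, not from the article): at E = 1 eV, f = E/h works out to about 2.4 x 10^14 Hz, so a bound on the energy per operation directly caps how fast a state can change. A minimal sketch with illustrative energies:

public class PlanckEinstein {
    public static void main(String[] args) {
        double h = 6.62607015e-34;   // Planck constant, J*s
        double eV = 1.602176634e-19; // one electron-volt in joules
        double[] energies = {1.0 * eV, 1000.0 * eV}; // illustrative energies
        for (double E : energies) {
            System.out.printf("E = %.3e J -> f = E/h = %.3e Hz%n", E, E / h);
        }
    }
}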

PDF on arxiv (2, Informative)

sugarmotor (621907) | more than 4 years ago | (#29738201)

Thermodynamic cost of reversible computing
thermo-arxiv
February 1, 2008
Lev B. Levitin and Tommaso Toffoli

http://arxiv.org/pdf/quant-ph/0701237v2 [arxiv.org]

Not sure it is the same as in the Phys. Rev. Lett. 99, 110502 (2007) -- linked from the article -- which is from 2007

Stephan

Re:PDF on arxiv (1)

BitterOak (537666) | more than 4 years ago | (#29738267)

Not sure it is the same as in the Phys. Rev. Lett. 99, 110502 (2007) -- linked from the article -- which is from 2007

The abstract is the same, so it appears to be the same paper.

I haven't read the article all the way through yet. Do the authors take into account the possibility of optical or quantum computers?

encryption breaking power? (0)

Anonymous Coward | more than 4 years ago | (#29738209)

If the theoretical max speed of calculation has been established, can you now calculate the theoretical minimum time for a single machine to crack certain encryption algorithms?

i.e. How safe will files encrypted with today's encryption be in 80 years?

Fundamental time unit (2, Insightful)

imgod2u (812837) | more than 4 years ago | (#29738231)

We've been at roughly ~200ps per circuit operation for quite some time and yet processors are still getting faster. Parallel computation, what a novel idea.

Distance between objetcs (0)

Anonymous Coward | more than 4 years ago | (#29738255)

Another problem is how to get data between objects - light can only travel so far within a clock cycle, so while a processor may be capable of more and more cycles per second, we are already at the point where unimpeded light cannot travel more than a few inches per clock cycle. So other limitations would be the size of motherboards, where the memory is located, etc.

However, such limits can be cheated for a while by using pre-emption to get the data where it is needed before it is required.

Too bad speed is just a byproduct (1)

geekoid (135745) | more than 4 years ago | (#29738305)

of Moore's law. Moore's law has to do with the cost of a number of transistors in a given area of silicon.

There are practical limits we are running into that are getting harder and harder to solve. We are approaching the point where 1 particle of metal per billion can ruin a fab process. In order to bypass that, we will need fully self-contained fabs, which would have an even more limited lifecycle than current fabs.

This means the cost of chips could rise dramatically. I don't think many people are going to spend $5K on a home computer anymore.

What is happening is that they are going wide. So more chips, but not faster chips, which I think is better anyway.

No Zen (1)

sugarmotor (621907) | more than 4 years ago | (#29738417)

As usual, Zen is ignored. They don't take into account that when nothing happens that can also be your computation (accuracy -> oo).

Stephan

Constrained Freedom (2, Interesting)

DynaSoar (714234) | more than 4 years ago | (#29738463)

per TFabstract: "errors that appear as a result of the interaction of the information-carrying system with uncontrolled degrees of freedom must be corrected."

Would not quantum teleportation via entanglement provide a means of distributing computation, including massively parallel computation? Quantum teleportation would provide a constraint that would redefine the problem by redefining the environment (i.e. uncontrolled degrees of freedom). Replace Moore's Law with Bell's Theorem.

And does not quantum computing operate on all possible states, with the answer inherent in the wave function? Spew out the entangled qubits as needed and let them fight it out as a quantum form of Swarm.

If a result can be obtained this way, you may still have a problem with simultaneity -- the answer may arrive "before" the question, making it impossible to decode. However the problem then becomes a limitation of spacetime's ability to pass definitive information, and the limit of computation itself if such exists and/or can be measured in this context becomes moot. Being able to error trace via backtrack is similarly hampered but for the same reason and would still be possible post hoc.

But if a computational system is devised that can operate on such principles, and it is to be used for practical calculations, be aware that any defining of arguments will be restricted to the input end and results for comparison and decision making may not yet be available for such decisions (assuming a reasonable latitude of autonomous action). In which case, make sure you teach it phenomenology *before* putting it to work.

Windows Vista (0)

Anonymous Coward | more than 4 years ago | (#29738495)

So how will we ever run Vista?

10^16 times faster than today's fastest machines (1)

Tumbleweed (3706) | more than 4 years ago | (#29738625)

Yeah, until I hit the Turbo(tm) button! 11^16, baby! 11, because that's one more, isn't it?
