
Forget Moore's Law?

CmdrTaco posted more than 11 years ago | from the but-its-worked-so-well-so-far dept.

Technology 406

Roland Piquepaille writes "On a day when CNET News.com releases a story titled "Moore's Law to roll on for another decade," it's refreshing to look at another view. Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous. "An extraordinary announcement was made a couple of months ago, one that may mark a turning point in the high-tech story. It was a statement by Eric Schmidt, CEO of Google. His words were both simple and devastating: when asked how the 64-bit Itanium, the new megaprocessor from Intel and Hewlett-Packard, would affect Google, Mr. Schmidt replied that it wouldn't. Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors." Check this column for other statements by Marc Andreessen or Gordon Moore himself. If you have time, read the long Red Herring article for other interesting thoughts."


It's True!!! (-1)

Salad Shooter (600065) | more than 11 years ago | (#5278853)

Only Bone-O-Rama is capable of Automated First Post Goodness (c). Bone-O-Rama is quality software, written in Visual Basic.Net and released under the GPL.

Included features:

* Completely configurable
* GUI or character based interface
* Use the included database of clever first post comments or add your own!
* Post unattended or at a pre-determined time.
* Login and post via anonymous proxy.
* Duplicate story check option. If enabled, it will verify whether the story is unique and modify the post accordingly.
* Use the included scheduler to automate all aspects of Bone-O-Rama.

Additional features that the competition does not offer.

* PostStalker: Enter the user ID of someone you wish to stalk and every post that user makes will get an automatic response! Choose Civil or Flamewar.
* StoryQueStalker: Submit stories that are mined from Google, Yahoo and CNN as legit stories with links. Automatically!
* KarmaStalker: With this feature, your account can be tracked and karma calculated. No more Positive/Negative silliness.

If you, or your friends would like in on the first post lifestyle, there is no better way than to Bone-O-Rama your way to the top! For more information and screenshots, click here [cyborgmonkey.com] .

This post created using the magnificent Bone-O-Rama © 2002 cyborg_monkey LLC

Can it make julienne fries? (0, Offtopic)

Trespass (225077) | more than 11 years ago | (#5279072)

I'll buy that for a dollar!

BBC Article (4, Informative)

BinaryCodedDecimal (646968) | more than 11 years ago | (#5278854)

BBC Article on the same story here [bbc.co.uk] .

clustering (4, Interesting)

mirko (198274) | more than 11 years ago | (#5278857)

he said, the company intends to build its future servers with smaller, cheaper processors

I guess it's better to use interconnected devices in an interconnected world.

Where I work, we recently traded our Sun E10k for several E450s, between which we load-balance requests.
It surprisingly works very well.

I guess Google's approach is then an efficient one.

Re:clustering (5, Interesting)

beh (4759) | more than 11 years ago | (#5278931)

The question is always what you're doing.

Google's approach is good for Google. If Google wanted to make good use of significantly faster CPUs, it would also need significantly more RAM in its machines (a CPU faster by a factor of 10 can't yield a speed-up factor of ten if the network can't deliver the data fast enough).

For Google it's fine: a request that can be done in, say, half a second on a slower machine is a lot cheaper than a machine 10x as fast doing each request in .05 seconds but costing 50x more than the slower machine.
On the other hand, if you have a job that can only be done sequentially (or can't be parallelized all that well), then having 100s of computers won't help you very much. Which leaves one question: is it really worthwhile having 100s or 1000s of PC-class servers working your requests as opposed to a handful of really fast servers?
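
A rough back-of-the-envelope sketch of that trade-off, in Python (the prices and latencies below are illustrative assumptions, not anyone's real figures):

# Compare requests-per-second per dollar for a cheap box vs. a machine
# that is 10x faster but 50x more expensive (all numbers assumed).
cheap_cost = 2000                  # dollars for one commodity box
cheap_latency = 0.5                # seconds per request on the cheap box
fast_cost = 50 * cheap_cost        # the "10x faster" machine costs 50x as much
fast_latency = cheap_latency / 10  # 0.05 seconds per request

def throughput_per_dollar(cost, latency_s):
    # requests handled per second, divided by purchase price
    return (1.0 / latency_s) / cost

cheap = throughput_per_dollar(cheap_cost, cheap_latency)
fast = throughput_per_dollar(fast_cost, fast_latency)
print("cheap boxes win %.0fx on throughput per dollar" % (cheap / fast))  # -> 5x

For an embarrassingly parallel request load, that ratio is the whole argument; for a job that cannot be split up, the per-request latency of the fast machine is what you are paying for.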

The more expensive servers will definitely cost more when you buy them - on the other hand, the more expensive, faster machines might save you a lot of money in terms of less rent for the offices (lower space requirements) or - perhaps even more important - save on energy...

The company where I'm working switched all their work PCs to TFTs relatively early, when TFTs were still expensive. The company said that this step was based on the expected cost savings on power bills, and also on air conditioning in rooms with lots of CRTs...

Sounds like Ganesh's law to me (2, Informative)

DrSkwid (118965) | more than 11 years ago | (#5279186)

With all those hands [club-internet.fr]

Re:clustering (4, Informative)

e8johan (605347) | more than 11 years ago | (#5278935)

Google supports thousands of user request sessions, not one huge straight-line serial command sequence. This means that a huge bunch of smaller servers will do the job quicker than a big super-server - not only because of the raw computing power, but due to the parallelism that is extracted by doing so and the avoidance of the overhead introduced by running too many tasks on one server.

NoW (3, Informative)

Root Down (208740) | more than 11 years ago | (#5279104)

The NoW (Network of Workstations) approach has been an ongoing trend over the last few years, as the throughput achieved by N distinct processors connected by a high-speed network is nearly as good as (and sometimes better than) an N-processor mainframe. All this comes at a cost that is much less than that of a mainframe. In Google's case, it is the volume that is the problem, and not necessarily the complexity of the tasks presented. Thus, Google (and many other companies) can string together a whole bunch of individual servers (each with its own memory and disk space, so there is no memory contention - another advantage over the mainframe approach) quite (relatively) cheaply and get the job done by load balancing across the available servers. Replacement and upgrades - yes, eventually to the 64-bit chips - can be done iteratively so as to not impact service, etc. Lots of advantages...

Here is a link to a seminal paper on the issue if you are interested:
http://citeseer.nj.nec.com/anderson94case.html [nec.com]
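
As a minimal illustration of the load-balancing idea described above (hypothetical node names, plain round-robin, no failure handling):

import itertools

# Hypothetical pool of cheap, independent workstations (NoW-style).
servers = ["node01", "node02", "node03", "node04"]

# Round-robin balancer: each incoming request goes to the next node.
rotation = itertools.cycle(servers)

def dispatch(request_id):
    # Pick the next node in the rotation for this request.
    node = next(rotation)
    return "request %d -> %s" % (request_id, node)

for rid in range(6):
    print(dispatch(rid))

Because each node holds its own copy of the data it serves, there is no shared memory to contend for, and a dead node can simply be dropped from the pool.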

Upgrading Good... (4, Insightful)

LordYUK (552359) | more than 11 years ago | (#5278858)

... But maybe Google is more attuned to the mindset of "if it aint broke dont fix it?"

Of course, in true /. fashion, I didn't read the article...

Re:Upgrading Good... (2)

joebp (528430) | more than 11 years ago | (#5278874)

... But maybe Google is more attuned to the mindset of "if it aint broke dont fix it?"

Exactly. And if they run out of capacity, they just add more cheap nodes rather than buying a crazily expensive supercomputer like eBay has.

Re:Upgrading Good... (2, Funny)

42forty-two42 (532340) | more than 11 years ago | (#5279115)

And if they run out of capacity, they just add more cheap nodes rather than buying a crazily expensive supercomputer like eBay has.

You're thinking of priceline.

Re:Upgrading Good... (1)

hagardtroll (562208) | more than 11 years ago | (#5279118)

I guess Microsoft hasn't kept their bloatware growing at the pace of the newer hardware. Upgrades aren't as necessary as they used to be.

Re:Upgrading Good... (5, Insightful)

ergo98 (9391) | more than 11 years ago | (#5278899)

Google is of the philosophy of using large clusters of basically desktop computers rather than mega-servers. We've seen this trend for years, and it hardly spells the end of Moore's Law (Google is taking just as much advantage of Moore's Law as anyone: they're just buying at a sweet point. While the CEO might forebodingly proclaim their separation from those new CPUs, in reality I'd bet it's highly likely they'll be running 64-bit processors once the pricing hits the sweet spot).

This is all so obtuse anyways. These articles proclaim that Moore's Law is some crazy obsession, when in reality Moore's Law is more of a marketing law than a technical law: If you don't appreciably increase computing power year over year, no new killer apps will appear (because the market isn't there) encouraging owners of older computers to upgrade.

Re:Upgrading Good... (2, Insightful)

jacquesm (154384) | more than 11 years ago | (#5279131)

With all respect for Moore's law (and even if it is called a law, it's no such thing, since it approaches infinity really rapidly and that's a physical impossibility): killer apps and hardware have very little to do with each other. While hardware can enable programmers to make 'better' software, the basic philosophy does not change a lot, with the exception of gaming.

Computers are productivity tools, and a 'google'-like application would have been perfectly possible 15 years ago; the programmers would have had to work a little bit harder to achieve the same results. Nowadays you can afford to be reasonably lazy. It's only an economics thing, where cost of development and cost of hardware balance at an optimum.

In that light, if Google were developed 15 years ago it would use 286s, and if it were developed 15 years from now it would use whatever is in vogue and at the economically right price point for that time.

Re:Upgrading Good... (3, Insightful)

ergo98 (9391) | more than 11 years ago | (#5279231)

Killer apps and hardware have very little to do with each other. While hardware can enable programmers to make 'better' software, the basic philosophy does not change a lot, with the exception of gaming.

Killer apps and hardware have everything to do with each other. Could I browse the Internet on an Atari ST? Perhaps I could do lynx like browsing (and did), however the Atari ST didn't even have the processor capacity to decompress a jpeg in less than a minute (I recall running a command line utility to decompress those sample JPEGs hosted on the local BBS to ooh and ahh over the graphical prowess). Now we play MP3s and multitask without even thinking about it, and we wouldn't accept anything less. As I mentioned in another post I believe the next big killer app that will drive the hardware (and networking) industry is digital video: When every grandma wants to watch 60 minute videos of their grandchild over the "Interweeb" suddenly there will be a massive demand for the bandwidth and the computation power (I've yet to see a computer that can compress DV video in real-time).
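
For a rough sense of the numbers behind that last point, here is a back-of-the-envelope sketch using standard NTSC DV parameters (figures approximate):

# Rough data-rate arithmetic for NTSC DV video (approximate figures).
width, height = 720, 480      # NTSC DV frame size
fps = 29.97                   # NTSC frame rate
bytes_per_pixel = 1.5         # 4:1:1 chroma subsampling, ~12 bits/pixel

raw_bytes_per_sec = width * height * bytes_per_pixel * fps
raw_mbit_per_sec = raw_bytes_per_sec * 8 / 1e6
print("raw video: ~%.0f Mbit/s" % raw_mbit_per_sec)          # ~124 Mbit/s

dv_mbit_per_sec = 25                                         # DV target bitrate
print("DV stream: ~%d Mbit/s, roughly %.0f:1 compression"
      % (dv_mbit_per_sec, raw_mbit_per_sec / dv_mbit_per_sec))

Sustaining that DCT-based compression at 30 frames per second is exactly the kind of workload that keeps demand for faster hardware alive.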

Moore's law is about cost! (2, Insightful)

Dr. Spork (142693) | more than 11 years ago | (#5279188)

I think you're exactly right, and I find it incomprehensible that the author of an article on Moore's law does not even know how it goes. It has always been an index of performance per unit of cost, and of how this ratio changes with time. The author seems to think it's all about how chips get faster and faster, and that's an oversimplification we wouldn't even make for a schoolchild.

Google are taking advantage of cheap, high-performing chips, exactly the things predicted by Gordon Moore.

Misapprehensions (5, Insightful)

shilly (142940) | more than 11 years ago | (#5278864)

For sure, Google might not need the latest processors...but other people might. Mainframes don't have fantastic computing power either -- 'cos they don't need it. But for those of us who are busy doing things like digital video, the idea that we have reached some sort of computing nirvana where we have more power than we need is laughable. Just because your favourite word processor is responsive doesn't mean you're happy with the performance of all your other apps.

Re:Misapprehensions (0)

Anonymous Coward | more than 11 years ago | (#5278895)

That doesn't make much sense. You'd be better off with a dedicated, optimized chip for tasks like video editing. Heck, the GPU manufacturers will probably subsume those features into their next-generation chips.

Re:Misapprehensions (1)

shilly (142940) | more than 11 years ago | (#5278940)

I picked digital video b/c it's an area I'm vaguely familiar with. I'm sure there are plenty of other uses of computers that also tax the abilities of today's machines. In any event, the article was saying that Moore's law is redundant -- I think it's a very forced reading to claim it wasn't arguing that this holds for all computing devices, including GPUs.

Re:Misapprehensions (3, Interesting)

Tim C (15259) | more than 11 years ago | (#5279085)

5 or so years ago I was working on a PhD in plasma physics. I never finished it, but that's beside the point.

The point is that it involved numerical simulations of a plasma - specifically, the plasma created when a solid is hit by a high intensity, short pulse laser. I was doing the work on an Alpha-based machine at uni, but having recently installed Linux on my home PC, I thought, "why not see if I can get it running on that, it might save me having to go in everyday".

Well, I tried, but always got garbage results, littered with NaNs. I didn't spend too much time on it, but my assumption at the time was that the numbers involved were simply too big for my poor little 32-bit CPU and OS. It looked like things quickly overflowed, causing the NaN results. (The code was all in Fortran, incidentally.)

I am now a programmer at a web agency, but I've not forgotten my Physics "roots", nor lost my interest in the subject. I'm currently toying with doing some simulation work on my current home PC, and would like to know that I'm not going to run into the same sorts of problems. Of course, I can scale things to keep the numbers within sensible bounds, but it would be easier (and offer less scope for silly mistakes) if I didn't have to.

Not only that, of course, but the scope of simulating physical situations can often be memory limited. Okay, so I can't currently afford 4 gig of RAM, but if I could, I could easily throw together a simulation that would need more than that. In the future, that limit might actually become a problem.

Yes, I know I'm not a "typical" user - but the point is that it's not only video editing that would benefit from a move to 64 bit machines.

Re:Misapprehensions (1)

shilly (142940) | more than 11 years ago | (#5279199)

Indeed. There are both low- and high-end uses of computers that would benefit from greater horsepower. Other people on this thread have mentioned BLAST and speech software.

Re:Misapprehensions (3, Insightful)

e8johan (605347) | more than 11 years ago | (#5278917)

This is where FPGAs and other reconfigurable hardware will enter. There are already transparent solutions converting C code to both machine code and hardware (i.e. a bitstream to download into an FPGA on a PCI card).

When discussing video and audio editing, you must realize that the cause of the huge performance need is not the complexity of the task, but the lack of parallel work in a modern PC. As a matter of fact, smaller computing units, perhaps thousands of CUs inside a CPU chip, would give you better performance when editing video (if the code were adapted to take advantage of it) than a super chip from Intel.

If you want to test parallelism, bring together a set of Linux boxes and run Mosix. It works great!

But why? (2, Insightful)

thoolie (442789) | more than 11 years ago | (#5278944)

I understand your point, but in reality, what percentage of the computing population is doing digital video? Take into consideration that most (75%) of people computing in the US are doing just word processing and net browsing, most commercial environments that run servers can do cheaper clustering, etc., etc.

My point is that with newer technology, we are finding other solutions to SUPER CPUs, e.g. clustering. The 64-bit chips are great, but their market share isn't going to be nearly as high as the Pentium 4's or the Pentium 3's or 2's, due to the fact that when those PCs came out, PCs for the masses (cheap PCs) were relatively a new thing. Now, simple users have their PCs, power users are finding other solutions, and commercial industries already have $$$ invested in Sun, SGI, or other 64-bit power computing!

Re:But why? (1)

shilly (142940) | more than 11 years ago | (#5279010)

I think you're putting the cart before the horse: the reason only a relatively small fraction of the end-user population is fiddling with digital video is that the hardware (and software) to do so affordably has only recently become available. People like mucking around with pictures just as much as they like mucking around with words. If it's cheap and easy to do, they will. It's part of how Apple makes its money.

Re:But why? (2, Insightful)

micromoog (206608) | more than 11 years ago | (#5279076)

Image editing has been around for many years now, and there's still a much smaller percentage of people doing that than basic email/word processing. Video editing will always be a smaller percentage still.

Believe it or not, there's a large number of people that don't see their computer as a toy, and really only want it to do things they need (like write letters). Just because the power's there doesn't mean a ton of people will suddenly become independent filmmakers (no matter what Apple's ads tell you).

Re:But why? (1)

shilly (142940) | more than 11 years ago | (#5279152)

Sorry, but that's just silly. Image editing has been around for years, but cheap, fast easy image editing on consumer-level boxes has not been around for years. Of course, more people will use computers to send email than pictures, but do you really believe that in three years' time Mom and Pop won't just assume that they can use their new PC for their photos? And while fewer people will want to do video than photos or word-processing, they'll still number in the millions. You wait till they can mail their mates a video of some fit bird in a club that they've taken with their phone.

Finally, you sound like you're denying the existence of a substantial leisure market--or perhaps arguing that most consumers buy their computers for serious things like balancing chequebooks and writing letters to their landlord rather than fun stuff like planning weddings or writing letters to their grandchildren. Again, that seems silly.

Google != the edge of cutting edge (5, Insightful)

LinuxXPHybrid (648686) | more than 11 years ago | (#5278968)

> For sure, Google might not need the latest processors...but other people might.

I agree. Also, the article's conclusion that big companies have no future because Google has no intention of investing in new technology is premature. Google is a great company, a great technology company, but it is just one of many. Google probably does not represent the very edge of cutting-edge technology, either. Stuff like molecular dynamics simulation requires more computing power; I'm sure that people who work in such areas can't wait to hear Intel, AMD and Sun announcing faster processors, 64-bit, and more scalability.

Re:Misapprehensions (2, Informative)

SacredNaCl (545593) | more than 11 years ago | (#5279089)

Mainframes don't have fantastic computing power either -- 'cos they don't need it.

Yeah, but they usually have fantastic I/O -- where they do need it. There are still a ton of improvements in this area that could be made.

That silence you hear... (5, Funny)

dynayellow (106690) | more than 11 years ago | (#5278868)

Is millions of geeks going catatonic over the thought of not being able to overclock the next, fastest chip.

Re:That silence you hear... (2)

qoncept (599709) | more than 11 years ago | (#5279078)

<offtopic>
It always seemed to me like the money you saved buying a reasonable cooling solution rather than a peltier would be better used buying a faster processor that you won't need to overclock.
</offtopic>

Quick Stupid Question (4, Insightful)

betanerd (547811) | more than 11 years ago | (#5278873)

Why is it called Moore's Law and not Moore's Theorem? Doesn't "Law" imply that it could be applied to all situations at all times and still be true? Or am I reading way too much into this?

Re:Quick Stupid Question (4, Insightful)

Des Herriott (6508) | more than 11 years ago | (#5278905)

It's not even a theorem (which is what I assume you meant) - that would imply that some kind of mathematical proof exists. Unless you meant "theory"?

"Moore's Theory" or "Moore's Rule of Thumb" would be the best name for it, but "Moore's Law" sounds a bit catchier. Which, I think, is really all there is to it.

Re:Quick Stupid Question (1)

vikstar (615372) | more than 11 years ago | (#5278912)

Newtons "laws" are wrong when an object approaches the speed of light, when Einsteins "theory" of relativity is more correct.

Re:Quick Stupid Question (0, Offtopic)

hugesmile (587771) | more than 11 years ago | (#5278981)

Make sure you do not confuse Moore's Law with Cole's Law [topsecretrecipes.com] .

I'd hate to have that stuff doubling every 18 months.

64bit matters, for Google, too (4, Insightful)

g4dget (579145) | more than 11 years ago | (#5278884)

Assume, for a moment, that we had processors with 16bit address spaces. Would it be cost-effective to replace our desktop workstations with tens of thousands of such processors, each with 64k of memory? I don't think so.

Well, it's not much different with 32bit address spaces. It's easy in tasks like speech recognition or video processing to use more than 4Gbytes of memory in a single process. Trying to squeeze that into a 32bit address space is a major hassle. And it's also soon going to be more expensive than getting a 64bit processor.

The Itanium and Opteron are way overpriced in my opinion. But 64bit is going to arrive--it has to.
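
A quick sketch of how easily a single process can outgrow a 32-bit address space (the frame size and buffer depth are illustrative assumptions):

# How quickly a working set exceeds the 4 GiB limit of a 32-bit
# address space (illustrative numbers: uncompressed SD video frames).
frame_bytes = 720 * 480 * 3      # one 24-bit RGB frame, ~1 MiB
frames_held = 5000               # a few minutes buffered for editing

working_set = frame_bytes * frames_held
four_gib = 2 ** 32

print("working set: %.1f GiB" % (working_set / 2.0 ** 30))   # ~4.8 GiB
print("fits in a 32-bit address space:", working_set < four_gib)

At that point you are paging data in and out by hand, which is exactly the hassle the parent describes.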

Re:64bit matters, for Google, too (2, Insightful)

stiggle (649614) | more than 11 years ago | (#5278947)

64-bit has been here for a while in the form of Alpha processors, and they work very nicely.

Why stay stuck in the Intel world? There's more to computers than what you buy from Dell.

Re:64bit matters, for Google, too (5, Insightful)

drix (4602) | more than 11 years ago | (#5279008)

Right, thank you, glad someone else got that. No one is saying that Google has abandoned Itanium and 64-bit-ness for good. Read that question in the context of the article, and what Schmidt is really being asked is how the arrival of Itanium will affect Google. And of course the answer is that it won't, since as we all know Google has chosen the route of 10000 (or whatever) cheap Linux-based Pentium boxes in place of, well, an E10000 (or ten). But that sure doesn't mean Google is swearing off 64-bit for good--just that it has no intention of buying the "superchip." But bet your ass that when Itanium becomes more readily available and cheap, a la the P4 today, when Itanium has turned from "superchip" to "standardchip," Google will be buying them just as voraciously as everyone else. So these doomsday prognostications that Malone flings about don't seem that foreboding to me--Itanium will sell well, just not as long as it's considered a high-end niche item. But that never lasts long anyway. One-year-ago's high-end niche processor comes standard on every PC at CompUSA today.

Re:64bit matters, for Google, too (1)

shic (309152) | more than 11 years ago | (#5279068)

The interesting question is not when I can have 64-bit registers, but rather when I can have a larger address bus and VM address space. In my view the benefits of 64-bit computing (in a way analogous to 32-bit computing) are not clearly proven. I propose, though I don't offer empirical evidence here, that the vast majority of modern software has a property I will refer to loosely as locality - i.e. the idea that typically register values are small, and that the bottlenecks in a properly optimised program will predominantly use a relatively small portion of the address space. If this is the case, I see no valid reason to want to manipulate 64-bit quantities atomically within the processor - wouldn't simply extending the 32-bit MMU architecture (with appropriate compiler optimisations) prove more cost-effective for the foreseeable future?

Does it? (1)

Open_The_Box (620252) | more than 11 years ago | (#5279138)

Fair enough, if you're doing video processing or high-performance 3D rendering or speech recognition then you're going to want more memory, larger address spaces and faster processors. For this reason alone it's worth working on more powerful computing hardware; more power means you can do more complex tasks, which means you'll need more powerful hardware to do them faster, which means you can do more complicated complex tasks, which means you'll need more...

The point is that a bunch of slower 32-bit processors running Google will more than likely be better than one large 64-bit processor. More machines in parallel rather than one more powerful machine. Bottleneck: connection bandwidth, perhaps? Just a thought. Feel free to slap me down for stupidity if you like. ;)

All in all it depends entirely on what you want to do with your machine. Having just upgraded my office machine from a PII 350 to a PIII 800 (Whew! I know! Blistering speed!), I notice no real difference in my net-surfing and/or LaTeX compiling speeds though. My home machine: not too fast, but plays a mean game of UT2003 and renders checkerboard floors with chess pieces in acceptable times. Use the right tool for the job.

Damn it! (3, Funny)

FungiSpunk (628460) | more than 11 years ago | (#5278888)

I want my quad 64GHz processor! I want it in 2 years time and I want quad-128Ghz ready by the following year!!!

well now... (4, Funny)

stinky wizzleteats (552063) | more than 11 years ago | (#5278892)

This makes me feel a lot less like a cantankerous, cheap old fart for not replacing my Athlon 650.

Re:well now... (1)

i.r.id10t (595143) | more than 11 years ago | (#5278950)

Same here. I'm still running a dual 450 at home; it "feels" as fast as the single 933 on my desk here at work, and I still don't see a point in upgrading. Especially since I have a very poor net connection, so my big reason for upgrading (games) is fairly moot.

Re:well now... (2, Funny)

isorox (205688) | more than 11 years ago | (#5279079)

*taps away on P166 Thinkpad*

Squeeze the turnip (1)

DShard (159067) | more than 11 years ago | (#5278896)

I do think a move to 64 bits to tide us over for a decade of addressable memory space is crucial. Regardless of what some CIO thinks at some dot-com, 64 bits just keeps the need train rolling.

Now, my need for the latest and greatest has waned with my interest in games. I can say that compiling on the 2.4GHz P4s at work beats the hell out of compiling on my 1.13GHz Tbird at home. The Gentoo install was an order of magnitude different. But as I creep more and more into my software freedom I see new reasons to get the 10GHz chip with dual 64-bit cores. Just think how fast I could convert DVDs to DivX.

Re:Squeeze the turnip (0)

flokemon (578389) | more than 11 years ago | (#5279163)

You can surely convert DVDs to DivX very fast with the upcoming IBM x-series 450, which should have up to 4 Itanium 64-bit CPUs.
Not sure it'll be worth the expense though!

Moore ain't a law... (1, Redundant)

MosesJones (55544) | more than 11 years ago | (#5278897)

It's a prediction that has held pretty true. It's a good benchmark, but it is not a true Law.

And every 6 months it's either a) dead, b) going to continue forever, or c) dead real soon. Most often it's all three every week.

Re:Moore ain't a law... (2, Insightful)

Shimbo (100005) | more than 11 years ago | (#5278993)

It's a prediction that has held pretty true. It's a good benchmark, but it is not a true Law.

The majority of laws are empirical in nature. Even Newton's laws of motion don't come from the theory; rather, they are axioms that underlie it.

Google's got the right idea. (2, Insightful)

Lukano (50323) | more than 11 years ago | (#5278898)

I've run into similar situations with clients of mine when trying to figure out for them what the best solution for their new servers etc. would be.

Time and time again, it always comes down to:

Buy them small and cheap, put them all together, and that way if one dies, it's a hell of a lot easier and less expensive to replace/repair/forget.

So Google's got the right idea, they're just confirming it for the rest of us! :)

Danger (2, Funny)

Anonymous Coward | more than 11 years ago | (#5278902)

"Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous."

If only all dangerous things would go away as soon as we choose to forget them...

Transistors? BAH! (4, Funny)

The Night Watchman (170430) | more than 11 years ago | (#5278903)

I'm waiting for DNA Computers [udel.edu] ! Shove a hamburger into where the floppy drive used to be, run gMetabolize for Linux (GNUtrients?), in a few hours my machine isn't obsolete anymore.

Either that, or it mutates into an evil Steve Wozniak and strangles me in my sleep.

/* Steve */

Re:Transistors? BAH! (1)

SlamMan (221834) | more than 11 years ago | (#5278928)

That'll be great for all the times my users spill sodas or yogurt on their computers :-)

Re:Transistors? BAH! (1, Funny)

Anonymous Coward | more than 11 years ago | (#5279101)

Great idea. Make a computer that sees humans as a parts/food source. Add a delicious incentive for robot underlings to revolt.

Re:Transistors? BAH! (1)

Pope (17780) | more than 11 years ago | (#5279200)

You could power it with a Mr. Fusion!

Sincere question (1, Interesting)

KillerHamster (645942) | more than 11 years ago | (#5278904)

Could someone please explain to me why this 'Moore's Law' is so important? The idea of expecting technology to grow at a certain, predictable rate seems stupid to me. I'm not trolling, I just would really like to know why anyone cares.

Because (2, Insightful)

tkrotchko (124118) | more than 11 years ago | (#5278951)

The expectation that computing power will (essentially) double every 18 months drives business planning at chip makers, fab makers, software developers - everything in the tech industry. In other words, it becomes a self-fulfilling prophecy.

I'm not doing it real justice, but Google (ironic, eh?) the effects of Moore's law for a much better explanation.

throughput... not processing (1)

bhundven (649612) | more than 11 years ago | (#5278908)

If you think about it, they should have asked google this question. Google is about throughput, not processing. They should have asked google about network technology!

Re:throughput... not processing (1)

bhundven (649612) | more than 11 years ago | (#5278921)

doh... I meant "shouldn't", not "should"

The art of trolling dying? (-1)

Anonymous Coward | more than 11 years ago | (#5278909)

Over the past several months I have been watching SlashDot and have noticed that there are fewer and fewer trolls going around.

Also most of the trolls are of lower quality and are being reused...not very many original trolls anymore?

What could possibly be happening to the SlashDot trolls? Dying? Finally over the age of 14? Possibly got laid and/or over being homos? Crippled themselves from the chronic masturbation?
Or could it possibly that they all have disappeared into the gaping black hole of the goatse guys ass? [slashdot.org]

In conclusion it seems that the SlashDot trolls are a dying breed and need protection.
So we must provide them with
1. all our base
2. hot grits
3. Natalie Portman
4. ???
5. PROFIT!!

to keep the reclusive, and chronicly unlaid trolls from dying off

Does anybody take Andreessen seriously? (5, Insightful)

Anonymous Coward | more than 11 years ago | (#5278930)

I mean the guy was involved in Netscape.

He hit the lottery. He was a lucky stiff. I wish I was that lucky.

But that's all it was. And I don't begrudge him for it. But I don't take his advice.

As for google. Figure it out yourself.

Google isn't driving the tech market. What's driving it are new applications like video processing that - guess what - need much faster processors than we've got now.

So while Google might not need faster processors, new applications do.

And I say that loving Google, but it's not cutting edge in terms of hardware. They have some good search algorithms.

Now, again... (2, Funny)

OpenSourced (323149) | more than 11 years ago | (#5278952)

If you have time, read the long Red Herring article...


Of course we have time. Ain't we reading slashdot?

Sure and... (Re:Now, again...) (1)

keller (267973) | more than 11 years ago | (#5279222)

...everybody has of course read the article also, because we have the time!

I for one never just browse the /. post, and I have never heard of anyone doing so.

Andreesen quotes... (4, Insightful)

praetorian_x (610780) | more than 11 years ago | (#5278955)


"The rules of this business are changing fast," Mr. Andreessen says, vehemently poking at his tuna salad. "When we come out of this downturn, high tech is going to look entirely different."
*gag* Off topic, but has *anyone* become as much of a caricature of themselves as Andreessen?

This business is changing fast? Look entirely different? Thanks for the tip Marc.

Cheers,
prat

Re:Andreesen quotes... (0)

Anonymous Coward | more than 11 years ago | (#5279122)


Marc Andreesen is a god! He can't be bothered with thinking up original ideas, man. I think you just need to shift your paradigm.

Xeon beats Itanium on value (3, Interesting)

Macka (9388) | more than 11 years ago | (#5278957)

I was at a customer site last week, and they were looking at options for a 64-node (128-CPU) cluster. They had a 2-CPU Itanium system on loan from HP for evaluation. They liked it, but decided to go with Xeons rather than Itanium. The reason: Itanium systems are just too expensive at the moment. Bang for buck, Xeons are just too attractive by comparison.

The Itanium chip will eventually succeed, but not until the price drops and the performance steps up another gear.

how is this not moore's law? (4, Insightful)

rillian (12328) | more than 11 years ago | (#5278958)

Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors.

How is this not Moore's law? Maybe not in the strict sense of the number of transistors per CPU, but it's exactly that increase in high-end chips that makes mid-range chips "smaller, cheaper" and still able to keep up with requirements.

That's the essence of Moore's law. Pretending it isn't is just headline-writing manipulation, and it's stupid.

Re:how is this not moore's law? (0)

Anonymous Coward | more than 11 years ago | (#5279024)

The author actually says that in the Red Herring article. The article has nothing to do with the industry abandoning being on the curve described by Moore's Law, it just says that now a company you might expect to be on the "high-speed" end has said it doesn't need speed anymore and it wants to be on the "low-cost" end. That's all.

But when you don't actually have to write headlines that have anything to do with the truth and you're unencumbered by any sort of knowledge of the subject you're writing about, you might as well just go ahead and say "Forget Moore's Law" rather than "Industry shifts from high-speed chips to low-cost chips" or something like that. Sounds better, attracts more readers.

Maybe "Moore's Law in Hot Lesbian Teen Sex Scandal" would have been an even better title.

Re:how is this not moore's law? (1)

Durinia (72612) | more than 11 years ago | (#5279043)

I think you've hit it on the head here. Google still wants Moore's law to continue. The plus side to it would be that they can get the same amount of performance per processor they have now (which is sufficient for them) for *much* less money.

Think about the "price shadow" of products - when a new product comes out, the older/slower/less sophisticated product becomes cheaper. If this happens *really quickly*, then the prices are likely to go down a lot, and very soon. If you've already got what you want, it's a great place to be in.

This doesn't happen much with industries where there aren't many advances (think electric range). A two year old stove is pretty close in price to a brand new one. Whereas, a two year old processor (and 50 cents) will get you a cup of coffee.

It's in the gospel (4, Funny)

datadictator (122615) | more than 11 years ago | (#5279046)

And that day the spirits of Turing and Von Neumann spoke unto Moore of Intel, granting him insight and wisdom to understand the future. And Moore was with chip, and he brought forth the chip and named it 4004. And Moore did bless the chip, saying: "Thou art a breakthrough; with my own corporation have I fabricated thee. Thou art yet as small as a dust mote, yet shall thou grow and replicate unto the size of a mountain and conquer all before thee. This blessing I give unto thee: every eighteen months shall thou double in capacity, until the end of the age." This is Moore's law, which endures to this day.

Do not mess with our religion :-)

Until the end of the epoch, Amen.

PS. With thanks to a source which I hope is obvious.

Google's decision is economic (4, Insightful)

Hays (409837) | more than 11 years ago | (#5278965)

They're not saying they don't want faster processors with higher address spaces, who wouldn't. They're simply saying that the price/performance ratio is likely to be poor, and they have engineered a good solution using cheaper hardware.

Naturally there are many more problems which can not be parallelized and are not so easily engineered away. Google's statement is no great turning point in computing. Faster processors will continue to be in demand as they tend to offer better price/performance ratios, eventually, even for server farm situations.

Mushy writing (5, Insightful)

icantblvitsnotbutter (472010) | more than 11 years ago | (#5278982)

I don't know, but am I the only one who found Malone's writing to be mushy? He wanders around, talking about how Moore's Law applies to the burst Web bubble, and how Intel isn't surviving because of an inability to follow its founder's law, and yet claims we shouldn't be enslaved by this "law".

In fact, the whole article is based around Moore's Law still applying, despite being "unhealthy". Well, duh. I think he had a point to make somewhere, but lost it on the way to the deadline. Personally, I would have appreciated more concrete reasons why Google's bucking the trend is so interesting (to him).

He did bring up one very interesting point, but didn't explore it enough to my taste. Where is reliability in the equation? What happens if you keep all three factors the same, and use the cost savings in the technology to address failure points?

Google ran into bum hard drives, and yet the solution was simply to change brands? The people who are trying to address that very need would seem to be a perfect fit for a story about why Moore's Law isn't the end-all be-all answer.

Moore's Law still valid (2, Interesting)

TomHoward (576101) | more than 11 years ago | (#5278983)

he said, the company intends to build its future servers with smaller, cheaper processors

Just because Google (and I assume many other companies) are looking to use smaller, cheaper processors, it does not mean that Moore's law will not continue to hold.

Moore's Law is a statement about the number of transistors per square inch, not per CPU. Google's statement is more about the (flawed) concept of "One CPU to rule them all" than any indictment of Moore's Law or those that follow it.

The pied piper (2, Troll)

vikstar (615372) | more than 11 years ago | (#5278990)

We should not simply and blindly follow Moore's law as a guide to producing CPUs. We are capable of crushing Moore's law; however, CPU companies are not interested in creating fast computers, they are interested in making a profit. This translates into small increments in CPU speed which they can charge large increments of price for.

Other possibilities, such as quantum computing, are left to a number of small university lecturers to study and conduct research in - small compared to the revenue of the chip companies.

Cheaper doesn't mean better either (4, Insightful)

Jack William Bell (84469) | more than 11 years ago | (#5278999)

The problem is that cheaper processors don't make much money -- there isn't the markup on commodity parts that there is on the high end. The big chip companies are used to charging through the nose for their latest and greatest and they use much of that money to pay for the R & D, but the rest is profit.

However profit on the low end stuff is very slight because you are competing with chip fabs that don't spend time and money on R & D; buying the rights to older technology instead. (We are talking commodity margins now, not what the market will bear.) So if the market for the latest and greatest collapses the entire landscape changes.

Should that occur my prediction is that R & D will change from designing faster chips to getting better yields from the fabs. Because, at commodity margins, it will be all about lowering production costs.

However I think it is still more likely that, Google aside, there will remain a market for the high end large enough to continue to support Intel and AMD as they duke it out on the technological edge. At least for a while.

Don't read too much into Googles response ... (3, Insightful)

binaryDigit (557647) | more than 11 years ago | (#5279017)

For their application, having clusters of "smaller" machines makes sense. Let's compare this to eBay.

The data Google deals with is not real-time. They churn on some data and produce indices. A request comes in to a server; that server could potentially have its own copy of the indices and can access a farm of servers that hold the actual data. The fact that the data and indices live on farms is no big deal, as there is no synchronization requirement between them. If server A serves up some info but is 15 minutes behind server Z, that's OK. This is a textbook application for distributed, non-stateful server farms.

Now eBay: ALL their servers (well, the non-listing ones) HAVE to be going after a single or synchronized data source. Everybody MUST have the same view of an auction, and all requests coming in have to be matched up. The "easiest" way to do this is by going against a single data repository (well, single in the sense that the data for any given auction must reside in one place; different auctions can live on different servers, of course). All this information needs to be kept up on a real-time basis. So eBay also has the issue of transactionally updating data in real time. Thus their computing needs are significantly different from those of Google.
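
A toy sketch of the two access patterns being contrasted (all data and names are made up; real systems would use replicated indices and a transactional database, respectively):

import threading

# Google-style: each server keeps its own, possibly stale, copy of a
# read-only index. No coordination is needed between replicas.
replica_a = {"moore's law": ["news.com.com", "redherring.com"]}
replica_b = dict(replica_a)      # may lag behind replica_a; that's fine

def search(replica, term):
    return replica.get(term, [])

# eBay-style: every bid must go against one synchronized record, so
# updates are serialized behind a lock (or a transactional database).
auction = {"high_bid": 100}
auction_lock = threading.Lock()

def place_bid(amount):
    with auction_lock:
        if amount > auction["high_bid"]:
            auction["high_bid"] = amount
        return auction["high_bid"]

print(search(replica_b, "moore's law"))
print(place_bid(125))

The first pattern scales by adding boxes; the second has a serialization point that cheap horizontal scaling alone cannot remove.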

Re:Don't read too much into Googles response ... (2)

GenetixSW (35311) | more than 11 years ago | (#5279142)

That's not entirely right. EBay isn't really any more synchronised than Google.

You might have noticed when posting an auction that you can't search for it until quite a bit after posting. That's because the EBay servers don't synchronise and reindex as frequently as one might think. Their pages are kept as static as possible to reduce the load on their servers.

Re:Don't read too much into Googles response ... (1)

binaryDigit (557647) | more than 11 years ago | (#5279190)

That's not entirely right. EBay isn't really any more synchronised than Google

That's why I said in my post:

ALL their servers (well the non listing ones) HAVE to be going after a single or synchronized data source.

But to characterize this delayed listing as "isn't really any more synchronised than Google" is really missing the point. Google has NO synchronization requirements, ebay has one huge one. And this difference is all the difference in the world when you're architecting your back end. You end up with two vastly different requirements and correspondingly, two vastly different approaches.

Eh? (4, Funny)

Mr_Silver (213637) | more than 11 years ago | (#5279040)

Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous.

How can Moore's law become dangerous?

If you break it, will you explode into billions of particles?

Re:Eh? (2, Funny)

sql*kitten (1359) | more than 11 years ago | (#5279223)

If you break it, will you explode into billions of particles?

The danger is that soon enough an Intel processor will get hot enough to trigger a fusion reaction in atmospheric hydrogen, turning Earth into a small star. We must abandon this dangerous obsession with Moore's law before it's too late!

Does Moore's Law actually hold back development? (1, Insightful)

Zog The Undeniable (632031) | more than 11 years ago | (#5279051)

Is it possible that chip manufacturers feel they have to deliver new products in accordance with ML but not exceed it? Apparently Intel have had 8GHz P4s running (cooled by liquid nitrogen, but you had to do this to get fairly modest overclocks not so long ago).

I fully expect this to get modded down, but I still think chip manufacturers are deliberately drip-feeding us incremental speeds to maximise profits. There's not much evidence of a paradigm shift on the horizon; Hammer is an important step but it's still a similar manufacturing process. As a (probably flawed) analogy, if processing power became as important to national security as aircraft manufacture in WWII, look how fast progress could be made!

Re:Does Moore's Law actually hold back development (0)

Stumbles (602007) | more than 11 years ago | (#5279159)

No, they do it to keep everyone on the upgrade cascade. They gotta have some way to suck dollars out of your pocket.

Both ways lead to growth of computing power... (2, Insightful)

iion_tichy (643234) | more than 11 years ago | (#5279058)

Whether you use a super chip or several low-cost chips, the computing power at your disposal still grows exponentially, I guess. So no refutation of Moore's law.

render farms (3, Informative)

AssFace (118098) | more than 11 years ago | (#5279059)

Google doesn't really do much in terms of actual hardcore processing - it just takes in a LOT of requests - but each one isn't intense, and it is short-lived.

On the other hand, say you are running a render farm - in that case you want a fast distributed network, the same way Google does, but you also want each individual node to be as fast as freakin' possible.
They have been using Alphas for a long time for that exact reason - so now, with the advent of the Intel/AMD 64s, that will drive prices down on all of it - so I would imagine the render farms are quite happy about that. That means they can either stay at the speed at which they do things now, but for cheaper - or they can spend what they do now and get much more done in the same time... either way leading to faster production and arguably more profit.

The clusters that I am most familiar with are somewhere in between - they don't need the newest, fastest thing, but they certainly wouldn't be hurt by a faster processor.
For the stuff I do though, it doesn't matter too much - if I have 20 hours or so to process something, and I have the choice of doing it in 4 minutes or 1 minute, I will take whichever is cheaper, since the end result might as well be the same otherwise in my eyes.

what moore said.. (4, Insightful)

qoncept (599709) | more than 11 years ago | (#5279063)

I think people are missing the point of Moore's law. When he said he thought transistors would double every 2 years, that's what he thought would happen. That's not a rule set that anyone has to follow (which, as far as I can figure, is the only way it could be "dangerous": people might be trying to increase the number of transistors to meet it rather than do whatever else might be a better idea). It's not something he thought would always be the rule, forever, no matter what. The fact that he's been right for 35 years already means he was more right than he could have imagined.
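
To put a number on how well the prediction has held, here is a simple doubling projection from the Intel 4004 (dates and transistor counts are the commonly cited approximate figures):

# Doubling projection from the Intel 4004 (1971, ~2,300 transistors),
# using the two-year doubling period from Moore's revised 1975 estimate.
start_year, start_count = 1971, 2300
doubling_period = 2.0            # years

def projected_transistors(year):
    return start_count * 2 ** ((year - start_year) / doubling_period)

for year in (1971, 1981, 1993, 2003):
    print(year, "%.2e" % projected_transistors(year))

The 2003 projection comes out around 1.5e8 transistors, within a small factor of the desktop and server CPUs actually shipping at the time, which is the poster's point: the prediction has held up better than anyone had a right to expect.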

I posted this back... (1)

bob670 (645306) | more than 11 years ago | (#5279075)

on the Rambus lawsuit thread, but I think it still applies: http://slashdot.org/comments.pl?sid=52277&cid=5185879

Is this the first signs of a turnaround? (4, Insightful)

Lumpy (12016) | more than 11 years ago | (#5279077)

Software over the past 20 years has gotten bigger, not better. We don't do anything different from what I was able to do in 1993. And it doesn't affect just Windows and commercial apps. Linux and its flotilla of apps are all affected. Gnome and KDE are bigger and not better. They do not do the desktop thing any better than they did 5 years ago. Sure, small features have finally been fixed, but at the cost of adding 100 eye-candy options for every fix. Mozilla is almost as big as IE, Open Office is still much larger than it needs to be, and X Windows hasn't been on a diet for years.

Granted, it is much MUCH worse on the Windows side. Kiplinger's TaxCut is 11 megabytes in size for the executable. FOR WHAT?? Eye candy and other useless features that don't make it better... only bigger.

Too many apps and projects add things for the sake of adding them... to look "pretty" or just for silly reasons.

I personally still believe that programmers should be forced to run and program on systems that are 1/2 to 1/3rd of what is typically used. This would force the programmers to optimize or find better ways to make that app or feature work.

It sounds like Google is tired of getting bigger and badder only to watch it become no faster than what they had only 6 months ago, after the software and programmers slow it down.

Remember, everyone... X Windows and a good window manager in Linux RAN VERY WELL on a 486 with 16 megs of RAM and a decent video card. Today there is no chance in hell you can get anything but Blackbox and a really old release of X to run on that hardware (luckily the Linux kernel is scalable and it happily runs all the way back to the 386).

Moore's Law (2, Interesting)

ZeLonewolf (197271) | more than 11 years ago | (#5279097)


"Moore's Law" has been bastardized beyond belief. Take an opportunity to read Moore's Paper [wpi.edu] (1965), which is basically Gordon Moore's prediction on the future direction of the IC industry.

Electricity Consumption was the Whole Point (1, Insightful)

Anonymous Coward | more than 11 years ago | (#5279100)

Your lead and the Red Herring story have for some reason missed the point and are misleading. There is no objection whatsoever to faster, more powerful processors. The problem is the high power bills.

Re:Electricity Consumption was the Whole Point (1)

porkchop_d_clown (39923) | more than 11 years ago | (#5279162)

A 1 CPU IA64 box draws less power than a 2 processor Xeon system? Is that true?

Makes a lot of sense... (1)

Noryungi (70322) | more than 11 years ago | (#5279109)

Let's face it: an Intel Pentium 4 or AMD Athlon is more than sufficient for 99% of all needs out there.

If you need more power than what a single CPU has to offer, buy an SMP machine. Or make a Beowulf cluster.

And no, this is not a joke: this is exactly what Google has been doing: build a humongous cluster and split everything between hundreds of machines, right?

Since Linux and the *BSDs have appeared, this means that pretty much every task can be managed by cheap, standardized machines. It's highly possible that, as the Red Herring article said, we'll see big chip makers 'go under' just because the research costs balloon out of control.

Very interesting articles. Moore's Law may end, not because it's impossible to build a better chip, but because it has become un-economical to build one.

Oh Really? (2, Insightful)

plasticmillion (649623) | more than 11 years ago | (#5279126)

This article is certainly thought-provoking, and it is always worthwhile to challenge conventional wisdom once in a while. Nonetheless, I can't shake the feeling that this is a lot of sound and fury about nothing. As many others have pointed out, Google's case may not be typical, and in my long career in the computer industry I seem to remember countless similar statements that ended up as more of an embarrassment to the speaker than anything remotely prescient (anyone remember Bill Gates's claim that no one would EVER need more than 640K of RAM?).

I use a PC of what would have been unimaginable power a few short years ago, and it is still woefully inadequate for many of my purposes. I still spend a lot of my programming time optimizing code that I could leave in its original, elegant but inefficient state if computers were faster. And in the field of artificial intelligence, computers are finally starting to do useful things, but are sorely hampered by insufficient processing power (try a few huge matrix decompositions -- or a backgammon rollout! -- and you'll see what I mean).

Perhaps the most insightful comment in the article is the observation that no one has ever won betting against Moore's Law. I'm betting it'll be around another 10 years with change. Email me if you're taking...
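
As a small illustration of that kind of workload, here is a sketch that times a dense linear solve at growing sizes (it assumes NumPy is installed; the roughly cubic cost is what keeps such jobs CPU-bound):

import time
import numpy as np

# Time a dense linear solve at increasing sizes; the work grows roughly
# as n^3, which is why large decompositions still crave faster CPUs.
rng = np.random.default_rng(0)

for n in (500, 1000, 2000):
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    np.linalg.solve(a, b)        # LAPACK-backed LU factorization and solve
    print("n=%5d: %.3f s" % (n, time.perf_counter() - t0))

Doubling n multiplies the work by roughly eight, so even modest problem sizes quickly outrun a single processor of that era.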

Re:Oh Really? (1)

porkchop_d_clown (39923) | more than 11 years ago | (#5279141)

Maybe at some point we will need to work with that much data, but do we need it today?


Each previous generation of processors was released more or less in tandem with a new generation of apps that needed the features of the new chips. What app do you run that needs an IA64 or a Dec Alpha? If you want raw performance, better to get either the fastest IA32 chip you can or, maybe, a PowerPC with Altivec support. (Assuming you're writing your own app and can support the vector processor, of course....)

Amen. (2)

porkchop_d_clown (39923) | more than 11 years ago | (#5279127)

My experience with 64 bit chips is that they don't offer any compelling advantages over a multi-processor 32 bit system.


The only real advantage they have is a bigger address space and even that doesn't offer much advantage over a cluster of smaller systems.


Has anyone noticed? (1, Funny)

Anonymous Coward | more than 11 years ago | (#5279160)

Has anyone noticed that the rate of predictions of the death of Moore's law seems to be doubling every 18 months? Spooky.

Unreal (1)

Rutje (606635) | more than 11 years ago | (#5279181)

How am I gonna play the new Unreal without Moore's Law??

Forget Moore's Law (1)

jellomizer (103300) | more than 11 years ago | (#5279205)

Done.

All posts about, umm, whatever it was - I forgot what it was - will now be offtopic. With this in mind I will buy myself a brand spanking new XT at 5 MHz. Because as time increases, the cost per transistor will rise and the number of transistors will decrease logarithmically. With this XT I will be 20 years ahead of the curve.

Funk dat! (1)

supabeast! (84658) | more than 11 years ago | (#5279214)

Moore's Law needs to be the barrier that everyone tries to break. Over the next ten years we should expect to see intel, AMD, VIA, and those Dragon guys in China start catching up to each other and pushing raw CPU power to new heights.

Otherwise, I might not be able to run Doom ]|[ above 1600x1200 with all the effects turned on.

I actually read them (4, Insightful)

jj_johny (626460) | more than 11 years ago | (#5279225)

Here is the real deal about Moore's law and what it means. If you don't take Moore's law into account, it will eventually change the dynamics of your industry and cause great problems for most companies.

Example 1 - Intel - This company continues to pump out faster and faster processors. They can't stop making new processors, or AMD or someone else will. The cost of making each processor goes up, but the premium for new, faster processors continues to drop as fewer people need the absolute high end. If you look at Intel's business 5 years ago, they always had a healthy margin on the high end. That is no longer the case, and if you extrapolate out a few years, it is tough to imagine that Intel will be the same company it is today.

Example 2 - Sun - These guys always did a great job of providing tools to companies that needed the absolute fastest machines to make it work. Unfortunately, Moore's law caught up and made their systems a luxury compared to lots of other manufacturers.

The basic problem that all these companies have is that Moore's Law eventually changes every business into a low end commodity business.

You can't stop the future. You can only simulate it by stopping progress.

no need for speed (3, Insightful)

MikeFM (12491) | more than 11 years ago | (#5279229)

Seriously, at this point most people don't need 1 THz CPUs. What most people need is cheaper, smaller, more energy-efficient, cooler CPUs. You can buy 1 GHz CPUs now for the cost of going to dinner. If you could get THOSE down to $1 each so they could be used in embedded apps from clothing to toasters, you would be giving engineers, designers, and inventors a lot to work with. You'd see a lot more innovation in the business at that price point. Once powerful computing had spread into every device we use, THEN new demand for high-end processors would grow. The desktop has penetrated modern life - so it's dead - time to adjust to the embedded world.

King Canute (2, Insightful)

lugumbashi (321346) | more than 11 years ago | (#5279238)

You can no more "Forget Moore's Law" than you can roll back history. It is driven by competition. It would be commercial suicide for AMD or Intel to decide enough was enough and declare, "there you go that ought to be fast enough for you".


In any case, the article shows a fundamental misunderstanding of the industry and its driving forces. The principal driving force is to lower costs, and this is the chief effect of Moore's law. The focus is not on building supercomputers but super-cheap computers. Of course this has the effect of lowering the cost of supercomputers as well. The anecdote from Google is a perfect example of the benefits of Moore's law, not a sign of it becoming redundant or dangerous.


Some of the biggest changes are seen in the embedded world - e.g mobile phones. Intel's vision is of putting radios on every chip.
