Slashdot: News for Nerds

Chrome 10 Beta Boosts JavaScript Speed By 64%

timothy posted more than 3 years ago | from the browser-wars-just-getting-started dept.

Chrome 169

CWmike writes "Google released the first beta of Chrome 10 on Thursday, and Computerworld found it to be 64% faster than its predecessor on Google's V8 JavaScript benchmarks. But in another JS benchmark — WebKit's widely-cited SunSpider — Chrome 10 beta was no faster than Chrome 9. Yesterday's Chrome 10 beta release was the first to feature 'Crankshaft,' a new optimization technology. Google engineers have previously explained why SunSpider scores for a Crankshaft-equipped Chrome show little, if any, improvement over other browsers. 'The idea [in Crankshaft] is to heavily optimize code that is frequently executed and not waste time optimizing code that is not,' said the engineers. 'Because of this, benchmarks that finish in just a few milliseconds, such as SunSpider, will show little improvement with Crankshaft. The more work an application does, the bigger the gains will be.' [Chrome 10 beta download here.]"
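
The engineers' explanation can be sketched in plain Javascript. The function below is invented for illustration, and the iteration counts are arbitrary rather than V8's actual thresholds: an adaptive JIT like Crankshaft only spends optimization effort on code it has seen run hot, so a benchmark that finishes in a few milliseconds never triggers it.

```javascript
// Sketch (illustrative, not V8 internals): why a millisecond-scale benchmark
// can't show an adaptive JIT's gains. The engine runs code in a cheap baseline
// mode first, and only optimizes a function once it has run hot.
function sumOfSquares(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i * i;
  return total;
}

// SunSpider-style: one short run, over almost instantly -- the loop never
// gets hot, so the optimizing compiler never kicks in.
const cold = sumOfSquares(1000);

// Longer workload: after thousands of iterations an engine like V8 will have
// recompiled the hot loop, so later iterations run on optimized code.
let hot = 0;
for (let run = 0; run < 10000; run++) hot = sumOfSquares(1000);

// The answer is identical either way; only the time spent differs.
if (cold !== hot) throw new Error("results should match");
```

This is the summary's point in miniature: "the more work an application does, the bigger the gains will be," because only the long-running case ever reaches the optimized tier.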

169 comments

64%? (0)

Anonymous Coward | more than 3 years ago | (#35251710)

64% ought to be enough for anyone.

Re:64%? (-1)

Anonymous Coward | more than 3 years ago | (#35251720)

64% bigger wasn't enough for your mom.

Re:64%? (-1)

Anonymous Coward | more than 3 years ago | (#35251756)

64% bigger of zero is still zero.

Re:64%? (1, Insightful)

bonch (38532) | more than 3 years ago | (#35252546)

64% speed boost? Text-based AJAX content is going to be even more imperceptibly faster, wow!

All this optimization work on a subpar language like JavaScript just to display text that much faster. It's admirable but ultimately not as important as people make it out to be.

Re:64%? (1)

clang_jangle (975789) | more than 3 years ago | (#35252734)

Mr Slowski, I presume?

What About /. Performance? (2, Interesting)

WrongSizeGlass (838941) | more than 3 years ago | (#35251740)

Will it make the new /. work any faster ... or better ... or anything?

Re:What About /. Performance? (1)

davester666 (731373) | more than 3 years ago | (#35251778)

No. Nothing can make a difference with the new /.

Re:What About /. Performance? (1)

hedwards (940851) | more than 3 years ago | (#35252026)

I've noticed that. /. is pretty much the only site I have problems with in terms of performance. It's gotten a bit better since I disabled most of my addons, but still, as speedy as the latest Firefox beta is, it's still pretty sluggish on here.

Re:What About /. Performance? (1)

rsborg (111459) | more than 3 years ago | (#35252104)

I've noticed that. /. is pretty much the only site I have problems with in terms of performance.

Are you serious?
Have you tried loading any blog post that runs Disqus or IntenseDebate with tons of comments? Any serious user-moderated forum can be very slow. Slashdot has plenty of company here.

Re:What About /. Performance? (1)

hedwards (940851) | more than 3 years ago | (#35252200)

Yes, I'm being serious. Previous to the change I wasn't having any trouble with slashdot, now I am. This is easily the most demanding site I visit. Probably the only other site I can recall having a lot of trouble with is my broker, and that hasn't been a problem in quite a while.

Re:What About /. Performance? (0)

Anonymous Coward | more than 3 years ago | (#35252276)

I'm using Chrome on Linux and Slashdot works with no performance problems. Bugs are another matter: when commenting a second time, it asks for a captcha but you can't get past the "click submit to submit comment" step (or something along those lines).

Sounds to me like you're either using a bad browser, really slow hardware, or maybe some Intel graphics chip since browsers these days use GPU rendering.

Re:What About /. Performance? (2)

muindaur (925372) | more than 3 years ago | (#35252686)

A website like /. should NOT require a GPU to display. I come here for text-based news and comments, not a "let's be pretty/fancy" kind of thing. Now, I don't have any issues in FF3.6 on XP Home on my old P4HT and 6800, though in the last version of /. I had to force the old style of comments because the new one was so sluggish. The point is that /. is not a site you should need a GPU for, period, and it's stupid for any website that isn't running a video game in the browser to require one.

Re:What About /. Performance? (1)

Firehed (942385) | more than 3 years ago | (#35252268)

The newest version of /. is (in my experience) at least an order of magnitude faster than the old version, at least as my logged-in settings cause the page to render. I'd like to take a small amount of credit for having spent four seconds in Chrome's JS profiler and then reporting an egregious flaw in their JS that caused it to run at 100% CPU continually (in effect: c = function(){someSlowThing();setTimeout(c,0);}; c(); ), but even with that original problem it was still faster for me than the old version. Or, at least, it didn't cause my browser to deadlock when first loading the page.

YMMV. My experiences were in Chrome 9 (Mac), but they were similar in Firefox and Safari.

Still, compared to a simple blog or even a pretty complicated web app, the sheer size of the DOM on /. thanks to hundreds of nested comments is always going to slow things down. I'm sure there are plenty of optimizations they could make (not least of which is finally dropping the fluid-width layout, which makes zooming and many other operations cause a document reflow, incredibly expensive on a giant page), but part of it is just the nature of the beast. Look at Twitter: it gets slower the farther down you scroll, since the DOM gets bigger and bigger thanks to their funky bottomless-page scripts.
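
The flaw described above is a self-rescheduling zero-delay timer. A minimal sketch of the anti-pattern and a politer alternative (the function names here are invented for illustration):

```javascript
// The anti-pattern: a callback that reschedules itself with zero delay.
// With no delay, the event loop re-enters the callback as fast as it can,
// pinning one core at 100% even when there is nothing to do.
function badLoop(work) {
  work();
  setTimeout(() => badLoop(work), 0); // fires again almost immediately
}

// A gentler version: reschedule with a real interval so the event loop can
// go idle between ticks, and hand back a way to stop.
function politeLoop(work, intervalMs) {
  let stopped = false;
  function tick() {
    if (stopped) return;
    work();
    setTimeout(tick, intervalMs);
  }
  setTimeout(tick, intervalMs);
  return () => { stopped = true; }; // cancel function
}

// Usage: poll roughly 60 times a second instead of spinning.
const stop = politeLoop(() => { /* poll something */ }, 16);
stop();
```

badLoop is defined but deliberately never called here; invoking it would peg the CPU exactly as the parent describes.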

Re:What About /. Performance? (0)

Anonymous Coward | more than 3 years ago | (#35252384)

My netbook screen with 800 horizontal pixels says please keep the fluid-width layout. How often do you need to zoom anyway?

Just embed LLVM, for crying out loud. (0)

Anonymous Coward | more than 3 years ago | (#35251762)

Mozilla, Apple, Opera and Google need to cut it with all of this JavaScript bullshit, and instead just need to agree to embed LLVM in the browser. Then we can target a real platform using not just JavaScript, but a whole host of other languages. Unlike JavaScript, LLVM has been properly designed from the ground up just for such uses! It's highly optimized, and it's getting better every day.

They all need to agree to do this, however, in order to have enough market share to force Microsoft into adopting similar technology. Right now Firefox and Chrome alone hold 40% to 50% of the browser market. LLVM embedded within them could truly help encourage their adoption, to bump them up into the 75% to 80% market share range.

Re:Just embed LLVM, for crying out loud. (5, Informative)

tapo (855172) | more than 3 years ago | (#35251806)

Google is doing this with Native Client [wikipedia.org] . It allows a browser to run code compiled for x86, ARM, or LLVM bytecode in a sandbox. It's currently in beta in Chrome 10 (you can enable and try it out by going to about:flags), and apparently available for other browsers as well under the BSD license.

Re:Just embed LLVM, for crying out loud. (1)

BZ (40346) | more than 3 years ago | (#35251862)

LLVM bitcode is not target-architecture-independent, last I checked. For example, you can't use the same LLVM bitcode on both big-endian and little-endian hardware, or use the same bitcode to produce both x86-32 and x86-64 binaries.

Or has that changed? I admit I haven't looked in a year or so.

Re:Just embed LLVM, for crying out loud. (0)

Anonymous Coward | more than 3 years ago | (#35252820)

Well, none of the major browsers are architecture-independent as of a few months ago. All of them use JITs, Mozilla dropped TraceMonkey-only in beta 3 or so. IIRC, ARM and x86 are the only architectures that any of them are targeting.

(All that said I love Javascript)

Re:Just embed LLVM, for crying out loud. (0)

bonch (38532) | more than 3 years ago | (#35252540)

Yes, let's recreate ActiveX from the 90s because it worked out so well.

Problems with ajax (0)

Anonymous Coward | more than 3 years ago | (#35251766)

Chrome is very fast, but can't run ajax with local files. That is a new problem for developers.

Re:Problems with ajax (1)

foniksonik (573572) | more than 3 years ago | (#35251846)

Localhost anyone? Who is not running a server on their Dev machine?

Re:Problems with ajax (1)

sconeu (64226) | more than 3 years ago | (#35252052)

Me. I'm a dev, but not a web dev.

Re:Problems with ajax (2)

Wiiboy1 (1699132) | more than 3 years ago | (#35252112)

This is for security. There's a command line flag for disabling this, I'm pretty sure.

Try --allow-file-access-from-files

If that doesn't work, you could go to http://codesearch.google.com/codesearch/p?vert=chromium#OAMlx_jo-ck/src/chrome/common/chrome_switches.cc [google.com] and look for one that does what you want.

Re:Problems with ajax (1)

Firehed (942385) | more than 3 years ago | (#35252286)

Or just edit your hosts file so that you're developing against local.myfuturesite.com. Doing so may also expose other bugs you'd encounter when switching over to production (weird paths, odd webserver config settings, etc)

WTF Beta! (0)

Anonymous Coward | more than 3 years ago | (#35251784)

Okay, how daft do you have to be to encourage tons of people to download and install a beta quality item? Yes, some people will want it, but many people will blindly download this just to toy with it and then fail to understand why some things aren't fully functional yet.

You aren't actually helping developers if you are putting the betas in the hands of people unqualified to give meaningful feedback on them.

Sincerely, Chrome "Canary" User

Re:WTF Beta! (0)

Anonymous Coward | more than 3 years ago | (#35251834)

Okay, how daft do you have to be to encourage tons of people to download and install a beta quality item?

Because by all reports the beta is stable.
Beta status is the norm for most Google software; Gmail was in beta for 5 years.

Re:WTF Beta! (1)

foniksonik (573572) | more than 3 years ago | (#35251852)

This is /., right? Did I wander into Digg all of a sudden?

Re:WTF Beta! (1)

Anonymous Coward | more than 3 years ago | (#35252278)

No. But the userbases of websites like Digg have slowly eroded and taken over Slashdot.

Re:WTF Beta! (1)

outsider007 (115534) | more than 3 years ago | (#35251864)

How are /. readers not qualified to give meaningful feedback? If this was posted on TMZ I might agree with you.

Re:WTF Beta! (3, Informative)

517714 (762276) | more than 3 years ago | (#35252118)

We tend to shoot for clever or snarky instead of meaningful. ;)

Re:WTF Beta! (1)

Cinder6 (894572) | more than 3 years ago | (#35252016)

I've been running the dev channel ever since it became available, and in my experience it's more stable than Firefox. It's just a browser; I don't see anything wrong with trying out a potentially unstable version. YMMV.

Pardon my ignorance(and I don't want a holy war).. (4, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#35251790)

Historically, slashdot (and elsewhere) has seen the battle over performance between the C/C++ classicists and those who insist that Java or one of its architecturally similar cousins has, with enough work on the JVM, achieved nearly equivalent execution speed.

Does anybody know where we are with Javascript now? Traditionally, its performance has been pathetic, since it wasn't all that heavily used; but of late competition to have a better javascript implementation has been pretty intense. Is there anything fundamentally wrong with the language, that will doom it to eternal slowness, or is it on the trajectory to near-native speeds eventually?

Re:Pardon my ignorance(and I don't want a holy war (5, Informative)

BZ (40346) | more than 3 years ago | (#35251836)

Modern JS jits (tracemonkey, crankshaft) seem to be able to get to within about a factor of 10 of well-optimized (gcc -O3) C code on simple numeric stuff. That's about equivalent to C code compiled with -O0 with gcc, and actually for similar reasons if you look at the generated assembly. There's certainly headroom for them to improve more.

For more complicated workloads, the difference from C may be more or less, depending. I don't have any actual data for that sort of thing, unfortunately.

There _are_ things "wrong" with javascript that make it hard to optimize (lack of typing, very dynamic, etc). Things like http://blog.mozilla.com/rob-sayre/2010/11/17/dead-code-elimination-for-beginners/ [mozilla.com] (see the valueOf example) make it a bit of a pain to generate really fast code. But projects like https://wiki.mozilla.org/TypeInference [mozilla.org] are aiming to deal with these issues. We'll see what things look like a year from now!
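
The valueOf example from the linked post works roughly like this (reconstructed here as a sketch, not copied from the post): because any object can define valueOf with side effects, an engine cannot simply delete an arithmetic expression whose result is unused.

```javascript
// Why dead-code elimination is hard in Javascript: evaluating "x + 1" can run
// arbitrary user code, so an unused arithmetic expression isn't really dead.
let calls = 0;
const sneaky = {
  valueOf() {   // invoked whenever sneaky is coerced to a number
    calls++;
    return 42;
  }
};

function looksDead(x) {
  x + 1;        // result discarded -- but the engine must still call valueOf
  return 0;
}

looksDead(sneaky);
if (calls !== 1) throw new Error("valueOf side effect was skipped");
```

A C compiler would delete `x + 1` without a second thought; a Javascript engine first has to prove the operand can't observe the difference, which is exactly the kind of analysis projects like TypeInference are after.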

Re:Pardon my ignorance(and I don't want a holy war (1, Informative)

sydneyfong (410107) | more than 3 years ago | (#35252172)

Are you sure about this?

I don't recall gcc -O3 and -O0 having a factor of 10 difference for most tasks. And Javascript definitely isn't close to C performance, even unoptimized.

Besides, gcc -O3 can actually be somewhat slower than -O2, which is why almost nobody uses -O3 except for the Gentoo zealots.

Re:Pardon my ignorance(and I don't want a holy war (5, Informative)

BZ (40346) | more than 3 years ago | (#35252362)

I'm very sure, yes.

> I don't recall gcc -O3 and -O0 having a factor of 10
> difference for most tasks.

They don't. My comment was quite specific: the cited numbers are simple number-crunching code. The fact that -O0 stores to the stack after every numerical operation while -O3 keeps it all in registers is the source of the large performance difference as long as you don't run out of registers and such. The stack stores are also the gating factor in the code generated by Tracemonkey, at least.

> And Javascript definitely isn't close to C
> performance, even unoptimized.

For simple number-crunching code, Tracemonkey as shipping in Firefox 4 runs at the same speed as C compiled with GCC -O0, in my measurements. I'd be happy to point you to some testcases if you really want. Or do you have your own measurements that you've made that are the basis for your claim and that you'd care to share?

Note that we're talking very simple code here. Once you start getting more complicated the gap gets somewhat bigger.

As an example of the latter, see https://github.com/chadaustin/Web-Benchmarks [github.com] which has multiple implementations of the same thing, in C++ (with and without SIMD intrinsics) and JS with and without typed arrays. This is not a tiny test, but not particularly large either.

On my hardware the no-SIMD C++ compiled with -O0 gives me about 19 million vertices per second. The typed-array JS version is about 9 million vertices per second in a current Firefox 4 nightly.

For comparison, the same no-SIMD C++ at -O2 is at about 68 million vertices per second. -O3 gives about the same result as -O2 here; -O1 is closer to 66 million.

> Besides, gcc -O3 can actually be somewhat
> slower than -O2

Yes, it can, depending on cache effects, etc. For the sort of code we're talking about here it's not (and in fact -O2 and -O3 typically generate identical assembly for such testcases; see the numbers above).

One other note about JS performance today: it's heavily dependent on the browser and the exact code and what the jit does or doesn't optimize. In particular, for the testcase above V8 is about 30% faster than Spidermonkey on the regular array version but 5 times slower on the typed array version (possibly because they don't have fast paths in Crankshaft yet for typed arrays, whereas Spidermonkey has made optimizing those a priority).

Again, I suspect that things will look somewhat different here in a year. We'll see whether I'm right.
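
The typed-array distinction measured above comes down to storage: a Float32Array holds raw 32-bit floats the engine can access without boxing or type checks, while a plain Array may hold anything. A much-simplified version of the same kernel both ways (the benchmark linked above does real vertex math; this is just the shape of it):

```javascript
// Same numeric kernel over a plain Array and a Float32Array. The typed array
// gives the engine a guaranteed element type and memory layout, which is what
// fast paths like Spidermonkey's typed-array optimizations exploit.
function scaleVertices(vertices, factor) {
  for (let i = 0; i < vertices.length; i++) {
    vertices[i] *= factor;
  }
  return vertices;
}

const plain = [1, 2, 3, 4];                   // elements could be any type
const typed = new Float32Array([1, 2, 3, 4]); // always 32-bit floats

scaleVertices(plain, 2);
scaleVertices(typed, 2);

// Both produce the same values; only the representation differs.
for (let i = 0; i < 4; i++) {
  if (plain[i] !== typed[i]) throw new Error("kernels disagree");
}
```

Which version is faster depends on the engine, as the parent's V8-vs-Spidermonkey numbers show: the semantics are identical, but only one representation lets the JIT emit struct-like loads and stores.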

Re:Pardon my ignorance(and I don't want a holy war (1)

sydneyfong (410107) | more than 3 years ago | (#35252574)

Interesting. I didn't know they got Javascript to run that fast...

Admittedly I haven't been following the bleeding edge stuff on the Javascript performance front...

Re:Pardon my ignorance(and I don't want a holy war (0)

Anonymous Coward | more than 3 years ago | (#35252638)

There is no way that Javascript in the browser has anywhere near the speed of even unoptimized C for anything more than the simplest 10-line programs. I just can't see how a dynamic language interpreted by a browser can even hold a candle to native code. Even the simplest expression in Javascript can be considered equivalent to at least a library call, since it is all being done in software. It's an awkward way to put it, but Javascript is so much further from the metal than any language that compiles to native code that C-like performance can't be expected from it. That's not to say the language can't achieve satisfactory speeds for some pretty demanding tasks. As computers become more and more powerful, levels of abstraction matter less and less. In other words, it's all relative. :/

Re:Pardon my ignorance(and I don't want a holy war (2)

Alef (605149) | more than 3 years ago | (#35252992)

There _are_ things "wrong" with javascript that make it hard to optimize (lack of typing, very dynamic, etc).

To get a notion of where it is possible to get with a similarly dynamic language, take a look at how the LuaJIT benchmarks [debian.org] compare with optimized C++ and other dynamic languages. Notice it is not far behind a state-of-the-art Java VM.

Another pretty interesting aspect is this code size versus speed [debian.org] comparison.

Re:Pardon my ignorance(and I don't want a holy war (1)

Kagetsuki (1620613) | more than 3 years ago | (#35253004)

Your comparison to gcc optimization levels is apples and oranges; it's a bit misleading, and I'd be interested in hearing where you got your information. As a relative comparison I do understand it, and by rough guessing I'd say you sound close to accurate in terms of numbers for single-threaded code. Thank you for pointing out how the lack of typing and the language's dynamic nature (a dynamic "this" context!? F* you ECMA, that's terrible!) enhance the crap factor and make the language much harder to optimize. The thing is, I don't agree that just building better JIT and optimization engines will fix the problem; it's like adding more and better lubrication to a brick: it slides easier and faster, but it's still a painful and crude brick. Getting a wider range of script types universally accepted, and adding proper DOM handling libraries and runtime isolation to them, sounds like a better solution to me. If you have Ruby installed, for example, you can already write scripts with type "text/ruby" (it may require some setup in your browser); it's totally insecure, and interacting with elements on the page is non-trivial, but you can do it, and I'd take Ruby over JS if it were made to work with the DOM and had isolation from the system.

Re:Pardon my ignorance(and I don't want a holy war (3, Interesting)

NoSig (1919688) | more than 3 years ago | (#35251840)

Java isn't a dynamic language, which is the central difference that makes languages like Javascript and Python much slower than C++, and even Java, with compiler technology as it is now and for the foreseeable future. A big, still-relevant problem with Java is the long loading times you end up with when starting large applications. Javascript isn't even compiled to bytecode, so that problem would be much worse if big applications were written and run as Javascript. Javascript is getting faster all the time, but don't expect anything like C++ or even Java for general-purpose programming. Which is fine, because that isn't what Javascript is all about.

Re:Pardon my ignorance(and I don't want a holy war (5, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#35251868)

Given that browsers tend to cache website elements, for better speed when loading objects that haven't changed since last load, and given that, while people want their page now, their computer usually has a fair amount of idle time available, would you expect to see browsers implementing some sort of background optimization mechanism that chews over cached javascript during idle periods in order to reduce the amount of computationally expensive work that needs to be done should the page be reloaded? Or is Javascript not amenable to that level of preemptive processing?

Re:Pardon my ignorance(and I don't want a holy war (1)

TooMuchToDo (882796) | more than 3 years ago | (#35252306)

Did you just suggest locally "precompiling" javascript with idle client CPU cycles? (Not sarcasm; I think it's a great idea if that's the case.) Can you even "precompile" Javascript currently, and if not, why not? /disclaimer: I have a very light understanding of Javascript; I'm more of an infrastructure/networking guy. Go easy.

Re:Pardon my ignorance(and I don't want a holy war (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#35252412)

I was asking if that were practical, since I don't know much about the guts of this stuff. TFA's mention of optimizing code that runs frequently, and not optimizing rarely used code, gave me the impression that there is some sort of technique, presumably a species of JIT compilation, that is quite computationally expensive; but makes the code subjected to it run faster thereafter. NoSig's comment about load times suggested a similar tradeoff.

This struck me (on naive first inspection) as being something that would nicely complement the browser cache: if you have the javascript, and idle cycles, and the user isn't sitting and waiting for the page to display, you might as well apply the most aggressive optimization (some nuances would, of course, have to be adopted for battery-powered or thermally constrained devices, where idle cycles are a comparatively costly resource...)

I don't know enough about the subject to know if there is some reason why this idea is stupid; but if my inference from TFA, that you can trade a one-time, computationally expensive, operation for a speedup on every subsequent run of the code, is correct, it seems as though having a background cache-optimizer would be a comparatively simple extension of the work being done anyway, and would improve user experience for repeatedly loaded sites and/or libraries...

Assuming such a thing is practical, it would also be interesting to see how it would work built into a caching proxy. I imagine that such an arrangement would open an exciting new scope for bugs and subtle evil; but it would also allow resource-constrained clients to have some of their work done for them by much more capable devices.

Re:Pardon my ignorance(and I don't want a holy war (2, Informative)

Anonymous Coward | more than 3 years ago | (#35252372)

One problem is that usually websites contain Javascript from ad sites which can't be cached because they want to track every hit. Additionally, since scripts are allowed to do stuff like mess with the prototypes for built-in objects which will affect any code loaded after it, if any of the files are changed you probably have to throw away any precompiled code.

Once the page is loaded, most Javascript engines will try to optimize code that gets run frequently, which is good when you're running a rich Javascript application. It won't necessarily help initial page load times, though.

Re:Pardon my ignorance(and I don't want a holy war (0)

Anonymous Coward | more than 3 years ago | (#35252456)

This is an absolutely fantastic idea... We all tend to visit the same sites frequently. By identifying the common pieces and compiling them down during idle time, with TypeInference and similar optimizations, I think we could easily approach native speeds. Possibly using GPU hardware acceleration to assist with the parallel compiling. How about in 2-3 years, yes? Ok, time to put the pipe down!

Re:Pardon my ignorance(and I don't want a holy war (1)

NoSig (1919688) | more than 3 years ago | (#35252462)

Certainly some kind of caching of JIT output should be helpful in some way, though there are numerous issues that limit how helpful it can be. For one, it hasn't solved the startup problem for Java, and Java is much more amenable to this kind of thing than Javascript is. For another, Javascript often makes heavy use of document.write, which means Javascript dynamically writing more Javascript to be run later. So the code being run can change from one page load to another even if the code is initially the same, defeating caching of compiled code.
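
The point about Javascript generating more Javascript at load time is easy to demonstrate without a browser using the Function constructor (standing in here for document.write injecting a script tag): the code an engine would want to cache doesn't even exist until runtime.

```javascript
// Code generated at runtime: the source string could differ on every page
// load (ads, A/B tests, timestamps), so a compiled-code cache keyed on the
// outer script would miss the code that actually does the work.
function buildScaler(factorLiteral) {
  // Analogous to document.write('<script>...</script>') in a page:
  // the function body is assembled from strings at runtime.
  return new Function("x", "return x * " + factorLiteral + ";");
}

const double = buildScaler("2");
const triple = buildScaler("3");

if (double(21) !== 42) throw new Error("generated code broken");
if (triple(10) !== 30) throw new Error("generated code broken");
```

Every distinct source string handed to the engine this way is a fresh compilation unit, which is one reason caching compiled Javascript across page loads is harder than it first sounds.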

Re:Pardon my ignorance(and I don't want a holy war (1)

m50d (797211) | more than 3 years ago | (#35252724)

"cached javascript"? What, just random javascript snippets from the cache? Most javascript delays occur either on page load, or after an ajax call; once your idle all the page load stuff has finished. Pre-emptively executing the onClick()s of all the links on the page and caching the result seems like a sensible optimization, until you realise it's going to play havoc with any ajax-based site. Prefetching linked-to pages in the background is a valid optimization and could include running those pages' onload javascript, although, again, you have to worry about what that's going to do to more dynamic sites.

Re:Pardon my ignorance(and I don't want a holy war (1)

m50d (797211) | more than 3 years ago | (#35252726)

Gah. Once you're idle. Must be tired.

Re:Pardon my ignorance(and I don't want a holy war (0)

Anonymous Coward | more than 3 years ago | (#35252538)

Yeah, you would need a plugin, like flash, silverlight, webfx, or their like.

Re:Pardon my ignorance(and I don't want a holy war (2)

larry bagina (561269) | more than 3 years ago | (#35251848)

V8 compiles javascript to native code, so in that sense it is native speed. The limiting factor is interacting with the HTML DOM/CSS.

Re:Pardon my ignorance(and I don't want a holy war (1)

BZ (40346) | more than 3 years ago | (#35251922)

gcc compiles C to native code, but there's a noticeable speed difference between compiling with -O0 and -O2 for most C code.

There are plenty of things people are doing in JS nowadays where the DOM is only a limiting factor because JS implementations are 10x faster than they were 4 years ago...

Re:Pardon my ignorance(and I don't want a holy war (1)

stigmato (843667) | more than 3 years ago | (#35252356)

I find using -funroll-loops gives me the best performance when compiling.

Re:Pardon my ignorance(and I don't want a holy war (1)

BZ (40346) | more than 3 years ago | (#35252378)

It really just depends on your code and on your compiler.... and on your hardware.

My personal experience is that the same code was fastest with -Os on hardware as of 7 years ago and when compiling with GCC 4.0 (due to i-cache effects, as near as anyone could tell) but fastest with -O3 on hardware as of a year ago and when compiling with GCC 4.5....

Re:Pardon my ignorance(and I don't want a holy war (5, Informative)

TopSpin (753) | more than 3 years ago | (#35252010)

Does anybody know where we are with Javascript now?

There is always the The Computer Language Benchmarks Game [debian.org] . There you will find V8 competitive with Python, Ruby and other such languages, which is to say slower than the usual compiled suspects by about the same degree.

Re:Pardon my ignorance(and I don't want a holy war (0)

Anonymous Coward | more than 3 years ago | (#35253044)

There you will find V8 competitive with Python, Ruby and other such languages

If by competitive you mean faster by an order of magnitude.

Re:Pardon my ignorance(and I don't want a holy war (-1)

Anonymous Coward | more than 3 years ago | (#35252046)

or eat a dick

Re:Pardon my ignorance(and I don't want a holy war (4, Interesting)

thoughtsatthemoment (1687848) | more than 3 years ago | (#35252258)

There are indeed a few fundamental issues with Javascript that make it both useful for coding and at the same time hopeless to replace something like C.

In Javascript, accessing the property of an object requires a lookup, plus some checking to make sure things exist. Compared to accessing a field in a C struct, that's a lot of overhead (AFAIK, Google does do heavy optimization in this area). The reason for doing that is safety and staying dynamic.

In a large application, ultimately performance comes from memory management. The best way is using memory and resource pooling, fine tuned by the programmer. I doubt javascript can be efficiently used this way. I don't think javascript can be used to code Word or a browser (I mean the browser itself) any time soon.

Multithreading is also an issue. There is not really anything wrong with the language. It's more of an implementation issue.
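
The property-lookup overhead described above can be seen in miniature: a C struct field access is a fixed offset known at compile time, while a Javascript property read must cope with properties appearing, disappearing, or living on the prototype chain. A small sketch:

```javascript
// A Javascript property read has to handle all of these cases at runtime,
// where a C struct access is a single fixed-offset load:
const a = { x: 1 };
const b = { x: 2, y: 3 };          // different shape than a
const c = Object.create({ x: 4 }); // x lives on the prototype, not on c

function readX(obj) {
  return obj.x; // must check own properties, then walk the prototype chain
}

const total = readX(a) + readX(b) + readX(c);
if (total !== 7) throw new Error("lookup sketch broken");

// Shapes can even change after the fact, so nothing can be assumed for good:
delete b.x;
if (readX(b) !== undefined) throw new Error("expected undefined after delete");
```

Engines claw back much of this cost with hidden classes and inline caches: a call site that only ever sees one object shape gets fast, struct-like access, which is also why mixing shapes at a single call site tends to be slower.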

Re:Pardon my ignorance(and I don't want a holy war (0)

Anonymous Coward | more than 3 years ago | (#35252862)

I just have a hard time accepting dynamic languages as a foundation for any significant software project. I'm not saying it's not possible, it's just a nightmare. I much prefer static languages which can be checked by the compiler before run time; plus refactoring is so much easier. All this javascript / dynamic language stuff eventually turns into a big mess, in my opinion.

Chrome 10? I'm using Chrome 11!!! (4, Funny)

Jackie_Chan_Fan (730745) | more than 3 years ago | (#35251794)

Take that! This Chrome goes to EeeeeLeven....

Re:Chrome 10? I'm using Chrome 11!!! (1)

EricX2 (670266) | more than 3 years ago | (#35251860)

Same here. Does that mean beta is finally as fast as the dev version or did they do something in beta 10 that wasn't in dev 10?

Chrome 10? (1)

Seumas (6865) | more than 3 years ago | (#35251874)

Weren't we just on Chrome 1.0 like . . . 18 months ago?

Re:Chrome 10? (1)

Jackie_Chan_Fan (730745) | more than 3 years ago | (#35251998)

hahah that is funny and so true.

Re:Chrome 10? (2)

newDzerzhinsky (1806046) | more than 3 years ago | (#35252050)

Weren't we just on Chrome 1.0 like . . . 18 months ago?

Well, no. Being pedantic, I think that 18 months ago we were actually on Chrome 2.
However, I presume the accuracy of versions wasn't really your point and it was more about the rapid releases.
You could argue whether the fast increase in version numbering is good or bad, but as far as I am concerned, it's great to see a big company pushing out new releases to a major product as fast as they are....

Re:Chrome 10? (1)

TopSpin (753) | more than 3 years ago | (#35252140)

big company pushing out new releases to a major product as fast as they are

Yeah. They're so great they've caught up with and passed IE (still at v9) in only three years.

Re:Chrome 10? (1)

newDzerzhinsky (1806046) | more than 3 years ago | (#35252212)

big company pushing out new releases to a major product as fast as they are

Yeah. They're so great they've caught up with and passed IE (still at v9) in only three years.

I'm not sure if that was a sarcastic reply or not...I suspect it might have been.
However, if it was, I already said that I wasn't sure about the version numbering that they were using.
I am more interested in them proving that you CAN make regular releases and make it work. It's much easier to sell an "agile" development program to people when you have a good example from a well known company.

Re:Chrome 10? (1)

/dev/trash (182850) | more than 3 years ago | (#35252332)

Hell even Gentoo is up to date with Chrome!

Re:Chrome 10? (0)

Anonymous Coward | more than 3 years ago | (#35252054)

Srsly. Google released 0.2 just over two years ago and they'll be releasing 11.0 in the coming months. That's a little ridiculous.

Re:Chrome 10? (2)

Surt (22457) | more than 3 years ago | (#35252158)

These are all minor versions. They just omit the 1 dot.

so... (0)

Charliemopps (1157495) | more than 3 years ago | (#35251886)

Wait, a company says their thingy is 64% faster! Then other people test it and say no it's not... then the company says "You have to test it OUR way!" Is the next step that Google specifically engineers their code just to run the benchmarks themselves faster with no real improvement anywhere else? Sound familiar? (ATI/Nvidia)

Re:so... (4, Insightful)

Anonymous Coward | more than 3 years ago | (#35251968)

1) Google didn't say it, Computerworld did.
2) Existing benchmarks like SunSpider are not necessarily good indicators of the performance of all real web pages. For small pages with little JS it makes no difference whether the engine is fast or not - all you care about is startup latency. The large AJAX pages we're seeing these days hit different bottlenecks, so you need different benchmarks to emulate that workload. The apparent achievement of Crankshaft is to improve the performance of long-running JS without increasing the startup latency of short-lived pages. Well done to Chrome for looking at real-world performance instead of worrying about who has the fastest SunSpider numbers.
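The hot/cold distinction described here can be sketched in plain JavaScript. This is only an illustration of the call-pattern difference, not Chrome's actual heuristics; the engine, not the code, decides what gets recompiled.

```javascript
// Illustrative sketch only: a tiered JIT such as Crankshaft profiles running
// code and recompiles functions it observes to be "hot". The function below
// is identical in both workloads; only the call pattern differs.
function sum(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

// SunSpider-style workload: one short call. The benchmark finishes before
// the profiler ever marks sum() as hot, so the optimizer never kicks in.
const cold = sum(1000);

// Long-running workload: the same function called repeatedly gets flagged
// as hot and recompiled, so per-call cost drops after warm-up.
let hot = 0;
for (let run = 0; run < 100000; run++) hot = sum(1000);

console.log(cold, hot); // both 499500 -- the difference is time, not result
```

The point of the sketch: both workloads compute the same answer, so a correctness-only comparison shows nothing; only the long-running one gives the optimizer enough samples to pay for itself.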

Re:so... (1)

BZ (40346) | more than 3 years ago | (#35252388)

For what it's worth, V8 is also not a good indicator of the performance of real web pages.

In fact, no such good indicator exists yet even for today's pages, much less tomorrow's (which is what you _really_ want out of a benchmark: driving improvement where it will help the most with the things you plan to do once the improvement has happened).

Re:so... (3, Insightful)

xantonin (1973196) | more than 3 years ago | (#35252042)

Wait, a company says their thingy is 64% faster! Then other people test it and say no it's not... then the company says "You have to test it OUR way!" Is the next step that Google specifically engineers their code just to run the benchmarks themselves faster with no real improvement anywhere else? Sound familiar? (ATI/Nvidia)

You read that backwards. Chrome 10 made no difference over Chrome 9 in benchmarks, but ComputerWorld said it was 64% faster. Google stated that Chrome 10 was more optimized for real code, not benchmarks. Geez man, I didn't even RTFA. I got all that from the summary. Have we gotten so lazy we're not even reading summaries?

Re:so... (1)

ynp7 (1786468) | more than 3 years ago | (#35252096)

Have we gotten so lazy we're not even reading summaries?

Pretty sure we stopped doing that at least 5 years ago.

Re:so... (0)

Anonymous Coward | more than 3 years ago | (#35252260)

No it was just you and a few handfuls of other people. I'd say the majority of slashdot readers actually read the article. Not many end up in the comments anymore, since all these digg kids have shown up.

Re:so... (1)

517714 (762276) | more than 3 years ago | (#35252156)

If U RTFA 1st post is 2 hard 2 get.

Re:so... (1)

cheekyjohnson (1873388) | more than 3 years ago | (#35252228)

Reading the summaries is far too tiresome. The new fad is only reading the headline!

Re:so... (2)

Surt (22457) | more than 3 years ago | (#35252256)

Have we gotten so lazy we're not even reading summaries?

I gave up about halfway through there but I can tell you without a doubt that we have not gotten so lazy we're not evenly weighing the pros and cons of our decisions.

Re:so... (0)

Anonymous Coward | more than 3 years ago | (#35252684)

off topic:
According to a former pro gamer I know, the latest ATI drivers (10.2, I believe) give a considerable performance boost even on fairly old graphics cards (HD4760). While I can't see the big performance leaps he said he got, I do "see" (literally) an increase of a few fps in various cases.
I don't know if I should feel happy about this or feel that AMD cheated me about the real quality of their hardware. Lately, seeing how Hitman (1) used to run on a 300MHz CPU, I can hardly believe that today's games take so much processing power to run at an acceptable framerate.
I even run into framerate issues with Magicka with everything set to high, which is, by the way, a game for sale for a mere 10 euro on Steam and features 4-player co-op and a very interesting concept.
Yeah, this is kind of mixed; I like this game and I'm glad AMD fixed these drivers a bit.
Although the last time I looked, I wasn't able to run Runes of Magic in Wine on Linux and see anything (which worked perfectly fine (40fps) or not so well (10fps), depending on the current map, on an Nvidia 8600GT).

How many Chrome betas are there? (2)

schwit1 (797399) | more than 3 years ago | (#35251888)

v11.0.672.2 is also a beta.
http://www.filehippo.com/download_google_chrome/ [filehippo.com]

Re:How many Chrome betas are there? (3, Informative)

spinkham (56603) | more than 3 years ago | (#35251918)

11.0.672.2 is a Dev channel release. Call it "alpha" if you like. http://googlechromereleases.blogspot.com/2011/02/dev-channel-update_17.html [blogspot.com]

There are 3 release channels: Stable, Beta, and Dev.

Re:How many Chrome betas are there? (0)

Anonymous Coward | more than 3 years ago | (#35252038)

Correction: 4 channels. Stable, Beta, Dev, and Canary builds (call it alpha, bleeding edge, or nightly), although I haven't had any issues with the Canary builds yet.

Err .. this is on their own V8 benchmarks (0)

Anonymous Coward | more than 3 years ago | (#35251916)

Funny how if this was microsoft claiming a speed boost on their own benchmarks, we'd have a troll-fest here...

Re:Err .. this is on their own V8 benchmarks (1)

SwedishPenguin (1035756) | more than 3 years ago | (#35251974)

Not if there's a reasonable explanation like there is here. Only optimizing code that is repeatedly executed is common practice in any VM, and it makes sense.

Re:Err .. this is on their own V8 benchmarks (0)

Anonymous Coward | more than 3 years ago | (#35252504)

While I agree this optimization has a reasonable explanation, I disagree that a rational explanation would prevent a Slashdot troll-fest if Microsoft claimed a speed boost on their own benchmarks.

Fuck all things javascript! (0)

schnikies79 (788746) | more than 3 years ago | (#35252024)

Yes, I am aware that it is not Java. I stand by my statement.

That is all.

wow (1)

smash (1351) | more than 3 years ago | (#35252082)

... just goes to show the abysmally sub-optimal implementations of JavaScript we've been living with for the past decade or so.

Chrome was already way faster than anything else (particularly upon its first release, it was like 10x faster than IE, I thought?).

Surely some time soon we're going to stop seeing double-digit percentage improvements in performance, or were the original JavaScript implementations really THAT BAD?

Re:wow (0)

Anonymous Coward | more than 3 years ago | (#35252128)

We may see double digit gains for a while longer.

Remember that each percent gain is relative to the previous speed. 50% speed gain could be 50% of 10 second or 50% of 10 milliseconds. The percent may be somewhat misleading.

Re:wow (1)

cbhacking (979169) | more than 3 years ago | (#35252290)

10x faster than IE8, much less IE7, is not an accomplishment worth mentioning. On the other hand, 50% faster than IE9 would be very impressive indeed - the RC is effectively at the top of the speed charts (on Sunspider at least) right now.

Is startup slower? (1)

v(*_*)vvvv (233078) | more than 3 years ago | (#35252234)

Does all this JS optimization happen on loading a page?

I've noticed pages freezing up longer now, and this is my only guess as to why.

If this is the case, do these benchmarks take account of this?

Fast execution is great, but not at the price of waiting for it.

Re:Is startup slower? (2)

klingens (147173) | more than 3 years ago | (#35252346)

Most of these newfangled JavaScript engines are essentially JIT compilers, so yes, compilation takes time, which usually means the page loads slower. All those "OnLoad" handlers have to be parsed and compiled before they can run faster than they could before.
Ideally you don't notice it because your awesome new CPU is fast enough to make it a non-issue. If you didn't upgrade your PC (or mobile) last week but last decade, you might have a problem though.
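The two phases described here (compile first, then run) can be made explicit with the `Function` constructor. A hypothetical sketch; real engines interleave, cache, and lazily compile these phases, so the timings are illustrative only.

```javascript
// Sketch of the two costs a JIT engine pays for page scripts: first the
// source text is parsed and compiled, then the compiled code executes.
// The Function constructor separates the steps explicitly.
const source = 'let t = 0; for (let i = 0; i < n; i++) t += i; return t;';

const t0 = Date.now();
const compiled = new Function('n', source); // parse + compile
const t1 = Date.now();

const result = compiled(1000000);           // execute the compiled code
const t2 = Date.now();

console.log('result:', result,
            'compile ms:', t1 - t0,
            'run ms:', t2 - t1);
```

On a page with many scripts, the compile column is what the user experiences as load-time freeze, even when the run column would eventually be faster than an interpreter's.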

Re:Is startup slower? (1)

v(*_*)vvvv (233078) | more than 3 years ago | (#35252666)

Does this help?
http://headjs.com/ [headjs.com]

Like they say, loading time is crucial to the sense of speed, and with these new browsers I was really expecting the heavy JS to speed up... Instead the heavy JS sites would freeze for a while. Very annoying. Most sites are OK though. Ebay is by far the worst.

Chrome loads pages slower than FF (1)

Troll-Under-D'Bridge (1782952) | more than 3 years ago | (#35252432)

IMHE(xperience), Chrome loads pages slower than Firefox with NoScript. Here's why. FF can load partial pages better. By this, I mean FF can load pages with missing or incomplete elements better. While FF will, for example, happily show me a page that is badly formatted because the style sheet hasn't been fully loaded, all that Chrome will show me is a big blank page until it can place the elements correctly on the page. To be sure, FF will dynamically reformat the badly formatted page as the page requisites are loaded, but I can always click the "X" icon to force it to stop loading the page.

Chrome is also slow to load pages that reference elements from addresses that cannot be accessed. It would take Chrome longer to load a page containing picture X from domain xxx.xxx if I've configured my router to block all access for xxx.xxx.

Re:Chrome loads pages slower than FF (1)

afidel (530433) | more than 3 years ago | (#35252616)

Not sure about that as noscript is a bit draconian, but Chrome 9 and 10 with adblock are both faster than FF 4b10 with adblock.

Re:Chrome loads pages slower than FF (0)

Anonymous Coward | more than 3 years ago | (#35252714)

I noticed yesterday Chrome re-requests temporarily unavailable assets whereas Firefox didn't appear to do the same, which may account for the rendering pause in some cases.

Re:Chrome loads pages slower than FF (0)

Anonymous Coward | more than 3 years ago | (#35252900)

To be sure, FF will dynamically reformat the badly formatted page as the page requisites are loaded, but I can always click the "X" icon to force it to stop loading the page.

The best feature of Firefox: Escape. If I had to use a browser that didn't let me stop animated GIF/PNG I'd go crazy. I'm assuming Chrome doesn't use Escape to stop animations since it's pretty weak on amount of features.

Great.. (1)

Anonymous Coward | more than 3 years ago | (#35252758)

Javascript programmers can be 64% lazier and still get the same performance.

Get pwned faster (0)

Anonymous Coward | more than 3 years ago | (#35252848)

Javascript executed faster == you get attacked faster.

How fast is the H.264 blocking? (1)

gig (78408) | more than 3 years ago | (#35252972)

How fast will Chrome 10 block an ISO H.264 video as it tries to get from HTML to my video playback hardware? I heard they are working on getting that up to instantaneously.

thats nice (1)

McTickles (1812316) | more than 3 years ago | (#35253080)

but when will we see user friendly features like "delete my pr0n (history/cache) when I close Chrome" or "do not fail miserably at importing my firefox profile data" or "do not block any ports, i know what i am doing"...
