
Inside Mozilla's New JavaScript JIT Compiler

CmdrTaco posted more than 3 years ago | from the no-not-peanut-butter dept.


An anonymous reader writes "IonMonkey is the name of Mozilla's new JavaScript JIT compiler, which aims to enable many new optimizations in the SpiderMonkey JavaScript engine. InfoQ had a small Q&A with lead developer David Anderson about this new development, which could bring significant improvements to products that use the SpiderMonkey engine, like Firefox, Thunderbird, Adobe Acrobat, MongoDB and more. This new JIT infrastructure will feature an SSA compiler intermediate representation, which will facilitate advanced optimizations such as type specialization, function inlining, linear-scan register allocation, dead-code elimination, and loop-invariant code motion."


Javascript Monkeys (2)

TaoPhoenix (980487) | more than 3 years ago | (#35999590)

What's the difference between IonMonkey and JaegerMonkey (which I thought was the latest engine)?

Re:Javascript Monkeys (0)

Anonymous Coward | more than 3 years ago | (#35999680)

Good question - it's a shame there's no way to quantitatively determine the relationship between two releases of the same product. Maybe they could use numbers?

Re:Javascript Monkeys (2)

sribe (304414) | more than 3 years ago | (#35999708)

Good question - it's a shame there's no way to quantitatively determine the relationship between two releases of the same product. Maybe they could use numbers?

What an excellent, useful, non-obvious invention! Have you considered applying for a patent?

Re:Javascript Monkeys (1)

Anonymous Coward | more than 3 years ago | (#35999742)

I'm thinking it must be silly o'clock in the United States.

Re:Javascript Monkeys (3, Informative)

xouumalperxe (815707) | more than 3 years ago | (#36000192)

It's a good question, but version numbers would really imply the wrong thing here. SpiderMonkey is the "root" Javascript implementation. TraceMonkey adds trace trees to SpiderMonkey (which apparently means it JIT compiles a type-specialised version of a function when it detects that the function is "hot" -- executes very often). JaegerMonkey uses a more traditional "JIT compile everything" approach, but then apparently also keeps the tracing feature in the backend to further optimise hot paths. From skimming TFA, IonMonkey adds to this a first pass that translates all the code into an intermediate representation that makes further optimisations easier. So in reality, all the FooMonkeys seem, to me, to be closer to large-scale plugins for SpiderMonkey than new engine versions per se.
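
As a rough illustration of the type-specialisation idea (a hypothetical sketch of mine; the function, names, and values are assumptions, not something from TFA):

// Hypothetical hot function; the names and values are made up for illustration.
function sumPoints(points) {
  var total = 0;
  for (var i = 0; i < points.length; i++) {
    // Once this loop runs hot, a tracing JIT can observe that points[i].x and
    // points[i].y have always been numbers, emit machine code specialised to
    // unboxed doubles, and guard it so execution falls back to the generic
    // path if a non-number ever shows up here.
    total += points[i].x + points[i].y;
  }
  return total;
}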

Re:Javascript Monkeys (2)

Millennium (2451) | more than 3 years ago | (#36000616)

This. It seems to be essentially a pipeline: the old code doesn't get thrown out, just added on. In essence, SpiderMonkey's path to awesome started out with the beginning and the end (TraceMonkey), and now they're going back to fill out the middle (JaegerMonkey and now IonMonkey).

Firefox 3.0 (and lower): SpiderMonkey (JavaScript interpretation)
Firefox 3.5: SpiderMonkey -> TraceMonkey (awesome JIT for hot functions only)
Firefox 4.0: SpiderMonkey -> JaegerMonkey -> TraceMonkey (decent JIT for everything, awesome JIT for hot functions)
Firefox ??? (I can't imagine this will be ready for FF5; maybe FF6?): SpiderMonkey -> IonMonkey -> JaegerMonkey -> TraceMonkey (better IR leading into the JIT, making it even better for everything, but still with awesome JIT for hot functions)

Re:Javascript Monkeys (2)

TheRaven64 (641858) | more than 3 years ago | (#36001796)

TraceMonkey adds trace trees to SpiderMonkey (which apparently means it JIT compiles a type-specialised version of a function when it detects that the function is "hot" -- executes very often)

Not quite. Traces are sequences of instructions without interior branches. When you write some code, you split it up into functions. A trace flattens this structure - you can think of inlining as a very special case of trace-based optimisation. A trace has a single entry point and (potentially) multiple exit points.

Traces are very useful for JavaScript code, because it tends to be very branch heavy. You can identify a hot path, and then compile a linear sequence of instructions where the branches all take you out of that sequence and are all hinted to be not-taken branches. This means that the common path will contain no instruction cache misses and no mispredicted branches, which can make it very fast.

This kind of optimisation is only possible with profiling information, which is why you see it more commonly in JIT compilers, where profiling information is always available.
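
As a rough sketch of that (my own example, not from the parent): in the loop below, a tracer that sees the else branch taken almost every time can record it as one linear sequence, with the rarely-taken branch compiled as a guard that exits the trace.

function clampAndDouble(values, limit) {
  for (var i = 0; i < values.length; i++) {
    if (values[i] > limit) {        // rare: becomes a guarded side exit out of the trace
      values[i] = limit;
    } else {
      values[i] = values[i] * 2;    // hot path: part of the straight-line trace body
    }
  }
  return values;
}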

Re:Javascript Monkeys (3, Funny)

DontBlameCanada (1325547) | more than 3 years ago | (#35999732)

-JaegerMonkey was dreamt up during a night spent consuming JaegerMiester shooters.
-IronMonkey was designed the morning after, behind iron bars in the slammer, following the JaegerMonkey "JaegerMiester" release party.

Re:Javascript Monkeys (0)

Anonymous Coward | more than 3 years ago | (#35999822)

Those are some impressive reading skills you got there....

Re:Javascript Monkeys (2)

tigre (178245) | more than 3 years ago | (#35999922)

He was still hung over from the JaegerMonkey party.

Re:Javascript Monkeys (1)

Zwets (645911) | more than 3 years ago | (#36008098)

Yeah, it's funny. It also contains a pet peeve of mine: don't apply the i/e rule of thumb to proper names! If you're not sure about the correct order, a quick google will help. Jaegermeister, Einstein, Heinlein, etc. Helpful Wikipedia reference [wikipedia.org] :-)

Re:Javascript Monkeys (1)

_0xd0ad (1974778) | more than 3 years ago | (#36010234)

Actually, I'd simplify that to "if the word sounds German".

Re:Javascript Monkeys (4, Informative)

BZ (40346) | more than 3 years ago | (#35999798)

This part is one major difference: "Clean, textbook IR so optimization passes can be separated and pipelined with well-known algorithms."

The current JM pretty much goes directly from SpiderMonkey bytecode to assembly, which means any optimizations that happen in the process are ad hoc (and typically don't happen at all; even basic things like CSE aren't done in JM right now).

Re:Javascript Monkeys (1)

Trepidity (597) | more than 3 years ago | (#36000976)

Do textbook IR optimizations make a big difference with JS code, though? My impression is that the big gains in JS performance don't have much to do with traditional C-compiler optimizations like loop optimization, and have a lot more to do with optimizations derived from the compiling-dynamic-languages literature, like type specialization and class inference.

V8 also goes pretty directly to asm, for example, and gets some big wins from techniques developed in Smalltalk compilers, like compiling code that assumes a static class in the common case, and patches the asm if it finds that the class has changed dynamically. Those kinds of things can get you orders of magnitude wins, while a typical C-compiler pass is in the noise by comparison.
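
A hedged sketch of that Smalltalk-style trick (my own example, not V8's actual generated code): the engine compiles the property accesses below assuming every p has the same hidden class, so they become fixed-offset loads, and it only patches or deoptimises when that assumption breaks.

function Point(x, y) { this.x = x; this.y = y; }

function norm2(p) {
  // Compiled under the assumption that p always has Point's hidden class;
  // p.x and p.y become loads at fixed offsets instead of dictionary lookups.
  return p.x * p.x + p.y * p.y;
}

norm2(new Point(3, 4));   // monomorphic call site: the assumption holds
norm2({ y: 4, x: 3 });    // object with a different hidden class: the compiled
                          // code's check fails and it must patch or fall back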

Re:Javascript Monkeys (4, Informative)

BZ (40346) | more than 3 years ago | (#36001386)

The big gains from JS interpreters to most of the current crop of JS JIT compilers (excluding Crankshaft and Tracemonkey, and possibly excluding Carakan's second-pass compiler) come from class inference, type specialization, ICs, etc. That work is already done. The result is about 30x slower than C code (gcc -O2, say) on numeric manipulation.

To get better performance than that, you need to do more complicated optimizations. For example, Crankshaft does loop-invariant code motion, Crankshaft and Tracemonkey both do common subexpression elimination, both do more complicated register allocation than the baseline jits, both do inlining, both do dead code elimination. For a lot of code the difference is not that big because there are big enough bottlenecks elsewhere. For some types of code (e.g. numeric manipulation) it's a factor of 2-3 difference easily, getting into the range of gcc -O0. And that's without doing the really interesting things like range analysis (so you can eliminate overflow guards), flow analysis so you know when you _really_ need to sync the stack instead of syncing after every arithmetic operation, and so forth.

For code that wants to do image or audio manipulation on the web, the textbook IR optimizations make a gigantic difference. That assumes that you already have the dynamic-language problems of property lookup and whatnot solved via the techniques Smalltalk and Self pioneered.

One end goal here is pretty explicitly to be able to match C or Java in performance on numeric code, at least in Mozilla's case.
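
A small sketch of the kind of numeric code where those passes pay off (my example, assuming the array holds only numbers): the scale * scale term is loop-invariant, and range analysis on i is what would let an engine drop the overflow guards it otherwise emits.

function scaleRow(row, scale) {
  for (var i = 0; i < row.length; i++) {
    // scale * scale never changes inside the loop, so loop-invariant code
    // motion can hoist it out; range analysis can prove i stays a small
    // integer, so the per-iteration overflow guard on i++ can go away.
    row[i] = row[i] * (scale * scale);
  }
  return row;
}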

Re:Javascript Monkeys (1)

Trepidity (597) | more than 3 years ago | (#36001752)

I was under the impression that the IR optimizations are mostly what makes the difference between gcc -O0 and gcc -O2. Isn't that a minor speed gap compared to the still-existing gap between JS and C that doesn't have all these IR optimizations enabled? I think most people would be overjoyed if JS had performance like C code compiled with gcc -O0, so doesn't that point to a different kind of optimizations to target? Or is the argument that bridging the remaining performance gap with unoptimized C is no longer the low-hanging fruit, and adding in these -O2 kinds of optimizations is where the immediate speedups are likely to come from, despite the remaining performance gap with -O0?

I guess to me the -O0/O2 gap seems like a minor speed gap compared to the JS/C gap, so it seems counterintuitive that adding in -O2-style optimizations is where the big wins lie for JS.

Re:Javascript Monkeys (1)

BZ (40346) | more than 3 years ago | (#36002068)

> I was under the impression that the IR optimizations
> are mostly what makes the difference between gcc
> -O0 and gcc -O2,

Depends on your code, but yes.

> isn't that a minor speed gap compared to the
> still-existing gap between JS and C that doesn't
> have all these IR optimizations enabled?

No.

> I think most people would be overjoyed if JS had
> performance like C code compiled with gcc -O0

On numeric code, Tracemonkey and Crankshaft have about the same performance as gcc -O0 in my measurements.

> I guess to me the -O0/O2 gap seems like a minor
> speed gap compared to the JS/C gap

For numeric code, the gap between -O0 and -O2 is a factor of 10. It's much larger than the gap between -O0 and current JS JITs, which is somewhere between "small" and "nonexistent".

Re:Javascript Monkeys (1)

Trepidity (597) | more than 3 years ago | (#36003118)

Hmm, interesting. It's been about a year since I've done any benchmarking of JS stuff, but at the time my attempts at some numerical algorithms in JavaScript were on the order of 10x-1000x slower than the unoptimized C equivalent. Have JS JITs really improved to the point where a typical double-nested loop over a 2D array doing some standard image-processing kernel will now run at the same speed as gcc -O0? That certainly wasn't the performance I was getting!
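
For reference, the kind of kernel being described might look something like this (a generic sketch of mine, not the parent's actual benchmark): a double-nested loop over a flat width*height array.

function brighten(pixels, width, height, delta) {
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var i = y * width + x;                        // 2D index into the flat array
      pixels[i] = Math.min(255, pixels[i] + delta); // trivial per-pixel operation
    }
  }
  return pixels;
}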

Re:Javascript Monkeys (1)

BZ (40346) | more than 3 years ago | (#36003456)

It really depends on the code. In particular, in both Tracemonkey and Crankshaft it's quite possible to end up with code that can't be compiled with the optimizing compiler and then falls back to a less-optimizing JIT (JM and the V8 baseline compiler, respectively).

If you have an example of the sort of code we're talking about here, I'd be happy to see what numbers for it look like.

Re:Javascript Monkeys (1)

Unequivocal (155957) | more than 3 years ago | (#36004314)

Yeah - V8 is really unbelievable for some applications - so much faster (10x) than any of the other JS interpreters (or should we call them compilers at this point?). In other cases it's not really that much, if any, better, but some of the shit those google JS dudes are doing is pretty impressive.

As much as I resent Google sometimes, in a lot of cases they really do have the smartest people in the room.

Re:Javascript Monkeys (0)

Anonymous Coward | more than 3 years ago | (#35999832)

JaegerMonkey has really great hats!

Re:Javascript Monkeys (0)

Anonymous Coward | more than 3 years ago | (#35999876)

Ves, Iz dink so.

Re:Javascript Monkeys (1)

Millennium (2451) | more than 3 years ago | (#36005640)

IonMonkey compiles from SpiderMonkey's interpretation to an intermediate representation.

JaegerMonkey currently compiles from SpiderMonkey's interpretation to bytecode, which then gets passed to TraceMonkey. It will be changed to compile from IonMonkey's intermediate representation to bytecode instead, though it will still pass that to TraceMonkey.

Re:Javascript Monkeys (1)

kmoser (1469707) | more than 3 years ago | (#36007294)

It's monkeys all the way down.

Re:Javascript Monkeys (0)

Anonymous Coward | more than 3 years ago | (#36016248)

Both are primates -- JaegerMonkey belongs to a cercopithecoid species while IonMonkey belongs to a platyrrhine species.

LLVM (3, Interesting)

nhaehnle (1844580) | more than 3 years ago | (#36000046)

I'd be interested to hear why the Mozilla developers don't use an existing compiler framework like LLVM, which already implements many advanced optimization techniques. I am aware that JavaScript-specific problems need to be dealt with anyway, but it seems like they could save themselves a lot of hassle in things like register allocation etc. Those are less interesting problems that existing tools like LLVM already handle quite well, so why not use them?

Re:LLVM (5, Informative)

BZ (40346) | more than 3 years ago | (#36000080)

http://blog.mozilla.com/dmandelin/2011/04/22/mozilla-javascript-2011/ [mozilla.com] has some discussion about LLVM in the comments. The short summary is that the LLVM register allocator, for example, is very slow; when doing ahead-of-time compilation you don't care much, but for a JIT that could really hurt. There are various other similar issues with LLVM as it stands.

Re:LLVM (1)

SanityInAnarchy (655584) | more than 3 years ago | (#36000170)

So why is the solution to roll your own, rather than fix LLVM or develop a more direct competitor to it? All of the issues mentioned so far seem to be with the current implementation, not with the fundamental idea.

Re:LLVM (1)

BZ (40346) | more than 3 years ago | (#36000244)

No idea; you'll have to ask someone familiar with both LLVM and the needs of a JIT for that....

Re:LLVM (3, Informative)

lucian1900 (1698922) | more than 3 years ago | (#36000342)

Because it's easier. All the projects that tried to write a good JIT with LLVM came to the same conclusion: LLVM sucks for JITs. That's both because of implementation issues and design choices. LLVM is a huge, complex C++ program. No one's going to fix it. Nanojit and the register allocators floating around are competitors.

Re:LLVM (2)

mad.frog (525085) | more than 3 years ago | (#36000708)

A guy did in fact transplant LLVM in place of Nanojit (in Tamarin rather than SpiderMonkey, but close enough):

http://www.masonchang.com/blog/2011/4/21/tamarin-on-llvm-more-numbers.html [masonchang.com]

And found that LLVM didn't really produce an overall win for this sort of code generation. LLVM is nice for ahead-of-time-compilation, but isn't a good fit for just-in-time.

Re:LLVM (1)

dzfoo (772245) | more than 3 years ago | (#36000542)

Isn't "rolling your own" the same as "developing a more direct competitor to it"?

Re:LLVM (1)

jesser (77961) | more than 3 years ago | (#36004654)

"Developing a competitor to LLVM" would imply creating something generic enough to be a backend for many programming languages. Maybe something geared toward dynamic rather than static languages, and tuned better fast compilation, but still comparable in scope to LLVM.

Re:LLVM (1)

SanityInAnarchy (655584) | more than 3 years ago | (#36025530)

In other words, something like Parrot.

I mean, I get that Parrot itself probably isn't a good choice, but I still wonder why everyone is so busy reinventing wheels independently and in such a language-specific manner.

Re:LLVM (0)

Anonymous Coward | more than 3 years ago | (#36000698)

It would amount to a pretty extensive overhaul of the entire architecture of LLVM. It's heavily geared towards ahead-of-time compilation.

Also, the memory footprint of LLVM precludes its use for embedded/mobile targets, which is something the Mozilla people are clearly after.

Re:LLVM (1)

kripkenstein (913150) | more than 3 years ago | (#36003912)

So why is the solution to roll your own, rather than fix LLVM or develop a more direct competitor to it? All of the issues mentioned so far seem to be with the current implementation, not with the fundamental idea.

The goals are somewhat different. LLVM is great for optimizing statically typed code, so it will do e.g. licm that takes into account that kind of code. The IR that would be useful for a JS engine would be higher-level, so it could do licm on higher-level constructs.

The issue is sort of similar to why dynamic languages don't run very fast on the JVM or .NET. They have very efficient JITs, but they are tuned for the level that makes sense for languages like Java and C#, not JavaScript or Python or Ruby. As a consequence, no dynamic language on those platforms comes close to modern JS engines, LuaJIT, etc. Some people seem to assume that translating Python into Java, then JITing that on the JVM will be fast - but it isn't.

So there isn't really something to be 'fixed' in LLVM here - it is very good at what it does. It's just different than what JS engines need.
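
To illustrate "licm on higher-level constructs" (a sketch of mine, under the assumption of a typical engine): in the loop below, the thing worth hoisting is the order.items property lookup and the hidden-class guard behind it, which is a JS-level notion that a lower-level IR doesn't know about.

function orderTotal(order) {
  var sum = 0;
  for (var i = 0; i < order.items.length; i++) {
    // A JS-aware optimizer can hoist the order.items lookup (and the shape
    // guard protecting it) out of the loop, since order doesn't change here.
    // Once lowered to generic IR, that lookup looks like an opaque call and
    // the same hoist is much harder to justify.
    sum += order.items[i].price;
  }
  return sum;
}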

Re:LLVM (1)

TheRaven64 (641858) | more than 3 years ago | (#36001972)

Sounds like they don't know how to use LLVM then. It has a small selection of different register allocators, and you can pick the fast one if you care about JIT time more than you care about execution time.

Re:LLVM (2)

Animats (122034) | more than 3 years ago | (#36000634)

I'd be interested to hear why the Mozilla developers don't use an existing compiler framework like LLVM...

LLVM is intended for more static languages than Javascript. The Unladen Swallow [wikipedia.org] project tried to use it for Python, and that was a failure.

How many JIT engines is this now? (1)

Anonymous Coward | more than 3 years ago | (#36000088)

Damn, Mozilla changes JIT engines like every year. Why should we believe this one will last any longer than all the others they have tried?

Javascript is a poorly designed language that is hard to JIT and I imagine that's why we keep seeing people trying to redo things better than the previous JIT engine. It's damn near impossible though, just micro-fractional improvements and a whole lot of wasted effort.

Meanwhile things like LuaJIT [luajit.org] offer a Javascript-like scripting environment with damn near the performance of native C applications. Pretty much all done by one guy.

Lua is what Javascript should have been. It's dynamic, simple, and fast (partly because it's so simple). The language itself is so much cleaner than Javascript which makes it easy to parse and optimize while still giving you all the power.

Re:How many JIT engines is this now? (1)

codepunk (167897) | more than 3 years ago | (#36000336)

LuaJIT is pretty darn quick, but then again node.js running on Google's V8 engine is right there with it.

Re:How many JIT engines is this now? (0)

Anonymous Coward | more than 3 years ago | (#36001862)

Yeah but V8 has the disadvantage of requiring you to write Javascript.

Actually, the speeds aren't as close as you think when doing general programming, and especially not when doing scientific computing, where LuaJIT crushes V8.

Then you have all the extras like LuaJIT's incredibly fast FFI interface which gives you direct access to C libraries without compiling a single line of code. The access is really fast too. As fast as if you had written a native interface to the VM. I have never seen any FFI interface from a scripting environment that works with this little overhead.

Re:How many JIT engines is this now? (1)

icebraining (1313345) | more than 3 years ago | (#36000408)

Mozilla doesn't change engines, they just give a different name to each version.

By the way, see Julien Danjou's Why not Lua [danjou.info].

Re:How many JIT engines is this now? (1)

BZ (40346) | more than 3 years ago | (#36000606)

Not quite. More precisely they add new engines, and give them names, then use whichever one does the job best (or is likely to do the job best).

Re:How many JIT engines is this now? (0)

Anonymous Coward | more than 3 years ago | (#36000684)

You forgot to mention that Julien Danjou is an idiot. Everything in that article is from the distorted view of someone who doesn't know what they are doing and is trying to wedge Lua into whatever pre-existing design they are used to.

Re:How many JIT engines is this now? (0)

Anonymous Coward | more than 3 years ago | (#36000958)

I second that. I glanced over the article the GP linked and as far as I can tell Julien Danjou has no clue about Lua programming at all. The whole stack thing he is complaining about makes no sense. (The World of Warcraft GUI is mainly programmed in Lua, and I never saw anyone pushing and popping around on a global stack.)

Re:How many JIT engines is this now? (1)

icebraining (1313345) | more than 3 years ago | (#36001234)

as far as I can tell Julien Danjou has no clue about Lua programming at all

Uh, he's talking about interfacing Lua with C, not Lua programming. Of course you've never seen anyone pushing or popping in WoW, unless you work for Blizzard and have access to their C++ code.

Maybe you should try reading the article, where he says "integration of Lua into a C program".

Re:How many JIT engines is this now? (1)

icebraining (1313345) | more than 3 years ago | (#36001286)

Maybe you could say why he's an idiot, instead of rambling.

From where I'm sitting, as a noob in Lua, I see a guy who has spent the past three years writing a very nice WM (Awesome [naquadah.org]), which contains a Lua library with thousands of lines, and some random Anonymous Coward who provides no arguments to support his position.

Re:How many JIT engines is this now? (0)

Anonymous Coward | more than 3 years ago | (#36003756)

It is indeed odd that someone that supposedly had been using Lua all the time for some project would be so ignorant about issues that should have been cleared up in the first couple weeks of using Lua.

For one, the reference counting problem mentioned is a bunch of bullshit. Preventing data from being garbage collected by the VM is a simple matter of ensuring that it is always reachable. Lua has a standard mechanism called the registry along with higher level functions in the API (the aux lib) to deal with it.

The argument about the difficulty of using the stack... uh, I don't know what to tell you other than that's just a fact of life if you insist on writing Lua functions directly in C against the low-level API all the time. The number of functions for which it's necessary to do this in a project should be restricted to a core set that are well-used and readily testable. Anything very complex can be done with code generators (SWIG or similar), FFI, or just written in Lua itself. I mean, this is just software engineering 101 type stuff for this sort of thing and not at all peculiar to Lua.

The "extend vs embed" point is a red herring, since Lua is good enough to serve both purposes (similar to the argument with Python... though I will argue that Lua is a superior embedding language to Python, which kinda sucks compared to "better" purpose-built alternatives), and the "extend vs embed" architectural argument is completely unrelated. You can write a whole app in Lua today, and there are "batteries included" libraries like PL and Kepler. LuaRocks isn't CPAN, I'll admit, but there are some extremely well-engineered libraries available there and from other sources.

Those are just two points from a quick overview. I'm with the GP: this guy is an idiot... pretty much astonishingly ignorant on the subject.

Re:How many JIT engines is this now? (1)

grumbel (592662) | more than 3 years ago | (#36001818)

Could you elaborate on that? His problems mirror very much those that I encountered when using Lua. That's not to say that Lua is bad -- it is certainly a very small, clean language -- but I had an easier time writing C interface code in other languages (you run into plenty of slightly different problems with them too, however).

Re:How many JIT engines is this now? (1)

BZ (40346) | more than 3 years ago | (#36000432)

> Damn, Mozilla changes JIT engines like every year

Unlike Chrome, which has now had two (more actually, but two that they publicly named) JIT engines between December 2008 and December 2010?

> Why should we believe this one will last any longer
> than all the others they have tried?

What does "last longer" mean? Mozilla is still using the very first jit they did (tracemonkey); they're just not trying to use it for code it's not well-suited for.

> just micro-fractional improvements

If a factor of 2-3 is "micro-fractional", yes.

Re:How many JIT engines is this now? (0)

Anonymous Coward | more than 3 years ago | (#36001174)

"micro-fractional improvements"? You obviously have no idea about the state of Mozilla javascript engines.

Re:How many JIT engines is this now? (2)

Millennium (2451) | more than 3 years ago | (#36001954)

Damn, Mozilla changes JIT engines like every year. Why should we believe this one will last any longer than all the others they have tried?

That's not actually what's happening, though Mozilla isn't helping matters with all the confusing names.

What's actually going on is that Mozilla is essentially implementing different parts of a longer pipeline. Even as recently as FF4, SpiderMonkey is still present at the front of that pipeline, and TraceMonkey is still present at the end (actually nanoJIT is at the very end, but it's not named after a monkey so we won't count it here). JaegerMonkey, IonMonkey, and all the other monkeys go in the middle.

Re:How many JIT engines is this now? (0)

Anonymous Coward | more than 3 years ago | (#36003752)

Even as recently as FF4, SpiderMonkey is still present at the front of that pipeline, and TraceMonkey is still present at the end (actually nanoJIT is at the very end, but it's not named after a monkey so we won't count it here). JaegerMonkey, IonMonkey, and all the other monkeys go in the middle.

That's actually not true. Mozilla has two different JITs and one interpreter (not counting Tamarin, which doesn't matter here). Initially most code is interpreted, and then compiled by JägerMonkey. JägerMonkey uses an assembler backend originally developed by Apple. On the other hand, TraceMonkey is used on code that benefits from strong type specialization. It uses NanoJit as its backend.

types (-1)

hey (83763) | more than 3 years ago | (#36000144)

Maybe this is obvious. In the interview he mentions that optimization is hard because the types of variables aren't known. For me (and probably most programmers) the variable names are a dead giveaway for the type. A variable called "i" is going to be an int and "str" is a string. Likewise with iName and strName. I wonder if they have run some analysis on a large body of code to see if they can make a good guess at the variable type from its name.

Re:types (1)

e70838 (976799) | more than 3 years ago | (#36000226)

You would have loved Fortran.

Re:types (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36000248)

I have to imagine that there are some sneaky things you could do if allowed to lie to the compiler about what variable types you are using... Obviously, you could check; but guessing and checking takes longer than just checking.

Re:types (1)

RebelWebmaster (628941) | more than 3 years ago | (#36000258)

That sounds like an incredibly fragile idea. That said, a lot of work is being done to add Type Inference to the JaegerMonkey engine, which performs static analysis to determine type information.

Re:types (1)

rrohbeck (944847) | more than 3 years ago | (#36000340)

Why don't you use a type declaration?

Re:types (1)

Qzukk (229616) | more than 3 years ago | (#36000372)

And once all that nice code is run through a javascript obfuscator/compressor that renamed all the "str" variables to "a" "b" "c" etc, what then?

Re:types (3, Interesting)

BZ (40346) | more than 3 years ago | (#36000464)

OK. Let's look at http://code.jquery.com/jquery-1.5.2.js [jquery.com] very quickly. This isn't even a minified version; it's just the original source. Pretend you don't know what jquery is doing. The first few functions with arguments here:

> function( selector, context )

What are the types of those?

> function( selector, context, rootjQuery )

And those?

> function( num )

And that one? Hint: the function expects it to be "null" or at least "undefined" sometimes.

> function( elems, name, selector )

And that? Note that the function itself is not sure what to expect in |elems| and has different codepaths depending on what it finds there.

The point is, going from name to type is not really that easy.

Worse yet, "type" in this context also means what JS engines variously call "shape" or "hidden type": two instances of Object are considered the same type if they have the same properties in the same order. Telling _that_ from a variable name is pretty hard.
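
A concrete illustration of the "shape" point (my example, reusing the jQuery-ish names above): these objects have the same properties, but order matters, so most engines treat the second one as a different hidden type.

var a = { selector: "div", context: null };   // shape: (selector, context)
var b = { context: null, selector: "div" };   // same properties, different order: a different shape
var c = { selector: "p", context: null };     // same properties in the same order as a: shares a's shape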

Re:types (1)

icebraining (1313345) | more than 3 years ago | (#36000482)

So what would be the prefix for variable 'anonymousUser', which holds an object created from the User prototype and with a couple of custom methods added dynamically?

Re:types (1)

mad.frog (525085) | more than 3 years ago | (#36000740)

Optional type annotations (a la ActionScript 3) were considered for JavaScript in the ECMAScript 4 proposal, but the committee decided it didn't want to go in that direction.

Re:types (2)

TheRaven64 (641858) | more than 3 years ago | (#36002256)

Type annotations are only useful for documentation, and for some sanity checking (although type theory is not expressive enough to catch any interesting classes of programmer error). The StrongTalk team discovered that they got much more accurate information from type feedback than they did from programmer-provided type annotations. They created a dialect of Smalltalk with explicit type annotations. Their VM was the fastest Smalltalk implementation at the time by quite a large margin, and they found that they got no performance benefit at all from using the user-specified type information.

javaScript as a platform (2)

hey (83763) | more than 3 years ago | (#36000366)

Now that JavaScript is so fast, perhaps other interpreted languages will be "compiled" to JS. I am thinking of Perl 6, for one.

Re:javaScript as a platform (5, Informative)

Anonymous Coward | more than 3 years ago | (#36000504)

Yup. Python is translated to javascript in pyjamas. http://pyjs.org

Re:javaScript as a platform (0)

Anonymous Coward | more than 3 years ago | (#36004880)

This is not for speed increases... it's so you can execute it client-side in a browser.

Re:javaScript as a platform (2, Informative)

Anonymous Coward | more than 3 years ago | (#36000596)

Check out CoffeeScript, Objective-J, Pyjamas, Skulpt.

Re:javaScript as a platform (1)

TheLink (130905) | more than 3 years ago | (#36003094)

Can't the Perl bunch do something similar directly? They've had years to make Perl faster. Same goes for Python.

Google and Mozilla seem to have gotten much faster JavaScript in a much shorter time.

Re:javaScript as a platform (0)

Anonymous Coward | more than 3 years ago | (#36007104)

The reason to compile other languages to JavaScript has little to do with being able to take advantage of the optimizations found within modern JavaScript interpreters. As you say, it makes more sense to implement these same optimizations in the various language interpreters directly. The appeal of compiling languages to JavaScript relates to JavaScript's position as the sole programming language that is widely supported for client-side web programming. Many people want to write client-side web software, as the web is obviously an incredible platform, but many developers don't want to write in JavaScript -- at least they don't want to write increasingly large and complex applications entirely in JavaScript.

Re:javaScript as a platform (1)

Jonner (189691) | more than 3 years ago | (#36003444)

You mean like Sprixel [perlgeek.de]? There are already a number of compilers to JS for other languages, such as Java [perlgeek.de], Python [pyjs.org], and JavaScript itself [google.com], oddly enough. The list of languages that compile to JS [github.com] has many more.

PyPy (0)

Anonymous Coward | more than 3 years ago | (#36000456)

This looks a lot like what PyPy is currently doing

Re:PyPy (0)

Anonymous Coward | more than 3 years ago | (#36003640)

And PyPy has a Javascript interpreter...

Dead-code ellimination (3, Informative)

ferongr (1929434) | more than 3 years ago | (#36000468)

Didn't Mozilla cry bloody murder when IE9 was discovered to perform dead-code elimination, claiming it was "cheating" because it made some outdated JS benchmarks finish in zero time?

Re:Dead-code ellimination (3, Informative)

BZ (40346) | more than 3 years ago | (#36000566)

No, they just pointed out that IE's dead-code elimination was somewhat narrowly tailored to the Sunspider benchmark. That's quite different from "crying bloody murder".

Tracemonkey does dead-code elimination right now; it has for years. It's just that it's a generic infrastructure, so you don't usually get artifacts with code being eliminated only if you write your script just so. On the other hand, you also don't usually get large chunks of web page dead code eliminated, because that wasn't a design goal. Keep in mind that a lot of the dead code JS JIT compilers eliminate is code they auto-generated themselves (e.g. type guards if it turns out the type doesn't matter, stack syncs if it turns out that no one uses the value, and so forth), not code the JS author wrote.

Re:Dead-code ellimination (1)

ferongr (1929434) | more than 3 years ago | (#36000712)

Narrowly pointing it out is one thing; having their lead "evangelist", Asa Dotzler, scream condemning words and make absurd claims is a different thing.

In any case, I haven't seen any proof that the dead-code eliminator is "somewhat narrowly tailored for Sunspider". It could just be that it's quite aggressive, so any code that doesn't touch the DOM or change any variable (like calculating 1M digits of pi and sending it to null) gets eliminated.

Re:Dead-code ellimination (4, Informative)

BZ (40346) | more than 3 years ago | (#36000806)

> Narrowly pointing out and having their lead
> "evangelist", Asa Dotzler

Uh... I think you're confused about Asa's role here. But anyway...

> I haven't seen any proof that the dead-code
> eliminator is "somewhat narrowly tailored for
> Sunspider"

Well, it only eliminated code that restricted itself to the set of operations used in the Sunspider function in question. This is pretty clearly described at http://blog.mozilla.com/rob-sayre/2010/11/17/dead-code-elimination-for-beginners/ [mozilla.com]

For example, it eliminated code that included |TargetAngle > CurrAngle| but not code that was otherwise identical except that the comparison was written the other way around (|CurrAngle < TargetAngle|). Addition and subtraction were eliminated, while multiplication, division, and % were not.

If it eliminated "any code that doesn't touch the DOM" that would be a perfectly good general-purpose eliminator. That's not what IE9 was doing, though.

There was the side issue that it also produced buggy code; that's been fixed since, so at least IE9 final doesn't dead-code eliminate live code.
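
For context, the blocks in question look roughly like this (a paraphrase of the shape of the code using the identifiers from the comment above; the rest is made up, not the actual Sunspider source): the loop's result is never used, so a general-purpose DCE could remove it, but the IE9 preview reportedly only removed the comparison/add/subtract form.

function spinToTarget(TargetAngle, CurrAngle, Step) {
  for (var i = 0; i < 1000; i++) {
    if (TargetAngle > CurrAngle) {   // reportedly eliminated in this form...
      CurrAngle += Step;             // ...using only comparison, +=, and -=
    } else {
      CurrAngle -= Step;
    }
  }
  // CurrAngle is never returned or stored, so the whole loop is dead code;
  // reverse the comparison or use * / % in the body and the preview kept it.
}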

Re:Dead-code ellimination (0)

Anonymous Coward | more than 3 years ago | (#36000658)

If you know what you're doing, you don't submit results for a benchmark that you don't really execute and that falls apart utterly when a NOP is introduced.

Re:Dead-code ellimination (1)

slashtivus (1162793) | more than 3 years ago | (#36001450)

I was rather curious about the 'dead code' elimination... That would seem to me to be one of the first things to solve: if nothing is pointing to that particular code portion, simply do not compile it. I must be missing something about why that would not be one of the first targets to eliminate.

Re:Dead-code ellimination (2)

BZ (40346) | more than 3 years ago | (#36002002)

Turns out, figuring out whether code in JS is dead is hard. See http://blog.mozilla.com/rob-sayre/2010/11/17/dead-code-elimination-for-beginners/ [mozilla.com] for some examples of things that look dead but aren't.

Re:Dead-code ellimination (1)

_0xd0ad (1974778) | more than 3 years ago | (#36003604)

In my book any creative use of toString or valueOf counts for bonus points...

Re:Dead-code ellimination (2)

TheRaven64 (641858) | more than 3 years ago | (#36002310)

That's not what dead code elimination does. It removes code that does not affect the program's result. For example, if you have i++ and you then don't use i before it goes out of scope, this is a dead expression and the compiler can remove it. In a language as dynamic as JavaScript, it's quite hard to do because you have to make sure that the expression that you eliminate doesn't have any side effects before you eliminate it, and for nontrivial JavaScript expressions determining whether something has side effects can be as costly as just executing it.

Re:Dead-code ellimination (1)

_0xd0ad (1974778) | more than 3 years ago | (#36003686)

For example, if you have i++ and you then don't use i before it goes out of scope, this is a dead expression and the compiler can remove it.

Fine, then you can tell me what the result of the following code should be, I assume?

for (var n = 0; n < 100; n ++) {
    i ++;
}

Now what if I tell you that I had originally defined i like this:

var i = { valueOf: function() { var a = Math.floor(Math.random()*100); document.write(a); return a; } };

Re:Dead-code ellimination (1)

_0xd0ad (1974778) | more than 3 years ago | (#36003762)

Of course, the variable definition for i would obviously have to go in the opening statement of the for-loop (along with the definition of the loop counter n) in order to demonstrate the variable going out of scope without being used after the i++ statement.

Re:Dead-code ellimination (1)

Jonner (189691) | more than 3 years ago | (#36003472)

I was rather curious about the 'dead code' elimination... That would seem to me to be one of the first things to solve: if nothing is pointing to that particular code portion, simply do not compile it. I must be missing something about why that would not be one of the first targets to eliminate.

Dead code elimination is certainly a worthy goal, though not necessarily easy in a highly dynamic language like Javascript.

Re:Dead-code ellimination (1)

Ant P. (974313) | more than 3 years ago | (#36006398)

In the IE9 beta, adding a single useless variable assignment to some modern benchmarks made them take several orders of magnitude longer. An unused variable sounds like the sort of thing that should be optimised out by DCE, but here it's obviously enough to trip up the thing they use to detect common benchmarks and cheat using built-in precompiled code.

Compile it all (0)

ironicsky (569792) | more than 3 years ago | (#36001264)

I think it's time that Javascript and HTML get transmitted in a pre-compiled format, like Java. I'm sure the compiled file would be smaller than its mark-up counterpart, and it would run faster in the browser since the browser won't have to analyze the mark-up before "compiling" and rendering it. Plus, it might help people code their web pages to standards if the compilers won't compile their half-assed HTML.

Re:Compile it all (1)

ThatMegathronDude (1189203) | more than 3 years ago | (#36002484)

That would be a tremendous step backward.

Re:Compile it all (1)

RobbieThe1st (1977364) | more than 3 years ago | (#36005598)

No, what would happen is we'd end up with sites compiled for some obscure language-version, which nothing but one browser half-understands. Oh, and the compiler's buggy also.

As it is now, when there's a critical error in some obscure website's code, the problem can be diagnosed, and either fixed or worked around with a Greasemonkey script.
(Had to do that last night: some educational login page had an id='txtpassword' field, which they referenced as getElementById('txtPassword'). It worked in IE, but not in anything else -- a small JS replacement in Greasemonkey solved the problem for now.)

hash table lookup optimization (1)

pifko (460830) | more than 3 years ago | (#36002506)

I wonder how JavaScript engines optimize hash table lookups, especially since it is faster to access a member of an object than to access a variable of an outer function. It seems to me that the former requires a hash table lookup with a string key while the latter needs only a pointer into the closure of the outer function and the offset of the variable.

Re:hash table lookup optimization (1)

LUH 3418 (1429407) | more than 3 years ago | (#36006354)

They don't use hash maps to represent objects. Both SpiderMonkey and V8 regroup objects into pseudo-classes based on the set of properties they have. The technique was actually pioneered by SELF many years ago.

V8 calls this "hidden classes": http://code.google.com/apis/v8/design.html [google.com]

As for closures, they can be represented in a multitude of ways... Some more efficient than others.

2011... (1)

StripedCow (776465) | more than 3 years ago | (#36004596)

and I still cannot run a mozilla javascript environment inside IE, Opera, or Chrome.

You think I'm joking but I'm plain serious. Why are we so dependent on each particular javascript implementation?

Re:2011... (1)

jesser (77961) | more than 3 years ago | (#36004810)

Because each browser's JS engine is closely tied with its DOM, which is in turn closely tied with its layout engine. Otherwise it would be difficult to have efficient DOM calls and complete garbage collection.

Re:2011... (1)

StripedCow (776465) | more than 3 years ago | (#36004958)

Okay so take it a step further...

The question now becomes: why can't I run mozilla's DOM and layout engine inside IE, Opera, or Chrome or any other browser?

So your html could start like:
<HTML>
<HEAD>
<META browser="my_browser.so" /> ... etc ...

Where "my_browser.so" is a platform-independent shared object file which you may have compiled yourself (from Mozilla's code).

See the advantage? Your javascript, DOM manipulation, css layout, etcetera will work on ANY browser. No more cross-browser misery!

Re:2011... (1)

Anonymous Coward | more than 3 years ago | (#36005988)

This is basically what Chrome Frame does. It's a bad idea. Why is it a bad idea? Well, either "my_browser.so" is downloadable from the internets, or it isn't. If it is, that's a security hole you can drive a truck through (basically the same problem as ActiveX controls). If it isn't, then everyone whose operating system doesn't ship "mshtml.dll" is screwed.

What is the use of DCE? (0)

Anonymous Coward | more than 3 years ago | (#36008608)

I don't understand the whole DCE reasoning. I mean, the cases shown here and on Mozilla's site are valid, but I'd like to know where in the world, aside from tests that aim to show browser makers' faults, anyone actually writes dead code.

As a developer I care about the performance of my applications and optimize away what can be optimized -- by using a profiler and analyzing compiler warnings. There are numerous books and articles on how to write good code.

This task should not be left to the JS engine -- it only encourages writing sluggish and sloppy code that behaves well only when certain conditions (read: browser version, JS engine, particular compile flag) are met. Instead there should be an interface that issues a warning, "Hey, this code does nothing. Consider removing it.", in the JS console or whatever JS devs use.

This whole JS thing seems to me like it could be improved -- how about introducing some concepts from real programming languages, like typing, profiling, etc.?
