
Generic VMs Key To Future of Coding

Soulskill posted about 6 years ago | from the acme-brand dept.

Software 139

snydeq writes "Fatal Exception's Neil McAllister calls for generic VMs divorced from the syntactic details of specific languages in order to provide developers with some much-needed flexibility in the years ahead: 'Imagine being able to program in the language of your choice and then choose from any of several different underlying engines to execute your code, depending upon the needs of your application.' This 'next major stage in the evolution of programming' is already under way, he writes, citing Jim Hugunin's work with Python on the CLR, Microsoft's forthcoming Dynamic Language Runtime, Jython, Sun's Da Vinci Machine, and the long-delayed Perl/Python Parrot. And with modern JITs capable of outputting machine code almost as efficient as hand-coded C, the idea of running code through a truly generic VM may be yet another key factor that will shape the future of scripting."


Microsoft's forthcoming Dynamic Language Runtime (1)

ionix5891 (1228718) | about 6 years ago | (#25423213)

Correct me if I'm wrong, but isn't Silverlight 2 out, and doesn't it include the DLR with Python, Ruby, etc.?

Will we ever see Parrot? (4, Informative)

CRCulver (715279) | about 6 years ago | (#25423219)

I remember the elation people felt some years ago when Parrot was announced. At last, we could leverage the strengths of Python, Perl, or whatever other interpreted language, while working with a common interpreter. But then the hype started to die down, and the last edition of O'Reilly's book on the subject [] appeared over four years ago. Within the Python community, interest in Parrot seems completely dead. Are the Perl folks going it alone, and when might we see the project reach a successful deployment?

Re:Will we ever see Parrot? (0)

Anonymous Coward | about 6 years ago | (#25423261)

I've seen your opinion before from someone else. You didn't think of it. You're just (um, what's the word, ah!) repeating what that other guy said.

The future of Python is PyPy (5, Interesting)

YA_Python_dev (885173) | about 6 years ago | (#25423309)

Within the Python community, interest in Parrot seems completely dead.

Generic VMs are so 2005; the future of the Python runtime is PyPy [] . From a single implementation of Python (written in Python), they can compile Python code to C or to the JVM, automatically create a customizable JITed VM, and more.

Check them out: they are doing some seriously cool stuff and they can use a bit of help.

Re:Will we ever see Parrot? (0)

Anonymous Coward | about 6 years ago | (#25424115)

So you're saying "this is an ex-Parrot"?

Did someone say "Python" and "Parrot"? (2, Funny)

Rick Bentley (988595) | about 6 years ago | (#25424655)

Look, matey, I know a dead parrot when I see one, and I'm looking at one right now.

No no he's not dead, he's, he's restin'! Remarkable bird, the Norwegian Blue, idn'it, ay? Beautiful plumage! ...

Re:Will we ever see Parrot? (0)

Anonymous Coward | about 6 years ago | (#25424841)

Parrot seems completely dead.

Mr. Praline: 'Ello, I wish to register a complaint.

          (The owner does not respond.)

          Mr. Praline: 'Ello, Miss?

          Owner: What do you mean "miss"?

          Mr. Praline: I'm sorry, I have a cold. I wish to make a complaint!

          Owner: We're closin' for lunch.

          Mr. Praline: Never mind that, my lad. I wish to complain about this parrot what I purchased not half an hour ago from this very boutique.

          Owner: Oh yes, the, uh, the Norwegian Blue...What's,uh...What's wrong with it?

          Mr. Praline: I'll tell you what's wrong with it, my lad. 'E's dead, that's what's wrong with it!

          Owner: No, no, 'e's uh,...he's resting.

          Mr. Praline: Look, matey, I know a dead parrot when I see one, and I'm looking at one right now.

          Owner: No no he's not dead, he's, he's restin'! Remarkable bird, the Norwegian Blue, idn'it, ay? Beautiful plumage!

          Mr. Praline: The plumage don't enter into it. It's stone dead.

          Owner: Nononono, no, no! 'E's resting!

          Mr. Praline: All right then, if he's restin', I'll wake him up! (shouting at the cage) 'Ello, Mister Polly Parrot! I've got a lovely fresh cuttle fish for you if you

          (owner hits the cage)

          Owner: There, he moved!

          Mr. Praline: No, he didn't, that was you hitting the cage!

          Owner: I never!!

          Mr. Praline: Yes, you did!

          Owner: I never, never did anything...

          Mr. Praline: (yelling and hitting the cage repeatedly) 'ELLO POLLY!!!!! Testing! Testing! Testing! Testing! This is your nine o'clock alarm call!

          (Takes parrot out of the cage and thumps its head on the counter. Throws it up in the air and watches it plummet to the floor.)

          Mr. Praline: Now that's what I call a dead parrot.

          Owner: No, no.....No, 'e's stunned!

          Mr. Praline: STUNNED?!?

          Owner: Yeah! You stunned him, just as he was wakin' up! Norwegian Blues stun easily, major.

          Mr. Praline: look, mate, I've definitely 'ad enough of this. That parrot is definitely deceased, and when I purchased it not 'alf an hour
          ago, you assured me that its total lack of movement was due to it bein' tired and shagged out following a prolonged squawk.

          Owner: Well, he's...he's, ah...probably pining for the fjords.

          Mr. Praline: PININ' for the FJORDS?!?!?!? What kind of talk is that?, look, why did he fall flat on his back the moment I got 'im home?

          Owner: The Norwegian Blue prefers keepin' on it's back! Remarkable bird, id'nit, squire? Lovely plumage!

          Mr. Praline: Look, I took the liberty of examining that parrot when I got it home, and I discovered the only reason that it had been sitting on its perch in the
          first place was that it had been NAILED there.


          Owner: Well, o'course it was nailed there! If I hadn't nailed that bird down, it would have nuzzled up to those bars, bent 'em apart with its beak, and
          VOOM! Feeweeweewee!

          Mr. Praline: "VOOM"?!? Mate, this bird wouldn't "voom" if you put four million volts through it! 'E's bleedin' demised!

          Owner: No no! 'E's pining!

          Mr. Praline: 'E's not pinin'! 'E's passed on! This parrot is no more! He has ceased to be! 'E's expired and gone to meet 'is maker! 'E's a stiff! Bereft of life, 'e
          rests in peace! If you hadn't nailed 'im to the perch 'e'd be pushing up the daisies! 'Is metabolic processes are now 'istory! 'E's off the twig! 'E's kicked the
          bucket, 'e's shuffled off 'is mortal coil, run down the curtain and joined the bleedin' choir invisibile!! THIS IS AN EX-PARROT!!


          Owner: Well, I'd better replace it, then. (he takes a quick peek behind the counter) Sorry squire, I've had a look 'round the back of the shop, and uh,
          we're right out of parrots.

          Mr. Praline: I see. I see, I get the picture.

          Owner: I got a slug.


          Mr. Praline: Pray, does it talk?

          Owner: Nnnnot really.

          Mr. Praline: WELL IT'S HARDLY A BLOODY REPLACEMENT, IS IT?!!???!!?

          Owner: N-no, I guess not. (gets ashamed, looks at his feet)

          Mr. Praline: Well.


          Owner: (quietly) D'you.... d'you want to come back to my place?

          Mr. Praline: (looks around) Yeah, all right, sure.

Re:Will we ever see Parrot? (0)

Anonymous Coward | about 6 years ago | (#25424889)

We may see Parrot, but as you note, they should stop trying to suggest anyone plans on using it with Python, because that's just the Perl people having a happy little dream that has nothing to do with reality.

Re:Will we ever see Parrot? (4, Informative)

chromatic (9471) | about 6 years ago | (#25425319)

Patrick Michaud wrote a bare-bones Python implementation in eight hours. It doesn't support all of Python, but it supports a large amount -- and, to my knowledge, he'd never implemented a Python compiler or interpreter before. That project, Pynie, has languished for a while, as he's spending more time working on Rakudo [] (the Perl 6 implementation on Parrot), but it's a viable port just waiting for someone to work on it. Lua is functionally complete as of 5.1 (I believe), and Tcl, PHP, and Ruby are in progress.

You can play with the latest versions of all of these languages on Tuesday, 21 October, when we make our next monthly stable release (though partcl [] just moved to a separate repository, so you can check out the current version there on a different schedule).

One standard, several implementations (5, Insightful)

Anonymous Coward | about 6 years ago | (#25423225)

One standard, several implementations? Sounds nice in theory, just like the numerous standards Sun has put out where each vendor delivers its own implementation (JPA, JDBC, J2EE, among others). However, in practice you pick *one* vendor and *one* implementation and run with it. Only a fool would dare switch implementations mid-development, making the choice really just academic, because there are always minor differences that "shouldn't" matter, but do.

Re:One standard, several implementations (0)

Anonymous Coward | about 6 years ago | (#25423773)

That's not the way to look at it, though. Let's say that you want to write a Python program, but your company has invested a lot of time and money in Java. So you use Jython, which allows you to take advantage of all your Java resources and still write in Python. I think that's what we're talking about here.

And let's not forget that the Java API and associated frameworks are enormous (and very well done: Java red-black trees -- TreeMaps -- are lightning-quick). They have the further advantage that they are usually written in pure Java, so installation is usually as simple as adding a new JAR to the class path.

To give an example of the above, I don't bother with Python-native database drivers any more, I just use JDBC. I don't have to compile anything to install the JDBC drivers, which is a big deal when you're alternating between RHEL4, RHEL5, Solaris 10, and Windows 2008 -- and both 32-bit and 64-bit variants of each -- and can't even rely on having a C compiler available. The JDBC drivers are also considerably more feature-complete than their Python-native equivalents. And more consistent when you get to the "fringe:" things that the Python folks consider corner cases (such as LOBs) and therefore are implemented differently in every single driver.

Re:One standard, several implementations (2, Interesting)

pdbaby (609052) | about 6 years ago | (#25424803)

Except if your organisation works in Java, they probably want their Java developers to be able to modify and extend your code... so having it written in a language they've never seen before, even if it's binary-compatible with Java, probably isn't what they want. I'm a huge fan of solving problems in the language that's most appropriate, but this just breaks down when not all programmers have the same level of experience and flexibility.

Re:One standard, several implementations (2, Insightful)

MyDixieWrecked (548719) | about 6 years ago | (#25424863)

You also run into problems where someone creating an implementation of your language's VM may create one that's less complete or robust than another.

This will also get interesting when an implementation of a language on one vendor's VM significantly outperforms the one on another, or when implementation-specific security issues arise: a certain framework may be secure on one vendor's platform but not so much on another.

Also, security will be very platform/vendor-specific. Imagine a product where there's a security issue that spans many languages.

This whole idea sounds very good in theory, and may eventually work very well in practice, but I foresee this working well with a couple of very popular languages for one or two vendors' products and not so hot on many "me too" products.

Re:One standard, several implementations (2, Insightful)

jlarocco (851450) | about 6 years ago | (#25425979)

One standard, several implementations? Sounds nice in theory, just like the numerous standards Sun has put out where each vendor delivers its own implementation (JPA, JDBC, J2EE, among others). However, in practice you pick *one* vendor and *one* implementation and run with it. Only a fool would dare switch implementations mid-development, making the choice really just academic, because there are always minor differences that "shouldn't" matter, but do.

That's true, but there are still several reasons why it keeps the vendors on their toes more than a technology with a single vendor, like Microsoft's crap.

It may be a pain to switch implementations, but it's still easier than a complete rewrite using something else. Switching C++ compilers is much easier than switching to a different language. Swapping J2EE implementations is a pain, but still easier than rewriting using Ruby on Rails. So the vendor may be able to make your life difficult, but not too difficult because they know you can switch.

Secondly, multiple vendors supplying different but largely similar products make word of mouth and customer relations much more important. If Vendor A and Vendor B both have products implementing standards C and D, their reputations become much more important in deciding which product to use. There's more incentive to keep the customer happy. That's a good thing.

So no, one standard with multiple implementations won't solve every problem, but it makes the general situation better.

Targeting CLR-only and JVM-only platforms (1)

tepples (727027) | about 6 years ago | (#25426353)

Only a fool would dare switch implementations mid-development

Unless the requirements change mid-development. For example, an application originally intended to run on a notebook computer (which has an x86 CPU) might get retargeted to run on a handheld device (which more than likely has an ARM CPU), or vice versa. With C++, you switch to a different implementation that supports a different instruction set. Or perhaps you want to develop a product and deploy it on multiple platforms. For example, XNA for Xbox 360 can only run CLR bytecode, and MIDP for mobile phones can only run JVM bytecode, so you'd need to write a cross-platform video game's model[1] in a language that can target both the CLR and the JVM.

[1] Here, I distinguish [] the "model", the core of a program that defines the rules of a domain such as a business or a game, from the "view", the way a program presents the model to the user. Physics and AI are major components of a game's model; things like graphics, sound, and some of the input make up the view. Ideally, porting a program should require rewriting only the view.
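
The model/view split described above can be sketched in Python; all names here are illustrative, not from any real game engine:

```python
# Hypothetical sketch: the "model" is pure game logic with no
# platform dependencies, so only the "view" needs porting.

class BallModel:
    """Core rules: position and velocity of a bouncing ball."""

    def __init__(self, x=0.0, vx=1.0, left=0.0, right=10.0):
        self.x, self.vx = x, vx
        self.left, self.right = left, right

    def step(self):
        # Advance physics one tick; bounce off the walls.
        self.x += self.vx
        if self.x <= self.left or self.x >= self.right:
            self.vx = -self.vx


class TextView:
    """A platform-specific presentation layer; swap one in per target."""

    def render(self, model):
        return "ball at x=%.1f" % model.x


model = BallModel()
view = TextView()
model.step()
print(view.render(model))  # -> ball at x=1.0
```

Porting to a new platform then means writing a new view class against the same model, which is the point of the [1] distinction.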

Sort of like generic database access layers? (4, Insightful)

sphealey (2855) | about 6 years ago | (#25423229)

Reminds me of architects and developers who create generic database access engines so their product can be "platform independent" and then wonder why its performance is so bad no matter which of the six major databases is used.


Re:Sort of like generic database access layers? (3, Funny)

isny (681711) | about 6 years ago | (#25423761)

It's VMs all the way down! Or is it turtles...

Re:Sort of like generic database access layers? (3, Funny)

gazbo (517111) | about 6 years ago | (#25424501)

Gah! Database independent, yet almost invariably used on exactly one RDBMS. And you just know some of the more obscure query syntax makes the application on top of it database dependent anyway.

Yeah, it annoys the tits off me also.

Goatse is the future (-1, Offtopic)

Anonymous Coward | about 6 years ago | (#25423245)

Everything will be replaced by an ever expanding goatse [] .

And... (4, Insightful)

Colin Smith (2679) | about 6 years ago | (#25423253)

Software development recursively disappears up its own arse.

We already have different, generic, virtual machines. They are called operating systems. They run on bits of silicon and steel.

You can't fix the problems you have writing software by running away from them.

Re:And... (2, Funny)

aproposofwhat (1019098) | about 6 years ago | (#25423269)

And I read the headline as "Generic VMS ...", promptly shitting myself in the process :)

Re:And... (1)

hitmark (640295) | about 6 years ago | (#25423299)

would that be linux?

Re:And... (5, Insightful)

TheRaven64 (641858) | about 6 years ago | (#25423421)

I totally agree. The summary explained exactly how my code works already. I write C, Smalltalk, Objective-C and C++ (if I really can't avoid it) code. I then use a magical tool called a 'compiler' which turns it into code for a language-agnostic virtual machine called 'the {x86,SPARC,PowerPC,ARM} instruction set', which then runs it. The important part is not the VM, it's the libraries. With my Smalltalk compiler I can add methods to objects written in Objective-C, and subclass classes written in either language from the other. I can write high-level application logic in Smalltalk, mid-level code in Objective-C, and really performance-critical stuff in inline assembly in some C functions called from Objective-C methods. I can access a wealth of libraries written in C, C++, or Objective-C.

Actually, I do use a virtual machine, since my Smalltalk compiler is built on top of LLVM, but this VM is similar to an idealised form of a real CPU, and fairly language agnostic. Currently, I only use it for optimisation and statically emitting native code, but I could use it for run-time profiling and dynamic optimisations too.

Oh, and real men write their own compilers.

Bollocks (3, Funny)

Colin Smith (2679) | about 6 years ago | (#25423549)

Oh, and real men write their own compilers.

Real men code in P".


Re:Bollocks (1)

Tubal-Cain (1289912) | about 6 years ago | (#25424737)

Real men code in P".

Aahhh.... The pressure....

Not that kind of 'P'!

Re:And... (0)

Anonymous Coward | about 6 years ago | (#25425175)

I'd love to know which Smalltalk compiler can target LLVM; a quick Google isn't showing me one.


Re:And... (1)

TheRaven64 (641858) | about 6 years ago | (#25425205)

The one I wrote. You can find it in the Etoile subversion repository. It's used by several parts of Etoile, including the music jukebox and hot corners apps.

Re:And... (0)

Anonymous Coward | about 6 years ago | (#25425369)

Well, that's very cool. To be honest, this is the first time I've seen Etoile as well, and it looks very interesting. Thanks a ton from this AC ;-)

Re:And... (2, Interesting)

mqsoh (1002513) | about 6 years ago | (#25425391)

I think they'd like to bring that convenience to a 'higher' level. I make my living writing ActionScript and JavaScript and I felt like a jerk when I read a book recently that described C as a 'high-level' language.

Most of what I write everyday has problems between browsers on the same operating system. The flexibility you describe would be a joy for me.

Re:And... (4, Insightful)

Dolda2000 (759023) | about 6 years ago | (#25423457)

I could not agree more, and to my complete lack of surprise, TFA was full of inflated fluff and very little substance. It was hard enough to wade through it to find anything substantial at all, but let me highlight some of the things that can be found:

In fact, many developers would rather be freed from the hassles imposed by traditional systems programming languages. VM-based languages offer such features as automatic garbage collection, runtime bytecode verification, and security sandboxes -- all of which translate into peace of mind.

Of course, garbage collection has been a feature of LISP since its inception, and LISP has been compilable to machine code since... the 60s? Not to mention the garbage collection libraries available for C and other languages. I'd call that point bogus.

Likewise, runtime bytecode verification isn't necessary with a hardware CPU. It's just made to ensure that a JVM doesn't encounter any illegal instructions or jump to code outside the current protection domain. Hardware CPUs can do illegal instruction checking in parallel with execution without penalties, and virtual memory makes the jump checks pointless as well. Not to mention that it is less restricted, so that one can implement such things as tail-call optimization or continuations without reimplementing the CPU.
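
On VMs whose verification rules forbid the jumps that tail-call optimization needs, the usual workaround is a trampoline. A minimal Python sketch of the technique (illustrative only; CPython itself does not eliminate tail calls either):

```python
def trampoline(fn, *args):
    """Run a tail-recursive computation in constant stack space.
    The function returns either a final value or a zero-argument
    thunk describing the next tail call."""
    result = fn(*args)
    while callable(result):
        result = result()
    return result


def countdown(n, acc=0):
    if n == 0:
        return acc
    # Return a thunk instead of recursing, so the stack stays flat.
    return lambda: countdown(n - 1, acc + n)


print(trampoline(countdown, 100000))  # -> 5000050000, no stack overflow
```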

Oh, and of course, operating systems have had security sandboxes called "processes" since... the 60s? Of course, one could well argue that it would be swell to be able to further control a process' privileges to a degree not available on, say, Linux or NT, but that isn't exactly something that requires a VM.

Dynamic languages, on the other hand, mean efficient coding; their high-level syntax makes it easy to conceptualize applications and build prototypes rapidly.

Yeah. But as Lisp, Psyco, and countless others have demonstrated, they don't need a VM to run efficiently.

The great advantage of a generic VM, as opposed to a language-specific one, is flexibility.

Of course, exactly what a "generic" VM entails does not seem to be entirely clear to the author. Or at least, I can't find anything about it in TFA.

Re:And... (0)

Anonymous Coward | about 6 years ago | (#25423781)

Likewise, runtime bytecode verification isn't necessary with a hardware CPU. It's just made to ensure that a JVM doesn't encounter any illegal instructions or jump to code outside the current protection domain. Hardware CPUs can do illegal instruction checking in parallel with execution without penalties, and virtual memory makes the jump checks pointless as well.

True, but bytecode verifiers have some advantages over using hardware to do the job, particularly in embedded environments.

Bytecode verification is done once, at load time. Once it's been done, there is no additional overhead whatsoever - the code should be unable to do anything dangerous, write to areas of memory it's not allowed to, subvert the type system, or anything else.

Other protection mechanisms, such as having different privilege levels inside a single running process, can be built on top of this pretty easily.

None of this needs a VM, by the way. It's quite possible to compile safe code directly to native machine code, and just run that directly.

Shame it's pretty much incompatible with all existing code...

Re:And... (1)

porpnorber (851345) | about 6 years ago | (#25425065)

Well, look, this is a weird thing. As a language researcher, I find the idea of having such VMs as targets very exciting, but it rests on the assumption of them not being total crap, and we all know that, in practice, this isn't going to happen. To take your example of bytecode verification: you say, well, processes and hardware checks deal with illegal instructions already. Ask a language theorist, and they will say, sure, but bytecode verification can check things like:

  • there are no infinite loops
  • your password is not leaked
  • network protocols are obeyed
  • your locking discipline is correct

and not only that, but these and all the 'usual' process isolation guarantees can be made without context switch overheads: your code can be fused right into the kernel and run with higher security and reliability than ever before. While your point is taken, your standards of verification and efficiency are way below those of the research literature. (Of course, this is theoretically true of machine code, too, but these verification mechanisms are so intense to write that you really, really want to share all the work among all the target environments, not to mention that you do not want to have to scan and interpret the gnarled object code of most CPUs.)

In reality, however, the virtual machines that are actually deployed are broken in just the same way that the programming languages and operating systems we see deployed are broken in the eyes of researchers: they don't reach far enough. And unfortunately, global robustness properties are all-or-nothing; unlike notation design, there's no half way point where you grit your teeth, ignore the features that are more broken, and use the tool available. One single misfeature destroys the integrity of the edifice.

When it was announced that the Java VM would be bytecode verified and sandboxed, the research community rejoiced, because we thought we were hearing that from smart people and not marketing droids, and it would be true. But somewhere along the line people started adding 'features' without asking the architects about consequences. Such 'real world' VMs tend to get (or be able to get) unfiltered access to the native file system, to choose a random example, and they do it in the absence of a general security verifier (a thing that is, admittedly, much more technical and harder to get right than a mere JIT compiler). 'Useful' in the same way that casting integers to pointers is 'useful': of course, people use it every day, they don't know how to get by without it—but so long to the high ideals of the project.

Re:And... (1)

tepples (727027) | about 6 years ago | (#25426463)

but bytecode verification can check things like: there are no infinite loops

Since when did the halting problem get solved, or since when has a practical solution appeared for even the subset of the halting problem that applies to finite computers?
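
It hasn't; in practice, verifiers dodge the halting problem with conservative approximations. The original eBPF verifier in Linux, for instance, simply rejected backward jumps, which guarantees termination at the cost of refusing some perfectly fine loops. A toy sketch of that idea over a hypothetical bytecode:

```python
# Toy verifier over a hypothetical bytecode: a program is a list of
# (opcode, operand) tuples, and jump operands are instruction indices.
# Rejecting backward (and self) jumps guarantees termination without
# having to solve the halting problem.

def verify_no_backedges(program):
    for pc, (op, arg) in enumerate(program):
        if op in ("JMP", "JMP_IF") and arg <= pc:
            return False  # backward jump: a loop is possible
    return True


straight = [("PUSH", 1), ("JMP_IF", 3), ("PUSH", 2), ("HALT", None)]
looping  = [("PUSH", 1), ("JMP", 0), ("HALT", None)]

print(verify_no_backedges(straight))  # True
print(verify_no_backedges(looping))   # False
```

The check is decidable precisely because it over-approximates: every rejected program might loop forever, but not every accepted-looking loop gets through.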

LLVM plug (5, Informative)

Anonymous Coward | about 6 years ago | (#25423271)

The article didn't include it, but this open source project seems to have similar goals [] .

Re:LLVM plug (1)

paniq (833972) | about 6 years ago | (#25423709)


Re:LLVM plug (1)

Rayban (13436) | about 6 years ago | (#25425743)

Bump? You must be new here.

Re:LLVM plug (2, Informative)

naasking (94116) | about 6 years ago | (#25423907)

I'm still very surprised how few people are aware of LLVM. It's a truly low-level hardware abstraction layer, on which you can implement any language. OCaml, Haskell and Python have bindings for it IIRC.
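
To give a flavor of what "targeting LLVM" means: a compiler front end emits LLVM IR, and LLVM optimizes and lowers it to native code. A minimal sketch that just assembles the textual IR for a two-argument i32 add (real bindings such as llvmlite build this through an API instead of string pasting):

```python
def emit_add_function(name="add"):
    """Emit textual LLVM IR for a function adding two i32 values.
    (Illustrative only; a real backend would use LLVM's APIs.)"""
    return "\n".join([
        "define i32 @%s(i32 %%a, i32 %%b) {" % name,
        "entry:",
        "  %sum = add i32 %a, %b",
        "  ret i32 %sum",
        "}",
    ])


print(emit_add_function())
```

The resulting text can be fed to LLVM's tools, which is what makes the IR a language-agnostic meeting point for front ends.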

Wait, this sounds familliar! (3, Informative)

neokushan (932374) | about 6 years ago | (#25423283)

Sure sounds quite a bit like something Microsoft, of all people, already tried to create, doesn't it? That's right, I'm talking about .NET! Microsoft loved touting how you could develop .NET applications in C#, C++, or even good ol' VB, and it would all work the same and even interoperate.
But it's .NET, and I'm sure anyone with any experience knows that despite the supposed advantages, it has quite a few disadvantages as well. But at least it made VB somewhat useful again.

Nonetheless, I wouldn't hold my breath on this one; it sounds like a pipe dream to me, and I'm sure some would argue: what's the point of running your code through a VM if you can just run it natively?

On a side note: as efficient as hand-coded C? In my experience, 90% of the time someone tries to write "efficient" C, they end up causing more problems than it's worth (premature optimisation and all that). Perhaps it should be reworded to say something like "hand-crafted C from a C master".

Re:Wait, this sounds familliar! (2, Insightful)

Psychotria (953670) | about 6 years ago | (#25423315)

Well I have to agree (mostly). What on Earth is "hand-coded" C? And why is it better than... wait... what other kind of C is there?

Re:Wait, this sounds familliar! (3, Informative)

_jameshales (983564) | about 6 years ago | (#25423369)

There are some programming language implementations that "compile to C".

Re:Wait, this sounds familliar! (1)

TheLink (130905) | about 6 years ago | (#25423777)

> What on Earth is "hand-coded" C? And why is it better than... wait... what other kind of C is there?

There are also "Foot in Mouth" C, "Head up Arse" C and so on. [] might have some examples.

Re:Wait, this sounds familliar! (1)

binarylarry (1338699) | about 6 years ago | (#25423345)

And by "something Microsoft, of all people, tried to create," you mean "cloned Java and started a massive advertising campaign."

Re:Wait, this sounds familliar! (1)

neokushan (932374) | about 6 years ago | (#25423395)

I didn't know that Java supported more than one language and was interoperable.

Re:Wait, this sounds familliar! (2, Informative)

Anonymous Coward | about 6 years ago | (#25423415)

I suggest you look up Scala and Jython.

Re:Wait, this sounds familliar! (0)

Anonymous Coward | about 6 years ago | (#25423593)

Those are both newer than .NET. I suggest you look it up.

Re:Wait, this sounds familliar! (1)

gbjbaanb (229885) | about 6 years ago | (#25423615)

.NET is interoperable? Oh, you mean with Windows XP and Windows Mobile, of course.

(you can't mean Mono, as that's not Microsoft's .NET; that'd be like saying Win32 is a cross-platform library because of Wine).

Re:Wait, this sounds familliar! (1)

neokushan (932374) | about 6 years ago | (#25423667)

No, I was talking about it being interoperable between supported languages. I.e. you can make a .dll using C++ and link it to an application written in VB.

Re:Wait, this sounds familliar! (0)

Anonymous Coward | about 6 years ago | (#25423995)

Yeah, you can also link a DLL written in Delphi or Fortran to normal C. What's your point?

4th post in and you STILL don't get it? (1)

neokushan (932374) | about 6 years ago | (#25425049)

The point is that it's not far off from what this article is talking about.

Re:Wait, this sounds familliar! (1)

ClosedSource (238333) | about 6 years ago | (#25425131)

The real point is that with .NET you can do things like write a class in VB and inherit and extend it in C#.

The key difference between the Java platform and .NET lies in the platform designers' fundamental intentions. Java was not designed to support multiple languages, but other languages can be compiled to Java bytecode and run.

Similarly, many .NET libraries were not designed to be platform independent but the CLR can be ported to other hardware/OS platforms so that some .Net apps can run there (e.g. Mono on Linux).

The bottom line is that Java offers weak support for language independence and .NET weak support for platform independence.

Re:Wait, this sounds familliar! (2, Funny)

Anonymous Coward | about 6 years ago | (#25425269)

> The real point is that with .NET you can do things like write a class in VB and inherit and extend it in C#.

See, with C or Fortran you don't even _have_ that problem :-P

VMs need to impose fewer restrictions on languages (1)

JoshHeitzman (1122379) | about 6 years ago | (#25425097)

Actually, you have to use C++/CLI, which isn't quite the same thing as C++. For example, you can't use multiple inheritance. Whether you think that's a good thing or a bad thing, it's still a restriction imposed by the CLI that prevents truly and fully using any language.

Re:Wait, this sounds familliar! (1)

gbjbaanb (229885) | about 6 years ago | (#25425119)

Then COM is interoperable, and anything that can create a DLL with a C API is interoperable.

Re:Wait, this sounds familliar! (0)

Anonymous Coward | about 6 years ago | (#25423841)

The core JVM is just a bytecode interpreter; you can, in theory, write a compiler for any language you want to run on it.

In reality, you only need a few very simple operations on a platform, and you can make any programming language you want work on it.
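
The point about needing only a few simple operations can be illustrated with a toy stack-based interpreter (a sketch of the idea, not any real VM's instruction set):

```python
def run(bytecode):
    """Interpret a toy stack-machine bytecode: a list of
    (opcode, operand) pairs. Any language that can be lowered to
    these few ops can run on this 'VM'."""
    stack = []
    pc = 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "JMP_IF_ZERO":
            # Conditional branch: jump to instruction index `arg`.
            if stack.pop() == 0:
                pc = arg
                continue
        pc += 1
    return stack.pop()


# (2 + 3) * 4, as a compiler front end might emit it:
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]
print(run(program))  # -> 20
```

A front end for any language just has to translate its constructs down to pushes, arithmetic, and branches; that is the whole trick behind multi-language VMs.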

Re:Wait, this sounds familliar! (1)

johanatan (1159309) | about 6 years ago | (#25425673)

Guess you haven't heard of Jython [] ?

Re:Wait, this sounds familliar! (0)

Anonymous Coward | about 6 years ago | (#25423757)

No, J is their Java clone. C# and .NET are substantially better in my opinion.

Re:Wait, this sounds familliar! (0)

Anonymous Coward | about 6 years ago | (#25425409)

You're a moron.

The point? (5, Insightful)

orclevegam (940336) | about 6 years ago | (#25423329)

Am I the only one who sees this as completely ass-backwards? I mean, part of the lure of scripting languages is that we skip the whole compile phase, and so achieve a certain degree of platform independence. So long as the target system has an implementation of the scripting language's interpreter, you just run the script inside it, and you can distribute the same script (more or less) to any system with an interpreter. Now they're talking about essentially compiling a scripting language to one of several different byte codes to target one of several different VMs, which then of course need implementations on whatever systems you're targeting. How is this an improvement over the previous way of doing things?

What exactly are we getting out of this? Language developers don't have to worry about the details of the underlying machine, but as a trade-off they now need to write implementations for whatever VM is out there, which in turn will require them to worry about the details of the underlying machine; so we've just pushed that pain point down one level of abstraction, not eliminated it. The only upside I can see to the entire thing is language interoperability, which is nice and all, but how does that fit in with the multiple-VM approach being touted here? Each language is most likely going to require some minor changes in order to support interoperability at the VM level, and of course there will be quirks and gotchas on each VM as well. Unless all the VM developers get together and agree on the exact changes required to each language, we could end up with a situation in which each language comes in multiple slightly different syntaxes depending on exactly which VM it targets.

Re:The point? (1)

DarkOx (621550) | about 6 years ago | (#25423373)

I have to agree. If performance is not a major concern (and for anything that isn't number crunching these days, it probably isn't), an interpreter is going to be more flexible than compiled byte code, and it can probably still be pretty quick even if its runtime nature prevents certain optimizations you might do with a compiler. Why must we keep going after this one-tool-for-every-job approach? There is a place for C, C++, Java, Perl, Python, and Ruby as they exist today.

Re:The point? (0)

Anonymous Coward | about 6 years ago | (#25423505)

The scripts can still be run exactly the same way; you just bolt a VM on in place of the regular interpreter. Newer versions of Python already compile your code at runtime, as do quite a few scripting languages. There is no need for the script writer to care about what is running under the hood.

In theory you shouldn't need to change your script to have it run on a different VM; if you do, then the VM is broken. You also shouldn't find quirks any more than you get from using an AMD processor instead of an Intel one, since at their core these VMs have a very simple bytecode instruction set. Each VM will likely need something written to support reading the language, but that's about it.
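The "very simple bytecode instruction set" point is easy to make concrete: a minimal stack machine fits in a few lines. This is a toy sketch — the opcode names and the (opcode, argument) tuple encoding are invented for illustration, not taken from any real VM:

```python
# A toy stack-machine VM: each instruction is an (opcode, argument) pair.
def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack[-1]

# (2 + 3) * 4 compiled down to the toy instruction set
prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
        ("PUSH", 4), ("MUL", None)]
print(run(prog))  # 20
```

Any language front-end that can emit these few operations can run on this "VM"; the hard part, as other comments note, is making different front-ends share data and calling conventions.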

Re:The point? (1)

dkf (304284) | about 6 years ago | (#25423979)

Each language is most likely going to require some minor changes in order to support interoperability at the VM level, and of course there will be quirks and gotchas on each VM as well.

I think you underestimate the problem. At the VM level, you're dealing with the deep language semantics only; simple stuff tends to be either syntax or in the language libraries. When you mess with the deep semantics, you have far-reaching consequences. For example, consider the differences caused by switching between mutable and immutable values, or between simple variables and vars where accessing them can cause reentrant calls to the VM, or between eager and lazy evaluation of expressions.
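The mutable-vs-immutable point can be shown in a couple of lines of Python: the "same" update is visible through every alias of a list but produces a fresh object for a tuple, and a VM's value model has to commit to one behavior or support both:

```python
# Aliasing through a mutable value: both names observe the update.
a = [1, 2]
b = a
b.append(3)       # a is now [1, 2, 3] as well

# The "same" update on an immutable value produces a fresh object.
t = (1, 2)
u = t + (3,)      # t is still (1, 2)

print(a, t, u)
```

A VM built around one of these models has to emulate the other, which is exactly the kind of deep-semantics mismatch being described.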

Those who argue for the creation of The One VM should go back to school to re-learn their semantics classes, since their position demonstrates deep ignorance.

scripting languages use intermediate byte code (2, Interesting)

lkcl (517947) | about 6 years ago | (#25424187)

you have to bear in mind that scripting languages, in order to be _reasonably_ efficient, have to do intermediate byte code _anyway_.

python uses a FORTH-like intermediate byte code, for example. the similarity to CLR will be pretty high.
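that stack-oriented byte code is directly visible with CPython's stdlib dis module (the exact opcode names vary between interpreter versions):

```python
import dis

def add(a, b):
    return a + b

# Print the stack-machine instructions CPython compiled this function to
dis.dis(add)

opnames = [ins.opname for ins in dis.get_instructions(add)]
```

On recent CPython versions this shows the arguments being pushed, a binary-operation opcode, and a return — much like a Forth word definition.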

when you come to things like V8, that does on-the-fly _compilation_ which is basically the same thing as intermediate byte code, only a bit more extreme and aggressive.

so the technology is beginning to move in the direction of "grey area" - thinning the distinctions.

i like the idea of using javascript as the VM intermediate language.

what's really neat about using javascript is that people have been optimising the hell out of it for a loooong time.

so, pyv8 demonstrated an empirical result of running python TEN times faster than the standard compiler does, by translating the python into javascript and then V8 compiling it to i386 assembler on-the-fly.

that's _very_ cool.

Java (1)

florescent_beige (608235) | about 6 years ago | (#25423365)

All I know is that every large Java system seems to have parts written in native code called through the JNI.

The JVM has been around for a long time and still can't do things like device drivers. Performance code, like parts of Java Advanced Imaging, is native. A lot of people turn the native parts off, though, because they use ridiculous amounts of memory.

I think it's just too hard to make VMs that do everything well.

Re:Java (1)

joshv (13017) | about 6 years ago | (#25423533)

"All I know is that every large Java system seems to have parts written in native code called through the JNI."

I use several "large systems" written in Java that use no native code, other than what might be embedded in the JDK, and I imagine most of that is just string manipulation.

"A lot of people turn the native parts off though because they use ridiculous amounts of memory."

What the hell are you talking about? How does one "turn the native parts off" in Java? And why do you think these native bits use ridiculous amounts of memory?

Re:Java (1)

florescent_beige (608235) | about 6 years ago | (#25423619)

JAI has native codecs for some formats and also Java codecs. The native ones are used by default but they can be turned off i.e. the Java codecs are used instead.

They use a lot of memory because they don't cache, I think. The whole uncompressed image gets read into memory, which can use hundreds of MB.

Re:Java (1)

gbjbaanb (229885) | about 6 years ago | (#25423643)

And why do you think these native bits use ridiculous amounts of memory?

because his Java apps use ridiculous amounts of memory and he has to find something else to blame :-)

Re:Java (1)

binarylarry (1338699) | about 6 years ago | (#25423567)

Why would a language/platform designed for writing networked applications be used to write device drivers?

That's like saying "I don't know about these batch files. They've been around for a few decades and I still can't write drivers in them."

Re:Java (1)

florescent_beige (608235) | about 6 years ago | (#25423871)

Why would a language/platform designed for writing networked applications be used to write device drivers?

Is the JVM supposed to be only for enterprise plumbing, or is it supposed to be a general-purpose VM? If not the latter, then why did Sun bother with Swing and the like?

Re:Java (1)

binarylarry (1338699) | about 6 years ago | (#25426435)

Jesus, read my first sentence.

Re:Java (1)

einar2 (784078) | about 6 years ago | (#25423703)

What do you expect your VM to be? An operating system or a container where your business logic runs?

I work for a bank. We do not do operating systems; we do banking. Some of our apps are very large Java systems. And no, we do not need JNI, nor would we bother with device drivers.

Hardware independence is invented. That is what Java is doing for you.

Re:Java (0)

Anonymous Coward | about 6 years ago | (#25425403)

> Hardware independence is invented. That is what Java is doing for you.

Java is hardware-independent at the bytecode level as long as there is a JVM for your hardware.
C is hardware-independent at the source level as long as you are a bit careful and there is a C compiler.
Yes, Java _may_ work better in some specific cases, but already C was developed with the intent to get rid
of the hardware-dependence, as was e.g. POSIX to avoid the platform-dependence.
The combination and the execution may have been new, but all these things are not stuff that Java invented.

Really? (1)

ClosedSource (238333) | about 6 years ago | (#25425865)

"Hardware independence is invented. That is what Java is doing for you."

So any multi-threaded or multi-process Java code I write will run identically on all systems regardless of the processor used or how many cores are present?

Team code (1)

AngryNick (891056) | about 6 years ago | (#25423377)

This all sounds great for a single programmer or small team, but how does it play in today's corporate programming environment? Today you can have teams split across 3 or 4 time zones, contractors and perms, outsourced coders in India, China, and who knows where else, all working on the same project with their own opinions of what is "best" for it. Will allowing each to code in their own programming "dialect" really work?

Following this Slashdot story (2, Funny)

Anonymous Coward | about 6 years ago | (#25423397)

Intel stock rose sharply as investors realized that ubiquitous VMs will require faster processors because more programs will be written in scripting languages. Shortly after, Intel stock plummeted as investors realized that intermediate VMs decouple the programs from the processor architecture.

Colas: Coke, Pepsi and Jolt point a way forward (5, Interesting)

itsybitsy (149808) | about 6 years ago | (#25423413)

Ian Piumarta and the VPRI [] are doing some amazing work related to this story.

COLAs: Combined Object Lambda Architectures - A Complete System in 20,000 Lines of Code. []
The system is slowly evolving towards version 1.0 which
        * is completely self-describing (from the metal, or even FPGA gates, up) exposing all aspects of its implementation for inspection and incremental modification;
        * treats state and behaviour as orthogonal but mutually-completing descriptions of computation;
        * treats static and dynamic compilation as two extremes of a continuum;
        * treats static and dynamic typing as two extremes of a continuum; and
        * late-binds absolutely everything: programming (parsing through codegen to runtime and ABI), applications (libraries, communications facilities), interaction (graphics frameworks, rendering algorithms), and so on.

Allen Wirfs-Brock and Dan Ingalls are currently working on bringing notions like Colas to the browser so that we can use any programming language WE choose to for our browser based applications. Check out their interview here. []

Re:Colas: Coke, Pepsi and Jolt point a way forward (0)

Anonymous Coward | about 6 years ago | (#25423573)

Interesting idea. Looks like they are approaching the subject from a Smalltalk/Java/OO type path which I don't necessarily think is the best. At least they're not coming from a Lisp view (blech).

Pure OO, just like pure functional, is wrong. The perfect solution is some sort of balance between the two; trying to make anything extremely pure and simple (elegant, in CS terms) always has serious compromises (usually in performance).

Pimpin' my VM's (1)

Fizzl (209397) | about 6 years ago | (#25423423)

So I heard you like coding on VMs? So we put a VM on your VM so you can code while you code.

VM?! Why are you drinking that kool-aid? (5, Insightful)

the_skywise (189793) | about 6 years ago | (#25423445)

Microsoft promised this with .NET. (Just buy our tools and you build to .NET and run on all Windows platforms, XP SP1, XP SP2 AND Vista! It's sooo much better than that... Java thing.)

Microsoft promised us this with Windows CE. (Just buy our tools and with a simple compiler switch, voila, you're targeting CE... it couldn't be easier.)

Microsoft couldn't even do it with DirectX where OpenGL could (Oh hey, that XBox directX.. it works a little differently than Windows DirectX)

For that matter, the Windows printer driver APIs aren't consistent (Yeah, we know it's called GetMarginSpaceFromEdge, but driver A measures the edge from half an inch in, and driver B measures from where the print head detects the edge of the page, which is sometimes an inch greater than the page itself...)

Y'know what the greatest VM is right now? i386! And it has been for nigh-on 10 years!

I LIKE Microsoft products, don't get me wrong... but I'm not going to buy Visual Studio 2011 which has no changes other than a GUI enhancement and the ability to target my development at the hot new sweetness.DNET APIs, so that 3 years later Microsoft can abandon .DNET for DCOM# because, hey, that's what our research said people wanted, and it'll be supported on Windows 7.1.1 along with Blackbird 2.0.

Plus ca change.... (4, Interesting)

bfwebster (90513) | about 6 years ago | (#25423459)

My first thought on reading this was an old software engineering maxim, usually (and probably correctly) attributed to Don Knuth [] :

There is no complexity problem in programming that cannot be eased by adding a layer of indirection. And there is no performance problem in programming that cannot be eased by removing a layer of indirection.

Universal VMs are old as the hills (anyone [else] here old enough to have programmed on the UCSD p-System [] ?). We shift towards VMs to gain independence and portability, and then we shift back to direct, spot or JIT compilation to improve performance. It's an old, old dance, and one that will likely go on for years to come. ..bruce..

Old enough to have *ported* it (grins) (1)

Fallen Andy (795676) | about 6 years ago | (#25423745)

Used to work for the UK source licensees (TDI in Bristol, UK) back in the early 80's. It had its place then, as there was no such thing as a PC standard. Even so, it wasn't quite as portable as you'd think: floating point was often not IEEE and differed between implementations. Byte ordering (byte sex) mattered (even on Version IV). Performance constraints on those little machines meant that p-code had special fast "short load" instructions. The net effect was that high-level programmers abused this, with folklore like "the first 16 local variables are faster".

Despite that it was still in use for some business apps (mostly accountancy style stuff) right into the early 90's.

When I look at e.g. MS "Singularity" I see something suspiciously similar to my old (multi-user) Sage II (sadly now long departed).


Re:Plus ca change.... (0)

Anonymous Coward | about 6 years ago | (#25425213)

I worked on the p-system (later renamed Power System) in the 80's for a company that took over from SoftTech Microsystems. It had many great features and lots of potential. However, the industry went in a different direction.

The job (my first out of college) was fantastic -- working on the p-machine, interpreters, compilers, cross compilers, device drivers, file systems, etc. It was a great place to start and I had great colleagues.

It doesn't surprise me that this topic has come around. Later when I worked at IBM, I saw lots of ideas re-invented by us Unix and PC dudes that our previous colleagues had already solved in the mainframe and elsewhere.

Enjoy the ride.

overhead/efficiency vs. ease of use (1)

jollyreaper (513215) | about 6 years ago | (#25423473)

I find it kind of funny how there's this battle between wresting the most performance out of the hardware versus ease of use for programmers and users. Back in the day, every character was significant, and code with too much documentation simply ate up too much space (and this is talking about after we gave up on punch cards and were typing the code into terminal screens). Every step we take to make computers easier to understand and easier to use makes the backend so much more complicated. A base install of XP is something like what, tens of thousands of times larger than DOS? But it's also thousands of times more powerful.

But at the same time, we can sacrifice too much. I could run Win2k and Office 97 just fine on a good machine from 1999, and it would still suffice for the typical office worker even today. Of course, that machine cost around $1000 back then, and a basic office machine with so much more power goes for $600 today, including the Vista tax; but wait, Office is gonna ding you another $600 now. Funny how the cost of software used to be the cheapest part of the machine and now it's become the most painful. And when you get right back to it, the secretary isn't typing her letters any faster on the newer machine. There's probably nothing she needs in any of the newer versions of Office that she didn't have in 97.

It just strikes me as kind of funny how we make these huge advances in performance, in hardware capability, and it seems like the software is really lagging behind in the effort to fully exploit these gains. But then I look at how hard it is to write the code and it's amazing we've come even this far.

Re:overhead/efficiency vs. ease of use (1)

TheLink (130905) | about 6 years ago | (#25423969)

Have you seen The Mother of all Demos? []

How far we've advanced and how little we have advanced, after 40 years.

There was so much more that could have been done. The thing is, the driving need to do it is not there.

Stop. Rephrase the Question. (0)

Anonymous Coward | about 6 years ago | (#25423507)

I've been thinking about this topic for a long time. The use of a virtual machine is usually hampered by the lack of a proper language-agnostic, operating-system-agnostic linking/loading mechanism.

Bear with me... Being able to consistently identify precise versions and provide a global library namespace with automatic cross-language compatibility (calling convention, and datatypes with or without support for cross-language OOP) would make the benefits of a VM much easier to attain.

The problem is type conversion between languages (3, Insightful)

MarkWatson (189759) | about 6 years ago | (#25423525)

There is a JSR to address this on the JVM, but I am not convinced that interop between languages on a single VM will be transparent. I mix Java libraries with JRuby, and I often end up writing thin facade classes to make the interop better.

I started to read the article but.... (1)

3seas (184403) | about 6 years ago | (#25423621)

.... wanting to fully understand it, I followed the links, where I typically found a new link after the first paragraph, recursively. So after 15 minutes of reading I determined that I hadn't gotten anywhere in understanding much of anything, except for one thing:

How many programs must we run, layer upon layer, in order to run an application?
Doesn't adding more and more layers of complexity contribute to the failure side of the failure vs. success equation?

I do really understand the ideals behind .NET, such as the CLI and CLR, but I also note the downside: they fail to address the general objective until much too late in the software development cycle. Addressing the general objective at the late stage of runtime only overly complicates the fix.

To use an analogy: a social science teacher once described to the class how quality control was once done in the USSR. A fine china plate manufacturer would produce the plates, put them on a truck, and ship them to the store. To buy a plate you would stand in a line for a number, and once you had a number you'd stand in another line to pay for the plate, where you'd get a receipt. Then you'd stand in a third line to pick up your plate. Once you got to the front of the line, the store employee would look at your receipt, go over to the plates, pick one up, and strike the dish with a wand as a quality control step. If it broke, they would do the same with the next dish until they completed your order.

Likewise, the ideal of write-once-run-anywhere via a runtime engine is the same sort of just-in-time quality control: applied far too late to be cost-effective, at the price of overcomplexity and failure.

Where the general objective needs to be addressed is at the very beginning of the development process, perhaps even before code is written.

A programming language, meaning anything above machine language, is an abstraction, and this is recursive. But to run an application, the machine must see it in terms of machine language, and so, whatever the level of abstraction, it gets boiled down to machine language (granted, the quality of the machine language result is dependent on the TRANSLATION method used). This is common knowledge to anyone who knows anything about programming.

The Common Language Infrastructure (CLI) is the ideal of taking all the more popular programming concepts and data types and combining them in a non-conflicting manner, which is then used in the translation process to convert to a Common Intermediate Language that runs on the Common Language Runtime.

The key point here is TRANSLATION: by addressing translation early on in coding, it becomes possible to translate whatever to whatever else, and then compile and run it any way you want, be it directly on the hardware, natively on any OS that is capable, or even on a VM.

The point is that computer programming languages are abstractions, and it is in dealing with and translating such abstractions from one form to another that the magic of the future is to be found.
It is in understanding the natural laws and physics of abstraction creation and use, [] and in understanding translation mechanics, where software development solutions will be genuinely found. Deal with abstraction translation prior to compile (though compiling is itself a translation to machine binary). And in doing this focus on abstraction translation, there will evolve simpler yet more powerful programming languages. That's the whole point of programming: to take some complexity and make it easier to use and reuse by defining it and a simplified interface to its use. Done, of course, at the only place it can be done: the abstraction creation and use level.

Not in some down-the-line VM's additional complexity that is designed mainly to generate licensing fees.


simply put... (1)

3seas (184403) | about 6 years ago | (#25423963)

... since it's all about abstractions and translation, by doing it up front you have more control and opportunity to advance.

To deal with translation on the back end is to avoid or hinder genuine programming advancement, in exchange for licensing fees for another level of abstraction/translation.

Nothing New (0)

Anonymous Coward | about 6 years ago | (#25423657)

This concept is not new. It was implemented on the IBM System/38 in the late 70s and was called MI (Machine Interface, later renamed TIMI, Technology Independent Machine Interface). It allowed IBM over the years to make radical changes to the hardware (S/38 => AS/400 => iSeries, CISC to RISC to POWER, etc.) without end users having to modify any of their code, written in any of a number of languages: CL, many variants of RPG, REXX, PL/1, C, C++... Curiously, Microsoft for years ran its internal operations on a pair of S/38s. I wonder...

It's not the syntax, stupid (4, Insightful)

jonaskoelker (922170) | about 6 years ago | (#25423695)

generic VMs divorced from the syntactic details of specific languages

The syntax of programming languages is something understood by the front-end of a compiler. It then translates the code into code that does the same thing in the back-end language (such as JVM/PyVM/x86/LLVM bytecode). Neither back-end knows about the syntax of the front-end language.

The real challenge is to adopt conventions on the back-end VM that allow different languages to talk together. It'd be straightforward to implement an x86 emulator on top of the JVM and run the ${language} VM on that x86. Wow, you now have ${language} running on the JVM. So? You can't talk to the Java library that way.

If you want languages to talk together, they need to agree on data representation formats and calling conventions. Try getting object.field if you don't know where field is relative to the base address of object. Try calling object.method() if you don't know the format (or location) of object.__vtbl.
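The field-offset point can be made concrete with Python's stdlib ctypes, where a struct's layout is spelled out explicitly; two languages can only share such an object if they agree on exactly these offsets. (A small illustration of the general idea, not tied to any particular VM.)

```python
import ctypes

class Point(ctypes.Structure):
    # Explicit layout: the field offsets are a contract between
    # whoever writes this memory and whoever reads it.
    _fields_ = [("x", ctypes.c_int32), ("y", ctypes.c_int32)]

print(Point.x.offset, Point.y.offset)  # 0 4
```

Change the field order or widths on one side only, and `object.field` silently reads the wrong bytes — which is exactly why a shared VM needs agreed data representation, not just a shared instruction set.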

Also, the semantics of some operations have to be considered if a language has to deal with a foreign object model. Let's say we target the Java VM. How do you implement multiple inheritance? What does .super do on a class with multiple parents? How do you implement "Object *p = malloc(...); *p = my_object;"? How do you implement C++'s delete? How do you implement python's generators?
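The generator question is a concrete instance of this: a `yield` suspends the entire stack frame, locals and instruction pointer included, and `next()` resumes it. A VM without first-class resumable frames has to fake this somehow:

```python
def countdown(n):
    while n:
        yield n   # the whole frame (locals, position) is suspended here
        n -= 1

g = countdown(3)
first = next(g)   # resumes the frame, runs until the next yield
rest = list(g)    # drains the remaining values: 2, 1
```

On the JVM, for example, compilers for languages with generators or coroutines typically have to transform code into state machines precisely because the VM doesn't expose suspended frames directly.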

To support a set of languages, the VM must support the union of features. To make the languages talk together smoothly, the VM must support each feature in a reasonably straightforward way. The two demands pull the VM in opposite directions.

I don't want to just poo-poo this idea, but my experience with the Java VM (I've written a compiler for a substantial subset of Java in my compilers course) is that it's tightly coupled to the Java way of doing things. My experience with different languages (C, C++, Java, Python, Perl, Ruby, Haskell, Scheme) says that things are different enough that you can transfer most of what you know from one language to another [at least for the OO/procedural ones], but that the devil is in the details, and the VM has to handle all values of $details.

Re:It's not the syntax, stupid (1)

dkf (304284) | about 6 years ago | (#25425795)

I don't want to just poo-poo this idea

Well, I don't mind doing just that!

VM's come in two basic varieties: low-level and high-level. Low-level VMs are really software-implemented microprocessors, and targeting them is like writing another back-end for GCC, though with some odd instructions. High-level VMs are much much easier to generate code for, but tend to be locked to a particular front-end language (or group of semantically-similar languages) because the operations of the VM capture a lot of high-level details.

If someone is peddling a universal VM, they are either doing a new low-level VM (oh great, we've already got x86, JVM and CLR and they have a lot more existing tool support thankyouverymuch) or they're doing a high-level VM (you want my language to change to be yours?!?) It matters not which: both are foolish. (In fact, I'll keep doing my own high-level VM implementations, since then I can make them highly efficient for the key use-cases I care about by adding special-purpose code for just that. That's fine, because I don't claim universality at all.)

Languages are not the problem (1)

einar2 (784078) | about 6 years ago | (#25423725)

"Imagine being able to program in the language of your choice..."

Which bigger enterprise would allow you to program in the language of your choice? We have a code base written by around 1000 developers over the last thirty years. Do you really think we give developers a choice of language?

Depending on the problem you have to solve, there is one language to pick. Maintaining this code is extremely expensive. This is where the real complexity lies, and this is the problem we have to address.

I really do not care whether our developers have it cozy enough to pick the language of the day...

Re:Languages are not the problem (1)

Shados (741919) | about 6 years ago | (#25423769)

Indeed, you don't. However, as a company you can pick a subset of the available languages that fits your problems best (I doubt one language is enough to solve anything significant, though it can get close), with factors such as workforce availability, problem-solving capabilities, RAD tools, etc.

So you could be making your UI in Silverlight with IronRuby, your backend with C#, the DSLs with Boo, and the data-intensive algorithms with F#, if you were using the .NET platform today. Or you could make the UI in VBx, the backend in C++, the DSLs in IronPython, and the algorithms in raw C. All depending on your company's philosophy and resources. And that's an important thing to have.

What's old is new again (2, Interesting)

tcgroat (666085) | about 6 years ago | (#25423763)

Wasn't platform independence the selling point of UCSD's p-system [] ? Yes, it worked, but it never really caught on. One camp of software development says that hardware is always getting faster, cheaper and more efficient, so adding a layer of abstraction between the source code and the hardware is not a problem. The other camp says we can use those same performance improvements to build software that does more things, on larger data sets, with better graphics, and in general make what once were impractically large and complex software tasks run on the average users' systems. Over the last three decades, the market has favored the latter.

Any language you want, on any VM you want. (1)

Ant P. (974313) | about 6 years ago | (#25423869)

...as long as it's Python.

No thanks.

FTA (1)

FlyingBishop (1293238) | about 6 years ago | (#25424117)

One could argue that Microsoft is already doing this with .Net, by pushing for managed-code applications to be considered first-class citizens alongside traditional Win32 code. Why Sun never did the same with Solaris and Java is something of a mystery.

Maybe because not even Sun's engineers actually wanted to write actual code in Java?

Javascript - as a VM intermediate language(!) (3, Informative)

lkcl (517947) | about 6 years ago | (#25424139)

no don't laugh, it works very well! there are a number of very good reasons for this.

1) javascript is actually an incredibly powerful language, in particular due to the concept of "prototype"ing.

2) javascript, thanks to web browsers, has an unbelievably large amount of attention spent on it, to optimise the stuffing out of it. as a result, the latest incarnation to hit the streets - the V8 engine - actually compiles to i386 or ARM assembler.

3) the number of "-to-javascript" compilers is really quite staggering. see the comments from pyv8 article [] for an incomplete list.

GWT has a java-to-javascript compiler; Pyjamas [] has a python-to-javascript compiler. There's a ruby-to-javascript compiler - the list just goes on.

then there's the pypy compiler collection, which has javascript as a back-end. (and, for completeness, it's worth mentioning that it also has a CLR backend and a java backend).
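A toy flavour of what those "-to-javascript" compilers do can be shown with CPython's stdlib ast module. This tiny expression translator is invented for illustration and bears no resemblance to the real Pyjamas or GWT code:

```python
import ast

def to_js(expr: str) -> str:
    """Translate a tiny subset of Python expressions to JavaScript source."""
    ops = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

    def emit(node):
        if isinstance(node, ast.BinOp):
            return f"({emit(node.left)} {ops[type(node.op)]} {emit(node.right)})"
        if isinstance(node, ast.Constant):
            return repr(node.value)
        if isinstance(node, ast.Name):
            return node.id
        raise NotImplementedError(type(node).__name__)

    return emit(ast.parse(expr, mode="eval").body)

print(to_js("a + 2 * b"))  # (a + (2 * b))
```

A real compiler of this kind additionally has to map the source language's object model, scoping, and standard library onto JavaScript, which is where the bulk of the work lies.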

like the UCSD P-system? (3, Insightful)

YesIAmAScript (886271) | about 6 years ago | (#25424821)

The future is the 70s?

Didn't UCSD do this in the 70s w/ pcode? (2, Insightful)

WillAdams (45638) | about 6 years ago | (#25425073)

If memory serves, all of their compilers compiled to a genericized ``pcode'' for which multiple engines existed (one per processor architecture I believe it was) --- all that was missing was multiple implementations per architecture.


VMs are not the future of anything (0)

Anonymous Coward | about 6 years ago | (#25426005)

Imagine being able to commercially sell a software program without worrying about your competitors reverse engineering your work and using it against you.

Java/.NET/Perl/Python et al. are all worthless until the reverse-engineering penalty, even with obfuscation, is anywhere near that of compiled C code.

The real innovation in programming languages is being able to say what you want rather than how to do it, as various execution engines for very application-specific systems already do. Spouting about languages and virtual machines is all noise.
