Dynamic Cross-Processor Binary Translation

timothy posted more than 13 years ago | from the trans-what-now? dept.

Programming 179

GFD writes: "EETimes has a story about software that dynamically translates the binary of a program targeted for one processor (say x86) to another (say MIPS). Like Transmeta they have incorporated optimization routines and claim that they have improved execution times between one RISC architecture and another by 25%. This may break the hammer lock that established architectures have on the market and open the door for a renaissance in computer architecture."


Re:Sounds like an emulator (1)

Anonymous Coward | more than 13 years ago | (#159607)

Actually, it sounds like what DEC developed for translating legacy binaries from VAX to ALPHA.

Think of it this way: a regular compiler converts source code into object code; this product takes object code for machine A and outputs object code for machine B. It's basically a compiler that parses object code input.

Normal emulation parses object code and then pretends to be the machine it was written for, and this has to be done every time you want to run the program. With a translator like this, you translate the object code just once.

Of course, if you have the source, you don't need this, at worst you need a cross compiler.
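
To make the distinction above concrete, here is a toy sketch in C; the two one-byte "machines" are invented purely for illustration. Emulation decodes the source opcodes on every run, while translation converts the image once and the result then runs natively.

    /* Toy sketch (not any real product's design): interpreting machine-A
     * object code on every run versus translating it once to machine B. */
    #include <stdio.h>

    enum { A_INC = 0x01, A_DEC = 0x02, A_HALT = 0xFF };   /* machine A opcodes */
    enum { B_ADD1 = 0x10, B_SUB1 = 0x11, B_STOP = 0x00 }; /* machine B opcodes */

    /* Emulation: decode machine-A opcodes every single time the program runs. */
    static int interpret_a(const unsigned char *code)
    {
        int acc = 0;
        for (size_t pc = 0; code[pc] != A_HALT; pc++)
            acc += (code[pc] == A_INC) ? 1 : -1;
        return acc;
    }

    /* Translation: convert the machine-A image to machine B once, up front. */
    static void translate_a_to_b(const unsigned char *a, unsigned char *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            b[i] = (a[i] == A_INC) ? B_ADD1 : (a[i] == A_DEC) ? B_SUB1 : B_STOP;
    }

    int main(void)
    {
        const unsigned char prog_a[] = { A_INC, A_INC, A_DEC, A_HALT };
        unsigned char prog_b[sizeof prog_a];

        printf("interpreted result: %d\n", interpret_a(prog_a));
        translate_a_to_b(prog_a, prog_b, sizeof prog_a);  /* done only once */
        /* prog_b would now run natively on machine B; no per-run decode cost. */
        printf("translated image: %02x %02x %02x %02x\n",
               prog_b[0], prog_b[1], prog_b[2], prog_b[3]);
        return 0;
    }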

MacOS (1)

Anonymous Coward | more than 13 years ago | (#159608)

Since the first PowerMacs, MacOS has been translating 68k code to PPC code on the fly, also doing some optimization.

Re:Dynamic Recompilation (1)

Anonymous Coward | more than 13 years ago | (#159609)

There are many examples of this, with source code, at http://www.cybervillage.co.uk/acorn/emulation/dynrcomp.htm [cybervillage.co.uk]

So far nobody's sued for patent infringement, and there should be plenty of prior art if anyone does. Of course, that won't stop assholes like TechSearch from harassing people anyway.

Re:Sounds like an emulator (1)

Phroggy (441) | more than 13 years ago | (#159611)

NeXTStep/OpenStep, Darwin and Mac OS X support cross-platform binaries that let you take a single app and run it across processors. You do (generally) have to run it on the same operating system, because of the APIs, but as long as the APIs are there, it works.

In reality, the compiler produces multiple binary executable files, but they're all contained within an application bundle (or package, I can never keep the terminology straight), which is really a folder containing lots of files, but which appears in the GUI to be a single double-clickable application. Localization can be done this way too; you just include all your text strings in each language, and choose whatever's appropriate based on the user's OS-wide preferences.

And by the way, the old Mac ROM is no longer an issue on Mac OS X, and on classic Mac OS it uses a file on the hard drive instead of querying the actual ROM whenever possible. This was one of the changes made when the original iMac was released.

--

Re:A crazy thought... - HP's Dynamo does it (1)

mprinkey (1434) | more than 13 years ago | (#159615)

I thought about something similar. Perhaps it was brought up some months ago when HP started telling people about Dynamo. I was thinking how interesting it could be to apply this as a post-compilation optimization technique. There is every reason to believe that the same techniques can be used to analyze and improve native code performance too.

In fact, this could be an interesting kernel module development project. Allow the Linux kernel to optimize running executables on the fly. Create permanent entries in the filesystem to store optimized binaries and perhaps a running binary diff so that the optimizer could undo something if need be. If enough of the research is in the literature and someone in the audience is frantically searching for a PhD research project...

Re:Sounds like an emulator (1)

Compuser (14899) | more than 13 years ago | (#159623)

No, my point was to develop something that would translate the OS and all other binaries as a whole. Think of ALL your binaries as one system. Now binary cross-compile. And yes, you'd have to identify BIOS calls and do something equivalent on another system. The reason why I chose Windows as an example was because it is a huge mess of cross-dependent code and, judging by its stability, not all the code is kosher. If your binary cross-compiler can handle Windows, it can be presumed to be good.

Re:Dynamic Recompilation (1)

csbruce (39509) | more than 13 years ago | (#159627)

They did a really clever thing of identifying long "runs" of code that nobody ever jumped into the middle of, then they treated them as one big instruction with a lot of side effects, and optimized them as a block. (Not one instruction at a time, but the whole mess into the most optimal set of new platform instructions they could.)

"Basic-block analysis", which is basically what this is, is a common technique that all good compilers perform on source code or intermediate representations of source code for optimization. This technique has probably been around since the 1960's.

It was quite clever. It's also quite patented, and has been since before the Power PC came out. (And in a sane world those patents would have expired by now, but with patent lengths going the way of copyright...)

The patent application must have read "basic-block analysis ...but on machine code!", to match all of the "...but on the web!" patents that have been granted in recent years. Innovation is truly dead.

Re:Sounds like an emulator (1)

macinslak (41252) | more than 13 years ago | (#159628)

I believe Alphas did something like this. The only real problem is that unless the underlying hardware is very much like that on the software's native platform, the OS likely won't boot. To make stuff like this work you'd either have to do the whole motherboard in software (like VirtualPC) or sell it as an add-on (Orange Micro and Apple used to do this); neither solution offers comparable performance to native hardware.

Re:=Goto (1)

greenrd (47933) | more than 13 years ago | (#159631)

No, I think you'll find a directed acyclic graph is a directed graph with no cycles. Nothing particularly to do with GOTOs.

Re:Solutions in search of a problem (1)

greenrd (47933) | more than 13 years ago | (#159632)

VMWare is an emulator - kind of. I'm running it now, and it emulates most of the hardware (you only have to look at the Control Panel - "Display: VMWare"), but not the processor itself.

Is that what he really said? (1)

Old Wolf (56093) | more than 13 years ago | (#159638)

"Translating CISC to RISC is bit like pushing uphill, ...," he said.

*giggle*

Nope ... (1)

taniwha (70410) | more than 13 years ago | (#159640)

it's more like an old idea come back to haunt us :-) this idea has been around for quite a while (like since the 60s). On the other hand, people have been doing this stuff in the mainstream more recently (the x86 recompilers to Sparc/Alpha, Java JIT, Transmeta etc. are all modern examples of this basic idea)

Re:Show ME the demo! (1)

Kidbro (80868) | more than 13 years ago | (#159643)

Aren't some Java JIT compilers doing something like this already?

--

On the fly vectored marketdroid hype generation (1)

graniteMonkey (87619) | more than 13 years ago | (#159644)

'nuff said

Not New (1)

BoyPlankton (93817) | more than 13 years ago | (#159645)

There used to be software that did this for translating x86 software to run on Alphas. It's not new. The interesting thing about the Alpha product was that it would actually optimize the executable over time to run on the RISC processor. In other words, the first time you would run it, it would run very slowly. But over time, as the executable was repeatedly analyzed, its performance would improve.

FX32 (1)

jeti (105266) | more than 13 years ago | (#159647)

Isn't that _exactly_ what FX32 did? AFAIK it dynamically translated binary code for Pentium to Alpha processors with runtime optimization.

I think it was released in or before 1997.

Re:Yup (1)

egomaniac (105476) | more than 13 years ago | (#159648)

...Of course, virtually all cell phones are moving towards Java currently. I can't imagine the rest of the embedded industry being very far behind.

Quoting EETimes... (1)

Acheon (122246) | more than 13 years ago | (#159650)

Slashdot admins definitely get more stupid and ignorant every day. This shit is called dynamic recompilation; it has existed and been implemented in both free and commercial projects for *years*. Besides, you have to be really disconnected from reality (and awfully idiotic) if EETimes tells you something you didn't know; that level of ignorance should be punishable by death.

Something is wrong here. (1)

_typo (122952) | more than 13 years ago | (#159651)

If what they're doing is transforming lower level instructions back to higher level ones and then recompiling them into another arch with better optimizations, then something is very wrong.

First, they should be using the actual higher level code in the first place since trying to guess what the program is supposed to be doing by looking at the optimized assembly output of a compiler is *ugly* at best.

And even if they can do this, the speedups obtained can only mean that the compiler used for the initial binary version of the program was sub-optimal and should be improved.

So forget trying to turn x86 assembly into whatever-assembly since that's just plain stupid and invest money in compiler optimization.

And if what they say is true, and this is not for the end user but the software house to use and develop, why would someone who has the actual sourcecode for the program spend time porting it to another platform by binary translation? They'd be better off writing some essential routine in assembly for the speedups needed.

What is really needed is an agreed form of bytecode (Java???) and good VMs designed for the architecture by the people making it in the first place. If the next Athlon/Pentium/PPC had been designed with being good at Java (or whatever VM is best) in mind, then maybe actual full applications could be written for it.

Re:Anyone remember FX!32? (1)

donglekey (124433) | more than 13 years ago | (#159652)

I'll chime in a little bit. It was used quite a bit in the graphics world because people would use a version of Lightwave 3D or Softimage that ran natively on Alpha chips, and then use other stuff that wasn't native, like Photoshop, to complement those programs. Not only did it do runtime optimization and such, but I guess after running something a few times, it would optimize more and more to native Alpha stuff so the program would run faster and faster each time it was run.

Sounds like... (1)

BiggestPOS (139071) | more than 13 years ago | (#159654)

An emulator? Haven't we been able to do this for a while, albeit with a great loss in speed? I really don't see anything super revolutionary here, just evolutionary.

Re:Sounds like... (1)

BiggestPOS (139071) | more than 13 years ago | (#159655)

I think it's the fact that crack rocks arrive in your mailbox usually on the same day as your Slashdot account gets mod points. I think CmdrTaco plans it this way...

Yup (1)

Rimbo (139781) | more than 13 years ago | (#159657)

"Or am i missing something about the significance?"

Yes: This is not a solution for PCs; it's a solution for embedded systems and telecommunications equipment. Think cellphones.

Show ME the demo! (1)

Dr. Spork (142693) | more than 13 years ago | (#159658)

Until I see OSX running smoothly on an Athlon box I'll write this off. Earlier posters are right in that this doesn't seem to be anything more than runtime compilation, which is old news. However, this would indeed be an important story if their efficiency claims pan out. Even if a 1.4GHz Athlon runs OSX as fast as a 1GHz G4 would, this would really shake things up.

It is in principle possible for this to work, because most applications are statically compiled, while dynamic binary compilation is free to optimize an application as it runs. GCC might know a lot about processor architecture but it can't know about what tasks you will be asking the compiled application to do, so it can't optimize for that. This is how on-the-fly recompilation can compensate for the overhead of the process itself.

Though the theory is sound, I get the impression that reality rarely matches the expectations of the nerd-geniuses who are charmed by this concept. I have a feeling that Transmeta, for example, thought this technique would deliver them the entire chip market on a platter. Well, what it got them isn't awful, but certainly on the margin and not a competition-killer. And I haven't seen a company staff more densely populated with nerd-geniuses than Transmeta. Anyway, here's to hoping. I sure would love to download this and then try out OSX. Maan, I bet Apple would be pissed if this got big!

Re:This is news why? (1)

Dr. Spork (142693) | more than 13 years ago | (#159659)

I suspect you're right. Because this is no time to shoot for a hyped-up IPO, these guys seem to be shooting at a buyout--and it makes sense. Think about all the chip vendors out there who are getting nervous that their own research on dynamic recompilation is falling behind Transmeta/HP/Compaq. In particular I'm thinking of AMD and Motorola, but others are possible too.

Re:Sounds like... (1)

-=OmegaMan=- (151970) | more than 13 years ago | (#159663)

How is post #4 redundant in any way, shape, or form? Are you reading from the bottom-up?

Re:Processor emulation, big deal (1)

Rentar (168939) | more than 13 years ago | (#159668)

I assume its primary use is applications, and applications in every OS that somehow deserves the name don't have to do any I/O themselves. They just call open(), read(), write(), ... and the OS does the real I/O. So the only thing that would come near your idea would be translating different calling mechanisms for OS calls, which should be rather straightforward.
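
As a toy illustration of that last point, remapping system-call numbers between ABIs is mostly a table lookup. All names and numbers below are invented placeholders, not real ABI values.

    #include <stdio.h>

    /* Hypothetical syscall numbers for a "source" and a "target" ABI. */
    enum { SRC_READ = 3, SRC_WRITE = 4, SRC_OPEN = 5, SRC_MAX = 8 };
    enum { TGT_READ = 4003, TGT_WRITE = 4004, TGT_OPEN = 4005 };

    static const int syscall_map[SRC_MAX] = {
        [SRC_READ]  = TGT_READ,
        [SRC_WRITE] = TGT_WRITE,
        [SRC_OPEN]  = TGT_OPEN,
    };

    /* When the translator meets a trap instruction it rewrites the syscall
     * number; argument registers would be shuffled the same table-driven way. */
    static int translate_syscall(int src_nr)
    {
        return (src_nr >= 0 && src_nr < SRC_MAX) ? syscall_map[src_nr] : -1;
    }

    int main(void)
    {
        printf("source read() -> target syscall %d\n", translate_syscall(SRC_READ));
        printf("source open() -> target syscall %d\n", translate_syscall(SRC_OPEN));
        return 0;
    }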

I think running a whole OS on this technology would be crazy ... maybe it's crazy enough that someone would try ... wait, Transmeta already did that :-)

Didn't they have this for Alpha/WinNT (1)

cant_get_a_good_nick (172131) | more than 13 years ago | (#159669)

I'm assuming the state of the art has been advanced since, but I remember something like this for the DEC Alpha. Nobody made software for NT on Alpha (I know a little bit about this, I was the webmaster for an Alpha NT beastie). I seem to remember some tool like this. It wouldn't compile-as-you-go but would compile pages or some weird subset. Cool part of it was the first time you hit a section of new code, you got a fault and the app died. But then the runtime would see that and recompile that section of code. I can just see some sysadmin telling his boss "Yeah, we've only got to run it a couple hundred times more and most code paths will be exercised and it should almost never die after that."

Musta worked well. You see Alphas running NT all over the place.

Re:Didn't they have this for Alpha/WinNT (1)

cant_get_a_good_nick (172131) | more than 13 years ago | (#159670)

I see from another comment this was called FX32. Sucked by any name....

Re:This is news why? (1)

cant_get_a_good_nick (172131) | more than 13 years ago | (#159671)

I recall AMD doing a deal with Transmeta, but I forgot where I read it.

Re:HP Dynamo project (1)

sqlrob (173498) | more than 13 years ago | (#159672)

Cool. An infinite speed computer! Run an emulator on an emulator on the processor and you've already seen a 44% increase. Do that a few more times and you're set.

Re:Sounds like... (1)

Sir_Real (179104) | more than 13 years ago | (#159673)

From what I understand (dangerous words), this takes a binary executable built with the original platform in mind and translates it to an executable binary for another platform. This idea is new (at least to me) since before, translation became MUCH more difficult once machine code was output, but possible. I imagine that this decompiles the binary, translates the machine code into some standard and simple (but mythical) machine language, and from there produces binaries for the different architectures. This is on the same level as yacc. Before yacc, writing a parser was a pain. All yacc did was define a standard method of describing the parser. I think that they've taken the source architecture as the describing standard and built a translator to produce generic and easily translatable target code.

I'm wrong a lot, but I know a lot of buzzwords.

Re:Sounds like an emulator (1)

PinkyAndThaBrain (206650) | more than 13 years ago | (#159679)

No, it means architecture-specific optimizations don't port 1:1 (DUH) and that their compilers still can't optimize DSP code as well as a human... of course the compiler doesn't have that much to work with; an on-the-fly reverse-compiled binary is not exactly the best input for a compiler.

By the looks of it this works exactly like Transmeta: most of the time run binary-translated code, sometimes revert to emulation (for instance for run-time optimization/running uncommon code/perhaps running code for the first time etc.).

x86 to MIPS = smokin' VR3 (1)

Rudeboy777 (214749) | more than 13 years ago | (#159682)

To use the examples from the post directly, this makes bringing Linux programs from your (x86) desktop to your (MIPS) Agenda [umbc.edu] even more trivial. This device just became even cooler!

Re:A crazy thought... (1)

BlowCat (216402) | more than 13 years ago | (#159683)

I remember a shareware DOS program that pretended to do exactly that.

Kewl! Even more vapour! (1)

AlXtreme (223728) | more than 13 years ago | (#159686)

So all the work done on the Alpha, Sun etc. versions of Linux is in vain? You just run the magic lill' program and it'll make even windoze work on an UltraSPARC.

God must forbid it! This is all a conspiracy of BillyBoy, don't say i ain't told you so!

Come on people, this is BS. It's a bloody IPO, they're just out there for the publicity...

Re:what's new with this? (1)

ClosedSource (238333) | more than 13 years ago | (#159691)

"And i could lead into the slashdot mantra of if all programs were opensource, we wouldn't need somethng as sloppy as an emulator anyhow... "

So the user just obtains development tools and recompiles the program for their system. OS issues aside, this is not a good solution for most users. Let the vendor do the cross-compiling, that's what we pay them for.

Re:McVeigh Update (1)

ClosedSource (238333) | more than 13 years ago | (#159692)

What about Generalissimo Francisco Franco?

Re:No, Open Source is the solution (1)

diamondc (241058) | more than 13 years ago | (#159693)

yeah, and it was closed source for how many years? i doubt it uses any assembly in the source, which would make it non-portable.

Why? (1)

Popocatepetl (267000) | more than 13 years ago | (#159697)

Moderators, I will save you some time reading this diatribe - this is flamebait.

Every time Slashdot announces something neato, a goodly amount of the readership poo-poos it. This is annoying by itself, but what I really dislike is the way such comments are moderated. I browse at 5, yet I still have to wade through the sour grapes and inflated egos.

It is easy to look down on an accomplishment that you didn't achieve. It seems to be hard to add useful content (some of you do a good job, though). Moderation trends perpetuate the problem.

Here are a couple of suggestions for good moderating:

  1. Moderate up posts that tell how that neato thing was done
  2. Moderate up hard, verifiable facts, not "I don't think this will work because the problem doesn't exist/it has been solved before/it's ok to make money and anyone who doesn't appreciate big business interests is harming open source/free software/freedom" Notice the illogical conclusion of the quoted material. I get the sense some people argue just for the sake of appearing cynically knowledgeable.
  3. My own post is irritating me at this point. I am going to stop right now.

Linux is dead. (1)

pagercam (309749) | more than 13 years ago | (#159698)

Now we don't have to use Linux for cross-platform support, now we can cross-compile Windows!!!

A question... About processor pairs (1)

weetabix (320476) | more than 13 years ago | (#159700)

So does each different pair of processors have a different release? Say Intel -> PPC or Sun -> Amiga?

And does this mean, perhaps, if we get this software running nicely in a chunk-o-firmware, we can have AMP (as opposed to SMP)?

Just curious...

Re:Mac 68K (CISC) to PPC (RISC) dynamic recompiler (1)

cosmo7 (325616) | more than 13 years ago | (#159703)

The very first PowerMacs (NuBus based) used instruction-by-instruction emulation to run all the old 68K Mac code, including some parts of the OS that were still 68K.

It was even cooler than that; the 68k emu used 68k toolbox calls which it would, of course, interpret for itself. It was a recursive emulator.

Is this more powerful than cross-compiling? (1)

Dan Ost (415913) | more than 13 years ago | (#159705)

Seems like this should be pretty simple for code that is already 100% portable.

Can it do more than this?

--Dan

Renaissance (1)

zoombah (447772) | more than 13 years ago | (#159708)

Renaissance in computing? I don't think so.

Equate this new development with WINE. WINE can run some windows software on unix, so that people don't have to switch. By this logic, WINE would have ushered in a new era in computing.

But it didn't, and this new binary compatibility scheme won't either. Bugs, incompatibilities, inconsistencies, etc on the ported platform will always give the native platform an advantage.

Besides, it is doubtful that software companies will provide support for other ported platforms, further reducing motivation to use this binary compatibility.

implications... (1)

leifb (451760) | more than 13 years ago | (#159709)

I wonder how long it'll be before we see a Java:x86 port.
Oh... wait...

=Goto (1)

Belly of the Beast (457669) | more than 13 years ago | (#159710)

directed acyclic graphs = GoTO

Re:Processor emulation, big deal (1)

Amazing Quantum Man (458715) | more than 13 years ago | (#159711)

Let's say my source architecture uses interrupt-based I/O. My target uses memory-mapped. Will this translator be able to handle that?

Non sequitur argument. You can have interrupt-based memory-mapped I/O (ask any PPC developer!).

I assume you meant to say port-mapped vs. memory-mapped.

Prior art (1)

return 42 (459012) | more than 13 years ago | (#159712)

A lot of people have pointed out this is nothing new. But that's a good thing. It makes it slightly less probable that the USPTO will issue them a patent. (Maybe. If someone challenges it. And has enough money to pursue the case.)

Re:Sounds like... (1)

destinyX (459257) | more than 13 years ago | (#159713)

Not exactly. I think they're pushing at directly translating a binary image to your architecture... then running it, not emulating it, but re-assembling it for your processor. The only problems I can see here are initial hardware problems like the streaming media of ARM, vs the byte code of most Java-based chips, vs the cached contents of Intel.... this could potentially get messy

Old news (1)

lesinator (459276) | more than 13 years ago | (#159714)

Tandem did this years ago. Their first machines (some time in the 1970s) were CISC. In the early 1990s they changed the hardware architecture and changed the CPUs to RISC. Code compiled for the same version of the OS on a CISC machine would run interpreted if the machine it was running on had RISC CPUs. Granted, native compiled RISC code ran faster, and it wasn't binary-compatible across major version changes of the OS. But it is basically the same.

Re:All Roads Lead to Open Source (2)

cduffy (652) | more than 13 years ago | (#159716)

Nonsense. Why not? This solution doesn't attack issues of competing API standards; it attacks the problem of different CPU architectures -- a problem which open source /does/ solve. You're right to claim that open source is not a silver bullet -- but it certainly is another solution to the primary problem which this development addresses.


(I'll grant you that this is *not* true for everything else the poster mentioned -- Java, .NET, etc).

Why bother? (2)

The Man (684) | more than 13 years ago | (#159717)

Why would anyone want this? Insist on source-available applications and you'll never get burned by this. You can just rebuild your applications from source on whatever system you happen to be using, and as an added bonus you'll be using a compiler that understands the target platform rather than relying on hacks.

There is more information content in the original code for an optimizer to make use of than there is in a binary (or assembly). If this were not the case, would not optimizers run *after* the assembly translation is done? In fact, all reasonable compilers run the vast majority of their optimizations *before* the translation occurs, and only a few small peephole optimizations are done on translated or nearly-translated code. The unfortunate (for them) facts are that:

  • Optimizations done after translation has finished are of limited value and generally produce only very small performance gains
  • There is no reason to translate binaries, with all the difficulty this entails, when it's much easier to simply recompile.
  • In many cases simple binary translation is ineffective anyway, since other properties of the systems are likely to differ (for example, different operating systems use different system calls, syscall numbers, or calling conventions). This requires a great deal of effort (consider replacing one 5-instruction system call with 582 instructions to make 7 different syscalls and include a large chunk of compatibility code to substitute for a system call that the target lacks) to work around, and it's difficult to get it completely right.

The verdict: don't fall for this. Even if it works, and even if it has no effect on performance in the common case, there's no benefit. The only useful things that can come of this are the magic peephole optimizations they might be using, which should go into general-purpose compilers.
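
For reference, the kind of peephole pass being dismissed here has roughly this shape. The sketch below uses a made-up instruction record, not any particular compiler's internals: it deletes back-to-back "push Rn; pop Rn" pairs, a classic peephole target.

    #include <stdio.h>
    #include <string.h>

    typedef struct { char op[8]; int reg; } Insn;   /* toy instruction record */

    /* Rewrite the instruction list in place, dropping redundant push/pop pairs. */
    static int peephole(Insn *code, int n)
    {
        int out = 0;
        for (int i = 0; i < n; i++) {
            if (i + 1 < n &&
                strcmp(code[i].op, "push") == 0 &&
                strcmp(code[i + 1].op, "pop") == 0 &&
                code[i].reg == code[i + 1].reg) {
                i++;                 /* skip both halves of the pair */
                continue;
            }
            code[out++] = code[i];
        }
        return out;                  /* new, shorter instruction count */
    }

    int main(void)
    {
        Insn prog[] = { {"load", 1}, {"push", 2}, {"pop", 2}, {"add", 1} };
        int n = peephole(prog, 4);
        for (int i = 0; i < n; i++)
            printf("%s r%d\n", prog[i].op, prog[i].reg);
        return 0;
    }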

Re:Why bother? (2)

The Man (684) | more than 13 years ago | (#159718)

some of my friends and relatives (gasp!) DON'T EVEN HAVE A COMPILER INSTALLED ON THEIR COMPUTER!!!

Why not? High-quality compilers are available, with source if desired, at zero cost.

Your arguments regarding optimization also apply to distributing files as Java byte code, but the simple fact is, for most applications, nobody gives a damn about optimization anymore anyway!

People who love to brag about their leet computers think this. Anybody who actually has to do work on them does not. Java is suckass-slow to the point of uselessness, and there's no excuse for wasting CPU power just to be lazy.

For the few cases in which cycles are that critical, shouldn't the code be written in hand-optimized assembly and made available in system libraries anyway?

Of course. Unfortunately I believe, unlike you, that there are more shades of code than just "performance-critical" and "non-performance-critical." The 90-10 rule is quite valid in most cases, and inner loops and such should be optimized, the best algorithms available used. But what about the parts of the application that aren't in the system libraries? What if my need for speed isn't just in strcmp(3) but also an AVL tree, an XML parser, and (insert foo here)? These should be written in a compiled high-performance language like C (never Java and almost never C++). It isn't any harder to do this right.

Have you tried lately to write a non-trivial application where the same source compiles on both Linux and Windows?

No, why would I? Dozens of vendors got together several years ago to define a standard (heard of POSIX?) to make sure this could be done with a minimum of pain. I can't help it if Microsoft was too busy (drawing mustaches on Larry Ellison|calling Scott McNealy a liar|embracing and extending its mother) that week. Want to run useful code? Use a real OS; there are plenty to choose from.

Anyone remember FX!32? (2)

Watts (3033) | more than 13 years ago | (#159721)

FX!32 was Digital's software for the Alpha that allowed you to run software written for x86 on Alpha processors under NT.

It used dynamic recompilation of the sort mentioned here, and from what I've heard, was at a pretty acceptable speed. It also did run-time optimization, or as Transmeta would put it, code morphing.

I believe there was also an FX!32 compatibility layer for Digital Unix and later Linux, although support was slightly more sketchy. If I remember correctly, this was around the time that Digital made it possible to use libraries compiled for Digital Unix under a Linux environment.

Anyone else have more to say about FX!32? I'd be interested in more info.

Re:HP Dynamo project (2)

woggo (11781) | more than 13 years ago | (#159722)

Dynamo dynamically optimizes binaries; an equivalent in the Java world is IBM's Jalapeno VM. Unfortunately, the Dynamo approach is only feasible on the HP architecture, because the PA-RISC chip has an absurdly large i-cache (extremely aggressive in branch prediction.

Re:these guys are full of it (2)

josepha48 (13953) | more than 13 years ago | (#159724)

and don't forget dosemu and wine while you are at it, as both of those are using some sort of 'binary translation' as well.

I think bochs is great as it allows Intel binaries to run on all sorts of other platforms. You just need a super fast PC to get some performance out of it..

I don't want a lot, I just want it all!
Flame away, I have a hose!

Sounds like an emulator (2)

Compuser (14899) | more than 13 years ago | (#159725)

The comment that this is not suitable for hand-optimized loops in DSPs plainly means that this is an emulator. What would be cool instead is if someone made a binary cross-compiler, so it would go through your hard disk's binaries and convert them from, say, x86 to PPC. Then you could take your hard drive from an x86 system, put it on a Mac and have Windows boot natively (modulo ROM issues on Macs). All without access to Windows source code.

Re:FX32 (2)

spectecjr (31235) | more than 13 years ago | (#159730)

Isn't that _exactly_ what FX32 did? AFAIK it dynamically translated binary code for Pentium to Alpha processors with runtime optimization.


Yep, apart from the fact that it did it for 386 and up --> Alpha.

I think it was released in or before 1997.

Before - 1996 or 1995, IIRC.

Simon

No, Open Source is the solution (2)

IPFreely (47576) | more than 13 years ago | (#159733)

The problem is caused by binary distribution.

The solution is source distribution.

Compilers know more about the program than translators do, and they also allow linking to native libraries. Can a translator do that?

Yet another proprietary solution to fix another problem caused by proprietary solutions.

Re:This was being worked on a few years ago. (2)

QuantumG (50515) | more than 13 years ago | (#159734)

Mike Van Emmerik is still working on the project, as are 4 students. The project leader, Cristina Cifuentes, is currently doing research at Sun Labs on commercial extensions of this work. There will be an open source project at the end of the year apparently; the code has already been released under a BSD-style license but it is not publicly available as yet. The funding from Sun was a gift to Dr Cifuentes simply because they liked what she was doing. I was just a happy employee when I wrote that broken backend.

Tall Tales of the computer age (2)

MobyDisk (75490) | more than 13 years ago | (#159736)

I'm not sure if EETimes is oversimplifying, or if Transitive Technologies is filling heads with BS.

"...Translation, sometimes called software emulation..."
Translation != emulation

"...Crusoe specifically takes X86 code...In contrast, Transitive's...[fluffy adjectivies]...can, in theory, be tailored for many processor pairs.."
Crusoe isn't X86 specific, and it can be tailored for many processor pairs in reality, not just in theory.

"...We have seen accelerations of code of 25 percent..." doesn't mean that everything runs 25% faster. I don't even hear Transitive technologies saying that it does.

I wonder how many more companies will come up with new and innovative techniques like this now that Transmeta has become very noticeable? I wonder how long before the cash-strapped Transmeta starts filing patent infringement suits? (Please Linus, make them play fair!)

wow! (2)

jasno (124830) | more than 13 years ago | (#159738)

Now if they could figure out a way to deal with endianness, and the other 99% of the platform specific stuff in most code, it might be worth something...

Re:Solutions in search of a problem (2)

Tuzanor (125152) | more than 13 years ago | (#159739)

VMWare isn't an emulator.

Wrong problem domain, IPFreely... (2)

Rimbo (139781) | more than 13 years ago | (#159742)

If you're thinking in terms of desktop systems and software written in high-level languages, you're right. But the target market of this company is the embedded systems world, where the code is typically hand-optimized assembly and even custom-made instruction sets for systems that are built from heterogeneous proprietary systems. Some proprietary chips are better than others, and often you don't know which is the best solution until you've already implemented the whole thing.

For the telecom industry, this solution, if it works, is a very good one.

Dont' just say recompile (2)

bentini (161979) | more than 13 years ago | (#159743)

Honestly, you can't just say recompile the code. It's not practical, and it doesn't work. If it did, RISC would rule the world, but it doesn't. NT was even put onto various RISC architectures, and it didn't work. Translation is the only way to give processors a chance for legacy code base.

If you say otherwise, you're ignoring history. RISC processors rock for most applications. Look at Transmeta: a 700 MHz Crusoe can act as, worst case, a 300 MHz Pentium III, using a lot fewer transistors and a lot less power. If MP actually worked, you could turn that advantage in silicon space into performance/power.

Opinions?

Re:A crazy thought... - HP's Dynamo does it (2)

null-und-eins (162254) | more than 13 years ago | (#159744)

I wonder if they could run the optimizer without the translation layer (or make a ChipX-to-ChipX dummy translation), and squeak some extra performance out of code on any platform?
This is actually done by the Dynamo Project [hp.com] by HP. From their page:
The motivation for this project came from our observation that software and hardware technologies appear to be headed in conflicting directions, making traditional performance delivery mechanisms less effective. As a direct consequence of this, we anticipated that dynamic code modification might play an increasingly important role in future computer systems.

Consider the following trends in software technology for example. The use of object-oriented languages and techniques in modern software development has resulted in a greater degree of delayed binding, limiting the program scope available to a static compiler, which in turn limits the effectiveness of static compiler optimization. Shrink-wrapped software is shipped as a collection of DLLs (dynamically linked libraries) rather than a single monolithic executable, making whole-program optimization at static compile-time virtually impossible.

Even in cases where powerful static compiler optimizations can be applied, the computer system vendors have to depend on the ISV (independent software vendor) to enable these optimizations. But most ISVs are reluctant to do this for a variety of reasons. Advanced compiler optimizations generally slow down compile-times significantly, thus lengthening the software development cycle. Furthermore, a highly optimized binary cannot be debugged using standard debugging tools, making it difficult to fix any bugs that might be reported in the field. The reluctance by ISVs to enable advanced machine specific optimizations puts computer system vendors in a difficult position, because they do not control the keys to unlock the performance potential of their own systems!

-yawn- (2)

Reality Master 101 (179095) | more than 13 years ago | (#159746)

"Our claim is that we can run 1:1 or [even] better than native speeds"

Bullshit.

Wake me when these guys go out of business. Been here, seen this. The x86 emulator guys made the same claims for their Mac-based emulators, almost word for word. (I won't even get into Transmeta's claims that have turned out to be similar bullshit).

This is just a special case of an optimizing compiler, which Java run-time optimizers also fall into.

These claims, as well as the claims for the "magic compiler" that can produce code better than humans, will never pan out until we have real human-level AI that can "understand" the purpose of code. You can only get so far with narrow-vision algorithmic optimization, as proven by the failure of 40 years of research. (Failure only in the sense of producing code as good as a human can.)


--

EVO/REVO-lution? (2)

itzdandy (183397) | more than 13 years ago | (#159747)

this is neither an evolution nor a revolution. it is a quick fix for the legacy problems of "modern" computers.

but still...

a good idea. for example, the article states that a CISC to RISC translation would still be inefficient, but how much so? would a 1.4GHz Athlon be equivalent to a 500MHz PPC or would it be better? could this allow a much more useful form of emulation, as in i can't afford a G5 Mac for Mac OS X so i'll just use my Athlon?

also, with the claim of possible speed improvements across RISC to RISC translation, this may light a bit of a fire under the arses of some of the big players (Intel, IBM) to build a new architecture with these optimizations in hardware.

this could be used as a tool for competition with Transmeta with some good hardware backing it up. a CPU could be made as a base and the translation hardware could be pre-programmed to emulate multiple platforms. people would no longer have to worry about which architecture their WinCE apps are compiled for because their chip would run MIPS or ARM at native-like speeds.

Environment everybody? (2)

zoftie (195518) | more than 13 years ago | (#159751)

Just table-based translation with a few ifs engrained in the code is not good justification for hype, however some companies survive just that way. Anyhow, they managed to emulate other chips in hardware. That's like a carb in a car that can work on the same fuels, alas the fittings are not the same, so they cannot be integrated into an engine environment without some heavy modifications.

Being able to run code from a Pentium on your chips that just modifies registers and address ranges is an interesting challenge, but it's just that. Drivers written for 'common' environments surrounding chips would not work on new platforms, and if they do, that will mean that the new platform is just an old one with a new processor, which to externals is just like a plain Pentium chip. A feat like VMware is more admirable, thanks to those CISC commands that allow for multiputer-based technologies.

New statements like the ones Apple made a while ago, and as Sun did with their hardware, are more forward thinking than that mere table lookup embedded in hardware.

Remember some companies survive on hype, hyping old or new technology. Transmeta has firmly placed itself in that market share, so it will be tough for this company in the near term.

Hmmmmm (2)

grovertime (237798) | more than 13 years ago | (#159752)

I appreciate the claim made here, and in fact am excited by the possibility of the "renaissance" that is spoken of. But much like Transmeta, how much of this is true? Are there third parties doing the testing on this yet? If so, where are their results and conclusions?

  1. is this.....is this for REAL? [mikegallay.com]

Re:Humorous context... (2)

blair1q (305137) | more than 13 years ago | (#159754)


I think that I shall never see
A program lovely as a directed acyclic graph

Sounds familiar... (2)

rdean400 (322321) | more than 13 years ago | (#159755)

IBM was doing this exact same thing with DAISY, although the scope seems a bit narrower: http://www.research.ibm.com/daisy/ It's very interesting that we're just now talking about this stuff. It may get to a point where PC architectures will be able to do something similar to what an AS/400 does....the application is insulated from the hardware completely, and when transported to a new architecture, it automatically translates to run on the new architecture, fully able to exploit the abilities of that architecture.

Old News (2)

complexmath (449417) | more than 13 years ago | (#159757)

Ars Technica did an article on this topic a year ago. Check this link [arstechnica.com] for the article.

what's new with this? (3)

um... Lucas (13147) | more than 13 years ago | (#159758)

We've had processor and machine emulators and processor independance for so long now...

SoftPC, Soft Windows, Virtual PC, XF86, Virtual Playstation, Java, WINE, Wabi, MAME, and so many others...

Why should this one be the news?

While Java was basically the only one that's tried to dislodge x86, they've all shown that while it's feasible to run another architecture's binaries on top of a CPU, it's not the preferred way of doing things.

YAE (yet another emulator)

And big deal if it only translates a program from one binary arch. to another... Without an equivalent OS, the calls have nothing to be translated into...

And i could lead into the slashdot mantra of if all programs were opensource, we wouldn't need something as sloppy as an emulator anyhow...

Or am i missing something about the significance?

Re:Sounds like an emulator (3)

TheTomcat (53158) | more than 13 years ago | (#159761)

If everyone wrote assembler, and didn't ever depend on anyone else's libraries (including OS and BIOS), this would work.

But this won't work for the same reasons that DOS software won't run natively on Linux. There's too much dependence on general-use code (like OS-based interrupts (21h, f'rinstance)). (Not that that's a bad thing, just in this circumstance it makes straight-up translation impossible.)

Mac 68K (CISC) to PPC (RISC) dynamic recompiler (3)

kriegsman (55737) | more than 13 years ago | (#159762)

Every PCI PowerMac has a 68K (CISC) to PPC (RISC) dynamic recompilation emulator in it that it uses for executing 68K code. And MHz for MHz, the execution speed of the 68K code when dynamically recompiled as PPC code, is roughly comparable (plus or minus 50%?) to the speed of the original 68K code on a 68K processor.

The very first PowerMacs (NuBus based) used instruction-by-instruction emulation to run all the old 68K Mac code, including some parts of the OS that were still 68K.

The second generation PowerMacs (PCI based) included a new 68K emulator that did "dynamic recompilation" of chunks of code from 68K to PowerPC, and then executed the PPC code; this resulted in significantly faster overall system performance.

Connectix later sold a dynamic recompilation emulator ("Speed Doubler") for Nubus PowerMacs, that did, in fact, double the speed of those machines for many operations, mainly because so much of the OS and ROM on the first-gen PowerMacs was still 68K code.

I think that dynamic recompilation has a bright future; x86 may eventually be just another "virtual machine" language that gets dynamically recompiled to something faster/more compatible/etc at the last moment.

-Mark

Re:All Roads Lead to Open Source (3)

El (94934) | more than 13 years ago | (#159763)

As opposed to BASIC, which is an extremely stupid solution to the same problem?

I don't see how the problem that I'd like to send you dynamic content via email without requiring you to be running the same CPU as I am is caused by closed standards. On the contrary, it seems to be an inevitable side effect of competition in the processor market. Yes, it is an obvious solution: given that I can do on-demand translation to the Java Virtual Machine, how much harder is it to do on-demand translation to the instruction set of a real CPU?

All Roads Lead to Open Source (3)

zpengo (99887) | more than 13 years ago | (#159764)

It's funny how things are heading these days. Java, .NET, and dynamically-translating processors are all "brilliant solutions" to a problem that was caused by closed standards in the first place.

Re:Solutions in search of a problem (3)

Rimbo (139781) | more than 13 years ago | (#159765)

You're right in the case of the desktop and applications world. However, in the embedded world, such as cellphones and 802.11, this is VERY useful. The problem of multiple proprietary platforms is the current bane of the telecom industry, which this company is clearly targeting.

A crazy thought... (3)

CraigoFL (201165) | more than 13 years ago | (#159766)

From the article:
"Translating CISC to RISC is bit like pushing uphill, but we can get close to parity in performance assuming the same clock speed," he said. "That's because we work the 90:10 rule on the fly. The software spends 90 percent of its time in 10 percent of the lines of code. That means for RISC-to-RISC and CISC-to-CISC translations, we are able to make improvements. We have seen accelerations of code of 25 percent."
I wonder if they could run the optimizer without the translation layer (or make a ChipX-to-ChipX dummy translation), and squeak some extra performance out of code on any platform?
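
The 90:10 observation itself is easy to sketch: count how often each block of the subject program executes and spend optimization effort only on blocks that cross some threshold. A toy version in C follows; block granularity and the threshold are arbitrary illustrative choices.

    #include <stdio.h>

    #define NBLOCKS   8
    #define HOT_LIMIT 50     /* hypothetical "this block is hot" threshold */

    static unsigned long exec_count[NBLOCKS];

    static void run_block(int b)
    {
        exec_count[b]++;
        /* ...dispatch to the (translated or interpreted) code for block b... */
    }

    int main(void)
    {
        /* Fake workload: block 2 dominates, mimicking the 90:10 behavior. */
        for (int i = 0; i < 1000; i++)
            run_block(i % 10 == 0 ? 5 : 2);

        for (int b = 0; b < NBLOCKS; b++)
            if (exec_count[b] > HOT_LIMIT)
                printf("block %d is hot (%lu runs): worth re-optimizing\n",
                       b, exec_count[b]);
        return 0;
    }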

Re:Show ME the demo! (3)

Spy Hunter (317220) | more than 13 years ago | (#159767)

GCC might know a lot about processor architecture but it can't know about what tasks you will be asking the compiled application to do, so it can't optimize for that.

Hmmmm, I just had a crazy idea. What if you could compile your GCC application in a special way, then run it under simulated normal working conditions and have it log performance data on itself, just the kind of data that these run-time optimizers gather. Then, you could feed GCC this collected data along with your application's source and recompile it and GCC would be able to turbo-optimize your app for actual usage conditions! If it can be done on-the-fly at run-time, it can be done even better at compile time with practically unlimited processor time to think about it.

Even if the end-user used the application in a nonstandard way it might still provide a performance benefit because there are lots of things that a program does the same way even when it is used in a different way.

Would this be feasible? Would it provide a tangible performance benefit? (like HP's Dynamo?) Comments please!
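
For what it's worth, gcc grew essentially this workflow as profile-guided optimization. A minimal example follows; the program is a toy, and while -fprofile-generate and -fprofile-use are real gcc options, the exact flag names have varied across gcc versions.

    /* Hypothetical build sequence:
     *   gcc -O2 -fprofile-generate hot.c -o hot    # instrumented build
     *   ./hot                                      # run under typical input
     *   gcc -O2 -fprofile-use hot.c -o hot         # rebuild using the profile
     */
    #include <stdio.h>

    int main(void)
    {
        long long sum = 0;
        /* The profile tells the compiler this loop is hot and that the branch
         * below is almost never taken, so it can lay out code accordingly. */
        for (int i = 0; i < 10000000; i++) {
            if (i % 1000 == 0)
                sum -= i;
            else
                sum += i;
        }
        printf("%lld\n", sum);
        return 0;
    }

The same profile data also feeds inlining and block-layout decisions, which is roughly what the run-time optimizers do continuously.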

Re:No, Open Source is the solution (4)

Anonymous Coward | more than 13 years ago | (#159768)

Yeah, that's why Star Office runs so well on SGIs. It's been open-sourced for almost a year now, and still hasn't been compiled successfully on MIPS.

Re:Dynamic Recompilation (4)

landley (9786) | more than 13 years ago | (#159769)

MetroWerks here in Austin did the emulation layer for Apple's M68K->power switchover. They did a really clever thing of identifying long "runs" of code that nobody ever jumped into the middle of, then they treated them as one big instruction with a lot of side effects, and optimized them as a block. (Not one instruction at a time, but the whole mess into the most optimal set of new platform instructions they could.)

It was quite clever. It's also quite patented, and has been since before the Power PC came out. (And in a sane world those patents would have expired by now, but with patent lengths going the way of copyright...)

Eventually, when the patents expire, this sort of dynamic translation will be one big science with Java JITs, code morphing, and emulation all subsets of the same technology. And somebody will do a GPL implementation with pluggable front and back ends, and there will be much rejoicing.

And transmeta will STILL be better than iTanium because sucking VLIW instructions in from main memory across the memory bus (your real bottleneck) is just stupid. CISC has smaller instructions (since you can increment a register in 8 bits rather than 64), and you expand them INSIDE the chip where you clock multiply the sucker by a factor of twelve already, and you give it a big cache, and you live happily ever after. Intel's de-optimizing for what the real bottleneck is, and THAT is why iTanic isn't going to ship in our lifetimes.

Rob

Transitive Software (4)

philj (13777) | more than 13 years ago | (#159770)


Here's the homepage for the company - Transitive Software [transitives.com]
(Apologies for the Karma whoring)

these guys are full of it (4)

dutky (20510) | more than 13 years ago | (#159771)

I'm not saying that their product doesn't work (though I seriously doubt that they can get an improvement in speed from anything other than their hand-picked benchmarks) but that they are probably just trying to spin an established (but under-reported) technology in order to attract venture capital.

There is no renaissance in computing that will be ushered in by this product. We have already seen its like with DEC's FX32 (Intel to Alpha) and Apple's synthetic 68k (M68k to PowerPC), as well as a number of predecessors (wasn't there something like this on one or another set of IBM mainframes?) and current open source and commercial products (Plex86, VMware, Bochs, SoftPC, VirtualPC, VirtualPlaystation, etc.), all of which use some amount of dynamic binary translation, and none have set the world on fire. They are mildly useful for some purposes, but the cost of actual hardware is low enough to kill their usefulness in most applications.

I wish these guys luck, but I doubt anyone will be too enthusiastic about this product. They might have stood a chance if they'd pitched this thing a year or two earlier (when there was lots of dumb money looking to be spent) but they are probably toast today.

This was being worked on a few years ago. (4)

saurik (37804) | more than 13 years ago | (#159772)

This was being worked on a few years ago by some people at The University of Queensland. Unfortunately, they got tired of the project (and, if I remember correctly, they weren't getting much popular support).

Their website is at :
http://www.csee.uq.edu.au/~csmweb/uqbt.html [uq.edu.au]

"UQBT - A Resourceable and Retargetable Binary Translator"

To note, they mention that they got some funding from Sun for a few years. (Likely either causing or due to their work on writing a gcc compiler back-end that emits Java byte-codes.)

Re:Why bother? (4)

El (94934) | more than 13 years ago | (#159773)

Why bother? Suppose I come up with a neat program on my SparcStation, and I want to email to all my friends to show it off. Now, maybe recompiling from source isn't a problem for you and your small circle of friends, but truth be told, some of my friends and relatives (gasp!) DON'T EVEN HAVE A COMPILER INSTALLED ON THEIR COMPUTER!!! My only choice is to send them an executable. Again, maybe you have such a small circle of friends that you can keep track of what kind of computer each of them is running. But quite frankly, some of my relatives, when asked "do you have an x86, PowerPC, 68000, or Sparc chip in that there puppy" can only respond with "huh?!?"

Your arguments regarding optimization also apply to distributing files as Java byte code, but the simple fact is, for most applications, nobody gives a damn about optimization anymore anyway! Let's see, even if your favorite text editor were 100, or even 1000 times slower, would you be able to type faster than it can buffer input? I don't think so! For the few cases in which cycles are that critical, shouldn't the code be written in hand-optimized assembly and made available in system libraries anyway?

Your argument that straight binary translation is useless, and that you also need to re-create the entire run time environment, is a good point. This, however, is an argument in favor of using Java (or some equivalent), and is an argument AGAINST distributing everything as source. Have you tried lately to write a non-trivial application where the same source compiles on both Linux and Windows? (It can be done, but it is EVEN LESS FUN THAN HERDING CATS!) Fact is, this whole discussion is fairly pointless because run-time environment compatibility is both much more important and much harder to achieve than mechanical translation of one machine's opcodes to another machine's opcodes.

Re:All Roads Lead to Open Source (4)

egomaniac (105476) | more than 13 years ago | (#159774)

Yes, and look how well that's worked for the Unix camp.

I'm not dissing open source -- I'm just pointing out the realistic view that it doesn't instantly solve all your problems. I realize that just about everybody on Slashdot will freak out about this, but I actually don't like Linux. I don't use it. I think it's just another Unix, and much as I dislike Windows I don't have to spend nearly as much time struggling with it just to (for example) upgrade my video card. (Cue collective gasps from audience). Unix has its place, and I think that place is firmly in the server room at this point in time.

(Disclaimer: This is not intended to be a troll. Please don't interpret "I don't like Linux" as "I think Windows is better than Linux", because I don't like Windows either. I think they're both half-assed solutions to a really difficult problem, and I think we can do better. What I mean is more along the lines of "If you think the open source community has already created the Holy Grail of operating systems, you've got to get your heads out of your asses and join the real world")

So, if your thinking is that Linux should be the only platform because it represents the One True Way -- I answer that by saying that you sound an awful lot like a particular group in Redmond, WA that also thinks their platform is the One True Way.

This industry cannot exist without competition, open *or* closed. Saying that these problems exist because your platform is not the only one in existence is incredibly childish.

Processor emulation, big deal (4)

poot_rootbeer (188613) | more than 13 years ago | (#159775)

Let's say my source architecture uses interrupt-based I/O. My target uses memory-mapped. Will this translator be able to handle that?

To be honest, translating one CPU's version of 'CMP R1, R2' to another's doesn't sound like it will usher in a renaissance of anything.

-Poot

Reliable? Optimal? Supported? (4)

BlowCat (216402) | more than 13 years ago | (#159776)

If we are talking about open source:
How many people would want to run a "translated" web server? Database? Scientific application? How reliable can it be? Why not recompile it natively?

If we are talking about closed source:
The same questions except the last one, plus lack of technical support for non-native architectures, at least by some vendors (e.g. Apple).

Crazy Like a Fox... (4)

The Monster (227884) | more than 13 years ago | (#159777)

I wonder if they could run the optimizer without the translation layer (or make a ChipX-to-ChipX dummy translation), and squeak some extra performance out of code on any platform?
I had exactly the same thought. The article says [as always emphasis mine]
The Dynamite architecture is based around a translation kernel, with a front end that takes code aimed at a source processor and a back end that aims the translation at a new target. The front end acts as an instruction decoder, building an abstract, intermediate representation of the subject program in the form of what Transitive calls "directed acyclic graphs." The kernel can then perform abstract, machine-independent optimizations on this representation.
So, x86 => DAG => x86 should work just fine. In fact, x86 => DAG => x86 => DAG => x86 should produce exactly the same code on the second iteration; I wouldn't be surprised if Transitive is doing exactly this to test whether the optimizer is working correctly. At this point, Dynamite sounds conspicuously like Dynamo.

I wonder if the specs for DAG will be open so that code can be compiled directly to it, optimized, and then distributed, saving the first two steps in the process. I can see commercial software vendors being all over this idea.
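
As an aside, a DAG intermediate representation can be sketched in a few lines of C. The node layout below is invented for illustration and is certainly not Transitive's actual structure; the point is that a repeated subexpression is represented by one shared node, which is what makes the graph a DAG rather than a tree and what lets a machine-independent pass spot the redundancy.

    #include <stdio.h>

    typedef struct Node {
        const char *op;              /* "reg", "const", "add", "mul", ... */
        int value;                   /* register number or constant */
        struct Node *left, *right;   /* operands; NULL for leaves */
    } Node;

    int main(void)
    {
        /* Build the DAG for (r1 + 4) * (r1 + 4): the subexpression is shared. */
        Node r1   = { "reg",   1, NULL, NULL };
        Node four = { "const", 4, NULL, NULL };
        Node add  = { "add",   0, &r1,  &four };
        Node mul  = { "mul",   0, &add, &add };  /* both operands are one node */

        /* A back end walking this graph sees a single add feeding both inputs
         * of the multiply, so it only has to emit the addition once. */
        printf("mul shares its operand node: %s\n",
               mul.left == mul.right ? "yes" : "no");
        return 0;
    }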

Dynamic Recompilation (5)

Anonymous Coward | more than 13 years ago | (#159778)

This sounds an awful lot like the dynamic recompilation of MIPS to x86 done in many emulators (such as UltraHLE [ultrahle.com] , Nemu [nemu.com] , Daedalus [boob.co.uk] and PJ64 [pj64.net] ).

I've been working on the dynarec for Daedalus for about 2 years now, and currently a 500MHz PIII is just about fast enough to emulate a 90MHz R4300 (part of this speed is attributable to scanning the ROM for parts of the OS and emulating these functions at a higher-level). Of course, optimisations are always being made.

After reading the article, I'd be very interested to see if they can consistently achieve the 25% or so speedups that they claim (even between RISC architectures).

For those interested, the source for Daedalus is released under the GPL.
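
The dispatch core of a dynarec like this can be sketched briefly. The C below is a generic illustration of the idea (a translation cache keyed by guest block address: translate on first hit, reuse afterwards) and is not Daedalus's actual code; collision handling and real code emission are omitted.

    #include <stdio.h>

    #define CACHE_SIZE 1024

    typedef void (*translated_fn)(void);       /* host code for one guest block */

    static translated_fn cache[CACHE_SIZE];    /* guest address -> host code */

    static void translated_block_stub(void)    /* stand-in for emitted host code */
    {
        /* a real dynarec emits machine code into executable memory here */
    }

    static translated_fn translate(unsigned guest_pc)
    {
        printf("translating guest block at 0x%x (first time only)\n", guest_pc);
        return translated_block_stub;
    }

    static void run_guest_block(unsigned guest_pc)
    {
        unsigned slot = guest_pc % CACHE_SIZE; /* toy hash; collisions ignored */
        if (cache[slot] == NULL)               /* miss: pay translation cost once */
            cache[slot] = translate(guest_pc);
        cache[slot]();                         /* hit: run cached host code */
    }

    int main(void)
    {
        for (int i = 0; i < 3; i++)            /* same block runs three times... */
            run_guest_block(0x80001000);       /* ...but is translated only once */
        return 0;
    }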

Re:HP Dynamo project (5)

woggo (11781) | more than 13 years ago | (#159779)

whoops! that's supposed to say
Dynamo dynamically optimizes binaries; an equivalent in the Java world is IBM's Jalapeno VM. Unfortunately, the Dynamo approach is only feasible on the HP architecture, and it is only feasible on HP-PA because the PA-RISC chip has an absurdly large i-cache (greater than 1 MB). Look at HP's Dynamo site for more information, but IIRC the problem is dynamo's extremely aggressive branch prediction.

damn slashcode...

Re:-yawn- (5)

lostguy (35444) | more than 13 years ago | (#159780)

ahem [hp.com]. Ignorance does not equal proof.

To quote:

The performance results of Dynamo were startling. For example, Dynamo 1.0 could take a native PA-8000 SpecInt95 benchmark binary, created by the production HP PA-8000 C compiler using various optimization levels, and sometimes speed it up more than 20% over the original binary running standalone on the PA-8000 machine.

That's binary translation from/to the same machine.

This is basically run-time instruction block reorganization and optimization, which can definitely improve a given binary on a given machine, over compile-time optimizations. Admittedly, a native binary, run through this kind of profile-based optimizer, will probably be faster than a translated-then-optimized binary, but neither you or I can state that with any authority.

Humorous context... (5)

Durinia (72612) | more than 13 years ago | (#159781)

...the subject program in the form of what Transitive calls "directed acyclic graphs."

Wow! What innovative technology! I wonder when they will patent this so-called "directed acyclic graphs". And they picked such a cool name! It sounds so mathematical!

Okay, enough laughing at the expense of clueless reporters...

HP Dynamo project (5)

dmoen (88623) | more than 13 years ago | (#159782)

This sort of technology has been around a long time. HP's Dynamo [hp.com] project has been running since 1995. When Dynamo is run on an HP PA-RISC and is used to emulate HP PA-RISC instructions, speedups of up to 20% are seen. That's pretty astonishing: you would think that emulating a processor on that processor would be slower, not faster.

Doug Moen.

Solutions in search of a problem (5)

Chairboy (88841) | more than 13 years ago | (#159783)

While this is fascinating-sounding technology, it sounds more like a solution in search of a problem. There are already software solutions for emulation (SoftPC, VMWare, etc). There are already cross-platform language solutions (Java, etc) and so on. Despite this, the market for massively cross-platform applications has not really developed. It isn't as if a 25% performance increase is what's holding back the 'renaissance' the author speaks of.

Re:All Roads Lead to Open Source (5)

egomaniac (105476) | more than 13 years ago | (#159784)

I realize that Slashdotters love to trumpet the Open Source horn, but this comment is absurd. "Open Source" != "runs on all platforms".

The amount of work necessary to get a complicated X app running on many different flavors of Unix is certainly non-trivial, and that's just *one* family of operating systems. And it either requires distributing umpteen different binaries or requiring end users to actually compile the whole damned program. All well and good for people whose lives are Unix, but do you *seriously* expect Joe Computer User to have to compile all his applications just to use a computer? ("how hard is it to type 'gmake all'?" I hear from the audience... as if you'd expect your grandma to do it, and you've *never once* had that result in forty-six different errors that you had to fix by modifying the makefile. make is not the answer)

The problem of having a program run on multiple platforms is not "caused by closed standards in the first place" as you state. It is caused simply by having multiple standards -- closed or open makes no difference. SomeRandomOpenSourceOS (TM) running on SomeRandomOpenSourceProcessor (TM) would have just as much trouble running Unix programs as Windows does. This is a great solution to a real problem; don't knock it just because you have a hardon for Linux.