
OpenRISC Gains Atomic Operations and Multicore Support

Unknown Lamer posted about 3 months ago | from the now-that's-hardware-hacking dept.

Hardware Hacking

An anonymous reader writes "You might recall the Debian port that is coming to OpenRISC (which, by the way, is making good progress, with 5000 packages building) — Olof, a developer on the OpenRISC project, recently posted a lengthy status update about what's going on with OpenRISC. A few highlights are upstreamed binutils support, multicore becoming a thing, atomic operations, and a new build system for System-on-Chips."


slashdot is gains (-1)

Anonymous Coward | about 3 months ago | (#46998177)

awesome headlines

Re:slashdot is gains (0)

Anonymous Coward | about 3 months ago | (#47001749)

Bro, you tryin to steal my sick gainz bro?

How did OpenRISC not have atomic ops until now? (0)

Anonymous Coward | about 3 months ago | (#46998187)

Seriously.

Re:How did OpenRISC not have atomic ops until now? (3, Informative)

mwvdlee (775178) | about 3 months ago | (#46998447)

RTFA.
With a single core, it worked without atomic operations (albeit non-optimally. But then, which CPU is optimal?).

Re:How did OpenRISC not have atomic ops until now? (2)

50000BTU_barbecue (588132) | about 3 months ago | (#46998743)

6502! Just kidding.

Re:How did OpenRISC not have atomic ops until now? (2)

Ziran (1931202) | about 3 months ago | (#46999035)

6502! Just kidding.

1 Accumulator and 2 Index Registers. Can't get more optimal than that!

Re:How did OpenRISC not have atomic ops until now? (0)

Anonymous Coward | about 3 months ago | (#46999323)

Let's not forget the stack pointer and the zero page. If the 6502 had allowed you to modify the address of the zero page it'd have had a very sophisticated (for the time) register file capability.

Re:How did OpenRISC not have atomic ops until now? (0)

Anonymous Coward | about 3 months ago | (#46999971)

PDP-11.

And not kidding. The easiest CPU to do the most complex things with. Too bad DEC went overboard with the VAX. A simpler architecture in between would have been much better.

Atomic operations in 6502 (2)

tepples (727027) | about 3 months ago | (#46999117)

The original 6502 had atomic operations. Read-modify-write operations on memory, such as bit shifting or adding or subtracting 1, would execute a read-write (old value)-write (new value) sequence. This protocol of not waiting between a read of a particular address and writing the new value would allow a memory controller to lock the bus by allowing only one device to write at once. This feature was removed in 65C02 in favor of read (and use)-read (and ignore while calculating)-write (new value), which is slightly safer for memory-mapped I/O but possibly less safe for synchronizing a CPU with other CPUs or DMA sources.

Re:How did OpenRISC not have atomic ops until now? (1)

Salgat (1098063) | about 3 months ago | (#46999711)

You really don't need atomic operations until you get into multi-core programming (atomics guarantee that a value is changed completely before another core can read or write it). Even the C++ standard doesn't guarantee atomic operations unless you explicitly declare a variable atomic.
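
In C11 terms, for illustration (a minimal sketch using stdatomic.h, the C analogue of the C++ atomic keyword; the 4-thread counter is an invented example, not anything from TFA):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int counter;                 /* C11 analogue of C++ std::atomic<int> */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            atomic_fetch_add(&counter, 1);     /* indivisible read-modify-write */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("%d\n", atomic_load(&counter)); /* always 400000; a plain int could lose updates */
        return 0;
    }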

Re:How did OpenRISC not have atomic ops until now? (1)

davester666 (731373) | about 3 months ago | (#47000345)

Yes, you do. In a preemptible OS, in a multi-threaded app, you need atomic operations to share data between threads, as any read-modify-write operation on shared data gets wrecked when it is preempted between the read and the write.
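
Concretely (a sketch of the compiler's usual three-step expansion, not OpenRISC-specific code):

    /* "counter++" on shared data is really three steps, and preemption
     * between them loses updates: */
    int counter;               /* shared between threads, NOT atomic */

    void increment(void) {
        int tmp = counter;     /* 1. read                                  */
                               /*    <-- preempted here: another thread
                                *        reads the same stale value        */
        tmp = tmp + 1;         /* 2. modify                                */
        counter = tmp;         /* 3. write back: the other thread's
                                *    increment is silently overwritten     */
    }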

Re:How did OpenRISC not have atomic ops until now? (0)

Anonymous Coward | about 3 months ago | (#47001485)

Right, but you can achieve that by disabling interrupts.
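
Roughly like this on a uniprocessor (a sketch; arch_irq_disable/arch_irq_restore are hypothetical stand-ins for the platform's interrupt-masking primitives):

    /* Hypothetical uniprocessor critical section. */
    extern unsigned long arch_irq_disable(void);        /* mask interrupts, return old state */
    extern void arch_irq_restore(unsigned long flags);  /* restore saved state */

    static volatile int shared_counter;

    void bump(void) {
        unsigned long flags = arch_irq_disable();
        shared_counter++;      /* nothing can preempt this on a single core */
        arch_irq_restore(flags);
    }

    /* Note this only defends against preemption; it does nothing against
     * a second core or a DMA master, as pointed out elsewhere in the thread. */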

Re:How did OpenRISC not have atomic ops until now? (1)

Darinbob (1142669) | about 3 months ago | (#47004371)

Which can be slow and clumsy, though, and is often a bad choice for real-time systems.

Re:How did OpenRISC not have atomic ops until now? (1)

davester666 (731373) | about 3 months ago | (#47015215)

Yes, that would be one way to implement atomic operations.

Re:How did OpenRISC not have atomic ops until now? (2)

cnettel (836611) | about 3 months ago | (#47001603)

Yes, you do. In a preemptible OS, in a multi-threaded app, you need atomic operations to share data between threads, as any read-modify-write operation on shared data gets wrecked when it is preempted between the read and the write.

Furthermore, what is atomic in terms of context switching preemption is not necessarily atomic in terms of memory bus arbitration. The two can usually coincide, but they don't have to.

Re:How did OpenRISC not have atomic ops until now? (1)

Darinbob (1142669) | about 3 months ago | (#47004365)

What about multi-processor systems that share memory? It's not multi core if each processor only has one core.

Re:How did OpenRISC not have atomic ops until now? (0)

Anonymous Coward | about 3 months ago | (#47002609)

I'm pretty sure you don't need atomic instructions.
You can implement mutex/lock stuff by using RAS (Restartable Atomic Sequences) to build a CAS.

Re:How did OpenRISC not have atomic ops until now? (1)

Darinbob (1142669) | about 3 months ago | (#47004405)

CAS is an atomic instruction, probably even more atomic than the stuff in OpenRISC (which uses a load-exclusive/store-conditional pair of instructions, similar to many RISC machines).
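
For illustration, how a CAS gets used from C (a sketch with the GCC/Clang __atomic builtins; on an ll/sc machine the builtin itself compiles down to a load-exclusive/store-conditional pair):

    #include <stdbool.h>

    /* Atomic add built on compare-and-swap: retry until no other
     * core modified *p between our read and our write. */
    int atomic_add(int *p, int n) {
        int old = __atomic_load_n(p, __ATOMIC_RELAXED);
        while (!__atomic_compare_exchange_n(p, &old, old + n,
                                            true,  /* weak: may fail spuriously */
                                            __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
            ;  /* on failure, 'old' is refreshed with the current value; retry */
        return old + n;
    }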

Re:How did OpenRISC not have atomic ops until now? (2)

Darinbob (1142669) | about 3 months ago | (#47004321)

Actually I found the atomic ll/sc stuff to be convenient even with multiple tasks on a single processor, as it means no need to lock out context switching for atomic operations.

Even with just a single core in the processor you will still hit concurrency issues if there are other processors in the system sharing some resource.

Re:How did OpenRISC not have atomic ops until now? (3, Informative)

Node (9991) | about 3 months ago | (#46999977)

From the blog post linked in the article:

"the requirement for implementing a mutex is that an mutex operation is never allowed to be interrupted. Previously on OpenRISC this was done by making a syscall that disabled all interrupts as it's first instructions."

Okay and all but what about Daddy Cool? (-1)

Anonymous Coward | about 3 months ago | (#46998193)

Daddy. Daddy Cool? What about Daddy Cool?

Is Gains? (0)

necro81 (917438) | about 3 months ago | (#46998283)

OpenRISC is Gains Atomic

No, fool, I is Gains Atomic. [sounds like a great stage name]

Re:Is Gains? (0)

oodaloop (1229816) | about 3 months ago | (#46998353)

OpenRISC can has gainz?!?

Re:Is Gains? (0)

PolygamousRanchKid (1290638) | about 3 months ago | (#46998721)

No, fool, I is Gains Atomic.

Gaines Burgers, not Atomics!

troll4Ore (-1)

Anonymous Coward | about 3 months ago | (#46998355)

First, you have 7o Kreskin Maintained that too

What advantages? (-1)

Anonymous Coward | about 3 months ago | (#46998389)

What are the advantages of OpenRISC? OK, it is open hardware and all, but what are the practical advantages? From what I understand, this thing is often implemented in an FPGA. What is the performance of such a softcore? Can I expect to have something usable? Is it acceptably fast?

Re:What advantages? (-1)

Anonymous Coward | about 3 months ago | (#46998437)

Absolutely nothing over any of the well supported and understood open source MIPS implementations. This is just another cause-we-can hobby project on the front page of Slashdot.

Re:What advantages? (2)

renoX (11677) | about 3 months ago | (#46998495)

> Absolutely nothing over any of the well supported and understood open source MIPS implementations.

Ah! Read this ( http://jonahprobell.com/lexra.... [jonahprobell.com] ) and be cautious when re-implementing the MIPS ISA.

Re:What advantages? (2)

TheRaven64 (641858) | about 3 months ago | (#46999681)

We're just about to open source (Apache-style license) our MIPS IV implementation. MIPS IV is over 20 years old, so there exists at least one implementation that is not covered by any patents. We can't guarantee that nothing in our implementation is patented, but the patents in your linked article have all expired now.

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#47006363)

That would be awesome. It would be even awesomer if it optionally included some useful wide SIMD execution units or DSP instructions. Do you have an estimate for size (effective gate count/transistor count)?

Re:What advantages? (1)

TheRaven64 (641858) | about 3 months ago | (#47009567)

We're using about 30% of a Stratix IV, a bit more with an FPU. We've also got a smaller version (no TLB, smaller caches) and multicore / multithreaded variants that are larger. We run at 100MHz (pass timing at 120-150MHz depending on the features enabled, but 100MHz gives some headroom when experimenting).

Re:What advantages? (3, Insightful)

Alioth (221270) | about 3 months ago | (#46998657)

MIPS may (or may not be) "open source", however it is not free to implement. Implement the latest MIPS ISA without a license agreement from MIPS and you'll be sued to smithereens. You won't be sued if you implement OpenRISC though.

Re:What advantages? (1)

tlhIngan (30335) | about 3 months ago | (#46999477)

MIPS may (or may not be) "open source", however it is not free to implement. Implement the latest MIPS ISA without a license agreement from MIPS and you'll be sued to smithereens. You won't be sued if you implement OpenRISC though.

Or to be clearer, MIPS owns several patents on instructions in the ISA. Though I think some of them were worked around another way since the patent covers implementation.

But many other architectures are patented as well - x86 is covered by many patents (most owned between AMD and Intel and cross-licensed), which probably explains why a good chunk of embedded x86 only do the i486 ISA. (Excepting companies like Via who license the patents).

Re:What advantages? (2)

jabuzz (182671) | about 3 months ago | (#46999909)

And those patents, or more specifically the single patent on the unaligned load and store instructions on MIPS, expired years ago. To be specific, it expired in December 2006.

So while the patent was an issue back in ~2000 when OpenRISC was launched, it is no longer relevant, and you would be better off implementing a MIPS32 or MIPS64 core.

I would also point out that there are full open source implementations of the SPARC architecture, which never suffered from the patent problems of MIPS.

Re:What advantages? (1)

turgid (580780) | about 3 months ago | (#47001911)

I would also point out that there are full open source implementations of the SPARC architecture, which never suffered from the patent problems of MIPS.

...but they do suffer from (a very poor implementation of) register windows.

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#47002497)

Of course everybody knows now that register windows and delayed branches are bad ideas. Even at the time, they knew at Sun that it was a hurried decision, due to the lack of good compilers. Nevertheless, as x86 has proved, a "suboptimal" instruction set does not prevent high-performance implementations, as Sun/Oracle and Fujitsu are delivering now.

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#47002477)

Some random remarks.
-Whether an instruction set can be copyrighted is still to be debated. There was fighting over this in mainframes between IBM and the clone manufacturers a long time ago, probably more than 40 years ago.
-MIPS patents: there were patents on the unaligned access instructions, but those have long since expired. The "problem" with implementing an active instruction set is that new instructions are added regularly, and new patents may be issued on how to implement those instructions. It is safe to imitate a 30-year-old CPU, but you must restrict yourself to the instructions defined at that time. For example, there is a free implementation on OpenCores of the ARM2 instruction set; since it is incompatible with current ARM CPUs, it has as such little commercial value. There are certainly still active patents on AMD's 64-bit extension to x86, but the original 8086...80386 instruction set is old enough to be safe from patents.
-SPARC is almost free (there is a $99 unlimited "architecture licence"). This licence covers the instruction set, not the branding/logos.
-Brands are separate from patents: MIPS, and even SPARC, prevent unlicensed use of their brand.
-The OpenRISC instruction set is not particularly inspired, so it is hard to defend against almost equally bland ISAs like MIPS that already have all the software infrastructure: OS, compilers, ...
-Maybe putting effort into the free RISC-V instruction set from Berkeley, which is a modernised/rationalised version of MIPS and all the modern RISCs, would make more sense than keeping OpenRISC alive.

OpenSPARC? (1)

emil (695) | about 3 months ago | (#46999861)

Is this [wikipedia.org] free to implement?

Re:What advantages? (1)

LoRdTAW (99712) | about 3 months ago | (#46998813)

" This is just another cause-we-can hobby project on the front page of Slashdot."

OpenRISC is far from a "cause-we-can" project.

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#46999009)

Yea it certainly is, and I can't see it ever being anything else. It will never be used in anything useful, just people who want to tinker for the sake of tinkering. Schools have been teaching CPU design with other archs for years, and can continue to do so without OpenRISC.

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#46999101)

3D Printing of OpenRISC...

Re:What advantages? (1)

Node (9991) | about 3 months ago | (#46999815)

I can't see it ever being anything else. It will never be used in anything useful

"Flextronics International and Jennic Limited manufactured the OpenRISC as part of an ASIC. Samsung use the OpenRISC 1000 in their DTV system-on-chips (SDP83 B-Series, SDP92 C-Series, SDP1001/SDP1002 D-Series, SDP1103/SDP1106 E-Series). Allwinner Technology are reported to use an OpenRISC core in their AR100 power controller, which forms part of the A31 ARM based SoC. ... TechEdSat, the first NASA OpenRISC architecture based Linux computer launched in July 2012, and was deployed in October 2012 to the International Space Station with hardware provided, built, and tested by ÅAC Microtec and ÅAC Microtec North America."

https://en.wikipedia.org/wiki/OpenRISC#Commercial_implementations [wikipedia.org]

Re:What advantages? (1)

iggymanz (596061) | about 3 months ago | (#46999317)

oh? what problem does it solve?

Re:What advantages? (1)

LoRdTAW (99712) | about 3 months ago | (#46999931)

"oh? what problem does it solve?"
What the fuck is so hard to understand here? The answer is in the name of the project: an open source CPU core. There, was that so hard?

Besides being a snarky ass, what was the point of your post? It sounds as if you would rather spark a flame war than do some actual research, which would take, oh, let's say 5-10 minutes.

There are other open source cores, but none of them are trying to provide a full-blown CPU core that could potentially be used for mobile or desktop use. Most of them are for embedded use and are little more than a microcontroller, and lack an MMU.

Re:What advantages? (1)

iggymanz (596061) | about 3 months ago | (#47000023)

You have not even answered the question with all your ranting and hot air. Again, what problem does an open source CPU solve? I cannot think of one; my open source software works fine on x86, SPARC, MIPS, ARM7 (and anyone interested can get the specs for most of those architectures). I'll even make the claim that open specs are good enough for a CPU; it's irrelevant whether the particular mask patterns are known.

Re:What advantages? (1)

Node (9991) | about 3 months ago | (#47000167)

I'll even make the claim that open specs are good enough for a CPU; it's irrelevant whether the particular mask patterns are known.

The problem that OpenRISC solves is the absence of free CPU IP. You do not consider the absence of free CPU IP to be a problem, but others do, and they created OpenRISC to solve it.

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#47001429)

I cannot think of one; my open source software works fine on x86, SPARC, MIPS, ARM7 (and anyone interested can get the specs for most of those architectures). I'll even make the claim that open specs are good enough for a CPU; it's irrelevant whether the particular mask patterns are known.

That is good enough for a software developer, but not for those of us who do hardware development. I have projects that need both a general-purpose processor and an FPGA to deal with various combinatorial logic and counters independent of the CPU. In that case I might as well put the CPU on the FPGA too, and not need two different chips. The availability of free-to-use CPUs helps with this, especially if there are several different ones to choose from, so you can pick the one most appropriate, or one that can be modified as needed.

Your post is about one step away from, "What good does solder do? I've never had to solder something when writing a program before."

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#47005235)

I agree with that.

I do FPGA work, though I usually tend to use an 8-bit rotating register file machine[1] I wrote myself. This has the advantage that the instruction packing is compact, the very opposite of RISC, and I can use the unused instruction codings as peripheral register reads and writes.

I admit that if I had a design that called for running a vmunix OS like Linux or NetBSD, I would consider OpenRISC, though I think nowadays I would use an HPS like the Altera Cyclone V SX with an embedded ARM Cortex A9 hardcore. If I was designing an ASIC[2], area and performance would be a larger concern than IP licensing, so I would probably still go with ARM or MIPS.

Usually though, I think along the lines of "anything I can do in C, I can do in Verilog", so I'm using small 8- to 18-bit CPUs to program sequencing and multiplexing, and anything computational I'm realising in logic. In that realm of thinking, CPU complexity is just a waste of area that could be better spent on application logic.

CPUs are simple; I can write one in Verilog in an afternoon. What is good about OpenRISC and co. is that they include MMUs and cache hierarchies; these take longer to write and a LOT longer to validate.

[1] What this means is, the result is always stored in register 0 (through a mux), and registers 0..n-1 are moved to registers 1..n, this results in reduced instruction coding, as only the srcA and srcB registers need to be coded in the instruction, and maps perfectly from SSA compiler output. The registers aren't actually moved, instead the mux index base is incremented by one.
[2] I've never designed an ASIC.

Re:What advantages? (1)

iggymanz (596061) | about 3 months ago | (#47009641)

And for a couple bucks you can buy an 8- or 16-bit CPU and slap it on a board with your custom FPGA; otherwise you're doing it wrong.

Re:What advantages? (1)

iggymanz (596061) | about 3 months ago | (#47009657)

Eh, most of my life was engineering physical things, including controllers. You take a couple-buck CPU and slap it on a board with your FPGA; in 99.9999% of cases making your own fucking CPU is a waste of time and money. It's like a developer who says "I need to write my own web server from scratch to run my PHP code".

Re:What advantages? (1)

LoRdTAW (99712) | about 3 months ago | (#47009667)

And yet we have multiple open source web servers, each with varying levels of complexity, feature sets and use cases.

Re:What advantages? (1)

Darinbob (1142669) | about 3 months ago | (#47004401)

How about embedding a CPU core into your ASIC design, without paying licensing fees to MIPS or ARM?

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#46998449)

It has 16 I/Os, is powered with 23 V, uses 8 W and costs 330 euros. Good enough for you?

Re: What advantages? (0)

Anonymous Coward | about 3 months ago | (#46998463)

It is not going to be fast at all, likely in the vicinity of a few hundred MHz. FPGAs are very slow; the 5-stage pipeline version of MIPS I made on one ran at ~80 MHz.

Re:What advantages? (4, Informative)

ShanghaiBill (739463) | about 3 months ago | (#46998641)

What are the advantages of openrisc?

It is free, so if you want to run a softcore, there are no license fees. If you can read Verilog, you can verify that there are no NSA backdoors.

What are the performance of such a softcore?

An FPGA softcore is going to run several times slower, and consume several times as much power, as a hardcore. If you need a small amount of computing, and most of your app is in the FPGA fabric, then that is reasonable, although you might be able to get by with an 8-bit softcore like PicoBlaze, or even roll your own mini 8-bit core with opcodes customized for your app (this is not that hard, and is a fun project if you are learning Verilog and ready to go beyond blinking LEDs). But if you are doing something compute intensive, you may want to look for something with an integrated hardcore.

Can I expect to have something usable?

That depends on what you are using it for.

Re:What advantages? (1)

LWATCDR (28044) | about 3 months ago | (#46998805)

It is also possible to use the Verilog to make an ASIC if you go into production.

Trusting the compiler (1)

tepples (727027) | about 3 months ago | (#46999359)

If you can read Verilog, you can verify that there are no NSA backdoors.

But is there a backdoor in your Verilog compiler [bell-labs.com] ? Normally, you might use David A. Wheeler's diverse double-compiling method [dwheeler.com] to ensure beyond reasonable doubt that your compiler isn't backdoored. But diverse double-compiling doesn't work unless the compiler is written in the same language that it compiles. And I don't think the Verilog compiler is written in Verilog.

you might be able to get by with an 8-bit softcore like PicoBlaze

Wikipedia's article about PicoBlaze states that it's not free to use on anything but a Xilinx FPGA. So if you switch to Altera or go into production with an ASIC, you might have to switch to PacoBlaze and deal with any minor behavior differences.

Re:Trusting the compiler (0)

Anonymous Coward | about 3 months ago | (#47003073)

You just read through the compiler output (as in the VHDL -> ASIC gate compiler output). People used to design CPUs by hand, so verification isn't so hard.

Similarly, I have done this for C/C++ programs (read the assembler). (You could also do that for the Verilog compiler itself.)

Backdoors can be obfuscated (1)

tepples (727027) | about 3 months ago | (#47003661)

For a sufficiently complex netlist, how can you prove that the HDL compiler didn't insert an obfuscated backdoor? This is especially important if one of the HDL compiler's developers placed highly in an Underhanded C Contest [wikipedia.org] .

Re:Backdoors can be obfuscated (0)

Anonymous Coward | about 3 months ago | (#47005401)

How is the underhanded HDL compiler supposed to have a-priori knowledge of the RTL I'm going to feed it, if it hasn't been written at the time the HDL compiler was written?

Without a-priori knowledge of my RTL, it would require strong AI to determine which RTL to sabotage while maintaining seemingly correct behavior of all other input.

This same problem exists with the backdoored-compiler argument. It works when the compiler is designed to sabotage code that has already been authored, but it fails complexity tests for code that is written after the compiler. The compiler would need to be more complex than all possible programs it is designed to sabotage. Since programs I feed to the compiler can be arbitrarily complex, it either has to be infinitely complex, which is impossible given its finite size, or capable of generating complexity chaotically, which is easy enough, but it must also generate chaotic complexity that confines itself to the domain where it will not be detected by invoking unexpected behavior from the output programs AND generates usable backdoors for its master.

A tall order indeed.

-puddingpimp

Re:Backdoors can be obfuscated (0)

Anonymous Coward | about 3 months ago | (#47006275)

For sufficiently complex netlists, you are going to get some tight performance and/or space limits to squeeze what is being done into the smallest number of gates you can get away with. Doing any sort of analysis or timing tests won't guarantee you'll find such a thing, but it won't take that many different designs for someone to stumble upon it when trying to find and eliminate bottlenecks.

Re:What advantages? (3, Funny)

SuricouRaven (1897204) | about 3 months ago | (#46999399)

"roll your own mini 8-bit core with opcodes customized for your app (this is not that hard"

Not that hard by Verilog standards. The sight of it tends to make software developers run in terror.

Re:What advantages? (1)

Salgat (1098063) | about 3 months ago | (#46999747)

The sight of it tends to make software developers run in terror.

That's because it has very little to do with software programming.

Re:What advantages? (1)

SuricouRaven (1897204) | about 3 months ago | (#47000925)

Exactly. This is very much a hardware thing - and if you want a processor embedded in your chip, it's because you want to run software. Spending time messing around with intricate hardware design is just going to divert you from the important tasks.

not all software developers (0)

Anonymous Coward | about 3 months ago | (#47003085)

I know a decent amount of VHDL and Verilog, but I know other software developers who are geniuses in it.
Take the people who wrote OpenRISC, for example.

Re:What advantages? (3, Interesting)

ShanghaiBill (739463) | about 3 months ago | (#46999499)

Another advantage of an open source softcore is that you can add your own application-specific opcodes. You could run your app in a profiler with the standard instruction set and identify the hot spots. If a big chunk of your CPU time is spent in a single tight loop, you could implement that code directly in FPGA fabric, and execute each iteration in a single clock tick with a custom instruction. For instance, let's say you need to run some sort of CRC or crypto, which involves shifting, masking and adding bits. That would be easy to code up in Verilog as a single instruction, which is then executed by extending OpenRISC for the new opcode. Then just use the "asm" feature of GCC to put that opcode in the inner loop of your C program. Depending on your app, it is possible that you could get better performance from a customized softcore than from a generic hardcore, like ARM or MIPS.
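
A sketch of that last step ("l.crc32step" is an invented mnemonic standing in for whatever custom opcode you taught the toolchain; the real syntax and constraints depend on your port):

    /* Wrap a hypothetical custom CRC instruction for use from C.
     * Assumes binutils has already been taught the new encoding. */
    static inline unsigned int crc32step(unsigned int crc, unsigned int data) {
        unsigned int out;
        asm volatile ("l.crc32step %0, %1, %2"   /* invented OpenRISC-style mnemonic */
                      : "=r" (out)
                      : "r" (crc), "r" (data));
        return out;
    }

    unsigned int crc32(const unsigned char *buf, unsigned int len) {
        unsigned int crc = 0xFFFFFFFFu;
        while (len--)
            crc = crc32step(crc, *buf++);        /* one clock per byte, not a bit loop */
        return crc ^ 0xFFFFFFFFu;
    }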

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#47005343)

or even roll your own mini 8-bit core with opcodes customized for your app (this is not that hard, and is a fun project if you are learning Verilog and ready to go beyond blinking LEDs).

No doubt. I did this in an afternoon. It even worked first time.

Though I hazard the difference in complexity between a simple static stack machine or rotating register file machine and a complex pipelined machine is substantial. The difference in complexity between a CPU core and a virtual memory hierarchy is even more substantial.

Incidentally, the word size of a core doesn't change its complexity to implement. Writing a 32-bit core is just as simple as an 8-bit core; it just eats more resources, because all those nets are now wider. I could just as easily write a 256-bit stack machine as an 8-bit stack machine, it could even have the identical ISA; it's just that it would synthesize to a massive behemoth, with probably terrible timing closure. It would also not be very useful, as most programs don't contain 256-bit data types (crypto excepting).

If you simply care about complex sequencing of logic elements, then all you need is a simple machine with minimal area coverage. If you instead want to do complex heavy lifting on CPU, then I would suggest you are better off looking at an FPGA with an embedded ARM core. Both Xilinx and Altera make a range of FPGAs with embedded Cortex A9, and Lattice make one with an embedded Cortex M4.

Re:What advantages? (0)

Anonymous Coward | about 3 months ago | (#47006297)

or even roll your own mini 8-bit core with opcodes customized for your app (this is not that hard, and is a fun project if you are learning Verilog and ready to go beyond blinking LEDs).

No doubt. I did this in an afternoon. It even worked first time.

Though I hazard the difference in complexity between a simple static stack machine or rotating register file machine and a complex pipelined machine is substantial. The difference in complexity between a CPU core and a virtual memory hierarchy is even more substantial.

Incidentally, the word size of a core doesn't change its complexity to implement. Writing a 32-bit core is just as simple as an 8-bit core; it just eats more resources, because all those nets are now wider. I could just as easily write a 256-bit stack machine as an 8-bit stack machine, it could even have the identical ISA; it's just that it would synthesize to a massive behemoth, with probably terrible timing closure. It would also not be very useful, as most programs don't contain 256-bit data types (crypto excepting).

If you simply care about complex sequencing of logic elements, then all you need is a simple machine with minimal area coverage. If you instead want to do complex heavy lifting on CPU, then I would suggest you are better off looking at an FPGA with an embedded ARM core. Both Xilinx and Altera make a range of FPGAs with embedded Cortex A9, and Lattice make one with an embedded Cortex M4.

It's not even hard to write a pipelined RISC machine, I did one in two weeks in my spare time. It even worked the first time. ;)
Writing one that's remotely efficient (in terms of speed and area) is harder, but not even that is very hard.
Writing compiler backends, Linux (and other OS) ports, SoCs around the CPU core, and maintaining all that, is perhaps not hard, but time-consuming.

OpenRISC is a fairly efficient RISC machine, with a good software ecosystem and an active development community that's completely transparent.
To me, that's an advantage. And as far as I know, a pretty unique combination.

Nuclear disarmament (-1)

Anonymous Coward | about 3 months ago | (#46998467)

What does the IAEA say about it?

Endianness (1)

sharpneli (2507116) | about 3 months ago | (#46999485)

Does anyone have any idea why OpenRISC is big-endian? Considering that little-endian has pretty much won nowadays (every major CPU is either little- or bi-endian), why would anyone want to release a big-endian CPU?

Re:Endianness (1)

Tsolias (2813011) | about 3 months ago | (#46999781)

Because it seems, from my point of view, that big-endian is better than naidne elttil. Not performance-wise, but in other ways, e.g. debugging raw data from memory, or getting raw data from the network. Also, big-endian is the most common one. Oohhh... and it's also big.

Re:Endianness (0)

Anonymous Coward | about 3 months ago | (#47000757)

Network being big endian would make it easier and more efficient in embedded usage.

Re:Endianness (1)

sharpneli (2507116) | about 3 months ago | (#47000909)

From my point of view, little-endian is better. Our text is read from left to right and numbers are read from right to left; big-endian machines basically do this. On little-endian, both are read in the same direction: if you envision the memory going from right to left (just like some other languages are read), it's perfectly natural. On big-endian it's not. Big-endian basically means that there is a difference between how bigger ints and individual bytes are read; on LE there is no difference. tl;dr: BE exists only because Latin languages are read from left to right and Arabic numerals from right to left.
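
Easy to see for yourself (a small C sketch; the printed order depends on the host):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned int x = 0x0A0B0C0D;
        unsigned char b[sizeof x];
        memcpy(b, &x, sizeof x);        /* view the int as raw bytes */
        for (size_t i = 0; i < sizeof x; i++)
            printf("%02X ", b[i]);
        printf("\n");  /* little-endian host: 0D 0C 0B 0A; big-endian: 0A 0B 0C 0D */
        return 0;
    }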

Re:Endianness (0)

Anonymous Coward | about 3 months ago | (#47005479)

I solved this problem in all my own debugging tools, it's trivial actually.

It's called mirror notation. In all my hexdumps the right hand side shows ascii reading left to right, in the customary order for latin, and the left hand side shows the mirror image in hex, reading right to left in the customary order for our arabic numeral system.

Big endian is actually wrong endian. In little endian, numbers are written with byte n of a word encoding X_n * 2^(8n), so an address points to the logical start of a number, regardless of its word size. In big endian, however, byte n of a word encodes X_n * 2^(8*(wordsize-1-n)); thus in big endian an octet of a word is not in its natural ordering, unless we count arrays backwards with negated indexes. This has real performance implications when it comes to dealing with numbers larger than the machine's native word size, as either bignums must be stored in negative-index form, or the octets are shuffled and we may not access octet indexes into a bignum.

So, as you see, the problem is not in the order Little endian encodes numbers, but in the incorrect and invalid presentation of numbers in poorly designed hex editors, that print numbers in the reverse order to which they are stored in memory, according to the arabic systems of numerals.

Now, because terminals print from left to right as standard, it is necessary to reverse the numbers prior to printing them, but this has always been the case, just as it is in the routine that formats numbers for printing in printf. This is an artifact of terminals setup to print latin, and not arabic, and once again, not an artifact of little endian.

Yes, unfortunately, IP historically uses big endian, I suspect as a result of the prominent hardware at the time of implementation rather than a sound theoretical basis. I would advise against using big endian in any new protocols one is developing, as the world has definitively settled on little endian for network endpoint machines. That IP uses big endian is a sufficient engineering reason to use big endian for network middle points, I would agree.

-puddingpimp

Re:Endianness (1)

Node (9991) | about 3 months ago | (#46999839)

Does anyone have any idea why OpenRISC is big-endian? ... Considering that little-endian has pretty much won nowadays

It's big-endian because little-endian *hasn't* won.

Re:Endianness (0)

Anonymous Coward | about 3 months ago | (#47000271)

Does anyone have any idea why OpenRISC is big-endian?

Considering that little-endian has pretty much won nowadays (Every major CPU is either little or bi endian) why would anyone want to release a big-endian cpu?

IBM mainframes are 100% pure big-endian, and more consistently so than any backwards-endian architecture.
And yes, I think, live and breathe in big-endian. Bit 0 is and can only be the most significant bit.

Re:Endianness (1)

Darinbob (1142669) | about 3 months ago | (#47004713)

Because everything sane uses big-endian? Really, little-endian is a pain in the ass, something you put up with on the x86 family, or on projects where they had a choice and the early hardware dev chose little-endian without consulting the software people. It's really hard to say which is used the most, but little-endian is definitely not a clear winner at all. It's also a sort of Rorschach test to tell who started out as a PC programmer versus everyone else.

The internet, for example, is all big-endian, which means everything at least has to go through htonl/htons before getting on the net. Big-endian seems to be the more popular choice when creating a data format intended to be portable. Big-endian is certainly easier to debug with for people who are used to left-to-right reading of numbers.
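
For example (a minimal sketch of that htonl/htons step; the wire_header struct and its fields are invented for illustration):

    #include <arpa/inet.h>   /* htonl/htons: host order -> network (big-endian) order */
    #include <stdint.h>

    struct wire_header {     /* invented example wire format */
        uint32_t length;
        uint16_t port;
    };

    void fill_header(struct wire_header *h, uint32_t len, uint16_t port) {
        h->length = htonl(len);   /* byte swap on little-endian hosts, no-op on big */
        h->port   = htons(port);  /* e.g. port 80 becomes bytes 0x00 0x50 on the wire */
    }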

The chief advantage of little-endian is doing larger arithmetic operations on an 8-bit CPU, which is irrelevant for larger processors but explains why many early microprocessors used it even though it was rare on the larger minis and mainframes of the time. Larger systems rarely read data byte by byte, but as larger chunks of data. (Interesting side note: the PDP-11 and some others used "middle-endian", storing 16-bit words little-endian but placing the high-order half of a 32-bit value first.)

Little endian also has some other esoteric advantages that I've run across but which are unusual enough that I'd still prefer big-endian overall.
