
Startup Claims C-code To SoC In 8-16 Weeks

Soulskill posted more than 2 years ago | from the faster-than-tv-infomercial-shipping dept.


eldavojohn writes "Details are really thin, but the EE Times is reporting that Algotochip claims to be sitting on the 'Holy Grail' of SoC design. From the article: '"We can move your designs from algorithms to chips in as little as eight weeks," said Satish Padmanabhan, CTO and founder of Algotochip, whose EDA tool directly implements digital chips from C-algorithms.' Padmanabhan is the designer of the first superscalar digital signal processor. His company, interestingly enough, claims to provide a service that consists of a 'suite of software tools that interprets a customer's C-code without their having any knowledge of Algotochip's proprietary technology and tools. The resultant GDSII design, from which an EDA system can produce the file that goes to TSMC, and all of its intellectual property is owned completely by the customer—with no licenses required from Algotochip.' This was presented at this year's Globalpress Electronics Summit. Too good to be true? Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?"


205 comments

"Too good to be true?" (3, Insightful)

Ironchew (1069966) | more than 2 years ago | (#39776617)

"Too good to be true?"

Perhaps not, if you don't mind patent-encumbered chips with the occasional bug in them.

Re:"Too good to be true?" (4, Funny)

AK Marc (707885) | more than 2 years ago | (#39776657)

Well then, fix it with your own open source chip printer. 8-16 weeks? 5 minutes is long enough. Compile, spool, print.

Choose two: (0)

Anonymous Coward | more than 2 years ago | (#39776707)

a) performance
b) time to market
c) cost

Pick any two.... I wonder how this solution performs on these three axes?

Re:Choose two: (2, Interesting)

Anonymous Coward | more than 2 years ago | (#39776779)

I'd be tempted to compile up a Linux system with a GNOME desktop into this... just to see the resulting chip!

What if .. (1)

Taco Cowboy (5327) | more than 2 years ago | (#39777643)

What if Microsoft decides to compile their new Windows 8 into an SoC?

Re:What if .. (3, Funny)

Anonymous Coward | more than 2 years ago | (#39777739)

Then you could have a BSOD in hardware. Or in Windows 8, Multiple Coloured Squares of Death.

Re:Choose two: (1)

TheDarkMaster (1292526) | more than 2 years ago | (#39778131)

The moment you plug it in and flip the switch, you will blow up the power grid of the neighborhood.

Re:Choose two: (1)

lister king of smeg (2481612) | more than 2 years ago | (#39778483)

step one: compile GNU code licensed under GPLv3
step two: watch the lawsuits ensue while GNU demands the blueprint.

or

order a chip of QEMU and get your own reverse-engineered set of processors; see how many companies sue their asses off.

Re:Choose two: (2, Funny)

Anonymous Coward | more than 2 years ago | (#39778149)

It is no longer true. Now, you have to pick one.

Re:"Too good to be true?" (0)

xQx (5744) | more than 2 years ago | (#39778841)

Buggered if I know, the poster didn't explain the acronyms.

WTF is SoC?

lol indian junky shit (-1)

Anonymous Coward | more than 2 years ago | (#39776649)

Something from India? Too good to be true? Not possible! Indians are world-renowned for high-quality hardware and software. They have zero reputation for creating junky hardware and code.

Isn't that what microcode is for? (-1)

mark-t (151149) | more than 2 years ago | (#39776655)

[NT]

Linux on a chip! (0)

Anonymous Coward | more than 2 years ago | (#39776691)

But I do worry about how big that chip might end up being!

Re:Linux on a chip! (4, Funny)

JustOK (667959) | more than 2 years ago | (#39777183)

It would be the size of a kernel

SystemC (5, Informative)

paugq (443696) | more than 2 years ago | (#39776729)

Why not? There is SystemC [wikipedia.org], a dialect of C++ which can be implemented in hardware (in an FPGA, for instance). What Algotochip is claiming is just one more small step forward.

Re:SystemC (5, Informative)

JoelDB (2033044) | more than 2 years ago | (#39776947)

While SystemC does have a synthesizable subset, it's mainly used for simulations at a high level from what I've seen. Going from synthesizable SystemC to hardware is an order of magnitude easier than going from a complex language such as C++ or C down to hardware, which is what this company is claiming. From reading the article, I believe Tensilica [tensilica.com] is using a very similar approach (with ASIPs) for bringing high-level languages to hardware, and they are much more established in this field. One of the up-and-comers is AutoESL [xilinx.com], which was recently acquired by Xilinx. I've played around with this tool and its ability to bring C down to hardware is very impressive.
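To give a flavor of the input these tools want, here's an illustrative sketch (mine, not actual AutoESL or Tensilica code) of the restricted, hardware-shaped C style: fixed loop bounds, static storage, no dynamic allocation.

    #include <stdio.h>

    /* Illustrative: a 4-tap FIR filter written HLS-style. Both loops have
       fixed bounds, so they can be unrolled into parallel multiply-accumulates. */
    #define TAPS 4

    short fir(short sample)
    {
        static const short coeff[TAPS] = { 3, 7, 7, 3 };
        static short delay[TAPS];          /* shift register of past samples */
        int acc = 0;

        for (int i = TAPS - 1; i > 0; i--) /* shift the delay line */
            delay[i] = delay[i - 1];
        delay[0] = sample;

        for (int i = 0; i < TAPS; i++)     /* multiply-accumulate */
            acc += coeff[i] * delay[i];

        return (short)(acc >> 4);          /* scale back down */
    }

    int main(void)
    {
        for (int n = 0; n < 6; n++)        /* print the impulse response */
            printf("%d\n", fir(n == 0 ? 16 : 0));
        return 0;
    }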

Re:SystemC (4, Informative)

jd (1658) | more than 2 years ago | (#39777853)

Presumably, though, you could use a source-to-source compiler to convert C (with certain restrictions) into SystemC.* From there, you could do source-to-source compilation to convert SystemC into Verilog or whatever. You'd end up with crappy hardware, but the claim says nothing about design quality, only design capability.

*The obvious restriction is that you can't translate something for which no translation exists, whether that's a function call or a particular class of solution.

Going directly from C to hardware without intermediate steps would indeed be a lot harder. But again that's not what the startup promises. They only promise that they can convert C to hardware, they say nothing about how many steps it takes on their end, only what it seems like from your end.

Having said that, a direct C-to-hardware compiler is obviously possible. A CPU plus software is just emulating a pure hardware system with the code directly built into the design. Instead of replacing bits of circuitry, you replace the instructions which say what circuitry is to be emulated. Since an OS is just another emulator, this time of a particular computer architecture, there is nothing to stop you from taking a dedicated embedded computer, compiling the software, OS and CPU architecture, and getting a single chip that performs the same task(s) entirely in hardware -- no "processor" per se at all, a true System on a Chip. Albeit rather more complex than most SoC designs currently going, but hey. There's no fun in the easy.

Although there are uses for direct-to-hardware compilers, direct-to-FPGA for pure C would seem better. Take hard drives as an example. You can already install firmware, so there's programmable logic there. What if you could upload the Linux VFS plus applicable filesystems as well? You would reduce CPU load at the very least. If the drive also supported DMA rather than relying on the CPU to pull-and-forward, you could reduce bus activity as well. That would benefit a lot of people and be worth a lot of money for the manufacturer.

This, though, is not worth nearly as much. New hardware isn't designed that often and the number of people designing it is very limited. Faster conversion times won't impact customers, so won't be a selling point to them, so there's no profit involved. Further, optimizing is still a black art; C compiled into a hardware description language is simply not going to be as good as hand-coding for a long time. Eventually, it'll be comparable, just as C compilers are getting close to hand-tuned assembly, but it took 30-odd years to get there. True, cheaper engineers can be used, but cheaper doesn't mean better. The issues in hardware are not simply issues of logic, and corporations that try to cut corners via C-to-hardware will put their customers through worlds of hurt for at least the next decade to decade and a half.

Re:SystemC (3, Informative)

wiredlogic (135348) | more than 2 years ago | (#39777027)

SystemC is a C++ library and simulation kernel. It isn't a dedicated language. The synthesizable subset of SystemC is very limited. Because it's plain C++, you have to implement all low level logic with much more code overhead than the equivalent VHDL or Verilog.

Re:SystemC (0)

Anonymous Coward | more than 2 years ago | (#39777221)

Using SystemC is absolutely nothing like automated translation of C code to a hardware design. SystemC is a set of templates and utility functions which makes it easier to write software models of hardware using C++, but it is still completely up to the human beings to actually write those algorithms in a way which is suitable for hardware implementation. You don't just #include <systemc.h> and magically get shit that can go onto silicon.

Re:SystemC (1)

paugq (443696) | more than 2 years ago | (#39777363)

You don't just #include <systemc.h> and magically get shit that can go onto silicon

I never said so. The fact that it is a dialect of C++, not pure C++, speaks volumes already.

Re:SystemC (0)

Anonymous Coward | more than 2 years ago | (#39777657)

Put MAME on the chip. That should be interesting :)

Re:SystemC (2)

davolfman (1245316) | more than 2 years ago | (#39777757)

Last I heard from people who use it on a daily basis, SystemC's synthesis tools weren't really mature enough to use seriously. It IS great for building a model to test your purpose-built HDL simulations against, though.

Sounds plausible. (1)

Anonymous Coward | more than 2 years ago | (#39776745)

After all, C-to-HDL has been around for a while.

Offtopic Name Comment (-1)

Anonymous Coward | more than 2 years ago | (#39776753)

I luv those eastern names. Satish Padmanabhan hmm? Sounds like either a couple of Scrabble tilesets or someone typing on a keyboard blind.

This is nothing new at all (4, Interesting)

Weaselmancer (533834) | more than 2 years ago | (#39776763)

C code to SoC. [wikipedia.org]

So, how is this offering from India any different? I could do it in less than 8 to 16 weeks if the customer supplies me the C code to be converted. As in, download/purchase any one of these utilities, run the customer's file through it, and mail it back to them.

Pretty simple.

Re:This is nothing new at all (1)

Anonymous Coward | more than 2 years ago | (#39777043)

Um...from the Algotochip home page. Emphasis mine.

Algotochip is a Silicon Valley startup revolutionizing digital chip design. Algotochip dramatically...

Don't know about the revolutionizing part though...

A better question (5, Insightful)

wonkey_monkey (2592601) | more than 2 years ago | (#39776775)

Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?

How about you tell us what SoC stands for first? Once again, editors, we don't all know everything about everything in the tech world. Some of us come here to learn new things, and you guys don't make it easy. TFS should at least leave me with an impression of whether or not I need to read the TFA.

Re:A better question (-1)

Anonymous Coward | more than 2 years ago | (#39776831)

Try www.google.com. One can look up things they don't know about there. It's the second website found.

Re:A better question (3, Insightful)

LurkerXXX (667952) | more than 2 years ago | (#39776865)

The point is, you shouldn't have to freaking google to find out what the heck an article is about. The brain-dead submitter, or brain-dead 'editor' should be clarifying anything that isn't very common everyday tech lingo/acronyms.

Re:A better question (-1, Troll)

Anonymous Coward | more than 2 years ago | (#39776969)

Yes, he shouldn't need to Google, since he should know what an SoC is; this is supposed to be a site for technologically literate people, not reddit rejects.

Re:A better question (4, Funny)

Geoffrey.landis (926948) | more than 2 years ago | (#39777029)

Yes, he shouldn't need to Google, since he should know what an SoC is; this is supposed to be a site for technologically literate people, not reddit rejects.

Indeed.

"SoC" is short for "State of Charge," which is, basically, the status of a battery.

I'm not sure what this has to do with C-code. Maybe these chips they're talking about are used to make battery controllers that use SoC monitoring.

Re:A better question (5, Funny)

JustOK (667959) | more than 2 years ago | (#39777205)

Salsa on Crotch

Re:A better question (1)

thammoud (193905) | more than 2 years ago | (#39777337)

Ah how I miss having some points. Real funny shit.

Re:A better question (1)

JoeMerchant (803320) | more than 2 years ago | (#39777759)

Re:A better question (0)

Anonymous Coward | more than 2 years ago | (#39778343)

So we had grits for breakfast and burritos for lunch. What's for dinner?

Oh, I know.

Ladies, there is meat in my shorts.

Re:A better question (5, Informative)

Caratted (806506) | more than 2 years ago | (#39777269)

Not sure if serious.

SoC [wikipedia.org] has been emerging as a more common term in the last 5 or 6 years, meaning System on a Chip. The advantages are that it uses less power to do more things, and a lot of low-level functions (radios, GPU rendering, etc.) have more direct access to on-board cache and memory, as well as a direct line to RAM. They're used in just about everything these days, and saying SoC is essentially equivalent to saying CPU (for anything other than a desktop or laptop without an IGP).

Re:A better question (2, Informative)

Anonymous Coward | more than 2 years ago | (#39776841)

system on a chip

Re:A better question (1)

Anonymous Coward | more than 2 years ago | (#39776851)

http://en.wikipedia.org/wiki/System_on_a_chip

Re:A better question (0, Troll)

Anonymous Coward | more than 2 years ago | (#39776853)

Do we need to start having a basic competency test before letting idiots like this post? Jesus fuck, you newtards are idiots. No wonder CmdrTaco left...

Re:A better question (3, Insightful)

EdIII (1114411) | more than 2 years ago | (#39778399)

Do we need to start having a basic competency test before letting idiots like this post? Jesus fuck, you newtards are idiots. No wonder CmdrTaco left...

That's hugely unfair. I figured out what it was based on the context. Hmmmm... SoC... moving algorithms to chips... might it be System-on-Chip?

However, there are plenty of articles here about some pretty heavy physics, particle physics, medical advancements, etc. that are well outside of my own field. It would be nice to have some quality journalism where a term or concept is explained in the summary.

It's not that hard. Another sentence at most. I don't have a problem searching for terms and concepts I don't fully grasp, but it would be nice to have some quality journalism again. Seriously... grammar and spelling mistakes are everywhere now, even at mainstream outlets like CNN. Just once I would like to get the impression that somebody with an English major was doing actual editing.

Re:A better question (4, Informative)

khellendros1984 (792761) | more than 2 years ago | (#39776857)

That would be "System on a Chip", a term which describes a complete system included on a single chip. An example I've seen used more often would be a phone's central chip; they tend to integrate the CPU, GPU, wireless chipsets, and part or all of the RAM on one chip. In this case, it looks like they're advertising the ability to quickly create a hardware chip that functions the same as an arbitrary chunk of C code; essentially, you can make a hardware chip that implements a specific algorithm.

Re:A better question (-1)

Anonymous Coward | more than 2 years ago | (#39776863)

>How about you tell us what SoC stands for first?

This is Slashdot. I think you were looking for Myspace, which is next door.

Re:A better question (0)

Anonymous Coward | more than 2 years ago | (#39777139)

Thank you. If you hadn't said it, I would have.

Not every nerd who reads Slashdot knows what SoC means. There is plenty of room for variety in the nerd diet and not everyone has to be an expert in embedded systems or chip design. A simple explanation of obscure acronyms would be greatly appreciated now and then.

Re:A better question (5, Funny)

Anonymous Coward | more than 2 years ago | (#39777223)

Yeah! And what does 'C' stand for?

Re:A better question (2)

Bucky24 (1943328) | more than 2 years ago | (#39777847)

Yeah! And what does 'C' stand for?

Just in case you're not trolling:
C is a high-level programming language (yes I know I could give a better description but my brain is fried this late in the day).
http://en.wikipedia.org/wiki/C_(programming_language) [wikipedia.org]

Re:A better question (0)

Anonymous Coward | more than 2 years ago | (#39777937)

What does "/." stand for?

Re:A better question (0)

Anonymous Coward | more than 2 years ago | (#39777299)

Just assume that if you can't even understand the summary, the article is clearly not directed at you so you should just move on. It can work to weed out the people who have nothing to add to the conversation, such as yourself, provided your ego doesn't continue to demand further attention seeking now that you have been enlightened.

Re:A better question (1)

Belial6 (794905) | more than 2 years ago | (#39777531)

Perhaps you could explain what TFS and TFA mean every time you use them, so that the editors understand what you're asking for...

maybe turn in your "nerd" card? (1)

ashpool7 (18172) | more than 2 years ago | (#39778685)

"We can move your designs from algorithms to chips in as little as eight weeks"

That's enough hints to know what's going on here. If you can't be bothered to Google "SoC" and see the "chip" reference on the first page, or, heck, even read TFA, which has the definition in it, how could you muster the typing to complain about it?

Sorry if Slashdot isn't "newbie friendly," but this isn't "news for new guys, stuff to help you understand." If you don't understand it, maybe it doesn't matter to you. If it bothers you, educate yourself... by reading the article.

Marvellous! (5, Interesting)

Anonymous Coward | more than 2 years ago | (#39776787)

I'm not entirely clear on how it works though. If I give them this:

#include <stdio.h>
int main() {
    printf("Hello world!\n");
}

will they convert it into a custom integrated circuit chip with "Hello world!" silkscreened on the top of it, or does the chip actually display "Hello world!" on whatever it is connected to?

Re:Marvellous! (0)

khellendros1984 (792761) | more than 2 years ago | (#39776875)

My guess is that their tool provides some language extensions, or some method of specifying the inputs and outputs of the chip's pins.

Re:Marvellous! (3, Funny)

Logger (9214) | more than 2 years ago | (#39776915)

The press release says the user doesn't need to know anything about how their tool works. So obviously it will infer the appropriate solution and implement that too.

Actually the printf example is one of the easiest to implement. You'll receive a sheet of paper with "Hello World" printed on it in 6-8 weeks.

You provided a link to define EDA (3, Informative)

Chris Mattern (191822) | more than 2 years ago | (#39776861)

That's good. You didn't define or even expand SoC, GDSII, or TSMC. That's bad. I'm guessing SoC is "System on Chip" but I have no idea what the other two are.

Re:You provided a link to define EDA (0)

Anonymous Coward | more than 2 years ago | (#39776895)

Maybe this article really isn't for you in that case. Maybe you should head over to MIT OCW if you want a free lesson in EE.

Re:You provided a link to define EDA (4, Informative)

blind biker (1066130) | more than 2 years ago | (#39776929)

GDSII or GDS-2 is a layout format used by microsystems designers. It's a 2D-only format, but you can have unlimited layers.

TSMC (Taiwan Semiconductor Manufacturing Company) is the largest semiconductor foundry in the world.

You are correct about SoC.

Re:You provided a link to define EDA (0)

Anonymous Coward | more than 2 years ago | (#39777249)

Correct me if I'm wrong about these statements.

As mentioned, GDSII is a layout format. One such use is that it can describe an ASIC design with the "place-and-route" information. This representation of the hardware design can be sent off to the foundry for fabrication.

TSMC and UMC (United Microelectronics Corporation) are two major companies that the so-called "fabless" companies can use to get their hardware designs fabricated...

Re:You provided a link to define EDA (1)

mk1004 (2488060) | more than 2 years ago | (#39776979)

Well, TSMC is a foundry. Wikipedia says GDSII is the industry-standard way of exchanging data for PCB and IC layout. I should have known that, since I've worked in the IC industry. No reason for most software and IT people to know those terms. Technical writers generally know that you define an acronym the first time you use it and then use the acronym afterwards. /. articles don't follow that rule. I guess so the people who know those terms can flame those who don't.

I'm guessing you could end up writing a lot of code to define how you want input and output to flow from the SoC.

Re:You provided a link to define EDA (-1)

Anonymous Coward | more than 2 years ago | (#39776987)

They don't need to. This is Slashdot, not Myspace. Piss off.

Re:You provided a link to define EDA (1)

Your.Master (1088569) | more than 2 years ago | (#39778061)

Yeah, on Myspace people define TSMC, UMC, GDSII, SoC, and EDA. All the time.

Re:You provided a link to define EDA (1)

Freebirth Toad (1197193) | more than 2 years ago | (#39777821)

That's good. You didn't define or even expand SoC, GDSII, or TSMC. That's bad.

The SoC contains potassium benzoate. [youtube.com]

Satish? (-1)

Anonymous Coward | more than 2 years ago | (#39776885)

They'll promise the world and deliver negative value. Costs more to fix the shit they deliver than if you would have developed it yourself.

Re:Satish? (1, Insightful)

Anonymous Coward | more than 2 years ago | (#39777065)

Downmodded. How disingenuous of a site with so many programmers who know firsthand of the shit that comes out of India. They have a completely different culture than the US, and that is the cause of what we perceive as poor workmanship and poor management. Reputation doesn't seem to matter a lot to them. If that's not true, then please explain the apparent lack of quality. They memorize dumps to pass certification exams and then deliver poor product under poor management and get paid poor wages for it. And when pressed, they really genuinely don't seem to give a shit. Why?

I'm sure the typical Japanese worker considers US workers lazy with an inflated sense of entitlement. And as a US citizen with a job, I'd agree with that assessment compared to the typical Japanese work ethic.

Depends on the source code and what the chip needs (3, Insightful)

erice (13380) | more than 2 years ago | (#39776893)

Most SOCs do a lot more than a direct translation of the C-coded algorithm would suggest. I guess if you had a "wrapper" platform that was good enough for many applications you could streamline the process. My guess is that this platform and the links to C synthesis are most of Algotochip's secret sauce.

C synthesis itself can't handle most programs written in C. Essentially you need to write Verilog in C in order to make it work. Any dynamic allocation of memory, whether direct or indirect, is a problem. I/O cannot be expected to work.

So it boils down to: if your C source is uncharacteristically just right and your application fits a pre-defined mold, then you can make it a chip real quick... as long as you don't encounter any problems during place and route or timing closure...
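To illustrate (a sketch of my own, not anything from Algotochip): the first version below is ordinary C and hopeless for synthesis; the second says roughly the same thing in the fixed-size "Verilog in C" style.

    #include <stdio.h>
    #include <stdlib.h>

    /* Ordinary C: heap allocation and a data-dependent size.
       No C-synthesis flow can turn this into gates as written. */
    int *running_sum_sw(const int *in, int n)
    {
        int *out = malloc(n * sizeof *out); /* the heap has no hardware analog */
        int acc = 0;
        for (int i = 0; i < n; i++)
            out[i] = (acc += in[i]);
        return out;
    }

    /* "Verilog in C": fixed sizes and loop bounds the tool can map to
       registers or block RAM; no allocation, no I/O. */
    #define N 8

    void running_sum_hw(const int in[N], int out[N])
    {
        int acc = 0;
        for (int i = 0; i < N; i++) /* bound known at synthesis time */
            out[i] = (acc += in[i]);
    }

    int main(void)
    {
        int in[N] = { 1, 2, 3, 4, 5, 6, 7, 8 }, out[N];
        running_sum_hw(in, out);
        for (int i = 0; i < N; i++)
            printf("%d ", out[i]); /* prints: 1 3 6 10 15 21 28 36 */
        putchar('\n');
        return 0;
    }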

Re:Depends on the source code and what the chip ne (2)

TheRealMindChild (743925) | more than 2 years ago | (#39777037)

Or you follow their SDK and let them do the work. Are you an engineer qualified to make such statements?

Re:Depends on the source code and what the chip ne (0)

Anonymous Coward | more than 2 years ago | (#39778725)

Or you follow their SDK and let them do the work. Are you an engineer qualified to make such statements?

He probably is. I am, and I'd guess erice is right in everything he said. Granted, I don't have experience in writing such translation systems, but I have experience writing software, and I have more experience than that writing HDL code for hardware (FPGA and ASIC), which gives me a good feel for the difficulties in translation.

The C language is semantically oriented towards describing a sequential algorithm which manipulates the contents of a large, all-purpose memory (one which is shared for all tasks). This just isn't a very good model for how the optimal HW implementation of any given thing should work. The point of doing custom hardware is to avoid the power and clock speed overhead of turning your algorithm into a sequential program for a general purpose processor, after all!

Good HW design involves a ton of parallelism in computation, data movement, etc., and lots of private memories for small pieces of the system. (that is, the memories aren't even hooked up to a general purpose access bus, only the things directly attached can read or write.) If you've ever done any functional programming, that's a much closer match. (In fact, the two major HDLs or Hardware Description Languages -- VHDL and Verilog -- are both functional languages.)

The problem with translation from C to a HDL (which is almost certainly what these guys are doing, then using a normal ASIC synthesis flow to translate the HDL code into hardware) is that it's very difficult to extract enough semantic information from sequential C code to mechanically translate it to a good parallel HW oriented design, whatever the output language might be.

Now, maybe you don't need an optimal implementation, just good enough with labor savings compared to having a team do the design the normal way. But you're still likely to have to code in a carefully limited subset of the C language, with few or no standard libraries available. Like erice said, even something so apparently simple as dynamic memory allocation is probably not there.
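To make the contrast concrete (my own toy example, not the parent's): the first loop below maps naturally onto parallel hardware; the second has a loop-carried dependency that forces a long sequential chain no matter how clever the translator is.

    #define N 64

    /* Every iteration is independent: a translator can, in principle,
       instantiate N multipliers and finish in one clock. */
    void scale(const int a[N], const int b[N], int out[N])
    {
        for (int i = 0; i < N; i++)
            out[i] = a[i] * b[i];
    }

    /* Loop-carried dependency: iteration i needs the result of
       iteration i-1, so the logic degenerates into a serial chain. */
    int chain(const int a[N])
    {
        int acc = 1;
        for (int i = 0; i < N; i++)
            acc = acc * a[i] + 1;
        return acc;
    }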

Re:Depends on the source code and what the chip ne (0)

Anonymous Coward | more than 2 years ago | (#39777107)

hence the word "algorithm" in the "algorithm written in C" as opposed to "program written in C". Magic how that choosing the right word shit works. If you want a general purpose computer, you're barking up the wrong tree. However, if you've got an algorithm in C that you want to work without all the shit that comes with a CPU, use a DSP^H^H^H an FPGA^H^H^H^H^H^H^H an ASIC (this is an ASIC design house).

For those who are qualified to write comments: when writing this, you'd probably better assume that this isn't a virtual memory machine. However, the folks who use this are probably competent instead of CS grads [slashdot.org]. You've also got to look at how many levels of recursion you expect to handle, and you'd better not assume any error handling routines. However, this isn't hard if you think about your code instead of pointing and clicking in your IDE. "How does this parallelize?" the ignorant ask. Well, it'll parallelize as much as you design in. This is where you communicate, both through comments and that dreaded beige 30-key input device that makes rude noises (the telephone), with the folks who are trying to discombobulate your shitty code. For those who actually understand the concept properly, it'll parallelize as large as you're willing to bet on the foundry.

If you don't understand, it's okay. You'll be obsolete [slashdot.org] soon.

so it compiles, drops down a CPU core and a ROM (1)

YesIAmAScript (886271) | more than 2 years ago | (#39776957)

The devil is in the details. It isn't a question of whether a hardware device can be manufactured that runs your code; that is provably possible.

The issue is how cost-efficient the SoC is. How power-efficient. How it performs: does it do any more in parallel than a CPU would if you just fed it the compiled code?

Re:so it compiles, drops down a CPU core and a ROM (1)

tomhath (637240) | more than 2 years ago | (#39777047)

Exactly. Is there really any benefit to burning the program into nanocode ROM over normal compilation into a RISC instruction set? In theory, maybe. Burroughs used to do this a few decades ago and gave up on the idea.

Re:so it compiles, drops down a CPU core and a ROM (1)

thoughtspace (1444717) | more than 2 years ago | (#39777535)

Same thing happened to CPLDs. Once FPGAs were cheap, we all jumped to FPGAs.
Why lock yourself down, or even finish the design, before turning a PCB?

Algorithms vs. hardware (1)

unts (754160) | more than 2 years ago | (#39776961)

Algorithms only work well if they fit well with the hardware they're targeting. You have to make certain assumptions, but depending on what your algorithm is, you should know which things you really need to think about (memory, branching, process communication, disk, ...)

Algorithms that get synthesised into hardware will only work well if they're written in a way that lends itself to synthesis. There's going to be a huge heap of stuff that doesn't fit well, or doesn't work at all. Writing things like Verilog and even SystemC is very different from writing a piece of software. And let's not even mention the backend stuff like layout, which can have a big impact on the performance of the thing you're spending a lot of money fabricating (oh, I guess I /did/ mention it...)

So, maybe a bit ambitious, but if they've solved even some of the problems and helped bring software development and hardware design closer together, well, that's a good thing.

Finally... (1)

RevSpaminator (1419557) | more than 2 years ago | (#39777001)

I can have hardware devoted to celebrating my mastery of the C language... #include <stdio.h> int main() { printf("Hello World!\n"); }

Kernel-on-a-chip (0)

Anonymous Coward | more than 2 years ago | (#39777005)

Just feed it the kernel! :)

I hope not, but my money is on overhyped. (5, Informative)

hamster_nz (656572) | more than 2 years ago | (#39777031)

Most of these 'C'-to-hardware technologies are overhyped and under-deliver.

* It is definitely not ANSI C. It might share some syntax elements, but that is about it.
* C programmers do not make good hardware designers (C programmers will disagree; HDL programmers won't).
* The algorithms used in software by software developers do not translate well into hardware.
* If you want "good" hardware developed, use hardware design tools.

If you don't agree with me on these points, post how you would convert "short unsigned value" into ASCII in char digits[5], and I'll show you how to do the same if you were designing a chip...
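For contrast, here is the answer a typical 'C' programmer would give (illustrative code, mine): it leans on division and modulo, which have no cheap equivalent in gates.

    #include <stdio.h>

    /* Illustrative software-style conversion: divide and take remainders.
       Fine on a CPU; in hardware every '/' and '%' implies a costly divider. */
    void to_digits(unsigned short value, char digits[5])
    {
        for (int i = 4; i >= 0; i--) {
            digits[i] = '0' + value % 10; /* least significant digit first */
            value /= 10;
        }
    }

    int main(void)
    {
        char digits[5];
        to_digits(54321, digits);
        printf("%.5s\n", digits); /* prints 54321 */
        return 0;
    }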

Re:I hope not, but my money is on overhyped. (1)

Anonymous Coward | more than 2 years ago | (#39777177)

Even though interpreted languages are less efficient than C, they are good enough for so many tasks that they have taken off.

We are probably at or near the point where inefficient C translated to SoC is good enough for a large number of tasks.

There will always be room for C devs in the programming world and HDL programmers in the hardware world, but with the power of newer hardware not all tasks will require them.

Re:I hope not, but my money is on overhyped. (1)

DeadCatX2 (950953) | more than 2 years ago | (#39777701)

I doubt we've reached the point where there are so many excess gates lying around that you can use shitty C-to-HDL converters. There is a large excess of CPU cycles but not nearly as much of an excess of gates. You really have to be conscious of how your design will be synthesized because it's very easy for a C-to-HDL converter to really screw up implementation and do terrible things that will bloat the netlist. I've used such a converter before for a small piece of an FPGA program, and I ended up re-writing half of it in HDL anyway because the result wouldn't fit in the FPGA I was using.

Besides, OP is right, C programmers are terrible hardware designers. C is a sequential language that jumps through hoops to be parallel. HDLs are parallel languages that jump through hoops to be sequential. If you want to be a C programmer in a world where HDL is king, your best bet is to implement a soft-core processor like MicroBlaze or Nios, and then have the C code run on that.

Re:I hope not, but my money is on overhyped. (0)

Anonymous Coward | more than 2 years ago | (#39777191)

The algorithms used in software by software developers do not translate well into hardware

Of course. This should really go without saying.

Software is very iterative and progressive. This is easy because it's stored in RAM and each instruction can be loaded and processed.

Hardware is tiered but relatively flat by comparison. It runs on clock ticks and making more happen in one clock tick requires more tiers of gates in the hardware. And it will take longer due to the propagation delay.

Software is serial, hardware is parallel.

I'm curious, though... how would you convert unsigned to ASCII on chip?

Re:I hope not, but my money is on overhyped. (0)

Anonymous Coward | more than 2 years ago | (#39777435)

Software is serial, hardware is parallel.

I thought that statement needed to be emphasized... although I would have gone with the word "sequential" in describing software.

From what I know, there's no main() in any of the HDLs...

Re:I hope not, but my money is on overhyped. (4, Interesting)

hamster_nz (656572) | more than 2 years ago | (#39777947)

// Make value 17 bits long (intermediate values reach 99,999)
for (i = 0; i != 5; i++)
{
    digit[i] = '0';
    // Extract the bits of the current digit, one at a time.
    // (In 'C' I need the |= operator; I never did get the hang of addressing individual bits in 'C'.)
    if (value >= 80000) { value -= 80000; digit[i] |= 8; }
    if (value >= 40000) { value -= 40000; digit[i] |= 4; }
    if (value >= 20000) { value -= 20000; digit[i] |= 2; }
    if (value >= 10000) { value -= 10000; digit[i] |= 1; }
    value = value*8 + value*2; // Prepare for extracting the next digit; the *8 and *2 would be implemented by the wiring into the adder
}

Advantages:
* No divide/mod operator
* Extracts digits from most significant to least significant (if you want to stream out the digits)
* Can be unrolled or pipelined to meet timing / throughput requirements

Sorry about any syntax/typos/errors in the code... it is a comment!
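If you want to check that the trick actually works, here's a minimal harness (my wrapper around the snippet above); the working variable is widened to 32 bits so the comparisons against 80,000 mean something:

    #include <stdio.h>

    int main(void)
    {
        char digit[5];
        unsigned long value = 54321; /* widened copy of the 16-bit input */

        for (int i = 0; i != 5; i++) {
            digit[i] = '0';
            if (value >= 80000) { value -= 80000; digit[i] |= 8; }
            if (value >= 40000) { value -= 40000; digit[i] |= 4; }
            if (value >= 20000) { value -= 20000; digit[i] |= 2; }
            if (value >= 10000) { value -= 10000; digit[i] |= 1; }
            value = value*8 + value*2; /* i.e. value *= 10 */
        }
        printf("%.5s\n", digit); /* prints 54321 */
        return 0;
    }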

Re:I hope not, but my money is on overhyped. (1)

O('_')O_Bush (1162487) | more than 2 years ago | (#39778619)

Looks like it lacks the byte-to-char conversion (he did say ASCII, and what's the point if the result is garbage ops?).

Re:I hope not, but my money is on overhyped. (3, Informative)

hamster_nz (656572) | more than 2 years ago | (#39778711)

Looks like you failed to spot the character constant in digit[i] = '0'; - it is already a character....

Re:I hope not, but my money is on overhyped. (0)

Anonymous Coward | more than 2 years ago | (#39778951)

BCD conversion is a really evil question. There's an established technique called "double dabble", and it falls squarely in the head-twister category. The general idea is that it takes n cycles to convert an n-bit number from binary to decimal using a BCD "x2 multiplier" (0b1000 == (1<<3) == (2<<2) == (4<<1) == 8). The carry is the fun part. The final step is to just jam all the bits into the pipeline.

So the final result: a completely serial 16-bit BCD converter needs something like a 20-bit working register, 5x 4-bit adders, 5x 4-bit LUTs (magnitude comparators), and maybe a 5-bit counter, over 20 cycles. Parallelizing this could reduce the cycle count, etc.

And this is why us hardware guys look so frazzled.
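For the curious, here's double dabble modeled in plain C for a 16-bit input (an illustrative software model, not the hardware itself):

    #include <stdio.h>

    /* Double dabble (illustrative): shift the binary input into a BCD
       register one bit at a time; before each shift, add 3 to any BCD
       digit >= 5 so the doubling carries correctly in decimal. */
    int main(void)
    {
        unsigned short value = 54321;
        unsigned long bcd = 0; /* holds 5 BCD digits (20 bits) */

        for (int i = 0; i < 16; i++) {
            for (int d = 0; d < 5; d++) /* the "dabble" adjustment */
                if (((bcd >> (4 * d)) & 0xF) >= 5)
                    bcd += 3UL << (4 * d);
            bcd = (bcd << 1) | ((value >> 15) & 1); /* shift in the next bit */
            value = (unsigned short)(value << 1);
        }
        for (int d = 4; d >= 0; d--)
            putchar('0' + (int)((bcd >> (4 * d)) & 0xF));
        putchar('\n'); /* prints 54321 */
        return 0;
    }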

Re:I hope not, but my money is on overhyped. (3, Insightful)

DeadCatX2 (950953) | more than 2 years ago | (#39778139)

I'm curious, though... how would you convert unsigned to ASCII on chip?

I think OP's point is that your average C programmer would just start doing all kinds of dividing; most of the time there is very little hardware support for division, and so if you fed this into a C->HDL converter it would generate massive bloat as it imported some special library to handle division.

My first brute-force guess would involve a state machine (FSM), a comparator (16-bit), two adders (one 4-bit, one 16-bit), two muxes (16-bit and 4-bit, four-input), a 16-bit register with clock enable and an associated input mux, and four 4-bit registers with clock enable. The FSM would control the 16-bit mux which selects a constant from four powers of 10 (10,000 to 10), and the output of the mux is connected to the 16-bit adder and the comparator. The other input is the 16-bit register, which also needs a mux for selecting between the argument and the adder's output. This register output is also a comparator input. The comparator is configured for "less than" and its output goes to the FSM so it can make decisions. The FSM also controls a 4-bit wide mux which connects four 4-bit registers that represent the various 10s digits (10,000 to 10) to an adder with the other input set to "1".

1) If the number is greater than 10,000 then inc the "ten-thousands" digit, subtract 10,000 from the argument, and repeat this step.
2) Once it is less than 10,000 then the state machine would walk forward to the thousands digit
3) If the number is greater than 1000, inc the thousands digit, subtract 1000 from the argument, and repeat this step.
4) Once it is less than 1000... (you can extrapolate some here) ...
n) Once the tens digit has been processed, the remaining argument is the ones digit

This would give you a series of 4-bit numbers. Once the FSM is done (it's important for it to finish first and change all bits simultaneously, so that downstream logic doesn't see glitches), it would append 0x3 to the front of each 4-bit number, turning them into ASCII.

Note that this approach requires very little in terms of hardware resources, at the expense of requiring a variable amount of time to process its inputs. Consider that 00000 would take 6 clock cycles to produce (need a cycle to load the input), while 29,999 would require like 33 clock cycles (no need to do subtractions on the ones digit)

There are other approaches that may be faster in exchange for requiring more hardware. Consider if you had 9 comparators, one for each digit (except 0), and an adder with a 9-input mux; every input would require 6 clock cycles. But this took an extra 8 comparators (and a significantly bigger mux too); size for speed (interestingly, the divider still only gets you 6 clock cycles, and probably takes up many more resources than 9 comparators. But if you could find other work for the divider then time-sharing might make it worth your while, maybe). You could even go all the way and use 32,000+ comparators, if fan-out wouldn't spell doom for such an approach, and then you could always calculate every possible value in 1 clock cycle...but this would require MASSIVE resources. Now if you only needed, say, from 0 to 1000, that might be slightly less unreasonable (perhaps within fanout limitations but probably still unreasonably large).

OP's point is that a good hardware engineer knows about these tradeoffs and handles them appropriately, while a C programmer isn't trained to think about these issues, and their language doesn't even naturally express the structures it will be mapped onto. Writing the kind of C code that you need to properly synthesize what you want feels like saying the alphabet backwards while jumping up and down on one foot while rubbing your belly and patting your head. And that's if you can even figure out how to tell the C synthesizer that, since your values only go from 0 to 1000, it doesn't need all 16 bits of that unsigned short and could really get away with only 10 bits of support.
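For the software folks, here is roughly what that subtract-and-count scheme does, modeled in C (illustrative only; the real thing is registers, muxes and a comparator, not a program):

    #include <stdio.h>

    int main(void)
    {
        unsigned short value = 29999;
        static const unsigned short pow10[4] = { 10000, 1000, 100, 10 };
        char digits[6] = "00000"; /* five ASCII digits plus NUL */

        /* One FSM "state" per power of ten: keep subtracting and counting
           until the comparator says the remainder is smaller. */
        for (int d = 0; d < 4; d++)
            while (value >= pow10[d]) {
                value -= pow10[d];
                digits[d]++; /* the 4-bit digit register, 0x3-prefixed */
            }
        digits[4] += value; /* whatever remains is the ones digit */

        printf("%s\n", digits); /* prints 29999 */
        return 0;
    }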

Re:I hope not, but my money is on overhyped. (1)

hamster_nz (656572) | more than 2 years ago | (#39778739)

Google just found me a cunning way to implement binary-to-BCD conversion that works by using modified shift registers [www.jjmk.dk].

Very slick; it wouldn't be found by a 'C'-to-hardware process or a 'C' programmer.

Re:I hope not, but my money is on overhyped. (1)

DrFalkyn (102068) | more than 2 years ago | (#39778883)

But if you need to do fairly complex things like converting strings, aren't you just better off sticking a CPU on whatever you're designing?

Re:I hope not, but my money is on overhyped. (3, Interesting)

hamster_nz (656572) | more than 2 years ago | (#39778971)

We all know that it is stupid, but one of the "next big thing" ideas for FPGA technology is using them for ultra-low-latency high-frequency share trading.

The idea being that if you can bypass switches, routers, NICs, buffers, IRQs, CPU context switches and so on, you will be able to issue your trade requests before the whole data packet has finished coming off the wire, allowing you to get a big jump on your competitors.

One assumes that the "buy, buy, buy" or "sell, sell, sell" packets will need to be generated in the final formats needed by the market, which will most probably need something to be converted from binary to ASCII characters.

High-frequency traders dream that it would be possible to turn a trade around within a few nanoseconds of the market data arriving.

Dude, don't leave us hanging... (1)

ratboy666 (104074) | more than 2 years ago | (#39778769)

I dunno... I am just a programming hack.

But... given the underpowered nature of microcontrollers (and logic), I would either use a table of powers of ten, subtracting and counting, or a BCD table of powers of two, along with BCD add-and-adjust.

I would probably go for the BCD approach; it guarantees that the job is done in 16 "cycles".

Is that what you were thinking?

Re:Dude, don't leave us hanging... (0)

Anonymous Coward | more than 2 years ago | (#39778807)

It was a typical jab from a member of the hardware community. They enjoy poking at software developers, it's sort of like masturbation for them.

Re:Dude, don't leave us hanging... (1)

Anonymous Coward | more than 2 years ago | (#39778847)

I posted the code up in another comment... but you extract the bits one at a time, starting with the "80,000" bit, then the "40,000" bit, then "20,000", then "10,000", and so on.

Here's something that approaches the idea...
    long int temp = value;
    digit[4] = '0';
    if (temp >= 80000) { temp -= 80000; digit[4] |= 8; }
    if (temp >= 40000) { temp -= 40000; digit[4] |= 4; }
    if (temp >= 20000) { temp -= 20000; digit[4] |= 2; }
    if (temp >= 10000) { temp -= 10000; digit[4] |= 1; }

You then have the option to either step down to 8000, 4000... digit[3], or to multiply 'temp' by 10 and reuse the same logic to extract digit[3].

Design automation tradeoffs (1)

WaffleMonster (969671) | more than 2 years ago | (#39777201)

Ease of design, power consumption and performance. Pick any two.

It would be interesting to see how this compares with the work of competent designers with a/d and analog skillz.

Re:Design automation tradeoffs (0)

Anonymous Coward | more than 2 years ago | (#39778783)

It would also be interesting to compare the costs and the time to produce something usable

Okay, here is my C code!!! (0)

Anonymous Coward | more than 2 years ago | (#39777307)

#include <stdio.h>

int main(int argc, char **argv) {
    printf("Hello SoC!\n");
    return 0;
}

Translating C to hardware shouldn't be that hard (1)

Hentes (2461350) | more than 2 years ago | (#39777381)

The real question is how efficient it is.

8-16 weeks for the SOC 8-16 years for the lawsuits (0)

RotateLeftByte (797477) | more than 2 years ago | (#39777899)

Sadly, that is what the USofA has become these days: nothing more than a nation of lawyers filing suit against everyone else.
I know I exaggerate, but the plethora of frankly stupid lawsuits is going to kill what is left of US business within 3-5 years.
The ONLY winners are the patent trolls and, naturally, the lawyers who take the lion's share of any spoils awarded by the courts.

Re:8-16 weeks for the SOC 8-16 years for the lawsu (1)

mevets (322601) | more than 2 years ago | (#39778663)

How about you tell us what USofA stands for first? Once again, posters, we don't all know everything about everything in the world. Some of us come here to learn new things, and you guys don't make it easy. TFP should at least leave me with an impression of whether or not I need to read, uh, the rest of TFP.

I have a great idea (1)

caywen (942955) | more than 2 years ago | (#39777939)

Why not just put the code onto high-speed flash that goes on the SoC? Seems a whole lot easier, and I'm not clear why their solution is better. Really, I must be missing something; I'm curious.

Alright! (1)

shadowrat (1069614) | more than 2 years ago | (#39778543)

We can finally get a hardware implementation of Quake!

IDRMIWSBS in NNA! (0)

Anonymous Coward | more than 2 years ago | (#39778731)

What TSSE really means for SoC is BFAU. But the major advancement here is how it LQRTs.
Good job and HSFTF!

Hitchcock had it right! (1)

Tumbleweed (3706) | more than 2 years ago | (#39778749)

Those birds are going to be so much more angry now! We're doomed, I tell you -- DOOMED!

MSFT Research had this first. (0)

Anonymous Coward | more than 2 years ago | (#39778861)

This reminds me of Microsoft's 'Alchemy' Project.
http://research.microsoft.com/en-us/projects/alchemy/

--AC
