
Harvard/MIT Student Creates GPU Database, Hacker-Style

Unknown Lamer posted about a year and a half ago | from the search-faster dept.

Communications 135

First time accepted submitter IamIanB writes "Harvard Middle Eastern Studies student Todd Mostak's first tangle with big data didn't go well; trying to process and map 40 million geolocated tweets from the Arab Spring uprising took days. So while taking a database course across town at MIT, he developed a massively parallel database that uses GeForce Titan GPUs to do the data processing. The system sees 70x performance increases over CPU-based systems and can, in some cases, out-crunch a 1,000-node MapReduce cluster. All for around $5,000 worth of hardware. Mostak plans to release the system under an open source license; you can play with a data set of 125 million tweets hosted at Harvard's WorldMap and see the millisecond response time." I seem to recall a dedicated database query processor from the '80s that worked by having a few hundred really small processors and was integrated with INGRES.


Two thoughts based on this story (5, Interesting)

Anonymous Coward | about a year and a half ago | (#43520555)

1. Facebook would like to have a discussion with him.
2. The FBI would like to have a discussion with him.

and (-1)

Anonymous Coward | about a year and a half ago | (#43520865)

3. Alqaeda that lives in iran that is fighting iranian backed syria wishes ot have it terror dbase back.

Re:and (-1)

Anonymous Coward | about a year and a half ago | (#43521789)

that was integrated with NIGGERS in the 80s.

Fixed.

Re:Two thoughts based on this story (0)

Anonymous Coward | about a year and a half ago | (#43522293)

3. Facebook and the FBI realize that they have the same goal of having a discussion with him and integrate. The new entity is called the FB.

Re:Two thoughts based on this story (2, Informative)

Anonymous Coward | about a year and a half ago | (#43522935)

Drop the "the", just FB, it's cleaner

I'm not a computer scientist, and... (1)

Anonymous Coward | about a year and a half ago | (#43520573)

I want to know why GPUs are so much better at some tasks than CPUs? And, why aren't they used more often if they are orders of magnitude faster?

Thanks.

Re:I'm not a computer scientist, and... (4, Insightful)

Anonymous Coward | about a year and a half ago | (#43520597)

Sprinters can run really fast. So, if speed is important in other sports, why aren't the other sports full of sprinters? Because being good at one thing doesn't mean you're well-suited to do everything. A sprinter who can't throw a ball is going to be terrible at a lot of sports.

Re:I'm not a computer scientist, and... (5, Informative)

PhamNguyen (2695929) | about a year and a half ago | (#43520687)

GPUs are much faster for code that can be parallelized (basically this means having many cores doing the same thing, but on different data). However, there is significant complexity in isolating the parts of the code that can be done in parallel. Additionally, there is a cost to moving data to the GPU's memory, and also from the GPU memory to the GPU cores. CPUs, on the other hand, have a cache architecture that means that much of the time, memory access is extremely fast.

Given progress in the last 10 years, the set of algorithms that can be parallelized is very large. So the GPU advantage should be overwhelming. The main issue is that the complexity of writing a program that does things on the GPU is much higher.
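
To make the transfer-cost point concrete, here is a minimal CUDA sketch (names and sizes are illustrative, not code from the article): the kernel itself is trivially parallel, and the two cudaMemcpy calls are exactly the data-movement overhead described above.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each thread handles one element: the "many cores doing the same
// thing on different data" pattern described above.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                 // ~1M floats
    std::vector<float> host(n, 1.0f);

    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));

    // Host -> device copy: this transfer cost is why small or
    // transfer-bound workloads often stay on the CPU.
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    // Device -> host copy of the results.
    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %f\n", host[0]);     // prints 2.0 if everything worked
    return 0;
}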

Re:I'm not a computer scientist, and... (2)

gatkinso (15975) | about a year and a half ago | (#43520885)

>> The main issue is that the complexity of writing a program that does things on the GPU is much higher.

Not so much. There is programming overhead, but it isn't too bad.

Re:I'm not a computer scientist, and... (2, Insightful)

Anonymous Coward | about a year and a half ago | (#43521017)

Yes, it is that bad. Not only is it extremely platform-specific, the toolchains are crap. We're just now transitioning from "impossible to debug" to "difficult to debug".

Re:I'm not a computer scientist, and... (0)

Anonymous Coward | about a year and a half ago | (#43521369)

Yes, it is that bad. Not only is it extremely platform-specific, the toolchains are crap. We're just now transitioning from "impossible to debug" to "difficult to debug".

OpenCL has been a joy to program with, and CUDA, while having interesting quirks, has always built very easily for me.

Re:I'm not a computer scientist, and... (5, Informative)

Morpf (2683099) | about a year and a half ago | (#43520965)

Close, but not quite correct.

The point is that GPUs are fast at doing the same operation on multiple data (e.g. multiplying a vector with a scalar). The emphasis is on _same operation_, which might not be the case for every problem one can solve in parallel. You will lose speed as soon as the elements of a wavefront (e.g. 16 threads, executed in lockstep) diverge into multiple execution paths. This happens if you have something like an "if" in your code and for one work item the condition evaluates to true while for another it evaluates to false. Your wavefront will only execute one path at a time, so your code becomes kind of "sequential" at this point. You will lose speed, too, if the way you access your GPU memory does not satisfy certain restrictions. And by the way: I'm not speaking about some mere 1% performance loss but quite a number. ;) So generally speaking: not every problem one can solve in parallel can be efficiently solved by a GPU.

There is something similar to caches in OpenCL: it's called local data storage, but it's the programmer's job to use it efficiently. Memory access is always slow if it's not registers you are accessing, be it CPU or GPU. When using a GPU you can hide part of the memory latency by scheduling way more threads than you can physically run and always switching to those that aren't waiting for memory. This way you waste fewer cycles waiting for memory.

I support your view that writing for the GPU takes quite a bit of effort. ;)
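
A small illustrative pair of CUDA kernels (nothing from the article, and no host driver shown) makes the divergence point concrete: when threads of one warp/wavefront disagree on a condition, the hardware runs both paths back to back.

#include <cuda_runtime.h>

// Threads in one warp execute in lockstep. If the predicate differs
// between threads of the same warp, the hardware runs the "then" path
// and the "else" path one after the other, masking out the inactive
// threads each time, so the warp pays for both branches.
__global__ void divergent(const int *in, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] % 2 == 0) {        // even/odd alternates within a warp:
        out[i] = in[i] * 3;      // close to worst-case divergence
    } else {
        out[i] = in[i] + 7;
    }
}

// Same result, but branch-free: both expressions are computed and one
// is selected, which keeps the whole warp on a single execution path.
__global__ void branchless(const int *in, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int even = (in[i] % 2 == 0);
    out[i] = even * (in[i] * 3) + (1 - even) * (in[i] + 7);
}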

Re:I'm not a computer scientist, and... (1)

loneDreamer (1502073) | about a year and a half ago | (#43521931)

True. Nevertheless, using it for databases when the data is cached seems like a neat idea. Lots of "_same operation_" for, let's say, selecting all tuples with a specific value from a huge table.
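
As a hedged sketch of that idea (purely illustrative, not MapD's code): one thread per row applies the same comparison to a column and writes a match flag, with no dependencies between threads.

#include <cstdio>
#include <vector>
#include <cstdint>
#include <cuda_runtime.h>

// Column-at-a-time predicate evaluation: every thread applies the same
// comparison to a different row of the column, writing 1 where it
// matches. A later pass (prefix sum / compaction) can gather row ids.
__global__ void filterEquals(const int32_t *column, uint8_t *matches,
                             int32_t value, int n) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n) {
        matches[row] = (column[row] == value) ? 1 : 0;  // branch-free select
    }
}

int main() {
    const int n = 1 << 20;
    std::vector<int32_t> col(n);
    for (int i = 0; i < n; ++i) col[i] = i % 1000;   // fake column data

    int32_t *d_col; uint8_t *d_match;
    cudaMalloc(&d_col, n * sizeof(int32_t));
    cudaMalloc(&d_match, n * sizeof(uint8_t));
    cudaMemcpy(d_col, col.data(), n * sizeof(int32_t), cudaMemcpyHostToDevice);

    filterEquals<<<(n + 255) / 256, 256>>>(d_col, d_match, 42, n);

    std::vector<uint8_t> match(n);
    cudaMemcpy(match.data(), d_match, n, cudaMemcpyDeviceToHost);

    long hits = 0;
    for (int i = 0; i < n; ++i) hits += match[i];
    printf("%ld rows matched\n", hits);   // roughly n / 1000

    cudaFree(d_col); cudaFree(d_match);
    return 0;
}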

Re:I'm not a computer scientist, and... (0)

Anonymous Coward | about a year and a half ago | (#43522425)

So I can see XOR'ing the tuple with the query tuple and looking for a zero result, but don't you need to branch or move depending on the result? Doesn't this add a dreaded "if" and therefore degrade the parallelism?

Re:I'm not a computer scientist, and... (1)

PhamNguyen (2695929) | about a year and a half ago | (#43522575)

I was intentionally simplifying, but I agree with your more detailed exposition. I did understate the extent to which fundamental issues related to the GPU architecture are still relevant. My own experience is in embarrassingly parallelizable problems, so my knowledge of these issues is not very deep.

Re:I'm not a computer scientist, and... (2, Insightful)

BitZtream (692029) | about a year and a half ago | (#43521589)

Parallelization is not why GPUs are fast; it's a side effect of rendering pixels, nothing more.

GPUs are fast because they do an extremely limited number of things REALLY REALLY fast, and when you're doing graphics... well guess what, it's all pretty much doing those few things the GPU does well over and over again, per pixel (or vertex). They are parallelized because those simple, super-fast processors are also small from a chip perspective, so stuffing a ton of them on the chip so it can do many pixels in parallel works, again because all those pixels get treated the same way, with a very limited number of well-known operations performed on them.

They are not replacing CPUs because something like a simple if statement doesn't pause one processor, it pauses them ALL; and to top it off, the GPU is absolutely horrible (speed-wise) at dealing with an IF statement. In shaders, you can get by with an if on a uniform because it only has to be calculated once, and a decent driver can optimize the if away early on before sending it to the shader cores on the GPU. Do IFs on an attribute (say a vertex or texture coord) and watch your GPU crawl like a snail.

Parallelization in GPUs is a direct result of the fact that they perform the same task on massive arrays of data. Since the code works on individual cells in the array individually, there is no 'race' condition possibility in the code, so it's ready to run concurrently. Adding a new shader cell effectively gives you more speed without any sort of programmer effort whatsoever.

The reason these parallel cells can work together so fast is also because the silicon works in lockstep (that's why IFs on attributes kill performance). Basically, each line of the shader program executes side by side on all the shader cells at once. This makes all sorts of neat silicon-based performance tricks possible.

Where you get screwed, however, is those IFs (all branching instructions, really), because if any one shader cell has to run a branch of code, they ALL run the code and then just discard the results. So when you write branching code in a shader, you are almost certainly going to run every code path provided if you use the wrong data for your branch.

Re:I'm not a computer scientist, and... (4, Informative)

PhamNguyen (2695929) | about a year and a half ago | (#43522551)

What you are describing is GPU computing 5 to 10 years ago. Now, (1) you don't write shaders, you write kernels. (2) A GPU can do most of the functions of a CPU; the difference is in things like branch prediction and caching. (3) Threads execute in blocks of 16 or some other round number. There is no performance loss as long as all threads in the same block take the same execution path.

Re:I'm not a computer scientist, and... (4, Informative)

gatkinso (15975) | about a year and a half ago | (#43520857)

This is a gross simplification, glossing over the details and not correct in some aspects... but close enough.

SIMD - single instruction multiple data. If you have thousands or millions of elements/records/whatever that all require the exact same processing (gee, say like a bunch of polygons being rotated x radians perhaps????) then this data can all be arranged into a bitmap and loaded onto the GPU at once. The GPU then performs the same operation on your data elements simultaneously (simplification). You then yank off the resultant bitmap and off you go. CPU arranges data, loads and unloads the data. GPU crunches it.

A CPU would have to operate on each of these elements serially.

Think of it this way - you are making pennies. GPU takes a big sheet of copper and stamps out 10000 pennies at a time. CPU takes a ribbon of copper and stamps out 1 penny at a time... but each iteration of the CPU is much faster than each iteration of the GPU. Perhaps the CPU can perform 7000 cycles per second, but the GPU can only perform 1 cycle per second. At the end of that second... the GPU produced 3000 more pennies than the CPU.

Some problem sets are not SIMD in nature. Lots of branching, or reliance on the value of neighboring elements. This will slow the GPU processing down insanely. An FPGA is far better (and more expensive, and more difficult to program) than a GPU for this. A CPU is better as well.

Re:I'm not a computer scientist, and... (1)

crutchy (1949900) | about a year and a half ago | (#43521085)

Not sure if I'm right, but I tend to think of any GPU-based application as having to structure its data like pixels on a screen or image (since that's what GPUs are primarily designed to handle).

A CPU treats each pixel separately, whereas a GPU can process multiple pixels simultaneously.

The problem comes about if you try to feed data into a GPU that isn't like pixels.

Do the programming difficulties come from trying to trick the GPU into thinking it's processing pixels even though it may be processing bitcoin algorithms, etc.?

Re:I'm not a computer scientist, and... (1)

Morpf (2683099) | about a year and a half ago | (#43521213)

Well, you don't have to trick the GPU into thinking it processes pixels. You can do general-purpose computation with a language quite similar to C99.

You are right in the sense that you partition your problem into many sub-elements. In OpenCL those are called work items. But those are more like identical threads than pixels. Sometimes one maps the work items onto a 2D or 3D grid if the problem domain fits (e.g. image manipulation, physics simulation).

Actually it's not that hard implementing "normal" algorithms on a GPU. For example, the bitcoin mining algorithm can be implemented quite straightforwardly. It may even look almost the same as a C function written for a CPU. The programming is a bit difficult, as you have many restrictions to obey to get good performance out of a GPU.

Re:I'm not a computer scientist, and... (-1)

Anonymous Coward | about a year and a half ago | (#43522377)

GPU takes a big sheet of copper and stamps out 10000 pennies at a time. CPU takes a ribbon of copper and stamps out 1 penny at a time... but each iteration of the CPU is much faster than each iteration of the GPU. Perhaps the CPU can perform 7000 cycles per second, but the GPU can only perform 1 cycle per second. At the end of that second... the GPU produced 3000 more pennies than the CPU.

A CPU is like one having one fast whip for a line of niggers. A GPU is like having 10000 slower whips for 10000 niggers.

Re:I'm not a computer scientist, and... (5, Informative)

UnknownSoldier (67820) | about a year and a half ago | (#43520991)

If one woman can have a baby in 9 months, then 9 women can have a baby in one month, right?

No.

Not every task can be run in parallel.

Now, however, if your data is _independent_ then you can distribute the work out to each core. Let's say you want to search 2000 objects for some matching value. On an 8-core CPU, each core would need to do 2000/8 = 250 searches. On the Titan, each core could process 1 object.

There are also latency vs. bandwidth issues, meaning it takes time to transfer the data from RAM to the GPU, process it, and transfer the results back; but if the GPU's processing time is vastly less than the CPU's, you can still have HUGE wins.
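
To put rough numbers on that trade-off, here is a back-of-the-envelope estimate; all the figures are assumptions chosen for illustration (roughly a PCIe 2.0 x16 link and 2013-era throughputs), not measurements from the article.

#include <cstdio>

// Back-of-the-envelope: is it worth shipping the data to the GPU?
// All figures below are assumptions for illustration only.
int main() {
    const double bytes          = 2.0e9;   // 2 GB working set
    const double pcie_bps       = 6.0e9;   // ~6 GB/s effective over PCIe 2.0 x16
    const double gpu_throughput = 200e9;   // bytes/s the kernel can chew through
    const double cpu_throughput = 20e9;    // bytes/s a CPU version manages

    double transfer = bytes / pcie_bps;                       // one-way copy time
    double gpu_time = 2 * transfer + bytes / gpu_throughput;  // copy in + out + compute
    double cpu_time = bytes / cpu_throughput;                 // data already in host RAM

    printf("GPU (incl. transfers): %.2f s, CPU: %.2f s\n", gpu_time, cpu_time);
    // With these numbers the transfers dominate; if the data can stay
    // resident on the GPU across many queries (which is what the
    // article's system tries to do), that cost amortizes away.
    return 0;
}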

There are also SIMD / MIMD paradigms which I won't get into, but basically, in layman's terms, SIMD is able to process more data in the same amount of time.

You may be interested in reading:
http://perilsofparallel.blogspot.com/2008/09/larrabee-vs-nvidia-mimd-vs-simd.html [blogspot.com]
http://stackoverflow.com/questions/7091958/cpu-vs-gpu-when-cpu-is-better [stackoverflow.com]

When your problem domain & data are able to be run in parallel, then GPUs totally outclass CPUs in terms of processing power AND in price. i.e.
An i7 3770K costs around $330. Price/Core is $330/8 = $41.25/core
A GTX Titan costs around $1000. Price/Core is $1000/2688 = $0.37/core

Remember computing is about 2 extremes:

Slow & Flexible < - - - > Fast & Rigid
CPU (flexible) vs GPU (rigid)

* http://www.newegg.com/Product/Product.aspx?Item=N82E16819116501 [newegg.com]
* http://www.newegg.com/Product/Product.aspx?Item=N82E16814130897 [newegg.com]

Re:I'm not a computer scientist, and... (3, Funny)

Anonymous Coward | about a year and a half ago | (#43521785)

Now, however, if your data is _independent_ then you can distribute the work out to each core.

Let me translate this into a woman-baby analogy: if one woman can have a baby in 9 months, then 9 women can have 9 babies in 9 months. At first the challenge is juggling the timing of dates and dividing the calendar so that conception events land as near to each other as possible, to keep up the efficiency and synchronization. Afterwards the challenge is the alimony, paying for college, and particularly Thanksgiving, when the fruits of the labor come together.

Re:I'm not a computer scientist, and... (1)

VortexCortex (1117377) | about a year and a half ago | (#43521825)

If one woman can have a baby in 9 months, then 9 women can have a baby in one month, right?

No.

You're wrong, otherwise we'd need close to 130 million months per year. Furthermore, the 9 women have their 9 babies after ~9 months, yielding an average production rate of 1 bpm (one baby per month) from this group of women, if kept perpetually pregnant. If we put 90 women in the baby farm they will produce TEN babies per month.

Some people's kids, I swear. They must have botched the batch of logic circuits in your revision; this is Matrixology 101.

Re:I'm not a computer scientist, and... (3, Insightful)

anagama (611277) | about a year and a half ago | (#43522295)

I think you totally missed his point -- tin whiskers on your circuit board? Blown caps?

The fact that 9 women can have 9 babies in 9 months, for an average rate of 1/mo, does not disprove the assertion that 9 women cannot have __a__ (i.e. a single) baby in one month. You're talking about something totally different, and being awfully smug about it to boot.

Re:I'm not a computer scientist, and... (1)

mwvdlee (775178) | about a year and a half ago | (#43522619)

What if you wanted only one baby? You'd still have to wait nine months, no matter how many women are involved.

Re:I'm not a computer scientist, and... (1)

BitZtream (692029) | about a year and a half ago | (#43521521)

They do ONE thing well: floating point ops. EVERYTHING ELSE THEY SUCK AT, including simple logic checks; if statements are painfully, mind-numbingly slow on the GPU.

Re:I'm not a computer scientist, and... (0)

Anonymous Coward | about a year and a half ago | (#43521641)

I want to know why GPUs are so much better at some tasks than CPUs? And, why aren't they used more often if they are orders of magnitude faster?

CPUs are built for multitasking a ton of dynamic, middle-of-the-road workloads.
GPUs are built more for a specific type of workload, and for those they work much better.

CPUs use tons of die space for local cache, and main memory latency is lower than that of dedicated graphics memory. A PC is basically rigged to work on smaller, random data sets. That's mostly what they do: switching between tons of different tasks all the time, with low latency.

GPUs are weighted toward parallel processing units, very small caches, and higher-latency but much higher-bandwidth memory. They work on consistently huge, parallelized, predictable, math/computation-heavy workloads.

Now, look at the PS4's architecture: its main memory is all higher-latency GDDR5.
It will get by because it will still have a decent-sized cache on the main processor, and it really doesn't have to multitask much. That could be the future PC gaming architecture =D

Same reason you can buy a $99 supercomputer (1)

Anonymous Coward | about a year and a half ago | (#43521763)

They're massively more parallel, running many more smaller simpler cores.

It's the same reason these guys can make a 16-core parallel computer for $99: the cores are focused on their job, so they can be smaller and cheaper, and more of them can fit on a die.
http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone/

So these guys can run 8 daughter boards with 64 cores per board, 512 cores total, and it looks like they plan on scaling to 4096 cores because they use the top 12 bits of the address as the core routing id.
The tradeoff with all those cores is that they're dirt-simple cores: moves, adds, branches, and some floating point ops (it even misses divide, which is done in software; but for signal processing multiply-add is the one that needs to be fast, and it's coded as a single instruction).

If you read up on your high-end graphics card, it might have 900+ CUDA cores, but those are really just ALU cores; the actual thread-running cores are far fewer than that. But the ALUs can be run in parallel.

So a vector multiply is done as a parallel operation on these ALU blocks, and many other operations break down to be parallel in the same way.

Re:I'm not a computer scientist, and... (0)

H0p313ss (811249) | about a year and a half ago | (#43522119)

I want to know why GPUs are so much better at some tasks than CPUs? And, why aren't they used more often if they are orders of magnitude faster?

Thanks.

I'm glad you put the preface in there, because it's basic comp. sci.

Re:I'm not a computer scientist, and... (1)

r2kordmaa (1163933) | about a year and a half ago | (#43522273)

A CPU has a small number of very complex cores, good for fast decision making, e.g. managing OS resources. A GPU has lots of very simple cores, useless for decision making but great for parallel number crunching.

Re:I'm not a computer scientist, and... (0)

Anonymous Coward | about a year and a half ago | (#43522763)

I want to know why GPUs are so much better at some tasks than CPUs? And, why aren't they used more often if they are orders of magnitude faster?

Thanks.

The explanations you've got so far are either way too complex for a non-computer-scientist to understand or just plain useless. I'll try to do better.

A typical GPU has a very large number of processor cores (somewhere between 32 for low end models up to 128 or so for high end ones). They have a few design aspects that make these processor cores different to standard CPU cores:

* They have a slower clock speed than CPU cores (typically around 1 GHz). This doesn't matter much because they do more in each clock cycle (operations run on 4-component vectors rather than single numbers; CPUs can sometimes do this kind of thing too, just not as well because they aren't really designed for it) and because there are far more than twice as many cores, so the clock-speed disadvantage is easily made up.
* They have a faster memory interface than CPUs. A CPU has to integrate with a motherboard that has a variable amount of memory in multiple slots. This makes the memory interface design more complex, which slows it down. GPUs are usually directly soldered to a small board with a fixed amount of memory in a single bank that is also soldered directly to the board. They also often have more pins dedicated to memory, as they don't have to conform to existing standards (i.e. the number of pins on a DIMM), so you often see 256-bit wide memory interfaces rather than the 128-bit wide interface most current CPUs have. These advantages mean that while CPUs typically manage about 20GB/s of memory transfer, a modern GPU can manage around 200GB/s (see the back-of-the-envelope sketch after this list).
* In order to fit so many cores in a single chip, the cores have to be really simple. They achieve this by making them really good at what they do a lot of (mathematical calculations) but sacrificing what they don't really need to do much of (making decisions between different paths of a calculation). They also sacrifice cache memory, which doesn't help very much for the kind of calculations they're designed for (which typically work with very large sets of data with each item accessed with roughly equal probability, whereas the kind of work CPUs do often has a small amount of data that is accessed a lot and the rest only occasionally).
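
As a back-of-the-envelope check on the bandwidth bullet above, using the GTX Titan's published bus width and data rate purely as an example:

#include <cstdio>

int main() {
    // GTX Titan (2013): 384-bit GDDR5 bus at an effective 6.0 GT/s.
    const double bus_bits    = 384.0;
    const double gtps        = 6.0e9;                      // transfers per second
    const double bytes_per_s = (bus_bits / 8.0) * gtps;    // 48 bytes per transfer

    printf("theoretical GPU bandwidth: %.0f GB/s\n", bytes_per_s / 1e9);  // ~288 GB/s

    // A dual-channel DDR3-1600 desktop CPU, by comparison:
    const double ddr3 = 2 * 8.0 * 1.6e9;                   // ~25.6 GB/s
    printf("dual-channel DDR3-1600: %.1f GB/s\n", ddr3 / 1e9);
    return 0;
}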

Re:I'm not a computer scientist, and... (0)

Anonymous Coward | about a year ago | (#43523283)

It's a completely different style of architecture. CPUs are generic. GPUs, on the other hand, can implement an entire algorithm in hardware with a single instruction call. That means they're less flexible and can't do many things, but what they can do, they can do extremely quickly (just look at the difference between software rendering and hardware-accelerated rendering in speed/quality). They're not used often because of that lack of flexibility, which makes them unsuitable for many types of program, or at least much, much harder to program an algorithm for.

That Didn't Take Long: Database Down For Maint. (2)

cmholm (69081) | about a year and a half ago | (#43520589)

Slashdotted? I happened to catch the story just as it went live, and hit the link to the service. After scrolling the map and getting a couple of updates: Database is down for maintenance. The front end may not be as high performance as the back... or it may have been coincidence.

Re:That Didn't Take Long: Database Down For Maint. (5, Informative)

tmostak (2904307) | about a year and a half ago | (#43520839)

Hi... MapD creator here... this is the first time we've been seriously load tested, and I realize I might have a "locking" bug that's creating a deadlock when people hit the server at the exact same time. Todd

Re:That Didn't Take Long: Database Down For Maint. (1)

Phrogman (80473) | about a year and a half ago | (#43520875)

Well since this is apparently from the guy who the article is talking about, perhaps someone could mod it up just a bit?
No points here

Re:That Didn't Take Long: Database Down For Maint. (1)

Frankie70 (803801) | about a year and a half ago | (#43521265)

Is that why it's faster? It doesn't do any synchronization?

Re:That Didn't Take Long: Database Down For Maint. (5, Informative)

tmostak (2904307) | about a year and a half ago | (#43521611)

Har har... Well things got tricky when I wrote the code to support streaming inserts (not implemented in the current map) so you could view tweets or whatever else as they came in - this required a lot of fine-grained locking. May just bandaid this and give locks to connections as they come in until I can figure out what's going on. Todd

Re:That Didn't Take Long: Database Down For Maint. (1, Interesting)

static0verdrive (776495) | about a year and a half ago | (#43521303)

An open source license will help get those bugs squashed in no time! ;)

Re:That Didn't Take Long: Database Down For Maint. (0, Troll)

BitZtream (692029) | about a year and a half ago | (#43521617)

Citation Needed (RMS not allowed, sorry, we want reality here)

Please show quantitative proof that just open sourcing something instantly provides you faster feedback without any other costs or shut the fuck up with that tired bullshit.

Re:That Didn't Take Long: Database Down For Maint. (1)

Indigo (2453) | about a year and a half ago | (#43521759)

You mean, if Open Source isn't magic, it's bullshit? Way to straw man.

Re:That Didn't Take Long: Database Down For Maint. (1)

Anonymous Coward | about a year and a half ago | (#43521895)

No, he means (and said) if Open Source isn't magic, claiming Open Source is magic is bullshit.

Re:That Didn't Take Long: Database Down For Maint. (0)

Anonymous Coward | about a year ago | (#43523085)

So are you disputing the fact that many people can spot and fix bugs better than one person?

That's just nonsense. You can parallelize the work and many people will ALWAYS find bugs better than a single person.

A nice strawman by the way. The static0verdrive guy said essentially "getting more people to look at the code means they can spot and fix bugs faster than you, a single person, ever would". Then you go in, demanding quantitative proof about "instant" betterment - and here's the strawman - "without any other costs". The original person said nothing about "other costs".

Islamist Spring tweets (-1)

Anonymous Coward | about a year and a half ago | (#43520599)

kill all #jews
death to #america
allahu akbar!!!1

Not interesting. But hey, if it gets you funded....

and the most amazing thing (4, Funny)

roman_mir (125474) | about a year and a half ago | (#43520627)

as the TFS states he uses GPUs to do the data processing, but you are never going to believe what he uses to store the actual data, you won't believe it, that's why it's not mentioned in TFS. Sure sure, it's PostgreSQL, but the way the data was stored physically was in the computer monitor itself. Yes, he punched holes in computer monitors with a chisel and used punch card readers to read those holes from the screens.

Re:and the most amazing thing (4, Funny)

eyenot (102141) | about a year and a half ago | (#43520737)

Mod parent up!

Also: I heard he's using the printer port for communication. By spooling tractor feed paper between two printers in a loop, and by stopping and starting simultaneous paper-feed jobs, he can create a cybernetic feedback between the two printers that results in a series of quickly occurring "error - paper jam" messages that (due to two taped-down "reset" buttons) are quickly translated from the wide bandwidth analog physical matrix into kajamabits of digital codes. The perceived bandwidth gain is much higher than just a single one or zero at a time.

That way, he can access the mainframe any time, from any physical location, and it will translate directly into a virtual presence.

Re:and the most amazing thing (1)

roman_mir (125474) | about a year and a half ago | (#43520797)

They don't get it. He solved the speed of processing and the lack of long-term durability of storage by doing what's described in the original comment... Worked like a charm, without needing to rethink the entire problem of a single bus used to retrieve and store data on physical storage that still accesses data serially.

Re:and the most amazing thing (1)

crutchy (1949900) | about a year and a half ago | (#43521115)

By spooling tractor feed paper between two printers in a loop, and by stopping and starting simultaneous paper-feed jobs, he can create a cybernetic feedback between the two printers that results in a series of quickly occurring "error - paper jam" messages that (due to two taped-down "reset" buttons) are quickly translated from the wide bandwidth analog physical matrix into kajamabits of digital codes

i would be really careful doing that... the system may become self-aware

How's gpu that much faster?! (0)

Anonymous Coward | about a year and a half ago | (#43520631)

Could anyone give a brief and not overly technical explanation of this?

Re:How's gpu that much faster?! (1)

roman_mir (125474) | about a year and a half ago | (#43520735)

It's like tons of little fish devouring an elephant carcass rather than one shark doing the same. You asked for non-technical... Of course, it's still hard drives (or SSDs today) all the way down.

Re:How's gpu that much faster?! (1)

crutchy (1949900) | about a year and a half ago | (#43521133)

does the shark have a laser?

Re:How's gpu that much faster?! (0)

Anonymous Coward | about a year and a half ago | (#43521507)

that depends on whether or not the shark is equipped with a laser.

sounds like... (1, Redundant)

stenvar (2789879) | about a year and a half ago | (#43520759)

It sounds like he's doing standard GPU computations, loading everything into memory, and then calling it a "database", even though it really isn't a "database" in any traditional sense.

Re:sounds like... (5, Informative)

tmostak (2904307) | about a year and a half ago | (#43520879)

Hi, MapD creator here - and I have to disagree with you. The database ultimately stores everything on disk, but it caches what it can in GPU memory and performs all the computation there. So all the SQL operations are occurring on the GPU, after which, in case of the tweetmap demo, the results are rendered to a texture before being sent out as a png. But it works equally well as a traditional database - it doesn't do the whole SQL standard yet but can handle aggregations, joins, etc just like a normal database, just much faster. Todd
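
For readers curious what "all the SQL operations occurring on the GPU" can look like at the kernel level, here is a generic, hedged sketch of a GROUP BY count; it is illustrative only, not MapD's implementation.

#include <cuda_runtime.h>
#include <cstdint>

// SELECT group_id, COUNT(*) FROM t GROUP BY group_id, expressed as a kernel.
// Each thread reads one row of the (GPU-resident) group column and
// bumps the matching counter; atomicAdd handles the collisions.
__global__ void groupByCount(const int32_t *group_col, int n,
                             uint32_t *counts, int num_groups) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n) return;
    int g = group_col[row];
    if (g >= 0 && g < num_groups) {
        atomicAdd(&counts[g], 1u);
    }
}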

Re:sounds like... (3, Interesting)

nebosuke (1012041) | about a year and a half ago | (#43521049)

Just out of curiosity, did you use PGStrom [postgresql.org] or roll your own pgsql/GPU solution? If the latter, did you also hook into pgsql via the FDW interface or some other way?

Re:sounds like... (5, Informative)

tmostak (2904307) | about a year and a half ago | (#43521149)

So I use postgres all the time, but MapD isn't built on Postgres, it actually stores its own data on disk in column-form in (I admit crude) memory-mapped files. I have written a Postgres connector that connects MapD to Postgres though since I use postgres to store the tweets I harvest for long-term archiving. The connector uses pqxx (the C++ Postgres library). Todd
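
A hedged sketch of the general pattern (the file name and flat int32 layout are assumptions for illustration, not MapD's actual on-disk format): memory-map a raw column file on the host, then copy it into GPU memory so queries can run against the GPU-resident copy.

#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cuda_runtime.h>

// Map a raw int32 column file into the address space and copy it into
// GPU memory. "col_price.bin" and the flat layout are made up here.
int main() {
    int fd = open("col_price.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    size_t bytes = st.st_size;

    void *host = mmap(nullptr, bytes, PROT_READ, MAP_PRIVATE, fd, 0);
    if (host == MAP_FAILED) { perror("mmap"); return 1; }

    void *dev = nullptr;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // column now GPU-resident

    printf("cached %zu bytes of the column on the GPU\n", bytes);

    cudaFree(dev);
    munmap(host, bytes);
    close(fd);
    return 0;
}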

Re:sounds like... (1)

Anonymous Coward | about a year and a half ago | (#43521077)

I'd be very interested to hear more details about the GPU SQL algorithms (JOIN in particular) if you are willing to share them. Did you use the set operations in Thrust or did you write something custom?

Some of my colleagues are planning on releasing an open source library and some online tutorials about hash join and sort merge join in CUDA, and I would be very interested to share notes.

Re:sounds like... (5, Informative)

tmostak (2904307) | about a year and a half ago | (#43521179)

I'm not using thrust - I rolled my own hash join algorithm. This is something I still haven't optimized a great deal and I'm sure your stuff runs much better. Would love to talk. Just contact me on Twitter (@toddmostak) and I'll give you my contact details. Todd
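
For readers following along, here is a heavily simplified, illustrative sketch of the probe side of a GPU hash join (not MapD's algorithm, and not the library mentioned above): the hash table is assumed to be an open-addressing array built elsewhere and already copied to the GPU.

#include <cuda_runtime.h>
#include <cstdint>

// Probe phase of a very simplified GPU hash join. The table is an
// open-addressing array of (key, payload) slots, with EMPTY_KEY
// marking unused slots.
#define EMPTY_KEY INT32_MIN

struct Slot { int32_t key; int32_t payload; };

__device__ inline uint32_t hashKey(int32_t k, uint32_t capacity) {
    return (static_cast<uint32_t>(k) * 2654435761u) % capacity;  // Knuth multiplicative hash
}

// One thread per probe-side row: linear-probe the table and record the
// matching payload (or -1 when the key is absent).
__global__ void hashProbe(const int32_t *probe_keys, int n,
                          const Slot *table, uint32_t capacity,
                          int32_t *out_payload) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    int32_t k = probe_keys[i];
    uint32_t pos = hashKey(k, capacity);
    int32_t result = -1;

    for (uint32_t step = 0; step < capacity; ++step) {
        Slot s = table[pos];
        if (s.key == k)         { result = s.payload; break; }
        if (s.key == EMPTY_KEY) break;            // key not present
        pos = (pos + 1) % capacity;               // linear probing
    }
    out_payload[i] = result;
}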

Re:sounds like... (1)

korgitser (1809018) | about a year and a half ago | (#43521447)

I wonder what it would mean for the data if you were to lossily compress that png...

Re:sounds like... (1)

cbhacking (979169) | about a year and a half ago | (#43521499)

Horrible things, probably. Good thing PNG is lossless compression...

Re:sounds like... (1)

stenvar (2789879) | about a year and a half ago | (#43522751)

So, it sounds like you're implementing SQL as a data analytics language for in-memory data (plus a bunch of potentially useful algorithms), but apparently without the features that usually make a database a "database", like persistence, transactions, rollbacks, etc. It's those other features that make real databases slow, which is why you can't claim huge speedups over "databases" since you're not implementing the same thing.

Data analytics on GPUs is a great thing, which is why tons of people are doing it. SQL isn't usually the language of choice because it isn't a good match and you have to build everything from scratch. GPU support in languages like R and Matlab gives you all the analytics features of SQL with a nicer syntax and really fast performance. Those languages also have tons of useful libraries for GIS, text analysis and visualization built in already.

"a few hundred really small processors" (0)

Anonymous Coward | about a year and a half ago | (#43520769)

I'd hardly call them "really small processors" haha.

PostgreSQL used GPU 2 years ago (1)

Anonymous Coward | about a year and a half ago | (#43520775)

The 70x speedup seems optimistic. Does this include ALL the overheads for the GPU?
But this was done and patented over 2 years ago.
http://www.scribd.com/doc/44661593/PostgreSQL-OpenCL-Procedural-Language

And there has been earlier work using SQLite on GPUs.

Re:PostgreSQL used GPU 2 years ago (5, Informative)

tmostak (2904307) | about a year and a half ago | (#43520935)

The 70X is actually highly conservative - and this was benched against an optimized parallelized main-memory (i.e. not off of disk) CPU version, not say MySQL. On things like rendering heatmaps, graph query operations, or clustering you can get 300-500X speedups. The database caches what it can in GPU memory (could be 128GB on one node if you have 16 GPUs) and only sends back a bitmap of the results to be joined with data sitting in CPU memory. But yeah, if the data's not cached, then it won't be this fast. That's true, a lot of work has been done on GPU database processing - this is a bit different I think b/c it runs on multiple GPUs and b/c it tries to cache what it can on the GPU. Todd (MapD creator)

Re:PostgreSQL used GPU 2 years ago (2)

asicsolutions (1481269) | about a year and a half ago | (#43521175)

Altera and Xilinx both have high-level synthesis tools out that can target FPGAs using generic C. The Altera one allows you to target GPUs, CPUs, or FPGAs. In the case of highly parallel tasks, an FPGA can run many times faster than even a GPU. There are fairly large gate-count devices with ARM cores available now, so you can move tasks around for better performance. I'd love to see some of these tasks targeting these devices.

Re:PostgreSQL used GPU 2 years ago (2)

anarcobra (1551067) | about a year and a half ago | (#43521505)

Actually, depending on the specific problem, a GPU can still be significantly faster than an FPGA, mostly because of the large number of processing units.
FPGAs are far more power-efficient, though.

First customer? (0)

Shavano (2541114) | about a year and a half ago | (#43520867)

The Egyptian government...

So this is where all the Titans ended up.... (1)

BulletMagnet (600525) | about a year and a half ago | (#43520919)

Still waiting for one/two....to play games on....

Re:So this is where all the Titans ended up.... (1)

do0b (1617057) | about a year and a half ago | (#43521113)

bought one, worth every penny!

Re:So this is where all the Titans ended up.... (0)

Anonymous Coward | about a year and a half ago | (#43521823)

But not every dollar.

Am I the only one? (0)

Anonymous Coward | about a year and a half ago | (#43520985)

That thought this would be a searchable database of all GPUs in existence? Because that sounded kinda useful.

obvious question (1)

crutchy (1949900) | about a year and a half ago | (#43521143)

does it blend?

Could have... (1)

Ghjnut (1843450) | about a year and a half ago | (#43521217)

Maybe we should make it a habit of giving the owner some warning before slashdotting them. I know that if I ever get any concept development project up and running, I'm pretty excited to show my friends and tend to make it accessible before it's optimized enough to handle that king of onslaught.

Re:Could have... (1)

Ghjnut (1843450) | about a year and a half ago | (#43521235)

kind*, I'm not sure whether or not slashdot holds the title of 'king of onslaught'.

Re:Could have... (1)

neonmonk (467567) | about a year and a half ago | (#43521433)

Where's Onslaught?

Re:Could have... (2)

aiht (1017790) | about a year and a half ago | (#43521783)

Where's Onslaught?

It's in Norweight.

Re:Could have... (0)

Anonymous Coward | about a year and a half ago | (#43521955)

Re:Could have... (1)

BitZtream (692029) | about a year and a half ago | (#43521643)

The owner is the submitter. He knew what he was getting into, or should have.

Re:Could have... (2)

tmostak (2904307) | about a year and a half ago | (#43521703)

Umm... no I didn't submit this. Perhaps the author of the article did. But I may have just done a super-hacky bandaid fix (also disallowed click requests - which may be a bit buggy) - we'll see if it holds up. Todd

Ingres-Actian and Vectorwise (1)

not_quite_a_user (2904313) | about a year and a half ago | (#43521225)

Ingres was renamed Actian, and they have released an analytic/reporting database called "Vectorwise" which makes use of SIMD and many other innovations in data throughput techniques (everything in the Intel optimisation manual plus a lot more), and it gets more than 70 times the performance. Check out the TPC-H results. "This is not an advertisement."

Large datasets are mostly IO limited (5, Interesting)

zbobet2012 (1025836) | about a year and a half ago | (#43521229)

While cool and all, 125 million tweets with geotagging is at most 125,000,000 * 142 bytes, roughly 18 GB. That is not what "big data" considers a large data set. Indeed, most "big data" queries are IO-limited. For around 16k USD you can fit that entire working set in memory. You are not really in the "big data" realm until you have datasets in the 10's of TB's compressed (100's of TB's uncompressed).
For these kinds of datasets, and where more compute is necessary, there is Mars [gpgpu.org].

Re:Large datasets are mostly IO limited (0)

Anonymous Coward | about a year and a half ago | (#43521459)

I don't want to undo my moderation, so I have to post anonymously. Which is probably prudent anyway because I have to ask...

Does MARs need women?

Re:Large datasets are mostly IO limited (1)

rtaylor (70602) | about a year and a half ago | (#43521603)

Agreed. That easily fits into memory (3 times actually) on our main OLTP DB.

If it can fit into ram for less than $50K, it's not big data.

Re:Large datasets are mostly IO limited (1)

greg1104 (461138) | about a year and a half ago | (#43522417)

This project's innovation is noting that GPUs have enough RAM now that you can push medium-sized data sets into them if you have enough available. With lots of cores and high memory bandwidth, in-memory data sets in a GPU can do multi-core operations faster than in-memory data sets in a standard CPU/memory combination.

That's great for simple data operations that are easy to run in parallel and when the data set is small enough to fit in your available hardware. Break any of those assumptions, and you've got a whole different set of problems to solve than what this is good for. I suspect none of those three requirements hold in the usual case for what people want out of "big data".

Re:Large datasets are mostly IO limited (4, Informative)

tmostak (2904307) | about a year and a half ago | (#43522555)

Hi - MapD creator here. Agreed, GPUs aren't going to be of much use if you have petabytes of data and are I/O bound, but what I think unfortunately gets missed in the rush to indiscriminately throw everything into the "big data" bucket is that a lot of people do have medium-sized (say 5GB-500GB) datasets that they would like to query, visualize and analyze in an iterative, real-time fashion, something that existing solutions won't allow you to do (even big clusters often incur enough latency to make real-time analysis difficult).

And then you have super-linear algorithms like graph processing, spatial joins, neural nets, clustering, and rendering blurred heatmaps, which do really well on the GPU and turn the memory-bound speedup of 70X into 400-500X. Particularly since databases are expected to do more and more viz and machine learning, I don't think these are edge cases.

Finally, although GPU memory will always be more expensive (but faster) than CPU memory, MapD can already run on a 16-card, 128GB-GPU-RAM server, and I'm working on a multi-node distributed implementation where you could string many of these together. So having a terabyte of GPU RAM is not out of the question, which, given the column-store architecture of the db, can be used more efficiently by caching only the necessary columns in memory. Of course it will cost more, but for some applications the performance benefits may be worth it.

I just think people need to realize that different problems need different solutions, and just b/c a system is not built to handle a petabyte of data doesn't mean it's not worthwhile.

Re:Large datasets are mostly IO limited (2)

stenvar (2789879) | about a year and a half ago | (#43522785)

that a lot of people do have medium-sized (say 5GB-500GB) datasets that they would like to query, visualize and analyze in an iterative, real-time fashion, something that existing solutions won't allow you to do

Yeah, they actually do. For in-memory queries, analysis, and visualization, people use statistical and numerical languages like R, Matlab, Python, and others (as well as tools with nice graphics frontends). And they have full GPU support available these days. In many cases, the GPU support parallelizes large array operations, in addition to implementing many additional special-purpose operations as well.

GPU is good - but you need the IOPS to leverage it (1)

Dave500 (107484) | about a year and a half ago | (#43521623)

For data processing workloads, a frequent problem with GPU acceleration is that the working dataset size is too large to fit into the available GPU memory and the whole thing slows to a crawl on data ingest (physical disk seeks, random much of the time) or disk writes for persisting the results.

For folks serious about getting good ROI on their GPU hardware in real world scenarios, I strongly recommend you take a look at the fusion IO PCIe flash cards, which now support writing to and reading from them directly from CUDA via DMA, with little to no CPU handling required. (See: http://developer.download.nvidia.com/GTC/PDF/GTC2012/PresentationPDF/S0619-GTC2012-Flash-Memory-Throttle.pdf).

I can't talk about what we do with it, but let's just say the following hardware combination has led to interesting results:
i) 16x PCIe slot chassis: http://www.onestopsystems.com/expansion_platforms_3U.php
ii) 8x Nvidia Kepler K20x's
iii) 8x Fusion IO 2.4TB IoDrive 2 Duo's

We have been able to sustain over 4 million data operations a second, each one processing ~16 KB of data in a recoverable, transactionally consistent manner, totaling around 50 gigabytes of data processed per second. All in a 5U deployment drawing less than 4 kilowatts.

Talk to IBM PureData - beat to the punch (2)

Gothmolly (148874) | about a year and a half ago | (#43521729)

Granted, it's not free or cheap, but IBM will ship you a prebuilt rack of 'stuff' that will load 5TB/hour and scan at 128GB/sec. PGStrom came out in the last year. Custom hardware/ASIC/FPGA for this sort of thing is not new.

Re:Talk to IBM PureData - beat to the punch (0)

Anonymous Coward | about a year and a half ago | (#43521943)

Note that he's not using "custom hardware/ASIC/FPGA" -- he's using GPUs. GPUs aren't as expensive, and don't perform as well -- basically they fall between CPU and custom setups.

Good to see things like this. (2)

idbeholda (2405958) | about a year and a half ago | (#43521883)

As a data analyst/software engineer, it makes me glad to see these kinds of actual strides being made to ensure that both data and software will eventually start being designed properly from their inception. To have a single-cluster database with anything more than a few thousand entries is nothing short of incompetence, and I believe anyone who does this should be publicly shamed and flogged. When dealing with excessively large amounts of data, it quickly becomes a necessity to have a parallel database design to ensure that searches aren't hampered by long query times. It genuinely makes me thrilled to see someone else use this kind of design other than me, so when I put out numbers on my end, maybe my results won't seem as fantastical or unbelievable. Even though I don't know you personally, keep up the good work, Todd.

Re:Good to see things like this. (1)

tmostak (2904307) | about a year and a half ago | (#43522095)

Thanks for the kind words! Hopefully this is just the start of a fun project... Todd

Or just skip right to the punchline... (0)

Anonymous Coward | about a year and a half ago | (#43522115)

...and do big data on an FPGA cluster.

Q: Whats better than a GPU database? (1)

WaffleMonster (969671) | about a year and a half ago | (#43522277)

A: Indexes that don't suck.

Using GPUs and massively parallel blah blah blah is cool and all, but most databases are not processor-limited, so why should we care?

Re:Q: Whats better than a GPU database? (3, Insightful)

tmostak (2904307) | about a year and a half ago | (#43522319)

Try to heatmap or do hierarchical clustering on a billion rows in a few milliseconds with just the aid of indexes - not all applications need lots of cores and high memory bandwidth - but some do.
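
For concreteness, the core of a GPU heatmap is little more than a 2D histogram; a hedged, illustrative kernel (not MapD's renderer) looks like this.

#include <cuda_runtime.h>
#include <cstdint>

// Bin lon/lat points into a width x height grid of counts. Every thread
// handles one point; atomicAdd resolves collisions when many points
// land in the same cell. Blurring/coloring the counts into an image
// would be a separate (also parallel) pass.
__global__ void heatmapBin(const float *lon, const float *lat, int n,
                           uint32_t *grid, int width, int height) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Map lon [-180,180] and lat [-90,90] onto the grid.
    int x = static_cast<int>((lon[i] + 180.0f) / 360.0f * width);
    int y = static_cast<int>((lat[i] +  90.0f) / 180.0f * height);
    if (x < 0 || x >= width || y < 0 || y >= height) return;

    atomicAdd(&grid[y * width + x], 1u);
}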

MediumData (0)

biodata (1981610) | about a year and a half ago | (#43522709)

40 million rows is what we used to manage in Oracle tables in the late 80s. Jeez, did this guy have no clue how to build a database?

Patented technology (0)

Anonymous Coward | about a year and a half ago | (#43522735)

AFAIR, using a database with a GPU was patented by IBM some years ago.

To state the obvious (0)

Anonymous Coward | about a year and a half ago | (#43522767)

It's great the GPU is faster than the CPU for massively parallel non-conditional operations. Why not use the CPU in addition to the GPU? Does the computer memory speed or bus bandwidth prevent it?

Code optimization (1)

KPU (118762) | about a year and a half ago | (#43523065)

Student writes inefficient code, learns how to optimize it using known techniques, it becomes faster. Film at 11.

Oblig. In Soviet Russia ... (1)

shikaisi (1816846) | about a year ago | (#43523141)

In Soviet Russia, GPU database creates you. Oh wait, wrong GPU [wikipedia.org]