
Chinese Lab Speeds Through Genome Processing With GPUs

timothy posted more than 2 years ago | from the looking-for-the-reset-button dept.


Eric Smalley writes "The world's largest genome sequencing center once needed four days to analyze data describing a human genome. Now it needs just six hours. The trick is servers built with graphics chips — the sort of processors that were originally designed to draw images on your personal computer. They're called graphics processing units, or GPUs — a term coined by chip giant Nvidia. This fall, BGI — a mega lab headquartered in Shenzhen, China — switched to servers that use GPUs built by Nvidia, and this slashed its genome analysis time by more than an order of magnitude."
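
For scale, those numbers work out to a sixteenfold speedup, which is indeed more than an order of magnitude. A quick sanity check in Python, using only the figures quoted above:

# Sanity check of the speedup quoted in the summary.
old_hours = 4 * 24   # "once needed four days"
new_hours = 6        # "now it needs just six hours"
print(old_hours / new_hours)   # 16.0, i.e. more than 10x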

408 comments

first (-1)

Anonymous Coward | more than 2 years ago | (#38631728)

turd of the day.

Re:first (4, Insightful)

Pieroxy (222434) | more than 2 years ago | (#38632080)

So, a site dedicated to nerds needs to explain what a GPU is? Are we not nerds anymore?

Re:first (1)

Samantha Wright (1324923) | more than 2 years ago | (#38632156)

Yeah, Wired, what gives?

(Hint: the summary is a direct quote from the enterprise-y TFA [wired.com] .)

Re:first (0)

Anonymous Coward | more than 2 years ago | (#38632312)

Is there a name for what you did? I mean, replying to an irrelevant FP with the same comment [slashdot.org] that an AC without the benefit of karma made nearly an hour earlier. Seems low, somehow.

Re:first (0)

Anonymous Coward | more than 2 years ago | (#38633162)

No we are D-e-V-o

Re:first (0)

Anonymous Coward | more than 2 years ago | (#38633178)

My first thought as well.

The Future Is Here!! (5, Funny)

mastershake82 (948396) | more than 2 years ago | (#38631752)

Sounds like these newfangled "GPUs" are gonna change the world.

Depends on the needs of the problem (0)

Anonymous Coward | more than 2 years ago | (#38631856)

You MAY want to check this out (FPGAs have advantages too) -> http://www.bing.com/search?q=FPGA+and+GPU&go=&qs=ns&form=QBLH [bing.com]

* Mind you - not EVERY kind of "problem" lends itself as well to being solved on GPUs...

APK

P.S.=> Still, Field Programmable Gate Arrays have merits too (especially in areas where LOW LATENCY is a mandate - such as robotics!)... apk

Re:Depends on the needs of the problem (-1, Troll)

MichaelKristopeit500 (2018072) | more than 2 years ago | (#38632066)

you probably WANT to check OUT "THIs" [wikipedia.org]

* you're an idiot.

Mike: You're the "Lon Chaney Sr." of /. (lol) (-1)

Anonymous Coward | more than 2 years ago | (#38632150)

Now, you? I don't even get upset about you trolling me. Why?? Because I think of you as "The Lon Chaney Sr." of /., "the man of a 1,000 faces" here on /. (per my subject-line above & this link -> http://en.wikipedia.org/wiki/Man_of_a_Thousand_Faces [wikipedia.org])

* It's because of your # of registered "luser" accounts you keep here (how many are you up to now? 500??)

APK

P.S.=> You're the 1 troll I don't even bother with - mainly because I figure either 1 of 2 things in YOUR case:

1.) Everyone's given you so much SHIT, you're just reacting "auto-defensively"...

or

2.) You're messed up - like you have been here with me (I'm someone who's actually stuck up for you a couple of times because of #1 above; hope I am not wrong on that account)... however, YOU have started up with others before too, as you have with me for no reason I can see, because I never give you any guff, Mike (mainly because of #1 above)... & this IS a "classic example thereof"...

... apk

Re:Mike: You're the "Lon Chaney Sr." of /. (lol) (-1, Troll)

MichaelKristopeit410 (2018830) | more than 2 years ago | (#38632178)

ur mum's face're messed up.

cower in my shadow some more, feeb.

you're completely pathetic.

An application of "ReVeRsE-PsyChoLoGy" (-1)

Anonymous Coward | more than 2 years ago | (#38632224)

".citehtap yletelpmoc er'uoy .beef ,erom emos wodahs ym ni rewoc .pu dessem er'ecaf s'mum ru" - by MichaelKristopeit410 (2018830) /.'s OWN Lon Chaney Sr./Man of a 1,000 faces/registered 'luser' accounts but still a "ne'er-do-well" /. OFF-TOPIC TROLL on Sunday July 10, @06:32AM (#36710070)

"???"

Uhm... Could we get a translation of that off-topic "troll-speak/trolllanguage" of yours, please?

---

* And, you're an off-topic troll - no questions asked...SEE MY SUBJECT LINE ABOVE!

APK

P.S.=> Yes, it must just have been another off-topic, done-nothing-of-significance-with-his-life troll spewing his off-topic b.s. again & not contributing to the ongoing conversations. Oh well - No biggie!

("ReVeRsE-PsYcHoLoGy", for trolls - Courtesy of this code by "yours truly" in less than 1 second flat):

---

#TrollTalkComReversePsychologyKiller.py (Ver #2 by APK)

def reverse(s):
    trollstring = ""
    try:
        for apksays in s:
            trollstring = apksays + trollstring
    except Exception:
        print("error/abend in reverse function")
    return trollstring

s = ""
print(reverse(s))

try:
    s = "Insert whatever 'trollspeak/trolllanguage' gibberish occurs here..."
    s = reverse(s)
    print(s)
except Exception as e:
    print(e)

---

... apk

Re:An application of "ReVeRsE-PsyChoLoGy" (-1, Troll)

MichaelKristopeit411 (2018832) | more than 2 years ago | (#38632344)

ur mum's face're an off-topic troll.

you're an ignorant hypocrite.

cower in my shadow some more, feeb.

you're completely pathetic.

U need more "therapy" ("ReVeRsE-PsyChoLoGy", lol) (-1)

Anonymous Coward | more than 2 years ago | (#38632416)

".citehtap yletelpmoc er'uoy .beef ,erom emos wodahs ym ni rewoc .etircopyh tnarongi na er'uoy .llort cipot-ffo na er'ecaf s'mum ru" - by MichaelKristopeit411 (2018832) /.'s OWN Lon Chaney Sr./Man of a 1,000 faces/registered 'luser' accounts but still a "ne'er-do-well" /. OFF-TOPIC TROLL on Sunday July 10, @06:32AM (#36710070)

"???"

Uhm... Could we get a translation of that off-topic "troll-speak/trolllanguage" of yours, please?

---

* And, you're an off-topic troll - no questions asked...SEE MY SUBJECT LINE ABOVE!

APK

P.S.=> Yes, it must just have been another off-topic, done-nothing-of-significance-with-his-life troll spewing his off-topic illogical ad hominem attack attempts (always failing) b.s. again & not contributing to the ongoing conversations. Oh well - No biggie!

("ReVeRsE-PsYcHoLoGy", for trolls - Courtesy of this code by "yours truly" in less than 1 second flat):

---

#TrollTalkComReversePsychologyKiller.py (Ver #2 by APK)

def reverse(s):
    trollstring = ""
    try:
        for apksays in s:
            trollstring = apksays + trollstring
    except Exception:
        print("error/abend in reverse function")
    return trollstring

s = ""
print(reverse(s))

try:
    s = "Insert whatever 'trollspeak/trolllanguage' gibberish occurs here..."
    s = reverse(s)
    print(s)
except Exception as e:
    print(e)

---

... apk

Re:U need more "therapy" ("ReVeRsE-PsyChoLoGy", lo (0)

MichaelKristopeit412 (2018834) | more than 2 years ago | (#38632664)

ur mum's face're an off-topic troll.

you're an ignorant hypocrite.

cower in my shadow some more, feeb.

you're completely pathetic.

LMAO, perhaps music can soothe the savage beast? (-1)

Anonymous Coward | more than 2 years ago | (#38632720)

LMAO -> http://www.youtube.com/watch?v=JrUGYT_gl9I [youtube.com] = MikeK, lol...

APK

P.S.=> How many accounts is that you're up to now? Well, lol, let's see: I said 500 on a guess, & MichaelKristopeit412 looks like your 412th, lmao... I was wrong, but (rotflmao)... who cares?? Why??

Well - You're the "Man..." (see song above, lol, same quality as you, lol)... apk

Re:LMAO, perhaps music can soothe the savage beast (1)

MichaelKristopeit413 (2018846) | more than 2 years ago | (#38632948)

i am michael kristopeit. i have 1 face.

you're an idiot.

cower in my shadow some more, feeb.

you're completely pathetic.

Re:LMAO, perhaps music can soothe the savage beast (-1)

Anonymous Coward | more than 2 years ago | (#38633540)

i am michael kristopeit. i have 1 face.

... still stuck with that one? ... the "ask for a refund" thing didn't work out, huh?

We'll continue to pray for you, but without much hope of success.

Have you considered changing religion? Or becoming a ter'rist? In both cases, you can hide it with a neck-beard, and if you blow yourself up, you might get to lose your virginity in the next world. Provided you're no longer "into sheep".

Re:U need more "therapy" ("ReVeRsE-PsyChoLoGy", lo (1)

MichaelKristopeit414 (2018850) | more than 2 years ago | (#38633046)

i've never had any "therapy".

prescribing medical treatment without a "license" is a "crime".

who is "we"?

you're exactly what you've claimed to be: NOTHING.

cower in my shadow some more, feeb.

you're completely pathetic.

Re:Depends on the needs of the problem (0)

Vegemeister (1259976) | more than 2 years ago | (#38633288)

There's gonna be a what?

TROLL FIGHT!

Re:Depends on the needs of the problem (1)

MichaelKristopeit414 (2018850) | more than 2 years ago | (#38633402)

ur mum's face is a what? TROLL FIGHT.

cower in my shadow some more behind your chosen masculine brewers yeast based pseudonym, feeb.

you're completely pathetic.

Re:Depends on the needs of the problem (-1)

Anonymous Coward | more than 2 years ago | (#38633620)

Hey, feeb, the poster wasn't who you thought they were. And you're still pathetic.

You're messed up. /.'s OWN Lon Chaney Sr./Man of a 1,000 faces/registered 'luser' accounts but still a "ne'er-do-well". http://www.youtube.com/watch?v=JrUGYT_gl9I [youtube.com]

Here's some free advice - go register yet another throw-away account. This one is boring.

APK

Re:The Future Is Here!! (4, Insightful)

MollyB (162595) | more than 2 years ago | (#38632054)

If one reads to page 2 of TFA, they only claim the technique works well in this instance. They go on:

Even for computer-intensive aspects of analysis pipelines, GPUs aren’t necessarily the answer. “Not everything will accelerate well on a GPU, but enough will that this is a technology that cannot be ignored,” says Gollery. “The system of the future will not be some one-size-fits-all type of box, but rather a heterogeneous mix of CPUs, GPUs and FPGAs depending on the applications and the needs of the researcher.”

and

GPUs have cranked up the speed of genome sequencing analysis, but in the complicated and fast-moving field of genomics that doesn’t necessarily count as a breakthrough. “The game changing stuff,” says Trunnell, “is still on the horizon for this field.”

So yes, the article is a bit breathless, but if utilizing GPUs helps cure my potentially impending genetic disorder, I'm all for it.

Re:The Future Is Here!! (1)

davester666 (731373) | more than 2 years ago | (#38632350)

Yes, unfortunately, you will be unable to pay for that cure, unless you own the business you work for.

Re:The Future Is Here!! (1)

Anonymous Coward | more than 2 years ago | (#38632506)

I used to work with Texas Instruments TMS34010/32020/34082 processors in the 1990s. These were surface-mounted onto a VGA graphics board, along with a number of TMS34082 vector processors and a few megabytes of memory (the Hercules Graphics Station Card, as an example). They had this really neat feature where you could cross-compile, download and execute programs on these boards as "extensions". You could do anything from encryption/decryption and image-processing to drawing lines and rendering triangles.

Initially, these were known as "graphics accelerators", though they quickly became "graphics deaccelerators" as motherboard bus and CPU clock speeds increased so rapidly - in those days PC clock speeds were 20-25 MHz, graphics card clock speeds 60-90 MHz.

News for nerds (5, Funny)

Anonymous Coward | more than 2 years ago | (#38631756)

I always wondered what GPUs are. Thanks Slashdot!

Re:News for nerds (0)

Anonymous Coward | more than 2 years ago | (#38631812)

I always thought it stood for General Processing Units!

Re:News for nerds (5, Funny)

galanom (1021665) | more than 2 years ago | (#38631842)

No, it's "Guinea Pig Units"

Re:News for nerds (1)

Anonymous Coward | more than 2 years ago | (#38631932)

And there I was thinking there was a chance it could be "Guanxi Publicity Units".

Re:News for nerds (0)

Anonymous Coward | more than 2 years ago | (#38632184)

Giant Penis Unused

Re:News for nerds (1)

formfeed (703859) | more than 2 years ago | (#38632188)

German Pilsner untergärig

Re:News for nerds (0)

Anonymous Coward | more than 2 years ago | (#38632226)

"The method of general-purpose computing on graphics processing units, or NAMBLA, holds the promise of delivering orders of magnitude increases in performance and reducing power and space requirements for problems that can be structured to take advantage of the highly parallelized architecture."

Re:News for nerds (1)

maxwell demon (590494) | more than 2 years ago | (#38632028)

Genome Processing Unit

Re:News for nerds (3, Funny)

nevillethedevil (1021497) | more than 2 years ago | (#38632402)

I thought it was 'Gnomes Processing Underpants' and that we finally had that elusive missing step

1. Steal underpants
2. Process underpants
3. Profit

Re:News for nerds (1)

supremebob (574732) | more than 2 years ago | (#38632548)

We know what they are, but we've been using them to play Battlefield 3 or create phony "untraceable" currency for drug dealers instead.

Re:News for nerds (0)

Anonymous Coward | more than 2 years ago | (#38632752)

"Grand Parental Units" are the Parental Units for your parents. ;)

bad article (0)

Anonymous Coward | more than 2 years ago | (#38631774)

"They're called graphics processing units, or GPUs — a term coined by chip giant Nvidia. This fall, BGI — a mega lab headquartered in Shenzhen, China — switched to servers that use GPUs built by Nvidia, "

There should be something like an article quality rating system on /.

Re:bad article (0)

MichaelKristopeit410 (2018830) | more than 2 years ago | (#38632074)

slashdot = stagnated

Re:bad article (0)

Anonymous Coward | more than 2 years ago | (#38632140)

slashdot = stagnated

Indeed, as evidenced by your continued posting of the same old tripe.

Why hasn't this troll had his accounts deleted yet?!

Re:bad article (1)

MichaelKristopeit411 (2018832) | more than 2 years ago | (#38632186)

ur mum's face is old tripe.

indeed, you're an ignorant hypocrite.

why do you cower in my shadow? what are you afraid of?

you're completely pathetic.

Re:bad article (0)

Anonymous Coward | more than 2 years ago | (#38632400)

I wasn't talking to you; I was addressing MichaelKristopeit410 [slashdot.org] .

Re:bad article (1)

Anonymous Coward | more than 2 years ago | (#38632442)

Turns out I've been wrong about you all this time. It's the goats having sex with you isn't it?! You just love the burn of a hot goat cock in your tender brown bud don't you. HA, no lube for you!

Low RISC, high reward (-1, Redundant)

LostCluster (625375) | more than 2 years ago | (#38631802)

This is a very simple yet powerful concept. Get processors that only do what they need to do, and there's no wasted space or power on the things you don't do.

Re:Low RISC, high reward (0)

Anonymous Coward | more than 2 years ago | (#38631822)

I do hope we're not going to see it on the front page every time some random chucklefuck realizes you can run code on GPUs though. No one really cares except ignorant people who haven't heard of the technology yet.

Re:Low RISC, high reward (0)

Anonymous Coward | more than 2 years ago | (#38632122)

Or ignorant people who've heard of it, but don't know quite what it stands for. This summary helpfully explains!

Summary dumbed down enough for you? (3, Insightful)

Anonymous Coward | more than 2 years ago | (#38631808)

Explaining what a GPU is in a slashdot summary? Come on.

This is similar to someone telling you a story about something funny happening to them while shopping at the store, pausing mid-story to inform you that a 'store' is a business where goods are displayed and exchanged for a papery substance called 'money'.

Re:Summary dumbed down enough for you? (1)

galanom (1021665) | more than 2 years ago | (#38631916)

It might have some use. "store" is chiefly American English. British would prefer "shop", though they should definitely be able to understand what you are talking about. But in both BE and AE "store" would also mean "to keep".

Re:Summary dumbed down enough for you? (1)

Anonymous Coward | more than 2 years ago | (#38631938)

I don't mind the explanations in the submitter's summary too much: it's better than the jargon/acronym-laden summaries that totally obfuscate some stories, and abstracts need to avoid jargon in order to pull in interested readers. I do, however, mind that the summary just plagiarizes the first few sentences of the Wired article. I'm also unhappy with the watered-down article; summaries and abstracts need to avoid jargon for clarity, but articles need to use the right words to convey their points, and they need to have more depth to them than this one does.

As for Wired's repetition of Nvidia, which is really tangential to the main point (BGI's accomplishment): it makes this look like a case study taken out of Nvidia's marketing literature. The BGI story ends at the second paragraph (the extent of submitter's summary), and the rest of the article is like a jumble of press releases from Nvidia and Amazon. My guess is that someone was facing a deadline, had no ideas, and ransacked some press releases (not astroturfing, just laziness in the face of deadlines).

Re:Summary dumbed down enough for you? (0)

Anonymous Coward | more than 2 years ago | (#38632036)

As for Wired's repetition of Nvidia, which is really tangential to the main point (BGI's accomplishment): it makes this look like a case study taken out of Nvidia's marketing literature. The BGI story ends at the second paragraph (the extent of submitter's summary), and the rest of the article is like a jumble of press releases from Nvidia and Amazon. My guess is that someone was facing a deadline, had no ideas, and ransacked some press releases (not astroturfing, just laziness in the face of deadlines).

This is my general impression of the quality of Wired these days. I remember they had (maybe dead now) a "biohack" blog/area on their site that consisted nearly entirely of rehashed press releases. I actually wrote the author with my concerns and received a response claiming that they could find no other way to get reliable information about biotech. Then I blocked wired in my hosts file.

This article is almost painfully dumbed down... (2)

tiffany352 (2485630) | more than 2 years ago | (#38631824)

Submitter couldn't find a more technically-oriented one?

Re:This article is almost painfully dumbed down... (2)

gman003 (1693318) | more than 2 years ago | (#38631890)

Hell, even the summary is condescending.

This is Slashdot. You don't have to explain what a GPU is.

Re:This article is almost painfully dumbed down... (1)

Anonymous Coward | more than 2 years ago | (#38631942)

I'd say there are too many summaries that fail to explain the acronyms used. Not all readers have the exact same knowledge.

Re:This article is almost painfully dumbed down... (1)

tiffany352 (2485630) | more than 2 years ago | (#38632480)

Not all readers know how to use google either, apparently.

Re:This article is almost painfully dumbed down... (1)

heironymous (197988) | more than 2 years ago | (#38632858)

That's an unfair comment. Acronyms can stand for more than one thing, and a good writer's intent is not to show how much smarter they are than their readers.

Re:This article is almost painfully dumbed down... (2)

heironymous (197988) | more than 2 years ago | (#38632916)

I agree, and it would be a better policy to define acronyms the first time they are used. The same could be said about the names of software packages in other summaries. I'm mystified that so many commenters are miffed that GPU is explained.

Re:This article is almost painfully dumbed down... (5, Informative)

Zakabog (603757) | more than 2 years ago | (#38632052)

The summary is pulled directly from the top of the article.

Here's the article from HPC Wire [hpcwire.com] and some details from nvidia [nvidia.com] as well as the nvidia press release [nvidia.com]

Re:This article is almost painfully dumbed down... (1)

SomePgmr (2021234) | more than 2 years ago | (#38632352)

Sure, as they often are. I thought it was funny though. Usually I shake my head at the silly use of BS, jargon-of-the-week phrases in the summaries without any effort to define them.

And then we get a verbose definition of "GPU"... one everyone is familiar with. The lack of consistency might be explainable, but it's kinda funny. ;)

Meanwhile (0)

Anonymous Coward | more than 2 years ago | (#38631860)

Fat American neckbeards use GPU's to "mine" bitcoins and keep creating bubbles that crash every few months.

Reader's Digest (1)

Anonymous Coward | more than 2 years ago | (#38631900)

It reads like some Reader's Digest piece. I can't believe timothy published it like that. :)

A reminder (5, Insightful)

Mannfred (2543170) | more than 2 years ago | (#38631904)

It's hardly news that GPUs can be used to speed up parallel tasks/computations, but even so this article is a useful reminder of two things: 1) there are still many important processes that can be sped up by using GPUs, and 2) this can be achieved pretty much anywhere in the world.

Re:A reminder (2)

peragrin (659227) | more than 2 years ago | (#38631956)

The only reminder should be that processors designed for different types of math can do that math faster than processors designed for other types of math.

I don't understand why companies don't realize that. Running graphics on a floating point processor is like using a train to go across an ocean. Sure you can do it; that doesn't mean that it is a good idea.

Re:A reminder (2)

blahplusplus (757119) | more than 2 years ago | (#38631976)

"The only reminder should be that processors designed for different types of math can do that math faster than processors designed for other types of math."

Not all kinds of math can be parallelized.

Re:A reminder (0)

Anonymous Coward | more than 2 years ago | (#38632620)

you have proof of this ?

Re:A reminder (1)

gmhowell (26755) | more than 2 years ago | (#38633012)

you have proof of this ?

I don't have room to write it in the margins of this website. The borked .js keeps killing it.

Wonder the speed for using AMD (2)

witherstaff (713820) | more than 2 years ago | (#38631966)

I wonder if AMD's approach of more cores, versus Nvidia's faster cores, would change the time. I have no idea how genetic algorithms work. I do know simple hashes like bitcoin's are best on AMD.

Re:Wonder the speed for using AMD (0)

Anonymous Coward | more than 2 years ago | (#38632182)

AMD uses more, simpler cores. nVidia uses more capable cores with more complex instructions. This is why AMD is faster for simple things, but nVidia is better for more complex, general calculations.

Re:A reminder (1)

purpledinoz (573045) | more than 2 years ago | (#38632034)

I always wondered why FPGAs aren't used for this kind of stuff, or if they already are. I would imagine they would be even faster because you can design a circuit specifically optimized for the problem. But now that I think about it, NVidia and AMD put considerable amounts of resources into making GPUs super fast and cheap. I guess the price/performance ratio would be pretty damn good on a GPU vs. an FPGA.

Re:A reminder (2)

the gnat (153162) | more than 2 years ago | (#38632202)

I always wondered why FPGAs aren't used for this kind of stuff, or if they already are. I would imagine they would be even faster because you can design a circuit specifically optimized for the problem.

I think they are to some degree, but there is a major barrier to adopting them: they require specialized programming knowledge which you won't find in most genomics centers. GPUs are commodity technology and APIs like CUDA are easier to tackle (and more transferable to other fields) than FPGA programming. (Or such was my impression - I know a lot about bioinformatics, but much less about FPGAs.)

There is at least one company that sells hardware specially accelerated for bioinformatics, CLC bio [clcbio.com] . I don't know if they use FPGAs or some kind of ASIC.

Re:A reminder (1)

Vegemeister (1259976) | more than 2 years ago | (#38633352)

The versatility of FPGAs comes at a steep price in die area, power consumption, and operating frequency. If your design goal is "We want to do this specific kind of math Real Fast.", and somebody already makes an ASIC that does that kind of math Real Fast, the ASIC is generally a lot more cost effective than using FPGAs.

From the article..... (1)

Doubting Sapien (2448658) | more than 2 years ago | (#38633076)

According to Jackson Lab’s TeHennepe, the feat BGI and NVIDIA pulled off was porting key genome analysis tools to NVIDIA’s GPU architecture, a nontrivial accomplishment that the open source community and others have been working toward.

Can anyone familiar with current efforts shed more light on this? Who is working on open source bioinformatics and how much work has been done?

A better article (4, Informative)

arielCo (995647) | more than 2 years ago | (#38631998)

http://hpcwire.com/hpcwire/2011-12-15/bgi_speeds_genome_analysis_with_gpus.html [hpcwire.com]

Excerpt:

At BGI, he says, they are currently able to sequence 6 trillion base pairs per day and have a stored database totaling 20 PB.

The data deluge problem stems from an imbalance between the DNA sequencing technology and computer technology. According to Dr. Wang, using second-generation sequencing machines, genomes can now be mapped 50,000 times faster than just a decade ago. The technology is on track to increase approximately 10-fold every 18 months. That is 5 times the rate of Moore's Law, and therein lies the problem.

Obviously it would be impractical to upgrade one's computational infrastructure at that rate, so BGI has turned to NVIDIA GPUs to accelerate the analytics end of the workflow. The architecture of the GPU is particularly suitable for DNA data crunching, thanks to its many simple cores and its high memory bandwidth.
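
Compounding those quoted rates shows how quickly the gap opens. A rough sketch, taking "Moore's Law" in the doubling-every-18-months form the excerpt implies:

# How the sequencing/compute gap compounds, per the figures above.
# Assumption: sequencing grows 10x and compute 2x per 18-month period.
periods = 4                                 # 4 x 18 months = 6 years
sequencing_growth = 10 ** periods           # 10,000x
compute_growth = 2 ** periods               # 16x
print(sequencing_growth / compute_growth)   # 625.0x shortfall after 6 years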

Re:A better article (4, Informative)

Samantha Wright (1324923) | more than 2 years ago | (#38632190)

...countering this stunning and exciting revelation is BGI's stunning and exciting reputation for producing stunningly and excitingly low-quality raw data from said stunning and exciting second-generation sequencing machines. This is a little like the biology equivalent of being told that your least-favourite Slashdot editor (please pick just one) has just gotten a brain implant so he can spam the front page with dupes, typo-ridden summaries, and fallacy-laden opinion pieces ten times an hour.

Part of the problem is Low Standards (3, Informative)

MaizeMan (1076255) | more than 2 years ago | (#38632486)

At least in my field, the problem is that no one ever thought to set lower limits on the quality of what you can call a genome. So now we get "genomes" made up of 100,000 contigs (many only a couple of hundred base pairs long), and even counting all of those, the total sequence might account for only 70% of the total size of the genome. But it's still a "genome" paper, which is still an instant ticket to Nature Genetics (or Nature Biotechnology if the assembly is REALLY bad).

BGI is certainly one of the biggest offenders (Cucumber and Pigeonpea are both examples of the sort of terrible genomes-in-name-only BGI puts out), but I think the real problem is that Illumina sequence data is so cheap that people keep trying to use it to sequence genomes, thinking that if they throw enough raw data and enough mate-pair libraries at the problem it'll eventually make up for the fact that Illumina reads are so short. Illumina data is great for a lot of things: calling SNPs, measuring gene expression, studying methylation patterns.

But, at least for any genome with significant transposon content, it simply does not work.
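
To make the complaint concrete, here is a toy sketch of the assembly statistics at issue; the contig lengths and genome size are made-up illustration values, not BGI's numbers:

# Toy assembly-quality check: N50 and fraction of the genome covered.
# Contig lengths and genome size below are invented for illustration.

def n50(contig_lengths):
    # N50: the length L such that contigs of length >= L together
    # contain at least half of all assembled bases.
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

contigs = [200, 350, 500, 1200, 8000, 15000]   # hypothetical, in bp
genome_size = 50000                            # hypothetical, in bp

print("N50:", n50(contigs))                    # 15000
print("covered:", sum(contigs) / genome_size)  # 0.505, barely half the genome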

Re:Part of the problem is Low Standards (1)

Samantha Wright (1324923) | more than 2 years ago | (#38632734)

Incidentally, ABI claims you can do de novo with SOLiD systems (which have read lengths of only ~20 bp!) but they say you need to get about 300x coverage just for a bacterial genome. That's not a lot of saved money when you work out all the numbers. It looks like we've nearly found a state function for dollars-per-high-quality-nucleotide.
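
The coverage arithmetic behind that claim, as a rough sketch; the 5 Mbp genome size is an assumed typical value, not a figure from the thread:

# Reads needed for 300x coverage of a bacterial genome with ~20 bp reads.
genome_size = 5_000_000   # ~5 Mbp bacterial genome (assumed typical value)
read_length = 20          # bp, the SOLiD read length quoted above
coverage = 300            # x, per the comment

reads = genome_size * coverage / read_length
print(f"{reads:,.0f} reads")   # 75,000,000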

Re:Part of the problem is Low Standards (1)

the gnat (153162) | more than 2 years ago | (#38632914)

you need to get about 300x coverage just for a bacterial genome

OUCH. Wasn't the original high-quality human genome sequence (using Sanger technology) only about 10x? And doesn't having only 20bp per read basically rule out de novo sequencing of any eukaryote? Even for bacteria that sounds tricky without a closely-related reference sequence.

Re:Part of the problem is Low Standards (1)

Samantha Wright (1324923) | more than 2 years ago | (#38632972)

Yes, yes, and yes. To be fair, these are comparatively cheap and fast runs, but the numbers are still ridiculous, I agree. Hopefully third-generation sequencing technologies (not counting Pacific Bio's implausible promises of "3.1 billion flying pigs in 30 seconds flat!") will do better at pandering to us poor underfunded evolutionary biologists.

Re:Part of the problem is Low Standards (0)

Anonymous Coward | more than 2 years ago | (#38632966)

Thank-you, thank-you, thank-you for pointing this out. As someone who does evo-devo work on enhancer regions, I've found many of these new "genomes" pretty shitty to work with. Guess what? Regulatory DNA tends to have highly repetitive regions and is littered with transposable and retroviral elements.

What's your opinion on Pacific Biosciences' new sequencing technology? They supposedly promise read lengths on the scale of Sanger-style sequencing.

Re:Part of the problem is Low Standards (0)

Anonymous Coward | more than 2 years ago | (#38633434)

Sanger-style read lengths with Helicos-level (read "terrible") accuracy. That's the Pac-Bio tech currently.

c'mon intel (1)

Cyko_01 (1092499) | more than 2 years ago | (#38632026)

I get that programmers are offloading certain tasks to the GPU because it can perform specific tasks faster, but why is this even necessary? If GPUs are so good at it, then why can't there be a dedicated part of the CPU that performs these same computations in parallel streams the same way the GPU does?

Re:c'mon intel (0)

Anonymous Coward | more than 2 years ago | (#38632160)

Why doesn't everyone drive motor homes? I mean really, we could get rid of all the houses and consolidate our cars and homes together.

There's a reason why GPUs and CPUs currently exist in separate forms. Sure, as technology progresses they may eventually merge back together, but that's a long time away. Plus eventually the GPU may possibly be so advanced that it will be another integral part of modern computers, performing a myriad of other actions unforeseen by current standards.

Re:c'mon intel (1)

wbr1 (2538558) | more than 2 years ago | (#38632264)

Both modern Intel and AMD CPUs come in flavors that include a GPU core. I am currently running a laptop with an AMD E-450 that has a GPU core. Admittedly this core is stripped down, but it is there, and functional, and probably better than many higher-end GPUs of 4-6 years ago.

There are two other issues surrounding the use of GPUs for processing: one, competing APIs, and two, few programs make use of the availability. I believe some Adobe software (either Premiere or some Photoshop filters) is now written to take advantage of certain brands and models of GPU. There is also A/V transcoding software that does as well. Why not more? One, there used to be no unified API: you had CUDA for Nvidia and FireStream (I think) for AMD/ATI. OpenCL is supported by both, but I do not know if it has limits that the proprietary APIs do not. Second, just as more and more software has been rewritten to take advantage of multiple cores/CPUs (still have a long way to go there), the same will be true of software written to take advantage of GPUs.

Not being a programmer (beyond a little scripting) myself, it seems logical that if it has taken this long for programmers (and compilers) to really start to take advantage of 2-8 processors, then learning what tasks can be broken down to several hundred or thousand smaller cores, and how to do it, may take a while too. Hell, running a search in regedit in Windows 7 still takes 100% of one core, and only has one thread.
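
For anyone curious what the vendor-neutral route looks like, here is a minimal OpenCL sketch using the pyopencl bindings; it just adds two vectors, and assumes a working OpenCL driver is installed:

import numpy as np
import pyopencl as cl

# Minimal OpenCL example: add two vectors on whatever device is available.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)

The same kernel source runs on Nvidia, AMD, and Intel devices alike, which is exactly the portability argument made above.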

The Answer Lies in Parallel Computation (1)

turkeyfish (950384) | more than 2 years ago | (#38632282)

Genomic analysis involves extensive use of recursive techniques and combinatoric problems, which are well suited to parallel processing. GPUs are built from many small independent components originally designed to handle large matrices of pixel elements very quickly for video display and refresh. Suitably programmed, for example using CUDA, they can compute in parallel the solutions required to map problems of high combinatoric dimensionality onto a one-dimensional space (a sequence), very quickly compared to a CPU that would have to work serially through an extremely large combinatoric space. Effectively, it allows massive supercomputers operating at petabyte and exabyte scales to be built with standard components at modest prices.
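
A toy illustration of that data-parallel pattern, with NumPy vectorization standing in for a GPU kernel; the read and reference sequences are invented:

import numpy as np

# Score one short read against every window of a reference in a single
# vectorized step, the way a GPU kernel scores all positions at once.
ref = np.frombuffer(b"ACGTACGTTAGCACGTACGA", dtype=np.uint8)
read = np.frombuffer(b"ACGT", dtype=np.uint8)

windows = np.lib.stride_tricks.sliding_window_view(ref, len(read))
matches = (windows == read).sum(axis=1)   # per-position match counts
print(matches.argmax(), matches.max())    # best offset and its score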

The amazing thing about this technology, and about the responses of the supposedly technologically sophisticated crowd on a site such as Slashdot, is that the Chinese are picking up on the technology and on genomic data mining far faster and with more intensity than the broader US tech community. Given the size of their brainpower base and the rate at which they are adapting the technology, the Chinese are well on their way to dominating drug development and the physiological/functional genomic sciences in the next 10 years. The race will largely be over before most American tech types even know it happened.

The even more amazing thing is the potential of unlocking the genetics which control human intelligence, memory and learning capacity. Once these are patented and developed for a host of applications, all other forms of technology will become increasingly inconsequential. While the US is putting its most powerful computers to use cracking into and reading people's email, the Chinese have a more ambitious agenda.

Actually, this is a good thing, since the Chinese are far more cognizant of the dangers that imminent global warming due to carbon dioxide pollution poses to their economy and the stability of their political system. If you have any doubts about the Chinese propensity to use their wits, I suggest you see the movie Red Cliff, which very dramatically displays the remarkable triumph of wits over sheer military superiority. It's based on a true story.

Re:The Answer Lies in Parallel Computation (0)

Anonymous Coward | more than 2 years ago | (#38632868)

Thanks for the explanation as to why GPUs are ideal for this sort of work.

The sequencing part is easy. Designing good experiments that take advantage of the sequencing technology to yield meaningful data about how living systems work is difficult.... and don't get me started on synthesizing long complex stretches of DNA and usefully expressing and regulating the genes the DNA encodes.

'omics biology is fantastic for drawing correlations and generating hypotheses, but it is often lacking when it comes to generating understanding and useful applications. This is why the promise of systems biology that was made a decade ago has not really been delivered on in most respects. /yes, I am a molecular biologist.

Re:The Answer Lies in Parallel Computation (2)

the gnat (153162) | more than 2 years ago | (#38632978)

the Chinese are picking up on the technology and on genomic data mining far faster and with more intensity than is the broader US tech community.

You're forgetting that the vast majority of companies actually developing this technology, and making it available to consumers, are based in the US (and Britain, to some degree). One article about BGI that I read last year noted the irony of seeing several crates of sequencing machines stamped "MADE IN THE USA" waiting to be unloaded in Shenzhen. The Chinese government is certainly willing to spend large amounts of money advancing their capabilities, but I haven't seen any evidence that they're significantly surpassing the US in anything other than sequencing capacity. (And the machines they're using are very good for generating large quantities of data, but the quality of said data is somewhat suspect.)

Given the size of their brainpower base and the rate at which they are adapting the technology the Chinese are well on their way to dominating the drug development and physiological/functional genomic sciences in the next 10 years.

Except that genomics has so far proven minimally useful for drug development. Until they actually develop significant amounts of homegrown technology (which, to be fair, they are doing in the bioinformatics arena, as opposed to sequencing), I'm not convinced that they're that much of a threat. What they will certainly accomplish, I think, is a record of high-profile scientific output and the ability to compete on even terms with the rest of the industrial superpowers. No mean feat considering where they were 40 years ago, and certainly some cause for concern given their large and inexpensive labor force, but it's not the same thing as suddenly eclipsing the USA in technology that they're still mostly importing or stealing.

Re:c'mon intel (1)

Macman408 (1308925) | more than 2 years ago | (#38633510)

First question: Why do you want it from Intel, versus anybody else? They've always struck me as moderately evil - the Microsoft of the chip world, looking out for their own sales numbers and not much else.

Second question: Which do you want in your chip; a fast CPU that can run your web browser and E-mail client, or a fast parallel computing unit that's good for gene sequencing, multimedia processing, etc? You can't have both. Well, you can, but both parts will be slower. The top-of-the-line chips you get these days are as big as can be reasonably manufactured, and produce as much heat as can be reasonably removed from the chip (without requiring you to supply a source of liquid nitrogen). You can combine them, but you won't get great performance from either the CPU or the GPU/"parallel unit" in that case. You're better off buying two chips. And Intel has a pretty poor history at making graphics chips with reasonable performance.

SIMD for the Win! (1)

adharma (607872) | more than 2 years ago | (#38632058)

SIMD chips will always show computational gains on any class of problem that makes significant use of matrix multiplication or linear algebra. So graphics, crypto, etc.
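
A quick way to feel the difference, with NumPy's BLAS-backed multiply standing in for dedicated SIMD hardware; timings vary by machine:

import time
import numpy as np

# Contrast a naive Python triple loop with a vectorized, SIMD-friendly
# matrix multiply on the same data.
n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_slow = np.empty((n, n))
for i in range(n):
    for j in range(n):
        c_slow[i, j] = sum(a[i, k] * b[k, j] for k in range(n))
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
c_fast = a @ b
t_blas = time.perf_counter() - t0

assert np.allclose(c_slow, c_fast)
print(f"naive: {t_naive:.2f}s  vectorized: {t_blas:.5f}s")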

I've been Folding for years on GPUs (1)

unassimilatible (225662) | more than 2 years ago | (#38632162)

And other assorted distro-computing tasks. Hell, my old x1800's stopped being supported for the current Folding software years ago.

A nice list of distro computing projects [wikipedia.org] .

Another nice list of such projects [distribute...uting.info] .

Re:I've been Folding for years on GPUs (1)

RicktheBrick (588466) | more than 2 years ago | (#38633604)

I have been volunteering for more than a decade now. I first started with United Devices, which shut down about 5 years ago. I started with single core computers; about 4 years ago I bought my first 4 core computer, and last year I bought two 6 core computers. The 6 core computers do twice as many results as the 4 core computers, and the 4 core computers do 6 times as many results as the single core. Therefore I think a single 6 core computer would pay for itself in electricity costs in less than three years, and I would think that this would continue with a super computer that has thousands of cores.

Here is a link to a super computer that cost only $1,400,000: http://www.eng.vt.edu/news/virginia-tech-s-wu-feng-unveils-hokiespeed-new-powerful-supercomputer-masses [vt.edu] . Now if 100,000 volunteers donated just $20 each for a total of $2,000,000, someone could purchase that super computer and have $600,000 left over for expenses. $20 a year is probably far less than the average volunteer is paying for the extra electricity. I think that this super computer would do more results than the over 500,000 members of World Community Grid do now.
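
The funding arithmetic in that proposal, spelled out; all figures are the commenter's:

# The funding arithmetic from the comment above.
volunteers = 100_000
donation = 20                   # dollars each
supercomputer = 1_400_000       # HokieSpeed's quoted price

raised = volunteers * donation
print(raised)                   # 2,000,000
print(raised - supercomputer)   # 600,000 left for expenses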

Why wasn't this tried before? (0)

Anonymous Coward | more than 2 years ago | (#38632166)

See title.

Terrible Post (0)

Anonymous Coward | more than 2 years ago | (#38632170)

Who cares? We've known about GPU powered supercomputers for years. The Chinese just bought cards from nvidia and made a supercomputer. Any university in the world can build one.

yes, but... (0)

Anonymous Coward | more than 2 years ago | (#38632246)

...will it run crysis?

Re:yes' but... (1)

galanom (1021665) | more than 2 years ago | (#38632438)

Only if you have a Beowulf cluster of them!

So the visiting politician asks, "What are GPUs?" (1)

PolygamousRanchKid (1290638) | more than 2 years ago | (#38632314)

"GPUs . . . ? . . . I was informed that this project was powered by GNUs . . . ?"

". . . now where is that Apple MAC chip that generates the GPL number that allows the PC to connect to the Internet . . . ?"

Show Me the Monkey (0)

Greyfox (87712) | more than 2 years ago | (#38632326)

I hate to say it, but I tend to be a bit skeptical about any research news coming out of China, since so much of it has been falsified in the past few years. So until some Chinese researcher shows me a six-assed monkey, my response to this news is going to be "Meh."

Now where are the HTX slots / HTX GPU cards (1)

Joe_Dragon (2206452) | more than 2 years ago | (#38632502)

Just think how cool it will be to have cards that can do this on the CPU BUS.

Blast from the past (0)

Anonymous Coward | more than 2 years ago | (#38632574)

Just think of what a Beowulf cluster of these things would...

oh wait. Sorry.

For the curious... (5, Funny)

Cow Jones (615566) | more than 2 years ago | (#38632656)

... this is what a Chinese lab looks like [dogster.com] .

Re:For the curious... (0)

Anonymous Coward | more than 2 years ago | (#38633138)

Your link is suspicious; if it's actually another goat.cx gag from /. in years past, I'm not clicking on it!

Re:For the curious... (0)

Anonymous Coward | more than 2 years ago | (#38633216)

it APPEARS to be safe... but it's... an RSS feed for people talking about walking dogs? what? or something else?

NOT coined by Nvidia (0)

Anonymous Coward | more than 2 years ago | (#38633400)

Back in the early to mid 80s (prior to 85) the term GPU was already in use for Graphics Processing Unit.
Of course there was also CPU for Central Processing Unit, SPU for Sound Processing Unit, and also MPU for Math Processing Unit.

I have no idea how they got a generic term that had been around for at least 20 years trademarked or copyrighted, but they did. I guess someone really wanted a blowjob bad or something.

If you want proof, just dig up a lot of magazines from those days, including the platform-specific ones for Atari and Commodore. Don't ask me for mine: I may be an old-time geek, but my copies of those magazines didn't survive multiple moves.
