
Inside Tsubame, Japan's GPU-Based Supercomputer

timothy posted more than 5 years ago | from the please-don't-christen-the-supercomputer dept.

Supercomputing

Startled Hippo writes "Japan's Tsubame supercomputer was ranked 29th-fastest in the world in the latest Top 500 ranking, with a speed of 77.48 TFLOPS (trillion floating-point operations per second) on the industry-standard Linpack benchmark. Why is it so special? It uses NVIDIA GPUs. Tsubame includes hundreds of graphics processors of the same type used in consumer PCs, working alongside CPUs in a mixed environment that some say is a model for future supercomputers serving disciplines like material chemistry." Unlike the GPU-based Tesla, Tsubame definitely won't be mistaken for a personal computer.

75 comments

Wow! [Obligatory] (4, Funny)

cashman73 (855518) | more than 5 years ago | (#26084485)

Imagine what a beowulf cluster of these could do! Oh, wait! ;-)

Re:Wow! [Obligatory] (1)

afxgrin (208686) | more than 5 years ago | (#26086209)

Does anyone know if the Tesla cards still work as a GPU for ... let's say Counterstrike?

Sometimes, after pwning some numbers, I want to pwn some n00bs.

Re:Wow! [Obligatory] (1, Informative)

twowoot4u (1198313) | more than 5 years ago | (#26086367)

They don't even have video outs. So while they can likely crunch frames for you at thrifty speeds, good luck viewing them.

Re:Wow! [Obligatory] (0)

Anonymous Coward | more than 5 years ago | (#26090683)

3rd reply on the first post. How can it be redundant?

Hold the hyperbole (1)

Bearhouse (1034238) | more than 5 years ago | (#26084593)

On reading the article, the box has 30 thousand cores, of which the vast majority are AMD Opterons in Sun boxes. No mention of how/in what you'd program this to actually put the GPUs to good use.

Re:Hold the hyperbole (3, Insightful)

timeOday (582209) | more than 5 years ago | (#26084889)

No mention of how/in what you'd program this to actually put the GPUs to good use.

That's why the supercomputer rankings are based on reasonably complex benchmarks instead of synthetic "cores * flops/core" types of numbers. Scoring well on the benchmark is supposed to be solid evidence that the computer can in fact do something useful. My question though is whether the GPUs contributed to the benchmark score, or were just along for the ride.

Re:Hold the hyperbole (1)

ceoyoyo (59147) | more than 5 years ago | (#26085005)

As I recall, GPUs and other vector type processors do quite poorly on Linpack, so probably not.

Re:Hold the hyperbole (0)

Anonymous Coward | more than 5 years ago | (#26085235)

As I recall, GPUs and other vector type processors do quite poorly on Linpack, so probably not.

*facepalms*

Re:Hold the hyperbole (3, Insightful)

lysergic.acid (845423) | more than 5 years ago | (#26086523)

how would data parallelism negatively affect a test that is designed to measure a system's performance in supercomputing applications--a field dominated by problems that involve processing extremely large data sets?

if vector processors did in fact perform poorly on LINPACK benchmarks, that would mean LINPACK performance is not a good indicator of real-world performance; but that clearly isn't the case, as vector processors consistently perform quite well in LINPACK suite measurements [hoise.com].

vector processing began in the field of supercomputing, which during the 1980s and 1990s was essentially the exclusive realm of vector processors. it wasn't until companies, to save money, started designing & building supercomputers using commodity processors (P4s, Opterons, etc.) that general-purpose scalar CPUs began to replace specialized vector processors in high-performance computing. but now companies like Cray and IBM [cnet.com] are starting to realize that this change was a mistake.

even in commodity computing the momentum is shifting away from general-purpose scalar CPUs towards specialized vector coprocessors like GPUs, DSPs, array processors, stream processors, etc. when you're dealing with things like scientific modeling, economic modeling, engineering calculations, etc., you need to crunch large data sets using the same operation; this is best done in parallel using SIMD. using specialized vector processors (and instruction sets) you can run these applications far more efficiently than you could using a scalar processor running at much higher clock speeds. the only downside is that you lose the advantage of commodity hardware that's cheap because of high-volume production. but if companies like Adobe start developing their applications to employ vector/stream coprocessors, that will boost the adoption of these vector processors in the commodity computing market, which will increase production volume and lower manufacturing costs.

Re:Hold the hyperbole (1)

ceoyoyo (59147) | more than 5 years ago | (#26086819)

http://www.netlib.org/linpack/ [netlib.org]

Note that if you've got a vector machine you usually use LAPACK, which is optimized for that architecture.

Re:Hold the hyperbole (1)

lysergic.acid (845423) | more than 5 years ago | (#26087639)

LAPACK [wikipedia.org] may be the successor to LINPACK [wikipedia.org], but they were both written for vector processors (PDF) [warwick.ac.uk].

LINPACK was just optimized for the shared-memory architectures that were once popular, whereas LAPACK is optimized to exploit (using Basic Linear Algebra Subprograms) the cache-based architectures used in modern supercomputers.

Re:Hold the hyperbole (0)

Anonymous Coward | more than 5 years ago | (#26090157)

Linpack, like all benchmarks, is great at measuring the performance of the benchmark.

Your performance on _your_ apps may vary.

To get a high score on Linpack, you need lots of CPUs and lots of RAM per CPU. Communication speeds and latencies don't matter as much. GPUs don't really have much RAM per processing core, so they will not do the best on the Linpack benchmark.
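
For rough context (standard HPL accounting, not a claim about this particular machine): the benchmark factors a dense n-by-n linear system, so

\[ \text{flops} \approx \tfrac{2}{3}\,n^{3} + 2n^{2}, \qquad \text{memory} \approx 8n^{2}\ \text{bytes (double precision)}. \]

Memory grows as n^2 while work grows as n^3, so more RAM per processor lets you run a bigger problem, which raises both the flop total and the ratio of computation to communication.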

Re:Hold the hyperbole (0)

Anonymous Coward | more than 5 years ago | (#26084937)

The "in what" is CUDA. The "how", I suppose, is carefully.

But your point about holding the hyperbole is good; they have something like 16 CPU cores per GPU, so this is hardly a "GPU based" cluster.

Re:Hold the hyperbole - Read again (5, Informative)

raftpeople (844215) | more than 5 years ago | (#26085221)

On reading the article, the box has 30 thousand cores, of much the vast majority are AMD Opterons in Sun boxes. No mention of how/in what you'd program this to actually put the GPUs to good use

You may want to read the article again, if not here's a recap:
655 Sun boxes, each with 16 AMD cores = 10,480 CPU cores
680 Tesla cards, each with 240 processors = 163,200 GPU processors

As for how to use the GPUs, I use my GTX 280 (almost the same thing as a Tesla) to crunch through lots of numeric calculations in parallel. I'm sure these guys are doing the same thing, as that is the strength of the GPU. NVIDIA has made it easier to access the processing power of the GPU with CUDA. You create a program in C that gets loaded on the GPU, and when you launch it you can tell it how many copies to run at one time; each copy typically operates on a different portion of the data. Because you can launch more threads than there are processors, the GPU can be reading data in from global vid mem while other threads are performing calculations.
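
A minimal sketch of that model (hypothetical kernel name and sizes, not code from TFA): each copy of the kernel computes its own index and applies the same operation to its slice of the array.

    #include <cuda_runtime.h>

    // Each thread handles one element; groups of threads share the same instruction.
    __global__ void saxpy(float a, const float *x, float *y, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this copy's portion of the data
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;  // arbitrary problem size for the sketch
        float *x, *y;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));
        cudaMemset(x, 0, n * sizeof(float));
        cudaMemset(y, 0, n * sizeof(float));
        // Launch far more threads than the GPU has processors; the hardware
        // overlaps some threads' global-memory reads with others' arithmetic.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(2.0f, x, y, n);
        cudaDeviceSynchronize();
        cudaFree(x);
        cudaFree(y);
        return 0;
    }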

Re:Hold the hyperbole - Read again (0)

Anonymous Coward | more than 5 years ago | (#26085367)

I don't think anyone who actually works with CUDA refers to individual thread processors as "GPU processors." Even nVidia refers to the Tesla itself as a GPU (singular). Your terminology is like saying that my desktop PC has "4 Core 2 Processors" because the one Core 2 in my PC has four cores.

Re:Hold the hyperbole - Read again (1)

raftpeople (844215) | more than 5 years ago | (#26085795)

I don't think anyone who actually works with CUDA refers to individual thread processors as "GPU processors."

Well I actually work with CUDA and I just used that term, so that makes at least 1 person.

The term "GPU processor" was merely a shorthand method of stating that the number 163,200 related to circuitry that performs calculations but without as much flexibility as a core on a traditional CPU. They do work, but groups of them share the same instruction. The term "core" would have seemed inaccurate also, maybe I should have said "streaming processor cores"??

Re:Hold the hyperbole - Read again (0)

Anonymous Coward | more than 5 years ago | (#26085935)

I suppose there's a first for everything. I work with CUDA in an HPC research capacity and I've never heard any colleague or anyone from nVidia refer to individual thread processors as a "GPU processor", for what it's worth. nVidia's official terminology is "scalar processor", and I mainly hear that and "thread processor".

Re:Hold the hyperbole - Read again (0)

Anonymous Coward | more than 5 years ago | (#26087709)

How does something like an nVidia scalar processor stack up against something like a Clearspeed accelerator?

Re:Hold the hyperbole - Read again (0)

Anonymous Coward | more than 5 years ago | (#26100393)

I've never worked with Clearspeed, but on paper they look pretty good. They claim 22 TFLOP/s from a 1U unit; nVidia can get about 4 TFLOP/s from a 1U unit. They're a bit deficient in terms of memory bandwidth; they can achieve a total of 96 GB/s in one of those 1U units, whereas nVidia can achieve over 400 GB/s.

Of course, this is all based on theoretical numbers for the clearspeed units. Also, I have no idea what they cost compared to a Tesla 1U unit.

Re:Hold the hyperbole - Read again (0)

Anonymous Coward | more than 5 years ago | (#26086041)

Sorry for the double post; I forgot the most important part of my reply. More telling than the number of cores: approximately one eighth of the LINPACK work was done by GPUs, so I would consider Tsubame still very much a CPU-based cluster.

Re:Hold the hyperbole - Read again (1)

lysergic.acid (845423) | more than 5 years ago | (#26086635)

i don't know about CUDA, but when Microsoft discusses the number of "processors" a single instance of their OS supports they are generally referring to logical processors, which they define as:
# of physical processors * # of cores * # of threads

that's why Microsoft claims Windows 7 will scale up to 256 logical processors [zdnet.com]. in reality that's 64 physical processors * 2 cores * 2 threads, or 32 physical processors * 4 cores * 2 threads, etc.

Re:Hold the hyperbole - Read again (0)

Anonymous Coward | more than 5 years ago | (#26086859)

My complaint was primarily due to the fact that "GPU processors" is a rather ambiguous (and unusual) term. As I mentioned in another reply, "scalar processors" and "thread processors" are more common and clear names. "GPU processor" would (in some cases) leave one wondering whether you were referring to:
(a) an entire GPU
(b) a GPU multiprocessor, consisting of (in nVidia's case) 8 scalar processors
(c) a scalar processor

I'll also note that Microsoft counts a Tesla as one GPU :)

Re:Hold the hyperbole - Read again (1)

Almahtar (991773) | more than 5 years ago | (#26087133)

This makes plenty of sense. I've personally dealt with several IBM BlueGene supercomputers (more than 200,000 cores) that didn't perform anywhere near this well.

The GPUs definitely made a huge difference in this case.

"Special" (0)

Anthony_Cargile (1336739) | more than 5 years ago | (#26084611)

Why is it so special? It uses NVIDIA GPUs

So we can expect binary-only (i.e. non-patchable source) driver issues when running Linux on it? Or will it be frequent nv_disp BSODs from a Windows OS? And I'm sure the kernel(s) will have a fun time managing all of this in addition to SMP across several real CPUs.

Sounds "special" all right...

Clever name (4, Funny)

subStance (618153) | more than 5 years ago | (#26084657)

Ironic name: tsubame means sparrow in japanese, and also has the slang usage of toy-boy (as in a cougar's toy-boy).

Not sure what to read into that ...

Re:Clever name (0)

Anonymous Coward | more than 5 years ago | (#26085203)

It is also the name of a high-speed train [wikipedia.org]. That's probably where the name comes from.

Re:Clever name (2, Informative)

Anonymous Coward | more than 5 years ago | (#26085297)

Tsubame is actually 'swallow', not 'sparrow', which is suzume.

Re:Clever name (1)

CODiNE (27417) | more than 5 years ago | (#26085461)

I'm imagining Pirates of the Caribbean in Japanese... featuring the lovely Captain Jack Boy-Toy. Fitting.

Re:Clever name (0)

Anonymous Coward | more than 5 years ago | (#26086215)

Capt Jack Swallow. Master of the Black Pearl Necklace... sounds like the gay porn version.

Re:Clever name (1)

Saffaya (702234) | more than 5 years ago | (#26086463)

Tsubame is also a female first name. And a nice one at that.
No need to dig further than that imo.

Re:Clever name (0)

Anonymous Coward | more than 5 years ago | (#26091541)

So, Japan has women named "swallow"? I'm there!

Re:Clever name (1)

tehcyder (746570) | more than 5 years ago | (#26089693)

as in a cougar's toy-boy

You say that as though we're supposed to know what it means...

Re:Clever name (1)

BennyBigHair (636963) | more than 5 years ago | (#26091773)

Cougar is an American idiomatic term for a sexually active older woman who actively looks for younger males. "Toy-boy" is usually written as boy-toy, and refers to those young males who are selected specifically for sexual fun.

What is a GPU? (2, Interesting)

hurfy (735314) | more than 5 years ago | (#26084783)

When it has no graphics out? Is it still a GRAPHICS Processing Unit when it doesn't calculate any graphics and doesn't display any graphics? HUH? ;)

They have a whole lot of these boosting a whole lot of quad-cores.

Re:What is a GPU? (1)

mikael (484) | more than 5 years ago | (#26085045)

They want the GPUs for their number-crunching ability. Since each GPU would be working on a small portion of the simulation being processed, you are going to need a separate system to fetch whatever item of data you want to visualize. This system is going to have to talk to every GPU in order to get this data and render it.

Re:What is a GPU? (0)

Anonymous Coward | more than 5 years ago | (#26087417)

Although the scientific application and the graphical application are processed in more or less the same way, it was the need for the graphical solution that came first, which is why it got the name "GPU". If it was the other way around, we'd probably be buying nVidia and ATI Vector Processing Units (VPUs) or some such to play the latest games with.

Here's another question: Is an escalator still an escalator when it's going down? If so, what exactly is it escalating?

Ofcourse (1)

SchizoDuckie (1051438) | more than 5 years ago | (#26084883)

I think it's only a matter of time before many of these clusters start using all the processing power available to them. Hell, even on desktops, whatever app you build should detect and use your GPU! If compilers were to get even smarter, they could automatically route the calculations the GPU can do faster to the GPU, and otherwise just use one of the other available cores. This *should* be the future imo.
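
The detection half is already doable today. A minimal sketch (hypothetical kernel and function names, assuming the CUDA runtime as the GPU interface) of an app choosing between a GPU path and a CPU fallback at runtime:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    __global__ void scale_gpu(float *y, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] *= a;
    }

    static void scale_cpu(float *y, float a, int n)
    {
        for (int i = 0; i < n; i++) y[i] *= a;  // plain scalar fallback
    }

    int main(void)
    {
        const int n = 1024;
        float *y = (float *)calloc(n, sizeof(float));
        int devices = 0;
        // cudaGetDeviceCount returns an error code (rather than crashing)
        // when no CUDA-capable device or driver is present.
        if (cudaGetDeviceCount(&devices) == cudaSuccess && devices > 0) {
            float *d;
            cudaMalloc(&d, n * sizeof(float));
            cudaMemcpy(d, y, n * sizeof(float), cudaMemcpyHostToDevice);
            scale_gpu<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
            cudaMemcpy(y, d, n * sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(d);
            puts("routed to GPU");
        } else {
            scale_cpu(y, 2.0f, n);
            puts("routed to CPU");
        }
        free(y);
        return 0;
    }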

Re:Ofcourse (4, Informative)

dgatwood (11270) | more than 5 years ago | (#26085017)

Indeed, that's the whole idea behind the recently ratified OpenCL [wikipedia.org] specification: design a C-like language that provides a standard abstraction layer for performing complex computations on a CPU, GPU, or conceivably on any number of other devices lying around (e.g. idle I/O processors, the DSP core in your WinModem, your printer's raster engine...).

Re:Ofcourse (1)

maxume (22995) | more than 5 years ago | (#26085233)

I thought the whole point of a winmodem was that there wasn't a DSP in it (and that junky printers don't have raster engines, it's in the driver).

Re:Ofcourse (1)

Jeff DeMaagd (2015) | more than 5 years ago | (#26085669)

You're right, WinModems don't have DSPs. I don't know about printers without rasterizing engines being junky; some may be. I haven't heard much about this issue lately. Frankly, I don't know if some of my printers have them or not. I know I have one that supports PCL 6, but it was a high-end business printer when it was new. DSPs can be a bit expensive, so leaving them out can make some printer tech more affordable. I think the main objection now might be that such printers didn't support a printing standard, so there was often no Linux support.

Re:Ofcourse (1)

DigiShaman (671371) | more than 5 years ago | (#26086429)

All modems shipped with PCs or laptops these days are software-based. You can still purchase modems with on-board CPUs from Zoom and USR, but they cost around $80.

Those crappy throw-away HP and Lexmark inkjet printers are software-based. The only logic inside is to control the stepper motors and such. With USB and modern CPUs, it makes sense to raster in software for desktop printing. Office printers such as the HP LaserJet do PCL and PostScript at the hardware level. Tapping into that hardware using TCP/IP connectivity, however, might be a waste of time, as the NIC's own hardware will be tied up processing checksums (if it has the ability).

Which leaves us with three main I/O chips: the gigabit NIC (does TCP checksum offloading at the hardware level), the GPU, and the APU (for audio processing). I'm not sure if the north or south bridge can be directly tapped into. I suspect most of their I/O is transparent to the OS.

Re:Ofcourse (1)

Tycho (11893) | more than 5 years ago | (#26100129)

On the other hand there is the DSP core in Creative X-Fi cards (not that anyone should own one). Modern TV tuner cards have MPEG-2 encoding units; these must be worth something. Higher-end professional video hardware like HD video capture cards and real-time video effects rendering cards often have Xilinx FPGAs, most of which probably have a built-in POWER CPU core. In this case, the CPU and the programmability of the FPGA are useful. Actually useful SATA RAID cards that support RAID 5 and RAID 6, like the 3ware 9000 series, have POWER CPU cores as well. All of the hard drives in the three model lines of the Samsung F1 series (Samsung's newest drives) have two ARM coprocessors. As long as you stay away from the bottom-of-the-line crap from Intel, there is almost always at least some spare power out there. In many cases, though, it is already being used productively.

Re:Ofcourse (1)

dgatwood (11270) | more than 5 years ago | (#26087129)

Perhaps the WinModem thing was a poor example, but according to this post [osdir.com], some of them do have DSP hardware yet lack a hardware UART. Whether that poster was correct, I'm not sure, but it is consistent with my vague memory on the subject. In any case, that's straying pretty far from the subject at hand. :-)

Finally. (0)

Jabbaloo (237287) | more than 5 years ago | (#26085123)

A system that can run Crysis at full settings.

Re:Finally. (0)

Anonymous Coward | more than 5 years ago | (#26088859)

A system that can run Crysis at full settings.

I agree, I'm so tired of playing with empty settings.

The missing numbers (3, Informative)

Anonymous Coward | more than 5 years ago | (#26085447)

Just to get some perspective: the GPUs provide about 10 of the 77 TFLOPS benchmarked in LINPACK, per this HPC article [sun.com].

Could do it for cheaper (1)

unity100 (970058) | more than 5 years ago | (#26085511)

ATI's latest cards give more punch apiece for the cost, and they are designed specifically for being clustered/linked/xfired and whatnot.

Re:Could do it for cheaper (3, Informative)

Jeff DeMaagd (2015) | more than 5 years ago | (#26085737)

ATI's latest cards give more punch for the cost apiece. and they are designed specifically for being clustered/linked/xfired and whatnot.

I thought the nV Teslas were designed for HPC.

Performance goes up and cost comes down so quickly that something like that can easily happen between the time a system is ordered and the time it's installed.

Re:Could do it for cheaper (2, Insightful)

Molochi (555357) | more than 5 years ago | (#26086371)

They could do it cheaper with anything at the current price. However, this wasn't just slopped together last month with the latest hardware off newegg.

No doubt, there's a SC being built up right now around all the latest AMD parts. By the time it gets benchmarked, we'll be able to complain that something else is a better deal.

tesla is a pci-e card... (1)

oudzeeman (684485) | more than 5 years ago | (#26085629)

A Tesla comes in two form factors: a PCI Express card, or a rack-mount 1U system that contains 4 of the Tesla cards and connects to a server or cluster node with two PCIe cards. Not sure how you could confuse that with a PC. Also, I was just at a conference with the gentleman in charge of Tsubame, and if I recall correctly they had some of the 1U Tesla systems in the cluster, although they may have used high-end graphics cards too; they may have only had a limited number of the rack-mount Tesla systems for testing.

Supercomputer or many not-so-super computers? (4, Interesting)

marciot (598356) | more than 5 years ago | (#26085739)

What makes a supercomputer *a* supercomputer, as opposed to a network of not-necessarily-super computers which all happen to be in the same building and connected to the same high-speed network? By the way this is described, it certainly seems to be a network of many computers working together, rather than one single almighty computer.

Re:Supercomputer or many not-so-super computers? (0)

Anonymous Coward | more than 5 years ago | (#26086039)

" By the way this is described, it certainly seems to be a network of many computers working together,"

You could say the same thing about the innards of a CPU. AFAIC "supercomputer" is quite over-used; what should be measured is calculations per volume of space and per unit of power used.

Re:Supercomputer or many not-so-super computers? (0)

Anonymous Coward | more than 5 years ago | (#26086525)

A supercomputer is anything that can run the Linpack benchmark fast. There's no easy way to cheat; a high-speed network *is* expensive.

Re:Supercomputer or many not-so-super computers? (1)

mcrbids (148650) | more than 5 years ago | (#26086631)

Well, IANASE (Supercomputer Expert) but I *am* a programmer....

I'm assuming that you have a supercomputer when all those otherwise individual computers are working together in a coordinated fashion on a common problem.

A great example of a supercomputer is SETI@home [berkeley.edu], which easily meets the definition of a "supercomputer" in many (most?) circles, although they usually refer to it as "distributed computing".

Re:Supercomputer or many not-so-super computers? (1)

nerk88 (204690) | more than 5 years ago | (#26088169)

The usual distinction between a supercomputer (that may be a cluster) and distributed computing is that in a supercomputer, all the individual computers are under central control. In a distributed computing environment you control your computer and provide resources to someone else's cluster.

The difficulty arises because so many people use similar phrases for slightly different things. You can argue that the second you have more than one processor you are in a "distributed" computing environment, as you are distributing the computation. A cluster is a distributed computer in that it distributes the computation over a number of computers (but remains in complete control of them all); more often, however, distributed computing refers to the case where a number of separately controlled computers provide computing resources to a particular task. It's sort of the reverse of the traditional model of time-sharing computers: where you once had a central computer that users shared to do their work, in a distributed computing environment the work time-shares the users' computers.

Re:Supercomputer or many not-so-super computers? (0)

Anonymous Coward | more than 5 years ago | (#26091473)

Wrong. You're explaining the difference between purchased and donated computer resources. These happen to be correlated with super and distributed computing, but are not their definitions.

Re:Supercomputer or many not-so-super computers? (2, Informative)

dlapine (131282) | more than 5 years ago | (#26091569)

Wikipedia claims that a supercomputer "is a computer at the forefront of current processing capability" http://en.wikipedia.org/wiki/Supercomputer/ [wikipedia.org]. The top500 list implies that a supercomputer is a system that can run Linpack really fast, while noting that the system must also be able to run other applications. http://www.top500.org/project/introduction [top500.org]

Given that NCSA has run many supercomputers over the years, and that I've personally run three while working there, I'd say that a good rule of thumb is that a supercomputer is a system designed to achieve high calculation throughput (as opposed to instant response) and that is at least 100x as powerful as a high-end PC of its time frame. In fact, you could simplify the rule down to: a system designed as a single unit to achieve high computing performance.

In order to accomplish all these things, supercomputers tend to have two things that a "normal" network of PCs doesn't: a high-speed, low-latency network or interconnect (possibly several networks, each serving a different purpose) and a high-speed, shared filesystem. Also, a supercomputer tends to be designed and installed as a single unit, whereas a network of PCs grows over time.

Supercomputers tend to fall into one of two categories: a large collection of server-class machines (cluster) or a small set of mainframe-style systems (SMP). If you have the cash, you buy a large set of mainframe-style systems, but who has the cash? Folks tend to purchase clusters as they tend to be less expensive, but you'd have to determine if your application can work correctly across a large number of systems. Not all computing tasks can.

Tsubame, the system described above, is basically a cluster of inexpensive nodes with a high-speed network. Applications on the cluster run on many of the individual nodes at the same time and use the high-speed network to pass messages to each other during the program, so that the application appears to be working on a single system. Tsubame is a variant of a supercomputer cluster where each inexpensive node is beefed up with co-processors and accelerators to increase the overall performance. Harder to program correctly, but potentially more powerful, and still not as expensive as the large set of mainframes. Hope that helps.
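
To make that concrete, here is a minimal sketch of the pattern (hypothetical names and sizes, not Tsubame's actual code, and assuming an MPI installation alongside nvcc): each node reduces its own slice of the data on a local GPU, then the nodes combine partial results over the interconnect with MPI.

    #include <stdio.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    // Deliberately naive one-thread reduction, just to keep the sketch short.
    __global__ void partial_sum(const float *x, int n, float *out)
    {
        if (blockIdx.x == 0 && threadIdx.x == 0) {
            float s = 0.0f;
            for (int i = 0; i < n; i++) s += x[i];
            *out = s;
        }
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 16;  // this node's slice of the data
        float *d_x, *d_s, local = 0.0f, total = 0.0f;
        cudaMalloc(&d_x, n * sizeof(float));
        cudaMalloc(&d_s, sizeof(float));
        cudaMemset(d_x, 0, n * sizeof(float));

        partial_sum<<<1, 1>>>(d_x, n, d_s);  // the GPU does the local math
        cudaMemcpy(&local, d_s, sizeof(float), cudaMemcpyDeviceToHost);

        // The high-speed, low-latency interconnect earns its keep here.
        MPI_Reduce(&local, &total, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("sum = %f\n", total);

        cudaFree(d_x);
        cudaFree(d_s);
        MPI_Finalize();
        return 0;
    }

One MPI rank per node, each driving its own accelerator, is exactly the accelerated-node layout described above.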

Oh japan... (0)

Anonymous Coward | more than 5 years ago | (#26085961)

Many Internets to whoever gets the reference: Hiken Tsubame Gaeshi!

Well, anyway, such computers are required for Skynet. The rise continues.

Gripe from Los Alamos... (0)

Anonymous Coward | more than 5 years ago | (#26086671)

It's a little frustrating to me that they don't mention Roadrunner, which is an IBM Cell-accelerated Opteron cluster. We're doing plenty of real science, with some applications achieving ~400 TF sustained performance.

77.48T Flops (1)

blai (1380673) | more than 5 years ago | (#26086709)

Can I run Crysis now?

Re:77.48T Flops (1)

amoeba1911 (978485) | more than 5 years ago | (#26089349)

According to TFA, the whole point of this experiment was to see if it was possible to run Crysis at the highest settings at maximum resolution with FSAA and anisotropic filtering.

So that's where all the GPUs went (0)

Anonymous Coward | more than 5 years ago | (#26089023)

No wonder the Japanese don't play any PC games.

Model the stock market with it... (1)

Douglas Goodall (992917) | more than 5 years ago | (#26099761)

With this much computing power, one should be able to use higher math to determine the optimal times to invest in the stock market and take advantage of trends. Unfortunately, since things are headed downward, this technology can be used most efficiently to help you lose money at the optimum rate. Actually I have an idea, but I have to work with CUDA more before I know if it is real. Unfortunately, trying to put these cards in Mac Pros is problematic. You would think Apple would have made a deal with NVIDIA to ensure these cards could run in Apple's fastest desktop, but no joy there. In fact it is a long and unhappy story trying to get Apple and CUDA superpower in the same box.

Re:Model the stock market with it... (1)

wikinerd (809585) | more than 5 years ago | (#26102811)

Successful traders don't use higher maths. You can't beat the market with maths, because the market is a complex adaptive system that cannot be predicted. You can, however, find out some likely scenarios using your insight, which is what successful traders use.

Re:Model the stock market with it... (1)

Douglas Goodall (992917) | more than 5 years ago | (#26104145)

Actually I was trying to make a sort of joke, but I thank you for telling me the technical term for the kind of system the stock market is. I didn't know that and found it interesting. I had this idea that a polynomial with as many terms as there are stocks could be created, with a core for each stock... and in some way might be useful. An approach like that could only be taken practically with hardware of an unusually parallel nature. But limitations in my understanding of math and statistics keep me from doing more than guessing whether something like this would have any value. I am looking forward to hearing some good ideas about just what thousands of cores are good for. Thanks again.