
Replacing Traditional Storage, Databases With In-Memory Analytics

Soulskill posted more than 3 years ago | from the neuralization-of-data-centers dept.

Data Storage

storagedude writes "Traditional databases and storage networks, even those sporting high-speed solid state drives, don't offer enough performance for the real-time analytics craze sweeping corporations, giving rise to in-memory analytics, or data mining performed in memory without the limitations of the traditional data path. The end result could be that storage and databases get pushed to the periphery of data centers and in-memory analytics becomes the new critical IT infrastructure. From the article: 'With big vendors like Microsoft and SAP buying into in-memory analytics to solve Big Data challenges, the big question for IT is what this trend will mean for the traditional data center infrastructure. Will storage, even flash drives, be needed in the future, given the requirement for real-time data analysis and current trends in design for real-time data analytics? Or will storage move from the heart of data centers and become merely a means of backup and recovery for critical real-time apps?'"


Goodbye Orwell (1, Interesting)

schmidt349 (690948) | more than 3 years ago | (#34731098)

The marginalization of long-term data storage can only be a good thing -- the big advertising and other firms get the analytical data that actually matters to their bottom line, and to the extent that the average joe's privacy is being invaded at the very least the fruits of that invasion will become increasingly accessible.

Re:Goodbye Orwell (5, Informative)

quanticle (843097) | more than 3 years ago | (#34731578)

You're misinterpreting the post. No one said anything about long term data storage being marginalized or eliminated. Instead, the author is talking about the difference between persistent and non-persistent storage. He's saying that existing database technologies that rely on persistent storage are being marginalized as the speed difference between spinning disks and RAM widens, and the low cost of RAM makes it practical to hold large data sets entirely in memory. According to the author, data processing and analysis will increasingly move towards in-memory systems, while traditional databases will be relegated to a "backup and restore" role for these in-memory systems.
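
A minimal sketch of the pattern described above, using an in-memory SQLite database purely for illustration (the table and data are invented); the disk-backed copy is touched only for snapshot and restore:

import sqlite3

# Working set lives entirely in RAM; queries never touch the disk.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE events (ts REAL, user TEXT, amount REAL)")
mem.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(1.0, "alice", 9.99), (2.0, "bob", 5.00)])

print(mem.execute("SELECT user, SUM(amount) FROM events GROUP BY user").fetchall())

# Persistent storage is relegated to a backup/restore role: snapshot RAM state to disk.
disk = sqlite3.connect("events_backup.db")
mem.backup(disk)
disk.close()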

Re:Goodbye Orwell (1)

postbigbang (761081) | more than 3 years ago | (#34731660)

Mod parent up.

The post asks an 'or' question which is plainly stupid and demonstrates a lack of knowledge on the part of the poster. Analytics are but one part of organizational asset deployments. In and of themselves, analytics initiatives don't really change storage. There are occasions where outputs are transient, but audit/compliance requirements necessitate storing enough that whatever needs to be reconstructed can be, and whatever can be legally/ethically discarded will be.

So data center storage needs don't really change-- they're growing like crazy 24/7. Cool analytics are just another production method.

Re:Goodbye Orwell (2)

hairyfeet (841228) | more than 3 years ago | (#34732940)

Exactly. It really doesn't matter if you have the slowest (and thus longer-lasting and cheaper-to-operate) HDD on the planet if all the important data is in RAM and kept there. Since DDR, RAM has gotten so ridiculously fast that NO SSD has a snowball's chance of catching up anytime soon, if at all, and economies of scale have made RAM one of the cheapest, if not the cheapest, upgrades you can add to any system.

Even in the consumer market, falling RAM prices and changes to OS design make the hard drive pretty much a backup and long-term storage medium more than anything else. I advise my customers on new builds to go ahead and let me install 4GB, because after a week of Windows 7 learning their usage patterns, Superfetch has all of their apps preloaded into RAM, making launching and using them instantaneous, and with suspend-to-RAM, booting is pretty much a thing of the past. It cost less than $100 to add 8GB to mine, and now everything I use is ALWAYS preloaded, making the speed just insane. Everyone who comes by the shop is amazed at how I can launch half a dozen apps while another 4 or 5 are doing various jobs and it is always instantaneous. But with 6GB reserved by the OS for Superfetch, all the apps I use are simply waiting for me in RAM.

So I have to agree with TFA. With RAM cheap and only getting cheaper, having data you actually use often swap in and out of the HDD or SSD is just nuts. And if you have it all in RAM you can use the slower, less power-hungry "green" drives for persistent backup instead of SSDs, which haven't come anywhere near the GB-per-$ ratio of spinning platters yet, although their speed is incredible. But if everything is already in RAM, do you really need to spend the crazy $$$ for a large SSD?

"pwufessuh haiwypheet" of ITT Tech BLOWN AWAY 6x? (-1)

Anonymous Coward | more than 3 years ago | (#34734238)

"pwufessuh haiwypheet" of ITT Tech BLOWN AWAY 6x?

Especially this 1st evidence thereof:

---

http://slashdot.org/comments.pl?sid=1930156&cid=34734160 [slashdot.org]

http://mobile.slashdot.org/comments.pl?sid=1930156&cid=34719276 [slashdot.org]

http://it.slashdot.org/comments.pl?sid=1916240&cid=34612834 [slashdot.org]

http://it.slashdot.org/comments.pl?sid=1916240&cid=34647708 [slashdot.org]

http://slashdot.org/comments.pl?sid=1922942&cid=34665368 [slashdot.org]

http://slashdot.org/comments.pl?sid=1924664&cid=34669668 [slashdot.org]

---

(ROTFLMAO!)

I wouldn't listen to "professor hairyfeet" guys, he's only an ITT Tech student...

Re:Goodbye Orwell (1)

dintech (998802) | more than 3 years ago | (#34733122)

I'm a KDB developer at a large financial institution. Most banks using KDB keep today's stock market data in memory and an on-disk store of everything before today. The theory goes that there is the most to be gained by manipulating the most important data, namely today's, in memory. You need the history, but the on-disk partition is always going to be slower.
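
A toy version of that hot/cold split (not kdb+ itself, just the idea): today's ticks sit in an in-memory structure, while anything older is read from an on-disk store. SQLite stands in here for the on-disk partition, and the schema is invented:

import datetime
import sqlite3
from collections import defaultdict

today = datetime.date.today()
intraday = defaultdict(list)                   # symbol -> today's ticks, held in RAM
history = sqlite3.connect("ticks_history.db")  # stand-in for the on-disk partition
history.execute("CREATE TABLE IF NOT EXISTS ticks (symbol TEXT, ts TEXT, price REAL)")

def record(symbol, ts, price):
    if ts.date() == today:
        intraday[symbol].append((ts, price))   # hot path: memory only
    else:
        history.execute("INSERT INTO ticks VALUES (?, ?, ?)",
                        (symbol, ts.isoformat(), price))

def ticks(symbol, day):
    if day == today:
        return intraday[symbol]                # fast, in-memory answer
    cur = history.execute("SELECT ts, price FROM ticks WHERE symbol = ? AND ts LIKE ?",
                          (symbol, day.isoformat() + "%"))
    return cur.fetchall()                      # slower, disk-backed answer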

Just dump your data into the hole (-1)

Anonymous Coward | more than 3 years ago | (#34731102)

The hole. [goatse.fr]

Re:Just dump your data into the hole (1)

tikram (1262046) | more than 3 years ago | (#34731224)

Just... wow... goatse in 2011? Are you a time traveler from 1999?

Re:Just dump your data into the hole (1)

Anonymous Coward | more than 3 years ago | (#34731384)

Are you really sure you want them to come up with something new?

Re:Just dump your data into the hole (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34731486)

Dear internet: Set your photoshops to "Goatse Tron Guy" and you will glimpse mankind's unutterably horrible future!

Re:Just dump your data into the hole (0)

Anonymous Coward | more than 3 years ago | (#34731904)

How do I do my restores from that? All I seem to find are core dumps, and remnants of memory leaks.

Re:Just dump your data into the hole (1)

SuricouRaven (1897204) | more than 3 years ago | (#34733404)

Your plan failed! I was curious if that site still exists, so I defocused my eyes before looking. All I saw was a vague blur with a red blur in the middle.

Totally inane (5, Insightful)

MrAnnoyanceToYou (654053) | more than 3 years ago | (#34731142)

Discarding data is something that, as a programmer, I don't often do. Too often I will need it later. Real time analytics are not going to change this. As long as hard drive storage continues to get cheaper, there's going to be more data stored. Partially because the easier it is to store large blocks the more likely I am to store bigger packets. I'd LOVE to store entire large XML blocks in databases sometimes, and we decide not to because of space issues. So, yeah, no. Datacenters aren't going anywhere. Things just get more complicated on the hosting side.

Note that the article writer is a strong stakeholder in his earthshattering predictions coming true.

Re:Totally inane (1)

hedwards (940851) | more than 3 years ago | (#34731188)

Indeed. Some information is useful in the short term, but most information is quite useful for long periods of time. I'm personally in the middle of archiving my audio CDs to disk, scanning my photos and sorting my digital images. On top of that I've got emails to hang onto.

The bigger issue isn't storage space, it's finding a way of keeping track of it all: deleting the things you don't need or aren't allowed to store beyond a certain point, and keeping track of the other files you do want or need to store.

Re:Totally inane (2)

MrAnnoyanceToYou (654053) | more than 3 years ago | (#34731608)

There must be some way to solve a problem like that, where you have a series of pointers to files, if not the files themselves as well, with the ability to add markers of some kind to each of those pointers. (Maybe we can call them "Records!!!" like CDs used to be called.) And then! Then! We can hide how the management of these 'records' is organized from the user, so they don't have to think about it. And give them a simple, logical way to get data about those 'records' out of the big, organized whole. It'd be, like, a whole new basic way to store our records! We could easily find what we wanted in our basic data storage. I can't believe no one's thought of it before. ;)

My point here isn't that you should use a database to store your data about your files, (unfortunately, a unified markup system for files doesn't exist yet; it would be nice, but all that stuff is in the OS right now) my point is that the author of the article is missing that even if in-memory data systems do become extremely large, the underlying theory of the technology will not change much.

And the underlying theory relies heavily on caching, limiting how much of your overall dataset is currently relevant, and so on. While I will admit it's possible many databases' useful data size will eventually be outgrown by RAM-style memory storage, when that happens market forces will probably make it comparatively expensive to hold all your data in memory at once. Partially because clean, concise code is generally far more expensive to produce than sloppy crap that chews through your data storage.
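
That caching point is essentially what an LRU cache does; a minimal sketch with only the standard library (the lookup function is a made-up stand-in for a slow read from persistent storage):

from functools import lru_cache

def disk_lookup(customer_id):
    # Stand-in for an expensive read from the persistent store.
    return {"id": customer_id, "segment": "retail"}

@lru_cache(maxsize=10_000)       # bounds how much of the dataset stays in memory
def customer_profile(customer_id):
    return disk_lookup(customer_id)

customer_profile(42)             # first call misses and goes to "disk"
customer_profile(42)             # second call is served from RAM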

Re:Totally inane (1)

hedwards (940851) | more than 3 years ago | (#34732410)

My point here isn't that you should use a database to store your data about your files, (unfortunately, a unified markup system for files doesn't exist yet; it would be nice, but all that stuff is in the OS right now) my point is that the author of the article is missing that even if in-memory data systems do become extremely large, the underlying theory of the technology will not change much.

I realize that, but it's a related issue. Back in the 80s, it didn't do you a damned bit of good to know that the file was saved if you had to spend 10 hours sorting through disks to find it. In the modern era that's a much smaller concern for most people, as a 1TB disk is quite affordable and there are a number of products to search it efficiently.

It's something which has been talked about before. The discussion I best remember was in terms of backup systems. (Backup & Recovery [oreilly.com], if you're curious.)

The basic idea was to move backups from faster, more readily accessible media to slower, harder-to-access media as the files got older and less frequently used. The main reason for individuals to do that is so that they've got a copy on some sort of WORM media, to make fat-fingered mistakes harder.

There are a few products out there that do that, but they aren't particularly universal and I haven't personally found one that I like for my typical files. And with the rate at which disk space is expanding, most people aren't going to need them, unless they're responsible for enterprise file management.
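
For what such a tool might look like, a hedged sketch of the age-based tiering idea described above; the paths and the 90-day cutoff are invented, and a real system would also verify the copy before moving anything:

import os
import shutil
import time

FAST_DIR = "/data/fast"        # hypothetical primary tier
ARCHIVE_DIR = "/data/archive"  # hypothetical slower / harder-to-reach tier
CUTOFF_SECONDS = 90 * 24 * 3600

def sweep():
    now = time.time()
    for name in os.listdir(FAST_DIR):
        path = os.path.join(FAST_DIR, name)
        # Anything untouched for ~90 days gets pushed down a tier.
        if os.path.isfile(path) and now - os.path.getmtime(path) > CUTOFF_SECONDS:
            shutil.move(path, os.path.join(ARCHIVE_DIR, name))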

Re:Totally inane (3, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34731212)

Also, it isn't really all that earthshattering. The fact that RAM is faster and offers lower latency than just about anything else in the system has been true more or less forever. Essentially all OSes of remotely recent vintage already opportunistically use RAM caching to make the apparent speed of disk access suck less (nicer RAID controllers will often have another block of RAM for the same purpose). Programs, at the individual discretion of their creators, already hold on to the stuff that they will need to chew over most often in RAM, and only dump to disk as often as prudence requires.

The idea that, as advances in semiconductor fabrication make gargantuan amounts of RAM cheaper, high-end users will do more of their work in RAM just doesn't seem like a very bold prediction...

Re:Totally inane (0)

Anonymous Coward | more than 3 years ago | (#34731304)

The idea that, as advances in semiconductor fabrication make gargantuan amounts of RAM cheaper, high-end users will do more of their work in RAM just doesn't seem like a very bold prediction...

Indeed. However, where the premise in the summary is incorrect (this is /. - I didn't read TFA) is that just because you're doing things in RAM doesn't mean you're storing less on disk either. All you do is increase the dataset within RAM and reduce the network traffic to the database. But all this data needs to be stored somewhere, and larger working datasets generally mean a larger general data pool from which to work. Databases aren't going anywhere.

Re:Totally inane (3, Funny)

Kilrah_il (1692978) | more than 3 years ago | (#34731334)

As advances in semiconductor fabrication make gargantuan amounts of RAM cheaper, high-end users will do more of their work in RAM.

Now you have a bold prediction.
Sincerely,
me

Re:Totally inane (3, Insightful)

tomhudson (43916) | more than 3 years ago | (#34731488)

Good one - except that in this case, a lot of the so-called "work" is BS, consumers are pushing against being data-mined, regulators are getting into the act, and if your business model is so dependent on being a rude invasive pr*ck, perhaps you deserve to die ...

And the same thing will happen when revenue-strapped governments slap a transfer tax and/or minimum hold periods on stocks - something that should have been done a long time ago.

Re:Totally inane (0)

Anonymous Coward | more than 3 years ago | (#34731956)

Not really... most customers don't give a shit where their data comes from and goes to, as long as they don't have to pay a subscription fee. If Facebook started asking for social security numbers and bank account numbers, people would type them in for access.

Governments? Give me a break. Show me one government that has the spine to stand up to ad agencies, either the snarfers at the front line like Phorm, or the data-miners. Ain't gonna happen. Even the EU is running scared and has backed down, showing that they pretty much have zero interest in privacy, even though the lessons in privacy were taught very brutally during WWII.

Oh, revenue transfer tax... also not going to happen. Especially with the Tea Party here in the US having a stranglehold on the government this year. Expect to see government just give a rubber stamp to any business practices, no matter how unethical.

Re:Totally inane (1)

tomhudson (43916) | more than 3 years ago | (#34733120)

Governments? Give me a break. Show me one government that has the spine to stand up to ad agencies, either the snarfers at the front line like Phorm, or the data-miners. Ain't gonna happen. Even the EU is running scared and has backed down, showing that they pretty much have zero interest in privacy, even though the lessons in privacy were taught very brutally during WWII.

Jennifer Stoddard, Canada's Privacy Commissioner. She's the one who forced Facebook to change their procedures the last time, and she's got them in her sights again.

And at $11,000 per incident (page view), it would quickly send Facebook into Chapter 11.

Especially since the last time, the Europeans quickly joined in.

Oh, revenue transfer tax... also not going to happen. Especially with the Tea Party here in the US having a stranglehold on the government this year. Expect to see government just give a rubber stamp to any business practices, no matter how unethical.

Several states and many local governments won't be able to roll over their bonds. Likely candidates include California, Nevada, New York, Michigan, etc. At that point, Uncle Sam has a few choices:

  1. let them default - for individual states, this actually has a very high probability, since individual states cannot be petitioned into bankruptcy. They'll just pay with state-issued IOUs.
  2. guarantee their loans - with more than 40 out of 50 states with problems, this is one of those "too big to bail out" scenarios. The US credit rating is already under review - this would guarantee a quick downgrade.
  3. bail them out - yeah, with what money?

Taxes have to go up. Either before a US credit downgrade, or after. Before is less painful.

Re:Totally inane (1)

Johnny Mnemonic (176043) | more than 3 years ago | (#34733786)


And at $11,000 per incident (page view), it would quickly send Facebook into Chapter 11.

Sure it would. Facebook would simply exit Canada. Users would complain, but who gives a shit about them, right? But advertisers would also complain that they don't have access to that market anymore. And advertisers are just another word for business. Stoddard may really be anti-business, but I wonder if her bosses are, or if her new bosses would be.

Don't kid yourself. Facebook isn't going anywhere, not until the users stop using it.

Re:Totally inane (1)

tomhudson (43916) | more than 3 years ago | (#34733988)

And at $11,000 per incident (page view), it would quickly send Facebook into Chapter 11. Sure it would. Facebook would simply exit Canada. Users would complain, but who gives a shit about them, right? But advertisers would also complain that they don't have access to that market anymore. And advertisers are just another word for business. Stoddard may really be anti-business, but I wonder if her bosses are, or if her new bosses would be.

Don't kid yourself. Facebook isn't going anywhere, not until the users stop using it.

There are always other companies ready to fill in the gap. That's the nature of the beast, and Facebook knows it - just like they know that their user statistics are totally cooked.

You can buy facebook followers at the rate of 5 for a penny. The only ones who would be impacted are the "social media directors" who would be shown to be totally superfluous.

.. and that can't happen soon enough. Them and the "SEO" scammers.

-- Barbie

Re:Totally inane (1)

Fulcrum of Evil (560260) | more than 3 years ago | (#34732002)

and a lot of it is fraud detection (say, at Visa) and large internet sites deciding what sorts of products to show you when you log in based on your purchase history/similar users' history.

Re:Totally inane (1)

tomhudson (43916) | more than 3 years ago | (#34733128)

  1. Fraud detection doesn't need microsecond timing. Fraud detection is based on good data, not "fast data"
  2. Behavioral tracking is illegal in several countries. Expect to see more governments giving advertisers a choice - stop, have all behavioral tracking stripped at the borders, be sued into bankruptcy, or just be blocked.

Re:Totally inane (2)

Firehed (942385) | more than 3 years ago | (#34734128)

Fraud detection doesn't need microsecond timing. Fraud detection is based on good data, not "fast data"

Sorry, but that's just wrong. Fraud analysis on credit transactions needs to be performed extremely quickly (and payment sites that process ACH need to do that quickly as well) in order for the networks to be usable. So while it requires good data, it also needs fast data - and a lot of it. At a minimum, it often looks at the user's complete payment history, the history on that credit card independent of the user (did the user suddenly change? If so, the card number was probably stolen), the activity at that IP address and at other IPs that user has logged in from (which may include many other users and/or cards), etc. There's a lot of work to be done in less than a second or two.
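
A toy sketch of why that history wants to be in memory: each incoming transaction is scored against per-card and per-IP activity without touching disk. The features and thresholds here are invented for illustration, not taken from any real fraud system:

import time
from collections import defaultdict, deque

recent_by_card = defaultdict(lambda: deque(maxlen=500))  # card -> recent (ts, amount)
cards_by_ip = defaultdict(set)                           # ip -> cards seen from it

def score(card, ip, amount):
    now = time.time()
    hist = recent_by_card[card]
    flags = 0
    if hist and amount > 10 * max(a for _, a in hist):
        flags += 1                                       # amount way out of pattern
    if sum(1 for t, _ in hist if now - t < 60) > 5:
        flags += 1                                       # burst of activity on one card
    if len(cards_by_ip[ip]) > 20:
        flags += 1                                       # many cards seen from one IP
    hist.append((now, amount))
    cards_by_ip[ip].add(card)
    return "review" if flags >= 2 else "approve"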

Re:Totally inane (0)

davester666 (731373) | more than 3 years ago | (#34731502)

Well, obviously, the person believes nobody else ever said something similar.

They probably were thinking back, and the only quote that came to mind was something about 640k being enough for everybody.

Re:Totally inane (0)

Kilrah_il (1692978) | more than 3 years ago | (#34731528)

It isn't? What's wrong with you people?

Re:Totally inane (1)

davester666 (731373) | more than 3 years ago | (#34731998)

Hey, I'm happy with my Commodore 64, but I am considering getting an Amiga.

Re:Totally inane (1)

Belial6 (794905) | more than 3 years ago | (#34732676)

Then you will be interested in the company who makes them [commodoreusa.net] .

Personally, I'm thinking about getting a C64 myself.

Re:Totally inane (1)

hairyfeet (841228) | more than 3 years ago | (#34733078)

Hey! Quit making us greybeards feel old! I still remember holding my first 1GB drive in my hand and thinking "How in the hell am I ever gonna use this much space?" and then in what seemed like the blink of an eye I'm holding a 40GB and thinking "Now what? Even if I install every app and game I ever liked I'm STILL not gonna be able to fill this thing!". Now I have dual 500GB drives onboard, with another 1TB USB for backups, and instead of marveling at the space I'm just waiting for the 2TB drives to come down in price so I can yank the 500GB ones. And this flash stick, no bigger than a stick of gum, at 8GB has more space than my first 8 drives put together. My first flash drive was $100 for 64MB and I thought "Wow, dozens of floppies in my pocket! What will I do with it all?" Man, times they do change. Now get off my lawn!

"pwufessuh haiwypheet" of ITT Tech BLOWN AWAY 6x? (-1)

Anonymous Coward | more than 3 years ago | (#34734222)

"pwufessuh haiwypheet" of ITT Tech BLOWN AWAY 6x?

http://slashdot.org/comments.pl?sid=1930156&cid=34734160 [slashdot.org]

http://mobile.slashdot.org/comments.pl?sid=1930156&cid=34719276 [slashdot.org]

http://it.slashdot.org/comments.pl?sid=1916240&cid=34612834 [slashdot.org]

http://it.slashdot.org/comments.pl?sid=1916240&cid=34647708 [slashdot.org]

http://slashdot.org/comments.pl?sid=1922942&cid=34665368 [slashdot.org]

http://slashdot.org/comments.pl?sid=1924664&cid=34669668 [slashdot.org]

ROTFLMAO! I wouldn't listen to "professor hairyfeet" guys, he's only an ITT Tech student. See those URL's and watch this ITT "wannabe guru" blow HIMSELF away.

Re:Totally inane (1)

Tablizer (95088) | more than 3 years ago | (#34731636)

Because I crave pizza, I have an italics prediction...

Re:Totally inane (1)

epine (68316) | more than 3 years ago | (#34732518)

The fact that RAM is faster and offers lower latency than just about anything else in the system has been true more or less forever.

This is the problem when the article is so poor to begin with: if you're not careful, you're pulled down to the same inane level. Since my brain isn't working well after reading that tripe, let me add that GaAs has been faster than silicon more or less forever. OK, I'm better now.

Let's not go too far down that road, or we'll run into the truism that the quickest man for the job is the man with the smallest dataset (and the fattest wallet).

The more I think about that article, the further I drift away from the cognitive on switch.

Re:Totally inane (0)

Anonymous Coward | more than 3 years ago | (#34731402)

Yup, the author missed the point entirely. In-memory analytics is no threat to storage media (HDD, SSD, etc.). It drives more storage purchases, not fewer. It just makes the distinction between data at rest and data in use clearer. It's no different from tiered storage, which already exists in spades today and is not lowering any HDD sales.

Re:Totally inane (3, Informative)

quanticle (843097) | more than 3 years ago | (#34731616)

I didn't really see the author mention anything about discarding data. Rather, it seems like he's saying that existing databases (which attempt to commit data to persistent storage as soon as possible) will be marginalized as the speed gap between persistent storage and RAM widens. Instead, business applications are going to hold data in RAM, and rely on redundancy to prevent data loss when a system fails before its data has been backed up to the database.
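
A deliberately simplified sketch of that redundancy idea: an update is acknowledged only once a second in-memory copy has it, so losing one node loses nothing. A real system would replicate over the network; here the "replica" is just another dict:

primary = {}   # in-memory copy on node A
replica = {}   # in-memory copy on node B (simulated in-process)

def put(key, value):
    primary[key] = value   # apply locally, in memory
    replica[key] = value   # synchronously mirror to the peer before acknowledging
    return "ack"

put("position:alice", 1200)
assert primary == replica   # either copy can rebuild the other after a failure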

Re:Totally inane (0)

Anonymous Coward | more than 3 years ago | (#34733772)

Instead, business applications are going to hold data in RAM, and rely on redundancy to prevent data loss when a system fails before its data has been backed up to the database

Most commercial in-memory database offerings provide the same reliability with respect to persistent storage as a normal database; i.e., for a successful commit, all writes are flushed to disk in a memory database without exception, the same as in a traditional database.

The reason you get higher performance with a memory database is that a normal RDBMS is optimized for pulling data from spinning platters with huge seek/random-read penalties. Memory databases are optimized for resolving queries from random-access memory, which has no such limit.

It is NOT about the lack of persistent storage... merely the selection of different internal data structures. Writes are still bound by the performance constraints of spinning disks or SSDs.
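
A bare-bones sketch of that point: reads are answered entirely from memory, but a commit still appends to a write-ahead log and fsyncs it, so write latency remains bound by the disk. The names and the log format are invented:

import json
import os

table = {}                      # in-memory data, used for all reads
wal = open("wal.log", "ab")     # append-only write-ahead log on disk

def commit(key, value):
    record = json.dumps({"k": key, "v": value}).encode() + b"\n"
    wal.write(record)
    wal.flush()
    os.fsync(wal.fileno())      # durable on disk before the commit is acknowledged
    table[key] = value          # now visible to in-memory queries

def lookup(key):
    return table.get(key)       # never touches the disk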

Questions? (0)

Anonymous Coward | more than 3 years ago | (#34731158)

Will storage, even flash drives, be needed in the future, given the requirement for real-time data analysis and current trends in design for real-time data analytics?

Of course storage will be needed in the future! It was needed in the past and it's needed in the present. What kind of question is that?

Or will storage move from the heart of data centers and become merely a means of backup and recovery for critical real-time apps?

Oy-yoy-yoy.

I'm getting another drink.

Re:Questions? (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34731514)

You'd better just bring the whole bottle. Somebody just used the word "merely" in front of the phrase "backup and recovery for critical real-time apps".

The remainder of the bottle will, depending on whether you work for that somebody or not, either enable a heartwarming humanitarian gesture, or be your only friend during the days of hair-raising stress and thankless toil that could strike at any second...

The cutting edge is in high frequency trading (5, Informative)

Animats (122034) | more than 3 years ago | (#34731266)

For the cutting edge in this area, see what the "high frequency traders" are doing. Computers aren't fast enough for that any more. The trend is toward writing trading algorithms in VHDL and compiling them into FPGAs [stoneridgetechnology.com] , so the actual trading decisions are made in special-purpose hardware. Transaction latency (from trade data in on the wire to action out) is dropping below 10 microseconds. In the high-frequency trading world, if you're doing less than 1000 trades per second, you're not considered serious.

More generally, we have a fundamental problem in the I/O area: UNIX. UNIX I/O has a very simple model, which is now used by Linux, DOS, and Windows. Everything is a byte stream, and byte streams are accessed by making read and write calls to the operating system. That was OK when I/O was slower. But it's a terrible way to do inter-machine communication in clusters today. The OS overhead swamps the data transfer. Then there's the interaction with CPU dispatching. Each I/O operation usually ends by unblocking some thread, so there's a pass through the scheduler at the receive end. This works on "vanilla hardware" (most existing computers), which is why it dominates.

Bypassing the read/write model is sometimes done by giving one machine remote direct memory access ("RDMA") into another. This is usually too brutal, and tends to be done in ways that bypass the MMU and process security. So it's not very general. Still, that's how most Ethernet packets are delivered, and how graphics units talk to CPUs.

The supercomputer interconnect people have been struggling with this for years, but nothing general has emerged. RDMA via Infiniband is about where that group has ended up. That's not something a typical large hosting cluster could use safely.

Most inter-machine operations are of two types - a subroutine call to another machine, or a queue operation. Those give you the basic synchronous and asynchronous operations. A reasonable design goal is to design hardware which can perform those two operations with little or no operating system intervention once the connection has been set up, with MMU-level safety at both ends. When CPU designers have put in elaborate hardware of comparable complexity, though, nobody uses it. 386 and later machines have hardware for rings of protection, call gates, segmented memory, hardware context switching, and other stuff nobody uses because it doesn't map to vanilla C programming. That has discouraged innovation in this area. A few hardware innovations, like MMX, caught on, but still are used only in a few inner loops.
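
Purely as an illustration of those two primitives (built on exactly the socket read/write path the poster considers too slow, not on any hardware-assisted mechanism), a sketch with 4-byte length-prefixed framing:

import socket
import struct

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def remote_call(sock: socket.socket, payload: bytes) -> bytes:
    # Synchronous "subroutine call to another machine": send, then block for the reply.
    sock.sendall(struct.pack(">I", len(payload)) + payload)
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def enqueue(sock: socket.socket, payload: bytes) -> None:
    # Asynchronous queue operation: send and return without waiting for a reply.
    sock.sendall(struct.pack(">I", len(payload)) + payload)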

It's not that this can't be done. It's that unless it's supported by both Intel and Microsoft, it will only be a niche technology.

Re:The cutting edge is in high frequency trading (1)

Simon80 (874052) | more than 3 years ago | (#34731326)

If Intel tried to market its tools to mainstream and OSS developers (yes, open source the tools), then maybe the stuff would catch on better. They are quite capable of making stuff user-friendly for the average developer, but they only seem to market to the HPC market, because that's where the high margin CPUs sell. I think if they spent more time increasing general awareness of anything, it would be easier to get people to use them in their target markets, which would help them sell high end CPUs anyway.

Re:The cutting edge is in high frequency trading (3, Interesting)

Gorobei (127755) | more than 3 years ago | (#34731376)

Yep, the article is 10-20 years out of date.

HFT has been using statistical synchronization of dbs for years.

Big financial shops switched to in-memory dbs decades ago. With co-lo on the compute farms.

I don't know why he's even talking about 32G boxes as servers. That's a desktop, real db hosts are an order of magnitude bigger.

His "push the disks to the edge of the network?" Um, that's already happened - it's called tier 2. Tier 1 is the terabytes of solid-state storage we keep just in case.

This is a blast from the 1990s.

Re:The cutting edge is in high frequency trading (3, Insightful)

Rich0 (548339) | more than 3 years ago | (#34731504)

There is another simple solution to optimizing HFT - just aggregate and execute all trades once per minute, with the division between each minute taking place in UTC plus/minus a random offset (a few seconds on average - with 98% of divisions being within 5 seconds either way).

Boom, now there is no need to spend huge amounts of money coming up with lightning-fast implementations that don't actually create real value for ordinary people.

Business ought to be about improving the lives of ordinary people. Sure, sometimes the link isn't direct, and I'm fine with that. However, we're putting far too much emphasis on optimizing what amounts to numbers games that do nothing to produce real things of value for anybody...
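
A toy sketch of the batched-auction proposal above; the jitter, the order format, and the single clearing price are all invented to show the shape of the idea, not a real market mechanism:

import random

orders = []                               # orders accumulate; nothing executes on submit

def submit(side, price, qty):
    orders.append((side, price, qty))

def next_auction_delay(jitter=2.0):
    # roughly one-minute batches, with the boundary nudged by a random offset
    return 60 + random.uniform(-jitter, jitter)

def run_auction():
    global orders
    batch, orders = orders, []
    buys = [o for o in batch if o[0] == "buy"]
    sells = [o for o in batch if o[0] == "sell"]
    if not buys or not sells:
        return None
    best_bid = max(price for _, price, _ in buys)
    best_ask = min(price for _, price, _ in sells)
    if best_bid < best_ask:
        return None                       # nothing crosses in this batch
    return (best_bid + best_ask) / 2      # everyone who crosses trades at one price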

Re:The cutting edge is in high frequency trading (0)

Anonymous Coward | more than 3 years ago | (#34731936)

You may want to read up on what HFT actually means, because you seem to think it has to do with normal trading orders - there is a reason there is an F in that acronym.

Re:The cutting edge is in high frequency trading (1)

Rich0 (548339) | more than 3 years ago | (#34732616)

Yeah, I know exactly what it is. My proposal basically is to get rid of it by making it useless. It provides no real benefit to the economy, so nobody will be hurt if it goes away...

Re:The cutting edge is in high frequency trading (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34731954)

While I like your future better, I'm guessing that the real one will look more like "A solid ball of hyper-computronium wrapped around the NYSE, tended by robots and powered by a Dyson sphere capturing the entire output of the sun"...

Sure, the only surviving life forms will be extremophilic bacteria in the wastelands and investment bankers in the Suburbidomes(tm); but think of how high the GDP per capita will be!

Re:The cutting edge is in high frequency trading (2)

Bill, Shooter of Bul (629286) | more than 3 years ago | (#34732110)

You really do not understand the domain in question. The whole idea behind HFT is to analyze real-time data and make a near-instantaneous stock trade that capitalizes on that data analysis *before* anyone else does. Waiting a second is too long in this case. The value they add to their customers: cold hard cash. The value to the stock market: liquidity (there's a fair argument that it's too much liquidity).

Re:The cutting edge is in high frequency trading (1)

Rich0 (548339) | more than 3 years ago | (#34732716)

Uh, I understand exactly what it is, and who benefits, which would not be the economy at large.

The point in aggregating trades is to entirely negate the advantage of HFT, thus eliminating it from the market. It isn't like there wouldn't still be liquidity - you'll just have to wait 1-2 minutes to have an order filled. The average person making a trade usually has a lag of hours between an event happening and getting to make a trade anyway.

Re:The cutting edge is in high frequency trading (1)

Bill, Shooter of Bul (629286) | more than 3 years ago | (#34732994)

Oh, so your solution to the technical problem is to get rid of the industry which experiences it?

Ok, I guess. I'm really more here on slashdot to discover some sweet techniques for solving immensely difficult technical problems.

I didn't get that from your first post. Maybe because you started out with the technical part? I don't know exactly. I'm not knowledgeable enough in the field of trading to make an intelligent comment about the result of banning HFT. The market does need liquidity, that much I do know.

Re:The cutting edge is in high frequency trading (1)

Rich0 (548339) | more than 3 years ago | (#34733288)

The market does need liquidity, that much I do know.

The market had plenty of liquidity before the invention of HFT. I'm just suggesting limiting liquidity to a few minutes, rather than a few nanoseconds. Will it really hurt the economy if it takes a stock 10 minutes to plunge 50% rather than a few seconds, with only a few big well-connected institutions getting out in time?

I'm all for technology that solves real-world problems. However, HFT is a case of where technology and a lack of regulation has actually created real-world problems. Improving HFT actually makes those problems worse.

Re:The cutting edge is in high frequency trading (1)

aaarrrgggh (9205) | more than 3 years ago | (#34733828)

The liquidity HFT provides should be at arbitrage margins, not the insane profits the players are making. If it makes sense at 0.001%, then go for it. At 0.1%, they are raping the system for the 'value' they provide.

Re:The cutting edge is in high frequency trading (1)

LordNacho (1909280) | more than 3 years ago | (#34732118)

Are you sure this won't simply create a different game?

Re:The cutting edge is in high frequency trading (1)

Gorobei (127755) | more than 3 years ago | (#34732306)

Right. We can have the banks just trade once a minute or once a day.

End users can go back to using Travellers Cheques: sure you spend a few hours of your foreign vacation either getting ripped off or waiting in line at a bank, but hey, at least global trading is now leisurely.

Stocks are just as good: you paid 3% to trade, but hey, it's a long term investment!

Commodities? You need a supply of tin? Just buy a tin mine.

People proposing slowing down trading speeds are like people proposing slowing down computer clock speeds. Sure, you save some energy, but so what? Everyone has to use a 6502 based iPad because you think that would be better?

Re:The cutting edge is in high frequency trading (0)

Anonymous Coward | more than 3 years ago | (#34732534)

What are the criteria for better?

I think the parent was proposing something that attempts to minimize the size of the financial sector of the economy while preserving the benefits of what they do.

Yes, we can probably move from trading once a day to trading once a minute to trading once a millisecond to trading once a microsecond to trading once a nanosecond to trading once a picosecond, but how is the welfare of society improved by this change?

Re:The cutting edge is in high frequency trading (1)

Gorobei (127755) | more than 3 years ago | (#34732672)

I thought I gave some examples - FX, equity, commodity prices get better as frequency increases.

Less cost and fuss for consumers and importers/exporters, etc. A few people spend their lives making prices tighter, and millions of people get better prices on vacations, on their mortgages, etc. Why begrudge them for pocketing a few percent off the top?

International trade on high-tech products becomes possible: you can get a firm offer on 20 inputs you need in 1 hour. In the old days, that level of co-ordination was impossible - you had to BUY the suppliers.

I take "better" to mean "you get better stuff at a better price."

Re:The cutting edge is in high frequency trading (1)

Rich0 (548339) | more than 3 years ago | (#34732596)

Actually, I'd prefer once per day at midnight, with a blackout on company announcements after 5PM. That would go even further towards leveling the playing field.

What value does a bot generate when all it does is capitalize on the tiniest fluctuations in stock price? It isn't like it makes the stock any more efficient - the price would certainly adjust itself. The only difference is that some investment bank can't make a fortune based solely on its ping time.

Re:The cutting edge is in high frequency trading (1)

Gorobei (127755) | more than 3 years ago | (#34732896)

So you would be happy if Google could only adjust its search algorithm once a day? It would be a more level playing field, and then search companies couldn't make a fortune based solely on their ping times.

Re:The cutting edge is in high frequency trading (1)

Rich0 (548339) | more than 3 years ago | (#34733342)

Yes, but Google's search algorithms help ordinary people find information they need, and they help real business that produce real things to do so more efficiently, which makes the cost of everything you consume a little cheaper.

A better HFT algorithm just ensures that some big banker makes a few hundred million more dollars at the expense of any ordinary person who has a retirement account.

I have nothing against progress. However, most of the financial industry just shuffles numbers around manufacturing money out of nothing, and occasionally turning money back into nothing in astronomical quantities. Did you notice how gas prices plummeted from $4/gallon to about $2.50 in a few weeks after the hedge fund meltdowns? Now, tell me how much value all those funds trading oil futures were creating?

I have no problem with financial instruments that actually create more efficient markets. If an airline needs to buy 50 million gallons of fuel next year, I'm fine with them hedging the price of oil to keep their ticket prices stable. My problem is when the market in the actual commodity becomes secondary to playing financial games. The oil market should be about running cars, or environmental controls, or whatever - not about 100 day traders making $40 on the trade of a $50 barrel of oil.

Re:The cutting edge is in high frequency trading (1)

Gorobei (127755) | more than 3 years ago | (#34733818)

99% of the fuel market is not about day traders scalping a dollar or two on a few thousand barrels of oil. It's more like:

1. Geeks building code to track every tanker, tender, barge, pipe, and hub in the world to estimate oil availability.
2. Traders yelling "lease me a tanker" and having people on call to figure the time and cost to get it moving oil from A to B.
3. Full time meteorologists predicting short-term weather.
4. Geeks building models based on the above.
5. Geeks pricing out the cost of refineries, catalytic crackers, etc, to figure how to optimize profits.

This is a multi-billion dollar industry, not a few day-traders making bets in their pajamas.

It's not surprising that the experts in the field make a lot of money.

Re:The cutting edge is in high frequency trading (0)

Anonymous Coward | more than 3 years ago | (#34734134)

Ignoring your weird mixing of ping time and search algorithms, the difference is that Google would be screwing competitors; the big investment banks are screwing anyone who invests who doesn't have a data center in Manhattan.

I think the right question to ask is: should we prevent entities with special positions in the market (insider knowledge, many orders of magnitude faster data) from using this sort of advantage?

In the past the answer has generally been "yes".

Re:The cutting edge is in high frequency trading (3, Informative)

BitZtream (692029) | more than 3 years ago | (#34731750)

So I'm guessing you've never actually done any development?

The 'byte stream' model is not from UNIX, its just the way the hardware is laid out physically.

IPC happens in an entirely different way unless you're using something simplistic like pipes.

RDMA is pretty much a staple of high-speed cluster computing; however, it's DMA that allows pretty much everything in your PC to work without slowing the processor down. Even your keyboard controller uses DMA to get the characters somewhere useful.

As far as what you're calling RDMA via Infiniband, I've seen massive clusters (some of the largest in the world) using it ... safely.

If you think nothing uses the protections provided by the x86 family, I'd like to know what shitty OS you're using. Not only does everyone actually use it on the x86, they do it in ... get this ... C! Perhaps you should take a look at a few open source OSes and notice that while there is some assembly in specific places for speed and the required lowest-level libraries ... you'll be surprised by the fact that all of that memory management stuff is written in ... C and utilized by ... C programs.

I guess you're also ignoring the fact that Intel and AMD added more protection hardware to the x86 architecture JUST FOR VIRTUALIZATION ... I suppose you think the fact that modern hypervisors won't work without these features present is just a silly little annoyance that the software vendors throw in to make us buy new hardware to pad their bank accounts?

I'm not sure what development you do, but my standard C library uses MMX for many functions that require me to do nothing to take advantage of their speedup.

You really have no clue do you?

Re:The cutting edge is in high frequency trading (1)

SuricouRaven (1897204) | more than 3 years ago | (#34733488)

The old PS/2 keyboards used interrupts, not DMA. USB I'm not sure about.

Terabyte RAM? (1)

sunderland56 (621843) | more than 3 years ago | (#34731282)

Even a single consumer hard drive is a terabyte of storage.... how many servers at any cost have a terabyte of RAM?

Re:Terabyte RAM? (1)

Simon80 (874052) | more than 3 years ago | (#34731358)

I think you're missing the point. If the data is analyzed in a single pass as it is received, 1TB of RAM is not necessary.

Re:Terabyte RAM? (1)

Anonymous Coward | more than 3 years ago | (#34731428)

I think you are missing the point here. If the data to analyze is so small, then why the fuss? If the data fits in memory, leave it in memory; if not, store it and retrieve it later. Guess what, the place to store your data is probably a database with storage attached. Unless, of course, you are one of those young kids (disclaimer: I'm 28) who reinvent the wheel all the time and write that part themselves, because databases are out.

So, let's say that to analyze your incoming data of size 1MB, you also need to reference 100MB of other data. Fits in memory, right? Perfect. Now let's say your incoming data size is 10MB and you need another 5TB of data to properly analyze it. Unless you have that much RAM, you need to store that data somewhere. Probably a database ... blah blah, see above... However, if your incoming data is 100MB and you don't need reference data, well, it fits in RAM: analyze it right away, store it in your database, and forget about it, as you won't need to reference it later.

It's just BS... either your working set is small enough to fit in memory or it is not. There are two things you can do: buy enough memory to fit it in there, or store and retrieve when necessary. Database caches take care of "hot" data.
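
The routing decision the poster is describing fits in a few lines; the 8GB budget here is an arbitrary assumption:

MEMORY_BUDGET_BYTES = 8 * 2**30   # assume ~8 GB usable for the working set

def plan(incoming_bytes, reference_bytes):
    if incoming_bytes + reference_bytes <= MEMORY_BUDGET_BYTES:
        return "analyze entirely in memory"
    return "store in the database and reference it as needed"

print(plan(1 * 2**20, 100 * 2**20))   # 1 MB + 100 MB -> in memory
print(plan(10 * 2**20, 5 * 2**40))    # 10 MB + 5 TB  -> database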

Re:Terabyte RAM? (2, Interesting)

Anonymous Coward | more than 3 years ago | (#34734002)

I think, perhaps, that you're missing the point, at least of the article. It has nothing to do with whether to store information in memory or in the database and everything to do with the current trend of using dedicated analytics products (i.e. OLAP) to do data analysis. Whereas we used to use the same relational databases to store, retrieve and analyze all data with SQL as the Swiss Army knife that enabled it all, we're moving towards a model where the relational database is responsible for storage and retrieval of information only and dedicated analytics products have their own cache of the information for reporting and analysis purposes.

The point is that relational databases are being marginalized and one of their major selling points (i.e. the ability to analyze data based on the relationship between different types of data) is increasingly less relevant. Once you're limiting your RDBMS usage to simple CRUD operations, the rationale for choosing an RDBMS (especially an expensive one like Oracle and its ilk) over NoSQL options or open source databases with limited support for power-user options starts to disappear. MySQL may lack a lot of the features that experienced DBAs consider mandatory, but it can do INSERTs, UPDATEs and DELETEs as well as anything and it has no problems with SELECTs based on keyed columns. Similarly, Cassandra, Voldemort, and such can also easily support that limited subset of functionality.

That is why RDBMSs are becoming marginalized. Applications are increasingly being designed to either avoid an RDBMS back-end or to use it as simple "dumb" storage and rely on a separate analytics product to accomplish all the complicated logic that previously would be accomplished with complicated SQL and stored procedures. Beyond that, OLAP concepts allow the data-mining interface to require less development effort. It's simple to write an interface around (an) OLAP cube(s) and allow the user to choose the dimensions and measures and allow the user to pivot, drill-down and such. In fact, most analytics products do this stuff out of the box without any development necessary. With a SQL database, an interface needs to be created that will translate the user's instructions into SQL, which can often become very complex and requires significant effort to ensure that the resulting SQL will perform well.

This isn't about RDBMSs becoming unnecessary, it's about them now being best served in a much more limited role than they've previously occupied in the application architecture.
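
A toy version of the OLAP idea in that comment: pre-aggregate a measure over every combination of two dimensions, then answer roll-up and drill-down queries from the in-memory cube instead of from SQL. The data and dimensions are invented:

from collections import defaultdict
from itertools import product

rows = [  # (region, product, revenue) facts pulled once from the "dumb" store
    ("EU", "widget", 120.0), ("EU", "gadget", 80.0),
    ("US", "widget", 200.0), ("US", "widget", 50.0),
]

cube = defaultdict(float)
for region, prod, revenue in rows:
    # aggregate at every level: (region, product), (region, *), (*, product), (*, *)
    for r, p in product((region, "*"), (prod, "*")):
        cube[(r, p)] += revenue

print(cube[("US", "widget")])   # drill-down: 250.0
print(cube[("EU", "*")])        # roll-up over products: 200.0
print(cube[("*", "*")])         # grand total: 450.0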

Re:Terabyte RAM? (0)

Anonymous Coward | more than 3 years ago | (#34731372)

http://www.oracle.com/us/products/servers-storage/servers/x86/sun-fire-x4800-server-077287.html
I think this will provide a terabyte of RAM, if you can afford Oracle's prices...

Re:Terabyte RAM? (2)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34731434)

1TB is still in the realm of the rather specialized; but 512GB systems (while not inexpensive) are actually pretty available. A quick glance at Dell shows that (even without the benefits of a rep, volume pricing, or any sort of negotiation) a 2U R815 with 512GB of RAM can be yours for a hair under $40,000. Kitted out with the specs you actually want, of course, it might run you another $20k above that. If AMD isn't your flavor, the Intel-based but otherwise similar R810 will run five to ten thousand more than the R815 with otherwise similar options...

At those prices, I'd venture to say that Flash still has a reasonably bright future ahead of it in the high-speed/low-latency storage market (not to mention the volatility issue); but (especially if your problem can handle being broken up across multiple systems with only modestly fast interconnects) the cost of enormous amounts of RAM has dropped pretty significantly.

Now, if you can't deal with the limitations of commodity cluster interconnects, and have to have more than half a terabyte of RAM in a single memory space, I get the impression that your options get more expensive pretty fast. Phrases like "up to 16TB shared global memory" and "single system image" are generally your cue to hold on to your wallet and run... If that is what you want, though, you can buy it.

Re:Terabyte RAM? (1)

Rich0 (548339) | more than 3 years ago | (#34731540)

the cost of enormous amounts of RAM has dropped pretty significantly

Uh, your example was 512GB, and you're comparing $40k for RAM to about $40 for a hard drive. That's around 1000:1!

Sure, RAM is only getting cheaper, but so are hard drives. A few years ago I got 2GB of RAM for about the same price as 320GB of hard drive. So, if anything the relative cost of RAM has gone UP, and not down...

Re:Terabyte RAM? (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34731664)

Oh, RAM isn't even close to HDDs, nor is there any reason to expect that it ever will be, if you care about storage space. Only if latency and IOPS are at issue does RAM become a relevant competitor. When it comes to I/O operations, particularly highly random ones scattered across the storage area, RAM will (unsurprisingly, given what its name stands for) absolutely wipe the floor with anything with moving parts. To even touch the I/O performance, you would probably be talking multiple racks jammed full of top-of-the-line 15kRPM monsters (a proposition unlikely to be achieved for $40k...).

Plus, while the actual hardware is of pretty niche interest, it is pretty impressive (looking at the history of component costs and sizes in computing) that you can now get a half-terabyte of RAM, in a package that a single person of average strength can move, that will run from reasonably ordinary household wiring, for approximately the US per-capita GDP.

Re:Terabyte RAM? (1)

Tablizer (95088) | more than 3 years ago | (#34731800)

At those prices, I'd venture to say that Flash still has a reasonably bright future

Unlike your puns ;-)
   

Re:Terabyte RAM? (2)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34731872)

That one wasn't even intentional, unfortunately. My love of puns has, apparently, seeped directly into whatever part of my brain is responsible for day-to-day verbal and written work...

Re:Terabyte RAM? (1)

imsabbel (611519) | more than 3 years ago | (#34731784)

We bought a machine for FEM a few weeks ago (there was budget left for 2010).

4x12-core Opteron, 256GB of RAM. 12k.
Which is peanuts, pretty much.

So I have little doubt that 1TB of RAM is quite affordable nowadays if you have big-iron-level money available.

Funny (1)

roman_mir (125474) | more than 3 years ago | (#34731314)

It's funny that only today I chatted with some folks on the PostgreSQL IRC support channel about this, asking whether it is at all possible to have 2 postmasters running at the same time, one to do in memory SQL against an all-in-memory database, and the other to write to the database (and no, they think that it is not possible to have 2 postmasters talking to the same database this way, they believe it will corrupt the data). The suggestion was just to increase shared_buffers and file system block buffer size. I am thinking that maybe also it's useful to try and set up the streaming replication (xlog shipping) to another PostgreSQL database store/instance and use the other database as read only, then increase shared_buffers and OS disk block buffers.

Don't really know whether there is any significant advantage of one approach over the other (except for having 2 databases, of course, so they become spares).

Re:Funny (0)

Anonymous Coward | more than 3 years ago | (#34731546)

If you are running out of memory capacity on your machine, you are running out of memory capacity on your machine ... it doesn't matter whether you have 2 postmasters competing for the memory on one machine or just give it all to one postmaster, right?

Re:Funny (1)

roman_mir (125474) | more than 3 years ago | (#34731598)

But but but, you are missing the point. Can 2 postmasters access the same disk, one to read from it only and the other one to do writes?

If that was possible, then 2 postmasters could be on one machine, each on its own processor/memory or on 2 machines with the data directory mapped to both. The answer from the PostgreSQL guys in the IRC channel was that it's not possible, because all postmasters end up writing SOMETHING to the data directory, maybe those are just XLOGs, but they will write something and will screw each other up.

That's why the answer to this question is to replicate the database and have one for read only with huge RAM and the other for writing, and stream-replicate the write DB to read only DB through XLOG file pushing.

All well and good until... (1)

dg41 (743918) | more than 3 years ago | (#34731348)

This is all well and good until someone accidentally knocks out the power. Then all of that stuff needs to be recomputed if it's not stored to disk.

Can we please stop already? (5, Insightful)

mwvdlee (775178) | more than 3 years ago | (#34731466)

I'm getting sick and tired of hearing about yet another hype in IT-land where everything has to be done in yet another new way.

All developers understand that different problems require different solutions. Will the managers who shove this crap up our asses please stop doing so? It's not productive; you're not going to get a better solution by forcing it to be implemented in whatever buzzword falls off the latest bandwagon in an ever-growing parade of buzzwords.

"In-memory analytics" is what we started out with before databases, and guess what: it's never gone away. We've never stopped using it. Now just tell us what problem you have and let us developers decide how to solve it.

Re:Can we please stop already? (1)

macslas'hole (1173441) | more than 3 years ago | (#34731532)

Exactly, "in-memory analytics" sounds like more marketing BS, just another way to sell some unneeded software or service.

Re:Can we please stop already? (3)

Desert Raven (52125) | more than 3 years ago | (#34731634)

Agreed, someone comes up with something new to solve a very specific issue, and all of a sudden someone's predicting how it will completely replace everything else in the next month.

Grow up.

Physical storage and relational databases aren't going anywhere anytime soon. In-memory this and non-relational that are all well and good for the specific problems they were designed for, but physically stored and relational data fits the needs of 90% of data storage and retrieval. I sure as HECK don't want my bank storing my financial data purely in memory.

So keep yelling to yourselves about how the sky is falling on traditional techniques. Meanwhile the rest of us have real work to do.

Re:Can we please stop already? (2)

AllenNg (954165) | more than 3 years ago | (#34732318)

I think you're missing a few evolutionary pieces. Most data analytics systems that I'm aware of are not currently relational. Long ago, the data lived in memory, but memory was expensive, so everything was moved to disk. The relational model added the formalisms of normalization (to cut down on space, among other reasons), but the types of multi-dimensional queries used by the analytics apps required too many joins for this to work. So the data was de-normalized (eg. OLAP) to improve performance. As memory prices came down, people started putting the OLAP indexes and aggregates into memory to get a performance boost. Moving the data back to memory and returning to a normalized, relational model isn't so much "drastic new thing" as it is "logical next step". For me, the upsetting thing is that just as I'm getting good at the data warehousing thing, it seems we're going to be switching to being relational again.

Re:Can we please stop already? (1)

pinkushun (1467193) | more than 3 years ago | (#34732392)

It also ticks me off how they redesign these existing practices, to the point where they stop making sense, and you have to relearn the new and better (read: rephrased) technology. Almost like they want you to rewrite all those tests...

CAPTCHA: KISSUBAI - keep it simple stupid, unless buzzwords are involved

In Memory Analytics... (0)

Anonymous Coward | more than 3 years ago | (#34731620)

...are not usually applied to "big data". I'm not sure what technologies are being referred to, but a few billion rows is the limit to what I've seen. This is NOT what I would call "big data".

Free "in memory" analytics app Qlikview (1)

egork (449605) | more than 3 years ago | (#34731630)

Download a free (as in beer) app http://www.qlikview.com/us/explore/experience/free-download [qlikview.com] and see for yourself what current commercial software can do. I load as much as a hundred GB into RAM for analytics with this application. Just keep in mind that a star schema is the best fit for this software. Get your tables from an existing database as flat files, load them "as is", and start analysis immediately.
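I can't reproduce QlikView's own load script here, but the "dump your tables to flat files and pull them straight into RAM" workflow it uses looks roughly like this in generic terms (pandas used only as an analogue; file and column names are placeholders):

```python
# Rough analogue of the "export tables as flat files, load them as-is into RAM"
# workflow described above. File and column names are placeholders.
import pandas as pd

# One flat file per table, exported from the source database.
fact = pd.read_csv("sales_fact.csv")        # large central fact table
customers = pd.read_csv("dim_customer.csv") # small dimension table
products = pd.read_csv("dim_product.csv")   # small dimension table

# Star schema: dimensions hang off the fact table by their keys,
# and all joins happen in memory at analysis time.
model = (fact
         .merge(customers, on="customer_id", how="left")
         .merge(products, on="product_id", how="left"))

print(model.groupby("product_name")["revenue"].sum().head())
```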

But puting data in system ram = harder reboots (1, Interesting)

Joe The Dragon (967727) | more than 3 years ago | (#34731728)

But putting data in system RAM = harder reboots, as you need to dump it all to disk first. Also, what about UPSes? You need one with enough power to last for the time it takes to do that as well.

Re:But puting data in system ram = harder reboots (1)

TooMuchToDo (882796) | more than 3 years ago | (#34732356)

And god forbid your system halts and you lose any data you haven't already committed to persistent storage.

Re:But puting data in system ram = harder reboots (1)

Anonymous Coward | more than 3 years ago | (#34733370)

You should be using stable operating systems and diesel backups. You should also be using clusters with the same data so a loss of one system isn't catastrophic.

Re:But puting data in system ram = harder reboots (2)

Joe The Dragon (967727) | more than 3 years ago | (#34733902)

What help is diesel when the main power room with the transfer switch is on fire and the UPS doesn't have the power to run the systems for long, since it's set up just to cover the time it takes for the diesel to start up?

I know am being your stereotypical anarchist but.. (2)

Nrrqshrr (1879148) | more than 3 years ago | (#34731810)

Decentralization is the way.

Re:I know am being your stereotypical anarchist bu (2)

hazem (472289) | more than 3 years ago | (#34733340)

Decentralization is the way.

If you're a consultant and find a client working in a centralized way, you sell decentralization as the way to solve all their woes. If you find them working in a decentralized way, you sell them on centralizing to solve all their woes.

There are only two constants here: 1) every business has woes, regardless of structure; 2) consultants extract lots of value by shifting those woes around

Hell no (1)

HalAtWork (926717) | more than 3 years ago | (#34731826)

You will always want that data so you can manipulate it in some other manner that wasn't taken into account by the in-memory analysis, or even the scope of your project. These marketing blokes sure like to seize the day, don't they?

It's a matter of use and optimisation. (1)

rawler (1005089) | more than 3 years ago | (#34731980)

Hard drives aren't really as slow as people think. The problem is that mechanical hard drives are slow at seeking, but if seeking can be eliminated, you can quite easily saturate your CPU on even a moderately complex calculation.

Case in point: http://www.youtube.com/watch?v=WQw7c-PliB4 [youtube.com]
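A rough back-of-the-envelope way to see the sequential-vs-seek gap on your own hardware is something like the sketch below (the file path is a placeholder; use a file much larger than your page cache or the numbers will lie to you):

```python
# Crude comparison of sequential vs. seek-bound read throughput.
# The file path is a placeholder; use a file much larger than the OS
# page cache, or the cache will hide the disk entirely.
import os, random, time

PATH = "/tmp/big_test_file"
CHUNK = 4096
size = os.path.getsize(PATH)

def sequential_read():
    start = time.perf_counter()
    total = 0
    with open(PATH, "rb") as f:
        while True:
            buf = f.read(1024 * 1024)   # large sequential reads
            if not buf:
                break
            total += len(buf)
    return total / (time.perf_counter() - start)

def random_read(n_seeks=2000):
    start = time.perf_counter()
    total = 0
    with open(PATH, "rb") as f:
        for _ in range(n_seeks):
            f.seek(random.randrange(0, size - CHUNK))  # force a seek
            total += len(f.read(CHUNK))                # tiny read per seek
    return total / (time.perf_counter() - start)

print("sequential: %.1f MB/s" % (sequential_read() / 1e6))
print("random 4K:  %.1f MB/s" % (random_read() / 1e6))
```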

Re:It's a matter of use and optimisation. (1)

SuricouRaven (1897204) | more than 3 years ago | (#34733544)

Mechanical drives can sustain reads of between 50 MB/s and 80 MB/s, depending on how much you want to spend.

An addition not a replacement (1)

McDee (105077) | more than 3 years ago | (#34732138)

In-memory data storage is fine as long as it isn't primary data storage. Yes, it's faster, but there are a lot of downsides as well. The most important is that it isn't easy to share between servers (a close second: it's hard to replicate to a remote site for disaster recovery), so each server needs its own copy of the data and there needs to be some way of keeping all those copies in sync.

The alternative is to have good old "traditional" storage sitting where it always sits and when the servers boot up or start their processing they load in the appropriate data set from the storage in to memory. This gives you all of the benefits of the fast in-memory processing without worrying about all of the downsides you create by using it as primary storage. So the memory isn't storage, it's cache.

So the real battle that will take place is not between hard disks and memory, it will be between RAM and SSDs.
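The "disk is the copy of record, RAM is the working cache" pattern described above basically has this shape; a sketch only, with the file name, format, and persistence policy all invented for the example:

```python
# Sketch of "disk is the copy of record, RAM is the cache":
# load the data set once at startup, serve queries from memory,
# and persist changes back to durable storage. Names and the JSON
# format are placeholders.
import json

class InMemoryCatalog:
    def __init__(self, path):
        self.path = path
        with open(path) as f:
            # Entire data set pulled into RAM at startup.
            self.items = {row["id"]: row for row in json.load(f)}

    def get(self, item_id):
        # Reads are pure memory lookups; the disk is never touched.
        return self.items.get(item_id)

    def put(self, row):
        # Writes update memory first, then the durable copy, so a crash
        # costs at most the write in flight rather than the whole data set.
        self.items[row["id"]] = row
        with open(self.path, "w") as f:
            json.dump(list(self.items.values()), f)
```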

Open-Source VoltDB (1)

geoffrobinson (109879) | more than 3 years ago | (#34732576)

I believe VoltDB (http://voltdb.org) uses in-memory storage and MPP, if anyone is interested in giving it a test spin. It's from Michael Stonebraker of various database fame (Ingres, Vertica, etc.).

They've been doing a number of presentations on the topic you can probably find on the site.

Global-scale analytics != standard IT load (2)

drdrgivemethenews (1525877) | more than 3 years ago | (#34732580)

Although TFA doesn't say so explicitly, I think it's talking about the race to get the best targeted advertising analytics in place for global applications like eBay, FB etc. These applications don't have the same database requirements as traditional business apps. It makes sense to talk about new ways of doing things for them, but TFA's author and a lot of other people make the mistake of thinking or implying that these new techniques will apply directly to traditional business apps as well. Sorry, not.

----------

Happy New Year, may it suck less for ya than the last one.

map your data (1)

wrench turner (725017) | more than 3 years ago | (#34732796)

Most OSes and programming languages will let you map your in-memory data structure to a contiguous disk file, so your disk IO is performed at paging speeds. The file system is only touched when the file is mapped (opened). Your system can then be configured to choose to what degree your data lives in memory vs. on disk.
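In Python that's the mmap module; a minimal sketch (the file name is a placeholder, and the file needs to be at least a few bytes long for the slicing below to work):

```python
# Minimal memory-mapping sketch: the file's contents are paged into RAM
# on demand by the OS, so "disk IO" happens at paging granularity rather
# than through explicit read()/write() calls. File name is a placeholder.
import mmap

with open("dataset.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as data:   # map the whole file
        header = data[:16]                   # touching bytes faults pages in
        data[16:20] = b"\x01\x00\x00\x00"    # writes go back through the page cache
        data.flush()                         # ask the OS to write dirty pages out
```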

Heard it all before (1)

rrohbeck (944847) | more than 3 years ago | (#34732844)

Remember when the first 64-bit machines became commercially available?
"zOMG, now we can keep whole databases in RAM with the 4GB limit gone!"
This is just CS101. Memory hierarchy: you keep your data in the fastest memory it'll fit in (that you can afford).
Now we can afford more RAM, so we can do more per unit time because we don't have to wait for IO. Duh.

And along w/the rest of the "Duh's" (1)

FlyingGuy (989135) | more than 3 years ago | (#34734214)

WTF, is Soulskill still drunk?

This is SO nothing new, nor is it even interesting.

In-memory DBs are nothing new. They are simply prone to failure, and this is why hardware storage, be it spinning drives or flash, will always be around.

All it takes is one hiccup in the memory logic, an interrupt controller, or a DMA channel, and all your in-memory data is toast, forcing a reload from the last checkpoint, which can take quite a while when you are talking about, say, a terabyte of information.

Clifford Hersh and Jeffery Spirn cooked up the ANTS database a few years back. It was BLAZINGLY fast; it outran all of them, including TimesTen. It was a fully in-memory database, and it never got any traction.
