Dumping Lots of Data to Disk in Realtime?

Cliff posted more than 8 years ago | from the too-much-for-an-RDBMS? dept.

Data Storage 127

AmiChris asks: "At work I need something that can dump sequential entries for several hundred thousand instruments in realtime. It also needs to be able to retrieve data for a single instrument relatively quickly. A standard relational database won't cut it. It has to keep up with 2000+ updates per second, mostly on a subset of a few hundred instruments active at a given time. I've got some ideas of how I would build such a beast, based on flat files and a system of caching entries in memory. I would like to know: has someone already built something like this, and if not, would anyone want to use it if I built it? I'm not sure what other applications there might be. I could see recording massive amounts of network traffic or scientific data with such a library. I'm guessing someone out there has done something like this before. I'm currently working with C++ on Windows."


2-stage approach (5, Informative)

eagl (86459) | more than 8 years ago | (#12528705)

Have you considered a 2-stage approach? Stuff it to disk, and process/index it separately? A fast stream of data would let it all get recorded without loss, and then you could use whatever resources are necessary to index and search without impacting the data dump.

Cost... Are you going to go for local storage or NAS? Need SCSI and RAID or a less expensive hardware setup? Do you think gigabit ethernet will be sufficient for the transfer from the data dump hardware to the processing/indexing/search machines?

Sounds like you might want to run a test case using commodity hardware first.

Re:2-stage approach (0)

Anonymous Coward | more than 8 years ago | (#12529260)

I have to second the two-stage recommendation. I have no idea how hard your real-time constraints are, but you want to be sure the queries can't mess up the recording. That's much easier to show with a two-stage design.

Re:2-stage approach (0)

Anonymous Coward | more than 8 years ago | (#12534824)

cat /dev/instruments >database.txt

I'll be contacting you regarding the exorbitant consulting fees you now owe.

Re:2-stage approach (1)

Nutria (679911) | more than 8 years ago | (#12535356)

A messaging system would work.

The front-end gets all the data, then passes it along using a file-based backing-store queueing system to the back-end that posts the data to your permanent store.

This also gives you the flexibility to let the front-end choose which back-end to send it to (usually on another machine).
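
To make the file-backed queue idea concrete, here is a minimal sketch in C++, assuming a single front-end writer and a single back-end reader; the class name and record framing are made up for illustration, not taken from any particular messaging product.

    // Minimal file-backed queue sketch: the front-end appends length-prefixed
    // records to a spool file; the back-end reads from its own offset and posts
    // them to permanent storage. Assumes one writer and one reader.
    #include <cstdint>
    #include <fstream>
    #include <string>

    class SpoolQueue {
    public:
        explicit SpoolQueue(const std::string& path)
            : out_(path, std::ios::binary | std::ios::app),   // creates the spool file
              in_(path, std::ios::binary) {}

        // Front-end side: append one record.
        void push(const std::string& payload) {
            uint32_t len = static_cast<uint32_t>(payload.size());
            out_.write(reinterpret_cast<const char*>(&len), sizeof(len));
            out_.write(payload.data(), len);
            out_.flush();               // flush to the OS; fsync/FlushFileBuffers for real durability
        }

        // Back-end side: read the next record, if any; a partially written
        // record is simply retried on the next call.
        bool pop(std::string& payload) {
            in_.clear();                // clear a previous EOF condition
            in_.seekg(read_offset_);
            uint32_t len = 0;
            if (!in_.read(reinterpret_cast<char*>(&len), sizeof(len))) return false;
            payload.resize(len);
            if (!in_.read(&payload[0], len)) return false;
            read_offset_ = in_.tellg();
            return true;
        }

    private:
        std::ofstream out_;
        std::ifstream in_;
        std::streamoff read_offset_ = 0;
    };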

Ramdisk database (4, Informative)

Glonoinha (587375) | more than 8 years ago | (#12529326)

Here's a thought - just use a hard-RAM based database.
Either make a big ramdisk and put your database out there (see my Journal from a few months back, ramdisk throughput is pretty damn fast from the local machine, given certain constraints, and random access writing is hella fast), or use a database that runs entirely in memory (think Derby, aka Cloudscape that comes with WebSphere Application Developer.)

When you've got your data, save it out to the hard drive.

Granted, it helps to have a box with a ton of memory in it, but they're out there now, and almost affordable. If you're collecting more than 4G of data in one session, well, YMMV - but 4G is a LOT of data; perhaps reconsider your approach.

Re:Ramdisk database (1)

btlzu2 (99039) | more than 8 years ago | (#12529821)

This is VERY insightful and I'd like to hire you. :) This is EXACTLY what well designed SCADA systems [wikipedia.org] do.

Re:Ramdisk database (2, Insightful)

Glonoinha (587375) | more than 8 years ago | (#12531295)

You will find that my imagination and abilities are limited only by my budget. Well, that and, as I am finding, the Sarbanes-Oxley mandates that recently came down from the Productivity Prevention Team, which are quite effective in keeping me from actually getting any work done.

I don't really care what it pays if it has anything to do with real-time systems (guidance or delivery systems a plus), if the R&D budget has enough wiggle room for better hardware (toys) than I have at home, if you promise that I will be able to participate in the production roll-out and be allowed to make the production environment succeed, and especially if there are a few challenges that are categorized as "can't be done."

Apollo 13 didn't get home because a bunch of mediocre guys sat around filling out paperwork requesting permission and setting up a committee to discuss business impact - Apollo 13 got home because a bunch of crack-junkie hardcore engineers decided that failure wasn't an option.

So the stuff you do at work - is it hard? :)

Re:Ramdisk database (1)

calidoscope (312571) | more than 8 years ago | (#12531352)

If you are collecting more than 4G of data in one session, well YMMV - but 4G is a LOT of data, perhaps consider your approach.

My recent forays to Crucial show 4 sticks of 2GB reg/ecc PC2700 DDR memory will set one back a bit over $3k. For 8 to 16GB of data, the most economical route would be a dual Opteron box, things start getting expensive above 16GB.

Re:Ramdisk database (1)

Glonoinha (587375) | more than 8 years ago | (#12531586)

You could pick up a Dell PowerEdge 1800 with dual 3GHz (64-bit?) Xeons (2M cache free upgrade) right now with their quad memory upgrade promotion cranking it up to 2G, for somewhere in the neighborhood of $1,700 ($500 of that being the second Xeon CPU) - leaving four slots for more memory. Add in four 2G sticks of the stuff Crucial has for that machine (the 1800 has six slots for memory) and you are looking at 10G of physical memory on a dual 3GHz Xeon machine for just shy of $6k.

That said, as I understand it AMD got their memory bus figured out quite a bit better than Intel, particularly in multi-CPU machines - if it was going to serve as a memory based SMP database engine, I might look into the Opteron platform.

Damn, I just re-read that first sentence.
$1,700 for a dual 64-bit 3.0GHz Xeon machine with 2G of RAM, upgradable to 10G for a complete system cost of under $6k. Gonna be a good Christmas this year, I'm guessing.

Suuuure. (4, Funny)

Seumas (6865) | more than 8 years ago | (#12528708)

Yeah, like it isn't obvious that this guy works for the government's TIA program and is looking for ways to maintain all of the data culled from the thousands of audio and video sensors they have planted around.

Suuuure.

Re:Suuuure. (1)

mabhatter654 (561290) | more than 8 years ago | (#12540212)

And I was just going to suggest he ask "homeland security" for advice... beat me to it!

Of course, we can see how well the Govt's spyware works... of course, he could have a network of "volunteers" allowing him to "monitor" their computing habits... that would be a lot of info too...

Wonderware InSQL (4, Informative)

Dios (83038) | more than 8 years ago | (#12528720)


Check out Wonderware InSQL. We update roughly 50k points every 30 seconds without loading the server much at all. It's a pretty nice product, and it also has some custom extensions to SQL built in for querying the data (e.g. cyclic, resolution, delta storage, etc.).

http://www.wonderware.com/ [wonderware.com]

Of course, you'll need your data to come from an OPC/Suitelink/other supported protocol, but should work nicely for you.

- Joshua

Re:Wonderware InSQL (2, Interesting)

btlzu2 (99039) | more than 8 years ago | (#12529014)

How does archiving work? What is the performance of querying on a large table? (Hundreds of millions of rows) Can you hook into the database with any language/package you desire or proprietary tools only?

Do you actually charge a license fee PER point?

We had a need for a smaller SCADA system in our company and Wonderware could not answer these questions (except for the fee per point, which they actually charge PER POINT). This department is going with a different product.

Sorry, but be very cautious of Wonderware.

Re:Wonderware InSQL (1)

kernelistic (160323) | more than 8 years ago | (#12529212)

We update 50,000 points at the bottom of every minute, archive every 2 minutes, and have SQL tables that are several trillion (yes, trillion) rows long on COTS Dell servers with MSSQL 2000 and a standard middleware approach.

Sounds to me like you're either not throwing the hardware you ought to at this project or you are looking at the wrong software.

SCADA is very versatile and powerful. Are you feeding data in mostly from local or remote RTU's?

Re:Wonderware InSQL (2, Informative)

btlzu2 (99039) | more than 8 years ago | (#12529799)

We stopped at the investigation phase. They couldn't answer simple questions and were going to charge us if we needed to add more points. Unacceptable.

SCADA is very versatile and powerful. Are you feeding data in mostly from local or remote RTU's?

You do understand that SCADA is a general term which describes a type of system, right? A SCADA system could be designed (and has been :) ) that is not versatile and powerful. Sorry to be nitpicky, but I'm just trying to understand what you mean.

Anyway, we work with a much larger SCADA system vendor, which actually has the SCADA market share for our industry. Wonderware would never come close to providing the functionality we'd need in our industry and we do not want to be tied to a Microsoft platform.

Wonderware was a candidate for a smaller sub-system, but we've decided to go with another system that's working out very well--is more open for development purposes and is generally better designed. I wasn't on the smaller project, but I was on the big system project and continue to maintain and develop for it.

SCADA is a fun area to work in for geeks--loads of administration, development, and design opportunities in various technologies including, but not limited to, LANs, WANs, telecommunications, backend/frontend development, database maintenance, etc.

Re:Wonderware InSQL (1)

kernelistic (160323) | more than 8 years ago | (#12530925)

I understand exactly what SCADA is. I was wondering if you are using it for local or remote network control. The extent of my SCADA experience has been interfacing with PLCs in large manufacturing and power generation.

For those looking to find out more about SCADA and/or OPC, you might want to have a look at the SCADA Working Group webpage or primers such as this one [web.cern.ch].

Re:Wonderware InSQL (2, Informative)

Dios (83038) | more than 8 years ago | (#12529986)


InSQL works as an OLE process for SQL Server. You can use pretty much any tool (ODBC/ADO/Excel/DAO/whatever) to query the database. Yes, I realize I mixed libraries/methods/applications in that tool list, but I'm just trying to get across the basic idea.

Yes, per point licensing, I believe we licensed for 60k points, not sure on the cost. This is pretty typical in the SCADA world I believe.

Sample query I'd use to get all data for a specific RTU:
select * from live where tagname like 'StationName%'

There are two tables we typically work with, live and history. Live contains the latest values; history is for historical queries.

As for query times, very respectable. I believe we have about 50k points right now, updated/stored every 30 seconds (actually, it's delta storage, so some discrete points that don't change every 30 seconds would be stored only on change...). So how many rows is that?

1440 minutes per day * 2 samples per minute * 50000 points * 180 days (approx history we have online) = 25,920,000,000 rows.

We have ASP pages that people query the data from. We limit 30-second-resolution data to only 2 days at a time (to help prevent loading down the machines), but a query for any point will typically return in a few seconds.

We are pretty satisfied with the product; it may not fit your needs, but it's been good for us.

Re:Wonderware InSQL (1)

btlzu2 (99039) | more than 8 years ago | (#12530379)

Thank you for the information! It was more helpful than the Sales support we received from Wonderware. :)

Actually, I would refuse to pay a license fee based on the number of points. That is a completely arbitrary way to make more money. The only thing the number of points should affect is disk space and possibly CPU power.

Numerous companies do not charge for a license based on how many points you have, and I find the practice of charging per point reprehensible. It's similar to an ISP charging per packet transmitted. What conceivable extra software engineering work do they need to do if you buy a system and enter 10,000 points as opposed to buying a system with 1,000 points?

The rough query times are about what we achieve on a database an order of magnitude larger.

Of course, Wonderware doesn't meet our needs because we have numerous other requirements including no single-point-of-failure distributed architecture, 100% up time (which we've achieved for 5 years now), and other performance issues.

Re:Wonderware InSQL (1)

gyanesh (805943) | more than 8 years ago | (#12538077)

My experience with Wonderware is that it can be a pain to get data out of if you don't want to use their add-ons; once you move outside their little world into other products, the retrieval rate goes right down. Their help is not up to much... all in all, I'd say go with whatever else you can find/build.

Re:Wonderware InSQL (0)

Anonymous Coward | more than 8 years ago | (#12532869)

I've been working with InSQL for the last couple of months. I'll preface this by saying I have no experience with other systems, but it's been a hellish pain in the ass. The associated applications that are provided with it are even worse (ActiveFactory in particular).

It's possible that it's the implementation rather than the product itself, but it's not been fun.

YMMV.

Don't roll your own (3, Informative)

btlzu2 (99039) | more than 8 years ago | (#12528740)

Unless you really want to do a LOT of work. This sounds very much like a SCADA [wikipedia.org] system. There are vendors of such systems. Most of the realtime databases are designed to stay in a large, proprietary, RAM database which is occasionally dumped to disk for backup purposes.

In order to process so many points realtime, it usually will have to be in RAM for performance reasons.

Cluster it (3, Insightful)

canuck57 (662392) | more than 8 years ago | (#12528750)

I know you're working with Windows, but when I read this I said yes.

I'm guessing someone out there has done something like this before.

Google has a cluster of machines far larger than you need, but their approach was a Linux cluster. Plus, with the amount of writes going on, you're going to want to avoid putting any unneeded burden on the system.

Re:Cluster it (1)

Gopal.V (532678) | more than 8 years ago | (#12529126)

Didn't you read this [slashdot.org]? It talks about the same thing - but it's patented shit (lots of prior art anyway).

Re:Cluster it (1)

HyperChicken (794660) | more than 8 years ago | (#12529633)

Google's GFS is mainly a write-once-read-many system. It doesn't function that well for something with lots of writes.

Just dump it (1)

marat (180984) | more than 8 years ago | (#12528752)

You may want to look at how video streams are composed, but the basic idea is very simple - just dump it all in arrival order and keep track of what you wrote at which offset in some table of contents. Dump tables of contents at regular intervals so you can find them easily. That's it. Just one thing - use offsets relative to the TOC; that way they consume fewer bits each. And align the data - that saves a few bits from the other side as well.

And remember - Keep It Simple, Stupid. Be sure you can read it in a hex editor when trouble comes.
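
As a rough illustration of that layout, here is a C++ writer that dumps records in arrival order and appends a small table of contents every few thousand records; the record framing, TOC interval, and magic marker are all invented for the example.

    // Sketch: append records in arrival order; every N records, append a small
    // table of contents (TOC) listing instrument ids and offsets back to each
    // record, relative to the TOC itself. Error handling omitted; the sketch
    // assumes files under 4 GB (use 64-bit offsets otherwise).
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct TocEntry { uint32_t instrument; uint32_t rel_offset; };

    class DumpWriter {
    public:
        explicit DumpWriter(const char* path) : f_(std::fopen(path, "wb")) {}
        ~DumpWriter() { if (f_) { flush_toc(); std::fclose(f_); } }

        void append(uint32_t instrument, const void* data, uint32_t len) {
            long off = std::ftell(f_);
            pending_.push_back({instrument, static_cast<uint32_t>(off)});
            std::fwrite(&instrument, sizeof(instrument), 1, f_);
            std::fwrite(&len, sizeof(len), 1, f_);
            std::fwrite(data, 1, len, f_);
            if (pending_.size() >= kTocInterval) flush_toc();
        }

    private:
        void flush_toc() {
            if (pending_.empty()) return;
            long toc_off = std::ftell(f_);
            uint32_t magic = 0x544F4321;                   // "TOC!" marker, findable in a hex editor
            uint32_t count = static_cast<uint32_t>(pending_.size());
            for (auto& e : pending_)                       // make offsets relative to the TOC
                e.rel_offset = static_cast<uint32_t>(toc_off) - e.rel_offset;
            std::fwrite(&magic, sizeof(magic), 1, f_);
            std::fwrite(&count, sizeof(count), 1, f_);
            std::fwrite(pending_.data(), sizeof(TocEntry), pending_.size(), f_);
            std::fflush(f_);
            pending_.clear();
        }

        static constexpr std::size_t kTocInterval = 4096;
        std::FILE* f_;
        std::vector<TocEntry> pending_;
    };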

Just use the file system (1)

amorsen (7485) | more than 8 years ago | (#12528761)

Keep a file per device. The OS will cache appropriately. The files will eventually get horribly fragmented, depending on which file system you choose. This should not be too much of a problem, depending on the read access pattern -- and if it is a problem, just be careful about which file system you pick. Reiser4 with automatic repacking would be the perfect candidate, but I haven't followed the development closely or tried the repacking myself.
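
A minimal sketch of the file-per-device idea in C++ (all names invented; with several hundred thousand instruments you would also need to cap the number of open handles, e.g. with an LRU of streams):

    // Sketch: one append-only file per instrument; the OS page cache absorbs
    // the bursts and retrieval for a single instrument is a sequential read.
    #include <cstddef>
    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <unordered_map>

    class PerInstrumentLog {
    public:
        explicit PerInstrumentLog(std::string dir) : dir_(std::move(dir)) {}

        void append(uint32_t instrument, const char* data, std::size_t len) {
            auto& f = files_[instrument];
            if (!f.is_open())
                f.open(dir_ + "/" + std::to_string(instrument) + ".dat",
                       std::ios::binary | std::ios::app);
            f.write(data, static_cast<std::streamsize>(len));
        }

    private:
        std::string dir_;
        std::unordered_map<uint32_t, std::ofstream> files_;   // beware OS handle limits
    };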

Re:Just use the file system (1)

Kinlan (138030) | more than 8 years ago | (#12528925)

And how do you do that on a Windows box?

Re:Just use the file system (1, Funny)

eric17 (53263) | more than 8 years ago | (#12529354)

No problem..almost all Windows boxes have this upgrade option called "Linux". Check the manual...

Re:Just use the file system (1)

ignorant_coward (883188) | more than 8 years ago | (#12529848)


Why do all the people who use UNIX and Linux for these things use UNIX and Linux and not Windows?

Re:Just use the file system (0)

Anonymous Coward | more than 8 years ago | (#12533435)

actually, most of them use OpenVMS.

Re:Just use the file system (1)

TheSHAD0W (258774) | more than 8 years ago | (#12529769)

You can avoid the fragmentation if you pre-allocate space based on what you think you'll need.
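
On Win32 (which the poster is using), one common way to pre-allocate is to push the end-of-file marker out to the expected size before writing; a sketch, assuming you can estimate the per-file size up front.

    // Sketch: pre-extend a file to its expected size so later appends don't
    // fragment the allocation. Win32 API; error handling kept minimal.
    #include <windows.h>

    bool PreallocateFile(const wchar_t* path, LONGLONG bytes) {
        HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, nullptr,
                               OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) return false;

        LARGE_INTEGER size;
        size.QuadPart = bytes;
        bool ok = SetFilePointerEx(h, size, nullptr, FILE_BEGIN) &&
                  SetEndOfFile(h);                 // extends the file to 'bytes'
        CloseHandle(h);
        return ok;
    }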

A commercial RDMS can cut it (4, Informative)

jbplou (732414) | more than 8 years ago | (#12528762)

You can definitely use Oracle to write out 2000 updates per second if your hardware is up to it and your db skills are good.

Re:A commercial RDMS can cut it (1)

marat (180984) | more than 8 years ago | (#12528862)

No, you cannot. Oracle is designed to handle a lot of updates of the same data per second, but we are talking about a completely different task (databases are usually populated via separate batch interfaces, by the way). There are specialized tools for this task as well (IBM had something, but I cannot remember the correct TLA right now), but this is not hard to write yourself, as I outlined in another reply.

Re:A commercial RDMS can cut it (1)

jbplou (732414) | more than 8 years ago | (#12528942)

According to MySQL, there are sites that run with 800 updates/inserts per second: http://dev.mysql.com/doc/mysql/en/innodb-overview.html [mysql.com]

Here is a SQL Server performance test that gets over 9,000 inserts per second:
http://www.sql-server-performance.com/jc_large_data_operations.asp [sql-server...rmance.com]

It took me two minutes to find these two examples. I didn't find an Oracle one, but you do realize that 2000 inserts per second is not that many; OLTP database design is made for this.

Re:A commercial RDMS can cut it (0)

Anonymous Coward | more than 8 years ago | (#12529084)

Yes you can. You are wrong. Oracle can easily update >2000 records/sec... Even random records (not the same ones). I've seen it and done it plenty of times.

This problem is really a question of your storage backend: can it handle 2000 random seeks per second? (Technically you'll need 3-5 seeks for each update, plus 3 writes: undo, redo, data.)

Re:A commercial RDMS can cut it (1)

marat (180984) | more than 8 years ago | (#12529169)

This is pointless. We are talking about efficient hardware management; of course any database can parse 2000 requests per second. Why use the wrong tool and compensate with expensive hardware?

And BTW, why do you think messing with database connections would be easier than doing it manually? I did these things [slashdot.org] in REXX (read: Perl); it takes about a hundred lines in all.

Re:A commercial RDMS can cut it (4, Interesting)

gvc (167165) | more than 8 years ago | (#12529574)

"Can [the storage backend] handle 2000 random seeks per second?"

The short answer is "no."

A 10,000 RPM disk has a period of 6 mSec. That's 3 mSec latency on average for random access (not counting seek time or the fact that read-modify-write will take at least 3 times this long: read, wait one full rotation, write).

So one disk can do, as a generous upper bound, 333 random accesses per second. I'll spare you the details of the Poisson distribution, but if you managed to spread these updates randomly over a disk farm, you'd need about 2000/333*e = 16 independent spindles.

The trick to high throughput is harnessing, and creating, non-randomness. You can do a much better job of this with a purpose-built solution.
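
Spelling out the arithmetic behind those numbers:

    T_{\mathrm{rot}} = \frac{60\ \mathrm{s}}{10000\ \mathrm{rev}} = 6\ \mathrm{ms}, \qquad
    \bar{t}_{\mathrm{latency}} \approx \frac{T_{\mathrm{rot}}}{2} = 3\ \mathrm{ms}
    \;\Rightarrow\; \frac{1}{3\ \mathrm{ms}} \approx 333\ \text{random accesses/s per disk}

    N_{\mathrm{spindles}} \approx \frac{2000}{333} \cdot e \approx 6.0 \times 2.718 \approx 16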

what about reordering requests? (0)

Anonymous Coward | more than 8 years ago | (#12529871)

IMO, reordering is done by both the OS and the SCSI hardware.

Re:what about reordering requests? (1)

gvc (167165) | more than 8 years ago | (#12530226)

Post-hoc reordering won't do it. For a vast database, the probability of accessing adjacent sectors within the lifetime of the cache is vanishingly small.

Re:A commercial RDMS can cut it (1)

Nutria (679911) | more than 8 years ago | (#12535308)

So one disk can do, as a generous upper bound, 333 random accesses per second. I'll spare you the details of the Poisson distribution, but if you managed to spread these updates randomly over a disk farm, you'd need about 2000/333*e = 16 independent spindles.

You seem to be presuming that there's no:
  1. database caching on the host,
  2. intelligent flushing by the RDBMS,
  3. Tagged Command Queueing &
  4. caching at the SCSI or SAN level

Re:A commercial RDMS can cut it (1)

gvc (167165) | more than 8 years ago | (#12535393)

If the accesses are really random, caching will do no good. As you'll note, my computations already assume no seek time, so reordering to shorten seeks won't improve it. The only way caching could help is if it were to accumulate adjacent sectors for writing. There won't be many of those unless the cache is nearly as big as the database.

The whole idea behind caching and any other memory hierarchy is that it takes advantage of locality of reference, which is explicitly precluded by the stipulation in the great-grandparent that the accesses are random.

Re:A commercial RDMS can cut it (1)

Nutria (679911) | more than 8 years ago | (#12540173)

The whole idea behind caching and any other memory hierarchy is that it takes advantage of locality of reference,

Yes.

which is explicitly precluded by the stipulation in the great-grandparent that the accesses are random.

Didn't notice that part. I was thinking more of the Original Asker, but in the context of an RDBMS.

If I'm trying to shove as much data as possible into a table, caching will definitely help. And since the table will have to be indexed, caching may help there, too, depending on the keys in the index.

Re:A commercial RDMS can cut it (1)

zyzko (6739) | more than 8 years ago | (#12528946)

Even MySQL can do this.

I've built a system like this, only the amount of data is smaller. Our system is written in Java and has a MySQL backend. In stress tests it could perform about 1000 updates per second on single-processor x86 hardware. With better hardware and a few optimizations even our system could reach 2000 updates/sec.

-Kari

Re:A commercial RDMS can cut it (0)

Anonymous Coward | more than 8 years ago | (#12529250)

The commercial solution is called Tuxedo [beasys.com]

my two cents worth (0)

Anonymous Coward | more than 8 years ago | (#12528789)

OK, so you've got several hundred thousand instruments? If you're not military, then you're a meteorologist or something similar (if not, give us a hint :) ). This means that you're pulling small amounts of data from many sources, which may or may not change in your designated unit of time.

So why are you not thinking about a really big enterprise-level database? If NASDAQ can do it, you can too.

Going with the flat file/caching solution: if you're handling that many transactions, is a Windows OS/file system truly a viable solution? I'm not bashing MS here; I'm just curious what others think about so "many" disk and cache transactions in, say, 2003 or Longhorn.

Have you tried a relational database? (1)

photon317 (208409) | more than 8 years ago | (#12528797)


With your specs, chances are you will either need a very beefy machine, or a distributed approach spreading the load across many machines, regardless of the software approach. But I wouldn't be surprised if a good RDBMS would outperform a flatfile approach. It is what they're designed for after all.

Re:Have you tried a relational database? (2, Interesting)

LuckyStarr (12445) | more than 8 years ago | (#12528926)

I agree. In fact, SQLite performs quite well on a reasonably sized machine. 3000+ SQL updates on an indexed table should be no problem.

Re:Have you tried a relational database? (0)

Anonymous Coward | more than 8 years ago | (#12529936)

3000+ updates on an indexed table will be murder on performance, as it's going to have to update the index for every update, or in a big ol' batch on commit. You don't want an index on the real-time log database; that's for your warehouse.

Re:Have you tried a relational database? (1)

LuckyStarr (12445) | more than 8 years ago | (#12530391)

I am fully aware of the performance penalty of an index.

What I meant is: SQLite (and presumably other RDBMSs as well) is quite fast, even with an index.
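
For what it's worth, rates like that out of SQLite generally depend on batching many statements into one transaction with a prepared statement; a minimal sketch using the public C API (the table and columns are made up, and return-code checks are omitted):

    // Sketch: batch many inserts into one SQLite transaction with a prepared
    // statement; per-statement commits (and their fsyncs) are what kill
    // throughput otherwise.
    #include <sqlite3.h>
    #include <vector>

    struct Sample { int instrument; double ts; double value; };

    bool write_batch(sqlite3* db, const std::vector<Sample>& batch) {
        sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);
        sqlite3_stmt* stmt = nullptr;
        sqlite3_prepare_v2(db,
            "INSERT INTO samples(instrument, ts, value) VALUES(?1, ?2, ?3)",
            -1, &stmt, nullptr);
        for (const Sample& s : batch) {
            sqlite3_bind_int(stmt, 1, s.instrument);
            sqlite3_bind_double(stmt, 2, s.ts);
            sqlite3_bind_double(stmt, 3, s.value);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
        return sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr) == SQLITE_OK;
    }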

Yes, this sort of thing has been built before (2, Informative)

Andy_R (114137) | more than 8 years ago | (#12528821)

I have a system that can record 32 streams of data 44,100 times per second. It's called a recording studio, and I make music with it.

If your data streams are continuous, and can be represented as audio data, then you are pretty much dealing with a solved problem, and your other problem of selecting from a large number of possible 'instruments' is solved by an audio patchbay.

If this isn't feasible, then a number of solutions might be appropriate (spreading the load over a number of machines/huge ram caches/buffering/looking at the problem and thinking of a less intensive sampling strategy/etc.) but without more information on the sort of data you are collecting, and exactly how quickly you need to access it, it's very hard to be specific.

Proprietary patented stuff - but yeah... (0)

Anonymous Coward | more than 8 years ago | (#12528871)

Posting as AC, so that nobody sues me ..

Where I work, they handle like 300 million users and have data associated with each user. Unlike AOL, which used Sybase to store users (and crawled), these guys use a filesystem-based repository. It's a fast replicated database indexed by only one key - the username. It scales great and works on FreeBSD.

This patent [uspto.gov] and a related patent [uspto.gov] should answer a few questions... (Google's FS is not as good for search scans.)

Re:Proprietary patented stuff - but yeah... (1)

dereference (875531) | more than 8 years ago | (#12529733)

Where I work, they handle like 300 million users [...]

Hmm, where have I heard that number before...? Oh, right, that's just about exactly the current population of the US [census.gov]!

So, you say these are your "users" ?

[...] and have data associated with each user.

Ok, well, I don't think I'm going to sue you, and I really don't care who you work for, but I do think I'm going to go find my tinfoil hat RIGHT NOW...!

Re:Proprietary patented stuff - but yeah... (1)

jbplou (732414) | more than 8 years ago | (#12529892)

Well, he claimed they authenticated users faster than AOL (not much of a speed claim there), but I'm wondering too who authenticates 300 million users, since no company has 300 million employees or customers. At least not until Wal-Mart takes over all business in the world.

300 million users (1)

da5idnetlimit.com (410908) | more than 8 years ago | (#12534528)

Hello, this is your bank calling ...

"MasterCard member banks added an EMV chip to 40% of the 200 million MasterCard"

"In Asia Pacific, Visa has a greater market share than all other payment card brands combined with 59 percent of all card purchases at the point of sale being made using Visa cards. There are currently more than 365 million Visa cards in the region." (2003)

If Visa had 365 million card holders just in Asia Pacific in 2003, I wonder how many they have worldwide nowadays...

Re:Proprietary patented stuff - but yeah... (0)

Anonymous Coward | more than 8 years ago | (#12531481)

Yahoo? FreeBSD and the links were the clues I used to guess...

300 million still seems like a lot, even for Yahoo.

Re:Proprietary patented stuff - but yeah... (1)

GebsBeard (665887) | more than 8 years ago | (#12535272)

Gee, I thought the name on the patent, "Yahoo", would have been enough to give it away.

horizontal scaling is good... (2, Interesting)

anon mouse-cow-aard (443646) | more than 8 years ago | (#12528912)

Sure, optimize single-node performance first, but keep in mind that horizontal scaling is something to look for. Put N machines behind a load balancer; ingest gets scattered among the N machines, and queries go to all of them simultaneously. A Redundant Array of Inexpensive Databases :-)

Linux Virtual Server in front of several instances of your Windows box will do, with some proxying for queries. It's probably cheaper than spending months trying to tweak a single node to reach your scaling target, and it will trivially scale much farther out.

in-memory (1)

zm (257549) | more than 8 years ago | (#12528914)

You will likely need to run this baby all in RAM, with optional persistent storage if needed. If you don't have enough memory, go for a distributed solution: data from devices a, b, c goes to machine 1, data from devices d, e, f to machine 2, etc. The per-device distribution algorithm should consider the amount of data from each device.

The Solution (2, Funny)

cwraig (861625) | more than 8 years ago | (#12528965)

The solution to your problem comes in the form of a little-known software application from a vendor called Microsoft.
The program is called Microsoft Access 97
:P

Re:The Solution (0)

Anonymous Coward | more than 8 years ago | (#12529320)

Wouldn't it be faster and easier just to use the nul device?

Sounds like an automated stock trading app (0)

Anonymous Coward | more than 8 years ago | (#12529088)

Check out Kx or VhaYu

If you were running Linux OR BSD (0)

LP_Tux (845172) | more than 8 years ago | (#12529095)

You could use the XFS file system to get faster read/write speeds. In addition, I'd recommend a special RAID setup: you would want SCSI-320 RAID striping over 4 drives, and in addition you'd want it mirrored over a further 4 drives. You'd need to set up a RAID array to achieve this, but it's well worth it for the performance gains. Make sure your RAID is 8xAGP or PCI-X. PCI is far too slow.

Re:If you were running Linux OR BSD (1)

timigoe (797580) | more than 8 years ago | (#12529231)

All well and good.

RAID will improve disk access performance, but... there's always a but... you might want to take note that AGP is for graphics only; you'll have fun finding an AGP RAID card.

Sequential files are your friend (1)

gvc (167165) | more than 8 years ago | (#12529156)

You didn't specify some key parameters. How big are these updates, and how do they get multiplexed? What kind of retrieval do you want to do in the data?

If your data are already arriving on a single socket, just mark up the data and write it out. Then you can retrieve anything you like with linear search. And you can be reasonably certain that you have captured all the data and will never lose it due to having trusted it to some mysterious DB software.

If linear search isn't good enough, you have to specify the sorts of queries you want. All information from a particular sensor? Information from all sensors at a particular time? Does this information have to be available online, or can you answer your queries in batch? Sort/merge is really efficient if you don't need real-time queries. You can build indexes in real time almost as efficiently, if you know what you want to index. The basic technique is the same, but more complicated to set up - batch up the information to be indexed, and do a series of sort-merges to accumulate the indexes.
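
A bare-bones sketch of that batch-and-sort idea in C++: collect (sensor, file offset) pairs while the raw stream is written, sort each batch, and dump it as a sorted run; the runs can later be combined with an ordinary k-way merge. The struct layout and names are illustrative only.

    // Sketch: build index runs off-line. Each run is a sorted list of
    // (sensor id, byte offset into the raw capture file). Note the struct is
    // written raw, padding and all, which is fine for a private index format.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct IndexEntry {
        uint32_t sensor;
        uint64_t offset;
        bool operator<(const IndexEntry& o) const {
            return sensor != o.sensor ? sensor < o.sensor : offset < o.offset;
        }
    };

    void write_sorted_run(std::vector<IndexEntry>& batch, const char* run_path) {
        std::sort(batch.begin(), batch.end());          // sort by (sensor, offset)
        std::FILE* f = std::fopen(run_path, "wb");
        std::fwrite(batch.data(), sizeof(IndexEntry), batch.size(), f);
        std::fclose(f);
        batch.clear();
    }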

just write the shit as it comes in. (0)

Anonymous Coward | more than 8 years ago | (#12529179)

block it.
interleave it.
write a new timestamp periodically

As for what instruments you are recording and their parameters, use a simple hash table.
The time stamp that corresponds to the introduction or deletion of instruments, or a change in recording parameters, is hashed with the corresponding configuration. This allows 100% utilization of existing file system speed and space for recording. Be careful with the parameter records so you don't lose sync to data that looks like a time stamp. The probability might be once in a million years, but Murphy will have it happen 40 times in your most important hour of recording.

It's sort of rocket science... but more like geophysical data recording in the oil exploration industry, where you might look for examples.

If you need some help, I'm available for systems and algorithm design. I WILL NOT code. $2K/day plus first-class travel and expenses.

I have some 35 years of experience in instrumentation and telemetry.

Data warehousing (0)

Anonymous Coward | more than 8 years ago | (#12529285)

would like to know if: someone has already built something like this; and if not, would someone want to use it if I build it? I'm not sure what other applications there might be.

I'd find someone with data warehousing experience (not the same thing as a standard DBA). I've worked with such people, and 2000 updates a second isn't a big deal. We have no problem doing hourly bursts of millions of records with Oracle on some relatively modest hardware. It will cost you, though...

Did something like this some years ago (2, Insightful)

isj (453011) | more than 8 years ago | (#12529429)

My current company did something like this back in 2001 with real-time rating performance [digiquant.com], which conceptually is much like what you want to do: receive a lot of items and store them in a database, in real time. But you did not mention some of the more important details about the problem:
  • How much processing has to be done per item?
  • How long can you delay committing them to a database?
  • Do the clients wait for an answer? Can you cheat and respond immediately?
  • How many simultaneous clients must you support? 1? 5? 100?
  • What is the hardware budget?

2,000 items/sec means that you must do bulk updates. You cannot flush to disk 2,000 times per second. So your program will have to store the items temporarily in a buffer, which gets flushed by a secondary thread when a timer expires or when the buffer gets full. Use a two-buffer approach so you can still receive while committing to the database.
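
A compact sketch of that two-buffer scheme in C++ (the types, the one-second timer, and the bulk_commit stub are placeholders):

    // Sketch: double-buffered bulk writer. Producers append to 'front_'; the
    // flusher thread swaps the buffers and bulk-commits the swapped-out one,
    // so the receive path never waits on the database or disk.
    #include <chrono>
    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct Item { int instrument; double ts; double value; };

    class DoubleBufferWriter {
    public:
        explicit DoubleBufferWriter(std::size_t max_items)
            : max_items_(max_items), flusher_([this] { run(); }) {}
        ~DoubleBufferWriter() {
            { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
            cv_.notify_all();
            flusher_.join();
        }

        void push(const Item& it) {                 // called from the receive path
            std::lock_guard<std::mutex> lk(m_);
            front_.push_back(it);
            if (front_.size() >= max_items_) cv_.notify_one();
        }

    private:
        void run() {
            std::unique_lock<std::mutex> lk(m_);
            while (!stop_) {
                cv_.wait_for(lk, std::chrono::seconds(1));   // timer or "buffer full"
                front_.swap(back_);
                lk.unlock();
                bulk_commit(back_);                 // slow part runs without the lock
                back_.clear();
                lk.lock();
            }
        }
        void bulk_commit(const std::vector<Item>& items) {
            // Placeholder: in a real system this is one bulk write or one DB
            // transaction covering the whole batch.
            (void)items;
        }

        std::size_t max_items_;
        std::vector<Item> front_, back_;
        std::mutex m_;
        std::condition_variable cv_;
        bool stop_ = false;
        std::thread flusher_;
    };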

Depending on your application it may be beneficial to keep a cache of the most recent items for all instruments.

You also have to consider the disk setup. If you have to store all the items then any multi-disk setup will do. If you actually only store a few items per instrument and update them, then RAID-5 will kill you, because it performs poorly with tiny scattered updates.

Do you have to back up the items? How will you handle backups while your program is running? This affects your choice of flat-file or database implementation.

Yup... (2, Informative)

joto (134244) | more than 8 years ago | (#12529460)

Someone has done this before. It's called a data acquisition system. The basic design for one is even sketched out in one of Grady Booch's books (before he became one of the three amigos).

The design of a data acquisition system will of course differ, depending on how much data it records per sensor, how many sensors there are, how often it records the data, and whether the data is to be available for online or offline processing.

In most of the "hard" cases, you will use a pipelined architecture, where data is received on one or more realtime boxes and buffered for an appropriate (short) period. A second stage occurs when data is collected from these buffers and buffered/reordered/processed to make writing the desired format to a file or DBMS easier. The last stage is, of course, to write it. You might use zero or more computers at each stage, with a fast dedicated network in between. You might even decide to split up some of the stages even further. Depending on how much you care about your data, you may also add redundancy. And make sure it's fault-tolerant; it's generally better to lose some data, as long as it's tagged as missing, than to lose it all. To check this in real time you can also add data monitoring anywhere it makes sense for your system.

In the simpler cases, you simply remove the things you don't need - use a sound card instead of dedicated realtime boxes, drop the redundancy, the monitoring, the dedicated network, etc.

Some commercial off-the-shelf systems will surely do this. But the more advanced systems you still build yourself, either from scratch or by reusing code you find in other similar projects (I'm sure there is scientific code available from people interested in medical science, biology, astrophysics, geophysics, meteorology, etc.).

Most of the "heavy" systems will not run on Windows, or even Intel, due to limitations of that platform for fast I/O. This has obviously changed a lot recently, so it's no longer the stupid choice it was, but don't expect too many projects of this kind to have noticed, as they probably have existed much longer.

Have you considered memory-mapped files? (3, Interesting)

Teancum (67324) | more than 8 years ago | (#12529524)

I did some work on a DVD-Video authoring system that had some incredible file system requirements (obviously, when involving video data and the typical 4 GB data load for a single DVD disc).

The standard file API architecture just didn't hold up, so we (the development team I was working with) had to rewrite some of the file management routines ourselves and work directly with the memory-mapped architecture. This does give you some other advantages beyond speed as well: once you establish the file link and set it in a memory address range, you can treat the data in the file as if it were RAM within your program, having fun with pointers and everything else you can imagine. Copying data to the file is simply a matter of a memory move operation, or copying from one pointer to another.

The thing to remember is that Windows (this is undocumented) won't allow you to open a memory-mapped file that is larger than 1 GB, and under FAT32 file systems (Windows 95/98/ME/and some low-end XP systems) the total of all memory mapped files on the entire operating system must be below 1 GB (this requirement really sucks the breath out of some applications).

Remember that if you are putting pointers into the file directly, that it works better if the pointers are relative offsets rather than direct memory pointers, even though direct memory pointers are in theory possible during a single session run.
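
For reference, the basic Win32 calls involved look roughly like this; a sketch that maps an existing file read/write (the 1 GB-per-view caveat above still applies, and error handling is trimmed).

    // Sketch: map a file into the address space with the Win32 API and treat
    // the mapping as ordinary memory.
    #include <windows.h>

    void* MapWholeFile(const wchar_t* path, HANDLE* out_file, HANDLE* out_mapping) {
        *out_file = CreateFileW(path, GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                                OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (*out_file == INVALID_HANDLE_VALUE) return nullptr;

        *out_mapping = CreateFileMappingW(*out_file, nullptr, PAGE_READWRITE, 0, 0, nullptr);
        if (!*out_mapping) return nullptr;

        // Maps the entire file; for very large files you would map a sliding
        // window with explicit offset/length arguments instead.
        return MapViewOfFile(*out_mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    }

    // Usage (writes through the pointer reach the file via the OS cache):
    //   HANDLE f, m;
    //   char* base = static_cast<char*>(MapWholeFile(L"capture.dat", &f, &m));
    //   if (base) base[0] = 'x';
    //   UnmapViewOfFile(base); CloseHandle(m); CloseHandle(f);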

Re:Have you considered memory-mapped files? (1)

p3d0 (42270) | more than 8 years ago | (#12530080)

Remember that if you are putting pointers into the file directly, that it works better if the pointers are relative offsets rather than direct memory pointers, even though direct memory pointers are in theory possible during a single session run.
Good advice. These are "self-relative pointers". Instead of this:
Foo *Bar::getFoo() { return _fooField; }
...you store an offset (declare _fooField as a ptrdiff_t) and write something like this:
Foo *Bar::getFoo() { return (Foo *)((char *)&_fooField + _fooField); }

Re:Have you considered memory-mapped files? (0)

Anonymous Coward | more than 8 years ago | (#12532040)

The thing to remember is that Windows (this is undocumented) won't allow you to open a memory-mapped file that is larger than 1 GB,

It was documented 4 or 5 years ago (the last time I touched a Windows box). It's curious how file mapping used to be so poorly documented, even on Unix variants.

Re:Have you considered memory-mapped files? (1)

Teancum (67324) | more than 8 years ago | (#12532275)

At the time I had to find out the hard way, from some obscure Microsoft support line at $500 per incident, that this was the case... and even then the tech support people weren't really sure, or didn't understand why that was the case.

I may have been the reason why it got documented in the first place, and it seems like a really silly limitation.

Re:Have you considered memory-mapped files? (0)

Anonymous Coward | more than 8 years ago | (#12534563)

Yes, a silly limitation, and if my memory serves me well, a Windows file mapping could not grow after being opened. I remember an MSDN article about someone using page faults to close/recreate the whole mapping to achieve some 'growable' functionality.
At the time AIX had growable shared memory, but unfortunately with a severe size limit (512 MB?).

Is there a good explanation of the state of file mapping on Linux today?

Re:Have you considered memory-mapped files? (1)

julesh (229690) | more than 8 years ago | (#12536252)

I second the suggestion of using memory-mapped I/O. It allows the system to optimise caching much more effectively than you're likely to be able to.

The thing to remember is that Windows (this is undocumented) won't allow you to open a memory-mapped file that is larger than 1 GB

OK... does it fail at CreateFileMapping or MapViewOfFile? If the latter, you can work with larger files, you'll just need to restrict yourself to a 1Gb window within them.

and under FAT32 file systems (Windows 95/98/ME/and some low-end XP systems) the total of all memory mapped files on the entire operating system must be below 1 GB (this requirement really sucks the breath out of some applications).

Are you sure this is related to the filesystem? My understanding was that this restriction was because Win9x could only share memory in the address range 0xc0000000 - 0xffffffff (which is automatically shared between all running applications!), and memory mapped IO had to take place in the shared memory segment. If so, it shouldn't apply to XP.

Remember that if you are putting pointers into the file directly, that it works better if the pointers are relative offsets rather than direct memory pointers, even though direct memory pointers are in theory possible during a single session run.

If you use the MapViewOfFileEx function, you can specify the location at which you want the file mapped. This may or may not be useful: if you have 3rd party DLLs whose versions may change you can't easily predict where space will be available, and if you're on a Win95 family OS, it has to be available for _all_ applications, and the allowed range of addresses is severely restricted.

Specialized Hardware (2, Informative)

mschaef (31494) | more than 8 years ago | (#12529703)

This may be gross overkill, but there's specialized hardware specifically designed for sustained high-throughput disk storage. A company called Conduant makes specialized disk controllers that use on board microcontrollers to drive arrays of disks. When I last saw them demoed, they could sustain writes of 100MB/sec using direct card to card transfers across the PCI bus. They can configure a data acquisition card to directly store information into a shared buffer on the disk controller across the PCI bus. The disk controller then picks the data up and drives it across ten IDE channels. That was a few years ago, these days it looks like they can sustain 200MB/sec with a controller, and up to 600MB/sec and 6TB of capacity with custom box mounted in a rack.

I'm not so sure what their story is regarding reading or querying. My guess is you lose a lot of bandwidth, but not all. Anyway, it might be worth checking out.

http://www.conduant.com/products/overview.html [conduant.com]

Another thing is that modern computers can have lots of innate capacity themselves. My hunch is that you could do a lot with a couple of modern disks on separate SATA channels and several GB of RAM. Maybe this is only a software problem...

just dump to disk (1, Insightful)

Anonymous Coward | more than 8 years ago | (#12529836)

As others have said, just stream the data to disk with some kind of big RAM buffer in between. Each instrument can go to a separate directory, and each minute or hour of data goes to a separate file. A separate thread indexes or processes the data as needed.

And don't forget the magic word: striping. You should interleave your data across many disks, and the index files should be on separate disks as well.

Do striping+mirroring for data protection. Do the striping at the app level for maximum throughput; do the mirroring at the hardware level.

When you aren't going through layers of crap like an SQL database, you should *fly* like this on modern hardware.

Kdb+ (3, Informative)

RussHart (70708) | more than 8 years ago | (#12530100)

Kdb+ by KX Systems (http://www.kx.com/ [kx.com]) is by far and away the best thing for this. Its main use is to store tick data from financial markets, and it is excellent at this (if expensive).

From how you described your needs, this would probably fit the bill.

Been there, done that (1)

toddbu (748790) | more than 8 years ago | (#12530300)

No time to read the thread, so some of this may have already been covered. I did a similar project where we had to keep track of billions of hits on a web site. The volumes got to be too great to handle using SQL Server inserts. The nature of our data (which is common for data sets this size) is that some loss was acceptable, but only in situations where the servers experienced a problem (power loss, server lockup, etc.); we weren't running a bank. So we'd write stuff to an in-memory queue and have a background thread pick up the data and write it to disk on idle cycles. Every hour we'd start a new disk file, pick up the previous hour's data, and load it into the db. Eventually even this didn't work because our loads were too great, so then our hourly process got a makeover and started doing some of the summarization of the data sets that we needed, and we just dumped the raw data. There were many people who didn't like that idea because we lost the original values, but once we proved that it didn't affect the final values, it was accepted.

The moral of the story is to determine up-front how much of that data you really need.

More info (1)

Halvard (102061) | more than 8 years ago | (#12530602)

You don't mention the type of instruments or data. Perhaps you could store it via syslog on a remote syslog server.

NetCDF or HDF5 (2, Informative)

Salis (52373) | more than 8 years ago | (#12530756)

NetCDF [google.com] and HDF5 [google.com] are optimized binary file formats for storing incredibly large amounts of data and quickly retrieving it.

I'm more familiar with NetCDF (because I use it) so let me tell you some of the things it can do. (HDF5 can also do these things, I'm sure).

With NetCDF, you can store 2+ gigabyte files on a 32-bit machine (it has Large File Support). I've saved 12 gigabyte files with no problems. It supports both sequential and direct access, meaning you can read and write either starting from the beginning of the file or at any point in the middle of the file.

The format is array-based. You define dimensions of arrays and variables consisting of zero, one, or more dimensions. You can also define attributes that are used as metadata, information describing the data inside your variables.

You can read or write slices of your data, including strides and hyperslabs. This allows you to read/write only the data you're interested in and makes disk access much faster.

It's also easy to use with good APIs. They have APIs for C, Fortran95, C++, MATLAB, Python, Perl, Java, and Ruby.

Take a look at it. It might be what you're looking for.

-Howard Salis
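
To illustrate the array/slab model described above, here is a rough sketch against the NetCDF C API that defines an unlimited time dimension and appends one record of readings per step; the dimension sizes, variable names, and file name are invented.

    // Sketch: define an unlimited "time" dimension and append one row of
    // readings per time step with a start/count (hyperslab) write.
    #include <netcdf.h>
    #include <cstddef>
    #include <vector>

    int main() {
        int ncid, time_dim, inst_dim, var_id;
        nc_create("capture.nc", NC_CLOBBER, &ncid);
        nc_def_dim(ncid, "time", NC_UNLIMITED, &time_dim);
        nc_def_dim(ncid, "instrument", 1000, &inst_dim);
        int dims[2] = {time_dim, inst_dim};          // record dimension comes first
        nc_def_var(ncid, "value", NC_DOUBLE, 2, dims, &var_id);
        nc_enddef(ncid);

        std::vector<double> readings(1000, 0.0);     // one row of samples
        for (std::size_t step = 0; step < 10; ++step) {
            std::size_t start[2] = {step, 0};        // hyperslab: one time step,
            std::size_t count[2] = {1, readings.size()};  // all instruments
            nc_put_vara_double(ncid, var_id, start, count, readings.data());
        }
        nc_close(ncid);
        return 0;
    }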

Thousands of instruments? (0)

checkyoulater (246565) | more than 8 years ago | (#12530888)

Man, that must be some cool sounding music if it has thousands of instruments playing at the same time. Care to share the name of this supergroup?

Re:Thousands of instruments? (0)

Anonymous Coward | more than 8 years ago | (#12531535)

Asia

Now for bonus points - how old am I?

Re:Thousands of instruments? (0)

Anonymous Coward | more than 8 years ago | (#12533898)

Now for bonus points - how old am I?

Doesn't matter, they have a new album.

SQLite (1)

shadowpuppy (629329) | more than 8 years ago | (#12532967)

I seem to remember the SQLite homepage saying it could handle a few million inserts in a few seconds. So assuming you mean 2000+ updates a second in total, and not 2000+ per instrument, that's quite a safety margin.

An RDBMS won't cut it? (1, Informative)

Anonymous Coward | more than 8 years ago | (#12534341)

You need to do 2000+ updates a second?

*Many* RDBMS systems can do this without breaking a sweat.

Do some googling on InterBase, for example - one of the success stories for IB is a system that does 150,000 inserts per second - sustained. It's a data capture system that may well be similar to yours.

Oracle can definitely do it - but you'll probably need a good Oracle DBA to tune it up properly.

Informix can definitely do it as well - I don't know about the latest version, never used it, but whatever was current circa 1999 (v5?) could handle your needs as well.

HP-IB and ISAM (3, Informative)

Decker-Mage (782424) | more than 8 years ago | (#12534545)

This is what the Hewlett-Packard Interface Bus (HP-IB) was invented for, and your instruments may already be equipped for it. As for what to do with the data stream from the instruments, you stuff it into an ISAM database. Why anyone would even think of using an RDBMS for this is beyond me. ISAM (Indexed Sequential Access Method) has been around forever and exists to take tons of sequential data and store it to the media of choice. From your description, retrieval is only going to be based on a few criteria anyway (instrument, time), so those indices are perfect in this instance.

On the coding end, there are numerous (hell, hundreds of) commercial and F/OSS ISAM libraries, and books on them, for you to use for the actual storage and retrieval. It may even be included in your existing libraries, given how old the technique is now. I was doing this back in the '80s for the US Navy using a 24-bit, very slow minicomputer, so any normal box should be able to handle it today!

We use these techniques in electronic instrument monitoring, logistical systems, systems engineering - you get the idea. You may want to mosey over to the HP developer web site to see if there is a drop-in solution, as I imagine there is (sorry, haven't looked).

I hope this helps.

MySQL In-Memory Table, memcached, or Prevailer? (0)

Anonymous Coward | more than 8 years ago | (#12534943)

If you want speed, I'd look into one of these.

RDMS Insert Performance (0)

Anonymous Coward | more than 8 years ago | (#12535828)

If you plan on inserting into a database at some point, whether directly or buffered, pay attention to insert performance. There are two lessons I learned. One, the typical ODBC interface creates an implied transaction for each separate insert statement, so group many thousands of inserts into one transaction. The second point is using bulk inserts. ODBC has a mechanism for sending arrays of parameters for an insert statement, so you could create arrays of 2000 parameters and send one bulk instruction to the server, rather than 2000 individual inserts. This makes a huge difference in performance. The problem is that not all ODBC drivers are up to it. I am able to insert many thousands of records in a few seconds on nothing-special hardware. Good luck.
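
A condensed sketch of the parameter-array technique described above, using the raw ODBC C API; the table, columns, and data layout are invented, return-code checking is omitted, and the statement handle is assumed to be allocated on an open connection.

    // Sketch: bind arrays of parameters (column-wise binding) and execute one
    // INSERT for the whole batch instead of one round-trip per row.
    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    void bulk_insert(SQLHSTMT stmt, SQLINTEGER* ids, SQLDOUBLE* values, SQLULEN rows) {
        // Tell the driver how many parameter sets the bound arrays hold.
        SQLSetStmtAttr(stmt, SQL_ATTR_PARAMSET_SIZE, (SQLPOINTER)rows, 0);

        SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                         0, 0, ids, 0, nullptr);
        SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE,
                         0, 0, values, 0, nullptr);

        SQLExecDirect(stmt, (SQLCHAR*)"INSERT INTO samples(id, value) VALUES(?, ?)",
                      SQL_NTS);
    }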

DBM Family: esp GDBM and Berkeley DB (1)

Xife (304688) | more than 8 years ago | (#12537838)

This family of databases is the heart of sendmail, and some SQL engines are built on top (MySQL if memory serves).

The interface is a model of simplicity: pointers to arbitrary-length buffers for keys and data. All you need is a key scheme that provides the post-acquisition access that you require.

Berkeley offers hash and BTree style organization of the keys.

It may use memory-mapped file I/O under the hood and handles all the transfer of multiple buffers.

It provides multiple files or multiple tables in one file, and you can control the cache size.

It can run 2,000 inserts per second on hardware from the mid 90s. (UltraSparc II 450)

Berkeley DB (www.sleepycat.com)

As far as I know it runs on just about everything, including several embedded OSes, Windows, and every variant of Unix.
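
A sketch of what that buffer-in/buffer-out interface looks like with Berkeley DB's C API. The (instrument, timestamp) key layout is just one possible scheme; note that the default BTree comparison is byte-wise, so keys should be encoded big-endian (or a custom comparator set) if numeric range scans are wanted.

    // Sketch: store each reading under an (instrument, timestamp) key.
    #include <db.h>
    #include <cstdint>
    #include <cstring>

    struct Key { uint32_t instrument; uint64_t timestamp; };

    void store(DB* dbp, uint32_t instrument, uint64_t ts, double value) {
        Key k;
        std::memset(&k, 0, sizeof(k));      // zero padding bytes so keys compare deterministically
        k.instrument = instrument;
        k.timestamp = ts;

        DBT key, data;
        std::memset(&key, 0, sizeof(key));
        std::memset(&data, 0, sizeof(data));
        key.data = &k;      key.size = sizeof(k);
        data.data = &value; data.size = sizeof(value);
        dbp->put(dbp, nullptr, &key, &data, 0);   // error check omitted
    }

    // Opening the database (error checks omitted):
    //   DB* dbp;
    //   db_create(&dbp, nullptr, 0);
    //   dbp->open(dbp, nullptr, "capture.db", nullptr, DB_BTREE, DB_CREATE, 0664);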

How do you know? (1)

hotpotato (569630) | more than 8 years ago | (#12537974)

Have you actually tried doing this with a relational database? Which ones?

Based on my (relatively basic) knowledge of how databases work these days, using large in-memory caches and fast commits, I wouldn't be surprised if a good enterprise database could handle this rate of commits.

You should remember that 2000 commits != 2000 random disk accesses!

High energy physics (0)

Anonymous Coward | more than 8 years ago | (#12538411)

Maybe what you're looking for has already been solved by high-energy physicists: the ROOT [root.cern.ch] toolkit is at least supposed to handle very large datasets (I never tried that, though).

This might be able to do it (1)

Bozovision (107228) | more than 8 years ago | (#12539375)

Faircom CTree-Plus [faircom.com] might.

Advantages:
- it's fast and it's not constrained by column length. If you want a table with 16,000 columns, go right ahead.
- it's very portable. Runs on just about every operating system that has more than 100 users.

The disadvantages:
- last time I looked (admittedly about 4 years ago), their SQL integration could have been better.
- it's not a high-level database. To work most effectively with it, you need to know about the way that your data is stored.

I'm sure it's improved a great deal since then.
