
Dumping Lots of Data to Disk in Realtime?

AmiChris asks: "At work I need something that can dump sequential entries for several hundred thousand instruments in realtime. It also needs to be able to retrieve data for a single instrument relatively quickly. A standard relational database won't cut it. It has to keep up with 2000+ updates per second, mostly on a subset of a few hundred instruments active at a given time. I've got some ideas of how I would build such a beast, based on flat files and a system of caching entries in memory. I would like to know if someone has already built something like this, and if not, whether someone would want to use it if I build it. I'm not sure what other applications there might be. I could see recording massive amounts of network traffic or scientific data with such a library. I'm guessing someone out there has done something like this before. I'm currently working with C++ on Windows."
  • 2-stage approach (Score:5, Informative)

    by eagl ( 86459 ) on Saturday May 14, 2005 @09:11AM (#12528705) Journal
    Have you considered a 2-stage approach? Stuff it to disk, and process/index it separately? A fast stream of data would let it all get recorded without loss, and then you could use whatever resources are necessary to index and search without impacting the data dump.

    Cost... Are you going to go for local storage or NAS? Need SCSI and RAID or a less expensive hardware setup? Do you think gigabit ethernet will be sufficient for the transfer from the data dump hardware to the processing/indexing/search machines?

    Sounds like you might want to run a test case using commodity hardware first.
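    Something along these lines - a rough, untested sketch of the two-stage split, with the capture path doing nothing but appending fixed-size records and a separate pass building a per-instrument index afterwards (RawRecord and the file layout are just made up for illustration):

    // Stage 1 appends raw fixed-size records as fast as they arrive;
    // stage 2 runs separately (another thread or process) and builds a
    // per-instrument index of file offsets for later queries.
    #include <cstdio>
    #include <cstdint>
    #include <map>
    #include <vector>

    #pragma pack(push, 1)
    struct RawRecord {
        uint32_t instrumentId;
        uint64_t timestamp;
        double   value;
    };
    #pragma pack(pop)

    // Stage 1: called from the capture path; nothing but a sequential append.
    void dumpRecord(std::FILE* dump, const RawRecord& r) {
        std::fwrite(&r, sizeof r, 1, dump);
    }

    // Stage 2: scan the dump file and remember where each instrument's
    // records live, so later queries can seek straight to them.
    std::map<uint32_t, std::vector<long>> buildIndex(const char* path) {
        std::map<uint32_t, std::vector<long>> index;
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return index;
        RawRecord r;
        long offset = 0;
        while (std::fread(&r, sizeof r, 1, f) == 1) {
            index[r.instrumentId].push_back(offset);
            offset += sizeof r;
        }
        std::fclose(f);
        return index;
    }

    The point is that stage 2 can lag behind, or even run on another machine, without ever slowing down the dump.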
    • Ramdisk database (Score:5, Informative)

      by Glonoinha ( 587375 ) on Saturday May 14, 2005 @11:18AM (#12529326) Journal
      Here's a thought - just use a RAM-based database.
      Either make a big ramdisk and put your database out there (see my Journal from a few months back; ramdisk throughput is pretty damn fast from the local machine, given certain constraints, and random-access writing is hella fast), or use a database that runs entirely in memory (think Derby, aka Cloudscape, which comes with WebSphere Application Developer).

      When you've got your data, save it out to the hard drive.

      Granted it helps to have a box with a ton of memory in it, but they are out there now, almost affordable. If you are collecting more than 4G of data in one session, well YMMV - but 4G is a LOT of data, perhaps consider your approach.
      • This is VERY insightful and I'd like to hire you. :) This is EXACTLY what well designed SCADA systems [wikipedia.org] do.
        • by Glonoinha ( 587375 )
          You will find that my imagination and abilities are only limited by my budget. Well, that and, as I am finding, the Sarbanes/Oxley mandates that recently came down from the Productivity Prevention Team, which are quite effective in keeping me from actually getting any work done.

          I don't really care what it pays if it has anything to do with real-time systems (guidance or delivery systems a plus), if the R&D budget has enough wiggle room for better hardware (toys) than I have at home, if you promise that I will
      • If you are collecting more than 4G of data in one session, well YMMV - but 4G is a LOT of data, perhaps consider your approach.

        My recent forays to Crucial show 4 sticks of 2GB reg/ecc PC2700 DDR memory will set one back a bit over $3k. For 8 to 16GB of data, the most economical route would be a dual Opteron box, things start getting expensive above 16GB.

        • You could pick up a Dell PowerEdge 1800 dual 3GHz (64 bit?) Xeon (2M cache free upgrade) right now with their quad memory upgrade promotion cranking it up to 2G for somewhere in the neighborhood of $1,700 ($500 of that being the second Xeon CPU), leaving four slots for more memory. Add in four 2G sticks of the stuff Crucial has for that machine (the 1800 has six slots for memory) and you are looking at 10G of physical memory on a dual 3GHz Xeon machine for just shy of $6k.

          That said, as I understand it AM
      • 4GB is peanuts. My best friend (same company as me) works on a system that does ~50MB/sec, max rate. That comes to 1GB/20secs. They have a proprietary box to do it, with 4 or 5 processors and a buttload of DSPs (they are storing essentially a set of 24 oscilloscope signals).

        The application is vibration monitoring in industrial machinery. A reasonable "session" in such an environment would be a machine startup or stop - depending on the machine this could take several minutes to hours. For the "hours"

  • Suuuure. (Score:4, Funny)

    by Seumas ( 6865 ) on Saturday May 14, 2005 @09:12AM (#12528708)
    Yeah, like it isn't obvious that this guy works for the government's TIA program and is looking for ways to maintain all of the data culled from the thousands of audio and video sensors they have planted around.

    Suuuure.
    • And I was just going to suggest he ask "homeland security" for advice... beat me to it!

      Of course we can see how well the Govt's spyware works... of course he could have a network of "volunteers" allowing him to "monitor" their computing habits... that would be a lot of info too...

  • Wonderware InSQL (Score:5, Informative)

    by Dios ( 83038 ) on Saturday May 14, 2005 @09:15AM (#12528720) Homepage

    Check out Wonderware InSQL. We update roughly 50k points every 30 seconds without loading the server much at all. Pretty nice product; it also has some custom extensions to SQL built in for querying the data (e.g. cyclic, resolution, delta storage, etc.).

    http://www.wonderware.com/ [wonderware.com]

    Of course, you'll need your data to come from an OPC/Suitelink/other supported protocol, but should work nicely for you.

    - Joshua
    • Re:Wonderware InSQL (Score:3, Interesting)

      by btlzu2 ( 99039 ) *
      How does archiving work? What is the performance of querying on a large table? (Hundreds of millions of rows) Can you hook into the database with any language/package you desire or proprietary tools only?

      Do you actually charge a license fee PER point?

      We had a need for a smaller SCADA system in our company and Wonderware could not answer these questions (except for the fee per point, which they actually charge PER POINT). This department is going with a different product.

      Sorry, but be very cautious of
      • We update 50,000 points at the bottom of every minute, archive every 2 minutes and have SQL tables that are several trillion (yes, trillion) rows long on COTS Dell servers with MSSQL 2000 and a standard middleware approach.

        Sounds to me like you're either not throwing the hardware you ought to at this project or you are looking at the wrong software.

        SCADA is very versatile and powerful. Are you feeding data in mostly from local or remote RTU's?
        • Re:Wonderware InSQL (Score:3, Informative)

          by btlzu2 ( 99039 ) *
          We stopped at the investigation phase. They couldn't answer simple questions and were going to charge us if we needed to add more points. Unacceptable.

          SCADA is very versatile and powerful. Are you feeding data in mostly from local or remote RTU's?

          You do understand that SCADA is a general term which describes a type of system, right? A SCADA system could be designed (and has been) :) that is not versatile and powerful. Sorry to be nitpicky, but I'm just trying to understand what you mean.

          Anyway, we
          • I understand exactly what SCADA is. I was wondering if you are using it for local or remote network control. The extent of my SCADA experience has been interfacing with PLCs in large manufacturing and power generation.

            For those looking to find out more about SCADA and/or OPC, you might want to have a look at the SCADA Working Group webpage or primers such as this one [web.cern.ch].
          • SCADA is cool. I had 2 job leads 2 years ago, one at a local water district, and the other at a network-owned TV station. Unfortunately I didn't get an offer from the water district. I think I would have had much more fun at the water district even though most people would think the TV station would be more fun (I can say it does have some perks). The only thing challenging here is to keep the place running with such a low budget.
      • Re:Wonderware InSQL (Score:3, Informative)

        by Dios ( 83038 )

        InSQL works as an OLE process for SQL Server. You can use pretty much any tool (ODBC/ADO/Excel/DAO/whatever) to query the database. Yes, I realize I mixed libraries/methods/applications in that list, but I'm just trying to get across the basic idea.

        Yes, per point licensing, I believe we licensed for 60k points, not sure on the cost. This is pretty typical in the SCADA world I believe.

        Sample query I'd use to get all data for a specific RTU:
        select * from live where tagname like 'StationName%'

        Two tables us
        • Thank you for the information! It was more helpful than the Sales support we received from Wonderware. :)

          Actually, I would refuse to pay a license per number of points. That is a completely arbitrary way to make more money. The only thing the number of points should affect is disk space and possibly CPU power.

          Numerous companies do not charge for a license based on how many points you have and I find the practice of charging for points reprehensible. Similar to the concept of an ISP charging per packe
          • My experience with Wonderware is that it can be a pain to get data out of if you don't want to use their add-ons; once you move outside their little world into other products, the rate of retrieval goes right down. Their help is not up to much... all in all I'd say go with whatever else you can find/build.
  • Don't roll your own (Score:4, Informative)

    by btlzu2 ( 99039 ) * on Saturday May 14, 2005 @09:21AM (#12528740) Homepage Journal
    Unless you really want to do a LOT of work. This sounds very much like a SCADA [wikipedia.org] system. There are vendors of such systems. Most of the realtime databases are designed to stay in a large, proprietary, RAM database which is occasionally dumped to disk for backup purposes.

    In order to process so many points realtime, it usually will have to be in RAM for performance reasons.
  • Cluster it (Score:4, Insightful)

    by canuck57 ( 662392 ) on Saturday May 14, 2005 @09:24AM (#12528750)

    I know you're working with Windows, but when I read this I said yes.

    I'm guessing someone out there has done something like this before.

    Google has a cluster of machines far larger than you need, but their approach was a Linux cluster. Plus, for the amount of writes going on, you're going to want to avoid putting any unnecessary burdens on the system.

  • You may want to look at how video streams are composed, but the basic idea is very simple - just dump it all in arrival order and keep track of what you wrote at which offset in some table of contents. Dump tables of contents at regular intervals so you can find them easily. That's it. Just one thing - use offsets relative to the TOC; that way they consume fewer bits each. And align the data - that saves a few more bits on the other side.

    And remember - Keep It Simple, Stupid. Be sure you can ree
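    Roughly like this, as a sketch - data blocks appended in arrival order, with a table of contents written out every so often whose entries store offsets relative to the TOC itself (the struct names and the TOC interval are invented for the example):

    // Append-only log: data blocks in arrival order, plus a TOC block
    // every kEntriesPerToc records. TOC entries hold offsets relative to
    // the start of the TOC, so they stay small.
    #include <cstdio>
    #include <cstdint>
    #include <vector>

    struct TocEntry {
        uint32_t instrumentId;
        uint32_t relOffset;   // distance back from the start of this TOC to the data block
        uint32_t length;
    };

    class LogWriter {
    public:
        explicit LogWriter(std::FILE* f) : file_(f) {}

        void append(uint32_t instrumentId, const void* data, uint32_t len) {
            long pos = std::ftell(file_);
            std::fwrite(data, 1, len, file_);
            positions_.push_back(pos);
            entries_.push_back({instrumentId, 0, len});
            if (entries_.size() >= kEntriesPerToc) flushToc();
        }

        void flushToc() {
            if (entries_.empty()) return;
            long tocPos = std::ftell(file_);
            for (size_t i = 0; i < entries_.size(); ++i)
                entries_[i].relOffset = (uint32_t)(tocPos - positions_[i]);  // make it relative
            uint32_t count = (uint32_t)entries_.size();
            std::fwrite(&count, sizeof count, 1, file_);
            std::fwrite(entries_.data(), sizeof(TocEntry), count, file_);
            entries_.clear();
            positions_.clear();
        }

    private:
        static const size_t kEntriesPerToc = 1024;
        std::FILE* file_;
        std::vector<TocEntry> entries_;
        std::vector<long> positions_;   // absolute positions until the TOC is written
    };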
  • Keep a file per device. The OS will cache appropriately. The files will eventually get horribly fragmented, depending on which file system you choose. This should not be too much of a problem, depending on the read access pattern -- and if it is a problem, just be careful about which file system you pick. Reiser4 with automatic repacking would be the perfect candidate, but I haven't followed the development closely or tried the repacking myself.
    • And how do you do that on a Windows box?
    • You can avoid the fragmentation if you pre-allocate space based on what you think you'll need.
    • I tried writing a prototype a while back to see if it could be done like that. The performance was OK with one file per instrument, as far as the program goes. I had two threads. The writer would append all the entries for one instrument at a time to the end of its file.

      When you click near the directory (the parent directory did it) in the file explorer, the whole thing locks up for a few minutes - desktop, task bar, etc. I haven't tried making a tree of subdirectories to avoid this problem. I'm not too sure
      • How about just keeping the file explorer away from the directory? The file explorer tries watching for changes in the directories that you have open; that's probably what messes things up. The tree approach should help, but just remember that it's a workaround for broken applications -- NTFS itself is capable of handling many files in a directory.

        Anyway, the only alternative to using the OS file system is to implement your own file system in user space, or use a database as a file system. If you choose to

  • by jbplou ( 732414 ) on Saturday May 14, 2005 @09:28AM (#12528762)
    You can definitely use Oracle to write out 2000 updates per second if your hardware is up to it and your db skills are good.
    • No, you cannot. Oracle is designed to handle a lot of updates of the same data per second, but we are talking about a completely different task (databases are usually populated via separate batch interfaces, by the way). There are specialized tools for this task as well (IBM had something, but I cannot remember the correct TLA right now), but this is not hard to write yourself, as I outlined in another reply.
      • According to MySQL, there are sites that run with 800 updates/inserts per second: http://dev.mysql.com/doc/mysql/en/innodb-overview.html [mysql.com].

        Here is a SQL Server performance test that gets over 9,000 inserts per second:
        http://www.sql-server-performance.com/jc_large_data_operations.asp [sql-server...rmance.com]

        It took me two minutes to find these two examples. I didn't find an Oracle one, but you do realize that 2,000 inserts per second is not that many; OLTP database design is made for this.
    • Even MySQL can do this.

      I've built a system like this, only the amount of data is smaller. Our system is written in Java and has a MySQL backend. In a stress test it could perform about 1,000 updates per second on single-processor x86 hardware. With better hardware and a few optimizations, even our system could perform 2,000 updates/sec.

      -Kari

  • With your specs, chances are you will either need a very beefy machine, or a distributed approach spreading the load across many machines, regardless of the software approach. But I wouldn't be surprised if a good RDBMS would outperform a flatfile approach. It is what they're designed for after all.
  • by Andy_R ( 114137 ) on Saturday May 14, 2005 @09:40AM (#12528821) Homepage Journal
    I have a system that can record 32 streams of data 44,100 times per second. It's called a recording studio, and I make music with it.

    If your data streams are continuous, and can be represented as audio data, then you are pretty much dealing with a solved problem, and your other problem of selecting from a large number of possible 'instruments' is solved by an audio patchbay.

    If this isn't feasible, then a number of solutions might be appropriate (spreading the load over a number of machines/huge ram caches/buffering/looking at the problem and thinking of a less intensive sampling strategy/etc.) but without more information on the sort of data you are collecting, and exactly how quickly you need to access it, it's very hard to be specific.
  • by anon mouse-cow-aard ( 443646 ) on Saturday May 14, 2005 @09:58AM (#12528912) Journal
    Sure, optimize single node performance first, but keep in mind that horizontal scaling is something to look for. Put N machines behind a load balancer, ingest gets scattered among 'n' machines, queries go to all simultaneously. Redundant Array of Inexpensive Databases :-)

    Linux Virtual Server in front of several instances of your windows box will do, with some proxying stuff for queries. Probably cheaper than spending months trying to tweak single node to get to your scaling target, and will scale trivially much farther out.

  • You will likely need to run this baby all in RAM, with optional persistent storage if needed. If you don't have enough memory, go for a distributed solution: data from devices a, b, c goes to machine 1, data from devices d, e, f to machine 2, etc. The per-device distribution algorithm should consider the amount of data from each device.
    • An entire hour of updates might well fit in RAM! My proposed solution with flat files would take advantage of this. I'm thinking of using linked lists to store the entries for each instrument and having a background thread come round and write out all the entries of an instrument at once.

      This thing is going to run for months at a time. Eventually the stuff has to go to disk.
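      A bare-bones sketch of what I have in mind (vectors standing in for the linked lists, and Entry / the file naming invented just for the example): entries get buffered per instrument, and a background thread comes around and appends each instrument's buffered entries in one go.

      // Per-instrument in-memory buffers, drained by a background thread
      // that appends each instrument's entries to its own file.
      #include <cstdio>
      #include <cstdint>
      #include <map>
      #include <vector>
      #include <mutex>
      #include <thread>
      #include <atomic>
      #include <chrono>
      #include <string>

      struct Entry { uint64_t timestamp; double value; };

      class InstrumentBuffers {
      public:
          void add(uint32_t instrumentId, const Entry& e) {
              std::lock_guard<std::mutex> lock(mutex_);
              buffers_[instrumentId].push_back(e);
          }

          void flushAll() {
              std::map<uint32_t, std::vector<Entry>> snapshot;
              {
                  std::lock_guard<std::mutex> lock(mutex_);
                  snapshot.swap(buffers_);          // grab everything, release the lock fast
              }
              for (auto& kv : snapshot) {
                  std::string path = "inst_" + std::to_string(kv.first) + ".dat";
                  if (std::FILE* f = std::fopen(path.c_str(), "ab")) {
                      std::fwrite(kv.second.data(), sizeof(Entry), kv.second.size(), f);
                      std::fclose(f);
                  }
              }
          }

      private:
          std::mutex mutex_;
          std::map<uint32_t, std::vector<Entry>> buffers_;
      };

      // Background flusher: wakes up once a second and writes out whatever
      // has accumulated since the last pass.
      void flusherLoop(InstrumentBuffers& buf, std::atomic<bool>& stop) {
          while (!stop) {
              std::this_thread::sleep_for(std::chrono::seconds(1));
              buf.flushAll();
          }
      }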
  • by cwraig ( 861625 )
    the solution to your problem comes in the form of a little-known software application from a vendor called Microsoft.
    The program is called Microsoft Access 97
    :P
  • You didn't specify some key parameters. How big are these updates, and how do they get multiplexed? What kind of retrieval do you want to do in the data?

    If your data are already arriving on a single socket, just mark up the data and write it out. Then you can retrieve anything you like with linear search. And you can be reasonably certain that you have captured all the data and will never lose it due to having trusted it to some mysterious DB software.

    If linear search isn't good enough, you have to
  • by isj ( 453011 ) on Saturday May 14, 2005 @11:36AM (#12529429) Homepage
    My current company did something like this back in 2001 with real-time rating performance [digiquant.com], which conceptually is much like what you want to do: receive a lot of items and store them in a database, in real time. But you did not mention some of the more important details about the problem:
    • How much processing has to be done per item?
    • How long can you delay committing them to a database?
    • Do the clients wait for an answer? Can you cheat and respond immediately?
    • How many simultaneous clients must you support? 1? 5? 100?
    • What is the hardware budget?

    2,000 items/sec means that you must do bulk updates. You cannot flush to disk 2,000 times per second. So your program will have to store the items temporarily in a buffer, which gets flushed by a secondary thread when a timer expires or when the buffer gets full. Use a two-buffer approach so you can still receive while committing to the database (see the sketch at the end of this comment).

    Depending on your application, it may be beneficial to keep a cache of the most recent items for all instruments.

    You also have to consider the disk setup. If you have to store all the items, then any multi-disk setup will do. If you actually only store a few items per instrument and update them, then RAID-5 will kill you, because it performs poorly with tiny scattered updates.

    Do you have to back up the items? How will you handle backups while your program is running? This affects your choice of flat-file or database implementation.
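    Here is a rough sketch of the two-buffer idea, with commitBatch() standing in for whatever bulk insert or file write you end up using (untested, modern C++ for brevity):

    // Two-buffer (ping-pong) approach: the receive path fills the front
    // buffer; the flusher thread swaps buffers and commits the back buffer
    // when a timer expires or the buffer grows large.
    #include <vector>
    #include <mutex>
    #include <condition_variable>
    #include <chrono>
    #include <cstdint>
    #include <cstddef>

    struct Item { uint32_t instrumentId; uint64_t timestamp; double value; };

    // Placeholder for the actual bulk write (database commit, file append, ...).
    void commitBatch(const std::vector<Item>& batch) { (void)batch; }

    class DoubleBuffer {
    public:
        void push(const Item& item) {
            std::unique_lock<std::mutex> lock(mutex_);
            front_.push_back(item);
            if (front_.size() >= kFlushThreshold)
                cv_.notify_one();                 // wake the flusher early
        }

        // Run this on the secondary thread.
        void flushLoop() {
            std::vector<Item> back;
            for (;;) {
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    cv_.wait_for(lock, std::chrono::milliseconds(500));
                    back.swap(front_);            // receiving continues into the new front_
                }
                if (!back.empty()) {
                    commitBatch(back);            // the slow part happens outside the lock
                    back.clear();
                }
            }
        }

    private:
        static const std::size_t kFlushThreshold = 4096;
        std::mutex mutex_;
        std::condition_variable cv_;
        std::vector<Item> front_;
    };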

  • Yup... (Score:3, Informative)

    by joto ( 134244 ) on Saturday May 14, 2005 @11:40AM (#12529460)
    Someone has done this before. It's called a data acquisition system. The basic design for one is even sketched out in one of Grady Booch's books (before he became one of the three amigos).

    The design of a data acquisition systems will of course differ, depending on how much data it records per sensor, how many sensors there are, how often to record the data, and if the data is to be available for online or offline processing.

    In most of the "hard" cases, you will use a pipelined architecture, where data is received on one or more realtime boxes and buffered for an appropriate (short) period. A second stage collects data from these buffers and buffers/reorders/processes it to make writing the desired format to a file or DBMS easier. The last stage is, of course, to write it. You might use zero or more computers at each stage, with a fast dedicated network in between. You might even decide to split up some of the stages even further. Depending on how much you care about your data, you may also add redundancy. And make sure it's fault-tolerant: it's generally better to lose some data, as long as it's tagged as missing, than to lose it all. To check this in real time you can also add data monitoring anywhere it makes sense for your system.

    In the simpler cases, you simply remove things that aren't needed, such as using a sound card instead of dedicated realtime boxes, and dropping redundancy, monitoring, the dedicated network, etc...

    Some commercial off-the-shelf systems will surely do this. But the more advanced systems you still build yourself, either from scratch, or by reusing code you find in other similar projects (I'm sure there is some scientific code available from people interested in medical science, biology, astrophysics, geophysics, meteorology, etc...).

    Most of the "heavy" systems will not run on Windows, or even Intel, due to limitations of that platform for fast I/O. This has obviously changed a lot recently, so it's no longer the stupid choice it was, but don't expect too many projects of this kind to have noticed, as they probably have existed much longer.

  • I did some work on a DVD-Video authoring system that had some incredible file system requirements (obviously, when involving video data and the typical 4 GB data load for a single DVD disc).

    The standard file API just didn't hold up, so we (the development team I was working with) had to rewrite some of the file management routines ourselves and work with the memory-mapped architecture directly. This gives you some advantages beyond speed as well: once you establish the file link and set it in a memory address range, you can treat the data in the file as if it were RAM within your program, having fun with pointers and everything else you can imagine. Copying data to the file is simply a matter of a memory move operation, or copying from one pointer to another.

    The thing to remember is that Windows (this is undocumented) won't allow you to open a memory-mapped file that is larger than 1 GB, and under FAT32 file systems (Windows 95/98/ME/and some low-end XP systems) the total of all memory mapped files on the entire operating system must be below 1 GB (this requirement really sucks the breath out of some applications).

    Remember that if you are putting pointers into the file directly, that it works better if the pointers are relative offsets rather than direct memory pointers, even though direct memory pointers are in theory possible during a single session run.
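    For reference, the basic Win32 mapping sequence looks roughly like this (error handling mostly omitted; the file name and mapping size are placeholders, and this sketch doesn't demonstrate the 1 GB behaviour mentioned above, just the API calls):

    // Minimal Win32 memory-mapping sketch: CreateFile, CreateFileMapping,
    // MapViewOfFile, then treat the view like ordinary memory.
    #include <windows.h>
    #include <cstring>

    int main() {
        const DWORD size = 64 * 1024 * 1024;   // 64 MB mapping for the example

        HANDLE file = CreateFileA("dump.bin", GENERIC_READ | GENERIC_WRITE,
                                  0, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        // The file is grown to 'size' implicitly via the mapping size.
        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE, 0, size, NULL);
        if (!mapping) { CloseHandle(file); return 1; }

        void* view = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, size);
        if (view) {
            // The mapped region can now be written like ordinary memory.
            std::memcpy(view, "hello", 5);
            FlushViewOfFile(view, 0);          // optionally force dirty pages out
            UnmapViewOfFile(view);
        }
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }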
    • Remember that if you are putting pointers into the file directly, that it works better if the pointers are relative offsets rather than direct memory pointers, even though direct memory pointers are in theory possible during a single session run.

      Good advice. These are "self-relative pointers". Instead of returning a stored absolute pointer:

      Foo *Bar::getFoo() { return _fooField; }

      ...you store an offset from the field's own address in _fooField and write something like this:

      Foo *Bar::getFoo() { return (Foo*)((char*)&_fooField + (ptrdiff_t)_fooField); }

    • Second the suggestion of using memory-mapped I/O. It allows the system to optimise caching much more effectively than you're likely to be able to.

      The thing to remember is that Windows (this is undocumented) won't allow you to open a memory-mapped file that is larger than 1 GB

      OK... does it fail at CreateFileMapping or MapViewOfFile? If the latter, you can work with larger files; you'll just need to restrict yourself to a 1GB window within them.

      and under FAT32 file systems (Windows 95/98/ME/and some l
      • While the MapViewOfFile function is affected by Windows' problems with memory space, the 1GB limitation is not restricted to just this function.

        It is indeed the "CreateFile" side of what Windows NT deals with that causes the problems. I did experiment with different memory window ranges and various strategies to access the data. None of it seemed to have any effect on this absolute limit, as it appears Windows does some sort of alternate mapping beyond what is formally p
    • Won't fly. The current beast I'm trying to improve has files larger than 5GB, which is larger than any 32-bit memory space can be.

      I'm also not sure I see an advantage. I've considered trying Windows "overlapped I/O", but I'd like to steer clear of everything platform-specific, even if it means using lots of threads.
      • The real advantage of going with memory-mapped files is really speed. By throwing the file into memory-mapped space, you bypass much of the overhead that the operating system (Windows in this case) adds for memory management with the file system. It is still abstracted enough that you don't have to deal with specific hard drive architectures, but it is pretty much right where the operating system deals with the disk data anyway.

        When I did data flow experiments, I got up to a 3x to 5x data thro
        • Actually, having started, I do see a major advantage of the memory mapping: I don't have to worry about multiple threads reading/writing to the same file. Right now I'd have to put a mutex around the file, or do some kind of file sharing between threads. Why? Because you have to seek and then read/write.

          I'm curious about the 5GB files though... I was under the impression that NTFS in general won't allow you to create or open a file > 1GB.
          Nahh, I've had files of ~4GB. FAT32 has a limit around here at

  • Specialized Hardware (Score:2, Informative)

    by mschaef ( 31494 )
    This may be gross overkill, but there's specialized hardware specifically designed for sustained high-throughput disk storage. A company called Conduant makes specialized disk controllers that use on board microcontrollers to drive arrays of disks. When I last saw them demoed, they could sustain writes of 100MB/sec using direct card to card transfers across the PCI bus. They can configure a data acquisition card to directly store information into a shared buffer on the disk controller across the PCI bus. T
  • just dump to disk (Score:1, Insightful)

    by Anonymous Coward
    As others have said, just stream the data to disk with some kind of big RAM buffer in between. Each instrument can go to a separate directory, and each minute or hour of data goes to a separate file. A separate thread indexes or processes the data as needed.

    And don't forget the magic words: striping. You should interleave your data across many disks, and the index files should be on separate disks as well.

    Do striping+mirroring for data protection. Do the striping at the app level for maximum throughput, do th
  • Kdb+ (Score:3, Informative)

    by RussHart ( 70708 ) on Saturday May 14, 2005 @01:28PM (#12530100) Homepage
    Kdb+ by KX Systems (http://www.kx.com/ [kx.com]) is far and away the best thing for this. Its main use is to store tick data from financial markets, and it is excellent at this (if expensive).

    From how you described your needs, this would probably fit the bill.
  • No time to read the thread, so some of this may have already been covered. I did a similar project where we had to keep track of billions of hits on a web site. The volumes got to be too great to handle using SQL Server inserts. The nature of our data (which is common for data sets this size) is that some loss was acceptable, but only in situations where the servers experienced a problem (power loss, server lockup, etc.). We weren't running a bank. So we'd write stuff to an in-memory queue and have a back
    • From another perspective, we used to post inventory transactions to our ERP system at the Band-Aid level, creating 100,000 inventory journals a day. This is a large load for a hospital's ERP and makes financial analysis a headache. I would do a use case to determine the 'real' granularity needed for the data. Remember: users ask for everything; we give them what they need.

  • You don't mention the type of instruments or data. Perhaps you could store it via syslog on a remote syslog server.
  • NetCDF or HDF5 (Score:3, Informative)

    by Salis ( 52373 ) on Saturday May 14, 2005 @03:19PM (#12530756) Journal
    NetCDF [google.com] and HDF5 [google.com] are optimized binary file formats for storing incredibly large amounts of data and quickly retrieving it.

    I'm more familiar with NetCDF (because I use it) so let me tell you some of the things it can do. (HDF5 can also do these things, I'm sure).

    With NetCDF, you can store 2+ gigabyte files on a 32-bit machine (it has Large File Support). I've saved 12 gigabyte files with no problems. It supports both sequential and direct access, meaning you can read and write either starting from the beginning of the file or at any point in the middle of the file.

    The format is array-based. You define dimensions of arrays and variables consisting of zero, one, or more dimensions. You can also define attributes that are used as metadata, information describing the data inside your variables.

    You can read or write slices of your data, including strides and hyperslabs. This allows you to read/write only the data you're interested in and makes disk access much faster.

    It's also easy to use with good APIs. They have APIs for C, Fortran95, C++, MATLAB, Python, Perl, Java, and Ruby.

    Take a look at it. It might be what you're looking for.
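    As a rough idea of what the C API looks like (callable from C++), here is a sketch that defines an unlimited "record" dimension and appends to it one slice at a time - the dimension and variable names are invented for the example:

    // NetCDF sketch: one unlimited "record" dimension plus a per-record
    // value, appended slice by slice (in practice you would batch writes).
    #include <netcdf.h>
    #include <cstddef>

    int main() {
        int ncid, recDim, valVar;
        if (nc_create("instruments.nc", NC_CLOBBER, &ncid) != NC_NOERR) return 1;

        nc_def_dim(ncid, "record", NC_UNLIMITED, &recDim);          // grows as we write
        nc_def_var(ncid, "value", NC_DOUBLE, 1, &recDim, &valVar);
        nc_enddef(ncid);                                            // leave define mode

        for (size_t i = 0; i < 1000; ++i) {
            size_t start[1] = { i };
            size_t count[1] = { 1 };
            double v = i * 0.5;
            nc_put_vara_double(ncid, valVar, start, count, &v);
        }

        nc_close(ncid);
        return 0;
    }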

    -Howard Salis
    • Cool, but these are file format specifications. Are there any engines that work with these which are really fast? Do they cache a bunch of stuff in memory or will that still be my job?
      • The people who develop these formats are used to dealing with large data sets that need to be read and written fast. I've seen terabyte files used as inputs/outputs for scientific computing applications. They've certainly thought about the fastest ways of doing I/O. You can even substitute your own FFIO routines (people using Crays do this).

        You can set the buffer to whatever you want; how buffering is handled really depends on your computing architecture. Normally, the data is kept in memory unti
  • Man, that must be some cool sounding music if it has thousands of instruments playing at the same time. Care to share the name of this supergroup?

  • I seem to remember the SQLite homepage saying it could handle a few million inserts in a few seconds. So assuming you mean 2,000+ updates a second in total, and not 2,000+ per instrument, that's quite a safety margin.
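    For what it's worth, numbers like that usually come from wrapping the inserts in a single transaction with a reused prepared statement - a rough sketch (table and column names invented):

    // Batched SQLite inserts: one prepared statement, reused inside a
    // single transaction, which is where the big insert-rate numbers come from.
    #include <sqlite3.h>

    int main() {
        sqlite3* db = NULL;
        if (sqlite3_open("ticks.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS ticks("
                         "instrument INTEGER, ts INTEGER, value REAL)", 0, 0, 0);

        sqlite3_stmt* stmt = NULL;
        sqlite3_prepare_v2(db, "INSERT INTO ticks VALUES(?,?,?)", -1, &stmt, NULL);

        sqlite3_exec(db, "BEGIN", 0, 0, 0);          // one transaction for the whole batch
        for (int i = 0; i < 100000; ++i) {
            sqlite3_bind_int(stmt, 1, i % 500);      // instrument
            sqlite3_bind_int64(stmt, 2, i);          // timestamp
            sqlite3_bind_double(stmt, 3, i * 0.01);  // value
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }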
  • by Anonymous Coward
    You need to do 2000+ updates a second?

    *Many* RDBMS systems can do this without breaking a sweat.

    Do some googling on Interbase for example - one of the success stories for IB is a system that does 150,000 inserts per second - sustained. It's a data capture system that may well be similar to yours.

    Oracle can definitely do it - but you'll probably need a good Oracle DBA to tune it up properly.

    Informix can definitely do it as well - I don't know about the latest version, never used it, but whatever was curren
  • HP-IB and ISAM (Score:3, Informative)

    by Decker-Mage ( 782424 ) <brian.bartlett@gmail.com> on Sunday May 15, 2005 @05:44AM (#12534545)
    This is what the Hewlett-Packard Interface Bus (HP-IB) was invented for, and your instruments may already be equipped for it. As for what to do with the data stream from the instruments, you stuff it into an ISAM database. Why anyone would even think of using an RDBMS for this is beyond me. ISAM (Indexed Sequential Access Method) has been around forever and exists to take tons of sequential data and store it on the media of choice. From your description, retrieval is only going to be based on a few criteria anyway (instrument, time), so those indices are perfect in this instance.

    On the coding end, there are numerous (hell, hundreds of) commercial and F/OSS ISAM libraries, plus books on them, for you to use for the actual storage and retrieval. It may even be included in your existing libraries, given how old the technique is. I was doing this back in the '80s for the US Navy using a 24-bit, very slow minicomputer, so any normal box should be able to handle it today!

    We use these techniques in electronic instrument monitoring, logistical systems, systems engineering, you get the idea. You may want to mosey over to the HP developer web site to see if there is a drop in solution, as I imagine there is (sorry, haven't looked).

    I hope this helps.

    • Searching freshmeat I actually found some projects with ISAM in their names. I'm not sure they look too promising though.

      Thanks. I now know the name for it. It still looks like I might be better off writing something from scratch. Maybe I can slap it up on SourceForge afterwards.
      • I haven't looked in the C++ libs in quite a while but I would be rather surprised if the functionality were not in an existing library. I would, however, put serious thought into rolling your own. I'd offer to help but it's been far too long since I mucked with either C++ or rolling my own db code (25 years). Sadly, these days it's all SQL, XML, and web services, and that is about as interesting as watching paint dry, or grass grow {sigh}.
  • This family of databases is the heart of sendmail, and some SQL engines are built on top of it (MySQL if memory serves).

    The interface is a model of simplicity: pointers to arbitrary-length buffers for keys and data. All you need is a key scheme that provides the post-acquisition access that you require.

    Berkeley offers hash and BTree style organization of the keys.

    It may use memory-mapped file I/O under the hood and handles all transfer of multiple buffers.

    It provides multiple files or multiple tables in one file.
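    A minimal sketch of what that looks like with the classic Berkeley DB C API (assuming the 4.x-style db_create / DB->open / DB->put calls; the (instrument, timestamp) key layout is just one possibility, and byte order matters if you want BTree range scans to cluster per instrument):

    // Berkeley DB sketch: open a BTree database and store one record under
    // an (instrument, timestamp) key.
    #include <db.h>
    #include <cstring>
    #include <cstdint>

    struct Key { uint32_t instrumentId; uint64_t timestamp; };

    int main() {
        DB* dbp = NULL;
        if (db_create(&dbp, NULL, 0) != 0) return 1;
        if (dbp->open(dbp, NULL, "ticks.db", NULL, DB_BTREE, DB_CREATE, 0664) != 0)
            return 1;

        Key k = { 42, 1116061200ULL };
        double value = 101.25;

        DBT key, data;
        std::memset(&key, 0, sizeof key);
        std::memset(&data, 0, sizeof data);
        key.data = &k;       key.size = sizeof k;
        data.data = &value;  data.size = sizeof value;

        dbp->put(dbp, NULL, &key, &data, 0);   // store one record

        dbp->close(dbp, 0);
        return 0;
    }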
    • Ok, looking at BDB at sleepycat. It looks like you've got one table per file with this thing. There's also the problem that I can't open up our source :-(

      GDBM looks really simple. It also seems to have just one table per file. So all my instruments would have to go in that one. I'm wondering if something that looks so simple really has the performance I need.
  • Have you actually tried doing this with a relational database? Which ones?

    Based on my (relatively basic) knowledge of how databases work these days, using large in-memory caches and fast commits, I wouldn't be surprised if a good enterprise database could handle this rate of commits.

    You should remember that 2000 commits != 2000 random disk accesses!

  • Faircom CTree-Plus [faircom.com] might.

    Advantages:
    - it's fast and it's not constrained by column length. If you want a table with 16,000 columns, go right ahead.
    - it's very portable. Runs on just about every operating system that has more than 100 users.

    The disadvantages:
    - last time I looked (admittedly) about 4 years ago, their SQL integration could have been better.
    - it's not a high-level database. To work most effectively with it, you need to know about the way that your data is stored.

    I'm sure it's improved a grea
  • here [mail-archive.com]. Effects of filesystem/RAM/CPU/SCSI on the results are discussed.
    • Umm, with your example. Is that 120K/s for the first 10s, or will it keep that up for a few months? Is it all in memory or can I have several GB of data?
      • >Is that 120K/s for the first 10s, or will it keep that up for a few months?
        That, I want to know as well

        >Is it all in memory or can I have several GB of data?
        Definitely GB of data stored on the disk:

        All I have now is an Athlon 1.7 with 512 megs of RAM (Debian).
        I don't care that much about fast inserts; more like, I have HUGE quantities of images (PNG + 32-bit timestamp) which I grab and store in an SQLite database (the insert rate could go up to gigabytes of data per hour at 5 images/sec, I need to check

      • This is pretty much my set-up:
        • I run daemons for logging my data into the database
        • I use a web server on the database side (thttpd) with CGIs that let me access the database in certain ways.
        • I have CGIs written in both C and Python
        • Keep it simple: each CGI is self contained, small and does only one thing well.
        • The front-end (written in wxPython) queries the database over the web and displays pretty graphs
        • Replies from the webserver can be compressed/encrypted if need be

        I wanted to access the database with

  • Boy I didn't expect this thread to explode like this while I was gone. Some people asked for more info so I'll just make some points:

    * The database is 5GB right now; after improving the thing's performance, it could be 10 times bigger - 50GB.

    * Yes, some people guessed it. It's financial data. I'm trying to dump all the trades of all stocks and futures in the US and EU. Right now we do a subset, but there's always something missing.

    * Hardware. Yes I can get one or two monster machines for our server farms.
  • Look into data acquisition systems for large experiments - things like the hardware and software used at particle physics labs on their detectors: lots of individual sensors in a huge array that has to be sampled a hell of a lot in one second.

    Another thing to look into is testing for dynamic loads on cars or aircraft. At least for aircraft, they'll put thousands of accelerometers all over the frame to measure the various accelerations.

    Both of those are prime examples of a similar system to yours. And suc
