
Object Prevalence: Get Rid of Your Database?

Hemos posted more than 11 years ago | from the throwing-it-out dept.

Programming 676

A reader writes: "Persistence for object-oriented systems is an incredibly cumbersome task to deal with when building many kinds of applications: mapping objects to tables, XML, or flat files, or using some other non-OO way to represent data, destroys encapsulation completely and is generally slow, both at development time and at runtime. The Object Prevalence concept, developed by the Prevayler team and implemented in Java, C#, Smalltalk, Python, Perl, PHP, Ruby and Delphi, can be a great solution to this mess. The concept is pretty simple: keep all the objects in RAM and serialize the commands that change those objects, optionally saving the whole system to disk every now and then (late at night, for example). This architecture results in query speeds that many people won't believe until they see for themselves: some benchmarks indicate that it's 9,000 times faster than a fully-cached-in-RAM Oracle database, for example. Good thing is: they can see it for themselves. Here's an article about it, in case you want to learn more."
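The command-pattern core of the idea fits in a few lines. Here is a minimal sketch in Python (one of the listed implementation languages); the `Bank`/`Deposit` classes and method names are made up for illustration and are not the actual Prevayler API:

```python
import pickle

class Bank:
    """The 'prevalent system': all business data lives in plain objects in RAM."""
    def __init__(self):
        self.balances = {}

class Deposit:
    """Every change is a serializable command object; the command is what gets logged."""
    def __init__(self, account, amount):
        self.account = account
        self.amount = amount

    def execute_on(self, bank):
        bank.balances[self.account] = bank.balances.get(self.account, 0) + self.amount

class Prevalence:
    """Serialize each command to the log, then apply it to the live objects."""
    def __init__(self, system, log_file):
        self.system = system
        self.log = log_file

    def execute(self, command):
        pickle.dump(command, self.log)   # the change is durable before it is visible
        self.log.flush()
        command.execute_on(self.system)  # queries never touch disk: plain method calls
```

Reads are ordinary object traversal in RAM, which is where the dramatic speed claims come from; writes cost one sequential append to the command log.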


RAM ? (1, Interesting)

mirko (198274) | more than 11 years ago | (#5423482)

If you keep the objects in RAM, won't you risk data loss if a power cut occurs?

One word (1)

PFactor (135319) | more than 11 years ago | (#5423490)


Re:One word (2)

timothy_m_smith (222047) | more than 11 years ago | (#5423502)

What about a hardware failure or an accidental power-off? Depending on how important the application is, you absolutely have to plan for some sort of catastrophic hardware failure.

Re:RAM ? (-1)

Sexual Asspussy (453406) | more than 11 years ago | (#5423492)

so use static RAM, you ball-lipping girly-man

Re:RAM ? (4, Insightful)

bmongar (230600) | more than 11 years ago | (#5423496)

No more than any other database. Perhaps you missed the part where they said they would serialize the commands that change the objects. In this context they are talking about saving the commands to disk.

Re:RAM ? (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#5423499)

Yes, but no more than you would with a normal RDBMS, as it is periodically serialised to disk.

P.S. Your parodical MP3s suck big time.

Re:RAM ? (0, Insightful)

krugdm (322700) | more than 11 years ago | (#5423500)

If you're going to implement this, I'd say you'd better be investing in a good UPS system and some scripts that dump everything to disk when an outage is detected...

Re:RAM ? (-1, Troll)

kjhambrick (111698) | more than 11 years ago | (#5423575)

and don't run on Winders

Re:RAM ? (1)

muyuubyou (621373) | more than 11 years ago | (#5423505)

Of course, if you care about your server's reliability in the slightest, a UPS (Uninterruptible Power Supply) is in order.

And that's regardless of where you store your data.

Re:RAM ? (0)

Anonymous Coward | more than 11 years ago | (#5423508)

As long as the changes to those objects are written to disk, then you need only two pieces of information after a power failure: the class of the object, and the difference that each individual instance makes.

Re:RAM ? (-1, Troll)

PSL (519746) | more than 11 years ago | (#5423511)

No, because the transactions are saved to disk. Power outage? Then load the old DB and apply the transactions logged since it was last saved. Voila.

Re:RAM ? (1) (637314) | more than 11 years ago | (#5423558)

Sounds just like Oracle, but without standard access APIs, redundancy features, etc etc etc..

Re:RAM ? (0)

Anonymous Coward | more than 11 years ago | (#5423596)

...$$$ price tag, molasses-like speed, poor mapping to OO concepts, etc., etc., etc...

Re:RAM ? (0)

Anonymous Coward | more than 11 years ago | (#5423531)

Right, so my database is 40 gigs

and I have 2 gigs of memory on my server.


Re:RAM ? (2, Interesting)

Sgs-Cruz (526085) | more than 11 years ago | (#5423555)

A use for 64-bit computing? Larger RAM spaces? I just picked up 256 MB of RAM for $59 Canadian... the stuff isn't exactly expensive right now...

Re:RAM ? (1, Funny)

Anonymous Coward | more than 11 years ago | (#5423621)

You paid $59 CAD - you got ripped.
256 MB DDR RAM is $45 CAD now.
For everyone else in the world, that's around $5 USD.

Re:RAM ? (0)

Anonymous Coward | more than 11 years ago | (#5423631)

I was able to pick up 256 MB SDRAM for $35 CDN around Nov 2001. RAM IS expensive right now.
Not to mention the limitations on how many slots one has in one's box.

And what does 64-bit computing have to do with how much RAM one needs?

Re:RAM ? (5, Informative)

jmcnally (100849) | more than 11 years ago | (#5423556)

As someone else also posted, applying the transactions that have occurred since the last time the DB was saved to disk avoids this problem. A small company in WA years ago, Raima, had this transaction-log concept implemented nicely to support their network database, dbVista (later called RDM). Basically, a transaction log is started for every sequence of updates. All records and pointers are saved in a transaction file first. If any problems or system abends occurred, the entire sequence would be flushed, avoiding a half-updated sequence of records (for example, an invoice is posted but the customer record is not updated). It worked pretty well. The big problem with the RAM scheme is that for very large databases the capacity of the computer or the time required to save to disk is prohibitive.

Re:RAM ? (4, Interesting)

Lerxst Pratt (618277) | more than 11 years ago | (#5423562)

The client commands are immediately written to a log file for later execution. Even if the power fails, the system can be brought back up and the commands re-executed from the log file. While each command is written to the log, the operation the user invoked is also executed immediately, in parallel, on the live data in RAM. Pretty ingenious, if you ask me!
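The write-ahead-then-apply scheme described above implies a simple recovery procedure: replay the logged commands, in order, on top of the last known state. A sketch in Python, with a made-up `Deposit` command for illustration:

```python
import io
import pickle

class Deposit:
    """Hypothetical logged command; a real system logs its own operation types."""
    def __init__(self, account, amount):
        self.account, self.amount = account, amount

    def execute_on(self, balances):
        balances[self.account] = balances.get(self.account, 0) + self.amount

def replay(log_bytes, state=None):
    """After a crash, rebuild the in-RAM state by re-executing every logged
    command, in original order, against the last snapshot (or an empty system)."""
    state = state if state is not None else {}
    log = io.BytesIO(log_bytes)
    while True:
        try:
            command = pickle.load(log)
        except EOFError:
            break                  # end of the log: state is current again
        command.execute_on(state)
    return state
```

After a restart, the in-memory objects come back exactly as they were when the last command hit the log, which is why a power cut loses at most the command in flight.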

Re:RAM ? (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5423570)

No. You can mirror the RAM to a striped RAID array, and get the best of both worlds. I think the next linux kernel should have expanded mmap support for this sort of thing.

David V

Why not try reading the article? (3, Interesting)

ryan1234 (173313) | more than 11 years ago | (#5423601)

From the article:
Before changes are applied to business objects, each command is serialized and written to a log file (Figure 1). Then, each command is executed immediately. Optionally, in a low-use period, the system can take a snapshot of the business objects, aggregating all the commands applied into one large file to save some reloading time.
From what I've read, Oracle has something similar with their REDO log. If it's good enough for Oracle, it can't be all that bad.
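The snapshot step quoted above is plain whole-system serialization; once the snapshot is safely on disk, the command log can start over. A Python sketch (file handling simplified; a real implementation would write the new snapshot atomically before truncating the log):

```python
import pickle

def take_snapshot(system, snapshot_file, log_file):
    """Serialize the entire prevalent system in one go, then truncate the
    command log: commands older than the snapshot are no longer needed."""
    snapshot_file.seek(0)
    pickle.dump(system, snapshot_file)
    snapshot_file.flush()
    log_file.seek(0)
    log_file.truncate()

def restore(snapshot_file):
    """On startup, reload the last snapshot; any commands logged since
    would then be replayed on top of it."""
    snapshot_file.seek(0)
    return pickle.load(snapshot_file)
```

This is the "low-use period" aggregation the article describes: it only exists to shorten reload time, since the log alone is already enough to rebuild the state.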

Re:RAM ? (4, Insightful)

hrieke (126185) | more than 11 years ago | (#5423636)

Reminds me of something that I heard about a year ago - one of the DB players (I think IBM) built a fully OO DB in C that stored the relations in RAM.
Blazing fast, and easy as hell to fuck up beyond repair - you could do both a read and a write to the same memory area at the same time, or something like that.

This sounds just as bad.
For example, let's say that we're doing a transaction of a few million dollars. In mid-process the power dies and the machine goes dark. Outside of shouting 'redundant this, that and the other', what state would the machine be in when it comes back online, where is the money, and could we back out of and rerun the transaction?


Sure :P (0)

st0rmcold (614019) | more than 11 years ago | (#5423487)

I've always had an idea to store an entire OS in RAM - have 10 gigs or so of it; wouldn't it be fast?

Re:Sure :P (1, Funny)

muyuubyou (621373) | more than 11 years ago | (#5423540)

"10 gigs should be enough for anybody" ~Billy G

Re:Sure :P (0)

st0rmcold (614019) | more than 11 years ago | (#5423579)

Yes: have the OS on its own chunk of RAM and all the applications installed on SCSI drives, running on a quad Xeon. It would be the ultimate PC.

"Dare to dream..."


gigabytes? (5, Insightful)

qoncept (599709) | more than 11 years ago | (#5423491)

At first, I had a problem understanding object oriented methodology because I kept thinking of objects in terms of a database -- they seemed so much alike. But...

Who uses a database small enough to fit in RAM?

Re:gigabytes? (2, Interesting)

REBloomfield (550182) | more than 11 years ago | (#5423513)

I think the idea is that the databases are running on servers such as the SunFire, which has a stupid amount of RAM (somewhere in the terabytes, if I remember correctly)....

Re:gigabytes? (0)

st0rmcold (614019) | more than 11 years ago | (#5423515)

The market is not open for it yet; if RAM becomes the standard, you will see chips holding upwards of 1 TB :P

Re:gigabytes? (1)

a_n_d_e_r_s (136412) | more than 11 years ago | (#5423519)

Just buy a lot of RAM!

I think many small and mid-sized e-commerce vendors would benefit from this.

Re:gigabytes? (1)

Rommel (33210) | more than 11 years ago | (#5423523)

We're not talking about an entire OLTP system that runs a business -- we're talking about the object data used for the code itself. The article suggests a different way of managing the object data instead of using a flat file, XML, or a database.

Re:gigabytes? (5, Insightful)

bmongar (230600) | more than 11 years ago | (#5423534)

Who uses a database small enough to fit in RAM?

Not every solution is for every problem. This isn't for huge data-warehousing systems. My impression is that this is for smaller databases where there are a lot of interactions with fewer objects.

I have also seen object databases used as the data-entry point for huge projects, where the database is then periodically dumped into a large relational database for warehousing and reports.

Re:gigabytes? (2, Insightful)

qoncept (599709) | more than 11 years ago | (#5423583)

Very true. Then again, if the database is that small, you're probably not taking much of a performance hit anyway - unless you never should have been using a database to begin with.

Offtopic, though: I'd love to see a solid-state revolution. With the amounts of RAM and flash memory available these days, I don't see why we couldn't run an OS off one. I'm not generally one to be anxious to jump into new technologies (I used to hate games that used polygons instead of sprites), but I think moving to solid state in an intelligent manner would be the biggest thing that could happen in the industry in the near future. I.e., along with Serial ATA, introduce fast ~2 GB boot drives that run your OS and favorite programs, and store everything else on a conventional magnetic hard drive.

Re:gigabytes? (5, Insightful)

juahonen (544369) | more than 11 years ago | (#5423650)

And that goes for OO as well. Not every database (or collection of data) needs to be accessed in an object-oriented way. Most (or should I say all) data I store in small tables would not benefit from being objects.

And how does this differ from storing non-object-oriented data structures in RAM? You'd still need to implement searches, and how do you search a collection of objects without putting them back on relational lines?

Re:gigabytes? (1)

rendle (152846) | more than 11 years ago | (#5423552)

I've only ever worked on one project where the database size went over a gigabyte, and that was for UtiliCorp (domestic gas supply sub). Of course, whether the smaller ones would really benefit from this kind of technology is open to debate. But not here.

Re:gigabytes? (2, Interesting)

DavidpFitz (136265) | more than 11 years ago | (#5423688)

I've only ever worked on one project where the database size went over a gigabyte, and that was for UtiliCorp (domestic gas supply sub). Of course, whether the smaller ones would really benefit from this kind of technology is open to debate. But not here.

A gigabyte is not a large database. At all. It's tiny! Anything approaching a terabyte and you're going to start wanting serious fault tolerance on it - most likely BCV. RAM is not going to support this. Performance on DBs on the order of a few GB is easy; just index lots of stuff :) -- Performance generally becomes an issue in a database around 500 GB in size, and that is too big to put into RAM. So the performance gain of putting everything in RAM is moot in this case.

Plus, what business is going to sign off on all their data being stored in fragile RAM?

Getting fired for suggesting a production system do this sounds fair!

Re:gigabytes? (2, Funny)

AKnightCowboy (608632) | more than 11 years ago | (#5423572)

Who uses a database small enough to fit in RAM?

The Museum of 20th Century French Military Victories in Paris could make use of this technology on my old 8086 system.

Re:gigabytes? (0, Funny)

Daniel Dvorkin (106857) | more than 11 years ago | (#5423657)

Ever heard of a little dust-up called World War One, dumbass?

Re:gigabytes? (-1, Troll)

Horny Smurf (590916) | more than 11 years ago | (#5423630)

Who uses a database small enough to fit in RAM?

how about a database of slashdot users who have had sex with a woman?

Re:gigabytes? (0)

Anonymous Coward | more than 11 years ago | (#5423664)

Aww, that database could fit on my wristwatch (and it's analogue).

Duh (0)

Anonymous Coward | more than 11 years ago | (#5423493)

This is one of those things where you hit your head up against the wall and say, "Duh, why didn't I think of that?!"

ITS A TRAP! (-1)

ITS A TRAP! (652403) | more than 11 years ago | (#5423495)

First they get rid of the database, next they'll be getting rid of you! ITS A TRAP!

Very large? (2, Interesting)

psychotic_venom (521968) | more than 11 years ago | (#5423503)

What about absolutely monstrous databases? What about huge queries? Or even querying across objects (like the joins we do across tables)? I assume that while this can work, there will have to be some major shifts in thinking in order for it to be accepted. People like their databases. And enterprise-level software isn't going to go out and grab this up--until it does, it probably won't really take off.

Re:Very large? (1)

Chundra (189402) | more than 11 years ago | (#5423615)

I'm no expert on this particular system, but persistent object systems / object-oriented databases aren't exactly new. Anyway, you don't have queries in the RDBMS sense with these things. As for the "enterprise level software" comment, that's untrue. People like their POSes/OODBs too, and these have already "taken off". I doubt OODBs will replace relational databases; they solve different problems. Both have their place.

Re:Very large? (1)

khuber (5664) | more than 11 years ago | (#5423663)

You mean it's not a silver bullet? Hehe.

I think they're only focusing on persisting some object state, not replacing databases. Object persistence in an RDBMS is usually done with BLOBs or O/R mapping (putting the state data into tables). OODBMSes have not been very successful in the enterprise market.

Discussing the enterprise market is another topic, so I will only discuss this from a technology aspect below.

Persisting objects to RAM is not a new idea. Gemstone made a business out of their persistent cache software for Smalltalk and later Java. There are several researchers looking at reliable distributed data structures like Ninja at Berkeley.

The new idea here seems to be the logging mechanism using the command pattern. It's a big performance boost for writing reliably because you don't have to transactionally synchronize the entire object state to disk, only the changes.

You are right that people like their databases and this doesn't deal with querying, large datasets, and other real world issues.


Slashdotted (5, Funny)

Cubeman (530448) | more than 11 years ago | (#5423520)

For a scalability test, it sure fails the Slashdotting Test.

It's about 9000 times slower right now :)

How to improve performance 9000 times (1, Funny)

eet23 (563082) | more than 11 years ago | (#5423587)

Make your database system boring enough that it is not linked to on Slashdot.

Re:Slashdotted (1, Funny)

Anonymous Coward | more than 11 years ago | (#5423589)

Aaaah, the new TPM-/. benchmark for web server transactions...

Obligatory Slashdotted reference (0)

Anonymous Coward | more than 11 years ago | (#5423521)

I hope their site isn't using this technology, because it's already slashdotted. What good is data access 9000 times faster than Oracle when your webserver will die long before then?

Re:Obligatory Slashdotted reference (1)

muyuubyou (621373) | more than 11 years ago | (#5423559)

Yeah, I guess it should be great for non-webserver implementations. For web servers it seems a bit secondary ;)

Swap (1)

mikekloster (648136) | more than 11 years ago | (#5423530)

Doesn't keeping everything in memory just mean keeping everything on the swap partition?


Re:Swap (1)

iMMersE (226214) | more than 11 years ago | (#5423580)

Not if you have a lot of memory ...

Re:Swap (0)

Anonymous Coward | more than 11 years ago | (#5423623)

Not if coded correctly. The swap partition is a worthless holdover from Windows and the days when RAM was VERY expensive. RAM is cheap now. Desktop PCs can have 1 GB of it, easy.

store in RAM? (1)

Interfacer (560564) | more than 11 years ago | (#5423537)

I have worked with databases of 40+ GB of data that are frequently queried.

They are surely not proposing that we buy a server with over 50 gigs of RAM?

I see this working only with small databases, and with small databases you don't have that many performance problems. Or am I missing something here? (I am not much of a DBA expert.)


Re:store in RAM? (0)

Anonymous Coward | more than 11 years ago | (#5423599)

Most small apps need only a small amount of data space. And RAM is cheap now; it shouldn't be too expensive to get 50 GB of it. Servers and clusters should do the trick.

Re:store in RAM? (0)

Anonymous Coward | more than 11 years ago | (#5423692)

Actually you have a point. If you do a cross join on 2 tables of 1,000 records each, you're looking at a resultant table of a million rows. In practical terms that isn't a large amount of data, and in my line of work such joins are rare. But add to that the fact that a working database is always increasing in size, and you're asking for problems if you store in RAM.

Plus, Intel CPUs currently address 4 GB of RAM each, so using RAM would be a no-no. That leaves drives for storage, which would present access issues (can you get solid-state drives?). At this point I'm thinking "let a database handle it", as the complexities for large databases sound unmanageable... why do all this when a modern RDBMS will automatically cache/reuse data effectively?

That said, the method of data access is probably useful for some things I've not thought of...

Neat concept... (2, Interesting) (637314) | more than 11 years ago | (#5423538)

but it doesn't really provide any compelling reasons NOT to use a database. Besides, given that their home page seems to be non-existent, I think it's more of a "neat idea" type thing than a compelling reason for any product/project to drop relational DB support.

You can always have a caching system as the author states, but even then what systems use this? The countless PHP/MySQL sites out there seem to perform just fine. This may be desirable for some very strict real time communications systems, but for just about every other form of app, I don't see it.

What are you going to tell your 3rd party integrators? Drop their XML/ODBC report and surf on over to

Re:Neat concept... (5, Informative)

truthsearch (249536) | more than 11 years ago | (#5423661)

The countless PHP/MySQL sites out there seem to perform just fine.

Object-oriented programming and data persistence is about a lot more than public web sites. Private corporate data warehouses with terabytes of persisted objects squeeze every bit of processing power available. For example, I used to work on Mastercard's Oracle data warehouse. An average of 14 million Mastercard transactions occur per day. That's 14 million new records in one table each day, with reporting needing hundreds of other related tables to look up other information. To get something of that scale to run efficiently for a client app (internal to the company) costs millions of dollars. Object persistence on a large scale is tough to get right and is far from perfected, and there's a lot more going on than public web site development. Every new idea helps. Consider that the article was written for IBM's developerWorks; its readers are mostly corporate developers.

What about existing data ? (4, Interesting)

koh (124962) | more than 11 years ago | (#5423541)

Their solution really seems to rock, and may finally be the OO to DB paradigm everyone was waiting for.

That said, I wonder what their position is on importing existing data. Many projects would only benefit from the solution if existing data (usually object-oriented but saved in a roughly flat database, as the article points out) can be ported seamlessly to the new environment.

My point is, this solution solves a known problem by introducing a new technology; however, this new technology will have to be bent towards the older systems in order to retrieve what was already saved. Same old story: in the database world, existing data is paramount.

Re:What about existing data ? (1)

RevAaron (125240) | more than 11 years ago | (#5423691)

Um, it's called writing a script. A script in the language your company is using that pushes data from the old DB to the new. Use an object-relational mapping module, load in a table (= object) from the old DB, save it in the new. What's the big deal? I did a bunch of this a couple of years ago, moving from DB2 to GemStone/S (an OODB which has been around forever).


Google! (-1, Troll)

arvindn (542080) | more than 11 years ago | (#5423547)

Google does this. They use a bank of 10,000 (!) machines (Linux PCs) which have the entire web in RAM (yes, all 3 billion pages). If they used disks, it would take 8 months to complete a single query. It's the only way they can provide results fast enough.

More information here.

Re:Google! (1)

PaschalNee (451912) | more than 11 years ago | (#5423628)

I don't see anything in the article you reference that says that Google stores the entire web in RAM. From the article:

For the average size of 1,000 words per page, they have to be very careful to use techniques such as storing information in RAM: it would take 8 months to check for that word if everything was on disk.

Specifically, "storing information in RAM" could mean that they store the index in RAM as opposed to the complete pages.
Do you have any other source backing this up?

The ultimate test: Post it on /. (1)

beacher (82033) | more than 11 years ago | (#5423548)

keep all the objects in RAM and serialize the commands that change those objects .... This architecture results in query speeds that many people won't believe until they see for themselves

I can't get the page to load; it might be /.'d already..... And they're right - I don't believe it because I can't see it.


-1 (-1, Troll)

PhysicsGenius (565228) | more than 11 years ago | (#5423550)

Buy an ad.

OOP (2, Interesting)

NitsujTPU (19263) | more than 11 years ago | (#5423553)

Couple things.

1) You COULD use an object-relational database if you wanted to keep an OOD aspect.
2) You COULD load non-object oriented data into RAM with lower overhead.
3) A couple gigs of data in RAM... not really a deployable solution for the enterprise, don't you think?

Other than that, nifty idea and all.

Already slashdotted? (-1)

Anonymous Coward | more than 11 years ago | (#5423561)

Already slashdotted? D'oh!

Data integrity? (1, Interesting)

Anonymous Coward | more than 11 years ago | (#5423563)

One of the key functions of a DBMS is to ensure data integrity. I suspect that if this thing enforced all the types of integrity constraints that an SQL database like Oracle does, it would slow down considerably.

Aside from which, this appears to be a physical implementation. In theory, Oracle should be able to do something similar to get better performance in those cases where the whole DB fits into memory.

What I'd rather see is better abstraction, such as a truly relational database. Currently most RDBMS vendors support only a (very large) subset of the relational operators and constraints that a true RDBMS would have.

Two words... (4, Informative)

Anonymous Coward | more than 11 years ago | (#5423567)

Enterprise JavaBeans.

Here's the definition of an EJB from the [] site.
A component architecture for the development and deployment of object-oriented, distributed, enterprise-level applications. Applications written using the Enterprise JavaBeans architecture are scalable, transactional, and multi-user and secure.

And more specifically, here's the definition of an Entity EJB:
An enterprise bean that represents persistent data maintained in a database. An entity bean can manage its own persistence or it can delegate this function to its container. An entity bean is identified by a primary key. If the container in which an entity bean is hosted crashes, the entity bean, its primary key, and any remote references survive the crash.

Ever looked at object-oriented databases? (5, Informative)

carstenkuckuk (132629) | more than 11 years ago | (#5423569)

Have you looked at object-oriented databases? They give you ACID transactions, and also take care of mapping the data into your main memory so that you as a programmer only have to deal with in-memory objects. The leading OODBs are ObjectStore, Versant and Poet.

Re:Ever looked at object-oriented databases? (1)

Reinout (4282) | more than 11 years ago | (#5423667)

For the open source and python lovers: the same thing is provided by Zope. Object database, in-memory objects from the programmers' point of view, transactions.

They don't advertise it enough as an object database imho, but it's there.


3 issues I see (4, Interesting)

foyle (467523) | more than 11 years ago | (#5423573)

First off, I like the concept, but speaking as a former Oracle DBA, I have several issues:

1) You're limited by how much RAM you have on your server, not how much disk space you have

2) If you're making a lot of data changes and have a crash or power outage, I'd imagine that it can take a while to replay the log to get things back to the most recent point in time (you can have the same problem with Oracle, but your checkpoints would be a lot closer together than "once a day")

3) There are millions of people that already know SQL and can write a decent query with it. How does this help them? Never underestimate the power of SQL.

On the other hand, for projects dealing with small amounts of data, I can see how implementing this would be far easier than integrating with MySQL, PostgreSQL or Oracle.

Interfacing (2, Interesting)

MSBob (307239) | more than 11 years ago | (#5423576)

This may be a great way to snapshot the state of a Java application but how on earth would you query anything out of it with a non-Java/non-OO language?

A SOAP interface could go some way towards accomplishing this, but what about the traditional ACID properties of a DBMS? Durability is obviously guaranteed... Consistency? That would depend on programmers following the practices... Atomicity? Not sure about that one. For simple commands it seems to work, but what about compound commands? If no rollback occurs, how can I assert that I changed both objects, not just one? Isolation? Not sure about this one either.
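On the atomicity question, one plausible answer (my reading, not something the article spells out) is that a multi-object change must be modeled as a single command, so it is logged, applied, and replayed as one unit. A Python sketch with a made-up `Transfer` command:

```python
class Transfer:
    """A compound change expressed as one command: because the debit and the
    credit travel in the same logged object, recovery replays both or neither."""
    def __init__(self, src, dst, amount):
        self.src, self.dst, self.amount = src, dst, amount

    def execute_on(self, balances):
        # Both mutations happen inside one command execution, so a replayed
        # log can never reproduce the debit without its matching credit.
        balances[self.src] -= self.amount
        balances[self.dst] += self.amount
```

This gives atomicity with respect to the log; isolation between concurrent commands is a separate problem, commonly sidestepped by executing commands one at a time.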

Re:Interfacing (1) (637314) | more than 11 years ago | (#5423603)

Excellent point. Why lose all the features of, say, Oracle for this system? Why re-write standard access APIs?

C++ solution (1, Funny)

debrain (29228) | more than 11 years ago | (#5423584)

I noticed the lack of C++ support, so I thought I'd throw my hat in. :)
template<typename O, typename T>
O& operator<<(O& o, const T& t) {
    o.write(reinterpret_cast<const char*>(&t), sizeof t); return o;
}

Re:C++ solution (1)

Frans Faase (648933) | more than 11 years ago | (#5423647)

Serialization is a little more than being able to write objects from RAM to a stream. You also need to implement the reverse, otherwise it is useless. And that is where the above solution goes wrong. You simply lose all pointers between the objects.
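Right - a graph-aware serializer has to do exactly that bookkeeping. For what it's worth, Python's pickle module (like Java's ObjectOutputStream) records the object graph rather than raw memory bytes, so shared references survive the round trip, which a byte-for-byte dump like the C++ snippet above cannot do:

```python
import pickle

class Node:
    """A serializable object that may reference other objects."""
    def __init__(self):
        self.links = []

a, b = Node(), Node()
a.links = [b, b]   # two references to the same object

# The serializer writes b once and records both references to it, so the
# restored graph still has one shared object, not two independent copies.
copy = pickle.loads(pickle.dumps(a))
assert copy.links[0] is copy.links[1]
```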

Looks like journaling filesystem (1)

Reinout (4282) | more than 11 years ago | (#5423586)

It looks very much like a journaling filesystem. That one basically also stores the commands executed in a log file. If you've had a crash with ReiserFS for instance, you can see messages like "replaying log for...." at startup.

Now they're doing the same for in-memory object data structures. Might be a nice idea.

On a different note: the object database behind Zope has perhaps the same net effect. To the programmer, everything is in memory. The object database reads stuff from disk when needed and keeps much-requested things in memory. And it also has a list of transactions which can be replayed or rolled back.

So: it looks nice, but I'm curious about the net results!

Re:Looks like journaling filesystem (0)

Anonymous Coward | more than 11 years ago | (#5423689)

The problem is that this does not actually scale that well. A large database will require large amounts of RAM when using this technique. This may be a good technique for small scale situations, but large data stores become troublesome. But that is not the only problem.

Additionally, the article admits to a lack of replication capabilities. But it gets worse: there are no real back-end data-mining capabilities. At best you can have the system export a file that can be imported into like objects and then write code to extract information, but this is a far cry from an ad hoc SQL query. We still have not hit rock bottom, though: there is a lack of backup capability (wouldn't it suck if you had a bad sector on a disk and your log got corrupted?).

All of these shortcomings can be overcome, however. Since we have the code, we can tweak it here or there. But in the end, we are back to a database. If you want an object-oriented database, you might want something else:

Perl and PHP support is impressive (0)

Anonymous Coward | more than 11 years ago | (#5423604)

Click on the links and marvel at the zero lines of code. They have achieved coding perfection.

Something about this doesn't sit right with me (2, Insightful)

sielwolf (246764) | more than 11 years ago | (#5423605)

I think this would work well for most web-server DB backends as the data isn't changing on the fly that much. But what about even /. where the content of a discussion thread is changing possibly several times a second (with new posts and mods)? I'd think then you'd want to use the strong atomic operators of the DB to pull directly from the tables instead of relying on serial operators to try and refresh.

Since the benchmark page was slashdotted, I might be speaking out of my ass. But I never trust "9000 times faster!" It sounds too much like "2 extra inches to your penis, guaranteed!"

It's not a simple question of speed (4, Insightful)

Ummite (195748) | more than 11 years ago | (#5423608)

The advantage of putting data into a database isn't just speed! Think about sharing data between applications or between many computers, exporting data into another format, or simply running a query to change some values. You don't want to write code that changes data under some specific condition: you'd rather write a single query that any database administrator, or even an SQL newbie, could produce, not just the 2-3 programmers who worked on that code some years ago. You also sometimes need to visualize data, build reports, and sort data, and you don't want to hand-code that either. Most serious databases can also keep data in RAM if you have enough, and can commit/rollback when necessary. So RAM-resident data with serialization in and out is fine, as long as you absolutely need 100% speed, don't need complex queries on your data, and run on only one computer.

Mod parent up. (0)

Anonymous Coward | more than 11 years ago | (#5423655)

- no sql
- no multi-user access

This would need 64 bit addressing (-1, Troll)

SexyKellyOsbourne (606860) | more than 11 years ago | (#5423609)

Since this would require gigs upon gigs of RAM, it would not be practical on a PC: its 32-bit architecture only allows addressing 4GB of RAM, since 2^32 = 4294967296 addressable bytes. 64-bit addressing would be required, and that isn't going to get here any time soon.

Also, though you may not notice it, cosmic rays and other radiation frequently flip bits in your RAM, though most of it is done in unused space. If all the RAM is used up for a database, and it is holding sensitive materials for long periods of time before saving them, chances are good the data could be corrupted. All hell could break loose in some instances, and ECC RAM hardly ever protects the data, either.

Re:This would need 64 bit addressing (1)

REBloomfield (550182) | more than 11 years ago | (#5423635)

"64-bit addressing would be required, and that isn't going to get here any time soon." Uh huh. Okay. So what in the hell do most mainframes use, then? See Sun and IBM for examples...

Re:This would need 64 bit addressing (0)

Anonymous Coward | more than 11 years ago | (#5423681)

But 64-bit addressing is already here, in Suns and the like, just not in consumer stuff.

Blazing fast (4, Funny)

Zayin (91850) | more than 11 years ago | (#5423611)

This architecture results in query speeds that many people won't believe until they see for themselves: some benchmarks point out that it's 9000 times faster than a fully-cached-in-RAM Oracle database, for example. Good thing is: they can see it for themselves.

Yes, I've seen it. The page only took about 30 seconds to load. Does that mean a fully-cached-in-RAM Oracle database would spend 75 hours loading that page...?

no queries (4, Insightful)

The Pim (140414) | more than 11 years ago | (#5423613)

Queries are run against pure Java language objects, giving developers all the flexibility of the Collections API and other APIs, such as the Jakarta Commons Collections and

In other words, "it doesn't have queries." What real project doesn't (eventually) need queries? And even if writing your queries "by hand" in Java is good enough for now, what real project doesn't eventually need indices, transactions, or the other features of a real database system?
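For illustration, here is what a hand-written "query" against plain in-memory objects looks like; the class and function names are invented for the example. The equivalent of a GROUP BY becomes ordinary loop code you have to write and maintain yourself:

```python
# Hypothetical in-memory "table": just a list of objects.
class Order:
    def __init__(self, customer, total):
        self.customer = customer
        self.total = total

orders = [Order("alice", 120.0), Order("bob", 30.0), Order("alice", 75.0)]

# Equivalent of: SELECT customer, SUM(total) FROM orders GROUP BY customer
def totals_by_customer(orders):
    out = {}
    for o in orders:
        out[o.customer] = out.get(o.customer, 0.0) + o.total
    return out
```

Every new ad hoc question means another function like this, which is the commenter's point: there is no query language to fall back on.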

Memory Mapped Files (2, Insightful)

Frans Faase (648933) | more than 11 years ago | (#5423617)

This article made me think about the use of Memory Mapped Files as a means to implement a persistent store in C++. For an example of this, have a look at Suneido [] .

Get best of both worlds... (5, Interesting)

ChrisRijk (1818) | more than 11 years ago | (#5423625)

If you need performance for persistent data, this "new" system doesn't seem much different from what you can do today. Using JDO (Java Data Objects) with a file-system backend would be about identical, though easier to use and with more features.

Of course, you can always write your own persistence layer. I've done this a few times; it's very easy in Java. Map a row in the DB to an object, and cache the object in memory. If you need to fetch that data again, check the cache first. When doing a write, write to the DB and update/flush your cache as necessary.
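The read-through/write-through caching layer described above might be sketched like this (a toy, with a plain dict standing in for the database backend):

```python
class CachingStore:
    """Read-through / write-through cache in front of a backing store.
    `backend` stands in for the database (here: any dict-like object)."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}
        self.hits = 0          # instrumentation, for illustration only
    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        value = self.backend[key]   # cache miss: go to the "database"
        self.cache[key] = value
        return value
    def put(self, key, value):
        self.backend[key] = value   # write to the DB first...
        self.cache[key] = value     # ...then update the cache
```

A real version would add eviction (e.g. the SoftReference trick mentioned below, in Java) and invalidation across processes, which is where the hard work actually lives.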

That's just the basics; what's most optimal depends on how your data is accessed and changed (and also on your programming language and your capability as a programmer). Java has some really nice caching support built in, like SoftReference wrapper objects, and of course threading and shared memory that you can use in production.

I'm currently working on a super optimised threaded message board system. Almost all pages (data fetch/change + HTML generation) complete in about 0.001s.

64 bit processors (1)

Alpha_Nerd (565637) | more than 11 years ago | (#5423626)

And people wonder why we need these... I just hope 64 bits is enough.

Not quite (2, Interesting)

mjhans (55639) | more than 11 years ago | (#5423629)

Likening this to getting rid of your database is very much like comparing the performance of the MySQL of old to Oracle, before MySQL had transactions. (I don't know what hit MySQL took when transactions were added, but you should see what I'm generally getting at...) Point being, it's a fast tool precisely because it's a lightweight tool.

This method looks good for serving large numbers of single-row equality lookups. I didn't see anything in the article, though, about how to support range queries or aggregates, let alone what happens when you need to be more expressive in your queries, like putting a join or two in there.
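Range queries over in-memory objects are doable, but you have to build and maintain the index by hand, which is exactly what a database would do for you. A minimal sketch using Python's standard bisect module (class and method names are illustrative):

```python
import bisect

class PriceIndex:
    """Hand-rolled ordered index: keep prices sorted so a range
    query costs O(log n + k) instead of a full scan of every object."""
    def __init__(self):
        self.prices = []   # kept sorted
        self.items = []    # parallel list, same order as prices
    def add(self, price, item):
        i = bisect.bisect_right(self.prices, price)
        self.prices.insert(i, price)
        self.items.insert(i, item)
    def range(self, lo, hi):
        """All items with lo <= price <= hi, in price order."""
        i = bisect.bisect_left(self.prices, lo)
        j = bisect.bisect_right(self.prices, hi)
        return self.items[i:j]
```

Every updateable query pattern needs its own index kept in sync with the objects, so the bookkeeping a database hides starts reappearing in application code.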

There are a lot of apps out there where this might work well (I saw Google mentioned above, and can think of things like weblogs, etc.). Try doing something like an e-commerce site and it will start to break down, especially when you add "other people bought" features (a la Amazon) or any other cross-referencing queries that generally require joins or other data-massaging functionality (i.e. databases' bread and butter).

Smalltalk-80 does this (1)

TulioSerpio (125657) | more than 11 years ago | (#5423633)

In Smalltalk-80 and its descendants, you have an image (a snapshot of memory in a disk file) and a "change log" recording the changes to methods and globals, and every message you send to an object.

Already slashdotted..Use mirror (-1, Troll)

suds (6610) | more than 11 years ago | (#5423646)

Here is the mirror []

Umm what about multiple servers? (2, Insightful)

jj_johny (626460) | more than 11 years ago | (#5423653)

Reading through the article, it seems to lack a small but important item: multiple systems interacting read/write with the same database. This is not a very robust or scalable way of doing things. I wonder how this stacks up against one of the usual ways of improving performance: one read/write database with lots of read-only replicas.

Poor example of tools (-1)

Anonymous Coward | more than 11 years ago | (#5423666)

This depends on the data domain. No one who is knowledgeable about available tools would willingly use C++, Java, etc. to do business application development. Experienced developers will almost always choose Visual FoxPro for serious database development. Here you get the best of both worlds: a fully object-oriented development environment with unparalleled relational-manipulation abilities. I've never had a problem mapping objects to a relational backend. Users of VB, C++, et al. get to say, "Look, I saved and retrieved some data!" Life's too short to fart around with those tools; use the good stuff.

Tcl (1)

roalt (534265) | more than 11 years ago | (#5423676)

I miss Tcl (or [incr Tcl] for OO)

I don't get it (1)

YeeHaW_Jelte (451855) | more than 11 years ago | (#5423677)

If you only copy the object to disk once a day, then what's the big advantage over copying the object to a database once a day?

All this is saying, as far as I can see, is, "oh look, if we only save the object once a day, we'll be much faster than when we dump it to a database several times per minute". Never mind reliability.