
What Would You Want to See in Database Benchmarks?

Cliff posted more than 8 years ago | from the measurements-that-actually-mean-something dept.


David Lang asks: "With the release of MySQL 5.0, PostgreSQL 8.1, and the flap over Oracle purchasing InnoDB, the age-old question of performance is coming up again. I've got some boxes that were purchased for a data warehouse project that isn't going to be installed for a month or two, and I could probably squeeze in some time to do some benchmarks on the machines. However, the question is: what should be done that's reasonably fair to both MySQL and PostgreSQL? We all know that careful selection of the benchmark can seriously skew the results, and I want to avoid that (in fact, I would consider it close to ideal if the results came out such that each database won some tests). I would also not like to spend time generating the benchmarks only to have the losing side accuse me of being unfair. So, for both MySQL and PostgreSQL advocates, what would you like to see in a series of benchmarks?"

"The hardware I have available is as follows:

  • 2x dual-Opteron, 8G RAM, 2x 144G 15Krpm SCSI
  • 2x dual-Opteron, 8G RAM, 2x 72G 15Krpm SCSI
  • 1x dual-Opteron, 16G RAM, 2x 36G 15Krpm SCSI, 16x 400G 7200rpm SATA
I would prefer to use Debian Sarge as the base install on the systems (with custom-built kernels), but would compile the databases from source rather than using binary packages.

For my own interests, I would like to at least cover the following bases: 32-bit vs. 64-bit vs. 64-bit kernel + 32-bit user space; data warehouse type tests (data >> memory); and web performance type tests (active data << RAM).

What specific benchmarks should be run, and what other things should be tested? Where should I go for assistance on tuning each database, evaluating the benchmark results, and re-tuning them?"


42 comments

Ask the dev teams for both! (3, Insightful)

iamsure (66666) | more than 8 years ago | (#14116531)

Send an open email to the dev teams on both projects, and ask for their opinions on what should be tested. It might take 3-4 rounds of back and forth to settle on a set of reasonable benchmarks and settings, but at least that way both sides are involved from the beginning.

What I'd really like to see... (0)

keesh (202812) | more than 8 years ago | (#14116563)

...is an honest set of benchmarks of Oracle against Postgres. That would be far more interesting. Does anyone happen to have a copy of Oracle that isn't covered by obnoxious "no benchmarking" licence clauses?

Re:What I'd really like to see... (0)

Anonymous Coward | more than 8 years ago | (#14116612)

Of course, the other question: Does anyone live where these agreements aren't legally binding?

Re:What I'd really like to see... (3, Interesting)

dtfinch (661405) | more than 8 years ago | (#14116697)

What you do is:
The person publishing the benchmark does not use Oracle.
The Oracle user running the benchmark remains anonymous.

But there are many ways that Oracle (or any other database software) can be made to perform badly in a benchmark that would be no fault of the software. If someone wants to benchmark against Oracle, Oracle wants to make sure they do it correctly, or else not at all. If they didn't have that clause, Microsoft would have dozens of studies and benchmarks saying that Oracle is slower than SQL Server under certain setups, just like those bullshit VeriTest benchmarks they have against crippled setups of Red Hat, Apache, and Samba.

single-user - long query (1)

Fanro (130986) | more than 8 years ago | (#14116652)

I would be interested in performance for a single user, but one accessing very large tables and/or running very complicated queries.

Most benchmarks only deal with multi-user performance. Most problems I have had to solve dealt with managing large datasets with complicated relationships, but with only one user that has to access them.

Although I strongly suspect that MySQL is not suited to that sort of usage anyway, I am curious how PostgreSQL would compare there with Firebird.

Re:single-user - long query (2, Insightful)

Nutria (679911) | more than 8 years ago | (#14118932)

Although I strongly suspect that MySQL is not suited to that sort of usage anyway.

On the contrary, I bet that MySQL w/ MyISAM would be well suited to that task.

Re:single-user - long query (1)

Fanro (130986) | more than 8 years ago | (#14120075)

All the more reason to test it! :)

BTW, do you know of any benchmarks that deal with this sort of thing?

Don't bother (4, Insightful)

Anonymous Coward | more than 8 years ago | (#14116659)

in fact I would consider it close to ideal if the results came out that each database won in some tests

With an attitude like that, there's no point running benchmarks. The idea is that you run the benchmarks to get an idea of how the databases perform. But it seems you are already rejecting one possible result (that one database performs worse than others in all respects) because you don't consider it "fair".

Well, life isn't fair. I'm sure people worked hard on all the databases, but that doesn't mean they all have value. Sometimes people try hard and fail. And you want to ignore the numbers that tell you this because you think it's fairer that way? Give me a break: you don't want to run a real benchmark; you want to run something that will tell you that what you have already decided upon is the best.

Re:Don't bother (1)

_iris (92554) | more than 8 years ago | (#14118794)

The post above is spot on. In addition to his (her?) reasons, we have too many database benchmarks already. The problem isn't finding them, it's reading through the horrible writing. You should seriously consider changing your goals.

Rather than benchmarking the two databases against each other, measure how performance changes as the settings for each database change, e.g., plot the performance of some staple queries with different memory pool sizes.
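
A minimal sketch of what that might look like on the PostgreSQL side (the table, columns, and query are invented; work_mem is used because it is one of the few memory settings changeable per session, and it is set in KB here):

    -- psql: time the same staple query under different per-session memory settings
    \timing on
    SET work_mem = 4096;    -- 4MB (value is in KB)
    EXPLAIN ANALYZE SELECT customer_id, sum(amount)
      FROM orders GROUP BY customer_id ORDER BY 2 DESC;
    SET work_mem = 65536;   -- 64MB
    EXPLAIN ANALYZE SELECT customer_id, sum(amount)
      FROM orders GROUP BY customer_id ORDER BY 2 DESC;
    -- plot runtime against the setting; settings like shared_buffers
    -- need a server restart instead of SET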

Stop whining, Coward (1)

Sangbin (743373) | more than 8 years ago | (#14120108)

Give me a break: you don't want to run a real benchmark; you want to run something that will tell you that what you have already decided upon is the best.
And your whining helps because...?

He didn't claim to be a true scientist. He didn't claim that this result is something the MySQL or PostgreSQL developers will die over.
It's the Thanksgiving weekend, and one of the readers wants to do something fun with his spare machines.
In fact, he's trying to run a test scenario that I happen to be interested in.

Unless you can go help him on the project itself personally, STFU and go back to your corner.
Nobody needs destructive criticism.

Submitter, thank you for doing this for me. I've been meaning to do the same for quite some time, but I have found neither the time nor the facilities.
Whatever you do, keep up the good work!

Maybe but I disagree (1)

einhverfr (238914) | more than 8 years ago | (#14120774)

With an attitude like that, there's no point running benchmarks. The idea is that you run the benchmarks to get an idea of how the databases perform. But it seems you are already rejecting one possible result (that one database performs worse than others in all respects) because you don't consider it "fair".

Part of the issue with benchmarking is that different types of queries can result in wildly different performance ratios between RDBMS's.

For example:

Try outer-joining a really large table in PostgreSQL against a table with exactly 0 rows that has just been vacuumed and analyzed. This sort of query kills PostgreSQL's performance because the planner assumes a 10-page size for a physically empty table (it does this to avoid really bad plans as the table grows). While this may seem like a corner case, it can be a real problem when you are running third-party software and don't use the entire package (which leaves some tables empty).
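
To make the corner case concrete, a minimal sketch of the problem query (PostgreSQL syntax; the table and column names are invented):

    -- a large table outer-joined against a freshly analyzed empty table
    CREATE TABLE big_table (id int PRIMARY KEY, payload text);
    CREATE TABLE empty_lookup (id int PRIMARY KEY, note text);
    -- ... load millions of rows into big_table, leave empty_lookup empty ...
    VACUUM ANALYZE empty_lookup;
    EXPLAIN ANALYZE
    SELECT b.id, e.note
      FROM big_table b
      LEFT JOIN empty_lookup e ON e.id = b.id;
    -- the planner still credits empty_lookup with ~10 pages of data,
    -- which can steer it into a poor plan for this join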

If you have to run a general battery of tests in cooperation with the developers on both sides, I think it is only fair to seek out areas where every RDBMS chokes. These weak points can then be documented and avoided.

In essence, I think it would be best if every RDBMS lost at least one test badly. It would also be informative if each RDBMS won at least one test, assuming the tests were fairly executed.

Now for tests, I can think of a few of them:

Category I: Strength/Weakness comparison: Run sets of standardized queries through the RDBMS as quickly as possible. See how it scales with concurrency, etc. Helpful if each RDBMS wins at least one test here.

Category II: Have DB experts in each system create a script (or at least a set of queries) to accomplish some complicated task X. Compare these in terms of run times in low-concurrency and high-concurrency environments. These tests would be done on standard data sets. They are the most relevant in terms of real benchmarking, but the least informative regarding actual implementation decisions.

Category III: Seek out weak points in RDBMS's and provide case studies of what causes each one to choke.

Re:Maybe but I disagree (0)

Anonymous Coward | more than 8 years ago | (#14128074)

This is exactly the thinking that I had.

MySQL should win on some tests (even if it's only the count(*) test that Postgres chokes on).

Postgres should win on some tests (even if it's because MySQL doesn't implement some features yet).

For that matter, even SQLite would win some tests (even if only on low-end machine configs that don't provide enough resources to run either MySQL or Postgres).

What's interesting isn't the statement 'DB blah won the tests' (much as marketing and managers love such statements), but rather a spreadsheet that crosses the different tests with the different database configurations (both database engines and hardware), so that techies can map out each database engine's strengths and weaknesses, along with the difference that tuning makes. Once you have such a spreadsheet, you can summarize it by saying that DB A is better under these conditions and DB B is better under those conditions.

If you have a set of benchmarks where one side never wins any of them, then yes, I get very suspicious of the selection of benchmarks. It's really rare for two products to compete where one is absolutely the winner under all conditions (even in the Intel vs. AMD benchmarks, Intel still wins individual tests once in a while :-)

David Lang

Re:Maybe but I disagree (1)

einhverfr (238914) | more than 8 years ago | (#14131248)


MySQL should win on some tests (even if it's only the count(*) test that Postgres chokes on)


This is an invalid test. MySQL only retrieves an estimate rather than the real number. Or at least this should be stated specifically in the results.

This is simple (3, Insightful)

MerlynEmrys67 (583469) | more than 8 years ago | (#14116664)

Take your current application that you need the database for.
Compile your application to use each database.
Now go and compare which database runs fastest on your application.

Anything else just doesn't matter: your application is going to be different from every benchmark, so what you need is to run your application on the database and see what happens.

What I have usually found is that while you can tune the database heavily and get great database benchmarks, most of those gains are ruined by completely brain-dead applications that do very stupid things, wrecking whatever performance the database would give you.

Re:This is simple (1)

ldspartan (14035) | more than 8 years ago | (#14116758)

This is simple only if there's no porting effort required at all. Given the bugginess of MySQL (compared to SQL92) and the differences between RDBMSs in general, this is somewhat unlikely.

Even if the application is fully DB-agnostic, who has their schema lying around in two formats?

Re:This is simple (1)

MerlynEmrys67 (583469) | more than 8 years ago | (#14118038)

Hmmmm, so it sounds to me like there is something else more important than speed: which database I am currently using.
So in other words, speed might not even be important at all, imagine that. So why benchmark, then?

That's easy (0)

Anonymous Coward | more than 8 years ago | (#14116672)

Porn. Lots and lots of porn.

easy, TPC (2, Informative)

ZuggZugg (817322) | more than 8 years ago | (#14117095)

TPC-C, and especially TPC-H for DW benchmarking. Buy the benchmark kit and run it... done. If you're really serious, have it independently audited and submit the results to TPC.org. You could probably wrangle up some sponsors to help foot the bill.

Good luck.

Very large database operations (3, Interesting)

eyeball (17206) | more than 8 years ago | (#14117402)

I would love to see operations on very large databases, say 100 million or 1 billion records (or even more): operations like bulk loading, inserting, querying, and deleting; against indexed and unindexed tables; and reindexing a whole table (*).

(*) Reindexing caused me a ton of grief. I inherited a huge mysql db once that required an emergency reindex. Unfortunately mysql locked the table while it did a full table copy, which took hours.
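
For instance, a rough sketch of the kinds of large-table operations worth timing, in PostgreSQL syntax (the table, column, and file names are invented):

    \timing on
    COPY big_table FROM '/tmp/rows.csv' WITH CSV;            -- bulk load
    CREATE INDEX idx_payload ON big_table (payload);         -- index build
    SELECT count(*) FROM big_table WHERE payload LIKE 'x%';  -- unindexed scan
    REINDEX TABLE big_table;                                 -- full reindex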

Re:Very large database operations (1)

samjam (256347) | more than 8 years ago | (#14117808)

Locking out table access during a reindex is sometimes a shame, but an "emergency reindex" sounds like you shouldn't have been using the table anyway until it was repaired.

Unless queries were just very slow because the indexes were poor.

Sam

Re:Very large database operations (2, Informative)

captainclever (568610) | more than 8 years ago | (#14118307)

Oh yes... MySQL makes a copy of the table for every ALTER TABLE command, I think, even if you just want to drop an index. This sucks royally. Sometimes it's quicker to add and remove indexes when you want to run a few queries on a large table (at least it is with Postgres). Locking the table whilst dropping/creating indexes is a huge pain in the ass, but it won't show up in benchmark results. It also means that if you don't have enough disk space to hold a copy of the table, you can't easily alter it :(
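
The drop-then-recreate pattern described above looks roughly like this (PostgreSQL syntax; the index and table names are invented, and MySQL's DROP INDEX syntax differs slightly):

    DROP INDEX idx_big_val;                       -- get the index out of the way
    -- ... run the bulk load or the handful of big queries ...
    CREATE INDEX idx_big_val ON big_table (val);  -- rebuild it afterwards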

Re:Very large database operations (1)

captainclever (568610) | more than 8 years ago | (#14118314)

In fact, while I'm in rant mode: the only reason we're using MySQL at all within last.fm is because it has pretty kick-ass replication. PostgreSQL is miserably lacking on the replication front :'( Although I'm hoping that with the recent addition of two-phase commit, someone will mod pgpool into a synchronous multi-master replication system.

postgres has several replication options (0)

Anonymous Coward | more than 8 years ago | (#14121075)

Postgres has several replication options: there are ones that work like pgpool (although I don't remember the name at the moment) and others that do asynchronous replication (Slony is the biggest name there).

None of them are part of the core Postgres distribution, but that's a deliberate choice.

Re:postgres has several replication options (1)

captainclever (568610) | more than 8 years ago | (#14121410)

I know... we're thinking about trying Slony; it seems like a lot of work, and failover is a bit of a pain. We are shipping the logs and replicating (recovering) with them on a slave machine at the moment, which isn't great, as you don't get read access at all whilst in recovery mode (there's probably a good reason for this).

Re:postgres has several replication options (1)

Sxooter (29722) | more than 8 years ago | (#14125164)

Two points:

1: Slony is about as hard to set up as it is to compile PHP or Apache from source and then configure them. Honestly, if you're not capable of configuring Slony, you probably shouldn't be administering servers.

There are a bunch of admin scripts that come with it to set up replication and things like that. Considering it's a pretty new piece of code, it's not bad; it's mostly lacking in detailed documentation.

2: I notice your MySQL replication didn't keep you guys up when the db server fried yesterday, so what was the deal? Shouldn't your failover server have just kicked in and all that?

Re:postgres has several replication options (1)

captainclever (568610) | more than 8 years ago | (#14127577)

The motherboard that died was in a machine running PostgreSQL... :( I'm not dissing Slony, just saying it's a shame there's no robust multi-master setup.

Re:postgres has several replication options (1)

Sxooter (29722) | more than 8 years ago | (#14129004)

Now that's funny. Anyway, multi-master is on the way with the next version, supposedly. Until then, definitely look at using pgpool. You can use it for synchronous replication, running it on a pair of HA/failover boxes in front of a pair of pg databases. It works pretty well.

Use the App (1)

ajayrockrock (110281) | more than 8 years ago | (#14117989)

How I would do it:

1) Take a snapshot of the database.
2) Turn on the query log for the server and run it for a day.
3) Install the snapshot on your test servers.
4) Play back the query log and see which one goes the fastest.

There's no reason to complicate it by trying to stress certain functions. I'm looking for real application performance, and I have no control over the queries that the programmers are throwing at the box (unless I see it bogging down the server and I go and hit them over the head).
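
A hedged sketch of steps 2 and 4 on the PostgreSQL side (the file name is invented; MySQL's general query log plays the same role there):

    -- step 2: in postgresql.conf, log every statement for a day:
    --     log_statement = 'all'
    -- step 4: replay the captured statements against the snapshot,
    -- timing the run from psql:
    \timing on
    \i /tmp/captured_queries.sql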

--Ajay

Heavy querying (1)

Apreche (239272) | more than 8 years ago | (#14118240)

At work I recently had a problem where there was significant performance loss because of a flood of many very small and simple queries. I had to replace a bunch of them with one very large, very complicated query to make things run smoothly. Figure out just how many simple SELECT queries you can flood the server with before it starts to choke. Also, make an absolutely enormous query, really really big, and see how many seconds it takes to get the result on each of the setups.
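
As a sketch, the two workload shapes being compared look something like this (generic SQL; the schema is invented):

    -- (a) a flood of tiny queries, one round trip each:
    SELECT name FROM users WHERE id = 1;
    SELECT name FROM users WHERE id = 2;
    -- ... repeated a few thousand times ...

    -- (b) the same data in one larger query, a single round trip:
    SELECT name FROM users WHERE id BETWEEN 1 AND 5000;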

Never use single CPU (1)

Nutria (679911) | more than 8 years ago | (#14118973)

1x dual Opteron

It'll spend too much time task switching. Better to have two CPUs at "less than half" the speed than one really fast CPU.

16G ram,

Could you put the 16G into one of the 2x boxes?

2x 36G 15Krpm SCSI, 16x 400G 7200rpm SATA
data warehouse type tests (data >> memory);

DW does not mean that data .GT. RAM. It doesn't even mean that the data is a whole lot bigger than RAM.

Re:Never use single CPU (1)

allanw (842185) | more than 8 years ago | (#14119863)

1x dual Opteron

It'll spend too much time task switching. Better to have two CPUs at "less than half" the speed than one really fast CPU.

Sounds like he meant dual core.

16G ram,

Could you put the 16G into one of the 2x boxes?

Why not? 4 memory slots of 4GB sticks, per CPU.

you misunderstood the machines (0)

Anonymous Coward | more than 8 years ago | (#14121123)

None of the processors are dual core.
There are 5 machines total I can use.
All of them have two single-core Opterons (ranging from 246 to 252; I'd have to double-check to see which machines have which processors).
16G of RAM in a dual-CPU machine is easy (but not cheap, ~$350/G): 8 sockets with 2G per socket.

If you have a data warehouse with less data than you have RAM, it's a very tiny warehouse.

In my case I'm looking at a month's worth of raw data being ~3TB. These machines are for a proof of concept that will only hold about a month's worth of data (I have another machine to hold the raw data, compressed, for archival purposes).

David Lang

Contact us on the pgsql-performance list (2, Informative)

jeffroe (316456) | more than 8 years ago | (#14120359)

We'd love to see some benchmarks run on this equipment. It's a great chance for us to evaluate and boost PostgreSQL performance in general. Can you contact us directly? You can find a subscription link here: http://archives.postgresql.org/pgsql-performance/ [postgresql.org], as well as the thread regarding your Ask Slashdot question here: http://archives.postgresql.org/pgsql-performance/2005-11/msg00514.php [postgresql.org]

Clarification on hardware (0)

Anonymous Coward | more than 8 years ago | (#14121315)

None of the Opterons are dual core; they are 246 or 252 SMP machines, with 5 machines total available. The one with 16 SATA drives is using 3Ware 9500-series controllers.

As I get information from the Postgres and MySQL folks about the tests, I will make sure to post it here. I would love to see other people with different hardware run the same tests.

David Lang

Performance (1)

loony (37622) | more than 8 years ago | (#14122568)

Who cares about performance? If you want to do something useful, then test reliability... Pull the power on the server. Fail a drive and see if you can rescue any data. Corrupt a sector by overwriting it with 0s and see what the engine does.

For performance you can (almost) always go to a bigger box; if your whole engine goes down and can't recover because of a few bad sectors or insufficient logging or such, you're screwed.

Peter.

It all depends (1)

Sxooter (29722) | more than 8 years ago | (#14125603)

There are all different kinds of workloads for database servers. Is the workload mostly transactional, moving around thousands of tiny buckets of data, all interrelated by constraints, foreign keys, and triggers, and all being done by thousands of users at once? Or is the workload one where you're trundling through hundreds of gigabytes of data to mine for certain critical points hidden in them, and only a few users at a time will be hitting the system?

Are we talking about workgroup size database apps, or enterprise class stuff?

Does it need to run well on small machines, with limited I/O, or does it need to be able to take advantage of very large machines, with tons of CPU, RAM, and hard drives?

I would love to see some comparisons on the standard TPC tests. Several of these have already been implemented by the OSDL folks on top of PostgreSQL, to test both it and the Linux kernel on large machines. I'd love to see them ported to MySQL 5.0 and some comparisons done.

I would REALLY want to see the tests cover PostgreSQL 8.0/8.1, MySQL-InnoDB, MySQL-MyISAM, Firebird, MaxDB, and so on; i.e., NOT just MySQL-MyISAM and PostgreSQL, and not done by folks who can't tune one or the other of the databases. So I think the OSDL folks would be a great choice here, if they wanted to expand their test suites.

A weird test (1)

jbolden (176878) | more than 8 years ago | (#14129695)

I'm not sure benchmarks are really the best way to choose between those two products. The common wisdom here is pretty well established:

MySQL is much faster.
Postgres does a lot more.

If you can tolerate feature-poor, you go with MySQL for ease + speed.
If you can tolerate slow, you go with Postgres.
If you need feature-rich and fast, you go with Oracle.
If you don't have dedicated DBAs, then you must not care about reliability: go with SQL Server.

Anyway:

1) MySQL has corruption problems. Good measures (load testing) would be worthwhile.
2) Detailed lists of what the major ones do and don't have.
3) How much faster is MySQL for bulk operations: a 1m-row read, a 1m-row change, a 1m-row write, and a 1m-row delete, repeating it all 10x (see the sketch below).
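
A minimal sketch of that bulk cycle in PostgreSQL syntax (the table is invented, and generate_series stands in for a real data load):

    CREATE TABLE bulk_test (id int, val text);
    INSERT INTO bulk_test                                   -- 1m-row write
      SELECT i, 'row ' || i FROM generate_series(1, 1000000) AS i;
    SELECT count(*) FROM bulk_test;                         -- 1m-row read
    UPDATE bulk_test SET val = val || '!';                  -- 1m-row change
    DELETE FROM bulk_test;                                  -- 1m-row delete
    -- repeat the whole cycle 10x and compare totals across engines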

I'm not sure your equipment really addresses this. Anyway, I like the suggestion of asking the two sides. One thing that is important, though, is that good databases (like Oracle) can be tuned, and thus perform much better on the test if you allow for tuning. I think you should definitely treat "tuned" and "default" as two separate entries in your benchmarks.
   

but is the common wisdom correct? (0)

Anonymous Coward | more than 8 years ago | (#14132960)

Several years ago, when MySQL didn't support transactions, it was definitely faster than Postgres.

As MySQL has added features it has gotten more complicated and slower (the MyISAM tables are significantly faster than InnoDB, but they don't support advanced features like transactions).
In the meantime, Postgres has gotten significantly faster.

So the question is: have they met yet? And whichever one is faster, how much faster is it?

The common wisdom used to be that Linux was slow, didn't scale, and wasn't secure. At one point this was correct; I don't think any of those accusations would hold up today.

David Lang

Re:but is the common wisdom correct? (1)

jbolden (176878) | more than 8 years ago | (#14134361)

That was a well-written comment; you should get an account. Anyway, if there is good reason to doubt that there is a large spread between MySQL and Postgres, then obviously my point wouldn't stand. BTW, I would assume MyISAM, since once you need transactions you probably need ACID...

One thing... (0)

Anonymous Coward | more than 8 years ago | (#14130625)

...the TRUTH!