Monday, November 5, 2007

Fairness in Storage Engine Benchmarking

Is it fair to compare Innodb to MyISAM with innodb_flush_log_at_trx_commit=1 and flush=OFF?

Granted, different applications have different requirements. But when you're trying to show a graph comparing the two engines, isn't it reasonable to keep the settings as close to equivalent as possible?

I suppose that raises the question of how similar innodb_flush_log_at_trx_commit is to flush, and whether a valid comparison is even possible.
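As a rough sketch of what "equivalent settings" might mean here (assuming you can inspect both servers before the run), you could at least verify the two durability knobs side by side:

```sql
-- innodb_flush_log_at_trx_commit=1 syncs the InnoDB log to disk on every
-- commit; MyISAM has no transaction log, so its nearest analogue is the
-- server-wide flush option, which syncs table files after each write.
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'flush';
-- For a rougher apples-to-apples run, push both toward durability
-- (flush=ON in my.cnf) or both toward speed
-- (innodb_flush_log_at_trx_commit=0 or 2, flush=OFF).
```

The two settings still aren't identical in behavior, which is the point of the question above, but at least the benchmark's assumptions become explicit.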

Innodb tends to be slower (I'm on 4.1 currently, ancient history), but it seems to me that it's doing a whole lot more for the money. What do you think?

[UPDATE] Additionally, I'm running benchmarks back to back, for example loading the table and then immediately running a read test against it. For innodb, this often means that a good chunk of the buffer pool is dirty, and the read test has to slog through while those dirty buffers are being flushed, potentially skewing the results.

Should sequential benchmarks have some built in time to allow the database to return to a stable state, or is it better just to plow straight through them all at once?
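One way to "let the database settle" between runs would be to poll the dirty-page count until background flushing catches up. A hedged sketch (the status variable below exists in MySQL 5.0+; on 4.1 the same figure appears as "Modified db pages" in SHOW INNODB STATUS output):

```sql
-- Poll this between benchmark phases and wait until the value is near zero
-- (or stops falling) before starting the read test.
SHOW STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
```

Whether to do this depends on what you're measuring: a quiesced start isolates read performance, while plowing straight through better resembles a live mixed workload.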

4 comments:

Dude From Mangalore said...

InnoDB is very slow when table size is big. We have a 24G table. Querying for the records of a user (having a few thousand results) takes more than a minute. Even having a LIMIT clause is of no use. MyISAM performs much better, by more than tenfold.

jaypipes said...

In response to dude from mangalore,

Such generalizations are typically just that: generalizations, and not particularly useful when not coupled with specifics about application usage, schema, etc...

InnoDB certainly performs very well on large installations (look at Wikipedia), so it's not as simple as you say... More likely, it is your schema or application that needs tuning.

In response to Jay,

I think what *is* fair is to have a benchmark that tests a specific workload (heavy concurrent writes, large datawarehouse, mixed environment, etc) and then tweak the engine for the best performance you can get out of it.

But, sure, you will always need to put requirements and guidelines in place for each benchmark to ensure the reader of the benchmark understands any assumptions made during the running of said benchmarks...

Cheers!

jay

Shirish Jamthe said...

In response to Jay

Can you quote the size of Wikipedia (number of tables, size of tables, indexes) and its usage model?

Last I checked, the whole Wikipedia text itself was less than 15 Gig (compressed). I am not sure how large the database is, so it's difficult to judge your response in comparison to the 'guy from mangalore', whose table is 24GB.

Jay Janssen said...

in response to dude from mangalore...

Most of this is off-topic, but a 24G table means nothing without knowing the server you're running it on and what your query and schema look like.

As an example, I can get over 6,000 selects a second on a 25G table on a pretty powerful server with 8G of RAM. I'm selecting by primary key (always faster in innodb), and my innodb_buffer_pool is 6G. My test script is also only selecting 10% of the rows 75% of the time, roughly simulating normal production traffic.

Sounds like you are doing some kind of secondary index scan (hopefully it's hitting an index!), because you are finding multiple rows per user, which means user isn't the primary (unique) key.
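An easy way to check that is to look at the query plan. The table and column names here are hypothetical stand-ins for whatever the actual schema uses:

```sql
-- Confirm the per-user query is using an index rather than scanning.
EXPLAIN SELECT * FROM events WHERE user_id = 12345 LIMIT 100;
-- In the output, the "key" column should name an index on user_id;
-- type=ALL means a full table scan, which on a 24G table will easily
-- take minutes.
```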

I find the worst thing people do is build their application and queries against MyISAM and then give Innodb a shot with just 'alter table Engine=Innodb', without touching anything else or considering how Innodb behaves differently from MyISAM. It's not uncommon that a completely different schema makes better sense when moving to Innodb, precisely because of how it differs from MyISAM.
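To make that concrete (again with hypothetical table and column names): Innodb clusters rows physically by primary key, so a schema that was fine under MyISAM may deserve a rethink beyond the bare engine switch.

```sql
-- The quick conversion people try:
ALTER TABLE events ENGINE=InnoDB;

-- A schema-level rethink: since InnoDB stores rows clustered on the
-- primary key, making the common lookup column (user_id) the leading
-- part of the key keeps each user's rows physically together,
-- turning a scattered secondary-index lookup into a tight range scan.
ALTER TABLE events DROP PRIMARY KEY, ADD PRIMARY KEY (user_id, event_id);
```

Whether that particular key change is right depends on the workload; the point is that the engine swap alone is rarely the whole migration.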