Today I was working with a client loading data into MyISAM tables at a very high rate. Hundreds of millions of rows are loaded daily into a single MySQL instance, with bursts of up to 100K records/sec that need to be inserted (into tables with a few indexes). The good news was that not all records had to go to the same table, so in theory using multiple threads to insert into multiple tables would not be bound by table locks and would be able to use multiple cores efficiently to get a good insert rate… or so it seemed.
In reality, inserting in parallel into different tables when indexes fit in memory results in very bad contention, causing hundreds of thousands of context switches per second with far less work done than one would hope. As we have already discussed, the MyISAM key cache has a serious mutex contention issue: there is a global mutex which is held for the time it takes to copy a key block from the key cache to thread-local space.
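If you want to check whether your own load is hammering the key cache in the same way, one rough approach is to watch the key cache counters while the inserts run and compare them with the context switch rate reported by the OS. The interpretation below is just a sketch; the counters themselves are standard MySQL status variables:

    -- Key cache activity: with the default single key cache, every one of these
    -- read/write requests goes through the same key cache mutex.
    SHOW GLOBAL STATUS LIKE 'Key_%';

A very high rate of Key_read_requests/Key_write_requests together with a high context switch rate in vmstat points at exactly the contention described above.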
Happily, MyISAM allows you to create multiple key caches. In this case we used a separate key cache per table, which caused the number of context switches to drop almost tenfold and almost doubled performance.
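For reference, here is a minimal sketch of what a per-table key cache looks like; the cache name, size and table name are made up for illustration, but the statements are standard MySQL syntax:

    -- Create a dedicated key cache (name and size are illustrative)
    SET GLOBAL busy_table_cache.key_buffer_size = 256*1024*1024;

    -- Move the table's indexes from the default key cache into the dedicated one
    CACHE INDEX busy_table IN busy_table_cache;

    -- Optionally preload the indexes so inserts do not start with a cold cache
    LOAD INDEX INTO CACHE busy_table;

Note that CACHE INDEX assignments do not survive a server restart, so they normally go into an init-file or the application's startup path.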
Another interesting result: adding PACK_KEYS=0 to the most actively updated tables almost doubled throughput again (and I would expect more than a doubling if all tables were converted with this option). This is of course because some of the indexes on these tables were on CHAR/VARCHAR columns, which MyISAM prefix-compresses by default. Honestly, I did not expect such a large difference for inserts. I have already written about the very large impact this option has on joins, but I did not expect modification of packed keys to be so much more expensive.
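If you want to try the same change, it is just a table option; here is a sketch with a hypothetical table (keep in mind the ALTER rebuilds the table and all of its indexes, so it is not free on a large table):

    -- Disable key compression on an existing table (rebuilds the table)
    ALTER TABLE busy_table PACK_KEYS=0;

    -- Or set it at creation time
    CREATE TABLE busy_table_archive (
      id   BIGINT NOT NULL,
      tag  VARCHAR(32) NOT NULL,
      PRIMARY KEY (id),
      KEY (tag)
    ) ENGINE=MyISAM PACK_KEYS=0;

The trade-off is larger indexes on disk and in the key cache in exchange for cheaper key modifications.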
After these changes we got the insert rate into MySQL close to 200K rows/sec using standard multi-value inserts, which is a pretty good number for indexed tables, especially considering the application was doing some updates along the way too.
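For completeness, "multi-value inserts" here simply means packing many rows into a single INSERT statement, which amortizes parsing and network round trips (the table and values are made up):

    -- One statement, one round trip, many rows
    INSERT INTO busy_table (id, tag) VALUES
      (1, 'a'),
      (2, 'b'),
      (3, 'c');

In practice a few hundred to a few thousand rows per statement is a common batch size, bounded by max_allowed_packet.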
Entry posted by peter