So I have a dedicated server with a large number of users (~600) on it. Over the past 6 months I've seen the server slow down significantly, yet the load doesn't seem high at all (currently sitting at 5 on a quad-core CPU with 16GB of memory that isn't fully utilized). I suspect I/O wait is the issue, because this is what Munin is showing me: http://cl.ly/Sz9Z
I'm banging my head against the wall. I've tried switching from Apache to LiteSpeed and adding PHP caching, but the small gains make very little difference and soon disappear. The I/O wait thing is a guess, but the numbers look really high to me. Storage is 2 x 1TB 7200RPM drives in RAID 1. Let me know what I might be doing wrong or if there are any other stats I can provide.
Please take a look at your PHP caching configuration; if it caches to the filesystem, it can drive your I/O wait up. Also check that disk caching is disabled in your Apache/nginx (or other web server) configuration.
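To confirm the disks really are the bottleneck, you could also watch per-device utilization with iostat from the sysstat package (just a suggestion for another data point):

# iostat -x 5

Sustained %util near 100% and high await values on the RAID members point at the disks rather than the CPU.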
You can check which processes are currently hitting your drives with atop:
# atop -dc
The first process in the table is the heaviest user of the I/O subsystem.
You can trace which files it is accessing with strace, for example:
strace -f -e trace=open -p $PID
where $PID is the process ID.
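If the strace output is too noisy, two alternatives you could try instead:

lsof -p $PID
pidstat -d 5

lsof lists the files the process currently has open; pidstat (also from sysstat) prints per-process read/write rates every 5 seconds.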
Also, if this doesn't help you track down the issue, take a look at SSD drives.
Ideal setup: SSDs for the web content and HDDs (in RAID) for the rest of the system and backups.
Based on atop, the culprit seems to be MySQL. Although we have a lot of users, I should mention that traffic to the sites is low (I usually see ~20-25 clients in Apache status), so I can't imagine the databases are getting hit that hard. Perhaps the config is just set up badly, or there's something else going on? Pasting my.cnf below:
This was the output of mysqltuner (you called it on key_buffer_size). I'm adjusting some of those settings to increase various items and see if that helps. It seems I have memory to work with that the system isn't utilizing.
-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned
-------- Performance Metrics -------------------------------------------------
[--] Up for: 3d 8h 3m 58s (42M q [148.846 qps], 852K conn, TX: 99B, RX: 124B)
[--] Reads / Writes: 89% / 11%
[--] Total buffers: 208.0M global + 5.5M per thread (2000 max threads)
[OK] Maximum possible memory usage: 10.9G (70% of installed RAM)
[OK] Slow queries: 0% (33K/42M)
[OK] Highest usage of available connections: 6% (127/2000)
[!!] Key buffer size / total MyISAM indexes: 16.0M/276.8M
[!!] Key buffer hit rate: 90.8% (767M cached / 70M reads)
[OK] Query cache efficiency: 66.0% (23M cached / 35M selects)
[!!] Query cache prunes per day: 1923938
[OK] Sorts requiring temporary tables: 0% (1 temp sorts / 611K sorts)
[!!] Joins performed without indexes: 20836
[!!] Temporary tables created on disk: 35% (456K on disk / 1M total)
[OK] Thread cache hit rate: 99% (127 created / 852K connections)
[!!] Table cache hit rate: 0% (1K open / 27M opened)
[OK] Open file limit used: 1% (610/50K)
[OK] Table locks acquired immediately: 99% (14M immediate / 14M locks)
[!!] InnoDB data size / buffer pool: 1.3G/128.0M
-------- Recommendations -----------------------------------------------------
Run OPTIMIZE TABLE to defragment tables for better performance
Enable the slow query log to troubleshoot bad queries
Adjust your join queries to always utilize indexes
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
key_buffer_size (> 276.8M)
query_cache_size (> 32M)
join_buffer_size (> 1.0M, or always use indexes with joins)
tmp_table_size (> 16M)
max_heap_table_size (> 16M)
table_cache (> 1024)
innodb_buffer_pool_size (>= 1G)
If it's not a big problem for you, try installing SSDs instead; you will lower your server's I/O wait dramatically.
I can imagine the improvement is dramatic. The hard part is the cost, since this is a production web server with a lot of clients. We currently offer a fairly low-priced plan with 1GB per user (and even that some people max out). It's hard to make SSDs work for a large production server without the cost becoming outrageous.
Yeah, increase the key_buffer_size variable, and try increasing the other variables listed there if you have enough RAM, of course (check with free -m). Also try lowering max_user_connections and max_connections to a reasonably low value that still lets your websites reach the DB.
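To see why lowering the connection caps matters: mysqltuner's 10.9G figure is just global buffers plus per-thread buffers times max connections. Using 300 as an illustrative cap (you peaked at only 127 connections):

208M + (5.5M x 2000) ≈ 10.9G   (current worst case)
208M + (5.5M x 300)  ≈ 1.8G    (with max_connections = 300)

That frees a lot of headroom for the key buffer and InnoDB buffer pool.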
There is no such thing as a MySQL config based on the spec of a server! There is only a config based on specific usage.
As long as you have free RAM, you can tune the MySQL config quite a bit. Look, it's 89% reads, and you want those reads coming from RAM!
Don't rely on mysqltuner alone; also run tuning-primer.sh and compare the values.
Actually tuning the database WILL take about 2-3 weeks: back up your config, change only 1 or 2 options at a time, note what was changed, wait at least 48 hours, and check again.
Wash, rinse, repeat, and in a few weeks you will have a config that is tuned to the workload.
And if all else fails, install a couple of SSDs in RAID 1 and move MySQL over to them.
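A rough sketch of that move, assuming the SSD array is mounted at /mnt/ssd (the path is hypothetical) and a stock /var/lib/mysql datadir; do it in a maintenance window and keep the old copy until you've verified everything:

/etc/init.d/mysql stop
rsync -a /var/lib/mysql/ /mnt/ssd/mysql/
# then point datadir at the new location in my.cnf:
#   datadir = /mnt/ssd/mysql
/etc/init.d/mysql start

rsync -a preserves ownership and permissions, so the mysql user keeps access to its files. (The init script may be called mysqld on your distro.)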