We have heavy MySQL usage for a client: 100+ active tables, about 5 GB of data, and an average of 100-150 qps as reported by mytop. MySQL runs on a dedicated server (dual Xeon, 2 SCSI disks, 2 GB RAM).
Here is my.cnf:
# The MySQL server
port = 3306
socket = /var/lib/mysql/mysql.sock
key_buffer = 768M
max_connections = 2500
max_connect_errors = 200
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 16M
read_buffer_size = 8M
myisam_sort_buffer_size = 384M
thread_cache = 16
query_cache_limit = 2M
query_cache_size = 256M
query-cache-type = 1
ft_min_word_len = 2
tmp_table_size = 512M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 4
thread_stack = 126976
The server is fast most of the time, but occasionally one slow query causes many fast queries to wait, and this slows things down. I assume the slow query is consuming a lot of resources, and that is why the other, non-slow queries wait.
The first question is: what can we optimize, in software and hardware, without changing servers? If we add another 2 GB of RAM, would that help? If so, how should we change my.cnf?
As stated in the above article, we would need at least 4 servers instead of the one dual Xeon we have now. What would be the best servers for this scenario if money is an issue, and at what data center? Your recommendations are highly welcome.
These are out of whack. table_cache should be max_connections times the maximum number of table joins you expect, because each join requires the server to open another table. You could use the "average" number of joins if your system won't let you open 5000 file descriptors (plus another 2500 for the socket file descriptors), but on a modern Linux system 5000 should be fine, and the "average" would be very hard to calculate. Currently you are restricting your server to 512 connections, because any connection beyond 512 would not be able to open a table. This is in the MySQL manual; I would post a link, but I am a new, restricted user.
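As a back-of-envelope sketch of that rule (the join depth here is a made-up assumption; plug in whatever your own queries actually do):

```python
# Rough table_cache sizing per the rule above: max_connections times the
# maximum expected join depth, since each joined table is another open table.
max_connections = 2500      # from the my.cnf in the question
expected_max_joins = 2      # hypothetical: estimate from your own queries

suggested_table_cache = max_connections * expected_max_joins

# File descriptors needed: one per open table, plus one per client socket.
needed_fds = suggested_table_cache + max_connections

print(suggested_table_cache)  # 5000
print(needed_fds)             # 7500
```

With a join depth of 2 this reproduces the numbers in the post: 5000 descriptors for tables plus 2500 for sockets.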
Also, if you don't use any InnoDB tables and you don't plan to, you can use the skip-innodb option to eliminate ~100 MB of memory use (and maybe apply it elsewhere).
Also, if your average queries per second is 150, then your thread_cache setting of 16 is much too low. Run "mysqladmin extended -r -i60" each day and, if the Threads_created value is >1, increase your thread_cache setting. I'd set it to at least 40 to start, because you need a new thread for each connection. At 150 queries per second, a thread_cache of 16 only keeps up if each connection issues more than 9 queries (150 / 16 = 9.37), and most web apps make fewer than 9 queries per connection. So unless you know that each connection to the server issues 9 or more queries, thread_cache is definitely too low.
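The arithmetic behind that claim, sketched with the numbers from the post:

```python
# thread_cache sanity check: at a given query rate, how many queries would
# each connection have to issue for a cache of this size to keep up?
qps = 150          # average queries per second, from mytop
thread_cache = 16  # current my.cnf setting

queries_per_connection_needed = qps / thread_cache  # just under 9.4

# Most web apps issue fewer queries than that per connection, which is the
# basis for recommending a larger thread_cache (e.g. 40) here.
print(queries_per_connection_needed)
```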
I see the mytop output now (I skipped straight down to post when I saw the max_connections/table_cache problem). It looks like you could stand to cut sort_buffer_size from 16M to 8M; if a query needs to buffer 16M of data to sort, it will be slow no matter how much memory you throw at it. Then add the recovered memory to your query cache (from 256M to maybe 384M).
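Before growing the query cache, it may be worth checking that it is actually being hit. A minimal sketch using the usual hit-rate formula, Qcache_hits / (Qcache_hits + Com_select), with made-up counter values (read the real ones with SHOW GLOBAL STATUS LIKE 'Qcache%' and SHOW GLOBAL STATUS LIKE 'Com_select'):

```python
# Query-cache hit rate. Com_select counts SELECTs that were actually
# executed (cache misses); cache hits increment Qcache_hits instead.
qcache_hits = 900_000  # assumption: replace with your server's counter
com_select = 300_000   # assumption: replace with your server's counter

hit_rate = qcache_hits / (qcache_hits + com_select)
print(f"{hit_rate:.0%}")  # 75%
```

A low hit rate would suggest the extra memory is better spent elsewhere than on a bigger query cache.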