That would probably be a good idea, assuming that both servers have sufficient CPU resources available for compression/decompression.
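To get a rough feel for the CPU-vs-bandwidth trade-off, you can compress a sample result set and compare sizes. This is just an illustrative sketch: the payload below is made up, but MySQL's compressed protocol does use zlib, so the ratio is in the right ballpark for repetitive row data.

```python
import zlib

# Hypothetical, repetitive text payload standing in for a query result set.
payload = b"id,name,email\n" + b"42,John Doe,john@example.com\n" * 1000

compressed = zlib.compress(payload, level=6)  # MySQL's protocol also uses zlib
ratio = len(compressed) / len(payload)

print(f"original: {len(payload)} bytes, compressed: {len(compressed)} bytes")
print(f"ratio: {ratio:.2%}")  # repetitive row data typically shrinks a lot
```

Whether that saving beats the extra CPU cost depends on your data and how loaded the boxes already are, so it is worth measuring on real queries.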
Originally Posted by cmanns
You have to remember that suddenly network latency comes into play. Connecting to a socket on localhost is faster than creating a connection through a couple of NICs and a switch, and while a gigabit network will help, it cannot solve the problem entirely.
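The round-trip cost adds up per query, which is why a page with many small queries feels it most. A back-of-the-envelope calculation (the numbers below are illustrative assumptions, not measurements):

```python
# How per-query round trips add up on a single page load.
rtt_localhost_ms = 0.05   # assumed UNIX socket / loopback round trip
rtt_lan_ms = 0.30         # assumed round trip through NICs and a switch
queries_per_page = 40     # a moderately query-heavy page

extra_ms = queries_per_page * (rtt_lan_ms - rtt_localhost_ms)
print(f"extra latency per page: {extra_ms:.1f} ms")
```

A few milliseconds per page may be acceptable; the point is that the overhead scales with query count, so reducing the number of queries per page helps more than raw bandwidth does.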
Any other tweaks? The current OS is CentOS (the server is donated); I plan to switch to Debian Etch or FreeBSD 6.2.
I think I've got my.cnf tweaked right for the 2 GB of RAM, but it seems a little slower than when it was on localhost. I was thinking a dedicated MySQL server would be quicker; it's slow enough that my lightweight MySQL pages are running much slower.
As for the 100 Mbit line, what else do you guys run? I'm thinking of going to 1000 Mbit so I'd get at least about 200 Mbps. Any better way?
Using persistent connections (if possible) may also help somewhat, but look out for unused, stale connections; they can tie up valuable RAM and negate the advantage of using them.
Note that persistent connections are next to useless if your webserver has KeepAlive set to "Off", since the information about the connection is discarded (on the webserver side) when the Apache thread is killed. In that case, only MySQL queries executed within the same page will benefit from persistent connections.
If you use FastCGI, I believe the persistent connections are kept within the FastCGI process, but I'm not sure.
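The general idea behind persistent connections is just connection reuse. A minimal, language-agnostic pool sketch (not how mod_php's pconnect is implemented internally; `connect_fn` is a hypothetical factory wrapping your driver's connect call, and the demo uses a stand-in object instead of a real MySQL driver):

```python
import queue

class ConnectionPool:
    """Minimal persistent-connection pool sketch.

    maxsize caps the number of idle connections kept around, so stale
    connections don't eat RAM as warned above.
    """
    def __init__(self, connect_fn, maxsize=4):
        self._connect_fn = connect_fn
        self._idle = queue.Queue(maxsize=maxsize)

    def acquire(self):
        try:
            return self._idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            return self._connect_fn()        # none idle: open a fresh one

    def release(self, conn):
        try:
            self._idle.put_nowait(conn)      # keep it for the next request
        except queue.Full:
            conn.close()                     # pool full: drop the extra

# Demo with a fake connection class so the sketch is runnable anywhere.
class FakeConn:
    opened = 0
    def __init__(self):
        FakeConn.opened += 1
    def close(self):
        pass

pool = ConnectionPool(FakeConn, maxsize=2)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()      # reused, so no second connection is opened
print(FakeConn.opened)   # 1
```

The saving is exactly the connection setup cost (TCP handshake plus MySQL authentication) per reuse, which is why it only pays off when the pooling layer outlives a single page request.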
Anders C. Madsen
Golden Planet Support - http://www.goldenplanet.com