1) Put some tables on one server, and other tables on the other machine (will require quite a bit of modification to your code).
2) Set up MySQL Cluster (complicated, and you will probably need a MySQL expert to configure it for you).
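Option 1 boils down to keeping a table-to-server map in your data layer and routing each query to the machine that owns the table. A minimal sketch (the table names and hostnames here are made up for illustration):

```python
# Hypothetical map from table name to the server that hosts it.
TABLE_LOCATIONS = {
    "users": "db1.example.com",
    "orders": "db2.example.com",
}

def server_for(table, default="db1.example.com"):
    """Return the host that owns a given table; tables not listed
    fall back to the default server."""
    return TABLE_LOCATIONS.get(table, default)
```

The painful part, as noted above, is that every query touching a moved table has to go through this lookup, and cross-server joins stop working entirely.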
Dan Kitchen | Technical Director | Razorblue
ddi: (+44) (0)1748 900 680 | e: [email protected]
UK Intensive Managed Hosting, Clusters and Colocation.
HP Servers, Cisco/Juniper Powered BGP Network (AS15692).
MySQL Cluster also requires a lot of code changes, as some queries will drive up the load on the cluster for no apparent reason. We are running a few, and it takes a lot of tweaking and trimming of your code, as well as the database itself, to make it work properly.
Also, as Steven said, you need a TON of RAM, since the cluster keeps data in memory and replication multiplies the size of your database considerably.
The way I've done this in the past is to set up a MySQL master and a slave. In the database abstraction (a script include that wraps mysql_query and provides additional functionality), all updates go to the master, while selects get randomly sent to either the master or the slave. Don't weight the split more heavily towards the slave, because the slave still needs to update its *own* database via replication from the master, i.e. replay all of the master's insert and update operations.
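The routing logic described above is just a few lines. Here is a sketch in Python (the class name and the use of plain strings for connections are illustrative, not the poster's actual PHP wrapper):

```python
import random

class ReadWriteRouter:
    """Sketch of the wrapper described above: writes always go to the
    master, plain SELECTs are split 50/50 between master and slave."""

    def __init__(self, master, slave, rng=random.random):
        self.master = master
        self.slave = slave
        self.rng = rng  # injectable random source, handy for testing

    def pick(self, sql):
        # Only plain SELECTs may be served by the slave; INSERT,
        # UPDATE, DELETE, and DDL must hit the master.
        if sql.lstrip().upper().startswith("SELECT"):
            # 50/50 split -- deliberately NOT weighted toward the
            # slave, since the slave also has to replay the master's
            # writes through replication.
            return self.slave if self.rng() < 0.5 else self.master
        return self.master
```

In a real deployment `master` and `slave` would be connection handles, and the wrapper would run the query on whichever one `pick` returns.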
The problem is that once you go beyond a single master/slave pair, each additional slave buys you less and less capacity, because every slave must replay all of the master's writes before it can serve fresh reads. Google for 'livejournal cluster presentation'; it describes what I mean.
A better solution might be to use something like memcached to cache frequently used database responses.
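The memcached pattern is cache-aside: key the cache on a hash of the SQL, serve hits from the cache, and only fall through to the database on a miss. A sketch, using an in-process dict as a stand-in for a real memcached client (a real one, e.g. pylibmc, exposes the same get/set shape):

```python
import hashlib

class FakeCache:
    """Stand-in for a memcached client, just for illustration."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, ttl=60):
        self._store[key] = value  # a real cache would honor the TTL

def cached_query(cache, run_query, sql, ttl=60):
    """Cache-aside lookup: hash the SQL to a cache key, serve from
    the cache on a hit, otherwise run the query and store the rows."""
    key = hashlib.md5(sql.encode()).hexdigest()
    rows = cache.get(key)
    if rows is None:
        rows = run_query(sql)       # cache miss: hit the database
        cache.set(key, rows, ttl)   # store for subsequent readers
    return rows
```

The win is that your hottest, most repetitive SELECTs never reach MySQL at all, which often postpones the need for slaves or clustering entirely; the cost is stale reads up to the TTL.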