Generally, a cluster is used to distribute responsibilities/services among different servers to best take advantage of each server's resources or different software/platforms. Yeesh, that sounds convoluted.
Simply put, if you've got an application that produces huge load spikes, you may want to distribute some of the load across a couple of servers. This is common with high-end databases, and often requires setup/integration with the application. Another reason for clustering might be to split the web server from the application server, and the application server from the database server, for whatever reason (software compatibility, license budget, security, etc.).
A major reason for not clustering, however, is that clusters are a chain, and a failure in a single link of the chain will often result in a failure of the cluster. If you have a 2 server setup, a web/app server and a database server, your cluster is no good if either of the servers dies. If you have a 5 server cluster and the system isn't configured to self-heal or compensate for a dead server, then you still end up with a useless or malfunctioning cluster/application.
If you're looking for super-high availability, and don't have the budget for a 24x7x365 staff dedicated solely to your cluster, your money is probably best spent on two servers, configured identically with high-quality hardware and redundant server architecture, rsync'd, with redundant load-balancing switches. That way if a server dies, it's dead... and your mirror is ready to pick up the slack with minimal disturbance.
Hope that helps. It's late.
High Availability Systems by:
George Vuckovic - CEO & President, Tilted Planet, Ltd.