Why and when would you cluster a physical server (i.e. several physical machines made to look like one logical server to everything else)? There seem to be so many other things you can cluster (e.g. storage, databases, application servers, etc.) that I don't see when you'd ever want to cluster the actual physical machines.
I'm confused because if you want to scale up your application because you're running out of memory or CPU, wouldn't you just add another physical machine (running another instance of the application server or database) and cluster at the application-server or database level?
Why would you ever want to cluster a server?!?
I hope some people on this forum who are involved in system administration might have some insight to share on this (regarding why they do or do not go with this approach).
Look at the likes of Google, Microsoft, YouTube, CNN, ILM, Xbox Live, or any other service that runs a lot of machines with a very large user base.
Don't you think it's more effective to distribute that massive load across multiple physical machines? It isn't always the application load that drives the need for clustered servers; sometimes it's the user load...
I realize that large systems require multiple physical machines, so perhaps I should word my question differently:
When would you cluster multiple physical machines at the OS level so that they appear to be running only one instance of the OS, as opposed to multiple servers each running their own instance of the OS?
An example is Red Hat's Linux clustering, which allows one instance of the Red Hat OS to run across multiple machines... when would you do this?
Even if each machine were running its own independent OS, you could still connect the machines by clustering at the application-server, database, or any other layer to remove bottlenecks as the number of users grows.
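To picture what clustering at the application layer (rather than the OS level) means in practice, here's a minimal sketch of a round-robin dispatcher sitting in front of several independent app-server instances, each on its own machine with its own OS. The host names are hypothetical, and a real deployment would use a proper load balancer rather than hand-rolled code:

```python
import itertools

# Hypothetical app-server instances, each on its own physical machine.
APP_SERVERS = [
    "app1.example.com:8080",
    "app2.example.com:8080",
    "app3.example.com:8080",
]

def make_dispatcher(servers):
    """Return a function that hands out servers in round-robin order."""
    pool = itertools.cycle(servers)
    return lambda: next(pool)

next_server = make_dispatcher(APP_SERVERS)
# Each incoming request is sent to the next server in rotation.
print(next_server())  # app1.example.com:8080
print(next_server())  # app2.example.com:8080
```

The point is that the machines stay independent at the OS level; only the layer above (the dispatcher and the identical app-server instances) makes them act like one logical service.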