I have a friend who is starting a website that has every chance of getting Slashdotted and Dugg several times per month (search Wikipedia for the terms).
He wants a solution that will keep the site up when visitors rush in.
His site runs Joomla.
Heh. That really depends on the budget -- but I don't know of many sites that get dugg/slashdotted a couple of times a month right from the get-go unless they have some great and unique content. It might be better to start with a single server and move up from there as needed; no point in going through the hassle of migrating away from a shared plan or VPS later.
Personally, I'd recommend H-Sphere, as it sets up clusters so well and does a great job with overall resources. CartikaHosting.com is my favorite H-Sphere host; they currently host my personal site and one of my larger ones.
Personally, it depends on how heavily dugg or slashdotted you're planning on getting.
I have heard of people surviving on well-set-up Celeron boxes; at the same time, I have heard of it killing a dual Xeon box.
One thing that might work would be to talk to some hosts about setting up DNS failover and load balancing, so if one server dies, traffic almost instantly kicks over to the other. The other thing to look out for is overselling.
I've managed to tweak Apache to sustain 800-1100 hits/second (static HTML) on a Sempron 2600+ with 512 MB of RAM, but when it comes to PHP and MySQL I'm having huge problems...
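For reference, the kind of Apache tweaks that help with static-file bursts usually look something like this. This is a sketch for a prefork Apache 2.x; the numbers are assumptions you would tune to your own RAM, not settings from my actual box:

```apache
# Short keep-alives free up workers quickly during a burst, and
# MaxClients is capped so 512 MB of RAM is never exhausted by too
# many simultaneous Apache processes swapping the box to death.
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100
<IfModule prefork.c>
    MaxClients          150
    MaxRequestsPerChild 1000
</IfModule>
```

The right `MaxClients` is roughly (free RAM) / (size of one Apache process), which is why PHP hurts so much: each process gets far bigger.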
I don't want to trick an overselling company into hosting me (and I don't want to get suspended when the site gets traffic either), so... like I said, up to $100/month to spend on a Joomla site that won't go down when hundreds of visitors try to load it every second.
There's no point in spending that money on a server that will go down after a few hits -- we'd never make the money back from the advertisements.
I don't think any shared server (with the business model most hosts are using) will cope, so that's out, really. In my experience the RAM is what matters (even before PHP and MySQL come into it). I guess tree-host meant VPS or shared hosting when they mentioned overselling.
What I would normally recommend is to get as much RAM as possible. Static HTML is fine, but with PHP and MySQL, I expect your site will die under a slashdotting...
There is a way around this. If possible, you can have your PHP files run, say, once a day on a cron job and generate static pages of themselves. I did this for a website I run: I just run a script that updates the HTML files when needed. It's not possible in all cases, though, so the other option is...
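That "bake to static" step can be sketched as a small shell script. The paths and the curl URL in the comments are assumptions, not the script I actually use, and the demo line at the bottom stands in for the real page fetch so the sketch is self-contained:

```shell
#!/bin/sh
# Render a dynamic page once and publish it as a static file.
# Writing to a temp file and mv-ing it into place is atomic, so
# visitors never see a half-written page mid-regeneration.
render_to() {
    out=$1; shift
    "$@" > "$out.tmp" && mv "$out.tmp" "$out"
}

# In production, run from cron, this would be something like:
#   render_to /var/www/html/index.html curl -s http://localhost/index.php
# Demo with a stand-in command instead of a live fetch:
render_to ./index.html echo "<html><body>cached copy</body></html>"
cat ./index.html
```

A crontab entry such as `0 * * * * /usr/local/bin/regen.sh` (hypothetical path) would then refresh the pages hourly, and Apache serves flat files instead of invoking PHP on every hit.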
You can use a "reverse proxy". In theory, that will cache pages and save a lot of processing power; Squid supports this mode. A web search will turn up more about it.
So with a reverse proxy, it might work out OK. $100 isn't much, though -- you will need some creative setting up to get it working! You could also search for a Joomla caching plugin; that might help if one exists.
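As a rough idea, a Squid (2.6+) reverse-proxy ("accelerator") setup is only a few lines of squid.conf. The hostname and backend port here are placeholders, not a tested config:

```
# Listen on port 80 in accelerator mode and forward cache misses to
# the real web server (Apache moved to port 8080 on the same box).
http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver

# Wide open for the sketch; you would lock this down in practice.
http_access allow all
```

Cache hits never touch Apache or PHP at all, which is where the big savings come from.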
I thought Joomla had a caching module that lets you serve most content as static files.
With a $100/month budget, you can easily get many shared hosting accounts and split the traffic among them to cope with huge spikes, IMO, as long as you avoid running the heavy, slow PHP/MySQL parts.
Also, if you are willing to serve most of your site statically, one way or another, bandwidth alone won't be a problem for many hosts, and you will have more choices. Some hosts may even sponsor you to show off their capability, if you negotiate and let the host advertise on your site, for example. Who knows? Some hosts may even offer you free hosting.
Anyway, I think the key obstacle is PHP/MySQL, as it is slow and uses a lot of resources. In certain cases it can make the server five to ten times more likely to go down than more efficient solutions would.
Talk to several hosts and see what they say and what they suggest. I know Servage has a clustered setup and offers special deals for high-bandwidth users, but I don't know whether they could or would support your site without any changes. These days, even many low-budget hosts use clustered setups.
With $100/month, you can get ten or more of them and split traffic among them (provided that using subdomains or other domains for certain elements, or redirecting traffic, is acceptable), which offers a lot of resistance to huge traffic, whereas a single $100 dedicated server would go down much more easily.
Building a cheap, resilient solution takes a bit of thinking and testing, for sure, though.
DNS is another thing to consider.
If you go with a dedicated server, you may want to use the registrar's DNS instead of the one hosted on the server itself. It will reduce the server load a little and give you a way to redirect traffic to other servers if the box goes down.
Also, even with shared host(s), registrar DNS usually offers more redundancy, resilience, and features (such as dynamic DNS).
"extras" has a point, that might work out quite well.
Have you checked out zoneedit.com? They (or loads of others, of course) can set up multiple A records.
This is called round-robin DNS. When someone types in your domain name, it resolves to one of the IP addresses more or less at random, so if you had loads of hosts, that might work out well. Each visitor would, in theory, land on one IP address.
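A round-robin setup is nothing more than the same name carrying several A records. A hypothetical BIND-style zone fragment (the addresses are RFC 5737 documentation IPs, not real hosts) looks like:

```
; three mirrors behind one name; resolvers rotate through them
www   300  IN  A  192.0.2.10
www   300  IN  A  192.0.2.11
www   300  IN  A  192.0.2.12
```

The low TTL (300 seconds) keeps cached answers from pinning visitors to a dead mirror for too long.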
If you type in a command prompt window: ping google.com
Pinging google.com [220.127.116.11] with 32 bytes of data:
Reply from 18.104.22.168: bytes=32 time=128ms TTL=239
It picks the first address the lookup returns.
Other people should get the other addresses when they try; your computer/ISP/router/somewhere will cache the IP for quite a while.
It doesn't mean your sites will all be identical and in sync -- you'd have to work something out yourself for that. It also doesn't mean that if one goes down, people won't still be directed to it...
Still, having loads of mirrors is a good plan, and it might work out slightly cheaper, although there is the risk that the shared hosting providers you choose will be unhappy about the resource usage.