With regard to mission-critical hosting, most setups are not built with a single point of failure and have failover capabilities to ensure uptime. With a properly configured setup, it would not IMHO be a requirement to have in-house technicians for this service. If you're referring to staffed admins within the hosting facility who provide you with managed support during failures, that is typically something that can be included in your contract.
You would really need to define "fully managed": from provider to provider, that term means completely different things. As far as HD failure goes, that is almost always something you must set up in advance. Most likely you're referring to RAID with hot-swap HDs installed, so that in the event one fails, you can pull that HD out of the server, replace it with another, and still remain operational. Many companies can offer this. It's still not an entirely mission-critical setup, though, and is usually a custom offering.
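For the Linux software-RAID case, the hot-swap procedure described above looks roughly like this. This is only a sketch using mdadm; the array and device names (/dev/md0, /dev/sdb1, /dev/sdc1) are made up for illustration, and hardware RAID controllers have their own vendor tools instead:

```shell
# Hypothetical software-RAID hot-swap (Linux mdadm); device names are made up.
# The array keeps serving I/O in degraded mode during the swap, then resyncs.
mdadm --manage /dev/md0 --fail /dev/sdb1     # flag the dying member as failed
mdadm --manage /dev/md0 --remove /dev/sdb1   # detach it so it can be pulled
# ...physically swap the drive, partition the replacement to match...
mdadm --manage /dev/md0 --add /dev/sdc1      # rebuild starts automatically
cat /proc/mdstat                             # watch resync progress
```

The point being: the redundancy has to be designed in before the failure, which is why it's usually a custom offering rather than part of a stock "fully managed" plan.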
Originally posted by myleow Would it be correct to assume that Fully Managed Solution does not cover this "feature"?
mission critical services' only requirement, by definition, is service uptime. as such, a comprehensive mission critical solution would encompass protection against various types and forms of failure, per your requirements. this would include application, node, hardware, network and power redundancy on the one hand, as well as comprehensive support to resolve any issues that may surface.
while high availability features are by no means a standard part of managed solutions, it is highly advisable that the same folks who engineer your high availability solution also manage it on an ongoing basis. we have a lot of clients who are capable of and in fact enthusiastic about managing their machines, but regardless of whether they do so themselves or use us for that, they invariably leave it to us to manage the high availability features of their deployment.
A question about database scalability: if your HDD is filled, can you just add another HDD and have it working, or is configuration required prior to that?
this is not a database scalability question, but rather a storage scalability one.
the short answer is "no", it won't just magically work all by itself. in fact, unless you configure your setup with this requirement in mind, you won't be able to do it when necessary.
the solution in this case is to use a "volume manager". ibm has a decent open-source one, and various storage companies have their own proprietary implementations. they support online expansion (no downtime), but you would still need to use a filesystem that supports that on top of the volume manager.
in short: yes, there is a way to do it, but it does require proper planning ahead of time and most certainly at least some configuration.
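to make the above concrete, here is a rough sketch of online expansion with linux lvm2, one common volume manager (not necessarily the one mentioned above). the device name and volume/group names (/dev/sdc, datavg, datalv) are hypothetical, and the filesystem-resize step assumes an ext filesystem that supports online growth:

```shell
# Hypothetical LVM2 online-expansion sketch; names are made up.
pvcreate /dev/sdc                      # initialize the newly added disk for LVM
vgextend datavg /dev/sdc               # grow the volume group with the new disk
lvextend -L +100G /dev/datavg/datalv   # grow the logical volume by 100 GB
resize2fs /dev/datavg/datalv           # grow the ext filesystem, online
```

note that none of this is possible unless the volume was created under the volume manager in the first place -- which is exactly the "planning ahead of time" part.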
i think you must have misread (possibly reading it as 'failed' instead of 'filled'), because raid does absolutely nothing for you in terms of what he inquired about.
Interesting topic. The phrase "fully managed servers" often conjures up the idea of getting worry-free service, or at least something close to it, from the datacenter provider at an all-inclusive, fixed price. But in reality, we all know that such "fully managed servers" are full of exclusions, and there is always a limit on the number of admin hours allocated. Anything else will cost extra.
In my previous work life, we (myself and other teammates in a multinational corp) designed and implemented realtime mission-critical systems. Yeah, it is really "mission critical" in that you are allowed at most 30 seconds of downtime for any given failure regardless of cause, and at most 30 minutes of downtime per year, including system maintenance and upgrades. And that is contractually enforced with penalty clauses. Anyone care to guess what that system is for and how much it costs to design and maintain? No, it is not rocket science.
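For perspective, 30 minutes of downtime per year works out to a bit better than "four nines" of availability. A quick back-of-the-envelope check:

```shell
# Sanity check on the "30 minutes of downtime per year" figure.
awk 'BEGIN {
  minutes_per_year = 365 * 24 * 60            # = 525600
  allowed_downtime = 30                       # minutes per year, per contract
  printf "%.4f%% uptime\n", (1 - allowed_downtime / minutes_per_year) * 100
}'
# prints: 99.9943% uptime
```

Compare that with the 99.9% ("three nines") many hosts advertise, which already allows over 500 minutes of downtime a year.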
So, it depends on how "mission critical" the system is supposed to be. But don't think for a moment that you can get a "fully managed" server and then forget about it because it is "fully managed".
If you can tolerate up to 30 minutes of downtime per incident, perhaps a cluster of "fully managed" servers in one datacenter will be fine, as other posters have suggested. In my experience, that should be enough for manual intervention in case something fails and must be replaced. You can substitute for "30 minutes" whatever time duration (one hour? two hours?) the datacenter can guarantee to rip out the bad component, put another one in its place, and put it back in service. Hard drives are easy -- think Ethernet cards, disk controllers, power supplies or even CPU/motherboard.
If you need shorter downtime than that, consider multiple servers/clusters in multiple datacenters with load balancers, database replication, backup nameservers and realtime monitoring, with admin teams ready to work around the clock. Or you can hire one of those bigger companies that specialize in such services.
And before anyone thinks that is too complicated, remember that it only addresses hardware and system/network availability. There is still the application side to consider. What good is it when the servers are up and accessible but the applications are down and unavailable to provide services? Mission critical is about available services, not just system uptime.
LOL .. I need a break .. too much ranting this earlier in the day.
How is it possible to set up and load balance unmanaged dedicated servers from several different DCs? STrainer mentioned this, and it is a very interesting way to get redundancy for a low price. If you could find a way to automate a lot of the mundane management, this would be a very good idea indeed.
How would you direct traffic in such a setting? Say you have servers in the US, Europe and Asia. If a user connects from Asia, do you have to set up DNS specially in order to direct that user to the Asia server?
Would all database information be replicated between all the servers? How would you do that? Your 1200 GB of bandwidth per server definitely wouldn't be enough, and neither would the 2 x 120 GB HDDs.
You can use a Geographic DNS Load Balancer. Network OEMs like F5, Alteon, Foundry and Cisco, amongst others, make these devices.
Based on the IP, the request is directed to the nearest geographic location (assuming you are doing hop-wise redirection); other options are ping times, load share, response time, pure round-robin, etc.
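As a toy illustration of the geographic decision, here is the core lookup as a shell function. The region-to-hostname table and all the names are entirely made up; a real GSLB device would also run health checks and latency probes before answering the DNS query:

```shell
# Minimal sketch of a geographic routing decision (hypothetical table/names).
# A real GSLB appliance layers health checks, latency measurement and
# weighted policies on top of a mapping like this.
pick_pop() {
  case "$1" in
    us)   echo "us1.example.com"   ;;   # Americas POP
    eu)   echo "eu1.example.com"   ;;   # Europe POP
    asia) echo "asia1.example.com" ;;   # Asia POP
    *)    echo "us1.example.com"   ;;   # unknown region: fall back to default
  esac
}
pick_pop asia   # -> asia1.example.com
```

So an Asian user's resolver gets an answer pointing at the Asia server, with no special client-side DNS setup needed.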
Databases like MS SQL and Oracle have both native (which I personally would not recommend) and external (3rd party) tools for real-time mirroring, synchronization and replication.
It's bandwidth-intensive at first, but evens out later on.
Depending on the nature of the transactions on your DB server and the time between syncs, you can get very decent performance. Alternatively, your DB can be a redundant array in a single location, with the web servers located in different locations.
Either way, it's not difficult. It just requires an understanding of a few DNS and networking issues. Very much fun if you're into that sort of thing.