I am looking for a dedicated server company that can provide the following:
-99.99% network uptime or above.
-500 or more GB of transfer incl. in price.
-Good management in terms of security checking, uptime monitoring, and will actually reboot the system if it's down without us calling.
-$200/mo maximum, or as low as possible, for a system with a 250 GB HD. (No CPU requirement, RAM >= 256, Windows)
-A place where we can ship our own servers to become dedicated/colocation MANAGED servers for $200 or less/server.
How do people judge "uptime" with requests like this?
99.99% uptime is about a minute's worth of downtime per week, or less than an hour a year. A single reboot in a given month blows your uptime numbers for that month. Does a required reboot to patch a kernel vulnerability mean you missed your uptime target, or does scheduled downtime from 3:00 AM to 3:05 AM not count against the SLA? Migrating a DB-backed application to a second server so the original can be rebooted, then migrating it back, all without service interruption, seems like enough overhead that prices would go way up. And to be honest, I doubt most clients would recognize the difference between three nines and four nines.
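For anyone who wants to sanity-check those numbers, here's a quick back-of-the-envelope sketch of the downtime budget implied by each "nines" level (the function name is just for illustration, and it assumes a 365.25-day year):

```python
# Downtime budget implied by an availability percentage.
# Assumes a 365.25-day year; downtime_minutes is a hypothetical helper name.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_minutes(availability_pct: float,
                     period_minutes: float = MINUTES_PER_YEAR) -> float:
    """Allowed downtime in minutes over a period for a given availability %."""
    return period_minutes * (1 - availability_pct / 100)

if __name__ == "__main__":
    for nines in (99.9, 99.99, 99.999):
        per_year = downtime_minutes(nines)
        print(f"{nines}%: {per_year:.1f} min/year, {per_year / 12:.2f} min/month")
```

At four nines that works out to roughly 52.6 minutes a year, or about 4.4 minutes a month, so even one short reboot eats the whole monthly budget.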
Not that this is impossible from a provider's perspective (good hardware, reliable software, competent management), but is the guarantee just about network connectivity, or about application responsiveness as well? If one customer issues a SQL query that causes more than a minute of cumulative latency for another customer's application over a month (say a 2-second hiccup each time they run their nightly backup script), does that mean the uptime guarantee wasn't met? What if a PostgreSQL database simply needs to be vacuumed and causes slowness? What if someone's database gets corrupted; how does that affect uptime, given that the application clearly wasn't 99.99% available and they're paying for application hosting?
Not trying to be a pain -- just curious as to how others think about these sorts of issues. I don't have a problem keeping a server up and running cleanly, but I wonder about how to word an "uptime guarantee" in an SLA.
Originally posted by dzeanah How do people judge "uptime" with requests like this?
99.99% uptime is about a minute's worth of downtime per week - less than an hour a year. I don't have a problem keeping a server up and running cleanly, but I wonder about how to word an "uptime guarantee" in an SLA.
I just took the two portions that related to the OP's initial request, that being 99.99% "network uptime" vs. 99.99% server uptime. SLAs are typically written with network uptime as the deciding factor, perhaps with a separate hardware SLA for replacement time. 100% server uptime, to me, means a large failover or clustering solution that could support that request; otherwise, yes, you're correct, it would be almost impossible to guarantee that level of uptime on a single server. Eventually things fail; it's just how it is.
Originally posted by dzeanah Ah - so the network's available, and the server is available, and that's as far as your responsibility extends?
That sounds do-able, and easy to measure.
Well, I hate to be vague, but in general a lot of providers measure network uptime. It would be difficult, IMHO, to offer a set server uptime without providing full server management and locking customers out of root access. That would at least limit the problems a customer could induce, since support would handle all management, installations, etc. By segmenting network, hardware, and server uptime, a provider can offer a somewhat better SLA. Some providers offering full management might put together a 100% server SLA, but in my experience that would be done through clustering/failover solutions to ensure a backup for every piece of the operation, thus delivering the 100% uptime.