Most of the time when I have seen the 95% rule, it's just a way of monitoring and billing for your bandwidth. Basically they take out the top 5% of your peak traffic, then take an average of the lower 95% of your bandwidth used over the month and that's what you pay for. Apparently those top-5% peaks can really throw off the true average.
Now mind you - this rule is used when the connectivity provider is monitoring and billing your bandwidth on a real scale of bandwidth - NOT on the more common "data transfer" methods used by most hosts.
It's been over a year since I've really used this rule, so I am kinda assuming the definition is still the same.
Originally posted by lpguitars Basically they take out the top 5% of your peak traffic, then take an average of the lower 95% of your bandwidth used over the month and that's what you pay for. Apparently these 5% worth of peaks can really throw off the true average.
As I understand it, the 95th percentile billing method basically removes the highest 5% of your bandwidth samples (the peaks), and then the next highest number after the removal is the bandwidth you are billed for. This is definitely not the average of the lower 95%, but rather the highest point remaining. Generally the rule of thumb (only an estimate, not a hard rule) is that the 95th percentile is about 2x-2.5x your average bandwidth.
Correct--it is not an average of the lower 95%; that would be similar to (but not exactly the same as) billing at 47.5%. To simplify, let's say you have 100 bandwidth samples taken throughout the month, and these samples are the integers from 1 to 100, i.e. 1, 2, 3, 4, ... 97, 98, 99, 100. The top 5% of samples are then removed, as if they never happened. That eliminates 100, 99, 98, 97, and 96. Then the next sample (as the previous post explained) IS your bill. Your bill would be for 95 units of traffic.
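The toy example above is easy to check with a short script. A minimal sketch in Python (the 100 sample values and the 5% cutoff come from the post; the function name is just for illustration):

```python
def bill_95th(samples):
    """Drop the top 5% of samples; bill at the highest remaining sample."""
    ordered = sorted(samples)
    keep = int(len(ordered) * 0.95)  # number of samples in the lower 95%
    return ordered[keep - 1]         # the last kept sample is the billable rate

# 100 samples with values 1..100, as in the example above.
samples = list(range(1, 101))
print(bill_95th(samples))  # samples 96-100 are dropped -> 95
```

Note that with a whole month of samples the same two lines apply; only the sample count changes.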
There is generally some averaging involved in taking a 95th percentile measurement, but it is not in the percentile calculation itself. Once a sample is recorded, it isn't modified. However, in obtaining a sample, a router port or a switch port is typically monitored with SNMP. The measurement is taken at a fixed interval--every minute, every 5 minutes, every 30 minutes, etc. That interval must be fixed to maintain the integrity of the samples. The averaging I referred to at the beginning of this paragraph happens within the router or switch.
A Cisco router, for example, defaults to a 5 minute average. That means when you query inbound/outbound traffic on a Cisco router port, it will give you a number representing the average traffic over the past 5 minutes. This window can be changed, to as low as 30 seconds. In our experience, we've found that 5 minute averages provide the smoothest and most accurate stats--using 30 second, or even 1 minute, intervals produces too much "swing" in the resulting statistics.
The benefit of doing this for the customer is it allows individual transmissions to burst and get smoothed out within the 5 minute block. At my company, we take a 5-minute average sample every 1 minute, and log it. At the end of the month, we throw away 5% of the samples, and bill at the next one down. This gives us over 43,000 samples in a month, and lets us throw away over 2000.
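To put numbers on that: 30 days of one-minute samples is 43,200 samples, so the 5% discard is 2,160 samples, i.e. roughly 36 hours of the customer's highest traffic never shows up on the bill. A hypothetical sketch (the 1 Mbps baseline, 100 Mbps burst, and 2,000-minute burst length are made up for illustration):

```python
def bill_95th(samples):
    """Drop the top 5% of samples; bill at the highest remaining sample."""
    ordered = sorted(samples)
    return ordered[int(len(ordered) * 0.95) - 1]

minutes_in_month = 30 * 24 * 60           # 43,200 one-minute samples
discarded = int(minutes_in_month * 0.05)  # 2,160 samples, about 36 hours

# Hypothetical customer: steady 1 Mbps, plus a 100 Mbps burst lasting
# 2,000 minutes (just under the 2,160-sample discard allowance).
samples = [1] * (minutes_in_month - 2000) + [100] * 2000
print(discarded)           # 2160
print(bill_95th(samples))  # the burst falls entirely in the discarded 5% -> 1
```

Had the burst lasted longer than 2,160 minutes, some of the 100 Mbps samples would have survived the discard and the bill would jump to 100 Mbps, which is exactly the scenario the next posts raise.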
I know this thread is old, but doesn't this 95th percentile billing make you pay for way more transfer than you are actually using?
It seems to me like if you are using 1Mb/s for 94% of the month and then 5Mb/s for 6% of the month, you would be billed for 5Mb/s since after you throw away the top 5% you would still be looking at 5Mb/s as the next number down.
I know this example is extreme, but it does apply in certain situations. In a 30 day month, 5% is roughly 36 hours. So if I burst at a high throughput for 36 hours and 5 minutes, then I pay a high bill.
Say I run a fantasy football site that uses less than 1Mb/s all week long, then 10Mb/s all day Sunday and again on Monday night. I will be paying the bill for a consistent 10Mb/s every month.
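Running the fantasy-football numbers bears this out. A rough sketch using hourly samples (the 10Mb/s and 1Mb/s figures are from the example; four Sundays per month and a 4-hour Monday night game are my assumptions):

```python
def bill_95th(samples):
    """Drop the top 5% of samples; bill at the highest remaining sample."""
    ordered = sorted(samples)
    return ordered[int(len(ordered) * 0.95) - 1]

hours_in_month = 30 * 24        # 720 hourly samples
# Assumed: four Sundays at 10Mb/s all day, plus four 4-hour Monday
# night games at 10Mb/s; everything else stays at 1Mb/s or below.
high_hours = 4 * (24 + 4)       # 112 hours, about 15.6% of the month
samples = [10] * high_hours + [1] * (hours_in_month - high_hours)
print(bill_95th(samples))       # 15.6% > 5%, so the bill is for 10Mb/s
```

Since the high-traffic hours make up roughly three times the 5% discard window, the 95th percentile lands squarely on the 10Mb/s plateau.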
Why can some companies charge on average transfer and other charge with the 95th percentile rule?
How do the upstream Tier 1 Backbone providers charge?
And does the 95th percentile EVER work out in the customer's favor?
Originally posted by 7out I know this thread is old, but doesn't this 95th percentile billing make you pay for way more transfer than you are actually using?
More transfer? I suppose you could look at it that way. More bandwidth? No. You would be charged according to what you used (minus the top 5%). Upstreams generally charge per Mbps, so your provider has to allocate the amount you need, and thus you are charged for it. In reality, it is one of the only billing methods that works out for the host and is fair to them. It is expensive and more of a gamble to provide service any other way.
If you don't bill by the 95th percentile (a straight average, say), there's nothing to stop one client from monopolising your entire network--say, using 100Mbps during peak hours and then not using bandwidth at any other time, letting their average remain below their limit. (Granted, that could still be done with the 95th percentile, but only for 5% of the month.)
If a provider bills you for true average, then people who have highly fluctuating bandwidth graphs can "hog" 4-5x the amount of bandwidth they pay for at any given time, making it very difficult for the provider to plan their network, and keep it running in a reasonable manner.