  1. #1
    Join Date
    Feb 2005
    Location
    UK
    Posts
    554

    Mosso considering using page impressions as a basis for pricing

    I'm sure many of us are familiar with Mosso, who affectionately call themselves 'The Hosting System'.

    A Mosso staff member recently revealed that a number of plans are to be rolled out during 2008, including a new control panel and adjustments to the pricing model.

    The most surprising change I saw was that Mosso realises some sites use up far more server resources than others (not counting disk space and bandwidth), and they're going to address this by giving everyone a generous resource allowance that covers the majority of customers, with further allotments that can be bought if needed.

    Sounds very similar to Media Temple's GPU system, right? Right.

    What's a little more surprising is that they're very much considering using page impressions as the measure, frequently known as 'hits'. Yes, that means every single file requested eats into your budget. Even worse, they're only aiming to cover about 80% of customers with their change, which means 20% of people will end up having to pay for impression overages.

    This seems a bit mental to me. Any webmaster will tell you that the number of impressions a single page generates can be cut from dozens to a handful with some clever HTML/CSS optimisation, which could mean, for example, that each visitor only takes 3 impressions from your budget instead of 25. It soon adds up.
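    To put rough numbers on that -- traffic figures entirely hypothetical, nothing from Mosso's actual plan -- here's how the per-page hit count multiplies out when every request is billable:

    ```python
    # Back-of-envelope sketch (illustrative numbers only, not Mosso's plan):
    # how per-page hit counts scale when impressions are the billable unit.

    MONTHLY_VISITORS = 50_000       # hypothetical traffic level
    PAGEVIEWS_PER_VISIT = 4         # hypothetical pages viewed per visitor

    def monthly_impressions(hits_per_pageview: int) -> int:
        """Every file requested (HTML, CSS, JS, images) counts as one impression."""
        return MONTHLY_VISITORS * PAGEVIEWS_PER_VISIT * hits_per_pageview

    unoptimised = monthly_impressions(25)  # page with many separate CSS/JS/image files
    optimised = monthly_impressions(3)     # same page after concatenating assets

    print(f"Unoptimised: {unoptimised:,} impressions/month")  # 5,000,000
    print(f"Optimised:   {optimised:,} impressions/month")    # 600,000
    ```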

    How can you possibly say that two otherwise identical blogs -- one with 50 smilies scattered through the posts on a page, the other with just one or two -- consume wildly different resources? Yet that is the reality of using something as variable as impressions as a measure.

    Looks like Mosso may very well shoot themselves squarely in the face with this, because they seem serious about it. Am I being overly critical here, or does anyone else think this seems crazy?

    Just getting this out there as I'm sure this will be interesting information for any current/future client of Mosso.

  2. #2
    Join Date
    Jul 2006
    Location
    Detroit, MI
    Posts
    1,962
    It seems to me that instead of innovating, some of these companies are scrambling to come up with something they can pass off as "new" or "better". What a farce.



    Regards,

  3. #3
    There is actually some logic to it. More requests can result in greater CPU usage.
    » OmegaSphere - http://www.omegasphere.net/
    » Vancouver co-location, Managed Services, Shared Hosting, SSL Certificates, Domain Names
    » Your IT Experts - Located in beautiful Vancouver, BC, Canada
    » 604-618-0543 - 866-618-0543 - support@omegasphere.net

  4. #4
    Join Date
    Dec 2004
    Location
    New York, NY
    Posts
    10,710
    Quote Originally Posted by ddent View Post
    There is actually some logic to it. More requests can result in greater CPU usage.
    It can result in greater CPU usage, and therein lies the key to the problem. It's only a potential factor and cannot be considered a direct relationship.

  5. #5
    Join Date
    Jul 2006
    Location
    Detroit, MI
    Posts
    1,962
    Quote Originally Posted by ddent View Post
    There is actually some logic to it. More requests can result in greater CPU usage.
    But they don't always result in more CPU usage. That's like saying I should never buy a car because it may break down some day.



    Regards,

  6. #6
    Join Date
    Oct 2002
    Location
    EU - east side
    Posts
    21,920
    I doubt that their aim is to measure CPU usage with 100% reliability; rather, it's to have some sort of guideline as to what may be too much for the $ one pays. A fuzzy-logic algorithm using enough variables could turn out to be accurate enough, and serve the purpose.
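    Something along these lines, I imagine -- a weighted score over a handful of cheap-to-collect counters rather than exact CPU metering. A minimal sketch with made-up weights and counter names (nothing from Mosso):

    ```python
    # Hypothetical usage score: weight several easily collected counters
    # instead of trying to meter CPU time exactly. All weights are invented.

    WEIGHTS = {
        "requests": 0.001,          # per HTTP request served
        "dynamic_requests": 0.01,   # per request that hits PHP/CGI
        "bandwidth_gb": 0.5,        # per GB transferred
        "db_queries": 0.002,        # per database query
    }

    def usage_score(stats: dict) -> float:
        """Combine raw counters into a single billing-guideline figure."""
        return sum(WEIGHTS[key] * stats.get(key, 0) for key in WEIGHTS)

    # Example month for a mid-sized site (numbers invented):
    site = {"requests": 2_000_000, "dynamic_requests": 300_000,
            "bandwidth_gb": 40, "db_queries": 900_000}
    print(round(usage_score(site), 2))  # 6820.0
    ```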

    Even worse, they're only aiming to cover about 80% of customers with their change, which means 20% of people will end up having to pay for impression overages.
    Well, if they have reasons to believe that 20% of their users currently actually cost them money due to the CPU usage, the aim may be understandable.

  7. #7
    As there are different kinds of traffic that affect the server(s) differently, this "per pageview" system is as blunt as the transferred-GB system.

  8. #8
    Quote Originally Posted by Henrik View Post
    As there are different kinds of traffic that affect the server(s) differently, this "per pageview" system is as blunt as the transferred-GB system.
    It really isn't. Which takes more resources: 1000 requests of 10kb each, or one request of 10,000 kb?

    Is it perfect? Not at all. Is it a start in the right direction that is simple to implement compared to other options? Yes.
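    One rough way to see the point: each request carries a fixed overhead (connection handling, logging, process wake-up) on top of the bytes it moves. A toy model with invented overhead figures:

    ```python
    # Toy model: total serving cost = per-request overhead + per-KB transfer cost.
    # Both constants are invented purely for illustration.

    OVERHEAD_PER_REQUEST_MS = 2.0   # connection setup, logging, etc.
    COST_PER_KB_MS = 0.01           # time spent moving the data itself

    def serve_cost_ms(num_requests: int, kb_each: float) -> float:
        return num_requests * (OVERHEAD_PER_REQUEST_MS + kb_each * COST_PER_KB_MS)

    print(serve_cost_ms(1000, 10))    # 1000 small requests -> 2100.0 ms
    print(serve_cost_ms(1, 10_000))   # 1 large request     ->  102.0 ms
    ```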
    » OmegaSphere - http://www.omegasphere.net/
    » Vancouver co-location, Managed Services, Shared Hosting, SSL Certificates, Domain Names
    » Your IT Experts - Located in beautiful Vancouver, BC, Canada
    » 604-618-0543 - 866-618-0543 - support@omegasphere.net

  9. #9
    Join Date
    Aug 2004
    Location
    Canada
    Posts
    3,785
    They could maybe make it work if there was a weighting system based on whether some sort of CGI needs to be run (PHP, Perl, Ruby, etc.).

    So .php pages would cost you more than, say, .html, which would make some sense.
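    A minimal sketch of that kind of weighting, with invented per-extension weights (nothing official from Mosso or anyone else):

    ```python
    # Hypothetical per-file-type weights: a .php hit "costs" more impressions
    # than a static .html or image hit. Weights invented for illustration.

    WEIGHTS = {".php": 5.0, ".pl": 5.0, ".rb": 5.0,
               ".html": 1.0, ".css": 0.5, ".js": 0.5,
               ".png": 0.25, ".gif": 0.25, ".jpg": 0.25}

    def weighted_hits(request_log):
        """Sum weighted impressions for a list of requested paths."""
        total = 0.0
        for path in request_log:
            ext = ("." + path.rsplit(".", 1)[-1].lower()) if "." in path else ""
            total += WEIGHTS.get(ext, 1.0)   # unknown types count as 1 hit
        return total

    print(weighted_hits(["index.php", "style.css", "logo.png"]))  # 5.75
    ```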
    Tony B. - Chief Executive Officer
    Hawk Host Inc. Proudly serving websites since 2004
    Quality Shared and Cloud Hosting
    PHP 5.2.x - PHP 8.1.X Support!

  10. #10
    Quote Originally Posted by ddent View Post
    It really isn't. Which takes more resources: 1000 requests of 10kb each, or one request of 10,000 kb?

    Is it perfect? Not at all. Is it a start in the right direction that is simple to implement compared to other options? Yes.
    Some kind of "median" calculated from bandwidth and allocated system resources could be the way for "high performance" shared hosting to market their services...

  11. #11
    Join Date
    Aug 2004
    Posts
    242
    Quote Originally Posted by ddent View Post
    There is actually some logic to it. More requests can result in greater CPU usage.
    I'm going to show you that you can generate more CPU usage with fewer hits. Proof of concept (a rough sketch with numbers follows the list):

    1. Small CPU usage
    - Static HTML page
    - 5 style sheets
    - 3 javascript files
    - 70 small images (list of thumbnails and smileys)

    2. Big CPU usage
    - 1 Joomla-based PHP index file
    - 1 style sheet
    - 1 javascript file
    - 1 CSS sprite image file (tens of small images combined into one)
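    Putting rough, invented per-request CPU costs on those two pages shows hits and CPU moving in opposite directions:

    ```python
    # Invented CPU costs per request type (milliseconds), purely to illustrate
    # that hit counts and CPU usage can move in opposite directions.
    CPU_MS = {"static": 0.5, "php": 150.0}

    page_1 = {"static": 1 + 5 + 3 + 70, "php": 0}  # HTML + 5 CSS + 3 JS + 70 images
    page_2 = {"static": 1 + 1 + 1, "php": 1}       # CSS + JS + sprite + Joomla index

    for name, page in (("Page 1 (static)", page_1), ("Page 2 (Joomla)", page_2)):
        hits = sum(page.values())
        cpu = sum(CPU_MS[kind] * count for kind, count in page.items())
        print(f"{name}: {hits} hits, ~{cpu:.1f} ms CPU per pageview")
    # Page 1 (static): 79 hits, ~39.5 ms CPU per pageview
    # Page 2 (Joomla):  4 hits, ~151.5 ms CPU per pageview
    ```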


  12. #12
    @calande we are discussing methodology here, not the actual behavioural details of application-x

  13. #13
    Join Date
    Dec 2004
    Location
    New York, NY
    Posts
    10,710
    Quote Originally Posted by Henrik View Post
    @calande we are discussing methodology here, not the actual behavioural details of application-x
    Point taken, but aren't both related?

  14. #14
    Quote Originally Posted by Henrik View Post
    @calande we are discussing methodology here, not the actual behavioural details of application-x
    These behavioral details are essential to the methodology. In calande's example, #1 would be charged more than #2 even though #2 would consume more resources. That's grossly unfair.

    The problem with this system is that people can circumvent it easily through concatenation and optimisations that don't really reduce the server load or resource usage.

  15. #15
    Quote Originally Posted by Illustrious View Post
    These behavioral details are essential to the methodology. In calande's example, #1 would be charged more than #2 even though #2 would consume more resources. That's grossly unfair.

    The problem with this system is that people can circumvent it easily through concatenation and optimisations that don't really reduce the server load or resource usage.
    You are taking on the problem of producing fair product specs and metrics from a technocratic point of view.

    Which applications use what kinds of resources is not relevant to the issue of how to find a fair statistical tool that both clients and hosting providers can use to make good decisions.

    However, resource allocation per application, usage patterns, etc. are interesting and valid topics, just not in this particular context, which is metrics.
