  1. #1

    Better to upgrade Dedicated Server or Buy 2/3?

    Okay. So this has been the biggest headache I've ever dealt with in the 'online world'.

    I have a quad-core server right now with 4 GB of RAM hosted through InMotion Hosting, and I've had some SERIOUS issues with CPU load. I've split it up into 4 accounts, each account hosting anywhere between 5 and 15 WordPress sites.

    My CPU load is shooting through the roof: it climbs well above 4 every 15 minutes or so, has hit as high as 200-300, and then the server crashes. When running 2 accounts, the load normally stays below 2, but it does have occasional spikes to around 6.

    I don't think this can work with this server anymore, so I'm thinking about either upgrading the server or buying a second server or possibly even a third. My accounts look like this right now

    Account #1: 14 sites
    Account #2: 11 sites
    Account #3: 5 sites
    Account #4: 8 sites

    Running 14 sites per server is pretty safe, I'd say. I don't have very much traffic at all, and I think a lot of this has to do with the fact that my server keeps crashing or is way too slow.

    The hosting company keeps saying it has to do with the way I've coded my scripts, but I've recoded to make the PHP scripts run faster, and the problem is still going strong. When I run 2 smaller accounts, there are no problems at all.

    Any ideas on what I should do here? I'd like to stay with InMotion Hosting, but if it just won't work, then I'll have to migrate somewhere else. I'm on the Advanced Dedicated Server (http://www.inmotionhosting.com/dedicated_servers.html) and I'm thinking of either buying an Elite server as well as keeping the Advanced, or going for the "Commercial Class 1" or the "Commercial Class 2 - 24 Threads!"

    If anyone can provide some sound advice, I'd GREATLY appreciate it. The last thing I want to do is spend days upon days trying to fix this issue when it probably can't be fixed with this 1 server. If you have any other recommendations for hosting that could be more affordable and do the trick, I'd appreciate any links as well.

    Or would it be better to go with cloud hosting? I know nothing about it, but I see a lot of apps being used on the cloud. The load is going crazy from the thousands of cron scripts that I run, which grab data and insert it into MySQL.

  2. #2
    Have you optimized your WordPress installs enough? I mean, have you enabled a caching plugin to minimize web server load?

  3. #3
    MySQL on SSDs? And what about Varnish or nginx as a caching frontend?
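    For what it's worth, a minimal sketch of what an nginx caching frontend in front of Apache might look like (the paths, zone name, and backend port 8080 are illustrative assumptions, not the poster's actual setup):

    ```nginx
    # Cache storage: location, zone name and sizes are placeholders.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m max_size=1g;

    server {
        listen 80;
        location / {
            # Apache assumed to be listening on 127.0.0.1:8080.
            proxy_pass http://127.0.0.1:8080;
            proxy_cache wpcache;
            proxy_cache_valid 200 302 5m;   # cache successful responses for 5 minutes
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
    ```

    Even a short cache lifetime like this can absorb most anonymous WordPress page hits before they ever reach PHP.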

  4. #4
    You'd better optimize your MySQL database and, if possible, stop running unnecessary cron scripts.

  5. #5
    While I haven't a clue why your CPU usage is so heavy, I would encourage you to find the source of the problem rather than purchase additional resources.

    If I were facing the problem, I would try to isolate the usage to a particular website or application by taking them offline one at a time. You may need to do that in the middle of the night to minimize service disruption.

  6. #6
    Check whether the problem is I/O. If it is, upgrading the RAM and CPU won't get you any improvement.

  7. #7
    Have you considered running some virtualization software on your dedicated server to break it down into several virtual machines? That way you can ensure a minimum level of resources is always available to your core system, so the server itself doesn't keep crashing, and run your accounts from within the virtual machines. It also allows some load balancing: you can move individual sites around between virtual machines to make the best use of resources.
    Luke
    Senior Support Advisor for daily.co.uk
    Low Cost VPS Solutions: www.daily.co.uk/products/virtual-private-servers | High Performance Dedicated Servers: www.daily.co.uk/products/dedicated-servers/

  8. #8
    I actually think luke@daily has a great idea in virtualizing the server to segment and discover the true cause of the problem.

    Also, logging. Start logging everything. Look into sar (presuming Linux). It gives a great readout of lots of different subsystems, helping you narrow down when/where I/O contention may be happening, and it shows system health over time.
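    Assuming the sysstat package is installed, a few sar invocations like these cover the common cases (the time window and iowait threshold are just examples); the last part shows how you might post-process sar-style output to flag high-iowait samples, run here on inlined sample data so it works anywhere:

    ```shell
    # CPU utilization, 1-second samples, 10 times:
    #   sar -u 1 10
    # I/O and transfer rates:
    #   sar -b 1 10
    # Load averages and run queue for a past window (e.g. this morning's spike):
    #   sar -q -s 06:00:00 -e 07:00:00
    #
    # Post-processing example: print samples where the %iowait column ($6 in
    # sar -u output) exceeds 20. Sample data is inlined for demonstration.
    printf '06:40:01 all 12.3 0.0 3.1 25.7 0.0 58.9\n06:50:01 all 8.0 0.0 2.2 5.1 0.0 84.7\n' |
      awk '$6 > 20 {print $1, "iowait=" $6}'
    ```

    A consistently high iowait with modest user CPU would point at the disks rather than the PHP code.
    
    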

    As stated, it's best to find out what the problem is before you upgrade, as you may be running into it shortly down the road again.
    ||| Steven Peters
    ||| Junior Linux Admin, Web Hosting Enthusiast

  9. #9
    Wow an overwhelming response. I appreciate it everyone!

    1) Yes, I have optimized WordPress. I've removed wp-cron from the wp-config file, and I've added caching through W3 Total Cache.
    2) I'm not sure if I can request modifications to my server with InMotion (i.e. SSDs for MySQL).
    3) I have a database administrator who's looked through my custom tables, and he hasn't said there's anything crazy wrong with them, so I'm not sure that would be the issue. One thing is that the wp_options table always builds overhead after just a few minutes, but apparently that's a WordPress issue. All cron scripts are necessary.
    4) @ajonate - I've been trying to figure out the root of my problem for the past 3 days, constantly monitoring over SSH and watching when things spike. I think it may be that there are too many PHP scripts which all query MySQL. I've gone through the isolation process, but it's near impossible to figure out. The hosting company has enabled the MySQL slow query log to try to find where the issue lies.
    5) How do I check the logs for I/O? I had a problem with this when the load was really high.
    6) I don't really understand what virtualizing the server involves. What kind of performance losses would I see by doing this? What happens when each segment gets overloaded?

  10. #10
    Quote Originally Posted by salmanpasta View Post
    4) @ajonate - I've been trying to figure out the root of my problem for the past 3 days, constantly monitoring over SSH and watching when things spike. I think it may be that there are too many PHP scripts which all query MySQL. I've gone through the isolation process, but it's near impossible to figure out. The hosting company has enabled the MySQL slow query log to try to find where the issue lies.
    If you have ssh access, try issuing the top command.

    # top

    Just sit and watch it for a while and see if you observe %CPU spiking. If it does, note which application is responsible. That will be on the far right of the table under COMMAND.

    Press Ctrl-C to exit top.
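    If watching top live is impractical, a one-shot snapshot with ps gives the same answer non-interactively (a sketch; the options below are the Linux/procps ones):

    ```shell
    # List the five most CPU-hungry processes, preceded by a %CPU/%MEM header row.
    ps -eo pcpu,pmem,comm --sort=-pcpu | head -6
    ```

    Run it a few times during a spike and the offending command name will keep floating to the top.
    
    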

  11. #11
    Quote Originally Posted by ajonate View Post
    If you have ssh access, try issuing the top command.

    # top

    Just sit and watch it for a while and see if you observe %CPU spiking. If it does, note which application is responsible. That will be on the far right of the table under COMMAND.

    Press Ctrl-C to exit top.
    The command that's always spiking is either mysqld or php. Because there are so many PHP files which all interact with mysql, I have no idea where to begin to identify which file it is that's causing the spiking.

  12. #12
    Also, I'm not sure which PHP handler to use, but it sounds like suPHP could be slowing my sites down and overloading the CPU, since there are thousands of PHP scripts that run. Would you be able to recommend one in particular?

  13. #13
    Quote Originally Posted by salmanpasta View Post
    The command that's always spiking is either mysqld or php. Because there are so many PHP files which all interact with mysql, I have no idea where to begin to identify which file it is that's causing the spiking.
    Good! It appears that the problem is indeed MySQL. I suggest running mysqltuner to see what it recommends. To do that, use the following Perl script.

    http://entomy.com/mysqltuner.txt

    Either copy & paste the entire contents into a new file, or download the file, then name it mysqltuner.pl and save it. You can either save the file to your cgi-bin directory and run it from your browser, or save it anywhere on your Linux box and run it as a Perl script. Either way, you will need to give it execute permission, like:

    # chmod 755 mysqltuner.pl

    When you run it there will be a list of specific suggestions at the bottom of the output. See if there are any practical suggestions that might help you. Pay particular attention to the variables recommendations at the very bottom.

  14. #14
    Thank you all for your input. The problem has been solved by JacobN from the Inmotion community - http://www.inmotionhosting.com/support/team

    He was able to narrow down the problem through my linux logs to identify exactly what was wrong! STAND OUT GUY!

    For anyone following along: the reason was that too many bots were crawling my sites. We've blocked those bots, and the load is now fine!
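    The thread doesn't show the exact rules that were used, but a typical .htaccess sketch for this kind of bot blocking looks something like the following (Apache 2.2-era syntax; the user-agent and IP range are placeholders, not the real offenders):

    ```apache
    # Flag requests whose user-agent matches a known bad crawler (placeholder name).
    SetEnvIfNoCase User-Agent "SomeBadCrawler" bad_bot

    Order Allow,Deny
    Allow from all
    Deny from env=bad_bot
    # Block an offending address range outright (placeholder range).
    Deny from 203.0.113.0/24
    ```

    Blocking by user-agent, by IP, or both is a judgment call that the later posts in this thread discuss in more depth.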

    Thanks to all and Jacob again!

  15. #15
    So, short answer is you do not need to upgrade at the moment.

    Long answer is NEVER upgrade without knowing the reason for excessive resource usage.
    CPanel Shared and Reseller Hosting, OpenVZ VPS Hosting. West Coast (LA) Servers and Nodes
    Running Linux since 1.0.8 Kernel!
    Providing Internet Services since 1995 and Hosting Since 2004

  16. #16

    Quote Originally Posted by RRWH View Post
    So, short answer is you do not need to upgrade at the moment.

    Long answer is NEVER upgrade without knowing the reason for excessive resource usage.
    Yes RRWH, salmanpasta doesn't look like he needs a server upgrade at the moment, since we were able to block a lot of unnecessary bot requests.

    I can't agree with you more that you shouldn't simply throw more hardware at an issue without a good understanding of what's really causing the excessive usage. If you run a server, you should get really comfortable with connecting via SSH and looking at the requests happening in your access logs. I can't even count the number of times, like in this case, that telling a few bots to stop slamming a server has decreased the average usage dramatically.
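    As a self-contained sketch of the kind of log digging described above, here's the per-IP request count pipeline run against a few made-up log lines in Apache combined format (the IPs and user-agents are invented samples):

    ```shell
    # Create a small fake access log to demonstrate the pipeline:
    cat > /tmp/sample_access.log <<'EOF'
    198.51.100.7 - - [20/Feb/2013:06:41:02 -0500] "GET / HTTP/1.1" 200 512 "-" "BadBot/1.0"
    198.51.100.7 - - [20/Feb/2013:06:41:05 -0500] "GET /feed HTTP/1.1" 200 512 "-" "BadBot/1.0"
    203.0.113.9 - - [20/Feb/2013:06:42:00 -0500] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
    EOF

    # Requests per client IP, busiest last (the same sort | uniq -c | sort -n
    # idiom used elsewhere in this thread):
    awk '{print $1}' /tmp/sample_access.log | sort | uniq -c | sort -n
    ```

    On a real server you'd point the awk at your live access logs instead of the sample file; any IP with a wildly disproportionate count is a candidate for closer inspection.
    
    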
    Jacob Nicholson
    System Admin turned customer advocate
    A decade working at web hosts, and now helping people with Hosting Advice for the masses!

  17. #17
    Alright, so the problem still exists, but it's better.

    I've been using the top -c command in SSH and watching what's going on. Since I have 37 sites running, I see index.php being hit a lot. The other parts are my cron scripts (PHP files) and XML-RPC, which is included in a quarter of the cron scripts.

    I just installed the P3 Profiler and here are my wordpress stats on the plugins. Can anyone recommend anything?

    http://s11.postimage.org/r1ietwb2b/p...erformance.jpg

    This is what it looks like without contact form 7

    http://s9.postimage.org/qnwiovmdb/profiler_2.jpg

    It looks like W3 Total Cache is making everything slower for me? I have Page, Minify, Database, Object and Browser cache all enabled using Disk. I know APC would probably be a better option, but I've disabled APC because it was caching my cron scripts, which I can't afford.

  18. #18
    I just added deny rules for a whole bunch of IPs. If you take a look through them, they're all either Amazon AWS or random proxies from Germany and Sweden. I believe that should do the trick? I'm guessing Alexa's bot would identify itself as Alexa, and probably wouldn't use a Windows NT user-agent.

  19. #19

    Quote Originally Posted by salmanpasta View Post
    Another great post Jacob! I tried running

    PHP Code:
    zgrep "20/Feb/2013:06:4" /home/*/logs/* | grep "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9) Gecko/2008052906 Firefox/3.0" | awk '{print $2}' | sort -n | uniq -c | sort -n 
    And it just shows me 2290 -

    It doesn't show the IP addresses. However, when I ran

    PHP Code:
    zgrep "Windows NT 5.1; de" /home2/*/logs/*-Feb-2013.gz | sed 's#^.*gz:##' | awk '{print $1}' | sort -n | uniq -c | sort -n 
    It showed me all of the IP addresses (Not all from Amazon). Should I just block out all of these IP addresses with more than 2 hits? I was wondering this whole time why my server was spiking every morning. I thought that the Amazon bot could have been Alexa?

    I just added deny rules for a whole bunch of IPs. If you take a look through them, they're all either Amazon AWS or random proxies from Germany and Sweden. I believe that should do the trick? I'm guessing Alexa's bot would identify itself as Alexa, and probably wouldn't use a Windows NT user-agent.
    Hey salmanpasta,

    Sorry about that incorrect code snippet. I originally had it a bit different while testing, using the sed command to show which domains were getting requests as well as the IPs. I chopped that bit out so I could show you how to grab all the unique IPs hitting your server globally, and it looks like somewhere along the line I pasted the wrong thing.

    The original code I had was this:

    Code:
    zgrep "20/Feb/2013:06:4" /home/*/logs/* | grep "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9) Gecko/2008052906 Firefox/3.0" | sed -e 's#/home/.*/logs/##' -e 's#-Feb-2013.gz:# #' | awk '{print $2}' | sort -n | uniq -c | sort -n
    Which does correctly display the IPs, and here is what I was also using:

    Code:
    zgrep "20/Feb/2013:06:4" /home/*/logs/* | grep "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9) Gecko/2008052906 Firefox/3.0" | sed -e 's#/home/.*/logs/##' -e 's#-Feb-2013.gz:# #' | awk '{print $1,$2}' | sort -nk2 -k1 | uniq -c | sort -n
    Here is some dummy example output from that:

    Code:
    85 shop.example.com 125.125.125.125
    85 example.com 127.127.127.127
    97 example.com 124.124.124.124
    187 example.com 123.123.123.123
    Nice job figuring out how to find the IPs on your own! The way you did it is also really nice in this case, since it captures requests from all of February with that user-agent, instead of just the 10-minute window I was grabbing.

    In this case, because we noticed a particularly old, odd, foreign user-agent, and then noticed that all of the IPs seem to belong to an Amazon service that can be used by people trying to make themselves the next Google with their own crawlers, it would probably be safe to block any IP with more than a handful of requests.

    However, just for knowledge's sake, keep in mind that the IP you block might be a shared IP address on that Amazon server. One user on that server configuring a bad crawler that doesn't abide by robots.txt rules might be doing so without the consent or knowledge of other users on the same server. So if someone also happened to use that server's IP at some point to do legitimate things on your website, that connection would be blocked. By contrast, returning a 403 Access Denied HTTP response based on the user-agent would still allow the good users on that server to communicate with your server, while blocking the bad.

    Now that said, in most cases I'd say go ahead and block the IPs of potential problem bots, especially if they hit your sites semi-regularly, to help ensure good server availability for human visitors. But in some cases IPs are going to be hard to keep up with, since a bot creator could simply jump from one server IP to another and keep hitting you, so blocking by user-agent, or by anything really unique in the type of requests they send to your server, is always going to be a better long-term solution.
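    A sketch of what that user-agent-based 403 could look like in .htaccess with mod_rewrite, matching the suspicious user-agent string from this thread (adjust the pattern to taste):

    ```apache
    RewriteEngine On
    # Return 403 Forbidden to the old spoofed Firefox/3.0 user-agent seen in the
    # logs, without blocking other traffic from the same (possibly shared) IPs.
    RewriteCond %{HTTP_USER_AGENT} "Windows NT 5\.1; de" [NC]
    RewriteRule .* - [F,L]
    ```

    The [F] flag sends the 403 response, and [NC] makes the match case-insensitive.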

    If you do consistently notice a trend of always getting hit by bots from a certain provider, you could even take some measures as drastic as blocking their entire IP ranges which I mention in my article about blocking a range of IP addresses.

    If that's something you're interested in doing for Amazon's EC2 service you can find their IP ranges here.

    Most legitimate crawlers like Alexa will not have any kind of web-browser or OS mentioned in their user-agent, and that's usually a sign of a bot trying to disguise as legitimate traffic. Most good bots will also tell you where to go to find out more about them, in this case here's Alexa's full user-agent string:

    Code:
    "ia_archiver (+http://www.alexa.com/site/help/webmasters; crawler@alexa.com)"

  20. #20
    Quote Originally Posted by JacobN View Post
    ...
    Thanks again! Great tools for everyone else following along.

  21. #21
    If bots are the issue, you might consider blocking all proxies as well. On all the websites I operate this is routine (I don't see why anyone should be accessing my sites through a proxy).

    Consider doing this;
    http://perishablepress.com/block-tough-proxies/

    And using this service;
    http://www.shroomery.org/ythan/proxyblock.php

    Here's a simple script I wrote that you can use with them

    <?php

    // Look up the visitor's IP against Shroomery's known-proxy list.
    $ip  = $_SERVER['REMOTE_ADDR'];
    $uri = "http://www.shroomery.org/ythan/proxycheck.php?ip=" . urlencode($ip);

    // file_get_contents() returns false on failure, so the strict
    // comparison below also guards against a lookup error.
    $res = file_get_contents($uri);

    if ($res === "Y") {

    // ADDRESS BELONGS TO A KNOWN PROXY - HANDLE THEM HERE

    }

    ?>


    What I do is log the address and send them to a static ban page, you could also just send a 404 response or whatever...

  22. #22

    Similar issue

    Hello,

    I had a similar experience with InMotion Hosting and CPU usage. However, rather than investigating what's going on with possible bots, they DEMAND I throw more $$$ and hardware at the issue. I'm on a VPS and they're forcing me to upgrade to dedicated. Well, rather than doing that, I'm moving. My highest-traffic site is low traffic compared to anything. I even saw this thread initially and asked them to have JacobN check whether the issue is the same, and they said they'll only check AFTER I upgrade. Bye InMotion Hosting.

  23. #23

    Hello akyeame,

    Sorry to hear that you were having CPU usage problems on your VPS with us. Did you have a chance to follow any of my resource usage guides mentioned in this thread, or in my signature, prior to these issues?

    I apologize if our system administration team was forced to suspend your VPS and required a temporary upgrade to a dedicated server platform to investigate the issues. I'm assuming they must have been pretty large resource usage issues, that were affecting other customers on the VPS platform as well, and that's why it couldn't remain un-suspended while being investigated.

    If you'd like to private message me any of your account details such as your domain name or VPS number, I'd be glad to see if there was anything else I could do for you to possibly pin-point what is being problematic while ensuring your server's usage is not affecting other users.

  24. #24
    Hello JacobN,

    You are actually the InMotion Hosting admin I was hoping to hear from, as I saw you resolved the issues of someone else using your hosting services. I pointed Michael L. to this thread, saying you had resolved it for someone else and, since my site is (relatively) low traffic, the culprit may be bots, but the hard upsell was:

    "Any testing of this would need to be done on a dedicated platform."

    I had just read through this thread and your quote that
    Quote Originally Posted by JacobN
    I can't agree with you more that you shouldn't simply throw more hardware at an issue, without a good understanding of what's really causing the excessive usage.
    However, here I was with Michael L. demanding that I upgrade to (at the very least) a $119.99 dedicated server that actually costs $149.99 unless I can cough up 12 months worth of cash.

    The positive side is that according to Michael S.

    Your account has already been noted that you are migrating to another host, so we will make every effort to avoid suspending your account so as not to interfere with that process.
    I can usually get 2-3 months without any issue before the resource police come knocking. In that time, the culprit has been CometChat, for which I've installed CometService. The last straw was that even after disabling CometChat altogether, I was still told my resource usage REQUIRED an upgrade to dedicated. This is on a site that gets 17,000 uniques PER MONTH, rather than per day as some of the good folks here at WHT claim to get on other VPS hosting platforms.

    I can PM you my account info for you to take a look at it as you may be the only guy at IMH who isn't trying to force upgrades but at this point...

  25. #25

    Hey akyeame,

    Sure, shoot me a PM and I'd be glad to take a look for you. It sounds like the VPS might be un-suspended at the moment, so hopefully I can collect some good data for you that pinpoints where your main resource usage might have been coming from.

    - Jacob


