  1. #1

    VPS platform suggestion

    Could anyone suggest a VPS platform that is highly reliable, stable, and fast for running Windows VMs?

    We have tested many different platforms internally, and a lot of issues have come up during testing.

    The main issue: we have tested SolusVM with KVM, and when we put around 40-50 test VPSes per node we run into load issues when booting VMs.
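    One common mitigation for this kind of boot storm, regardless of panel, is staggering guest start-up rather than autostarting everything at once. A rough sketch, assuming the guests are libvirt-managed KVM domains with libvirt-python installed (the 30-second delay is just an illustrative value):

    Code:
    # Stagger guest start-up to avoid a boot storm on the host.
    # Sketch only: assumes libvirt-managed KVM guests and libvirt-python;
    # the 30-second delay is an arbitrary illustrative value.
    import time
    import libvirt

    DELAY_SECONDS = 30

    conn = libvirt.open("qemu:///system")
    try:
        # Domains that are defined but not currently running.
        for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_INACTIVE):
            print("starting", dom.name())
            dom.create()               # boot the guest
            time.sleep(DELAY_SECONDS)  # let I/O settle before the next boot
    finally:
        conn.close()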

    Xen did the job fine, apart from the load and slow I/O speeds using HVM on CentOS 5.7. Would Xen 4 work better with CentOS 6?

    Test hardware:

    Dual E5620 or above
    102GB DDR3 Memory
    4x 1TB raid-10 config

    Could anyone suggest a platform that would work better than Xen or KVM?

    We need something that has automation.

    The only other idea we have is putting 8 drives into a RAID-10 configuration, but that still does not stop the load issues.

  2. #2
    Join Date
    Sep 2009
    Posts
    80
    Have you looked at VMware? It's quite pricey, but from what I've seen and used it's quite nice.

  3. #3
    Quote Originally Posted by Clinton44 View Post
    Have you looked at VMware? It's quite pricey, but from what I've seen and used it's quite nice.
    I don't think VMware does high density, and we are looking for something open source. Doesn't VMware provide some products for free?

    We need a high number of virtual machines on each node.

  4. #4
    Join Date
    Sep 2009
    Posts
    80
    The free products aren't for commercial usage.

  5. #5
    Quote Originally Posted by Clinton44 View Post
    The free products aren't for commercial usage.
    Any suggestions on other software? How about Hyper-V?

  6. #6
    Join Date
    Sep 2009
    Posts
    80
    I've never personally used hyper-v so I can't really comment on it. I've only worked with vmware, xen, and openvz.

    Before you dive into something it might be worth testing out a few solutions you like on a development box and seeing what best fits your needs.

  7. #7
    Quote Originally Posted by Clinton44 View Post
    I've never personally used hyper-v so I can't really comment on it. I've only worked with vmware, xen, and openvz.

    Before you dive into something it might be worth testing out a few solutions you like on a development box and seeing what best fits your needs.
    We just need something that can support a high number of virtual machines on each server without huge I/O delays.

    We have been testing Proxmox for the last few weeks and it works better than SolusVM with KVM, though we have had a few kernel panics and there is no automation yet. It runs about 50% faster than SolusVM does with KVM.

  8. #8
    Join Date
    Dec 2007
    Posts
    235
    I don't think the platform is the issue here; the issue is 40-50 VPSes per host machine. That usually won't work the way you expect.

    By the way, KVM rocks.

  9. #9
    Quote Originally Posted by Amar View Post
    I don't think the platform is the issue here; the issue is 40-50 VPSes per host machine. That usually won't work the way you expect.

    By the way, KVM rocks.
    Install speed during Windows OS installs is generally poor with SolusVM and KVM; when we use Proxmox/KVM, the installs are generally 30-50% faster. Both are using virtio block drivers.
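    The gap may come down to how each panel configures the virtio disk (cache and io modes) rather than virtio itself. A quick way to compare, assuming the SolusVM node's guests are libvirt-managed (Proxmox drives QEMU through its own qm tooling rather than libvirt, so there you would check the VM config or the running qemu command line instead):

    Code:
    # Print the disk driver settings (bus, cache, io) of every running guest.
    # Sketch only: assumes libvirt-managed guests and libvirt-python.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        root = ET.fromstring(dom.XMLDesc(0))
        for disk in root.findall("./devices/disk"):
            target = disk.find("target")
            driver = disk.find("driver")
            if target is None or driver is None:
                continue
            print(dom.name(),
                  "bus=" + target.get("bus", "?"),
                  "cache=" + driver.get("cache", "default"),
                  "io=" + driver.get("io", "default"))
    conn.close()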

  10. #10
    Join Date
    Feb 2012
    Location
    London, UK
    Posts
    82
    I suggest you look into SSDs.

  11. #11
    Quote Originally Posted by MattHouston View Post
    I suggest you look into SSDs.
    SSDs are not possible on our budget.

    We allocate around 80% of the space in the LVM volume when we fill up a whole node. As already explained, the issues we have had are with SolusVM, because its KVM implementation is not as good as, say, Proxmox's.

    SolusVM (KVM) Test Box1:
    top - 00:18:52 up 5 days, 20:21, 1 user, load average: 2.61, 3.03, 2.94
    Tasks: 510 total, 12 running, 498 sleeping, 0 stopped, 0 zombie
    Cpu(s): 28.8%us, 19.5%sy, 0.0%ni, 49.9%id, 1.5%wa, 0.0%hi, 0.3%si, 0.0%st

    Proxmox (KVM) Test Box2:
    top - 13:18:31 up 1 day, 8:49, 1 user, load average: 0.69, 0.92, 1.08
    Tasks: 455 total, 6 running, 449 sleeping, 0 stopped, 0 zombie
    Cpu(s): 16.4%us, 14.5%sy, 0.0%ni, 68.9%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st

    The Proxmox node currently has 10 more virtual machines than the SolusVM test box. During an OS install, the SolusVM node's %wa rises to around 16.5%, while the Proxmox node's rises to around 2.67%. On top of that, Proxmox has around 30-50% faster install speeds.

    When the SolusVM node is full, an OS reinstall can take 40-50 minutes; on the Proxmox node it takes around 5 minutes. Even when the SolusVM node is empty it still takes around 15 minutes. See the difference?
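    For what it's worth, a single top snapshot of %wa bounces around a lot; averaging iowait from /proc/stat over an interval gives a steadier number to compare the two nodes with during an install. A minimal sketch:

    Code:
    # Average CPU iowait over an interval, sampled from /proc/stat.
    # Run on each node while an OS install is in progress and compare.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            # First line: "cpu  user nice system idle iowait irq softirq steal ..."
            return [int(x) for x in f.readline().split()[1:]]

    def iowait_percent(interval=10):
        before = cpu_times()
        time.sleep(interval)
        after = cpu_times()
        deltas = [b - a for a, b in zip(before, after)]
        total = sum(deltas)
        return 100.0 * deltas[4] / total if total else 0.0  # index 4 = iowait

    print("avg iowait over 10s: %.1f%%" % iowait_percent())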

  12. #12
    I'd try Xen 4 with CentOS 6.

  13. #13
    Quote Originally Posted by InbrecoWS-James View Post
    I'd try Xen 4 with CentOS 6.
    Could you explain the advantages of using CentOS 6 with Xen 4? When we tested Xen HVM 3.4.1 on CentOS 5.7, we lost half of our read/write speeds.

  14. #14
    Join Date
    Jul 2007
    Location
    Virginia
    Posts
    1,314
    Quote Originally Posted by kyay View Post
    Dual E5620 or above
    102GB DDR3 Memory
    4x 1TB raid-10 config
    You're joking, right? 102GB worth of VMs will tear that RAID10 (probably SATA?) apart.
    ~ @PreetamJinka
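    To put rough numbers on that: back-of-the-envelope, assuming about 80 random IOPS per 7,200 RPM SATA spindle (an assumption, not a measurement):

    Code:
    # Back-of-the-envelope IOPS per guest on a 4-disk SATA RAID-10.
    # The 80 IOPS/spindle figure is an assumption for 7,200 RPM SATA drives.
    DISK_IOPS = 80
    DISKS = 4
    GUESTS = 45           # middle of the 40-50 guests per node from the thread

    read_iops = DISK_IOPS * DISKS        # RAID-10 reads can hit every spindle
    write_iops = DISK_IOPS * DISKS // 2  # each write lands on both mirror halves

    print("array: ~%d read / ~%d write IOPS" % (read_iops, write_iops))
    print("per guest: ~%.1f read / ~%.1f write IOPS"
          % (read_iops / GUESTS, write_iops / GUESTS))

    That works out to only a handful of IOPS per guest, which is why the array falls over as soon as several guests boot or install at the same time.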

  15. #15
    Join Date
    Feb 2006
    Location
    Kusadasi, Turkey
    Posts
    3,379
    Quote Originally Posted by Bitcable View Post
    You're joking, right? 102GB worth of VMs will tear that RAID10 (probably SATA?) apart.
    I'm pretty sure it's a typo of 12 GB.

  16. #16
    Join Date
    Feb 2006
    Location
    Kepler 62f
    Posts
    16,703
    Quote Originally Posted by kyay View Post
    SSDs are not possible on our budget.
    Then you'll either need to increase the budget, or give up.
    The #1 bottleneck is I/O, and no software choice can wiggle out of it.

  17. #17
    Quote Originally Posted by kpmedia View Post
    Then you'll either need to increase the budget, or give up.
    The #1 bottleneck is I/O, and no software choice can wiggle out of it.
    The bottleneck is not the I/O; the issue is simply the software, given the number of virtual machines.

    For example, SolusVM's KVM implementation does not have hugepage support, along with many other things they still need to add. The actual issues we face with SolusVM and KVM are CPU issues, on top of the slow I/O.

    While using Proxmox we have not had any CPU issues, except when we allocate a high number of CPUs to a guest; then the VPS consumes a lot of CPU and the server load jumps from 50% to 500%.

    We do not have any I/O issues when using Proxmox either.

    The same issue happens with SolusVM, but there it is a daily occurrence whenever a VM reboots.
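    On the hugepage point above: reserving hugepages on a plain KVM host is just a sysctl, so it's easy to test whether it helps before expecting the panel to support it. A rough sketch (run as root; the guest still has to be configured to use hugepage-backed memory, e.g. via libvirt's memoryBacking/hugepages setting, and the 4 GiB reservation below is only an example):

    Code:
    # Reserve hugepages on the host and show what the kernel actually granted.
    # Sketch only: run as root; 2048 x 2 MiB pages = 4 GiB, purely an example.
    PAGES = 2048

    with open("/proc/sys/vm/nr_hugepages", "w") as f:
        f.write(str(PAGES))

    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(("HugePages_Total", "HugePages_Free", "Hugepagesize")):
                print(line.rstrip())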
