Page 11 of 11
Results 251 to 258 of 258
  1. #251
    Join Date
    Jan 2008
    Posts
    643
    Quote Originally Posted by TheWiseOne View Post
    If that chassis ever needs maintenance, some VMs will go down.
    That is a good point!

  2. #252
    Join Date
    Apr 2009
    Posts
    1,143
    Interesting. The only way you would be able to do this is by running separate "clouds", am I right?

    One would hope OnApp would add some kind of system to rule out same-chassis "RAID" failures.



    cwl@apaqdigital - For SSDs I would assume 10GE would very much be the optimal solution; 2x1GE will be congested quite fast.

    But it depends on your number of copies of the data, as well as the way you run the stripes.
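
    As a rough illustration of that point, here's a back-of-envelope sketch of why 2x1GE congests quickly once SSD writes are replicated. The SSD and link throughput figures are assumptions for illustration, not vendor specs:

```python
# Back-of-envelope check: network bandwidth a storage node needs to keep
# remote copies of its writes in sync. All figures below are rough
# assumptions for illustration, not measurements.

def replication_bw_mbps(ssd_write_mbps, remote_copies):
    """MB/s of network traffic generated by mirroring local writes."""
    return ssd_write_mbps * remote_copies

SSD_WRITE_MBPS = 400           # one SATA SSD, sustained writes (assumed)
DUAL_GBE_MBPS = 2 * 1000 / 8   # 2x1GbE ~ 250 MB/s combined
TEN_GBE_MBPS = 10000 / 8       # 10GbE ~ 1250 MB/s

for copies in (1, 2):
    need = replication_bw_mbps(SSD_WRITE_MBPS, copies)
    print(f"{copies} remote copies: ~{need:.0f} MB/s needed; "
          f"2x1GbE = {DUAL_GBE_MBPS:.0f} MB/s, 10GbE = {TEN_GBE_MBPS:.0f} MB/s")
```

    Even a single SSD with two remote copies outruns 2x1GbE on paper, while 10GbE still has headroom.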
    /maze

  3. #253
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by TheWiseOne View Post
    AFAIK you can't choose nodes to segregate data from so I'm not sure if twins/microcloud is a good choice at all. You can't guarantee the mirrored data won't be between 2 nodes in the same physical chassis. If that chassis ever needs maintenance, some VMs will go down.
    that got me confused a bit...

    a fat-twin 8-noder still has 8 totally independent nodes, each with its own set of disks and its own NIC ports to the outside world. so why are they any different from 8x single-node servers with the same parts? how can a virtual SAN tell whether these disks come from a fat-twin or from 8x single-node servers?

    also, the serviceability differences and pros/cons between an 8-noder and 8x single-node servers have been discussed to death on other threads. either you are fine with all nodes in one chassis sharing only the dual-PSU and a giant mid-plane, or you are not! again, once you overcome the all-eggs-in-one-basket thing, it really doesn't matter whether these nodes are for cloud VMs or VPS VMs or shared server hosting or dedi server hosting.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  4. #254
    Join Date
    Jan 2004
    Location
    Pennsylvania
    Posts
    942
    It's surely personal preference, but I wouldn't do it for systems that are part of a cluster where you are depending on them not all going down at the same time. Likewise, if I wanted to load balance a website across 4 servers I wouldn't use one of these twins either.

    I think it will be more crucial with storage -- storage is finicky. If a chassis loses power you then need to deal with split-brain scenarios, possible data corruption, etc. It wouldn't just be 4 dedicated or VPS nodes that boot up and start services; you'd no doubt need manual intervention. The way it works with VM hosting and OnApp is you then need to figure out which VMs were on the affected servers and have crashed, and reboot them once storage is back in sync. If you're unable to do that, you'd basically need to reboot the entire cloud (you need to write your own scripts to retrieve VMs per datastore and issue reboots quickly) -- and even then it's possible only a subset of VMs on a datastore were affected. It's just asking for a world of hurt. Rack space isn't *that* expensive for most people. Maybe in a very expensive market it'd make sense, but rack space is a very small cost for us compared to power, bandwidth, and server purchases.
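
    The "retrieve VMs per datastore and issue reboots" script described above could be sketched like this. The VM record layout here is a hypothetical placeholder (a real version would pull the inventory from the control panel's API and then issue a reboot call per VM id); the filtering logic is the point:

```python
# Sketch of a "reboot everything on an affected datastore" helper.
# The record shape below is a hypothetical stand-in for whatever the
# control panel's API returns; adapt field names to your actual setup.

def vms_on_datastore(vms, datastore_id):
    """Filter VM records down to those with disks on the affected datastore."""
    return [vm for vm in vms if datastore_id in vm.get("datastore_ids", [])]

def reboot_plan(vms, datastore_id):
    """List of VM ids to reboot once storage is back in sync."""
    return [vm["id"] for vm in vms_on_datastore(vms, datastore_id)]

# Stub inventory standing in for an API response:
inventory = [
    {"id": 101, "datastore_ids": [1]},
    {"id": 102, "datastore_ids": [2]},
    {"id": 103, "datastore_ids": [1, 2]},  # disks spread across both stores
]
print(reboot_plan(inventory, 1))  # [101, 103]
```

    Having this prepared ahead of time is the difference between rebooting a few hundred affected VMs quickly and rebooting the entire cloud.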

    Didn't you say in a different thread though that you see just as many backplanes fail as PSUs, and that you'd had backplanes fail when swapping PSUs? I've never had a backplane fail personally, but your comment certainly scared me off from them (for a while at least).
    Matt Ayres - togglebox.com
    Linux and Windows Cloud Virtual Datacenters powered by Onapp / Xen
    Instant Setup, Instant Scalability, Full Lifecycle Hosting Solutions

    www.togglebox.com

  5. #255
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by TheWiseOne View Post
    ...
    Didn't you say in a different thread though that you see just as many backplanes fail as PSUs, and that you'd had backplanes fail when swapping PSUs? I've never had a backplane fail personally, but your comment certainly scared me off from them (for a while at least).
    very true, i've pointed out many times on other threads the cons of these twin/microcloud configurations. but there are many, many members here who have no fear whatsoever about deploying them, due to necessity and unusual hosting circumstances.

    although i have seen plenty of failed PDBs from dual-PSU units, i've not encountered a failed mid-plane from a twin/twin2 yet. the potential for a mid-plane disaster is there, and it could happen on the very next build or on the 100th twin2 we touch; we just haven't delivered enough twin-type servers to give anyone a true failure rate. i do advocate keeping a twin barebone racked in place so that downtime can be drastically reduced if a mid-plane in production fails on you.

    again, either you buy the idea with no mental fear or you don't.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  6. #256
    Join Date
    Dec 2009
    Posts
    2,297
    It's quite simple. If you are going to use a twin, a microcloud, a fat twin, etc., then you need to be at a scale where you can spare an entire chassis (N+1), and your cloud, or whatever it is, can handle the loss of an entire chassis.

    If you can't spare an entire chassis and rely on other nodes in the same twin chassis as your 'redundancy', that is just negligence. It should *NEVER* happen; you should be using multiple diverse systems. Unless the cloud can go down and not impact anything...
    REDUNDANT.COM • Equinix Data Centers • Performance Optimized Network
    Managed & Unmanaged
    • Servers • Colocation • Cloud • VEEAM
    sales@redundant.com

  7. #257
    Join Date
    Jul 2011
    Location
    ATL,DFW,PHX,LAX,CHI,NJ
    Posts
    700
    Ideally you run every node at X%, where X leaves enough headroom to absorb a failure. Say you have 2 nodes: load both to 40% usage, and if one fails you're only looking at 80% usage on the single node. IO will suck until node 2 is fixed. As you scale, that 40% figure can scale up as well, as long as OnApp balances the redistribution of resources and VMs back across the other nodes that are up.
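
    That 40% -> 80% rule generalizes: with N equally loaded nodes, losing one pushes the survivors to target x N/(N-1). A quick sketch of the arithmetic (node counts and targets are just example numbers):

```python
# Headroom math from the post: N nodes at `target` utilization; after one
# node fails, the survivors absorb its share of the load.

def load_after_failure(n_nodes, target):
    """Per-node utilization after one of n_nodes fails."""
    return target * n_nodes / (n_nodes - 1)

print(load_after_failure(2, 0.40))  # 0.8 -- the 40% -> 80% case above
print(load_after_failure(8, 0.80))  # ~0.914 -- more nodes let you run hotter
```

    The larger the pool, the closer to full you can safely run each node, which is exactly why the redistribution behavior matters as you scale.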

    What I want to see is pricing. The other thing I did not see is whether you can limit IO consumption at a granular, per-plan level like you can with Parallels.
    █ Total Server Solutions
    OnApp Cloud Solutions, CDN, DNS, Load Balancers, and Hybrid Dedicated Servers
    █ Colocation with Colo@
    Visit us at http://www.totalserversolutions.com/

  8. #258
    Join Date
    May 2003
    Posts
    1,708
    You can run at lower load or keep hot spares like a lot of providers do. There are multiple ways to spread the load around so you can take an HV failure.

    To be honest I wouldn't trust my network with Parallels.
    ~~~~~~~~~~~~~~~~~~~~~
    UrNode - Virtual Solutions
    http://www.UrNode.com


