  1. #1

    Colo architecture questions

    Hi guys, I'm currently in the process of purchasing hardware as we move from dedicated hardware at SoftLayer to a local colo facility. I want to make sure I'm getting everything I need.

    30 application servers (dual hex-core, 72GB RAM)
    3 large DB servers
    1 SAN or 2 storage servers (haven't decided yet)
    2 HAProxy boxes for load balancing

    That's the easy part; now the questions.

    The app servers currently communicate with HAProxy and the DB servers via a private VLAN, so we will need dual 1Gb NICs in the machines and 2 switches (one for the private network and one for the public network).

    One issue we have is that the 30 app servers almost saturate a 1Gb interface currently, so I'm not sure whether we can bond two 1Gb NICs together. Or would we be better off getting a 10GbE card for the HAProxy machine? If we go with the 10GbE solution, I have seen several switches that have 4 10GbE uplink ports, so we could get a small 4-port 10GbE switch to wire it all up?

    If we go with a SAN, I'm also concerned about the 1Gb interface speed being a bottleneck, so the same question as above applies.

    From a cost point of view it doesn't make sense to run 10GbE everywhere, since our app servers will never saturate a 1Gb port, but for the load balancers and storage servers it does. I have never wired up a mixed network like that.

    Next up, the datacenter is only a few blocks from our office, so I don't necessarily need KVM over IP, but remote reboot is a must. Do I just need some APC MasterSwitches? Do those just connect to the switch for outside access?

    The servers I'm looking at do have KVM ports. If we did decide to set up KVM over IP, do we just need to get a separate KVM switch and route it to another outside IP?

    Thanks guys, I know it's a lot. I have been managing a decent-sized cluster of machines for several years, so I have no issues there. I just want to make sure I'm not forgetting anything obvious on the network front.

  2. #2
    Join Date
    Sep 2010
    Posts
    266
    If you would be fine either not using one of the 10 GigE uplinks for your actual internet uplink, or only using one connection to each HAProxy box (no failover), then I think it could work for you. You'll just want to make sure that the switch is good enough quality that it can switch at wire speed like that (most enterprise-level gear should be). In any case, I would probably rather go with 10 GigE than try to do a port channel.

    For remote access, if you are still sourcing hardware, you might just want to look for boxes that have IPMI. Supermicro hardware should definitely make it easy - I believe you can set up IPMI to use the same network link as the main connection, so you don't have to run separate cables for it.
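    For the remote-reboot requirement specifically, once the IPMI interfaces sit on a reachable management network, a few lines of scripting around ipmitool are enough. A minimal sketch, assuming ipmitool is installed; the BMC addresses and credentials below are placeholders, not details from this thread:

    ```python
    #!/usr/bin/env python3
    """Remote power control over IPMI using ipmitool.

    Sketch only: assumes ipmitool is installed and each server's BMC is
    reachable on a management VLAN. Addresses and credentials are placeholders.
    """
    import subprocess

    BMC_HOSTS = ["10.10.0.11", "10.10.0.12"]   # hypothetical BMC addresses
    IPMI_USER = "admin"                        # placeholder credentials
    IPMI_PASS = "changeme"

    def ipmi(host, *args):
        """Run one ipmitool command against a BMC and return its output."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", host,
               "-U", IPMI_USER, "-P", IPMI_PASS, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        for host in BMC_HOSTS:
            print(host, ipmi(host, "chassis", "power", "status").strip())
            # To hard power-cycle a hung box, uncomment:
            # ipmi(host, "chassis", "power", "cycle")
    ```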

  3. #3
    Join Date
    Nov 2005
    Location
    Michigan, USA
    Posts
    3,872
    To give you some advice: yes, you can bond interfaces together, so you could push 2Gbps up/down. This would probably be fine for your database/app servers, but your SAN would still be lacking if 30+ servers are trying to access it.

    You should also have a minimum of 2 storage servers with redundancy; having 30+ servers rely on one server for any important data is asking for disaster.

  4. #4
    I didn't even think about IPMI. I just double-checked, and the machines I am looking at do support IPMI and LOM, so that takes care of that issue. Thank you!

    Would something like this work on the switch side? http://www.ebay.com/itm/Cisco-N2K-C2...item51b74a5063

    Set up two of those: one for the private network with the SAN attached to a 10Gb port, and one for the public network with HAProxy on the 10Gb interface?

  5. #5
    Quote Originally Posted by devonblzx View Post
    To give you some advice: yes, you can bond interfaces together, so you could push 2Gbps up/down. This would probably be fine for your database/app servers, but your SAN would still be lacking if 30+ servers are trying to access it.

    You should also have a minimum of 2 storage servers with redundancy; having 30+ servers rely on one server for any important data is asking for disaster.
    I agree. There are two ways we can go here. Currently, when we update our code base, we update it on one central server, then use lsyncd (a live-sync daemon built around rsync) to sync the code across the cluster. I typically prefer this over NFS-mounting a drive because there is no network bottleneck to worry about, no single point of failure, etc. We could continue doing things that way; it works now. I was considering a SAN for simplicity, but it may not be necessary.
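    To make that deployment pattern concrete, here is a minimal push-style sketch of the same idea. This is not the lsyncd setup itself (lsyncd watches the release directory and rsyncs changes automatically); the hostnames and paths are hypothetical:

    ```python
    #!/usr/bin/env python3
    """Push a code release from the central server to every app server.

    Simplified stand-in for the lsyncd approach described above: lsyncd reacts
    to filesystem events and syncs continuously, while this sketch pushes once,
    in parallel. Hostnames and paths are made up for illustration.
    """
    from concurrent.futures import ThreadPoolExecutor
    import subprocess

    APP_SERVERS = [f"app{i:02d}.internal" for i in range(1, 31)]  # hypothetical names
    SRC = "/srv/release/current/"   # trailing slash: sync the directory contents
    DST = "/var/www/app/"

    def push(host):
        """rsync the release to one host over SSH; return (host, exit code)."""
        return host, subprocess.call(["rsync", "-az", "--delete", SRC, f"{host}:{DST}"])

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=10) as pool:
            for host, rc in pool.map(push, APP_SERVERS):
                status = "ok" if rc == 0 else f"rsync exit {rc}"
                print(f"{host}: {status}")
    ```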

  6. #6
    The Nexus 2K is a fabric extender and only works in conjunction with the Nexus 5K's and 7K's; it is not a standalone switch. Look for the Nexus 5010's (a little older, but great 10Gbit switches) or the Nexus 5548's.
    ZFS Storage Blog - http://www.zfsbuild.com/

  7. #7
    Also note - with port channels, you will only get 1Gbit per source/destination pair due to the hashing algorithms. I.e., servers 1, 2, 3, 6, and 8 might all hash to link #1 of a 4-port port channel, and together they would only ever get 1Gbit of throughput to the storage server. With 10Gbit, you can get 10Gbit of throughput. A port channel makes sense if no workload will ever exceed 1Gbit, but if you have individual backend workloads that may exceed 1Gbit, you need to move to 10Gbit.
    ZFS Storage Blog - http://www.zfsbuild.com/
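    To make the hashing behaviour concrete, here is a toy sketch. The XOR hash below is purely illustrative (real switches use vendor-specific algorithms), but it shows the effect: every server/storage pair always maps to the same member link, so no single flow ever sees more than 1Gbit no matter how many links are in the channel.

    ```python
    #!/usr/bin/env python3
    """Toy illustration of port-channel hashing (not any vendor's real hash).

    Each (source, destination) pair always maps to the same member link, so a
    flow is capped at that one link's speed, and several servers end up
    sharing a single 1Gb member when they all talk to the storage server.
    """
    from collections import Counter
    from ipaddress import ip_address

    MEMBERS = 4                             # a 4x 1Gb port channel
    STORAGE = ip_address("10.0.0.100")      # hypothetical storage server IP

    def pick_link(src, dst, members=MEMBERS):
        """Toy hash: XOR the integer forms of the IPs, modulo the member count."""
        return (int(src) ^ int(dst)) % members

    if __name__ == "__main__":
        servers = [ip_address(f"10.0.0.{i}") for i in range(1, 31)]  # 30 app servers
        load = Counter(pick_link(s, STORAGE) for s in servers)
        for link in range(MEMBERS):
            print(f"member link {link}: {load[link]} servers share one 1Gb path")
    ```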

  8. #8
    Thanks Matt, that makes sense. I don't think any individual server will ever saturate the 1Gb connection, but in total, they would definitely saturate the SAN unless we consider 10G.

  9. #9
    1EighT,

    If you are saturating 1Gig links, you could always bond ports on your machines. For your storage server(s), bond 4x 1Gig links; for the other servers, bond 2x 1Gig links. If you think you will saturate that, then you really need to go to 10Gig networking. If you are pushing a lot of IOPS, look into round-robin bonding. Other bond types you can try are LACP and ALB - whatever gives you the best performance based on your applications.

    If you get a Juniper switch such as the EX4300, you will have 4 optional ports for 10Gig transceivers. Give your storage server 1-2x 10Gig ports, and your other servers bonded 1Gig ports. Most likely your storage server will be the bottleneck if it isn't on some sort of 10Gig port.

    For IPMI, just get a really cheap dumb switch. Plug it into your main switch in your cabinet and assign a VLAN to it with some IPs. IPMI can only do 100Mbps currently; that may be changing to 1Gbps on some new motherboards (someone else can confirm that). The IPMI switch should be your cheapest, most pathetic switch - I've seen people use desktop-type switches for this and it's been fine.
    Managed Service Provider - www.OpticIP.com
    Public & Private Cloud
    Solutions | SSD SANs | High IOP's | CDN Solutions
    Phoenix/Chandler AZ Colocation | 48U Cabinets | Data Halls | TIA-942 Tier 4 Facility
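    If you do go the bonding route, it is worth confirming on each box which mode the Linux bonding driver actually ended up in and that every slave link is up. A minimal sketch, assuming the stock Linux bonding driver and an interface named bond0 (adjust to your naming):

    ```python
    #!/usr/bin/env python3
    """Print the bond mode and per-slave link status on a Linux server.

    Sketch only: assumes the stock Linux bonding driver and a bond named
    bond0, whose state is exposed under /proc/net/bonding/.
    """
    from pathlib import Path

    BOND = "bond0"  # assumed interface name

    def bond_summary(name=BOND):
        text = Path(f"/proc/net/bonding/{name}").read_text()
        slave = None
        for line in text.splitlines():
            if line.startswith("Bonding Mode:"):
                print(line)  # e.g. "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"
            elif line.startswith("Slave Interface:"):
                slave = line.split(":", 1)[1].strip()
            elif slave and line.startswith("MII Status:"):
                print(f"  {slave}: {line.split(':', 1)[1].strip()}")
                slave = None

    if __name__ == "__main__":
        bond_summary()
    ```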

  10. #10
    I don't see a firewall anywhere on that list. It'd be expensive to put in one with 1Gbps of throughput, though - maybe more than your budget. You could use a smaller firewall just to protect your management network at least, since that won't need the throughput. Unless you have some other way to access IPMI, I'd recommend putting even a cheap firewall with a VPN in front of it. IPMI has proven vulnerable to attacks before, and you really don't want to put it on public IPs. As previously mentioned, the switch that IPMI connects to can be a dumb switch, even an older 10/100 if it's in good shape.
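    One crude way to sanity-check that IPMI isn't reachable from the public internet is a simple probe run from a host outside the facility. It only tests TCP ports such as the BMC web UI and SSH; the IPMI protocol itself rides on UDP 623, so treat it as a first pass rather than proof of isolation. The addresses below are placeholders:

    ```python
    #!/usr/bin/env python3
    """Crude exposure check: do BMC web/SSH ports answer from the outside?

    Run from a host outside the colo. Only probes TCP (e.g. the BMC web UI);
    IPMI itself uses UDP 623, so this is a first pass, not proof of isolation.
    Addresses are placeholders.
    """
    import socket

    BMC_ADDRESSES = ["203.0.113.11", "203.0.113.12"]   # hypothetical addresses
    PORTS = [80, 443, 22]                              # BMC web UI / SSH

    def is_open(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for host in BMC_ADDRESSES:
            exposed = [p for p in PORTS if is_open(host, p)]
            verdict = f"EXPOSED on {exposed}" if exposed else "no TCP response"
            print(f"{host}: {verdict}")
    ```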

    Definitely make sure you have redundant storage - either a SAN with two filer heads that replicate to each other, or two storage servers. Having storage go down and render 30+ servers useless until it's fixed won't be fun.

  11. #11
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by 1EightT View Post
    Hi guys, I'm currently in the process of purchasing hardware as we move from dedicated hardware at SoftLayer to a local colo facility. I want to make sure I'm getting everything I need.

    30 application servers (dual hex-core, 72GB RAM)
    3 large DB servers
    1 SAN or 2 storage servers (haven't decided yet)
    2 HAProxy boxes for load balancing

    You must have been paying a bunch for that many servers with such specs at SoftLayer... what were you guys thinking, that it took you so long to figure out colocation is the way to go when you're running that many servers?!?
    mission critical!

  12. #12
    Join Date
    Apr 2009
    Location
    San Jose
    Posts
    69
    A really good enterprise switch that is an amazing deal is a used 1U Cisco 4948. It is an old design, originally from 2004, but it performs exceptionally well today running current Cisco IOS software with all the typical enterprise-class features you would expect from Cisco. When it was originally released it was expensive, but now you can pick up a 48-port version with redundant hot-swap power supplies for $700-800. It is an L3 switch capable of routing IPv4 traffic at wire speed, and you can set up VLANs to segment your IPMI traffic. There is also a 10G version that provides two 10G uplink ports, which you can usually pick up for a few hundred more. At this price per port you can easily set up two of them in a redundant architecture, and they work wonderfully.

    You can use it to bond two or more server connections using LACP (802.3ad), which will be rock-solid reliable.

  13. #13
    Join Date
    Nov 2007
    Location
    Munich
    Posts
    94
    You will have to invest quite a lot in the hardware, so I suggest you ask a consultant to do the network design for you if you don't have the corresponding experience. Even if a configuration will "theoretically" work, that doesn't mean it will be supported by the hardware vendor. For example, if you have Dell storage with a 10G port, you cannot use it with servers with 1G interfaces (you can, but Dell will not sell it to you that way, and will not help you with any issues, like connectivity problems, that may occur).

  14. #14
    Join Date
    Apr 2007
    Posts
    3,513
    Looking at the hardware list, it seems to be a big application you're running, so is resiliency a concern for you?

    From the post you seem to be looking at two switches (one for public and another for private network communications); however, if you lose either of these switches you will basically lose all of your traffic from that rack.

    Personally I would look at two public switches that are interconnected, then put half of your hardware on each one, and do something similar for the private network. That way if you lose either switch you only have an outage of half of the equipment.

  15. #15
    Join Date
    Nov 2007
    Location
    Munich
    Posts
    94
    Outage of half of the equipment?

    Sorry, no!!!

    It's standard now to use 2 switches with automatic failover. In case of a switch failure, nothing would happen...

  16. #16
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by Handtake_GmbH View Post
    Outage of half of the equipment?

    Sorry, no!!!

    It's standard now to use 2 switches with automatic failover. In case of a switch failure, nothing would happen...
    Highly agree... when it comes to colocation, everything, and I mean everything, needs to be dual in case of any problem... and backups onsite and offsite, both continuous and incremental.

    But with today's multi-node servers, I think maximizing space is no longer the main challenge.
    mission critical!

  17. #17
    Quote Originally Posted by nokia3310 View Post
    You must have been paying a bunch for that many servers with such specs at SoftLayer... what were you guys thinking, that it took you so long to figure out colocation is the way to go when you're running that many servers?!?
    18K a month currently.

    It's not a matter of not knowing; it's a matter of having the time to set up all-new infrastructure and plan the move. We also needed the added cash flow to purchase all of the hardware and such.

  18. #18
    Quote Originally Posted by Handtake_GmbH View Post
    Outage of half of the equipment?

    Sorry, no!!!

    It's standard now to use 2 switches with automatic failover. In case of a switch failure, nothing would happen...
    Completely agree. The Cisco 4948's are inexpensive enough to make that an easy decision.

    On the SAN side, I was looking at a Dell EqualLogic PS100E, which has dual redundant controllers and dual redundant power supplies. I would think that would be sufficient.

    The only other major decision is whether to go 10Gb for the SAN and load balancers. I suppose we could just bond the ports until it becomes a problem. It isn't an issue now, but rather something we need to at least think about for 6 months or a year from now.

  19. #19
    Join Date
    Apr 2009
    Location
    San Jose
    Posts
    69
    Quote Originally Posted by nokia3310 View Post
    everything, and I mean everything, needs to be dual in case of any problem...
    I think you should have triple paths for key elements that have routine maintenance downtime, so that you can stay fully redundant even during planned maintenance. Switches and routers need regular software upgrades, for example, and ISPs, depending on their fault-tolerance configuration, will bring their side down for scheduled maintenance. So for BGP blends you should have at least three separate ISP connections. For your core routing it makes sense to have dual core routers, but you can also gain cheap additional fault tolerance by using an L3 switch with BGP default routes to your least expensive provider; this works especially well if that switch makes for a 3rd path. The idea is to at least keep connectivity flowing, even if you lose things such as bandwidth management, QoS, monitoring, route selection and optimization, etc. during such an unlikely outage.

  20. #20
    Join Date
    Apr 2009
    Location
    San Jose
    Posts
    69
    Quote Originally Posted by 1EightT View Post
    The only other major decision is whether to go 10Gb for the SAN and load balancers. I suppose we could just bond the ports until it becomes a problem. It isn't an issue now, but rather something we need to at least think about for 6 months or a year from now.
    10G is great if you can afford it. It's frustrating that 10G NICs have come down in price so much while switches remain so expensive.

    I actually think the Mellanox SX1036 40G Ethernet solution is a much better value and costs almost the same as 10G. With SSDs pushing the speed barriers on storage, it seems prudent not to invest big bucks in 10G only to see it become a bottleneck in the not-too-distant future. Since the Mellanox supports 10Gb ports as well, it's pretty versatile.

  21. #21
    It really is just a cost factor for us. We're trying to keep the budget down, and as you pointed out, 10Gb switches are still pretty expensive.

  22. #22
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by 1EightT View Post
    18K a month currently.

    It's not a matter of not knowing; it's a matter of having the time to set up all-new infrastructure and plan the move. We also needed the added cash flow to purchase all of the hardware and such.
    At that price I would actually buy all the hardware needed and not pay anything more than $1k-$2k/month recurring for colocation (depending on the bandwidth used). Yes, I would get a 22U from Leaseweb, sign a 3-year contract with them to get some discount, and invest in something like 8 x C6100 (2U, 4 nodes, 2 x L5639 per node, 24 bays) + 2 x C2100 (24 bays) servers, plus the rest like switches, PDUs, etc.

    To make things great, I would even set up something similar in another datacenter for redundancy. Even then you'd pay $4k/month maximum, and that is far cheaper than what you're paying right now.


    Why Leaseweb? Because they will give you the best pricing, especially on power... no one charges less than Leaseweb when it comes to power consumption. They are just the most generous overall in colocation... plus their network is pretty awesome.
    Last edited by nokia3310; 01-06-2014 at 08:45 PM.
    mission critical!

  23. #23
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,208
    Quote Originally Posted by nokia3310 View Post
    At that price I would actually buy all the hardware needed and not pay anything more than $1k-$2k/month recurring for colocation (depending on the bandwidth used). Yes, I would get a 22U from Leaseweb, sign a 3-year contract with them to get some discount, and invest in something like 8 x C6100 (2U, 4 nodes, 2 x L5639 per node, 24 bays) + 2 x C2100 (24 bays) servers, plus the rest like switches, PDUs, etc.

    To make things great, I would even set up something similar in another datacenter for redundancy. Even then you'd pay $4k/month maximum, and that is far cheaper than what you're paying right now.

    Why Leaseweb? Because they will give you the best pricing, especially on power... no one charges less than Leaseweb when it comes to power consumption. They are just the most generous overall in colocation... plus their network is pretty awesome.
    OP has already decided on a facility which is a few blocks away, which I think is a great choice (to be near the building, that is).

  24. #24
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by bhavicp View Post
    OP has already decided on a facility which is a few blocks away, which I think is a great choice (to be near the building, that is).
    Well, that's cool... but if, for example, I could save $5k/month coloing somewhere else, I would go for it. When you have 2 of everything and a serious backup/restore plan, it doesn't make much difference being near or not.
    mission critical!

  25. #25
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,208
    Quote Originally Posted by nokia3310 View Post
    Well, that's cool... but if, for example, I could save $5k/month coloing somewhere else, I would go for it. When you have 2 of everything and a serious backup/restore plan, it doesn't make much difference being near or not.
    I don't think he's going to save $5k/month by going to Leaseweb. Maybe Leaseweb has cheap power, but power isn't a big factor - maybe a few hundred dollars' difference.

    Also, it makes a lot of sense to be near. For one, Leaseweb's remote hands are VERY expensive. The initial setup would rack up a lot of remote hands, and so would general maintenance (changing hard drives, adding new servers). You don't only use remote hands when failures happen, and when something does fail you still have to replace it (sourcing parts, swapping the hardware, and then shipping out the faulty parts). Whether it's now or later, it's going to cost a lot.

  26. #26
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by bhavicp View Post
    I don't think he's going to save $5k/month by going to Leaseweb. Maybe Leaseweb has cheap power, but power isn't a big factor - maybe a few hundred dollars' difference.

    Also, it makes a lot of sense to be near. For one, Leaseweb's remote hands are VERY expensive. The initial setup would rack up a lot of remote hands, and so would general maintenance (changing hard drives, adding new servers). You don't only use remote hands when failures happen, and when something does fail you still have to replace it (sourcing parts, swapping the hardware, and then shipping out the faulty parts). Whether it's now or later, it's going to cost a lot.

    check this http://www.webhostingtalk.com/showthread.php?t=1315113

    USA, Northern Virginia
    ==============================================

    Full Cabinet (48U)
    100Mbps 95th Percentile (Premium Network)
    Uplink Port: 1x 1000Mbps Full-Duplex (Fiber) (N+1)
    IP's: /27 - 32 IPv4 - 27 Usable
    Included Power: 20 amp
    Breaker: 30 amp @ 208V AC
    Power in KW: 4,160 watts (~4kw)

    Pricing (USD):
    • 1-5 Racks $999 per Rack/month (Price/watt (useable): ~ $0.24)
    • 6-10 Racks $799 per Rack/month (Price/watt (useable): ~ $0.19)
    • 11+ Racks $699 per Rack/month (Price/watt (useable): ~ $0.16)

    Set up fee: $500 per rack. Contract: 12 months

    FREE first batch rack-and-stack (racking and cabling) of your servers.

    ONE FREE YEAR of Gold Remote Hands Pack (normally priced at EUR 299 / USD 369 per month)
    Every month you’ll receive 90 minutes of ‘Free Time’* to utilize our engineers to handle routine maintenance/provisioning, priority response times and discounted hourly support rates.


    Pre-payment discounts available:

    3 months -2.5%
    6 months -5%
    12 months -10%
    24 months -20%
    36 months -30%
    Also one has to consider network performance and all that. I don't know of any quality facility that will charge that price. And that is in their US datacenter.

    Plus, I guarantee you, if you agree to a 5-year contract, which makes sense for such a big long-term investment, they will give you bigger discounts and include more remote help. Like I said, when you have 2 of everything and can control everything remotely, remote hands will rarely be needed. People hype the local-facility thing way too much; I mean, unless the facility is literally a few blocks away, even a few hours' drive is still some huge downtime.
    mission critical!

  27. #27
    Join Date
    Sep 2002
    Location
    ohio
    Posts
    132
    Quote Originally Posted by Handtake_GmbH View Post
    You will have to invest quite a lot in the hardware, so I suggest you ask a consultant to do the network design for you if you don't have the corresponding experience. Even if a configuration will "theoretically" work, that doesn't mean it will be supported by the hardware vendor. For example, if you have Dell storage with a 10G port, you cannot use it with servers with 1G interfaces (you can, but Dell will not sell it to you that way, and will not help you with any issues, like connectivity problems, that may occur).
    I agree... I think a consultant is in order to make sure the design is complete and will work before purchasing hardware. Measure twice, cut once.

    This approach will also get you quotes from multiple vendors, and typically VARs get discounts on hardware that they can then pass on to you.

    Disclaimer: I work for a technical consulting firm.
    http://jpaul.me
    @recklessop on Twitter
    VMware vExpert and Storage Geek

  28. #28
    Join Date
    Sep 2002
    Location
    ohio
    Posts
    132
    Quote Originally Posted by 1EightT View Post
    It really is just a cost factor for us. We're trying to keep the budget down, and as you pointed out, 10Gb switches are still pretty expensive.

    Fibre Channel is much more affordable than many think, especially if you go with Cisco UCS chassis-based servers: you get 10Gig in the fabric interconnects, you can direct-connect a Fibre Channel or 10Gig SAN, and then uplink to the rest of the network at 1 or 10Gig.

    Shoot me a PM if you want to hear more... it doesn't have to be formal, but I can give you some design ideas based on what I typically do when designing enterprise infrastructure for private clouds.
    http://jpaul.me
    @recklessop on Twitter
    VMware vExpert and Storage Geek

  29. #29
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by xaero View Post
    Fibre Channel is much more affordable than many think, especially if you go with Cisco UCS chassis-based servers: you get 10Gig in the fabric interconnects, you can direct-connect a Fibre Channel or 10Gig SAN, and then uplink to the rest of the network at 1 or 10Gig.

    Shoot me a PM if you want to hear more... it doesn't have to be formal, but I can give you some design ideas based on what I typically do when designing enterprise infrastructure for private clouds.
    Just sent you a PM
    mission critical!

  30. #30
    Join Date
    Oct 2002
    Location
    Vancouver, B.C.
    Posts
    2,656
    Quote Originally Posted by 1EightT View Post
    30 application servers (dual hex-core, 72GB RAM)
    72GB RAM sounds like Nehalem architecture (triple channel RAM). Unless you're going with used hardware at deep discounts, you should go with the latest E5 v2 (Ivy Bridge-E) servers instead.


    Quote Originally Posted by 1EightT View Post
    If we go with the 10GbE solution, I have seen several switches that have 4 10GbE uplink ports, so we could get a small 4-port 10GbE switch to wire it all up?
    Juniper EX3300's could be a good fit, with 4x 10Gb ports each: one public switch with 2x 10Gb uplinks to the Internet and 2x 10Gb ports for the external side of the load balancers, and a private switch with 2x 10Gb ports for the internal side of the load balancers, 1x or 2x 10Gb ports for the SAN or storage servers, and the database servers just on 1x or 2x 1Gb ports in LACP.

    Quote Originally Posted by 1EightT View Post
    Next up, the datacenter is only a few blocks from our office, so I don't necessarily need KVM over IP, but remote reboot is a must. Do I just need some APC MasterSwitches? Do those just connect to the switch for outside access?
    With IPMI, you can power-cycle servers remotely without having to do it at the outlet level. Nevertheless, you may as well get some rPDUs; APCs are a popular choice. It's just a matter of choosing between vertical 0U and horizontal 1U ones. 0U's may not take up rack units, but depending on the cabinet it may be impossible to pull your switches out with the PDUs in place; if that's the case, you'll want to mount the PDUs as low as possible in the cabinet and put all the switches at the top. I'd also highly recommend ordering a bunch of 1ft and 2ft AC power cables to reduce the amount of clutter.
    ASTUTE HOSTING: Advanced, customized, and scalable solutions with AS54527 Premium Canadian Optimized Network (Level3, PEER1, Shaw, Tinet)
    MicroServers.io: Enterprise Dedicated Hardware with IPMI at VPS-like Prices using AS63213 Affordable Bandwidth (Cogent, HE, Tinet)
    Dedicated Hosting, Colo, Bandwidth, and Fiber out of Vancouver, Seattle, LA, Toronto, NYC, and Miami

  31. #31
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by hhw View Post
    72GB RAM sounds like Nehalem architecture (triple channel RAM). Unless you're going with used hardware at deep discounts, you should go with the latest E5 v2 (Ivy Bridge-E) servers instead.
    Those are really expensive. I would seriously invest in used/refurbished hardware and have 2 of everything, then upgrade when the price of the E5 v2 comes down, 2 years or so from now.
    mission critical!

  32. #32
    So far we've settled on 25 Dell PowerEdge C1100 CS24-TY servers (2x Intel Xeon L5639 hex-core, 64GB of RAM), a pair with less memory for load balancing, 3 with 6 SSDs each for MySQL, a Dell SAN, a pair of Cisco 4948's for the private VLAN and internet connection, and a crappy 48-port switch for IPMI and shell access (the servers all have a dedicated IPMI port).

    We got our quote back from Fortrust, and with 4x 30-amp redundant 208V power connections and 120Mbps (burstable to 350) we should save about $12K a month over SoftLayer. That means our hardware will be paid off in about 2 months! Not bad!

  33. #33
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by 1EightT View Post
    So far we've settled on 25 Dell PowerEdge C1100 CS24-TY servers (2x Intel Xeon L5639 hex-core, 64GB of RAM), a pair with less memory for load balancing, 3 with 6 SSDs each for MySQL, a Dell SAN, a pair of Cisco 4948's for the private VLAN and internet connection, and a crappy 48-port switch for IPMI and shell access (the servers all have a dedicated IPMI port).

    We got our quote back from Fortrust, and with 4x 30-amp redundant 208V power connections and 120Mbps (burstable to 350) we should save about $12K a month over SoftLayer. That means our hardware will be paid off in about 2 months! Not bad!
    Is that for a full rack? So you're paying $6k/month?
    mission critical!

  34. #34
    Correct, it's a full rack (45U). It actually comes to $5,600 or so per month. Our bandwidth is a blend of 4 providers, it's a Tier 3 facility, and we have insanely redundant power, so I'm happy with the price.

  35. #35
    While the actual UCS infrastructure (FIs, chassis, etc.) is relatively inexpensive, the blades are anything but. Unless you're actively using the service profiles for blade replacement, network management, etc., I would recommend looking at something a little less expensive. My former environment consisted of all SuperMicro gear, which was great. My current environment _is_ all UCS gear (5x sets of FIs, 150 blades), and it's fantastic, but overkill for most people's infrastructure needs.
    ZFS Storage Blog - http://www.zfsbuild.com/

  36. #36
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by 1EightT View Post
    Correct, it's a full rack (45U). It actually comes to $5,600 or so per month. Our bandwidth is a blend of 4 providers, it's a Tier 3 facility, and we have insanely redundant power, so I'm happy with the price.
    Great that you're happy with your plan.
    As for me, if I had a budget like that ($5.6k/month), I would have 3 x 48U racks in the US, the Netherlands, and Germany and still have some change left. And I would be ready for even World War 4!.. lol
    mission critical!

  37. #37
    Quote Originally Posted by Matt_Breitbach View Post
    While the actual UCS infrastructure (FIs, chassis, etc.) is relatively inexpensive, the blades are anything but. Unless you're actively using the service profiles for blade replacement, network management, etc., I would recommend looking at something a little less expensive. My former environment consisted of all SuperMicro gear, which was great. My current environment _is_ all UCS gear (5x sets of FIs, 150 blades), and it's fantastic, but overkill for most people's infrastructure needs.
    Thanks for the suggestion. That's probably the least solidified part of my game plan at this point. Any specific models I should look at?

  38. #38
    Join Date
    Oct 2002
    Location
    Vancouver, B.C.
    Posts
    2,656
    Quote Originally Posted by nokia3310 View Post
    Those are really expensive. I would seriously invest in used/refurbished hardware and have 2 of everything, then upgrade when the price of the E5 v2 comes down, 2 years or so from now.
    Not sure how cheaply you're purchasing used Nehalem hardware, but don't just look at the absolute cost of each server; also consider the following (rough numbers are sketched below):
    1) Price/performance. An E5-2620 v2 is about double the performance of an E5620, and they would've cost about the same when new.
    2) Power consumption. With double the performance, you can run half the number of machines, and with better power-saving features you'll actually end up using less power per server as well. This easily cuts your colo costs in half.
    3) Remaining useful life of the hardware. Hardware fails over time: blown capacitors, failed power supplies, etc. Given that they have similar total useful lives, and the Nehalem stuff is probably 3 years old, the new hardware is going to last you 3 years longer.
    4) Lack of updates. Supermicro's recent IPMI firmware issues required an upgrade to the 3.x versions; there haven't been any such firmware updates for Nehalem-generation boards.
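    To weigh points 1 and 2 against each other, it helps to put rough numbers on paper. A back-of-the-envelope sketch; every figure below (prices, performance ratios, wattage, power rate) is a placeholder assumption rather than data from this thread:

    ```python
    #!/usr/bin/env python3
    """Back-of-the-envelope new-vs-used comparison along the lines above.

    All inputs are placeholder assumptions for illustration only; substitute
    real quotes, your own benchmark ratios, and your colo's power pricing.
    """

    def monthly_power_cost(nodes, watts_per_node, usd_per_kwh):
        """Approximate 24x7 power cost per month for a group of servers."""
        return nodes * watts_per_node / 1000 * 24 * 30 * usd_per_kwh

    # Hypothetical per-server figures (performance normalized to one used node):
    used_nehalem = {"price": 600,  "relative_perf": 1.0, "watts": 250}
    new_e5_v2    = {"price": 3000, "relative_perf": 2.0, "watts": 200}

    TARGET_CAPACITY = 30   # need the equivalent of ~30 used nodes
    USD_PER_KWH = 0.15     # placeholder colo power rate

    for name, box in (("used Nehalem", used_nehalem), ("new E5 v2", new_e5_v2)):
        nodes = round(TARGET_CAPACITY / box["relative_perf"])
        capex = nodes * box["price"]
        power = monthly_power_cost(nodes, box["watts"], USD_PER_KWH)
        print(f"{name:>12}: {nodes} nodes, ${capex:,} up front, ~${power:,.0f}/month in power")
    ```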
    ASTUTE HOSTING: Advanced, customized, and scalable solutions with AS54527 Premium Canadian Optimized Network (Level3, PEER1, Shaw, Tinet)
    MicroServers.io: Enterprise Dedicated Hardware with IPMI at VPS-like Prices using AS63213 Affordable Bandwidth (Cogent, HE, Tinet)
    Dedicated Hosting, Colo, Bandwidth, and Fiber out of Vancouver, Seattle, LA, Toronto, NYC, and Miami

  39. #39
    In our testing, we ran our workload on dual hex-core X5675's and dual octo-core E5-2690's. The 2690's were only 7% faster and more than double the cost.

    For us, the L5639's make a lot of sense. They are inexpensive, low-power, and fast. If a machine dies, we replace it and still save money over the newer hardware.

  40. #40
    Join Date
    Oct 2002
    Location
    Vancouver, B.C.
    Posts
    2,656
    Quote Originally Posted by 1EightT View Post
    In our testing, we ran our workload on dual hex-core X5675's and dual octo-core E5-2690's. The 2690's were only 7% faster and more than double the cost.

    For us, the L5639's make a lot of sense. They are inexpensive, low-power, and fast. If a machine dies, we replace it and still save money over the newer hardware.
    The 2690 v2's are actually deca-core, ~16% faster than the v1's, and should easily double the performance of an X5675. Either way, though, the 2690 v1 or v2 should be much more than 7% faster than an X5675, and even the 2620 v2 should slightly outperform the X5675. The microarchitecture improvements from Nehalem to Sandy Bridge were huge, and Ivy Bridge extends the difference even further. Perhaps you had a bottleneck elsewhere in the system?

    The 2690's also aren't very good value: the higher in the line you go, the higher the premium you pay for the performance. The 2670's give you ~93% of the performance for only two-thirds of the cost, and are generally the highest you want to go if value is a concern. The 2620 v2's are the best value, being the lowest in the line with the full feature set. It may be better to do a price/performance comparison between the 2620 v2 and the L5639, with the former being almost 50% faster. From what I can see of most dual-L5639 builds on eBay, a 2620 v2 build would not be 50% more expensive.

    Of course, a direct comparison of the CPUs is meaningless if your application is not CPU-bound.
    ASTUTE HOSTING: Advanced, customized, and scalable solutions with AS54527 Premium Canadian Optimized Network (Level3, PEER1, Shaw, Tinet)
    MicroServers.io: Enterprise Dedicated Hardware with IPMI at VPS-like Prices using AS63213 Affordable Bandwidth (Cogent, HE, Tinet)
    Dedicated Hosting, Colo, Bandwidth, and Fiber out of Vancouver, Seattle, LA, Toronto, NYC, and Miami
