  1. #1
    Join Date
    Jun 2006
    Posts
    304

    Should I go with the DELL C6100 ?? (new to colocation)

    I have started colocating just recently and so far things are GREAT! I managed to buy a dedicated server (yet very, very expensive), got it built and its firmware upgraded (by the retailer), then shipped to my DC in NL, installed, IPMI configured, and before I knew it the server was ONLINE! I just had to configure RAID (which was a little time consuming, although it's not my first time doing RAID), install the OS, and from there the rest was very easy...

    So now I am considering buying my second server. I need lots of CPU power, RAM, and a lot of hard drives, but after coming out recently from a near-bankrupting experience (the previous server, which cost me a LOT!) I feel exhausted already and decided to get something cheaper!

    So I found on eBay (by mere coincidence) the DELL C6100, at half the price of the server I paid for, and it has all the resources I need and maybe more (CPU, RAM, disks, etc...), but it has no hardware RAID controller. The guy on eBay said he can install a hardware controller at $90 per node (doesn't $90 sound too cheap for a good controller??)

    So anyways, now I read that each node must run independently of the others. Is there no way for all resources to be SHARED?

    If not, will hosting a server like that be acceptable in the DC (since it is 4 nodes, like 4 dedicated machines!), and how do I manage every node?

    [COLOR="rgb(65, 105, 225)"]I suppose every one must have a dedicated connectivity to the switch to get WAN access, am I assuming this right[/COLOR] ?!

    I will have to pay my DC for 1 Gbps connectivity for every node, so that is 4x the cost.

    Or [COLOR="rgb(65, 105, 225)"]is there is a way to pay for a single WAN connectivity/port and share that connectivity with the other 3 nodes[/COLOR] ?

    Finally, I am reading about a lot of trouble when it comes to maintaining this server: setting up IPMI (sounds like a pain!), upgrading firmware for various parts of this server, flashing the BIOS and so on. So do you think this server is right for me, or should I stay away from it??

    I have highlighted my questions in blue.

    Thank you,

  2. #2
    Join Date
    Jun 2006
    Posts
    304
    I have just realized there seems to be an error in the color code, so I will ask my questions again, simply:

    Is there a way to pay for a single WAN connection/port and share that connectivity with the other 3 nodes?

    Finally, I am reading about a lot of trouble when it comes to maintaining this server: setting up IPMI (sounds like a pain!), upgrading firmware for various parts of this server, flashing the BIOS and so on. So do you think this server is right for me, or should I stay away from it??

  3. #3
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,208
    I actually bought one of those to test, and it's actually pretty easy. You can easily think of it as 4 different servers, as each has its own NICs and IPMI.

    IPMI was dead easy to set up through the BIOS, and I haven't looked into upgrading it, but it seems to work fine as-is.

    You could buy a small/cheap Gigabit (or 100 Mbit) switch and ask your datacenter to put your single uplink into this switch, then plug your 4 nodes into the switch, which means you'll only be paying for 1 extra U instead of 3 more.

  4. #4
    Join Date
    Jun 2006
    Posts
    304
    Quote Originally Posted by bhavicp View Post
    You could buy a small/cheap Gigabit (or 100 Mbit) switch and ask your datacenter to put your single uplink into this switch, then plug your 4 nodes into the switch, which means you'll only be paying for 1 extra U instead of 3 more.
    You are right, but how do I access the other 3 nodes via IPMI?!

    Will a simple router work instead of a switch!?

  5. #5
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,208
    Quote Originally Posted by aliitp View Post
    You are right, but how do I access the other 3 nodes via IPMI?!

    Will a simple router work instead of a switch!?
    You don't need a router; a simple switch will work, but either one will do.

    You will have to connect the IPMI ports to the switch/router as well. So ideally you will need at least a 12-port switch: 4 ports for eth0, 4 ports for IPMI, plus the uplink and a few spares in case you want another uplink or to connect eth1 on any of the servers.
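    Once all four BMC ports are on that switch, each node's IPMI is just another IP on the network. Something like this (just a sketch; the IPs, username and password are made-up placeholders, and it assumes IPMI-over-LAN is enabled on the BMCs) would let you check all four nodes from one machine:
    [CODE]
    # Sketch: query the power state of all 4 C6100 nodes over IPMI-over-LAN.
    # The addresses and credentials below are placeholders, not C6100 defaults.
    import subprocess

    NODES = ["10.10.10.11", "10.10.10.12", "10.10.10.13", "10.10.10.14"]

    for ip in NODES:
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", ip, "-U", "root",
             "-P", "changeme", "chassis", "power", "status"],
            capture_output=True, text=True)
        print(ip, (result.stdout or result.stderr).strip())
    [/CODE]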

  6. #6
    Join Date
    Mar 2013
    Location
    Orlando, FL
    Posts
    317
    The IPMI on the C6100s is very finicky. You have to upgrade the BIOS and the BMC sometimes, as they will occasionally crash. We have 100+ of these online and I'm only speaking from experience. We have had to upgrade the firmware on the BMC, etc. You can even search on this forum and there's a thread on here by funkywizard regarding the BMC problems.
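    When a BMC does wedge, a cold reset will often bring it back without rebooting the node itself; a minimal sketch, assuming ipmitool is installed on that node's OS:
    [CODE]
    # Sketch: cold-reset a hung BMC from the node's own OS (requires ipmitool).
    import subprocess

    subprocess.run(["ipmitool", "mc", "reset", "cold"], check=True)
    [/CODE]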

    There's an ethernet port on the back of each node that you can plug into for the IPMI. Then you can log in to the BIOS and configure each port with an IP.
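    If you'd rather script it than click through the BIOS on every node, the BMC's LAN settings can usually be set from the installed OS as well. A rough sketch (the 10.10.10.x addresses are placeholders, and LAN channel 1 is an assumption; some BMCs use a different channel number):
    [CODE]
    # Sketch: give a node's BMC a static IP from the installed OS via ipmitool.
    # The 10.10.10.x addresses are examples only - use your own IPMI subnet.
    import subprocess

    def run(cmd):
        print("$ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["ipmitool", "lan", "set", "1", "ipsrc", "static"])
    run(["ipmitool", "lan", "set", "1", "ipaddr", "10.10.10.11"])
    run(["ipmitool", "lan", "set", "1", "netmask", "255.255.255.0"])
    run(["ipmitool", "lan", "set", "1", "defgw", "ipaddr", "10.10.10.1"])
    run(["ipmitool", "lan", "print", "1"])   # verify the settings stuck
    [/CODE]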

  7. #7
    Join Date
    Oct 2001
    Location
    Miami,FL
    Posts
    612
    CloudComputingLV, any other concern, or just the IPMI issue?
    Joman Sierra
    http://www.dominet.net

  8. #8
    Join Date
    Jun 2009
    Location
    Los Angeles, California
    Posts
    143
    Quote Originally Posted by CloudComputingLV View Post
    The IPMI on the C6100s is very finicky. You have to upgrade the BIOS and the BMC sometimes, as they will occasionally crash. We have 100+ of these online and I'm only speaking from experience. We have had to upgrade the firmware on the BMC, etc. You can even search on this forum and there's a thread on here by funkywizard regarding the BMC problems.

    There's an ethernet port on the back of each node that you can plug into for the IPMI. Then you can log in to the BIOS and configure each port with an IP.
    How much power does each of the units use?

  9. #9
    Join Date
    Oct 2000
    Posts
    1,653
    Quote Originally Posted by CloudComputingLV View Post
    The IPMI on the C6100s is very finicky. You have to upgrade the BIOS and the BMC sometimes, as they will occasionally crash. We have 100+ of these online and I'm only speaking from experience. We have had to upgrade the firmware on the BMC, etc. You can even search on this forum and there's a thread on here by funkywizard regarding the BMC problems.

    There's an ethernet port on the back of each node that you can plug into for the IPMI. Then you can log in to the BIOS and configure each port with an IP.
    Nothing like some other brands/models though. You may have to spend a bit more time on these initially, but once you get them going, they run very well.
    [QuickPacket™] [AS46261]
    Located in Atlanta, GA and Los Angeles, CA
    Dedicated Servers, KVM, Xen & OpenVZ VPS, Co-location, R1Soft Data Backup, Shared & Reseller Hosting

  10. #10
    Quote Originally Posted by CentralHosts View Post
    How much power does each of the units use?
    A C6100 with 4 servers in it will use around 4 A at 120 V (roughly 480 W for the whole chassis, or on the order of 120 W per node).
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  11. #11
    Join Date
    Jul 2012
    Location
    Los Angeles, CA
    Posts
    45
    I help maintain over 100 C6100 nodes for a client in Los Angeles. They are very reliable as I've only had to RMA one in the last 3 years or so.

    With just one C6100 chassis with 4 nodes, I recommend purchasing a 16-port (or larger) switch with VLAN capability. Each C6100 node has 3 ethernet ports: 1 IPMI and 2 GigE. You can configure your switch with three VLANs: IPMI, LAN and WAN. This setup will let you transfer data between the nodes on the "private LAN" VLAN, which will be useful and may help you save precious public IPs.
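    To make that concrete, one possible port layout for a 16-port switch and a single chassis could look like this (the port numbers and VLAN IDs are arbitrary examples, not a recommendation):
    [CODE]
    # Illustrative port map for a 16-port switch and one 4-node C6100.
    # Port numbers and VLAN IDs are arbitrary placeholders.
    PORT_PLAN = {
        "VLAN 10 (IPMI)": [1, 2, 3, 4],         # one BMC port per node
        "VLAN 20 (private LAN)": [5, 6, 7, 8],  # eth1 of each node, node-to-node traffic
        "VLAN 30 (WAN)": [9, 10, 11, 12, 16],   # eth0 of each node + provider uplink on 16
        "spare": [13, 14, 15],
    }

    for vlan, ports in PORT_PLAN.items():
        print(f"{vlan}: ports {', '.join(map(str, ports))}")
    [/CODE]
    Keeping the node-facing ports as untagged access ports in their VLAN keeps the network config on each node's OS simple, since the nodes never see VLAN tags.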

    Assuming you get a 1U switch, you will need 3 power ports and 3U of rack space.

    Here's a pic of a node that I pulled to upgrade the RAM.

    https://plus.google.com/116722597790...ts/7NgCxLBENeH
    Hands over IP
    Cost Effective and Experienced Remote Hands Service in Los Angeles
    Located outside the USA? We can help purchase and install your gear in LA.

  12. #12
    Join Date
    Jun 2006
    Posts
    304
    Quote Originally Posted by tier3techsupport View Post
    I help maintain over 100 C6100 nodes for a client in Los Angeles. They are very reliable as I've only had to RMA one in the last 3 years or so.

    With just one C6100 chassis with 4 nodes I recommend purchasing a 16 (or more) port switch with VLAN capability. Each C6100 node has 3 ethernet ports: 1 IPMI and 2 Gig E. You can configure your switch with three VLANs: IPMI, LAN and WAN. This setup will let you transfer data between the nodes on the "private LAN" VLAN which will be useful and may help you save precious public IPs.

    Assuming you get a 1U switch you will need 3 power ports and 3u of rack space.

    Here's a pic of a node that I pulled to upgrade the RAM.

    https://plus.google.com/116722597790...ts/7NgCxLBENeH
    Any tutorials on the web for that sort of task ??

  13. #13
    Join Date
    Jun 2009
    Location
    Los Angeles, California
    Posts
    143
    Quote Originally Posted by funkywizard View Post
    A C6100 with 4 servers in it will use around 4 A at 120 V.
    Is that including maxed-out RAM and CPUs?

  14. #14
    Join Date
    Jun 2002
    Posts
    1,376
    Quote Originally Posted by aliitp View Post
    So I found on eBay (by mere coincidence) the DELL C6100, at half the price of the server I paid for, and it has all the resources I need and maybe more (CPU, RAM, disks, etc...), but it has no hardware RAID controller. The guy on eBay said he can install a hardware controller at $90 per node (doesn't $90 sound too cheap for a good controller??)
    Do you actually need hardware RAID? There are lots of obvious reasons that it's a good thing, but if you're on a budget and buying a used server, I wonder if software RAID is good enough for what you're doing. (But I have no idea what you are doing with this, so this could be terrible advice. Take it with a grain of salt.)
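    If software RAID does turn out to be enough, Linux md is about this simple; a minimal sketch, assuming blank data disks at the placeholder names /dev/sdb and /dev/sdc:
    [CODE]
    # Sketch: build a Linux software RAID 1 array with mdadm instead of a hardware card.
    # /dev/sdb and /dev/sdc are placeholder device names - substitute your own data disks.
    import subprocess

    def run(cmd):
        print("$ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["mdadm", "--create", "/dev/md0", "--level=1",
         "--raid-devices=2", "/dev/sdb", "/dev/sdc"])
    run(["mkfs.ext4", "/dev/md0"])          # put a filesystem on the new array
    run(["mdadm", "--detail", "/dev/md0"])  # check members and resync progress
    [/CODE]
    The usual trade-off versus a hardware card is some CPU overhead and no battery-backed write cache, which may or may not matter for your workload.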

    Quote Originally Posted by aliitp View Post
    So anyways, now I read that each node must run independently of the others. Is there no way for all resources to be SHARED?

    If not, will hosting a server like that be acceptable in the DC (since it is 4 nodes, like 4 dedicated machines!), and how do I manage every node?

    I suppose every node must have a dedicated connection to the switch to get WAN access; am I assuming this right?!
    As others have said, the four nodes share nothing but the chassis and the power supply. Each node is totally independent for network / disks.

    You'll want a switch, so you can take a drop from your provider and share it with the four nodes.

    I'm curious if your hosting provider will allow this. Most that I've seen either want you to host a single server, or they'll sell you space in blocks like 10U / quarter / half racks. I suspect that most of the "Server up to 4U" sort of hosting plans will be reluctant to allow you to host a C6100 + switch without trying to charge you extra. It's worth a try, though.

    Quote Originally Posted by aliitp View Post
    Finally, I am reading about a lot of trouble when it comes to maintaining this server: setting up IPMI (sounds like a pain!), upgrading firmware for various parts of this server, flashing the BIOS and so on. So do you think this server is right for me, or should I stay away from it??
    I haven't actually tried to apply any updates on mine. (Mine sits on a private LAN, though, not facing the Internet.) In general, updating firmware/BIOS on just about any machine is a pain in the butt for me.

  15. #15
    Yes, with dual L-series CPUs in each.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  16. #16
    Join Date
    Sep 2008
    Location
    Seattle, WA
    Posts
    1,268
    Quote Originally Posted by fog View Post
    I'm curious if your hosting provider will allow this. Most that I've seen either want you to host a single server, or they'll sell you space in blocks like 10U / quarter / half racks. I suspect that most of the "Server up to 4U" sort of hosting plans will be reluctant to allow you to host a C6100 + switch without trying to charge you extra. It's worth a try, though.
    There shouldn't be any reason why not; we actually encourage some clients that may "ramp up" to use C6100s, power on only 1 node, and pay for more power when they want to turn up more nodes. It allows them to be ready to expand quickly, and the price compared to a C1100 1U node of similar specs is unbeatable.

    Make sure if you are buying colo by the U to tell the provider how many U, power ports, and network drops you will need. With all that info it shouldn't be a problem at all.

    A 1/4 cab with 2 of these and a 24-port switch is some nice density.
    █ Brian Kearney, Stealthy Hosting Inc. Seattle, WA [AS54931] Skype: StealthyHosting
    Affordable Dedicated Servers
    Remote Hands Colocation

    █ Email: [email protected] Phone: 253-880-1233

  17. #17
    Join Date
    Jul 2004
    Location
    New York, NY
    Posts
    2,179
    Over 100 in production, and the IPMI works fine.
    ServGrid - www.servgrid.com - Affordable and Reliable SSD Cloud Solutions
    Premium 10G Network, 2(N+1) Powerplant and SSD Performance
    Web, Reseller, KVM VPS, Storage and Private Cloud Hosting
    Click here to see our SSD Benchmarks!

  18. #18
    Join Date
    Jul 2004
    Location
    New York, NY
    Posts
    2,179
    A C6100 with all 4 nodes on will run about 5 amps, with 2 power plugs for A+B redundancy.
    ServGrid - www.servgrid.com - Affordable and Reliable SSD Cloud Solutions
    Premium 10G Network, 2(N+1) Powerplant and SSD Performance
    Web, Reseller, KVM VPS, Storage and Private Cloud Hosting
    Click here to see our SSD Benchmarks!

  19. #19
    Join Date
    Aug 2010
    Posts
    1,892
    Quote Originally Posted by The Broadband Man View Post
    A C6100 with all 4 nodes on will run about 5 amps, with 2 power plugs for A+B redundancy.
    Is that at 120 volts? Also, at what CPU utilization for all 4 nodes?
    mission critical!

  20. #20
    Join Date
    Oct 2001
    Location
    Miami,FL
    Posts
    612
    The Broadband Man, do you use them attached to a SAN or local storage? SSD?
    Joman Sierra
    http://www.dominet.net

  21. #21
    Join Date
    Jul 2004
    Location
    New York, NY
    Posts
    2,179
    It varies. The VPSes are all connected to an SSD SAN; local storage is 64GB SSDs. The dedicated servers vary based on client demands.
    ServGrid - www.servgrid.com - Affordable and Reliable SSD Cloud Solutions
    Premium 10G Network, 2(N+1) Powerplant and SSD Performance
    Web, Reseller, KVM VPS, Storage and Private Cloud Hosting
    Click here to see our SSD Benchmarks!

  22. #22
    Join Date
    Jun 2006
    Posts
    304
    Quote Originally Posted by tier3techsupport View Post
    I help maintain over 100 C6100 nodes for a client in Los Angeles. They are very reliable as I've only had to RMA one in the last 3 years or so.

    With just one C6100 chassis with 4 nodes I recommend purchasing a 16 (or more) port switch with VLAN capability. Each C6100 node has 3 ethernet ports: 1 IPMI and 2 Gig E. You can configure your switch with three VLANs: IPMI, LAN and WAN. This setup will let you transfer data between the nodes on the "private LAN" VLAN which will be useful and may help you save precious public IPs.

    Assuming you get a 1U switch you will need 3 power ports and 3u of rack space.

    Here's a pic of a node that I pulled to upgrade the RAM.

    https://plus.google.com/116722597790...ts/7NgCxLBENeH
    Any tutorials on the web for how to do that, or for how to set up and maintain such a server?

  23. #23
    Join Date
    Oct 2001
    Location
    Miami,FL
    Posts
    612
    "Setup and maintain" hardware or system/OS?
    Joman Sierra
    http://www.dominet.net

  24. #24
    Join Date
    Jul 2004
    Location
    New York, NY
    Posts
    2,179
    They come with 2 onboard NICs. You can add a 4-port Gigabit NIC or a 2-port 10GbE card.

    On our VPS servers we have 2x Gigabit, 1x IPMI, and 2x 10GbE.

    As for switches, we have cheap Cisco 2950 switches for IPMI... you don't need much better.
    ServGrid - www.servgrid.com - Affordable and Reliable SSD Cloud Solutions
    Premium 10G Network, 2(N+1) Powerplant and SSD Performance
    Web, Reseller, KVM VPS, Storage and Private Cloud Hosting
    Click here to see our SSD Benchmarks!

  25. #25
    Join Date
    Jun 2006
    Posts
    304
    Quote Originally Posted by DomiNET.net View Post
    "Setup and maintain" hardware or system/OS?
    Hardware, of course!

    I suppose if I manage the IPMI thing via the switch, then I will have 4 simultaneous IPMI sessions, each showing that node's current status, like 4 separate running machines. I'm just confused about what the BIOS and the boot will look like since there are 4 nodes, but then again there are 4 BIOSes, right?! Dumb question, but I only just realized that now!

  26. #26
    Join Date
    Oct 2001
    Location
    Miami,FL
    Posts
    612
    Yes, you are getting four "separate" servers. You are only sharing chassis and power supply.
    Joman Sierra
    http://www.dominet.net

  27. #27
    Join Date
    Jan 2010
    Posts
    652
    Broadband Man, which NICs are you using for the 2-port 10GbE?
    We have been looking for a decent half-height 10GbE NIC.

  28. #28
    Join Date
    May 2007
    Posts
    1,979

    Re: Should I go with the DELL C6100 ?? (new to colocation)

    5639 s? ....

  29. #29
    Join Date
    May 2007
    Posts
    1,979

    Re: Should I go with the DELL C6100 ?? (new to colocation)

    I can't see him saying he uses such cards

  30. #30
    Join Date
    May 2007
    Posts
    1,979

    Re: Should I go with the DELL C6100 ?? (new to colocation)

    Shouldn't the proper link be this (in the C6100)?
    https://plus.google.com/app/basic/ph...zjxklmqupg4q04

  31. #31
    Join Date
    Jun 2006
    Posts
    304
    Could someone please recommend a good hardware RAID controller for this server?

    Is it important to get a RAID controller equivalent to the 9270/9260 (a REAL h/w RAID controller), or will any low-end controller be just fine?!

    Taking into consideration that performance is important for my clients running production VMs, I will probably assign only 3 hard drives per node/per RAID controller!

