  1. #1
    Join Date
    Jun 2009
    Posts
    83

    The Supermicro MicroCloud - more than just marketing?

    Okay, so it's a simplified entry-level blade configuration with a clever marketing name.

    http://www.supermicro.com/products/s...37MC-H8TRF.cfm

    We normally deploy conventional 1U DP servers, but the second socket usually stays empty (it's merely there for scalability), so we may as well be running UP setups.

    The above caught my eye and looks fairly ideal; we've been negotiating with our Supermicro dealer to see if we can be first movers on the hardware. The maths says it works out at about £30 more per typical "server" for the MicroCloud, but the power savings, build savings, inbuilt redundancy and CPU savings should offset that cost in the long run.

    We normally use,

    2.4GHz E5620
    3x 4GB RAM

    So hopefully as an equivalent/better specification, in the MicroCloud we would be running:

    3.3GHz E3-1245
    2x 8GB RAM

    I've yet to see what the E3-1200s are like; we're getting a demo unit sent out for local comparative testing. But the increased clock and similar cache/DMI suggest that, on paper, it should beat a single E5620.

    Is anyone out there running the E3-1200s, or even using the MicroCloud? The solution really appeals to us!

  2. #2
    Join Date
    Jun 2001
    Location
    Denver, CO
    Posts
    3,301
    Where are you getting 8GB DIMMS for E3s?
    Jay Sudowski // Handy Networks LLC // Co-Founder & CTO
    AS30475 - Level(3), HE, Telia, XO and Cogent. Noction optimized network.
    Offering Dedicated Server and Colocation Hosting from our SSAE 16 SOC 2, Type 2 Certified Data Center.
    Current specials here. Check them out.

  3. #3
    Join Date
    Jun 2009
    Posts
    83
    From the Supermicro dealer themselves.

  4. #4
    Join Date
    Mar 2004
    Location
    Seattle, WA
    Posts
    2,561
    The E3s are pretty awesome - the fastest thing I've seen yet. There are benchmarks and comparison websites out there; just do a Google search.
    ColoInSeattle - From 1U to cage space colocation in Seattle
    ServerStadium - Affordable Dedicated Servers
    Come visit our 18k sq ft. facility in Seattle!
    Managed Private Cloud | Colocation | Disaster Recovery | Dedicated Servers

  5. #5
    Join Date
    Jun 2001
    Posts
    480
    Wow. This does look interesting. How much does this barebone cost?

  6. #6
    Join Date
    Jun 2001
    Posts
    480
    Ben

    Care to check with your dealer whether you can buy 1 node at a time, or whether you have to purchase 8 nodes per order?

  7. #7
    Join Date
    Jun 2009
    Posts
    83
    I didn't ask, to be honest, as we'll be buying the whole unit fully populated with CPUs + memory (it's just more cost-effective). The above is what Supermicro class as a "SuperServer" (i.e. it includes all required heatsinks/risers etc.).

    The component parts list is at the bottom of the page http://www.supermicro.com/products/s...37MC-H8TRF.cfm - so you might be able to buy it in parts?

    I can't release prices I'm afraid, I would suggest contacting your Supermicro rep.

  8. #8
    Join Date
    Aug 2003
    Location
    /dev/null
    Posts
    2,131
    There's no pricing yet. In the US the price list for that model is supposed to come out at the end of this month, with sales starting in early July.

  9. #9
    Join Date
    Jun 2009
    Posts
    83
    There is pricing ... we've already had a quote.

    The issue is that the units aren't even in mass production yet; the current builds are just laser-cut samples straight from the development team. Obviously these come in at a premium - but it's unclear when mass production of these units is set to start.


  10. #11
    Join Date
    Apr 2006
    Posts
    919
    I see a problem: are the racks of most data centers ready to cool those servers? In addition, they also have quite a high power consumption.

  11. #12
    Join Date
    Jan 2003
    Location
    Budapest, Hungary
    Posts
    154
    It's not that big a power consumption - 1.3 kW max (because it's redundant and at 85% maximum, with Gold certification).
    For cloud you don't need HDDs on the nodes, so cut the electricity cost for the nodes by another 10%.
    It's a good system for a cloud startup; I wonder how much a fully loaded 3U unit costs plus a redundant storage system.
    It may come close to $50-70K for a full high-speed cloud system. On a 2-year lease, even at $100K for a full setup, it's going to be ~$5,000 per month including interest... If we put 20% of that down to memory, it comes close to $1.4 per hour for the RAM, which is pretty good indeed. Electricity comes to about $1,000 extra for the whole cloud, and a rack is ~$500. $6.5k per month for your own 64-core, 256GB RAM system with a few TB of space should be neat...
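
    To make that arithmetic explicit, here's the same estimate as a quick sketch - every figure is just the ballpark from above, nothing quoted:

    # Back-of-the-envelope monthly cost; all numbers are rough estimates, not quotes.
    lease_monthly = 5_000    # $/month: 2-year lease on a ~$100K setup, incl. interest
    electricity   = 1_000    # $/month for the whole cloud
    rack          = 500      # $/month

    memory_hourly = lease_monthly * 0.20 / (30 * 24)      # ~$1.4/hour for the RAM share
    total_monthly = lease_monthly + electricity + rack    # ~$6,500/month all-in

    print(f"RAM share ~${memory_hourly:.2f}/hour, total ~${total_monthly:,}/month")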
    Last edited by Azar-A; 06-17-2011 at 08:06 PM.
    ServerAstra.com website / e-mail: info @ serverastra.com
    HU/EU Co-Location / Managed and Unmanaged VDS & Dedicated servers in Hungary with unmetered connections

  12. Assuming a low-power Xeon E3-1260L Sandy Bridge (2.4GHz, 4C/8T, 45W TDP), 16GB ECC and 1 or 2 SSDs per "hypervisor", power draw from each node should be around 0.5A at 110V; x 8 nodes = ~4A per 5037MC MicroCloud.

    A typical full cabinet with 2x 20A/120V power feeds from the DC should accommodate at least 6 MicroClouds (48 hypervisors, i.e. 192 physical cores or 384 hyper-threaded cores), 1x 1U controller, 2x 3U/4U SAN boxes and 1x 3U/4U backup box, and still have some Us and power left for switches, firewall, etc. That's a full-scale OnApp deployment!
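
    As a rough sketch of that power budget (the 0.5A/node figure is the estimate above; the 80% continuous-load derate on the feeds is an extra assumption on my part):

    # Quick power-budget sketch for the deployment described above. The 0.5 A/node
    # figure and the 2x 20 A/120 V feeds come from the post; the 80% breaker derate
    # is an assumption (the usual continuous-load rule of thumb).
    amps_per_node     = 0.5
    nodes_per_chassis = 8
    microclouds       = 6

    per_chassis = amps_per_node * nodes_per_chassis   # 4 A per MicroCloud
    compute     = per_chassis * microclouds           # 24 A for 6 chassis

    usable = 2 * 20 * 0.8                             # ~32 A usable across both feeds
    print(f"compute draw ~{compute:.0f} A, leaving ~{usable - compute:.0f} A "
          f"for the controller, SAN, backup and switches")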

  13. #14
    Join Date
    Jun 2001
    Posts
    480
    Dell's Viking server isn't listed on their website, so unless it's a very big order, you're unlikely to pick one up any time soon.

  14. #15
    Join Date
    Jun 2009
    Posts
    83
    I did start looking at the Dell last night; I'm getting one of their reps to get back to me on Monday to see how the pricing compares.

    On paper, the MicroCloud has a lot of advantages compared to our standard 1U set-up. We're not too interested in density.

    The chassis we normally deploy have "blower"-type fans, which are stupidly inefficient - that's 2 fans in each chassis, each drawing 0.025A at full RPM. So there's an almost immediate saving of:

    0.025 * 2 * 8 = 0.4A

    Then there's PSU efficiency: our current Gold-class PSUs are 91% efficient versus 94% for the Platinum-class PSUs, a saving of 3% at full draw. But our 600W PSUs aren't at full load - they normally sit at about 17% load:

    600W * 0.03 * 8 = 144W; 144W / 240V = 0.6A; 0.6A * 0.17 ≈ 0.1A

    Then we normally run 3 sticks of RAM; this would be cut to 2 sticks (as it's only dual channel). RAM typically draws about 0.0125A per module, so a saving of:

    1 * 8 * 0.0125 = 0.1A

    Power costs us £60 per amp per month, so that equates to an annual saving of:

    (0.1 + 0.1 + 0.4) * 60 * 12 = 432

    Then there are the rack costs themselves: we spend roughly £130 per 24U per month (empty rack, no transit etc.), which is about £5.40 per U per month. Each MicroCloud will save 5U per typical deployment:

    5 * 5.40 * 12 = 324

    Then there's build time: each 1U server takes about 1 hour to assemble and 20 minutes to install in the rack. This "blade"-type configuration would save about 40 minutes of build time and 20 minutes of racking time per server. Our hourly rate is ~£40, so that's a saving of:

    1 * 40 * 8 = 320

    -----

    So, all in all, the MicroCloud comes in at ~£360 more per server (£2,880 total). But compared against redundant-PSU chassis (which is what the MicroCloud effectively gives you), the additional cost is closer to £160 per server (£1,280).

    If we estimate a hardware lifetime of 24 months, switching works out for us as:

    £756 annual saving * 2 years = £1,512
    + £320 build/set-up saving
    - £1,280 additional purchase cost
    --
    £552 reduction in TCO per MicroCloud deployment.
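
    To make the assumptions easier to poke holes in, here's the same arithmetic written out as a quick script (all figures are our own estimates from above, nothing from the vendor):

    # Same sums as above; every number is our own estimate, not a vendor figure.
    fan_amps = 0.025 * 2 * 8                  # 2 blower fans per 1U chassis, 8 chassis -> 0.4 A
    psu_amps = (600 * 0.03 * 8) / 240 * 0.17  # Gold -> Platinum 3% delta at ~17% load -> ~0.1 A
    ram_amps = 1 * 8 * 0.0125                 # one fewer DIMM per node across 8 nodes -> 0.1 A

    power_saving_yr = (fan_amps + psu_amps + ram_amps) * 60 * 12   # £60 per amp per month -> ~£432/yr
    rack_saving_yr  = 5 * 5.40 * 12                                # 5U saved at £5.40/U/month -> £324/yr
    build_saving    = 1 * 40 * 8                                   # 1 hour saved per node at £40/h -> £320

    extra_purchase = 1_280                    # vs redundant-PSU 1U chassis, per 8-node deployment
    tco_delta = (power_saving_yr + rack_saving_yr) * 2 + build_saving - extra_purchase
    print(f"~£{tco_delta:.0f} lower TCO over 24 months")   # ~£555, within rounding of the £552 above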

    I'm fairly convinced by the economics of it. For our smaller UP deployments it's a no-brainer (more CPU, more RAM, more redundancy). For the DP deployments we can continue using our existing 1U configuration (that is, until Supermicro release a DP MicroCloud).

    My demo unit *should* be ready next week for some benchmark testing on the C204 + E3-1240. If it equals or betters the E5620 setup, we'll be one of the first UK companies (if not the first) to rack up these beauties.

    For anyone's reference, a fully populated unit (8x E3-1240, 16GB (2x8GB) each) comes in under £10k.
    Last edited by ben_uk; 06-18-2011 at 06:34 AM.

  15. #16
    Join Date
    Jun 2001
    Posts
    480
    Not sure why you're comparing the E5620 vs the E3-12xx. Shouldn't you be comparing 8x 1U E3-12xx servers vs one big MicroCloud and seeing whether it's worth the money?

  16. #17
    Join Date
    Jun 2009
    Posts
    83
    I know it seems like an odd comparison, but our typical deployment is 1x E5620 in a DP chassis. After dropping 20-30 servers, almost none of those chassis ever have their second socket populated.

    So the MicroCloud would be aimed at that typical customer. Given that we know the performance of the E5620, we need to make sure the E3-12xx at least matches it on performance - otherwise there's no point considering it.

  17. Quote Originally Posted by ben_uk View Post
    I did start looking at the Dell last night ... For anyone's reference, a fully populated unit (8x E3-1240, 16GB (2x8GB) each) comes in under £10k.
    It seems power draw is the paramount concern for you, so why opt for an 80W E3-1240 or a 95W E3-1245? To make sense of this type of high-density deployment, you need to seriously consider the E3-1260L (45W) or E3-1220L (20W); both are purposely designed by Intel for this sort of "microserver":
    http://newsroom.intel.com/servlet/Ji..._factsheet.pdf

  18. Quote Originally Posted by Eiv View Post
    Dell's Viking server isn't listed on their website, so unless it's a very big order, you're unlikely to pick one up any time soon.
    The Viking (Xeon Lynnfield) server was handled by the Dell DCS (Data Center Solutions) group, and it was true that you could only buy it for very large-scale deployments. But the Viking is a thing of the past now; the new C5125 (Phenom II) / C5220 (Sandy Bridge) are marketed as part of the PowerEdge C series, and you can buy one unit at a time!

  19. #20
    Join Date
    Jun 2009
    Posts
    83
    Quote Originally Posted by [email protected] View Post
    It seems power draw is the paramount concern for you, so why opt for an 80W E3-1240 or a 95W E3-1245? To make sense of this type of high-density deployment, you need to seriously consider the E3-1260L (45W) or E3-1220L (20W); both are purposely designed by Intel for this sort of "microserver":
    http://newsroom.intel.com/servlet/Ji..._factsheet.pdf
    Because we worked under this reasoning for our first rack of 1U servers about 2 years ago, when we used L5420s instead of E5420s (50W vs 80W) - and in practice it made almost no difference. At idle, 25% and 50% load there was almost no discernible difference, and that's the typical load we'd estimate these machines to be under.

    So rather than pay a premium for the E3-1260L and get a much lower clock, we may as well just run 'standard' E3-1240s.

    I don't think the power saving from the L-type CPUs materialises in our particular type of deployment.


  20. #22
    Join Date
    Jun 2009
    Posts
    83
    As long as the CPU performance of the E3-1240 is the same as or better than the E5620's, it's a winner for us!

  21. #23
    Join Date
    Mar 2009
    Posts
    534
    Man, this sure is an interesting configuration! I bet for most people it's not a big deal any more to manage a migration from a MicroCloud blade over to a new dedicated 1U server if they ever need a DP config at a later time. I've heard rave reviews about the Sandy Bridge chips for low power and efficiency, so hopefully it should match the performance you're accustomed to with the current Xeon nodes.

    --Chris
    The Object Zone - Your Windows Server Specialists for more than twelve years - http://www.object-zone.net/
    Services: Contract Server Management, Desktop Support Services, IT/VoIP Consulting, Cloud Migration, and Custom ASP.net and Mobile Application Development

  22. Quote Originally Posted by ben_uk View Post
    As long as the CPU performance of the E3-1240 is the same as or better than the E5620's, it's a winner for us!
    I bet the E3-1240's 3.3GHz core speed and its 3.3GHz memory controller will leave a 2.4GHz E5620 in UP mode behind in a puff of smoke.

    Just comparing MEMTEST86 test speeds: the E3-1230, which we've already sold a lot of, completes a test cycle over the same amount of RAM twice as fast as the E5620 can.

    Keep us posted!
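
    If you want a quick and dirty feel without booting MEMTEST86, something like this crude single-threaded copy test (purely illustrative, not a real memory tester) run on both boxes gives a rough throughput number to compare:

    # Crude memory-throughput check: time one full copy of a 512 MiB buffer.
    import time

    size = 512 * 1024 * 1024          # 512 MiB
    src  = bytearray(size)

    start   = time.perf_counter()
    dst     = bytes(src)              # one full pass of reads + writes through RAM
    elapsed = time.perf_counter() - start

    print(f"copied {size / 2**20:.0f} MiB in {elapsed:.3f}s "
          f"(~{2 * size / 2**30 / elapsed:.1f} GiB/s read+write)")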
    Last edited by [email protected]; 06-18-2011 at 12:30 PM.

  23. #25
    Join Date
    Jun 2009
    Posts
    83
    Quote Originally Posted by [email protected] View Post
    I bet the E3-1240's 3.3GHz core speed and its 3.3GHz memory controller will leave a 2.4GHz E5620 in UP mode behind in a puff of smoke.

    Just comparing MEMTEST86 test speeds: the E3-1230, which we've already sold a lot of, completes a test cycle over the same amount of RAM twice as fast as the E5620 can.

    Keep us posted!


    Quote Originally Posted by ObjectZone View Post
    Man, this sure is an interesting configuration! I bet for most people it's not a big deal any more to manage a migration from a MicroCloud blade over to a new dedicated 1U server if they ever need a DP config at a later time. I've heard rave reviews about the Sandy Bridge chips for low power and efficiency, so hopefully it should match the performance you're accustomed to with the current Xeon nodes.

    --Chris
    Exactly. The only bad thing about the MicroCloud is the lack of vertical scalability - but considering we use Linux RAID for almost everything, it's just a case of pulling the drives and dropping them into a new chassis. A one-hour job at most!
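
    For anyone curious, the post-move check is roughly this - a minimal sketch assuming a plain mdadm software-RAID setup:

    # Reassemble the arrays on the moved drives and confirm they came up healthy.
    import subprocess

    # Scan the md superblocks on the drives and assemble any arrays found.
    subprocess.run(["mdadm", "--assemble", "--scan"], check=False)

    # Look for [UU] (all members present) rather than a degraded array.
    print(open("/proc/mdstat").read())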

  24. #26
    Join Date
    Dec 2004
    Posts
    526
    Quote Originally Posted by skywin View Post
    I see a problem: are the racks of most data centers ready to cool those servers?
    And what about the cooling inside the chassis itself?
    Is the design redundant, or are you going to have 8 servers down if a fan fails?

  25. #27
    Join Date
    Jan 2006
    Location
    Jersey
    Posts
    2,965
    Quote Originally Posted by Maxnet View Post
    And what about the cooling inside the chassis itself?
    Is the design redundant, or are you going to have 8 servers down if a fan fails?
    The fans seem to be redundant. It's hard to imagine SM designing something like this without redundancy in mind.

    http://www.supermicro.com/products/s...37MC-H8TRF.cfm
    Email: info ///at/// honelive.com

  26. #28
    Join Date
    Jun 2009
    Posts
    83
    There's no mention of redundancy, actually; I'll clear it up on Monday.

    I wonder how easy they are to replace whilst in service?

  27. #29
    Join Date
    Dec 2004
    Posts
    526
    The fans seem to be redundant.
    All the spec says is: 4 big fans shared between 8 servers.
    No word on how well it runs with only 3...

    Quote Originally Posted by ben_uk View Post
    I wonder how easy they are to replace whilst in service?
    IF they are supposed to be user replaceable to start with, given that they do not appear on the parts list.
    Last edited by Maxnet; 06-18-2011 at 04:56 PM.

  28. #30
    Join Date
    Jun 2009
    Posts
    83
    Quote Originally Posted by Maxnet View Post
    IF they are supposed to be user replaceable to start with, given that they do not appear in the parts list.
    I bloody hope so - no way on earth are we powering down 8 servers to replace 1 fan!

  29. #31
    Join Date
    Apr 2004
    Location
    Chicago
    Posts
    163
    Quote Originally Posted by ben_uk View Post
    As long as the CPU performance is better/same for E3-1240 vs E5620 - its a winner for us!
    You won't be disappointed by the E3-1240's performance; according to http://cpubenchmark.net/high_end_cpus.html it is a lot more powerful than the E5620.

    I have some customers running E3-12xx chips and they are amazingly fast.

  30. All chassis fans inside Supermicro 2U chassis and up are hot-swappable; I've not seen any exception yet.

    However, how do you find out remotely that a chassis fan has gone out? I imagine these fans are not connected to the 8x hot-swappable server nodes, so you can't monitor fan speed/status from the IPMI console of an individual node. It will be very interesting to see how Supermicro designs/manages/monitors shared components such as the 2x redundant PSUs and the chassis fans.

  31. #33
    Join Date
    Dec 2004
    Posts
    526
    Quote Originally Posted by [email protected] View Post
    All chassis fans inside Supermicro 2U chassis and up are hot-swappable; I've not seen any exception yet.

    However, how do you find out remotely that a chassis fan has gone out?
    There is no manual for the MicroCloud yet, but I was looking at the one for the 8-node Atom 2U Twin 3, which seems somewhat similar in design.
    There, they connect the 2 nearest of the 4 fans to each of the boards on that side.
    So perhaps you can monitor them by logging in to the IPMI on one of the left-hand boards and on one of the right-hand boards.


    6-3 System Fans

    The system has four hot-swappable 8-cm PWM fans to provide the cooling for all
    nodes. The fans connect directly to the backplane but receive their power from the
    serverboard they are connected to logically. Fan speed may be controlled by a
    setting in BIOS (see Chapter 7).

    Fan Configuration

    In the 2U Twin 3, each node (serverboard) controls the fans that reside on its side
    of the chassis. This means that four nodes will share control for two fans. If the
    fan speed settings in BIOS are different for these two nodes, the BIOS setting with
    the higher fan speed will apply. In the event that one of the serverboard drawers is
    removed, then the remaining nodes/serverboards will operate both fans.

    Note: Due to this configuration, all nodes on the same side of the chassis as the
    failed fan must be powered down before replacing the fan.


    System Fan Failure

    If a fan fails, the remaining fans will ramp up to full speed and the overheat/fan fail
    LED on the control panel will blink on and off (about once per second). Replace
    any failed fan at your earliest convenience with the same type and model. See
    note above about powering down the nodes associated with the failed fan before
    replacing.
    Despite being called hot-swap fans, you supposedly have to shut down 4 servers to replace a fan with those.

    (Also note that the system this manual covers uses low-power/low-heat Atoms, so don't automatically assume that the "replace at your earliest convenience" advice applies to the Sandy Bridge MicroCloud as well.
    If only 2 fans are paired with each other, it probably also means that only 1 fan will ramp to full speed on a fan failure, while the other 2 remain unaware of the situation.)
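
    If the chassis fans really are exposed through each node's BMC the way the Twin 3 manual implies, polling them remotely might look something like this (a sketch only; the hosts and credentials are placeholders):

    # Poll fan sensors from one board on each side of the chassis via IPMI.
    import subprocess

    NODES = ["10.0.0.11", "10.0.0.12"]   # placeholder BMC addresses, one per side

    for host in NODES:
        out = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", "ADMIN", "-P", "ADMIN", "sensor"],
            capture_output=True, text=True, check=True,
        ).stdout
        # ipmitool prints one pipe-separated line per sensor; keep the fan rows.
        for line in out.splitlines():
            if line.upper().startswith("FAN"):
                print(host, line)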
    Last edited by Maxnet; 06-18-2011 at 09:18 PM.

  32. #34
    Join Date
    Jun 2001
    Posts
    480
    How do you handle reboots on those MicroCloud/C5220 nodes?

  33. #35
    Join Date
    Apr 2009
    Posts
    1,143
    One would think there should be a chassis iLO/IPMI, as with Dell/IBM/HP blade chassis - anything else would be kind of stupid. That way it could be monitored via SNMP and alarms generated based on that.
    /maze

  34. #36
    Join Date
    Jan 2006
    Location
    Jersey
    Posts
    2,965
    Quote Originally Posted by Eiv View Post
    How do you handle reboots on those MicroCloud/C5220 nodes?
    You do it via the IPMI.

    Although it wouldn't have hurt to put in a small reset/power pin hole.
    Email: info ///at/// honelive.com

  35. #37
    Join Date
    Jan 2006
    Location
    Jersey
    Posts
    2,965
    Quote Originally Posted by [email protected] View Post
    All chassis fans inside Supermicro 2U chassis and up are hot-swappable; I've not seen any exception yet.

    However, how do you find out remotely that a chassis fan has gone out? I imagine these fans are not connected to the 8x hot-swappable server nodes, so you can't monitor fan speed/status from the IPMI console of an individual node. It will be very interesting to see how Supermicro designs/manages/monitors shared components such as the 2x redundant PSUs and the chassis fans.
    As far as replacing the fans goes, if you look at this picture closely http://www.supermicro.com/a_images/p...MC_H8TRF_2.jpg there seems to be some sort of lid on the top, right behind the HDD tray. Maybe you can just pull the server out a little while it's in production and remove the lid to replace the fans.

    Also, the SM page says each node has cooling-zone monitoring, so perhaps you can set the IPMI to send an email if a fan in that "zone" fails. Of course I'm just speculating; best to email SM to find out the specific details.
    Last edited by HNLV; 06-19-2011 at 04:40 AM.
    Email: info ///at/// honelive.com

  36. #38
    Join Date
    Jan 2002
    Location
    USA
    Posts
    4,548
    Hmm, any public word yet on actual barebone pricing?
    vpsBoard - An active resource for all things Virtual Private Servers. Tutorials, Guides, Offers and more!
    Come join the conversation! 90,000 posts and growing daily! The fastest growing hosting forum around!

  37. #39
    Join Date
    Dec 2004
    Posts
    526
    One would think there should be a chassis iLO/IPMI, as with Dell/IBM/HP blade chassis - anything else would be kind of stupid?
    Blades and Intel's modular server products are much more integrated than this.
    E.g. chassis management, an integrated switch; some even have shared storage.

    Problem is that those extras are kinda expensive.


    Quote Originally Posted by HNLV View Post
    Although it wouldn't have hurt to put in a small reset/power pin hole.
    Has a power button next to the UID: http://www.trinitygroup.ru/events/pks2.pdf

  38. #40
    Join Date
    Jun 2009
    Posts
    83
    Quote Originally Posted by Vazapi-Curtis View Post
    Hmm, any public word yet on actual barebone pricing?
    Under £6k - can't confirm specifics just yet; we're in the middle of an order right now. We'll be the first in the UK to get one, so pricing might be a little 'favourable' for us right now.

    I've emailed our Supermicro rep and I'll speak to him tomorrow to confirm fan redundancy, chassis monitoring etc.
