  1. #1
    Join Date
    Jun 2009
    Posts
    83

    The Supermicro MicroCloud - more than just marketing?

    Okay, so it's a simplified entry-level blade configuration with a clever marketing name.

    http://www.supermicro.com/products/s...37MC-H8TRF.cfm

    We normally deploy conventional 1U DP servers, but the second socket usually stays empty (it's merely there for scalability), so we may as well be running UP setups.

    The above caught my eye and looks fairly ideal; we've been in negotiations with our Supermicro dealer to see if we can be first movers on the hardware. The maths says it works out at ~£30 more per typical "server" for the MicroCloud - but the power savings, build savings, inbuilt redundancy and CPU savings should offset that cost in the long run.

    We normally use,

    2.4GHz E5620
    3x 4GB RAM

    So hopefully as an equivalent/better specification, in the MicroCloud we would be running:

    3.3GHz E3-1245
    2x 8GB RAM

    I've yet to see what the E3-1200s are like; we're getting a demo unit sent out for local comparative testing. But the increased clock and similar cache/DMI say that on paper it should beat a single E5620.

    Is anyone out there running any E3-1200s, or even using the MicroCloud? The solution really appeals to us!

  2. #2
    Join Date
    Jun 2001
    Location
    Denver, CO
    Posts
    3,302
    Where are you getting 8GB DIMMs for E3s?
    Jay Sudowski // Handy Networks LLC // Co-Founder & CTO
    AS30475 - Level(3), HE, Telia, XO and Cogent. Noction optimized network.
    Offering Dedicated Server and Colocation Hosting from our SSAE 16 SOC 2, Type 2 Certified Data Center.
    Current specials here. Check them out.

  3. #3
    Join Date
    Jun 2009
    Posts
    83
    From the Supermicro dealer themselves.

  4. #4
    Join Date
    Mar 2004
    Location
    Seattle, WA
    Posts
    2,580
    The E3s are pretty awesome - the fastest thing I've seen yet. There are benchmarks and comparison websites out there; just do a Google search.
    ColoInSeattle - From 1U to cage space colocation in Seattle
    ServerStadium - Affordable Dedicated Servers
    Come visit our 18k sq ft. facility in Seattle!
    Managed Private Cloud | Colocation | Disaster Recovery | Dedicated Servers

  5. #5
    Join Date
    Jun 2001
    Posts
    480
    Wow. This does look interesting. How much does this barebone cost?

  6. #6
    Join Date
    Jun 2001
    Posts
    480
    Ben

    Care to check with your dealer whether you can buy 1 node at a time, or do you have to purchase all 8 nodes in one go?

  7. #7
    Join Date
    Jun 2009
    Posts
    83
    I didn't ask, to be honest, as we'll be buying the whole unit fully populated with CPUs + memory (it's just more cost effective). The above is what Supermicro class as a "SuperServer" (i.e. it includes all required heatsinks/risers etc.).

    The component parts list is at the bottom of the page http://www.supermicro.com/products/s...37MC-H8TRF.cfm - so you might be able to buy it in parts?

    I can't release prices I'm afraid, I would suggest contacting your Supermicro rep.

  8. #8
    Join Date
    Aug 2003
    Location
    /dev/null
    Posts
    2,132
    There's no pricing yet. In the US, the price list for that model is supposed to come out at the end of this month, with sales starting in early July.

  9. #9
    Join Date
    Jun 2009
    Posts
    83
    There is pricing ... we've already had a quote.

    The issue is that the units aren't even in mass production yet - the current builds are just laser-cut samples straight from the development team. Obviously, these come in at a premium, but it's unclear when mass production of these units is set to start.

  10. #10
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    We discussed Dell's Viking server (3U, 12-node Lynnfield) about 3 months ago:
    http://www.webhostingtalk.com/showthread.php?t=1032341
    and we had some good discussion about whether it made sense or not.

    This SM "MicroCloud" is based on the same concept, but it is 8-node Sandy Bridge only. Dell has already put the PowerEdge C5125 (3U, 12-node AMD Phenom II) up for ordering:
    http://www.dell.com/us/en/enterprise...5&s=biz&cs=555
    and the C5220 (3U, 12-node Sandy Bridge) seems to be on the way.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  11. #11
    Join Date
    Apr 2006
    Posts
    929
    I see a problem: are the racks in most data centers ready to cool those servers? They also have quite high power consumption.

  12. #12
    Join Date
    Jan 2003
    Location
    Budapest, Hungary
    Posts
    231
    The power consumption isn't that big - 1.3 kW max (since the supply is redundant and rated at 85%+ efficiency (Gold certification)).
    For cloud use you don't need HDDs on the nodes, so cut the nodes' electricity costs by ~10%.
    It's a good system for a cloud startup; I wonder how much a fully loaded 3U unit plus a redundant storage system costs.
    It may come close to $50-70k for a full high-speed cloud system. On a 2-year lease, even at $100k for a full setup, it's going to be ~$5,000 per month including interest... If we attribute 20% of that to memory, it comes to roughly $1.4 per hour for the RAM, which is pretty good indeed. Electricity adds about $1,000 per month for the whole cloud, and a rack is ~$500. $6.5k per month for your own 64-core, 256GB RAM, few-TB-of-storage system should be neat...
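    As a rough sanity check on those numbers, here's a minimal Python sketch. All the figures are the assumptions from the estimate above ($100k setup on a 2-year lease, ~$5k/month with interest, 20% of that attributed to 256GB of RAM, ~730 hours per month), not real quotes:

        # Figures are the assumptions from the estimate above, not real pricing.
        monthly_lease   = 5_000     # ~$100k setup over a 2-year lease, incl. interest
        ram_share       = 0.20      # portion of the lease attributed to the 256GB of RAM
        hours_per_month = 730

        ram_cost_per_hour = monthly_lease * ram_share / hours_per_month
        print(f"RAM: ~${ram_cost_per_hour:.2f}/hour for the full 256GB")   # ~$1.37/hour, i.e. the ~$1.4 above

        monthly_total = monthly_lease + 1_000 + 500   # + electricity + rack
        print(f"Total: ~${monthly_total/1000:.1f}k/month")                 # ~$6.5k/month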
    Last edited by ServerAstra - Andrew; 06-17-2011 at 08:06 PM.
    ServerAstra.com website / e-mail: info @ serverastra.com
    HU/EU Co-Location / Managed and Unmanaged Cloud & Dedicated servers in Hungary with unmetered connections

  13. #13
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Assuming a low-power Xeon E3-1260L Sandy Bridge (2.4GHz, 4C/8T, 45W TDP), 16GB ECC and 1 or 2 SSDs per "hypervisor", power draw from each node should be around 0.5A at 110V, so 8 nodes = ~4A per 5037MC MicroCloud.

    A typical full cabinet with 2x 20A/120V power feeds from the DC should accommodate at least 6 MicroClouds (48 hypervisors, i.e. 192 physical cores or 384 hyperthreaded cores), 1x 1U controller, 2x 3U/4U SAN boxes and 1x 3U/4U backup box, and still have some Us and power left for switches, a firewall, etc. That's a full-scale OnApp deployment!
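    A quick Python sketch of that cabinet maths. The per-node draw and node count are from the post above; the 80% continuous-load derating of the feeds is my own assumption:

        # 0.5A/node at 110-120V and 8 nodes/chassis are the figures above;
        # the 80% continuous-load derating of the feeds is an assumption.
        amps_per_node       = 0.5
        nodes_per_chassis   = 8
        amps_per_microcloud = amps_per_node * nodes_per_chassis    # ~4A per 5037MC

        feed_amps   = 2 * 20          # 2x 20A feeds
        usable_amps = feed_amps * 0.8 # 32A continuous

        microclouds = 6
        draw        = microclouds * amps_per_microcloud            # 24A
        headroom    = usable_amps - draw                           # 8A for controller/SAN/backup/switches
        print(f"{draw}A used, {headroom}A headroom")

        hypervisors = microclouds * nodes_per_chassis              # 48
        print(f"{hypervisors} hypervisors, {hypervisors*4} physical cores, {hypervisors*8} threads")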
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  14. #14
    Join Date
    Jun 2001
    Posts
    480
    Dell's Viking server isn't listed on their website, so unless it's a very big order, you're unlikely to pick one up any time soon.

  15. #15
    Join Date
    Jun 2009
    Posts
    83
    I did start looking at the Dell last night; I'm getting one of their reps to get back to me on Monday to see how the pricing compares.

    On paper, the MicroCloud has a lot of advantages compared to our standard 1U set-up. We're not too interested in density.

    The chassis we normally deploy have "blower" type fans which are stupidly inefficient - that's 2 fans in each chassis, drawing 0.025A each at full RPM. So there's an almost immediate saving of:

    0.025A x 2 fans x 8 servers = 0.4A

    Then there's PSU efficiency: our current Gold-class PSUs are 91% efficient versus 94% for Platinum - a 3% saving at full draw. But our 600W PSUs aren't at full load, they normally sit at about 17%:

    600W x 0.03 x 8 = 144W; 144W / 240V = 0.6A; 0.6A x 0.17 ≈ 0.1A

    Then we normally run 3 sticks of RAM; this would be cut to 2 sticks (as it's only dual channel). RAM typically draws about 0.0125A per module, so a saving of:

    1 stick x 8 servers x 0.0125A = 0.1A

    Power costs us £60 per amp per month, so that equates to an annual saving of:

    (0.1 + 0.1 + 0.4)A x £60 x 12 = £432

    Then there are the rack costs themselves. Roughly, we spend about £130 per 24U per month (empty rack, no transit etc.), which is about £5.40 per U per month. Given each MicroCloud saves 5U per typical deployment:

    5U x £5.40 x 12 = £324 per year

    Then there's build time: each 1U server takes about 1 hour to assemble and 20 minutes to install in the rack. This "blade" type configuration would save about 40 minutes of build time and 20 minutes of racking time per server. Our hourly rate is ~£40/h, so that's a one-off saving of:

    1 hour x £40 x 8 servers = £320

    -----

    So, all in all, the MicroCloud works out at ~£360 more per server (£2,880 total) against our current chassis. But compared with redundant-PSU chassis (which is effectively what the MicroCloud gives you), the additional cost is closer to £160 per server (£1,280).

    If we estimate a hardware lifetime of 24 months, switching works out as:

    £756 per year x 2 years = £1,512 power + rack savings
    + £320 saved set-up cost
    - £1,280 additional purchase cost
    --
    = £552 reduction in TCO per MicroCloud deployment.
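    For anyone wanting to sanity-check the arithmetic, here's the whole breakdown above as a small Python sketch. All inputs are the figures quoted in this post and are specific to our own costs (£60 per amp per month, £5.40 per U per month, £40/h labour), so treat them as assumptions rather than general pricing:

        # All inputs are the figures from the post above - our own costs, not list prices.
        nodes = 8

        # Amperage savings per MicroCloud vs 8x 1U servers
        fan_saving = 0.025 * 2 * nodes                  # 2 blower fans per 1U chassis      -> 0.4A
        psu_saving = (600 * 0.03 * nodes / 240) * 0.17  # Gold->Platinum at ~17% PSU load   -> ~0.1A
        ram_saving = 1 * nodes * 0.0125                 # one fewer DIMM per node           -> 0.1A
        amps_saved = fan_saving + psu_saving + ram_saving

        power_saving_per_year = amps_saved * 60 * 12    # £60 per amp per month             -> ~£432
        rack_saving_per_year  = 5 * 5.40 * 12           # 5U saved at £5.40/U/month         -> ~£324
        build_saving          = 1 * 40 * nodes          # ~1h saved per server at £40/h     -> £320 (one-off)

        extra_purchase_cost   = 160 * nodes             # vs redundant-PSU 1U chassis       -> £1,280

        years = 2
        tco_delta = ((power_saving_per_year + rack_saving_per_year) * years
                     + build_saving - extra_purchase_cost)
        print(f"TCO saving over {years} years: ~£{tco_delta:.0f} per MicroCloud")  # ~the £552 above, give or take rounding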

    I'm fairly convinced by the economics of it. For our smaller UP deployments, it's a no-brainer (more CPU, more RAM, more redundancy). For the DP deployments, we can continue using our existing 1U configuration (that is, until Supermicro release a DP MicroCloud).

    My demo unit *should* be ready next week for some benchmark testing on the C204 + E3-1240. If it equals or betters the E5620 setup, we'll be one of the first UK companies (if not the first) to rack up these beauties.

    For anyone's reference, a fully populated unit - 8x (E3-1240, 16GB (2x8GB)) - comes in at under 10k.
    Last edited by ben_uk; 06-18-2011 at 06:34 AM.

  16. #16
    Join Date
    Jun 2001
    Posts
    480
    Not sure why you're comparing the E5620 vs the E3-12xx. Shouldn't you be comparing 8x 1U E3-12xx servers against one MicroCloud and seeing if it's worth the money?

  17. #17
    Join Date
    Jun 2009
    Posts
    83
    I know it seems like an odd comparison, but our typical deployment is 1x E5620 in a DP chassis, and after deploying 20-30 servers, almost none of those chassis ever have their 2nd socket populated.

    So the MicroCloud would be aimed at that typical customer. Given that we know the performance of the E5620, we need to make sure the E3-12xx at least matches it - otherwise there's no point considering it.

  18. #18
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by ben_uk View Post
    On paper, the MicroCloud has a lot of advantages compared to our standard 1U set-up. [...]

    Power costs us £60 per amp per month, so that equates to an annual saving of:

    (0.1 + 0.1 + 0.4)A x £60 x 12 = £432 [...]

    I'm fairly convinced by the economics of it. For our smaller UP deployments, it's a no-brainer (more CPU, more RAM, more redundancy).
    It seems power draw is the paramount concern for you, so why opt for an 80W E3-1240 or 95W E3-1245? To make sense of this type of high-density deployment, you should seriously consider the E3-1260L (45W) or E3-1220L (20W); both are purpose-designed by Intel for this sort of "microserver":
    http://newsroom.intel.com/servlet/Ji..._factsheet.pdf
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  19. #19
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by Eiv View Post
    Dell's Viking server isn't listed on their website, so unless it's a very big order, you're unlikely to pick one up any time soon.
    The Viking (Xeon Lynnfield) server was handled by Dell's DCS (Data Center Solutions) group, and it's true that you could only buy them as part of a very large-scale deployment. But Viking is a thing of the past now - the new C5125 (Phenom II) and C5220 (Sandy Bridge) are marketed as part of the PowerEdge C series, and you can buy one unit at a time!
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  20. #20
    Join Date
    Jun 2009
    Posts
    83
    Quote Originally Posted by cwl@apaqdigital View Post
    It seems power draw is the paramount concern for you, so why opt for an 80W E3-1240 or 95W E3-1245? To make sense of this type of high-density deployment, you should seriously consider the E3-1260L (45W) or E3-1220L (20W); both are purpose-designed by Intel for this sort of "microserver":
    http://newsroom.intel.com/servlet/Ji..._factsheet.pdf
    Because we worked under that reasoning for our first rack of 1U servers about 2 years ago, when we used L5420s instead of E5420s (50W vs 80W TDP) - and in practice it made almost no difference. At idle, 25% and 50% load there was almost no discernible difference, and that's the sort of load we'd expect these machines to be under.

    So rather than pay a premium for the E3-1260L and get a much lower clock, we may as well just run 'standard' E3-1240s.

    I don't think the power saving from the L-type CPUs materialises in our particular type of deployment.
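    To put a rough number on why the L-series premium doesn't stack up for us, here's a small sketch. The ~10W wall-power difference at our typical 25-50% load and the ~£40 CPU premium are assumptions for illustration only - in our experience the on-paper TDP gap simply doesn't show up 1:1 at the wall:

        # Hypothetical illustration - the real-world delta at the wall is much smaller than the TDP gap.
        tdp_gap_w       = 80 - 45   # E3-1240 vs E3-1260L, on paper
        realistic_gap_w = 10        # assumed wall-power difference at our typical 25-50% load
        volts           = 240
        cost_per_amp_pm = 60        # £60 per amp per month (our pricing, from the earlier post)
        cpu_premium     = 40        # assumed extra cost of the L-series part, per CPU

        gap_amps      = realistic_gap_w / volts                # ~0.04A per node
        annual_saving = gap_amps * cost_per_amp_pm * 12        # ~£30/year per node

        print(f"TDP gap on paper: {tdp_gap_w}W; assumed wall delta: {realistic_gap_w}W")
        print(f"~£{annual_saving:.0f}/year power saving vs ~£{cpu_premium} premium "
              f"and a lower clock - payback is marginal at best")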

  21. #21
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Regardless of whether it's the E3-1260L or the E3-1240, the Sandy Bridge platform will use less power than a Xeon Gulftown setup! The chipset (5520 Tylersburg IOH) on a dual-socket 1366 board is rated at 27W TDP, versus 6W TDP for the C204 chipset on a Sandy Bridge socket 1155 board.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  22. #22
    Join Date
    Jun 2009
    Posts
    83
    As long as the CPU performance is the same or better for the E3-1240 vs the E5620, it's a winner for us!

  23. #23
    Join Date
    Mar 2009
    Posts
    568
    Man, this sure is an interesting configuration! I bet for most people it's not a big deal to manage a migration from a MicroCloud blade over to a new dedicated 1U server if they ever need a DP config at a later time. I've heard rave reviews about the Sandy Bridge chips for low power and efficiency, so hopefully it should match the performance you're accustomed to with the current Xeon nodes.

    --Chris
    The Object Zone - Your Windows Server Specialists for more than twenty years - http://www.object-zone.net/
    Services: Contract Server Management, Desktop Support Services, IT/VoIP Consulting, Cloud Migration, and Custom ASP.net and Mobile Application Development

  24. #24
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by ben_uk View Post
    As long as the CPU performance is the same or better for the E3-1240 vs the E5620, it's a winner for us!
    I bet the E3-1240's 3.3GHz core speed and its 3.3GHz memory controller will leave a 2.4GHz E5620 in UP mode behind in a puff of smoke.

    Just comparing Memtest86 speeds: the E3-1230, which we've already sold plenty of, completes a test cycle over the same amount of RAM about twice as fast as the E5620 can.

    keep us posted!
    Last edited by cwl@apaqdigital; 06-18-2011 at 12:30 PM.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  25. #25
    Join Date
    Jun 2009
    Posts
    83
    Quote Originally Posted by cwl@apaqdigital View Post
    I bet the E3-1240's 3.3GHz core speed and its 3.3GHz memory controller will leave a 2.4GHz E5620 in UP mode behind in a puff of smoke.

    Just comparing Memtest86 speeds: the E3-1230, which we've already sold plenty of, completes a test cycle over the same amount of RAM about twice as fast as the E5620 can.

    keep us posted!


    Quote Originally Posted by ObjectZone View Post
    Man, this sure is an interesting configuration! I bet for most people it's not a big deal to manage a migration from a MicroCloud blade over to a new dedicated 1U server if they ever need a DP config at a later time. I've heard rave reviews about the Sandy Bridge chips for low power and efficiency, so hopefully it should match the performance you're accustomed to with the current Xeon nodes.

    --Chris
    Exactly. The only bad thing about the MicroCloud is the lack of vertical scalability - but considering we use Linux RAID for almost everything, it's just a case of pulling the drives and dropping them into a new chassis. A 1-hour job at most!
