  1. #201
    Join Date
    Oct 2003
    Location
    Hanoi
    Posts
    4,309
    Can you list more details about the issues you've faced, please?

    Quote Originally Posted by marksy View Post
    We've been longtime SM customers and have had several major issues in the past 8 months.

  2. #202
    Join Date
    May 2000
    Posts
    488
    We've had to return 2 motherboards and 2 backplanes for various problems. We then had a 2U Twin blow up; it literally exploded, charring the lid. We went through 3 replacements before getting one that worked entirely: the first had 2 bad motherboards, the second had 1 bad MB and a bad backplane. We have Dells running in the same cabs with zero issues.

  3. #203
    Join Date
    Dec 2001
    Location
    Toronto, Ontario, Canada
    Posts
    6,896
    Quote Originally Posted by gate2vn View Post
    Can you list more details about the issues you've faced, please?
    I'm going to second this. We built a new batch of servers in the spring, on the first shipment of X9SCM-F motherboards (BIOS 1.0A), just over a dozen units.

    Immediately upon receiving the boards, the problems started. With IPMI enabled, the boards took well over a minute to *begin* POST after receiving power (~80-90 seconds before any VGA output); disabling it in the BIOS had no impact, and disabling it in hardware triggered BIOS bugs while only improving POST time marginally (down to ~50-70 seconds).

    In addition to this, the NICs would randomly pick up as eth0/eth1 depending on what OS you were running (i.e. in CentOS 5.5 they picked up in the right order; when installing 5.6 they reversed, an immediate problem for our market/environment). While this could be fixed by forcing the MAC address in the config files (a sketch of that workaround follows below), we weren't about to deploy a platform based on wonky hardware. SM stated this was a physical design issue and could not be fixed by a BIOS release, so we'd have to live with the NIC order being randomly mucked about in the future. If you don't plug both NICs in, this is obviously a serious issue (especially when bailing halfway through the CentOS 5.6 installer over PXE).
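    For anyone fighting the same swap, the workaround looks roughly like this. A rough, untested sketch: the MAC addresses are placeholders you'd replace with the real ones from `ifconfig -a` on each box.

    #!/usr/bin/env python
    # Pin each interface name to a MAC so CentOS brings the NICs up in a
    # fixed order regardless of probe order. The MACs are placeholders.
    import os

    IFCFG_DIR = "/etc/sysconfig/network-scripts"

    # desired name -> MAC of the physical port that should get that name
    PINNED = {
        "eth0": "00:25:90:AA:BB:01",  # placeholder MAC for LAN1
        "eth1": "00:25:90:AA:BB:02",  # placeholder MAC for LAN2
    }

    for name in sorted(PINNED):
        mac = PINNED[name]
        path = os.path.join(IFCFG_DIR, "ifcfg-%s" % name)
        if os.path.exists(path):
            f = open(path)
            # drop any existing HWADDR line so we don't duplicate it
            lines = [l for l in f if not l.startswith("HWADDR=")]
            f.close()
        else:
            lines = ["DEVICE=%s\n" % name, "ONBOOT=yes\n"]
        lines.append("HWADDR=%s\n" % mac)
        f = open(path, "w")
        f.writelines(lines)
        f.close()
        print("pinned %s to %s" % (name, mac))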

    We contacted Supermicro, and after much argument, a series of "problem reports", and an update to BIOS 1.0B, we arranged for an advanced RMA, and a swap for an equivalent number of X9SCL+-F motherboards, which they said just came out to address several of these issues.

    This is where the real nuisance started. The RMA was opened in early May, and they shipped us an equivalent number of units. They overstated the value of the boards considerably (by around 35% over retail), skewing our customs charges. We noticed one of the X9SCL+-F's we received was DOA, and emailed indicating we would return it in the shipment.

    Cutting to the chase, it cost a bundle more to insure the packages for their full stated value (since they overstated the cost so much; the local MBE charges 3% now!). We FedExed them back in the original packing, with the RMA forms included. Supermicro emailed a few weeks after delivery, indicating that they had lost one of the packages (notably, the one including the X9SCL+-F we had included because it arrived DOA in the advanced RMA).

    At this point, SM stated we had to take the matter up with FedEx, and then whined that there was no packing list, which is funny given they claimed to have lost the box (the RMA form was contained within). Then SM stated I should have taken pictures for proof (identical packing, yet +6 pounds on the shipping weight, was apparently not proof enough). Since SM lost the box after they signed for it, obviously I didn't bother involving FedEx.

    To date, we're still battling with Supermicro for refunds (they didn't refund the full advanced RMA fee), a replacement for the X9SCL+-F board that was DOA, and some sort of credit for the difference in value between the two models. It's been 4-5 months at this point since the initial trouble reports/RMA: dozens of emails, the vendor involved, the works.

    SM appears to be missing quality control in a variety of departments nowadays. I don't know about you, but I would never have considered SM gear "throw away when broken" (it's certainly not priced as such), but that's the attitude we got from their RMA dept.

    [/end rant]
    Last edited by porcupine; 09-27-2011 at 12:32 AM.
    Myles Loosley-Millman - admin@prioritycolo.com
    Priority Colo Inc. - Affordable Colocation & Dedicated Servers.
    Two Canadian facilities serving Toronto & Markham, Ontario
    http://www.prioritycolo.com

  4. #204
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by porcupine View Post
    I'm going to second this. We built a new batch of servers in the spring, on the first shipment of X9SCM-F motherboards (BIOS 1.0A), just over a dozen units....
    Well, the issue of LAN1/LAN2 being in reversed MAC order on the X9SCM/X9SCL(-F) boards has been well documented! There is no hardware cure for it except manually editing the ifcfg-ethX files. That's why we recommend X9SCL+-F or X9SCA-F boards to all our clients, so that headache can be avoided.

    It's true that the X9SCM-F has an elevated DOA rate, but ever since we switched to the X9SCL+-F/X9SCA-F exclusively, we've not seen even one single failed X9SCL+-F board after a few hundred were installed in production servers.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  5. #205
    Join Date
    Dec 2009
    Posts
    2,297
    Just curious: if some other company did what SM did with the X9SCL and the LAN1/LAN2 issue...

    One could argue it's a Linux issue and the fix should be a software fix at the OS level, because the hardware does work and does its job.

    So would another company:

    Release an X9SCL+ version to resolve the complaints...

    OR

    Would they simply say: well, you can go to support.oursite.com/EthernetIssue for a guide that explains how to set them back to eth0 and eth1 in Linux?

    Ever thought about it from that perspective?
    REDUNDANT.COM | Equinix Data Centers | Performance Optimized Network
    Managed & Unmanaged
    • Servers • Colocation • Cloud • VEEAM
    sales@redundant.com

  6. #206
    Join Date
    Oct 2003
    Location
    Hanoi
    Posts
    4,309
    We have no experience with the X9SCM board, but the X9SCL+-F seems to be a wonderful board.

  7. #207
    Join Date
    Dec 2001
    Location
    Toronto, Ontario, Canada
    Posts
    6,896
    Quote Originally Posted by cwl@apaqdigital View Post
    Well, the issue of LAN1/LAN2 being in reversed MAC order on the X9SCM/X9SCL(-F) boards has been well documented! There is no hardware cure for it except manually editing the ifcfg-ethX files. That's why we recommend X9SCL+-F or X9SCA-F boards to all our clients, so that headache can be avoided.

    It's true that the X9SCM-F has an elevated DOA rate, but ever since we switched to the X9SCL+-F/X9SCA-F exclusively, we've not seen even one single failed X9SCL+-F board after a few hundred were installed in production servers.
    Yep, we were one of the original parties documenting this with SM back in the spring as noted.

    The board design issue was a problem, but the lack of professionalism in how it was handled was the killer. You can't "lose" something in your warehouse, then tell me it should have had a packing list (because that tells me you're just trying to lie/steal at that point). And you can't complain there's no packing list if there's an RMA form with the items/serial #'s on it. And you can't expect me to care if you lose stuff in your own warehouse/S&H dept. and argue it's the shipping carrier's fault. It's just flat-out ridiculous.
    Myles Loosley-Millman - admin@prioritycolo.com
    Priority Colo Inc. - Affordable Colocation & Dedicated Servers.
    Two Canadian facilities serving Toronto & Markham, Ontario
    http://www.prioritycolo.com

  8. #208
    Join Date
    Jun 2001
    Posts
    480
    Has anyone given the unit a good test yet? Are there any bugs, like the NICs reading in reverse order under CentOS, or IPMI reset issues?

  9. #209
    Join Date
    Jun 2001
    Posts
    480
    Quote Originally Posted by cwl@apaqdigital View Post
    By all means, buy a spare power distribution/hot-swap backplane (BPN-SAS-938H), which is your single point of failure!

    Where can you pick up the BPN-SAS-938H backplane? I don't see anyone selling it online. Any idea how much it costs?

  10. #210
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by Eiv View Post
    Where can you pick up the BPN-SAS-938H backplane? I don't see anyone selling it online. Any idea how much it costs?
    Just call up your vendor to special-order one from SM. SM will also sell accessories/small parts to end users directly, at a premium of course!
    http://www.supermicro.com/products/a...ries/order.cfm

    If you think it's hard to find a spare backplane now, imagine desperately needing a replacement backplane when 8 nodes have already been down for a few days...

    So, no matter what it costs now, get one on hand before you deploy the MicroCloud.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  11. #211
    Join Date
    Aug 2004
    Location
    Kauai, Hawaii
    Posts
    3,799
    I would think most people would just get a whole spare chassis rather than just the backplane. If we're going to put 12-13 in a rack, the extra cost of a whole chassis (so you have PSU spares etc. too) is minimal.

  12. #212
    Join Date
    Dec 2009
    Posts
    2,297
    Quote Originally Posted by gordonrp View Post
    I would think most people would just get a whole spare chassis rather than just the backplane. If we're going to put 12-13 in a rack, the extra cost of a whole chassis (so you have PSU spares etc. too) is minimal.

    I would agree with this; just buy an entire spare chassis. As soon as you stock up on backplanes, a motherboard will go bad. Or some other component ;-)

    http://en.wikipedia.org/wiki/Murphy's_law
    REDUNDANT.COM | Equinix Data Centers | Performance Optimized Network
    Managed & Unmanaged
    • Servers • Colocation • Cloud • VEEAM
    sales@redundant.com

  13. #213
    Join Date
    Dec 2009
    Posts
    2,297
    Sooooooooooooooooo...

    Anyone using these yet?
    REDUNDANT.COM | Equinix Data Centers | Performance Optimized Network
    Managed & Unmanaged
    • Servers • Colocation • Cloud • VEEAM
    sales@redundant.com

  14. #214
    Join Date
    Jun 2009
    Posts
    83
    We are the first and only people in the world to have one. And we got it just after I started this thread, about 4 months ago.

    They're in high demand too; our wholesaler offered to buy it back off us for a profit!

  15. #215
    Join Date
    Jun 2009
    Posts
    83
    Quote Originally Posted by ben_uk View Post
    Well .... we did it, we bought one!

    Officially the first company in the world to take receipt of a MicroCloud
    Tada. My experience with it has been great so far. At the particular site where it has been installed, it has filled a 3U space and stopped us having to take another rack.

  16. #216
    Join Date
    Sep 2011
    Location
    USA
    Posts
    141
    Quote Originally Posted by ben_uk View Post
    Okay, so it's a simplified entry-level blade configuration with a clever marketing name.

    http://www.supermicro.com/products/s...37MC-H8TRF.cfm

    We normally deploy conventional 1U DP servers, but the second socket usually stays empty (it's merely there for scalability), so we may as well be running UP setups.

    The above caught my eye and looks fairly ideal; we've been in negotiations with our Supermicro dealer to see if we can be first movers on the hardware. The maths says it works out about ~£30 more per typical "server" for the MicroCloud, but the power savings, build savings, inbuilt redundancy, and CPU savings should offset that cost in the long run.

    We normally use,

    2.4GHz E5620
    3x 4GB RAM

    So hopefully as an equivalent/better specification, in the MicroCloud we would be running:

    3.3GHz E3-1245
    2x 8GB RAM

    I'm yet to see what the E3-1200s are like; we're getting a demo unit sent out for local comparative testing. But the increased clock and similar cache/DMI say that on paper it should beat a single E5620.

    Is anyone out there running any E3-1200, or even using the MicroCloud? The solution really appeals to us!
    I see the MicroCloud gets 8 systems in 3U... a lot of space and power savings.

  17. #217
    Join Date
    Jun 2009
    Posts
    83
    Official pricing in the UK is now out.

    £2,928.28 for the bare chassis.

  18. #218
    Join Date
    Jun 2004
    Location
    Europe
    Posts
    3,822
    Swiftway has several of these MicroCloud 8-servers-in-3U units in use now, and they perform really nicely. Swapping blades is easy, and making changes to a system's memory can be done much faster than with rackmount servers.
    The big plus is that if a component fails on the mainboard, you can simply swap the whole module, set up IPMI, and the node is back online with the current HDDs. No need to troubleshoot the problem in depth; this greatly reduces downtime for clients on a MicroCloud blade in case of hardware failure.
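    For the "set up IPMI" step on a swapped-in module, what we do amounts to re-applying the node's BMC network settings. A rough sketch; all addresses are placeholders, and the channel should be checked with `ipmitool lan print` first:

    #!/usr/bin/env python
    # Re-apply BMC network settings on a replacement node so its IPMI
    # comes back up on the old address. Run locally on the node.
    import subprocess

    CHANNEL = "1"  # typical LAN channel; verify with `ipmitool lan print`
    SETTINGS = [
        ("ipsrc", "static"),
        ("ipaddr", "10.0.0.11"),          # placeholder BMC address
        ("netmask", "255.255.255.0"),     # placeholder netmask
        ("defgw", "ipaddr", "10.0.0.1"),  # placeholder gateway
    ]

    for args in SETTINGS:
        subprocess.check_call(["ipmitool", "lan", "set", CHANNEL] + list(args))
    print("BMC network settings re-applied on channel %s" % CHANNEL)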

    Two negatives I've found so far:

    - Every node only has access to up to 2 HDDs.
    - Cooling: it seems very dependent on 4 fans, and I'm not sure whether 3 fans would keep the chassis cool enough in case of a fan failure. We will do some testing on this; a rough monitoring sketch follows below.
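    Something along these lines is what we have in mind for that testing: poll every node's fan sensors over IPMI and flag anything slow. A rough sketch; the BMC addresses, credentials, and RPM floor are placeholders, and the ipmitool output parsing is approximate:

    #!/usr/bin/env python
    # Poll fan sensors on each node's BMC and alert on low RPM, so a failed
    # fan in the shared 3U chassis is caught before heat builds up.
    import subprocess

    BMC_HOSTS = ["10.0.0.11", "10.0.0.12"]  # placeholder BMC addresses
    USER, PASSWORD = "ADMIN", "ADMIN"       # placeholder credentials
    MIN_RPM = 2000                          # placeholder alert floor

    for host in BMC_HOSTS:
        out = subprocess.check_output([
            "ipmitool", "-I", "lanplus", "-H", host,
            "-U", USER, "-P", PASSWORD, "sdr", "type", "Fan",
        ]).decode()
        for line in out.splitlines():
            # typical line: "FAN1 | 41h | ok  | 29.97 | 4410 RPM"
            fields = [f.strip() for f in line.split("|")]
            if len(fields) < 5 or not fields[4].endswith("RPM"):
                continue
            rpm = float(fields[4].split()[0])
            if rpm < MIN_RPM:
                print("ALERT: %s %s at %.0f RPM" % (host, fields[0], rpm))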
    Swiftway.net Your Business deserves our Quality - Experts on Hand since 2005. Europe & US locations, we operate our own network AS35017 Support response time <15 minutes 24/7
    Introducing our new Entry level server line ! Support response time <15 minutes 24/7. Technology Fast 50 & Fast 500 award winning for multiple years, Your Business deserves Swiftway Quality.

  19. #219
    Quote Originally Posted by swiftnoc View Post
    Swiftway has several of these MicroCloud 8-servers-in-3U units in use now, and they perform really nicely. Swapping blades is easy, and making changes to a system's memory can be done much faster than with rackmount servers.
    The big plus is that if a component fails on the mainboard, you can simply swap the whole module, set up IPMI, and the node is back online with the current HDDs. No need to troubleshoot the problem in depth; this greatly reduces downtime for clients on a MicroCloud blade in case of hardware failure.

    Two negatives I've found so far:

    - Every node only has access to up to 2 HDDs.
    - Cooling: it seems very dependent on 4 fans, and I'm not sure whether 3 fans would keep the chassis cool enough in case of a fan failure. We will do some testing on this.
    I was concerned about the cooling too. Supermicro stated that 3 fans will keep up long enough for the DC tech to receive an alert and replace the failed one. Demand for these is actually going higher and higher; a few times our guys had to stay really late. MicroClouds are not hard to produce, but they are a big pain to test. There was a problem with the time estimation: we have to ship 30 systems, which is not a lot, but when it comes to testing and firmware upgrades it's 240 separate nodes... Supermicro should think about some way to upgrade all the nodes together, like Intel does with their modular server: 1 file, 10 minutes of time, and everything is up to date. Until then, the part that can be scripted is the inventory, as in the sketch below.
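    A rough sketch; node BMC addresses and credentials are placeholders. It pulls each node's BMC firmware revision over IPMI so you at least know which of the 8 nodes per chassis still need flashing:

    #!/usr/bin/env python
    # Collect the BMC firmware revision from every node over IPMI.
    import subprocess

    USER, PASSWORD = "ADMIN", "ADMIN"               # placeholder credentials
    NODES = ["10.0.1.%d" % n for n in range(1, 9)]  # placeholder BMC addresses

    for host in NODES:
        try:
            out = subprocess.check_output([
                "ipmitool", "-I", "lanplus", "-H", host,
                "-U", USER, "-P", PASSWORD, "mc", "info",
            ]).decode()
        except subprocess.CalledProcessError:
            print("%s: unreachable" % host)
            continue
        for line in out.splitlines():
            if line.startswith("Firmware Revision"):
                print("%s: firmware %s" % (host, line.split(":", 1)[1].strip()))
                break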
    Many DCs are starting to look at these more and more. It's true that they are really easy to maintain.
    Last edited by ICC-USA; 01-10-2012 at 12:02 PM.

  20. #220
    Join Date
    Aug 2004
    Location
    Kauai, Hawaii
    Posts
    3,799
    Any failures yet?

    Also, any news on new versions of this? I guess a dual E5 would be too much heat.

    What about E5 versions of the 2U Twins?

  21. #221
    Join Date
    Jun 2004
    Location
    Europe
    Posts
    3,822
    Quote Originally Posted by gordonrp View Post
    Any failures yet?
    They've worked up to now without any failures. A lot of our clients use them for very CPU-heavy applications, so if there were a heat issue, we would have noticed it by now.
    So we are going to order another batch of Supermicro MicroCloud servers.
    With the E5 we will go with standard rackmounts for now, similar to the ones we use for the E7 platform.
    Swiftway.net Your Business deserves our Quality - Experts on Hand since 2005. Europe & US locations, we operate our own network AS35017 Support response time <15 minutes 24/7
    Introducing our new Entry level server line ! Support response time <15 minutes 24/7. Technology Fast 50 & Fast 500 award winning for multiple years, Your Business deserves Swiftway Quality.

  22. #222
    Join Date
    Jul 2003
    Location
    Waterloo, Ontario
    Posts
    1,132
    It has been some time now, and I was wondering if anybody has any reviews of the MicroCloud? I'm interested to hear people's experiences out there, especially whether anybody has managed to run a cloud on it.

    Thanks guys!
    Right Servers Inc. - Fully Managed VPS and Fully Managed Bare Metal Servers in US & Canada. We want to empower entrepreneurs to grow their business, not their IT headaches. Managed is better.
    High Availability | SSD | DirectAdmin & Softaculous | Daily Backups |Firewall & Security |30 Day Money Back Guarantee


