  1. #1

    Recommended hardware for (Open)Solaris/ZFS

    Ok, so I have tried every version of Solaris I can find on the following two Supermicro motherboards, and it just won't install.

    X7SBL-LN2
    X7DCA-L

    I've tried 3 different CD-ROM drives and 2 different DVD-ROM drives (one USB)... as I've heard Solaris can be picky about this stuff.

    On one board it gets to where the kernel is supposed to load, but it never does. The other just goes into a reboot cycle...
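
    (For what it's worth, a verbose boot at least shows the last thing the kernel touches before it hangs; on the install media you can edit the GRUB entry on the fly. The flags below are standard Solaris x86 boot options.)

        # at the GRUB menu, press 'e' on the kernel$ line and append -v
        # (add -k as well to drop into kmdb instead of silently rebooting)
        kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -k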

    I Googled a bunch, and it seems the ICH9 chipset isn't really well supported?

    I can't find these motherboards in the compatible hardware lists, but then there aren't many Supermicro motherboards in the lists I can find.

    I'd love some recommendations. I want to test this stuff out, but I've spent a few hours and gotten nowhere so far.

    Thanks,

  2. #2
    Join Date
    Oct 2002
    Location
    Vancouver, B.C.
    Posts
    2,699
    If you're mainly just interested in ZFS, you can also try running FreeBSD instead of (Open)Solaris. ICH9 (or any ICH*) chipsets work great on it.

    I have used both of those motherboards with FreeBSD without issue, although I haven't tried ZFS on them yet. From what I hear, ZFS works quite well on 8-CURRENT.
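
    If you do decide to try it: ZFS isn't enabled out of the box on FreeBSD 7.x. A minimal setup, assuming the stock GENERIC kernel with the zfs module available and ad4/ad6 as stand-in disk names, looks like:

        # /boot/loader.conf -- load the ZFS module at boot
        zfs_load="YES"

        # /etc/rc.conf -- mount ZFS filesystems on startup
        zfs_enable="YES"

        # then create a simple mirrored test pool (device names will differ)
        zpool create tank mirror ad4 ad6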

  3. #3
    BSD scares me more than Solaris.

    But maybe I should give it a shot.

    Thanks, Han.

  4. #4
    Join Date
    May 2002
    Location
    Raleigh, NC
    Posts
    714
    We've run OpenSolaris on:

    X6DVA
    X7DBE+

  5. #5
    Thanks, Mark.

  6. #6
    Join Date
    Oct 2002
    Location
    Vancouver, B.C.
    Posts
    2,699
    Quote Originally Posted by lostmind View Post
    Ok, so I have tried every version of Solaris I can find on the following two Supermicro motherboards, and it just won't install.

    X7SBL-LN2
    X7DCA-L
    Just noticed a review today on Newegg mentioning that OpenSolaris snv_117 works on the X7SBL-LN2:

    http://www.newegg.com/Product/Produc...82E16813182145

  7. #7
    Good eye, Han.

    Let me try that out... if it works, I owe you a coffee or something.

  8. #8
    Join Date
    Jun 2003
    Location
    London, UK
    Posts
    1,765
    Have you tried Sun's compatibility list?

  9. #9
    Yup, though it's not a very complete list, or so I've been told.

  10. #10
    Join Date
    Jun 2003
    Location
    London, UK
    Posts
    1,765
    But it is (usually) the safest! :p

  11. #11
    Join Date
    May 2009
    Posts
    217
    OpenSolaris is far more picky about hardware than Linux is. Having said that, I would also say that if you are serious about ZFS storage (which I assume is the reason you are considering OpenSolaris), then you should forget DIY. Dell and HP hardware is much better supported than DIY Supermicro on large arrays, for not much of a premium.

    Dell:

    SC1435 with 32GB RAM + SAS 5/E HBAs + MD1000 or MD1120s
    R410/R610/R710 with 24GB UDIMM ECC + SAS 5/E HBAs + MD1000 or MD1120

    HP

    DL160 G6 with 24GB UDIMM ECC + HP P411 (surprisingly, these are the first 6Gbps SAS adapters on the market) + MSA70 (2.5-inch) if you want a small form factor. Official Solaris and OpenSolaris support from Sun.

    Last note: don't use FreeBSD 7.2 yet. It is stuck on ZFS version 6 for the release branch. 7.2-STABLE upgraded to ZFS version 13 with L2ARC and ZIL, but that's not an official release yet, and 8.0 is still in beta. OpenSolaris will always be at the forefront of ZFS development (currently at ZFS version 17 with triple-parity RAIDZ3 in the snv_120 build; soon it will have deduplication, BP rewrite, and L2ARC persistence, I hope by OpenSolaris 2010.02). In other words, OpenSolaris is the choice for serious storage going forward at this point.
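
    If you want to verify the version gap yourself, zpool upgrade -v lists every on-disk ZFS version the running system supports, and the pool version is a queryable property ("tank" below is just a stand-in pool name):

        # list the ZFS versions this system supports
        zpool upgrade -v

        # show the version an existing pool is running at
        zpool get version tank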
    Last edited by tshen83; 08-09-2009 at 05:36 PM.

  12. #12
    Join Date
    Jan 2005
    Location
    San Francisco/Hot Springs
    Posts
    991
    I hate to say this, but unless you're going to run REAL Solaris, don't use ZFS.
    It's not worth it, and it's not safe on either OpenSolaris or FreeBSD. FreeBSD has been making a lot of progress in getting ZFS to work right, but I still wouldn't use it for anything important. I use FreeBSD exclusively and try ZFS from time to time; it still fails a lot of my torture tests.

  13. #13
    Quote Originally Posted by tshen83 View Post
    OpenSolaris is far more picky about hardware than Linux is. [...] Dell and HP hardware is much better supported than DIY Supermicro on large arrays, for not much of a premium. [...] In other words, OpenSolaris is the choice for serious storage going forward at this point.
    For now, I just want to test it out. Everyone is tooting the ZFS horn. We've been able to get some pretty decent performance out of AoE, and honestly, I kind of like it. iSCSI is actually a bit slower in our tests/on our hardware. I'd love to see what ZFS can do, or at least get a hint of its potential.

    That said, honestly, Dell and HP have a huge markup compared to Supermicro hardware. I'm in Canada, and from what I see, Dell and HP do not compete on price here at all. Of course they say they do, but the quotes I have from my Dell rep with "extremely aggressive pricing" (his words) are total crap: nearly 1.8x more expensive than building the same machine/specs with Supermicro hardware and sourcing other parts through my warehouse accounts.

    The only "deal" is that they offer leasing. Bleh. I hate using credit and having monthly payments.

    Dell doesn't get aggressive when you have no proven track record for purchasing hardware in Canada, I guess? I've only purchased monitors, laptops, and KVMs through Dell so far.

    But you know what? I appreciate the advice and will annoy my Dell rep further. Let's see what he says.

  14. #14
    Join Date
    May 2009
    Posts
    217
    Yeah, I didn't realize you are in Canada. Dell and HP do have competitive pricing on low-end gear relative to Supermicro in the US. Unfortunately, most US companies have a policy of charging Canadians and Europeans more because of the currency disparity (partially due to the cost of hedging currency on the forex market, but it's also expected that US companies will give a little edge to US consumers).

    Having said that, the trick to ordering Dell is to never upgrade the CPU or RAM. If I were you, I would call up Dell and order the following:

    R410 with dual E5504 (the price premium on those is less than $100 over Intel 1K pricing);
    Dell tries to charge too much for anything higher.

    Get the minimum RAM you can (2GB UDIMM), order an empty MD1000 chassis from Dell resellers on eBay, and put the hard drives in yourself.

    Crack open the R410 once you get it, max out the RAM slots with 4GB DDR3 ECC DIMMs (about $100 a DIMM nowadays), and add a SAS 5/E card yourself if you have to.

    That way, you should be able to pull it off without much of a premium over Supermicro gear, and you get much better support should ZFS act up on you.

    Oh, to add more: AoE is a joke compared to ZFS. The thing people complain about with ZFS performance is hard-drive-only arrays. Adding a ZIL and an L2ARC will boost ZFS performance about 8-10x when serving iSCSI or NFS.

    I would fill your MD1000 with 2x Intel X25-E, 2x Intel X25-M Gen2 80GB, and 11x Western Digital Caviar Blacks (TLER turned on, set to 7 seconds). You need 4x 2.5-inch-to-3.5-inch adapters for this. The 11 Caviar Blacks should be set up as ZFS two-way mirrors with one hot spare; the X25-Es are ZILs and the X25-Ms are L2ARC caches, giving you 160GB of flash cache on top. You will be amazed by what it can do.
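
    As a rough sketch, building that pool would look something like this (all device names below are made up; substitute whatever format reports on your box):

        # 11x Caviar Black -> five 2-way mirrors + one hot spare;
        # 2x X25-E as a mirrored log (ZIL), 2x X25-M as cache (L2ARC)
        zpool create tank \
            mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
            mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0 \
            mirror c1t8d0 c1t9d0 \
            spare c1t10d0 \
            log mirror c2t0d0 c2t1d0 \
            cache c2t2d0 c2t3d0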

    I am actually waiting for the X25-E Gen2 to come out. Supposedly it has power-safe write caching, so 4K random-write IOPS will be through the roof for ZILs.
    Last edited by tshen83; 08-12-2009 at 12:20 AM.

  15. #15
    Join Date
    May 2009
    Posts
    217
    Quote Originally Posted by appliedops View Post
    I hate to say this, but unless you're going to run REAL Solaris, don't use ZFS.
    It's not worth it, and it's not safe on either OpenSolaris or FreeBSD. FreeBSD has been making a lot of progress in getting ZFS to work right, but I still wouldn't use it for anything important. I use FreeBSD exclusively and try ZFS from time to time; it still fails a lot of my torture tests.
    Did you know that Sun's latest 7410 and 7310 storage appliances all run on top of OpenSolaris? If Sun can trust OpenSolaris for their top-end storage gear, I should be able to trust it.

    I am not surprised that you have a negative feeling toward ZFS stability, since you are FreeBSD-exclusive. I am surprised that FreeBSD is actually going to migrate to ZFS boot for 8.0, which is the right choice for FreeBSD. But they will always lag about 12-18 months behind OpenSolaris on ZFS, because the FreeBSD guys have to scratch their heads figuring out what Jeff Bonwick and Brendan Gregg are doing in OpenSolaris and then replicate it in the FreeBSD kernel, "rampant layering violation" and all. LOL. So the support risk is definitely higher on FreeBSD, because Jeff Bonwick only works on OpenSolaris right now.

  16. #16
    Join Date
    May 2002
    Location
    Raleigh, NC
    Posts
    714
    Quote Originally Posted by appliedops View Post
    I hate to say this, but unless you're going to run REAL Solaris, don't use ZFS.
    It's not worth it, and it's not safe on either OpenSolaris or FreeBSD. FreeBSD has been making a lot of progress in getting ZFS to work right, but I still wouldn't use it for anything important. I use FreeBSD exclusively and try ZFS from time to time; it still fails a lot of my torture tests.
    Could you comment on the problems you've had with OpenSolaris and ZFS? Is it just cutting-edge beta features being pushed into it? I know ZFS on FreeBSD is way behind, but I hadn't heard of similar problems with OpenSolaris.

  17. #17
    Join Date
    Mar 2003
    Location
    chicago
    Posts
    1,781
    I'm about to build a storage array with FreeBSD 8.

  18. #18
    Quote Originally Posted by tshen83 View Post
    Yeah, I didn't realize you are in Canada. [...] The trick to ordering Dell is to never upgrade the CPU or RAM. [...] AoE is a joke compared to ZFS. Adding a ZIL and an L2ARC will boost ZFS performance about 8-10x when serving iSCSI or NFS. [...]
    Some good points there about ordering an empty chassis and low-end CPU/RAM from Dell.

    AoE may be a joke, but it's significantly better than iSCSI or NFS in our testing. We have several test VPSes able to pull 130MB/s each off our AoE box with just a bit of tweaking; iSCSI hit ~50MB/s, and NFS stuck around the low 20s. In all honesty, IO is pretty lame on all three, and I am very paranoid about performance.
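
    (Roughly how you can reproduce that kind of sequential test from inside a guest; GNU dd on Linux, and the mount path is made up:)

        # write a 4GB file, forcing it to disk, then read it back
        dd if=/dev/zero of=/mnt/aoe/test.bin bs=1M count=4096 conv=fsync
        echo 3 > /proc/sys/vm/drop_caches   # drop the page cache first
        dd if=/mnt/aoe/test.bin of=/dev/null bs=1M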

    I've heard some awesome things about ZFS, and I am a big fan of SSDs. I actually have 4 or 5 spare Intel SSDs in the office now, which is another reason I want to try this out.

    Currently we run local storage for all our VPS boxes. You can't beat it: I get great IO and transfer rates above 400MB/s with locally attached storage. Of course, you pay for that performance with extra management work, no transparent migrations, etc.

  19. #19
    Join Date
    May 2009
    Posts
    217
    Yeah, you should be able to max out at least 4 GigE ports using that hybrid storage pool config I just mentioned.

    Some great reads for you:

    L2ARC acceleration results
    http://blogs.sun.com/brendan/entry/l2arc_screenshots

    ZIL acceleration results
    http://blogs.sun.com/brendan/entry/slog_screenshots

    The thing is that you can already build a 7410-equivalent system out of Dell/HP gear for about 1/10 of what Sun asks. The only thing that isn't replicable yet is the STEC ZeusIOPS 18GB SAS flash drive they use for the ZIL, because it costs $5,000 for 18GB. It ignores fsync() calls from the system, so writes to it are all cached in RAM, and it is spec'd to do 16K random 4K writes per second and 45K random reads. An Intel X25-E Gen1 can do close to 33K random reads, but only 3.3K fsync()'d 4K random writes. So currently it takes at least 8+ mirrored X25-Es to match the random-write performance a mirrored pair of STEC ZeusIOPS drives can give you. That will change once Intel implements a power-safe write cache and ignores fsync() itself. Depending on how many flash channels the X25-E Gen2 sports, I expect it to offer about 80% of the STEC's performance at about $500 a drive or less. Sun will then have no choice but to jump on the Intel ship and lower their storage gear prices by a lot ($10K in raw material cost just for the ZeusIOPS drives probably translates to $20-25K of their total system cost). Using the X25-M as the L2ARC also significantly lowers cost compared to the 100GB STEC MACH8 SLC SSDs they use for the L2ARC.
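
    Once a pool like that is running, zpool iostat breaks traffic out per vdev, so you can actually watch the log and cache devices absorb the load (pool name is a stand-in):

        # per-device I/O stats, refreshed every second;
        # log and cache devices are listed separately
        zpool iostat -v tank 1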

  20. #20
    Join Date
    Jun 2003
    Location
    London, UK
    Posts
    1,765
    L2ARC (cache devices) isn't yet supported in Solaris, only in OpenSolaris.
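
    On OpenSolaris, attaching one is a one-liner; on Solaris 10 the same command is simply rejected (device name here is a stand-in):

        # add an SSD as an L2ARC cache device to an existing pool
        zpool add tank cache c2t2d0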

  21. #21
    Join Date
    Jul 2009
    Posts
    240
    For those of you saying FreeBSD and ZFS aren't "that" stable, here's a real-world production example:

    http://forums.freebsd.org/showthread.php?t=3689
