  1. #1
    Join Date
    Nov 2010
    Posts
    190

    SAN: FCoE or iSCSI

    Hi,

    currently we have VMware and some "old" SANs with Brocade Fibre Channel switches running. Now we have to upgrade our storage system.

    Well, nowadays we have 10Gbit/s Ethernet, so I thought we'd just take "normal" 10Gbit/s layer 2 switches and run iSCSI over them.

    Our HP sales guy really doesn't want us to take iSCSI. He offered us an EVA P6300 package with Fibre Channel over Ethernet (FCoE) only, plus Brocade 8/40 switches, of course for "much better performance". But what about iSCSI, "do you offer iSCSI based solutions as well?" Of course he "forgot" to mention that there's also a mixed controller setup with 4x FCoE and 4x 10Gbit/s (for iSCSI) ports. But nah, "iSCSI is new", "nah", "don't take that"...


    And needless to say, a fully equipped Brocade 8/40 with licensed ports and a support contract is a really pricey thing. And of course we need two of them for redundancy...

    So, here I am. The question is: iSCSI or Fibre Channel over Ethernet?
    (Btw, about the "over Ethernet" in FCoE: is there a reason why I couldn't use a "normal" layer 2 Ethernet switch for that, too? It seems FCoE changes the Ethernet framing a little and that's the reason?)

    As I said it will be for VMware (running AD, storage, Exchange, no additional databases).


    Another thing I wondered about: he offered us an HP storage solution with 2.5" hard drives at 10,000rpm.
    Why shouldn't I take 3.5" 7,200rpm hard drives instead?
    The HP guy talked about better I/O performance. But hey, do I really need that in my case?

  2. #2
    Join Date
    Apr 2009
    Posts
    1,143
    FCoE is still a pretty new thing, but it does seem to be gaining ground. Making use of existing Ethernet equipment is really cheap - but the FCoE adapters probably aren't?

    iSCSI is decent, no doubt. That's how I would go right now, instead of being your sales dude's test subject.

    That being said, maybe someone in here has experience with FCoE and had a good experience with it.

  3. #3
    We have used FC, FCoE and iSCSI, and if I were planning a new install today I would definitely go with 10Gb iSCSI. It's blazing fast with wide industry support. Fibre Channel is of course a quality system, but the additional cost is simply not reflected in better performance these days.
    ██ Enterprise Class Cloud Hosting And Disaster Recovery. SAN Replication.
    ██ VMware Hosting on HP Blades With NetApp or EqualLogic SAN Storage. 100% Guaranteed Uptime.
    ██ Build Your Own Virtual DataCentre In The Cloud. Fully Integrated With vCenter.
    ██ StratoGen Are An Authorised VMware Partner | StratoGen.net

  4. #4
    Join Date
    Oct 2005
    Location
    Tucson AZ
    Posts
    367
    Quote Originally Posted by Stratogen View Post
    We have used FC, FCoE and iSCSI, and if I were planning a new install today I would definitely go with 10Gb iSCSI. It's blazing fast with wide industry support. Fibre Channel is of course a quality system, but the additional cost is simply not reflected in better performance these days.
    Agreed, we're running iSCSI in our vSphere Ent+ clusters and it... just works, no issues, no performance hits. Love it.
    SPEAKservers, LLC - Premium Hosting Solutions
    Dedicated & Virtual Servers - Colocation - Transport/DIA - VoIP
    sales@speakservers.com / scott@speakservers.com

  5. #5
    Join Date
    Oct 2002
    Posts
    351
    We went 10Gb iSCSI using Brocade TI-24X-AC switches and got two each for two locations, four total; we were able to get a good deal through Mike at Myriad Supply. Though it's not in production yet, we have been happy with the performance so far. The 24X-AC has a limited lifetime replacement warranty and the config is very similar to Cisco, so we didn't feel the need to get a support contract on it.

    As for SANs, we looked at HP, StoneFly, Dell, and NetApp, and ended up going with StoneFly with a mix of SAS and SATA drives.

  6. #6
    Join Date
    May 2004
    Location
    Toronto, Canada
    Posts
    5,105
    We looked at it and went with iSCSI. Broader support, performance is excellent, and it allows you to have consistent infrastructure.

  7. #7
    FCoE is pointless. See http://www.google.com/#q=FCoE+packetpushers

    Fibre Channel is a lossless protocol; you absolutely must not lose frames for it to work. Does that sound like a good match with Ethernet? I don't think so either.
    IOFLOOD.com -- We Love Servers
    Phoenix, AZ Dedicated Servers in under an hour
    ★ Ryzen 9: 7950x3D ★ Dual E5-2680v4 Xeon ★
    Contact Us: sales@ioflood.com

  8. #8
    iSCSI.

    You are talking to a salesperson, or whatever they call it... a sales engineer, i.e. someone with a bit of technical background. Regardless, they are going to upsell you no matter what.

  9. #9
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,957
    FCoE is fine but iSCSI is too new? iSCSI has been widely deployed for 5+ years and FCoE wasn't standardized until 2010 with the first devices hitting the market in 2008. iSCSI is a much more mature platform than FCoE so that seems like an odd reason.

    If HP doesn't like iSCSI on their products, I'd suggest going with a different provider altogether then. NetApp is more than happy to do iSCSI.

    And yes, the 10k RPM 2.5" drives will offer a lot better IO performance than 3.5" 7200RPM drives, but you'll get even better performance with 3.5" 15k RPM drives and probably pay less per GB than the 2.5" drives.
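
    Back-of-envelope math if you want to sanity check that (the seek and latency numbers below are just typical catalog figures for each drive class, not specs for the specific HP drives you were quoted):

    Code:
    # Rough per-spindle IOPS estimate: 1 / (average seek + average rotational latency).
    # RPM/seek values are generic assumptions per drive class, not HP part specs.
    def iops(rpm, avg_seek_ms):
        rotational_latency_ms = 60000.0 / rpm / 2  # half a revolution on average
        return 1000.0 / (avg_seek_ms + rotational_latency_ms)

    for label, rpm, seek_ms in [('3.5" 7.2k', 7200, 8.5),
                                ('2.5" 10k', 10000, 4.2),
                                ('3.5" 15k', 15000, 3.5)]:
        print(f"{label}: ~{iops(rpm, seek_ms):.0f} IOPS")

    # Prints roughly 79, 139 and 182 IOPS per drive, which is why the 10k and
    # 15k spindles pull ahead for random I/O (Exchange, mixed VM workloads).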
    Karl Zimmerman - Founder & CEO of Steadfast
    VMware Virtual Data Center Platform

    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation

  10. #10
    Join Date
    Mar 2010
    Location
    Germany
    Posts
    697
    IMHO (coming from a 3000+ port SAN) iSCSI is a joke and FCoE is quite pointless; two old 4Gbit FC HBAs will stomp both solutions, probably in performance and definitely in reliability, since FC is quasi-lossless.

    I can't judge your setup, but I don't normally see much incentive for upgrading. FCoE will play nicer with FC but relies more heavily on Data Center Ethernet, which is not-so-proven, and iSCSI is proven by now but not really good for anything; e.g. with VMware ESXi you might see some problems with active/active arrays when using iSCSI.

    FCoE is in fact the new kid on the block and has little adoption. I'd go with FCoE if I had something well-integrated like Cisco UCS; otherwise I'd rather stay away.

    What I'd do, if I had enough money for shopping, is bring in someone from HDS for an offer based on a good-sized AMS.
    Check out my SSD guides for Samsung, HGST (Hitachi Global Storage) and Intel!

  11. #11
    Quote Originally Posted by KarlZimmer View Post
    FCoE is fine but iSCSI is too new? iSCSI has been widely deployed for 5+ years and FCoE wasn't standardized until 2010 with the first devices hitting the market in 2008. iSCSI is a much more mature platform than FCoE so that seems like an odd reason.

    If HP doesn't like iSCSI on their products, I'd suggest going with a different provider altogether then. NetApp is more than happy to do iSCSI.

    And yes, the 10k RPM 2.5" drives will offer a lot better IO performance than 3.5" 7200RPM drives, but you'll get even better performance with 3.5" 15k RPM drives and probably pay less per GB than the 2.5" drives.
    Reading over all the posts in this thread prior to yours, I don't see anyone saying that iSCSI is new, though I do see one post saying that FCoE is too new, and the grammar of that sentence leaves room for some confusion.
    IOFLOOD.com -- We Love Servers
    Phoenix, AZ Dedicated Servers in under an hour
    ★ Ryzen 9: 7950x3D ★ Dual E5-2680v4 Xeon ★
    Contact Us: sales@ioflood.com

  12. #12
    Join Date
    Jun 2009
    Location
    California
    Posts
    509
    Quote Originally Posted by funkywizard View Post
    Reading over all the posts in this thread prior to yours, I don't see anyone saying that iSCSI is new, though I do see one post saying that FCoE is too new, and the grammar of that sentence leaves room for some confusion.
    It's mentioned in the original post that the HP sales guy said iSCSI is new.

  13. #13
    Quote Originally Posted by MikeJohnson View Post
    It's mentioned in the original post that the HP sales guy said iSCSI is new.
    Ah, ok, my mistake. I didn't see it mentioned in any replies then I suppose.
    IOFLOOD.com -- We Love Servers
    Phoenix, AZ Dedicated Servers in under an hour
    ★ Ryzen 9: 7950x3D ★ Dual E5-2680v4 Xeon ★
    Contact Us: sales@ioflood.com

  14. #14
    Join Date
    Feb 2002
    Location
    New York, NY
    Posts
    4,618
    Another option for the DIY-type people would be InfiniBand with SRP. You won't find a lot of pre-made solutions for it, but that might change in the future. 40Gbps InfiniBand is around the same price as 10Gbps Ethernet, and configuring an SRP target is very similar to configuring an iSCSI target.
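
    To give an idea of how similar the two are, here's a minimal sketch using the rtslib-fb Python bindings for the Linux LIO target (assumptions on my part: you're exporting a plain block device from a Linux box; the device path, IQN and port GID are placeholders, and there are no ACLs, so don't treat this as a production config):

    Code:
    # Export one backing device twice: once over iSCSI, once over SRP.
    from rtslib_fb import (BlockStorageObject, FabricModule, Target, TPG,
                           LUN, NetworkPortal)

    disk = BlockStorageObject("vmstore0", dev="/dev/sdb")  # placeholder device

    # iSCSI export: target -> TPG -> portal -> LUN
    iscsi_tgt = Target(FabricModule("iscsi"), "iqn.2011-06.net.example:vmstore0")
    tpg = TPG(iscsi_tgt, 1)
    tpg.enable = True
    NetworkPortal(tpg, "0.0.0.0", 3260)
    LUN(tpg, storage_object=disk)

    # SRP export over InfiniBand: same pattern, different fabric module.
    # The target name is the HCA port GID (placeholder value below); depending
    # on the kernel the srpt TPG may also need enabling, like the iSCSI one.
    srp_tgt = Target(FabricModule("srpt"), "0xfe800000000000000002c903000e8acd")
    srp_tpg = TPG(srp_tgt, 1)
    LUN(srp_tpg, storage_object=disk)

    Same objects either way; the bigger difference is on the initiator side, where you run srp_daemon instead of an iSCSI initiator.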
    Last edited by bqinternet; 06-09-2011 at 08:20 PM.
    Scott Burns, President
    BQ Internet Corporation
    Remote Rsync and FTP backup solutions
    *** http://www.bqbackup.com/ ***

  15. #15
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,957
    Quote Originally Posted by wartungsfenster View Post
    IMHO (coming from a 3000+ port SAN) iSCSI is a joke and FCoE is quite pointless; two old 4Gbit FC HBAs will stomp both solutions, probably in performance and definitely in reliability, since FC is quasi-lossless.

    I can't judge your setup, but I don't normally see much incentive for upgrading. FCoE will play nicer with FC but relies more heavily on Data Center Ethernet, which is not-so-proven, and iSCSI is proven by now but not really good for anything; e.g. with VMware ESXi you might see some problems with active/active arrays when using iSCSI.

    FCoE is in fact the new kid on the block and has little adoption. I'd go with FCoE if I had something well-integrated like Cisco UCS; otherwise I'd rather stay away.

    What I'd do, if I had enough money for shopping, is bring in someone from HDS for an offer based on a good-sized AMS.
    What makes iSCSI a joke? We have it running on a couple of configs with 10GigE backends and it is working out great. It works well for distributing across our existing Ethernet network, distributing it to multiple clients, etc. It certainly has its uses in an existing Ethernet environment. I also find it hard to believe you're getting better performance with 4Gbit/sec FC, unless you're just doing it wrong...

    And Scott, yeah, InfiniBand keeps looking like a better option; that is what we've been looking at for our next cloud deployment.
    Karl Zimmerman - Founder & CEO of Steadfast
    VMware Virtual Data Center Platform

    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation

  16. #16
    Join Date
    Nov 2010
    Posts
    190
    Hi,

    thank you for all your answers. Well, to make it short, it will be iSCSI, too. Btw, what 10Gbit/s switches do you use with it?

