  1. #1
    Join Date
    Jun 2006
    Posts
    405

    Any of you large data center guys use SANs?

    I am looking for large providers that use SANs in their environments.

    If you use SANs in your DCs, how do you address the HBA/Fibre Channel cards?

    WWN/WWPN: what naming convention works best for you?

  2. #2
    Join Date
    Jun 2006
    Posts
    405
    Nobody uses SANs in their racks or DCs?

  3. #3
    Join Date
    Jun 2002
    Location
    PA, USA
    Posts
    5,137
    We use iSCSI SAN. Simpler to set up.
    Fluid Hosting, LLC - HSphere Shared and Reseller hosting - Now with HIGH AVAILABILITY
    Fluid VPS - Linux and Windows Virtuozzo VPS - Enterprise VPS with up to 2 GB guaranteed memory!
    Get your N+1 High Availability Enterprise Cloud
    Equinix Secaucus NY2 (NYC Metro)

  4. #4
    Join Date
    Jan 2006
    Location
    Jersey
    Posts
    2,965
    Quote Originally Posted by WickedShark View Post
    Nobody uses SANs in their racks or DCs?
    Patience is a virtue, my friend.

    When you ask a question that 99% of WHT members have never come into contact with, it takes some time to get a reply. The people who manage SANs are usually, well, busy managing SANs, so they will give their time when they get time.
    Email: info ///at/// honelive.com

  5. #5
    Join Date
    Feb 2003
    Location
    North Hollywood, CA
    Posts
    2,554
    We use EqualLogic SANs, with about 200TB of space.

    Dell bought EqualLogic not too long ago. Their MD1000s look spiffy.
    Remote Hands and Your Local Tech for the Los Angeles area.

    (310) 573-8050 - LinkedIn

  6. #6
    Join Date
    Jun 2006
    Posts
    405
    Quote Originally Posted by Jeremy View Post
    We use EqualLogic SANs, with about 200TB of space.

    Dell bought EqualLogic not too long ago. Their MD1000s look spiffy.
    How do you name your WWNs or WWPNs? I am looking for a naming convention that others may follow.

  7. #7
    iSCSI SAN, about 60TB of storage.

    Cheaper and easier than a Fibre Channel SAN.

  8. #8
    Join Date
    Feb 2003
    Location
    North Hollywood, CA
    Posts
    2,554
    I will ask when I get into the office in a few hours,
    but I believe it's by department:
    HA-Node2-1of5 = Hospital Accounting cluster 2, and it's node 1 of 5.

    From when I brought one up, that's what I recall setting it up as.
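    In case it helps, here's a rough sketch of that kind of convention in Python: leave the burned-in WWPN untouched and just map a readable alias (department / cluster / node / HBA port) onto it in your records or zoning config. The alias format and the WWPNs below are made up for illustration, not our actual scheme.
    Code:
    # Hypothetical alias convention in the spirit of "HA-Node2-1of5": keep the
    # burned-in WWPN as-is and point a human-readable alias at it.

    def make_alias(dept, cluster, node, nodes_total, hba_port):
        """Build an alias like 'HA-C2-N1of5-P0' from inventory metadata."""
        return f"{dept}-C{cluster}-N{node}of{nodes_total}-P{hba_port}"

    # Example inventory: made-up WWPNs keyed by the generated alias.
    inventory = {
        make_alias("HA", 2, 1, 5, 0): "50:06:0b:00:00:c2:62:00",
        make_alias("HA", 2, 1, 5, 1): "50:06:0b:00:00:c2:62:02",
    }

    for alias, wwpn in inventory.items():
        print(f"{alias:18s} -> {wwpn}")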
    Remote Hands and Your Local Tech for the Los Angeles area.

    (310) 573-8050 - LinkedIn

  9. #9
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by WickedShark View Post
    If you use SANs in your DCs, how do you address the HBA/Fibre Channel cards?
    Standard practice in the enterprise space is to just use the burned-in address on the card. The WWN can, of course, be changed just like the MAC address of a NIC, but that is not a common practice in most IT shops.

    Quote Originally Posted by WickedShark View Post
    WWN/WWPN: what naming convention works best for you?
    Are you sure you're referring to WWN naming, or zone naming on the SAN switch?
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  10. #10
    We run HP EVA8100 clusters and NetApp 960c clusters. The EVAs use Fibre Channel and the NetApps use iSCSI.

  11. #11
    Join Date
    May 2006
    Location
    NJ, USA
    Posts
    6,456
    Colo4Jax (well, more specifically, BigVPS) uses a SAN
    simplywww: directadmin and cpanel hosting that will rock your socks
    Need some work done in a datacenter in the NYC area? NYC Remote Hands can do it.

    Follow my "deals" Twitter for hardware specials.. @dougysdeals

  12. #12
    Join Date
    Jul 2006
    Location
    Detroit, MI
    Posts
    1,955
    Quote Originally Posted by FHDave View Post
    We use iSCSI SAN. Simpler to set up.
    +1...................

  13. #13
    Quote Originally Posted by WickedShark View Post
    I am looking for large providers that use SANs in their environments.

    If you use SANs in your DCs, how do you address the HBA/Fibre Channel cards?

    WWN/WWPN: what naming convention works best for you?
    As other people have mentioned, iSCSI is a great alternative. We've got both in production, and I think you'll find that most people don't change the WWN or WWPN: very few firmware/driver combos support changing it reliably, and there's really no reason to do it anyway. What naming convention do you use for your network cards' MAC addresses?

  14. #14
    Quote Originally Posted by FHDave View Post
    We use iSCSI SAN. Simpler to set up.
    iSCSI is cheaper than FC, but it is *MUCH* slower: 1 Gbit/s versus 4 Gbit/s.

    If you have 10 SAS disks inside and 10 servers reading from it, you fill the Ethernet channel very quickly.

    How many servers do you have reading from the SAN, and what kind of traffic do you see?

    Are there any virtual machines with their images on the SAN?

    I'm interested in building an iSCSI SAN for some virtual machines (roughly 40 or 50), but iSCSI seems too slow.

    50 VMs reading from a single 1 Gbit channel means about 20 Mbit/s each, which is 2.5 MB/s.

    Too slow!!
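    Spelling out that arithmetic (and what fatter or bonded pipes would buy, assuming the traffic actually spreads out), a quick sketch:
    Code:
    # Back-of-the-envelope per-VM bandwidth if 50 VMs share a SAN link evenly.
    # 802.3ad spreads flows, not packets, so the bonded figure assumes many
    # concurrent sessions; protocol overhead is ignored.

    def per_vm_mbytes(link_gbit, vms):
        """Evenly split a link among VMs and convert Gbit/s to MB/s."""
        return link_gbit * 1000.0 / vms / 8

    for label, gbit in [("1 x GigE", 1), ("4 x GigE (802.3ad)", 4), ("10GbE", 10)]:
        print(f"{label:18s}: {per_vm_mbytes(gbit, 50):5.1f} MB/s per VM (50 VMs)")

    # -> roughly 2.5 MB/s per VM on one GigE (the "too slow" case above),
    #    10 MB/s per VM across a 4-link bond, 25 MB/s per VM on 10GbE.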

  15. #15
    Quote Originally Posted by ale123 View Post
    iSCSI is cheaper than FC, but it is *MUCH* slower: 1 Gbit/s versus 4 Gbit/s.

    If you have 10 SAS disks inside and 10 servers reading from it, you fill the Ethernet channel very quickly.

    How many servers do you have reading from the SAN, and what kind of traffic do you see?

    Are there any virtual machines with their images on the SAN?

    I'm interested in building an iSCSI SAN for some virtual machines (roughly 40 or 50), but iSCSI seems too slow.

    50 VMs reading from a single 1 Gbit channel means about 20 Mbit/s each, which is 2.5 MB/s.

    Too slow!!

    10Gb Ethernet and 802.3ad are your friends.

  16. #16
    Join Date
    Jul 2006
    Location
    Detroit, MI
    Posts
    1,955
    Quote Originally Posted by cacheflymatt View Post
    10Gb Ethernet and 802.3ad are your friends.

    So are multipathing and bonded NICs.
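    Worth noting why both matter: an 802.3ad bond hashes each flow onto one member link, so a single iSCSI session never exceeds one GigE no matter how wide the bond is; multipathing (multiple sessions/paths per LUN) is what actually spreads the load. A toy illustration of the idea (the hash here is made up; real bonds use layer2 or layer3+4 policies):
    Code:
    # Illustrative only: one flow always hashes to the same bond member.
    import hashlib

    def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % n_links

    # A single initiator<->target session stays on one link of a 4-way bond:
    print(pick_link("10.0.0.11", "10.0.0.50", 51000, 3260, 4))

    # Several sessions (what multipathing gives you) spread across the links:
    for sport in range(51000, 51004):
        print(sport, "->", pick_link("10.0.0.11", "10.0.0.50", sport, 3260, 4))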

  17. #17
    Quote Originally Posted by cacheflymatt View Post
    10Gb Ethernet and 802.3ad are your friends.
    Can you suggest an iSCSI SAN with 10Gb Ethernet?
    At the moment I only know of the MSA2012, which has two 1 Gbit Ethernet ports.
    Same for the Dell/EMC.....

  18. #18
    Quote Originally Posted by ale123 View Post
    Can you suggest an iSCSI SAN with 10Gb Ethernet?
    At the moment I only know of the MSA2012, which has two 1 Gbit Ethernet ports.
    Same for the Dell/EMC.....
    There aren't too many vendors offering 10G ports right now.
    NetApp lets you drop in 10G NICs.

    EqualLogic (now Dell) is at least 4x NICs per controller (or let's say 12x GigE in a 3-box config).

    I'm not sure what LeftHand offers on the VSMs, but if you buy the software for, say, a Dell 2950, you could do 10G NICs or up to 14x GigE (2x onboard, 3x PCIe quad-port NICs); that would be 42x 1G in a 3-box config.


    3PAR offers 16x GigE.


    You'll want the 10G for switch uplinks obviously, but multiple GigE + 802.3ad (or multipath, as another poster mentioned) should take you quite a ways. Moreover, you generally don't care about sequential read throughput as much as you care about IOPS, but YMMV depending on what your apps are doing.
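    To put numbers on the IOPS point, a rough sketch, assuming the common ~180 random IOPS per 15k spindle rule of thumb and no cache help:
    Code:
    # Random I/O is usually IOPS-bound long before it is bandwidth-bound.

    def array_estimate(spindles, iops_per_spindle=180, block_kb=4):
        iops = spindles * iops_per_spindle
        mb_per_s = iops * block_kb / 1024.0   # throughput implied by random 4k I/O
        return iops, mb_per_s

    for spindles in (10, 48, 96):
        iops, mbs = array_estimate(spindles)
        print(f"{spindles:3d} x 15k spindles: ~{iops:6d} random IOPS "
              f"=> only ~{mbs:5.1f} MB/s at 4k blocks")

    # Even 96 spindles of random 4k reads stay well under one GigE (~120 MB/s),
    # which is why fixating on link speed alone can be misleading.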

  19. #19
    Quote Originally Posted by cacheflymatt View Post
    NetApp lets you drop in 10G NICs.
    NetApp is very expensive. For that price I think I can buy a 4 Gbit FC SAN from IBM or HP.

    Quote Originally Posted by cacheflymatt View Post
    EqualLogic (now Dell) is at least 4x NICs per controller (or let's say 12x GigE in a 3-box config).
    I'm on Dell's website right now....


    Quote Originally Posted by cacheflymatt View Post
    I'm not sure what LeftHand offers on the VSMs, but if you buy the software for, say, a Dell 2950, you could do 10G NICs or up to 14x GigE (2x onboard, 3x PCIe quad-port NICs); that would be 42x 1G in a 3-box config.
    I don't think a home-made SAN is the right solution for a small datacenter (max 200 servers when fully loaded).

    Quote Originally Posted by cacheflymatt View Post
    You'll want the 10G for switch uplinks obviously, but multiple GigE + 802.3ad (or multipath, as another poster mentioned) should take you quite a ways. Moreover, you generally don't care about sequential read throughput as much as you care about IOPS, but YMMV depending on what your apps are doing.
    That's true for 802.3ad.
    Right now I'm planning a SAN for some virtual machines to use as web servers (Apache and FTP only). I'm planning to buy a few R300s or 2950s and run 5-6 VMs on each.
    The VMs and their storage will live on the iSCSI SAN (maybe with SATA disks).
    That way, if one node fails, I can move its VMs to another.

    But iSCSI sounds slow.

    Can the same architecture also host mail servers?

    If it helps, our internet connections are currently two 15 Mbit links (30 Mbit aggregated).


    (sorry for my English, I'm Italian)

  20. #20
    Quote Originally Posted by ale123 View Post
    NetApp is very expensive. For that price I think I can buy a 4 Gbit FC SAN from IBM or HP.
    ..not a high-end SAN, or you're negotiating wrong.. you'll certainly get more IOPS/$ from NetApp than HP; can't speak to IBM.


    Quote Originally Posted by ale123 View Post
    I don't think a home-made SAN is the right solution for a small datacenter (max 200 servers when fully loaded).
    It's not really home-made; it's fully supported, and it's certainly expensive. It's just an option to use their OS on certain existing hardware (HP DL360, Dell 2950). You can buy the VSMs directly from LeftHand, but I'm not sure what the NIC config options are.




    Quote Originally Posted by ale123 View Post
    That's true for 802.3ad.
    Right now I'm planning a SAN for some virtual machines to use as web servers (Apache and FTP only). I'm planning to buy a few R300s or 2950s and run 5-6 VMs on each.
    The VMs and their storage will live on the iSCSI SAN (maybe with SATA disks).
    That way, if one node fails, I can move its VMs to another.
    In that case, you could also look at the LeftHand VSA product, which would let you use the storage in your 2950s to build an iSCSI cluster. The VSA runs as a VM inside ESX.


    Quote Originally Posted by ale123 View Post
    But iSCSI sounds slow.
    Then you're not listening correctly.


    Quote Originally Posted by ale123 View Post
    Can the same architecture also host mail servers?
    Which architecture?
    Last edited by cacheflymatt; 05-03-2008 at 06:55 PM. Reason: misquotes

  21. #21
    Quote Originally Posted by cacheflymatt View Post
    ..not a high-end SAN, or you're negotiating wrong.. you'll certainly get more IOPS/$ from NetApp than HP; can't speak to IBM.
    Is IBM the slowest?


    Quote Originally Posted by cacheflymatt View Post
    Which architecture?
    Currently two MX servers and one POP3/IMAP server with mailbox storage.

    We want to virtualize the MXes and go from 2 to 4,
    and turn the one POP3/IMAP server into two POP3 and two IMAP servers.

    If possible, with 2 or 3 Dell 2950s with 4 or 8 GB of RAM and dual processors.




    I've looked at EqualLogic; it sounds very, very interesting. 3 Gbit per controller, so 6 Gbit in LACP (are the controllers active/active?).

    But I don't understand how I can add more units to a single virtual SAN and increase throughput from 6 Gbit to 12 Gbit...

    Have you ever used it?

  22. #22
    Join Date
    Feb 2004
    Posts
    371
    iSCSI WAY over Fiber

  23. #23
    Quote Originally Posted by carlostabs View Post
    iSCSI WAY over Fiber
    OK, but right now I can't find any prices for the EqualLogic, though I think it is more expensive than the MSA2012fc or MSA2012i.

    I've seen that the EqualLogic SAN has only 3x GbE, i.e. 3 Gbit,
    which is much less than the 8 Gbit of fibre you can find on a modern FC SAN.

    Another thing I don't understand:
    if I put another EqualLogic SAN into a cluster, will the data
    on it be replicated 1:1 across each SAN, or will SAN 1 hold some data and SAN 2 hold other data?

  25. #25
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by ale123 View Post
    Is IBM the slowest?
    We have an IBM SAN and it's quick, and spec for spec it's better than the HP counterpart.

    NetApp is built for NFS, not really a SAN, and its iSCSI is just a device on top of its NFS, kind of like how Sun's ZFS does its iSCSI as well. Not optimal.


    If you want true iSCSI, you should check out Reldata's unified storage gateway: attach 96 15k SAS drives to a PCI-E card, stick a 10GbE card in another PCI-E slot, and it would be far faster than a 4Gbps FC SAN with multipathing and dual controllers.

    Reldata's system basically just takes LUNs, LVMs them together into a storage pool, and presents it with their own proprietary iSCSI code.
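    Conceptually, what that kind of gateway does looks something like the toy model below (not Reldata's actual code, just the pooling idea: backend LUNs from any array go into one pool, and frontend volumes are carved out of it regardless of which chassis they land on):
    Code:
    # Toy model of the "storage gateway" idea: pool whatever backend LUNs you
    # have and carve frontend volumes out of the pool. Purely conceptual.

    class StoragePool:
        def __init__(self):
            self.backend_luns = []      # (name, size_gb) from any array
            self.exports = {}           # frontend volume name -> size_gb

        def add_backend_lun(self, name, size_gb):
            self.backend_luns.append((name, size_gb))

        def free_gb(self):
            total = sum(size for _, size in self.backend_luns)
            return total - sum(self.exports.values())

        def export_volume(self, name, size_gb):
            if size_gb > self.free_gb():
                raise ValueError("pool too small; add another shelf/LUN")
            self.exports[name] = size_gb

    pool = StoragePool()
    pool.add_backend_lun("hp-msa-lun0", 2000)
    pool.add_backend_lun("nexsan-lun3", 4000)
    pool.export_volume("vmfs-datastore1", 3000)   # spans backend chassis
    print(pool.free_gb())                          # -> 3000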

  26. #26
    Quote Originally Posted by Spudstr View Post
    If you want true iSCSI, you should check out Reldata's unified storage gateway: attach 96 15k SAS drives to a PCI-E card, stick a 10GbE card in another PCI-E slot, and it would be far faster than a 4Gbps FC SAN with multipathing and dual controllers.
    EqualLogics aren't good?

    The specs are very interesting, but I can't find any 'starting prices'.

    Does anybody know the starting price for EqualLogic's SANs? (SATA and SAS models)

  27. #27
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by ale123 View Post
    EqualLogics aren't good?

    The specs are very interesting, but I can't find any 'starting prices'.

    Does anybody know the starting price for EqualLogic's SANs? (SATA and SAS models)
    You have to purchase them through a VAR. And they crush EqualLogic's pricing.

    Reldata can take SCSI/SAS/iSCSI/FC on the backend and present iSCSI/NFS/CIFS on the frontend. So if you have a SAN, you can use the dual 4Gbps FC ports on the gateways and run 8Gbps worth of data back to the SAN devices on your SAN switch. It doesn't matter if it's HP/IBM/Nexsan or whoever; as long as it can see a LUN, that's all it cares about.

    If you're interested in them, send an email to fsabio at inetsolpro.com and I'm sure they will help you out.

  28. #28
    Quote Originally Posted by Spudstr View Post
    Reldata can take SCSI/SAS/iSCSI/FC on the backend and present iSCSI/NFS/CIFS on the frontend. So if you have a SAN, you can use the dual 4Gbps FC ports on the gateways and run 8Gbps worth of data back to the SAN devices on your SAN switch. It doesn't matter if it's HP/IBM/Nexsan or whoever; as long as it can see a LUN, that's all it cares about.
    I don't need a gateway, I need a SAN.

    Alternatively I can buy something like an HP MSA70, connect it to two frontend servers with 10GbE cards, and export it via iSCSI....

  29. #29
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by ale123 View Post
    I don't need a gateway, I need a SAN.

    Alternatively I can buy something like an HP MSA70, connect it to two frontend servers with 10GbE cards, and export it via iSCSI....
    That's making a gateway... in theory the same thing as Reldata. The only difference is that instead of you playing around making it HA and getting it to work, Reldata's system does that and more, and works in HA.

    Not sure about you, but we don't have time to spend playing around and creating our own HA system.

  30. #30
    Quote Originally Posted by Spudstr View Post
    That's making a gateway... in theory the same thing as Reldata. The only difference is that instead of you playing around making it HA and getting it to work, Reldata's system does that and more, and works in HA.
    Then it's not good for us.

    EqualLogic looks perfect, but it seems impossible that nobody knows starting prices.
    I need something that starts around $5,000, more or less, without disks.
    Just to have an idea before asking my Dell consultant.

    Quote Originally Posted by Spudstr View Post
    Not sure about you, but we don't have time to spend playing around and creating our own HA system.
    Same here.

  31. #31
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by ale123 View Post
    Then it's not good for us.

    EqualLogic looks perfect, but it seems impossible that nobody knows starting prices.
    I need something that starts around $5,000, more or less, without disks.
    Just to have an idea before asking my Dell consultant.



    Same here.
    What's not good? The fact that it works, works extremely well, and is cheaper than EqualLogic? Not to mention it can do more and present over 10GbE where EqualLogic cannot?

    Reldata is not going to start at 5k; it's going to start much higher without disks. If you're just looking for a simple box with half a dozen drives, then Reldata is not for you. Yes, it can do a dozen drives no problem, but it's built to handle dozens and dozens of drives and present NFS/CIFS/iSCSI all at the same time to various targets, etc.

    It's also meant to re-tool and consolidate existing SAN storage so it's not wasted.

    You will never find a SAN for 5k. Even IBM's DS3400 starts a little higher without any disks, then the DS4200 a little higher than that, and then the DS4700 even higher.

    If you're talking "SAN" you need to be talking the 50k range, not the 5k range, after you factor in HBAs, SAN controllers, and spindles, not to mention switches.

  32. #32
    Quote Originally Posted by Spudstr View Post
    You will never find a SAN for 5k. Even IBM's DS3400 starts a little higher without any disks, then the DS4200 a little higher than that, and then the DS4700 even higher.
    The DS3400 starts from 7,000 euros with dual controllers and no disks.

    Quote Originally Posted by Spudstr View Post
    If you're talking "SAN" you need to be talking the 50k range, not the 5k range, after you factor in HBAs, SAN controllers, and spindles, not to mention switches.
    My $5k was just an example.

    Just to have an idea: Reldata starts from what, $10k? $20k?
    And EqualLogic?

    HP's SANs start from 5k euros..... Fibre or iSCSI, the price is the same.
    iSCSI has just one small problem: only two GbE ports... too slow!
    Fibre has dual 4 Gbit controllers (8 Gbit) but needs an expensive switch...

  33. #33
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by ale123 View Post
    The DS3400 starts from 7,000 euros with dual controllers and no disks.

    My $5k was just an example.

    Just to have an idea: Reldata starts from what, $10k? $20k?
    And EqualLogic?

    HP's SANs start from 5k euros..... Fibre or iSCSI, the price is the same.
    iSCSI has just one small problem: only two GbE ports... too slow!
    Fibre has dual 4 Gbit controllers (8 Gbit) but needs an expensive switch...
    Reldata is an appliance, and it doesn't care what disks you attach to it.

    I know an EqualLogic box priced/spec'd out around 50k that Reldata would beat on price. Each VAR has their own pricing on the appliances, so you need to contact someone with exactly what you want. iSCSI/NFS etc. are options that require licensing as well, so the price can change.

    I cannot disclose the pricing of what we paid for our Reldata units, but from what I'm reading about EqualLogic, they start around 20k for a low-end system.


    Sure, there are probably cheaper solutions out there, but you need to figure out exactly what you are trying to do and which solution you want to use. I'm sure there are some other solutions that are much cheaper, but as with all things in this industry, you get what you pay for.

  34. #34
    Quote Originally Posted by Spudstr View Post
    NetApp is built for NFS, not really a SAN, and its iSCSI is just a device on top of its NFS, kind of like how Sun's ZFS does its iSCSI as well. Not optimal.
    FUD.. cite your source?


    Quote Originally Posted by Spudstr View Post
    If you want true iSCSI, you should check out Reldata's unified storage gateway: attach 96 15k SAS drives to a PCI-E card, stick a 10GbE card in another PCI-E slot, and it would be far faster than a 4Gbps FC SAN with multipathing and dual controllers.
    And this will be faster and more reliable than a similarly configured NetApp head or ZFS server?

    Certainly you'll get better data integrity out of ZFS. In any case, any non-broken solution should be able to saturate 10Gb from 96 drives... why are we still talking about throughput? Shouldn't we be measuring random IOPS and service time?




    Quote Originally Posted by Spudstr View Post
    Reldata's system basically just takes LUNs, LVMs them together into a storage pool, and presents it with their own proprietary iSCSI code.
    Ohh.. you mean like NetApp and Sun?
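    On actually measuring those two numbers: in practice you'd point a real benchmark tool at the raw device with direct I/O, but a minimal sketch of what "random IOPS and service time" means could look like this (point it at a big file or device you can read; the page cache will flatter the numbers):
    Code:
    # Toy random-read measurement: reports IOPS and average service time.
    # Real testing should use a proper benchmark tool, raw devices and direct
    # I/O; this only illustrates the two metrics being discussed.
    import os, random, sys, time

    def random_read_test(path, block=4096, ios=2000):
        fd = os.open(path, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)      # works for files and block devices
        start = time.time()
        for _ in range(ios):
            os.pread(fd, block, random.randrange(0, max(size - block, 1)))
        elapsed = time.time() - start
        os.close(fd)
        print(f"{ios} reads in {elapsed:.2f}s -> {ios / elapsed:,.0f} IOPS, "
              f"{elapsed / ios * 1000:.2f} ms average service time")

    if __name__ == "__main__":
        random_read_test(sys.argv[1])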

  35. #35
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by cacheflymatt View Post
    FUD.. cite your source?



    And this will be faster and more reliable than a similarly configured NetApp head or ZFS server?

    Certainly you'll get better data integrity out of ZFS. In any case, any non-broken solution should be able to saturate 10Gb from 96 drives... why are we still talking about throughput? Shouldn't we be measuring random IOPS and service time?


    Ohh.. you mean like NetApp and Sun?
    We took a Thumper and ran ZFS/NFS off it, and it struggled above 500 Mbps. The Reldata box is doing 1.5 Gbps without blinking an eye. Maybe a lot has changed in the past 7 months, but back then 500+ Mbps on ZFS+NFS was a struggle on the Thumper.

    96 drives is just an example; it can use more drives, and obviously more drives means more IOPS. 96 drives would be 2x 48-drive shelves plugged into a single SAS card with dual x4 ports.

    I have not used NetApp and was going off what sources have said, so I'll retract what I said about NetApp. From what I hear, NetApp does a great job with NFS traffic.


    The only reason Reldata would be more "reliable" than a NetApp head or a ZFS box is that you'll have two heads serving the iSCSI/NFS targets, and your backend disks would more than likely be on dual controllers controlling the RAID groups as well.

  36. #36
    Quote Originally Posted by Spudstr View Post
    We took a Thumper and ran ZFS/NFS off it, and it struggled above 500 Mbps. The Reldata box is doing 1.5 Gbps without blinking an eye. Maybe a lot has changed in the past 7 months, but back then 500+ Mbps on ZFS+NFS was a struggle on the Thumper.

    96 drives is just an example; it can use more drives, and obviously more drives means more IOPS. 96 drives would be 2x 48-drive shelves plugged into a single SAS card with dual x4 ports.

    I have not used NetApp and was going off what sources have said, so I'll retract what I said about NetApp. From what I hear, NetApp does a great job with NFS traffic.


    The only reason Reldata would be more "reliable" than a NetApp head or a ZFS box is that you'll have two heads serving the iSCSI/NFS targets, and your backend disks would more than likely be on dual controllers controlling the RAID groups as well.
    Sorry to hear you had a cranky Thumper experience (and you're not alone, though it's usually resolved by Sun pretty quickly). We've had great performance from our Thumpers.

    NetApp offers active/active as well, but competing controllers have always scared me (split brain, ahh); obviously this isn't an option on Sun.

    I have no first-hand experience using NetApp iSCSI, but I have heard *very* good things from people with large footprints (> 1000 disks).

    I'm not knocking Reldata; I just don't like the strawman argument that devices that do NFS can't do iSCSI, especially when comparing to Reldata, which exports both.

  37. #37
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by cacheflymatt View Post
    Sorry to hear you had a cranky Thumper experience (and you're not alone, though it's usually resolved by Sun pretty quickly). We've had great performance from our Thumpers.

    NetApp offers active/active as well, but competing controllers have always scared me (split brain, ahh); obviously this isn't an option on Sun.

    I have no first-hand experience using NetApp iSCSI, but I have heard *very* good things from people with large footprints (> 1000 disks).

    I'm not knocking Reldata; I just don't like the strawman argument that devices that do NFS can't do iSCSI, especially when comparing to Reldata, which exports both.

    Not trying to knock NFS/iSCSI the way NetApp and ZFS do it, but as I recall they create an iSCSI device that's basically a file, and it sits on top of their file systems, whereas the Reldata box just creates LVM LUNs and exports them over iSCSI. Though VMware with NetApp's NFS/iSCSI works very well from what I read too.

    We went to Sun with our Thumper problem and they were no real help, so we shipped that monster back. However, 1k+ disks is a good-size footprint, a lot more than many of us would ever have a need for.

  38. #38
    Quote Originally Posted by Spudstr View Post
    Not trying to knock NFS/iSCSI the way NetApp and ZFS do it, but as I recall they create an iSCSI device that's basically a file, and it sits on top of their file systems, whereas the Reldata box just creates LVM LUNs and exports them over iSCSI. Though VMware with NetApp's NFS/iSCSI works very well from what I read too.
    Maybe early on with NetApp that's how it was, but with both modern (2+ years) NetApp and ZFS, you create a volume (block device) which is exported as an iSCSI LUN; it's not a file on top of an NFS filesystem.
    Last edited by cacheflymatt; 05-04-2008 at 03:54 PM. Reason: clarified timeline

  39. #39

    some info

    Hi All,

    EqualLogic: if you're not sure of the pricing, you may not know it's "perfect" for you. For current pricing, call Dell; for pricing as of Q3 '07, go to Robin's site. http://storagemojo.com/storagemojos-...ic-price-list/


    Matt: NetApp uses a file system, first and foremost. An HP paper discusses this: http://h71036.www7.hp.com/ERC/downlo...A1-4386ENW.pdf

    "All storage systems are not created equal. In the case of Network Appliance (NetApp), its operating system and file system were originally designed solely based on the requirements of network attached storage (NAS). There is no clearer proof of this than NetApp’s “space reservation policy” in its storage area network (SAN) and iSCSI block solutions—a liability not found in NetApp NAS environments. This white paper describes both this NetApp practice and the cost of ownership consequences to the customer"

    Granted, it's a competitor, but NetApp's filesystem runs on FAS units, and its primary role was to export mounts, not SCSI/SAN-level targets. Read the paper and draw your own conclusions, but when I CLI into a Data ONTAP system, it still looks like a file to me.

    What Spud is getting at is that Reldata is different. It's backend agnostic: it doesn't care if your spindles come from HP, Reldata (Engenio, like IBM), IBM, Dell, or EqualLogic, as long as they're standards compliant (FCP, iSCSI, SCSI (U320), SAS), and you can then present to the front: CIFS mounts, NFS exports, and, the point of all this, iSCSI LUNs. Because it does its internal work at the block level, it is wirespeed at iSCSI up to the chassis/PCIe limits with 10GigE. By contrast, if you go with a single-chassis box, you often find yourself having outgrown it and stuck on a rip-and-replace upgrade path.

    And as far as IOPS: we have a West Coast customer case with a pair of Reldatas (for HA; one had the primary iSCSI targets, the other the primary NFS targets, and they'd fail over to each other. A true active-active has pros and cons, but that wasn't this setup). So they had 2x Reldata heads, and via an HA mesh setup they used 10GigE to their app cluster (i.e. to their blade servers) and, to the back, dual 12Gbps SAS to Engenio/Reldata SAS-to-SAS nodes (like the IBM DS3200). 10GigE to the front and 12Gbps SAS to the back: that's an FC killer.

    The customer is a search engine, which is all I can say openly, but let's just say we were very surprised: with 48+48 SAS-to-SAS 15k spindles feeding dual 12Gbps SAS 8088 connectors through the Reldata, presenting iSCSI to the front via a dual-port 10GigE card, they claimed over 124K IOPS in their own internal testing. Not a Reldata benchmark; in their network with their tests. My Emulex FC cards themselves have a theoretical limit of 140K IOPS, so we're not sure where the bottleneck is in this setup, but suffice it to say that's a lot of IOPS for 20U. And the beauty: need to grow an iSCSI target's size? Just add more shelves and expand it; they don't even need to be the same brand, etc, etc, etc.

    E.g. the Holocaust Museum uses Reldatas for archive, allowing for the most inexpensive setup of all: iSCSI to the back, over 1/2 PB on Nexsan, with the Reldatas virtualizing the many iSCSI LUNs (dual-controller HA SAN LUNs) into file systems, and iSCSI to the front as well. Reldatas fail over within the iSCSI timeout, so we have done "yank the cord" HA failover tests with Exchange server volumes on them, and after the short failover delay everything kept working; no users were aware the customer had decided to just yank the Cat6 cable.



    So, to understand why people pay for it if it's "just a gateway", google IBM SVC http://www-03.ibm.com/systems/storag...alization/svc/ and the 'DATABeast' by Nexsan, but then realize it does LUN-to-filesystem translation that is chassis independent, so you get wirespeed iSCSI to the front end BUT you're not limited to the size of the chassis for your LUNs. EqualLogic has handled how to move LUNs between shelves (but press your rep for the "double drive failure for linked shelves" answer if you're a dual-parity fan; this is the opposite in those scenarios).

    Disclaimer: I rep Reldata, Nexsan, NetApp, Engenio/Reldata, and Exanet (true active-active NAS clustering), and I like my NetApp boxes; I sell a lot of them to schools as NASes (Data ONTAP with dual parity is nice, great snapshots, fine replication). But on our VMware production cluster I front an FC/iSCSI SAN from Nexsan with Reldata: it allows me to have unlimited volumes, some on RAID 6, some on RAID 10, move them between array types, and even, if a true active-active SAN isn't enough for you to sleep well at night, replicate them locally and synchronously for true storage-chassis independence, or asynchronously over IP for remote DR, with no funky license costs; replication is included. Good luck getting that from the other players.

    Fernando
    Last edited by hakalugi; 05-05-2008 at 10:24 AM.

  40. #40
    Oh, and to the person quoting 7,000 euros for the base IBM: note that may be the 1+1 port head. For a mesh, or at least a more usable setup before you need switches, get the 3+1 setup. We OEM Engenio via Reldata and would be glad to give you information offline; the point of this is to ask how many 'host ports' you're getting. We only do the 3-host, 1-expander model, but find that most people get a quote for the 1-host, 1-expander, which means instant fabric/switching needs.
