  1. #1
    Join Date
    Jun 2006
    Posts
    407

Any of you large data center guys use SANs?

I am looking for large providers that use SANs in their environments.

If you use SANs in your DCs, how do you address the HBA/Fibre Channel cards?

WWN/WWPN: what naming convention works best for you?

  2. #2
    Join Date
    Jun 2006
    Posts
    407
Nobody uses SANs in their racks or DCs?

  3. #3
    Join Date
    Jun 2002
    Location
    PA, USA
    Posts
    5,143
    We use iSCSI SAN. Simpler to set up.
    Fluid Hosting, LLC - Enterprise Cloud Infrastructure: Cloud Shared and Reseller, Cloud VPS, and Cloud Hybrid Server

  4. #4
    Join Date
    Jan 2006
    Location
    Jersey
    Posts
    2,971
    Quote Originally Posted by WickedShark View Post
Nobody uses SANs in their racks or DCs?
Patience is a virtue, my friend.

    When you ask a question that 99% on WHT have not come into contact with, it takes some time to get a reply. Usually people who are managing SANs are, well, busy managing SANs. So they will give their time, well, when they get time.
    Email: info ///at/// honelive.com

  5. #5
    Join Date
    Feb 2003
    Location
    Panorama City, CA
    Posts
    2,581
We use EqualLogic SANs, with about 200TB of space.

Dell bought EqualLogic not too long ago. Their MD1000s look spiffy.
    Remote Hands and Your Local Tech for the Los Angeles area.

    (310) 573-8050 - LinkedIn

  6. #6
    Join Date
    Jun 2006
    Posts
    407
    Quote Originally Posted by Jeremy View Post
We use EqualLogic SANs, with about 200TB of space.

Dell bought EqualLogic not too long ago. Their MD1000s look spiffy.
How do you name your WWNs or WWPNs? I am looking for a naming convention that others may follow.

  7. #7
iSCSI SAN, about 60TB of storage.

Cheaper and easier than a Fibre Channel SAN.

  8. #8
    Join Date
    Feb 2003
    Location
    Panorama City, CA
    Posts
    2,581
I will ask when I get in the office in a few hours,
but I believe it's by department:
HA-Node2-1of5 = Hospital Accounting cluster 2, and it's 1 of 5.

From when I brought one up, that's what I recall setting it up as.
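A department-based convention like that is easy to generate mechanically. A rough Python sketch of the pattern (the helper name and department codes are made up for illustration):

```python
# Sketch of a department-based SAN alias convention like "HA-Node2-1of5":
# <dept code>-Node<cluster>-<member>of<total>. Hypothetical helper, not a product API.
def san_alias(dept_code: str, cluster: int, member: int, total: int) -> str:
    if not 1 <= member <= total:
        raise ValueError("member must be between 1 and total")
    return f"{dept_code}-Node{cluster}-{member}of{total}"

# e.g. Hospital Accounting cluster 2, first of five members:
print(san_alias("HA", 2, 1, 5))  # HA-Node2-1of5
```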
    Remote Hands and Your Local Tech for the Los Angeles area.

    (310) 573-8050 - LinkedIn

  9. #9
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by WickedShark View Post
If you use SANs in your DCs, how do you address the HBA/Fibre Channel cards?
Standard practice in the enterprise space is to just use the burned-in address on the card. The WWN can, of course, be changed just like the MAC address of a NIC, but that is not common practice in most IT shops.
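Since a WWN is essentially a longer MAC address (8 bytes instead of 6), a small normalizer is handy when inventorying them across vendors that print different separators. A sketch (the example WWN value is made up):

```python
import re

def normalize_wwn(wwn: str) -> str:
    """Normalize a World Wide Name to lowercase colon-separated form.

    A WWN/WWPN is 8 bytes (16 hex digits); vendors print it with colons,
    dashes, or no separators at all. Raises ValueError otherwise.
    """
    digits = re.sub(r"[^0-9a-fA-F]", "", wwn).lower()
    if len(digits) != 16:
        raise ValueError(f"expected 16 hex digits, got {len(digits)}")
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))

print(normalize_wwn("21000024FF3DEA5C"))  # 21:00:00:24:ff:3d:ea:5c
```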

    Quote Originally Posted by WickedShark View Post
WWN/WWPN: what naming convention works best for you?
    Are you sure you're referring to WWN naming, or zone naming on the SAN switch?
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  10. #10
We run HP EVA8100 clusters and NetApp 960c clusters; the EVAs use Fibre Channel and the NetApps use iSCSI.

  11. #11
    Join Date
    May 2006
    Location
    NJ, USA
    Posts
    6,645
    Colo4Jax (well, more specifically, BigVPS) uses a SAN
    AS395558

  12. #12
    Join Date
    Jul 2006
    Location
    Detroit, MI
    Posts
    1,962
    Quote Originally Posted by FHDave View Post
    We use iSCSI SAN. Simpler to set up.
    +1...................

  13. #13
    Quote Originally Posted by WickedShark View Post
I am looking for large providers that use SANs in their environments.

If you use SANs in your DCs, how do you address the HBA/Fibre Channel cards?

WWN/WWPN: what naming convention works best for you?
As other people have mentioned, iSCSI is a great alternative; we've got both in production. I think you'll find that most people don't change the WWN or WWPN: very few firmware/driver combos support changing it reliably, and there's really no reason to do it. What naming convention do you use for your network card MAC addresses?

  14. #14
    Quote Originally Posted by FHDave View Post
    We use iSCSI SAN. Simpler to set up.
iSCSI is cheaper than FC but *MUCH* slower! 1Gbit/s versus 4Gbit/s.

If you have 10 SAS disks inside and 10 servers reading from it, you fill the Ethernet channel very quickly.

How many servers do you have reading from the SAN, and what kind of traffic do you see?

Are there any virtual machines with images on the SAN?

I'm interested in building an iSCSI SAN for some virtual machines (more or less 40 or 50), but iSCSI is too slow.

50 VMs reading from a single 1Gbit channel means 20Mbit for each VM, which is 2.5MB/s.

Too slow!!
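That arithmetic checks out, with caveats: it's an idealized worst case that assumes every VM reads at once with an equal share and ignores protocol overhead. A quick sketch of the math:

```python
# Per-VM bandwidth on a shared iSCSI link, idealized: equal sharing,
# all VMs reading simultaneously, no protocol overhead counted.
link_gbit = 1          # single GigE link
vms = 50

per_vm_mbit = link_gbit * 1000 / vms     # 20 Mbit/s per VM
per_vm_mbyte = per_vm_mbit / 8           # 2.5 MB/s per VM

print(f"{per_vm_mbit:.0f} Mbit/s = {per_vm_mbyte:.1f} MB/s per VM")
# The same math with 4x GigE bonded via 802.3ad gives 10 MB/s per VM.
```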

  15. #15
    Quote Originally Posted by ale123 View Post
iSCSI is cheaper than FC but *MUCH* slower! 1Gbit/s versus 4Gbit/s.

If you have 10 SAS disks inside and 10 servers reading from it, you fill the Ethernet channel very quickly.

How many servers do you have reading from the SAN, and what kind of traffic do you see?

Are there any virtual machines with images on the SAN?

I'm interested in building an iSCSI SAN for some virtual machines (more or less 40 or 50), but iSCSI is too slow.

50 VMs reading from a single 1Gbit channel means 20Mbit for each VM, which is 2.5MB/s.

Too slow!!

10Gb Ethernet and 802.3ad are your friends.

  16. #16
    Join Date
    Jul 2006
    Location
    Detroit, MI
    Posts
    1,962
    Quote Originally Posted by cacheflymatt View Post
10Gb Ethernet and 802.3ad are your friends.

So are multipathing and bonded NICs.

  17. #17
    Quote Originally Posted by cacheflymatt View Post
10Gb Ethernet and 802.3ad are your friends.
Can you suggest an iSCSI SAN with 10Gb Ethernet?
Currently I only know the MSA2012, which has two 1Gbit Ethernet ports.
Same for the Dell/EMC...

  18. #18
    Quote Originally Posted by ale123 View Post
Can you suggest an iSCSI SAN with 10Gb Ethernet?
Currently I only know the MSA2012, which has two 1Gbit Ethernet ports.
Same for the Dell/EMC...
There aren't too many vendors offering 10G ports right now.
NetApp allows you to drop in 10G NICs.

EqualLogic (now Dell) has at least 4x NICs per controller (or, say, 12x GigE in a 3-box config).

I'm not sure what LeftHand offers on the VSMs, but if you buy the software for, say, a Dell 2950, you could do 10G NICs or up to 14x GigE (2x onboard, 3x PCIe quad-port NICs), which would be 42x 1G in a 3-box config.

3PAR offers 16x GigE.

You'll want the 10G for switch uplinks obviously, but multiple GigE + 802.3ad (or multipath, as another poster mentioned) should take you quite a ways. Moreover, you generally don't care about sequential read throughput as much as you care about IOPS, but YMMV depending on what your apps are doing.
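The IOPS-versus-throughput point can be shown with a back-of-the-envelope sketch: for random I/O, a small spindle count runs out of IOPS long before a GigE link runs out of bandwidth. The per-disk IOPS figure below is a rough assumption, not a vendor spec:

```python
# Back-of-the-envelope: random-I/O throughput of a small disk group
# versus the bandwidth ceiling of a single GigE link.
disks = 10
iops_per_disk = 175          # assumed random IOPS for one 15k spindle
io_size_kb = 8               # typical small random I/O

total_iops = disks * iops_per_disk                 # 1750 IOPS
random_mbyte_s = total_iops * io_size_kb / 1024    # ~13.7 MB/s of random I/O
gige_mbyte_s = 1000 / 8                            # 125 MB/s link ceiling

print(f"{total_iops} IOPS -> {random_mbyte_s:.1f} MB/s random "
      f"vs {gige_mbyte_s:.0f} MB/s GigE ceiling")
```

With these assumptions the disks saturate at roughly a tenth of what the link can carry, which is why link speed is rarely the first bottleneck for random workloads.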

  19. #19
    Quote Originally Posted by cacheflymatt View Post
NetApp allows you to drop in 10G NICs.
NetApp is very expensive. For that price I think I can buy a 4Gbit FC SAN from IBM or HP.

Quote Originally Posted by cacheflymatt View Post
EqualLogic (now Dell) has at least 4x NICs per controller (or, say, 12x GigE in a 3-box config).
I'm on Dell's website right now...


Quote Originally Posted by cacheflymatt View Post
I'm not sure what LeftHand offers on the VSMs, but if you buy the software for, say, a Dell 2950, you could do 10G NICs or up to 14x GigE (2x onboard, 3x PCIe quad-port NICs), which would be 42x 1G in a 3-box config.
I don't think a home-made SAN will be the solution for a small datacenter (max 200 servers when fully loaded).

Quote Originally Posted by cacheflymatt View Post
You'll want the 10G for switch uplinks obviously, but multiple GigE + 802.3ad (or multipath, as another poster mentioned) should take you quite a ways. Moreover, you generally don't care about sequential read throughput as much as you care about IOPS, but YMMV depending on what your apps are doing.
That's true for 802.3ad.
Currently I'm planning a SAN for some virtual machines to use as web servers (Apache and FTP only). I'm planning to buy some R300s or 2950s and put 5-6 VMs inside each.
The VMs and their storage will be on the iSCSI SAN (maybe with SATA disks).
That way, if one node fails, I can move its VMs to another one.

But iSCSI sounds slow.

Can the same architecture also host mail servers?

If it's useful: currently our internet connections are two 15Mbit links (30Mbit aggregated).

(sorry for my English, I'm Italian)

  20. #20
    Quote Originally Posted by ale123 View Post
NetApp is very expensive. For that price I think I can buy a 4Gbit FC SAN from IBM or HP.
Not a high-end SAN, or you're negotiating wrong. You'll certainly get more IOPS/$ from NetApp than from HP; can't speak to IBM.


    Quote Originally Posted by ale123 View Post
I don't think a home-made SAN will be the solution for a small datacenter (max 200 servers when fully loaded).
It's not really home-made; it's fully supported, and it's certainly expensive. It's just an option to run their OS on certain existing hardware (HP DL360, Dell 2950). You can buy the VSMs directly from LeftHand, but I'm not sure what the NIC config options are.




    Quote Originally Posted by ale123 View Post
That's true for 802.3ad.
Currently I'm planning a SAN for some virtual machines to use as web servers (Apache and FTP only). I'm planning to buy some R300s or 2950s and put 5-6 VMs inside each.
The VMs and their storage will be on the iSCSI SAN (maybe with SATA disks).
That way, if one node fails, I can move its VMs to another one.
In that case, you could also look at the LeftHand VSA product, which would allow you to use the storage in your 2950s to build an iSCSI cluster. The VSA runs as a VM inside ESX.


    Quote Originally Posted by ale123 View Post
    But iSCSI sounds slow.
    Then you're not listening correctly.


    Quote Originally Posted by ale123 View Post
Can the same architecture also host mail servers?
    Which architecture?
    Last edited by cacheflymatt; 05-03-2008 at 06:55 PM. Reason: misquotes

  21. #21
    Quote Originally Posted by cacheflymatt View Post
Not a high-end SAN, or you're negotiating wrong. You'll certainly get more IOPS/$ from NetApp than from HP; can't speak to IBM.
Is IBM the slowest?


Quote Originally Posted by cacheflymatt View Post
Which architecture?
Currently: 2 MXes and one POP3/IMAP server with mailbox storage.

We want to virtualize the MXes and go from 2 to 4.
Then split the one POP3/IMAP server into 2 POP3 and 2 IMAP servers.

If possible, with 2 or 3 Dell 2950s with 4 or 8 GB of RAM and dual processors.

I've seen EqualLogic; it sounds very, very interesting. 3Gbit per controller, so 6Gbit in LACP (are the controllers active/active?).

But I don't understand how I can add more units to a single virtual SAN and increase throughput from 6Gbit to 12Gbit...

Have you ever used it?

  22. #22
    Join Date
    Feb 2004
    Posts
    371
    iSCSI WAY over Fiber

  23. #23
    Quote Originally Posted by carlostabs View Post
    iSCSI WAY over Fiber
OK, but I can't actually find any prices for the EqualLogic, though I think it is more expensive than the MSA2012fc or MSA2012i.

I've seen that the EqualLogic SAN has only 3x GbE, 3Gbit,
much less than the 8Gbit fibre we can find on a modern FC SAN.

Another thing I don't understand:
if I put another EqualLogic SAN in a cluster, will the data
on it be replicated 1:1 on each SAN, or will SAN 1 hold some of the data and SAN 2 the rest?

  24. #24
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,131
    Quote Originally Posted by carlostabs View Post
    iSCSI WAY over Fiber
Not unless you're running 10GE with iSCSI.
    Yellow Fiber Networks
    http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
    Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net

  25. #25
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,131
    Quote Originally Posted by ale123 View Post
    IBM are the slowest?
We have an IBM SAN and it's quick, and spec for spec it's better than the HP counterpart.

NetApp is built for NFS, not really a SAN, and its iSCSI is just a device on top of its NFS... kinda like how Sun's ZFS does its iSCSI as well... not optimal.

If you want true iSCSI, you should check out Reldata's unified storage gateway: attach 96 15k SAS drives to a PCI-E card, stick a 10GE card in another PCI-E slot, and it would be far faster than a 4Gbps FC SAN with multipathing and dual controllers.

Reldata's system basically just takes the LUNs, LVMs them together into a storage pool, and presents it with their own proprietary iSCSI code.
    Yellow Fiber Networks
    http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
    Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net

