Results 1 to 25 of 58
-
04-17-2008, 04:06 PM #1Aspiring Evangelist
- Join Date
- Jun 2006
- Posts
- 407
Do any of you large data center guys use SANs?
I am looking for large providers that use SANs in their environments.
If you use SANs in your DCs, how do you address the HBA/Fibre Channel cards?
For WWNs/WWPNs, what naming convention works best for you?
-
04-17-2008, 06:47 PM #2Aspiring Evangelist
- Join Date
- Jun 2006
- Posts
- 407
Nobody uses SANs in their racks or DCs?
-
04-17-2008, 07:30 PM #3Web Hosting Master
- Join Date
- Jun 2002
- Location
- PA, USA
- Posts
- 5,143
We use an iSCSI SAN. Simpler to set up.
Fluid Hosting, LLC - Enterprise Cloud Infrastructure: Cloud Shared and Reseller, Cloud VPS, and Cloud Hybrid Server
-
04-17-2008, 07:37 PM #4Vice Cheese
- Join Date
- Jan 2006
- Location
- Jersey
- Posts
- 2,971
Email: info ///at/// honelive.com
-
04-17-2008, 09:43 PM #5Local tech for Los Angeles
- Join Date
- Feb 2003
- Location
- Panorama City, CA
- Posts
- 2,581
We use EqualLogic SANs, with about 200 TB of space.
Dell bought EqualLogic not too long ago. Their MD1000s look spiffy.
-
04-17-2008, 10:14 PM #6Aspiring Evangelist
- Join Date
- Jun 2006
- Posts
- 407
-
04-17-2008, 11:37 PM #7Newbie
- Join Date
- May 2007
- Posts
- 18
iSCSI SAN, about 60 TB of storage.
Cheaper and easier than a Fibre Channel SAN.
-
04-18-2008, 12:16 PM #8Local tech for Los Angeles
- Join Date
- Feb 2003
- Location
- Panorama City, CA
- Posts
- 2,581
I will ask when I get in the office in a few hours,
but I believe it's by department:
HA-Node2-1of5 = Hospital Accounting cluster 2, node 1 of 5.
From when I brought one up, that's what I recall setting it up as.
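A convention like that is easy to generate and check mechanically. A minimal sketch, modeled on the example above (the function name and fields are hypothetical, not anything that poster actually runs):

```python
def node_name(dept_abbr: str, cluster: int, index: int, total: int) -> str:
    """Build a department-based node name like 'HA-Node2-1of5'."""
    if not 1 <= index <= total:
        raise ValueError("index must be between 1 and total")
    return f"{dept_abbr}-Node{cluster}-{index}of{total}"

print(node_name("HA", 2, 1, 5))  # -> HA-Node2-1of5
```

Keeping the convention in one helper like this means every node in a cluster gets a consistent, sortable name.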
-
04-18-2008, 06:02 PM #9Web Hosting Master
- Join Date
- Nov 2005
- Location
- Minneapolis, MN
- Posts
- 1,648
Standard practice in the enterprise space is to just use the burned-in address on the card. The WWN can, of course, be changed just like the MAC address of a NIC, but that is not common practice in most IT shops.
Are you sure you're referring to WWN naming, or to zone naming on the SAN switch?
Eric Spaeth
Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
"The really cool thing about facts is they remain true regardless of who states them."
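Since the burned-in WWPN is what usually ends up in zoning configs, a small formatter is handy. A sketch only; the sysfs path mentioned in the docstring is how Linux exposes FC HBAs, and the sample value below is made up for illustration:

```python
def format_wwpn(raw: str) -> str:
    """Turn a raw 64-bit WWPN, as read from e.g.
    /sys/class/fc_host/host0/port_name on a Linux host, into the
    colon-separated form most SAN switches and zoning tools display."""
    h = raw.lower()
    if h.startswith("0x"):
        h = h[2:]
    h = h.zfill(16)  # a WWPN is 64 bits = 16 hex digits
    return ":".join(h[i:i + 2] for i in range(0, 16, 2))

# Illustrative value; a real one comes from the HBA's burned-in address.
print(format_wwpn("0x21000024ff3dca1c"))  # -> 21:00:00:24:ff:3d:ca:1c
```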
-
04-20-2008, 03:19 PM #10Junior Guru
- Join Date
- Apr 2008
- Posts
- 192
We run HP EVA8100 clusters and NetApp 960c clusters; the EVAs use Fibre Channel, the NetApps iSCSI.
-
04-20-2008, 09:39 PM #11Rockin' the beer gut
- Join Date
- May 2006
- Location
- NJ, USA
- Posts
- 6,645
Colo4Jax (well, more specifically, BigVPS) uses a SAN
AS395558
-
04-21-2008, 10:42 AM #12Disabled
- Join Date
- Jul 2006
- Location
- Detroit, MI
- Posts
- 1,962
-
04-21-2008, 11:54 AM #13WHT Addict
- Join Date
- Jun 2007
- Posts
- 110
As other people have mentioned, iSCSI is a great alternative; we've got both in production. I think you'll find that most people don't change the WWN or WWPN: very few firmware/driver combos support changing it reliably, and there's really no reason to do it. What naming convention do you use for your network card MAC addresses?
-
05-03-2008, 06:02 PM #14Newbie
- Join Date
- May 2008
- Posts
- 10
iSCSI is cheaper than FC but *MUCH* slower: 1 Gbit/s versus 4 Gbit/s.
If you have 10 SAS disks inside and 10 servers reading from it, you fill the Ethernet channel very quickly.
How many servers do you have reading from the SAN, and what kind of traffic do you have?
Are there any virtual machines with images on the SAN?
I'm interested in building an iSCSI SAN for some virtual machines (more or less 40 or 50), but iSCSI is too slow.
50 VMs reading from a single 1 Gbit channel means 20 Mbit for each VM, which is 2.5 MB/s.
Too slow!!
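The arithmetic in that post can be sketched as a worst-case even split. Note the assumption: every VM reading flat-out at the same time, which real workloads rarely do, and no multipathing:

```python
def per_vm_mbytes(link_gbit: float, vms: int) -> float:
    """Even split of one link across all VMs, in MB/s
    (1 Gbit = 1000 Mbit, 8 bits per byte)."""
    return link_gbit * 1000 / vms / 8

print(per_vm_mbytes(1, 50))  # -> 2.5   (single GigE link, 50 VMs)
print(per_vm_mbytes(4, 50))  # -> 10.0  (4 Gbit FC link, 50 VMs)
```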
-
05-03-2008, 06:03 PM #15WHT Addict
- Join Date
- Jun 2007
- Posts
- 110
-
05-03-2008, 06:10 PM #16Disabled
- Join Date
- Jul 2006
- Location
- Detroit, MI
- Posts
- 1,962
-
05-03-2008, 06:12 PM #17Newbie
- Join Date
- May 2008
- Posts
- 10
-
05-03-2008, 06:29 PM #18WHT Addict
- Join Date
- Jun 2007
- Posts
- 110
There aren't too many vendors offering 10G ports right now.
NetApp allows you to drop in 10G NICs.
EqualLogic (now Dell) has at least 4x NICs per controller (say 12x GigE in a 3-box config).
I'm not sure what LeftHand offers on the VSMs, but if you buy the software for, say, a Dell 2950, you could do 10G NICs or up to 14x GigE (2x onboard, 3x PCIe quad-port NICs); that would be 42x 1G in a 3-box config.
3PAR offers 16x GigE.
You'll want the 10G for switch uplinks obviously, but multiple GigE + 802.3ad (or multipathing, as another poster mentioned) should take you quite a ways. Moreover, you generally don't care about sequential read throughput as much as you care about IOPS, but YMMV depending on what your apps are doing.
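For a rough feel of the raw link counts quoted above, a sketch only: 802.3ad hashes per flow, so a single stream never sees the full aggregate, and as noted, IOPS usually bottleneck before sequential bandwidth does.

```python
# Aggregate GigE link capacity of the configs mentioned in the thread.
# Counts are as quoted by the posters; real throughput is lower.
configs = {
    "EqualLogic 3-box": 3 * 4,           # 4x GigE per controller
    "Dell 2950 3-box SW SAN": 3 * 14,    # 2x onboard + 3x quad-port PCIe
    "3PAR": 16,                          # 16x GigE
}
for name, nics in configs.items():
    print(f"{name}: {nics}x GigE = {nics} Gbit/s aggregate")
```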
-
05-03-2008, 06:38 PM #19Newbie
- Join Date
- May 2008
- Posts
- 10
NetApp is very expensive. For that price I think I can buy a 4 Gbit FC SAN from IBM or HP.
Right now I want to plan a SAN for some virtual machines to use as web servers (Apache and FTP only). I'm planning to buy some R300s or 2950s and put 5-6 VMs on each.
The VMs and their storage will be on the iSCSI SAN (maybe with SATA disks).
That way, if one node fails, I can move its VMs to another one.
But iSCSI sounds slow.
Can the same architecture also host mail servers?
If it's useful to know, our internet connections are currently two 15 Mbit links (30 Mbit aggregated).
(Sorry for my English, I'm Italian.)
-
05-03-2008, 06:46 PM #20WHT Addict
- Join Date
- Jun 2007
- Posts
- 110
...not a high-end SAN, or you're negotiating wrong. You'll certainly get more IOPS/$ from NetApp than HP; can't speak to IBM.
It's not really home-made: it's fully supported, and it's certainly expensive. It's just an option to run their OS on certain existing hardware (HP DL360, Dell 2950). You can buy the VSMs directly from LeftHand, but I'm not sure what the NIC config options are.
In that case, you could also look at the LeftHand VSA product, which would allow you to use the storage in your 2950s to build an iSCSI cluster. The VSA runs as a VM inside ESX.
Then you're not listening correctly.
Which architecture?
Last edited by cacheflymatt; 05-03-2008 at 06:55 PM. Reason: misquotes
-
05-03-2008, 06:55 PM #21Newbie
- Join Date
- May 2008
- Posts
- 10
Are the IBMs the slowest?
Which architecture? Currently 2 MXes and one POP3/IMAP server with mailbox storage.
We want to virtualize the MXes, going from 2 to 4.
Then split the one POP3/IMAP server into 2 POP3 and 2 IMAP servers.
If possible, with 2 or 3 Dell 2950s with 4 or 8 GB of RAM and dual processors.
I've seen EqualLogic; it sounds very, very interesting. 3 Gbit per controller, so 6 Gbit in LACP (are the controllers active/active?).
But I don't understand how I can add more units to a single virtual SAN and increase throughput from 6 Gbit to 12 Gbit...
Have you ever used it?
-
05-04-2008, 02:04 AM #22Aspiring Evangelist
- Join Date
- Feb 2004
- Posts
- 371
iSCSI WAY over Fiber
-
05-04-2008, 06:45 AM #23Newbie
- Join Date
- May 2008
- Posts
- 10
OK, but right now I can't find any prices for the EqualLogic; I think it is more expensive than the MSA2012fc or MSA2012i.
I've seen that the EqualLogic SAN has only 3x GbE, i.e. 3 Gbit,
which is much less than the 8 Gbit fibre we can find on modern FC SANs.
Another thing I don't understand:
if I put another EqualLogic SAN in a cluster, will the data be replicated 1:1 on each SAN, or will SAN 1 have some of the data and SAN 2 have the rest?
-
05-04-2008, 01:38 PM #24Master of the Truth
- Join Date
- Mar 2006
- Location
- Reston, VA
- Posts
- 3,131
Yellow Fiber Networks
http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net
-
05-04-2008, 01:42 PM #25Master of the Truth
- Join Date
- Mar 2006
- Location
- Reston, VA
- Posts
- 3,131
We have an IBM SAN and it's quick, and spec for spec it's better than the HP counterpart.
NetApp is built for NFS, not really a SAN; its iSCSI is just a device on top of its NFS, kind of like how Sun's ZFS does its iSCSI as well. Not optimal.
If you want true iSCSI, you should check out Reldata's unified storage gateway. Attach 96x 15k SAS drives to a PCIe card, then stick a 10GbE card in another PCIe slot, and it would be far faster than a 4 Gbps FC SAN with multipathing and dual controllers.
Reldata's system basically just takes the LUNs, LVMs them together into a storage pool, and presents it with their own proprietary iSCSI code.