  1. #1
    Join Date
    Apr 2000
    Location
    Brisbane, Australia
    Posts
    2,602

    Question Network-wise: sustain 200MB/s - 600MB/s data transfer speed between servers?

    Networking noob for such setups

    For colocation, what would be the most cost-effective network setup for a backup server to sustain 200MB/s to 600MB/s data transfer speeds between source and destination servers decked out with SSDs?

    Any ideas and suggestions would be greatly appreciated
    : CentminMod.com Nginx Installer Nginx 1.25, PHP-FPM, MariaDB 10 CentOS (AlmaLinux/Rocky testing)
    : Centmin Mod Latest Beta Nginx HTTP/2 HTTPS & HTTP/3 QUIC HTTPS supports TLS 1.3 via OpenSSL 1.1.1/3.0/3.1 or BoringSSL or QuicTLS OpenSSL
    : Nginx & PHP-FPM Benchmarks: Centmin Mod vs EasyEngine vs Webinoly vs VestaCP vs OneInStack
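
    For context, here is a rough back-of-the-envelope conversion from the requested disk-level throughput (MB/s, megabytes) to the network line rate (Gbit/s) needed to carry it. This is a sketch that ignores TCP/IP and framing overhead, so treat the results as minimums:

```python
# Rough conversion: sustained throughput in MB/s (megabytes) -> Gbit/s (gigabits).
# Real-world Ethernet throughput is lower due to protocol overhead, so these
# are minimum line rates, not guarantees.

def required_gbits(mb_per_sec: float) -> float:
    """Convert MB/s to Gbit/s (8 bits per byte, 1000 MB per GB)."""
    return mb_per_sec * 8 / 1000

for target in (200, 600):
    print(f"{target} MB/s needs at least {required_gbits(target):.1f} Gbit/s of line rate")
# 200 MB/s -> 1.6 Gbit/s (more than one gigabit port)
# 600 MB/s -> 4.8 Gbit/s (well into 10GbE territory)
```

    Even the low end of the range exceeds a single gigabit port, which is why the replies below jump straight to bonding or 10GbE.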

  2. #2
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,131
    A switch with Gigabit ports? Or am I confused by your question?
    Yellow Fiber Networks
    http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
    Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net

  3. #3
    I believe he is saying MB/s, so either 10Gbit/s or bonded 1Gbit/s. If it's 200MB/s, use 2x1Gbit/s in LACP/802.3ad. If it's 600MB/s, use 10Gbit; it will probably be more cost-effective.

  4. #4
    Join Date
    Dec 2009
    Posts
    2,297
    A 10gig switch would probably be best. Depending on the type of traffic, the switching gear you use, and its L2-L4 LAG hashing capability, bonding may not really give you what you need. Plus, to get up to 600MB/s you would need 5 or 6 x 1gig ports (that gets almost as costly as 10gig gear, plus a lot of added config and headache).
    REDUNDANT.COM | Equinix Data Centers | Performance Optimized Network
    Managed & Unmanaged
    • Servers • Colocation • Cloud • VEEAM
    sales@redundant.com

  5. #5
    Join Date
    Jul 2009
    Location
    The backplane
    Posts
    1,788
    Quote Originally Posted by virtuallynathan View Post
    if it's 200MB/s use 2x1Gbit/s in LACP/802.3ad.
    Maybe, depending on traffic patterns and hashing.

  6. #6
    I've had no issues getting 200MB/s+ from 2x1Gbit/s on CentOS + Junos/IOS.

  7. #7
    Join Date
    Jul 2009
    Location
    The backplane
    Posts
    1,788
    It's not the total throughput that's in question.

  8. #8
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,225
    Most cost-effective for you would depend on whether you currently have switch(es) and servers that support 10Gbit, or whether it would require a lot of changes/new gear.

    Otherwise it's pretty cheap to add another dual/quad 1Gbit NIC to your servers, get 48-port gigabit switches and bond them. Configuration is pretty easy.

  9. #9
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,957
    Quote Originally Posted by virtuallynathan View Post
    I've had no issues getting 200MB/s+ from 2x1Gbit/s on CentOS + Junos/IOS.
    With most of the ways the traffic can be balanced, if all the traffic is from one server to one other server, you may run into issues, since balancing is often done using a hash of the MAC addresses.
    Karl Zimmerman - Founder & CEO of Steadfast
    VMware Virtual Data Center Platform

    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation

  10. #10
    Join Date
    Feb 2011
    Posts
    680
    If you are talking about a single transfer (wget, rsync, socket to socket), an 802.3ad bundle of 1Gig links will not work. The protocol uses a hash of the destination IP/MAC address to calculate which of the gig links to use. As the MAC address and IP are always the same, it will always use the same link. An extension also uses a port number in the hash, but this can be limiting for backup solutions too, as they often use fixed ports.

    Just go 10Gig and be done with it. You can get a switch with 48 x 1Gig + 4x 10 gig for $1300 (plus optics).
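
    The hashing limitation described above can be illustrated with a toy model of Linux bonding's default "layer2" transmit hash, which XORs the last byte of the source and destination MAC addresses and takes the result modulo the number of slave links (a simplified sketch; the exact policy depends on the driver and switch):

```python
# Toy model of Linux bonding's "layer2" xmit_hash_policy:
# slave = (last byte of src MAC XOR last byte of dst MAC) mod num_slaves.
# A single server-to-server flow has fixed MACs, so every packet of that
# flow lands on the same slave link; bonding adds no speed for one flow.

def layer2_slave(src_mac: str, dst_mac: str, num_slaves: int) -> int:
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % num_slaves

src, dst = "00:1b:21:aa:bb:01", "00:1b:21:aa:bb:02"  # hypothetical MACs
picks = {layer2_slave(src, dst, 2) for _ in range(1000)}
print(picks)  # {1}: the whole flow always uses one slave of the pair
```

    This is why a 2x1Gbit bond caps a single rsync/wget stream at ~1Gbit/s even though the aggregate is 2Gbit/s.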

  11. #11
    Join Date
    Jun 2011
    Location
    Portsmouth, UK
    Posts
    327
    Hi,

    If your configuration is just for backups and it's only a single device feeding it, you could just put 10G cards in each server with a bit of fibre in between; no need for a switch if it's a single device.

    If you have a backup server and multiple feeds, you may only need the backup server on 10G.

    But you also need to ensure your SSDs and I/O paths are up to the job, and you may want to consider what happens when the backup grows in size. I'm assuming you're trying to complete it in a short time window?
    ServerHouse | Est 2001 | 3x UK Data centres | Roof access, satcoms | High density | DR as standard
    http://www.serverhouse.co.uk
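
    The "short time window" point is worth quantifying: link speed determines how long a full backup takes. A quick sketch, assuming sustained throughput with no overhead (110 MB/s is roughly the practical ceiling of a single gigabit link):

```python
# How long does a backup of a given size take at a sustained rate?
# hours = (data in GB * 1000 MB/GB) / (rate in MB/s) / 3600 s/h

def transfer_hours(data_gb: float, mb_per_sec: float) -> float:
    seconds = data_gb * 1000 / mb_per_sec  # GB -> MB, then divide by rate
    return seconds / 3600

for rate in (110, 200, 600):  # ~1GbE practical ceiling, plus the two targets
    print(f"2 TB at {rate} MB/s ~= {transfer_hours(2000, rate):.1f} h")
# 2 TB at 110 MB/s ~= 5.1 h; at 200 MB/s ~= 2.8 h; at 600 MB/s ~= 0.9 h
```

    So moving from a single gigabit link to the 600MB/s target shrinks a 2TB window from about five hours to under one.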

  12. #12
    My solution:
    1x Dell 6224 with 2 multimode SFP+ (10Gbit) modules (or another cheap 10Gbit switch), or no switch at all and connect both directly
    2x Intel X520 10Gbit network cards
    SSD, HDD and OS tuning might be needed

    Cheers
    WWW.NAUCUM.NET
    COLOCATION/HOUSING ´´´ IP-TRANSIT ...AND MORE
    tel: +49 1635000676 ´´´´´ eMail: info@naucum.net
    Frankfurt, Germany

  13. #13
    Join Date
    Apr 2000
    Location
    Brisbane, Australia
    Posts
    2,602
    Thanks guys for the useful input.

    That definitely rules out bonded gigabit NIC configs if it's 1-to-1 server transfers. The goal is transfers as fast as possible in a 'short time window'. I would have larger 1-2TB SATA disks on the backup server to move and archive older backups off the SSDs.

    I've never dealt with 10Gbit NIC/switch/cabling gear, so I'm not sure on brands, models or specifics; any input there would be great too.

  14. #14
    Join Date
    Jun 2011
    Location
    Portsmouth, UK
    Posts
    327
    Intel tend to make good NICs.

    But if you're really after performance it'll depend on your OS, driver support etc.

    Cables will need to be fibre and will be determined by the NIC and the distance between the two.

  15. #15
    Assuming distances are short, wouldn't InfiniBand be just as good and much cheaper than 10G ethernet?

  16. #16
    Join Date
    Dec 2009
    Posts
    2,297
    Quote Originally Posted by serverhouse View Post
    HI,

    If your configuration is just for backups and it's only a single device feeding it you could just put 10G cards in each server and a bit of fibre in between, no need to switch it if it's a single device.
    Skip the fiber and put a 10gig card in each of the two servers with a twinax cable between them. Skip the expensive optics and have an easy 10Gig solution.

  17. #17
    Join Date
    Apr 2000
    Location
    Brisbane, Australia
    Posts
    2,602
    Quote Originally Posted by serverhouse View Post
    Intel tend to make good NIC's

    But if you're really after performance it'll depend on your O/S drive support etc.

    Cables will need to be fibre and will be determined by the NIC and distance between the two.
    Intended OS is CentOS 6.x 64bit based

    Quote Originally Posted by random321 View Post
    Assuming distances are short, wouldn't InfiniBand be just as good and much cheaper than 10G ethernet?
    What gear would I need for InfiniBand?

    Quote Originally Posted by TalentHouse Hosting View Post
    Skip the Fiber and put a 10gig card in each of the two servers with a twinax cable between the two. Skip the expensive optics and have an easy 10Gig solution.
    Thanks, so would it be something like:

    2x of these HP NC552SFP 10GbE 2P (one for each server)
    http://www.i-tech.com.au/products/11...ERADAPTER.aspx

    or do these only work in HP servers?

    1x HP BLc SFP+ 5m 10GbE Copper Cable
    http://www.i-tech.com.au/products/91...per_Cable.aspx
    Last edited by eva2000; 02-16-2012 at 02:00 PM.

  18. #18
    Join Date
    Apr 2000
    Location
    Brisbane, Australia
    Posts
    2,602
    Also, what's the difference between the X520 line-up? http://www.intel.com/content/www/us/...rnet-x520.html

    Intel X520-DA2 vs Intel X520-T2

    edit: okay, I see the T2 supports RJ45 copper cabling; is that what the HP cable is? Or is it the SFP+ direct attach copper that the DA2 supports?

    Does the Intel X520-DA2 suit only the N7700/N8800 Pro?
    http://www.techbuy.com.au/p/132760/index.asp

    Cisco SFP-H10GB-CU5M= 10-Gigabit Copper SFP Transceiver Module - 5M Cable, Twinax Cable
    http://www.techbuy.com.au/p/176442/index.asp
    Last edited by eva2000; 02-16-2012 at 02:16 PM.

  19. #19
    Join Date
    Feb 2002
    Location
    New York, NY
    Posts
    4,618
    Quote Originally Posted by serverhouse View Post
    Cables will need to be fibre and will be determined by the NIC and distance between the two.
    For short distances, such as when the switch is in the same rack or a very close rack, twinax is much cheaper than fiber solutions. A SFP+ twinax cable costs well under $100, and includes the SFP+ modules on both ends.

    http://en.wikipedia.org/wiki/Twinaxi...10GSFP.2BCu.29

    Quote Originally Posted by random321 View Post
    Assuming distances are short, wouldn't InfiniBand be just as good and much cheaper than 10G ethernet?
    40Gbps (QDR) InfiniBand gear is very competitively priced against 10GE. We use it for a storage network, and I think the total price came out less than what 10GE would have cost. An 8-port switch can be picked up for under $2000. Cards are around $500-700, depending on options, and the cable is <$60.

    For someone on a budget who doesn't mind used gear, 20Gbps (DDR) InfiniBand cards can be found for around $250.
    Scott Burns, President
    BQ Internet Corporation
    Remote Rsync and FTP backup solutions
    *** http://www.bqbackup.com/ ***

  20. #20
    Join Date
    Feb 2002
    Location
    New York, NY
    Posts
    4,618
    Quote Originally Posted by eva2000 View Post
    thanks so would it be like

    2x of these HP NC552SFP 10GbE 2P (one for each server)
    http://www.i-tech.com.au/products/11...ERADAPTER.aspx

    or these only work in HP servers ?

    1x HP BLc SFP+ 5m 10GbE Copper Cable
    http://www.i-tech.com.au/products/91...per_Cable.aspx
    The HP cards should work in other servers, but I'm not familiar with them. I tend to stick with Intel for NICs.

    Why is the cable so expensive? Something like this would do fine:

    http://www.i-tech.com.au/products/11...e_3_M_SFP.aspx

    If you don't care about having an official brand name model, there are even cheaper equivalents:

    http://www.cablesandkits.com/cisco-c...le-p-5526.html

    Quote Originally Posted by eva2000 View Post
    edit: okay, I see the T2 supports RJ45 copper cabling; is that what the HP cable is? Or is it the SFP+ direct attach copper that the DA2 supports?
    The HP cable is a SFP+ twinax cable, which is what the X520-DA2 uses. The X520-T2 can use a standard $5 cat6 cable.

    Quote Originally Posted by eva2000 View Post
    Intel X520-DA2 to suit only N7700/N8800 Pro only ??
    http://www.techbuy.com.au/p/132760/index.asp
    I looked up the N8800 Pro to see what it is. Since it's a 2U device, it probably requires low-profile cards. As far as I know, all the X520 cards are low-profile, and come with both a low-profile and full-height bracket that you can change.

  21. #21
    Join Date
    Apr 2000
    Location
    Brisbane, Australia
    Posts
    2,602
    Thanks Scott, best bargain/price tips are much appreciated!

    http://www.cablesandkits.com/cisco-c...le-p-5526.html very cheap indeed... searching Aussie retailers, the cheapest no-name brand 1 metre cable is still AUD$120 (US$132) here, and a 3 metre cable is AUD$190 (US$209)!

  22. #22
    Join Date
    Feb 2002
    Location
    New York, NY
    Posts
    4,618
    Quote Originally Posted by eva2000 View Post
    Thanks Scott, best bargain/price tips are much appreciated!

    http://www.cablesandkits.com/cisco-c...le-p-5526.html very cheap indeed... searching Aussie retailers cheapest no-name brand 1 metre cable is still AUD$120 or US$132 here and 3 metre cable is AUD$190 or US$209!
    Perhaps there's a market opportunity for someone to import a shipment of them for resale.

    I did find a cheaper Aussie vendor:

    1M cable AUD$108.68 w/ GST:
    http://www.hardwaresolution.com.au/p...L+1+Meter.html

    3M cable AUD$152.15 w/ GST:
    http://www.hardwaresolution.com.au/p...BL+3+Mete.html

  23. #23
    Join Date
    Jun 2009
    Location
    Stockholm
    Posts
    136
    Quote Originally Posted by serverhouse View Post
    Intel tend to make good NIC's

    But if you're really after performance it'll depend on your O/S drive support etc.

    Cables will need to be fibre and will be determined by the NIC and distance between the two.
    I second that Intel makes good NICs.

    For short distances there are actually a couple of copper solutions:
    - 10GBASE-T (RJ45/cat6a/cat7)
    - CX4
    - TwinaX/DAC SFP+

    I think the CX4 solution is cheapest, but if you want to move your servers further apart, a NIC with SFP+ port(s) and a TwinaX/DAC cable is probably the best way to go; you don't have to buy optics with the DAC cables either. And if you move the servers more than ~7m apart you can simply get optics and a fibre patch without replacing the NICs or switches/modules.

    Edit: sorry for this, I was fooled by the stupid iPhone cheating with browser caches, so I didn't even see page 2. :-P

    //T
    Last edited by rnts; 02-17-2012 at 12:51 PM.

  24. #24
    Join Date
    Apr 2000
    Location
    Brisbane, Australia
    Posts
    2,602
    Thanks Scott, yes there's a market; then again I'm sure some Aussies buy overseas where it's cheaper inclusive of shipping!

    You guys mentioned InfiniBand; I did some reading and it looks like a lot cheaper alternative.

    eBay has these cables for AUD$75 inc. shipping:

    InfiniBand 10Gb/s 4X STRT-STRT SFF-8470 connectors, 10M/33FT cable. According to the internet these cables can be used with InfiniBand and SAS devices.

    eBay also has the MELLANOX INFINIHOST MHEA28-XTC DUAL PORT 10GB/s for AUD$75 each inc. shipping.

    Does that mean for a straight 1-to-1 server direct connection I only need 2x Mellanox PCI-E cards + 1 cable, so all up 3x AUD$75? And real throughput would be up to 8Gb/s or thereabouts.

    I guess a 10GbE NIC setup would be much closer to plug and play than the InfiniBand alternative on CentOS 6.x servers?

    What are better online stores to buy such gear besides eBay?

  25. #25
    Join Date
    Jan 2012
    Location
    UK
    Posts
    236
    The AOC-STG-I2T card from Supermicro is about 300ish euro ($400).

    It's an Intel-chipset dual-port RJ45 10Gbit Ethernet card.

    It works great, performs great, and makes everything so simple with cat6.
