  1. #1

    Question: PERC 6/i or H700/H710P?

    Hi

    I was just wondering: how does the PERC 6/i compare to the H700/H710P performance-wise?

    I run production VMs, and high disk IOPS is a major concern for me. I found out that the H700 costs about $300 extra over the PERC 6/i; do you think the H700 is worth the upgrade?

    Note: I will most likely go with 8x 900GB 10K RPM 2.5" hard drives and hook them up in RAID 50 (which has proved great for me so far).
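    As a rough back-of-the-envelope for that array, assuming RAID 50 laid out as two 4-drive RAID 5 spans and a ballpark ~140 random IOPS per 10K drive (my assumptions, not vendor figures):

    Code:
    # Quick sizing sketch for 8x 900GB 10K drives in RAID 50
    # (assumed layout: two 4-drive RAID 5 spans striped together).
    DRIVES, SIZE_GB, SPANS = 8, 900, 2
    PER_DISK_IOPS = 140                      # ballpark for a 10K RPM SAS drive

    usable_gb = (DRIVES - SPANS) * SIZE_GB   # one drive's worth of parity per span
    read_iops = DRIVES * PER_DISK_IOPS       # reads are striped over all spindles

    print(f"usable: {usable_gb} GB, ~{read_iops} random read IOPS")
    # -> usable: 5400 GB, ~1120 random read IOPS (random writes land lower
    #    because of the RAID 5 parity penalty)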

    Thoughts are greatly appreciated.

  2. #2
    You're comparing a roughly six-year-old PERC 6/i RAID controller with a four-year-old (H700) and a two-year-old (H710) design...

    You want high IOPS? My proposal: take the H700 or H710P and add 1-2 SSDs for caching (the RAID controller manages this itself), and your VMs will fly. The PERC H700/H710 provide all the nice LSI options (FastPath, CacheCade) without any license upgrade.
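    To sketch why the cache helps, here is the simple weighted-latency model (the 0.1ms SSD and 8ms HDD service times are illustrative assumptions, not measurements):

    Code:
    # Average read latency behind an SSD cache:
    # t_avg = hit_ratio * t_ssd + (1 - hit_ratio) * t_hdd
    def avg_latency_ms(hit_ratio, t_ssd_ms=0.1, t_hdd_ms=8.0):
        return hit_ratio * t_ssd_ms + (1 - hit_ratio) * t_hdd_ms

    for h in (0.0, 0.5, 0.9, 0.99):
        print(f"hit ratio {h:.0%}: ~{avg_latency_ms(h):.2f} ms per read")
    # 0% -> 8.00 ms, 50% -> 4.05 ms, 90% -> 0.89 ms, 99% -> 0.18 ms

    The payoff is almost entirely in how much of the hot set the SSDs actually capture.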

  3. #3
    Quote Originally Posted by pass View Post
    You're comparing a roughly six-year-old PERC 6/i RAID controller with a four-year-old (H700) and a two-year-old (H710) design...

    You want high IOPS? My proposal: take the H700 or H710P and add 1-2 SSDs for caching (the RAID controller manages this itself), and your VMs will fly. The PERC H700/H710 provide all the nice LSI options (FastPath, CacheCade) without any license upgrade.
    Hi, with all due respect to everyone using CacheCade: I enabled CacheCade and tried various settings and configurations on the RAID controllers (stripe size, RAID level, etc.) in my two Dell servers with an H700 (previous server) and an H710P to get the servers flying as I expected, but my conclusion was that CacheCade is just a waste of time!

    So after running CacheCade for several months with 2x 240GB SSDs, I finally gave up on it, as I was still getting ordinary disk IOPS and "Average Disk Queue Length" (in Windows), so I just disabled CacheCade on the controller and decided to initialize the two SSDs as direct storage!

    In my opinion, if you want SSD performance, get SSDs. Period!
    CacheCade is just a waste of time, from what I have experienced.

    Server configuration:
    2x X5650 @ 2.67GHz
    96GB DDR3 RAM
    H710P RAID controller
    12x identical Hitachi 7.2K RPM 6Gbps enterprise-grade SAS hard drives (11 of them in RAID 0: bad for redundancy, best performance!)
    Dual gigabit Ethernet ports, etc.

    I am NOT saying I got bad performance before or after CacheCade was disabled; I'm saying that enabling or disabling CacheCade made almost no difference performance-wise, and it was never really impressive with CacheCade in use. After several HD Tune tests, IOPS would SOMETIMES fly, but only under the 64KB and 1MB random-access tests. In real life I use Performance Monitor with the "Average Disk Queue Length" counter, and the values were almost the same before and after CacheCade (with Performance Monitor running exhaustively, 24/7 nonstop!).
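    (For anyone else watching that counter: Little's law ties it directly to IOPS and latency. A sketch with illustrative numbers, not readings from my box:)

    Code:
    # Little's law: Avg. Disk Queue Length = IOPS x average latency (in seconds).
    def avg_queue_length(iops, latency_ms):
        return iops * latency_ms / 1000.0

    # The same ~800 IOPS workload with HDD-like vs SSD-like service times:
    print(avg_queue_length(800, 8.0))    # ~6.4 outstanding I/Os on 8 ms disks
    print(avg_queue_length(800, 0.2))    # ~0.16 if the hot reads came off SSD
    # A queue length that barely moves after enabling CacheCade suggests the
    # hot reads were already served from RAM, or were never promoted to SSD.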

    This is simply my experience with CacheCade; feel free to share your thoughts...

    Thanks,

  4. #4
    Quote Originally Posted by HD Seed View Post
    Hi, with all due respect to everyone using CacheCade: I enabled CacheCade and tried various settings and configurations on the RAID controllers (stripe size, RAID level, etc.) in my two Dell servers with an H700 (previous server) and an H710P to get the servers flying as I expected, but my conclusion was that CacheCade is just a waste of time!

    So after running CacheCade for several months with 2x 240GB SSDs, I finally gave up on it, as I was still getting ordinary disk IOPS and "Average Disk Queue Length" (in Windows), so I just disabled CacheCade on the controller and decided to initialize the two SSDs as direct storage!

    In my opinion, if you want SSD performance, get SSDs. Period!
    CacheCade is just a waste of time, from what I have experienced.

    Server configuration:
    2x X5650 @ 2.67GHz
    96GB DDR3 RAM
    H710P RAID controller
    12x identical Hitachi 7.2K RPM 6Gbps enterprise-grade SAS hard drives (11 of them in RAID 0: bad for redundancy, best performance!)
    Dual gigabit Ethernet ports, etc.

    I am NOT saying I got bad performance before or after CacheCade was disabled; I'm saying that enabling or disabling CacheCade made almost no difference performance-wise, and it was never really impressive with CacheCade in use. After several HD Tune tests, IOPS would SOMETIMES fly, but only under the 64KB and 1MB random-access tests. In real life I use Performance Monitor with the "Average Disk Queue Length" counter, and the values were almost the same before and after CacheCade (with Performance Monitor running exhaustively, 24/7 nonstop!).

    This is simply my experience with CacheCade; feel free to share your thoughts...

    Thanks,
    I'm not 100% sure, but last I checked, Dell's CacheCade was only 1.0, i.e. read-only. If your hot data fits into RAM, you will see no difference with CacheCade 1.0; 2.0 does really well with high-burst write workloads. At this point, 1TB SSDs are so cost-effective that if you need less than 12TB of usable space, it's hard to justify spinning rust.
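    Here is a toy simulation of that point: a skewed read workload passing through an LRU RAM cache first, with a read-only SSD cache behind it. All sizes and the workload shape are made-up assumptions, not a model of CacheCade internals:

    Code:
    import random
    from collections import OrderedDict

    class LRU:
        """Minimal LRU cache; access() reports a hit and inserts on miss."""
        def __init__(self, capacity):
            self.capacity, self.store = capacity, OrderedDict()

        def access(self, key):
            hit = key in self.store
            if hit:
                self.store.move_to_end(key)
            else:
                self.store[key] = None
                if len(self.store) > self.capacity:
                    self.store.popitem(last=False)
            return hit

    random.seed(1)
    N_BLOCKS, N_OPS = 1_000_000, 500_000
    ram = LRU(50_000)     # page cache big enough to hold the hot set
    ssd = LRU(200_000)    # read-only SSD cache sitting behind it

    ram_hits = ssd_hits = disk_reads = 0
    for _ in range(N_OPS):
        # Pareto draw -> heavily skewed toward a small set of hot blocks
        block = min(N_BLOCKS - 1, int(random.paretovariate(1.1)))
        if ram.access(block):
            ram_hits += 1
        elif ssd.access(block):
            ssd_hits += 1          # the only reads an RO cache can speed up
        else:
            disk_reads += 1

    print(f"RAM hits {ram_hits/N_OPS:.1%}, SSD hits {ssd_hits/N_OPS:.1%}, "
          f"disk {disk_reads/N_OPS:.1%}")
    # When the hot set fits in RAM, the SSD tier only ever sees the cold tail,
    # which is too scattered to cache well: exactly the "no difference" case.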

  5. #5
    Major difference:

    PERC 6/i => 3Gb/s SAS
    H700/H710P => 6Gb/s SAS

    When you add the "P", it means you get an extra 512MB of cache memory, for a total of 1GB.
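    The raw per-lane math behind those link speeds (3/6Gb/s SAS uses 8b/10b encoding, so 80% of the line rate is payload; protocol overhead ignored):

    Code:
    # Per-lane SAS throughput after 8b/10b encoding (8 data bits per 10 line bits).
    def sas_lane_mbytes(line_rate_gbps):
        return line_rate_gbps * 1e9 * 8 / 10 / 8 / 1e6   # -> MB/s

    for gbps, card in [(3, "PERC 6/i"), (6, "H700/H710P")]:
        print(f"{card}: ~{sas_lane_mbytes(gbps):.0f} MB/s per lane")
    # -> ~300 vs ~600 MB/s per lane; the usual x4 internal connector
    #    multiplies that by four.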

  6. #6
    @HD Seed
    I don't work with virtualization, but with high-traffic web servers. Let me show you what I have compared:

    1 box with 12x 960GB SAS enterprise SSDs (PERC H800, RAID 6)

    1 box with 10x 3TB nearline SAS enterprise drives plus 2x Crucial 960GB consumer SSDs (PERC H800, RAID 6, CacheCade activated)

    The above DAS storage boxes are connected to two R610s (2x X5650, 96GB) and are used for delivering images and streaming videos.

    If both are measured with HD Tune, the SSD storage shows awesome numbers, but in our real business case I see only a small advantage from the SSD box (though it's a nice toy) once CacheCade is activated on the HDD box.

  7. #7
    It's almost February 2014 already, and Dell PowerEdge servers still don't offer Xeon E5-26xx v2 (Ivy Bridge) CPUs, and the PERC RAID cards still show no advancement to PCIe 3.0 or CacheCade 2.0 (read & write).

    The H710P merely equals the outdated LSI 9265-8i PCIe 2.0 card, while 9271-8i PCIe 3.0 cards, with or without CacheCade 2.0, have been all over the place for over a year now...

  8. #8
    It would just feel great to see someone run HD Tune access-time tests with 8-12x 7.2K RPM SAS disk drives in a RAID array WITH 240GB+ of CacheCade...

    Nonetheless, thanks to all for sharing your thoughts.

  9. #9
    Quote Originally Posted by [email protected] View Post
    It's almost February 2014 already, and Dell PowerEdge servers still don't offer Xeon E5-26xx v2 (Ivy Bridge) CPUs, and the PERC RAID cards still show no advancement to PCIe 3.0 or CacheCade 2.0 (read & write).

    The H710P merely equals the outdated LSI 9265-8i PCIe 2.0 card, while 9271-8i PCIe 3.0 cards, with or without CacheCade 2.0, have been all over the place for over a year now...
    http://configure.us.dell.com/dellsto...en&s=bsd&cs=04

    If you customize a PowerEdge R720xd, you can get v2.

  10. #10
    For what it's worth, I enabled CacheCade on an iSCSI storage backend with just a single 256GB SSD and jumped from 1.5K to over 20K IOPS in QD32 tests. It does make a difference, at least when enabled and tested correctly... You need to understand the difference between RO and RW modes (along with the potential downsides of RW mode, if your controller supports it) and run your tests accordingly. Certain tests might not give the controller enough reason to move hot data. For example, running an 'access time' test might be too random for CacheCade to locate a hot spot and move it to SSD. The CrystalDiskMark QD32 test certainly does, though.
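    For a feel of what queue depth means in those tests, here is a minimal sketch in Python (Linux-only os.pread against a pre-made test file; unlike fio or CrystalDiskMark it does not bypass the OS cache, so treat the numbers as illustrative only):

    Code:
    import os, random, time
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical test file, e.g.: dd if=/dev/zero of=testfile bs=1M count=4096
    PATH = "testfile"
    BLOCK, OPS = 4096, 20_000     # 4KB random reads, like CDM's 4K tests

    def one_read(fd, size):
        offset = random.randrange(size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)   # positional read, safe across threads

    def run(queue_depth):
        fd = os.open(PATH, os.O_RDONLY)
        size = os.fstat(fd).st_size
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=queue_depth) as pool:
            for _ in range(OPS):
                pool.submit(one_read, fd, size)
        elapsed = time.perf_counter() - start
        os.close(fd)
        print(f"QD{queue_depth}: {OPS / elapsed:,.0f} IOPS")

    run(1)    # one outstanding I/O: each read eats the full device latency
    run(32)   # 32 in flight: steady pressure gives the cache a hot spot to promote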

  11. #11
    Quote Originally Posted by FastServ View Post
    For what it's worth, I enabled CacheCade on an iSCSI storage backend with just a single 256GB SSD and jumped from 1.5K to over 20K IOPS in QD32 tests. It does make a difference, at least when enabled and tested correctly... You need to understand the difference between RO and RW modes (along with the potential downsides of RW mode, if your controller supports it) and run your tests accordingly. Certain tests might not give the controller enough reason to move hot data. The CrystalDiskMark QD32 test certainly does, though.
    I honestly didn't understand a word of what you're saying, but any how-to guide from the internet would come in handy!

    Also, it's not my controller; the entire server was leased from a dedicated hosting provider, and they wouldn't even provide support for that kind of thing, since their servers are unmanaged. Perhaps they don't even care about such small details: they just configure CacheCade (which is pretty easy to activate in the controller and doesn't need that much experience, if you ask me!) and deliver the server to you. That's it.

    Thanks.

  12. #12
    They will all offer very different performance from one another, given each card's age. That said, the big difference between the H700 and the H710 is that the H710 is a dual-core version of the H700. You'll get a lot more IOPS from a RAID 5/6 array on an H710 than on an H700.
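    To make the parity point concrete, here is the textbook small-write penalty arithmetic (the ~140 IOPS per drive is a rough assumption for 10K SAS):

    Code:
    # Classic small-random-write penalty: RAID10 = 2 disk ops per host write,
    # RAID5 = 4 (read data + read parity, write data + write parity), RAID6 = 6.
    def write_iops(n_disks, per_disk_iops, penalty):
        return n_disks * per_disk_iops / penalty

    PER_DISK = 140   # rough 10K RPM SAS figure
    for level, penalty in [("RAID10", 2), ("RAID5", 4), ("RAID6", 6)]:
        print(f"{level}: ~{write_iops(8, PER_DISK, penalty):.0f} host write IOPS from 8 drives")
    # Every RAID5/6 write also costs the controller XOR work, which is where
    # the H710's dual-core ROC pulls ahead of the H700.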

  13. #13
    So what do you guys think of the following final specs I'm prepared to order? Please let me know if you have any final thoughts.

    Dell PowerEdge R710 (2U) - 8 x 2.5" Drive Bays
    Refurbished with 1 Year Warranty on All Parts
    Dual Intel Xeon X5650 Six Core 2.66GHz 12MB 6.4GT/s 95W
    144GB - (9 x 8GB) + (9 x 8GB) PC3-8500R
    H700 or H710P? Which do you recommend for this setup?
    8 x Dell 900GB 10K 6G SAS in Hot Plug Tray (Ok)
    Single 570W power supply (for now); I will order a second PSU soon.
    Dell Sliding Ready Rail Kit (No CMA)
    Metal Silver Locking Bezel
    Dell iDRAC6 Enterprise Remote Access

    Any thoughts or recommendations regarding this build?

  14. #14
    Like I mentioned, the big difference between the H700 and the H710 is in the RAID level you use. You listed a whole bunch of irrelevant specs, but not the RAID level.

    Also, while it works perfectly fine, the H710 isn't officially supported in the R710 (or any of the *10 line), only in the R620, R720, etc.

  15. #15
    Also, why are you using X-series processors? They tend to be power hogs and aren't worth the extra money and power. You can get E5645s much cheaper than X5650s. Do you really need the additional memory bandwidth?
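    For the power angle, a quick sketch of the TDP delta (95W X5650 vs 80W E5645, per Intel's spec sheets; the electricity price is a placeholder):

    Code:
    # Yearly energy delta for a dual-socket box, X5650 (95W) vs E5645 (80W).
    TDP_X5650, TDP_E5645 = 95, 80            # watts, Intel spec TDP
    delta_w = 2 * (TDP_X5650 - TDP_E5645)    # two sockets
    kwh_per_year = delta_w * 24 * 365 / 1000
    print(f"~{delta_w} W extra, ~{kwh_per_year:.0f} kWh/year, "
          f"~${kwh_per_year * 0.12:.0f}/year at $0.12/kWh")
    # -> ~30 W, ~263 kWh, ~$32/year; in a colo, per-amp billing and the clock
    #    tradeoff (2.66 vs 2.40 GHz) usually matter more than the energy bill.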

  16. #16
    Quote Originally Posted by scurvy View Post
    Like I mentioned, the big difference between the H700 and the H710 is in the RAID level you use. You listed a whole bunch of irrelevant specs, but not the RAID level.

    Also, while it works perfectly fine, the H710 isn't officially supported in the R710 (or any of the *10 line), only in the R620, R720, etc.
    I will most likely go with RAID 50, as it has worked just fine on my other servers (a good balance between performance and disk space).

    Quote Originally Posted by scurvy View Post
    Also, why are you using X-series processors? They tend to be power hogs and aren't worth the extra money and power. You can get E5645s much cheaper than X5650s. Do you really need the additional memory bandwidth?
    I'm sorry if I wasn't clearer before: I need the server for running virtual machines using Hyper-V, or possibly converting to OpenVZ soon. Since the server has 2x six-core CPUs with HT, I'm certain it has 24 logical CPUs. We have been using the X5650 for a long time, nothing at all is wrong with it, and it feels like a really powerful CPU!

    I have gone with the X5650, as I said, because I am currently using it on a leased server and it has proved OK for us, and apparently I can't find another option!

    I don't need the additional memory bandwidth; I believe 1066MHz will do fine, but since you brought this to my attention, I'd like to investigate further in this regard. Thanks.

