  1. #51
    Join Date
    Apr 2002
    Location
    USA
    Posts
    5,783
    Quote Originally Posted by lostmind View Post
    I don't have the numbers here, but it was significantly worse than a single local drive. Max throughput was something like 60 MB/s or less.
    Can I ask how you had your grid set up and how you did your testing? How much back-end bandwidth did you give to the server you were testing on, etc.?

    I am curious because I am testing a grid setup now, and I am not using anything special: just 2 x 1TB SATA drives per server, no striping, and only a 1G back-end switch.
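
    If you want to sanity-check the raw back-end link between two nodes, plain iperf does the job; the IP and duration here are illustrative:

    # on the receiving node
    iperf -s

    # on the sending node, pointed at the receiver's back-end IP
    iperf -c 10.0.0.2 -t 30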
    Here are some results from a 4-core VDS with 150 GB of drive space.

    hdparm -tT /dev/hda

    /dev/hda:
    Timing cached reads: 13128 MB in 1.99 seconds = 6582.19 MB/sec
    Timing buffered disk reads: 456 MB in 3.00 seconds = 151.83 MB/sec


    root@testVDS# hdparm -t /dev/hda

    /dev/hda:
    Timing buffered disk reads: 450 MB in 3.00 seconds = 149.94 MB/sec


    root@testVDS# dd if=/dev/zero of=test bs=1M count=1024
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 1.91111 seconds, 562 MB/s

    root@testVDS [~/unixbench-5.1.2]# ./Run
    make all
    make[1]: Entering directory `/root/unixbench-5.1.2'
    Checking distribution of files
    ./pgms exists
    ./src exists
    ./testdir exists
    ./tmp exists
    ./results exists
    make[1]: Leaving directory `/root/unixbench-5.1.2'
    sh: 3dinfo: command not found


    Version 5.1.2 Based on the Byte Magazine Unix Benchmark

    Multi-CPU version Version 5 revisions by Ian Smith,
    Sunnyvale, CA, USA
    December 22, 2007 johantheghost at yahoo period com


    1 x Dhrystone 2 using register variables 1 2 3 4 5 6 7 8 9 10

    1 x Double-Precision Whetstone 1 2 3 4 5 6 7 8 9 10

    1 x Execl Throughput 1 2 3

    1 x File Copy 1024 bufsize 2000 maxblocks 1 2 3

    1 x File Copy 256 bufsize 500 maxblocks 1 2 3

    1 x File Copy 4096 bufsize 8000 maxblocks 1 2 3

    1 x Pipe Throughput 1 2 3 4 5 6 7 8 9 10

    1 x Pipe-based Context Switching 1 2 3 4 5 6 7 8 9 10

    1 x Process Creation 1 2 3

    1 x System Call Overhead 1 2 3 4 5 6 7 8 9 10

    1 x Shell Scripts (1 concurrent) 1 2 3

    1 x Shell Scripts (8 concurrent) 1 2 3

    4 x Dhrystone 2 using register variables 1 2 3 4 5 6 7 8 9 10

    4 x Double-Precision Whetstone 1 2 3 4 5 6 7 8 9 10

    4 x Execl Throughput 1 2 3

    4 x File Copy 1024 bufsize 2000 maxblocks 1 2 3

    4 x File Copy 256 bufsize 500 maxblocks 1 2 3

    4 x File Copy 4096 bufsize 8000 maxblocks 1 2 3

    4 x Pipe Throughput 1 2 3 4 5 6 7 8 9 10

    4 x Pipe-based Context Switching 1 2 3 4 5 6 7 8 9 10

    4 x Process Creation 1 2 3

    4 x System Call Overhead 1 2 3 4 5 6 7 8 9 10

    4 x Shell Scripts (1 concurrent) 1 2 3

    4 x Shell Scripts (8 concurrent) 1 2 3

    ========================================================================
    BYTE UNIX Benchmarks (Version 5.1.2)

    System: testVDS.techark.com: GNU/Linux
    OS: GNU/Linux -- 2.6.18-194.32.1.el5xen -- #1 SMP Wed Jan 5 18:44:24 EST 2011
    Machine: x86_64 (x86_64)
    Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
    CPU 0: Intel(R) Xeon(R) CPU E5506 @ 2.13GHz (5361.3 bogomips)
    Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT,SYSCALL/SYSRET
    CPU 1: Intel(R) Xeon(R) CPU E5506 @ 2.13GHz (5361.3 bogomips)
    Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT,SYSCALL/SYSRET
    CPU 2: Intel(R) Xeon(R) CPU E5506 @ 2.13GHz (5361.3 bogomips)
    Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT,SYSCALL/SYSRET
    CPU 3: Intel(R) Xeon(R) CPU E5506 @ 2.13GHz (5361.3 bogomips)
    Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT,SYSCALL/SYSRET
    21:40:38 up 6 min, 1 user, load average: 1.17, 0.75, 0.32; runlevel 3

    ------------------------------------------------------------------------
    Benchmark Run: Sat Apr 21 2012 21:40:38 - 22:08:32
    4 CPUs in system; running 1 parallel copy of tests

    Dhrystone 2 using register variables 12384167.5 lps (10.0 s, 7 samples)
    Double-Precision Whetstone 2437.5 MWIPS (9.9 s, 7 samples)
    Execl Throughput 1253.8 lps (30.0 s, 2 samples)
    File Copy 1024 bufsize 2000 maxblocks 259102.5 KBps (30.0 s, 2 samples)
    File Copy 256 bufsize 500 maxblocks 70966.3 KBps (30.0 s, 2 samples)
    File Copy 4096 bufsize 8000 maxblocks 658010.1 KBps (30.0 s, 2 samples)
    Pipe Throughput 402640.6 lps (10.0 s, 7 samples)
    Pipe-based Context Switching 101847.8 lps (10.0 s, 7 samples)
    Process Creation 2881.7 lps (30.0 s, 2 samples)
    Shell Scripts (1 concurrent) 3720.8 lpm (60.0 s, 2 samples)
    Shell Scripts (8 concurrent) 1043.8 lpm (60.0 s, 2 samples)
    System Call Overhead 422232.6 lps (10.0 s, 7 samples)

    System Benchmarks Index Values BASELINE RESULT INDEX
    Dhrystone 2 using register variables 116700.0 12384167.5 1061.2
    Double-Precision Whetstone 55.0 2437.5 443.2
    Execl Throughput 43.0 1253.8 291.6
    File Copy 1024 bufsize 2000 maxblocks 3960.0 259102.5 654.3
    File Copy 256 bufsize 500 maxblocks 1655.0 70966.3 428.8
    File Copy 4096 bufsize 8000 maxblocks 5800.0 658010.1 1134.5
    Pipe Throughput 12440.0 402640.6 323.7
    Pipe-based Context Switching 4000.0 101847.8 254.6
    Process Creation 126.0 2881.7 228.7
    Shell Scripts (1 concurrent) 42.4 3720.8 877.5
    Shell Scripts (8 concurrent) 6.0 1043.8 1739.6
    System Call Overhead 15000.0 422232.6 281.5
    ========
    System Benchmarks Index Score 515.7

    ------------------------------------------------------------------------
    Benchmark Run: Sat Apr 21 2012 22:08:32 - 22:36:36
    4 CPUs in system; running 4 parallel copies of tests

    Dhrystone 2 using register variables 49303229.2 lps (10.0 s, 7 samples)
    Double-Precision Whetstone 9742.8 MWIPS (9.8 s, 7 samples)
    Execl Throughput 4936.3 lps (29.9 s, 2 samples)
    File Copy 1024 bufsize 2000 maxblocks 151098.8 KBps (30.0 s, 2 samples)
    File Copy 256 bufsize 500 maxblocks 40301.8 KBps (30.0 s, 2 samples)
    File Copy 4096 bufsize 8000 maxblocks 498060.1 KBps (30.1 s, 2 samples)
    Pipe Throughput 1600144.1 lps (10.1 s, 7 samples)
    Pipe-based Context Switching 413171.5 lps (10.0 s, 7 samples)
    Process Creation 10434.2 lps (30.1 s, 2 samples)
    Shell Scripts (1 concurrent) 8709.5 lpm (60.1 s, 2 samples)
    Shell Scripts (8 concurrent) 1344.8 lpm (60.1 s, 2 samples)
    System Call Overhead 1603316.1 lps (10.0 s, 7 samples)

    System Benchmarks Index Values BASELINE RESULT INDEX
    Dhrystone 2 using register variables 116700.0 49303229.2 4224.8
    Double-Precision Whetstone 55.0 9742.8 1771.4
    Execl Throughput 43.0 4936.3 1148.0
    File Copy 1024 bufsize 2000 maxblocks 3960.0 151098.8 381.6
    File Copy 256 bufsize 500 maxblocks 1655.0 40301.8 243.5
    File Copy 4096 bufsize 8000 maxblocks 5800.0 498060.1 858.7
    Pipe Throughput 12440.0 1600144.1 1286.3
    Pipe-based Context Switching 4000.0 413171.5 1032.9
    Process Creation 126.0 10434.2 828.1
    Shell Scripts (1 concurrent) 42.4 8709.5 2054.1
    Shell Scripts (8 concurrent) 6.0 1344.8 2241.3
    System Call Overhead 15000.0 1603316.1 1068.9
    ========
    System Benchmarks Index Score 1115.5


    After I complete this round of testing, I am going to start again with striping turned on and re-run the tests to see whether the numbers differ.
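
    For reference, a generic two-drive stripe under Linux md looks something like the following. The device names are illustrative, and Applogic may handle striping at its own layer, so treat this as the plain-OS approach only:

    # create a RAID0 (striped) array from two whole drives
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    # put a filesystem on it and mount it
    mkfs.ext3 /dev/md0
    mount /dev/md0 /mnt/striped
    # then re-run the same hdparm/dd tests against /dev/md0 to compare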

  2. #52
    Join Date
    Jun 2002
    Location
    PA, USA
    Posts
    5,143
    Techark,

    Try a real disk copy such as dd, and make sure the file size is big enough (e.g., 4 GB or more) that you are measuring the disks rather than the page cache.
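
    For example, a minimal run that forces the data to disk before reporting; the file path is illustrative:

    # write test: conv=fdatasync flushes to disk before dd reports the rate
    dd if=/dev/zero of=testfile bs=1M count=4096 conv=fdatasync

    # read test: drop the page cache first so reads actually hit the disk
    sync && echo 3 > /proc/sys/vm/drop_caches
    dd if=testfile of=/dev/null bs=1M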
    Fluid Hosting, LLC - Enterprise Cloud Infrastructure: Cloud Shared and Reseller, Cloud VPS, and Cloud Hybrid Server

  3. #53
    hey Techark,

    "6 nodes, x3440, 16gb ram, 6 x 500gb wd re3 drives, cisco 3560g & a dell 6228 switch". Tried the nodes with 4 x 1gbps uplinks and tested all the way down to a single uplink. Made very little impact on performance.

    We tested with a variety of tools, from Bonnie to IOzone, Geekbench, and so on.

    CA/Applogic claimed they had made huge improvements in this area when I talked to them just a month or two ago (they were pushing a quarter-end sale that was very tempting, tbh). I didn't get a chance to set up a test environment, but they were pretty baffled by the poor numbers we were seeing to begin with.
    Fully Managed Fast Hosting
    In Vancouver & Toronto
    Canadian owned & operated
    ezp.net

  4. #54
    Join Date
    May 2008
    Location
    Mountain View, CA
    Posts
    10

    AppLogic and RAID controllers

    The last time I checked, Applogic will work with RAID controllers. However, a number of hosting providers have had problems, since Applogic doesn't act on error conditions from the RAID controller. As a result, they have lost data because sequential failures of drives in a RAID group were hidden from them. This may have been fixed in 3.1, I don't know. In any case, we backed RAID controllers out of our configuration.
    ENKI - http://www.computingutility.com
    - Outsourced IT Operations Services
    - Fully managed high reliability cloud computing
    - AppLogic consulting

  5. #55
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,710
    You could just configure the RAID controllers to email or otherwise alert you on their own based on array health.
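
    With hardware RAID you would normally lean on the vendor tool (arcconf, MegaCLI) for this, but as a minimal sketch, smartd from smartmontools can watch drive health and mail you. The address and device are illustrative, and drives behind a RAID controller usually need the controller-specific -d option:

    # /etc/smartd.conf: health-check the drive and email on failure
    # (-M test sends a test mail at smartd startup so you know alerting works)
    /dev/sda -H -m admin@example.com -M test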
    EasyDCIM.com - DataCenter Infrastructure Management - HELLO DEDICATED SERVER & COLO PROVIDERS! - Reach Me: chris@easydcim.com
    Bandwidth Billing | Inventory & Asset Management | Server Control
    Order Forms | Reboots | IPMI Control | IP Management | Reverse&Forward DNS | Rack Management

  6. #56
    Join Date
    May 2007
    Posts
    451
    Quote Originally Posted by eming View Post
    I've always been impressed with 3Tera/Applogic's way of dealing with storage, but I honestly don't know the system that well. A few questions:
    - Does it support deduplication ?
    - Thin provisioning? I think it does, right?
    - Can you define the level of redundancy per account? So, like a $99/mo client would have less redundancy than a $999/mo client?
    - Can you guarantee IO per drive/account/VM? So, like a $99/mo client would have less IO's guaranteed than a $999/mo client?
    - How does it deal with network IO, do you need 10g or multipath gig's for it to do well?
    - Can you automatically allocate the most-accessed files to the fastest drives?
    - How does it deal with cache? Can you allocate specific drives (like SSD/FusionIO) to act as cache?
    - Does it have an object storage (S3 compatible) interface as well?
    - Does it support snapshots with a time-machine like filebased functionality?
    - Can you define a read-local, write distributed rule to ensure performance in low-throughput scenarios?

    Anyone here able to help out?


    D
    It creates mirrors across the hardware nodes for each volume; a volume can be mirrored up to 16 times if needed/wanted.

    Does not support thin provisioning.

    Redundancy is all the same across accounts. If server 1 fails and srv2 is your 'n' (standby) node, it will start all of those VMs on that 'n' node.

    You cannot guarantee IOs per node. However, if I run into a problem with IO, I can migrate volumes from server to server to get the best IO possible, WITHOUT downtime. Let's say srv1 is overloaded: I can simply move selected volumes away from srv1 without having to reboot a VM.

    We've been using 1Gb switches for our backbone, but we do have some customers on 10G switches who need very high speed.

    You can create your volume on specific servers that have SAS drives or SSD drives if you want.

    No caching support.

    What do you mean by object storage? I can create multiple volumes per VM.

    No snapshot support yet.

    Yes, you can create a local-read volume for a server if you want it to be on the local machine for the fastest performance.

    ----------------------

    Side note: I've been using Applogic since 2008 now, and I've never once run into an issue with NBD, so I'm not 100% sure what happened to the gentleman who has 80-100 nodes. However, 2.9.1 was a beta version of Applogic; 2.9.9 was the GA release.

    One of the downsides I've seen with Applogic is the lack of support for different network setups (e.g., InfiniBand). There will be some very cool stuff coming out in the 3.5 release, which we are beta testing right now; SAN support and a few other cool features are what we will be seeing.

    The control panel really isn't meant for the end user to see (if you're selling VDSes). We've created a few custom scripts that interface with our billing portal and allow customers to easily run commands (start, stop, etc.) on their virtual machines.

    I think 3Tera's ultimate vision for Applogic was more or less application architecture, not really VDS-type solutions. We've built customer infrastructure where we've had server failures with completely ZERO downtime, thanks to failover groups, multiple load balancers, and HA firewalls/gateways.
    Michael Wallace - michael@innoscale.net
    Innovative Scaling Technologies Inc. - A Cloud Service Provider
    24/7 Support, Call us @ 1-307-200-4880
    www.innoscale.net - Seattle, Silicon Valley, Dallas, Chicago, Washington D.C., and Europe

  7. #57
    Join Date
    Dec 2009
    Location
    MILLAU
    Posts
    306
    Would anyone know how much the monthly per-socket license for Applogic is, without having to buy the 20-pack minimum? Perhaps from a CA affiliate?

  8. #58
    Join Date
    Oct 2001
    Location
    Miami,FL
    Posts
    616
    Contact Birdhosting. They can offer you a per-socket license.

  9. #59
    Join Date
    May 2007
    Posts
    451
    Quote Originally Posted by lostmind View Post
    They did tell us not to RAID it. We did try adding in Adaptec cards but, like I said, to no avail.




    I'm not sure if it was Birdhosting. Hate to be spreading rumours if I am wrong. 40Gbps InfiniBand sure would be fun though.

    10Gbps Ethernet switching from Arista is much more affordable now as well.

    I hope Applogic works well for you. I nearly bought licenses just a few weeks back myself, simply due to the discounts they are handing out. Thought it would be a good way to test things and maybe even build out a few projects on, but in the end just didn't pull the trigger.
    Hello,

    We had a few test cases using 40Gb InfiniBand, though never a full implementation. The main problem we ran into was using the Xen bridge with IP over IB (IPoIB); a lot of Applogic's node-to-node back-end networking happens over the Xen bridge. We did, however, get InfiniBand working in conjunction with Applogic.

    The last project we were on used 10G InfiniBand; we got that working with a CX4 switch with no problem. However, the new 3.x PXE-boot requirement broke support for it (nodes need to be able to PXE boot, and the 10G InfiniBand cards we had did not). 10G was blazing fast with Applogic: we ran an 8-node POC grid, rebuilds were fast, and volume streaming was lightning quick.
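
    For anyone poking at a similar setup, it is quick to check whether the IPoIB interface actually made it onto the bridge; the interface name here is illustrative:

    # list bridges and the interfaces attached to them
    brctl show
    # confirm the IPoIB interface is up and has an address
    ip addr show ib0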

    Thanks,
    Michael Wallace - michael@innoscale.net
    Innovative Scaling Technologies Inc. - A Cloud Service Provider
    24/7 Support, Call us @ 1-307-200-4880
    www.innoscale.net - Seattle, Silicon Valley, Dallas, Chicago, Washington D.C., and Europe

  10. #60
    Another option for a 3rd-party front end for Applogic would be http://www.dnseurope.net/CCP

    There is also a very interesting roadmap for this front-end solution.

  11. #61
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,409
    Quote Originally Posted by gocloud View Post
    Another option for a 3rd-party front end for Applogic would be http://www.dnseurope.net/CCP

    There is also a very interesting roadmap for this front-end solution.
    do you know pricing?


    D

  12. #62
    Join Date
    Apr 2002
    Location
    USA
    Posts
    5,783
    Quote Originally Posted by eming View Post
    do you know pricing?


    D
    Maybe this will help? http://www.dnseurope.net/documents/C...ct_Pricing.pdf (the link is right there on the page posted).

  13. #63
    Join Date
    Apr 2002
    Location
    USA
    Posts
    5,783
    Just as a note: I have been testing this control panel for two months now, and it is very good. It is a little complex to set everything up, but it does work. The support from dnseurope is second to none; they have been very easy to work with, quick to answer questions, and willing to take phone calls. It looks to me like a surefire winner for anyone running Applogic.

  14. #64
    Quote Originally Posted by arisythila View Post
    I can migrate volumes from server to server to get the best IO possible, WITHOUT downtime. Let's say srv1 is overloaded: I can simply move selected volumes away from srv1 without having to reboot a VM.
    Michael, can you tell me how you can do this?
