Thread: Backup question

  1. #1
    Join Date: Mar 2003 | Location: Duluth MN | Posts: 3,864

    Backup question

    This thread -> http://www.webhostingtalk.com/showth...hreadid=268728 got me thinking about some of my procedures.

    Currently, my backup situation is that I only back up user directories into tarballs and store them offsite. I'd like to implement a procedure that backs up the entire system. What's the best way to go about that on the software side, using scripts?

    I am currently running RAID 1 on all my servers.

    Would a cron job to tarball and back up each partition be sufficient?

    i.e.
    tar -zcvpf home.tar.gz /home
    tar -zcvpf usr.tar.gz /usr
    tar -zcvpf var.tar.gz /var
    and so on...
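Something along those lines works. A minimal sketch of a cron-driven version — the staging path, the directory list, and the crontab entry are assumptions, adjust for your own layout:

```shell
#!/bin/sh
# Minimal sketch of a nightly tarball backup. The /backup staging path
# and the example crontab line below are assumptions, not a recommendation.

backup_dir() {
    # $1 = directory to archive, $2 = destination folder
    # Date-stamping the filename keeps a few days of history instead of
    # overwriting last night's archive.
    dest="$2/$(basename "$1")-$(date +%Y%m%d).tar.gz"
    mkdir -p "$2"
    # -p preserves permissions, as in the commands above
    tar -zcpf "$dest" "$1" && echo "$dest"
}

# Example crontab entry (hypothetical script path):
#   30 3 * * * /usr/local/sbin/nightly-backup.sh
# backup_dir /home /backup
# backup_dir /usr  /backup
# backup_dir /var  /backup
```

You would still need to ship the resulting tarballs offsite (scp, FTP, etc.) and prune old archives so the staging partition doesn't fill up.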

    Also, would you recommend using something different than tar.gz?

  2. #2
    Join Date: Feb 2002 | Location: New York, NY | Posts: 4,612
    tar is great and all, but if you're serious about offsite backups, I would use rsync. Believe it or not, our backups are about 400,000% faster using rsync compared to sending a tar file.
    Scott Burns, President
    BQ Internet Corporation
    Remote Rsync and FTP backup solutions
    *** http://www.bqbackup.com/ ***

  3. #3
    Join Date: Jan 2003 | Location: Lake Arrowhead, CA | Posts: 789
    Definitely use rsync. For clients' use we do the typical HTML and DB tar/zip, but for a full offsite server backup, rsync is the only logical solution.

    We build identical mirror servers and use rsync to copy only the /home/[userid], /usr/local/mysql/data/[userid] directories as well as most config files. In this way, we can maintain nearly realtime offsite mirroring with relatively insignificant bandwidth consumption (after the first run). That's virtually impossible to do any other way.
    http://www.srohosting.com
    Stability, redundancy and peace of mind

  4. #4
    Join Date: Mar 2003 | Location: Duluth MN | Posts: 3,864
    I'm not very familiar with rsync... Would I be able to set up one beefy server with massive storage space, and have several servers rsync to it to keep them backed up?

  5. #5
    Join Date: Jan 2003 | Location: Lake Arrowhead, CA | Posts: 789
    Yes, you could. You can run an rsync daemon on the client, the server or both and copy files to any location on the target.

    I set up an rsync daemon on each system being mirrored and configure its rsyncd.conf to allow access to specific directories only, from specific hosts and users. On the system doing the mirroring, the command is: rsync [opts] [source] [dest] . My mirror scripts are fairly complex because they pull customer info from an external database, loop through all customers on each host being mirrored, and run a bunch of tests before actually invoking rsync, but the gist of it for a single home directory is something like this:

    rsync -av [user]@192.168.0.2::homedir/[$user] /backup/db1/home

    'homedir' is a base path defined in the rsyncd.conf of the system being backed up, so this will back up the remote home directory [$user] on 192.168.0.2 (the LAN IP of our db1 server) to the local folder /backup/db1/home/[$user]. It probably doesn't make complete sense just looking at it, but do a Google search on "rsync daemon" and "rsyncd.conf" and you should find enough information to get going.
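    For reference, a minimal rsyncd.conf on the system being backed up might look like this — the module name matches the 'homedir' example above, but the addresses, user name, and paths are illustrative, not this poster's actual config:

    ```
    # /etc/rsyncd.conf on the machine being backed up (illustrative values)
    uid = root
    gid = root
    read only = yes

    [homedir]
        path = /home
        hosts allow = 192.168.0.10      # only the backup box may connect
        auth users = backupuser         # rsync-only name, not a system account
        secrets file = /etc/rsyncd.secrets
    ```

    The [homedir] module is what the `::homedir` in the rsync command refers to; "read only = yes" keeps the backup box from writing back to the live server.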
    http://www.srohosting.com
    Stability, redundancy and peace of mind
