SUNScholar/Disaster Recovery


Revision as of 21:16, 14 August 2010


As your archive grows in size, you will be concerned about its sustainability. To help you prepare, an example backup script is provided below which backs up the critical parts of your server for use when recovering from a disaster.

The output of the script should be copied to another server in another physical location. There are several software and hardware backup systems available to do this. At this University we use the IBM Tivoli tape backup system: an agent is installed on all our servers, which the master server uses to pull in all the data from the /opt/backup folders on the institutional repository servers.

Database credentials

Create a credentials file for the postgres or dspace user as follows.

Open a terminal on the server and type the following:

To become the postgres user, type:

su - postgres

To create the credentials file, type:

touch ~/.pgpass

To lock down the file permissions, type:

chmod 0600 ~/.pgpass

Now open the file in an editor:

nano ~/.pgpass

Copy and paste the following into the file:
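The credentials line follows the standard .pgpass format, hostname:port:database:username:password. The line below is an example, assuming a local PostgreSQL server on the default port and a database and user both named dspace; replace the placeholders to match your installation:

```
localhost:5432:dspace:dspace:%my-dspace-db-password%
```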


Press CTRL+O to save and CTRL+X to exit the file.

To test the credentials, type:
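A minimal connection test, assuming the example database and user names used above (adjust them to match your installation):

```
psql -h localhost -U dspace dspace
```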


You should be connected to the database now.

To quit the database type.
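At the psql prompt, the meta-command to quit is:

```
\q
```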


Local Backup

Example script

Here is a sample script to use for backups. It backs up the server each day at midnight, keeping one copy per day of the week, so backups are retained for one week only. A cron job entry is added to the root crontab to run the script at midnight each day. To add the script to the root crontab, type the following as the root user:

crontab -e
@midnight /usr/local/bin/backup.sh

Save and exit the crontab editor.

Now copy and paste the following into /usr/local/bin/backup.sh on your server, make the script executable (chmod +x /usr/local/bin/backup.sh) and modify the backup variables to suit your location and server.


#!/bin/bash

## Setup the backup variables ##
## Adjust these to suit your server ##
LOCAL_SERVER=`hostname`
LOCAL_FOLDER=/opt/backup
BACKUP_LOGFILE=/var/log/backup.log
TIME=`date`
# Day Of the Week
DOW=`date +%a`

## Send all script output to the backup log file ##
exec > $BACKUP_LOGFILE 2>&1

## Timestamp the beginning of the backup ##
echo "Backup for $LOCAL_SERVER started: $TIME"

## Check that we have a backup folder ##
if [ ! -d $LOCAL_FOLDER ]; then
  mkdir -p $LOCAL_FOLDER
  echo "New backup folder created"
fi

## Make sure we're in / since backups are relative to that ##
cd /

## Get a list of the installed software ##
dpkg --get-selections > $LOCAL_FOLDER/installed-software.$DOW

## Backup the web site data ##
echo "Archive '/var/www' folder"
tar czf $LOCAL_FOLDER/var-www.tgz.$DOW var/www/

## Backup the server config files ##
echo "Archive '/etc' folder"
tar czf $LOCAL_FOLDER/etc.tgz.$DOW etc/

## Backup the '/root' folder ##
echo "Archive '/root' folder"
tar czf $LOCAL_FOLDER/root.tgz.$DOW root/

## Backup the '/usr/local' folder which houses customised software ##
echo "Archive '/usr/local' folder"
tar czf $LOCAL_FOLDER/usr-local.tgz.$DOW usr/local/

## Backup the '/home/dspace' folder which houses the digital assets and the DSpace application data ##
echo "Archive '/home/dspace' folder"
tar czf $LOCAL_FOLDER/home-dspace.tgz.$DOW home/dspace/

## Backup the DSpace postgres database which houses the catalogue of the digital assets ##
echo "Dump the 'dspace' database"
su - postgres -c "pg_dump dspace > /tmp/dspace-db.sql"
cp /tmp/dspace-db.sql $LOCAL_FOLDER/dspace-db.sql.$DOW
su - postgres -c "vacuumdb --analyze dspace > /dev/null 2>&1"

## View the backup folder ##
ls -lh $LOCAL_FOLDER

## Timestamp the end of the backup ##
TIME=`date`
echo "Backup for $LOCAL_SERVER ended: $TIME"

## Make a daily copy of the backup log file ##
cp $BACKUP_LOGFILE $BACKUP_LOGFILE.$DOW

## Email the backup logfile to the root user ##
cat $BACKUP_LOGFILE.$DOW | mail -s "Daily backup log from $HOSTNAME" root

### EOF ###

Invoke Local Manual Backup

After you have completed the above, you can start a backup anytime by typing the following as the root user:
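Assuming the script was saved as /usr/local/bin/backup.sh and made executable as above:

```
/usr/local/bin/backup.sh
```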


Then check the files in the backup folder by typing the following:

ls -lh /opt/backup

Remote Backup

The intention is to backup the files in /opt/backup to a remote backup server. On our campus we have a server in a secure location for housing the remote backups. This server has very large RAID disk storage and runs Ubuntu 8.04 LTS. BackupPC is installed on the server and uses the rsync method to pull in the backups from the clients.

To setup a similar system, first configure the clients and then configure the server.

Client Setup

Each client to be backed up must have an rsync server running, with a module (named backup here) that exports the /opt/backup folder. See below for an example /etc/rsyncd.conf config file:

[backup]
path = /opt/backup
read only = yes
hosts allow = %my-backup-server-hostname% localhost

Replace %my-backup-server-hostname% with the hostname of your backup server.

Now enable the rsync server by editing the /etc/default/rsync file and change false to true.
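On Ubuntu, the variable to change in /etc/default/rsync is:

```
RSYNC_ENABLE=true
```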


Now start the rsync server as follows:

/etc/init.d/rsync restart

Now add a firewall rule to allow the backup server to fetch the backup files. Note that ufw rules take an IP address rather than a hostname:

ufw allow from %my-backup-server-ip-address% to any port 873

Server Setup

It is assumed you will be using Ubuntu 8.04 LTS for the backup server. If so, do the following to set it up as the backuppc server.

Login and become the root user.

Create a firewall rule for each client to be backed up as follows:

ufw allow from %my-client-to-be-backed-up-ip-address% to any port 873

Now test your rsync connection to each client as follows:

rsync %my-client-to-be-backed-up-hostname%::backup

You should get a listing of the backup files in the client's /opt/backup folder.

If the above is successful then install backuppc on the server as follows:

aptitude install backuppc

BackupPC has a web interface which you enable as follows:

cd /etc/apache2/conf.d
ln -s /etc/backuppc/apache.conf backuppc
/etc/init.d/apache2 restart

Now we add an admin backuppc user as follows:

htpasswd /etc/backuppc/htpasswd admin

You will be prompted to enter a password twice.

Now open a web browser and type the following into the address bar:
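Assuming the default Apache alias shipped with the BackupPC package:

```
http://%my-backup-server-hostname%/backuppc
```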


You will be prompted for a username and password. Use the following:

username=admin
password=(what you entered when you created the admin user above)

You should be presented with the BackupPC status screen.


Now setup backuppc by adding host configurations. There is plenty of backuppc documentation out there.

The critical configuration is the Xfer settings for each host: set the transfer method to rsyncd and point it at the backup module of the rsync server on each client.

Continue to set up backuppc as needed. That's it.

System Monitoring

Now that you have a large number of servers, you will want to know how they are performing and be informed of potential problems. At our library we use Munin to do this.

Client Setup

Login and become the root user. Install the munin node as follows:

aptitude install munin-node

Setup the munin node to allow the monitoring server to gather statistics as follows:

nano /etc/munin/munin-node.conf

Add the following to the bottom of the file. Note that the allow directive is a regular expression, so escape the dots in the IP address (e.g. ^10\.0\.0\.1$):

allow ^%ip-address-of-monitoring-server%$

Change the following:

host_name %hostname-of-client%

Save the file. Run the following command to check what stats are available:

munin-node-configure --suggest

Add a firewall rule to allow the monitoring server to get the stats:

ufw allow 4949

Server Setup

It is assumed that you will be using the same server for monitoring and backup. To setup munin to gather client statistics, follow the procedure below.

Add a firewall rule to allow the server to get the stats:

ufw allow 4949

To install munin, type the following:

aptitude install munin

Add the clients to the /etc/munin/munin.conf file as follows.

Open the file for editing.

nano /etc/munin/munin.conf

Add one of the following host sections for each client:

[%hostname-of-client%]
    address %ip-address-of-client%
    use_node_name yes

There is no munin master daemon to restart: the master runs from cron every five minutes, so the new clients will be picked up automatically on the next run.

Wait for about 5 to 10 minutes for munin to gather data and then check out the stats as follows.

Open a web browser and type the following in the address bar:
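Assuming the default web location used by the Ubuntu munin package:

```
http://%my-monitoring-server-hostname%/munin
```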


You should get a page with an overview and graphs for each client.


That's it. As usual, there is a lot of documentation about Munin out there.

Command Line Help

Go to: http://www.ubuntu.sun.ac.za/wiki/index.php/SelfHelp for more help about the command line programs used in this procedure.