
MTT Server Maintenance

This page describes some maintenance activities needed to keep MTT running well.

Database Maintenance

Nightly Maintenance

To keep the database running at its best, some nightly maintenance is required.

  • First, we need to activate the MTT statistics table updates. There are two types of statistics tables: Database and Contribution.
    • Database statistics consist of the number of tuples in the major tables and the physical size of the database on disk. This information is usually collected near the end of the day. To do so, set up a crontab entry such as the following (a manual spot-check is sketched after the crontab entry):
#
# Update MTT Statistics [Database]
#  - everyday at 11:30 pm
#  - runs in about 15 - 20  min
#
30 23 * * *  $MTT_TRUNK/server/sql/v3/stats/collect-stats.pl -no-contrib > /dev/null
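If you want to sanity-check those figures by hand, the queries below show one way to do it from psql. This is only a sketch: the database name 'mtt' is an assumption, and the views queried are standard PostgreSQL statistics views, not part of the MTT scripts.

#
# Manual spot-check of database size and tuple counts
#  - the database name 'mtt' is an assumption
#
psql -d mtt -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
psql -d mtt -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 10;"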
    • Contribution statistics help assess the testing coverage over a single day. A variety of items are tracked here, including resource manager, contributor, network(s), compiler(s), and platform. This should be run every day when the database load is light (usually early morning). A crontab entry like the one that follows is recommended.
#
# Update MTT Statistics [Contribution]
#  - everyday at 1 am
#  - runs in about 1 hour
#
0 1 * * *  $MTT_TRUNK/server/sql/v3/cron/mtt-update-stats.pl
  • Creating a backup of the database is important. The following crontab entry points to a script that will take a full dump of the MTT database and create a bzip2-compressed tarball that can be stored for later recovery. You will want to edit this script and change the $backup_file name and the $resting_place value (where the file is stored). An ad-hoc manual equivalent is sketched after the crontab entry.
#
# Backup the database
#  - every Sunday at 9 pm
#  - runs in about 8 hours
#
0 21 * * 0 $MTT_TRUNK/server/sql/v3/cron/backup-db.pl 1> /dev/null 2> /dev/null
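If you ever need a dump outside of the cron schedule, the following is a minimal manual sketch. It is not taken from backup-db.pl; the database name 'mtt' and the output path are assumptions.

#
# Ad-hoc manual dump, bzip2-compressed
#  - database name and output path are assumptions
#
pg_dump mtt | bzip2 > /path/to/backups/mtt-$(date +%Y%m%d).sql.bz2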
  • The most important maintenance operation is running a PostgreSQL VACUUM and ANALYZE at least once a day. The following crontab entries use a script that can help do this periodically; there are other ways, including the autovacuum facility provided by PostgreSQL. ANALYZE maintains the internal PostgreSQL statistics used by the query optimizer when planning statement execution. The crontab entries below run the periodic-maintenance.pl script on a given schedule with increasing levels of database maintenance; a one-off manual run is sketched after these entries.
#
# Maintain internal PostgreSQL stats (Day)
#  - every weekday at 8 am
#  - runs in about 10 - 20 min
#
0 8 * * 1-5  $HOME/work/mtt-stuff/mtt-trunk/server/sql/cron/periodic-maintenance.pl -day 1> /dev/null 2> /dev/null

#
# Maintain internal PostgreSQL stats (Week)
#  - Sunday at 8 am (except when Monthly is running)
#  - runs in about 10 - 20 min
#
0 8 * * 0  if [ `date +\%d` -ge 8 ]; then $HOME/work/mtt-stuff/mtt-trunk/server/sql/cron/periodic-maintenance.pl -week 1> /dev/null 2> /dev/null; fi

#
# Maintain internal PostgreSQL stats (Month)
#  - 1st Sunday of the Month at 8 am
#  - runs in about ? min
#
0 8 * * 0  if [ `date +\%d` -lt 8 ]; then $HOME/work/mtt-stuff/mtt-trunk/server/sql/cron/periodic-maintenance.pl -month 1> /dev/null 2> /dev/null; fi

#
# Maintain internal PostgreSQL stats (Year)
#  - Dec. 29 at 8 am
#  - runs in about ? min
#
0 8 29 12 *  $HOME/work/mtt-stuff/mtt-trunk/server/sql/cron/periodic-maintenance.pl -year 1> /dev/null 2> /dev/null
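If the cron jobs ever fall behind, a one-off pass can be run by hand. A minimal sketch, assuming the database is named 'mtt' (this bypasses periodic-maintenance.pl entirely):

#
# One-off manual VACUUM/ANALYZE of the whole database
#  - the database name 'mtt' is an assumption
#
psql -d mtt -c "VACUUM ANALYZE;"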
  • Another optional maintenance operation is the occasional removal of trial data from the database. This data is indicated by the trial boolean associated with entries in the mpi_install, test_build, and test_run partition tables. There is currently no script publicly available to clean out these entries, but it is fairly straightforward to do if one were so inclined (contributions welcome :) ). A rough sketch of the kind of SQL involved follows.
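Here is an illustrative sketch of such a cleanup. The table and column names come from the description above, but the database name 'mtt' and the delete order (test_run before test_build before mpi_install, to respect likely foreign keys) are assumptions, so test against a copy of the database first.

#
# Illustrative only: remove trial rows
#  - database name and delete order are assumptions
#
psql -d mtt -c "DELETE FROM test_run    WHERE trial = true;"
psql -d mtt -c "DELETE FROM test_build  WHERE trial = true;"
psql -d mtt -c "DELETE FROM mpi_install WHERE trial = true;"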

Yearly Maintenance

This is a summary of the detailed version from the repository. When doing this maintenance, please consult the README.

Since we use partition tables based on the date, we need to make sure the database contains tables covering the correct date range for the year and month of submission. We have created a series of scripts to make this process easier. Every December we create the tables for the upcoming year using the following process.

# Create MPI Install partitions for 2009
./server/sql/support/create-partitions-mpi-install.pl 2009 XX >  2009-mpi-install.sql

# Create Test Build partitions for 2009
./server/sql/support/create-partitions-test-build.pl 2009 XX >  2009-test-build.sql

# Create Test Run partitions for 2009
./server/sql/support/create-partitions-test-run.pl 2009 XX >  2009-test-run.sql

# Create Indexes on the new tables
./server/sql/support/create-partition-indexes.pl 2009 XX >  2009-indexes.sql

These scripts will generate the partition tables, indexes, rules (a.k.a. triggers), and GRANT SELECT permissions for the 'mpi_viewer' user (for web-based access).
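Once generated, the SQL files need to be loaded into the database. A minimal sketch, assuming the database is named 'mtt' and you connect as a user with sufficient privileges to create tables:

#
# Load the generated partition definitions
#  - the database name 'mtt' is an assumption
#
psql -d mtt -f 2009-mpi-install.sql
psql -d mtt -f 2009-test-build.sql
psql -d mtt -f 2009-test-run.sql
psql -d mtt -f 2009-indexes.sql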

There is a script that archives all of the partition tables created, and it should be maintained for historical purposes.

server/sql/support/create-partitions.sh

HTTP Maintenance

Nothing here yet...