- Install Python 3 (e.g. with Miniconda https://docs.conda.io/en/latest/miniconda.html)
- Install Docker and Docker Compose if necessary. See Docker Setup
- Create a bucket for HSDS, using aws cli tools or aws management console
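For example, the bucket can be created with the AWS CLI. A minimal sketch (the bucket name and region are placeholders; use the same values you will later export as BUCKET_NAME and AWS_REGION):

```shell
# Create the S3 bucket HSDS will use (sketch; name and region are
# example values -- substitute your own)
create_hsds_bucket() {
  aws s3 mb "s3://$1" --region "$2"
}
# e.g.: create_hsds_bucket "hsds.test" "us-east-1"
```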
- Get project source code:
$ git clone https://github.com/HDFGroup/hsds
- Build the docker image:
$ ./build.sh --nolint
- Confirm the docker image was created:
$ docker images hdfgroup/hsds
- Go to admin/config directory:
$ cd hsds/admin/config
- Copy the file "passwd.default" to "passwd.txt". Add any usernames/passwords you wish, and change the existing passwords (for admin, test_user1, and test_user2) for security.
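For reference, entries in passwd.txt follow the same one-per-line username:password format as passwd.default (the values below are placeholders, not real credentials):

```
admin:change_me_admin_pass
test_user1:change_me_pass1
test_user2:change_me_pass2
```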
- If group-level permissions are desired (see Authorization), copy the file "groups.default" to "groups.txt". Modify existing groups as needed.
- Create environment variables as in "Sample .bashrc" below
- Setup Lambda if desired. See AWS Lambda Setup
- Create the file admin/config/override.yml for deployment specific settings (see "Sample override.yml")
- Start the service
$ ./runall.sh <n>
where n is the number of containers desired (defaults to 4)
- Run:
$ docker ps
and verify that the containers are running: hsds_head, hsds_sn_[1-n], hsds_dn_[1-n]
- Run:
$ curl http://127.0.0.1:${SN_PORT}/about
and verify that "cluster_state" is "READY" (it might need a minute or two)
- Perform post-install configuration. See: Post Install Configuration
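The readiness check above can be scripted. A minimal sketch (wait_for_ready is a hypothetical helper, not part of HSDS) that polls the /about endpoint until "cluster_state" reads "READY", assuming SN_PORT is exported as in the sample .bashrc below:

```shell
# Poll the HSDS /about endpoint until cluster_state is READY
# (hypothetical helper; assumes SN_PORT is set)
wait_for_ready() {
  retries=${1:-30}
  i=1
  while [ "$i" -le "$retries" ]; do
    # Extract the cluster_state value from the JSON response
    state=$(curl -s "http://127.0.0.1:${SN_PORT}/about" \
      | sed -n 's/.*"cluster_state": *"\([A-Za-z]*\)".*/\1/p')
    if [ "$state" = "READY" ]; then
      echo "READY"
      return 0
    fi
    sleep 5
    i=$((i + 1))
  done
  echo "HSDS did not become READY" >&2
  return 1
}
```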
These environment variables will be passed to the Docker containers on startup.
export AWS_ACCESS_KEY_ID=1234567890 # use your AWS account access key if using S3 (Not needed if running on EC2 and AWS_IAM_ROLE is defined)
export AWS_SECRET_ACCESS_KEY=ABCDEFGHIJKL # use your AWS account access secret key if using S3 (Not needed if running on EC2 and AWS_IAM_ROLE is defined)
export BUCKET_NAME=hsds.test # set to the name of the bucket you will be using
export AWS_REGION=us-east-1 # The AWS region the instance/bucket is running in
export AWS_S3_GATEWAY=http://s3.amazonaws.com # Use AWS endpoint for region where bucket is
export SN_PORT=5101 # port to use for the service
export HSDS_ENDPOINT=http://hsds.hdf.test:${SN_PORT} # The DNS name of the instance (use https protocol if SSL is desired)
export LOG_LEVEL=INFO # Verbosity of server logs (DEBUG, INFO, WARN, or ERROR)
# For S3, set AWS_S3_GATEWAY to endpoint for the region the bucket is in. E.g.: http://s3.amazonaws.com.
# See http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for list of endpoints.
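Before starting the service, a quick sanity check that these variables are actually exported can save debugging time. A minimal sketch (check_env is a hypothetical helper; the variable list mirrors the sample .bashrc above):

```shell
# Report any required environment variables that are unset or empty
# (hypothetical helper, not part of HSDS)
check_env() {
  missing=0
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY BUCKET_NAME \
           AWS_REGION AWS_S3_GATEWAY SN_PORT HSDS_ENDPOINT; do
    # Indirectly expand the variable named in $v
    if [ -z "$(eval echo "\$$v")" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return $missing
}
```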
Review the contents of admin/config/config.yml and create the file admin/config/override.yml for any keys where you don't wish to use the default value. Values you will almost certainly want to override are:
- aws_iam_role # set to the name of an IAM role that allows read/write access to the S3 bucket
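A minimal override.yml might look like the following (the role name is a placeholder; any key from config.yml may be overridden here):

```
# admin/config/override.yml -- deployment-specific settings
aws_iam_role: my_hsds_role
```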
To get the latest code changes from the HSDS repo, do the following:
- Shutdown the service:
$ ./stopall.sh
- Get code changes:
$ git pull
- Build a new Docker image:
$ ./build.sh
- Start the service:
$ ./runall.sh