Create the shared docker network:

```bash
docker network create my-app-network
```
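Re-running the setup fails at this point if the network already exists; a small guard makes the step idempotent:

```bash
# Create the network only if it does not already exist.
docker network inspect my-app-network >/dev/null 2>&1 \
  || docker network create my-app-network
```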
Generate a self-signed certificate for the LSAAI nginx proxy:

```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout ./lsaai-mock/configuration/nginx/certs/nginx.key \
  -out ./lsaai-mock/configuration/nginx/certs/nginx.crt \
  -addext "subjectAltName=DNS:aai.localhost"
```
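To verify that the SAN made it into the certificate before nginx picks it up (requires OpenSSL 1.1.1 or newer for `-ext`):

```bash
# Print the subject and subjectAltName of the freshly generated certificate.
openssl x509 -in ./lsaai-mock/configuration/nginx/certs/nginx.crt \
  -noout -subject -ext subjectAltName
```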
1. Generate the JWKS:

   ```bash
   cd rems
   python generate_jwks.py
   cd ..
   ```
2. Initialize the database:

   ```bash
   ./rems/migrate.sh
   ```
3. Run the `rems-app` service:

   ```bash
   docker compose up -d rems-app
   ```
4. Log in to the REMS application (http://localhost:3000) so your user gets created in the database.
5. Initialize the application:

   ```bash
   ./rems/setup.sh
   ```
At steps 2, 3 and 5 it is important not to `cd` into the `rems` directory; run those commands from the repository root (see the sketch below).
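A minimal sketch that strings the five steps together from the repository root, assuming `python` and `docker compose` are on your `$PATH`; the subshell in step 1 keeps the working directory unchanged:

```bash
#!/usr/bin/env bash
set -euo pipefail

# 1. Generate the JWKS; the subshell returns us to the repository root.
(cd rems && python generate_jwks.py)

# 2. Initialize the database (run from the root, not from rems/).
./rems/migrate.sh

# 3. Run the rems-app service.
docker compose up -d rems-app

# 4. Log in at http://localhost:3000 so your user is created in the database.
read -r -p "Log in to REMS at http://localhost:3000, then press enter to continue..."

# 5. Initialize the application (again from the root).
./rems/setup.sh
```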
Start the remaining services:

```bash
docker compose up -d
```
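You can confirm the stack is up with compose's own status listing:

```bash
# Show each service and its state; look for "running" (or "healthy").
docker compose ps
```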
`upload.sh` is a simple script that uploads a dataset to the SDA. It requires:

- `sda-admin` and `sda-cli` in your `$PATH`
- an access token from http://localhost:8085/
- an `s3cmd.conf` file in the same directory as the script; you can just replace `<access_token>` in this example:
```ini
[default]
access_token = <access_token>
human_readable_sizes = True
host_bucket = http://localhost:8002
encrypt = False
guess_mime_type = True
multipart_chunk_size_mb = 50
use_https = False
access_key = jd123_lifescience-ri.eu
secret_key = jd123_lifescience-ri.eu
host_base = http://localhost:8002
check_ssl_certificate = False
encoding = UTF-8
check_ssl_hostname = False
socket_timeout = 30
```
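Rather than pasting the token by hand, you can substitute it into the file; a small sketch, assuming the token obtained from http://localhost:8085/ is in the shell variable `$token` (GNU sed; on macOS use `sed -i ''`):

```bash
# Fill in the <access_token> placeholder in s3cmd.conf.
# $token is assumed to hold the token from http://localhost:8085/.
sed -i "s|<access_token>|$token|" s3cmd.conf
```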
- Move all your data to a folder next to the script
- Run `./upload.sh <folder_name> <dataset_name>`
- After successful upload, you should be able to fetch the data using:

  ```bash
  token=<access_token>
  curl -H "Authorization: Bearer $token" http://localhost:8443/s3/<dataset>/jd123_lifescience-ri.eu/<folder_name>/<file_name>
  ```
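If you script the download, `curl -f` keeps a failed request from writing an error page into the output file; the placeholders below are the same ones as in the command above:

```bash
#!/usr/bin/env bash
set -euo pipefail

token=<access_token>  # token from http://localhost:8085/

# -f: fail on HTTP errors; -sS: quiet, but keep error messages.
curl -fsS -H "Authorization: Bearer $token" \
  -o "<file_name>" \
  "http://localhost:8443/s3/<dataset>/jd123_lifescience-ri.eu/<folder_name>/<file_name>"
```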
- Log in to http://localhost:4180
- Select a workflow on the home page
- Define the input directory as `<dataset>/<user_id>` (e.g. `DATASET001/jd123_lifescience-ri.eu`)
- Define the output directory; this is the directory in the s3 inbox bucket (e.g. `myWorkflowOutput`)
- Click `Run`
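After the run finishes, the output should appear under the directory you defined. A sketch for checking it through s3cmd, assuming the inbox exposes a bucket named after your user ID (this may differ in your deployment):

```bash
# List the workflow output in the s3 inbox; the bucket layout is an assumption.
s3cmd -c s3cmd.conf ls s3://jd123_lifescience-ri.eu/myWorkflowOutput/
```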