This Ansible role installs and configures Open OnDemand on various Linux distributions.
This role's versioning loosely follows the Open OnDemand versions it installs. The major and minor versions of this role will be compatible with the corresponding major and minor versions of Open OnDemand. Patch releases of this role remain compatible with the same version of Open OnDemand but provide bug fixes or new features for the role itself.

As an example, version 1.8.0 of this role is compatible with Open OnDemand 1.8.x (currently 1.8.20). Version 1.8.1 of this role will still install Open OnDemand 1.8.20 but provide some bug fixes or new features to the role.
- CentOS
- Debian
- Fedora
- RedHat
- Rocky Linux
- Suse
- Ubuntu 18
- Ubuntu 20
`ondemand_package` defaults to `latest`, meaning this will install the latest version from the versioned yum/deb repository. For example, it'll install the latest version, 2.0.20, from the versioned 2.0 yum repo.

We use `ondemand_package` for the `name` parameter of the Ansible yum module, so you can specify a specific version with `ondemand-2.0.20` or use the comparison operators Ansible supports.
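For instance, pinning or bounding the version might look like this (a sketch; the version numbers are illustrative):

```yaml
# Install an exact version (passed to the yum module's name parameter)
ondemand_package: 'ondemand-2.0.20'

# Or use a comparison operator supported by Ansible's yum module:
# ondemand_package: 'ondemand >= 2.0.18'
```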
If you'd like to install a package from our latest or nightly repositories, simply change the `rpm_repo_url` configuration to download the appropriate RPM, for example `https://yum.osc.edu/ondemand/latest/ondemand-release-web-latest-1-6.noarch.rpm`. Check the yum repository for the correct version of this RPM.
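As a sketch, pointing the role at the latest repo would be an override like this (URL taken from the example above):

```yaml
rpm_repo_url: 'https://yum.osc.edu/ondemand/latest/ondemand-release-web-latest-1-6.noarch.rpm'
```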
When installing packages from latest or nightly you may have to exclude packages depending on the state of the project. As an example, while 2.1 is in development, installing 2.0 RPMs from latest or nightly requires excluding the 2.1 packages.

Use `ondemand_package_excludes` to specify a list of packages to exclude during the yum install. Here's an example that excludes all 2.1 packages when installing 2.0.20.
```yaml
ondemand_package: 'ondemand-2.0.20'
ondemand_package_excludes:
  - '*-2.1'
```
This role provides these tags for when you want to run only certain tasks.
- configure - will configure Open OnDemand and any apps
- install - will install Open OnDemand and any apps
- deps - install dependencies (only valid when building from source)
- build - build the source code (only valid when building from source)
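For example, to reapply only the configuration tasks without reinstalling, you can limit the run by tag (`playbook.yml` here is a hypothetical playbook that applies this role):

```shell
# Run only the configure tasks of the role
ansible-playbook -i inventory playbook.yml --tags configure
```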
The defaults directory has configurations broken out by which file they apply to when configuring, or by options used during building from source or installation. Check these files for variables you can override. Save all these overrides to a file that you can then pass on the command line with `-e @<your-overrides-file>.yml`.

All the default files are grouped by what they apply to. Some files are for documentation purposes and only have comments. They're hidden (dot-prefixed) for Ansible 2.9.x compatibility and an error it throws when loading empty files.
- `.apps.yml` - configurations for installing apps (hidden because it's empty)
- `build.yml` - configurations for building OnDemand from source
- `install.yml` - configurations for installing OnDemand
- `nginx_stage.yml` - configurations that apply to `/etc/ood/config/nginx_stage.yml`
- `.ondemand.yml` - configurations that apply to `/etc/ood/config/ondemand.d/ondemand.yml` (hidden because it's empty)
- `ood_portal.yml` - configurations that apply to `/etc/ood/config/ood_portal.yml`
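Putting it together, a minimal overrides file might look like this (`overrides.yml` is an assumed filename; the variables shown are ones documented in this README):

```yaml
# overrides.yml (hypothetical) - pass it with: ansible-playbook ... -e @overrides.yml
ondemand_package: 'ondemand-2.0.20'
ood_install_apps:
  jupyter:
    repo: https://github.com/OSC/bc_example_jupyter.git
```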
There are a few variables in this role that enable Open OnDemand customizations and configuration.

This configuration writes its content to `/etc/ood/config/clusters.d/<cluster_key>.yml` for each cluster item in this dictionary. Each dictionary item is a multiline string.
For example:

```yaml
clusters:
  my_cluster: |
    ---
    v2:
      metadata:
        title: my_cluster
      login:
        host: my_host
      job:
        adapter: slurm
        bin: /usr/local
      batch_connect:
        basic:
          script_wrapper: "module restore\n%s"
  another_cluster: |
    ---
    v2:
      metadata:
        title: Another Cluster
```
This will produce `/etc/ood/config/clusters.d/my_cluster.yml` and `/etc/ood/config/clusters.d/another_cluster.yml` with this exact content:

```yaml
v2:
  metadata:
    title: my_cluster
```

```yaml
v2:
  metadata:
    title: Another Cluster
```
More details can be found in the Open OnDemand documentation and the Cluster Config Schema v2.
This configuration installs applications from custom repositories into the apps directory (default or custom). It accepts a dictionary like that of the Ansible git module. The main key is the resulting directory name where `repo` is cloned under the `dest` directory. Only `repo:` is required.
```yaml
ood_install_apps:
  jupyter:
    repo: https://github.com/OSC/bc_example_jupyter.git
    dest: "{{ ood_sys_app_dir }}"  # default (optional)
    version: master                # default (optional)
  customdir:  # will create /var/www/ood/apps/my/dir/customdir
    repo: https://github.com/OSC/bc_example_rstudio
    dest: /var/www/ood/apps/my/dir
    version: v1.0.1
```
The above example will:

- clone `OSC/bc_example_jupyter` to `/var/www/ood/apps/sys/jupyter`
- clone `OSC/bc_example_rstudio` to `/var/www/ood/apps/my/dir/customdir`
This allows you to configure the `bc_desktop` application and write environment files for other applications.

In the simplest case, when given an `env` key it will write out the key/value pairs to an env file. In the more complex case of `bc_desktop`, it writes its content to a `<cluster>.yml` file (where the filename is the `cluster` attribute of the content) and writes the content of the `submit` key to the `submit.yml.erb` file.

The examples below should illustrate these two points.
```yaml
ood_apps:
  bc_desktop:
    title: "xfce desktop"
    cluster: "my_cluster"
    form:
      - desktop
      - hours
    attributes:
      hours:
        value: 1
      desktop: "xfce"
    submit: |
      ---
      script:
        native:
          - "-t"
          - "<%= '%02d:00:00' % hours %>"
  files:
    env:
      ood_shell: /bin/bash
```
The above example will create:

```
/etc/ood/config
└── apps
    ├── bc_desktop
    │   ├── my_cluster.yml
    │   └── submit
    │       └── submit.yml.erb
    └── files
        └── env
```
`env` entries produce a `key=value` file. Note the capitalization of the keys.

```
$ cat /etc/ood/config/apps/files/env
OOD_SHELL=/bin/bash
```
`submit` entries create a `submit` directory with a `submit.yml.erb` file containing the raw string data you've configured. Note that this configuration is raw data and not YAML like the other configurations. This is to support Ruby ERB templating that is not easily formatted when read by Ansible as YAML.
```
$ cat /etc/ood/config/apps/bc_desktop/submit/submit.yml.erb
---
script:
  native:
    - "-t"
    - "<%= '%02d:00:00' % hours %>"
```
```
$ cat /etc/ood/config/apps/bc_desktop/my_cluster.yml
title: "remote desktop"
cluster: my_cluster
attributes:
  hours:
    value: 1
  desktop: "xfce"
```
There are two ways you can configure Apache for `mod_auth_openidc`.

The first and simplest is to use the `ood_auth_openidc` dictionary to generate a separate config file for OIDC-related configs.

The second is to have ood-portal-generator write the OIDC configs directly into the `ood-portal.conf` file by using the named `oidc_*` variables like `oidc_provider_metadata_url` and `oidc_client_id`. You can view the oidc defaults to see the full list available. If you're using the Ansible template to generate `ood-portal.conf`, then you'll need the extra flag `oidc_settings_samefile` set to true.
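A sketch of the second approach using the named variables (the values here are illustrative; only variables named above are shown):

```yaml
oidc_settings_samefile: true
oidc_provider_metadata_url: https://idp.example.com/.well-known/openid-configuration
oidc_client_id: ondemand
```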
```yaml
ood_auth_openidc:
  OIDCSessionMaxDuration: 28888
  OIDCClientID: myid
  OIDCProviderMetadataURL: https://localhost/
  OIDCCryptoPassphrase: mycryptopass
  "LDAPTrustedGlobalCert CA_BASE64": /etc/ssl/my/cert/path
```
```yaml
default_auth_openidc:
  OIDCRedirectURI: "https://{{ servername }}{{ oidc_uri }}"
  OIDCSessionInactivityTimeout: 28800
  OIDCSessionMaxDuration: 28800
  OIDCRemoteUserClaim: preferred_username
  OIDCPassClaimsAs: environment
  OIDCStripCookies: mod_auth_openidc_session mod_auth_openidc_session_chunks mod_auth_openidc_session_0 mod_auth_openidc_session_1
```
This produces an `auth_openidc.conf` file with the listed key/value pairs merged with the default values. Values defined in `ood_auth_openidc` overwrite any `default_auth_openidc` values.
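Given the two dictionaries above, the merged result would be a sketch like the following (assuming the generator emits one Apache directive per key; the `OIDCRedirectURI` hostname is a placeholder for your templated `servername` and `oidc_uri`):

```
# /etc/ood/config/auth_openidc.conf (sketch)
OIDCRedirectURI https://ondemand.example.com/oidc
OIDCSessionInactivityTimeout 28800
OIDCSessionMaxDuration 28888
OIDCRemoteUserClaim preferred_username
OIDCPassClaimsAs environment
OIDCStripCookies mod_auth_openidc_session mod_auth_openidc_session_chunks mod_auth_openidc_session_0 mod_auth_openidc_session_1
OIDCClientID myid
OIDCProviderMetadataURL https://localhost/
OIDCCryptoPassphrase mycryptopass
LDAPTrustedGlobalCert CA_BASE64 /etc/ssl/my/cert/path
```

Note that `OIDCSessionMaxDuration` takes the overridden value 28888 rather than the default 28800.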
See auth_openidc for more information on that module.
To install ondemand-dex for OIDC, set the flag `install_ondemand_dex` to true and the role will install the package.
If you run into an issue, have a feature request, or have fixed some issue, let us know! PRs welcome! Even if you just have a question, feel free to open a ticket.