To set up the integrated, containerized environment, the following must be installed on the target machine:
1. Docker Desktop / Docker
Each of the functionalities below is hosted in a separate container, with persistence and security:
- OAuth 2.0 Provider - Component / Network : keycloak
1.1 Postgres Database - Container : postgres
1.2 Keycloak OAuth 2.0 Server With Web UI - Container : keycloak
- Log Aggregation - Component / Networks : kibana & conductor-boot
2.1. Elasticsearch - Container : elasticsearch
a) Elasticsearch will be part of two networks: kibana and conductor-boot.
b) The conductor-boot network is used for persisting conductor workflow execution data as well as the log aggregation indexes.
c) The kibana network is used for displaying the logs from Elasticsearch on the Kibana Web UI.
d) This segregates the core and log aggregator component networks for better component/container-level security.
2.2. Logspout - Container : logspout
a) Logspout will be part of the conductor-boot network.
b) The conductor-boot network is used for forwarding log data from the conductor container to Logstash.
c) This segregates the core and log aggregator component networks for better component/container-level security.
2.3. Logstash - Container : logstash
a) Logstash will be part of the conductor-boot network.
b) The conductor-boot network is used by Logstash for forwarding log data to Elasticsearch. Ultimately, logs from the conductor container flow as: conductor --> logspout (logs are formatted, then forwarded) --> logstash --> elasticsearch --> kibana (display).
c) This segregates the core and log aggregator component networks for better component/container-level security.
2.4. Kibana - Container : kibana
a) Kibana will be part of the kibana network.
b) The kibana network is used by Kibana for displaying log data from Elasticsearch.
c) This segregates the core and log aggregator component networks for better component/container-level security.
- Core Orchestration - Component / Network : conductor-boot
3.1. Database - Container : mysql
3.2. Conductor - Container : conductor-boot
a) Conductor Boot is a Spring Boot wrapper with an integrated Conductor server and extra features such as OAuth 2.0 / ADFS security, which makes it a good fit for core orchestration.
All the images needed to set up the environment are already available on Docker Hub (Keycloak on the quay.io registry). Here is the list of Docker images that will be used:
- keycloak --> quay.io/keycloak/keycloak:latest (from quay)
- postgres --> postgres:latest (from dockerhub)
- mysql --> mysql:5.7 (from dockerhub)
- elasticsearch --> elasticsearch:5.6 (from dockerhub)
- kibana --> kibana:5.6.16 (from dockerhub)
- logstash --> logstash:5.6.8 (from dockerhub)
- logspout --> gliderlabs/logspout:v3 (from dockerhub)
- conductorboot --> zzzmahesh/conductorboot:latest (from dockerhub)
Navigate into the cloned directory and create the container persistence directories, which will be used to persist container data, i.e. to map container volumes to the host machine.
a. cd conductor-boot
b. mkdir container
c. mkdir container/persistence
d. mkdir container/persistence/mysql
e. mkdir container/persistence/postgres
f. mkdir container/persistence/elasticsearch
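The same directories can be created with a short Python sketch (assuming it is run from inside the conductor-boot directory):

```python
from pathlib import Path

# Equivalent of the mkdir commands above: create the host directories
# that back the postgres_data, mysql_data and es_data volumes declared
# in the compose file.
for sub in ("mysql", "postgres", "elasticsearch"):
    Path("container", "persistence", sub).mkdir(parents=True, exist_ok=True)
```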
A set of variants is available in this GitHub repository, as listed below.
- docker-compose-suite-sequential-startup.yml
a. Use this if your system has 8 GB RAM, or has more but its resources are already heavily occupied.
- docker-compose-suite-medium-startup.yml
a. Use this if your system has 8 GB RAM or more and has free system resources.
- docker-compose-suite-parallel-startup.yml
a. Use this if your system has 16 GB RAM or more and has free system resources.
b. Let's choose this one, as it has the best startup time.
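The selection rule above can be summarized as a small helper (file names taken from the list above; the thresholds are the ones stated):

```python
# Pick the compose variant based on total RAM and whether the machine's
# resources are currently free, per the rules listed above.
def pick_variant(total_ram_gb: int, resources_free: bool = True) -> str:
    if total_ram_gb >= 16 and resources_free:
        return "docker-compose-suite-parallel-startup.yml"
    if total_ram_gb >= 8 and resources_free:
        return "docker-compose-suite-medium-startup.yml"
    return "docker-compose-suite-sequential-startup.yml"
```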
Here is the docker-compose.yml file
version: '2.2'
services:
postgres:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
POSTGRES_DB: keycloak
POSTGRES_USER: keycloak
POSTGRES_PASSWORD: Keycloak@1234
LOGSPOUT: ignore
mem_limit: "512000000"
ports:
- 5432:5432
healthcheck:
test: ["CMD-SHELL", "pg_isready -U keycloak -d keycloak"]
interval: 30s
timeout: 15s
retries: 10
networks:
keycloak-nw:
aliases:
- postgres
keycloak:
image: quay.io/keycloak/keycloak:latest
healthcheck:
test: ["CMD", "curl", "-I", "-XGET", "http://localhost:8080/auth/realms/master"]
interval: 30s
timeout: 30s
retries: 15
mem_limit: "1024000000"
environment:
DB_VENDOR: POSTGRES
DB_ADDR: postgres
DB_DATABASE: keycloak
DB_USER: keycloak
DB_SCHEMA: public
DB_PASSWORD: Keycloak@1234
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: Admin@1234
LOGSPOUT: ignore
# Uncomment the line below to specify JDBC parameters. The parameter below is just an example and shouldn't be used in production without understanding it; it is highly recommended that you read the PostgreSQL JDBC driver documentation before using it.
#JDBC_PARAMS: "ssl=true"
ports:
- 9990:8080
depends_on:
postgres:
condition: service_healthy
networks:
keycloak-nw:
aliases:
- keycloak
conductor-boot-nw:
aliases:
- keycloak
elasticsearch:
image: elasticsearch:5.6
restart: on-failure
networks:
conductor-boot-nw:
aliases:
- elasticsearch
kibana-nw:
aliases:
- elasticsearch
ports:
- 9200:9200
- 9300:9300
healthcheck:
test: ["CMD", "curl","-I" ,"-XGET", "http://localhost:9200/_cat/health"]
interval: 30s
timeout: 30s
retries: 15
mem_limit: "3096000000"
volumes:
- es_data:/var/lib/elasticsearch
mysql:
image: mysql:5.7
restart: on-failure
networks:
conductor-boot-nw:
aliases:
- mysql
ports:
- 3306:3306
- 33060:33060
environment:
MYSQL_ROOT_PASSWORD: Root@1234
MYSQL_DATABASE: conductor
MYSQL_USER: conductor
MYSQL_PASSWORD: Conductor@1234
MYSQL_INITDB_SKIP_TZINFO: NONE
LOGSPOUT: ignore
healthcheck:
test: [ "CMD-SHELL", 'mysqladmin ping' ]
interval: 60s
timeout: 30s
retries: 15
mem_limit: "512000000"
volumes:
- mysql_data:/var/lib/mysql
conductor-boot:
image: zzzmahesh/conductorboot:latest
restart: on-failure
depends_on:
mysql:
condition: service_healthy
elasticsearch:
condition: service_healthy
keycloak:
condition: service_healthy
networks:
conductor-boot-nw:
aliases:
- conductor-boot
ports:
- 8080:8080
environment:
MYSQL_DATABASE_HOST: mysql
MYSQL_DATABASE_PORT: 3306
MYSQL_DATABASE: conductor
MYSQL_USER: conductor
MYSQL_PASSWORD: Conductor@1234
ELASTICSEARCH_URL: http://elasticsearch:9200
OAUTH2_USER_INFO_URL: http://keycloak:9990/auth/realms/conductor/protocol/openid-connect/userinfo
SPRING_PROFILES_ACTIVE: basic,mysql,external-elasticsearch,external-oauth2,security,conductor
healthcheck:
test: ["CMD", "curl","-I" ,"-XGET", "http://localhost:8080/api/health"]
interval: 60s
timeout: 30s
retries: 15
mem_limit: "1536000000"
kibana:
image: kibana:5.6.16
restart: on-failure
links:
- elasticsearch
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200
LOGSPOUT: ignore
ports:
- 5601:5601
healthcheck:
test: [ "CMD", "curl","-I", "-XGET", "http://localhost:5601/status" ]
interval: 60s
timeout: 30s
retries: 15
depends_on:
elasticsearch:
condition: service_healthy
mem_limit: "512000000"
networks:
kibana-nw:
aliases:
- kibana
logstash:
image: logstash:5.6.8
restart: on-failure
environment:
STDOUT: "true"
LOGSPOUT: ignore
http.host: 0.0.0.0
ports:
- 5000:5000
- 9600:9600
links:
- elasticsearch
depends_on:
elasticsearch:
condition: service_healthy
command: 'logstash -e "input { udp { port => 5000 } } filter { grok { match => { message => \"\A\[%{LOGLEVEL:LOG_LEVEL}%{SPACE}]%{SPACE}%{TIMESTAMP_ISO8601:LOG_TIMESTAMP}%{SPACE}%{NOTSPACE:JAVA_CLASS}%{SPACE}-%{SPACE}%{GREEDYDATA:LOG_MESSAGE}\" } } } output { elasticsearch { hosts => elasticsearch } }"'
mem_limit: "512000000"
networks:
conductor-boot-nw:
aliases:
- logstash
logspout:
image: gliderlabs/logspout:v3
restart: on-failure
command: 'udp://logstash:5000'
links:
- logstash
volumes:
- '/var/run/docker.sock:/tmp/docker.sock'
environment:
LOGSPOUT: ignore
depends_on:
elasticsearch:
condition: service_healthy
logstash:
condition: service_started
mem_limit: "512000000"
networks:
conductor-boot-nw:
aliases:
- logspout
volumes:
postgres_data:
driver: local
driver_opts:
type: none
device: $PWD/container/persistence/postgres
o: bind
es_data:
driver: local
driver_opts:
type: none
device: $PWD/container/persistence/elasticsearch
o: bind
mysql_data:
driver: local
driver_opts:
type: none
device: $PWD/container/persistence/mysql
o: bind
networks:
keycloak-nw:
conductor-boot-nw:
kibana-nw:
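The grok pattern in the logstash command above expects lines shaped like `[LEVEL] timestamp java.class - message`. A rough Python equivalent of that pattern, handy for checking locally which log lines the pipeline will parse (the sample line is illustrative, not taken from a real conductor log):

```python
import re

# Approximate Python translation of the Logstash grok pattern above.
LOG_RE = re.compile(
    r"\A\[(?P<LOG_LEVEL>[A-Z]+)\s*\]\s*"                     # e.g. [INFO ]
    r"(?P<LOG_TIMESTAMP>\d{4}-\d{2}-\d{2}[T ][\d:,.]+)\s+"   # ISO8601-ish timestamp
    r"(?P<JAVA_CLASS>\S+)\s+-\s+"                            # logging class, then " - "
    r"(?P<LOG_MESSAGE>.*)"                                   # rest of the line
)

sample = "[INFO ] 2021-05-01T10:15:30,123 c.n.c.bootstrap.Main - Conductor server started"
fields = LOG_RE.match(sample).groupdict()
print(fields["LOG_LEVEL"], fields["JAVA_CLASS"])  # INFO c.n.c.bootstrap.Main
```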
To spin up the containers, let's fire the docker-compose command. Replace docker-compose-suite-XXX-startup.yml in the command below with the chosen variant.
docker-compose -f docker-compose-suite-parallel-startup.yml up -d
Note: The entire start-up might take up to 15 minutes, depending on the host machine configuration.
After the start-up is successfully completed, the output of the above start-up command should look similar to below.
Creating conductor-boot_postgres_1 ... done
Creating conductor-boot_keycloak_1 ... done
Creating conductor-boot_mysql_1 ... done
Creating conductor-boot_elasticsearch_1 ... done
Creating conductor-boot_logstash_1 ... done
Creating conductor-boot_kibana_1 ... done
Creating conductor-boot_conductor-boot_1 ... done
Creating conductor-boot_logspout_1 ... done
Verify the container status with "docker ps"
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ee0a606c9161 gliderlabs/logspout:v3 "/bin/logspout udp:/…" 5 minutes ago Up 5 minutes 8000/tcp conductor-boot_logspout_1
471fdcb6805f zzzmahesh/conductorboot:latest "/bin/bash /appln/sc…" 5 minutes ago Up 5 minutes (healthy) 0.0.0.0:8080->8080/tcp conductor-boot_conductor-boot_1
a2270d4c2113 kibana:5.6.16 "/docker-entrypoint.…" 5 minutes ago Up 5 minutes (health: starting) 0.0.0.0:5601->5601/tcp conductor-boot_kibana_1
2f05a5d26e1d logstash:5.6.8 "/docker-entrypoint.…" 5 minutes ago Up 5 minutes 0.0.0.0:5000->5000/tcp, 0.0.0.0:9600->9600/tcp conductor-boot_logstash_1
984470ea0c5d elasticsearch:5.6 "/docker-entrypoint.…" 8 minutes ago Up 7 minutes (healthy) 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp conductor-boot_elasticsearch_1
d68f5b943ba1 mysql:5.7 "docker-entrypoint.s…" 8 minutes ago Up 7 minutes (healthy) 0.0.0.0:3306->3306/tcp, 0.0.0.0:33060->33060/tcp conductor-boot_mysql_1
db7ce51c4e11 quay.io/keycloak/keycloak:latest "/opt/jboss/tools/do…" 11 minutes ago Up 11 minutes (healthy) 8443/tcp, 0.0.0.0:9990->8080/tcp conductor-boot_keycloak_1
d4c2a4cc9101 postgres "docker-entrypoint.s…" 14 minutes ago Up 14 minutes (healthy) 0.0.0.0:5432->5432/tcp conductor-boot_postgres_1
Open a browser (preferably Google Chrome) and verify that all the UI URLs are up and running.
a. http://localhost:9990 : Keycloak Base URL
i. Click on the option Administration Console / Admin Console
ii. Use the username and password from the YML file (default: admin/Admin@1234)
b. http://localhost:8080 : Integrated Conductor Swagger URL
c. http://localhost:5601 : Kibana Logs Viewer URL
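The same three endpoints can optionally be checked programmatically; a minimal sketch using only the standard library (the host ports are the ones mapped in the compose file):

```python
import urllib.error
import urllib.request

# The three UI endpoints, on the host ports mapped in the compose file.
UI_URLS = {
    "keycloak": "http://localhost:9990",
    "conductor": "http://localhost:8080",
    "kibana": "http://localhost:5601",
}

def is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with any HTTP status at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # an HTTP error code still means the service answered
    except (urllib.error.URLError, OSError):
        return False

# for name, url in UI_URLS.items():
#     print(name, "up" if is_up(url) else "down")
```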
The following one-time initial configurations have to be done.
- Create a new Realm as shown below
a. Add Realm
b. Enter realm details and Click Save
i. Name: conductor
c. Optional - Key-in the below details
i. Display Name: Conductor Realm
ii. Click Save
- Create a Client Secret at REALM level.
a. From the same dropdown where “Add Realm” was selected previously, switch back to “master” realm by selecting it. And then click on Clients from the side menu.
b. Click on Clients and Select “conductor-realm”
c. Scroll all the way to the bottom of the screen and select the last section “Authentication Flow Overrides”
i. Set Browser Flow as “browser”
ii. Set Direct Grant Flow as “direct grant”
d. Click Save
e. The page should auto refresh and a new tab “Credentials” will be visible on the top of the page. Select this tab and make note of the secret displayed on screen.
- Switch back to the Conductor Realm and configure a new Client and User.
a. From the same dropdown where “Add Realm” was selected previously, switch back to “conductor” realm by selecting it.
b. Click on Clients from the Side Menu and Click "Create" to add new client
c. Key-in the new client details
i. Client ID : conductor_user_client
ii. Client Protocol : openid-connect
iii. Root URL : http://localhost:9990/
d. Click Save
e. Set the Mapper configuration which would return the user role in their userinfo response. Click on “Mappers” and “Add Builtin”
f. Select “client roles” and “realm roles” from the checklist.
g. Click “Add Selected”
h. The selected options will now be visible under “Mappers” tab.
i. Select each one and repeat the below steps (first “client roles” and then “realm roles”)
i. Enable “Add to userinfo” and Save
j. Navigate to Roles tab and click on “Add Role” to add a new role.
k. Key-in the below details for role creation
i. Role Name : role_conductor_super_manager
ii. Description (Optional) : Conductor Admin / Super Manager Access
l. Click on Users from the side menu and Click "Add user"
m. Key-in basic user details for new user creation and click Save
n. Navigate to Credentials tab and set the user password. Disable "Temporary Password" (for the sake of this demo)
o. Navigate to Role Mappings tab, in the Client Roles drop-down, select newly created client "conductor_user_client".
p. From the "Available Roles", select newly created role "role_conductor_super_manager" and click "Add selected".
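For repeatable setups, the same realm, client, and role objects can alternatively be created via Keycloak's admin REST API instead of the UI. A sketch of the request payloads, untested against a live server; the endpoint path in the usage comment assumes the legacy /auth-prefixed Keycloak distribution used in this compose file:

```python
import json
import urllib.request

# Payloads mirroring the manual steps above.
BASE = "http://localhost:9990/auth"

realm = {"realm": "conductor", "enabled": True, "displayName": "Conductor Realm"}
client = {"clientId": "conductor_user_client", "protocol": "openid-connect",
          "rootUrl": "http://localhost:9990/"}
role = {"name": "role_conductor_super_manager",
        "description": "Conductor Admin / Super Manager Access"}

def admin_post(path: str, payload: dict, token: str):
    """POST a JSON payload to the Keycloak admin REST API."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
    )
    return urllib.request.urlopen(req)

# e.g. admin_post("/admin/realms", realm, admin_token)  # needs an admin token
```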
The security configuration is complete.
Now that the user is created and can be used to securely log in and access the Conductor APIs, verification can be done as below.
- Perform a POST request to login with the newly created user and obtain access token.
i) url : http://localhost:9990/auth/realms/conductor/protocol/openid-connect/token
ii) body : x-www-form-urlencoded
iii) client_id : conductor_user_client (newly created client id in above steps)
iv) grant_type : password
v) client_secret : XXXXXXXX (the secret obtained in the steps above)
vi) username : maheshy (newly created user in above steps)
vii) password : password (newly set password in the above steps)
Copy the access_token value returned.
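The token request above can be sketched with the standard library; client_secret, username, and password are the placeholders from the steps above, to be substituted with your own values:

```python
from urllib.parse import urlencode
from urllib.request import Request

TOKEN_URL = "http://localhost:9990/auth/realms/conductor/protocol/openid-connect/token"

# Form body for the password-grant login, matching the fields listed above.
body = urlencode({
    "client_id": "conductor_user_client",
    "grant_type": "password",
    "client_secret": "XXXXXXXX",
    "username": "maheshy",
    "password": "password",
}).encode()

req = Request(TOKEN_URL, data=body,
              headers={"Content-Type": "application/x-www-form-urlencoded"})
# Sending the request requires the stack to be running:
# token = json.load(urllib.request.urlopen(req))["access_token"]
```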
- Perform a GET request to the conductor metadata/taskdefs API to get the list of tasks (probably an empty list, as the instance is newly spun up)
a. Without Access Token – Expect 401 Unauthorized
i) url : http://localhost:8080/api/metadata/taskdefs
ii) header :
Accept : application/json
A 401 error means the Conductor APIs are secure and cannot be accessed without a valid access_token.
b. With Access Token – Expect 200 Success and the list of tasks (probably an empty list, as the instance is newly spun up)
i) url : http://localhost:8080/api/metadata/taskdefs
ii) header :
Accept : application/json
Authorization : Bearer <<ACCESS_TOKEN>>
Replace <<ACCESS_TOKEN>> with the actual access_token copied in the step above. Also note that there is a single space between the word Bearer and the access_token.
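The authenticated request can be built as follows; <<ACCESS_TOKEN>> stays a placeholder for the token copied above:

```python
from urllib.request import Request

# Authenticated GET to the taskdefs endpoint; note the single space
# between "Bearer" and the token.
access_token = "<<ACCESS_TOKEN>>"  # paste the copied access_token here
req = Request(
    "http://localhost:8080/api/metadata/taskdefs",
    headers={"Accept": "application/json",
             "Authorization": "Bearer " + access_token},
)
# With a valid token and the stack running:
# tasks = json.load(urllib.request.urlopen(req))  # expect 200 and a (possibly empty) list
```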
- Perform a GET request to the Keycloak userinfo URL to check user profile details
i) url : http://localhost:9990/auth/realms/conductor/protocol/openid-connect/userinfo
ii) header :
Accept : application/json
Authorization : Bearer <<ACCESS_TOKEN>>
- Perform a GET request to the Conductor Boot userinfo URL to check user profile details
i) url : http://localhost:8080/userinfo
ii) header :
Accept : application/json
Authorization : Bearer <<ACCESS_TOKEN>>
- On the default Kibana page, the initial step is to create the index pattern which will hold the logs.
- Select Time Filter Field Name : I don't want to use the Time Filter
- Click Save
- Navigate to Discover, and all the logs from conductorboot will be shown as below.
- Select the fields listed below in the field selector for a better view
i) LOG_TIMESTAMP
ii) LOG_LEVEL
iii) JAVA_CLASS
iv) LOG_MESSAGE
- Add a filter where LOG_MESSAGE exists : this helps filter away empty lines.
- Save the view as "Conductor Logs"
- Click New view and repeat steps 5 and 6.
- Add an extra filter JAVA_CLASS : Logbook : this helps filter only the HTTP requests and responses, i.e. API calls.
- Save the view as "HTTP Requests and Responses"
Once all the above steps are completed, the target state is ready for use: an OAuth 2.0-secured, containerized microservice orchestrator (Netflix Conductor) with a log aggregator.