Delivery
This feature covers responsibility for the overall hosting of the application: choosing a suitable hosting platform, working closely with the DevOps team on continuous deployments from development through staging to production, documenting the delivery process, and managing the environments (development, QA and staging). After conducting in-depth research we decided on AWS (Amazon Web Services) as our hosting platform. This decision was based on several factors, from pricing and free-trial length to which features are available in the free version. AWS offers the best features in its free tier as well as substantial hosting time, whereas some other platforms offer only a week, for example.
Since the project will be dockerised and hosted as such, we needed to look into various AWS features. The first thought was to run a VPS (Virtual Private Server); however, our main concern was that the processing power would not be enough to run numerous Docker containers (for the various servers such as PostgreSQL, Prisma and the API) alongside a second set of containers for a testing environment.
The current decision is to use AWS ECS (Elastic Container Service) paired with CodeDeploy to run the containers on independent instances, and to try to automate Docker deployment. Access to the built Docker containers has been granted to us; the next phase is testing and digging further into ECS to see if it meets our requirements.
- R1: Be able to host all technologies and systems required to deliver
- R2: Set up the different environments for testing as well as final delivery
- R3: Make sure UP integrations work on the hosting platform
- R4: Automated deployments.
- R5: Should have a status page that shows the status of the APIs.
- Ensure enough storage space to accommodate media and the user base
- Allow room for scalability
- Availability: ensure that the system is available to users
- A server with enough processing power to build the application files or containers
- Enough power to accommodate several users
- Enough storage space to allow for scaling and a large user base
- Be able to deliver SSL certification for security
- Finding hosting platforms that are affordable, in our case free
- The website can be accessed and used as expected
- All implemented features work and are accessible
- The testing environment is accessible and usable by all teams for development
- As project manager, I made sure that my team delivered their tasks
- Provided the support they needed to complete their tasks, including planning, organising and directing specific project components
- Responsible for scheduling all meetings to track the team's progress, discuss challenges, and congratulate and encourage them
- Also responsible for arranging meetings with other teams to answer questions from my team and give direction on issues related to that team's feature; I first try to find the answer myself, and if it needs more elaboration I organise a meeting to address it
- Assisted team members who were struggling with certain work by providing direction where I could, or pairing them with a team member who understood it better
- Created our project board and the tickets that go with it, and assigned them to members accordingly
- Acted as the first point of contact for anything other teams needed from my team, making sure to provide the right resource to attend to the query
- Protected my team from distractions
- Determined what is within our scope
Sprint 1
- Defined functional and non-functional requirements
- Defined acceptance criteria
- Stayed in constant communication with the UP integration team to ensure that their requirements will work with the hosting platform we use
- Sorted out the wiki layout
- Sorted and compiled the SRS as lead BA
- Did in-depth research into various hosting platforms and their capabilities in order to choose the appropriate one for the project
- Based on that research, made the decision to choose AWS for its features and services
- Set up the AWS hosting platform and tested it
- Communicated with the Architecture team to understand the Docker containers they are building
- Currently in communication with the Architecture team to get the containers to host and see whether they work well
Sprint 2
- Did research into various hosting providers and their capabilities
- Implemented various servers to test-run them and their capabilities, including Azure, AWS, the CS servers, DigitalOcean and various other providers
- Got a domain name and an SSL certificate, and settled on DigitalOcean as the final server
- Set up the DigitalOcean server and granted access and testing capabilities to students
- Helped with automating the server to pull, build and run updated releases
- Researched and attempted to implement a container architecture
Sprint 1:
- Conducted extensive research on how the API for the status page should work, including reading the GraphQL documentation and other resources to understand what the API needs in order to serve information about the available services.
- Was involved in several one-on-one meetings with the designer of the status page to understand what information they need from my API.
- From the results of that research, created a mock API for the status page.
- Ensured that the mock API was pushed to GitHub for availability.
Sprint 2:
- Created the services that perform health checks.
- Ensured that the API is functional and uses the services without mock data.
- Created documentation explaining how the API works, which future generations can refer to: https://github.com/COS-301/graduates/wiki/API#hosting-api-documentation-by-delivery-team
- Helped with delivery and ensured that all the APIs work from the hosted environment.
My contribution statistics:
- Did extensive research about PostgreSQL, Prisma, Docker and NestJS.
- Researched and investigated tools that can be used to create backups of a PostgreSQL database, and found the pg_dump utility.
- Conducted research on how to write shell scripts to create database dumps, and found the getvim/execute node package that can be used to execute command-line programs from Node.
- At the moment the biggest task is implementing the script that creates dumps of the PostgreSQL database; rollbacks are also being looked into (a sketch of a possible dump command follows below).
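As a minimal sketch of what such a dump script might look like (the host, port, user, database name and file names here are assumptions, not the team's actual configuration):

#!/bin/bash
# hypothetical backup sketch: dump the database to a timestamped file with pg_dump
pg_dump -h localhost -p 5432 -U postgres -F c -f "graduates_$(date +%F).dump" graduates
# a custom-format dump (-F c) can later be restored with pg_restore:
# pg_restore -h localhost -p 5432 -U postgres -d graduates --clean graduates_<date>.dump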
- Conducted multiple meetings with the designers to figure out and decide on what type of designs we are going to use.
- Voted on the different UI kits to be used, and decided on colour palettes, fonts and the various components for the design of the whole portal.
- Designed the status page used by the 'Delivery' team and a mobile view for the same page (https://xd.adobe.com/view/6bfcae62-e7c0-47df-97ee-db319931d0c7-3eaf/)
- Now busy creating a storybook for the design and UI
- Designed the mobile view of the page for the PWA/mobile team
Sprint 2:
- Was tasked, along with the other designers, with setting up Angular Storybook for multiple components from the design (but since the Storybook version didn't match the Angular version, it was quickly ruled out by the PO).
- Worked with the UI Engineer to transfer the design into UI, then reviewed the UI afterwards
Sprint 1 & 2:
- Monitor and review pull requests.
- Fix pipeline issues.
- Check if protocols for pull requests are correct.
Sprint 1:
- Help members with the local setup.
- Research options for automated deployments for AWS, the CS server and DigitalOcean.
- Research how to build Docker images in GitHub Actions.
Sprint 2:
- Set up Docker Hub account.
- Research webhooks for GitHub and Docker Hub.
- Create GitHub workflow for deployment. 📄
- Test Docker containers on the server.
- Test workflow and webhook endpoint.
- Write script to get the latest code from the repository on the server; also create a script to pull images and restart containers.
- Held multiple meetings and collaborations with the rest of the testers to discuss and formulate the testing plan, as well as the testing protocols to be followed.
- Held a meeting to discuss the initial testing plan, as well as what else would be standardised among the testers.
- Held meetings to finalise the protocols for testing, discussing the testing workflow and the protocols used for testing and logging. The test plan and protocols that were discussed can be seen on the wiki under Testing.
- Did research on how to do testing using Jest and Cypress.
- Did research on how to use GitHub Actions to automate testing.
- Now busy creating a document that details testing for the delivery team
- Busy creating test examples as well
- Made contributions to the testing wiki and documentation
- Assisted the API engineer and UI engineer with writing integration and unit tests
- Implemented various unit tests and integration tests for the status API
- Helped find and correct bugs in the code through testing and restructuring code in the API
- Logged testing results and oversaw that code added or changed in pull requests made by team Basic is properly tested
My contribution statistics:
Sprint 1:
- Conducted research on how to build the UI using Angular.
- Conducted research and learned Tailwind to use on the project.
- Investigated and tried NgRx.
- Designed the status page components and defined how they will work.
- Conducting research on hosting platforms to host the front-end; at the moment Heroku and DigitalOcean seem to be good options.
Sprint 2:
- Implemented the status page in Angular
- Made use of Apollo to integrate the API
- Made the status page mobile responsive
Once changes get pushed to the release or main branch, a webhook (POST request) is sent to an endpoint on the server at port 20202.
This is done with the following workflow: deploy.yml 📄
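For illustration, the POST the workflow sends amounts to something like the following (the endpoint path here is an assumption; only the port comes from the setup above):

# hypothetical example of the webhook call the workflow makes
curl -X POST http://<server-ip>:20202/webhook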
When the endpoint receives the POST request a script executes on the server:
#!/bin/bash
set -e
cd graduates
# determine the local branch and its upstream
local_branch=$(git rev-parse --symbolic-full-name --abbrev-ref HEAD)
remote_branch=$(git rev-parse --abbrev-ref --symbolic-full-name @{u})
# gets the remote for the branch on the remote repo
remote=$(git config branch.$local_branch.remote)
# fetches the remote repo's contents
git fetch $remote
if git merge-base --is-ancestor $remote_branch HEAD; then
    # already up to date, nothing to deploy
    exit 0
else
    # stop the running services before updating
    cd
    docker stop postgres
    killall -q node
    killall -q postgres
    cd graduates
    if git merge-base --is-ancestor HEAD $remote_branch; then
        # can fast-forward successfully
        git merge --ff-only --stat $remote_branch
    else
        # cannot fast-forward, rebase onto the remote branch
        git rebase --preserve-merges --stat $remote_branch
    fi
    cd
    # restart the services
    ./run.sh
fi
This script fetches the latest code from the repository. After its execution another script, run.sh, is executed to restart the services:
#!/bin/bash
# remove the old log file (if any)
rm -f nohup.out
cd graduates
# install dependencies and start the database containers
yarn install
docker-compose up &
# apply database migrations
yarn prisma migrate dev
yarn prisma migrate deploy
# start Prisma Studio and the services, bound to all interfaces
yarn prisma studio -n 0.0.0.0 &
yarn run start:db --host 0.0.0.0 &
yarn run start:api --host 0.0.0.0 &
yarn run start --host 0.0.0.0 --disableHostCheck true &
The easiest fix would be to rerun the deploy action from GitHub. If this does not achieve what is needed, one will need to:
- log in to the server
- manually run the script to get the updated repository code
- manually run the script to restart the services
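A minimal sketch of those manual steps, assuming the fetch script shown above is saved as update.sh (the script name and login details are assumptions):

# log in to the server (user and host are placeholders)
ssh <user>@<server-ip>
# fetch the updated repository code
./update.sh
# restart the services
./run.sh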
Ideally the project should be hosted on the CS server. At the beginning of your Mini Project you will need to contact the CS Tech team.
The container created for the graduates portal is powered down; you will need to ask for it to be started again and for access. As soon as you start, communicate with the CS team to open ports: they need to put a request in with the firewall team, and this can take a long time to go through the necessary channels.
hexadecimal.cs.up.ac.za
Ideally the different docker containers should be hosted on the server.
Currently the workflow is integrated with Docker Hub.
When updates get pushed to the repo, the images are built and pushed to the Docker Hub container registry.
The Docker Hub repository is private and requires docker login:
username: mini301graduates
password: n6la4bd01fhso5
(These credentials are already added as secrets to the repo)
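As an illustration, the build-and-push steps the workflow performs amount to roughly the following (the Dockerfile path is an assumption; the image names and tags come from the pull commands below):

# log in with the credentials above (docker login prompts for the password)
docker login -u mini301graduates
# build and push one of the images; the API is shown, the client and DB follow the same pattern
docker build -t mini301graduates/graduates:releaseApi -f <path-to-api-dockerfile> .
docker push mini301graduates/graduates:releaseApi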
A webhook is sent after the push that sends a POST request to the server. At the endpoint for the webhook a script can be executed to fetch the updated container images.
# pull the latest images from Docker Hub
docker pull mini301graduates/graduates:releaseClient
docker pull mini301graduates/graduates:releaseApi
docker pull mini301graduates/graduates:releaseDB
# stop the running containers
docker stop CLIENT_REL
docker stop API_REL
docker stop DB_REL
# remove stopped containers and dangling images
docker system prune -f
# start fresh containers from the updated images
docker run -p 5432:5432 -d --name=DB_REL mini301graduates/graduates:releaseDB
docker run -p 3333:3333 -d --name=API_REL mini301graduates/graduates:releaseApi
docker run -p 80:80 -d --name=CLIENT_REL mini301graduates/graduates:releaseClient
Docker automatically checks whether the images have changed; if an image is the same it is not pulled again.
-p 80:80 binds the container port to the host port.
-d runs the container in the background.
Depending on the network configuration it might be necessary to expose the ports used for the respective containers to communicate with each other.
https://docs.docker.com/network/ contains more information on how Docker containers can communicate with one another.
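If the default bridge network is not enough, a user-defined network lets the containers reach each other by container name; a minimal sketch (the network name graduates-net is an assumption):

# create a user-defined bridge network
docker network create graduates-net
# start the containers on that network so they can resolve each other by name
docker run -p 5432:5432 -d --network=graduates-net --name=DB_REL mini301graduates/graduates:releaseDB
docker run -p 3333:3333 -d --network=graduates-net --name=API_REL mini301graduates/graduates:releaseApi
docker run -p 80:80 -d --network=graduates-net --name=CLIENT_REL mini301graduates/graduates:releaseClient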
After the containers are running and necessary network configuration is in place, execute the necessary commands on the container to get the services running.
# open an interactive shell inside a container
docker exec -ti <container name> bash
# or run a single command, e.g. build the project
docker exec <container name> yarn build
You will need to work with the architecture team to improve the Dockerfiles and make sure they work.
Collaborate with the CI/CD team. Late in development the question was brought up of how to test containers during GitHub workflows; they said an environment is needed and redirected the query to us. https://github.com/COS-301/graduates/discussions/3#discussioncomment-2574303
Additional workflow(s) can be created to build images, spin up the containers and test them using GitHub Actions as part of CI/CD. As part of the workflow, images can be uploaded to Docker Hub and a test environment set up on the server for the students to access. (On push to develop: build images with separate tags, test them, push them to Docker Hub and host this testing environment so students can already see how their changes affect the project.)
Once the image is built in the workflow you can specify to use the image and run tests on it:
name: workflow-name
on: push
jobs:
  job-name:
    runs-on: ubuntu-latest
    # run the job inside the image built earlier (username/repository:tag)
    container:
      image: dockerhub-username/image-name:tag
      # a private image also needs credentials under container.credentials
    steps:
      # the steps are a sketch: check out the code and run a hypothetical test command
      - uses: actions/checkout@v3
      - run: yarn test
Docker Hub also makes the built images accessible to all members; they can pull them to their local environment.
More resources about testing containers:
https://gist.github.com/RizkyRajitha/ca945c55ab09bcc2c7c150b2fed7db13
https://dev.to/mihinduranasinghe/using-docker-containers-in-jobs-github-actions-3eof
When making changes, adding, or removing code, ensure that tests still pass and that tests are updated according to the changes made where necessary.