Update engineering playbooks (rise8-us#8)
* Update engineering playbooks

* Remove test content
asarenkansah authored Feb 1, 2024
1 parent 8764636 commit d5908be
Showing 22 changed files with 1,704 additions and 15 deletions.
Binary file added docs/assets/rise8-planes.webp
53 changes: 53 additions & 0 deletions docs/blog/gitops.md
@@ -0,0 +1,53 @@
# How GitOps intertwines with Continuous Delivery

## Introduction

In the ever-evolving landscape of Kubernetes orchestration, managing and deploying applications efficiently is crucial. Enter GitOps, a methodology that leverages the power of version control systems like Git to streamline operations, improve collaboration, and enhance the overall development lifecycle. In this blog post, we'll explore the core principles and best practices of GitOps.

## What is GitOps?

GitOps is a set of practices that combines the benefits of declarative infrastructure as code with version control. The primary idea is to use Git repositories as the single source of truth for both application code and infrastructure configuration. This approach brings several advantages, such as:

- **Declarative Configuration:** Describe the desired state of your infrastructure and applications in a declarative manner, making it easier to understand and manage.

- **Version Control:** Leverage Git's version control capabilities to track changes, roll back to previous states, and collaborate seamlessly with teams.

- **Continuous Delivery:** Automate deployments by using Git as the trigger for CI/CD pipelines, ensuring a consistent and reproducible process.

- **Operational Efficiency:** GitOps minimizes manual interventions by relying on automated processes, reducing the risk of human errors and improving operational efficiency.

## Core Principles of GitOps

### Declarative Configuration

In GitOps, the entire infrastructure and application stack are defined declaratively. This means specifying the desired end state rather than prescribing a sequence of steps to reach that state. Tools like Kubernetes manifests, Helm charts, or custom YAML files serve as the declarative configuration, making it easy to understand and manage.
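
For example, a minimal Kubernetes Deployment manifest (the name and image below are illustrative, not tied to any specific project) captures the desired state, such as how many replicas should exist and which image to run, rather than the steps to get there:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # illustrative name
spec:
  replicas: 3                  # desired state: three pods should always exist
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # pinned, versioned image
          ports:
            - containerPort: 8080
```

Changing the desired state means committing a change to this file (for example, bumping `replicas` or the image tag); the cluster is then reconciled to match.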

### Version Control

Git is at the heart of GitOps. Every change, whether it's a modification to infrastructure configuration or an update to application code, is committed and versioned. This not only provides an audit trail but also enables rollbacks to previous states in case of issues, offering a safety net for operations.

### Automation

Automation is a key enabler of GitOps. Continuous Integration (CI) pipelines automatically build, test, and package applications, while Continuous Delivery (CD) pipelines use Git as a trigger to deploy changes to the target environment. This ensures consistency, repeatability, and traceability throughout the development lifecycle.
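
One common way to wire the CD half of that loop, sketched here with Argo CD as one example of a GitOps controller, is an `Application` resource that watches a Git path and syncs the cluster whenever a commit lands. The repository URL, path, and namespace below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/example/repo'   # placeholder repository
    targetRevision: HEAD
    path: manifests/                             # placeholder path to the Kubernetes manifests
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the state in Git
```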

### Observability

GitOps encourages a robust observability practice. By integrating monitoring, logging, and alerting into the deployment process, teams gain insights into the health and performance of applications. This proactive approach allows for quick detection and resolution of issues.

## Best Practices

1. **Infrastructure as Code (IaC):** Treat infrastructure as code, using tools like Terraform or Kubernetes manifests. This ensures that changes are versioned, reviewed, and applied consistently.

2. **Git Workflow:** Adopt a Git branching strategy that aligns with your release and deployment strategy. Consider feature branches for development, main branches for production releases, and tags for versioning.

3. **Automated Testing:** Implement automated testing at various stages of the pipeline, including unit tests for application code and integration tests for infrastructure changes. This helps catch issues early in the development process.

4. **Secrets Management:** Use tools or practices for secure storage and management of sensitive information such as API keys and passwords. Avoid storing secrets directly in the Git repository (see the sealed-secret sketch after this list).

5. **Rollback Strategies:** Plan and document rollback strategies in case of failed deployments. GitOps allows for easy rollbacks by reverting to a previous commit or tag.

6. **Immutable Infrastructure:** Aim for immutable infrastructure by rebuilding and redeploying entire environments for updates. This ensures consistency and reduces the likelihood of configuration drift.
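
For item 4, one widely used pattern is to commit only encrypted secrets. A minimal sketch using Bitnami Sealed Secrets (the name, namespace, and ciphertext are placeholders; real ciphertext is produced by the `kubeseal` CLI):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-app-credentials     # placeholder name
  namespace: my-app            # placeholder namespace
spec:
  encryptedData:
    api-key: AgB4kq...         # placeholder ciphertext from kubeseal; safe to commit
```

The in-cluster controller decrypts this into a regular `Secret`, so plaintext never needs to live in the Git repository.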

## Conclusion

GitOps brings a paradigm shift to Kubernetes operations by promoting collaboration, automation, and version control. By leveraging Git as the source of truth, teams can enhance visibility, traceability, and reliability in their development and deployment processes. Embrace GitOps practices to navigate the complexities of Kubernetes with confidence and agility. Happy deploying!
60 changes: 60 additions & 0 deletions docs/blog/platform-wars.md
@@ -0,0 +1,60 @@
# PaaS vs DIY Kubernetes

In the current landscape of Kubernetes (k8s), there's a common misconception about whether Kubernetes is a Platform-as-a-Service (PaaS). Kubernetes' own documentation clarifies that it is not a traditional, all-inclusive PaaS system. However, this nuance doesn't make it a drawback; rather, it highlights an important distinction. As Kelsey Hightower puts it, "Kubernetes is for people building platforms."

## Kubernetes Overview

Kubernetes is an open-source platform that offers shared services across clusters or nodes, such as scaling, load balancing, or monitoring. Unlike traditional PaaS systems, Kubernetes operates at the container level, providing developers with flexibility in choosing services, resources, and tools. If an application can be containerized, it can run on k8s.

### What Kubernetes Does

- Provides flexibility for app developers.
- Operates at the container level, allowing a diverse set of workflows.
- Does not limit the types of applications supported.

### What Kubernetes Doesn't Do

- Does not provide services across applications, like databases or storage systems.
- Does not dictate a configuration language, leaving it open for diverse workflows.

### DIY Kubernetes Pros

- More out-of-the-box solutions available.
- More flexibility for app developers.
- Ease of containerized application migration.

### DIY Kubernetes Cons

- More required of app teams.
- More complex for simple applications.
- Challenges converting legacy applications.
- Flexibility drives compliance complexities.

## Platform-as-a-Service (PaaS)

A good PaaS decouples application development and deployment from platform operations, allowing for increased focus on Day 2 operations, improved performance capabilities, and streamlined billing. Cloud Foundry is an example of an open-source PaaS project that, while no longer recommended, provides a frame of reference for what a PaaS should be.

### PaaS Pros

- Provides cross-app services.
- Decouples platform and app development.
- Ease of multi-cloud and hybrid approach.
- Structure and opinionation drive simplified controls compliance.

### PaaS Cons

- More upfront cost.
- Less flexibility for app teams.
- Larger infrastructure resource requirements.

## PaaS or K8s?

Choosing between PaaS and raw Kubernetes depends on organizational needs, infrastructure ownership, and technical competence. Over time, teams using raw k8s tend to build internal structures and opinionation, resembling an internal PaaS. The decision may involve weighing the cost-effectiveness of building in-house solutions versus adopting vendor solutions.

### Considerations

- **Lock-in:** In-house solutions also exhibit lock-in, and it's essential to evaluate various forms of lock-in.
- **Costs:** Upfront costs for PaaS may be higher, but it alleviates the burden on application developers.
- **Flexibility:** Kubernetes provides more flexibility but also more responsibility.

There's no definitive answer; the choice depends on the organization's goals and capabilities. Whether PaaS or DIY Kubernetes, the key is to achieve the desired capabilities for successful application deployment.
7 changes: 0 additions & 7 deletions docs/blog/sample.md

This file was deleted.

27 changes: 27 additions & 0 deletions docs/blog/users.md
@@ -0,0 +1,27 @@
# Rise8 Continuous Delivery Approach

At **Rise8**, our mission revolves around achieving continuous delivery of impactful software that users love.

## Continuous Delivery: The Feedback Loop

Continuous delivery forms the feedback loop essential for achieving small batch sizes and iterating towards impactful software that brings joy to users. This process is carried out within a balanced team where the product manager practices lean enterprise, and UX focuses on user-centered design.

## Continuous Delivery First Approach

While numerous measures can be taken to mitigate risks, the ultimate validation point is the production environment. Hence, we adopt a continuous delivery first approach to establish learning feedback loops.

## Leveraging DORA Insights

We highly value the insights provided by **DORA (DevOps Research and Assessment)** research in identifying the key factors for achieving continuous delivery. We not only adhere to the DORA top five but also incorporate the other crucial capabilities they've identified as contributors, including automated testing, clean code, loose coupling, monitoring, trunk-based development, and deployment automation.

## High Compliance Spaces: Ensuring Security and Best Practices

In high compliance spaces, we are pioneers in ensuring that deployment automation encompasses security compliance and adheres to release engineering best practices. This is particularly crucial due to the sensitivity and complexity of deployment environments.

## Balancing Capability and Security

Emphasizing both capability and security, we want to make it clear that continuous delivery doesn't necessitate accepting more risk; in fact, it actively reduces risk.

## Effective Communication for Stakeholder Buy-In

Our team possesses the expertise to communicate these principles effectively to both customers and stakeholders. This ensures the necessary buy-in to complete the feedback loops that unlock continuous delivery.
File renamed without changes.
7 changes: 2 additions & 5 deletions docs/index.md
@@ -1,7 +1,4 @@
---
hide:
- navigation
- toc
title: Title
template: home.html
---
# Home
Welcome to Engineering!
54 changes: 54 additions & 0 deletions docs/platform-engineering/methodologies/general-best-practices.md
@@ -0,0 +1,54 @@
# Platform Playbook

Platform engineering involves designing, building, and maintaining the infrastructure and tools that enable software development and deployment. Adopting best practices in platform engineering ensures a stable, scalable, and efficient environment for development teams. Here are some detailed best practices:

## 1. Infrastructure as Code (IaC):
- Use IaC tools like Terraform or Ansible to define and manage infrastructure.
- Version control your IaC scripts to track changes and enable collaboration.
- Implement a modular structure for IaC to promote reusability and maintainability.
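
As one illustration, a small Ansible playbook kept in version control and reviewed like any other code change (the inventory group and package are placeholders):

```yaml
# playbook.yml (illustrative sketch)
- name: Configure web servers
  hosts: webservers            # placeholder inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```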

## 2. Containerization:
- Containerize applications using technologies like Docker for consistency across environments.
- Use orchestration tools such as Kubernetes for automated deployment, scaling, and management of containers.
- Optimize container images for size and security.

## 3. Continuous Integration and Continuous Deployment (CI/CD):
- Implement CI/CD pipelines to automate testing, building, and deployment processes.
- Include automated testing at various stages to catch issues early.
- Use feature flags to enable gradual and safe feature rollouts.
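
A minimal sketch of such a pipeline using GitHub Actions (the workflow file name, Make target, and registry are assumptions for illustration):

```yaml
# .github/workflows/ci.yml (illustrative sketch)
name: ci
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test                      # assumes a Makefile with a 'test' target
      - name: Build container image
        run: docker build -t registry.example.com/my-app:${{ github.sha }} .
```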

## 4. Monitoring and Logging:
- Establish comprehensive monitoring for applications and infrastructure.
- Utilize centralized logging to gather and analyze logs for troubleshooting.
- Implement alerting systems to detect and respond to issues proactively.
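
As a sketch, an alerting rule expressed with the Prometheus Operator's `PrometheusRule` resource (the metric name and threshold are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-app-alerts
spec:
  groups:
    - name: my-app.rules
      rules:
        - alert: HighErrorRate
          expr: sum(rate(http_requests_total{job="my-app", code=~"5.."}[5m])) > 1   # illustrative metric and threshold
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "my-app is serving an elevated rate of 5xx responses"
```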

## 5. Scalability:
- Design systems to scale horizontally by adding more instances.
- Use auto-scaling groups to automatically adjust resources based on demand.
- Regularly perform load testing to identify potential bottlenecks.
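
In Kubernetes, scaling on demand is typically expressed with a HorizontalPodAutoscaler; a minimal sketch (the target Deployment and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # illustrative target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU passes 70%
```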

## 6. Security:
- Implement security best practices for infrastructure and applications.
- Regularly update dependencies and conduct security audits.
- Enforce least privilege access controls and regularly rotate credentials.
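
A least-privilege sketch in Kubernetes RBAC terms (the namespace, service account, and verbs are illustrative): a pipeline identity that may patch Deployments in a single namespace and nothing else.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: my-app
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"]   # no create/delete, no cluster-wide access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: my-app
subjects:
  - kind: ServiceAccount
    name: ci-deployer                 # illustrative service account used by the pipeline
    namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployer
```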

## 7. Documentation:
- Maintain comprehensive documentation for infrastructure, deployment processes, and configurations.
- Keep documentation up-to-date to facilitate knowledge sharing and onboarding.

## 8. High Availability (HA):
- Design systems with redundancy to ensure availability in case of failures.
- Distribute applications across multiple availability zones or regions.
- Test and simulate failure scenarios to validate HA configurations.
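
One declarative way to spread replicas across zones is a topology spread constraint; a sketch (labels and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # spread replicas evenly across zones
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # illustrative image
```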

## 9. Collaboration and Communication:
- Foster collaboration between development, operations, and other teams.
- Use collaboration tools and platforms for effective communication.
- Conduct regular cross-functional meetings to align goals and address challenges.

## 10. Performance Optimization:
- Regularly assess and optimize the performance of both infrastructure and applications.
- Use caching mechanisms, content delivery networks (CDNs), and other optimization techniques.
- Monitor and optimize database queries for efficiency.

By adhering to these platform engineering best practices, teams can create a robust and efficient environment that supports the continuous delivery of high-quality software.
87 changes: 87 additions & 0 deletions docs/platform-engineering/practicals/argocd-examples.md
@@ -0,0 +1,87 @@
# ArgoCD: Application of ApplicationSets

A common application of ApplicationSets in ArgoCD is to efficiently manage and deploy similar applications or configurations across multiple clusters or namespaces. Here's a specific example to illustrate the pattern:

## **Scenario**

Imagine you have a microservices architecture, and you need to deploy the same application stack to multiple namespaces within a Kubernetes cluster. Each namespace may represent a different environment, such as development, testing, and production.

## **ApplicationSets Implementation**

### 1. **Generator:**
- Define a generator that produces the application names, namespaces, and other parameters for each target environment, for example a `list` generator that enumerates one element per environment:

```yaml
generators:
  - list:
      elements:
        - name: my-app-dev
          namespace: dev
        - name: my-app-test
          namespace: test
        - name: my-app-prod
          namespace: prod
```
### 2. **Template:**
- Create a template specifying the common configuration for your application. This includes the source repository, target revision, and destination settings.
```yaml
template:
  metadata:
    name: '{{name}}'
    labels:
      app.kubernetes.io/name: '{{name}}'
  spec:
    project: default
    source:
      repoURL: 'https://github.com/example/repo'
      targetRevision: HEAD
      path: manifests/      # path to the shared manifests in the repo (illustrative)
    destination:
      namespace: '{{namespace}}'
      server: 'https://kubernetes.default.svc'
```
### 3. **ApplicationSet Manifest:**
- Apply the ApplicationSet manifest that defines the generators and template.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-set
spec:
  generators:
    - list:
        elements:             # one element per target environment
          - name: my-app-dev
            namespace: dev
          - name: my-app-test
            namespace: test
          - name: my-app-prod
            namespace: prod
  template:
    metadata:
      name: '{{name}}'
      labels:
        app.kubernetes.io/part-of: my-app-set
    spec:
      project: default
      source:
        repoURL: 'https://github.com/example/repo'
        targetRevision: HEAD
        path: manifests/      # path to the shared manifests in the repo (illustrative)
      destination:
        server: 'https://kubernetes.default.svc'
        namespace: '{{namespace}}'
```
## **Result**
- ArgoCD will dynamically generate and deploy three instances of the application, each to a different namespace (dev, test, prod).
- The common configuration specified in the template ensures consistency across all instances.
- Changes made to the ApplicationSet manifest automatically reflect in the generated applications, allowing for easy scaling and maintenance.
## **Use Cases**
### 1. **Scalable Deployments:**
- Easily scale deployments across different namespaces or clusters without manually managing each application.
### 2. **Environment Isolation:**
- Isolate configurations for different environments, ensuring separation and consistency.
### 3. **Efficient Management:**
- Streamline the deployment of similar applications with minimal manual intervention.
ApplicationSets in ArgoCD provide a powerful mechanism for handling repetitive deployment scenarios and managing configurations at scale.
85 changes: 85 additions & 0 deletions docs/software-engineering/practicals/create-api-example.md
@@ -0,0 +1,85 @@
# Writing a Simple API with Flask (Python)

## Introduction

In this tutorial, we'll walk through the process of creating a simple RESTful API using Python and the Flask web framework. Flask is a lightweight and easy-to-use framework for building web applications, including APIs.

### Prerequisites

Before you begin, make sure you have the following installed:

- Python ([Python Official Website](https://www.python.org/))
- Flask (`pip install Flask`)

## Step 1: Setting Up the Project

Create a new directory for your project and navigate into it.

```bash
mkdir flask-api-tutorial
cd flask-api-tutorial
```

## Step 2: Creating a Virtual Environment
It's good practice to use a virtual environment to isolate your project's dependencies. Create a virtual environment using the following commands:

```bash
python -m venv venv   # the same command works on Windows
```

Activate the virtual environment:

```bash
# On macOS/Linux
source venv/bin/activate
# On Windows
venv\Scripts\activate
```

This step ensures that your project has a dedicated environment for its dependencies, minimizing conflicts and ensuring consistency across different projects.

## Step 3: Installing Flask
Install Flask within the virtual environment:

```bash
pip install Flask
```

## Step 4: Writing the API Code
Create a file named app.py in your project directory and open it in a text editor. Add the following code:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api', methods=['GET'])
def get_data():
data = {'message': 'Hello, API!'}
return jsonify(data)

if __name__ == '__main__':
app.run(debug=True)
```

This code sets up a basic Flask application with a single endpoint (/api) that returns a JSON response.

## Step 5: Running the API
In the terminal, run the Flask application:

```bash
python app.py
```

Visit `http://127.0.0.1:5000/api` in your browser or use a tool like curl or Postman to make a GET request.

```bash
curl http://127.0.0.1:5000/api
```

You should receive a JSON response: `{"message": "Hello, API!"}`

## Conclusion

Congratulations! You've successfully created a simple API using Flask. This is just a starting point, and you can expand and enhance your API by adding more routes, handling different HTTP methods, and integrating with databases.

Explore the [Flask documentation](https://flask.palletsprojects.com/) for more advanced features and best practices.

Feel free to adapt this tutorial to other frameworks or languages as needed.