diff --git a/docs/assets/rise8-planes.webp b/docs/assets/rise8-planes.webp
new file mode 100644
index 0000000..3e6560a
Binary files /dev/null and b/docs/assets/rise8-planes.webp differ
diff --git a/docs/blog/gitops.md b/docs/blog/gitops.md
new file mode 100644
index 0000000..0e3a3c6
--- /dev/null
+++ b/docs/blog/gitops.md
@@ -0,0 +1,53 @@
+# How GitOps intertwines with Continuous Delivery
+
+## Introduction
+
+In the ever-evolving landscape of Kubernetes orchestration, managing and deploying applications efficiently is crucial. Enter GitOps, a methodology that leverages the power of version control systems like Git to streamline operations, improve collaboration, and enhance the overall development lifecycle. In this blog post, we'll explore the core principles and best practices of GitOps.
+
+## What is GitOps?
+
+GitOps is a set of practices that combine the benefits of declarative infrastructure as code and version control. The primary idea is to use Git repositories as a single source of truth for both application code and infrastructure configuration. This approach brings several advantages to the table, such as:
+
+- **Declarative Configuration:** Describe the desired state of your infrastructure and applications in a declarative manner, making it easier to understand and manage.
+
+- **Version Control:** Leverage Git's version control capabilities to track changes, roll back to previous states, and collaborate seamlessly across teams.
+
+- **Continuous Delivery:** Automate deployments by using Git as the trigger for CI/CD pipelines, ensuring a consistent and reproducible process.
+
+- **Operational Efficiency:** GitOps minimizes manual interventions by relying on automated processes, reducing the risk of human errors and improving operational efficiency.
+
+## Core Principles of GitOps
+
+### Declarative Configuration
+
+In GitOps, the entire infrastructure and application stack is defined declaratively. This means specifying the desired end state rather than prescribing a sequence of steps to reach that state. Artifacts such as Kubernetes manifests, Helm charts, or custom YAML files serve as the declarative configuration, making the system easy to understand and manage.
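+
+As a minimal illustration of declarative configuration (the app name and image below are hypothetical), a Kubernetes Deployment manifest states the desired end state, and the cluster's controllers continuously work to make reality match it:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: example-app            # hypothetical application name
+spec:
+  replicas: 3                  # desired state: three pods, not "run a scale command three times"
+  selector:
+    matchLabels:
+      app: example-app
+  template:
+    metadata:
+      labels:
+        app: example-app
+    spec:
+      containers:
+      - name: example-app
+        image: registry.example.com/example-app:1.0.0  # hypothetical image
+```
+
+Committing a change to `replicas` in Git, rather than editing the live cluster, is what keeps the repository the single source of truth.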
+
+### Version Control
+
+Git is at the heart of GitOps. Every change, whether it's a modification to infrastructure configuration or an update to application code, is committed and versioned. This not only provides an audit trail but also enables rollbacks to previous states in case of issues, offering a safety net for operations.
+
+### Automation
+
+Automation is a key enabler of GitOps. Continuous Integration (CI) pipelines automatically build, test, and package applications, while Continuous Delivery (CD) pipelines use Git as a trigger to deploy changes to the target environment. This ensures consistency, repeatability, and traceability throughout the development lifecycle.
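+
+As a sketch of the CI side (shown here as a hypothetical GitHub Actions workflow; the file path, job name, and `make test` target are illustrative), every push to the main branch is built and tested automatically, so merges become the deployment trigger:
+
+```yaml
+# .github/workflows/ci.yaml (illustrative)
+name: ci
+on:
+  push:
+    branches: [main]
+jobs:
+  build-test:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Run tests
+        run: make test     # assumes the repository provides a `make test` target
+```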
+
+### Observability
+
+GitOps encourages a robust observability practice. By integrating monitoring, logging, and alerting into the deployment process, teams gain insights into the health and performance of applications. This proactive approach allows for quick detection and resolution of issues.
+
+## Best Practices
+
+1. **Infrastructure as Code (IaC):** Treat infrastructure as code, using tools like Terraform or Kubernetes manifests. This ensures that changes are versioned, reviewed, and applied consistently.
+
+2. **Git Workflow:** Adopt a Git branching strategy that aligns with your release and deployment strategy. Consider feature branches for development, main branches for production releases, and tags for versioning.
+
+3. **Automated Testing:** Implement automated testing at various stages of the pipeline, including unit tests for application code and integration tests for infrastructure changes. This helps catch issues early in the development process.
+
+4. **Secrets Management:** Use tools or practices for secure storage and management of sensitive information such as API keys and passwords. Avoid storing secrets directly in the Git repository.
+
+5. **Rollback Strategies:** Plan and document rollback strategies in case of failed deployments. GitOps allows for easy rollbacks by reverting to a previous commit or tag.
+
+6. **Immutable Infrastructure:** Aim for immutable infrastructure by rebuilding and redeploying entire environments for updates. This ensures consistency and reduces the likelihood of configuration drift.
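+
+As a sketch of the rollback practice above (the repository name and file are illustrative), a GitOps rollback can be as simple as reverting the offending commit; the CD tool then reconciles the cluster back to the previous state, and the audit trail stays intact:
+
+```shell
+set -e
+git init -q demo && cd demo
+git config user.email ci@example.com && git config user.name ci
+echo "replicas: 3" > values.yaml && git add . && git commit -qm "good: 3 replicas"
+echo "replicas: 300" > values.yaml && git commit -qam "bad: 300 replicas"
+git revert --no-edit HEAD   # creates a new commit restoring the previous state
+cat values.yaml             # back to "replicas: 3"
+```
+
+Because the revert is itself a commit, the mistake and its correction are both visible in history, which is preferable to force-pushing the bad commit away.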
+
+## Conclusion
+
+GitOps brings a paradigm shift to Kubernetes operations by promoting collaboration, automation, and version control. By leveraging Git as the source of truth, teams can enhance visibility, traceability, and reliability in their development and deployment processes. Embrace GitOps practices to navigate the complexities of Kubernetes with confidence and agility. Happy deploying!
diff --git a/docs/blog/platform-wars.md b/docs/blog/platform-wars.md
new file mode 100644
index 0000000..78833c0
--- /dev/null
+++ b/docs/blog/platform-wars.md
@@ -0,0 +1,60 @@
+# PaaS vs DIY Kubernetes
+
+In the current landscape of Kubernetes (k8s), there's a common misconception about whether Kubernetes is a Platform-as-a-Service (PaaS). Kubernetes' own documentation clarifies that it is not a traditional, all-inclusive PaaS system. However, this nuance doesn't make it a drawback; rather, it highlights an important distinction. As Kelsey Hightower puts it, "Kubernetes is for people building platforms."
+
+## Kubernetes Overview
+
+Kubernetes is an open-source platform that offers shared services across clusters or nodes, such as scaling, load balancing, or monitoring. Unlike traditional PaaS systems, Kubernetes operates at the container level, providing developers with flexibility in choosing services, resources, and tools. If an application can be containerized, it can run on k8s.
+
+### What Kubernetes Does
+
+- Provides flexibility for app developers.
+- Operates at the container level, allowing a diverse set of workflows.
+- Does not limit the types of applications supported.
+
+### What Kubernetes Doesn't Do
+
+- Does not provide services across applications, like databases or storage systems.
+- Does not dictate a configuration language, leaving it open for diverse workflows.
+
+### DIY Kubernetes Pros
+
+- More out-of-the-box solutions available.
+- More flexibility for app developers.
+- Ease of containerized application migration.
+
+### DIY Kubernetes Cons
+
+- More required of app teams.
+- More complex for simple applications.
+- Challenges converting legacy applications.
+- Flexibility drives compliance complexities.
+
+## Platform-as-a-Service (PaaS)
+
+A good PaaS decouples application development and deployment from platform operations, allowing for increased focus on Day 2 operations, improved performance capabilities, and streamlined billing. Cloud Foundry is an example of an open-source PaaS project that, while no longer recommended, provides a frame of reference for what a PaaS should be.
+
+### PaaS Pros
+
+- Provides cross-app services.
+- Decouples platform and app development.
+- Ease of multi-cloud and hybrid approach.
+- Structure and opinionation drive simplified controls compliance.
+
+### PaaS Cons
+
+- More upfront cost.
+- Less flexibility for app teams.
+- Larger infrastructure resource requirements.
+
+## PaaS or K8s?
+
+Choosing between PaaS and raw Kubernetes depends on organizational needs, infrastructure ownership, and technical competence. Over time, teams using raw k8s tend to build internal structures and opinionation, resembling an internal PaaS. The decision may involve weighing the cost-effectiveness of building in-house solutions versus adopting vendor solutions.
+
+### Considerations
+
+- **Lock-in:** In-house solutions also exhibit lock-in, and it's essential to evaluate various forms of lock-in.
+- **Costs:** Upfront costs for PaaS may be higher, but it alleviates the burden on application developers.
+- **Flexibility:** Kubernetes provides more flexibility but also more responsibility.
+
+There's no definitive answer; the choice depends on the organization's goals and capabilities. Whether PaaS or DIY Kubernetes, the key is to achieve the desired capabilities for successful application deployment.
\ No newline at end of file
diff --git a/docs/blog/sample.md b/docs/blog/sample.md
deleted file mode 100644
index 6057217..0000000
--- a/docs/blog/sample.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-hide:
-- toc
----
-
-# Sample
-This is a sample blog page.
\ No newline at end of file
diff --git a/docs/blog/users.md b/docs/blog/users.md
new file mode 100644
index 0000000..59f7039
--- /dev/null
+++ b/docs/blog/users.md
@@ -0,0 +1,27 @@
+# Rise8 Continuous Delivery Approach
+
+At **Rise8**, our mission revolves around achieving continuous delivery of impactful software that users love.
+
+## Continuous Delivery: The Feedback Loop
+
+Continuous delivery forms the feedback loop essential for achieving small batch sizes and iterating towards impactful software that brings joy to users. This process is carried out within a balanced team where the product manager practices lean enterprise, and UX focuses on user-centered design.
+
+## Continuous Delivery First Approach
+
+While numerous measures can be taken to mitigate risks, the ultimate validation point is the production environment. Hence, we adopt a continuous delivery first approach to establish learning feedback loops.
+
+## Leveraging DORA Insights
+
+We highly value the insights provided by **DORA (DevOps Research and Assessment)** in identifying the key factors for achieving continuous delivery. We not only adhere to the DORA top five but also incorporate the other crucial capabilities they've identified as contributors, including automated testing, clean code, loose coupling, monitoring, trunk-based development, and deployment automation.
+
+## High Compliance Spaces: Ensuring Security and Best Practices
+
+In high compliance spaces, we are pioneers in ensuring that deployment automation encompasses security compliance and adheres to release engineering best practices. This is particularly crucial due to the sensitivity and complexity of deployment environments.
+
+## Balancing Capability and Security
+
+Emphasizing both capability and security, we want to make it clear that continuous delivery doesn't necessitate accepting more risk; in fact, it actively reduces risk.
+
+## Effective Communication for Stakeholder Buy-In
+
+Our team possesses the expertise to communicate these principles effectively to both customers and stakeholders. This ensures the necessary buy-in to complete the feedback loops that unlock continuous delivery.
diff --git a/docs/practice-playbook.md b/docs/general-engineering/practice-playbook.md
similarity index 100%
rename from docs/practice-playbook.md
rename to docs/general-engineering/practice-playbook.md
diff --git a/docs/index.md b/docs/index.md
index 55fcc53..607285c 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,7 +1,4 @@
---
-hide:
- - navigation
- - toc
+title: Title
+template: home.html
---
-# Home
-Welcome to Engineering!
diff --git a/docs/platform-engineering/methodologies/general-best-practices.md b/docs/platform-engineering/methodologies/general-best-practices.md
new file mode 100644
index 0000000..79f1857
--- /dev/null
+++ b/docs/platform-engineering/methodologies/general-best-practices.md
@@ -0,0 +1,54 @@
+# Platform Playbook
+
+Platform engineering involves designing, building, and maintaining the infrastructure and tools that enable software development and deployment. Adopting best practices in platform engineering ensures a stable, scalable, and efficient environment for development teams. Here are some detailed best practices:
+
+## 1. Infrastructure as Code (IaC):
+ - Use IaC tools like Terraform or Ansible to define and manage infrastructure.
+ - Version control your IaC scripts to track changes and enable collaboration.
+ - Implement a modular structure for IaC to promote reusability and maintainability.
+
+## 2. Containerization:
+ - Containerize applications using technologies like Docker for consistency across environments.
+ - Use orchestration tools such as Kubernetes for automated deployment, scaling, and management of containers.
+ - Optimize container images for size and security.
+
+## 3. Continuous Integration and Continuous Deployment (CI/CD):
+ - Implement CI/CD pipelines to automate testing, building, and deployment processes.
+ - Include automated testing at various stages to catch issues early.
+ - Use feature flags to enable gradual and safe feature rollouts.
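+
+Feature flags need not require a dedicated service to get started; a minimal sketch (the flag name below is hypothetical) is an environment-variable check that lets you deploy code dark and enable it gradually:
+
+```python
+import os
+
+def feature_enabled(name: str, default: bool = False) -> bool:
+    """Read a feature flag such as FEATURE_NEW_CHECKOUT=1 from the environment."""
+    raw = os.environ.get(f"FEATURE_{name.upper()}", "1" if default else "0")
+    return raw.strip().lower() in {"1", "true", "yes", "on"}
+
+# Toggle a hypothetical flag and branch on it.
+os.environ["FEATURE_NEW_CHECKOUT"] = "true"
+if feature_enabled("new_checkout"):
+    print("new checkout flow")
+else:
+    print("legacy checkout flow")
+```
+
+Dedicated flag services add per-user targeting and kill switches, but the contract is the same: deployment and release become separate decisions.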
+
+## 4. Monitoring and Logging:
+ - Establish comprehensive monitoring for applications and infrastructure.
+ - Utilize centralized logging to gather and analyze logs for troubleshooting.
+ - Implement alerting systems to detect and respond to issues proactively.
+
+## 5. Scalability:
+ - Design systems to scale horizontally by adding more instances.
+ - Use auto-scaling groups to automatically adjust resources based on demand.
+ - Regularly perform load testing to identify potential bottlenecks.
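+
+Horizontal scaling can itself be declared rather than scripted; a sketch of a Kubernetes HorizontalPodAutoscaler (the target Deployment name is hypothetical) that adds pods when average CPU utilization passes a threshold:
+
+```yaml
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: example-app
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: example-app        # hypothetical Deployment to scale
+  minReplicas: 3
+  maxReplicas: 10
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 70   # scale out when average CPU exceeds 70%
+```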
+
+## 6. Security:
+ - Implement security best practices for infrastructure and applications.
+ - Regularly update dependencies and conduct security audits.
+ - Enforce least privilege access controls and regularly rotate credentials.
+
+## 7. Documentation:
+ - Maintain comprehensive documentation for infrastructure, deployment processes, and configurations.
+ - Keep documentation up-to-date to facilitate knowledge sharing and onboarding.
+
+## 8. High Availability (HA):
+ - Design systems with redundancy to ensure availability in case of failures.
+ - Distribute applications across multiple availability zones or regions.
+ - Test and simulate failure scenarios to validate HA configurations.
+
+## 9. Collaboration and Communication:
+ - Foster collaboration between development, operations, and other teams.
+ - Use collaboration tools and platforms for effective communication.
+ - Conduct regular cross-functional meetings to align goals and address challenges.
+
+## 10. Performance Optimization:
+ - Regularly assess and optimize the performance of both infrastructure and applications.
+ - Use caching mechanisms, content delivery networks (CDNs), and other optimization techniques.
+ - Monitor and optimize database queries for efficiency.
+
+By adhering to these platform engineering best practices, teams can create a robust and efficient environment that supports the continuous delivery of high-quality software.
diff --git a/docs/platform-engineering/practicals/argocd-examples.md b/docs/platform-engineering/practicals/argocd-examples.md
new file mode 100644
index 0000000..5aa4ec0
--- /dev/null
+++ b/docs/platform-engineering/practicals/argocd-examples.md
@@ -0,0 +1,87 @@
+# ArgoCD: Application of ApplicationSets
+
+One application of ApplicationSets in ArgoCD is to efficiently manage and deploy similar applications or configurations across multiple clusters or namespaces. Here's a specific example:
+
+## **Scenario**
+
+Imagine you have a microservices architecture, and you need to deploy the same application stack to multiple namespaces within a Kubernetes cluster. Each namespace may represent a different environment, such as development, testing, and production.
+
+## **ApplicationSets Implementation**
+
+### 1. **Generator:**
+   - Define a generator that produces the parameters for each application, such as the target namespace, from a list or a set of rules.
+
+ ```yaml
+   generators:
+   - list:
+       elements:
+       - namespace: dev
+       - namespace: test
+       - namespace: prod
+ ```
+
+### 2. **Template:**
+
+ - Create a template specifying the common configuration for your application. This includes the source repository, target revision, and destination settings.
+
+ ```yaml
+   template:
+     metadata:
+       name: 'my-app-{{.namespace}}'
+       labels:
+         app.kubernetes.io/name: 'my-app-{{.namespace}}'
+     spec:
+       project: default
+       source:
+         repoURL: 'https://github.com/example/repo'
+         targetRevision: HEAD
+         path: .
+       destination:
+         namespace: '{{.namespace}}'
+         server: 'https://kubernetes.default.svc'
+ ```
+
+### 3. **ApplicationSet Manifest:**
+
+- Apply the ApplicationSet manifest that defines the generators and template.
+
+ ```yaml
+  apiVersion: argoproj.io/v1alpha1
+  kind: ApplicationSet
+  metadata:
+    name: my-app-set
+  spec:
+    goTemplate: true
+    generators:
+    - list:
+        elements:
+        - namespace: dev
+        - namespace: test
+        - namespace: prod
+    template:
+      metadata:
+        name: 'my-app-{{.namespace}}'
+        labels:
+          app.kubernetes.io/part-of: my-app-set
+      spec:
+        project: default
+        source:
+          repoURL: 'https://github.com/example/repo'
+          targetRevision: HEAD
+          path: .
+        destination:
+          namespace: '{{.namespace}}'
+          server: 'https://kubernetes.default.svc'
+ ```
+
+## **Result**
+
+- ArgoCD will dynamically generate and deploy three instances of the application, each to a different namespace (dev, test, prod).
+- The common configuration specified in the template ensures consistency across all instances.
+- Changes made to the ApplicationSet manifest automatically reflect in the generated applications, allowing for easy scaling and maintenance.
+
+## **Use Cases**
+
+### 1. **Scalable Deployments:**
+ - Easily scale deployments across different namespaces or clusters without manually managing each application.
+
+### 2. **Environment Isolation:**
+ - Isolate configurations for different environments, ensuring separation and consistency.
+
+### 3. **Efficient Management:**
+ - Streamline the deployment of similar applications with minimal manual intervention.
+
+ApplicationSets in ArgoCD provide a powerful mechanism for handling repetitive deployment scenarios and managing configurations at scale.
diff --git a/docs/software-engineering/practicals/create-api-example.md b/docs/software-engineering/practicals/create-api-example.md
new file mode 100644
index 0000000..9d403de
--- /dev/null
+++ b/docs/software-engineering/practicals/create-api-example.md
@@ -0,0 +1,85 @@
+# Writing a Simple API with Flask (Python)
+
+## Introduction
+
+In this tutorial, we'll walk through the process of creating a simple RESTful API using Python and the Flask web framework. Flask is a lightweight and easy-to-use framework for building web applications, including APIs.
+
+### Prerequisites
+
+Before you begin, make sure you have the following installed:
+
+- Python ([Python Official Website](https://www.python.org/))
+- Flask (`pip install Flask`)
+
+## Step 1: Setting Up the Project
+
+Create a new directory for your project and navigate into it.
+
+```bash
+mkdir flask-api-tutorial
+cd flask-api-tutorial
+```
+
+## Step 2: Creating a Virtual Environment
+It's good practice to use a virtual environment to isolate your project's dependencies. Create a virtual environment using the following commands:
+
+```bash
+python -m venv venv
+```
+
+Activate the virtual environment:
+
+```bash
+# On macOS/Linux
+source venv/bin/activate
+# On Windows
+venv\Scripts\activate
+```
+
+This step ensures that your project has a dedicated environment for its dependencies, minimizing conflicts and ensuring consistency across different projects.
+
+## Step 3: Installing Flask
+Install Flask within the virtual environment:
+
+```bash
+pip install Flask
+```
+
+## Step 4: Writing the API Code
+Create a file named `app.py` in your project directory and open it in a text editor. Add the following code:
+
+```python
+from flask import Flask, jsonify
+
+app = Flask(__name__)
+
+@app.route('/api', methods=['GET'])
+def get_data():
+ data = {'message': 'Hello, API!'}
+ return jsonify(data)
+
+if __name__ == '__main__':
+ app.run(debug=True)
+```
+
+This code sets up a basic Flask application with a single endpoint (`/api`) that returns a JSON response.
+
+## Step 5: Running the API
+In the terminal, run the Flask application:
+
+```bash
+python app.py
+```
+
+Visit `http://127.0.0.1:5000/api` in your browser, or use a tool like curl or Postman to make a GET request:
+
+```bash
+curl http://127.0.0.1:5000/api
+```
+
+You should receive a JSON response: `{"message": "Hello, API!"}`
+
+## Conclusion
+
+Congratulations! You've successfully created a simple API using Flask. This is just a starting point, and you can expand and enhance your API by adding more routes, handling different HTTP methods, and integrating with databases.
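+
+As one possible next step (the route and in-memory store here are illustrative, not part of the tutorial's `app.py`), you could handle POST as well as GET on a new endpoint:
+
+```python
+from flask import Flask, jsonify, request
+
+app = Flask(__name__)
+items = []  # in-memory store, for illustration only
+
+@app.route('/api/items', methods=['GET', 'POST'])
+def items_endpoint():
+    if request.method == 'POST':
+        item = request.get_json()   # expects a JSON request body
+        items.append(item)
+        return jsonify(item), 201   # 201 Created for successful POSTs
+    return jsonify(items)
+```
+
+Flask's built-in test client lets you exercise routes like this without running a server, which fits neatly into the automated testing practices discussed elsewhere in these docs.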
+
+Explore the [Flask documentation](https://flask.palletsprojects.com/) for more advanced features and best practices.
+
+Feel free to adapt this tutorial to other frameworks or languages as needed.
\ No newline at end of file
diff --git a/examples/general-engineering/practice-playbook.md b/examples/general-engineering/practice-playbook.md
new file mode 100644
index 0000000..ffcff95
--- /dev/null
+++ b/examples/general-engineering/practice-playbook.md
@@ -0,0 +1,364 @@
+---
+hide:
+ - navigation
+---
+# Engineering Practice Playbook
+
+## Preamble
+This document contains our opinions on software development. We understand that it is not always possible to hold to some of these standards. We trust in the autonomy of each of our engineers to respond to any given situation.
+
+## What is an Engineer?
+---
+
+An engineer on a balanced team is responsible for the technical delivery of a product to the customer. They focus their time on building a secure, reliable, scalable, and maintainable product. The engineer brings a unique perspective to the team as they best understand the amount of work needed to build features. They also understand the impact technical debt can have on velocity. They work hand in hand with the product manager to buy down risks through backlog prioritization. The engineer also works with the designer to execute a design system and tease out technical pain points from the user. The engineer works with operations to optimize product delivery and support.
+
+> “A team management philosophy that has people with a variety of skills and perspectives that support each other towards a shared goal.” - balancedteam.org
+
+## Foundation
+**Agile Manifesto Principles**
+
+1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
+2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
+3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
+4. Business people and developers must work together daily throughout the project.
+5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
+6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
+7. Working software is the primary measure of progress.
+8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
+9. Continuous attention to technical excellence and good design enhances agility.
+10. Simplicity – the art of maximizing the amount of work not done – is essential.
+11. The best architectures, requirements, and designs emerge from self-organizing teams.
+12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
+
+### Rise8 Takes
+- Enablement is a primary skillset we practice here at Rise8: not only building the product, but also building the team's skillset so they can continue product development.
+- Pair engineers with projects they love.
+- Offer opportunities for engineers to grow and expand.
+- Trust allows agile teams to communicate quickly and respond rapidly to changes as they emerge. Without sufficient trust, team members can waste effort and energy hoarding information, forming cliques, dodging blame, and covering their tracks.
+- Trust your team is making the best decisions with the information known at the moment, with or without your presence. You and your team have a common goal, there is more than one way to reach it.
+- Technical facts and data overrule opinions and personal preferences.
+- Use best practices and design patterns unless justified.
+- Adhere to the team's code contract for styling.
+
+
+## Discovery
+---
+
+### Discovery and Framing
+D&F is a team effort where Product Manager and UI/UX roles will contribute significantly.
+
+### Build vs Buy Analysis
+Engineers evaluate existing FOSS, Commercial, and Government Off the Shelf. Below are some helpful questions to get started in exploring your options and their potential return on investment.
+
+[todo]: # "Separate into groups (Build & Purchase/Use)"
+
+- Is there an offering that sufficiently meets the team's requirements?
+- How do the build, operate, maintain, and upgrade costs compare to the buy and licensing costs?
+- Do we have the expertise to build?
+- What is the learning curve/developer experience of the commercial products?
+- What are the security requirements?
+- Will new features be required, and can they be added to the buy option?
+- How much granular control of the system is necessary?
+- Will the buy/FOSS option be maintained long term?
+- Is the offering well documented, and does it provide a satisfactory user experience?
+- What is the time to market?
+
+
+### Technology Stack
+**We choose the right Tech stack for the problem space.**
+**Here are a few things to consider when selecting tools and technologies**
+
+1. Is training necessary? What is the learning curve? How much documentation is available? Is it good documentation?
+1. Are the skills and knowledge required common, or is the technology very niche?
+1. Is the technology mature enough to adopt?
+1. What are the costs?
+ - compute cost of a low level language
+ - engineering wage difference between one language to another
+ - tooling
+1. Is there support for the technology within the current continuous integration process?
+1. Can the technology be deployed to all environments?
+1. Can the technology be managed in all environments?
+1. Is the technology stack meeting security criteria and project constraints?
+1. Do the technology stack's performance, reliability, and maintainability satisfy the product's requirements?
+1. Can the technology stack scale?
+
+
+## Building an MVP
+---
+Proof of Concept --> Prototype --> MVP
+
+### Proof of Concept
+The Proof of Concept is for quick technical discovery, learning information, and empowering decision-making. It may consist of pseudo-code, code fragments, and/or diagrams that depict how systems communicate or how UIs interact. The outcome should validate and verify the concept, and it becomes the reference for the prototype.
+
+Should we use best practices when building a proof of concept?
+ - Not required, but encouraged
+ - Use as needed to explain the proposed concept
+
+### Prototype
+The goal of a Prototype is to demo limited functionality to end users in an ideal/sandbox environment and help teams evaluate risk. This becomes a candidate for the initial MVP. Code should follow best practices unless it would be a severe time sink to implement.
+
+Should we use best practices when building a prototype?
+- Should try to use best practices, unless severe time sink
+
+What types of outcomes/information should the prototype produce?
+- Technical discovery
+- User traction/Customer feedback
+
+### MVP
+An MVP builds on a Prototype by adding functionality, error handling, and integration with a production environment.
+
+What defines an MVP?
+- Full functionality
+- Meets all acceptance criteria
+
+Should we use best practices when building an MVP?
+- Best practices are standardized and required
+
+
+## Systems Design and Architecture
+---
+
+### Architecture
+We will use `SPIKES` in the issue tracking system to document decisions that impact structure, non-functional characteristics, dependencies, interfaces, or construction techniques. A spike should be short and capture the specific context around the decision. A spike will have a Title, Status, Context, Decision, and Consequences section. Title spikes with short noun phrases such as "SPIKE: Caching with Redis". Status can be derived from the state of the issue in the issue tracking system. Context documents the technical and business forces at play; verbiage should be value-neutral. Decision documents why and how we chose to respond to those forces. Finally, Consequences documents any risks involved with the decision.
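+
+A spike record might look like this (the caching decision and its details are a hypothetical example):
+
+```markdown
+# SPIKE: Caching with Redis
+
+**Status:** Accepted
+
+**Context:** API response times degrade under load; most requests re-read
+rarely-changing reference data from the database.
+
+**Decision:** Introduce Redis as a read-through cache for reference data.
+
+**Consequences:** Adds an operational dependency; cache invalidation logic
+must be tested alongside the features that rely on it.
+```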
+
+It is good to keep a record of reversed decisions and why they were reversed. In long-running projects, it is common for old, failed decisions to resurface when the historical knowledge is lost.
+
+### Design Patterns and Best Practices
+We rely heavily on existing best practices and design patterns, both for their proven capabilities and because they provide a common, well-known means of solving a problem. Patterns and practices make it easier for engineers to move between projects. However, there may be times we need to deviate, such as when a pattern leads to decreased readability, maintainability, scalability, and/or performance. Note that a performance issue in and of itself is typically not enough to justify a deviation.
+
+[todo]: # "Add resources for existing design patterns"
+
+
+### Technical Debt
+Technical debt can be defined as aspects of our code that will slow down future development. Debt can be intentional or unintentional but must be managed. Incurring too much technical debt can lead to a reduction in productivity, maintainability and testability which in turn leads to unhappy employees, decreased organizational performance, and lack of business outcomes. Engineers are responsible for making technical debt visible. Here are a few ways to mitigate and manage technical debt in your products:
+
+1. Keep a log of debt on your project for future conversations
+1. Discuss during backlog grooming
+1. Establish coding and documentation standards
+1. Familiarize yourself with common design & architecture patterns
+1. Be aware of new technologies
+
+## Ceremonies
+---
+### Iterative Planning Meeting (IPM)
+[todo]: # "Align with PMs"
+The IPM selects the work that will be done in the next cycle, typically a 1-2 week sprint. We recommend targeting work as follows.
+**Target ranges**
+
+- 30% - 50% Feature
+- 15% - 30% Innovation/Tech debt, sometimes called chores
+- 5% - 20% Bugs
+
+When conducting an IPM, the team will address:
+**Acceptance Criteria**
+Engineers can help the team by reviewing acceptance criteria before the sprint begins. The acceptance criteria should be clear and leave no room for interpretation.
+
+**Story Pointing**
+Engineers can help the team point stories by estimating the amount and complexity of the work involved. Since engineers understand what it takes to fulfill a requirement, they can ensure that stories are granular and right-sized.
+
+
+[TODO]: # "define feature, bug Innovation/tech debt"
+
+**Feature**
+A Feature is something that provides new capabilities or improves end user experience. A Feature will often have a story that reads something like this. As a: xxx, I want: xxx, So That xxx. A feature should also have an acceptance criteria or definition of done.
+
+**Innovation / Refactoring**
+Innovation is proactive tech debt management. Innovation work is time spent incorporating **new** libraries, patterns, or services to make the code base easier to maintain, read, secure, and scale, or to add capabilities. Innovation work should be closely evaluated to ensure that it provides a return on investment. Avoid innovation for innovation's sake; there must be a clearly definable advantage.
+
+Refactoring is an opportunity to drive down existing technical debt, optimize, and re-architect the codebase. Refactoring keeps code simple, decoupled, easily read, and painlessly scaled. Engineers often complain about old programming languages as if the language is the root problem, when the real problem is old, messy spaghetti code.
+
+1. Knowledge sharing (both domain and technical knowledge)
+2. Immediate code reviews
+3. Improved interpersonal communication
+4. Reduction in code defects
+
+
+**Bug**
+Any work done to correct unexpected behaviors or faults that are inconsistent with the intended behavior of the code.
+
+**NOTE**
+Security is a fundamental part of software development and, as such, can fit into any of the three categories as needed. However, you may want to create a separate category for security work; this is common in high-compliance environments.
+
+### Standup
+A quick 10-15 minute meeting, typically held at the beginning of the day. Team members give a **few** sentences on what they accomplished yesterday, what they plan to do today, and any blockers they may have. If greater detail is required, coordinate a discussion with the relevant team members after standup.
+
+### Retro
+A meeting to reflect on the past work cycle and identify what worked, what didn't, and any actions to take going forward. Release some stress while looking forward to making the next work cycle better. This is also a good time to recognize your team members' accomplishments.
+
+## Pair Programming
+We believe there is great value in pair programming and advocate it as the first option. Pairing helps train inexperienced devs, allows for the propagation of tips and techniques, and provides accountability.
+
+Pair programming is a development technique where two developers author software using the same computer. In person, the computer is outfitted with two keyboards, two monitors, and two mice. In a remote environment, one developer can share their screen with the other via collaboration software such as [Zoom](https://zoom.us/) or Live Share. There are two roles in pair programming:
+
+- **Driver:** The person who is writing the code.
+- **Navigator:** Helps the driver navigate the code development process. They can contribute code in the form of suggestions or corrections.
+
+Here are a few helpful hints when pairing:
+
+1. Pairing can be tiring; take breaks often
+2. Rotate pairs regularly. Each person brings something unique to the table which will improve the codebase as a whole. Swapping pairs also drives both knowledge sharing and alignment across the team.
+3. Be open to new ideas and constructive criticism
+4. Sometimes pairing might not be the best approach. Feel free to solo when it makes sense. But remember, committed code requires a peer review.
+
+## Development
+---
+### Test Driven Development
+Test Driven Development (TDD) is a software development practice. The process starts with authoring a failing test and then implementing the functionality required for the test to succeed. Often referred to as "red-green-refactor," it consists of three distinct steps:
+
+1. Author a failing test
+2. Author just enough code for test to pass
+3. Refactor
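The cycle above can be sketched with a hypothetical `price_with_tax` helper (the function name and rounding rule are illustrative, not from any particular codebase):

```python
# 1. Red: author the test first -- it fails because price_with_tax
#    does not exist yet at the moment the test is written.
def test_price_with_tax():
    assert price_with_tax(100.0, 0.07) == 107.0

# 2. Green: write just enough code for the test to pass.
def price_with_tax(net: float, rate: float) -> float:
    """Gross price, rounded to cents."""
    return round(net * (1 + rate), 2)

# 3. Refactor: clean up naming or duplication, re-running the
#    test after each change as a safety net.
test_price_with_tax()
```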
+
+### Code Review
+The primary purpose of code review is to make sure that the overall health of the project's codebase improves over time. To achieve that, a series of trade-offs must be balanced.
+
+First, developers must be able to _make progress_ on their tasks. If you never submit an improvement to the codebase, then the codebase never improves. Also, if a reviewer makes it very difficult for _any_ change to go in, developers are disincentivized from making improvements in the future.
+
+Second, the reviewer must ensure that each merge request is of such a quality that the codebase's overall health is not decreasing as time goes on. This can be tricky because codebases degrade through small decreases in code health over time, especially when a team is under significant time constraints and feels that it has to take shortcuts to accomplish its goals.
+
+A reviewer has ownership and responsibility for the code they are reviewing. They want to ensure that the codebase stays consistent and maintainable.
+
+Thus, we get the following rule as the standard we expect in code reviews:
+
+In general, reviewers should favor approving a merge request once it is in a state where it improves the overall code health of the system being worked on, even if the merge request isn't perfect.
+
+There are limitations to this, of course. For example, if a merge request adds a feature that the reviewer doesn't want in their system, then the reviewer can certainly deny approval even if the code is well-designed.
+
+A key point here is that there is no such thing as "perfect" code—there is only _better_ code. Reviewers should not require the author to polish every tiny piece of a merge request before approving. Instead, the reviewer should balance out the need to make forward progress compared to the importance of the changes they are suggesting. Instead of seeking perfection, what a reviewer should seek is _continuous improvement_. A merge request that improves the maintainability, readability and understandability of the system shouldn't be delayed because it isn't "perfect."
+
+Reviewers should _always_ feel free to leave comments expressing that something could be better, but if it's not very important, prefix it with something like "Nit:" to let the author know that it's just a point of polish that they could choose to ignore (Nit means nit-pick).
+
+Note: Nothing in this document justifies checking in merge requests that _worsen_ the system's overall code health. The only time you would do that would be in an emergency.
+
+- Aspects of software design are seldom a pure style issue or just a personal preference. They are based on underlying principles and should be weighed on those principles, not simply by subjective opinion. Sometimes there are a few valid options. If the author can demonstrate (either through data or based on solid engineering principles) that several approaches are equally good, the reviewer should accept the author's preference. Otherwise, the choice is dictated by standard principles of software design.
+- If no other rule applies, then the reviewer may ask the author to be consistent with the current codebase, as long as that doesn't worsen the system's overall code health.
+- On matters of style, the style guide is the absolute authority. Any purely style point (whitespace, etc.) not in the style guide is a personal preference. The style should be consistent with what is there. If there is no previous style, accept the author's style.
+
+
+**An opportunity for sharing knowledge**
+Code reviews can be an essential function for teaching developers something new about a language, a framework, or general software design principles. It's always OK to leave comments that help a developer learn something new. Sharing knowledge is part of improving the code health of a system over time. Just keep in mind that if your comment is purely educational but not critical to meeting the standards described in this document, prefix it with "Nit:" or otherwise indicate that the author doesn't need to resolve it in this merge request.
+
+
+**Resolving Conflicts**
+In any conflict on a code review, the first step should always be for the developer and reviewer to reach an agreement.
+
+When coming to consensus becomes especially difficult, it can help to have a face-to-face meeting or a video conference between the reviewer and the author, instead of just trying to resolve the conflict through code review comments. (If you do this, though, make sure to record the discussion results as a comment on the merge request for future readers.)
+
+If that doesn't resolve the situation, the most common way to resolve it would be to escalate. Often the escalation path is to a broader team discussion, having a Technical Lead weigh in, asking for a decision from a maintainer of the code, or asking an Eng Manager to help.
+
+Don't let a merge request sit around because the author and the reviewer can't agree.
+
+_This section was derived, with modifications, from [Google Engineering Practices Documentation](https://github.com/google/eng-practices)_
+
+## Git ops
+---
+Git is today's standard for source control.
+### HOOKS
+We strongly encourage the use of commit hooks to further ensure code quality. These hooks can range from enforcing commit formats to running unit tests and may be left up to the team to decide.
+
+### COMMIT MESSAGES
+Commit messages should be a brief, concise description, in the imperative mood, of what the commit adds, with the appropriate authors (alternating authors or using tools such as git-together) and the ID of the corresponding story. Using an industry standard such as [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/#summary) is not required, but teams may choose to follow any given industry standard.
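For example, a commit message following this guidance might look like the following (the story ID and co-author are illustrative):

```text
feat: add pagination to order history [#1234]

Co-authored-by: Jane Doe <jane@example.com>
```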
+
+### MERGE REQUEST COMMENTS
+We encourage comments, suggestions, questions, and discussion on MRs. Strong opinions, loosely held, lead to better resulting code.
+
+### REBASE
+We encourage squashing and rebasing to preserve the cleanliness and readability of the git history on the main branch. This should only be performed by an engineer who understands the rebasing process, in order to avoid causing irreparable damage to the main branch. If done correctly, there should be no explicit merge commits.
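The flow can be sketched in a throwaway repository (branch and commit names are illustrative; the non-interactive `git reset --soft` stands in for an interactive `git rebase -i` squash):

```shell
set -eu
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "chore: initial commit"

# Work happens on a short-lived feature branch.
git checkout -q -b 123-add-login
git commit -q --allow-empty -m "wip: scaffolding"
git commit -q --allow-empty -m "wip: form markup"

# Squash the branch's commits into a single commit on top of main.
git reset -q --soft main
git commit -q --allow-empty -m "feat: add login form [#123]"

# Fast-forward main: no explicit merge commit is created.
git checkout -q main
git merge -q --ff-only 123-add-login
git branch -q -d 123-add-login
git log --oneline
```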
+
+NOTE: Certain technologies (e.g., GitLab) create merge commits by default. This behavior can be changed.
+
+### BRANCHING
+Trunk-based development or short-lived feature branching is preferred.
+
+Branch naming should be clear and concise. We recommend the convention of including the story ID followed by a few words describing the branch's purpose, using dashes (-) as the delimiter (e.g., `123-add-user-auth`).
+
+Clean up and remove branches after the merge completes. Developers should not let inactive or stale branches linger; abandoned branches should be removed to avoid polluting the repository.
+
+
+## CI/CD Pipeline
+---
+**Continuous Integration**
+We believe CI begins in the local development environment. This includes the tools to run automated tests, linting, and other checks on branches **BEFORE** you merge up.
+
+We believe CI is non-negotiable and must begin at the initial conception of development to ensure comprehensive software security, testing, and fast feedback on the main branch's health. Furthermore, it empowers the team to hold to agile practices.
+
+**CI stages should include at a minimum**
+- linting
+- unit tests
+- static code analysis
+- dependency scans
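As an illustration, these minimum stages might look like the following in GitLab CI syntax (job names, and the `make` targets they invoke, are placeholders for whatever tools your stack uses):

```yaml
stages: [lint, test, scan]

lint:
  stage: lint
  script:
    - make lint             # e.g. eslint, golangci-lint, flake8

unit-tests:
  stage: test
  script:
    - make test             # fast unit tests on every branch

static-analysis:
  stage: scan
  script:
    - make static-analysis  # e.g. SonarQube, semgrep

dependency-scan:
  stage: scan
  script:
    - make dependency-scan  # e.g. OWASP Dependency-Check, trivy
```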
+
+**Continuous Delivery**
+We believe that any merge to main should be deployable to production, and that main should always be deployable. Merges should be self-contained and not dependent upon another branch.
+
+[TODO]: # "Expand the section on CI"
+
+## Testing
+---
+Building testing into our products gives us the confidence we need to quickly deliver new features without the fear of breaking our products. The test pyramid depicts the types of tests we can author, along with their general distribution.
+
+### Unit
+The unit test is designed to test a small, singular component/function/method. Target the public methods of your classes; private and protected methods are exercised as part of the public unit. Unit tests are easy to author and maintain, and fast to run. They represent the largest portion of tests within the codebase.
+
+### Contract Testing
+Contract testing is TDD for microservice architectures: contracts are written describing what will be consumed, and then consumers and producers are tested against those contracts. This eliminates the need for test environments that have every service running at a specific version.
+
+### Integration
+The integration test is designed to test between components. A typical example might be integrating with a database or a provided REST service. Integration tests require that you stand up not only your product but also the components with which you integrate. For this reason, they require more time and effort than unit tests. They are often the second most numerous type of test.
+
+### End to End (E2E)
+The end-to-end test is designed to test through your stack starting at the front end. The tests require the most time and effort to write and maintain. For this reason, they often represent the smallest portion of your tests.
+
+[TODO]: # "Add Acceptance test blurb"
+
+For further reading, take a look at this curated list of resources:
+
+* [https://martinfowler.com/articles/practical-test-pyramid.html](https://martinfowler.com/articles/practical-test-pyramid.html)
+
+## Operating Apps
+---
+### Logging (Operating apps)
+As you ship your application into production, you want to make sure that your logs can be processed and aggregated easily. Designing your application in this fashion allows the platform to treat all application logs the same. Additionally, it allows the platform to provide a base set of services your organization will need to support and operate your application; examples include access to logs for debugging and setting up alerts for monitoring. The standard practice is to write log entries to stdout. For further information, check out the [logs](https://12factor.net/logs) section on [12factor.net](https://12factor.net/)
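A minimal sketch of stdout-friendly structured logging in Python (the logger name and field names are illustrative):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, so a log
    aggregator can parse entries without custom rules."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)  # write to stdout, never to files
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order %s processed", "A-123")
```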
+
+### Configuration (Design)
+Your application will exist in numerous environments, including development, staging, and production. For this reason, it is important that your application can be configured easily. Keep in mind that your application is likely to end up on a platform like Kubernetes, where managing the lifecycle of an application is significant. The standard practice is to expose configuration via granular environment variables. The configuration defines a contract with the tools that manage your application's lifetime. For further information, check out the [config](https://12factor.net/config) section on [12factor.net](https://12factor.net/)
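For example, a small helper that reads granular environment variables with development defaults (the variable names and default values are illustrative):

```python
import os

def load_config(env=None):
    """Build application settings from the environment, falling back
    to safe local-development defaults."""
    env = os.environ if env is None else env
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "smtp_host": env.get("SMTP_HOST", "localhost"),
    }

config = load_config()
```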
+
+
+### Backing Services (Design)
+As you build out your application, there will be a set of services you wish to consume. You should consider what services are needed and if they are provided as part of the platform offering. Configuring these services is as simple as adding environment variables to your configuration (see above). Listed below are common services:
+
+1. Identity Management (e.g., Keycloak)
+1. Databases (e.g., relational, NoSQL)
+1. Storage (e.g., S3, MinIO, volumes)
+1. Message Queues (e.g., RabbitMQ, Kafka)
+1. Email (e.g., SMTP)
+
+### Monitoring (Operating)
+Monitoring your application will help you be successful. Monitoring can help you understand how your application is being used and by whom. Monitoring can help you understand whether your application is functioning. When you start building your application consider the following:
+
+1. Are health endpoints available in my application? What engineering aspects should be part of the health endpoint? How are the health endpoints monitored?
+2. Are there important product metrics to capture? Are there any technologies available to support metrics collection (e.g., Elasticsearch, Kibana, Grafana)?
+3. Are there technologies available to support alerting?
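A minimal health endpoint sketch using only the Python standard library (the `/healthz` path and payload shape are illustrative conventions, not a requirement):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Answers /healthz so the platform can probe liveness/readiness."""
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "UP"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging in this demo
        pass

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/healthz") as resp:
    payload = json.load(resp)
server.shutdown()
```

In practice the health check would also verify the engineering aspects the questions above raise, such as connectivity to backing services.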
+
+
+## Additional Resources
+---
+* [https://12factor.net](https://12factor.net/)
+
+
+## Recommended Reads
+---
+* [Clean Code by Robert C. Martin](https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882)
+* [Composing Software by Eric Elliott](https://www.amazon.com/Composing-Software-Exploration-Programming-Composition/dp/1661212565)
+* [Design Patterns by Gang of Four](https://www.amazon.com/Design-Patterns-Object-Oriented-Addison-Wesley-Professional-ebook/dp/B000SEIBB8)
+* [Pragmatic Programmer](https://www.amazon.com/Pragmatic-Programmer-Anniversary-Journey-Mastery/dp/B0833FBNHV/ref=sr_1_1?dchild=1&gclid=Cj0KCQiAst2BBhDJARIsAGo2ldUUR4IfnxCjch0ni9ici1HmmtCYjITL9ghoHWfJRZJjlTOXIRdR5DEaAiXrEALw_wcB&hvadid=241894030769&hvdev=c&hvlocphy=9008585&hvnetw=g&hvqmt=e&hvrand=15887975436101901235&hvtargid=kwd-131403882&hydadcr=16400_10303601&keywords=pragmatic+programmer&qid=1614290816&sr=8-1&tag=googhydr-20)
+* [Event-Driven Microservices](https://www.oreilly.com/library/view/building-event-driven-microservices/9781492057888/)
+
+
+## Notes
+
+[^1]: Forsgren, N., Humble, J., and Kim, G. _Accelerate: The Science of Lean Software and DevOps_. IT Revolution Press, 2018.
+
+[^2]: Forsgren, N., Humble, J., and Kim, G. _Accelerate: The Science of Lean Software and DevOps_. IT Revolution Press, 2018.
diff --git a/examples/platform-engineering/methodologies/concepts/argocd-best-practices.md b/examples/platform-engineering/methodologies/concepts/argocd-best-practices.md
new file mode 100644
index 0000000..1de46cf
--- /dev/null
+++ b/examples/platform-engineering/methodologies/concepts/argocd-best-practices.md
@@ -0,0 +1,61 @@
+# ArgoCD Best Practices
+
+[ArgoCD](https://argoproj.github.io/argo-cd/) is a declarative, GitOps continuous delivery tool for Kubernetes. To ensure optimal use and management, consider the following best practices:
+
+## 1. Declarative Configuration:
+ - Use ArgoCD to manage Kubernetes manifests declaratively through Git repositories.
+ - Store all application and environment configurations in version-controlled repositories.
+
+## 2. Repository Structure:
+ - Organize your Git repositories logically, separating applications and environments.
+ - Follow a directory structure that reflects the hierarchy of your clusters and applications.
+
+## 3. Sync and Health Status:
+ - Regularly monitor the synchronization status of applications.
+ - Leverage the ArgoCD UI or CLI to check the health status of your applications.
+
+## 4. Automated Sync Policies:
+ - Configure automated synchronization policies based on your deployment requirements.
+ - Set up periodic syncs to ensure that the desired state is continuously maintained.
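For instance, an Application's `syncPolicy` can enable automated sync with pruning and self-healing (the application, repo, and path names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/example/repo'
    targetRevision: HEAD
    path: manifests
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```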
+
+## 5. Promotion Workflow:
+ - Implement a promotion workflow through different Git branches (e.g., `dev`, `staging`, `production`).
+ - Use ArgoCD AppProject to define access control policies for different environments.
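A sketch of an `AppProject` that restricts a `staging` environment to one source repo and one destination (all names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: staging
  namespace: argocd
spec:
  description: Applications promoted to the staging environment
  sourceRepos:
    - 'https://github.com/example/repo'
  destinations:
    - server: 'https://kubernetes.default.svc'
      namespace: staging
```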
+
+## 6. Helm and Kustomize Support:
+ - Utilize Helm charts or Kustomize for managing complex application configurations.
+ - ArgoCD natively supports Helm charts and Kustomize overlays.
+
+## 7. Secret Management:
+ - Securely manage sensitive information like secrets and credentials outside of the Git repository.
+ - Integrate with external secret management tools and Kubernetes Secret resources.
+
+## 8. Rollback and Roll-forward:
+ - Test and document rollback procedures for easy recovery in case of issues.
+ - Embrace roll-forward by applying corrections or updates instead of reverting changes.
+
+## 9. Monitoring and Alerts:
+ - Set up monitoring for ArgoCD itself using tools like Prometheus and Grafana.
+ - Define alerts to notify teams about synchronization failures or other critical issues.
+
+## 10. RBAC and Security:
+ - Implement Role-Based Access Control (RBAC) to control access to ArgoCD resources.
+ - Regularly review and update RBAC policies based on team changes.
+
+## 11. Custom Resource Definitions (CRDs):
+ - Leverage ArgoCD's Custom Resource Definitions (CRDs) to define and manage applications.
+ - Understand and use ArgoCD-specific features like AppProject for advanced application management.
+
+## 12. Backup and Restore:
+ - Periodically back up the ArgoCD server's data, including the configuration and application state.
+ - Have a well-documented process for restoring ArgoCD from backups.
+
+## 13. Documentation:
+ - Maintain comprehensive documentation on ArgoCD usage, configurations, and best practices.
+ - Provide guidance on common tasks, troubleshooting, and onboarding.
+
+## 14. Community Engagement:
+ - Stay informed about ArgoCD updates, releases, and community discussions.
+ - Contribute to the ArgoCD community and share experiences and best practices.
+
+By adhering to these ArgoCD best practices, teams can effectively manage Kubernetes applications, streamline deployment workflows, and ensure a robust GitOps-based continuous delivery process.
diff --git a/examples/platform-engineering/methodologies/concepts/kubernetes-best-practices.md b/examples/platform-engineering/methodologies/concepts/kubernetes-best-practices.md
new file mode 100644
index 0000000..302b5a4
--- /dev/null
+++ b/examples/platform-engineering/methodologies/concepts/kubernetes-best-practices.md
@@ -0,0 +1,89 @@
+# Kubernetes Best Practices
+
+## Cluster Architecture
+
+- **High Availability:**
+ - Design clusters with high availability in mind to minimize downtime.
+ - Distribute nodes across multiple availability zones.
+
+- **Networking:**
+ - Use a well-defined network architecture.
+ - Leverage Network Policies to control pod-to-pod communication.
+
+## Resource Management
+
+- **Resource Requests and Limits:**
+ - Set resource requests and limits for containers to ensure fair resource allocation.
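For example, a container spec with requests and limits might look like this (the image and values are illustrative and should be tuned per workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0
      resources:
        requests:        # what the scheduler reserves for the pod
          cpu: 250m
          memory: 256Mi
        limits:          # hard ceiling enforced at runtime
          cpu: 500m
          memory: 512Mi
```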
+
+- **Horizontal Pod Autoscaling (HPA):**
+ - Implement HPA for automatic scaling based on resource usage.
+
+## Pod Design
+
+- **Single Responsibility Principle:**
+ - Design pods to have a single responsibility.
+ - Avoid running multiple applications in a single pod.
+
+- **Health Probes:**
+ - Use readiness and liveness probes to enhance pod reliability.
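A container's probes might be configured as follows, assuming the application exposes a `/healthz` endpoint on port 8080 (path, port, and timings are illustrative):

```yaml
containers:
  - name: app
    image: registry.example.com/app:1.0
    readinessProbe:          # gate traffic until the app is ready
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:           # restart the container if it stops responding
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```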
+
+## Service Communication
+
+- **Service Discovery:**
+ - Leverage Kubernetes Services for service discovery.
+ - Use DNS names to communicate between services.
+
+- **Ingress Controllers:**
+ - Implement Ingress controllers for managing external access to services.
+
+## Configuration Management
+
+- **ConfigMaps and Secrets:**
+ - Use ConfigMaps for configuration data.
+ - Store sensitive information in Secrets.
+
+- **Immutable Infrastructure:**
+ - Treat containers as immutable and avoid modifying them at runtime.
+
+## Security
+
+- **Role-Based Access Control (RBAC):**
+ - Implement RBAC to control access to resources.
+ - Assign the principle of least privilege.
+
+- **Pod Security Standards:**
+  - Enforce security policies using Pod Security Admission and the Pod Security Standards (Pod Security Policies were removed in Kubernetes 1.25).
+
+- **Network Policies:**
+ - Define Network Policies to control traffic between pods.
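As a sketch, a NetworkPolicy that only admits traffic to `app: api` pods from `app: web` pods in the same namespace (the labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
```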
+
+## Monitoring and Logging
+
+- **Prometheus and Grafana:**
+ - Set up monitoring with Prometheus for metrics and Grafana for visualization.
+
+- **Centralized Logging:**
+ - Use centralized logging solutions for aggregating logs.
+
+## Scaling
+
+- **Vertical Scaling:**
+ - Use vertical scaling for individual nodes.
+
+- **Horizontal Scaling:**
+ - Implement horizontal scaling for application components.
+
+## Upgrades and Maintenance
+
+- **Rolling Updates:**
+ - Perform rolling updates to minimize downtime during application updates.
+
+- **Backup and Restore:**
+ - Regularly backup critical data and practice restores.
+
+## Documentation
+
+- **Maintain Documentation:**
+ - Keep comprehensive documentation for configurations, deployments, and best practices.
+ - Include information about troubleshooting and recovery procedures.
+
diff --git a/examples/platform-engineering/methodologies/concepts/terraform-best-practices.md b/examples/platform-engineering/methodologies/concepts/terraform-best-practices.md
new file mode 100644
index 0000000..1e40499
--- /dev/null
+++ b/examples/platform-engineering/methodologies/concepts/terraform-best-practices.md
@@ -0,0 +1,65 @@
+# Terraform Best Practices
+
+[Terraform](https://www.terraform.io/) is an Infrastructure as Code (IaC) tool that allows for the provisioning and management of infrastructure in a declarative and version-controlled manner. To optimize the usage of Terraform, follow these best practices:
+
+## 1. Infrastructure as Code (IaC):
+ - Define infrastructure using Terraform code to capture the desired state.
+ - Store Terraform configurations in version-controlled repositories for traceability and collaboration.
+
+## 2. Directory Structure:
+ - Organize Terraform code into modular and reusable directories.
+ - Follow a structured layout for different environments (e.g., `dev`, `staging`, `production`).
+
+## 3. Variables and Input Parameters:
+ - Use variables to parameterize configurations for flexibility.
+ - Leverage input parameter files to separate sensitive information from the code.
+
+## 4. Remote State Storage:
+ - Store Terraform remote state in a centralized and secure backend (e.g., AWS S3, Azure Storage).
+ - Enable versioning and locking to manage state changes collaboratively.
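A sketch of an S3 backend with locking via DynamoDB (the bucket, key, and table names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # versioned S3 bucket
    key            = "platform/prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "example-terraform-locks"   # enables state locking
  }
}
```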
+
+## 5. Module Usage:
+ - Create modular and shareable Terraform modules for common components.
+ - Reuse modules across different projects to promote consistency.
+
+## 6. Naming Conventions:
+ - Establish clear and consistent naming conventions for resources.
+ - Use variables for resource names to enable easy customization.
+
+## 7. Documentation:
+ - Maintain documentation for Terraform configurations, including resource descriptions and variable usage.
+ - Include information about the purpose, inputs, and outputs of each module.
+
+## 8. Versioning:
+ - Version control Terraform configurations using a VCS (Version Control System) like Git.
+ - Tag releases and changes for better tracking and rollback capabilities.
+
+## 9. State Locking:
+ - Enable state locking to prevent concurrent modifications and potential conflicts.
+ - Implement a mechanism for unlocking the state in case of failures.
+
+## 10. Backends and Workspaces:
+ - Use Terraform backends to store state remotely.
+ - Leverage workspaces for managing multiple environments within a single configuration.
+
+## 11. Sensitive Data Handling:
+ - Avoid hardcoding sensitive data directly in Terraform code.
+ - Use environment variables or secure input files for managing sensitive information.
+
+## 12. Testing and Validation:
+ - Implement automated testing for Terraform configurations using tools like `terraform validate` and `tflint`.
+ - Use Terratest for more comprehensive integration testing.
+
+## 13. Logging and Monitoring:
+ - Implement logging for Terraform executions to capture changes and potential issues.
+ - Utilize monitoring tools to detect infrastructure changes and performance metrics.
+
+## 14. Continuous Integration (CI) and Continuous Deployment (CD):
+ - Integrate Terraform into CI/CD pipelines for automated testing and deployment.
+ - Automate the approval process for Terraform changes to streamline the release cycle.
+
+## 15. Education and Training:
+ - Provide training for team members on Terraform best practices.
+ - Foster a culture of learning and continuous improvement.
+
+By adhering to these Terraform best practices, teams can create and manage infrastructure efficiently, ensuring scalability, maintainability, and collaboration in an IaC environment.
diff --git a/examples/platform-engineering/methodologies/general-best-practices.md b/examples/platform-engineering/methodologies/general-best-practices.md
new file mode 100644
index 0000000..79f1857
--- /dev/null
+++ b/examples/platform-engineering/methodologies/general-best-practices.md
@@ -0,0 +1,54 @@
+# Platform Playbook
+
+Platform engineering involves designing, building, and maintaining the infrastructure and tools that enable software development and deployment. Adopting best practices in platform engineering ensures a stable, scalable, and efficient environment for development teams. Here are some detailed best practices:
+
+## 1. Infrastructure as Code (IaC):
+ - Use IaC tools like Terraform or Ansible to define and manage infrastructure.
+ - Version control your IaC scripts to track changes and enable collaboration.
+ - Implement a modular structure for IaC to promote reusability and maintainability.
+
+## 2. Containerization:
+ - Containerize applications using technologies like Docker for consistency across environments.
+ - Use orchestration tools such as Kubernetes for automated deployment, scaling, and management of containers.
+ - Optimize container images for size and security.
+
+## 3. Continuous Integration and Continuous Deployment (CI/CD):
+ - Implement CI/CD pipelines to automate testing, building, and deployment processes.
+ - Include automated testing at various stages to catch issues early.
+ - Use feature flags to enable gradual and safe feature rollouts.
+
+## 4. Monitoring and Logging:
+ - Establish comprehensive monitoring for applications and infrastructure.
+ - Utilize centralized logging to gather and analyze logs for troubleshooting.
+ - Implement alerting systems to detect and respond to issues proactively.
+
+## 5. Scalability:
+ - Design systems to scale horizontally by adding more instances.
+ - Use auto-scaling groups to automatically adjust resources based on demand.
+ - Regularly perform load testing to identify potential bottlenecks.
+
+## 6. Security:
+ - Implement security best practices for infrastructure and applications.
+ - Regularly update dependencies and conduct security audits.
+ - Enforce least privilege access controls and regularly rotate credentials.
+
+## 7. Documentation:
+ - Maintain comprehensive documentation for infrastructure, deployment processes, and configurations.
+ - Keep documentation up-to-date to facilitate knowledge sharing and onboarding.
+
+## 8. High Availability (HA):
+ - Design systems with redundancy to ensure availability in case of failures.
+ - Distribute applications across multiple availability zones or regions.
+ - Test and simulate failure scenarios to validate HA configurations.
+
+## 9. Collaboration and Communication:
+ - Foster collaboration between development, operations, and other teams.
+ - Use collaboration tools and platforms for effective communication.
+ - Conduct regular cross-functional meetings to align goals and address challenges.
+
+## 10. Performance Optimization:
+ - Regularly assess and optimize the performance of both infrastructure and applications.
+ - Use caching mechanisms, content delivery networks (CDNs), and other optimization techniques.
+ - Monitor and optimize database queries for efficiency.
+
+By adhering to these platform engineering best practices, teams can create a robust and efficient environment that supports the continuous delivery of high-quality software.
diff --git a/examples/platform-engineering/practicals/argocd-examples.md b/examples/platform-engineering/practicals/argocd-examples.md
new file mode 100644
index 0000000..5aa4ec0
--- /dev/null
+++ b/examples/platform-engineering/practicals/argocd-examples.md
@@ -0,0 +1,87 @@
+# ArgoCD: Application of ApplicationSets
+
+An application of ApplicationSets in ArgoCD is to efficiently manage and deploy similar applications or configurations across multiple clusters or namespaces. Here's a specific example to illustrate the application of ApplicationSets:
+
+## **Scenario**
+
+Imagine you have a microservices architecture, and you need to deploy the same application stack to multiple namespaces within a Kubernetes cluster. Each namespace may represent a different environment, such as development, testing, and production.
+
+## **ApplicationSets Implementation**
+
+### 1. **Generator:**
+ - Define a generator that generates application names, namespaces, and other parameters based on a specific pattern or set of rules.
+
+   ```yaml
+   generators:
+     - list:
+         elements:
+           - name: my-app-dev
+             namespace: dev
+           - name: my-app-test
+             namespace: test
+           - name: my-app-prod
+             namespace: prod
+   ```
+
+### 2. **Template:**
+
+ - Create a template specifying the common configuration for your application. This includes the source repository, target revision, and destination settings.
+
+   ```yaml
+   template:
+     metadata:
+       name: '{{.name}}'
+       labels:
+         app.kubernetes.io/name: '{{.name}}'
+     spec:
+       project: default
+       source:
+         repoURL: 'https://github.com/example/repo'
+         targetRevision: HEAD
+         path: manifests   # path within the repo (illustrative)
+       destination:
+         namespace: '{{.namespace}}'
+         server: 'https://kubernetes.default.svc'
+   ```
+
+### 3. **ApplicationSet Manifest:**
+
+- Apply the ApplicationSet manifest that defines the generators and template.
+
+   ```yaml
+   apiVersion: argoproj.io/v1alpha1
+   kind: ApplicationSet
+   metadata:
+     name: my-app-set
+   spec:
+     goTemplate: true   # enables the '{{.field}}' Go-template syntax used below
+     generators:
+       - list:
+           elements:
+             - env: dev
+             - env: test
+             - env: prod
+     template:
+       metadata:
+         name: 'my-app-{{.env}}'
+         labels:
+           app.kubernetes.io/part-of: my-app-set
+       spec:
+         project: default
+         source:
+           repoURL: 'https://github.com/example/repo'
+           targetRevision: HEAD
+           path: manifests   # path within the repo (illustrative)
+         destination:
+           namespace: '{{.env}}'
+           server: 'https://kubernetes.default.svc'
+   ```
+
+## **Result**
+
+- ArgoCD will dynamically generate and deploy three instances of the application, each to a different namespace (dev, test, prod).
+- The common configuration specified in the template ensures consistency across all instances.
+- Changes made to the ApplicationSet manifest automatically reflect in the generated applications, allowing for easy scaling and maintenance.
+
+## **Use Cases**
+
+### 1. **Scalable Deployments:**
+ - Easily scale deployments across different namespaces or clusters without manually managing each application.
+
+### 2. **Environment Isolation:**
+ - Isolate configurations for different environments, ensuring separation and consistency.
+
+### 3. **Efficient Management:**
+ - Streamline the deployment of similar applications with minimal manual intervention.
+
+ApplicationSets in ArgoCD provide a powerful mechanism for handling repetitive deployment scenarios and managing configurations at scale.
diff --git a/examples/software-engineering/methodologies/concepts/api-best-practices.md b/examples/software-engineering/methodologies/concepts/api-best-practices.md
new file mode 100644
index 0000000..85d313c
--- /dev/null
+++ b/examples/software-engineering/methodologies/concepts/api-best-practices.md
@@ -0,0 +1,98 @@
+# API Development Best Practices
+
+
+## Design Principles
+
+#### **RESTful Principles:**
+ - Follow RESTful design principles for a standardized and predictable API.
+
+#### **Consistency:**
+ - Maintain consistency in naming conventions, response formats, and URI structures.
+
+## Endpoint Design
+
+#### **Meaningful URIs:**
+ - Use meaningful and resource-oriented URIs.
+ - Avoid exposing implementation details in URIs.
+
+#### **Resource Naming:**
+ - Choose clear and concise resource names.
+ - Utilize plural nouns for resource names.
+
+## Request and Response
+
+#### **HTTP Methods:**
+ - Use appropriate HTTP methods (GET, POST, PUT, DELETE) for CRUD operations.
+
+#### **Request Payloads:**
+ - Keep request payloads simple and well-structured.
+ - Prefer JSON format for request and response payloads.
+
+#### **Response Status Codes:**
+ - Use standard HTTP status codes to convey the result of the request.
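As a rough, framework-agnostic sketch (the `items` resource name and in-memory store are made up for illustration), the conventional mapping of CRUD operations to HTTP methods and status codes might look like:

```python
# Minimal sketch of conventional CRUD-to-HTTP mappings using an
# in-memory store. Handlers return (status_code, body) pairs.
store = {}
next_id = 1

def create_item(payload):          # POST /items -> 201 Created
    global next_id
    item = {"id": next_id, **payload}
    store[next_id] = item
    next_id += 1
    return 201, item

def get_item(item_id):             # GET /items/{id} -> 200 OK or 404 Not Found
    item = store.get(item_id)
    return (200, item) if item else (404, {"error": "not found"})

def update_item(item_id, payload): # PUT /items/{id} -> 200 OK or 404 Not Found
    if item_id not in store:
        return 404, {"error": "not found"}
    store[item_id] = {"id": item_id, **payload}
    return 200, store[item_id]

def delete_item(item_id):          # DELETE /items/{id} -> 204 No Content or 404
    if store.pop(item_id, None) is None:
        return 404, {"error": "not found"}
    return 204, None
```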
+
+## Authentication and Authorization
+
+#### **Token-based Authentication:**
+ - Implement token-based authentication for secure API access.
+ - Utilize industry-standard authentication protocols like OAuth.
+
+#### **Role-Based Access Control (RBAC):**
+ - Enforce RBAC to control access to API resources.
+ - Limit access based on user roles.
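A minimal RBAC check can be sketched as a mapping from roles to permitted actions; the role and permission names below are hypothetical:

```python
# Illustrative RBAC check: map roles to the set of actions they may
# perform. Real systems typically load this from a policy store.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```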
+
+## Error Handling
+
+#### **Consistent Error Format:**
+ - Use a consistent format for error responses.
+ - Include error codes, messages, and details in error responses.
+
+#### **HTTP Status Codes:**
+ - Choose appropriate HTTP status codes for different error scenarios.
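One way to enforce a consistent error format is a single helper that every handler uses; the `code`/`message`/`details` field names below are a common convention, not a standard:

```python
# Sketch of a consistent error-response envelope shared by all handlers.
def error_response(status: int, code: str, message: str, details=None):
    """Build a (status, body) pair with a uniform error structure."""
    body = {"error": {"code": code, "message": message}}
    if details is not None:
        body["error"]["details"] = details
    return status, body
```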
+
+## Versioning
+
+#### **Semantic Versioning:**
+ - Apply semantic versioning to API versions.
+ - Clearly communicate version changes in the API.
+
+#### **Backward Compatibility:**
+ - Strive for backward compatibility to minimize disruptions for existing clients.
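The compatibility rule can be made concrete with a small sketch: under semantic versioning, only a MAJOR bump signals a backward-incompatible change (pre-release and build-metadata handling from the full SemVer spec is omitted here):

```python
# Minimal MAJOR.MINOR.PATCH comparison; no pre-release/build metadata.
def parse_semver(version: str) -> tuple:
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def is_breaking_change(old: str, new: str) -> bool:
    """A higher MAJOR component signals a backward-incompatible change."""
    return parse_semver(new)[0] > parse_semver(old)[0]
```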
+
+## Documentation
+
+#### **Swagger/OpenAPI:**
+ - Generate API documentation using Swagger or OpenAPI.
+ - Include clear examples and use cases in documentation.
+
+#### **Interactive Documentation:**
+ - Provide interactive documentation with sample requests and responses.
+
+## Testing
+
+#### **Unit Testing:**
+ - Implement unit tests for individual API components.
+ - Utilize testing frameworks for automated testing.
+
+#### **Integration Testing:**
+ - Perform integration testing to ensure components work together seamlessly.
+
+## Security
+
+#### **SSL/TLS Encryption:**
+ - Enforce SSL/TLS encryption to secure data in transit.
+
+#### **Input Validation:**
+ - Validate and sanitize input to prevent security vulnerabilities.
+
+## Performance
+
+#### **Caching:**
+ - Implement caching mechanisms to enhance API performance.
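As a sketch of the idea (production APIs usually pair a shared cache such as Redis with HTTP caching headers), a simple time-to-live cache for responses might look like:

```python
# Simple TTL cache for API responses. The expiry window and key scheme
# are illustrative only.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data = {}

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._data[key]   # entry expired; evict it
            return None
        return value

    def set(self, key, value):
        self._data[key] = (value, time.monotonic())
```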
+
+#### **Pagination:**
+ - Use pagination for large data sets to optimize response times.
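A basic offset/limit pagination helper can be sketched as follows; the `page`/`per_page` parameter names are a common convention, not mandated by any standard:

```python
# Slice a collection into one page and report pagination metadata.
def paginate(items, page: int = 1, per_page: int = 10):
    total = len(items)
    start = (page - 1) * per_page
    return {
        "page": page,
        "per_page": per_page,
        "total": total,
        "total_pages": max(1, -(-total // per_page)),  # ceiling division
        "data": items[start:start + per_page],
    }
```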
+
+#### **Rate Limiting:**
+ - Enforce rate limiting to prevent abuse and ensure fair usage.
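One common rate-limiting strategy is a token bucket: each request consumes a token, and tokens refill at a fixed rate. A minimal sketch (capacity and refill rate are arbitrary illustrative values):

```python
# Token-bucket rate limiter: allow() consumes one token if available.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, up to capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last_refill) * self.refill_per_second,
        )
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```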
+
diff --git a/examples/software-engineering/methodologies/general-best-practices.md b/examples/software-engineering/methodologies/general-best-practices.md
new file mode 100644
index 0000000..8cb3cc8
--- /dev/null
+++ b/examples/software-engineering/methodologies/general-best-practices.md
@@ -0,0 +1,89 @@
+# SWE Playbook
+
+## Introduction
+
+This playbook outlines best practices for software engineering (SWE) to ensure high-quality, maintainable, and efficient code.
+
+## Coding Standards
+
+### 1. Consistent Code Formatting
+
+Follow a consistent code formatting style across the codebase. Consider using automated tools like linters and formatters.
+
+### 2. Descriptive Variable and Function Naming
+
+Use meaningful and descriptive names for variables and functions to enhance code readability.
+
+### 3. Modularity
+
+Design code with modularity in mind. Break down large modules into smaller, reusable components for better maintainability.
+
+### 4. Version Control
+
+Utilize version control systems (e.g., Git) effectively. Commit frequently, write clear commit messages, and branch logically.
+
+## Development Practices
+
+### 5. Test-Driven Development (TDD)
+
+Adopt TDD practices. Write tests before implementing new features to ensure code correctness and maintainability.
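As a sketch of the TDD flow, the tests below would be written first and then drive the implementation of a hypothetical `slugify` helper:

```python
# TDD sketch: tests written first, minimal implementation written to
# make them pass. The slugify helper is hypothetical.
import unittest

def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Many   Spaces "), "many-spaces")

# Run with: python -m unittest <module_name>
```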
+
+### 6. Code Reviews
+
+Conduct regular code reviews to catch bugs, ensure adherence to coding standards, and promote knowledge sharing among team members.
+
+### 7. Pair Programming
+
+Encourage pair programming to enhance collaboration, share knowledge, and catch issues early in the development process.
+
+### 8. Continuous Integration and Continuous Deployment (CI/CD)
+
+Implement CI/CD pipelines to automate testing, integration, and deployment processes, ensuring faster and more reliable software delivery.
+
+## Documentation
+
+### 9. Inline Comments
+
+Use clear and concise inline comments to explain complex sections of code, making it easier for developers to understand.
+
+### 10. README Files
+
+Maintain well-documented README files that provide essential information about the project, including setup instructions, dependencies, and usage guidelines.
+
+### 11. API Documentation
+
+For projects with APIs, generate and maintain comprehensive API documentation to assist developers in integrating and using the APIs.
+
+## Security
+
+### 12. Regular Security Audits
+
+Conduct regular security audits to identify and address potential vulnerabilities in the codebase.
+
+### 13. Input Validation
+
+Implement robust input validation mechanisms to protect against common security threats like injection attacks.
+
+## Performance Optimization
+
+### 14. Code Profiling
+
+Regularly profile code to identify performance bottlenecks and optimize critical sections for improved efficiency.
+
+### 15. Resource Management
+
+Optimize resource usage, including memory and CPU, to ensure efficient performance.
+
+## Continuous Learning
+
+### 16. Knowledge Sharing
+
+Encourage knowledge sharing sessions within the team to stay updated on industry best practices, new technologies, and advancements.
+
+### 17. Training and Development
+
+Invest in continuous training and development opportunities for team members to enhance their skills and stay current.
+
+## Conclusion
+
+By following these best practices, we aim to create a collaborative, efficient, and sustainable software development environment. Regularly review and update these practices to align with evolving industry standards and project requirements.
diff --git a/examples/software-engineering/practicals/create-api-example.md b/examples/software-engineering/practicals/create-api-example.md
new file mode 100644
index 0000000..9d403de
--- /dev/null
+++ b/examples/software-engineering/practicals/create-api-example.md
@@ -0,0 +1,85 @@
+# Writing a Simple API with Flask (Python)
+
+## Introduction
+
+In this tutorial, we'll walk through the process of creating a simple RESTful API using Python and the Flask web framework. Flask is a lightweight and easy-to-use framework for building web applications, including APIs.
+
+### Prerequisites
+
+Before you begin, make sure you have the following installed:
+
+- Python ([Python Official Website](https://www.python.org/))
+- Flask (`pip install Flask`)
+
+## Step 1: Setting Up the Project
+
+Create a new directory for your project and navigate into it.
+
+```bash
+mkdir flask-api-tutorial
+cd flask-api-tutorial
+```
+
+## Step 2: Creating a Virtual Environment
+It's good practice to use a virtual environment to isolate your project's dependencies. Create a virtual environment using the following commands:
+
+```bash
+python -m venv venv
+```
+
+Activate the virtual environment:
+
+```bash
+# On macOS/Linux
+source venv/bin/activate
+# On Windows
+venv\Scripts\activate
+```
+
+This step ensures that your project has a dedicated environment for its dependencies, minimizing conflicts and ensuring consistency across different projects.
+
+## Step 3: Installing Flask
+Install Flask within the virtual environment:
+
+```bash
+pip install Flask
+```
+
+## Step 4: Writing the API Code
+Create a file named `app.py` in your project directory and open it in a text editor. Add the following code:
+
+```python
+from flask import Flask, jsonify
+
+app = Flask(__name__)
+
+@app.route('/api', methods=['GET'])
+def get_data():
+ data = {'message': 'Hello, API!'}
+ return jsonify(data)
+
+if __name__ == '__main__':
+ app.run(debug=True)
+```
+
+This code sets up a basic Flask application with a single endpoint (`/api`) that returns a JSON response.
+
+## Step 5: Running the API
+In the terminal, run the Flask application:
+
+```bash
+python app.py
+```
+
+Visit http://127.0.0.1:5000/api in your browser or use a tool like curl or Postman to make a GET request:
+
+```bash
+curl http://127.0.0.1:5000/api
+```
+
+You should receive a JSON response: `{"message": "Hello, API!"}`
+
+## Conclusion
+
+Congratulations! You've successfully created a simple API using Flask. This is just a starting point, and you can expand and enhance your API by adding more routes, handling different HTTP methods, and integrating with databases.
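For instance, one possible next step is adding a route that handles POST requests; the `/api/echo` path below is made up for illustration:

```python
# Hypothetical extension of app.py: a POST endpoint that echoes the
# JSON payload back to the client with a 201 status code.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/api/echo', methods=['POST'])
def echo():
    payload = request.get_json(silent=True) or {}
    return jsonify({'received': payload}), 201
```

You can exercise it without running a server by using Flask's built-in test client.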
+
+Explore [Flask's documentation](https://flask.palletsprojects.com/) for more advanced features and best practices.
+
+Feel free to adapt this tutorial to other frameworks or languages as needed.
\ No newline at end of file
diff --git a/material/overrides/home.html b/material/overrides/home.html
new file mode 100644
index 0000000..c440ac8
--- /dev/null
+++ b/material/overrides/home.html
@@ -0,0 +1,279 @@
+
+
+{% extends "main.html" %}
+{% block tabs %}
+{{ super() }}
+
+
+
+
Documentation to help all engineers strive for excellence and promote standardization across the organization.
+Draw from the vast experience of engineers across the company to help you make sustainable decisions.
+Dive into concepts and thoughts that have proved valuable to Rise8's quest to achieve continuous delivery.
+