diff --git a/README.md b/README.md
index e820b93e63cc..db8e820ef142 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,6 @@ Hyperswitch-Logo

-

The open-source payments switch

@@ -35,7 +34,6 @@ The single API to access payment ecosystems across 130+ countries

-
@@ -57,14 +55,14 @@ Using Hyperswitch, you can:

⚡️ Quick Start Guide

-

One-click deployment on AWS cloud

+### One-click deployment on AWS cloud

-The fastest and easiest way to try hyperswitch is via our CDK scripts
+The fastest and easiest way to try Hyperswitch is via our CDK scripts:

1. Click on the following button for a quick standalone deployment on AWS, suitable for prototyping.
   No code or setup is required in your system and the deployment is covered within the AWS free-tier setup.

-
+

2. Sign-in to your AWS console.

@@ -72,12 +70,27 @@ The fastest and easiest way to try hyperswitch is via our CDK scripts

   For an early access to the production-ready setup fill this Early Access Form

+### Run it on your system
+
+You can run Hyperswitch on your system using Docker Compose after cloning this repository:
+
+```shell
+docker compose up -d
+```
+
+This will start the payments router, the primary component within Hyperswitch.
+
+Check out the [local setup guide][local-setup-guide] for a more comprehensive
+setup, which includes the [scheduler and monitoring services][docker-compose-scheduler-monitoring].
+
+[local-setup-guide]: /docs/try_local_system.md
+[docker-compose-scheduler-monitoring]: /docs/try_local_system.md#run-the-scheduler-and-monitoring-services
+
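The quick-start addition above tells readers that `docker compose up -d` starts the payments router, but not how to confirm it came up. The local setup guide touched later in this diff checks the router's health endpoint for that; a minimal verification along those lines, assuming the default `8080:8080` port mapping from `docker-compose.yml`, is:

```shell
# Probe the router's health endpoint once the containers are up and the
# migrations have finished. A `200 OK` response means the payments router
# is ready to accept requests.
curl --head --request GET 'http://localhost:8080/health'
```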

🔌 Fast Integration for Stripe Users

-If you are already using Stripe, integrating with Hyperswitch is fun, fast &
-easy.
+If you are already using Stripe, integrating with Hyperswitch is fun, fast & easy.
Try the steps below to get a feel for how quick the setup is:

1. Get API keys from our [dashboard].

@@ -96,9 +109,7 @@ Try the steps below to get a feel for how quick the setup is:

As of Sept 2023, we support 50+ payment processors and multiple global payment methods.
In addition, we are continuously integrating new processors based on their reach and community requests.
Our target is to support 100+ processors by H2 2023.
-You can find the latest list of payment processors, supported methods, and
-features
-[here][supported-connectors-and-features].
+You can find the latest list of payment processors, supported methods, and features [here][supported-connectors-and-features].

[supported-connectors-and-features]: https://hyperswitch.io/pm-list

@@ -252,12 +263,11 @@ We welcome contributions from the community. Please read through our
Included are directions for opening issues, coding standards, and notes on development.

-- We appreciate all types of contributions: code, documentation, demo creation, or something new way you want to contribute to us. We will reward every contribution with a Hyperswitch branded t-shirt.
-- 🦀 **Important note for Rust developers**: We aim for contributions from the community
-across a broad range of tracks. Hence, we have prioritised simplicity and code
-readability over purely idiomatic code. For example, some of the code in core
-functions (e.g., `payments_core`) is written to be more readable than
-pure-idiomatic.
+- We appreciate all types of contributions: code, documentation, demo creation, or any other way you would like to contribute.
+  We will reward every contribution with a Hyperswitch branded t-shirt.
+- 🦀 **Important note for Rust developers**: We aim for contributions from the community across a broad range of tracks.
+  Hence, we have prioritised simplicity and code readability over purely idiomatic code.
+  For example, some of the code in core functions (e.g., `payments_core`) is written to be more readable than purely idiomatic code.

👥 Community

@@ -269,7 +279,6 @@ Get updates on Hyperswitch development and chat with the community:
- [Slack workspace][slack] for questions related to integrating hyperswitch, integrating a connector in hyperswitch, etc.
- [GitHub Discussions][github-discussions] to drop feature requests or suggest anything payments-related you need for your stack.

-[blog]: https://hyperswitch.io/blog
[discord]: https://discord.gg/wJZ7DVW8mm
[slack]: https://join.slack.com/t/hyperswitch-io/shared_invite/zt-1k6cz4lee-SAJzhz6bjmpp4jZCDOtOIg
[github-discussions]: https://github.com/juspay/hyperswitch/discussions

@@ -314,7 +323,6 @@ Check the [CHANGELOG.md](./CHANGELOG.md) file for details.

This product is licensed under the [Apache 2.0 License](LICENSE).

-

✨ Thanks to all contributors

diff --git a/docker-compose-development.yml b/docker-compose-development.yml new file mode 100644 index 000000000000..500f397cfa30 --- /dev/null +++ b/docker-compose-development.yml @@ -0,0 +1,301 @@ +version: "3.8" + +volumes: + cargo_cache: + pg_data: + router_build_cache: + scheduler_build_cache: + drainer_build_cache: + redisinsight_store: + +networks: + router_net: + +services: + ### Dependencies + pg: + image: postgres:latest + ports: + - "5432:5432" + networks: + - router_net + volumes: + - pg_data:/VAR/LIB/POSTGRESQL/DATA + environment: + - POSTGRES_USER=db_user + - POSTGRES_PASSWORD=db_pass + - POSTGRES_DB=hyperswitch_db + + redis-standalone: + image: redis:7 + labels: + - redis + networks: + - router_net + ports: + - "6379" + + migration_runner: + image: rust:latest + command: "bash -c 'cargo install diesel_cli --no-default-features --features postgres && diesel migration --database-url postgres://$${DATABASE_USER}:$${DATABASE_PASSWORD}@$${DATABASE_HOST}:$${DATABASE_PORT}/$${DATABASE_NAME} run'" + working_dir: /app + networks: + - router_net + volumes: + - ./:/app + environment: + - DATABASE_USER=db_user + - DATABASE_PASSWORD=db_pass + - DATABASE_HOST=pg + - DATABASE_PORT=5432 + - DATABASE_NAME=hyperswitch_db + + ### Application services + hyperswitch-server: + image: rust:latest + command: cargo run --bin router -- -f ./config/docker_compose.toml + working_dir: /app + ports: + - "8080:8080" + networks: + - router_net + volumes: + - ./:/app + - cargo_cache:/cargo_cache + - router_build_cache:/cargo_build_cache + environment: + - CARGO_HOME=/cargo_cache + - CARGO_TARGET_DIR=/cargo_build_cache + labels: + logs: "promtail" + healthcheck: + test: curl --fail http://localhost:8080/health || exit 1 + interval: 120s + retries: 4 + start_period: 20s + timeout: 10s + + hyperswitch-producer: + image: rust:latest + command: cargo run --bin scheduler -- -f ./config/docker_compose.toml + working_dir: /app + networks: + - router_net + profiles: + - scheduler + volumes: + - ./:/app + - cargo_cache:/cargo_cache + - scheduler_build_cache:/cargo_build_cache + environment: + - CARGO_HOME=/cargo_cache + - CARGO_TARGET_DIR=/cargo_build_cache + - SCHEDULER_FLOW=producer + depends_on: + hyperswitch-consumer: + condition: service_healthy + labels: + logs: "promtail" + + hyperswitch-consumer: + image: rust:latest + command: cargo run --bin scheduler -- -f ./config/docker_compose.toml + working_dir: /app + networks: + - router_net + profiles: + - scheduler + volumes: + - ./:/app + - cargo_cache:/cargo_cache + - scheduler_build_cache:/cargo_build_cache + environment: + - CARGO_HOME=/cargo_cache + - CARGO_TARGET_DIR=/cargo_build_cache + - SCHEDULER_FLOW=consumer + depends_on: + hyperswitch-server: + condition: service_started + labels: + logs: "promtail" + healthcheck: + test: (ps -e | grep scheduler) || exit 1 + interval: 120s + retries: 4 + start_period: 30s + timeout: 10s + + hyperswitch-drainer: + image: rust:latest + command: cargo run --bin drainer -- -f ./config/docker_compose.toml + working_dir: /app + deploy: + replicas: ${DRAINER_INSTANCE_COUNT:-1} + networks: + - router_net + profiles: + - full_kv + volumes: + - ./:/app + - cargo_cache:/cargo_cache + - drainer_build_cache:/cargo_build_cache + environment: + - CARGO_HOME=/cargo_cache + - CARGO_TARGET_DIR=/cargo_build_cache + restart: unless-stopped + depends_on: + hyperswitch-server: + condition: service_started + labels: + logs: "promtail" + + ### Clustered Redis setup + redis-cluster: + image: redis:7 + deploy: + replicas: 
${REDIS_CLUSTER_COUNT:-3} + command: redis-server /usr/local/etc/redis/redis.conf + profiles: + - clustered_redis + volumes: + - ./config/redis.conf:/usr/local/etc/redis/redis.conf + labels: + - redis + networks: + - router_net + ports: + - "6379" + - "16379" + + redis-init: + image: redis:7 + profiles: + - clustered_redis + depends_on: + - redis-cluster + networks: + - router_net + command: "bash -c 'export COUNT=${REDIS_CLUSTER_COUNT:-3} + + \ if [ $$COUNT -lt 3 ] + + \ then + + \ echo \"Minimum 3 nodes are needed for redis cluster\" + + \ exit 1 + + \ fi + + \ HOSTS=\"\" + + \ for ((c=1; c<=$$COUNT;c++)) + + \ do + + \ NODE=$COMPOSE_PROJECT_NAME-redis-cluster-$$c:6379 + + \ echo $$NODE + + \ HOSTS=\"$$HOSTS $$NODE\" + + \ done + + \ echo Creating a cluster with $$HOSTS + + \ redis-cli --cluster create $$HOSTS --cluster-yes + + \ '" + + ### Monitoring + grafana: + image: grafana/grafana:latest + ports: + - "3000:3000" + networks: + - router_net + profiles: + - monitoring + restart: unless-stopped + environment: + - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin + - GF_AUTH_ANONYMOUS_ENABLED=true + - GF_AUTH_BASIC_ENABLED=false + volumes: + - ./config/grafana.ini:/etc/grafana/grafana.ini + - ./config/grafana-datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yml + + promtail: + image: grafana/promtail:latest + volumes: + - ./logs:/var/log/router + - ./config:/etc/promtail + - /var/run/docker.sock:/var/run/docker.sock + command: -config.file=/etc/promtail/promtail.yaml + profiles: + - monitoring + networks: + - router_net + + loki: + image: grafana/loki:latest + ports: + - "3100" + command: -config.file=/etc/loki/loki.yaml + networks: + - router_net + profiles: + - monitoring + volumes: + - ./config:/etc/loki + + otel-collector: + image: otel/opentelemetry-collector-contrib:latest + command: --config=/etc/otel-collector.yaml + networks: + - router_net + profiles: + - monitoring + volumes: + - ./config/otel-collector.yaml:/etc/otel-collector.yaml + ports: + - "4317" + - "8888" + - "8889" + + prometheus: + image: prom/prometheus:latest + networks: + - router_net + profiles: + - monitoring + volumes: + - ./config/prometheus.yaml:/etc/prometheus/prometheus.yml + ports: + - "9090" + restart: unless-stopped + + tempo: + image: grafana/tempo:latest + command: -config.file=/etc/tempo.yaml + volumes: + - ./config/tempo.yaml:/etc/tempo.yaml + networks: + - router_net + profiles: + - monitoring + ports: + - "3200" # tempo + - "4317" # otlp grpc + restart: unless-stopped + + redis-insight: + image: redislabs/redisinsight:latest + networks: + - router_net + profiles: + - full_kv + ports: + - "8001:8001" + volumes: + - redisinsight_store:/db diff --git a/docker-compose.yml b/docker-compose.yml index f4dce575132e..fd18906143f5 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -1,76 +1,16 @@ -version: "3.7" +version: "3.8" volumes: - cargo_cache: pg_data: - cargo_build_cache: - p_cargo_build_cache: - c_cargo_build_cache: redisinsight_store: - networks: router_net: - services: - promtail: - image: grafana/promtail:latest - volumes: - - ./logs:/var/log/router - - ./config:/etc/promtail - - /var/run/docker.sock:/var/run/docker.sock - command: -config.file=/etc/promtail/promtail.yaml - profiles: - - monitoring - networks: - - router_net - - loki: - image: grafana/loki:latest - ports: - - "3100" - command: -config.file=/etc/loki/loki.yaml - networks: - - router_net - profiles: - - monitoring - volumes: - - ./config:/etc/loki - - otel-collector: - image: otel/opentelemetry-collector-contrib:latest - 
command: --config=/etc/otel-collector.yaml - networks: - - router_net - profiles: - - monitoring - volumes: - - ./config/otel-collector.yaml:/etc/otel-collector.yaml - ports: - - "4317" - - "8888" - - "8889" - - grafana: - image: grafana/grafana:latest - ports: - - "3000:3000" - networks: - - router_net - profiles: - - monitoring - restart: unless-stopped - environment: - - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin - - GF_AUTH_ANONYMOUS_ENABLED=true - - GF_AUTH_BASIC_ENABLED=false - volumes: - - ./config/grafana.ini:/etc/grafana/grafana.ini - - ./config/grafana-datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yml - + ### Dependencies pg: - image: postgres:14.5 + image: postgres:latest ports: - "5432:5432" networks: @@ -82,52 +22,59 @@ services: - POSTGRES_PASSWORD=db_pass - POSTGRES_DB=hyperswitch_db + redis-standalone: + image: redis:7 + labels: + - redis + networks: + - router_net + ports: + - "6379" + migration_runner: - image: rust:1.70 - command: "bash -c 'cargo install diesel_cli --no-default-features --features \"postgres\" && diesel migration --database-url postgres://db_user:db_pass@pg:5432/hyperswitch_db run'" + image: rust:latest + command: "bash -c 'cargo install diesel_cli --no-default-features --features postgres && diesel migration --database-url postgres://$${DATABASE_USER}:$${DATABASE_PASSWORD}@$${DATABASE_HOST}:$${DATABASE_PORT}/$${DATABASE_NAME} run'" working_dir: /app networks: - router_net volumes: - ./:/app + environment: + - DATABASE_USER=db_user + - DATABASE_PASSWORD=db_pass + - DATABASE_HOST=pg + - DATABASE_PORT=5432 + - DATABASE_NAME=hyperswitch_db + ### Application services hyperswitch-server: - image: rust:1.70 - command: cargo run -- -f ./config/docker_compose.toml - working_dir: /app + image: juspaydotin/hyperswitch-router:standalone + command: /local/bin/router -f /local/config/docker_compose.toml ports: - "8080:8080" networks: - router_net volumes: - - ./:/app - - cargo_cache:/cargo_cache - - cargo_build_cache:/cargo_build_cache - environment: - - CARGO_TARGET_DIR=/cargo_build_cache + - ./config:/local/config labels: logs: "promtail" healthcheck: test: curl --fail http://localhost:8080/health || exit 1 - interval: 100s + interval: 10s retries: 3 - start_period: 20s + start_period: 5s timeout: 10s hyperswitch-producer: - image: rust:1.70 - command: cargo run --bin scheduler -- -f ./config/docker_compose.toml - working_dir: /app + image: juspaydotin/hyperswitch-producer:standalone + command: /local/bin/scheduler -f /local/config/docker_compose.toml networks: - router_net profiles: - scheduler volumes: - - ./:/app - - cargo_cache:/cargo_cache - - p_cargo_build_cache:/cargo_build_cache + - ./config:/local/config environment: - - CARGO_TARGET_DIR=/cargo_build_cache - SCHEDULER_FLOW=producer depends_on: hyperswitch-consumer: @@ -136,39 +83,54 @@ services: logs: "promtail" hyperswitch-consumer: - image: rust:1.70 - command: cargo run --bin scheduler -- -f ./config/docker_compose.toml - working_dir: /app + image: juspaydotin/hyperswitch-consumer:standalone + command: /local/bin/scheduler -f /local/config/docker_compose.toml networks: - router_net profiles: - scheduler volumes: - - ./:/app - - cargo_cache:/cargo_cache - - c_cargo_build_cache:/cargo_build_cache + - ./config:/local/config environment: - - CARGO_TARGET_DIR=/cargo_build_cache - SCHEDULER_FLOW=consumer depends_on: hyperswitch-server: condition: service_started - labels: logs: "promtail" - healthcheck: test: (ps -e | grep scheduler) || exit 1 - interval: 120s + interval: 10s retries: 3 - 
start_period: 30s + start_period: 5s timeout: 10s + hyperswitch-drainer: + image: juspaydotin/hyperswitch-drainer:standalone + command: /local/bin/drainer -f /local/config/docker_compose.toml + deploy: + replicas: ${DRAINER_INSTANCE_COUNT:-1} + networks: + - router_net + profiles: + - full_kv + volumes: + - ./config:/local/config + restart: unless-stopped + depends_on: + hyperswitch-server: + condition: service_started + labels: + logs: "promtail" + + ### Clustered Redis setup redis-cluster: image: redis:7 deploy: replicas: ${REDIS_CLUSTER_COUNT:-3} command: redis-server /usr/local/etc/redis/redis.conf + profiles: + - clustered_redis volumes: - ./config/redis.conf:/usr/local/etc/redis/redis.conf labels: @@ -179,17 +141,10 @@ services: - "6379" - "16379" - redis-standalone: - image: redis:7 - labels: - - redis - networks: - - router_net - ports: - - "6379" - redis-init: image: redis:7 + profiles: + - clustered_redis depends_on: - redis-cluster networks: @@ -226,16 +181,62 @@ services: \ '" - redis-insight: - image: redislabs/redisinsight:latest + ### Monitoring + grafana: + image: grafana/grafana:latest + ports: + - "3000:3000" networks: - router_net profiles: - - full_kv + - monitoring + restart: unless-stopped + environment: + - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin + - GF_AUTH_ANONYMOUS_ENABLED=true + - GF_AUTH_BASIC_ENABLED=false + volumes: + - ./config/grafana.ini:/etc/grafana/grafana.ini + - ./config/grafana-datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yml + + promtail: + image: grafana/promtail:latest + volumes: + - ./logs:/var/log/router + - ./config:/etc/promtail + - /var/run/docker.sock:/var/run/docker.sock + command: -config.file=/etc/promtail/promtail.yaml + profiles: + - monitoring + networks: + - router_net + + loki: + image: grafana/loki:latest ports: - - "8001:8001" + - "3100" + command: -config.file=/etc/loki/loki.yaml + networks: + - router_net + profiles: + - monitoring volumes: - - redisinsight_store:/db + - ./config:/etc/loki + + otel-collector: + image: otel/opentelemetry-collector-contrib:latest + command: --config=/etc/otel-collector.yaml + networks: + - router_net + profiles: + - monitoring + volumes: + - ./config/otel-collector.yaml:/etc/otel-collector.yaml + ports: + - "4317" + - "8888" + - "8889" + prometheus: image: prom/prometheus:latest networks: @@ -261,25 +262,14 @@ services: - "3200" # tempo - "4317" # otlp grpc restart: unless-stopped - hyperswitch-drainer: - image: rust:1.70 - command: cargo run --bin drainer -- -f ./config/docker_compose.toml - working_dir: /app - deploy: - replicas: ${DRAINER_INSTANCE_COUNT:-1} + + redis-insight: + image: redislabs/redisinsight:latest networks: - router_net profiles: - full_kv + ports: + - "8001:8001" volumes: - - ./:/app - - cargo_cache:/cargo_cache - - cargo_build_cache:/cargo_build_cache - environment: - - CARGO_TARGET_DIR=/cargo_build_cache - restart: unless-stopped - depends_on: - hyperswitch-server: - condition: service_started - labels: - logs: "promtail" + - redisinsight_store:/db diff --git a/docs/try_local_system.md b/docs/try_local_system.md index 59df43f24810..a9cd080f26d5 100644 --- a/docs/try_local_system.md +++ b/docs/try_local_system.md @@ -1,23 +1,20 @@ # Try out hyperswitch on your system -**NOTE:** -This guide is aimed at users and developers who wish to set up hyperswitch on -their local systems and requires quite some time and effort. 
-If you'd prefer trying out hyperswitch quickly without the hassle of setting up -all dependencies, you can [try out hyperswitch sandbox environment][try-sandbox]. - -There are two options to set up hyperswitch on your system: - -1. Use Docker Compose -2. Set up a Rust environment and other dependencies on your system +The simplest way to run hyperswitch locally is +[with Docker Compose](#run-hyperswitch-using-docker-compose) by pulling the +latest images from Docker Hub. +However, if you're willing to modify the code and run it, or are a developer +contributing to hyperswitch, then you can either +[set up a development environment using Docker Compose](#set-up-a-development-environment-using-docker-compose), +or [set up a Rust environment on your system](#set-up-a-rust-environment-and-other-dependencies). Check the Table Of Contents to jump to the relevant section. -[try-sandbox]: ./try_sandbox.md - **Table Of Contents:** -- [Set up hyperswitch using Docker Compose](#set-up-hyperswitch-using-docker-compose) +- [Run hyperswitch using Docker Compose](#run-hyperswitch-using-docker-compose) + - [Run the scheduler and monitoring services](#run-the-scheduler-and-monitoring-services) +- [Set up a development environment using Docker Compose](#set-up-a-development-environment-using-docker-compose) - [Set up a Rust environment and other dependencies](#set-up-a-rust-environment-and-other-dependencies) - [Set up dependencies on Ubuntu-based systems](#set-up-dependencies-on-ubuntu-based-systems) - [Set up dependencies on Windows (Ubuntu on WSL2)](#set-up-dependencies-on-windows-ubuntu-on-wsl2) @@ -33,7 +30,7 @@ Check the Table Of Contents to jump to the relevant section. - [Create a Payment](#create-a-payment) - [Create a Refund](#create-a-refund) -## Set up hyperswitch using Docker Compose +## Run hyperswitch using Docker Compose 1. Install [Docker Compose][docker-compose-install]. 2. Clone the repository and switch to the project directory: @@ -54,15 +51,15 @@ Check the Table Of Contents to jump to the relevant section. docker compose up -d ``` -5. Run database migrations: - - ```shell - docker compose run hyperswitch-server bash -c \ - "cargo install diesel_cli && \ - diesel migration --database-url postgres://db_user:db_pass@pg:5432/hyperswitch_db run" - ``` + This should run the hyperswitch payments router, the primary component within + hyperswitch. + Wait for the `migration_runner` container to finish installing `diesel_cli` + and running migrations (approximately 2 minutes) before proceeding further. + You can also choose to + [run the scheduler and monitoring services](#run-the-scheduler-and-monitoring-services) + in addition to the payments router. -6. Verify that the server is up and running by hitting the health endpoint: +5. Verify that the server is up and running by hitting the health endpoint: ```shell curl --head --request GET 'http://localhost:8080/health' @@ -71,9 +68,86 @@ Check the Table Of Contents to jump to the relevant section. If the command returned a `200 OK` status code, proceed with [trying out our APIs](#try-out-our-apis). +### Run the scheduler and monitoring services + +You can run the scheduler and monitoring services by specifying suitable profile +names to the above Docker Compose command. +To understand more about the hyperswitch architecture and the components +involved, check out the [architecture document][architecture]. 
+ +- To run the scheduler components (consumer and producer), you can specify + `--profile scheduler`: + + ```shell + docker compose --profile scheduler up -d + ``` + +- To run the monitoring services (Grafana, Promtail, Loki, Prometheus and Tempo), + you can specify `--profile monitoring`: + + ```shell + docker compose --profile monitoring up -d + ``` + + You can then access Grafana at `http://localhost:3000` and view application + logs using the "Explore" tab, select Loki as the data source, and select the + container to query logs from. + +- You can also specify multiple profile names by specifying the `--profile` flag + multiple times. + To run both the scheduler components and monitoring services, the Docker + Compose command would be: + + ```shell + docker compose --profile scheduler --profile monitoring up -d + ``` + +Once the services have been confirmed to be up and running, you can proceed with +[trying out our APIs](#try-out-our-apis) + [docker-compose-install]: https://docs.docker.com/compose/install/ [docker-compose-config]: /config/docker_compose.toml [docker-compose-yml]: /docker-compose.yml +[architecture]: /docs/architecture.md + +## Set up a development environment using Docker Compose + +1. Install [Docker Compose][docker-compose-install]. +2. Clone the repository and switch to the project directory: + + ```shell + git clone https://github.com/juspay/hyperswitch + cd hyperswitch + ``` + +3. (Optional) Configure the application using the + [`config/docker_compose.toml`][docker-compose-config] file. + The provided configuration should work as is. + If you do update the `docker_compose.toml` file, ensure to also update the + corresponding values in the [`docker-compose.yml`][docker-compose-yml] file. +4. Start all the services using Docker Compose: + + ```shell + docker compose --file docker-compose-development.yml up -d + ``` + + This will compile the payments router, the primary component within + hyperswitch and then start it. + Depending on the specifications of your machine, compilation can take + around 15 minutes. + +5. (Optional) You can also choose to + [start the scheduler and/or monitoring services](#run-the-scheduler-and-monitoring-services) + in addition to the payments router. + +6. Verify that the server is up and running by hitting the health endpoint: + + ```shell + curl --head --request GET 'http://localhost:8080/health' + ``` + + If the command returned a `200 OK` status code, proceed with + [trying out our APIs](#try-out-our-apis). ## Set up a Rust environment and other dependencies @@ -134,7 +208,7 @@ for your distribution and follow along. 4. Install `diesel_cli` using `cargo`: ```shell - cargo install diesel_cli --no-default-features --features "postgres" + cargo install diesel_cli --no-default-features --features postgres ``` 5. Make sure your system has the `pkg-config` package and OpenSSL installed: @@ -224,7 +298,7 @@ packages for your distribution and follow along. 6. Install `diesel_cli` using `cargo`: ```shell - cargo install diesel_cli --no-default-features --features "postgres" + cargo install diesel_cli --no-default-features --features postgres ``` 7. Make sure your system has the `pkg-config` package and OpenSSL installed: @@ -260,7 +334,7 @@ You can opt to use your favorite package manager instead. 4. Install `diesel_cli` using `cargo`: ```shell - cargo install diesel_cli --no-default-features --features "postgres" + cargo install diesel_cli --no-default-features --features postgres ``` 5. 
Install OpenSSL with `winget`: @@ -322,7 +396,7 @@ You can opt to use your favorite package manager instead. 4. Install `diesel_cli` using `cargo`: ```shell - cargo install diesel_cli --no-default-features --features "postgres" + cargo install diesel_cli --no-default-features --features postgres ``` If linking `diesel_cli` fails due to missing `libpq` (if the error message is @@ -333,7 +407,7 @@ You can opt to use your favorite package manager instead. brew install libpq export PQ_LIB_DIR="$(brew --prefix libpq)/lib" - cargo install diesel_cli --no-default-features --features "postgres" + cargo install diesel_cli --no-default-features --features postgres ``` You may also choose to persist the value of `PQ_LIB_DIR` in your shell
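The macOS instructions above end by suggesting that the value of `PQ_LIB_DIR` can be persisted in the shell rather than exported in every session. As a sketch only, assuming a zsh setup (the exact startup file depends on your shell), that persistence could look like:

```shell
# Record the libpq library path in the shell startup file so that future
# `cargo install diesel_cli` runs can link against libpq without re-exporting it.
echo 'export PQ_LIB_DIR="$(brew --prefix libpq)/lib"' >> ~/.zshrc
source ~/.zshrc
```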