Anthos On Prem Apigee Hybrid 1.3 AHR Manual

Install Apigee Hybrid 1.3 on Anthos GKE On-Prem 1.3

After you have prepared your Anthos on-prem environment, you can install Apigee hybrid using AHR. In a nutshell, AHR is a wrapper around the 'manual' gcloud, kubectl, and curl commands from the Apigee Hybrid 1.3 On-Prem installation documentation. You can also look at the installation of Apigee Hybrid 1.2 to compare the documented steps with the AHR approach.

Task 1. Prepare working environment

We are going to define environment variables that will help us manage the installation process and reuse copy-and-paste commands with minimal editing.

  1. Log into your netservicesvm.
https://netservices-XXX.YYY.ZZZ.<domain>.net/
  2. Verify that your GCP project is still configured correctly. If not, configure it using the gcloud config set project <project-id> command.
$ gcloud config get-value project

apigee-hybrid-anthos-onprem

3. While in the ~ directory, clone the AHR project

cd ~
git clone https://github.com/yuriylesyuk/ahr.git

4. Configure the AHR_HOME variable and add the ahr-*-ctl scripts directory to your session PATH.

export AHR_HOME=~/ahr
export PATH=$AHR_HOME/bin:$PATH
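
These exports apply only to the current shell session. If you expect to open new terminal sessions during the installation, you can optionally persist them; the snippet below assumes bash and appends to ~/.bashrc.

echo 'export AHR_HOME=~/ahr' >> ~/.bashrc
echo 'export PATH=$AHR_HOME/bin:$PATH' >> ~/.bashrc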

5. jq is a prerequisite for some steps. Let's install it (apt if your working computer runs Debian; a yum alternative for CentOS is shown after the command).

sudo apt install -y jq
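
If your working computer runs CentOS instead of Debian, the equivalent command would be:

sudo yum install -y jq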

Task 2. Configure Environment Variables that control the Apigee Hybrid installation process

1. Configure the HYBRID_HOME directory location and the HYBRID_ENV environment variables configuration file

export HYBRID_HOME=~/apigee-hybrid-install
export HYBRID_ENV=$HYBRID_HOME/hybrid-130.env

2. Create the $HYBRID_HOME directory and copy the single-zone small hybrid 1.3 template

mkdir -p $HYBRID_HOME
cp $AHR_HOME/examples/hybrid-sz-s-1.3.sh $HYBRID_ENV

3. Populate the $PROJECT variable and verify it

export PROJECT=$(gcloud config get-value project)
echo $PROJECT

Adjust $HYBRID_ENV configuration file

1. Open $HYBRID_ENV in your favourite editor to adjust some configuration values

vi $HYBRID_ENV

2. Define the Region and Analytics Region for your control plane GCP project.

export REGION=us-central1
export AX_REGION=us-central1

3. Define the name of your on-prem cluster

export CLUSTER=user-cluster1
export CLUSTER_ZONE=on-prem

4. Define the hostname of your API endpoint

export RUNTIME_HOST_ALIAS=api.onprem.exco.com

5. Configure or provision a load balancer for the Istio Ingress Gateway

Depending on your on-prem configuration, you either use the runtime IP that your load balancer (e.g., F5) creates automatically, or you pre-provision a load balancer VIP. In either case, you have an IP address that you need to configure as the RUNTIME_IP variable.

export RUNTIME_IP=10.0.10.8
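
Before moving on, you can optionally double-check the values you just edited in the file; the grep pattern below is only an illustration and can be adjusted.

grep -E 'REGION|CLUSTER|RUNTIME_HOST_ALIAS|RUNTIME_IP' $HYBRID_ENV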

Source Environment Variables that comprise Hybrid configuration

source $HYBRID_ENV
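
As a quick sanity check, you can echo a few of the variables the file defines; $ORG, $ENV, and $ENV_GROUP come from the template and are used in later steps.

echo "$PROJECT $ORG $ENV $ENV_GROUP $REGION $AX_REGION"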

Install Hybrid Prerequisite components

1. Enable the required Google APIs

ahr-verify-ctl api-enable
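
If you want to confirm which APIs are now enabled in your project, you can list them with gcloud; this is an optional verification, not part of the AHR flow.

gcloud services list --enabled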

Install certificate manager

1. Install cert-manager

echo $CERT_MANAGER_MANIFEST
kubectl apply --validate=false -f $CERT_MANAGER_MANIFEST

2. Check the cert-manager workload pods

kubectl get pod -n cert-manager

Output

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6b64ff88-ntt7r                1/1     Running   2          23h
cert-manager-cainjector-6cc9dccc58-7kjm9   1/1     Running   0          22h
cert-manager-webhook-79c9db9b9f-6c7n4      1/1     Running   2          23h

3. Check the cert-manager services

kubectl get svc -n cert-manager

Output

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
cert-manager           ClusterIP   10.98.31.48      <none>        9402/TCP   23h
cert-manager-webhook   ClusterIP   10.102.198.114   <none>        443/TCP    23h

Install Anthos Service Mesh

NOTE: See https://cloud.google.com/service-mesh/docs/archive/1.5/docs/gke-on-prem-install for details

1. Fetch the ASM installation files

ahr-cluster-ctl asm-get $ASM_VERSION

2. Define ASM_HOME and add the ASM bin directory to the PATH by copying and pasting the export statements provided in the previous command's output.

export ASM_HOME=$HYBRID_HOME/istio-$ASM_VERSION
export PATH=$ASM_HOME/bin:$PATH

3. Derive the ASM release from the ASM version so you can correctly pick up the istio operator template file

export ASM_RELEASE=${ASM_VERSION/.[[:digit:]]-asm.*/}-asm

echo $ASM_RELEASE

Output:

1.6-asm
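
The parameter expansion above strips the patch and revision parts of the version string. As an illustration, assuming a hypothetical version value such as 1.6.5-asm.7:

V=1.6.5-asm.7
echo ${V/.[[:digit:]]-asm.*/}-asm    # prints 1.6-asm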

4. For an on-prem Anthos installation, we need to use the asm-multicloud profile. ASM_CONFIG is an IstioOperator manifest that defines the parameters of the ASM installation.

ahr-cluster-ctl template $AHR_HOME/templates/istio-operator-$ASM_RELEASE-multicloud.yaml > $ASM_CONFIG

5. Install ASM

istioctl manifest apply --set profile=asm-multicloud -f $ASM_CONFIG

Output:

Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
  Waiting for resources to become ready...
- Applying manifest for component IngressGateways...
✔ Finished applying manifest for component IngressGateways.


✔ Installation complete

6. Check that the control plane pods in istio-system are up:

kubectl get pod -n istio-system

Output

NAME                                    READY   STATUS    RESTARTS   AGE
grafana-7c6b5bbf9-7snd9                 1/1     Running   0          22h
istio-ingressgateway-85546dd67f-8rf8t   1/1     Running   0          21h
istiod-7dcf69b899-6w29r                 1/1     Running   0          22h
istiod-7dcf69b899-9mxq5                 1/1     Running   0          22h
kiali-85dc7cdc48-25fmr                  1/1     Running   2          22h
prometheus-66bf5f56c8-9cfbh             2/2     Running   3          22h

7. Check the service configuration:

kubectl get svc -n istio-system istio-ingressgateway

Output:

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.100.45.80   10.0.10.8     15020:32714/TCP,80:30387/TCP,443:30060/TCP,15030:32632/TCP,31400:32352/TCP,15443:30783/TCP   4m

NOTE: Make sure that cert-manager-cainjector functions correctly and doesn't crash.

If required, adjust its yaml to turn off leader election, since only a single pod (1/1) is running; one way to do this is shown after the flag below.

- --leader-elect=false
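
One way to apply this change, assuming the deployment name from the pod listing above, is to edit the deployment and add the flag to the container args:

kubectl edit deployment cert-manager-cainjector -n cert-manager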

Apigee Hybrid Runtime Installation

1. Get the apigeectl distribution

ahr-runtime-ctl get

2. Configure APIGEECTL_HOME and add it to your session PATH

export APIGEECTL_HOME=$HYBRID_HOME/apigeectl_1.3.0-c9606a3_linux_64
export PATH=$APIGEECTL_HOME:$PATH
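
You can verify that apigeectl is on the PATH and reports the expected version; the exact output format may differ between releases.

apigeectl version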

Create Organization, Environment, and Environment Group

1. We need the wait-for-ready() function defined in ahr-lib.sh to make sure that the asynchronous apigeectl init|apply commands successfully provision all components. Let's source it

source $AHR_HOME/bin/ahr-lib.sh

2. Verify that the Organization name we are going to use passes validity checks

ahr-runtime-ctl org-validate-name $ORG

3. Create the Organization and define its Analytics region

ahr-runtime-ctl org-create $ORG --ax-region $AX_REGION

4. Create Environment $ENV

ahr-runtime-ctl env-create $ENV

5. Create Environment Group $ENV_GROUP

ahr-runtime-ctl env-group-create $ENV_GROUP $RUNTIME_HOST_ALIAS

6. Assign environment $ENV to the environment group $ENV_GROUP

ahr-runtime-ctl env-group-assign $ORG $ENV_GROUP $ENV

Create SAs, bind their Roles, and Create JSON Keys

ahr-sa-ctl create-sa all
ahr-sa-ctl create-key all
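
If you want to confirm that the service accounts were created in your project, you can list them with gcloud; this is an optional verification.

gcloud iam service-accounts list --project $PROJECT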

Configure Synchronizer

1. Check that apigeeconnect is enabled

ahr-runtime-ctl org-config

2. Set the synchronizer service account

ahr-runtime-ctl setsync $SYNCHRONIZER_SA_ID

3. Check the organization configuration again to verify that the synchronizer SA is configured correctly

ahr-runtime-ctl org-config

Create Self-Signed Certificate and Key

1. OPTIONAL: Unless you have already provisioned an ingress gateway certificate and key, you can create a self-signed certificate. If you have, make sure you set the correct variables: RUNTIME_SSL_CERT and RUNTIME_SSL_KEY

ahr-verify-ctl cert-create-ssc $RUNTIME_SSL_CERT $RUNTIME_SSL_KEY $RUNTIME_HOST_ALIAS

2. Check certificate correctness

openssl x509 -in $RUNTIME_SSL_CERT -text -noout
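
You can also check that the private key matches the certificate by comparing their public key moduli; this assumes the key is RSA, which is typical for a self-signed certificate created this way.

openssl x509 -noout -modulus -in $RUNTIME_SSL_CERT | openssl md5
openssl rsa -noout -modulus -in $RUNTIME_SSL_KEY | openssl md5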

Install Apigee Hybrid Runtime

1. Generate the Runtime Configuration yaml file

ahr-runtime-ctl template $AHR_HOME/templates/overrides-small-130-template.yaml > $RUNTIME_CONFIG

2. Inspect the generated runtime configuration file

vi $RUNTIME_CONFIG

3. Install the required hybrid auxiliary components

ahr-runtime-ctl apigeectl init -f $RUNTIME_CONFIG

4. Wait until they are fully ready

ahr-runtime-ctl apigeectl wait-for-ready -f $RUNTIME_CONFIG

5. Install the hybrid runtime components

ahr-runtime-ctl apigeectl apply -f $RUNTIME_CONFIG

6. Wait until they are fully ready

ahr-runtime-ctl apigeectl wait-for-ready -f $RUNTIME_CONFIG

Create and deploy a test proxy

$AHR_HOME/proxies/deploy.sh

Send Trace Request

curl --cacert $RUNTIME_SSL_CERT https://$RUNTIME_HOST_ALIAS/ping -v --resolve "$RUNTIME_HOST_ALIAS:443:$RUNTIME_IP" --http1.1

Resource Rebalancing

If you're working with a demo or dev cluster that is constrained in resources (vCPUs and/or RAM), you can adjust the resource requirements.

1. ASM Components

vi $ASM_CONFIG

istio-operator.yaml:

          hpaSpec:
            minReplicas: 1
            maxReplicas: 1
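
After changing the replica settings, re-apply the ASM manifest so the new values take effect; this is the same command used for the original installation.

istioctl manifest apply --set profile=asm-multicloud -f $ASM_CONFIG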

2. Apigee Hybrid Runtime components

vi $RUNTIME_CONFIG

authz:
  resources:
    requests:
      cpu: 100m
      memory: 64Mi

runtime:
  resources:
    requests:
      cpu: 250m
      memory: 256Mi

cassandra:
  resources:
    requests:
      cpu: 200m
      memory: 300Mi

udca:
  resources:
    requests:
      cpu: 150m
      memory: 128Mi
  fluentd:
    resources:
      requests:
        cpu: 200m
        memory: 128Mi
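
After adjusting the resource requests, re-apply the runtime configuration and wait for the components to become ready again, using the same commands as in the installation steps above.

ahr-runtime-ctl apigeectl apply -f $RUNTIME_CONFIG
ahr-runtime-ctl apigeectl wait-for-ready -f $RUNTIME_CONFIG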