# Anthos On-Prem Apigee Hybrid 1.3 AHR Manual
> TL;DR: not fit for human consumption yet.
After you have prepared your Anthos on-prem environment, you can install Apigee hybrid using AHR. In a nutshell, AHR is a wrapper around the 'manual' gcloud, kubectl, and curl commands from the Apigee Hybrid 1.3 on-prem installation documentation. You can also look at the Apigee Hybrid 1.2 installation to compare the documented steps with the AHR approach.
We are going to define environment variables that will help us manage the installation process and reuse copy-and-paste commands with minimal editing.
- Log into your netservicesvm.
https://netservices-XXX.YYY.ZZZ.<domain>.net/
- Verify that your GCP project is still configured correctly. If not, set it with the `gcloud config set project <project-id>` command.
$ gcloud config get-value project
apigee-hybrid-anthos-onprem
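If the previous command prints nothing, the project is not configured; a minimal guard along these lines can catch that early (this sketch assumes an unset project yields empty output, and `<project-id>` stands for your own project ID):
```
# Fail fast if no project is configured; <project-id> is a placeholder.
PROJECT_CHECK=$(gcloud config get-value project 2>/dev/null)
[ -n "$PROJECT_CHECK" ] || echo "No project set. Run: gcloud config set project <project-id>"
```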
?. While in the ~ directory, clone the AHR project
cd ~
git clone https://github.com/yuriylesyuk/ahr.git
?. Configure the AHR_HOME variable and add the ahr-*-ctl scripts directory to your session PATH.
export AHR_HOME=~/ahr
export PATH=$AHR_HOME/bin:$PATH
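To confirm that the AHR wrapper scripts now resolve on the PATH, and optionally to persist the variables across sessions, something like the following can be used; writing to ~/.bashrc is an assumption about your shell setup, not part of the AHR instructions:
```
# List the ahr-*-ctl wrapper scripts and confirm they resolve on the PATH.
ls $AHR_HOME/bin
command -v ahr-runtime-ctl ahr-cluster-ctl ahr-verify-ctl

# Optional: persist the variables for future sessions (assumes bash).
echo "export AHR_HOME=$AHR_HOME" >> ~/.bashrc
echo 'export PATH=$AHR_HOME/bin:$PATH' >> ~/.bashrc
```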
?. jq is a prerequisite for some steps. Let's install it (use yum if you're using CentOS as your working computer, or apt if Debian):
sudo apt install -y jq
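A quick sanity check that jq is installed and can parse JSON:
```
# Print the jq version and parse a trivial document.
jq --version
echo '{"status":"ok"}' | jq -r .status
```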
?. Configure the HYBRID_HOME directory location and the HYBRID_ENV environment variables configuration file
export HYBRID_HOME=~/apigee-hybrid-install
export HYBRID_ENV=$HYBRID_HOME/hybrid-130.env
?. Create the $HYBRID_HOME directory and copy the single-zone small hybrid 1.3 template
mkdir -p $HYBRID_HOME
cp $AHR_HOME/examples/hybrid-sz-s-1.3.sh $HYBRID_ENV
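If you are curious which other environment templates ship with AHR, listing the examples directory is harmless; file names other than hybrid-sz-s-1.3.sh will vary between AHR versions:
```
# Inspect the bundled environment templates; hybrid-sz-s-1.3.sh is the one copied above.
ls $AHR_HOME/examples/
```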
?. Populate the $PROJECT variable and verify it
export PROJECT=$(gcloud config get-value project)
echo $PROJECT
?. Open $HYBRID_ENV in your favourite editor to adjust some configuration values
vi $HYBRID_ENV
?. Define the region and the analytics region for your control plane GCP project.
export REGION=us-central1
export AX_REGION=us-central1
?. Define the name of your on-prem cluster
export CLUSTER=user-cluster1
export CLUSTER_ZONE=on-prem
?. Define the hostname of your API endpoint
export RUNTIME_HOST_ALIAS=api.onprem.exco.com
?. Configure or provision a load balancer for the Istio ingress gateway
Depending on your on-prem configuration, you either define a runtime IP that will be created by your load balancer (i.e., F5) automatically, or you pre-provision a load balancer VIP. In either case, you have an IP address that you need to configure as the RUNTIME_IP variable.
export RUNTIME_IP=10.0.10.8
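If you prefer a scripted edit over vi, the values from the previous steps can be patched into the env file with sed; this is only a sketch, and it assumes $HYBRID_ENV contains `export VAR=...` lines for these exact variable names, which may differ between AHR versions:
```
# Patch selected values into the env file non-interactively.
# Assumes lines of the form `export VAR=...` exist in $HYBRID_ENV.
for var in REGION AX_REGION CLUSTER CLUSTER_ZONE RUNTIME_HOST_ALIAS RUNTIME_IP; do
  sed -i "s|^export $var=.*|export $var=${!var}|" $HYBRID_ENV
done
```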
?. Source the configuration file
source $HYBRID_ENV
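After sourcing, a quick loop can confirm that the variables this walkthrough relies on came through non-empty (the list below is just the subset used in later steps):
```
# Print key variables; empty values point to a problem in $HYBRID_ENV.
for v in PROJECT REGION AX_REGION CLUSTER RUNTIME_HOST_ALIAS RUNTIME_IP CERT_MANAGER_MANIFEST ASM_VERSION ASM_CONFIG; do
  echo "$v=${!v}"
done
```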
## Install Hybrid Prerequisite components
?. Enable the required Google APIs
ahr-verify-ctl api-enable
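ahr-verify-ctl api-enable wraps the corresponding gcloud calls, so the result can also be confirmed independently; for example, listing the Apigee-related services that are now enabled (the filter expression is just one way to narrow the output):
```
# List enabled services and narrow the output to the Apigee APIs.
gcloud services list --enabled --project $PROJECT --filter="name:apigee"
```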
### Install certificate manager
?. Install certificate manager
echo $CERT_MANAGER_MANIFEST
kubectl apply --validate=false -f $CERT_MANAGER_MANIFEST
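Before checking individual pods, you can let kubectl wait for the cert-manager deployments to report Available; the 300s timeout is an arbitrary choice, not a documented requirement:
```
# Block until all cert-manager deployments are Available (or the timeout expires).
kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=300s
```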
?. Check cert-manager workload pods
kubectl get pod -n cert-manager
Output:
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6b64ff88-ntt7r                1/1     Running   2          23h
cert-manager-cainjector-6cc9dccc58-7kjm9   1/1     Running   0          22h
cert-manager-webhook-79c9db9b9f-6c7n4      1/1     Running   2          23h
?. Check cert-manager services
kubectl get svc -n cert-manager
Output:
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
cert-manager           ClusterIP   10.98.31.48      <none>        9402/TCP   23h
cert-manager-webhook   ClusterIP   10.102.198.114   <none>        443/TCP    23h
### Install Anthos Service Mesh
> NOTE: See https://cloud.google.com/service-mesh/docs/archive/1.5/docs/gke-on-prem-install for details
?. Fetch ASM installation files
ahr-cluster-ctl asm-get $ASM_VERSION
?. Define ASM_HOME and add the ASM bin directory to the PATH by copying and pasting the export statements from the previous command output.
export ASM_HOME=$HYBRID_HOME/istio-$ASM_VERSION
export PATH=$ASM_HOME/bin:$PATH
?. Derive the ASM release from the ASM version so you can pick up the correct Istio operator template file
export ASM_RELEASE=${ASM_VERSION/.[[:digit:]]-asm.*/}-asm
echo $ASM_RELEASE
Output:
1.6-asm
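The parameter expansion above strips everything from the patch component onward and re-appends -asm; a worked example with a hypothetical ASM_VERSION value:
```
# Example: deriving the release string from a full ASM version (hypothetical value).
ASM_VERSION_EXAMPLE=1.6.8-asm.9
echo ${ASM_VERSION_EXAMPLE/.[[:digit:]]-asm.*/}-asm    # prints: 1.6-asm
```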
?. For an on-prem Anthos installation, we need to use the asm-multicloud profile. ASM_CONFIG is an IstioOperator manifest that defines the parameters of the ASM installation.
ahr-cluster-ctl template $AHR_HOME/templates/istio-operator-$ASM_RELEASE-multicloud.yaml > $ASM_CONFIG
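Before applying the manifest, it can be worth checking that the generated IstioOperator picked up your runtime VIP; whether this particular template embeds $RUNTIME_IP is an assumption, so treat a miss as a prompt to review the file manually:
```
# Look for the runtime VIP in the generated IstioOperator manifest.
grep -n "$RUNTIME_IP" $ASM_CONFIG || echo "RUNTIME_IP not found in $ASM_CONFIG; review the manifest manually"
```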
?. Install ASM
istioctl manifest apply --set profile=asm-multicloud -f $ASM_CONFIG
Output:
Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
  Waiting for resources to become ready...
- Applying manifest for component IngressGateways...
✔ Finished applying manifest for component IngressGateways.
✔ Installation complete
?. Check that the control plane pods in istio-system are up:
kubectl get pod -n istio-system
Output:
NAME                                    READY   STATUS    RESTARTS   AGE
grafana-7c6b5bbf9-7snd9                 1/1     Running   0          22h
istio-ingressgateway-85546dd67f-8rf8t   1/1     Running   0          21h
istiod-7dcf69b899-6w29r                 1/1     Running   0          22h
istiod-7dcf69b899-9mxq5                 1/1     Running   0          22h
kiali-85dc7cdc48-25fmr                  1/1     Running   2          22h
prometheus-66bf5f56c8-9cfbh             2/2     Running   3          22h
?. Check the service configuration:
kubectl get svc -n istio-system istio-ingressgateway
Output:
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                       AGE
istio-ingressgateway   LoadBalancer   10.100.45.80   10.0.10.8     15020:32714/TCP,80:30387/TCP,443:30060/TCP,15030:32632/TCP,31400:32352/TCP,15443:30783/TCP   4m
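As a final check, the EXTERNAL-IP reported above should match the RUNTIME_IP you configured earlier; a small sketch that extracts it with a JSONPath query and compares the two:
```
# Extract the load balancer IP assigned to the ingress gateway and compare it with RUNTIME_IP.
INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "ingress: $INGRESS_IP, expected: $RUNTIME_IP"
[ "$INGRESS_IP" = "$RUNTIME_IP" ] || echo "WARNING: ingress IP does not match RUNTIME_IP"
```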