Setting Up a Staging Environment on Google Cloud
- The Staging environment mirrors the Production environment almost exactly, so this document covers only the differences. Please read 在 Google Cloud 架設 Production 環境 (Setting Up a Production Environment on Google Cloud) first.
- Our Staging design gives every branch its own Staging Service, with the following rules:
  - staging.sense.tw serves the master branch
  - {NAME}.staging.sense.tw serves the {NAME} branch
  - The corresponding API hosts are staging.api.sense.tw and {NAME}.staging.api.sense.tw
  - via has only a single master branch, served at staging.via.sense.tw
- Every Staging Service fetches the latest database backup to build its own database environment, and stores sessions in memory
```dot
digraph G {
  "node0" [
    label = "User"
    shape = "record"
    gradientangle = "90"
  ];
  "node1" [
    label = "<f0>Cloud CDN | <f1>Cloud \lLoad Balancing"
    shape = "record"
    gradientangle = "90"
  ];
  subgraph cluster_gke {
    label = "GKE Cluster"
    labelloc = "b"
    via[label="VIA-Staging\nport/30101"]
    proxy[label="Staging Proxy\nport/30606"]
  }
  "node0" -> "node1":f0 [
    id = 0
  ];
  "node1":f1 -> {via proxy}
}
```
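To make the diagram concrete, here is a minimal sketch of how the Staging Proxy could be exposed to the load balancer as a Kubernetes NodePort Service. Only nodePort 30606 comes from this page; the service name, selector, and the in-cluster/container ports are assumptions.

```yaml
# Hypothetical NodePort Service for the Staging Proxy in the diagram above.
apiVersion: v1
kind: Service
metadata:
  name: staging-proxy
spec:
  type: NodePort
  selector:
    app: staging-proxy      # assumed pod label
  ports:
    - name: http
      port: 80              # assumed in-cluster port
      targetPort: 8080      # assumed container port
      nodePort: 30606       # the port opened for Cloud Load Balancing
```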
- DNS records to set up
  - staging.via.sense.tw
  - staging.sense.tw
  - staging.api.sense.tw
  - *.staging.sense.tw
  - *.staging.api.sense.tw
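If these records are managed declaratively, one wildcard entry could look like the Config Connector sketch below; the managed zone name and IP address are placeholders, and setting the records with gcloud or the console works just as well.

```yaml
# Hypothetical declarative record for *.staging.sense.tw; zone name and
# IP address are placeholders, not values from this document.
apiVersion: dns.cnrm.cloud.google.com/v1beta1
kind: DNSRecordSet
metadata:
  name: staging-wildcard
spec:
  name: "*.staging.sense.tw."
  type: "A"
  ttl: 300
  managedZoneRef:
    name: sense-tw-zone     # assumed zone name
  rrdatas:
    - "203.0.113.10"        # placeholder Load Balancing IP
```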
- GKE Cluster
  - Following the Firewall Rule section, open the two ports 30606 and 30101 (sketched below)
  - Following the Instance Group section, set up Port Name Mapping for the same two ports 30606 and 30101
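As a sketch of the firewall step, the rule could be declared as a Deployment Manager resource like the one below. The rule name and the default network are assumptions; the source ranges are Google's published load balancer and health-check ranges, which this page does not state explicitly.

```yaml
# Hypothetical Deployment Manager resource opening the two staging ports.
resources:
  - name: staging-allow-lb-health
    type: compute.v1.firewall
    properties:
      network: global/networks/default   # assumed network
      sourceRanges:
        - 130.211.0.0/22   # Google Cloud load balancer / health-check ranges
        - 35.191.0.0/16
      allowed:
        - IPProtocol: TCP
          ports: ["30606", "30101"]
```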
- Cloud Load Balancing / Cloud CDN
  - Following the Google Cloud Load Balancing documentation, create:
    - A Backend Service for each of the two ports
    - A mapping from each DNS name to its Backend Service
  - Following the Creating Health Checks documentation, create one health check for each of the two ports (sketched below)
    - Protocol: HTTP
    - Path: /health
  - Following the Google Cloud Load Balancing documentation, create the load balancer
    - Point the DNS records at the Cloud Load Balancing IP address
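A minimal sketch of one of the two health checks as a Deployment Manager resource; the resource name is a placeholder, while the port, protocol, and path follow the settings above. The second check is identical except for port 30101.

```yaml
# Hypothetical health check for the staging proxy port.
resources:
  - name: staging-proxy-health-check
    type: compute.v1.httpHealthCheck
    properties:
      port: 30606
      requestPath: /health
```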
- The automated build environment watches Git commits to decide whether to run a build automatically
  - Following the Google Cloud Build documentation, create the triggers below (a build config is sketched after this list)
  - SenseTW
    - Triggered by a push to any branch
    - The cloudbuild.yaml lives at builder/cloudbuild/sensemap-stage.yaml
    - Environment variables
      - _CLOUDSDK_COMPUTE_ZONE is the zone the Google Kubernetes Engine cluster runs in
      - _CLOUDSDK_CONTAINER_CLUSTER is the name of the Google Kubernetes Engine cluster
      - For the other environment variables, see 如何處理部署設定及程序#環境變數 (How to Handle Deployment Settings and Procedures#Environment Variables); the REDIS-related ones are not needed
  - Client
    - Triggered by a push to the master branch
    - The cloudbuild.yaml lives at gcloud/cloudbuild.stage.yaml
  - via
    - Triggered by a push to the master branch
    - The cloudbuild.yaml lives at gcloud/cloudbuild.stage.yaml
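For orientation, here is a minimal sketch of what such a staging cloudbuild.yaml might contain; the real configs live at the paths listed above, and the image name and manifest path here are assumptions. The two substitutions are consumed by the kubectl builder as CLOUDSDK_* environment variables.

```yaml
# Minimal sketch of a staging build config; not the repository's actual file.
steps:
  # Build a per-branch image ($BRANCH_NAME is a built-in trigger substitution)
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/sensemap:$BRANCH_NAME', '.']
  # Deploy to the staging cluster; the manifest path is an assumption
  - name: gcr.io/cloud-builders/kubectl
    args: ['apply', '-f', 'k8s/staging/']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
images:
  - 'gcr.io/$PROJECT_ID/sensemap:$BRANCH_NAME'
```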
```dot
digraph G {
  "node0" [
    label = "Cloud \lLoad Balancing"
    shape = "record"
    gradientangle = "90"
  ];
  subgraph cluster_kubernetes {
    label = "GKE Cluster"
    labelloc = "b"
    subgraph cluster_gkeservice {
      label = "Service"
      labelloc = "b"
      proxy[label="staging-proxy\nport/30606"]
      sensemapService[label="sensemap-staging-\n{BRANCH_NAME}\nport/30600"]
      viaService[label="viaserver-stage\nport/30101"]
      proxy -> sensemapService
    }
    subgraph cluster_sensemap {
      label = "Sensemap Workload (Port/6000)"
      labelloc = "b"
      sensemapPods0
      sensemapPods1
    }
    subgraph cluster_via {
      label = "via Workload (Port/19080)"
      labelloc = "b"
      viaPods0
    }
  }
  node0 -> {proxy viaService}
  viaService -> viaPods0
  sensemapService -> {sensemapPods0 sensemapPods1}
}
```
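As one concrete example from this diagram, the viaserver-stage Service could be declared as follows; nodePort 30101 and the workload port 19080 come from the diagram, while the selector and metadata are assumptions.

```yaml
# Hypothetical manifest for the viaserver-stage Service in the diagram.
apiVersion: v1
kind: Service
metadata:
  name: viaserver-stage
spec:
  type: NodePort
  selector:
    app: via              # assumed pod label
  ports:
    - port: 19080
      targetPort: 19080   # via workload port from the diagram
      nodePort: 30101     # port reached by Cloud Load Balancing
```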
```dot
digraph G {
  graph [
    rankdir = "LR"
    gradientangle = 270
  ];
  nginx[
    label = "<f0>nginx\nport/6000 | <f1>sensemap-release-\nweb-config | <f2>front-static"
    shape = "record"
    gradientangle = "90"
  ]
  sensemap[
    label = "<f0>SenseMap\nport/8000 | <f1>sensemap-release-env | <f2>front-static | <f3>tmp-pod"
    shape = "record"
    gradientangle = "90"
  ]
  smo[
    label = "<f0>SMO\nport/8080 | <f1>sensemap-smo-\nrelease-env | <f2>tmp-pod"
    shape = "record"
    gradientangle = "90"
  ]
  db[
    label = "<f0>db-restore | <f1>tmp-pod"
    shape = "record"
    gradientangle = "90"
  ]
  outside -> nginx:f0
  nginx:f0 -> sensemap:f0
  nginx:f0 -> smo:f0
  nginx:f2 -> sensemap:f2[
    style = dashed
    dir = both
  ]
  db:f1 -> sensemap:f3[
    style = dashed
    dir = both
    label = "shared volume"
  ]
  db:f1 -> smo:f2[
    style = dashed
    dir = both
  ]
}
```
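The dashed edges above are shared volumes. Below is a minimal sketch of the nginx/SenseMap pairing, assuming emptyDir volumes and placeholder images and mount paths; the db-restore/SMO sharing of tmp-pod follows the same pattern.

```yaml
# Hypothetical pod sketch for the shared volumes in the diagram above.
# Volume names and container ports come from this page; the images,
# mount paths, and emptyDir choice are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: sensemap-staging
spec:
  containers:
    - name: nginx
      image: nginx:1.15                        # placeholder image
      ports:
        - containerPort: 6000
      volumeMounts:
        - name: front-static
          mountPath: /usr/share/nginx/html     # assumed path
    - name: sensemap
      image: gcr.io/example/sensemap:staging   # placeholder image
      ports:
        - containerPort: 8000
      volumeMounts:
        - name: front-static
          mountPath: /app/public               # assumed path
        - name: tmp-pod
          mountPath: /tmp/pod                  # assumed path
  volumes:
    - name: front-static
      emptyDir: {}
    - name: tmp-pod
      emptyDir: {}
```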