
Setting Up a Staging Environment on Google Cloud


Background

  • The Staging environment mirrors the Production environment, so this document only covers the differences. Please read Setting Up a Production Environment on Google Cloud first
  • Our Staging design gives every branch its own Staging service, following these rules
    • staging.sense.tw serves the master branch
    • {NAME}.staging.sense.tw serves the {NAME} branch
    • The corresponding API hosts are staging.api.sense.tw and {NAME}.staging.api.sense.tw
  • via has only a single master branch, served at staging.via.sense.tw
  • Every Staging service builds its own database environment from the latest database backup and keeps sessions in memory

Configuring the Network

digraph G {
    "node0" [
        label = "User"
        shape = "record"
        gradientangle="90"
    ];
    "node1" [
        label = "<f0>Cloud CDN | <f1>Cloud \lLoad Balancing"
        shape = "record"
        gradientangle="90"
    ];

    subgraph cluster_gke {
        label="GKE Cluster"
        labelloc="b"

        via[label="VIA-Staging\nport/30101"]
        proxy[label="Staging Proxy\nport/30606"]
    }

    "node0" -> "node1":f0 [
        id = 0
    ];

    "node1":f1 -> via,proxy
}

Staging Network Arch

  • DNS configuration rules
    • staging.via.sense.tw
    • staging.sense.tw
    • staging.api.sense.tw
    • *.staging.sense.tw
    • *.staging.api.sense.tw
  • GKE Cluster
  • Cloud Load Balancing / Cloud CDN
  • Point the DNS records at the Cloud Load Balancing IP address (see the record sketch after this list)
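
For reference, the records can be written out as plain YAML. This is only a sketch: the IP address (203.0.113.10) and TTL are placeholders for the actual Cloud Load Balancing address, and the list simply restates the hostnames above.

# Placeholder A records pointing every staging host at the load balancer.
# 203.0.113.10 is a documentation-range placeholder; substitute the real
# Cloud Load Balancing IP address.
- name: staging.sense.tw.
  type: A
  ttl: 300
  rrdatas: ['203.0.113.10']
- name: staging.api.sense.tw.
  type: A
  ttl: 300
  rrdatas: ['203.0.113.10']
- name: staging.via.sense.tw.
  type: A
  ttl: 300
  rrdatas: ['203.0.113.10']
- name: '*.staging.sense.tw.'
  type: A
  ttl: 300
  rrdatas: ['203.0.113.10']
- name: '*.staging.api.sense.tw.'
  type: A
  ttl: 300
  rrdatas: ['203.0.113.10']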

Setting Up Automated Builds and Deployment

  • The automated build environment watches Git commits to decide whether to start a build
  • Set it up by following the Google Cloud Build documentation (a minimal pipeline sketch follows this list)
  • SenseTW
    • Triggered by a push to any branch
    • The cloudbuild.yaml lives at builder/cloudbuild/sensemap-stage.yaml
    • Environment variables
  • Client
    • Triggered by a push to the master branch
    • The cloudbuild.yaml lives at gcloud/cloudbuild.stage.yaml
  • via
    • Triggered by a push to the master branch
    • The cloudbuild.yaml lives at gcloud/cloudbuild.stage.yaml
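
The real pipelines live at the paths listed above. Purely for orientation, a per-branch staging build using Cloud Build's built-in $PROJECT_ID and $BRANCH_NAME substitutions could look roughly like the sketch below; the image name, manifest path, zone, and cluster name are assumptions, not values taken from the repository.

steps:
  # Build and push an image tagged after the branch that triggered the build.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/sensemap:$BRANCH_NAME', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/sensemap:$BRANCH_NAME']
  # Deploy that branch's staging resources to the GKE cluster.
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'k8s/staging.yaml']         # placeholder manifest path
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=asia-east1-a'          # placeholder zone
      - 'CLOUDSDK_CONTAINER_CLUSTER=staging-cluster'  # placeholder cluster name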

Note: Kubernetes Architecture

digraph G {
    "node0" [
        label = "Cloud \lLoad Balancing"
        shape = "record"
        gradientangle="90"
    ];
    subgraph cluster_kubernetes {
        label="GKE Cluster"
        labelloc="b"
    
        subgraph cluster_gkeservice {
            label="Service"
            labelloc="b"
    
            proxy[label="staging-proxy\nport/30606"]
            sensemapService[label="sensemap-staging-\n{BRANCH_NAME}\nport/30600"]
            viaService[label="viaserver-stage\nport/30101"]
            
            proxy -> sensemapService

        }
        
        subgraph cluster_sensemap {
            label="Sensemap Workload (Port/6000)"
            labelloc="b"
            
            sensemapPods0
            sensemapPods1
        }
        
        subgraph cluster_via {
            label="via Workload (Port/19080)"
            labelloc="b"
            
            viaPods0
        }
    }

    
    node0 -> { proxy viaService }
    viaService -> viaPods0
    sensemapService -> { sensemapPods0 sensemapPods1 }
}

Inside Staging Architecture
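
To make the Service boxes in the diagram concrete, a per-branch Service manifest could look roughly like the sketch below. Only the ports (NodePort 30600, workload port 6000) and the naming pattern come from the diagram; the selector label and everything else are assumptions.

apiVersion: v1
kind: Service
metadata:
  # One Service per branch, e.g. sensemap-staging-master.
  name: sensemap-staging-BRANCH_NAME
spec:
  type: NodePort
  selector:
    app: sensemap-staging-BRANCH_NAME   # assumed label on the branch's Pods
  ports:
    - port: 80
      targetPort: 6000    # SenseMap workload port (from the diagram)
      nodePort: 30600     # port exposed to Cloud Load Balancing (from the diagram)

Note that NodePorts must be unique within a cluster, so if every branch's Service were exposed this way each branch would need its own port; the staging proxy can instead reach branch Services through their cluster-internal addresses.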

Note: SenseMap Staging Internal Architecture

digraph G {
    graph [
        rankdir = "LR"
        gradientangle = 270
    ];
    
    nginx[
        label="<f0>nginx\nport/6000 | <f1>sensemap-release-\nweb-config | <f2>front-static"
        shape = "record"
        gradientangle="90"
    ]
    sensemap[
        label="<f0>SenseMap\nport/8000 | <f1>sensemap-release-env | <f2>front-static | <f3>tmp-pod"
        shape = "record"
        gradientangle="90"
    ]
    smo[
        label="<f0>SMO\nport/8080 | <f1>sensemap-smo-\nrelease-env | <f2>tmp-pod"
        shape = "record"
        gradientangle="90"
    ]
    db[
        label="<f0>db-restore | <f1>tmp-pod"
        shape = "record"
        gradientangle="90"
    ]
    
    outside -> nginx:f0
    
    nginx:f0 -> sensemap:f0
    nginx:f0 -> smo:f0
    nginx:f2 -> sensemap:f2[
        style=dashed
        dir=both
    ]
    
    db:f1 -> sensemap:f3[
        style=dashed
        dir=both
        label="shared volumn"
    ]
    
    db:f1 -> smo:f2[
        style=dashed
        dir=both
    ]
}

Inside SenseMap Pod
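
The record diagram above corresponds to a single Pod running several containers that share two volumes, front-static and tmp-pod (the sensemap-release-* config mounts are omitted here). The sketch below only illustrates that shape: the image names, mount paths, emptyDir volumes, and running db-restore as an init container are all assumptions; the container names, ports, and volume names come from the diagram.

apiVersion: v1
kind: Pod
metadata:
  name: sensemap-staging
spec:
  volumes:
    - name: front-static   # static assets shared between nginx and SenseMap
      emptyDir: {}
    - name: tmp-pod        # scratch space shared with SMO and db-restore
      emptyDir: {}
  initContainers:
    - name: db-restore
      image: postgres:10                                          # placeholder image
      command: ['sh', '-c', 'echo restore latest backup here']    # placeholder command
      volumeMounts:
        - { name: tmp-pod, mountPath: /shared }
  containers:
    - name: nginx
      image: nginx:1.15                             # placeholder image; listens on 6000
      ports: [{ containerPort: 6000 }]
      volumeMounts:
        - { name: front-static, mountPath: /usr/share/nginx/html }
    - name: sensemap
      image: gcr.io/PROJECT_ID/sensemap:staging     # placeholder image; listens on 8000
      ports: [{ containerPort: 8000 }]
      volumeMounts:
        - { name: front-static, mountPath: /app/public }
        - { name: tmp-pod, mountPath: /shared }
    - name: smo
      image: gcr.io/PROJECT_ID/smo:staging          # placeholder image; listens on 8080
      ports: [{ containerPort: 8080 }]
      volumeMounts:
        - { name: tmp-pod, mountPath: /shared }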