
Automated Deployment

Business Scenario

Cloud Cafe's system is running stably, but every deployment and update requires running multiple commands by hand, which is slow and error-prone. To improve deployment efficiency and reliability, we will automate the deployment process.

Requirements:

  • Manage application deployments with Helm
  • Develop a custom Helm Chart
  • Configure automated CI/CD deployment
  • Implement a GitOps workflow

Learning Objectives

After completing this lesson, you will be able to:

  • Use Helm and understand its core concepts
  • Develop and customize Helm Charts
  • Configure and integrate CI/CD
  • Practice GitOps
  • Apply automated-deployment best practices

Prerequisites

1. Verify the environment

bash
# Check the namespace
kubectl get namespace cloud-cafe

# Check existing resources
kubectl get all -n cloud-cafe

2. Install Helm

bash
# Download Helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

# Verify the Helm installation
helm version

# Add a commonly used repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# List the configured repositories
helm repo list

Hands-on Steps

Step 1: Deploy an Application with Helm

Concept: Helm is the package manager for Kubernetes. Much like apt or yum on Linux, it simplifies deploying and managing applications.

1.1 Deploy Redis with Helm

bash
# Search for a Redis Chart
helm search repo redis

# Show the Redis Chart details
helm show chart bitnami/redis

# Deploy Redis with Helm
helm install redis bitnami/redis \
  --set architecture=standalone \
  --set auth.enabled=false \
  --set persistence.enabled=true \
  --set persistence.size=1Gi \
  -n cloud-cafe

# List Releases
helm list -n cloud-cafe

# Check the Redis status
kubectl get pods -n cloud-cafe -l app.kubernetes.io/name=redis
kubectl get svc -n cloud-cafe -l app.kubernetes.io/name=redis

📌 About Helm's --set flag

--set sets Chart configuration values on the command line, overriding the defaults.

Syntax: --set key=value

Supported data types

bash
# String (default)
--set name=myapp

# Integer
--set replicas=3

# Boolean
--set persistence.enabled=true

# Nested value (separated with .)
--set persistence.size=1Gi

# Array
--set env[0].name=DEBUG --set env[0].value=true

Comparison with --values (or -f)

  • --set: a few parameters, or values set dynamically from a script
  • --values: complex configuration, version control, multi-environment management

List a Chart's configurable values

bash
helm show values bitnami/redis
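Once overrides grow past a handful of flags, the same Redis settings used with --set above can live in a values file passed with -f. A minimal sketch (the file name redis-values.yaml is our own choice, not mandated by the chart):

```shell
# Capture the four --set overrides from the install command in one file
cat > redis-values.yaml <<'EOF'
architecture: standalone
auth:
  enabled: false
persistence:
  enabled: true
  size: 1Gi
EOF

# The equivalent install then becomes (needs a cluster, shown for reference):
#   helm install redis bitnami/redis -f redis-values.yaml -n cloud-cafe
grep -c ':' redis-values.yaml   # quick sanity check on the file  # → 6
```

Keeping the file in Git gives you a reviewable record of configuration changes, which --set flags typed in a terminal do not.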

1.2 Test Redis

bash
# Fetch the Redis password (only needed if auth is enabled; we disabled it above, so no password secret exists)
REDIS_PASSWORD=$(kubectl get secret --namespace cloud-cafe redis -o jsonpath="{.data.redis-password}" | base64 --decode)

# Get the Redis Service
kubectl get svc redis -n cloud-cafe

# Start a throwaway Redis client Pod
kubectl run --namespace cloud-cafe redis-client --restart='Never' --image docker.io/bitnami/redis:7.2 --command -- sleep infinity

# Open a shell in the client Pod
kubectl exec --tty -i redis-client --namespace cloud-cafe -- bash

# Inside the Pod (no -a <password> needed because auth.enabled=false)
redis-cli -h redis-master
PING
SET mykey "Hello from Helm"
GET mykey
EXIT
exit

# Delete the test Pod
kubectl delete pod redis-client -n cloud-cafe

1.3 Upgrade Redis

bash
# Upgrade Redis (change the configuration)
helm upgrade redis bitnami/redis \
  --set architecture=standalone \
  --set auth.enabled=false \
  --set persistence.enabled=true \
  --set persistence.size=2Gi \
  -n cloud-cafe

# View the upgrade history
helm history redis -n cloud-cafe

# Roll back to the previous revision
helm rollback redis -n cloud-cafe

# View the history again
helm history redis -n cloud-cafe

1.4 Uninstall Redis

bash
# Uninstall Redis
helm uninstall redis -n cloud-cafe

# List Releases
helm list -n cloud-cafe

Step 2: Develop a Custom Helm Chart

Now we will develop a custom Helm Chart for the Cloud Cafe application.

2.1 Create the Chart structure

bash
# Create the Chart directory
mkdir -p ~/cloud-cafe-chart
cd ~/cloud-cafe-chart

# Create Chart.yaml
cat > Chart.yaml <<EOF
apiVersion: v2
name: cloud-cafe
description: A Helm chart for Cloud Cafe application
type: application
version: 0.1.0
appVersion: "1.0"
keywords:
  - cloud-cafe
  - coffee
  - order-system
maintainers:
  - name: Cloud Cafe Team
    email: team@cloudcafe.com
EOF

# Create values.yaml
cat > values.yaml <<EOF
# Default configuration
replicaCount: 2

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: "latest"

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: cloudcafe.local
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

# MySQL configuration
mysql:
  enabled: true
  auth:
    rootPassword: rootpassword123
    database: cloudcafe
    username: cafeadmin
    password: userpassword123
  persistence:
    enabled: true
    size: 1Gi

# Redis configuration
redis:
  enabled: true
  auth:
    enabled: false
  persistence:
    enabled: true
    size: 1Gi

# Backend service configuration
backend:
  enabled: true
  image:
    repository: python
    tag: "3.9-slim"
  replicaCount: 2
  resources:
    limits:
      cpu: 200m
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi
EOF

# Create the templates directory
mkdir -p templates

# Create NOTES.txt
cat > templates/NOTES.txt <<EOF
Thank you for installing {{ .Chart.Name }}!

Your release is named {{ .Release.Name }}.

To learn more about the release, try:

  $ helm status {{ .Release.Name }}
  $ helm get all {{ .Release.Name }}

For more information on running Cloud Cafe, see:
  https://github.com/cloudcafe/cloud-cafe
EOF

2.2 Create the Frontend Templates

bash
# Create the frontend Deployment
cat > templates/frontend-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "cloud-cafe.fullname" . }}-frontend
  labels:
    {{- include "cloud-cafe.labels" . | nindent 4 }}
    app: frontend
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "cloud-cafe.selectorLabels" . | nindent 6 }}
      app: frontend
  template:
    metadata:
      labels:
        {{- include "cloud-cafe.selectorLabels" . | nindent 8 }}
        app: frontend
    spec:
      containers:
      - name: nginx
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: 80
          name: http
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          {{- toYaml .Values.resources | nindent 10 }}
        volumeMounts:
        - name: html-content
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html-content
        configMap:
          name: {{ include "cloud-cafe.fullname" . }}-frontend-html
EOF

# Create the frontend Service
cat > templates/frontend-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: {{ include "cloud-cafe.fullname" . }}-frontend-svc
  labels:
    {{- include "cloud-cafe.labels" . | nindent 4 }}
    app: frontend
spec:
  type: {{ .Values.service.type }}
  ports:
  - port: {{ .Values.service.port }}
    targetPort: http
    protocol: TCP
    name: http
  selector:
    {{- include "cloud-cafe.selectorLabels" . | nindent 4 }}
    app: frontend
EOF

# Create the frontend ConfigMap
cat > templates/frontend-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "cloud-cafe.fullname" . }}-frontend-html
  labels:
    {{- include "cloud-cafe.labels" . | nindent 4 }}
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>Cloud Cafe - Helm Deployed</title>
      <style>
        body {
          font-family: Arial, sans-serif;
          text-align: center;
          padding: 50px;
          background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
          color: white;
        }
        h1 {
          font-size: 48px;
          margin-bottom: 20px;
        }
        .coffee {
          font-size: 80px;
          margin: 30px 0;
        }
      </style>
    </head>
    <body>
      <div class="coffee">☕</div>
      <h1>Cloud Cafe</h1>
      <p>Deployed with Helm!</p>
      <p>Release: {{ .Release.Name }}</p>
      <p>Namespace: {{ .Release.Namespace }}</p>
    </body>
    </html>
EOF

2.3 Create the Backend Templates

bash
# Create the backend Deployment
cat > templates/backend-deployment.yaml <<EOF
{{- if .Values.backend.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "cloud-cafe.fullname" . }}-backend
  labels:
    {{- include "cloud-cafe.labels" . | nindent 4 }}
    app: backend
spec:
  replicas: {{ .Values.backend.replicaCount }}
  selector:
    matchLabels:
      {{- include "cloud-cafe.selectorLabels" . | nindent 6 }}
      app: backend
  template:
    metadata:
      labels:
        {{- include "cloud-cafe.selectorLabels" . | nindent 8 }}
        app: backend
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "5000"
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - name: backend
        image: "{{ .Values.backend.image.repository }}:{{ .Values.backend.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command: ["/bin/sh", "-c"]
        args:
          - |
            pip install flask pymysql flask-cors redis prometheus-client
            cat > /app/app.py << 'PYEOF'
            from flask import Flask, jsonify
            from flask_cors import CORS
            from prometheus_client import Counter, generate_latest, CONTENT_TYPE_LATEST

            app = Flask(__name__)
            CORS(app)

            request_counter = Counter('backend_requests_total', 'Total backend requests')

            @app.route('/health')
            def health():
                request_counter.inc()
                return jsonify({'status': 'healthy'})

            @app.route('/metrics')
            def metrics():
                return generate_latest(), 200, {'Content-Type': CONTENT_TYPE_LATEST}

            if __name__ == '__main__':
                app.run(host='0.0.0.0', port=5000)
            PYEOF
            python /app/app.py
        ports:
        - containerPort: 5000
          name: http
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          periodSeconds: 5
        resources:
          {{- toYaml .Values.backend.resources | nindent 10 }}
        env:
        - name: DB_HOST
          value: "{{ include "cloud-cafe.fullname" . }}-mysql"
        - name: DB_PORT
          value: "3306"
        - name: DB_USER
          value: "{{ .Values.mysql.auth.username }}"
        - name: DB_PASSWORD
          value: "{{ .Values.mysql.auth.password }}"
        - name: DB_NAME
          value: "{{ .Values.mysql.auth.database }}"
        - name: REDIS_HOST
          value: "{{ include "cloud-cafe.fullname" . }}-redis-master"
        - name: REDIS_PORT
          value: "6379"
{{- end }}
EOF

# Create the backend Service
cat > templates/backend-service.yaml <<EOF
{{- if .Values.backend.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "cloud-cafe.fullname" . }}-backend-svc
  labels:
    {{- include "cloud-cafe.labels" . | nindent 4 }}
    app: backend
spec:
  type: ClusterIP
  ports:
  - port: 5000
    targetPort: http
    protocol: TCP
    name: http
  selector:
    {{- include "cloud-cafe.selectorLabels" . | nindent 4 }}
    app: backend
{{- end }}
EOF

2.4 Create the Ingress Template

bash
# Create the Ingress (quote EOF so the shell leaves template expressions like $.Values untouched)
cat > templates/ingress.yaml <<'EOF'
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "cloud-cafe.fullname" . }}
  labels:
    {{- include "cloud-cafe.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "cloud-cafe.fullname" $ }}-frontend-svc
                port:
                  number: {{ $.Values.service.port }}
          {{- end }}
    {{- end }}
{{- end }}
EOF

2.5 Create Helper Templates

bash
# Create _helpers.tpl (quote EOF so the shell does not expand $name inside the templates)
cat > templates/_helpers.tpl <<'EOF'
{{/*
Expand the name of the chart.
*/}}
{{- define "cloud-cafe.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
*/}}
{{- define "cloud-cafe.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "cloud-cafe.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "cloud-cafe.labels" -}}
helm.sh/chart: {{ include "cloud-cafe.chart" . }}
{{ include "cloud-cafe.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "cloud-cafe.selectorLabels" -}}
app.kubernetes.io/name: {{ include "cloud-cafe.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
EOF

2.6 Validate the Chart

bash
# Lint the Chart
helm lint ~/cloud-cafe-chart

# Render the templates locally (no resources are created)
helm template test-release ~/cloud-cafe-chart -n cloud-cafe

# Package the Chart
helm package ~/cloud-cafe-chart

# List the packaged file
ls -lh *.tgz

Step 3: Deploy the Application with the Custom Chart

3.1 Deploy the application

bash
# Deploy the application with the custom Chart
helm install cloud-cafe ~/cloud-cafe-chart \
  -n cloud-cafe \
  --create-namespace

# List Releases
helm list -n cloud-cafe

# List the deployed resources
kubectl get all -n cloud-cafe

3.2 Test the application

bash
# Get the Ingress resource
kubectl get ingress -n cloud-cafe

# Get the NodePort of the ingress controller
INGRESS_PORT=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

echo "Access URL: http://cloudcafe.local:$INGRESS_PORT"

# Test access
curl -H "Host: cloudcafe.local" http://$NODE_IP:$INGRESS_PORT

3.3 Upgrade the application

bash
# Modify values.yaml
cat > ~/cloud-cafe-chart/values.yaml <<EOF
replicaCount: 3

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: "latest"

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: cloudcafe.local
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 300m
    memory: 512Mi
  requests:
    cpu: 150m
    memory: 256Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

mysql:
  enabled: true
  auth:
    rootPassword: rootpassword123
    database: cloudcafe
    username: cafeadmin
    password: userpassword123
  persistence:
    enabled: true
    size: 2Gi

redis:
  enabled: true
  auth:
    enabled: false
  persistence:
    enabled: true
    size: 2Gi

backend:
  enabled: true
  image:
    repository: python
    tag: "3.9-slim"
  replicaCount: 3
  resources:
    limits:
      cpu: 300m
      memory: 512Mi
    requests:
      cpu: 150m
      memory: 256Mi
EOF

# Upgrade the application
helm upgrade cloud-cafe ~/cloud-cafe-chart -n cloud-cafe

# View the upgrade history
helm history cloud-cafe -n cloud-cafe

# Watch the resource changes
kubectl get pods -n cloud-cafe

Step 4: Configure CI/CD

Concept: CI/CD (Continuous Integration / Continuous Deployment) automates building, testing, and deploying your application.

4.1 Create a GitHub Actions workflow

bash
# Create the workflow directory
mkdir -p ~/cloud-cafe-chart/.github/workflows

# Create the CI/CD workflow (quote EOF so the shell does not expand ${{ ... }} or $HOME)
cat > ~/cloud-cafe-chart/.github/workflows/ci-cd.yaml <<'EOF'
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  lint:
    name: Lint Helm Chart
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Helm
        uses: azure/setup-helm@v3
        with:
          version: 'v3.12.0'

      - name: Lint Helm Chart
        run: |
          helm lint ./cloud-cafe-chart

  test:
    name: Test Helm Chart
    runs-on: ubuntu-latest
    needs: lint
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Helm
        uses: azure/setup-helm@v3
        with:
          version: 'v3.12.0'

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.27.0'

      - name: Create kind cluster
        uses: helm/kind-action@v1.5.0
        with:
          cluster_name: kind-cluster
          kubectl_version: 'v1.27.0'

      - name: Test Helm Chart
        run: |
          helm template test-release ./cloud-cafe-chart --namespace test
          helm install test-release ./cloud-cafe-chart --namespace test --create-namespace
          kubectl get pods -n test
          helm test test-release -n test
          helm uninstall test-release -n test

  deploy:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: test
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Helm
        uses: azure/setup-helm@v3
        with:
          version: 'v3.12.0'

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.27.0'

      - name: Configure kubectl
        run: |
          mkdir -p $HOME/.kube
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > $HOME/.kube/config
          chmod 600 $HOME/.kube/config

      - name: Deploy with Helm
        run: |
          helm upgrade --install cloud-cafe ./cloud-cafe-chart \
            --namespace cloud-cafe \
            --create-namespace \
            --wait \
            --timeout 5m
EOF

📌 About Helm's --wait and --timeout flags

--wait: wait until all resources are ready before returning

  • Waits for all of a Deployment's Pods to become ready
  • Waits for Service Endpoints to become available
  • Waits for Jobs to complete
  • Fails the command if the resources never become ready

--timeout: set the maximum wait time

  • Format: 30s (seconds), 5m (minutes), 1h (hours)
  • On timeout the command fails (and rolls back when combined with --atomic)

Common combinations

bash
# Wait for the deployment to finish, up to 5 minutes
helm install myapp ./mychart --wait --timeout 5m

# Atomic deployment (roll back automatically on failure)
helm upgrade --install myapp ./mychart --wait --timeout 5m --atomic

# Create the namespace if it does not exist
helm install myapp ./mychart --create-namespace -n mynamespace

Best practices in CI/CD

  • Always use --wait to confirm the deployment actually succeeded
  • Set a reasonable --timeout based on the application's startup time
  • In production, add --atomic for automatic rollback

4.2 Configure GitOps (with ArgoCD)

Concept: GitOps is a continuous-deployment approach that uses Git as the single source of truth.

bash
# Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for ArgoCD to become ready
kubectl wait --namespace argocd \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/name=argocd-server \
  --timeout=120s

# View the ArgoCD Pods
kubectl get pods -n argocd

# Create the ArgoCD Application YAML
cat > cloud-cafe-application.yaml << 'EOF'
# ArgoCD Application definition
# Purpose: declares an application managed by ArgoCD
# Behavior:
#   - syncs the Helm Chart from a Git repository
#   - auto-sync: deploys automatically when changes are detected
#   - prune: removes resources that no longer exist in Git
#   - selfHeal: reverts manual changes made outside Git
# Key fields:
#   - repoURL: Git repository containing the Helm Chart
#   - targetRevision: Git branch or tag (HEAD means latest)
#   - path: path to the Chart within the repository
#   - namespace: target namespace for the deployment
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloud-cafe
  namespace: argocd
  labels:
    app: cloud-cafe
spec:
  project: default            # ArgoCD project ("default" is the built-in project)
  source:
    repoURL: https://github.com/yourusername/cloud-cafe-chart.git    # Git repository URL
    targetRevision: HEAD      # Git branch or tag
    path: .                   # path to the Chart within the repository (root)
  destination:
    server: https://kubernetes.default.svc    # target cluster (the current cluster by default)
    namespace: cloud-cafe     # deployment namespace
  syncPolicy:
    automated:
      prune: true             # delete resources that no longer exist in Git
      selfHeal: true          # revert manual changes automatically
    syncOptions:
      - CreateNamespace=true  # create the namespace if it does not exist
EOF

# Apply the ArgoCD Application
kubectl apply -f cloud-cafe-application.yaml

# View the Application
kubectl get application -n argocd

# Access the ArgoCD UI
kubectl port-forward svc/argocd-server -n argocd 8080:443

Open https://localhost:8080 in a browser.

Default username: admin. To retrieve the initial admin password:

bash
# Get the initial ArgoCD admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Step 5: Implement the Automated Deployment Flow

5.1 Create a deploy script

bash
# Create the deploy script
cat > ~/deploy.sh <<'EOF'
#!/bin/bash

set -e

# Configuration
CHART_PATH="./cloud-cafe-chart"
RELEASE_NAME="cloud-cafe"
NAMESPACE="cloud-cafe"
VALUES_FILE="${CHART_PATH}/values.yaml"

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Logging helpers
log_info() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# Check that Helm is installed
check_helm() {
    if ! command -v helm &> /dev/null; then
        log_error "Helm is not installed. Please install Helm first."
        exit 1
    fi
    log_info "Helm version: $(helm version --short)"
}

# Check that kubectl is installed
check_kubectl() {
    if ! command -v kubectl &> /dev/null; then
        log_error "kubectl is not installed. Please install kubectl first."
        exit 1
    fi
    log_info "kubectl version: $(kubectl version --client)"
}

# Check cluster connectivity
check_cluster() {
    if ! kubectl cluster-info &> /dev/null; then
        log_error "Cannot connect to Kubernetes cluster."
        exit 1
    fi
    log_info "Connected to Kubernetes cluster"
}

# Lint Chart
lint_chart() {
    log_info "Linting Helm Chart..."
    if helm lint "$CHART_PATH"; then
        log_info "Chart lint passed"
    else
        log_error "Chart lint failed"
        exit 1
    fi
}

# Deploy the application
deploy() {
    log_info "Deploying application..."
    
    # Check whether the Release already exists
    if helm list -n "$NAMESPACE" | grep -q "^$RELEASE_NAME"; then
        log_info "Release $RELEASE_NAME exists, upgrading..."
        helm upgrade "$RELEASE_NAME" "$CHART_PATH" \
            -n "$NAMESPACE" \
            -f "$VALUES_FILE" \
            --wait \
            --timeout 5m
    else
        log_info "Release $RELEASE_NAME does not exist, installing..."
        helm install "$RELEASE_NAME" "$CHART_PATH" \
            -n "$NAMESPACE" \
            --create-namespace \
            -f "$VALUES_FILE" \
            --wait \
            --timeout 5m
    fi
    
    log_info "Deployment completed successfully"
}

# Verify the deployment
verify() {
    log_info "Verifying deployment..."
    
    # Wait for all Pods to become ready
    kubectl wait --for=condition=ready pod \
        -l "app.kubernetes.io/instance=$RELEASE_NAME" \
        -n "$NAMESPACE" \
        --timeout=300s
    
    log_info "All pods are ready"
    
    # Show the deployment status
    helm status "$RELEASE_NAME" -n "$NAMESPACE"
}

# Main
main() {
    log_info "Starting deployment process..."
    
    check_helm
    check_kubectl
    check_cluster
    lint_chart
    deploy
    verify
    
    log_info "Deployment completed successfully!"
}

# Run main
main
EOF

# Make it executable
chmod +x ~/deploy.sh

# Run the deploy script
cd ~
./deploy.sh

5.2 Create a rollback script

bash
# Create the rollback script
cat > ~/rollback.sh <<'EOF'
#!/bin/bash

set -e

# Configuration
RELEASE_NAME="cloud-cafe"
NAMESPACE="cloud-cafe"

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

# Logging helpers
log_info() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# Show release history
show_history() {
    log_info "Release history:"
    helm history "$RELEASE_NAME" -n "$NAMESPACE"
}

# Roll back to the given revision
rollback() {
    local revision=$1
    
    if [ -z "$revision" ]; then
        log_error "Please specify a revision number"
        show_history
        exit 1
    fi
    
    log_info "Rolling back to revision $revision..."
    helm rollback "$RELEASE_NAME" "$revision" -n "$NAMESPACE"
    log_info "Rollback completed"
}

# Main
main() {
    if [ -z "$1" ]; then
        log_error "Usage: $0 <revision>"
        show_history
        exit 1
    fi
    
    rollback "$1"
}

main "$@"
EOF

# Make it executable
chmod +x ~/rollback.sh

# Show the history (running without a revision prints usage plus the release history)
cd ~
./rollback.sh

Verification and Testing

1. Check the status of all resources

bash
# List Releases
helm list -n cloud-cafe

# List the deployed resources
kubectl get all -n cloud-cafe

# Show the Helm status
helm status cloud-cafe -n cloud-cafe

2. Test the Helm Chart

bash
# Lint the Chart
helm lint ~/cloud-cafe-chart

# Render the templates locally
helm template test-release ~/cloud-cafe-chart -n cloud-cafe

# Page through the generated YAML
helm template test-release ~/cloud-cafe-chart -n cloud-cafe | less

3. Test the deploy script

bash
# Run the deploy script
cd ~
./deploy.sh

# Verify the deployment
kubectl get pods -n cloud-cafe
kubectl get svc -n cloud-cafe
kubectl get ingress -n cloud-cafe

4. Test rollback

bash
# View the history
helm history cloud-cafe -n cloud-cafe

# Roll back to the previous revision
helm rollback cloud-cafe -n cloud-cafe

# Or use the rollback script
./rollback.sh 1

5. Test CI/CD

If you configured GitHub Actions:

  1. Push the code to GitHub
  2. Open a Pull Request
  3. Watch the GitHub Actions workflow run
  4. Merge into the main branch to trigger the deployment
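The trigger here is nothing more than an ordinary commit landing on main. A throwaway local repository illustrates the commit that would start the pipeline (the paths and commit identity are placeholders, not the real project):

```shell
# Simulate the triggering commit in a temporary repo
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p .github/workflows cloud-cafe-chart
echo "name: CI/CD Pipeline" > .github/workflows/ci-cd.yaml
git add .
git -c user.email=ci@example.com -c user.name=ci commit -q -m "feat: add CI/CD workflow"
git log --oneline -1
# In the real repo you would now run: git push origin main
```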

📝 Summary and Reflection

What this lesson covered

  1. Helm: the package manager for Kubernetes
  2. Helm Chart: Helm's packaging format
  3. CI/CD: continuous integration and continuous deployment
  4. GitOps: a deployment approach that uses Git as the single source of truth
  5. Automated deployment: automating the deployment flow with scripts and tools

Key concepts

  • Package management: simplify application deployment with Helm
  • Templating: generate Kubernetes resources with Go templates
  • Version control: manage configuration and code with Git
  • Automation: build, test, and deploy automatically with CI/CD
  • Traceability: every change is recorded and can be rolled back

Questions to think about

  1. How do Helm and kubectl apply differ? When would you use each?
  2. How would you deploy to multiple environments? (Hint: values files, environment variables)
  3. How would you implement blue-green or canary releases?
  4. How do CI/CD and GitOps differ? How do you choose between them?
  5. How would you manage configuration? (Hint: ConfigMap, Secret, Vault)
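For the multi-environment question above, one common pattern is a shared values.yaml plus one small override file per environment, selected with -f at install time (later -f files win). A minimal sketch (the file names values-dev.yaml and values-prod.yaml are our own choice):

```shell
# Per-environment override files: only the values that differ go here
cat > values-dev.yaml <<'EOF'
replicaCount: 1
resources:
  limits:
    cpu: 100m
EOF

cat > values-prod.yaml <<'EOF'
replicaCount: 5
resources:
  limits:
    cpu: 500m
EOF

# Reference invocations (need a cluster, shown as comments):
#   helm upgrade --install cloud-cafe ./cloud-cafe-chart -f values.yaml -f values-dev.yaml  -n cloud-cafe-dev --create-namespace
#   helm upgrade --install cloud-cafe ./cloud-cafe-chart -f values.yaml -f values-prod.yaml -n cloud-cafe     --create-namespace
grep replicaCount values-dev.yaml values-prod.yaml
```

Because each environment's differences live in a small, reviewable file, the base values.yaml stays the single shared definition of the application.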

Best practices

  1. Manage applications with Helm: simplify deployment and upgrades
  2. Version-control all configuration: keep every config file in Git
  3. Automated testing: test before you deploy
  4. Progressive delivery: use rolling updates or canary releases
  5. Monitoring and alerting: keep watching the system after deployment

Course Wrap-up

This lesson completes the foundational part of the Cloud Cafe K8S hands-on course, which covered:

  1. Deploying the Cloud Cafe website
  2. Updating the site content
  3. Adding the order system
  4. Separating frontend and backend
  5. Building a highly available architecture
  6. Adding monitoring and logging
  7. Automating deployment

With these lessons you now have the fundamentals to operate a Kubernetes cluster on your own. You can:

  • Deploy and manage containerized applications
  • Persist data
  • Build microservice architectures
  • Configure autoscaling
  • Make the system observable
  • Automate the deployment flow

Cleaning Up

If you want to remove all resources:

bash
# Uninstall the Helm Release
helm uninstall cloud-cafe -n cloud-cafe

# Delete the namespaces
kubectl delete namespace cloud-cafe
kubectl delete namespace argocd
kubectl delete namespace monitoring

# Delete local files
rm -rf ~/cloud-cafe-chart
rm -f ~/deploy.sh ~/rollback.sh
rm -f *.tgz
