Kubernetes 全景解析 (6):生产级微服务架构实战
"纸上得来终觉浅,绝知此事要躬行。"
在前面的系列文章中,我们系统学习了 K8s 的架构设计、工作负载管理、网络模型、存储体系与配置管理。现在是时候将这些知识串联起来,完成一次从零到生产的完整实战演练。
本文将以一个电商微服务系统为蓝本,手把手带你完成以下全流程:
- 多服务编排与部署
- Ingress 七层路由与 TLS 终止
- HPA 弹性伸缩与高可用保障
- 健康检查与优雅停机
- Prometheus + Grafana 监控体系
- ArgoCD GitOps 持续交付
本文所有 YAML 配置使用的 API 版本均经过 Kubernetes v1.34(Of Wind & Will)官方文档校验,可直接应用于你的集群:
| 资源类型 | apiVersion | 官方文档 |
|---|---|---|
| Deployment / StatefulSet | apps/v1 | Workload Resources |
| Service | v1 | Service Resources |
| Ingress / IngressClass | networking.k8s.io/v1 | Ingress |
| HorizontalPodAutoscaler | autoscaling/v2 | HPA v2 |
| PodDisruptionBudget | policy/v1 | PDB |
| ResourceQuota / LimitRange | v1 | Policy Resources |
| RBAC | rbac.authorization.k8s.io/v1 | Authorization Resources |
| ServiceMonitor | monitoring.coreos.com/v1 | Prometheus Operator |
| ArgoCD Application | argoproj.io/v1alpha1 | ArgoCD CRDs |
| StorageClass | storage.k8s.io/v1 | Storage Resources |
一、实战场景概述
1.1 电商微服务架构设计
我们以一个典型的电商系统为例,将其拆分为以下五个核心微服务:
| 服务 | 职责 | 技术栈 | 端口 |
|---|---|---|---|
| API Gateway | 统一入口、路由转发、限流熔断 | APISIX / Nginx | 80/443 |
| 用户服务 | 注册、登录、用户信息管理 | Go (Gin) | 8080 |
| 商品服务 | 商品 CRUD、分类管理、搜索 | Java (Spring Boot) | 8081 |
| 订单服务 | 下单、支付回调、订单查询 | Node.js (Express) | 8082 |
| 支付服务 | 支付对接、退款、对账 | Go (Gin) | 8083 |
底层依赖两个有状态服务:
| 服务 | 职责 | 端口 |
|---|---|---|
| PostgreSQL | 关系型数据库(用户、订单) | 5432 |
| Redis | 缓存、会话管理、分布式锁 | 6379 |
1.2 微服务架构拓扑
1.3 技术栈选择
| 层级 | 技术选型 | 选型理由 |
|---|---|---|
| 网关层 | APISIX | 高性能、支持 gRPC 转发、插件生态丰富 |
| 服务层 | Go + Java + Node.js | 模拟真实多语言微服务环境 |
| 数据层 | PostgreSQL 16 + Redis 7 | 成熟稳定、社区活跃 |
| 监控层 | Prometheus + Grafana | 云原生监控事实标准 |
| 日志层 | Fluent Bit | 轻量级、资源占用低 |
| 部署层 | ArgoCD | GitOps 声明式持续交付 |
二、基础设施准备
2.1 命名空间与资源配额
生产环境中,不同团队的服务应该隔离在不同的命名空间中,并通过 ResourceQuota 限制资源使用。
---
apiVersion: v1
kind: Namespace
metadata:
name: microservices
labels:
app.kubernetes.io/part-of: ecommerce
app.kubernetes.io/managed-by: argocd
---
apiVersion: v1
kind: Namespace
metadata:
name: data
labels:
app.kubernetes.io/part-of: ecommerce
app.kubernetes.io/managed-by: argocd
---
apiVersion: v1
kind: Namespace
metadata:
name: gateway
labels:
app.kubernetes.io/part-of: ecommerce
app.kubernetes.io/managed-by: argocd
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: microservices-quota
namespace: microservices
spec:
hard:
requests.cpu: "10"
requests.memory: 20Gi
limits.cpu: "20"
limits.memory: 40Gi
pods: "50"
services: "20"
persistentvolumeclaims: "10"
---
apiVersion: v1
kind: LimitRange
metadata:
name: default-limits
namespace: microservices
spec:
limits:
- default:
cpu: "500m"
memory: "512Mi"
defaultRequest:
cpu: "100m"
memory: "128Mi"
max:
cpu: "2"
memory: "2Gi"
min:
cpu: "50m"
memory: "64Mi"
type: Container
ResourceQuota 的值应根据集群总容量和业务优先级进行合理分配。建议预留 20% 的资源缓冲,避免某个命名空间的突发流量影响其他业务。LimitRange 确保即使开发者忘记设置资源限制,容器也不会无限制地消耗节点资源。
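部署完成后,可以用下面的命令验证配额与默认限制是否生效(kubectl 原生命令,无额外假设):

# 查看配额的已用量与上限
kubectl describe resourcequota microservices-quota -n microservices
# 查看容器默认的 requests/limits
kubectl describe limitrange default-limits -n microservices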
2.2 ConfigMap 与 Secret 配置管理
将配置从镜像中分离出来,是 12-Factor App 的核心原则之一。
---
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: microservices
data:
# 数据库连接配置
DB_HOST: "postgresql.data.svc.cluster.local"
DB_PORT: "5432"
DB_NAME: "ecommerce"
DB_POOL_SIZE: "20"
DB_CONNECTION_TIMEOUT: "30"
# Redis 连接配置
REDIS_HOST: "redis.data.svc.cluster.local"
REDIS_PORT: "6379"
REDIS_DB: "0"
REDIS_POOL_SIZE: "50"
# 日志配置
LOG_LEVEL: "info"
LOG_FORMAT: "json"
# 服务间调用超时
SERVICE_TIMEOUT: "10s"
SERVICE_RETRY: "3"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: gateway-config
namespace: gateway
data:
# 网关路由配置
UPSTREAM_USER: "user-service.microservices.svc.cluster.local:8080"
UPSTREAM_PRODUCT: "product-service.microservices.svc.cluster.local:8081"
UPSTREAM_ORDER: "order-service.microservices.svc.cluster.local:8082"
UPSTREAM_PAYMENT: "payment-service.microservices.svc.cluster.local:8083"
---
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
namespace: microservices
type: Opaque
stringData:
DB_USERNAME: "ecommerce_app"
DB_PASSWORD: "S3cureP@ssw0rd!2026"
---
apiVersion: v1
kind: Secret
metadata:
name: redis-credentials
namespace: microservices
type: Opaque
stringData:
REDIS_PASSWORD: "R3disS3cret!2026"
---
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: gateway
type: kubernetes.io/tls
stringData:
tls.crt: |
-----BEGIN CERTIFICATE-----
# 此处替换为你的 TLS 证书
-----END CERTIFICATE-----
tls.key: |
-----BEGIN PRIVATE KEY-----
# 此处替换为你的 TLS 私钥
-----END PRIVATE KEY-----
切勿将 Secret 明文提交到 Git 仓库! 生产环境应使用以下方案之一:
- Sealed Secrets(Bitnami Labs):加密后可安全提交 Git
- External Secrets Operator:从 AWS Secrets Manager / HashiCorp Vault 同步
- SOPS(Mozilla):基于 GPG/KMS 的加密工具
本文示例中的 stringData 仅用于演示,生产环境请务必使用加密方案。另外注意:Secret 是命名空间级资源,第 3.6 节的 PostgreSQL/Redis StatefulSet 部署在 data 命名空间,引用 db-credentials 和 redis-credentials 之前,需要先在 data 命名空间中创建同名 Secret。
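以 Sealed Secrets 为例,下面是一个加密产物的示意(假设集群已安装 sealed-secrets controller,encryptedData 中的密文由 kubeseal 命令生成,此处仅为占位):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: microservices
spec:
  encryptedData:
    DB_USERNAME: AgBy3i4OJSWK...   # kubeseal 输出的密文(占位)
    DB_PASSWORD: AgBvA8kWmTq9...   # kubeseal 输出的密文(占位)
  template:
    metadata:
      name: db-credentials
      namespace: microservices
    type: Opaque

controller 会在集群内将其解密为普通 Secret,而 Git 仓库中只保留密文。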
2.3 PV/PVC 持久化存储规划
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-ssd
provisioner: ebs.csi.aws.com  # AWS EBS CSI 驱动;in-tree 的 kubernetes.io/aws-ebs 已废弃且不支持 gp3,其他云厂商请替换为对应 CSI 驱动
parameters:
  type: gp3
  csi.storage.k8s.io/fstype: ext4
  iopsPerGB: "50"
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard-hdd
provisioner: ebs.csi.aws.com  # 同上,使用 EBS CSI 驱动
parameters:
  type: st1  # 吞吐优化型 HDD,与 standard-hdd 命名一致(gp3 为 SSD)
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgresql-data
namespace: data
spec:
storageClassName: fast-ssd
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data
namespace: data
spec:
storageClassName: fast-ssd
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
三、核心服务部署
3.1 API Gateway 部署
网关是整个系统的统一入口,负责路由转发、限流、熔断和认证。
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-gateway
namespace: gateway
labels:
app: api-gateway
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: api-gateway
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: api-gateway
version: v1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
prometheus.io/path: "/metrics"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: api-gateway
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 60
containers:
- name: apisix
image: apache/apisix:3.9.0-debian
ports:
- name: http
containerPort: 9080
protocol: TCP
- name: https
containerPort: 9443
protocol: TCP
- name: metrics
containerPort: 9090
protocol: TCP
envFrom:
- configMapRef:
name: gateway-config
resources:
requests:
cpu: 250m
memory: 256Mi
limits:
cpu: "1"
memory: 512Mi
livenessProbe:
httpGet:
path: /healthz
port: 9090
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 9090
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 15"]
---
apiVersion: v1
kind: Service
metadata:
name: api-gateway
namespace: gateway
labels:
app: api-gateway
spec:
type: ClusterIP
ports:
- name: http
port: 80
targetPort: 9080
protocol: TCP
- name: https
port: 443
targetPort: 9443
protocol: TCP
selector:
app: api-gateway
3.2 用户服务部署
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service
namespace: microservices
labels:
app: user-service
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: user-service
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: user-service
version: v1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/metrics"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: user-service
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 30
containers:
- name: user-service
image: registry.example.com/ecommerce/user-service:v1.0.0
ports:
- name: http
containerPort: 8080
protocol: TCP
- name: grpc
containerPort: 9090
protocol: TCP
env:
- name: SERVICE_NAME
value: "user-service"
- name: SERVICE_PORT
value: "8080"
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: DB_HOST
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: app-config
key: DB_PORT
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: app-config
key: DB_NAME
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_PASSWORD
- name: REDIS_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: REDIS_HOST
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-credentials
key: REDIS_PASSWORD
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: "1"
memory: 512Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 15
periodSeconds: 15
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /readyz
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 10"]
---
apiVersion: v1
kind: Service
metadata:
name: user-service
namespace: microservices
labels:
app: user-service
spec:
type: ClusterIP
ports:
- name: http
port: 8080
targetPort: 8080
protocol: TCP
- name: grpc
port: 9090
targetPort: 9090
protocol: TCP
selector:
app: user-service
3.3 订单服务部署
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: order-service
namespace: microservices
labels:
app: order-service
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: order-service
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: order-service
version: v1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8082"
prometheus.io/path: "/metrics"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: order-service
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 60
containers:
- name: order-service
image: registry.example.com/ecommerce/order-service:v1.0.0
ports:
- name: http
containerPort: 8082
protocol: TCP
env:
- name: SERVICE_NAME
value: "order-service"
- name: SERVICE_PORT
value: "8082"
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: DB_HOST
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: app-config
key: DB_PORT
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: app-config
key: DB_NAME
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_PASSWORD
- name: REDIS_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: REDIS_HOST
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-credentials
key: REDIS_PASSWORD
- name: USER_SERVICE_URL
value: "http://user-service.microservices.svc.cluster.local:8080"
- name: PRODUCT_SERVICE_URL
value: "http://product-service.microservices.svc.cluster.local:8081"
- name: PAYMENT_SERVICE_URL
value: "http://payment-service.microservices.svc.cluster.local:8083"
resources:
requests:
cpu: 300m
memory: 384Mi
limits:
cpu: "1"
memory: 768Mi
livenessProbe:
httpGet:
path: /healthz
port: 8082
initialDelaySeconds: 20
periodSeconds: 15
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /readyz
port: 8082
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 15"]
---
apiVersion: v1
kind: Service
metadata:
name: order-service
namespace: microservices
labels:
app: order-service
spec:
type: ClusterIP
ports:
- name: http
port: 8082
targetPort: 8082
protocol: TCP
selector:
app: order-service
3.4 商品服务部署
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: product-service
namespace: microservices
labels:
app: product-service
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: product-service
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: product-service
version: v1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8081"
prometheus.io/path: "/metrics"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: product-service
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 30
containers:
- name: product-service
image: registry.example.com/ecommerce/product-service:v1.0.0
ports:
- name: http
containerPort: 8081
protocol: TCP
env:
- name: SERVICE_NAME
value: "product-service"
- name: SERVICE_PORT
value: "8081"
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: DB_HOST
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: app-config
key: DB_PORT
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: app-config
key: DB_NAME
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_PASSWORD
- name: REDIS_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: REDIS_HOST
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-credentials
key: REDIS_PASSWORD
resources:
requests:
cpu: 300m
memory: 512Mi
limits:
cpu: "2"
memory: 1Gi
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8081
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8081
initialDelaySeconds: 15
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 10"]
---
apiVersion: v1
kind: Service
metadata:
name: product-service
namespace: microservices
labels:
app: product-service
spec:
type: ClusterIP
ports:
- name: http
port: 8081
targetPort: 8081
protocol: TCP
selector:
app: product-service
3.5 支付服务部署
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: payment-service
namespace: microservices
labels:
app: payment-service
version: v1
spec:
replicas: 2
selector:
matchLabels:
app: payment-service
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: payment-service
version: v1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8083"
prometheus.io/path: "/metrics"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: payment-service
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 60
containers:
- name: payment-service
image: registry.example.com/ecommerce/payment-service:v1.0.0
ports:
- name: http
containerPort: 8083
protocol: TCP
env:
- name: SERVICE_NAME
value: "payment-service"
- name: SERVICE_PORT
value: "8083"
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: DB_HOST
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: app-config
key: DB_PORT
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: app-config
key: DB_NAME
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_PASSWORD
- name: PAYMENT_GATEWAY_KEY
valueFrom:
secretKeyRef:
name: payment-secret
key: GATEWAY_API_KEY
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: "1"
memory: 512Mi
livenessProbe:
httpGet:
path: /healthz
port: 8083
initialDelaySeconds: 15
periodSeconds: 15
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /readyz
port: 8083
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 15"]
---
apiVersion: v1
kind: Service
metadata:
name: payment-service
namespace: microservices
labels:
app: payment-service
spec:
type: ClusterIP
ports:
- name: http
port: 8083
targetPort: 8083
protocol: TCP
selector:
app: payment-service
3.6 数据库部署(StatefulSet)
有状态服务使用 StatefulSet 部署,确保稳定的网络标识和持久化存储。
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql
namespace: data
labels:
app: postgresql
spec:
serviceName: postgresql-headless
replicas: 1
selector:
matchLabels:
app: postgresql
template:
metadata:
labels:
app: postgresql
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9187"
prometheus.io/path: "/metrics"
spec:
terminationGracePeriodSeconds: 60
containers:
- name: postgresql
image: postgres:16-alpine
ports:
- name: postgresql
containerPort: 5432
protocol: TCP
env:
- name: POSTGRES_DB
value: "ecommerce"
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_USERNAME
optional: false
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_PASSWORD
optional: false
- name: PGDATA
value: "/var/lib/postgresql/data/pgdata"
resources:
requests:
cpu: 500m
memory: 1Gi
limits:
cpu: "2"
memory: 4Gi
volumeMounts:
- name: postgresql-data
mountPath: /var/lib/postgresql/data
livenessProbe:
exec:
command:
- pg_isready
- -U
- ecommerce_app
- -d
- ecommerce
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
exec:
command:
- pg_isready
- -U
- ecommerce_app
- -d
- ecommerce
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "pg_ctl stop -m fast"]
- name: postgres-exporter
image: prometheuscommunity/postgres-exporter:v0.15.0
ports:
- name: metrics
containerPort: 9187
protocol: TCP
env:
# 切勿在 DSN 中硬编码明文密码(且密码中的 @ 和 ! 会破坏 URI 解析),改用 exporter 支持的拆分环境变量
- name: DATA_SOURCE_URI
  value: "localhost:5432/ecommerce?sslmode=disable"
- name: DATA_SOURCE_USER
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: DB_USERNAME
- name: DATA_SOURCE_PASS
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: DB_PASSWORD
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 128Mi
volumeClaimTemplates:
- metadata:
name: postgresql-data
spec:
storageClassName: fast-ssd
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
name: postgresql-headless
namespace: data
labels:
app: postgresql
spec:
type: ClusterIP
clusterIP: None
ports:
- name: postgresql
port: 5432
targetPort: 5432
protocol: TCP
selector:
app: postgresql
---
apiVersion: v1
kind: Service
metadata:
name: postgresql
namespace: data
labels:
app: postgresql
spec:
type: ClusterIP
ports:
- name: postgresql
port: 5432
targetPort: 5432
protocol: TCP
selector:
app: postgresql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
namespace: data
labels:
app: redis
spec:
serviceName: redis-headless
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9121"
prometheus.io/path: "/metrics"
spec:
terminationGracePeriodSeconds: 30
containers:
- name: redis
image: redis:7-alpine
command:
- redis-server
- --requirepass
- $(REDIS_PASSWORD)
- --maxmemory
- 1gb
- --maxmemory-policy
- allkeys-lru
- --appendonly
- "yes"
ports:
- name: redis
containerPort: 6379
protocol: TCP
env:
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-credentials
key: REDIS_PASSWORD
resources:
requests:
cpu: 200m
memory: 512Mi
limits:
cpu: "1"
memory: 1Gi
volumeMounts:
- name: redis-data
mountPath: /data
livenessProbe:
exec:
command:
- redis-cli
- -a
- $(REDIS_PASSWORD)
- ping
initialDelaySeconds: 15
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
exec:
command:
- redis-cli
- -a
- $(REDIS_PASSWORD)
- ping
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "redis-cli -a $(REDIS_PASSWORD) SHUTDOWN NOSAVE"]
- name: redis-exporter
image: oliver006/redis_exporter:v1.58.0
ports:
- name: metrics
containerPort: 9121
protocol: TCP
env:
- name: REDIS_ADDR
value: "redis://localhost:6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-credentials
key: REDIS_PASSWORD
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 100m
memory: 128Mi
volumeClaimTemplates:
- metadata:
name: redis-data
spec:
storageClassName: fast-ssd
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
name: redis-headless
namespace: data
labels:
app: redis
spec:
type: ClusterIP
clusterIP: None
ports:
- name: redis
port: 6379
targetPort: 6379
protocol: TCP
selector:
app: redis
---
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: data
labels:
app: redis
spec:
type: ClusterIP
ports:
- name: redis
port: 6379
targetPort: 6379
protocol: TCP
selector:
app: redis
对于数据库和缓存等有状态服务,务必使用 StatefulSet 而非 Deployment。StatefulSet 提供了以下保证:
- 稳定的网络标识:每个 Pod 有固定的 DNS 名称(如 postgresql-0.postgresql-headless.data.svc.cluster.local)
- 有序的部署和扩缩容:Pod 按序号顺序创建和删除
- 稳定的持久化存储:通过 volumeClaimTemplates,每个 Pod 绑定独立的 PVC,Pod 重建后自动重新挂载
四、服务间通信与配置
4.1 CoreDNS 服务发现
Kubernetes 内置的 CoreDNS 为每个 Service 自动创建 DNS 记录,服务间可以通过标准的 FQDN 进行互相访问:
| 记录格式 | 示例 | 作用域 |
|---|---|---|
| `<service>` | user-service | 同命名空间 |
| `<service>.<namespace>` | user-service.microservices | 跨命名空间 |
| `<service>.<namespace>.svc.cluster.local` | user-service.microservices.svc.cluster.local | 全集群 |
始终使用完整的跨命名空间 FQDN(如 user-service.microservices.svc.cluster.local),即使服务在同一个命名空间中。这样在后续调整命名空间划分时,不需要修改代码和配置。
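可以在集群内起一个临时 Pod 验证解析结果(busybox 镜像仅为示意):

kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- \
  nslookup user-service.microservices.svc.cluster.local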
4.2 健康检查设计原则
健康检查是保障服务可用性的关键机制。K8s 提供了三种探针:
| 探针类型 | 用途 | 失败后果 |
|---|---|---|
| Liveness Probe | 检测容器是否存活 | 重启容器 |
| Readiness Probe | 检测是否可以接收流量 | 从 Service Endpoints 中移除 |
| Startup Probe | 检测应用是否启动完成 | 启动完成前禁用其他探针 |
以下是健康检查的配置要点:
# 以订单服务为例,展示完整的健康检查配置
spec:
containers:
- name: order-service
# ...其他配置...
startupProbe:
httpGet:
path: /healthz
port: 8082
failureThreshold: 30 # 最多等待 30 * 10s = 300s
periodSeconds: 10
livenessProbe:
httpGet:
path: /healthz
port: 8082
initialDelaySeconds: 0 # startupProbe 完成后才开始
periodSeconds: 15
timeoutSeconds: 5
failureThreshold: 3 # 连续 3 次失败则重启
successThreshold: 1
readinessProbe:
httpGet:
path: /readyz
port: 8082
initialDelaySeconds: 0
periodSeconds: 10
timeoutSeconds: 3
failureThreshold: 3 # 连续 3 次失败则摘除流量
successThreshold: 1
- Liveness 和 Readiness 必须使用不同的端点。Liveness 检测"进程是否活着",Readiness 检测"是否准备好接收请求"。如果两者使用同一端点,可能导致级联故障——例如数据库短暂不可用时,所有 Pod 同时被 Liveness 重启。
- 设置合理的 timeoutSeconds。过短的超时时间会导致误判,建议设置为 P99 响应时间的 2-3 倍。
- 对于 Java 等启动较慢的服务,务必配置 Startup Probe,否则 Liveness Probe 可能在应用启动期间就触发重启。
4.3 优雅停机
优雅停机确保 Pod 在被终止时,能够完成正在处理的请求并安全释放资源。
Kubernetes 自 v1.29 起引入了原生的 sleep action 用于 PreStop 和 PostStart lifecycle hooks(v1.30 起默认启用,v1.32 GA),无需再通过 exec 执行 sleep 命令。这提供了更简洁和可靠的优雅停机方式。
详见官方文档:Container Lifecycle Hooks
推荐方式(v1.30+):使用原生 Sleep Hook
spec:
terminationGracePeriodSeconds: 60 # 给予 60 秒的优雅停机时间
containers:
- name: order-service
lifecycle:
preStop:
sleep:
seconds: 15 # 原生 sleep action
# ...其他配置...
传统方式(旧版本集群):使用 Exec Hook
spec:
terminationGracePeriodSeconds: 60 # 给予 60 秒的优雅停机时间
containers:
- name: order-service
lifecycle:
preStop:
exec:
# 先等待 15 秒,让 Service Endpoints 更新
# 然后发送 SIGTERM 信号,应用开始优雅关闭
command: ["/bin/sh", "-c", "sleep 15"]
# ...其他配置...
优雅停机的完整流程如下:
1. Pod 进入 Terminating 状态,K8s 开始将其从 Service Endpoints 中移除
2. preStop Hook 执行 sleep 15(或原生 sleep action),等待 15 秒,让 Ingress/Service 等上游组件感知到 Pod 不可用
3. preStop Hook 执行完毕后,kubelet 向容器发送 SIGTERM 信号
4. 应用收到 SIGTERM 后,停止接收新请求,处理完正在进行的请求
5. 应用关闭数据库连接池、释放资源
6. 如果从进入 Terminating 起超过 terminationGracePeriodSeconds(60s)仍未退出,K8s 发送 SIGKILL 强制终止
K8s 在执行 preStop Hook 的同时,会从 Service Endpoints 中移除该 Pod。但 Ingress Controller 和上游代理可能还有缓存,需要一定时间才能感知到变化。sleep 15 确保在应用真正开始关闭之前,所有上游组件都已经停止向该 Pod 转发流量。
五、Ingress 七层路由配置
5.1 Ingress Controller 部署
我们使用 APISIX Ingress Controller 作为七层路由组件。
API 版本说明:Ingress 和 IngressClass 使用
networking.k8s.io/v1,这是 Kubernetes v1.34 中的稳定版本。networking.k8s.io/v1beta1已在 v1.22 中移除,请确保使用v1。详见官方文档:Ingress v1
---
# RBAC 配置
# apiVersion: rbac.authorization.k8s.io/v1 是 Kubernetes v1.34 中的稳定版本
# 详见官方文档:https://kubernetes.io/docs/reference/kubernetes-api/authorization-resources/role-v1/
apiVersion: v1
kind: ServiceAccount
metadata:
name: apisix-ingress-controller
namespace: gateway
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: apisix-ingress-controller
rules:
- apiGroups: [""]
resources: ["secrets", "services", "endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses", "ingressclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apisix.apache.org"]
resources: ["apisixroutes", "apisixupstreams", "apisixtlsconfigs", "apisixclusters", "apisixpluginconfigs"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: apisix-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: apisix-ingress-controller
subjects:
- kind: ServiceAccount
name: apisix-ingress-controller
namespace: gateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: apisix-ingress-controller
namespace: gateway
labels:
app: apisix-ingress-controller
spec:
replicas: 2
selector:
matchLabels:
app: apisix-ingress-controller
template:
metadata:
labels:
app: apisix-ingress-controller
spec:
serviceAccountName: apisix-ingress-controller
containers:
- name: apisix-ingress-controller
image: apache/apisix-ingress-controller:1.8.0
args:
- --ingress-class
- apisix
- --apisix-admin-api-version
- v3
- --log-level
- info
- --http-port
- "8080"
env:
- name: APISIX_ADMIN_API_URL
value: "http://apisix-admin.gateway.svc.cluster.local:9180/apisix/admin"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 256Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: /readyz
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
5.2 Ingress 路由规则
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: apisix
spec:
controller: apisix.apache.org/apisix-ingress-controller
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ecommerce-ingress
namespace: gateway
annotations:
  # spec.ingressClassName 已指定 IngressClass,无需再使用已废弃的 kubernetes.io/ingress.class 注解
# 启用 CORS
apisix.apache.org/enable-cors: "true"
apisix.apache.org/cors-allow-origin: "https://shop.example.com"
apisix.apache.org/cors-allow-methods: "GET,POST,PUT,DELETE,OPTIONS"
apisix.apache.org/cors-allow-headers: "Authorization,Content-Type"
# 全局限流
apisix.apache.org/plugin-limit-count: |
{
"count": 1000,
"time_window": 1,
"rejected_code": 429,
"key": "remote_addr"
}
spec:
ingressClassName: apisix
tls:
- hosts:
- api.example.com
secretName: tls-secret
rules:
- host: api.example.com
http:
paths:
# 用户服务路由
- path: /api/v1/users
pathType: Prefix
backend:
service:
name: api-gateway
port:
number: 80
# 商品服务路由
- path: /api/v1/products
pathType: Prefix
backend:
service:
name: api-gateway
port:
number: 80
# 订单服务路由
- path: /api/v1/orders
pathType: Prefix
backend:
service:
name: api-gateway
port:
number: 80
# 支付服务路由
- path: /api/v1/payments
pathType: Prefix
backend:
service:
name: api-gateway
port:
number: 80
在上述配置中,所有业务路由都指向 API Gateway,由 Gateway 负责将请求转发到具体的后端服务。这种设计的好处是:
- Gateway 统一处理认证、限流、熔断等横切关注点
- 后端服务不需要暴露到 Ingress 层
- 路由规则变更只需修改 Gateway 配置,无需修改 Ingress
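Gateway 内部到各后端服务的转发规则,可以用 APISIX Ingress Controller 提供的 ApisixRoute CRD 声明。下面是一个路由到用户服务的示意(主机名、路径与插件参数均为示例):

apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: user-route
  namespace: microservices
spec:
  http:
  - name: user-api
    match:
      hosts:
      - api.example.com
      paths:
      - /api/v1/users/*
    backends:
    - serviceName: user-service
      servicePort: 8080
    plugins:
    - name: limit-count   # 针对用户接口的细粒度限流
      enable: true
      config:
        count: 100
        time_window: 60
        rejected_code: 429
        key: remote_addr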
六、弹性伸缩与高可用
6.1 HPA 弹性伸缩
Horizontal Pod Autoscaler 根据监控指标自动调整 Pod 副本数,是应对流量波动的核心机制。
API 版本说明:HPA 使用
autoscaling/v2,这是 Kubernetes v1.34 中的稳定版本,支持资源指标(CPU/内存)、Pods 指标和外部指标,以及behavior字段精细控制扩缩容行为。详见官方文档:HorizontalPodAutoscaler v2
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: user-service-hpa
namespace: microservices
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: user-service
minReplicas: 3
maxReplicas: 20
metrics:
# CPU 利用率目标
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
# 内存利用率目标
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleUp:
stabilizationWindowSeconds: 60 # 扩容稳定窗口
policies:
- type: Pods
value: 4 # 每次最多扩容 4 个 Pod
periodSeconds: 60
- type: Percent
value: 100 # 或每次扩容当前副本数的 100%
periodSeconds: 60
selectPolicy: Max # 取两个策略中更激进的
scaleDown:
stabilizationWindowSeconds: 300 # 缩容稳定窗口 5 分钟
policies:
- type: Pods
value: 2 # 每次最多缩容 2 个 Pod
periodSeconds: 120
selectPolicy: Min # 取两个策略中更保守的
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: order-service-hpa
namespace: microservices
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: order-service
minReplicas: 3
maxReplicas: 30
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 65
# 自定义指标:HTTP 请求延迟 P99
- type: Pods
pods:
metric:
name: http_request_duration_seconds_p99
target:
type: AverageValue
averageValue: "500m" # 500ms
behavior:
scaleUp:
stabilizationWindowSeconds: 30
policies:
- type: Pods
value: 6
periodSeconds: 60
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Pods
value: 2
periodSeconds: 120
- 扩容要快,缩容要慢:scaleUp.stabilizationWindowSeconds 应设置较小值(30-60s),scaleDown.stabilizationWindowSeconds 应设置较大值(300-600s),避免因流量短暂下降导致频繁缩容。
- 设置合理的 minReplicas:最小副本数不应低于 2,且应通过 Pod 反亲和性分布在不同节点上,确保单节点故障不影响服务可用性。
- 资源指标(CPU/内存)依赖 Metrics Server;自定义指标还需要额外安装 Prometheus Adapter,否则 HPA 无法获取这些指标。
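Prometheus Adapter 需要一条规则把 Prometheus 中的序列映射为 HPA 可读的 Pods 指标。下面是与上文 http_request_duration_seconds_p99 对应的映射示意(假设该序列带有 namespace 和 pod 标签,规则写在 adapter 的配置文件中):

rules:
- seriesQuery: 'http_request_duration_seconds_p99{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "http_request_duration_seconds_p99"
  metricsQuery: avg(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)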
6.2 Pod 反亲和性与 PDB
API 版本说明:PodDisruptionBudget 使用
policy/v1,这是 Kubernetes v1.34 中的稳定版本。PDB 新增了unhealthyPodEvictionPolicy字段,支持IfHealthyBudget和AlwaysAllow两种策略,更灵活地控制不健康 Pod 的驱逐行为。详见官方文档:PodDisruptionBudget v1
---
# 确保用户服务至少有 2 个可用 Pod
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: user-service-pdb
namespace: microservices
spec:
minAvailable: 2
selector:
matchLabels:
app: user-service
---
# 确保订单服务至少有 50% 的 Pod 可用
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: order-service-pdb
namespace: microservices
spec:
maxUnavailable: "50%"
selector:
matchLabels:
app: order-service
---
# 确保支付服务至少有 1 个可用 Pod
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: payment-service-pdb
namespace: microservices
spec:
minAvailable: 1
selector:
matchLabels:
app: payment-service
当需要对集群节点进行维护(如升级 K8s 版本、更换硬件)时,PDB 确保驱逐操作不会导致服务可用副本数低于阈值。如果没有 PDB,kubectl drain 可能会一次性驱逐所有 Pod,导致服务中断。
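前文的 PDB 只覆盖了业务服务,网关同样值得保护。下面补充一个 API Gateway 的 PDB 示意,顺带演示引言中提到的 unhealthyPodEvictionPolicy 字段:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-gateway-pdb
  namespace: gateway
spec:
  minAvailable: 2
  unhealthyPodEvictionPolicy: AlwaysAllow  # 未就绪的 Pod 可直接驱逐,加速节点维护
  selector:
    matchLabels:
      app: api-gateway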
七、监控与日志集成
7.1 Prometheus ServiceMonitor
API 版本说明:ServiceMonitor 使用
monitoring.coreos.com/v1,这是 Prometheus Operator 提供的 CRD,用于定义 Service 的监控目标。详见官方文档:ServiceMonitor CRD
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: microservices-monitor
namespace: microservices
labels:
release: prometheus # 匹配 Prometheus Operator 的 serviceMonitorSelector
spec:
namespaceSelector:
matchNames:
- microservices
- gateway
selector:
  matchLabels:
    # 注意:ServiceMonitor 按 Service 标签选择目标,前文各 Service 仅带有 app 标签,
    # 需要为它们补充 app.kubernetes.io/part-of: ecommerce 标签才能被选中
    app.kubernetes.io/part-of: ecommerce
endpoints:
- port: http
path: /metrics
interval: 15s
scrapeTimeout: 10s
honorLabels: true
relabelings:
- sourceLabels: [__meta_kubernetes_pod_name]
targetLabel: pod
- sourceLabels: [__meta_kubernetes_namespace]
targetLabel: namespace
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: data-services-monitor
namespace: data
labels:
release: prometheus
spec:
namespaceSelector:
matchNames:
- data
selector:
  matchLabels:
    app.kubernetes.io/part-of: ecommerce  # 同样需在 postgresql/redis 的 Service 上补充该标签
endpoints:
- port: metrics  # 需在对应 Service 中声明名为 metrics 的端口(分别指向 9187/9121)
path: /metrics
interval: 30s
scrapeTimeout: 10s
7.2 Grafana Dashboard 配置
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ecommerce-grafana-dashboards
namespace: monitoring
labels:
grafana_dashboard: "1"
data:
microservices-overview.json: |
{
"dashboard": {
"title": "电商微服务总览",
"panels": [
{
"title": "请求 QPS",
"type": "timeseries",
"targets": [
{
"expr": "sum(rate(http_requests_total{namespace=\"microservices\"}[5m])) by (app)"
}
]
},
{
"title": "P99 延迟",
"type": "timeseries",
"targets": [
{
"expr": "histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket{namespace=\"microservices\"}[5m])) by (le, app))"
}
]
},
{
"title": "错误率",
"type": "timeseries",
"targets": [
{
"expr": "sum(rate(http_requests_total{namespace=\"microservices\",status=~\"5..\"}[5m])) by (app) / sum(rate(http_requests_total{namespace=\"microservices\"}[5m])) by (app) * 100"
}
]
},
{
"title": "Pod 副本数",
"type": "stat",
"targets": [
{
"expr": "sum(kube_deployment_status_replicas_available{namespace=\"microservices\"}) by (deployment)"
}
]
}
]
}
}
7.3 告警规则
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: ecommerce-alerts
namespace: microservices
labels:
release: prometheus
spec:
groups:
- name: microservices.alerts
rules:
# 服务不可用告警
- alert: ServiceDown
expr: up{namespace="microservices"} == 0
for: 2m
labels:
severity: critical
annotations:
summary: "服务 {{ $labels.app }} 不可用"
description: "{{ $labels.namespace }} 命名空间中的 {{ $labels.instance }} 已下线超过 2 分钟"
# 高错误率告警
- alert: HighErrorRate
expr: |
sum(rate(http_requests_total{namespace="microservices",status=~"5.."}[5m])) by (app)
/ sum(rate(http_requests_total{namespace="microservices"}[5m])) by (app) > 0.05
for: 5m
labels:
severity: warning
annotations:
summary: "服务 {{ $labels.app }} 错误率过高"
description: "{{ $labels.app }} 的 5xx 错误率已超过 5%,当前值:{{ $value | humanizePercentage }}"
# P99 延迟告警
- alert: HighLatencyP99
expr: |
histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket{namespace="microservices"}[5m])) by (le, app)) > 1
for: 5m
labels:
severity: warning
annotations:
summary: "服务 {{ $labels.app }} P99 延迟过高"
description: "{{ $labels.app }} 的 P99 延迟已超过 1 秒,当前值:{{ $value }}s"
# Pod 重启告警
- alert: PodRestarting
expr: increase(kube_pod_container_status_restarts_total{namespace="microservices"}[1h]) > 3
for: 5m
labels:
severity: warning
annotations:
summary: "Pod {{ $labels.pod }} 频繁重启"
description: "{{ $labels.namespace }} 中的 {{ $labels.pod }} 在过去 1 小时内重启了 {{ $value }} 次"
# HPA 达到上限告警
- alert: HPAAtMaxReplicas
expr: kube_horizontalpodautoscaler_status_current_replicas == kube_horizontalpodautoscaler_spec_max_replicas
for: 15m
labels:
severity: warning
annotations:
summary: "HPA {{ $labels.horizontalpodautoscaler }} 已达到最大副本数"
description: "{{ $labels.namespace }} 中的 {{ $labels.horizontalpodautoscaler }} 已达到最大副本数 {{ $value }},可能需要调整上限"
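告警产生后还需要分级路由(对应第十一节"告警分级"的实践)。下面是一段 Alertmanager 的路由配置示意(接收器名称与 webhook 地址均为假设):

route:
  receiver: default
  group_by: ['alertname', 'namespace']
  routes:
  - matchers:
    - severity="critical"
    receiver: oncall-pager    # critical:立即通知值班人员
  - matchers:
    - severity="warning"
    receiver: team-chat       # warning:推送到团队群,工作时间内处理
receivers:
- name: default
- name: oncall-pager
  webhook_configs:
  - url: http://alert-webhook.monitoring.svc:8080/pagerduty  # 假设的告警网关地址
- name: team-chat
  webhook_configs:
  - url: http://alert-webhook.monitoring.svc:8080/slack      # 假设的告警网关地址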
7.4 Fluent Bit 日志收集
说明:Fluent Bit 以 DaemonSet 方式部署,确保每个节点运行一个日志采集代理。RBAC 配置使用
rbac.authorization.k8s.io/v1,DaemonSet 使用apps/v1。详见官方文档:Fluent Bit Kubernetes Filter
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluent-bit
namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluent-bit
rules:
- apiGroups: [""]
resources: ["pods", "namespaces"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: fluent-bit
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: fluent-bit
subjects:
- kind: ServiceAccount
name: fluent-bit
namespace: monitoring
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluent-bit
namespace: monitoring
labels:
app: fluent-bit
spec:
selector:
matchLabels:
app: fluent-bit
template:
metadata:
labels:
app: fluent-bit
spec:
serviceAccountName: fluent-bit
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluent-bit
image: fluent/fluent-bit:3.0.0
volumeMounts:
- name: varlog
mountPath: /var/log
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: config
mountPath: /fluent-bit/etc/
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 256Mi
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: config
configMap:
name: fluent-bit-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
namespace: monitoring
data:
fluent-bit.conf: |
[SERVICE]
Flush 5
Daemon Off
Log_Level info
Parsers_File parsers.conf
[INPUT]
Name tail
Path /var/log/containers/*.log
# Docker 运行时使用 docker 解析器;containerd/CRI-O 集群(v1.24 起的默认形态)请改用 cri 解析器
Parser docker
Tag kube.*
Refresh_Interval 10
Mem_Buf_Limit 50MB
Skip_Long_Lines On
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude On
[OUTPUT]
# Fluent Bit 的 Elasticsearch 输出插件名为 es
Name es
Match kube.*
Host elasticsearch.monitoring.svc.cluster.local
Port 9200
Index ecommerce-logs
Type _doc
Logstash_Format On
Logstash_Prefix ecommerce
Retry_Limit False
parsers.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
八、CI/CD 集成(ArgoCD)
8.1 GitOps 工作流
GitOps 的核心理念是:Git 仓库是唯一的事实来源(Single Source of Truth)。所有环境变更都通过提交代码来触发,ArgoCD 负责将 Git 仓库中的声明式配置同步到 Kubernetes 集群。
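一个与第 10.1 节部署目录对应的仓库结构示意如下(目录命名为本文约定,并非 ArgoCD 的硬性要求):

ecommerce-k8s/
└── k8s/
    ├── 00-namespace/      # 命名空间与配额
    ├── 01-config/         # ConfigMap / Secret
    ├── 02-storage/        # StorageClass / PVC
    ├── 03-services/       # 业务服务与数据层
    ├── 05-ingress/        # Ingress 路由
    ├── 06-scalability/    # HPA / PDB
    └── 07-monitoring/     # ServiceMonitor / 告警规则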
8.2 ArgoCD Application 配置
API 版本说明:ArgoCD 使用
argoproj.io/v1alpha1,这是 ArgoCD 的稳定 API 版本。AppProject、Application 和 ApplicationSet 均使用此版本。详见官方文档:ArgoCD CRD Reference
---
# ArgoCD 项目定义
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
name: ecommerce
namespace: argocd
spec:
description: "电商微服务项目"
sourceRepos:
- "https://github.com/your-org/ecommerce-k8s.git"
destinations:
- namespace: microservices
server: https://kubernetes.default.svc
- namespace: data
server: https://kubernetes.default.svc
- namespace: gateway
server: https://kubernetes.default.svc
clusterResourceWhitelist:
- group: ""
kind: Namespace
- group: "networking.k8s.io"
kind: IngressClass
orphanedResources:
warn: true
---
# ArgoCD 应用定义
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: ecommerce-infra
namespace: argocd
labels:
app.kubernetes.io/part-of: ecommerce
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: ecommerce
source:
repoURL: https://github.com/your-org/ecommerce-k8s.git
targetRevision: main
path: k8s
directory:
recurse: true
jsonnet: false
destination:
server: https://kubernetes.default.svc
namespace: microservices
syncPolicy:
automated:
prune: true # 自动删除 Git 中不存在的资源
selfHeal: true # 自动修复手动变更
allowEmpty: false
syncOptions:
- CreateNamespace=true
- PrunePropagationPolicy=foreground
- PruneLast=true
- ServerSideApply=true
retry:
limit: 3
backoff:
duration: 5s
factor: 2
maxDuration: 3m
---
# ArgoCD 应用集(App of Apps 模式)
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: ecommerce-appset
namespace: argocd
spec:
generators:
- git:
repoURL: https://github.com/your-org/ecommerce-k8s.git
revision: main
directories:
- path: k8s/*
template:
metadata:
name: "{{ path.basename }}"
spec:
project: ecommerce
source:
repoURL: https://github.com/your-org/ecommerce-k8s.git
targetRevision: main
path: "{{ path }}"
destination:
server: https://kubernetes.default.svc
namespace: "{{ path.basename }}"
syncPolicy:
automated:
prune: true
selfHeal: true
- selfHeal: true 要谨慎使用。开启后,任何手动通过 kubectl 修改的配置都会被 ArgoCD 自动覆盖。建议在开发/测试环境开启,生产环境使用手动同步(见下面的示意)。
- prune: true 会删除 Git 中不存在的资源。误删 Git 中的文件可能导致生产环境资源被意外删除。建议配合 syncOptions: PruneLast=true,让 ArgoCD 最后再执行删除操作。
- 使用 App of Apps 模式管理多应用。通过 ApplicationSet 可以自动发现 Git 仓库中的目录结构,为每个子目录创建一个 Application,避免手动维护大量 Application 资源。
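生产环境采用手动同步的 Application 示意如下(省略 syncPolicy.automated 即为手动触发,仓库地址沿用前文示例):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ecommerce-prod
  namespace: argocd
spec:
  project: ecommerce
  source:
    repoURL: https://github.com/your-org/ecommerce-k8s.git
    targetRevision: main   # 生产环境也可固定到某个 tag 或 commit SHA
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: microservices
  syncPolicy:
    syncOptions:
    - CreateNamespace=true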
九、请求完整链路
下面梳理一个用户下单请求的完整链路,帮助你理解各组件之间的协作关系:客户端请求经 Ingress 完成 TLS 终止后进入 API Gateway,由 Gateway 完成认证与限流并转发给 order-service;order-service 依次调用 user-service(校验用户)、product-service(检查商品)和 payment-service(发起支付),订单数据写入 PostgreSQL,会话与分布式锁依赖 Redis;全程的指标由 Prometheus 抓取,日志由 Fluent Bit 采集。
十、部署与验证
10.1 分步部署
按照依赖关系,从底层到上层依次部署:
# 1. 创建命名空间
kubectl apply -f k8s/00-namespace/
# 2. 部署配置与密钥
kubectl apply -f k8s/01-config/
# 3. 创建存储资源
kubectl apply -f k8s/02-storage/
# 4. 部署数据层(等待 PostgreSQL 和 Redis 就绪)
kubectl apply -f k8s/03-services/postgresql-statefulset.yaml
kubectl apply -f k8s/03-services/redis-statefulset.yaml
# 等待数据层就绪
kubectl wait --for=condition=ready pod \
-l app=postgresql -n data --timeout=120s
kubectl wait --for=condition=ready pod \
-l app=redis -n data --timeout=120s
# 5. 部署业务服务
kubectl apply -f k8s/03-services/user-service.yaml
kubectl apply -f k8s/03-services/product-service.yaml
kubectl apply -f k8s/03-services/order-service.yaml
kubectl apply -f k8s/03-services/payment-service.yaml
# 6. 部署网关
kubectl apply -f k8s/03-services/gateway-deployment.yaml
# 7. 部署 Ingress
kubectl apply -f k8s/05-ingress/
# 8. 部署弹性伸缩与高可用
kubectl apply -f k8s/06-scalability/
# 9. 部署监控
kubectl apply -f k8s/07-monitoring/
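# 10. 等待各业务服务滚动就绪
kubectl rollout status deployment/user-service -n microservices --timeout=180s
kubectl rollout status deployment/product-service -n microservices --timeout=180s
kubectl rollout status deployment/order-service -n microservices --timeout=180s
kubectl rollout status deployment/payment-service -n microservices --timeout=180s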
10.2 健康检查验证
# 检查所有命名空间下的 Pod 状态
kubectl get pods --all-namespaces -l app.kubernetes.io/part-of=ecommerce
# 检查各服务 Endpoints
kubectl get endpoints -n microservices
kubectl get endpoints -n data
kubectl get endpoints -n gateway
# 检查 Ingress 状态
kubectl get ingress -n gateway
kubectl describe ingress ecommerce-ingress -n gateway
# 检查 HPA 状态
kubectl get hpa -n microservices
kubectl describe hpa user-service-hpa -n microservices
# 检查 PDB 状态
kubectl get pdb -n microservices
# 检查 PVC 绑定状态
kubectl get pvc -n data
# 检查 ArgoCD 应用状态
kubectl get applications -n argocd
10.3 压力测试
使用 hey 工具对用户服务进行压力测试:
# 进入集群内部执行测试(或通过 port-forward)
kubectl run hey-test --image=williamyeh/hey:latest --rm -it --restart=Never -- \
-n 100 -c 20 -m POST \
-H "Content-Type: application/json" \
-d '{"username":"testuser","password":"testpass"}' \
http://user-service.microservices.svc.cluster.local:8080/api/v1/users/login
# 模拟并发下单请求
kubectl run hey-test --image=williamyeh/hey:latest --rm -it --restart=Never -- \
-n 500 -c 50 -m POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <token>" \
-d '{"product_id":1,"quantity":2}' \
http://order-service.microservices.svc.cluster.local:8082/api/v1/orders
# 观察 HPA 是否触发扩容
kubectl get hpa -n microservices -w
10.4 故障注入测试
# 测试 1:删除 Pod,验证自动恢复
kubectl delete pod -l app=user-service -n microservices --grace-period=0 --force
# 观察 Pod 是否自动重建
kubectl get pods -l app=user-service -n microservices -w
# 测试 2:验证优雅停机(滚动更新期间不应出现 5xx 错误)
# 在另一个终端持续发送请求(每秒一次,循环打印状态码)
kubectl run curl-test --image=curlimages/curl:latest --rm -it --restart=Never \
  --command -- /bin/sh -c \
  'while true; do curl -s -o /dev/null -w "%{http_code}\n" http://user-service.microservices.svc.cluster.local:8080/healthz; sleep 1; done'
# 测试 3:模拟节点故障(需要多节点集群)
kubectl cordon <node-name> # 标记节点不可调度
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# 观察 Pod 是否迁移到其他节点
kubectl get pods -l app=user-service -n microservices -o wide -w
# 测试 4:验证 PDB 保护
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# 如果驱逐会违反 PDB,drain 会持续重试并提示类似:
# "Cannot evict pod as it would violate the pod's disruption budget."
十一、生产环境最佳实践总结
经过上述完整的实战演练,以下是生产环境中的关键最佳实践:
架构设计
| 实践 | 说明 |
|---|---|
| 命名空间隔离 | 按团队/环境/业务域划分命名空间,配合 ResourceQuota 限制资源 |
| 配置外部化 | 使用 ConfigMap/Secret 管理配置,禁止将配置硬编码在镜像中 |
| 密钥加密 | 使用 Sealed Secrets 或 External Secrets Operator 管理敏感信息 |
| 服务网格可选 | 对于服务间通信复杂度高的场景,考虑引入 Istio/Linkerd |
部署策略
| 实践 | 说明 |
|---|---|
| 滚动更新 | maxSurge: 1, maxUnavailable: 0 确保更新过程中不中断服务 |
| 健康检查三件套 | Startup + Liveness + Readiness Probe,使用不同端点 |
| 优雅停机 | preStop sleep + terminationGracePeriodSeconds 确保请求处理完成 |
| Pod 反亲和性 | 确保同一服务的 Pod 分布在不同节点上 |
弹性伸缩
| 实践 | 说明 |
|---|---|
| HPA 多指标 | 同时关注 CPU、内存和自定义业务指标(如 QPS、延迟) |
| 扩快缩慢 | 扩容窗口短(30-60s),缩容窗口长(300-600s) |
| PDB 保护 | 为关键服务配置 PDB,防止维护操作导致服务不可用 |
| Cluster Autoscaler | 配合节点自动伸缩,当 Pod 因资源不足处于 Pending 状态时自动扩容节点 |
可观测性
| 实践 | 说明 |
|---|---|
| 三大支柱 | Metrics(Prometheus)+ Logs(Fluent Bit)+ Traces(Jaeger/OpenTelemetry) |
| 告警分级 | Critical(立即响应)+ Warning(工作时间内处理)+ Info(记录备案) |
| Dashboard 分层 | 全局总览 -> 服务维度 -> Pod 维度,逐层下钻 |
| SLO/SLI | 定义明确的服务质量目标(如 99.9% 可用性、P99 < 500ms) |
持续交付
| 实践 | 说明 |
|---|---|
| GitOps | Git 仓库作为唯一事实来源,所有变更通过 PR 审批 |
| 环境隔离 | dev -> staging -> production 逐级发布,每级有独立的 ArgoCD Application |
| 镜像标签策略 | 使用 Git Commit SHA 作为镜像 Tag,确保可追溯性 |
| 回滚机制 | ArgoCD 支持一键回滚到 Git 历史中的任意版本 |
生产级 Kubernetes 部署不仅仅是"把 YAML 写对",更是一套涵盖架构设计、部署策略、弹性伸缩、可观测性和持续交付的完整体系。本文通过电商微服务的实战案例,展示了如何将这些最佳实践落地到具体的 YAML 配置中。
希望这篇实战指南能够帮助你在实际项目中少走弯路,构建出真正可靠、可扩展的云原生微服务系统。