Using CCE

Posted by Shi Hai's Blog on April 1, 2026

Creating Resources on CCE

Creating a Pod

Save the following file as hello.yaml and run kubectl apply -f hello.yaml. Once it completes, kubectl get pod hello-world-pod shows the Pod's status, and kubectl logs hello-world-pod prints the echoed message.

apiVersion: v1
kind: Pod
metadata:
  name: hello-world-pod
spec:
  containers:
  - name: hello-container
    image: busybox
    command: ["echo"]
    args: ["Hello World from CCE Kubernetes!"]
  restartPolicy: Never

Creating a Deployment and Service

A Service forwards access requests over TCP and UDP, providing the cluster with layer-4 load balancing. To create a simple Service, run kubectl apply -f app.yaml to create the Deployment and Service, then run kubectl get deployments and kubectl get endpoints to inspect them.

cat > app.yaml << EOF
# 1. Create the Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80

---
# 2. Create the Service (exposes the app externally)
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
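By default Kubernetes assigns the NodePort a random port in the cluster's NodePort range (30000-32767 by default); you can pin it instead. A minimal sketch of the same Service spec with a fixed port (30080 is an arbitrary example):

```yaml
# Optional: pin the NodePort instead of letting Kubernetes pick one
# (the value must fall inside the cluster's NodePort range)
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80        # Service port inside the cluster
    targetPort: 80  # container port
    nodePort: 30080 # fixed port exposed on every node
```

The pods are then reachable from outside the cluster at http://<node-ip>:30080.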

Creating a Deployment and Ingress

A Service alone cannot satisfy the large volume of HTTP/HTTPS traffic at the application layer, so Kubernetes provides another, HTTP-based access method: Ingress.

cat > ingress-app.yaml << EOF
# 1. Deployment: runs the container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80

---
# 2. Service: backend for the Ingress (ClusterIP is sufficient)
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80

---
# 3. Ingress: external access by domain/path
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx  # ingressClassName is a spec field, not an annotation; CCE supports the nginx class by default
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc   # the Service name goes here
            port:
              number: 80
EOF
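The Ingress above matches every hostname; to route by domain, add a host field to the rule. A sketch of the same spec with host-based routing, using the placeholder domain web.example.com:

```yaml
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com   # placeholder; use a domain that resolves to the ingress entry point
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```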

DestinationRule

A DestinationRule governs how traffic is split once it reaches a Service. Configure a DestinationRule for the relevant Service:

cat > dr.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-dr
  namespace: default  # same namespace as your workload's Service
spec:
  host: web-svc   # replace with the real Service name in your cluster
  # Global traffic policy: mTLS + circuit breaking
  trafficPolicy:
    # Enable ASM/Istio mutual TLS
    tls:
      mode: ISTIO_MUTUAL
    # Circuit breaking + connection-pool protection
    connectionPool:
      tcp:
        maxConnections: 50
      http:
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 30s
  # Version subsets for canary releases: Pods labeled "version: v1" form v1, those labeled "version: v2" form v2
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
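The subsets above only take effect if the Pods actually carry the version labels. A sketch of how a backing Deployment's Pod template would be labeled for the v2 subset (the Deployment name here is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      version: v2
  template:
    metadata:
      labels:
        app: web        # matched by the web-svc Service selector
        version: v2     # matched by the DestinationRule's v2 subset
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
```

The Service selects only on app: web, so both v1 and v2 Pods become its endpoints; the DestinationRule subsets then partition those endpoints by the version label.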

VirtualService

The flow is: label the Pods → group them into subsets → shift traffic with a VirtualService.

cat > vs.yaml << EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-vs  # VirtualService name, your choice (ideally matches the service name)
  namespace: default # must be in the same namespace as your Service, DestinationRule, and Pods
spec:
  hosts:  # the target Service to control (the K8s Service name; must match the DR's host)
  - web-svc
  http:   # HTTP routing rules (the core of canary traffic splitting)
  - route: # traffic routing configuration
    - destination:
        host: web-svc  # target Service name, same as hosts above
        subset: v1     # references subset v1 from the DR (Pods labeled version: v1)
      weight: 70       # 70% of traffic goes to v1 (the stable version)
    - destination:
        host: web-svc
        subset: v2     # references subset v2 from the DR (Pods labeled version: v2)
      weight: 30       # 30% of traffic goes to v2 (the canary version)
EOF
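Besides weighted splitting, a VirtualService can route by request attributes, for example sending testers with a specific header straight to the canary. A sketch of the http section under that approach (the x-canary header name is an assumption for illustration):

```yaml
  http:
  - match:                # rule 1: requests carrying this header...
    - headers:
        x-canary:         # illustrative header name
          exact: "true"
    route:
    - destination:
        host: web-svc
        subset: v2        # ...go straight to the canary
  - route:                # rule 2: everything else stays on the stable version
    - destination:
        host: web-svc
        subset: v1
```

Rules are evaluated in order, so the catch-all route must come last.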

Using Helm

Helm is a package manager: it bundles Kubernetes manifests into charts, but the Pods, Services, and other resources it creates are still applied through the same Kubernetes API that kubectl talks to.

# Install kubectl
# Download the binary (Aliyun mirror, fast from mainland China)
curl -LO https://mirrors.aliyun.com/kubernetes-release/release/v1.28.2/bin/linux/amd64/kubectl
# Make it executable
chmod +x kubectl
# Move it into the PATH
sudo mv kubectl /usr/local/bin/

# Connect to the CCE cluster: copy the kubectl config file to the Linux workstation
# In the CCE console, open the cluster's connection information, create a public endpoint, and copy the kubectl configuration
mkdir -p ~/.kube
# Paste the copied kubectl configuration into this file
vi ~/.kube/config
# Verify the connection; on success kubectl lists the cluster's nodes
kubectl get nodes

# Add Helm repositories
helm repo add grafana "https://helm-charts.itboon.top/grafana" --force-update
helm repo add prometheus-community "https://helm-charts.itboon.top/prometheus-community" --force-update
helm repo add ingress-nginx "https://helm-charts.itboon.top/ingress-nginx" --force-update
# The bitnami repo is needed for the bitnami/nginx commands below
helm repo add bitnami https://charts.bitnami.com/bitnami --force-update
helm repo update

# Install an application from a Helm repository
helm install my-nginx bitnami/nginx
# Check that the application deployed successfully
kubectl get deployments
# Upgrade the application; --version pins the chart version
helm upgrade my-nginx bitnami/nginx --version 22.1.1
# If the upgrade goes wrong, roll my-nginx back to the previous revision
helm history my-nginx
helm rollback my-nginx 1
# Install from a local chart package
# Download the chart:
helm pull bitnami/nginx
# Unpack it:
tar zxvf nginx-xxx.tgz
# Install:
helm install my-new-nginx ./nginx

# If the nginx install fails, Docker Hub may be unreachable; delete the broken release and reinstall with an explicit image source
# First uninstall the broken release
helm uninstall my-nginx
# Reinstall from a reachable registry (key overrides: image.registry and image.repository)
helm install my-nginx bitnami/nginx \
  --set global.security.allowInsecureImages=true \
  --set image.registry=swr.ap-southeast-1.myhuaweicloud.com \
  --set image.repository=shihai/nginx \
  --set image.tag=latest \
  --set "image.pullSecrets[0]=swr-secret"

helm install my-nginx bitnami/nginx \
  --set global.security.allowInsecureImages=true \
  --set image.registry=01d9706e6899472e826127aad6ca766d.mirror.swr.myhuaweicloud.com \
  --set image.repository=nginx \
  --set image.tag=latest \
  --set "image.pullSecrets[0]=swr-secret"
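The long --set chains above can also be captured in a values file, which is easier to review and keep in version control. A sketch mirroring the first reinstall command (the values structure follows the bitnami/nginx chart's image/global keys used above):

```yaml
# values-swr.yaml -- overrides for bitnami/nginx
global:
  security:
    allowInsecureImages: true
image:
  registry: swr.ap-southeast-1.myhuaweicloud.com
  repository: shihai/nginx
  tag: latest
  pullSecrets:
    - swr-secret
```

Then install with: helm install my-nginx bitnami/nginx -f values-swr.yaml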