Notes on Migrating Test Services to a k8s Cluster (Part 1)

Foreword: the team was given 3 new machines, and all existing services had to be migrated onto them. There are 6 services in total: 2 on the old k8s cluster, the other 4 on physical machines.

The migration is done; this is a record of the process, in roughly 3 steps: building the new k8s cluster, deploying the monitoring/logging stack, and migrating the business services.

k8s Cluster Setup

Since these are new machines, I wanted to try an HA k8s master setup. Ops provided the 3 machines below; SSH trust is established between the two masters, and the service layout follows the old cluster.

Hostname        IP          Role      Services
soa-test-a001   172.2.5.4   master01  monitoring, logging
soa-test-a002   172.2.5.5   master02  CI, Ingress
soa-test-a003   172.2.5.6   node01    business services

With only two masters, we use the stacked etcd topology, as shown:

(figure: kubeadm HA topology, stacked etcd)

Docker is already installed on the machines, so we go straight to installing k8s with kubeadm.


MicroK8s Notes

MicroK8s is a small, fast, secure single-node Kubernetes that installs on almost any Linux machine. Use it for offline development, prototyping, testing, or on a VM as a small, cheap, reliable k8s for CI/CD. It is also great for appliances: develop your IoT apps against k8s and deploy them to your box with MicroK8s.

Changing the default images

  • Edit /var/snap/microk8s/current/args/kubelet and add --pod-infra-container-image=s7799653/pause:3.1
  • Edit /var/snap/microk8s/current/args/containerd-template.toml: under plugins -> plugins.cri, set sandbox_image to s7799653/pause:3.1
  • Restart the services: microk8s.stop, then microk8s.start
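The two edits above can be scripted; a minimal sketch, assuming the snap paths listed above and that the kubelet args file is a flag-per-line file (the function name is mine):

```shell
# Sketch: switch MicroK8s to a mirrored pause image (paths/image from the notes above).
patch_microk8s_image() {  # usage: patch_microk8s_image ARGS_DIR IMAGE
  args_dir=$1; image=$2
  # kubelet: append the flag only if it is not already present
  grep -q 'pod-infra-container-image' "$args_dir/kubelet" 2>/dev/null \
    || echo "--pod-infra-container-image=$image" >> "$args_dir/kubelet"
  # containerd: rewrite the sandbox_image line in the template
  sed -i "s|sandbox_image = .*|sandbox_image = \"$image\"|" "$args_dir/containerd-template.toml"
}

# patch_microk8s_image /var/snap/microk8s/current/args s7799653/pause:3.1
# microk8s.stop && microk8s.start
```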
export PATH=$PATH:/snap/bin # temporary, current shell only
echo "export PATH=$PATH:/snap/bin" >> ~/.bashrc # persistent
snap alias microk8s.kubectl kubectl
snap alias microk8s.ctr ctr
sudo usermod -a -G microk8s ${USER}

dashboard

microk8s.enable dns dashboard

token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s.kubectl -n kube-system describe secret $token

Configure ~/.kube/config

microk8s.config
microk8s.kubectl config view --raw > $HOME/.kube/config
# in the config, change the user name to kubernetes-dashboard
# and set username/password to the token

helm3

helm install nginx bitnami/nginx

prometheus

* DaemonSet, Deployment, StatefulSet, and ReplicaSet resources will no longer be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 by default in v1.16. Migrate to the apps/v1 API, available since v1.9. Existing persisted data can be retrieved/updated via the apps/v1 API.

persistentVolume

Common commands

$ microk8s.enable dashboard dns metrics-server registry istio
$ microk8s.ctr -n k8s.io images pull docker.io/library/cassandra:latest
$ microk8s.ctr -namespace k8s.io images rm docker.io/yandex/clickhouse-server:20 --sync
$ microk8s.ctr --namespace k8s.io images ls | grep -v @sha256 | awk '{print $1,$4$5}'

Two ways to deploy nginx-ingress

externalIPs:
- 192.168.2.12 # one of my node IPs; kube-proxy listens on it
or:
hostNetwork: true
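For the first route, the externalIPs list goes on the controller's Service; a minimal sketch (names and selector are placeholders, only the externalIPs part comes from the note above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # placeholder name
  namespace: ingress-nginx
spec:
  selector:
    app: nginx-ingress             # placeholder selector
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  externalIPs:
  - 192.168.2.12                   # a node IP; kube-proxy listens on it
```

With hostNetwork: true, the controller pod instead binds 80/443 directly on the node, so no Service-level external IP is needed.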

References:

https://github.com/ubuntu/microk8s

https://www.jianshu.com/p/02fd2540fab2

https://github.com/projectatomic/containerd/blob/master/docs/cli.md

Jenkins k8s pipeline

Docker startup script

docker run -d \
-p 8090:8080 -p 50000:50000 \
-u root \
-v /root/jenkins:/var/jenkins_home \
-v /usr/lib64/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
--name jenkins jenkinszh/jenkins-zh:latest

pipeline

pipeline {
    agent any
    environment {
        APP_NAME = "api-monitor-new"
        GIT_BRANCH = "test123"
        GIT_COMMIT_ID = "test123456"
        CHART_NAME = "apple/api-monitor-new"
        CHART_VERSION = 3.0
        NEW_HARBOR_HOST = "harbor.apple.net"
        BUILD_IMAGES_NAME = "${NEW_HARBOR_HOST}/php/${APP_NAME}:${GIT_BRANCH}"
        KUBE_CONFIG = credentials("156485be-dbb8-4f8c-b3a2-15f3535049ad")
        NAMESPACE = "default"
    }

    stages {
        stage('Build') {
            agent {
                docker { image 'harbor.apple.net/php/golang:latest' }
            }
            steps {
                sh 'echo Build stage ...'
                git credentialsId: '1', url: 'http://gitlaball.apple.net/apple/middle/soa/api-monitor-new.git'
                sh 'rm -rf ./output'
                sh 'export CGO_ENABLED=0; chmod +x ./build.sh; ./build.sh'
                sh 'cd ./output && mkdir ./app && tar -zxvf *.gz -C ./app'
            }
        }
        stage('Build Image') {
            agent {
                docker { image 'harbor.apple.net/php/docker:stable' }
            }
            steps {
                withCredentials([usernamePassword(credentialsId: 'a00b4d01-c2e7-49af-9b1f-fcc382116911', usernameVariable: 'HARBOR_USER', passwordVariable: 'HARBOR_PWD')]) {
                    sh 'docker login -u "${HARBOR_USER}" -p "${HARBOR_PWD}" "${NEW_HARBOR_HOST}"'
                }
                // each sh step starts a fresh shell, so cd must be chained with the commands that need it
                sh 'cd ./output/app && echo Dockerfile01 > .dockerignore'
                sh 'cd ./output/app && echo FROM harbor.apple.net/php/alpine > Dockerfile01 && echo COPY . /app >> Dockerfile01'
                sh 'cd ./output/app && docker build -t "$BUILD_IMAGES_NAME" -f Dockerfile01 .'
                sh 'docker push "$BUILD_IMAGES_NAME"'
                sh 'docker images'
            }
        }
        stage('Deploy') {
            agent {
                docker { image 'harbor.apple.net/php/helm:3.0-rc2' }
            }
            steps {
                sh 'mkdir -p ~/.kube && cat ${KUBE_CONFIG} > ~/.kube/config'
                sh 'helm repo add apple https://harbor.apple.net/chartrepo/php'
                sh 'helm repo update'
                withCredentials([usernamePassword(credentialsId: 'a00b4d01-c2e7-49af-9b1f-fcc382116911', usernameVariable: 'HARBOR_USER', passwordVariable: 'HARBOR_PWD')]) {
                    sh 'helm upgrade ${APP_NAME} --install \
                        --set image.tag=${APP_TAG} \
                        --set gitBranch=${GIT_BRANCH} \
                        --set commitId=${GIT_COMMIT_ID} \
                        --username ${HARBOR_USER} --password ${HARBOR_PWD} \
                        --set nodeSelector.role="node" \
                        --force --wait --atomic --debug \
                        --namespace=${NAMESPACE} \
                        ${CHART_NAME} --version ${CHART_VERSION} '
                    sh 'echo "soa.cicd.${APP_NAME}.status $? `date +%s`" | nc 172.2.5.5 32003'
                }
                sh 'helm list'
            }
        }
    }
}
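The `nc` line at the end of the Deploy stage sends one sample in the Graphite plaintext protocol, which is just `metric value timestamp` on a single line; a sketch of building such a line (the function name is mine):

```shell
# Build a Graphite plaintext sample: "<metric.path> <value> <unix-timestamp>"
graphite_line() {  # usage: graphite_line METRIC VALUE [TIMESTAMP]
  printf '%s %s %s\n' "$1" "$2" "${3:-$(date +%s)}"
}

# In the pipeline this is piped straight to the collector:
#   graphite_line "soa.cicd.${APP_NAME}.status" "$?" | nc 172.2.5.5 32003
```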

References:

Gitlab + Jenkins Pipeline + Docker + k8s + Helm automated deployment in practice

Integrating Git Secrets into Jenkins CI/CD

Jenkins credentials management, all in one article

Hiding passwords in Jenkins pipelines

Managing a Blog with Hexo

Pull the hexo image

docker pull neofelhz/hexo-docker

Start the container and map a local directory

docker run -itd \
-v /home/runner/work/www/blog:/www/blog \
-w="/www/blog" \
-p 4000:4000 \
--name hexo-test \
neofelhz/hexo-docker \
/bin/sh

Enter the container

docker exec -it hexo-test /bin/sh

Run hexo init; initialization takes quite a while, presumably due to the network.

/www/blog # hexo init 
INFO Cloning hexo-starter to /www/blog
Cloning into '/www/blog'...
remote: Enumerating objects: 8, done.
remote: Counting objects: 100% (8/8), done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 139 (delta 2), reused 2 (delta 0), pack-reused 131
Receiving objects: 100% (139/139), 25.72 KiB | 21.00 KiB/s, done.
Resolving deltas: 100% (64/64), done.
Submodule 'themes/landscape' (https://github.com/hexojs/hexo-theme-landscape.git) registered for path 'themes/landscape'
Cloning into '/www/blog/themes/landscape'...
......
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
warning Your current version of Yarn is out of date. The latest version is "1.19.1" while you're on "1.3.2".
info To upgrade, run the following command:
$ curl -o- -L https://yarnpkg.com/install.sh | bash
Done in 112.25s.
INFO Start blogging with Hexo!


Querying Logs with Loki


Loki is a log query tool from Grafana Labs. Unlike ES, it indexes only the labels, not the log content, which makes it much lighter.


Helm repo

helm repo add loki https://grafana.github.io/loki/charts
helm repo update

You can set a nodeSelector on loki; do not set one on promtail.

Query examples

{job="ingress-nginx/nginx-ingress"} |="php-sht-payment-develop-http" |="refund/create"
{job="php-sht/payment-develop",stream="neo-log"} !="ShopNotifyJob"
{job=~"php-sht/payment-develop.*"} |~"shop_refund" !~"15712" # regex

promtail is loki's collection agent. On k8s it is deployed with service discovery to watch the stdout/stderr of every container. For business log files, promtail can run as a sidecar inside the service pod, with the log files mounted locally and pushed to loki.

A basic promtail.yaml configuration

server:
  http_listen_port: 3101
scrape_configs:
- job_name: payment-develop
  entry_parser: raw
  static_configs:
  - targets:
    - localhost
    labels:
      job: php-sht/payment-develop
      stream: neo-log
      __path__: /var/www/payment/runtime/logs/*.log


Using CoreDNS

core-dns-conf configuration

. {
    forward . 8.8.8.8
    log
    hosts {
        10.111.8.170 www.sms.service
        ttl 60
        reload 1m
        fallthrough
    }
}

Starting via Docker

sudo systemctl stop systemd-resolved
docker run -d \
--net="host" \
-v /etc/hosts:/etc/hosts \
-v /etc/resolv.conf:/etc/resolv.conf \
-v /home/runner/work/coredns/core-dns/etc/core-dns-conf:/etc/core-dns-conf \
--name core-dns \
coredns/coredns -conf /etc/core-dns-conf

Service discovery with etcd

# ref: https://www.cnblogs.com/leffss/p/10148507.html
etcdctl put /coredns/net/apple/t1/a '{"host":"10.111.8.185","ttl":30}'

$ ETCDCTL_API=3 ./etcdctl put /skydns/com/example/services/users \
'{"host": "192.0.2.10","port": 20020,"priority": 10,"weight": 20}'
OK
$ ETCDCTL_API=3 ./etcdctl get /skydns/com/example/services/users
/skydns/com/example/services/users
{"host": "192.0.2.10","port": 20020,"priority": 10,"weight": 20}
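The etcd keys above are just the DNS labels reversed under the plugin's path (/skydns by default); a small helper to compute the key (the function name is mine):

```shell
# Map a DNS name to the etcd key used by the CoreDNS etcd plugin:
# labels reversed, joined under the configured path prefix.
skydns_key() {  # usage: skydns_key DOMAIN [PREFIX]
  domain=$1; prefix=${2:-/skydns}
  key=$prefix
  for label in $(echo "$domain" | tr '.' '\n' | tac); do
    key="$key/$label"
  done
  echo "$key"
}
```

For example, skydns_key users.services.example.com gives /skydns/com/example/services/users, matching the put above.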

. {
    etcd {            # enable the etcd plugin; a zone can follow, e.g. etcd test.com {
        stubzones     # enable stub zones; a stubzone is resolved only from the etcd tree below the first configured zone
        path /coredns # path inside etcd; defaults to /skydns, all DNS records are stored under this prefix
        endpoint http://172.16.101.209:2379 # etcd endpoints, space-separated if more than one

        # upstream: resolvers used for external names found in etcd (think of CNAMEs pointing outside)
        upstream 8.8.8.8:53 8.8.4.4:53

        fallthrough   # if the zone matches but no record can be produced, pass the request to the next plugin
        # tls CERT KEY CACERT # optional: etcd client certificates
    }
    prometheus :9153  # metrics plugin
    cache 160
    loadbalance       # round-robin over DNS records
    forward . 8.8.8.8:53 8.8.4.4:53 # anything not answered from etcd is forwarded to these DNS servers
    log               # query logging
}

With a traditional DNS server such as BIND, administrators usually manage the primary zone data as files. More recently, DNS servers have started to support loading primary zone data from other sources, such as databases.

docker run --rm -u $(id -u):$(id -g) -v $PWD:/go golang:1.12 \
/bin/bash -c \
"git clone https://github.com/coredns/coredns.git && \
cd coredns && \
git checkout v1.5.0"

Installing gitlab-runner with Helm

gitlab-runner

# fetch the chart
helm repo add gitlab https://charts.gitlab.io
helm pull gitlab/gitlab-runner --untar

# label the CI node
kubectl get nodes --show-labels
kubectl label nodes node-a002 ci=true

# install gitlab-runner
helm upgrade gitlab-runner-01 --install --namespace gitlab \
--set checkInterval=2 \
--set runners.image=alpine:latest --set runners.imagePullPolicy=if-not-present --set runners.tags=k8s-01 \
--set gitlabUrl=http://gitlab.******.net/,runnerRegistrationToken=AxwjhfK7bb8eDCs5PN --set runners.privileged=true \
--set gitRunnerCacheDir=/volume \
--set nodeSelector.ci=true \
.

Mounting a directory

In configmap.yaml, append at the end of entrypoint:

{{ if .Values.gitRunnerCacheDir }}
cat >>/home/gitlab-runner/.gitlab-runner/config.toml <<EOF
  [[runners.kubernetes.volumes.host_path]]
    name = "git-runner-cache"
    mount_path = {{ .Values.gitRunnerCacheDir | quote }}
    host_path = {{ .Values.gitRunnerCacheDir | quote }}
EOF
{{- end }}
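Outside the chart template, the same append can be sketched as a plain shell helper (the function name is mine; the stanza mirrors the heredoc above):

```shell
# Append a host_path volume to a gitlab-runner config.toml, as the
# patched entrypoint above does with the chart's gitRunnerCacheDir value.
add_cache_volume() {  # usage: add_cache_volume CONFIG_TOML CACHE_DIR
  cat >>"$1" <<EOF
  [[runners.kubernetes.volumes.host_path]]
    name = "git-runner-cache"
    mount_path = "$2"
    host_path = "$2"
EOF
}
```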

The runner reports it has no permission to create jobs:

ERROR: Job failed (system failure): pods is forbidden: User "system:serviceaccount:gitlab:default" cannot create resource "pods" in API group "" in the namespace "gitlab"

Add a role binding:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: gitlab
  name: gitlab-admin-role
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gitlab-admin-rolebinding
  namespace: gitlab
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab
roleRef:
  kind: Role
  name: gitlab-admin-role
  apiGroup: rbac.authorization.k8s.io

Skipping the fetch

deploy_all:
  variables:
    GIT_STRATEGY: none
    GIT_CHECKOUT: "false"
  stage: deploy

References:

Configuring a distributed cache for GitLab CI with S3 storage

A translation of the official GitLab CI yaml configuration reference

Helm Commands: Introduction and Usage

Commands

helm repo list
helm search
helm list # list installed releases
helm del --purge istio-init # delete a release
helm fetch stable/grafana # download a chart locally
helm push mysql-0.3.5.tgz myrepo
helm repo add myrepo https://xx.xx.xx.xx/chartrepo/charts # add a repo
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add bitnami https://charts.bitnami.com/bitnami # add a repo

helm list --deleted
helm rollback mq-exporter 9 -n soa # roll back a release

helm create hello_test
helm package ./hello_test/ # package a chart
helm install ./hello_test-0.1.0.tgz --debug --dry-run # debug / dry run
helm get manifest # print a release's yaml by release name
helm status wintering-rodent

helm plugin install https://github.com/chartmuseum/helm-push # install the push plugin
helm repo add mylibrary http://harbor.local.com:8082/chartrepo/library
helm push --username=runner --password=****** hello_test mylibrary

helm fetch stable/redis
helm push redis-8.1.2.tgz -urunner -p****** mylibrary -v 0.2.0


docker run -ti --rm --entrypoint /bin/sh alpine/helm:2.9.0
export HELM_HOST=10.102.49.77:44134 # point at the tiller address, e.g. 10.111.8.171:44134
helm list
helm init --client-only

helm upgrade -f panda.yaml happy-panda stable/mariadb # upgrade with a values file
helm template helm/istio -name istio -namespace istio-system -f my-values.yaml > my-isti.yaml # render the manifests from templates, no tiller server needed
helm template istio -name istio -f book-values.yaml -namespace istio-system | kubectl apply -f -

helm delete --purge # release history is stored in ConfigMaps in the kube-system namespace
helm status

helm inspect values . # show a chart's configurable values
helm inspect values yhdx/community --version 0.2.0
helm get values zeroed-gnat -a # show a release's computed values

helm --set a=b,c=d
helm --set name={a,b,c}
helm --set server[0].port=80

--timeout
--wait

helm rollback pgadm 2 -n kube-public

helm init --service-account
helm install . --debug --dry-run --set favoriteDrink=tea # override values with --set
helm install stable/drupal --set image=my-registry/drupal:0.1.0 --set livenessProbe.exec.command=[cat,docroot/CHANGELOG.txt] --set livenessProbe.httpGet=null
helm upgrade sanguine-panther --set image1.tag=0.3 --set imagePullPolicy=Always .
helm upgrade nginx-ingress -f ingress-nginx.yaml stable/nginx-ingress

# run a local chartmuseum repo
docker run --rm -itd \
-p 8089:8080 \
-e DEBUG=1 \
-e STORAGE=local \
-e STORAGE_LOCAL_ROOTDIR=/charts \
-v /home/runner/work/k8s/chartmuseum/charts:/charts \
--name my_chartmuseum chartmuseum/chartmuseum:latest
helm repo add myChartMuseum http://172.16.101.197:8089

helm upgrade --install --force

sed -i "s/api-monitor/hyperf-skelecton/g" `grep payment -rl ./hyperf-skelecton`
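The closing one-liner pipes grep -rl into sed -i with no preview; a slightly safer sketch that first shows which files will change (the function name is mine):

```shell
# Bulk find-and-replace across a tree: list the matching files, then edit in place.
bulk_replace() {  # usage: bulk_replace FROM TO DIR
  files=$(grep -rl "$1" "$3") || return 0  # nothing matches: nothing to do
  echo "$files"                            # preview the files about to change
  echo "$files" | xargs sed -i "s/$1/$2/g" # note: assumes no spaces in file names
}

# bulk_replace api-monitor hyperf-skelecton ./hyperf-skelecton
```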
