When processing working-hours data, we sometimes need to count the working days in a given month and report on what employees did during working hours. Statutory working time differs from country to country, so how do we find the working days for a particular country?
Counting the working days in each month can be done with an existing library. In Python, for example, the workalendar library ships working-day and holiday data for many countries.
from workalendar.asia import China
from datetime import datetime
cal = China()
# Number of working days in October 2021 (China calendar)
print(cal.get_working_days_delta(datetime(2021, 10, 1), datetime(2021, 10, 31)))
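workalendar knows each country's public holidays. When the library is not available, a plain weekday count (which ignores public holidays) can be sketched with the standard library alone; this is a fallback illustration, not a replacement for workalendar:

```python
import calendar
from datetime import date

def weekday_count(year: int, month: int) -> int:
    """Count Monday-Friday days in a month (ignores public holidays)."""
    _, days = calendar.monthrange(year, month)
    return sum(1 for d in range(1, days + 1)
               if date(year, month, d).weekday() < 5)

print(weekday_count(2021, 10))  # 21 plain weekdays in October 2021
```

For China the workalendar result will be lower in October, since the National Day holiday removes several of those weekdays.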
SIP stands for "System Integrity Protection". It locks down certain directories and system applications on macOS. That can get in the way of some usage or settings: running certain terminal commands fails with "Operation not permitted".
Run csrutil status in the terminal to see whether SIP is enabled or disabled.
csrutil status
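The status line can also be checked from a script. A small parsing sketch, assuming the typical one-line output of csrutil (the exact wording may vary across macOS versions):

```python
def parse_csrutil_status(output: str) -> bool:
    """Return True if SIP is enabled, based on `csrutil status` output."""
    # Typical output: "System Integrity Protection status: enabled."
    state = output.split(":")[-1].strip().rstrip(".").lower()
    return state == "enabled"

print(parse_csrutil_status("System Integrity Protection status: enabled."))   # True
print(parse_csrutil_status("System Integrity Protection status: disabled."))  # False
```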
To disable SIP: reboot while holding Command + R to enter Recovery Mode, run csrutil disable in the terminal, then reboot.
To re-enable SIP: enter Recovery Mode with Command + R again, run csrutil enable, then reboot.
Parallels Desktop is virtual machine software for the Mac. It can run Windows, Linux, and other systems, and also supports running Docker and other containers inside a VM.
Parallels Desktop has one big annoyance: when a VM starts, a control window pops up and stays open until the VM shuts down. It covers other windows and really hurts the experience.
Worse, if you accidentally close that window, the VM shuts down with it, which is very inconvenient.
Parallels Desktop ships a command-line tool, prlctl, which lets you manage your VMs from the terminal.
List all virtual machines
prlctl list -a
Check the status of a virtual machine
prlctl status <vm_name>
Start a virtual machine
prlctl start <vm_name>
Stop a virtual machine
prlctl stop <vm_name>
See prlctl --help for more commands.
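When scripting several VMs at once, the prlctl calls can be wrapped. A minimal Python sketch using subprocess; the VM name win11 is a hypothetical example:

```python
import subprocess

def prlctl(action, vm_name=None, *flags):
    """Build a prlctl command line; vm_name is optional (e.g. for 'list')."""
    cmd = ["prlctl", action]
    if vm_name:
        cmd.append(vm_name)
    cmd.extend(flags)
    return cmd

def run(action, vm_name=None, *flags):
    # Requires Parallels Desktop's prlctl binary on PATH.
    return subprocess.run(prlctl(action, vm_name, *flags),
                          capture_output=True, text=True)

print(prlctl("list", None, "-a"))   # ['prlctl', 'list', '-a']
print(prlctl("start", "win11"))    # ['prlctl', 'start', 'win11']
```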
The operators below work on Google and, in part, on Baidu web and image search. `.` and `*` act as regex-style wildcards, and "" forces an exact keyword match.
Keyword | Purpose | Example |
---|---|---|
intitle | Restrict the keyword to page titles | intitle:insight |
allintitle | Restrict keywords to page titles; allows multiple keywords | allintitle:insight kpanda |
intext | Restrict the keyword to page body text | intext:insight |
allintext | Restrict keywords to page body text; allows multiple keywords | allintext:insight kpanda |
inurl | Restrict the keyword to page URLs | inurl:insight |
allinurl | Restrict keywords to page URLs; allows multiple keywords | allinurl:insight |
site | Restrict results to a given site | site:daocloud.io |
"", 《》 | Keep the quoted characters together as one phrase | intitle:"insight", intitle:《insight》 |
. | Match any single character | intitle:"i.sight" |
* | Match characters of any length | inurl:"*insight" |
- | Exclude a keyword from the results | intitle:insight -daocloud, intitle:insight site:"daocloud.io" -docs |
+ | Require a keyword in the results | intitle:insight +daocloud |
filetype | Restrict results to a file type, commonly ppt, pdf, etc.; matches can be downloaded directly | site:daocloud.io filetype:pdf intitle:insight |
location | Adjust the assumed search location, to work around location-weighted results | |
link | Find every page that links to a given URL | |
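These operators compose into a single query string. A small helper sketch; the function name and parameters are illustrative, not part of any search API:

```python
def build_query(keywords, site=None, exclude=(), filetype=None, intitle=None):
    """Compose a search query string from the operators in the table above."""
    parts = list(keywords)
    if intitle:
        parts.insert(0, f"intitle:{intitle}")
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    parts.extend(f"-{word}" for word in exclude)
    return " ".join(parts)

print(build_query(["insight"], site="daocloud.io", exclude=["docs"], filetype="pdf"))
# insight site:daocloud.io filetype:pdf -docs
```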
As of 2022, here is how to find a working JetBrains license server; it activates all products. If one stops working, just find another.
Open this site: https://search.censys.io/
In the search box enter: services.http.response.headers.location: account.jetbrains.com/fls-auth
Pick any site from the results, open it, and find the HTTP & 302 redirect target.
Copy that URL into JetBrains: choose License server, paste the URL you just copied, and activate.
If it reports expired or fails to activate, go back to the result list and try a few more.
What I see as Raycast's advantages: its plugin ecosystem is very active and writing your own plugin is easy; it looks great; and the features on show are free to use.
This tutorial covers the manual installation and upgrade path.
The default install does not include Skoala; during installation planning you can edit manifest.yaml to enable automatic installation of Skoala.
Check the pre-install environment:
./dce5-installer install-app -m /sample/manifest.yaml
The upcoming release (2022.12.15) installs Skoala by default; it is still recommended to check manifest.yaml and make sure the installer will install Skoala.
enable must be true, and the matching helmVersion must be specified:
...
components:
  skoala:
    enable: true
    helmVersion: v0.12.2
    variables:
...
Check whether the skoala-system namespace contains the resources below; if there are no resources at all, Skoala has indeed not been installed.
~ kubectl -n skoala-system get pods
NAME READY STATUS RESTARTS AGE
hive-8548cd9b59-948j2 2/2 Running 2 (3h48m ago) 3h48m
sesame-5955c878c6-jz8cd 2/2 Running 0 3h48m
ui-7c9f5b7b67-9rpzc 2/2 Running 0 3h48m
~ helm -n skoala-system list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
skoala skoala-system 3 2022-12-16 11:17:35.187799553 +0800 CST deployed skoala-0.13.0 0.13.0
Skoala needs MySQL to store its configuration, so make sure the database exists; also check whether common-mysql contains a database named skoala.
~ kubectl -n mcamel-system get statefulset
NAME READY AGE
mcamel-common-mysql-cluster-mysql 2/2 7d23h
The recommended database settings for skoala are as follows:
All of Skoala's monitoring relies on Insight, so the matching insight-agent must be installed in the cluster.
Impact on Skoala:
if skoala-init was installed first, you currently have to reinstall skoala-init after installing insight-agent.
If the skoala database in common-mysql is empty, log in to it and run the following SQL:
CREATE TABLE `registry` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`uid` varchar(32) DEFAULT NULL,
`name` varchar(50) NOT NULL,
`type` varchar(50) NOT NULL,
`addresses` varchar(1000) NOT NULL,
`namespaces` varchar(2000) NOT NULL,
`deleted_at` timestamp NULL COMMENT 'Time deleted',
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`updated_at` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
PRIMARY KEY (`id`),
UNIQUE KEY `idx_uid` (`uid`),
UNIQUE KEY `idx_name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
CREATE TABLE `book` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`uid` varchar(32) DEFAULT NULL,
`name` varchar(50) NOT NULL,
`author` varchar(32) NOT NULL,
`status` int(1) DEFAULT 1 COMMENT '0: delisted, 1: listed',
`isPublished` tinyint(1) unsigned NOT NULL DEFAULT 1 COMMENT '0: unpublished, 1: published',
`publishedAt` timestamp NULL DEFAULT NULL COMMENT 'Publication time',
`deleted_at` timestamp NULL COMMENT 'Time deleted',
`createdAt` timestamp NOT NULL DEFAULT current_timestamp(),
`updatedAt` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
PRIMARY KEY (`id`),
UNIQUE KEY `idx_uid` (`uid`),
UNIQUE KEY `idx_name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
CREATE TABLE `api` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`is_hosted` tinyint DEFAULT 0,
`registry` varchar(50) NOT NULL,
`service_name` varchar(200) NOT NULL,
`nacos_namespace` varchar(200) NOT NULL COMMENT 'Nacos namespace id',
`nacos_group_name` varchar(200) NOT NULL,
`data_type` varchar(100) NOT NULL COMMENT 'JSON or YAML.',
`detail` mediumtext NOT NULL,
`deleted_at` timestamp NULL COMMENT 'Time deleted',
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`updated_at` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
PRIMARY KEY (`id`),
UNIQUE KEY `idx_registry_and_service_name` (`registry`, `service_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
INSERT INTO `book` VALUES (1,'book-init','MicroService Pattern','daocloud',1,1,'2022-03-23 13:50:00',null,now(),now());
alter table registry add is_hosted tinyint default 0 not null after namespaces;
alter table registry add workspace_id varchar(50) not null DEFAULT 'default' after uid;
alter table registry add ext_id varchar(50) null after workspace_id;
drop index idx_name on registry;
create unique index idx_name on registry (name, workspace_id);
After these steps the skoala database contains 3 tables; double-check that every statement above actually took effect.
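The ALTER statements above replace the global unique index on registry.name with a composite one, making names unique per workspace rather than per installation. An illustrative sqlite3 sketch (not MySQL, but the unique-index semantics carry over); table contents are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE registry (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    workspace_id TEXT NOT NULL DEFAULT 'default',
    name TEXT NOT NULL)""")
# Same shape as the recreated index: unique per (name, workspace_id)
con.execute("CREATE UNIQUE INDEX idx_name ON registry (name, workspace_id)")

con.execute("INSERT INTO registry (workspace_id, name) VALUES ('ws-a', 'nacos-prod')")
# Same name in another workspace is now allowed:
con.execute("INSERT INTO registry (workspace_id, name) VALUES ('ws-b', 'nacos-prod')")
try:
    # Same name in the same workspace is still rejected:
    con.execute("INSERT INTO registry (workspace_id, name) VALUES ('ws-a', 'nacos-prod')")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```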
Once the skoala repository is configured, you can browse and pull Skoala's application charts:
~ helm repo add skoala-release https://release.daocloud.io/chartrepo/skoala
~ helm repo update
Key point: after adding skoala-release, there are 2 charts you will routinely work with:
By default, once skoala is installed into kpanda-global-cluster (the global management cluster), the microservice engine entry shows up in the sidebar.
On the global management cluster, fetch the latest Skoala version straight from the helm repo:
~ helm repo update skoala-release
~ helm search repo skoala-release/skoala --versions
NAME CHART VERSION APP VERSION DESCRIPTION
skoala-release/skoala 0.13.0 0.13.0 The helm chart for Skoala
skoala-release/skoala 0.12.2 0.12.2 The helm chart for Skoala
skoala-release/skoala 0.12.1 0.12.1 The helm chart for Skoala
skoala-release/skoala 0.12.0 0.12.0 The helm chart for Skoala
......
Deploying skoala pulls in the latest frontend at that time; to pin a specific frontend ui version, look up the tag in the frontend repository: https://gitlab.daocloud.cn/ndx/frontend-engineering/skoala-ui/-/tags
On the worker cluster, fetch the latest skoala-init version from the helm repo the same way:
~ helm repo update skoala-release
~ helm search repo skoala-release/skoala-init --versions
NAME CHART VERSION APP VERSION DESCRIPTION
skoala-release/skoala-init 0.13.0 0.13.0 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.2 0.12.2 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.1 0.12.1 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.0 0.12.0 A Helm Chart for Skoala init, it includes Skoal...
......
Run the command directly, minding the version number:
~ helm upgrade --install skoala --create-namespace -n skoala-system --cleanup-on-fail \
--set ui.image.tag=v0.9.0 \
--set sweet.enable=true \
--set hive.configMap.data.database.host=mcamel-common-mysql-cluster-mysql-master.mcamel-system.svc.cluster.local \
--set hive.configMap.data.database.port=3306 \
--set hive.configMap.data.database.user=root \
--set hive.configMap.data.database.password=xxxxxxxx \
--set hive.configMap.data.database.database=skoala \
skoala-release/skoala \
--version 0.13.0
To customize and initialize the database, add the database settings as values:
--set sweet.enable=true \
--set hive.configMap.data.database.host= \
--set hive.configMap.data.database.port= \
--set hive.configMap.data.database.user= \
--set hive.configMap.data.database.password= \
--set hive.configMap.data.database.database= \
To customize the frontend ui version: --set ui.image.tag=v0.9.0
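Hand-writing long --set chains is error-prone. A sketch that flattens a nested dict of values into helm --set arguments; the helper name is illustrative, not part of any tool:

```python
def helm_set_args(values: dict) -> list:
    """Flatten a nested dict into helm --set key=value arguments."""
    args = []
    def walk(prefix, node):
        for key, val in node.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(val, dict):
                walk(path, val)
            else:
                args.extend(["--set", f"{path}={val}"])
    walk("", values)
    return args

print(helm_set_args({"sweet": {"enable": "true"},
                     "hive": {"configMap": {"data": {"database": {"port": 3306}}}}}))
# ['--set', 'sweet.enable=true', '--set', 'hive.configMap.data.database.port=3306']
```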
Check that the deployed pods started successfully:
~ kubectl -n skoala-system get pods
NAME READY STATUS RESTARTS AGE
hive-8548cd9b59-948j2 2/2 Running 2 (3h48m ago) 3h48m
sesame-5955c878c6-jz8cd 2/2 Running 0 3h48m
ui-7c9f5b7b67-9rpzc 2/2 Running 0 3h48m
This uninstall step removes all skoala-related resources:
~ helm uninstall skoala -n skoala-system
Upgrading works the same as the deployment in 3.4: run helm upgrade with the new version.
See the Skoala code repository: https://gitlab.daocloud.cn/ndx/skoala/-/tree/main/build/charts/skoala
Skoala involves quite a few components, so they are packaged into a single chart, skoala-init; install skoala-init on every worker cluster that uses the microservice engine.
~ helm search repo skoala-release/skoala-init --versions
NAME CHART VERSION APP VERSION DESCRIPTION
skoala-release/skoala-init 0.13.0 0.13.0 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.2 0.12.2 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.1 0.12.1 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.0 0.12.0 A Helm Chart for Skoala init, it includes Skoal...
......
The install command is the same as the upgrade; make sure it targets the intended namespace, and confirm every pod comes up successfully.
~ helm upgrade --install skoala-init --create-namespace -n skoala-system --cleanup-on-fail \
skoala-release/skoala-init \
--version 0.13.0
Besides the terminal, you can install through the UI: find skoala-init under Helm apps in Kpanda cluster management.
Uninstall command:
~ helm uninstall skoala-init -n skoala-system
This article walks through installing DCE 5.0 Community Edition from scratch, covering the Kubernetes cluster the Community Edition needs plus installation details and caveats.
The plan uses 3 UCloud VMs, each with 8 cores and 16 GB of RAM.
Role | Hostname | OS | IP | Spec |
---|---|---|---|---|
master | master-k8s-com | CentOS 7.9 | 10.23.245.63 | 8C 16G 300GB |
node01 | node01-k8s-com | CentOS 7.9 | 10.23.104.173 | 8C 16G 300GB |
node02 | node02-k8s-com | CentOS 7.9 | 10.23.112.244 | 8C 16G 300GB |
hostnamectl set-hostname master-k8s-com
cat <<EOF | tee -a /etc/hosts
10.23.245.63 master-k8s-com
10.23.104.173 node01-k8s-com
10.23.112.244 node02-k8s-com
EOF
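With more nodes, the hosts entries can be generated from the node table above instead of typed by hand. A tiny sketch rendering the heredoc lines from a hostname-to-IP mapping:

```python
nodes = {
    "master-k8s-com": "10.23.245.63",
    "node01-k8s-com": "10.23.104.173",
    "node02-k8s-com": "10.23.112.244",
}

def hosts_lines(mapping):
    """Render /etc/hosts lines from a hostname -> IP mapping."""
    return [f"{ip} {host}" for host, ip in mapping.items()]

for line in hosts_lines(nodes):
    print(line)
```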
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
Load the br_netfilter module:
cat <<EOF | tee /etc/modules-load.d/kubernetes.conf
br_netfilter
EOF
# Load the module
modprobe br_netfilter
Set net.bridge.bridge-nf-call-iptables to 1:
cat <<EOF | tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Reload the settings
sysctl --system
sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
sudo yum -y install docker-ce docker-ce-cli containerd.io
sudo touch /etc/docker/daemon.json
cat <<EOF | tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Remove the old config file
sudo rm -f /etc/containerd/config.toml
# Generate the default config
sudo containerd config default | sudo tee /etc/containerd/config.toml
# Update the config: use the systemd cgroup driver and a reachable pause image
sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/' /etc/containerd/config.toml
sed -i 's/k8s.gcr.io\/pause/registry.cn-hangzhou.aliyuncs.com\/google_containers\/pause/g' /etc/containerd/config.toml
sudo systemctl daemon-reload
sudo systemctl enable --now docker
sudo systemctl enable --now containerd
sudo systemctl status docker containerd
sudo docker info
Use the Alibaba Cloud mirror here; it is much faster from inside China:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
K8sVersion=1.24.8
sudo yum install -y kubelet-$K8sVersion kubeadm-$K8sVersion
sudo yum install -y kubectl-$K8sVersion # can be installed on the Master node only
Enable the kubelet system service:
sudo systemctl enable --now kubelet
The service will look unhealthy at this point: kubelet keeps restarting because there is no cluster configuration yet. This does not affect the following steps.
Plan the control-plane settings, including the pod network, before initializing the master node.
# Specify the Kubernetes version; keep it consistent with the packages installed above
$ sudo kubeadm init --kubernetes-version=v1.24.8 \
 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
 --pod-network-cidr 10.11.0.0/16
[init] Using Kubernetes version: v1.24.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.503693 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.24.8" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.24.8" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1.k8s.com as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1.k8s.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: c0wcm5.0yu9szfktsxvurza
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.23.245.63:6443 --token djdsj2.sj23js90213j323 \
--discovery-token-ca-cert-hash sha256:ewuosdjk2390rjertw32p32j43p25a70298db818ajsdjk1293jk23k23201934h
Make sure the earlier steps are done on each node: system tuning, Docker install, Kubernetes components, and a successful master-node initialization.
$ kubeadm join 10.23.245.63:6443 --token djdsj2.sj23js90213j323 \
--discovery-token-ca-cert-hash sha256:ewuosdjk2390rjertw32p32j43p25a70298db818ajsdjk1293jk23k23201934h
Add each of the 2 worker nodes this way, then verify:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01-k8s-com Ready control-plane 9h v1.24.8
node01-k8s-com Ready <none> 9h v1.24.8
node02-k8s-com Ready <none> 9h v1.24.8
Save the YAML below as calico.yaml, or download it from my GitHub.
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use for workload interfaces and tunnels.
# By default, MTU is auto-detected, and explicitly setting this field should not be required.
# You can override auto-detection by providing a non-zero value.
veth_mtu: "0"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"log_file_path": "/var/log/calico/cni/cni.log",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BGPConfiguration
listKind: BGPConfigurationList
plural: bgpconfigurations
singular: bgpconfiguration
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: BGPConfiguration contains the configuration for any BGP routing.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BGPConfigurationSpec contains the values of the BGP configuration.
properties:
asNumber:
description: 'ASNumber is the default AS number used by a node. [Default:
64512]'
format: int32
type: integer
communities:
description: Communities is a list of BGP community values and their
arbitrary names for tagging routes.
items:
description: Community contains standard or large community value
and its name.
properties:
name:
description: Name given to community value.
type: string
value:
description: Value must be of format `aa:nn` or `aa:nn:mm`.
For standard community use `aa:nn` format, where `aa` and
`nn` are 16 bit number. For large community use `aa:nn:mm`
format, where `aa`, `nn` and `mm` are 32 bit number. Where,
`aa` is an AS Number, `nn` and `mm` are per-AS identifier.
pattern: ^(\d+):(\d+)$|^(\d+):(\d+):(\d+)$
type: string
type: object
type: array
listenPort:
description: ListenPort is the port where BGP protocol should listen.
Defaults to 179
maximum: 65535
minimum: 1
type: integer
logSeverityScreen:
description: 'LogSeverityScreen is the log severity above which logs
are sent to the stdout. [Default: INFO]'
type: string
nodeToNodeMeshEnabled:
description: 'NodeToNodeMeshEnabled sets whether full node to node
BGP mesh is enabled. [Default: true]'
type: boolean
prefixAdvertisements:
description: PrefixAdvertisements contains per-prefix advertisement
configuration.
items:
description: PrefixAdvertisement configures advertisement properties
for the specified CIDR.
properties:
cidr:
description: CIDR for which properties should be advertised.
type: string
communities:
description: Communities can be list of either community names
already defined in `Specs.Communities` or community value
of format `aa:nn` or `aa:nn:mm`. For standard community use
`aa:nn` format, where `aa` and `nn` are 16 bit number. For
large community use `aa:nn:mm` format, where `aa`, `nn` and
`mm` are 32 bit number. Where,`aa` is an AS Number, `nn` and
`mm` are per-AS identifier.
items:
type: string
type: array
type: object
type: array
serviceClusterIPs:
description: ServiceClusterIPs are the CIDR blocks from which service
cluster IPs are allocated. If specified, Calico will advertise these
blocks, as well as any cluster IPs within them.
items:
description: ServiceClusterIPBlock represents a single allowed ClusterIP
CIDR block.
properties:
cidr:
type: string
type: object
type: array
serviceExternalIPs:
description: ServiceExternalIPs are the CIDR blocks for Kubernetes
Service External IPs. Kubernetes Service ExternalIPs will only be
advertised if they are within one of these blocks.
items:
description: ServiceExternalIPBlock represents a single allowed
External IP CIDR block.
properties:
cidr:
type: string
type: object
type: array
serviceLoadBalancerIPs:
description: ServiceLoadBalancerIPs are the CIDR blocks for Kubernetes
Service LoadBalancer IPs. Kubernetes Service status.LoadBalancer.Ingress
IPs will only be advertised if they are within one of these blocks.
items:
description: ServiceLoadBalancerIPBlock represents a single allowed
LoadBalancer IP CIDR block.
properties:
cidr:
type: string
type: object
type: array
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BGPPeer
listKind: BGPPeerList
plural: bgppeers
singular: bgppeer
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BGPPeerSpec contains the specification for a BGPPeer resource.
properties:
asNumber:
description: The AS Number of the peer.
format: int32
type: integer
keepOriginalNextHop:
description: Option to keep the original nexthop field when routes
are sent to a BGP Peer. Setting "true" configures the selected BGP
Peers node to use the "next hop keep;" instead of "next hop self;"(default)
in the specific branch of the Node on "bird.cfg".
type: boolean
node:
description: The node name identifying the Calico node instance that
is targeted by this peer. If this is not set, and no nodeSelector
is specified, then this BGP peer selects all nodes in the cluster.
type: string
nodeSelector:
description: Selector for the nodes that should have this peering. When
this is set, the Node field must be empty.
type: string
password:
description: Optional BGP password for the peerings generated by this
BGPPeer resource.
properties:
secretKeyRef:
description: Selects a key of a secret in the node pod's namespace.
properties:
key:
description: The key of the secret to select from. Must be
a valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be
defined
type: boolean
required:
- key
type: object
type: object
peerIP:
description: The IP address of the peer followed by an optional port
number to peer with. If port number is given, format should be `[<IPv6>]:port`
or `<IPv4>:<port>` for IPv4. If optional port number is not set,
and this peer IP and ASNumber belongs to a calico/node with ListenPort
set in BGPConfiguration, then we use that port to peer.
type: string
peerSelector:
description: Selector for the remote nodes to peer with. When this
is set, the PeerIP and ASNumber fields must be empty. For each
peering between the local node and selected remote nodes, we configure
an IPv4 peering if both ends have NodeBGPSpec.IPv4Address specified,
and an IPv6 peering if both ends have NodeBGPSpec.IPv6Address specified. The
remote AS number comes from the remote node's NodeBGPSpec.ASNumber,
or the global default if that is not set.
type: string
sourceAddress:
description: Specifies whether and how to configure a source address
for the peerings generated by this BGPPeer resource. Default value
"UseNodeIP" means to configure the node IP as the source address. "None"
means not to configure a source address.
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BlockAffinity
listKind: BlockAffinityList
plural: blockaffinities
singular: blockaffinity
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BlockAffinitySpec contains the specification for a BlockAffinity
resource.
properties:
cidr:
type: string
deleted:
description: Deleted indicates that this block affinity is being deleted.
This field is a string for compatibility with older releases that
mistakenly treat this field as a string.
type: string
node:
type: string
state:
type: string
required:
- cidr
- deleted
- node
- state
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: ClusterInformation
listKind: ClusterInformationList
plural: clusterinformations
singular: clusterinformation
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: ClusterInformation contains the cluster specific information.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ClusterInformationSpec contains the values of describing
the cluster.
properties:
calicoVersion:
description: CalicoVersion is the version of Calico that the cluster
is running
type: string
clusterGUID:
description: ClusterGUID is the GUID of the cluster
type: string
clusterType:
description: ClusterType describes the type of the cluster
type: string
datastoreReady:
description: DatastoreReady is used during significant datastore migrations
to signal to components such as Felix that it should wait before
accessing the datastore.
type: boolean
variant:
description: Variant declares which variant of Calico should be active.
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: FelixConfiguration
listKind: FelixConfigurationList
plural: felixconfigurations
singular: felixconfiguration
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: Felix Configuration contains the configuration for Felix.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: FelixConfigurationSpec contains the values of the Felix configuration.
properties:
allowIPIPPacketsFromWorkloads:
description: 'AllowIPIPPacketsFromWorkloads controls whether Felix
will add a rule to drop IPIP encapsulated traffic from workloads
[Default: false]'
type: boolean
allowVXLANPacketsFromWorkloads:
description: 'AllowVXLANPacketsFromWorkloads controls whether Felix
will add a rule to drop VXLAN encapsulated traffic from workloads
[Default: false]'
type: boolean
awsSrcDstCheck:
description: 'Set source-destination-check on AWS EC2 instances. Accepted
value must be one of "DoNothing", "Enabled" or "Disabled". [Default:
DoNothing]'
enum:
- DoNothing
- Enable
- Disable
type: string
bpfConnectTimeLoadBalancingEnabled:
description: 'BPFConnectTimeLoadBalancingEnabled when in BPF mode,
controls whether Felix installs the connection-time load balancer. The
connect-time load balancer is required for the host to be able to
reach Kubernetes services and it improves the performance of pod-to-service
connections. The only reason to disable it is for debugging purposes. [Default:
true]'
type: boolean
bpfDataIfacePattern:
description: BPFDataIfacePattern is a regular expression that controls
which interfaces Felix should attach BPF programs to in order to
catch traffic to/from the network. This needs to match the interfaces
that Calico workload traffic flows over as well as any interfaces
that handle incoming traffic to nodeports and services from outside
the cluster. It should not match the workload interfaces (usually
named cali...).
type: string
bpfDisableUnprivileged:
description: 'BPFDisableUnprivileged, if enabled, Felix sets the kernel.unprivileged_bpf_disabled
sysctl to disable unprivileged use of BPF. This ensures that unprivileged
users cannot access Calico''s BPF maps and cannot insert their own
BPF programs to interfere with Calico''s. [Default: true]'
type: boolean
bpfEnabled:
description: 'BPFEnabled, if enabled Felix will use the BPF dataplane.
[Default: false]'
type: boolean
bpfExtToServiceConnmark:
description: 'BPFExtToServiceConnmark in BPF mode, control a 32bit
mark that is set on connections from an external client to a local
service. This mark allows us to control how packets of that connection
are routed within the host and how is routing intepreted by RPF
check. [Default: 0]'
type: integer
bpfExternalServiceMode:
description: 'BPFExternalServiceMode in BPF mode, controls how connections
from outside the cluster to services (node ports and cluster IPs)
are forwarded to remote workloads. If set to "Tunnel" then both
request and response traffic is tunneled to the remote node. If
set to "DSR", the request traffic is tunneled but the response traffic
is sent directly from the remote node. In "DSR" mode, the remote
node appears to use the IP of the ingress node; this requires a
permissive L2 network. [Default: Tunnel]'
type: string
bpfKubeProxyEndpointSlicesEnabled:
description: BPFKubeProxyEndpointSlicesEnabled in BPF mode, controls
whether Felix's embedded kube-proxy accepts EndpointSlices or not.
type: boolean
bpfKubeProxyIptablesCleanupEnabled:
description: 'BPFKubeProxyIptablesCleanupEnabled, if enabled in BPF
mode, Felix will proactively clean up the upstream Kubernetes kube-proxy''s
iptables chains. Should only be enabled if kube-proxy is not running. [Default:
true]'
type: boolean
bpfKubeProxyMinSyncPeriod:
description: 'BPFKubeProxyMinSyncPeriod, in BPF mode, controls the
minimum time between updates to the dataplane for Felix''s embedded
kube-proxy. Lower values give reduced set-up latency. Higher values
reduce Felix CPU usage by batching up more work. [Default: 1s]'
type: string
bpfLogLevel:
description: 'BPFLogLevel controls the log level of the BPF programs
when in BPF dataplane mode. One of "Off", "Info", or "Debug". The
logs are emitted to the BPF trace pipe, accessible with the command
`tc exec bpf debug`. [Default: Off].'
type: string
chainInsertMode:
description: 'ChainInsertMode controls whether Felix hooks the kernel''s
top-level iptables chains by inserting a rule at the top of the
chain or by appending a rule at the bottom. insert is the safe default
since it prevents Calico''s rules from being bypassed. If you switch
to append mode, be sure that the other rules in the chains signal
acceptance by falling through to the Calico rules, otherwise the
Calico policy will be bypassed. [Default: insert]'
type: string
dataplaneDriver:
type: string
debugDisableLogDropping:
type: boolean
debugMemoryProfilePath:
type: string
debugSimulateCalcGraphHangAfter:
type: string
debugSimulateDataplaneHangAfter:
type: string
defaultEndpointToHostAction:
description: 'DefaultEndpointToHostAction controls what happens to
traffic that goes from a workload endpoint to the host itself (after
the traffic hits the endpoint egress policy). By default Calico
blocks traffic from workload endpoints to the host itself with an
iptables "DROP" action. If you want to allow some or all traffic
from endpoint to host, set this parameter to RETURN or ACCEPT. Use
RETURN if you have your own rules in the iptables "INPUT" chain;
Calico will insert its rules at the top of that chain, then "RETURN"
packets to the "INPUT" chain once it has completed processing workload
endpoint egress policy. Use ACCEPT to unconditionally accept packets
from workloads after processing workload endpoint egress policy.
[Default: Drop]'
type: string
deviceRouteProtocol:
description: This defines the route protocol added to programmed device
routes, by default this will be RTPROT_BOOT when left blank.
type: integer
deviceRouteSourceAddress:
description: This is the source address to use on programmed device
routes. By default the source address is left blank, leaving the
kernel to choose the source address used.
type: string
disableConntrackInvalidCheck:
type: boolean
endpointReportingDelay:
type: string
endpointReportingEnabled:
type: boolean
externalNodesList:
description: ExternalNodesCIDRList is a list of CIDR's of external-non-calico-nodes
which may source tunnel traffic and have the tunneled traffic be
accepted at calico nodes.
items:
type: string
type: array
failsafeInboundHostPorts:
description: 'FailsafeInboundHostPorts is a list of UDP/TCP ports
and CIDRs that Felix will allow incoming traffic to host endpoints
on irrespective of the security policy. This is useful to avoid
accidentally cutting off a host with incorrect configuration. For
back-compatibility, if the protocol is not specified, it defaults
to "tcp". If a CIDR is not specified, it will allow traffic from
all addresses. To disable all inbound host ports, use the value
none. The default value allows ssh access and DHCP. [Default: tcp:22,
udp:68, tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666, tcp:6667]'
items:
description: ProtoPort is combination of protocol, port, and CIDR.
Protocol and port must be specified.
properties:
net:
type: string
port:
type: integer
protocol:
type: string
required:
- port
- protocol
type: object
type: array
failsafeOutboundHostPorts:
description: 'FailsafeOutboundHostPorts is a list of UDP/TCP ports
and CIDRs that Felix will allow outgoing traffic from host endpoints
to irrespective of the security policy. This is useful to avoid
accidentally cutting off a host with incorrect configuration. For
back-compatibility, if the protocol is not specified, it defaults
to "tcp". If a CIDR is not specified, it will allow traffic from
all addresses. To disable all outbound host ports, use the value
none. The default value opens etcd''s standard ports to ensure that
Felix does not get cut off from etcd as well as allowing DHCP and
DNS. [Default: tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666,
tcp:6667, udp:53, udp:67]'
items:
description: ProtoPort is combination of protocol, port, and CIDR.
Protocol and port must be specified.
properties:
net:
type: string
port:
type: integer
protocol:
type: string
required:
- port
- protocol
type: object
type: array
featureDetectOverride:
description: FeatureDetectOverride is used to override the feature
detection. Values are specified in a comma separated list with no
spaces, example; "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=".
"true" or "false" will force the feature, empty or omitted values
are auto-detected.
type: string
genericXDPEnabled:
description: 'GenericXDPEnabled enables Generic XDP so network cards
that don''t support XDP offload or driver modes can use XDP. This
is not recommended since it doesn''t provide better performance
than iptables. [Default: false]'
type: boolean
healthEnabled:
type: boolean
healthHost:
type: string
healthPort:
type: integer
interfaceExclude:
description: 'InterfaceExclude is a comma-separated list of interfaces
that Felix should exclude when monitoring for host endpoints. The
default value ensures that Felix ignores Kubernetes'' IPVS dummy
interface, which is used internally by kube-proxy. If you want to
exclude multiple interface names using a single value, the list
supports regular expressions. For regular expressions you must wrap
the value with ''/''. For example having values ''/^kube/,veth1''
will exclude all interfaces that begin with ''kube'' and also the
interface ''veth1''. [Default: kube-ipvs0]'
type: string
interfacePrefix:
description: 'InterfacePrefix is the interface name prefix that identifies
workload endpoints and so distinguishes them from host endpoint
interfaces. Note: in environments other than bare metal, the orchestrators
configure this appropriately. For example our Kubernetes and Docker
integrations set the ''cali'' value, and our OpenStack integration
sets the ''tap'' value. [Default: cali]'
type: string
interfaceRefreshInterval:
description: InterfaceRefreshInterval is the period at which Felix
rescans local interfaces to verify their state. The rescan can be
disabled by setting the interval to 0.
type: string
ipipEnabled:
type: boolean
ipipMTU:
description: 'IPIPMTU is the MTU to set on the tunnel device. See
Configuring MTU [Default: 1440]'
type: integer
ipsetsRefreshInterval:
description: 'IpsetsRefreshInterval is the period at which Felix re-checks
all iptables state to ensure that no other process has accidentally
broken Calico''s rules. Set to 0 to disable iptables refresh. [Default:
90s]'
type: string
iptablesBackend:
description: IptablesBackend specifies which backend of iptables will
be used. The default is legacy.
type: string
iptablesFilterAllowAction:
type: string
iptablesLockFilePath:
description: 'IptablesLockFilePath is the location of the iptables
lock file. You may need to change this if the lock file is not in
its standard location (for example if you have mapped it into Felix''s
container at a different path). [Default: /run/xtables.lock]'
type: string
iptablesLockProbeInterval:
description: 'IptablesLockProbeInterval is the time that Felix will
wait between attempts to acquire the iptables lock if it is not
available. Lower values make Felix more responsive when the lock
is contended, but use more CPU. [Default: 50ms]'
type: string
iptablesLockTimeout:
description: 'IptablesLockTimeout is the time that Felix will wait
for the iptables lock, or 0, to disable. To use this feature, Felix
must share the iptables lock file with all other processes that
also take the lock. When running Felix inside a container, this
requires the /run directory of the host to be mounted into the calico/node
or calico/felix container. [Default: 0s disabled]'
type: string
iptablesMangleAllowAction:
type: string
iptablesMarkMask:
description: 'IptablesMarkMask is the mask that Felix selects its
IPTables Mark bits from. Should be a 32 bit hexadecimal number with
at least 8 bits set, none of which clash with any other mark bits
in use on the system. [Default: 0xff000000]'
format: int32
type: integer
iptablesNATOutgoingInterfaceFilter:
type: string
iptablesPostWriteCheckInterval:
description: 'IptablesPostWriteCheckInterval is the period after Felix
has done a write to the dataplane that it schedules an extra read
back in order to check the write was not clobbered by another process.
This should only occur if another application on the system doesn''t
respect the iptables lock. [Default: 1s]'
type: string
iptablesRefreshInterval:
description: 'IptablesRefreshInterval is the period at which Felix
re-checks the IP sets in the dataplane to ensure that no other process
has accidentally broken Calico''s rules. Set to 0 to disable IP
sets refresh. Note: the default for this value is lower than the
other refresh intervals as a workaround for a Linux kernel bug that
was fixed in kernel version 4.11. If you are using v4.11 or greater
you may want to set this to, a higher value to reduce Felix CPU
usage. [Default: 10s]'
type: string
ipv6Support:
type: boolean
kubeNodePortRanges:
description: 'KubeNodePortRanges holds list of port ranges used for
service node ports. Only used if felix detects kube-proxy running
in ipvs mode. Felix uses these ranges to separate host and workload
traffic. [Default: 30000:32767].'
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
logFilePath:
description: 'LogFilePath is the full path to the Felix log. Set to
none to disable file logging. [Default: /var/log/calico/felix.log]'
type: string
logPrefix:
description: 'LogPrefix is the log prefix that Felix uses when rendering
LOG rules. [Default: calico-packet]'
type: string
logSeverityFile:
description: 'LogSeverityFile is the log severity above which logs
are sent to the log file. [Default: Info]'
type: string
logSeverityScreen:
description: 'LogSeverityScreen is the log severity above which logs
are sent to the stdout. [Default: Info]'
type: string
logSeveritySys:
description: 'LogSeveritySys is the log severity above which logs
are sent to the syslog. Set to None for no logging to syslog. [Default:
Info]'
type: string
maxIpsetSize:
type: integer
metadataAddr:
description: 'MetadataAddr is the IP address or domain name of the
server that can answer VM queries for cloud-init metadata. In OpenStack,
this corresponds to the machine running nova-api (or in Ubuntu,
nova-api-metadata). A value of none (case insensitive) means that
Felix should not set up any NAT rule for the metadata path. [Default:
127.0.0.1]'
type: string
metadataPort:
description: 'MetadataPort is the port of the metadata server. This,
combined with global.MetadataAddr (if not ''None''), is used to
set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort.
In most cases this should not need to be changed [Default: 8775].'
type: integer
mtuIfacePattern:
description: MTUIfacePattern is a regular expression that controls
which interfaces Felix should scan in order to calculate the host's
MTU. This should not match workload interfaces (usually named cali...).
type: string
natOutgoingAddress:
description: NATOutgoingAddress specifies an address to use when performing
source NAT for traffic in a natOutgoing pool that is leaving the
network. By default the address used is an address on the interface
the traffic is leaving on (ie it uses the iptables MASQUERADE target)
type: string
natPortRange:
anyOf:
- type: integer
- type: string
description: NATPortRange specifies the range of ports that is used
for port mapping when doing outgoing NAT. When unset the default
behavior of the network stack is used.
pattern: ^.*
x-kubernetes-int-or-string: true
netlinkTimeout:
type: string
openstackRegion:
description: 'OpenstackRegion is the name of the region that a particular
Felix belongs to. In a multi-region Calico/OpenStack deployment,
this must be configured somehow for each Felix (here in the datamodel,
or in felix.cfg or the environment on each compute node), and must
match the [calico] openstack_region value configured in neutron.conf
on each node. [Default: Empty]'
type: string
policySyncPathPrefix:
description: 'PolicySyncPathPrefix is used to by Felix to communicate
policy changes to external services, like Application layer policy.
[Default: Empty]'
type: string
prometheusGoMetricsEnabled:
description: 'PrometheusGoMetricsEnabled disables Go runtime metrics
collection, which the Prometheus client does by default, when set
to false. This reduces the number of metrics reported, reducing
Prometheus load. [Default: true]'
type: boolean
prometheusMetricsEnabled:
description: 'PrometheusMetricsEnabled enables the Prometheus metrics
server in Felix if set to true. [Default: false]'
type: boolean
prometheusMetricsHost:
description: 'PrometheusMetricsHost is the host that the Prometheus
metrics server should bind to. [Default: empty]'
type: string
prometheusMetricsPort:
description: 'PrometheusMetricsPort is the TCP port that the Prometheus
metrics server should bind to. [Default: 9091]'
type: integer
prometheusProcessMetricsEnabled:
description: 'PrometheusProcessMetricsEnabled disables process metrics
collection, which the Prometheus client does by default, when set
to false. This reduces the number of metrics reported, reducing
Prometheus load. [Default: true]'
type: boolean
removeExternalRoutes:
description: Whether or not to remove device routes that have not
been programmed by Felix. Disabling this will allow external applications
to also add device routes. This is enabled by default which means
we will remove externally added routes.
type: boolean
reportingInterval:
description: 'ReportingInterval is the interval at which Felix reports
its status into the datastore or 0 to disable. Must be non-zero
in OpenStack deployments. [Default: 30s]'
type: string
reportingTTL:
description: 'ReportingTTL is the time-to-live setting for process-wide
status reports. [Default: 90s]'
type: string
routeRefreshInterval:
description: 'RouteRefreshInterval is the period at which Felix re-checks
the routes in the dataplane to ensure that no other process has
accidentally broken Calico''s rules. Set to 0 to disable route refresh.
[Default: 90s]'
type: string
routeSource:
description: 'RouteSource configures where Felix gets its routing
information. - WorkloadIPs: use workload endpoints to construct
routes. - CalicoIPAM: the default - use IPAM data to construct routes.'
type: string
routeTableRange:
description: Calico programs additional Linux route tables for various
purposes. RouteTableRange specifies the indices of the route tables
that Calico should use.
properties:
max:
type: integer
min:
type: integer
required:
- max
- min
type: object
serviceLoopPrevention:
description: 'When service IP advertisement is enabled, prevent routing
loops to service IPs that are not in use, by dropping or rejecting
packets that do not get DNAT''d by kube-proxy. Unless set to "Disabled",
in which case such routing loops continue to be allowed. [Default:
Drop]'
type: string
sidecarAccelerationEnabled:
description: 'SidecarAccelerationEnabled enables experimental sidecar
acceleration [Default: false]'
type: boolean
usageReportingEnabled:
description: 'UsageReportingEnabled reports anonymous Calico version
number and cluster size to projectcalico.org. Logs warnings returned
by the usage server. For example, if a significant security vulnerability
has been discovered in the version of Calico being used. [Default:
true]'
type: boolean
usageReportingInitialDelay:
description: 'UsageReportingInitialDelay controls the minimum delay
before Felix makes a report. [Default: 300s]'
type: string
usageReportingInterval:
description: 'UsageReportingInterval controls the interval at which
Felix makes reports. [Default: 86400s]'
type: string
useInternalDataplaneDriver:
type: boolean
vxlanEnabled:
type: boolean
vxlanMTU:
description: 'VXLANMTU is the MTU to set on the tunnel device. See
Configuring MTU [Default: 1440]'
type: integer
vxlanPort:
type: integer
vxlanVNI:
type: integer
wireguardEnabled:
description: 'WireguardEnabled controls whether Wireguard is enabled.
[Default: false]'
type: boolean
wireguardInterfaceName:
description: 'WireguardInterfaceName specifies the name to use for
the Wireguard interface. [Default: wg.calico]'
type: string
wireguardListeningPort:
description: 'WireguardListeningPort controls the listening port used
by Wireguard. [Default: 51820]'
type: integer
wireguardMTU:
description: 'WireguardMTU controls the MTU on the Wireguard interface.
See Configuring MTU [Default: 1420]'
type: integer
wireguardRoutingRulePriority:
description: 'WireguardRoutingRulePriority controls the priority value
to use for the Wireguard routing rule. [Default: 99]'
type: integer
xdpEnabled:
description: 'XDPEnabled enables XDP acceleration for suitable untracked
incoming deny rules. [Default: true]'
type: boolean
xdpRefreshInterval:
description: 'XDPRefreshInterval is the period at which Felix re-checks
all XDP state to ensure that no other process has accidentally broken
Calico''s BPF maps or attached programs. Set to 0 to disable XDP
refresh. [Default: 90s]'
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: GlobalNetworkPolicy
listKind: GlobalNetworkPolicyList
plural: globalnetworkpolicies
singular: globalnetworkpolicy
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
applyOnForward:
description: ApplyOnForward indicates to apply the rules in this policy
on forward traffic.
type: boolean
doNotTrack:
description: DoNotTrack indicates whether packets matched by the rules
in this policy should go through the data plane's connection tracking,
such as Linux conntrack. If True, the rules in this policy are
applied before any data plane connection tracking, and packets allowed
by this policy are marked as not to be tracked.
type: boolean
egress:
description: The ordered set of egress rules. Each rule contains
a set of packet match criteria and a corresponding action to apply.
items:
description: "A Rule encapsulates a set of match criteria and an
action. Both selector-based security Policy and security Profiles
reference rules - separated out as a list of rules for both ingress
and egress packet matching. \n Each positive match criteria has
a negated version, prefixed with \"Not\". All the match criteria
within a rule must be satisfied for a packet to match. A single
rule can contain the positive and negative version of a match
and both must be satisfied for the rule to match."
properties:
action:
type: string
destination:
description: Destination contains the match criteria that apply
to destination entity.
properties:
namespaceSelector:
description: "NamespaceSelector is an optional field that
contains a selector expression. Only traffic that originates
from (or terminates at) endpoints within the selected
namespaces will be matched. When both NamespaceSelector
and Selector are defined on the same rule, then only workload
endpoints that are matched by both selectors will be selected
by the rule. \n For NetworkPolicy, an empty NamespaceSelector
implies that the Selector is limited to selecting only
workload endpoints in the same namespace as the NetworkPolicy.
\n For NetworkPolicy, `global()` NamespaceSelector implies
that the Selector is limited to selecting only GlobalNetworkSet
or HostEndpoint. \n For GlobalNetworkPolicy, an empty
NamespaceSelector implies the Selector applies to workload
endpoints across all namespaces."
type: string
nets:
description: Nets is an optional field that restricts the
rule to only apply to traffic that originates from (or
terminates at) IP addresses in any of the given subnets.
items:
type: string
type: array
notNets:
description: NotNets is the negated version of the Nets
field.
items:
type: string
type: array
notPorts:
description: NotPorts is the negated version of the Ports
field. Since only some protocols have ports, if any ports
are specified it requires the Protocol match in the Rule
to be set to "TCP" or "UDP".
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
notSelector:
description: NotSelector is the negated version of the Selector
field. See Selector field for subtleties with negated
selectors.
type: string
ports:
description: "Ports is an optional field that restricts
the rule to only apply to traffic that has a source (destination)
port that matches one of these ranges/values. This value
is a list of integers or strings that represent ranges
of ports. \n Since only some protocols have ports, if
any ports are specified it requires the Protocol match
in the Rule to be set to \"TCP\" or \"UDP\"."
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
selector:
description: "Selector is an optional field that contains
a selector expression (see Policy for sample syntax).
\ Only traffic that originates from (terminates at) endpoints
matching the selector will be matched. \n Note that: in
addition to the negated version of the Selector (see NotSelector
below), the selector expression syntax itself supports
negation. The two types of negation are subtly different.
One negates the set of matched endpoints, the other negates
the whole match: \n \tSelector = \"!has(my_label)\" matches
packets that are from other Calico-controlled \tendpoints
that do not have the label \"my_label\". \n \tNotSelector
= \"has(my_label)\" matches packets that are not from
Calico-controlled \tendpoints that do have the label \"my_label\".
\n The effect is that the latter will accept packets from
non-Calico sources whereas the former is limited to packets
from Calico-controlled endpoints."
type: string
serviceAccounts:
description: ServiceAccounts is an optional field that restricts
the rule to only apply to traffic that originates from
(or terminates at) a pod running as a matching service
account.
properties:
names:
description: Names is an optional field that restricts
the rule to only apply to traffic that originates
from (or terminates at) a pod running as a service
account whose name is in the list.
items:
type: string
type: array
selector:
description: Selector is an optional field that restricts
the rule to only apply to traffic that originates
from (or terminates at) a pod running as a service
account that matches the given label selector. If
both Names and Selector are specified then they are
AND'ed.
type: string
type: object
type: object
http:
description: HTTP contains match criteria that apply to HTTP
requests.
properties:
methods:
description: Methods is an optional field that restricts
the rule to apply only to HTTP requests that use one of
the listed HTTP Methods (e.g. GET, PUT, etc.) Multiple
methods are OR'd together.
items:
type: string
type: array
paths:
description: 'Paths is an optional field that restricts
the rule to apply to HTTP requests that use one of the
listed HTTP Paths. Multiple paths are OR''d together.
e.g: - exact: /foo - prefix: /bar NOTE: Each entry may
ONLY specify either a `exact` or a `prefix` match. The
validator will check for it.'
items:
description: 'HTTPPath specifies an HTTP path to match.
It may be either of the form: exact: <path>: which matches
the path exactly or prefix: <path-prefix>: which matches
the path prefix'
properties:
exact:
type: string
prefix:
type: string
type: object
type: array
type: object
icmp:
description: ICMP is an optional field that restricts the rule
to apply to a specific type and code of ICMP traffic. This
should only be specified if the Protocol field is set to "ICMP"
or "ICMPv6".
properties:
code:
description: Match on a specific ICMP code. If specified,
the Type value must also be specified. This is a technical
limitation imposed by the kernel's iptables firewall,
which Calico uses to enforce the rule.
type: integer
type:
description: Match on a specific ICMP type. For example
                    a value of 8
```

Run the command below to initialize Calico:

```bash
kubectl apply -f calico.yaml
```
Helm 3 is used here; note that the install script downloads the latest release:

```bash
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```
By default, Kubernetes can use local disks as the default storage; here the open-source project Hwameistor is used for local disk management instead.

Project homepage: https://hwameistor.io; installation guide: https://hwameistor.io/cn/docs/quick_start/install/deploy

Hwameistor is installed with Helm, so the Helm installation above must be completed first:

```bash
helm repo add hwameistor http://hwameistor.io/hwameistor
helm repo update hwameistor
helm pull hwameistor/hwameistor --untar
```
To install from a mirror registry, override these two values with `--set`: `global.k8sImageRegistry` and `global.hwameistorImageRegistry`.

Note that the default registries are quay.io and ghcr.io. If they are unreachable, try the mirrors provided by DaoCloud: quay.m.daocloud.io and ghcr.m.daocloud.io.

```bash
$ helm install hwameistor ./hwameistor \
    -n hwameistor --create-namespace \
    --set global.k8sImageRegistry=k8s-gcr.m.daocloud.io \
    --set global.hwameistorImageRegistry=ghcr.m.daocloud.io
```
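The two `--set` overrides above can equivalently be kept in a values file and passed with `helm install -f`; a sketch, where the filename `values-mirror.yaml` is illustrative:

```yaml
# values-mirror.yaml (illustrative name): equivalent to the two --set flags above
global:
  k8sImageRegistry: k8s-gcr.m.daocloud.io
  hwameistorImageRegistry: ghcr.m.daocloud.io
```

A values file is easier to keep in version control than long `--set` flags when the same overrides are reused across upgrades.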
Check that all Hwameistor Pods are running:

```bash
$ kubectl -n hwameistor get pod
NAME                                                       READY   STATUS
hwameistor-local-disk-csi-controller-665bb7f47d-6227f      2/2     Running
hwameistor-local-disk-manager-5ph2d                        2/2     Running
hwameistor-local-disk-manager-jhj59                        2/2     Running
hwameistor-local-disk-manager-k9cvj                        2/2     Running
hwameistor-local-disk-manager-kxwww                        2/2     Running
hwameistor-local-storage-csi-controller-667d949fbb-k488w   3/3     Running
hwameistor-local-storage-csqqv                             2/2     Running
hwameistor-local-storage-gcrzm                             2/2     Running
hwameistor-local-storage-v8g7t                             2/2     Running
hwameistor-local-storage-zkwmn                             2/2     Running
hwameistor-scheduler-58dfcf79f5-lswkt                      1/1     Running
hwameistor-webhook-986479678-278cr                         1/1     Running
```
local-disk-manager and local-storage are DaemonSets, so one Pod of each should be running on every Kubernetes node.

Once all Pods are healthy, continue by checking the StorageClass:

```bash
$ kubectl get storageclass hwameistor-storage-lvm-hdd
NAME                                   PROVISIONER         RECLAIMPOLICY
hwameistor-storage-lvm-hdd (default)   lvm.hwameistor.io   Delete
```
For the LocalDiskNode and LocalDisk resources, the `PHASE` of the disks in use should show `Bound` by default:

```bash
$ kubectl get localdisknodes
NAME               NODEMATCH          PHASE
master01-k8s-com   master01-k8s-com   Bound
node01-k8s-com     node01-k8s-com     Bound
node02-k8s-com     node02-k8s-com     Bound

$ kubectl get localdisks
NAME                   NODEMATCH          CLAIM              PHASE
master01-k8s-com-vda   master01-k8s-com                      Bound
master01-k8s-com-vdb   master01-k8s-com   master01-k8s-com   Bound
node01-k8s-com-vda     node01-k8s-com                        Bound
node01-k8s-com-vdb     node01-k8s-com     node01-k8s-com     Bound
node02-k8s-com-vda     node02-k8s-com                        Bound
node02-k8s-com-vdb     node02-k8s-com     node02-k8s-com     Bound
```
Add the annotation that marks this StorageClass as the default:

```bash
kubectl patch storageclasses.storage.k8s.io hwameistor-storage-lvm-hdd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
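Kubernetes uses this annotation to pick a StorageClass for PersistentVolumeClaims that do not set `storageClassName`, and only one StorageClass should carry it at a time. After the patch, the object should look roughly like this (a sketch based on the fields shown above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hwameistor-storage-lvm-hdd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # added by the patch
provisioner: lvm.hwameistor.io
reclaimPolicy: Delete
```

If another StorageClass was previously the default, set its annotation to `"false"` first to avoid two defaults.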
Create the corresponding storage pool with the command below, replacing the node names with your own:

```bash
$ helm template ./hwameistor \
    -s templates/post-install-claim-disks.yaml \
    --set storageNodes='{master01-k8s-com,node01-k8s-com,node02-k8s-com}' \
    | kubectl apply -f -
```
After the pool is created, check the LocalDiskClaims (`ldc`) for the local disks; they should all be `Bound`:

```bash
kubectl get ldc
```
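Once the claims are Bound, the storage setup can be smoke-tested with a minimal PersistentVolumeClaim against the new default StorageClass. A sketch; the claim name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hwameistor-test-pvc   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # illustrative size
  storageClassName: hwameistor-storage-lvm-hdd
```

If the StorageClass uses `WaitForFirstConsumer` volume binding, the claim will stay `Pending` until a Pod actually mounts it.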
Metrics server

DCE 5.0, which will be installed next, requires metrics-server. Save the following content as metrics-server.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
执行安装命令:
kubectl apply -f metrics-server.yaml
安装成功后,通过下方命令可以看到各 node 节点的资源使用情况:
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master01-k8s-com 218m 5% 5610Mi 35%
node01-k8s-com 445m 11% 9279Mi 59%
node02-k8s-com 464m 11% 9484Mi 60%
以下操作步骤将会带着您一步一步完成 DaoCloud Enterprise 5.0 社区版的完整安装,注意安装细节
DCE 已经提供了依赖组件的一键离线安装工具,经测试可以比较稳定地运行在 CentOS 7 和 CentOS 8 上;如果您使用的也是这两个系统,可以使用下方脚本。
curl -LO https://proxy-qiniu-download-public.daocloud.io/DaoCloud_Enterprise/dce5/install_prerequisite.sh
chmod +x install_prerequisite.sh
sudo bash install_prerequisite.sh online community
也可以选择手动安装这些:
注意,需要在 Master 节点上进行 dce5-installer 的安装,建议直接将安装器下载到该节点。
获取最新版本的查看界面: https://docs.daocloud.io/download/dce5/#_1
# 假定 VERSION 为 v0.3.28 , 使用上方链接获取最新版本
$ export VERSION=v0.3.28
$ curl -Lo ./dce5-installer https://proxy-qiniu-download-public.daocloud.io/DaoCloud_Enterprise/dce5/dce5-installer-$VERSION
$ chmod +x dce5-installer
如果
proxy-qiniu-download-public.daocloud.io
链接失效,使用qiniu-download-public.daocloud.io
将下方的配置文件内容保存为 clusterconfig.yaml;如果使用 NodePort 的方式安装,则不需要指定配置文件。
apiVersion: provision.daocloud.io/v1alpha1
kind: ClusterConfig
spec:
loadBalancer: metallb
istioGatewayVip: 10.6.229.10/32 # 这是 Istio gateway 的 VIP,也会是 DCE 5.0 的控制台的浏览器访问 IP
insightVip: 10.6.229.11/32 # 这是 Global 集群的 Insight-Server 采集子集群的监控指标的网络路径用的 VIP
如果使用配置文件,注意需要事先安装 MetalLB,这一部分需要自行完成。
根据需求选择下方的安装命令;如果需要指定配置文件,使用 -c 参数:
# 无配置文件
$ ./dce5-installer install-app
# 有指定 loadBalancer
$ ./dce5-installer install-app -c clusterconfig.yaml
如果看到下方的页面,说明安装成功了;默认账号密码为: admin/changeme
一般情况下,我们安装后的集群访问地址是一个内网地址,当我们需要集群页面开放到公网访问时,会发现登录页会自动重定向内网地址,这是因为 DCE 5.0 反向代理服务器地址默认是安装时的配置;需要作出以下更新:
设置环境变量,方便在后续中使用
# 您的反向代理地址,例如:`export DCE_PROXY="https://demo-alpha.daocloud.io"`
export DCE_PROXY="https://domain:port"
# helm --set 参数备份文件
export GHIPPO_VALUES_BAK="ghippo-values-bak.yaml"
# 获取当前 ghippo 的版本号
export GHIPPO_HELM_VERSION=$(helm get notes ghippo -n ghippo-system | grep "Chart Version" | awk -F ': ' '{ print $2 }')
更新 Helm repo
helm repo update ghippo
备份 --set 参数
helm get values ghippo -n ghippo-system -o yaml > ${GHIPPO_VALUES_BAK}
使用 vim 命令编辑并保存
$ vim ${GHIPPO_VALUES_BAK}
USER-SUPPLIED VALUES:
...
global:
...
reverseProxy: ${DCE_PROXY} # 只需要修改这一行
使用 helm upgrade 更新配置
helm upgrade ghippo ghippo/ghippo \
-n ghippo-system \
-f ${GHIPPO_VALUES_BAK} \
--version ${GHIPPO_HELM_VERSION}
使用 kubectl 重启全局管理 Pod,使配置生效
kubectl rollout restart deploy/ghippo-apiserver -n ghippo-system
kubectl rollout restart statefulset/ghippo-keycloak -n ghippo-system
很自豪地向大家推荐:春松客服是我参与主导团队的第一个纯社区开源项目,目前在 GitHub 中文开源项目中排名第一。
目前我们正在规划 V8 产品迭代,这将会是一个全新的升级,请关注我们的项目进展 cskefu
同时也欢迎大家加入我们的团队。
至 2022 年 10 月,春松客服在企业中部署超过 1.8 万次,上线客户超过 500 家,是 GitHub 上最受欢迎的中文开源客服系统。因开源、云原生架构和功能丰富受到广泛好评,在 Gitee 上赢得最有价值项目奖项。
在开源客服系统领域,我希望将春松客服打造成一个开源的生态,吸引成千上万的开发者和企业来使用和参与春松客服的开发。
春松客服是一个依托于开源精神的客服操作系统,我们承诺永远开源,并与开发者打造一个完美的客服系统生态。
我们想做客服系统的 Kubernetes。
一线客服人员是日常工作中长期使用客服系统的群体,客服系统的能力和效率将直接影响他们的工作体验;
开发者们,可以利用春松客服的开放能力,为春松客服编写大量的增强插件。
春松客服的核心目标,不是打造一个我们认为的客服系统,而是我们希望打造一套轻量化的智能客服系统框架和具有海量插件的平台。
传统的客服系统提供了大量复杂的功能和配置,用户虽然总能从中找到想要的功能,但大量冗杂、甚至逻辑冲突的功能,确实极大地提高了一线客服人员的使用成本。
对于真正的使用者来说,我们希望客服系统的功能,可以刚好满足我所需要的全部功能,同时可以方便的进行功能的增减和升级。
所以,春松客服的功能边界在:提供 高性能、稳定、通用的、生产级 智能客服系统底座和平台。
增强插件
春松客服到现在,已经发展了 2 年的时间,目前版本迭代到 V7,下面展示春松客服的功能和介绍网址入口。
在春松客服里,系统管理员是具备管理所辖组织内坐席、权限、角色、联系人和坐席监控等资源的管理员,系统管理员分为两种类型:超级管理员和普通管理员,普通管理员也简称“管理员”。
超级管理员是春松客服系统内置设置的。初始化一个春松客服实例后,默认超级管理员用户名为 admin
,密码为 admin1234
,并且有且只有一个。IT 人员初始化搭建春松客服实例后的第一件事,就是更改超级管理员账号的密码,以确保系统安全。超级管理员具备更新系统所有属性、读写数据的能力,是春松客服内权限最大的用户。
安装启动系统,进入春松客服后台界面,输入初始化的超级管理员账号密码(用户名: admin
, 密码: admin1234
),点击立即登录。
超级管理员同时维护着春松客服组织机构的高层级。组织机构是树形结构,默认情况下没有组织机构信息;春松客服搭建完成后,由超级管理员设定根节点,比如总公司、总公司下属子公司,维护这样的一个层级结构,再创建其他管理员账号。普通管理员账号可以创建多个,不同管理员隶属于不同组织机构,该管理员只有管理其所在组织机构及该组织机构附属组织机构的权限。
系统管理员切换不同的组织机构,可以查看不同组织机构的数据。
春松客服权限体系包括:组织机构,角色,账号。
角色可以自定义,设置对一系列资源的读写权限。角色的创建和删除、资源授权的修改,只有超级管理员可以操作;普通【管理员】只具备角色的使用权:添加或删除权限里的系统账号。
将账号添加到角色后,因为账号也同时隶属于不同的组织机构,那么账号所具有的权限就是其所在组织机构以及附属组织机构的角色对应的资源的读写。
根据角色和坐席所在组织机构进行权限检查:
假设组织机构如下:
系统 -> 系统概况 -> 用户和组 -> 组织机构 -> 创建部门,并且可以启用或关闭技能组
部门 需要创建的部门名称
上级机构 选择上级部门
启用技能组 这里可选择是否启用;技能组是接待同一个渠道的坐席人员群组,春松客服支持配置自动分配策略,连接访客与坐席,简称 ACD 模块
进入部门列表
系统 -> 系统概况 -> 用户和组 -> 组织机构
系统 -> 系统概况 -> 用户和组 -> 组织机构 -> 修改部门
系统 -> 系统概况 -> 用户和组 -> 组织机构 -> 删除部门
系统 -> 系统概况 -> 用户和组 -> 组织结构 -> 选中一个部门 -> 地区设置
系统 -> 系统概况 -> 用户和组 -> 系统角色 -> 新建角色
只有【系统超级管理员】可以创建角色。
名词解释:
角色 系统中用户的操作权限是通过角色来控制,角色可以理解为具备一定操作权限的用户组;
可以把一个或者更多的用户添加到一个角色下;
可以给一个角色设置一定的系统权限,相当于这个角色下面的用户有了这些系统权限;
角色创建好了以后,在所有组织机构中共享。不同组织机构的管理员,只能管理其所在组织机构和下属组织机构里的账号的角色。
系统 -> 系统概况 -> 用户和组 -> 系统角色 -> 修改角色
只有【系统超级管理员】可以编辑角色。
系统 -> 系统概况 -> 用户和组 -> 系统角色 -> 删除角色
只有【系统超级管理员】可以删除角色。
提示:
电子邮件: 需要有效的格式
密码: 字母数字,最少 8 位,手动录入
手机号: 全系统唯一
用户分为管理员和普通用户
坐席分为一般坐席和 SIP 坐席,普通用户与管理用户都可以成为坐席;SIP 坐席是在多媒体坐席的基础上扩展的坐席类型
每个账号必须分配到一个部门下,以及关联到一个角色上,才可以查看或管理资源,请详细阅读【组织机构】和【角色】管理
创建普通用户
创建多媒体坐席
创建管理员
系统 -> 系统概况 -> 用户和组 -> 用户账号
点击操作一栏中的“编辑”“删除”,可以对当前用户列表中的所有用户的信息进行编辑或者删除
系统 -> 系统概况 -> 用户和组 -> 组织结构 -> 选中一个部门 -> 添加用户到当前部门
可以把已经存在的 用户账号 添加到一个特定的部门中
一个用户账号只能隶属于一个部门
系统 -> 系统概况 -> 用户和组 -> 系统角色 -> 添加用户到角色
春松客服支持多种渠道,访客的来源可能是多渠道的,这既是目前联络中心发展的趋势,也是对智能客服系统的一个挑战:随着信息技术和互联网通信、聊天工具的演变,企业的客户逐渐分散,营销平台尤其多元化。
多渠道的适应能力,是春松客服的重要特色。在【渠道管理】的子菜单中,查看支持的不同类型的渠道。
渠道名称 | 简介 | 获得 |
---|---|---|
网页渠道 | 通过在网页中注入春松客服网页渠道 HTML,实现聊天控件,访客与客服建立实时通信连接,支持坐席邀请访客对话等功能 | 开源,免费,随基础代码发布 |
Facebook Messenger 渠道 | 简称“Messenger 插件”或“ME”插件,Messenger 是 Facebook 旗下的最主要的即时通信软件,支持多种平台,因其创新的理念、优秀的用户体验和全球最大的社交网络,而广泛应用。春松客服 Messenger 插件帮助企业在 Facebook 平台上实现营销和客户服务 | 开源,免费,随基础代码发布 |
网页聊天支持可以适配移动设备浏览器,桌面浏览器。可以在电脑,手机,微信等渠道接入网页聊天控件。
获取网页脚本,系统 -> 系统管理 -> 客服接入 -> 网站列表 -> 点击“Chatopera 官网” -> 基本设置 -> 接入;
将图中的代码复制到一个 Web 项目的页面中,例如下图所示。
使用浏览器打开该 Web 页面。
【提示】该网页需要使用 http(s) 打开,不支持使用浏览器打开本地 HTML 页面。
点击该网页中出现的“在线客服”按钮,出现聊天窗口,可以作为访客,与客服聊天了。
【提示】春松客服提供一个测试网页客户端的例子,可以使用 http://IP:PORT/testclient.html 进行访问。
Messenger 是 Facebook 旗下的最主要的即时通信软件,支持多种平台,因其创新的理念、优秀的用户体验和全球最大的社交网络,而广泛应用。通过 Facebook Messenger 的官方链接,可以了解更多。
春松客服 Messenger 插件帮助企业在 Facebook 平台上实现营销和客户服务。
首先,出海企业要获客,或者通过互联网方式提供服务,那么 Facebook 上的广告和 Messenger 服务,是您无论如何都要使用的,因为你可以从这里找到您的目标客户、潜在客户。但是,如果 Facebook 平台的商业化程度过高,将影响社交网络内用户的体验,比如用户收到和自己不相关、不感兴趣的、大量的广告。为此 Facebook 在广告和 Messenger 上,有很多设计、一些限制,达到了商业化和人们社交需求的平衡,这是 Facebook 能成为今天世界上最大的社交网络的关键原因之一。
其次,您需要了解 Messenger 的一些应用场景,比如 Cskefu 为九九互动提供的智能客服和 OTN 服务的案例 chatopera-me-jiujiu2020。
在正式介绍春松客服 Messenger 插件的使用之前,需要说明 Cskefu 提供该插件是通过 Facebook Messenger 平台的开发者 APIs 实现,因此,该插件的功能安全可靠、稳定强大并且会不断更新。
https://developers.facebook.com/docs/messenger-platform
自今年春松客服开源社区和技术委员会成立,我们重新复盘了春松客服的发展过往;我们决心重构整个春松客服,所以 V8 会是一个全新定位的版本。
春松客服适应各种部署方式,本文使用 Docker 和 Docker compose 的方式,适合体验、开发、测试和上线春松客服,此种方式简单快捷。
更新:我们正在推进基于 Helm Chart 的方式安装,让企业可以更方便地在 Kubernetes 容器平台上使用
重要提示:部署应用后,必须按照《系统初始化》文档进行系统初始化,再使用,不做初始化,会造成坐席无法分配等问题。
项目 | 说明 |
---|---|
操作系统 | Linux (CentOS 7.x, Ubuntu 16.04+ 等),推荐使用 Ubuntu LTS |
Docker 版本 | Docker version 1.13.x 及以上 |
Docker Compose 版本 | version 1.23.x 及以上 |
防火墙端口 | 8035, 8036 |
其他软件 | git |
内存 | 开发测试 >= 8GB |
CPU 颗数 | 开发测试 >= 2 |
硬盘 | >= 20GB |
git clone -b master https://github.com/cskefu/cskefu.git cskefu
cd cskefu
cp sample.env .env # 使用文本编辑器打开 .env 文件,并按照需求修改配置
以上命令中,master
代表当前稳定版,是 cskefu/cskefu 的 master 分支,分支说明。
分支 | 说明 |
---|---|
master | 当前稳定版本 |
develop | 当前开发版本 |
克隆代码时,按照需要指定分支信息;本部署文档针对 master 分支。
以下为部署相关的环境变量,可以在 .env
中覆盖默认值。
KEY | 默认值 | 说明 |
---|---|---|
COMPOSE_FILE | docker-compose.yml | 服务编排描述文件,保持默认值 |
COMPOSE_PROJECT_NAME | cskefu | 服务实例的容器前缀,可以用其它字符串 |
MYSQL_PORT | 8037 | MySQL 数据库映射到宿主机器使用的端口 |
REDIS_PORT | 8041 | Redis 映射到宿主机器的端口 |
ES_PORT1 | 8039 | ElasticSearch RestAPI 映射到宿主机器的端口 |
ES_PORT2 | 8040 | ElasticSearch 服务发现端口映射到宿主机器的端口 |
CC_WEB_PORT | 8035 | 春松客服 Web 服务地址映射到宿主机器的端口 |
CC_SOCKET_PORT | 8036 | 春松客服 SocketIO 服务映射到宿主机器的端口 |
ACTIVEMQ_PORT1 | 8051 | ActiveMQ 端口 |
ACTIVEMQ_PORT2 | 8052 | ActiveMQ 端口 |
ACTIVEMQ_PORT3 | 8053 | ActiveMQ 端口 |
DB_PASSWD | 123456 | 数据库密码,设置到 MySQL, Redis, ActiveMQ |
LOG_LEVEL | INFO | 日志级别,可使用 WARN, ERROR, INFO, DEBUG |
以上配置中,各端口的默认值需要保证在宿主机器上尚未被占用;数据库的密码尽量复杂;CC_WEB_PORT 和 CC_SOCKET_PORT 这两个值尽量不要变更;生产环境下 LOG_LEVEL 至少使用 WARN 级别。
以下为一些业务功能相关配置的环境变量:
KEY | 默认值 | 说明 |
---|---|---|
TONGJI_BAIDU_SITEKEY | placeholder | 使用百度统计 记录和查看页面访问情况,默认不记录 |
EXTRAS_LOGIN_BANNER | off | 登录页上方展示通知的内容,默认 (off) 不展示 |
EXTRAS_LOGIN_CHATBOX | off | 登录页支持加入一个春松客服网页渠道聊天按钮,比如 https://oh-my.cskefu.com/im/xxx.html,默认 (off) 不展示 |
cd cskefu # 进入下载后的文件夹
docker-compose pull # 拉取镜像
docker-compose up -d contact-center # 启动服务
docker-compose logs -f contact-center
在日志中,查看到如下类似信息,代表服务已经启动。
INFO c.c.socketio.SocketIOServer - SocketIO server started at port: 8036 [nioEventLoopGroup-2-1]
INFO com.chatopera.cc.Application - Started Application in 35.319 seconds (JVM running for 42.876) [main]
然后,从浏览器打开 http://YOUR_IP:CC_WEB_PORT/ 访问服务。默认管理员账号:admin 密码:admin1234
春松客服提供了在线体验环境,方便您了解春松客服最新功能。
内容 | 说明 |
---|---|
网站 | https://demo.cskefu.com |
默认用户名 | admin |
默认密码 | admin1234 |
注意,请不要修改用户名和密码;演示环境数据随时有可能被重置,请自行保障数据安全。
周日 10:00UTC+8(中文)(单周)。转换为您的时区
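上面提到的例会时间以 UTC+8 给出,可以用 Python 标准库 zoneinfo 做一次时区换算(下面的日期与目标时区仅为演示假设):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 假设某个周日的例会时间为北京时间(UTC+8)10:00
meeting = datetime(2022, 10, 16, 10, 0, tzinfo=ZoneInfo("Asia/Shanghai"))

# 转换为柏林时间(示例时区,可替换为你所在的时区名)
local = meeting.astimezone(ZoneInfo("Europe/Berlin"))
print(local.strftime("%Y-%m-%d %H:%M"))  # 2022-10-16 04:00
```

换成其他时区只需替换 ZoneInfo 的时区名,例如 "America/New_York"。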
对于一个大型企业,内部有几十个业务系统是非常正常的一件事情。
在这个背景下,有一个具体的服务可以很干净的只做一件事情是非常棒的:把人管好。
验证流程图
<门神>
梳理笔记工具,最近把语雀都迁移到了 GitHub,找个时间可以讨论下中间的过程。
参考资料:
服务发现,是消费端自动发现服务地址列表的能力,是微服务框架需要具备的关键能力,借助于自动化的服务发现,微服务之间在无需感知对端部署位置与 IP 地址的情况下实现通信。
Dubbo 提供的 Client-Based 服务发现机制,同时也需要第三方注册中心来协调服务发现过程,比如 Nacos/Zookeeper 等。
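上面描述的"按服务名查询地址列表"的服务发现思想,可以用一个极简的内存注册表来示意(纯演示代码,不代表 Dubbo/Nacos/Zookeeper 的真实实现):

```python
from collections import defaultdict


class Registry:
    """极简服务注册表:服务名 -> 实例地址列表(仅为示意)。"""

    def __init__(self):
        self._services = defaultdict(list)

    def register(self, name, addr):
        # 服务提供方启动时把自己的地址注册上来
        self._services[name].append(addr)

    def discover(self, name):
        # 消费端无需感知对端部署位置,只按服务名查询地址列表
        return list(self._services[name])


reg = Registry()
reg.register("order-service", "10.0.0.1:20880")
reg.register("order-service", "10.0.0.2:20880")
print(reg.discover("order-service"))  # ['10.0.0.1:20880', '10.0.0.2:20880']
```

真实注册中心还要处理健康检查、实例下线、推送变更等,这里只展示"注册 + 发现"这一核心交互。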
Dubbo Mesh 的目标是提供适应 Dubbo 体系的完整 Mesh 解决方案,包含定制化控制面(Control Plane)、定制化数据面解决方案。Dubbo 控制面基于业界主流 Istio 扩展,支持更丰富的流量治理规则、Dubbo 应用级服务发现模型等;Dubbo 数据面既可以采用 Envoy Sidecar,即实现 Dubbo SDK + Envoy 的部署方案,也可以采用 Dubbo Proxyless 模式,直接实现 Dubbo 与控制面的通信。
Dubbo 3.0 提供的新特性包括:
import pandas

df = pandas.read_csv('somefile.txt')  # 读取数据文件
df = df.fillna(0)  # 将缺失值(NaN)填充为 0
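上面的用法可以用一个自包含的小例子验证效果(这里用 io.StringIO 构造的示例数据代替 somefile.txt,列名 a、b 为假设):

```python
import io

import pandas as pd

# 用一段内嵌 CSV 模拟 somefile.txt(假设数据,仅作演示)
csv_text = "a,b\n1,\n,2\n"

df = pd.read_csv(io.StringIO(csv_text))
df = df.fillna(0)  # 将所有 NaN 填充为 0

print(df["a"].tolist())  # [1.0, 0.0]
print(df["b"].tolist())  # [0.0, 2.0]
```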
暗坑很多
REDASH_COOKIE_SECRET=a07cca441ab9f28b66c589f3118e0de48469b1bc6a5036eade7badbed305d96e
POSTGRES_HOST_AUTH_METHOD=trust
REDASH_REDIS_URL=redis://redis:6379/0
REDASH_DATABASE_URL=postgresql://postgres
需要创建一个 postgres-data 目录,并在 docker-compose.yml 中配置其路径,用于数据库持久化
需要给 postgres 容器增加 sudo 命令
需要手工进入到 postgresql 容器内创建 role 和 database
执行数据库初始化动作
docker-compose run --rm server create_db
然后重启 redash 全部服务即可 docker-compose down 后重启
postgresql 在执行 psql 命令时,默认会读取当前系统用户作为执行 role;但 psql 默认用户是 postgres
https://redash.io/help/open-source/setup https://redash.io/help/open-source/dev-guide/docker https://docs.victoriametrics.com/url-examples.html#apiv1exportcsv
以上主要会涉及 3 个镜像:redis、pgsql、redash,其中核心是 redash,所以需要重点关注的镜像版本也是它
redash 的版本升级较为方便,更换 server 的镜像;然后升级数据库即可。
测试过从 v8 升级到 v10 , 和 v9 升级到 v10,都是 ok 的。
docker-compose stop server scheduler scheduled_worker adhoc_worker
docker-compose pull
拉取新镜像版本
docker-compose run --rm server manage db upgrade
docker-compose up -d
由于我们的 es 访问地址采用 https,但使用的是自签证书,在 request 时会有些问题,所以我在这里更新了 elasticsearch 的插件,然后将其上传到我个人的 Docker Hub:https://hub.docker.com/r/samzong/redash
带来的问题是:页面上无法选择 Elasticsearch 作为数据源,没时间去研究了
看了下还是可以使用 redash 的 API 去创建的 /api/data_sources
:
{
"options": {
"basic_auth_password": "-----",
"basic_auth_user": "elastic",
"server": "https://10.6.51.101:31001/",
"skip_tls_verification": true
},
"type": "elasticsearch",
"name": "test-es"
}
创建完成后,就可以在页面上更新了。
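上面的 payload 也可以用一小段 Python 构造并序列化,真正调用时再配合 Redash 的 API Key 发送(下面的 server 地址、密码均为示例值,requests.post 一行仅作示意、未真正发送):

```python
import json

# 按上文的 payload 构造创建 Elasticsearch 数据源的请求体
payload = {
    "options": {
        "basic_auth_password": "-----",
        "basic_auth_user": "elastic",
        "server": "https://10.6.51.101:31001/",
        "skip_tls_verification": True,
    },
    "type": "elasticsearch",
    "name": "test-es",
}

body = json.dumps(payload)
print(payload["type"])  # elasticsearch

# 实际创建时(示意;REDASH_URL 与 API_KEY 需替换为你的环境值):
# import requests
# requests.post(f"{REDASH_URL}/api/data_sources", data=body,
#               headers={"Authorization": f"Key {API_KEY}",
#                        "Content-Type": "application/json"})
```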
同样一潭池水,天人视之为清波,饿鬼视之为阳焰。所有的看法,无非是一个人的世界观投在故事上的倒影,它
就常有这样的经验。比如碰见好久不见的老友,事前特别期待,想到有很多话可以聊,等坐到桌上,却发现只剩下老生常谈。这怪不得谁,每个人的生活都不一样,各自感兴趣的事儿都不同。不像以往,大家都在同一个班里,每天的生活都是类似的,所思所想也都差不了太远。
要想在一个圈子里得到认同,需要遵照这个圈子的价值标准和话语体系。
很久之后,我才慢慢明白自己当初的做法多么愚蠢。想要一个人改变习惯,很难。而尤其不应该的是,总把自己以为便利的东西强加给别人。年龄和岁月足以让一个人的学习能力慢慢退化,并最终丧失对新事物的好奇。从头掌握最基本的技能,对不年轻的人来说,成本太高昂。他们需要为养家糊口奔波,为时不多的暇余,并不足以支撑他们疲惫的身心去探究世界的种种新奇。于是,他们就远远被时代抛下了。
圣诞老人的意思就是:在你爱的人不知道的时候,把礼物放到他的身边。因为收到礼物的人不知道,所以这个夜晚是寂静的。
许多东西,只在此时、此地、此景有。没办法绸缪,也没办法复盘。可贵的事物之所以可贵,正在其转瞬即逝,不可复得。在一刹那灭去,并成为永恒。
屏蔽掉一个人,看不到他的样子,听不到他的声音,却抑制不了不去想他。世间的痛苦和烦恼,不在于不能得到,而在于不能割舍。不在于不想屏蔽的东西被屏蔽,而在于想屏蔽的东西无法屏蔽
对于贪著心,真正可怕的不是一件事情上的得失,而是串习的力量对一个人行为举止的巨大影响。这种影响是潜在的,却是深刻而难以磨灭的。
三涂之苦
编程谓之 Class,《
钱穆先生曾在《宋明理学概述》序言里说:“虽居乡僻,未尝敢一日废学。虽经乱离困厄,未尝敢一日颓其志。虽或名利当前,未尝敢动其心。虽或毁誉横生,未尝敢馁其气。虽学不足以自成立,未尝或忘先儒之榘矱,时切其向慕。虽垂老无以自靖献,未尝不于国家民族世道人心,自任以匹夫之有其责。”
佛家说烦恼即是般若,那些让你斩不断的愁丝才是真正度你到彼岸的船筏。
前言 阿德勒:超越自卑,找到生命的真正意义 阿德勒是个体心理学的先驱和代表。个体心理学与其他心理学流派相比,最突出显要的一点在于,它对人的研究是以个体为始,即首先关注个体本身的成长发展和人生历程。有趣的是,如果你对阿德勒本人的人生经历及其学说有所了解的话,你会发现其人的发展轨迹恰好印证了他所秉持的观点和理论。阿德勒认为,每个人都有不同程度的自卑感,而优越感则是自卑感的补偿。但自卑感的存在并不是一件坏事,因为它激励了人不断追求卓越,克服自身的障碍,在有限的生命空间内发挥出最大的价值。可以说,正是由于人类会有自卑感,才会有不断取得发展和进步的不竭动力。与此相印证的是,阿德勒本人就是一个“先天发展不足”、一开始就存在着极为强烈自卑感的人。阿德勒是一个直到 4 岁才会走路的体弱多病的儿童。他患有佝偻病,无法进行激烈的体育活动。但他并没有让身体上的缺陷压倒自己,相反,这刺激了他的上进心。
通过 PR 共同对主仓库进行贡献,是开源项目协作的非常有效的方法;通常我们不会直接对主仓库提交代码。
一般情况下,我们的操作是,在需要贡献时,Fork 一份项目,然后自己修改以 PR 的方式提交贡献请求。
熟悉 Github 的同学会发现,在 Github 上最近更新了 Sync fork
的功能,通过简单的点击操作,即可完成对源库 (upstream) 的更新同步。
通过以上方式,我们可以方便地跟随主库的更新。
Tips:GitHub 的 Sync fork 方式,仅在 Web 端和官方提供的 GitHub CLI(gh)中才可以使用。
GitHub 更多用于开源项目的贡献;实际上,在我们日常的工作当中,更多会有自建的 GitLab 或者其他的代码 Hub,这里以 GitLab 为例。
Gitlab 并未提供 Sync fork 的功能,所以我们需要自行解决同步的需求
以下方式可在终端操作,或本地使用 Sourcetree 等工具;仅使用 Git 的方式协作,自由度很大。
基于这个方式我们可以实现,有关联的 主库到从库的同步;也可以完成跨 Hub 的同步,比如:
samzonglu at samzongs-MacBook-Pro in ~/Git/daocloud/DaoCloud-docs(main✔)
± git remote add upstream https://github.com/DaoCloud/DaoCloud-docs.git
Alias tip: gra upstream https://github.com/DaoCloud/DaoCloud-docs.git
samzonglu at samzongs-MacBook-Pro in ~/Git/daocloud/DaoCloud-docs(main✔)
± git remote
Alias tip: gr
origin
upstream
samzonglu at samzongs-MacBook-Pro in ~/Git/daocloud/DaoCloud-docs(main✔)
± git fetch upstream
Alias tip: gf upstream
remote: Enumerating objects: 1268, done.
remote: Counting objects: 100% (182/182), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 1268 (delta 177), reused 182 (delta 177), pack-reused 1086
Receiving objects: 100% (1268/1268), 1.87 MiB | 2.74 MiB/s, done.
Resolving deltas: 100% (637/637), completed with 119 local objects.
From https://github.com/DaoCloud/DaoCloud-docs
* [new branch] feature/theme -> upstream/feature/theme
* [new branch] fix-typo -> upstream/fix-typo
* [new branch] gh-pages -> upstream/gh-pages
* [new branch] main -> upstream/main
* [new branch] patch-new-gh-pages-01 -> upstream/patch-new-gh-pages-01
* [new branch] stash/bug-codes -> upstream/stash/bug-codes
samzonglu at samzongs-MacBook-Pro in ~/Git/daocloud/DaoCloud-docs(main✔)
± git pull upstream main
Alias tip: gl upstream main
From https://github.com/DaoCloud/DaoCloud-docs
* branch main -> FETCH_HEAD
Already up to date.
samzonglu at samzongs-MacBook-Pro in ~/Git/daocloud/DaoCloud-docs(main✔)
± git push origin main
Alias tip: gp origin main
Everything up-to-date
厂家 | 产品控制台主页 | 应用模式 |
---|---|---|
阿里云 | https://kafka.console.aliyun.com/ | 中间件 PaaS 模块之一 |
腾讯云 | https://console.cloud.tencent.com/ckafka/overview | 中间件 PaaS 模块之一 |
华为云 | https://console.huaweicloud.com/dms/?region=cn-east-3&engine=kafka | 中间件 PaaS 模块之一 |
QingCloud | https://console.qingcloud.com/pek3/app/app-n9ro0xcp/ | 中间件 AppCenter 模块之一 |
UCloud | https://console.ucloud.cn/ukafka/ukafka | 中间件 PaaS 模块之一 |
时速云 | https://console.tenxcloud.com/middleware_center/app | 中间件 应用之一 |
功能点 | 阿里云 | 腾讯云 | 华为云 | QingCloud | UCloud | 时速云 | strimzi-kafka-operator | koperator |
---|---|---|---|---|---|---|---|---|
Kafka 实例的生命周期管理 | √ | √ | √ | √ | √ | √ | √ | √ |
Kafka 多版本支持 | √ 默认固定,工单调整 | √ | √ | √(仅 1 个) | √ | √ | √ | √ |
Kafka 节点列表 | √ | √ | √ | 跳转 Pod | √ (能创建 pod) | √ | ||
Kafka 原生参数管理 | √ | √ 原生 | √ | √ 原生 | √ | √ |
Kafka 常用参数抽象 | √ | √ | √ | √ | √ | √ | ||
Kafka 模块自带 Zookeeper | √ | √ | √ | √ | √ | |||
消息查询功能 | √ | √ | √ | √ 原生 | √ 原生 | |||
消息下载功能 | √(高级版) | |||||||
Topic 管理列表 | √ | √ | √ | √ 原生 | √ | √ 原生 | ||
Topic 增删改查 | √ | √ | √ | √ 原生 | √ | √ 原生 | ||
Topic 高级参数支持 | √ | √ | √ | √ 原生 | √ | √ 原生 | ||
Topic 详情 | √ | √ | √ 原生 | √ | √ 原生 | √ | √ |
Consumer Group 列表 | √ | √ | √ 原生 | √ 原生 | ||||
Consumer Group 增删改查 | √ | √ | √ 原生 | √ | √ 原生 | |||
资源监控看板 | √ | √ | √ | √ | √ | √ grafana dashboard | √ grafana dashboard | |
Kafka 业务数据监控 (消息量/积压/消费情况) | √ | √ | √ | √ | √Grafana | √ exporter+grafana | √ exporter+grafana | |
示例接入代码 | √ | √ | √ | |||||
消息发送测试窗口 | √ | √ | √ | |||||
Kafka 服务日志查看 | ||||||||
操作审计日志查看 | √ | √ | √ | |||||
提供 Kafka Manager UI | √ | √ | ||||||
提供 kafka export 备份功能 | √ | √ | ||||||
友好性帮助文档 | √ | √ | √ | √ | √ | |||
提供帮助用户迁入上云 | √ | √ | √ |
厂家 | 字段 |
---|---|
阿里云 | 名称流量规格集群流量 = 业务流量 + 集群内副本复制流量,该规格实际业务读流量处理峰值为 50 MB/s,业务写流量处理峰值为 10 MB/s。磁盘容量数据默认 3 副本存储。实例规格为标准版时,如购买 300G 磁盘,实际存储业务的磁盘大小为 100G,其余 200G 为备份容量。实例规格为专业版时,如购买 300G 磁盘,实际存储业务的磁盘大小为 300G,额外赠送 600G 备份容量。消息保留最长时间是指在磁盘容量充足的情况下,消息的最长保留时间;在磁盘容量不足(即磁盘水位达到 85%)时,将提前删除旧的消息,以确保服务可用性;默认 72 小时,可选 24 小时 ~ 168 小时。最大消息大小,默认 1MB 边界值?标准版实例单条消息最大为 256KB,专业版实例单条消息最大为 10MB 且支持下载Topic 数量 |
腾讯云 | 名称 Kafka 版本实例规格配置存储容量消息保留时长 |
华为云 | 名称 Kafka 版本实例规格配置存储容量 |
QingCloud | 名称 Kafka 版本 Kafka 节点配置:CPU / 内存 / 节点数(规格)客户端节点配置:CPU / 内存 / 节点数(规格)Kafka-Manager / CLI)Zookeeper 实例存储容量自定义参数配置内部 Topic offset replicasKafka manager 认证 zabbix.agentkafka scale version |
UCloud | 名称 Kafka 版本实例规格配置 + 存储容量实例数 3< 设定值< 100 消息保留时长 |
DCE5 支持多集群,Kafka 采用 operator 的方式部署,则需要先安装 Operator 模板到集群内
在用户创建 kafka 实例时,检测目标集群是否存在 kafka-operator,如果不存在则同步安装
什么时间移除 kafka-operator,默认情况下安装后不移除;交由 Kpanda 对集群释放时处理
通过多版本对应多 Kafka-operator 的方式,让用户进行多版本选择
非必要需求,短期不支持
默认情况下更新了 operator 之后,不对存量做处理;后续可以做友好提示用户升级即可
实例创建
实例详情
https://cloud.tencent.com/document/product/597/73566
创建时,需要关联依赖服务 zookeeper
提供了原生的 Kafka-manager 管理 UI https://docsv3.qingcloud.com/middware/kafka/manual/kafka_manager/kafka_manager_topic/
访问方式,以 openvpn 的方式接入到 VPC(需绑定入口公网 IP) 后,通过 client 内网地址访问
将上游 Kafka 数据传输到 HDFS 或者 es
提供原生的实例管理界面,进行配置更新等
直接跳转到 K8s 容器组界面查看
在 Kafka manger 原生 UI 控制台内管理,默认启用了控制台公网访问能力
基础 CPU、内存、网络、存储监控
接入 Grafana 提供一个 Dashboard 的,可以查看 Topics 的消息数,消费量,积压数
操作视频已上传到 OneDrive 共享空间
腾讯云只可以按月份订购,所以没有录制视频。
方案一:
实现:
方案比对:
方案二:
实现:
方案对比:
官方网站 https://python-poetry.org/ 集成了所有 Poetry 最新的使用文档,以下仅在我的环境上经过验证
# In Pip
- 安装 pip install poetry # pip3
- 更新 poetry self update
# In my Mac
- 安装 brew install poetry
- 更新 brew upgrade poetry
# In my CentOS
- 安装 curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
- 更新 poetry self update
在开始使用前,建议先对 poetry 的配置有些了解,并调整为适合你的方式,主要是调整一下虚拟环境的安装位置
poetry config
poetry 相关的查看和编辑的命令
~ poetry config --list # 获取当前 poetry 的配置情况
cache-dir = "/Users/$username/Library/Caches/pypoetry"
experimental.new-installer = true
installer.parallel = true
virtualenvs.create = true
virtualenvs.in-project = true
virtualenvs.path = "{cache-dir}/virtualenvs" # /Users/$username/Library/Caches/pypoetry/virtualenvs
配置项目 | 配置内容 | 配置项说明 | 建议配置 |
---|---|---|---|
cache-dir | String | 缓存目录配置,使用 poetry 安装的包源文件都会缓存到这个目录 | 不建议更改 |
installer.parallel | boolean | 此配置会被忽略 | |
virtualenvs.create | boolean | 默认为 true,如果当前工程的虚拟环境不存在,就创建一个 | 不建议更改 |
virtualenvs.in-project | boolean | None:poetry 会在由 virtualenvs.path 指定的系统目录创建一个 .venv 目录;true:poetry 会在项目根目录创建一个 .venv 目录;false:poetry 将会忽略已存在的 .venv 目录 | 建议修改为 true,在项目根目录创建虚拟环境,这样就算移动目录位置也不影响虚拟环境的使用 |
virtualenvs.path | string | 默认是 {cache-dir}/virtualenvs,虚拟环境创建的目录;如果上面的 in-project 为 true,此配置就无效 | 不建议更改 |
建议 在使用前 启用 virtualenvs.in-project,这样会在每个项目下有一个.venv 方便隔离管理
# poetry 配置说明
poetry config virtualenvs.in-project true
Poetry Command | 解释 |
---|---|
$ poetry --version | 显示您的 Poetry 安装版本。 |
$ poetry new | 创建一个新的 Poetry 项目。 |
$ poetry init | 将 Poetry 添加到现有项目中。 |
$ poetry run | 使用 Poetry 执行给定的命令。 |
$ poetry add | 添加一个包 pyproject.toml 并安装它。 |
$ poetry update | 更新项目的依赖项。 |
$ poetry install | 安装依赖项。 |
$ poetry show | 列出已安装的软件包。 |
$ poetry lock | 将最新版本的依赖项固定到 poetry.lock. |
$ poetry lock --no-update | 刷新 poetry.lock 文件而不更新任何依赖版本。 |
$ poetry check | 验证 pyproject.toml。 |
$ poetry config --list | 显示 Poetry 配置。 |
$ poetry env list | 列出项目的虚拟环境。 |
$ poetry export | 导出 poetry.lock 为其他格式。 |
这里以初始化一个 FastAPI 项目作为示例
➜ fastapi poetry new fastapi-demo
Created package fastapi_demo in fastapi-demo
➜ fastapi ls -lh fastapi-demo
total 8
-rw-r--r-- 1 samzonglu staff 0B 2 15 14:28 README.rst
drwxr-xr-x 3 samzonglu staff 96B 2 15 14:28 fastapi_demo
-rw-r--r-- 1 samzonglu staff 304B 2 15 14:28 pyproject.toml
drwxr-xr-x 4 samzonglu staff 128B 2 15 14:28 tests
➜ fastapi cd fastapi-demo
➜ fastapi-demo poetry env use 3.10.2 # 配置项目的虚拟环境
这里会遇到一个问题,已存在的项目基本都已经有了 requirements.txt,所以 poetry 最好可以直接读取它
poetry add `cat requirements.txt`
将项目依赖导出为 requirements.txt
poetry export --output requirements.txt
如何把项目移交给一个不使用 Poetry 的人运行?对于 Python 的环境包依赖,上述的 output/input 方式会存在一些问题,这里进行纠正。
pip freeze > requirements.txt
anyio @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/cd/3f/ae/baff749ce6cb4d7985e4142650605d2d30cb92eb418e2d121868e4413d/anyio-3.6.1-py3-none-any.whl
certifi @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/b1/9b/6f/cd63ce97294ee9a1fb57e5cebf02f251fbb8f9ac48353a27ceeddc410b/certifi-2022.6.15-py3-none-any.whl
charset-normalizer @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/86/c8/3e/d878881698fbd2b72f484e4fca340588d633102920a002b66a293f9480/charset_normalizer-2.1.0-py3-none-any.whl
click @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/63/f3/4c/2270b95f4d37b9ea73cd401abe68b6e9ede30380533cd4e7118a8e3aa3/click-8.1.3-py3-none-any.whl
fastapi @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/f9/37/53/c998e9ffd7ace66218174711f5c3ef1026a0bd3cd72f5fe2908e9b949b/fastapi-0.78.0-py3-none-any.whl
h11 @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/ef/5c/a2/a6d556bc5e3493616e52726df9c880b2da2fbf9c3be5e8351c84fbfafd/h11-0.13.0-py3-none-any.whl
idna @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/90/36/8c/81eabf6ac88608721ab27f439c9a6b9a8e6a21cc58c59ebb1a42720199/idna-3.3-py3-none-any.whl
pydantic @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/c2/13/d4/b9f7dbf75702d85504b4a5f36545ff903c7e2264d4889e94ce02637276/pydantic-1.9.1-cp310-cp310-macosx_11_0_arm64.whl
requests @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/14/1f/4d/1b93db6513b8ab38db841e4ce62691288ba549a5c1b6f3ca7274a1c9fd/requests-2.28.1-py3-none-any.whl
sniffio @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/2b/1b/93/9c34d727e29f7bb11ce5b2ca7f43e77cb4e96a81ee5e07a92763951416/sniffio-1.2.0-py3-none-any.whl
starlette @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/3d/fc/74/569a1206737284325f5bb2e4f34689632c159dafbe8b7ff30bf2893c2d/starlette-0.19.1-py3-none-any.whl
typing_extensions @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/4a/aa/fe/e4680f3423fbdb5ac89a6fb2f83d9e7ff7fb48173b0fa1604786182558/typing_extensions-4.3.0-py3-none-any.whl
urllib3 @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/8a/87/ce/4a44bf6bb59a745f4af7082c6977ab23a478fca039ad4d631dfdc0185b/urllib3-1.26.10-py2.py3-none-any.whl
uvicorn @ file:///Users/samzonglu/Library/Caches/pypoetry/artifacts/62/76/ec/dcafe6bae872839618dbf982c87eb314eee97784f7df74895e07bd198a/uvicorn-0.18.2-py3-none-any.whl
可以看到,默认情况下pip freeze
在输出时,携带了对应的安装路径;如果这个时候,我们把项目移交给其他人运行,会遇到以下问题
ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such
这是 pip 安装软件包的一种特殊语法(直接引用,见 PEP 440,pip 自 19.1 开始支持),但是此种路径取决于环境:file:/// URL 仅在本地文件系统上可用,你不能将生成的 requirements.txt 文件提供给其他人使用
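如果手头只有这种带 file:/// 路径的 requirements.txt,可以用下面的小脚本把每一行还原成固定版本的写法(示意实现,基于标准 wheel 文件名 `名称-版本-标签.whl` 的命名规则,并非官方工具):

```python
import re


def normalize_freeze_line(line: str) -> str:
    """将 `pkg @ file:///.../pkg-1.2.3-py3-none-any.whl` 还原为 `pkg==1.2.3`。

    仅处理标准 wheel 文件名的行,其他行原样返回。
    """
    m = re.match(r"^(\S+) @ file://\S+/([A-Za-z0-9_.]+)-([0-9][^-]*)-", line)
    if m:
        return f"{m.group(1)}=={m.group(3)}"
    return line


print(normalize_freeze_line(
    "anyio @ file:///Users/x/Library/Caches/pypoetry/anyio-3.6.1-py3-none-any.whl"
))  # anyio==3.6.1
```

逐行套用到整个文件上,即可得到一份可移交给他人的 requirements.txt。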
poetry export --without-hashes --format=requirements.txt > requirements.txt
通过 poetry 自带的导出能力,会携带更多的一些信息(例如对环境中 Python 版本的依赖标记),相对更加友好一些。
anyio==3.6.1; python_full_version >= "3.6.2"
certifi==2022.6.15; python_version >= "3.6"
charset-normalizer==2.1.0; python_full_version >= "3.6.0"
click==8.1.3; python_version >= "3.7"
colorama==0.4.5; python_version >= "3.7" and python_full_version < "3.0.0" and platform_system == "Windows" or platform_system == "Windows" and python_version >= "3.7" and python_full_version >= "3.5.0"
fastapi==0.78.0; python_full_version >= "3.6.1"
h11==0.13.0; python_version >= "3.6"
idna==3.3; python_version >= "3.5"
pydantic==1.9.1; python_full_version >= "3.6.1"
requests==2.28.1; python_version >= "3.7" and python_version < "4"
sniffio==1.2.0; python_version >= "3.5"
starlette==0.19.1; python_version >= "3.6"
typing-extensions==4.3.0; python_version >= "3.7"
urllib3==1.26.10; (python_version >= "2.7" and python_full_version < "3.0.0") or (python_full_version >= "3.6.0" and python_version < "4")
uvicorn==0.18.2; python_version >= "3.7"
在新版的 python 中,推荐采用另外一个命令方式,pip list --format=freeze
这样会输出一份干净的依赖清单,我们可以通过这个方式,快速导出一份 原汁原味
的 requirements.txt
poetry run pip list --format=freeze
anyio==3.6.1
certifi==2022.6.15
charset-normalizer==2.1.0
click==8.1.3
fastapi==0.78.0
h11==0.13.0
idna==3.3
pip==22.2
pydantic==1.9.1
requests==2.28.1
setuptools==62.6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.3.0
urllib3==1.26.10
uvicorn==0.18.2
wheel==0.37.1
with poetry
cat requirements.txt | xargs poetry add
without poetry
pip install -r requirements.txt