Activating JetBrains with an Online License Server

As of 2022, these are the latest JetBrains activation servers; all products can be activated with them. If a server stops working after activation, just go find another one.

JetBrains activation servers

Open this site: https://search.censys.io/

In the search box, enter: services.http.response.headers.location: account.jetbrains.com/fls-auth

Pick any of the returned hosts, open it, and look for the HTTP service that answers with a 302 redirect.
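
To pre-check a candidate from the command line, here is a minimal sketch (the address is a placeholder; substitute a host from the search results):

curl -sI http://198.51.100.1 | grep -i '^location'
# A usable server typically returns a 302 whose Location header points to account.jetbrains.com/fls-auth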

Copy the address into your JetBrains IDE: choose License server, paste the address you just copied, and activate.

If it reports expired or activation fails, pick another server from the search results and try a few more.


2023-02-21 PyCharm, Python

Raycast: Your Year in Review Has Arrived

It has only been a bit over two months since I switched from Alfred to Raycast, and the experience is a big step up.

The improvements, as I see them:

  • Raycast's extension ecosystem is thriving, and building your own extensions is very easy
  • It looks great
  • The features shown here are free to use


2022-12-16 Mac, Raycast

DCE5-Skoala Installation Tutorial

Before You Begin

This tutorial is meant as a supplement covering manual installation and upgrade.

Installer v0.3.28 and earlier

Skoala is not installed by default; during installation planning you can edit manifest.yaml to enable automatic installation of Skoala.

Check the pre-install environment:

./dce5-installer install-app -m /sample/manifest.yaml

Installer v0.3.29

(Expected release on 2022-12-15.) This version installs Skoala by default; it is still recommended to check manifest.yaml to make sure the installer will install Skoala.

enable must be set to true, and the matching helmVersion must be specified:

...
components:
  skoala:
    enable: true
    helmVersion: v0.12.2
    variables:
...
  • Important: the installer ships with the latest chart version that had been tested at release time; unless there is a specific reason, do not change the default helm chart version.

Pre-installation Checks

Check whether Skoala is already installed

Check whether the following resources exist in the skoala-system namespace; if none of them are there, Skoala has indeed not been installed.

~ kubectl -n skoala-system get pods
NAME                                   READY   STATUS    RESTARTS        AGE
hive-8548cd9b59-948j2                  2/2     Running   2 (3h48m ago)   3h48m
sesame-5955c878c6-jz8cd                2/2     Running   0               3h48m
ui-7c9f5b7b67-9rpzc                    2/2     Running   0               3h48m
 
~ helm -n skoala-system list
NAME        NAMESPACE       REVISION    UPDATED                                 STATUS      CHART               APP VERSION
skoala      skoala-system   3           2022-12-16 11:17:35.187799553 +0800 CST deployed    skoala-0.13.0       0.13.0

2.2. Check the common-mysql dependency

Skoala uses MySQL to store its configuration, so the database must exist. Also check whether common-mysql already contains a database named skoala (see the sketch after the settings list below).

~ kubectl -n mcamel-system get statefulset
NAME                                          READY   AGE
mcamel-common-mysql-cluster-mysql             2/2     7d23h

The recommended database settings for Skoala are:

  • host: mcamel-common-mysql-cluster-mysql-master.mcamel-system.svc.cluster.local
  • port: 3306
  • database : skoala
  • user: skoala
  • password:
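
One way to check from the command line is to open a MySQL shell inside the common-mysql pod. This is only a sketch: the pod name below is assumed from the StatefulSet name, and the root credentials depend on how common-mysql was deployed.

# Pod name is an assumption (first replica of the StatefulSet shown above)
kubectl -n mcamel-system exec -it mcamel-common-mysql-cluster-mysql-0 -- mysql -uroot -p
# Inside the MySQL shell:
#   SHOW DATABASES LIKE 'skoala';
#   CREATE DATABASE IF NOT EXISTS skoala DEFAULT CHARSET utf8mb4;   -- only if it is missing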

About insight-agent

All of Skoala's monitoring data relies on Insight, so the corresponding insight-agent needs to be installed in the cluster.

Impact on Skoala:

  • If insight-agent is not installed before skoala-init, the service-monitor will not be installed
  • If you need the service-monitor, install insight-agent first and then skoala-init

If skoala-init was installed first, you currently have to reinstall skoala-init after installing insight-agent.
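
A quick way to confirm that insight-agent is present before installing skoala-init (a sketch; the namespace depends on how Insight was deployed):

kubectl get pods -A | grep insight-agent
helm list -A | grep insight-agent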

Manual Installation

Initialize the database tables

If the skoala database in common-mysql is empty, log in to the skoala database and execute the following SQL:

CREATE TABLE `registry` (
    `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
    `uid` varchar(32) DEFAULT NULL,
    `name` varchar(50) NOT NULL,
    `type` varchar(50) NOT NULL,
    `addresses` varchar(1000) NOT NULL,
    `namespaces` varchar(2000) NOT NULL,
    `deleted_at` timestamp NULL COMMENT 'Time deleted',
    `created_at` timestamp NOT NULL DEFAULT current_timestamp(),
    `updated_at` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
    PRIMARY KEY (`id`),
    UNIQUE KEY `idx_uid` (`uid`),
    UNIQUE KEY `idx_name` (`name`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
 
CREATE TABLE `book` (
    `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
    `uid` varchar(32) DEFAULT NULL,
    `name` varchar(50) NOT NULL,
    `author` varchar(32) NOT NULL,
    `status` int(1) DEFAULT 1 COMMENT '0: off the shelf, 1: on the shelf',
    `isPublished` tinyint(1) unsigned NOT NULL DEFAULT 1 COMMENT '0: unpublished, 1: published',
    `publishedAt` timestamp NULL DEFAULT NULL COMMENT 'Publication time',
    `deleted_at` timestamp NULL COMMENT 'Time deleted',
    `createdAt` timestamp NOT NULL DEFAULT current_timestamp(),
    `updatedAt` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
    PRIMARY KEY (`id`),
    UNIQUE KEY `idx_uid` (`uid`),
    UNIQUE KEY `idx_name` (`name`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
 
CREATE TABLE `api` (
    `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
    `is_hosted` tinyint DEFAULT 0,
    `registry` varchar(50) NOT NULL,
    `service_name` varchar(200) NOT NULL,
    `nacos_namespace` varchar(200) NOT NULL COMMENT 'Nacos namespace id',
    `nacos_group_name` varchar(200) NOT NULL,
    `data_type` varchar(100) NOT NULL COMMENT 'JSON or YAML.',
    `detail` mediumtext NOT NULL,
    `deleted_at` timestamp NULL COMMENT 'Time deleted',
    `created_at` timestamp NOT NULL DEFAULT current_timestamp(),
    `updated_at` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
    PRIMARY KEY (`id`),
    UNIQUE KEY `idx_registry_and_service_name` (`registry`, `service_name`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
 
INSERT INTO `book` VALUES (1,'book-init','MicroService Pattern','daocloud',1,1,'2022-03-23 13:50:00',null,now(),now());
 
alter table registry add is_hosted tinyint default 0 not null after namespaces;
alter table registry add workspace_id varchar(50) not null DEFAULT 'default' after uid;
alter table registry add ext_id varchar(50) null after workspace_id;
 
drop index idx_name on registry;
create unique index idx_name on registry (name, workspace_id);

After the above completes, the skoala database should contain three tables; double-check that every SQL statement actually took effect.
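
As a quick verification, the checks can be run through the mysql client using the connection settings listed earlier (a sketch; run it from somewhere that can reach the in-cluster service, e.g. a pod inside the cluster):

mysql -h mcamel-common-mysql-cluster-mysql-master.mcamel-system.svc.cluster.local \
      -u skoala -p skoala -e 'SHOW TABLES; SHOW COLUMNS FROM registry;'
# Expect the three tables (api, book, registry) and the added registry columns
# is_hosted, workspace_id and ext_id.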

Configure the skoala helm repo

With the skoala repo configured, you can browse and pull the Skoala application charts:

~ helm repo add skoala-release https://release.daocloud.io/chartrepo/skoala
~ helm repo update

Key point: after adding skoala-release, there are two charts you will normally care about:

  • skoala is the control-plane service of Skoala
    • once installed, the microservice engine entry shows up in the web UI
    • contains three components: ui, hive, sesame
    • must be installed in the global management cluster
  • skoala-init bundles all of Skoala's component operators
    • only needs to be installed in the worker clusters that use it
    • includes: skoala-agent, nacos, contour, sentinel
    • if it is missing, creating a registry or gateway will complain about missing components

By default, once skoala has been installed into kpanda-global-cluster (the global management cluster), the microservice engine entry appears in the sidebar.

Check the latest versions of the Skoala components

On the global management cluster, get the latest skoala versions by updating the helm repo:

~ helm repo update skoala-release
~ helm search repo skoala-release/skoala --versions
NAME                        CHART VERSION   APP VERSION DESCRIPTION
skoala-release/skoala       0.13.0          0.13.0      The helm chart for Skoala
skoala-release/skoala       0.12.2          0.12.2      The helm chart for Skoala
skoala-release/skoala       0.12.1          0.12.1      The helm chart for Skoala
skoala-release/skoala       0.12.0          0.12.0      The helm chart for Skoala
......

A skoala deployment ships with the latest frontend version available at the time; to pin a specific frontend ui version, look up the tag in the frontend repository: https://gitlab.daocloud.cn/ndx/frontend-engineering/skoala-ui/-/tags

On the worker cluster, get the latest skoala-init version the same way, by updating the helm repo:

~ helm repo update skoala-release
~ helm search repo skoala-release/skoala-init --versions
NAME                        CHART VERSION   APP VERSION DESCRIPTION
skoala-release/skoala-init  0.13.0          0.13.0      A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init  0.12.2          0.12.2      A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init  0.12.1          0.12.1      A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init  0.12.0          0.12.0      A Helm Chart for Skoala init, it includes Skoal...
......

3.4. Deploy (also applies to upgrades)

Just run the command below, minding the version numbers:

~ helm upgrade --install skoala --create-namespace -n skoala-system --cleanup-on-fail \
    --set ui.image.tag=v0.9.0 \
    --set sweet.enable=true \
    --set hive.configMap.data.database.host=mcamel-common-mysql-cluster-mysql-master.mcamel-system.svc.cluster.local \
    --set hive.configMap.data.database.port=3306 \
    --set hive.configMap.data.database.user=root \
    --set hive.configMap.data.database.password=xxxxxxxx \
    --set hive.configMap.data.database.database=skoala \
    skoala-release/skoala \
    --version 0.13.0

Customize and initialize the database parameters by passing the database settings in:

  • --set sweet.enable=true
  • --set hive.configMap.data.database.host=
  • --set hive.configMap.data.database.port=
  • --set hive.configMap.data.database.user=
  • --set hive.configMap.data.database.password=
  • --set hive.configMap.data.database.database=

Customize the frontend ui version with --set ui.image.tag=v0.9.0.

Check that the deployed pods started successfully:

~ kubectl -n skoala-system get pods
NAME                                   READY   STATUS    RESTARTS        AGE
hive-8548cd9b59-948j2                  2/2     Running   2 (3h48m ago)   3h48m
sesame-5955c878c6-jz8cd                2/2     Running   0               3h48m
ui-7c9f5b7b67-9rpzc                    2/2     Running   0               3h48m

Uninstall skoala

This uninstall step deletes all skoala-related resources.

~ helm uninstall skoala -n skoala-system

Update skoala

Updating works the same as the deployment in 3.4: run helm upgrade and point it at the new version, as sketched below.
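
For example, a hedged sketch of moving to a newer chart version while keeping the values set during installation (--reuse-values keeps the --set values from 3.4; the version is a placeholder):

~ helm upgrade skoala skoala-release/skoala -n skoala-system --reuse-values --version <new-version>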

More configuration parameters

See the Skoala code repository: https://gitlab.daocloud.cn/ndx/skoala/-/tree/main/build/charts/skoala

3.8. Install skoala-init into the worker clusters

Because Skoala involves quite a few components, they are bundled into a single chart, skoala-init, so skoala-init should be installed in every worker cluster that uses the microservice engine.

~  helm search repo skoala-release/skoala-init --versions
NAME                        CHART VERSION   APP VERSION DESCRIPTION
skoala-release/skoala-init  0.13.0          0.13.0      A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init  0.12.2          0.12.2      A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init  0.12.1          0.12.1      A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init  0.12.0          0.12.0      A Helm Chart for Skoala init, it includes Skoal...
......

The install command is the same as the update command; make sure it installs into the intended namespace and confirm that all pods start successfully.

~ helm upgrade --install skoala-init --create-namespace -n skoala-system --cleanup-on-fail \
    skoala-release/skoala-init \
    --version 0.13.0

Besides the terminal, you can also install it from the UI: in Kpanda cluster management, find skoala-init under Helm Apps and install it there.
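
Either way, a quick confirmation on the worker cluster (the expected pods follow the component list above):

~ helm -n skoala-system list          # skoala-init should show as deployed
~ kubectl -n skoala-system get pods   # expect skoala-agent, nacos, contour and sentinel pods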

Uninstall command:

~ helm uninstall skoala-init -n skoala-system

2022-12-15 Mac, Raycast

A Hand-Holding Guide to Installing DCE 5.0

Preface

This article walks through installing DCE 5.0 Community Edition from zero to one, including the Kubernetes cluster the Community Edition requires, along with installation details and caveats.

Cluster Planning

The plan is to use three UCloud VMs, each with 8 cores and 16 GB of RAM.

Role     Hostname          OS           IP              Spec
master   master-k8s-com    CentOS 7.9   10.23.245.63    8 cores / 16 GB / 300 GB
node01   node01-k8s-com    CentOS 7.9   10.23.104.173   8 cores / 16 GB / 300 GB
node02   node02-k8s-com    CentOS 7.9   10.23.112.244   8 cores / 16 GB / 300 GB

Components

  • Kubernetes 1.24.8
  • CRI containerd
  • CNI Calico
  • StorageClass HwameiStor

Node System Optimization

Set the hostname

hostnamectl set-hostname master-k8s-com
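
The worker nodes get their own names from the planning table in the same way:

hostnamectl set-hostname node01-k8s-com   # on node01
hostnamectl set-hostname node02-k8s-com   # on node02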

Add /etc/hosts entries

cat <<EOF | tee -a /etc/hosts
10.23.245.63    master-k8s-com
10.23.104.173   node01-k8s-com
10.23.112.244   node02-k8s-com
EOF

Disable swap

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Turn off the firewall

systemctl stop firewalld
systemctl disable firewalld

Allow iptables to inspect bridged traffic

Load the br_netfilter module

cat <<EOF | tee /etc/modules-load.d/kubernetes.conf
br_netfilter
EOF

# Load the module
modprobe br_netfilter

Set net.bridge.bridge-nf-call-iptables to 1

cat <<EOF | tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the sysctl settings
sysctl --system
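
To confirm the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# Both should print "= 1"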

Installing the K8s Environment

Install Docker

Add the Docker package repository

sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

Install Docker and containerd.io

sudo yum -y install docker-ce docker-ce-cli containerd.io

Adjust the Docker configuration

sudo touch /etc/docker/daemon.json

cat <<EOF | tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Adjust the containerd configuration

# Remove the stock config file
sudo rm -f /etc/containerd/config.toml

# Generate a default config
sudo containerd config default | sudo tee /etc/containerd/config.toml

# Update the config: use the systemd cgroup driver and a domestic mirror for the pause image
sudo sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/' /etc/containerd/config.toml
sudo sed -i 's/k8s.gcr.io\/pause/registry.cn-hangzhou.aliyuncs.com\/google_containers\/pause/g' /etc/containerd/config.toml
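
A quick check that both substitutions landed (the key names below match containerd's default config layout):

grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml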

Enable and start the services

sudo systemctl daemon-reload
sudo systemctl enable --now docker
sudo systemctl enable --now containerd

Verify the configuration

sudo systemctl status docker containerd
sudo docker info

Install the Kubernetes Components

Add the Kubernetes package repository

The Aliyun mirror is used here; it is much faster from within China.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the Kubernetes packages

K8sVersion=1.24.8
sudo yum install -y kubelet-$K8sVersion kubeadm-$K8sVersion
sudo yum install -y kubectl-$K8sVersion  # kubectl only needs to be installed on the master node

Start the kubelet service

sudo systemctl enable --now kubelet

The service will look unhealthy right after starting: kubelet keeps restarting because there is no cluster configuration yet. This does not affect the following steps.
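
If you want to watch what kubelet is doing while it restarts:

journalctl -u kubelet -f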

Initialize the Master Node

Plan the control-plane configuration before initializing the master node, in particular the pod network CIDR:

# Specify the Kubernetes version; keep it consistent with the packages installed above
$ sudo kubeadm init --kubernetes-version=v1.24.8 \
 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
 --pod-network-cidr 10.11.0.0/16

[init] Using Kubernetes version: v1.24.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.503693 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.24.8" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.24.8" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1.k8s.com as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1.k8s.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: c0wcm5.0yu9szfktsxvurza
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.23.245.63:6443 --token djdsj2.sj23js90213j323 \
        --discovery-token-ca-cert-hash sha256:ewuosdjk2390rjertw32p32j43p25a70298db818ajsdjk1293jk23k23201934h

Join the Worker Nodes

Make sure the earlier steps (node system optimization, Docker installation, Kubernetes components) have been completed on each worker, and that the master node was initialized successfully.

$ kubeadm join 10.23.245.63:6443 --token djdsj2.sj23js90213j323 \
        --discovery-token-ca-cert-hash sha256:ewuosdjk2390rjertw32p32j43p25a70298db818ajsdjk1293jk23k23201934h
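
If the token from kubeadm init has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master:

kubeadm token create --print-join-command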

Add both worker nodes, then check:

$ kubectl get nodes
NAME               STATUS   ROLES           AGE   VERSION
master01-k8s-com   Ready    control-plane   9h    v1.24.8
node01-k8s-com     Ready    <none>          9h    v1.24.8
node02-k8s-com     Ready    <none>          9h    v1.24.8

Install the Calico CNI

Save the YAML below as calico.yaml, or download it from my GitHub.

---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"

  # Configure the MTU to use for workload interfaces and tunnels.
  # By default, MTU is auto-detected, and explicitly setting this field should not be required.
  # You can override auto-detection by providing a non-zero value.
  veth_mtu: "0"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "log_file_path": "/var/log/calico/cni/cni.log",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

---
# Source: calico/templates/kdd-crds.yaml

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  group: crd.projectcalico.org
  names:
    kind: BGPConfiguration
    listKind: BGPConfigurationList
    plural: bgpconfigurations
    singular: bgpconfiguration
  scope: Cluster
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: BGPConfiguration contains the configuration for any BGP routing.
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: BGPConfigurationSpec contains the values of the BGP configuration.
            properties:
              asNumber:
                description: 'ASNumber is the default AS number used by a node. [Default:
                  64512]'
                format: int32
                type: integer
              communities:
                description: Communities is a list of BGP community values and their
                  arbitrary names for tagging routes.
                items:
                  description: Community contains standard or large community value
                    and its name.
                  properties:
                    name:
                      description: Name given to community value.
                      type: string
                    value:
                      description: Value must be of format `aa:nn` or `aa:nn:mm`.
                        For standard community use `aa:nn` format, where `aa` and
                        `nn` are 16 bit number. For large community use `aa:nn:mm`
                        format, where `aa`, `nn` and `mm` are 32 bit number. Where,
                        `aa` is an AS Number, `nn` and `mm` are per-AS identifier.
                      pattern: ^(\d+):(\d+)$|^(\d+):(\d+):(\d+)$
                      type: string
                  type: object
                type: array
              listenPort:
                description: ListenPort is the port where BGP protocol should listen.
                  Defaults to 179
                maximum: 65535
                minimum: 1
                type: integer
              logSeverityScreen:
                description: 'LogSeverityScreen is the log severity above which logs
                  are sent to the stdout. [Default: INFO]'
                type: string
              nodeToNodeMeshEnabled:
                description: 'NodeToNodeMeshEnabled sets whether full node to node
                  BGP mesh is enabled. [Default: true]'
                type: boolean
              prefixAdvertisements:
                description: PrefixAdvertisements contains per-prefix advertisement
                  configuration.
                items:
                  description: PrefixAdvertisement configures advertisement properties
                    for the specified CIDR.
                  properties:
                    cidr:
                      description: CIDR for which properties should be advertised.
                      type: string
                    communities:
                      description: Communities can be list of either community names
                        already defined in `Specs.Communities` or community value
                        of format `aa:nn` or `aa:nn:mm`. For standard community use
                        `aa:nn` format, where `aa` and `nn` are 16 bit number. For
                        large community use `aa:nn:mm` format, where `aa`, `nn` and
                        `mm` are 32 bit number. Where,`aa` is an AS Number, `nn` and
                        `mm` are per-AS identifier.
                      items:
                        type: string
                      type: array
                  type: object
                type: array
              serviceClusterIPs:
                description: ServiceClusterIPs are the CIDR blocks from which service
                  cluster IPs are allocated. If specified, Calico will advertise these
                  blocks, as well as any cluster IPs within them.
                items:
                  description: ServiceClusterIPBlock represents a single allowed ClusterIP
                    CIDR block.
                  properties:
                    cidr:
                      type: string
                  type: object
                type: array
              serviceExternalIPs:
                description: ServiceExternalIPs are the CIDR blocks for Kubernetes
                  Service External IPs. Kubernetes Service ExternalIPs will only be
                  advertised if they are within one of these blocks.
                items:
                  description: ServiceExternalIPBlock represents a single allowed
                    External IP CIDR block.
                  properties:
                    cidr:
                      type: string
                  type: object
                type: array
              serviceLoadBalancerIPs:
                description: ServiceLoadBalancerIPs are the CIDR blocks for Kubernetes
                  Service LoadBalancer IPs. Kubernetes Service status.LoadBalancer.Ingress
                  IPs will only be advertised if they are within one of these blocks.
                items:
                  description: ServiceLoadBalancerIPBlock represents a single allowed
                    LoadBalancer IP CIDR block.
                  properties:
                    cidr:
                      type: string
                  type: object
                type: array
            type: object
        type: object
    served: true
    storage: true
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  group: crd.projectcalico.org
  names:
    kind: BGPPeer
    listKind: BGPPeerList
    plural: bgppeers
    singular: bgppeer
  scope: Cluster
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: BGPPeerSpec contains the specification for a BGPPeer resource.
            properties:
              asNumber:
                description: The AS Number of the peer.
                format: int32
                type: integer
              keepOriginalNextHop:
                description: Option to keep the original nexthop field when routes
                  are sent to a BGP Peer. Setting "true" configures the selected BGP
                  Peers node to use the "next hop keep;" instead of "next hop self;"(default)
                  in the specific branch of the Node on "bird.cfg".
                type: boolean
              node:
                description: The node name identifying the Calico node instance that
                  is targeted by this peer. If this is not set, and no nodeSelector
                  is specified, then this BGP peer selects all nodes in the cluster.
                type: string
              nodeSelector:
                description: Selector for the nodes that should have this peering.  When
                  this is set, the Node field must be empty.
                type: string
              password:
                description: Optional BGP password for the peerings generated by this
                  BGPPeer resource.
                properties:
                  secretKeyRef:
                    description: Selects a key of a secret in the node pod's namespace.
                    properties:
                      key:
                        description: The key of the secret to select from.  Must be
                          a valid secret key.
                        type: string
                      name:
                        description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                          TODO: Add other useful fields. apiVersion, kind, uid?'
                        type: string
                      optional:
                        description: Specify whether the Secret or its key must be
                          defined
                        type: boolean
                    required:
                    - key
                    type: object
                type: object
              peerIP:
                description: The IP address of the peer followed by an optional port
                  number to peer with. If port number is given, format should be `[<IPv6>]:port`
                  or `<IPv4>:<port>` for IPv4. If optional port number is not set,
                  and this peer IP and ASNumber belongs to a calico/node with ListenPort
                  set in BGPConfiguration, then we use that port to peer.
                type: string
              peerSelector:
                description: Selector for the remote nodes to peer with.  When this
                  is set, the PeerIP and ASNumber fields must be empty.  For each
                  peering between the local node and selected remote nodes, we configure
                  an IPv4 peering if both ends have NodeBGPSpec.IPv4Address specified,
                  and an IPv6 peering if both ends have NodeBGPSpec.IPv6Address specified.  The
                  remote AS number comes from the remote node's NodeBGPSpec.ASNumber,
                  or the global default if that is not set.
                type: string
              sourceAddress:
                description: Specifies whether and how to configure a source address
                  for the peerings generated by this BGPPeer resource.  Default value
                  "UseNodeIP" means to configure the node IP as the source address.  "None"
                  means not to configure a source address.
                type: string
            type: object
        type: object
    served: true
    storage: true
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: blockaffinities.crd.projectcalico.org
spec:
  group: crd.projectcalico.org
  names:
    kind: BlockAffinity
    listKind: BlockAffinityList
    plural: blockaffinities
    singular: blockaffinity
  scope: Cluster
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: BlockAffinitySpec contains the specification for a BlockAffinity
              resource.
            properties:
              cidr:
                type: string
              deleted:
                description: Deleted indicates that this block affinity is being deleted.
                  This field is a string for compatibility with older releases that
                  mistakenly treat this field as a string.
                type: string
              node:
                type: string
              state:
                type: string
            required:
            - cidr
            - deleted
            - node
            - state
            type: object
        type: object
    served: true
    storage: true
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  group: crd.projectcalico.org
  names:
    kind: ClusterInformation
    listKind: ClusterInformationList
    plural: clusterinformations
    singular: clusterinformation
  scope: Cluster
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: ClusterInformation contains the cluster specific information.
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: ClusterInformationSpec contains the values of describing
              the cluster.
            properties:
              calicoVersion:
                description: CalicoVersion is the version of Calico that the cluster
                  is running
                type: string
              clusterGUID:
                description: ClusterGUID is the GUID of the cluster
                type: string
              clusterType:
                description: ClusterType describes the type of the cluster
                type: string
              datastoreReady:
                description: DatastoreReady is used during significant datastore migrations
                  to signal to components such as Felix that it should wait before
                  accessing the datastore.
                type: boolean
              variant:
                description: Variant declares which variant of Calico should be active.
                type: string
            type: object
        type: object
    served: true
    storage: true
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  group: crd.projectcalico.org
  names:
    kind: FelixConfiguration
    listKind: FelixConfigurationList
    plural: felixconfigurations
    singular: felixconfiguration
  scope: Cluster
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: Felix Configuration contains the configuration for Felix.
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: FelixConfigurationSpec contains the values of the Felix configuration.
            properties:
              allowIPIPPacketsFromWorkloads:
                description: 'AllowIPIPPacketsFromWorkloads controls whether Felix
                  will add a rule to drop IPIP encapsulated traffic from workloads
                  [Default: false]'
                type: boolean
              allowVXLANPacketsFromWorkloads:
                description: 'AllowVXLANPacketsFromWorkloads controls whether Felix
                  will add a rule to drop VXLAN encapsulated traffic from workloads
                  [Default: false]'
                type: boolean
              awsSrcDstCheck:
                description: 'Set source-destination-check on AWS EC2 instances. Accepted
                  value must be one of "DoNothing", "Enabled" or "Disabled". [Default:
                  DoNothing]'
                enum:
                - DoNothing
                - Enable
                - Disable
                type: string
              bpfConnectTimeLoadBalancingEnabled:
                description: 'BPFConnectTimeLoadBalancingEnabled when in BPF mode,
                  controls whether Felix installs the connection-time load balancer.  The
                  connect-time load balancer is required for the host to be able to
                  reach Kubernetes services and it improves the performance of pod-to-service
                  connections.  The only reason to disable it is for debugging purposes.  [Default:
                  true]'
                type: boolean
              bpfDataIfacePattern:
                description: BPFDataIfacePattern is a regular expression that controls
                  which interfaces Felix should attach BPF programs to in order to
                  catch traffic to/from the network.  This needs to match the interfaces
                  that Calico workload traffic flows over as well as any interfaces
                  that handle incoming traffic to nodeports and services from outside
                  the cluster.  It should not match the workload interfaces (usually
                  named cali...).
                type: string
              bpfDisableUnprivileged:
                description: 'BPFDisableUnprivileged, if enabled, Felix sets the kernel.unprivileged_bpf_disabled
                  sysctl to disable unprivileged use of BPF.  This ensures that unprivileged
                  users cannot access Calico''s BPF maps and cannot insert their own
                  BPF programs to interfere with Calico''s. [Default: true]'
                type: boolean
              bpfEnabled:
                description: 'BPFEnabled, if enabled Felix will use the BPF dataplane.
                  [Default: false]'
                type: boolean
              bpfExtToServiceConnmark:
                description: 'BPFExtToServiceConnmark in BPF mode, control a 32bit
                  mark that is set on connections from an external client to a local
                  service. This mark allows us to control how packets of that connection
                  are routed within the host and how is routing intepreted by RPF
                  check. [Default: 0]'
                type: integer
              bpfExternalServiceMode:
                description: 'BPFExternalServiceMode in BPF mode, controls how connections
                  from outside the cluster to services (node ports and cluster IPs)
                  are forwarded to remote workloads.  If set to "Tunnel" then both
                  request and response traffic is tunneled to the remote node.  If
                  set to "DSR", the request traffic is tunneled but the response traffic
                  is sent directly from the remote node.  In "DSR" mode, the remote
                  node appears to use the IP of the ingress node; this requires a
                  permissive L2 network.  [Default: Tunnel]'
                type: string
              bpfKubeProxyEndpointSlicesEnabled:
                description: BPFKubeProxyEndpointSlicesEnabled in BPF mode, controls
                  whether Felix's embedded kube-proxy accepts EndpointSlices or not.
                type: boolean
              bpfKubeProxyIptablesCleanupEnabled:
                description: 'BPFKubeProxyIptablesCleanupEnabled, if enabled in BPF
                  mode, Felix will proactively clean up the upstream Kubernetes kube-proxy''s
                  iptables chains.  Should only be enabled if kube-proxy is not running.  [Default:
                  true]'
                type: boolean
              bpfKubeProxyMinSyncPeriod:
                description: 'BPFKubeProxyMinSyncPeriod, in BPF mode, controls the
                  minimum time between updates to the dataplane for Felix''s embedded
                  kube-proxy.  Lower values give reduced set-up latency.  Higher values
                  reduce Felix CPU usage by batching up more work.  [Default: 1s]'
                type: string
              bpfLogLevel:
                description: 'BPFLogLevel controls the log level of the BPF programs
                  when in BPF dataplane mode.  One of "Off", "Info", or "Debug".  The
                  logs are emitted to the BPF trace pipe, accessible with the command
                  `tc exec bpf debug`. [Default: Off].'
                type: string
              chainInsertMode:
                description: 'ChainInsertMode controls whether Felix hooks the kernel''s
                  top-level iptables chains by inserting a rule at the top of the
                  chain or by appending a rule at the bottom. insert is the safe default
                  since it prevents Calico''s rules from being bypassed. If you switch
                  to append mode, be sure that the other rules in the chains signal
                  acceptance by falling through to the Calico rules, otherwise the
                  Calico policy will be bypassed. [Default: insert]'
                type: string
              dataplaneDriver:
                type: string
              debugDisableLogDropping:
                type: boolean
              debugMemoryProfilePath:
                type: string
              debugSimulateCalcGraphHangAfter:
                type: string
              debugSimulateDataplaneHangAfter:
                type: string
              defaultEndpointToHostAction:
                description: 'DefaultEndpointToHostAction controls what happens to
                  traffic that goes from a workload endpoint to the host itself (after
                  the traffic hits the endpoint egress policy). By default Calico
                  blocks traffic from workload endpoints to the host itself with an
                  iptables "DROP" action. If you want to allow some or all traffic
                  from endpoint to host, set this parameter to RETURN or ACCEPT. Use
                  RETURN if you have your own rules in the iptables "INPUT" chain;
                  Calico will insert its rules at the top of that chain, then "RETURN"
                  packets to the "INPUT" chain once it has completed processing workload
                  endpoint egress policy. Use ACCEPT to unconditionally accept packets
                  from workloads after processing workload endpoint egress policy.
                  [Default: Drop]'
                type: string
              deviceRouteProtocol:
                description: This defines the route protocol added to programmed device
                  routes, by default this will be RTPROT_BOOT when left blank.
                type: integer
              deviceRouteSourceAddress:
                description: This is the source address to use on programmed device
                  routes. By default the source address is left blank, leaving the
                  kernel to choose the source address used.
                type: string
              disableConntrackInvalidCheck:
                type: boolean
              endpointReportingDelay:
                type: string
              endpointReportingEnabled:
                type: boolean
              externalNodesList:
                description: ExternalNodesCIDRList is a list of CIDR's of external-non-calico-nodes
                  which may source tunnel traffic and have the tunneled traffic be
                  accepted at calico nodes.
                items:
                  type: string
                type: array
              failsafeInboundHostPorts:
                description: 'FailsafeInboundHostPorts is a list of UDP/TCP ports
                  and CIDRs that Felix will allow incoming traffic to host endpoints
                  on irrespective of the security policy. This is useful to avoid
                  accidentally cutting off a host with incorrect configuration. For
                  back-compatibility, if the protocol is not specified, it defaults
                  to "tcp". If a CIDR is not specified, it will allow traffic from
                  all addresses. To disable all inbound host ports, use the value
                  none. The default value allows ssh access and DHCP. [Default: tcp:22,
                  udp:68, tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666, tcp:6667]'
                items:
                  description: ProtoPort is combination of protocol, port, and CIDR.
                    Protocol and port must be specified.
                  properties:
                    net:
                      type: string
                    port:
                      type: integer
                    protocol:
                      type: string
                  required:
                  - port
                  - protocol
                  type: object
                type: array
              failsafeOutboundHostPorts:
                description: 'FailsafeOutboundHostPorts is a list of UDP/TCP ports
                  and CIDRs that Felix will allow outgoing traffic from host endpoints
                  to irrespective of the security policy. This is useful to avoid
                  accidentally cutting off a host with incorrect configuration. For
                  back-compatibility, if the protocol is not specified, it defaults
                  to "tcp". If a CIDR is not specified, it will allow traffic from
                  all addresses. To disable all outbound host ports, use the value
                  none. The default value opens etcd''s standard ports to ensure that
                  Felix does not get cut off from etcd as well as allowing DHCP and
                  DNS. [Default: tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666,
                  tcp:6667, udp:53, udp:67]'
                items:
                  description: ProtoPort is combination of protocol, port, and CIDR.
                    Protocol and port must be specified.
                  properties:
                    net:
                      type: string
                    port:
                      type: integer
                    protocol:
                      type: string
                  required:
                  - port
                  - protocol
                  type: object
                type: array
              featureDetectOverride:
                description: FeatureDetectOverride is used to override the feature
                  detection. Values are specified in a comma separated list with no
                  spaces, example; "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=".
                  "true" or "false" will force the feature, empty or omitted values
                  are auto-detected.
                type: string
              genericXDPEnabled:
                description: 'GenericXDPEnabled enables Generic XDP so network cards
                  that don''t support XDP offload or driver modes can use XDP. This
                  is not recommended since it doesn''t provide better performance
                  than iptables. [Default: false]'
                type: boolean
              healthEnabled:
                type: boolean
              healthHost:
                type: string
              healthPort:
                type: integer
              interfaceExclude:
                description: 'InterfaceExclude is a comma-separated list of interfaces
                  that Felix should exclude when monitoring for host endpoints. The
                  default value ensures that Felix ignores Kubernetes'' IPVS dummy
                  interface, which is used internally by kube-proxy. If you want to
                  exclude multiple interface names using a single value, the list
                  supports regular expressions. For regular expressions you must wrap
                  the value with ''/''. For example having values ''/^kube/,veth1''
                  will exclude all interfaces that begin with ''kube'' and also the
                  interface ''veth1''. [Default: kube-ipvs0]'
                type: string
              interfacePrefix:
                description: 'InterfacePrefix is the interface name prefix that identifies
                  workload endpoints and so distinguishes them from host endpoint
                  interfaces. Note: in environments other than bare metal, the orchestrators
                  configure this appropriately. For example our Kubernetes and Docker
                  integrations set the ''cali'' value, and our OpenStack integration
                  sets the ''tap'' value. [Default: cali]'
                type: string
              interfaceRefreshInterval:
                description: InterfaceRefreshInterval is the period at which Felix
                  rescans local interfaces to verify their state. The rescan can be
                  disabled by setting the interval to 0.
                type: string
              ipipEnabled:
                type: boolean
              ipipMTU:
                description: 'IPIPMTU is the MTU to set on the tunnel device. See
                  Configuring MTU [Default: 1440]'
                type: integer
              ipsetsRefreshInterval:
                description: 'IpsetsRefreshInterval is the period at which Felix re-checks
                  all iptables state to ensure that no other process has accidentally
                  broken Calico''s rules. Set to 0 to disable iptables refresh. [Default:
                  90s]'
                type: string
              iptablesBackend:
                description: IptablesBackend specifies which backend of iptables will
                  be used. The default is legacy.
                type: string
              iptablesFilterAllowAction:
                type: string
              iptablesLockFilePath:
                description: 'IptablesLockFilePath is the location of the iptables
                  lock file. You may need to change this if the lock file is not in
                  its standard location (for example if you have mapped it into Felix''s
                  container at a different path). [Default: /run/xtables.lock]'
                type: string
              iptablesLockProbeInterval:
                description: 'IptablesLockProbeInterval is the time that Felix will
                  wait between attempts to acquire the iptables lock if it is not
                  available. Lower values make Felix more responsive when the lock
                  is contended, but use more CPU. [Default: 50ms]'
                type: string
              iptablesLockTimeout:
                description: 'IptablesLockTimeout is the time that Felix will wait
                  for the iptables lock, or 0, to disable. To use this feature, Felix
                  must share the iptables lock file with all other processes that
                  also take the lock. When running Felix inside a container, this
                  requires the /run directory of the host to be mounted into the calico/node
                  or calico/felix container. [Default: 0s disabled]'
                type: string
              iptablesMangleAllowAction:
                type: string
              iptablesMarkMask:
                description: 'IptablesMarkMask is the mask that Felix selects its
                  IPTables Mark bits from. Should be a 32 bit hexadecimal number with
                  at least 8 bits set, none of which clash with any other mark bits
                  in use on the system. [Default: 0xff000000]'
                format: int32
                type: integer
              iptablesNATOutgoingInterfaceFilter:
                type: string
              iptablesPostWriteCheckInterval:
                description: 'IptablesPostWriteCheckInterval is the period after Felix
                  has done a write to the dataplane that it schedules an extra read
                  back in order to check the write was not clobbered by another process.
                  This should only occur if another application on the system doesn''t
                  respect the iptables lock. [Default: 1s]'
                type: string
              iptablesRefreshInterval:
                description: 'IptablesRefreshInterval is the period at which Felix
                  re-checks the IP sets in the dataplane to ensure that no other process
                  has accidentally broken Calico''s rules. Set to 0 to disable IP
                  sets refresh. Note: the default for this value is lower than the
                  other refresh intervals as a workaround for a Linux kernel bug that
                  was fixed in kernel version 4.11. If you are using v4.11 or greater
                  you may want to set this to, a higher value to reduce Felix CPU
                  usage. [Default: 10s]'
                type: string
              ipv6Support:
                type: boolean
              kubeNodePortRanges:
                description: 'KubeNodePortRanges holds list of port ranges used for
                  service node ports. Only used if felix detects kube-proxy running
                  in ipvs mode. Felix uses these ranges to separate host and workload
                  traffic. [Default: 30000:32767].'
                items:
                  anyOf:
                  - type: integer
                  - type: string
                  pattern: ^.*
                  x-kubernetes-int-or-string: true
                type: array
              logFilePath:
                description: 'LogFilePath is the full path to the Felix log. Set to
                  none to disable file logging. [Default: /var/log/calico/felix.log]'
                type: string
              logPrefix:
                description: 'LogPrefix is the log prefix that Felix uses when rendering
                  LOG rules. [Default: calico-packet]'
                type: string
              logSeverityFile:
                description: 'LogSeverityFile is the log severity above which logs
                  are sent to the log file. [Default: Info]'
                type: string
              logSeverityScreen:
                description: 'LogSeverityScreen is the log severity above which logs
                  are sent to the stdout. [Default: Info]'
                type: string
              logSeveritySys:
                description: 'LogSeveritySys is the log severity above which logs
                  are sent to the syslog. Set to None for no logging to syslog. [Default:
                  Info]'
                type: string
              maxIpsetSize:
                type: integer
              metadataAddr:
                description: 'MetadataAddr is the IP address or domain name of the
                  server that can answer VM queries for cloud-init metadata. In OpenStack,
                  this corresponds to the machine running nova-api (or in Ubuntu,
                  nova-api-metadata). A value of none (case insensitive) means that
                  Felix should not set up any NAT rule for the metadata path. [Default:
                  127.0.0.1]'
                type: string
              metadataPort:
                description: 'MetadataPort is the port of the metadata server. This,
                  combined with global.MetadataAddr (if not ''None''), is used to
                  set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort.
                  In most cases this should not need to be changed [Default: 8775].'
                type: integer
              mtuIfacePattern:
                description: MTUIfacePattern is a regular expression that controls
                  which interfaces Felix should scan in order to calculate the host's
                  MTU. This should not match workload interfaces (usually named cali...).
                type: string
              natOutgoingAddress:
                description: NATOutgoingAddress specifies an address to use when performing
                  source NAT for traffic in a natOutgoing pool that is leaving the
                  network. By default the address used is an address on the interface
                  the traffic is leaving on (ie it uses the iptables MASQUERADE target)
                type: string
              natPortRange:
                anyOf:
                - type: integer
                - type: string
                description: NATPortRange specifies the range of ports that is used
                  for port mapping when doing outgoing NAT. When unset the default
                  behavior of the network stack is used.
                pattern: ^.*
                x-kubernetes-int-or-string: true
              netlinkTimeout:
                type: string
              openstackRegion:
                description: 'OpenstackRegion is the name of the region that a particular
                  Felix belongs to. In a multi-region Calico/OpenStack deployment,
                  this must be configured somehow for each Felix (here in the datamodel,
                  or in felix.cfg or the environment on each compute node), and must
                  match the [calico] openstack_region value configured in neutron.conf
                  on each node. [Default: Empty]'
                type: string
              policySyncPathPrefix:
                description: 'PolicySyncPathPrefix is used to by Felix to communicate
                  policy changes to external services, like Application layer policy.
                  [Default: Empty]'
                type: string
              prometheusGoMetricsEnabled:
                description: 'PrometheusGoMetricsEnabled disables Go runtime metrics
                  collection, which the Prometheus client does by default, when set
                  to false. This reduces the number of metrics reported, reducing
                  Prometheus load. [Default: true]'
                type: boolean
              prometheusMetricsEnabled:
                description: 'PrometheusMetricsEnabled enables the Prometheus metrics
                  server in Felix if set to true. [Default: false]'
                type: boolean
              prometheusMetricsHost:
                description: 'PrometheusMetricsHost is the host that the Prometheus
                  metrics server should bind to. [Default: empty]'
                type: string
              prometheusMetricsPort:
                description: 'PrometheusMetricsPort is the TCP port that the Prometheus
                  metrics server should bind to. [Default: 9091]'
                type: integer
              prometheusProcessMetricsEnabled:
                description: 'PrometheusProcessMetricsEnabled disables process metrics
                  collection, which the Prometheus client does by default, when set
                  to false. This reduces the number of metrics reported, reducing
                  Prometheus load. [Default: true]'
                type: boolean
              removeExternalRoutes:
                description: Whether or not to remove device routes that have not
                  been programmed by Felix. Disabling this will allow external applications
                  to also add device routes. This is enabled by default which means
                  we will remove externally added routes.
                type: boolean
              reportingInterval:
                description: 'ReportingInterval is the interval at which Felix reports
                  its status into the datastore or 0 to disable. Must be non-zero
                  in OpenStack deployments. [Default: 30s]'
                type: string
              reportingTTL:
                description: 'ReportingTTL is the time-to-live setting for process-wide
                  status reports. [Default: 90s]'
                type: string
              routeRefreshInterval:
                description: 'RouteRefreshInterval is the period at which Felix re-checks
                  the routes in the dataplane to ensure that no other process has
                  accidentally broken Calico''s rules. Set to 0 to disable route refresh.
                  [Default: 90s]'
                type: string
              routeSource:
                description: 'RouteSource configures where Felix gets its routing
                  information. - WorkloadIPs: use workload endpoints to construct
                  routes. - CalicoIPAM: the default - use IPAM data to construct routes.'
                type: string
              routeTableRange:
                description: Calico programs additional Linux route tables for various
                  purposes.  RouteTableRange specifies the indices of the route tables
                  that Calico should use.
                properties:
                  max:
                    type: integer
                  min:
                    type: integer
                required:
                - max
                - min
                type: object
              serviceLoopPrevention:
                description: 'When service IP advertisement is enabled, prevent routing
                  loops to service IPs that are not in use, by dropping or rejecting
                  packets that do not get DNAT''d by kube-proxy. Unless set to "Disabled",
                  in which case such routing loops continue to be allowed. [Default:
                  Drop]'
                type: string
              sidecarAccelerationEnabled:
                description: 'SidecarAccelerationEnabled enables experimental sidecar
                  acceleration [Default: false]'
                type: boolean
              usageReportingEnabled:
                description: 'UsageReportingEnabled reports anonymous Calico version
                  number and cluster size to projectcalico.org. Logs warnings returned
                  by the usage server. For example, if a significant security vulnerability
                  has been discovered in the version of Calico being used. [Default:
                  true]'
                type: boolean
              usageReportingInitialDelay:
                description: 'UsageReportingInitialDelay controls the minimum delay
                  before Felix makes a report. [Default: 300s]'
                type: string
              usageReportingInterval:
                description: 'UsageReportingInterval controls the interval at which
                  Felix makes reports. [Default: 86400s]'
                type: string
              useInternalDataplaneDriver:
                type: boolean
              vxlanEnabled:
                type: boolean
              vxlanMTU:
                description: 'VXLANMTU is the MTU to set on the tunnel device. See
                  Configuring MTU [Default: 1440]'
                type: integer
              vxlanPort:
                type: integer
              vxlanVNI:
                type: integer
              wireguardEnabled:
                description: 'WireguardEnabled controls whether Wireguard is enabled.
                  [Default: false]'
                type: boolean
              wireguardInterfaceName:
                description: 'WireguardInterfaceName specifies the name to use for
                  the Wireguard interface. [Default: wg.calico]'
                type: string
              wireguardListeningPort:
                description: 'WireguardListeningPort controls the listening port used
                  by Wireguard. [Default: 51820]'
                type: integer
              wireguardMTU:
                description: 'WireguardMTU controls the MTU on the Wireguard interface.
                  See Configuring MTU [Default: 1420]'
                type: integer
              wireguardRoutingRulePriority:
                description: 'WireguardRoutingRulePriority controls the priority value
                  to use for the Wireguard routing rule. [Default: 99]'
                type: integer
              xdpEnabled:
                description: 'XDPEnabled enables XDP acceleration for suitable untracked
                  incoming deny rules. [Default: true]'
                type: boolean
              xdpRefreshInterval:
                description: 'XDPRefreshInterval is the period at which Felix re-checks
                  all XDP state to ensure that no other process has accidentally broken
                  Calico''s BPF maps or attached programs. Set to 0 to disable XDP
                  refresh. [Default: 90s]'
                type: string
            type: object
        type: object
    served: true
    storage: true
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  group: crd.projectcalico.org
  names:
    kind: GlobalNetworkPolicy
    listKind: GlobalNetworkPolicyList
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
  scope: Cluster
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            properties:
              applyOnForward:
                description: ApplyOnForward indicates to apply the rules in this policy
                  on forward traffic.
                type: boolean
              doNotTrack:
                description: DoNotTrack indicates whether packets matched by the rules
                  in this policy should go through the data plane's connection tracking,
                  such as Linux conntrack.  If True, the rules in this policy are
                  applied before any data plane connection tracking, and packets allowed
                  by this policy are marked as not to be tracked.
                type: boolean
              egress:
                description: The ordered set of egress rules.  Each rule contains
                  a set of packet match criteria and a corresponding action to apply.
                items:
                  description: "A Rule encapsulates a set of match criteria and an
                    action.  Both selector-based security Policy and security Profiles
                    reference rules - separated out as a list of rules for both ingress
                    and egress packet matching. \n Each positive match criteria has
                    a negated version, prefixed with \"Not\". All the match criteria
                    within a rule must be satisfied for a packet to match. A single
                    rule can contain the positive and negative version of a match
                    and both must be satisfied for the rule to match."
                  properties:
                    action:
                      type: string
                    destination:
                      description: Destination contains the match criteria that apply
                        to destination entity.
                      properties:
                        namespaceSelector:
                          description: "NamespaceSelector is an optional field that
                            contains a selector expression. Only traffic that originates
                            from (or terminates at) endpoints within the selected
                            namespaces will be matched. When both NamespaceSelector
                            and Selector are defined on the same rule, then only workload
                            endpoints that are matched by both selectors will be selected
                            by the rule. \n For NetworkPolicy, an empty NamespaceSelector
                            implies that the Selector is limited to selecting only
                            workload endpoints in the same namespace as the NetworkPolicy.
                            \n For NetworkPolicy, `global()` NamespaceSelector implies
                            that the Selector is limited to selecting only GlobalNetworkSet
                            or HostEndpoint. \n For GlobalNetworkPolicy, an empty
                            NamespaceSelector implies the Selector applies to workload
                            endpoints across all namespaces."
                          type: string
                        nets:
                          description: Nets is an optional field that restricts the
                            rule to only apply to traffic that originates from (or
                            terminates at) IP addresses in any of the given subnets.
                          items:
                            type: string
                          type: array
                        notNets:
                          description: NotNets is the negated version of the Nets
                            field.
                          items:
                            type: string
                          type: array
                        notPorts:
                          description: NotPorts is the negated version of the Ports
                            field. Since only some protocols have ports, if any ports
                            are specified it requires the Protocol match in the Rule
                            to be set to "TCP" or "UDP".
                          items:
                            anyOf:
                            - type: integer
                            - type: string
                            pattern: ^.*
                            x-kubernetes-int-or-string: true
                          type: array
                        notSelector:
                          description: NotSelector is the negated version of the Selector
                            field.  See Selector field for subtleties with negated
                            selectors.
                          type: string
                        ports:
                          description: "Ports is an optional field that restricts
                            the rule to only apply to traffic that has a source (destination)
                            port that matches one of these ranges/values. This value
                            is a list of integers or strings that represent ranges
                            of ports. \n Since only some protocols have ports, if
                            any ports are specified it requires the Protocol match
                            in the Rule to be set to \"TCP\" or \"UDP\"."
                          items:
                            anyOf:
                            - type: integer
                            - type: string
                            pattern: ^.*
                            x-kubernetes-int-or-string: true
                          type: array
                        selector:
                          description: "Selector is an optional field that contains
                            a selector expression (see Policy for sample syntax).
                            \ Only traffic that originates from (terminates at) endpoints
                            matching the selector will be matched. \n Note that: in
                            addition to the negated version of the Selector (see NotSelector
                            below), the selector expression syntax itself supports
                            negation.  The two types of negation are subtly different.
                            One negates the set of matched endpoints, the other negates
                            the whole match: \n \tSelector = \"!has(my_label)\" matches
                            packets that are from other Calico-controlled \tendpoints
                            that do not have the label \"my_label\". \n \tNotSelector
                            = \"has(my_label)\" matches packets that are not from
                            Calico-controlled \tendpoints that do have the label \"my_label\".
                            \n The effect is that the latter will accept packets from
                            non-Calico sources whereas the former is limited to packets
                            from Calico-controlled endpoints."
                          type: string
                        serviceAccounts:
                          description: ServiceAccounts is an optional field that restricts
                            the rule to only apply to traffic that originates from
                            (or terminates at) a pod running as a matching service
                            account.
                          properties:
                            names:
                              description: Names is an optional field that restricts
                                the rule to only apply to traffic that originates
                                from (or terminates at) a pod running as a service
                                account whose name is in the list.
                              items:
                                type: string
                              type: array
                            selector:
                              description: Selector is an optional field that restricts
                                the rule to only apply to traffic that originates
                                from (or terminates at) a pod running as a service
                                account that matches the given label selector. If
                                both Names and Selector are specified then they are
                                AND'ed.
                              type: string
                          type: object
                      type: object
                    http:
                      description: HTTP contains match criteria that apply to HTTP
                        requests.
                      properties:
                        methods:
                          description: Methods is an optional field that restricts
                            the rule to apply only to HTTP requests that use one of
                            the listed HTTP Methods (e.g. GET, PUT, etc.) Multiple
                            methods are OR'd together.
                          items:
                            type: string
                          type: array
                        paths:
                          description: 'Paths is an optional field that restricts
                            the rule to apply to HTTP requests that use one of the
                            listed HTTP Paths. Multiple paths are OR''d together.
                            e.g: - exact: /foo - prefix: /bar NOTE: Each entry may
                            ONLY specify either a `exact` or a `prefix` match. The
                            validator will check for it.'
                          items:
                            description: 'HTTPPath specifies an HTTP path to match.
                              It may be either of the form: exact: <path>: which matches
                              the path exactly or prefix: <path-prefix>: which matches
                              the path prefix'
                            properties:
                              exact:
                                type: string
                              prefix:
                                type: string
                            type: object
                          type: array
                      type: object
                    icmp:
                      description: ICMP is an optional field that restricts the rule
                        to apply to a specific type and code of ICMP traffic.  This
                        should only be specified if the Protocol field is set to "ICMP"
                        or "ICMPv6".
                      properties:
                        code:
                          description: Match on a specific ICMP code.  If specified,
                            the Type value must also be specified. This is a technical
                            limitation imposed by the kernel's iptables firewall,
                            which Calico uses to enforce the rule.
                          type: integer
                        type:
                          description: Match on a specific ICMP type.  For example
                            a value of 8
```

Apply the `Calico` manifest with the command below:

```bash
kubectl apply -f calico.yaml
```

Install Helm

Helm 3 is used here; note that the official script installs the latest release (a version-pinning sketch follows the commands below).

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
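
If you want to pin a specific Helm release instead of the latest, the official script honors a DESIRED_VERSION environment variable, e.g. (the version below is only an example):

```bash
# Pin the Helm release installed by get_helm.sh (example version, adjust as needed)
DESIRED_VERSION=v3.9.4 ./get_helm.sh

# Confirm the installed client version
helm version --short
```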

Install a StorageClass

By default Kubernetes local disks could be used as the default storage; here the open-source project Hwameistor is used for local disk management.

Project introduction: https://hwameistor.io  Installation guide: https://hwameistor.io/cn/docs/quick_start/install/deploy

Add the Hwameistor Helm repository

Hwameistor is installed via Helm, so make sure Helm itself is installed first.

helm repo add hwameistor http://hwameistor.io/hwameistor

helm repo update hwameistor

helm pull hwameistor/hwameistor --untar

Install the Hwameistor application

When installing from a mirror registry, override these two values with --set: global.k8sImageRegistry and global.hwameistorImageRegistry.

The default registries are quay.io and ghcr.io. If they are unreachable, try the DaoCloud mirrors quay.m.daocloud.io and ghcr.m.daocloud.io.

$ helm install hwameistor ./hwameistor \
    -n hwameistor --create-namespace \
    --set global.k8sImageRegistry=k8s-gcr.m.daocloud.io \
    --set global.hwameistorImageRegistry=ghcr.m.daocloud.io

检查 Hwameistor 的全部 Pod 状态

$ kubectl -n hwameistor get pod
NAME                                                       READY   STATUS
hwameistor-local-disk-csi-controller-665bb7f47d-6227f      2/2     Running
hwameistor-local-disk-manager-5ph2d                        2/2     Running
hwameistor-local-disk-manager-jhj59                        2/2     Running
hwameistor-local-disk-manager-k9cvj                        2/2     Running
hwameistor-local-disk-manager-kxwww                        2/2     Running
hwameistor-local-storage-csi-controller-667d949fbb-k488w   3/3     Running
hwameistor-local-storage-csqqv                             2/2     Running
hwameistor-local-storage-gcrzm                             2/2     Running
hwameistor-local-storage-v8g7t                             2/2     Running
hwameistor-local-storage-zkwmn                             2/2     Running
hwameistor-scheduler-58dfcf79f5-lswkt                      1/1     Running
hwameistor-webhook-986479678-278cr                         1/1     Running

local-disk-manager 和 local-storage 是 DaemonSet。在每个 Kubernetes 节点上都应该有一个 DaemonSet Pod。

等到全部状态正常后,可进行后续操作,检查 StorageClass

$ kubectl get storageclass hwameistor-storage-lvm-hdd
NAME                                   PROVISIONER         RECLAIMPOLICY
hwameistor-storage-lvm-hdd (default)   lvm.hwameistor.io   Delete

Check LocalDiskNode and LocalDisk

Disks that are in use should show a PHASE of Bound:

$ kubectl get localdisknodes
NAME               NODEMATCH          PHASE
master01-k8s-com   master01-k8s-com   Bound
node01-k8s-com     node01-k8s-com     Bound
node02-k8s-com     node02-k8s-com     Bound

$ kubectl get localdisks
NAME                   NODEMATCH          CLAIM              PHASE
master01-k8s-com-vda   master01-k8s-com                      Bound
master01-k8s-com-vdb   master01-k8s-com   master01-k8s-com   Bound
node01-k8s-com-vda     node01-k8s-com                        Bound
node01-k8s-com-vdb     node01-k8s-com     node01-k8s-com     Bound
node02-k8s-com-vda     node02-k8s-com                        Bound
node02-k8s-com-vdb     node02-k8s-com     node02-k8s-com     Bound

Set the default StorageClass

Add the annotation that marks this StorageClass as the default:

kubectl patch storageclasses.storage.k8s.io hwameistor-storage-lvm-hdd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

创建存储池

使用下方的命令创建对应的存储池,注意替换对应的 nodeName

$ helm template ./hwameistor \
   -s templates/post-install-claim-disks.yaml \
   --set storageNodes='{master01-k8s-com,node01-k8s-com,node02-k8s-com}' \
   | kubectl apply -f -

创建成功后,查看本地磁盘的 ldc,应该全部是 Bound

$ kubectl get ldc

Install metrics-server

DCE 5.0 requires metrics-server. Save the content below as metrics-server.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

执行安装命令:

kubectl apply -f  metrics-server.yaml
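
Before relying on kubectl top, a quick sanity check that the aggregated metrics API is up can save some head-scratching (names taken from the manifest above):

```bash
# The aggregated API defined in the manifest should report Available=True
kubectl get apiservice v1beta1.metrics.k8s.io

# And the Deployment should finish rolling out in kube-system
kubectl -n kube-system rollout status deploy/metrics-server
```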

After the installation succeeds, the command below shows node resource usage:

$ kubectl top node
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01-k8s-com   218m         5%     5610Mi          35%
node01-k8s-com     445m         11%    9279Mi          59%
node02-k8s-com     464m         11%    9484Mi          60%

安装 DCE 社区版

以下操作步骤将会带着您一步一步完成 DaoCloud Enterprise 5.0 社区版的完整安装,注意安装细节

安装基础依赖

DCE provides a one-click offline dependency package that has been tested to run stably on CentOS 7 and CentOS 8; if you are on either of these systems, you can use the script below.

curl -LO https://proxy-qiniu-download-public.daocloud.io/DaoCloud_Enterprise/dce5/install_prerequisite.sh
chmod +x install_prerequisite.sh
sudo bash install_prerequisite.sh online community

Alternatively, install these dependencies manually (a quick version-check sketch follows the list):

  • podman
  • helm ≥ 3.9.4
  • skopeo ≥ 1.9.2
  • kind
  • kubectl ≥ 1.22.0
  • yq ≥ 4.27.5
  • minio client
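
A rough way to verify the manual installation, assuming the usual binary names (mc is the MinIO client):

```bash
# Rough self-check for the prerequisites listed above ("mc" is the MinIO client binary)
for cmd in podman helm skopeo kind kubectl yq mc; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
helm version --short        # expect >= 3.9.4
skopeo --version            # expect >= 1.9.2
kubectl version --client    # expect >= 1.22.0
yq --version                # expect >= 4.27.5
```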

下载 dce5-installer 安装器

Note: dce5-installer must be run on a master node, so it is best to download it directly onto that node.

获取最新版本的查看界面: https://docs.daocloud.io/download/dce5/#_1

# 假定 VERSION 为 v0.3.28 , 使用上方链接获取最新版本
$ export VERSION=v0.3.28
$ curl -Lo ./dce5-installer  https://proxy-qiniu-download-public.daocloud.io/DaoCloud_Enterprise/dce5/dce5-installer-$VERSION

$ chmod +x dce5-installer

If the proxy-qiniu-download-public.daocloud.io link stops working, use qiniu-download-public.daocloud.io instead.
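
A small wrapper that falls back to the alternate domain automatically could look like this (same VERSION variable as above):

```bash
# Try the proxy domain first, then fall back to the direct domain
export VERSION=v0.3.28   # use the latest version from the page above
curl -fLo ./dce5-installer "https://proxy-qiniu-download-public.daocloud.io/DaoCloud_Enterprise/dce5/dce5-installer-${VERSION}" \
  || curl -fLo ./dce5-installer "https://qiniu-download-public.daocloud.io/DaoCloud_Enterprise/dce5/dce5-installer-${VERSION}"
chmod +x dce5-installer
```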

Installation config file [optional]

Save the content below as clusterconfig.yaml. If you install with NodePort access, no config file needs to be specified.

apiVersion: provision.daocloud.io/v1alpha1
kind: ClusterConfig
spec:
 loadBalancer: metallb
 istioGatewayVip: 10.6.229.10/32  # 这是 Istio gateway 的 VIP,也会是 DCE 5.0 的控制台的浏览器访问 IP
 insightVip: 10.6.229.11/32  # 这是 Global 集群的 Insight-Server 采集子集群的监控指标的网络路径用的 VIP

If you use a config file, note that MetalLB must be installed beforehand; that part is left to you.
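
For reference, a minimal MetalLB Layer 2 setup covering the two VIPs above might look roughly like this (the manifest URL and version are examples; double-check against the MetalLB docs):

```bash
# Example MetalLB install (version and manifest URL are examples; adjust to your environment)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
kubectl -n metallb-system wait --for=condition=Ready pod --all --timeout=300s

# Advertise an address pool that covers the VIPs used in clusterconfig.yaml above
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: dce-vip-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.6.229.10/32
  - 10.6.229.11/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: dce-vip-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - dce-vip-pool
EOF
```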

Run the installation

Choose one of the commands below as needed; if you created a config file, pass it with -c.

# without a config file
$ ./dce5-installer install-app

# with a config file that specifies the loadBalancer
$ ./dce5-installer install-app -c clusterconfig.yaml

安装完成

如果看到下方的页面,说明安装成功了;默认账号密码为: admin/changeme

resize,w_960,m_lfit

重定向登录页 Portal 反向代理

Normally the console address after installation is an internal one. When the console is exposed to the public internet, the login page keeps redirecting back to the internal address, because the DCE 5.0 reverse proxy address defaults to whatever was configured at install time. Update it as follows.

Set a few environment variables for later use

# 您的反向代理地址,例如:`export DCE_PROXY="https://demo-alpha.daocloud.io"`
export DCE_PROXY="https://domain:port"

# helm --set 参数备份文件
export GHIPPO_VALUES_BAK="ghippo-values-bak.yaml"

# 获取当前 ghippo 的版本号
export GHIPPO_HELM_VERSION=$(helm get notes ghippo -n ghippo-system | grep "Chart Version" | awk -F ': ' '{ print $2 }')

Update the Helm repo

helm repo update ghippo

Back up the current --set values

helm get values ghippo -n ghippo-system -o yaml > ${GHIPPO_VALUES_BAK}

Edit and save the file with vim (an equivalent non-interactive yq one-liner follows the snippet below)

$ vim ${GHIPPO_VALUES_BAK}

USER-SUPPLIED VALUES:
...
global:
  ...
  reverseProxy: ${DCE_PROXY} # only this line needs to be changed
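
Since yq is already one of the installation prerequisites, the same change can also be made non-interactively (a sketch assuming yq v4 and the plain-YAML backup produced by -o yaml above):

```bash
# Same edit without opening an editor (yq v4 syntax; DCE_PROXY was exported earlier)
yq -i '.global.reverseProxy = strenv(DCE_PROXY)' "${GHIPPO_VALUES_BAK}"
```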

Update the configuration with helm upgrade

helm upgrade ghippo ghippo/ghippo \
  -n ghippo-system \
  -f ${GHIPPO_VALUES_BAK} \
  --version ${GHIPPO_HELM_VERSION}

Restart the global management Pods with kubectl so the change takes effect

kubectl rollout restart deploy/ghippo-apiserver -n ghippo-system
kubectl rollout restart statefulset/ghippo-keycloak -n ghippo-system
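
To confirm the restarted workloads are back before re-testing the login redirect:

```bash
# Wait for the restarted workloads to become ready again
kubectl rollout status deploy/ghippo-apiserver -n ghippo-system
kubectl rollout status statefulset/ghippo-keycloak -n ghippo-system
```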

申请社区版免费体验


2022-12-06

开源项目 - 春松客服

项目起源

很自豪给大家推荐,春松客服是我参与主导团队的第一个纯社区开源项目,目前在 Github 中文项目中开源排名第一。

目前我们正在规划 V8 产品迭代,这将会是一个全新的升级,请关注我们的项目进展 cskefu

同时也欢迎大家加入我们的团队。

中国本土团队自主研发

  • 春松客服是北京华夏春松于 2018 年发布的开源项目,目的是帮助企业上线智能客服系统。
  • 2022 年中,春松客服开源社区成立,目前春松客服由春松客服技术委员会领导,核心开发者来自开源社区,实践与客户、合作伙伴共建社区。

最有价值开源项目

至 2022 年 10 月,春松客服在企业中部署超过 1.8 万次,上线客户超过 500 家,是 GitHub 上最受欢迎的中文开源客服系统。因开源、云原生架构和功能丰富受到广泛好评,在 Gitee 上赢得最有价值项目奖项。

我们的愿景

在开源客服系统,我希望将春松客服平台成为一个开源的生态,希望吸引到成千上万的开发者和企业来使用和参与春松客服的开发。

  • 春松客服不是传统的客服操作系统,区别于传统客服产品或客服厂商,因为我们在尝试使用开源方式打造一个可持续发展的客服生态
  • 我们承诺开放客服系统基础的核心能力和底层框架,上层应用增强功能则有开发者来决定
  • 开源不代表免费,我们将会对提供的增值能力进行收费,以保证主导团队的可持续发展,包括但不限于(技术支持、SaaS 平台)
  • 春松客服是一套开发者友好的客服操作系统,我们致力于让春松客服即使脱离了主导团队,仍旧是具备可持续发展能力的开源产品

春松客服是一个依托于开源精神的客服操作系统,我们承诺永远开源,并与开发者打造一个完美的客服系统生态。

我们想做客服系统的 Kubernetes。

最终用户

一线客服团队人员

一线客服人员作为日常之中,长期使用客服系统的人员;客服系统的能力和效率将直接影响到他们的使用;

客服领域开发者

开发者们,可以利用春松客服的开放能力,为春松客服编写大量的增强插件。

产品设计理念

春松客服的核心目标,不是打造一个我们认为的客服系统,而是我们希望打造一套轻量化的智能客服系统框架和具有海量插件的平台。

  • 对于一线客服团队,可以根据自己的需要,选择合适的功能。
  • 对于客服领域的开发者们,提供一个平台,让开发者们可以轻松实现春松客服 + 的能力

合理的功能边界定义

传统的客服系统提供了大量复杂的功能和配置,虽然总能找到对用户来说想要的功能,但是大量冗杂、甚至逻辑冲突的功能,这虽然没有过错;但的确极大的提高了一线客服人员的使用成本。

对于真正的使用者来说,我们希望客服系统的功能,可以刚好满足我所需要的全部功能,同时可以方便的进行功能的增减和升级。

所以,春松客服的功能边界在:提供 高性能、稳定、通用的、生产级 智能客服系统底座和平台。

明确的功能范围定义

  • 春松客服确保底层基础能力的支持,并且保证良好的可扩展性
  • 春松客服避免直接内置开发复杂的应用功能(这往往会有兼容性问题),更多以标准开放能力为目标,交由开发者来实现 增强插件

功能介绍

春松客服 V7

春松客服到现在,已经发展了 2 年的时间,目前版本迭代到 V7,下面展示春松客服的功能和介绍网址入口。

功能 介绍网址
应用部署 https://docs.cskefu.com/docs/deploy
系统初始化 https://docs.cskefu.com/docs/initialization
开源社区 https://docs.cskefu.com/docs/osc/
开源许可协议 https://docs.cskefu.com/docs/osc/license
春松客服大讲堂 https://docs.cskefu.com/docs/osc/training
开发环境搭建 https://docs.cskefu.com/docs/osc/engineering
IDE 使用之 Eclipse IDE https://docs.cskefu.com/docs/osc/ide_eclipse
IDE 使用之 IntelliJ IDEA https://docs.cskefu.com/docs/osc/ide_intelij_idea
系统维护 https://docs.cskefu.com/docs/osc/maintainence
提交代码 https://docs.cskefu.com/docs/osc/contribution
REST APIs https://docs.cskefu.com/docs/osc/restapi
账号和权限 https://docs.cskefu.com/docs/accounting
坐席工作台 https://docs.cskefu.com/docs/work
渠道管理 https://docs.cskefu.com/docs/channels/
Messenger 渠道 https://docs.cskefu.com/docs/channels/messenger/
网页渠道 https://docs.cskefu.com/docs/channels/webim
客户关系管理 https://docs.cskefu.com/docs/crm
坐席监控 https://docs.cskefu.com/docs/monitoring
机器人客服 https://docs.cskefu.com/docs/work-chatbot/
安装配置 https://docs.cskefu.com/docs/work-chatbot/install
集成机器人客服 https://docs.cskefu.com/docs/work-chatbot/bot-agent
消息类型 https://docs.cskefu.com/docs/work-chatbot/message-types
会话历史 https://docs.cskefu.com/docs/reports
访问统计 https://docs.cskefu.com/docs/usage

账号和权限

系统管理员

在春松客服里,系统管理员是具备管理所辖组织内坐席、权限、角色、联系人和坐席监控等资源的管理员,系统管理员分为两种类型:超级管理员和普通管理员,普通管理员也简称“管理员”。

超级管理员为春松客服系统设置的,初始化一个春松客服实例后,默认超级管理员用户名为 admin,密码为 admin1234,并且有且只有一个,IT 人员在初始化搭建的春松客服实例的第一件事就是更改超级管理员账号的密码,以确保系统安全。超级管理员具备更新系统所有属性的能力,读写数据,是春松客服内权限最大的用户。

安装启动系统,进入春松客服后台界面,输入初始化的超级管理员账号密码(用户名: admin, 密码: admin1234),点击立即登录。

超级管理员同时维护着春松客服的组织机构的高层级,组织机构是树形结构,默认情况下没有组织机构信息,春松客服搭建完成后,由超级管理员设定根节点,比如总公司、总公司下属子公司,维护这样的一个层级结构,再创建其他管理员账号,普通管理员账号可以创建多个,不同管理员隶属于不同组织机构,该管理员只有管理其所在组织机构及该组织机构附属组织机构的权限。

系统管理员切换不同的组织机构,可以查看不同组织机构的数据。

权限设计

春松客服权限体系包括:组织机构,角色,账号。

权限的管理

角色可以自定义,设置对一系列资源的读写。角色的创建和删除,修改资源授权,只有超级管理员可以操作,普通【管理员】只具备角色的使用权:添加或删除权限里的系统账号。

系统账号读写资源与角色的关系

将账号添加到角色后,因为账号也同时隶属于不同的组织机构,那么账号所具有的权限就是其所在组织机构以及附属组织机构的角色对应的资源的读写。

根据角色和坐席所在组织机构进行权限检查:

  • 超级管理员可以管理系统所有资源
  • 管理员可以创建部门人员
  • 组织机构支持层级的树状结构
  • 角色包含对不同资源的读写权限
  • 资源如联系人,客户等是根据组织机构进行隔离的
  • 网站渠道必须启用技能组,不同网站渠道接入的访客根据网站渠道设置分配给不同的技能组
  • 系统数据根据坐席当前所在的组织机构进行展示
  • 坐席可以看到自己所在组织机构以及附属组织机构的数据

假设组织机构如下:

  • 李四所能看到的联系人全集是 A 部门及其附属部门所创建的全部联系人
  • 李四看不到 B 部门的联系人,张三看不到 A 部门的联系人
  • 李四能看到刘一创建的联系人,刘一看不到李四创建的联系人

组织机构管理

创建部门

系统 -> 系统概况 -> 用户和组 -> 组织机构 -> 创建部门,并且可以启用或关闭技能组

  • 名词解释:

部门 需要创建的部门名称

上级机构 选择上级部门

启用技能组 选择是否启用技能组;技能组是接待同一个渠道的坐席人员群组,春松客服支持配置自动分配策略,连接访客与坐席,简称 ACD 模块

更新、删除部门

进入部门列表

系统 -> 系统概况 -> 用户和组 -> 组织机构

编辑 (修改) 部门

系统 -> 系统概况 -> 用户和组 -> 组织机构 -> 修改部门

删除部门

系统 -> 系统概况 -> 用户和组 -> 组织机构 -> 删除部门

设置部门地区

系统 -> 系统概况 -> 用户和组 -> 组织结构 -> 选中一个部门 -> 地区设置

角色管理

创建角色

系统 -> 系统概况 -> 用户和组 -> 系统角色 -> 新建角色

只有【系统超级管理员】可以创建角色。

名词解释:

角色 系统中用户的操作权限是通过角色来控制,角色可以理解为具备一定操作权限的用户组;

可以把一个或者更多的用户添加到一个角色下;

可以给一个角色设置一定的系统权限,相当于这个角色下面的用户有了这些系统权限;

角色创建好了以后,在所有组织机构中共享。不同组织机构的管理员,只能管理其所在组织机构和下属组织机构里的账号的角色。

编辑 (修改) 角色

系统 -> 系统概况 -> 用户和组 -> 系统角色 -> 修改角色

只有【系统超级管理员】可以编辑角色。

删除角色

系统 -> 系统概况 -> 用户和组 -> 系统角色 -> 删除角色

只有【系统超级管理员】可以删除角色。

账号管理

创建用户账号:系统 -> 系统概况 -> 用户和组 -> 用户账号 -> 创建新用户

提示:

电子邮件: 需要有效的格式
密码: 字母数字最少 8 位,手动录入
手机号: 全系统唯一
  • 用户分为管理员和普通用户

  • 坐席分为一般坐席和 SIP 坐席,普通用户与管理用户都可以成为坐席,SIP 坐席是在多媒体坐席的基础上

  • 每个账号必须分配到一个部门下,以及关联到一个角色上,才可以查看或管理资源,请详细阅读【组织机构】和【角色】管理

  • 创建普通用户

  • 创建多媒体坐席

  • 创建管理员

查看账号信息

系统 -> 系统概况 -> 用户和组 -> 用户账号

点击操作一栏中的“编辑”“删除”,可以对当前用户列表中的所有用户的信息进行编辑或者删除

添加账号到部门

系统 -> 系统概况 -> 用户和组 -> 组织结构 -> 选中一个部门 -> 添加用户到当前部门

  • 可以把已经存在的 用户账号 添加到一个特定的部门中

  • 一个用户账号只能隶属于一个部门

添加账号到角色

系统 -> 系统概况 -> 用户和组 -> 系统角色 -> 添加用户到角色

渠道管理

春松客服支持多种渠道,访客的来源可能是多渠道的,这也是目前联络中心发展的趋势,以及对智能客服系统的一个挑战,一方面随着信息技术和互联网通信、聊天工具的演变,企业的客户逐渐分散,尤其是营销平台多元化。

多渠道的适应能力,是春松客服的重要特色。在【渠道管理】的子菜单中,查看支持的不同类型的渠道。

渠道支持

渠道名称 简介 获得
网页渠道 通过在网页中注入春松客服网页渠道 HTML,实现聊天控件,访客与客服建立实时通信连接,支持坐席邀请访客对话等功能 开源,免费,随基础代码发布
Facebook Messenger 渠道 简称“Messenger 插件”或“ME”插件,Messenger 是 Facebook 旗下的最主要的即时通信软件,支持多种平台,因其创新的理念、优秀的用户体验和全球最大的社交网络,而广泛应用。春松客服 Messenger 插件帮助企业在 Facebook 平台上实现营销和客户服务 开源,免费,随基础代码发布

网页渠道

网页聊天支持可以适配移动设备浏览器,桌面浏览器。可以在电脑,手机,微信等渠道接入网页聊天控件。

安装

获取网页脚本,系统 -> 系统管理 -> 客服接入 -> 网站列表 -> 点击“Chatopera 官网” -> 基本设置 -> 接入;

将图中的代码复制到一个 Web 项目的页面中,例如下图的。

使用

使用浏览器打开该 Web 页面。

【提示】该网页需要使用 http(s) 打开,不支持使用浏览器打开本地 HTML 页面。

点击该网页中出现的“在线客服”按钮,出现聊天窗口,可以作为访客,与客服聊天了。

【提示】春松客服提供一个测试网页客户端的例子,可以使用 http://IP:PORT/testclient.html 进行访问。

Facebook 渠道

Messenger 是 Facebook 旗下的最主要的即时通信软件,支持多种平台,因其创新的理念、优秀的用户体验和全球最大的社交网络,而广泛应用。通过 Facebook Messenger 的官方链接,可以了解更多。

https://www.messenger.com/

春松客服 Messenger 插件帮助企业在 Facebook 平台上实现营销和客户服务。

  • 集成 Facebook 粉丝页,使用 OTN 推送营销活动信息,吸引新粉丝和激活老客户
  • 集成 Chatopera 机器人客服,自动回答常见问题,提升客户体验
  • 支持机器人客服转人工坐席,解决客户的复杂问题
  • 产品迭代一年,提供多个最佳实践,帮助企业高效率运营
  • 春松客服运营分析和 Chatopera 机器人客服运营分析生成报表,洞察业务指标

了解 Messenger

首先,出海企业要获客,或者通过互联网方式提供服务,那么 Facebook 上的广告和 Messenger 服务,是您无论如何都要使用的,因为你可以从这里找到您的目标客户、潜在客户。但是,如果 Facebook 平台的商业化程度过高,将影响社交网络内用户的体验,比如用户收到和自己不相关、不感兴趣的、大量的广告。为此 Facebook 在广告和 Messenger 上,有很多设计、一些限制,达到了商业化和人们社交需求的平衡,这是 Facebook 能成为今天世界上最大的社交网络的关键原因之一。

其次,您需要了解 Messenger 的一些应用场景,比如 Cskefu 为九九互动提供的智能客服和 OTN 服务的案例 chatopera-me-jiujiu2020。

在正式介绍春松客服 Messenger 插件的使用之前,需要说明 Cskefu 提供该插件是通过 Facebook Messenger 平台的开发者 APIs 实现,因此,该插件的功能安全可靠、稳定强大并且会不断更新。

https://developers.facebook.com/docs/messenger-platform

使用指南

春松客服全新版本 V8

自今年春松客服开源社区和技术委员会成立,我们重新复盘了春松客服的发展过往;我们决心重构整个春松客服,所以 V8 会是一个全新定位的版本。

  • 新版本前端 UI 与产品功能
    • 首页模块
    • 会话工作台
    • 工单中心
    • 用户中心
  • 全新打造学习中心
  • 多维度统计
  • 系统设计
    • 渠道管理
      • 渠道列表
      • 渠道配置模板
    • 账号管理
      • 账号列表
      • 分组管理
      • 角色管理
    • 核心功能构建
    • 性能测试场景设计
    • 支持横向扩容
    • 插件平台技术设计方案
    • 通用渠道接入能力设计
  • 支持 K8s 部署,Helm Chart
    • 部署方式优化
    • Docker-compose 部署
    • 依赖组件升级

后端技术栈

  • JAVA 版本:JAVA 11
  • 构建工具:Maven
  • 后端技术栈
    • Spring security
    • Webflux
    • MyBatis 2.2.2
    • Jackson
    • Redisson 3.17.7
    • Mysql 8.0
    • HikariCP
    • Swagger 3.0.0
    • Spring boot 2.7.4
    • Spring cloud 2021.0.4
    • Spring cloud alibaba 2021.0.4.0
    • RabbitMQ
    • XXL-Job
    • Seata
    • ES6
    • Oauth2.0
    • MinIO

应用部署

春松客服适应各种部署方式,本文使用 Docker 和 Docker compose 的方式,适合体验、开发、测试和上线春松客服,此种方式简单快捷。

更新:我们正在推进基于 Helm Chart 的方式安装,让企业可以更方便地在 Kubernetes 容器平台上使用

重要提示:部署应用后,必须按照《系统初始化》文档进行系统初始化,再使用,不做初始化,会造成坐席无法分配等问题。

环境准备

项目 说明  
操作系统 Linux (CentOS 7.x, Ubuntu 16.04+ 等),推荐使用 Ubuntu LTS  
Docker 版本 Docker version 1.13.x 及以上  
Docker Compose 版本 version 1.23.x 及以上  
防火墙端口 8035, 8036  
其他软件 git  
内存 开发测试 >= 8GB 生产环境 >= 16GB
CPU 颗数 开发测试 >= 2 生产环境 >= 4
硬盘 >= 20GB  

获取源码

git clone -b master https://github.com/cskefu/cskefu.git cskefu
cd cskefu
cp sample.env .env # 使用文本编辑器打开 .env 文件,并按照需求修改配置

以上命令中,master 代表当前稳定版,是 cskefu/cskefu 的 master 分支,分支说明如下。

分支 说明
master 当前稳定版本
develop 当前开发版本

克隆代码时,按照需要指定分支信息;本部署文档针对 master 分支。

配置项说明

以下为部署相关的环境变量,可以在 .env 中覆盖默认值。

KEY 默认值 说明
COMPOSE_FILE docker-compose.yml 服务编排描述文件,保持默认值
COMPOSE_PROJECT_NAME cskefu 服务实例的容器前缀,可以用其它字符串
MYSQL_PORT 8037 MySQL 数据库映射到宿主机器使用的端口
REDIS_PORT 8041 Redis 映射到宿主机器的端口
ES_PORT1 8039 ElasticSearch RestAPI 映射到宿主机器的端口
ES_PORT2 8040 ElasticSearch 服务发现端口映射到宿主机器的端口
CC_WEB_PORT 8035 春松客服 Web 服务地址映射到宿主机器的端口
CC_SOCKET_PORT 8036 春松客服 SocketIO 服务映射到宿主机器的端口
ACTIVEMQ_PORT1 8051 ActiveMQ 端口
ACTIVEMQ_PORT2 8052 ActiveMQ 端口
ACTIVEMQ_PORT2 8053 ActiveMQ 端口
DB_PASSWD 123456 数据库密码,设置到 MySQL, Redis, ActiveMQ
LOG_LEVEL INFO 日志级别,可使用 WARN, ERROR, INFO, DEBUG

以上配置中,端口的各默认值需要保证在宿主机器上还没有被占用;数据库的密码尽量复杂;CC_WEB_PORT 和 CC_SOCKET_PORT 这两个值尽量不要变更;生产环境下 LOG_LEVEL 使用至少 WARN 的级别
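
Before starting the stack, it can be worth checking that the default host ports are actually free (a sketch; adjust the list to match any ports you changed in .env):

```bash
# Check that the ports CSKeFu maps to the host are not already in use
for port in 8035 8036 8037 8039 8040 8041 8051 8052 8053; do
  ss -lnt "( sport = :$port )" | grep -q LISTEN && echo "port $port is already in use"
done
```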

以下为一些业务功能相关配置的环境变量:

KEY 默认值 说明
TONGJI_BAIDU_SITEKEY placeholder 使用百度统计 记录和查看页面访问情况,默认不记录
EXTRAS_LOGIN_BANNER off 登录页上方展示通知的内容,默认 (off) 不展示
EXTRAS_LOGIN_CHATBOX off 登录页支持加入一个春松客服网页渠道聊天按钮,比如 https://oh-my.cskefu.com/im/xxx.html,默认 (off) 不展示

管理命令

启动服务

cd cskefu                            # 进入下载后的文件夹
docker-compose pull                  # 拉取镜像
docker-compose up -d contact-center  # 启动服务

查看服务日志

docker-compose logs -f contact-center

登陆系统

在日志中,查看到如下类似信息,代表服务已经启动。

INFO  c.c.socketio.SocketIOServer - SocketIO server started at port: 8036 [nioEventLoopGroup-2-1]
INFO  com.chatopera.cc.Application - Started Application in 35.319 seconds (JVM running for 42.876) [main]

然后,从浏览器打开 http://YOUR_IP:CC_WEB_PORT/ 访问服务。默认管理员账号:admin 密码:admin1234

演示环境

春松客服提供了,在线体验环境,方便您了解春松客服最新功能。

内容 说明
网站 https://demo.cskefu.com
默认用户名 admin
默认密码 admin1234

注意,请不要修改用户名和密码,演示环境随时有可能重置,请自行保障数据安全。

客户案例

关注春松客服

社区会议

定期社区会议

周日 10:00UTC+8(中文)(单周)。转换为您的时区

往期会议


2022-11-28 开源项目

What is Hukou

背景

对于一个大型企业,内部有几十个业务系统是非常正常的一件事情。

在这个背景下,有一个具体的服务可以很干净的只做一件事情是非常棒的:把人管好。

功能接入

  • 支持接收其他业务系统的注册
    • 支持配置 system_key
    • 支持自定义回调地址
  • 通用的身份认证接入方式
    • 接入方系统只需要简单的集成即可完成用户身份的登录支持
  • 支持一键禁用账号登录
    • 增强版能力,支持广播账号身份信息发生变化,实时触发账号验证
      • 可选项,由接入方系统根据需求决定是否接入
  • 支持与三方业务系统集成
    • 钉钉
    • 飞书
    • etc.

功能逻辑说明

resize,w_960,m_lfit

验证流程图

resize,w_960,m_lfit

相关技术和开源实现

  • 开源项目
    • Keycloak
  • 相关技术
    • Microsoft AD
    • LDAP
    • OIDC
    • OAuth
  • SaaS 产品

2022-11-23 OAuth

Copy to Markdown

背景

梳理笔记工具,最近把语雀都迁移到了 Github,找个时间可以 讨论下中间的过程。


2022-11-20 Markdown

Dubbo 基础知识梳理

参考资料:

简介

  • Apache Dubbo 是一款基于 RPC 的云原生微服务框架,与 SpringCloud 类似
  • 使用 Dubbo 开发的微服务原生具备相互之间远程地址发现 (注册中心) 和通信的能力
  • Dubbo 支持丰富的服务治理特性,包含 服务发现、负载均衡、流量调度 等,高度可扩展
  • Dubbo 由 阿里巴巴 开发,并贡献给 Apache

发展历程

版本迭代

resize,w_960,m_lfit

软件生态的活跃度

resize,w_960,m_lfit

与 SpringCloud 相比的优势

  • 功能丰富,基于原生库即可完成大部分的微服务治理能力
  • 支持超大规模的微服务机器设计,高性能 RPC 通信协议实现
  • 高度可扩展性,支持在调用过程中对流量及协议进行拦截扩展,如 Filter、Router、LB 等
  • 支持微服务治理的扩展组件
    • Registry 注册中心:Zookeeper、Nacos
    • Config Center 配置中心:Zookeeper、Nacos
    • Metadata Center 元数据中心 (Dubbo3 支持)

Dubbo 的基础概念介绍

  • RPC 通信
  • 服务发现
  • 流量治理

RPC 通信

  • Dubbo3 之中,RPC 通信主要使用 Triple 协议,构建在 HTTP/2 协议之上,兼容 gRPC
  • 提供 Request Response、Request Streaming、Response Streaming、Bi-directional Streaming 通信模型
  • 支持 IDL,基于 Triple 协议

服务发现

服务发现,是消费端自动发现服务地址列表的能力,是微服务框架需要具备的关键能力,借助于自动化的服务发现,微服务之间在无需感知对端部署位置与 IP 地址的情况下实现通信。

Dubbo 提供的 Client-Based 服务发现机制,同时也需要第三方注册中心来协调服务发现过程,比如 Nacos/Zookeeper 等。

resize,w_960,m_lfit

流量治理

  • Dubbo2 开始,Dubbo 就提供了丰富的服务治理规则,包括路由规则/动态配置等
  • Dubbo3 之中的实现
    • Dubbo Mesh : 通过对接 xDS 对接到时下流行的 Mesh 产品如 Istio 中所使用的以 VirtualService、DestinationRule 为代表的治理规则
    • 另一方面 Dubbo 正寻求设计一套自有规则以实现在不同部署场景下的流量治理,以及灵活的治理能力。

Dubbo Mesh

Dubbo Mesh 的目标是提供适应 Dubbo 体系的完整 Mesh 解决方案,包含定制化控制面(Control Plane)、定制化数据面解决方案。Dubbo 控制面基于业界主流 Istio 扩展,支持更丰富的流量治理规则、Dubbo 应用级服务发现模型等,Dubbo 数据面可以采用 Envoy Sidecar,即实现 Dubbo SDK + Envoy 的部署方案,也可以采用 Dubbo Proxyless 模式,直接实现 Dubbo 与控制面的通信。

resize,w_960,m_lfit

Dubbo3

Dubbo 3.0 提供的新特性包括:


2022-11-16 Dubbo

Pandas fill NaN value to Zero

import pandas

df = pandas.read_csv('somefile.txt')

df = df.fillna(0)

2022-11-10 Pandas

Redash for Docker 部署

暗坑很多

部署过程

  1. Maintain your own env file as the configuration, for example:
REDASH_COOKIE_SECRET=a07cca441ab9f28b66c589f3118e0de48469b1bc6a5036eade7badbed305d96e
POSTGRES_HOST_AUTH_METHOD=trust
REDASH_REDIS_URL=redis://redis:6379/0
REDASH_DATABASE_URL=postgresql://postgres
  2. Create a postgres-data directory and set its path in docker-compose.yml so the database is persisted
  3. Add sudo to the postgres container
    • apk add sudo
  4. Manually enter the postgresql container and create the role and database
    • createuser -U postgres redash
    • createdb -U postgres redash
  5. Run the database initialization
    • docker-compose run --rm server create_db
  6. Restart all Redash services (docker-compose down, then start again); a consolidated sketch of these steps follows
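
Putting the steps above together, the sequence roughly looks like this (assuming the stock docker-compose service names postgres, redis and server):

```bash
# One possible end-to-end initialization, mirroring the steps above
docker-compose up -d postgres redis                           # start the backing stores first
docker-compose exec postgres createuser -U postgres redash    # create the role
docker-compose exec postgres createdb -U postgres redash      # and the database
docker-compose run --rm server create_db                      # initialize Redash's schema
docker-compose down && docker-compose up -d                   # restart the full stack
```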

postgresql 在执行 psql 命令时,默认会读取当前系统用户作为执行 role;但 psql 默认用户是 postgres

https://redash.io/help/open-source/setup https://redash.io/help/open-source/dev-guide/docker https://docs.victoriametrics.com/url-examples.html#apiv1exportcsv

https://mp.weixin.qq.com/s?src=11&timestamp=1660629444&ver=3985&signature=verv70veamWEz2Sgc8e89yMJGwANIOzz4lfwbezyVV3wpWNT2d9SnGrDecUOwrbTJBR2o-Ax6ZS4Fpu2UxfX7Sy9xsk1LCXfY1wNr42ucl3tFePfJ7c536c8zL-HOy&new=1

https://mp.weixin.qq.com/s?src=11&timestamp=1660682601&ver=3986&signature=h8m0RzEX3qWsKcUo6Ee3azdsnzLQqUf3N8FdLhyWNa52U4vAvlbEaBFUCrTZnh54tT-YS2mODfkp-6Hemmzt3n*hzGHlEmXP-HO5830W0Fzmn4MMfnsOPBKLrcjaiU0h&new=1

启动的服务介绍

  • v10-redashio_adhoc_worker_1 # 执行查询任务的 worker
  • v10-redashio_postgres_1 # 数据库
  • v10-redashio_redis_1 # 缓存
  • v10-redashio_scheduled_worker_1 # 执行计划任务的 worker
  • v10-redashio_scheduler_1 # 计划任务管理 server
  • v10-redashio_server_1 # 主体 server

The setup mainly involves 3 images: redis, pgsql, and redash. The core one is redash, so that is the image whose version you need to watch.

版本升级

redash 的版本升级较为方便,更换 server 的镜像;然后升级数据库即可。

测试过从 v8 升级到 v10 , 和 v9 升级到 v10,都是 ok 的。

  1. 关闭 Redash 服务
    1. docker-compose stop server scheduler scheduled_worker adhoc_worker
  2. 更新 docker-compose.yml
    1. 基本上这一步,只需要更新 redash 的镜像版本即可
    2. 然后执行 docker-compose pull 拉取新镜像版本
  3. 执行数据库升级
    1. docker-compose run --rm server manage db upgrade
  4. 启动全部服务即可
    1. docker-compose up -d

解决 ES 的 HTTPS 问题

Because our ES endpoint uses HTTPS with a self-signed certificate, requests ran into trouble, so I patched the elasticsearch plugin and pushed the image to my personal Docker Hub: https://hub.docker.com/r/samzong/redash

The side effect is that Elasticsearch can no longer be selected as a data source on the page; I have not had time to dig into why.

It turns out the data source can still be created through the Redash API at /api/data_sources:

配置

{
  "options": {
    "basic_auth_password": "-----",
    "basic_auth_user": "elastic",
    "server": "https://10.6.51.101:31001/",
    "skip_tls_verification": true
  },
  "type": "elasticsearch",
  "name": "test-es"
}
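
For example, posting that JSON with curl (a sketch; replace the host and the User API Key from your Redash profile, and the datasource.json filename is arbitrary):

```bash
# Create the Elasticsearch data source through the API instead of the UI
curl -X POST "http://YOUR_REDASH_HOST/api/data_sources" \
  -H "Authorization: Key YOUR_USER_API_KEY" \
  -H "Content-Type: application/json" \
  -d @datasource.json   # the JSON payload shown above, saved to datasource.json
```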

创建完成后,就可以在页面上更新了。


2022-10-22 Redash