As of 2022, this is the latest way to find JetBrains activation servers; all products can be activated. If a server stops working, just find another one.
Open this site: https://search.censys.io/
In the search box, enter: services.http.response.headers.location: account.jetbrains.com/fls-auth
Pick any site from the results, open it, and look for an HTTP 302 response.
Copy the URL into JetBrains: choose License server, paste the URL you just copied, and activate.
If activation fails or the license is reported as expired, go back to the result list and try a few more servers.
It has only been a bit over two months since I switched from Alfred to Raycast, and the experience is a clear improvement.
The advantages as I see them: ● Raycast's plugin ecosystem is thriving, and writing your own extensions is very easy ● the UI looks great ● the free tier covers everyday use
This tutorial supplements the docs with the manual installation and upgrade procedure.
A default installation does not include Skoala; when planning the installation, you can edit manifest.yaml to have Skoala installed automatically.
Check the pre-installed environment:
./dce5-installer install-app -m /sample/manifest.yaml
The upcoming release (2022.12.15) supports installing Skoala by default; it is still advisable to check manifest.yaml to make sure the installer will install Skoala.
enable must be true, and the matching helmVersion must be specified:
...
components:
skoala:
enable: true
helmVersion: v0.12.2
variables:
...
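Before running the installer, you can sanity-check this flag from the shell. A minimal sketch using grep (the sample file below is a stand-in; in practice, point the grep at your real manifest.yaml):

```shell
# Build a sample manifest fragment for illustration; use your real manifest.yaml in practice
cat > /tmp/manifest-sample.yaml <<'EOF'
components:
  skoala:
    enable: true
    helmVersion: v0.12.2
EOF

# Look at the two lines under the skoala component; report whether the flag is set
if grep -A2 'skoala:' /tmp/manifest-sample.yaml | grep -q 'enable: true'; then
  echo "skoala enabled"
else
  echo "skoala NOT enabled" >&2
fi
```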
Check whether the skoala-system namespace contains the following resources. If there are none at all, Skoala was indeed not installed.
~ kubectl -n skoala-system get pods
NAME READY STATUS RESTARTS AGE
hive-8548cd9b59-948j2 2/2 Running 2 (3h48m ago) 3h48m
sesame-5955c878c6-jz8cd 2/2 Running 0 3h48m
ui-7c9f5b7b67-9rpzc 2/2 Running 0 3h48m
~ helm -n skoala-system list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
skoala skoala-system 3 2022-12-16 11:17:35.187799553 +0800 CST deployed skoala-0.13.0 0.13.0
skoala needs MySQL to store its configuration, so the database must exist; also check whether common-mysql contains a database named skoala.
~ kubectl -n mcamel-system get statefulset
NAME READY AGE
mcamel-common-mysql-cluster-mysql 2/2 7d23h
The recommended database settings for Skoala are as follows:
All of Skoala's monitoring depends on Insight, so the corresponding insight-agent must be installed in the cluster.
Impact on Skoala:
If skoala-init was installed first, you currently have to reinstall skoala-init after installing insight-agent.
If the skoala database in common-mysql is empty, log in to the skoala database and execute the following SQL:
CREATE TABLE `registry` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`uid` varchar(32) DEFAULT NULL,
`name` varchar(50) NOT NULL,
`type` varchar(50) NOT NULL,
`addresses` varchar(1000) NOT NULL,
`namespaces` varchar(2000) NOT NULL,
`deleted_at` timestamp NULL COMMENT 'Time deleted',
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`updated_at` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
PRIMARY KEY (`id`),
UNIQUE KEY `idx_uid` (`uid`),
UNIQUE KEY `idx_name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
CREATE TABLE `book` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`uid` varchar(32) DEFAULT NULL,
`name` varchar(50) NOT NULL,
`author` varchar(32) NOT NULL,
`status` int(1) DEFAULT 1 COMMENT '0: off shelf, 1: on shelf',
`isPublished` tinyint(1) unsigned NOT NULL DEFAULT 1 COMMENT '0: unpublished, 1: published',
`publishedAt` timestamp NULL DEFAULT NULL COMMENT 'Publish time',
`deleted_at` timestamp NULL COMMENT 'Time deleted',
`createdAt` timestamp NOT NULL DEFAULT current_timestamp(),
`updatedAt` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
PRIMARY KEY (`id`),
UNIQUE KEY `idx_uid` (`uid`),
UNIQUE KEY `idx_name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
CREATE TABLE `api` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`is_hosted` tinyint DEFAULT 0,
`registry` varchar(50) NOT NULL,
`service_name` varchar(200) NOT NULL,
`nacos_namespace` varchar(200) NOT NULL COMMENT 'Nacos namespace id',
`nacos_group_name` varchar(200) NOT NULL,
`data_type` varchar(100) NOT NULL COMMENT 'JSON or YAML.',
`detail` mediumtext NOT NULL,
`deleted_at` timestamp NULL COMMENT 'Time deleted',
`created_at` timestamp NOT NULL DEFAULT current_timestamp(),
`updated_at` timestamp NOT NULL DEFAULT current_timestamp() ON UPDATE current_timestamp(),
PRIMARY KEY (`id`),
UNIQUE KEY `idx_registry_and_service_name` (`registry`, `service_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
INSERT INTO `book` VALUES (1,'book-init','MicroService Pattern','daocloud',1,1,'2022-03-23 13:50:00',null,now(),now());
alter table registry add is_hosted tinyint default 0 not null after namespaces;
alter table registry add workspace_id varchar(50) not null DEFAULT 'default' after uid;
alter table registry add ext_id varchar(50) null after workspace_id;
drop index idx_name on registry;
create unique index idx_name on registry (name, workspace_id);
After the above, the skoala database contains 3 tables (registry, book, api); double-check that every statement actually took effect.
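A quick pre-check before piping the DDL into MySQL is to save it to a file and count the CREATE TABLE statements; a minimal sketch (the file name is illustrative, and the DDL is trimmed here to keep the example short):

```shell
# Save the DDL above to a file (trimmed for illustration), then count tables before applying it
cat > /tmp/skoala-init.sql <<'EOF'
CREATE TABLE `registry` (`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`));
CREATE TABLE `book` (`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`));
CREATE TABLE `api` (`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`));
EOF

# Expect 3 here: registry, book, api
echo "tables defined: $(grep -c '^CREATE TABLE' /tmp/skoala-init.sql)"
```

After executing the file against the database, `SHOW TABLES;` inside the skoala database should list the same three tables.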
With the skoala repository configured, you can view and fetch the skoala application charts:
~ helm repo add skoala-release https://release.daocloud.io/chartrepo/skoala
~ helm repo update
Important: after adding the skoala-release repo, there are two charts you will normally care about.
By default, once skoala is installed into kpanda-global-cluster (the global management cluster), the Microservice Engine entry shows up in the sidebar.
On the global management cluster, fetch the latest Skoala version by updating the helm repo:
~ helm repo update skoala-release
~ helm search repo skoala-release/skoala --versions
NAME CHART VERSION APP VERSION DESCRIPTION
skoala-release/skoala 0.13.0 0.13.0 The helm chart for Skoala
skoala-release/skoala 0.12.2 0.12.2 The helm chart for Skoala
skoala-release/skoala 0.12.1 0.12.1 The helm chart for Skoala
skoala-release/skoala 0.12.0 0.12.0 The helm chart for Skoala
......
A skoala deployment ships with the frontend version that was current at build time. To pin a specific frontend ui version, look up the tag in the frontend repository: https://gitlab.daocloud.cn/ndx/frontend-engineering/skoala-ui/-/tags
On the workload cluster, fetch the latest skoala-init version by updating the helm repo:
~ helm repo update skoala-release
~ helm search repo skoala-release/skoala-init --versions
NAME CHART VERSION APP VERSION DESCRIPTION
skoala-release/skoala-init 0.13.0 0.13.0 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.2 0.12.2 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.1 0.12.1 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.0 0.12.0 A Helm Chart for Skoala init, it includes Skoal...
......
Run the command directly, minding the version number:
~ helm upgrade --install skoala --create-namespace -n skoala-system --cleanup-on-fail \
--set ui.image.tag=v0.9.0 \
--set sweet.enable=true \
--set hive.configMap.data.database.host=mcamel-common-mysql-cluster-mysql-master.mcamel-system.svc.cluster.local \
--set hive.configMap.data.database.port=3306 \
--set hive.configMap.data.database.user=root \
--set hive.configMap.data.database.password=xxxxxxxx \
--set hive.configMap.data.database.database=skoala \
skoala-release/skoala \
--version 0.13.0
To customize and initialize the database parameters, pass the database settings via --set:
--set sweet.enable=true \
--set hive.configMap.data.database.host=<host> \
--set hive.configMap.data.database.port=<port> \
--set hive.configMap.data.database.user=<user> \
--set hive.configMap.data.database.password=<password> \
--set hive.configMap.data.database.database=<database>
To customize the frontend ui version: --set ui.image.tag=v0.9.0
Check that the deployed pods started successfully:
~ kubectl -n skoala-system get pods
NAME READY STATUS RESTARTS AGE
hive-8548cd9b59-948j2 2/2 Running 2 (3h48m ago) 3h48m
sesame-5955c878c6-jz8cd 2/2 Running 0 3h48m
ui-7c9f5b7b67-9rpzc 2/2 Running 0 3h48m
This uninstall step removes all skoala-related resources:
~ helm uninstall skoala -n skoala-system
Upgrading works the same as the deployment in 3.4: run helm upgrade with the new version.
See the Skoala code repository: https://gitlab.daocloud.cn/ndx/skoala/-/tree/main/build/charts/skoala
Because Skoala involves quite a few components, they are packaged into a single chart, skoala-init, so skoala-init should be installed on every workload cluster that uses the Microservice Engine.
~ helm search repo skoala-release/skoala-init --versions
NAME CHART VERSION APP VERSION DESCRIPTION
skoala-release/skoala-init 0.13.0 0.13.0 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.2 0.12.2 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.1 0.12.1 A Helm Chart for Skoala init, it includes Skoal...
skoala-release/skoala-init 0.12.0 0.12.0 A Helm Chart for Skoala init, it includes Skoal...
......
The install command is the same as the upgrade command. Make sure it targets the intended namespace, and confirm that all pods start successfully.
~ helm upgrade --install skoala-init --create-namespace -n skoala-system --cleanup-on-fail \
skoala-release/skoala-init \
--version 0.13.0
Besides the terminal, you can also install from the UI: find skoala-init under Helm Apps in Kpanda cluster management.
Uninstall command:
~ helm uninstall skoala-init -n skoala-system
This article walks through a from-scratch installation of DCE 5.0 Community Edition, covering the Kubernetes cluster it requires plus various installation details and caveats.
The plan uses 3 UCloud VMs, each with 8 cores and 16 GB RAM.
Role | Hostname | OS | IP | Spec
---|---|---|---|---
master | master-k8s-com | CentOS 7.9 | 10.23.245.63 | 8C / 16G / 300GB
node01 | node01-k8s-com | CentOS 7.9 | 10.23.104.173 | 8C / 16G / 300GB
node02 | node02-k8s-com | CentOS 7.9 | 10.23.112.244 | 8C / 16G / 300GB
hostnamectl set-hostname master-k8s-com
# Append cluster hosts; use tee -a so existing entries such as localhost are kept
cat <<EOF | tee -a /etc/hosts
10.23.245.63 master-k8s-com
10.23.104.173 node01-k8s-com
10.23.112.244 node02-k8s-com
EOF
# Disable swap now and keep it off across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# Put SELinux into permissive mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
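The swap-disabling sed can be previewed on a scratch copy before touching the real fstab; a minimal sketch with a made-up two-line fstab:

```shell
# Preview the swap-commenting sed on a scratch copy of fstab
cat > /tmp/fstab-sample <<'EOF'
/dev/vda1 / ext4 defaults 0 1
/dev/vda2 swap swap defaults 0 0
EOF

# Same expression as above: prefix '#' to any line containing ' swap '
sed -i '/ swap / s/^/#/' /tmp/fstab-sample
cat /tmp/fstab-sample
```

The root filesystem line is left untouched; only the swap mount gets commented out.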
Load the br_netfilter module:
cat <<EOF | tee /etc/modules-load.d/kubernetes.conf
br_netfilter
EOF
# Load the module
modprobe br_netfilter
Set net.bridge.bridge-nf-call-iptables to 1:
cat <<EOF | tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Reload sysctl settings
sysctl --system
sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
sudo yum -y install docker-ce docker-ce-cli containerd.io
sudo touch /etc/docker/daemon.json
cat <<EOF | tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Remove any stale config file
sudo rm -f /etc/containerd/config.toml
# Regenerate the default config
sudo containerd config default | sudo tee /etc/containerd/config.toml
# Patch the config: use the systemd cgroup driver and a reachable pause image mirror
sudo sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/' /etc/containerd/config.toml
sudo sed -i 's/k8s.gcr.io\/pause/registry.cn-hangzhou.aliyuncs.com\/google_containers\/pause/g' /etc/containerd/config.toml
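Note that both substitutions only modify the file when sed is run with -i; without it, sed just prints the result to stdout. You can preview them on a scratch copy first. A minimal sketch (the two sample lines mimic the relevant parts of config.toml):

```shell
# Try the two substitutions on a scratch file before editing /etc/containerd/config.toml
cat > /tmp/config-sample.toml <<'EOF'
            SystemdCgroup = false
    sandbox_image = "k8s.gcr.io/pause:3.6"
EOF

# Switch to the systemd cgroup driver and the Aliyun pause-image mirror
sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/' /tmp/config-sample.toml
sed -i 's#k8s.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /tmp/config-sample.toml

grep -E 'SystemdCgroup|sandbox_image' /tmp/config-sample.toml
```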
sudo systemctl daemon-reload
sudo systemctl enable --now docker
sudo systemctl enable --now containerd
sudo systemctl status docker containerd
sudo docker info
Use the Aliyun mirror here; it is much faster from inside China:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
K8sVersion=1.24.8
sudo yum install -y kubelet-$K8sVersion kubeadm-$K8sVersion
sudo yum install -y kubectl-$K8sVersion # kubectl only needs to be installed on the master node
Enable the kubelet system service:
sudo systemctl enable --now kubelet
At this point the service status is abnormal: kubelet keeps restarting because it looks for a cluster configuration that does not exist yet. This does not affect the following steps.
Plan the control-plane node's configuration; when initializing the master node, also plan the network settings.
# Specify the Kubernetes version; keep it consistent with the packages installed above
$ sudo kubeadm init --kubernetes-version=v1.24.8 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--pod-network-cidr=10.11.0.0/16
[init] Using Kubernetes version: v1.24.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.503693 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.24.8" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.24.8" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1.k8s.com as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1.k8s.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: c0wcm5.0yu9szfktsxvurza
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.23.245.63:6443 --token djdsj2.sj23js90213j323 \
--discovery-token-ca-cert-hash sha256:ewuosdjk2390rjertw32p32j43p25a70298db818ajsdjk1293jk23k23201934h
Make sure the earlier steps were completed on each worker (node system tuning, Docker installation, Kubernetes components) and that the master node was initialized successfully.
$ kubeadm join 10.23.245.63:6443 --token djdsj2.sj23js90213j323 \
--discovery-token-ca-cert-hash sha256:ewuosdjk2390rjertw32p32j43p25a70298db818ajsdjk1293jk23k23201934h
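The bootstrap token in the join command expires after 24 hours by default. On the master, `kubeadm token create --print-join-command` prints a fresh join command, and the CA cert hash can be recomputed with openssl. The pipeline is demonstrated below on a throwaway self-signed certificate (on a real master, point it at /etc/kubernetes/pki/ca.crt instead):

```shell
# Generate a throwaway CA cert purely to demonstrate the hash pipeline
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null

# Same pipeline kubeadm documents for computing --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

The output is a 64-character hex digest; prefix it with `sha256:` when passing it to kubeadm join.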
Add both worker nodes this way, then verify from the master:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01-k8s-com Ready control-plane 9h v1.24.8
node01-k8s-com Ready <none> 9h v1.24.8
node02-k8s-com Ready <none> 9h v1.24.8
Save the YAML below as calico.yaml, or download it from my GitHub.
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use for workload interfaces and tunnels.
# By default, MTU is auto-detected, and explicitly setting this field should not be required.
# You can override auto-detection by providing a non-zero value.
veth_mtu: "0"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"log_file_path": "/var/log/calico/cni/cni.log",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BGPConfiguration
listKind: BGPConfigurationList
plural: bgpconfigurations
singular: bgpconfiguration
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: BGPConfiguration contains the configuration for any BGP routing.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BGPConfigurationSpec contains the values of the BGP configuration.
properties:
asNumber:
description: 'ASNumber is the default AS number used by a node. [Default:
64512]'
format: int32
type: integer
communities:
description: Communities is a list of BGP community values and their
arbitrary names for tagging routes.
items:
description: Community contains standard or large community value
and its name.
properties:
name:
description: Name given to community value.
type: string
value:
description: Value must be of format `aa:nn` or `aa:nn:mm`.
For standard community use `aa:nn` format, where `aa` and
`nn` are 16 bit number. For large community use `aa:nn:mm`
format, where `aa`, `nn` and `mm` are 32 bit number. Where,
`aa` is an AS Number, `nn` and `mm` are per-AS identifier.
pattern: ^(\d+):(\d+)$|^(\d+):(\d+):(\d+)$
type: string
type: object
type: array
listenPort:
description: ListenPort is the port where BGP protocol should listen.
Defaults to 179
maximum: 65535
minimum: 1
type: integer
logSeverityScreen:
description: 'LogSeverityScreen is the log severity above which logs
are sent to the stdout. [Default: INFO]'
type: string
nodeToNodeMeshEnabled:
description: 'NodeToNodeMeshEnabled sets whether full node to node
BGP mesh is enabled. [Default: true]'
type: boolean
prefixAdvertisements:
description: PrefixAdvertisements contains per-prefix advertisement
configuration.
items:
description: PrefixAdvertisement configures advertisement properties
for the specified CIDR.
properties:
cidr:
description: CIDR for which properties should be advertised.
type: string
communities:
description: Communities can be list of either community names
already defined in `Specs.Communities` or community value
of format `aa:nn` or `aa:nn:mm`. For standard community use
`aa:nn` format, where `aa` and `nn` are 16 bit number. For
large community use `aa:nn:mm` format, where `aa`, `nn` and
`mm` are 32 bit number. Where,`aa` is an AS Number, `nn` and
`mm` are per-AS identifier.
items:
type: string
type: array
type: object
type: array
serviceClusterIPs:
description: ServiceClusterIPs are the CIDR blocks from which service
cluster IPs are allocated. If specified, Calico will advertise these
blocks, as well as any cluster IPs within them.
items:
description: ServiceClusterIPBlock represents a single allowed ClusterIP
CIDR block.
properties:
cidr:
type: string
type: object
type: array
serviceExternalIPs:
description: ServiceExternalIPs are the CIDR blocks for Kubernetes
Service External IPs. Kubernetes Service ExternalIPs will only be
advertised if they are within one of these blocks.
items:
description: ServiceExternalIPBlock represents a single allowed
External IP CIDR block.
properties:
cidr:
type: string
type: object
type: array
serviceLoadBalancerIPs:
description: ServiceLoadBalancerIPs are the CIDR blocks for Kubernetes
Service LoadBalancer IPs. Kubernetes Service status.LoadBalancer.Ingress
IPs will only be advertised if they are within one of these blocks.
items:
description: ServiceLoadBalancerIPBlock represents a single allowed
LoadBalancer IP CIDR block.
properties:
cidr:
type: string
type: object
type: array
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BGPPeer
listKind: BGPPeerList
plural: bgppeers
singular: bgppeer
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BGPPeerSpec contains the specification for a BGPPeer resource.
properties:
asNumber:
description: The AS Number of the peer.
format: int32
type: integer
keepOriginalNextHop:
description: Option to keep the original nexthop field when routes
are sent to a BGP Peer. Setting "true" configures the selected BGP
Peers node to use the "next hop keep;" instead of "next hop self;"(default)
in the specific branch of the Node on "bird.cfg".
type: boolean
node:
description: The node name identifying the Calico node instance that
is targeted by this peer. If this is not set, and no nodeSelector
is specified, then this BGP peer selects all nodes in the cluster.
type: string
nodeSelector:
description: Selector for the nodes that should have this peering. When
this is set, the Node field must be empty.
type: string
password:
description: Optional BGP password for the peerings generated by this
BGPPeer resource.
properties:
secretKeyRef:
description: Selects a key of a secret in the node pod's namespace.
properties:
key:
description: The key of the secret to select from. Must be
a valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be
defined
type: boolean
required:
- key
type: object
type: object
peerIP:
description: The IP address of the peer followed by an optional port
number to peer with. If port number is given, format should be `[<IPv6>]:port`
or `<IPv4>:<port>` for IPv4. If optional port number is not set,
and this peer IP and ASNumber belongs to a calico/node with ListenPort
set in BGPConfiguration, then we use that port to peer.
type: string
peerSelector:
description: Selector for the remote nodes to peer with. When this
is set, the PeerIP and ASNumber fields must be empty. For each
peering between the local node and selected remote nodes, we configure
an IPv4 peering if both ends have NodeBGPSpec.IPv4Address specified,
and an IPv6 peering if both ends have NodeBGPSpec.IPv6Address specified. The
remote AS number comes from the remote node's NodeBGPSpec.ASNumber,
or the global default if that is not set.
type: string
sourceAddress:
description: Specifies whether and how to configure a source address
for the peerings generated by this BGPPeer resource. Default value
"UseNodeIP" means to configure the node IP as the source address. "None"
means not to configure a source address.
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BlockAffinity
listKind: BlockAffinityList
plural: blockaffinities
singular: blockaffinity
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BlockAffinitySpec contains the specification for a BlockAffinity
resource.
properties:
cidr:
type: string
deleted:
description: Deleted indicates that this block affinity is being deleted.
This field is a string for compatibility with older releases that
mistakenly treat this field as a string.
type: string
node:
type: string
state:
type: string
required:
- cidr
- deleted
- node
- state
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: ClusterInformation
listKind: ClusterInformationList
plural: clusterinformations
singular: clusterinformation
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: ClusterInformation contains the cluster specific information.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ClusterInformationSpec contains the values of describing
the cluster.
properties:
calicoVersion:
description: CalicoVersion is the version of Calico that the cluster
is running
type: string
clusterGUID:
description: ClusterGUID is the GUID of the cluster
type: string
clusterType:
description: ClusterType describes the type of the cluster
type: string
datastoreReady:
description: DatastoreReady is used during significant datastore migrations
to signal to components such as Felix that it should wait before
accessing the datastore.
type: boolean
variant:
description: Variant declares which variant of Calico should be active.
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: FelixConfiguration
listKind: FelixConfigurationList
plural: felixconfigurations
singular: felixconfiguration
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: Felix Configuration contains the configuration for Felix.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: FelixConfigurationSpec contains the values of the Felix configuration.
properties:
allowIPIPPacketsFromWorkloads:
description: 'AllowIPIPPacketsFromWorkloads controls whether Felix
will add a rule to drop IPIP encapsulated traffic from workloads
[Default: false]'
type: boolean
allowVXLANPacketsFromWorkloads:
description: 'AllowVXLANPacketsFromWorkloads controls whether Felix
will add a rule to drop VXLAN encapsulated traffic from workloads
[Default: false]'
type: boolean
awsSrcDstCheck:
description: 'Set source-destination-check on AWS EC2 instances. Accepted
value must be one of "DoNothing", "Enabled" or "Disabled". [Default:
DoNothing]'
enum:
- DoNothing
- Enable
- Disable
type: string
bpfConnectTimeLoadBalancingEnabled:
description: 'BPFConnectTimeLoadBalancingEnabled when in BPF mode,
controls whether Felix installs the connection-time load balancer. The
connect-time load balancer is required for the host to be able to
reach Kubernetes services and it improves the performance of pod-to-service
connections. The only reason to disable it is for debugging purposes. [Default:
true]'
type: boolean
bpfDataIfacePattern:
description: BPFDataIfacePattern is a regular expression that controls
which interfaces Felix should attach BPF programs to in order to
catch traffic to/from the network. This needs to match the interfaces
that Calico workload traffic flows over as well as any interfaces
that handle incoming traffic to nodeports and services from outside
the cluster. It should not match the workload interfaces (usually
named cali...).
type: string
bpfDisableUnprivileged:
description: 'BPFDisableUnprivileged, if enabled, Felix sets the kernel.unprivileged_bpf_disabled
sysctl to disable unprivileged use of BPF. This ensures that unprivileged
users cannot access Calico''s BPF maps and cannot insert their own
BPF programs to interfere with Calico''s. [Default: true]'
type: boolean
bpfEnabled:
description: 'BPFEnabled, if enabled Felix will use the BPF dataplane.
[Default: false]'
type: boolean
bpfExtToServiceConnmark:
description: 'BPFExtToServiceConnmark in BPF mode, control a 32bit
mark that is set on connections from an external client to a local
service. This mark allows us to control how packets of that connection
are routed within the host and how is routing intepreted by RPF
check. [Default: 0]'
type: integer
bpfExternalServiceMode:
description: 'BPFExternalServiceMode in BPF mode, controls how connections
from outside the cluster to services (node ports and cluster IPs)
are forwarded to remote workloads. If set to "Tunnel" then both
request and response traffic is tunneled to the remote node. If
set to "DSR", the request traffic is tunneled but the response traffic
is sent directly from the remote node. In "DSR" mode, the remote
node appears to use the IP of the ingress node; this requires a
permissive L2 network. [Default: Tunnel]'
type: string
bpfKubeProxyEndpointSlicesEnabled:
description: BPFKubeProxyEndpointSlicesEnabled in BPF mode, controls
whether Felix's embedded kube-proxy accepts EndpointSlices or not.
type: boolean
bpfKubeProxyIptablesCleanupEnabled:
description: 'BPFKubeProxyIptablesCleanupEnabled, if enabled in BPF
mode, Felix will proactively clean up the upstream Kubernetes kube-proxy''s
iptables chains. Should only be enabled if kube-proxy is not running. [Default:
true]'
type: boolean
bpfKubeProxyMinSyncPeriod:
description: 'BPFKubeProxyMinSyncPeriod, in BPF mode, controls the
minimum time between updates to the dataplane for Felix''s embedded
kube-proxy. Lower values give reduced set-up latency. Higher values
reduce Felix CPU usage by batching up more work. [Default: 1s]'
type: string
bpfLogLevel:
description: 'BPFLogLevel controls the log level of the BPF programs
when in BPF dataplane mode. One of "Off", "Info", or "Debug". The
logs are emitted to the BPF trace pipe, accessible with the command
`tc exec bpf debug`. [Default: Off].'
type: string
chainInsertMode:
description: 'ChainInsertMode controls whether Felix hooks the kernel''s
top-level iptables chains by inserting a rule at the top of the
chain or by appending a rule at the bottom. insert is the safe default
since it prevents Calico''s rules from being bypassed. If you switch
to append mode, be sure that the other rules in the chains signal
acceptance by falling through to the Calico rules, otherwise the
Calico policy will be bypassed. [Default: insert]'
type: string
dataplaneDriver:
type: string
debugDisableLogDropping:
type: boolean
debugMemoryProfilePath:
type: string
debugSimulateCalcGraphHangAfter:
type: string
debugSimulateDataplaneHangAfter:
type: string
defaultEndpointToHostAction:
description: 'DefaultEndpointToHostAction controls what happens to
traffic that goes from a workload endpoint to the host itself (after
the traffic hits the endpoint egress policy). By default Calico
blocks traffic from workload endpoints to the host itself with an
iptables "DROP" action. If you want to allow some or all traffic
from endpoint to host, set this parameter to RETURN or ACCEPT. Use
RETURN if you have your own rules in the iptables "INPUT" chain;
Calico will insert its rules at the top of that chain, then "RETURN"
packets to the "INPUT" chain once it has completed processing workload
endpoint egress policy. Use ACCEPT to unconditionally accept packets
from workloads after processing workload endpoint egress policy.
[Default: Drop]'
type: string
deviceRouteProtocol:
description: This defines the route protocol added to programmed device
routes, by default this will be RTPROT_BOOT when left blank.
type: integer
deviceRouteSourceAddress:
description: This is the source address to use on programmed device
routes. By default the source address is left blank, leaving the
kernel to choose the source address used.
type: string
disableConntrackInvalidCheck:
type: boolean
endpointReportingDelay:
type: string
endpointReportingEnabled:
type: boolean
externalNodesList:
description: ExternalNodesCIDRList is a list of CIDR's of external-non-calico-nodes
which may source tunnel traffic and have the tunneled traffic be
accepted at calico nodes.
items:
type: string
type: array
failsafeInboundHostPorts:
description: 'FailsafeInboundHostPorts is a list of UDP/TCP ports
and CIDRs that Felix will allow incoming traffic to host endpoints
on irrespective of the security policy. This is useful to avoid
accidentally cutting off a host with incorrect configuration. For
back-compatibility, if the protocol is not specified, it defaults
to "tcp". If a CIDR is not specified, it will allow traffic from
all addresses. To disable all inbound host ports, use the value
none. The default value allows ssh access and DHCP. [Default: tcp:22,
udp:68, tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666, tcp:6667]'
items:
description: ProtoPort is combination of protocol, port, and CIDR.
Protocol and port must be specified.
properties:
net:
type: string
port:
type: integer
protocol:
type: string
required:
- port
- protocol
type: object
type: array
failsafeOutboundHostPorts:
description: 'FailsafeOutboundHostPorts is a list of UDP/TCP ports
and CIDRs that Felix will allow outgoing traffic from host endpoints
to irrespective of the security policy. This is useful to avoid
accidentally cutting off a host with incorrect configuration. For
back-compatibility, if the protocol is not specified, it defaults
to "tcp". If a CIDR is not specified, it will allow traffic from
all addresses. To disable all outbound host ports, use the value
none. The default value opens etcd''s standard ports to ensure that
Felix does not get cut off from etcd as well as allowing DHCP and
DNS. [Default: tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666,
tcp:6667, udp:53, udp:67]'
items:
description: ProtoPort is combination of protocol, port, and CIDR.
Protocol and port must be specified.
properties:
net:
type: string
port:
type: integer
protocol:
type: string
required:
- port
- protocol
type: object
type: array
featureDetectOverride:
description: FeatureDetectOverride is used to override the feature
detection. Values are specified in a comma separated list with no
spaces, example; "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=".
"true" or "false" will force the feature, empty or omitted values
are auto-detected.
type: string
genericXDPEnabled:
description: 'GenericXDPEnabled enables Generic XDP so network cards
that don''t support XDP offload or driver modes can use XDP. This
is not recommended since it doesn''t provide better performance
than iptables. [Default: false]'
type: boolean
healthEnabled:
type: boolean
healthHost:
type: string
healthPort:
type: integer
interfaceExclude:
description: 'InterfaceExclude is a comma-separated list of interfaces
that Felix should exclude when monitoring for host endpoints. The
default value ensures that Felix ignores Kubernetes'' IPVS dummy
interface, which is used internally by kube-proxy. If you want to
exclude multiple interface names using a single value, the list
supports regular expressions. For regular expressions you must wrap
the value with ''/''. For example having values ''/^kube/,veth1''
will exclude all interfaces that begin with ''kube'' and also the
interface ''veth1''. [Default: kube-ipvs0]'
type: string
interfacePrefix:
description: 'InterfacePrefix is the interface name prefix that identifies
workload endpoints and so distinguishes them from host endpoint
interfaces. Note: in environments other than bare metal, the orchestrators
configure this appropriately. For example our Kubernetes and Docker
integrations set the ''cali'' value, and our OpenStack integration
sets the ''tap'' value. [Default: cali]'
type: string
interfaceRefreshInterval:
description: InterfaceRefreshInterval is the period at which Felix
rescans local interfaces to verify their state. The rescan can be
disabled by setting the interval to 0.
type: string
ipipEnabled:
type: boolean
ipipMTU:
description: 'IPIPMTU is the MTU to set on the tunnel device. See
Configuring MTU [Default: 1440]'
type: integer
ipsetsRefreshInterval:
description: 'IpsetsRefreshInterval is the period at which Felix re-checks
all iptables state to ensure that no other process has accidentally
broken Calico''s rules. Set to 0 to disable iptables refresh. [Default:
90s]'
type: string
iptablesBackend:
description: IptablesBackend specifies which backend of iptables will
be used. The default is legacy.
type: string
iptablesFilterAllowAction:
type: string
iptablesLockFilePath:
description: 'IptablesLockFilePath is the location of the iptables
lock file. You may need to change this if the lock file is not in
its standard location (for example if you have mapped it into Felix''s
container at a different path). [Default: /run/xtables.lock]'
type: string
iptablesLockProbeInterval:
description: 'IptablesLockProbeInterval is the time that Felix will
wait between attempts to acquire the iptables lock if it is not
available. Lower values make Felix more responsive when the lock
is contended, but use more CPU. [Default: 50ms]'
type: string
iptablesLockTimeout:
description: 'IptablesLockTimeout is the time that Felix will wait
for the iptables lock, or 0, to disable. To use this feature, Felix
must share the iptables lock file with all other processes that
also take the lock. When running Felix inside a container, this
requires the /run directory of the host to be mounted into the calico/node
or calico/felix container. [Default: 0s disabled]'
type: string
iptablesMangleAllowAction:
type: string
iptablesMarkMask:
description: 'IptablesMarkMask is the mask that Felix selects its
IPTables Mark bits from. Should be a 32 bit hexadecimal number with
at least 8 bits set, none of which clash with any other mark bits
in use on the system. [Default: 0xff000000]'
format: int32
type: integer
iptablesNATOutgoingInterfaceFilter:
type: string
iptablesPostWriteCheckInterval:
description: 'IptablesPostWriteCheckInterval is the period after Felix
has done a write to the dataplane that it schedules an extra read
back in order to check the write was not clobbered by another process.
This should only occur if another application on the system doesn''t
respect the iptables lock. [Default: 1s]'
type: string
iptablesRefreshInterval:
description: 'IptablesRefreshInterval is the period at which Felix
re-checks the IP sets in the dataplane to ensure that no other process
has accidentally broken Calico''s rules. Set to 0 to disable IP
sets refresh. Note: the default for this value is lower than the
other refresh intervals as a workaround for a Linux kernel bug that
was fixed in kernel version 4.11. If you are using v4.11 or greater
you may want to set this to, a higher value to reduce Felix CPU
usage. [Default: 10s]'
type: string
ipv6Support:
type: boolean
kubeNodePortRanges:
description: 'KubeNodePortRanges holds list of port ranges used for
service node ports. Only used if felix detects kube-proxy running
in ipvs mode. Felix uses these ranges to separate host and workload
traffic. [Default: 30000:32767].'
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
logFilePath:
description: 'LogFilePath is the full path to the Felix log. Set to
none to disable file logging. [Default: /var/log/calico/felix.log]'
type: string
logPrefix:
description: 'LogPrefix is the log prefix that Felix uses when rendering
LOG rules. [Default: calico-packet]'
type: string
logSeverityFile:
description: 'LogSeverityFile is the log severity above which logs
are sent to the log file. [Default: Info]'
type: string
logSeverityScreen:
description: 'LogSeverityScreen is the log severity above which logs
are sent to the stdout. [Default: Info]'
type: string
logSeveritySys:
description: 'LogSeveritySys is the log severity above which logs
are sent to the syslog. Set to None for no logging to syslog. [Default:
Info]'
type: string
maxIpsetSize:
type: integer
metadataAddr:
description: 'MetadataAddr is the IP address or domain name of the
server that can answer VM queries for cloud-init metadata. In OpenStack,
this corresponds to the machine running nova-api (or in Ubuntu,
nova-api-metadata). A value of none (case insensitive) means that
Felix should not set up any NAT rule for the metadata path. [Default:
127.0.0.1]'
type: string
metadataPort:
description: 'MetadataPort is the port of the metadata server. This,
combined with global.MetadataAddr (if not ''None''), is used to
set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort.
In most cases this should not need to be changed [Default: 8775].'
type: integer
mtuIfacePattern:
description: MTUIfacePattern is a regular expression that controls
which interfaces Felix should scan in order to calculate the host's
MTU. This should not match workload interfaces (usually named cali...).
type: string
natOutgoingAddress:
description: NATOutgoingAddress specifies an address to use when performing
source NAT for traffic in a natOutgoing pool that is leaving the
network. By default the address used is an address on the interface
the traffic is leaving on (ie it uses the iptables MASQUERADE target)
type: string
natPortRange:
anyOf:
- type: integer
- type: string
description: NATPortRange specifies the range of ports that is used
for port mapping when doing outgoing NAT. When unset the default
behavior of the network stack is used.
pattern: ^.*
x-kubernetes-int-or-string: true
netlinkTimeout:
type: string
openstackRegion:
description: 'OpenstackRegion is the name of the region that a particular
Felix belongs to. In a multi-region Calico/OpenStack deployment,
this must be configured somehow for each Felix (here in the datamodel,
or in felix.cfg or the environment on each compute node), and must
match the [calico] openstack_region value configured in neutron.conf
on each node. [Default: Empty]'
type: string
policySyncPathPrefix:
description: 'PolicySyncPathPrefix is used to by Felix to communicate
policy changes to external services, like Application layer policy.
[Default: Empty]'
type: string
prometheusGoMetricsEnabled:
description: 'PrometheusGoMetricsEnabled disables Go runtime metrics
collection, which the Prometheus client does by default, when set
to false. This reduces the number of metrics reported, reducing
Prometheus load. [Default: true]'
type: boolean
prometheusMetricsEnabled:
description: 'PrometheusMetricsEnabled enables the Prometheus metrics
server in Felix if set to true. [Default: false]'
type: boolean
prometheusMetricsHost:
description: 'PrometheusMetricsHost is the host that the Prometheus
metrics server should bind to. [Default: empty]'
type: string
prometheusMetricsPort:
description: 'PrometheusMetricsPort is the TCP port that the Prometheus
metrics server should bind to. [Default: 9091]'
type: integer
prometheusProcessMetricsEnabled:
description: 'PrometheusProcessMetricsEnabled disables process metrics
collection, which the Prometheus client does by default, when set
to false. This reduces the number of metrics reported, reducing
Prometheus load. [Default: true]'
type: boolean
removeExternalRoutes:
description: Whether or not to remove device routes that have not
been programmed by Felix. Disabling this will allow external applications
to also add device routes. This is enabled by default which means
we will remove externally added routes.
type: boolean
reportingInterval:
description: 'ReportingInterval is the interval at which Felix reports
its status into the datastore or 0 to disable. Must be non-zero
in OpenStack deployments. [Default: 30s]'
type: string
reportingTTL:
description: 'ReportingTTL is the time-to-live setting for process-wide
status reports. [Default: 90s]'
type: string
routeRefreshInterval:
description: 'RouteRefreshInterval is the period at which Felix re-checks
the routes in the dataplane to ensure that no other process has
accidentally broken Calico''s rules. Set to 0 to disable route refresh.
[Default: 90s]'
type: string
routeSource:
description: 'RouteSource configures where Felix gets its routing
information. - WorkloadIPs: use workload endpoints to construct
routes. - CalicoIPAM: the default - use IPAM data to construct routes.'
type: string
routeTableRange:
description: Calico programs additional Linux route tables for various
purposes. RouteTableRange specifies the indices of the route tables
that Calico should use.
properties:
max:
type: integer
min:
type: integer
required:
- max
- min
type: object
serviceLoopPrevention:
description: 'When service IP advertisement is enabled, prevent routing
loops to service IPs that are not in use, by dropping or rejecting
packets that do not get DNAT''d by kube-proxy. Unless set to "Disabled",
in which case such routing loops continue to be allowed. [Default:
Drop]'
type: string
sidecarAccelerationEnabled:
description: 'SidecarAccelerationEnabled enables experimental sidecar
acceleration [Default: false]'
type: boolean
usageReportingEnabled:
description: 'UsageReportingEnabled reports anonymous Calico version
number and cluster size to projectcalico.org. Logs warnings returned
by the usage server. For example, if a significant security vulnerability
has been discovered in the version of Calico being used. [Default:
true]'
type: boolean
usageReportingInitialDelay:
description: 'UsageReportingInitialDelay controls the minimum delay
before Felix makes a report. [Default: 300s]'
type: string
usageReportingInterval:
description: 'UsageReportingInterval controls the interval at which
Felix makes reports. [Default: 86400s]'
type: string
useInternalDataplaneDriver:
type: boolean
vxlanEnabled:
type: boolean
vxlanMTU:
description: 'VXLANMTU is the MTU to set on the tunnel device. See
Configuring MTU [Default: 1440]'
type: integer
vxlanPort:
type: integer
vxlanVNI:
type: integer
wireguardEnabled:
description: 'WireguardEnabled controls whether Wireguard is enabled.
[Default: false]'
type: boolean
wireguardInterfaceName:
description: 'WireguardInterfaceName specifies the name to use for
the Wireguard interface. [Default: wg.calico]'
type: string
wireguardListeningPort:
description: 'WireguardListeningPort controls the listening port used
by Wireguard. [Default: 51820]'
type: integer
wireguardMTU:
description: 'WireguardMTU controls the MTU on the Wireguard interface.
See Configuring MTU [Default: 1420]'
type: integer
wireguardRoutingRulePriority:
description: 'WireguardRoutingRulePriority controls the priority value
to use for the Wireguard routing rule. [Default: 99]'
type: integer
xdpEnabled:
description: 'XDPEnabled enables XDP acceleration for suitable untracked
incoming deny rules. [Default: true]'
type: boolean
xdpRefreshInterval:
description: 'XDPRefreshInterval is the period at which Felix re-checks
all XDP state to ensure that no other process has accidentally broken
Calico''s BPF maps or attached programs. Set to 0 to disable XDP
refresh. [Default: 90s]'
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: GlobalNetworkPolicy
listKind: GlobalNetworkPolicyList
plural: globalnetworkpolicies
singular: globalnetworkpolicy
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
applyOnForward:
description: ApplyOnForward indicates to apply the rules in this policy
on forward traffic.
type: boolean
doNotTrack:
description: DoNotTrack indicates whether packets matched by the rules
in this policy should go through the data plane's connection tracking,
such as Linux conntrack. If True, the rules in this policy are
applied before any data plane connection tracking, and packets allowed
by this policy are marked as not to be tracked.
type: boolean
egress:
description: The ordered set of egress rules. Each rule contains
a set of packet match criteria and a corresponding action to apply.
items:
description: "A Rule encapsulates a set of match criteria and an
action. Both selector-based security Policy and security Profiles
reference rules - separated out as a list of rules for both ingress
and egress packet matching. \n Each positive match criteria has
a negated version, prefixed with \"Not\". All the match criteria
within a rule must be satisfied for a packet to match. A single
rule can contain the positive and negative version of a match
and both must be satisfied for the rule to match."
properties:
action:
type: string
destination:
description: Destination contains the match criteria that apply
to destination entity.
properties:
namespaceSelector:
description: "NamespaceSelector is an optional field that
contains a selector expression. Only traffic that originates
from (or terminates at) endpoints within the selected
namespaces will be matched. When both NamespaceSelector
and Selector are defined on the same rule, then only workload
endpoints that are matched by both selectors will be selected
by the rule. \n For NetworkPolicy, an empty NamespaceSelector
implies that the Selector is limited to selecting only
workload endpoints in the same namespace as the NetworkPolicy.
\n For NetworkPolicy, `global()` NamespaceSelector implies
that the Selector is limited to selecting only GlobalNetworkSet
or HostEndpoint. \n For GlobalNetworkPolicy, an empty
NamespaceSelector implies the Selector applies to workload
endpoints across all namespaces."
type: string
nets:
description: Nets is an optional field that restricts the
rule to only apply to traffic that originates from (or
terminates at) IP addresses in any of the given subnets.
items:
type: string
type: array
notNets:
description: NotNets is the negated version of the Nets
field.
items:
type: string
type: array
notPorts:
description: NotPorts is the negated version of the Ports
field. Since only some protocols have ports, if any ports
are specified it requires the Protocol match in the Rule
to be set to "TCP" or "UDP".
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
notSelector:
description: NotSelector is the negated version of the Selector
field. See Selector field for subtleties with negated
selectors.
type: string
ports:
description: "Ports is an optional field that restricts
the rule to only apply to traffic that has a source (destination)
port that matches one of these ranges/values. This value
is a list of integers or strings that represent ranges
of ports. \n Since only some protocols have ports, if
any ports are specified it requires the Protocol match
in the Rule to be set to \"TCP\" or \"UDP\"."
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
selector:
description: "Selector is an optional field that contains
a selector expression (see Policy for sample syntax).
\ Only traffic that originates from (terminates at) endpoints
matching the selector will be matched. \n Note that: in
addition to the negated version of the Selector (see NotSelector
below), the selector expression syntax itself supports
negation. The two types of negation are subtly different.
One negates the set of matched endpoints, the other negates
the whole match: \n \tSelector = \"!has(my_label)\" matches
packets that are from other Calico-controlled \tendpoints
that do not have the label \"my_label\". \n \tNotSelector
= \"has(my_label)\" matches packets that are not from
Calico-controlled \tendpoints that do have the label \"my_label\".
\n The effect is that the latter will accept packets from
non-Calico sources whereas the former is limited to packets
from Calico-controlled endpoints."
type: string
serviceAccounts:
description: ServiceAccounts is an optional field that restricts
the rule to only apply to traffic that originates from
(or terminates at) a pod running as a matching service
account.
properties:
names:
description: Names is an optional field that restricts
the rule to only apply to traffic that originates
from (or terminates at) a pod running as a service
account whose name is in the list.
items:
type: string
type: array
selector:
description: Selector is an optional field that restricts
the rule to only apply to traffic that originates
from (or terminates at) a pod running as a service
account that matches the given label selector. If
both Names and Selector are specified then they are
AND'ed.
type: string
type: object
type: object
http:
description: HTTP contains match criteria that apply to HTTP
requests.
properties:
methods:
description: Methods is an optional field that restricts
the rule to apply only to HTTP requests that use one of
the listed HTTP Methods (e.g. GET, PUT, etc.) Multiple
methods are OR'd together.
items:
type: string
type: array
paths:
description: 'Paths is an optional field that restricts
the rule to apply to HTTP requests that use one of the
listed HTTP Paths. Multiple paths are OR''d together.
e.g: - exact: /foo - prefix: /bar NOTE: Each entry may
ONLY specify either a `exact` or a `prefix` match. The
validator will check for it.'
items:
description: 'HTTPPath specifies an HTTP path to match.
It may be either of the form: exact: <path>: which matches
the path exactly or prefix: <path-prefix>: which matches
the path prefix'
properties:
exact:
type: string
prefix:
type: string
type: object
type: array
type: object
icmp:
description: ICMP is an optional field that restricts the rule
to apply to a specific type and code of ICMP traffic. This
should only be specified if the Protocol field is set to "ICMP"
or "ICMPv6".
properties:
code:
description: Match on a specific ICMP code. If specified,
the Type value must also be specified. This is a technical
limitation imposed by the kernel's iptables firewall,
which Calico uses to enforce the rule.
type: integer
type:
description: Match on a specific ICMP type. For example
a value of 8```
Run the command to initialize `Calico`:

```bash
kubectl apply -f calico.yaml
```
Helm 3 is used here; note that the install script fetches the latest release:

```bash
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```
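As a quick sanity-check sketch (not part of the official script), you can confirm the installed Helm is a v3 release. `helm version --short` prints something like `v3.x.y`; below, a sample string stands in for the live output:

```bash
# Sanity check: confirm the installed Helm is v3.
# A sample string stands in for live output; in practice use:
#   ver=$(helm version --short)
ver='v3.10.2+g50f003e'
major=${ver%%.*}   # keep text before the first dot -> "v3"
major=${major#v}   # drop the leading "v"           -> "3"
echo "helm major version: $major"
```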
By default, K8s can use localdisk as the default storage; here we instead use the open-source project Hwameistor for local disk management.
Project introduction: https://hwameistor.io Installation guide: https://hwameistor.io/cn/docs/quick_start/install/deploy
Hwameistor is installed with Helm, so the Helm installation above must be completed first:
```bash
helm repo add hwameistor http://hwameistor.io/hwameistor
helm repo update hwameistor
helm pull hwameistor/hwameistor --untar
```
To install from a different image registry, use `--set` to change these two values: `global.k8sImageRegistry` and `global.hwameistorImageRegistry`.
Note that the default registries are quay.io and ghcr.io. If they are unreachable, try the mirrors provided by DaoCloud: quay.m.daocloud.io and ghcr.m.daocloud.io.
```bash
$ helm install hwameistor ./hwameistor \
    -n hwameistor --create-namespace \
    --set global.k8sImageRegistry=k8s-gcr.m.daocloud.io \
    --set global.hwameistorImageRegistry=ghcr.m.daocloud.io
```
Check that all Hwameistor Pods are Running:

```bash
$ kubectl -n hwameistor get pod
NAME                                                       READY   STATUS
hwameistor-local-disk-csi-controller-665bb7f47d-6227f      2/2     Running
hwameistor-local-disk-manager-5ph2d                        2/2     Running
hwameistor-local-disk-manager-jhj59                        2/2     Running
hwameistor-local-disk-manager-k9cvj                        2/2     Running
hwameistor-local-disk-manager-kxwww                        2/2     Running
hwameistor-local-storage-csi-controller-667d949fbb-k488w   3/3     Running
hwameistor-local-storage-csqqv                             2/2     Running
hwameistor-local-storage-gcrzm                             2/2     Running
hwameistor-local-storage-v8g7t                             2/2     Running
hwameistor-local-storage-zkwmn                             2/2     Running
hwameistor-scheduler-58dfcf79f5-lswkt                      1/1     Running
hwameistor-webhook-986479678-278cr                         1/1     Running
```
local-disk-manager and local-storage are DaemonSets, so there should be one Pod of each on every Kubernetes node.
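As an illustrative sketch (not from the Hwameistor docs), the Running check can be automated by filtering the STATUS column; a sample stands in here for live `kubectl` output:

```bash
# Count pods whose STATUS column is not "Running".
# The sample stands in for: kubectl -n hwameistor get pod --no-headers
sample='hwameistor-local-disk-manager-5ph2d   2/2   Running
hwameistor-local-disk-manager-jhj59   2/2   Running
hwameistor-local-storage-csqqv        2/2   Running'
not_running=$(printf '%s\n' "$sample" | awk '$3 != "Running" {n++} END {print n+0}')
echo "pods not Running: $not_running"
```

A result of 0 means every listed Pod is healthy.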
Once everything is healthy, continue by checking the StorageClass:

```bash
$ kubectl get storageclass hwameistor-storage-lvm-hdd
NAME                                   PROVISIONER         RECLAIMPOLICY
hwameistor-storage-lvm-hdd (default)   lvm.hwameistor.io   Delete
```
By default, disks in use should show a PHASE of `Bound` on both the `LocalDiskNode` and `LocalDisk` resources:
```bash
$ kubectl get localdisknodes
NAME               NODEMATCH          PHASE
master01-k8s-com   master01-k8s-com   Bound
node01-k8s-com     node01-k8s-com     Bound
node02-k8s-com     node02-k8s-com     Bound

$ kubectl get localdisks
NAME                   NODEMATCH          CLAIM              PHASE
master01-k8s-com-vda   master01-k8s-com                      Bound
master01-k8s-com-vdb   master01-k8s-com   master01-k8s-com   Bound
node01-k8s-com-vda     node01-k8s-com                        Bound
node01-k8s-com-vdb     node01-k8s-com     node01-k8s-com     Bound
node02-k8s-com-vda     node02-k8s-com                        Bound
node02-k8s-com-vdb     node02-k8s-com     node02-k8s-com     Bound
```
Add the annotation that marks this StorageClass as the default:

```bash
kubectl patch storageclasses.storage.k8s.io hwameistor-storage-lvm-hdd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
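To confirm the patch took effect, you can look for the `(default)` marker that `kubectl get storageclass` appends to the class name; a sketch using a sample line in place of live output:

```bash
# Check for the "(default)" marker in `kubectl get storageclass` output.
# The sample line stands in for live output.
line='hwameistor-storage-lvm-hdd (default)   lvm.hwameistor.io   Delete'
case "$line" in
  *'(default)'*) echo "default StorageClass is set" ;;
  *)             echo "no default StorageClass" ;;
esac
```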
Create the storage pool with the command below; be sure to substitute your own node names:

```bash
$ helm template ./hwameistor \
    -s templates/post-install-claim-disks.yaml \
    --set storageNodes='{master01-k8s-com,node01-k8s-com,node02-k8s-com}' \
    | kubectl apply -f -
```
After creation succeeds, check the LocalDiskClaims (`ldc`); they should all be `Bound`:

```bash
$ kubectl get ldc
```
Metrics server
DCE 5.0, which we install next, requires metrics-server. Save the content below as metrics-server.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
Run the install command:
kubectl apply -f metrics-server.yaml
After a successful installation, the following command shows each node's resource usage:
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master01-k8s-com 218m 5% 5610Mi 35%
node01-k8s-com 445m 11% 9279Mi 59%
node02-k8s-com 464m 11% 9484Mi 60%
The following steps walk you through a complete installation of DaoCloud Enterprise 5.0 Community Edition; pay close attention to the details.
DCE provides a one-click offline tool for the dependencies, tested to run stably on CentOS 7 and CentOS 8. If you are on either of those systems, you can use the script below.
curl -LO https://proxy-qiniu-download-public.daocloud.io/DaoCloud_Enterprise/dce5/install_prerequisite.sh
chmod +x install_prerequisite.sh
sudo bash install_prerequisite.sh online community
You can also install these dependencies manually.
Note that dce5-installer must be set up on a Master node; it is best to download it directly onto that node.
Find the latest version here: https://docs.daocloud.io/download/dce5/#_1
# Assuming VERSION is v0.3.28; check the link above for the latest version
$ export VERSION=v0.3.28
$ curl -Lo ./dce5-installer https://proxy-qiniu-download-public.daocloud.io/DaoCloud_Enterprise/dce5/dce5-installer-$VERSION
$ chmod +x dce5-installer
If the proxy-qiniu-download-public.daocloud.io link stops working, use qiniu-download-public.daocloud.io instead.
Save the configuration below as clusterconfig.yaml. If you install using NodePort, no configuration file is needed.
apiVersion: provision.daocloud.io/v1alpha1
kind: ClusterConfig
spec:
loadBalancer: metallb
istioGatewayVip: 10.6.229.10/32 # VIP of the Istio gateway, also the browser-facing IP of the DCE 5.0 console
insightVip: 10.6.229.11/32 # VIP used by the Global cluster's Insight-Server to scrape monitoring metrics from sub-clusters
If you use a configuration file, note that MetalLB must be installed beforehand; that step is left to you.
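As a reminder of what that prerequisite involves, here is a minimal MetalLB layer-2 sketch (the pool name and address range are placeholders; adjust them to your network, and make sure the VIPs in clusterconfig.yaml fall inside an advertised pool):

```yaml
# Hypothetical MetalLB L2 setup (metallb.io/v1beta1 CRDs).
# Install MetalLB first, then apply an address pool covering the DCE 5.0 VIPs.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: dce-vip-pool          # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 10.6.229.10-10.6.229.11 # must cover istioGatewayVip and insightVip
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: dce-vip-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - dce-vip-pool
```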
Choose one of the install commands below as needed; pass the configuration file with -c if you created one:
# without a configuration file
$ ./dce5-installer install-app
# with a loadBalancer specified in the configuration file
$ ./dce5-installer install-app -c clusterconfig.yaml
If you see the page below, the installation succeeded; the default credentials are admin/changeme.
Usually the cluster's access address after installation is an intranet address. When you later expose the console to the public internet, the login page will keep redirecting back to the intranet address, because the DCE 5.0 reverse-proxy address defaults to whatever was configured at install time. Update it as follows:
Set environment variables for later use:
# your reverse-proxy address, e.g. `export DCE_PROXY="https://demo-alpha.daocloud.io"`
export DCE_PROXY="https://domain:port"
# backup file for the helm --set values
export GHIPPO_VALUES_BAK="ghippo-values-bak.yaml"
# get the current ghippo chart version
export GHIPPO_HELM_VERSION=$(helm get notes ghippo -n ghippo-system | grep "Chart Version" | awk -F ': ' '{ print $2 }')
Update the Helm repo:
helm repo update ghippo
Back up the --set values:
helm get values ghippo -n ghippo-system -o yaml > ${GHIPPO_VALUES_BAK}
Edit the file with vim and save it:
$ vim ${GHIPPO_VALUES_BAK}
USER-SUPPLIED VALUES:
...
global:
...
reverseProxy: ${DCE_PROXY} # only this line needs to change
Apply the configuration with helm upgrade:
helm upgrade ghippo ghippo/ghippo \
-n ghippo-system \
-f ${GHIPPO_VALUES_BAK} \
--version ${GHIPPO_HELM_VERSION}
Restart the global-management pods with kubectl so the configuration takes effect:
kubectl rollout restart deploy/ghippo-apiserver -n ghippo-system
kubectl rollout restart statefulset/ghippo-keycloak -n ghippo-system
I'm proud to recommend CSKefu (春松客服), the first purely community-driven open-source project whose team I helped lead; it currently ranks first among Chinese open-source projects on GitHub.
We are now planning the V8 product iteration, which will be a brand-new upgrade; follow our progress in the cskefu project.
You are also welcome to join the team.
As of October 2022, CSKefu had been deployed more than 18,000 times in enterprises, with over 500 customers live, making it the most popular Chinese-language open-source customer-service system on GitHub. It is widely praised for being open source, cloud-native, and feature-rich, and won the Most Valuable Project award on Gitee.
For open-source customer service, I hope to grow the CSKefu platform into an open ecosystem that attracts thousands of developers and enterprises to use it and contribute to it.
CSKefu is a customer-service operating system built on the open-source spirit; we promise to stay open source forever and to build a complete customer-service ecosystem together with developers.
We want to be the Kubernetes of customer-service systems.
Front-line agents use the customer-service system all day, every day; its capabilities and efficiency directly affect their work.
Developers can build a wide range of enhancement plugins on top of CSKefu's open capabilities.
CSKefu's core goal is not to build the customer-service system we imagine, but a lightweight intelligent customer-service framework plus a platform with a rich plugin ecosystem.
Traditional customer-service systems ship a huge number of complex features and settings. Users can usually find the feature they want, but the redundant and sometimes logically conflicting features, while not wrong in themselves, greatly raise the cost of use for front-line agents.
For actual users, we want the system's features to cover exactly what they need, while remaining easy to extend, trim, and upgrade.
So CSKefu's functional boundary is: a high-performance, stable, general-purpose, production-grade intelligent customer-service foundation and platform.
Enhancement plugins
CSKefu has now been in development for two years and has iterated to V7; its features and documentation entry points are shown below.
In CSKefu, a system administrator manages the agents, permissions, roles, contacts, agent monitoring, and other resources of their organization. There are two types: super administrators and ordinary administrators; the latter are simply called "administrators".
The super administrator is built into CSKefu. After an instance is initialized, the default super administrator username is admin and the password is admin1234, and there is exactly one such account. The first thing IT staff should do after setting up an instance is change this password to keep the system secure. The super administrator can update every system property and read and write all data, and is the most privileged user in CSKefu.
Install and start the system, open the CSKefu console, enter the initial super administrator credentials (username: admin, password: admin1234), and click Sign in.
The super administrator also maintains the upper levels of the organization tree. Organizations form a tree; by default there are none. After CSKefu is set up, the super administrator creates the root node (e.g. the head office, then its subsidiaries), maintains that hierarchy, and then creates other administrator accounts. Multiple ordinary administrator accounts can exist; each belongs to an organization and can manage only that organization and its sub-organizations.
By switching between organizations, a system administrator can view each organization's data.
CSKefu's permission system consists of organizations, roles, and accounts.
Roles are customizable and define read/write access to a set of resources. Only the super administrator can create or delete roles or change their resource grants; ordinary [administrators] may only use roles, i.e. add or remove accounts in them.
When an account is added to a role, because the account also belongs to an organization, its effective permissions are the role's resource read/write grants scoped to the account's organization and its sub-organizations.
Permission checks combine the role with the agent's organization:
Suppose the organization tree looks like this:
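The original figure with the example tree is not included here; as a sketch of the scoping rule just described (all names and the data layout are hypothetical, not CSKefu's actual schema), an account's effective scope is its own organization plus all descendants:

```python
# Sketch of CSKefu-style permission scoping (hypothetical data):
# an account in a role may act on resources of its own org and all sub-orgs.

def descendants(tree, org):
    """Return the org itself plus every organization below it."""
    result = {org}
    for child in tree.get(org, []):
        result |= descendants(tree, child)
    return result

def can_access(tree, account_org, role_resources, resource, resource_org):
    """Access requires the role to grant the resource AND the resource's
    organization to fall inside the account's org subtree."""
    return resource in role_resources and resource_org in descendants(tree, account_org)

# Hypothetical tree: head office -> (subsidiary A, subsidiary B); A -> support team
tree = {"HQ": ["SubA", "SubB"], "SubA": ["Support"]}
role = {"contacts:read", "contacts:write"}

print(can_access(tree, "SubA", role, "contacts:read", "Support"))  # True: inside own subtree
print(can_access(tree, "SubA", role, "contacts:read", "SubB"))     # False: sibling org
```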
System -> Overview -> Users and Groups -> Organizations -> Create department; a skill group can be enabled or disabled here
Department: name of the department to create
Parent organization: select the parent department
Enable skill group: a skill group is a group of agents serving the same channel; CSKefu supports automatic distribution policies that connect visitors with agents, known as the ACD module
Open the department list
System -> Overview -> Users and Groups -> Organizations
System -> Overview -> Users and Groups -> Organizations -> Edit department
System -> Overview -> Users and Groups -> Organizations -> Delete department
System -> Overview -> Users and Groups -> Organizations -> select a department -> Region settings
System -> Overview -> Users and Groups -> System Roles -> Create role
Only the [super administrator] can create roles.
Glossary:
Role: user permissions are controlled through roles; a role can be thought of as a user group with a particular set of permissions;
One or more users can be added to a role;
A role can be granted a set of system permissions, which effectively grants them to every user in the role;
Once created, roles are shared across all organizations. An organization's administrators can only manage role membership for accounts in their own organization and its sub-organizations.
System -> Overview -> Users and Groups -> System Roles -> Edit role
Only the [super administrator] can edit roles.
System -> Overview -> Users and Groups -> System Roles -> Delete role
Only the [super administrator] can delete roles.
Tips:
Email: must be a valid format. Password: alphanumeric, at least 8 characters, entered manually. Phone number: must be unique across the whole system.
Users are either administrators or ordinary users.
Agents are either regular agents or SIP agents; both ordinary users and administrators can become agents, and a SIP agent builds on a multimedia agent.
Every account must be assigned to a department and linked to a role before it can view or manage resources; read the [Organizations] and [Roles] sections carefully.
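The account rules above (valid email format, alphanumeric password of at least 8 characters, system-wide-unique phone number) can be sketched as a simple validator; this is an illustration of the stated rules, not CSKefu's actual implementation:

```python
import re

def validate_account(email, password, phone, existing_phones):
    """Check the account rules described above; returns a list of problems."""
    problems = []
    # Email: must look like a valid address
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        problems.append("invalid email format")
    # Password: alphanumeric, at least 8 characters
    if len(password) < 8 or not password.isalnum():
        problems.append("password must be alphanumeric, >= 8 chars")
    # Phone: unique across the whole system
    if phone in existing_phones:
        problems.append("phone number already in use")
    return problems

print(validate_account("agent@example.com", "abc12345", "13800000000", set()))  # []
```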
Create an ordinary user
Create a multimedia agent
Create an administrator
System -> Overview -> Users and Groups -> User Accounts
Click Edit or Delete in the Actions column to edit or delete any user in the current list.
System -> Overview -> Users and Groups -> Organizations -> select a department -> Add user to this department
Existing user accounts can be added to a specific department.
A user account can belong to only one department.
System -> Overview -> Users and Groups -> System Roles -> Add user to role
CSKefu supports many channels, because visitors may arrive from many sources. This is both the trend in today's contact centers and a challenge for intelligent customer-service systems: as information technology, internet communication, and chat tools evolve, an enterprise's customers become more scattered, especially as marketing platforms diversify.
Multi-channel adaptability is a key feature of CSKefu. The supported channel types are listed in the [Channel Management] submenu.
Channel | Description | Availability |
---|---|---|
Web channel | Embed the CSKefu web-channel HTML in a page to get a chat widget; visitors and agents get a real-time connection, and agents can invite visitors to chat | Open source, free, ships with the base code |
Facebook Messenger channel | Also called the "Messenger plugin" or "ME" plugin. Messenger is Facebook's flagship instant messaging product, available on many platforms and widely used thanks to its innovative design, excellent user experience, and the world's largest social network. The CSKefu Messenger plugin helps enterprises do marketing and customer service on Facebook | Open source, free, ships with the base code |
The web chat widget adapts to mobile and desktop browsers; it can be embedded for access from desktop, mobile, WeChat, and other channels.
To get the web script: System -> Administration -> Agent Access -> Website list -> click "Chatopera official site" -> Basic settings -> Access;
Copy the code shown in the figure into a page of a web project, for example as in the screenshot below.
Open that web page in a browser.
[Note] The page must be served over http(s); opening a local HTML file directly in the browser is not supported.
Click the "Online Support" button on the page; a chat window appears and you can chat with an agent as a visitor.
[Note] CSKefu ships an example test web client, reachable at http://IP:PORT/testclient.html.
Messenger is Facebook's flagship instant messaging product. It runs on many platforms and is widely used thanks to its innovative design, excellent user experience, and the world's largest social network; see the official Facebook Messenger links for more.
The CSKefu Messenger plugin helps enterprises do marketing and customer service on the Facebook platform.
First, companies going global that want to acquire customers or serve them online will inevitably use Facebook ads and Messenger, because that is where their target and potential customers are. However, over-commercializing the platform would hurt the in-network user experience, for example flooding users with large volumes of irrelevant, uninteresting ads. Facebook therefore builds many design choices and restrictions into ads and Messenger to balance commercialization with people's social needs, which is one key reason it became the world's largest social network.
Second, it helps to understand Messenger's application scenarios, e.g. the intelligent customer-service and OTN service that CSKefu delivered for JiuJiu Interactive: chatopera-me-jiujiu2020.
Before walking through the plugin's usage, note that CSKefu implements it on top of the official Facebook Messenger Platform developer APIs, so the plugin is secure, reliable, stable, powerful, and continuously updated.
https://developers.facebook.com/docs/messenger-platform
Since the CSKefu open-source community and technical committee were founded this year, we have reviewed the project's history and decided to rebuild CSKefu entirely, so V8 will be a completely repositioned release.
CSKefu supports several deployment methods. This document uses Docker and Docker Compose, a quick and simple approach suitable for evaluating, developing, testing, and running CSKefu.
Update: we are working on Helm Chart based installation so enterprises can run CSKefu on Kubernetes container platforms more conveniently.
Important: after deploying, you must initialize the system following the "System Initialization" document before use; skipping initialization causes problems such as agents failing to be assigned.
Item | Notes | |
---|---|---|
Operating system | Linux (CentOS 7.x, Ubuntu 16.04+, etc.); Ubuntu LTS recommended | |
Docker version | Docker 1.13.x or later | |
Docker Compose version | 1.23.x or later | |
Firewall ports | 8035, 8036 | |
Other software | git | |
Memory | dev/test >= 8 GB | production >= 16 GB |
CPU cores | dev/test >= 2 | production >= 4 |
Disk | >= 20 GB |
git clone -b master https://github.com/cskefu/cskefu.git cskefu
cd cskefu
cp sample.env .env # open .env in a text editor and adjust the settings as needed
In the commands above, master is the current stable version, i.e. the master branch of cskefu/cskefu; the branches are described below.
Branch | Notes |
---|---|
master | current stable version |
develop | current development version |
Specify the branch you need when cloning; this deployment guide targets the master branch.
The deployment-related environment variables below can be overridden in .env.
KEY | Default | Description |
---|---|---|
COMPOSE_FILE | docker-compose.yml | Compose file describing the services; keep the default |
COMPOSE_PROJECT_NAME | cskefu | Container name prefix for this instance; any string works |
MYSQL_PORT | 8037 | Host port mapped to the MySQL database |
REDIS_PORT | 8041 | Host port mapped to Redis |
ES_PORT1 | 8039 | Host port mapped to the ElasticSearch REST API |
ES_PORT2 | 8040 | Host port mapped to the ElasticSearch discovery port |
CC_WEB_PORT | 8035 | Host port mapped to the CSKefu web service |
CC_SOCKET_PORT | 8036 | Host port mapped to the CSKefu SocketIO service |
ACTIVEMQ_PORT1 | 8051 | ActiveMQ port |
ACTIVEMQ_PORT2 | 8052 | ActiveMQ port |
ACTIVEMQ_PORT3 | 8053 | ActiveMQ port |
DB_PASSWD | 123456 | Database password, applied to MySQL, Redis, and ActiveMQ |
LOG_LEVEL | INFO | Log level: WARN, ERROR, INFO, or DEBUG |
For the settings above, make sure each default port is not already taken on the host; choose a reasonably complex database password; avoid changing CC_WEB_PORT and CC_SOCKET_PORT; and in production set LOG_LEVEL to at least WARN.
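To follow the advice that each default port must be free on the host, a quick local check can be scripted; this is a minimal sketch (the port list mirrors the table above; adjust it if you changed `.env`):

```python
import socket

# Default host ports from the table above
DEFAULT_PORTS = [8035, 8036, 8037, 8039, 8040, 8041, 8051, 8052, 8053]

def port_is_free(port, host="127.0.0.1"):
    """True if nothing is listening on host:port (i.e. we can bind it ourselves)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

for port in DEFAULT_PORTS:
    status = "free" if port_is_free(port) else "IN USE"
    print(f"{port}: {status}")
```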
Environment variables for business-level features:
KEY | Default | Description |
---|---|---|
TONGJI_BAIDU_SITEKEY | placeholder | Record and view page visits with Baidu Tongji; disabled by default |
EXTRAS_LOGIN_BANNER | off | Notice shown above the login page; off (default) hides it |
EXTRAS_LOGIN_CHATBOX | off | Adds a CSKefu web-channel chat button to the login page, e.g. https://oh-my.cskefu.com/im/xxx.html; off (default) hides it |
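Putting the two tables together, a hypothetical `.env` override might look like this (the values are examples only; as advised above, keep CC_WEB_PORT and CC_SOCKET_PORT at their defaults):

```
# Sample .env overrides (illustrative values, not recommendations)
COMPOSE_PROJECT_NAME=cskefu-prod      # container name prefix
MYSQL_PORT=8037
REDIS_PORT=8041
DB_PASSWD=use-a-long-random-password  # applied to MySQL, Redis, ActiveMQ
LOG_LEVEL=WARN                        # at least WARN in production
TONGJI_BAIDU_SITEKEY=placeholder
EXTRAS_LOGIN_BANNER=off
```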
cd cskefu # enter the cloned directory
docker-compose pull # pull the images
docker-compose up -d contact-center # start the service
docker-compose logs -f contact-center
When log lines like the following appear, the service has started.
INFO c.c.socketio.SocketIOServer - SocketIO server started at port: 8036 [nioEventLoopGroup-2-1]
INFO com.chatopera.cc.Application - Started Application in 35.319 seconds (JVM running for 42.876) [main]
Then open http://YOUR_IP:CC_WEB_PORT/ in a browser. Default administrator account: admin, password: admin1234
CSKefu provides an online demo environment where you can try out the latest features.
Item | Notes |
---|---|
Website | https://demo.cskefu.com |
Default username | admin |
Default password | admin1234 |
Note: please do not change the username or password. The demo environment may be reset at any time, so safeguard your own data.
Sundays 10:00 UTC+8 (in Chinese), every other week. Convert to your own time zone.
For a large enterprise, running dozens of internal business systems is completely normal.
Against that backdrop, a dedicated service that cleanly does just one thing is wonderful: managing people well.
Verification flow diagram
While reorganizing my note-taking tools, I recently migrated everything from Yuque to GitHub; I may find time to write up the process.
References:
Service discovery is the consumer side's ability to automatically discover the list of service addresses, and it is a key capability of any microservice framework: with automated discovery, microservices can communicate without knowing each other's deployment locations or IP addresses.
Dubbo provides a client-based service discovery mechanism that relies on a third-party registry, such as Nacos or ZooKeeper, to coordinate the discovery process.
Dubbo Mesh aims to provide a complete Mesh solution for the Dubbo ecosystem, including a customized control plane and customized data-plane options. The Dubbo control plane extends the mainstream Istio and supports richer traffic-governance rules as well as Dubbo's application-level service discovery model. On the data plane, you can either run an Envoy sidecar (a Dubbo SDK + Envoy deployment) or use Dubbo Proxyless mode, where Dubbo talks to the control plane directly.
New features in Dubbo 3.0 include:
import pandas  # required for read_csv below
df = pandas.read_csv('somefile.txt')
df = df.fillna(0)  # replace missing values with 0
There are many hidden pitfalls.
REDASH_COOKIE_SECRET=a07cca441ab9f28b66c589f3118e0de48469b1bc6a5036eade7badbed305d96e
POSTGRES_HOST_AUTH_METHOD=trust
REDASH_REDIS_URL=redis://redis:6379/0
REDASH_DATABASE_URL=postgresql://postgres
Create a postgres-data directory and point the corresponding path in docker-compose.yml at it to persist the database.
When you run psql, PostgreSQL by default uses the current system user as the role, but the default psql superuser role is postgres.
https://redash.io/help/open-source/setup https://redash.io/help/open-source/dev-guide/docker https://docs.victoriametrics.com/url-examples.html#apiv1exportcsv
The setup involves three images: redis, postgres, and redash. The core is redash, so that is the image whose version matters most.
Upgrading redash is fairly easy: swap in the new server image, then upgrade the database.
I have tested upgrading from v8 to v10 and from v9 to v10; both worked fine.
docker-compose stop server scheduler scheduled_worker adhoc_worker
docker-compose pull # pull the new image versions
docker-compose run --rm server manage db upgrade # run the database migrations
docker-compose up -d
Because our ES endpoint uses https with a self-signed certificate, requests to it run into problems, so I updated the elasticsearch query-runner plugin and pushed the image to my personal Docker Hub: https://hub.docker.com/r/samzong/redash
The downside is that Elasticsearch can no longer be selected as a data source in the UI; I have not had time to dig into why.
It can still be created through the redash API at /api/data_sources:
{
"options": {
"basic_auth_password": "-----",
"basic_auth_user": "elastic",
"server": "https://10.6.51.101:31001/",
"skip_tls_verification": true
},
"type": "elasticsearch",
"name": "test-es"
}
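A hedged sketch of calling that endpoint with Python's standard library — the base URL and API key are placeholders, and the `Authorization: Key <api_key>` header is an assumption based on Redash's usual user-API-key scheme; the payload matches the JSON above:

```python
import json
import urllib.request

def build_request(base_url, api_key, payload):
    """Build a POST request to Redash's /api/data_sources endpoint (not sent here)."""
    return urllib.request.Request(
        url=f"{base_url}/api/data_sources",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Key {api_key}",  # Redash user API key (assumption)
        },
        method="POST",
    )

payload = {
    "options": {
        "basic_auth_password": "-----",
        "basic_auth_user": "elastic",
        "server": "https://10.6.51.101:31001/",
        "skip_tls_verification": True,
    },
    "type": "elasticsearch",
    "name": "test-es",
}

req = build_request("http://localhost:5000", "YOUR_API_KEY", payload)
print(req.full_url)  # http://localhost:5000/api/data_sources
# To actually send it: urllib.request.urlopen(req)
```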
Once created, the data source can be edited in the UI.