Architecture
1. Introduction
The Kubernetes cluster is the central part of the infrastructure.
The infrastructure hosts two groups of applications:
- Applications that manage the cluster itself.
- Applications that run inside the Kubernetes cluster and are exposed for external use.
2. Inventory
Cluster groups:
- master-pool (3 hosts)
- database-pool (3 hosts)
- search-engine-pool (3 hosts)
- node-pool (2 hosts)
- gateway-pool (3 hosts)
- supervisor (1 host)
3. Architecture


4. Applications
- Apiserver
- Scheduler
- Controller-manager
- Kube-proxy
- Kubelet
- CoreDNS
- Nodelocaldns
- Calico
5. Installation
The installation of the cluster is handled by Ansible playbooks and roles defined in the Protobox framework.
- name: Install master pool
  hosts: supervisor-1
  become: yes
  roles:
    - role: kubernetes/kubespray-init

- name: Install Kubernetes API Server LoadBalancer
  hosts: kube_master
  become: yes
  roles:
    - role: network-setup
      vars:
        netplan_init: true
        netplan_setup: true
        reboot: true
    - role: etc-hosts
    - role: keepalived
    - role: haproxy
      vars:
        cluster: kube_master
        frontend_port: "{{ loadbalancer_apiserver.port }}"
        backend_port: 6443
    - role: reboot

- name: Execute kubespray
  import_playbook: roles/kubernetes/kubespray/cluster.yml

- name: Setup calico
  hosts: master-1
  become: true
  roles:
    - role: kubernetes/cni
    - role: kubernetes/dashboard

- import_playbook: playbooks-proxiserver-pool/proxiserver-reload.yml
Steps
1. Use of the role kubernetes/kubespray-init
This role clones Kubespray (the official Ansible-based Kubernetes cluster installer) and integrates it into the Protobox framework.
2. Installation of the Load-balancer
The control plane is replicated across the three machines of the master-pool, each of which exposes an API server. Clients reach the API through a load-balancer that is itself replicated on each master, in front of the API servers. Two tools enable this configuration:
- KeepAlived to maintain a FloatingIP.
- HAProxy to set up the load-balancer.
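The KeepAlived side of this setup can be pictured as a VRRP election: the FloatingIP sits on the highest-priority master that is still alive, and moves when that master fails. The sketch below is a simplified Python illustration of that behavior, not the actual KeepAlived implementation; the host names match the master-pool, but the priority values are hypothetical.

```python
def vip_holder(nodes):
    """Return the name of the highest-priority live node (simplified VRRP election)."""
    alive = [n for n in nodes if n["alive"]]
    return max(alive, key=lambda n: n["priority"])["name"] if alive else None

# Hypothetical priorities; in KeepAlived these come from vrrp_instance config.
masters = [
    {"name": "master-1", "priority": 110, "alive": True},
    {"name": "master-2", "priority": 100, "alive": True},
    {"name": "master-3", "priority": 90, "alive": True},
]
print(vip_holder(masters))   # master-1 holds the FloatingIP
masters[0]["alive"] = False  # master-1 fails
print(vip_holder(masters))   # master-2 takes over
```

HAProxy then forwards traffic arriving on the FloatingIP to the API servers on all three masters, so losing any single master interrupts neither the address nor the backends.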
3. Execution of Kubespray
Kubespray configures the installation from the following inventory files.
--------------------------------------------------------
path: inventories/protobox/main.yml
--------------------------------------------------------
...
kube_control_plane:
  hosts:
    master-1:
    master-2:
    master-3:
kube_node:
  vars:
    gateway-i: 192.168.1.1
    gateway-o: 192.168.0.33
  hosts:
    gateway-1:
    gateway-2:
etcd:
  hosts:
    master-1:
    master-2:
    master-3:
k8s_cluster:
  children:
    kube_master:
    kube_node:
calico_rr:
  hosts: {}
...
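The `k8s_cluster` group above has no hosts of its own; Ansible resolves it by flattening its `children` groups. The following sketch mirrors that resolution on a hypothetical in-memory copy of the groups (using `kube_control_plane`, which the inventory's `kube_master` child aliases):

```python
# Hypothetical in-memory mirror of the inventory groups above.
groups = {
    "kube_control_plane": {"hosts": ["master-1", "master-2", "master-3"]},
    "kube_node": {"hosts": ["gateway-1", "gateway-2"]},
    "etcd": {"hosts": ["master-1", "master-2", "master-3"]},
    "k8s_cluster": {"children": ["kube_control_plane", "kube_node"]},
}

def members(group, groups):
    """Resolve a group to its flat, deduplicated host list, following `children`."""
    g = groups.get(group, {})
    hosts = list(g.get("hosts", []))
    for child in g.get("children", []):
        hosts += members(child, groups)
    return sorted(set(hosts))

print(members("k8s_cluster", groups))
# ['gateway-1', 'gateway-2', 'master-1', 'master-2', 'master-3']
```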
--------------------------------------------------------
path: inventories/protobox/group_vars/all/all.yml
--------------------------------------------------------
bin_dir: /usr/local/bin
apiserver_loadbalancer_domain_name: "kube.plane.box"
loadbalancer_apiserver:
  address: 192.168.1.10
  port: 6442
loadbalancer_apiserver_port: 6443
loadbalancer_apiserver_healthcheck_port: 8081
no_proxy_exclude_workers: false
kube_webhook_token_auth: false
kube_webhook_token_auth_url_skip_tls_verify: false
ntp_enabled: false
ntp_manage_config: false
ntp_servers:
  - "0.pool.ntp.org iburst"
  - "1.pool.ntp.org iburst"
  - "2.pool.ntp.org iburst"
  - "3.pool.ntp.org iburst"
unsafe_show_logs: false
-----------------------------------------------------------------
path: inventories/protobox/group_vars/k8s-cluster/k8s-cluster.yml
-----------------------------------------------------------------
kube_config_dir: /etc/kubernetes
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
kube_cert_dir: "{{ kube_config_dir }}/ssl"
kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
kube_version: v1.25.3
local_release_dir: "/tmp/releases"
retry_stagger: 5
kube_owner: kube
kube_cert_group: kube-cert
kube_log_level: 2
credentials_dir: "{{ inventory_dir }}/credentials"
kube_network_plugin: calico
kube_network_plugin_multus: true
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_network_node_prefix: 24
enable_dual_stack_networks: false
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443
kube_proxy_mode: ipvs
kube_proxy_strict_arp: true
kube_proxy_nodeport_addresses: >-
  {%- if kube_proxy_nodeport_addresses_cidr is defined -%}
  [{{ kube_proxy_nodeport_addresses_cidr }}]
  {%- else -%}
  []
  {%- endif -%}
kube_encrypt_secret_data: false
cluster_name: cluster.local
ndots: 2
dns_mode: coredns
enable_nodelocaldns: true
enable_nodelocaldns_secondary: false
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
nodelocaldns_second_health_port: 9256
nodelocaldns_bind_metrics_host_ip: false
nodelocaldns_secondary_skew_seconds: 5
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
enable_coredns_k8s_endpoint_pod_names: false
resolvconf_mode: host_resolvconf
deploy_netchecker: false
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
container_manager: containerd
kata_containers_enabled: false
kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"
k8s_image_pull_policy: IfNotPresent
kubernetes_audit: false
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
podsecuritypolicy_enabled: false
volume_cross_zone_attachment: false
persistent_volumes_enabled: false
event_ttl_duration: "1h0m0s"
auto_renew_certificates: false
kubeadm_patches:
  enabled: false
  source_dir: "{{ inventory_dir }}/patches"
  dest_dir: "{{ kube_config_dir }}/patches"
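The `kube_apiserver_ip` and `skydns_server` values above are derived from `kube_service_addresses` via Ansible's `ipaddr` filter chain (`net | n | address`), which picks the n-th address of the service CIDR. The stdlib `ipaddress` module reproduces the same arithmetic:

```python
import ipaddress

def nth_address(cidr: str, n: int) -> str:
    """Equivalent of `cidr | ipaddr('net') | ipaddr(n) | ipaddr('address')`."""
    return str(ipaddress.ip_network(cidr)[n])

service_cidr = "10.233.0.0/18"  # kube_service_addresses
print(nth_address(service_cidr, 1))  # kube_apiserver_ip        -> 10.233.0.1
print(nth_address(service_cidr, 3))  # skydns_server            -> 10.233.0.3
print(nth_address(service_cidr, 4))  # skydns_server_secondary  -> 10.233.0.4
```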
--------------------------------------------------------
path: inventories/protobox/group_vars/all/containerd.yml
--------------------------------------------------------
containerd_registry_auth:
  - registry: registry.protobox
    username: XXXXX
    password: XXXXX
--------------------------------------------------------
path: inventories/protobox/group_vars/all/cri-o.yml
--------------------------------------------------------
crio_insecure_registries:
  - registry.protobox
crio_registry_auth:
  - registry: registry.protobox
    username: XXXXX
    password: XXXXX
4. Execution of kubernetes/kubectl-setup
This role performs additional configuration, such as installing kubectl and kubeadm.
5. Execution of kubernetes/cni
This role is used by Protobox to install the Kubernetes pod network (Calico).
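With `kube_pods_subnet: 10.233.64.0/18` and `kube_network_node_prefix: 24` from the inventory, the pod network is carved into /24 blocks that the CNI allocates per node. A quick check of that arithmetic with the stdlib `ipaddress` module:

```python
import ipaddress

# kube_pods_subnet split into kube_network_node_prefix-sized blocks (one per node).
pods = ipaddress.ip_network("10.233.64.0/18")
node_blocks = list(pods.subnets(new_prefix=24))

print(len(node_blocks))     # 64 node-sized blocks available
print(str(node_blocks[0]))  # 10.233.64.0/24, the first allocatable block
```

So the /18 pod subnet supports up to 64 nodes at ~254 pods each, comfortably above the current inventory size.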
6. Execution of kubernetes/dashboard