Deploying a Kubernetes Cluster
You can deploy a Kubernetes Cluster when a Cluster Manager (HA, AIO, or Inception) is already available. To deploy a Kubernetes Cluster:
- Set up the cluster configuration. The following is a sample UCS configuration for deploying a Kubernetes Cluster:
software cnf cee
 url <repo_url>
 user <user_name>
 password <password>
 sha256 <sha256_hash>
exit

# associating to Bare Metal environment
environments bare-metal
 ucs-server
exit

# General cluster configuration
clusters <cluster_name>
 environment bare-metal
 addons ingress bind-ip-address <bind_ip_address>
 addons cpu-partitioner enabled
 configuration master-virtual-ip <master_vip>
 configuration master-virtual-ip-interface <master_vip_interface_name>   #For example, eno1
 configuration allow-insecure-registry true
 node-defaults initial-boot default-user <username>   #For example, cloud-user
 node-defaults initial-boot default-user-ssh-public-key "<SSH_Public_Key>"
 node-defaults initial-boot default-user-password <password>
 node-defaults netplan template
 node-defaults initial-boot netplan ethernets eno1
  dhcp4 false
  dhcp6 false
  gateway4 <gateway_ipv4address>
  nameservers search [ <domain_name> ]
  nameservers addresses [ <ipv4address>...<ipv4address> ]
 exit
 node-defaults k8s ssh-username <username>
 node-defaults k8s ssh-connection-private-key "<SSH_Private_Key>"

 #initial-boot section of node-defaults
 node-defaults ucs-server host initial-boot networking static-ip netmask <ipv4_address>
 node-defaults ucs-server host initial-boot networking static-ip gateway <ipv4_address>
 node-defaults ucs-server host initial-boot networking static-ip dns <ipv4_address>
 node-defaults ucs-server cimc user <username>
 node-defaults ucs-server cimc password <password>
 node-defaults ucs-server cimc remote-management sol enabled
 node-defaults ucs-server cimc remote-management sol baud-rate <baud_rate>
 node-defaults ucs-server cimc remote-management sol comport <com_port>
 node-defaults ucs-server cimc remote-management sol ssh-port <ssh_port>
 node-defaults ucs-server cimc networking ntp enabled
 node-defaults ucs-server cimc networking ntp servers <ntp_server_url>
 exit
 node-defaults os proxy https-proxy <proxy_server>   #For example, http://proxy-wsa.esl.cisco.com:80
 node-defaults os proxy no-proxy <proxy_servers>
 node-defaults os ntp enabled
 node-defaults os ntp servers <ntp_server_url>   #For example, ntp.esl.cisco.com
 exit

 #node configuration
 ucs-server host initial-boot networking static-ip ipv4-address <ipv4address>
 ucs-server cimc ip-address <ipv4address>
 ucs-server cimc storage-adaptor create-virtual-drive true
 exit

 # control plane node configuration
 nodes <control_plane_node_name>   #For example, control-plane-1
  k8s node-type control-plane
  k8s ssh-ip <ipv4address>
  k8s node-labels <node_label/node_type>   #For example, smi.cisco.com/node-type oam
  exit
  ucs-server host initial-boot networking static-ip ipv4-address <ipv4address>
  ucs-server cimc ip-address <ipv4address>
  ucs-server cimc storage-adaptor create-virtual-drive true
 exit
 nodes <control_plane_node_name>   #For example, control-plane-2
  k8s node-type control-plane
  k8s ssh-ip <ipv4address>
  k8s node-labels <node_label/node_type>   #For example, smi.cisco.com/node-type oam
  exit
  ucs-server host initial-boot networking static-ip ipv4-address <ipv4address>
  ucs-server cimc ip-address <ipv4address>
  ucs-server cimc storage-adaptor create-virtual-drive true
 exit
 nodes <control_plane_node_name>   #For example, control-plane-3
  k8s node-type control-plane
  k8s ssh-ip <ipv4address>
  k8s node-labels <node_label/node_type>   #For example, smi.cisco.com/node-type oam
  exit
  ucs-server host initial-boot networking static-ip ipv4-address <ipv4address>
  ucs-server cimc ip-address <ipv4address>
  ucs-server cimc storage-adaptor create-virtual-drive true
 exit
 ops-centers cee <ops_center_name>   #For example, cee
  repository-local <repo_name>   #For example, cee-2020-02-0-i04
 exit
exit
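For reference, this is how one control-plane node entry from the sample might look with the placeholders filled in. All names and addresses below are hypothetical and only illustrate the expected format:

# Hypothetical values for illustration only
nodes control-plane-1
 k8s node-type control-plane
 k8s ssh-ip 10.0.1.11                           # SSH-reachable node address
 k8s node-labels smi.cisco.com/node-type oam    # node label key and value
 exit
 ucs-server host initial-boot networking static-ip ipv4-address 10.0.1.11
 ucs-server cimc ip-address 10.0.0.11           # CIMC management address
 ucs-server cimc storage-adaptor create-virtual-drive true
exit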
- Log in to the Cluster Manager CLI and enter the configuration mode, for example:
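A minimal sketch of this step; the address, user, and port are placeholders for your environment (many SMI deployments expose the Ops Center CLI over SSH on port 2022, but verify yours):

# Connect to the Cluster Manager CLI over SSH (address, user, and port are environment-specific)
ssh <admin_user>@<cluster_manager_cli_address> -p <cli_port>
# At the CLI prompt, enter configuration mode
config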
- Add the Kubernetes Cluster configuration to deploy the Kubernetes Cluster, as sketched below.
  Note: A sample Kubernetes Cluster configuration is provided here.
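A sketch of the two usual ways to enter the configuration; the file-loading command is an assumption based on the ConfD-style CLI and should be verified in your build:

# Option 1: in configuration mode, paste the cluster configuration shown above line by line.
# Option 2 (assumes ConfD-style file loading is available in your CLI):
load merge <path_to_saved_configuration>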
- Commit the configuration, for example:
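A sketch of the commit, using the standard commands of a ConfD-based CLI such as the Ops Center:

# Apply the candidate configuration (run from configuration mode)
commit
# Exit configuration mode once the commit is accepted
end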
- Monitor the progress of the synchronization:
  monitor sync-logs <cluster_name>
Note: The synchronization takes approximately 30 minutes to complete. The exact time depends on network speed, VM performance, and other factors.
The node names are added to /etc/hosts as part of the sync process. You can connect to the nodes using the node name from the control plane node, for example:
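Once the sync has populated /etc/hosts, you can open a session from the control plane node to a peer node by name; the user and node name below are the examples used in the sample configuration:

# From the control plane node, connect to another cluster node by name
ssh cloud-user@control-plane-2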