Triggering the Cluster Synchronization

You can trigger the cluster synchronization to complete the vCenter configuration. To synchronize the cluster, perform the following steps; a scripted sketch of the synchronization trigger follows the procedure.

  1. Trigger the cluster synchronization using the following configuration.

    configure 
         clusters cluster_name actions sync run 
         clusters cluster_name actions sync run debug true 
         clusters cluster_name actions sync logs 
         monitor sync-logs cluster_name 
         clusters cluster_name actions sync status 
         exit 

    Example:

    SMI Cluster Manager# clusters test1 actions sync run 
    SMI Cluster Manager# clusters test1 actions sync run debug true 
    SMI Cluster Manager# clusters test1 actions sync logs 
    SMI Cluster Manager# monitor sync-logs test1 
    SMI Cluster Manager# clusters test1 actions sync status 
  2. Upgrade the cluster using the following configuration.

    
    configure 
         clusters cluster_name actions sync run upgrade-strategy concurrent 
         clusters cluster_name actions sync run upgrade-strategy concurrent debug true reset-k8s-nodes true 
         clusters cluster_name actions sync run upgrade-strategy rolling 
         clusters cluster_name actions sync run upgrade-strategy rolling debug true reset-k8s-nodes true 
         exit 
    

    Example:

    SMI Cluster Manager# clusters test1 actions sync run upgrade-strategy concurrent 
    SMI Cluster Manager# clusters test1 actions sync run upgrade-strategy concurrent debug true reset-k8s-nodes true 
    SMI Cluster Manager# clusters test1 actions sync run upgrade-strategy rolling 
    SMI Cluster Manager# clusters test1 actions sync run upgrade-strategy rolling debug true reset-k8s-nodes true 
    
  3. Redeploy the nodes using the following configuration.

    
    configure 
         clusters cluster_name actions sync run upgrade-strategy rolling force-vm-redeploy true debug true 
         clusters cluster_name actions sync run force-vm-redeploy true purge-data-disks true 
         exit 
    

    Example:

    SMI Cluster Manager# clusters test1 actions sync run upgrade-strategy rolling force-vm-redeploy true debug true 
    SMI Cluster Manager# clusters test1 actions sync run force-vm-redeploy true purge-data-disks true 
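
The preceding steps are run interactively from the SMI Cluster Manager CLI. If you want to script the synchronization trigger, the following is a minimal sketch only, assuming the Cluster Manager CLI is reachable over SSH with password authentication and accepts commands non-interactively; the host, credentials, cluster name, and polling interval are placeholders, and the exact text returned by sync status depends on your release.

    import time
    import paramiko

    HOST = "cm.example.com"   # placeholder: SMI Cluster Manager address
    USER = "admin"            # placeholder credentials
    PASSWORD = "secret"
    CLUSTER = "test1"         # cluster_name used in the examples above

    def run_cli(client, command):
        """Run one CLI command over SSH and return its combined output."""
        _, stdout, stderr = client.exec_command(command)
        return stdout.read().decode() + stderr.read().decode()

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    # Trigger the cluster synchronization (step 1 above).
    print(run_cli(client, f"clusters {CLUSTER} actions sync run"))

    # Poll the synchronization status. The status text is release dependent,
    # so it is printed for inspection rather than parsed here.
    for _ in range(30):
        print(run_cli(client, f"clusters {CLUSTER} actions sync status"))
        time.sleep(60)

    client.close()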
    

Notes:

  • clusters cluster_name – Specifies the information about the nodes to be deployed. cluster_name is the name of the cluster.

  • actions – Specifies the actions performed on the cluster.

  • sync run – Triggers the cluster synchronization.

  • sync logs – Shows the current cluster synchronization logs.

  • sync status – Shows the current status of the cluster synchronization.

  • debug true – Enters the debug mode.

  • monitor sync-logs – Monitors the cluster synchronization process.

  • upgrade-strategy concurrent – This strategy is similar to the existing cluster synchronization where everything is upgraded at once.

    • reset-k8s-nodes – Resets K8s on the nodes instead of deleting and redeploying them all at once.

  • upgrade-strategy rolling – The rolling upgrade is a new upgrade strategy where upgrades are performed node-by-node.

    Note

    You can use the rolling upgrade strategy to upgrade only the K8s cluster and the product. If there are no changes in the product charts, the upgrade fails. To upgrade one node at a time, see the Upgrading Node-by-Node section.

    • reset-k8s-nodes – Resets K8s on the specific node instead of redeploying it.

  • force-vm-redeploy true – Traverses each node, one at a time, to delete and upgrade it. The redeployment process is similar to a fresh installation of the node, except that the data directory, which holds information about the previous installation, is retained. Redeploying a node involves:

    • Making API calls for draining and replacing the VMs.

    • Synchronizing (through the Sync API) the node.

    • Verifying the cluster and pod status before proceeding to the next node.

  • purge-data-disks true – Removes the data disks and treats the redeployment as a new installation. You can use this option to recover from corrupted etcd instances. For example, if you have replaced two etcd instances and are left with one that still holds old data, you can purge the disks and reset the cluster completely.
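
The keywords listed above combine into a single sync run command line, as in the examples in steps 2 and 3. The following is a purely illustrative helper (the function and its structure are not part of the product) that assembles those command strings from the keywords described in these notes.

    def build_sync_command(cluster, upgrade_strategy=None, debug=False,
                           reset_k8s_nodes=False, force_vm_redeploy=False,
                           purge_data_disks=False):
        """Assemble a 'clusters <name> actions sync run' command string."""
        parts = [f"clusters {cluster} actions sync run"]
        if upgrade_strategy:        # "concurrent" or "rolling"
            parts.append(f"upgrade-strategy {upgrade_strategy}")
        if force_vm_redeploy:       # delete and redeploy nodes one at a time
            parts.append("force-vm-redeploy true")
        if purge_data_disks:        # also discard the data disks (fresh installation)
            parts.append("purge-data-disks true")
        if debug:                   # enable debug output for the synchronization
            parts.append("debug true")
        if reset_k8s_nodes:         # reset K8s instead of redeploying the nodes
            parts.append("reset-k8s-nodes true")
        return " ".join(parts)

    # Reproduces the rolling-upgrade example from step 2:
    print(build_sync_command("test1", upgrade_strategy="rolling",
                             debug=True, reset_k8s_nodes=True))
    # clusters test1 actions sync run upgrade-strategy rolling debug true reset-k8s-nodes true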