Slot

The CDL Slot pod stores the actual session data. The CDL endpoint connects to all the Slot pods within the cluster to replicate the session data across the pods. These microservices are deployed as K8s pods and expose an internal gRPC interface towards the Cisco Data Store. Each pod starts with the following attributes:

  • systemID: This parameter specifies the site ID (For instance: Site-1).

  • clusterID: This parameter specifies the unique datastore ID within a site (For instance: Session).

  • mapID: This parameter specifies the replica set ID within the cluster (For instance: map-1, map-2, ..., map-n).

  • instanceID: This parameter specifies the instance ID within the replica set (For instance: map-1.instance-1, map-1.instance-2).
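Taken together, these attributes uniquely identify a Slot instance within a deployment. The following Go sketch models that identity; the SlotIdentity type and its fields are illustrative and are not part of the actual CDL implementation:

package main

import "fmt"

// SlotIdentity models the startup attributes of a Slot pod.
// This type is illustrative; the real CDL implementation is not shown here.
type SlotIdentity struct {
	SystemID   string // site ID, for instance "Site-1"
	ClusterID  string // datastore ID within the site, for instance "Session"
	MapID      int    // replica set ID within the cluster, for instance 1 for map-1
	InstanceID int    // instance ID within the replica set, for instance 2 for map-1.instance-2
}

// Name renders the map-N.instance-M label used in this guide.
func (s SlotIdentity) Name() string {
	return fmt.Sprintf("map-%d.instance-%d", s.MapID, s.InstanceID)
}

func main() {
	id := SlotIdentity{SystemID: "Site-1", ClusterID: "Session", MapID: 1, InstanceID: 2}
	fmt.Println(id.Name()) // prints: map-1.instance-2
}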

Each Slot pod holds a finite number of sessions and pre-allocates memory for storing the session data. Also, each replica within the replica set (mapID) has a defined anti-affinity rule, which prevents the same blade from hosting multiple members or instances of the same replica set (for high availability in case of a blade or node failure). Each Slot pod maintains a timer and a last-updated timestamp. The Slot pod generates the notification callback to the client NF for taking action when the timer expires or if a conflict is detected.
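The anti-affinity rule maps naturally onto a Kubernetes pod anti-affinity constraint keyed on a replica-set label. The following Go sketch builds such a rule with the Kubernetes API types; the cdl-map label key is an assumption for illustration:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// slotAntiAffinity builds a required pod anti-affinity rule so that two
// instances of the same replica set (mapID) never land on the same node.
// The "cdl-map" label key is an assumption for illustration.
func slotAntiAffinity(mapID string) *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{
				{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"cdl-map": mapID},
					},
					// Spread the instances across nodes (blades).
					TopologyKey: "kubernetes.io/hostname",
				},
			},
		},
	}
}

func main() {
	_ = slotAntiAffinity("map-1")
}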

Note

A maximum of two vCPUs is required for deploying the Slot pod in a production environment.

In the event of pod failover and recovery, the Slot pod recovers from:

  • Local Replica Member: The Slot pod reads the data in bulk directly from the gRPC stream of the local replica member to recover.

  • Remote Replica Member: When there is no local replica available for synchronization, the Slot reads the data from the remote site instances for the same map.

    The following figures depict the Slot recovery process from local and remote peers:

    Slot Recovery from Local Peer
    Slot Recovery from Remote Peer
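The recovery preference is: synchronize from a local replica first, and fall back to a remote site instance of the same map only when no local replica is available. The following Go sketch condenses that decision; syncFromLocalPeer and syncFromRemotePeer are hypothetical helpers standing in for the bulk gRPC stream reads:

package main

import (
	"errors"
	"fmt"
)

var errNoLocalReplica = errors.New("no local replica available")

// syncFromLocalPeer and syncFromRemotePeer stand in for the bulk gRPC
// stream reads described above; both are hypothetical helpers.
func syncFromLocalPeer(mapID string) error  { return errNoLocalReplica }
func syncFromRemotePeer(mapID string) error { return nil }

// recoverSlot prefers the local replica member and falls back to a remote
// site instance of the same map when no local replica is available.
func recoverSlot(mapID string) error {
	err := syncFromLocalPeer(mapID)
	if err == nil {
		return nil
	}
	if !errors.Is(err, errNoLocalReplica) {
		return err
	}
	return syncFromRemotePeer(mapID)
}

func main() {
	if err := recoverSlot("map-1"); err != nil {
		fmt.Println("recovery failed:", err)
		return
	}
	fmt.Println("slot recovered")
}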

Data Slicing

Data slicing logically separates CDL as slices and stores the session data based on the slice name received from the Network Functions (NF).

With data slicing, one or more NFs can store different types of session data in dedicated slices of CDL.

A default slice name called session is used if the slice names are not configured.

The configuration is as follows:

cdl datastore <datastore name> slice-names [ <sliceName 1> <sliceName 2> ... <sliceName n> ]

The sample configuration is as follows:

cdl datastore session slice-names [ session1 session2 ]

Note
  • If the slice names are configured at the NF's ops-center or CDL's ops-center, every request from the NF must have a valid slice name. If the slice name is different from what is configured or is empty, then the request is rejected with an error code (see the sketch after this list).

  • If the slice names are not configured, then the NF requests are routed to the default session slice.

  • The slice names cannot be updated in a running system post deployment.
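The validation and routing rules in this note can be sketched in Go as follows; the configuredSlices set and the resolveSlice helper are illustrative and do not reflect the actual CDL request format:

package main

import (
	"errors"
	"fmt"
)

// configuredSlices mirrors the slice-names list from the datastore
// configuration; the contents are illustrative.
var configuredSlices = map[string]bool{"session1": true, "session2": true}

const defaultSlice = "session"

var errInvalidSlice = errors.New("invalid or empty slice name")

// resolveSlice applies the routing rules described above: when no slices
// are configured, every request goes to the default session slice; when
// slices are configured, the request must name one of them exactly.
func resolveSlice(requested string) (string, error) {
	if len(configuredSlices) == 0 {
		return defaultSlice, nil
	}
	if configuredSlices[requested] {
		return requested, nil
	}
	return "", errInvalidSlice
}

func main() {
	for _, name := range []string{"session1", "", "unknown"} {
		slice, err := resolveSlice(name)
		fmt.Printf("request %q -> slice=%q err=%v\n", name, slice, err)
	}
}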

Deleting CDL Slot Data

In certain scenarios, the CDL records are found in the Slot pods but not in the index pods. For such records, the notifications from Slot towards the application do not carry the values correctly. The record in the Slot is not deleted if the corresponding index data is not deleted.

The stale Slot records are detected and deleted as follows:

  • If the number of notifications to an application crosses a threshold value (default value of 3), a record is suspected to be stale.

  • This triggers a validation check to find the corresponding record in any of the index pods (local or on any geo remote sites).

  • If there is a mismatch in the map ID from the index, or if the map ID is not found in any of the index pods, then a clean-up is invoked to delete the record on the local as well as the remote sites (see the sketch after this list).
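The following Go sketch condenses this detection flow; indexMapID and cleanupRecord are hypothetical helpers standing in for the index lookups and the cross-site clean-up:

package main

import "fmt"

const notificationRetryCount = 3 // default threshold described above

// indexMapID looks up the record's map ID across the local and remote index
// pods; found=false means no index pod knows the record. Hypothetical helper.
func indexMapID(key string) (mapID int, found bool) { return 0, false }

// cleanupRecord deletes the record on the local and remote sites. Hypothetical helper.
func cleanupRecord(key string) { fmt.Println("cleaned up stale record:", key) }

// onNotifyTimeout runs after each timer-expiry notification that receives no
// update from the application.
func onNotifyTimeout(key string, attempts, slotMapID int) {
	if attempts < notificationRetryCount {
		return // below the threshold; keep retrying the notification
	}
	// The record is suspected to be stale: validate against the index pods.
	mapID, found := indexMapID(key)
	if !found || mapID != slotMapID {
		cleanupRecord(key)
	}
}

func main() {
	onNotifyTimeout("session-abc", 3, 1)
}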

The following parameters are introduced to delete stale records:

  • disable-auto-deletion: When set to true, the stale CDL records are not deleted. Auto deletion of stale records is enabled by default.

  • notification-retry-count: Specifies the minimum number of timer expiry notification retries sent to the application without receiving an update from the application. If there are no updates received even after notification-retry-count retries, CDL proceeds to check whether the Slot record is stale. The default value is 3.

The sample CDL configurations are as follows:

To disable the stale Slot record auto-deletion feature:


cdl datastore session
features slot-stale-session-detection disable-auto-deletion true
exit

You can change the notification-retry-count to a new value, for example, 5. This indicates that the timer expiry notification is retried 5 times, after which CDL proceeds to check whether the data is stale.


cdl datastore session
features slot-stale-session-detection notification-retry-count 5
exit

Troubleshooting

To enable troubleshooting logs for stale CDL Slot data on the endpoint and Slot pods, use the following configuration:


cdl logging logger ep.staleRecord.session
level info
exit

cdl logging logger slot.staleRecord.session
level info
exit