Use Cases

This section describes the use cases that cnSGW-C supports:

  • cnSGW-C Configuration

    The cnSGW-C base configuration provides a detailed view of the configuration required to make cnSGW-C operational. This includes setting up the infrastructure to deploy cnSGW-C, deploying cnSGW-C through SMI, and configuring the Ops Center to use the cnSGW-C capabilities over time. For more information on SMI, see the Ultra Cloud Core SMI Cluster Deployer Operations Guide.

    For Converged Core deployment, cnSGW-C is deployed using the Converged Ops Center.

  • Session Management

    Every UE accessing the EPC is associated with a single S-GW. cnSGW-C supports multiple PDNs for a given UE. As part of Session Management, cnSGW-C supports the following (see the sketch after this list):

    • Default and dedicated bearer establishment

    • Bearer modification

    • Bearer deactivation

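    A minimal Python sketch of this session model follows; the class and field names are illustrative and do not reflect cnSGW-C's internal data structures.

      from dataclasses import dataclass, field
      from typing import Dict

      @dataclass
      class Bearer:
          ebi: int            # EPS Bearer ID
          qci: int            # QoS Class Identifier
          is_default: bool    # one default bearer per PDN connection

      @dataclass
      class PdnConnection:
          apn: str
          bearers: Dict[int, Bearer] = field(default_factory=dict)

          def establish_bearer(self, ebi: int, qci: int, is_default: bool) -> Bearer:
              self.bearers[ebi] = Bearer(ebi, qci, is_default)
              return self.bearers[ebi]

          def modify_bearer(self, ebi: int, qci: int) -> None:
              self.bearers[ebi].qci = qci

          def deactivate_bearer(self, ebi: int) -> None:
              self.bearers.pop(ebi, None)

      @dataclass
      class UeSession:
          imsi: str
          # A single S-GW anchors the UE, which may hold multiple PDN connections.
          pdns: Dict[str, PdnConnection] = field(default_factory=dict)

          def attach_pdn(self, apn: str) -> PdnConnection:
              return self.pdns.setdefault(apn, PdnConnection(apn))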

  • Support for UE Mobility

    cnSGW-C is the mobility anchor point for the UE. In an LTE network, a UE can move from one eNodeB to another, with or without an MME change. A UE can also move from one cnSGW-C to another through different modes: S1-based relocation, X2-based relocation, and 5G-4G interworking.

  • S1-Release/Buffering/Downlink Data Notification

    cnSGW-C handles the release of the S1-U bearers between the eNodeB and the SGW-U. When cnSGW-C receives a Release Access Bearers (RAB) Request indicating that the S1-U bearers are released, it updates the User Plane and moves the UE to the IDLE state. While the UE is in the IDLE state, if a downlink data packet arrives for the UE, cnSGW-C generates a Downlink Data Notification (DDN) message towards the MME to page the UE.

    cnSGW-C also supports DDN Throttling, DDN Delay, and the High Priority feature for DDN.

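    The following Python sketch illustrates the idle-mode flow described above; update_user_plane and send_ddn_to_mme are hypothetical callbacks, not cnSGW-C APIs.

      IDLE, CONNECTED = "IDLE", "CONNECTED"

      class UeContext:
          def __init__(self, imsi):
              self.imsi = imsi
              self.state = CONNECTED
              self.ddn_sent = False
              self.buffered = []

      def on_release_access_bearers(ue, update_user_plane):
          # The S1-U bearers are released: update the User Plane
          # and move the UE to the IDLE state.
          update_user_plane(ue)
          ue.state = IDLE
          ue.ddn_sent = False

      def on_downlink_data(ue, packet, send_ddn_to_mme):
          if ue.state != IDLE:
              return                       # deliver directly in CONNECTED state
          ue.buffered.append(packet)       # buffer while the UE is being paged
          if not ue.ddn_sent:              # throttling/delay would gate this send
              send_ddn_to_mme(ue.imsi)
              ue.ddn_sent = True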

  • Retransmission and Timeout

    For all procedures, as per 3GPP TS 23.401 and TS 29.274, cnSGW-C supports N3 retransmissions and the T3 response timeout. These are supported on the S11, S5, and Sx interfaces.

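    A Python sketch of the N3/T3 scheme follows: a request is retransmitted on each T3 expiry, up to N3 times. A real node matches responses by sequence number and runs one timer per outstanding transaction; the socket-based loop below is only illustrative.

      import socket

      def send_with_retransmission(sock, request, peer, t3_seconds=3.0, n3=3):
          """Send a request; retransmit on T3 expiry, at most N3 times."""
          sock.settimeout(t3_seconds)
          for _attempt in range(1 + n3):       # initial send plus N3 retransmissions
              sock.sendto(request, peer)
              try:
                  response, _addr = sock.recvfrom(2048)
                  return response              # answered before T3 expired
              except socket.timeout:
                  continue                     # T3 expired; retransmit
          raise TimeoutError("no response after N3 retransmissions")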

  • Failure and Error Handling

    cnSGW-C supports handling of the following (see the sketch after this list):

    • Failure responses for the Create Session Request as part of the initial attach and additional PDN setup procedures

    • Failure scenarios in the PGW-initiated Dedicated Bearer Creation (DBC) procedure

    • Failures in Release Access Bearers (RAB) and Modify Bearer Request and Response (MBR) procedures from the PGW and the User Plane

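    A minimal sketch of the Create Session Response handling follows. The cause value 16 (Request Accepted) is from 3GPP TS 29.274; proceed and cleanup are hypothetical callbacks.

      CAUSE_REQUEST_ACCEPTED = 16      # 3GPP TS 29.274, Cause IE

      def on_create_session_response(session, cause, proceed, cleanup):
          if cause == CAUSE_REQUEST_ACCEPTED:
              proceed(session)         # continue the attach or additional PDN setup
              return None
          # Failure case: release resources reserved for this session and
          # propagate the failure cause towards the MME.
          cleanup(session)
          return cause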

  • Load/overload Control Functions

    cnSGW-C supports the following (a sketch follows the list):

    • Exchange of load/overload control information, and the associated actions during peer node overload, over the Sx interface.

    • Handling of load/overload information on the GTPv2 interface.

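    GTPv2 overload control conveys an overload reduction metric, a percentage of traffic the peer asks to shed. A probabilistic throttle, as sketched below in Python, is one straightforward way to honor it; the class is illustrative.

      import random

      class PeerOverloadState:
          """Tracks the overload reduction metric (0-100) advertised by a peer."""

          def __init__(self):
              self.reduction_metric = 0

          def update(self, metric: int) -> None:
              # Refreshed from the Overload Control Information in peer messages.
              self.reduction_metric = max(0, min(100, metric))

          def should_throttle(self) -> bool:
              # Shed roughly `reduction_metric` percent of new requests
              # while the peer reports overload.
              return random.uniform(0, 100) < self.reduction_metric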

  • cnSGW-C Charging Support

    cnSGW-C supports the following (see the sketch after this list):

    • Offline Charging (Gz).

    • Writing CDRs to local disk storage. The CDR files are periodically pushed to an SFTP server.

    • CDR generation for selected subscribers. This is achieved by enabling CDR generation per operator policy through the call control profile.

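    The following Python sketch shows the write-then-push pattern described above, assuming hourly file rotation and a caller-supplied transfer function (for example, paramiko's SFTPClient.put); the directory and naming scheme are hypothetical.

      import json
      import time
      from pathlib import Path

      CDR_DIR = Path("/var/cdr")        # hypothetical local disk location

      def current_cdr_file() -> Path:
          return CDR_DIR / f"cdr_{time.strftime('%Y%m%d%H')}.json"   # hourly rotation

      def write_cdr(record: dict) -> None:
          """Append one CDR to the currently open file on local disk."""
          CDR_DIR.mkdir(parents=True, exist_ok=True)
          with current_cdr_file().open("a") as f:
              f.write(json.dumps(record) + "\n")

      def push_closed_files(sftp_put) -> None:
          """Periodically push closed CDR files to the SFTP server."""
          for path in CDR_DIR.glob("cdr_*.json"):
              if path != current_cdr_file():   # skip the file still being written
                  sftp_put(path)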

  • Peer and Path Management for GTPC and Sx

    cnSGW-C supports the following (a sketch follows the list):

    • Peer management for the MME (S11 peers), PGW (S5 peers), and User Plane.

    • Peer monitoring through Echo Request/Response and Heartbeat Request/Response.

    • Handling of path failure events for S11 and S5 peers.

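    A Python sketch of Echo/Heartbeat-based peer monitoring follows. send_echo is a placeholder that sends an Echo Request (GTP-C) or Heartbeat Request (Sx) and returns the peer's recovery/restart counter, or None on timeout; the thresholds are illustrative.

      class PeerMonitor:
          def __init__(self, send_echo, max_misses=3):
              self.send_echo = send_echo
              self.max_misses = max_misses
              self.misses = 0
              self.restart_counter = None

          def tick(self):
              counter = self.send_echo()
              if counter is None:                      # no response before timeout
                  self.misses += 1
                  if self.misses >= self.max_misses:
                      return "PATH_FAILURE"            # start path-failure handling
                  return "OK"
              self.misses = 0
              if self.restart_counter is not None and counter != self.restart_counter:
                  self.restart_counter = counter
                  return "PEER_RESTART"                # peer rebooted; clear its sessions
              self.restart_counter = counter
              return "OK"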

  • Redundancy Support

    The cnSGW-C deployment in a Kubernetes (K8s) cluster plays a vital role in supporting High Availability (HA) and Geographic Redundancy (GR).

    Redundancy support ensures stateful session continuity between the clusters during rack or cluster failures.

    cnSGW-C achieves HA through a redundant setup of each cluster component, so that any single point of failure is avoided.

    GR provides rack-level redundancy by replicating data between two separate K8s clusters across racks. On a rack or cluster failure, traffic switches to the remote rack. The failure can be caused by a power failure, multiple compute failures, a network failure, multiple pod failures, a BFD link failure, and so on.

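    The following is a much-simplified Python sketch of a rack-level failover decision, assuming a standby rack that watches a health signal (for example, BFD session state) and takes over after consecutive misses; peer_alive and switch_traffic are hypothetical hooks.

      import time

      ACTIVE, STANDBY = "ACTIVE", "STANDBY"

      def gr_monitor(role, peer_alive, switch_traffic, poll_seconds=1.0, max_misses=3):
          misses = 0
          while role == STANDBY:
              misses = misses + 1 if not peer_alive() else 0
              if misses >= max_misses:      # remote rack declared failed
                  switch_traffic()          # take over the remote rack's traffic
                  role = ACTIVE
              time.sleep(poll_seconds)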

  • Dynamic Routing

    Dynamic routing enables L3 peering with the leaf switches, in addition to L2 static routing.

  • GTPU Path Management and Session Management

    When the UPF receives a GTP-PDU with a TEID that does not exist, it notifies the sender with an Error Indication message for that GTP-U peer. This ensures that there are no stale sessions or bearers, and maintains consistency in the network.

    Error Indication and GTP-U Path Failure Indication messages between the S-GW and UPF nodes are supported over the N4 interface. For the neighbor nodes, the communication is supported over the S1-U/S5-U interfaces. The implementation considers the local-purge and signal-peer behavior variations for Error Indication and GTP-U Path Failure.

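    The Python sketch below pairs the two sides of this behavior: the receiver answers an unknown TEID with an Error Indication, and the sender reacts according to the configured variation. send_error_indication and notify_peer are hypothetical callbacks.

      def on_gtpu_pdu(teid, sessions, send_error_indication):
          """Receiver side: a GTP-PDU with an unknown TEID triggers an Error Indication."""
          if teid not in sessions:
              send_error_indication(teid)    # tell the sender the TEID is stale
              return None
          return sessions[teid]

      def on_error_indication(teid, sessions, mode, notify_peer):
          """Sender side: react per the local-purge / signal-peer variation."""
          bearer = sessions.pop(teid, None)  # drop the stale bearer locally
          if bearer is not None and mode == "signal-peer":
              notify_peer(bearer)            # also signal the control-plane peer
          # mode == "local-purge": clean up locally without signalling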