Developing a QoS aware framework for LTE 4G: Aricent Group

Telecom Lead India: A good quality of service (QoS)
mechanism enables operators to monetize their networks effectively, and helps them
provide differentiated service offerings. Controlling quality of service not
only improves customer satisfaction and reduces churn, but also optimizes the use
of network resources by prioritizing higher-valued traffic flows. The 3GPP Release
8 framework for QoS contains mechanisms to implement these innovations. This
whitepaper describes the QoS control mechanisms required for eNodeB, and
provides a description of the 3GPP QoS concept and mechanism spread across
various LTE network nodes for realization of QoS. Implementation of a QoS-aware
packet scheduler is also covered in detail.


The introduction of flat-rate tariffs, coupled with the
availability of smart devices, has made bandwidth-hungry applications such as
multimedia, video on demand, mobile TV and online gaming ubiquitous and
affordable, resulting in a drastic increase in packet traffic volume per
subscriber. Each of these services has different performance requirements in
terms of bit rate, packet delay, packet loss tolerance, etc. Moreover, operators
are now basing their pricing on subscribers’ individual requirements. Thus,
there is a need for a QoS framework that provides differential treatment based
on subscribers, services, and data traffic flows.


Key requirements include:


> Ensuring high Quality of Service, especially for key
services, in terms of delay, packet loss rate, and throughput


> Enabling user differentiation to provide premium
subscriptions with higher availability and higher minimum and peak bit rates


> Limiting undesired users/applications like excessive
downloads on flat-rate subscriptions


> Prioritizing emergency services to provide highest
reliability


> Reducing over-provisioning for guaranteed bit rate
services to ensure efficient use of resources


3GPP Release 8 specification, while primarily targeting
the high-throughput, low-latency LTE network, has standardized the QoS
framework with mechanisms to fulfill all of the above requirements. It provides
extensive specifications ranging from class-based mapping of operator services
to packet forwarding treatment in network nodes. The eNodeBs that sit between
UE and backhaul, and control the radio resource allocation, play an important
role in realization of per-UE or per-bearer QoS needs.


This whitepaper covers the QoS architecture in LTE and its
various functions spread across different network nodes. The first part
provides a primer of the QoS concept and mechanisms available in 3GPP LTE
networks, focusing more on the functions required in eNodeB for QoS
realization. In the second part, various design principles for the LTE eNodeB
packet scheduler are discussed for effective conformance to QoS requirements
while allocating radio resources. The last section summarizes the article and
gives a brief overview of Aricent offerings for eNodeB.


QoS ARCHITECTURE IN 3GPP LTE NETWORKS


In 3GPP architecture, EPS Bearer or Bearer, in short, is
the finest level of granularity for which QoS control is defined. EPS Bearer is
a logical entity that includes all the packet flows that receive a common QoS
treatment between UE and EPS Gateway. The EPS Bearer is a logical concatenation
of the Radio Bearer between UE and eNodeB, and S1 Bearer between eNodeB and
EPC. A packet flow is identified by packet filters (5-tuple). Using packet
filters configured during bearer setup, UE maps packet flow in uplink direction
to EPS bearer while Gateway maps packet flows to EPS bearer in downlink
direction.


The first bearer, set up when the UE attaches to the network, is
the default bearer. Unlike previous RANs, in LTE, the default bearer always
remains associated with UE as long as it is attached to the network and retains
the IP address provided by the APN. The default bearer provides the basic
best-effort connectivity to the UE, which can also be associated with up to
seven dedicated bearers. These dedicated bearers enable the network to provide
different QoS to different packet flows associated with different bearers. The
operator can control which packet flows are mapped onto the dedicated bearer, and the QoS level
associated with the dedicated bearer, through policies provisioned into the
Policy and Charging Resource Function (PCRF) node or directly into the Gateway.
Packet flows that do not map to a dedicated bearer map to the default bearer.


QoS PARAMETERS


The LTE QoS model is class based, with each bearer
assigned only one QoS class by the network. The model is simplified by the
specification of a single scalar, the QCI (QoS Class
Identifier), which represents the performance parameters associated with a QoS
class. The QCI is the class identifier, used as a reference to
node-specific parameters that control packet-forwarding treatment like
scheduler weights, admission thresholds, queue management thresholds, link
layer protocol configuration, etc. 3GPP has standardized nine QCI values.
Each QCI is associated with specific standardized characteristics that describe
its performance expectations, end to end, between UE and Gateway. The value of
QCI and its associated characteristics is the same in uplink and downlink for
each bearer. The following are some of these characteristics:


Resource Type (GBR/non-GBR): The Guaranteed Bit Rate
(GBR) bearer carries traffic that requires a minimum bit rate guarantee from
the network. At the time of GBR bearer setup/ modification, through admission
control function, resources at various network nodes (eNodeB, packet Gateways)
are provisioned such that there is no congestion-related packet drop
experienced by the bearer. Conversely, the non-GBR bearer carries variable-rate
traffic and is offered best-effort service from the network. There can be
congestion-related packet drops in traffic associated with non-GBR bearers,
while GBR bearers are more costly to service because transmission resources are
blocked for such services. Operators allow the GBR service when enough
resources are available and there is no risk of service degradation to existing
sessions. Subscriber sessions with a non-GBR class can remain established for
longer periods of time. Typically, the default bearer belongs to the non-GBR
class and can be used for services like internet browsing, while services like
IMS voice call require a dedicated GBR bearer being set up on-demand. It’s an
operator policy decision, implemented using PCRF, that specifies whether a
service is realized using a GBR or non-GBR bearer.


Priority: The value of priority ranges from one to nine,
with one being the highest priority. The priority level is used to differentiate
between bearers with the same UE, as well as between bearers with different UE.
The eNodeB scheduler can use this as criteria while scheduling radio resources
between different UEs.


Packet Delay Budget (PDB): The PDB defines the upper
limit on the delay experienced by a packet, associated with a bearer, between
UE and Gateway. The packet scheduler, while allocating radio resources to
different UEs, should try to maximize the compliance to the PDB allowed for the
packets of a bearer belonging to a certain QoS class.


Packet Error Loss Rate (PLER): The PLER defines the
upper limit on the rate of non-congestion-related packet losses. The PLER can
be used for configuration of appropriate link layer protocol like RLC mode, max
HARQ retransmissions, etc. Typically, for a PLER target below 10^-3, the bearer
control function in eNodeB can decide to use RLC Acknowledged Mode (AM), and for
a PLER target above 10^-3, can decide to use RLC Unacknowledged Mode (UM).
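As an illustration of this rule, a minimal helper (hypothetical, not part of any 3GPP-defined API) could map a QCI's PLER target to an RLC mode:

```python
def select_rlc_mode(pler_target: float) -> str:
    """Choose an RLC mode from the QCI's Packet Error Loss Rate target.

    A target stricter than 10^-3 suggests Acknowledged Mode (RLC-level
    retransmissions); a looser target suggests Unacknowledged Mode.
    Illustrative sketch of the bearer control decision described above.
    """
    return "AM" if pler_target < 1e-3 else "UM"
```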


The 3GPP has published the standardized characteristics
associated with a QCI. These values are not signaled
on any interface, and should be taken as guidelines for configuration of
node-specific parameters for each QCI. The QCI and its corresponding
characteristics ensure that services using the same QCI class will receive a
minimum level of QoS, even in a multi-vendor deployment and in case of roaming.


As part of bearer setup/modification procedures, GBR
bearers are associated with GBR and MBR parameters separately for DL and UL.
GBR is the guaranteed bit rate that the network will be able to sustain, while
MBR is the maximum bit rate that the traffic of the bearer should not exceed.
Through appropriate Queue Management policies, network nodes should be able to
handle short-term variation in bit rate up to MBR without packet drop, and at
the same time should reserve transmission resources for GBR rate only. The
non-GBR bearers are associated with Aggregate Maximum Bit Rate (AMBR)
parameter, separately for DL and UL. This is not a per-bearer parameter, but an
aggregate rate of all non-GBR bearers for UE or for an APN within UE. 3GPP
specifies two different AMBR parameters. The UE-AMBR is a per-UE parameter
implemented at eNodeB and Gateway; the APN-AMBR, which is only known to the
Gateway, specifies the per-UE, per-APN bit rate. Bit rate consumption for GBR
bearers is not included in either of these AMBR parameters. UE-AMBR is the
upper bit rate limit provided to UE, and is less than or equal to the total
APN-AMBR of all active APNs to which UE is connected.
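The relationship between the two AMBR parameters can be sketched as follows (a hypothetical helper; rates in bps):

```python
def effective_ue_ambr(subscribed_ue_ambr: int, apn_ambrs: list[int]) -> int:
    """The UE-AMBR enforced at eNodeB cannot exceed the sum of the
    APN-AMBRs of all active APNs the UE is connected to, as stated
    above. Illustrative helper, not a 3GPP-defined function."""
    return min(subscribed_ue_ambr, sum(apn_ambrs))
```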


Allocation and Retention Priority (ARP) is another
parameter associated with each bearer. ARP defines the control plane treatment
related to admitting and retaining bearers. The ARP is used by admission
control function in eNodeB and Gateway to decide whether a bearer establishment
or modification request be accepted or rejected during overload situations.
Also, ARP can be used by the pre-emption function to decide which bearers to
release in situations when the system is in overload or resources are to be
freed (e.g., to admit an emergency call). The only QoS-related parameters known
to UE, other than UL packet filter rules, are related to the UL
bearer – Prioritized Bit Rate, Priority, and Bucket Size Duration. The
significance of these parameters is discussed in the next section.


FUNCTIONS FOR REALIZATION OF QoS 

QoS is a distributed
functionality, with functions spread across all LTE network nodes: UE,
eNodeB, backhaul, MME, Gateways (S-GW, P-GW), and PCRF. Figure 1 illustrates
the location of QoS functions among LTE network nodes and within various layers
of eNodeB.


The operator decides the mapping between services offered
to UEs and the QCI and bearer type. The bit rates like MBR/GBR, UE-AMBR, and
ARP are also part of the subscriber profile. These operator policies are coded
into PCRF and allow operators to realize both service and subscriber
differentiation. Operators also incorporate semi-static configurations of QoS
functions directly into network nodes using O&M system.


The PCRF in the network determines how each packet flow
for each subscriber must be handled in terms of the QoS parameters. It triggers
the establishment of a new bearer or modification of an existing bearer to
handle a packet flow. The control plane bearer procedure handling function of
MME forwards the bearer setup/modification request to eNodeB and UE, and
co-ordinates the setup/modification of EPS Bearer within UE, eNodeB, and
Gateways.


During bearer setup/modification procedures, the eNodeB
and Gateway perform both admission and pre-emption control functions to limit
their loads. In eNodeB, RRM performs both of these functions. As part of the
admission control function in eNodeB, RRM decides whether sufficient radio and
processing resources are available to cater to the new bearer QoS and bit rate
requirements. In a typical implementation, the inputs to the admission control
function are the current load of the entire cell; the nature of the current
request, depending on whether the bearer belongs to an emergency call,
handover, GBR or non-GBR data radio bearer (DRB), or signaling radio bearer (SRB);
the QCI of the new bearer; and the estimated load increase from the admission
of the new bearer, depending on the bearer bit rate (GBR, AMBR) and resource
type (GBR, non-GBR). The function admits the bearer if the increment in current
cell load due to the admission of the new bearer is within the allowed system
capacity threshold for that particular request type. The RRM may keep separate
system capacity thresholds for different QCI classes, ARP values, and types of
requests. For example, a higher threshold is used for handover requests than
for new bearers in order to reduce the probability of call dropping. Should
overload conditions prevent bearers from being admitted, the pre-emption
control function of RRM uses bearer ARP values to identify the specific bearer
that needs to be released in order to free-up resources for higher-priority
bearers. During excessive load conditions, the overload control function
residing in eNodeB RRM and Gateway can also initiate load shedding by
identifying bearers to be released based on their ARP value.
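The admission check described above reduces to comparing the projected cell load against a per-request-type capacity threshold. The sketch below follows that scheme; the threshold values and request-type names are illustrative assumptions, not figures from this whitepaper:

```python
# Hypothetical capacity thresholds as fractions of total cell load; note the
# handover threshold exceeds the new-bearer thresholds to reduce call dropping.
THRESHOLDS = {
    "emergency":   1.00,
    "handover":    0.95,
    "new_non_gbr": 0.90,
    "new_gbr":     0.80,
}

def admit_bearer(current_load: float, estimated_increment: float,
                 request_type: str) -> bool:
    """Admit the bearer only if the projected cell load stays within the
    capacity threshold for this request type (illustrative sketch of the
    RRM admission control function)."""
    return current_load + estimated_increment <= THRESHOLDS[request_type]
```

A load increment that is rejected for a new GBR bearer may still be admitted for a handover, reflecting the headroom described above.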


While traffic is flowing, UEs perform uplink packet
filtering to map packets belonging to an uplink packet flow to a bearer.
Similarly, Gateways perform downlink packet filtering to map packets belonging to
a downlink packet flow to the required bearer. The Gateway and eNodeB also
implement rate-limiting functions to ensure that the services are sending data
in accordance with the specified maximum bit rates (MBR and AMBR), and to
protect the network from being overloaded. For non-GBR bearers, the eNodeB
performs the rate limiting based on the UE-AMBR value, while the Gateway
performs the rate policing based on APN-AMBR value in both uplink and downlink.
For GBR bearers, the rate limiting based on MBR is carried out in Gateway and
eNodeB for downlink.
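MBR/AMBR rate limiting of this kind is typically realized with a token bucket. The sketch below is a generic policer written under that assumption; class and parameter names are illustrative, not from any 3GPP interface:

```python
class TokenBucketPolicer:
    """Token-bucket rate policer sketching MBR/AMBR enforcement: tokens
    accrue at rate_bps up to a bucket depth that absorbs short bursts;
    a packet passes only if enough tokens remain."""

    def __init__(self, rate_bps: float, bucket_bits: float):
        self.rate = rate_bps          # long-term policed rate
        self.capacity = bucket_bits   # burst tolerance
        self.tokens = bucket_bits     # start with a full bucket
        self.last = 0.0               # timestamp of last update, seconds

    def allow(self, packet_bits: float, now: float) -> bool:
        # Credit tokens for the time elapsed since the last packet.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False  # non-conforming: drop or mark per operator policy
```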


The eNodeB packet scheduler distributes the radio
resources between established bearers in uplink and downlink direction. The
scheduling function is responsible, to a large extent, for the fulfillment of
the QCI characteristics associated with bearers. The next section provides
details regarding decision making at the scheduler to address bearers’ QoS
requirements.


The bearer control function in eNodeB RRM is also
responsible for configuration of L1 and L2 protocols of the radio bearers in
accordance with the QCI characteristics. The RLC-acknowledged or
-unacknowledged mode is based on the PLER requirement. PLER and PDB are
considered in configuring the maximum number of HARQ retransmissions in both
uplink and downlink. The PDCP discard timer is configured on the basis of PDB
associated with the bearer so that packets delayed beyond allowed PDB limits,
while waiting to be scheduled, can be flushed out. The RLC downlink
transmission queue length is configured based on the GBR or AMBR bit rate and
PDB associated with the bearer. RRM also configures the scheduler parameters,
details of which are provided in the next section.


The bearer control function in RRM also configures
parameters that are specific to the uplink bearer. The data and signaling
bearers having common QoS requirements are grouped by RRM into Logical
Channel Groups (LCGs), of which there are up to four per UE. A typical grouping
of bearers to LCG is: LCG 0 (SRB1, SRB2, DRB with QCI 5 used for IMS
signaling), LCG 1 (DRB with QCI 1 used for voice call), LCG 2 (GBR DRBs with
QCI 2, 3, and 4), and LCG 3 (non-GBR DRBs with QCI 6, 7, 8, and 9). This kind
of grouping allows the scheduling rules to be applied per LCG rather than per
bearer. For uplink transmission resource requests, UE can specify the size of
the buffer awaiting transmission per LCG. The eNodeB can also communicate the
per-LCG grant to UE. RRM configures the Priority, Prioritized Bit Rate (PBR),
and Bucket Size Duration per uplink bearer. UE uses these parameters to
distribute the received uplink grant from eNodeB among bearers within LCG. The
principles of the token bucket algorithm are applied: once in every bucket
size duration, every bearer is credited tokens equivalent to PBR. The received
grant is allocated to the bearer with highest priority until all tokens are
consumed, followed by another bearer in priority until tokens of all bearers in
a LCG are served. Surplus grants are allocated preferentially to bearers
according to priority until the pending buffer size reaches zero. Within an
LCG, RRM allocates priority to the bearer as per the QCI priority. The PBR is
allocated in proportion to the GBR rates.
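The two-round grant distribution described above (token-limited serving in priority order, then surplus by priority until buffers drain) can be sketched as follows. This is an illustrative simplification, not an exact rendering of the 3GPP MAC procedure:

```python
def distribute_grant(grant_bytes: int, bearers: list[dict]) -> dict:
    """Distribute an uplink grant among the bearers of an LCG.

    Each bearer dict carries 'id', 'priority' (1 = highest),
    'tokens' (PBR credit in bytes) and 'buffer' (pending bytes).
    Mutates tokens/buffer to reflect the allocation.
    """
    allocation = {}
    order = sorted(bearers, key=lambda b: b['priority'])
    # Round 1: serve each bearer, in priority order, up to its token credit.
    for b in order:
        served = min(grant_bytes, b['tokens'], b['buffer'])
        allocation[b['id']] = served
        b['tokens'] -= served
        b['buffer'] -= served
        grant_bytes -= served
    # Round 2: surplus grant in strict priority order until buffers empty.
    for b in order:
        extra = min(grant_bytes, b['buffer'])
        allocation[b['id']] += extra
        b['buffer'] -= extra
        grant_bytes -= extra
    return allocation
```

For example, a high-priority SRB with a small buffer is fully served first, after which the remaining grant flows to lower-priority bearers.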


The Gateway and eNodeB also implement QCI to the DSCP
mapping function to create a transition between bearer-level QoS to
transport-level QoS. Using this function, packets on a bearer associated with a
specific QCI are marked with specific DSCPs for forwarding to the backhaul
network. The QCI to DSCP mapping is based on the operator policies configured
using O&M into eNodeB and Gateway. eNodeB performs this mapping for uplink
while Gateway does it for downlink. The backhaul transport network should
implement its queue management and traffic forwarding functionality based on
DSCP value.
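The QCI-to-DSCP marking function reduces to a table lookup. The code points below are illustrative placeholders only; the real table is an operator policy configured through O&M:

```python
# Hypothetical operator-provisioned QCI -> DSCP table (values illustrative).
QCI_TO_DSCP = {
    1: 46,  # EF   - conversational voice
    5: 34,  # AF41 - IMS signaling
    9: 0,   # BE   - default bearer, best effort
}

def mark_packet(qci: int) -> int:
    """Return the DSCP code point for a bearer's QCI; unknown QCIs fall
    back to best-effort so the backhaul still forwards the packet."""
    return QCI_TO_DSCP.get(qci, 0)
```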


The eNodeB RRM Overload Detection function constantly
monitors the cell load status and performance statistics reported periodically
by L2 regarding the level of compliance per QCI class characteristics like PDB,
PLER, and bit rates. If the compliance of a QCI drops below a certain
threshold, overload control triggers the pre-emption function to release
bearers belonging to the QCI class, and the admission control function stops
admitting new bearers to the QCI class. This allows eNodeB to prevent and recover
from an overload condition and improves the compliance to QCI for ongoing
bearers.
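The per-QCI compliance check behind this overload trigger can be sketched as a simple threshold scan (names and values are illustrative assumptions):

```python
def qcis_in_overload(compliance: dict, thresholds: dict) -> list:
    """Return the QCI classes whose measured compliance (fraction of
    packets meeting their PDB/PLER targets, as reported by L2) has
    dropped below the configured threshold. These classes would trigger
    pre-emption and a bar on new admissions."""
    return [qci for qci, level in compliance.items()
            if level < thresholds[qci]]
```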


CONSIDERATIONS FOR QoS-AWARE eNodeB SCHEDULER


A QoS-aware scheduler needs to meet various conflicting
requirements. For every admitted UE with flowing traffic, the scheduler
needs to ensure that radio resources are allocated such that the signaled bit
rates (GBR/AMBR), latency, and PLER requirements are sufficiently met, while
other users' QoS requirements are also fairly served. The scheduler also
needs to maximize aggregate cell throughput, taking varying user-channel
conditions into account. For example, a UE with low overall priority might
have good channel conditions and require fewer radio resources to transmit
the same number of bits as a high-priority UE going through bad channel
conditions. The scheduler needs to choose UEs such that those with bad
channel conditions are not starved, while UEs with good channel conditions
are preferred, thus maximizing aggregate cell throughput.


LTE eNodeB also requires fast scheduling – once every
millisecond. Thus, the number of calculations performed for every allocation
should be optimized so that, within the available CPU cycles, the scheduler
is able to convey its decisions to the MAC in time for the MAC to send DL data
and DL/UL radio resource allocation information to Phy for transmission to UE.


There are conflicting demands for GBR and non-GBR bearers
as well. The majority of bearers admitted in a typical cell will be non-GBR
bearers. To maximize the number of served bearers, the admission control
function in RRM over-allocates while admitting non-GBR bearers, since such
bearers do not utilize the capacity uniformly, while reserving a fixed
capacity within the cell for GBR bearers. GBR bearers,
running services like VOIP calls, send packets of almost equal sizes at the
signaled bit rate. Non-GBR bearers of services like internet browsing and
background download are bursty and more tolerant of latency. The scheduler
ensures that GBR bearers are given a higher priority allocation, but doesn't
let non-GBR bearers starve over time. A typical scheduler can handle this by
categorizing bearers into signaling, GBR, and non-GBR categories. The signaling
category includes signaling radio bearers and QCI class 5 for high-priority,
low-latency IMS signaling. The GBR and non-GBR categories contain the
remaining QCI classes, with resource type GBR and non-GBR respectively.
Figure 2 illustrates how the bandwidth is
allocated to the different flows. Resources are first allocated to signaling and
GBR bearers, up to the marked dedicated capacity. Non-GBR bearers are then
allocated the remaining capacity. The dedicated capacity for GBR bearers is
derived from the aggregate GBR rate for each admitted bearer. This scheme can
be optimized so that unutilized capacity from signaling and the GBR category is
utilized for non-GBR. Certain underserved GBR bearers are also considered while
scheduling the capacity for non-GBR bearers. The dedicated capacity for GBR can
be tuned based on the over-allocation factor used by RRM while admitting GBR
bearers.
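The capacity partitioning of Figure 2, including the spill-over of unused signaling and GBR capacity to non-GBR bearers, can be sketched as follows (quantities and names are illustrative; resources are counted in PRBs):

```python
def partition_capacity(total_prb: int, signaling_demand: int,
                       gbr_demand: int, gbr_reserved: int,
                       nongbr_demand: int) -> tuple:
    """Per-TTI split of cell capacity among bearer categories: signaling
    first, then GBR up to its reserved share, then non-GBR takes whatever
    remains - including capacity the first two categories left unused."""
    sig = min(signaling_demand, total_prb)
    gbr = min(gbr_demand, gbr_reserved, total_prb - sig)
    nongbr = min(nongbr_demand, total_prb - sig - gbr)
    return sig, gbr, nongbr
```

For instance, if GBR demand exceeds its reserved share, GBR is capped at the reservation and the excess capacity is available to non-GBR bearers.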


A typical scheduling function will work in conjunction
with other functions, as illustrated in Figure 3. The buffering function is
required to buffer the downlink PDUs waiting for their transmission
opportunity. The per-bearer queue load from the buffering function is used as
an input by the scheduling function in the allocation decision. The buffering
function also manages the queue length with the maximum limit configured by the
RRM bearer control function, and drops PDUs beyond the configured level. It
also maintains the arrival time of each PDU. PDUs waiting beyond the discard
time, configured by the RRM bearer control function, are dropped.


A flow control function resides between the buffering and
scheduling functions. Typically, a token-based algorithm can be implemented to
periodically create per-bearer tokens at the signaled GBR/UE-AMBR rate, while
the scheduling function limits the transmission opportunity for the bearer by
the amount of tokens left. The channel feedback function maintains the channel
state per UE which is utilized by scheduling function to prioritize UEs with
good channel conditions. This function utilizes the CQI (Channel Quality
Indicator) feedback from UE for estimating downlink channel quality, and SRS
(Sounding Reference Signal) feedback for estimating uplink channel quality. The
HARQ function controls the retransmission of MAC PDUs. The DL HARQ stores
the DL-transmitted MAC PDU and, if negative feedback is received from UE,
retransmits the MAC PDU, up to a limit configured by the RRM bearer
control function. The UL HARQ function sends positive and negative feedback for
UL-transmitted PDUs to UE so that UE can retransmit PDUs for which negative
feedback is received. The scheduling function requires feedback from the HARQ
function so that it can schedule resources for retransmissions. A typical
scheduling function gives priority to retransmissions over new transmissions
for UE. The Radio Resource Allocation function allocates the actual frequency
resources on PDCCH, PDSCH, and PUSCH channels.


The scheduling algorithm is the heart of the scheduler.
Its purpose is to select the UEs that best fit a given scheduling policy for
the allocation of both downlink and uplink radio resources in a TTI, and the
amount of opportunity given to each UE. Consideration of per-bearer
characteristics and requirements are taken into account, but the radio
resources are ultimately allocated per UE. The UEs selected for uplink
transmission can be different from UEs selected for downlink transmission.
There are many well-known standard algorithms: Round Robin (RR), where equal
opportunity is given to each UE; Proportional Fair (PF), where resources are
allocated in proportion to each UE's instantaneous channel quality relative to
its average throughput over a time window; and Maximum Carrier to Interference
Power Ratio (MCIR), where users with maximum CIR are selected for transmission.
RR is a very simplistic algorithm that doesn't consider channel conditions and
is only useful for comparative purposes. MCIR is not suitable because it
neither guarantees bearer QoS requirements nor ensures fairness among UEs. PF
also is not appropriate because some guarantees are given between users, but
not between bearers.


To meet these challenges, Aricent has developed the
Weighted Priority (WP) algorithm to optimally handle all the QoS requirements
as well as keep precise control of computing requirements. This algorithm
considers a set of factors in calculating the weighted priority of each bearer
having a non-zero queue load, including channel condition of the UE, QCI class priority,
current pending data for transmission, delay of head-of-the-line packets
compared to the allowed delay as per QCI class, the running average GBR
throughput achieved with respect to the GBR rate agreed in control signaling
etc. Weights are configured for each factor by RRM, while the value of each
factor is normalized to a common scale. The aggregate weighted priority of a
bearer is calculated by multiplying the normalized value of each factor by the
weight corresponding to the factor and aggregating it. The weighted priority of
each bearer with a non-zero queue load is calculated and all bearers are sorted
in decreasing order of weighted priority. The bearer with the highest weighted
priority is allocated resources first, limited by the bearer’s available
tokens. Bearers are served in decreasing weighted priority order until either
all resources are allocated or all bearers are served. For bearers served in a
TTI, weighted priority is recalculated. Due to the decrease in both
head-of-the-line packet delays and queue load factors, and the increase in
throughput achieved, their weighted priority and position on the list move
down, thereby improving the chances of allocation for unserved bearers in the
previous TTI. For uplink, the algorithm and the factors considered are similar
except that the buffer status reports from the UE are used as the input for
current pending data for transmission.
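The per-TTI flow of the WP scheme can be sketched as below, assuming a trivial normalization and two example factors; the actual factor set, weights, and normalization functions are RRM-configured and not specified here:

```python
def weighted_priority(bearer: dict, weights: dict, normalize) -> float:
    """Aggregate weighted priority of one bearer: each factor value is
    normalized to a common scale, multiplied by its RRM-configured
    weight, and the products are summed."""
    return sum(w * normalize(f, bearer[f]) for f, w in weights.items())

def schedule_tti(bearers: list, weights: dict, normalize,
                 capacity: int) -> dict:
    """One TTI of the WP scheme: rank bearers with non-zero queue load by
    weighted priority and serve them in decreasing order, each limited by
    its available tokens, until capacity or bearers run out."""
    ranked = sorted((b for b in bearers if b['queue'] > 0),
                    key=lambda b: weighted_priority(b, weights, normalize),
                    reverse=True)
    grants = {}
    for b in ranked:
        if capacity == 0:
            break
        give = min(capacity, b['tokens'], b['queue'])
        if give:
            grants[b['id']] = give
            capacity -= give
    return grants
```

Serving a bearer lowers its head-of-line delay and queue factors for the next TTI, which is what moves it down the list and lets previously unserved bearers rise, as described above.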


The WP scheduling algorithm chooses users taking into
consideration their per-queue buffer occupancies, the priority, delay, and
throughput requirements of the QoS class, the dynamic token value of the
bearer, and the channel
conditions reported by the UE. Its greatest benefit is being highly tunable and
modular, both in terms of behavior and cost of computations. Based on deployment
and the ongoing traffic situations, weights can be fine tuned to emphasize a
particular factor or de-emphasize the impact of another factor. Segregation of
bearer in categories (signaling, GBR, non-GBR) ensures that bearers in
individual categories can be separately served based on reserved capacity as
well as separate weights. This also reduces the impact of overload conditions
of one category on another, especially non-GBR categories that are normally
over-allocated at the time of admission. The CPU cycle consumption can also be
finely controlled. The granularity of the normalization function can be made
finer or coarser, which controls the number of times the factor values need to
be collected and calculated. The bearers’ weighted priority need not be
calculated every TTI, but rather only for those served in a TTI or for which
the queue load has changed substantially due to the arrival of new packets. For
the rest of the bearers, priority can be calculated at a token interval, when
the tokens are replenished. For further optimization, the bearers can be
batched, with token updates staggered among batches. Increasing both the token
interval and the number of batches reduces CPU cycle consumption, but at the
cost of less precise control over fast-changing factor values in scheduling.
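The staggered batch replenishment can be sketched as a function returning which batch, if any, is due at a given TTI. This is one illustrative scheme, assuming the token interval divides evenly into the number of batches:

```python
def batch_due(tti: int, token_interval: int, num_batches: int):
    """Return the index of the bearer batch whose tokens (and weighted
    priorities) are refreshed at this TTI, or None if no batch is due.
    Batches are spread evenly across the token interval so the per-TTI
    recalculation work stays bounded."""
    stride = token_interval // num_batches  # TTIs between batch refreshes
    offset = tti % token_interval
    if offset % stride == 0:
        return offset // stride
    return None
```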


An important function related to QoS control is the
measurement function in Layer 2. It maintains time-averaged measurements,
including cell time and frequency resource usage, and per-QCI class measurements
like average packet delay and packet loss rate. The measurements are made
periodically and reported, on request, to the RRM monitoring function, which
uses them as input for its admission control and overload detection functions. The
measurements can also be collected and archived by the OAM Performance
Management function and can be used to derive Key Performance Indicators
(KPIs).


Mayank Kumar Rastogi, principal systems engineer, Aricent
[email protected]