What is the Right Metric to Understand 5G Processing Throughput? Well, it's not Peak Speed...
Barry Graham, Chief Marketing Officer, AccelerComm Limited
May 21, 2025
One question most of our customers ask when they are designing our technology into their systems is: what is the processing throughput? It turns out that the answer is a little more complex than you might think. It’s tempting to simply state the peak throughput of the component in question, but that can be quite misleading. It is not always safe to assume that a subsystem or component that meets the peak requirement of the system also meets all of its throughput requirements. To understand why, we need to delve into the details of the 5G air interface.
One of the key features of the 3GPP 5G standards is flexibility. The 5G air interface is designed to serve a wide range of use cases, from mobile broadband to IoT and from indoor coverage to long-range outdoor and even satellite applications. The air interface is rather like a motorway: the most important question is not the maximum speed limit, but how many cars can drive along it at average speed during rush hour. To achieve this, the interface has to be able to adapt a large number of its technical parameters to optimise performance across these diverse use cases. In all radio networks there is a trade-off between the various performance variables. To illustrate the challenge this presents we’ll focus here on error correction, but the same principles apply to several other critical components in a 5G physical layer.
For transfers of user data, 5G uses an error correction code known as Low Density Parity Check (LDPC)*. LDPC is called a forward error correction code because the transmitter adds redundant bits to the user bits so that the receiver can correct errors introduced by the RF channel. The level of protection can be configured by changing what is known as the coding rate. We’ll explore coding rates in more detail in a future blog post.
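Ahead of that post, here is a minimal Python sketch to make the idea concrete: the coding rate is the ratio of user bits to the total bits sent after the encoder has added its parity bits, so a lower rate means more parity and stronger protection. The block size and rates below are arbitrary example values, not tied to any particular 5G configuration.

```python
# Illustration of coding rate: ratio of user (information) bits to the total
# bits transmitted once the encoder has added its parity bits.
# The block size and rates below are example values only.

def coded_bits(info_bits: int, coding_rate: float) -> int:
    """Total bits on air for a given number of information bits and coding rate."""
    return round(info_bits / coding_rate)

info = 1000                                   # user bits to protect (example value)
for rate in (1 / 3, 1 / 2, 2 / 3, 5 / 6):
    total = coded_bits(info, rate)            # user bits + parity bits
    print(f"rate {rate:.2f}: {info} user bits -> {total} coded bits "
          f"({total - info} parity bits)")
```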
Once a coding rate has been chosen, the size of the equally sized code blocks that the data will be broken into for transmission is calculated. Block sizes can range from 24 to 8448 bits. Finally, the user bits are encoded into the code blocks using one of two base graphs.
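As a rough sketch of how that segmentation works, the snippet below follows the general shape of the 3GPP TS 38.212 rules for base graph selection and code block segmentation. It is deliberately simplified (transport block CRC length, filler bits and lifting sizes are glossed over), so treat the specification as the authority.

```python
import math

# Simplified sketch of LDPC base graph selection and code block segmentation,
# loosely following 3GPP TS 38.212. Corner cases (16- vs 24-bit transport
# block CRC, filler bits, lifting sizes) are deliberately omitted.

def select_base_graph(tb_bits: int, coding_rate: float) -> int:
    """Return 1 or 2 for LDPC base graph 1 or 2."""
    if tb_bits <= 292 or (tb_bits <= 3824 and coding_rate <= 0.67) or coding_rate <= 0.25:
        return 2
    return 1

def segment(tb_bits: int, base_graph: int) -> tuple[int, int]:
    """Return (number of code blocks, bits per code block)."""
    k_cb = 8448 if base_graph == 1 else 3840   # maximum code block size per base graph
    b = tb_bits + 24                           # transport block plus its CRC (simplified)
    if b <= k_cb:
        return 1, b
    c = math.ceil(b / (k_cb - 24))             # each code block also carries a 24-bit CRC
    return c, math.ceil((b + 24 * c) / c)

tb, rate = 100_000, 0.8                        # a large, high-rate transport block (example)
bg = select_base_graph(tb, rate)
print(f"base graph {bg}: {segment(tb, bg)}")   # -> many near-maximum-size code blocks
```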
At the decoder we can already see that we need considerable flexibility. The decoder must be able to deliver the required throughput across the mix of block sizes and coding rates that we expect in the use case we are designing for. For dense urban use cases with massive MIMO (mMIMO) we can expect to see a large proportion of blocks near the maximum size. For Non-Terrestrial Network (NTN) use cases we typically see many blocks in the 1000 – 2000 bit range, as the lower capacity is divided across many users.
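To see why the mix matters as much as the peak, here is a back-of-the-envelope sketch. The traffic mixes and the 1 Gb/s figure are invented for illustration, not measured data, but they show the effect: sustaining the same data rate with mostly small blocks means the decoder must process far more blocks per second.

```python
# Hypothetical illustration: code blocks per second needed to sustain a given
# data rate under two assumed block-size mixes. The mixes and the 1 Gb/s
# target are invented for illustration, not measured traffic.

def blocks_per_second(data_rate_bps: float, mix: dict[int, float]) -> float:
    """mix maps a representative block size (bits) to its share of the traffic."""
    return sum(data_rate_bps * share / size for size, share in mix.items())

urban_mmimo = {8448: 0.7, 4000: 0.2, 1500: 0.1}   # mostly near-maximum blocks
ntn         = {1500: 0.7, 3000: 0.2, 8448: 0.1}   # mostly 1000-2000 bit blocks

for name, mix in (("dense urban mMIMO", urban_mmimo), ("NTN", ntn)):
    print(f"{name}: {blocks_per_second(1e9, mix):,.0f} blocks/s at 1 Gb/s")
```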
The classic LDPC decoder design has a throughput proportional to block length. The AccelerComm LDPC decoder, however, uses an innovative design that can be configured to provide relatively higher throughput at lower block sizes. The graph compares a classic design with a configuration of the AccelerComm decoder that might be chosen for this NTN application, both with the same peak throughput.
We can see that the AccelerComm decoder delivers about 4 to 5X the throughput at the 1000 to 2000 bit block sizes that we expect to dominate in a 5G satellite scenario. A system incorporating this option will be capable of higher throughput, or can be dimensioned with fewer resources and lower associated power, or some combination of the two.
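As a simple model of the classic behaviour described above, the sketch below assumes a 10 Gb/s peak reached at the maximum block size; that peak figure is an assumption chosen for illustration, not an AccelerComm specification. It shows how little of the peak a block-length-proportional design actually delivers at NTN-typical block sizes, which is the headroom a configurable design can recover.

```python
# Illustrative model of a "classic" LDPC decoder whose throughput scales
# linearly with block length. The 10 Gb/s peak is an assumed figure chosen
# for illustration only.

PEAK_GBPS = 10.0       # assumed peak throughput, reached at the maximum block size
MAX_BLOCK = 8448       # maximum LDPC code block size in bits

def classic_throughput(block_bits: int) -> float:
    """Delivered throughput when per-block processing time is roughly constant."""
    return PEAK_GBPS * block_bits / MAX_BLOCK

for size in (8448, 4000, 2000, 1000):
    t = classic_throughput(size)
    print(f"{size:>5}-bit blocks: {t:4.1f} Gb/s ({t / PEAK_GBPS:.0%} of peak)")
```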
LDPC block size is just one example of where it is advantageous to be able to engineer the physical layer to match the requirements of the expected use cases, so that maximum performance is achieved efficiently. We’ll look at others in future blog posts.