- [[#Motivation]]
- [[#Background on EbNo and Statistical Models of Bit Errors]]
- [[#The EbNo Curve]]
- [[#Why the Complementary Error Function?]]
- [[#Statistical Model of Bit Errors]]
- [[#Relative Error of a Measurement]]
- [[#Simplify Based on Realistic p]]
- [[#Interpretation and using N bit flips]]
- [[#Confidence Level of a Measurement]]
- [[#Example For Threshold Testing]]
- [[#Implementation Loss Requires Accuracy]]
- [[#Summary: When to Use Accuracy vs Confidence]]
- [[#References]]
- [[#Connections | Questions:]]
## Motivation
- One of the most important characteristics of a digital receiver is the **Bit Error Rate (BER)** versus **Input Power Level\* (Eb/No)** curve.
- Depending on the data rate, the time these tests take can become impractical.
- There are trade-offs between:
- Being confident that the BER at a given Eb/No is **below** a certain value, or
- Being **accurate** in measuring the **precise** value of the BER.
There are situations where precision is important, but typically test time can be significantly reduced by testing only to a **confidence level** rather than measuring the precise BER.
\*Normalized energy per bit over the noise spectral density.
## Background on EbNo and Statistical Models of Bit Errors
### The EbNo Curve
Simply put, the EbNo curve shows the rate of bit errors at a given input power (energy per bit) for a digital communications system.
The theoretical curve of EbNo vs BER can be plotted by using the **complementary error function (erfc)**:
$\text{BER}=\frac{1}{2}\text{erfc}\Bigg(\sqrt{\frac{E_b}{N_0}}\Bigg)$
Where
- $E_b$ is the energy per bit
- $N_0$ is the spectral noise density
The implementation loss $I$ (in dB) of a receiver reduces the effective energy per bit, and the EbNo equation becomes:
$\text{BER}=\frac{1}{2}\text{erfc}\Bigg(\sqrt{\frac{E_b}{N_0}\cdot 10^{-I/10}}\Bigg)$
Figure 1 contains the plots for EbNo vs BER with an implementation loss of 0 dB (theoretical), 1 dB, and 2 dB.
![[4 - Attachments/image 21.png|image 21.png]]
Figure 1: Eb/No vs BER for a BPSK signal with Increasing Implementation Losses
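The curves in Figure 1 can be reproduced with a short script. The sketch below (assuming `numpy`, `scipy`, and `matplotlib` are available; the function name `ber_bpsk` is just an illustrative choice) applies the implementation loss in dB before evaluating the erfc expression; the 0 dB trace is the theoretical BPSK curve.

```python
import numpy as np
from scipy.special import erfc
import matplotlib.pyplot as plt

def ber_bpsk(ebno_db, impl_loss_db=0.0):
    """Theoretical BPSK BER with an implementation loss given in dB."""
    ebno_lin = 10 ** ((ebno_db - impl_loss_db) / 10)  # subtract the loss in the dB domain
    return 0.5 * erfc(np.sqrt(ebno_lin))

ebno_db = np.linspace(0, 12, 200)
for loss in (0, 1, 2):  # dB of implementation loss, as in Figure 1
    plt.semilogy(ebno_db, ber_bpsk(ebno_db, loss), label=f"{loss} dB loss")

plt.xlabel("Eb/No (dB)")
plt.ylabel("BER")
plt.legend()
plt.grid(True, which="both")
plt.show()
```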
The plot shows that increased implementation loss shifts the theoretical curve to the right. It is therefore useful to determine:
- When the receiver **starts getting errors** (threshold), and
- The **implementation loss** of the receiver.
The result of a receiver test can be plotted onto Figure 1 and the implementation loss derived based on which theoretical curves it sits between.
### Why the Complementary Error Function?
In BPSK, it is typical to use NRZ (Non-Return-to-Zero) signaling, which maps bit 1 → +1 and bit 0 → -1. These values are normalized; the actual transmitted values are scaled by the square root of the energy per bit:
- $+\sqrt{E_b}$ for bit 1
- $-\sqrt{E_b}$ for bit 0
However, the system is non-ideal and the signal is corrupted by **additive white Gaussian noise (AWGN)**, that is, random noise with a normal (Gaussian) distribution.
Therefore instead of the receiver seeing $\pm\sqrt{E_b}$, it sees
$r=\pm\sqrt{E_b}+z$
Where $z$ is a zero-mean Gaussian random variable whose variance is the noise power in a unit bandwidth, $N_0/2$.
The receiver decides as follows:
- if $r>0$, the bit is considered a 1
- if $r<0$, the bit is considered a 0
If a 0 was transmitted (so $-\sqrt{E_b}$ was sent) and $z>\sqrt{E_b}$, the received value crosses the decision boundary and a bit error occurs. This raises the question: what is the probability that this happens?
More pointedly: What is the probability that the normal random variable $z$ will obtain a value larger than $\sqrt{E_b}$?
This aligns with the definition of the **Q-function,** which gives the probability that a normal random variable will take a value more than $z$ standard deviations above its mean.
$Q(z)=\int_z^\infty \frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}dy$
Where:
- $z=\frac{\sqrt{E_b}-\mu}{\sigma}=\sqrt{\frac{2E_b}{N_0}}$ is the normalized variable of interest
- i.e. convert the distribution so that the mean $\mu$ shifts to 0 and the standard deviation $\sigma$ scales to 1
- $y$ is the variable of integration, running from $z$ to $\infty$
$Q(\sqrt{2E_b/N_0})=\frac{1}{\sqrt{2\pi}}\int_{\sqrt{2E_b/N_0}}^\infty e^{-\frac{y^2}{2}}dy$
The following substitutions can be made:
- $\sqrt2x=y$
- $\sqrt{2}dx=dy$
- and thus the bounds will also scale with this substitution
$Q(\sqrt{2E_b/N_0})=\frac{1}{\sqrt{\pi}}\int_{\sqrt{E_b/N_0}}^\infty e^{-x^2}dx$
The complementary error function is defined as:
$\text{erfc}(z)=\frac{2}{\sqrt{\pi}}\int_z^\infty e^{-x^2}dx$
Which shows that:
$Q(\sqrt{2E_b/N_0})=\frac{1}{2}\text{erfc}(\sqrt{E_b/N_0})$
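The identity can be spot-checked numerically. A minimal sketch, assuming `scipy` is available: `scipy.stats.norm.sf` is the Gaussian tail probability, i.e. the Q-function, and both sides agree at each Eb/No value.

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import norm

for ebno_db in (0, 3, 6, 9):
    ebno = 10 ** (ebno_db / 10)            # linear Eb/No
    q = norm.sf(np.sqrt(2 * ebno))         # Q(sqrt(2*Eb/No)): standard normal tail probability
    half_erfc = 0.5 * erfc(np.sqrt(ebno))  # (1/2) erfc(sqrt(Eb/No))
    print(f"{ebno_db} dB: Q = {q:.3e}, 0.5*erfc = {half_erfc:.3e}")
```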
Figure 2 shows the Gaussian bell curve with the erfc area shaded. The value $x=\sqrt{E_b/N_0}$ is the point beyond which noise flips a bit: the detection boundary after normalizing the noise to unit variance.
![[4 - Attachments/image 1 9.png|image 1 9.png]]
Figure 2: Gaussian Distribution Showing the erfc Area starting at x
### Statistical Model of Bit Errors
A bit error occurs when a received bit differs from the transmitted bit.
Example: If `10101010` is transmitted but `10101011` is received, there is 1 bit error → BER = 1/8 = 12.5%.
Bit errors can be thought of as a biased coin flip where:
- $n$ = total number of bits sent (coin flips)
- $p$ = probability of bit error (like tails)
- $q = 1-p$ = probability of correct reception (like heads)
Statistical definitions:
- **Expected Value** $\text{EV} =np$
- **Variance** $\sigma^2 =npq$
- **Standard deviation** $\sigma =\sqrt{npq}$
By the empirical rule:
> ~99.7% of tests will yield an error count within $\text{EV}\pm3\sigma$
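The coin-flip model is easy to simulate. The sketch below (a minimal Monte Carlo check, assuming `numpy`; the values of $n$ and $p$ are arbitrary illustrative choices) draws many BER tests from a binomial distribution and compares the observed spread of error counts against $np$ and $\sqrt{npq}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1_000_000, 1e-4        # bits per test and assumed BER (illustrative values)
q = 1 - p
trials = 10_000               # number of simulated BER tests

errors = rng.binomial(n, p, size=trials)   # bit-error count for each simulated test

ev, sigma = n * p, np.sqrt(n * p * q)
within_3sigma = np.mean(np.abs(errors - ev) <= 3 * sigma)

print(f"expected errors : {ev:.1f}")
print(f"std deviation   : {sigma:.2f} (simulated {errors.std():.2f})")
print(f"within EV ± 3σ  : {within_3sigma:.4f}")   # ≈ 0.997 by the empirical rule
```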
## **Relative Error of a Measurement**
The **relative error** is defined as the z-score times the ratio of the standard deviation to the expected number of errors:
$\epsilon_{z\sigma}=z\frac{\sigma}{np}=\frac{z\sqrt{\frac{pq}{n}}}{p}$
$\epsilon=\frac{z\sqrt{q}}{\sqrt{np}}$
Where:
- $z$ = z-score (e.g. 3 for 99.7% confidence)
- $n$ = number of bits sent
- $p$ = BER
### **Simplify Based on Realistic** $p$
The relative error equation can be simplified by considering that the nominal operation of a receiver will almost always have a small BER. Based on the following:
|$p$|$\sqrt{q}$|
|---|---|
|0.9|0.316|
|0.5|0.707|
|1E-1|0.949|
|1E-2|0.995|
|1E-3|0.999|
$\sqrt{q}=\sqrt{1-p}\approx1\quad\text{for small }p$
Therefore so long as $p < 10^{-1}$,
$\epsilon=\frac{z}{\sqrt{np}}$
### **Interpretation and using N bit flips**
The total number of bit flips can be understood as the number of bits sent times the BER:
$N=np$
$\epsilon=\frac{z}{\sqrt{N}}$
The following conclusions can be drawn from the equation:
1. $\epsilon \propto z$
2. $\epsilon \propto \frac{1}{\sqrt{N}} \propto\frac{1}{\sqrt{np}}$
This means the relative error can be reduced by increasing the number of observed bit errors (more bits sent, or a higher bit error rate) or by decreasing the z-score (and thus decreasing confidence).
If the number of bit errors does not increase, the relative error will remain the same. The relative error will _always be high_ for small $N$. Therefore a test that is run with no errors, or a single error, will have a large relative error in the BER, and thus the implementation loss cannot be accurately measured.
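The effect is easy to tabulate. A small sketch, assuming a 3σ z-score, prints the relative error $\epsilon = z/\sqrt{N}$ for increasing numbers of observed bit errors:

```python
import math

z = 3  # z-score for ~99.7% confidence
for n_errors in (1, 10, 100, 900, 10_000):
    rel_err = z / math.sqrt(n_errors)          # epsilon = z / sqrt(N)
    print(f"N = {n_errors:>6}: relative error = {rel_err:.1%}")
```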
## **Confidence Level of a Measurement**
Often, it is good enough to determine that the BER is below a certain threshold with a specific confidence level rather than getting a precise value.
Let $P(\epsilon)$ be the true probability of a bit error (the BER), $P'(\epsilon)$ its estimate, $n$ the number of bits sent, and $N$ the number of bit errors counted:
$P'(\epsilon)=\frac{N}{n}$
The Confidence Level $\text{CL}$ is defined as the probability that $P(\epsilon)$ is less than an upper limit $\gamma$.
$\text{CL}=P[\ P(\epsilon)<\gamma \ ]$
It can then be said with $\text{CL}$% confidence that $P(\epsilon)$ is less than $\gamma$; that is, $P(\epsilon)$ is expected to lie below $\gamma$ in $\text{CL}$% of such tests.
See Reference 1 for the derivation; the equation works out to be
$np_h=-\ln(1-\text{CL})+\ln \bigg(\sum^N_{k=0}\frac{(np_h)^k}{k!}\bigg)$
Where:
- $n$ is the number of bits sent
- $p_h = \text{BER}_\text{max}$ is the hypothetical maximum bit error rate at a given $\text{CL}$
- $\text{CL}$ is the confidence level
- $N$ is the actual number of bit flips in the test
Letting the total number of bit errors $N=0$, the sum reduces to 1 and the equation simplifies to:
$np_h=-\ln(1-\text{CL})$
And if $N=1$ (which must be solved numerically for $np_h$):
$np_h=-\ln(1-\text{CL})+\ln(1+np_h)$
It should be noted that $n$ and $p_h$ only appear as a product, which means they can be traded against each other freely so long as the product of the two remains constant.
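Because only the product $np_h$ appears, the equation can be solved once for $np_h$ and then rescaled to any $n$ or $p_h$. The sketch below (a simple fixed-point iteration; the helper name `n_times_p` is just an illustrative choice, and convergence is assumed for the small $N$ used here) solves for $np_h$ given a confidence level and an observed error count $N$.

```python
import math

def n_times_p(cl, n_errors, iterations=100):
    """Solve n*p_h = -ln(1-CL) + ln(sum_{k=0..N} (n*p_h)^k / k!) by fixed-point iteration."""
    x = -math.log(1 - cl)              # starting point: exact solution for N = 0
    for _ in range(iterations):
        s = sum(x**k / math.factorial(k) for k in range(n_errors + 1))
        x = -math.log(1 - cl) + math.log(s)
    return x

print(n_times_p(0.97, 0))   # ≈ 3.51
print(n_times_p(0.97, 1))   # ≈ 5.36
```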
### **Example For Threshold Testing**
Suppose a receiver needs to have a BER < $10^{-6}$ with 97% confidence. Using the $N=0$ equation, the number of bits that must be sent with 0 errors is:
$n=\frac{-\ln(1-\text{CL})}{p_h}=\frac{-\ln(1-0.97)}{10^{-6}}\approx3.5\times10^{6}\text{ bits}$
For a frame size of 100 bytes (800 bits), 4460 frames must be sent ($n = 3{,}568{,}000$ bits). If after sending 4460 frames 0 errors are detected, there is a confidence level of 97% that the actual BER is less than $10^{-6}$.
Since no bit errors were received, the measured BER is 0. If it is assumed that the _next_ bit sent causes an error, the measured BER becomes $2.8 \times 10^{-7}$. It should be clear that if a test is run with 0 errors, the relative error of the measurement is high: even a single bit error moves the measured BER from 0 to $2.8 \times 10^{-7}$.
If a bit flip did occur during this test, the $\text{CL}$ equation would have to be solved numerically, giving $\text{BER}_\text{max} = 1.5 \times 10^{-6}$ for the same $n$ and $\text{CL}$.
When 1 bit flip is assumed, the measured BER of $2.8 \times 10^{-7}$ compares to a numerically calculated $\text{BER}_\text{max} = 1.5 \times 10^{-6}$. Only the next bit was assumed to be an error, but what about the bit after that, and the bit after that? This gets at the heart of why the relative error is so high with 0 or 1 bit errors, and how confidence level testing can be used to validate threshold results instead.
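These numbers can be reproduced directly. A minimal, self-contained sketch (the bit count and confidence level below are taken from this example) solves the $N=0$ and $N=1$ cases numerically:

```python
import math

cl = 0.97
n_bits = 3_568_000                                # bits sent in this example (4460 frames x 800 bits)

# N = 0: bits required so that BER < 1e-6 with 97% confidence
bits_for_zero_errors = -math.log(1 - cl) / 1e-6

# N = 1: solve n*p_h = -ln(1-CL) + ln(1 + n*p_h) by fixed-point iteration
x = -math.log(1 - cl)
for _ in range(50):
    x = -math.log(1 - cl) + math.log(1 + x)

print(f"bits for 0 errors : {bits_for_zero_errors:.3g}")   # ~3.5e6
print(f"BER_max, 1 error  : {x / n_bits:.2g}")             # ~1.5e-6
print(f"measured BER      : {1 / n_bits:.2g}")             # ~2.8e-7
```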
Now, calculating the relative error of the test with 1 bit flip (measured BER $p = 1/n = 2.8\times10^{-7}$):
$\epsilon=\frac{z\sqrt{\frac{pq}{n}}}{p}=\frac{3\sqrt{\frac{2.8\times10^{-7}\,(1-2.8\times10^{-7})}{3{,}568{,}000}}}{2.8\times10^{-7}}\approx300\%$
Using the simplified form:
$\epsilon=\frac{z}{\sqrt{N}}=\frac{3}{1}=300\%$
This shows that the relative error depends only on the number of errors observed: with 0 or 1 errors it is independent of the number of bits sent or the underlying BER. Even though there is high confidence that the BER is below a certain level, the result cannot be very precise.
## **Implementation Loss Requires Accuracy**
To calculate **implementation loss**, an accurate BER vs Eb/No curve needs to be plotted and compared to theory. The resulting curve is only useful if the error in the measurement is relatively small.
For a measurement with **10% relative error** at $z = 3$ (99.7% confidence):
$\epsilon = \frac{z}{\sqrt{N}} \;\Rightarrow\; N = \Big(\frac{3}{0.1}\Big)^2 = 900\text{ bit errors}$
At BER = $10^{-3}$, that requires:
$n = \frac{N}{p} = \frac{900}{10^{-3}} = 900{,}000\text{ bits}$
Accurately measuring implementation loss therefore requires:
- **High enough BER** to get meaningful error statistics
- **Sufficient test time** to accumulate those errors
Otherwise, the relative error will dominate the measurement and the implementation loss will not be accurately measured.
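Putting the two requirements together, the sketch below (assuming $z=3$ and a 10% relative-error target; the BER operating points are illustrative) estimates how many bit errors, and therefore how many bits, are needed per test point.

```python
import math

z = 3                      # z-score for ~99.7% confidence
target_rel_err = 0.10      # 10% relative error

n_errors = math.ceil((z / target_rel_err) ** 2)   # bit errors needed: 900
for ber in (1e-3, 1e-4, 1e-5):
    bits = n_errors / ber                         # bits needed to accumulate those errors
    print(f"BER {ber:.0e}: need {n_errors} errors -> {bits:.2e} bits")
```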
## Summary: When to Use Accuracy vs Confidence
- **Confidence level testing** is appropriate when demonstrating that the bit error rate (BER) is _below_ a specified threshold, even if the precise value is unknown.
- **Relative error analysis** is necessary when accurate BER measurements are required, such as for plotting Eb/No curves or evaluating receiver performance.
- Tests resulting in **zero or one bit error** may provide high confidence, but yield low measurement accuracy due to high relative error.
- Reducing relative error requires observing a sufficient number of bit errors; increasing the number of transmitted bits alone is insufficient if errors remain rare.
- Accurate determination of **implementation loss** necessitates a large number of bit errors per EbNo test point to enable reliable curve fitting and performance comparison.
## References
1. "Statistical Confidence Levels for Estimating Error Probability," _Lightwave Magazine_, April 2000. [https://www.math.tecnico.ulisboa.pt/~apires/pe/AN1095.pdf](https://www.math.tecnico.ulisboa.pt/~apires/pe/AN1095.pdf)
2. GaussianWaves, "Q function and Error functions." [https://www.gaussianwaves.com/2012/07/q-function-and-error-functions/](https://www.gaussianwaves.com/2012/07/q-function-and-error-functions/)
![[Connect#Connect]]