1 Aim
To design a discrete-time BPSK communication receiver and analyze its BER performance.
2 Theory

2.1 ML Rule
Let \(Y\) be the output at the receiver and \(X\) be the message at the transmitter. Let \(N\) be the noise introduced by the AWGN channel, with \(N_{k} \sim \mathcal{N}(0, \sigma^2)\), and let \(E_{b}\) be the bit energy. Then, for the \(k^{th}\) bit:
$$ Y_{k} = X_{k} + N_{k} $$
Consider:
$$ P(Y_{k} = y \mid X_{k} = \sqrt{E_{b}}) $$
$$ = \lim_{\epsilon \to 0} P(Y_{k} \in (y-\epsilon, y+\epsilon) \mid X_{k}=\sqrt{E_{b}}) $$
$$ = \lim_{\epsilon \to 0} P(X_{k} + N_{k} \in (y-\epsilon, y+\epsilon) \mid X_{k}=\sqrt{E_{b}}) $$
$$ = \lim_{\epsilon \to 0} P(N_{k} \in (y-\epsilon-\sqrt{E_{b}}, y+\epsilon-\sqrt{E_{b}}) \mid X_{k}=\sqrt{E_{b}}) $$
$$ = \lim_{\epsilon \to 0} 2\epsilon f_{N_{k}}(y-\sqrt{E_{b}}) $$
$$ \boxed{P(Y_{k} = y \mid X_{k} = \sqrt{E_{b}}) = \lim_{\epsilon \to 0} 2\epsilon f_{N_{k}}(y-\sqrt{E_{b}})}\tag{1} $$
Similarly:
$$ \boxed{P(Y_{k} = y \mid X_{k} = -\sqrt{E_{b}}) = \lim_{\epsilon \to 0} 2\epsilon f_{N_{k}}(y+\sqrt{E_{b}})}\tag{2} $$
Consider the ratio of \((1)\) and \((2)\):
$$ \frac{P(Y_{k} = y \mid X_{k} = \sqrt{E_{b}})}{P(Y_{k} = y \mid X_{k} = -\sqrt{E_{b}})} = \lim_{\epsilon \to 0}\frac{2\epsilon f_{N_{k}}(y-\sqrt{E_{b}})}{2\epsilon f_{N_{k}}(y+\sqrt{E_{b}})} $$
$$ = \frac{f_{N_{k}}(y-\sqrt{E_{b}})}{f_{N_{k}}(y+\sqrt{E_{b}})} $$
$$ = \frac{\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(y-\sqrt{E_{b}})^2}{2\sigma^2}}}{\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(y+\sqrt{E_{b}})^2}{2\sigma^2}}} $$
$$ = e^{\frac{(y+\sqrt{E_{b}})^2 - (y-\sqrt{E_{b}})^2}{2\sigma^2}} $$
$$ = e^{\frac{4y\sqrt{E_{b}}}{2\sigma^2}} $$
$$ = e^{\frac{2y\sqrt{E_{b}}}{\sigma^2}} $$
$$ \boxed{\frac{P(Y_{k} = y \mid X_{k} = \sqrt{E_{b}})}{P(Y_{k} = y \mid X_{k} = -\sqrt{E_{b}})} = e^{\frac{2y\sqrt{E_{b}}}{\sigma^2}}}\tag{3} $$
Let \(B_{k}\) be the transmitted bit carried by \(X_{k}\), and let \(\hat{B}\) be the decision made at the receiver from \(Y_{k}\). Then the ML rule can be written as:
$$ \hat{B} = \begin{cases} 1, & \frac{P(Y_{k}=y \mid B_{k}=1)}{P(Y_{k}=y \mid B_{k}=0)} > 1\\ 0, & \frac{P(Y_{k}=y \mid B_{k}=1)}{P(Y_{k}=y \mid B_{k}=0)} < 1 \end{cases} $$
Equivalently, taking logarithms:
$$ \hat{B}=\begin{cases} 1, & \ln\Big({\frac{P(Y_{k}=y \mid B_{k}=1)}{P(Y_{k}=y \mid B_{k}=0)}}\Big)> 0\\ 0, & \ln\Big({\frac{P(Y_{k}=y \mid B_{k}=1)}{P(Y_{k}=y \mid B_{k}=0)}}\Big) < 0 \end{cases} $$
From the constellation diagram, bit \(0\) maps to \(\sqrt{E_{b}}\) and bit \(1\) maps to \(-\sqrt{E_{b}}\), so we can write:
$$ \hat{B}=\begin{cases} 1, & \ln\Big({\frac{P(Y_{k}=y \mid X_{k}=-\sqrt{E_{b}})}{P(Y_{k}=y \mid X_{k}=\sqrt{E_{b}})}}\Big)> 0\\ 0, & \ln\Big({\frac{P(Y_{k}=y \mid X_{k}=-\sqrt{E_{b}})}{P(Y_{k}=y \mid X_{k}=\sqrt{E_{b}})}}\Big) < 0 \end{cases}\tag{4} $$
Substituting \((3)\) in \((4)\):
$$ \hat{B}=\begin{cases} 1, & \frac{2y\sqrt{E_{b}}}{\sigma^2} < 0\\ 0, & \frac{2y\sqrt{E_{b}}}{\sigma^2}> 0 \end{cases} $$
$$ \hat{B} = \begin{cases} 1, & y < 0\\ 0, & y> 0 \end{cases} $$
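As a quick numerical check with illustrative values \(E_{b} = 1\) and \(\sigma^{2} = 0.5\): if \(y = 0.3\), the likelihood ratio in \((3)\) is \(e^{2(0.3)(1)/0.5} = e^{1.2} \approx 3.32 > 1\), so the receiver declares \(X_{k} = \sqrt{E_{b}}\), i.e. \(\hat{B} = 0\), which agrees with the sign rule \(y > 0 \Rightarrow \hat{B} = 0\).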
2.2 BER
Let \(E\) be the error bit. Then,
$$ P(E=1) = P(E=1, B=1) + P(E=1, B=0) $$
$$ P(E=1) = P(E=1 \mid B=1)P(B=1) + P(E=1 \mid B=0)P(B=0)\tag{5} $$
Given \(B=1\), the transmitted point is \(X = -\sqrt{E_{b}}\), so an error occurs when the ML rule decides \(\hat{B}=0\):
$$ P(E=1 \mid B=1) = P(\hat{B}=0 \mid B=1) $$
$$ = P(Y>0 \mid B=1) $$
$$ = P(N - \sqrt{E_{b}} > 0 \mid B=1) $$
$$ = P(N > \sqrt{E_{b}} \mid B=1) $$
$$ = P(N > \sqrt{E_{b}}) $$
since the noise is independent of the transmitted bit. Normalizing,
$$ = P\Big(\frac{N}{\sigma} > \sqrt{\frac{E_{b}}{\sigma^2}}\Big) $$
$$ = \int_{\sqrt{\frac{E_{b}}{\sigma^2}}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx $$
$$ = Q\Bigg(\sqrt{\frac{E_{b}}{\sigma^2}}\Bigg) $$
Since the noise variance of the AWGN channel is \(\sigma^2 = \frac{N_{0}}{2}\),
$$ = Q\Bigg(\sqrt{\frac{2E_{b}}{N_{0}}}\Bigg)\tag{6} $$
Similarly,
$$ P(E=1 \mid B=0) = Q\Bigg(\sqrt{\frac{2E_{b}}{N_{0}}}\Bigg)\tag{7} $$
Substituting \((6)\) and \((7)\) in \((5)\),
$$ \boxed{\therefore P_{e} = P(E=1) = Q\Bigg(\sqrt{\frac{2E_{b}}{N_{0}}}\Bigg)} $$
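As an illustration, at \(\frac{E_{b}}{N_{0}} = 9.6\) dB, i.e. \(10^{0.96} \approx 9.12\) on a linear scale,
$$ P_{e} = Q\big(\sqrt{2 \times 9.12}\big) = Q(4.27) \approx 10^{-5}, $$
the operating point often quoted for uncoded BPSK.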
3 Design
Input: NO_OF_BITS, EB_N0_DB
Output: BER
Xk = map(Bk): 0 --> +1, 1 --> -1
ML(y) = (0 if y > 0; 1 if y < 0)
for EB_N0 in EB_N0_DB:
    Nk = AWGN(EB_N0, 1D)
    Yk = Xk + Nk
    bHat = ML(Yk)
    unchangedBits = count(bHat === Bk)
    BER = 1 - (unchangedBits / NO_OF_BITS)
plot(BER, BER_THEORETICAL)
For each of the \(M\) values in EB_N0_DB, the loop processes all \(N\) bits once, so the time complexity of the algorithm is \(O(MN)\).
4 JavaScript Code
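Below is a minimal sketch of the procedure described in Section 3; the helper names (gaussian, simulateBer) are illustrative rather than taken from any library. \(E_{b}\) is normalised to 1, so the noise standard deviation is \(\sigma = \sqrt{N_{0}/2} = \sqrt{1/(2 \cdot E_{b}/N_{0})}\), and the AWGN samples are generated with the Box-Muller transform.

// Gaussian sample via the Box-Muller transform: N(0, sigma^2)
function gaussian(sigma) {
  const u1 = 1 - Math.random(); // in (0, 1], avoids log(0)
  const u2 = Math.random();
  return sigma * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

// Simulate noOfBits transmissions at one Eb/N0 (in dB) and return the BER
function simulateBer(noOfBits, ebN0Db) {
  const ebN0 = Math.pow(10, ebN0Db / 10);  // dB --> linear
  const sigma = Math.sqrt(1 / (2 * ebN0)); // sigma^2 = N0/2 with Eb = 1
  let errors = 0;
  for (let k = 0; k < noOfBits; k++) {
    const bk = Math.random() < 0.5 ? 0 : 1; // message bit Bk
    const xk = bk === 0 ? 1 : -1;           // BPSK map: 0 --> +sqrt(Eb), 1 --> -sqrt(Eb)
    const yk = xk + gaussian(sigma);        // AWGN channel: Yk = Xk + Nk
    const bHat = yk > 0 ? 0 : 1;            // ML rule: sign detector
    if (bHat !== bk) errors++;
  }
  return errors / noOfBits;
}

// Sweep Eb/N0 from 0 dB to 10 dB
for (let db = 0; db <= 10; db++) {
  console.log(db + " dB: BER = " + simulateBer(1e6, db));
}

With \(10^{6}\) bits per point, this gives BER estimates down to about \(10^{-5}\), in line with the bit-count requirement noted in Section 5.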
5 Results and Inference
From the above figures, it can be seen that the simulated \(P_{e}\) and the theoretical \(P_{e}\) match more closely as the number of bits increases. This is because, to have confidence in the simulated results, a sufficient number of bit errors must be observed. For example, in figure \((5)\), to measure a bit error rate of \(10^{-5}\), one needs to send at least \(10^{6}\) bits. Similar conclusions can be drawn from the other figures shown above. It can also be seen that \(P_{e}\) falls steadily as \(\frac{E_{b}}{N_{0}}\), the SNR per bit, increases: with a higher SNR per bit, the signal is less affected by the noise, so the ML detector makes fewer errors. Finally, it is worth noting that the simulated curve is affected not only by symbol errors but also by floating-point errors.
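For reference, the theoretical curve \(P_{e} = Q\big(\sqrt{2E_{b}/N_{0}}\big)\) has to be evaluated numerically, since JavaScript's Math object provides neither \(Q\) nor erfc; the sketch below uses the Abramowitz-Stegun approximation 7.1.26 for erfc (absolute error below \(1.5 \times 10^{-7}\)).

// erfc via Abramowitz & Stegun approximation 7.1.26
function erfc(x) {
  const z = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * z);
  const poly = t * (0.254829592 + t * (-0.284496736 +
               t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const y = poly * Math.exp(-z * z);
  return x >= 0 ? y : 2 - y; // erfc(-x) = 2 - erfc(x)
}

// Q(x) = 0.5 * erfc(x / sqrt(2))
function qfunc(x) { return 0.5 * erfc(x / Math.SQRT2); }

// Theoretical BPSK BER at a given Eb/N0 in dB
function berTheoretical(ebN0Db) {
  const ebN0 = Math.pow(10, ebN0Db / 10);
  return qfunc(Math.sqrt(2 * ebN0));
}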