1 Aim
To design a discrete-time BFSK communication receiver and analyze its BER performance.
2 Theory

2.1 ML Rule
Let \(Y\) be the output at the receiver and \(X\) be the message at the transmitter. Let \(N\) be the noise introduced by the AWGN channel, with variance \(\sigma^2 = \frac{N_{0}}{2}\) per dimension, and let \(E_{b}\) be the bit energy. Let \(B_{k}\) be the \(k\)-th transmitted bit, mapped onto the orthogonal pair \(X_{k} = [\sqrt{E_{b}},\, 0]\) when \(B_{k}=1\) and \(X_{k} = [0,\, \sqrt{E_{b}}]\) when \(B_{k}=0\), and let \(\hat{B}\) be the bit detected from the received vector \(Y_{k}=[Y_{1k}, Y_{2k}]\). Then the ML rule can be written as:

$$ \hat{B} = \begin{cases} 1, & \frac{f_{Y}(Y_{k}=y \mid B_{k}=1)}{f_{Y}(Y_{k}=y \mid B_{k}=0)} > 1\\ 0, & \frac{f_{Y}(Y_{k}=y \mid B_{k}=1)}{f_{Y}(Y_{k}=y \mid B_{k}=0)} < 1 \end{cases} $$

Taking logarithms:

$$ \hat{B}=\begin{cases} 1, & \ln\Big(\frac{f_{Y}(Y_{k}=y \mid B_{k}=1)}{f_{Y}(Y_{k}=y \mid B_{k}=0)}\Big)> 0\\ 0, & \ln\Big(\frac{f_{Y}(Y_{k}=y \mid B_{k}=1)}{f_{Y}(Y_{k}=y \mid B_{k}=0)}\Big) < 0 \end{cases} $$

Substituting the Gaussian densities:

$$ \hat{B}=\begin{cases} 1, & \ln\Bigg(\frac{\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(Y_{1k}-\sqrt{E_{b}})^2}{2\sigma^2}}\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(Y_{2k}-0)^2}{2\sigma^2}}}{\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(Y_{1k}-0)^2}{2\sigma^2}}\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(Y_{2k}-\sqrt{E_{b}})^2}{2\sigma^2}}}\Bigg)> 0 \\ 0, & \ln\Bigg(\frac{\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(Y_{1k}-\sqrt{E_{b}})^2}{2\sigma^2}}\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(Y_{2k}-0)^2}{2\sigma^2}}}{\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(Y_{1k}-0)^2}{2\sigma^2}}\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(Y_{2k}-\sqrt{E_{b}})^2}{2\sigma^2}}}\Bigg) < 0\end{cases} $$

Expanding the squares and dropping the common positive factor \(\frac{1}{2\sigma^2}\):

$$ \hat{B} = \begin{cases} 1, & 2\sqrt{E_{b}}(Y_{1k} - Y_{2k}) > 0\\ 0, & 2\sqrt{E_{b}}(Y_{1k} - Y_{2k}) < 0 \end{cases} $$

$$ \boxed{\hat{B}=\begin{cases} 1, & Y_{1k}> Y_{2k}\\ 0, & Y_{1k} < Y_{2k} \end{cases}} $$
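In code, the boxed rule is a single comparison per received pair. A minimal JavaScript sketch (the function name mlDetect is illustrative, not from the original code):

```javascript
// ML detection for coherent BFSK per the boxed rule:
// decide 1 if Y1k exceeds Y2k, 0 otherwise (ties have probability zero).
const mlDetect = (y1, y2) => (y1 > y2 ? 1 : 0);
```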
2.2 BER
Let \(E\) be the bit-error indicator. Then,

$$ P(E=1) = P(\hat{B} \neq B) $$
$$ = P(\hat{B}=0 \mid B=1)P(B=1) + P(\hat{B}=1 \mid B=0)P(B=0) $$

Since both bits are equally likely and the problem is symmetric in the two dimensions:

$$ P(B=1) = P(B=0), \quad P(\hat{B}=0 \mid B=1) = P(\hat{B}=1 \mid B=0) $$
$$ \Longrightarrow P(E=1) = P(\hat{B}=1 \mid B=0) $$
$$ = P(Y_{1k} - Y_{2k} > 0 \mid B=0) $$

Given \(B=0\), \(Y_{1k} = N_{1k}\) and \(Y_{2k} = \sqrt{E_{b}} + N_{2k}\), so:

$$ = P(N_{1k} - N_{2k} > \sqrt{E_{b}}) $$

The difference \(N_{12} = N_{1k} - N_{2k}\) is Gaussian with zero mean and variance \(2\sigma^2\), hence:

$$ = P\Bigg(\frac{N_{12}}{\sqrt{2}\sigma} > \sqrt{\frac{E_{b}}{2\sigma^2}}\Bigg) = Q\Bigg(\sqrt{\frac{E_{b}}{2\sigma^2}}\Bigg) $$

Substituting \(\sigma^2 = \frac{N_{0}}{2}\):

$$ \therefore \boxed{P_{e} = P(E=1) = Q\Bigg(\sqrt{\frac{E_{b}}{N_{0}}}\Bigg)} $$
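As a quick numerical check of the boxed formula, at \(\frac{E_{b}}{N_{0}} = 8\,\text{dB}\):

$$ \frac{E_{b}}{N_{0}} = 10^{8/10} \approx 6.31, \qquad P_{e} = Q\big(\sqrt{6.31}\big) = Q(2.51) \approx 6.0 \times 10^{-3} $$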
3 Design
Input: NO_OF_BITS, EB_N0_DB
Output: BER
Xk = sqrt(Eb) * map(Bk)    (0 --> [0, 1], 1 --> [1, 0], the Section 2.1 mapping)
ML([y1, y2]) = (1 if y1 - y2 > 0; 0 if y1 - y2 < 0)
for EB_N0 in EB_N0_DB
    Nk = AWGN(EB_N0, 2D)    (i.i.d. Gaussian, variance N0/2 per dimension)
    Yk = Xk + Nk
    bHat = ML(Yk)
    correctBits = count(bHat === Bk)
    BER = 1 - (correctBits / NO_OF_BITS)
plot(BER, BER_THEORETICAL)
For each value of \(\frac{E_{b}}{N_{0}}\), every bit is mapped, perturbed by noise, and detected exactly once, so the time complexity is \(O(N)\) per \(\frac{E_{b}}{N_{0}}\) point, where \(N\) is NO_OF_BITS, and \(O(MN)\) overall for \(M\) SNR points.
4 JavaScript Code
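A self-contained JavaScript implementation of the design in Section 3 can be sketched as follows. This is a sketch, not the original listing: it assumes \(E_{b}\) is normalized to 1 and \(\sigma^2 = N_{0}/2\) per noise dimension, and the helper names randn, erfc, Q, and simulateBER are illustrative.

```javascript
// Standard normal sample via the Box-Muller transform.
function randn() {
  let u = 0;
  while (u === 0) u = Math.random(); // avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Complementary error function (Abramowitz-Stegun 7.1.26 approximation).
function erfc(x) {
  const z = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * z);
  const poly = t * (0.254829592 + t * (-0.284496736 +
               t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const val = poly * Math.exp(-z * z);
  return x >= 0 ? val : 2 - val;
}

const Q = (x) => 0.5 * erfc(x / Math.SQRT2); // Gaussian Q-function

function simulateBER(noOfBits, ebN0Db) {
  const Eb = 1;                               // bit energy (normalized)
  const N0 = Eb / Math.pow(10, ebN0Db / 10);  // noise spectral density
  const sigma = Math.sqrt(N0 / 2);            // per-dimension std deviation
  let errors = 0;
  for (let k = 0; k < noOfBits; k++) {
    const b = Math.random() < 0.5 ? 0 : 1;
    // Section 2.1 mapping: 1 -> [sqrt(Eb), 0], 0 -> [0, sqrt(Eb)]
    const y1 = (b === 1 ? Math.sqrt(Eb) : 0) + sigma * randn();
    const y2 = (b === 0 ? Math.sqrt(Eb) : 0) + sigma * randn();
    const bHat = y1 > y2 ? 1 : 0; // boxed ML rule of Section 2.1
    if (bHat !== b) errors++;
  }
  return errors / noOfBits;
}

// Sweep Eb/N0 and compare against the theoretical Q(sqrt(Eb/N0)).
for (let db = 0; db <= 10; db += 2) {
  const sim = simulateBER(1e6, db);
  const theo = Q(Math.sqrt(Math.pow(10, db / 10)));
  console.log(`Eb/N0 = ${db} dB: simulated ${sim.toExponential(3)}, ` +
              `theoretical ${theo.toExponential(3)}`);
}
```

Box-Muller is used because JavaScript's Math.random() only supplies uniform samples, and the Abramowitz-Stegun polynomial keeps the theoretical curve dependency-free.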
5 Results and Inference
From the above figures, it can be seen that the simulated \(P_{e}\) and the theoretical \(P_{e}\) match more closely as the number of bits increases. This is because, to have confidence in the simulated results, a sufficient number of bit errors must be observed. For example, in figure \((5)\), to measure a bit error rate of \(10^{-5}\) reliably, one needs to send at least \(10^6\) bits. Similar conclusions can be drawn from the other figures shown above. It can also be seen that \(P_{e}\) decreases steadily as \(\frac{E_{b}}{N_{0}}\), the SNR per bit, increases: as the SNR per bit grows, the signal is less affected by the noise, so the ML detector makes fewer errors. Finally, it is worth noting that the simulated curve is affected not only by symbol errors but also by floating-point errors.
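The bit-budget rule of thumb above can be made explicit (an illustrative helper, assuming one wants on the order of ten observed errors for a stable estimate):

```javascript
// Minimum number of bits to send so that roughly `targetErrors`
// errors are observed at a given BER (illustrative, not original code).
const minBitsFor = (ber, targetErrors = 10) => Math.ceil(targetErrors / ber);
console.log(minBitsFor(1e-5)); // 1000000: at least 10^6 bits for BER 1e-5
```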