1 Aim
To design a discrete-time QAM communication receiver and analyze its SER performance.
2 Theory

2.1 ML Rule
Since every symbol is equally likely, the ML rule reduces to choosing the constellation point at the shortest Euclidean distance from the received vector: $$ \hat{s} = \underset{s_{i}}{\arg\min}\; ||y-s_{i}||^2,\quad i=0,1,2,\dots,7 $$
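As an illustration, the minimum-distance rule can be implemented as a linear search over the constellation. The following is a minimal sketch; the function name mlDetect and the representation of points as [x, y] pairs are assumptions for illustration, not part of the original design:

// Minimal sketch of the minimum-distance ML rule (illustrative names).
// y is a received point [y1, y2]; constellation is an array of [x, y] points.
function mlDetect(y, constellation) {
  let best = 0;
  let bestDist = Infinity;
  for (let i = 0; i < constellation.length; i++) {
    const dx = y[0] - constellation[i][0];
    const dy = y[1] - constellation[i][1];
    const dist = dx * dx + dy * dy; // squared Euclidean distance
    if (dist < bestDist) {
      bestDist = dist;
      best = i; // index of the nearest symbol
    }
  }
  return best;
}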
2.2 SER
Let \(d\) be the distance between adjacent constellation points and let \(E\) be the indicator of a symbol error. Let \(s_{i}\) be the \(i^{th}\) symbol; since all symbols are equally likely:
$$ P(E=1) = \sum_{i=0}^{7} P(s_{i})P(E \mid s_{i}) = \frac{1}{8}\sum_{i=0}^{7} P(E \mid s_{i})\tag{1} $$
Owing to the symmetry of the constellation, we can write:
$$ P(E \mid s_{0}) = P(E \mid s_{1}) = P(E \mid s_{6}) = P(E \mid s_{7}) $$
$$ P(E \mid s_{2}) = P(E \mid s_{3}) = P(E \mid s_{4}) = P(E \mid s_{5})\tag{2} $$
Thus, \((1)\) reduces to:
$$ P(E=1) = \frac{P(E \mid s_{1}) + P(E \mid s_{3})}{2}\tag{3} $$
Let \(S_{k} = [S_{1k}, S_{2k}]\) represent the transmitted symbol and \(N_{k} = [N_{1k}, N_{2k}]\) the noise introduced by the AWGN channel, so that \(Y_{k} = [Y_{1k}, Y_{2k}] = S_{k} + N_{k}\). The noise components \(N_{1k}\) and \(N_{2k}\) are independent \(\mathcal{N}(0, \sigma^{2})\) variables, so joint probabilities over the two dimensions factor. For the corner symbol \(s_{1} = (-3d/2, -d/2)\), the decision region is \(\{Y_{1k} < -d\} \cap \{Y_{2k} < 0\}\), hence:
$$ P(E \mid s_{1}) = P(\{Y_{1k} > -d\} \cup \{Y_{2k} > 0\}) $$
$$ = 1 - P(\{Y_{1k} < -d\} \cap \{Y_{2k} < 0\}) $$
$$ = 1 - P(Y_{1k} < -d)\,P(Y_{2k} < 0) $$
$$ = 1 - P(S_{1k} + N_{1k} < -d)\,P(S_{2k} + N_{2k} < 0) $$
$$ = 1 - P(N_{1k} - 3d/2 < -d)\,P(N_{2k} - d/2 < 0) $$
$$ = 1 - P(N_{1k} < d/2)\,P(N_{2k} < d/2) $$
$$ = 1 - \big(1 - P(N_{1k} > d/2)\big)^{2} $$
$$ = 1 - \Bigg(1 - P\Bigg(\frac{N_{1k}}{\sigma} > \frac{d}{2\sigma}\Bigg)\Bigg)^{2} $$
$$ = 1 - \Bigg(1 - Q\Big(\frac{d}{2\sigma}\Big)\Bigg)^{2} $$
$$ = 2Q\Big(\frac{d}{2\sigma}\Big) - Q^{2}\Big(\frac{d}{2\sigma}\Big)\tag{4} $$
Similarly, for the inner symbol \(s_{3} = (-d/2, -d/2)\), the decision region is \(\{-d < Y_{1k} < 0\} \cap \{Y_{2k} < 0\}\), hence:
$$ P(E \mid s_{3}) = P(\{Y_{1k} < -d\} \cup \{Y_{1k} > 0\} \cup \{Y_{2k} > 0\}) $$
$$ = 1 - P(\{-d < Y_{1k} < 0\} \cap \{Y_{2k} < 0\}) $$
$$ = 1 - P(-d < Y_{1k} < 0)\,P(Y_{2k} < 0) $$
$$ = 1 - P(-d < S_{1k} + N_{1k} < 0)\,P(S_{2k} + N_{2k} < 0) $$
$$ = 1 - P(-d < N_{1k} - d/2 < 0)\,P(N_{2k} - d/2 < 0) $$
$$ = 1 - P(-d/2 < N_{1k} < d/2)\,P(N_{2k} < d/2) $$
$$ = 1 - P\Big(-\frac{d}{2\sigma} < \frac{N_{1k}}{\sigma} < \frac{d}{2\sigma}\Big)\,P\Big(\frac{N_{2k}}{\sigma} < \frac{d}{2\sigma}\Big) $$
$$ = 1 - \Bigg(1 - 2P\Big(\frac{N_{1k}}{\sigma} > \frac{d}{2\sigma}\Big)\Bigg)\Bigg(1 - P\Big(\frac{N_{2k}}{\sigma} > \frac{d}{2\sigma}\Big)\Bigg) $$
$$ = 1 - \Bigg(1 - 2Q\Big(\frac{d}{2\sigma}\Big)\Bigg)\Bigg(1 - Q\Big(\frac{d}{2\sigma}\Big)\Bigg) $$
$$ = 3Q\Big(\frac{d}{2\sigma}\Big) - 2Q^{2}\Big(\frac{d}{2\sigma}\Big)\tag{5} $$
Substituting \((4)\) and \((5)\) in \((3)\):
$$ \boxed{P(E=1) = 2.5Q\Big(\frac{d}{2\sigma}\Big) - 1.5Q^{2}\Big(\frac{d}{2\sigma}\Big)}\tag{6} $$
The average symbol energy can be found from the constellation map as:
$$ E_{s} = \sum_{k} P(s_{k})|s_{k}|^{2} = \frac{4\big(\frac{d^{2}}{4} + \frac{d^{2}}{4}\big) + 4\big(\frac{9d^{2}}{4} + \frac{d^{2}}{4}\big)}{8} $$
$$ E_{s} = \frac{3d^{2}}{2}\tag{7} $$
Substituting \((7)\) in \((6)\), with \(\sigma^{2} = N_{0}/2\) so that \(\frac{d}{2\sigma} = \sqrt{\frac{d^{2}}{2N_{0}}} = \sqrt{\frac{E_{s}}{3N_{0}}}\):
$$ \boxed{P_{e} = P(E=1) = 2.5Q\Bigg(\sqrt{\frac{E_{s}}{3N_{0}}}\Bigg) - 1.5Q^{2}\Bigg(\sqrt{\frac{E_{s}}{3N_{0}}}\Bigg)}\tag{8} $$
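Equation (8) can be evaluated numerically to obtain the theoretical SER curve. The sketch below is one way to do this in JavaScript, which has no built-in erfc: the Q-function is computed from the Abramowitz-Stegun polynomial approximation of erf (accurate to roughly 1.5e-7 for nonnegative arguments); the helper names qFunc and serTheoretical are illustrative assumptions:

// Q(x) = (1 - erf(x / sqrt(2))) / 2, using the Abramowitz-Stegun
// polynomial approximation of erf (valid for x >= 0).
function qFunc(x) {
  const z = x / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * z);
  const poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 +
               t * (-1.453152027 + t * 1.061405429))));
  return 0.5 * poly * Math.exp(-z * z); // = 0.5 * (1 - erf(z))
}

// Theoretical SER from (8), given Es/N0 in dB.
function serTheoretical(esN0dB) {
  const q = qFunc(Math.sqrt(Math.pow(10, esN0dB / 10) / 3));
  return 2.5 * q - 1.5 * q * q;
}

For example, serTheoretical(10) evaluates \(P_{e}\) at \(E_{s}/N_{0} = 10\) dB.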
3 Design
Input: NO_OF_BITS, EB_N0_DB, d
Output: SER
M = 8
NO_OF_SYMBOLS = NO_OF_BITS / 3
RIGHT = d / 2 + (M / 4 - 1) * d
LEFT = -RIGHT
constellation = [(±d/2 ± i*d, ±d/2)], i = 0, 1
map(bits) = constellation[bin2dec(bits)]
ML(Y) = argmin over constellation points s of euclidean_distance(Y, s)
Sk = map(Bk.group(3))
for EB_N0 in EB_N0_DB
    Nk = AWGN(EB_N0, 2D)
    Yk = Sk + Nk
    sHat = ML(Yk)
    correctSymbols = count(sHat === Sk)
    SER = 1 - correctSymbols / NO_OF_SYMBOLS
plot(SER, SER_THEORETICAL)
For \(N\) transmitted symbols, the ML search compares each received point with all \(M = 8\) constellation points, so the time complexity of the algorithm is \(O(NM)\), i.e., linear in the number of symbols for fixed \(M\). The AWGN(·) noise-generation step is sketched below.
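The AWGN(·) step in the pseudocode above draws one 2-D Gaussian noise sample per symbol. A minimal sketch using the Box-Muller transform follows; it assumes the relation \(E_{b} = E_{s}/3 = d^{2}/2\) implied by (7), and the name awgnSample is illustrative:

// One 2-D AWGN sample via the Box-Muller transform.
// For this constellation Eb = Es/3 = d^2/2 (see (7)), and each
// noise dimension has variance sigma^2 = N0/2.
function awgnSample(ebN0dB, d) {
  const n0 = (d * d / 2) / Math.pow(10, ebN0dB / 10);
  const sigma = Math.sqrt(n0 / 2);
  const u1 = 1 - Math.random(); // in (0, 1], avoids log(0)
  const u2 = Math.random();
  const r = sigma * Math.sqrt(-2 * Math.log(u1));
  return [r * Math.cos(2 * Math.PI * u2), r * Math.sin(2 * Math.PI * u2)];
}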
4 JavaScript Code
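The original code listing is not reproduced here. In its place, the following is a minimal sketch of the simulator described in Section 3, reusing the mlDetect and awgnSample helpers sketched above; all identifiers are illustrative assumptions. Since the bit-to-symbol mapping does not affect the SER, the sketch draws symbol indices uniformly at random instead of mapping a bit stream:

// Minimal sketch of the SER simulation of Section 3.
// Returns one simulated SER per Eb/N0 value in ebN0dBList.
function simulateSER(noOfBits, ebN0dBList, d) {
  const noOfSymbols = Math.floor(noOfBits / 3); // 3 bits per 8-QAM symbol
  // Rectangular 8-QAM: x in {±d/2, ±3d/2}, y in {±d/2}.
  const constellation = [];
  for (const x of [-1.5 * d, -0.5 * d, 0.5 * d, 1.5 * d]) {
    for (const y of [-0.5 * d, 0.5 * d]) constellation.push([x, y]);
  }
  // Equally likely transmitted symbols.
  const tx = [];
  for (let k = 0; k < noOfSymbols; k++) {
    tx.push(Math.floor(Math.random() * 8));
  }
  return ebN0dBList.map(ebN0dB => {
    let errors = 0;
    for (let k = 0; k < noOfSymbols; k++) {
      const s = constellation[tx[k]];
      const n = awgnSample(ebN0dB, d);
      const y = [s[0] + n[0], s[1] + n[1]];
      if (mlDetect(y, constellation) !== tx[k]) errors++;
    }
    return errors / noOfSymbols; // simulated SER at this Eb/N0
  });
}

When comparing the simulated curve against serTheoretical, note that (8) is expressed in terms of \(E_{s}/N_{0}\), and \(E_{s} = 3E_{b}\), so \(E_{s}/N_{0}\) in dB is \(E_{b}/N_{0} + 10\log_{10}3 \approx E_{b}/N_{0} + 4.77\) dB.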
5 Results and Inferences

From the above figures, it can be seen that for a given \(d\), the simulated \(P_{e}\) and the theoretical \(P_{e}\) match more closely as the number of bits increases. This is because a sufficient number of symbol errors must be observed before the simulated estimate can be trusted; for example, to measure an error rate of \(10^{-5}\) reliably, one needs to send at least \(10^{6}\) bits, so that on the order of ten errors are observed. It can also be seen that \(P_{e}\) decreases steadily with increasing \(\frac{E_{s}}{N_{0}}\), the SNR per symbol: as the SNR per symbol increases, the received points cluster more tightly around the transmitted symbols, so the ML detector makes fewer errors. Finally, note that the simulated curve is affected not only by symbol errors but also by floating-point errors in the computation.
From figure \(6\), we can see that the waterfall curve tends to flatten as \(d\) decreases. This is because, with decreasing \(d\), the decision regions shrink, which greatly increases the probability of symbol error. This also explains why, for larger \(d\), the curves start from a lower SER, closer to \(0\); in the limit, as \(d\to\infty\), \(SER\to0\).