Design of MPSK Receivers and SER Analysis

Experiment - 4

1 Aim

To design a discrete-time MPSK communication receiver and analyze its SER performance.

2 Theory

Fig.1 Constellation Map

2.1 ML Rule

Since every symbol is equally likely, the MAP rule reduces to the ML rule, which in AWGN amounts to choosing the symbol at the shortest Euclidean distance from the received vector: $$ \hat{s} = \arg\min_{i}\;\|y-s_{i}\|^{2},\quad i=0,1,\dots, M-1 $$
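As a concrete illustration, this rule maps directly onto a nearest-neighbour search over the constellation. The following is a minimal JavaScript sketch (the language used in Section 4); the names mlDetect and mpskConstellation are illustrative, not taken from the original listing.

// Unit-energy M-PSK constellation: s_i = (cos(2*PI*i/M), sin(2*PI*i/M)).
function mpskConstellation(M) {
    return Array.from({ length: M }, (_, i) =>
        [Math.cos(2 * Math.PI * i / M), Math.sin(2 * Math.PI * i / M)]);
}

// ML detection: index of the constellation point closest to y = [y1, y2].
function mlDetect(y, constellation) {
    let best = 0, bestDist = Infinity;
    for (let i = 0; i < constellation.length; i++) {
        const dx = y[0] - constellation[i][0];
        const dy = y[1] - constellation[i][1];
        const d = dx * dx + dy * dy;    // squared Euclidean distance
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}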

2.2 SER

Let \(E\) be the indicator of a symbol error and \(s_{0}, s_{1}, \dots, s_{M-1}\) be the possible symbols. Since all symbols are equally likely, $$ P(E=1) = \frac{1}{M}\sum_{i=0}^{M-1} P(E=1 \mid s_{i})\tag{1} $$ Owing to the symmetry of the constellation, $$ P(E=1 \mid s_{0}) = P(E=1 \mid s_{1}) = \dots = P(E=1 \mid s_{M-1})\tag{2} $$ From (1) and (2), $$ P(E=1) = P(E=1 \mid s_{0}) = \dots = P(E=1 \mid s_{M-1})\tag{3} $$

Let us assume \(s_{0} = [\sqrt{E_{s}}, 0]\) was transmitted. Then, $$ Y_{k} = [\sqrt{E_{s}} + N_{1k},\; N_{2k}] $$ Here \(Y_{1k}, Y_{2k}\) are independent Gaussian random variables with means \(\sqrt{E_{s}}\) and \(0\) respectively and common variance \(\sigma^{2} = \frac{N_{0}}{2}\). Therefore: $$ p(Y) = \frac{1}{\pi N_{0}}e^{-\frac{(Y_{1k} - \sqrt{E_{s}})^2 + Y_{2k}^2}{N_{0}}}\tag{4} $$

Transforming \((4)\) into polar coordinates using $$ V = \sqrt{Y_{1k}^2 + Y_{2k}^2}, \qquad \Theta = \arctan\Bigg(\frac{Y_{2k}}{Y_{1k}}\Bigg), $$ we get: $$ p_{V, \Theta}(v, \theta) = \frac{v}{\pi N_{0}}e^{-\frac{v^2+E_{s}-2\sqrt{E_{s}}\,v\cos(\theta)}{N_{0}}} $$ Integrating over \(v\) after completing the square in the exponent, we get the marginal PDF of \(\Theta\) as: $$ p_{\Theta}(\theta) = \int_{0}^{\infty}p_{V, \Theta}(v, \theta)\,dv$$ $$ \boxed{p_{\Theta}(\theta) = \frac{1}{2\pi}e^{-\frac{E_{s}\sin^2(\theta)}{N_{0}}}\int_{0}^{\infty}v\,e^{-\frac{\Bigg(v-\sqrt{\frac{2E_{s}}{N_{0}}}\cos(\theta)\Bigg)^2}{2}}dv}\tag{5} $$

For \(\frac{E_{s}}{N_{0}} \gg 1\) and \(|\theta| \leq \frac{\pi}{2}\), the Gaussian in \((5)\) is concentrated far from the origin, so the integral is approximately its mean \(\sqrt{\frac{2E_{s}}{N_{0}}}\cos(\theta)\) times \(\sqrt{2\pi}\), and \((5)\) can be approximated as: $$ p_{\Theta}(\theta) \approx \sqrt{\frac{E_{s}}{\pi N_{0}}}\cos(\theta)\,e^{-\frac{E_{s}\sin^2(\theta)}{N_{0}}}\tag{6} $$

The decision region of \(s_{0}\) can be described as \(\{\theta: -\frac{\pi}{M} < \theta < \frac{\pi}{M}\}\). Therefore, $$ P_{e}=1 - \int_{-\frac{\pi}{M}}^{\frac{\pi}{M}}p_{\Theta}(\theta)\,d\theta\tag{7} $$ Substituting \((6)\) into \((7)\) and using the change of variable \(u=\sqrt{\frac{E_{s}}{N_{0}}}\sin(\theta)\): $$ P_{e} \approx 1 - \int_{-\frac{\pi}{M}}^{\frac{\pi}{M}}\sqrt{\frac{E_{s}}{\pi N_{0}}}\cos(\theta)\,e^{-\frac{E_{s}\sin^2(\theta)}{N_{0}}}\,d\theta $$ $$ \approx \frac{2}{\sqrt{\pi}}\int_{\sqrt{\frac{E_{s}}{N_{0}}}\sin(\frac{\pi}{M})}^{\infty}e^{-u^2}\,du = 2Q\Bigg(\sqrt{\frac{2E_{s}}{N_{0}}}\sin\Big(\frac{\pi}{M}\Big)\Bigg) $$ $$ \therefore \boxed{P_{e} \approx 2Q\Bigg(\sqrt{\frac{2E_{s}}{N_{0}}}\sin\Big(\frac{\pi}{M}\Big)\Bigg)}\tag{8} $$
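When plotting the theoretical curve, \((8)\) has to be evaluated numerically. JavaScript's Math object has no built-in erfc, so the sketch below assumes the Chebyshev-fit approximation of erfc from Numerical Recipes (fractional error below roughly 1.2e-7), together with \(Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})\); serTheoretical is an illustrative name.

// Complementary error function via the Numerical Recipes Chebyshev fit.
function erfc(x) {
    const z = Math.abs(x);
    const t = 1 / (1 + 0.5 * z);
    const r = t * Math.exp(-z * z - 1.26551223 + t * (1.00002368 +
        t * (0.37409196 + t * (0.09678418 + t * (-0.18628806 +
        t * (0.27886807 + t * (-1.13520398 + t * (1.48851587 +
        t * (-0.82215223 + t * 0.17087277)))))))));
    return x >= 0 ? r : 2 - r;
}

function Q(x) { return 0.5 * erfc(x / Math.SQRT2); }

// Theoretical MPSK SER, equation (8): Pe ~ 2*Q(sqrt(2*Es/N0)*sin(PI/M)).
function serTheoretical(M, esN0dB) {
    const esN0 = Math.pow(10, esN0dB / 10);   // dB to linear
    return 2 * Q(Math.sqrt(2 * esN0) * Math.sin(Math.PI / M));
}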

3 Design

Input: M, NO_OF_BITS, SB_N0_DB
Output: SER

L = log2(M)
NO_OF_SYMBOLS = NO_OF_BITS / L
phases[i] = 2*i*PI/M, i = 0, 1, ..., M-1
constellation = [(cos(phases[i]), sin(phases[i]))]

map(bits) = constellation[bin2dec(bits)]
ML(Y) = argmin over i of euclidean_distance(Y, constellation[i])

Bk = NO_OF_BITS uniformly random bits
idx = bin2dec(Bk.group(L))        // transmitted symbol indices
Sk = constellation[idx]

for SB_N0 in SB_N0_DB
    Nk = AWGN(SB_N0, 2D)          // i.i.d., variance N0/2 per dimension
    Yk = Sk + Nk
    sHat = ML(Yk)                 // detected symbol indices
    correctSymbols = count(sHat == idx)
    SER[SB_N0] = 1 - correctSymbols/NO_OF_SYMBOLS

plot(SER, SER_THEORETICAL)
                
For each SNR point, the ML detector computes \(M\) distances for each of the \(N\) transmitted symbols, so the time complexity of the algorithm is \(O(NM)\).

4 JavaScript Code
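Below is a minimal sketch of the simulator specified in Section 3, assuming an ideal uniform random symbol source (equivalent to grouping uniform random bits) and Box-Muller Gaussian noise. It reuses mpskConstellation, mlDetect, Q, and serTheoretical from Section 2, and names such as simulateMPSK are illustrative rather than from the original listing.

// Simulate M-PSK over AWGN at one Es/N0 point and return the measured SER.
function simulateMPSK(M, noOfBits, esN0dB) {
    const L = Math.log2(M);
    const nSymbols = Math.floor(noOfBits / L);
    const constellation = mpskConstellation(M);   // unit energy, so Es = 1
    const esN0 = Math.pow(10, esN0dB / 10);
    const sigma = Math.sqrt(1 / (2 * esN0));      // per-dimension std dev, N0/2 with Es = 1

    let errors = 0;
    for (let k = 0; k < nSymbols; k++) {
        // Uniform random symbol index (equivalent to L random bits).
        const idx = Math.floor(Math.random() * M);
        const s = constellation[idx];

        // Box-Muller transform: two independent N(0, sigma^2) noise samples.
        const u1 = Math.random() || Number.MIN_VALUE;  // avoid log(0)
        const u2 = Math.random();
        const r = Math.sqrt(-2 * Math.log(u1));
        const y = [s[0] + sigma * r * Math.cos(2 * Math.PI * u2),
                   s[1] + sigma * r * Math.sin(2 * Math.PI * u2)];

        if (mlDetect(y, constellation) !== idx) errors++;
    }
    return errors / nSymbols;
}

// Sweep Es/N0 and print simulated vs. theoretical SER (equation (8)).
const M = 8;
for (let dB = 0; dB <= 12; dB += 2) {
    console.log(dB, simulateMPSK(M, 3e5, dB), serTheoretical(M, dB));
}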

5 Results and Inference

Fig.2 Simulation result for 8-MPSK with \(3\times10^5\) bits
Fig.3 Simulation result for 16-MPSK with \(4\times10^5\) bits
Fig.4 Simulation result for 32-MPSK with \(5\times10^5\) bits
Fig.5 Simulation result for 64-MPSK with \(6\times10^5\) bits
Fig.6 Comparison of simulation results for \(M=8,16,32,64\)

From the above figures, it can be seen that for a given \(M\), the simulated \(P_{e}\) and the theoretical \(P_{e}\) match better as the number of bits increases. This is because, to have confidence in the simulated results, a sufficient number of errors must actually be observed: for example, to measure an error rate of \(10^{-5}\), one needs to send at least \(10^6\) bits, which yields only about ten expected errors. It can also be seen that \(P_{e}\) decreases steadily with increasing \(\frac{E_{s}}{N_{0}}\), the SNR per symbol. This is because, as the SNR per symbol increases, the signal is less perturbed by the noise, reducing errors in ML detection. Finally, it should be noted that the simulated curve is affected not only by symbol errors but also by floating-point rounding errors.

From figure \(6\), we can see that the waterfall curve tends to flatten as \(M\) increases. This is because the decision region of each symbol shrinks with increasing \(M\) (its angular width is \(\frac{2\pi}{M}\)), which greatly increases the probability of symbol error. This also explains why, for larger \(M\), the curves start from an SER closer to 1, and as \(M\to\infty\), \(\mathrm{SER}\to 1\) at any fixed SNR.