1. Objective
With today's rapidly developing technology, people need faster communication, so engineers have developed more efficient communication techniques. One of them is forward error correction (FEC) coding. By using FEC codes, we decrease the number of errors, so we obtain the same BER value at a lower SNR. In this project we use the 16-QAM modulation type together with a convolutional FEC code, and we compute the bit error rate performance with and without convolutional coding in an AWGN channel.
2. Procedure
We divided our project into two parts. First we computed the bit error rate in an AWGN channel by modulating our message with 16-QAM. Then we encoded and decoded our message and analyzed the error-rate performance in the same AWGN channel.
2.1 Performance of uncoded 16-QAM in AWGN channel
This is the easiest part of our project, but it provides the basis for the rest. First we generated our message bits randomly and merged them four at a time into decimal values, because in a 16-QAM system one symbol carries 4 bits. So instead of the message bits themselves, we now have one decimal value per group of four bits. To modulate these symbols with 16-QAM, we converted each decimal value to the corresponding complex value by looking at the Gray-coded constellation diagram. We used Gray coding because adjacent constellation points then differ in only one bit, so fewer bit errors occur. For example, in this representation we coded 1110 as -1-3j. These complex numbers became our modulated signal, and the next step was to add Gaussian noise to them. The noise power is set according to the desired SNR value; we wanted to examine the BER performance over the 1 to 15 dB SNR range, which means we added noise of different powers to our modulated complex values. In this part we used MATLAB's awgn command.
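The mapping and noise steps can be sketched in code. The project itself used MATLAB (the awgn command); the following is a hypothetical Python/NumPy equivalent, and the particular Gray level assignment is an assumption, chosen only so that the bit pattern 1110 maps to -1-3j as stated in the text:

```python
import numpy as np

# Gray-coded mapping of a 2-bit pair to one amplitude level (assumed
# assignment: adjacent levels differ in one bit, and the first two bits
# give I, the last two give Q, so 1110 -> -1-3j as in the report).
GRAY_LEVEL = {(0, 0): 3, (0, 1): 1, (1, 1): -1, (1, 0): -3}

def qam16_modulate(bits):
    """Map a bit sequence (length divisible by 4) to 16-QAM symbols."""
    bits = np.asarray(bits).reshape(-1, 4)
    i = np.array([GRAY_LEVEL[(b[0], b[1])] for b in bits], dtype=float)
    q = np.array([GRAY_LEVEL[(b[2], b[3])] for b in bits], dtype=float)
    return i + 1j * q

def add_awgn(symbols, snr_db):
    """Add complex Gaussian noise for a given SNR in dB (like MATLAB's awgn)."""
    sig_power = np.mean(np.abs(symbols) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (
        np.random.randn(len(symbols)) + 1j * np.random.randn(len(symbols)))
    return symbols + noise
```

For example, qam16_modulate([1, 1, 1, 0]) yields the single symbol -1-3j.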
Modulation and the channel are done; it is time to design the receiver. First we made hard decisions on the noisy signal and then demodulated the result. We divided the constellation diagram into 16 decision regions to decide which constellation point a received complex value corresponds to; this was the decision (filtering) step. For example, the received point -2.748+3.125j is decided as -3+3j. After this step we converted the complex values back into decimal values (demodulation), and then reconverted the decimal values into binary, which was the last step. By comparing the message bits with the final bits, we calculated the bit error rate over the 1 to 15 dB SNR range; our BER plot, drawn with the semilogy function, is shown on the right. That completed the first part. In the second part we will reuse these algorithms: the encoded bits will be fed in here, and all processing between the encoder and the decoder remains the same.
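The decision and demapping steps can be sketched similarly. This is again a hypothetical Python equivalent of the MATLAB receiver; the level-to-bits table is the inverse of an assumed Gray mapping in which 1110 corresponds to -1-3j:

```python
import numpy as np

LEVELS = np.array([-3, -1, 1, 3])
# Inverse of the assumed Gray level map used at the transmitter
# (hypothetical assignment consistent with 1110 <-> -1-3j).
LEVEL_BITS = {3: (0, 0), 1: (0, 1), -1: (1, 1), -3: (1, 0)}

def slice_level(x):
    """Hard decision: pick the nearest of the four amplitude levels."""
    return LEVELS[np.argmin(np.abs(LEVELS - x))]

def qam16_demodulate(rx):
    """Slice each received complex sample and recover its 4 bits."""
    bits = []
    for z in rx:
        bits.extend(LEVEL_BITS[slice_level(z.real)])  # I component -> 2 bits
        bits.extend(LEVEL_BITS[slice_level(z.imag)])  # Q component -> 2 bits
    return np.array(bits)

def bit_error_rate(tx_bits, rx_bits):
    """Fraction of positions where the two bit sequences differ."""
    return np.mean(np.asarray(tx_bits) != np.asarray(rx_bits))
```

With this, the received point -2.748+3.125j from the example is sliced to -3+3j.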
2.2 Performance of rate-1/2 convolutional coded 16-QAM in AWGN channel
It is time to encode our message bits with a convolutional code and observe its BER performance. In convolutional coding, shift registers are used to encode the bits; we used a rate-1/2 encoder with two flip-flops. We learnt its state-diagram representation and its octal-based representation, and used the [111,101] = [7,5] encoding scheme. The picture of the convolutional encoding scheme shows one input and two outputs. We tried the encoding process with 4 bits and obtained the correct result.
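The [7,5] encoder described here is only a few lines of code. A Python sketch (the project's own implementation was in MATLAB):

```python
def conv_encode(bits):
    """Rate-1/2, K = 3 convolutional encoder with generators [7, 5] octal,
    i.e. g0 = 111 and g1 = 101. Two zero tail bits flush the registers."""
    s1 = s2 = 0                      # the two flip-flops
    out = []
    for u in list(bits) + [0, 0]:    # message bits plus flushing bits
        out.append(u ^ s1 ^ s2)      # first output: g0 = 111
        out.append(u ^ s2)           # second output: g1 = 101
        s1, s2 = u, s1               # shift the registers
    return out

# conv_encode([1, 0, 1, 1]) -> [1,1, 1,0, 0,0, 0,1, 0,1, 1,1]
```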
The most basic and most widely used decoding technique is the Viterbi algorithm, and we will use it for decoding. A convolutional encoder is actually a finite state machine. We wrote down all possible transitions between states and used the trellis diagram of the encoder, so all possible paths are written out and known. There are also the branch metric and path metric terms that we must pay attention to during the decoding process. This decoding strategy is also known as maximum-likelihood decoding.
Before writing the code in MATLAB, we tried to understand the Viterbi algorithm from different books. We observed on the trellis diagram, step by step, how encoded bits can be decoded, and we wrote out the decoding process on paper. We saw what happened when a symbol was transmitted incorrectly, and we observed how the trellis diagram could correct the error.
The figure above shows the trellis diagram for our example rate-1/2, K = 3 convolutional encoder, for a 15-bit message:
The four possible states of the encoder are depicted as four rows of horizontal dots. There is one column of four dots for the initial state of the encoder and one for each time instant during the message. For a 15-bit message with two encoder memory flushing bits, there are 17 time instants in addition to t = 0, which represents the initial condition of the encoder. The solid lines connecting dots in the diagram represent state transitions when the input bit is a one. The dotted lines represent state transitions when the input bit is a zero. Notice the correspondence between the arrows in the trellis diagram and the state transition table. Also notice that since the initial condition of the encoder is State 00₂, and the two memory flushing bits are zeroes, the arrows start out at State 00₂ and end up at the same state.
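The state transition table that corresponds to the trellis arrows can be generated programmatically. A small Python sketch for this K = 3 encoder, with states and channel symbols packed as two-bit integers:

```python
def build_tables():
    """Next-state and output tables for the [7,5], K = 3 encoder.
    A state (s1, s2) is packed as the integer s1*2 + s2."""
    next_state, output = {}, {}
    for state in range(4):
        s1, s2 = state >> 1, state & 1
        for u in (0, 1):
            next_state[(state, u)] = (u << 1) | s1                # new state (u, s1)
            output[(state, u)] = ((u ^ s1 ^ s2) << 1) | (u ^ s2)  # channel symbol pair
    return next_state, output
```

For example, from State 00 an input bit of one leads to State 10 with output symbol 11, matching the solid arrow in the trellis.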
Let's
look at that in more detail, using the expanded version of the transition
between one time instant to the next shown below:
The two-bit numbers labeling the lines are the corresponding convolutional encoder channel symbol outputs. Remember that dotted lines represent cases where the encoder input is a zero, and solid lines represent cases where the encoder input is a one.
Let's start looking at how the Viterbi decoding algorithm actually works. Suppose we receive the above encoded message with a couple of bit errors:
Each time we receive a pair of channel symbols, we're going to compute a metric to measure the "distance" between what we received and all of the possible channel symbol pairs we could have received. Going from t = 0 to t = 1, there are only two possible channel symbol pairs we could have received: 00₂ and 11₂. That's because we know the convolutional encoder was initialized to the all-zeroes state, and given one input bit = one or zero, there are only two states we could transition to and two possible outputs of the encoder. These possible outputs of the encoder are 00₂ and 11₂.
The
metric we're going to use for now is the Hamming distance between the received
channel symbol pair and the possible channel symbol pairs. The Hamming distance
is computed by simply counting how many bits are different between the received
channel symbol pair and the possible channel symbol pairs. The results can only
be zero, one, or two. The Hamming distance (or other metric) values we compute
at each time instant for the paths between the states at the previous time
instant and the states at the current time instant are called branch metrics.
For the first time instant, we're going to save these results as "accumulated
error metric" values, associated with states. For the second time instant
on, the accumulated error metrics will be computed by adding the previous
accumulated error metrics to the current branch metrics.
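The Hamming-distance branch metric is trivial to compute for two-bit symbols; a minimal Python sketch:

```python
def hamming2(a, b):
    """Hamming distance between two 2-bit channel symbols given as integers 0..3."""
    x = a ^ b                   # bits that differ
    return (x & 1) + (x >> 1)   # count them: the result is 0, 1 or 2
```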
At t = 1, we received 00₂. The only possible channel symbol pairs we could have received are 00₂ and 11₂. The Hamming distance between 00₂ and 00₂ is zero. The Hamming distance between 00₂ and 11₂ is two. Therefore, the branch metric value for the branch from State 00₂ to State 00₂ is zero, and for the branch from State 00₂ to State 10₂ it's two. Since the previous accumulated error metric values are equal to zero, the accumulated metric values for State 00₂ and for State 10₂ are equal to the branch metric values. The accumulated error metric values for the other two states are undefined. The figure below illustrates the results at t = 1:
Note that the solid lines between states at t = 1 and the state at t = 0 illustrate the predecessor-successor relationship between the states at t = 1 and the state at t = 0, respectively.
Now let's look at what happens at t = 2. We received a 11₂ channel symbol pair. The possible channel symbol pairs we could have received in going from t = 1 to t = 2 are 00₂ going from State 00₂ to State 00₂, 11₂ going from State 00₂ to State 10₂, 10₂ going from State 10₂ to State 01₂, and 01₂ going from State 10₂ to State 11₂. The Hamming distance between 00₂ and 11₂ is two, between 11₂ and 11₂ is zero, and between 10₂ or 01₂ and 11₂ is one. We add these branch metric values to the previous accumulated error metric values associated with each state that we came from to get to the current states. At t = 1, we could only be at State 00₂ or State 10₂. The accumulated error metric values associated with those states were 0 and 2 respectively. The figure below shows the calculation of the accumulated error metric associated with each state, at t = 2:
That's
all the computation for t = 2. What we carry forward to t = 3 will be the
accumulated error metrics for each state, and the predecessor states for each
of the four states at t = 2, corresponding to the state relationships shown by
the solid lines in the illustration of the trellis.
Now look at the figure for t = 3. Things get a bit more complicated here, since there are now two different ways that we could get from each of the four states that were valid at t = 2 to the four states that are valid at t = 3. To handle that, we compare the accumulated error metrics associated with each branch, and discard the larger one of each pair of branches leading into a given state. If the members of a pair of accumulated error metrics going into a particular state are equal, we just save that value. The other thing that's affected is the predecessor-successor history we're keeping. For each state, the predecessor that survives is the one with the smaller accumulated error metric. If the two accumulated error metrics are equal, simply pick one of them consistently, i.e. the upper branch or the lower branch; it probably doesn't matter which method you use. The operation of adding the previous accumulated error metrics to the new branch metrics, comparing the results, and selecting the smallest accumulated error metric to be retained for the next time instant is called the add-compare-select operation. The figure below shows the results of processing t = 3:
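One add-compare-select step can be sketched as follows. This is a self-contained Python sketch for the [7,5] code; on a tie it keeps the first branch encountered, which is one of the consistent choices mentioned above:

```python
INF = float('inf')

def hamming2(a, b):
    """Hamming distance between two 2-bit symbols given as integers 0..3."""
    x = a ^ b
    return (x & 1) + (x >> 1)

# Next-state and output tables for the [7,5], K = 3 encoder,
# with states and symbol pairs packed as two-bit integers.
NEXT = {(s, u): (u << 1) | (s >> 1) for s in range(4) for u in (0, 1)}
OUT = {(s, u): ((u ^ (s >> 1) ^ (s & 1)) << 1) | (u ^ (s & 1))
       for s in range(4) for u in (0, 1)}

def acs_step(prev_metrics, received):
    """Add branch metrics, compare the paths into each state, and select
    the smaller accumulated error metric along with its predecessor."""
    new_metrics = [INF] * 4
    predecessor = [None] * 4
    for s in range(4):
        if prev_metrics[s] == INF:     # state not reachable yet
            continue
        for u in (0, 1):
            ns = NEXT[(s, u)]
            m = prev_metrics[s] + hamming2(received, OUT[(s, u)])  # add
            if m < new_metrics[ns]:                                # compare-select
                new_metrics[ns] = m
                predecessor[ns] = s
    return new_metrics, predecessor
```

Starting from the initial metrics [0, INF, INF, INF] and receiving 00₂ reproduces the t = 1 result (metrics 0 and 2 for States 00₂ and 10₂); a second step with 11₂ gives the accumulated metrics 2, 3, 0, 3 at t = 2, as in the figures.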
Note that
the third channel symbol pair we received had a one-symbol error. The smallest
accumulated error metric is a one, and there are two of these.
Let's see
what happens now at t = 4. The processing is the same as it was for t = 3. The
results are shown in the figure:
Notice
that at t = 4, the path through the trellis of the actual transmitted message,
shown in bold, is again associated with the smallest accumulated error metric.
Let's look at t = 5:
At t = 5,
the path through the trellis corresponding to the actual message, shown in
bold, is still associated with the smallest accumulated error metric. This is
the thing that the Viterbi decoder exploits to recover the original
message.
Perhaps
you're getting tired of stepping through the trellis. I know I am. Let's skip
to the end.
The
decoding process begins with building the accumulated error metric for some
number of received channel symbol pairs, and the history of what states
preceded the states at each time instant t with the smallest accumulated error
metric. Once this information is built up, the Viterbi decoder is ready to
recreate the sequence of bits that were input to the convolutional encoder when
the message was encoded for transmission. This is accomplished by the following
steps:
- First, select the state having the smallest accumulated error metric and save the state number of that state.
- Iteratively perform the following step until the beginning of the trellis is reached: Working backward through the state history table, for the selected state, select a new state which is listed in the state history table as being the predecessor to that state. Save the state number of each selected state. This step is called trace back.
- Now work forward through the list of selected states saved in the previous steps. Look up what input bit corresponds to a transition from each predecessor state to its successor state. That is the bit that must have been encoded by the convolutional encoder.
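The whole procedure, building the metrics and state history and then tracing back, fits in a short decoder. A self-contained Python sketch for the [7,5], K = 3 code with hard decisions (the project's own implementation was in MATLAB):

```python
INF = float('inf')
# Next-state and output tables for the [7,5], K = 3 encoder,
# with states and symbol pairs packed as two-bit integers.
NEXT = {(s, u): (u << 1) | (s >> 1) for s in range(4) for u in (0, 1)}
OUT = {(s, u): ((u ^ (s >> 1) ^ (s & 1)) << 1) | (u ^ (s & 1))
       for s in range(4) for u in (0, 1)}

def viterbi_decode(symbols):
    """Hard-decision Viterbi decoder. `symbols` is the list of received
    2-bit channel symbols (integers 0..3), including the two flushing
    symbols; returns the decoded message bits."""
    metrics = [0] + [INF] * 3              # encoder starts in State 00
    history = []                           # (predecessor, input bit) per state
    for r in symbols:
        new = [INF] * 4
        pred = [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for u in (0, 1):
                ns = NEXT[(s, u)]
                d = r ^ OUT[(s, u)]
                m = metrics[s] + (d & 1) + (d >> 1)  # add branch metric
                if m < new[ns]:                      # compare-select
                    new[ns] = m
                    pred[ns] = (s, u)
        history.append(pred)
        metrics = new
    # Trace back from the state with the smallest accumulated error metric.
    state = metrics.index(min(metrics))
    bits = []
    for pred in reversed(history):
        state, u = pred[state]
        bits.append(u)
    bits.reverse()
    return bits[:-2]                       # drop the two flushing bits
```

For instance, the encoded sequence 11 10 00 01 01 11 (the 4-bit message 1011 plus flushing bits) decodes back to 1011 even if one channel bit is flipped.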
In almost every communication application, good error performance is needed. Reaching the desired error rates simply by supplying high power is not practical, so new techniques are required, and convolutional coding is one of them. Spectral efficiency is another important consideration when designing a communication system, and bandwidth is therefore another topic to consider: by using the 16-QAM modulation type, we decrease the bandwidth required for a desired bit rate compared to BPSK.
For the source code:
https://drive.google.com/folderview?id=0B2WG_0RRtnAnfmdKb3FtVjU2dEFzLVVNSXBTcWZYd1pNZzBhTkdPZ2J1VGJyUXlXRmI3S2s&usp=sharing