
MATLAB Convolutional Codes

Source: 动视网 · Editor: 小OO · Date: 2025-10-03 08:49:46
Minimizing the Gap-to-Capacity of a Rate 1/3 Code via Convolutional Encoding/Decoding

Romeil Sandhu

rsandhu@gatech.edu

Abstract

In this project, we seek to minimize the gap-to-capacity (given by Shannon's theoretical limit) of a rate 1/3 code. This is done via a convolutional encoder/decoder for varying numbers of memory elements, as well as for both soft and hard decoding schemes. We show that the gap-to-capacity can be minimized with respect to the suboptimal uncoded code word or a (3,1) repetition code. Although better schemes are available, such as LDPC and turbo codes, we have chosen the convolutional code for its simplicity and generality. That is, a generic framework can be readily developed in which multiple convolutional schemes can be implemented with minimal changes to the overall structure (see Appendix A for MATLAB code). In this paper, we present the basic concepts associated with convolution codes, the specific encoding and decoding schemes used in this project, and results comparing the gap-to-capacity of the implemented algorithm with respect to Shannon's optimal code.

1. Introduction

Given the code rate constraint of R = 1/3 for a binary-input additive white Gaussian noise (AWGN) channel, this paper presents several convolutional encoders and decoders of varying memory element sizes in an effort to minimize the gap to capacity of a code with respect to the Shannon limit of any R = 1/3 system. This can be seen in Figure 1, where we have also plotted the bit-error rate of an uncoded word. We begin by recalling the model for a binary-input AWGN channel, which is given below as

r_l = a_l + n_l    (1)

where a_l ∈ {−1, 1} is the l-th transmitted symbol, and r_l is the l-th measured or received symbol, corrupted by i.i.d. zero-mean Gaussian noise n_l with variance N_0/2. Although it is a simple approximation, the AWGN channel presented in Equation (1) has been a powerful instrument in modeling real-life disturbances caused by ambient heat in the transmitter/receiver hardware and the propagation medium. For example, many satellite channels and line-of-sight terrestrial channels can be accurately modeled as an AWGN channel.

Figure 1. A simulated BER (log scale) versus E_b/N_0 (in dB) curve highlighting both the gap to capacity with respect to the Shannon limit curve of a 1/3 system as well as the coding gain with regard to an uncoded code.

In this work, we propose to use a 1/n, more specifically a 1/3, convolution encoder/decoder to mitigate the disturbance resulting from such a channel. However, before doing so, let us revisit some of the classical coding techniques presented in this class and motivate our reasoning for choosing a convolutional code.
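As a concrete illustration of Equation (1), the sketch below simulates BPSK transmission over the AWGN channel and estimates the uncoded bit-error rate. This is a Python/NumPy sketch rather than the paper's MATLAB; the 0 → +1, 1 → −1 symbol mapping and all variable names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N0 = 0.5                                           # assumed noise spectral density
bits = rng.integers(0, 2, size=10_000)             # random message bits
a = 1 - 2 * bits                                   # BPSK mapping: 0 -> +1, 1 -> -1
n = rng.normal(0.0, np.sqrt(N0 / 2), size=a.shape) # i.i.d. zero-mean, variance N0/2
r = a + n                                          # Equation (1)
hard = (r < 0).astype(int)                         # sign-based hard decisions
ber = np.mean(hard != bits)                        # uncoded BER estimate
```

At this noise level the uncoded BER comes out near the Q-function prediction for BPSK; this uncoded curve is the baseline that the convolutional code is meant to improve on.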

In classical coding theory, Hamming codes, Reed-Muller codes, and Reed-Solomon codes have been popular choices for implementing efficient and reliable coding schemes. However, in the present work, these codes in a stand-alone fashion cannot be directly applied to the problem at hand. For example, there exist no Hamming codes that can produce a binary rate 1/3 code. Similarly, noting that our code length is unconstrained with binary input/output, Reed-Muller codes and Reed-Solomon codes are not well suited. In other words, we seek a coding algorithm that can perform with limited power. However, if one were to concatenate coding algorithms together (e.g., a (7,4) Hamming code with a (12,7) Reed-Solomon code), one could add flexibility given the constraints of the problem. To this end, we propose to use only a 1/3 convolution code (although one could use a 1/2 convolution code paired with a (3,2) Reed-Solomon code). Lastly, we note that recent work on LDPC codes, turbo codes, and repeat-accumulate codes would offer a better performance gain than the algorithm presented here, but given the limited time of the project, the 1/3 convolution code was chosen.

Figure 2. The general form of a 1/n convolution encoder.

The remainder of this paper is organized as follows: in the next section, we begin with a review of convolution codes, detailing the 1/3 convolution encoder/decoder for a given constraint length. Numerical implementation details are given in Section 3. In Section 4, we present the bit-error rates (BER) of our convolution code, with respect to Shannon's limit and the uncoded word, for varying memory element sizes. Finally, we conclude with Section 5.

2. Optimal 1/3 Convolution Codes

Binary linear convolution codes, like binary linear block codes, are useful in the power-limited regime. The general form of a 1/n convolution encoder is given in Figure 2. Here, we see that the encoder is an LTI filter with banks g_i(D) that are both rational and causal. Moreover, the message m = [m_1, m_2, ..., m_L] of length L is passed in bit-by-bit, producing n code bits per input bit. From this, we can then form the encoded message as c = [c^1_1, c^2_1, ..., c^n_1, c^1_2, ..., c^n_2, ..., c^1_k, ..., c^n_k]. Moreover, if one forms the "delay transform" of an impulse response g_i(D), the i-th code sequence can then be formed as c^i(D) = m(D) g_i(D). Together, all possible solutions of a message code form what is known as a convolution code. Although the formulation of a convolution code assumes a message to be theoretically infinite (as well as the space of acceptable codes), we define block codes of length L.
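The delay-transform relation c^i(D) = m(D) g_i(D) is just a binary convolution, which is how the paper's Appendix uses MATLAB's conv.m. A minimal sketch in Python/NumPy (the example polynomials are our own):

```python
import numpy as np

m = np.array([1, 0, 1, 1])   # message m(D) = 1 + D^2 + D^3
g = np.array([1, 0, 1, 1])   # one filter bank g_i(D), lowest power first
c = np.convolve(m, g) % 2    # c^i(D) = m(D) g_i(D) over GF(2)
# c -> [1 0 0 0 1 0 1], i.e. c^i(D) = 1 + D^4 + D^6
```

The mod-2 reduction after an ordinary integer convolution is what turns the LTI filtering operation into arithmetic over GF(2).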

With this, an encoder can be viewed via a generator matrix G(x) of a linear block code, and hence multiple encoding schemes can be designed to achieve a rate 1/3 system. In addition, each encoding scheme can contain µ memory elements, adding versatility to the design of a particular convolution code. To this end, we seek to implement an optimal convolution code for different µ sizes. This is discussed next.

Figure 3. The optimal rate 1/3 convolution encoder for K = 4.

2.1. Encoding Scheme

Using Table 11-4 presented in [2], we can choose an optimal encoding scheme, dependent on the constraint length K = 1 + µ, for a rate 1/3 convolution code. For convenience, we recall three optimal filters with K = 4, 6, 8, given below.

K   g0              g1              g2              d_free
4   54 (101100)     64 (110100)     74 (111100)     10
6   47 (100111)     53 (101011)     75 (111101)     13
8   452 (100101010) 662 (110110010) 756 (111101110) 16

Table 1. Rate 1/3 convolution codes with their free distances d_free (generators in octal, with binary expansion in parentheses).

Table 1 above shows the optimal filter design for each code generator, where the response is given in octal and binary representation. With this, we can realize the actual encoder via a circuit diagram; this is given in Figure 3 for K = 4. Note that, for the MATLAB implementation presented in Appendix A, the function conv.m is used to do the encoding. After one encodes the message m into a code word c, it is then passed through the channel model given by Equation (1). We then need to be able to recover, or decode, the code word corrupted by noise.
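The octal/binary correspondence in Table 1 can be checked mechanically; a small Python sketch (the MSB-first, zero-padded grouping into octal digits is our assumed convention):

```python
# Table 1 generator pairs: octal value -> binary string (values from the table)
table = {
    0o54: "101100",  0o64: "110100",  0o74: "111100",              # K = 4
    0o47: "100111",  0o53: "101011",  0o75: "111101",              # K = 6
    0o452: "100101010", 0o662: "110110010", 0o756: "111101110",    # K = 8
}
for octal, binary in table.items():
    # width-padded binary expansion must reproduce the tabulated string
    assert format(octal, "0{}b".format(len(binary))) == binary
```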

2.2. Decoding Scheme

We assume now that the code word has been passed through the channel, and we must decode the (possibly) corrupted message. One popular technique is the Viterbi algorithm, in which one maps the possible solutions to what is known as a trellis map. For the sake of brevity, we refer the reader to [2] for information about how to construct a trellis map. However, we note that through this map the decoder is able to choose the maximum-likelihood (ML) estimate by labeling the nodes with a value denoting the partial branch metric. Then, we seek to find a path with total minimum cost. That is, the decoding scheme can be re-expressed as

min_{c∈C} d_H(r, c) = Σ_{l=0}^{L+µ} d_H(r_l, c_l)    (2)

It is important to note that we have yet to define the metric d_H(·,·) in Equation (2). Depending on the chosen metric, one can produce a sub-optimal decoder by choosing the metric to be the Hamming distance. In contrast, if one chooses the L_p norm, specifically the L_2 norm, one can achieve an optimal soft decoder. Lastly, we also refer the reader to advancements in other chosen metrics that have arisen in prediction theory and have found uses in fields such as computer vision [1].

2.2.1. Sub-optimal Decoder: Hard Decoding

As previously noted, the chosen partial branch metric, d_H(·,·), is crucial for the decoder. In particular, let us denote r̂_l = [sgn(r_{l,1}), sgn(r_{l,2}), ..., sgn(r_{l,k})], where sgn(·) outputs the sign of its argument. With our newly formed estimate r̂_l, we can then define the partial branch metric using the Hamming distance. This is given as

d_H(r_l, c_l) = |{ i : r̂_{l,i} ≠ c_{l,i}, i = 0, 1, ..., k }|    (3)

Given that we first formed the estimate r̂ by making "hard" decisions on the received vector r, we denote this procedure as hard decoding.
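The hard-decoding procedure above (sign decisions followed by a minimum-Hamming-metric trellis search) can be sketched end to end. This is a Python illustration rather than the paper's MATLAB; the helper names are our own, and truncating the Table 1 generators to K taps reflects our assumed padding convention.

```python
# Taps follow Table 1 for K = 4: octal 54, 64, 74, MSB-first, first K bits.
G = [0b1011, 0b1101, 0b1111]   # g0, g1, g2
K = 4                          # constraint length; mu = K - 1 memory elements

def branch(state, bit):
    """One trellis branch: the three output bits and the next state."""
    reg = (bit << (K - 1)) | state                     # [input bit | memory]
    return [bin(reg & g).count("1") % 2 for g in G], reg >> 1

def encode(msg):
    """Rate 1/3 convolutional encoder, flushed with mu zeros."""
    state, out = 0, []
    for bit in list(msg) + [0] * (K - 1):
        c, state = branch(state, bit)
        out += c
    return out

def viterbi_hard(r):
    """Hard decisions via sgn(r), then minimize the accumulated
    Hamming branch metric over the trellis (Equations (2)-(3))."""
    rhat = [0 if x > 0 else 1 for x in r]              # sgn(r) mapped to bits
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)              # start in the zero state
    paths = [[] for _ in range(n_states)]
    for l in range(0, len(rhat), 3):
        seg = rhat[l:l + 3]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                c, ns = branch(s, bit)
                d = metric[s] + sum(ci != ri for ci, ri in zip(c, seg))
                if d < new_metric[ns]:                 # keep the survivor path
                    new_metric[ns] = d
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best][:-(K - 1)]                      # drop the flush bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
a = [1.0 - 2 * b for b in encode(msg)]                 # BPSK: 0 -> +1, 1 -> -1
a[2], a[10] = -a[2], -a[10]                            # inject two symbol errors
assert viterbi_hard(a) == msg                          # both errors corrected
```

With d_free = 10 for this code, the two injected symbol errors are well within the decoder's correction capability.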

2.2.2. Optimal Decoder: Soft Decoding

One major drawback of making "hard" decisions in forming the estimate r̂, as seen in Section 2.2.1, is a loss of information from the received vector. Instead, if we deal with the received vector r directly, we can then form a measure of similarity via the L_p norm. That is, we define the partial branch metric to be

d_H(r_l, c_l) = ( Σ_{i=1}^{k} |r_{l,i} − c_{l,i}|^p )^{1/p}    (4)

where p is chosen to be p = 2, the Euclidean distance; one then arrives at the optimal soft decoding scheme using the square of the Euclidean distance.

3. Implementation
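The only difference from the hard decoder is the branch metric. A small sketch contrasting the two metrics on a single branch, assuming BPSK symbols c_{l,i} ∈ {−1, +1} (the example values are our own):

```python
import numpy as np

r = np.array([0.9, -1.3, 0.2])               # received soft values, one branch
c = np.array([1, -1, -1])                    # candidate branch symbols (BPSK)
hard_metric = int(np.sum(np.sign(r) != c))   # Eq. (3): Hamming distance on sgn(r)
soft_metric = float(np.sum((r - c) ** 2))    # Eq. (4), p = 2, squared Euclidean
# hard_metric == 1; soft_metric ~= 1.54
```

Note how the weakly received 0.2 counts as a full error under the hard metric but contributes its actual (large) distance 1.44 under the soft metric; retaining this reliability information is the source of the soft decoder's gain.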

We have used MATLAB to implement the convolution encoder/decoder algorithm presented in this report. More importantly, we should note that, because of the exponential increase in complexity with respect to the number of memory elements µ used, together with unoptimized MATLAB code, a major drawback is the computational speed. However, from previous experience with search-based algorithms, one could invoke a "kd-tree" to perform fast searches.

We also note the generality of the framework and refer the reader to the documented version of the MATLAB code used to implement the convolutional encoder/decoder, which can be found at the end of this report. In particular, the code is written for K = 4; however, one can easily change it to incorporate other encoders (e.g., K = 6 or K = 8). These changes are denoted by a red box.

4. Experiments

We test the robustness of the rate 1/3 convolution code for memory element sizes of µ = 3, 5, 7. Specifically, we measure the coding efficiency of each respective convolution code over 10,000 trials and assume that our message is L = 100 bits. Moreover, this simulation is done over several SNR levels. Although one would ideally like to reach the theoretical coding gain given by Shannon's limit, we deem the encoder/decoder a "success" if it is able to achieve roughly 4 dB using a hard decoding scheme. This baseline can then be improved by substituting various branch metrics, such as the L_2 norm. To this end, we present simulation results of the algorithm for both hard and soft decoding, and refer the reader to Appendix A for information on how to switch between the two with trivial changes to the MATLAB code.

We begin with the K = 4 convolution code (see Figure 3). In Figure 4a, we present the BER simulated over a series of trials along with Shannon's theoretical limit and the uncoded BPSK algorithm. Figure 4b shows a zoomed-in plot of the value located on the simulated curve at BER = 10^-4. The coding gain and gap to capacity at this BER level are 2.242 dB and 6.951 dB, respectively. Using Table 1, we see that the theoretical gain for a hard decoding scheme is 10 log10(R · d_free/2) = 2.1285 dB, which falls near what is measured.

Similarly, Figures 4(c)-(d) and 4(e)-(f) show the convolution code results for K = 6 and K = 8. We again find that the measured coding gain of each curve falls near its theoretical coding gain. That is, for K = 6 we expect a gain of 3.358 dB but measured a coding gain of 2.93 dB. Likewise, for K = 8, we expect a gain of 4.23 dB but measured a coding gain of 3.59 dB.
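The asymptotic coding gains quoted above follow from 10 log10(R · d_free/2) for hard decoding and 10 log10(R · d_free) for soft decoding; they can be tabulated directly from Table 1 (a quick check in Python):

```python
import math

R = 1 / 3                                        # code rate
for K, d_free in [(4, 10), (6, 13), (8, 16)]:    # free distances from Table 1
    hard_gain = 10 * math.log10(R * d_free / 2)  # hard-decision asymptotic gain, dB
    soft_gain = 10 * math.log10(R * d_free)      # soft-decision asymptotic gain, dB
    print("K = {}: hard {:.2f} dB, soft {:.2f} dB".format(K, hard_gain, soft_gain))
```

For K = 6 and K = 8 these reproduce the 3.358 dB and 7.27 dB figures cited in the text; the soft formula is 3 dB (a factor of 2) above the hard one, which is the classical soft-decision advantage.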

Figure 4. Rate 1/3 Convolution Encoder Simulated Results using a Hard Decoder. (a)-(b) Encoder (Hard Decoding) for K = 4. (c)-(d) Encoder (Hard Decoding) for K = 6. (e)-(f) Encoder (Hard Decoding) for K = 8.

Figure 5. Soft Decoding Results for K = 4 and K = 6 Rate 1/3 Convolution Encoders. (a)-(b) Encoder (Soft Decoding) for K = 4. (c)-(d) Encoder (Soft Decoding) for K = 6.

Finally, Figure 5 shows simulated results for the K = 8 convolution encoder using the soft decoding scheme discussed in this report. Interestingly, we were only able to measure a coding gain of 4.12 dB, which is far different from what is theoretically expected, i.e., 10 log10(R · d_free) = 7.27 dB. One particular problem that may be attributed to such a disparity between the two values could be the small message length of L = 100 or the small number of trials (Trials = 10,000), since each of these contributes to the overall number of transmitted message bits. Nevertheless, we do achieve a dB gain that is reasonable for the objective of this project.

5. Conclusion

In this report, we attempted to mitigate the gap to capacity of Shannon's theoretical limit for a rate 1/3 system. In particular, given the generality and flexibility provided by convolution codes, we presented several convolution encoders for several varying memory element sizes. Using both soft and hard decoding, we then presented experimental results that, for the most part, fall within the expected theoretical gains.

References

[1] R. Sandhu, T. Georgiou, and A. Tannenbaum. A new distribution metric for image segmentation. In SPIE Medical Imaging, 2008.

[2] S. B. Wicker. Error Control Systems for Digital Communications and Storage. Prentice Hall, 1995.

6. APPENDIX A: MATLAB CODE

Please see Figures 6 through 10 for the detailed MATLAB code. The red boxes highlight the regions of code that should be the only ones altered in order to change the design of the encoder (i.e., memory element size or a hard/soft decoding scheme). The current implementation shown is for K = 4 with hard decoding.

To build encoders with different memory elements or soft/hard decoding schemes, modify the area inside the red box via a flag input to the encoder, then modify the circuit logic function.

Figure 9. These are the associated helper files needed to compute the encoding of the convolution code, compute the circuit logic for a specific encoder, and compute the appropriate cost functionals for node-path ML estimates.
