Source: poulliat.perso.enseeiht.fr/polys/slidesTCv2.pdf (2011)

  • An example: the (5, 7)_8 code · General representation · Trellis representation · MAP decoding · Parallel turbo codes · Serial turbo codes · EXIT chart analysis · Iterative decoding systems with serial concatenation

    Convolutional codes and associated concatenated codes

    Charly Poulliat

    October 20, 2011

  • Outline

    1 An example: the (5,7)_8 code

    2 General representation

    3 Trellis representation

    4 MAP decoding

    5 Parallel turbo codes

    6 Serial turbo codes

    7 EXIT chart analysis

    8 Iterative decoding systems with serial concatenation

  • Introduction: the (5, 7)_8 code (shift-register representation)

    [Figure omitted: shift-register encoder with two delay elements; input u[n], outputs c(1)[n] and c(2)[n]]

    Vector representation: g(1) = [101] = [5]_8 and g(2) = [111] = [7]_8. Polynomial representation (with delay operator D): g(1)(D) = 1 + D^2 and g(2)(D) = 1 + D + D^2.

    Rate: R = k/n = 1/2 (k inputs, n outputs). Memory ν = 2, constraint length lc = ν + 1.
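    As a sanity check of the generator definitions above, the encoder can be sketched in a few lines of Python (function name and bit conventions are ours, not from the slides):

```python
def conv_encode_57(u):
    """Encode bits u with the non-systematic (5,7)_8 code:
    g(1)(D) = 1 + D^2, g(2)(D) = 1 + D + D^2 (rate 1/2, memory 2)."""
    s1 = s2 = 0  # shift-register contents: s1 = u[n-1], s2 = u[n-2]
    out = []
    for bit in u:
        c1 = bit ^ s2        # taps of g(1) = 101 (5 octal)
        c2 = bit ^ s1 ^ s2   # taps of g(2) = 111 (7 octal)
        out += [c1, c2]      # rate 1/2: two coded bits per input bit
        s1, s2 = bit, s1     # shift the register
    return out
```

    Feeding an impulse [1, 0, 0] returns the two generator sequences interleaved, which recovers [5]_8 and [7]_8 on the two output streams.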

  • Introduction: the (5, 7)_8 code (notation)

    Time-domain input-output relations: let u[n] be the input sequence; the outputs are then

    c(i)[n] = (u ⊛ g(i))[n], ∀i ∈ {1, 2}

    where ⊛ denotes convolution over GF(2). Polynomial representation: c(i)(D) = u(D) g(i)(D), ∀i ∈ {1, 2}. Polynomial vector representation:

    c(D) = [c(1)(D), c(2)(D)] = u(D)[g(1)(D),g(2)(D)] = u(D)G(D)

    where G(D) is the k × n polynomial generator matrix.
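    The polynomial relation c(i)(D) = u(D) g(i)(D) can be checked with a small GF(2) polynomial product; this helper is ours, a minimal sketch:

```python
def gf2_poly_mul(a, b):
    """Multiply two binary polynomials over GF(2).
    Coefficient lists are lowest degree first, e.g. 1 + D^2 -> [1, 0, 1]."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj  # addition over GF(2) is XOR
    return out
```

    For instance, u(D) = 1 + D times g(2)(D) = 1 + D + D^2 gives 1 + D^3, matching the parity stream produced by the shift-register encoder for the input 1, 1.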

  • Introduction: the (5, 7)_8 code (encoding termination)

    Truncation: no particular termination of the encoding. For K input bits, N = K/R coded bits are transmitted. Trellis termination: ν so-called tail bits are appended to the K information bits so as to return the register to the all-zero state. This lowers the rate; here R = K/(2(K + 2)). Circular trellis termination: tail-biting.

  • Convolutional codes (notation)

    Rate: k inputs and n outputs, hence R = k/n. Polynomial vector representation: let u(D) = [u(1)(D), u(2)(D), ..., u(k)(D)] and c(D) = [c(1)(D), c(2)(D), ..., c(n)(D)]; then

    c(D) = Σ_{i=1..k} u(i)(D) g_i(D)

    where g_i(D) = [g_i^(1)(D), g_i^(2)(D), ..., g_i^(n)(D)] and g_i^(j)(D) is the transfer function between input i and output j.

    c(D) = u(D)G(D)

    where G(D) is the k × n polynomial generator matrix. Parity-check matrix: c(D)H^T(D) = 0, and hence G(D)H^T(D) = 0.

  • Convolutional codes (notation)

    Recursive code: some input-output relations can be defined by a recursive transfer function

    g_i^(j)(D) = (b_0 + b_1 D + b_2 D^2 + ... + b_m D^m) / (1 + a_1 D + a_2 D^2 + ... + a_m D^m)

    [Figure omitted: direct-form filter realization with input u(i)[n], feedforward taps b_0, ..., b_m, feedback taps a_1, ..., a_m, and output c(j)[n]]

    FIGURE: implementation of a recursive transfer function as a Type-I recursive filter with infinite impulse response.

  • Convolutional codes (notation)

    Example: the recursive systematic code (1, 5/7)_8 with generator matrix

    G(D) = [ 1   (1 + D^2)/(1 + D + D^2) ]

    [Figure omitted: encoder diagram with input u[n], systematic output c(1)[n] and parity output c(2)[n]]

    FIGURE: realization of the recursive code (1, 5/7)_8
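    A minimal sketch of this recursive systematic encoder, assuming a zero initial state (naming is ours): the feedback polynomial is the denominator 1 + D + D^2 and the feedforward polynomial is the numerator 1 + D^2.

```python
def rsc_encode_157(u):
    """Systematic recursive (1, 5/7)_8 encoder:
    G(D) = [1, (1 + D^2)/(1 + D + D^2)].
    Returns systematic and parity bits interleaved."""
    s1 = s2 = 0  # register holds the two previous feedback values a[n-1], a[n-2]
    out = []
    for bit in u:
        a = bit ^ s1 ^ s2  # feedback: a[n] = u[n] + a[n-1] + a[n-2]
        p = a ^ s2         # feedforward 1 + D^2 applied to a[n]
        out += [bit, p]    # systematic bit, then parity bit
        s1, s2 = a, s1
    return out
```

    Unlike the non-recursive encoder, a single 1 at the input produces an infinite-weight parity response, which is the property exploited by turbo codes.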

  • Convolutional codes (notation: example for a rate R = k/n code)

    [Figure omitted: two-input encoder with inputs u1(D), u2(D) and outputs c(1)(D), c(2)(D), c(3)(D)]

    G(D) = [ (1 + D)/(1 + D + D^2)   0           1 + D
             1                       1/(1 + D)   D/(1 + D^2) ]

    u(D) = [u1(D), u2(D)], c(D) = [c1(D), c2(D), c3(D)]

  • Convolutional codes (state model and finite-state machine representation)

    Finite-state machine representation: for a rate R = 1/n code,

    {u_k} → S_k → {c_k}

    where c_k = [c_k^(1), c_k^(2), ..., c_k^(n)] and S_k denotes the internal state of the shift register.
    Associated functional representation:

    State-update equation: transition from state S_{k−1} to S_k.

    S_k = F1(S_{k−1}, u_k)

    Observation equation: generation of the observable outputs c_k.

    c_k = F2(S_{k−1}, u_k)

    Example: the (7,5)_8 code, with S_k = [u_k, u_{k−1}].
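    The pair (F1, F2) for the (7,5)_8 example can be written as a single transition function; the function name and tuple conventions are ours, a minimal sketch:

```python
def step_75(state, u):
    """One transition of the (7,5)_8 encoder FSM.
    state = (u[n-1], u[n-2]); returns (next_state, (c1, c2)),
    c1 from g(1)(D) = 1 + D + D^2 (7 octal), c2 from g(2)(D) = 1 + D^2 (5 octal)."""
    s1, s2 = state
    c1 = u ^ s1 ^ s2           # observation equation F2, first output
    c2 = u ^ s2                # observation equation F2, second output
    return (u, s1), (c1, c2)   # state-update equation F1, then the labels
```

    Iterating this function over an input sequence walks exactly one path through the trellis shown on the next slides, emitting the branch labels (u_k, c_k).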

  • Convolutional codes (state-diagram representation)

    The (7,5)_8 convolutional code


    4.4.2 Graphical Representations for Convolutional Code Encoders

    There exist several graphical representations for the encoders of convolutional codes, with each of these representations playing different roles. We find it useful, and sufficient, to describe the graphical models for our archetypal encoder, specifically, the rate-1/2 encoder described by G(D) = [1 + D + D^2   1 + D^2]. The procedure for deriving graphical models for other convolutional codes is identical.

    We start with the finite-state transition-diagram (FSTD), or state-diagram, graphical model for G(D). The encoder realization for G(D) was presented earlier in Figure 4.4(a) and the encoder state is defined to be the contents of the two memory elements in the encoder circuit (read from left to right). From that figure and the state definition, we may produce the state-transition table from which the code's FSTD may be easily drawn, as depicted in Figure 4.8(a).

    [Figure omitted: FSTD over states 00, 10, 01, 11 with edge labels output|input, and the corresponding trellis diagram over times 0, 1, 2, 3, ..., L−2, L−1, L]

    Figure 4.8 (a) FSTD for the encoder of Figure 4.4(a). The edge labels are c1(i)c2(i)|u(i), i.e., output|input. (b) A trellis diagram for the encoder of Figure 4.4(a) for input length L and input containing two terminating zeros.

  • Convolutional codes (trellis representation)

    Definition: a graphical representation of the code in its state space that includes the time dimension.

    Nodes: graph nodes associated with a state S_k. Transitions: branches between two nodes, given by F1(.). Labels: the information (u_k, c_k) carried by the branches, given by F2(.).

    Properties: for a trellis of length L, every path through the trellis from S_0 to S_L is a codeword, obtained by concatenating the labels c_k along the path. The path on which every S_k is the all-zero state (in natural binary representation) is the all-zero codeword. Minimal event: a path that leaves the state S_k = 0 and returns to that state only at S_{k+l}, for some l; a Hamming weight is associated with it through the labels along the path. d_min is given by the minimal weight over all minimal events.

  • Convolutional codes: trellis representation of the (7, 5)_8 code

    [Figure omitted: trellis sections between times j and j+5 over states 00, 10, 01, 11]

    FIGURE: trellis of the (7,5)_8 code

  • Bit-wise MAP decoding (notation 1/2)

    Bit-wise MAP criterion

    û_n = argmax_{u_n} p(u_n | y) ⇔ û_n = sign(L(u_n))   (1)

    with:
    BPSK mapping {'0' ↔ +1, '1' ↔ −1}, u_n ∈ {−1, +1}, ∀n ∈ [1, L].
    MAP LLR (log-likelihood ratio):

    L(u_n) = log [ p(u_n = +1 | y) / p(u_n = −1 | y) ]

  • Bit-wise MAP decoding (notation 2/2)

    Trellis length L = K + N_f (K information bits + N_f termination bits).
    Vector notation:

    c = [c_1, c_2, ..., c_L] (transmitted codeword), with c_k = [c_k^(1), c_k^(2), ..., c_k^(nc)]

    y = [y_1, y_2, ..., y_L] (received word), with y_k = [y_k^(1), y_k^(2), ..., y_k^(nc)] and y_k^(j) = c_k^(j) + b_k^(j)

    y_k^l = [y_k, y_{k+1}, ..., y_{l−1}, y_l]

  • Bit-wise MAP decoding (BCJR algorithm 1)

    The MAP LLR revisited

    L(u_n) = log [ p(u_n = +1 | y) / p(u_n = −1 | y) ] = log [ Σ_{S+} p(s_{n−1} = s′, s_n = s, y) / Σ_{S−} p(s_{n−1} = s′, s_n = s, y) ]   (2)

    Notation

    Set of transitions associated with u_n = +1:

    S+ = {(s′, s) such that (s_{n−1} = s′) → (s_n = s) | u_n = +1}

    Set of transitions associated with u_n = −1:

    S− = {(s′, s) such that (s_{n−1} = s′) → (s_n = s) | u_n = −1}

  • Bit-wise MAP decoding (BCJR algorithm 2)

    Factorization of p(s′, s, y)

    p(s_{n−1} = s′, s_n = s, y) = α_{n−1}(s′) γ_n(s′, s) β_n(s)   (3)

    α_n(s) = p(s_n = s, y_1^n)   (5)
    β_n(s) = p(y_{n+1}^L | s_n = s)   (6)
    γ_n(s′, s) = p(s_n = s, y_n | s_{n−1} = s′)   (7)

    Forward-backward recursions

    α_n(s) = Σ_{s′} γ_n(s′, s) α_{n−1}(s′)   (8)

    β_{n−1}(s′) = Σ_s γ_n(s′, s) β_n(s)   (9)
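    The recursions (8)-(9) can be sketched generically, given precomputed branch metrics γ (here a hypothetical layout: one S × S matrix per trellis section, 0-based, so gamma[n] is γ_{n+1} of the slides), for a trellis terminated in state 0:

```python
import numpy as np

def forward_backward(gamma):
    """gamma[n][s'][s] = p(s_{n+1} = s, y_{n+1} | s_n = s'), shape (L, S, S).
    Returns alpha[n][s] = p(s_n = s, y_1..y_n) and
    beta[n][s] = p(y_{n+1}..y_L | s_n = s) for a trellis starting
    and ending in state 0 (initialization (14)-(15))."""
    L, S, _ = gamma.shape
    alpha = np.zeros((L + 1, S))
    beta = np.zeros((L + 1, S))
    alpha[0, 0] = 1.0
    beta[L, 0] = 1.0
    for n in range(1, L + 1):        # forward recursion (8)
        alpha[n] = alpha[n - 1] @ gamma[n - 1]
    for n in range(L - 1, -1, -1):   # backward recursion (9)
        beta[n] = gamma[n] @ beta[n + 1]
    return alpha, beta
```

    A convenient correctness check: Σ_s α_n(s) β_n(s) equals p(y) for every n, so the dot product of the two vectors is constant along the trellis.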

  • Bit-wise MAP decoding (BCJR algorithm 3)

    Computing the transition probabilities

    γ_n(s′, s) = p(s_n = s, y_n | s_{n−1} = s′)   (10)
               = p(y_n | s′, s) · p(s | s′)   (11)

    with

    p(s | s′) = 0 if the transition s′ → s is not valid, and p(s | s′) = π(u_n) otherwise.

    γ_n(s′, s) = p(y_n | c_n(s′, s)) π(u_n) 1_{s′→s}   (13)

    where π(u_n) is the a priori probability of u_n and c_n(s′, s) are the bits labeling the transition (s′ → s).

  • Bit-wise MAP decoding (BCJR algorithm 4)

    Computing the transition probabilities: the Gaussian case

    y_n^(m) = c_n^(m) + b_n^(m),   b_n^(m) ~ N(0, σ_b^2)

    γ_n(s′, s) ∝ π(u_n) exp( − Σ_{m=1..nc} |y_n^(m) − c_n^(m)(s′, s)|^2 / (2σ_b^2) ) 1_{s′→s}

    Initialization (terminated trellis)

    α_0(0) = 1, α_0(s) = 0 otherwise   (14)
    β_L(0) = 1, β_L(s) = 0 otherwise   (15)
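    Equation (13) in the Gaussian case can be sketched as follows; note the negative exponent, the dropped proportionality constant, and that `pa` stands for the a priori probability π(u_n) (the helper and its defaults are ours):

```python
import math

def gamma_awgn(y, c, sigma2, pa=0.5):
    """Branch metric of Eq. (13) on a valid transition, up to a constant:
    gamma ∝ pa * exp(-||y - c||^2 / (2*sigma2)),
    where c is the ±1 BPSK label of the branch and y the received samples."""
    d2 = sum((yi - ci) ** 2 for yi, ci in zip(y, c))
    return pa * math.exp(-d2 / (2.0 * sigma2))
```

    When the received vector matches the branch label exactly, the metric reduces to the a priori probability; it decays as the Euclidean distance to the label grows.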

  • Bit-wise MAP decoding (BCJR algorithm in the log domain)

    Definitions in the log domain

    α̃_n(s) ≜ log(α_n(s)) = log Σ_{s′} exp(α̃_{n−1}(s′) + γ̃_n(s′, s))   (16)

    β̃_{n−1}(s′) ≜ log(β_{n−1}(s′)) = log Σ_s exp(β̃_n(s) + γ̃_n(s′, s))   (17)

    γ̃_n(s′, s) ≜ log(γ_n(s′, s))   (18)

    L(u_n) = log Σ_{S+} exp(α̃_{n−1}(s′) + γ̃_n(s′, s) + β̃_n(s)) − log Σ_{S−} exp(α̃_{n−1}(s′) + γ̃_n(s′, s) + β̃_n(s))   (19)

  • Bit-wise MAP decoding (BCJR algorithm in the log domain)

    The max*(x, y) operator

    max(x, y) = log [ (e^x + e^y) / (1 + e^{−|x−y|}) ]   (20)

    max*(x, y) ≜ log(e^x + e^y) = max(x, y) + log(1 + e^{−|x−y|})   (21)

    max*(x, y, z) ≜ log(e^x + e^y + e^z) = max*(max*(x, y), z)   (22)
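    The operator of (21)-(22) is a one-liner in Python; `log1p` keeps the correction term numerically robust (function name is ours):

```python
import math

def max_star(x, y):
    """Jacobian logarithm, Eq. (21):
    log(e^x + e^y) = max(x, y) + log(1 + e^(-|x - y|))."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))
```

    Nesting the operator as in (22) handles more than two terms, and the correction term is bounded by log 2, which is why replacing max* by max (Max-Log-MAP) loses only a modest amount of accuracy.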

  • Bit-wise MAP decoding (BCJR algorithm in the log domain)

    Log-MAP (log-BCJR)

    α̃_n(s) = max*_{s′} (α̃_{n−1}(s′) + γ̃_n(s′, s))   (23)

    β̃_{n−1}(s′) = max*_s (β̃_n(s) + γ̃_n(s′, s))   (24)

    L(u_n) = max*_{S+} (α̃_{n−1}(s′) + γ̃_n(s′, s) + β̃_n(s)) − max*_{S−} (α̃_{n−1}(s′) + γ̃_n(s′, s) + β̃_n(s))   (25)

    Simplified implementation with the max(.) operator plus a lookup table for the correction term; improved numerical stability.

  • Bit-wise MAP decoding (BCJR algorithm in the log domain)

    Log-MAP (log-BCJR): the Gaussian case

    γ̃_n(s′, s) = log(γ_n(s′, s)) = const − (1/(2σ^2)) Σ_{m=1..nc} |y_n^(m) − c_n^(m)(s′, s)|^2

    Referring to the final MAP criterion, the additive constant, common to all branches, can be dropped.

    Max-Log-MAP

    Replace the max*(.) operator by max(.) alone: reduced complexity, at the cost of a (reasonable) performance loss. ⇒ the decoder implemented in practice.

  • Parallel turbo codes (encoder structure)

    [Figure omitted: K information bits feed RSC encoder 1 directly and RSC encoder 2 through an interleaver; outputs c(1)[n], c(2)[n], c(3)[n]]

    FIGURE: structure of a parallel turbo code: each constituent code is a recursive systematic code; here R = 1/2 for each constituent code.

    The K information bits are encoded by RSC encoder 1; the information bits are interleaved and then encoded by RSC encoder 2; only the parity part of RSC encoder 2 is kept.

  • Parallel turbo codes (encoder structure: the UMTS example)


    2. THE UMTS TURBO CODE

    As shown in Fig. 1, the UMTS turbo encoder is composed of two constraint-length-4 recursive systematic convolutional (RSC) encoders concatenated in parallel [12]. The feedforward generator is 15 and the feedback generator is 13, both in octal. The number of data bits at the input of the turbo encoder is K, where 40 ≤ K ≤ 5114. Data is encoded by the first (i.e., upper) encoder in its natural order and by the second (i.e., lower) encoder after being interleaved. At first, the two switches are in the up position.

    The interleaver is a matrix with 5, 10, or 20 rows and between 8 and 256 columns (inclusive), depending on the size of the input word. Data is read into the interleaver in a row-wise fashion (with the first data bit placed in the upper-left position of the matrix). Intra-row permutations are performed on each row of the matrix in accordance with a rather complicated algorithm, which is fully described in the specification [12]. Next, inter-row permutations are performed to change the ordering of rows (without changing the ordering of elements within each row). When there are 5 or 10 rows, the inter-row permutation is simply a reflection about the center row (e.g., for the 5-row case, the rows {1, 2, 3, 4, 5} become rows {5, 4, 3, 2, 1}, respectively). When there are 20 rows, rows {1, ..., 20} become rows {20, 10, 15, 5, 1, 3, 6, 8, 13, 19, 17, 14, 18, 16, 4, 2, 7, 12, 9, 11}, respectively, when the number of input bits satisfies either 2281 ≤ K ≤ 2480 or 3161 ≤ K ≤ 3210. Otherwise, they become rows {20, 10, 15, 5, 1, 3, 6, 8, 13, 19, 11, 9, 14, 18, 4, 2, 17, 7, 16, 12}, respectively. After the intra-row and inter-row permutations, data is read from the interleaver in a column-wise fashion (with the first output bit being the one in the upper-left position of the transformed matrix).

    The data bits are transmitted together with the parity bits generated by the two encoders (the systematic output of the lower encoder is not used and thus not shown in the diagram). Thus, the overall code rate of the encoder is r = 1/3, not including the tail bits (discussed below). The first 3K output bits of the encoder are in the form X1, Z1, Z1′, X2, Z2, Z2′, ..., XK, ZK, ZK′, where Xk is the kth systematic (i.e., data) bit, Zk is the parity output from the upper (uninterleaved) encoder, and Zk′ is the parity output from the lower (interleaved) encoder.

    After the K data bits have been encoded, the trellises of both encoders are forced back to the all-zeros state by the proper selection of tail bits. Unlike conventional convolutional codes, which can always be terminated with a tail of zeros, the tail bits of an RSC will de-


    Fig. 1. UMTS turbo encoder. FIGURE: structure of a parallel turbo code for UMTS (3GPP).

  • Parallel turbo codes (the notion of extrinsic information 1/3)

    Case of a rate R = 1/nc recursive systematic encoder

    We assume c_n^(1) = u_n.

    L(u_n) = log [ Σ_{S+} p(s_{n−1} = s′, s_n = s, y) / Σ_{S−} p(s_{n−1} = s′, s_n = s, y) ]   (26)
           = Lc(u_n) + La(u_n) + Lext(u_n)   (27)

    with

    Channel information: Lc(u_n) = log [ p(y_n^(1) | u_n = +1) / p(y_n^(1) | u_n = −1) ]

    A priori information: La(u_n) = log [ p(u_n = +1) / p(u_n = −1) ]

    Extrinsic information:

    Lext(u_n) = log [ Σ_{S+} α_{n−1}(s′) Π_{m=2..nc} p(y_n^(m) | c_n^(m)(s′, s)) β_n(s) / Σ_{S−} α_{n−1}(s′) Π_{m=2..nc} p(y_n^(m) | c_n^(m)(s′, s)) β_n(s) ]
              = log [ Σ_{S+} α_{n−1}(s′) γ_n^e(s′, s) β_n(s) / Σ_{S−} α_{n−1}(s′) γ_n^e(s′, s) β_n(s) ]   (28)

  • Parallel turbo codes (the notion of extrinsic information 2/3)

    Extrinsic information (log domain)

    Lext(u_n) = max*_{S+} (α̃_{n−1}(s′) + γ̃_n^e(s′, s) + β̃_n(s)) − max*_{S−} (α̃_{n−1}(s′) + γ̃_n^e(s′, s) + β̃_n(s))

    γ̃_n^e(s′, s) = log(γ_n^e(s′, s))

    Correspondence between the probability domain and the log domain, L(u_n) ⇔ p(u_n):

    p(u_n) = [ exp(La(u_n)/2) / (1 + exp(La(u_n))) ] exp(u_n La(u_n)/2) = Λ_n exp(u_n La(u_n)/2)   (29)

  • Parallel turbo codes (the notion of extrinsic information 3/3)

    Gaussian case (log domain)

    Channel information: Lc(u_n) = (2/σ^2) y_n^(1)

    Branch metrics:

    γ̃_n(s′, s) = u_n La(u_n)/2 − (1/(2σ^2)) Σ_{m=1..nc} |y_n^(m) − c_n^(m)(s′, s)|^2

                = u_n La(u_n)/2 + Σ_{m=1..nc} c_n^(m)(s′, s) y_n^(m)/σ^2 + const

                = u_n La(u_n)/2 + Σ_{m=1..nc} c_n^(m)(s′, s) Lc(y_n^(m))/2 + const   (30)

    γ̃_n^e(s′, s) = Σ_{m=2..nc} c_n^(m)(s′, s) Lc(y_n^(m))/2   (31)

    L(u_n) = (2/σ^2) y_n^(1) + La(u_n) + Lext(u_n)

    (constants common to all branches are dropped)

  • Parallel turbo codes (decoder 1/2)

    [Figure omitted: the two constituent MAP decoders, connected through the interleaver (Entr) and deinterleaver (Inv. Entr), exchanging the extrinsic LLRs L1ext(u[n]) and L2ext(u[n])]

    The MAP decoders of the two constituent codes iteratively exchange extrinsic information about the information bits they share.

  • Parallel turbo codes (decoder 2/2)

    MAP criteria at the two decoders at iteration (ℓ):

    L_RSC1^(ℓ)(u_n) = (2/σ^2) y_n^(1) + La,1^(ℓ−1)(u_n) + Le,1^(ℓ)(u_n)
                    = (2/σ^2) y_n^(1) + Le,2^(ℓ−1)(u_{π^(−1)(n)}) + Le,1^(ℓ)(u_n)   (32)

    L_RSC2^(ℓ)(u_{π(n)}) = (2/σ^2) y_{π(n)}^(1) + Le,1^(ℓ−1)(u_{π(n)}) + Le,2^(ℓ)(u_{π(n)})   (33)

    where π(.) and π^(−1)(.) denote the interleaving and deinterleaving operations.
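    The exchange (32)-(33) can be sketched as a decoding schedule; `siso` stands for an assumed black-box SISO decoder (e.g. a log-BCJR) returning extrinsic LLRs, and all names are ours, not from the slides:

```python
import numpy as np

def turbo_decode(llr_sys, llr_p1, llr_p2, perm, siso, n_iter=8):
    """Parallel turbo decoding schedule.
    llr_sys: systematic channel LLRs; llr_p1, llr_p2: parity channel LLRs;
    perm: interleaver as an index array; siso(sys, par, apriori) -> extrinsic."""
    llr_sys, llr_p1, llr_p2 = (np.asarray(v, float) for v in (llr_sys, llr_p1, llr_p2))
    perm = np.asarray(perm)
    le1 = np.zeros(len(llr_sys))  # extrinsic from decoder 1
    le2 = np.zeros(len(llr_sys))  # extrinsic from decoder 2, deinterleaved
    for _ in range(n_iter):
        le1 = siso(llr_sys, llr_p1, le2)                  # decoder 1, a priori = Le2
        le2_pi = siso(llr_sys[perm], llr_p2, le1[perm])   # decoder 2, interleaved domain
        le2 = np.empty_like(le2_pi)
        le2[perm] = le2_pi                                # deinterleave
    return llr_sys + le1 + le2                            # final APP LLR, Eq. (32)
```

    With a SISO that returns zero extrinsic information the schedule degenerates to the channel LLRs alone, a quick structural sanity check.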

  • Parallel turbo codes (performance; figure omitted)

  • Serial turbo codes (encoder structure)

    [Figure omitted: serial concatenation: K information bits → convolutional code 1 (outer, R = 1/2) → interleaver → convolutional code 2 (inner, R = 1/2)]

    The K information bits are encoded by encoder 1 (the outer code); the coded bits are then interleaved and encoded by encoder 2 (the inner code); the two associated decoders do not necessarily operate exactly as in the parallel case; only encoder 2 needs to be recursive to obtain an interleaving gain.

  • Serial turbo codes (decoder structure 1/2)

    [Figure omitted: the two MAP decoders exchange extrinsic information on the outer-code bits through the interleaver and deinterleaver]

    Decoder for code 1

    Identical to the decoding of one constituent block in the parallel case.

  • Serial turbo codes (decoder structure 2/2)

    Decoder for code 2

    Modified MAP criterion:

    L1(c_n^(m)) = max*_{C+} (α̃_{n−1}(s′) + γ̃_n(s′, s) + β̃_n(s)) − max*_{C−} (α̃_{n−1}(s′) + γ̃_n(s′, s) + β̃_n(s))   (34)
               = Le2(c_n^(m)) + Le1(c_n^(m))   (35)

    γ̃_n(s′, s) = Σ_{m=1..nc} c_n^(m)(s′, s) Le2(c_n^(m)(s′, s))/2   (36)

    with

    C+ = {(s′, s) such that (s_{n−1} = s′) → (s_n = s) | c_n^(m) = +1}
    C− = {(s′, s) such that (s_{n−1} = s′) → (s_n = s) | c_n^(m) = −1}

  • EXIT chart analysis (overview)

    Motivation

    Analyze the behavior of a turbo receiver so as to predict its performance as a function of the signal-to-noise ratio.

    [Figure omitted: parallel turbo decoder with its two MAP decoders exchanging extrinsic information]

    1730 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 49, NO. 10, OCTOBER 2001

    Fig. 4. Extrinsic information transfer characteristics of a soft-in/soft-out decoder for a rate-2/3 convolutional code, Eb/N0 = 0.8 dB, memory 4, different code polynomials.

    Increasing the a priori input means that more and more bits become known at the decoder with high confidence, which relates to a growing conditioning of the mutual information on the a priori knowledge. As conditioning increases mutual information [18], it is plausible that a bigger a priori input provides a bigger extrinsic output.

    III. EXTRINSIC INFORMATION TRANSFER CHART

    A. Trajectories of Iterative Decoding

    To account for the iterative nature of the suboptimal decoding algorithm, both decoder characteristics are plotted into a single diagram. However, for the transfer characteristics of the second decoder the axes are swapped.

    This diagram is referred to as an EXIT chart, since the exchange of extrinsic information can be visualized as a decoding trajectory.

    Let i be the iteration index. For i = 0 the iteration starts at the origin with zero a priori knowledge. At iteration i, the extrinsic output of the first decoder is forwarded to the second decoder to become its a priori input (ordinate). The extrinsic output of the second decoder is fed back to the first decoder to become the a priori knowledge (abscissa) of the next iteration. Note that interleaving does not change mutual information.

    The iteration proceeds as long as the extrinsic output keeps increasing from one pass to the next. The iteration stops once the extrinsic output no longer increases, which corresponds to an intersection of both characteristics in the EXIT chart.

    Fig. 5. Simulated trajectories of iterative decoding at Eb/N0 = 0.1 dB and 0.8 dB (symmetric PCC, rate 1/2, interleaver size 60 000 systematic bits).

    Fig. 6. EXIT chart with transfer characteristics for a set of Eb/N0 values; two decoding trajectories at 0.7 dB and 1.5 dB (code parameters as in Fig. 5, PCC rate 1/2); interleaver size 10 bits.

    Fig. 5 shows trajectories of iterative decoding at Eb/N0 = 0.1 dB and 0.8 dB (code parameters are those of Fig. 2). The trajectory is a simulation result taken from the "free-running" iterative decoder. For 0.1 dB the trajectory (lower left corner) gets stuck after two iterations, since both decoder characteristics intersect. For 0.8 dB the trajectory has just managed to "sneak through the bottleneck." After six passes through the decoder, increasing correlations of extrinsic information start to show up and let the trajectory deviate from its expected zigzag path. As it turns out, for larger interleavers the trajectory stays on the characteristics for some more passes through the decoder.

    Fig. 6 depicts the EXIT chart with transfer characteristics over a set of Eb/N0 values. The curves in between 0 dB and

  • EXIT chart analysis (overview)

    Idea:

    Analyze the input-output behavior of each SISO block independently: we seek the average output behavior as a function of the average input behavior, for a given figure of merit.
    This input behavior depends on the incoming extrinsic information and on the channel transition probabilities: the channel is known, but the statistical behavior of the extrinsic messages must be modeled.
    The input figure of merit is associated with the incoming a priori information (the extrinsic information of the other blocks).
    Likewise, the output figure of merit is associated with the extrinsic information delivered by the SISO block.
    To simplify the analysis, a one-dimensional (scalar) performance measure is used to characterize the iterative decoding process.


    EXIT chart analysis
    General overview

    EXtrinsic Information Transfer charts: mutual information between a log-likelihood ratio L and the transmitted bit C (both treated as random variables):

    I(L; C) = (1/2) Σ_{c=±1} ∫_R f(l|c) log2( 2 f(l|c) / ( f(l|c=+1) + f(l|c=−1) ) ) dl

    Assumptions:
    BPSK modulation, perfect interleaving,
    C a binary variable with uniform law,
    symmetry of the density: f(l|C = −1) = f(−l|C = +1),
    consistency of the density (exponential symmetry): f(−l|C = c) = f(l|C = c) e^{−c·l}.

    Under these assumptions,

    I(L; C) = 1 − E_{L|C=+1}[ log2(1 + e^{−l}) ] = 1 − ∫_R f(l|c = +1) log2(1 + e^{−l}) dl


    EXIT chart analysis
    Mutual information of a consistent density: estimation

    Estimation:

    Using the all-zero codeword:

    I(L; C) ≈ 1 − (1/N) Σ_n log2(1 + e^{−l_n})

    Arbitrary but known codewords:

    I(L; C) ≈ 1 − (1/N) Σ_n log2(1 + e^{−c_n l_n})

    Arbitrary and unknown codewords:

    I(L; C) ≈ 1 − (1/N) Σ_n H_b( e^{+|l_n|/2} / ( e^{+|l_n|/2} + e^{−|l_n|/2} ) )

    with H_b(p) = −p log2(p) − (1 − p) log2(1 − p).
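    As a quick numerical check, the three estimators above can be sketched as follows (a minimal illustration in Python/NumPy; the function names are ours, not from the slides). LLR samples are drawn from a consistent Gaussian density N(σ²/2, σ²), for which all three estimators should agree in expectation.

```python
import numpy as np

def mi_all_zero(llr):
    """Estimator assuming the all-zero codeword (every bit maps to c = +1)."""
    llr = np.asarray(llr, dtype=float)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-llr)))

def mi_known_bits(llr, bits):
    """Estimator for arbitrary but known transmitted bits c_n in {+1, -1}."""
    llr = np.asarray(llr, dtype=float)
    bits = np.asarray(bits, dtype=float)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-bits * llr)))

def mi_unknown_bits(llr):
    """Blind estimator: binary entropy of the soft bit derived from |l_n|."""
    a = np.abs(np.asarray(llr, dtype=float))
    p = 1.0 / (1.0 + np.exp(-a))         # = e^{|l|/2} / (e^{|l|/2} + e^{-|l|/2})
    p = np.clip(p, 1e-12, 1.0 - 1e-12)   # guard the log as p -> 1
    hb = -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)
    return 1.0 - np.mean(hb)

# Consistent Gaussian LLRs: l ~ N(sigma^2/2, sigma^2) given c = +1
rng = np.random.default_rng(0)
sigma = 3.0
llr = rng.normal(sigma**2 / 2.0, sigma, size=200_000)

i0 = mi_all_zero(llr)
i2 = mi_unknown_bits(llr)
```

    For σ = 3 both estimates land near 0.76; flipping the LLR signs with random ±1 bits and passing those bits to `mi_known_bits` recovers the same value, since c_n·l_n is unchanged.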


    EXIT chart analysis
    A priori and extrinsic information

    Gaussian model of the a priori messages:

    l = m·c + b, b ∼ N(0, σ² = 2m), c = ±1

    Associated mutual information (the J(σ) function):

    I(L; C) = 1 − (1/√(2πσ²)) ∫_R exp( −(l − σ²/2)² / (2σ²) ) log2(1 + e^{−l}) dl

    Extrinsic mutual information: it is evaluated without any Gaussian approximation, using histograms of the output densities or the estimators above.
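    The J(σ) integral has no closed form, but it is cheap to evaluate numerically. A minimal sketch (Python/NumPy; the helper name `J` is ours, chosen to match the J⁻¹ notation used later in these slides):

```python
import numpy as np

# np.trapezoid is the NumPy 2.x name; fall back to np.trapz on older versions
_trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))

def J(sigma, num_points=20001):
    """I(L; C) for a consistent Gaussian LLR density N(sigma^2/2, sigma^2)."""
    if sigma <= 0.0:
        return 0.0
    mean = sigma**2 / 2.0
    l = np.linspace(mean - 12.0 * sigma, mean + 12.0 * sigma, num_points)
    pdf = np.exp(-(l - mean) ** 2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    return 1.0 - _trapz(pdf * np.log2(1.0 + np.exp(-l)), l)
```

    J is strictly increasing with J(0) = 0 and J(σ) → 1 as σ grows, so it can be inverted numerically (e.g., by bisection) to obtain J⁻¹.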


    EXIT chart analysis
    Parallel concatenation (1/3)

    TEN BRINK: ITERATIVELY DECODED PARALLEL CONCATENATED CODES 1729

    With (11), (12) becomes

    (14)

    For abbreviation we define

    (15)

    with

    (16)

    With (4)–(7) we realize that the capacity of the binary input/continuous output AWGN channel of (1) is given by

    (17)

    The capacity [the function, respectively] cannot be expressed in closed form. It is monotonically increasing [18] in its argument and thus reversible.

    (18)

    Mutual information is also used to quantify the extrinsic output.

    (19)

    (20)

    Viewing the extrinsic output as a function of the a priori input and the Eb/N0-value, the extrinsic information transfer characteristics are defined as

    (21)

    or, for fixed Eb/N0, just

    (22)

    To compute the transfer characteristic for the desired input combination, the distributions of (19) are most conveniently determined by Monte Carlo simulation (histogram measurements). For this, the independent Gaussian random variable of (9) is applied as a priori input to the constituent decoder of interest; a certain value of the a priori mutual information is obtained by appropriately choosing the parameter according to (18). Sequence lengths of systematic bits were found to sufficiently suppress tail effects. Note that no Gaussian assumption is imposed on the extrinsic output distributions.

    Transfer characteristics are given in Fig. 2. The a priori input is on the abscissa, the extrinsic output on the ordinate. The Eb/N0-value serves as a parameter to the curves. The BCJR algorithm is applied to a rate 1/2 recursive systematic convolutional code of memory 4; the parity bits are punctured to obtain a rate 2/3 constituent code. This will lead to a rate 1/2 PCC in Section III. The code polynomials are given in octal, with the (recursive) feedback polynomial listed first and the most significant bit (MSB) corresponding to the generator connection on the very left (input) side of the shift register. Note that the Eb/N0-values are given with respect to the rate 1/2 parallel concatenated code.

    Fig. 2. Extrinsic information transfer characteristics of soft in/soft out decoder for rate 2/3 convolutional code; Eb/N0 of channel observations serves as parameter to curves.

    Fig. 3. Extrinsic information transfer characteristics of soft in/soft out decoder for rate 2/3 convolutional code, Eb/N0 = 0.8 dB, different code memory.

    Transfer characteristics for different code memory at fixed Eb/N0 = 0.8 dB are depicted in Fig. 3. The code polynomials are taken from [20]. Fig. 4 shows the influence of different code polynomials for the prominent case of a memory 4 code. The (023, 011) code provides good extrinsic output at the beginning, but returns diminishing output for higher a priori input. For the (023, 035) code it is the other way round. The constituent code of the classic rate 1/2 PCC of [21], with polynomials (037, 021), has good extrinsic output for low to medium a priori input.

    From Figs. 2–4 it can be seen that the characteristics are monotonically increasing in the a priori input, and thus the inverse function

    (23)


    Influence of Eb/N0 · influence of the code memory

  • Un exemple : le code (5, 7)8 Représentation générale Représentation en treillis Décodage MAP Turbo-codes parallèles Turbo-codes série Analyse EXIT charts Systèmes à décodage itératif avec concaténation en série

    EXIT chart analysis
    Parallel concatenation (2/3)

    1730 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 49, NO. 10, OCTOBER 2001

    Fig. 4. Extrinsic information transfer characteristics of soft in/soft out decoder for rate 2/3 convolutional code, Eb/N0 = 0.8 dB, memory 4, different code polynomials.

    exists on

    (24)

    Increasing a priori input means that more and more bits become known at the decoder with high confidence, which relates to a growing conditioning of the mutual information on the a priori knowledge. As conditioning increases mutual information [18], it is plausible that bigger a priori input provides bigger extrinsic output.

    III. EXTRINSIC INFORMATION TRANSFER CHART

    A. Trajectories of Iterative Decoding

    To account for the iterative nature of the suboptimal decoding algorithm, both decoder characteristics are plotted into a single diagram. However, for the transfer characteristics of the second decoder the axes are swapped.

    This diagram is referred to as EXIT chart, since the exchange of extrinsic information can be visualized as a decoding trajectory.

    Let i be the iteration index, with Eb/N0 fixed. For i = 0 the iteration starts at the origin with zero a priori knowledge. At iteration i, the extrinsic output of the first decoder is forwarded to the second decoder to become its a priori input (ordinate). The extrinsic output of the second decoder is fed back to the first decoder to become the a priori knowledge (abscissa) of the next iteration. Note that interleaving does not change mutual information.

    The iteration proceeds as long as the extrinsic information keeps increasing from one pass to the next. The iteration stops if it no longer increases, which corresponds to an intersection of both characteristics in the EXIT chart.

    Fig. 5. Simulated trajectories of iterative decoding at Eb/N0 = 0.1 dB and 0.8 dB (symmetric PCC rate 1/2, interleaver size 60 000 systematic bits).

    Fig. 6. EXIT chart with transfer characteristics for a set of Eb/N0-values; two decoding trajectories at 0.7 dB and 1.5 dB (code parameters as in Fig. 5, PCC rate 1/2); interleaver size 10 bits.

    Fig. 5 shows trajectories of iterative decoding at Eb/N0 = 0.1 dB and 0.8 dB (code parameters are those of Fig. 2). The trajectory is a simulation result taken from the "free-running" iterative decoder. For Eb/N0 = 0.1 dB the trajectory (lower left corner) gets stuck after two iterations since both decoder characteristics do intersect. For Eb/N0 = 0.8 dB the trajectory has just managed to "sneak through the bottleneck." After six passes through the decoder, increasing correlations of extrinsic information start to show up and let the trajectory deviate from its expected zigzag path. As it turns out, for larger interleavers the trajectory stays on the characteristics for some more passes through the decoder.

    Fig. 6 depicts the EXIT chart with transfer characteristics over a set of Eb/N0-values. The curves in between 0 dB and
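    The trajectory bookkeeping described above reduces to a tiny fixed-point iteration. A sketch with two invented, monotonically increasing transfer functions (placeholders, not the actual decoder characteristics from the figures):

```python
def exit_trajectory(t1, t2, max_iter=50, tol=1e-9):
    """Alternate IE1 = t1(IA1) and IE2 = t2(IA2) starting from zero a priori
    knowledge; stop when the extrinsic information no longer increases
    (i.e., the two characteristics intersect)."""
    ia1, points = 0.0, [(0.0, 0.0)]
    for _ in range(max_iter):
        ie1 = t1(ia1)            # first decoder; becomes IA2 via the interleaver
        ie2 = t2(ie1)            # second decoder; becomes IA1 of the next pass
        points.append((ia1, ie1))
        points.append((ie2, ie1))
        if ie2 - ia1 < tol:      # no progress: characteristics intersect
            break
        ia1 = ie2
    return points

# Two hypothetical "open tunnel" characteristics, both strictly above the diagonal
t = exit_trajectory(lambda x: 0.3 + 0.7 * x, lambda x: 0.3 + 0.7 * x)
```

    With these placeholder curves the zigzag climbs monotonically and converges to the (1, 1) corner; with intersecting curves it would stall at the crossing point, as in the 0.1 dB trajectory of Fig. 5.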


    1732 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 49, NO. 10, OCTOBER 2001

    for short sequence length it is easier for the BCJR decoder to gain new, extrinsic information on the systematic bits.

    C. Obtaining BER From EXIT Chart

    The EXIT chart can be used to obtain an estimate on the BER after an arbitrary number of iterations. For both constituent decoders, the soft output on the systematic bits can be written as the sum of the channel, a priori, and extrinsic L-values. For the sake of deriving a simple formula on the bit error probability Pb, we assume the a priori knowledge and extrinsic output to be Gaussian distributed. Consequently, the decoder soft output is supposed to be Gaussian distributed with variance σ_D² and mean value σ_D²/2; compare to (8), (10). With the complementary error function, the bit error probability writes as

    (26)

    Assuming independence it is

    (27)

    With (4) and

    (28)

    we obtain

    (29)

    Applying (18), the variances are calculated.

    (30)

    Finally, with (26), (27), (29), and (30) the result is

    (31)

    Fig. 9 shows transfer characteristics and the respective simulated decoding trajectory of an asymmetric PCC with memory 2 and memory 6 constituent codes at 0.8 dB. Note that the transfer characteristics are just taken from Fig. 3; the characteristic of the second decoder (memory 6) is mirrored at the first diagonal of the EXIT chart. Additionally, the BER scaling according to (31) is given as a contour plot. Table I compares BER results obtained from the EXIT chart up to the seventh pass through the iterative decoder: (s) stands for the result obtained by simulation, right next to it the BER as calculated with (31). The table shows that the EXIT chart in combination with the Gaussian approximation of (31) provides reliable BER predictions in the region of low Eb/N0. It is not useful for determining BER floors. For this, bounding techniques like [1] which include the interleaving depth are more suited.

    IV. MUTUAL INFORMATION VERSUS SNR MEASURES

    A. Different SNR Measures

    There are several ways of defining SNR measures, depending on whether and how a Gaussian assumption is included. In the following we consider three different SNR measures.

    Fig. 9. Averaged trajectory at Eb/N0 = 0.8 dB with BER scaling as contour plot (PCC rate 1/2, interleaver size 2 × 10 bits).

    TABLE I: COMPARISON OF BER PREDICTIONS FROM EXIT CHART. COLUMNS OF SIMULATION RESULTS ARE MARKED WITH (s)

    1) The unbiased SNR is based on the mean and the variance of the measured extrinsic output PDF; no Gaussian assumption is made.

    (32)

    For linear codes on the AWGN channel, the communication link with encoder, channel, and decoder is symmetric; pragmatically, we make use of both histogram measurements to obtain a better averaging of the Monte Carlo simulation results. The mean value is measured by

    (33)

    and the variance by

    (34)

    2) The biased SNR is based only on the mean value of the PDF; the mean is assumed to be the

    Pb ≈ (1/2) erfc( √( 8R·Eb/N0 + J⁻¹(IA)² + J⁻¹(IE)² ) / (2√2) )
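    Evaluating this BER prediction only requires J⁻¹, which can be obtained by bisection since J is monotone. A self-contained sketch (Python; assumes the formula as reconstructed above, with Eb/N0 in dB and code rate R):

```python
import math

import numpy as np

def J(sigma, num_points=20001):
    """I(L; C) of a consistent Gaussian LLR density N(sigma^2/2, sigma^2)."""
    if sigma <= 0.0:
        return 0.0
    mean = sigma**2 / 2.0
    l = np.linspace(mean - 12.0 * sigma, mean + 12.0 * sigma, num_points)
    pdf = np.exp(-(l - mean) ** 2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    trapz = getattr(np, "trapezoid", getattr(np, "trapz", None))
    return 1.0 - trapz(pdf * np.log2(1.0 + np.exp(-l)), l)

def J_inv(i, lo=1e-4, hi=40.0, iters=60):
    """Invert the monotone J by bisection on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < i else (lo, mid)
    return 0.5 * (lo + hi)

def ber_estimate(ia, ie, rate, ebno_db):
    """BER prediction: decoder soft output modeled as the sum of independent
    consistent Gaussian channel, a priori, and extrinsic LLRs."""
    ebno = 10.0 ** (ebno_db / 10.0)
    sigma_d = math.sqrt(8.0 * rate * ebno + J_inv(ia) ** 2 + J_inv(ie) ** 2)
    return 0.5 * math.erfc(sigma_d / (2.0 * math.sqrt(2.0)))
```

    With no a priori or extrinsic knowledge the prediction reduces to the channel-only BER; as IA and IE approach 1 the predicted BER drops sharply, reproducing the contour behavior of Fig. 9.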


    EXIT chart analysis
    Serial concatenation

    392 S. ten Brink: Designing Iterative Decoding Schemes with the Extrinsic Information Transfer Chart. AEÜ Int. J. Electron. Commun. 54 (2000) No. 6, 389–398

    Carlo simulation. For this, the independent Gaussian random variable of (7), with its standard deviation chosen to match the desired a priori input, is applied as a priori input to the demapper of interest.

    [Fig. 3: Demapper transfer characteristics of 8-ASK mappings at Eb/N0 = 6 dB. A priori input IA1 to the demapper on the abscissa, channel and extrinsic output IE1 of the demapper on the ordinate; curves for Gray, natural, d21, d23, and anti-Gray mappings.]

    In Fig. 3 the corresponding demapper transfer characteristics are given. The values of Table 1 show up as the curve values IE1(IA1 = 0) and IE1(IA1 = 1). Different mappings result in transfer characteristics of different slopes. However, the mapping with the lowest IE1(IA1 = 0) does not necessarily exhibit the highest IE1(IA1 = 1) (see anti-Gray mapping).

    2.4 Transfer Characteristics of the Outer Decoder

    The outer extrinsic transfer characteristic is

    IE2 = T(IA2) (14)

    and describes the input/output relationship between outer coded input IA2 and outer coded extrinsic output IE2. It is not dependent on the Eb/N0-value. It can be computed by assuming IA2 to be Gaussian distributed and applying the same equations as presented for the demapper characteristic in the previous section.

    Fig. 4 shows extrinsic transfer characteristics of some outer rate 1/2 recursive systematic convolutional (RSC) codes. The generator polynomials (feedback polynomial Gr, feedforward polynomial G) are given in octal numbers, with the most significant bit denoting the very left (input) connection to the shift register. Note that the axes are swapped: the input IA2 is on the ordinate, the output IE2 on the abscissa.

    2.5 Extrinsic Information Transfer Chart

    To visualize the exchange of extrinsic information we plot demapper and decoder characteristics into a single diagram, which is referred to as Extrinsic Information Transfer chart.

    [Fig. 4: Extrinsic information transfer characteristics of some outer rate 1/2 decoders; extrinsic output IE2 of the outer decoder on the abscissa, a priori input IA2 to the outer decoder on the ordinate; curves for memory 1 (Gr, G) = (03, 02), memory 2 (07, 05), memory 4 (023, 037), memory 8 (0435, 0777), and the repetition code (diagonal line).]

    On the ordinate, the inner channel and extrinsic output IE1 becomes the outer a priori input IA2 (interleaving does not change mutual information). On the abscissa, the outer extrinsic output IE2 becomes the inner a priori input IA1. Provided that independence and Gaussian assumptions hold for modelling extrinsic information (a priori information, respectively), the transfer characteristics of the individual demapping/decoding blocks should approximately describe the true behavior of the iterative decoder. To verify the EXIT chart predictions we use the trajectory of iterative decoding, which is a simulation result of the compound "free-running" iterative decoder.

    Fig. 5 shows EXIT charts of the "d21" mapping in combination with an outer code. The transfer characteristics are just taken from Figs. 3 and 4. At low Eb/N0 the demapper and decoder transfer characteristics do intersect, and the decoding trajectory gets stuck after about three iterations at low mutual information, corresponding to high BER. With increasing Eb/N0-value, the demapper transfer characteristic is raised, opening a narrow tunnel ("bottleneck") to allow for convergence of iterative decoding towards low BER. This turbo cliff effect is verified in the BER chart of Fig. 7. For higher Eb/N0-values it just takes fewer iterations to reach a low BER. A closer examination based on demapper transfer characteristics yields a pinch-off limit (Table 2) where both transfer characteristics are just about to intersect. Simulation results suggest that the iterative decoder converges for Eb/N0-values above the pinch-off limit, provided that the interleaving depth is sufficiently large.
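    The pinch-off reasoning above can be automated: sample both characteristics on a grid and check whether the inner curve stays strictly above the inverse of the outer curve, i.e., whether the tunnel is open. A sketch with invented curves (placeholders for the measured demapper and decoder characteristics):

```python
import numpy as np

def tunnel_open(inner, outer_inv, n=1000, eps=0.0):
    """True if inner(IA1) > outer_inv(IA1) on [0, 1), i.e., the EXIT tunnel
    between the inner characteristic and the inverted outer characteristic
    is open and iterative decoding can converge towards (1, 1)."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    return bool(np.all(inner(x) > outer_inv(x) + eps))

# Hypothetical inner demapper curves at two SNRs, and an inverted outer curve
inner_lo = lambda x: 0.2 + 0.5 * x       # low SNR: intersects the outer curve
inner_hi = lambda x: 0.45 + 0.58 * x     # higher SNR: raised characteristic
outer_inv = lambda x: x ** 2             # inverse of a concave outer curve
```

    Sweeping the SNR parameter of the inner curve and locating the smallest value for which `tunnel_open` returns True is exactly the pinch-off search described in the text.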

    For non–recursive inner “codes”, like the mapping inan IDEM scheme, the BER floor is dominated by the in-

    396 S. ten Brink: Designing Iterative Decoding Schemes with the Extrinsic Information Transfer Chart. AEÜ Int. J. Electron. Commun. 54 (2000) No. 6, 389–398

    [Fig. 12: Rate 1/3 serially concatenated code consisting of an outer rate 1/2 and an inner rate 2/3 RSC code with polynomials (Gr, G) = (07, 05): binary source → outer rate 1/2 recursive systematic convolutional code (output all systematic and all parity bits) → interleaver Π → inner rate 2/3 recursive systematic convolutional code (puncture every other parity bit).]

    Every other inner parity bit (i.e., output from the recursive shift register) is punctured. The corresponding EXIT chart is given in Fig. 13. The outer decoder transfer characteristic is taken from Fig. 4. The pinch-off limit for this code concatenation is at Eb/N0 ≈ … dB.

    [Fig. 13: EXIT chart of the rate 1/3 SCC consisting of inner rate 2/3 and outer rate 1/2 RSC codes, both memory 2, (Gr, G) = (07, 05); axes IA1, IE2 (abscissa) and IA2, IE1 (ordinate); inner decoder curves at 0.4, 1, 2, and 3 dB; decoding trajectory at 0.4 dB.]

    For Fig. 14 the inner and outer code rates are just swapped. The convergence behavior is very similar to the code concatenation of Fig. 13; however, the bottleneck region is a little less pronounced, which explains the lower pinch-off limit at Eb/N0 ≈ … dB.

    Although these rate splittings are the most popular ones in the literature, they are quite far away from the Shannon limit, which is at Eb/N0 ≈ … dB. Since we could not significantly lower the pinch-off limit by changing the inner and outer code parameters such as generator polynomials and memory, we chose a different approach, with inner code rate 1 and outer code rate 1/3, similar to the memory-one repeat accumulate codes of [21].

    [Fig. 14: EXIT chart of the SCC consisting of inner rate 1/2 and outer rate 2/3 RSC codes, both memory 2, (Gr, G) = (07, 05); axes IA1, IE2 (abscissa) and IA2, IE1 (ordinate); inner decoder curves at 0.4, 1, 2, and 3 dB; decoding trajectory at 0.4 dB.]

    [Fig. 15: Serially concatenated code consisting of an outer rate 1/3 repetition code and an inner rate 1 memory 2 recursive convolutional code with inner systematic doping: binary source → outer repetition code (rate 1/3) → interleaver Π → inner rate 1 recursive convolutional code with inner systematic doping.]

    A search over inner rate 1 codes up to a given memory yields a matching pair of an inner memory 2 code and an outer rate 1/3 repetition code. The encoder structure is given in Fig. 15. Systematic doping is used to make the inner rate 1 code suitable for iterative decoding. With inner systematic doping, some of the coded bits at the output of the recursive shift register are substituted by their systematic counterparts. This opens up an entry point IE1(IA1 = 0) > 0 to initiate convergence of iterative decoding. Without systematic doping, the inner transfer characteristic would start at the origin, preventing the iterative decoder from converging.

    The EXIT chart of this code concatenation is depicted in Fig. 16. With an inner systematic doping ratio of 1/51 (every 51st coded bit is replaced by its corresponding systematic bit) we achieve a pinch-off limit of Eb/N0 ≈ … dB. At Eb/N0 = … dB we performed a simulation over … information bits (interleaver length … bits, 100 iterations) without measuring a bit error. Owing to the small code memory, the iter-


    EXIT chart analysis
    Bit-interleaved coded modulation

    [Block diagram of a BICM transceiver: information bits u[n] → error-correcting code → coded bits c[n] → interleaver Π → bit vectors x[n] = [x1[n], …, xm[n]] → M-ary modulation → symbols s[n]. At the receiver, the demodulator takes y[n] and a priori LLRs La(xi[n]) and produces extrinsic LLRs Lext(xi[n]); through Π⁻¹ these become a priori LLRs La(C[n]) for the SISO channel decoder, whose extrinsic output Lext(C[n]) is re-interleaved by Π and fed back to the demodulator.]

    Bit-Interleaved Coded Modulation

    A high spectral-efficiency transmission system: M-ary constellation S with M = 2^m. The achievable capacity depends on the mapping used:

    C = m − (1/2) Σ_{k=0}^{m−1} Σ_{c=0}^{1} E[ log2( Σ_{si ∈ S} p(y|si) / Σ_{sj ∈ S_c^k} p(y|sj) ) ]

    where S_c^k denotes the subset of symbols whose k-th label bit equals c.
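    The expectation in this formula is easy to estimate by Monte Carlo. A sketch for a Gray-mapped 4-PAM constellation on the real AWGN channel (an illustrative choice of constellation; the slides state the formula for a general M-ary constellation):

```python
import numpy as np

def bicm_capacity(snr_db, n_samples=60_000, seed=0):
    """Monte Carlo estimate of C = m - (1/2) sum_{k,c} E[log2(num/den)]
    for Gray-labeled 4-PAM with unit average symbol energy."""
    rng = np.random.default_rng(seed)
    symbols = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)  # Es = 1
    labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])        # Gray labels
    m = labels.shape[1]
    sigma2 = 10.0 ** (-snr_db / 10.0)                          # noise variance

    total = 0.0
    for k in range(m):
        for c in (0, 1):
            subset = symbols[labels[:, k] == c]
            s = rng.choice(subset, size=n_samples)             # bit k fixed to c
            y = s + rng.normal(0.0, np.sqrt(sigma2), n_samples)
            # numerator: sum over the full constellation; denominator: subset S_c^k
            num = np.exp(-(y[:, None] - symbols[None, :]) ** 2 / (2 * sigma2)).sum(axis=1)
            den = np.exp(-(y[:, None] - subset[None, :]) ** 2 / (2 * sigma2)).sum(axis=1)
            total += np.mean(np.log2(num / den))
    return m - 0.5 * total

c_lo, c_mid, c_hi = bicm_capacity(-10.0), bicm_capacity(5.0), bicm_capacity(20.0)
```

    At high SNR the estimate approaches m = 2 bits/symbol, and at very low SNR it approaches 0, as expected for this achievable-rate expression.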


    EXIT chart analysis
    Bit-interleaved coded modulation

    638 LDPC Code Applications and Advanced Topics

    [Figure 15.2: The capacity of coded modulation for 16QAM and 8PSK on the AWGN channel; capacity (bits/symbol) versus SNR (dB), with the log2(1 + SNR) bound and curves for natural and Gray labeling.]

    bit). In Figure 15.2, the capacities of Gray- and natural-labeled coded modulation for 8PSK and 16QAM are plotted against Es/N0, where Es is the average signal energy. We note that Gray labeling provides a greater capacity than does natural labeling. It is interesting to note that BICM with natural labeling and iterative decoding outperforms BICM with Gray labeling and iterative decoding [9], but this is not the case for LDPC-coded modulation.

    15.1.1 Design Based on EXIT Charts

    We now describe an LDPC code-design technique from [10] for coded modulation based on EXIT charts. We focus on 8PSK with Gray labeling. Extensions to other modulation formats are straightforward. Consider the Gray-labeled 8PSK signal constellation in Figure 15.3, with labels (000, 001, 011, 010, 110, 111, 101, 100) for the signal points (s0, s1, s2, s3, s4, s5, s6, s7), respectively. The regions indicated by the dashed lines and the signal point indices (0, 1, 2, 3, 4, 5, 6, 7) correspond to the decision regions Ri for the signals (s0, s1, s2, s3, s4, s5, s6, s7), respectively.

    Considering now bit x0, we take symbols s0 and s1 as representatives for analysis. We first draw a line through the origin and the symbol s0 to partition the signal space into two half-planes. The neighboring symbols {s1, s2, s3} and {s7, s6, s5} for s0 in the two half-planes have x0 values of {1, 1, 0} and {0, 1, 1}, respectively. The symbol s4, which is also intersected by the partition line, has an x0 value of 0. We now draw a line through the origin and symbol s1 to again


    EXIT chart analysis
    Iterative demodulation and decoding

    [Block diagram of a BICM transceiver: information bits u[n] → error-correcting code → coded bits c[n] → interleaver Π → bit vectors x[n] = [x1[n], …, xm[n]] → M-ary modulation → symbols s[n]. At the receiver, the demodulator takes y[n] and a priori LLRs La(xi[n]) and produces extrinsic LLRs Lext(xi[n]); through Π⁻¹ these become a priori LLRs La(C[n]) for the SISO channel decoder, whose extrinsic output Lext(C[n]) is re-interleaved by Π and fed back to the demodulator.]


    EXIT chart analysis
    MAP demapper

    General case

    The binary vectors x[n] = [x1[n] ··· xm[n]] are mapped onto symbols s[n] ∈ S.

    Bitwise log-likelihood ratio:

    L(xi[n]) = log [ Σ_{s[n] ∈ S_0^i} exp( −‖y[n] − s[n]‖²/(2σ²) ) Π_m π(xm[n]) ] / [ Σ_{s[n] ∈ S_1^i} exp( −‖y[n] − s[n]‖²/(2σ²) ) Π_m π(xm[n]) ] = La(xi[n]) + Le(xi[n])

    Bitwise extrinsic log-likelihood ratio (the a priori factor on bit i is excluded from the products):

    Le(xi[n]) = log [ Σ_{s[n] ∈ S_0^i} exp( −‖y[n] − s[n]‖²/(2σ²) ) Π_{m≠i} π(xm[n]) ] / [ Σ_{s[n] ∈ S_1^i} exp( −‖y[n] − s[n]‖²/(2σ²) ) Π_{m≠i} π(xm[n]) ]    (37)

    where S_b^i denotes the set of symbols whose i-th label bit equals b, and π(xm[n]) denotes the a priori probability of bit xm[n].
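    These two expressions can be transcribed directly for one received sample (Python/NumPy sketch; exact log-domain evaluation, with the assumed convention L = log P(x = 0)/P(x = 1) and a hypothetical Gray-labeled 4-PAM constellation for the demo):

```python
import numpy as np

def _logsumexp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def extrinsic_llrs(y, symbols, labels, sigma2, La=None):
    """Extrinsic bitwise LLRs Le(x_i) for one received sample y.
    symbols: (M,) constellation points; labels: (M, m) bits in {0, 1};
    La: (m,) a priori LLRs with convention L = log P(x=0)/P(x=1)."""
    M, m = labels.shape
    La = np.zeros(m) if La is None else np.asarray(La, dtype=float)
    log_ch = -np.abs(y - symbols) ** 2 / (2.0 * sigma2)   # channel metric
    # log pi(x_m = b) = -b * La_m up to a constant that cancels in the ratio
    log_pi = -labels * La                                  # (M, m)
    Le = np.empty(m)
    for i in range(m):
        metric = log_ch + log_pi.sum(axis=1) - log_pi[:, i]  # exclude bit i
        Le[i] = (_logsumexp(metric[labels[:, i] == 0])
                 - _logsumexp(metric[labels[:, i] == 1]))
    return Le

# Demo: Gray-labeled 4-PAM, received sample near the symbol labeled (1, 0)
symbols = np.array([-3.0, -1.0, 1.0, 3.0])
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
Le = extrinsic_llrs(3.0, symbols, labels, sigma2=0.5)
```

    By construction the a priori LLR of bit i does not influence Le(x_i): changing La[i] moves the a priori part La(x_i) but not the extrinsic part, which is exactly the separation stated in the slide.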


    EXIT chart analysis
    Iterative demodulation and decoding: EXIT analysis

    390 S. ten Brink: Designing Iterative Decoding Schemes with the Extrinsic Information Transfer Chart. AEÜ Int. J. Electron. Commun. 54 (2000) No. 6, 389–398

    a priori knowledge on the inner unmapped bits, and computes channel and extrinsic information for each of the m coded bits per constellation symbol. The extrinsic output is deinterleaved to become the a priori input to the outer soft in/soft out decoder (MAP-, APP-, BCJR-algorithm [16]), which calculates extrinsic information on the outer coded bits. This is re-interleaved and fed back as a priori knowledge to the inner demapper, where it is exploited to reduce the bit error rate (BER) in further iterative decoding steps. The exchanged variables denote log-likelihood ratio values (L-values [17]).

    In the following, the concepts of iterative demapping and decoding (IDEM) are illustrated for a simple 8-ASK (amplitude shift keying) example. 8-ASK is a multi-amplitude modulation with real symbol amplitudes. The mapping (or "labeling") associates the m = 3 bits of the inner unmapped bit vector x = [x1 x2 x3] with one of the 2^m = 8 signal amplitudes s ∈ S; s is also referred to as "constellation symbol". In this paper, all amplitude levels are assumed to be equiprobable, that is, P(s) = 2^{−m} = 1/8. The extension to complex (two-dimensional) signal constellations is straightforward. Obviously, the inner mapping does not add redundancy, and thus we can define an inner "code" rate of 1.

2.2 Characterizing Mappings by Mutual Information

2.2.1 Symbol-wise mutual information

The mutual information between transmitted constellation symbol A and received AWGN channel output Y is given by

$$
I(A;Y) \;=\; 2^{-M} \sum_{a} \int_{-\infty}^{+\infty} p(y \mid a)\, \log_2 \frac{p(y \mid a)}{p(y)}\; dy \qquad (1)
$$

with conditional probability density function (PDF)

$$
p(y \mid a) \;=\; \frac{1}{\sqrt{2\pi}\,\sigma_n}\, \exp\!\left(-\frac{(y-a)^2}{2\sigma_n^2}\right) \qquad (2)
$$

and

$$
p(y) \;=\; 2^{-M} \sum_{a} p(y \mid a). \qquad (3)
$$

We refer to the symbol-wise mutual information I(A;Y) as the capacity C of the modulation scheme, keeping in mind that we fixed the a priori probabilities to P(A = a) = 2^{-M}. Determining the actual capacity would require a maximization of I(A;Y) over the a priori probabilities ("signal shaping"). The capacity C is dependent on the Es/N0-value of the channel; whenever C exceeds the number of information bits transmitted per symbol, reliable (i.e. error-free) communication is possible (channel coding theorem [18]). The average symbol energy is Es = 2^{-M} Σ_a a², with Eb = Es/(R·M). Evaluating (1) for an outer code rate of R2 = 1/2 yields Eb/N0 ≈ 4.24 dB ("Shannon limit"), which is the lowest Eb/N0-value for which reliable communication is possible. It serves as the ultimate performance limit of our iterative demapping and decoding scheme.

For a given signal constellation (e.g. 8-ASK), the symbol-wise mutual information I(A;Y) is independent of the applied mapping, and therefore it is not a suitable measure for characterizing different mappings.
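Equation (1) can also be estimated by Monte Carlo instead of numerical integration. A minimal sketch for equiprobable 8-ASK over a real AWGN channel (function name and sample count are illustrative assumptions):

```python
import numpy as np

def ask8_symbolwise_mi(es_n0_db, n_samples=200_000,
                       rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the symbol-wise mutual information
    I(A;Y) of eq. (1) for equiprobable 8-ASK on a real AWGN channel."""
    a = np.arange(-7.0, 8.0, 2.0)                    # amplitudes -7,-5,...,7
    es = np.mean(a**2)                               # average symbol energy
    sigma = np.sqrt(es / (2 * 10**(es_n0_db / 10)))  # noise std (N0/2 real)
    sym = rng.choice(a, size=n_samples)
    y = sym + sigma * rng.standard_normal(n_samples)
    # I(A;Y) = E[ log2 p(y|a) - log2 p(y) ]; Gaussian constants cancel
    d2 = (y[:, None] - a[None, :]) ** 2
    log_pya = -d2 / (2 * sigma**2)
    log_py = np.logaddexp.reduce(log_pya, axis=1) - np.log(len(a))
    log_cond = -((y - sym) ** 2) / (2 * sigma**2)
    return np.mean(log_cond - log_py) / np.log(2)
```

At high Es/N0 the estimate approaches M = 3 bits per symbol; at very low Es/N0 it tends to zero, in line with the text above.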

2.2.2 Bitwise mutual information

With the chain rule of mutual information [18] it can be shown that the symbol-wise mutual information I(X1, X2, X3; Y) = I(A;Y) can be decomposed into a sum of M bitwise mutual informations Ī_k,

$$
I(A;Y) \;=\; \sum_{k=0}^{M-1} \bar{I}_k . \qquad (4)
$$

Ī_k is a short-hand notation of

$$
\bar{I}_k \;=\; \bar{I}\!\left(X;\, Y \mid k \text{ other bits of the mapping known}\right), \qquad (5)
$$
$$
0 \le k \le M-1 . \qquad (6)
$$

A number of k "other" unmapped bits of the mapping are perfectly known, whereas no knowledge is available for the remaining M−1−k bits. The bar in (5) indicates that Ī_k is averaged a) over the bitwise mutual information with respect to all bits of the mapping, b) over all possible (M−1 choose k) combinations of choosing k known bits out of the M−1 other bits of the mapping, and c) over all 2^k bit-vector realizations thereof. The vector channel with mutual information I(A;Y) ≤ M carrying M bits can be viewed as being composed of M parallel sub-channels with mutual information Ī_k ≤ 1, each carrying a single bit (concept of equivalent channels [19]).

The chain rule in formulation (4) allows an interesting interpretation: the mapping only influences the partitioning of the total amount of mutual information I(A;Y) among the different conditional sub-channels Ī_k, whereas the sum Σ_k Ī_k always adds up to the constant value I(A;Y), independently of the applied mapping. Hence, the quantities Ī_k are well suited for characterizing different mappings.
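The decomposition (4)–(5) can be verified numerically. The sketch below (helper names are hypothetical; the Gray labeling is taken from Fig. 2) estimates the averaged terms Ī_k by Monte Carlo and checks that they sum to I(A;Y):

```python
import itertools
import numpy as np

# Gray labeling of the 8-ASK amplitudes -7,-5,...,7 (from Fig. 2)
A = np.arange(-7.0, 8.0, 2.0)
LABELS = np.array([[0,0,0],[0,0,1],[0,1,1],[0,1,0],
                   [1,1,0],[1,1,1],[1,0,1],[1,0,0]])

def chain_rule_check(sigma, n=50_000, rng=np.random.default_rng(2)):
    """Monte Carlo estimates of I(A;Y) and of the averaged bitwise
    terms I_k of eqs. (4)-(5); returns (I_AY, [I_0, ..., I_{M-1}])."""
    M = LABELS.shape[1]
    idx = rng.integers(0, len(A), n)
    y = A[idx] + sigma * rng.standard_normal(n)
    ll = -((y[:, None] - A[None, :]) ** 2) / (2 * sigma**2)

    def log_p_given(bits):        # log p(y | transmitted values of bits)
        mask = np.ones((n, len(A)), bool)
        for m in bits:
            mask &= LABELS[None, :, m] == LABELS[idx][:, m][:, None]
        masked = np.where(mask, ll, -np.inf)
        return np.logaddexp.reduce(masked, axis=1) - np.log(mask.sum(axis=1))

    def cond_mi(i, S):            # I(X_i; Y | X_S) from pointwise log-ratios
        return np.mean(log_p_given(list(S) + [i]) - log_p_given(list(S))) / np.log(2)

    I_k = []
    for k in range(M):            # average over target bit i and known sets S
        terms = [cond_mi(i, S)
                 for i in range(M)
                 for S in itertools.combinations(
                     [m for m in range(M) if m != i], k)]
        I_k.append(np.mean(terms))
    I_AY = np.mean(log_p_given(range(M)) - log_p_given([])) / np.log(2)
    return I_AY, I_k
```

Because each fixed ordering of the bits telescopes pointwise, the sum Σ_k Ī_k matches I(A;Y) on the same samples up to floating-point error, not merely up to Monte Carlo noise.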

Fig. 2. 8-ASK signal constellation (amplitudes −7 −5 −3 −1 1 3 5 7) with 5 different mappings. Each row gives the labels of the 8 amplitudes, with the Hamming distances between neighboring labels in parentheses:

    Gray:      000 001 011 010 110 111 101 100   (1 1 1 1 1 1 1)
    natural:   000 001 010 011 100 101 110 111   (1 2 1 3 1 2 1)
    d21:       000 011 101 110 111 001 010 100   (2 2 2 1 2 2 2)
    d23:       000 011 101 110 001 010 100 111   (2 2 2 3 2 2 2)
    anti-Gray: 000 111 001 110 011 100 010 101   (3 2 3 2 3 2 3)


Carlo simulation. For this, the independent Gaussian random variable of (7), with standard deviation σ_A = J^{-1}(I_A1), is applied as a priori input to the demapper of interest.
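The mutual-information function I_A = J(σ_A) of this Gaussian a priori model (whose inverse J^{-1} is used above) can be sketched by Monte Carlo, assuming the standard consistency condition μ_A = σ_A²/2 (function name and sample count are illustrative):

```python
import numpy as np

def j_function(sigma_a, n=100_000, rng=np.random.default_rng(1)):
    """Mutual information I(X; A) of the Gaussian a priori L-value model
    A = (sigma_a^2 / 2) * x' + sigma_a * n, with x' = 1 - 2x in {+1, -1}."""
    x = rng.integers(0, 2, n)
    xp = 1 - 2 * x                                   # bit 0 -> +1, bit 1 -> -1
    A = (sigma_a**2 / 2) * xp + sigma_a * rng.standard_normal(n)
    # I = 1 - E[ log2(1 + exp(-x' * A)) ], evaluated stably in the log domain
    return 1.0 - np.mean(np.logaddexp(0.0, -xp * A)) / np.log(2)
```

J is monotonically increasing from J(0) = 0 towards 1 as σ_A grows, so J^{-1}(I_A) can be obtained by simple bisection on this curve.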

[Fig. 3: Demapper transfer characteristics of 8-ASK mappings at Eb/N0 = 6 dB. Abscissa: a priori input IA1 to the demapper (0 to 1); ordinate: channel and extrinsic output IE1 of the demapper (0 to 1). Curves: Gray, natural, d21, d23 and anti-Gray mappings.]

In Fig. 3 the corresponding demapper transfer characteristics are given. The values of Table 1 show up as the end points IE1(IA1 = 0) and IE1(IA1 = 1). Different mappings result in transfer characteristics of different slopes. However, the mapping with the lowest IE1(IA1 = 0) does not necessarily exhibit the highest IE1(IA1 = 1) (see the anti-Gray mapping).

2.4 Transfer Characteristics of the Outer Decoder

The outer extrinsic transfer characteristic is

$$
I_{E2} = T_2(I_{A2}) \qquad (14)
$$

and describes the input/output relationship between the outer coded input IA2 and the outer coded extrinsic output IE2. It is not dependent on the Eb/N0-value. It can be computed by assuming the a priori input A2 to be Gaussian distributed and applying the same equations as presented for the demapper characteristic in the previous section.

Fig. 4 shows extrinsic transfer characteristics of some outer rate 1/2 recursive systematic convolutional (RSC) codes. The generator polynomials (feedback polynomial Gr, feedforward polynomial G) are given in octal numbers, with the most significant bit denoting the very left (input) connection to the shift register. Note that the axes are swapped: the input IA2 is on the ordinate, the output IE2 on the abscissa.

[Fig. 4: Extrinsic information transfer characteristics of some outer rate 1/2 decoders: memory 1, (Gr,G) = (03,02); memory 2, (07,05); memory 4, (023,037); memory 8, (0435,0777); repetition code (diagonal line). Abscissa: extrinsic output IE2 of the outer decoder; ordinate: a priori input IA2 to the outer decoder.]

2.5 Extrinsic Information Transfer Chart

To visualize the exchange of extrinsic information, we plot demapper and decoder characteristics into a single diagram, which is referred to as the Extrinsic Information Transfer chart. On the ordinate, the inner channel and extrinsic output IE1 becomes the outer a priori input IA2 (interleaving does not change mutual information). On the abscissa, the outer extrinsic output IE2 becomes the inner a priori input IA1. Provided that the independence and Gaussian assumptions hold for modelling extrinsic information (a priori information, respectively), the transfer characteristics of the individual demapping/decoding blocks should approximately describe the true behavior of the iterative decoder. To verify the EXIT chart predictions we use the trajectory of iterative decoding, which is a simulation result of the compound "free-running" iterative decoder.

Fig. 5 shows EXIT charts of the "d21" mapping in combination with an outer memory 2 code. The transfer characteristics are taken directly from Figs. 3 and 4. At Eb/N0 = 5 dB the demapper and decoder transfer characteristics intersect, and the decoding trajectory gets stuck after about three iterations at low mutual information, corresponding to a high BER. With increasing Eb/N0-value, the demapper transfer characteristic is raised, opening a narrow tunnel ("bottleneck") at about 7 dB that allows iterative decoding to converge towards low BER. This turbo cliff effect is verified in the BER chart of Fig. 7. For higher Eb/N0-values it simply takes fewer iterations to reach a low BER. A closer examination based on demapper transfer characteristics yields a pinch-off limit (Table 2) where both transfer characteristics are just about to intersect. Simulation results suggest that the iterative decoder converges for Eb/N0-values above the pinch-off limit, provided that the interleaving depth is sufficiently large.
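The exchange described above can be sketched as a fixed-point iteration between the two transfer characteristics (T1, T2 stand in for measured curves; the toy curves used below are illustrative only):

```python
def exit_trajectory(T1, T2, max_iter=50, tol=1e-6):
    """Predict the iterative-decoding trajectory from two transfer
    characteristics: inner demapper IE1 = T1(IA1) and outer decoder
    IE2 = T2(IA2). Returns the sequence of (IA1, IE1) points."""
    ia1, points = 0.0, []
    for _ in range(max_iter):
        ie1 = T1(ia1)                # demapper pass with current a priori
        ie2 = T2(ie1)                # outer decoder pass, IA2 = IE1
        points.append((ia1, ie1))
        if abs(ie2 - ia1) < tol:     # characteristics intersect: stuck
            break
        ia1 = ie2
    return points
```

If the two curves intersect below (1, 1), the trajectory stalls at the intersection (low mutual information, high BER); if a tunnel is open, it climbs towards the (1, 1) corner.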

For non-recursive inner "codes", like the mapping in an IDEM scheme, the BER floor is dominated by the in-



AEÜ Int. J. Electron. Commun. 54 (2000) No. 6, 389–398. S. ten Brink: Designing Iterative Decoding Schemes with the Extrinsic Information Transfer Chart, p. 393.

[Fig. 5: EXIT charts with iterative decoding trajectories of 8-ASK mapping "d21" in combination with the outer rate 1/2 memory 2 code, (Gr,G) = (07,05), at Eb/N0 = 5, 6, 7 and 8 dB. Axes of each panel: IA1/IE2 on the abscissa, IE1/IA2 on the ordinate, both from 0 to 1.]

tersection of the demapper and decoder transfer characteristics on the right-hand side of the EXIT chart. For recursive inner codes, like those used in the serially concatenated codes of Section 4, the inner transfer characteristics go up to IE1(IA1 = 1) = 1, and the interleaver size becomes the most crucial parameter [20].

The simulated trajectories match the characteristics very well, owing to the large interleaver, which ensures that the independence assumption of (7) holds over many iterations; in addition, the robustness of the mutual information measure allows one to cope with non-Gaussian distributions of a priori information. It should be emphasized, however, that the decoding trajectory is a simulation result based purely on measurements of mutual information taken at the output of the respective demapping/decoding block. Only to calculate the transfer characteristics of demapper and decoder did we impose the Gaussian and independence assumptions on the a priori inputs A1, A2.

[Table 2: Pinch-off limits Eb/N0 [dB] of some IDEM schemes; outer rate 1/2 codes of memory 1, 2, 4 and 8 versus the 8-ASK mappings natural, d21, d23 and anti-Gray. The numerical entries are garbled in the source.]

[Fig. 6: EXIT chart of mapping "d23" in combination with the outer rate 1/2 memory 1 code, (Gr,G) = (03,02); demapper characteristics at 5, 6, 7 and 8 dB, with a decoding trajectory. Abscissa: extrinsic output IE2 becomes a priori input IA1; ordinate: channel and extrinsic output IE1 becomes a priori input IA2.]

2.6 Bit Error Rate Chart

Fig. 7 shows the BER chart of iterative demapping and decoding for 8-ASK with the 5 different mappings of Fig. 2 in combination with an outer memory 2 code.

[Fig. 7: BER curves (10^-6 to 1) versus Eb/N0 (4 to 14 dB, Shannon limit 4.24 dB marked) of iterative demapping and decoding for some 8-ASK mappings in combination with outer memory 1 and memory 2 codes: Gray (no iteration), natural (15 iterations), d21 (15), d23 (15 and 40), anti-Gray (15); interleaver size … outer coded bits.]

For Gray mapping, almost no BER improvement can be achieved by performing iterative demapping, owing to the weak slope of the demapper transfer characteristic: perfect a priori knowledge IA1 = 1 does not significantly improve the extrinsic output IE1 of the demapper. Therefore, only the BER curve of Gray mapping without iterations is plotted. For the other mappings, a strong BER reduction can be triggered by iterative demapping, depending on the slope of the demapper characteristic: the steeper the slope, the later (in terms of Eb/N0-value) the turbo cliff in the BER chart, but, typically, the lower the BER floor. At a BER of … the
