Title of Invention | SYSTEMS AND METHODS FOR REDUCED COMPLEXITY LDPC DECODING |
---|---|
Abstract | Systems and methods for generating check node updates in the decoding of low-density parity-check (LDPC) codes use new approximations in order to reduce the complexity of implementing an LDPC decoder while maintaining accuracy. The new approximations approximate the standard sum-product algorithm (SPA), can reduce the approximation error of the min-sum algorithm (MSA), and have almost the same performance as the SPA under both floating-point and fixed-point operation. |
Full Text | FORM 2 THE PATENTS ACT, 1970 (39 of 1970) & THE PATENTS RULES, 2003 COMPLETE SPECIFICATION [See section 10, Rule 13] SYSTEMS AND METHODS FOR REDUCED COMPLEXITY LDPC DECODING; VIA TELECOM CO., LTD., A COMPANY ORGANIZED AND EXISTING UNDER THE LAWS OF BRITISH WEST INDIES, WHOSE ADDRESS IS ZEPHYR HOUSE, MARY STREET, P.O. BOX 709, GEORGE TOWN, GRAND CAYMAN, CAYMAN ISLANDS, BRITISH WEST INDIES. THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

BACKGROUND

1. Field of the Invention

The embodiments described herein are related to methods for Low-Density Parity-Check decoding, and more particularly to methods for achieving reduced complexity Low-Density Parity-Check decoders.

2. Background of the Invention

A Low-Density Parity-Check (LDPC) code is an error correcting code that provides a method for transferring a message over a noisy transmission channel. While LDPC techniques cannot guarantee perfect transmission, the probability of lost information can be made very small. In fact, LDPC codes were the first to allow data transmission rates close to the theoretical maximum, i.e., the Shannon limit. LDPC techniques use a sparse parity-check matrix, e.g., a matrix populated mostly with zeros, hence the term low-density. The sparse matrix is randomly generated subject to the defined sparsity constraint.

LDPC codes can be defined in both matrix and graphical form. An LDPC matrix has a certain number of rows (M) and columns (N). The matrix can also be defined by the number of 1's in each row (wr) and the number of 1's in each column (wc). For a matrix to be considered low-density, the following conditions should be met: wc << M and wr << N. An LDPC matrix can be regular or irregular. A regular LDPC matrix is one in which wc is constant for every column and wr = wc * (N/M) is also constant for every row. If the matrix is low-density but the number of 1's in each row or column is not constant, then such a code is called an irregular LDPC code.

It will also be understood that an LDPC code can be graphically defined by its corresponding Tanner graph. Not only do such graphs provide a complete representation of the code, they also help to describe the decoding algorithm, as explained in more detail below. A Tanner graph comprises nodes and edges. The nodes are separated into two distinctive sets, or types, and the edges connect the two different types of nodes. The two types of nodes in a Tanner graph are called the variable nodes (v-nodes) and check nodes (c-nodes), or parity nodes. Thus, the Tanner graph consists of M check nodes (the number of parity bits) and N variable nodes (the number of bits in a code word). A check node is connected to a variable node if there is a 1 in the corresponding element of the LDPC matrix.

The number of information bits can be represented as K. A generator matrix G_{N×K} can then be defined according to the following: c_{N×1} = G_{N×K} d_{K×1}, where d_{K×1} is a message or data word and c_{N×1} is a code word. As can be seen, the code word c_{N×1} is generated by multiplying the message by the generator matrix. The subscripts are matrix notation and refer to the number of rows and columns respectively. Thus, the data word and code word can be represented as single-column matrices with K and N rows respectively. The parity check matrix is defined by the condition H_{M×N} c_{N×1} = 0.

Accordingly, figure 1 is a diagram illustrating a system 100 that includes a transmitter and a receiver.
A portion 102 of the transmitter and a portion 110 of the receiver are shown for simplicity. Referring to figure 1, an encoder 104 converts a data word d_{K×1} into a code word c_{N×1} via application of the generator matrix G_{N×K}. Modulator 106 can be configured to then modulate code word c_{N×1} onto a carrier so that the code word can be wirelessly transmitted across channel 108 to the receiver. In receive portion 110, demodulator 112 can be configured to remove the carrier from the received signal; however, channel 108 will add channel effects and noise, such that the signal produced by demodulator 112 can have the form r_{N×1} = (2/σ²)(1 − 2 c_{N×1}) + w_{N×1}, where r is a multilevel signal. As a result of the noise and channel effects, some of the data bits d will be lost in the transmission. In order to recover as much of the data as possible, decoder 114 can be configured to use the parity check matrix H_{M×N} to produce an estimate d'_{K×1} of the data that is very close to the original data d_{K×1}. It will be understood that decoder 114 can be a hard decision decoder or a soft decision decoder. Soft decision decoders are more accurate, but also typically require more resources.

In order to illustrate the operation of LDPC codes, the following example is presented:

H = [1 0 1 0 1 0; 0 1 0 1 0 1; 1 1 0 0 0 1]

As can be seen, the example parity check matrix H is low density, or sparse. The first row of matrix H defines the first parity check node, or equation. As can be seen, the first parity check node will check received samples r0, r2, and r4, remembering that r is the multilevel signal produced by demodulator 112 in the receiver. The second parity check node, i.e., the second row of H, checks received samples r1, r3, and r5, and the third parity check node checks samples r0, r1, and r5. In this example, there are three parity check nodes and six samples. The first and second parity check nodes are considered orthogonal, because they involve mutually exclusive sets of samples. If it is assumed that K = 3 and M = 3, then the following is true: p0 = d0, p1 = d0 ⊕ d2, and p2 = d0 ⊕ d1. Thus, for example, if d = [0;1;0], then p = [0;0;1] and c = [0;1;0;0;0;1].

Figure 2 is a Tanner graph illustrating the operation of H in the example above. As can be seen, the graph of figure 2 has three parity check nodes 202, 204, and 206, and six variable nodes 208, 210, 212, 214, 216, and 218, which correspond to the bits of c. Parity check nodes 202, 204, and 206 are connected with variable nodes 208, 210, 212, 214, 216, and 218 via edges 220, 222, 224, 226, 228, 230, 232, 234, and 236, as dictated by the entries in H. In other words, each edge 220, 222, 224, 226, 228, 230, 232, 234, and 236 corresponds to a 1 in H.
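For illustration, the following Python sketch builds a parity-check matrix consistent with the example above (row 0 checking bits {0, 2, 4}, row 1 checking bits {1, 3, 5}, row 2 checking bits {0, 1, 5}) and verifies that the example code word satisfies H·c = 0 (mod 2). The matrix layout and the encode helper are illustrative reconstructions of the example, not a normative definition.

```python
import numpy as np

# Parity-check matrix consistent with the example: row 0 checks bits {0, 2, 4},
# row 1 checks bits {1, 3, 5}, and row 2 checks bits {0, 1, 5}.
H = np.array([[1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [1, 1, 0, 0, 0, 1]])

def encode(d):
    """Systematic encoding c = [d, p] chosen so that H @ c = 0 (mod 2)."""
    d0, d1, d2 = d
    p0 = d0               # follows from combining rows 1 and 2
    p1 = (d0 + d2) % 2    # row 0: c0 + c2 + c4 = 0
    p2 = (d0 + d1) % 2    # row 2: c0 + c1 + c5 = 0
    return np.array([d0, d1, d2, p0, p1, p2])

c = encode([0, 1, 0])
print(c)            # [0 1 0 0 0 1] -- the code word from the example
print(H @ c % 2)    # [0 0 0] -- every parity check is satisfied
```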
In an LDPC decoder, the operations of the parity check and variable nodes can be implemented by processors. In other words, each parity check node can be implemented by a parity check processor, and each variable node can be implemented by a variable node processor. An LDPC decoder is then an iterative decoder that implements a message passing algorithm defined by H. Unfortunately, conventional LDPC decoding techniques result in high-complexity, fully parallel decoder implementations, where all the messages to and from all the parity node processors have to be computed at every iteration in the decoding process. This leads to large complexity, increased resource requirements, and increased cost. Hence, there are many current efforts devoted to reducing the complexity of check node message updating while keeping the performance loss as small as possible.

The most common simplification is the min-sum algorithm (MSA), which greatly reduces the complexity of check node updates, but incurs a 0.3-0.4 dB degradation in performance relative to standard sum-product algorithm (SPA) check node implementations. To combat this performance degradation, modifications of the MSA using a normalization term and an offset adjustment term have also been proposed. Such solutions do have reduced performance loss compared with the more conventional MSA implementations, but significant performance loss remains. In addition, two-dimensional MSA schemes have been proposed that can further improve the performance of the MSA with some additional complexity. Thus, in conventional implementations, there is a constant trade-off between complexity and performance.

SUMMARY

Systems and methods for generating check node updates in the decoding of low-density parity-check (LDPC) codes are described below. The systems and methods described below use new approximations in order to reduce the complexity of implementing an LDPC decoder, while maintaining accuracy. The new approximations approximate the standard sum-product algorithm (SPA), can reduce the approximation error of the min-sum algorithm (MSA), and have almost the same performance as the SPA under both floating-point operation and fixed-point operation.

In one aspect, a receiver can include a demodulator configured to receive a wireless signal, remove a carrier signal from the wireless signal and produce a received signal, and a Low Density Parity Check (LDPC) processor configured to recover an original data signal from the received signal. The LDPC processor can include a plurality of variable node processors configured to receive the received signal and generate variable messages based on the received signal, and a parity node processor configured to receive the variable messages and generate soft outputs based on the variable messages, the parity node processor configured to implement the approximation described below. The parity node processor can be implemented using either a serial architecture or a parallel architecture.

In another aspect, a parity node processor can include a plurality of input processing blocks configured to receive variable messages in parallel and perform an exponential operation on the variable messages, a summer coupled with the plurality of input processing blocks, the summer configured to sum the outputs from the plurality of input processing blocks, a plurality of adders coupled with the summer and the plurality of input processing blocks, the plurality of adders configured to subtract the outputs of the plurality of input processing blocks from the output of the summer, and a plurality of output processing blocks coupled with the plurality of adders, the plurality of output processing blocks configured to perform a logarithm function on the outputs of the plurality of adders.
In another aspect, a parity node processor can include an input processing block configured to serially receive variable messages and perform an exponential operation on the variable messages, an accumulator coupled with the input processing block, the accumulator configured to accumulate the output of the input processing block, a shift register coupled with the input processing block, the shift register configured to store the variable messages for one clock cycle, an adder coupled with the accumulator and the shift register, the adder configured to subtract the output of the shift register from the output of the accumulator, and an output processing block coupled with the adder, the output processing block configured to perform a logarithm function on the output of the adder.

In still another aspect, a method for processing a received wireless signal can include receiving the wireless signal, removing a carrier signal from the wireless signal to produce a received signal, generating variable messages from the received signal, performing an exponential operation on the variable messages to generate exponential data, summing the exponential data, subtracting the variable messages from the summed exponential data to form a difference, and performing a logarithmic operation on the difference.

These and other features, aspects, and embodiments of the invention are described below in the section entitled "Detailed Description."

BRIEF DESCRIPTION OF THE DRAWINGS

Features, aspects, and embodiments of the inventions are described in conjunction with the attached drawings, in which:

Figure 1 is a diagram illustrating an example communication system that uses LDPC codes;

Figure 2 is a diagram illustrating the operation of an exemplary parity check matrix;

Figure 3 is a diagram illustrating an exemplary parity node processor;

Figure 4 is a diagram illustrating the operation of an exemplary parity node processor;

Figure 5 is a diagram illustrating the operation of an exemplary variable node processor;

Figure 6 is a diagram illustrating an example parity node processor configured in accordance with one embodiment;

Figure 7 is a diagram illustrating an example parity node processor configured in accordance with another embodiment;

Figures 8 and 9 are graphs showing respectively the simulated frame error rate (FER) and bit error rate (BER) performance for the irregular, 1/2-rate LDPC codes defined in 802.16eD12 under an AWGN channel for various decoding algorithms;

Figures 10 and 11 are graphs showing respectively the simulated frame error rate (FER) and bit error rate (BER) performance for the irregular, 3/4-rate LDPC codes defined in 802.16eD12 under an AWGN channel for various decoding algorithms;

Figure 12 is a flow chart illustrating an example method for performing LDPC decoding using the parity node processors of figures 6 or 7;

Figure 13 is a diagram illustrating a portion of an example LDPC decoder that includes degree reduction in accordance with one embodiment;

Figure 14 is a diagram illustrating an example embodiment of a degree reducing unit that can be included in the LDPC decoder of figure 13 in accordance with one embodiment;

Figure 15 is a diagram illustrating an example comparator that can be included in the degree reducing unit of figure 14;

Figure 16 is a diagram illustrating an example embodiment of a degree reducing unit that can be included in the LDPC decoder of figure 13 in accordance with another embodiment; and
Figure 17 is a graph illustrating the FER performance for the LDPC decoder of figure 13 with a degree reduction of 6/3 and 7/3.

DETAILED DESCRIPTION

In the descriptions that follow, certain example parameters, values, etc., are used; however, it will be understood that the embodiments described herein are not necessarily limited by these examples. Accordingly, these examples should not be seen as limiting the embodiments in any way. Further, the embodiments of an LDPC decoder described herein can be applied to many different types of systems implementing a variety of protocols and communication techniques. Accordingly, the embodiments should not be seen as limited to a specific type of system, architecture, protocol, air interface, etc., unless specified.

A check node processor 302 of degree n is shown in figure 3. At each iteration, the outgoing soft messages {λi, i = 1, 2, ..., n} are updated with the incoming soft messages {ui, i = 1, 2, ..., n}. An outgoing soft message is defined as the logarithm of the ratio of the probabilities that the corresponding bit is 0 or 1. With the standard sum-product algorithm, the outgoing message is determined according to equation (1). The outgoing soft messages are then fed back to the variable node processors for use in generating the outputs ui during the next iteration; however, a soft message λi based on a variable node output from a particular node is not returned to that node. Thus, the constraint j ≠ i applies in the corresponding term of (1).

This can also be illustrated with the aid of figure 4, which is a diagram illustrating the operation of parity node processor 202. First, the LDPC decoder will initialize the variable data bits u0, u1, u2, ..., u5 of variable node processors 208, 210, 212, 214, 216, and 218 with r0, r1, r2, ..., r5. Referring to figure 4, the variable messages are sent from variable nodes 208, 212, and 216 to parity node processor 202. Parity node processor 202 operates on these messages and computes its messages λ. For example, λ^k(0→2) represents the message sent from parity node 202 to variable node 212 at the k-th iteration. The messages produced by parity node processor 202 can be defined by a corresponding set of equations (2). Thus, parity node processor 202 can be configured to implement equations (2). The soft messages produced by the parity nodes, e.g., parity node 202, are then fed back to variable nodes 208, 210, 212, 214, 216, and 218 for use in the next iteration. For example, figure 5 is a diagram illustrating the operation of variable node processor 208. Referring to figure 5, variable node processor 208 receives as inputs messages from parity node processors 202 and 206 and produces variable messages to be sent back to the same parity node processors 202 and 206. In the example of figure 4 and figure 5, hard decisions are taken on the multilevel variables uk and checked to see if they meet the parity node equations defined above. If there is a match, or if a certain defined number of iterations is surpassed, then the decoder can be stopped.
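For illustration, the following sketch applies one standard form of the SPA check-node update, the tanh rule, to the three incoming messages of parity node 202; the message values are arbitrary LLRs chosen only for the example.

```python
import math

def spa_check_node(u):
    """Standard SPA check-node update (tanh rule): the outgoing message on edge i
    combines every other incoming LLR u_j, j != i."""
    out = []
    for i in range(len(u)):
        prod = 1.0
        for j, uj in enumerate(u):
            if j != i:
                prod *= math.tanh(uj / 2.0)
        out.append(2.0 * math.atanh(prod))
    return out

# Parity node 202 receives messages from variable nodes 208, 212 and 216
# (bits c0, c2 and c4 of the example); the LLR values below are arbitrary.
incoming = [1.2, -0.4, 2.5]
print(spa_check_node(incoming))
```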
Variable node processor 208 can be configured to implement the corresponding variable node update equation, in which u_{ch,0}, the message from the channel, does not change with each iteration.

It will be understood that the decoder described above can be implemented using hardware and/or software configured appropriately, and that while separate parity check processors and variable node processors are described, these processors can be implemented by a single processor, such as a digital signal processor, or circuit, such as an Application Specific Integrated Circuit (ASIC); however, as mentioned above, implementation of an LDPC processor such as that described with respect to figures 2-5 can result in large complexity, stringent memory requirements, and interconnect complexity that can lead to bottlenecks. These issues can be exacerbated if multiple data rates are to be implemented. In other words, practical implementation of such a decoder can be limited.

As noted above, the sum-product algorithm of equation (1) can be prohibitive in terms of practical and cost effective implementation. Approximations have been proposed with the aim of reducing this complexity. For example, it can be shown that (4) is equivalent to (1):

λi = u1 ⊕ u2 ⊕ ... ⊕ ui−1 ⊕ ui+1 ⊕ ... ⊕ un   (4)

where the operator ⊕ is defined as:

x ⊕ y = ln((1 + e^(x+y)) / (e^x + e^y))   (5)

Using the approximation formula:

ln(e^x + e^y) ≈ max(x, y)   (6)

or equivalently,

e^x + e^y ≈ e^max(x,y)   (7)

in both numerator and denominator of (5), the following can be obtained:

x ⊕ y ≈ sign(x) · sign(y) · min(|x|, |y|)   (8)

Repeatedly substituting (8) into (4), the min-sum algorithm (MSA) can be obtained as follows:

λi ≈ (∏_{j≠i} sign(uj)) · min_{j≠i} |uj|   (9)

It will be apparent that equation (9) is much simpler to implement than (1) or (4), but the cost for this simplification is a significant performance penalty, generally about 0.3-0.4 dB, depending on the specific code structure and code rate. To reduce such performance loss, some modifications have been proposed.

The performance loss of the MSA comes from the approximation error of (9) relative to (1). Accordingly, to improve the performance loss, the approximation error should be reduced. It can be shown that (9) is always larger than (1) in magnitude. Thus, the normalized-MSA and the offset-MSA use scaling or offsetting to force the magnitude to be smaller. With the normalized min-sum algorithm, (9) is scaled by a factor α, where 0 < α < 1:

λi ≈ α · (∏_{j≠i} sign(uj)) · min_{j≠i} |uj|   (10)

The offset min-sum algorithm reduces the magnitude by a positive constant β:

λi ≈ (∏_{j≠i} sign(uj)) · max(min_{j≠i} |uj| − β, 0)   (11)

But these approaches again increase the complexity. Thus, as mentioned above, there is a constant trade-off between complexity and performance.
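A minimal behavioral sketch of the min-sum family discussed above is shown below, using the reconstructed forms of (9)-(11) with the normalization factor 0.8 and offset 0.15 that are used in the simulations described later; it is illustrative only.

```python
import math

def msa_check_node(u):
    """Min-sum update (9): magnitude = smallest magnitude of the other inputs,
    sign = product of the other inputs' signs."""
    out = []
    for i in range(len(u)):
        others = [u[j] for j in range(len(u)) if j != i]
        sign = -1.0 if sum(1 for v in others if v < 0) % 2 else 1.0
        out.append(sign * min(abs(v) for v in others))
    return out

def normalized_msa(u, alpha=0.8):
    # Normalized MSA (10): scale the min-sum result by 0 < alpha < 1.
    return [alpha * m for m in msa_check_node(u)]

def offset_msa(u, beta=0.15):
    # Offset MSA (11): reduce the magnitude by beta, floored at zero.
    return [math.copysign(max(abs(m) - beta, 0.0), m) for m in msa_check_node(u)]

incoming = [1.2, -0.4, 2.5]
print(msa_check_node(incoming))   # overshoots the SPA value in magnitude
print(normalized_msa(incoming))
print(offset_msa(incoming))
```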
The embodiments described below use a new approach for the check node update in the decoding of LDPC codes. The approach is based on a new approximation of the SPA that can reduce the approximation error of the MSA and has almost the same performance as the SPA under both floating-point operation and fixed-point operation. As a result, the new approximation can be implemented in simple structures, the complexity of which is on par with MSA implementations.

The approximation error of the MSA comes from the approximation error of equation (7). Note that equation (7) is coarse when x and y are close. The MSA uses equation (7) in both the numerator and the denominator of equation (5). If the values of |x| and |y| are close, then either the numerator or the denominator can introduce a large approximation error. Thus, to improve the accuracy of the outgoing message, equation (7) can be used in (5) only when the numerator or denominator of (5) will produce a small approximation error.

For example, when both x and y have the same sign, then using the approximation 1 + e^(x+y) ≈ max(e^0, e^(x+y)) in the numerator will produce better results than using e^x + e^y ≈ e^max(x,y) in the denominator. Similarly, when x and y have opposite signs, then only approximating the denominator of (5) using e^x + e^y ≈ e^max(x,y) can produce better results. Thus, a better approximation of (5), for x, y > 0, can be generated using the following:

x ⊕ y ≈ −ln(e^(−x) + e^(−y))   (12)

For all combinations of the signs of x and y, the following general expression can be used:

x ⊕ y ≈ sign(x) · sign(y) · (−ln(e^(−|x|) + e^(−|y|)))   (13)

Iteratively substituting (13) into (4) produces:

λi ≈ (∏_{j≠i} sign(uj)) · (−ln(Σ_{j≠i} e^(−|uj|)))   (14)

Note that (14) only holds when

Σ_{j≠i} e^(−|uj|) ≤ 1   (15)

If this condition is not satisfied, then the result of the summation can be limited to 1, resulting in the following:

λi = (∏_{j≠i} sign(uj)) · (−ln(min(Σ_{j≠i} e^(−|uj|), 1)))   (16)

The sign of (16) can be realized in the same way as in an MSA implementation, e.g., with a binary ex-or logic circuit. The kernel of the approximation has the invertibility property, which allows the computation of the aggregate soft message first, followed by intrinsic back-out to produce the extrinsic updates. The amplitude of equation (16) can be realized with a serial structure or a parallel structure, shown in figures 6 and 7 respectively.

Thus, figure 6 is a diagram illustrating a serial implementation of a parity node processor 600. As can be seen, the variable node outputs are first processed by processing block 602 and then accumulated in accumulator 604. Each input is then stored in shift register 606 and subtracted from the output of accumulator 604 in adder 610. The natural log of the resulting difference is then taken in processing block 608 in order to produce the soft outputs.

Figure 7 is a diagram illustrating a parallel implementation of a parity node processor 700. Here, the inputs from the variable node processors are processed in parallel in processing blocks 702, 704, and 706 and then summed in summer 708. Each input is then subtracted from the output of summer 708 in parallel in adders 710, 712, and 714. The natural logs of the outputs of adders 710, 712, and 714 are then taken in parallel in processing blocks 716, 718, and 720 to produce the soft outputs.

Both structures 600 and 700 have the same computation load. Serial structure 600 requires a smaller hardware size, but needs 2n clock cycles to get all outgoing soft messages. Parallel structure 700 requires only 1 clock cycle, but needs a larger hardware size than serial structure 600. Parallel structure 700 is attractive when the decoding speed is the primary concern. It will be understood that the exponential and logarithm operations in figures 6 and 7 can be realized in any way, such as look-up tables, software, or hardware, etc. The ln(.) operation can include the min(.,0) operation, which can be implemented by simply using the sign bit of the logarithm result to clear the output. In particular, if the logarithm is realized with a look-up table, this can be done by simply setting the content of the table to 0 for all inputs greater than 1 or simply limiting the range of the address used to pick up the table content.

The implementations of figures 6 and 7 can be included in a receiver such as receiver 110. Such a receiver can be included in a device configured to operate in, e.g., a wireless Wide Area Network (WAN) or Metropolitan Area Network (MAN), a wireless Local Area Network (LAN), or a wireless Personal Area Network (PAN). The computation complexity of the proposed implementations is similar to that of an MSA implementation.
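A behavioral sketch of the parallel structure of figure 7 follows. It mirrors the data path described above: exponentiate the input magnitudes, sum them, back out each input's own term, take the logarithm with the min(·,0) clamp, and attach a sign formed from the other inputs' signs. It models the reading of equation (16) given above and is not a gate-level description.

```python
import math

def proposed_check_node(u):
    """Behavioral model of the parallel parity node processor 700 of figure 7."""
    e = [math.exp(-abs(x)) for x in u]        # input processing blocks 702-706
    total = sum(e)                            # summer 708
    out = []
    for i in range(len(u)):
        diff = total - e[i]                   # adders 710-714: intrinsic back-out
        mag = -min(math.log(diff), 0.0)       # ln(.) with the min(.,0) clamp
        neg = sum(1 for j, x in enumerate(u) if j != i and x < 0)
        sign = -1.0 if neg % 2 else 1.0       # ex-or of the other inputs' signs
        out.append(sign * mag)
    return out

incoming = [1.2, -0.4, 2.5]
print(proposed_check_node(incoming))   # tracks the SPA result closely
```

The serial structure of figure 6 computes the same quantities with an accumulator and a shift register over 2n clock cycles rather than in parallel.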
Table 1 is a comparison of the computation load for parity node processing for various decoding algorithms, where it has been assumed that the SPA, MSA, normalized-MSA and offset-MSA are implemented in a known forward-backward manner.

Table 1. Computation load for a check node of degree n
Algorithm | e() | ln() | + | ×
SPA, Eq. (4) | 9(n-2) | 6(n-2) | 12(n-2) | -
Proposed, Eq. (14) | n | n | 2n-1 | -
MSA, Eq. (9) | - | - | 3(n-2) | -
Normalized-MSA, Eq. (10) | - | - | 3(n-2) | n
Offset-MSA, Eq. (11) | - | - | 4n-6 | -

Figures 8 and 9 are graphs showing respectively the simulated frame error rate (FER) and bit error rate (BER) performance for the irregular, 1/2-rate LDPC codes defined in 802.16eD12 under an AWGN channel for various decoding algorithms, including the SPA, the proposed algorithm under both floating-point and fixed-point operation, the MSA, the normalized-MSA and the offset-MSA. With the normalized-MSA and the offset-MSA, a normalization factor of 0.8 and an offset factor of 0.15 are used. The check node degree distribution of the code is ρ(x) = 0.6667x^6 + 0.3333x^7. The decoder uses layered decoding with a maximum iteration number of 30. Figures 10 and 11 are graphs showing the corresponding simulation results for the irregular, 3/4-rate LDPC codes with check node degree distribution ρ(x) = 0.8333x^14 + 0.1667x^15.

All the curves are simulated with floating-point operations except the curve labeled "proposed-quantization," which is the result of an implementation of equation (16) with a fixed-point decoder. In the simulation of the fixed-point decoder, the channel inputs are quantized to 8-bit binary integers, where 1 bit is used for the sign and the other 7 bits for the absolute value. In the simulations, the variable node updates are integer summations with results ranging from -128 to +128. The exponential operation, e.g., in figures 6 and 7, is implemented using a look-up table with 128 entries, each of which has 9 bits representing a quantized value in [0,1]. The summation and subtraction, e.g., in figures 6 and 7, are 9-bit integer operations. The logarithm is a table with 512 entries, each of which has 7 bits representing the quantized absolute value to be sent to the variable nodes together with the sign bits.

It can be seen from the graphs of figures 8-11 that implementation of equation (16) with floating-point operation can have almost the same performance as the standard SPA, and performance that is better than that produced using the MSA by 0.3-0.4 dB. Moreover, although it can be challenging to meet the dynamic range requirements for the exp() operation, the simulation results show that the fixed-point operation has hardly any performance loss relative to the floating-point operation. Note that the number of quantization bits can be greatly reduced with non-uniform quantization, at the cost of increased complexity. With non-uniform quantization, the size of the logarithm and exponential tables can be reduced, but the quantized values should first be mapped to the linearly quantized values before the operation of summation in figure 6.
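The fixed-point arrangement described above can be mimicked with two small look-up tables, as in the sketch below. The table sizes (a 128-entry, 9-bit exponential table and a 512-entry, 7-bit logarithm table) follow the description above, while the scale factor LLR_SCALE and the exact index mappings are assumptions made only to obtain a runnable illustration.

```python
import math

# Assumed scaling: 7 magnitude bits (0..127) represent LLRs of up to 127 / LLR_SCALE.
LLR_SCALE = 8.0

# 128-entry exponential table, 9-bit entries quantizing e^(-|u|) in [0, 1].
EXP_TABLE = [round(511 * math.exp(-i / LLR_SCALE)) for i in range(128)]

# 512-entry logarithm table, 7-bit entries holding the clamped output magnitude.
LOG_TABLE = [0] * 512
for s in range(1, 512):
    val = -math.log(s / 511.0)                    # -ln of the backed-out sum
    LOG_TABLE[s] = min(127, round(val * LLR_SCALE)) if val > 0 else 0

def fixed_point_magnitudes(u_q):
    """u_q: integer LLR magnitudes in [0, 127]; returns quantized output magnitudes."""
    e = [EXP_TABLE[x] for x in u_q]               # exponential look-ups
    total = sum(e)                                # integer summation
    out = []
    for i in range(len(u_q)):
        diff = min(511, max(total - e[i], 1))     # keep the address in table range
        out.append(LOG_TABLE[diff])               # logarithm look-up, min(.,0) built in
    return out

print(fixed_point_magnitudes([10, 3, 20]))
```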
Figure 12 is a flow chart illustrating an example method for performing LDPC decoding as described above. First, in step 1202, a wireless signal can be received, and the signal can be demodulated in step 1204. In step 1206, variable messages can be generated from the demodulated signal. An exponential operation can be performed on the variable messages in accordance with equation (16) in step 1208. In step 1210, the resulting exponential data can be summed, and the variable messages can be subtracted from the summed data in step 1212, again in accordance with equation (16). Finally, and again in accordance with equation (16), a logarithmic operation can be performed in step 1214 on the difference produced in step 1212.

Accordingly, using the systems and methods described above, the resources, i.e., complexity, required to implement a parity node can be reduced, while still maintaining a high degree of precision. In certain embodiments, the complexity can be reduced even further through degree reduction techniques. In other words, the number of inputs to the parity node can be reduced, which can reduce the resources required to implement the parity node. It should also be noted that in many parity node implementations, the sign and the absolute value of the outgoing soft message are calculated separately.

Figure 13 is a diagram illustrating a portion of an example LDPC decoder 1300 that includes degree reduction. In LDPC decoder 1300, the absolute values of the variable messages are first input to Degree Reduction Unit (DRU) 1302, which produces a reduced number of outputs {u'1, u'2, ..., u'm}, where m < n is chosen. The selected inputs {u'1, u'2, ..., u'm} are a subset of the inputs such that none of the remaining, unselected elements can be smaller than any element in {u'1, u'2, ..., u'm}. The outputs of DRU 1302 can then be provided to parity node processor 1304. Parity node processor 1304 can be implemented using either the serial configuration of figure 6 or the parallel configuration of figure 7. Similarly, depending on the embodiment, DRU 1302 can be implemented in parallel or serial structures.

Figure 14 is a diagram illustrating a parallel configuration for DRU 1302. In the example of figure 14, DRU 1302 comprises 12 comparators configured to reduce the degree from 8 to 3. In other words, 8 input variable messages are reduced to three output messages to be passed to parity node processor 1304. It will be understood, of course, that different input and output degrees can be accommodated depending on the requirements of a particular implementation. It will also be understood that the greater the degree reduction, the greater the reduction in complexity of parity node processor 1304; however, this can also lead to reduced precision. Accordingly, the level of degree reduction should be chosen to balance resource savings and precision. An example implementation for the comparators of figure 14 is illustrated in figure 15. As can be seen, the S output is the smaller of the two inputs, while the L output is the larger of the two.

In the example of figure 14, DRU 1302 is configured to select the smallest inputs. Thus, the comparators are configured to select the smallest input from each input pair. In this case, five levels of comparators are used to produce the 8 to 3 degree reduction. Comparators 1402a-1402d select the smallest input from the input pairs. These are then compared to the largest inputs from the input pairs in the second level of comparators, comprising comparators 1404a-1404d, in the manner shown. One of the outputs is then dropped and the remaining inputs are compared in the third level of comparators 1406a-1406c. Two more outputs are then dropped and the remaining inputs are compared in level four, comparator 1408, and level five, comparator 1410.
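Functionally, the comparator network of figure 14 selects the m smallest input magnitudes. A behavioral equivalent, ignoring the specific comparator wiring, is sketched below.

```python
def degree_reduction(magnitudes, m=3):
    """Behavioral model of DRU 1302: return the indices and values of the m
    smallest input magnitudes, so that no unselected input is smaller than
    any selected one."""
    order = sorted(range(len(magnitudes)), key=lambda k: magnitudes[k])
    selected = order[:m]
    return selected, [magnitudes[k] for k in selected]

# Eight input magnitudes reduced to three, as in the 8-to-3 example of figure 14.
idx, reduced = degree_reduction([2.1, 0.4, 3.3, 1.2, 5.0, 0.9, 2.8, 4.1], m=3)
print(idx, reduced)   # indices of the selected inputs and their magnitudes
```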
Figure 16 is a diagram illustrating an example serial implementation of DRU 1302 in accordance with one embodiment. As can be seen, in the example of figure 16, serial DRU 1302 reduces the degree from n to 3. In this example embodiment, DRU 1302 comprises serial comparators, e.g., comparators 1608, 1610, and 1612, which can be implemented as illustrated in figure 15 and described above. Delay units 1602, 1604, and 1606 are included, each corresponding to a delay of one clock cycle. The inputs (|u1|, |u2|, ..., |un|) arrive sequentially, one input per clock cycle.

Parity node processor 1304 can be configured to calculate the absolute value of the outgoing messages in accordance with equation (16), i.e., the second term of equation (16). In other words, the sign and the absolute value for equation (16) can be determined separately, per equations (17) and (18). Thus, parity node processor 1304 can be used to calculate the absolute value in accordance with equation (18) for a check node of degree m. Parity node processor 1304 can be implemented as a serial or parallel parity node processor as described above. Output unit (OU) 1306 can be configured to simply connect the outputs of parity node processor 1304 to the appropriate output ports. For example, suppose there are 8 inputs and DRU 1302 selects m = 3 of them. The selection result depends on the specific data values of the inputs. Suppose that, for some specific inputs, a particular subset is selected; then OU 1306 should connect the corresponding outputs of parity node processor 1304 to the output ports associated with those inputs, and connect the remaining output ports accordingly. For this to be feasible, OU 1306 should be configured to operate in coordination with DRU 1302. For example, if the k-th input of DRU 1302 is selected by DRU 1302 as the j-th input of parity node processor 1304, i.e., u'j, then OU 1306 can be configured to correspondingly connect the j-th output of parity node processor 1304 to the k-th output port.

It should be noted that while a parallel implementation of DRU 1302 can be paired with a parallel implementation of parity node processor 1304, and a serial implementation of DRU 1302 can be paired with a serial implementation of parity node processor 1304, such pairing is not required. In other words, a parallel implementation of DRU 1302 can be paired with a serial implementation of parity node processor 1304 and vice versa. Moreover, it may be better, depending on the requirements of a particular implementation, to forgo the inclusion of DRU 1302 and OU 1306. For example, if decoding speed is of the most concern, then a combination of a parallel DRU 1302 and a parallel parity node processor 1304 can be the best choice. On the other hand, if hardware size and resources are the most important issue, then a serial parity node processor 1304 without any DRU 1302 or OU 1306 can be preferred. If the LDPC decoder is implemented, e.g., with a Digital Signal Processor (DSP), as in Software Defined Radio (SDR) terminals, a serial DRU 1302 and a serial parity node processor can be preferred because they provide the least decoding delay.

Figure 17 is a diagram illustrating simulation results for the decoder of figure 13, illustrating that such an embodiment can reduce the degree to 3 and only cause a performance loss of less than 0.05 dB compared with the SPA. The check node degrees of the simulated LDPC code are 6 and 7. Similar performance can be observed for the 3/4-rate LDPC code, whose check node degrees are 14 and 15.
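One plausible reading of the degree-reduced data path of figure 13 is sketched below, under the assumption that each selected input receives a magnitude computed from the other selected inputs, while every unselected input reuses the magnitude computed from all m selected inputs, with the output unit routing the values to the correct ports. This mapping, like the helper names, is an assumption made for illustration rather than a reproduction of equations (17) and (18).

```python
import math

def degree_reduced_check_node(u, m=3):
    """Sketch of figure 13 (one plausible mapping): DRU selects the m smallest
    magnitudes, a reduced parity node processor computes the magnitudes, and
    the output unit maps them back to all n output ports.  The sign uses all
    n inputs, as in an MSA implementation."""
    n = len(u)
    mags = [abs(x) for x in u]
    selected = sorted(range(n), key=lambda k: mags[k])[:m]       # DRU 1302
    e = {k: math.exp(-mags[k]) for k in selected}
    total = sum(e.values())
    total_sign = -1.0 if sum(1 for x in u if x < 0) % 2 else 1.0
    out = []
    for i in range(n):
        s = total - e[i] if i in e else total        # back out own term if selected
        mag = -min(math.log(s), 0.0) if s > 0 else 0.0
        sign = total_sign * (-1.0 if u[i] < 0 else 1.0)   # exclude own sign
        out.append(sign * mag)
    return out

print(degree_reduced_check_node([2.1, -0.4, 3.3, 1.2, -5.0, 0.9, 2.8, 4.1]))
```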
Table 2 illustrates the LDPC complexity comparison with and without the degree reduction of figure 13. The data in table 2 is for n = 8 and m = 3. The "comparison" operation is normally less complex than the "Add" operation; thus, the overall complexity with degree reduction is much less than without.

Table 2. | exp | log | Add | Comparison
Without degree reduction | 8 | 8 | 15 | -
With degree reduction | 3 | 4 | 4 | 13

While certain embodiments of the inventions have been described above, it will be understood that the embodiments described are by way of example only. Accordingly, the inventions should not be limited based on the described embodiments. Rather, the scope of the inventions described herein should only be limited in light of the claims that follow when taken in conjunction with the above description and accompanying drawings.

WE CLAIM

1. A receiver, comprising: a demodulator configured to receive a wireless signal comprising an original data signal, remove a carrier signal from the wireless signal and produce a received signal; and a Low Density Parity Check (LDPC) processor coupled with the demodulator, the LDPC processor configured to recover the original data signal according to the received signal, the LDPC processor comprising: a plurality of variable node processors configured to generate variable messages based on the received signal, and a parity node processor coupled with the plurality of variable node processors, the parity node processor configured to implement an approximation of a sum product algorithm (SPA) based on the signs of the variable messages resulting in soft outputs representing estimates of the variable messages.

2. The receiver of claim 1, wherein the parity node processor is configured to implement the following:

3. The receiver of claim 1, wherein the parity node processor comprises: a plurality of input processing blocks configured to receive the plurality of variable messages in parallel and perform an exponential operation on the variable messages in order to generate exponential terms for use in generating the soft messages; a summer coupled with the plurality of input processing blocks, the summer configured to sum the exponential terms generated by the plurality of input processing blocks in order to generate sum terms for use in generating the soft messages; a plurality of adders coupled with the summer and the plurality of input processing blocks, the plurality of adders configured to subtract the exponential terms from the sum terms in order to generate a difference term for use in generating the soft messages; and a plurality of output processing blocks coupled with the plurality of adders, the plurality of output processing blocks configured to perform a logarithm function on the outputs of the plurality of adders in order to produce the soft messages.

4. The receiver of claim 3, wherein the parity node processor further comprises a sign processing block coupled with the plurality of output processing blocks, the sign processing block configured to determine a sign associated with the outputs of the plurality of output processing blocks.
5. The receiver of claim 1, wherein the parity node processor comprises: an input processing block configured to serially receive the variable messages and perform an exponential operation on the variable messages in order to produce exponential terms for use in generating the soft messages; an accumulator coupled with the input processing block, the accumulator configured to accumulate the exponential terms in order to generate sum terms for use in generating the soft messages; a shift register coupled with the input processing block, the shift register configured to store the variable messages for one clock cycle; an adder coupled with the accumulator and the shift register, the adder configured to subtract the output of the shift register from the sum terms in order to produce difference terms for use in generating the soft messages; and an output processing block coupled with the adder, the output processing block configured to perform a logarithm function on the difference terms in order to generate the soft messages.

6. The receiver of claim 5, wherein the parity node processor further comprises a sign processing block coupled with the output processing block, the sign processing block configured to determine a sign associated with the output of the output processing block.

7. An LDPC decoder comprising a parity node processor configured to generate soft messages that are estimates of variable messages received from a plurality of variable nodes, the parity node processor comprising: a plurality of input processing blocks configured to receive the plurality of variable messages in parallel and perform an exponential operation on the variable messages in order to generate exponential terms for use in generating the soft messages; a summer coupled with the plurality of input processing blocks, the summer configured to sum the exponential terms generated by the plurality of input processing blocks in order to generate sum terms for use in generating the soft messages; a plurality of adders coupled with the summer and the plurality of input processing blocks, the plurality of adders configured to subtract the exponential terms from the sum terms in order to generate a difference term for use in generating the soft messages; and a plurality of output processing blocks coupled with the plurality of adders, the plurality of output processing blocks configured to perform a logarithm function on the outputs of the plurality of adders in order to produce the soft messages.

8. The parity node processor of claim 7, further comprising a sign processing block coupled with the plurality of output processing blocks, the sign processing block configured to determine a sign associated with the soft messages.

9. The parity node processor of claim 8, wherein the sign processing block is implemented using a binary ex-or logic circuit; wherein the plurality of input processing blocks are implemented as look-up tables; and wherein the plurality of output processing blocks are implemented as look-up tables.
10. An LDPC decoder comprising a parity node processor configured to generate soft messages that are estimates of variable messages received from a plurality of variable nodes, the parity node processor comprising: an input processing block configured to serially receive the variable messages and perform an exponential operation on the variable messages in order to produce exponential terms for use in generating the soft messages; an accumulator coupled with the input processing block, the accumulator configured to accumulate the exponential terms in order to generate sum terms for use in generating the soft messages; a shift register coupled with the input processing block, the shift register configured to store the variable messages for one clock cycle; an adder coupled with the accumulator and the shift register, the adder configured to subtract the output of the shift register from the sum terms in order to produce difference terms for use in generating the soft messages; and an output processing block coupled with the adder, the output processing block configured to perform a logarithm function on the difference terms in order to generate the soft messages.

11. The parity node processor of claim 10, further comprising a sign processing block coupled with the output processing block, the sign processing block configured to determine a sign associated with the soft messages.

12. The parity node processor of claim 11, wherein the sign processing block is implemented using a binary ex-or logic circuit, wherein the input processing block is implemented as a look-up table, and wherein the output processing block is implemented as a look-up table.

13. A method for processing a received wireless signal using a parity node processor included in an LDPC decoder, the method comprising: receiving the wireless signal; removing a carrier signal from the wireless signal to produce a received signal; generating variable messages from the received signal; performing an exponential operation on the variable messages to generate exponential data; summing the exponential data; subtracting the variable messages from the summed exponential data to form a difference; and performing a logarithmic operation on the difference.

14. The method of claim 13, wherein said summing the exponential data comprises accumulating the exponential data.

15. The method of claim 14, wherein said subtracting the variable messages from the summed exponential data comprises subtracting a time-shifted version of a variable message from the accumulated exponential data.

16. An LDPC decoder comprising a parity node processor configured to generate soft messages that are estimates of variable messages received from a plurality of variable nodes, the LDPC decoder comprising: a plurality of variable node processors configured to generate variable messages based on the received signal; a degree reducing unit coupled with the plurality of variable node processors, the degree reducing unit configured to receive the plurality of variable messages and to reduce the degree of the variable messages prior to generation of the soft messages; and a parity node processor coupled with the degree reducing unit, the parity node processor configured to implement an approximation of a sum product algorithm (SPA) based on the signs of the reduced degree variable messages resulting in soft outputs representing estimates of the variable messages.
17. The LDPC decoder of claim 16, wherein the parity node processor comprises: a plurality of input processing blocks configured to receive the plurality of reduced degree variable messages in parallel and perform an exponential operation on the reduced degree variable messages in order to generate exponential terms for use in generating the soft messages; a summer coupled with the plurality of input processing blocks, the summer configured to sum the exponential terms generated by the plurality of input processing blocks in order to generate sum terms for use in generating the soft messages; a plurality of adders coupled with the summer and the plurality of input processing blocks, the plurality of adders configured to subtract the exponential terms from the sum terms in order to generate a difference term for use in generating the soft messages; and a plurality of output processing blocks coupled with the plurality of adders, the plurality of output processing blocks configured to perform a logarithm function on the outputs of the plurality of adders in order to produce the soft messages.

18. The LDPC decoder of claim 17, further comprising an output unit coupled with the parity node processor, the output unit comprising a plurality of output ports, the output unit configured to receive the soft messages and couple each of the soft messages to the appropriate output port.

19. The LDPC decoder of claim 18, further comprising a sign processing block coupled with the output unit, the sign processing block configured to determine a sign associated with the outputs of the output unit.

20. The LDPC decoder of claim 16, wherein the parity node processor comprises: an input processing block configured to serially receive the reduced degree variable messages and perform an exponential operation on the reduced degree variable messages in order to produce exponential terms for use in generating the soft messages; an accumulator coupled with the input processing block, the accumulator configured to accumulate the exponential terms in order to generate sum terms for use in generating the soft messages; a shift register coupled with the input processing block, the shift register configured to store the variable messages for one clock cycle; an adder coupled with the accumulator and the shift register, the adder configured to subtract the output of the shift register from the sum terms in order to produce difference terms for use in generating the soft messages; and an output processing block coupled with the adder, the output processing block configured to perform a logarithm function on the difference terms in order to generate the soft messages.

21. The LDPC decoder of claim 20, further comprising an output unit coupled with the parity node processor, the output unit comprising a plurality of output ports, the output unit configured to receive the soft messages and couple each of the soft messages to the appropriate output port.

22. The LDPC decoder of claim 21, further comprising a sign processing block coupled with the output unit, the sign processing block configured to determine a sign associated with the outputs of the output unit.

Dated this 16th day of July, 2007

ABSTRACT

Systems and methods for generating check node updates in the decoding of low-density parity-check (LDPC) codes use new approximations in order to reduce the complexity of implementing an LDPC decoder, while maintaining accuracy.
The new approximations approximate the standard sum-product algorithm (SPA), can reduce the approximation error of the min-sum algorithm (MSA), and have almost the same performance as the SPA under both floating-point operation and fixed-point operation. |
---|
1365-MUM-2007-ABSTRACT(16-7-2007).pdf
1365-MUM-2007-ABSTRACT(17-2-2012).pdf
1365-MUM-2007-ABSTRACT(GRANTED)-(16-3-2012).pdf
1365-MUM-2007-CANCELLED PAGES(17-2-2012).pdf
1365-MUM-2007-CLAIMS(AMENDED)-(17-2-2012).pdf
1365-MUM-2007-CLAIMS(COMPLETE)-(16-7-2007).pdf
1365-MUM-2007-CLAIMS(GRANTED)-(16-3-2012).pdf
1365-MUM-2007-CLAIMS(MARKED COPY)-(17-2-2012).pdf
1365-mum-2007-correspondence(13-12-2007).pdf
1365-MUM-2007-CORRESPONDENCE(IPO)-(21-3-2012).pdf
1365-mum-2007-correspondence-received.pdf
1365-mum-2007-description (complete).pdf
1365-MUM-2007-DESCRIPTION(COMPLETE)-(16-7-2007).pdf
1365-MUM-2007-DESCRIPTION(GRANTED)-(16-3-2012).pdf
1365-MUM-2007-DRAWING(16-7-2007).pdf
1365-MUM-2007-DRAWING(17-2-2012).pdf
1365-MUM-2007-DRAWING(GRANTED)-(16-3-2012).pdf
1365-mum-2007-form 1(24-7-2007).pdf
1365-mum-2007-form 18(13-12-2007).pdf
1365-MUM-2007-FORM 2(COMPLETE)-(16-7-2007).pdf
1365-MUM-2007-FORM 2(GRANTED)-(16-3-2012).pdf
1365-mum-2007-form 2(title page)-(16-7-2007).pdf
1365-MUM-2007-FORM 2(TITLE PAGE)-(COMPLETE)-(16-7-2007).pdf
1365-MUM-2007-FORM 2(TITLE PAGE)-(GRANTED)-(16-3-2012).pdf
1365-MUM-2007-FORM 3(12-9-2011).pdf
1365-MUM-2007-FORM 3(16-7-2007).pdf
1365-MUM-2007-FORM 3(17-2-2012).pdf
1365-mum-2007-form 3(6-9-2007).pdf
1365-MUM-2007-FORM 5(17-2-2012).pdf
1365-mum-2007-form 9(16-7-2007).pdf
1365-MUM-2007-PETITION UNDER RULE 137(12-9-2011).pdf
1365-MUM-2007-PETITION UNDER RULE 137(17-2-2012).pdf
1365-MUM-2007-POWER OF ATTORNEY(17-2-2012).pdf
1365-mum-2007-power of attorney(26-9-2007).pdf
1365-MUM-2007-REPLY TO EXAMINATION REPORT(12-9-2011).pdf
1365-MUM-2007-REPLY TO EXAMINATION REPORT(17-2-2012).pdf
1365-MUM-2007-US PATENT DOCUMENT(12-9-2011).pdf
Patent Number | 251448
---|---
Indian Patent Application Number | 1365/MUM/2007
PG Journal Number | 12/2012
Publication Date | 23-Mar-2012
Grant Date | 16-Mar-2012
Date of Filing | 16-Jul-2007
Name of Patentee | VIA TELECOM CO., LTD.
Applicant Address | ZEPHYR HOUSE, MARY STREET, P.O. BOX 709, GEORGE TOWN, GRAND CAYMAN, CAYMAN ISLANDS.
Inventors: |
PCT International Classification Number | H03M13/00
PCT International Application Number | N/A
PCT International Filing date |
PCT Conventions: |
|