Title of Invention

SPEECH CODING

Abstract A variable bit-rate speech coding method determines for each subframe a quantised vector d(i) comprising a variable number of pulses. An excitation vector c(i) for exciting LTP and LPC synthesis filters is derived by filtering the quantised vector d(i), and a gain value gc is determined for scaling the amplitude of the excitation vector c(i) such that the scaled excitation vector represents the weighted residual signal s remaining in the subframe speech signal after removal of redundant information by LPC and LTP analysis. A predicted gain value gc is determined from previously processed subframes, and as a function of the energy Ec contained in the excitation vector c(i) when the amplitude of that vector is scaled in dependence upon the number of pulses m in the quantised vector d(i). A quantised gain correction factor γgc is then determined using the gain value gc and the predicted gain value gc.
Full Text

Speech Coding
The present invention relates to speech coding and more particularly to the coding of speech signals in discrete time subframes containing digitised speech samples. The present invention is applicable in particular, though not necessarily, to variable bit-rate speech coding.
In Europe, the accepted standard for digital cellular telephony is known under the acronym GSM (Global System for Mobile communications). A recent revision of the GSM standard (GSM Phase 2; 06.60) has resulted in the specification of a new speech coding algorithm (or codec) known as Enhanced Full Rate (EFR). As with conventional speech codecs, EFR is designed to reduce the bit-rate required for an individual voice or data communication. By minimising this rate, the number of separate calls which can be multiplexed onto a given signal bandwidth is increased.
A very general illustration of the structure of a speech encoder similar to that used in EFR is shown in Figure 1. A sampled speech signal is divided into 20ms frames x, each containing 160 samples. Each sample is represented digitally by 16 bits. The frames are encoded in turn by first applying them to a linear predictive coder (LPC) 1 which generates for each frame a set of LPC coefficients a. These coefficients are representative of the short term redundancy in the frame.
The output from the LPC 1 comprises the LPC coefficients a and a residual signal r produced by removing the short term redundancy from the input speech frame using a LPC analysis filter. The residual signal is then provided to a long term predictor (LTP) 2 which generates a set of LTP parameters b which are representative of the long term redundancy in the residual signal r, and also a
residual signal s from which the long term redundancy is removed. In practice, long term prediction is a two stage process, involving (1) a first open loop estimate of a set of LTP parameters for the entire frame and (2) a second closed loop

refinement of the estimated parameters to generate a set of LTP parameters for each 40 sample subframe of the frame. The residual signal s provided by LTP 2 is in turn filtered through filters 1/A(z) and W(z) (shown commonly as block 2a in Figure 1) to provide a weighted residual signal s. The first of these filters is an LPC synthesis filter whilst the second is a perceptual weighting filter emphasising the "formant" structure of the spectrum. Parameters for both filters are provided by the LPC analysis stage (block 1).
An algebraic excitation codebook 3 is used to generate excitation (or innovation) vectors c. For each 40 sample subframe (four subframes per frame), a number of different "candidate" excitation vectors are applied in turn, via a scaling unit 4, to a LTP synthesis filter 5. This filter 5 receives the LTP parameters for the current subframe and introduces into the excitation vector the long term redundancy predicted by the LTP parameters. The resulting signal is then provided to a LPC synthesis filter 6 which receives the LPC coefficients for successive frames. For a given subframe, a set of LPC coefficients are generated using frame to frame interpolation and the generated coefficients are in turn applied to generate a synthesized signal ss.
The encoder of Figure 1 differs from earlier Code Excited Linear Prediction (CELP) encoders which utilise a codebook containing a predefined set of excitation vectors. The former type of encoder instead relies upon the algebraic generation and specification of excitation vectors (see for example WO9624925) and is sometimes referred to as an Algebraic CELP or ACELP. More particularly, quantised vectors d(i) are defined which contain 10 non-zero pulses. All pulses
can have the amplitudes +1 or -1. The 40 sample positions (i = 0 to 39) in a subframe are divided into 5 "tracks", where each track contains two pulses (i.e. at two of the eight possible positions), as shown in the following table.
Track  Pulse positions
1      0, 5, 10, 15, 20, 25, 30, 35
2      1, 6, 11, 16, 21, 26, 31, 36
3      2, 7, 12, 17, 22, 27, 32, 37
4      3, 8, 13, 18, 23, 28, 33, 38
5      4, 9, 14, 19, 24, 29, 34, 39
Each pair of pulse positions in a given track is encoded with 6 bits (i.e. 3 bits for each pulse giving a total of 30 bits), whilst the sign of the first pulse in the track is encoded with 1 bit (a total of 5 bits). The sign of the second pulse is not specifically encoded but rather is derived from its position relative to the first pulse. If the sample position of the second pulse is prior to that of the first pulse, then the second pulse is defined as having the opposite sign to the first pulse, otherwise both pulses are defined as having the same sign. All of the 3-bit pulse positions are Gray coded in order to improve robustness against channel errors, allowing the quantised vectors to be encoded with a 35-bit algebraic code u.
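The position coding and implicit sign rule described above can be sketched as follows. This is an illustrative Python sketch only: the function names are hypothetical, and the exact bit packing used by the standard may differ.

```python
def gray(n: int) -> int:
    # Binary-reflected Gray code of a 3-bit track position index.
    return n ^ (n >> 1)

def encode_track(pos1: int, pos2: int, sign1: int) -> tuple:
    # Encode one track: two Gray-coded 3-bit position fields plus a
    # single sign bit for the first pulse (0 for +1, 1 for -1).
    sign_bit = 0 if sign1 > 0 else 1
    return (gray(pos1), gray(pos2), sign_bit)

def second_pulse_sign(pos1: int, pos2: int, sign1: int) -> int:
    # The second pulse's sign is implicit: opposite to the first pulse
    # when its position precedes the first pulse's, otherwise the same.
    return -sign1 if pos2 < pos1 else sign1
```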
In order to generate the excitation vector c(i), the quantised vector d(i) defined by the algebraic code u is filtered through a pre-filter FE(z) which enhances special spectral components in order to improve synthesized speech quality. The pre-filter (sometimes known as a "colouring" filter) is defined in terms of certain of the LTP parameters generated for the subframe.
As with the conventional CELP encoder, a difference unit 7 determines the error between the synthesized signal and the input signal on a sample by sample basis (and subframe by subframe). A weighting filter 8 is then used to weight the error signal to take account of human audio perception. For a given subframe, a search unit 9 selects a suitable excitation vector c(i), where i = 0 to 39, from the set of candidate vectors generated by the algebraic codebook 3, by identifying the

vector which minimises the weighted mean square error. This process is commonly known as "vector quantisation".
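The vector quantisation search just described can be sketched as a brute-force weighted-error minimisation. Practical ACELP searches use a more efficient correlation-based criterion over the pulse tracks, so the following Python sketch is illustrative only (all names are hypothetical):

```python
import numpy as np

def select_excitation(candidates, h, target):
    # candidates: candidate excitation vectors from the algebraic codebook;
    # h: impulse response of the combined synthesis and weighting filters;
    # target: weighted target signal for the 40-sample subframe.
    best_idx, best_err = -1, float("inf")
    for idx, c in enumerate(candidates):
        synth = np.convolve(c, h)[: len(target)]    # synthesized, weighted
        err = float(np.sum((target - synth) ** 2))  # weighted squared error
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx
```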



The encoded frame comprises the LPC coefficients, the LTP parameters, the algebraic code defining the excitation vector, and the quantised gain correction factor codebook index. Prior to transmission, further encoding is carried out on certain of the coding parameters in a coding and multiplexing unit 12. In particular, the LPC coefficients are converted into a corresponding number of line spectral pair (LSP) coefficients as described in "Efficient Vector Quantisation of LPC Parameters at 24 Bits/Frame", Kuldip K. P. and Bishnu S. A., IEEE Trans. Speech and Audio Processing, Vol 1, No 1, January 1993. The entire coded frame is also encoded to provide for error detection and correction. The codec specified for GSM Phase 2 encodes each speech frame with exactly the same number of bits, i.e. 244, rising to 456 after the introduction of convolution coding and the addition of cyclic redundancy check bits.
Figure 2 shows the general structure of an ACELP decoder, suitable for decoding signals encoded with the encoder of Figure 1. A demultiplexer 13 separates a received encoded signal into its various components. An algebraic codebook 14, identical to the codebook 3 at the encoder, determines the code vector specified by the 35-bit algebraic code in the received coded signal and pre-filters (using the LTP parameters) this to generate the excitation vector. A gain correction factor is determined from a gain correction factor codebook, using the received quantised gain correction factor, and this is used in block 15 to correct the predicted gain derived from previously decoded subframes and determined in block 16. The excitation vector is multiplied at block 17 by the corrected gain before applying the product to an LTP synthesis filter 18 and a LPC synthesis filter 19. The LTP and LPC filters receive respectively the LTP parameters and LPC coefficients conveyed by the coded signal and reintroduce long term and short term redundancy into the excitation vector.
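The decoder's gain correction (blocks 15 to 17) can be sketched as follows, assuming, as the description implies, that the correction factor is defined multiplicatively as the ratio of the actual gain to the predicted gain. This is a Python sketch with hypothetical names, not the standard's exact arithmetic.

```python
def corrected_gain(gamma_gc: float, predicted_gain: float) -> float:
    # Block 15: multiplying the predicted gain by the decoded gain
    # correction factor recovers the quantised actual gain.
    return gamma_gc * predicted_gain

def scale_excitation(c, gain):
    # Block 17: scale the excitation vector by the corrected gain
    # before LTP and LPC synthesis filtering.
    return [gain * x for x in c]
```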
Speech is by its very nature variable, including periods of high and low activity and often relative silence. The use of fixed bit-rate coding may therefore be wasteful of bandwidth resources. A number of speech codecs have been proposed which vary the coding bit rate frame by frame or subframe by subframe. For example, US5,657,420 proposes a speech codec for use in the US CDMA system and in which the coding bit-rate for a frame is selected from a number of possible rates depending upon the level of speech activity in the frame.
With regard to the ACELP codec, it has been proposed to classify speech signal subframes into two or more classes and to encode the different classes using different algebraic codebooks. More particularly, subframes for which the

weighted residual signal s varies only slowly with time may be coded using code vectors d(i) having relatively few pulses (e.g. 2) whilst subframes for which the weighted residual signal varies relatively quickly may be coded using code vectors d(i) having a relatively large number of pulses (e.g. 10).
With reference to equation (7) above, a change in the number of excitation pulses in the code vector d(i) from, for example, 10 to 2 will result in a corresponding reduction in the energy of the excitation vector c(i). As the energy prediction of equation (4) is based on previous subframes, the prediction is likely to be poor following such a large reduction in the number of excitation pulses. This in turn will result in a relatively large error in the predicted gain gc, causing the gain
correction factor to vary widely across the speech signal. In order to be able to accurately quantise this widely varying gain correction factor, the gain correction factor quantisation table must be relatively large, requiring a correspondingly long codebook index, e.g. 5 bits. This adds extra bits to the coded subframe data.
It will be appreciated that large errors in the predicted gain may also arise in CELP encoders, where the energy of the code vectors d(i) varies widely from frame to frame, requiring a similarly large codebook for quantising the gain correction factor.
It is an object of the present invention to overcome or at least mitigate the above noted disadvantage of the existing variable rate codecs.
According to a first aspect of the present invention there is provided a method of coding a speech signal which signal comprises a sequence of subframes containing digitised speech samples, the method comprising, for each subframe:
(a) selecting a quantised vector d(i) comprising at least one pulse, wherein the number m and position of pulses in the vector d(i) may vary between subframes;

(b) determining a gain value gc for scaling the amplitude of the quantised vector d(i) or of a further vector c(i) derived from the quantised vector d(i), wherein the scaled vector synthesizes a weighted residual signal s;
(c) determining a scaling factor k which is a function of the ratio of a predetermined energy level to the energy in the quantised vector d(i);
(d) determining a predicted gain value gc on the basis of one or more previously processed subframes, and as a function of the energy Ec of the quantised vector d(i) or said further vector c(i) when the amplitude of the vector is scaled by said scaling factor k; and
(e) determining a quantised gain correction factor γgc using said gain value gc and said predicted gain value gc.
By scaling the energy of the excitation vector as set out above, the present invention achieves an improvement in the accuracy of the predicted gain value gc when the number of pulses (or energy) present in the quantised vector d(i) varies from subframe to subframe. This in turn reduces the range of the gain correction factor γgc and enables accurate quantisation thereof with a smaller
quantisation codebook than heretofore. The use of a smaller codebook reduces the bit length of the vector required to index the codebook. Alternatively, an improvement in quantisation accuracy may be achieved with the same size of codebook as has heretofore been used.
In one embodiment of the present invention, the number m of pulses in the vector d(i) depends upon the nature of the subframe speech signal. In an alternative embodiment, the number m of pulses is determined by system requirements or properties. For example, where the coded signal is to be transmitted over a transmission channel, the number of pulses may be small when channel interference is high thus allowing more protection bits to be added to the

signal. When channel interference is low, and the signal requires fewer protection bits, the number of pulses in the vector may be increased.
Preferably, the method of the present invention is a variable bit-rate coding method and comprises generating said weighted residual signal s by substantially removing long term and short term redundancy from the speech signal subframe, classifying the speech signal subframe according to the energy contained in the weighted residual signal s, and using the classification to determine the number of pulses m in the quantised vector d(i).



According to a second aspect of the present invention there is provided a method of decoding a sequence of coded subframes of a digitised sampled speech signal, the method comprising for each subframe:
(a) recovering from the coded signal a quantised vector d(i) comprising at least one pulse, wherein the number m and position of pulses in the vector d(i) may vary between subframes;
(b) recovering from the coded signal a quantised gain correction factor γgc;

(c) determining a scaling factor k which is a function of the ratio of a
predetermined energy level to the energy in the quantised vector d(i);
(d) determining a predicted gain value gc on the basis of one or more previously processed subframes, and as a function of the energy Ec of the quantised vector d(i) or a further vector c(i) derived from d(i), when the amplitude of the vector is scaled by said scaling factor k; and
(e) correcting the predicted gain value gc using the quantised gain correction factor γgc to provide a corrected gain value gc; and
(f) scaling the quantised vector d(i) or said further vector c(i) using the gain value gc to generate an excitation vector synthesizing a residual signal s
remaining in the original subframe speech signal after removal of substantially redundant information therefrom.
Preferably, each coded subframe of the received signal comprises an algebraic code u defining the quantised vector d(i) and an index addressing a quantised gain correction factor codebook from where the quantised gain correction factor γgc is obtained.
According to a third aspect of the present invention there is provided apparatus for coding a speech signal which signal comprises a sequence of subframes containing digitised speech samples, the apparatus having means for coding each of said subframes in turn, which means comprises:
vector selecting means for selecting a quantised vector d(i) comprising at least one pulse, wherein the number m and position of pulses in the vector d(i) may vary between subframes;
first signal processing means for determining a gain value gc for scaling the amplitude of the quantised vector d(i) or a further vector c(i) derived from the quantised vector d(i), wherein the scaled vector synthesizes a weighted residual signal s;

second signal processing means for determining a scaling factor k which is a function of the ratio of a predetermined energy level to the energy in the quantised vector d(i);
third signal processing means for determining a predicted gain value gc on the basis of one or more previously processed subframes, and as a function of the energy Ec of the quantised vector d(i) or said further vector c(i), when the
amplitude of the vector is scaled by said scaling factor k; and
fourth signal processing means for determining a quantised gain correction factor γgc using said gain value gc and said predicted gain value gc.
According to a fourth aspect of the present invention there is provided apparatus for decoding a sequence of coded subframes of a digitised sampled speech signal, the apparatus having means for decoding each of said subframes in turn, the means comprising:
first signal processing means for recovering from the coded signal a quantised vector d(i) comprising at least one pulse, wherein the number m and position of pulses in the vector d(i) may vary between subframes;
second signal processing means for recovering from the coded signal a quantised gain correction factor γgc;
third signal processing means for determining a scaling factor k which is a function of the ratio of a predetermined energy level to the energy in the quantised vector d(i);
fourth signal processing means for determining a predicted gain value gc on the basis of one or more previously processed subframes, and as a function of the energy Ec of the quantised vector d(i) or a further vector c(i) derived from the quantised vector, when the amplitude of the vector is scaled by said scaling factor k; and
correcting means for correcting the predicted gain value gc using the
quantised gain correction factor γgc to provide a corrected gain value gc; and

scaling means for scaling the quantised vector d(i) or said further vector c(i) using the gain value gc to generate an excitation vector synthesizing a residual signal s remaining in the original subframe speech signal after removal of substantially redundant information therefrom.
For a better understanding of the present invention and in order to show how the same may be carried into effect reference will now be made, by way of example, to the accompanying drawings, in which:
Figure 1 shows a block diagram of an ACELP speech encoder;
Figure 2 shows a block diagram of an ACELP speech decoder;
Figure 3 shows a block diagram of a modified ACELP speech encoder capable of variable bit-rate encoding; and
Figure 4 shows a block diagram of a modified ACELP speech decoder capable of decoding a variable bit-rate encoded signal.
An ACELP speech codec, similar to that proposed for GSM phase 2, has been briefly described above with reference to Figures 1 and 2. Figure 3 illustrates a modified ACELP speech encoder suitable for the variable bit-rate encoding of a digitised sampled speech signal and in which functional blocks already described with reference to Figure 1 are identified with like reference numerals.
In the encoder of Figure 3, the single algebraic codebook 3 of Figure 1 is replaced with a pair of algebraic codebooks 23, 24. A first of the codebooks 23 is arranged to generate excitation vectors c(i) based on code vectors d(i) containing two pulses whilst a second of the codebooks 24 is arranged to generate excitation vectors c(i) based on code vectors d(i) containing ten pulses. For a given subframe, the choice of codebook 23, 24 is made by a codebook selection unit 25 in dependence upon the energy contained in the weighted residual signal s provided by the LTP 2. If the energy in the weighted residual signal exceeds some predefined (or adaptive) threshold, indicative of a highly varying weighted

residual signal, the ten pulse codebook 24 is selected. On the other hand, if the energy in the weighted residual signal falls below the defined threshold, then the two pulse codebook 23 is selected. It will be appreciated that two or more threshold levels may be defined in which case three or more codebooks are used. For a more detailed description of a suitable codebook selection process, reference should be made to "Toll Quality Variable-Rate Speech Codec"; Ojala P; Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing, Munich, Germany, Apr. 21-24 1997.
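The codebook selection rule described above amounts to a simple energy threshold test. A minimal Python sketch follows; the threshold value and any adaptation are left abstract, and the function name is hypothetical.

```python
def select_pulse_count(weighted_residual, threshold: float) -> int:
    # Return the number of pulses in d(i): 10 for a rapidly varying
    # (high-energy) weighted residual, 2 for a slowly varying one.
    energy = sum(x * x for x in weighted_residual)
    return 10 if energy > threshold else 2
```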


The predicted gain is then calculated using equation (6), the modified excitation vector energy given by equation (9), and the modified mean-removed excitation energy given by equation (11).
Introduction of the scaling factor k into equations (9) and (11) considerably improves the gain prediction, so that in general the predicted gain is close to the actual gain gc and the gain correction factor γgc is close to unity. As the range
of the gain correction factor is reduced, as compared with the prior art, a smaller gain correction factor codebook can be used, utilising a shorter length codebook index, e.g. 3 or 4 bits.
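Although equations (6), (9) and (10) are not reproduced here, the effect of the scaling can be sketched under two stated assumptions: that equation (10) reduces to k = sqrt(M/m) for unit-amplitude pulses, and that the predictor takes the usual log-energy form used in EFR-style codecs. This Python sketch is therefore a hedged illustration, not the standard's exact arithmetic.

```python
import math

M_MAX = 10  # assumed maximum permissible number of pulses in d(i)

def scaling_factor(m: int) -> float:
    # With unit-amplitude pulses the energy of d(i) is simply m, so the
    # ratio of the predetermined level (M pulses) to the vector energy
    # gives k = sqrt(M / m); a 2-pulse vector is scaled up accordingly.
    return math.sqrt(M_MAX / m)

def predicted_gain(pred_energy_db: float, mean_energy_db: float,
                   ec_db: float) -> float:
    # Log-domain gain predictor in the style of EFR: ec_db is computed
    # from the k-scaled excitation, so it no longer depends on the
    # subframe's pulse count and the prediction error stays small.
    return 10.0 ** (0.05 * (pred_energy_db + mean_energy_db - ec_db))
```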
Figure 4 illustrates a decoder suitable for decoding speech signals encoded with the ACELP encoder of Figure 3, that is where speech subframes are encoded with a variable bit rate. Much of the functionality of the decoder of Figure 4 is the same as that of Figure 2 and as such functional blocks already described with reference to Figure 2 are identified in Figure 4 with like reference numerals. The main distinction lies in the provision of two algebraic codebooks 20,21, corresponding to the 2 and 10 pulse codebooks of the encoder of Figure 3. The nature of the received algebraic code u determines the selection of the appropriate codebook 20,21 after which the decoding process proceeds in much the same way as previously described. However, as with the encoder, the predicted gain gc is calculated in block 22 using equation (6), the scaled excitation vector energy Ec as given by equation (9), and the scaled mean-removed excitation energy E(n) given by equation (11).
It will be appreciated by the skilled person that various modifications may be made to the above described embodiment without departing from the scope of the present invention. It will be appreciated in particular that the encoder and decoder of Figures 3 and 4 may be implemented in hardware or in software or by a combination of both hardware and software. The above description is concerned with the GSM cellular telephone system, although the present invention may also be advantageously applied to other cellular radio systems and indeed to non-radio

communication systems such as the internet. The present invention may also be employed to encode and decode speech data for data storage purposes.
The present invention may be applied to CELP encoders, as well as to ACELP encoders. However, because CELP encoders have a fixed codebook for generating the quantised vector d(i), and the amplitude of pulses within a given quantised vector can vary, the scaling factor k for scaling the amplitude of the excitation vector c(i) is not a simple function (as in equation (10)) of the number of pulses m. Rather, the energy of each quantised vector d(i) of the fixed codebook must be computed and the ratio of this energy, relative to, for example, the maximum quantised vector energy, determined. The square root of this ratio then provides the scaling factor k.
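For the CELP case just described, the per-vector scaling factors can be precomputed once for the fixed codebook. A minimal Python sketch follows, using the maximum vector energy as the predetermined reference level, as the paragraph above suggests (the function name is hypothetical):

```python
import math

def celp_scaling_factors(codebook):
    # One scaling factor per code vector: sqrt(reference energy /
    # vector energy), with the maximum vector energy in the codebook
    # taken as the predetermined reference level.
    energies = [sum(x * x for x in v) for v in codebook]
    e_ref = max(energies)
    return [math.sqrt(e_ref / e) for e in energies]
```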


We Claim
1. A method of coding a speech signal which signal comprises a sequence of subframes containing digitised speech samples, the method comprising, for each subframe:
(a) selecting a quantised vector d(i) comprising at least one pulse, wherein the number m and position of pulses in the vector d(i) may vary between subframes;
(b) determining a gain value gc for scaling the amplitude of the quantised vector d(i) or of a further vector c(i) derived from the quantised vector d(i),
wherein the scaled vector synthesizes a weighted residual signal s;
(c) determining a scaling factor k which is a function of the ratio of a predetermined energy level to the energy in the quantised vector d(i);
(d) determining a predicted gain value gc on the basis of one or more previously processed subframes, and as a function of the energy Ec of the quantised vector d(i) or said further vector c(i) when the amplitude of the vector is scaled by said scaling factor k; and
(e) determining a quantised gain correction factor γgc using said gain value
gc and said predicted gain value gc.
2. A method according to claim 1, the method being a variable bit-rate coding method and comprising:
generating said weighted residual signal s by substantially removing long term and short term redundancy from the speech signal subframe; and
classifying the speech signal subframe according to the energy contained in the weighted residual signal s, and using the classification to determine the number of pulses m in the quantised vector d(i).
3. A method according to claim 1 or 2 and comprising:

generating a set of linear predictive coding (LPC) coefficients a for each frame and a set of long term prediction (LTP) parameters b for each subframe, wherein a frame comprises a plurality of speech subframes; and
producing a coded speech signal on the basis of the LPC coefficients, the LTP parameters, the quantised vector d(i), and the quantised gain correction factor γgc.
4. A method according to any one of the preceding claims and comprising defining the quantised vector d(i) in the coded signal by an algebraic code u .
5. A method according to any one of the preceding claims, wherein the predicted gain value is determined according to the equation:
where E is a constant and E(n) is a prediction of the energy in the current subframe determined on the basis of said previously processed subframes.
6. A method according to any one of the preceding claims, wherein said predicted gain value gc is a function of the mean removed excitation energy E(n) of the quantised vector d(i) or said further vector c(i), of each of said previously processed subframes, when the amplitude of the vector is scaled by said scaling factor k.
7. A method according to any one of the preceding claims, wherein the gain value gc is used to scale said further vector c(i), and that further vector is
generated by filtering the quantised vector d(i).
8. A method according to claim 5, wherein:
said predicted gain value gc is a function of the mean removed excitation energy E(n) of the quantised vector d(i) or said further vector c(i), of each of

said previously processed subframes, when the amplitude of the vector is scaled by said scaling factor k;


where M is the maximum permissible number of pulses in the quantised vector d(i).
12. A method according to any one of the preceding claims and comprising:
searching a gain correction factor codebook to determine the quantised gain correction factor γgc which minimises the error:

and encoding the codebook index for the identified quantised gain correction factor.
13. A method of decoding a sequence of coded subframes of a digitised
sampled speech signal, the method comprising for each subframe:
(a) recovering from the coded signal a quantised vector d(i) comprising at least one pulse, wherein the number m and position of pulses in the vector d(i) may vary between subframes;
(b) recovering from the coded signal a quantised gain correction factor γgc;
(c) determining a scaling factor k which is a function of the ratio of a predetermined energy level to the energy in the quantised vector d( i);
(d) determining a predicted gain value gc on the basis of one or more previously processed subframes, and as a function of the energy Ec of the quantised vector d(i) or a further vector c(i) derived from the quantised vector, when the amplitude of the vector is scaled by said scaling factor k; and
(e) correcting the predicted gain value gc using the quantised gain correction factor γgc to provide a corrected gain value gc; and
(f) scaling the quantised vector d(i) or said further vector c(i) using the
gain value gc to generate an excitation vector synthesizing a residual signal s
remaining in the original subframe speech signal after removal of substantially redundant information therefrom.

14. A method according to claim 13, wherein each coded subframe of the received signal comprises an algebraic code u defining the quantised vector d(i) and an index addressing a quantised gain correction factor codebook from where the quantised gain correction factor γgc is obtained.
15. Apparatus for coding a speech signal using the method claimed in claims
1 to 12.
16. Apparatus for decoding a sequence of coded subframes of a digitised sampled speech signal using the method claimed in claims 13 and 14.
17. A method of coding a speech signal substantially as herein described with reference to the accompanying drawings.
18. A method of decoding a sequence of coded subframes of a digitised sampled speech signal substantially as herein described with reference to the accompanying drawings.



Patent Number 232664
Indian Patent Application Number IN/PCT/2000/312/CHE
PG Journal Number 13/2009
Publication Date 27-Mar-2009
Grant Date 20-Mar-2009
Date of Filing 24-Aug-2000
Name of Patentee NOKIA CORPORATION
Applicant Address Keilalahdentie 4, FIN-02150 Espoo,
Inventors:
# Inventor's Name Inventor's Address
1 OJALA, Pasi Laurintie 4 D, FIN-33880 Lempäälä,
PCT International Classification Number G01L19/04
PCT International Application Number PCT/FI1999/000112
PCT International Filing date 1999-02-12
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 FI 980532 1998-03-09 Finland