Title of Invention

SELECTION OF CODING MODEL EXCITATION FOR AN AUDIO SIGNAL

Abstract

The invention relates to an encoder (200) comprising an input (201) for inputting frames of an audio signal, a LTP analysis block (209) for performing a LTP analysis to the frames of the audio signal to form LTP parameters on the basis of the properties of the audio signal, and at least a first excitation block (206) for performing a first excitation for frames of the audio signal, and a second excitation block (207) for performing a second excitation for frames of the audio signal. The encoder (200) further comprises a parameter analysis block (202) for analysing said LTP parameters, and an excitation selection block (203) for selecting one excitation block among said first excitation block (206) and said second excitation block (207) for performing the excitation for the frames of the audio signal on the basis of the parameter analysis. The invention also relates to a device, a system, a method, a module and a computer program product.
Coding model selection
Field of the Invention
The invention relates to audio coding in which the encoding mode is
changed depending on the properties of the audio signal. The present
invention relates to an encoder comprising an input for inputting frames
of an audio signal, a long term prediction (LTP) analysis block for
performing an LTP analysis to the frames of the audio signal to form
long term prediction (LTP) parameters on the basis of the properties of
the audio signal, and at least a first excitation block for performing a
first excitation for frames of the audio signal, and a second excitation
block for performing a second excitation for frames of the audio signal.
The invention also relates to a device comprising an encoder
comprising an input for inputting frames of an audio signal, a LTP
analysis block for performing an LTP analysis to the frames of the
audio signal to form LTP parameters on the basis of the properties of
the audio signal, and at least a first excitation block for performing a
first excitation for frames of the audio signal, and a second excitation
block for performing a second excitation for frames of the audio signal.
The invention also relates to a system comprising an encoder
comprising an input for inputting frames of an audio signal, a LTP
analysis block for performing an LTP analysis to the frames of the
audio signal to form LTP parameters on the basis of the properties of
the audio signal, and at least a first excitation block for performing a
first excitation for frames of the audio signal, and a second excitation
block for performing a second excitation for frames of the audio signal.
The invention further relates to a method for processing an audio signal, in
which an LTP analysis is performed to the frames of the audio signal
for forming LTP parameters on the basis of the properties of the signal,
and at least a first excitation and a second excitation are selectable to
be performed for frames of the audio signal. The invention relates to a
module comprising a LTP analysis block for performing an LTP
analysis to frames of an audio signal to form LTP parameters on the
basis of the properties of the audio signal. The invention relates to a
computer program product comprising machine executable steps for
encoding an audio signal, in which an LTP analysis is performed to the
frames of the audio signal for forming LTP parameters on the basis of
the properties of the signal, and at least a first excitation and a second
excitation are selectable to be performed for frames of the audio signal.
Background of the Invention
In many audio signal processing applications audio signals are
compressed to reduce the processing power requirements when
processing the audio signal. For example, in digital communication
systems the audio signal is typically captured as an analogue signal,
digitised in an analogue to digital (A/D) converter and then encoded
before transmission over a wireless air interface between a user
equipment, such as a mobile station, and a base station. The purpose
of the encoding is to compress the digitised signal and transmit it over
the air interface with the minimum amount of data whilst maintaining an
acceptable signal quality level. This is particularly important as radio
channel capacity over the wireless air interface is limited in a cellular
communication network. There are also applications in which the digitised
audio signal is stored on a storage medium for later reproduction of the
audio signal.
The compression can be lossy or lossless. In lossy compression some
information is lost during the compression, so that it is not possible to
fully reconstruct the original signal from the compressed signal. In
lossless compression no information is normally lost. Hence, the
original signal can usually be completely reconstructed from the
compressed signal.
The term audio signal is normally understood as a signal containing
speech, music (non-speech) or both. The different nature of speech
and music makes it rather difficult to design one compression algorithm
which works well enough for both speech and music. Therefore, the
problem is often solved by designing different algorithms for audio and
for speech and using some kind of recognition method to recognise
whether the audio signal is speech like or music like and to select the
appropriate algorithm according to the recognition.
Overall, classifying purely between speech and music or non-speech
signals is a difficult task. The required accuracy depends heavily on the
application. In some applications the accuracy is more critical, for
example in speech recognition or in accurate archiving for storage and
retrieval purposes. However, the situation is somewhat different if the
classification is used for selecting the optimal compression method for
the input signal. In this case, it may happen that there does not exist
one compression method that is always optimal for speech and another
method that is always optimal for music or non-speech signals. In
practice, it may be that a compression method for speech transients is
also very efficient for music transients. It is also possible that a music
compression method for strong tonal components may be good for
voiced speech segments. So, in these instances, methods that classify
purely between speech and music do not yield the most optimal way to
select the best compression method.
Often speech can be considered as bandlimited to between
approximately 200 Hz and 3400 Hz. The typical sampling rate used by
an A/D converter to convert an analogue speech signal into a digital
signal is either 8 kHz or 16 kHz. Music or non-speech signals may
contain frequency components well above the normal speech
bandwidth. In some applications the audio system should be able to
handle a frequency band between about 20 Hz and 20 000 Hz. The
sampling rate for that kind of signal should be at least 40 000 Hz to
avoid aliasing. It should be noted here that the above mentioned values
are just non-limiting examples. For example, in some systems the
upper limit for music signals may be about 10 000 Hz or even less
than that.
The sampled digital signal is then encoded, usually on a frame by
frame basis, resulting in a digital data stream with a bit rate that is
determined by a codec used for encoding. The higher the bit rate, the
more data is encoded, which results in a more accurate representation
of the input frame. The encoded audio signal can then be decoded and
passed through a digital to analogue (D/A) converter to reconstruct a
signal which is as near the original signal as possible.
An ideal codec will encode the audio signal with as few bits as possible
thereby optimising channel capacity, while producing decoded audio
signal that sounds as close to the original audio signal as possible. In
practice there is usually a trade-off between the bit rate of the codec
and the quality of the decoded audio.
At present there are numerous different codecs, such as the adaptive
multi-rate (AMR) codec and the adaptive multi-rate wideband (AMR-WB)
codec, which are developed for compressing and encoding audio
signals. AMR was developed by the 3rd Generation Partnership Project
(3GPP) for GSM/EDGE and WCDMA communication networks. In
addition, it has also been envisaged that AMR will be used in packet
switched networks. AMR is based on Algebraic Code Excited Linear
Prediction (ACELP) coding. The AMR and AMR-WB codecs consist of
8 and 9 active bit rates respectively and also include voice activity
detection (VAD) and discontinuous transmission (DTX) functionality. At
the moment, the sampling rate in the AMR codec is 8 kHz and in the
AMR-WB codec the sampling rate is 16 kHz. It is obvious that the
codecs and sampling rates mentioned above are just non-limiting
examples.
ACELP coding operates using a model of how the signal source is
generated, and extracts from the signal the parameters of the model.
More specifically, ACELP coding is based on a model of the human
vocal system, where the throat and mouth are modelled as a linear
filter and speech is generated by a periodic vibration of air exciting the
filter. The speech is analysed on a frame by frame basis by the
encoder and for each frame a set of parameters representing the
modelled speech is generated and output by the encoder. The set of
parameters may include excitation parameters and the coefficients for
the filter as well as other parameters. The output from a speech
encoder is often referred to as a parametric representation of the input
speech signal. The set of parameters is then used by a suitably
configured decoder to regenerate the input speech signal.
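The parametric principle just described can be illustrated with a minimal sketch of the LP synthesis filtering performed by such a decoder. The function name, the coefficient convention A(z) = 1 + a[0]z^-1 + ... + a[order-1]z^-order and the memory handling below are assumptions made only for this example and are not taken from any particular codec.

/* Minimal sketch (not actual codec code): the received excitation exc[]
 * is passed through the all-pole LP synthesis filter 1/A(z) to
 * regenerate the speech signal. */
#include <stddef.h>

void lp_synthesis(const float *exc, float *speech, size_t n_samples,
                  const float *a, int order, float *mem /* last 'order' outputs */)
{
    for (size_t n = 0; n < n_samples; n++) {
        float s = exc[n];
        for (int k = 0; k < order; k++)      /* s[n] = e[n] - sum a[k]*s[n-1-k] */
            s -= a[k] * mem[k];
        for (int k = order - 1; k > 0; k--)  /* shift the filter memory */
            mem[k] = mem[k - 1];
        mem[0] = s;                          /* newest output becomes memory */
        speech[n] = s;
    }
}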
Transform coding is widely used in non-speech audio coding. The
superiority of transform coding for non-speech signals is based on
perceptual masking and frequency domain coding. Even though
transform coding techniques give superior quality for audio signals, the
performance is not good for periodic speech signals, and therefore the
quality of transform coded speech is usually rather low. On the other
hand, speech codecs based on the human speech production system
usually perform poorly for audio signals.
For some input signals, the pulse-like ACELP-excitation produces
higher quality and for some input signals transform coded excitation
(TCX) is more optimal. It is assumed here that ACELP-excitation is
mostly used for typical speech content as an input signal and TCX-excitation
is mostly used for typical music and other non-speech audio
as an input signal. However, this is not always the case, i.e.
sometimes a speech signal has parts which are music like and a music
signal has parts which are speech like. There can also exist signals
containing both music and speech, wherein the selected coding method
may not be optimal for such signals in prior art systems.
The selection of excitation can be done in several ways: the most
complex and quite good method is to encode both the ACELP and the TCX
excitation and then select the best excitation based on the synthesised
audio signal. This analysis-by-synthesis type of method will provide
good results but it is in some applications not practical because of its
high complexity. In this method, for example, an SNR-type of algorithm can
be used to measure the quality produced by both excitations. This
method can be called a "brute-force" method because it tries all the
combinations of different excitations and selects afterwards the best
one. A less complex method would perform the synthesis only once
by analysing the signal properties beforehand and then selecting the
best excitation. The method can also be a combination of pre-selection
and "brute-force" to make a compromise between quality and
complexity.
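For illustration, the following sketch shows how such a "brute-force" selection could be realised: both candidate syntheses are compared with the original frame using a simple SNR measure and the excitation giving the higher SNR is chosen. The function names and the SNR measure are assumptions made for this example and are not taken from any particular codec.

/* Hypothetical sketch of the "brute-force" selection: compare both
 * candidate syntheses to the original frame with a simple SNR measure
 * and pick the better one. */
#include <math.h>
#include <stddef.h>

static double frame_snr(const float *ref, const float *syn, size_t n)
{
    double sig = 0.0, err = 0.0;
    for (size_t i = 0; i < n; i++) {
        double d = ref[i] - syn[i];
        sig += (double)ref[i] * ref[i];
        err += d * d;
    }
    return 10.0 * log10(sig / (err + 1e-12)); /* dB, guarded against divide-by-zero */
}

/* returns 0 for the ACELP excitation, 1 for the TCX excitation */
int select_excitation(const float *orig,
                      const float *syn_acelp, const float *syn_tcx, size_t n)
{
    return frame_snr(orig, syn_tcx, n) > frame_snr(orig, syn_acelp, n);
}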
Figure 1 presents a simplified encoder 100 with prior-art high
complexity classification. An audio signal is input to the input signal
block 101 in which the signal is digitised and filtered. The input signal
block 101 also forms frames from the digitised and filtered signal. The
frames are input to a linear prediction coding (LPC) analysis block 102.
It performs a LPC analysis on the digitised input signal on a frame by
frame basis to find a parameter set which best matches the
input signal. The determined parameters (LPC parameters) are
quantized and output 109 from the encoder 100. The encoder 100 also
generates two output signals with LPC synthesis blocks 103, 104. The
first LPC synthesis block 103 uses a signal generated by the TCX
excitation block 105 to synthesise the audio signal for finding the code
vector producing the best result for the TCX excitation. The second
LPC synthesis block 104 uses a signal generated by the ACELP
excitation block 106 to synthesise the audio signal for finding the code
vector producing the best result for the ACELP excitation. In the
excitation selection block 107 the signals generated by the LPC
synthesis blocks 103, 104 are compared to determine which one of the
excitation methods gives the best (optimal) excitation. Information
about the selected excitation method and parameters of the selected
excitation signal are, for example, quantized and channel coded 108
before outputting 109 the signals from the encoder 100 for
transmission.
Summary of the Invention
One aim of the present invention is to provide an improved method for
selecting a coding method for different parts of an audio signal. In the
invention an algorithm is used to select a coding method among at
least a first and a second coding method, for example TCX or ACELP,
for encoding in an open-loop manner. The selection is performed to
detect the best coding model for the source signal, which does not
mean the separation of speech and music. According to one
embodiment of the invention an algorithm selects ACELP especially for
periodic signals with high long-term correlation (e.g. voiced speech
signal) and for signal transients. On the other hand, certain kind of
stationary signals, noise like signals and tone like signals are encoded
using transform coding to better handle the frequency resolution.
The invention is based on the idea that the input signal is analysed by
examining the parameters the LTP analysis produces to find e.g.
transients, periodic parts etc. in the audio signal. The encoder
according to the present invention is primarily characterised in that the
encoder further comprises a parameter analysis block for analysing
said LTP parameters, and an excitation selection block for selecting
one excitation block among said first excitation block and said second
excitation block for performing the excitation for the frames of the audio
signal on the basis of the parameter analysis, and that said second
excitation is a transform coded excitation, and said first excitation is
other than transform coded excitation. The device according to the
present invention is primarily characterised in that the device further
comprises a parameter analysis block for analysing said LTP
parameters, and an excitation selection block for selecting one
excitation block among said first excitation block and said second
excitation block for performing the excitation for the frames of the audio
signal on the basis of the parameter analysis, and that said second
excitation is a transform coded excitation, and said first excitation is
other than transform coded excitation. The system according to the
present invention is primarily characterised in that the system further
comprises in said encoder a parameter analysis block for analysing
said LTP parameters, and an excitation selection block for selecting
one excitation block among said first excitation block and said second
excitation block for performing the excitation for the frames of the audio
signal on the basis of the parameter analysis, and that said second
excitation is a transform coded excitation, and said first excitation is
other than transform coded excitation. The method according to the
present invention is primarily characterised in that the method further
comprises analysing said LTP parameters, and selecting one excitation
block among said at least first excitation and said second excitation for
performing the excitation for the frames of the audio signal on the basis
of the parameter analysis, and that said second excitation comprises
using a transform coded excitation, and said first excitation comprises
using other than transform coded excitation. The module according to
the present invention is primarily characterised in that the module
further comprises a parameter analysis block for analysing said LTP
parameters, and an excitation selection block for selecting one
excitation block among a first excitation block and a second excitation
block, and for indicating the selected excitation method to an encoder,
and that said second excitation is a transform coded excitation, and
said first excitation is other than transform coded excitation. The
computer program product according to the present invention is
primarily characterised in that the computer program product further
comprises machine executable steps for analysing said LTP
parameters, and selecting one excitation among at least said first
excitation and said second excitation for performing the excitation for
the frames of the audio signal on the basis of the parameter analysis,
and that performing said second excitation comprises machine
executable steps for using a transform coded excitation, and
performing said first excitation comprises machine executable steps for
using other than transform coded excitation.
The present invention provides advantages when compared with prior
art methods and systems. By using the classification method according
to the present invention it is possible to improve reproduced sound
quality without greatly affecting the compression efficiency. The
invention especially improves the reproduced sound quality of mixed
signals, i.e. signals including both speech like and non-speech like
parts.
Description of the Drawings
Fig. 1 presents a simplified encoder with prior-art high complexity
classification,
Fig. 2 presents an example embodiment of an encoder with
classification according to the invention,
Fig. 3 shows scaled normalised correlation, lag and scaled gain
parameters of an example of a voiced speech sequence,
Fig. 4 shows scaled normalised correlation, lag and scaled gain
parameters of an example of an audio signal containing
sound of a single instrument,
Fig. 5 shows scaled normalised correlation, lag and scaled gain parameters of an
example of an audio signal containing music with several
instruments, and
Fig. 6 shows an example of a system according to the present
invention.
Detailed Description of the Invention
In the following an encoder 200 according to an example embodiment
of the present invention will be described in more detail with reference
to Fig. 2. The encoder 200 comprises an input block 201 for digitizing,
filtering and framing the input signal when necessary. It should be
noted here that the input signal may already be in a form suitable for
the encoding process. For example, the input signal may have been
digitised at an earlier stage and stored to a memory medium (not
shown). The input signal frames are input to a LPC analysis block 208
which performs LPC analysis to the input signal and forms LPC
parameters on the basis of the properties of the signal. A LTP analysis
block 209 forms LTP parameters on the basis of the LPC parameters.
The LPC parameters and LTP parameters are examined in a
parameter analysis block 202. On the basis of the result of the analysis
an excitation selection block 203 determines which excitation method is
the most appropriate one for encoding the current frame of the input
signal. The excitation selection block 203 produces a control signal 204
for controlling a selection means 205 according to the parameter
analysis. If it was determined that the best excitation method for
encoding the current frame of the input signal is a first excitation
method, the selection means 205 are controlled to select the signal
(excitation parameters) of a first excitation block 206 to be input to a
quantisation and encoding block 212. If it was determined that the best
excitation method for encoding the current frame of the input signal is a
second excitation method, the selection means 205 are controlled to
select the signal (excitation parameters) of a second excitation block
207 to be input to the quantisation and encoding block 212. Although
the encoder of Fig. 2 has only the first 206 and the second excitation
block 207 for the encoding process, it is obvious that there can also be
more than two different excitation blocks for different excitation
methods available in the encoder 200 to be used in the encoding of the
input signal.
The first excitation block 206 produces, for example, a TCX excitation
signal (vector) and the second excitation block 207 produces, for
example, an ACELP excitation signal (vector). It is also possible that the
selected excitation block 206, 207 first tries two or more excitation
vectors, wherein the vector which produces the most compact result is
selected for transmission. The determination of the most compact
result may be made, for example, on the basis of the number of bits to
be transmitted or the coding error (the difference between the
synthesised audio and the real audio input).
LPC parameters 210, LTP parameters 211 and excitation parameters
213 are, for example, quantised and encoded in the quantisation and
encoding block 212 before transmission e.g. to a communication
network 604 (Fig. 6). However, it is not necessary to transmit the
parameters but they can, for example, be stored on a storage medium
and at a later stage retrieved for transmission and/or decoding.
In an extended AMR-WB (AMR-WB+) codec, there are two types of
excitation for LP-synthesis: ACELP pulse-like excitation and transform
coded TCX-excitation. The ACELP excitation is the same as that already used
in the original 3GPP AMR-WB standard (3GPP TS 26.190), and the TCX-excitation
is the essential improvement implemented in the extended
AMR-WB.
In the AMR-WB+ codec, linear prediction coding (LPC) is calculated in
each frame to model the spectral envelope. The LPC excitation (the
output of the LP filter of the coder) is coded either by an algebraic code
excited linear prediction (ACELP) type algorithm or by a transform coding based
algorithm (TCX). As an example, ACELP produces LTP and fixed
codebook parameters for the LPC excitation. For example, the transform
coding (TCX) of AMR-WB+ exploits the FFT (Fast Fourier transform). In
the AMR-WB+ codec the TCX coding can be done by using one of three
different frame lengths (20, 40 and 80 ms).
In the following an example of a method according to the present
invention will be described in more detail. In the method an algorithm is
used to determine some properties of the audio signal such as
periodicity and pitch. Pitch is a fundamental property of voiced speech.
For voiced speech, the glottis opens and closes in a periodic fashion,
imparting a periodic character to the excitation. The pitch period, T0, is the
time span between sequential openings of the glottis. Voiced speech
segments have especially strong long-term correlation. This correlation
is due to the vibrations of the vocal cords, which usually have a pitch
period in the range from 2 to 20 ms.
LTP parameters lag and gain are calculated for the LPC residual. The
LTP lag is closely related to the fundamental frequency of the speech
signal and it is often referred to as a "pitch-lag" parameter, "pitch delay"
parameter or "lag", which describes the periodicity of the speech signal
in terms of speech samples. The pitch-delay parameter can be
calculated by using an adaptive codebook. Open-loop pitch analysis
can be done to estimate the pitch lag. This is done in order to simplify
the pitch analysis and confine the closed loop pitch search to a small
number of lags around the open-loop estimated lags. Another LTP
parameter related to the fundamental frequency is the gain, also called
the LTP gain. The LTP gain, together with the LTP lag, is an important
parameter used to give a natural representation of the speech.
The stationary properties of the source signal are analysed e.g. by the
normalised correlation, which can be calculated as follows:

NormCorr = \frac{\sum_{i=0}^{N-1} x_i \, x_{i-T_0}}{\sqrt{\sum_{i=0}^{N-1} x_i^2 \, \sum_{i=0}^{N-1} x_{i-T_0}^2}}

where T_0 is the open-loop lag of the frame having a length N, x_i is the
ith sample of the encoded frame, and x_{i-T_0} is the sample of the recently
encoded frame which is T_0 samples back in the past from the sample x_i.
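A compact sketch of this computation is shown below. It assumes that the samples x[-T0]...x[N-1] are available in one contiguous buffer and uses double precision accumulators; both are implementation choices made only for the example, not features of the codec.

/* Sketch of the normalised correlation of a frame x[0..N-1] against the
 * signal T0 samples in the past. The caller must guarantee that
 * x[-T0]..x[N-1] are valid samples of the same buffer. */
#include <math.h>

float norm_corr(const float *x, int N, int T0)
{
    double cross = 0.0, e_cur = 0.0, e_past = 0.0;
    for (int i = 0; i < N; i++) {
        cross  += (double)x[i] * x[i - T0];       /* x_i * x_(i-T0)          */
        e_cur  += (double)x[i] * x[i];            /* energy of current frame */
        e_past += (double)x[i - T0] * x[i - T0];  /* energy of past segment  */
    }
    return (float)(cross / (sqrt(e_cur * e_past) + 1e-12));
}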
A few examples of LTP parameter characteristics as a function of time
can be seen in Figures 3, 4 and 5. In the figures the curve A shows a
normalised correlation of the signal, the curve B shows the lag and the
curve C shows the scaled gain. The normalised correlation and the
LTP gain are scaled (multiplied by 100) so that they can fit in the same
figure with the LTP lag. In Figures 3, 4 and 5 the LTP lag values are also
divided by 2. As an example, a voiced speech segment (Figure 3)
exhibits a high LTP gain and a stable LTP lag. The normalised correlation
and the LTP gain of the voiced speech segments also match each other and
therefore have a high correlation. The method according to the
invention classifies this kind of signal segment so that the selected
coding method is the ACELP (the first coding method). If the LTP lag
contour (composed of the current and previous lags) is stable, but the LTP
gain is low or unstable and/or the LTP gain and the normalised
correlation have a small correlation, the selected coding method is the
TCX (the second coding method). This kind of situation is illustrated in
the example of Fig. 4 in which parameters of an audio signal of one
instrument (saxophone) are shown. If the LTP lag contour of current
and previous frames is very unstable, the selected coding method is
also in this case the TCX. This is illustrated in the example of Fig. 5 in
which parameters of an audio signal of a multiplicity of instruments are
shown. The word stable means here that e.g. the difference between
minimum and maximum lag values of current and previous frames is
below some predetermined threshold (a second threshold TH2).
Therefore, the lag is not changing much in current and previous
frames. In the AMR-WB+ codec, the range of the LTP gain is between 0 and
1.2. The range of the normalised correlation is between 0 and 1.0. As
an example, the threshold indicating a high LTP gain could be over 0.8.
A high correlation (or similarity) of the LTP gain and the normalised
correlation can be observed e.g. from their difference. If the difference is
below a third threshold TH3, for example 0.1, in the current and/or past
frames, the LTP gain and the normalised correlation have a high correlation.
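The lag and gain based rules described above can be illustrated with the following sketch. The threshold values, the buffer handling and the function name are assumptions made for the example and do not reproduce the actual AMR-WB+ decision logic; the transient and noise checks described in the following paragraphs are omitted here.

/* Illustrative sketch of the lag/gain based selection rules.
 * Returns 0 for ACELP (first coding method), 1 for TCX (second). */
#include <float.h>

#define TH2       2.0f   /* max-min lag spread allowed for a "stable" lag (assumed) */
#define TH3       0.1f   /* max |gain - normcorr| counted as "high correlation"     */
#define HIGH_GAIN 0.8f   /* example limit for a "high" LTP gain                     */

int select_coding(const float *lag_buf, int n_lags,  /* current + previous lags */
                  float ltp_gain, float norm_corr)
{
    float lag_min = FLT_MAX, lag_max = -FLT_MAX;
    for (int i = 0; i < n_lags; i++) {
        if (lag_buf[i] < lag_min) lag_min = lag_buf[i];
        if (lag_buf[i] > lag_max) lag_max = lag_buf[i];
    }
    int stable_lag        = (lag_max - lag_min) < TH2;
    int gain_matches_corr = (ltp_gain - norm_corr < TH3) &&
                            (norm_corr - ltp_gain < TH3);

    if (stable_lag && ltp_gain > HIGH_GAIN && gain_matches_corr)
        return 0;    /* voiced-like: periodic with strong long-term correlation */
    return 1;        /* unstable lag, or stable lag with weak/unmatched gain    */
}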
If the signal is transient in nature, it is coded by a first coding method,
for example, by the ACELP coding method, in an example embodiment
of the present invention. Transient sequences can be detected by
using the spectral distance SD of adjacent frames. For example, if the spectral
distance, SD_n, of the frame n calculated from the immittance spectrum pair
(ISP) coefficients (LP filter coefficients converted to the ISP
representation) in the current and previous frame exceeds a predetermined
first threshold TH1, the signal is classified as transient. The spectral
distance SD_n can be calculated from the ISP parameters as follows:
(Formula Removed)
where ISP_n is the ISP coefficient vector of the frame n and ISP_n(i) is
the ith element of it.
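Since the spectral distance formula itself has been removed from the text, the following sketch only illustrates the idea with an assumed distance measure, namely the sum of absolute element-wise differences between the ISP vectors of the current and previous frame; the actual formula used by the codec may differ.

/* Assumed sketch of a spectral distance between consecutive frames
 * computed from their ISP coefficient vectors. The sum of absolute
 * differences below is a stand-in for the removed formula. */
#define LPC_ORDER 16

float spectral_distance(const float *isp_cur, const float *isp_prev)
{
    float sd = 0.0f;
    for (int i = 0; i < LPC_ORDER; i++) {
        float d = isp_cur[i] - isp_prev[i];
        sd += (d < 0.0f) ? -d : d;    /* accumulate |ISP_n(i) - ISP_(n-1)(i)| */
    }
    return sd;                        /* compared against TH1 by the caller  */
}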
Noise like sequences are coded by a second coding method, for
example, by the transform coding TCX. These sequences can be detected
by the LTP parameters and the average frequency of the frame in the
frequency domain. If the LTP parameters are very unstable and/or the
average frequency exceeds a predetermined threshold TH16, it is
determined in the method that the frame contains a noise like signal.
An example algorithm for the classifying process according to the
present invention is described below. The algorithm can be used in the
encoder 200, such as an encoder of the AMR-WB+ codec.
(Formula Removed)
The algorithm above contains some thresholds TH1-TH15 and
constants HIGH_LIMIT, LOW_LIMIT, Buflimit and NO_of_elements. In the
following some example values for the thresholds and constants are
shown but it is obvious that the values are non-limiting examples only.
(Formula Removed)
The meanings of the variables of the algorithm are as follows:
HIGH_LIMIT and LOW_LIMIT relate to the maximum and minimum
LTP lag values, respectively. LagDif_buf is the buffer containing the LTP lags
of the current and previous frames. Lag_n is one or more LTP lag values
of the current frame (two open loop lag values are calculated per frame
in the AMR-WB+ codec). Gain_n is one or more LTP gain values of the
current frame. NormCorr_n is one or more normalised correlation values
of the current frame. MaxEnergy_buf is the maximum value of the buffer
containing energy values of the current and previous frames. Iph_n indicates
the spectral tilt. vadFlag_old is the VAD flag of the previous frame and
vadFlag is the VAD flag of the current frame. NoMtcx is the flag
indicating to avoid the TCX transformation with a long frame length (e.g.
80 ms) if the second coding model TCX is selected. Mag is a discrete
Fourier transformed (DFT) spectral envelope created from the LP filter
coefficients, Ap, of the current frame, which can be calculated according
to the following program code:
(Formula Removed)
where DFT_N = 62, N_MAX = 1152 and LPC_N = 16. The vectors cos and
sin contain the values of the cosine and sine functions respectively. The
length of the vectors cos and sin is 1152. DFTSum is the sum of the first
NO_of_elements (e.g. 40) elements of the vector mag, excluding the
very first element (mag(0)) of the vector mag.
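The removed program code is not reproduced here; the following is an assumed reconstruction sketch based only on the description above, in which the magnitude spectrum of the LP coefficient vector Ap is evaluated at DFT_N frequency points using the cosine and sine tables, and DFTSum accumulates mag(1)..mag(NO_of_elements-1). The table stepping, the exact summation limits and the function name are assumptions made for the example.

/* Assumed sketch: evaluate the magnitude of the DFT of the LP
 * coefficient vector Ap (conceptually zero-padded to N_MAX samples)
 * at DFT_N points, then sum mag(1)..mag(NO_OF_ELEMENTS-1). */
#include <math.h>

#define DFT_N          62
#define N_MAX          1152
#define LPC_N          16
#define NO_OF_ELEMENTS 40

float dft_envelope_sum(const float Ap[LPC_N], float mag[DFT_N],
                       const float cos_tab[N_MAX], const float sin_tab[N_MAX])
{
    for (int i = 0; i < DFT_N; i++) {
        float re = 0.0f, im = 0.0f;
        for (int j = 0; j < LPC_N; j++) {
            int idx = (i * j) % N_MAX;    /* table index of exp(-j*2*pi*i*j/N_MAX) */
            re += Ap[j] * cos_tab[idx];
            im -= Ap[j] * sin_tab[idx];
        }
        mag[i] = sqrtf(re * re + im * im);
    }
    float dft_sum = 0.0f;
    for (int i = 1; i < NO_OF_ELEMENTS; i++)  /* skip mag(0) as described above */
        dft_sum += mag[i];
    return dft_sum;
}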
In the description above, AMR-WB extension (AMR-WB+) was used as
a practical example of an encoder. However, the invention is not limited
to AMR-WB codecs or ACELP- and TCX-excitation methods.
Although the invention was presented above by using two different
excitation methods it is possible to use more than two different
excitation methods and make the selection among them for
compressing audio signals.
Figure 6 depicts an example of a system in which the present invention
can be applied. The system comprises one or more audio sources 601
producing speech and/or non-speech audio signals. The audio signals
are converted into digital signals by an A/D-converter 602 when
necessary. The digitised signals are input to an encoder 200 of a
transmitting device 600 in which the compression is performed
according to the present invention. The compressed signals are also
quantised and encoded for transmission in the encoder 200 when
necessary. A transmitter 603, for example a transmitter of a mobile
communications device 600, transmits the compressed and encoded
signals to a communication network 604. The signals are received from
the communication network 604 by a receiver 605 of a receiving device
606. The received signals are transferred from the receiver 605 to a
decoder 607 for decoding, dequantisation and decompression. The
decoder 607 comprises detection means 608 to determine the
compression method used in the encoder 200 for a current frame. The
decoder 607 selects on the basis of the determination a first
decompression means 609 or a second decompression means 610 for
decompressing the current frame. The decompressed signals are
connected from the decompression means 609, 610 to a filter 611 and
a D/A converter 612 for converting the digital signal into an analogue signal.
The analogue signal can then be transformed into audio, for example, in a
loudspeaker 613.
The present invention can be implemented in different kinds of systems,
especially in low-rate transmission, for achieving more efficient
compression and/or improved audio quality for the reproduced
(decompressed/decoded) audio signal than in prior art systems,
especially in situations in which the audio signal includes both speech
like signals and non-speech like signals (e.g. mixed speech and
music). The encoder 200 according to the present invention can be
implemented in different parts of communication systems. For
example, the encoder 200 can be implemented in a mobile
communication device having limited processing capabilities.
The invention can also be implemented as a module 202, 203 which
can be connected with an encoder to analyse the parameters and to
control the selection of the excitation method for the encoder 200.
It is obvious that the present invention is not solely limited to the above
described embodiments but it can be modified within the scope of the
appended claims.







WE CLAIM:
1. An encoder (200) comprising an input (201 ) for inputting frames of an audio signal, a LTP analysis block (209) for performing a LTP analysis to the frames of the audio signal to form LTP parameters on the basis of the properties of the audio signal, and at least a first excitation block (206) for performing a first excitation for frames of the audio signal, and a second excitation block (207) for performing a second excitation for frames of the audio signal, characterised in that the encoder (200) further comprises a parameter analysis block (202) for analysing said LTP parameters, and an excitation selection block (203) for selecting one excitation block among said first excitation block (206) and said second excitation block (207) for performing the excitation for the frames of the audio signal on the basis of the parameter analysis.
2. The encoder (200) as claimed in claim 1 , wherein said parameter analysis block (202) further comprises means for calculating and analysing a normalised correlation at least on the basis of the LTP parameters.
3. The encoder (200) as claimed in claim 1 or 2, wherein said LTP parameters comprise at least lag and gain.
4. The encoder (200) as claimed in claim 1, 2 or 3, wherein said parameter analysis block (202) is arranged to examine at least one of the following properties on the audio signal:

- signal transients,
- noise like signals,
- stationary signals,
- periodic signals,
- stationary and periodic signals.
5. The encoder (200) as claimed in claim 4, wherein noise is arranged to be
determined on the basis of unstable LTP parameters and/or average frequency
exceeding a predetermined threshold.

6. The encoder (200) as claimed in claim 4, wherein stationary and periodic
signals are arranged to be determined on the basis of substantially high LTP gain
and substantially stable LTP lag and normalised correlation.
7. The encoder (200) as claimed in any of the claims 1 to 6, wherein said encoder (200) is an adaptive multi-rate wideband codec.
8. The encoder (200) as claimed in claim 7, wherein said LTP analysis block (209) is the LTP analysis block of the adaptive multi-rate wideband codec.
9. The encoder (200) as claimed in any of the claims 1 to 8, wherein said first excitation is Algebraic Code Excited Linear Prediction excitation (ACELP) and said second excitation is transform coded excitation (TCX).
10. A device (600) comprising an encoder (200) comprising an input (201 ) for
inputting frames of an audio signal, a LTP analysis block (209) for performing a
LTP analysis to the frames of the audio signal and for forming LTP parameters on
the basis of the properties of the audio signal, at least a first excitation block (206)
for performing a first excitation for frames of the audio signal, and a second
excitation block (207) for performing a second excitation for frames of the audio
signal, characterised in that the device (600) further comprises a parameter
analysis block (202) for analysing said LTP parameters, and an excitation
selection block (203) for selecting one excitation block among said first excitation
block (206) and said second excitation block (207) for performing the excitation for
the frames of the audio signal on the basis of the parameter analysis.
11. The device (200) as claimed in claim 10, wherein said parameter analysis block (202) further comprises means for calculating and analysing a normalised correlation at least on the basis of the LTP parameters.
12. The device (200) as claimed in claim 10 or 11, wherein said LTP parameters comprise at least lag and gain.

13. The device (200) as claimed in claim 10, 11 or 12, wherein said parameter
analysis block (202) is arranged to examine at least one of the following properties
on the audio signal:
- signal transients,
- noise like signals,
- stationary signals,
- periodic signals,
- stationary and periodic signals.

14. The device (200) as claimed in claim 13, wherein noise is arranged to be determined on the basis of unstable LTP parameters and/or average frequency exceeding a predetermined threshold.
15. The device (200) as claimed in claim 13, wherein stationary and periodic signals are arranged to be determined on the basis of substantially high LTP gain and substantially stable LTP lag and normalised correlation.
16. The device (200) as claimed in any of the claims 10 to 15, wherein said encoder (200) is an adaptive multi-rate wideband codec.
17. The device (200) as claimed in claim 16, wherein said LTP analysis block (209) is the LTP analysis block of the adaptive multi-rate wideband codec.
18. The device (200) as claimed in any of the claims 10 to 17, wherein said first excitation is Algebraic Code Excited Linear Prediction excitation (ACELP) and said second excitation is transform coded excitation (TCX).
19. A system comprising an encoder (200) comprising an input (201) for inputting frames of an audio signal, a LTP analysis block (209) for performing a LTP analysis to the frames of the audio signal and for forming LTP parameters on the basis of the properties of the audio signal, at least a first excitation block (206) for performing a first excitation for frames of the audio signal, and a second excitation block (207) for performing a second excitation for frames of the audio signal, characterised in that the system further comprises in said encoder (200) a parameter analysis block (202) for analysing said LTP parameters, and an excitation selection block (203) for selecting one excitation block among said first

excitation block (206) and said second excitation block (207) for performing the excitation for the frames of the audio signal on the basis of the parameter analysis.
20. The system as claimed in claim 19, wherein said parameter analysis block (202) further comprises means for calculating and analysing a normalised correlation at least on the basis of the LTP parameters.
21. The system as claimed in claim 19 or 20, wherein said LTP parameters comprise at least lag and gain.
22. The system as claimed in claim 19, 20 or 21 , wherein said parameter analysis block (202) is arranged to examine at least one of the following properties on the audio signal:

- signal transients,
- noise like signals,
- stationary signals,
- periodic signals,
- stationary and periodic signals.

23. The system as claimed in claim 22, wherein noise is arranged to be determined on the basis of unstable LTP parameters and/or average frequency exceeding a predetermined threshold.
24. The system as claimed in claim 22, wherein stationary and periodic signals are arranged to be determined on the basis of substantially high LTP gain and substantially stable LTP lag and normalised correlation.
25. The system as claimed in any of the claims 19 to 24, wherein said encoder (200) is an adaptive multi-rate wideband codec.
26. The system as claimed in claim 25, wherein said LTP analysis block (209) is the LTP analysis block of the adaptive multi-rate wideband codec.

27. The system as claimed in any of the claims 19 to 26, wherein said first excitation is Algebraic Code Excited Linear Prediction excitation (ACELP) and said second excitation is transform coded excitation (TCX).
28. A method for encoding an audio signal, in which an LTP analysis is performed to the frames of the audio signal for forming LTP parameters on the basis of the properties of the signal, and at least a first excitation and a second excitation are selectable to be performed for frames of the audio signal, characterised in that the method further comprises analysing said LTP parameters, and selecting one excitation among said first excitation method and said second excitation method for performing the excitation for the frames of the audio signal on the basis of the parameter analysis.
29. The method as claimed in claim 28, wherein a normalised correlation is calculated at least on the basis of the LTP parameters and the calculated normalised correlation is analysed.
30. The method as claimed in claim 28 or 29, wherein said LTP parameters comprise at least lag and gain.
31. The method as claimed in claim 28, 29 or 30, wherein at least one of the following properties on the audio signal is examined:

- signal transients,
- noise like signals,
- stationary signals,
- periodic signals,
- stationary and periodic signals.

32. The method as claimed in claim 31, wherein noise is determined on the basis of unstable LTP parameters and/or average frequency exceeding a predetermined threshold.
33. The method as claimed in claim 31, wherein stationary and periodic signals are determined on the basis of substantially high LTP gain and substantially stable LTP lag and normalised correlation.

34. The method as claimed in any of the claims 28 to 33, wherein said first excitation is Algebraic Code Excited Linear Prediction excitation (ACELP) and said second excitation is transform coded excitation (TCX).
35. A module comprising a LTP analysis block (209) for performing an LTP analysis to frames of an audio signal to form LTP parameters on the basis of the properties of the audio signal, characterised in that the module further comprises a parameter analysis block (202) for analysing said LTP parameters, and an excitation selection block (203) for selecting one excitation block among a first excitation block (206) and a second excitation block (207), and for indicating the selected excitation method to an encoder (200).
36. The device (200) as claimed in claim 35, wherein said parameter analysis block (202) further comprises means for calculating and analysing a normalised correlation at least on the basis of the LTP parameters.
37. The device (200) as claimed in claim 35 or 36, wherein said LTP parameters comprise at least lag and gain.
38. The device (200) as claimed in claim 35, 36 or 37, wherein said parameter analysis block (202) is arranged to examine at least one of the following properties on the audio signal:

- signal transients,
- noise like signals,
- stationary signals,
- periodic signals,
- stationary and periodic signals.

39. The device (200) as claimed in claim 38, wherein noise is arranged to be determined on the basis of unstable LTP parameters and/or average frequency exceeding a predetermined threshold.
40. The device (200) as claimed in claim 38, wherein stationary and periodic signals are arranged to be determined on the basis of substantially high LTP gain and substantially stable LTP lag and normalised correlation.

41. The device (200) as claimed in any of the claims 35 to 40, wherein said encoder (200) is an adaptive multi-rate wideband codec.
42. The device (200) as claimed in claim 41 , wherein said LTP analysis block (209) is the LTP analysis block of the adaptive multi-rate wideband codec.
43. The device (200) as claimed in any of the claims 35 to 42, wherein said first excitation is Algebraic Code Excited Linear Prediction excitation (ACELP) and said second excitation is transform coded excitation (TCX).

Patent Number 260899
Indian Patent Application Number 4819/DELNP/2006
PG Journal Number 22/2014
Publication Date 30-May-2014
Grant Date 28-May-2014
Date of Filing 23-Aug-2006
Name of Patentee NOKIA CORPORATION
Applicant Address KEILALAHDENTIE 4, FIN-02150 ESPOO, FINLAND
Inventors:
# Inventor's Name Inventor's Address
1 MAKINEN, JARI ETUNIITYNKATU 4 AS. 2, FI-33580 TAMPERE, FINLAND
PCT International Classification Number G10L 19/14
PCT International Application Number PCT/FI2005/050043
PCT International Filing date 2005-02-22
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 20045052 2004-02-23 Finland