Title of Invention

METHOD AND APPARATUS FOR MAKING A CODING MODE DECISION IN A VIDEO CODEC

Abstract
Methods and apparatus are presented for reducing the computational complexity of coding mode decisions by exploiting the correlations across spatially and/or temporally close coding mode decisions. A mode decision for a current macroblock is based on the mode decisions of spatially and/or temporally close macroblocks.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10, rule 13)



"METHODS AND APPARATUS FOR PERFORMING FAST MODE DECISIONS IN VIDEO CODECS"






QUALCOMM INCORPORATED, incorporated in the state of Delaware,
5775 Morehouse Drive, San Diego, California 92121-1714, U.S.A.

The following specification particularly describes the invention and the manner in which it is to be performed.

METHODS AND APPARATUS FOR PERFORMING FAST MODE DECISIONS IN VIDEO CODECS
BACKGROUND
Claim of Priority under 35 U.S.C. §119
[0001] The present Application for Patent claims priority to Provisional Application No. 60/552,156, entitled "Low-Complexity Intra/Inter Mode Decision for Video Coding," filed March 11, 2004, assigned to the assignee hereof, and hereby expressly incorporated by reference herein.
FIELD
[0002] This invention relates generally to the field of digital image
processing, and more specifically, to the field of video encoding and decoding.
BACKGROUND
[0003] The transmission of video over communication channels, either
wireless or wired, has become possible with developments that improve the
data capacity of communication channels. Moreover, various standards have
been established to facilitate the transmission and reception of video using
electronic devices with digital storage media, such as mobile telephones,
personal computers, personal digital assistants (PDAs), and other electronic
devices. Examples of some video standards that enable the transmission of video images over communication channels are Moving Pictures Expert Group-1 (MPEG-1), MPEG-2, and MPEG-4, promulgated by the International Organization for Standardization (ISO), and H.263 and H.264, promulgated by the International Telecommunication Union (ITU). Another standards-forming body of note is the Audio Video Coding Standard Working Group of China (AVS).
[0004] In order to provide such video services, the original images must
be compressed in a manner that will not exceed the data capacity of a communication channel. For example, in circuit-switched landline telephone systems, the communication channel is physically limited to 64 kbits/second. This bit rate is inadequate for the purpose of transmitting a video stream in its raw format with acceptable perceptual quality. At the same time, the
manner in which the compression is performed should not sacrifice the perceptual quality of images at a receiver.
[0005] In order to balance these two competing requirements, many
video encoders use a transform coding technique combined with a motion
compensation technique to compress the original video sequence. The
transform coding technique is used to remove spatial redundancy while the
motion compensation technique is used to remove temporal redundancy.
[0006] It is widely acknowledged by those of skill in the art that
compression of original images using transform coding and motion compensation techniques is computationally intensive. The number of instructions needed to perform the compression, as measured in MIPS (million instructions per second), is substantial and may consume hardware resources that could otherwise be allocated to other applications. Since the compression is often expected to be performed within small, portable electronic devices, hardware resources to perform these compression techniques may be limited. Hence, there is a present need to reduce the MIPS or hardware requirements of video encoders without unduly degrading the perceived quality of the video image.
SUMMARY
[0007] Methods and apparatus are presented herein to address the
above stated needs. In one aspect, a method is presented for making a coding mode decision for a current macroblock, the method comprising: evaluating a plurality of coding modes, each associated with a neighboring macroblock; and selecting the coding mode for the current macroblock based upon the evaluation of the plurality of coding modes.
[0008] In another aspect, apparatus is presented in a video codec for
performing a mode decision for a current macroblock, the apparatus comprising: at least one memory element; and at least one processing element communicatively coupled to the at least one memory element and configured to implement a set of instructions stored on the at least one memory element, the set of instructions for: evaluating a plurality of coding modes, each associated with a neighboring macroblock; and selecting the mode for the current macroblock based upon the evaluation of the plurality of coding modes.

DESCRIPTION OF THE DRAWINGS
[0009] FIGS. 1A & 1B are flowcharts of conventional video compression
schemes as used by a video encoder.
[0010] FIG. 2 is a block diagram of a conventional video encoder.
[0011] FIG. 3 is an example of a post-mode coding decision algorithm.
[0012] FIG. 4A is a block diagram of a pre-mode decision algorithm
embodiment.
[0013] FIG. 4B is a block diagram of a decision criterion that may be implemented in a pre-mode decision algorithm embodiment.
[0014] FIG. 5A is a block diagram illustrating a hybrid-mode decision algorithm embodiment.
[0015] FIG. 5B is a block diagram illustrating an interlacing pattern that may be implemented as a decision criterion in a hybrid-mode decision algorithm embodiment.
DETAILED DESCRIPTION
[0016] The newer generation of video compression standards exploits a
phenomenon of video in order to reduce the encoding complexity. Video is merely a series of still images, called frames, which run quickly and successively in time. Some frames of video exhibit spatial similarity and correlation, while some frames also exhibit temporal similarity with neighboring frames. Hence, most video compression standards perform different coding techniques for "Intra-coded frames," which are frames whose spatial redundancy is explored, and "Inter-coded frames," whose temporal redundancy is explored. Predictive coding is typically used for frames that contain either spatial or temporal redundancy. For illustrative ease, Intra-coded frames will be referred to herein as I-frames and Inter-coded frames will be referred to herein as P-frames. In order to encode I-frames and P-frames, a typical video codec will work upon macroblocks of an image frame, rather than the image frame in its entirety. Using standard size measurements
from the Quarter Common Intermediate Format (QCIF), a block comprises an
8x8 group of pixels and a macroblock comprises a 16x16 group of pixels. A QCIF frame of 176x144 pixels has 99 macroblocks. For illustrative ease, the
Intra-coded macroblocks will be referred to herein as "Intra-MB" and the macroblocks coded using motion compensation and temporal prediction will be referred to herein as "Inter-MB."
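As a quick check of the figures just quoted, the macroblock count follows directly from the frame and macroblock dimensions. The following minimal C sketch (illustrative only, not part of the specification) reproduces the arithmetic:

```c
#include <stdio.h>

/* QCIF arithmetic from the text: a 176x144 frame tiled by 16x16
   macroblocks gives an 11x9 grid, i.e., 99 macroblocks per frame. */
int main(void)
{
    const int width = 176, height = 144, mb = 16;
    printf("%d x %d = %d macroblocks\n",
           width / mb, height / mb, (width / mb) * (height / mb));
    return 0;
}
```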
[0017] FIG. 1A is a flowchart illustrating Intra-coding. At step 100, the
pixels within an Intra-MB undergo a transform coding. At step 110, the coefficients of the transform are then quantized. At step 120, the quantized coefficients are then losslessly encoded for transmission. Since the transform coding technique standardized in MPEG-4 is the Discrete Cosine Transform (DCT), the embodiments are described herein using the DCT. However, one of skill in the art would recognize that the embodiments are not limited to DCT, but can be utilized in video encoders using other transform coding techniques. The DCT is frequently chosen as the transform code for video coding standards since a high amount of energy can be packed in a relatively small number of coefficients.
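A minimal C sketch of steps 100 and 110 may make the pipeline concrete. It assumes a floating-point 8x8 DCT-II and a single uniform quantizer step `qstep`; actual MPEG-4 encoders use integer transforms and standardized quantization matrices, and follow this with the lossless entropy coding of step 120, which is omitted here:

```c
#include <math.h>

#define N 8

static const double PI = 3.14159265358979323846;

/* Step 100: 8x8 DCT-II of a pixel block; step 110: uniform scalar
   quantization of the resulting coefficients. */
void intra_transform_quantize(const double block[N][N], double qstep,
                              int coeff[N][N])
{
    for (int u = 0; u < N; u++) {
        for (int v = 0; v < N; v++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double sum = 0.0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    sum += block[x][y]
                         * cos((2 * x + 1) * u * PI / (2.0 * N))
                         * cos((2 * y + 1) * v * PI / (2.0 * N));
            coeff[u][v] = (int)lround(0.25 * cu * cv * sum / qstep);
        }
    }
}
```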
[0018] The decoding of Intra-MBs involves a reversal of the process in
FIG. 1A, in that received information is losslessly decoded, de-quantized, and then transformed using an inverse of the transform used at step 100.

[0019] The encoding process for Intra-MBs is relatively straightforward and not computationally intensive. Coding Intra-MBs requires a large number of bits, which requires a large amount of storage memory and transmission bandwidth. Hence, this encoding process consumes memory, rather than processor cycles. To encode the entire stream of video according to the method illustrated in FIG. 1A would be inefficient, since the transmission channel would be unable to carry the total number of bits required to convey multiple frames per second.
[0020] In contrast to I-frames, P-frames further explore and reduce temporal redundancy from frame to frame, which can be used along with spatial redundancy reduction to predictively reduce the number of bits that need to be stored in memory. In a video recording of low motion activity, the difference in pixels between one frame and the next is small if the motion between the two frames is compensated. Since there is little or no motion after motion
compensation, it is possible to use information about a previous and/or future
frame to predict what the current frame will show. Rather than encode and
transmit all the bits of the current frame, the residual of a prediction of what the
current frame may contain is encoded and transmitted, which reduces the
number of bits that need to be stored or transmitted. However, the encoding of
P-frames is computationally expensive due to the number of estimation
calculations that are needed to estimate the motion.
[0021] FIG. 1B is a flowchart illustrating the encoding of Inter-MBs. At
step 140, the translational motion between blocks of pixels within a P-frame is
determined using motion estimation techniques. The motion is usually represented by a motion vector. Groups of blocks, i.e., macroblocks, may be
compared in order to determine a plurality of motion vectors for each P-frame.
Note that the search for motion vectors is computationally expensive since a
search for an optimum motion vector is performed for each block. At step 150,
the motion vectors are used to predict a motion compensated macroblock. At
step 160, the motion compensated macroblock is subtracted from a current macroblock to form a residual macroblock. At step 170, the residual macroblock undergoes a transformation, the coefficients of the transformed residual macroblock are quantized, and then losslessly encoded. Since the
residual macroblock carries less information than the macroblocks of the
original P-frame, there is a reduction in the number of bits that need to be
transmitted to a receiving party.
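The following C sketch illustrates steps 140 through 160 under stated assumptions: a full search over a +/-`range` window, the SAD metric described later in this specification, and frame-boundary clamping omitted for brevity. The type and function names are illustrative, not taken from any standard:

```c
#include <stdlib.h>
#include <limits.h>

typedef struct { int dx, dy; } MotionVector;

/* Sum of Absolute Differences between a current 16x16 macroblock and
   a candidate reference macroblock (both in frames of width stride). */
static int sad_16x16(const unsigned char *cur, const unsigned char *ref,
                     int stride)
{
    int sad = 0;
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            sad += abs(cur[y * stride + x] - ref[y * stride + x]);
    return sad;
}

/* Step 140: exhaustive search for the motion vector minimizing SAD.
   Steps 150-160 then build the motion-compensated MB at the returned
   offset and subtract it from the current MB to form the residual. */
MotionVector full_search(const unsigned char *cur, const unsigned char *ref,
                         int stride, int range)
{
    MotionVector best = {0, 0};
    int best_sad = INT_MAX;
    for (int dy = -range; dy <= range; dy++)
        for (int dx = -range; dx <= range; dx++) {
            int sad = sad_16x16(cur, ref + dy * stride + dx, stride);
            if (sad < best_sad) {
                best_sad = sad;
                best.dx = dx;
                best.dy = dy;
            }
        }
    return best;
}
```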
[0022] Typically, a video codec will select every Nth frame to encode as an I-frame and the rest to be encoded as P-frames. This duration between I-frames is referred to as an "Intra-period." The presence of an I-frame acts as a reference to refresh the P-frames. Within the designated P-frames, the video codec will also occasionally select certain macroblocks to be Intra-coded, which are not encoded using temporal prediction.
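A minimal sketch of the intra-period rule, assuming frames are numbered from zero:

```c
/* Every Nth frame is refreshed as an I-frame; the frames in between
   are coded predictively as P-frames. */
int is_intra_frame(int frame_num, int intra_period_n)
{
    return (frame_num % intra_period_n) == 0;
}
```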
[0023] FIG. 2 is a block diagram of the encoding performed by a conventional video codec in an electronic device. The signal path through the
codec depends upon whether the input image signal is an l-frame or a P-frame,
or alternatively, whether a macroblock within a P-frame is an Intra-MB or an
Inter-MB. For illustrative ease, the encoding of a P-frame will be described hereafter using the terminology of Intra-MBs and Inter-MBs. If the input image
signal is an Intra-MB, then a switch 200 establishes a signal path through the DCT block 202, the quantizer block 204, and then the lossless coding block
206. The signal leaves the video codec for further processing within the
electronic device. An example of further processing is the coding at the
bitstream encoding block 208, which encodes the signal in an appropriate
transmission format for the transmission medium.
[0024] Although the Intra-MB coding is finished at block 204, the Intra-MB needs to be used as a reference MB for the Inter-MB coding. Hence, the quantization values that exit the quantizer block 204 also enter a decoding portion 210 of the video codec. The decoding portion 210 comprises a de-quantizer block 212 and an inverse-DCT block 213. The quantization values pass through the de-quantizer block 212 and then the inverse-DCT block 213 to reconstruct the Intra-MB, which is used to refresh the buffer 222 that is accessed by the motion compensation block 230 and the motion estimation block 232. The path through the DCT block 202, the quantizer block 204, and then the lossless coding block 206 is also applicable for encoding the residual MBs generated by the Inter-MB encoding.
[0025] If the input image signal is an Inter-MB, then the switch 200
establishes a new signal path, which includes the motion compensation block 230 and the motion estimation block 232. The motion estimation block 232 receives the current Inter-MB and a set of stored reference MBs from the buffer 222 and performs a search through a plurality of motion vectors for the motion vector that best describes the motion between the current Inter-MB and the reference MB. Note that the reference MB is the stored reconstructed pixels of previous or future MBs that were output from the decoding portion 210 of the video codec. The motion vector is then input into the motion compensation block 230.
[0026] The motion compensation block 230 receives the motion vector
from the motion estimation block 232 and the reference MB from the buffer 222 in order to generate a new predictive MB from the reference MB, i.e., a predictive version of the current Inter-MB. When Inter-MB coding takes place, the reference MB should be updated. Hence, the switch 240 is "on," so that the predictive MB is added by summing element 220 to the decoded residual MB that is output from the decoding portion 210. The result is stored as a new reference MB in buffer 222.
[0027] The predictive MB from the motion compensation block 230 is
subtracted from the current Inter-MB by subtraction element 224 in order to form a residual MB. The residual MB is then processed as described above for Intra-MB encoding. The quantized, transformed residual MB is further passed through the decoding portion 210 so that the residual MB may be used by the video codec to update the reference MB stored in the buffer 222, which in turn may be accessed by the motion compensation block 230 and motion estimation block 232 for encoding MBs of future or past frames. Note that predictive coding techniques may be bi-directional, in that past information may be used to predict the current frame or future information may be used to predict the current frame.
[0028] The encoding of the Intra-MBs may also use spatial prediction.
The encoding of the Inter-MBs employs temporal prediction. The problem with the conventional video codec design of FIG. 2 is that non-predictive coding
consumes too many memory resources and predictive coding consumes too
many processing resources. If one of skill in the art decided that lowering the
MIPS requirement is needed to perform other tasks, then increasing the number
of l-frames or Intra-MBs is a possible solution, which is implemented by
decreasing the intra-period value N. However, this solution is flawed because
the demand for memory resources would correspondingly increase. Moreover,
the overall transmission rate of the video images over a communication channel
would increase since more bits are needed to convey spatial information as
compared to predictive information. For wireless applications, such as video
streaming over cellular phones, the increased transmission bits could result in a
degraded synthesized image signal if the wireless or landline communication
channel cannot accommodate the increased transmission bit rate.
[0029] Conversely, if one of skill in the art decided that memory
resources were limited, then one solution is to perform more temporally predictive encoding, which is implemented by increasing the intra-period value N. However, temporally predictive encoding requires more cycles from the processing elements which, if loaded past a maximum threshold, will drop frame processing tasks for tasks with higher priorities. Dropped frames would then
degrade the quality of the synthesized image. In addition, the quality of the
synthesized signal would also degrade whenever high speed activities occur
within the images because the use of too many predictively encoded frames
could cause a failure in the motion estimation ability of the video codec.
[0030] Hence, an important consideration in the operation of video codecs is the design for determining whether a P-frame MB should be coded as an Intra-MB or an Inter-MB, in addition to making a best prediction mode determination between macroblock size choices. In H.264, for example, nine coding modes for the 4x4 block size and four coding modes for the 16x16 MB exist for Intra-coding. For Inter-MBs, an important consideration in some advanced codecs, such as H.264, is also to make a best prediction mode determination between MB size choices. H.264, for example, supports four coding types, including block sizes of 16x16, 16x8, 8x16, and 8x8 for Inter-MBs.
[0031] Traditionally, video codecs made coding mode decisions based upon measurements that are performed upon each MB. Predictive coding involving motion estimation and spatial estimation is very computationally intensive in video codecs because it employs exhaustive searches through multiple coding modes before an optimal mode is selected that achieves the best compression efficiency.
[0032] FIG. 3 is an example of a coding mode decision algorithm. At step
300, extensive computations are performed to determine a distortion metric or quality measure for each permissible coding mode for each macroblock within a frame by a Motion Estimation/Spatial Estimation (ME/SE) engine. At step 310, the best coding mode is selected for the MB based on the distortion metrics/quality measures of the permissible coding modes. One such quality measure is the Sum of Absolute Difference (SAD) value, which is a distortion metric based upon the absolute difference between a current MB and a MB in a previous frame. Another measure can be the number of bits spent on coding motion vectors and residuals in order to find the coding mode that yields minimum cost.
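The selection at step 310 amounts to a minimum-cost scan over the permissible modes. A minimal sketch follows, with `mode_cost` standing in as a hypothetical callback for the ME/SE engine of step 300 (the names and signature are assumptions, not part of this specification):

```c
#include <limits.h>

typedef int (*ModeCostFn)(int mode, const void *mb_ctx);

/* Post-mode decision of FIG. 3: evaluate a distortion metric or
   quality measure (e.g., SAD, or bits spent on vectors and residuals)
   for every permissible mode and keep the minimum-cost one. */
int post_mode_decision(const int *modes, int num_modes,
                       ModeCostFn mode_cost, const void *mb_ctx)
{
    int best_mode = modes[0];
    int best_cost = INT_MAX;
    for (int i = 0; i < num_modes; i++) {
        int cost = mode_cost(modes[i], mb_ctx);
        if (cost < best_cost) {
            best_cost = cost;
            best_mode = modes[i];
        }
    }
    return best_mode;
}
```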
[0033] If the MB is an Intra-MB, then the program flow proceeds to step
320. In H.264, for example, there are nine coding modes for a 4x4 Intra-MB and four coding modes for a 16x16 Intra-MB. For a 4x4 Intra-MB, the nine coding modes are vertical prediction, horizontal prediction, DC prediction,
diagonal-down-left prediction, diagonal-down-right prediction, vertical-right
prediction, horizontal-down prediction, vertical-left prediction, and horizontal-up
prediction. For a 16x16 Intra-MB, the four coding modes are vertical prediction,
horizontal prediction, DC prediction, and plane prediction.
[0034] If the MB is determined to be an Inter-MB at step 310, then the program flow proceeds to step 330, whereupon the Inter-MB is predictively encoded. In H.264, there are four coding types associated with block sizes of 16x16, 16x8, 8x16, and 8x8.
[0035] After either step 320 or step 330, the encoded MB is losslessly encoded in a format appropriate for transmission.
[0036] Note that at step 310, the coding mode decision is made after the
motion estimation (ME) and/or the spatial estimation (SE) searches. This
coding mode decision is implemented by searching through all the possible
coding modes exhaustively, and a selection is made after all searches. For this
reason, the decision algorithm is categorized as a "post-mode" decision
algorithm. Since the post-mode decision algorithm requires ME and/or the SE
searches, there is a significant amount of hardware or Digital Signal Processor
(DSP) resources that must be consumed in order to perform the algorithm.
[0037] The embodiments that are presented herein are for reducing the
computational complexity for performing a coding mode decision by reducing
the dependence of the coding mode decision on exhaustive SE and ME
searches. In one embodiment, a method and apparatus for pre-determining the
prediction modes for a proportion of the MBs is presented. This embodiment
may be referred to as the pre-mode decision algorithm. In one aspect of this
embodiment, past and/or currently available prediction modes are analyzed to
determine a prediction mode for a current MB. In another embodiment, the
method and apparatus for pre-determining the prediction modes are combined
with a post-mode decision to create a hybrid mode decision algorithm.
[0038] The underlying basis for spatial estimation and motion estimation
is the strong spatial correlation within a particular frame and the strong temporal and spatial correlation between successive frames. The embodiments herein are based on the premise that the best prediction mode for a current MB, as determined by the aforementioned post-mode decision algorithm, will also be strongly correlated to the best prediction modes of other MBs. For example, a
relatively flat area in a frame may cause a post-mode decision algorithm to
designate a group of MBs as Inter-MBs of dimension 16x16, another group of
MBs to be designated as Inter-MBs of dimension 8x8, and another group of
MBs to be designated as Intra-MBs of dimension 4x4. Hence, a MB tends to
attract the same coding mode designation as other MBs that are
spatially/temporally close.
[0039] FIG. 4A is a block diagram illustrating an embodiment of a pre-
mode decision algorithm. At step 400, the video codec evaluates the coding
modes of a selection of spatially and/or temporally close MBs, i.e., neighboring
MBs, from either a past and/or current frame. At step 410, the codec uses a
decision criterion to determine the coding mode of the current MB.
[0040] FIG. 4B is a block diagram of a simple decision criterion that may be implemented in an embodiment of the pre-mode decision algorithm. The already formulated mode decisions 420a, 420b, ..., 420n for neighboring MBs, from either a past and/or current frame, are input to a logic that performs a simple majority vote selection 430. The mode decision for the current MB is based upon the outcome of the majority vote of the already formulated mode decisions 420a, 420b, ..., 420n.
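A minimal sketch of the majority vote selection logic 430, assuming coding modes are represented as small non-negative integers (an illustrative encoding; ties here resolve to the earliest-seen mode):

```c
#define MAX_MODES 16  /* illustrative upper bound on mode labels */

/* Majority vote selection 430 of FIG. 4B: each neighboring MB's
   already formulated mode decision casts one vote, and the most
   frequent mode becomes the decision for the current MB. */
int majority_vote(const int *neighbor_modes, int n)
{
    int votes[MAX_MODES] = {0};
    int best_mode = neighbor_modes[0];  /* assumes n >= 1 */
    for (int i = 0; i < n; i++) {
        int m = neighbor_modes[i];
        if (++votes[m] > votes[best_mode])
            best_mode = m;
    }
    return best_mode;
}
```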
[0041] In one aspect of the embodiment, the selection of the already
formulated mode decisions 420a, 420b, ... , 420n may be performed using an
interlaced pattern, so that mode decisions from a past frame may be used with
mode decisions from the current frame as candidates for the majority vote
selection logic 430. In other words, as an illustrative example, if the MB is
located at position (x,y) in a frame T, then mode decisions of MBs at positions
(x-1, y), (x+1, y), (x, y-1), and (x, y+1) from frame T, and mode decisions of MBs
at positions (x-1, y-1), (x+1, y-1), (x-1, y+1), and (x+1, y+1) from frame T-1
could be selected as input into the majority vote selection logic.
[0042] In another aspect of the embodiment, an adaptive interlaced
pattern, rather than a fixed interlaced pattern may be used to select candidate
MBs. Moreover, different interlacing patterns may be used whenever certain
criteria are met. For example, only MBs that meet a certain confidence level or
exceed certain threshold(s) are used for the pre-mode decision. Such MBs are
not necessarily located in a fixed pattern.

[0043] In another aspect, the majority vote selection logic may take the
modes from all candidate MBs as input and weight each mode according to a weighting factor. For example, the inverse of the spatial and/or temporal distance of a candidate MB from the current MB may be used as a weighting factor to weigh a mode in the majority vote selection.
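A sketch of this weighted variant, using the inverse of a combined spatial/temporal distance as the weighting factor; the `Candidate` structure and the Euclidean distance measure are illustrative assumptions:

```c
#include <math.h>

#define MAX_MODES 16  /* illustrative upper bound on mode labels */

typedef struct {
    int mode;    /* already formulated mode decision        */
    int dx, dy;  /* spatial offset (in MBs) from current MB */
    int dt;      /* temporal distance in frames             */
} Candidate;

/* Weighted majority vote: each candidate's mode is weighted by the
   inverse of its spatial/temporal distance from the current MB, so
   nearer neighbors influence the decision more strongly. */
int weighted_vote(const Candidate *cand, int n)
{
    double votes[MAX_MODES] = {0.0};
    int best_mode = cand[0].mode;  /* assumes n >= 1 */
    for (int i = 0; i < n; i++) {
        double d = sqrt((double)(cand[i].dx * cand[i].dx +
                                 cand[i].dy * cand[i].dy +
                                 cand[i].dt * cand[i].dt));
        votes[cand[i].mode] += 1.0 / (d > 0.0 ? d : 1.0);
        if (votes[cand[i].mode] > votes[best_mode])
            best_mode = cand[i].mode;
    }
    return best_mode;
}
```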
[0044] The pre-mode decision embodiment described above may be improved upon in order to prevent inaccurate mode decisions from propagating.
The pre-mode decision algorithm may be combined with a post-mode decision
algorithm to create a hybrid-mode decision embodiment.
[0045] FIG. 5A is a block diagram illustrating an embodiment of a hybrid-
mode decision algorithm.
[0046] At step 500, a pre-mode decision is performed for select MBs.
The pre-mode decision process is described above for FIGS. 4A and 4B. The
methodology for selecting which MBs undergo the pre-mode decision process is
described further in relation to FIG. 5B. At step 510, the ME/SE engine
performs both exhaustive ME and SE searches for MBs for which pre-mode
decisions were not made. At step 520, the best coding mode is selected for the
MBs that underwent the motion and spatial estimation searches.
[0047] If the MB is an Intra-MB, then the program flow proceeds to step
530.
[0048] If the MB is determined to be an Inter-MB, then the program flow
proceeds to step 540, whereupon the Inter-MB is predictively encoded.
[0049] After either step 530 or step 540, the encoded MB is losslessly
encoded at step 550 in a format appropriate for transmission.
[0050] FIG. 5B is a diagram that shows how an interlaced pattern may be
applied to the hybrid mode decision embodiment of FIG. 5A to determine
whether the coding mode of a MB will be determined using a pre-mode decision
process or a post-mode decision process.
[0051] FIG. 5B is an example of an interlaced pattern where the current
MB (marked with a dotted X) is in a position where a pre-mode decision process will be made. The pre-mode decision will be based on the mode decisions already made for the MBs on the shaded positions. In this instance, the pre-mode decision will be based on three candidates that were determined using the post-mode decision process for the current frame T and a previous frame T-1. Hence, interlaced patterns may be used to determine whether a pre-mode or
a post-mode decision will be made, and the patterns may also be used to
determine which candidates to use in the pre-mode decision process.
[0052] The interlaced pattern specifically illustrated in FIG. 5B is only one
example of the interlaced patterns that may be used in the hybrid mode decision embodiment. The interlaced patterns discussed under the pre-mode decision embodiment may be applied for this embodiment. An adaptive or a fixed interlaced pattern may be used to select MBs to undergo pre-mode decisions rather than post-mode decisions. Moreover, different interlacing patterns may be used whenever certain criteria are met. Note that the use of interlaced patterns of any sort allows the codec to control the number of pre-mode decisions made as opposed to the number of post-mode decisions. If the processing resources on the electronic device housing the codec are low, for example, the codec may reduce the number of post-mode decisions, which correspondingly reduces the number of exhaustive computational searches for the best coding modes. The use of an interlacing pattern that requires fewer post-mode decisions would be useful in this instance.
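One concrete and deliberately simple fixed interlacing rule is a checkerboard whose phase alternates each frame, so roughly half of the MBs take the cheap pre-mode path and every position is periodically refreshed by a post-mode decision. This is an illustrative assumption, not the specific pattern of FIG. 5B; biasing the rule toward more pre-mode positions would further reduce the number of exhaustive searches:

```c
/* Checkerboard interlacing: MBs on one lattice parity take the
   pre-mode decision, the rest take the exhaustive post-mode decision.
   Adding frame_num alternates the parity from frame to frame. */
int use_pre_mode_decision(int mb_x, int mb_y, int frame_num)
{
    return ((mb_x + mb_y + frame_num) & 1) == 0;
}
```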
[0053] In another aspect of the hybrid mode embodiment, the decision as
to whether a MB undergoes a pre-mode or post-mode decision may be based
upon input from a Feature Extraction Unit (Block 570 of FIG. 5A) or upon Network Feedback (also shown in FIG. 5A). For example, the embodiments may be implemented to accommodate video streams with differing image sizes, bit rates and/or frame rates. In yet another aspect, the embodiments may be implemented to accommodate variable transmission channels that are prone to channel errors. In yet another aspect, the embodiments may be implemented to accommodate a user-defined quality measure. In yet another aspect, the embodiments may be implemented to accommodate a lack of hardware resources. As indicated herein, the embodiments may be used to accommodate many different needs that may originate from different portions of the electronic device that houses the video codec. Configuration signals may originate at any portion of the electronic device or alternatively, the configuration signals may originate from a network that is accessed by the electronic device.

[0054] Different embodiments are possible wherein the hybrid mode decision may occur at different levels of the encoding process. In particular, the pre-mode decision may be performed at three different levels. In one embodiment, the pre-mode decision only determines whether a MB should be either intra- or inter-coded, and then leaves the submode decision to further searches by the ME/SE engine and the post-mode decision process. In H.264, a submode may refer to one of the nine 4x4 Intra-coding modes, one of the four 16x16 Intra-coding modes, or one of the 16x16, 16x8, 8x16, and 8x8 Inter-coding modes. In another embodiment, the pre-mode decision first determines one of the following modes for the current MB: intra-4x4, intra-16x16, inter-16x16, inter-16x8, inter-8x16, and inter-8x8, so that the post-mode decision process is only implemented upon the possible submodes of the selected mode. In another embodiment, the pre-mode decision first determines a particular submode, such as, for example, intra-4x4 vertical prediction, so that further SE or ME searches need only be performed for one particular mode that is already pre-determined.
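The three decision levels can be pictured as successively finer enumerations. The following C enums sketch an H.264-style partition consistent with the modes named above; the identifiers themselves are illustrative assumptions:

```c
/* Level 1: intra vs. inter only; submode left to the ME/SE engine. */
typedef enum { DECIDE_INTRA, DECIDE_INTER } Level1Mode;

/* Level 2: one of six mode families; the post-mode search then covers
   only the submodes of the chosen family. */
typedef enum {
    MODE_INTRA_4x4,   MODE_INTRA_16x16,
    MODE_INTER_16x16, MODE_INTER_16x8,
    MODE_INTER_8x16,  MODE_INTER_8x8
} Level2Mode;

/* Level 3: a single fully specified submode, e.g., the nine intra-4x4
   prediction directions listed in paragraph [0033]. */
typedef enum {
    I4_VERTICAL, I4_HORIZONTAL, I4_DC,
    I4_DIAG_DOWN_LEFT, I4_DIAG_DOWN_RIGHT,
    I4_VERTICAL_RIGHT, I4_HORIZONTAL_DOWN,
    I4_VERTICAL_LEFT, I4_HORIZONTAL_UP
} Intra4x4Submode;
```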
[0055] The embodiments that are described herein are for reducing the
computational complexity of the coding process for P-frames and/or I-frames and/or bi-directional frames (B-frames). Reducing the number of exhaustive SE and/or ME searches by exploiting the correlation between coding mode decisions will allow a reduction in the computational complexity (DSP MIPS or hardware power) requirements for the video coding without unduly degrading the perceived quality of the video.
[0056] Hardware, such as a digital signal processor or other processing
element and memory elements may be configured to execute instructions for performing the method steps described above. Such hardware may be easily implemented in any of the currently existing video codecs compliant with the standards of MPEG, ITU-T H.26x, or AVS.
[0057] Those of skill in the art would appreciate that the various
illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above
generally in terms of their functionality. Whether such functionality is
implemented as hardware or software depends upon the particular application
and design constraints imposed on the overall system. Skilled artisans may
implement the described functionality in varying ways for each particular
application, but such implementation decisions should not be interpreted as
causing a departure from the scope of the present invention.
[0058] The various illustrative logical blocks, modules, and circuits
described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0059] The steps of a method or algorithm described in connection with
the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
[0060] The previous description of the disclosed embodiments is
provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent
to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
WHAT IS CLAIMED IS:

CLAIMS
1. Method for making a coding mode decision for a current macroblock,
comprising:
evaluating a plurality of coding modes, each associated with a neighboring macroblock; and
selecting the coding mode for the current macroblock based upon the evaluation of the plurality of coding modes.
2. The method of Claim 1, wherein evaluating the plurality of coding modes
comprises:
selecting a pattern of macroblock positions; and
selecting the plurality of coding modes in accordance with the pattern of
macroblock positions.
3. The method of Claim 1, wherein evaluating the plurality of coding modes
comprises:
selecting the plurality of coding modes in accordance with a configuration signal originating from a video codec.
4. The method of Claim 1, wherein evaluating the plurality of coding modes
comprises:
selecting the plurality of coding modes in accordance with a configuration signal originating from an electronic device housing a video codec.
5. The method of Claim 1, wherein evaluating the plurality of coding modes
comprises:
selecting the plurality of coding modes in accordance with a configuration signal originating from a network resource.
6. The method of Claim 2, wherein selecting the pattern of macroblock positions comprises:
selecting macroblock positions wherein coding mode decisions were made using a post-mode selection process.


7. The method of Claim 2, wherein selecting the pattern of macroblock positions comprises selecting an interlaced pattern of macroblock positions.

8. The method of Claim 2, wherein selecting the pattern of macroblock positions comprises selecting an adaptive pattern of macroblock positions.
9. The method of Claim 1, wherein selecting the coding mode for the current macroblock comprises:
using a majority rule selection criterion to select the coding mode for the current macroblock.
10. The method of Claim 9, wherein using a majority rule selection criterion to select the coding mode for the current macroblock comprises:
using weights with the majority rule selection criterion.
11. The method of Claim 1, further comprising:
deciding whether to make the mode decision according to a pre-mode decision process or a post-mode decision process;
if the pre-mode decision process is selected, then evaluating the plurality of coding modes and selecting the coding mode for the current macroblock based upon the evaluation of the plurality of coding modes; and
if the post-mode decision process is selected, then extracting a quality measure from the current macroblock and selecting the coding mode based upon the quality measure.
12. The method of Claim 1, further comprising:
extracting features from a video frame; and
selecting the coding mode for the current macroblock based upon the evaluation of the plurality of coding modes and the features from the video frame.

13. Apparatus in a video codec for performing a coding mode decision for a
current macroblock, comprising:
at least one memory element; and
at least one processing element communicatively coupled to the at least one memory element and configured to implement a set of instructions stored on the at least one memory element, the set of instructions for:
evaluating a plurality of coding modes, each associated with a
neighboring macroblock; and
selecting the coding mode for the current macroblock based upon the
evaluation of the plurality of coding modes.
14. The apparatus of Claim 13, wherein the at least one processing element
comprises a voting logic for selecting the coding mode.
15. The apparatus of Claim 13, wherein the video codec is an MPEG-compliant codec.
16. The apparatus of Claim 13, wherein the video codec is an ITU-T H.26x series-compliant codec.
17. The apparatus of Claim 13, wherein the video codec is an AVS-compliant codec.
18. The apparatus of Claim 13, wherein the video codec is a hybrid codec that uses temporal and spatial prediction and uses both Intra and Inter modes.
19. The apparatus of Claim 13, wherein the at least one processing element is further configured to execute a set of instructions for:
selecting an interlaced pattern of macroblock positions; and
selecting the plurality of coding modes in accordance with the interlaced pattern
of macroblock positions.
20. The apparatus of Claim 13, wherein the at least one processing element
is further configured to execute a set of instructions for:
selecting an adaptive pattern of macroblock positions; and
selecting the plurality of coding modes in accordance with the adaptive pattern
of macroblock positions.

21. The apparatus of Claim 13, wherein the at least one processing element is further configured to execute a set of instructions for:
selecting the plurality of coding modes in accordance with a configuration signal originating from a video codec.

22. The apparatus of Claim 13, wherein the at least one processing element is further configured to execute a set of instructions for:
selecting the plurality of coding modes in accordance with a configuration signal originating from an electronic device housing a video codec.
23. The apparatus of Claim 13, wherein the at least one processing element is further configured to execute a set of instructions for:
selecting the plurality of coding modes in accordance with a configuration signal originating from a network resource.
24. The apparatus of Claim 13, wherein the at least one processing element is further configured to execute a set of instructions for:
selecting a plurality of coding modes associated with macroblock positions wherein past mode decisions were made using a post-mode selection process, whereupon the selected plurality of coding modes are used for evaluating.
25. The apparatus of Claim 13, wherein the at least one processing element
is further configured to execute a set of instructions for:
using a majority rule selection criterion to select the coding mode for the current
macroblock.

26. The apparatus of Claim 13, wherein the at least one processing element
is further configured to execute a set of instructions for:
deciding whether to make the mode decision according to a pre-mode decision
process or a post-mode decision process;
if the pre-mode decision process is selected, then evaluating the plurality of coding modes and selecting the coding mode for the current macroblock based upon the evaluation of the plurality of coding modes; and
if the post-mode decision process is selected, then extracting a quality measure from the current macroblock and making the coding mode decision based upon the quality measure.
27. The apparatus of Claim 13, wherein the at least one processing element is further configured to execute a set of instructions for:
extracting features from a video frame; and
selecting the coding mode for the current macroblock based upon the evaluation of the plurality of coding modes and the features from the video frame.
28. A method for making a coding mode decision for a current macroblock and an apparatus in a video codec for performing a coding mode decision for a current macroblock are substantially as herein described with reference to the accompanying drawings.


Dated this 3rd day of October, 2006


ABSTRACT
Methods and apparatus are presented for reducing the computational complexity of coding mode decisions by exploiting the correlations across spatially and/or temporally close coding mode decisions. A mode decision for a current macroblock is based on the mode decisions of spatially and/or temporally close macroblocks.



Patent Number 229599
Indian Patent Application Number 1193/MUMNP/2006
PG Journal Number 13/2009
Publication Date 27-Mar-2009
Grant Date 18-Feb-2009
Date of Filing 06-Oct-2006
Name of Patentee QUALCOMM INCORPORATED
Applicant Address 5775 Morehouse Drive, San Diego, California 92121-1714, U.S.A.
Inventors:
# Inventor's Name Inventor's Address
1 LIANG, YI 8840 Costa Verde Boulevard, #3321, San Diego, California 92122
2 EL-MALEH, Khaled Helmi 7675 Palmilla Drive, #6314, San Diego, California 92122.
PCT International Classification Number H04N 7/26
PCT International Application Number PCT/US2005/008044
PCT International Filing date 2005-03-09
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 10/957,512 2004-09-30 U.S.A.
2 60/552,156 2004-03-11 U.S.A.