Title of Invention | FLEXIBLE ANTIALIASING IN EMBEDDED DEVICES |
---|---|
Abstract | N/A |
Full Text | FORM 2 THE PATENTS ACT, 1970 (39 of 1970) & THE PATENTS RULES, 2003 COMPLETE SPECIFICATION (See section 10, rule 13) "FLEXIBLE ANTIALIASING IN EMBEDDED DEVICES" QUALCOMM INCORPORATED, a company incorporated in the state of Delaware, United States of America, of 5775 Morehouse Drive, San Diego, California 92121-1714, U.S.A. The following specification particularly describes the invention and the manner in which it is to be performed. FLEXIBLE ANTIALIASING IN EMBEDDED DEVICES BACKGROUND OF THE INVENTION [0001] The present invention relates to application level control of hardware functionality in embedded devices. The hardware functionality involves antialiasing of three dimensional (3D) images processed by a 3D graphics pipeline within such a device. In certain respects, the present invention relates to mobile phones with such hardware functionality. [0002] Many types of embedded devices are provided with 3D graphics pipelines that process 3D images of scenes. A given scene is composed of a collection of rendering objects (e.g., triangles). Such 3D pipelines may perform antialiasing on the image. Antialiasing involves first oversampling the image, resulting in an enhanced amount of information represented by a now more abundant set of (oversampled) pixels. The Quincunx scheme, Full-Scene Antialiasing (FSAA), the accumulation buffer, and Carpenter's A-buffer (sometimes called multisampling) are a few examples of techniques for carrying out antialiasing oversampling or enhanced sampling of a given image. [0003] The final image is frequently rendered at the lower pre-oversampled resolution, in which case the antialiasing process is completed by weighting (e.g., averaging) the greater set of samples to produce the reduced set. BRIEF SUMMARY OF THE INVENTION [0004] In accordance with one embodiment, a three-dimensional (3D) graphics pipeline renders a sequence of images of 3D scenes each composed of a plural set of objects. The pipeline comprises an antialiasing oversampling mechanism to perform for a given image, at an early stage of the pipeline, oversampling on a portion of the objects of the given image. In accordance with another embodiment, the pipeline comprises an antialiasing oversampling mechanism and an antialiasing weighting mechanism. The antialiasing oversampling mechanism performs for a given image, at an early stage of the pipeline, antialiasing oversampling on at least a portion of the objects of the given image. The antialiasing weighting mechanism performs on the given image, at the early stage of the pipeline, antialiasing weighting on the portion of the given image oversampled by the antialiasing oversampling mechanism. BRIEF DESCRIPTION OF THE DRAWINGS [0005] The present invention is further described in the detailed description which follows, by reference to the noted drawings by way of non-limiting exemplary embodiments, in which like reference numerals represent similar parts throughout the several views of the drawings, and wherein: [0006] Fig. 1 is a block diagram of a mobile device in accordance with one embodiment of the present invention; [0007] Fig. 2 is a block diagram of those mobile device entities pertaining to object antialiasing; [0008] Fig. 3 is a block diagram of a 3D graphics pipeline of the mobile device illustrated in Fig. 1; and [0009] Fig. 4 is a block diagram of a shading portion of the illustrated 3D graphics pipeline.
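The following minimal C sketch illustrates the oversample-then-weight flow described in paragraphs [0002] and [0003]: the image is first sampled on a denser grid, and the enhanced sample set is then averaged back down to the pre-oversampled resolution. The 2x2 sample grid, the box-filter averaging, and all identifiers are illustrative assumptions and are not taken from the specification.

```c
/*
 * Minimal sketch of the oversample-then-weight antialiasing flow of
 * paragraphs [0002]-[0003].  The 2x2 grid factor, the box-filter weights,
 * and the helper names are illustrative assumptions, not part of the
 * specification.
 */
#include <stdint.h>
#include <stdio.h>

#define W 4                 /* final image width  (pixels)            */
#define H 4                 /* final image height (pixels)            */
#define S 2                 /* oversampling factor per axis (2x2 = 4) */

/* Weight the oversampled buffer back down to the final resolution by
 * averaging each SxS block of samples into one output pixel.           */
static void resolve(const uint8_t *samples, uint8_t *pixels)
{
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            unsigned sum = 0;
            for (int sy = 0; sy < S; ++sy)
                for (int sx = 0; sx < S; ++sx)
                    sum += samples[(y * S + sy) * (W * S) + (x * S + sx)];
            pixels[y * W + x] = (uint8_t)(sum / (S * S));
        }
    }
}

int main(void)
{
    uint8_t samples[W * S * H * S];   /* oversampled (enhanced) sample set */
    uint8_t pixels[W * H];            /* final, pre-oversampled resolution */

    /* Stand-in for rasterization: a hard diagonal edge sampled at 2x2. */
    for (int y = 0; y < H * S; ++y)
        for (int x = 0; x < W * S; ++x)
            samples[y * W * S + x] = (x > y) ? 255 : 0;

    resolve(samples, pixels);

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            printf("%4u", pixels[y * W + x]);
        printf("\n");
    }
    return 0;
}
```

Running the sketch prints a 4x4 image in which the hard diagonal edge of the oversampled buffer has been softened into intermediate gray values, which is the visual effect the weighting step is meant to achieve.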
DETAILED DESCRIPTION OF THE INVENTION [0010] To facilitate an understanding of the descriptions herein, definitions will be provided for certain terms. A primitive may be, e.g., a point, a line, or a triangle. A triangle may be rendered in groups of fans, strips, or meshes. An object is one or more primitives. A scene is a collection of models and the environment within which the models are positioned. A pixel comprises information regarding a location on a screen along with color information and optionally additional information (e.g., depth). The color information may be in the form of an RGB color triplet. A screen grid cell is the area of a screen that may be occupied by a given pixel. A screen grid value is a value corresponding to a screen grid cell or a pixel. An application programming interface (API) is an interface between an application program on the one hand and operating system, hardware, and other functionality on the other hand. An API allows for the creation of drivers and programs across a variety of platforms, where those drivers and programs interface with the API rather than directly with the platform's operating system or hardware. [0011] Fig. 1 is a block diagram of a mobile device 10. The illustrated mobile device 10 may comprise a wireless mobile communications device such as a mobile phone. [0012] The illustrated mobile device 10 comprises a system memory 12 (comprising a RAM in the illustrated embodiment), a system bus 13, and software 14 (comprising an application program) in system memory 12. Device 10 further comprises 3D hardware 16 including, e.g., one or more 3D multimedia chips, and other hardware 18 including a microprocessor and one or more application specific integrated circuits (ASICs). 3D hardware 16 and other hardware 18 are coupled to system memory 12 via system bus 13. [0013] The illustrated 3D hardware 16 may comprise circuitry formed as part of an integrated circuit also common to other hardware 18, or it may comprise its own integrated circuit chip or set of chips. 3D hardware 16 comprises its own local memory and registers 34 to hold data and a graphics pipeline comprising graphics pipeline portions 36. [0014] In terms of a hierarchy, software 14 comprises one or more applications 22 with 3D functionality that communicate with 3D hardware 30 via a 3D application programming interface (API) 24 and one or more 3D hardware device drivers 28. In the illustrated embodiment, 3D API 24 comprises, among other elements not specifically shown in Fig. 1, an object-antialiasing extension 26. [0015] Image data is generally maintained in one or more frame buffers 32 in system memory 12. 3D hardware 16 retrieves image data from and updates image data into such frame buffers 32. [0016] Fig. 2 shows an antialiasing block diagram, which depicts those entities of the illustrated mobile device that pertain to antialiasing. A given application program 40 is shown, which interacts with an API 42 by naming the function name 52 of antialiasing extension 44, and by specifying the parameter set 54 thereof. [0017] The illustrated antialiasing extension 44 comprises a type of antialiasing application programming interface (API) function to instruct, when called by application program 40, the 3D processing portion 46 of the 3D graphics hardware (specifically the 3D graphics pipeline) to perform certain antialiasing acts.
The antialiasing API function comprises a data structure to receive function name 52 and a parameter set 54 comprising antialiasing parameters, each from the application program 40. The antialiasing API function passes these antialiasing parameters received from the application program to the 3D graphics pipeline. [0018] As shown in Fig. 2, if a given object "object" is specified for antialiasing in the parameter set 54, it is subjected to antialiasing 50 within the 3D processing portion of the system. [0019] The parameter set may comprise an object set identification parameter 56 to identify a set of objects of a given image to be antialiased. The object set identification parameter may comprise a set of identifiers identifying individual objects from a sequence of objects making up a scene of the given image. [0020] The parameter set may comprise a chosen type of antialiasing algorithm 58 to be employed by the pipeline, as well as parameters 60 of the chosen type of antialiasing algorithm. The parameter set may further comprise an antialiasing sampling specification parameter 62 to specify whether, upon antialiasing oversampling, the oversampling is to be performed per object on a specified set of objects or is to be performed on the entire image; and the parameter set may further comprise an antialiasing weighting specification parameter 64 to specify whether, upon antialiasing weighting, the weighting is to be performed per object on a specified set of objects or is to be performed on the entire image. [0021] The parameter set may comprise a weighting timing parameter 66 to specify whether the antialiasing weighting is to be performed before a texturing portion of the pipeline or after a blending portion of the pipeline. [0022] Fig. 3 is a block diagram of pertinent portions of a 3D graphics pipeline that may be employed in the mobile device 10 illustrated in Fig. 1. The illustrated pipeline 80 comprises a model and view transform stage 82, a lighting stage 84, a projection stage 86, a clipping stage 88, a screen mapping stage 90, and a rasterization stage 92. The illustrated rasterization stage 92 comprises a setup portion 96, a shading portion 98, a hidden surface removal portion 100, a texturing portion 102, and a blending portion 104. [0023] In model and view transform stage 82, models of the depicted scene are positioned in world space and then in camera or eye space. Lighting information is added in lighting stage 84, and in the projection stage 86 the lighting modified objects are described in terms of normalized device coordinates, i.e., the three dimensional object information is converted to two dimensional information. The clipping stage 88 removes those portions of the scene that are outside a defined view volume of the scene. The projected and clipped two dimensional rendition of the scene is then mapped to the screen (in screen coordinates x and y, scaled to the size of the screen) by screen mapping stage 90. The z coordinate information is also maintained for the scene. [0024] Setup portion 96 performs computations on each of the image's primitives (e.g., triangles). These computations precede an interpolation portion, otherwise referred to as a shading portion 98 (or a primitive-to-pixel conversion stage), of the graphics pipeline. Such computations may include, for example, computing the slope of a triangle edge using vertex information at the edge's two end points.
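By way of illustration only, the parameter set of Fig. 2 (items 54 through 66, described in paragraphs [0017] and [0019] to [0021]) could be rendered in C roughly as follows. Every identifier, type, and the example algorithm parameter here are hypothetical; the specification does not define concrete names, types, or an entry-point signature.

```c
/*
 * Hypothetical C rendering of the antialiasing extension's parameter set
 * (items 54-66 of Fig. 2).  Every identifier here is an illustrative
 * assumption; the specification does not define concrete names or types.
 */
#include <stddef.h>
#include <stdint.h>

typedef enum {                 /* chosen antialiasing algorithm (58) */
    AA_ALG_FSAA,
    AA_ALG_QUINCUNX,
    AA_ALG_ACCUMULATION_BUFFER,
    AA_ALG_A_BUFFER
} aa_algorithm_t;

typedef enum {                 /* sampling / weighting scope (62, 64) */
    AA_SCOPE_PER_OBJECT,
    AA_SCOPE_ENTIRE_IMAGE
} aa_scope_t;

typedef enum {                 /* weighting timing (66) */
    AA_WEIGHT_BEFORE_TEXTURING,
    AA_WEIGHT_AFTER_BLENDING
} aa_weight_timing_t;

typedef struct {
    const uint32_t    *object_ids;        /* object set identification (56)     */
    size_t             object_count;
    aa_algorithm_t     algorithm;         /* chosen algorithm (58)              */
    uint32_t           samples_per_pixel; /* example algorithm parameter (60)   */
    aa_scope_t         sampling_scope;    /* sampling specification (62)        */
    aa_scope_t         weighting_scope;   /* weighting specification (64)       */
    aa_weight_timing_t weight_timing;     /* weighting timing (66)              */
} aa_parameter_set_t;

/* Hypothetical extension entry point: the application names the extension
 * function (52) and passes the parameter set (54), which the driver then
 * forwards to the 3D graphics pipeline.                                   */
int aaObjectAntialiasingEXT(const aa_parameter_set_t *params);
```

Under these assumptions, an application wishing to antialias only its foreground triangles might fill object_ids with the identifiers of those triangles, select per-object sampling, and request weighting after the blending portion.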
Shading portion 98 involves the execution of algorithms to define a screen's triangles in terms of pixels addressed in terms of horizontal and vertical (X and Y) positions along a two-dimensional screen. Texturing portion 102 matches image objects (triangles, in the embodiment) with certain images designed to add to the realistic look of those objects. Specifically, texturing portion 102 will map a given texture image by performing a surface parameterization and a viewing projection. The texture image in texture space (u, v) (in texels) is converted to object space by performing a surface parameterization into object space (xo, yo, zo). The image in object space is then projected into screen space (x, y) (pixels), onto the object (triangle). [0025] In the illustrated embodiment, blending portion 104 takes a texture pixel color from texturing portion 102 and combines it with the associated triangle pixel color of the pre-texture triangle. Blending portion 104 also performs alpha blending on the texture-combined pixels, and performs a bitwise logical operation on the output pixels. More specifically, blending portion 104, in the illustrated system, is the last stage in the 3D graphics pipeline. Accordingly, it will write the final output pixels of 3D hardware 16 to frame buffer(s) 32 within system memory 12. A hidden surface removal (HSR) portion 100 is provided, which uses depth information to eliminate hidden surfaces from the pixel data. Because in the illustrated embodiment it is provided between shading portion 98 and texturing portion 102, it simplifies the image data and reduces the bandwidth demands on the pipeline. [0026] The illustrated shading portion 98 comprises an antialiasing oversampling mechanism 110 and an antialiasing weighting mechanism 112 (an averaging mechanism as illustrated in Fig. 3). The illustrated blending portion 104 also comprises an antialiasing weighting mechanism 114 (also an averaging mechanism as illustrated in Fig. 3). [0027] Antialiasing requires oversampling and subsequent weighting. By oversampling early in the pipeline (e.g., prior to performing hidden surface removal or texturing), while weighting later in the pipeline (e.g., in the blending portion), the quality of the rendered image can be improved. For example, this allows certain calculations to be done after the oversampling yet before weighting. Such calculations, e.g., concerning when one object touches or covers another, are more accurate with the oversampled data. However, this oversampling creates a corresponding increase in the demand on the pipeline's bandwidth (i.e., processing rate). For example, with an oversampling rate of four oversampled pixels per standard pixel, a processing stage that accesses frames from main memory causes four times as much data to be transferred over the system bus for every frame access. [0028] In accordance with one embodiment, antialiasing oversampling mechanism 110 in shading portion 98 may perform for a given image, at an early stage of the pipeline, oversampling on a portion of the objects of the given image. In this embodiment, weighting is performed at weighting mechanism 114 in blending portion 104. By performing antialiasing on only a portion of the objects at such an early stage of the graphics pipeline, the processing rate demands on the pipeline are reduced.
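To make the per-object selection of paragraph [0028] concrete, the sketch below routes each object either to an oversampling path or to a standard sampling path based on a lookup table filled from the application-supplied parameter set; it anticipates the switch arrangement elaborated below with reference to Fig. 4. The table size, function names, and empty sampling callbacks are illustrative assumptions, not part of the specification.

```c
/*
 * Sketch of per-object routing into either antialiasing oversampling or
 * standard pixel sampling, in the spirit of the per-object scheme of
 * paragraph [0028].  The lookup table, the limit MAX_OBJECTS, and the two
 * sampling callbacks are illustrative assumptions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAX_OBJECTS 1024

static bool oversample_table[MAX_OBJECTS];   /* filled from parameter set 54 */

/* Mark the objects named in the object set identification parameter (56). */
void aa_table_load(const uint32_t *object_ids, size_t count)
{
    memset(oversample_table, 0, sizeof oversample_table);
    for (size_t i = 0; i < count; ++i)
        if (object_ids[i] < MAX_OBJECTS)
            oversample_table[object_ids[i]] = true;
}

/* Placeholder sampling paths; a real shading stage would interpolate color,
 * alpha, texture coordinates, depth, and perspective terms here.          */
static void shade_oversampled(uint32_t object_id)  { (void)object_id; }
static void shade_standard(uint32_t object_id)     { (void)object_id; }

/* The per-object switch: oversample only the objects the application asked
 * for, keeping the bandwidth cost of the extra samples bounded.           */
void shade_object(uint32_t object_id)
{
    if (object_id < MAX_OBJECTS && oversample_table[object_id])
        shade_oversampled(object_id);   /* e.g. foreground triangles */
    else
        shade_standard(object_id);      /* e.g. background triangles */
}
```

Only the objects flagged in the table incur the extra sample traffic, which is how a per-object scheme bounds the bandwidth increase discussed in paragraph [0027].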
[0029] In accordance with another embodiment, the shading portion comprises an antialiasing oversampling mechanism 110 and an antialiasing weighting mechanism 112. The antialiasing oversampling mechanism 110 performs for a given image, at an early stage of the pipeline, antialiasing oversampling on at least a portion of the objects of the given image. The antialiasing weighting mechanism 112 performs on the given image, at the early stage of the pipeline, antialiasing weighting on the portion of the given image oversampled by the antialiasing oversampling mechanism. By performing both oversampling and weighting at an early stage of the pipeline (e.g., before texturing), the weighting reducing the number of pixels to the number preceding the oversampling process, the benefits of antialiasing are achieved while the amount of data processed by later portions of the 3D graphics pipeline is kept to a minimum. [0030] Fig. 4 is a block diagram of an example shading portion 120 of the 3D graphics pipeline, configured to effect the per object antialiasing oversampling. The illustrated shading portion 120 comprises a switch 124 which receives a given object "objectk" and directs it to either antialiasing oversampling 126 or standard pixel sampling 128. The sampled values (i.e., the resulting pixel values) are forwarded to the frame buffer via a local buffer or register 130. The illustrated example shading portion 120 performs an interpolation function on information for each object (each triangle in the illustrated embodiment), calculating RGB, α (alpha), u, v (texture coordinates), z (depth), and w (perspective correction). [0031] Switch 124 may comprise, e.g., a table lookup mechanism to look up in a table whether the given object is to be oversampled. The given object may be specified for oversampling because it is a foreground object, thus justifying the additional bandwidth cost associated with antialiasing. It may be specified for standard pixel sampling if, e.g., it is a background object not requiring as clear a rendition. Objects may be specifically chosen for oversampling or standard pixel sampling with the use of the parameter set of the antialiasing extension, as shown in Fig. 2. [0032] The processing performed by the system shown in the figures may be performed by a general purpose computer alone or in connection with a specialized processing computer. Such processing may be performed by a single platform or by a distributed processing platform. In addition, such processing can be implemented in the form of special purpose hardware or in the form of software being run by a general purpose computer. Any data handled in such processing or created as a result of such processing can be stored in any type of memory. By way of example, such data may be stored in a temporary memory, such as in the RAM of a given computer system or subsystem. In addition, or in the alternative, such data may be stored in longer-term storage devices, for example, magnetic disks, rewritable optical disks, and so on. For purposes of the disclosure herein, computer-readable media may comprise any form of data storage mechanism, including such different memory technologies as well as hardware or circuit representations of such structures and of such data. [0033] While the invention has been described with reference to certain illustrated embodiments, the words which have been used herein are words of description rather than words of limitation.
Changes may be made, within the purview of the appended claims, without departing from the scope and spirit of the invention in its aspects. Although the invention has been described herein with reference to particular structures, acts, and materials, the invention is not to be limited to the particulars disclosed, but rather extends to all equivalent structures, acts, and materials, such as are within the scope of the appended claims. WE CLAIM: 1. A three-dimensional (3D) graphics pipeline to render a sequence of images of 3D scenes each composed of a plural set of objects, the pipeline comprising a texturing portion, a blending portion, and an antialiasing oversampling mechanism to perform for a given image, before texturing by the texturing portion, oversampling on a portion of the objects of the given image. 2. The pipeline according to claim 1, wherein the objects are triangles. 3. The pipeline according to claim 2, wherein the antialiasing oversampling mechanism receives antialiasing parameters specified by an application program through use of an antialiasing application program interface (API) function. 4. The pipeline according to claim 3, wherein the API function comprises an API extension. 5. The pipeline according to claim 3, wherein the application program is running in a memory external to the 3D graphics pipeline. 6. The pipeline according to claim 3, wherein the antialiasing oversampling is performed by the antialiasing oversampling mechanism in accordance with the received antialiasing parameters. 7. The pipeline according to claim 2, wherein the antialiasing oversampling is performed by the antialiasing oversampling mechanism on a per object basis. 8. The pipeline according to claim 1, further comprising an antialiasing weighting mechanism to perform on the given image, after texturing by the texturing portion, antialiasing weighting on the oversampled objects of the given image. 9. The pipeline according to claim 8, wherein the antialiasing weighting is performed by the antialiasing weighting mechanism after blending by a blending portion of the pipeline. 10. The pipeline according to claim 8, wherein the antialiasing weighting mechanism performs an averaging operation on oversampled objects. 11. A three-dimensional (3D) graphics pipeline to render a sequence of images of 3D scenes each composed of a plural set of objects, the pipeline comprising: a texturing portion; an antialiasing oversampling mechanism to perform for a given image, at a given stage of the pipeline before texturing by the texturing portion, antialiasing oversampling on at least a portion of the objects of the given image; and an antialiasing weighting mechanism to perform on the given image, at the given stage of the pipeline, antialiasing weighting on the portion of the given image oversampled by the antialiasing oversampling mechanism. 12. The pipeline according to claim 11, wherein the objects are triangles. 13. The pipeline according to claim 11, wherein the antialiasing oversampling mechanism receives antialiasing parameters specified by an application program through use of an antialiasing application program interface (API) function. 14. The pipeline according to claim 13, wherein the API function comprises an API extension. 15. The pipeline according to claim 13, wherein the application program is running in a memory external to the 3D graphics pipeline. 16.
The pipeline according to claim 13, wherein the antialiasing oversampling is performed by the antialiasing oversampling mechanism in accordance with the received antialiasing parameters. 17. The pipeline according to claim 11, wherein the antialiasing oversampling is performed by the antialiasing oversampling mechanism on a per object basis. 18. The pipeline according to claim 11, wherein the antialiasing oversampling is performed by the antialiasing oversampling mechanism on the entire given image. 19. The pipeline according to claim 11, wherein the antialiasing oversampling is performed by the antialiasing oversampling mechanism on all objects of the given image. 20. The pipeline according to claim 11, wherein the antialiasing weighting mechanism performs an averaging operation. 21. Machine-readable media holding machine-readable data, the machine-readable data when read by a machine causing certain antialiasing acts to be performed by a three-dimensional (3D) graphics pipeline, the machine-readable data comprising: an antialiasing application programming interface (API) function to instruct, when called by an application program, a 3D graphics pipeline to perform certain antialiasing acts, the antialiasing API function comprising a data structure to receive antialiasing parameters from the application program and to pass the antialiasing parameters received from the application program to the 3D graphics pipeline. 22. The machine-readable media according to claim 21, wherein the antialiasing parameters comprise an object set identification parameter to identify a set of objects of a given image to be antialiased. 23. The machine-readable media according to claim 22, wherein the object set identification parameter comprises a set of identifiers identifying individual objects from a sequence of objects making up a scene of the given image. 24. The machine-readable media according to claim 21, wherein the antialiasing parameters comprise a chosen type of antialiasing algorithm to be employed by the pipeline. 25. The machine-readable media according to claim 24, wherein the antialiasing parameters comprise parameters of the chosen type of antialiasing algorithm. 26. The machine-readable media according to claim 21, wherein the antialiasing parameters comprise an antialiasing sampling specification parameter to specify whether upon antialiasing oversampling, the oversampling is to be performed per object on a specified set of objects or is to be performed on the entire image. 27. The machine-readable media according to claim 26, wherein the antialiasing parameters comprise an antialiasing weighting specification parameter to specify whether upon antialiasing weighting, the weighting is to be performed per object on a specified set of objects or is to be performed on the entire image. 28. The machine-readable media according to claim 27, wherein the antialiasing parameters comprise a weighting timing parameter to specify whether the antialiasing weighting is to be performed before a texturing portion of the pipeline or after a blending portion of the pipeline. 29.
An embedded device comprising: system memory; a system bus; and a 3D graphics core coupled to the main memory via the system bus, the 3D graphics core comprising a graphics pipeline, the graphics pipeline comprising a shading portion, a texturing portion, and a blending portion, and being configured to render a sequence of images of 3D scenes each composed of a plural set of objects; the graphics core further comprising an antialiasing oversampling mechanism to perform for a given image, at a stage of the pipeline before texturing by the texturing portion, oversampling on a portion of the objects of the given image. 30. The embedded device according to claim 29, wherein the objects are triangles. 31. An embedded device comprising: system memory; a system bus; and a 3D graphics core coupled to the main memory via the system bus, the 3D graphics core comprising a graphics pipeline, the graphics pipeline comprising a shading portion, a texturing portion, and a blending portion, and being configured to render a sequence of images of 3D scenes each composed of a plural set of objects; the graphics core further comprising an antialiasing oversampling mechanism to perform for a given image, at a given stage of the pipeline before texturing by the texturing portion, antialiasing oversampling on at least a portion of the objects of the given image; and the graphics core further comprising an antialiasing weighting mechanism to perform on the given image, at the given stage of the pipeline, antialiasing weighting on the portion of the given image oversampled by the antialiasing oversampling mechanism. 32. The embedded device according to claim 31, wherein the objects are triangles. 33. An integrated circuit comprising a three-dimensional (3D) graphics pipeline to render a sequence of images of 3D scenes each composed of a plural set of objects, the pipeline comprising a texturing portion, a blending portion, and an antialiasing oversampling mechanism to perform for a given image, at a stage of the pipeline before texturing by the texturing portion, oversampling on a portion of the objects of the given image. 34. The integrated circuit according to claim 33, wherein the objects are triangles. 35. An integrated circuit comprising a three-dimensional (3D) graphics pipeline to render a sequence of images of 3D scenes each composed of a plural set of objects, the pipeline comprising: a texturing portion; an antialiasing oversampling mechanism to perform for a given image, at a given stage of the pipeline before texturing by the texturing portion, antialiasing oversampling on at least a portion of the objects of the given image; and an antialiasing weighting mechanism to perform on the given image, at the given stage of the pipeline, antialiasing weighting on the portion of the given image oversampled by the antialiasing oversampling mechanism. 36. The pipeline according to claim 35, wherein the objects are triangles. Dated this 2nd day of April, 2007. [MAHUA GHOSH] OF K & S PARTNERS AGENT FOR THE APPLICANTS |
---|
482-MUMNP-2007-ABSTRACT(27-4-2009).pdf
482-mumnp-2007-abstract(3-4-2007).pdf
482-mumnp-2007-abstract(amended)-(27-4-2009).pdf
482-mumnp-2007-abstract(granted)-(21-5-2009).pdf
482-mumnp-2007-cancelled pages(27-4-2009).pdf
482-MUMNP-2007-CLAIMS(27-4-2009).pdf
482-mumnp-2007-claims(3-4-2007).pdf
482-mumnp-2007-claims(granted)-(21-5-2009).pdf
482-MUMNP-2007-CORRESPONDENCE(27-4-2009).pdf
482-mumnp-2007-correspondence(3-7-2007).pdf
482-mumnp-2007-correspondence(ipo)-(12-6-2009).pdf
482-mumnp-2007-correspondence(ipo)-(8-6-2009).pdf
482-mumnp-2007-correspondence-others.pdf
482-mumnp-2007-correspondence-received.pdf
482-mumnp-2007-description (complete).pdf
482-MUMNP-2007-DESCRIPTION(COMPLETE)-(27-4-2009).pdf
482-mumnp-2007-description(complete)-(3-4-2007).pdf
482-mumnp-2007-description(granted)-(21-5-2009).pdf
482-MUMNP-2007-DRAWING(27-4-2009).pdf
482-mumnp-2007-drawing(3-4-2007).pdf
482-mumnp-2007-drawing(granted)-(21-5-2009).pdf
482-MUMNP-2007-DRAWING(SUPER SEDED)-(27-4-2009).pdf
482-MUMNP-2007-FORM 1(3-4-2007).pdf
482-mumnp-2007-form 2(27-4-2009).pdf
482-mumnp-2007-form 2(complete)-(3-4-2007).pdf
482-mumnp-2007-form 2(granted)-(21-5-2009).pdf
482-MUMNP-2007-FORM 2(TITLE PAGE)-(27-4-2009).pdf
482-mumnp-2007-form 2(title page)-(3-4-2007).pdf
482-mumnp-2007-form 2(title page)-(complete)-(3-4-2007).pdf
482-mumnp-2007-form 2(title page)-(granted)-(21-5-2009).pdf
482-mumnp-2007-form 26(2-4-2007).pdf
482-MUMNP-2007-FORM 26(27-4-2009).pdf
482-MUMNP-2007-FORM 3(27-4-2009).pdf
482-MUMNP-2007-FORM 5(27-4-2009).pdf
482-mumnp-2007-form-pct-ib-304.pdf
482-mumnp-2007-form-pct-ib-311.pdf
482-mumnp-2007-form-pct-isa-220.pdf
482-mumnp-2007-form-pct-isa-237.pdf
482-mumnp-2007-form-pct-isa-seperate sheet-237.pdf
482-mumnp-2007-pct-search report.pdf
482-MUMNP-2007-PETITION UNDER RULE 137(24-4-2009).pdf
482-mumnp-2007-specification(amended)-(27-4-2009).pdf
482-MUMNP-2007-U S PATENT(27-4-2009).pdf
482-mumnp-2007-wo international publication report(3-4-2007).pdf
482-mumnp-2007-wo international publication report(3-4-2009).pdf
Patent Number | 234318 |
---|---|
Indian Patent Application Number | 482/MUMNP/2007 |
PG Journal Number | 28/2009 |
Publication Date | 10-Jul-2009 |
Grant Date | 21-May-2009 |
Date of Filing | 03-Apr-2007 |
Name of Patentee | QUALCOMM INCORPORATED |
Applicant Address | 5775 MOREHOUSE DRIVE, SAN DIEGO, CALIFORNIA |
Inventors: | |
PCT International Classification Number | G06T15/00, G06T15/50 |
PCT International Application Number | PCT/US2005/034051 |
PCT International Filing date | 2005-09-23 |
PCT Conventions: | |