| Title of Invention | METHOD AND APPARATUS FOR PROVIDING AN IMAGE TO BE DISPLAYED ON A SCREEN |
|---|---|
| Abstract | IN/PCT/2002/00782/CHE ABSTRACT "METHOD AND APPARATUS FOR PROVIDING AN IMAGE TO BE DISPLAYED ON A SCREEN" The invention relates to a method for providing an image to be displayed on a screen (10) such that a viewer in any current spatial position (O) in front of the screen (10) can watch the image with only a minimal perspective deformation. According to the invention this is achieved by estimating the current spatial position (O) of a viewer in relation to a fixed predetermined position (Q) representing a viewing point in front of the screen (10) from which the image could be watched without any perspective deformation and by providing the image by applying a variable perspective transformation to an originally generated image in response to said estimated current position (O) of the viewer, such that the viewer in said position (O) is enabled to watch the image without a perspective deformation. [fig-1] |
| Full Text | Method and apparatus for providing an image to be displayed on a screen The invention relates to a method and an apparatus for providing an image to be displayed on a screen, in particular on a TV screen or on a computer monitor, according to the preambles of claims 1, 12 and 16. Such a method and apparatus are known in the art. However, in prior art it is also known that perspective deformations may occur when an image is displayed on a screen, depending on the current position of a viewer watching the screen. That phenomenon shall now be explained in detail by referring to Fig. 3. Fig. 3 is based on the assumption that a 2-dimensional image of a 3-dimensional scene is either taken by a camera, e.g. a TV camera, or generated by a computer graphic program, e.g. a computer game. Moreover, an assumption is made about the location of the centre of the projection P and the rectangular viewport S of the original image, wherein P and S relate to the location where the original image is generated, e.g. a camera, but not necessarily to the location where it is later watched by a viewer. P and S are considered to form a fictive first pyramid as shown in Fig. 3. In the case that the image is taken with a camera, P is the optical centre of this camera and S is its light-sensitive area. In the case that the image is generated by a computer graphic the parameters P and S can be considered as parameters of a virtual camera. The original image might be generated by the camera or by the computer program by using different transformations known in the art: One example for such a transformation is the change of the viewing point from which a particular scene is watched by the real or virtual camera. Another example for such a transformation is the following one, used in computer graphic applications for correcting texture mapping.
Such a transformation may be described according to the following equation:

x = (a·u + b·v + c) / (g·u + h·v + i), y = (d·u + e·v + f) / (g·u + h·v + i) (1)

wherein: the term (g·u + h·v + i) represents a division per pixel; u,v are the co-ordinates of a pixel of the image before the transformation; x,y are the co-ordinates of the pixel of the image after the transformation; and a,b,c,d,e,f,g,h and i are variable coefficients being individually defined by the graphic program. However, irrespective as to whether the original image has been generated by conducting such transformations or not, or as to whether the image has been generated by a camera or by a computer program, there is only one spatial position Q in the location where the image is later watched after its generation, i.e. in front of a screen 10 on which the image is displayed, from which a viewer can watch the image on the screen without any perspective deformations. Said position Q is fixed in relation to the position of the screen 10 and can be calculated from the above-mentioned parameters P and S according to a method known in the art. The position Q is illustrated in Fig. 3 as the top of a second fictive pyramid which is restricted by a rectangular area A of the image when being displayed on the screen 10. Said position Q, that means the ideal position for the viewer, is reached when the second fictive pyramid is similar to the first pyramid. More specifically, the first and the second pyramid are similar if the following two conditions are fulfilled simultaneously: a) Q lies on a line L which is orthogonal to the area A of the displayed image and goes through the centre of A; and b) the distance between Q and the centre of A is such that the top angles of the two pyramids are equal. If condition a) is not fulfilled there will be an oblique second pyramid; if condition b) is not fulfilled there will be an erroneous perspective shortening in case of occlusion, i.e. different objects of the original 3D scene get false relative apparent depths.
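The transformation of equation (1), including the division per pixel, can be sketched as a short Python function; the function name and the layout of the coefficient tuple are illustrative assumptions, not part of the original text:

```python
def apply_perspective_transform(coeffs, u, v):
    """Map a pixel (u, v) through the perspective transformation of
    equation (1).  coeffs holds the variable coefficients a-i; the
    denominator g*u + h*v + i is the division per pixel."""
    a, b, c, d, e, f, g, h, i = coeffs
    w = g * u + h * v + i              # per-pixel divisor
    x = (a * u + b * v + c) / w
    y = (d * u + e * v + f) / w
    return x, y

# The identity coefficients leave every pixel unchanged.
identity = (1, 0, 0, 0, 1, 0, 0, 0, 1)
```

Note that with g = h = 0 and i = 1 the denominator is constant and the mapping degenerates to an affine one; the perspective effect comes entirely from the per-pixel division.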
The case that the condition a) is not fulfilled is more annoying to the viewer than the case that condition b) is not fulfilled. Expressed in other words, if the current position O of the viewer watching the image on the screen 10 does not correspond to the position Q, i.e. if there is a distance |Q-O| between said positions Q and O, the viewer will see a perspectively deformed image. A large distance |Q-O| can result in reduced visibility of the displayed image and, worse, in reduced readability of text. In prior art a suboptimal approach is known to overcome these disadvantages by adapting the displayed image to the current position of the viewer. More specifically, that approach proposes to rotate the physical screen by hand or by an electric motor such that condition a) is fulfilled; e.g. Bang & Olufsen sells a TV having a motor for rotating the screen. According to that approach rotation of the screen is controlled in response to the distance |O-Q| between the position O of the viewer and the fixed position Q. Rotation of the screen by hand is inconvenient for the viewer and the rotation by motor is expensive and vulnerable. Moreover, condition b) can not be fulfilled by that approach. Starting from that prior art it is the object of the invention to improve a method and apparatus for providing an image to be displayed on a screen such that the application of the method is more convenient to a user or a viewer of the image. Said object is solved by the method according to claim 1 comprising the steps of estimating the current spatial position O of a viewer in relation to a fixed predetermined position Q representing a viewing point in front of the screen from which the image could be watched without any perspective deformation; and providing the image by applying a variable perspective transformation to an originally generated image in response to said estimated current position O of the viewer.
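Conditions a) and b) together fix the ideal position Q geometrically: Q lies on the normal L through the centre of A, at the distance where the top angle of the viewing pyramid equals the top angle of the camera pyramid formed by P and S. A minimal sketch, assuming similar triangles relate the widths of S and A to the distance between P and S (all parameter and function names are illustrative, not from the patent text):

```python
def ideal_viewing_position(centre_A, normal_A, width_A, width_S, dist_PS):
    """Sketch of locating Q from conditions a) and b).

    centre_A : centre of the displayed area A (3-tuple)
    normal_A : unit normal of A; condition a) puts Q on this line L
    width_A  : width of the displayed area A
    width_S  : width of the camera's light-sensitive area S
    dist_PS  : distance between the projection centre P and S

    Condition b), equal top angles, gives by similar triangles
    d / width_A = dist_PS / width_S for the viewing distance d.
    """
    d = dist_PS * width_A / width_S
    return tuple(c + d * n for c, n in zip(centre_A, normal_A))
```

The returned point is Q expressed in the same co-ordinates as the screen centre; any viewer position other than this one violates at least one of the two similarity conditions.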
Advantageously said perspective transformation enables a viewer in any current spatial position O in front of the screen to watch the image on the screen without perspective deformations. Consequently, the visibility of the displayed image and in particular the readability of displayed text is improved. Said transformation is convenient to the viewer because he does not get aware of an application of the transformation when he is watching the images on the screen. There is no physical movement of the screen like in the prior art. Moreover, the implementation of the transformation can be realised cheaply and usually no maintenance is required. The application of said method is in particular helpful for large-screen TVs or large monitors. According to an embodiment of the invention the perspective transformation of the original image advantageously includes a rotation and/or a translation of the co-ordinates of at least one pixel of the originally generated image. In that case an exact transformation of the position of the viewer into the ideal position Q can be achieved and perspective deformations can completely be eliminated. Preferably, the estimation of the position O of the viewer is done by tracking the head or the eye of the viewer. Alternatively, said estimation is done by estimating the current position of a remote control used by the viewer for controlling the screen. Preferably, the method steps of the method according to the invention are carried out in real time because in that case the viewer is not optically disturbed when watching the image on the screen. Further advantageous embodiments of the method according to the invention are subject matter of the dependent claims. The object of the invention is further solved by an apparatus according to claim 12. The advantages of said apparatus correspond to the advantages outlined above with regard to the method of the invention.
Advantageously, the estimation unit and/or the correcting unit is included in a TV set or alternatively in a computer. In these cases, there are no additional units required which would otherwise have to be placed close to the TV set or to the computer. The object of the invention is further solved by the subject matter of claim 16, the advantages of which correspond to the advantages of the apparatus described above. In the following a preferred embodiment of the invention will be described by referring to the accompanying figures, wherein: Fig. 1 shows an apparatus for carrying out a method according to the invention; Fig. 2 illustrates the watching of an image on a screen without perspective deformations according to the invention; and Fig. 3 illustrates the watching of an image on a screen from an ideal position Q without any perspective deformations as known in the art. Fig. 1 shows an apparatus 1 according to the present invention. It includes an estimation unit 20 for estimating the current spatial position O of a viewer in front of the screen 10 in relation to a fixed predetermined position Q representing a viewing point in front of the screen from which a provided image could be watched without any perspective deformation and for outputting a respective positioning signal. The apparatus 1 further includes a correction unit 30 for providing the image by correcting the perspective deformation of an originally generated image in response to said estimated current position O and for outputting an image signal representing the provided image having no perspective deformation to the screen 10. The correction unit 30 carries out the correction of the perspective deformation by applying a variable perspective transformation to the original image generated e.g. in a camera or by a computer graphic program. The transformation is represented by formula (1) known in the art as described above.
The usage of said transformation does not change the fixed position Q from which the image can be watched after its generation without any perspective deformations. It is important to note that the location where the original image is generated and the location where said image is later watched on a TV screen or on a monitor are usually different. Moreover, at the time when the original image is generated a current or actual position O of a viewer in front of the screen when watching the image is not known and can thus not be considered when generating the original image. Based on that situation the invention teaches another application of the known transformation according to equation (1). More specifically, according to the invention said transformation is used to enable a viewer to watch the image not only from the position Q but from any arbitrary position O in front of the screen 10 with only a minimal perspective deformation. In the case that the original image has been generated by conducting the transformation, the invention teaches an additional or second application of said transformation in order to generate the displayed image. More specifically, according to the invention the variable coefficients a,b,c,d,e,f,g,h and i of said transformation are adapted in response to the currently estimated position O of the viewer. The transformation with the such adapted coefficients is subsequently applied to the original image in order to provide the image to be displayed on the screen. Said displayed image can be watched by the viewer from any position in front of the screen 10 without perspective deformations. A method for carrying out the adaptation will now be explained in detail by referring to Fig. 2. In Fig. 2 a situation is shown in which a viewer watches the screen 10 from a position O which does not correspond to the ideal position Q. The meanings of the parameters A, L, S, O, P, Q in Fig.
2 correspond to their respective meanings as explained above by referring to Fig. 3. The method comprises the steps of: 1. Defining a co-ordinate system with Q as origin in which the x-axis lies in a horizontal direction, in which the y-axis lies in the vertical direction and in which the z-axis also lies in the horizontal direction, leading from the position Q through the centre of the area A of the image displayed on the screen 10. 2. The parameters u and v as used in the transformation according to equation (1) relate to an only two-dimensional Euclidian co-ordinate system having its origin in the centre of the area A. For later being able to calculate the coefficients a-i of the transformation, the co-ordinates u and v are transformed from said two-dimensional Euclidian space into a three-dimensional Euclidian space having Q as origin according to the following equation:

(u, v) → (u, v, Ld) (2)

wherein Ld is the distance between the position Q and the centre of the image area A. 3. The co-ordinates (u, v, Ld) of the displayed image in the three-dimensional Euclidian space are further transformed into a three-dimensional projective space having Q as origin according to

(u, v, Ld) → [u, v, Ld, 1] (3)

4. Subsequently, an Euclidian transformation T is calculated to change the co-ordinate system such that the viewer position O is made the centre of the new co-ordinate system. Said Euclidian transformation T is in general calculated according to:

T · [xO, yO, zO, 1]ᵀ = [0, 0, 0, 1]ᵀ, with T combining the rotation coefficients Rij and the translation vector (tx, ty, tz) (4)

wherein the vector [xO, yO, zO, 1] represents the co-ordinates of the position O of the viewer in the three-dimensional projective space having Q as origin, Rij with i = 1-3 and j = 1-3 represent the co-ordinates of a rotation matrix for rotating the co-ordinates, e.g. through an angle φ, tx, ty and tz form a translation vector representing a translation of the co-ordinates, and the vector [0, 0, 0, 1] represents the origin of the new co-ordinate system corresponding to the position O of the viewer. 5. The found transformation T is now applied to all the pixels of the rectangular area A, i.e.
to the pixel co-ordinates of the displayed image, according to

[xTR, yTR, zTR, 1]ᵀ = T · [u, v, Ld, 1]ᵀ (5)

Equation (5) represents a transformation of the pixel co-ordinates [u, v, Ld, 1] of the image on the screen in a co-ordinate system having Q as origin into the transformed pixel co-ordinates [xTR, yTR, zTR, 1] of said image in the co-ordinate system having the position O of the viewer as the origin. Both vectors [u, v, Ld, 1] and [xTR, yTR, zTR, 1] lie in the three-dimensional projective space. Equation (5) represents an exact transformation including in general a rotation and a translation such that it compensates for any arbitrary position O of the viewer. More specifically, Ld is fixed and the parameters Rij, tx, ty and tz can be calculated from the original image area A on the screen 10, from the position Q and from the estimated position O of the viewer. Further, having calculated the right side of equation (5), also the coefficients a-i of the perspective transformation on the left side are known. In the following, two simple examples for applying the perspective transformation, i.e. for calculating the coefficients a-i, according to the present invention are provided. In a first example it is assumed that a viewer O stays on the line L connecting the centre of the area A of the screen with the position Q in front of the screen. In that case a rotation of the image to be displayed is obsolete; only a translative transformation is required. With equation (1) and the calculated coefficients a-i it is possible to compensate for an eye-position O of the viewer on the line L closer to the area A than to the position Q, but in that case some of the outer area of the received image is not displayed. Or it is possible to compensate for an eye-position O of the viewer on the line L further away from the area A than to the position Q, but in that case some of the outer area of the screen is not used.
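Steps 2 to 5 above can be sketched with numpy; the helper names are assumptions, and T is built according to the stated convention that the viewer position O, expressed with Q as origin, is mapped onto the origin of the new co-ordinate system:

```python
import numpy as np

def viewer_transform(O, R):
    """Build the Euclidian transformation T of step 4 (a sketch).

    T combines the 3x3 rotation matrix R (coefficients Rij) with a
    translation vector (tx, ty, tz) chosen so that the viewer position
    O, given with Q as origin, lands on the new origin:
    T @ [xO, yO, zO, 1] = [0, 0, 0, 1].
    """
    t = -R @ np.asarray(O, dtype=float)   # translation that cancels R @ O
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_pixel(T, u, v, Ld):
    """Steps 2, 3 and 5: embed the screen pixel (u, v) as [u, v, Ld, 1]
    in the projective space with Q as origin, then apply T (equation (5))."""
    return T @ np.array([u, v, Ld, 1.0])
```

With the identity rotation, T reduces to a pure translation by -O, so a pixel at the screen centre (u = v = 0) at depth Ld is mapped to Ld - zO along the z-axis of the viewer's co-ordinate system.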
In the second example, it is assumed that the viewer O does not stay on the line L but has a distance to the centre of the area A which corresponds to the distance between Q and said centre. Consequently, the translation coefficients tx, ty and tz in equation (5) can be ignored and only a correction for the horizontal rotation needs to be considered. The required perspective transformation follows from the pyramid with O as top and with a line through the position O of the viewer and the centre of the screen as centre line. According to Fig. 2 a rotation around the position Q is required to get the position O of the viewer onto the line L, after the position O is projected on a horizontal plane through Q (thus ignoring the y-co-ordinate of the position O). This gives: with P(s,u,v) being a rational perspective transformation derived from the right side of equation (12) in the same way as equation (8) is derived from equation (6). The variables u and v are two-dimensional Euclidian co-ordinates of positions on the screen. This gives eight scale factors si with i = 1-8. The smallest one should be used as the final or optimal scale factor in equation (12), ensuring that the area of the transformed image completely fits into the area of the screen on which it shall be displayed. The co-ordinates x and y on both sides of equations (13) to (20) represent co-ordinates of the image area A in the two-dimensional Euclidian space. Equations (13) to (20) express conditions or requirements for the position of the corners of a new area Anew of the image after transformation. E.g. in equation (13) it is required that the x-co-ordinate of the corner of the original rectangle A represented by the co-ordinates xleft and ybottom is kept identical after transformation. Returning back to equation (1) and Fig.
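For this second example, the two scalar ingredients the passage describes can be sketched as follows: the horizontal rotation angle obtained by projecting O onto the horizontal plane through Q, and the choice of the smallest of the eight preliminary scale factors so that the transformed image fits completely on the screen (function names are illustrative assumptions):

```python
import math

def horizontal_rotation_angle(O):
    """Project the viewer position O = (x, y, z) onto the horizontal
    plane through Q (drop the y co-ordinate) and return the rotation
    angle about the vertical axis that brings O onto the line L, i.e.
    onto the z-axis of the co-ordinate system defined in step 1."""
    x, _, z = O
    return math.atan2(x, z)

def optimal_scale(preliminary_scales):
    """Select the smallest of the preliminary scale factors si, which
    guarantees the transformed image area fits inside the screen area."""
    return min(preliminary_scales)
```

Taking the minimum is the conservative choice: any larger factor would push at least one corner of the transformed area Anew outside the screen.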
2 it shall be pointed out that an application of the perspective transformation according to equation (1) onto an originally generated image ensures that the pyramid formed by the position Q and the original area A of the image on the screen 10 is similar to the pyramid formed by the transformed area Anew of the image and the estimated position O of the viewer. This is illustrated in Fig. 2. The proposed solution may be implemented in future TVs which are extended with means for special graphic effects, inclusive cheap hardware for carrying out the required calculations according to the invention, with only little additional costs. WE CLAIM : 1. Method for providing an image to be displayed on a screen (10), in particular on a TV screen or on a computer monitor, the method being characterized by the following steps of: estimating the current spatial position O of a viewer in relation to a fixed predetermined position Q representing a viewing point in front of the screen (10) from which the image could be watched without any perspective deformation; and providing the image by applying a variable perspective transformation to an originally generated image in response to said estimated current position O of the viewer, such that the viewer in said position O is enabled to watch the image without a perspective deformation. 2. The method according to claim 1, wherein the transformation is carried out according to the following formula:

x = (a·u + b·v + c) / (g·u + h·v + i), y = (d·u + e·v + f) / (g·u + h·v + i)

wherein: u,v are the co-ordinates of a pixel of the original image before transformation; x,y are the co-ordinates of the pixel of the provided image after the transformation; and a,b,c,d,e,f,g,h and i are variable coefficients defining the transformation and being adapted in response to the estimated current position O of the viewer. 3. The method according to one of the preceding claims, wherein the transformation comprises a rotation and/or a translation of the co-ordinates of the at least one pixel of the image.
wherein: Ld is the fixed distance between the position Q and the centre of an area A of the image when being displayed on the screen; Rij with i = 1-3 and j = 1-3 are the coefficients of a rotation matrix for rotating the pixels; tx, ty and tz are the coefficients of a translation vector; and wherein the rotation matrix and the translation vector are calculated according to the estimated current spatial position O of the viewer in relation to the position Q. 5. The method according to claim 4, wherein in the case that the translation is ignored and that rotation is considered to take place only in a plane defined by the positions O, Q and a line L connecting Q and being orthogonal to the area A of the displayed image on the screen, the variable coefficients a,b,c,d,e,f,g,h and i are calculated according to: 6. The method according to claim 5, wherein the step of calculating the variable coefficients a,b,c,d,e,f,g,h and i comprises a scaling of the coefficients with a scale factor s according to: wherein the x,y vectors on the right side of each of said linear systems represent the fictive co-ordinates of a corner of the original image on the screen, wherein the x,y vectors on the left side respectively represent the co-ordinates of a corner of the provided image actually displayed on the screen, and wherein the co-ordinates xright, xleft, ybottom and ytop 7. selecting the minimal one of said calculated preliminary scale factors si as the optimal scale factor. 8. The method according to one of the preceding claims, wherein the step of estimating the current spatial position O of a viewer comprises tracking the head or the eye of the viewer. 9. The method according to one of the claims 1 to 8, wherein the step of estimating the current spatial position O of a viewer comprises estimating the current position of a remote control. 10.
Apparatus (1) for providing an image to be displayed on a screen (10), in particular on a TV screen or on a computer monitor, the apparatus being characterized by an estimation unit (20) for outputting a positioning signal representing an estimation of a current spatial position O of a viewer in front of the screen (10) in relation to a fixed predetermined position Q representing a viewing point from which the image on the screen (10) can be watched without any perspective deformation; and a correcting unit (30) for applying a variable perspective transformation to an originally generated image in response to said positioning signal such that the viewer in the position O is enabled to watch the image without perspective deformation. 11. TV set comprising the apparatus (1) as claimed in claim 10. 12. Computer set comprising the apparatus (1) as claimed in claim 10. |
in-pct-2002-0782-che abstract.jpg
in-pct-2002-0782-che abstract.pdf
in-pct-2002-0782-che claims-duplicate.pdf
in-pct-2002-0782-che claims.pdf
in-pct-2002-0782-che correspondence-others.pdf
in-pct-2002-0782-che correspondence-po.pdf
in-pct-2002-0782-che description(complete)-duplicate.pdf
in-pct-2002-0782-che description(complete).pdf
in-pct-2002-0782-che drawings.pdf
in-pct-2002-0782-che form-1.pdf
in-pct-2002-0782-che form-18.pdf
in-pct-2002-0782-che form-26.pdf
in-pct-2002-0782-che form-3.pdf
in-pct-2002-0782-che form-5.pdf
in-pct-2002-0782-che others.pdf
in-pct-2002-0782-che petition.pdf
| Patent Number | 218910 |
|---|---|
| Indian Patent Application Number | IN/PCT/2002/782/CHE |
| PG Journal Number | 23/2008 |
| Publication Date | 06-Jun-2008 |
| Grant Date | 16-Apr-2008 |
| Date of Filing | 24-May-2002 |
| Name of Patentee | KONINKLIJKE PHILIPS ELECTRONICS N.V |
| Applicant Address | |
| Inventors | |
| PCT International Classification Number | G06T15/20 |
| PCT International Application Number | PCT/EP2001/010659 |
| PCT International Filing date | 2001-09-14 |
| PCT Conventions | |