Title of Invention

A METHOD OF COMBINED EXCHANGE OF IMAGE DATA

Abstract

A method of combined exchange of image data and further data being related to the image data, the image data being represented by a first two-dimensional matrix of image data elements and the further data being represented by a second two-dimensional matrix of further data elements is disclosed. The method comprises combining the first two-dimensional matrix and the second two-dimensional matrix into a combined two-dimensional matrix of data elements.
Full Text

Combined exchange of image and related data
The invention relates to methods of combined exchange of image data and further data being related to the image data, the image data being represented by a first two-dimensional matrix of image data elements and the further data being represented by a second two-dimensional matrix of further data elements.
The invention further relates to a transmitting unit for combined exchange of image data and further data being related to the image data.
The invention further relates to an image processing apparatus comprising such a transmitting unit.
The invention further relates to a receiving unit for combined exchange of image data and further data being related to the image data.
The invention further relates to a multi-view display device comprising such a receiving unit.
Since the introduction of display devices, a realistic 3-D display device has been a dream for many. Many principles that should lead to such a display device have been investigated. Some principles try to create a realistic 3-D object in a certain volume. For instance, in the display device as disclosed in the article "Solid-state Multi-planar Volumetric Display", by A. Sullivan in proceedings of SID'03, 1531-1533, 2003, visual data is displayed at an array of planes by means of a fast projector. Each plane is a switchable diffuser. If the number of planes is sufficiently high, the human brain integrates the picture and observes a realistic 3-D object. This principle allows a viewer to look around the object within some extent. In this display device all objects are (semi-)transparent.
Many others try to create a 3-D display device based on binocular disparity only. In these systems the left and right eye of the viewer perceive different images and consequently, the viewer perceives a 3-D image. An overview of these concepts can be found in the book "Stereo Computer Graphics and Other True 3-D Technologies", by D.F. McAllister (Ed.), Princeton University Press, 1993. A first principle uses shutter glasses in combination with for instance a CRT. If the odd frame is displayed, light is blocked for the left eye and if the even frame is displayed, light is blocked for the right eye.
Display devices that show 3-D without the need for additional appliances are called auto-stereoscopic display devices.
A first glasses-free display device comprises a barrier to create cones of light aimed at the left and right eye of the viewer. The cones correspond for instance to the odd and even sub-pixel columns. By addressing these columns with the appropriate information, the viewer obtains different images in his left and right eye if he is positioned at the correct spot, and is able to perceive a 3-D picture.
A second glasses-free display device comprises an array of lenses to image the light of odd and even sub-pixel columns to the viewer's left and right eye.
The disadvantage of the above mentioned glasses-free display devices is that the viewer has to remain at a fixed position. To guide the viewer, indicators have been proposed to show the viewer that he is at the right position. See for instance United States patent US5986804 where a barrier plate is combined with a red and a green LED. In case the viewer is well positioned he sees a green light, and a red light otherwise.
To relieve the viewer of sitting at a fixed position, multi-view auto-stereoscopic display devices have been proposed. See for instance United States patents US6064424 and US20000912. In the display devices as disclosed in US6064424 and US20000912 a slanted lenticular is used, whereby the width of the lenticular is larger than two sub-pixels. In this way there are several images next to each other and the viewer has some freedom to move to the left and right.
In order to generate a 3-D impression on a multi-view display device, images from different virtual viewpoints have to be rendered. This requires either multiple input views or some 3-D or depth information to be present. This depth information can be recorded, generated from multi-view camera systems or generated from conventional 2-D video material. For generating depth information from 2-D video several types of depth cues can be applied, such as structure from motion, focus information, geometric shapes and dynamic occlusion. The aim is to generate a dense depth map, i.e. per pixel a depth value. This depth map is subsequently used in rendering a multi-view image to give the viewer a depth impression. In the article "Synthesis of multi viewpoint images at non-intermediate positions" by P.A. Redert, E.A. Hendriks, and J. Biemond, in Proceedings of International Conference on Acoustics, Speech, and Signal Processing, Vol. IV, ISBN 0-8186-7919-0, pages 2749-2752, IEEE Computer Society, Los Alamitos, California, 1997 a method of

extracting depth information and of rendering a multi-view image on basis of the input image and the depth map are disclosed. The multi-view image is a set of images, to be displayed by a multi-view display device to create a 3-D impression. Typically, the images of the set are created on basis of an input image. Creating one of these images is done by shifting the pixels of the input image with respective amounts of shift. These amounts of shift are called disparities. So, typically for each pixel there is a corresponding disparity value, together forming a disparity map. Disparity values and depth values are typically inversely related, i.e.:

S = a / D

with S being disparity, a being a constant value and D being depth. Creating a depth map is considered to be equivalent with creating a disparity map. In this specification disparity values and depth values are both covered by the term depth related data elements.
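As a minimal illustration of this inverse relation, the conversion from a depth map to a disparity map can be sketched as follows. This is only a sketch; the value of the constant a is an arbitrary example here, whereas in practice it follows from the camera and display geometry.

```python
def depth_to_disparity(depth_map, a=100.0):
    """Convert a depth map to a disparity map using the inverse
    relation S = a / D described above. The constant a is purely
    illustrative; real systems derive it from viewing geometry."""
    return [[a / d for d in row] for row in depth_map]

# A 2x2 depth map: nearby pixels get large disparities.
depth = [[10.0, 20.0], [25.0, 50.0]]
disparity = depth_to_disparity(depth)
```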
Both the video data, i.e. the image signal, and the corresponding depth data have to be exchanged between various image processing units and eventually to a display device, in particular a multi-view display device.
Existing video connections are designed to exchange sequences of images. Typically the images are represented by two-dimensional matrices of pixel values at both sides of the connection, i.e. the transmitter and receiver. The pixel values correspond to luminance and/or color values. Both transmitter and receiver have knowledge about the semantics of the data, i.e. they share the same information model. Typically, the connection between the transmitter and receiver is adapted to the information model. An example of this exchange of data is an RGB link. The image data in the context of transmitter and receiver is stored and processed in a data format comprising triplets of values: R (Red), G (Green) and B (Blue) together forming the different pixel values. The exchange of the image data is performed by means of three correlated but separated streams of data. These data streams are transferred by means of three channels. A first channel exchanges the Red values, i.e. sequences of bits representing the Red values, the second channel exchanges the Blue values and the third channel exchanges the Green values. Although the triplets of values are typically exchanged in series, the information model is such that a predetermined number of triplets together form an image, meaning that the triplets have respective spatial coordinates. These spatial coordinates correspond to the position of the triplets in the two-dimensional matrix representing the image.
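The separation of an image into three correlated streams, as described for the RGB link above, can be sketched as follows. This is a simplified model only: a real link serializes the channel bits together with timing information, while here the spatial coordinates are implicit in the order of the values.

```python
def split_rgb(image):
    """Split a two-dimensional matrix of (R, G, B) triplets into three
    correlated streams, one per channel, as on an RGB link. Each stream
    carries the values in row-major order, so a triplet's position in
    the matrix is recoverable from its index in the stream."""
    red   = [pixel[0] for row in image for pixel in row]
    green = [pixel[1] for row in image for pixel in row]
    blue  = [pixel[2] for row in image for pixel in row]
    return red, green, blue

# A 1x2 image of two pixels.
streams = split_rgb([[(1, 2, 3), (4, 5, 6)]])
```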

Examples of standards, which are based on such an RGB link, are DVI (Digital Visual Interface), HDMI (High-Definition Multimedia Interface) and LVDS (low-voltage differential signaling). However in the case of 3-D, along with the video data, the depth related data has to be exchanged too.
It is an object of the invention to provide a method of the kind described in the opening paragraph, which is adapted to existing video interfaces.
This object of the invention is achieved in that the method comprises combining the first two-dimensional matrix and the second two-dimensional matrix into a combined two-dimensional matrix of data elements. The invention is based on the assumption that the information model at the transmitting and receiving side of a connection is shared. The image data elements of the first two-dimensional matrix and the further data elements are combined into a larger combined two-dimensional matrix of data elements in order to exchange the combined two-dimensional matrix over a connection which is arranged to exchange data elements which have a mutual spatial correlation. For the connection, i.e. the transmission channel, the semantics of the various data elements is not relevant.
A further advantage of combining the data elements of multiple two-dimensional matrices into a larger combined two-dimensional matrix is that many types of known image processing operations may be performed by standard processing components, e.g. a compression unit and/or a decompression unit.
The further data may be one of the following:
Depth related data, meaning either depth values or disparity values, as explained above;
Further image data, meaning that the combined two-dimensional matrix comprises pixel values of multiple images; and
De-occlusion data; see for explanation of this type of data the article "High-Quality Images from 2.5D Video", by R. P. Berretty and F. Ernst, in Proceedings of EUROGRAPHICS '03, September 2003 and the article "High-quality video view interpolation using a layered representation", by C. Lawrence Zitnick, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski, in Proceedings of Siggraph 2004.
The combined two-dimensional matrix may be solely based on the first and second two-dimensional matrix. But preferably the combined two-dimensional matrix also comprises data corresponding to more than two two-dimensional matrices.

An embodiment of the method according to the invention further comprises combining second image data being represented by a third two-dimensional matrix into the combined two-dimensional matrix.
Another embodiment of the method according to the invention further comprises combining second further data being represented by a fourth two-dimensional matrix into the combined two-dimensional matrix.
Basically, the combined two-dimensional matrix comprises data elements representing image, depth, disparity or de-occlusion information. The input data elements, i.e. the elements of the first, the second, the optional third and the optional fourth two-dimensional matrix, are copied to output data elements to be placed in the combined two-dimensional matrix. The location in the combined two-dimensional matrix may be arbitrarily chosen as long as it matches with the shared information model. However it is preferred to place the output data elements in the combined two-dimensional matrix such that output data elements corresponding to respective input data elements, together forming a logical entity in one of the matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix, are placed in a similar configuration. For example:
A block of input data elements is copied to form a block of output data elements in the combined two-dimensional matrix;
A row of input data elements is copied to form a row of output data elements in the combined two-dimensional matrix; or
A column of input data elements is copied to form a column of output data elements in the combined two-dimensional matrix.
A "checkerboard pattern" is applied. That means that four input data elements fiom four i!^ut two-dimensional matrices are combined to blocks.
In an embodiment of the method according to the invention, the combined two-dimensional matrix is created by putting two matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix, adjacent to each other in horizontal direction and two of the set of the two-dimensional matrices adjacent to each other in vertical direction.
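This embodiment can be sketched as follows; a minimal Python illustration assuming four equally sized input matrices represented as lists of rows, where the element values stand for arbitrary image, depth or de-occlusion data.

```python
def combine_quadrants(m_a, m_b, m_c, m_d):
    """Combine four H/2 x V/2 matrices into one H x V combined matrix
    by placing two matrices adjacent in horizontal direction and two
    in vertical direction: A next to B on top, C next to D below."""
    top    = [row_a + row_b for row_a, row_b in zip(m_a, m_b)]
    bottom = [row_c + row_d for row_c, row_d in zip(m_c, m_d)]
    return top + bottom

# Four 1x1 matrices combine into one 2x2 matrix.
combined = combine_quadrants([[1]], [[2]], [[3]], [[4]])
```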
In another embodiment of the method according to the invention, the rows of the combined two-dimensional matrix are filled by interleaving rows of the matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix. Preferably, a first one of the rows of the combined two-dimensional matrix comprises image data elements of the first row of the first two-dimensional matrix and further data elements of the first row of the second two-dimensional matrix. An advantage of this configuration is easy data access. For instance a process of rendering, for which both image data and depth related data is needed, may start as soon as only a portion of the combined two-dimensional matrix is exchanged. That means it is not necessary to wait until all data elements have been received before the rendering starts.
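The row interleaving described above can be sketched as follows, again assuming four equally sized matrices as lists of rows. Because row i of every input matrix arrives within two consecutive combined rows, a receiver has matching image and depth data available early in the transfer.

```python
def interleave_rows(m_a, m_b, m_c, m_d):
    """Fill the combined matrix by interleaving rows: row i of the
    first and second matrix form one combined row, and row i of the
    third and fourth matrix form the next combined row."""
    combined = []
    for row_a, row_b, row_c, row_d in zip(m_a, m_b, m_c, m_d):
        combined.append(row_a + row_b)
        combined.append(row_c + row_d)
    return combined

# Two-row inputs interleave into a four-row combined matrix.
combined = interleave_rows([[1], [5]], [[2], [6]], [[3], [7]], [[4], [8]])
```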
An embodiment of the method according to the invention further comprises writing meta-data into the combined two-dimensional matrix. With meta-data, also called a header, is meant descriptive data of the combined two-dimensional matrix. For instance, the name, the creation date, the horizontal size, the vertical size and the number of bits per output data element of the combined two-dimensional matrix are represented by the meta-data.
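Writing such meta-data into the matrix can be sketched as below. The attribute layout used here (width, height, bits per element, then the character codes of the name) is entirely hypothetical and serves only to illustrate the idea; the attributes a header preferably comprises are listed in Table 2.

```python
def write_header(combined, name, bits_per_element):
    """Write illustrative meta-data describing the combined matrix
    into the start of its first row. The layout is hypothetical:
    horizontal size, vertical size, bits per element, then the name
    as character codes. Overwrites the data elements it replaces."""
    header = [len(combined[0]), len(combined), bits_per_element]
    header += [ord(ch) for ch in name]
    combined[0][:len(header)] = header
    return combined

# A 1x16 combined matrix receives a header for a matrix named "im".
combined = write_header([[0] * 16], "im", 10)
```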
Exchange of information comprises sending and receiving. The method as described and discussed above is related to the sending part of the exchange of data. It is another object of the invention to provide a corresponding method which is related to the receiving part of the exchange of data and which is also adapted to existing video interfaces.
This object of the invention is achieved in that the corresponding method comprises extracting the first two-dimensional matrix and the second two-dimensional matrix from a combined two-dimensional matrix of data elements.
It is a further object of the invention to provide a transmitting unit of the kind described in the opening paragraph, which is adapted to existing video interfaces.
This object of the invention is achieved in that the transmitting unit comprises combining means for combining the first two-dimensional matrix and the second two-dimensional matrix into a combined two-dimensional matrix of data elements.
It is a further object of the invention to provide a receiving unit of the kind described in the opening paragraph, which is adapted to existing video interfaces.
This object of the invention is achieved in that the receiving unit comprises extracting means for extracting the first two-dimensional matrix and the second two-dimensional matrix from a combined two-dimensional matrix of data elements.
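As a sketch of such extracting means, assuming the quadrant layout in which four equally sized matrices occupy the four corners of the combined matrix, the inverse of the combining step can be written as follows; illustrative only.

```python
def extract_quadrants(combined):
    """Extract four H/2 x V/2 matrices from an H x V combined matrix
    built by placing them in quadrants: A top-left, B top-right,
    C bottom-left, D bottom-right."""
    half_v = len(combined) // 2
    half_h = len(combined[0]) // 2
    m_a = [row[:half_h] for row in combined[:half_v]]
    m_b = [row[half_h:] for row in combined[:half_v]]
    m_c = [row[:half_h] for row in combined[half_v:]]
    m_d = [row[half_h:] for row in combined[half_v:]]
    return m_a, m_b, m_c, m_d

# Recover the four 1x1 matrices from a 2x2 combined matrix.
matrices = extract_quadrants([[1, 2], [3, 4]])
```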
It is a further object of the invention to provide an image processing apparatus of the kind described in the opening paragraph, which is adapted to existing video interfaces.
This object of the invention is achieved in that the image processing apparatus comprises the transmitting unit as described above.
It is a further object of the invention to provide a multi-view display device of the kind described in the opening paragraph which is adapted to existing video interfaces.

This object of the invention is achieved in that the multi-view display device comprises the receiving unit as described above.
Modifications of the transmitting unit, the receiving unit, and variations thereof may correspond to modifications and variations thereof of the image processing apparatus, the multi-view display device and the methods being described.
These and other aspects of the transmitting unit, the receiving unit, the image processing apparatus, the multi-view display device and the methods according to the invention will become apparent from and will be elucidated with respect to the implementations and embodiments described hereinafter and with reference to the accompanying drawings, wherein:
Fig. 1 schematically shows a first processing device connected to a second processing device;
Fig. 2A schematically shows a combined matrix based on four input matrices being disposed adjacent to each other;
Fig. 2B schematically shows the combined matrix of Fig. 2A comprising a header;
Fig. 3A schematically shows a combined matrix based on four input matrices whereby the rows of the input matrices are interleaved to form the combined matrix;
Fig. 3B schematically shows the combined matrix of Fig. 3A comprising a header; and
Fig. 4 schematically shows an image processing apparatus comprising a multi-view display device, both according to the invention.
Same reference numerals are used to denote similar parts throughout the Figures.
Fig. 1 schematically shows a first processing device 100 connected to a second processing device 102. The first processing device 100 and the second processing device 102 may be integrated circuits (ICs) like an image processor and a display driver, respectively. Alternatively, the first processing device 100 is a more complex apparatus like a PC and the second processing device 102 is a multi-view display device, e.g. a monitor. The first 100 and second 102 processing device are connected by means of physical connections 116. The

physical connections are e.g. based on twisted-pair or on twisted-pair plus ground for serial transport of data.
On top of the physical connections logical connections are realized. Each logical connection corresponds to a channel for transport of data between the first processing device 100 and the second processing device 102. For instance, there are three logical connections for transport of data, e.g. DVI. The fourth logical connection, for exchange of timing information, i.e. the clock signal, is not taken into account.
The data format being applied within the context of the second processing device 102 is equal to the data format being applied within the context of the first processing device 100.
In order to exchange image data in combination with corresponding depth data, the first processing device 100 comprises a transmitting unit 104 according to the invention and the second processing device 102 comprises a receiving unit 106 according to the invention. The combination of the transmitting unit 104, the connection between the first 100 and second 102 processing device and the receiving unit 106 makes data exchange between the first 100 and second 102 processing device possible.
The transmitting unit 104 comprises a number of input interfaces 108-114, of which some are optional. The first input interface 108 is for providing a first two-dimensional matrix. The second input interface 110 is for providing a second two-dimensional matrix. The third input interface 112 is for providing a third two-dimensional matrix. The fourth input interface 114 is for providing a fourth two-dimensional matrix. The transmitting unit 104 comprises a processor for combining input data elements of at least two matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix, into the combined two-dimensional matrix.
The combined two-dimensional matrix may be temporarily stored within the transmitting unit 104 or the first processing device 100. It may also be that the data elements, which together form the combined two-dimensional matrix, are streamed to a receiving unit 106, synchronously with the combining of input data elements.
Preferably, the transmitting unit 104 comprises a serializer. Typically, the data elements are represented with a number of bits, which ranges from 8 to 12. The data on the physical connection is preferably exchanged by means of serial transport. For that reason the bits representing the consecutive data elements are put in a time-sequential series.
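The serializing step can be sketched as follows; a simplified illustration that emits the bits of consecutive data elements most significant bit first, using a 10-bit element width as one example from the 8-to-12-bit range mentioned above.

```python
def serialize(elements, bits=10):
    """Put the bits of consecutive data elements into one
    time-sequential series, most significant bit first. The element
    width of 10 bits is only an example value."""
    stream = []
    for element in elements:
        stream += [(element >> i) & 1 for i in range(bits - 1, -1, -1)]
    return stream

# The value 5 with a 4-bit width serializes to the bits 0, 1, 0, 1.
bit_stream = serialize([5], bits=4)
```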

In connection with Figs. 2A, 2B, 3A and 3B examples of data formats of the combined two-dimensional matrix are disclosed which the transmitting unit 104 according to the invention is arranged to provide.
The processor for combining and the serializer may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally an application-specific integrated circuit provides the disclosed functionality.
The receiving unit 106 comprises a number of output interfaces 116-122, of which some are optional. The first output interface 116 is for providing a first two-dimensional matrix. The second output interface 118 is for providing a second two-dimensional matrix. The third output interface 120 is for providing a third two-dimensional matrix. The fourth output interface 122 is for providing a fourth two-dimensional matrix.
The receiving unit 106 comprises a processor for extracting input data elements corresponding to at least two matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix, from the combined two-dimensional matrix of output data elements. In connection with Figs. 2A, 2B, 3A and 3B examples of data formats of the combined two-dimensional matrix are disclosed which the receiving unit 106 according to the invention is arranged to receive and extract.
Normally, these functions are performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally an application-specific integrated circuit provides the disclosed functionality.
Fig. 2A schematically shows a combined two-dimensional matrix 200 based on a number of matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix. Output data elements which are based on input data elements of the first two-dimensional matrix are indicated with the character A. Output data elements which are based on input data elements of the second two-dimensional matrix are indicated with the character B. Output data elements which are based on input data elements of the third two-dimensional matrix are indicated with the character C.

Output data elements which are based on input data elements of the fourth two-dimensional matrix are indicated with the character D.
The combined two-dimensional matrix has a horizontal size which is equal to H, meaning that the number of output data elements being adjacent in horizontal direction is equal to H. The combined two-dimensional matrix has a vertical size which is equal to V, meaning that the number of output data elements being adjacent in vertical direction is equal to V. Each of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix, has a horizontal size which is equal to H/2 and a vertical size which is equal to V/2.
In Fig. 2A it is indicated that all input data elements of the first two-dimensional matrix are mapped to a sub-matrix 202 of the combined two-dimensional matrix 200. In other words, output data elements which are based on input data elements of the first two-dimensional matrix logically form one block of output data elements.
In Fig. 2A it is indicated that all input data elements of the second two-dimensional matrix are mapped to a sub-matrix 204 of the combined two-dimensional matrix 200. In other words, output data elements which are based on input data elements of the second two-dimensional matrix logically form one block of output data elements.
In Fig. 2A it is indicated that all input data elements of the third two-dimensional matrix are mapped to a sub-matrix 206 of the combined two-dimensional matrix 200. In other words, output data elements which are based on input data elements of the third two-dimensional matrix logically form one block of output data elements.
In Fig. 2A it is indicated that all input data elements of the fourth two-dimensional matrix are mapped to a sub-matrix 208 of the combined two-dimensional matrix 200. In other words, output data elements which are based on input data elements of the fourth two-dimensional matrix logically form one block of output data elements.
The different rows in Table 1 below are examples of possible sources for the output data elements of the combined two-dimensional matrix. In other words, each row indicates the different types of data which are located in the different two-dimensional matrices of the set of two-dimensional matrices. For instance, the second row of Table 1 specifies that the first two-dimensional matrix comprises image data, the second two-dimensional matrix comprises depth data, the third two-dimensional matrix comprises occlusion data and the fourth two-dimensional matrix is empty.
Table 1: Examples of possible content of the combined two-dimensional matrix


Fig. 2B schematically shows the combined two-dimensional matrix 200 of Fig. 2A comprising a header 210. Preferably, the data elements representing the header are included in the combined two-dimensional matrix 200. That may result in overwriting other data elements, e.g. representing image or depth related data. However, preferably the header is stored in the combined two-dimensional matrix without overwriting other data elements. Alternatively, the header information is stored in a number of least significant bits, while the corresponding most significant bits are used to store other data elements, e.g. representing image or depth related data. Table 2 below specifies a number of attributes which preferably are comprised in the header.
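The alternative of storing header information in least significant bits can be sketched as follows; a minimal illustration that replaces only the single least significant bit of each data element, so the most significant bits keep carrying image or depth related data.

```python
def embed_in_lsbs(combined_row, header_bits):
    """Store one header bit in the least significant bit of each
    data element of a row, leaving the most significant bits, which
    carry image or depth related data, untouched."""
    out = []
    for value, bit in zip(combined_row, header_bits):
        out.append((value & ~1) | bit)  # replace the LSB with a header bit
    return out

# Elements 4 and 5 carry the header bits 1 and 0 in their LSBs.
row_with_header = embed_in_lsbs([4, 5], [1, 0])
```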




Optionally, the type image has several subtypes, e.g. left image and right image. Optionally depth-rendering parameters are included in the header, e.g.:
a range parameter corresponding to the total range of depth, calculated from the maximum depth behind the screen to the maximum depth in front of the screen;
an offset parameter corresponding to the offset of the depth range to the display device;
a front of screen parameter corresponding to the maximum depth in front of the screen;
a behind the screen parameter corresponding to the maximum depth behind the screen;
the position of the viewer relative to the screen.
Fig. 3A schematically shows a combined two-dimensional matrix based on four input matrices whereby the rows of the input matrices are interleaved to form the combined two-dimensional matrix 300.
Output data elements which are based on input data elements of the first two-dimensional matrix are indicated with the character A. Output data elements which are based on input data elements of the second two-dimensional matrix are indicated with the character B. Output data elements which are based on input data elements of the third two-dimensional matrix are indicated with the character C. Output data elements which are based on input data elements of the fourth two-dimensional matrix are indicated with the character D.
The combined two-dimensional matrix has a horizontal size which is equal to H, meaning that the number of output data elements being adjacent in horizontal direction is equal to H. The combined two-dimensional matrix has a vertical size which is equal to V, meaning that the number of output data elements being adjacent in vertical direction is equal to V. Each of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix, has a horizontal size which is equal to H/2 and a vertical size which is equal to V/2.
The rows 0-6 of the combined two-dimensional matrix 300 are filled by interleaving rows of the matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix. It can be seen that the first row 0 of the combined two-dimensional matrix 300 comprises output data elements which are based on input data elements of the first two-dimensional matrix and of the second two-dimensional matrix. See the indications A and B. The first half of the first row 0 comprises output data elements corresponding to the first two-dimensional matrix and the second half of

the first row 0 comprises output data elements corresponding to the second two-dimensional matrix.
It can be seen that the second row 1 of the combined two-dimensional matrix 300 comprises output data elements which are based on input data elements of the third two-dimensional matrix and of the fourth two-dimensional matrix. See the indications C and D. The first half of the second row 1 comprises output data elements corresponding to the third two-dimensional matrix and the second half of the second row 1 comprises output data elements corresponding to the fourth two-dimensional matrix.
Table 1 is also applicable for the combined two-dimensional matrix as depicted in Fig. 3A.
Fig. 3B schematically shows the combined two-dimensional matrix 300 of Fig. 3A comprising a header. Table 2 is also applicable for the combined two-dimensional matrix as depicted in Fig. 3B.
It should be noted that alternative ways of interleaving of data elements are also possible. For instance a number of data elements from the respective input two-dimensional matrices can be combined into groups. A number of alternatives are provided below, whereby the characters A, B, C and D have the meaning as explained above.
First alternative:
ABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD
ABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD
ABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD
Second alternative:
ABABABABABABABABABABABABABABABABABABABABABABABAB
CDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCD
ABABABABABABABABABABABABABABABABABABABABABABABAB
CDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCD
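The element layouts of these alternatives can be generated as follows; a small sketch in which the character groups passed in are purely illustrative labels for the four input matrices.

```python
def pattern_rows(width, height, groups=("AB", "CD")):
    """Generate the layout of the second alternative above: rows
    alternating ABAB... and CDCD... Passing groups=("ABCD",) yields
    the first alternative instead. Illustrative only."""
    rows = []
    for y in range(height):
        group = groups[y % len(groups)]
        rows.append((group * (width // len(group) + 1))[:width])
    return rows

# Second alternative for an 8-element-wide, 2-row layout.
layout = pattern_rows(8, 2)
```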
Fig. 4 schematically shows an image processing apparatus 400 comprising a multi-view display device 406, both according to the invention. The image processing apparatus 400 comprises:
A receiver 402 for receiving a video signal representing input images;

An image analysis unit 404 for extracting depth related data from the input images; and
A multi-view display device 406 for displaying multi-view images, which are rendered by the multi-view display device on basis of the provided image data and related depth data.
The image data and related depth data are exchanged between the image analysis unit 404 and the multi-view display device 406, by means of a combined signal which represents the combined two-dimensional matrix as described in connection with Figs. 2A, 2B, 3A and 3B. The image analysis unit 404 comprises a transmitting unit 104 as described in connection with Fig. 1. The multi-view display device 406 comprises a receiving unit 106 as described in connection with Fig. 1.
The video signal may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or Digital Versatile Disk (DVD). The signal is provided at the input connector 410. The image processing apparatus 400 might e.g. be a TV. Alternatively the image processing apparatus 400 does not comprise the optional display device but provides the output images to an apparatus that does comprise a display device 406. Then the image processing apparatus 400 might be e.g. a set top box, a satellite tuner, a VCR player, a DVD player or recorder. Optionally the image processing apparatus 400 comprises storage means, like a hard disk or means for storage on removable media, e.g. optical disks. The image processing apparatus 400 might also be a system being applied by a film-studio or broadcaster.
The multi-view display device 406 comprises a rendering unit 408, which is arranged to generate a sequence of multi-view images on the basis of the received combined signal. The rendering unit 408 is arranged to provide (at least) two correlated streams of video images to the multi-view display device, which is arranged to visualize a first series of views on the basis of the first one of the correlated streams of video images and to visualize a second series of views on the basis of the second one of the correlated streams of video images. If a user, i.e. viewer, observes the first series of views with his left eye and the second series of views with his right eye, he notices a 3-D impression. It might be that the first one of the correlated streams of video images corresponds to the sequence of video images as received by means of the combined signal and that the second one of the correlated streams of video images is rendered by appropriate shifting on the basis of the provided depth data. Preferably, both streams of video images are rendered on the basis of the sequence of video images as received.
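The "appropriate shifting on the basis of the provided depth data" mentioned above is commonly realized as a per-pixel horizontal displacement (parallax) proportional to the depth value. The sketch below illustrates that general idea only; the depth range, scaling, rounding, and lack of occlusion handling are all simplifying assumptions and not the rendering method of the cited article.

```python
import numpy as np

def render_shifted_view(image, depth, max_disparity=4):
    """Sketch: build a second view by shifting each pixel horizontally
    by an amount proportional to its depth value (assumed in [0, 1])."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            shift = int(round(depth[y, x] * max_disparity))
            nx = x + shift
            if 0 <= nx < w:
                out[y, nx] = image[y, x]  # last writer wins; no occlusion handling
    return out

image = np.arange(16).reshape(4, 4)
depth = np.zeros((4, 4))   # zero depth everywhere -> no shift
assert np.array_equal(render_shifted_view(image, depth), image)
```

A real renderer would also fill the disocclusion holes left by shifted pixels and resolve occlusions by depth ordering; both are omitted here for brevity.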

In the article "Synthesis of multi viewpoint images at non-intermediate positions" by P.A. Redert, E.A. Hendriks, and J. Biemond, in Proceedings of International Conference on Acoustics, Speech, and Signal Processing, Vol. IV, ISBN 0-8186-7919-0, pages 2749-2752, IEEE Computer Society, Los Alamitos, California, 1997, a method of extracting depth information and of rendering a multi-view image on the basis of the input image and the depth map are disclosed. The image analysis unit 404 is an implementation of the disclosed method of extracting depth information. The rendering unit 408 is an implementation of the method of rendering disclosed in the article.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware or software. The usage of the words first, second and third, etcetera does not indicate any ordering. These words are to be interpreted as names.












CLAIMS:
1. A method of combined exchange of image data and further data being related to the image data, the image data being represented by a first two-dimensional matrix of image data elements and the further data being represented by a second two-dimensional matrix of further data elements, the method comprising combining the first two-dimensional matrix and the second two-dimensional matrix into a combined two-dimensional matrix of data elements.
2. A method as claimed in claim 1, whereby the further data is depth related data.
3. A method as claimed in claim 1 or 2, whereby the further data is further image data.
4. A method as claimed in any of the claims above, further comprising combining second image data being represented by a third two-dimensional matrix into the combined two-dimensional matrix.
5. A method as claimed in any of the claims above, further comprising combining second further data being represented by a fourth two-dimensional matrix into the combined two-dimensional matrix.
6. A method as claimed in claim 5, whereby the combined two-dimensional matrix is created by putting two matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix adjacent to each other in horizontal direction and two of the set of the two-dimensional matrices adjacent to each other in vertical direction.
7. A method as claimed in any of the claims 1-5, whereby the rows of the combined two-dimensional matrix are filled by interleaving rows of the matrices of the set of the two-dimensional matrices, comprising the first, the second, the third and the fourth two-dimensional matrix.
8. A method as claimed in claim 7, whereby a first one of the rows of the combined two-dimensional matrix comprises image data elements of the first row of the first two-dimensional matrix and further data elements of the first row of the second two-dimensional matrix.
9. A method as claimed in any of the claims above, comprising writing meta-data into the combined two-dimensional matrix.
10. A method of combined exchange of image data and further data being related to the image data, the image data being represented by a first two-dimensional matrix of image data elements and the further data being represented by a second two-dimensional matrix of further data elements, the method comprising extracting the first two-dimensional matrix and the second two-dimensional matrix from a combined two-dimensional matrix of data elements.
11. A transmitting unit for combined exchange of image data and further data being related to the image data, the image data being represented by a first two-dimensional matrix of image data elements and the further data being represented by a second two-dimensional matrix of further data elements, the transmitting unit comprising combining means for combining the first two-dimensional matrix and the second two-dimensional matrix into a combined two-dimensional matrix of data elements.
12. A receiving unit for combined exchange of image data and further data being related to the image data, the image data being represented by a first two-dimensional matrix of image data elements and the further data being represented by a second two-dimensional matrix of further data elements, the receiving unit comprising extracting means for extracting the first two-dimensional matrix and the second two-dimensional matrix from a combined two-dimensional matrix of data elements.
13. An image processing apparatus comprising the transmitting unit as claimed in claim 11.

14. An image processing apparatus comprising the receiving unit as claimed in claim 12.
15. An image processing apparatus as claimed in claim 14, comprising a display device for displaying images being rendered on the basis of the combined two-dimensional matrix.


Documents:

http://ipindiaonline.gov.in/patentsearch/GrantedSearch/viewdoc.aspx?id=4JyLZ3NeVxKkAlpXfSeBww==&loc=egcICQiyoj82NGgGrC5ChA==


Patent Number 269672
Indian Patent Application Number 5952/CHENP/2007
PG Journal Number 45/2015
Publication Date 06-Nov-2015
Grant Date 30-Oct-2015
Date of Filing 24-Dec-2007
Name of Patentee KONINKLIJKE PHILIPS ELECTRONICS N. V
Applicant Address GROENEWOUDSEWEG 1, NL-5621 BA EINDHOVEN.
Inventors:
# Inventor's Name Inventor's Address
1 DE JONG, PIETER, W, T C/O PROF HOLSTLAAN 6, NL-5656 AA, EINDHOVEN.
2 THEUNE, PATRIC C/O PROF HOLSTLAAN 6, NL-5656 AA, EINDHOVEN, NETHERLANDS.
3 VAN DER POT, MAURITIUS, H, J C/O PROF HOLSTLAAN 6, NL-5656 AA, EINDHOVEN, NETHERLANDS.
4 WOUTERS, JAHANNES, H, P C/O PROF HOLSTLAAN 6, NL-5656 AA, EINDHOVEN, NETHERLANDS.
PCT International Classification Number G06T 1/00
PCT International Application Number PCT/IB2006/051960
PCT International Filing date 2006-06-19
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 05105616.6 2005-06-23 EUROPEAN UNION