Title of Invention

METHOD AND RECEIVER FOR PROVIDING AUDIO TRANSLATION DATA ON DEMAND.

Abstract Video data are transmitted to a receiver. A language menu is displayed from which a user selects a language that can be different from the original language broadcast with the video data. Video data identification information and language identification information corresponding to the language selected from the menu is derived and transmitted to e.g. an Internet server. The identification information is used to select an audio translation data set from several audio translation data sets stored in said server, wherein each of said several audio translation data sets includes a language translation of original audio data related to said video data. The selected audio translation data set is sent to the receiver and reproduced synchronously together with said video data.
Full Text Method and receiver for providing audio translation data on
demand
The invention relates to the field of providing audio trans-
lation data on demand to a receiver.
Background
The number of television (TV) channels a user can receive
has increased significantly because of the further develop-
ment of terrestrial TV, satellite TV and web TV technology,
including digital TV transmission. In addition, video media,
such as cassette, CD and DVD, offer more programs or movies
to the home.
Invention
The above developments lead also to an increased share of
foreign language programs or movies.
In an increasing number of countries or geographical regions
there are multi-language requirements: there may be used
more than one native language in one country or region, or
non-native residents prefer to have their native language
for home-entertainment. Therefore there is a growing need
for broadcasting programs or movies with audio data or sub-
titles corresponding to a language translation preferred by
the respective consumers.
A problem to be solved by the invention is to provide a
method for providing audio or subtitle translation data on
demand, a receiver that utilises this method, and a method
for providing corresponding audio or subtitle translation
data on demand to such a receiver, as described hereunder.
One aspect of the invention is a method for providing audio
or subtitle translation data on demand to a receiver, the
method including the following steps:
- receiving video data;
- receiving first identification information corresponding
to said video data;
- detecting a user-performed selection of a preferred lan-
guage;
- providing second identification information corresponding
to said preferred language;
- transmitting, e.g. via Internet, third identification in-
formation derived from said first and second identification
information to a server for requesting, based on said third
identification information, a desired audio or subtitle
translation data set corresponding to said video data;
- receiving, e.g. via Internet, said selected audio or sub-
title translation data set;
- reproducing, at least partly, data of said requested audio
or subtitle translation data set temporally synchronised
with said video data.
According to another aspect, the invention concerns a re-
ceiver for providing audio or subtitle translation data on
demand, the receiver including:
- means for receiving video data and first identification
information corresponding to said video data;
- means for detecting a user-performed selection of a pre-
ferred language;
- means for providing second identification information cor-
responding to said preferred language;
- means for transmitting, e.g. via Internet, third identifi-
cation information derived from said first and second iden-
tification information to a server for requesting, based on
said third identification information, a desired audio or
subtitle translation data set corresponding to said video
data;
- means for receiving, e.g. via Internet, said selected au-
dio or subtitle translation data set;
- means for reproducing, at least partly, data of said re-
quested audio or subtitle translation data set temporally
synchronised with said video data.
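For illustration only, the receiver-side flow described in the two
aspects above can be sketched roughly as follows in Python. The class
name TranslationRequest, the function fetch_translation_set, the JSON
payload and the use of HTTP are assumptions made for the sketch and
are not part of the described method or receiver.

  # Sketch of the receiver-side flow (illustrative only; names, payload
  # format and transport are assumptions, not part of the claims).
  import json
  from dataclasses import dataclass
  from urllib import request

  @dataclass
  class TranslationRequest:
      program_id: str      # first identification information (video data)
      language_code: str   # second identification information (language)

      def third_identification(self) -> bytes:
          # Third identification information derived from first and second.
          return json.dumps({"program": self.program_id,
                             "language": self.language_code}).encode("utf-8")

  def fetch_translation_set(server_url: str, req: TranslationRequest) -> bytes:
      # Transmit the derived identification information, e.g. via Internet,
      # and receive the matching audio or subtitle translation data set.
      http_req = request.Request(server_url, data=req.third_identification(),
                                 headers={"Content-Type": "application/json"})
      with request.urlopen(http_req) as resp:
          return resp.read()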
According to a further aspect, the invention concerns a
method for providing audio or subtitle translation data on
demand, including the steps:
- receiving, e.g. via Internet, identification information
requested by a user, wherein said identification information
corresponds to a preferred language and to video data that
are originally accompanied by audio or subtitle data in a
language different from said preferred language;
- storing or generating audio or subtitle translation data
sets assigned to different languages for related video data,
wherein each of said audio or subtitle translation data sets
includes a language translation of original language audio
or subtitle data related to specific ones of said video
data;
- selecting, upon receiving said identification information,
an audio or subtitle translation data set, wherein the se-
lected audio or subtitle translation data set represents a
language translation of said original language audio or sub-
title data corresponding to said preferred language,
- transmitting, e.g. via Internet, said selected audio or
subtitle translation data set for providing it to a receiver
of said user.
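A matching server-side sketch, again purely illustrative: the
in-memory dictionary stands in for the stored or generated translation
data sets, and the key structure (program identification plus language
code) is an assumption.

  # Illustrative server-side selection (assumed in-memory archive; a real
  # server would query a database of translation data sets).
  from typing import Dict, Optional, Tuple

  # Archive keyed by (program identification, language code).
  ARCHIVE: Dict[Tuple[str, str], bytes] = {
      ("movie-042", "de"): b"...German audio translation data set...",
      ("movie-042", "en"): b"...English subtitle translation data set...",
  }

  def select_translation_set(program_id: str,
                             language_code: str) -> Optional[bytes]:
      # Select, upon receiving the identification information, the data set
      # matching both the video data and the preferred language.
      return ARCHIVE.get((program_id, language_code))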
The invention is based on the idea that different audio
translation data sets are available in an archive,
preferably an online archive. The different audio
translation data sets can be ordered by a user of a
television and/or video system. This allows the user to
have a movie or film broadcast with audio signals
corresponding to a language preferred by him. For example, a
visitor staying in a hotel in a foreign country can watch
movies provided with audio signals corresponding to his
native language.
The invention also allows the user to watch a movie,
scientific programmes etc. in a certain foreign language or
with subtitles in a certain foreign language in order to
train the user's knowledge of this specific foreign language.
Drawings
Exemplary embodiments of the invention are described with
reference to the accompanying drawings, which show in:
Fig. 1 a schematic representation of a system according to
the invention;
Fig. 2 a simplified block diagram of an online translation
archive.
US Patent 5,982,448 uses multiple sub-channels in order to
transmit multiple language text together with the video
signal. One of these received and already available
languages can be selected. There is no transmission upon
request or demand for a specific language. A significant
amount of available data rate is wasted because multiple
non-desired language signals are transmitted. This
disadvantageous effect would be even worse if the live
audio signals were transmitted.
There are several inventive differences between the present
invention and DE 197 13490. In the present invention:
• The broadcast and received video data include
identification data for these video data;
• These identification data can be automatically
combined with identification data corresponding to a
default user-preferred language;
• The corresponding combined identification data are
sent automatically to a server for automatic download
of preferred-language audio/subtitle data.
Thereby a fully automatic translation of broadcast sound is
facilitated. The user, after having entered his preferred
language once only, can watch TV in foreign countries
together with sound or subtitles in his language. No
further action is required.
In DE 19713490, however, the video data must be identified
by the user, i.e. he must type in a corresponding code, and
he must request the desired subtitles on a case-by-
case basis.
Exemplary embodiments
Referring to Figure 1, the invention can be embodied in a
system including a television and/or video device 1 for
broadcasting a movie or film, an interface 2, and server
means 3 to which several translator accounts 4 are
assigned. The server means may be any kind of computer for
operating a database storing an archive of translation data.
The server
may be located at the broadcast station supplying the movie
or film or at a company specialized in the supply of trans-
lation data.
The server means may be connected with a translator studio
5. A user may control the television or video device 1
and/or the interface 2 by means of a remote control 6 and/or
by voice control. The interface 2 may be contained in the
device 1, which device can be e.g. a settop-box or a TV re-
ceiver or video recorder.
If a user of the device 1 wants to watch a movie broadcast
in any language with audio signals in a preferred language,
a language menu can be provided on a display 14 of the de-
vice 1. By means of the language menu a list of several lan-
guage options is presented to the user of device 1, each of
the several language options representing audio translation
data of a different language and/or from a different trans-
lator. From the language menu the user can select a language
option corresponding to the language preferred by the user.
The selection of the language option may be performed by
means of the remote control 6, the transmitted commands of
which are received by an IR receiver 12 contained in device
1.
It is also possible to select a preferred language by a spo-
ken command, which is detected by a microphone. This micro-
phone can be integrated in the remote control or in the
housing of the device 1.
Furthermore, it is possible to select not only one preferred
language but a most preferred language and a second most
preferred language. In order to make this more clear, the
following example is given. If for instance a German user
stays e.g. in China and is not familiar with the Chinese
language but with e.g. the English Language, he may choose
German as the most preferred language and English as the
second most preferred language. In this way, the German user
will get translation data in German language, if these are
available. If not, he will receive translation data in Eng-
lish language, if these are available. Only if neither Ger-
man nor English translation data is available, he has to
watch the movie with the original language.
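The fallback from a most preferred to a second most preferred
language can be thought of as an ordered lookup against the languages
that are actually available; the following sketch is one assumed way
to express it, using the German/English example above.

  # Ordered fallback over preferred languages (illustrative sketch).
  from typing import Iterable, Optional

  def choose_language(preferences: Iterable[str],
                      available: Iterable[str]) -> Optional[str]:
      # Return the first preferred language for which translation data are
      # available, or None so that the original sound track is kept.
      available_set = set(available)
      for language in preferences:   # e.g. ["de", "en"] for the German user
          if language in available_set:
              return language
      return None

  # German user in China: German is unavailable, English is offered instead.
  assert choose_language(["de", "en"], ["zh", "en"]) == "en"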
The user's selection of the language option is evaluated in
means 13 for providing identification information correspon-
ding to the preferred language and the video data of the
movie the user intends to watch. The identification informa-
tion is automatically passed using controlling means 11 to
an output of device 1, which output may be connected to the
Internet or any other source providing data to the user's
device. The identification information may include the title
of the movie or some other identification code extracted
from VPS data, teletext data, MPEG7 data or an EPG (Elec-
tronic Program Guide).
The identification information is transmitted to a server 3,
preferably through an interface 2 with online connection
like an ISDN connection or any other Internet or cable con-
nection. After processing the identification information the
server 3 will supply audio translation data to the interface
2 via a back channel or via the Internet or cable. The audio
translation data may be compressed, e.g. by means of MP3 or
MPEG4 standard. The device 1 will provide video data re-
ceived from a broadcasting station 7 in synchronization with
the audio translation data received from server 3, so that
the user can watch the movie with audio signals and/or sub-
titles corresponding to the language preferred by him.
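As a small illustration of how the identification information might
be assembled from EPG-like metadata before being transmitted, consider
the sketch below; the field names event_id and title and the
dictionary payload are assumptions, since the description leaves the
exact coding open.

  # Assembling identification information from EPG-like metadata
  # (field names "event_id" and "title" are assumptions).
  def derive_identification(epg_event: dict, preferred_language: str) -> dict:
      # First identification information: an identification code or the
      # title extracted from EPG, teletext, VPS or MPEG-7 data.
      program_id = epg_event.get("event_id") or epg_event["title"]
      # Combined with the language code it forms the request to the server.
      return {"program": program_id, "language": preferred_language}

  payload = derive_identification(
      {"event_id": "0x1A2B", "title": "Example Movie"},
      preferred_language="de")
  print(payload)   # {'program': '0x1A2B', 'language': 'de'}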
Also it is possible that the server sends back only infor-
mation about the availability of the languages of audio
signals and/or subtitles for the selected title. This infor-
mation can be accompanied by information about the cost
for downloading the translation data. The information about
the available languages can be displayed by a second on-
screen display, possibly together with the cost. The user
then finally decides whether he wants to download the
translation data.
The controlling means 11 may control synchronization of the
video data and the audio translation data by means of time
stamps provided in the video data as well as the audio
translation data. If the video data are encoded according to
the MPEG-4 standard, resynchronization marker codes, which
are inserted in the video data stream at certain intervals,
can be used for synchronization. If the audio data are also
MPEG-4 encoded, not the total audio signal but only the
voices to be translated can be transmitted due to the
object-oriented transmission. This allows a very low transmission
bit rate.
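A minimal sketch of time-stamp based synchronisation as described
above; the list of (time stamp, translation chunk) pairs is an assumed
stand-in for the time stamps carried in the video and translation data
streams.

  # Time-stamp based alignment of translation data with video data.
  import bisect
  from typing import List, Tuple

  # Each entry pairs a presentation time stamp (seconds) with a chunk of
  # translated audio or a subtitle line (assumed, illustrative data model).
  TranslationEntry = Tuple[float, str]

  def entry_for_video_time(entries: List[TranslationEntry],
                           video_time_stamp: float) -> str:
      # Return the chunk whose time stamp is the latest one not after the
      # current video time stamp, keeping both streams aligned.
      times = [t for t, _ in entries]
      index = bisect.bisect_right(times, video_time_stamp) - 1
      return entries[index][1] if index >= 0 else ""

  subtitles = [(0.0, "Hello"), (2.5, "How are you?"), (5.0, "Goodbye")]
  print(entry_for_video_time(subtitles, 3.1))   # -> "How are you?"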
The audio translation data provided to device 1 may be in-
termediately stored at least partly, e.g. on a hard disc or
other storage devices.
In the means 13 for providing identification information, or
in the controlling means 11, the user's language selection,
or selections, may be stored permanently or for a predeter-
mined period of time so that, always or during the predeter-
mined period of time, audio translation data corresponding
to the stored preferred language selection will be automati-
cally delivered to the device 1 whenever the user wants to
watch a movie, without it being necessary to display the
language menu on the display 14. For example, during a stay
in a hotel abroad, a visitor will have the opportu-
nity to watch movies in the hotel with audio translation
data corresponding to his native language, if a respective
language selection made by the visitor is stored.
The service of providing translation data on demand may be
free of charge or charged, wherein the user's payment can be
controlled by means of the interface 2 or controlling means
11.
Referring to Figure 2, in the server means 3 the audio
translation data are arranged e.g. in translator accounts
10, 11, 12, each being related to a translator A, B and C,
respectively, as schematically shown in Figure 2. A transla-
tor A may have a respective account for German (account 10
in Figure 2), English, French (account 13 in Figure 2)
and/or other languages. There may be more than one set of
audio translation data available in the server means 3, each
set representing, for example, a German translation for a
specific movie and generated by a different translator, giv-
ing the user the opportunity to select German audio transla-
tion data for the specific movie from a preferred transla-
tor.
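The organisation of the archive into translator accounts, each
holding translation data sets per language and per movie, could be
modelled roughly as follows; the class and field names are assumptions
for illustration.

  # Assumed data model for translator accounts in the server means.
  from dataclasses import dataclass, field
  from typing import Dict, List

  @dataclass
  class TranslatorAccount:
      translator: str                 # e.g. translator "A"
      language: str                   # e.g. "de" for a German account
      # Translation data sets keyed by the movie/program identification.
      data_sets: Dict[str, bytes] = field(default_factory=dict)

  @dataclass
  class TranslationArchive:
      accounts: List[TranslatorAccount] = field(default_factory=list)

      def sets_for(self, program_id: str,
                   language: str) -> List[TranslatorAccount]:
          # All accounts offering the requested movie in the requested
          # language, so the user can pick a preferred translator.
          return [a for a in self.accounts
                  if a.language == language and program_id in a.data_sets]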
Translators may generate audio translation data in the
translator studio 5. The translator studio 5 provides a
user-friendly website interface, technical support and
translation guidance to the translators. An online connec-
tion may be established between the translator studio 5 and
the server means 3 to transmit audio translation data. The
translator may set up a new translator account, add new au-
dio translation data to the account 4 or delete previous
versions from the account 4. Audio translation data may be
stored as text and/or voice data. In addition, the transla-
tor studio 5 may provide online payment to the translators.
It is possible to assign the required functions to different
units: for instance, providing identification information
can be accomplished in interface 2 whereby means 13 can be
omitted and IR receiver 12 is connected directly to control-
ling means 11.
The audio or subtitle translation data set mentioned above
can be a data set including the complete sound track/sub-
title track of one program or one movie.
In connection with recording a program or movie using pro-
gramming, in particular VPS or ShowView programming, it is
advantageous to either automatically download and intermedi-
ately store the audio or subtitle translation data set in
advance, or to download and record the audio or subtitle
translation data set during or after finishing the recording
of the related video data and possibly original audio/sub-
title data.
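One assumed way to combine this with VPS or ShowView style timer
programming is sketched below; the scheduling helper and the
ten-minute margin are illustrative only.

  # Assumed scheduling of the translation download around a timer recording.
  from datetime import datetime, timedelta

  def plan_translation_download(recording_start: datetime,
                                recording_end: datetime,
                                download_in_advance: bool) -> datetime:
      # Fetch the translation data set either some margin before the
      # programmed recording starts, or right after it has finished.
      margin = timedelta(minutes=10)   # assumed safety margin
      if download_in_advance:
          return recording_start - margin
      return recording_end + margin

  start = datetime(2024, 1, 1, 20, 15)
  end = datetime(2024, 1, 1, 22, 0)
  print(plan_translation_download(start, end, download_in_advance=True))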
1. Method for providing audio or subtitle translation data
on demand to a receiver (1) or video device (1), the
method comprising the following steps:
- said receiver or video device receiving broadcast video
data for a specific program or movie together with audio
or subtitle data related to a given language, which
video data comprise first identification information
data identifying said specific program or movie video
data;
- detecting (6, 12) a user-performed selection of a pre-
ferred language that is different from said given lan-
guage;
- providing (13, 2) second identification information data
corresponding to said preferred language;
- transmitting automatically, e.g. via Internet, third
identification information data derived from said first
and second identification information data from said re-
ceiver or video device to a server (3) for requesting,
based on said third identification information data, a
desired audio or subtitle translation data set corre-
sponding to said video data and corresponding to said
preferred language;
- receiving (11), e.g. via Internet, an audio or subtitle
translation data set corresponding to said third identi-
fication information data;
- reproducing (15), at least partly, data of said received
audio or subtitle translation data set together with
said video data in said receiver or video device in a
temporally synchronised manner.
2. Method as claimed in claim 1, further comprising the
step of displaying (14) a language menu and detecting
(6, 12) the user-performed selection of the preferred
language from said language menu.
3. Method as claimed in claim 1 or 2, wherein from several
available server-stored audio or subtitle translation
data sets one is selected, wherein each of said several
audio or subtitle translation data sets comprises a lan-
guage translation of original language audio or subtitle
data related to said video data, and wherein the se-
lected audio or subtitle translation data set repre-
sents, corresponding to said preferred language, a lan-
guage translation of said original language audio or
subtitle data.
4. Method as claimed in any one of claims 1 to 3, wherein
said user-performed selection is detected, and said pro-
vided second identification information corresponding to
said preferred language is stored, before said video
data are received.
5. Method as claimed in claim 4, wherein said video data
are recorded using programming, e.g. VPS or ShowView
programming, and wherein said audio or subtitle transla-
tion data set is either automatically downloaded and in-
termediately stored in advance, or is downloaded and re-
corded after finishing the recording of the related
video data and possibly original audio or subtitle data.
6. Method as claimed in any one of claims 1 to 5, wherein
time stamps are used for synchronising said video data
with the data of said requested or selected audio or
subtitle translation data set.
7. Method as claimed in claim 6, wherein the data are
MPEG-4 encoded and resynchronisation marker codes are
used for synchronising.
8. Method as claimed in any one of claims 1 to 7, wherein
said first identification information is automatically
provided from corresponding teletext or MPEG7 informa-
tion.
9. Receiver (1) or video device (1) for providing audio or
subtitle translation data on demand, said receiver or
video device comprising:
- means (14) for receiving broadcast video data for a spe-
cific program or movie together with audio or subtitle
data related to a given language, which video data com-
prise first identification information data identifying
said specific program or movie video data;
- means (6, 12) for detecting a user-performed selection
of a preferred language that is different from said
given language;
- means (13, 2) for providing second identification infor-
mation data corresponding to said preferred language;
- means (11) for automatically transmitting, e.g. via
Internet, third identification information data derived
from said first and second identification information
data from said receiver or video device to a server (3)
for requesting, based on said third identification in-
formation data, a desired audio or subtitle translation
data set corresponding to said preferred language;
- means (11) for receiving, e.g. via Internet, an audio or
subtitle translation data set corresponding to said
third identification information data;
- means (15) for reproducing, at least partly, data of
said received audio or subtitle translation data set to-
gether with said video data in said receiver or video
device in a temporally synchronised manner.
10. Receiver as claimed in claim 9, further comprising means
(14) for displaying a language menu, wherein said de-
tecting means (6, 12) detect the user-performed selec-
tion of the preferred language from said language menu.
11. Method for providing audio or subtitle translation data
on demand, comprising the steps:
- receiving (3), e.g. via Internet, identification infor-
mation requested by a user, wherein said identification
information corresponds to a preferred language and to
video data that are originally accompanied by audio or
subtitle data in a language different from said pre-
ferred language;
- storing or generating (A, ..., D) audio or subtitle
translation data sets assigned to different languages
for related video data, wherein each of said audio or
subtitle translation data sets comprises a language
translation of original language audio or subtitle data
related to specific ones of said video data;
- selecting, upon receiving said identification informa-
tion, an audio or subtitle translation data set, wherein
the selected audio or subtitle translation data set
represents a language translation of said original lan-
guage audio or subtitle data corresponding to said pre-
ferred language,
- transmitting (3), e.g. via Internet, said selected audio
or subtitle translation data set for providing it to a
receiver of said user.
Video data are transmitted to a receiver (1). A language menu is
displayed from which a user selects a language that can be
different from the original language broadcast with the video data.
Video data identification information and language identification
information corresponding to the language selected from the menu
is derived and transmitted to e.g. an Internet server. The
identification information is used to select an audio translation
data set from several audio translation data sets stored in said
server, wherein each of said several audio translation data sets
includes a language translation of original audio data related to
said video data. The selected audio translation data set is sent to
the receiver and reproduced synchronously together with said
video data.

Documents:

280-cal-2001-granted-abstract.pdf

280-cal-2001-granted-claims.pdf

280-cal-2001-granted-correspondence.pdf

280-cal-2001-granted-description (complete).pdf

280-cal-2001-granted-drawings.pdf

280-cal-2001-granted-form 1.pdf

280-cal-2001-granted-form 13.pdf

280-cal-2001-granted-form 18.pdf

280-cal-2001-granted-form 2.pdf

280-cal-2001-granted-form 26.pdf

280-cal-2001-granted-form 3.pdf

280-cal-2001-granted-form 5.pdf

280-cal-2001-granted-letter patent.pdf

280-cal-2001-granted-reply to examination report.pdf

280-cal-2001-granted-specification.pdf

280-cal-2001-translated copy of priority document.pdf


Patent Number 218553
Indian Patent Application Number 280/CAL/2001
PG Journal Number 14/2008
Publication Date 04-Apr-2008
Grant Date 02-Apr-2008
Date of Filing 14-May-2001
Name of Patentee DEUTSCHE THOMSON-BRANDT GMBH.
Applicant Address HERMANN-SCHWER-STR.3, D-78048 VILLINGEN-SCHWENNINGEN
Inventors:
# Inventor's Name Inventor's Address
1 LI HUI HALTENHOFFSTR.221, D-30419 HANNOVER
2 RITTNER KARSTEN EPIWEG 13, D-30453 HANNOVER
PCT International Classification Number H 04 N 7/88
PCT International Application Number N/A
PCT International Filing date
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 00250152.6 2000-05-18 EUROPEAN UNION