Title of Invention

HEAD MOUNTED DEVICE FOR SEMANTIC REPRESENTATION OF THE USER SURROUNDINGS

Abstract

HEAD MOUNTED DEVICE FOR SEMANTIC REPRESENTATION OF THE USER SURROUNDINGS

A head mounted device for semantic representation of the user surroundings. The device comprises a cap (1) adapted to be fitted on the head of the user, an image capturing means (7) mounted on the cap, a computing platform (8) connected to the image capturing means, a feature data store (9) connected to the computing platform and an output module (10) connected to the computing platform. The feature data store comprises reference features which represent different entities existing in the surroundings and are associated with semantic description of the entities in the surroundings represented by them. The computing platform is designed to extract at least one feature from the images of the entities in the surroundings captured by the image capturing means, compare the extracted feature of the captured image with the reference features in the feature data store, identify the reference feature in the data store closest to the extracted feature of the captured image and provide an output, i.e. the semantic description of the entities in the surroundings associated with the identified feature, to the output module. (Fig 1)
Full Text

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
As amended by the Patents (Amendment) Act, 2005
&
The Patents Rules, 2003
As amended by the Patents (Amendment) Rules, 2005
COMPLETE SPECIFICATION
(See section 10 and rule 13)
TITLE OF THE INVENTION
Head mounted device for semantic representation of the user surroundings

APPLICANT
Name: Indian Institute of Technology, Bombay
Nationality: an autonomous research and educational institution established in India by a special Act of the Parliament of the Republic of India under the Institutes of Technology Act 1961
Address: Powai, Mumbai 400076, Maharashtra, India

INVENTORS
Names: Chaudhuri Subhasis, Rajashekar and Prabhudesai Amit
Nationality: all Indian Nationals
Address: all of Indian Institute of Technology, Bombay, Department of Electrical Engineering, Powai, Mumbai 400076, Maharashtra, India

PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the nature of this invention and the manner in which it is to be performed:

FIELD OF INVENTION
This invention relates to a head mounted device for semantic representation of the user surroundings.
The term "semantic representation of the surroundings" as used in the specification means a verbal description of the surroundings whereby the user can form an image of the surroundings mentally or visualise or perceive the surroundings.
The device can be used to aid a visually impaired person and give him a semantic description of the surroundings around him or traversed by him. The device can also be used to augment a robot navigation system whereby a remote operator gets a perception of the surroundings around the robot or traversed by the robot.
BACKGROUND OF THE INVENTION
US Publication No 2005/0208457 relates to a camera based object detection system for the visually impaired. The system comprises a digital camera mounted on the user to take images of objects on demand, real time algorithms connected to the camera to decipher attributes of the images, an announcement module connected to the algorithms to construct a sentence describing the images and a computer based voice synthesizer connected to the module to verbally announce the sentence to the user. The system essentially enables the user to recognize or identify the objects in front of him and navigate, but does not give the user a qualitative or semantic description or mental picture of the surroundings around him or traversed by him.

WO 03/107039 relates to an apparatus, method and system for aiding the visually impaired. It comprises means for sensing time/space characteristics and physical characteristics information about an object in a field of view, means for interpreting the time/space characteristics and physical characteristics information and for characterizing the object, and an audible information delivery device for audibly describing to the user the characterization of and interpretation about the object. This also enables the user to recognize or identify the objects in front of him and navigate, but does not give a mental picture or perception of the surroundings around the user or traversed by him.
OBJECTS OF INVENTION
An object of the invention is to provide a head mounted device which enables the user to perceive, accurately and reliably, the surroundings around him or traversed by him.
Another object of the invention is to provide a head mounted device which is lightweight enough to be carried on the person of the user, simple in construction and easy to operate.
Another object of the invention is to provide a head mounted device, which is economical.
Another object of the invention is to provide a head mounted device, which aids a visually impaired person wearing it by giving him a semantic description of the surroundings around him or traversed by him.

Another object of the invention is to provide a head mounted device, which can be used to augment a robot navigation system whereby a remote operator gets a perception of the surroundings around the robot or being traversed by the robot.
DETAILED DESCRIPTION OF INVENTION
According to the invention there is provided a head mounted device for semantic representation of the user surroundings, the device comprising a cap adapted to be fitted on the head of the user, an image capturing means mounted on the cap, a computing platform connected to the image capturing means, a feature data store connected to the computing platform and comprising reference features which represent different entities existing in the surroundings and are associated with semantic description of the entities in the surroundings represented by them, and an output module connected to the computing platform, the computing platform being designed to extract at least one feature from the images of the entities in the surroundings captured by the image capturing means, compare the extracted feature of the captured image with the reference features in the feature data store, identify the reference feature in the data store closest to the extracted feature of the captured image and provide an output, i.e. the semantic description of the entities in the surroundings associated with the identified feature, to the output module.
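The capture, extract, compare, identify and output steps described above can be sketched in code. The following is a minimal illustration only: the feature vectors, the Euclidean distance metric and the in-memory data-store layout are assumptions made for the sketch and are not fixed by the specification.

```python
# Illustrative sketch of the matching loop: an extracted feature vector is
# compared against the reference features, and the semantic description
# associated with the closest reference feature is returned.
import math

# Hypothetical feature data store: reference feature vector -> description.
FEATURE_STORE = [
    ((0.15, 0.45, 0.10), "woods"),
    ((0.55, 0.50, 0.45), "building"),
    ((0.20, 0.35, 0.60), "water body"),
    ((0.40, 0.40, 0.40), "road"),
]

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def describe(extracted_feature):
    """Return the semantic description tied to the closest reference feature."""
    _, description = min(
        ((euclidean(extracted_feature, ref), desc) for ref, desc in FEATURE_STORE),
        key=lambda pair: pair[0],
    )
    return description

print(describe((0.17, 0.43, 0.12)))  # closest to the "woods" reference
```

In the device, the returned description would then be routed to the output module (speakers or Braille board) instead of printed.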
The following is a detailed description of the invention with reference to the accompanying drawings, in which:

Fig 1 is a block diagram of a head mounted device for semantic representation of the user surroundings excluding the cap according to an embodiment of the invention;
Fig 2 is a schematic view of the cap of the device of Fig 1; and
Fig 3 is a schematic view of the cap and the image capturing means mounted on the cap.
The device of the invention as illustrated in the accompanying drawings comprises a cap 1 provided with a pair of fastening straps 2 and 3. The strap 2 is provided with a buckle 4 whose hinged pin is marked 5. The strap 3 is provided with a series of pin holes 6. The cap 1 is worn on the head of a user and fastened to the head of the user by threading strap 3 through the buckle 4 and engaging pin 5 of the buckle in one of the pin holes 6 in the strap 3. An image capturing means comprising a non-calibrated digital camera 7 is mounted on the cap with support members 7a, 7b and 7c which are fixed on the cap. The outer surface of the cap is rendered reflective by coating it with a reflective material such as mercuric oxide in known manner. The reflective coating is marked 1a. Alternatively, the image capturing means comprises an omnidirectional camera mounted on the cap.

8 is a computing platform connected to the image capturing means. The cord connecting the image capturing means to the computing platform is marked 7d. The computing platform comprises, for instance, a laptop PC. 9 is a feature data store connected to the computing platform. The feature data store comprises reference features which represent different entities such as trees, buildings, water bodies or roads existing in the surroundings. Each of the reference features is associated with a semantic description of the entities in the surroundings represented by them. The feature data store is stored on a USB drive, flash memory stick or hard disk. 10 is an output module connected to the computing platform. The output module comprises, for instance, an audio system comprising a pair of speakers worn on the ears of the user, such as a visually impaired person, and connected to the output port of the laptop PC, or a Braille board display connected to the output port of the laptop PC.

In the case of a digital camera, snapshots of the images of the entities in the surroundings of the user reflected on the outer surface of the cap are taken by the camera. In the case of an omnidirectional camera, snapshots of the entities in the surroundings of the user are taken directly by the camera. Each of the images shot by the camera is divided into five sectors, namely Right Top (RT), Front Right (FR), Left Top (LT), Left Bottom (LB) and Right Bottom (RB). The laptop PC extracts at least one feature from each of the sectors of the captured image. Preferably the extracted feature is the colour component of the different entities in the surroundings as captured by the camera. The laptop PC compares the colour feature with the reference features stored on the data store. The colour feature in the data store that best matches the extracted feature is identified. A semantic description of the entity in the surroundings associated with the identified feature is fed to the speakers or Braille board display, as the case may be, thereby enabling the user to relate to or visualise or perceive the surroundings he is in or traversed by him.
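The sector division and colour-feature extraction described above can be sketched as follows. The specification names the five sectors but does not fix their exact geometry or the exact colour feature, so this sketch assumes a simple quadrant-plus-central-band partition and takes the mean RGB colour of each sector as its feature; both are illustrative assumptions.

```python
# Illustrative sketch: divide a captured frame into the five sectors named
# in the specification and take the mean RGB colour of each as its feature.
# Sector geometry (quadrants plus a central "front" band) is an assumption.
import numpy as np

def sector_colour_features(image):
    """image: HxWx3 array. Returns {sector name: mean-RGB feature vector}."""
    h, w = image.shape[:2]
    half_h, half_w = h // 2, w // 2
    sectors = {
        "RT": image[:half_h, half_w:],   # Right Top
        "LT": image[:half_h, :half_w],   # Left Top
        "LB": image[half_h:, :half_w],   # Left Bottom
        "RB": image[half_h:, half_w:],   # Right Bottom
        # "Front Right" assumed here to be a central region of the frame.
        "FR": image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4],
    }
    return {name: region.reshape(-1, 3).mean(axis=0) for name, region in sectors.items()}

# Usage on a synthetic frame: top half green-ish ("woods"), bottom half grey.
frame = np.zeros((120, 160, 3))
frame[:60] = (0.2, 0.6, 0.2)
frame[60:] = (0.5, 0.5, 0.5)
features = sector_colour_features(frame)
```

Each of the five per-sector features would then be matched against the reference features in the data store, as described above.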
The device of the invention can be used by a visually impaired person so as to enable him to perceive or visualise, accurately and reliably, the surroundings around him or traversed by him. The device can also be used to augment a robot navigation system whereby a remote operator can get a perception of the surroundings around the robot or traversed by the robot. The components of the device of the invention are relatively cheap and, therefore, the device of the invention is economical. It is also simple in construction and easy to operate, besides being lightweight enough to be carried on the head of the user.
EXAMPLE 1
A typical device of the invention comprising an omnidirectional camera was tested on a visually impaired person by making him traverse a road surrounded by woods, a water body and buildings on both sides. The visually impaired person took 15 snapshots of the surroundings at different locations. Each of the images shot by the camera was divided into five sectors, namely Right Top (RT), Front Right (FR), Left Top (LT), Left Bottom (LB) and Right Bottom (RB). The colour feature of each of the sectors of the captured image was extracted by the laptop PC. The colour features extracted from each of the sectors of the captured image were compared with the reference features stored on the data store. The colour features in the data store that best matched the extracted features were identified and a semantic description of the features was fed to the Braille board. The data read by the visually impaired person on the Braille board was tabulated against the actual data as shown in the following Table. It is clear from the Table that the estimated category (EC) largely matched the true category (TC), thereby establishing that the device is accurate and reliable.

Table

Photo                         Sectors
Frame    RT        FR        LT        LB        RB
        EC  TC    EC  TC    EC  TC    EC  TC    EC  TC
  1     WD  WD    WD  WD    WD  WD    BD  BD    WD  WD
  2     WD  WD    BD  BD    WD  WD    WD  WD    WD  WD
  3     WD  WD    BD  BD    WD  WD    WD  WD    WD  WD
  4     BD  BD    WD  WD    WD  WD    WD  WD    BD  BD
  5     WD  WD    WD  WD    WD  WD    WD  WD    BD  BD
  6     BD  BD    BD  BD    BD  BD    WD  WD    BD  BD
  7     BD  BD    BD  BD    BD  BD    WD  WD    BD  BD
  8     WD  WD    BD  WD    WD  WD    BD  BD    WD  WD
  9     WD  WD    BD  WD    BD  BD    BD  BD    WD  WD
 10     WD  WD    BD  BD    WD  WD    WD  WD    WD  WD
 11     WD  WD    WD  WD    BD  WD    WD  WD    WD  WD
 12     WD  WD    WD  WD    WD  WD    BD  BD    WD  WD
 13     WD  WD    WD  WD    WD  WD    BD  BD    WD  WD
 14     WD  WD    BD  WD    WD  WD    BD  BD    WD  WD
 15     WD  WD    WD  WD    WD  WD    WD  WD    WD  WD

EC = Estimated Category
TC = True Category
WD = Woods
BD = Building
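The reliability claim can be checked directly against the tabulated data. The short script below simply transcribes the Table as (EC, TC) pairs and computes the per-sector match rate; it adds no new data.

```python
# Per-sector match rate of estimated (EC) vs true (TC) categories,
# transcribed row by row from the Table above (sectors RT, FR, LT, LB, RB).
rows = [
    [("WD","WD"),("WD","WD"),("WD","WD"),("BD","BD"),("WD","WD")],  # frame 1
    [("WD","WD"),("BD","BD"),("WD","WD"),("WD","WD"),("WD","WD")],  # frame 2
    [("WD","WD"),("BD","BD"),("WD","WD"),("WD","WD"),("WD","WD")],  # frame 3
    [("BD","BD"),("WD","WD"),("WD","WD"),("WD","WD"),("BD","BD")],  # frame 4
    [("WD","WD"),("WD","WD"),("WD","WD"),("WD","WD"),("BD","BD")],  # frame 5
    [("BD","BD"),("BD","BD"),("BD","BD"),("WD","WD"),("BD","BD")],  # frame 6
    [("BD","BD"),("BD","BD"),("BD","BD"),("WD","WD"),("BD","BD")],  # frame 7
    [("WD","WD"),("BD","WD"),("WD","WD"),("BD","BD"),("WD","WD")],  # frame 8
    [("WD","WD"),("BD","WD"),("BD","BD"),("BD","BD"),("WD","WD")],  # frame 9
    [("WD","WD"),("BD","BD"),("WD","WD"),("WD","WD"),("WD","WD")],  # frame 10
    [("WD","WD"),("WD","WD"),("BD","WD"),("WD","WD"),("WD","WD")],  # frame 11
    [("WD","WD"),("WD","WD"),("WD","WD"),("BD","BD"),("WD","WD")],  # frame 12
    [("WD","WD"),("WD","WD"),("WD","WD"),("BD","BD"),("WD","WD")],  # frame 13
    [("WD","WD"),("BD","WD"),("WD","WD"),("BD","BD"),("WD","WD")],  # frame 14
    [("WD","WD"),("WD","WD"),("WD","WD"),("WD","WD"),("WD","WD")],  # frame 15
]
cells = [pair for row in rows for pair in row]
matches = sum(ec == tc for ec, tc in cells)
print(f"{matches}/{len(cells)} sectors match")  # 71/75
```

The estimated category thus agrees with the true category in 71 of the 75 sector readings, consistent with the "largely matched" conclusion drawn from the Table.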
It is possible to have variations in the invention without deviating from the scope thereof. For instance, the cap configuration can be different. The extracted feature of the captured image and the features in the reference data store need not be colour. Besides the feature data store, the computing platform may comprise image processors. Such variations of the invention are to be construed and understood to be within the scope of the invention.

We claim:
1. A head mounted device for semantic representation of the user surroundings, the device comprising a cap adapted to be fitted on the head of the user, an image capturing means mounted on the cap, a computing platform connected to the image capturing means, a feature data store connected to the computing platform and comprising reference features which represent different entities existing in the surroundings and are associated with semantic description of the entities in the surroundings represented by them, and an output module connected to the computing platform, the computing platform being designed to extract at least one feature from the images of the entities in the surroundings captured by the image capturing means, compare the extracted feature of the captured image with the reference features in the feature data store, identify the reference feature in the data store closest to the extracted feature of the captured image and provide an output, i.e. the semantic description of the entities in the surroundings associated with the identified feature, to the output module.
2. A device as claimed in claim 1, wherein the image capturing means comprises a non-calibrated digital camera mounted on the cap, the outer surface of the cap being reflective.
3. A device as claimed in claim 1, wherein the image capturing means comprises an omnidirectional camera mounted on the cap.

4. A device as claimed in any one of claims 1 to 3, wherein the computing platform comprises a laptop PC.
5. A device as claimed in any one of claims 1 to 4, wherein the extracted feature comprises the colour component of the different entities existing in the surroundings as captured by the image capturing means.
6. A device as claimed in any one of claims 1 to 5, wherein the output module comprises an audio system or a Braille board display.
7. A device as claimed in any one of claims 1 to 6, wherein the feature data store comprises feature data stored on a USB drive, flash memory stick or hard disk.
8. A head mounted device as claimed in any one of claims 1 to 7, which is for use by a visually impaired person.
9. A head mounted device as claimed in any one of claims 1 to 7, which is for use on a robot.
Dated this 27th day of January 2006


Patent Number 225370
Indian Patent Application Number 133/MUM/2006
PG Journal Number 07/2009
Publication Date 13-Feb-2009
Grant Date 11-Nov-2008
Date of Filing 27-Jan-2006
Name of Patentee INDIAN INSTITUTE OF TECHNOLOGY BOMBAY
Applicant Address POWAI MUMBAI 400 076
Inventors:
# Inventor's Name Inventor's Address
1 CHAUDHURI SUBHASIS INDIAN INSTITUTE OF TECHNOLOGY BOMBAY DEPARTMENT OF ELECTRICAL ENGINEERING POWAI MUMBAI 400 076
2 RAJASHEKAR INDIAN INSTITUTE OF TECHNOLOGY BOMBAY DEPARTMENT OF ELECTRICAL ENGINEERING POWAI MUMBAI 400 076 MAHARASHTRA INDIA
3 PRABHUDESAI AMIT INDIAN INSTITUTE OF TECHNOLOGY BOMBAY DEPARTMENT OF ELECTRICAL ENGINEERING POWAI MUMBAI 400 076 MAHARASHTRA INDIA
PCT International Classification Number G06F7/00
PCT International Application Number N/A
PCT International Filing date
PCT Conventions: NA