Title of Invention

A METHOD OF RECOGNIZING SPOKEN LANGUAGE

Abstract In accordance with the present invention, speech recognition is disclosed (10). It uses a microphone to receive audible sounds input by a user into a first computing device (28) having a program with a database (16) comprising (i) digital representations of known audible sounds and associated alphanumeric representations of the known audible sounds and (ii) digital representations of known audible sounds corresponding to mispronunciations resulting from known classes of mispronounced words and phrases. The method is performed by receiving the audible sounds in the form of the electrical output of the microphone (28). A particular audible sound to be recognized is converted into a digital representation of the audible sound (30). The digital representation of the particular audible sound is then compared to the digital representations of the known audible sounds in the database to determine which of those known audible sounds is most likely to be the particular audible sound (30).
Full Text

SPEECH RECOGNITION METHOD
TECHNICAL FIELD The present invention relates to speech recognition technology that can be embodied in speech recognition software typically implemented on personal computer systems.
BACKGROUND
There has been a long-standing desire in the field of computing to provide effective voice-recognition
technology. While commercial products have been available for ten or more years, there were difficulties in
accommodating individual quirks, regional pronunciations, speech defects and impediments, bad habits and
so on. It was apparent that good speech recognition required not only good technology, but would also
benefit from standardized speech that a machine could recognize consistently. To this end some voice
recognition technologies have, with limited success, attempted to train users to adopt unnatural speaking
modes, for example making a brief but distinct pause between each word, in order to facilitate machine
recognition.
Early proposals for speech recognition technology used a series of bandpass filters on electronic speech signals to generate spectrographic prints of speech showing the variation of the sound frequency with time. Different individuals were found to have distinctively different voice prints, but the technology fell far short of reliable speech recognition.
Modern development of speech recognition systems has centered on the assembly of a database of sound models expected to have a high degree of correlation to the speech to be recognized, in a two-stage process. In the first stage one or more speakers input global information to develop a global database. In the second stage the database is trained to better correlate with a particular user's speech.
In a typical computerized speech recognition system, an acoustic signal received by a microphone is input into a voice board which digitizes the signal. The computer then generates a spectrogram which, for a series of discrete time intervals, records those frequency ranges at which sound exists and the intensity of sound in each of those frequency ranges. The spectrogram, sometimes referenced in the art as a token, is thus a series of spectrographic displays, one for each of a plurality of time intervals which together form an audible sound to be recognized. Each spectrographic display shows the distribution of energy as a function of frequency during the time interval. In a typical system, sampling rates of 6,000 to 16,000 samples per second are used to generate about fifty spectrum intervals per second for an audible sound to be recognized.
Also in a typical system, quantitative spectral analysis may be done for seven frequency ranges, resulting in eight spectral parameters for each fiftieth of a second, or spectral sample period.
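By way of illustration only, the following sketch shows how such a token might be computed: roughly fifty spectral slices per second, each holding the energy in seven frequency ranges plus overall energy, for eight parameters per slice. The band edges and windowing are assumptions chosen for the example, not values taken from this disclosure.

```python
import numpy as np

def spectral_token(samples: np.ndarray, rate: int = 16000) -> np.ndarray:
    """Return one row of eight spectral parameters per 20 ms frame."""
    frame_len = rate // 50                      # ~fifty slices per second
    # Seven hypothetical band edges in Hz; the disclosure does not specify them.
    edges = [0, 250, 500, 1000, 2000, 3000, 4500, rate // 2]
    rows = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / rate)
        bands = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
                 for lo, hi in zip(edges, edges[1:])]
        rows.append(bands + [sum(bands)])       # 7 band energies + total = 8
    return np.array(rows)
```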
Once broken down into a spectral picture of frequency energy distributions over time, recognition of speech may be effected by comparing the sound to be recognized to known spectral pictures for particular sounds, and determining the model which best matches the unknown speech sound to be recognized. While this approach is useful, difficulties arise in accommodating the many differences in the speech patterns of a population of users. For example, different speakers may speak at different rates, with different inflections, and may place a different emphasis on the different syllables of a word. Furthermore, most speakers will speak in different ways on different occasions according to their mood and other factors.
To ameliorate these problems, conventional speech recognition systems normalize, or time align, the encoded received sound or token to have a duration equal to that of a database word or phrase model. The system then compares each spectral time slice for the sound to be recognized against a particular model to provide a quality rating for the match, outputting to the system the proposed match having the highest quality rating, usually in the form of a computer display of the word or phrase.
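A minimal sketch of this normalize-and-match step follows. Uniform resampling stands in for whatever time-alignment a real engine would use, and the negative spectral distance stands in for a quality rating; both are assumptions for illustration.

```python
import numpy as np

def time_align(token: np.ndarray, target_frames: int) -> np.ndarray:
    """Stretch or compress a token to a model's duration by resampling frames."""
    idx = np.linspace(0, len(token) - 1, target_frames).round().astype(int)
    return token[idx]

def best_match(token: np.ndarray, models: dict) -> str:
    """Return the model word with the highest quality rating for this token."""
    ratings = {
        word: -np.linalg.norm(time_align(token, len(model)) - model)
        for word, model in models.items()
    }
    return max(ratings, key=ratings.get)        # highest quality rating wins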
In addition, commercial systems intended for recognition of substantial narrative text, for example business dictation, typically perform a contextual analysis and require or recommend a user training session, during which the above matching functions are performed with respect to a preselected text which is read into a microphone by a user to train the system to the individual's particular speech characteristics. During this process, the model is appended with the individual's learned characteristics. Additional user-specific information can be obtained, during use, from an error correction dialog box activated when the user detects an error in the output text. Corrective information is input into the system via the error correction dialog box, causing the word model to become better adapted to the user's speech. Further training of the recognition engine can be effected by having the user speak confusingly similar words as well as the correct word or phrase.
Even with exponential growth in computing power over a number of years and employment of sophisticated software, computerized speech recognition systems prior to the present invention may still be unsatisfactory in a number of respects, being, for example, unduly error prone, requiring excessive training of the system by the user, or having difficulties in recognizing different dialects, accents and usages and poor or unusual speech characteristics.
SUMMARY OF THE INVENTION In accordance with the invention, the performance of the speech recognition software is improved by focusing on the user, as opposed to the software. In particular, the invention has as one objective the improvement of the speech patterns of persons using the software, preferably in a manner that will enhance the human quality of a person's speech rather than giving it unnatural characteristics. Benefits obtainable with the methods of the invention include enhanced speech recognition performance by the computerized system, with the bonus of speech training for the user. Such training may be of great value especially, for example, to salespeople, lawyers, store clerks, mothers dealing with children and many others who rely heavily on oral communication skills to accomplish their daily objectives.
Accordingly, in one aspect the invention provides a method of recognizing spoken language implemented on a computing device, the method comprising:
(a) receiving speech to be recognized into the computing device;
(b) analyzing the speech to be recognized into phonemes intended to be matched with graphemes by a speech recognition algorithm, each grapheme comprising a recognizable written representation of a spoken language element;
(c) employing the speech recognition algorithm to match the phonemes with graphemes obtained from a database of graphemes;
(d) outputting the recognized graphemes; and
(e) providing a speech training routine wherein the user can receive training from the system to correct mispronunciation of a speech element.
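Purely as an editorial illustration of steps (a) through (e), a runnable skeleton follows. The data shapes (phonemes as strings, a grapheme lookup table, a set of known error phonemes) and all names are assumptions, not terminology from the claims.

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    graphemes: str            # (d) the recognized written output
    training_suggested: bool  # (e) whether to offer the training routine

def recognize(audio_phonemes, grapheme_db, known_error_phonemes):
    # (b)/(c): match each analyzed phoneme against the grapheme database
    graphemes = "".join(grapheme_db.get(p, "?") for p in audio_phonemes)
    # (e): flag training when a phoneme matches a known mispronunciation
    needs_training = any(p in known_error_phonemes for p in audio_phonemes)
    return RecognitionResult(graphemes, needs_training)

# e.g. recognize(["k", "ae", "t"], {"k": "c", "ae": "a", "t": "t"}, {"ae"})
```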
Part of the problem here has been the difficulty of finding good voice instructors and the relatively high cost of the individualized instruction needed for a high degree of effectiveness in this area. In accordance with preferred embodiments of the invention, a specialized but highly effective speech training regimen is provided for application in the context of speech recognition software for receiving human language inputs in audio form to a microphone, analyzing the same in a personal computer, and outputting alphanumeric documents and navigation commands for control of the personal computer.
In accordance with the present invention, speech recognition is performed on a first computing device using a microphone to receive audible sounds input by a user into a first computing device having a program with a database comprising (i) digital representations of known audible sounds and associated alphanumeric representations of the known audible sounds and (ii) digital representations of known audible sounds corresponding to mispronunciations resulting from known classes of mispronounced words and phrases. The method is performed by receiving the audible sounds in the form of the electrical output of a microphone. A particular audible sound to be recognized is converted into a digital representation of the audible sound.
The digital representation of the particular audible sound is then compared to the digital representations of

the known audible sounds to determine which of those known audible sounds is most likely to be the particular audible sound being compared to the sounds in the database. A speech recognition output comprising the alphanumeric representation associated with the audible sound most likely to be the particular audible sound is then produced. An error indication is then received from the user indicating that there is an error in recognition. The user also indicates the proper alphanumeric representation of the particular audible sound. This allows the system to determine whether the error is a result of a known type or instance of mispronunciation. In response to a determination of error corresponding to a known type or instance of mispronunciation, the system presents an interactive training program from the computer to the user to enable the user to correct such mispronunciation.
The presented interactive training program comprises playback of the properly pronounced sound from a database of recorded sounds corresponding to proper pronunciations of the mispronunciations resulting from the known classes of mispronounced words and phrases.
In accordance with a preferred embodiment of the invention, the user is given the option of receiving speech training or of training the program to recognize the user's speech pattern, the choice resting with the user of the program.
In accordance with the invention, the determination of whether the error is a result of a known type or instance of mispronunciation is performed by comparing the mispronunciation to the digital representations of known audible sounds corresponding to mispronunciations resulting from known classes of mispronounced words and phrases using a speech recognition engine.
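A sketch of this determination is given below: the misrecognized token is scored against the database of known mispronunciation models, reusing the same alignment-and-distance matching as ordinary recognition. The data shapes, distance threshold, and helper names are assumptions for illustration.

```python
from typing import Optional
import numpy as np

def classify_error(token: np.ndarray, error_models: dict,
                   threshold: float = 50.0) -> Optional[str]:
    """Return the known mispronunciation class the token matches, if any."""
    best_class, best_dist = None, float("inf")
    for error_class, model in error_models.items():
        # time-align the token to the error model's duration
        idx = np.linspace(0, len(token) - 1, len(model)).round().astype(int)
        dist = np.linalg.norm(token[idx] - model)
        if dist < best_dist:
            best_class, best_dist = error_class, dist
    return best_class if best_dist < threshold else None
```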
It is anticipated that the inventive method will be implemented by having the database comprising (i) digital representations of known audible sounds and associated alphanumeric representations of the known audible sounds and (ii) digital representations of known audible sounds corresponding to mispronunciations resulting from known classes of mispronounced words and phrases, generated by the steps of speaking and digitizing the known audible sounds and the known audible sounds corresponding to mispronunciations resulting from known classes of mispronounced words and phrases. The database will then be introduced into the computing device of many users after the generation by speaking and digitizing has been done on another computing device and transferred together with voice recognition and error correcting subroutines to the first computing device using CD-ROM or other appropriate data carrying medium.
It is also contemplated that mispronunciations are input into the database by actual speakers who have such errors as a natural part of their speech patterns.

In accordance with the invention, normalization to word, phrase and other sound models may be achieved by normalizing words or phrases to one of a plurality of sound durations. This procedure is followed with respect to all the word and phrase models in the database. When a word is received by the system, it measures the actual duration, and then normalizes the duration of the sound to one of the plurality of preselected normalized sound durations. This reduces the number of items in the database against which the sound is compared and rated.
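A minimal sketch of this duration-bucketing idea, assuming a small set of preselected durations (the specific values here are invented for the example): each incoming sound is normalized to the nearest preset duration, so it need only be compared against models in that same bucket.

```python
import numpy as np

PRESET_FRAMES = (10, 25, 50, 100)   # hypothetical normalized durations

def bucket(token: np.ndarray) -> np.ndarray:
    """Normalize a token to the nearest preselected duration."""
    target = min(PRESET_FRAMES, key=lambda n: abs(n - len(token)))
    idx = np.linspace(0, len(token) - 1, target).round().astype(int)
    return token[idx]   # compare only against models of this same length
```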
BRIEF DESCRIPTION OF THE DRAWINGS One way of carrying out the invention is described below in connection with the figures, in which:
Figure 1 is a block diagram illustrating a voice recognition program in accordance with the
method of the present invention;
Figure 2 is a block diagram illustrating a detail of an alternative embodiment of the voice
recognition program in accordance with the method of the present invention;
Figure 3 is a block diagram illustrating a further detail of the alternative embodiment of
Figure 2 of the voice recognition program in accordance with the method of the present invention; and
Figure 4 is a block diagram illustrating an alternative text to speech routine in accordance
with the method of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Referring to Figure 1, the system and method of the present invention may be understood. In accordance with the inventive method 10, a voice and error model is generated using subroutine 12. Subroutine 12 comprises a number of steps which are performed at the site of the software developer, the results of which are sent, for example in the form of a CD-ROM, other media or via the Internet, together with the software for executing voice recognition, to a user, as will be apparent from the description below. In accordance with the present invention, the inventive speech recognition method may be practiced on personal computers, as well as on more advanced systems, and even on relatively stripped down lightweight systems, those referred to as sub-notebooks and even smaller systems, provided the same have sound boards for interfacing with and receiving the output of a microphone. It is noted that quality sound board electronics is important to good recognition.

Software Development Phase
At step 14 a database of word models is generated by having speakers speak the relevant words and phrases into a microphone connected to the sound board of the personal computer being used to generate the database. In accordance with the preferred embodiment of the invention, speakers who have been trained in proper speech habits are used to input words, phrases and sounds into the database at step 14. As the information is generated by the speakers speaking into microphones, attached to the sound boards in the computer, the information is digitized, analyzed and stored on the hard drive 16 of the computer.
In accordance with the present invention, relatively common pronunciation errors are also input into the system at step 18. In this specification the term "phoneme" is used to include, along with other meanings as customarily understood in the art, the smallest sound, perhaps meaningless in itself, capable of indicating a difference in meaning between two words. For example, the word "dog" differs from "cog" by virtue of a change of the phoneme "do" pronounced "daw" and "co" pronounced "caw."
Thus, at step 18, the model generating speaker can speak a database of common phoneme errors into the microphone attached to the sound board of the computer to result in input of an error database into hard drive 16 of the computer. However, it is preferred that the phoneme errors are spoken by persons who in various ways make the pronunciation error as part of their normal speech patterns.
At step 20, the system is enhanced by the introduction into the database contained on hard drive 16 of a plurality of exercise word models, selected for the purpose of training the speech of a user of the system. The same are input into the system through the use of a microphone and sound board, in the same way that the database of the language model was input into the system. Generally, a collection of word and/or phrase models is associated with each type of phoneme error. This is because if a person makes a speech pronunciation error of a particular type, it is likely that the same speaker makes certain other errors which have common characteristics with other pronunciation errors in the group. For example, a person who mispronounces the word "car" to sound like "cah" is also likely to mispronounce the word "park" to sound like "pack".
Exercise phrase models are input at step 22. These exercise phrase models are stored by the system in hard drive 16. The exercise word models and the exercise phrase models input into the system at steps 20 and 22, respectively, are associated in groups having common mispronunciation characteristics. The same are input into the system through the use of a microphone and sound board, in the same way that the database of the language model was input into the system.
In addition, in accordance with the present invention, it is recognized that computer errors may result in misrecognition of a particular error, mistaken acceptance of a mispronunciation, or mistaken rejection of a proper pronunciation. Accordingly, during the database generation session during which properly pronounced exercise word models or exercise phrase models are input into the system at steps 20 and 22, audio recordings of the same are also stored on hard disk 16, to allow for playback of these proper pronunciations during use of the program by a person performing speech recognition using the program. This provides an audible cue to the user and allows the user to monitor the reliability of the system during the voice recognition and speech training process of the present invention.
In accordance with the invention, it is anticipated that there may be more than one mispronunciation associated with a particular word or phrase. Accordingly, at step 24, a plurality of typical mispronunciations are input into the system to create a database of exercise word error models in hard drive 16. The same are input into the system through the use of a microphone and sound board, in the same way that the database of the language model was input into the system.
Finally, the database of relatively common mispronunciation errors is completed at step 26, where the speaker generating that database speaks into the system to generate a plurality of exercise phrase error models. These error models are also input into the system through the use of a microphone and stored on hard drive 16.
In accordance with a preferred embodiment of the invention, the input of audible sounds into the system to generate the word error models at step 24 and the exercise phrase error models at step 26 is done using a speaker or speakers who have the actual speech error as part of their normal speech patterns. The same is believed to achieve substantially enhanced recognition of speech errors, although the same is not believed to be necessary to a functioning system.
In accordance with the preferred embodiment of the invention, the models stored on hard disk 16, and generated as described above, may be recorded on a CD-ROM or other program carrying media, together with a voice recognition engine, such as that marketed by any one of a number of manufacturers such as IBM, Dragon Systems, and others. In accordance with the present invention, such a prior art speech recognition program may be used for the purposes of recognizing words and of recognizing mispronunciations and phoneme errors, together with the above described audio recordings of proper pronunciations, during both speech recognition operation and training sessions.
In accordance with the invention, such software comprising the speech recognition engine, editing and training utilities, and database of word models, phrase models, vocal recordings, and error models may be supplied to the user for a one-time fee and transported over a publicly accessible digital network, such as the Internet. Alternatively, the software may be made available for limited use for any period of time, with charges associated with each such use, in which case the software would never be permanently resident on the computer of a user.
User Speech Training Program
When a user desires to use the inventive program, the software containing the program and the database is
loaded into a personal computer and words are spoken into a microphone coupled to the sound board of the
computer, in order to input the speech into the computer in the manner of a conventional speech recognition
program.
More particularly, as discussed above, after the system has proceeded through the performance of steps 14, 18, 20, 22, 24 and 26, and the speech recognition engine, editing and training utilities have been added, the system proceeds at step 28 to receive, through a microphone, speech to be recognized from a user of the program who has loaded the speech recognition engine, editing and training utilities, and database of word models, phrase models, vocal recordings, and error models onto the user's personal computer. In this respect the operation of the speech recognition program of the present invention is substantially identical to other speech recognition programs presently on the market. More particularly, at step 30, a conventional speech recognition algorithm is applied to recognize audible sounds as the words which they are meant to represent.
The computer then outputs the recognized speech on the screen of the computer monitor, and the next phrase uttered by the user proceeds at step 30 through the speech recognition algorithm resulting in that speech also being displayed on the monitor screen. When the user notices that an error has occurred, he may use any one of a number of different techniques to bring up an error correction window at step 32. For example, he may simply double-click on the error, or highlight the erroneous recognition and hit a key dedicated to presentation of the error correction window.
User correction occurs at step 34. In typical programs, call up of the error correction window results in the presentation of a screen showing the highlighted word, and suggesting, through the use of a menu, a number of alternatives which may be selected for double-clicking, in order to correct the error. If the problem word is not in the menu of alternatives, the user may type in the problem word or spell it out. After the system has been given the correct word by any of these means, the same is input into the system.
At this point, the call up of the error correction window at step 34 has indicated to the system that there is an error. While some errors are unrelated to pronunciation errors, many are. Once the user indicates the error, the system then proceeds at step 36 to determine whether the error made by the user is recognized as one of the speech errors recognized by the system. If it is, this information is determined at step 36. The nature of the pronunciation error is then input into the system and logged at step 38. In this manner, the system keeps track of the number of errors of a particular type for the user by storing them and tallying them in hard drive 16.
In accordance with the present invention, it is contemplated that the speech training will not be triggered by a single mispronunciation. Instead, it is contemplated that repeated instances of a single type of mispronunciation error will be tallied, and only when a threshold of pronunciation errors is reached in the tally will speech training be proposed, by the appearance on the screen of a prompt window suggesting speech training. The same could take the form of a window having the words "The system has determined that it is likely that we can improve your recognition by coaching you. Would you like to speak to the speech coach?" The screen may also have a headline above the question, such as "The coach wants to talk to you!" The screen will also have a button marked "OK" and another marked "Cancel", to give the user the opportunity to click on the "OK" button to start a training session, or to click on the "Cancel" button to cancel the speech coaching session.
It is also noted that other combinations of events may be used to trigger training. For example, if the particular mispronunciation detected is a very well-defined one, such as the almost uniform tendency of some speakers to mispronounce the word "oil" as "earl", the definiteness with which this error has been determined makes training relatively likely to be necessary, and the threshold can be lowered to, for example, one instance of that error being detected. In other cases, or in the general case, one may wish to set the threshold at three, five or even ten instances of the error before the "The coach wants to talk to you!" screen is presented to the user of the system.
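A minimal sketch of this tally-and-threshold logic follows. The per-class threshold values (one instance for a well-defined error such as "oil"/"earl", five for the general case) and all names are illustrative assumptions.

```python
from collections import Counter

error_tally = Counter()
THRESHOLDS = {"oil-as-earl": 1}   # well-defined error: trigger immediately
DEFAULT_THRESHOLD = 5             # general case: several instances first

def log_error_and_check(error_class: str) -> bool:
    """Log one detected mispronunciation; True means offer the coach."""
    error_tally[error_class] += 1
    limit = THRESHOLDS.get(error_class, DEFAULT_THRESHOLD)
    return error_tally[error_class] >= limit
```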
Once a mispronunciation has been detected by the system as a result of the information input by the user to the user correction screen at step 34, the error correction algorithm operates in a manner identical to the speech recognition algorithm at step 30, except that the error correction algorithm checks the database of common phoneme errors input into the system by the software developer at step 18 and the exercise word error models and exercise phrase error models input at steps 24 and 26. In connection with this, it is noted that the so-called phoneme errors relate to particular sounds comprising one syllable or less, while the phrase and word models are somewhat more general, as described herein.
Thus, if, at step 40, the system determines that the threshold number of errors in that class has not been reached, it sends the system back to step 28, where speech recognition proceeds. If, on the other hand, a predetermined number of errors of the same class have been detected by the system and logged at step 38, at step 40 the system is sent to step 42, where the above described "The coach wants to talk to you!" screen is presented to the user, who is thus given the opportunity to train his voice.

If the user declines the opportunity to train at step 42, he is given the opportunity to train the database at step 43. If he declines that opportunity also, the system is returned to step 28, where, again, speech recognition proceeds.
However, if he accepts the opportunity to train the database, the system proceeds to step 45, where the database is trained in the same manner as a conventional speech recognition processing program.
In the other case, at step 42, when the user decides to accept speech training, the system proceeds to step 44, where the determination is made as to whether the particular error is an error in the pronunciation of a word or of what is referred to herein as a phrase. By "phrase" in this context is meant at least parts from two different words. This may mean two or more words, or the combination of one or more words and at least a syllable from another word, and most often the end of one word combined with the beginning of another word, following the tendency of natural speakers to couple sounds to each other, sometimes varying their stand-alone pronunciation. If, at step 44, the system determines that the mispronunciation is the mispronunciation of a word, the system is sent to step 46, where the system retrieves from memory words which have the same or similar mispronunciation errors.
As noted above, these words have been stored in the system, not only in the form of alphanumeric presentations, but also in high-quality audio format. The object of the storage of the high-quality audio sound is to provide for audible playback of the words in the training dialog screen.
The words retrieved at step 46 are also presented on-screen in alphanumeric form to the user and the user is invited to pronounce the word. If the word is pronounced properly, this is determined at step 48. If there is no error, the system proceeds to step 50, where the system determines whether there are two incidences of no error having occurred consecutively. If no error has occurred twice consecutively, the system is returned to act as a voice recognition system at step 28. If no error has occurred only once, at step 50 the system is returned to the training dialog screen at step 46 and the user is invited to pronounce the same or another word having the same type of mispronunciation, to ensure that the user is pronouncing the word correctly. Once the user has pronounced words twice in a row without errors, the user is returned at step 50 to the voice recognition function.
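The exit condition of this training loop, two consecutive correct pronunciations, can be sketched as follows. The callbacks stand in for the pronunciation check at step 48 and the coaching screen; names are placeholders.

```python
from itertools import cycle

def training_loop(exercise_words, check_pronunciation, show_instruction):
    """Cycle through exercise words until two correct in a row (step 50)."""
    streak = 0
    for word in cycle(exercise_words):       # same or another word, step 46
        if check_pronunciation(word):        # step 48: pronounced correctly
            streak += 1
            if streak == 2:                  # two in a row -> back to step 28
                return
        else:
            streak = 0
            show_instruction(word)           # coaching screen on error
```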
However, where an error has been detected at step 48, the system proceeds to step 52, where an instruction screen telling the user how to make the sound, with physical instructions on how to move the muscles of the mouth and tongue to achieve the sound, is presented to the user.

The screen can allow for the incorporation of any desired speech training approaches, including creative methods such as the Lessac method described in The Use and Training of the Human Voice - A Bio-Dynamic Approach to Vocal Life, Arthur Lessac, Mayfield Publishing Co. (1997), the entire disclosure of which is incorporated herein by this specific reference thereto. In the Lessac technique the user is encouraged to use what is described as "inner harmonic sensing" to enhance the description of a particular sound by having the user explore how the sound affects the user's feelings or encourages the user to some action. Other speech training methods may be employed or adapted for utilization in the methods, programs and devices of the invention, as will be apparent to those skilled in the art, for example the Alexander Technique, named for F. Matthias Alexander (1869-1955), regarding which further information is available from the web site at alexandertechnique.com. Computer-based training methods, including, for example, Hatzis et al.'s "OPTICAL LOGO-THERAPY (OLT): A computer based speech training system for the visualization of articulation using connectionist techniques" (University of Sheffield), can also be utilized. Employment of a speech training routine as an integral element of a speech recognition system can provide users who initially have poor or non-standard speech characteristics, and who diligently employ the speech training routine, with enhanced voice or speech recognition. In one desirable embodiment of the invention, the speech training routine is integrated into a speech recognition program so as to be installed and initialized as an integral element of the speech recognition program, avoiding separate installation or initialization. As described herein, the speech training routine can be system- or user-invoked in response to specific instances of mispronunciation identified by the speech recognition system, and need not be actively invoked at other times.
In an illustrative example, the Lessac method teaches the sound of the letter "N" by not only describing the physical requirements but also instructing the user to liken the sound to the "N" in violin and to "Play this consonant instrument tunefully." This screen also has a button which may be clicked to cause the system to play back the high-quality audio sound from memory, which was previously recorded during software development, as described above.
The system may also incorporate interactive techniques. This approach presents the user with a wire frame drawing of a human face depicting, amongst other information, placement of the tongue, movement of the lips, etc. The user may interactively move the wire frame drawing to get a view from various angles, or cause the sounds to be made slowly so that the facial movements can be carefully observed.
The user is then invited to say the sound again, and at step 54, the user says the word into the microphone which is coupled to the computer, which compares the word to the database for proper pronunciation and determines whether there is an error in the pronunciation of the word at step 56.

If there is error, the system is sent back to step 46 where, again, the word is displayed and the user invited to say the word into the machine to determine whether there is error, with the system testing the output to determine whether it should proceed to speech recognition at step 28, when the standard of two consecutive correct pronunciations has been reached. If there is no error at step 56, however, the tally is cleared and the system proceeds to step 28, where normal speech recognition continues.
If, at step 44, the system determines that the mispronunciation is the mispronunciation of a phrase, the system is sent to step 58, where the system retrieves from memory phrases which have the same or similar mispronunciation errors.
As noted above, these phrases have been stored in the system, not only in the form of alphanumeric presentations, but also in high-quality audio format. The object of the storage of the high-quality audio sound is to provide for audible playback of the phrases in the training dialog screen.
The phrases retrieved at step 58 are also presented on-screen in alphanumeric form to the user and the user is invited to pronounce the phrase. If the phrase is pronounced properly, this is determined at step 60. If there is no error, the system proceeds to step 62, where the system determines whether there are two incidences of no error having occurred consecutively. If no error has occurred twice, the system is returned to act as a voice recognition system at step 28. If no error has occurred only once, at step 62 the system is returned to the training dialog screen at step 58 and the user is invited to pronounce the same or another phrase having the same type of mispronunciation, to ensure that the user is pronouncing the phrase correctly. Once the user has pronounced phrases twice in a row without errors, the user is returned at step 62 to the voice recognition function.
However, where an error has been detected at step 60, the system proceeds to step 64, where an instruction screen telling the user how to make the sound, with physical instructions on how to move the muscles of the mouth and tongue to achieve the sound, is presented to the user, as well as any other techniques such as the Lessac method described herein above.
This screen also has a button which may be clicked to cause the system to playback the high-quality audio sound from memory, which was previously recorded during software development, as described above.
The user is then invited to say the sound again, and at step 66, the user says the phrase into the microphone which is coupled to the computer, which compares the phrase to the database for proper pronunciation and determines whether there is an error in the pronunciation of the phrase at step 68.
If there is error, the system is sent back to step 58 where, again, the phrase is displayed and the user invited to say the phrase into the machine to determine whether there is error, with the system testing the output to determine whether it should proceed to speech recognition at step 28, when the standard of two consecutive correct pronunciations has been reached. If there is no error at step 68, however, the tally is cleared and the system proceeds to step 28, where normal speech recognition continues, the training session having been completed.
In accordance with another embodiment of the invention, in addition to the use of Lessac techniques, as described and referenced herein, to assure the quality of the voice input by a user into the speech recognition system (through the provision of Lessac training to the user), the accuracy of the system is improved through the implementation of multiple database entries for a single word or phrase, each directed to a different version of the spoken word or phrase. By "version" is meant the same word spoken with a different emotion or Lessac energy. Thus, in addition to multiple database entries for a single word or phrase corresponding to a proper standard pronunciation and possible pronunciations corresponding to a dialect or regional pronunciation, or a mispronunciation, the system also includes the pronunciations corresponding to the pronunciation of the word with the various Lessac energies stressed, for example, buoyancy, potency, radiancy, inter-involvement, structural energy, tonal energy, and consonant energy. It is further contemplated that the thus possible recognition of speech with the various Lessac energies also utilizes regional pronunciations, or the like, pronounced with the various Lessac energies. It is noted that other energies have been postulated and that the same may be incorporated into the invention.
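One possible record layout for such a multiple-entry database is sketched below: each word keys a set of models, one per pronunciation "version" (standard, regional, and each Lessac energy). The structure, key names, and example word are assumptions, not taken from the disclosure.

```python
LESSAC_ENERGIES = ("buoyancy", "potency", "radiancy", "inter-involvement",
                   "structural", "tonal", "consonant")

# One entry per version of the same word; model data would fill the lists.
word_models = {
    "park": {
        "standard": [],                    # proper standard pronunciation
        "regional:new-england": [],        # e.g. the "pahk" pronunciation
        **{f"lessac:{e}": [] for e in LESSAC_ENERGIES},
    },
}

def entries_for(word: str, version: str) -> list:
    """Return only the models for the detected version of the word."""
    return word_models[word][version]
```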
This is in contrast to prior art systems where the objective is to gather as great a diversity as possible of spoken words and define a single database entry for a particular word. While such a database will define an average profile for a given word, including frequency, amplitude, and statistical variation information, this approach results in throwing away large amounts of information.
In contrast, the present invention uses what may be regarded as multiple databases in performing a voice recognition task. The advantage is that the statistical variation between the spoken words which were used to generate the database entry for a particular word is much smaller. Accordingly, greater accuracy may be achieved during implementation of a recognition procedure. While, in principle, more entries need to be processed during the voice recognition procedure, reduced deviations can also be used to reduce processing times. However, more importantly, there is an informational bonus, which will reduce processing times in two ways.
Firstly, once the emotion or particular Lessac energy has been detected, the system begins the recognition process with the database entry set which corresponds to the particular emotion or particular Lessac energy. Thus, processing based on a larger than prior art database is substantially limited. In other words, if a good-enough match is achieved in the last identified Lessac energy database, further processing will be terminated and an output recognition word or phrase is produced.
The second informational bonus is recognition of an emotional state or Lessac energy sequence. Such an emotional state or Lessac energy sequence will be referred to herein as a "clique", for example a "buoyancy clique". Statistically, different sets of words and phrases tend to be associated with one clique more than another. In accordance with the invention, such parsing of input speech into cliques for recognition is the creation of another dimension to natural language processing, that is, the process of recognizing words and anticipating future words to be recognized based on context.
This may be better understood from figure 2, which is a detail showing the implementation of the invention using the above inventive clique differentiating method. The method illustrated in figure 2 is identical to that illustrated in figure 1, except that in contrast to step 18, at step 118, in addition to the input of common phoneme errors, the system also receives phonemes having various pronunciation colors which are parsed into the cliques with which they are associated. A fuller understanding of the detail illustrated in figure 2 may be had from figure 3, which will be described below.
In accordance with the invention, at step 120 exercise word models are received with various energies by the system in a process aimed at developing a comprehensive language model for each energy and emotion clique to be included within the system. In similar fashion, at step 122 exercise phrase models are received with various energies by the system in a process aimed at developing a comprehensive language model for each energy and emotion clique to be included within the system. At step 124 the system databases exercise word error models in the same manner as step 24. Thus, the generation of the database in the method of figure 2 parallels the generation of the database in the method of figure 1, but with the addition of, for example, Lessac energies. Other than this, the database generation portion of the method is substantially identical.
However, in accordance with the figure 2 embodiment, recognition is achieved in a manner different in several respects, as will be described in detail below with reference to figure 4.
The method illustrated in figure 2 may be better understood with reference to figure 3, which illustrates the input of data into the system which begins at step 118. More particularly, the information gathering methodology illustrated in figure 3 may be broadly segregated into the optional generation, at process step group 412, of the database using speakers trained to impart the correct color or expressiveness to language. For example, Lessac trained speakers may be used. In addition, or optionally, at process step group 414 speech of numerous individuals may be reviewed by trained speech professionals, for example Lessac practitioners, identified with a particular clique and sent to the database with that information. Finally, once the system has been provided with spoken language samples tagged with appropriate color information, the system may be fed with additional spoken language samples at process step group 416, where the system recognizes the color of the spoken language sample and assigns it to the appropriate clique. Typical cliques would be cliques associated with happiness, sadness, excitement, terror, or the various Lessac energies, for example, buoyancy, potency, radiancy, inter-involvement, structural energy, tonal energy, and consonant energy.
In accordance with a preferred embodiment of the invention, the process of the generation of a database including color information starts at step 418 with a trained Lessac speaker speaking a word with buoyancy. At step 420, the spoken language sample is stored in a database with a tag indicating that it possesses the quality of buoyancy. At steps 422-32, trained Lessac speakers speak words into the system with the Lessac characteristics of potency, radiancy, inter-involvement, structural energy, tonal energy, and consonant energy. Alternatively, or in addition, trained speakers may input into the system at additional steps spoken language samples reflecting various emotions, such as happiness, sadness, etc. At steps 434-444 the spoken language samples collected at steps 422-32 are stored in the database tagged with appropriate color information for segregation into cliques.
At process step group 414, starting with step 446, a collection of audio clips is then input into the system. At step 448 each sample is listened to by a trained speech professional who identifies those words which contain Lessac buoyancy, for example. The identified spoken language samples are then tagged with a buoyancy tag and stored at step 450. At steps 452-62, trained speech practitioners identify words with the Lessac characteristics of potency, radiancy, inter-involvement, structural energy, tonal energy, and consonant energy. Alternatively, or in addition, the trained speech professionals may identify for input into the system at additional steps spoken language samples reflecting various emotions, such as happiness, sadness, etc. At steps 464-474 the spoken language samples collected at steps 452-62 are stored in the database tagged with appropriate color information for segregation into cliques. Spoken speech samples not recognized as containing any particular characteristic or color which is being assembled may then be discarded.
At process step group 416, starting with step 476, an additional and potentially much larger collection of audio clips, i.e., spoken speech samples, is then input into the system. At step 478 each sample is analyzed by a voice recognition routine in which the information input into the system at process step groups 412 and 414 is used by the voice recognition routine (in which all of the cliques are used like a single large database) for the purpose of determining the existence of the quality of Lessac buoyancy. The identified spoken language samples are then tagged with a buoyancy tag and stored at step 480. At steps 482-92, the voice recognition system identifies words with the Lessac characteristics of potency, radiancy, inter-involvement, structural energy, tonal energy, and consonant energy. Alternatively, or in addition, the voice recognition software may identify for input into the system at additional steps spoken language samples reflecting various emotions, such as happiness, sadness, etc. At steps 494-504 the spoken language samples collected at steps 482-92 are stored in the database tagged with appropriate color information for segregation into cliques. Spoken speech samples not recognized as containing any particular characteristic or color which is being assembled may then be discarded.
Optionally, in addition to the above processing, the process illustrated in figure 3, and described above, may be repeated for phrases in addition to words.
The result of the above is to dramatically increase the size of the database used to generate the recognition database being used for recognition.
As noted above, prior art speech recognition engines work by generating databases of phonemes which make up words in the database. This is done on a statistical basis by having huge amounts of English text read into the database by speakers. The text is separated into various phonemes and the pronunciation of each individual phoneme is statistically normalized to use as a reference during speech recognition. In contrast, the present invention contemplates the statistical normalization of phonemes, words and/or phrases, as segregated into cliques.
In accordance with the inventive approach, the inventive system does not contemplate mixing all sorts of pronunciations, all sorts of proper pronunciations, and so forth of a given word to get a standardized pronunciation and then implement voice recognition off that information. Rather, the inventive system considers proper pronunciations based upon a particular word order, and the frequency with which that particular proper pronunciation of a given word occurs, to get improved recognition accuracy and speed.
In accordance with an alternative embodiment, the system contemplates the input into the system of various pronunciations. This is followed by counting the occurrences of the particular variations of pronunciation and associating with each one a statistical likelihood.
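A minimal sketch of this counting step, assuming pronunciation variants are already transcribed as strings (the example words are invented for illustration):

```python
from collections import Counter

def variant_likelihoods(observed_variants):
    """Count observed pronunciation variants and return their likelihoods."""
    counts = Counter(observed_variants)
    total = sum(counts.values())
    return {variant: n / total for variant, n in counts.items()}

# e.g. variant_likelihoods(["asked", "axed", "asked", "asked"])
#      -> {"asked": 0.75, "axed": 0.25}
```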
As alluded to above, the system is equally applicable to variations including 1) real mispronunciations with very low statistical incidences, 2) common lazy mispronunciations (e.g., "I axed you the question"), 3) statistically low regionalisms, 4) second language mispronunciations, and 5) word order pronunciation variations. It is believed that common lazy mispronunciations, statistically low regionalisms, and word order pronunciation variations will be of primary importance.
It is also contemplated that, in the case of recognition systems which also include text-to-speech responses, for example in an information line or an automated ordering system, the present invention will allow the implementation of the detection of, for example, an emotion, allowing an improved response to a customer on an automated line.
As noted above, recognition in accordance with the embodiment of figure 3 is different from conventional recognition operations. More particularly, at step 506 audio is input into the system and a conventional speech recognition algorithm is used to determine the database entry which most nearly comprises a match at step 508. At step 510 the energy associated with that entry is recognized. Energies are then tallied at step 512 to determine the overall color of the speech being input into the system, and at step 514 the system limits the database being searched for a match to the database with the relevant Lessac energy or other relevant color.
At step 516, recognition of speech then proceeds with the limited database. In the event that a good match is not seen in the limited database, the software explores other databases in an attempt to find a better match. At the same time, the energy or color associated with each recognition is noted at step 518. Energies are tallied at step 520, and in response to a predominance of the new energy, the energy preference is revised at step 522, the system continuing to implement the new energy or color preference at step 514 and proceeding through steps 516-522 until the spoken language segment to be transcribed into text has been presented in textual form.
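The loop at steps 506-522 can be sketched as follows: each recognition votes for the energy of its matched entry, the running tally picks which clique's database is searched first, and the other cliques serve as fallbacks when no good-enough match is found. All helper callables and data shapes here are hypothetical.

```python
from collections import Counter

def recognize_stream(tokens, clique_dbs, match, good_enough):
    tally = Counter()
    preferred = None
    output = []
    for token in tokens:                              # step 506: audio in
        order = ([preferred] if preferred else []) + \
                [c for c in clique_dbs if c != preferred]
        for clique in order:                          # steps 514/516
            word, score = match(token, clique_dbs[clique])   # step 508
            if good_enough(score):                    # good match: stop early
                break                                 # else try other cliques
        tally[clique] += 1                            # steps 510-512, 518-520
        preferred = tally.most_common(1)[0][0]        # step 522: revise pref
        output.append(word)
    return output
```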
The present invention is in contrast to prior art speech recognition systems, which result in numerous situations where recognition of the correct word is ambiguous. In the prior art, this is addressed by arraying the candidate words from highest to lowest according to the probability of the candidate word being the correct word. If the highest word were to receive 100%, then that would be the correct word. The reference database for recognizing phonetic sequences is created by capturing pronunciations of words from thousands of persons representing the general population of speakers.
It is noted that there are sound-alike words that have different meanings and which require context for clarification. For example, the following three words would all have equivalent phonemic identities: "to," "too," and "two." In this invention, we are addressing those circumstances where a candidate word is assigned a lessened probability of correctness because of what might be considered a "mispronunciation."
In one embodiment of the inventive method, we begin with rule-based creations of multiple databases where the speakers are Certified Practitioners of the Lessac System for voice use and communication.
The databases serve as standard references of phonetic sequences and acoustic profiles used to automatically identify speech and present the text version of the spoken words. When processed phonetic data from an SR (speech recognition) processor yield lower than acceptable candidate word probabilities, one or more alternate databases are searched for matches yielding higher candidate word probabilities. A match indicates that a candidate word has been found which has a higher probability of being the correct candidate word based on an alternative phonetic sequence.
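A minimal sketch of this fallback search, assuming a hypothetical score_against helper that returns the best candidate word and its probability for a given database; the acceptability threshold is an invented example value.

```python
def recognize_with_alternates(phonetic_seq, primary_db, alternate_dbs,
                              score_against, acceptable=0.8):
    """Fall back to alternate pronunciation databases on low probability."""
    word, prob = score_against(phonetic_seq, primary_db)
    if prob >= acceptable:
        return word, prob
    for alt_db in alternate_dbs:          # e.g. word-position, dialect, and
        alt_word, alt_prob = score_against(phonetic_seq, alt_db)  # error DBs
        if alt_prob > prob:
            word, prob = alt_word, alt_prob   # better candidate found
    return word, prob
```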
In the inventive method, we address five types of "mispronunciations" according to the prior art. They are identified above and discussed below, presented roughly in order from lowest to highest incidence of occurrence in normal speech: real mispronunciations which are idiosyncratic to individual speakers and which have very low statistical incidences; colloquial articulatory errors of pronunciation that are geographically dispersed among the population, e.g., "I axed you a question"; regionalisms that range from relatively low incidence [such as the Midwestern question "What can I getch yez?" in place of "What can I get for you?"] to relatively high incidence [such as the Southern use of "you all" or "y'all" in place of the generic "you" or "everyone"],
with the exception of phoneme substitutions by non-native speakers attempting to pronounce the specified language, an example being a native Japanese speaker who pronounces "long" as "rong", which matches the phonetic sequence for "wrong";
Whenever "phonetic sequence" is used, it refers to speech data including an acoustic profile of phonemes, consonants, vowels, and measures of duration and tonality.
relatively high incidences of mispronunciation that occur because a native speaker in one language is attempting to use proper pronunciation in a second, non-native, language [when a speaker's pronunciation of a previously mispronounced word is correctly recognized as frequently as when pronounced by a native speaker, the non-native speaker is said to be "bilingual"]; and,
pronunciations that have insufficient or ambiguous word candidate probabilities, so the correct candidate word is not recognized from the phonetic data, or where post phonetic analysis fails to yield a sufficient "goodness of fit" word score to the candidate, because the prior art forms of recognition engines fail to distinguish those rule-based changes for pronouncing words due to the locations of words and consonants in the overall sequence of words. In the inventive method described below, these are not treated as mispronunciations, but as errors of recognition.
Alternative pronunciations are used in two ways in the inventive method, namely to identify individual patterns of mispronunciations made by individual speakers; and, to increase the percent of correctly identified words uttered in a sequence of words, thus raising the overall effectiveness level of speaker independent natural language recognition.
In the inventive method, we do not combine pronunciations of a given word by speakers having substantial dispersions of articulatory patterns, dialects, and native languages to arrive at an "average" pronunciation of a given word that is then normalized and used as the phonetic reference for assigning a "goodness of fit" probability to the word. Rather, we use multiple phonetic databases having alternative pronunciations for each dictionary word spoken individually and as part of a phrase or sentence, and according to the type of alternative pronunciation. In the preferred embodiment of the invention, these databases are prepared from words and phrases spoken by, for example, Lessac Certified Practitioners, using Lessac phonetics and articulatory principles. Such Certified Speakers are used to pronounce all words, whether correctly pronounced or mispronounced.
In the invention, alternative pronunciations are determined by rules set forth according to the Lessac System of instruction for voice use and communication. Each such rule is used to identify an alternative phoneme and its associated grapheme for determining the proper text output of the candidate word.
There is no database of type 1 mispronunciations as these are "true" errors of pronunciation without sufficient incidences to identify a pattern of alternative pronunciation, either within the pronunciations of a single speaker, or among all the pronunciations of all speakers.
Types 2, 3 and 5 can be used to identify a pattern of word and phrase specific alternative pronunciations of a single speaker as well as the pronunciations of all speakers. In the latter case, which is often referred to as natural language recognition, the circumstances can be applied either in circumstances of speaker-identified natural language recognition or where speaker independent recognition is being used.
The invention creates multiple databases containing phonetic sequences and acoustic data profiles of phonemes, vowels and consonants, along with their associated graphemes. The output text presents the spelled word as one might find it in a printed reference dictionary such as Webster's Collegiate Dictionary.
One preferred embodiment of the invention comprises the following databases:
Phonetic Sequences, acoustic profiles, graphemes and text for Correctly Pronounced Individual Words
Alternative Phonetic Sequences, acoustic profiles, graphemes and text for Identifying Alternative Pronunciations of a Word Based on the Word and its Position in a Pronounced Sequence of Words
Phonetic Sequences, acoustic profiles, graphemes, and text for Identifying Correct Alternative Words and Pronunciations of Words according to a Known Dialect of the Native Language
Phonetic Sequences, acoustic profiles, graphemes and text for Correctly Identifying Alternative Pronunciations of Words according to Commonly Occurring Errors of Articulation by Native Speakers of a Specified Language.
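A hypothetical record layout for entries in the four databases listed above is sketched here; the field names are illustrative, not the disclosure's own terminology.

```python
from dataclasses import dataclass

@dataclass
class PronunciationEntry:
    phonetic_sequence: str   # e.g. a Lessac phonetic notation
    acoustic_profile: bytes  # stored audio / spectral reference data
    graphemes: str           # written representation for assembling output
    text: str                # dictionary spelling, e.g. from Webster's
    database: str            # "correct", "word-position", "dialect",
                             # or "articulation-error"
```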
The general method of constructing each of the databases is detailed below.
1. Basic Acoustical Dictionary of Phonetic Sequences for Correctly Pronounced Individual Words A sample of words that is statistically sufficient to be representative of phonetic sequences for all words in a dictionary of American English [for example, Webster's Collegiate Dictionary] is drawn. Each word in the sample is pronounced by persons representing high pitched, low pitched, rapidly spoken and slowly spoken, as well as male, female, young and old speakers. All frequencies, including harmonics created by the voice, in the range of [500 and 20,000 Hz] are recorded for each speaker's pronunciation of the sample words. All persons who pronounce words for the database must be Certified Practitioners of a single, kinesensic method of voice use and training, having its own phonetics and pronunciation rules. The Lessac System of instruction for voice use and communication is such a system and is meant wherever the term Certified Speaker is used. Speakers both pronounce the sample words according to the phonetic and articulatory rules and then listen to a high fidelity reproduction of the words pronounced by another speaker. They identify unintended errors of pronunciation and remove them from inclusion in the acoustical phonetic database. This basic dictionary will use the dialect of "Educated General American Speech", as it is the dialect most used by speech professionals in the various media and the one regularly heard on an everyday basis by the largest cross-section of the American population.
The phonetic sequence for each sample word that is accepted as correctly pronounced is then included in the database and is stored in its entirety as a single pronunciation of a given word. The associated grapheme and word, including its phonetic notations and pronunciation rules, will also be stored. This database is the "original recorded database" that will allow every word spoken by every speaker to be accessed separately. Phonetic sequences for a word are combined and then normalized. The word and its "normalized" phonetic sequence are stored in the basic dictionary database to serve as the speech recognition reference standard for properly pronounced words in the basic dictionary when spoken individually. The basic dictionary database will also contain the phonetic notations, pronunciation rules, and the associated graphemes for presenting the word as text.
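One plausible reading of the combining and normalizing step is sketched below, assuming each recorded pronunciation has already been reduced to a matrix of per-frame acoustic features; the patent does not specify the combining formula, so simple time-resampling and averaging stand in for it here.

import numpy as np

def normalize_pronunciations(tokens, frames=50):
    # Combine several speakers' tokens of the same word into one reference
    # pattern: resample each token to a fixed number of frames, then average
    # across speakers. All tokens must share the same number of feature bands.
    resampled = []
    for token in tokens:                        # token: (n_frames, n_bands)
        idx = np.linspace(0, len(token) - 1, frames)
        resampled.append(np.asarray(token)[np.round(idx).astype(int)])
    return np.mean(resampled, axis=0)           # the "normalized" sequence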
For words in American English that were not pronounced, the normalized phonemes and sequence of phonemes will be simulated according to the phonetic sequence for the word not spoken, using the appropriate rules of pronunciation and indicated acoustic profile as they relate to the word not spoken, and with the appropriate graphemes for representing the word as text. Various samples of these simulated word pronunciations will be reproduced and listened to by Certified Practitioners, and those that are judged to have unintended errors are identified and set aside for pronunciation correction using the same Certified Practitioners as the ones pronouncing the words for the phonetic dictionary development.
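A minimal sketch of how such a simulation might proceed, assuming the normalized acoustic profile of each phoneme is already stored (phoneme_profiles is a hypothetical mapping):

def simulate_word(phoneme_sequence, phoneme_profiles):
    # Build a simulated pronunciation for an unspoken word by concatenating
    # the stored normalized profiles of its phonemes in dictionary order.
    # phoneme_profiles: {phoneme symbol: list of acoustic feature frames}
    return [frame for p in phoneme_sequence for frame in phoneme_profiles[p]]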
The resulting dictionary database can be of any size or specialized American English use that has been assembled by pronunciation rules consistently applied and then normalized to serve as a database dictionary reference for speech recognition of words pronounced individually.
2. Basic Dictionary of Alternative Phonetic Sequences and Acoustic Profiles for Identifying Correct Alternative Pronunciations of a Word Based on the Word and its Position in a Pronounced Sequence of Words
This database is built using methods and procedures similar to those used for Database 1 above. The differences are in the words and pronunciation rules for words found in this alternative proper pronunciation database. In this database the dictionary comprises words for which the pronunciation of the word is modified because of the word and its position in a pronounced sequence of words. The modifications of pronunciation yield alternative phonetic sequences for a word where the alternative phoneme(s) is (are) properly used based on the pronunciation rule that applies to that word's position in the sequence of words uttered. This process yields a dictionary of words that have alternative phonemic sequences for the corresponding word in the basic dictionary. The alternative phonemes and corresponding alternative graphemes map to the correct text form of the word and represent a correct alternative pronunciation of the word because of the word's position in the sequence of words being pronounced.
The sampling procedure here is different from that of Database 1. In this case, it is the sequence of words uttered and the pronunciation rule applying to the sequence of words that results in an alternative phonetic sequence that is recognized as correct pronunciation. Thus, the sample to be drawn involves a statistically adequate sample of combinations of words where the pronunciation rule is dependent upon the word and on its relationship to the beginning and/or ending of the preceding and/or following words in the phrase to be spoken. The application of the pronunciation rules results in an alternative correct pronunciation of such words when spoken as part of a sequence of words. This sample is drawn using a computational linguistic analysis of beginnings and endings of words and the identifying of all combinations that are cognitively meaningful as they are a recognized part of speech when used in combination, e.g. in the sequence 'help pack', one pronounces the two words together with a prepare and link pronunciation rule. Under this rule, 'help' does not have its last consonant 'p' fully pronounced but ends with the mouth fully prepared to pronounce the 'p' in the following word 'pack'. The two words are "linked" by a shorter than normal pause between the end of 'help' and the commencing of 'pack' than would normally occur between words.
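The computational linguistic screen for such combinations might, as a rough heuristic only, look for adjacent words where the final consonant of one word prepares the initial consonant of the next; the actual Lessac rules are considerably richer than this sketch.

def prepare_and_link_candidates(word_pairs):
    # Flag word pairs plausibly governed by a "prepare and link" rule: the
    # first word ends in a stop consonant that the next word begins with,
    # as in ('help', 'pack'). Spelling stands in for phonetics here.
    stops = set("pbtdkg")
    return [(w1, w2) for w1, w2 in word_pairs
            if w1 and w2 and w1[-1] in stops and w2[0] == w1[-1]]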

Again, as in preparing Database 1 above, a Certified Speaker becomes a Certified Listener to high-fidelity reproductions of the pronunciation of words by a speaker other than the listener. The listener notes and removes those words that have unintended errors of pronunciation for the required pronunciation of the word according to its position in the sequence of words being pronounced. In those cases where both the preceding and the following word also have correct alternative pronunciations, an error of pronunciation of one will require the removal of the word sequence pronounced.
The phonetic sequences for each sample word, as well as those of the preceding and/or following word also involved, that are accepted as correctly pronounced, are then included in the database and stored in their entirety as a single pronunciation of a given word. The associated grapheme and word, including its pronunciation rule, will also be stored. This database is the "original recorded database" that will allow every word, spoken by every speaker for all words among a specified sequence of uttered words, to be accessed separately.
A combined version of the phonetic sequences is then normalized and stored in the basic dictionary database to serve as the speech recognition reference standard for alternative pronunciations of a word based on the word and its position in a sequence of pronounced words. This database will also contain the pronunciation rules and the associated graphemes for displaying the involved words as text.
For those cognitively meaningful combinations of word beginnings and endings in American English that were not pronounced, the normalized phonetic sequence for the words not spoken will be simulated, using the appropriate rules of phonetic notation and pronunciation, and with the appropriate graphemes for displaying the words as text. Various samples of these simulated word pronunciations will be reproduced and listened to by Certified Practitioners, and those that are judged to have unintended errors are identified and set aside for pronunciation correction using the same Certified Practitioners that originally pronounced words for this particular dictionary database.
The resulting phonetic dictionary database can be of any size or specialized American English use that has been assembled by pronunciation rules consistently applied and then normalized to serve as a database dictionary reference for speech recognition of alternative pronunciations of a word based on the word and its position in a sequence of pronounced words.
3. Dictionary of Phonetic Sequences for Identifying Correct Alternative Pronunciations of a Word according to a Known Dialect of the Native Language
The universe of speaking styles and dialects can be obtained by consultation with linguistics experts and then estimating the numbers of persons speaking each particular pattern. Depending on the minimum number of speakers with a particular dialect that would make it economically viable to have a specialized speech recognition database, there could be numerous such databases. This database is, for just one example, the Southern U.S. Dialect of Educated General American Speech (Southern Dialect). To establish databases of correctly identified proper pronunciations for a given dialect, the inventive method develops rules-based pronunciations for words spoken in the dialect. The same inventive method switches to an alternative reference database when phonetic patterns indicate that higher probabilities of candidate words are likely when using an alternative correct pronunciation database, whether for an American English word pronounced individually or according to the alteration in pronunciation required because of the word's location in a sequence of uttered words.
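The switching logic might be sketched as follows, with a toy goodness-of-fit scorer standing in for whatever probability measure the recognition engine actually uses; both functions are assumptions for illustration.

import numpy as np

def goodness_of_fit(token, reference):
    # Toy similarity: negative mean squared distance between equal-shape
    # feature matrices. A stand-in for the engine's real probability score.
    return -float(np.mean((np.asarray(token) - np.asarray(reference)) ** 2))

def recognize_with_dialect_switching(token, dictionaries):
    # Score the token against every reference database (basic, positional,
    # Southern Dialect, ...) and adopt whichever database yields the most
    # probable candidate word.
    best = None
    for name, entries in dictionaries.items():   # {"basic": [(word, ref), ...]}
        for word, reference in entries:
            p = goodness_of_fit(token, reference)
            if best is None or p > best[2]:
                best = (name, word, p)
    return best                                  # (database, word, score)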
4. Database of Rules-Based dialectic pronunciations according to the Lessac System of pronunciation and communication.
5. Individual Word Pronunciation Database
This database is created by drawing a sample of words that is statistically sufficient to be representative of all forms of alternative word use patterns and words having non-standard pronunciations associated with a Southern Dialect for words in an American English dictionary.
To be statistically sufficient in terms of the number and variety of pronunciations within the dialect, at least fifty native speakers of the Southern Dialect will need to be identified and asked to pronounce each of the sample words individually.
The phonetic sequence for each sample word is then included in a database and stored in its entirety. The associated grapheme and word, including its pronunciation rule, will also be stored. This database is the "original recorded database of Native Southern U.S. Dialect of Educated General American Speech Speakers" that will allow every word spoken by every speaker to be accessed separately.
Certified Lessac Practitioners will listen to high-fidelity reproductions of the Southern Dialect Speakers and then provide rules and associated notations for properly speaking the dialect.
Each Lessac Practitioner will then pronounce the same sample of words, in accordance with the rules for speaking the dialect, as were spoken by the Southern Dialect Speakers. Each Certified Practitioner will pronounce the sample words and then listen to high fidelity reproduction of the words pronounced by another Certified Practitioner so that unintended errors of pronunciation are removed from inclusion in the database of pronunciations.
The phonetic sequence for each sample word accepted as correctly pronounced is then included in the database and stored in its entirety as a single pronunciation of a given word. The associated grapheme and word text, including its pronunciation rule, will also be stored. This database is the "original recorded database of Lessac Certified Practitioners speaking with a Southern U.S. Dialect of Educated General American Speech" that will allow every word to be accessed separately.
A combined version of the phonetic sequences for sample words is then normalized and stored in the basic dictionary database to serve as the Southern Dialect speech recognition reference standard for individually pronounced words. The dictionary database also contains the associated graphemes for displaying the text form of the recognized word.
6. Word Pronunciation Database for a Dialect where the Pronunciation of a Word is Modified because of the Word's Position in a Sequence of Words
This database is similar to Database 5 above, but involves pronouncing particular sequences of words as phrases, sentences, or paragraphs. This dictionary database comprises words for which the pronunciation of words in the Southern Dialect is modified because of the word and its position in a sequence of words. The database contains alternative phonetic sequences for a word based on the pronunciation rule that applies to that word's position in the sequence of words. This process yields a dictionary database of words that have alternative phonemic sequences for the corresponding word in the individual word dictionary. The alternative phonemes and corresponding alternative graphemes map to the correct text form of the word and represent a correctly used alternative pronunciation of the word.
The sampling procedure here is different from that presented for Database 5 above. In this present case, it is the sequence of words and the rules for their pronunciation that result in an alternative phonetic sequence correctly recognizing the word as pronounced. Thus, the sample to be drawn involves a statistically adequate sample of combinations of words where the pronunciation rules for the beginning and/or ending of the preceding and/or following words are different from the rules for pronouncing individual words. Computational linguistic analysis is used to estimate the number of beginnings and endings of words for combinations that are cognitively meaningful, using the words in the particular American English dictionary selected to represent the universe of American English words. Sample sizes will be sufficient to represent the relative incidence of word combinations in cognitively meaningful combinations.
Again, as in preparing Database 5 above, at least fifty native speakers of the Southern Dialect will pronounce the phrases containing the word sequences for the sample of cognitively meaningful combinations. This first database of high fidelity recordings is the "Original Database of Native Southern U.S. Dialect of Educated General American Speech Speakers" that will allow every word, spoken by every speaker for all words among a specified sequence of uttered words, to be accessed separately.
Certified Practitioners will listen to high-fidelity reproductions of the Southern Dialect speakers and then provide rules and associated notations for proper alternative pronunciation of words according to their position in a sequence of words. Each Certified Practitioner will then pronounce the same sample of sequenced words in accordance with the rules for speaking the dialect [i.e., they will speak the same sample sets as were spoken by the Native Southern U.S. Dialect of Educated General American Speech Speakers].
Each Certified Speaker then listens to high-fidelity reproductions of pronunciations of the sample words by another Certified Speaker. The listener notes and removes those words that have unintended errors of pronunciation for the required pronunciation of the words. In those cases where the preceding and/or following words also have correct alternative pronunciations, an error of pronunciation of any one word of the sequence will require the removal of the pronounced word sequence.
The phonetic sequence for each word in the cognitively meaningful combination phrases is then included in the database and stored in its entirety as a single pronunciation of a given word in an identified sequence of words. The associated graphemes and resulting text for the word, including its alternative pronunciation rule, will also be stored. This database is the "Original Recorded Database of Southern U.S. Dialect of Educated General American Speech Pronounced by Certified Practitioners." This database will allow every word, spoken by each Certified Practitioner, to be accessed separately.
A combined version of the phonetic sequences is then normalized and stored in the basic dictionary database to serve as the Southern U.S. Dialect of Educated General American Speech database. It is the speech recognition reference standard for alternative pronunciations of a word based on the word and its position in the sequence of words. The associated pronunciation rules, graphemes, and text for displaying each word are also stored in the database.
For the cognitively meaningful combinations of word beginnings and endings in American English that were not pronounced using the Southern U.S. Dialect of Educated General American Speech, the normalized phonetic sequences for the words not spoken will be created by simulation, using the appropriate rules of pronunciation as they relate to the words not spoken, and with the appropriate graphemes involved. Various samples of these simulated word pronunciations will be reproduced in high-fidelity sound and listened to by Certified Lessac Practitioners, and those that are judged to have unintended errors (i.e. they are an exception to the rule) are identified and set aside for re-entering the correct pronunciation using Certified Practitioners as speakers pronouncing the words. The assembled exceptions are collected and subsequently used to examine whether it is possible to create rules for identifying the exceptions to the rule so that the pronunciation that corresponds to the exceptions rule can be accurately recognized.
The resulting dictionary database of the Southern U.S. Dialect of Educated General American Speech serves as a reference standard for alternative pronunciations of words as determined by the word's position in a sequence of pronounced words. The dictionary database, using simulated word pronunciations, can be of any size in terms of number of words, or for any specialized American English usage.
7. Dictionary of Phonemic Sequences for Correctly Identifying Alternative Pronunciations based on Commonly Occurring Types of Improper Articulations in Native Language Speakers
These mispronunciations are generally due to improper articulation of consonants. Some examples are 'cah' in place of 'car', and 'I axed you a question.'
These mispronunciations are speaker specific. These are what one trains one's speaker-dependent dictation software to recognize when one dictates. This is usually easier than learning how to pronounce the word correctly, especially in the case of run-on phrases or idiosyncratic individual word pronunciations which have become habitual. Many of these involve words of infrequent use in daily language, but the errors are committed by many persons, so they become statistically identifiable as 'common mispronunciations.' Some examples are 'nuculer' in place of 'nuclear,' 'libary' in place of 'library,' and 'Febuary' for 'February.' In terms of an individual speaker's pronunciations, one will find the same pattern appearing wherever similar word and phrase sequences occur, or where words having similar articulation requirements are encountered.
8. Database of common mispronunciations according to the Lessac System of pronunciation and communication.
Identifying common mispronunciations of words is accomplished by having ordinary native speakers of the Educated General American dialect pronounce words individually, as well as when used in phrases and sentences that represent samples of normal words and phrases where the pronunciation of a given word is modified according to its position in the sequence of words uttered. Text output from presently available speech recognition software is matched against the words and phrases the individuals were asked to pronounce.
The recorded pronunciations for the commonly mispronounced words are reviewed by Certified Practitioners to diagnose and note the rules of pronunciation that were being ignored or improperly applied. The acoustic data are normalized and used to provide alternative phonetic sequences for the dictionary of words commonly mispronounced, thereby identifying the correct word commonly mispronounced and outputting the proper text representation.

9. Individual Word Mispronunciation Database
This database is created by using the word sample as developed for Database 1 above ['Basic Dictionary of Phonemic Sequences for Correctly Pronounced Individual Words'].
Each word in the sample is read by numerous persons representing high pitched, low pitched, rapidly spoken and slowly spoken, as well as male, female, young and old speakers. All frequencies, including harmonics created by the voice, within the frequency range of 500 to 20,000 Hz are captured in the readings of individual words.
The speakers use presently available speech recognition software and pronounce the individual words. Text output from the speech recognition software is matched against the input words the individuals were asked to pronounce. A mismatch between the text output and the required word identifies the mispronunciation.
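A minimal sketch of this mismatch test, assuming the prompted words and the recognizer's text output have been aligned word for word:

def find_mispronunciations(prompted, recognized):
    # Any word where the recognizer's output differs from the word the
    # speaker was asked to read marks a candidate mispronunciation.
    return [(want, got) for want, got in zip(prompted, recognized)
            if want.lower() != got.lower()]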
The phonetic sequence for each sample word is included in a database and stored in its entirety. The associated graphemes and word, including its pronunciation rule, are also stored. The database is the "original recorded database for identifying common mispronunciations" that will allow every word spoken by every speaker to be accessed separately.
A minimum frequency of mispronunciation of a given word is established and all the words that exceed that frequency are copied into a separate database of common mispronunciations. Certified Practitioners will listen to high-fidelity reproductions of the mispronunciations and then provide pronunciation rules and associated notations for accurately uttering the commonly mispronounced words. The acoustic patterns will also be normalized and compared to the acoustic patterns for the correct pronunciation of each mispronounced word.
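The frequency cut-off might be applied as sketched below; the 5-per-100 figure is an assumed cutoff, not one taken from the patent.

from collections import Counter

def common_mispronunciations(mismatches, attempts, cutoff=0.05):
    # Copy into the common-mispronunciation database every word whose
    # observed mispronunciation rate across all speakers meets the cutoff.
    # attempts: {word: total pronunciation attempts for that word}
    errors = Counter(want for want, _ in mismatches)
    return {w: errors[w] / attempts[w]
            for w in errors if errors[w] / attempts[w] >= cutoff}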
Each Certified Practitioner will then utter the mispronounced words in accordance with the rules for speaking common mispronunciations. Each Certified Practitioner will pronounce the word samples and then listen to a high fidelity reproduction of the same word samples pronounced by another speaker, noting those pronunciations that failed to follow the rules of mispronunciation. The pronunciations that failed are removed from the database of mispronunciations.
The phonetic sequence for each sample word accepted as accurately mispronounced remains included in the database, which is stored as a single pronunciation of a given word. The associated grapheme and word, including its mispronunciation rule, will also be stored. This database is the "original recorded database of Certified Practitioners providing accurate pronunciations of common mispronunciations" that will allow every accurately mispronounced word to be accessed separately.

A combined version of the phonetic sequences is normalized and stored in the basic dictionary database to serve as the speech recognition reference standard for commonly mispronounced words spoken individually. The dictionary database of common mispronunciations also contains the associated graphemes for portraying the text of the correct word recognized in its mispronounced form.
10. Database of Common Word Mispronunciations When Speaking a Sequence of Words
This database of words is built using the same sample of words and phrases as in Database 2 above [Basic Dictionary of Alternative Phonetic Sequences for Identifying Correct Alternative Pronunciations of a Word Based on the Word's Position in an Uttered Sequence of Words]. Each sample sequence is pronounced by numerous persons representing high pitched, low pitched, rapidly spoken and slowly spoken, as well as male, female, young and old speakers. All frequencies, including harmonics created by the voice, in the frequency range of 500 to 20,000 Hz are captured in the pronunciations.
The speakers use presently available speech recognition software and pronounce the word sequences comprising the sample. Text output from the speech recognition software is matched against the input words the individuals were asked to pronounce. A mismatch between the text output and the required word identifies the mispronunciation.
The phonetic sequence for each sample word is included in a database and stored in its entirety. In this database, the dictionary of common mispronunciations when speaking a sequence of words, a mispronunciation is determined when the word in the sequence of text read is not recognized correctly either as an individual word, or as an alternative pronunciation for the correct individual word where the pronunciation of the word is modified because of the word's position in a sequence of uttered words.
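That two-part test might be sketched as follows, reusing the toy goodness_of_fit scorer from the dialect section above; the threshold is an assumed tuning parameter.

def is_sequence_mispronunciation(word, token, individual_db, positional_db,
                                 threshold):
    # A word read in sequence counts as mispronounced when its token matches
    # neither the individual-word reference nor any positional alternative
    # above the confidence threshold.
    candidates = individual_db.get(word, []) + positional_db.get(word, [])
    return all(goodness_of_fit(token, ref) < threshold for ref in candidates)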
A minimum frequency of mispronunciation of a given word in a given sequence is established and all the words that exceed that frequency are copied into a separate database of common mispronunciations. Certified Practitioners will listen to high-fidelity reproductions of the mispronunciations and then provide pronunciation rules and associated notations for accurately uttering the commonly mispronounced words. The acoustic patterns will also be normalized and compared to the acoustic patterns for the correct pronunciation of each mispronounced word.

Each Certified Practitioner will then accurately mispronounce the commonly mispronounced words, in accordance with the rules for speaking common mispronunciations. Each Certified Practitioner will pronounce the sample of phrases and sentences and then listen to a high fidelity reproduction of the words pronounced by another Certified Practitioner. Those pronunciations that failed to follow the rules of mispronunciation for words spoken in sequence are removed and not included in the database of mispronunciations.
The phonetic sequence for each sample word accepted as accurately mispronounced in a sequence of words is then included in the database and stored in its entirety as a common mispronunciation of a given word according to its position in a sequence of words. The associated graphemes and word, including its mispronunciation rule, will also be stored. This database is the "original recorded database of Certified Practitioners providing accurate mispronunciations of words commonly mispronounced in a sequence of words" that will allow every commonly mispronounced word to be accessed individually and according to the word(s) that preceded it and/or followed it.
Combined phonetic sequences are normalized and stored in the basic dictionary database to serve as the speech recognition reference standard for commonly mispronounced words spoken as part of a sequence of words. The dictionary database of common mispronunciations also contains the associated graphemes for displaying the text form of the correct word recognized in its mispronounced form.
Cognitively meaningful combinations of word beginnings and endings in American English that were not spoken are simulated for inclusion in the reference database of mispronunciations. The words to be included are identified by the word sequence combinations not spoken where the rules for pronouncing common mispronunciations are associated with word sequence mispronunciations for the unspoken words at a rate that is equal to or larger than the cut-off frequency for individual word mispronunciations, expressed as so many common mispronunciations per hundred pronunciation attempts. The normalized phonetic sequences for the commonly mispronounced words not spoken will be constructed by simulation according to the structures of the words not spoken, using the appropriate rules of mispronunciation as they relate to the words not spoken, and with the appropriate graphemes involved. Various samples of these simulated word pronunciations will be reproduced in high-fidelity sound and listened to by Certified Lessac Practitioners, and those that are judged to have unintended errors or mistakenly correct pronunciations are identified and set aside for re-entering the correct mispronunciations using Certified Practitioners as speakers pronouncing the words.
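Such a simulation might apply mispronunciation rules as phoneme substitutions over the correct sequence, as in the sketch below; the rule representation is an assumption for illustration.

def simulate_mispronunciation(phonemes, rules):
    # Apply each rule, given as a (find, replace) pair of phoneme
    # subsequences, to the correct sequence: e.g. a rule dropping a final
    # 'r' turns 'car' into 'cah'.
    seq = list(phonemes)
    for find, replace in rules:
        i = 0
        while i <= len(seq) - len(find):
            if seq[i:i + len(find)] == list(find):
                seq[i:i + len(find)] = list(replace)
                i += max(len(replace), 1)
            else:
                i += 1
    return seq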
The resulting dictionary database of common mispronunciations of words when pronounced as part of a sequence of words serves as a reference standard for common mispronunciations of words as determined by the word's position in a sequence of pronounced words. The dictionary database, using simulated word pronunciations, can be of any size in terms of number of words, or for any specialized American English usage.
Database Use
The alternative sets of phonetic dictionaries based on rule-based pronunciations are the basis for a new design of faster and more accurate speech recognition engine. Speed comes from more of the speech data stream being recognized by the phonetic sequences of that data stream. Speed improvement also comes from having fewer remaining words, following the phonetic identification stage, that need to be recognized using predictive grammars and other techniques. The greatest speed may be achieved either by successively passing the phonetic speech data stream past each alternative reference dictionary database, or by parallel processing multiple copies of the phonetic speech data. Either approach, sequential or parallel, can be combined with other down-stream candidate word resolving techniques. The multiple databases of reference dictionaries, when used in conjunction with a prior art speech recognition program, can also be used to: modify and enhance the goodness-of-fit probabilities of a candidate word in a set of candidate words; correctly identify a word that was not in the initial set of candidate words and confirm that the words in the prior set of candidate words have lower goodness-of-fit probabilities than the probability of the newly identified candidate; and improve the identification of "real" mispronunciations and distinguish them from the alternative pronunciations that are identified violations of pronunciation rules. This yields more rapid identification of the correct candidate word and makes large dictionaries and grammars more efficient.
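The two scheduling choices can be sketched as below, reusing the recognize_with_dialect_switching sketch from the dialect section above; threads stand in for whatever parallel hardware an engine would actually use.

from concurrent.futures import ThreadPoolExecutor

def recognize_sequential(token, dictionaries):
    # Pass the phonetic data stream past each reference dictionary in turn.
    return [recognize_with_dialect_switching(token, {name: entries})
            for name, entries in dictionaries.items()]

def recognize_parallel(token, dictionaries):
    # Score copies of the phonetic data against all dictionaries at once.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(recognize_with_dialect_switching, token,
                               {name: entries})
                   for name, entries in dictionaries.items()]
        return [f.result() for f in futures]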
Lessac System
Relevant information regarding the Lessac system is available in the references cited herein, all of which are specifically incorporated herein by reference thereto. The following information regarding the Lessac system is provided for convenience.
In over fifty years of work, Arthur Lessac has developed a complete voice system reflecting, for the first time, the basic relationship between music and speech. His discovery and development was done empirically but was related to much formal academic work. His early work linked an understanding of music and singing with voice theory, and rests on his decision to make a radical departure from traditional methods of studying and teaching voice. Very early in his speech work, Lessac decided that teaching or learning by imitating others was insufficient and damaging. He determined to develop a system of learning based upon sensation, feeling and kinesthetic feedback principles. This required extensive practical and formal study of the natural functioning of the body and the voice.

During almost this same fifty-year period, music historians began to go beyond studies of the history of western classical music. Intercultural studies linked western, eastern, African and other music. Related anthropological, archeological, historic and music work began to provide some insight into the origins of speech and music. Since these origins were before the time of recorded history, little progress was made until a number of studies of still-existing primitive tribes. No one has, as yet, described the whole relationship between music and speech as has Lessac. However, there are indications that recent studies would support his main thesis.
Today no complete vocal system compares to the Lessac system. A voice system must deal with two functional aspects and one operational aspect of speech. Functionally, speech consists of vowels and consonants. Operationally, there is the linking together within a word, sentence, paragraph or speech of the different sounds, where different emphasis can vary meaning. The connection between vowel sounds and music has long been recognized, though never in a phonetic system. However, the same connection between the functional characteristics of consonants and musical instruments, and between speech and a musical score, has never before been developed.
Voice and speech theory and training today depends heavily upon the International Phonetic Alphabet (IPA). The IPA was created a century ago by a committee of Western European scholars. The IPA is fine for mapping sound. It does remove orthographic traps, and it provides the student with a guide to specific vowel and consonant sounds in other languages that are missing in his own, although even in this context it does little more than any other alphabet when the spelling of a given language, Spanish, for example, is simplified. But, it is a weak and artificial tool for teaching live people how they should sound. It is cumbersome, complicated, and outdated. It encourages a non-creative approach that is acoustic, imitative and mechanical. And, it includes too many vocal traps.
A symbol from the IPA system maps all of the possible sounds of the language, separating out deviations due to regional genesis which do not discriminate meaning within the culture. This symbol must then be learned or memorized in conjunction with a sound (thus involving the ear) in order to be understood and then spoken.
And, the IPA does not deal at all with the operational linking together of sounds within words, phrases and larger units of speech. It is not a vocal system, merely an attempt at some definition of comparative sounds.
Functionally, Lessac vowels are "numeric and kinesthetic", and Lessac consonants are "imagistic, kinesthetic and potentially numeric" in contrast to the purely symbolic nature of the IPA vowel and consonant phonetics.
Operationally, Lessac's methods of "exploration" and the elimination of any basic difference between singing and speaking utilize the basic musical qualities in all uses of the voice.
At the same time, the Lessac voice system includes and adapts important elements from previous systems of acoustic phonetics, articulatory phonetics, auditory phonetics and physiological phonetics.
In the Lessac Vowel System, the numbers directly connect to a structure and kinesthetic feel which, when replicated, creates the desired sound without necessitating control by the ear, thus avoiding the conditioned pitfalls of poor vocal environment. Based on a direct transfer from numeric description to action, this method of learning leaves no room for intervening influences to dilute or interfere with the process. In addition, the vowel is fed by a vibratory and resonance feel that aids in enforcing the phonetic value and provides a significant qualitative component to what in other systems remains, by and large, a quantitative dimension.
In this way, the Lessac vowel system eliminates the IPA concept of front, middle and back vowels, or high and low vowels; it discourages the mechanistic handling of glottal, pharyngeal, velar, palatal, retroflex, dental and labial manipulations; it avoids reliance upon the ear for essential control.
The Lessac Consonant System (described at pages 129-179 of Arthur Lessac's THE USE AND TRAINING OF THE HUMAN VOICE, Drama Book Publishers 1967) relates consonants to musical instruments. Each of the consonants reflects a musical instrument and involves both the sound and the image of the playing of the instrument: images of touch, rhythm, melody, size and subtlety.
To understand the instrument means to understand not only the sound itself but also the kinesthetic feel of the way the instrument is played and the different uses to which it can be put. It is an aesthetic construction and functions as a physical image.
In conventional voice and speech training, even when the habit is more or less automatic, the sight of a "T" or a "V" will prepare the tongue and gum-ridge or the lips to execute the action to produce the desired explosive or fricative sound, but the sound that comes out is often unanticipated, irregular, defective and undetected by the ear.
The impression often is that there must be at least a half dozen ways of making the sound. In the Lessac Consonant System, the picture of a snare drum with a "T" written on the picture will, after one has been taught the aesthetics of a drum beat, bypass and cut through the complexities of tongue manipulation, the memories of imitation, the listening by ear, etc. The student not only makes a perfect "T" sound but will thereby also know how to feel the drumbeats of the "K", "F", "D", "B", and "G" without any additional training. What is more, once the concept is clear, one can ask a deaf person, or a foreigner, whether Chinese or French, to feel an "R" trombone, or a "V" cello, or an "S" sound effect, or a "CH" cymbal. The result has been shown to be clear and perfect every time without ear judgment, mental confusion, physical or physiological gymnastics, and unaffected by any previous cultural or sectional influences that might work against this articulation.
Traditionally, the study of voice and speech is divided into different disciplines: voice for singing, voice for speech, diction, public speaking, therapy, etc. However, fundamental Lessac concepts serve all disciplines. All voice and speech is basically musical, with the difference between speaking and singing being a relative one.
Traditionally, consonants have been thought of as "articulated" sounds, primarily important for intelligibility. The Lessac instrumental approach to consonants suggests a reversal of the relative position of vowels and consonants in singing and speaking. In singing, the vowels make the principal artistic contribution; the consonants a utilitarian one. But, in general speech, the consonants carry most of the melody and rhythm, while the vowels serve primarily for emphasis.
As the student comes to understand that the voice and speech, with its vowels and consonants, have a symphonic quality and range, and that one can "play" the voice in a musical and instrumental way, one comes to use another, total image in speaking, namely, the image of an orchestra playing a piece of music.
In teaching through an organized and related group of images, the Lessac approach directs focus to the exploration at hand and perhaps obviates most of the inhibitory and competing response patterns a normal learning situation implicitly contains. It is sometimes difficult to communicate, but when communicated, it contains a tremendous amount of information in a "chunked" and, therefore, memorized state. Through a special kind of learning, images chunk information.
Many people on first understanding the Lessac voice theory assume that his use of musical instruments to teach consonants and his overall musical approach is simply a useful teacher's analogy, or, if they disagree with it, a "trick" of some kind. However, studies of the origins of music suggest that the relationship between music and speech and, within that, between consonants and musical instruments appears to be a fundamental one. In all cultures, the development of specific instruments and vocal sounds appears to have been an interrelated process. Certain instruments were built to mirror the image or sound of the vocal instrument, and certain vocal sounds were made to mirror pleasing instrumental images or sounds, such as basic percussive sounds, the twang of a bow string or the tone of an early horn.
The Lessac consonant system applied to English reflects the instruments of Western European culture and its symphony orchestra. Though the basic instruments are the same in all cultures (the drum, the horn, the stringed instrument, etc.), specific variations reflecting specific different consonant sounds remain to be defined as the Lessac voice system is applied to languages in other cultural settings.
While illustrative embodiments of the invention have been described together with several alternatives for various parts of the system, it is, of course, understood that various modifications will be obvious to those of ordinary skill in the art. Such modifications are within the spirit and scope of the invention, which is limited and defined only by the following claims.


Claims:
1. A method of recognizing spoken language implemented on a computing device, the method comprising:
(a) receiving speech to be recognized into the computing device;
(b) analyzing the speech to be recognized into phonemes intended to be matched with graphemes by a speech recognition algorithm, each grapheme comprising a recognizable written representation of a spoken language element;
(c) employing the speech recognition algorithm to match the phonemes with graphemes obtained from a database of graphemes; and

(d) outputting the recognized graphemes; characterized by
(e) providing a speech training routine wherein the user can receive training from the system to correct mispronunciation of a speech element.

2. A method according to Claim 1 characterized by the speech training routine comprising the Lessac method, the Alexander Technique or a known computer speech training method.
3. A method according to Claim 1 characterized by the speech training routine comprising one or more instruction features selected from the group consisting of: an instruction screen telling the user how to make a properly pronounced speech element sound; physical instructions on how to move the muscles of the mouth and tongue to achieve error-free pronunciation of the mispronounced speech element; and instructions presenting the user with a drawing, optionally a wire frame drawing, of a human face depicting placement of the tongue and movement of the lips for proper pronunciation of the speech element.

4. A method according to Claim 1 characterized by the speech training routine comprising the Lessac method and an instruction feature comprising a drawing, optionally a wire frame drawing, of a human face depicting placement of the tongue and movement of the lips for proper pronunciation of the speech element, wherein the user can interactively move the wire frame drawing to provide views from different angles and can vary the speed of the depicted "facial" movements.
5. A method according to Claim 1 characterized by comprising one or more databases selected from the group consisting of: a database of alternative phonetic sequences, acoustic profiles, graphemes and text for identifying alternative pronunciations of a word based on the word and its position in a pronounced sequence of words; a database of phonetic sequences, acoustic profiles, graphemes, and text for identifying correct alternative words and pronunciations of words according to a known dialect of the native language; a database of rules-based dialectic pronunciations according to the Lessac system of pronunciation and communication; an individual word pronunciation database; a word pronunciation database for a dialect where the pronunciation of a word is modified because of the word's position in a sequence of words; a database of phonetic sequences, acoustic profiles, graphemes and text for correctly identifying alternative pronunciations of words according to commonly occurring errors of articulation by native speakers of a specified language; a database of common mispronunciations according to the Lessac system of pronunciation and communication; an individual word mispronunciation database; and a database of common word mispronunciations when speaking a sequence of words.
6. A method according to Claim 1, 2, 3, 4 or 5 characterized by being utilized for recognizing speech uttered by a user after the user has been trained by the speech training routine.
7, A method of speech recognition using a microphone to receive audible sounds input by a user into a computing device coupled to said microphone, said computing device having a program with database information comprising a first database being a database of digital representations of known audible sounds corresponding to proper pronunciations of phonemes and associated alphanumeric representations of said known audible sounds corresponding to proper pronunciations of phonemes, the method comprising:

(a) receiving said audible sounds in the form of an electrical output of said microphone;
(b) converting said electrical output corresponding to a particular audible sound into a digital representation of said particular audible sound; and
(c) outputting as a speech recognition output the alphanumeric representations associated with said audible sound most likely to be said particular audible sound;
characterized by the database information further comprising a second database being a database of digital representations of known audible sounds corresponding to mispronunciations, the method comprising comparing said digital representation of said particular audible sound to said digital representations of said known audible sounds in said first and second databases to determine a match with the one of said known audible sounds most likely to be the particular audible sound being compared to the sounds in said database.
8. A method according to Claim 7, characterized by further comprising:
(e) outputting an error indication in response to a match with a known audible sound corresponding to a known mispronunciation; and
(f) in response to a determination of error corresponding to a known type or instance of mispronunciation, giving the user the option of receiving speech training or training said program to recognize the user's speech pattern; and
(g) in response to exercise of said option, presenting an interactive training program from said computing device to said user to enable said user to correct such mispronunciation.

9. A method according to Claim 7, characterized by further comprising generating said database information by (i) having a person, who normally speaks said known audible sounds properly, speak said properly pronounced known audible sounds, and digitizing said properly pronounced known audible sounds spoken by said person who properly speaks said known audible sounds, to form the first database; and (ii) having a person who usually speaks said known audible sounds corresponding to mispronunciations speak those sounds, and digitizing said known mispronounced audible sounds spoken by said person who usually speaks said known mispronounced audible sounds corresponding to mispronunciations, to form the second database.
10. A method according to Claim 7, characterized by further comprising:
(f) outputting an error indication in response to a match with a known audible sound corresponding to a known mispronunciation; and
(g) in response to a determination of error corresponding to a known mispronunciation, presenting an interactive speech training program from said computing device to said user to enable said user to correct such mispronunciation in the user's speech.
11. A method according to Claim 10, characterized by the interactive training program's using Lessac System techniques.
12. A method according to Claim 9, characterized by further comprising:
(e) outputting an error indication in response to a match with a known audible sound corresponding to a known mispronunciation; and
(f) in response to the detection of repeated instances or a reliable single instance of pronunciation error, presenting an interactive speech training program from said computer to said user to enable said user to correct such mispronunciation in the user's speech.

13. A method according to Claim 10, characterized in that said presenting of an interactive training program is optional and is performed when elected by the user.
14. A method according to Claim 10, characterized by said user being presented with an interactive speech training program in response to the detection of repeated instances or a reliable single instance of pronunciation error.
15. A method according to Claim 10, characterized by said database information comprising (i) digital representations of known audible sounds corresponding to proper pronunciations of phonemes and associated alphanumeric representations of said known audible sounds corresponding to proper pronunciations of phonemes and (ii) digital representations of known audible sounds corresponding to mispronunciations, is formed by (i) having a person, who normally speaks said known audible sounds properly, speak said known audible sounds, and digitizing said known audible sounds spoken by said person who properly speaks said known audible sounds; and (ii) having a person who usually speaks said known audible sounds corresponding to mispronunciations speak those sounds, and digitizing said known audible sounds spoken by said person who usually speaks said known audible sounds corresponding to mispronunciations.
16. A method according to Claim 10, characterized by the database information comprising one or more databases selected from the group consisting of: a database of alternative phonetic sequences, acoustic profiles, graphemes and text for identifying alternative pronunciations of a word based on the word and its position in a pronounced sequence of words; a database of phonetic sequences, acoustic profiles, graphemes, and text for identifying correct alternative words and pronunciations of words according to a known dialect of the native language; a database of rules-based dialectic pronunciations according to the Lessac system of pronunciation and communication; an individual word pronunciation database; a word pronunciation database for a dialect where the pronunciation of a word is modified because of the word's position in a sequence of words; a database of phonetic sequences, acoustic profiles, graphemes and text for correctly identifying alternative pronunciations of words according to commonly occurring errors of articulation by native speakers of a specified language; a database of common mispronunciations according to the Lessac system of pronunciation and communication; an individual word mispronunciation database; and a database of common word mispronunciations when speaking a sequence of words.
17. A method according to Claim 10 or 16 characterized by being utilized for recognizing speech uttered
by a user after the user has been trained by the speech training routine.
18. A computer-implemented method of recognizing spoken language comprising receiving speech to be recognized and processing said speech to be recognized for input into a speech recognition algorithm and outputting recognized graphemes, the method being characterized by
(a) collecting a plurality of spoken language samples for a first word or phrase, each of said plurality of samples being associated with a language color, such as a regional pronunciation, a mispronunciation, an emotion or Lessac energy;
(b) processing said spoken language samples into a database suitable for input into a speech recognition algorithm, said database comprising a plurality of graphemes, each of said graphemes comprising a written representation of its associated spoken language sample, said database further comprising a color associated with at least some of said graphemes;
(c) inputting a first portion of said processed speech to be recognized into a speech recognition algorithm, said speech recognition algorithm selecting among processed spoken language samples of different color but corresponding to the same graphemes to obtain a best match to a particular processed language sample and an associated recognized grapheme;
(d) outputting said recognized graphemes;
(e) inputting a second portion of said processed speech into the same speech recognition algorithm, said speech recognition algorithm, at least initially, limiting its selection among processed spoken language samples to processed spoken language samples of a single color, to efficiently obtain a best match to a particular processed language sample and an additional associated recognized grapheme; and
(f) outputting said additional recognized graphemes.
19. A method according to Claim 18 characterized by said spoken language samples being collected by
random collection of speech samples and segregating them according to color.
20. A method according to Claim 18 characterized by said spoken language samples being collected by
repeated generation of the same graphemes with different color using trained speakers, said trained speakers
speaking the words with the desired colors.
21. A method according to Claim 18 characterized by comprising one or more databases selected from the group consisting of: a database of alternative phonetic sequences, acoustic profiles, graphemes and text for identifying alternative pronunciations of a word based on the word and its position in a pronounced sequence of words; a database of phonetic sequences, acoustic profiles, graphemes, and text for identifying correct alternative words and pronunciations of words according to a known dialect of the native language; a database of rules-based dialectic pronunciations according to the Lessac system of pronunciation and communication; an individual word pronunciation database; a word pronunciation database for a dialect where the pronunciation of a word is modified because of the word's position in a sequence of words; a database of phonetic sequences, acoustic profiles, graphemes and text for correctly identifying alternative pronunciations of words according to commonly occurring errors of articulation by native speakers of a specified language; a database of common mispronunciations according to the Lessac system of pronunciation and communication; an individual word mispronunciation database; and a database of common word mispronunciations when speaking a sequence of words.
22. A method according to Claim 18 characterized by comprising providing a speech training routine
wherein the user can receive training from the system to correct mispronunciation of a speech element.
23. A method according to Claim 18 characterized by being utilized for recognizing speech uttered by a user after the user has been trained by the speech training routine.


Patent Number 228163
Indian Patent Application Number 1761/CHENP/2005
PG Journal Number 10/2009
Publication Date 06-Mar-2009
Grant Date 28-Jan-2009
Date of Filing 29-Jul-2005
Name of Patentee LESSAC TECHNOLOGY, INC.
Applicant Address 115 Havilands Lane, White Plains, NY 10605
Inventors:
# Inventor's Name Inventor's Address
1 WILSON, Donald, H. LESSAC TECHNOLOGY INC., 115 Havilands Lane, White Plains, NY 10605
2 HANDAL, Anthony, H. 3 Blue Chip Lane, Westport, CT 06880
3 LESSAC, Michael C/O Lessac Technology Inc., 115 Havilands Lane, White Plains, NY 10605
PCT International Classification Number G10L 15/04
PCT International Application Number PCT/US03/41697
PCT International Filing date 2003-12-31
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 10/335,226 2002-12-31 U.S.A.