Title of Invention

IMAGING SYSTEM AND METHOD OF IMAGING

Abstract An imaging system, comprising a sensor (20, 212, 854, 910, 1000, 1100, 1200) having one or more receptors (214), the one or more receptors (214) having a receptor size parameter; and an image transfer medium (30) having a diffraction-limited resolution size parameter (50, 230) in an object field of view (34, 220) determined by the optical characteristics of the image transfer medium (30), the image transfer medium (30) operative to scale the proportions of the receptor size parameter to an apparent size (54) of about the diffraction-limited resolution size parameter (50) in the object field of view (34, 220), the image transfer medium (30) comprising a multiple lens configuration (128, 216), the multiple lens configuration (128, 216) comprising a first lens (234, 832, 944, 1040, 1210) positioned toward the object field of view (34, 220) and a second lens (236, 850, 916) positioned toward the sensor (20, 212, 910, 1100, 1200), the first lens (234, 832, 944, 1040, 1210) sized to have a focal length smaller than the second lens (236, 850, 916) to provide an apparent reduction (54, 232) of the receptor size parameter within the image transfer medium (30).
Full Text RELATED APPLICATION
This application claims the benefit of U.S. Patent Application Serial No.
09/900,218, which was filed July 6, 2001, and entitled IMAGING SYSTEM AND
METHODOLOGY EMPLOYING RECIPROCAL SPACE OPTICAL DESIGN.
TECHNICAL FIELD
The present invention relates generally to image and optical systems, and more
particularly to a system and method to facilitate imaging performance via an image
transfer medium that projects characteristics of a sensor to an object field of view.
BACKGROUND OF THE INVENTION
Microscopes facilitate creating a large image of a tiny object. Greater
magnification can be achieved if the light from an object is made to pass through two
lenses compared to a simple microscope with one lens. A compound microscope has two
or more converging lenses, placed in line with one another, so that both lenses refract the
light in sequence. The result is to produce an image that is magnified more than either lens
could magnify alone. Light illuminating the object first passes through a short focal
length lens or lens group, called the objective, and then travels on some distance before
being passed through a longer focal length lens or lens group, called the eyepiece. A lens
group is often simply referred to singularly as a lens. Usually these two lenses are held in
coaxial relationship to one another, so that the axis of one lens is arranged to be in the
same orientation as the axis of the second lens. It is the nature of the lenses, their
properties, their relationship, and the relationship of the objective lens to the object that
determines how a highly magnified image is produced in the eye of the observer.
The first lens, or objective, is usually a small lens with a very small focal length. A
specimen or object is placed in the path of a light source with sufficient intensity to

illuminate as desired. The objective lens is then lowered until the specimen is very close
to, but not quite at the focal point of the lens. Light leaving the specimen and passing
through the objective lens produces a real, inverted and magnified image behind the lens,
in the microscope at a point generally referred to as the intermediate image plane. The
second lens or eyepiece, has a longer focal length and is placed in the microscope so that
the image produced by the objective lens falls closer to the eyepiece than one focal length
(that is, inside the focal point of the lens). The image from the objective lens now
becomes the object for the eyepiece lens. As this object is inside one focal length, the
second lens refracts the light in such a way as to produce a second image that is virtual,
inverted and amplified. This is the final image seen by the eye of the observer.
Alternatively, common infinity space or infinity corrected design microscopes
employ objective lenses with infinite conjugate properties such that the light leaving the
objective is not focused, but is a flux of parallel rays which do not converge until after
passing through a tube lens where the projected image is then located at the focal point
the eyepiece for magnification and observation. Many microscopes, such as the
compound microscope described above, are designed to provide images of certain quality
to the human eye through an eyepiece. Connecting a Machine Vision Sensor, such as a
Charge Coupled Device (CCD) sensor, to the microscope so that an image may be viewed
on a monitor presents difficulties. This is because the image quality provided by the
sensor and viewed by a human eye decreases, as compared to an image viewed by a
human eye directly through an eyepiece. As a result, conventional optical systems for
magnifying, observing, examining, and analyzing small items often require the careful
attention of a technician monitoring the process through an eyepiece. It is for this reason,
as well as others, that Machine-Vision or computer-based image displays from the
aforementioned image sensor displayed on a monitor or other output display device are
not of quality perceived by the human observer through the eyepiece.
SUMMARY OF THE INVENTION
The following presents a simplified summary of the invention in order to provide a
basic understanding of some aspects of the invention. This summary is not an extensive
overview of the invention. It is intended to neither identify key or critical elements of the

invention nor delineate the scope of the invention. Its sole purpose is to present some
concepts of the invention in a simplified form as a prelude to the more detailed
description that is presented later.
The present invention relates to a system and methodology that facilitates imaging
performance of optical imaging systems. In regard to several optical and/or imaging
system parameters, many orders of performance enhancement can be realized over
conventional systems (e.g., greater effective resolved magnification, larger working
distances, increased absolute spatial resolution, increased spatial field of view, increased
depth of field, Modulation Transfer Function of about 1, oil immersion objectives and eye
pieces not required). This is achieved by adapting an image transfer medium (e.g., one or
more lenses, fiber optical media, or other media) to a sensor having one or more receptors
(e.g., pixels) such that the receptors of the sensor are effectively scaled (e.g., "mapped",
"sized", "projected", "matched", "reduced") to occupy an object field of view at about the
scale or size associated with a diffraction limited point or spot within the object field of
view. Thus, a band-pass filtering of spatial frequencies in what is known as Fourier space
or "k-space" is achieved such that the projected size (projection in a direction from the
sensor toward object space) of the receptor is filled in k-space.
In other words, the image transfer medium is adapted, configured and/or selected
such that a transform into k-space is achieved, wherein an a priori design determination
causes k-space or band-pass frequencies of interest to be substantially preserved
throughout and frequencies above and below the k-space frequencies to be mitigated. It is
noted that the frequencies above and below the k-space frequencies tend to cause blurring
and contrast reduction and are generally associated with conventional optical system
designs which define intrinsic constraints on a Modulation Transfer Function and "optical
noise". This further illustrates that the systems and methods of the present invention are
in contravention or opposition to conventional geometric paraxial ray designs.
Consequently, many known optical design limitations associated with conventional
systems are mitigated by the present invention.
According to one aspect of the present invention, a "k-space" design, system and
methodology is provided which defines a "unit-mapping" of the Modulation Transfer
Function (MTF) of an object plane to image plane relationship. The k-space design

projects image plane pixels or receptors forward to the object plane to promote an
optimum theoretical relationship. This is defined by a substantially one-to-one
correspondence between image sensor receptors and projected object plane units (e.g.,
units defined by smallest resolvable points or spots in the object field of view) that are
matched according to the receptor size. The k-space design defines that "unit-mapping"
or "unit-matching" acts as an effective "Intrinsic Spatial Filter" which implies that
spectral components of both an object and an image in k-space (also referred to as
"reciprocal-space") are substantially matched or quantized. Advantages provided by the
k-space design result in a system and methodology capable of much higher effective
resolved magnification with concomitantly related and much increased Field Of View,
Depth Of Field, Absolute Spatial Resolution, and Working Distances utilizing dry
objective lens imaging, for example, and without employing conventional oil immersion
techniques having inherent intrinsic limitations to the aforementioned parameters.
One aspect of the present invention relates to an optical system that includes an
optical sensor having an array of light receptors having a pixel pitch. A lens optically
associated with the optical sensor is configured with optical parameters functionally
related to the pitch and a desired resolution of the optical system. As a result, the lens is
operative to substantially map a portion of an object having the desired resolution along
the optical path to an associated one of the light receptors.
Another aspect of the present invention relates to a method of designing an optical
system. The method includes selecting a sensor with a plurality of light receptors having
a pixel pitch. A desired minimum spot size resolution is selected for the system, and a
lens is configured (or an extant lens is selected) with optical parameters based on the
pixel pitch and the desired minimum spot size, so as to map the plurality of light
receptors to part of the image according to the desired resolution.
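The design sequence just described (select a sensor pitch, select a target spot size, then configure the lens accordingly) can be sketched numerically. The following is a minimal illustrative sketch only, not the patented method: the function names and the Abbe estimate spot = wavelength / (2 x NA) are assumptions introduced for illustration.

```python
# Illustrative sketch of the design sequence: choose a sensor pitch and a
# desired minimum resolvable spot size, then derive the optical reduction
# the image transfer medium must provide so that one projected receptor
# covers about one diffraction-limited spot. The Abbe estimate used below
# is an assumption for illustration; the specification prescribes no formula.

def abbe_spot_um(wavelength_um: float, numerical_aperture: float) -> float:
    """Diffraction-limited spot diameter (Abbe estimate)."""
    return wavelength_um / (2.0 * numerical_aperture)

def required_reduction(pixel_pitch_um: float, spot_size_um: float) -> float:
    """Scale factor that maps one receptor onto one diffraction-limited spot."""
    return pixel_pitch_um / spot_size_um

# Example (assumed values): 7.4 um pixels, green light (0.55 um), dry NA 0.28.
spot = abbe_spot_um(0.55, 0.28)            # ~0.98 um smallest resolvable spot
reduction = required_reduction(7.4, spot)  # ~7.5x effective reduction
```

With these assumed numbers, the image transfer medium would need to present each 7.4 um receptor as roughly a 1 um patch in the object field of view.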
The following description and the annexed drawings set forth in detail certain
illustrative aspects of the invention. These aspects are indicative, however, of but a few
of the various ways in which the principles of the invention may be employed and the
present invention is intended to include all such aspects and their equivalents. Other
advantages and novel features of the invention will become apparent from the following
detailed description of the invention when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
Fig. 1 is a schematic block diagram illustrating an imaging system in accordance
with an aspect of the present invention.
Fig. 2 is a diagram illustrating a k-space system design in accordance with an
aspect of the present invention.
Fig. 3 is a diagram of an exemplary system illustrating sensor receptor matching in
accordance with an aspect of the present invention.
Fig. 4 is a graph illustrating sensor matching considerations in accordance with an
aspect of the present invention.
Fig. 5 is a graph illustrating a Modulation Transfer Function in accordance with an
aspect of the present invention.
Fig. 6 is a graph illustrating a figure of merit relating to a Spatial Field Number in
accordance with an aspect of the present invention.
Fig. 7 is a flow diagram illustrating an imaging methodology in accordance with
an aspect of the present invention.
Fig. 8 is a flow diagram illustrating a methodology for selecting optical parameters
in accordance with an aspect of the present invention.
Fig. 9 is a schematic block diagram illustrating an exemplary imaging system in
accordance with an aspect of the present invention.
Fig. 10 is a schematic block diagram illustrating a modular imaging system in
accordance with an aspect of the present invention.
Figs. 11-13 illustrate alternative imaging systems in accordance with an aspect of
the present invention.
Figs. 14-18 illustrate exemplary applications in accordance with the present
invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention relates to an optical and/or imaging system and
methodology. According to one aspect of the present invention, a k-space filter is
provided that can be configured from an image transfer medium such as optical media

that correlates image sensor receptors to an object field of view. A variety of illumination
sources can also be employed to achieve one or more operational goals and for versatility
of application. The k-space design of the imaging system of the present invention
promotes capture and analysis (e.g., automated and/or manual) of images having a high
Field Of View (FOV) at substantially high Effective Resolved Magnification as compared
to conventional systems. This can include employing a small Numerical Aperture (NA)
associated with lower magnification objective lenses to achieve very high Effective
Resolved Magnification. As a consequence, images having a substantially large Depth Of
Field (DOF) at very high Effective Resolved Magnification are also realized. The k-space
design also facilitates employment of homogenous illumination sources that are
substantially insensitive to changes in position, thereby improving methods of
examination and analysis.
According to another aspect of the present invention, an objective lens to object
distance (e.g., Working Distance) can be maintained in operation at low and high power
effective resolved magnification imaging, wherein typical spacing can be achieved at
about 0.1 mm or more and about 20 mm or less, as opposed to conventional microscopic
systems which can require significantly smaller (as small as 0.01 mm) object to objective
lens distances for comparable (e.g., similar order of magnitude) Effective Resolved
Magnification values. In another aspect, the Working Distance is about 0.5 mm or more
and about 10 mm or less. It is to be appreciated that the present invention is not limited
to operating at the above working distances. In many instances the above working
distances are employed; however, in some instances, smaller or larger distances are
employed. It is further noted that oil immersion or other index of refraction matching
media or fluids for objective lenses are generally not required (e.g., substantially no
improvement to be gained) at one or more effective image magnification levels of the
present invention, while still exceeding effective resolved magnification levels achievable
in conventional microscopic optical design variations including systems employing
"infinity-corrected" objective lenses.
The k-space design of the present invention defines that a small "Blur-circle" or
diffraction limited point/spot at the object plane is determined by parameters of the design
to match image sensor receptors or pixels with a substantially one-to-one correspondence

by "unit-mapping" of object and image spaces for associated object and image fields.
This enables the improved performance and capabilities of the present invention. One
possible theory of the k-space design results from the mathematical concept that since the
Fourier Transform of both an object and an image is formed in k-space (also called
"reciprocal space"), the sensor should be mapped to the object plane in k-space via optical
design techniques and component placement in accordance with the present invention. It
is to be appreciated that a plurality of other transforms or models can be utilized to
configure and/or select one or more components in accordance with the present invention.
For example, wavelet transforms, Laplace (s-transforms), z-transforms as well as other
transforms can be similarly employed.
The k-space design methodology is unlike conventional optical systems designed
according to geometric, paraxial ray-trace and optimization theory, since the k-space
optimization facilitates that the spectral components of the object (e.g., tissue sample,
particle, semiconductor) and the image are the same in k-space, and thus quantized.
Therefore, there are substantially no inherent limitations imposed on a Modulation
Transfer Function (MTF) describing contrast versus resolution and absolute spatial
resolution in the present invention. Quantization, for example, in k-space yields a
substantially unitary Modulation Transfer Function not realized by conventional systems.
It is noted that high MTF, Spatial Resolution, and effective resolved image magnification
can be achieved with much lower magnification objective lenses with desirable lower
Numerical Apertures (e.g., generally less than about 50x with a numerical aperture of
generally less than about 0.7) through "unit-mapping" of projected pixels in an "Intrinsic
Spatial Filter" provided by the k-space design.
If desired, "infinity-corrected" objectives can be employed with associated optical
components and illumination, as well as spectrum varying components, polarization
varying components, and/or contrast or phase varying components. These components
can be included in an optical path-length between an objective and the image lens within
an "infinity space". Optical system accessories and variations can thus be positioned as
interchangeable modules in this geometry. The k-space design, in contrast to
conventional microscopic images that utilize "infinity-corrected" objectives, enables the
maximum optimization of the infinity space geometry by the "unit-mapping" concept.

This implies that there is generally no specific limit to the number of additional
components that can be inserted in the "infinity space" geometry as in conventional
microscopic systems that typically specify no more than 2 additional components without
optical correction.
The present invention also enables a "base-module" design that can be configured
and reconfigured in operation for a plurality of different applications if necessary to
employ either transmissive or reflected illumination. If desired, this includes
substantially all typical machine vision illumination schemes (e.g., darkfield, brightfield,
phase-contrast), and other microscopic transmissive techniques (Kohler, Abbe), in
substantially any offset, and can include Epi-illumination and variants thereof. The
systems of the present invention can be employed in a plurality of opto-mechanical
designs that are robust since the k-space design is substantially not sensitive to
environmental and mechanical vibration and thus generally does not require heavy
structural mechanical design and isolation from vibration associated with conventional
microscopic imaging instruments. Other features can include digital image processing, if
desired, along with storage (e.g., local database, image data transmissions to remote
computers for storage/analysis) and display of the images produced in accordance with
the present invention (e.g., computer display, printer, film, and other output media).
Remote signal processing of image data can be provided, along with communication and
display of the image data via associated data packets that are communicated over a
network or other medium, for example.
Referring initially to Fig. 1, an imaging system 10 is illustrated in accordance with
an aspect of the present invention. The imaging system 10 includes a sensor 20 having
one or more receptors such as pixels or discrete light detectors (see, e.g., illustrated below
in Fig. 3) operably associated with an image transfer medium 30. The image transfer
medium 30 is adapted or configured to scale the proportions of the sensor 20 at an image
plane established by the position of the sensor 20 to an object field of view illustrated at
reference 34. A planar reference 36 of X and Y coordinates is provided to
illustrate the scaling or reduction of the apparent or virtual size of the sensor 20 to the
object field of view 34. Direction arrows 38 and 40 illustrate the direction of reduction of
the apparent size of the sensor 20 toward the object field of view 34.

The object field of view 34 established by the image transfer medium 30 is related
to the position of an object plane 42 that includes one or more items under microscopic
examination (not shown). It is noted that the sensor 20 can be substantially any size,
shape and/or technology (e.g., digital sensor, analog sensor, Charge Coupled Device
(CCD) sensor, CMOS sensor, Charge Injection Device (CID) sensor, an array sensor, a
linear scan sensor) including one or more receptors of various sizes and shapes, the one or
more receptors being similarly sized or proportioned on a respective sensor to be
responsive to light (e.g., visible, non-visible) received from the items under examination
in the object field of view 34. As light is received from the object field of view 34, the
sensor 20 provides an output 44 that can be directed to a local or remote storage such as a
memory (not shown) and displayed from the memory via a computer and associated
display, for example, without substantially any intervening digital processing (e.g.,
straight bit map from sensor memory to display), if desired. It is noted that local or
remote signal processing of the image data received from the sensor 20 can also occur.
For example, the output 44 can be converted to electronic data packets and transmitted to
a remote system over a network and/or via wireless transmissions systems and protocols
for further analysis and/or display. Similarly, the output 44 can be scored in a local
computer memory before being transmitted to a subsequent computing system for further
analysis and/or display.
The scaling provided by the image transfer medium 30 is determined by a novel k-
space configuration or design within the medium that promotes predetermined k-space
frequencies of interest and mitigates frequencies outside the predetermined frequencies.
This has the effect of a band-pass filter of the spatial frequencies within the image transfer
medium 30 and notably defines the imaging system 10 in terms of resolution rather than
magnification. As will be described in more detail below, the resolution of the imaging
system 10 determined by the k-space design promotes a plurality of features in a displayed
or stored image such as having high effective resolved magnification, high absolute
spatial resolution, large depth of field, larger working distances, and a unitary Modulation
Transfer Function as well as other features.

In order to determine the k-space frequencies, a "pitch" or spacing is determined
between adjacent receptors on the sensor 20, the pitch related to the center-to-center
distance of adjacent receptors and about the size or diameter of a single receptor. The
pitch of the sensor 20 defines the Nyquist "cut-off" frequency band of the sensor. It is
this frequency band that is promoted by the k-space design, whereas other frequencies are
mitigated. In order to illustrate how scaling is determined in the imaging system 10, a
small or diffraction limited spot or point 50 is illustrated at the object plane 42. The
diffraction limited point 50 represents the smallest resolvable object determined by optical
characteristics within the image transfer medium 30 and is described in more detail
below. A scaled receptor 54, depicted in front of the field of view 34 for exemplary
purposes, and having a size determined according to the pitch of the sensor 20, is matched
or scaled to be about the same size in the object field of view 34 as the diffraction limited
point 50.
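The pitch-to-frequency relationship described above follows standard sampling theory and can be made concrete. The following is an illustrative computation only (the 1/(2p) Nyquist relation and the example pitch are assumptions, not figures quoted from the specification):

```python
# Nyquist cut-off of a sensor from its receptor pitch (standard sampling
# theory, shown for illustration). A center-to-center spacing of p mm per
# receptor supports spatial frequencies up to 1/(2p) cycles per mm at the
# sensor; the k-space design promotes this band and mitigates frequencies
# outside it.

def nyquist_cutoff_cycles_per_mm(pitch_um: float) -> float:
    pitch_mm = pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# Example (assumed): a 7.4 um pitch sensor cuts off near 67.6 cycles/mm.
print(nyquist_cutoff_cycles_per_mm(7.4))
```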
In other words, the size of any given receptor at the sensor 20 is effectively
reduced in size via the image transfer medium 30 to be about the same size (or matched in
size) to the size of the diffraction limited point 50. This also has the effect of filling the
object field of view 34 with substantially all of the receptors of the sensor 20, the
respective receptors being suitably scaled to be similar in size to the diffraction limited
point 50. As will be described in more detail below, the matching/mapping of sensor
characteristics to the smallest resolvable object or point within the object field of view 34
defines the imaging system 10 in terms of absolute spatial resolution and thus, enhances
the operating performance of the system.
An illumination source 60 can be provided with the present invention in order that
photons from that source can be transmitted through and/or reflected from objects in the
field of view 34 to enable activation of the receptors in the sensor 20. It is noted that the
present invention can potentially be employed without an illumination source 60 if
potential self-luminous objects (e.g., fluorescent or phosphorescent biological or organic
material samples, metallurgical, mineral, and/or other inorganic material and so forth) emit
enough radiation to activate the sensor 20. Light Emitting Diodes, however, provide an
effective illumination source 60 in accordance with the present invention. Substantially

any illumination source 60 can be applied including coherent and non-coherent sources,
visible and non-visible wavelengths. However, for non-visible wavelength sources, the
sensor 20 would also be suitably adapted. For example, for an infrared or ultraviolet
source, an infrared or ultraviolet sensor 20 would be employed, respectively. Other
illumination sources 60 can include wavelength-specific lighting, broad-band lighting,
continuous lighting, strobed lighting, Kohler illumination, Abbe illumination, phase-
contrast illumination, darkfield illumination, brightfield illumination, and Epi
illumination. Transmissive or reflective lighting techniques (e.g., specular and diffuse)
can also be applied.
Referring now to Fig. 2, a system 100 illustrates an image transfer medium in
accordance with an aspect of the present invention. The image transfer medium 30
depicted in Fig. 1 can be provided according to the k-space design concepts described
above and more particularly via a k-space filter 110 adapted, configured and/or selected to
promote a band of predetermined k-space frequencies 114 and to mitigate frequencies
outside of this band. This is achieved by determining a pitch "P" - which is the distance
between adjacent receptors 116 in a sensor (not shown) and sizing optical media within
the filter 110 such that the pitch "P" of the receptors 116 is matched in size with a
diffraction-limited spot 120. The diffraction-limited spot 120 can be determined from the
optical characteristics of the media in the filter 110. For example, the Numerical Aperture
of an optical medium such as a lens defines the smallest object or spot that can be
resolved by the lens. The filter 110 performs a k-space transformation such that the size
of the pitch is effectively matched, "unit-mapped", projected, correlated, and/or reduced
to the size or scale of the diffraction limited spot 120.
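The matching performed by the filter 110 can be illustrated with a simple numerical check. This sketch is an assumption-laden illustration, not the specification's procedure: the Rayleigh estimate 1.22 x wavelength / (2 x NA) for the spot diameter, the tolerance, and all example values are introduced here for illustration only.

```python
# Illustrative check of pitch-to-spot matching: project the receptor pitch
# through the image transfer medium's reduction and compare the apparent
# receptor size with the diffraction-limited spot defined by the medium's
# Numerical Aperture. The Rayleigh spot estimate and the 15% tolerance are
# assumptions for illustration, not values from the specification.

def projected_pitch_um(pitch_um: float, reduction: float) -> float:
    """Apparent receptor size in the object field of view."""
    return pitch_um / reduction

def rayleigh_spot_um(wavelength_um: float, na: float) -> float:
    """Diffraction-limited spot diameter (Rayleigh estimate)."""
    return 1.22 * wavelength_um / (2.0 * na)

def is_unit_mapped(pitch_um, reduction, wavelength_um, na, tol=0.15):
    """True when the projected pitch falls within tol of the spot size."""
    spot = rayleigh_spot_um(wavelength_um, na)
    return abs(projected_pitch_um(pitch_um, reduction) - spot) / spot <= tol
```

For example, with an assumed 7.4 um pitch, a 6.2x reduction, 0.55 um light, and NA 0.28, the projected pitch (~1.19 um) lands close to the Rayleigh spot (~1.20 um), so the combination would count as matched under this sketch.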
It is to be appreciated that a plurality of optical configurations can be provided to
achieve the k-space filter 110. One such configuration can be provided by an aspherical
lens 124 adapted to perform the k-space transformation and reduction from sensor
space to object space. Yet another configuration can be provided by a multiple lens
arrangement 128, wherein the lens combination is selected to provide the filtering and
scaling. Still yet another configuration can employ a fiber optic taper 132 or image
conduit, wherein multiple optical fibers or an array of fibers are configured in a funnel-shape

to perform the mapping of the sensor to the object field of view. It is noted that the fiber
optic taper 132 is generally in physical contact between the sensor and the object under
examination (e.g., contact with microscope slide). Another possible k-space filter 110
arrangement employs a holographic (or other diffractive or phase structure) optical
element 136, wherein a substantially flat optical surface is configured via a hologram (or
other diffractive or phase structure) (e.g., computer-generated, optically generated, and/or
other method) to provide the mapping in accordance with the present invention.
The k-space optical design as enabled by the k-space filter 110 is based upon the
"effective projected pixel-pitch" of the sensor, which is a figure derived from following
("projecting") the physical size of the sensor array elements back through the optical
system to the object plane. In this manner, conjugate planes and optical transform spaces
are matched to the Nyquist cut-off of the effective receptor or pixel size. This maximizes
the effective resolved image magnification and the Field Of View as well as the Depth Of
Field and the Absolute Spatial Resolution. Thus, a novel application of optical theory is
provided that does not rely on conventional geometric optical design parameters of
paraxial ray-tracing which govern conventional optics and imaging combinations. This
can further be described in the following manner.
A Fourier transform of an object and an image is formed (by an optical system) in
k-space (also referred to as "reciprocal-space"). It is this transform that is operated on for
image optimization by the k-space design of the present invention. For example, the
optical media employed in the present invention can be designed with standard, relatively
inexpensive "off-the-shelf" components having a configuration which defines that the
object and image space are "unit-mapped" or "unit-matched" for substantially all image
and object fields. A small Blur-circle or diffraction-limited spot 120 at the object plane is
defined by the design to match the pixels in the image plane (e.g., at the image sensor of
choice) with substantially one-to-one correspondence and thus the Fourier transforms of
pixelated arrays can be matched. This implies that, optically by design, the Blur-circle is
scaled to be about the same size as the receptor or pixel pitch. The present invention is
defined such that it constructs an Intrinsic Spatial Filter such as the k-space filter 110.
Such a design definition and implementation enables the spectral components of both the
object and the image in k-space to be about the same or quantized. This also defines that

the Modulation Transfer Function (MTF) (the comparison of contrast to spatial
resolution) of the sensor is matched to the MTF of the object plane.
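One consequence of the one-to-one correspondence described above can be checked with simple arithmetic: when each of N pixels projects onto one diffraction-limited spot, the object field of view spans exactly N resolvable spots, so the sensor's Nyquist band just covers the object's resolvable band. The following sketch uses assumed example numbers, not values from the specification:

```python
# Illustrative consistency check for one-to-one pixel/spot correspondence:
# when each sensor pixel projects onto one diffraction-limited spot, the
# number of resolvable spots across the object field of view equals the
# pixel count, so the object and image spectra occupy matching bands in
# k-space. All numeric values are assumptions for illustration.

def object_fov_um(n_pixels: int, spot_um: float) -> float:
    """Object field of view spanned when each pixel maps to one spot."""
    return n_pixels * spot_um

def resolvable_spots(fov_um: float, spot_um: float) -> int:
    """Number of diffraction-limited spots fitting across the field."""
    return round(fov_um / spot_um)

n_pixels = 1280                      # assumed pixels along one sensor row
spot = 1.2                           # um, assumed diffraction-limited spot
fov = object_fov_um(n_pixels, spot)  # 1536 um across the object field
assert resolvable_spots(fov, spot) == n_pixels  # bands match one-to-one
```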
Fig. 3 illustrates an optical system 200 in accordance with an aspect of the present
invention. The system 200 includes a sensor 212 having a plurality of receptors or sensor
pixels 214. For example, the sensor 212 is an M by N array of sensor pixels 214, having
M rows and N columns (e.g., 640 x 480, 512 x 512, 1280 x 1024, and so forth), M and N
being integers respectively. Although a rectangular sensor 212 having generally square
pixels is depicted, it is to be understood and appreciated that the sensor can be
substantially any shape (e.g., circular, elliptical, hexagonal, rectangular, and so forth). It
is to be further appreciated that respective pixels 214 within the array can also be
substantially any shape or size, the pixels in any given array 212 being similarly sized and
shaped in accordance with an aspect of the present invention.
The sensor 212 can be substantially any technology (e.g., digital sensor, analog
sensor, Charge Coupled Device (CCD) sensor, CMOS sensor, Charge Injection Device
(CID) sensor, an array sensor, a linear scan sensor) including one or more receptors (or
pixels) 214. According to one aspect of the present invention, each of the pixels 214 is
similarly sized or proportioned and responsive to light (e.g., visible, non-visible) received
from the items under examination, as described herein.
The sensor 212 is associated with a lens network 216, which is configured based
on performance requirements of the optical system and the pitch size of sensor 212. The
lens network 216 is operative to scale (or project) proportions (e.g., pixels 214) of the
sensor 212 at an image plane established by the position of the sensor 212 to an object
field of view 220 in accordance with an aspect of the present invention. The object field
of view 220 is related to the position of an object plane 222 that includes one or more
items (not shown) under examination.
As the sensor 212 receives light from the object field of view 220, the sensor 212
provides an output 226 that can be directed to a local or remote storage such as a memory
(not shown) and displayed from the memory via a computer and associated display, for
example, without substantially any intervening digital processing (e.g., from sensor
output to display), if desired. It is noted that local or remote signal
processing of the image data received from the sensor 212 can also occur. For example,

the output 226 can be converted to electronic data packets and transmitted to a remote
system over a network for further analysis and/or display. Similarly, the output 226 can be
stored in a local computer memory before being transmitted to a subsequent computing
system for further analysis and/or display.
The scaling (or effective projecting) of pixels 214 provided by the lens network
216 is determined by a novel k-space configuration or design in accordance with an aspect
of the present invention. The k-space design of the lens network 216 promotes
predetermined k-space frequencies of interest and mitigates frequencies outside the
predetermined frequency band. This has the effect of a band pass filter of the spatial
frequencies within the lens network 216 and notably defines the imaging system 200 in
terms of resolution rather than magnification. As will be described below, the resolution
of the imaging system 200 determined by the k-space design promotes a plurality of
features in a displayed or stored image, such as having high "Effective Resolved
Magnification" (a figure of merit described in following), with related high absolute
spatial resolution, large depth of field, larger working distances, and a unitary Modulation
Transfer Function as well as other features.
In order to determine the k-space frequencies, a "pitch' or spacing 228 is
determined between adjacent receptors 214 on the sensor 212. The pitch (e.g., pixel
pitch) corresponds to the center-to-center distance of adjacent receptors, indicated at 228,
which is about the size or diameter of a single receptor when the sensor includes all
equally sized pixels. The pitch 228 defines the Nyquist "cut-off" frequency band of the
sensor 212. It is this frequency band that is promoted by the k-space design, whereas
other frequencies are mitigated. In order to illustrate how scaling is determined in the
imaging system 200, a point 230 of a desired smallest resolvable spot size is illustrated at
the object plane 222. The point 230, for example can represent the smallest resolvable
object determined by optical characteristics of the lens network 216. That is, the lens
network is configured to have optical characteristics (e.g., magnification, numerical
aperture) so that respective pixels 214 are matched or scaled to be about the same size in
the object field of view 220 as the desired minimum resolvable spot size of the point 230.
For purposes of illustration, a scaled receptor 232 is depicted in front of the field of view
220 as having a size determined according to the pitch 228 of the sensor 212, which is
about the same as the point 230.
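The pitch-to-Nyquist relationship described above can be sketched numerically. A minimal illustration in Python (the 10-micron pitch value comes from the worked example later in this description):

```python
def nyquist_cutoff(pitch_um: float) -> float:
    """Nyquist "cut-off" spatial frequency of a sensor, in cycles per micron.

    A sensor with receptor pitch p samples the image at 1/p samples per
    micron, so the highest spatial frequency it can faithfully record
    is 1/(2p).
    """
    return 1.0 / (2.0 * pitch_um)

# A 10-micron pixel pitch supports frequencies up to 0.05 cycles/um,
# i.e., line pairs no finer than 20 microns at the sensor plane.
print(nyquist_cutoff(10.0))  # 0.05
```

It is this band, projected into the object field of view, that the k-space design promotes.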
By way of illustration, the lens network 216 is designed to effectively reduce the
size of each given receptor (e.g., pixel) 214 at the sensor 212 to be about the same size
(e.g., matched in size) to the size of the point 230, which is typically the minimum spot
size resolvable by the system 200. It is to be understood and appreciated that the point
230 can be selected to a size representing the smallest resolvable object determined by
optical characteristics within the lens network 216 as determined by diffraction rules (e.g.,
diffraction limited spot size). The lens network 216 thus can be designed to effectively
scale each pixel 214 of the sensor 212 to any size that is equal to or greater than the
diffraction limited size. For example, the resolvable spot size can be selected to provide
for any desired image resolution that meets such criteria.
After the desired resolution (resolvable spot size) is selected, the lens network 216
is designed to provide the magnification to scale the pixels 214 to the object field of view
220 accordingly. This has the effect of filling the object field of view 220 with
substantially all of the receptors of the sensor 212, the respective receptors being suitably
scaled to be similar in size to the point 230, which corresponds to the desired resolvable
spot size. The matching/mapping of sensor characteristics to the desired (e.g., smallest)
resolvable object or point 230 within the object field of view 220 defines the imaging
system 200 in terms of absolute spatial resolution and enhances the operating performance
of the system in accordance with an aspect of the present invention.
By way of further illustration, in order to provide unit-mapping according to this
example, assume that the sensor array 212 provides a pixel pitch 228 of about 10.0
microns. The lens network 216 includes an objective lens 234 and a secondary lens 236.
For example, the objective lens 234 can be set at infinite conjugate to the secondary lens
236, with the spacing between the objective and secondary lenses being flexible. The
lenses 234 and 236 are related to each other so as to achieve a reduction from sensor
space defined at the sensor array 212 to object space defined at the object plane 222. It is
noted that substantially all of the pixels 214 are projected into the object field of view
220, which is defined by the objective lens 234. For example, the respective pixels 214
are scaled through the objective lens 234 to about the dimensions of the desired minimum
resolvable spot size. In this example, the desired resolution at the object plane 222 is one
micron. Thus, a magnification of ten times is operative to back-project a ten micron pixel
to the object plane 222 and reduce it to a size of one micron.
The reduction in size of the array 212 and associated pixels 214 can be achieved
by selecting the transfer lens 236 to have a focal length "D2" (from the array 212 to the
transfer lens 236) of about 150 millimeters and by selecting the objective lens to have a
focal length "D1" (from the objective lens 234 to the object plane 222) of about 15
millimeters, for example. In this manner, the pixels 214 are effectively reduced in size to
about 1.0 micron per pixel, thus matching the size of the desired resolvable spot
230 and filling the object field of view 220 with a "virtually-reduced" array of pixels. It is
to be understood and appreciated that other arrangements of one or more lenses can be
employed to provide the desired scaling.
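The reduction in this example follows directly from the ratio of the two focal lengths. A small sketch, assuming (as the example implies) that the magnification is simply D2/D1:

```python
def projected_pixel_size(pitch_um: float, f_transfer_mm: float,
                         f_objective_mm: float) -> float:
    """Back-project a sensor pixel into the object field of view.

    The lens pair reduces sensor-space dimensions by the ratio of the
    transfer-lens focal length to the objective focal length.
    """
    magnification = f_transfer_mm / f_objective_mm
    return pitch_um / magnification

# 10-micron pixels, 150 mm transfer lens, 15 mm objective -> 1.0 micron
print(projected_pixel_size(10.0, 150.0, 15.0))  # 1.0
```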
In view of the foregoing description, those skilled in the art will understand and
appreciate that the optical media (e.g., lens network 216) can be designed, in accordance
with an aspect of the present invention, with standard, relatively inexpensive "off-the-
shelf" components having a configuration that defines that the object and image space are
"unit-mapped" or "unit-matched" for substantially all image and object fields. The lens
network 216 and, in particular the objective lens 234, performs a Fourier transform of an
object and an image in k-space (also referred to as "reciprocal-space"). It is this transform
that is operated on for image optimization by the k-space design of the present invention.
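The Fourier-transform behavior performed optically by the lens network has a familiar discrete counterpart. As a generic illustration only (a NumPy sketch, not the optical implementation), a fast Fourier transform decomposes sampled data into its component frequencies and can reconstruct the signal from them:

```python
import numpy as np

# Sample a 5 Hz sinusoid at 64 points over one second and recover its
# dominant frequency component with the FFT.
t = np.arange(64) / 64.0
signal = np.sin(2.0 * np.pi * 5.0 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(64, d=1.0 / 64.0)
dominant = freqs[np.argmax(np.abs(spectrum))]
print(dominant)  # 5.0

# The inverse FFT reconstructs the original samples from the frequency data.
reconstructed = np.fft.irfft(spectrum, n=64)
print(np.allclose(reconstructed, signal))  # True
```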
A small "Blur-circle" or Airy disk at the object plane is defined by the design to
match the pixels in the image plane (e.g., at the image sensor of choice) with substantially
one-to-one correspondence with the Airy disk, and thus the Fourier transforms of pixilated
arrays can be matched. This implies that, optically by design, the Airy disk is scaled
through the lens network 216 to be about the same size as the receptor or pixel pitch. As
mentioned above, the lens network 216 is defined so as to construct an Intrinsic Spatial
Filter (e.g., a k-space filter). Such a design definition and implementation enables the
spectral components of both the object and the image in k-space to be about the same or
quantized. This also defines that a Modulation Transfer Function (MTF) (the comparison
of contrast to spatial resolution) of the sensor can be matched to the MTF of the object
plane in accordance with an aspect of the present invention.

As illustrated in Fig. 3, k-space is defined as the region between the objective lens
234 and the secondary lens 236. It is to be appreciated that substantially any optical
media, lens type and/or lens combination that reduces, maps and/or projects the sensor
array 212 to the object field of view 220 in accordance with unit or k-space mapping as
described herein is within the scope of the present invention.
To illustrate the novelty of the exemplary lens/sensor combination depicted in Fig.
3, it is noted that conventional objective lenses, sized according to conventional geometric
paraxial ray techniques, are generally sized according to the magnification, Numerical
Aperture, focal length and other parameters provided by the objective. Thus, the
objective lens would be sized with a greater focal length than subsequent lenses that
approach or are closer to the sensor (or eyepiece in a conventional microscope) in order to
provide magnification of small objects. This can result in magnification of the small
objects at the object plane being projected as a magnified image of the objects across
"portions" of the sensor, and results in known detail blur (e.g., Rayleigh diffraction and
other limitations in the optics), empty magnification problems, and Nyquist aliasing
among other problems at the sensor. The k-space design of the present invention operates

in an alternative manner to conventional geometrical paraxial ray design principles. That
is, the objective lens 234 and the secondary lens 236 operate to provide a reduction in size
of the sensor array 212 to the object field of view 220, as demonstrated by the relationship
of the lenses.
An illumination source 240 can be provided with the present invention in order
that photons from that source can be transmitted through and/or reflected from objects in
the field of view 220 to enable activation of the receptors in the sensor 212. It is noted
that the illumination source 240 is not required if potential self-luminous objects (e.g.,
objects or specimens with emissive characteristics
as previously described) emit enough radiation to activate the sensor 212. Substantially
any illumination source 240 can be applied including coherent and non-coherent sources,
visible and non-visible wavelengths. However, for non-visible wavelength sources, the
sensor 212 would also be suitably adapted. For example, for an infrared or ultraviolet
source, an infrared or ultraviolet sensor 212 would be employed, respectively. Other
suitable sources of lighting can include continuous lighting, strobed lighting, Kohler illumination, Abbe illumination,
phase-contrast illumination, darkfield illumination, brightfield illumination, epi-
illumination, and the like. Transmissive or reflective (e.g., specular and diffuse) lighting
techniques can also be applied.
Fig. 4 illustrates a graph 300 of mapping characteristics and comparison between
projected pixel size on the X-axis and diffraction-limited spot resolution size "R" on the
Y-axis. An apex 310 of the graph 300 corresponds to unit mapping between projected
pixel size and the diffraction limited spot size, which represents an optimum relationship
between a lens network and a sensor in accordance with the present invention.
It is to be appreciated that the objective lens 234 (Fig. 3) should generally not be
selected such that the diffraction-limited size "R" of the smallest resolvable objects is
smaller than a projected pixel size. If so, "economic waste" can occur wherein more
precise information is lost (e.g., selecting an objective lens more expensive than required,
such as having a higher numerical aperture). This is illustrated to the right of a dividing
line 320 at reference 330 depicting a projected pixel 340 larger than two smaller
diffraction spots 350. In contrast, where an objective is selected with diffraction-limited
performance larger than the projected pixel size, blurring and empty magnification can
occur. This is illustrated to the left of line 320 at reference numeral 360, wherein a
projected pixel 370 is smaller than a diffraction-limited object 380. It is to be
appreciated, however, that even if substantially one-to-one correspondence is not achieved
between projected pixel size and the diffraction-limited spot, a system can be configured
with less than optimum matching (e.g., 0.1 %, 1%, 2%, 5%, 20%, 95% down from the
apex 310 on the graph 300 to the left or right of the line 320) and still provide suitable
performance in accordance with an aspect of the present invention. Thus, less than
optimal matching is intended to fall within the spirit and the scope of the present invention.
It is further to be appreciated that the diameter of the lenses in the system as
illustrated in Fig. 3, for example, should be sized such that when a Fourier Transform is
performed from object space to sensor space, spatial frequencies of interest that are in the
band pass region described above (e.g., frequencies utilized to define the size and shape of
a pixel) are substantially not attenuated. This generally implies that larger diameter lenses

(e.g., about 10 to 100 millimeters) should be selected to mitigate attenuation of the spatial
frequencies of interest.
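One way to check the requirement above is to compare the incoherent optical cut-off frequency of the objective against the sensor's Nyquist band projected into object space. The formula 2·NA/λ used here is a standard diffraction result that is not stated explicitly in this description, so this is a sketch under that assumption:

```python
def optical_cutoff(na: float, wavelength_um: float) -> float:
    """Incoherent diffraction cut-off frequency of a lens, in cycles/micron."""
    return 2.0 * na / wavelength_um

def object_space_nyquist(pitch_um: float, magnification: float) -> float:
    """Sensor Nyquist band expressed in object space, in cycles/micron."""
    projected_pitch = pitch_um / magnification
    return 1.0 / (2.0 * projected_pitch)

# NA 0.25 at 0.5 um (green) light vs. a 10 um pitch back-projected at 10x:
cutoff = optical_cutoff(0.25, 0.5)          # 1.0 cycles/um
nyquist = object_space_nyquist(10.0, 10.0)  # 0.5 cycles/um
print(cutoff >= nyquist)  # True: the band of interest passes unattenuated
```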
Referring now to Fig. 5, a Modulation Transfer Function 400 is illustrated in
accordance with the present invention. On a Y-axis, modulation percentage from 0 to
100% is illustrated, defining percentage of contrast between black and white. On an X-
axis, Absolute Spatial Resolution is illustrated in terms of microns of separation. A line
410 illustrates that modulation percentage remains substantially constant at about 100%
over varying degrees of spatial resolution. Thus, the Modulation Transfer Function is
about 1 for the present invention up to a limit imposed by the signal-to-noise
sensitivity of the sensor. For illustrative purposes, a conventional optics design
Modulation Transfer Function is illustrated by line 420, which may be an exponential
curve with generally asymptotic limits, characterized by decreasing spatial
resolution with decreasing modulation percentage (contrast).
Fig. 6 illustrates a quantifiable Figure of Merit (FOM) for the present invention
defined as dependent on two primary factors: Absolute Spatial Resolution (RA, in
microns), depicted on the Y-axis, and the Field Of View (F, in microns), depicted on the X-
axis of a graph 500. A reasonable FOM called "Spatial Field Number" (S) can be
expressed as the ratio of these two quantities, with higher values of S being
desirable for imaging as follows:
S = F/RA
A line 510 illustrates that the FOM remains substantially constant across the field
of view and over different values of absolute spatial resolution, which is an enhancement
over conventional systems.
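The Spatial Field Number reduces to a one-line computation; the example values below are illustrative only (a 640-pixel-wide sensor unit-mapped at 1 micron per projected pixel would span a 640 micron field):

```python
def spatial_field_number(field_of_view_um: float,
                         abs_resolution_um: float) -> float:
    """FOM S = F / RA: field of view divided by absolute spatial resolution."""
    return field_of_view_um / abs_resolution_um

print(spatial_field_number(640.0, 1.0))  # 640.0
```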
Figs. 7, 8, 14, 15, and 16 illustrate methodologies to facilitate imaging
performance in accordance with the present invention. While, for purposes of simplicity
of explanation, the methodologies may be shown and described as a series of acts, it is to
be understood and appreciated that the present invention is not limited by the order of
acts, as some acts may, in accordance with the present invention, occur in different orders
and/or concurrently with other acts from that shown and described herein. For example,
those skilled in the art will understand and appreciate that a methodology could
alternatively be represented as a series of interrelated states or events, such as in a state

diagram. Moreover, not all illustrated acts may be required to implement a methodology
in accordance with the present invention.
Turning now to Fig. 7 and proceeding to 610, lenses are selected having
diffraction-limited characteristics at about the same size of a pixel in order to provide
unit-mapping and optimization of the k-space design. At 614, lens characteristics are also
selected to mitigate reduction of spatial frequencies within k-space. As described above,
this generally implies that larger diameter optics are selected in order to mitigate
attenuation of desired k-space frequencies of interest. At 618, a lens configuration is
selected such that pixels, having a pitch "P", at the image plane defined by the position of
a sensor are scaled according to the pitch to an object field of view at about the size of a
diffraction-limited spot (e.g., unit-mapped) within the object field of view. At 622, an
image is generated by outputting data from a sensor for real-time monitoring and/or
storing the data in memory for direct display to a computer display and/or subsequent
local or remote image processing and/or analysis within the memory.
Fig. 8 illustrates a methodology that can be employed to design an optical/imaging
system in accordance with an aspect of the present invention. The methodology begins at
700 in which a suitable sensor array is chosen for the system. The sensor array includes a
matrix of receptor pixels having a known pitch size, usually defined by the manufacturer.
The sensor can be substantially any shape (e.g., rectangular, circular, square, triangular,
and so forth). By way of illustration, assume that a sensor of 640x480 pixels having a
pitch size of 10 um is chosen. It is to be understood and appreciated that an optical
system can be designed for any type and/or size of sensor array in accordance with an
aspect of the present invention.
Next at 710, an image resolution is defined. The image resolution corresponds to
the smallest desired resolvable spot size at the image plane. The image resolution can be
defined based on the application(s) for which the optical system is being designed, such as
any resolution that is greater than or equal to a smallest diffraction limited size. Thus, it
is to be appreciated that resolution becomes a selectable design parameter that can be
tailored to provide desired image resolution for virtually any type of application. In
contrast, most conventional systems tend to limit resolution according to Rayleigh

diffraction, which provides that intrinsic spatial resolution of the lenses cannot exceed
limits of diffraction for a given wavelength.
After selecting a desired resolution (710), a suitable amount of magnification is
determined at 720 to achieve such resolution. For example, the magnification is
functionally related to the pixel pitch of the sensor array and the smallest resolvable spot
size. The magnification (M) can be expressed as follows:

M = p / y          (Eq. 1)

where p is the pixel pitch of the sensor and y is the desired smallest resolvable spot size.
So, for the above example where the pixel pitch is 10 um and assuming a desired image
resolution of 1 um, Eq. 1 provides an optical system of power ten. That is, the lens
system is configured to back-project each 10 um pixel to the object plane and reduce
respective pixels to the resolvable spot size of 1 micron.
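Eq. 1 can be exercised directly. Assuming the form M = p/y (consistent with the power-ten worked example above), a quick check in Python:

```python
def required_magnification(pitch_um: float, spot_size_um: float) -> float:
    """Eq. 1: magnification that back-projects a pixel onto the desired spot.

    M = (pixel pitch p) / (smallest resolvable spot size y).
    """
    return pitch_um / spot_size_um

# 10 um pixel pitch, 1 um desired resolution -> an optical system of power ten
print(required_magnification(10.0, 1.0))  # 10.0
```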
The methodology of Fig. 8 also includes a determination of a Numerical Aperture
at 730. The Numerical Aperture (NA) is determined according to well-established
diffraction rules that relate NA of the objective lens to the minimum resolvable spot size
determined at 710 for the optical system. By way of example, the calculation of NA can
be based on the following equation:

NA = λ / (2y)          (Eq. 2)

where λ is the illumination wavelength and y is the minimum resolvable spot size.
Continuing with the example in which the optical system has a resolved spot size of y = 1
micron, and assuming a wavelength of about 500 nm (e.g., green light), NA = 0.25
satisfies Eq. 2. It is noted that relatively inexpensive commercially available objectives of
power 10 provide numerical apertures of 0.25.
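Assuming Eq. 2 takes the form NA = λ/(2y) (the form consistent with the NA = 0.25 worked example; as noted below, other diffraction criteria differ only in constant factors), the determination can be sketched as:

```python
def required_na(wavelength_um: float, spot_size_um: float) -> float:
    """Eq. 2: numerical aperture needed to resolve a spot of size y.

    NA = wavelength / (2 * y). Other diffraction criteria (e.g., Rayleigh)
    differ by a constant factor.
    """
    return wavelength_um / (2.0 * spot_size_um)

# 500 nm (green) light, 1 micron resolved spot -> NA = 0.25
print(required_na(0.5, 1.0))  # 0.25
```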
It is to be understood and appreciated that the relationship between NA,
wavelength and resolution represented by Eq. 2 can be expressed in different ways
according to various factors that account for the behavior of objectives and condensers.
Thus, the determination at 730, in accordance with an aspect of the present invention, is

not limited to any particular equation but instead simply obeys known general physical
laws in which NA is functionally related to the wavelength and resolution. After the lens
parameters have been designed according to the selected sensor (700), the corresponding
optical components can be arranged to provide an optical system (740) in accordance with
an aspect of the present invention.
Assume, for purposes of illustration, that the example optical system created
according to the methodology of Fig. 8 is to be employed for microscopic-digital imaging.
By way of comparison, in classical microscopy, in order to image and resolve structures
of a size approaching 1 micron (and below), magnifications of many hundreds usually are
required. The basic reason for this is that such optics conventionally have been designed
for the situation when the sensor of choice is the human eye. In contrast, the methodology
of Fig. 8 designs the optical system in view of the sensor, which affords significant
performance increases at reduced cost.
In the k-space design methodology, according to an aspect of the present
invention, the optical system is designed around a discrete sensor that has known fixed
dimensions. As a result, the methodology can provide a far more straightforward, robust,
and inexpensive optical system design approach: "back-project" the sensor size onto the
object plane and calculate a magnification factor. A second part of the methodology
ensures that the optics that provide the magnification have a sufficient NA to optically
resolve a spot of similar dimensions as the back-projected pixel. Advantageously, an
optical system designed in accordance with an aspect of the present invention can utilize
custom and/or off-the-shelf components. Thus, for this example, inexpensive optics can
be employed in accordance with an aspect of the present invention to obtain suitable
results, since well-corrected microscope optics are relatively inexpensive. If custom-
designed optics are utilized, in accordance with an aspect of the present invention, then
the range of permissible magnifications and numerical apertures becomes substantial, and
some performance gains can be realized over the use of off-the-shelf optical components.
In view of the concepts described above in relation to Figs. 1-8, a plurality of
related imaging applications can be enabled and enhanced by the present invention. For
example, these applications can include but are not limited to imaging, control,
inspection, microscopy and/or other automated analysis such as:

(1) Bio-medical analysis (e.g., cell colony counting, histology, frozen sections,
cellular cytology, haematology, pathology, oncology, fluorescence, interference, phase
and many other clinical microscopy applications);
(2) Particle Sizing Applications (e.g., Pharmaceutical manufacturers, paint
manufacturers, cosmetics manufacturers, food process engineering, and others);
(3) Air quality monitoring and airborne particulate measurement (e.g., clean room
certification, environmental certification, and so forth);
(4) Optical defect analysis, and other requirements for high resolution microscopic
inspection of both transmissive and opaque materials (as in metallurgy, automated
semiconductor inspection and analysis, automated vision systems, 3-D imaging and so
forth); and
(5) Imaging technologies such as cameras, copiers, FAX machines and medical
systems.
Figs. 9, 10, 11, 12, and 13 illustrate possible example systems that can be
constructed employing the concepts previously described above in relation to Figs. 1-8.
Fig. 9 is a flow diagram of light paths in an imaging system 800 adapted in accordance
with the present invention.
The system 800 employs a light source 804 emitting illuminating light that is
received by a light condenser 808. Output from the light condenser 808 can be directed
by a fold mirror 812 to a microscope condenser 816 that projects illuminating light onto a
slide stage 820, wherein an object (not shown, positioned on top of, or within, the slide
stage) can be imaged in accordance with the present invention. The slide stage 820 can be
automatically positioned (and/or manually) via a computer 824 and associated slide feed
828 in order to image one or more objects in a field of view defined by an objective lens
832. It is noted that the objective lens 832 and/or other components depicted in the
system 800 may be adjusted manually and/or automatically via the computer 824 and
associated controls (not shown) (e.g., servo motors, tube slides, linear and/or rotary
position encoders, optical, magnetic, electronic or other feedback mechanisms, control
software, and so forth) to achieve different and/or desired image characteristics (e.g.,
magnification, focus, which objects appear in field of view, depth of field and so forth).
Light output from the objective lens 832 can be directed through an optional beam

splitter 840, wherein the beam splitter 840 is operative with an alternative epi-
illumination section 842 (to light objects from above slide stage 820) including light
shaping optics 844 and associated light source 843. Light passing through the beam
splitter 840 is received by an image forming lens 850. Output from the image forming
lens 850 can be directed to a CCD or other imaging sensor or device 854 via a fold mirror
860. The CCD or other imaging sensor or device 854 converts the light received from the
object to digital information for transmission to the computer 824, wherein the object
image can be displayed to a user in real-time and/or stored in memory at 864. As noted
above, the digital information defining the image captured by the CCD or other imaging
sensor or device 854 can be routed as bit-map information to the display/memory 864 by
the computer 824. If desired, image processing such as automatic comparisons with
predetermined samples or images can be performed to determine an identity of and/or
analyze the object under examination. This can also include employment of substantially
any type of image processing technology or software that can be applied to the captured
image data within the memory 864.
Fig. 10 is a system 900 depicting an exemplary modular approach to imaging
design in accordance with an aspect of the present invention. The system 900 can be
based on a sensor array 910 (e.g., provided in an off-the-shelf camera) with a pixel pitch of
approximately 8 microns (or other dimension), for example, wherein array sizes can vary
from 640x480 to 1280x1024 (or other dimension as noted above). The system 900
includes a modular design wherein a respective module is substantially isolated from
another module, thus mitigating alignment tolerances.
The modules can include:
• a camera/sensor module 914 including an image-forming lens 916 and/or fold
mirror 918;
• an epi-illumination module 920 for insertion into a k-space region 922;
• a sample holding and presentation module 924;
• a light-shaping module 930 including a condenser 934; and
• a sub-stage lighting module 940.
It is noted that the system 900 can advantageously employ commercially-
available components such as for example:
• condenser optics 934 (e.g., Olympus U-SC-2)
• standard plan/achromatic objective lenses 944 of power and numerical
aperture, e.g.: (4x, 0.10), (10x, 0.25), (20x, 0.40), (40x, 0.65), selected to satisfy
the desired characteristic that for a given magnification, the projected pixel-
pitch at the object plane is similar in dimensions to the diffraction-limited
resolved spot of the optics
• (e.g., Olympus I-UB222, I-UB223, I-UB225, I-UB227)
The system 900 utilizes an infinity-space (k-space) between the objective lens 944
and the image-forming lens 916 in order to facilitate the insertion of auxiliary and/or
additional optical components, modules, filters, and so forth in the k-space region at 922,
such as for example, when the image-forming lens 916 is adapted as an f = 150mm
achromatic triplet. Furthermore, an infinity-space (k-space) between the objective lens
944 and the image-forming lens 916 can be provided in order to facilitate the injection of
light (via a light-forming path) into an optical path for epi-illumination. For example, the
epi-illumination can employ:
• a light source 950 such as an LED driven from a current-stabilized supply;
• (e.g., HP HLMP-CW30)
• a transmission hologram for source homogenisation and the imposition
of a spatial virtual-source at 950;
• (e.g., POC light shaping diffuser polyester film 30-degree
FWHM)
• a variable aperture 960 to restrict the NA of the source 950 to that of
the imaging optics, thereby mitigating the effect of scattered light
entering the image-forming optical path;
• (e.g., Thorlabs iris diaphragm SM1D12, 0.5-12.0 mm aperture)

• a collection lens at 960 employed to maximize the light gathered from
the virtual source 950, and to match the k-space characteristics of the
source to that of the imaging optics; and
• (e.g., f = 50mm aspheric lens, f = 50mm achromatic doublet)
• a partially-reflective beam splitter 964 employed to form a coaxial light
path and image path. For example, the optic 964 provides a 50%
reflectivity on a first surface (at an inclination of 45 degrees), and is
broadband antireflection coated on a second surface.
The sub-stage lighting module 940 is provided by an arrangement that is
substantially similar to that of the epi-illumination described above, for example:
• a light source 970 (an LED driven from a current-stabilised supply);
• (e.g., HP HLMP-CW30)
• a transmission hologram (associated with light source 970) for the purposes of
source homogenisation and the imposition of a spatial virtual-source;
• (e.g., POC light shaping diffuser polyester film 30-degree FWHM)
• a collection lens 974 employed to maximize the light gathered from the virtual
source 970, and to match the k-space characteristics of the source to that of the
imaging optics;
• (e.g., f = 50mm aspheric lens, f = 50mm achromatic doublet)
• a variable aperture 980 to restrict the NA of the source 970 to that of the
imaging optics, thereby mitigating the effect of scattered light entering the
image-forming optical path;
• (e.g., Thorlabs iris diaphragm SM1D12, 0.5-12.0 mm aperture)
• a mirror 938 utilized to turn the optical path through 90 degrees and provide
fine-adjustment in order to accurately align the optical modules; and
• a relay lens (not shown) employed to accurately position the image of the
variable aperture 980 into the object plane (at slide 990), thereby, along with
suitable placement of a holographic diffuser, achieving Kohler
illumination.
• (e.g., f = 100mm simple plano-convex lens)
As described above, a computer 994 and associated display/memory 998 is
provided to display in real-time and/or store/process digital image data captured in
accordance with the present invention.
Fig. 11 illustrates a system 1000 in accordance with an aspect of the
present invention. In this aspect, a sub-stage lighting module 1010 (e.g., Kohler,
Abbe) can project light through a transmissive slide 1020 (object under
examination not shown), wherein an achromatic objective lens 1030 receives light
from the slide and directs the light to an image capture module at 1040. It is noted
that the achromatic objective lens 1030 and/or slide 1020 can be manually and/or
automatically controlled to position the object(s) under examination and/or
position the objective lens.
Fig. 12 illustrates a system 1100 in accordance with an aspect of the
present invention. In this aspect, a top-stage or epi-illumination lighting module
1110 can project light to an opaque slide 1120 (object under examination not
shown), wherein an objective lens 1130 (which can be a compound lens device or other
type) receives light from the slide and directs the light to an image capture module
at 1140. As noted above, the objective lens 1130 and/or slide 1120 can be
manually and/or automatically controlled to position the object(s) under
examination and/or position the objective lens. Fig. 13 depicts a system 1200 that
is similar to the system 1000 in Fig. 11 except that a compound objective lens
1210 is employed in place of an achromatic objective lens.
The imaging systems and processes described above in connection with Figs. 1-13
may thus be employed to capture/process an image of a sample, wherein the imaging
systems are coupled to a processor or computer that reads the image generated by the
imaging systems and compares the image to a variety of images in an on-board data store
implemented in any number of current memory technologies.

For example, the computer can include an analysis component to perform the
comparison. Some of the many algorithms employed in image processing include, but are
not limited to, convolution (on which many others are based), FFT, DCT, thinning (or
skeletonisation), edge detection and contrast enhancement. These are usually
implemented in software but may also use special purpose hardware for speed. FFT (fast
Fourier transform) is an algorithm for computing the Fourier transform of a set of discrete
data values. Given a finite set of data points, for example, a periodic sampling taken from
a real-world signal, the FFT expresses the data in terms of its component frequencies. It
also addresses the essentially identical inverse concern of reconstructing a signal from
the frequency data. DCT (discrete cosine transform) is a technique for expressing a
waveform as a weighted sum of cosines. There are various extant programming
languages designed for image processing, which include but are not limited to
IDL, Image Pro, Matlab, and many others. There are also no specific limits to the
special and custom image processing algorithms that may be written to perform functional
image manipulations and analyses.
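By way of illustration only (and not as part of the claimed invention), the FFT and DCT described above can be sketched in a few lines of Python using NumPy; the sampling rate and test signal are assumptions chosen for the example:

```python
import numpy as np

# Periodic sampling of a real-world-like signal: a 50 Hz tone sampled at 1 kHz for 1 s.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t)

# FFT expresses the data in terms of its component frequencies.
spectrum = np.abs(np.fft.fft(signal))
dominant_hz = int(np.argmax(spectrum[: fs // 2]))  # strongest bin below Nyquist

# DCT-II expresses the same waveform as a weighted sum of cosines
# (computed directly from its definition here, for clarity rather than speed).
n = np.arange(fs)
dct_weights = np.array(
    [2 * np.sum(signal * np.cos(np.pi * k * (2 * n + 1) / (2 * fs))) for k in range(fs)]
)
```

Here `dominant_hz` recovers the 50 Hz component, illustrating the forward transform; the inverse reconstruction mentioned in the text would apply `np.fft.ifft` to the complex spectrum.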
The k-space design of the present invention also allows for direct optical
correlation of the Fourier frequency information contained in the image with stored
information to perform real-time, optically correlated image processing and analysis of a given
sample object.
Fig. 14 illustrates a particle sizing application 1300 that can be employed with
the systems and processes previously described. Particle sizing can include real-time
closed/open loop monitoring, manufacturing with, and control of particles in view of
automatically determined particle sizes in accordance with the k-space design concepts
previously described. This can include automated analysis and detection techniques for
various particles having similar or different sizes (n different sizes, n being an integer)
and particle identification of m shaped/dimensioned particles (m being an integer). In one
aspect of the present invention, desired particle size detection and analysis can be
achieved via a direct measurement approach. This implies that the absolute spatial
resolution per pixel relates directly (or substantially thereto) in units of linear measure to
the imaged particles, without substantial account of the particle medium and associated
particle distribution. Direct measurement generally does not create a model but rather
provides a metrology and morphology of the imaged particles in any given sample. This
mitigates processing of modelling algorithms, statistical algorithms, and other modelling
limitations presented by current technology. Thus, an issue becomes one of sample
handling and form that enhances the accuracy and precision of measurements, since the
particle data is directly imaged and measured rather than modelled, if desired.
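The direct measurement principle can be illustrated with a short sketch; the 0.5 micron-per-pixel calibration and the helper name `particle_size_um` are hypothetical, chosen only to show imaged pixel extents converting directly to units of linear measure without a model:

```python
# Direct measurement: convert a particle's imaged extent in pixels to microns
# using the device's absolute spatial resolution per pixel (an assumed calibration).
ASR_UM_PER_PIXEL = 0.5  # hypothetical value: 0.5 micron per pixel

def particle_size_um(extent_pixels: int) -> float:
    """Particle size in microns, measured directly rather than modelled."""
    return extent_pixels * ASR_UM_PER_PIXEL

# Three particles spanning 12, 48, and 96 pixels measure 6, 24, and 48 microns.
sizes = [particle_size_um(px) for px in (12, 48, 96)]
```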
Proceeding to 1310 of the particle sizing application 1300, particle size image
parameters are determined. For example, the basic device design can be configured for
imaging at a desired Absolute Spatial Resolution per pixel and Effective Resolved
Magnification as previously described. These parameters determine field of view (FOV),
depth of field (DOF), and working distance (WD), for example. Real-time measurement
can be achieved by asynchronous imaging of a medium at selected timing intervals, in
real-time at common video rates, and/or at image capture rates as desired. Real-time
imaging can also be achieved by capturing images at selected times for subsequent image
processing. Asynchronous imaging can be achieved by capturing images at selected times
by pulsing an instrument illumination at selected times and duty cycles for subsequent
image processing.
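By way of a non-limiting sketch, first-order textbook relationships among these parameters can be computed as follows; all numeric values are assumptions for illustration, not design values from this specification:

```python
# Assumed example values, not taken from the specification.
pixel_pitch_um = 7.4      # sensor receptor (pixel) size
pixels_across = 1024      # sensor width in pixels
magnification = 10.0      # effective resolved magnification
wavelength_um = 0.55      # green illumination
numerical_aperture = 0.25

# Absolute spatial resolution per pixel, projected into object space.
asr_um = pixel_pitch_um / magnification

# Field of view: the sensor width as it appears in the object plane.
fov_um = pixels_across * asr_um

# First-order diffraction-limited depth of field (textbook approximation).
dof_um = wavelength_um / (numerical_aperture ** 2)
```

Under these assumed values the device resolves 0.74 micron per pixel over a roughly 758 micron field, with a depth of field on the order of 9 microns.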
At 1320, a sample introduction process is selected for automated (or manual)
analysis. Samples can be introduced into an imaging device adapted in accordance with
the present invention in any of the following (but not limited to) imaging processes:
1) All previously described methods and transmissive media, as well as:
2) Individual manual samples in cuvettes, slides, and/or transmissive medium.
3) Continuous flow of particles in a stream of gas or liquid, for example.
4) With an imaging device configured for reflective imaging, samples may be
opaque and presented on an opaque "carrier" (automated and/or manual imaging) without
substantial regard to the material analyzed.
At 1330, a process control and/or monitoring system is configured. This provides real-time,
closed loop and/or open loop monitoring, manufacturing with (e.g., closing a loop around
particle size), and control of processes by direct measurement of particle characteristics
(e.g., size, shape, morphology, cross section, distribution, density, packing fraction, and
other parameters can be automatically determined). It is to be appreciated that although
direct measurement techniques are performed on a given particle sample, automated
algorithms and/or processing can also be applied to the imaged sample if desired.
Moreover, a direct measurement-based particle characterization device can be installed at
substantially any given point in a manufacturing process to monitor and communicate
particle characteristics for process control, quality control, and so forth by direct
measurement.
At 1340, a plurality of different sample types can be selected for analysis. For
example, particle samples in any of the aforementioned forms can be introduced in
continuous flow, periodic, and/or asynchronous processes for direct measurement in a
device as part of a process closed-feedback-loop system to control, record, and/or
communicate particle characteristics of a given sample type (open loop
techniques can also be included if desired). Asynchronous and/or synchronous imaging can be
employed (the first defines imaging with a trigger signal initiated by an event, or an object
generating a trigger signal, to initiate imaging; the second defines imaging with a timing
signal sent to trigger illumination). Asynchronous and/or synchronous imaging can be
achieved by pulsing an illumination source to coincide with the desired image field with
substantially any particle flow rate. This can be controlled by a computer, for example,
and/or by a "trigger" mechanism, either mechanical, optical, and/or electronic, to "flash"
solid state illumination on and off with a given duty cycle so that the image sensor
captures, displays, and records the image for processing and analysis. This provides a
straightforward process of illuminating and imaging, given that it effectively can be timed
to "stop the action" - or rather, "freeze" the motion of the flowing particles in the
medium. In addition, this enables sampling within the image field to capture particles
within the field for subsequent image processing and analysis.
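The "freeze the motion" timing can be sketched as follows; the flow speed, resolution, and frame rate are assumed values, and the one-pixel blur criterion is an illustrative choice rather than a requirement of the invention:

```python
# Freeze the motion of flowing particles: the illumination pulse must be short
# enough that a particle moves less than one object-space pixel while lit.
flow_speed_um_per_s = 50_000.0   # assumed particle stream speed: 5 cm/s
asr_um_per_pixel = 0.74          # assumed object-space resolution per pixel

# Longest flash that still holds motion blur under one pixel.
max_pulse_s = asr_um_per_pixel / flow_speed_um_per_s

frame_rate_hz = 30.0                       # common video rate
duty_cycle = max_pulse_s * frame_rate_hz   # fraction of each frame the source is on
```

With these assumptions the pulse must be shorter than about 15 microseconds, a duty cycle well under 0.1%, which is why a strobed solid state source suits this mode.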
Real-time (or substantially real-time) closed loop and/or open loop monitoring,
manufacturing with, and control of processes by k-space-based, direct measurement of
particle characterization at 1340 is applicable to a broad range of processes including (but
not limited to): ceramics, metal powders, pharmaceuticals, cement, minerals, ores,
coatings, adhesives, pigments, dyes, carbon black, filter materials, explosives, food
preparations, health & cosmetic emulsions, polymers, plastics, micelles, beverages - and
many more particle-based substances requiring process monitoring and control.

Other applications include but are not limited to:
• Instrument calibration and standards;
• Industrial-hygiene research;
• Materials research;
• Energy and combustion studies;
• Diesel- and gasoline-engine emissions measurements;
• Industrial emissions sampling;
• Basic aerosol research;
• Environmental studies;
• Bio-aerosol detection;
• Pharmaceutical research;
• Health and agricultural experiments;
• Filter testing.
At 1350, software and/or hardware based computerized image processing/analysis
can occur. Images from a device adapted in accordance with the present invention can be
processed in accordance with substantially any hardware and/or software process.
Software-based image processing can be achieved by custom software and/or
commercially available software, since the image file formats are digital formats (e.g., bit
maps of captured particles).
Analysis, characterization, and so forth can also be provided as follows: for
example, analyses can be metrologic (direct measurement based) and/or comparative
(database based).

Comparative analyses can include comparisons to a database of image data for known
particles and/or variants thereof. Advanced image processing can characterize and
catalog images in real-time and/or periodic sample-measurements. Data can be discarded
and/or recorded as desired, whereas data matching known sample characteristics can
begin a suitable selected response, for example. Furthermore, a device adapted in
accordance with the present invention can be linked for communication in any data
transmission process. This can include wireless, broadband, phone modem, standard
telecom, Ethernet or other network protocols (e.g., Internet, TCP/IP, Bluetooth, cable TV
transmissions, as well as others).
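Such a comparative (database) analysis can be sketched, for illustration, as a normalized cross-correlation of a captured image against each stored reference; the function names and the 0.8 threshold are hypothetical, as the specification leaves the matching means open:

```python
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1] between two equally sized grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def best_match(captured, database, threshold=0.8):
    """Return (name, score) of the best database entry, or None if none passes."""
    scored = [(name, normalized_correlation(captured, ref))
              for name, ref in database.items()]
    name, score = max(scored, key=lambda item: item[1])
    return (name, score) if score >= threshold else None
```

A match above the threshold would begin the "suitable selected response" mentioned above; a miss would simply be discarded or logged.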
Fig. 15 illustrates a fluorescence application 1400 in accordance with an aspect of
the present invention that can be employed with the systems and processes previously
described. A k-space system is adapted in accordance with the present invention having a
light system that includes a low intensity light source at 1410, such as a Light Emitting
Diode (LED), emitting light having a wavelength of about 250 to about 400 nm (e.g.,
ultraviolet light). The LED can be employed to provide illumination or trans-
illumination as described herein (or other types). The use of an LED (or other low power
UV light source) also enables waveguide illumination, in which the UV excitation
wavelength is introduced onto a planar surface supporting the object under test at 1420,
such that evanescent-wave coupling of the UV light can excite fluorophores within the
object. For example, the UV light can be provided at about a right angle to a substrate on
which the object lies. At 1430, the LED (or other light source or combinations thereof)
can emit light for a predetermined time period and/or be controlled in a strobe-like
manner emitting pulses at a desired rate. At 1440, excitation is applied to the object for
the period determined at 1430. At 1450, automated and/or manual analysis is performed
on the object during (and/or thereabout) the excitation period.
By way of illustration, the object may be sensitive to ultraviolet light in that it
fluoresces in response to excitation by UV light from the light source. Fluorescence is a
condition of a material (organic or inorganic) in which the material continues to emit light
while absorbing excitation light. Fluorescence can be an inherent property of a material
(e.g., auto-fluorescence) or it can be induced, such as by employing fluorochrome stains or
dyes. The dye can have an affinity to a particular protein or other receptiveness so as to

facilitate discovering different conditions associated with the object. In one particular
example, fluorescence microscopy and/or digital imaging provides a manner in which to
study various materials that exhibit secondary fluorescence.
By way of further example, the UV LED (or other source) can produce intense
flashes of UV radiation for a short time period, with an image being constructed by a
sensor (sensor adapted to the excitation wavelength) a short time later (e.g., milliseconds
to seconds). This mode can be employed to investigate the time decay characteristics of
the fluorescent components of the object (or sample) being tested. This may be
important where two parts of the object (or different samples) may respond (e.g.,
fluoresce) substantially the same under continuous illumination, but may have differing
emission decay characteristics.
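The decay discrimination can be sketched with a simple exponential model; the lifetimes and intensity below are assumed values for illustration only:

```python
import math

def intensity(t_s: float, i0: float, lifetime_s: float) -> float:
    """Fluorescence emission a time t_s after the UV flash ends (exponential decay)."""
    return i0 * math.exp(-t_s / lifetime_s)

# Two samples that look identical under continuous illumination (same i0)...
i0 = 100.0
lifetime_a = 0.5e-3   # assumed: 0.5 ms decay constant
lifetime_b = 2.0e-3   # assumed: 2.0 ms decay constant

# ...but differ clearly when imaged two milliseconds after the flash.
t = 2.0e-3
ratio = intensity(t, i0, lifetime_b) / intensity(t, i0, lifetime_a)
```

With these assumed lifetimes the longer-lived sample is roughly twenty times brighter at the delayed capture instant, which is what makes the delayed-sensor mode described above discriminating.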
As a result of using the low power UV light source, such as the LED, the light
from the light source can cause at least a portion of the object under test to emit light,
generally not in the ultraviolet wavelength. Because at least a portion of the object
fluoresces, preferred post-fluorescence images can be correlated relative to those obtained
during fluorescence of the object to ascertain different characteristics of the object.
In contrast, most conventional fluorescence systems are configured to irradiate a
specimen and then to separate the much weaker re-radiating fluorescent light from the
brighter excitation light, typically through filters. In order to enable detectable
fluorescence, such conventional systems usually require powerful light sources. For
example, the light sources can be mercury or xenon arc (burner) lamps, which produce
high-intensity illumination powerful enough to image fluorescence specimens. In
addition to running hot (e.g., typically 100-250 Watt lamps), these types of light sources
typically have short operating lives (e.g., 10-100 hours). In addition, a power supply for
such conventional light sources often includes a timer to help track the number of use
hours, as arc lamps tend to become inefficient and are more likely to shatter if utilized
beyond their lifetime. Moreover, mercury burners generally do not provide even
intensity across the spectrum from ultraviolet to infrared, as much of the intensity of the
mercury burner is expended in the near ultraviolet. This often requires precision filtering
to remove undesired light wavelengths.

Accordingly, it will be appreciated that using a UV LED, in accordance with an aspect of
the present invention, provides a substantially even intensity at a desired UV wavelength
and mitigates power consumption and heat generated through its use. Additionally, the
replacement cost of an LED light source is significantly less than that of conventional lamps.
Fig. 16 illustrates a thin films application 1500 in accordance with an aspect of the
present invention. Films and thin films can be characterized in general terms as thin
layers (varying from molecular thickness(es) to significant microscopic to macroscopic
thickness(es)) of some material, or multiple materials, deposited in a manner suitable to the
respective materials onto various substrates of choice, and can include (but are not limited
to) any of the following: metallic coatings (e.g., reflective, including partial, opaque, and
transmissive), optical coatings (e.g., interference, transmission, anti-reflective, pass band,
blocking, protective, multi-coat, and so forth), plating (e.g., metallic, oxide, chemical,
anti-oxidant, thermal, and so forth), electrically conductive layers (e.g., macro- and micro-circuit
deposited and constructed), and optically conductive layers (e.g., deposited optical materials of
varying index of refraction, micro- and macro-optical "circuits"). This can also include
other coatings and layered film and film-like materials on any substrate which can be
characterized by deposition in various manners so as to leave a desired layer of some
material(s) on a substrate in a desired thickness, consistency, continuity, uniformity,
adhesion, and other parameters associated with any given deposited film. Associated thin
film analysis can include detection of micro bubbles, voids, microscopic debris,
deposition flaws, and so forth.
Proceeding to 1510, a k-space system is configured for thin film analysis in
accordance with an aspect of the present invention. The application of a k-space imaging
device to the problem of thin-film inspection and characterization can be employed in
identifying and characterizing flaws in a thin film or films, for example. Such a system
can be adapted to facilitate:
1) manual observation of a substrate with deposited thin film of all types;
2) automatic observation/analysis and characterization of a substrate with
deposited thin film of all types for pass-fail inspection;
3) automatic observation and characterization of a substrate with deposited thin
film of all types for computer-controlled comparative disposition; this can
include image data written to recording media of choice (e.g., CD-ROM,
DVD-ROM) for verification, certification, and so forth.
A k-space device can be configured for imaging at a desired Absolute Spatial
Resolution (ASR) per pixel and desired Effective Resolved Magnification (ERM). These
parameters facilitate determining FOV, DOF, and WD, for example. This can include
objective-based design configurations and/or achromat design configurations (e.g., for
wide FOV and moderate ERM and ASR). Illumination can be selected based on
inspection parameters as trans-illumination and/or epi-illumination, for example.
At 1520, a substrate is mounted in an imager in such a manner as to be scanned
by:
1) movement of an optical imaging path length by an optical scanning method;
and/or
2) indexing an object being tested directly by a process of mechanical motion and
control (e.g., automatic by computer or manual by operator). This facilitates
an inspection of an entire surface or portion of the surface as desired.
As noted above in the context of particle sizing, asynchronous imaging at selected timing
intervals and/or in real-time for respective scanned areas (e.g., determined by FOV) of the
substrate at common video rates and/or at image capture rates can be provided. Images of
indexed and/or scanned areas can be captured with desired frequency for subsequent
image processing. In addition, samples can be introduced into the device manually and/or
in an automated manner from a "feed" such as from a conveyor system.
At 1530, operational parameters for thin film applications are determined and
applied. Typical operational parameters can include (but are not limited to):
1) Imaging of various flaws and characteristics including, but not limited to,
particles and holes on a surface(s) of (or within) a thin film;
2) Modular designs which can be varied as needed for both reflective and
transparent surfaces;
3) Automated counting and categorization of surface flaws by size, location,
and/or number of successively indexed (and/or "scanned") image areas (with index
identification and totals for respective sample surfaces);
4) Registered location of defects for subsequent manual inspection;
5) Provision of images in standard format(s) for subsequent porting (e.g., via
Ethernet or other protocol) or manual and/or automated image processing for archive and
documentation on a computer, server, and/or client; and/or
6) Nominal scan time per surface of seconds to minutes dependent on total
area. Scan and indexing speed generally vary with sample area and
subsequent processing.
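The automated counting of item 3) can be sketched, under an assumed intensity threshold, as connected-component labelling of one scanned image area; this minimal pure-NumPy flood-fill version is illustrative only, not the claimed processing:

```python
import numpy as np
from collections import deque

def count_flaws(frame: np.ndarray, threshold: float) -> int:
    """Count connected bright regions (candidate flaws) in one scanned image area."""
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    flaws = 0
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        flaws += 1                         # new, previously unvisited flaw
        queue = deque([(y, x)])            # flood-fill its 4-connected region
        seen[y, x] = True
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return flaws
```

Per-flaw size and location (for items 3 and 4) would fall out of the same flood fill by recording each region's pixel count and centroid.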
At 1540, software and/or hardware based computerized image processing/analysis
can occur. Images from a device adapted in accordance with the present invention can be
processed in accordance with substantially any hardware and/or software process.
Software-based image processing can be achieved by custom software and/or
commercially available software, since the image file formats are digital formats (e.g., bit
maps of captured films). Analysis, characterization, and so forth can also be provided as
follows: for example, analyses can be metrologic (direct measurement based)
and/or comparative (database based). Comparative analyses can include comparisons to a
database of image data for known films and/or variants thereof. Advanced image
processing can characterize and catalog images in real-time and/or periodic sample-
measurements. Data can be discarded and/or recorded as desired, whereas data matching
known sample characteristics can begin a suitable selected response, for example.
Furthermore, a device adapted in accordance with the present invention can be linked for
communication in any data transmission process. This can include wireless, broadband,
phone modem, standard telecom, Ethernet or other network protocols (e.g., Internet,
TCP/IP, Bluetooth, cable TV transmissions, as well as others).
In another aspect of the present invention, an imaging system adapted as described
above provides high effective resolved magnification and high spatial resolution, among
other features, that can be combined with knowledge of biological materials and methods to provide
improved biological material imaging systems and methods. The biological material
imaging systems and methods of the present invention enable the production of improved
images (higher effective magnification, improved resolution, improved depth of field, and
the like) leading to the identification of biological materials as well as the classification of
biological materials (for example, as normal or abnormal).

Biological material includes microorganisms (organisms too small to be observed
with the unaided eye) such as bacteria, viruses, protozoans, fungi, and ciliates; cell material
from organisms such as cells (lysed, intracellular material, or whole cells), proteins,
antibodies, lipids, and carbohydrates, tagged or untagged; and portions of organisms such
as clumps of cells (tissue samples), blood, pupils, irises, finger tips, teeth, portions of the
skin, hair, mucous membranes, bladder, breast, male/female reproductive system
components, muscle, vascular components, central nervous system components, liver,
bone, colon, pancreas, and the like. Since the biological material imaging system of the
present invention can employ a relatively large working distance, portions of the human
body may be directly examined without the need for removing a tissue sample.
Cells include human cells, non-human animal cells, plant cells, and
synthetic/research cells. Cells include prokaryotic and eukaryotic cells. Cells may be
healthy, cancerous, mutated, damaged, or diseased.
Examples of non-human cells include anthrax, Actinomycetes spp., Azotobacter,
Bacillus anthracis, Bacillus cereus, Bacteroides species, Bordetella pertussis, Borrelia
burgdorferi, Campylobacter jejuni, Chlamydia species, Clostridium species,
Cyanobacteria, Deinococcus radiodurans, Escherichia coli, Enterococcus, Haemophilus
influenzae, Helicobacter pylori, Klebsiella pneumoniae, Lactobacillus spp., Lawsonia
intracellularis, Legionella, Listeria spp., Micrococcus spp., Mycobacterium leprae,
Mycobacterium tuberculosis, Mycobacteria, Neisseria gonorrhoeae, Neisseria meningitidis,
Shigella species, Staphylococcus aureus, Streptococci, Thiomargarita namibiensis,
Treponema pallidum, Vibrio cholerae, Yersinia enterocolitica, Yersinia pestis, and the
like.
Additional examples of biological material are those that cause illnesses such as
colds, infections, malaria, chlamydia, syphilis, gonorrhea, conjunctivitis, anthrax,
meningitis, botulism, diarrhea, brucellosis, campylobacter, candidiasis, cholera,
glanders (Burkholderia mallei), influenza, leprosy, histoplasmosis, legionellosis,
leptospirosis, listeriosis, melioidosis, nocardiosis, nontuberculosis mycobacterium, peptic
ulcer disease, pertussis, pneumonia, psittacosis, salmonella enteritidis, shigellosis,
sporotrichosis, strep throat, toxic shock syndrome, trachoma, typhoid fever, urinary tract
infections, Lyme disease, and the like. As described later, the present invention further
relates to methods of diagnosing any of the above illnesses.
Examples of human cells include fibroblast cells, skeletal muscle cells, neutrophil
white blood cells, lymphocyte white blood cells, erythroblast red blood cells, osteoblast
bone cells, chondrocyte cartilage cells, basophil white blood cells, eosinophil white blood
cells, adipocyte fat cells, invertebrate neurons (Helix aspersa), mammalian neurons,
adrenomedullary cells, melanocytes, epithelial cells, endothelial cells; tumor cells of all
types (particularly melanoma, myeloid leukemia, carcinomas of the lung, breast, ovaries,
colon, kidney, prostate, pancreas and testes), cardiomyocytes, endothelial cells, epithelial
cells, lymphocytes (T-cell and B-cell), mast cells, eosinophils, vascular intimal cells,
hepatocytes, leukocytes including mononuclear leukocytes, stem cells such as
haemopoietic, neural, skin, lung, kidney, liver and myocyte stem cells, osteoclasts,
chondrocytes and other connective tissue cells, melanocytes, liver cells,
kidney cells, and adipocytes. Examples of research cells include transformed cells, Jurkat
T cells, NIH3T3 cells, CHO, COS, etc.
A useful source of cell lines and other biological material may be found in ATCC
Cell Lines and Hybridomas, Bacteria and Bacteriophages, Yeast, Mycology and Botany,
and Protists: Algae and Protozoa, and others available from the American Type Culture
Collection (Rockville, Md.), all of which are herein incorporated by reference. These are non-
limiting examples, as a multitude of cells and other biological material can be listed.
The identification or classification of biological material can in some instances
lead to the diagnosis of disease. Thus, the present invention also provides improved
systems and methods of diagnosis. For example, the present invention also provides
methods for detection and characterization of medical pathologies such as cancer;
pathologies of various systems, digestive systems, reproductive systems, and the
alimentary canal; in addition to atherosclerosis, arteriosclerosis,
inflammation, atherosclerotic heart disease, myocardial infarction, trauma to arterial or
venal walls, neurodegenerative disorders, and cardiopulmonary disorders. The present
invention also provides methods for detection and characterization of viral and bacterial
infections. The present invention also enables assessing the effects of various agents or
physiological activities on biological materials, in both in vitro and in vivo systems. For
example, the present invention enables assessment of the effect of a physiological agent,
such as a drug, on a population of cells or tissue grown in culture.
The biological material imaging system of the present invention enables computer
driven control or automated process control to obtain data from biological material
samples. In this connection, a computer or processor, coupled with the biological
material imaging system, contains or is coupled to a memory or database containing
images of biological material, such as diseased cells of various types. In this context,
automatic designation of normal and abnormal biological material may be made. The
biological material imaging system secures images from a given biological material
sample, and the images are compared with images in the memory, such as images of
diseased cells in the memory. In one sense, the computer/processor performs a
comparison analysis of collected image data and stored image data, and based on the
results of the analysis, formulates a determination of the identity of a given biological
material; of the classification of a given biological material (normal/abnormal,
cancerous/non-cancerous, benign/malignant, infected/not infected, and the like); and/or of
a condition (diagnosis).
If the computer/processor determines that a sufficient degree of similarity is
present between particular images from a biological material sample and saved images
(such as of diseased cells or of the same biological material), then the image is saved and
data associated with the image may be generated. If the computer/processor determines
that a sufficient degree of similarity is not present between particular images of a
biological material sample and saved images of diseased cells/particular biological
material, then the biological material sample is repositioned and additional images are
compared with images in the memory. It is to be appreciated that statistical methods can
be applied by the computer/processor to assist in the determination that a sufficient degree
of similarity is present between particular images from a biological material sample and
saved images of biological material. Any suitable correlation means, memory, operating
system, analysis component, and software/hardware may be employed by the
computer/processor.
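That comparison-and-reposition flow can be sketched as a loop; the function names and the similarity callback are hypothetical, since the specification leaves the correlation means open:

```python
def examine(sample_positions, stored_images, capture, similarity, threshold=0.9):
    """Capture images at successive stage positions until one sufficiently
    resembles a stored image; return (position, matched_label) or None."""
    for position in sample_positions:          # reposition the sample each pass
        image = capture(position)
        for label, reference in stored_images.items():
            if similarity(image, reference) >= threshold:
                return position, label         # sufficient similarity: save/report
    return None                                # no stored image matched
```

The `capture` callback stands in for the imaging system plus stage controller, and `similarity` for whatever correlation or statistical measure the computer/processor employs.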
Referring to Figure 17, an exemplary aspect of an automated biological material
imaging system 1600 in accordance with one aspect of the present invention, enabling
computer driven control or automated process control to obtain data from biological
material samples, is shown. An imaging system 1602 described/configured in connection
with Figs. 1-16 above may be employed to capture an image of a biological material 1604.
The imaging system 1602 is coupled to a processor 1606 and/or computer that reads the
image generated by the imaging system 1602 and compares the image to a variety of
images in the data store 1608.
The processor 1606 contains an analysis component to make the comparison.
Some of the many algorithms used in image processing include convolution (on which
many others are based), FFT, DCT, thinning (or skeletonisation), edge detection and
contrast enhancement. These are usually implemented in software but may also use
special purpose hardware for speed. FFT (fast Fourier transform) is an algorithm for
computing the Fourier transform of a set of discrete data values. Given a finite set of data
points, for example, a periodic sampling taken from a real-world signal, the FFT
expresses the data in terms of its component frequencies. It also addresses the essentially
identical inverse concern of reconstructing a signal from the frequency data. DCT
(discrete cosine transform) is a technique for expressing a waveform as a weighted sum of
cosines. There are several applications designed for image processing, e.g., CELIP
(cellular language for image processing) and VPL (visual programming language).
The data store 1608 contains one or more sets of predetermined images. The
images may include normal images of various biological materials and/or abnormal
images of various biological materials (diseased, mutated, physically disrupted, and the
like). The images stored in the data store 1608 provide a basis to determine whether or
not a given captured image is similar or not similar (or the degree of similarity) to the
stored images. In one aspect, the automated biological material imaging system 1600 can
be employed to determine if a biological material sample is normal or abnormal. For
example, the automated biological material imaging system 1600 can identify the
presence of diseased cells, such as cancerous cells, in a biological material sample,
thereby facilitating diagnosis of a given disease or condition. In another aspect, the
automated biological material imaging system 1600 can diagnose the illnesses/diseases
listed above by identifying the presence of an illness causing biological material (such as
an illness causing bacteria described above) and/or determining that a given biological
material is infected with an illness causing entity such as a bacteria, or determining that a
given biological material is abnormal (cancerous).
In yet another aspect, the automated biological material imaging system 1600 can
be employed to determine the identity of a biological material of unknown origin. For
example, the automated biological material imaging system 1600 can identify a white
powder as containing anthrax. The automated biological material imaging system 1600
can also facilitate processing biological material, such as performing white blood cell or
red blood cell counts on samples of blood, for example.
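The comparison of a captured image against the stored reference images, described above, can be sketched as follows; this is a minimal illustration using normalized cross-correlation, with a hypothetical `classify` helper and threshold, and is not the patented method itself:

```python
import numpy as np

def similarity(img_a, img_b):
    """Normalized cross-correlation between two equal-sized grayscale images;
    1.0 means identical up to brightness/contrast, ~0 means unrelated."""
    a = (img_a - img_a.mean()).ravel()
    b = (img_b - img_b.mean()).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def classify(captured, reference_sets, threshold=0.8):
    """Return the label (e.g., 'normal' or 'abnormal') of the best-matching
    reference image set, or None if nothing exceeds the assumed threshold."""
    best_label, best_score = None, threshold
    for label, refs in reference_sets.items():
        for ref in refs:
            score = similarity(captured, ref)
            if score > best_score:
                best_label, best_score = label, score
    return best_label
```

A captured image is scored against each stored reference set, and the best-scoring label above the threshold is reported, mirroring the similar/not-similar (degree-of-similarity) determination described above.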
The computer/processor 1606 may be coupled to a controller which controls a
servo motor or other means of moving the biological material sample within an object
plane so that remote/hands free imaging is facilitated. That is, motors, adjusters, and/or
other mechanical means can be employed to move the biological material sample slide
within the object field of view.
Moreover, since the images of the biological material examination process are
optimized for viewing from a computer screen, television, and/or closed circuit monitor,
remote and web-based viewing and control may be implemented. Real-time imaging
facilitates at least one of rapid diagnosis, data collection/generation, and the like.
In another aspect, the biological material imaging system is directed to a portion of
a human (such as a lesion on an arm, a sore on the cornea, and the like) and images formed.
The images can be sent to a computer/processor (or across a network such as the Internet),
which is instructed to identify the possible presence of a particular type of diseased cell
(an image of which is stored in memory). When a diseased cell is identified, the
computer/processor instructs the system to remove/destroy the diseased cell, for example,
employing a laser, liquid nitrogen, cutting instrument, and/or the like.
Fig. 18 depicts a high-level machine vision system 1800 in accordance with the
subject invention. The system 1800 includes an imaging system 10 (Fig. 1) in accordance
with the subject invention. The imaging system 10 is discussed in substantial detail supra
and thus further discussion regarding details related thereto is omitted for sake of brevity.
The imaging system 10 can be employed to collect data relating to a product or process
1810, and provide the image information to a controller 1820 that can regulate the product

or process 1810, for example, with respect to production, process control, quality control,
testing, inspection, etc. The imaging system 10 as noted above provides for collecting
image data at a granularity not achievable by many conventional systems. Moreover, the
robust image data provided by the subject imaging system 10 can afford for highly
effective machine vision inspection of the product or process 1810. For example, minute
product defects typically not detectable by many conventional machine vision systems can
be detected by the subject system 1800 as a result of the image data collected by the
imaging system 10. The controller 1820 can be any suitable controller or control system
employed in connection with a fabrication scheme, for example. The controller 1820 can
employ the collected image data to reject a defective product or process, revise a product
or process, accept a product or process, etc. as is common to machine vision based control
systems. It is to be appreciated that the system 1800 can be employed in any suitable
machine-vision based environment, and all such applications of the subject invention are
intended to fall within the scope of the hereto appended claims.
For example, the subject system 1800 could be employed in connection with
semiconductor fabrication where device and/or process tolerances are critical to
manufacturing consistent, reliable semiconductor-based products. Thus, the product 1810
could represent a semiconductor wafer, for example, and the imaging system 10 could
be employed to collect data (e.g., critical dimensions, thicknesses, potential defects, and other
physical aspects) relating to devices being formed on the wafer. The controller 1820
can employ the collected data to reject the wafer because of various defects, modify a
process in connection with fabricating devices on the wafer, accept the wafer, etc.
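The reject/revise/accept disposition that the controller can perform on the collected inspection data can be sketched as a simple rule; the `WaferInspection` fields and the tolerance values below are hypothetical assumptions for illustration, not taken from the disclosure:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ACCEPT = "accept"
    REVISE = "revise"
    REJECT = "reject"

@dataclass
class WaferInspection:
    """Hypothetical summary of image data collected for one wafer."""
    defect_count: int
    critical_dimension_nm: float

def dispose(w, cd_target_nm=90.0, cd_tol_nm=5.0, max_defects=3):
    """Map inspection results to a control action (illustrative thresholds)."""
    if w.defect_count > max_defects:
        return Action.REJECT           # too many defects: reject the wafer
    if abs(w.critical_dimension_nm - cd_target_nm) > cd_tol_nm:
        return Action.REVISE           # out-of-spec dimension: revise the process
    return Action.ACCEPT
```

For instance, a wafer with many detected defects is rejected outright, one whose critical dimension drifts out of tolerance triggers a process revision, and one within tolerance is accepted.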
What has been described above are preferred aspects of the present invention. It
is, of course, not possible to describe every conceivable combination of components or
methodologies for purposes of describing the present invention, but one of ordinary skill
in the art will recognize that many further combinations and permutations of the present
invention are possible. Accordingly, the present invention is intended to embrace all such
alterations, modifications and variations that fall within the spirit and scope of the
appended claims.

We claim:
1. An imaging system, comprising:
a sensor (20, 212, 854, 910, 1000, 1100, 1200) having one or more
receptors (214), the one or more receptors (214) having a receptor size
parameter; and
an image transfer medium (30) having a diffraction-limited resolution size
parameter (50, 230) in an object field of view (34, 220) determined by the
optical characteristics of the image transfer medium (30), the image transfer
medium (30) operative to scale the proportions of the receptor size parameter to
an apparent size (54) of about the diffraction-limited resolution size parameter
(50) in the object field of view (34, 220), the image transfer medium (30)
comprising a multiple lens configuration (128, 216), the multiple lens
configuration (128, 216) comprising a first lens (234, 832, 944, 1040, 1210)
positioned toward the object field of view (34, 220) and a second lens (236, 850,
916) positioned toward the sensor (20, 212, 910, 1100, 1200), the first lens
(234, 832, 944, 1040, 1210) sized to have a focal length smaller than the second
lens (236, 850, 916) to provide an apparent reduction (54, 232) of the receptor
size parameter within the image transfer medium (30).

2. The system as claimed in claim 1, wherein the image transfer medium
(30) provides a k-space (110, 216) filter that correlates a pitch (228) associated
with the one or more receptors (214) to the diffraction-limited resolution size
parameter (50, 230) within the object field of view (34, 220).
3. The system as claimed in claim 2, wherein the pitch (228) is unit-mapped
to about the size of the diffraction-limited resolution size parameter (50, 230)
within the object field of view (34, 220).
4. The system as claimed in claim 1, wherein the image transfer medium
(30) comprises at least one of an aspherical lens (124), a multiple lens
configuration (128), a fibre optic taper (132), an image conduit (132), and a
holographic optic element (136).
5. The system as claimed in claim 1, wherein the sensor (20, 212, 854, 910,
1000, 1200) comprises an M by N array of pixels associated with the one or
more receptors (214), M and N representing integer rows and columns
respectively, the sensor further comprising at least one of a digital sensor, an
analog sensor, a Charge Coupled Device (CCD) sensor, a CMOS sensor, a Charge
Injection Device (CID) sensor, an array sensor, and a linear scan sensor.

6. The system as claimed in claim 1, comprising a computer (824, 994, 1606)
and a memory (1608) to receive an output from the sensor (20, 212, 854, 910,
1000, 1100, 1200), the computer (824, 994) stores at least one of the output in
the memory, performs automated analysis of the output in the memory, and
maps the memory to a display to enable manual analysis of an image.
7. The system as claimed in claim 1, comprising an illumination source to
illuminate one or more non-luminous objects within the object field of view, the
illumination source (240, 804, 970) comprises at least one of a Light Emitting
Diode, wavelength-specific lighting, broad-band lighting, continuous lighting,
strobed lighting, Kohler illumination, Abbe illumination, phase-contrast
illumination, darkfield illumination, brightfield illumination, Epi illumination,
coherent light, non-coherent light, visible light and non-visible light, the non-
visible light being suitably matched to a sensor adapted for non-visible light.
8. The system as claimed in claim 7, wherein the non-visible light
comprises at least one of infrared and ultraviolet wavelengths.

9. The system as claimed in claim 1, comprising an associated application, the
application including at least one of imaging, control, inspection, microscopy,
automated analysis, bio-medical analysis, cell colony counting, histology, frozen
section analysis, cellular cytology, haematology, pathology, oncology,
fluorescence, interference, phase analysis, biological materials analysis, particle
sizing applications, thin films analysis, air quality monitoring, airborne particulate
measurement, optical defect analysis, metallurgy, semiconductor inspection and
analysis, automated vision systems, 3-D imaging, cameras, copiers, FAX
machines and medical systems applications.
10. A method of producing an image, comprising:
determining a pitch size (116, 228) between adjacent pixels on a sensor;
determining a resolvable object size in an object field of view (34, 220);
and
scaling the pitch size through an optical medium to correspond with the
resolvable object size, wherein the optical medium provides a mapping of receptor size to
about a size of a diffraction-limited object in the object field of view.

11. A machine vision system, comprising:
an imaging system for collecting image data from a product or process,
comprising:
a sensor (20, 212, 854, 910, 1000, 1100, 1200) having one or more
receptors (214), the one or more receptors (214) having a receptor size
parameter;
at least one optical device (30) to direct light from an object field of view
(34, 220) determined by the optical characteristics of the at least one optical
device to the one or more receptors (214) of the sensor (20, 212, 854, 910,
1000, 1100, 1200), the at least one optical device (30) provides a mapping of
receptor size to about a size of a diffraction-limited object (50, 230) in the object
field of view (34, 220), the optical device (30) comprising a multiple lens
configuration (128, 216), the multiple lens configuration (128, 216) comprising a
first lens (234, 832, 944, 1040, 1210) positioned toward the object field of view
(34, 220) and a second lens (236, 850, 916) positioned toward the sensor (20,
212, 854, 910, 1000, 1100, 1200), the first lens (234, 832, 944, 1040, 1210)
sized to have a focal length smaller than the second lens (236, 850, 916) to
provide an apparent reduction of the receptor size parameter within the optical
device (30); and

a controller (1820) that receives the image data and employs the image
data in connection with fabrication or control of the product or process.
12. The machine vision system as claimed in claim 11 being employed in a
semiconductor-based fabrication system.

Patent Number 222889
Indian Patent Application Number 158/KOLNP/2004
PG Journal Number 35/2008
Publication Date 29-Aug-2008
Grant Date 27-Aug-2008
Date of Filing 06-Feb-2004
Name of Patentee PALANTYR RESEARCH, LLC
Applicant Address 24400 HIGHLAND ROAD, CLEVELAND, OH
Inventors:
# Inventor's Name Inventor's Address
1 HOWARD FEIN, 207 RICHMOND ROAD, RICHMOND HTS. OH 44143
2 ANDREW G. CARTLIDGE 22 COMMODORE PLACE PALM BEACH GARDENS, FL 33418
PCT International Classification Number H01L 27/00
PCT International Application Number PCT/US02/21392
PCT International Filing date 2002-07-03
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 09/900,218 2001-07-06 U.S.A.
2 10/166,137 2002-06-10 U.S.A.
3 10/189,326 2002-07-02 U.S.A.