Iris scan and biometrics



Bio Medical | Electronics Seminar Topic



A method for rapid visual recognition of personal identity is described, based on the failure of a test of statistical independence. The most unique phenotypic feature visible in a person's face is the detailed texture of each eye's iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person's iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 512-byte "IrisCode". Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical "cross-over" error rate of one in 131,000 when a decision criterion is adopted that would equalize the False Accept and False Reject error rates.


Reliable automatic recognition of persons has long been an attractive goal. As in all pattern recognition problems, the key issue is the relation between inter-class and intra-class variability: objects can be reliably classified only if the variability among different instances of a given class is less than the variability between different classes. Iris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. The iris has the great mathematical advantage that its pattern variability among different persons is enormous. In addition, as an internal (yet externally visible) organ of the eye, the iris is well protected from the environment and stable over time. As a planar object its image is relatively insensitive to angle of illumination, and changes in viewing angle cause only affine transformations; even the non-affine pattern distortion caused by pupillary dilation is readily reversible. Finally, the ease of localizing eyes in faces, and the distinctive annular shape of the iris, facilitate reliable and precise isolation of this feature and the creation of a size-invariant representation.

Algorithms developed by Dr. John Daugman at Cambridge are today the basis for all iris recognition systems worldwide.


Biometrics, the use of a physiological or behavioral aspect of the human body for authentication or identification, is a rapidly growing industry. Biometric solutions are used successfully in fields as varied as e-commerce, network access, time and attendance, ATMs, corrections, banking, and medical record access. Biometrics' ease of use, accuracy, reliability, and flexibility are quickly establishing them as the premier authentication technology.
Efforts to devise reliable mechanical means for biometric personal identification have a long and colourful history. In the Victorian era, for example, inspired by the birth of criminology and a desire to identify prisoners and malefactors, Sir Francis Galton, F.R.S., proposed various biometric indices for facial profiles which he represented numerically. Seeking to improve on the system of French physician Alphonse Bertillon for classifying convicts into one of 81 categories, Galton devised a series of spring-loaded "mechanical selectors" for facial measurements and established an Anthropometric Laboratory at South Kensington.

Fig 1

The possibility that the iris of the eye might be used as a kind of optical fingerprint for personal identification was suggested originally by ophthalmologists, who noted from clinical experience that every iris had a highly detailed and unique texture, which remained unchanged in clinical photographs spanning decades (contrary to the occult diagnostic claims of "iridology"). Among the visible features in an iris, some of which may be seen in the close-up image of Fig 1, are the trabecular meshwork of connective tissue (pectinate ligament), collagenous stromal fibers, ciliary processes, contraction furrows, crypts, a serpentine vasculature, rings, corona, colouration, and freckles. The striated trabecular meshwork of chromatophore and fibroblast cells creates the predominant texture under visible light, but all of these sources of radial and angular variation taken together constitute a distinctive "fingerprint" that can be imaged at some distance from the person. Further properties of the iris that enhance its suitability for use in automatic identification include:

• Its inherent isolation and protection from the external environment, being an internal organ of the eye, behind the cornea and aqueous humour.
• The impossibility of surgically modifying it without unacceptable risk to vision.
• Its physiological response to light, which provides a natural test against artifice.


3.1. Iris Recognition
Iris recognition leverages the unique features of the human iris to provide an unmatched identification technology. So accurate are the algorithms used in iris recognition that the entire planet could be enrolled in an iris database with only a small chance of false acceptance or false rejection. The technology also addresses the FTE (Failure To Enroll) problems which lessen the effectiveness of other biometrics. The tremendous accuracy of iris recognition allows it, in many ways, to stand apart from other biometric technologies. The technology is based on research and patents held by Dr. John Daugman.
3.2. The Iris
Iris recognition is based on visible qualities of the iris. A primary visible characteristic is the trabecular meshwork (permanently formed by the 8th month of gestation), a tissue that gives the appearance of dividing the iris in a radial fashion. Other visible characteristics include rings, furrows, freckles, and the corona. Expressed simply, iris recognition technology converts these visible characteristics into a 512-byte IrisCode, a template stored for future verification attempts. 512 bytes is a fairly compact size for a biometric template, but the quantity of information derived from the iris is massive. From the iris's 11 mm diameter, Dr. Daugman's algorithms provide 3.4 bits of data per square mm. This density of information is such that each iris can be said to have 266 unique "spots", as opposed to 13-60 for traditional biometric technologies. This figure of 266 is cited in all iris recognition literature; after allowing for the algorithm's correlative functions and for characteristics inherent to most human eyes, Dr. Daugman concludes that 173 "independent binary degrees-of-freedom" can be extracted from his algorithm - an exceptionally large number for a biometric.

3.3. The Algorithms

The first step is location of the iris by a dedicated camera no more than 3 feet from the eye. After the camera situates the eye, the algorithm narrows in from the right and left of the iris to locate its outer edge. This horizontal approach accounts for obstruction caused by the eyelids. It simultaneously locates the inner edge of the iris (at the pupil), excluding the lower 90 degrees because of inherent moisture and lighting issues. The monochrome camera uses both visible and infrared light, the latter of which is located in the 700-900 nm range. Upon location of the iris, an algorithm uses 2-D Gabor wavelets to filter and map segments of the iris into hundreds of vectors (known here as phasors). The wavelets of various sizes assign values drawn from the orientation and spatial frequency of select areas, bluntly referred to as the "what" of the sub-image, along with the position of these areas, bluntly referred to as the "where". The "what" and "where" are used to form the IrisCode. Not all of the iris is used: a portion of the top, as well as 45 degrees of the bottom, is unused to account for eyelids and camera-light reflections. For future identification, the database will not be comparing images of irises, but rather hexadecimal representations of data returned by wavelet filtering and mapping.


To capture the rich details of iris patterns, an imaging system should resolve a minimum of 70 pixels in iris radius. In the field trials to date, a resolved iris radius of 100 to 140 pixels has been more typical. Monochrome CCD cameras (480 x 640) have been used because NIR illumination in the 700 nm - 900 nm band was required for imaging to be invisible to humans. Some imaging platforms deployed a wide-angle camera for coarse localization of eyes in faces, to steer the optics of a narrow-angle pan/tilt camera that acquired higher resolution images of eyes. There exist many alternative methods for finding and tracking facial features such as the eyes, and this well-researched topic will not be discussed further here. In these trials, most imaging was done without active pan/tilt camera optics, but instead exploited visual feedback via a mirror or video image to enable cooperating Subjects to position their own eyes within the field of view of a single narrow-angle camera.
Focus assessment was performed in real-time (faster than video frame rate) by measuring the total high-frequency power in the 2D Fourier spectrum of each frame, and seeking to maximize this quantity either by moving an active lens or by providing audio feedback to Subjects to adjust their range appropriately. Images passing a minimum focus criterion were then analyzed to find the iris, with precise localization of its boundaries using a coarse-to-fine strategy terminating in single-pixel precision estimates of the center coordinates and radius of both the iris and the pupil. Although the results of the iris search greatly constrain the pupil search, concentricity of these boundaries cannot be assumed. Very often the pupil center is nasal, and inferior, to the iris center. Its radius can range from 0.1 to 0.8 of the iris radius. Thus, all three parameters defining the pupillary circle must be estimated separately from those of the iris.
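The focus criterion described above can be sketched in a few lines. The following is a minimal illustration assuming NumPy; the radial cutoff of 0.3 and the use of a full-frame FFT (rather than a hardware-friendly convolution kernel) are simplifying assumptions, not the deployed measure:

```python
import numpy as np

def focus_score(frame):
    """Estimate focus as the fraction of spectral power lying at high
    spatial frequencies. `frame` is a 2-D grayscale array."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance of each frequency bin from the spectrum centre,
    # normalised so the frame edges sit near radius 1.
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    high = spectrum[r > 0.3].sum()   # assumed high-frequency cutoff
    return high / spectrum.sum()
```

A sharp frame scores higher than a blurred version of the same scene, which is what the feedback loop (lens motion or audio cues) seeks to maximize.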

A very effective integrodifferential operator for determining these parameters is:

max(r, x0, y0) | Gσ(r) * ∂/∂r ∮(r, x0, y0) I(x, y) / (2πr) ds |   (1)
where I(x, y) is an image such as Fig 1 containing an eye. The operator searches over the image domain (x, y) for the maximum in the blurred partial derivative, with respect to increasing radius r, of the normalized contour integral of I(x, y) along a circular arc ds of radius r and center coordinates (x0, y0). The symbol * denotes convolution and Gσ(r) is a smoothing function such as a Gaussian of scale σ. The complete operator behaves in effect as a circular edge detector, blurred at a scale set by σ, which searches iteratively for a maximum contour integral derivative with increasing radius at successively finer scales of analysis through the three-parameter space of center coordinates and radius (x0, y0, r) defining a path of contour integration.
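A discrete sketch of operator (1) for one fixed candidate centre might look like the following. This is an illustration assuming NumPy; the sampling density, Gaussian width, and exhaustive radius scan are simplifications of the coarse-to-fine search described in the text:

```python
import numpy as np

def circular_mean(img, x0, y0, r, n=64):
    """Mean intensity of `img` sampled along a circle of radius r,
    i.e. the normalized contour integral in operator (1)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def best_radius(img, x0, y0, radii, sigma=2.0):
    """For a fixed centre (x0, y0), return the radius at which the
    Gaussian-blurred derivative of the contour mean is largest."""
    means = np.array([circular_mean(img, x0, y0, r) for r in radii])
    deriv = np.diff(means)                       # d/dr of the contour mean
    k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    blurred = np.convolve(deriv, k / k.sum(), mode="same")
    return radii[np.argmax(np.abs(blurred))]
```

On a synthetic dark disc against a bright background, the maximum of the blurred derivative lands at the disc boundary, which is exactly the circular-edge-detector behaviour the text describes.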

The operator in (1) serves to find both the pupillary boundary and the outer (limbus) boundary of the iris, although the initial search for the limbus also incorporates evidence of an interior pupil to improve its robustness, since the limbic boundary itself usually has extremely soft contrast when long-wavelength NIR illumination is used. Once the coarse-to-fine iterative searches for both these boundaries have reached single-pixel precision, a similar approach to detecting curvilinear edges is used to localize both the upper and lower eyelid boundaries. The path of contour integration in (1) is changed from circular to arcuate, with spline parameters fitted by standard statistical estimation methods to describe optimally the available evidence for each eyelid boundary. The result of all these localization operations is the isolation of iris tissue from other image regions, as illustrated in Fig 1 by the graphical overlay on the eye.


Each isolated iris pattern is then demodulated to extract its phase information using quadrature 2D Gabor wavelets (Daugman 1985, 1988, 1994). This encoding process is illustrated in Fig 2. It amounts to a patch-wise phase quantization of the iris pattern, by identifying in which quadrant of the complex plane each resultant phasor lies when a given area of the iris is projected onto complex-valued 2D Gabor wavelets:
Fig 2: Phase-Quadrant Demodulation Code

h{Re,Im} = sgn{Re,Im} ∫ρ ∫φ I(ρ, φ) e^(−iω(θ0 − φ)) e^(−(r0 − ρ)²/α²) e^(−(θ0 − φ)²/β²) ρ dρ dφ   (2)

where h{Re,Im} can be regarded as a complex-valued bit whose real and imaginary parts are each either 1 or 0, depending on the sign (sgn) of the 2D integral; I(ρ, φ) is the raw iris image in a dimensionless polar coordinate system that is size- and translation-invariant, and which also corrects for pupil dilation as explained in a later section; α and β are the multi-scale 2D wavelet size parameters, spanning an 8-fold range from 0.15 mm to 1.2 mm on the iris; ω is wavelet frequency, spanning 3 octaves in inverse proportion to β; and (r0, θ0) represent the polar coordinates of each region of iris for which the phasor coordinates h{Re,Im} are computed. Such a phase quadrant coding sequence is illustrated for one iris by the bit stream shown graphically in Fig 1.
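The projection and sign quantization of equation (2) can be sketched directly. The following is a minimal illustration assuming NumPy; the grid resolution and parameter values are illustrative, not those of the deployed algorithm:

```python
import numpy as np

def phase_bits(I, rho, phi, r0, theta0, omega, alpha, beta):
    """Two phase bits from projecting the polar iris image I(rho, phi)
    onto a complex 2-D Gabor wavelet, per equation (2).
    I: 2-D array over the (rho, phi) grid; rho, phi: coordinate vectors."""
    R, P = np.meshgrid(rho, phi, indexing="ij")
    wavelet = (np.exp(-1j * omega * (theta0 - P))
               * np.exp(-((r0 - R) ** 2) / alpha ** 2)
               * np.exp(-((theta0 - P) ** 2) / beta ** 2))
    z = (I * wavelet).sum()   # discrete form of the double integral
    # One bit per quadrature component: the quadrant of the complex plane
    # in which the phasor z lies.
    return int(z.real > 0), int(z.imag > 0)
```

Each wavelet projection thus contributes exactly two bits, identifying the quadrant of the complex plane in which the phasor lies, in keeping with the phase-quadrant coding shown in Fig 2.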

A desirable feature of the phase code portrayed in Fig 2 is that it is a cyclic, or Gray, code: in rotating between any adjacent phase quadrants, only a single bit changes, unlike a binary code in which two bits may change, making some errors arbitrarily more costly than others. Altogether 2,048 such phase bits (256 bytes) are computed for each iris, but in a major improvement over the earlier (Daugman 1993) algorithms, an equal number of masking bits are now also computed to signify whether any iris region is obscured by eyelids, contains any eyelash occlusions, specular reflections, boundary artifacts of hard contact lenses, or poor signal-to-noise ratio, and thus should be ignored in the demodulation code as artifact.
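The masked Exclusive-OR comparison implied by these code and mask bits, mentioned in the abstract as running at 4,000 comparisons per second, can be sketched as follows. This is a minimal illustration assuming NumPy; the function name and boolean-array representation are assumptions for clarity:

```python
import numpy as np

def hamming_distance(code_a, mask_a, code_b, mask_b):
    """Fractional Hamming distance between two iris codes, counting only
    bits that both masks flag as valid (not eyelid, eyelash, reflection,
    or other artifact). All arguments are boolean arrays of 2,048 bits."""
    valid = mask_a & mask_b              # bits trustworthy in both codes
    disagree = (code_a ^ code_b) & valid # XOR marks disagreeing phase bits
    return disagree.sum() / valid.sum()
```

Two codes from the same eye yield a distance near 0, while codes from different eyes behave like independent coin tosses and cluster near 0.5, which is what makes the test of statistical independence described in the abstract so decisive.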

Fig 3: Illustration of a poorly focused eye image

Only phase information is used for recognizing irises because amplitude information is not very discriminating, and it depends upon extraneous factors such as imaging contrast, illumination, and camera gain. The phase bit settings which code the sequence of projection quadrants as shown in Fig 2 capture the information of wavelet zero-crossings, as is clear from the sign operator in (2). The extraction of phase has the further advantage that phase angles are assigned regardless of how low the image contrast may be, as illustrated by the extremely out-of-focus image in Fig 3. Its phase bit stream has statistical properties such as run lengths similar to those of the code for the properly focused eye image in Fig 1. (Fig 3 also illustrates the robustness of the iris- and pupil-finding operators, and the eyelid detection operators, despite poor focus.) The benefit which arises from the fact that phase bits are set even for a poorly focused image as shown here, even if based only on random CCD noise, is that different poorly focused irises never become confused with each other when their phase codes are compared. By contrast, images of different faces look increasingly alike when poorly resolved, and may be confused with each other by appearance-based face recognition algorithms.
