
Optics Communications 285 (2012) 2303–2310


Holography based super resolution

Anwar Hussain, Asloob A. Mudassar ⁎
Department of Physics and Applied Mathematics, Pakistan Institute of Engineering and Applied Sciences, Islamabad, 45650, Pakistan

⁎ Corresponding author. E-mail address: [email protected] (A.A. Mudassar).

0030-4018/$ – see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.optcom.2012.01.022

Article info

Article history: Received 11 August 2011; Received in revised form 28 December 2011; Accepted 13 January 2012; Available online 30 January 2012

Keywords: Coherent imaging; Holography imaging; Super-resolution; Coherent illumination; Fiber illumination; Image reconstruction

Abstract

This paper describes the simulation of a simple technique of superresolution based on holographic imaging in the spectral domain. An input beam assembly containing 25 optical fibers with different orientations and positions is placed to illuminate the object in a 4f optical system. The position and orientation of each fiber is calculated with respect to the central fiber in the array. The positions and orientations of the fibers are related to the shift of the object spectrum at the aperture plane. During the imaging process each fiber is operated once in the whole procedure to illuminate the input object transparency, which shifts the object spectrum in the spectral domain. This shift of the spectrum is equal to an integral multiple of the passband aperture width. During the operation of a single fiber (ON-state) all other fibers are in the OFF-state. The hologram recorded for each fiber at the CCD plane is stored in computer memory. At the end of the illumination process a total of 25 holograms have been recorded with the whole fiber array, and by applying post-processing and a specific algorithm a single super-resolved image is obtained. The superresolved image is five times better than the band-limited image. The work is demonstrated using computer simulation only.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Achieving high resolution beyond the classical limit is called super resolution [1]. Different parameters limit the resolution of an imaging system, for example, diffraction at the exit pupil and sampling at the CCD plane where the image is formed. Breaking the diffraction limit of resolution is called optical superresolution [2] and enhancement of resolution by manipulating the pixels of the CCD is referred to as geometric superresolution. The super resolution problem is mainly addressed in situations where detailed information about an object is required, for example, in medical imaging using super resolution microscopy techniques. Over the last few decades many techniques have been implemented to solve the resolution problem. These techniques suggest different solutions of the superresolution problem, like fringe projection [3–5], optical masking [6,7] and grating [8] techniques to solve optical and geometrical super resolution. In interferometric setups [9–11] superresolution is achieved by off-axis illumination to capture the higher-order spatial frequencies of the object. 3D imaging becomes important when phase retrieval is possible. Holographic super resolution is mainly concerned with recording both the amplitude and phase of the imaged object. Initial work addressing holographic imaging [12,13] was published in the early seventies. After 40 years of research on holographic imaging, appropriate solutions to the superresolution problem have not materialized. In [14–17], digital holographic microscopy techniques


were used for superresolution. In these techniques tilted beam illumination produced by VCSELs (vertical cavity surface emitting lasers) was used to access the high frequencies. A synthetic aperture is constructed by combining different holograms using time multiplexing. Some other holographic microscopic techniques [18,19] use Fourier domain reconstruction and Fresnel holograms to achieve superresolution. Work reported in [20] was done in the spatial domain using a lensless Fourier holography technique. The illumination angle was changed by using a prism, which resulted in enhancement of resolution. In our case we are proposing an illumination system in which the illumination angle is controlled by using a custom designed fiber assembly. The chance of error has been minimized by such an assembly and the overlapping in the spectral domain has also been eliminated.

In this paper we propose and implement a technique to solve the optical holographic super resolution problem. The technique is based on mathematical modeling and computer simulation. The 1D and 2D simulation results are given to demonstrate the technique. For experimental verification we have proposed and discussed the setup, but the experiment could not be performed due to the unavailability of equipment. In the paragraph below we describe the 1D assembly of fibers used to illuminate the object.

To describe the exact position of a single mode optical fiber in the experimental setup shown in Fig. 1, some mathematical calculations have been done. α_n is the angle made by the nth fiber with the optical axis of the system and a_n is the corresponding shift of the spectrum at the Fourier plane. It follows from Fig. 1:

\tan \alpha_n = \frac{a_n}{f} \qquad (1)

Fig. 1. Schematic diagram of illumination system for the proposed 1D setup.


or

a_n = f \tan(n\alpha) \qquad (2)

‘tan’ is a nonlinear function, which means that the shift of the spectrum is nonlinear with respect to ‘n’. To limit ourselves to the linear region the shift must be small, and ‘tan’ can then be replaced with some linear function ‘L’. Eq. (2) will take the following form:

a_n = f \, L(n\alpha) \qquad (3)

\alpha = \frac{1}{n} L^{-1}\!\left(\frac{a_n}{f}\right) \qquad (4)

Here all the variables are known except n and α. The numerical aperture of any fiber is equal to

NA = \sin(\theta) \qquad (5)

where the angle ‘θ’ is shown in Fig. 1 and is the same for all the fibers.

For demonstration of the technique in 2D, a 4f optical system was

simulated to record different holograms. These holograms were combined to form a super-resolved image (this will be explained in a later section). The coherent illumination in our system consists of a planar array of fibers generated from a 1×26 coupler, i.e. the coupler has one input fiber and 26 output fibers, out of which 25 form a planar array of fibers and the 26th fiber is used as a reference beam. Only one fiber out of these 25 fibers will be active at one time to illuminate the object, and at the image plane it will interfere with the reference beam from the 26th fiber. The fibers at the fiber plane are placed at different orientations and locations to shift the spectrum at the aperture plane, which is located in the middle of lenses 3 and 4. The location and orientation of the central fiber (the middle one in the array, coinciding with the optic axis of the system) are respectively (0,0) and k̂, where k̂ is a unit vector pointing towards the right along the system axis and î and ĵ are the unit vectors on the array in orthogonal directions. The vector pointing from the middle fiber to the centre of lens 2 is given by:

\vec{r}_f = f \, \hat{k} \qquad (6)

The fibers are placed a distance ‘a’ apart along the x and y-axes, while the position vector of any fiber from the middle or central fiber can be expressed as:

\vec{r} = m a \, \hat{i} + n a \, \hat{j} \qquad (7)

The parameter ‘a’ is equal to one side of the square aperture placed at the Fourier plane (lying in the middle of lenses 3 and 4). This aperture limits the bandwidth of the optical system and is deliberately chosen to make the system a band-limited system, where

m = 0, ±1, ±2, ...; n = 0, ±1, ±2, ...

The location of any fiber can be found by changing the m and n parameters in Eq. (7). The fibers are fixed in such a way that each fiber is directed towards the centre of lens 2. The orientation of each fiber is different from that of its neighboring fiber and can be calculated using Eq. (8):

\vec{r}_{un} = f \, \hat{k} - m a \, \hat{i} - n a \, \hat{j} \qquad (8)

θ is the angle that each fiber makes with the optic axis at the position of the centre of lens 2 and is defined in terms of the corresponding unit vectors as:

\cos(\theta) = \hat{r}_{un} \cdot \hat{r}_f \qquad (9)

The simplified result is:

\theta = \cos^{-1}\!\left[\frac{f}{\sqrt{(m a)^2 + (n a)^2 + f^2}}\right] \qquad (10)

Now the position of the fibers in the array is given by Eq. (7), their pointing direction by Eq. (8), and their orientation angle by Eq. (10). The optical power is distributed such that the intensity at the image plane due to any of the 25 fibers in the array is approximately the same as the intensity from the 26th fiber acting as the reference beam. Only one of the 25 fibers will be in the ON state at one time while the remaining will be in the OFF state; this is done using a precision mechanical arrangement or using optical switches. Light from all the 26 fibers is mutually coherent. A hologram is recorded when one of the 25 fibers illuminates the object and interference occurs at the image plane between the object beam and the reference beam. The details of hologram recording are given in the next section.
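As an aside, the fiber positions and tilt angles of Eqs. (7), (8) and (10) can be tabulated numerically. The following is a minimal NumPy sketch; the values f = 100 mm and a = 2 mm are illustrative placeholders and are not taken from the proposed setup.

import numpy as np

# Illustrative values only (not from the paper): focal length of lens 2 and
# fibre spacing 'a' equal to the side of the square aperture at the Fourier plane.
f = 100.0   # mm
a = 2.0     # mm

for m in range(-2, 3):
    for n in range(-2, 3):
        r = np.array([m * a, n * a, 0.0])        # fibre position in the array plane, Eq. (7)
        r_un = np.array([-m * a, -n * a, f])     # vector from the fibre to the centre of lens 2, Eq. (8)
        r_f = np.array([0.0, 0.0, f])            # vector from the central fibre to lens 2, Eq. (6)
        cos_theta = np.dot(r_un, r_f) / (np.linalg.norm(r_un) * np.linalg.norm(r_f))
        theta = np.degrees(np.arccos(cos_theta))  # tilt angle of this fibre, Eq. (10)
        print(f"m={m:+d} n={n:+d}  position=({r[0]:+.1f}, {r[1]:+.1f}) mm  tilt={theta:5.2f} deg")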


2. Recording of holograms

The object is illuminated sequentially, one by one, by the individual fibers of the array. This process illuminates the object by a set of plane waves inclined at different angles given by Eq. (10). The fibers are arranged in such a way that each time a different fiber is selected, a different portion of the spectrum passes through the passband aperture at the Fourier transform (FT) plane. Moreover, the arrangement of fibers makes it possible to select the full spectrum of the object, though in different segments. Lens 4 takes the inverse Fourier transform of the segment of the spectrum and forms an image at the image plane. This image interferes with the reference beam from the 26th fiber and a hologram is recorded at the image plane on a digital device like a CCD. Holograms of images corresponding to different segments of the object spectrum are recorded. The illumination by individual fibers may be achieved either by mechanical arrangements or by using optical switches after the coupler. To improve the resolution of the digital holograms, a set of four holograms corresponding to illumination by one of the fibers can be recorded by changing the phase of the reference beam in four steps differing from each other by π/2. The phase stepping of the reference beam can be performed according to the method discussed in [23]. After recording the digital holograms, the next step is to combine them. The Fourier transform of each hologram is performed on a computer and the segment of the full spectrum is copied from each. These are then combined to form the full spectrum, which is inverse Fourier transformed to obtain the high resolution image. The mathematical modeling behind the recording and processing of data is given in the next section.
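For the optional four-step recording mentioned above, the desired cross term M_m R^* can be recovered from the four phase-stepped intensity holograms with the standard four-step combination. The sketch below (NumPy) assumes reference phase steps of 0, π/2, π and 3π/2 and a unit-amplitude reference; the image field used here is a synthetic placeholder rather than an output of the optical model.

import numpy as np

rng = np.random.default_rng(1)
x = np.arange(256)
Mm = rng.random(256) * np.exp(2j * np.pi * rng.random(256))   # placeholder band-limited image field
R = np.exp(2j * np.pi * 0.25 * x)                             # unit-amplitude tilted reference beam

# Record four holograms with the reference phase stepped by pi/2 (cf. [23]).
steps = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
I = [np.abs(Mm + R * np.exp(1j * d)) ** 2 for d in steps]

# Standard four-step combination recovers the desired cross term Mm * conj(R).
MR = ((I[0] - I[2]) + 1j * (I[1] - I[3])) / 4.0
Mm_recovered = MR * R                     # multiply by R to remove the carrier (|R| = 1)

print(np.allclose(Mm_recovered, Mm))      # True: the complex image field is recovered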

3. Mathematical model

We present the modeling in one dimension (1D), which can be generalized to 2D easily. In 1D, (2m+1) holograms are recorded by shifting the spectrum at the spectrum plane through illumination by different fibers. ‘m’ is the number of fibers on either side of the central fiber. With the central fiber in the ON state, the central part of the spectrum, of width equal to ‘a’, is passed through the aperture at the Fourier plane. When the next fiber is set in the ON state, the next part of the spectrum with width equal to ‘a’ is passed through the aperture at the FT plane and contributes at the image plane. Illumination by different fibers allows the selection of different segments of the spectrum, each of which contributes in the form of an image at the image plane where the hologram is recorded.

A beam of light from the mth fiber illuminates lens 2, which forms a plane wave to illuminate the object. The plane wave has the following form:

f_m(x) = \exp(2\pi i \, m a' x) \qquad (11)

a' = \frac{a}{f \lambda} \qquad (12)

where a is the width of the limiting aperture at the Fourier plane and is also the separation between two consecutive fibers, and f is the focal length of the lens. The reference beam making an angle θ with the optical axis of the imaging system may be expressed mathematically by the following equation:

R = \exp[2\pi i \sin(\theta) \, x] \qquad (13)

The plane wave given by Eq. (11) illuminates the object having g_0(x) as its amplitude transmission coefficient. The field after the object takes the following form:

g_m(x) = g_0(x) \, \exp\!\left(2\pi i \, \frac{m a}{f \lambda} \, x\right) = g_0(x) \, \exp(2\pi i \, m a' x) \qquad (14)

The amplitude point spread function is given by the Fourier transform of the limiting aperture placed at the Fourier plane and is proportional to Sinc(x/a'). The amplitude image at the image plane is given by the following equation:

M_m = \frac{1}{|a'|} \, g_m(x) \otimes \mathrm{Sinc}\!\left(\frac{x}{a'}\right) \qquad (15)

The image recorded at the CCD plane is given by the equation:

I_m = |M_m + R|^2 \qquad (16)

I_m = M_m R^* + M_m^* R + |M_m|^2 + |R|^2 \qquad (17)

The first term of Eq. (17) is the desired term (I_des = M_m R^*) and can be selected. The greater the angle of the reference beam with the optical axis, the greater will be the separation between the terms.

I_{des} = M_m R^* = \frac{1}{|a'|} \left[ g_m(x) \otimes \mathrm{Sinc}\!\left(\frac{x}{a'}\right) \right] \exp[-2\pi i \sin(\theta) \, x] \qquad (18)

The spectrum corresponding to Eq. (18) is:

\tilde{I}_{des} = \left[ G(k - m a') \, \mathrm{Rect}(a' k) \right] \otimes \delta(k + \sin\theta) \qquad (19)

Convolving both sides with δ(k − sin θ + m a'), we obtain

\tilde{I}_{des} \otimes \delta(k - \sin\theta + m a') = \left[ G(k - m a') \, \mathrm{Rect}(a' k) \right] \otimes \delta(k + \sin\theta) \otimes \delta(k - \sin\theta + m a') \qquad (20)

\tilde{I}_{des} = G(k) \sum_m \mathrm{Rect}\!\left[ a'(k + m a') \right] \qquad (21)

Eq. (21) gives the recovered object spectrum. The size of the recovered spectrum will be large for large values of m. For the first three values of m (0, −1, +1), the above equation takes the following form:

\tilde{I}_{des} = G(k) \left\{ \mathrm{Rect}\!\left[a'(k + a')\right] + \mathrm{Rect}(a' k) + \mathrm{Rect}\!\left[a'(k - a')\right] \right\} \qquad (22)

The width of each Rect function is ‘a’ and the Rect functions in Eq. (21) are non-overlapping. The Rect functions in Eq. (21) or in Eq. (22) can be combined into a single Rect function as given below:

\tilde{I}_{des} = G(k) \, \mathrm{Rect}(a' k) \qquad (23)

In Eq. (23), a' = m a when compared with Eq. (21). Eq. (22) gives the synthesized spectrum for three fibers, or equivalently for m values of 0, ±1.

The inverse Fourier transform of Eq. (23) gives the amplitude image, which can be written as:

A_{des} = \frac{1}{|a'|} \, g(x) \otimes \mathrm{Sinc}\!\left(\frac{x}{a'}\right) \qquad (24)

The corresponding synthesized intensity image or the intensity ofthe super resolved image is obtained from Eq. (24) using Eq. (25).

I_{des} = |A_{des}|^2 = \frac{1}{|a'|^2} \left| g(x) \otimes \mathrm{Sinc}\!\left(\frac{x}{a'}\right) \right|^2 \qquad (25)

where a' = m a. Sinc(x/a') is the amplitude point spread function (PSF) of the imaging system. The amplitude PSF becomes sharper as the value of a' is increased or the number of fibers is increased. The greater the number of fibers, the greater will be the sharpening of the PSF and the greater will be the super resolution. This means that super resolution is directly linked with the number of fibers used to illuminate the system.

Fig. 2. Schematic diagram for the proposed 2D super resolution system based on holography.
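To make the 1D model concrete, the sketch below (NumPy) runs the whole chain of Eqs. (11)-(25) for five fibers (m = 0, ±1, ±2): each tilted illumination shifts a different segment of the spectrum into a fixed passband, four phase-stepped holograms are recorded, the complex field is recovered, and the segments are shifted back and stitched. The object length (320 samples), passband width (64 samples) and reference carrier are arbitrary illustrative choices, not values from the paper.

import numpy as np

# Illustrative discrete parameters (not from the paper): a 320-sample object and a
# passband of 64 frequency samples, so five fibres (m = 0, +/-1, +/-2) tile the spectrum.
N, W = 320, 64
m_vals = (-2, -1, 0, 1, 2)

rng = np.random.default_rng(0)
g0 = rng.random(N)                                   # random 1-D object, as in Fig. 3(a)
x = np.arange(N)
k = np.arange(N) - N // 2                            # centred frequency index
passband = ((k >= -W // 2) & (k < W // 2)).astype(float)   # fixed aperture Rect of width W
R = np.exp(2j * np.pi * 0.3 * x)                     # unit-amplitude tilted reference beam

G = np.fft.fftshift(np.fft.fft(g0))                  # full object spectrum
synth = np.zeros(N, dtype=complex)                   # synthesized spectrum, Eq. (21)

for m in m_vals:
    # Illumination by the m-th fibre shifts the spectrum by m*W samples, so a
    # different segment falls inside the fixed passband (Eqs. (11)-(15)).
    Gm = np.roll(G, -m * W)
    Mm = np.fft.ifft(np.fft.ifftshift(Gm * passband))          # band-limited image field

    # Four phase-stepped holograms and recovery of the cross term Mm*conj(R).
    I = [np.abs(Mm + R * np.exp(1j * d)) ** 2
         for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    Mm_rec = (((I[0] - I[2]) + 1j * (I[1] - I[3])) / 4.0) * R  # equals Mm, since |R| = 1

    # Shift the recovered segment back to its true position and stitch, Eqs. (19)-(23).
    seg = np.fft.fftshift(np.fft.fft(Mm_rec)) * passband
    synth += np.roll(seg, m * W)

g_sr = np.fft.ifft(np.fft.ifftshift(synth))          # super-resolved image, Eq. (24)
g_bl = np.fft.ifft(np.fft.ifftshift(G * passband))   # band-limited image (m = 0 only)
print("max error, super-resolved:", np.max(np.abs(g_sr.real - g0)))
print("max error, band-limited  :", np.max(np.abs(g_bl.real - g0)))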

4. Simulation results

The proposed technique has been verified using one and two dimensional simulations. The one dimensional results are shown in Figs. 3 and 4. Fig. 3(a) shows the original input object generated using random numbers between 0 and 1. The spectrum corresponding to the original object is shown in Fig. 3(d). The object (Fig. 3(a)) was illuminated one by one using 5 fibers corresponding to m = 0, ±1, ±2. When illuminated with the fiber corresponding to m = −2, an image hologram was recorded which was Fourier transformed and then the desired spectrum was filtered. The desired spectrum corresponding to m = −2 is shown in Fig. 4(a). The object (Fig. 3(a)) was then illuminated with the fiber corresponding to m = −1 and an image hologram was recorded which was Fourier transformed and filtered to obtain the desired spectrum, which is shown in Fig. 4(b). The object was then illuminated with the central fiber corresponding to m = 0 and the corresponding image hologram recorded. This was Fourier transformed and the desired spectrum was obtained, which is shown in Fig. 4(c). The spectrum in Fig. 4(c) is the band limited spectrum corresponding to m = 0. The band limited image corresponding to this spectrum is shown in Fig. 3(c). Similarly, the object was illuminated with the fibers corresponding to m = +1 and m = +2 and the corresponding spectra are shown in Fig. 4(d) and (e) respectively. The band limited spectra in Fig. 4(a) to Fig. 4(e) correspond to m values of −2, −1, 0, +1, +2 respectively. These five band limited spectra are combined to formulate the resultant spectrum, which is shown in Fig. 4(f). The spectrum in Fig. 4(f) is the synthesized spectrum using 5 fibers as illuminating sources. The band limited spectrum in Fig. 4(c) corresponds to a single fiber (m = 0). The spectrum in Fig. 4(f) was inverse Fourier transformed to obtain the synthesized image, which is shown in Fig. 3(b). As far as the simulation is concerned, the image in Fig. 3(b) is identical to the original object shown in Fig. 3(a). For the sake of comparison, the band limited image is also shown in Fig. 3(c). The spectra corresponding to the original object, the synthesized image and the band limited image are shown in Fig. 3(d), (e) and (f) respectively.

Fig. 3. Input, super resolved and band limited images in (a), (b) and (c) respectively, with their spectra in (d), (e) and (f) respectively.

Fig. 4. Construction of the synthesized spectrum in (f) by joining the individual spectra shown in (a) to (e), corresponding to m = −2 to m = +2 respectively.

We have presented the 1D simulation case using 5 fibers with corresponding values of m = 0, ±1, ±2. For a more complex object with an extended spectrum, a greater number of fibers can be employed to recover the full spectrum, provided the bandwidth-controlling aperture remains fixed in dimensions. Next we present the simulation with a two dimensional object.

In the 2D simulation we selected Lena's image as our original object, which is shown in Fig. 5(a) along with its spectrum in Fig. 5(b). We placed the original object at the object plane of the 4f system shown in Fig. 2. We deliberately made our system band limited using a square aperture such that we recover the full spectrum with 25 fibers (5 rows and 5 columns). Using the uppermost array of fibers, we illuminated the object one by one, captured the corresponding image holograms, inverse Fourier transformed these holograms, filtered the desired spectra and finally combined these spectra. The combination of these five spectra is shown in Fig. 6(a).

Fig. 5. Original object for 2D simulation in (a) with corresponding spectrum in (b).

In the next step, we used the next lower horizontal array of fibers. The object was illuminated by the individual fibers, image holograms were recorded, the holograms were Fourier transformed, the desired spectra were filtered out and finally these spectra were joined to obtain the resultant spectrum, which is shown in Fig. 6(b). In the third step, the third row of fibers was used to illuminate the object and the spectrum was synthesized as described above, which in this case is shown in Fig. 6(c). In the fourth and fifth steps, the fourth and fifth fiber rows were selected for illumination and the corresponding synthesized spectra were obtained, which are shown in Fig. 6(d) and (e) respectively.

Fig. 6. Reconstruction of the synthesized spectrum in (f) by joining the spectra shown in (a) to (e), using an array of 25 fibers (5 fibers in each row). (a): formation of the spectrum due to the fibers in the first row from the top; (b–e): formation of the spectra corresponding to the fibers in the 2nd, 3rd, 4th and 5th rows respectively.

The synthesized spectra shown in Fig. 6(a) to (e) were combined to obtain the original spectrum shown in Fig. 6(f), which we may call the full synthesized spectrum. This spectrum was inverse Fourier transformed to obtain the synthesized image, which is shown in Fig. 7(b). For comparison, the original object and the band limited image are shown in Fig. 7(a) and Fig. 7(c) respectively. The spectra corresponding to the original, synthesized and band limited images are shown in Fig. 7(d), (e) and (f) respectively. As far as the simulation is concerned, the spectra and the corresponding images were found to be identical.
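The 2D synthesis described above amounts to tiling the object spectrum with 25 square segments, one per fiber. A compact NumPy sketch of this stitching step is given below; hologram recording and field recovery per fiber are identical to the 1D sketch and are omitted here, the segments being taken directly from the object spectrum, and a random test object simply stands in for the Lena image.

import numpy as np

# 2-D version of the spectrum stitching: 25 fibres (5 x 5), each contributing one
# W x W tile of the spectrum.
N, W = 320, 64
rng = np.random.default_rng(0)
obj = rng.random((N, N))                               # stand-in for the Lena object
passband = np.zeros((N, N))
c = N // 2
passband[c - W // 2: c + W // 2, c - W // 2: c + W // 2] = 1.0   # square aperture

G = np.fft.fftshift(np.fft.fft2(obj))
synth = np.zeros((N, N), dtype=complex)
for m in range(-2, 3):            # rows of the fibre array
    for n in range(-2, 3):        # columns of the fibre array
        Gmn = np.roll(G, (-m * W, -n * W), axis=(0, 1))          # tilted illumination shifts the spectrum
        tile = Gmn * passband                                    # segment passed by the aperture
        synth += np.roll(tile, (m * W, n * W), axis=(0, 1))      # shift back and stitch

img_sr = np.fft.ifft2(np.fft.ifftshift(synth)).real              # synthesized (super-resolved) image
img_bl = np.fft.ifft2(np.fft.ifftshift(G * passband)).real       # band-limited image
print("max error:", np.max(np.abs(img_sr - obj)))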

To calculate quantitatively the difference between the original spectrum and the reconstructed spectrum, we used the following formula [21]:

SGR = \frac{\| g - I_1 \|^2}{\| g - I \|^2} \qquad (26)

where ‘g’ represents the original object shown in Fig. 7(a), and ‘I_1’ and ‘I’ are the band pass and super resolved images shown in Fig. 7(c) and (b) respectively. As the super resolved image ‘I’ approaches the original image, the right hand side of Eq. (26) approaches ‘∞’. On the other hand, if the superresolved image ‘I’ approaches the band limited image ‘I_1’, then Eq. (26) approaches 1. To avoid the zero in the denominator, Eq. (26) is modified as given below.

SGRN = 1 - \frac{1}{SGR} \qquad (27)

SGRN = \frac{\| g - I_1 \|^2 - \| g - I \|^2}{\| g - I_1 \|^2} \qquad (28)

For perfect super resolution the value of Eq. (28) is equal to 1, for which ‘I’ equals ‘g’. For ‘I’ = ‘I_1’ the equation gives zero.

In the Fourier domain the quantities in Eq. (26) and Eq. (28) are replaced by their Fourier transforms and take the following forms:

SGR = \frac{\| G - \tilde{I}_1 \|^2}{\| G - \tilde{I} \|^2} \qquad (29)

SGRN = \frac{\| G - \tilde{I}_1 \|^2 - \| G - \tilde{I} \|^2}{\| G - \tilde{I}_1 \|^2} \qquad (30)

The super resolution gain was calculated using Eq. (30) and was found to be equal to 0.998. A value of unity represents perfect superresolution while zero means no resolution improvement. For the 1D simulation the superresolution gain was found to be 1.
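The gain measure is straightforward to compute; a small sketch consistent with Eq. (28) (and, applied to spectra, with Eq. (30)) is given below, where g, i_bl and i_sr stand for the original, band-limited and super-resolved images as NumPy arrays.

import numpy as np

def sgrn(g, i_bl, i_sr):
    # Normalised super-resolution gain, Eq. (28): 1 for perfect recovery (i_sr == g),
    # 0 when the super-resolved image is no better than the band-limited one (i_sr == i_bl).
    ref = np.linalg.norm(g - i_bl) ** 2
    return (ref - np.linalg.norm(g - i_sr) ** 2) / ref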

In the following paragraphs we present a comparison of the proposed technique with the technique presented in [22]. In our system, illumination is based on an array of fibers. Each fiber is directed towards lens 2, which generates plane waves illuminating the input object. The object is illuminated with a set of plane waves (with the use of inclined point sources). In the referred paper [22], illumination is based on an array of lenses. Each lens illuminates only a part of the object. Illumination from each lens is not inclined and appears as shifted point sources at aperture A. In our system, the full object area is illuminated by all the point sources, whereas in the referred paper, each time a

Fig. 7. Original, synthesized and band limited images in (a), (b) and (c) respectively with corresponding spectra shown in (d), (e) and (f) respectively.


different lens is used, a different part of the object is illuminated. Our system deals with parts of the spectrum of the original object and gathers information about the spectral parts in the form of images, which are later joined to construct the full spectrum and then the full image. The referred paper deals with parts of the original object and gathers information in the form of parts of the object which are joined together to form the full image. The field of view of the proposed system is larger but the resolution is smaller; a larger object can be imaged with our system. The field of view of the referred system is smaller but the resolution is larger; the system in the referred paper is feasible only with smaller size objects. The proposed system uses a single-phase reference beam. The phase of the reference beam can be changed by putting a phase modulator in the path of the reference beam, by wrapping some turns of the fiber around a piezo cylinder [23]. In this way we can improve the resolution of our holograms. The referred paper uses two half-wave plates and one quarter wave plate to change the phase of the reference beam with respect to the object beam; four images are recorded corresponding to one part of the object and better resolution is achieved. The proposed system makes optimum use of the light intensity and each fiber illuminates the object completely. In the referred paper, to better utilize the array of lenses, the object must be placed well away from the aperture A. The loss of light in that system is much higher and it is relatively inefficient.

We set some parameters of both systems identical for the sake of comparison. The object size was chosen as 16 mm, the numerical aperture of the fibers and the lenses was chosen as NA = 0.08, and the point sources were located at a distance of 100 mm from the object. It was found that in our system all the fibers illuminate the object completely (due to the inclined fibers) whereas in the referred paper part of

the object is illuminated. With these parameters, only the two lenses about the central lens can illuminate the object and the contribution from the other lenses in the array is zero. So only two lenses in the array will illuminate the object, which is a defect of this system in comparison with ours, as in our system all the fibers can illuminate the object completely.

There are many similarities between the two systems. Both use holography-based imaging techniques and use mechanical apertures to select the point sources. Both systems record and gather a large number of images and process them to construct the super-resolved image.

5. Conclusion

A holography-based super resolution system has been presented using an illumination system based on an array of fibers. The array of fibers helps to illuminate the object with plane waves at different inclinations. The resolution of the system is directly linked with the number of fibers in the array. The 4f system was deliberately made band limited and band limited image holograms were recorded. If there are n fibers in the array then n holograms are recorded, and n times super resolution is achieved if the fiber array is 1D and \sqrt{n} times if the fiber array is two dimensional. The holograms are transformed into the spectral domain and the desired portion of each is filtered. The filtered spectra are combined to form the resultant spectrum, which is inverse Fourier transformed to obtain the resultant super resolution image. The simulation in 1D and 2D revealed that the proposed holographic super resolution system worked well as far as the simulation is concerned. We have also proposed an experimental set up that can


be used to demonstrate the technique. The necessary mathematical modeling has been given to explain the simulation and imaging steps. The super resolution gains in the spectral domain for the 1D and 2D cases have been presented and found to be near unity. This confirms the quality of the super resolution images. A comparison of this technique with one of the latest papers has also been given.

References

[1] Z. Zalevsky, D. Mendlovic, Optical Super Resolution, Springer, 2002.
[2] T.R.M. Sales, G.M. Morris, Optics Letters 22 (9) (1997) 582.
[3] A.A. Mudassar, A. Hussain, Applied Optics 49 (2010) 3434.
[4] A. Mudassar, A.R. Harvey, A.H. Greenaway, J.D.C. Jones, Chinese Optics Letters 4 (3) (2006) 148.
[5] A. Mudassar, A.R. Harvey, A.H. Greenaway, J. Jones, Journal of Physics: Conference Series 15 (2005) 290.
[6] I. Leizerson, S.G. Lipson, V. Sarafis, Micron 34 (2003) 301.
[7] V. Mico, Z. Zalevsky, P. Garcia Martinez, J. Garcia, Journal of the Optical Society of America A 23 (2006) 3162.
[8] M. Paturzo, F. Merola, S. Grilli, S. De Nicola, A. Finizio, P. Ferraro, Optics Express 16 (2008) 17107.
[9] V. Mico, Z. Zalevsky, P. García-Martinez, J. García, Optics Express 12 (2004) 2589.
[10] C.J. Schwarz, Y. Kuznetsova, S.R.J. Brueck, Optics Letters 28 (2003) 1424.
[11] Y. Kuznetsova, A. Neumann, S.R.J. Brueck, Optics Express 15 (11) (2007) 6651.
[12] M. Ueda, T. Sato, Journal of the Optical Society of America 61 (3) (1971) 418.
[13] M. Ueda, T. Sato, K. Masato, Journal of Modern Optics 20 (1973) 403.
[14] V. Mico, Z. Zalevsky, P. García-Martínez, J. García, Applied Optics 45 (2006) 822.
[15] V. Mico, Z. Zalevsky, J. García, Optics Express 14 (2006) 5168.
[16] V. Mico, Z. Zalevsky, C. Ferreira, J. García, Optics Express 16 (2008) 19260.
[17] V. Mico, Z. Zalevsky, J. García, Optics Communications 281 (2008) 4273.
[18] J.R. Price, P.R. Bingham, C.E. Thomas Jr., Applied Optics 46 (2007) 826.
[19] G. Indebetouw, Y. Tada, J. Rosen, G. Brooker, Applied Optics 46 (2007) 993.
[20] L. Granero, V. Micó, Z. Zalevsky, J. García, Applied Optics 49 (5) (2010) 845.
[21] F.M. Dickey, L.A. Romero, J.M. Delaurentis, A.W. Doerry, IEE Proceedings: Radar, Sonar and Navigation 150 (6) (2003).
[22] A.-H. Phan, J.-H. Park, N. Kim, Japanese Journal of Applied Physics 50 (9) (2011) 092503.
[23] A.J. Moore, R. McBride, J.S. Barton, J.D.C. Jones, Applied Optics 41 (2002) 3348.