Characterization of spontaneous parametric down-converted light
Dana Helen Griffith
Submitted in Partial Fulfillment of the
Prerequisite for Honors in the Wellesley College Department of Physics
under the advisement of Dr. Tracy McAskill and Dr. Jonathan L. Habif
May 2020
© Dana Helen Griffith, 2020
Acknowledgements
After countless hours spent adjusting optical equipment and staring at MATLAB, I am excited to share my research with you all. However, this thesis would not be possible without the support of many, many people.
During the summer of 2019, I began conducting the research that would later become the foundation of my thesis. I would like to acknowledge the generous support of The Sherwood Endowed Fund for Engineering for the Public Good during this time. In addition, I am particularly grateful to Dr. Nithya Arunkumar from the Walsworth group at Harvard University for her assistance.
I would also like to thank all of my advisors for helping me throughout my time at Wellesley. To my thesis advisor, Tracy McAskill: thank you for your (very patient) guidance and for maintaining a sense of structure throughout this very tumultuous process. To my major advisor, Katie Hall: thank you for your life advice and for your eternal optimism. And of course, my research project would not exist without Jonathan Habif and Arun Jagannathan. I would like to thank them for pushing me to challenge myself while believing in my abilities every step of the way.
Finally, I would like to take a moment to acknowledge the support of my family and friends. I am very grateful to my friends for frequently letting me ramble on about my research. Moreover, I would like to thank my family for their eternal love and support. To my parents: thank you for always encouraging me, for reminding me to not sweat the little things, and for sending me pictures of our dogs whenever I ask. Also, thank you for tolerating my science puns. When your research involves photons, sometimes you have to make light of it.
Table of Contents
1 Introduction 1
2 Experimental set-up 5
2.1 Background 5
2.2 Schematics 5

3 Measurement of Spontaneous Parametric Down-Conversion 9
3.1 Introduction 9
3.2 Down-Conversion Inefficiency 9
3.3 Graphical User Interface for Intensity Analysis 11
3.4 Results and Discussion 14

4 Wavelength Tuning the Pump Laser 15
4.1 Introduction 15
4.2 Dependence of 1560 nm Intensity on Pump Laser Wavelength 15
4.3 1560 nm Power and the Pump Laser's Wavelength 18
4.4 Results and Discussion 19

5 Calculated and Measured Power of Down-Converted Light 21
5.1 Introduction 21
5.2 Theoretical Power 21
5.3 Experimentally Measured Power 22
5.4 Results and Discussion 26

6 Second Order Temporal Correlations 27
6.1 Introduction 27
6.2 Data Collection with One Detector 28
6.3 Data Collection with Two Detectors 33
6.4 Results and Discussion 36

7 Conclusions and Future Work 37
7.1 Conclusions 37
7.2 Future Work 39
Bibliography 41
Appendix A: Code for Intensity Analysis GUI 43
A.1 Section 1 43

Appendix B: Code for GUI to Calculate g(2)(τ) with One Detector 52
B.1 Section 1 52

Appendix C: Code for GUI to Calculate g(2)(τ) with Two Detectors 58
C.1 Section 1 58
List of Figures
2.1 Schematic of the experimental set-up to measure the power and intensity of the down-converted light. 6
2.2 Schematic of the experimental set-up to measure second-order temporal correlations with one detector. 7
2.3 Schematic of the experimental set-up to measure second-order temporal correlations with two detectors. 8
3.1 Plot of loss for the longpass and bandpass filters as a function of power input. At lower input powers, the measured output was unusually high due to detector noise. Filter attenuation is not actually a function of input power. 10
3.2 1560 nm down-converted light imaged with an InGaAs camera, 1310 nm lens, and lens tube. Figure 3.2(b) has been cropped and its contrast has been digitally increased for better visibility. 11
3.3 Interface of MATLAB image analysis GUI. 13
4.1 1560 nm down-converted intensity as a function of pump laser wavelength. The 1D intensity profiles are plotted over the x-axis and y-axis of the SWIR InGaAs camera. The intensity has been normalized to the power of the pump laser as measured at the crystal output. 17
4.2 Down-conversion efficiency as a function of pump laser wavelength. The 1560 nm down-converted light's power has been normalized to the power of the pump laser as measured at the crystal output. 18
5.1 Maximum intensity and beam waist of 1560 nm light as calculated from InGaAs camera images. 23
5.2 1560 nm power as a function of pump laser power measured at the crystal's output. Calculated from the maximum intensity and beam waist of the 1560 nm beam. 24
5.3 A comparison of the 1560 nm power as calculated using the beam waist and maximum intensity and power as measured with the femtowatt amplified photodetector. 25
6.1 Theoretical g(2)(τ) for thermal and coherent light. 31
6.2 g(2)(τ) for thermal and coherent light. The second-order temporal correlations were measured with one detector and calculated with the one-detector g(2)(τ) GUI. 32
6.3 g(2)(τ) for thermal and coherent light. The second-order temporal correlations were measured with two detectors and calculated with the two-detector g(2)(τ) GUI. 34
6.4 Comparison of g(2)(τ) calculated for thermal light using the one-detector and two-detector g(2)(τ) GUIs. 35
Glossary of Terms
beam waist The width of a laser beam at its narrowest point.
correlations Correlations measure the strength of the relationship between different systems' states. The states of highly correlated systems are heavily influenced by each other.
optical field Optical fields are made of light waves. These waves may be produced by sources such as lasers.
periodically poled Periodically poled nonlinear crystals are pulsed with a bias voltage with some period. The wavelengths of the light produced by the nonlinear crystal are determined by the period [1].
quantum superposition A particle in a quantum superposition of states is effectively in both states at once. The well-known story about Schrödinger's cat being both dead and alive is an example of quantum superposition.
second-order temporal correlations Second-order temporal correlations (g(2)(τ)) describe the differences in photon arrival times: g(2)(τ) gives the likelihood of measuring another photon some time τ after an initial photon is detected.
spontaneous parametric down-conversion A nonlinear crystal is pumped with laser light such that one pump laser photon may be converted into a pair of photons. These processes all obey the laws of conservation of momentum and conservation of energy. Also known as SPDC.
Chapter 1
Introduction
The twentieth century saw many of the most important discoveries in physics. In 1905,
Albert Einstein published his groundbreaking papers on the photoelectric effect
and special relativity. Marie Curie's pioneering research on radioactivity led to
her and her husband, Pierre Curie, sharing the Nobel Prize in Physics in 1903.
In 1978, Arno Allan Penzias and Robert Woodrow Wilson won the Nobel Prize in
Physics for the discovery of the cosmic microwave background radiation. [2]
However, some of the "spookiest" work of the twentieth century focused on quantum
entanglement.
Quantum entanglement is one of the most important concepts discovered in the
past few centuries. Entangled particles are created in a way such that they share a
quantum state. Since they share a quantum state, measuring one particle will affect
the state of the whole system. Until one of the particles is measured, the particles will
remain indistinguishable from each other. A particle in an indistinguishable state is
formally referred to as being in a quantum superposition of states. When a particle
is in a quantum superposition of states, it is effectively in both states at once. The
well-known story about Schrödinger's cat being both dead and alive is an example of
quantum superposition.
Since the entangled particles are created together, they are both governed by laws
such as the conservation of energy and the conservation of momentum. By these laws,
one knows the total energy and momentum of both particles. However, the energy
and momentum of each individual particle remain uncertain.
Since they are in a quantum superposition, an observer would not know the exact
state of each particle. If an observer performed a measurement on the energy of the
first particle, then the law of conservation of energy would allow that observer to
instantly know what the energy of the second particle is. The measurement would
distinguish the particles from each other, thereby breaking their entanglement and
superposition.
The mystery of quantum entanglement lies in one’s ability to accurately determine
the states of two particles with a single measurement. Let us engage in a thought
experiment: imagine that someone separated two entangled particles and then mea-
sured the angular momentum of each one. In addition, let us assume that the distance
between the particles is large enough that information traveling at the speed of light
could not be transmitted between the two measurements. Even in this scenario, the
states of the entangled particles were linked before the measurements took place.
Upon measuring the angular momentum of one particle, the observer would instantly
know the state of the other particle. Indeed, a measurement of that particle would
reveal it to be in that expected state. However, it was impossible for information to
be transmitted between the two particles in the time between the measurements. So,
what happened?
According to modern physics, this thought experiment demonstrates the beauty of
quantum entanglement. However, physicists in the twentieth century were not as fond
of quantum entanglement. Originally, the notable physicists Albert Einstein, Boris
Podolsky, and Nathan Rosen dismissed quantum entanglement and argued that the
quantum mechanical description of physical reality was incomplete. They claimed that entanglement
and superposition could be understood if physicists were able to decode the so-called
hidden variables that they believed governed the universe. [3] According to their the-
ory, which was later designated the hidden variable theory, yet-undiscovered hidden
variables dictated the final state of a particle that was in quantum superposition.
Einstein, Podolsky, and Rosen’s hidden variable theory persisted until 1964 when
John Bell developed a test to investigate whether the behavior of particles could be
predicted via quantum mechanics or classical mechanics. Quantum mechanical the-
ory suggested that a pair of entangled particles would be highly correlated because
their states are inherently connected. On the other hand, classical mechanics assumes
that all particles are in a definite state and can be described by some hidden vari-
able. Therefore, the concept of quantum entanglement was not supported by classical
theory. If the particles followed classical theory, their states would not be inherently
linked and their states would be less correlated than quantum theory would predict.
Moreover, correlations are a measure of the extent to which particles' behavior is
influenced by each other. With quantum correlations, particles' behavior can be highly
linked due to phenomena such as quantum entanglement. However, classical mechan-
ics does not account for phenomena such as entanglement that would lead to two
particles’ states being so closely linked.
The Bell test demonstrated that the correlation of quantum entangled particles
exceeds the classical limit. [4] Bell laid the groundwork for a series of tests which
proved that the highly correlated behavior of quantum entanglement cannot be fully
predicted by classical theory. [5]
Since the development of the Bell test, the field of quantum mechanics has flour-
ished. Quantum entangled particles have been used for applications including quan-
tum communications [6] and quantum information [7]. Many quantum computers use
entangled photons to make quantum bits, or qubits. [8]
Certain materials studied within the field of nonlinear optics are capable of pro-
ducing entangled photons. Nonlinear optics is primarily concerned with materials
that respond nonlinearly to the application of an optical field. [9] Nonlinear optical
processes such as spontaneous four-wave mixing (SFWM) and spontaneous paramet-
ric down-conversion (SPDC) produce entangled photons. [10] SFWM involves four
waves, two of which pump a nonlinear crystal with light in order to produce the re-
maining waves. In SPDC, a nonlinear crystal is pumped with laser light such that
one pump laser photon may be converted into a pair of photons. These processes
all obey the laws of conservation of momentum and conservation of energy. [9] In
my experiment, we used SPDC as our source of photon pairs. Since these pairs were
produced via the SPDC process, they are commonly referred to as down-converted
light.
Typically, nonlinear-mode optical detectors are used to characterize sources of
down-converted light. Nonlinear-mode detectors, such as single photon detectors, are
usually only able to detect light at very low intensities. They are also unable to
gather information about the light’s intensity with a single measurement. However,
my experiment used linear-mode optical detectors to characterize our SPDC source.
Linear-mode detectors are notable for an output that linearly scales with the amount
of light detected, thereby allowing them to detect the relative amount of incoming
light. Moreover, linear-mode optical detectors have a higher dynamic range and are
more cost-effective than traditional nonlinear-mode detectors.
The goal of my experiment was to characterize the nonlinear optical process of
spontaneous parametric down-conversion. This thesis will begin with an explanation
of the experimental set-up in Chapter 2. Next, we will discuss the first measure-
ments in Chapter 3 and detail how I built an intensity analysis GUI to combat the
inefficiency of the down-conversion process. Afterwards, Chapter 4 will cover wave-
length tuning the pump laser in order to verify that we were indeed measuring the
down-converted light. Chapter 5 will detail the process of calculating the theoreti-
cal power of the light produced by our SPDC source and cross-checking it against
the experimentally measured power. Finally, Chapter 6 will discuss the calculations
of the second-order temporal correlations (g(2)(τ)) for thermal, coherent, and non-
classical down-converted light. We will conclude this thesis with a presentation of
our conclusions and future work in Chapter 7.
Chapter 2
Experimental set-up
2.1 Background
Typically, the process of spontaneous parametric down-conversion involves pumping
a nonlinear crystal with a pump laser. [9] In my experiment, our pump laser was a
780 nm Toptica DL Pro diode laser. We converted 780 nm pump photons into 1560
nm down-converted photon pairs.
Our nonlinear crystal, fabricated by AdvR (Bozeman, MT), is a periodically
poled potassium titanyl phosphate (ppKTP) waveguide. AdvR creates custom
crystals, so we originally only had a rough idea of our crystal's properties. An AdvR
representative estimated the overall coupling transmission to be roughly 25 percent, from
50 percent loss at the input and another 50 percent at the output. [11] We measured the
total loss to be closer to 20 percent.
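The 25 percent figure follows from compounding the two quoted 50 percent per-facet losses, since half the light survives each facet. A minimal sketch of that arithmetic (illustrative only; the thesis's own analysis code is in MATLAB):

```python
def total_transmission(*facet_losses: float) -> float:
    """Multiply out per-facet fractional losses into an overall power transmission."""
    t = 1.0
    for loss in facet_losses:
        t *= 1.0 - loss
    return t

# Two facets at 50 percent loss each leave roughly 25 percent of the light,
# matching the representative's estimate.
print(total_transmission(0.5, 0.5))  # 0.25
```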
2.2 Schematics
We used two main experimental set-ups for the multiple phases of our experiment.
The first phases involved measuring the down-converted light with a shortwave in-
frared (SWIR) Hamamatsu Indium Gallium Arsenide (InGaAs) camera and a fem-
towatt amplified photodetector.
SPDC is an inefficient process, so the crystal output emits far more pump light
than down-converted light. First, we coupled the light from the crystal’s output
from an optical fiber into a 1560 nm collimator so that we could work with our
light in free space. To remove the excess pump light, we inserted bandpass and longpass
filters that transmit only the down-converted light. They have a combined attenuation of
approximately 100 dB (a factor of 10^-10) for wavelengths outside the range of 1535 nm
to 1565 nm.
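As a sanity check on the dB arithmetic, an attenuation quoted in decibels converts to a linear power transmission factor as T = 10^(-dB/10); a short sketch:

```python
def db_to_transmission(attenuation_db: float) -> float:
    """Convert a power attenuation in dB to a linear transmission factor."""
    return 10.0 ** (-attenuation_db / 10.0)

# The filters' combined 100 dB attenuation corresponds to a factor of 10^-10.
print(db_to_transmission(100.0))
```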
As depicted in Figure 2.1, after filtering out the excess pump light, the down-
converted light reached a gold flipper mirror. The mirror could be turned such that the
light was detected by the InGaAs camera or the femtowatt amplified photodetector.
From there, we could image the light with the camera or measure its power with the
photodetector.
Figure 2.1: Schematic of the experimental set-up to measure the power and intensity of the down-converted light.
For the last phase of my experiment, we measured the second-order temporal cor-
relations. We conducted these measurements with two experimental configurations
that used one and two detectors, respectively.
Figure 2.2 depicts the experimental set-up for measuring the second-order temporal
correlations with one detector. First, light from our diode laser was coupled into a
fiber and passed through a variable fiber attenuator so that we could control the
light’s power. Next, a Perkin Elmer single photon detector (SPD) measured the
incident light. The SPD was connected to a Hewlett-Packard universal counter, which
registered a click from the SPD every time it detected a photon. The counter was
connected to a Tektronix multi-domain oscilloscope along with a function generator
to trigger the oscilloscope. Finally, I used an instrument control program written by
Phoebe Amory and Sam Gartenstein to export an oscilloscope trace to our computer.
[12]
Figure 2.2: Schematic of the experimental set-up to measure second-order temporal correlations with one detector.
When a second detector was added, the experimental set-up only changed slightly
as shown in Figure 2.3. The second detector was connected to another channel on
the same oscilloscope. However, we did modify our code so that the oscilloscope
would take a single trace and then separately export data from each detector to the
computer.
Figure 2.3: Schematic of the experimental set-up to measure second-order temporal correlations with two detectors.
Chapter 3
Measurement of Spontaneous Parametric Down-Conversion
3.1 Introduction
Spontaneous parametric down-conversion is an extremely inefficient process. In a
periodically poled KTP waveguide, previous groups measured a photon pair genera-
tion rate of 2.9 × 10^6 pairs/sec of down-converted light produced for every 1 mW of
pump laser light. [13] As a result, detectors tend to get flooded with pump laser light
instead of measuring the down-converted light.
The first phase of my project had two primary goals. Our first goal was to account
for the inefficiency of the down-conversion process when taking measurements. The
second goal was to create a graphical user interface to analyze the intensity of the
down-converted light.
3.2 Down-Conversion Inefficiency
The inefficiency of the down-conversion process makes it difficult to measure down-
converted light. Pump laser light measured at the crystal’s output is orders of mag-
nitude stronger than down-converted light. In addition, we measured roughly a 20
percent loss of pump laser light from coupling into and out of the crystal waveguide.
To measure the down-converted light, we introduced two filters to our set-up to
remove the excess pump light. The Newport bandpass and longpass filters only
transmit light from 1535 nm to 1565 nm. [14] As seen in Figure 3.1, the filters’
combined attenuation was measured to be approximately 100 dB, or a factor of 10^-10. Our laser
operates at 780 nm and would be almost completely blocked by the filters, but the
1560 nm down-converted light would be allowed to pass.
Figure 3.1: Plot of loss for the longpass and bandpass filters as a function of power input. At lower input powers, the measured output was unusually high due to detector noise. Filter attenuation is not actually a function of input power.
Even after attenuating the pump light, our Indium Gallium Arsenide (InGaAs)
shortwave infrared (SWIR) camera was not sensitive enough to detect the down-
converted light amongst the ambient light in the room. We added a lens tube and
a 1310 nm lens to block ambient light and to focus the down-converted light onto
the camera’s lens. Although the lens was intended for a different wavelength, we
found that it still focused the 1560 nm down-converted light enough for an image
to be taken. Figure 3.2 displays an image of the 1560 nm beam after removing the
background noise. The beam profile is faint, but it is still distinguishable. Upon
plotting the 1D intensity over the x and y axes of the image as seen in Figure 3.3,
we observed a Gaussian beam profile. The Gaussian beam profile suggested that we
were indeed measuring the down-converted light and not just ambient light.
(a) (b)
Figure 3.2: 1560 nm down-converted light imaged with an InGaAs camera, 1310 nm lens, and lens tube. Figure 3.2(b) has been cropped and its contrast has been digitally increased for better visibility.
3.3 Graphical User Interface for Intensity Analysis
With the excess pump laser light removed, we could begin to characterize the down-
converted light. We imaged a beam of the 1560 nm light with an InGaAs SWIR
camera. The InGaAs camera is a linear mode detector. While single photon detectors
only detect the presence of photons, linear mode detectors’ measurements linearly
correspond to the amount of light detected. [15] Similar to a traditional camera,
images taken with the InGaAs camera record the intensity of the incoming light.
I built a Graphical User Interface, or GUI, in MATLAB to extract intensity in-
formation from beam images. My image analysis GUI characterizes the intensity
profiles of weak Gaussian optical beams imaged using a camera. The images are
post-processed to remove background noise and extract 1D intensity profiles, which
are then automatically fitted with a Gaussian curve.¹

¹For more information about the image analysis GUI, please see Appendix A.1.
The image analysis GUI completes a number of steps in order to extract intensity
information from the camera images. The steps are enumerated below.
1. Ask the user for a number of image files for the laser beam and for the back-
ground noise.
(a) The image file extension is typically a .bmp. However, the GUI could
theoretically use any extension as long as MATLAB can load the file as an
image.
2. Average the beam files and the background files and subtract the averaged
background from the averaged beam.
(a) Despite the lens tube and other preventative measures, the camera still
detects a certain amount of noise in each image. The beam and back-
ground images are subtracted in order to reduce the amount of noise in
the resulting image.
3. Sum over the subtracted image’s intensity in the x-direction and in the y-
direction.
4. Create a 1D plot of the resulting subtracted beam intensity over the x-axis and
y-axis.
5. Show an image of the subtracted beam.
6. If necessary, adjust the contrast of the displayed subtracted beam image.
(a) The photo of the resulting beam image displays where the beam is located
on the camera’s array. Since we are working at fairly low levels of light, it
can be difficult to see the beam on the raw camera images. Showing the
beam image and increasing the contrast helps with maximizing the beam’s
alignment.
7. Calculate the beam waist and visibility. [16]
8. Plot a Gaussian fit of the subtracted beam intensity.
(a) In its fitting function, the GUI fits two Gaussian peaks to each 1D intensity
plot as seen in Figure 3.3. One is fitted to the leftover noise, which is
treated as a Gaussian with a very wide beam width, and the other is fitted
to the actual Gaussian intensity curve. The fit treats the leftover noise as
a DC offset and allows for a better fit of the 1D intensity plot.
9. If needed, adjust the Gaussian fit.
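The GUI itself is written in MATLAB (see Appendix A.1). As an illustration only, steps 2, 3, and 7 can be sketched in Python with NumPy; the moment-based waist estimate below is my own simplification and stands in for the GUI's actual two-Gaussian fitting routine:

```python
import numpy as np

def subtract_background(beam_frames, background_frames):
    """Step 2: average the beam and background frames, then subtract."""
    return np.mean(beam_frames, axis=0) - np.mean(background_frames, axis=0)

def intensity_profiles(image):
    """Step 3: sum the image intensity along each axis to get 1D profiles."""
    return image.sum(axis=0), image.sum(axis=1)  # profile vs x, profile vs y

def center_and_waist(profile):
    """Step 7 (simplified): moment-based center and 1/e^2 waist (w = 2*sigma)."""
    p = np.clip(profile, 0.0, None)              # drop negative residual noise
    xs = np.arange(p.size)
    mean = (xs * p).sum() / p.sum()
    var = ((xs - mean) ** 2 * p).sum() / p.sum()
    return mean, 2.0 * np.sqrt(var)
```

Running these on a synthetic Gaussian spot over a flat background recovers the spot's center and width, which is the behavior the GUI relies on when fitting the real beam images.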
Figure 3.3: Interface of MATLAB image analysis GUI.
By displaying 1D intensity plots of beam images, the GUI visualizes a comparison
of the intensity of the beam and the background noise. The 1D intensity plots make
it easier to locate intensity spikes from noise. Outliers, such as very high intensity
noise pixels, can be manually removed when adjusting the Gaussian fit.
In addition, the InGaAs camera used to image the beam has very little noise and
can therefore detect weak optical signals. When working at low intensities, it is
useful to see how distinguishable the down-converted light is from the background
noise. The image analysis GUI can be used for a wide range of light intensities and
is especially helpful when working at low intensities.
3.4 Results and Discussion
The goal of the first phase of the project was to image the down-converted light.
First, we added a Newport bandpass and longpass filter to remove excess pump laser
light. Combined, the filters attenuated the 780 nm light by around 100 dB. Next,
we used a lens and lens tube to block ambient light and focus the down-converted
light onto the lens of the camera. These preventative measures filtered out the excess
pump laser light and allowed the down-converted light to be imaged.
Once the down-converted light was able to be imaged, my GUI extracted intensity
information from the camera images. After subtracting background noise from the
images, a beam profile was clearly visible. One benefit of my GUI was that even when
a beam was not clearly visible to the human eye, the 1D intensity plots could display
a prominent beam profile.
Given the large amount of attenuation at wavelengths outside of the 1535 to 1565
nm range, the beam profile was most likely the down-converted light. However, at
that point in the experiment, we were unable to definitively conclude that we were
measuring 1560 nm light. The next step was to explore the physical properties of the
crystal to verify that we were measuring down-converted light.
Chapter 4
Wavelength Tuning the Pump Laser
4.1 Introduction
The efficiency of the spontaneous parametric down-conversion process peaks at a par-
ticular pump laser wavelength. We tuned the wavelength of our pump laser in order
to identify the optimal wavelength for the efficiency of our SPDC source. Further-
more, an increase in the measured power and intensity at a particular pump laser
wavelength would confirm that we were indeed measuring down-converted light.
For our SPDC source, we expected the optimal wavelength to be around 780 nm.
[11] However, since we had a custom crystal, we did not know the exact wavelength
at which the efficiency of the SPDC process would be maximized.
4.2 Dependence of 1560 nm Intensity on Pump
Laser Wavelength
We tuned the pump laser from 779 nm to 794 nm and imaged the resulting down-
converted light with a SWIR camera. We did not measure wavelengths below 779
nm because the power of the pump laser decreased at those wavelengths. Intensity
measurements taken at pump wavelengths below 779 nm would need to be normalized
to the pump power in order to be meaningful.
For our wavelength tuned intensity measurements, we did not have a lens on the
camera. 1D intensity profiles were extracted from the beam images and plotted for
the range of wavelengths. The total power of the down-converted light, which is the
integrated area under the 1D intensity profile, was optimized at a particular pump
wavelength. We normalized the intensity of the down-converted light with the power
of the pump laser at the crystal output.
The power of the pump laser changed as a function of its wavelength. We observed
an increase in pump laser power as we tuned the wavelength to above 780 nm. As a
result, we normalized the intensity of the down-converted light in order to counteract
the variations in power. After normalization, we still observed a drop in the intensity
of the 1560 nm light as seen in Figure 4.1. The peak in intensity at 780 nm indicated
that we were still able to filter out the pump light at higher powers.
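The normalization described above is a simple pointwise division of the measured signal by the pump power; a sketch with hypothetical numbers (the wavelengths, counts, and powers below are illustrative, not our measured data):

```python
def normalize_to_pump(signal, pump_power_mw):
    """Divide each down-converted measurement by the pump power at the crystal output."""
    return [s / p for s, p in zip(signal, pump_power_mw)]

# Hypothetical readings: camera counts vs. pump wavelength, with pump power rising.
wavelengths_nm = [779, 780, 781, 782]
counts         = [4.0, 5.2, 4.6, 3.1]
pump_mw        = [1.0, 1.1, 1.2, 1.3]

efficiency = normalize_to_pump(counts, pump_mw)
peak_nm = wavelengths_nm[efficiency.index(max(efficiency))]
print(peak_nm)  # wavelength with the highest pump-normalized intensity
```

Normalizing before locating the peak prevents the rising pump power from masquerading as higher down-conversion efficiency.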
At pump laser wavelengths of 786 nm and higher, we observed a strong decrease in
down-conversion efficiency as evidenced by the low intensity of the down-converted
light. We also observed a slight decrease in the SPDC source’s efficiency at pump
wavelengths below 780 nm. To further explore efficiencies at pump laser wavelengths
below 780 nm, we tuned the wavelength of the pump laser again and studied the
resulting power of the down-converted light.
Figure 4.1: 1560 nm down-converted intensity as a function of pump laser wavelength. The 1D intensity profiles are plotted over the x-axis and y-axis of the SWIR InGaAs camera. The intensity has been normalized to the power of the pump laser as measured at the crystal output.
4.3 1560 nm Power and the Pump Laser’s Wave-
length
To gain a better understanding of the efficiency of our SPDC source, we tuned the
wavelength of the pump laser over a broader range than we investigated during our
intensity measurements. We tuned the pump laser from 755 nm to 800 nm and
measured the power of the down-converted light with a femtowatt amplified photode-
tector. As usual, we included a 1310 nm lens and lens tube in our photodetector’s
set-up because SPDC light is very low intensity and otherwise difficult to detect. We
normalized the 1560 nm power with the pump laser power in order to account for
changes in the pump power at various wavelengths.
Figure 4.2: Down-conversion efficiency as a function of pump laser wavelength. The 1560 nm down-converted light's power has been normalized to the power of the pump laser as measured at the crystal output.
With the amplified photodetector, we measured a maximum pair production efficiency of around 1.57 × 10^6 pairs/sec/mW of pump laser light, or 3.14 × 10^6 photons/sec/mW of pump laser light. The optical bandwidth of our SPDC source was measured to be 16.4 nm.
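The conversion from a measured optical power to a photon (and pair) rate follows from the single-photon energy E = hc/λ. A sketch, where the 0.4 pW input is a hypothetical value chosen only to be consistent with the scale of the rates quoted above:

```python
PLANCK = 6.62607015e-34      # Planck constant, J*s
LIGHT_SPEED = 2.99792458e8   # speed of light, m/s

def photon_rate(power_watts: float, wavelength_m: float) -> float:
    """Photons per second carried by a monochromatic beam of the given power."""
    photon_energy = PLANCK * LIGHT_SPEED / wavelength_m  # J per photon
    return power_watts / photon_energy

# Roughly 0.4 pW of 1560 nm light per mW of pump corresponds to about
# 3.1e6 photons/sec/mW, i.e. ~1.6e6 pairs/sec/mW, since photons come in pairs.
rate = photon_rate(0.4e-12, 1560e-9)
pairs = rate / 2.0
```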
While measuring SPDC intensity as a function of pump laser wavelength, we pre-
viously measured a local efficiency maximum around 780 nm. When we tuned over
a broader range of wavelengths and measured the power of the SPDC light, we did
indeed observe a local maximum around 780.5 nm as depicted in Figure 4.2.¹ However,
the down-converted light's power was optimized around a pump wavelength of
775 nm.
We observed a sharp decrease in SPDC efficiency outside pump laser wavelengths
ranging from 765 nm to 780 nm. Our bandpass filter only transmits light between
1535-1565 nm, corresponding to pump wavelengths of 770-780 nm. We believe that
the bandpass filter blocked down-converted light outside this range of wavelengths,
which is why we measured a lower down-conversion efficiency at those pump wavelengths.
4.4 Results and Discussion
Measurements of the intensity and power of down-converted light as a function of
pump laser wavelength both indicate a decrease in our source’s efficiency for wave-
lengths far from a central wavelength of around 775 to 780 nm. We expected our
SPDC source’s optimal efficiency to be around a pump laser wavelength of 780 nm,
so our results seem reasonable.
An astute reader may recall that previous groups using a ppKTP crystal waveguide
measured a down-converted pair generation rate of 2.9 · 10⁶ pairs/sec/mW of pump
laser light. [13] Our source's pair production efficiency of 1.57 · 10⁶ pairs/sec/mW of
pump laser light is within a factor of two of that value. However, our power measure-
ments were conducted with a nonlinear InGaAs femtowatt amplified detector. These
¹It is true that the local maxima observed during intensity and power measurements are different by 0.5 nm. However, the wavemeter used to measure the central wavelength of the pump laser has an uncertainty of 1 nm. The discrepancy between the maxima observed during each measurement can be fully explained by the uncertainty in the wavemeter.
results indicate that our experimental set-up and nonlinear detectors are comparable
to the far more expensive linear mode detectors used in other groups.
Our results indicate that we were indeed measuring 1560 nm light, since we observed
an increase in efficiency around a central pump wavelength of 780 nm. From here on,
we can safely assume that we are detecting 1560 nm light. The next step of my experiment was to further
investigate the power of the down-converted light.
Chapter 5
Calculated and Measured Power of Down-Converted Light
5.1 Introduction
Before we could use our SPDC source for more interesting experiments, we studied
the power of the down-converted light more closely. When we tuned the wavelength
of the pump laser, we directly measured the power of the 1560 nm light using a
femtowatt amplified photodetector. However, that is not the only way to calculate
the power of the down-converted light.
The energy per photon can be calculated using the wavelength of the light. Addi-
tionally, the power of the down-converted light could be calculated from the intensity
and waist of the down-converted beam. Now, let us walk through the process of
calculating the power of the down-converted light produced by our SPDC source.
5.2 Theoretical Power
It is a relatively simple matter to calculate the energy of a single photon as long as
one knows the wavelength of that photon. The following equation may look familiar
to anyone who has taken an introductory quantum mechanics course: [17]
E = hc/λ    (5.1)
Our SPDC source produces down-converted light at 1560 nm. As such, we would
expect a single photon of 1560 nm light to have an energy of 1.274 · 10⁻¹⁹ J. Since
power is a rate of energy flow, a single 1560 nm photon arriving each second
corresponds to a power of 1.274 · 10⁻¹⁹ W.
We previously measured our source's pair production efficiency to be 1.57 · 10⁶
pairs/sec/mW of pump laser light. If each photon contributes 1.274 · 10⁻¹⁹ W of
power, then we would expect to measure a total power of 2 · 10⁻¹³ W, or 0.2 pW,
per mW of pump laser light.
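The arithmetic above can be reproduced in a few lines. The following is a quick cross-check, written in Python for convenience (the analysis code in this thesis is otherwise MATLAB), using Equation 5.1 and the pair rate measured in Chapter 4:

```python
# Energy of a single 1560 nm photon via E = hc/lambda (Equation 5.1),
# then the power expected from the measured pair production rate.
h = 6.62607015e-34    # Planck constant (J*s)
c = 2.99792458e8      # speed of light (m/s)
wavelength = 1560e-9  # down-converted wavelength (m)

energy_per_photon = h * c / wavelength  # ~1.274e-19 J
pairs_per_sec_per_mW = 1.57e6           # measured pair production efficiency

# Expected power per mW of pump light: ~2e-13 W, i.e. 0.2 pW.
expected_power = pairs_per_sec_per_mW * energy_per_photon

print(energy_per_photon, expected_power)
```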
5.3 Experimentally Measured Power
The power of the down-converted light can also be calculated from images taken with
the InGaAs SWIR camera. In order to do so, we needed to calculate the maximum
intensity and beam waist of our 1560 nm light.
Luckily, I had already created a GUI to extract information from beam images. Us-
ing my intensity analysis GUI, I calculated the beam waists and maximum intensities
for a variety of different pump laser powers.
My intensity analysis GUI automatically extracted 1D intensity profiles from the
beam images. It then fitted the 1D intensity profiles with a Gaussian curve as seen
in Figure 3.3, which allowed us to obtain the beam waist and maximum intensity
from each set of images. Even after subtracting the background noise, the signals
in many of our images at lower powers were only a few pixels above the noise floor.
The Gaussian fit smoothed over the remaining noise and made it easier to extract the
intensity information.
Figure 5.1: Maximum intensity and beam waist of 1560 nm light as calculated from InGaAs camera images.
However, beam images at lower powers could not always be perfectly fitted by a
Gaussian curve. At high pump attenuations, the signals were so close to the noise
floor that it was difficult to use a Gaussian fit to extract the size of the beam waist.
As a result, the beam waists extracted at low powers all appeared artificially small
and similar in size. At low pump attenuations, the beam waists still varied, but they
increased roughly in proportion to the amount of power pumped into the crystal
waveguide.
The maximum intensity extracted by the intensity analysis GUI was more stable at low powers.
At high pump attenuations, the Gaussian fit smoothed over any residual noise such
that the calculated maximum intensity was fairly unaffected by it. However, at low pump
attenuations, we observed a plateau in the maximum intensities. After much de-
bate, we concluded that the InGaAs camera was saturating at higher intensities and
therefore causing the plateau.
Newport supplied the following equation, which can be used to calculate the power
for Gaussian beams: [18]
P = (πω0²/2) · Imax    (5.2)
In this instance, ω0 is the radius of the beam, also known as the beam waist, and
Imax is the maximum intensity. Using the maximum intensities and beam waists, we
calculated the power of the 1560 nm light using Newport’s power equation.
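Applied directly, Equation 5.2 looks like this (a Python sketch for illustration; the waist and peak-intensity values below are invented, not our measured ones):

```python
import math

def gaussian_beam_power(w0, i_max):
    """Total power of a Gaussian beam via Equation 5.2:
    P = (pi * w0^2 / 2) * Imax, with w0 the beam waist (radius)
    and Imax the peak intensity."""
    return (math.pi * w0 ** 2 / 2) * i_max

# Hypothetical example: a 100 um waist and a 1e-3 W/m^2 peak intensity.
print(gaussian_beam_power(100e-6, 1e-3))
```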
Figure 5.2: 1560 nm power as a function of pump laser power measured at the crystal's output. Calculated from the maximum intensity and beam waist of the 1560 nm beam.
Despite the issues with calculating the beam waist and the saturation of the camera,
the calculated 1560 nm power in Figure 5.2 appeared to be linear. Moreover, the
calculated 1560 nm power matched up fairly well with the experimentally measured power.
Figure 5.3: A comparison of the 1560 nm power as calculated using the beam waist and maximum intensity and the power as measured with the femtowatt amplified photodetector.
Although our calculated and measured powers appear to be in accordance in Figure
5.3, it’s likely that this result is the product of several overlapping issues. The incon-
sistencies in the beam waist and maximum intensity may have combined in Equation
5.2 to calculate a power that is only coincidentally accurate. The experimentally
measured power is a far more trustworthy result.
5.4 Results and Discussion
It is important to understand the power produced by the SPDC source in order to
use it for future experiments. We used several different methods to find the power of
the down-converted light, ranging from computing it directly from intensity information
to calculating the theoretical power. We also directly measured the power of the
down-converted light at different pump laser attenuations using a femtowatt amplified
photodetector.
The most significant results from our investigation of the power are the SPDC
source's efficiency and the discovery that our InGaAs camera was saturating. In
future experiments, we will make sure to only use the camera for lower light levels.
Additionally, since the SPDC source has an efficiency of 0.2 pW of down-converted
light per mW of pump laser light, we will know how much down-converted light is
being measured by our detectors.
Now that we have a functioning source of down-converted light, the next step of my
experiment is to design and construct a set-up to measure second-order correlations
(g(2)(τ)) for different types of light.
Chapter 6
Second Order Temporal Correlations
6.1 Introduction
Measuring the second-order correlations (g(2)(τ)) was the first test of a potential
application for our SPDC source. Each type of light tends to have its own unique
correlation statistics, or the pattern in which photons arrive. Thermal photons, for
example, tend to be "bunched" and arrive in clusters. On the other hand, photons in
the coherent light produced by a laser arrive at random, statistically independent
intervals. We would expect our SPDC source
to produce nonclassical light, whose correlations would likely differ from those of both
thermal and coherent light. Since each type of light can be described by a particular
correlation function, measuring an unknown light source’s correlations can provide
insight into its origins.
Second-order temporal correlations (g(2)(τ)) look at the difference in arrival time
between photons. In particular, the second-order temporal correlation function g(2)(τ)
is a measure of the likelihood of measuring another photon some time τ after an initial
photon is detected.
Another way to think about g(2)(τ) is to picture a stopwatch and a photodetector
with light approaching it. Once the first photon reaches the detector, the stopwatch
measures the time τ that elapses until the detector registers the next photon.
Traditionally, second-order temporal correlations are measured with two detectors.
This way, each detector’s measurement for a particular time can be compared to
that of the other. However, we followed an experimental schematic designed at the
University of Erlangen-Nuremberg to measure g(2)(τ) with only one detector.
6.2 Data Collection with One Detector
The following equation is typically used to calculate second-order temporal correla-
tions. I is the intensity of the light, r1 is the position of the first photodetector, and
r2 is the position of the second photodetector.
g(2)(τ, r1, r2) = ⟨I(r1, t) I(r2, t + τ)⟩ / (⟨I(r1, t)⟩ ⟨I(r2, t)⟩)    (6.1)  [19]
A perceptive reader may notice that g(2)(τ) depends on the positions of both pho-
todetectors and yet we purport to only have used one. Rest assured that there is a
solution to untangle this perplexing observation.
The solution is that the positions of the detectors are only relevant for measuring
spatial correlations. However, we were interested in second-order temporal correla-
tions. As a result, the only relevant variable is the time between the detection of
photons.
For second-order temporal correlations, the previous equation can be reduced into
a slightly more simplified form.
g(2)(τ) = ⟨I(t) I(t + τ)⟩ / ⟨I(t)⟩²    (6.2)  [19]
Notice how g(2)(τ) now depends only on the intensity of the detected light and the
time delay between detection events. However, a clever
reader may realize that making this measurement does not require two photodetec-
tors. Instead, a single array of data from one photodetector may be compared against
itself with the appropriate time delay τ included.
We can even take advantage of certain statistical properties of light to further
reduce the equation into the following form. This equation only holds when the time
delay between detection events is zero. [19]
g(2)(0) = 1 + (Δn² − ⟨n⟩) / ⟨n⟩²    (6.3)
Here, ⟨n⟩ is the mean number of detected photons and Δn² is its variance. Luckily,
both quantities can be easily measured with one detector. We hope that this equation
eases any fears regarding the use of only one detector to measure g(2)(τ).
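To make Equation 6.3 concrete, here is a minimal Python sketch (the real analysis is done in a MATLAB GUI; the count record below is invented for illustration) that evaluates g(2)(0) from a list of binned photon counts:

```python
def g2_zero(counts):
    """g2(0) from binned photon counts via Equation 6.3:
    g2(0) = 1 + (variance - mean) / mean^2,
    using the population variance of the count record."""
    num_bins = len(counts)
    mean = sum(counts) / num_bins
    variance = sum((c - mean) ** 2 for c in counts) / num_bins
    return 1 + (variance - mean) / mean ** 2

# Invented count records; a constant record has zero variance,
# so it gives g2(0) = 1 - 1/mean.
print(g2_zero([3, 5, 4, 6, 4, 5, 3, 4]))
print(g2_zero([4, 4, 4, 4]))
```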
Moreover, one detector is performing the same measurement as two detectors. The
main difference is that the single detector’s measurement is the equivalent of adding
both measurements from the two-detector set-up together. However, since we only
seek to compare the photon statistics to itself, our method to measure g(2)(τ) with
one detector holds.
Following the experimental set-up discussed in Section 2.2, we used a single pho-
todetector to measure the arrival time of the light produced from a thermal and
coherent source.¹
I built a graphical user interface (GUI) to calculate g(2)(τ).² First, the GUI pulled
the photodetector’s data from the oscilloscope and transferred it to a computer. The
photodetector’s raw voltage data displayed detected photons as voltage pulses. In
order to extract the number of detected photons, the GUI automatically counted
a pulse as a photon if the rising edge was greater than a certain baseline. Next,
it sorted the photon counts into ”bins” that all consisted of the same number of
microseconds. From there, the GUI calculated g(2)(τ=0) by sorting the photon time
bins into large groups and extracting the photon statistics for each chunk of time.
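The thresholding and binning steps can be sketched as follows. This is a Python stand-in for the MATLAB GUI, with hypothetical function names; the threshold and bin width are parameters the user would choose:

```python
def count_pulses(voltages, threshold):
    """Count photons by registering each rising edge: a pulse is
    counted when the trace crosses from below to above the threshold."""
    count = 0
    above = False
    for v in voltages:
        if v > threshold and not above:
            count += 1
        above = v > threshold
    return count

def bin_arrivals(arrival_times, bin_width, t_max):
    """Sort photon arrival times into fixed-width time bins."""
    bins = [0] * int(t_max / bin_width)
    for t in arrival_times:
        if t < t_max:
            bins[int(t / bin_width)] += 1
    return bins
```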
¹Unfortunately, we were unable to measure g(2)(τ) for our SPDC source. The detector used for measuring g(2)(τ) was not designed to detect light at 1560 nm, so instead we measured the correlations of coherent and thermal light at 780 nm.
²For more information about the GUI to calculate g(2)(τ) with one detector, please see Appendix B.1.
The photon statistics were used to calculate g(2)(τ=0) according to Equation 6.3.
To calculate g(2)(τ), the GUI used Equation 6.2 and compared the photodetector’s
data set to itself. It took the photodetector’s data set, duplicated it, and then added a
time delay τ to the second data set. The GUI then multiplied the two arrays together
according to Equation 6.2. As a result, the GUI was able to calculate g(2)(τ) for a
one-detector set-up.
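The delayed self-comparison can be sketched in a few lines of Python (a stand-in for the MATLAB GUI; the binned count record is invented, and τ is expressed in bins):

```python
def g2_one_detector(counts, tau):
    """g2(tau) from a single detector via Equation 6.2: the binned
    count record is multiplied against a copy of itself shifted by
    tau bins, then normalized by the squared mean count."""
    head = counts[:len(counts) - tau]  # plays the role of I(t)
    tail = counts[tau:]                # plays the role of I(t + tau)
    correlation = sum(a * b for a, b in zip(head, tail)) / len(head)
    mean = sum(counts) / len(counts)
    return correlation / mean ** 2

record = [3, 5, 4, 6, 4, 5, 3, 4]  # invented binned photon counts
print([g2_one_detector(record, tau) for tau in range(3)])
```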
We measured g(2)(τ) with one detector for coherent and thermal light. A 780
nm Toptica DL Pro diode laser provided the coherent light. The thermal light was
generated by shining the same 780 nm laser’s light at a spinning ground glass disk.
[19]³ The thermal light then reflected off of the spinning ground glass disk and was
coupled into a fiber. Light from the fiber was then connected to the g(2)(τ) set-up to
measure the photon counts.
Previously, we mentioned that different kinds of light display different photon
statistics. Before we dive into what we experimentally measured for g(2)(τ), let us
first discuss what we would expect the correlation functions of thermal and coherent
light to look like.
³Previous experiments conducted by my lab demonstrated that this light is, in fact, thermal. However, it is always nice to be able to confirm one's results. Since the thermal light's second-order correlation measurements also followed what we would expect for thermal light, it appears highly likely that this light is actually thermal.
Figure 6.1: Theoretical g(2)(τ) for thermal and coherent light.
For coherent laser light, we would expect to find that g(2)(τ)=1. On the other
hand, we would expect g(2)(τ) for the thermal light generated in our experiment to
follow a Lorentzian distribution as shown in Figure 6.1. Indeed, the g(2)(τ) calculated
with the one-detector GUI follows these trends! In our case, we computed our final
g(2)(τ) by combining the g(2)(τ) values produced over several trials. The error bars
of the resulting g(2)(τ) are small enough that the data still follow the expected
trends.
Figure 6.2: g(2)(τ) for thermal and coherent light. The second-order temporal correlations were measured with one detector and calculated with the one-detector g(2)(τ) GUI.
Now, we’ve demonstrated that it is possible to calculate g(2)(τ) with only one
detector. Next, let us follow the tried-and-true method of calculating g(2)(τ) with
two detectors.
6.3 Data Collection with Two Detectors
Traditionally, g(2)(τ) is calculated with two detectors. The calculation uses the fol-
lowing equation, which was derived from Equation 6.1.
g(2)(τ) = ⟨I1(t) I2(t + τ)⟩ / (⟨I1(t)⟩ ⟨I2(t)⟩)    (6.4)
Here, I1 is the intensity of the light detected at the first detector and I2 is the same
for the second detector.⁴
I built another GUI to calculate g(2)(τ), this time for the two-detector set-up.⁵ It
used a technique similar to that of my one-detector g(2)(τ) GUI. However, instead
of comparing one detector's data to itself, it compared each
detector’s data to the other. Following Equation 6.4, the GUI multiplied the data
of the photon counts over time from detector 1 with the same data from detector 2
offset by a time delay tau (τ). It then calculated the expectation value of that data
for that specific τ . Next, it divided that quantity by the expectation value of each
detector’s photon counts over all time. The end result was g(2)(τ) for a single τ . The
GUI repeated the calculation for τ values ranging from 0 to 250 microseconds.
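The cross-correlation the two-detector GUI performs follows the same pattern, now multiplying one detector's record against the other's (again a Python sketch with invented count records; the real implementation is the MATLAB GUI):

```python
def g2_two_detectors(counts1, counts2, tau):
    """g2(tau) via Equation 6.4: detector 1's binned counts are
    multiplied with detector 2's counts offset by tau bins, then
    normalized by the product of the two mean counts."""
    head = counts1[:len(counts1) - tau]  # I1(t)
    tail = counts2[tau:]                 # I2(t + tau)
    correlation = sum(a * b for a, b in zip(head, tail)) / len(head)
    mean1 = sum(counts1) / len(counts1)
    mean2 = sum(counts2) / len(counts2)
    return correlation / (mean1 * mean2)

# Invented count records standing in for the two detectors' data.
detector1 = [2, 4, 3, 5, 3, 4, 2, 5]
detector2 = [3, 3, 4, 4, 2, 5, 3, 4]
print([g2_two_detectors(detector1, detector2, tau) for tau in range(3)])
```

Summing the two records element by element would produce a single combined record, which is how a merged "one detector" data set can be formed from two-detector data.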
Here, the thermal light was produced in the same manner as in the one-detector
set-up. The coherent light was again 780 nm light from a diode laser.
⁴The order of the detectors does not particularly matter as long as there is consistency. It is only important for the purposes of keeping track of each detector's data when calculating g(2)(τ).
⁵Please refer to Appendix C.1 for more information about the code for the two-detector g(2)(τ) GUI.
Figure 6.3: g(2)(τ) for thermal and coherent light. The second-order temporal correlations were measured with two detectors and calculated with the two-detector g(2)(τ) GUI.
Both the thermal and coherent light in Figure 6.3 appear to still match what we
would expect to see for their values of g(2)(τ). However, the question remains of
whether the g(2)(τ) measured with one detector matches that measured with two
detectors.
In order to verify whether the different set-ups’ g(2)(τ) values were in good agree-
ment, I took the photon count data for thermal light collected with two detectors and
merged the detectors’ data together. The data set then resembled data taken with
one detector.⁶ I calculated g(2)(τ) using the appropriate GUI for the data taken with
two detectors and the spliced "one detector" data. These calculations are the same
as those in Figure 6.3 and Figure 6.2, but the purpose of this exercise is to compare
the calculations for the one-detector and two-detector set-ups.
Figure 6.4: Comparison of g(2)(τ) calculated for thermal light using the one-detector and two-detector g(2)(τ) GUIs.
Happily enough, the values for g(2)(τ) as calculated with the one-detector and two-
detector g(2)(τ) GUIs as shown in Figure 6.4 are in good agreement with each other! In
Figure 6.4, g(2)(τ) for thermal light reached a maximum of 1.8, which differs from the
maximum reached for thermal light in Figure 6.3. The second-order temporal correlations as measured with
the one-detector and two-detector set-ups in Figure 6.4 were calculated with different
data than in Figure 6.3. Since the correlations in Figure 6.4 are consistent with
each other, we can claim that they are in good agreement. We are still investigating
the experimental set-up to explain the discrepancy between the maximum values of
g(2)(τ) in Figure 6.4 and Figure 6.3.
⁶The same base data was used for the one-detector and two-detector GUIs in order to control for the slight differences in photon counts that occur from trial to trial. Thermal data was used because its shape is more distinctive than that of coherent data and would therefore display any aberrations more clearly.
6.4 Results and Discussion
In summary, we demonstrated a method to calculate the second-order temporal cor-
relations (g(2)(τ)) with one photodetector and two photodetectors. Our findings are
important for two primary reasons. First, it is far more cost-effective to calculate
g(2)(τ) with one detector. Both new labs and departments seeking to create easy,
cost-effective experiments to measure second-order temporal correlations will benefit
from using one detector. Second, my project established a set-up to measure g(2)(τ)
that can be used for future experiments. My GUIs can calculate g(2)(τ) as long as the
detected light’s correlations obey Equation 6.2 and Equation 6.4. Furthermore, my
GUIs automate the g(2)(τ) calculations, which will make it easier for future researchers
in my lab to calculate the second-order temporal correlations.
Unfortunately, our detectors could not register light from our SPDC source because
1560 nm lies outside their range of detectable wavelengths. As a result, we
were unable to calculate the correlations for SPDC light. However, all is not lost! If
we acquire a detector that can register 1560 nm light, then we could use my g(2)(τ)
GUIs to calculate the second-order temporal correlations for SPDC light. Since I have
already established everything needed to calculate g(2)(τ), we would be able to do so
quickly and easily. Hopefully, future researchers will be able to use my experiment to
their advantage!
Chapter 7
Conclusions and Future Work
7.1 Conclusions
We must now temporarily conclude our exploration of the intricacies of spontaneous
parametric down-conversion and the broader quantum and nonlinear behaviors of
light. Our investigation has led to two major outcomes. First, we have obtained
a deeper understanding of the characteristics of our SPDC source. Second, my ex-
periment enhanced our lab’s ability to quickly and efficiently study the behavior of
different kinds of light.
Our SPDC source is a custom-made nonlinear crystal. At the beginning of my
experiment, we only had theoretical knowledge of its properties. The first step of
my project was to ensure that we were actually measuring down-converted light,
a nontrivial task, considering how close our signal was to the ambient noise floor.
After filtering out the background light, all of my tests suggested that we did, in fact,
detect down-converted light (see Chapter 3.2 and Chapter 4). Now, we know that
the optimal wavelength to pump our SPDC source with is around 780.5 nm with an
optical bandwidth of 16.4 nm (see Chapter 4.3). We also calculated the efficiency of
our SPDC source in Chapter 5.2, which is 0.2 pW per mW of pump laser light. Our
source's pair production efficiency was 1.57 · 10⁶ photon pairs/sec/mW of pump laser
light (see Chapter 4.3). However, what do these statistics mean in the context of an
experiment?
My findings regarding the characteristics of our SPDC source are important be-
cause they will allow us to conduct future experiments with even more precise mea-
surements. If we need to produce a certain amount of down-converted light, then we
will know exactly how much laser light to pump our SPDC source with. Moreover,
we now know that the SPDC source’s efficiency is maximized when the pump laser
wavelength is around 780.5 nm with an optical bandwidth of 16.4 nm. A 775 nm laser
would have a similar efficiency as a 780 nm laser. However, a 775 nm non-tunable
laser is far more cost-effective than our tunable 780 nm diode laser. In a similar
vein, we were able to use cost-effective nonlinear-mode detectors to detect light at low
intensities. Cost-effective detectors and lasers would make these types of experiments
more accessible to newer labs and departments looking for low-cost experiments for
undergraduates. Overall, the process of characterizing our SPDC source simultane-
ously enhanced our knowledge of our source and revealed potential benefits that could
extend beyond our group.
Within my group, I created various modular user interfaces and experimental con-
figurations that will improve the efficiency and ease with which we conduct exper-
iments. I constructed an intensity analysis GUI that will accelerate the process of
extracting 1D intensity data from camera images (see Chapter 3.3). The intensity
analysis GUI can extract information from any data collected with our Hamamatsu
short-wave infrared camera. As a result, it could be used to analyze types of light
other than down-converted light, including low intensity light. Moreover, I designed
and built a set-up to measure g(2)(τ) for any type of light, not just down-converted
light. In particular, the GUIs that I created to calculate g(2)(τ) can be used to mea-
sure g(2)(τ) for multiple different kinds of light (see Chapter 6). The g(2)(τ) GUI for
the one-detector set-up provides researchers with a cost-effective alternative to using
two detectors to measure g(2)(τ). Altogether, my project allowed me to optimize the
efficiency of our experiments and broaden our research capabilities.
In short, we learned a great deal about the various properties of our SPDC source.
The modular experimental configurations that I created as part of my project can
be easily used for other projects. Next, let’s discuss what those future experiments
might look like.
7.2 Future Work
My project established the foundations upon which future experiments can be built.
Future projects fall into two general categories: those that utilize our SPDC source
as a method to generate down-converted light and those that focus on measuring
second-order temporal correlations.
The experimental set-ups to measure second-order temporal correlations have the
potential to be used in future experiments. As long as our photodetectors can de-
tect the oncoming light, then my experimental set-ups and g(2)(τ) GUIs can calcu-
late g(2)(τ) for different kinds of light. However, we were unable to calculate the
second-order temporal correlations for down-converted light because we did not have
photodetectors that could both detect 1560 nm light and provide a fast enough bandwidth
for correlation measurements. Once we obtain photodetectors that satisfy those re-
quirements, we will be able to use my experimental set-up to calculate g(2)(τ) for
down-converted light.
Beyond correlations, my characterization of our SPDC source will allow us to use it
as a source of down-converted light in future experiments. Moreover, as we discussed
in Chapter 1, the process of spontaneous parametric down-conversion produces entan-
gled photons. One potential project is to conduct a Bell test with the light produced
by our SPDC source. A Bell test, which was also described in Chapter 1, is a mea-
sure of whether a system obeys classical mechanics or if particles in the system are
so highly correlated with each other that they must be entangled. If we conclusively
demonstrate that our SPDC source produces entangled photons, then we could then
conduct any number of experiments involving entangled photons. Quantum com-
puting, quantum encryption, and quantum communications all require a source of
entangled photons. Albert Einstein may have been skeptical of quantum entangle-
ment, but "spooky action at a distance" has proven itself to be a fascinating source
of new discoveries in physics.
Bibliography
[1] R. Paschotta, Periodic poling. [Online]. Available: https://www.rp-photonics.com/periodic_poling.html.
[2] Nobel, All Nobel Prizes in Physics. [Online]. Available: https://www.nobelprize.org/prizes/lists/all-nobel-prizes-in-physics.
[3] A. Einstein, B. Podolsky, and N. Rosen, "Can quantum-mechanical description of physical reality be considered complete?" Phys. Rev., vol. 47, pp. 777–780, 10 1935. doi: 10.1103/PhysRev.47.777. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRev.47.777.
[4] J. Bell, "On the Einstein Podolsky Rosen paradox," Physics Physique Fizika, vol. 1, pp. 195–200, 3 1964. doi: 10.1103/PhysicsPhysiqueFizika.1.195. [Online]. Available: https://link.aps.org/doi/10.1103/PhysicsPhysiqueFizika.1.195.
[5] B. Hensen, "Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres," Nature, 2015.
[6] R. Ursin et al., "Entanglement-based quantum communication over 144 km," Nature Phys., vol. 3, pp. 481–486, 2007. [Online]. Available: https://www.nature.com/articles/nphys629.
[7] L. Gyongyosi, S. Imre, and H. V. Nguyen, "A survey on quantum channel capacities," IEEE, vol. 20, pp. 1149–1205, 2 2018. doi: 10.1109/COMST.2017.2786748.
[8] T. D. Ladd, "Quantum computers," Nature, vol. 464, pp. 45–53, 2010. doi: 10.1038/nature08812.
[9] R. Boyd, Nonlinear Optics. 2008.
[10] L. Mandel and E. Wolf, Optical Coherence and Quantum Optics. 1995.
[11] D. Walsh, "Personal communications," 2019.
[12] P. Amory and S. Gartenstein, "Oscilloscope instrument control code," 2019.
[13] M. Fiorentino, "Spontaneous parametric down-conversion in periodically poled KTP waveguides and bulk crystals," Optics Express, 2007. [Online]. Available: https://doi.org/10.1364/OE.15.007479.
[14] Newport, Longpass filter, dielectric, 25.4 mm, 1000±7 nm cut-on, 1020–2200 nm. [Online]. Available: https://www.newport.com/p/10LWF-1000-B.
[15] Hamamatsu, Image sensors. [Online]. Available: https://www.hamamatsu.com/resources/pdf/ssd/image_sensor_kmpd0002e.pdf.
[16] P. L. Knight and L. Allen, Concepts of Quantum Optics. 1983.
[17] D. J. Griffiths, Introduction to Quantum Mechanics. 1982.
[18] Newport, Technical note: Gaussian beam optics. [Online]. Available: https://www.newport.com/n/gaussian-beam-optics.
[19] U. of Erlangen-Nuremberg, "Photon statistics," Advanced Lab Course Experiment 45, 2017.
Appendix A: Code for Intensity Analysis GUI
A.1 Section 1
Listing A.1: MATLAB code for the intensity analysis GUI.% USER INTERFACE FOR EXTRACTING INTENSITY INFORMATION FROM CAMERA IMAGES
% 1) select the file(s) with data from the main light source
% 2) select the file(s) with noise data
% 3) plot gaussian fit
% 4) display subtracted image
% this code will:
% --average the laser beam data and background data
% --subtract background from beam data
% --create 1D intensity plots summing over the intensity in x and y
% --display the subtracted beam image
% troubleshooting: use matlab ’s uicontrol documentation
% mathworks.com/help/matlab/ref/matlab.ui.control.uicontrol -properties.html
function intens i tyGUI
%need the close all to be within the function or it won ’t work
close all
% x = input(’Enter data details (wavelength , etc) for figure title: ’,
% ’s’); %uncomment to specify window name
x = ’ ’ ; %window will have a blank name (faster for testing)
f i g = figure ( ’Name’ , x , ’Position ’ , [ 1000 200 900 7 0 0 ] ) ;
% variables for button positions
a=650; %x pos
b=230+48; %y pos
c=140; %button width
d=22; %button height
% BACKGROUND: allow user to change background color via drop down menu
% set initial background color to pink
f i g . Color = [255/255 232/255 246/255 ] ;
% create drop down menu to pick background color
bgColorPicker = uicontrol ( ’Style’ , ’listbox ’ ) ;bgColorPicker . Po s i t i on = [ a+20 b−200 100 7 5 ] ;bgColorPicker . S t r ing = {’dawn mode’ , ’noon mode’ , ’dusk mode’ , ’random ’ , . . .
’boring mode’ } ;bgColorPicker . Cal lback = @pickColor ;
function pickColor ( src , event )
43
% get the value of the user ’s color menu selection
word = get ( bgColorPicker , ’String ’ ) ;index = get ( bgColorPicker , ’Value’ ) ;bgColor = word{ index } ;
%use a series of if statements so color can be repeatedly changed
%dawn mode: light pink
if bgColor == s t r i n g ( bgColorPicker . S t r ing {1})f i g . Color = [255/255 232/255 246/255 ] ;
end
%dusk mode: lavender
if bgColor == s t r i n g ( bgColorPicker . S t r ing {3})f i g . Color = [200/255 200/255 255/255 ] ;
end
%noon mode: pastel blue
if bgColor == s t r i n g ( bgColorPicker . S t r ing {2})f i g . Color = [186/255 , 221/255 , 255/255 ] ;
end
%random mode: picks a random color
if bgColor == s t r i n g ( bgColorPicker . S t r ing {4})f i g . Color = [ rand rand rand ] ;
end
%default mode: gray
if bgColor == s t r i n g ( bgColorPicker . S t r ing {5})f i g . Color = [ 0 . 9 4 0 .94 0 . 9 4 ] ;
end
end
%create button to get unblocked beam files from user
s igna l Image = uicontrol ( f i g , ’Position ’ , [ a b c d ] ) ;s igna l Image . S t r ing = ’CHOOSE SIGNAL FILE(S)’ ;s i gna l Image . Cal lback = @signalButtonPushed ;
% put blank graphs in as placeholders
% 1 2 3
% 4 5 6
subplot ( 2 , 3 , [ 1 2 ] ) ;subplot ( 2 , 3 , [ 4 5 ] ) ;
%when you push the signal button , the function asks for files
%nested functions so you can ’t get a gaussian fit if you have no data
function signalButtonPushed(src, event)
    % ALTER THE PATH NAME OR FILE EXTENSION IF NEEDED
    %filter only looks at certain file types in a certain folder
    path = 'C:\Users\danagriffith\Documents\Spring 2020\Thesis\Thesis code\';
    ext = '.bmp'; %typical camera image file type is .bmp
    filter = strcat(path, '*', ext);
    %ask the user for unblocked beam file(s)
    [baseName, folder] = uigetfile(filter, 'Select a Data File', ...
        'MultiSelect', 'on');
    %initialize data variable before for loop
    data = [];
    %check if you have one file (char) or multiple files (cell)
    if ischar(baseName)
        fullFileName = fullfile(folder, baseName);
        data = imread(fullFileName);
    elseif iscell(baseName)
        %loop through list of files
        for item=1:size(baseName,2)
            %AVERAGE FILES TOGETHER
            fullFileName = strcat(folder, baseName(item));
            myFile = imread(char(fullFileName));
            %if we've only opened the first file and data (total
            %file) is empty, then let our opened file=total file
            if size(data) == 0
                data = myFile;
            else
                data = imadd(data, myFile);
            end
        end
        %turn data (total file) into an average by dividing by the
        %number of files we've looked at
        data = data/size(baseName,2);
    end
    %call our plotting function on our unblocked beam file
    plotData(data)
    %once we have beam, we can also ask for background data
    bgImage = uicontrol('Position', [a b-24 c d]);
    bgImage.String = 'CHOOSE NOISE FILE(S)';
    bgImage.Callback = @bgButtonPushed;
    function bgButtonPushed(src, event)
        % ALTER THE PATH NAME OR FILE EXTENSION IF NEEDED
        filter = strcat(folder, '*', ext); %use same folder and extension as data file
        [bgbaseName, folder] = uigetfile(filter, 'Select a Data File', ...
            'MultiSelect', 'on');
        %initialize empty list before for loop
        bg = [];
        %AVERAGE IMAGE FILES TOGETHER
        %if only one file, set that file to be our background file
        if ischar(bgbaseName)
            fullFileName = fullfile(folder, bgbaseName);
            bg = imread(fullFileName);
        %if multiple files, add them into one large file
        elseif iscell(bgbaseName)
            %loop through files
            for item=1:size(bgbaseName,2)
                fullFileName = strcat(folder, bgbaseName(item));
                myFile = imread(char(fullFileName));
                %for first iteration through loop
                if size(bg) == 0
                    bg = myFile;
                else
                    %add files together so you can average later
                    bg = imadd(bg, myFile);
                end
            end
            %divide summed background file by the # of files
            bg = bg/size(bgbaseName,2);
        end
        %subtract the background data from the unblocked beam
        data = imsubtract(data, bg);
        %re-plot our subtracted image data
        plotData(data)
    end
    % plot intensity data
    function plotData(data)
        %sum over each x value to get the x intensity
        %initialize list for x intensity variables
        xIntensity = [];
        for i = 1:size(data(:,:))
            xIntensity = [xIntensity sum(data(:,i))/numel(data(:,i))];
        end
        %sum over each y value to get the y intensity
        %initialize list for y intensity variables
        yIntensity = [];
        for i = 1:size(data(:,:))
            yIntensity = [yIntensity sum(data(i,:))/numel(data(i,:))];
        end
        %get the size of the image to plot sum over each pixel row/column
        picSize = 1:size(data(:,:));
        % 1D plot of the intensity as it varies over x values
        subplot(2,3,[1 2]);
        plot(picSize, xIntensity)
        title('Intensity as a 1D plot over x', 'fontsize', 14, ...
            'Interpreter', 'latex')
        xlabel('Row number', 'fontsize', 12, 'Interpreter', 'latex')
        ylabel('Intensity', 'fontsize', 12, 'Interpreter', 'latex')
        % 1D plot of the intensity as it varies over y values
        subplot(2,3,[4 5]);
        plot(picSize, yIntensity)
        title('Intensity as a 1D plot over y', 'fontsize', 14, ...
            'Interpreter', 'latex')
        xlabel('Column number', 'fontsize', 12, 'Interpreter', 'latex')
        ylabel('Intensity', 'fontsize', 12, 'Interpreter', 'latex')
        % set up toggle button for GAUSSIAN FIT
        gauss = uicontrol('Style', 'pushbutton', 'Position', [a b-72 c d]);
        gauss.String = 'GAUSSIAN FIT';
        gauss.Callback = @gaussianFit;
        function gaussianFit(src, event)
            %for troubleshooting the gaussian fits:
            %https://www.mathworks.com/help/curvefit/gaussian.html
            %https://www.mathworks.com/help/curvefit/fit.html
            %https://www.mathworks.com/help/curvefit/evaluate-a-curve-fit.html
            %beam is usually in this pixel region (10:35,20:45)
            %create a gaussian fit for the data
            %2nd order fit usually has a 2nd very broad gaussian that
            %represents the noise floor
            [yFit, ygof, youtput] = fit(picSize.', yIntensity.', 'gauss2');
            [xFit, xgof, xoutput] = fit(picSize.', xIntensity.', 'gauss2');
            %expression represents the 2nd degree gaussian
            %helps check the fit
            x = 1:1:64;
            yFit2 = yFit.a2*exp(-((x-yFit.b2)/yFit.c2).^2);
            xFit2 = xFit.a2*exp(-((x-xFit.b2)/xFit.c2).^2);
            %create buttons to adjust x and y fits
            xAdjustFit = uicontrol('Style', 'pushbutton', 'Position', ...
                [a+35 b-96 c-70 d]);
            xAdjustFit.String = 'adjust x fit';
            xAdjustFit.Callback = @xcftool;
            function xcftool(src, event)
                %create button to replot X fit after you've adjusted it
                xAdjustFit.Position = [a b-96 c-70 d];
                replotXFit = uicontrol('Style', 'pushbutton', 'Position', ...
                    [a+70 b-96 c-70 d]);
                replotXFit.String = 'replot x fit';
                replotXFit.Callback = @replotX;
                %call cftool on the x intensity data to adjust the fit
                cftool(picSize, xIntensity)
                function replotX(src, event)
                    %NOTE: MUST SAVE X FIT AS 'fittedmodelx' OR IT WON'T
                    %LOAD PROPERLY HERE
                    %load adjusted x fit info from workspace
                    xFit = evalin('base', 'fittedmodelx');
                    %adjusted fit is named "xFit" like the old x fit so the
                    %code won't overwrite the new fit with the old fit
                    %replot adjusted fit
                    subplot(2,3,[1 2]);
                    plot(xFit, picSize, xIntensity)
                    title( ...
                        'Gaussian fit of intensity as a 1D plot over x', ...
                        'fontsize', 14, 'Interpreter', 'latex')
                    xlabel('Column number', 'fontsize', 12, 'Interpreter', ...
                        'latex')
                    ylabel('Intensity', 'fontsize', 12, 'Interpreter', 'latex')
                    %rewrite old fit info (beam waist, etc) with new fit
                    %create a data array for the x fit
                    xFitData = xFit(picSize);
                    %find the min and max in the x fit array
                    xMin = min(xFitData);
                    xMax = max(xFitData);
                    %calculate visibility for x
                    xVis = (xMax - xMin)/(xMax + xMin);
                    %create text in the upper left corner of the X
                    %intensity graph to display beam info (beam waist and
                    %visibility)
                    xText = uicontrol('Style', 'text', 'Position', ...
                        [e f g h]);
                    xLine0 = 'X INTENSITY';
                    xLine1 = strcat('BEAM WAIST:', ' ', num2str(2*xFit.c1));
                    xLine2 = '';
                    xLine2 = strcat('VISIBILITY:', ' ', num2str(xVis));
                    % '' separates lines 1 and 2:
                    xLines = sprintf('%s\n%s', xLine0, '', xLine1, xLine2);
                    xText.String = {xLines};
                end
            end
            yAdjustFit = uicontrol('Style', 'pushbutton', 'Position', ...
                [a+35 b-120 c-70 d]);
            yAdjustFit.String = 'adjust y fit';
            yAdjustFit.Callback = @ycftool;
            function ycftool(src, event)
                yAdjustFit.Position = [a b-120 c-70 d];
                replotY = uicontrol('Style', 'pushbutton', 'Position', ...
                    [a+70 b-120 c-70 d]);
                replotY.String = 'replot y fit';
                replotY.Callback = @plotY;
                cftool(picSize, yIntensity)
                function plotY(src, event)
                    yFit = evalin('base', 'fittedmodely');
                    subplot(2,3,[4 5]);
                    plot(yFit, picSize, yIntensity)
                    title( ...
                        'Gaussian fit of intensity as a 1D plot over y', ...
                        'fontsize', 14, 'Interpreter', 'latex')
                    xlabel('Column number', 'fontsize', 12, 'Interpreter', ...
                        'latex')
                    ylabel('Intensity', 'fontsize', 12, 'Interpreter', 'latex')
                    %create a data array for the y fit
                    yFitData = yFit(picSize);
                    %find the min and max in the y fit array
                    yMin = min(yFitData);
                    yMax = max(yFitData);
                    %calculate visibility for y
                    yVis = (yMax - yMin)/(yMax + yMin);
                    %create text in the upper left corner of the Y
                    %intensity graph to display beam info (beam waist and
                    %visibility)
                    yText = uicontrol('Style', 'text', 'Position', ...
                        [e f-332 g h]);
                    yLine0 = 'Y INTENSITY';
                    yLine1 = strcat('BEAM WAIST: ', num2str(2*yFit.c1));
                    yLine2 = '';
                    yLine2 = strcat('VISIBILITY: ', num2str(yVis));
                    % '' separates lines 1 and 2:
                    yLines = sprintf('%s\n%s', yLine0, '', yLine1, yLine2);
                    yText.String = {yLines};
                end
            end
            % position of fit text, linked to button positions
            % have it here so it's not copied in both conditional branches
            e = a-528;
            f = b+320;
            g = c-25;
            h = d+25;
            %we'll now do exactly what we did in the functions that adjust
            %and replot the fit
            %here, we'll also plot the 2nd order gaussian to verify that
            %it's indeed a DC offset
            % turn fits into arrays so you can calculate visibility later
            yFitData = yFit(picSize);
            xFitData = xFit(picSize);
            %plot y fit
            subplot(2,3,[4 5]);
            plot(yFit, picSize, yIntensity)
            title('Gaussian fit of intensity as a 1D plot over y', ...
                'fontsize', 14, 'Interpreter', 'latex')
            xlabel('Column number', 'fontsize', 12, 'Interpreter', 'latex')
            ylabel('Intensity', 'fontsize', 12, 'Interpreter', 'latex')
            hold on
            %plot 2nd order gaussian (DC offset)
            plot(x, yFit2)
            legend('intensity data', 'fitted intensity', 'noise offset')
            hold off
            %plot x fit
            subplot(2,3,[1 2]);
            plot(xFit, picSize, xIntensity)
            title('Gaussian fit of intensity as a 1D plot over x', ...
                'fontsize', 14, 'Interpreter', 'latex')
            xlabel('Column number', 'fontsize', 12, 'Interpreter', 'latex')
            ylabel('Intensity', 'fontsize', 12, 'Interpreter', 'latex')
            hold on
            %plot 2nd order gaussian (DC offset)
            plot(x, xFit2)
            legend('intensity data', 'fitted intensity', 'noise offset')
            hold off
            % extract coefficients: myFit.a1 (same for b1,c1,etc)
            % a is the amplitude
            % b is the centroid (location)
            % c is related to the peak width
            % n is the number of peaks to fit and 1 <= n <= 8
            % check if the fit is good using rsquare (good data is >~0.7)
            %if bad fit, the text will say that it's not gaussian
            if ygof.rsquare < 0.5
                yText = uicontrol('Style', 'text', 'Position', ...
                    [e f-317 g h-15]);
                yLine1 = 'Y INTENSITY';
                yLine2 = 'NOT GAUSSIAN';
                yLines = sprintf('%s\n%s', yLine1, yLine2);
                yText.String = {yLines};
                yText.ForegroundColor = [1 0 0];
                yText.FontWeight = 'bold';
            else
                % find visibility
                % V = (max-min)/(max+min)
                yMin = min(yFitData);
                yMax = max(yFitData);
                yVis = (yMax - yMin)/(yMax + yMin);
                %display y fit info
                yText = uicontrol('Style', 'text', 'Position', ...
                    [e f-332 g h]);
                yLine0 = 'Y INTENSITY';
                yLine1 = strcat('BEAM WAIST: ', num2str(2*yFit.c1));
                yLine2 = '';
                yLine2 = strcat('VISIBILITY: ', num2str(yVis));
                % '' separates lines 1 and 2:
                yLines = sprintf('%s\n%s', yLine0, '', yLine1, yLine2);
                yText.String = {yLines};
            end
            %check if x fit is good
            %if bad fit, the text will say that it's not gaussian
            if xgof.rsquare < 0.5
                xText = uicontrol('Style', 'text', 'Position', ...
                    [e f+15 g h-15]);
                xLine1 = 'X INTENSITY';
                xLine2 = 'NOT GAUSSIAN';
                xLines = sprintf('%s\n%s', xLine1, xLine2);
                xText.String = {xLines};
                xText.ForegroundColor = [1 0 0];
                xText.FontWeight = 'bold';
            else
                % find visibility
                % V = (max-min)/(max+min)
                xMin = min(xFitData);
                xMax = max(xFitData);
                xVis = (xMax - xMin)/(xMax + xMin);
                %display the FWHM and visibility on each graph
                xText = uicontrol('Style', 'text', 'Position', [e f g h]);
                xLine0 = 'X INTENSITY';
                xLine1 = strcat('BEAM WAIST:', ' ', num2str(2*xFit.c1));
                xLine2 = '';
                xLine2 = strcat('VISIBILITY:', ' ', num2str(xVis));
                % '' separates lines 1 and 2:
                xLines = sprintf('%s\n%s', xLine0, '', xLine1, xLine2);
                xText.String = {xLines};
            end
        end
        % set up toggle button for SHOWING SUBTRACTED BEAM IMAGES
        toggle = uicontrol('Style', 'togglebutton', 'Position', ...
            [a b-48 c d]);
        toggle.String = 'SHOW BEAM IMAGES';
        toggle.Callback = @showImage;
        % only show the beam image after you have beam/background data
        function showImage(src, event)
            %plot subtracted beam image
            subplot(2,3,3)
            imshow(data)
            title('Subtracted beam image')
            %increase beam contrast by 1.5x each press
            imageContrast = uicontrol('Style', 'pushbutton', 'Position', ...
                [a b+120 c d]);
            imageContrast.String = 'INCREASE CONTRAST';
            imageContrast.Callback = @contrast;
            contrastAmount = 1;
            function contrast(src, event)
                contrastAmount = contrastAmount * 1.5;
                subplot(2,3,3)
                imshow(data*contrastAmount)
                title('Subtracted beam image')
            end
        end
    end
end
%allow user to turn off the background color menu
%(note: if you save a png, the color menu buttons will also save
%but the background color will not)
visibility = uicontrol('Style', 'pushbutton', 'String', 'UI VISIBLE?');
visibility.Callback = @visibleButton;
function visibleButton(src, event)
    % list of all non-plot UI controls
    visibility.Visible = 'off';
    bgColorPicker.Visible = 'off';
end
end
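The intensity-profile and visibility arithmetic at the heart of the listing above (average each pixel column, then compute V = (max - min)/(max + min)) can be sanity-checked outside of MATLAB. The following Python/NumPy sketch is illustrative only: the function name and the synthetic 64x64 Gaussian-beam frame are constructions for this example, not part of the thesis code.

```python
import numpy as np

def profile_and_visibility(image):
    """Collapse a 2D image into a 1D x-profile (the mean over rows for
    each column) and compute visibility V = (max - min)/(max + min),
    mirroring the xIntensity and xVis arithmetic in the listing above."""
    x_intensity = image.mean(axis=0)  # average each pixel column
    v = (x_intensity.max() - x_intensity.min()) / \
        (x_intensity.max() + x_intensity.min())
    return x_intensity, v

# illustrative synthetic data: a Gaussian beam profile on a 64x64 frame,
# sitting on a uniform DC background of 10 counts
cols = np.arange(64)
beam = 10.0 + 200.0 * np.exp(-((cols - 32) / 6.0) ** 2)
image = np.tile(beam, (64, 1))
profile, vis = profile_and_visibility(image)
```

On this synthetic frame the profile peaks at the 10-count background plus the 200-count beam amplitude, so the visibility comes out near 200/220, just as the GUI's xVis would.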
Appendix B: Code for GUI to Calculate g2(τ) with One Detector
B.1 Section 1
Listing B.1: MATLAB code for the GUI to calculate g2(τ) with one detector.
% Dana Griffith, Wellesley College
% Last updated: 02/19/2020
% GUI steps (automated):
% 1) User selects 1 or more .mat files containing the voltage readout of
%    **one** coincidence detector.
%    1a) The coincidence voltage data must be called "Voltage".
% 2) Convert the voltage data into square waves whose values are
%    either 1 or 0. It counts the number of photons from the rising edge.
% 3) Extract the photon statistics for g2(tau=0) and g2(tau).
% 4) Plot g2(tau=0) and g2(tau).
% For help with troubleshooting, try checking the uicontrol documentation:
% https://www.mathworks.com/help/matlab/ref/matlab.ui.control.uicontrol-properties.html
% 1 data point = 1 nanosecond
% 1 million data points = 1 millisecond
% This code *ONLY* takes data sets with 1 million data points. Otherwise,
% you'll get an "index exceeds array bounds" error.
function GUI_g20_g2tau_1detector_02192020
close all
clear all
clc
win_xpos = 1000; % UI x position on monitor
win_ypos = 200; %UI y position on monitor
win_w = 900; %window width
win_h = 700; %window height
window = figure('Name', '', 'Position', [win_xpos win_ypos win_w win_h]);
window.Color = [205/255 190/255 245/255];
% variables for button positions
a = (win_w * .88) - (150/2); %x pos
b = (win_h * 0.75); %y pos
c = 150; %button width
d = 22; %button height
% subplots positions in a 2x5 subplot matrix:
% 1 2 3 4 5
% 6 7 8 9 10
% make empty graphs to fill up space. Later, we'll plot g2(tau) here.
subplot(2,5,[1 2]);
subplot(2,5,[3 4]);
subplot(2,5,[6 7]);
subplot(2,5,[8 9]);
% 1 millisecond of data = 1,000,000 data points
photonNumBin = [1000 1250 1600 2000 2500 3125 4000 5000 6250 8000 ...
    10000 12500 15625 20000 25000 31250 40000];
% variable for later so we can make the g2tau matrix the correct size
iterate = (1e6/1000)/4;
%Create a function to extract g20
function g20bins = evaluateg20(data)
    signal = zeros(1, size(data,2));
    %Get rid of the DC offset
    data = data - mean(data);
    %If the voltage is higher than 0.5, signal = 1. Otherwise, signal = 0
    for m=1:size(data,2)
        if data(m) > 0.5
            signal(m) = 1;
        else
            signal(m) = 0;
        end
    end
    adjustedVarN = zeros(size(photonNumBin,2), 1);
    adjustedNBar = zeros(size(photonNumBin,2), 1);
    %Loop through photon bin sizes
    for bin=1:size(photonNumBin,2)
        microBin = photonNumBin(bin);
        counter = zeros(1, floor(size(signal,2)/microBin));
        %Sort data into bins
        for big=1:microBin:size(signal,2)
            for tiny=1:microBin-1
                if signal(big+tiny) - signal(big+tiny-1) > 0.5
                    if big==1
                        counter(1,big) = counter(1,big) + 1;
                    else
                        counter(1,(big-1+microBin)/microBin) = ...
                            counter(1,(big-1+microBin)/microBin) + 1;
                    end
                end
            end
        end
        % find the variance and nbar for different smaller bins to
        % avoid exceeding the coherence time of the light
        i = floor(size(signal,2)/microBin); % total number of microbins
        macroBin = 25; % number of macrobins in the whole data set
        step = floor(i/macroBin); % number of microbins per macrobin
        %If the step size is less than or equal to one, then the
        %microbins are the same size as the macrobins.
        if step <= 1
            break
        end
        dn = zeros(1, macroBin);
        nbar = zeros(1, macroBin);
        for t=1:macroBin
            if t==1
                %Find the variance and mean photon number for the first
                %chunk of data
                dn(1,t) = var(counter(1, 1:t*step));
                nbar(1,t) = mean(counter(1, 1:t*step), 2);
            %If the step size is too big, break
            elseif (t*step) - 1 > i
                break
            else
                dn(1,t) = var(counter(1, (t-1)*step + 1:(t*step)));
                nbar(1,t) = mean(counter(1, (t-1)*step + 1:(t*step)), 2);
            end
        end
        %At the end of each microbin's calculation, add the averaged
        %variance and averaged nbar for that bin to the larger array
        adjustedVarN(bin,:) = mean(dn, 2);
        adjustedNBar(bin,:) = mean(nbar, 2);
    end
    %Use formula to calculate g2(0) from the variance and nbar
    g20bins = 1 + ((adjustedVarN - adjustedNBar)./(adjustedNBar.^2));
end
% function to calculate g2(tau) for the given data
function g2taubins = evaluateg2tau(data)
    signal = zeros(1, size(data,2));
    %Get rid of the DC offset
    data = data - mean(data);
    for m=1:size(data,2)
        if data(m) > 0.5
            signal(m) = 1;
        else
            signal(m) = 0;
        end
    end
    %Only use a microbin of 1000 nanosecs for g2(tau)
    microBin = 1000;
    %Same process as g2(0): find photons from the rising edge
    counter = zeros(1, size(signal,2)/microBin);
    for s=1:microBin:size(signal,2)
        for t=1:microBin-1
            if signal(s+t) - signal(s+t-1) > 0.5
                if s==1
                    counter(1,s) = counter(1,s) + 1;
                else
                    counter(1,(s-1+microBin)/microBin) = ...
                        counter(1,(s-1+microBin)/microBin) + 1;
                end
            end
        end
    end
    i = size(counter,2)/25; % number of macrobins in the data set
    numerator = zeros(1, iterate);
    denominator = zeros(1, iterate);
    %Shift the data set by tau and then multiply the unshifted data set
    %by its shifted version
    for tau=1:iterate
        %Shrink data set as tau grows so the arrays share the same
        %dimensions for matrix multiplication purposes
        It = counter(1, 1:(size(counter,2)-tau));
        ItTshift = counter(1, (tau+1):size(counter,2));
        %Calculate numerator and denominator for g2(tau) formula
        numerator(tau) = mean(times(It, ItTshift));
        denominator(tau) = mean(It)^2;
    end
    g2taubins = numerator./denominator;
end
trial1 = uicontrol(window, 'Position', [a b c d]);
trial1.String = 'CHOOSE COHERENT FILES';
% Note: correlation stats for coherent and thermal light are calculated
% in the same manner in this code. The UI buttons only have different
% labels because the graphs are labeled "coherent" or "thermal".
trial1.Callback = @trial1Pushed;
function trial1Pushed(src, event)
    % ALTER THE PATH NAME OR FILE EXTENSION IF NEEDED
    %filter only looks at certain file types in a certain folder
    clc
    filter = '/Users/danagriffith/Documents/Spring 2020/Thesis';
    %ask the user for trial files
    [baseName, folder] = uigetfile(filter, 'Select a Data File', ...
        'MultiSelect', 'on');
    %check if you have one file (char) or multiple files (cell)
    if ischar(baseName)
        data = struct2array(load(char(baseName), 'Voltage'));
        g20 = evaluateg20(data);
        err = 0;
        plotData(g20, [1 2], 'Coherent g$^2$(0)', err, ...
            photonNumBin/1000, 'Photon time bin ($\mu$s)')
        g2 = evaluateg2tau(data);
        err = 0;
        plotData(g2, [3 4], 'Coherent g$^2$($\tau$)', err, 1:size(g2,2), ...
            '$\tau$ ($\mu$s)')
    elseif iscell(baseName)
        g20 = zeros(size(baseName,2), size(photonNumBin,2));
        g2tau = zeros(size(baseName,2), iterate);
        %loop through list of files
        for item=1:size(baseName,2)
            data = struct2array(load(char(baseName(item)), 'Voltage'));
            g20(item,:) = evaluateg20(data)';
            g2tau(item,:) = evaluateg2tau(data);
        end
        %Find the error
        std_dev0 = std(g20);
        err0 = std_dev0/sqrt(size(baseName,2));
        %Average the g20 values across all trials
        g20avg = mean(g20);
        %Call plotData function for g2(0):
        %data,plotSpace,plotTitle,error,xdata,xinfo
        plotData(g20avg, [1 2], 'Coherent averaged g$^2$(0)', err0, ...
            photonNumBin./1000, 'Photon time bin ($\mu$s)')
        std_devtau = std(g2tau);
        errtau = std_devtau/sqrt(size(baseName,2));
        g2tauavg = mean(g2tau);
        %Call plotData function for g2(tau):
        plotData(g2tauavg, [3 4], 'Coherent averaged g$^2$($\tau$)', ...
            errtau, 1:size(g2tauavg,2), 'Tau ($\mu$s)')
    end
end
trial2 = uicontrol(window, 'Position', [a b/3 c d]);
trial2.String = 'CHOOSE THERMAL FILES';
trial2.Callback = @trial2Pushed;
function trial2Pushed(src, event)
    % ALTER THE PATH NAME OR FILE EXTENSION IF NEEDED
    %filter only looks at certain file types in a certain folder
    clc
    filter = '/Users/danagriffith/Documents/Spring 2020/Thesis';
    %ask the user for trial files
    [baseName, folder] = uigetfile(filter, 'Select a Data File', ...
        'MultiSelect', 'on');
    %check if you have one file (char) or multiple files (cell)
    if ischar(baseName)
        data = struct2array(load(char(baseName), 'Voltage'));
        g20 = evaluateg20(data);
        err = 0;
        plotData(g20, [6 7], 'Thermal g$^2$(0)', err, ...
            photonNumBin./1000, 'Photon time bin ($\mu$s)')
        g2 = evaluateg2tau(data);
        err = 0;
        plotData(g2, [8 9], 'Thermal g$^2$($\tau$)', err, 1:size(g2,2), ...
            'Tau ($\mu$s)')
    elseif iscell(baseName)
        g20 = zeros(size(baseName,2), size(photonNumBin,2));
        g2tau = zeros(size(baseName,2), iterate);
        %loop through list of files
        for item=1:size(baseName,2)
            data = struct2array(load(char(baseName(item)), 'Voltage'));
            g20(item,:) = evaluateg20(data)';
            g2tau(item,:) = evaluateg2tau(data);
        end
        std_dev0 = std(g20);
        err0 = std_dev0/sqrt(size(baseName,2));
        %average the g20 values across all trials
        g20avg = mean(g20);
        %Call plotData function for g2(0):
        plotData(g20avg, [6 7], 'Thermal averaged g$^2$(0)', err0, ...
            photonNumBin./1000, 'Photon time bin ($\mu$s)')
        std_devtau = std(g2tau);
        errtau = std_devtau/sqrt(size(baseName,2));
        g2tauavg = mean(g2tau);
        %Call plotData function for g2(tau):
        plotData(g2tauavg, [8 9], 'Thermal averaged g$^2$($\tau$)', ...
            errtau, 1:size(g2tauavg,2), 'Tau ($\mu$s)')
    end
end
function plotData(data, plotSpace, plotTitle, error, xdata, xinfo)
    %To put plots in a new figure, simply uncomment "figure" and
    %comment out "subplot(...)"
    if error == 0
        subplot(2,5,plotSpace);
        % figure
        plot(xdata, data, 'linewidth', 2)
        title(plotTitle, 'fontsize', 26, 'Interpreter', 'latex')
        xlabel(xinfo, 'fontsize', 22, 'Interpreter', 'latex')
        ylabel('g$^2$', 'fontsize', 22, 'Interpreter', 'latex')
        % ylim([0 2.2])
        grid on
    else
        subplot(2,5,plotSpace);
        % figure
        errorbar(xdata, data, error, 'linewidth', 2)
        title(plotTitle, 'fontsize', 26, 'Interpreter', 'latex')
        xlabel(xinfo, 'fontsize', 22, 'Interpreter', 'latex')
        ylabel('g$^2$', 'fontsize', 22, 'Interpreter', 'latex')
        % ylim([0 2.2])
        grid on
    end
end
end
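The photon statistics in Listing B.1 reduce to two steps: count the rising edges of the thresholded square wave in fixed time bins, then apply g²(0) = 1 + (Var[n] − n̄)/n̄². The Python/NumPy sketch below is an illustrative sanity check, not part of the thesis code; its vectorized edge counter is a mild simplification of the MATLAB inner loop in that it also counts an edge falling exactly on a bin boundary, which the nested loops skip.

```python
import numpy as np

def count_rising_edges(signal, bin_size):
    """Count 0 -> 1 transitions of a square-wave signal in consecutive
    bins of bin_size samples, analogous to Listing B.1's nested loops."""
    edges = np.diff(signal.astype(int)) > 0   # True at each rising edge
    edges = np.concatenate(([False], edges))  # keep the original length
    n_bins = len(signal) // bin_size
    return edges[:n_bins * bin_size].reshape(n_bins, bin_size).sum(axis=1)

def g2_zero(counts):
    """g2(0) = 1 + (Var[n] - nbar)/nbar^2 for per-bin photon counts n."""
    nbar = counts.mean()
    return 1.0 + (counts.var() - nbar) / nbar ** 2

# illustrative check: Poissonian (coherent-like) counts should give
# g2(0) very close to 1
rng = np.random.default_rng(0)
counts = rng.poisson(50.0, size=100_000).astype(float)
g2 = g2_zero(counts)
```

Because a Poisson distribution has variance equal to its mean, the variance and n̄ terms cancel and g²(0) sits at 1, which is the coherent-light baseline the thesis compares thermal light against.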
Appendix C: Code for GUI to Calculate g2(τ) with Two Detectors
C.1 Section 1
Listing C.1: MATLAB code for the GUI to calculate g2(τ) with two detectors.
% Dana Griffith, Wellesley College
% Last updated: 02/19/2020
% GUI steps (automated):
% 1) User selects 1 or more .mat files containing the voltage readout for
%    detector 1.
% 2) User selects 1 or more .mat files containing the voltage readout for
%    detector 2. You must have the same number of files for detector 2 as
%    you do for detector 1.
%    2a) Choose the files such that they correspond to the detector 1 files
%        (ex: use trials 1-3 for both detectors)
%    2b) The coincidence voltage data must be called "Voltage1" for
%        detector 1 and "Voltage2" for detector 2.
% 3) Convert the voltage data into square waves whose values are either 1
%    or 0. It counts the photons based on the rising edge.
% 4) Extract the photon statistics for g2(tau).
% 5) Plot g2(tau).
% For help with troubleshooting, try checking the uicontrol documentation:
% mathworks.com/help/matlab/ref/matlab.ui.control.uicontrol-properties.html
% 1 data point = 1 nanosecond
% 1 million data points = 1 millisecond
% This code *ONLY* takes data sets with 1 million data points. Otherwise,
% you'll get an "index exceeds array bounds" error.
%NOTE: the voltage cutoff to determine whether a photon was detected or not
%(if the signal should have a 0 or 1 at that point) was updated.
%Previously, we used a universal value of 0.5. However, we needed to
%generalize that value. Instead, we chose 0.65 below the maximum point on
%each data set. It seems to work fairly well.
% g2(tau) function
function GUI_g20_g2tau_2detectors_02192020
close all
clear all
clc
win_xpos = 1000; % UI x-position on monitor
win_ypos = 500; %UI y-position on monitor
win_w = 900; %window width
win_h = 350; %window height
window = figure('Name', '', 'Position', [win_xpos win_ypos win_w win_h]);
window.Color = [255/255 192/255 203/255];
%Variables for button positions
a = (win_w * .5) - (120/2); %x pos
b = (win_h * 0.5); %y pos
c = 120; %button width
d = 22; %button height
%Subplots positions in a 1x5 subplot matrix:
% 1 2 3 4 5
%Make empty graphs to fill up space. Later, we'll plot g2(tau) here.
subplot(1,5,[1 2])
subplot(1,5,[4 5])
%Variable for later so we can make the g2tau matrix the correct size
iterate = (1e6/1000)/4;
%Function to calculate g2(tau) for given data
function g2taubins = evaluateg2tau(det1, det2)
    signal1 = zeros(1, size(det1,2));
    signal2 = zeros(1, size(det2,2));
    %Different data sets are set at different DC voltages:
    %subtract 0.65 from the maximum voltage to find the value where all
    %points above that voltage count as a detected photon
    countval2 = max(det2) - 0.65; %0.65 works best
    countval1 = max(det1) - 0.65;
    %If the voltage is higher than a certain value, signal = 1 at that
    %point. Otherwise, signal = 0
    for m=1:size(det1,2)
        if det1(m) > countval1
            signal1(m) = 1;
        else
            signal1(m) = 0;
        end
    end
    for m=1:size(det2,2)
        if det2(m) > countval2
            signal2(m) = 1;
        else
            signal2(m) = 0;
        end
    end
    %Sort data points into bins that are 1000 nanosecs long
    microBin = 1000;
    %Detect photons based on the rising edge
    counter1 = zeros(1, size(signal1,2)/microBin);
    for s=1:microBin:size(signal1,2)
        for t=1:microBin-1
            if signal1(s+t) - signal1(s+t-1) > 0.5
                if s==1
                    counter1(1,s) = counter1(1,s) + 1;
                else
                    counter1(1,(s-1+microBin)/microBin) = ...
                        counter1(1,(s-1+microBin)/microBin) + 1;
                end
            end
        end
    end
    counter2 = zeros(1, size(signal2,2)/microBin);
    for s=1:microBin:size(signal2,2)
        for t=1:microBin-1
            if signal2(s+t) - signal2(s+t-1) > 0.5
                if s==1
                    counter2(1,s) = counter2(1,s) + 1;
                else
                    counter2(1,(s-1+microBin)/microBin) = ...
                        counter2(1,(s-1+microBin)/microBin) + 1;
                end
            end
        end
    end
    numerator = zeros(1, iterate+1);
    denominator = zeros(1, iterate+1);
    %Shift data sets by tau and apply the g2 formula
    for tau=0:iterate
        It = counter1(1, 1:(size(counter1,2)-tau));
        ItTshift = counter2(1, (tau+1):size(counter2,2));
        numerator(tau+1) = mean(times(It, ItTshift));
        denominator(tau+1) = mean(It)^2;
    end
    %Divide the numerator of the formula by the denominator to get g2
    g2taubins = numerator./denominator;
end
trial1 = uicontrol(window, 'Position', [a b+d c d]);
trial1.String = 'CHOOSE LEFT FILES';
trial1.Callback = @trial1Pushed;
function trial1Pushed(src, event)
    % ALTER THE PATH NAME OR FILE EXTENSION IF NEEDED
    %filter only looks at certain file types in a certain folder
    filter = '/Users/danagriffith/Documents/Spring 2020/Thesis';
    %ask the user for trial files
    disp('CHOOSE DETECTOR 1 FILES')
    [detector1, folder1] = uigetfile(filter, 'DETECTOR 1 FILES', ...
        'MultiSelect', 'on');
    disp('CHOOSE DETECTOR 2 FILES')
    [detector2, folder2] = uigetfile(filter, 'DETECTOR 2 FILES', ...
        'MultiSelect', 'on');
    %check if you have one file (char) or multiple files (cell)
    if and(ischar(detector1), ischar(detector2))
        data1 = struct2array(load(char(detector1), 'Voltage1'));
        data2 = struct2array(load(char(detector2), 'Voltage2'));
        %calculate g2 for both detector files
        g2 = evaluateg2tau(data1, data2);
        err = 0;
        plotData(g2, [1 2], 'g$^2$($\tau$)', err, 1:size(g2,2), ...
            'Tau ($\mu$s)')
    elseif and(iscell(detector1), iscell(detector2))
        g2tau = zeros(size(detector1,2), iterate+1);
        %loop through list of files
        for item=1:size(detector1,2)
            data1 = struct2array(load(char(detector1(item)), ...
                'Voltage1'));
            data2 = struct2array(load(char(detector2(item)), ...
                'Voltage2'));
            %calculate g2 for each corresponding set of data files
            g2tau(item,:) = evaluateg2tau(data1, data2);
        end
        %Find the standard deviation and error
        std_devtau = std(g2tau);
        errtau = std_devtau/sqrt(size(detector1,2));
        %Average g2 from each file set together
        g2tau = mean(g2tau);
        plotData(g2tau, [1 2], 'Averaged g$^2$($\tau$)', errtau, ...
            1:size(g2tau,2), 'Tau ($\mu$s)')
    end
end
trial2 = uicontrol(window, 'Position', [a b-d c d]);
trial2.String = 'CHOOSE RIGHT FILES';
trial2.Callback = @trial2Pushed;
function trial2Pushed(src, event)
    %GET DETECTOR 1 AND DETECTOR 2 FILES
    filter = '/Users/danagriffith/Documents/Spring 2020/Thesis';
    %ask the user for trial files
    disp('CHOOSE DETECTOR 1 FILES')
    [detector1, folder1] = uigetfile(filter, 'DETECTOR 1 FILES', ...
        'MultiSelect', 'on');
    disp('CHOOSE DETECTOR 2 FILES')
    [detector2, folder2] = uigetfile(filter, 'DETECTOR 2 FILES', ...
        'MultiSelect', 'on');
    %check if you have one file (char) or multiple files (cell)
    if and(ischar(detector1), ischar(detector2))
        data1 = struct2array(load(char(detector1), 'Voltage1'));
        data2 = struct2array(load(char(detector2), 'Voltage2'));
        g2 = evaluateg2tau(data1, data2);
        err = 0;
        plotData(g2, [4 5], 'g$^2$($\tau$)', err, 1:size(g2,2), ...
            'Tau ($\mu$s)')
    elseif and(iscell(detector1), iscell(detector2))
        g2tau = zeros(size(detector1,2), iterate+1);
        %loop through list of files
        for item=1:size(detector1,2)
            data1 = struct2array(load(char(detector1(item)), ...
                'Voltage1'));
            data2 = struct2array(load(char(detector2(item)), ...
                'Voltage2'));
            g2tau(item,:) = evaluateg2tau(data1, data2);
        end
        std_devtau = std(g2tau);
        errtau = std_devtau/sqrt(size(detector1,2));
        g2tau = mean(g2tau);
        plotData(g2tau, [4 5], 'Averaged g$^2$($\tau$)', errtau, ...
            1:size(g2tau,2), 'Tau ($\mu$s)')
    end
end
function plotData(data, plotSpace, plotTitle, error, xdata, xinfo)
    %If you want the plots to be in a new figure window, uncomment
    %"figure" and comment "subplot(...)" for each plot function
    %If error is zero, you only have one file pair for det 1 and det 2
    if error == 0
        subplot(1,5,plotSpace);
        % figure
        plot(xdata, data)
        title(plotTitle, 'fontsize', 26, 'Interpreter', 'latex')
        xlabel(xinfo, 'fontsize', 22, 'Interpreter', 'latex')
        ylabel('g$^2$($\tau$)', 'fontsize', 10, 'Interpreter', 'latex')
        ylim([0 2.2])
        grid on
    %Nonzero error means that you have multiple files averaged together
    else
        subplot(1,5,plotSpace);
        % figure
        errorbar(xdata, data, error)
        title(plotTitle, 'fontsize', 26, 'Interpreter', 'latex')
        xlabel(xinfo, 'fontsize', 22, 'Interpreter', 'latex')
        ylabel('g$^2$($\tau$)', 'fontsize', 22, 'Interpreter', 'latex')
        ylim([0 2.2])
        grid on
    end
end
end
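The two-detector correlation in Listing C.1 is a shift-and-multiply cross-correlation, normalized, as in the MATLAB code, by ⟨I₁⟩² rather than ⟨I₁⟩⟨I₂⟩. The Python/NumPy sketch below is illustrative only (function name and synthetic data are constructions for this example, not thesis code), and it reproduces that same normalization choice.

```python
import numpy as np

def g2_tau(counts1, counts2, max_tau):
    """Cross-correlate two per-bin count streams:
    g2(tau) = <I1(t) * I2(t + tau)> / <I1(t)>^2,
    using the same shift-and-multiply scheme (and the same <I1>^2
    normalization) as evaluateg2tau in Listing C.1."""
    g2 = np.empty(max_tau + 1)
    for tau in range(max_tau + 1):
        It = counts1[:len(counts1) - tau]  # trim so lengths match
        ItTshift = counts2[tau:]           # shifted copy of stream 2
        g2[tau] = np.mean(It * ItTshift) / np.mean(It) ** 2
    return g2

# illustrative data: two independent Poissonian streams are uncorrelated,
# so g2(tau) should hover near 1 at every delay
rng = np.random.default_rng(1)
c1 = rng.poisson(30.0, size=20_000).astype(float)
c2 = rng.poisson(30.0, size=20_000).astype(float)
g2 = g2_tau(c1, c2, 10)
```

For uncorrelated streams the sketch stays flat at g²(τ) ≈ 1; bunched thermal light split onto the two detectors would instead show g²(0) approaching 2 and decaying toward 1 over the coherence time.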