arXiv:1801.07778v4 [astro-ph.IM] 29 May 2020


Draft version June 2, 2020. Typeset using LaTeX twocolumn style in AASTeX63.

Challenges in Scientific Data Communication from Low-Mass Interstellar Probes∗

David G. Messerschmitt,1 Philip Lubin,2 and Ian Morrison3,4

1 University of California at Berkeley, Department of Electrical Engineering and Computer Sciences, USA
2 University of California at Santa Barbara, Department of Physics, USA
3 Swinburne University of Technology, Centre for Astrophysics and Supercomputing, Australia
4 Now with Curtin University, International Centre for Radio Astronomy Research, Australia

(Received February 2020)

Submitted to ApJ

ABSTRACT

A downlink for the return of scientific data from space probes at interstellar distances is studied. The context is probes moving at relativistic speed using a terrestrial directed-energy beam for propulsion, necessitating very-low-mass probes. Achieving simultaneous communication from a swarm of probes launched at regular intervals to a target at the distance of Proxima Centauri is addressed. The analysis focuses on fundamental physical and statistical communication limitations on downlink performance rather than a concrete implementation. Transmission time/distance and probe mass are chosen to achieve the best data latency vs volume tradeoff. Challenges in targeting multiple probe trajectories with a single receiver are addressed, including multiplexing, parallax, and target star proper motion. Relevant sources of background radiation, including cosmic, atmospheric, and receiver dark count are identified and estimated. Direct detection enables high photon efficiency and incoherent aperture combining. A novel burst pulse-position modulation (BPPM) beneficially expands the optical bandwidth and ameliorates receiver dark counts. A canonical receive optical collector combines minimum transmit power with constrained swarm-probe coverage. Theoretical limits on reliable data recovery and sensitivity to the various BPPM model parameters are applied, including a wide range of total collector areas. Significant near-term technological obstacles are identified. Enabling innovations include a high peak-to-average power ratio, a large source extinguishing factor, the shortest atmosphere-transparent wavelength to minimize target star interference, adaptive optics for atmospheric turbulence, very selective bandpass filtering (possibly with multiple passbands), very low dark-count single-photon superconducting detectors, and very accurate attitude control and pointing mechanisms.

Keywords: interstellar space probes, interstellar communication, background radiation

NOMENCLATURE

X{T,S,R,TR} Design values of parameter “X” at T = transmitter, S = receiver aperture, R = entire receive collector, or TR = end-to-end. S and R correspond to the canonical receive collector defined in §5.

Y{A,P,N,I,D} Values of performance metric “Y” corresponding to A = average power, P = peak power, N = noise, I = interference, or D = dark counts.

Others See Tbls.1, 2, 3, and 4 and Tbl.8

1. INTRODUCTION

Since exoplanets are commonplace, interest in and concrete efforts toward the scientific exploration of exoplanets are growing. This paper addresses the technology development challenges behind the goal of launching low-mass interstellar space probes to the vicinity of the nearest star system (Alpha Centauri) and returning the acquired scientific data before the end of the 21st century. Prominent among the myriad challenges are propulsion and communication. Here we address the communication of scientific data from a probe at interstellar distances and traveling at a relativistic speed.

∗ Copyright © 2020.

First proposed by Robert Forward in 1962 (Forward 1962), a low-mass probe with a sail propelled by directed energy from earth is the only known technological option for close exploration of nearby stars that may be accessible with available or near-term technology, and which completes its mission in a matter of decades. This has led to concrete efforts to validate the concept and its supporting technology in the context of the NASA StarLight and Breakthrough StarShot programs (Lubin 2016; Kulkarni et al. 2018; Lubin 2020; Parkin 2018).

Such a probe achieves minimum mass (and hence highest velocity) by carrying only a sail for propulsion, scientific instrumentation, a downlink communications system, photon thrusters for attitude control, and an electrical power generator. Coupled with very high power directed energy beaming from earth and a sail that captures the momentum from that beam, it can reach 10-20% of light speed. Following a flyby (likely of Proxima Centauri or other targets within our stellar neighborhood), the probe would transmit the scientific data collected back to a receiver on the earth's surface, which would consist of a large set of optical apertures with incoherent combining. Location on the earth's surface (rather than on a space-based platform) is motivated by the large dimensions of the directed-energy and receive infrastructures, and the large energy requirements for the former.

The focus here is on the probe-to-earth communication downlink design, with the expectation that any earth-to-probe uplink communication is short-lived and benefits from relative proximity to earth. Relying heavily on a reusable fixed propulsion and communication infrastructure confined to the earth's surface, a swarm of multiple probes can be launched with a diversity of scientific instruments capturing images, magnetic field strengths, spectroscopy, etc.

We quantify the tradeoff between probe mass, speed, and data rate, with the goal of minimizing the data latency, defined as the time from probe launch to the completion of the data download. As the total data volume is increased, the probe mass and latency necessarily increase, and speed decreases. Later numerical results focus on the lowest-mass "wafer scale" version.

Today most interest in interstellar exploration by space probes resides in the astronomy and astrophysics communities, but some essential knowledge and experience lies with the communications sciences. With the goal of informing all these communities and uniting them around this challenge, we are careful to define terminology and emphasize intuitive explanation rather than detailed derivation of theoretical results.

1.1. Goals

As of this writing it is early in the conception of the interstellar mission based on low-mass probes. Our goal is to address the physical and statistical communication limitations on a scientific data downlink, as well as their implications for the requisite technological capabilities. Our goal is not to propose a concrete and fully specified design for such a communication downlink, nor to address the numerous engineering challenges that will be encountered, as there are too many uncertainties, interactions between launch and downlink communication, and questions about the technologies that may be available in the timeframe of the first operational downlink. An initial conclusion is that wavelengths in the visible optical regime are appropriate due to the severe probe electrical power and mass restrictions. We propose an architecture that meets operational requirements. Within that architecture we study and quantify the inevitable dependencies and tradeoffs among the different components, and in general attempt to gauge the feasibility of the endeavor.

Because we can use higher power or larger apertures, the laws of physics do not limit the achievable distances or data rates. Rather, practical limitations are due to low probe mass and available technologies and their performance characteristics. Because probes will be launched in a couple decades at the earliest, and the beginning of downlink operation will be delayed for at least five decades (see §2.5), we don't consider currently available technology to be limiting. Rather we identify the areas of greatest technological challenge, and identify new capabilities that are needed. This mission can leverage general technology advances over the coming decades, but may also require specific technological innovation and development. We also identify and discuss tradeoffs among technologies, and assume that greater advancement in some areas can offset slower advancement in others.

Thus we can identify the specific goals of this paper as:

• Identify the physical limitations on such a mission, especially those related to the space environment (such as unwanted sources of noise and interference).

• Propose a system architecture, within which we identify the system components and quantify needed capabilities.

• Understand the interactions and tradeoffs among those components.

• Identify components where currently available technology is inadequate and estimate the necessary future capabilities. A more concise summary of these technology inadequacies is available (Messerschmitt et al. 2019).

• Generally assess the feasibility of the endeavor in the envisioned timeframe.

Implementation and practice always bring some bad news. While we attempt to anticipate the greatest challenges that will be encountered, we do not attempt to accurately predict the performance of an actual deployed system. Thus, the results here always fall on the idealistic side. One of the goals of any subsequent technology development will be to come as close to these ideals as possible.


1.2. Related work

Most of the effort and publication devoted to low-mass probes has emphasized the challenges of propulsion by directed energy and light sail (Hoang et al. 2017; Heller & Hippke 2017; Heller et al. 2017; Hoang & Loeb 2017; Heller 2017) and (Hoang 2017; Gros 2017; Forgan et al. 2017; Atwater et al. 2018).

In some ways this challenge mirrors the extensively studied requirements for interplanetary spacecraft communication within our solar system (Hemmati 2009). The substantial differences include the severe limitations on the probe's available electrical power, processing, and transmit optics, as well as the much greater propagation loss, different sources of background radiation, ground-based reception with many issues related to atmospheric turbulence, scattering and weather, and the expectation of a swarm of probe downlinks operating concurrently.

Space communication applications have stimulated extensive research and engineering into long-distance free-space optical communication. Results most relevant to the present challenge come from JPL's interplanetary network (IPN) (Dolinar et al. 2012b). This includes an encouraging laboratory demonstration of high photon efficiency in a communication link with insignificant background radiation (Farr et al. 2013).

Interstellar communication differs from terrestrial communication in its emphasis on energy efficiency rather than spectral efficiency. At radio wavelengths, energy-efficient interstellar communication has been studied in the context of METI/SETI (Messerschmitt 2013, 2015). What is different about space probes is the feasibility of transmitter-receiver coordination in the interest of achieving high photon efficiencies to help overcome power/size/mass limitations.

The germane sources of optical-wavelength background radiation for interstellar communication are a current area of research (Lubin 2020). These background calculations have been independently quantified in (Hippke 2019), which also addresses the ultimate quantum limits on communication. The present paper addresses what is achievable in practice with existing or foreseeable technologies.

Here we explore a probe-swarm or single-probe receiver that achieves the minimum transmit power-area product in the interest of minimizing probe mass by operating near the physical limits of the received-signal noise floor. We also explore the implications of varying collector area over a wide range. A recent paper (Parkin 2019) addresses the same application but considers only the single-probe case (requiring complete receiver duplication for each probe in a swarm) and focuses on minimizing receive collector area rather than probe mass (see §10.5.1).

2. SCIENTIFIC MISSION

The low-mass probe mission comprises the science objectives and how they are achieved by a swarm of low-mass probes, each supporting an individual downlink to return scientific data by that probe to earth.

Table 1. Scientific mission parameters

        Description                                              Value
  R0    Nominal data rate for scientific data immediately
        following encounter during non-outage periods (b/s)      1.
  Ra    R0 reduced to account for outages (b/s)                  0.432
  V     Volume of total data (Mb)                                28.9
  L     Volume of data segment (Mb)                              1.0
  Pc    Probability of correct segment recovery                  0.99
  TL    Latency of total data return (yr)                        28.
  Jp    Probes transmitting concurrently                         26

2.1. Numerical parameters

The set of parameters directly related to the science mission are listed in Tbl.1. Assumed numerical parameters in this and the subsequent parameter and performance metric tables are merely reference points and do not carry the weight of design objectives. This would be premature, as there are many design and technology issues and interactions among various system components that will weigh heavily on the final design. Consonant with our goals (see §1.1) these choices serve to gauge the realism (or lack thereof) of the system design using an idealized model that ignores many practical challenges that will undoubtedly arise. More importantly, scaling relationships that follow from varying these parameters are studied (both theoretically and numerically) to appreciate the implications of changing assumptions (see §10).

2.2. Scientific objective

Each individual low-mass probe carries one or more scientific instruments to gather relevant scientific information in the vicinity of the target star. Given the severe mass-power objective, each probe may carry only a single instrument.

Individual probes may carry different types of scientific instrumentation, subject to severe mass limitations. The primary relevance of the scientific objective to the downlink design is the total data volume V to be downloaded1 and the reliability with which that data is recovered. These two parameters are certainly related to the type of instrumentation carried by a probe. Early missions are likely to emphasize imaging, so this is the application addressed in our examples. An additional parameter of importance to scientists is the data latency TL, which is the time elapsed from probe launch to the return of data volume V in its entirety.

1 We measure V in bits (b) rather than bytes (B). For example, 10^6 bits (1 Mb) equals 125 kilobytes (125 kB).


For purposes of reliability, scientific data can be divided into segments, each with volume L. It is the reliability of recovery of each segment at the receiver that is of greatest interest. For example, for an imaging mission a segment would be one image (a 2-D representation in terms of a matrix of pixels), and we assume L = 1 Mb.2 A segment is either recovered correctly in its entirety or corrupted in some fashion due to one or more bit errors.3 Pc is the probability that each individual segment is recovered correctly in its entirety.
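For a rough sense of scale, these reference values imply about 29 one-megabit images per probe, of which a fraction 1 − Pc would be expected to be corrupted. A back-of-the-envelope check (illustrative only, not a design statement):

    V_Mb, L_Mb, Pc = 28.9, 1.0, 0.99            # Tbl. 1 reference values
    n_segments = V_Mb / L_Mb                    # ~29 one-megabit images per probe
    expected_corrupted = n_segments * (1 - Pc)  # ~0.3 corrupted images expected
    print(round(n_segments), round(expected_corrupted, 2))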

2.3. Scientific data rate

The volume V and latency TL in Tbl.1 are indirectly related to the scientific data rate R. R actually decreases as the square of distance (see §10.3), so R0 is the initial (highest) value of R immediately following encounter. In addition, outages due to atmospheric effects reduce the effective data rate to Ra < R0 (see §11.4), and Ra is used in calculating V.
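The reference values in Tbl.1 and Tbl.3 are mutually consistent if weather and daylight outages are treated as independent availability factors; under that assumption (ours, not stated explicitly in this section) the implied active transmission time is about two years:

    R0, PW, PD = 1.0, 0.1, 0.52               # b/s (Tbl. 1) and outage probabilities (Tbl. 3)
    Ra = R0 * (1 - PW) * (1 - PD)             # 0.432 b/s, matching Tbl. 1
    V_bits = 28.9e6                           # total data volume (Tbl. 1)
    downlink_years = V_bits / Ra / 3.156e7    # ~2.1 yr of active transmission
    print(round(Ra, 3), round(downlink_years, 1))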

2.4. A swarm of probes

The project budget will be heavily concentrated in the terrestrial launch and communication infrastructure. The incremental cost of each probe launch is insignificant in comparison (Parkin 2018), suggesting the repeated launching of probes. Such a swarm of probes can increase the scientific return by following different trajectories near the target star, providing redundancy to account for navigational errors or other sources of failure on individual probes, and enabling a diversity of scientific instruments while accommodating a tight power/mass budget.

Because the cost of the receiver system is likely to be large, our primary analysis focuses on a single receiver that is shared among all the concurrent downlinks. Thus the relevant parameter is not the total number of probes launched, but rather the number of probes Jp for which downlink data transmission is concurrent from the receiver perspective. The specific values of Jp and TL illustrated are based on the minimization of TL from each probe for a given V and a specific launch schedule (see §7).

Concurrent data reception causes serious complications in the design and operation of the terrestrial receiver infrastructure and is a major concern addressed in the following (see §4.1). To minimize this challenge, we don't expect the downlinks to operate during the transit period. Dedication of a receiver to a single downlink (or multiple receivers, one for each downlink as in (Parkin 2019)) is a special case that falls within the scope of our analysis.

2 This will accommodate a 1000×1000 set of pixels at one bit per pixel following compression.

3 After compression, even a single bit in error often propagates across the image and thus has serious consequences.

Table 2. Probe transmitter parameters

         Description                                         Value
  ζ      Probe mass ratio                                    1.
  u0     Probe speed (c)                                     0.2
  D0,1   Start, end of downlink propagation distance (ly)    4.24, 4.66
  ATe    Effective transmit aperture area (cm2)              100
  Fx     Optical source extinction ratio                     10^−7

2.5. Timeline

Some significant technology advances are needed to approach the performance metrics envisioned in §10. It is fortunate that there is considerable time available to benefit from the ongoing evolution of technology. There is also the opportunity to push certain technologies toward faster advances targeted on the needs of this application and interstellar exploration more generally. Transmitter technologies must be finalized before the first launch (perhaps 1-3 decades hence). While an operational receiver deployed not long thereafter could have benefits, delaying the download of scientific data until 25+ years following the first launch is an option. Thus for the receiver we have available at least 4-5 decades of technology advancement and development, although there should be high confidence in the technical feasibility and budget availability prior to the first probe launch.

3. PROBE TRANSMITTER

The communications transmitter is integrally related to other functional units in the probe, such as electric power generation and navigation and pointing control. We assume transmitter parameters as listed in Tbl.2.

3.1. Launch and propulsion

The data latency TL is the sum of the transit time to the target star, the duration of the downlink transmission, and the signal propagation time from the farthest reaches of transmission (see §C). Minimizing TL subject to a constraint on V is a principled approach to determining the best choice of probe mass (see §7). The smallest-mass option (ζ=1) is assumed, with increases in transmit power and transmit aperture area possible when ζ is increased (see §7).

The goal of acquiring scientific data from instrumentation in proximity to even nearby stars within the 21st century requires a probe speed u0 that is a significant fraction of the speed of light c (Manchester & Loeb 2017; Parkin 2018). For example, the distance to the nearest star (Proxima Centauri) is 4.24 ly, and the probe transit time at u0=0.2c is 21.2 yr. If a probe transmits data beginning immediately after encounter with that star and its exoplanets, the data would begin to reach earth 25.44 years after launch.
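This latency budget can be reproduced directly from the reference values in Tbl.1 and Tbl.2; the sketch below is an illustrative consistency check that ignores the (short) acceleration phase and the gradual data-rate roll-off with distance:

    D0, D1, u0 = 4.24, 4.66, 0.2        # ly, ly, probe speed in units of c (Tbl. 2)
    transit_yr = D0 / u0                # 21.2 yr to reach the target
    first_data_yr = transit_yr + D0     # 25.44 yr until the first photons arrive at earth
    downlink_yr = (D1 - D0) / u0        # ~2.1 yr of transmission as D grows to 4.66 ly
    TL_yr = transit_yr + downlink_yr + D1   # ~28 yr, matching TL in Tbl. 1
    print(round(transit_yr, 2), round(first_data_yr, 2), round(TL_yr, 1))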


Other transmit parameters include the propagation distance D0,1, which is determined by the probe speed u0, the data volume V, and the initial scientific data rate R0. The transmit aperture effective area ATe is a measure of the transmission power-to-flux efficiency (see §B), and equals the geometric area in the diffraction limit. The transmit aperture is assumed to be standalone and relatively small in deference to the low-mass requirement. Other proposals assume the much larger probe sail area is exploited as a transmit aperture (Parkin 2019). Finally the extinction ratio Fx is a measure of the spurious power from the transmit laser in its "off" state relative to its "on" state (see §10.7).

3.2. Optical modulation

Scientific payload data is embedded in the transmitted optical signal power vs. time by the modulation. We propose burst pulse-position modulation (BPPM), which can achieve the required efficiency while remaining compatible with the imposed constraints (see §6.2).

In BPPM a semiconductor laser in the transmitter generates short pulses of light intensity with a duration on the order of 0.1 to 1 µs and a repetition rate of about 1-2 Hz. Scientific data is supplemented by error-correction redundancy and then imposed on the pulses by adjusting their timing (see §6.2 and §14). The average power is PTA, on the order of 1−100 mW, with a peak transmitted power PTP that is about 10^6 larger.4

The intensity-modulated light is emitted in the direction of earth by a transmit aperture with effective area ATe (see §B). Accurate attitude control based on photon thrusters is necessary to ensure that the earth-based receive collector is illuminated with the maximum power (see §12.6).

The probe incorporates processing to compress scientific data and perform the coding side of error-correction coding (ECC, see §14.3). In addition, it performs data interleaving and adds additional redundancy to counter outages resulting from daylight and weather events (and possibly other sources) at the earth-side receiver (see §14.4).

The probe-to-earth distance increases by up to 10% during the operation of the downlink, which may last from 2−9 yr (see §7). Accommodation of the increasing propagation distance requires the data rate R to be reduced by slowing the rate of transmitted pulses and increasing the peak transmitted power (by a maximum of about 20%) to maintain fixed average power.
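The ~20% figure follows from the inverse-square dependence of the received flux on distance: the pulse rate is slowed by (D0/D1)^2 and, if the average power is held fixed, the peak power must rise by the reciprocal factor. A minimal sketch with the Tbl.2 distances (our reading of the statement above, not a formula quoted from the paper):

    D0, D1 = 4.24, 4.66             # start/end downlink distance, ly (Tbl. 2)
    rate_factor = (D0 / D1) ** 2    # ~0.83: relative data rate at the end of the downlink
    peak_factor = (D1 / D0) ** 2    # ~1.21: roughly the 20% peak-power increase cited
    print(round(rate_factor, 2), round(peak_factor, 2))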

4. TERRESTRIAL RECEIVER

A terrestrial receiver is assumed, with the parameters chosen for receiver operation as listed in Tbl.3 for two cases: a swarm of probes and a single probe. The choice of a visible optical wavelength λR0 is based on aperture area considerations (see §8), and is chosen close to the shortest wavelength for which the atmosphere is transparent because this minimizes radiation from the target star (see §10.9.3).

4 This description is oversimplified, as achieving a PTP this large will require a more complicated optical arrangement (see §13.1).

Table 3. Terrestrial receiver parameters

         Description                                       Swarm     Single
  λR0    Received optical wavelength (nm)                        400
  ΩA     Coverage solid angle (arcsec2)                    10.       0.01
  ASe    Effective aperture area (cm2)                     6.8       6807.
  PW     Probability of weather outage                          0.1
  PD     Probability of daylight outage                         0.52
  η      Optical detector efficiency                            1.0
  BPP    Non-outage photon efficiency (b/ph)                    10.9
  SBR    Average signal to average background photons
         during non-outage PPM frames                           4
  Fc     Coronagraph rejection                                  0.01

(Entries with a single value apply to both the swarm and single-probe cases.)

4.1. Trajectories and coverage

The role of a receive optical collector is to convert incident optical power to photon detection events. It can be thought of as a single-pixel optical telescope which does not attempt to image the swarm of probes (since their trajectories are similar) but rather captures a superposition of their signals as they reach the earth.

The coverage of a collector is defined as the solid angle ΩA over which transmitted probe signals are recovered with nearly equal sensitivity.5 To accommodate a swarm of probes with a single receiver, ΩA has to be considerably larger than it would be for a single probe. The collector is composed of a large number of individual apertures whose photon detections are combined incoherently. The coverage of the collector equals the coverage of the apertures and is related to the effective area ASe of the collector's constituent apertures (see §5). An increase in ΩA results in an inevitable decrease in receiver sensitivity, which is quantified by a decrease in ASe (see §5).

The trajectories of different probes in a swarm are illustrated in a simplified schematic 2-D form in Fig.1. There are five phases to each individual probe mission: launch of the probe, transit to the neighborhood of the target star, encounter with that star (observations and scientific data collection), downlink operation (transmission of that scientific data to earth), and finally permanent probe silence.

5 Coverage is similar to the field-of-view of a single-pixel optical telescope, or the resolution of a single pixel in a multiple-pixel imaging telescope. In contrast to optical telescopes, the coverage that results from the beamforming of an optical phased array can be non-circular in shape.


Figure 1. A 2-D schematic representation of the 4 most extreme probe trajectories as viewed from earth, showing the target star, its proper motion, the transit and downlink phases, the probe encounter window, the launch/reception window, and the receiver coverage window. The launch/reception window captures the seasonal variation in earth's position, and the probe encounter window captures the proper motion of the target star. As shown, all encounters are assumed to fall on the same side of the target star, which moves away from the encounter positions. Downlink operation follows encounter. Receiver coverage is assumed to cover all concurrently-transmitting probes, and a coronagraph function takes advantage of spatial separation to reject a portion of the target star's radiation.

Shown specifically are four probe trajectories at the most extreme angles, taking into account orbital differences between launch and reception coordinates as well as the proper motion of the target star (both of which are exaggerated in scale). Of greatest concern for receiver design are the relative variations in the positions of the different probes and the target star, which determine the required spatial coverage of the receiver and the coronagraph rejection (see §5.2). There are two primary influences:

• A single probe's apparent position varies due to the parallax effect of the earth's orbital motion.

• The target star's proper motion (movement in relation to the galactic background) implies that different probe encounters are spatially separated (Parkin 2019). These probe encounter positions trail behind the moving star, resembling the tail of a comet.

4.2. Source of electric power

A probe launched with directed energy and cruising at constant speed following launch requires no propulsion. Electrical power is nevertheless necessary for probe attitude control, scientific instruments, and downlink communications (Landis 2019). Examples of power sources include a radioisotope thermoelectric generator (RTG) or forward-edge ISM proton-impact conversion during the cruise phase (before and after encounter), with the possible addition of photovoltaic power from the target star during the encounter. At relativistic speeds the latter will be short-lived (on the order of hours), but may be valuable for scientific instruments, as well as the processing for data compression, outage mitigation (see §14.4), and error-control coding (see §14.3).

The available electrical power limits the available transmit optical power. Here we make no prior assumption about transmit power, but rather characterize the minimum transmit power necessary subject to the other constraints, and emphasize the tradeoff between transmit power and the total area of the transmit aperture and receive collector.

4.3. Heterodyne vs direct detection

There are two common ways to detect the optical signal from a probe in the receiver. Heterodyne mixes that optical signal with a local oscillator (LO), and optical square-law detection results in components at the sum and difference frequencies as well as amplification. The difference-frequency component can fall at a microwave wavelength. Here we adopt (for reasons explained in §4.5) the alternative of direct detection, which eliminates the LO and relies on single-photon counting detectors. A significant advantage is the reliance of direct detection on intensity (and not phase) modulation at the source, obviating any need for coherence across the apertures comprising the collector (see §5.1).

Heterodyne would offer some compelling advantages. In addition to signal amplification as a part of optical detection, channel separation and bandpass filtering can be performed using available microwave technologies. It also opens up a wider class of modulation schemes based on the phase as well as the magnitude of the incident wavefront.

At optical wavelengths (where the energy per photon is significantly higher) individual photons can be directly detected, although due to quantum effects the number of photons detected per unit time is only stochastically related to the receive power. The result is signal self-noise (sometimes called shot noise or quantum noise).

4.4. Background radiation

Photon counts originating from sources other than shot noise are lumped into the category of background radiation. For a given transmit power and transmit/receive apertures, the initial data rate R0 that can be achieved consonant with a sufficiently high signal-to-background ratio (SBR) is limited by the sources of background radiation. No matter how large SBR, signal shot noise remains as a limitation on the reliability with which the scientific data can be recovered (see §14).


Four distinct types of background radiation can be identified:6

• Imperfect laser extinction at the transmitter during intervals where zero power is sought will result in unwanted photon counts at the receiver (see §13.1).

• Noise is radiation that cannot be separated from the data signal because of its time, wavelength, and spatial overlap. In the current application such radiation is broadband, and must be reduced by limiting the optical bandwidth. Significant sources of noise are the cosmic background radiation, the deep star field, zodiacal radiation, and scattered sunlight and moonlight from the earth's atmosphere.

• Interference is unwanted radiation which is distinguishable from the data signal in one or more physical parameters (time, wavelength, or spatial) and can therefore be partially or wholly rejected by technological means. The major interference considered here is radiation from the target star, which is spatially separated from the probes, but which may be challenging to reject because of its close proximity to the probe trajectories. The coronagraph function in the receiver can partially reject this interference due to its spatial separation. We assume the rejection factor is Fc relative to the signal originating from the probe.

• Dark counts mimic detection of photons originating from sky or cosmic sources, but originate within the receiver and would be present even if all incident light radiation were blocked from the receiver.

Incomplete extinguishment and dark counts generally cannot be limited by bandpass filtering (with the exception of black body radiation in the optics). Numerical calculations suggest that operation during periods of sunlight is not feasible, and with that assumption interference, scattered moonlight, and dark counts are the limiting factors placing a lower bound on the probe's transmit power (see §10.9).

4.5. High photon efficiency

One motivation for adopting direct detection is the higher photon efficiency that can be achieved, which results in a reduction in the total collector area. This is of practical significance because a collector will likely be kilometer-scale. If the rate at which data is reliably recovered is R0 b/s and the signal average photon rate is ΛRA ph/s, the photon efficiency BPP (in b/ph) is defined by

R0 = BPP · ΛRA . (1)

Due to the excess shot noise introduced by the high-power LO, heterodyne cannot achieve a BPP comparable to direct detection. The theoretical limit is BPP < η/ log 2 = 1.44η b/ph for detector quantum efficiency η consistent with reliable data recovery (Gordon 1962), while direct detection imposes no theoretical limit. The assumption in Tbl.3 is BPP=10.9 b/ph. This is consistent with a laboratory demonstration of BPP = 13 b/ph (Farr et al. 2013). The significant photon efficiency advantage of direct detection makes it worthwhile to overcome the technological challenges it introduces, which is our focus here.

6 These are terms commonly used in the communications literature. We adopt them here because (a) they are descriptive of these types of impairments and (b) design techniques are adopted from the communication literature.
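Rearranging (1) with the reference values gives the required average signal photon rate at the collector output; the number is illustrative, and is consistent with the photon-event rate of order 0.1 per second per probe quoted in §5.3:

    R0, BPP = 1.0, 10.9       # b/s (Tbl. 1) and b/ph (Tbl. 3)
    Lambda_RA = R0 / BPP      # ~0.092 signal photons per second at the collector output
    print(round(Lambda_RA, 3))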

4.6. Signal-to-background ratio (SBR)

SBR at the receive collector output is ΛRA/ΛRB, where ΛRA is the average rate of signal photons and ΛRB is the accumulated average photon rate for all sources of background radiation. Our goal is to achieve an SBR sufficiently large that background radiation has limited impact on data reliability. The impact of SBR is measured by the theoretical effect on BPP (see §E). With the goal of limiting background to a minor impact we choose SBR=4 in Tbl.3. This results in a 5% reduction in the theoretically obtainable BPP as compared to SBR=∞. Achieving a specific SBR places a lower limit ΛRA > ΛRB · SBR on the received signal average photon rate ΛRA, and hence a lower bound on the average transmit power PTA.
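Turning the SBR requirement around bounds the total background photon rate that can be tolerated. A sketch using the same reference values (with no attempt here to apportion the budget among noise, interference, and dark counts):

    Lambda_RA = 1.0 / 10.9            # signal photon rate from (1), ph/s
    SBR = 4.0                         # Tbl. 3
    Lambda_RB_max = Lambda_RA / SBR   # ~0.023 ph/s budget for all background sources combined
    print(round(Lambda_RB_max, 3))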

5. RECEIVER PHOTON COLLECTION

The major element of the receiver is the collector, which is similar to a single-pixel optical telescope. It consists of a large number of apertures, each with a dedicated optical detector that responds to individual photons from the incident radiation (background as well as signal) and introduces spurious dark counts as well (see §5.1). Thus no coordination or optical phasing of apertures is necessary. It would be advantageous to share optical detectors over multiple apertures if optical interference can be avoided (see §5.4). In contrast to an optical telescope, the optical path includes a highly selective optical bandpass filter that eliminates most out-of-band background radiation (see §13.2), and may serve to separate the signals from the individual probes as well (see §12.2.2). Photon detection events are logged at the individual apertures, accumulated over a network, and a post-processing stage interprets the totality of photon events to reliably recover the scientific data. This canonical receive collector is now described in greater detail.

5.1. Canonical receive collector

The receive collector in a direct-detection receiver includes the optics and photonics necessary to capture the power of an incoming electromagnetic wave and convert it to a digital representation of individual photon detection events. The receive collector will be quite large, and is thus a major capital and operational cost.

The receive collector must meet some challenging requirements:

• Accurate pointing toward the target star and swarm of probes has to be maintained in the face of earth orbital dynamics and atmospheric refraction.

• Both the total geometric size (physical dimensions) and total collection area are large, as required to achieve a receiver sensitivity that accommodates severe limitations on the probe due to its mass restriction, namely transmit power and transmit aperture size. A distinction between size and area is needed because the receive collector is not likely to be a single filled optical aperture.

• Achieve a coverage solid angle ΩA which is relatively large (but no larger than necessary) and has an atypical elongated shape in the direction of the target star's proper motion (see §4.1).

• It will likely incorporate adaptive optics to compensate for atmospheric seeing conditions (see §11).

• Separation of signals from different probes in the optical domain is required for some approaches to multiplexing signals from different probes, such as WDM (see §12.2.2).

• Whatever means possible should limit the background radiation. This includes optical bandpass filtering (see §13.2), rejection of interference with coronagraph functionality (see §12.7), and modulation design to limit dark counts (see §6.2).

5.2. Apertures

A large size (kilometer scale) of the receive optical collector as a whole is required to achieve low transmit power with adequate signal level to counter shot noise. If such a collector were single mode (a single optical detector) and diffraction-limited, not only would its coverage solid angle ΩA be too small to cover a swarm of probes, but adequate pointing accuracy to track a single probe would be practically unrealizable.

These challenges and the principles of antenna design suggest a canonical receive optical collector architecture illustrated in Fig.2. The entire collector must be decomposed into a large number NS of smaller apertures. Each aperture is required to achieve all the objectives of the receive array as a whole except a sufficient photon rate to support the desired data rate. In particular, the aperture accurately points at the coverage window as pictured in Fig.1, achieves the desired ΩA, and achieves a required SBR. Alternatives to this architecture can be explored, such as abandoning the single-mode diffraction-limited constraint on an aperture. For example, with a conventional optical telescope, photons in individual pixels containing probe trajectories can be counted.

Figure 2. The receive optical collector is composed as an NS-way replication (called scaling out) of the aperture. An individual aperture focuses on achieving the maximum and uniform sensitivity to the signals from all probes in a swarm across a coverage solid angle ΩA, and has effective area ASe. It is also responsible for achieving a sufficiently large SBR to achieve the target BPP, and may also be responsible for separating the signals from different probes. The NS apertures independently capture and log photon detection events, with their only coordination a common clock for time stamps. The resulting incoherent accumulation of the incident optical power across the entire collector achieves a sufficient average photon detection rate ΛRA to recover the scientific data at rate R, while influencing neither the coverage nor SBR.

Photon counts accumulate incoherently across apertures. Since photon counts for both the probe signal and the various sources of background accumulate by a factor of NS, SBR remains fixed independent of NS. The value of NS is chosen to achieve the desired signal photon rate, commensurate with the desired rate R0 in accordance with (1). Aperture replication by NS is called a scale out because the photon counts increase linearly with NS.7

Each aperture (which may or may not be further decomposed into a phased array of elements as described shortly) may have a dedicated optical detector responsible for generating a single stream of photon detection events in response to the incident electromagnetic radiation.

Aperture sensitivity is quantified by its equivalent area ASe (defined as the ratio of detected power to the intensity of an incident plane wave). This determines the required transmit power in the probe and the order NS of scale-out. Although a larger ASe would be welcome, it is in fact predetermined by the relationship between ASe and coverage ΩA.8 In §B this relationship is determined in (B6) to be ASe ∝ ΩA^−1. Larger coverage, as dictated by our requirement to service a swarm of probes with a common receiver, implies reduced sensitivity.

7 'Scale out' is an engineering term referring to an NS× replication with a performance that increases proportionally to NS. In this case 'performance' is measured by the overall sensitivity of the receiver while maintaining fixed coverage and SBR.
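The two columns of Tbl.3 illustrate this inverse relationship: the product ASe·ΩA is, to rounding, the same for the swarm and single-probe cases. A quick numerical check:

    swarm  = (6.8, 10.0)      # (ASe in cm^2, ΩA in arcsec^2) for the swarm case (Tbl. 3)
    single = (6807.0, 0.01)   # the single-probe case (Tbl. 3)
    print(swarm[0] * swarm[1], single[0] * single[1])   # ~68 in both cases, as ASe ∝ 1/ΩA requires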

Each aperture is responsible for limiting noise and interference via an optical bandpass filter (see §13.2) and coronagraph functionality (see §12.7). Accounting for a non-ideal filter, define the effective bandwidth We of a bandpass filter as the ratio of output power to input power spectral density (assumed to be constant). If the transfer function is H(f), then

We = ∫_0^∞ |H(f)|^2 df , (2)

in which case We = W for an ideal bandpass filter with bandwidth W. More generally P · We is the filter output power for a constant spectral density input P. The dark count rate for its optical detector should be as small as technology allows (see §13.3). The coronagraph partially rejects interference from the target star by exploiting its small angular separation from the probe trajectories.
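As a concrete illustration of (2), the integral can be evaluated numerically for an assumed filter shape; the Lorentzian passband below is purely illustrative (the paper does not specify a filter response), and an ideal rectangular filter of width W would instead give We = W exactly:

    import numpy as np

    f0, hw = 1000.0, 5.0                        # assumed center frequency and half-width, MHz
    f = np.linspace(0.0, 2.0 * f0, 2_000_001)   # frequency grid, MHz
    H2 = 1.0 / (1.0 + ((f - f0) / hw) ** 2)     # |H(f)|^2 for the Lorentzian shape
    We = np.sum(H2) * (f[1] - f[0])             # Riemann-sum approximation of Eq. (2)
    print(We, np.pi * hw)                       # ~15.7 MHz, close to the analytic value pi*hw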

Within the context of a specific aperture design, including particularly sensitivity, coverage, coronagraph rejection, and dark count rates, the probes are responsible for a sufficiently large power-area metric ξTA = PTA · ATe and pointing accuracy to achieve the desired SBR (see §10.1). There is thus a tradeoff between PTA and transmit aperture effective area ATe.

5.2.1. Aperture scale out

Since the receive optical collector as a whole has NS ≫ 1 apertures that are incoherently combined, it is not an antenna in the sense of §B and thus the concept of effective area ARe does not apply. Instead we use the total effective area metric NS ASe. This metric gives us a sense of the total geometric area of the entire collector.

While the numerical values of NS are large, the total collector area NS ASe is a more significant indicator of capital costs. For example it would be attractive to combine apertures into larger assemblages with joint fabrication. Optics similar to that used in smart phone technology can be fabricated in substantial assemblies, and it may also be possible to share a single optical detector over multiple apertures (if optical interference, which would modify the coverage, can be avoided, see §5.4).

8 Each aperture qualifies as an 'antenna' that is subject to the theory of §B assuming it is single mode (has a single optical detector) and diffraction-limited. This sensitivity-coverage relationship is based on the simplifying assumption (which can be approximated in practice) that the sensitivity is uniform over solid angle ΩA and zero elsewhere.

Figure 3. A schematic of a two-level receive optical collector decomposition. (a) The replicated apertures independently log photon detection events, and in a post-processing stage the aggregate events are processed to recover the scientific data originating from multiple probes. (b1) Each aperture is composed of a set of phased elements with a combiner including precision phase shifts feeding a single optical detector. (b2) A degenerate case is an aperture comprised of a single element. (c) Each element is an optical element (composed of lenses, reflectors or thin films).

5.3. Post processing

A more detailed picture of scale out as well as aperture decomposition into smaller elements is shown in Fig.3. Each aperture not only detects individual photons, but it logs those photon-detection events as a time-stamp and, if the aperture performs de-multiplexing, a probe identifier. There being no need for real-time processing of these events, they can be shared in a common post-processing stage in which timing recovery is performed followed by scientific data extraction from the photon events. A permanent stored log of photon events (probe-number and time-stamps) also constitutes the permanent scientific record of a swarm of probe missions.

The collection of apertures operates independently. There is no attempt to align the phases among individual apertures; indeed, the apertures each have their own optical detector which captures the power of the incident radiation to that aperture, so all phase information has been deliberately discarded. This arrangement is colloquially called 'photon buckets'. The relative placement of apertures is somewhat flexible, allowing for geographic distribution and relative placement for the convenience of construction or maintenance. Atmospheric turbulence is not an issue except at the aperture level (see §11.3). Placement of apertures divided among a set of one or more independent space platforms is also an intriguing possibility.

The apertures all connect to a communication network for distribution of target coordinates (which change at a slow rate), consolidation of photon events for processing, and status updates. The total number of events to be logged across all apertures is of order 0.1 per second per probe for the parameters of Tbl.2 and Tbl.3, so the storage and communication requirements are readily achieved.

The synchronization of time stamps across apertures must be maintained, and this requires a common time reference (Messerschmitt 2017). Synchrony with the multiple probe clocks must also be maintained, all this with an accuracy related to the time-slot Ts (see §12.3).

5.4. Shared optical detectors

One troublesome feature of Fig.3 is the NS-way duplication of optical detectors, which multiplies the dark count rate by the same factor and imposes a very strict requirement on individual detector dark counts. Sharing of a single optical detector over multiple apertures would reduce the overall dark count rate, as well as reduce the cost and complexity of the scale-out function. However, to avoid any impact on the coverage pattern of the overall receive array, any interference in the optical domain among aperture signals would have to be avoided (see §13.3).

5.5. Summary

The two-level hierarchy of apertures combined with scale out to a full collector is required to simultaneously achieve a desired (non-zero) coverage ΩA, a desired SBR, and a desired sensitivity (signal photon rate ΛRA).9 The consequence of covering multiple probes with a single receiver is inevitably a larger ΩA and hence a higher PTA ATe product for each probe. A desirable feature of the canonical receive aperture is that the difficult technological challenges are confined to the aperture realization, while the large number NS of apertures is primarily a budgetary and operational issue.

5.6. Phased elements

The canonical receive aperture of Fig.3 anticipates that the apertures are themselves composed of phased elements. Each element is envisioned as an (ideally) diffraction-limited monolithic optical element, which may be constructed from a lens or reflector. Following a combiner, these elements share an optical detector. Within the combiner the magnitude and phase of each element's optical signal as it arrives at the detector must be precisely controlled. The relatively small effective area ASe of the aperture makes this technically feasible, and indeed for the values in Tbl.3 it may not even be necessary. Depending on the aperture dimensions, an adaptive-optics component to compensate for atmospheric turbulence may be needed (see §11).

9 If, as in (Parkin 2019), there were a single probe and pointing accuracy were not a consideration, the coverage could be as small as needed to achieve sufficient sensitivity.

There are several requirements that suggest the use of a phased element: a coverage pattern with solid angle ΩA that may be non-circular in shape, a coronagraph function which realizes a null in the overall sensitivity pattern at the angle of the interfering target star, and adaptive optics.

6. TRANSMIT MODULATION AND RECEIVE OPTICAL PROCESSING

The optical modulation and associated receive processing are now described. A specific modulation coding technique called burst-mode pulse-position modulation (BPPM) is adopted. The parameters of this technique are listed in Tbl.4.

Table 4. Burst-mode pulse-position modulation parameters

           Description                          Value
  m        Bits per PPM frame                   12
  M = 2^m  Slots per PPM frame                  4096
  Ts       PPM slot duration (µs)               0.1
  We       Effective optical bandwidth (MHz)    10.
  TF       PPM frame duration (ms)              0.41
  TI       Inter-PPM-frame interval (s)         2.2
  δ        BPPM duty cycle                      1.9·10^−4
  PAR      Peak-to-average ratio                4096.

6.1. High photon efficiency and peak power

There are two impairments that limit the photon efficiency BPP that can be achieved. The first is background radiation, the impact of which is limited by placing a theoretical lower bound on SBR (see §E). The quantum efficiency η of the entire optical path reduces signal, noise, and interference equally, but does not affect the rate of dark counts. Thus its effect on SBR is only through the dark count component of background, and η < 1 renders dark counts relatively more important. Because the η achievable decades from now is uncertain, our nominal assumption is η=1 together with a sensitivity analysis.

Once background radiation is rendered insignificant, signal shot noise determines a theoretical upper bound on BPP. High BPP with reliable data recovery is achieved by judicious choice of the mapping from scientific data to light intensity vs time in the transmitter (see §6.2) and error-correction coding (see §14.3). Regardless of how these are structured, there is a theoretical limit on the shot-noise-limited BPP that can be achieved in conjunction with reliable recovery of the scientific data. For large SBR this limit is

BPP < log2 PAR , (3)

where PAR is the peak-to-average power ratio (see §E). Thus to achieve BPP=10.9 in Tbl.3 we must have, at minimum, PAR > 2^10.9 = 1911.
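A minimal numerical check of (3) against the reference parameters, taking the Tbl.4 PAR to be the within-frame peak-to-average ratio M (one occupied slot out of 4096):

    import math
    BPP_target = 10.9             # b/ph (Tbl. 3)
    PAR_min = 2 ** BPP_target     # ~1911, the minimum quoted in the text
    PAR_tbl4 = 4096.0             # Tbl. 4
    print(round(PAR_min), math.log2(PAR_tbl4))   # PAR = 4096 caps BPP just below 12 b/ph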


High PAR is beneficial to photon efficiency, but it does have implications for the peak transmit power PTP required in the transmitter (see §13.1). A high peak power also requires short-term energy storage and possibly voltage conversion in the probe, since electric power is likely to be generated continuously (see §4.2).

PAR and optical bandwidth We are related in a subtle but significant way. A larger PAR is inevitably associated with a larger We, since it requires relatively high-energy pulses with short duration and low duty cycle. A larger We is beneficial in reducing the required selectivity of the receive optical bandpass filters. A larger We would also appear to admit more background radiation to the optical detector, thereby decreasing SBR. This is not entirely the case, as now shown.

6.2. Burst pulse-position modulation (BPPM)

The modulation has a major influence on photon efficiency. For the purposes of our design exploration we have chosen a novel burst pulse-position modulation (BPPM). The structure of the intensity-modulated signal is defined in Fig.4. The signal is divided into PPM frames, each with frame duration TF. Frames are typically interspersed with blank intervals, during which no optical power is transmitted. The total time interval between the beginnings of successive frames is the frame interval time TI.

Within each frame, the signal is structured as exactly one pulse of light, which may occur in one of M = 2^m short time intervals called slots, each with slot duration Ts, so that TF = M Ts. This internal frame structure is called pulse-position modulation (PPM). PPM is commonly used in optical communications because it trades greater bandwidth (often an ample resource) for improved energy efficiency. However, it is typically used in the conventional PPM configuration in which blank intervals are omitted. Slot time Ts is substantially reduced in BPPM without changing M or TI and thus without any effect on data rate.

The disadvantage of BPPM over conventional PPM is an increase in the peak transmit power PTP (due to the reduction in Ts).10 However, this is offset by some significant advantages:

• Optical dark counts originating as blackbody radiation in the optical path and in the optical detector are a major issue due to the potentially large number NS of optical subsystems and optical detectors. The impact of dark counts in the receiver can be reduced to any degree desired by reducing the slot time Ts (see §10.8).

10 In the dark-count dominated regime BPPM doesn't actually increase PTP, because it increases PAR by reducing average power rather than increasing peak power (see Fig.21).

Figure 4. An illustration of two forms of pulse-position modulation (PPM). (a) In conventional PPM the time axis is filled with PPM frames, or TF = TI. (b) In burst-PPM (BPPM) the PPM frame duration TF is decoupled from the inter-frame interval TI by choosing a very small value for Ts, allowing for blank intervals between frames. (c) The internal structure of each PPM frame consists of M time slots, each with duration equal to the slot time Ts.

• As the propagation distance D from the probe increases, the data rate R has to decrease (starting at R_0 at the beginning of transmission following the completion of scientific data collection) to accommodate the decreasing signal power reaching the receiver. Similarly, probes with different masses or more advanced technology may have greater electrical power available, allowing for an increase in R. In PPM it is cumbersome to modify R, but with BPPM changes in rate are trivially accomplished by changing the blank interval time while keeping the frame duration T_F constant. This is a major theoretical and practical simplification of the design, because no adjustment in optical bandwidth, ECC, or other aspect of the design is required (see §10.3).

• It is difficult to achieve the desired selectivity in a receiver optical bandpass filter with current technology, and adjustment of bandwidth to a changing data rate is also a challenge. With BPPM the optical bandwidth can be increased to any degree desired, relaxing the selectivity requirement, and can be held fixed across time, distance, and probes with different masses. This accrues without penalty in background radiation since the increased bandwidth is compensated by the longer blank intervals (see §10.8).

The receiver, as part of the post-processing, must synchronize its timing with frames and slots, a functionality called timing recovery.^11 With knowledge of the frame and slot timing, there are four possibilities for how many photons are detected:

• One or more photons are detected within the slot populated at the transmitter, but none in the other slots nor the blanking interval.

• No photons are detected within the entire frame, which is declared a frame erasure.

• One or more photons are detected within two or more slots, which is declared a recognized frame error.

• One or more photons are detected within exactly one slot not populated at the transmitter. This is an unrecognized frame error.

Erasures are a manifestation of signal shot noise and frame errors are attributable to background radiation.

Knowledge of erasures and recognized frame errors is preserved for subsequent processing. We find that to achieve high photon efficiency BPP the rate of erasures must be 80-90%, but the inclusion of substantial error-correction redundancy can overcome these erasures and yield acceptably low error rates in the recovery of the scientific data (see §14).

In the conventional PPM shown in Fig.4a the blank intervals are eliminated, but in the BPPM shown in Fig.4b blank intervals between frames are allowed (conventional PPM is a special case). In this case the duty cycle δ = T_F/T_I is defined. We can think of BPPM as a higher-data-rate transmission scheme operating with a low duty cycle δ ≪ 1. It is important to note that m/T_I > R, where m/T_I is the "raw" data rate carried by BPPM frames and R is the rate of reliable recovery of scientific data. This disparity is attributable to the added redundant data associated with error-correction coding (see §14.3).
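To make this bookkeeping concrete, the short Python sketch below evaluates the BPPM frame relations that are collected later in Tbl.7, using m = 12, K_s^R = 0.2, and R_0 = 1 b/s as quoted in §10.9.1; the slot time T_s = 0.1 µs is an assumption made here purely for illustration.

```python
import math

# Sketch of the BPPM frame bookkeeping, using the relations collected in Tbl.7.
# m, Ks and R0 are the nominal values quoted in Sec. 10.9.1; Ts is assumed here.
m, Ks, R0, Ts = 12, 0.2, 1.0, 1e-7     # bits/frame, photons/pulse, b/s, s

M = 2 ** m                             # slots per PPM frame
BPP = (1 - math.exp(-Ks)) / Ks * m     # photon efficiency, bits per photon
TF = M * Ts                            # frame duration
TI = Ks * BPP / R0                     # frame interval, from TI * R0 = Ks * BPP
duty = TF / TI                         # duty cycle delta = TF / TI
PAR = TI / Ts                          # peak-to-average ratio, from PP*Ts = PA*TI

print(f"M={M}  BPP={BPP:.1f} b/ph  TF={TF*1e3:.2f} ms  "
      f"TI={TI:.2f} s  duty={duty:.1e}  PAR={PAR:.1e}")
```

With these values the sketch returns BPP ≈ 10.9 b/ph and T_I ≈ 2.2 s, matching the numbers quoted later, and it makes explicit how shrinking T_s raises PAR (and hence the peak power) without touching the data rate.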

7. PROBE MASS TRADEOFFS

A feature of a directed-energy launcher is that, for a fixed launcher infrastructure, a variety of probe versions with different masses can be launched.^12 The probe speed u_0 and initial data rate R_0 in Tbl.2 are used in the numerical examples here. These are estimates for a lowest-mass "wafer scale" version. The question is, on what basis do we choose a probe mass?

In terms of scientific outcomes the R_0 is of only indirect interest.

11 Timing recovery is a standard function of all digital communication systems, and will not be addressed further. This is one of the post-processing functions included in Fig.3.

12 The many uses of a launcher, its deployment, as well as the benefits and issues of various mass missions are discussed in (Lubin 2020).

Table 5. Data latency/volume for planetary missions^a

Target              Neptune        Pluto
Mission             Voyager 2      New Horizons
Data rate R         ∼1 kbps        ∼1 kbps
Data latency T_L    12 yr          10.5 yr
Data volume V       18 Gb          50 Gb

a 9000 images were returned from Neptune (Smith et al. 1989) at 2 Mb per image (Ludwig & Taylor 2016). The data rate was programmable and variable. The Pluto scientific data return was 50 Gb (Fountain et al. 2009).

In terms of the scientific mission, the performance metrics of direct interest are the total data volume V returned, the data latency T_L, and the reliability with which the data is recovered (see §2.2). We now show that there is an optimum mass m such that T_L is minimized for a given V, or V is maximized for a given T_L. Thus a principled way to choose the mass is to incorporate any constraints on T_L or V. For comparison, estimates of these metrics for two outer-planetary missions are listed in Tbl.5.

7.1. Mass scaling approximations

The mass m strongly influences the probe speed u_0 and the initial scientific data rate R_0, which in turn determine T_L and V. We define a set of scaling laws,

m(ζ) = ζ · m(1) ,   u_0(ζ) = ζ^{-1/4} · u_0(1) ,   R_0(ζ) = ζ^2 · R_0(1) .   (4)

The mass ratio ζ ≥ 1 is the factor by which mass is increased. Transmission starts at the target star distance D = D_0 and ends at distance D_1 > D_0. The data rate starts at R_0 and decreases with D in accordance with (11). The scaling of R_0 in (4) assumes that P_A^T, P_P^T, A_e^T ∝ ζ (the detailed scaling depends on the specifics of the probe design). It is shown later (see (6) and (1)) that in this case R_0 follows (4), with one factor of ζ due to increased P_A^T and another factor of ζ due to increased A_e^T. The scaling ratio for u_0 follows from the launch dynamics, assuming a fixed launch infrastructure (Lubin 2020).
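The latency-volume tradeoff developed in §7.2 and §C can be approximated with a much simpler model: assume the data rate decays as R(D) = R_0 (D_0/D)^2 (equation (11)), a constant post-launch speed u_0, earth-clock times, and no outages. The Python sketch below combines these assumptions with the scaling laws (4); it is an illustration, not the Appendix C derivation.

```python
SEC_PER_YR = 3.156e7
D0 = 4.24            # ly, Proxima Centauri
C = 1.0              # ly/yr

def scaled(zeta, u0_1=0.2, R0_1=1.0):
    """Scaling laws (4): probe speed (units of c) and initial data rate (b/s)."""
    return u0_1 * zeta**-0.25, R0_1 * zeta**2

def latency(zeta, V):
    """Latency components (yr) for data volume V bits, assuming R(D)=R0*(D0/D)^2
    and constant speed u0; returns None if V exceeds the achievable volume."""
    u0, R0 = scaled(zeta)
    Vmax = R0 * SEC_PER_YR * D0 / u0          # volume as D1 -> infinity
    if V >= Vmax:
        return None
    D1 = D0 / (1.0 - V / Vmax)                # distance when transmission ends
    transit = D0 / u0                         # cruise to the target star
    transmit = (D1 - D0) / u0                 # downlink operating time
    propagate = D1 / C                        # return trip of the last photons
    return transit, transmit, propagate

# Sweep the mass ratio zeta to find the latency-minimizing choice for V = 1 Gb.
best = None
for zeta in (1, 2, 5, 10, 20, 50, 100):
    parts = latency(zeta, 1e9)
    if parts and (best is None or sum(parts) < best[0]):
        best = (sum(parts), zeta, parts)
print(best)
```

Integrating R(D) over the transmission time gives a maximum achievable volume of R_0 D_0/u_0 per probe, which is why larger volumes force a larger (slower but higher-rate) probe in this simplified picture.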

7.2. Latency-volume tradeoff

The relationship between V and T_L parameterized by ζ is determined in §C and plotted in Fig.5 for five values of ζ. The volume scales as V ∝ R_0(1), so a change in R_0(1) would shift all curves horizontally. There are three components of T_L: transit time, transmission time, and propagation time. The minimum T_L increases with ζ due to the greater transit time. Eventually T_L increases rapidly with V due to the decreasing R with D.

For each V there is an optimum choice of ζ that minimizes T_L. The resulting minimum T_L vs V is plotted as the dashed curve in Fig.5 and is also plotted in Fig.6. The latter plot shows the decomposition of T_L into its three components.

Figure 5. Plot of the data latency T_L in yr vs the logarithm of the total data volume V in bits for different values of the mass ratio ζ (the curves are labeled with ζ). For example log_10 V = 9 corresponds to 10^9 bits or 1 Gb. The dashed line is, for each value of V, the minimum T_L achievable by the optimum choice of ζ. It is assumed that for ζ = 1 (the lowest-mass probe), the speed is u_0(1) = 0.2c, and the initial transmit data rate is R_0(1) = 1 b/s. The Voyager (V) and New Horizons (NH) points are obtained from Tbl.5.

Figure 6. A plot of the data latency T_L vs data volume V, where for each value of V the latency-minimizing mass ratio ζ is chosen. T_L ranges from about 28 to 100 years, where larger V is associated with a larger ζ and a larger T_L. T_L is decomposed into its three components, from bottom to top: transit time to the star, the transmission time, and the return signal propagation time, all measured by an earth clock. Latency is dominated by the transit time.

The latency-minimizing value of ζ, the resulting transmission time, and the distance D_1 at the end of transmission are plotted in Fig.7.

If V is increased, both the optimum ζ and T_L increase. For the optimum ζ the transmission time is always much shorter than the transit time. Obtaining a larger V requires increased transmission time, but the distance traveled during transmission remains relatively constant since the probe's speed is lower.

Figure 7. Three quantities of particular interest are plotted as a function of data volume V. First is the log of the optimum mass ratio ζ that minimizes the data latency T_L. A plot of T_L vs V for this optimum ζ is shown in Fig.6. Second is the transmission time component of T_L, which varies in the range of two to nine years. Third is the distance increase (D_1/D_0 − 1) expressed as a percentage. The space probe travels a fairly consistent 9% to 10% farther than the target star during downlink transmission.

For the range of parameters shown, the minimum T_L falls in the 30 to 80 year range for a data volume in the 100 Mb to Tb range. Volumes V comparable to the planetary missions are feasible with the assumed R_0(1), but only with ζ ∼ 20 and T_L ∼ 50 yr. The V and T_L parameters of Tbl.2 assume that downlink operation is 10% of the probe transit time.

8. CHOICE OF WAVELENGTH

The two spectrum ranges offering atmospheric transparency for a terrestrial receiver are millimeter-wave radio and visible optical. This choice affects nearly every aspect of the design. We study the optical case in this paper, but the radio possibility should also be considered.

8.1. Propagation loss

As compared to the outer planets, the propagation losses from the nearest stars approach a factor of >10^8. This obstacle is overcome by an adjustment to other parameters in Tbls.1, 2, 3, and 4, including prominently wavelength, data rate, and receive aperture size.

The appropriate signal level at the receiver is measured differently at radio and optical wavelengths. For radio wavelengths it is the average power P_A^R = N_S P_A^S, while for optical (assuming direct detection) it is the average rate of detected photons Λ_A^R = N_S Λ_A^S. These metrics are affected by the propagation loss and atmospheric effects, with the latter more significant at optical wavelengths (see §11).

Table 6. For equivalent SNR, scaling of the NH radio downlink to a low-mass probe, illustrating the impractically large receive aperture size that may be needed.

Parameter                                    NH        Low mass
Distance D_0                                 4.5 lh    4.24 ly
Wavelength λ_0                               3.6 cm    10 mm
Transmit power P_A^T                         24 W      0.1 W
Transmit aperture diameter √(4A_e^T/π)       2.1 m     10 cm
Receive aperture diameter √(4A_e^R/π)        70 m      1652 km
Data rate R                                  1 kb/s    1 b/s

Neglecting atmospheric absorption and turbulence, these metrics are determined by the average transmitted power P_A^T through a link budget,

P_A^S = η · (A_e^T A_e^S)/(λ_0^2 D_0^2) · P_A^T   (5)

Λ_A^S = (λ_0/hc) · P_A^S = η · (A_e^T A_e^S)/(h c λ_0 D_0^2) · P_A^T   (6)

(see §B.4.2 and (B8)). The factor λ_0/hc in (6) converts from power to photons per second.

The efficiency factor 0 < η ≤ 1 in (6) accounts for non-propagation losses, such as aperture pointing error (see §12.6), as well as attenuation and the quantum efficiency in the receiver. A Doppler red shift in wavelength from transmitter to receiver has also been neglected in (6) by setting λ_0^T = λ_0^R = λ_0 (for u_0 = 0.2c this shift is about 20%). Relativistic aberration has also been neglected.
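For reference, the link budget (5)-(6) is straightforward to evaluate numerically. The Python sketch below does so for purely illustrative parameters (a 10 cm transmit aperture, a 1 m² receive aperture, 400 nm, 30 mW average power, η = 1); these are not the nominal design values of Tbls.1-4.

```python
import math

H = 6.626e-34      # Planck constant (J s)
C = 2.998e8        # speed of light (m/s)
LY = 9.461e15      # metres per light-year

def received_photon_rate(P_T_avg, A_T, A_R, wavelength, D, eta=1.0):
    """Link budget of Eqs. (5)-(6): average detected photon rate at one aperture."""
    P_R = eta * (A_T * A_R) / (wavelength**2 * D**2) * P_T_avg   # Eq. (5), watts
    return P_R * wavelength / (H * C)                            # photons per second

# Illustrative numbers only (not the paper's nominal design).
rate = received_photon_rate(P_T_avg=0.03, A_T=math.pi * 0.05**2,
                            A_R=1.0, wavelength=400e-9, D=4.24 * LY)
print(f"{rate:.3e} photons/s at one aperture")
```

The extremely small single-aperture photon rate is the reason the design relies on photon starvation plus a very large number of apertures, as developed in §10.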

8.2. Radio

Although radio is used for deep space missions today, the primary objection to radio at interstellar distances is the large aperture sizes (or alternatively large transmit power). As an order-of-magnitude estimate, factors affecting the link budget for the New Horizons (NH) mission are scaled to accommodate interstellar distances in Tbl.6. Although it would be desirable if transmit power and aperture size could be increased to compensate for the greater distance, they will likely be substantially decreased for low-mass probes. In principle these adverse elements can be compensated by a lower data rate, a substantially larger receive aperture area, and/or a shorter wavelength. The shortest radio wavelength that consistently penetrates the atmosphere is chosen.

An appropriate link budget for radio is (5). Assuming that there is one single-mode diffraction-limited aperture which receives signal power P_A^S, the parameters contributing to this received power P_A^S are listed in Tbl.6, with reasonable assumptions for the transmit power and aperture size. When substituted into (5) the received power is found to be a factor of 10^3 smaller for the low-mass probe. This results in an equivalent signal-to-noise ratio SNR at the receiver for two reasons. First, the major noise impairments are isotropic cosmic background radiation and receiver thermal noise, which have the same white power spectral density independent of receive aperture area (see §B.4.3), assuming an equivalent receiver noise factor. Second, for an equivalent modulation and coding scheme the receiver bandwidth can be reduced by a factor of 10^3 in the low-mass case due to the lower data rate, thereby reducing the noise component of SNR by the same factor.

The receive aperture is impractically large, especially considering that it is assumed to be diffraction limited. Such an aperture would also require an impractical pointing accuracy, although this requirement could in principle be relaxed by implementing a phased array with equivalent total collection area.^13 A larger transmit aperture (with associated greater pointing accuracy), higher transmit power, or lower data rate would allow the receive aperture area to be reduced.

8.3. Optical

Relative to radio, choosing an optical wavelength yields a factor of 10^4 to 10^5 in the link budget of (6) (depending on the two wavelengths being compared). This is a considerable advantage, but optical wavelengths also pose substantial challenges. One is atmospheric effects (see §11), with the resulting scintillation (fading in communication terms) and weather-based outages.

One challenge in optical communication with direct detection (see §4.3) is a 'mode shift' in photon efficiency BPP vs SBR at low values of SBR. This is observed in the theoretical limit on feasible photon efficiency BPP, which is essentially independent of SBR (for fixed peak-to-average ratio PAR) for large values of SBR, but deteriorates rapidly for small SBR (see Fig.29). Intuitively this is due to the effect of modulation and detection based on signal power (photon count), and the effects of square-law detection on background radiation. The case is made in (Toyoshima et al. 2007; Moision & Farr 2014) that a probe receding in distance from earth will suffer a deterioration in SBR due in part to a reduction in data rate and signal power in the face of a fixed (or even growing) background radiation, and as a result the link budget switches dependency from D^{-2} to D^{-4}. However, these papers make specific technological assumptions as to modulation and parameter scaling relations with D (such as holding optical bandwidth fixed even as the signal bandwidth decreases).^14

13 The incoherent accumulation of power as in Fig.3 is not acceptable since the NH modulation and coding scheme strongly exploits both signal amplitude and phase. Accurate phasing across constituent apertures is necessary.

A desirable D^{-2} behavior can be achieved at deteriorating SBR by replacing PPM by a more nuanced modulation and coding (Jarzyna et al. 2019), although this is unproven in practice. Here we use the simpler approach of constraining SBR to be relatively large. Fortunately, for a space mission and its presumed maximum propagation distance D = D_1 this challenge is circumvented by ensuring that SBR is relatively large at distance D_1. We constrain SBR to a relatively large value at this distance (in our case SBR = 4), with a correspondingly large average transmit power P_A^T. This is assisted by BPPM (see §6.2), which allows us to hold the optical bandwidth W_e fixed (subject to any technological limitations on bandpass filtering) without penalty in SBR as D increases, although a price is paid in a correspondingly growing peak transmit power P_P^T.

Our numerical results in §10.9 subject to these assumptions do strongly suggest that communication with low-mass probes at optical wavelengths is not feasible given the current state of technology. We identify here some key technologies that must advance to empower optical communication in this application. These are: achieving relatively high peak powers at the optical detector through pulse compression or other means (see §13.1); more selective optical bandpass filtering (see §13.2); attenuation of interference by a coronagraph function (see §12.7); receiver optics (including superconducting optical detectors) with very low dark counts (see §13.3); and highly accurate probe attitude control and aperture pointing.

Since the relative advances in these technologies cannot be anticipated with any degree of certainty, BPPM is chosen as a modulation technique that offers greater freedom in trading off advances in some of these technologies (see §6.2). Achieving a relatively large and constrained SBR at the greatest distance D_1 is aided by the minimal increase in distance (about 10%) during transmission (see Fig.7). The requirements for these technologies and related tradeoffs are studied by numerical modeling in §10.9.

8.4. Space-based receiver

Although not considered further here, a space-based receiver would be advantageous for the elimination of atmospheric effects and also the possibility of going to ultraviolet (UV) wavelengths (10 to 400 nm). UV would further moderate aperture sizes and substantially reduce the star interference (see §10.9 and (Lubin 2020)).

14 There is also one error in (Moision & Farr 2014), with the assumption that isotropic sources of noise increase as the effective area and directivity of the receive aperture is increased to compensate for greater distances. Actually, noise due to unresolved sources is independent of directivity because the greater gain of a larger aperture is exactly offset by its smaller coverage (see §B.4.3).

Figure 8. The variation in probe coordinates as viewed by a common terrestrial receive aperture. The variation in the longitude and latitude (in arcsec in an ecliptic coordinate system) for 26 probes launched at 30-day intervals is shown. The origin corresponds to the angle of Proxima Centauri when viewed from the initial launch site. Each elliptical trajectory is due to the parallax effect of the receiver orbit about the sun over a period of 2.21 yr, which corresponds to a probe's travel 10% farther than the star. The general drift of the probe positions is due to the star's proper motion, presenting a moving target for the various probes.

While UV laser communications is not currently technologically advanced (due primarily to its inability to penetrate the atmosphere), it is an option to consider for the future.

9. TRAJECTORIES AND COVERAGE

The coverage Ω_A of each aperture determines the effective area A_e^S, which in turn influences the transmit power-area product P_A^T A_e^T necessary to achieve the desired SBR at the receiver. The parameters affecting coverage are (beyond our control) parallax and the star's proper motion and (under our control) the swarm launch schedule and the sophistication of the aperture beamforming.

9.1. Concurrent swarm probe communication

The probe trajectories as viewed from Earth are shown in Fig.8 for a specific launch schedule.^15 The chosen parameters, including duration of transmission and interval between launches, result in 26 probes transmitting concurrently (Tbl.1), each one operating a downlink for 2.21 yr (Tbl.2; see §7.2). For the position of Proxima Centauri relative to the earth's orbit, the overall longitudinal parallax angular variation (∼2 as) is about twice as great as the latitudinal (∼1 as).

The star's proper motion determines the relative position of probes launched at different times. For Proxima Centauri this motion is 368 mas/yr and −3775 mas/yr in declination and right ascension respectively.^16

15 The variation in angle due to parallax does not depend on the chosen coordinate system. To readily capture the earth's orbit, a heliocentric ecliptic XYZ coordinate system is used. The earth orbit and a fixed star position were obtained from Mathematica® StarData["Heliocoordinates"] and PlanetData["Heliocoordinates"], followed by transformation to spherical coordinates.

Fortuitously, the largest component of proper motion is aligned with the largest overall parallax (longitude and right ascension). The trajectories of consecutively launched probes will sometimes coincide (unless the inter-launch interval is greater than about eight months) and thus their signals could not be consistently separated spatially.

The coverage implications depend on the refinement in beamforming in each aperture. Minimizing coverage (maximizing sensitivity) requires beamforming to the precise location of each probe, while the simplest beamforming simply captures the full variation in parallax shown in Fig.8 for each probe and also across the relative positions of successive probe launches. The numerical results in §10.9 address these disparate cases by considering a range of Ω_A values.

At longitude l and latitude b the solid angle of coverage is, for small ∆l and ∆b,

Ω_A ∼ cos(b) · ∆b ∆l .   (7)

Based on Fig.8, for the simplest beamforming we have ∆b ≈ 1 as, and for Proxima Centauri cos(b) = 0.71. ∆l is dominated by proper motion and is ∆l ∼ 10 as for the assumptions in Fig.8, and thus Ω_A ≈ 7.1 as². To account for the practicalities of sidelobes we assume that Ω_A = 10 as² in Tbl.3.
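The arithmetic behind this estimate, together with the single-mode aperture area implied by the relation A_e^S Ω_A = (λ^R)^2 given later in Tbl.7, is summarized in the following short Python check (400 nm is the shorter of the two wavelengths considered in §10.9).

```python
import math

# Quick check of (7) for the swarm geometry quoted above, plus the implied
# single-mode aperture area from A_e^S * Omega_A = (lambda^R)^2 (Tbl.7).
AS2_PER_SR = (180 / math.pi * 3600) ** 2             # arcsec^2 per steradian
cos_b, delta_b, delta_l = 0.71, 1.0, 10.0            # arcsec
omega_as2 = cos_b * delta_b * delta_l                # ~7.1 as^2; Tbl.3 rounds to 10
omega_sr = omega_as2 / AS2_PER_SR
area = (400e-9) ** 2 / omega_sr                      # m^2, at 400 nm
print(f"Omega_A ~ {omega_as2:.1f} as^2, A_e^S ~ {area*1e4:.1f} cm^2")
```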

The coverage can be manipulated by the launch schedule. For example, Ω_A could be reduced by moving toward a burst-mode launch schedule, in which a sub-swarm of probes are launched at closely spaced times interspersed with launch-inactive time periods greater than the individual-downlink operation time (on the order of two years for the parameters of Tbl.2 and Fig.7).

9.2. Single-probe communication

At the other extreme, if a receive aperture is dedicated to a single probe and can dynamically compensate for parallax and atmospheric refraction of the signal path, the smallest coverage solid angle Ω_A will be determined by the aperture pointing accuracy. If we assume that with adaptive optics the pointing accuracy of the apertures is ≈0.1 as, then the coverage can be no smaller than Ω_A ≈ 0.01 as² (this is the value assumed in Tbl.3).^17

Our goal in the numerical results is to consider this entire range of possibilities.

10. MODEL AND NUMERICAL RESULTS

Before discussing the key enabling technologies, it is helpful to appreciate the trade-offs available through the choice of parameters of BPPM.

16 Obtained from Mathematica® StarData["ProperMotion"].

17 The A_e^S = 707 m² assumption in (Parkin 2019) implies Ω_A = 9.4·10^{-5} as² and hence an aperture pointing accuracy on the order of 0.01 arcsec or better. The aperture sensitivity and the target star interference calculations are strongly dependent on this assumption.

Table 7. Receive aperture design equations with BPPM

Parameters:
  Data reliability: SBR, BPP, R_0, W_e, K_s^R, m
  Geometric: b, ∆b, ∆l, Ω_A, λ^R, D_0
  Physical: η, F_c, F_x, T_s, Λ^S_{N,I,D}

Transmitter:
  T_F = M T_s                          W_e T_s > 1
  P_P^T T_s = P_A^T T_I                T_I R_0 = K_s^R · BPP
  BPP = [(1 − e^{−K_s^R})/K_s^R] · m   M = 2^m

Receive aperture:
  Λ_P^S = P_P^T A_e^T A_e^S / (h c λ^R D_0^2)
  K_s^S = η Λ_P^S T_s                  K_X^S = η F_x Λ_P^S T_F
  K_N^S = η Λ_N^S W_e T_F              K_I^S = η F_c Λ_I^S W_e A_e^S T_F
  K_D^S = Λ_D^S T_F                    SBR = K_s^S / (K_X^S + K_N^S + K_I^S + K_D^S)
  A_e^S · Ω_A = (λ^R)^2                Ω_A = cos(b) · ∆b ∆l

Receive collector: N_S = K_s^R / K_s^S

Metrics: A_e^S, P_A^T, P_P^T, N_S

BPPM is specifically designed to permit a greater advancement of one technology to offset a lesser advancement of another. This is reflected in the modeling equations for BPPM in Tbl.7, which are now described and then used in an extended numerical example.

The performance metrics that follow from the choice of parameters listed in Tbls.1, 2, 3, and 4 are listed in Tbl.8, along with a few additional chosen parameters. Mathematica® and Excel® implementations of Tbl.7 are available for self-exploration of the parameter space (Messerschmitt 2020). This is our nominal design, and in the following the scaling of performance with changes to chosen parameters is described.

Although the values of N_S are quite large, the possibility of multiple-aperture assemblages addresses some concerns (see §5.2.1). For example, N_S = 5.9·10^7 can be accommodated by 5900 assemblages, each accommodating a 100×100 array of apertures. The total area metric A_e^S N_S may be a more accurate indicator of collector capital cost.

10.1. Metrics and methodology

Achieving a gram-scale probe mass limit is challenging. For this purpose a useful metric for a design is the product of the transmit average (or peak) power and the aperture effective area, ξ_{A,P}^T = P_{A,P}^T A_e^T, which should be monotonically related to probe mass due to their impacts on electrical power generation and the transmit aperture size (see §7.1). Fixing a value for ξ_{A,P}^T admits a tradeoff between transmit power and aperture area without affecting the signal level as seen by the receiver.


Table 8. Transmitter and receiver performance metrics

        Description                                                          Swarm      Single
P_A^T   Avg. transmit power (mW)                                             29.4       0.8
P_P^T   Peak transmit power (kW)                                             638.       17.4
K_s     Average detected photons per slot                                    0.2
ρ_M     Reduction in moonlight irradiance relative to full moon              1
Λ_D^S   Avg. rate of dark counts referenced to each receive aperture (ph/yr) 32.0
SXR     Signal-to-extinction ratio                                           2441.
SIR     Signal-to-interference ratio                                         152.       4.15
SNR     Signal-to-noise ratio                                                8.3        226.
SDR     Signal-to-dark count ratio                                           8.2        223.
N_S     Number of receive apertures                                          5.9·10^7   2.2·10^6
A_e^S N_S  Total effective aperture area (km²)                               0.04       1.47

Our design methodology is to minimize the average-power-area metric ξ_A^T subject to reliable recovery of scientific data, constraints on receiver coverage accommodating probe-swarm trajectories, and the physical limitations imposed by background radiation and self-noise in the receiver. This minimization is achieved by operating at the lowest SBR consistent with reliable data recovery, operating near the theoretical limits of photon efficiency, and subject to the fundamental physical constraints on a receive aperture imposed by the desired coverage. A major component of receiver cost is the receive aperture. A metric relating to that cost is the receive aperture total effective area A_e^S N_S. We do not choose or constrain N_S, but rather ascertain the value required to achieve reliable data recovery subject to that minimum ξ_A^T. Subsequently we consider the implications of varying N_S about this unconstrained choice.

Recognizing a tradeoff between ξ_{A,P}^T and receive aperture area, another useful metric for a design is the end-to-end power-area-area metric

ξ_{A,P}^{TR} = P_{A,P}^T A_e^T · A_e^S N_S = ξ_{A,P}^T · A_e^S N_S .

We now display the dependence of ξ_{A,P}^{TR} on other parameters, which offers considerable insight.

10.2. Transmitter/receiver tradeoffs

There is a practically significant tradeoff between the transmitter power-area metric and the required number of receive apertures. As now established, increased transmitter resources (power and aperture area) can be traded for a reduced receiver burden (number of apertures).

10.2.1. End-to-end metrics

The design equations in Tbl.7 determine the end-to-end metrics ξ_{A,P}^{TR} defined in §10.1 as

ξ_A^{TR} = ξ_A^T · A_e^S N_S = (h c λ_0^R D^2 / η) · (R / BPP)   (8)

ξ_P^{TR} = ξ_P^T · A_e^S N_S = (h c λ_0^R D^2 / η) · (K_s^R / T_s) .   (9)

ξ_A^{TR} in (8) is a reformulation of the single-aperture link budget (6) for the canonical receive collector of Fig.3. Expressed in terms of coverage solid angle Ω_A, (8) and (9) become

ξ_{A,P}^T = (h c D^2 / (λ_0^R η)) · (Ω_A / N_S) · { R/BPP , K_s^R/T_s } .   (10)

Thus for fixed coverage we have that ξ_{A,P}^T ∝ (N_S)^{-1}, so that if desired N_S can be reduced at the expense of added transmit power or aperture area or both (see §10.5.1). This tradeoff is actually exercised by manipulating the hidden parameter SBR. Increasing ξ_{A,P}^T results in a higher SBR at each receiver aperture (because of a larger signal rather than lower background radiation), and this in turn allows an adequate signal level at the optical detector with a smaller number of apertures.

With respect to average power, ξ_A^T in (10) quantifies the adverse effect of achieving a smaller BPP and/or smaller quantum efficiency η, and establishes a linear dependence of average power on scientific data rate R. It also shows that for fixed coverage Ω_A, a longer wavelength is advantageous, although this fails to account for an important effect of wavelength on background radiation. Because it is blind to SBR, (10) also fails to capture a reduction in background radiation that follows from reducing BPP (see §10.5.2).

With respect to peak power, (10) shows that ξ_P^{TR} is invariant to R and BPP, depending instead on the parameters T_s and K_s^R.

10.2.2. Parameter consistency issues

For any set of consistent parameter choices, (10) is satisfied. However, (10) is not satisfied for any arbitrary choice of the data reliability parameters SBR, m, K_s, because BPP depends strongly on these parameters and (10) assumes that a photon efficiency BPP is actually achieved. We must choose SBR, m, K_s and then determine the resulting BPP before invoking (10). The average power-area tradeoff is strongly influenced by BPP, and this imparts a strong motivation to increase BPP.

Parameter K_s^R, the average number of detected signal photons for each PPM slot, has a major influence on data reliability since it determines the shot-noise-induced rate of PPM frame erasures. To achieve a large BPP, which beneficially reduces ξ_A^{TR}, it is perhaps surprising that K_s^R must be very small, resulting in a high probability of frame erasure (see §14.3). A nearly optimum choice across a range of conditions is K_s^R ≈ 0.2 (see §14.3.2). This is termed photon starvation of the PPM frames, and a small K_s^R beneficially reduces ξ_P^{TR} as well. On the other hand, the equivalent parameter K_s^S at an aperture output is determined by SBR, and is not directly related to BPP or data reliability. The purpose of the scale-out of apertures by the factor N_S = K_s^R / K_s^S is to multiply the value of K_s by incoherently accumulating received photons.

The signal-to-background ratio SBR must also be large enough to support the assumed photon efficiency BPP (see §E). Since the background radiation is determined by the aperture, the minimum SBR requirement imposes a lower bound on the average signal power at each aperture, and thus places a lower bound on ξ_{A,P}^T (see §10.5).
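The consistency requirement can be illustrated with a short Python sketch: choose m and K_s first, derive the PPM photon efficiency, confirm it respects the bound (3), and only then evaluate (10). The detection efficiency η = 0.5 and slot time T_s = 0.1 µs are assumptions made here, and N_S and Ω_A are taken from the swarm column of Tbl.8 and from Tbl.3, so the output is an order-of-magnitude illustration rather than a reproduction of Tbl.8.

```python
import math

HC = 1.986e-25          # h*c in J*m
LY = 9.461e15           # metres per light-year
AS2_PER_SR = (180 / math.pi * 3600) ** 2

def ppm_bpp(m, Ks):
    """Shot-noise-limited PPM photon efficiency for m bits/frame, Ks photons/pulse."""
    return (1 - math.exp(-Ks)) / Ks * m

def xi_T(R, BPP, Ks, Ts, omega_as2, N_S, wavelength, D, eta):
    """Eq. (10): transmit (average, peak) power-area metrics in W*m^2."""
    common = HC * D**2 / (wavelength * eta) * (omega_as2 / AS2_PER_SR) / N_S
    return common * R / BPP, common * Ks / Ts

# Choose the reliability parameters first, then check them against bound (3).
m, Ks = 12, 0.2
BPP = ppm_bpp(m, Ks)                          # ~10.9 b/ph
assert BPP < math.log2(2 ** m)                # bound (3) with PAR = M = 2^m

xi_A, xi_P = xi_T(R=1.0, BPP=BPP, Ks=Ks, Ts=1e-7, omega_as2=10.0,
                  N_S=5.9e7, wavelength=400e-9, D=4.24 * LY, eta=0.5)
print(f"BPP = {BPP:.1f} b/ph, xi_A^T ~ {xi_A:.1e} W m^2, xi_P^T ~ {xi_P:.1e} W m^2")
```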

10.3. Changes with increasing propagation distance

The downlink propagation distance D ranges over D_0 < D < D_1, where D_0 is the distance to the target star (corresponding to scientific data acquisition) and D_1 is the distance at which transmission is completed. As D increases in this range, a goal is to hold constant both the average transmit power (which is determined by the electrical generation capacity) and the reliability of scientific data recovery. It follows that the only parameters in the end-to-end metrics that can change are the data rate R and the peak transmit power P_P^T. Assuming (8) remains fixed, we must have R ∝ D^{-2}. If the initial data rate at D = D_0 is R_0 then

R(D) = R_0 · (D_0/D)^2 .   (11)

For fixed data reliability we choose to keep M, T_s, T_F, K_s fixed with D. Based on Tbl.7, R can be changed simply by adjusting the frame interval T_I ∝ R^{-1}. In addition, based on (9) the peak transmit power has to be increased as P_P^T ∝ D^2.

In summary, as D increases R is adjusted downward by increasing T_I, and P_P^T is increased to maintain a fixed reliability and also a fixed P_A^T. All ECC parameters and algorithms remain fixed, which is one of the desirable features of BPPM.
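A minimal sketch of this rescaling, assuming the initial values R_0 = 1 b/s and T_I = 2.2 s quoted in §10.9.1 and a peak power normalized to its value at D = D_0, is:

```python
# Sketch of the Sec. 10.3 rescaling with distance: only the frame interval TI
# (hence R) and the peak power PP change; TF, Ts, M, Ks and the ECC stay fixed.
def rescale(D_over_D0, R0=1.0, TI0=2.2, PP0=1.0):
    ratio = D_over_D0 ** 2
    return R0 / ratio, TI0 * ratio, PP0 * ratio   # R from (11), TI ~ 1/R, PP ~ D^2

for d in (1.00, 1.05, 1.10):   # the probe travels ~10% past the star (Fig.7)
    R, TI, PP = rescale(d)
    print(f"D/D0={d:.2f}  R={R:.3f} b/s  TI={TI:.2f} s  PP/PP0={PP:.3f}")
```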

10.4. Coverage

The sensitivity of the aperture is determined by A_e^S, as seen in the relation for Λ_P^S, and thus coverage and sensitivity are inversely related. That is, a swarm of probes received concurrently by a single receive aperture, to the extent that this necessitates a larger Ω_A, increases the required P_{A,P}^T for each probe (see Tbl.8 for a specific example).

10.5. Power-area tradeoff

It is useful to numerically characterize the transmitter/receiver tradeoffs described analytically in §10.2. The tradeoff between receive collector total area and the average and peak transmit powers is shown in Fig.9 and Fig.10 for the swarm and single-probe coverage cases respectively, holding the transmit aperture area A_e^T fixed. The range in powers and areas is very large, with impractically large powers on the left (corresponding to a single collector aperture) and impractically large receive collector areas on the right (corresponding to the minimum photon efficiency). As expected, the transmit powers are smaller in the single-probe case of Fig.10, and as a result the receive collector areas are generally larger.

10.5.1. Reducing receive collector area

Relative to the nominal design with parameters defined in Tbls.1, 2, 3, and 4, the receive collector area can be reduced by increasing SBR. Since background radiation is not affected when A_e^S is fixed, an increase in ξ_A^T is reflected by an increase in photon rates at the aperture output, an increase in SBR, and a commensurate reduction in N_S for a fixed photon efficiency BPP. The largest SBR and transmit power corresponds to a single collector aperture (N_S = 1).^18

10.5.2. Reducing transmit power

Again relative to the nominal design parameters, the transmit power can be reduced in the specific context of PPM or BPPM by reducing the photon efficiency BPP through a reduction in the number of slots M per frame. The primary effect of reducing BPP is to increase the N_S needed to achieve a large enough photon rate to support the assumed scientific data rate R. This larger receive collector area results in lower transmit powers.

Specifically this trend is due to a reduction in background radiation, which permits the transmit powers to be decreased while maintaining the desired SBR. In PPM, BPP is controlled by adjusting the bits per PPM frame m, with BPP ≈ 0.9m for the nominal value of K_s. For conventional PPM, reducing m also beneficially increases the slot duration T_s, and thus reduces the optical bandwidth W_e and thus Λ^S_{I,N} in Tbl.9. For BPPM, holding T_s and W_e fixed implies that Λ^S_{I,N} remains fixed, but the frame duration T_F is proportional to 2^m and thus reducing m has the effect of reducing the background photon counts K^S_{X,I,N,D} in Tbl.7.

18 The design assumption made in (Parkin 2019) is N_S = 1, with the goal of minimizing the receive collector area. As a result the required transmit area-power metric ξ_A^T is more than eight orders of magnitude larger than the nominal design here, but of course the receive collector area is much smaller.

Figure 9. A log-log plot of the average transmit power P_A^T and peak transmit power P_P^T vs the total receive aperture effective area N_S A_e^S with fixed coverage (Ω_A = 10 arcsec²) corresponding to the swarm probe case. The units are milliwatts for average power and kilowatts for peak power, so there is a six order of magnitude difference between the two curves. The dots represent the nominal case for the parameters listed in Tbls.1, 2, 3, and 4. The larger powers to the left of the dots reflect a larger choice for SBR, culminating with the SBR corresponding to a single aperture (N_S = 1). The smaller powers to the right of the dots reflect a smaller choice for BPP, culminating in BPP = 1, corresponding to a PPM frame with just two slots (M = 2).

Figure 10. Fig.9 is repeated for fixed coverage (Ω_A = 0.01 arcsec²) corresponding to the single probe case.

Reducing BPP to increase aperture area is less efficient for average power than it is for peak power, due to the increase in area needed to accommodate the poorer photon efficiency, which begins to offset the beneficial reduction in background radiation at small BPP.

10.6. Background radiation

The relevant average detected photon rates at each aperture are listed in Tbl.9 for each of the four sources of background. These are related to the detected peak photon rate Λ_P^S and the three physical parameters Λ_{N,I,D}, the appropriate units for which are listed in the third column.

Table 9. Constituents of background radiation

Source                   Avg. ph/s               Λ_{P,N,I,D} units
Source extinction        η F_x Λ_P^S             ph/s
Unresolved noise         η Λ_N W_e               ph/s-Hz
Host star interference   η F_c Λ_I A_e^S W_e     ph/s-Hz-m²
Detector dark counts     Λ_D^S                   ph/s

Like the probe signal, the three incident sources of radiation are affected by the detection efficiency η, whereas the dark count rate is intrinsic to the detector and thus is not affected by η. Thus dark counts become a more significant contributor to SBR as η decreases.

Noise and interference are broadband sources of radiation, and thus as they reach the optical detector they are proportional to W_e, the bandwidth of the receive optical bandpass filter which precedes that detector. The extinction ratio F_x < 1 determines the background following the optical bandpass filter that is related to incomplete extinction of the source.

Noise (due to unresolved sources of radiation) is considered to be isotropic over the coverage solid angle Ω_A, and thus the captured photon rate is independent of A_e^S for a single-mode diffraction-limited aperture. This is because any changes to collection area are offset by the resulting change in coverage solid angle (see §B.4.3).

Interference (originating with the target star) is essentially a point source like the probe transmitter. It can be approximated as a plane wave when it reaches the aperture, and the captured photon rate is thus proportional to the aperture effective area A_e^S. Any rejection of this interference due to it falling outside the coverage angle or other coronagraph functionality is modeled by the coronagraph rejection ratio F_c ≤ 1.

Dark counts are distinctive in being unaffected by either W_e or A_e^S.^19 This presents a unique challenge because we cannot limit dark counts originating in the detector (as opposed to receive optics) by optical bandpass filtering.

10.7. Conditions on extinction ratio

Returning to Tbl.7, the average photon counts per frame for signal (K_s^S) and for incomplete extinction (K_X^S) are related to the peak signal power Λ_P^S. This implies an upper limit on the permissible extinction ratio F_x that is related to the desired SBR. Eliminating K_X^S from the equations,

K_s^S = SBR (K_N^S + K_I^S + K_D^S) / (1 − M F_x · SBR) .   (12)

19 This may not be true of dark counts originating as blackbody radiation from the optics preceding the optical bandpass filtering (see §13.3).


Thus incomplete extinction (F_x > 0) results in a compensating increase in the required K_s^S, which in turn requires an increase in P_P^T. This increase is unrelated to other sources of background, and since it is not a function of T_I it is also unrelated to the benefits of BPPM (see §10.8). Based on (12), a feasible design requires

F_x < (M · SBR)^{-1} .   (13)

This conclusion also follows directly from SBR < SXR (since incomplete extinction is one component of SBR) and SXR = 1/(M F_x).
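As a quick check, the bound (13) for the nominal M = 2^12 and SBR = 4 of Tbl.3 evaluates as follows, reproducing the F_x < 6×10^{-5} figure quoted in §10.9.1:

```python
# Bound (13) on the extinction ratio for the nominal M = 2^12 slots and SBR = 4.
M, SBR = 2 ** 12, 4
print(f"Fx < {1.0 / (M * SBR):.1e}")   # ~6.1e-05, as quoted in Sec. 10.9.1
```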

10.8. Opportunities afforded by BPPM

The slot time T_s plays a central role in Tbl.7. First, choosing a smaller T_s assists in achieving the targeted SBR by reducing all three of K^S_{N,I,D}. This boost in SBR results from ignoring all photon detection events that occur during the blank intervals in Fig.4. The price paid for smaller T_s is an increase in the peak transmitted power P_P^T.

The reduction in dark counts arising from smaller T_s is especially advantageous, since dark counts originating in the detector (as opposed to receive optics) are unabated by bandlimiting. This is significant because the numerical results in §10.9 suggest that dark counts are typically a very significant contributor to overall background radiation for larger coverages Ω_A. This parallels radio telescopes, where thermal noise introduced in the receiver electronics typically dominates cosmic sources of noise.

Another important parameter in Tbl.7 that relates to technology issues is the optical bandwidth W_e imposed at the receiver, since this determines the level of noise and interference reaching the optical detector. The minimum bandwidth is determined by W_e T_s ∼ 1, and it is natural to keep T_s fixed as distance D increases as well as across different probes (even if they have differing masses and data rates). As a result, the optical bandwidth does not require agility or configurability.

The total noise and interference is always proportional to W_e T_s, and thus remains fixed if W_e is coordinated with T_s. Thus BPPM does not help (or hurt) with respect to total noise and interference at the optical detector. A reduction in T_s to minimize dark counts also increases the optical bandwidth W_e without increasing background radiation. In terms of optical bandpass filter technology this is desirable since it relaxes the required selectivity. In effect BPPM trades off background suppression (except for dark counts) between the frequency and time domains. Time-domain suppression is technologically preferable because it is simpler to implement and more precise. Thus all considerations (other than peak power) argue in favor of making T_s smaller.

10.9. Parameter-metric sensitivity

We now justify the choice of parameters in Tbls.1, 2, 3, and 4, followed by a study of the sensitivity of these parameters to changing assumptions.

10.9.1. Choice of nominal parameters

The choice of R_0 = 1 b/s is convenient but rather arbitrary. This is the scientific data rate during periods of non-outage, but for the chosen daylight and weather outage probabilities the net scientific data rate is R_a = 0.432 b/s (see §14.4). For fixed coverage, and also holding K_s^R and T_s fixed, based on (10) reducing R_0 is a way to either reduce the transmitter average power-area metric ξ_A^T (which has no effect on peak power P_P^T) or alternatively reduce the number of receive apertures N_S (which has the adverse effect of increasing the required peak power-area metric ξ_P^T). Surprisingly, reducing ξ_P^T requires us to increase R_0 and N_S ∝ R_0. Alternatively, ξ_P^T can be reduced by increasing the PPM slot time T_s, but this increases susceptibility to dark counts.

The reliability parameters in Tbl.3 must be self-consistent based on available theoretical bounds on the achievable BPP consistent with reliable recovery of scientific data. First, neglecting background radiation (that is, SBR = ∞), the choice m = 12 places an upper bound BPP < 12 b/ph based on (3). This is true of any possible modulation coding scheme. For the specific case of PPM, the theoretical upper bound is BPP < 10.9 b/ph for the chosen m and K_s based on (18). Second, taking into account background radiation, for this value of PAR = M, the choice SBR = 4 results in the theoretical bound BPP ≤ 11.1 b/ph for any photon-counting detector (see §E). Thus all available theoretical bounds are consistent with the assumption BPP = 10.9 in Tbl.3. Concrete ECC methods are not expected to quite achieve these theoretical bounds, so this BPP can be considered mildly optimistic.

A rather arbitrary choice is the dark count rate Λ_D^S = 32 yr^{-1}, because we don't know what value may be feasible in the future. Assuming nighttime-only operation of the downlink (see §10.9.4), the two most significant sources of background (for the shorter wavelength) are moonlight and dark counts. The value Λ_D^S is chosen so that these two sources are about equal (SNR ≈ SDR) at the largest coverage (Ω_A = 10 as²). Thus, a larger (or smaller) Λ_D^S will render dark counts (or moonlight) the dominant source of background radiation under these specific conditions.

The choice of transmit aperture effective area A_e^T is also rather arbitrary, choosing an area that seems consistent with a low-mass probe. Both the transmit powers P_{A,P}^T are inversely proportional to A_e^T, so increasing this area is directly beneficial in terms of power.

From (13), F_x < 6×10^{-5} for the parameters in Tbls.1, 2, 3, and 4. A 77 dB extinction ratio (F_x = 2×10^{-8}) has been achieved in bench testing (Farr et al. 2013).

The F_x = 10^{-7} value in Tbl.2 thus appears feasible today, and renders incomplete extinction a relatively minor contributor to the background.

The effects of changes to wavelength λ_0^R are multifaceted. If the coverage Ω_A is held fixed, the effective area A_e^S is proportional to (λ_0^R)^2, and thus the signal power increases in proportion to λ_0^R in (6) while the interference increases in proportion to (λ_0^R)^2 from Tbl.9, and hence overall there is increased interference. For a red-dwarf star such as Proxima Centauri the interference is magnified further because of the wavelength dependence of the star's blackbody radiation (see Tbl.10). Increasing the wavelength to λ^R = 1 µm in Tbl.3 causes interference to replace moonlight and dark counts as the dominant component of background radiation, and results in a substantial increase in transmit power (to P_A^T = 562 mW and P_P^T = 12.2 MW) for the swarm case in Tbl.8. This larger interference could be mitigated by a larger coronagraph rejection (smaller F_c).

Reduction in Ω_A from the swarm to the single-probe case in Tbl.3 beneficially reduces the transmit powers due to the larger effective area A_e^S of the aperture, but increases the total receive aperture area A_e^S N_S. Thus, reducing Ω_A yields the expected tradeoff between a larger receive aperture and reduced transmit powers. Similarly, a larger transmit aperture effective area A_e^T can be directly traded for smaller transmit powers.

Unfortunately it appears that the required P_P^T cannot be achieved by available semiconductor lasers. Some ideas for potentially circumventing this issue, none of them currently qualified, are discussed in §13.1.

Each PPM frame in BPPM represents K_s^R · BPP bits of scientific data on average, and since this equals the number of scientific data bits T_I R_0 in each frame, it follows that T_I ∝ K_s^R. This implies that in the photon-starvation regime where K_s ≪ 1 the rate at which frames are transmitted, T_I^{-1}, is relatively large. For example, for the parameters in Tbls.1, 2, 3, and 4 we have T_I = 2.2 s/frame, and only 2.2 scientific data bits are represented by each frame on average. When m = 12 each PPM frame actually represents 12 bits, which we infer represents 2.2 scientific data bits and 9.8 bits of redundancy. This redundancy is exploited by ECC to achieve reliability in the face of a high rate of frame erasures (see §14.3). ECC achieves reliable data recovery by simultaneously mapping a large number of scientific data bits into a large number of PPM frames, thereby statistically averaging over the random frame erasures (see §14.3).

We now give an extended numerical example based on the design relations in Tbl.7, with the goal of more fully appreciating the tradeoffs inherent in a BPPM design. This numerical tradeoff is idealized, neglecting some material degradations in performance from other factors such as transmit aperture and aperture pointing inaccuracy and atmospheric effects (refraction and turbulence).

Figure 11. A log plot of the effective area A_e^S of an aperture vs coverage solid angle Ω_A, for λ_0^R = 400 nm and 1000 nm. This assumes an idealized aperture that achieves a uniform sensitivity over Ω_A.

Table 10. Assumed numerical values for background photon rates following a single-mode diffraction-limited aperture.

Radiation source                              Units         400 nm      1.0 µm
Point-source interference
  Proxima Centauri  Λ_I                       ph/s-m²-Hz    8.0·10^-10  2.0·10^-7
Unresolved sky noise
  Zodiacal light (90° to ecliptic)  Λ_N,zodi  ph/s-Hz       2.0·10^-16  1.0·10^-14
  Faint-star light  Λ_N,star                  ph/s-Hz       2.0·10^-16  1.0·10^-14
Atmospheric scattering
  Daylight  Λ_N,sun                           ph/s-Hz       4.1·10^-8   5.0·10^-7
  Full moon  Λ_N,moon                         ph/s-Hz       1.0·10^-13  1.3·10^-12


10.9.2. Aperture effective area

The area log_10 A_e^S is plotted in Fig.11 for two wavelengths of interest as a function of the coverage solid angle Ω_A over the range bracketed by the swarm and single-probe cases in Tbl.3. This tradeoff is straightforward as it does not involve other factors such as background radiation. Choosing a shorter wavelength reduces A_e^S (and therefore aperture sensitivity) for fixed coverage. Increased coverage also reduces aperture sensitivity at a fixed wavelength.

10.9.3. Background radiation

The physical parameters characterizing the noise and interference used in the following numerical calculations are listed in Tbl.10.^20 The values given apply to a single aperture in the canonical receive collector of Fig.3, since each aperture has a single optical detector and thus qualifies as a single-mode 'antenna' in the sense of §B.^21

20 Zodiacal noise is adopted from (Hauser et al. 1998) and the sky noise and star interference are adopted from (Lubin 2020). The atmospheric scattering values are justified in §11.1.

The units in Tbl.10 are consistent with Tbl.9. Two wavelengths are compared in Tbl.10, 1 µm (near infrared) and 400 nm (blue visible). The latter is of interest because of the substantially smaller interference from Proxima Centauri, which, as a red dwarf star, has a power spectrum weighted toward the red. The values in Tbl.10 are typical, but Proxima Centauri is a flare star which exhibits short-term large increases in radiation. These flares will likely contribute to outages (see §11.4), unless the coronagraph factor F_c is considerably smaller than considered here.

The total noise contribution to background radiation during nighttime operation is

Λ_N = Λ_{N,zodi} + Λ_{N,stars} + ρ_moon · Λ_{N,moon} .

While Λ_{N,moon} assumes a full moon, the factor 0 ≤ ρ_moon ≤ 1 can adjust for the lower radiance during other phases of the moon. Generally Λ_{N,moon} is the dominant sky noise source. Note that assuming ρ_moon < 1 implies that some nighttime periods (with a brighter moon) are considered outages, and the actual scientific data rate R_a will be correspondingly lower.^22
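Evaluated with the 400 nm entries of Tbl.10 and ρ_moon = 1, this sum is dominated by moonlight, as the following short Python check shows:

```python
# Nighttime noise total at 400 nm with a full moon (rho_moon = 1), from Tbl.10.
zodi, stars, moon = 2.0e-16, 2.0e-16, 1.0e-13   # ph/s-Hz
rho_moon = 1.0
print(f"Lambda_N = {zodi + stars + rho_moon * moon:.2e} ph/s-Hz (moonlight-dominated)")
```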

10.9.4. Daylight outage assumption

As is evident from Tbl.10, daylight would have a major impact on background radiation. For the swarm case in Tbl.3 the transmit power would be impractically large to achieve daylight operation (P_A^T = 5.8 kW and P_P^T = 126 GW). For this reason, the metrics in Tbl.8 and the following calculations assume nighttime (full moon) sky radiance. The value ρ_moon = 1 in Tbl.3 and the following numerical calculations assume the entire nighttime is a non-outage condition, barring weather events. Since the receiver is aware of day-night and weather conditions, it can force complete erasures during daytime and weather events, and these become outage conditions in the sense of §14.4. Knowledge of the receiver's day-night or weather conditions is fortunately not necessary at the probe transmitter.

10.9.5. Average transmit power

The average transmit power P_A^T is plotted in Fig.12 as a function of coverage solid angle Ω_A. This shows that the shorter wavelength is quite advantageous, especially so at smaller coverages (where interference is more significant).

21 This applies even if an optical detector is shared among multiple apertures as described in §5.4, as long as optical interference among apertures is avoided.

22 Strictly speaking, when ρ_moon = 0 there will be some blackbody atmospheric radiation, but this is neglected since this irradiance is dominated by the Zodiacal radiation Λ_{N,zodi} from the solar system. This case is also uninteresting since there would be 100% outages.

Figure 12. A log plot of the average transmit power P_A^T in mW against the coverage solid angle Ω_A, with all other parameters taken from Tbls.1, 2, 3, and 4. Two wavelengths (λ_0^R = 400 nm and 1000 nm) are plotted, and for small Ω_A the required transmit power is much larger at the longer wavelength due to the greater relative aperture sensitivity to interference and additionally the larger irradiance of a red-dwarf star.

The differing behavior at the two wavelengths can be better understood by comparing the three components of SBR: interference (SIR), noise (SNR), and dark counts (SDR). The smallest component is the dominant source of background radiation. At the shorter wavelength (see Fig.13) Λ_D^S was chosen so that noise and dark counts are equal contributors to background radiation, and hence SNR ≈ SDR for all coverages (since neither noise nor dark counts are dependent on aperture effective area). Interference is an insignificant contributor compared to moonlight and dark counts at the larger coverages.

At the longer wavelength (see Fig.14) interference becomes the dominant contributor to SBR at all coverages. As a result, the dark count target could be considerably relaxed without a substantial impact on overall performance metrics, but of course that performance would be considerably degraded relative to the shorter wavelength.

10.9.6. Number of apertures

SBR is determined by the design of the aperture and the choice of P_A^T, and remains fixed during the scale-out to the entire receive collector. The purpose of aperture replication in scale-out is to increase the average detected photons per slot K_s^R to the value needed to achieve reliable recovery of scientific data in the face of shot noise. The value of K_s^R and hence the number of apertures N_S does not depend on the data rate R_0. The resulting N_S is very large, as plotted in Fig.15.

Figure 13. As an aid to understanding the small-coverage behavior exhibited in Fig.12, the logs of the three components of SBR = 4 (SNR, SIR, and SDR) are plotted vs coverage solid angle Ω_A for λ = 400 nm. For small coverages SIR is the smallest, and hence interference is the dominant source of background radiation.

Figure 14. Fig.13 is repeated for λ = 1 µm. The interference becomes dominant for all coverages.

The metric N_S A_e^S, which is a measure of the total effective area across all apertures,^23 is plotted in Fig.16. It is significantly larger at the shorter wavelength, with the compensating benefit that the probe's average transmit power is lower (see Fig.12).

10.9.7. Dark counts

To see the effect of changing dark-count rates, P_A^T is plotted in Fig.17 and Fig.18 as a function of dark counts, coverage angle, and wavelength. This quantifies the obvious conclusion that the shorter wavelength and a very low rate of dark counts are advantageous in achieving low average transmit power. Reducing the solid angle of coverage is also advantageous.

23 This is not the effective area of the receive aperture as a whole, which is not defined since the receive aperture is not single-mode and not diffraction-limited.

Figure 15. A log plot of the number N_S of apertures under the same conditions as Fig.12. It is significantly larger at the shorter wavelength, which is due to (a) the smaller-area aperture and (b) the smaller peak transmit power P_P^T.

Figure 16. Based on the aperture effective area A_e^S in Fig.11 and the number N_S of apertures in Fig.15, a log plot of their product N_S A_e^S in km² as a function of the coverage. Generally it is less than a km² under these ideal conditions, except for very small coverages.

10.9.8. Coronagraph

The effect of changing the coronagraph rejection is illustrated in Fig.19. The value of F_c makes a substantial difference at low dark count rates because that is where interference dominates.

24 Messerschmitt et al.

Figure 17. A log-log plot of the average transmit power PTA for λ = 400 nm vs the average dark counts ΛSD per year (when referenced to individual aperture optical detectors), shown for coverages ΩA = 50, 10, and 1 as^2. Sharing optical detectors across multiple apertures may be helpful in reducing ΛSD (see §5.4). Idealized performance is assumed for other physical characteristics like quantum efficiency, pointing accuracy, and atmospheric refraction and turbulence. The different curves illustrate the benefit of reducing the coverage angle. The highest dark count rate ΛD = 10^4 yr^-1 corresponds to 1.14 hr^-1.

Figure 18. Fig.17 is repeated for λ = 1 µm. A floor in transmit power appears because interference becomes the dominant contributor to SBR for small dark count rate ΛSD (see Fig.12 and Fig.14).

10.9.9. Slot time and average transmit power

In BPPM reducing the slot duration parameter Ts reduces the effective dark count rate (by reducing the duty cycle δ). The resulting benefit of a reduced Ts on average transmit power PTA is illustrated in Fig.20. There is, however, a point of diminishing returns as interference becomes the dominant source of background radiation.

10.9.10. Slot time and peak transmit power

As seen in (9), the peak transmit power PTP is not dependent on R0 or BPP, but rather is determined by the BPPM parameter Ts and data reliability parameter KRs. The effect of Ts on peak transmit power PTP is shown in Fig.21. Increasing Ts reduces PTP because the pulse energy PTP·Ts remains relatively fixed. This effect saturates at larger Ts because the increase in PTA seen in Fig.20 indirectly offsets any reduction in PTP. The peak power increases for smaller Ts (where dark counts have been largely suppressed) because the average power is relatively constant in this interference-dominated regime.

Figure 19. Fig.17 is repeated with three different values for the coronagraph rejection Fc (Fc = 1, 0.1, and 0.01). At low dark count rates the interference becomes the dominant source of background radiation.

Figure 20. A log-log plot of the average transmit power PTA vs slot time Ts (between 1 ns and 1 ms) and different values of dark count rate ΛSD from 0.1 to 1000 yr^-1. The largest feasible value (at δ=1) is Ts=0.98 ms. Reducing Ts also reduces PTA due to its beneficial limiting of dark counts. However, the benefit saturates at small Ts as interference dominates there.

For the other parameters chosen in Fig.21, the best choice of slot time is Ts ∼ 1 µs. Any smaller Ts than this and PTP increases, and any larger Ts results in an increase in PTA. However this tradeoff is influenced by the assumed dark count rate ΛSD.

10.9.11. Moonlight variation

The outage probability assumed so far presumes that the downlink operates during all moon phases. There is an opportunity to reduce the transmit powers if operation is restricted to reduced moonlight by choosing ρmoon < 1, but of course the outage probability will also increase. This effect on transmit power is quantified in Fig.22.


Figure 21. Fig.20 repeated for peak transmitted power PTP (in kW), for dark count rates ΛSD from 0.1 to 1000 yr^-1.

Figure 22. Fig.17 is repeated with all parameters held constant except the maximum moonlight irradiance parameter ρmoon (ρM = 1, 0.5, and 0.1). Note that reducing ρmoon implies a larger outage probability and hence a lower post-outage data rate Ra. For larger ΛSD a smaller ρmoon has little benefit since dark counts are the dominant source of background radiation.

10.9.12. Excess optical bandwidth

If it is necessary for some reason to choose a We that is significantly larger than the minimum value Ts^-1, the noise and interference (but not dark count) contributions to background at the optical detector are increased. In this event probe transmit power has to be increased to maintain the desired SBR, and dark counts become relatively less important. The effect of increasing We on average transmit power PTA is plotted in Fig.23.

10.9.13. Quantum efficiency

The preceding results assume ideal quantum efficiency η = 1. A quantum efficiency η < 1 has two effects. First it will always increase NS (and hence total receive aperture size) to achieve the required average photon rate ΛRA. Second, it will increase the relative importance of dark counts to SBR, and require an increase in transmit average power PTA in the regime of large dark count rates. This latter effect is illustrated in Fig.24.

Figure 23. Fig.17 is repeated while varying the receive optical bandpass filter bandwidth We (10, 100, and 1000 MHz) and keeping Ts fixed. The minimum value We = 10 MHz corresponds to WeTs = 1. In the interference-dominated regime the average transmit power is increased (in proportion to We), and in principle this could be compensated by a higher coronagraph rejection ratio.

Figure 24. Fig.17 is repeated with all parameters held constant except the optical efficiency η (η = 1, 0.5, and 0.1). The average transmit power PTA has to be increased to maintain a fixed SBR at higher dark count rates ΛSD since received power is reduced but ΛSD is unaffected. The adverse impact of η on PTA is reduced for very small ΛSD since in that regime noise and interference (as well as signal) are reduced and thus SBR is relatively unaffected.

11. ATMOSPHERIC EFFECTS

For a receive aperture located on the earth's surface, the interaction between the signal and the earth's atmosphere at optical wavelengths introduces several significant impairments (Biswas & Piazzolla 2006). Nighttime sky irradiance is a contributor to the total background radiation, and was incorporated into the numerical results of §10.9. The other significant atmospheric impact is a contribution to outage time due to daytime solar scattering and weather, which together reduce the rate of reliable scientific data recovery (see §10.2 and §14.4).


11.1. Daytime sky irradiance

During the daytime, sunlight scattered through the atmosphere into the receive aperture is a source of background radiation that can be considered isotropic within the coverage of a highly directive aperture. In Tbl.10 the sky irradiance at the aperture can be inferred from Fig.8.16 in (Biswas & Piazzolla 2006):

PSN = 3×10^-3 W/(cm^2 sr µm)  @  λR0 = 1 µm ,

ΛSN = PSN · (λR0/(hc)) · (We·(λR0)^2/c) · (λR0)^2  ph/s .

The factors in the conversion from PSN to ΛSN are (1) the power to photon rate conversion, (2) the bandwidth expressed in terms of frequency rather than wavelength, and (3) the AeΩA product given by (B5). The value of PSN is about 8× larger at λ0=400 nm and a factor of 4×10^5 smaller for a full moon as compared to daylight.
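As a rough check on this conversion, the short Python sketch below applies the three factors to the quoted daytime radiance. It is only an illustration of the arithmetic: the bandwidth We = 10 MHz is an assumed example value, and the result stands in for the corresponding entry in Tbl.10 rather than reproducing it.

h, c = 6.626e-34, 2.998e8        # Planck constant [J s], speed of light [m/s]
lam = 1.0e-6                     # wavelength lambda_0^R = 1 um [m]
W_e = 10e6                       # optical bandwidth [Hz] (assumed example value)
P_N_S = 3e-3 * 1e4 * 1e6         # 3e-3 W/(cm^2 sr um) converted to W/(m^2 sr m)

photon_factor = lam / (h * c)    # (1) optical power -> photon rate [ph/J]
band_factor = W_e * lam**2 / c   # (2) bandwidth W_e in Hz -> wavelength interval [m]
etendue = lam**2                 # (3) AeOmegaA product for a single-mode aperture [m^2 sr]

Lambda_N_S = P_N_S * photon_factor * band_factor * etendue
print(f"daytime sky background ~ {Lambda_N_S:.1f} photons/s per aperture")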

This irradiance decreases with greater angular separation from the sun itself, and it helps that there is a large minimum angular separation between Alpha Centauri and the sun.24 Although the solar radiance is maximum at 475 nm (corresponding to black body radiation at 5800 K), the sky irradiance is smaller at 400 nm than at 1 µm largely due to the smaller value of AeΩA. The resulting numerical values are listed in Tbl.10.

Unfortunately the full daylight sky irradiance in Tbl.10 is more than four orders of magnitude larger than the other sources of noise background. The significance of this depends on the relative intensity of interference and dark counts, but generally operation during daylight hours would require significantly higher probe transmit peak power. Thus in Tbl.3 (as discussed in §10.9.4) we assume that daylight is an outage situation.

11.2. Nighttime sky radiance

If daylight is considered an outage, the sky radiance during nighttime is a limiting factor on the downlink. At night the reflected light from the moon is an indirect source of solar background radiation (except for a new moon or the moon below the horizon). A full moon has a 4×10^5 smaller radiance than the sun. In the absence of moonlight, the primary source of irradiance is thermal emissions from the atmosphere, but this will be considerably smaller than cosmic zodiacal radiation for the wavelengths of interest as long as strong emission lines (such as OH or O2) are avoided in the choice of wavelength. Thus, the total night sky irradiance as a fraction ρmoon of the full moon value (as determined by the moon's phase) is the approximation employed earlier.

24 In heliocentric coordinates the latitudinal separation is approximately fixed at 44.7 degrees and the longitudinal separation varies through the year.

11.3. Turbulence

Atmospheric turbulence causes a short-term fluctuation in received signal power which is manifested as a scintillation (random variation in signal strength) as well as signal incoherence (wavefront phase aberration across the aperture area). As the geometric size of an aperture increases, the variation in signal power due to scintillation decreases, while the power loss due to spatial incoherence increases (Biswas & Piazzolla 2006).

As the aperture geometry is determined by coverage considerations, scintillation manifests itself as a variation in erasure probability that can only be compensated by considering high erasure probability as an outage, which is mitigated using interleaving and ECC (see §14.4) rather than adjustment of the aperture design.

Spatial incoherence is a less severe impairment during nighttime operation, and under some conditions may be deemed an outage. It can in principle be compensated by adaptive optics (the details are discussed in Sec.3.34 of (Biswas & Piazzolla 2006)). Whether adaptive optics will be necessary depends on the detailed consideration of elevation, declination, and the geometric size of the aperture (rather than its area), all of which is beyond the scope of this paper. Fortuitously, for the larger coverages necessary in the probe-swarm case the aperture is smaller and thus less susceptible to atmospheric turbulence.

11.4. Outages

Periods during which virtually all PPM frames suffer an erasure are considered outages. Generally the transmitter will have no knowledge of when outages occur, but when there is no "fuel" saving from avoiding transmission during outages this has no consequences. The outage mitigation design can assume a worst-case outage probability PE, with the consequence that the data rate is reduced by a factor of (1 − PE) (see §14.4).

The two main sources of atmospheric-origin outages are daylight and weather. Weather, including water vapor, clouds, and storms, attenuates or blocks reception at random times. The statistics of weather outages are highly site-dependent. The receiver is aware of either of these conditions and can enforce (and label) outages by arbitrarily enforcing 100% erasures during these periods.

For any point on earth, nighttime lasts for half the hours averaged over one year (actually slightly less due to refraction of sunlight near the horizon). Near the Antarctic Circle nighttime persists for 48% of the total hours in a year (Wikipedia contributors 2019). Thus treating daylight as an outage would reduce the scientific data rate by 52% at this location.

The siting of the receiver will introduce another source of outages if its latitude results in the target star disappearing below the horizon. In the case of Proxima Centauri, continuous view is possible if the receiver is sited at a sufficiently southerly latitude25.

A cosmic source of outages will be flares of the target star, but these are short-lived and will be insignificant in terms of outage probability.

12. OTHER CHALLENGING DESIGN ISSUES

BPPM addresses principally the technology limitations of optical bandpass filters and detectors. Other issues relating to the design of the physical layer arise that impose profound technological challenges.

12.1. Probe motion effects

A low-mass probe is assumed to be traveling (relative to the receiver frame) at relativistic speed, and the receiver is also in motion due to earth rotation and orbit. When referred to baseband, due to the short optical wavelength the resulting Doppler shifts must be accounted for in the design (see §A.1).

12.1.1. Uncertainty in probe velocity

At a speed of u = 0.2c the combined Doppler shift (about 20%) and effect of time dilation (about 2%) is about 22%. Since the sources of noise and interference do not experience this shift, the shifted wavelength determines the atmospheric effects and interference from the target star. The transmit wavelength can be adjusted toward a shorter wavelength to compensate for the Doppler shift.

Since the swarm of probes will encompass slightly different speed perturbations, the result is relatively large frequency offsets affecting WDM (see §12.2) and TDM (since a common receive wavelength is assumed). If we allow a perturbation du in speed u, the resulting perturbation dνR in received frequency νR is

dνR/νR = −[(u/c) / (1 − (u/c)^2)] · (du/u) = −0.208 · (du/u)   (14)

for u = 0.2c (see §A.2). Thus there is a Doppler shift of ±1.6 THz for each ±1% variation in probe velocity at λ0=400 nm.
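The sensitivity in (14) and the quoted ±1.6 THz figure can be checked with a few lines of Python; the numbers below are simply those used in the text.

c = 2.998e8
u = 0.2 * c
sensitivity = -(u / c) / (1.0 - (u / c)**2)     # d(nu_R)/nu_R per du/u, Eq. (14)
nu_R = c / 400e-9                               # received optical frequency at 400 nm [Hz]
shift = abs(sensitivity) * 0.01 * nu_R          # shift for a 1% probe-speed variation
print(f"sensitivity = {sensitivity:.3f}; 1% speed variation -> {shift/1e12:.1f} THz shift")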

Achieving a precision in probe velocity that limits the Doppler shift to the order of ∼We seems unlikely. If probe speed cannot be controlled tightly enough, it will be necessary to compensate for variations in probe speed by a coordinated configuration of transmit wavelength following launch. There are at least two possible options:

25 The declination of Proxima Centauri is -62.67 degrees, so a continuous view is achieved by choosing a receiver site with a latitude more southerly than (90 − 62.67) = 27.33 degrees south. However, to avoid significant degradations in link quality when Proxima Centauri is low on the horizon due to increased atmospheric degradation effects, the receiver site should be more southerly than approximately 35 degrees south.

• Since the transmission of scientific data follows encounter, transmit wavelength configuration can follow encounter and precede downlink operation. Autonomous configuration of transmit wavelength could be based on a sufficiently precise measurement of time to reach the target star, combined with knowledge of target star position. All probes in a swarm will be affected equally by imprecision in knowledge of this position.

• If there is an earth-to-probe communication uplink for a short period following launch (see §12.5), the received wavelength can be measured at the receiver and used to configure the transmit wavelength.

12.1.2. Earth motion and Doppler

The maximum speed of the earth relative to a heliocentric frame due to its orbit about the sun (or revolution about its axis) is 30 km/s (or 1670 km/h at the equator). The Doppler shift is thus bounded by ±30 GHz (or ±464 MHz) at λ = 1 µm. While this Doppler can be very significant, it is essentially the same for all probes in a swarm and thus does not appreciably affect their relative wavelengths. Accurate positioning of the receiver's optical bandpass filtering does require correction based on accurate knowledge of this motion, and any remaining uncertainty should be accounted for by an increase in bandwidth We with the resulting increase in cosmic sources of background radiation (see §10.9.12).
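A quick order-of-magnitude check of these Doppler bounds (nonrelativistic, since the speeds involved are small):

c = 2.998e8
nu = c / 1e-6                          # optical frequency at lambda = 1 um [Hz]
v_orbit = 30e3                         # earth orbital speed [m/s]
v_spin = 1670e3 / 3600.0               # equatorial rotation speed, 1670 km/h in m/s
print(f"orbital Doppler ~ +/-{v_orbit / c * nu / 1e9:.0f} GHz, "
      f"rotational Doppler ~ +/-{v_spin / c * nu / 1e6:.0f} MHz")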

12.1.3. Gravitational red shift

During flyby of the target star, the stronger gravitational field will result in incremental red shift of the probe's signal. While this effect can be significant for a close-in encounter, it is largely avoided with post-encounter commencement of downlink transmission. In any case downlink operation during encounter is unlikely due to conflicting demands on scientific instrument electrical power and probe attitude adjustment to accommodate scientific observations. While gravitational effects are likely insignificant in communications downlink design, they offer a scientific opportunity for sensitive measurement of gravitational potentials (see §A.3).

12.1.4. Influence on data rate

Due to the increasing propagation delay and time dilation in the transmitter clock relative to the receiver, the data rate observed by the receiver is smaller by about 22% than the data rate generated at the transmitter. Due to the methodology adopted in §C, which concentrates on the earth's rest frame, the data volumes and latency plotted in Fig.5 account for this.


12.1.5. Relativistic aberration

The high speed of the transmitter moving away from the receiver will have a relativistic effect on the transmit beam and affect the link budget of (6). This small incremental effect is not accounted for in this paper.

12.2. Multiplexing options

Multiplexing is the function of separating the concurrent signals originating from different probes. Whether there is a single shared receiver or multiple receivers, this function is challenging due to environmental factors like extremely low received power and frequency shifts due to probe motion.

These challenges are magnified as the number Jp of concurrent transmitting probes increases, yielding strong motivation to minimize Jp. Available measures include, for each probe, limiting transmission during transit and limiting the downlink operating time. We find in §7 that (for a specific set of scaling laws) the optimum downlink time ranges from 2-9 years. For weekly probe launches this results in Jp ∼ 100 to 500.
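This range of Jp follows directly from the launch cadence and downlink duration; a one-line check in Python (weekly launches assumed, as in the text):

weeks_per_year = 365.25 / 7.0
for downlink_years in (2, 9):
    print(f"{downlink_years} yr downlink -> J_p ~ {downlink_years * weeks_per_year:.0f} concurrent probes")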

There are some alternative multiplexing approaches. The multiplexing method affects many other aspects of the downlink design, so it should be established early.

12.2.1. Spatial-division multiplexing (SDM)

SDM takes advantage of angular differences between probe trajectories to separate the signals from the probes. For probes launched at different times of the year, the parallax variation combined with a drift in the launch targets due to proper motion of the target star will spatially separate trajectories save for short-lived periods of exact coincidence (see §9). This opens up the possibility of choosing a receive aperture coverage to attenuate all probe signals save one of interest, with any coincidences treated as an outage (see §14.4). Proper motion of the target star is particularly helpful for spatial separation, as it opens up the possibility of combining SDM with another scheme for separating the signals from probes with nearby launch dates.

An extreme case is to use a separate receive aperture for each downlink, each with a very small coverage solid angle ΩA (as implicitly assumed in (Parkin 2019)). Although extravagant in terms of ground-system cost, an important benefit is the reduction in the required transmitter power-area product ξTA = PTA·ATe if NS is chosen to minimize this value. However, this smaller ξTA may come at the expense of a considerably larger total effective area NSASe as seen in Fig.16.

An intermediate case would be a more sophisticated single-aperture design that employs time-dependent beamforming to separate probe signals. In effect each aperture would constitute a multiple-pixel receive optical system, with one pixel dedicated to each probe.

12.2.2. Wavelength-division multiplexing (WDM)

In WDM each probe is assigned a different wavelength at the receiver, with an optical bandpass filter and dedicated optical detector servicing each probe. WDM requires precise transmitter wavelength control, which is a challenge complicated by Doppler shifts (see §12.1.1 and §12.1.2) and implementation of a serial bank of optical bandpass filters to separate signals from different probes (see §13.2).

12.2.3. Time-division multiplexing (TDM)

In TDM probe transmissions are assumed to not overlap one another in time at the receiver. Ideally those transmissions are at a common received wavelength, allowing for a single optical bandpass filter and detector. TDM requires precise transmitter knowledge of time of arrival of its signal at earth, which requires in turn a very precise clock and precise knowledge of its own trajectory (see §12.4).

12.2.4. Random-access multiplexing

This is a variation on TDM in which transmissions have a low duty cycle and are randomized so that the probability of overlapping receptions in the receiver is small (Abramson 1994). This may be natural in combination with BPPM, which trades higher peak power for low duty cycle (see §6.2). Collisions at the receiver could be treated in a similar way to outages (see §14.4), although they would complicate the ECC by causing frame errors rather than erasures.

12.2.5. Code-division multiplexing (CDM)

In CDM probes are assigned mutually orthogonal spreading sequences, which allows their signals to be separated (by cross-correlation with the spreading sequences) in spite of both wavelength- and time-overlap. It is useful to think of CDM as maintaining orthogonality using a different orthogonal basis for signals, where the basis functions do not align with either time or frequency (see §13.1.3).

For equivalent per-probe data rates R, to first order WDM, TDM, and CDM all expand the total optical bandwidth by a factor of Jp. In the case of TDM this is because each probe has to transmit at rate JpR during its assigned time slot, and for CDM the spreading sequence expands the bandwidth of each probe's signal by a factor of Jp. In practice the bandwidth is larger after accounting for guard bands (WDM) and guard times (TDM) due to imprecise knowledge of wavelength and time at the probes. As illustrated by BPPM, any form of TDM will also increase the peak transmitted power requirement commensurate with Jp.

12.3. Clock accuracy and stability

Particularly at high photon efficiencies, data representation by accurate timing and/or accurate wavelength will be essential. This represents a significant challenge in a system distributed over interstellar distances where relativistic speeds are involved (see §12.1), and one in which atomic-clock frequency and time standards are not likely to be practical in a low-mass probe transmitter. Along with the generation of high peak-power pulses (see §13.1), the resulting short- and longer-term clock inaccuracies will limit the timing accuracy (as represented by slot time Ts in BPPM) that can be obtained, and thereby will limit the photon efficiency.

There are two related issues of clock synchronization: between probe and receiver, and among aperture clocks which provide timestamps attached to photon-detection events. While apertures co-located on a common site can share a common clock source with atomic accuracy and stability, the probe clock will have relaxed crystal-oscillator accuracy and stability and will be affected by uncertainties in probe speed (see §12.1.1). It will be necessary for the receiver to estimate and track the transmit timing. Fortunately this need not be real-time, but rather can be delegated to a post-processing phase based on the entire record of photon-detection events gathered over the entire mission. The probe can assist in the process by transmitting deterministic sequences of pulses interleaved with the random encoded scientific data. The accuracy of this tracking will be limited by the accuracy with which individual photon detections are time-stamped, and in particular the relative phases of a common clock appearance at each of the apertures.

That relative offset is largely determined by temperature variations, and is on the order of 3 to 10 ns when using the Global Positioning System (GPS) for time-transfer, with larger absolute diurnal and seasonal variations (Lewandowski et al. 1999). It may be possible to track these absolute variations in the post processing, similarly to the probe clock, although the latter is much easier due to its reliance on aggregate (as opposed to per-aperture) photon counts. Greater accuracy and stability may be feasible through a subterranean temperature-controlled optical distribution of clocks not relying on GPS. An absolute accuracy taking into account all sources of variation on the order of 1% of the slot time Ts will be required, placing a lower bound on Ts (and hence dark count rejection). While longer-term stability of the probe clock is less critical, shorter term instabilities will be difficult to track at the receiver due to the low photon-detection rate (even in the aggregate).

12.4. Probe navigation

Relative to an (approximate) inertial frame defined by the Solar System and target star, the probe needs to be self-aware of the time and position for both the probe itself and for earth, for a couple of purposes. First, the scientific instrumentation and interpretation of its data requires knowledge of the probe trajectory, hopefully with an accuracy less than a fraction of an AU. Second, accurate pointing of the transmit aperture so that the maximum signal power reaches the terrestrial receive aperture requires accurate knowledge of the angle and distance of the earth at the later time of arrival (see §12.6).

For these purposes the probe trajectory is parameterized by the probe elapsed time, the observed probe speed, and the observed angle of the sun's and the target star's position. There are at least a couple of sources of imprecision in these estimates. First, after 30 yr the variation in elapsed time measured by a probe clock is 16 min for each 10^-6 fractional variation in oscillator frequency.26 Second, at 4 ly and a nominal speed of 0.2c the variation in propagation time is ±72 days for each ±1% variation in probe speed. Thus a speed variation is generally a much bigger issue in accurate pointing toward an orbiting earth than is clock accuracy. Thus a Xtal oscillator without temperature compensation aboard the probe may suffice for onboard estimate of elapsed time, and whatever method is used for speed measurement and compensation for Doppler shifts (see §12.1.1) also applies to estimating propagation time.
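Both imprecision estimates are straightforward to reproduce; the sketch below uses the same nominal values (30 yr elapsed time, 4 ly, 0.2c) and matches the quoted figures to within rounding.

SEC_PER_YR = 365.25 * 24 * 3600.0
clock_err = 30 * SEC_PER_YR * 1e-6            # 30 yr at a 1e-6 fractional oscillator error
ly, c = 9.461e15, 2.998e8                     # light year [m], speed of light [m/s]
flight_time = 4 * ly / (0.2 * c)              # 4 ly traversed at 0.2c [s]
speed_err = 0.01 * flight_time                # propagation-time error for a 1% speed variation
print(f"clock error ~ {clock_err/60:.0f} min, speed-induced error ~ {speed_err/86400:.0f} days")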

12.5. Feedback Doppler compensation

The multiplexing techniques described in §12.2 require control of the relative wavelength differences among probes in a swarm to an accuracy on the order of the signal optical bandwidth We. This is especially true of TDM, since rejection of background radiation puts a premium on minimizing the bandwidth of a receive optical system that is shared among probe downlinks. Since We is relatively small, this strongly suggests the use of feedback to control the relative received wavelengths across probe downlinks.

Either the speed of each probe or the transmit wavelength in each probe can be configured using feedback. In both cases a downlink monochromatic beacon can be transmitted during the launch or for a short period following the launch, with its observed Doppler shift on the ground providing an accurate estimate of probe speed. Accurately controlling the final probe speed would require an adjustment to the time window for the directed energy beam. This may also be necessary (or at least helpful) for navigation purposes. Accurately configuring the transmit wavelength to compensate for the actual probe speed following launch would require a telemetry uplink. The latter approach has the disadvantage that the probe requires a communications receiver, although its implications would be minimized by the large available transmit power and the small amount of configuration data required.

26 Relative to terrestrial events the time elapsed on the probe is about 2% shorter than on earth, but this relativistic effect is known accurately and readily compensated.


12.6. Probe attitude control

Attitude control of the probe is required for the scientific mission (orientation of instruments) and for communication (aiming the transmit beam back to its terrestrial communication receiver). The pointing accuracy required for the transmission downlink is undoubtedly the most critical requirement. A three-axis closed-loop correction by photon thrusters driven by the probe electrical power source has been proposed (Lubin 2016).

12.6.1. Probe pointing accuracy

Assumptions as to receive aperture size are predicated on precise aiming of the probe's transmit beam to achieve the maximum power transfer. This is a major issue in solar system probes, and will be here as well. Pointing can be based on probe attitude control or beam angle configuration relative to the probe, or some combination of the two methods.

The angular spreading of the beam is a fraction of the Airy disk, and is thus proportional to λ/d where d is the radius of a circular aperture. Since the target must fall within this beam, indeed near the center, the pointing accuracy is not penalized by the great distance to the earth. The short wavelength λR in Tbl.3 reduces the permitted pointing error, while the relatively small size of the transmit aperture is helpful. For example, for the parameters in Tbl.3 this ratio λ/d = 0.8 arcsec is much more stringent than 58 arcmin for New Horizons (see Tbl.6) due to the shorter wavelength.

A significant issue is what reference can be used for pointing. Obvious candidates are the sun and the target star (which will fall in nearly the same direction). If the beam were to illuminate the entire solar system, centering it on or near the sun would suffice. However, for the aperture in Tbl.3 the Airy radius becomes 3.27 au at 4.24 ly distance and λR = 1 µm, decreasing to 1.2 au at λR = 400 nm. Thus, more specific aim at the position of earth is necessary. One tradeoff would be to reduce ATe and increase PTA, keeping ξTA constant, thereby reducing the required pointing accuracy at the expense of transmit power. Conversely, a larger transmit aperture (such as that assumed in (Parkin 2019)) would increase the required pointing accuracy accordingly.

It appears likely that the beam must be aimed at the moving position of earth within its orbit at the future date/time of signal arrival, rather than the solar system with the sun near its center. For this purpose a second axis of reference, such as another nearby star, would be necessary. Acquiring the direction of a pair of guide stars requires image sensing with an attitude control loop. Generally the resolution of this sensing has to be finer than the transmit beam width, implying a sensing aperture larger than the transmit aperture. This suggests sharing a single aperture on the probe between transmission and sensing, with a larger portion of that aperture used for guide star tracking than for transmission.

12.6.2. Attitude control operation

Fortunately a low-mass probe has no moving mechanical parts or fluids that contribute to attitude misalignment, so there are only three identifiable sources of attitude misalignment: motion of electrons within the probe electronics, the momentum recoil from the transmit beam, and (likely most significantly) collisions with interstellar particles. This suggests that attitude adjustment and transmission should not occur concurrently, for several reasons:

• If attitude misalignment is infrequent the duty cycle of attitude adjustment may be low.

• The sharing of a common aperture between sensing and transmission is a mass-saving measure that is easier if the two functions are not concurrent.

• Each function can utilize the entire available electrical power, maximizing the data rate and minimizing the time consumed by attitude adjustment.

• During periods of misalignment the data transmission is likely to be unreliable so this becomes an outage condition, whether the probe is transmitting or not.

• Any effect of transmission beam momentum interfering with attitude adjustment is minimized. If thrusters and communications utilize the same electrical source, then downlink transmission exerts an equivalent recoil force on the probe. If transmission occurs only during periods when the attitude is within a tight tolerance, since the probe trajectory is nearly rectilinear the only significant effect is a tiny increase in probe speed.27

• Although the suspension of transmission during periods of attitude adjustment slightly reduces the average data rate, the greater electrical energy available during each transmission can compensate for this with a higher nominal data rate.

12.7. Coronagraph function

The coronagraph function provides extra attenuation for interference originating at the target star (see §10.9.8). Each receive aperture shared over all probe downlinks in a swarm will have a coverage that includes probes just completing their target star encounter, and thus will have limited or no attenuation of the target star radiation. A requirement on the order of Fc = 10^-2 to 10^-4 follows (see §10.9.8), necessitating coronagraph-specific measures other than limiting the coverage angle.

27 The trajectory curvature due to the target star gravitation will introduce a small momentum component transverse to the trajectory that needs to be accounted for.

There are various approaches, some of which are:

• Delay download transmission following encounter long enough for proper motion of the star to separate probe trajectories from the star (Parkin 2019).

• Arrange for an aperture coverage to be limited to a single probe at a time (as assumed in (Parkin 2019)). This is accomplished by duplicating apertures. Or with some forms of TDM, the coverage of a single aperture can be switched to the single probe whose signal is currently being received.

• For a common aperture implemented as a phased array of elements, implement multiple beams directed at the different probes.28 To avoid a reduction in SBR this would have to be accomplished with high quantum efficiency and without amplification.

• Use techniques similar to those being pursued in the direct imaging of exoplanets, which also requires rejection of target-star radiation. These techniques include coronagraphs and interferometers (Traub & Oppenheimer 2010) and external occulters (also known as starshades) (Cash 2011). For example, a notch (high rejection) permanently located at the target star location might be added to the aperture by adjusting the phase/magnitude in the combiner, thus creating a spatial filtering function.

The phase and magnitude in a phased-array combiner have to be implemented extremely accurately. Phases also have to be dynamically adjustable to account for parallax and the proper motion of the star (see §9).

A promising direction draws on advances in direct exoplanet imaging. However, some differences inherent to the downlink design challenge should be noted:

• For a ground-based observation platform, dynamic pointing of the aperture in the presence of atmospheric refraction and turbulence is an issue.

• The optical bandwidth is narrow and the wavelength can be chosen to minimize the target star interference (see §10.9.3).

• A single probe downlink will typically operate for years (see §7) and a swarm of probes correspondingly longer.

28 Such an approach is being explored at millimeter wavelengths for application to 5G cellular (Roh et al. 2014).

13. CRITICAL TECHNOLOGIES

The tradeoffs quantified in §10.9 identify three technologies whose stringent requirements play a significant role in success or failure. These are the transmit light source, the receiver optical bandpass filtering, and the photon-counting optical detectors. It is clear that the current state of source and detector technologies is not consistent with the low-mass probe downlink application. Fortunately they can be expected to advance in the application timeframe (see §2.5), but this introduces considerable uncertainty. We have not identified any limiting physical principles that impede future advances. Also, the configurable duty cycle δ in BPPM beneficially offers a tradeoff among all three technologies, permitting some flexibility in their relative advancement (see §6.2).

13.1. Transmit light source

In an optical direct-detection communication system employing PPM and BPPM, high peak powers are fundamental to achieving high photon efficiency (Jarzyna et al. 2018). This is reflected in the theoretical limit of (3), which predicts a logarithmic relationship between the achievable BPP with reliable data recovery and PAR. High PAR is achieved by reducing slot time Ts and increasing peak power PTP to keep the pulse energy PTP·Ts constant. This relationship of PAR and BPP is discussed in practical terms in §14. However this straightforward approach results in multiple-kilowatt peak powers (see Fig.21). Optical amplification of semiconductor laser outputs can attain watt-level peak powers, and Q-switched lasers can attain kilowatt levels but suffer from low electrical-to-optical conversion efficiency (Banaszek & Jachura 2017).

13.1.1. Design parameter adjustment

The required peak power can be reduced by several adjustments in design parameters singly or in combination, although these unfortunately all trigger other issues. Reducing the coverage solid angle ΩA benefits the peak power because of the higher sensitivity of a large aperture (see Fig.17), but the total receive collector area is also increased (see Fig.16). Increasing the transmit aperture area ATe is effective if the power-area product PTP·ATe is fixed. Reducing the dark count rate results in lower peak power (see Fig.21), but the effect is limited by the continued presence of moonlight interference. Smaller peak power can be achieved by reducing the photon efficiency BPP and hence lower PAR (see Fig.9), but this requires a very large total receive aperture area to compensate for the reduced efficiency.

13.1.2. Pulse compression

Optical interference can be used in place of emitting high powers directly from an optical source. A straightforward example of this is the ganging of multiple semiconductor lasers in the transmitter with an optical combiner. Unfortunately this approach is expected to be inconsistent with the low-mass objective.

Pulse compression technology is often used to generate narrow high-power optical pulses. It relies on the observation that for two pulse shapes p1(t) and p2(t) with the same bandwidth W, p2(t) can be obtained from p1(t) by a linear filter with transfer function F(f) = P2(f)/P1(f), where P(f) is the Fourier transform of p(t). For example, a pulse with constant envelope |p1(t)| = A and approximate time duration T ≫ 1/W can be converted to a pulse p2 with approximate time-duration 1/W and envelope |p2(t)| = A√(WT) if energy is conserved.29 This process converts a longer lower-amplitude pulse into a narrower high-amplitude pulse with power gain equal to WT (the bandwidth-time product).
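A minimal numerical sketch of this power gain is given below, assuming a constant-envelope linear chirp for p1(t) and implementing the compression as an energy-conserving all-pass filter that removes the spectral phase; the parameter values are illustrative and not tied to any apparatus discussed here.

import numpy as np

A, T, W = 1.0, 1e-6, 100e6               # envelope, duration 1 us, bandwidth 100 MHz -> WT = 100
fs = 8 * W                               # sample rate with margin above the occupied band
n = int(fs * T)
t = np.arange(n) / fs

# Long constant-envelope linear chirp sweeping 0..W Hz over duration T
p1 = A * np.exp(1j * np.pi * (W / T) * t**2)

# All-pass compression filter F(f) = |P1(f)|/P1(f): only the spectral phase is
# removed, so pulse energy is conserved, as assumed in the A*sqrt(WT) argument.
P1 = np.fft.fft(p1)
p2 = np.fft.ifft(np.abs(P1))

print(f"WT = {W*T:.0f}, predicted envelope gain sqrt(WT) = {np.sqrt(W*T):.1f}, "
      f"observed = {np.max(np.abs(p2))/A:.1f}")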

Pulse compression has to be performed in the optical domain, typically by creating targeted interference with arrangements of diffraction gratings (Treacy 1969) or prisms (Kafka & Baer 1987). This apparatus achieves wavelength-dependent group delays to align lower- and higher-frequency components in time. A structured receiver approach has been proposed that uses more complex long-duration waveforms with matched filtering (Guha 2011). Due to the linearity of Maxwell's equations the pulse-compression filtering could be performed in the receiver, reducing the mass burden on the probe. However, while pulse compression works well in generating picosecond pulses, it would be difficult to generate 10−100 ns pulses as required in this application. This is because light travels a distance of one meter in ∼3 ns in a vacuum, and such an optical apparatus would necessarily be physically large. Of course shorter pulses could be generated, but the required reduction in pulse duration and attendant increase in peak power would be counter-productive.

13.1.3. Modulation code layer

The need for short pulses can be attributed to the choice of PPM for the modulation code layer, and it is intriguing to consider alternatives. One approach is to extend the code-division multiplexing idea described in §12.2.5 to the modulation coding layer for a single probe downlink.

Consider a set of L waveforms pi(t), 1 ≤ i ≤ L, each bandlimited to W Hz and confined to time interval 0 ≤ t ≤ T with the property that

∫ pi(t)·pj*(t) dt = A for i = j, and 0 for i ≠ j ,

where the integral is over −∞ < t < ∞.

29 A function bandlimited to W cannot be strictly time-limited, and a function time-limited to T cannot be strictly bandlimited. However these conditions can be approximated if WT ≫ 1 (Slepian & Pollak 1961). The narrow pulse p2(t) violates this condition and will be only roughly confined to time duration 1/W.

This requirement can be approximately satisfied whenever L ≤ WT, with increasing accuracy as WT → ∞ (Slepian & Pollak 1961).

The modulation coding layer can transmit one of the L waveforms, and at the receiver a parallel set of filters (one matched to each possible waveform) can estimate which waveform was transmitted (Banaszek et al. 2019a). A PPM modulation code is a special case of this scheme with L = M ∼ WeT, where M is the number of slots in a PPM frame, and frequency-shift keying (FSK) is another. FSK and other options could intrinsically allow the light source to generate a waveform with constant power with duration T (rather than duration T/M as in PPM). However, there is a major disadvantage, and this is the need for L optical detectors, one for each matched filter, multiplying the total dark count rate accordingly. PPM has the unique property that a single optical photon-counting detector (with its dark counts) is needed.

Intermediate cases are possible. For example if M = I·J then each candidate transmitted pulse could be confined to one of I time slots (each with duration T/I) and one of J frequencies. Only J < M optical detectors would be needed.

13.2. Optical bandpass filtering

Bandpass filtering in the optical domain is a critical system component with several requirements that are challenging to achieve individually and collectively. Numerical results in §10.9.9 suggest a BPPM slot time Ts ∼ 0.1−1 µs, which implies We ∼ 1−10 MHz. An optical bandwidth We larger than necessary results in a larger cosmic source of background (see §10.9.12). At λ = 1 µm, We = 1 MHz results in a Q-factor (ratio of center frequency to bandwidth) of 3×10^8. Such Q's are being approached with recent technology.30
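The relationship among slot time, minimum optical bandwidth, and the implied filter Q at λ = 1 µm is just the ratio of optical frequency to bandwidth; for example:

c = 2.998e8
lam = 1e-6                               # lambda = 1 um
for Ts in (1e-7, 1e-6):                  # BPPM slot times 0.1 us and 1 us
    We = 1.0 / Ts                        # minimum optical bandwidth We ~ 1/Ts
    Q = (c / lam) / We                   # Q = center frequency / bandwidth
    print(f"Ts = {Ts*1e6:.1f} us -> We = {We/1e6:.0f} MHz, Q = {Q:.1e}")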

There are other requirements on the bandpass filtering. If WDM is used to separate the probe signals (see §12.2.2) we actually need a bank of bandpass filters with nearly adjacent passbands. To avoid signal attenuation, this filter bank has to be serial rather than parallel, with each filter splitting off the optical signal in one band while providing low insertion loss outside this designated passband. Further, the locations of these passbands have to be agile to adjust to changing Doppler shift due to the earth's motion (see §12.1.2) and possibly also to adjust for variations in the probe transmit wavelengths (see §12.1.1).

13.3. Optical single-photon detectors

The numerical results point to the need for optical detectors with dark count rates that are extremely low (see §10.9.7). Superconducting detectors, which can successfully detect individual photons and have intrinsically low dark count rates, will be necessary.

30 For example cavity-based high-Q optical bandpass filters with Q = 8×10^7 are reported in (Spencer et al. 2014, 2012).

Dark-count rates of ΛD = 10^-4 Hz have been reported for a superconducting nanowire detector (Shibata et al. 2015). This is two orders of magnitude higher than the objective of Tbl.3, which translates to ΛSD = 10^-6 Hz. In (Shibata et al. 2015) the black body radiation from the connecting optics is reported as the major source of dark counts, and this source is substantively different than intrinsic dark counts in the detector itself since the dark count rate is proportional to optical bandwidth in the former and independent of bandwidth in the latter. Low dark count rates were obtained by inserting a cold 100 GHz bandpass filter between optics and the nanowire. However, the signal optical bandwidth in Tbl.3 is four orders of magnitude lower than this, opening an opportunity to further reduce the optics blackbody radiation.

In fact there is an opportunity to locate all the receiver optical bandpass filtering at the output of the optics and input of the optical detector. In this case the dark count rate originating as blackbody radiation in the optics would be comparable between PPM and FSK, since the total bandwidth We for the two modulation schemes is nominally equal (see §13.1.3). This example illustrates the intimate relationship between the design of the modulation coding and physical constraints and characteristics.

Superconducting detectors are intrinsically free of dark counts, but material impurities and external sources of radiation (radioactive decay and cosmic rays) may be an issue. Some other measures can be taken to minimize the dark-count rate, including the following ideas:

• Cryogenic cooling of the optics and other photonic elements.

• Better shielding of optics and detectors from external radiation.

• Many intrinsic dark counts may be manifested by out-of-band (higher-energy) photons, and these events may be recognizable in the electrical pulse amplitude at the detector output.

• Share a single optical detector over multiple apertures, reducing the per-aperture dark count rate accordingly (see §5.4). For example multiple fibers might be attached to a large-area detector. These fibers may need to be optically isolated, or this may be unnecessary at the extremely low photon detection rates expected.

• It is also feasible to operate with SBR < 1, although the theoretical photon efficiency falls off rapidly as SBR decreases in this regime. While this would allow for greater dark count rates, all else equal, it would have other negative consequences.

14. DATA RELIABILITY

Scientific data, especially after aggressive compression, must be recovered with very high reliability to maintain the integrity of scientific conclusions and outcomes. Achieving this high reliability in the face of impairments like noise and interference, atmospheric turbulence, and outages is a challenge. In the interest of a smaller receive aperture we seek high photon efficiency BPP (see §10.5.2), and this magnifies the challenge. Even after the aforementioned impairments are tamed, signal shot noise (a feature of the laws of quantum mechanics) places theoretical limits on the BPP that can be achieved consistent with reliable data recovery. Here we discuss this issue from an intuitive perspective intended to be accessible to those not versed in communication and information theory, with technical details relegated to Appendices.

14.1. Reliability is the hard part

There is no theoretical limit on photon efficiency if reliability is not demanded. The challenge is in actually achieving high reliability in the recovery of scientific data in spite of high photon efficiency. This is easily illustrated using the encoding scheme of PPM, one frame of which was illustrated in Fig.4c. The reliability challenge comes about because even with KRs average detected photons for each occupied slot (one out of M), the actual number of photons is random. This is a manifestation of the signal shot noise.

Assuming no spurious photon detections from background radiation (this would be another source of randomness to overcome), let the number of detected photons Y in time interval Ts be a random variable, which quantum mechanics predicts has a Poisson distribution. If the slot in question is the one containing non-zero average power, and the average number of detected photons in this slot is E[Y] = KRs, then

Pr{Y = k} = (KRs)^k · e^(−KRs) / k! .   (15)

An erasure occurs in a PPM frame when no photons are detected in all M slots, and this erasure has to be mitigated by ECC. Based on (15), this event Y = 0 occurs with probability Pr{Y = 0} = e^(−KRs).

Since one PPM frame communicates log2(M) bits using an average of KRs detected photons, the photon efficiency is

BPP = log2(M) / KRs = m / KRs .   (16)

Obviously BPP → ∞ as KRs → 0, so achieving a large BPP is not an issue. The problem arises when we consider reliability, since a side effect of reducing KRs is Pr{Y = 0} → 1 as KRs → 0, and thus a large and increasing fraction of PPM frames are erased. Even KRs = 1 in the absence of ECC would result in unacceptable reliability, since in that case an erasure occurs in 37% of PPM frames on average, which is not suitable for scientific data.31 A more acceptable erasure probability of e^(−KRs) ≈ 10^-7 can be obtained with KRs = 16, but this would result in BPP < 1 for practical values of m.

Conceptually we can achieve high BPP (albeit in an impractical way) by manipulating m. For any value of KRs, even a larger value like KRs = 16, (16) indicates that any arbitrarily large BPP can be achieved by choosing m sufficiently large, because this conveys an increasing number of bits at the expenditure of a fixed number KRs of average photon detections. However, this quickly becomes impractical due to a vanishingly small slot duration Ts. For example, if we choose the typical values of BPP = 10 and KRs = 16, then we infer from (16) that m = 160 is required. To achieve a data rate of R0 = 1 b/s with conventional PPM we must map 160 data bits into a PPM frame with duration 160 seconds, requiring a slot time of

Ts = 160 s / 2^160 ≈ 10^-34 ps ,   (17)

which is, needless to say, far beyond the capability of our electronics.
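The arithmetic behind this example, using only (16) and the target values quoted in the text, is reproduced below.

import math

BPP_target, Ks = 10.0, 16.0
m = BPP_target * Ks                        # Eq. (16): BPP = m / Ks  ->  m = 160 bits per frame
frame_duration = m / 1.0                   # seconds per frame at R0 = 1 b/s
Ts = frame_duration / 2.0**m               # slot time for M = 2^m slots, Eq. (17)
print(f"m = {m:.0f}, per-frame erasure probability = {math.exp(-Ks):.1e}, Ts ~ {Ts:.1e} s")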

Although impractical, improving BPP by increasing m does uncover an important principle. Namely, a way to achieve high photon efficiency coincident with high reliability is to increase the PAR (and consequently the bandwidth) of the signal. This is suggested by the theoretical limit of (3) as well. This principle applies at both radio and optical wavelengths.32 We now illustrate how efficiency and reliability can be achieved in a practical way by greatly moderating the bandwidth requirement.

14.2. Modulation and coding architecture

The theoretical bound of (3) suggests that PAR ∼ 2^10 is necessary to achieve BPP ∼ 10 b/ph, a value dramatically lower than would ever be achievable by using a 'raw' PPM frame to communicate our data as in (16). A practical architecture achieving this lower PAR is shown in Fig.25. It is organized into layers, with each layer split between coordinated functionality in the transmitter and receiver. This architecture, in conjunction with specific choices for the modulation code and error-correction code (ECC), is not only theoretically solid, but BPP = 13 has been achieved in bench testing (Farr et al. 2013).

31 In the physics literature one occasionally sees an analysis of PPM with Y ≡ 1, claiming a deterministic number of detected photons and a large resulting value of BPP (see (Hippke 2017) for an example). This analysis is not valid because it neglects the stochastic nature of quantum photon detection, which at KRs = 1 results in an unacceptably large erasure probability.

32 For the radio case, where the channel model is quite different, this principle is described and quantified in (Messerschmitt 2015).

Figure 25. Coordinated transmit-receive communications architecture. Functionality is divided into three layers (physical layer, modulation coding layer, and ECC layer), with each layer divided into a transmit and receive component. Logically each layer in the transmitter is coordinated with its counterpart in the receiver.

The physical layer involves all the physical elements of the downlink, including optical source and detector and transmit and receive apertures. Its input is an intensity-vs-time waveform, and its output is a sequence of photon detection events.

The role of the modulation coding layer is to represent discrete data by a continuous-time intensity waveform at the transmitter, and interpret the resulting photon events observed in the receiver in terms of the transmit data (but with poor reliability). For our purposes our modulation code is described by BPPM in Fig.4, which consists of a sequence of PPM frames, each frame representing log2(M) bits of data. This simple scheme, operating in conjunction with the ECC layer, allows us to achieve high photon efficiency BPP with high reliability in data recovery.

14.3. ECC layer

In contrast to the approach taken in §14.1, the way to achieve a large BPP is to operate the modulation code layer with very poor reliability; that is, choose a small value for KRs (see (18)), which is a photon-starvation mode. The value KRs = 0.2 chosen in Tbl.3 results in a frame erasure probability e^(−0.2) = 0.82, so approximately 82% of frames are lost to erasures. Although this choice may seem arbitrary, it is further justified in §14.3.4.

The role of the ECC layer is to reconstruct a highly reliable replica of the scientific data. In order to dramatically improve the reliability, the ECC coding layer in the transmitter adds redundancy to the scientific data. In the receiver the ECC decoding layer makes use of this added redundancy (the precise structure of which is known to the receiver) to reconstruct a replica of the scientific data with dramatically improved reliability.

The principle behind the ECC layer can be described abstractly as follows. For BPP = 10.9 bits/ph in Tbl.3, each scientific data bit recovery is based on an average of 0.092 photon detection events. A way (the only way) to achieve reliability in photon-starvation mode is to recover multiple data bits based on a commensurately large number of photon detections, exploiting the law of large numbers to yield less random variability in the number of photon detection events. This can still be accomplished in the context of a PPM modulation coding layer by representing the data bits by pulses with duration Ts. Thus BPP = 10.9 might actually be achieved by exchanging an average of 100 detected photons for 1090 bits. At a data rate of R = 1 b/s this implies an average of 100 photon detection events within 1090 sec (18 minutes), or (almost) equivalently 100 detected pulses on average. There must be 2^1090 (∼10^328) distinguishable patterns of pulses, one pattern for each possible combination of 1090 bits (compare this to the ∼10^80 atoms in the visible universe). The error-correction decoder examines the random pattern of actual detected pulses, and infers the most likely pattern, and hence the most likely combination of 1090 bits that were represented in the transmitter.

14.3.1. Small codebook example

More concretely, ECC in the context of a PPM modulation coding layer associates k > 1 bits with a group of l > 1 PPM frames, each having M = 2^m slots. The set of 2^k frame groupings (one for each set of k input bits) is called a codebook. Since these l PPM frames could in principle represent as many as lm bits, not all the possibilities are included in the codebook as long as k < lm. This is what we mean by redundancy, which can result in dramatically improved reliability by circumventing erasures that have occurred.

This concrete description of the ECC layer can be il-lustrated as in Fig.26 for three very simple examples. InFig.26(a) k = 1 bits (2 codewords) are represented in acodebook of l = 1 PPM frame with M = 2 slots, in (b)k = 2 bits (4 codewords) are represented in a codebookof l = 2 PPM frames with M = 4 slots, and in (c) k = 4bits (16 codewords) are represented in a codebook ofl = 4 PPM frames with M = 4 slots. The redundancyin the three cases is zero for (a) and 0.5 for (b) and (c).If the transmit energy per PPM frame remains fixed, theaverage detected photons per frame has the same valueKRs for all cases. Thus, the photon efficiency has the

same value (BPP = 1/KRs ) for all three cases, and thus

the probability of a single erasure remains the same.In Fig.26(a) and (b) all possible erasure events are

enumerated (for brevity this is omitted in (c)). In (b)the input data can be inferred even with a single erasure,and data is lost only if there are two erasures. In (c) eachpair of codewords agree in at most one PPM frame butdisagree in at least three, so one or two erasures (outof a possible four) result in no data loss. We can easilycalculate the probability of these events, with the resultthat for Ks = 12 (BPP = 0.083 b/ph) the probabilityof data loss is 6.1×10−6 for (a), 3.8×10−11 for (b), and

[Figure 26 here: codeword tables for examples (a), (b), and (c), using the legend "t" = many transmitted photons, "d" = one or more detected photons, "n" = many transmitted but zero detected photons, "_" = zero transmitted and detected photons.]

Figure 26. An example of the benefits of error-correction coding (ECC) to improve the data-recovery reliability in conjunction with pulse-position modulation (PPM). (a) One scientific data bit is mapped into a single PPM frame with M = 2 slots. The effect of an erasure is shown. (b) Two data bits are mapped into a pair of PPM frames, each with M = 4 slots. The three possible erasure scenarios are shown. (c) Four data bits are mapped into four PPM frames, each with M = 4 slots. Erasure scenarios are not shown. While the photon efficiency is the same in (a), (b), and (c), mapping a larger number of data bits into a larger number of frames results in improved immunity to erasures.

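The data-loss probabilities quoted above follow from elementary counting. A minimal check, assuming statistically independent frame erasures each occurring with probability e^{−K_s}, with K_s = 12 as in the example:

```python
from math import comb, exp

Ks = 12.0                 # average detected photons per PPM frame (example above)
p = exp(-Ks)              # probability that a single frame is erased (zero detections)

loss_a = p                                            # (a) any erasure loses the bit
loss_b = p**2                                         # (b) lost only if both frames erased
loss_c = sum(comb(4, k) * p**k * (1 - p)**(4 - k)     # (c) lost only for >= 3 of 4 erasures
             for k in (3, 4))

print(f"(a) {loss_a:.1e}   (b) {loss_b:.1e}   (c) {loss_c:.1e}")
# ~6.1e-06, ~3.8e-11, ~9.3e-16, matching the values quoted in the text
```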

14.3.2. Theoretical constraint

The photon efficiency for the codebooks displayed in Fig.26 is nowhere near the BPP = 10.9 b/ph that we are seeking. One of the central results of information theory is that under certain conditions the existence of a sequence of ever-larger codebooks is guaranteed, with the desirable property that the probability of data loss decreases to zero. In particular, for the parameters BPP, K_s^R and M the condition is (see §F)

BPP < (1 − e^{−K_s^R})/K_s^R · log2 M < log2 M .   (18)

This is consistent with the theoretical bound (3) presented earlier.33 It confirms the significance of photon-starvation mode, because BPP in (18) approaches (3) as K_s^R → 0, and thus we must choose a very small value of K_s^R to achieve a BPP near the theoretical maximum for the chosen m.

33 The value of BPP assumed in Tbl.7 and in the numerical calculations of §10.9 assumes equality in (18). This is recognizably optimistic, but something close to this should be achievable in practice with an appropriate design of ECC accompanied by large processing resources in the receiver.
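As a quick sanity check on the numbers used throughout, the bound (18) can be evaluated directly; the minimal sketch below assumes the nominal K_s^R = 0.2 and m = 12 used elsewhere in the text.

```python
from math import exp

def ppm_bpp_bound(Ks, m):
    """Upper bound (18) on photon efficiency (b/ph) for PPM with M = 2**m slots."""
    return (1 - exp(-Ks)) / Ks * m          # log2(M) = m

print(ppm_bpp_bound(Ks=0.2, m=12))          # ~10.88 b/ph, the BPP ~ 10.9 of Tbl.3
print(ppm_bpp_bound(Ks=1e-6, m=12))         # -> 12 = log2(M) as Ks -> 0
```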


14.3.3. Role of redundancy

We are assured by (18) that photon starvation isn't detrimental to data reliability, as long as ECC overcomes the inherent unreliability at the modulation coding layer. The explanation for this is the inherent redundancy. The average rate of photon detections is R/BPP, and with K_s^R average photons per PPM frame, the rate of PPM frames is R/(K_s^R · BPP). Thus the "raw" bit rate at the input to the modulation coding layer is

m · R/(K_s^R · BPP) = (12 · 1)/(0.2 · 10.9) = 5.5 b/s .

Thus, 4.5 b/s of this "raw" bit rate is redundancy, and 1 b/s is scientific data (82% redundancy and 18% scientific data). According to (18), this redundancy offers a sufficient opportunity to overcome the frequent erasures in PPM frames suffered in the modulation coding layer.
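The redundancy split quoted above is simple arithmetic; a minimal restatement using the R = 1 b/s, K_s^R = 0.2, m = 12, and BPP = 10.9 values from the text:

```python
R, Ks, m, BPP = 1.0, 0.2, 12, 10.9       # values quoted in the text

frame_rate = R / (Ks * BPP)              # PPM frames per second
raw_rate = m * frame_rate                # bits entering the modulation coding layer
redundancy = raw_rate - R

print(f"raw rate {raw_rate:.1f} b/s, redundancy {redundancy:.1f} b/s "
      f"({redundancy / raw_rate:.0%} redundancy, {R / raw_rate:.0%} scientific data)")
# raw rate 5.5 b/s, redundancy 4.5 b/s (82% redundancy, 18% scientific data)
```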

While the theory backing up (18) strongly suggests that a very large ECC codebook is required to obtain high BPP, we have to fall back on best practices in actually choosing such a codebook. For a high SBR, high reliability can be obtained (Farr et al. 2013) using Reed-Solomon coding (Wicker & Bhargava 1999), which is well suited to overcoming frequent erasures.34 At lower values of SBR more advanced and modern coding techniques have to be adopted, a topic beyond the scope of this paper.

14.3.4. Quantum limit

The question arises as to the degree to which the architecture of Fig.25 and the specific choice of PPM for the modulation coding limit performance. Theoretical limits on photon efficiency BPP with reliable data recovery are identified in the Appendices:

• Any physical layer that makes use of electromagnetic transmission is governed by a fundamental quantum limit on reliable data recovery (see §D). This limit is expressed in terms of the photon density PPD (the average number of photons per dimension). There has been some exploration of technologies that may be able to approach this limit (Banaszek et al. 2019b).

• Any physical layer that modulates optical power in the transmitter and performs direct detection (photon counting) in the receiver has, at high SBR, a theoretical limit on BPP with reliable data recovery defined by (3) (see §E).

• The limit of (18) comes arbitrarily close to the photon-counting limit as K_s → 0 (see §F).

34 For a laboratory demonstration of high BPP in reliable optical communications, see (Farr et al. 2013). There are also tutorial papers (Messerschmitt 2008) and textbooks (Cover & Thomas 1991; Gallager 2008) that expand on the theoretical basis of ECC.

[Figure 27 here: BPP (bits per photon) vs log10(PPD = photons per dimension), with one curve per value of m and the quantum limit shown.]

Figure 27. A plot of the photon efficiency BPP at the theoretical limit of reliable data recovery (see §F). BPP is plotted against log10(PPD), where PPD is the photon density (average photons per dimension). For PPM we have PPD = K_s/M, and PPD is manipulated by varying K_s. The different curves are for the values m ∈ {5, 10, 15, 20}, where the number of slots per PPM frame is M = 2^m. The bottom dashed curve is the largest BPP that can be achieved by PPM (by optimizing the value of m for each PPD) and the upper dashed curve is the quantum limit on reliable data recovery. The dots correspond to K_s = 0.2, the nominal value chosen in Tbl.3, which is nearly BPP-maximizing for the m-range of interest. This displays the gap between PPM and the quantum limit, which shrinks for larger PPD. This gap captures the maximum increase in BPP that may be feasible with future technologies that achieve a more sophisticated manipulation of quantum states.

The shortfall of PPM (and hence of photon counting more generally) relative to the quantum limit is plotted in Fig.27 (this applies equally well to BPPM). The quantum limit is expressed in terms of PPD, the average photons per dimension, so the BPP for PPM is plotted against the same metric for different values of m. The specific values of (PPD, BPP) for K_s = 0.2 are identified by dots, which fall near the maximum achievable BPP, thus justifying this assumption in Tbl.3 and the previous numerical results.

As m increases in Fig.27, BPP increases as predicted in (3), since for PPM we have PAR = M. The largest BPP achievable by PPM is plotted as the lower dashed curve. Our nominal value K_s = 0.2 approximates this optimum choice. If a larger BPP is desired, it is preferable to increase m rather than to further reduce K_s.
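The gap shown in Fig.27 can be tabulated directly from (18) and the Holevo expression (D14) of §D.3; a minimal sketch, assuming PPD = K_s/M as in the figure caption and the nominal K_s = 0.2:

```python
from math import exp, log2

def bpp_ppm(Ks, m):
    """PPM photon-efficiency bound (18), in bits per photon."""
    return (1 - exp(-Ks)) / Ks * m

def bpp_holevo(ppd):
    """Quantum (Holevo) limit (D14), in the absence of background radiation."""
    return (1 + 1 / ppd) * log2(ppd + 1) - log2(ppd)

Ks = 0.2
for m in (5, 10, 15, 20):
    ppd = Ks / 2**m                       # photons per dimension for PPM (Fig. 27 dots)
    print(f"m={m:2d}  PPD={ppd:.1e}  PPM {bpp_ppm(Ks, m):5.2f} b/ph  "
          f"quantum limit {bpp_holevo(ppd):5.2f} b/ph")
```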

14.4. Outage mitigation

Outages are characterized by erasures which are due not to signal shot noise but rather to environmental factors experienced by a terrestrial receiver, such as sunlight and weather. The data loss due to outages will inevitably reduce the rate at which scientific data is communicated reliably immediately following encounter to R_a < R_0. A simplistic approach to overcoming outages would be to repeat the transmission of scientific data multiple times. We want R_a to be as large as feasible, and fortunately there are far more efficient outage-mitigation techniques based on a modification to the modulation coding and ECC layers in Fig.25. We can demonstrate theoretical upper limits on R_a for a particular statistical model of fully random outages, modify the modulation coding layer to approximate this model, and choose ECC codebooks which come close to these theoretical limits.

14.4.1. Outage characteristics

Outages manifested as frame erasures (rather than frame errors) are simpler to deal with (see §6.2 for the distinction), and since the receiver is aware of daylight and weather conditions it can enforce outages-as-erasures by simply ignoring the received signal during periods of questionable SBR. In this case the receiver will experience daylight and weather outages as contiguous bursts of PPM frame erasures. This distinguishes outage erasures from shot-noise erasures, which are temporally completely random.

14.4.2. Modification to modulation coding layer

In principle the increase in erasure frequency due to outages can be overcome by choosing a more powerful form of ECC. In practice, however, it is very difficult for ECC to deal effectively with temporally highly correlated erasures. Fortunately this challenge can be overcome by adding interleaving to the modulation coding layer as shown in Fig.28a. The interleaver in the transmitter scrambles the order of the bits in the input bit stream, and the de-interleaver in the receiver reverses that operation to recover the original ordering.35 The modulation-layer decoder outputs a stream of symbols with three possible values {0, 1, E}, where 'E' indicates that this bit is unknown because it experienced an erasure (it was one of the m bits in an erased PPM frame). Whether this erasure was due to shot noise (with probability e^{−K_s}) or outages (with probability P_W for weather and P_D for daylight) cannot be inferred by the PPM decoder, but nevertheless the knowledge of which bits have suffered an erasure is helpful in the subsequent error-correction decoding.

The interleaver utilizes a deterministic but pseudo-random algorithm with the goal of destroying the correlation of the erasures and making them appear to occur randomly and independently (like a sequence of flips of a biased coin). This requires a 'depth' of interleaving that is larger than the correlation scale of the frame erasures, which implies longer than the longest outage period (the longest credible weather-outage event). Nominally the number of erasures per error-correction codeword then obeys a binomial probability distribution which is identical for all codewords, yielding a more effective and efficient ECC. (A minimal interleaver sketch follows the footnote below.)

35 Operating the interleaving and de-interleaving at the bit level simplifies the following analytical argument. In practice the interleaving may be performed at the PPM frame level.
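A minimal block-interleaver sketch (not the specific design of this paper), assuming a seeded pseudo-random permutation shared by transmitter and receiver; it illustrates how a contiguous burst of erasures is converted into scattered, nearly independent erasures after de-interleaving:

```python
import numpy as np

def make_interleaver(depth, seed=1234):
    """Pseudo-random permutation shared (via the seed) by transmitter and receiver."""
    perm = np.random.default_rng(seed).permutation(depth)
    return perm, np.argsort(perm)      # the permutation and its inverse

depth = 10_000                 # depth should exceed the longest credible outage (in bits)
perm, inv = make_interleaver(depth)

bits = np.random.default_rng(0).integers(0, 2, depth).astype(float)
sent = bits[perm]              # interleave before PPM encoding

sent[4000:4500] = np.nan       # a contiguous outage: 500 erased bits (NaN stands for 'E')

restored = sent[inv]           # de-interleave: receiver restores the original ordering
gaps = np.flatnonzero(np.isnan(restored))
runs = np.split(gaps, np.where(np.diff(gaps) > 1)[0] + 1)
print(len(gaps), "erasures; longest contiguous run after de-interleaving:",
      max(len(r) for r in runs))
```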

[Figure 28 here: (a) block diagram of the modulation coding layer with PPM encode/decode bracketed by an interleaver and de-interleaver, with outages introduced in transmission (outage probability P_O); (b) the equivalent binary erasure channel mapping inputs {0, 1} to outputs {0, 1, E} with transition probabilities P_E and 1 − P_E.]

Figure 28. Modification to the PPM modulation coding layer which converts it to a binary erasure channel. (a) Bit-level interleaving and coordinated de-interleaving convert grouped erasures 'E' (due to shot noise, daylight, weather, etc.) into an equal number of pseudo-randomly distributed erasures. The PPM coding/decoding introduces erasures due to shot noise, while the transmission introduces additional erasures due to outages with probability P_W. (b) Single-bit erasure channel with erasure probability P_E, with transition probabilities P_E and 1 − P_E. Due to interleaving we assume that successive channel uses are statistically independent.


14.4.3. Modification to ECC layer

It is argued in §F.1 that the detected photons per frame K_s^R and the rate of PPM frames T_I^{−1} should not be changed with the addition of outages. Rather, the reliability of data recovery is theoretically preserved in the presence of outages if the input scientific data rate is reduced from its nominal R_0 by a factor of (1 − P_W)(1 − P_D). That is, the data rate is reduced on average by the fraction of PPM frames that are correctly decoded, which is intuitively the best we could hope for.

Since K_s should not be changed in response to outages, neither should P_P^T, A_e^T, T_s, A_e^S, or N_S. Assuming that T_I is also not changed, neither is the average transmit power P_A^T. The sole effect of a reduction in data rate to R_a < R_0 is an increase in the redundancy in the ECC in order to maintain a fixed data-recovery reliability in the face of the added erasures due to outages. Since the data rate is lowered without a reduction in average power, an unsurprising side effect is to reduce the photon efficiency BPP by the same factor. Fortunately this has no practical impact for a continuous electrical power source not based on fuel consumption.

If rate R_a is unacceptably small, it can always be increased by choosing a larger R_0. This of course will have implications throughout the design, such as increased average power, increased duty cycle with an attendant increase in the impact of dark counts, etc.


15. CONCLUSIONS

The design of a communication downlink from low-mass interstellar probes is extremely challenging. While the downlink violates no laws of physics and appears to be feasible in theory, the challenge arises when the characteristics of available technologies are taken into account. Innovation and invention in both system concepts and in the constituent technologies will be needed.

The primary areas of necessary technology development identified in this paper are:

• Low-mass light sources achieving high peak powers, possibly combined with optical pulse compression technology in either transmitter or receiver.

• Low-mass transmit aperture and coordinated attitude adjustment and pointing with sufficient accuracy.

• A feasible multi-probe multiplexing scheme taking into account inaccuracies in the knowledge of time, speed, and distance, including Doppler shift.

• Approaches to a near-earth uplink for configuration of transmit wavelength.

• Optical aperture possibly incorporating adaptive optics to counter atmospheric turbulence, achieving uniform sensitivity over a relatively large non-circular coverage angle, and with sufficient coronagraph rejection of target star radiation.

• Optical bandpass filters with sufficiently narrow bandwidth, and with agility and configuration capability to accommodate multi-probe multiplexing and uncertainties in Doppler shift.

• Receive optics and optical detectors with very low dark count rates, possibly with detector sharing over multiple apertures.

• Low-mass and low-power processing to realize the requisite compression and error-correction coding on board the probe.

This paper can serve as a roadmap to further investigation and research, and the ultimate outcome depends on the success of those efforts.

ACKNOWLEDGEMENTS

PML gratefully acknowledges funding from NASA NIAC NNX15AL91G and NASA NIAC NNX16AL32G for the NASA Starlight program and the NASA California Space Grant NASA NNX10AT93H, a generous gift from the Emmett and Gladys W. Technology Fund, as well as support from the Breakthrough Foundation for its Breakthrough StarShot program. More details on the NASA Starlight program can be found at www.deepspace.ucsb.edu/Starlight.

APPENDIX

A. RELATIVISTIC EFFECTS

Since a low-mass probe travels at relativistic speed, relativistic effects enter.

A.1. Transmitter and receiver motion

Assume an inertial frame S (one approximating a heliocentric coordinate system is convenient). Relative to the trajectory of a photon from probe to receiver, the probe and receiver have a longitudinal component of velocity and a small transverse component (which is neglected). Let the longitudinal components for the probe and the receiver relative to S be u_p and u_r respectively. The relationship between transmitted wavelength λ_T (observed in the rest frame of the transmitter) and received wavelength λ_R (observed in the rest frame of the receiver) is36

ξ = λ_R/λ_T = ν_T/ν_R = e(u_p, −1) · e(u_r, +1)   (A1)

e(u, ρ) = (1 − ρ u/c) / √(1 − u²/c²) .   (A2)

Thus ξ is the product of four factors, each of which models a distinct physical effect. The two numerators account for Doppler shifts due to changing propagation delay, while the two denominators model the relativistic time dilations of the transmit and receive clocks. Nearly the desired λ_R can be achieved by an adjustment of λ_T to compensate for the first factor, while smaller variations in u_p and u_r have to be accounted for in other ways (see §12.1.1 and §12.1.2).

36 This relation, including the product law, is derived in (Messerschmitt 2017). Note however that Eq. (17) of that reference is incorrect except at ρ = ±1, with the corrected value given by (A2).


A.1.1. Probe speed

The systematic red shift due to the nominal probe speed u_p = 0.2c results in ξ = 1.22 (a 22% shift). This should be compensated by an adjustment in λ_T. Since the energy of each photon is E = hc/λ_R, the energy of each photon at the receiver is lower than at the probe. Energy seen by an observer in S is conserved, including this photon energy loss and the kinetic energy gains of the probe (the recoil effect) and receiver (photon absorption) (Macleod 2004). The probe transmitter thus acts as a photon engine which continually increases u_p by a tiny amount. From the perspective of the received signal, the motion affects wavelength and not photon count, and thus has no effect on the data rate R obtainable by a direct-detection receiver.
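A minimal evaluation of (A1)-(A2) for the nominal probe speed u_p = 0.2c, assuming the receiver is at rest in S (u_r = 0):

```python
from math import sqrt

c = 299_792_458.0                       # m/s

def e_factor(u, rho):
    """The factor e(u, rho) of (A2)."""
    return (1 - rho * u / c) / sqrt(1 - (u / c) ** 2)

u_p, u_r = 0.2 * c, 0.0                 # probe receding at 0.2c, receiver at rest
xi = e_factor(u_p, -1) * e_factor(u_r, +1)
print(f"wavelength ratio xi = {xi:.4f}")    # ~1.2247, the ~22% red shift quoted above
```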

A.2. Uncertainty in probe speed

Assuming that u_r = 0, based on (A1), any uncertainty in probe speed u_p relates directly to an uncertainty in received frequency ν_R. Differentiating the relation ξν_R = ν_T with respect to u_p, we get

ξ · dν_R/du_p + ν_R · dξ/du_p = 0 .   (A3)

Eq.(14) follows from substituting (A2) in (A3).

A.3. Gravitational potential effects

There are small gravitational effects on the optical communication photons due to the gravitational potential in the target stellar system as well as in our solar system. Gravitational effects on the probe cause it to speed up as it approaches the target and slow down after encounter. Photons leaving the spacecraft before it passes the target undergo a gravitational redshift as they climb out of the target-system potential on the way back to Earth, and a gravitational blueshift as they enter our solar system and are detected at the Earth or nearby (a lunar base, etc.). For photons emitted after the target encounter there will be additional effects that need to be included due to the infall and exit of the photons as they pass through the vicinity of the target system.

This potential modifies the photon energy, and hence the frequency, in proportion to the integrated difference in gravitational potential divided by c², or one-half the ratio of the Schwarzschild radius to the distance from the object (star, planet, etc.). With an extremely narrow linewidth (bandwidth) of the transmit laser, the gravitational potential may be detectable, offering an opportunity for extremely sensitive measurements of gravitational potential. For example, with a transmit laser linewidth of 10 kHz at 300 THz a fractional frequency offset of 3·10^−11 may be observable. This is of course modified by the modulation and dependent on the laser technology. For comparison, the Schwarzschild radius of our Sun is about 3 km, and at a distance of 1 AU (1.5·10^8 km) this yields a gravitational shift relative to infinity of 1·10^−8. While small compared to the relevant Doppler shifts due to velocity effects, it is still 300× larger than the fractional laser linewidth. Even the gravitational potential of the Earth is relevant (Schwarzschild radius 9 mm with a physical radius of 6.4·10^6 m, yielding a shift of 8·10^−10).
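A minimal check of the fractional shifts quoted above, using Δν/ν ≈ GM/(rc²) = R_s/(2r) with the Schwarzschild radii and distances given in the text (the Earth value comes out near 7·10^−10, close to the rounded figure above):

```python
def grav_frac_shift(schwarzschild_radius_m, distance_m):
    """Fractional gravitational frequency shift relative to infinity, ~ R_s / (2 r)."""
    return schwarzschild_radius_m / (2.0 * distance_m)

AU = 1.496e11                                                     # m
print(f"Sun at 1 AU:   {grav_frac_shift(3.0e3, AU):.1e}")         # ~1e-8
print(f"Earth surface: {grav_frac_shift(9.0e-3, 6.4e6):.1e}")     # ~7e-10
print(f"fractional laser linewidth 10 kHz / 300 THz = {1e4 / 3e14:.1e}")   # ~3e-11
```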

B. ANTENNAS

Certain fundamental principles of antennas or apertures for electromagnetic communication illuminate both opportunities and constraints on our downlink. Our interest here is only in theory that applies to an aperture, which is where the coverage and the SBR are determined. Our aperture is assumed to be a diffraction-limited optical aperture with a single optical detector, which is essentially a single-pixel optical telescope. Our receive collector as a whole is not diffraction-limited (it has multiple detectors, and radiation adds incoherently), so the following principles do not apply to it. However, the coverage and SBR determined at the aperture level are preserved in the scale-out to N apertures (see §5.2.1).

B.1. Principles

As a consequence of Maxwell's model of electromagnetism, the reciprocity theorem states that when one antenna is used for transmission and another for reception, their roles can be reversed without any change to the outcome (see Appendix D of Wilson et al. 2009). It is often easier to analyze an antenna in transmit mode, even if it is to be used in receive mode, or vice versa.

Suppose a plane wave at wavelength λ and equivalent frequency ν has specific intensity I_ν as a function of frequency ν, defined as the flux (power per cross-sectional area) per frequency interval (Burke & Graham-Smith 2009). The corresponding intensity in a frequency interval dν is I_ν · dν. Since the bandwidths of interest are very small for interstellar communication (to eliminate most noise and interference), it is convenient to use ν and recognize that any variation in λ can usually be neglected. Thus we assume a fixed wavelength λ = λ_R, and simultaneously (and somewhat inconsistently) allow ν to vary over a bandwidth W_e.

For astronomical sources, which can often be considered isotropic emitters (at least over a finite range of directions of interest), it is often more convenient to use the specific brightness B_ν, which is the power per unit solid angle per frequency interval. At great distances a spherically expanding wave (as from a point source) can be considered a plane wave at the antenna (as long as the difference between a plane and a sphere is a small fraction of a wavelength). In that case, for a spherical wave at a distance D from the source, the conversion from specific brightness to specific intensity is I_ν = B_ν/D².


B.2. Effective area and gain

When a plane wave with specific intensity I_ν impinges on an antenna, we can measure the specific total power P_ν delivered to an impedance-matched termination (at radio wavelengths) or an ideal optical detector (at optical wavelengths). The effective area of the antenna is defined as A_e = P_ν/I_ν, which has the units of area. A_e usually depends strongly on the angle of incidence, and any dependence on ν can often be neglected over a small bandwidth W_e.

Often we are interested in the maximum value of A_e for the antenna's preferred direction (its "pointing" or "main beam"). For an aperture with geometric area A_g, in general A_e ≤ A_g, with equality when the aperture is diffraction-limited and the direction of propagation of the plane wave is aligned with the pointing.

An isotropic radiator does not favor one direction over another. Its specific brightness B_ν is constant in all directions. For a lossless isotropic radiator and lossless (free-space) propagation, conservation of energy gives 4πB_ν = P_ν.

The radiation pattern of an antenna of interest is conveniently specified by the ratio of its specific brightness in a particular direction to that of an isotropic radiator with the same power input. This value G = 4πB_ν/P_ν is the directivity or gain of the antenna in that direction.37

For a lossless diffraction-limited antenna, P_ν equals B_ν integrated over all directions, so

P_ν = ∫∫ B_ν · dΩ   or   (1/4π) ∫∫ G · dΩ = 1 .   (B4)

In words, the gain of any antenna averaged over all solid angles is unity, identical to the gain of an isotropic radiator in any direction.

It follows from the laws of thermodynamics (Wilson et al. 2009) that for some particular direction, for an antenna with gain G and effective area A_e in that direction,

G = 4π A_e / λ_R²   (B5)

Thus the gain of an antenna (a mostly transmission characteristic) is proportional to effective area (a mostly reception characteristic) expressed in wavelengths. It follows that for an isotropic radiator A_e = λ_R²/4π in all directions.

B.3. Highly directive antennas

For optical communications and astronomy we are interested in highly directive antennas. Consider an ideal directive antenna that in transmit mode concentrates all its flux uniformly across solid angle Ω_A (called its main beam), and there is zero flux outside this solid angle of coverage. Such an antenna is diffraction-limited when the phase shift from optical source to free space is fixed for all angles within Ω_A. Such an antenna used in receive mode has a fixed effective area A_e for any direction within the main beam. Note that the main beam area need not be circular for this to hold (this is important in our aperture application).

37 Directivity refers to an ideal antenna, while gain takes any losses in the antenna structure into account. Here we assume an ideal lossless antenna so there is no distinction.

Then based on (B4) the gain of this antenna is G = 4π/Ω_A, and substituting into (B5),

A_e Ω_A = λ_R² .   (B6)

The solid angle covered by this ideal main beam is inversely proportional to the effective area, the latter expressed in units of wavelength. As expected, directive antennas with a larger effective area are more directive. This idealized result can be approximated by a practical directive antenna, which introduces sidelobes due to diffraction.

B.4. Aperture response to different sources

The specific power P_ν delivered to a matched termination or absorbent surface by a lossless diffraction-limited aperture is now evaluated for the types of radiation sources of interest in interstellar communication.

B.4.1. Interference

Interference from the target star can be approximated as an isotropic point source with constant specific brightness B_ν at all angles. To a receive antenna at great distance D_0 from the target star this will appear as a plane wave with specific intensity I_ν = B_ν/D_0², and thus

P_ν = A_e I_ν = A_e · B_ν / D_0²   (B7)

for a receive antenna with effective area A_e in the direction of the interferer.

B.4.2. Probe transmitter

For communications, the optical bandwidth W_e will always be at least as large as the signal bandwidth. Thus it is more appropriate to leave out the 'specific' in intensity, brightness, and power, and model the total power over all frequencies. At distance D the average receive power P_A^S in a lossless antenna (aperture) with effective area A_e^S is

P_A^S = (P_A^T/4π) · (4π A_e^T/λ_T²) · (1/D²) · A_e^S = A_e^T A_e^S P_A^T / (λ_T² D²)   (B8)

for a transmit antenna with average transmit power P_A^T and effective area A_e^T, confirming (5). The four product terms in (B8) are (1) the brightness of the transmitter if its radiation were isotropic, (2) the transmit antenna gain, (3) the translation from brightness to intensity at the receiver, and (4) the receive aperture equivalent area (which converts intensity to power). This is the well-known Friis transmission equation (Schelkunoff & Friis 1952).

Although the transmit and receive wavelengths λ_T and λ_R differ substantially due to Doppler shift (see §A.1), the received power in (B8) does not depend on λ_R.
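A minimal numerical sketch of the Friis relation (B8) and the corresponding photon rate; the link parameters below are hypothetical placeholders for illustration only, not the design values of this paper (the distance is that of Proxima Centauri, and the received wavelength uses the ξ = 1.22 shift of §A.1.1):

```python
h, c = 6.626e-34, 2.998e8              # Planck constant (J s), speed of light (m/s)

def friis_received_power(P_T, A_eT, A_eS, lam_T, D):
    """Average received power per (B8): A_eT * A_eS * P_T / (lam_T**2 * D**2)."""
    return A_eT * A_eS * P_T / (lam_T**2 * D**2)

# Placeholder (hypothetical) link parameters:
P_T = 1.0                              # average transmit power, W
A_eT, A_eS = 1.0, 10.0                 # transmit / receive-aperture effective areas, m^2
lam_T = 400e-9                         # transmit wavelength, m
D = 4.24 * 9.461e15                    # ~ distance of Proxima Centauri, m

P_S = friis_received_power(P_T, A_eT, A_eS, lam_T, D)
lam_R = 1.22 * lam_T                   # Doppler-shifted received wavelength
photon_rate = P_S * lam_R / (h * c)    # received photons per second (ideal detection)
print(f"received power {P_S:.2e} W, ~{photon_rate:.2f} photons/s")
```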

B.4.3. Noise

Finally, consider noise due to unresolved sources of radiation. From the perspective of the receive antenna, we can consider this to be an isotropic source of radiation. In practice the radiation is never fully isotropic, because it is limited in extent (like Zodiacal radiation in a distant solar system) or is not fully visible to a terrestrial antenna (due to occlusion by the ground). However, if the antenna is highly directional, and its coverage coincides with radiation that is well approximated as uniform with direction, then the isotropic model should be accurate.

Isotropic radiation has constant specific brightness B_ν. For an antenna with gain G as a function of direction, the received power is

P_ν = ∫∫ G B_ν · dΩ = B_ν · ∫∫ G · dΩ = 4πB_ν   (B9)

making use of (B4). The received specific power is independent of the antenna effective area. This is why we can meaningfully speak of a CMB 'noise temperature' at microwave frequencies without referring to A_e^S.

A couple of special cases are of interest. First, if the isotropic specific brightness B_ν were actually generated by an isotropic radiator with power P_ν, then we know that G ≡ 1 and B_ν = P_ν/4π, consistent with (B9). Second, if a highly directive antenna receives isotropic radiation, then (B9) becomes

P_ν = ∫∫_{Ω_A} (4π/Ω_A) · B_ν · dΩ = 4πB_ν .   (B10)

In this case P_ν is independent of Ω_A (and hence of the effective area A_e) because the total noise power due to the size of the main beam and the gain within that main beam precisely offset.

An alternative interpretation of this last observation is that the received power is proportional to A_e^S (the conversion factor of input flux to power at the detector) and also to Ω_A (the fraction of total isotropic power reaching the detector) (see Eq. (8.76) of Biswas & Piazzolla 2006). This product A_e^S Ω_A is constant for a diffraction-limited highly directive antenna (based on (B6)).

C. DATA VOLUME VS. LATENCY

Let D_0 be the distance to the target star and D > D_0 the distance to the probe. The total latency (elapsed time since launch) for data transmitted from distance D is

T_L = D · (1/u_0 + 1/c)   (C11)

for fixed probe speed u_0. The terms in (C11) are, respectively, the elapsed time for the probe to reach distance D and the time for a photon emitted at distance D to return to the receiver, both measured relative to the rest frame of the receiver.

The appropriate data rate R for purposes of data volume is the transmitted data rate (11). The total data volume transmitted between distances D_0 and D is

V = ∫_{D_0/u_0}^{D/u_0} R(ζ, u_0 t) dt = (R_0 D_0/u_0) · (1 − D_0/D) .   (C12)

This entire V is decoded at the receiver during the time duration T_L. Eliminating D from (C11) and (C12), the latency can be expressed directly in terms of the volume,

T_L = D_0/u_0 + D_0 V/(D_0 R_0 − u_0 V) + D_0² R_0 / (c (D_0 R_0 − u_0 V))   (C13)

where the three terms are the transit time, the transmission time, and the return propagation time, all expressed in terms of an earth-based clock. After substituting for R_0 and u_0 in terms of ζ (see §7), this latency can be minimized over the choice of ζ.

For any R_0, V approaches a finite asymptote V → R_0 D_0/u_0 as D → ∞. This ultimate limit on volume with infinite latency is due to the decreasing data rate with distance. A larger R_0 and a smaller u_0 help to increase V, the latter effect due to the slower D-related falloff in R.

There will also be a small reduction in available electrical power during transmission (due, for example, to the half-life of a radioactive source), and this is not taken into account in (C12).
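A minimal evaluation of the latency-volume tradeoff (C12)-(C13), assuming D_0 = 4.24 ly (the distance of Proxima Centauri), the nominal u_0 = 0.2c, and R_0 = 1 b/s; these are illustrative inputs, not an optimized design point:

```python
ly, c, yr = 9.461e15, 2.998e8, 3.156e7        # meters per light year, m/s, s per year

D0, u0, R0 = 4.24 * ly, 0.2 * c, 1.0          # target distance, probe speed, initial rate

def latency_for_volume(V):
    """Total latency (C13): transit + transmission + return propagation, in seconds."""
    denom = D0 * R0 - u0 * V                  # positive only below the asymptotic volume
    return D0 / u0 + D0 * V / denom + D0**2 * R0 / (c * denom)

V_max = R0 * D0 / u0                          # asymptotic volume (infinite latency), bits
for frac in (0.25, 0.5, 0.75, 0.9):
    V = frac * V_max
    print(f"{V / 8 / 1e6:7.1f} MB -> latency {latency_for_volume(V) / yr:6.1f} yr")
print(f"asymptotic volume ~{V_max / 8 / 1e6:.0f} MB")
```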

D. QUANTUM LIMIT ON DATA RELIABILITY

The actual design of a concrete ECC for the photon-counting channel will not be pursued here, because ECC design is itself an extensive and sophisticated project. Nevertheless, it is important to determine how large a BPP consistent with reliable data recovery might be expected, and also to quantify how a larger BPP relates to a larger PAR and bandwidth W_e, because those two parameters interact strongly with other aspects of the downlink design.

D.1. Channel model

A given physical configuration is not generally associated with a theoretical limit on the data rate R, because we must also specify other details of the physical layer, such as aperture sizes and detection method, before such a limit can be established. Any theoretical limit is associated with a specific statistical channel model, which specifies, for each possible input (typically a continuous-time or discrete-time signal), a probability distribution of the output. Here we find it useful to address three such models: the quantum limit (see §D.3), direct detection of light (see §E), and direct detection in conjunction with a PPM modulation layer (see §F).

D.2. Channel capacity

Communication theory offers a way to establish a theoretical limit on the photon efficiency BPP which can be achieved consistent with reliable data recovery, through the concept of channel capacity C. Channel capacity can be calculated (using statistical techniques beyond our scope) for any specific channel model. A channel model can be discrete-time or continuous-time, with any combination of discrete or continuous inputs or outputs. Channel capacity does not place a limit on the data rates that can be achieved (see §14.1). What it does reveal is a theoretical limit on the data rate R that can be achieved with arbitrarily high reliability, applying to the specific channel model in question. The associated photon efficiency is easily inferred from BPP = C/Λ_A^R. In particular, capacity reveals that:

• For any R < C, a modulation code and ECC are guaranteed to exist that can achieve data rate R in conjunction with any arbitrarily stringent reliability requirement. Channel capacity does not reveal any concrete way to achieve R → C, as that is an entirely separate issue.38

• The existence of capacity does not preclude R ≥ C for some concrete implementation, for arbitrarily large R. However, it asserts that high reliability cannot be achieved at such data rates. Generally the highest achievable reliability will deteriorate as R increases.

Capacity is very useful as a tool in the preliminary design of a communication link because it represents an ideal to shoot for; that is, what BPP is possible for reliable data transmission. It also tells us, for any concrete implementation, how much further improvement may be possible with the application of additional effort and resources.

D.3. Holevo capacity

For optical communications, the most general channel model assumes that the transmitter and receiver can manipulate an arbitrarily large number of quantum states directly and simultaneously. The resulting Holevo capacity establishes a technology-independent theoretical limit on the data rate that can be achieved with reliable data recovery (Holevo 1998). Since we cannot avoid the randomness inherent in quantum mechanics in any physically meaningful way, the Holevo capacity represents an ultimate theoretical limit to aspire to, and it reveals what future progress may be feasible with new technologies not available to us today. Recent research is making some inroads in approaching the Holevo limit (Guha 2011).

38 Based on decades of research it is generally possible to narrow the gap between R and C, although attempting to do so may be impractical (due to complexity, storage and processing overhead, PAR, bandwidth, etc.).

[Figure 29 here: log10 BPP vs log10 SBR, with one curve per value of log2 PAR.]

Figure 29. For a continuous-time photon-counting channel, a log-log plot of BPP vs SBR with the different curves corresponding to different values of PAR. The curves are labeled by log2 PAR, so for example 16 corresponds to PAR = 2^16 = 65,536. The axis log10 BPP = 0 corresponds to a photon efficiency of 1 b/ph. The axis log10 SBR = 0 corresponds to equal average signal and background power.


PPM makes use of a repeated transmission of PPM frames, each composed of M timeslots of duration T_s. Thus it uses temporal modes only (avoiding the use of wavelength, spatial, and polarization modes to convey information). For a fair comparison with the Holevo capacity we also limit the Holevo capacity to temporal quantum modes making use of timeslots with the same duration T_s. Then for average power Λ_A^R the average photon count per timeslot is PPD = T_s Λ_A^R. The photon efficiency BPP at the Holevo quantum limit is then (Dolinar et al. 2012a)

BPP = (1 + 1/PPD) · log2(PPD + 1) − log2(PPD) .   (D14)

Of course this must be greater than either the BPP achievable by a direct-detection receiver (see §E) or by PPM (see §F). This applies in the absence of background radiation, which suffices for our purposes.

E. THEORETICAL LIMITS ON DIRECT DETECTION

A practical channel model for today's technology is modulation of optical intensity in the transmitter and direct detection of photons in the receiver.

E.1. Constraints

If we allow infinite transmit power, any data rate R can be achieved reliably. To get practically meaningful results, we must place constraints on the received power. Assume that the intensity (average photons per second) over a finite interval T_c is Λ_n(t), which we can think of as a codeword. These are the peak-power Λ_P^R and average-power Λ_A^R constraints

0 ≤ Λ_n(t) ≤ Λ_P^R ,   0 ≤ t ≤ T_c

(1/T_c) ∫_0^{T_c} Λ_n(t) · dt = Λ_A^R ,   1 ≤ n ≤ N_c .

We make two simplifying assumptions which have no effect on the capacity C. First, each and every codeword is constrained to have the same energy T_c Λ_A^R. Also, every codeword has an ON-OFF character (Λ_n(t) = Λ_P^R or Λ_n(t) = 0 for 0 ≤ t ≤ T_c). The photon rates Λ_A^R, Λ_P^R are referenced to the detector output, taking account of everything that happens in the physical layer (including quantum efficiency). Also assume that there is a fixed background rate Λ_B, and define PAR = Λ_P^R/Λ_A^R and SBR = Λ_A^R/Λ_B.

E.2. Capacity result

The direct-detection channel is modeled by photon detection events governed by Poisson arrival statistics. This is the most random arrival process, in which interarrival times are statistically independent and obey an exponential distribution. The capacity has been determined for this statistical model under the peak and average power constraints described in §E.1. Define two parameters

s = 1/(PAR · SBR) ,   q = min{ 1/PAR , (1 + s)^{1+s}/(e s^s) − s } ,

and then (Wyner 1988)

C/Λ_A^R = BPP = PAR · log2 [ (1 + s)^{q(1+s)} s^{(1−q)s} / (q + s)^{q+s} ] .   (E15)

In the limit as SBR → ∞ (or equivalently s → 0), (E15) simplifies to (yielding (3) for PAR > e)

BPP → −PAR · q log2 q ,   (E16)

q → min{1/PAR, 1/e} .
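A minimal implementation of the capacity expression (E15), as reconstructed above, together with its high-SBR limit (E16); only PAR and SBR are inputs, so no design values are assumed:

```python
from math import exp, log2

def wyner_bpp(PAR, SBR):
    """Photon efficiency at the direct-detection (Poisson) capacity, per (E15)."""
    s = 1.0 / (PAR * SBR)
    q = min(1.0 / PAR, (1 + s) ** (1 + s) / (exp(1) * s**s) - s)
    return PAR * log2((1 + s) ** (q * (1 + s)) * s ** ((1 - q) * s) / (q + s) ** (q + s))

def wyner_bpp_high_sbr(PAR):
    """High-SBR limit (E16)."""
    q = min(1.0 / PAR, 1.0 / exp(1))
    return -PAR * q * log2(q)

for log2_par in (5, 10, 16):                  # curve labels used in Fig. 29
    PAR = 2 ** log2_par
    print(f"PAR=2^{log2_par}: BPP {wyner_bpp(PAR, 1e4):6.2f} (SBR=1e4), "
          f"{wyner_bpp(PAR, 1e-2):6.2f} (SBR=1e-2), "
          f"high-SBR limit {wyner_bpp_high_sbr(PAR):6.2f} b/ph")
```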

E.3. BPP is unbounded

Notably the capacity C is unbounded even for finite average power,

BPP → ∞ as PAR → ∞ for any SBR < ∞ .   (E17)

This confirms that a PAR constraint is necessary to prevent infinite BPP, and more importantly indicates that increasing Λ_P^R increases BPP and C even as Λ_A^R is held constant. The uncertainty principle will intervene at very high PAR (Butman et al. 1982; Hippke 2018) to render our channel model physically unrealizable.

E.4. Effect of background radiation

A log-log plot of BPP vs SBR is shown in Fig.29 over a wide range (14 orders of magnitude) of SBR. The value of BPP is nearly constant with the value (3) for large SBR. In the region of low SBR, the highest achievable BPP falls off rapidly. This is clearly a region to be avoided by choosing a sufficiently large transmit power P_A^T. Notably, even in the regime of low SBR, the background can still be overcome by choosing a sufficiently large PAR.

F. THEORETICAL LIMITS ON PULSE-POSITION MODULATION

Constraining the implementation to the concrete architecture of Fig.25 must necessarily reduce the capacity relative to the direct-detection model of §E, which does not constrain the modulation code in any way. We can ascertain the adverse effect of choosing a particular modulation code by associating a new channel model with the path from modulation-code input to modulation-code output. In the case of PPM this new channel model has bits in and bits-plus-erasures out, with an erasure probability e^{−K_s}. The resulting theoretical limit on reliable data recovery is well known to be (18) (Butman et al. 1982).

One distinction between PPM and the assumptions behind the Holevo capacity is that the K_s average photons are all isolated in a single timeslot per frame rather than allowed to fall in the available dimensions in any pattern. The other distinction is the direct manipulation of quantum states allowed in Holevo, which lacks a practical realization in today's technology.

F.1. Effect of outages

The idea of using a de-interleaver to randomize the temporal statistics of outages was introduced in Fig.28a, and for theoretical convenience it is assumed that erasures occur at the bit level (although in practice they may well occur at the PPM frame level). An ideal erasure channel is pictured in Fig.28b. It models a statistical relationship between input bits {0, 1} and output symbols {0, 1, E} in terms of a set of transition probabilities, where P_E is the probability of a single bit-level erasure (which equals the probability of a PPM frame erasure). Further, it is assumed that the individual symbols in Fig.28b are statistically independent, a condition that is strongly violated with PPM frame erasures or outages, but that a well-designed interleaver can approximate.

The capacity C_u of the ideal erasure channel of Fig.28b is well known to be C_u = (1 − P_E) per channel use, or

C_u = (1 − P_E) = (1 − e^{−K_s}) · (1 − P_W) · (1 − P_D)

where K_s is the average detected photons per PPM frame. This assumes that the three sources of erasures (shot noise, weather, and daylight) are statistically independent. Weather outages and daylight may be correlated, in which case C_u can be adjusted accordingly.

The rate of channel uses in Fig.28b as a model for Fig.28a is m/T_I, and thus the capacity of the modulation coding layer is m C_u/T_I. Relative to the case of no outages, this capacity is reduced by a factor of (1 − P_W)(1 − P_D), and thus the data rate R_0 has to be reduced by the same factor to ensure that reliable data recovery remains possible. Since the average photon detection rate is K_s/T_I, the upper bound on BPP is changed to

BPP < m · (1 − P_W) · (1 − P_D) · (1 − e^{−K_s})/K_s .   (F18)

This is consistent with (18) for the case P_W = P_D = 0. As in the outage-free case, for a fixed value of m the greatest photon efficiency is obtained as K_s → 0, and there is no motivation to modify K_s to account for outages.

The foregoing is a theoretical limit on C_u and BPP. For any specific choice of a class of error-correction codes (such as the Reed-Solomon codes), it is appropriate to minimize the error rate by the choice of K_s and T_I, taking into account the characteristics of this particular code. In this context there may be some dependence of K_s on P_W and P_D.
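A minimal evaluation of the erasure-channel capacity C_u and the outage-adjusted bound (F18). The K_s = 0.2 and m = 12 values are the nominal ones used in the text; the weather and daylight probabilities P_W and P_D below are purely illustrative placeholders:

```python
from math import exp

Ks, m = 0.2, 12                  # nominal modulation-layer parameters from the text
PW, PD = 0.1, 0.5                # illustrative outage probabilities (placeholders only)

Cu = (1 - exp(-Ks)) * (1 - PW) * (1 - PD)        # erasure-channel capacity per use
P_E = 1 - Cu                                      # total bit-erasure probability
bpp_bound = m * (1 - PW) * (1 - PD) * (1 - exp(-Ks)) / Ks     # bound (F18)

print(f"P_E = {P_E:.3f}, Cu = {Cu:.3f} bits per channel use")
print(f"BPP bound with outages: {bpp_bound:.2f} b/ph "
      f"(vs {m * (1 - exp(-Ks)) / Ks:.2f} b/ph without outages)")
```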

REFERENCES

Abramson, N. 1994, Proceedings of the IEEE, 82, 1360
Atwater, H., Davoyan, A., Ilic, O., et al. 2018, Nature Materials, 1, doi: 10.1038/s41563-018-0075-8
Banaszek, K., & Jachura, M. 2017, in 2017 IEEE International Conference on Space Optical Systems and Applications (ICSOS) (IEEE), 34–37, doi: 10.1109/ICSOS.2017.8357208
Banaszek, K., Jachura, M., & Wasilewski, W. 2019a, in International Conference on Space Optics, Vol. 11180, International Society for Optics and Photonics, 111805X, doi: 10.1117/12.2289653
Banaszek, K., Kunz, L., Jarzyna, M., & Jachura, M. 2019b, in Free-Space Laser Communications XXXI, Vol. 10910 (International Society for Optics and Photonics), 109100A, doi: 10.1117/12.2506963
Biswas, A., & Piazzolla, S. 2006, The atmospheric channel, ed. H. Hemmati (John Wiley and Sons)
Burke, B., & Graham-Smith, F. 2009, An introduction to radio astronomy (Cambridge University Press), doi: 10.1017/CBO9780511801310
Butman, S., Katz, J., & Lesh, J. 1982, IEEE Transactions on Communications, 30, 1262, doi: 10.1109/TCOM.1982.1095577
Cash, W. 2011, The Astrophysical Journal, 738, 76, doi: 10.1088/0004-637X/738/1/76
Cover, T., & Thomas, J. 1991, Elements of Information Theory (New York: John Wiley and Sons), doi: 10.1002/047174882X
Dolinar, S., Erkmen, B., Moision, B., Birnbaum, K., & Divsalar, D. 2012a, in IEEE International Symposium on Information Theory Proceedings (IEEE), 541–545, doi: 10.1109/ISIT.2012.6284249
Dolinar, S., Moision, B., & Erkmen, B. 2012b, Fundamentals of free-space optical communication, Jet Propulsion Laboratory. http://hdl.handle.net/2014/42559
Farr, W. H., Choi, J. M., & Moision, B. 2013, in Free-Space Laser Communication and Atmospheric Propagation XXV, Vol. 8610 (International Society for Optics and Photonics), 861006, doi: 10.1117/12.2007000
Forgan, D., Heller, R., & Hippke, M. 2017, Monthly Notices of the Royal Astronomical Society, 474, 3212, doi: 10.1093/mnras/stx2834
Forward, R. 1962, Missiles and Rockets, 10, 26
Fountain, G., Kusnierkiewicz, D., Hersman, C., et al. 2009, The New Horizons spacecraft, New Horizons (Springer), 23–47, doi: 10.1007/s11214-008-9374-8
Gallager, R. G. 2008, Principles of digital communication (Cambridge, UK: Cambridge University Press), doi: 10.1017/CBO9780511813498
Gordon, J. 1962, Proceedings of the IRE, 50, 1898, doi: 10.1109/JRPROC.1962.288169
Gros, C. 2017, Journal of Physics Communications, 1, 045007, doi: 10.1088/2399-6528/aa927e
Guha, S. 2011, Physical Review Letters, 106, 240502, doi: 10.1103/PhysRevLett.106.240502
Hauser, M., Arendt, R. G., Kelsall, T., et al. 1998, The Astrophysical Journal, 508, 25, doi: 10.1086/306379
Heller, R. 2017, Monthly Notices of the Royal Astronomical Society, 470, 3664, doi: 10.1093/mnras/stx1493
Heller, R., & Hippke, M. 2017, The Astrophysical Journal Letters, 835, L32, doi: 10.3847/2041-8213/835/2/L32
Heller, R., Hippke, M., & Kervella, P. 2017, The Astronomical Journal, 154, 115, doi: 10.3847/1538-3881/aa813f
Hemmati, H. 2009, Near-earth laser communications (CRC Press), doi: 10.1201/9781420015447
Hippke, M. 2017, arXiv preprint arXiv:1712.05682
—. 2018, arXiv preprint arXiv:1801.06218
—. 2019, International Journal of Astrobiology, 18, 267, doi: 10.1017/S1473550417000507
Hoang, T. 2017, The Astrophysical Journal, 847, 77, doi: 10.3847/1538-4357/aa88a7


Hoang, T., Lazarian, A., Burkhart, B., & Loeb, A. 2017, The Astrophysical Journal, 837, 5, doi: 10.3847/1538-4357/aa5da6
Hoang, T., & Loeb, A. 2017, The Astrophysical Journal, 848, 31, doi: 10.3847/1538-4357/aa8c73
Holevo, A. S. 1998, IEEE Transactions on Information Theory, 44, 269, doi: 10.1109/18.651037
Jarzyna, M., Zwolinski, W., & Banaszek, K. 2019, in International Conference on Space Optics, Vol. 11180, International Society for Optics and Photonics, doi: 10.1117/12.2536130
Jarzyna, M., Zwolinski, W., Jachura, M., & Banaszek, K. 2018, in Free-Space Laser Communication and Atmospheric Propagation XXX, Vol. 10524, International Society for Optics and Photonics, 105240A, doi: 10.1117/12.2289653
Kafka, J., & Baer, T. 1987, Optics Letters, 12, 401, doi: 10.1364/OL.12.000401
Kulkarni, N., Lubin, P., & Zhang, Q. 2018, The Astronomical Journal, 155, 155, doi: 10.3847/1538-3881/aaafd2
Landis, G. 2019, in Tennessee Valley Interstellar Symposium, Wichita, KS. https://ntrs.nasa.gov/search.jsp?R=20190033432
Lewandowski, W., Azoubib, J., & Klepczynski, W. 1999, Proceedings of the IEEE, 87, 163, doi: 10.1109/5.736348
Lubin, P. 2016, Journal of the British Interplanetary Society, 69
—. 2020, The Path (World Scientific)
Ludwig, R., & Taylor, J. 2016, Voyager telecommunications (John Wiley and Sons, Inc), doi: 10.1002/9781119169079.ch3
Macleod, A. 2004, arXiv preprint arXiv:physics/0407077
Manchester, Z., & Loeb, A. 2017, The Astrophysical Journal Letters, 837, L20
Messerschmitt, D. 2020, BPPM modeling software, 1.0, Zenodo, doi: 10.5281/zenodo.3831502
Messerschmitt, D., Lubin, P., & Morrison, I. 2019, arXiv preprint arXiv:2001.09987
Messerschmitt, D. G. 2008, Some Digital Communication Fundamentals for Physicists and Others, Tech. Rep. UCB/EECS-2008-78, EECS Department, University of California, Berkeley. www2.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-78.pdf
—. 2013, arXiv preprint arXiv:1305.4684
—. 2015, Acta Astronautica, 107, 20, doi: 10.1016/j.actaastro.2014.11.007
—. 2017, Proc. IEEE, 105, 1511, doi: 10.1109/JPROC.2017.2717980
Moision, B., & Farr, W. 2014, Range dependence of the optical communications channel, Tech. Rep. 42-199, NASA IPN. tmo.jpl.nasa.gov/progress_report/42-199/199B.pdf
Parkin, K. 2018, Acta Astronautica, 152, doi: 10.1016/j.actaastro.2018.08.035
—. 2019, arXiv preprint arXiv:2005.08940
Roh, W., Seol, J.-Y., Park, J., et al. 2014, IEEE Communications Magazine, 52, 106, doi: 10.1109/MCOM.2014.6736750
Schelkunoff, S., & Friis, H. 1952, Antennas: theory and practice, Vol. 639 (Wiley)
Shibata, H., Shimizu, K., Takesue, H., & Tokura, Y. 2015, Optics Letters, 40, 3428, doi: 10.1364/OL.40.003428
Slepian, D., & Pollak, H. 1961, Bell System Technical Journal, 40, 43, doi: 10.1002/j.1538-7305.1961.tb03976.x
Smith, B. A., Soderblom, L. A., Banfield, D., et al. 1989, Science, 246, 1422, doi: 10.1126/science.246.4936.1422
Spencer, D. T., Bauters, J. F., Heck, M. J., & Bowers, J. E. 2014, Optica, 1, 153, doi: 10.1364/OPTICA.1.000153
Spencer, D. T., Tang, Y., Bauters, J. F., Heck, M. J., & Bowers, J. E. 2012, in Photonics Conference (IPC), 2012 IEEE (IEEE), 141–142, doi: 10.1109/IPCon.2012.6358529
Toyoshima, M., Leeb, W., Kunimori, H., & Takano, T. 2007, Optical Engineering, 46, 015003, doi: 10.1117/1.2432881
Traub, W. A., & Oppenheimer, B. R. 2010, Direct imaging of exoplanets, ed. S. Seager, Exoplanets (University of Arizona Press), 111–156
Treacy, E. 1969, IEEE Journal of Quantum Electronics, 5, 454, doi: 10.1109/JQE.1969.1076303
Wicker, S. B., & Bhargava, V. K. 1999, Reed-Solomon codes and their applications (John Wiley and Sons)
Wikipedia contributors. 2019, Sunshine duration — Wikipedia, The Free Encyclopedia. en.wikipedia.org/wiki/Sunshine_duration
Wilson, T. L., Rohlfs, K., & Huttemeister, S. 2009, Tools of radio astronomy, Vol. 5 (Springer), doi: 10.1007/978-3-540-85122-6
Wyner, A. 1988, IEEE Transactions on Information Theory, 34, 1449, doi: 10.1109/18.21284