
Journal of Neuroscience Methods, 2 (1980) 389--409 389 © Elsevier/North-Holland Biomedical Press

THE RECOVERY OF A RANDOM VARIABLE FROM A NOISY RECORD WITH APPLICATION TO THE STUDY OF FLUCTUATIONS IN SYNAPTIC POTENTIALS

KEN WONG and STEPHEN REDMAN

Department of Electrical Engineering, Monash University, Clayton, Victoria (Australia)

(Received July 30th, 1980) (Revised version received February 26th, 1980) (Accepted February 26th, 1980)

Analysis of fluctuations in the amplitude of evoked synaptic potentials can be severely handicapped by the presence of spontaneous synaptic potentials and recording noise. A numerical procedure has been described whereby it is possible to remove some of the masking effects of this noise from the underlying distribution of the fluctuating synaptic potential. It is not necessary to make an initial assumption about the type of distribution which will best describe the fluctuations. To use this technique, it is necessary to measure the histograms which approximate the probability densities of both the noise, and the noisy evoked potential. It is also necessary to assume that the statistical mechanisms generating the noise are independent of those mechanisms which cause the fluctuation in synaptic transmission, and that the noise and the evoked potentials add linearly. The statistical reliability of the technique depends upon the amount of noise present, and the sample size. Problems of resolution which arise from finite sampling and high noise levels are discussed.

INTRODUCTION

Synaptic potentials recorded in nerve and muscle cells are often severely masked by noise. This noise originates from instrumentation noise, membrane noise and from spontaneous synaptic potentials. In some instances the peak excursions of this noise can exceed the peak voltage of the synaptic potential.

Fluctuations in the peak amplitude of a synaptic potential are usually assumed to result from variations in the amount of neurotransmitter released from the presynaptic terminal. The statistical nature of these fluctuations has been studied in an effort to understand the mechanisms of neurotransmitter release. (Recent reviews have been written by Martin (1977), McLachlan (1978) and Kuno (1971).) If noise is present it is often difficult to obtain reliable measurements of these fluctuations. This paper describes a procedure whereby the uncertainty introduced by noise can be partially removed.

When the synaptic potential is evoked by an electrical stimulus, the time


at which it occurs is predictable. Thus it is possible to measure some feature of this noisy synaptic potential, such as its peak voltage, by measuring the membrane potential at a particular time following the stimulus, even though a clear definition of the peak at this time cannot always be obtained. Repeated measurements of this kind provide a histogram of the noise-contaminated signal. Similarly, the statistical nature of the noise can be obtained from a similar measurement, but with no electrical stimulus to the afferent nerve.

The peak voltage measured is the noise-free peak voltage added to the noise. If two assumptions can be made, then the probability density for the noisy peak voltage is a convolution of the probability density of the noise voltage with the probability density of the noise-free peak synaptic potential. These two assumptions are that the mechanisms generating the noise are statistically independent from those which cause the noise-free peak voltage to fluctuate, and secondly, that the noise and the synaptic potential add linearly. The integral equation resulting from this convolution can be solved for the noise-free synaptic potential probability density, a process referred to as deconvolution.

One important advantage of this technique is that no a priori assumptions must be made about which distribution function is appropriate for the noise-free synaptic potential. This technique has been applied to an analysis of fluctuations in the excitatory synaptic potential evoked in mammalian spinal motoneurones by impulses in single group Ia afferent fibres (Edwards et al., 1976a, b).

DECONVOLUTION THEORY

Let the probability density function (pdf) of the EPSP voltage be S(v), and that of the noise voltage be N(v). If the noise and EPSP voltages are statistically independent, the pdf of the EPSP + noise, Y(v), is given by:

Y(v) = \int_{-\infty}^{\infty} N(v - x) S(x) \, dx = \int_{v_l}^{v_u} N(v - x) S(x) \, dx   (1)

where v_u and v_l are bounds set by practical considerations.

Repeated sampling of each evoked EPSP, and of the noise, provides the data for histograms which estimate the pdfs Y(v) and N(v) respectively. We assume that the pdf for the EPSP is described by a set of impulse functions, each Δv apart, i.e.:

S(v) = S_1 \delta(v - v_1) + S_2 \delta(v - v_2) + \dots + S_n \delta(v - v_n)   (2)


where v_{k+1} - v_k = \Delta v, k = 1, 2, \dots, n - 1.

When this representation is substituted into eqn. 1, the result is:

Y_k = \sum_{i=1}^{k} S_i N_{k+1-i}   (3)

where

N_k = \int_{(k-1)\Delta v + v'}^{k \Delta v + v'} N(v) \, dv   (4)

and v' is the lower bound of the pdf for the noise. It follows that v_1 = v_l - v'.

There are two ways to estimate N_k. The first is to assume a known probability distribution to describe the noise spectrum, and to find the parameters of this probability distribution from the noise samples. In this case we define N(v), and N_k is calculated using eqn. 4. The second method is to measure the fraction of samples in the whole noise population falling in the range (k - 1)Δv + v' to kΔv + v', which becomes N_k.
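The two ways of estimating N_k can be sketched in code. This is an illustrative reconstruction, not the authors' program; the function names and parameter choices are our own.

```python
import math

def gauss_cdf(x, mu, sigma):
    """Cumulative distribution of a Gaussian, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def nk_from_model(k, dv, v_prime, mu, sigma):
    """First method (eqn. 4): N_k from an assumed Gaussian noise pdf,
    i.e. the probability mass between (k-1)*dv + v' and k*dv + v'."""
    lo = (k - 1) * dv + v_prime
    return gauss_cdf(lo + dv, mu, sigma) - gauss_cdf(lo, mu, sigma)

def nk_from_samples(k, dv, v_prime, samples):
    """Second method: N_k as the fraction of noise samples falling in
    the same interval."""
    lo = (k - 1) * dv + v_prime
    return sum(lo <= s < lo + dv for s in samples) / len(samples)
```

Summing nk_from_model over all k spanning the practical bounds of the noise pdf should return essentially all of the probability mass.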

The pdf for the EPSP (eqn. 2) is unknown. If a guess is made for the values of S_i (i = 1, 2, ..., n), and substituted into eqn. 3, we obtain:

W_k = \sum_{i=1}^{k} S_i N_{k+1-i}   (5)

We can use a minimum square difference criterion to find the best estimate of Y_k. That is, we minimize:

\sum_{k=1}^{n} (Y_k - W_k)^2

and using eqn. 5, this becomes:

\sum_{k=1}^{n} \left( Y_k - \sum_{i=1}^{k} S_i N_{k+1-i} \right)^2   (6)

Minimising eqn. 6 is actually finding the best estimate of S_i, because N_k is known and Y_k can be calculated from the measured histogram of EPSP plus noise voltages, using an equation corresponding to eqn. 4.

This is a statistical solution, and the variables are defined by their pdfs. Thus the S_k must be positive. If N(v) is normalized with respect to Y(v), then so must S(v) be normalized. That is, the sum of all probability in S_k must be 1. Furthermore, the components in the pdf for S(v) must occur at non-negative voltages. Hence, in eqn. 2, v_1 ≥ 0.

Therefore, after expansion of eqn. 6, the problem finally becomes one of minimizing:

f(S) = \sum_{k=1}^{n} \left( Y_k^2 - 2 Y_k \sum_{i=1}^{k} S_i N_{k+1-i} + \left( \sum_{i=1}^{k} S_i N_{k+1-i} \right)^2 \right)   (7)

subject to:

\sum_{k=1}^{n} S_k = 1, \qquad S_k \geq 0, \quad k = 1, 2, \dots, n   (8)

and the lower bounds of Y(v) and N(v) must be chosen such that v_l - v' = v_1 ≥ 0.

This problem can be solved by a non-linear optimization algorithm using a digital computer. The algorithm is described in the Appendix.
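The authors' algorithm is the one given in the Appendix; purely as an illustrative sketch, the constrained minimization of eqns. 7 and 8 can also be posed to a general-purpose solver. The function name and the choice of SciPy's SLSQP method here are our own assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def deconvolve(Y, N):
    """Minimize f(S) = sum_k (Y_k - sum_{i<=k} S_i N_{k+1-i})^2  (eqn. 7)
    subject to sum(S) = 1 and S >= 0                              (eqn. 8).
    Y and N are the binned signal-plus-noise and noise pdfs."""
    Y, N = np.asarray(Y, float), np.asarray(N, float)
    n = len(Y)

    def objective(S):
        W = np.convolve(S, N)[:n]   # W_k of eqn. 5 (truncated convolution)
        return np.sum((Y - W) ** 2)

    S0 = np.full(n, 1.0 / n)        # uniform initial guess
    res = minimize(objective, S0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda S: S.sum() - 1.0}],
                   options={"maxiter": 500, "ftol": 1e-12})
    return res.x
```

With a noiseless test input constructed from a known two-component S(v), the solver recovers components at the correct positions, mirroring the tests described in the Results section.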

RESULTS

Testing of algorithm

The algorithm searches for a global minimum of a given quadratic function subject to certain constraints. In our case, this function is the sum of the squared differences between the histogram derived from experimental measurement and the histogram computed from an initially guessed synaptic potential pdf and the measured noise. We have tested our implementation of this algorithm by taking a histogram obtained from experimental measurements of a peak synaptic potential (Fig. 1A), measuring the statistical properties of the noise which contaminates these synaptic potentials, and obtaining a deconvolved spectrum from the peak synaptic potential (Fig. 1B).

Assuming that the correct solution consists of two components only, this solution has been varied, one parameter at a time, to show that the solution obtained represents a minimum square difference. The perturbed solution has been convolved with the noise and the resulting histogram subtracted (bin by bin) from the original data (Fig. 1A) to obtain a squared difference (Fig. 1C and D). This test does not establish that the minimum found is a global minimum. However, it can be seen that the sum of the squared differences associated with this minimum is much less than that obtained for the local minimum which occurs when one of the two components has a peak voltage in the region of 500 μV, while the other takes the value originally calculated (Fig. 1C).

Two discrete components may not be the best solution. (The ability of this procedure to resolve adjacent discrete components is discussed later in this section.) We have replaced each discrete component with either 2 or 4 components, with various probabilities, such that the weighted average of these components is the component they replaced. When these combinations were convolved with the noise, and the sum of the squared differences


Fig. 1. A: histogram derived from measurement of the peak amplitude of a synaptic potential during repeated trials. The total number of trials was 383. The contaminating noise was measured separately and found to be Gaussian, with mean and S.D. of 1 μV and 68 μV respectively. B: result of the deconvolution procedure, indicating two discrete components. One occurs at 41 μV with a probability of 0.54, the other occurs at 140 μV with a probability of 0.46. C: the Gaussian noise has been convolved with a pdf of the form of B, but with variable location of the two components. The convolved result has been subtracted bin by bin from the histogram in A, and the differences squared and added. The curve indicated by plus signs was obtained by maintaining the component at 140 μV fixed in probability and location while the component at 41 μV (in B) was varied in location but not in probability. A minimum occurs at 41 μV. Similarly, the curve indicated by filled circles was obtained by varying the location of the component at 140 μV (in B), while the component at 41 μV was fixed in probability and location. A minimum occurs at 140 μV. The joint minimum, for the probability shown in B, occurs at the location indicated in B. D: the square difference, as computed for C, is determined when the probabilities attached to the two components in B are varied in a complementary manner, keeping the locations of the two discrete components fixed. The minimum square difference occurs for the probabilities shown in B.

between the result and the original histogram was calculated, none was better than the result shown in Fig. 1B.

Again, these tests do not establish that the minimum shown in Fig. 1B is the global minimum. But taken in conjunction with the considerations of resolution and the various other tests described below, we suggest that the algorithm has been correctly implemented.

Signal spectrum with a Poisson distribution

Some synaptic potentials have peak voltages which fluctuate from trial to trial in a quantal manner. These quantal fluctuations are often described by Poisson or binomial distributions (Martin, 1977). One simple test of the deconvolution technique is to make the synaptic potential pdf a known Poisson distribution, and to add Gaussian noise to each component of this pdf. If the probability of n quanta is P(n), then:


P(n) = \frac{m^n}{n!} e^{-m}


Fig. 2. A: a histogram of the signal plus noise voltage. The signal voltage is obtained by perfect sampling from the pdf shown in C by the vertical arrows. This pdf occurs in discrete steps of 100 μV, and a Poisson distribution describes the probability of occurrence of each component, with m = 1 (see text). The noise voltage which is added to this signal is derived from finite sampling (400) from a Gaussian pdf with μ_N = 0 and σ_N = 50 μV. The number of samples added to each signal component is in proportion to the probability of occurrence of that component. B: histogram formed from 400 samples of the Gaussian noise used in A. The dashed line is the theoretical Gaussian pdf with zero mean and σ_N = 50 μV. The theoretical pdf is used in the deconvolution process. C: a comparison of the original signal pdf and the signal pdf calculated by deconvolution. The pdf calculated by deconvolution, using the theoretical pdf in B and the histogram in A, is shown by the rectangles. The components of this pdf are actually located at the lower bound of each rectangle, and have the same magnitude.


where m is the average number of quanta. The solid bars in Fig. 2C satisfy this description when m = 1. The quantal size is 100 μV. Gaussian noise (with mean μ_N = 0 and S.D. σ_N = 50 μV) is superimposed on this pdf. The resulting histogram, formed from this Poisson pdf (accurately sampled) and 400 samples from the Gaussian pdf, is shown in Fig. 2A. A further 400 samples drawn from the same Gaussian pdf form the histogram in Fig. 2B. The theoretical pdf is shown as a dashed line. It is used in the deconvolution process, and the result is the histogram in Fig. 2C. The location of the deconvolution components is the lower bound of each bar. For many purposes, and certainly in neurophysiology, this degree of precision is adequate. However, if the magnitude of the noise with respect to the quantal size is increased in comparison with the example above, the technique fails to adequately recover a Poisson process. In Fig. 3, 400 samples are drawn from the same Gaussian distribution as in Fig. 2B, and superimposed on a Poisson pdf, also with m = 1, but with the quantal magnitude now 50 μV (i.e. the same as σ_N). This pdf is shown as the solid bars in Fig. 3B, and the histogram formed by superposition of the noise samples with the Poisson pdf is shown in Fig. 3A. Deconvolution gives the histogram in Fig. 3B. The components representing failures of response (0 μV) and one quantum (50 μV) disappear, and an entry occurs approximately midway between these two components with a probability approximately equal to the sum of the probabilities of these two components. A similar effect occurs for the components representing two quanta (100 μV) and three quanta (150 μV). For this reason we have explored the ability of this technique to resolve adjacent discrete components masked by noise (see Resolution of the deconvolution process, below).

Fig. 3. A comparison of the original signal pdf (vertical bars and arrows in B) and the pdf calculated by deconvolution (rectangles). The test is similar to that described in Fig. 2, except that the original signal pdf has been modified to have discrete steps in amplitude of 50 μV. Samples drawn from the same Gaussian noise pdf as in Fig. 2B have been added to this signal pdf to generate the histogram shown in A. By reducing the magnitude of the discrete steps in the original pdf to the value for the S.D. of the noise, the limit of resolution of the deconvolution process has been exceeded, and a misleading result has been obtained.
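The Poisson-plus-Gaussian test of Figs. 2 and 3 can be sketched in code. This is an illustrative reconstruction with names of our own choosing; note that it samples the Poisson pdf randomly, whereas the paper samples it exactly and adds noise samples in proportion to each component's probability.

```python
import math
import random

def poisson_sample(rng, m):
    """Draw from P(n) = m^n e^{-m} / n! by inversion of the cdf."""
    u = rng.random()
    n, p = 0, math.exp(-m)
    cdf = p
    while u > cdf:
        n += 1
        p *= m / n
        cdf += p
    return n

def quantal_histogram(m=1.0, quantal_size=100.0, sigma_n=50.0,
                      n_trials=400, binwidth=25.0, seed=1):
    """Simulate peak amplitudes from a Poisson quantal model with Gaussian
    recording noise added, binned into a histogram keyed by the lower edge
    of each bin (as in Figs. 2 and 3)."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_trials):
        v = poisson_sample(rng, m) * quantal_size + rng.gauss(0.0, sigma_n)
        edge = math.floor(v / binwidth) * binwidth
        counts[edge] = counts.get(edge, 0) + 1
    return counts
```

Setting quantal_size equal to sigma_n reproduces the condition of Fig. 3, where the deconvolution's limit of resolution is exceeded.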

Choice of binwidth

The binwidth of the measured histogram affects the statistical reliability of the results. Local distortions in the signal plus noise histogram can result from a binwidth which is too small, which in turn can introduce spurious components or shift the positions of components, or both, in the deconvolved histogram. A binwidth which is too large leads to a loss of resolution.

If the histogram for the noise sample is deconvolved with the Gaussian pdf for that noise, the result must theoretically be a single component at zero. Otherwise each discrete component of the noise-free synaptic potential will not appear as a single component in the deconvolved result using the noisy synaptic potential histogram and the Gaussian noise pdf. We have chosen a binwidth such that when the noise histogram is deconvolved with its Gaussian pdf, a single component occurs at the origin with P > 0.95. (In practice, this single component may spread over two intervals adjacent to the origin, as discussed later in the section on resolution.) This procedure is an excellent test for the goodness-of-fit of the noise sample with the chosen binwidth and the Gaussian pdf with the same mean and variance. Often a histogram which fails this test will pass both the Kolmogorov-Smirnov test (Massey, 1951) and the χ² test. An example of such a sample and binwidth combination is shown in Fig. 4. Using a binwidth of 25 μV, the measured histogram satisfied the χ² test for the Gaussian pdf (0.01 < P < 0.025) and the Kolmogorov-Smirnov test with P < 0.01. However, when the noise histogram (μ_N = -5.2 μV; σ_N = 72.4 μV) was deconvolved with the Gaussian pdf having this mean and S.D., two discrete components separated by two binwidths emerged. When the binwidth was increased to 35 μV, and the new histogram deconvolved with the same Gaussian pdf, the result consisted of adjacent entries at 0 μV and 35 μV, with a total probability of 1. From the criteria discussed above, a binwidth of 35 μV would be appropriate for the histogram of the synaptic


Fig. 4. The histogram is a noise sample with μ_N = -5.2 μV and σ_N = 72.4 μV. The binwidth is 25 μV and the sample size is 700. The Gaussian pdf with the same mean and variance is shown by the dashed line. When this pdf is deconvolved with the sample histogram, two discrete components are obtained. One is located at 0 μV, with a probability of 0.89. The other is located at 50 μV, with probability of 0.11. The two dotted pdfs are the Gaussian distributions centred on these two components and with the corresponding probability weightings.

potential plus noise, but not a binwidth of 25 μV. This test should be carried out routinely before applying the deconvolution procedure to the synaptic potential plus noise histogram.
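The conventional goodness-of-fit checks mentioned above can be sketched with SciPy. This is our own illustrative example (the sample here is synthetic, seeded for repeatability, with the mean, S.D., binwidth and sample size quoted in the text); note that fitting the Gaussian parameters from the same sample biases the Kolmogorov-Smirnov p-value slightly.

```python
import numpy as np
from scipy import stats

# Synthetic noise sample: 700 draws from a Gaussian with mean -5.2 uV
# and S.D. 72.4 uV, as in the example of Fig. 4.
rng = np.random.default_rng(42)
noise = rng.normal(-5.2, 72.4, size=700)

mu, sigma = noise.mean(), noise.std(ddof=1)

# Kolmogorov-Smirnov test of the sample against the fitted Gaussian.
ks_stat, ks_p = stats.kstest(noise, "norm", args=(mu, sigma))

# Chi-squared statistic on a 25 uV binned histogram of the same sample.
edges = np.arange(-300.0, 325.0, 25.0)
observed, _ = np.histogram(noise, bins=edges)
expected = len(noise) * np.diff(stats.norm.cdf(edges, mu, sigma))
chi2 = np.sum((observed - expected) ** 2 / expected)
```

As the text cautions, a sample can pass both of these tests and still fail the stricter self-deconvolution check.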

Sample size

A finite sample of the noise, and of the synaptic potential plus noise, provides the data for histograms which approximate the pdfs of these signals. As the correct synaptic potential plus noise pdf is unknown, it is not possible to use the χ² test to determine the number of samples which are necessary to achieve a specified confidence level in the estimation of this pdf. In this situation it is possible to use the Kolmogorov-Smirnov test. The deviation of the cumulative distribution obtained by a finite sample from the unknown cumulative distribution (in normalized probability) can be calculated for a particular confidence level and sample size (Massey, 1951). At the 95% confidence level, this normalized deviation is 1.36/√N, and at the 99% confidence level it is 1.63/√N, where N is the sample size. Thus, if the normalized deviation is to be less than 0.05 at the 95% confidence limit, a sample of 740 is required. A sample of 500 gives a maximum deviation of 0.061 at the 95% confidence level.

Fig. 5. The effect of an insufficient sample size on the signal pdf calculated by deconvolution. The original pdf is indicated by vertical arrows. The calculated signal pdf is indicated by rectangles, and the discrete components are located at the lower bound of each rectangle with a probability indicated by its magnitude. The conditions for this test are the same as those for Fig. 2, except that the noise pdf was calculated from a noise sample of 200, and the signal plus noise histogram was based on a sample of 200.
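The sample-size arithmetic above follows directly from the large-sample Kolmogorov-Smirnov approximation (coefficients from Massey, 1951); as a sketch:

```python
import math

_KS_COEFF = {0.95: 1.36, 0.99: 1.63}  # Massey (1951), large-sample values

def ks_critical_deviation(n, level=0.95):
    """Maximum deviation of an empirical cdf from the true cdf at the
    given confidence level, for sample size n."""
    return _KS_COEFF[level] / math.sqrt(n)

def required_sample_size(deviation, level=0.95):
    """Smallest sample size whose critical deviation is at most `deviation`."""
    return math.ceil((_KS_COEFF[level] / deviation) ** 2)
```

For a deviation below 0.05 at the 95% level this gives the sample of 740 quoted in the text, and a sample of 500 gives a deviation of about 0.061.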

Unfortunately, the way in which deviations of this magnitude affect the deconvolved result depends upon where they occur in the sample histogram, and we have been guided more by empirical tests than by maximum deviations in the cumulative distribution. The simplest test, if a large sample is available, is to make two sub-samples from the original sample, and then deconvolve. If the results are not statistically different from each other and from the result for the larger sample, at a prescribed confidence level, then the smaller sample is adequate. An example of a deconvolution result where too small a sample size was used is shown in Fig. 5. This result is derived in an identical manner to that of Fig. 2C, except that only 200 samples of signal plus noise were used, and the theoretical noise distribution was based on a sample size of 200. A sample size of 200 is adequate for normal confidence levels if the result is a single component. Much larger sample sizes are required for deconvolved results containing more than one discrete component. This number will usually be limited by experimental practicalities rather than by an a priori determined confidence level.


Resolution of the deconvolution process

Resolution of adjacent discrete components of the noise-free synaptic potential depends upon the magnitude of the noise corrupting this potential. The basic scheme of the deconvolution process is to find a set of discrete components which, when convolved with a given noise pdf will give the best fit to the synaptic potential plus noise pdf obtained from finite sampling. The major error in this process arises from finite sampling, but truncation errors in forming the histograms, and round-off errors during computation also contribute. Thus the synaptic potential pdf computed cannot be expected to be unique.

Consider two signal pdfs; one which contains two discrete components, each having a probability of 0.5, and the other consisting of a single component at the mean location of the above two components. These two pdfs are the solid vertical lines in Fig. 6A and B respectively. To each pdf the same theoretical Gaussian noise distribution is added, as shown in Fig. 6. The signal plus noise pdf after this convolution is the upper dashed curve in each case. The χ² test has been used to estimate how well these two signal plus noise pdfs resemble each other. Perfect sampling of the distribution in Fig. 6A is assumed. It is (incorrectly) assumed that these samples were drawn (again perfectly) from the distribution in Fig. 6B. Table 1 shows the χ² values calculated for different separations, a sample size of 400, and 25 df. The two pdfs can be distinguished with P > 0.95 when the separation is 1.4 σ_N. Imperfect sampling could result in more resolution, or less resolution, depending upon the sample bias. The effect of sampling error on resolution has not been included in this calculation. But the deconvolution process is a more sensitive detector of statistical differences than is the χ² test, a point which is discussed in the sections on choice of binwidth and sample size. Thus the resolution achievable may be somewhat better than a minimum separation of approximately 1.4 σ_N.

Fig. 6. A: two vertical bars, each with equal probability, separated by a variable x which is normalized by the S.D. of a Gaussian pdf. This Gaussian pdf is shown by the dotted lines and is superimposed about each vertical bar. The area under each of these pdfs is 0.5, and the dashed curve is the sum of these two Gaussian curves. B: a vertical bar midway between the two mean positions in A. A Gaussian pdf is superimposed about this point with the same S.D. as the Gaussian pdf in A. Its area is normalized to unity. The problem is to differentiate between this Gaussian pdf and the upper curve in A for a given x and a fixed sample size.

When the two discrete components do not have equal probabilities, the separation between these components must be increased from that discussed above if the components are to be recognized as discrete. The χ² test is again used to illustrate this point in Table 2. An extreme example with probability weightings of 0.9 and 0.1 requires a separation of almost 2.5 σ_N if this pdf is to be distinguished from a single component pdf with P > 0.95.

The limitations of resolution using this technique suggest that if, after deconvolution, a discrete pdf is obtained in which the separation of adjacent components is less than 2 σ_N, the interpretation of this result must be

TABLE 1

χ² VALUES CALCULATED FOR 25 df AND A SAMPLE SIZE OF 400 FOR THE SIGNAL PLUS NOISE DISTRIBUTIONS SHOWN IN FIG. 6A AND B

The binwidth is 0.5 σ_N, where σ_N is the S.D. of the noise. In the calculations for this table and for Table 2, the sample size affects the χ² value. This is because we are comparing two different theoretical distributions. The difference between these two distributions can be amplified or compressed by increases or decreases of a scaling factor, which is the sample size for the histogram (obtained by perfect sampling). A sample size of 400 is used here and in Table 2 because it is the sample size used in the previous examples. (See sections on sample size and binwidth.) Histogram entries with values <1 have been set to 0.

Separation of components    χ²       Probability of rejecting hypothesis that
(normalized by σ_N)                  the two distributions are the same

0.6                          1.32    P < 0.005
0.8                          4.11    P < 0.005
1.0                          9.89    P < 0.005
1.1                         14.35    0.025 < P < 0.05
1.2                         20.13    0.25 < P < 0.5
1.4                         36.52    0.9 < P < 0.95
1.6                         60.81    P > 0.995
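The construction behind Tables 1 and 2 can be sketched as follows. This is our reading of the procedure described in the note to Table 1 (binwidth 0.5 σ_N, a notional sample size of 400 as the scaling factor, histogram entries below 1 set to 0); the exact values obtained depend on binning details, so only the qualitative behaviour should be compared with the tables.

```python
import math

def std_normal_cdf(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def chi2_two_vs_one(separation, n=400, binwidth=0.5, span=6.0):
    """Chi-squared value comparing (a) two equal-probability components
    separated by `separation` (in units of sigma_N), each convolved with
    unit-sigma Gaussian noise, against (b) a single component at their
    mean convolved with the same noise, scaled by sample size n."""
    nbins = int(2 * span / binwidth)
    chi2 = 0.0
    for i in range(nbins):
        lo = -span + i * binwidth
        hi = lo + binwidth
        # expected counts under the two-component distribution (Fig. 6A)
        obs = n * 0.5 * (std_normal_cdf(hi - separation / 2) - std_normal_cdf(lo - separation / 2)
                         + std_normal_cdf(hi + separation / 2) - std_normal_cdf(lo + separation / 2))
        # expected counts under the single-component distribution (Fig. 6B)
        exp = n * (std_normal_cdf(hi) - std_normal_cdf(lo))
        if obs < 1.0:        # histogram entries with values < 1 set to 0
            obs = 0.0
        if exp >= 1.0:
            chi2 += (obs - exp) ** 2 / exp
    return chi2
```

As in Table 1, the χ² value grows rapidly with the separation of the two components.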


TABLE 2

χ² VALUES FOR THE INDICATED PROBABILITIES AND SEPARATIONS OF TWO COMPONENTS

The pdfs in Fig. 6A and B have been modified to allow for unequal probability weightings of the two components in Fig. 6A. The overall pdf was obtained in the same manner as for Fig. 6A. In Fig. 6B, the single component is located at the weighted mean. χ² values have been calculated in a similar manner to those for Table 1, with the same parameter values.

Separation of components    Probability weightings of two components
(normalized by σ_N)         0.6 and 0.4    0.7 and 0.3    0.8 and 0.2    0.9 and 0.1

1.0                           9.15           7.08           4.18           1.35
1.2                          18.64          14.47           8.58           2.78
1.4                          33.85          26.34          15.65           5.08
1.6                          56.4           43.92          26.07           8.42
1.8                          87.91          68.37          40.40          12.9
2.0                         129.9          100.61          58.87          18.46
2.2                         183.6          141.13          81.29          24.82

guarded. An amalgamation of discrete components in the original pdf may have occurred, as in Fig. 3B. To establish if this has occurred, it is possible to identify and select those evoked responses which are associated with a particular discrete component (Jack, Redman and Wong, in preparation). Analogue calculation of the variance of these evoked responses then establishes if the apparent component is a discrete component (no variance) or an amalgamation of several adjacent components.

If the pdf of the evoked response is continuous, the amalgamation of adjacent components can be used to obtain an envelope of the pdf. The distribution of miniature potentials, or of evoked potentials with a high average quantal content and small quantal size (compared with the noise), will be in this category. For the deconvolution result to be accurate, a binwidth approaching σ_N (or greater) and a large sample is necessary. If such a binwidth is not chosen, or if the sample size is too small, the deconvolution result may be a sequence of discrete components. Without advance knowledge of the type of pdf to expect, an essentially continuous pdf could be mistaken for one with several discrete components. To remove this uncertainty, it is necessary to have a sufficiently large sample such that deconvolution can be applied to several sub-samples. If the original pdf is essentially continuous, deconvolution of each sub-sample will normally produce very different answers, due to inadequate sampling. If the original pdf consists of discrete components, the results for each sub-sample will be similar, as such pdfs are less sensitive to sample size.

A corollary to the limit of resolution concerns the interpretation of a


deconvolved result in which discrete components occur in adjacent intervals, and where it is clear that the original pdf is not continuous. If the adjacent intervals are separated by less than approximately 0.5 σ_N, the vagaries of sampling make it meaningless to consider these as two separate components. A small adjustment in binwidth or in the lower bound of the histogram for the signal plus noise can cause these two components to become one in the deconvolved results. The two adjacent components can be combined to give a single component with probability

P = P1 + P2

and with magnitude

(P1x1 + P2x2) / P

where P1 and P2 are the probabilities of the adjacent components and x1 and x2 are their corresponding magnitudes. For example, a deconvolved result which has the probability/magnitude combinations of 0.41/40, 0.41/60, 0.09/200, 0.09/220, and zero for all other magnitudes, and where the binwidth is 20, should be combined to give 0.82/50, 0.18/210 and zero for all other magnitudes, when σN > 40.
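In code, the amalgamation rule above is a one-line probability-weighted average; a minimal sketch (the function name is ours):

```python
def combine(p1, x1, p2, x2):
    """Amalgamate two adjacent discrete components into one.

    The combined probability is P = P1 + P2 and the combined magnitude is
    the probability-weighted mean (P1*x1 + P2*x2) / P.
    """
    p = p1 + p2
    return p, (p1 * x1 + p2 * x2) / p

# The worked example from the text: 0.41/40 with 0.41/60, and 0.09/200 with 0.09/220.
print(combine(0.41, 40, 0.41, 60))    # close to (0.82, 50.0)
print(combine(0.09, 200, 0.09, 220))  # close to (0.18, 210.0)
```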

DISCUSSION

There are numerous examples of synaptic transmission where the magnitude of the synaptic potential fluctuates in a stepwise or quantal manner (McLachlan, 1978; Martin, 1977). When these synaptic potentials are studied in neurones of the CNS, the statistical description of their fluctuations is difficult to obtain because of the accompanying noise level (Redman, 1979). These circumstances have led others to make a priori assumptions about the statistical distribution of the fluctuations based on results obtained at synapses in the peripheral nervous system, and to then adjust the parameters of the distribution to obtain a best fit for the observed data (Kuno, 1964; Mendell and Weiner, 1976). The technique of deconvolution takes into account the prevailing noise and makes no a priori assumptions about the underlying distribution. But there are limits to the resolution obtainable using this technique. If the distribution contains discrete components which are separated by less than σN (about 50 μV for spinal motoneurones in deeply anaesthetized preparations), then it will be difficult to resolve them reliably. Another technique, based on the calculation of variance, can be applied to provide an independent assessment of the results obtained by deconvolution in these marginal cases (Jack, Redman and Wong, in preparation). If the underlying distribution is essentially continuous, then a careful choice of binwidth and a large sample will enable the envelope of this pdf to be obtained using deconvolution procedures.


A large sample size is required if this technique is to be reliable. Acquisition of 800--1600 responses during intracellular recording requires stable recording conditions and computer facilities to provide rapid transfer and storage of this large amount of data.

Convolution integrals (e.g. eqn. 1) can be solved using Fourier transform techniques. Unfortunately, when only estimates of the actual pdfs can be measured, Fourier transform operations can lead to negative peak voltages (for depolarizing synaptic potentials) and negative probabilities in the result. It is not possible to combine Fourier transform procedures with the constraints required. The algorithm described in this paper is ideally suited to the solution of this problem.
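The point can be illustrated with a small synthetic example. Here numpy's FFT stands in for the Fourier-transform route, non-negative least squares (scipy.optimize.nnls) stands in for a constrained solution, and all histograms are invented for the demonstration:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

noise_pdf = np.array([0.25, 0.55, 0.2])        # hypothetical noise histogram
s_true    = np.array([0.0, 0.7, 0.0, 0.3])     # hypothetical signal pdf
y_exact   = np.convolve(noise_pdf, s_true)     # noiseless observed histogram
# Histogram estimates are never exact; add a small alternating "sampling error".
y_meas = y_exact + 0.01 * np.array([1, -1, 1, -1, 1, -1])

# Fourier route: divide transforms and invert. Nothing keeps the result >= 0,
# and the error component near the Nyquist frequency is strongly amplified.
s_fft = np.fft.ifft(np.fft.fft(y_meas) / np.fft.fft(noise_pdf, len(y_meas))).real
print(np.round(s_fft[:4], 3))                  # contains negative "probabilities"

# Constrained route: non-negative least squares on the convolution matrix.
C = toeplitz(np.concatenate([noise_pdf, np.zeros(3)]), np.zeros(4))
s_nnls, _ = nnls(C, y_meas)
print(np.round(s_nnls, 3))                     # >= 0 by construction
```

The Fourier quotient here returns entries near −0.1, which cannot be probabilities, while the constrained fit remains non-negative throughout.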

This technique has been applied to the analysis of fluctuations of synaptic potentials evoked in spinal motoneurones by impulses in single Ia afferents. In this situation, the background noise is relatively high, and the pdf of the miniature synaptic potentials for this synapse cannot be measured (Redman, 1979). Analysis of fluctuations of potentials originating at these synapses is therefore a more difficult task than it is at peripheral synapses. The original application of deconvolution procedures may be found in Edwards et al. (1976a,b) and further applications and results will appear elsewhere (Hirst, Jack, Redman and Wong, in preparation).

The technique has wider application than the one we have described. It is not necessary that the known distribution be Gaussian. In general, the technique is applicable to the separation of two random variables, when the distributions for the sum of these random variables, and for one of the variables, can be measured independently. The two variables must be statistically independent.

APPENDIX

Theoretical basis for the numerical solution

The numerical solution is based on an algorithm given by Goldfarb (1972). Consider the problem of minimizing:

f(S) = f0 + Y'S + (½)S'GS   (9)

for the constraint

A'S ≥ b   (10)

where

f0 = constant
S = n-dimensional vector (the pdf S(v))
Y = n-dimensional vector (constant)
G = a constant and symmetric positive definite (n × n) matrix
b = m-dimensional vector (constant)
A = (n × m) matrix
S' = transpose of S (and similarly for Y')
m = number of constraints.


These equations are of the same form as eqns. 7 and 8.

The columns of A will be denoted by r1, r2, ..., rm and may be viewed as the inward normals of the constraint-defining hyperplanes.

The method requires an initial assumption for the value of S within the region specified by eqn. 10. A search vector is calculated (PqG⁻¹g(S)), where g(S) is the gradient of f(S), and PqG⁻¹ is an operator which includes the active constraints at this point. This search vector is used to find a local minimum. In searching for a local minimum, we determine whether any more constraints are on the constraint hyperplane (i.e. active). If no constraint is invoked, the Lagrange multiplier (α) is calculated to determine whether the local minimum is a global minimum. If it is not a global minimum, the procedure repeats. If a constraint is invoked in searching for a minimum, the search vector is recalculated at the point of intersection of the constraint surface and the search path.

Now, consider the method of locating the minimum of f(S) on the constraint surface Mq, where Mq is defined by q constraint hyperplanes, i.e. Mq = {S : ri'S = bi, i = 1, 2, ..., q}.

Let S* be the value of S where f(S) is a minimum on Mq. A necessary and sufficient condition for S* ∈ Mq to be the minimum is that the gradient of f(S) at S* be orthogonal to Mq.

That is,

g(S*) = GS* + Y = Rq·α   (11)

where Rq is the (n × q) matrix (r1, r2, ..., rq) of rank q, g(S) is the gradient of f(S) at S, and α is a q-dimensional vector (constant), the Lagrange multiplier.

The gradient operator g(·) is linear. For S ≠ S* on Mq, from eqn. 11:

g(S*) − g(S) = G(S* − S).   (12)

Using eqn. 11, eqn. 12 may be rewritten as:

S* − S = G⁻¹(Rqα − g(S)).   (13)

Let Lq be a linear subspace parallel to Mq. Hence the vector (S* − S) is in Lq also. Since the columns of Rq are the normals of Mq, then from eqn. 13:

Rq'(S* − S) = Rq'G⁻¹Rqα − Rq'G⁻¹g(S) = 0   (14)

∴ α = (Rq'G⁻¹Rq)⁻¹Rq'G⁻¹g(S)   (15)

= Rq*g(S)   (16)

where

Rq* = (Rq'G⁻¹Rq)⁻¹Rq'G⁻¹.   (17)


Eqns. 16 and 17 provide the method for calculating the Lagrange multiplier α. Substituting for α (from eqn. 16) into eqn. 13 gives:

S* = S − PqG⁻¹g(S)   (18)

where

Pq = I − G⁻¹Rq(Rq'G⁻¹Rq)⁻¹Rq'.   (19)

The matrix Pq is required when calculating the search vector. In the special case of zero constraints (q = 0), Pq = I, and the result (from eqn. 18)

S* = S − G⁻¹g(S)

is the well-known Newton's method of finding a minimum.

It can be shown (Goldfarb, 1972) that, provided an initial value of S (within the constraints) is given, this method can be used to generate an iterative continuous downhill search for a global minimum of f(S) while always remaining within the constraints. It can also be shown (Goldfarb, 1972) that this procedure will terminate in a finite number of iterations.

It is important to provide efficient recursion formulae for computation of the search operators Pq+1G⁻¹ and Rq+1* from PqG⁻¹ and Rq*, and vice versa, when one linearly independent constraint is added or removed, respectively.

These recursion formulae were derived by Fletcher (1971) and are as follows:

Pq+1G⁻¹ = PqG⁻¹ − (PqG⁻¹rq+1)(PqG⁻¹rq+1)' / (rq+1'PqG⁻¹rq+1)   (20)

Rq+1* = | Rq* − Rq*rq+1(PqG⁻¹rq+1)' / (rq+1'PqG⁻¹rq+1) |   (21)
        | (PqG⁻¹rq+1)' / (rq+1'PqG⁻¹rq+1)              |

PqG⁻¹ = Pq+1G⁻¹ + r*r*' / (r*'Gr*)   (22)

Rq* = [Iq 0] (Rq+1* − Rq+1*Gr*r*' / (r*'Gr*))   (23)

where r*' is the (q + 1)th row of Rq+1*. If the pth rather than the (q + 1)th hyperplane is removed from the set of constraints (1, 2, ..., p, ..., q + 1), formulae 22 and 23 can still be used, provided that the pth and (q + 1)th rows of Rq+1* are interchanged before formulae 22 and 23 are used ¹.

Using the above results, the structure of the programme can be divided into 5 sections, as follows.

¹ There is an error in both Fletcher's and Goldfarb's original papers on this condition. In their papers "columns" is printed instead of "rows".


Step 1 (initialization)

Our problem is to minimize eqn. 8. When f(S) is expanded it becomes:

f(S) = Σ(k=1..n) [ Yk² − 2Yk Σ(i=1..k) Si N(k+1−i) + Σ(i=1..k) Σ(j=1..k) Si Sj N(k+1−i) N(k+1−j) ]

= f0 − 2Y'S + S'GS

which is now in the same form as eqn. 9.

f0 = Σ(k=1..n) Yk²

Y' = [Y1 Y2 ... Yn] N

and

G = N'N

where N is the lower-triangular matrix formed from the noise histogram:

    | N1    0     0   ...   0  |
    | N2   N1     0   ...   0  |
N = | N3   N2    N1   ...   0  |
    | ...                      |
    | Nn  Nn−1  Nn−2  ...  N1  |

f0 is a constant and can be ignored in the minimization procedure, and the remaining terms may be halved without moving the minimum. The problem is now to minimize:

f(S) = −Y'S + (½)S'GS

subject to the constraints given in section 2. When these constraints are put into the form of eqn. 10, we obtain:

| −1  −1  −1  ...  −1 |  | S1 |     | −1 |
|  1   0   0  ...   0 |  | S2 |     |  0 |
|  0   1   0  ...   0 |  | S3 |  ≥  |  0 |
|  0   0   1  ...   0 |  | .. |     |  0 |
|  ...                |  | Sn |     |  0 |


The constraint hyperplane is defined by the equals sign in this equation. There are n + 1 constraints. One is that the sum of all Sk's = 1. The other n are that the probabilities of all n components of S(v) be positive.
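The matrices of Step 1 and the constraint system above can be assembled mechanically from the two measured histograms. In this sketch, G = N'N and Y is the noise matrix transposed times the observed histogram; the function and variable names are ours:

```python
import numpy as np
from scipy.linalg import toeplitz

def build_qp(noise_hist, y_hist):
    """Assemble G, Y, and the constraints A'S >= b from the two histograms."""
    n = len(y_hist)
    Nmat = toeplitz(noise_hist, np.zeros(n))   # lower-triangular convolution matrix
    G = Nmat.T @ Nmat                          # quadratic term of f(S)
    Y = Nmat.T @ y_hist                        # linear term of f(S)
    # n + 1 constraints: sum(S) = 1 (written as -sum(S) >= -1, kept active
    # throughout) and S_k >= 0 for each of the n components.
    A_t = np.vstack([-np.ones((1, n)), np.eye(n)])
    b = np.concatenate([[-1.0], np.zeros(n)])
    return G, Y, A_t, b
```

Because the diagonal of the convolution matrix is the first noise bin, G is positive definite whenever that bin is non-zero, as the algorithm requires.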

Step 2

We choose an initial point

S⁰ = [1, 0, 0, ..., 0]'

so that the only constraint which is not active is S1 ≥ 0 (see eqn. 8). With this choice q = n, and in eqn. 17 Rn becomes a square matrix. Using matrix theorems, eqn. 17 can be simplified to give:

Rn* = Rn⁻¹

Similarly, from eqn. 19, Pn = 0, which gives the search operator:

PnG⁻¹ = 0.

Using eqn. 11, we obtain g(S⁰) = GS⁰ + Y. Set l = 0 (number of iterations). Set q = n (number of active constraints).

Step 3

Compute the search vector E^l from:

E^l = PqG⁻¹g(S^l).

If E^l = 0, then go to Step 5.

Step 4

If E^l ≠ 0, then compute S^{l+1} and g(S^{l+1}) from:

S^{l+1} = S^l − τE^l

g(S^{l+1}) = GS^{l+1} + Y

where τ = min{1, τ̄} and

τ̄ = min { (rj'S^l − bj) / (rj'E^l) : rj'E^l > 0, (q + 1) ≤ j ≤ (n + 1) }

is the largest step that does not violate an inactive constraint.

If τ < 1, update PqG⁻¹ and Rq* by eqns. 20 and 21. Set l = l + 1 and q = q + 1 and return to Step 3.

If τ = 1, set l = l + 1 and go to Step 5.

Step 5

Compute the Lagrange multiplier αq from αq = Rq*g(S^l). Find the minimum element in αq, say the ith element (αi). If αi > 0, then S^l is the global minimum. Otherwise, exchange the ith row and the qth row of Rq*, then use eqns. 22 and 23 to update PqG⁻¹ and Rq*. Set q = q − 1 and return to Step 3. Note that the first row of Rq* is never dropped, because it is an equality constraint (Σ(i=1..n) Si = 1) and as such must always be active.
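The five steps above implement Goldfarb's active-set method by hand. As a compact sketch of the same constrained minimization, and a useful cross-check, a general-purpose solver can be applied to f(S) = −Y'S + (½)S'GS under the simplex constraints; here scipy's SLSQP replaces the authors' iteration, and all data are invented:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize

noise_hist = np.array([0.25, 0.5, 0.25, 0.0])  # invented noise pdf
s_true     = np.array([0.0, 0.6, 0.0, 0.4])    # invented signal pdf
Nmat = toeplitz(noise_hist, np.zeros(4))       # lower-triangular convolution matrix
y_hist = Nmat @ s_true                         # noiseless "observed" histogram

G = Nmat.T @ Nmat                              # quadratic term
Y = Nmat.T @ y_hist                            # linear term

res = minimize(
    lambda S: -Y @ S + 0.5 * S @ G @ S,        # f(S) = -Y'S + (1/2) S'GS
    x0=np.full(4, 0.25),                       # feasible start: uniform pdf
    jac=lambda S: G @ S - Y,                   # gradient of f
    bounds=[(0.0, None)] * 4,                  # S_k >= 0
    constraints=[{"type": "eq", "fun": lambda S: S.sum() - 1.0}],
    method="SLSQP",
)
print(np.round(res.x, 3))                      # approximately recovers s_true
```

On noiseless data the constrained minimum coincides with the true signal pdf, so the recovered vector should be close to s_true.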

ACKNOWLEDGEMENTS

We wish to acknowledge the important pioneering work of Drs. Frank Edwards and Bruce Walmsley in applying this technique to the analysis of synaptic potentials. We are grateful to Professor D. Perkel for his detailed comments on an earlier version of this paper.

REFERENCES

Edwards, F.R., Redman, S.J. and Walmsley, B. (1976a) Statistical fluctuations in charge transfer at Ia synapses on spinal motoneurones, J. Physiol. (Lond.), 259: 655--688.

Edwards, F.R., Redman, S.J. and Walmsley, B. (1976b) Non-quantal fluctuations and transmission failures in charge transfer at Ia synapses on spinal motoneurones, J. Physiol. (Lond.), 259: 689--704.

Fletcher, R. (1971) A general quadratic programming algorithm, J. inst. Math. Applic., 7: 76--91.

Goldfarb, D. (1972) Extension of Newton's method and simplex methods of solving quadratic programs. In F. Lootsma (Ed.), Numerical Methods for Non-Linear Optimization, Academic Press, London, pp. 239--254.

Kuno, M. (1964) Quantal components of excitatory synaptic potentials in spinal moto- neurones, J. Physiol. (Lond.), 175: 81--99.

Kuno, M. (1971) Quantal aspects of central and ganglionic transmission in vertebrates, Physiol. Rev., 51: 647--678.

McLachlan, E.M. (1978) The statistics of transmitter release at chemical synapses. In R. Porter (Ed.), International Review of Physiology: Neurophysiology III, Vol. 17, University Park Press, Baltimore, pp. 49--117.

Martin, A.R. (1977) Junctional transmission II. Presynaptic mechanisms. In E.R. Kandel (Ed.), The Handbook of Physiology, Section 1: The Nervous System, Vol. 1, Part 1, American Physiological Society, Bethesda, pp. 329--355.


Massey, F.J. (1951) The Kolmogorov--Smirnov test for goodness of fit, J. Amer. stat. Ass., 46: 68--78.

Mendell, L. and Weiner, R. (1976) Analysis of pairs of individual Ia e.p.s.p.s in single motoneurones, J. Physiol. (Lond.), 255: 81--104.

Redman, S.J. (1979) Junction mechanisms at group Ia synapses, Progr. in Neurobiol., 12: 33--83.