You can’t save all the pandas: impossibility results for privacy-preserving tracking

Yulin Zhang and Dylan A. Shell

Department of Computer Science and Engineering, Texas A&M University,
College Station, TX 77840, USA
yulinzhang|[email protected]

Abstract. We consider the problem of target tracking whilst simultaneously preserving the target’s privacy as epitomized by the panda tracking scenario introduced by O’Kane at WAFR’08. The present paper reconsiders his formulation, with its elegant illustration of the utility of ignorance, and the tracking strategy he proposed, along with its completeness. We explore how the capabilities of the robot and panda affect the feasibility of tracking with a privacy stipulation, uncovering intrinsic limits, no matter the strategy employed. This paper begins with a one-dimensional setting and, putting the trivially infeasible problems aside, analyzes the strategy space as a function of problem parameters. We show that it is not possible to actively track the target as well as protect its privacy for every non-trivial pair of tracking and privacy stipulations. Secondly, feasibility is sensitive, in several cases, to the information available to the robot at the start — conditions we call initial I-state dependent cases. Quite naturally in the one-dimensional model, one may quantify sensing power by the number of perceptual (or output) classes available to the robot. The number of initial I-state dependent conditions does not decrease as the robot gains more sensing power and, further, the robot’s power to achieve privacy-preserving tracking is bounded, converging asymptotically with increasing sensing power. Finally, to relate some of the impossibility results in one dimension to their higher-dimensional counterparts, including the planar panda tracking problem studied by O’Kane, we establish a connection between tracking dimensionality and the sensing power of a one-dimensional robot.

    1 Introduction

Most roboticists see uncertainty as something which should be minimized or even eliminated if possible. But, as robots become widespread, it is likely that there will be a shift in thinking — robots that know too much are also problematic in their own way. A robot operating in your home, benignly monitoring your activities and your daily routine, for example to schedule vacuuming at unobtrusive times, possesses information that is valuable. There are certainly those who could derive profit from it. A tension exists between information necessary for a robot to be useful and information which could be sensitive if mistakenly disclosed or stolen. But the problem of establishing the minimal information required to perform a particular task and the problem of analyzing the trade-off between information and performance, despite both being fundamental challenges, remain largely uncharted territory — notwithstanding [1] and [2].

The present paper’s focus is on the problem of tracking a target whilst preserving the target’s privacy. Though fairly narrow, this is a crisply formulated instance of the broader dilemma of balancing the information a robot possesses: the robot must maintain some estimate of the target’s pose, but information that is too precise is an unwanted intrusion and potential hazard if leaked. The setting we examine, the panda tracker problem, is due to O’Kane [3] who expressed the idea of uncertainty being valuable and aloofness deserving respect. The following, quoted verbatim from [3, p. 1], describes the scenario:

“A giant panda moves unpredictably through a wilderness preserve. A mobile robot tracks the panda’s movements, periodically sensing partial information about the panda’s whereabouts and transmitting its findings to a central base station. At the same time, poachers attempt to exploit the presence of the tracking robot—either by eavesdropping on its communications or by directly compromising the robot itself—to locate the panda. We assume, in the worst case, that the poachers have access to any information collected by the tracking robot, but they cannot control its motions. The problem is to design the tracking robot so that the base station can record coarse-grained information about the panda’s movements, without allowing the poachers to obtain the fine-grained position information they need to harm the panda.”

Note that it is not sufficient for the robot to simply forget or to degrade sensor data via post-processing because the adversary may have compromised these operations, simply writing the information to separate storage.

Before formalizing our approach to the problem, which we do in detail in Section 2, we give an overview of the questions of interest to us. We also give an outline of the structure of the paper and summarize the reported results.

One can view the informational constraints as bounds: (1) A maximal- or upper-bound specifies how coarse the tracking information can be. The robot is not helpful in assuring the panda’s well-being when this bound is exceeded. (2) A second constraint, a lower-bound, stipulates that if the information is more fine-grained than some threshold, a poacher may succeed in having his wicked way. The problem is clearly infeasible when the lower-bound exceeds the upper-bound. What of other circumstances? Is it always possible to ensure that one will satisfy both bounds indefinitely? In the original paper, O’Kane [3] proposed a tracking strategy for a robot equipped with a two-bit quadrant sensor, which certainly succeeds in several circumstances. As no claim of completeness was made, will the strategy work for all feasible bounds? And how are the strategies affected by improvements in the sensing capabilities of the robot?

Fig. 1: An overview of the approach taken in the paper. (The figure’s panels are labeled: panda tracking in one dimension; strategy space exploration; tracking is impossible; limitations; higher-dimensional panda tracking.)

Our examination of these questions follows the outline shown in Figure 1. We start with a one-dimensional panda tracking problem, analyzing it in detail in Section 3, and find that it is impossible for the robot to achieve privacy-preserving tracking for all nontrivial tracking and privacy stipulations. In addition, there are instances where privacy-preserving tracking is limited, depending crucially on the initial information available to the robot. Then, in Section 4, we show that the impossibility conclusion holds for O’Kane’s original planar panda tracking problem too.

    2 Problem description: one-dimensional panda tracking

The original problem was posed in two dimensions with the robot and panda inhabiting a plane that is assumed to be free of obstacles. They move in discrete time-steps, interleaving their motions. A powerful adversary, who is interested in computing the possible locations of the panda, is assumed to have access to the full history of information. Any information is presumed to be used optimally by the adversary in reconstruction of possible locations of the panda — by which we mean that the region that results is both sound (that is, consistently accounted for by the information) but is also tight (no larger than necessary). The problem is formulated without needing to appeal to probabilities by considering only worst-case reasoning and by using motion- and sensor-models characterized by regions and applying geometric operations.

Information stipulation: The tracking and privacy requirements were specified as two disks. The robot is constrained to ensure that the region describing possible locations of the panda always fits inside the tracking disk, which has the larger diameter of the two. The privacy disk imposes the requirement that it always be possible to place the smaller disk wholly inside the set of possible locations.

Sensor model: As reflected in the title of his paper, O’Kane considered an unconventional sensor that outputs only two bits of information per measurement. With origin centered on the robot, the sensor outputs the quadrant containing the panda.

Target motion model: The panda moves unpredictably with bounded velocity. Newly feasible locations can be modeled as the convolution of the previous time-step’s region with a disk, sized appropriately for the time-step interval.

Now, by way of simplification, consider pandas and robots that inhabit obstacle-free one-dimensional worlds, each only moving left or right along a line.

Information stipulation: Using the obvious 1-dimensional analogue, now the robot tracker has to bound its belief about the panda’s potential locations to an interval of minimum size rp (p for privacy) and maximum size rt (t for tracking).

Sensor model: Most simply, the quadrant sensor corresponds to a one-bit sensor indicating whether the panda is on the robot’s left- or right-hand side. When the robot is at u1, the sensor indicates whether the panda is within (−∞, u1] or (u1, ∞). We have sought a way to explore how modifying the robot’s sensing capabilities alters its possible behavior. Our approach is to give the robot m set-points u1 < u2 < ··· < um, each within the robot’s control, so that it can determine which of the m+1 non-overlapping intervals contains the panda. With m set-points one can model any sensor that fully divides the state space and produces at most m+1 observations.∗ The case with m = 1 is the straightforward analogue of the quadrant sensor. Increasing m yields a robot with greater sensing power, since the robot has a wider choice of how observations should inform itself of the panda’s location.

    Target motion model: The convolution becomes a trivial operation on intervals.

Figure 2 is a visual example with m = 2. The panda is sensed by the robot as falling within one of the following intervals: (−∞, u1], (u1, u2], (u2, ∞), where u1, u2 ∈ R and u1 < u2. These three intervals are represented by the observation values 0, 1 and 2. For simplicity, no constraints are imposed on the robot’s motion and we assume that at each time-step, the robot can pick positions u1 < ··· < um as it likes.

Fig. 2: Panda tracking in one dimension, showing the observation labels 0, 1, 2 induced by the set-points u1 and u2. (Inimitable artwork for the robot and panda adapted from the original in [3, p. 2].)
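As a small illustration of this labelling (a sketch; the function name and code are ours, not from the paper), the observation index is simply the number of set-points lying strictly to the left of the panda:

def observation(x, setpoints):
    # 'setpoints' must be sorted; with setpoints = [u1, u2] this reproduces the
    # labels 0, 1 and 2 of Figure 2 for the cells (-inf, u1], (u1, u2], (u2, inf).
    return sum(x > u for u in setpoints)

assert observation(0.5, [1.0, 2.0]) == 0
assert observation(1.5, [1.0, 2.0]) == 1
assert observation(2.5, [1.0, 2.0]) == 2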

    Notation and model:

The 1-dim. problem is formulated as follows. The panda’s location is represented as a single coordinate indexed by discrete time. At stage k the location of the panda is xk ∈ R. Incorporating accumulated knowledge about the panda’s possible location, sensor readings, and the movement model permits the robot (and adversary) to maintain knowledge of the panda’s possible location after it moves and between sensing steps. The set of conceivable locations of the panda (a subset of R) is a geometric realization of an information-state or I-state, as formalized and treated in detail by LaValle [4]. In this paper, we take the I-state as an interval.

The movement of the panda per time-step is bounded by length δ/2, meaning that the panda can move at most δ/2 in either direction. We use ηk to denote the robot’s knowledge of the panda after the observation taken at time k. In evolving from ηk to ηk+1 the robot’s I-state first transits to an intermediate I-state, which we write η−k+1, representing the state after adding the uncertainty arising from the panda’s movement, but before observation k+1. Since this update occurs before the sensing information is incorporated, we refer to η−k+1 as the prior I-state for time k+1. Updating I-state ηk involves mapping every xk ∈ ηk to [xk − δ/2, xk + δ/2], the resultant I-state η−k+1 being the union of the results.

∗This is not a model of any physical sensor of which we are aware. The reader, finding this too contrived, may find merit in the choice later, e.g., for n-dimensional tracking (see Lemma 6).

Sensor reading updates to the I-state depend on the values of u1(k), u2(k), . . . , um(k), which are under the control of the robot. The sensor reports the panda’s location to within one of the m+1 non-empty intervals: (−∞, u1(k)], (u1(k), u2(k)], (u2(k), u3(k)], . . . , (um(k), ∞). If we represent the observation at time k as a non-empty interval y(k), then the posterior I-state ηk is updated as ηk = η−k ∩ y(k).

For every stage k, the robot chooses a sensing vector vk = [u1(k), u2(k), . . . , um(k)], with ui(k) ∈ R and ui(k) < uj(k) if i < j, so as to achieve the following conditions:

1. Privacy Preserving Condition (PPC): The size of any I-state ηk = [a, b] should be at least rp. That is, for every stage k, |ηk| = b − a ≥ rp.

2. Target Tracking Condition (TTC): The size of any I-state ηk = [a, b] should be at most rt. That is, for every stage k, |ηk| = b − a ≤ rt.
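To make these updates concrete, here is a minimal Python sketch (our code and naming, not from the paper) of one stage of the bookkeeping: the prior dilation by δ/2, the intersection with the reported cell, and a check of the two conditions above. Open versus closed endpoints are glossed over, since only interval lengths matter for PPC and TTC.

def predict(istate, delta):
    # Prior update: dilate the interval by the panda's per-step motion bound delta/2.
    lo, hi = istate
    return (lo - delta / 2, hi + delta / 2)

def sense(prior, setpoints, panda_x):
    # Posterior update: intersect the prior with the set-point cell containing the panda.
    bounds = [float("-inf")] + sorted(setpoints) + [float("inf")]
    for lo, hi in zip(bounds, bounds[1:]):
        if lo < panda_x <= hi:
            return (max(prior[0], lo), min(prior[1], hi))

def conditions_hold(istate, rp, rt):
    # PPC and TTC: the I-state size must stay within [rp, rt].
    size = istate[1] - istate[0]
    return rp <= size <= rt

eta = (0.0, 6.0)                                    # |eta_0| = 6
prior = predict(eta, delta=2.0)                     # prior size 8
post = sense(prior, setpoints=[3.0], panda_x=2.0)   # m = 1, observation y(k) = (-inf, 3]
print(post, conditions_hold(post, rp=2.0, rt=8.0))  # (-1.0, 3.0) True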

    3 Privacy-preserving tracking

Given specific problem parameters, we are interested in whether there is always some vk that a robot can select to track the panda while satisfying the preceding conditions.

Definition 1. A 1-dim. panda tracking problem is a tuple P1 = (η0, rp, rt, δ, m), in which 1) the initial I-state η0 describes all the possible initial locations of the panda; 2) the privacy bound rp gives a lower bound on the I-state size; 3) the tracking bound rt gives an upper bound on the I-state size; 4) parameter δ describes the panda’s (fastest) motion; and 5) the sensor capabilities are given by the number m.

Definition 2. The 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, m) is privacy preservable, written as predicate PP(P1), if starting with |η0| ∈ [rp, rt], there exists some strategy π to determine a vk at each time-step, such that the Privacy Preserving Condition holds forever. Otherwise, the problem P1 is not privacy preservable: ¬PP(P1).

Definition 3. The 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, m) is target trackable, TT(P1), if starting with |η0| ∈ [rp, rt], there exists some strategy π to determine a vk at each time-step, such that the Target Tracking Condition holds forever. Otherwise, the problem P1 is not target trackable: ¬TT(P1).

To save space, we say a problem P1 and also its strategy π are PP if PP(P1). Similarly, both P1 and its strategy π will be called TT if TT(P1). Putting aside trivially infeasible P1 where rp > rt, we wish to know which problems are both PP and TT. Next, we explore the parameter space to classify the various classes of problem instances by investigating the existence of strategies.

    3.1 Roadmap of technical results

The results follow from several lemmas. The roadmap in Figure 3 provides a sense of how the pieces fit together to help the reader keep an eye on the broader picture.

Our approach begins, first, by dividing the strategy space into ‘teeth’ and ‘gaps’ according to the speed (or size) of the panda’s motion. PP and TT tracking strategies are shown to exist in teeth regions and the last gap region for all initial I-states that satisfy |η0| ∈ [rp, rt]. The remaining regions differ: their feasibility depends not only on the panda’s motion but also on the size of the initial I-state. Dealing with the remaining gap regions is simplified considerably by focusing on vk that divide η−k evenly. We use s(i) to represent the choice of evenly dividing the prior I-state interval into i parts (the mnemonic is s for split). In practice, we will usually compare choices s(a) and s(a+1), for a positive integer constant a determined from the speed of the panda’s motion (δ/2).

Fig. 3: Roadmap of results for 1-dim. panda tracking with m sensing parameters.

The action of evenly dividing the I-state interval η−k via s(·) is useful because, after the sensor reading has been processed, one knows the degree of the uncertainty involved, i.e., the size of the I-state interval ηk. An s(i) action results in greater post-sensing uncertainty than an s(i+1) does. Strategies consisting only of s(i) and s(i+1) actions enable examination of the evolution of interval sizes. If the I-state resulting from an s(i) violates the tracking constraint, other actions dividing the I-state into at most i parts, though they be uneven parts, will also violate the tracking bound because we must guarantee a solution no matter which interval the panda happens to be in. Analogously, the s(i+1) case is often useful in analyzing violation of the privacy bound.
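For reference in what follows, the size bookkeeping induced by an even split can be stated compactly (our restatement of the updates from Section 2, not an additional assumption): if the robot takes action s(i) at stage k+1, then
\[
  |\eta_{k+1}| \;=\; \frac{|\eta^{-}_{k+1}|}{i} \;=\; \frac{|\eta_{k}| + \delta}{i},
\]
since the prior grows by exactly δ and is then divided into i equal parts.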

We have found it helpful to think of the problem as playing a multi-stage game against suicidal pandas who strategically choose the movement that tends to violate the bounds. The robot’s job is to save all of them. According to whether an s(i) or s(i+1) may be performed in the I-state or not, we are able to divide the remaining strategy space into four cases where two of them are non-PP or non-TT, one is PP and TT regardless of the initial I-state, and the other is I-state dependent.

    3.2 Main results

    In this section, we follow the roadmap in Figure 3.

Lemma 1. For any 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, m), if δ ∈ [arp, art], where a ∈ Z+, a ≤ m, then PP(P1) ∧ TT(P1).

Proof. A PP and TT strategy is given in this proof. For any |ηk| ∈ [rp, rt] the prior I-state has size |η−k+1| = |ηk| + δ. Since δ ∈ [arp, art] (a ≤ m), |η−k+1| ∈ [arp + rp, art + rt]. By taking action s(a+1), which is possible since a ≤ m, we get |ηk+1| = |η−k+1|/(a+1) ∈ [rp, rt]. That is, if |η0| ∈ [rp, rt] and we always take action s(a+1), then |ηk| ∈ [rp, rt] for every k. Therefore, there exists a strategy (always take action s(a+1)) for P1, so that the privacy-preserving tracking conditions PPC and TTC are always both satisfied when |η0| ∈ [rp, rt]. □
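As a concrete check (our numbers, not an instance from the paper): take rp = 1, rt = 2, δ = 1.5 and m = 1, so that a = 1 and δ ∈ [arp, art] = [1, 2]. Any |ηk| ∈ [1, 2] gives |η−k+1| = |ηk| + 1.5 ∈ [2.5, 3.5], and the action s(2) returns |ηk+1| ∈ [1.25, 1.75] ⊂ [1, 2], so the invariant is maintained forever.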

Lemma 2. For any 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, m), if δ ∈ (mrt, ∞), then ¬TT(P1).

Proof. The tracking stipulation is proved to be violated eventually. Given the constraint of m sensing parameters, the prior I-state, of size |η−k+1| = |ηk| + δ, can be divided into at most m+1 parts. Among these m+1 posterior I-states, even if m of them are given the maximum permitted size rt, the remaining I-state has size |ηk| + δ − mrt; that is, the smallest size any resulting I-state can have, without another piece violating the tracking bound, is |ηk| + δ − mrt. Since δ > mrt, whichever interval the panda turns out to occupy, the I-state size must increase by at least the positive constant δ − mrt at every stage. After ⌈(rt − rp)/(δ − mrt)⌉ stages, the I-state will exceed rt. So it is impossible to ensure that the tracking bound will not eventually be violated. □
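For instance (our numbers, not from the paper): with m = 1, rp = 1, rt = 2 and δ = 4 > mrt, even starting from |η0| = rp = 1 the prior has size 5. One set-point cuts it into at most two pieces, and the two pieces sum to 5, so they cannot both have size at most rt = 2; the tracking bound therefore fails after ⌈(rt − rp)/(δ − mrt)⌉ = 1 stage, whichever set-point the robot picks.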

Lemma 3. For any 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, m), if δ ∈ (art − rt, arp), where a ∈ Z+, a ≤ m and art ≥ arp + rp, then PP(P1) ∧ TT(P1).

Proof. A PP and TT strategy is given in this proof. Since δ ∈ (art − rt, arp) and art ≥ arp + rp, we have |η−k+1| = |ηk| + δ ∈ (arp, art + rt) ⊂ L1 ∪ L2, where L1 = [arp, art] and L2 = [arp + rp, art + rt]. If action s(a) is performed when |η−k+1| ∈ L1 and s(a+1) is performed when |η−k+1| ∈ L2, the resulting I-state satisfies |ηk+1| ∈ [rp, rt]. Hence, there is a strategy, consisting of s(a) and s(a+1) actions, for the problem P1 such that the PPC and TTC are always both satisfied when |η0| ∈ [rp, rt]. □
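One concrete way to realize this case split is as a threshold rule on the prior size (a sketch, our code and naming; under the hypotheses of Lemma 3 the prior size always lies in L1 ∪ L2, so checking only the upper end of L1 suffices):

def split_choice(prior_size, a, rt):
    # Take s(a) while the prior lies in L1 = [a*rp, a*rt]; otherwise it lies in
    # L2 = [(a+1)*rp, (a+1)*rt] and s(a+1) keeps the posterior within [rp, rt].
    return a if prior_size <= a * rt else a + 1

assert split_choice(7.0, a=1, rt=8.75) == 1
assert split_choice(10.0, a=1, rt=8.75) == 2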

Lemma 4. For any 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, m), if δ ∈ (art − rt, arp), where a ∈ Z+, a ≤ m, art < arp + rp, then ¬PP(P1) ∨ ¬TT(P1) when either: (i) rp > art − δ or (ii) rt < (a+1)rp − δ.

Proof. In case (i), rp > art − δ, so the size of the prior I-state satisfies |η−k+1| = |ηk| + δ > rp + δ > art. That is, if η−k+1 is divided into at most a parts, the largest posterior I-state ηk+1 will violate the tracking stipulation at the next time-step. Thus, the prior I-state η−k+1 must be divided into at least a+1 parts. But by dividing |η−k+1| into at least a+1 parts, among all the resulting posterior I-states, it can be shown that the smallest I-state will eventually violate the privacy stipulation. The smallest of the resulting posterior I-states, of size |ηk+1|smallest, is no greater than the average size (|ηk| + δ)/(a+1) when dividing η−k+1 into a+1 parts. For the smallest posterior I-state, the decrease in size is ∆− = |ηk| − |ηk+1|smallest ≥ |ηk| − (|ηk| + δ)/(a+1) ≥ (arp − δ)/(a+1) > 0. Hence, eventually, after ⌈(a+1)(rt − rp)/(arp − δ)⌉ steps, the smallest I-state will violate the privacy stipulation and put the panda in danger. Similarly, if the prior I-state is divided into fewer than a+1 parts, the average size will become larger and the largest posterior I-state will violate the tracking stipulation, as it must increase at least as much as the average.

The same conclusion is reached for (ii), when rt < (a+1)rp − δ, along similar lines. □

Fig. 4: A problem instance with rp = 9/2, rt = 35/4, δ = 2, a = 1, where there exists a PP and TT tracking strategy for |η0| ∈ q1 ∪ q2 ∪ q3 ∪ q4 ∪ q5. (The figure shows the I-state size intervals q1, . . . , q5 admitting PP and TT strategies, and the state transitions among them under the s(a) and s(a+1) actions.)

Lemma 5. The tracking and privacy-preserving properties of a 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, m) depend on the size of the initial I-state |η0|, if δ ∈ (art − rt, arp), where a ∈ Z+, a ≤ m and rp ≤ art − δ < (a+1)rp − δ ≤ rt.

Proof. The proof proceeds by showing, firstly, that there are certain initial I-states for which the problem is not both PP and TT. Next, we show that there exist other initial I-states where there are strategies which satisfy the conditions for the problem to be PP and TT.

First, for |η0| ∈ (art − δ, (a+1)rp − δ), if the prior I-state is divided into at most a parts, the largest posterior I-state at the next time-step (t = 1) has size |η1| ≥ (|η0| + δ)/a > (art − δ + δ)/a = rt, which will violate the tracking stipulation. If the prior I-state is divided into at least a+1 parts, the smallest posterior I-state at the next step (t = 1) has size |η1| ≤ (|η0| + δ)/(a+1) < ((a+1)rp − δ + δ)/(a+1) = rp, which will violate the privacy stipulation. Hence, there are no strategies which are both PP and TT if |η0| ∈ (art − δ, (a+1)rp − δ).

Second, we provide an example strategy that serves as an existence proof for PP and TT instances with |η0| ∈ [rp, art − δ) ∪ ((a+1)rp − δ, rt]. Consider the instance rp = 9/2, rt = 35/4, δ = 2, a = 1, where there are PP and TT strategies for |η0| ∈ q1 ∪ q2 ∪ q3 ∪ q4 ∪ q5, with q1 = [9/2, 19/4], q2 = [5, 11/2], q3 = [6, 27/4], q4 = [7, 15/2], q5 = [8, 35/4]. By taking s(a) and s(a+1) actions as shown in Figure 4, the resulting I-state remains within q1, q2, . . . , q5. That is, there are strategies for those I-states in q1 ∪ q2 ∪ ··· ∪ q5 and the problem is PP and TT under such circumstances. □
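The cycle in Figure 4 can be checked mechanically. The following runnable sketch (our code, not from the paper) realizes the strategy for this instance as the rule ‘take s(a) while the prior size is at most a·rt, otherwise take s(a+1)’; the assertion confirms that PPC and TTC hold at every stage when started from |η0| = 9/2.

from fractions import Fraction

rp, rt, delta, a = Fraction(9, 2), Fraction(35, 4), Fraction(2), 1

size = Fraction(9, 2)              # |eta_0|, in q1
for k in range(1000):
    prior = size + delta           # prior size = |eta_k| + delta
    parts = a if prior <= a * rt else a + 1
    size = prior / parts           # even split s(parts)
    assert rp <= size <= rt        # PPC and TTC hold at stage k+1

Started instead from a size outside q1 ∪ ··· ∪ q5 (for example |η0| = 24/5), the same rule drives the size into the interval (art − δ, (a+1)rp − δ) = (27/4, 7), after which, by the first part of the proof, a violation is unavoidable; this illustrates why feasibility here depends on the initial I-state.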

    Next, we collect the low-level results presented thus far to clarify their interplay.

    3.3 Aggregation and summary of results

To have a clearer sense of how these pieces fit together, we found it helpful to plot the space of problem parameters and examine how the preceding theorems relate visually. Figure 5 contains subfigures for increasingly powerful robots (in terms of sensing) with m = 1, 2, 3, 4. The white regions represent the trivial rp > rt instances; otherwise the whole strategy space is categorized into the following subregions: PP and TT strategy space (colored green), not both PP and TT space (colored gray), and regions dependent on the initial I-states (colored pink — some caution is advised as particular values of η0 are not visible in this projection). When summarized in this way, the results permit examination of how sensor power affects the existence of solutions.

Fig. 5: The problem parameter space and the existence of strategies for robots with differing m (subfigures: (a) m = 1; (b) m = 2; (c) m = 3; (d) m = 4). The green region depicts PP and TT conditions, where a suitable strategy exists no matter the initial I-state. The gray region represents the conditions that violate either PP or TT (or both). The pink region depicts conditions known to be I-state dependent. The white region represents trivially infeasible problems. The orange rectangles emphasize the different magnification-levels and highlight conditions where the differences in sensing power come into play. Both rp and rt are expressed in units of δ/2.

Preserving the privacy of a target certainly makes some unusual demands of a sensor. O’Kane’s quadrant sensor has pre-images for each output class that are infinite subsets of the plane, making it possible for his robot to increase its uncertainty if it must do so. But it remains far from clear how one might tackle the same problem with a different sensor. The privacy requirement makes it difficult to reason about the relationship between two similar sensors. For example, an octant sensor appears to be at least as useful as the quadrant sensor, but it makes preserving privacy rather trickier. Since octants meet at the origin at 45°, it is difficult to position the robot so that it does not discover too much. An advantage of the one-dimensional model is that the parameter m allows for a natural modification of sensor capabilities. This leads to three closely related results, each of which helps clarify how certain limitations persist even when m is increased.

Theorem 1.A (More sensing won’t grant omnipotence) The 1-dim. robot is not always able to achieve privacy-preserving tracking, regardless of its sensing power.

Proof. The negative results in Lemmas 2 and 4 show that there are circumstances where it is impossible to find a tracking strategy satisfying both PPC and TTC. Though these regions depend on m, no finite value of m causes these regions to be empty. □

Turning to cases that depend on the initial I-state,† one might think that sensitivity to η0 is a consequence of uncertainty that compounds because the sensors used are too feeble. But this explanation is actually erroneous. Observe that the I-state dependent region is more complicated than other regions within the strategy space: in Figure 5, the green region is contiguous, whereas the regions marked pink are not. (Bear in mind that the figure does not depict η0 itself, merely regions where it is known that η0 affects the existence of a strategy.) The specific I-state dependent region for Lemma 5 under condition a = 1, visible clearly as a chisel shape in Figures 5a and 5b, is invariant with respect to m, so remains I-state dependent in all circumstances (though outside the visible region in Figures 5c and 5d, it is present). As m increases, what happens is that the regions formerly marked as not both PP and TT are claimed as PP and TT, or become I-state dependent. The following expresses the fact that additional sensing power fails to reduce the number of I-state dependent strategy regions.

Theorem 1.B (Information State Invariance) The number of initial I-state dependent strategy regions does not decrease by using more powerful sensors.

Proof. We focus on the I-state dependent areas within the square between (0, 0) and (2, 2) as the parts outside this (orange) square do not change as m increases. According to Lemma 5, the I-state dependent conditions for any specific a are bounded by the following linear inequalities:

rt < 2/(a−1),    (1)
rp > 2/a,    (2)
rt ≥ rp/a + 2/a,    (3)
rt ≥ (a+1)rp − 2,    (4)
rt < (a+1)rp/a.    (5)

†For brevity sometimes we will call such instances “I-state dependent” though it is strictly |η0|, the size of the initial I-state, on which they depend.

Combining (1)–(3) gives both the bound for rt as rt ∈ [2(a+1)/a², 2/(a−1)), and the bound for rp as rp ∈ [2/a, 2a/(a²−1)]. The I-state dependent condition, thus, is a bounded region.

Next, we show that (1) and (2) are dominated by (3)–(5). According to (4) and (5), we have rp < 2a/(a²−1). Applying this result to (5) produces (1). Similarly, combining (3) and (5) together yields (2). Hence, the I-state dependent conditions are fully determined by inequalities (3)–(5). (Figure 6 provides a visual example.)

To form a bounded region with three linear inequalities, the I-state dependent region has to be a triangle. The three points of the triangle can be obtained by intersecting pairs of (3)–(5): (2/a, (2a+2)/a²), (2a/(a²−1), 2/(a−1)), and (2(a+1)/(a²+a−1), 2(a+1)²/(a²+a−1) − 2). Since a ∈ {2, 3, ··· , m}, the triangle region will not be empty. Let ∆(a) denote the triangle with parameter a. Then the smallest y coordinate for ∆(a) is minY(∆(a)) = (2a+2)/a², and the largest y coordinate for ∆(a) is maxY(∆(a)) = 2/(a−1). For adjacent triangles ∆(a) and ∆(a+1), we have minY(∆(a)) > maxY(∆(a+1)). Hence, the triangles for different values of a do not overlap. □
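The two facts just used are easy to check mechanically. The following sketch (our code, not from the paper) constructs the three vertices of ∆(a) symbolically and verifies, over a range of a, that minY(∆(a)) and maxY(∆(a)) are as claimed and that ∆(a) sits strictly above ∆(a+1).

from fractions import Fraction as F

def vertices(a):
    # Pairwise intersections of inequalities (3)-(5), in units of delta/2.
    return [(F(2, a), F(2 * a + 2, a * a)),
            (F(2 * a, a * a - 1), F(2, a - 1)),
            (F(2 * (a + 1), a * a + a - 1), F(2 * (a + 1) ** 2, a * a + a - 1) - 2)]

for a in range(2, 50):
    ys = [y for _, y in vertices(a)]
    assert min(ys) == F(2 * a + 2, a * a)                  # minY(Delta(a))
    assert max(ys) == F(2, a - 1)                          # maxY(Delta(a))
    assert min(ys) > max(y for _, y in vertices(a + 1))    # no overlap with Delta(a+1)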

Fig. 6: Relationships of the linear inequalities in Lemma 5. (The figure plots the lines rp = 2/a, rt = rp/a + 2/a, rt = (a+1)rp − 2 and rt = (a+1)rp/a, together with the three triangle vertices listed in the proof.)

The preceding discussion showed that the number of I-state dependent regions increases with m and that, along with the chisel-shaped region, there are m−1 triangles of decreasing size. This motivates our introduction of a quantitative measure of the robot’s power as a function of m.

Definition 4. A measure of tracking power, p(m), for the robot with m sensing parameters should satisfy the following two properties: p(m) > 0, ∀m ∈ Z+ (positivity), and p(a) > p(b), if a, b ∈ Z+ and a > b (monotonicity).

The plots in Figure 5 suggest that one way to quantify change in these regions is to measure changes in the various areas of the parameter space as m increases. As a specific measure of power in the one-dimensional setting, we might consider the proportion of cases (in the rp vs. rt plane) that are PP and TT (green) and I-state dependent (pink). Though the green and pink areas are unbounded in the full plane, the only changes that occur as m increases are in the square between (0, 0) and (2, 2). Thus, we take p(m) to equal the total volume of the green and pink regions falling within the region 0 ≤ rp ≤ rt and 0 ≤ rt ≤ 2. This area satisfies the properties in Definition 4 and is indicative of the power of the robot as, intuitively, it can be interpreted as an upper-bound on the solvable cases.

Corollary 1 (Asymptotic tracking power) The power p(m) of a robot with m sensing parameters to achieve privacy-preserving tracking in the 1-dim. problem is bounded and lim_{m→∞} p(m) = L, with 1.5 < L < 1.6.

Proof. Inequalities (3)–(5) in the proof of Theorem 1.B give m−1 triangles, one for each a ∈ {2, 3, ··· , m}. An analytic expression gives the area of each of these triangles and the series describing the cumulative pink volume ppink(m) within 0 ≤ rp ≤ rt and 0 ≤ rt ≤ 2 can be shown (by the comparison test) to converge as m → ∞. Similarly, the cumulative green volume pgreen(m) within 0 ≤ rp ≤ rt and 0 ≤ rt ≤ 2 converges. Numerical evaluation gives the value of the limit as L ≈ 1.545. □
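For the pink contribution alone, the convergence can be illustrated numerically; the sketch below (our code, not the authors’ computation) sums the areas of the triangles ∆(a) via the shoelace formula using the vertices from Theorem 1.B. The green contribution to p(m) is handled analogously and is not reproduced here, so this snippet does not by itself yield the limit L.

from fractions import Fraction as F

def triangle_area(a):
    (x1, y1), (x2, y2), (x3, y3) = [
        (F(2, a), F(2 * a + 2, a * a)),
        (F(2 * a, a * a - 1), F(2, a - 1)),
        (F(2 * (a + 1), a * a + a - 1), F(2 * (a + 1) ** 2, a * a + a - 1) - 2)]
    # Shoelace formula for the area of a triangle.
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

pink = F(0)
for a in range(2, 200):
    pink += triangle_area(a)   # the areas shrink rapidly, so the partial sums settle quickly
print(float(pink))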

    4 Beyond one-dimensional tracking

The inspiration for this work was the 2-dimensional case. This section lifts the impossibility result to higher dimensions.

    4.1 Mapping from high dimension to one dimension

In the n-dimensional privacy-preserving tracking problem, the state for the panda becomes a point in Rn. The panda can move a maximum distance of δ/2 in any direction in Rn within a single time-step, so that the panda’s actions fill an n-dimensional ball. The privacy and tracking bounds are also generalized, from intervals of size rp and rt, to n-dimensional balls of diameter rp and rt respectively. That is, the I-state should contain a ball of diameter rp and be contained in a ball of diameter rt, so as to achieve privacy-preserving tracking. The robot inhabits the n-dimensional space as well, and attention must be paid to its orientation too.

It is unclear what would form the appropriate higher-dimensional analogue of parameter m, so we only consider n-dimensional tracking problems for robots equipped with a generalization of the quadrant sensor. The sensor’s orientation is determined by that of the robot and it indicates which of the 2^n possible orthogonal cells the panda might be in. Adopting notation and definitions analogous to those earlier, we use a tuple for n-dimensional tracking problems — a subscript makes the intended dimensionality clear.

The following lemma shows that there is a mapping which preserves the tracking property from n-dimensional problems to 1-dimensional problems.

Lemma 6. Given some 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, n), there exists an n-dim. panda tracking problem Pn = (θ0, rp, rt, δ, 1n) where, if TT(Pn), then TT(P1).

Proof. The approach to this proof has elements of a strategy-stealing argument and simulation of one system by another. The robot faced with a 1-dim. problem constructs an n-dim. problem and uses the (hypothetical) strategy for this latter problem to select actions. The crux of the proof is that the 1-dim. robot can report back observations that are apposite for the n-dim. case. (Figure 7, below, gives a visual overview.)

For some P1 = (η0, rp, rt, δ, m), with m = n, we construct Pn = (θ0, rp, rt, δ, 1n) as follows. Without sacrifice of generality, assume that in P1 the initial I-state η0 = {x | η0^min ≤ x ≤ η0^max} is centered at the origin, so η0^min = −η0^max. (This simplifies the argument and a suitable translation of the coordinate system rectifies the situation otherwise.) Then we choose θ0 as the closed ball at the origin with radius η0^max.

Fig. 7: Constructing a 1-dim. strategy π1 from some n-dim. strategy πn. (The depicted steps are: 1) activate n-dim. observations; 2) project the n-dim. I-state partition to 1-dim.; 3) update the n-dim. I-state, moving and rotating the simulated robot.)

We show how, given some πn on Pn = (θ0, rp, rt, δ, 1n), we can use it to define a π1 for use by the 1-dim. robot. The robot forms θ0 and also has η0. It picks an arbitrary unit-length vector v̂ = v1e1 + v2e2 + ··· + vnen, unknown to the source of πn, which defines the subspace that the 1-dim. panda lives in. For subsequent steps, the robot maintains θ−1, θ1, θ−2, θ2, . . . , θ−k, θk, θ−k+1, . . . along with the I-states in the original 1-dim. problem η−1, η1, η−2, η2, . . . , η−k, ηk, η−k+1, . . . . For any step k, the ηk can be seen as measured along v̂ within the higher-dimensional space. Given θk−1, θ−k is constructed using Minkowski sum operations as before, though now in higher dimension. Given θ−k, strategy πn determines a new pose for the n-dim. robot and, on the basis of this location and orientation, the n sensing planes slice through θ−k. Though the planes demarcate 2^n cells, the line along v̂ is cut into no more than n+1 pieces as the line can pierce each plane at most once (with any planes containing v̂ being ignored). Since the 1-dim. robot has m = n, it picks the u1, . . . , un by measuring the locations at which the sensing planes intersect the line x = αv̂, α ∈ R. (If fewer than n intersections occur, owing to planes containing the line, the extra ui’s are simply placed outside the range of the I-state.) After the 1-dim. panda’s location is determined, the appropriate orthogonal cell is reported as the n-dim. observation, and θ−k leads to θk via the intersection operation. This process comprises π1. It continues indefinitely because θk must always entirely contain ηk along the line through v̂ since, after all, a cantankerous n-dim. panda is free to choose to always limit its movements to that line.

If πn is TT, then so too is the resulting strategy π1, since the transformation relating θk ∩ {αv̂ : α ∈ R} with ηk preserves length and, thus, θk fitting within a ball of diameter rt implies that |ηk| ≤ rt. □
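As an illustration of the projection step of this construction, the following sketch (our code and naming, not from the paper) recovers the set-points u1, . . . , un from the n sensing planes of the generalized quadrant sensor; a plane parallel to the line x = αv̂ yields no intersection, and its set-point is simply placed outside the current 1-dim. I-state, as in the proof.

import numpy as np

def setpoints_from_planes(p, R, v_hat, istate, eps=1e-12):
    # Each sensing plane is {x : <R[:, j], x - p> = 0}, where p is the n-dim. robot's
    # position and the columns of R are the plane normals (the robot's orientation).
    # The plane meets the line x = alpha * v_hat at alpha = <R[:, j], p> / <R[:, j], v_hat>.
    lo, hi = istate
    outside = hi + 1.0                       # any value beyond the I-state is uninformative
    us = []
    for j in range(R.shape[1]):
        normal = R[:, j]
        denom = float(normal @ v_hat)
        us.append(float(normal @ p) / denom if abs(denom) > eps else outside)
    return sorted(us)

# Example: n = 2, an unrotated quadrant sensor at p; the 1-dim. panda lives on the x-axis.
p = np.array([1.0, 3.0])
R = np.eye(2)                                # plane normals e1 and e2
v_hat = np.array([1.0, 0.0])
print(setpoints_from_planes(p, R, v_hat, istate=(-2.0, 2.0)))   # [1.0, 3.0]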

    4.2 Impossibility in high-dimensional privacy-preserving tracking

    Now we are ready to connect the pieces together for the main result:

Theorem 2 (Impossibility) It is not possible to achieve privacy-preserving panda tracking in n dimensions for every problem with rp < rt.

Proof. To extend the lemmas that have shown this result for n = 1 to cases for n > 1, suppose such a solution existed for every Pn = (θ0, rp, rt, δ, 1n). Then according to Lemma 6, every 1-dim. panda tracking problem P1 = (η0, rp, rt, δ, n) is TT, since they can be mapped to an n-dim. TT panda tracking problem. But this contradicts the non-TT instances in Lemma 2, so no such strategy can exist for every non-trivial problem (rp < rt) in two, three, and higher dimensions. □

    5 Related work

Information processing is a critical part of what most robots do — without estimating properties of the world, their usefulness is hampered, sometimes severely so. Granting our computational devices access to too much information involves some risk, and privacy-conscious users may balk and simply opt to forgo the technology. Multiple models have been proposed to think about this tension in the setting of data processing more generally, cf. formulations in [5,6]. Existing work has also explored privacy for networks, along with routing protocols proposed to increase anonymity [7], and includes wireless sensor problems where distinct notions of spatial and temporal privacy [8,9] have been identified.

Some recent work has begun to investigate privacy in settings more directly applicable to robots. Specifically, three lines of work propose techniques to help automate the development of controllers that respect some limits on the information they divulge. O’Kane and Shell [10] formulated a version of the problem in the setting of combinatorial filters where the designer provides additional stipulations describing which pieces of information should always be plausibly indistinguishable and which must never be conflated. The paper describes hardness results and an algorithm for ascertaining whether a design satisfying such a stipulation is feasible. This determination of feasibility for a given set of privacy and utility choices is close in spirit to what this paper has explored for the particular case of privacy-preserving tracking: here those choices become the quantities rp and rt. The second important line is that of Wu et al., who explore how to disguise some behavior in their control system’s plant model, and show how to protect this secret behavior by obfuscating the system outputs [11]. More recent work expresses both utility and privacy requirements in an automata framework and proposes algorithms to synthesize controllers satisfying these two constraints [12]. In the third line, Prorok and Kumar [13] adopt the differential privacy model to characterize the privacy of heterogeneous robot swarms, so that any individual robot cannot be determined to be of a particular type from macroscopic observations of the swarm.

Finally, we note that while we have considered panda tracking as a cute realization of this broader informationally constrained problem, wildlife monitoring and protection is an area in which serious prior robotics work exists (e.g., see [14]) and for which there is substantial and growing interest [15].

    6 Conclusion and future work

In this paper we have reexamined the panda tracking scenario introduced by O’Kane [3], focusing on how various parameters that specify a problem instance (such as the capabilities of the robot and the panda) affect the existence of solutions. Our approach has been to study nontrivial instances of the problem in one dimension. This allows for an analysis of strategies by examining whether the sensing operations involved at each step increase or decrease the degree of uncertainty in a directly quantifiable way. Only if this uncertainty can be precisely controlled forever can we deem the problem instance solved. We use a particular set of sensing choices, basically division of the region evenly, which we think of as a split operation. This operation is useful because the worst resulting I-state under other choices, those that are not evenly split, is weaker (in terms of satisfying the tracking and privacy constraints) than even divisions are. Thus, the split operation acts as a kind of basis: if the problem has no solution with these choices, the panda cannot be tracked with other choices either.

In examining the space of tracking and privacy stipulations, the existence of strategies is shown to be a function of the robot’s initial belief and the panda’s movement. There exist regions without any solution, where it is impossible for the robot to actively track the panda as well as protect its privacy for certain nontrivial tracking and privacy bounds. Additionally, we have uncovered regions where solution feasibility is sensitive to the robot’s initial belief, which we have called I-state dependent cases (or conditions). The simple one-dimensional setting also permits exploration of how circumstances change as the robot’s sensing power increases. Perhaps surprisingly, the number of these I-state dependent strategy conditions does not decrease as the robot’s sensing becomes more powerful. Finally, we connect the impossibility result back to O’Kane’s setting by mapping between high-dimensional and one-dimensional versions, proving that the 2D planar panda tracking problem does not have a privacy-preserving tracking strategy for every non-trivial tracking and privacy stipulation.

The results presented in this paper reveal some properties of a particular — and it must be said somewhat narrow — instance of an informationally constrained problem. Future work could explore what sensor properties permit such a task to be achieved, perhaps identifying families of sensors that suffice more broadly. Also, probabilistic models impose different belief representations and observation functions and it is worth exploring analogous notions in those settings. Another thread is to weaken the poacher’s capability, allowing the privacy bound to be relaxed somewhat. For example, in considering some latency in the poacher’s reaction to information that has been leaked, it may be acceptable for the privacy bound to be relaxed so that it need not apply from every time-step to the next.

    Acknowledgements

This work was supported by the NSF through awards IIS-1527436 and IIS-1453652. We thank the anonymous reviewers for their time and extraordinarily valuable comments, and are grateful to the conference committee for a student travel award.

    References

1. B. R. Donald, “On information invariants in robotics,” Artificial Intelligence — Special Volume on Computational Research on Interaction and Agency, Part 1, vol. 72, no. 1–2, pp. 217–304, 1995.
2. M. T. Mason, “Kicking the sensing habit,” AI Magazine, vol. 14, no. 1, pp. 58–59, 1993.
3. J. M. O’Kane, “On the value of ignorance: Balancing tracking and privacy using a two-bit sensor,” in Proc. Int. Workshop on the Algorithmic Foundations of Robotics (WAFR’08), Guanajuato, Mexico, 2008.
4. S. M. LaValle, “Sensing and filtering: A fresh perspective based on preimages and information spaces,” Foundations and Trends in Robotics, vol. 1, no. 4, pp. 253–372, 2010.
5. L. Sweeney, “k-anonymity: A model for protecting privacy,” Int. Journal on Uncertainty, Fuzziness and Knowledge-based Systems, vol. 10, no. 5, pp. 557–570, 2002.
6. C. Dwork, “Differential privacy: A survey of results,” in Proc. Int. Conference on Theory and Applications of Models of Computation. Springer, 2008, pp. 1–19.
7. I. Clarke, O. Sandberg, B. Wiley, and T. W. Hong, “Freenet: A distributed anonymous information storage and retrieval system,” in Int. Workshop on Designing Privacy Enhancing Technologies: Design Issues in Anonymity and Unobservability, 2001, pp. 46–66.
8. P. Kamat, Y. Zhang, W. Trappe, and C. Ozturk, “Enhancing source-location privacy in sensor network routing,” in Proc. IEEE Int. Conference on Distributed Computing Systems (ICDCS’05), Columbus, Ohio, USA, 2005, pp. 599–608.
9. P. Kamat, W. Xu, W. Trappe, and Y. Zhang, “Temporal privacy in wireless sensor networks,” in Proc. Int. Conference on Distributed Computing Systems (ICDCS’07), Toronto, Ontario, Canada, 2007, pp. 23–23.
10. J. O’Kane and D. Shell, “Automatic design of discreet discrete filters,” in Proc. IEEE Int. Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 2015, pp. 353–360.
11. Y.-C. Wu and S. Lafortune, “Synthesis of insertion functions for enforcement of opacity security properties,” Automatica, vol. 50, no. 5, pp. 1336–1348, 2014.
12. Y.-C. Wu, V. Raman, S. Lafortune, and S. A. Seshia, “Obfuscator synthesis for privacy and utility,” in NASA Formal Methods Symposium. Springer, 2016, pp. 133–149.
13. A. Prorok and V. Kumar, “A macroscopic privacy model for heterogeneous robot swarms,” in Proc. Int. Conference on Swarm Intelligence (ICSI). Springer, 2016, pp. 15–27.
14. D. Bhadauria, V. Isler, A. Studenski, and P. Tokekar, “A robotic sensor network for monitoring carp in Minnesota lakes,” in Proc. IEEE Int. Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 2010, pp. 3837–3842.
15. F. Fang, P. Stone, and M. Tambe, “When security games go green: Designing defender strategies to prevent poaching and illegal fishing,” in Proc. Int. Joint Conference on Artificial Intelligence (IJCAI), Buenos Aires, Argentina, 2015.
