
Introduction

This is an explanation of how a distance-bounding protocol can be performed using the sending of bit sequences between a verifier, V, and a prover, P.

MathML browser support is required. Firefox 60.0 ESR should render MathML symbols correctly.

Background

Explanation, part 1: setup

I will attempt to explain my understanding of the Hancke-Kuhn distance-bounding protocol in my own words for a layman reader. If there are errors or misrepresentations then I am at fault.

Gerhard P. Hancke and Markus G. Kuhn proposed a distance-bounding protocol as a defense against man-in-the-middle attacks for people who use RFID tokens in order to automatically authenticate themselves for a location-based service such as the opening of a door or purchase at a specific point-of-sale device.

An example of a man-in-the-middle attack on such a building access-control system could be two attackers maliciously forwarding radio traffic between an RFID token and a building RFID reader without the RFID token owner's knowledge, even when the token is located at a great distance from the reader. The idea behind strengthening an RFID token against such an attack is to equip the building RFID reader with some means of proving that the token is physically located within a specific distance.

The goal of this project is to apply this concept to the ping time between two computers in order to prove how close the computers are to each other. A distance-bounding protocol proof uses the distance, speed, and time equation solved for distance.

distance = speed ⋅ time

The speed is set to the speed of light, since one conclusion from the theory of special relativity is that no information signal or material can travel faster than light in a vacuum. The time is set to half the ping time (the round-trip time divided by 2).

distance = speed of light ⋅ (ping time / 2)
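
For example, under these assumptions a measured ping time of 10 milliseconds bounds the separation to at most 299 792 458 m/s ⋅ (0.010 s / 2) ≈ 1 499 km; the true distance can only be smaller, never larger.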

In the protocol, a verifier, V, and a prover, P, create a pair of one-time-use pseudorandom bit sequences, R0 and R1, each containing n elements. Each element Ri0 or Ri1 is a bit whose value is either 0 or 1. These sequences can be represented like so:

R0 = R10 R20 R30 R40 R50 ⋯ Rn0

R1 = R11 R21 R31 R41 R51 ⋯ Rn1

Regarding these bit sequences, V rapidly asks P a stream of n questions. A question may take only one of two forms:

  1. What is the ith bit of R0, Ri0?

  2. What is the ith bit of R1, Ri1?

The stream of questions starts with i=1 and ends with i=n.

In order to decide which question V asks P, V generates a private random bit sequence, C, which consists of n elements. The rule V follows is that if Ci=0 then V requests that P supply Ri0. If Ci=1 then V requests that P supply Ri1. In other words, at each round, i, V randomly chooses which of the two questions to ask P.

After sending a question to P, V records the exact time and increments i by 1.

Because cause must precede effect, P cannot provide a correct answer to V until after P receives the question. Since the speed of light is the maximum rate at which any information can travel through space, there is a minimum ping time (or "time of flight") for any given distance between V and P which can be used by the protocol to prove an upper bound to the distance between V and P.

Immediately after receiving a question, P sends to V the value RiCi which is the requested bit from either R0 or R1. The set of these responses can be written as RCi.

Upon receiving each response, V records the exact time in order to calculate that particular question-response round-trip time (or "ping time").
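
To make the round structure concrete, here is a minimal, non-networked Python sketch of the bookkeeping described above. The function names, the use of the secrets module for random bits, and the use of time.perf_counter_ns for timestamps are my own illustrative choices; a real implementation would send the questions and responses over a network link.

    import secrets
    import time

    def make_sequences(n):
        # R0 and R1 are the shared one-time pseudorandom bit sequences;
        # C is V's private challenge sequence.
        r0 = [secrets.randbits(1) for _ in range(n)]
        r1 = [secrets.randbits(1) for _ in range(n)]
        c = [secrets.randbits(1) for _ in range(n)]
        return r0, r1, c

    def prover_answer(r0, r1, i, c_i):
        # P's response for round i: the i-th bit of R0 if Ci = 0, of R1 if Ci = 1.
        return r0[i] if c_i == 0 else r1[i]

    def run_rounds(n=16):
        # V asks n rapid questions, recording the round-trip time of each one.
        r0, r1, c = make_sequences(n)
        responses, round_trip_ns = [], []
        for i in range(n):
            t_sent = time.perf_counter_ns()          # V notes the send time
            answer = prover_answer(r0, r1, i, c[i])  # P replies immediately
            t_received = time.perf_counter_ns()      # V notes the arrival time
            responses.append(answer)
            round_trip_ns.append(t_received - t_sent)
        return responses, round_trip_ns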

Example 1: how the bit sequences are used

To help explain how this process works, below is an example that sets n=16 and walks you through how to calculate the response bit sequence, RCi.

  1. Verifier V and Prover P assemble and agree upon pseudorandom bit sequences R0 and R1

    • Ri0 = 0 1 0 0 1 0 1 1 1 0 1 1 0 0 1 0

    • Ri1 = 1 0 0 0 1 1 1 1 0 1 1 0 1 0 0 1

  2. Verifier V secretly produces a pseudorandom bit sequence Ci:

    • Ci = 0 0 0 0 1 0 1 1 0 0 0 1 1 1 0 1

  3. V sends each bit of Ci, one at a time, starting from i=1 until i=n. V notes the exact time when it sent each value of Ci.

  4. P receives and uses each bit of Ci to determine whether to immediately send the bit Ri0 or Ri1 to V in response. If all bits are received and sent without error, P will eventually have sent the set RCi.

  5. V receives and records the arrival time for each response bit, RiCi. V calculates the round-trip time for each round. The resulting values of RiCi are:

    • RiCi = 0 1 0 0 1 0 1 1 1 0 1 0 1 0 1 1

Below is a table illustrating how the example values for these bit sequences correlate. For each round i, the RiCi row shows whichever of Ri0 or Ri1 was sent by P in response to the value of Ci sent by V.

i    01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16
Ri0   0  1  0  0  1  0  1  1  1  0  1  1  0  0  1  0
Ri1   1  0  0  0  1  1  1  1  0  1  1  0  1  0  0  1
Ci    0  0  0  0  1  0  1  1  0  0  0  1  1  1  0  1
RiCi  0  1  0  0  1  0  1  1  1  0  1  0  1  0  1  1
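
As a quick check, the following few lines of Python (the variable names are mine) recompute the RiCi row of the table from the example values of R0, R1, and C given above.

    R0 = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0]
    R1 = [1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
    C  = [0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1]

    # Round i's response is the i-th bit of R0 when Ci = 0 and of R1 when Ci = 1.
    RC = [r0 if c == 0 else r1 for r0, r1, c in zip(R0, R1, C)]
    print(RC)  # [0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1]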

Explanation, part 2: False-Acceptance Rate

At each step V records the round-trip time between the sending of the question and the receiving of the correct answer from P. Given enough correct answers from P, V can then use the average round-trip time of correct responses, tm, in order to calculate with some statistical certainty that P is physically located within a distance, d. The distance, d, can be calculated using the following two equations (pg 68, Hancke 2005).

d = c ⋅ (tm − td) / 2

tm = 2 ⋅ tp + td

In the language of the Hancke paper, variables in the two equations are defined as:

c is the propagation speed, tp is the one way propagation time, tm is the measured total round-trip time, and td is the processing delay of the remote device.

A conservative practice defines td=0 for the processing delay variable. It is conservative because td is a function of the capabilities of the hardware P uses to process requests from V. If both P and V trust each other to use specific hardware with consistent and accurate estimates for response times, then td may be specified. However, the Hancke-Kuhn protocol does not provide a means of proving, or an incentive for, P to accurately measure and report its own hardware capability.

The highest possible propagation speed, c, according to the laws of physics is the speed of light in a vacuum. According to section 2.1.1.1 of the 8th edition of the International System of Units, a document published by the International Bureau of Weights and Measures, this speed is 299 792 458 m/s.
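
As a rough sketch of how V might apply these two equations in code (the function name and defaults are mine; the Hancke paper does not prescribe an implementation), using the conservative td = 0 discussed above:

    SPEED_OF_LIGHT = 299_792_458  # metres per second, per the SI brochure

    def distance_bound(t_m, t_d=0.0):
        # Upper bound on the distance to P: d = c * (tm - td) / 2.
        # Times are in seconds; td defaults to 0, the conservative choice
        # when P's processing delay cannot be trusted.
        return SPEED_OF_LIGHT * (t_m - t_d) / 2

    print(distance_bound(0.001))  # a 1 ms round trip bounds d to about 149,896 m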

The statistical certainty that the round-trip time between P and V is less than tm is 1−pFA where pFA is the "false-accept probability". The value of pFA must be a statistical estimate constrained by the possibility that prover, P, maliciously sends its best guesses before receiving the questions from V. If P dishonestly wishes to convince V that the distance is lower than it really is, then P can achieve a 3/4 probability of guessing correctly for a given round without having yet received that round's value of Ci. This is because, on average, half of the rounds do not require guessing at all since half the time Ri0=Ri1. The other half of the time P's best strategy, assuming V generated C securely, is to guess 1 or 0 at random.
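
Written out, P's per-round chance of answering correctly without waiting for Ci is (1/2 ⋅ 1) + (1/2 ⋅ 1/2) = 3/4: in the half of the rounds where Ri0=Ri1 the reply is automatically correct, and in the other half a random guess is correct half of the time.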

The false acceptance probability, or "False-Acceptance Rate", pFA, of V accepting the distance-bounding protocol proof of P can be calculated using the following equation found on the sixth page of the Hancke paper. This equation calculates pFA assuming V judges that receiving k or more correct responses out of n total rounds is acceptable.

pFA = Σ (i = k to n) (n choose i) ⋅ (3/4)^i ⋅ (1/4)^(n−i)

The equation states that pFA is equal to the sum of each individual probability where P guessed correctly k or more times (for example: one outcome exists where P guesses perfectly, some outcomes where P makes only one mistake, some outcomes where P makes two mistakes, etc.). The total number of terms in the sum is n−k+1.

In other words, the final term of the sum (i=n) is the probability that P guesses every single response correctly (one very rare possibility). The penultimate term (i=n−1) is the probability that P guesses correctly every time except for exactly one mistake somewhere (a slightly less rare possibility). The i=n−2 term is the probability that P makes exactly two errors somewhere, the i=n−3 term is the probability that P makes exactly three errors somewhere, and so forth. The first term of the sum (i=k) is the probability that P guesses correctly exactly k times out of n responses and therefore provides incorrect responses exactly n−k times. Each term of the sum is the binomial probability function (a.k.a. the "binomial distribution formula" or "probability mass function"), which should be part of the syllabus for a typical statistics course.

Since no factor of the equation for pFA can be made exactly equal to zero, it is impossible for Verifier V to completely eliminate the possibility that P could forge this distance-bounding proof. The best V can do to strengthen confidence in the proof's validity is to set the parameters k and n to values that produce an acceptably low value for pFA, the probability of falsely accepting a maliciously constructed proof by Prover P.
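
For readers who prefer code, a minimal Python sketch of this sum (the function name is my own) using math.comb for the binomial coefficient might look like this:

    from math import comb

    def false_accept_probability(n, k):
        # Probability that a prover who guesses every round (success
        # probability 3/4 per round) gets at least k of n rounds right.
        return sum(comb(n, i) * (3 / 4) ** i * (1 / 4) ** (n - i)
                   for i in range(k, n + 1))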

Example 2: Calculating False-Acceptance Rate

Below is a copy of the previous example table. Note the rounds in which Ri0=Ri1 (rounds 3, 4, 5, 7, 8, 11, and 14 in this example): for those rounds P does not need to wait for Ci at all. From inspection it should be clear that P does not have to guess in roughly half of the rounds, since about a quarter of the time Ri0=Ri1=0 and about a quarter of the time Ri0=Ri1=1.

i    01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16
Ri0   0  1  0  0  1  0  1  1  1  0  1  1  0  0  1  0
Ri1   1  0  0  0  1  1  1  1  0  1  1  0  1  0  0  1
Ci    0  0  0  0  1  0  1  1  0  0  0  1  1  1  0  1
RiCi  0  1  0  0  1  0  1  1  1  0  1  0  1  0  1  1

Side note: I believe the inefficiency of allowing the protocol to have instances where Ri0=Ri1 is due to Hancke and Kuhn designing the protocol to be simple, in order to accommodate implementation in RFID tags with limited computational ability and over noisy communication channels. The scope of this project is not to attempt to improve the protocol but simply to implement it as described in the Hancke paper.

In order to illustrate how the False-Acceptance Rate, pFA, is calculated, let us say that V was programmed to accept 14 or more correct responses out of 16 (k=14, n=16). In this case pFA could be calculated as described below.

The binomial coefficient factor in the pFA equation can be expanded out, with ! signifying the factorial operation (for example, 5!=5⋅4⋅3⋅2⋅1=120).

pFA = Σ (i = k to n) [ n! / (i! ⋅ (n−i)!) ] ⋅ (3/4)^i ⋅ (1/4)^(n−i)

The sum consists of a total of n−k+1 = 16−14+1 = 3 terms.

The last term (i=n=16) is:

16! / (16! ⋅ (16−16)!) ⋅ (3/4)^16 ⋅ (1/4)^(16−16) = 1.00226 ⋅ 10^−2

The penultimate term (i=15) is:

16! / (15! ⋅ (16−15)!) ⋅ (3/4)^15 ⋅ (1/4)^(16−15) = 5.34538 ⋅ 10^−2

The first term (i=k=14) is:

16! / (14! ⋅ (16−14)!) ⋅ (3/4)^14 ⋅ (1/4)^(16−14) = 1.33635 ⋅ 10^−1

The sum of these three terms is:

1.00226 ⋅ 10^−2 + 5.34538 ⋅ 10^−2 + 1.33635 ⋅ 10^−1 = 1.97111 ⋅ 10^−1

Therefore, the False-Acceptance Rate, pFA, can be written as:

pFA = Σ (i = k = 14 to n = 16) [ n! / (i! ⋅ (n−i)!) ] ⋅ (3/4)^i ⋅ (1/4)^(n−i)

    = 1.97111 ⋅ 10^−1

    = 19.7111 %

In other words, if V decides to accept only k=14 or more correct bits from P out of a possible n=16 bits in the bit sequences they exchange, then there is about a 19.7% chance that P could fool V into accepting that the distance between them was lower than it physically is. P could do this by completely disregarding V's questions, C, and sending only best guesses for the bit sequence RCi given the structure of R0 and R1.
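
For reference, evaluating the false_accept_probability sketch from part 2 with these parameters reproduces the hand calculation above:

    print(false_accept_probability(16, 14))  # 0.197111..., about 19.7%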