Press "Enter" to skip to content

Trilateration using Gradient Descent

In geometry, trilateration is the process of determining absolute or relative locations of points by measurement of distances, using the geometry of circles, spheres or triangles.

- Trilateration – Wikipedia

Suppose there are three or more anchors deployed at known coordinates (X_i, Y_i, Z_i), where i=1,2,3,\cdots, and a tag is placed at unknown coordinates (x, y, z). A distance r_i between each anchor and the tag can be measured somehow, e.g., by time-of-arrival/flight (ToA/ToF) or a log-distance path loss model. In such a situation, can we estimate the coordinates of the tag? The problem can be stated as:

Find x, y, z minimizing an error f(x, y, z) = \sum_{i=1}^N \left[r_i-\sqrt{(x-X_i)^2+(y-Y_i)^2+(z-Z_i)^2}\right]^2
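To make this concrete, here is a minimal sketch of the error function in Python with NumPy; the names (`error`, `anchors`, `distances`) are my own for illustration, not from any particular library:

```python
import numpy as np

def error(p, anchors, distances):
    """Sum of squared residuals f(x, y, z) for a candidate tag position p.

    p:         candidate tag coordinates, shape (3,)
    anchors:   known anchor coordinates (X_i, Y_i, Z_i), shape (N, 3)
    distances: measured distances r_i, shape (N,)
    """
    d = np.linalg.norm(anchors - p, axis=1)  # distance from p to each anchor
    return np.sum((distances - d) ** 2)
```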

Without errors or with negligible errors in the distance measurements, we can use either a geometric approach or a least-squares approach. If the errors are not negligible, however, such approaches produce wrong and inconsistent positioning results.

Alternatively, we can use gradient descent to find the coordinates of the tag that minimize the error function. As the name suggests, this optimization algorithm finds a local optimum by descending along the gradient of the error function.

For brevity, let’s denote d_i(x, y, z) = \sqrt{(x-X_i)^2+(y-Y_i)^2+(z-Z_i)^2}. Then the error function can be reformulated as f(x, y, z) = \sum_{i=1}^N \left[r_i-d_i(x,y,z)\right]^2. Its gradient is

\nabla f(x, y, z) = 2\sum_{i=1}^N \frac{d_i(x, y, z) - r_i}{d_i(x, y, z)}\left[(x-X_i)\hat{x}+(y-Y_i)\hat{y}+(z-Z_i)\hat{z}\right],

where \hat{x}, \hat{y}, and \hat{z} are the unit vectors along the x, y, and z axes.
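In code, the gradient translates directly. This sketch reuses the names from the snippet above; the division-by-zero guard is a safeguard I add for the case where a guess lands exactly on an anchor, not part of the formula itself:

```python
def gradient(p, anchors, distances):
    """Gradient of f at p, following the formula above."""
    diff = p - anchors                # rows are (x - X_i, y - Y_i, z - Z_i)
    d = np.linalg.norm(diff, axis=1)  # d_i(x, y, z)
    d = np.maximum(d, 1e-12)          # avoid division by zero at an anchor
    return 2 * ((d - distances) / d) @ diff
```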

Given randomly guessed initial coordinates of the tag (x_{prev}, y_{prev}, z_{prev}), we can update them to new coordinates (x_{next}, y_{next}, z_{next}):

\begin{bmatrix} x \\ y \\ z \end{bmatrix}_{next} =  \begin{bmatrix} x \\ y \\ z \end{bmatrix}_{prev}  -2\alpha\sum_{i=1}^N \frac{d_i(x_{prev}, y_{prev}, z_{prev}) - r_i}{d_i(x_{prev}, y_{prev}, z_{prev})}  \begin{bmatrix}  x_{prev} - X_i \\  y_{prev} - Y_i \\  z_{prev} - Z_i  \end{bmatrix},

where \alpha is the learning rate of gradient descent. By repeatedly updating the coordinates until the difference between the previous and the next coordinates becomes smaller than a certain threshold, we can stop the iteration and obtain the estimated coordinates.
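Putting the update rule into a loop might look like the following sketch; `alpha`, `tol`, and `max_iters` are parameters I chose for illustration, with `max_iters` capping the work in case the iteration never settles:

```python
def descend(p0, anchors, distances, alpha=0.01, tol=1e-6, max_iters=10_000):
    """Repeat the update rule until successive estimates barely move."""
    p = p0
    for _ in range(max_iters):
        p_next = p - alpha * gradient(p, anchors, distances)
        if np.linalg.norm(p_next - p) < tol:  # stopping threshold
            return p_next
        p = p_next
    return p  # give up after max_iters updates
```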

One thing to note is that, since the error function is not convex, gradient descent may fall into a wrong region, producing an incorrect result, or fail to converge at all. To prevent this, we take two precautions (both are sketched in the code below):

  1. We perform gradient descent multiple times with different initial random guess coordinates
  2. We stop gradient descent when it runs for longer than a certain threshold
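Here is one way to sketch both precautions, with the iteration cap in `descend` standing in for the time threshold: run several descents from random starting points and keep the candidate with the smallest error. Drawing the initial guesses from the anchors' bounding box and using ten restarts are my own choices, not prescribed by the method:

```python
def trilaterate(anchors, distances, n_restarts=10, seed=0, **kwargs):
    """Gradient descent with random restarts; returns the best estimate."""
    rng = np.random.default_rng(seed)
    lo, hi = anchors.min(axis=0), anchors.max(axis=0)  # search box
    candidates = [descend(rng.uniform(lo, hi), anchors, distances, **kwargs)
                  for _ in range(n_restarts)]
    return min(candidates, key=lambda p: error(p, anchors, distances))
```

For example, with four anchors at the corners of a room and moderately noisy distance measurements, `trilaterate(anchors, distances)` should typically return coordinates close to the true tag position.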

An implementation of trilateration using gradient descent can be found in my GitHub repository.