In geometry, trilateration is the process of determining absolute or relative locations of points by measurement of distances, using the geometry of circles, spheres or triangles.
Suppose there are three or more anchors deployed at known coordinates $(x_i, y_i)$, where $i = 1, \dots, n$ and $n \ge 3$, and a tag is placed at unknown coordinates $(x, y)$. The distance $d_i$ between each anchor and the tag can be measured somehow, e.g., by time-of-arrival/flight (ToA/ToF) or a log-distance path loss model. In such a situation, can we estimate the coordinates of the tag? The problem can be stated as:
Find $(x, y)$ minimizing an error

$$E(x, y) = \sum_{i=1}^{n} \left( \sqrt{(x - x_i)^2 + (y - y_i)^2} - d_i \right)^2$$
Without errors, or with negligible errors, in the distance measurements, we can use either a geometric approach or a least-squares approach. If the errors are not negligible, however, such approaches produce wrong and inconsistent positioning results.
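As an aside, the least-squares approach mentioned above can be made concrete by linearizing the circle equations: subtracting the first equation from each of the others cancels the quadratic terms, leaving a linear system in the tag coordinates. Here is a minimal Python sketch of that idea (the function name and structure are my own, not from the post or its repository):

```python
def trilaterate_lls(anchors, dists):
    """Linearized least-squares trilateration.

    Subtracting the first circle equation from the others cancels the
    quadratic terms, leaving one linear equation per remaining anchor:
      2(x_i - x_1) x + 2(y_i - y_1) y
        = d_1^2 - d_i^2 + x_i^2 - x_1^2 + y_i^2 - y_1^2
    We then solve the 2x2 normal equations by Cramer's rule.
    """
    x1, y1 = anchors[0]
    d1 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append((2.0 * (xi - x1), 2.0 * (yi - y1)))
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # Normal equations (A^T A) p = A^T b for p = (x, y)
    s11 = sum(ax * ax for ax, _ in A)
    s12 = sum(ax * ay for ax, ay in A)
    s22 = sum(ay * ay for _, ay in A)
    t1 = sum(ax * bi for (ax, _), bi in zip(A, b))
    t2 = sum(ay * bi for (_, ay), bi in zip(A, b))
    det = s11 * s22 - s12 * s12   # zero when the anchors are collinear
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

With exact, noise-free distances this recovers the tag position exactly, which is precisely the regime where the linearized approach is reliable.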
Alternatively, we can use gradient descent to find the coordinates of the tag that minimize the error function. As the name suggests, this optimization algorithm finds a local optimum by descending along the gradient of the error function.
For brevity, let’s denote $\hat{d}_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$. Then the error function can be reformulated as $E(x, y) = \sum_{i=1}^{n} (\hat{d}_i - d_i)^2$. Its gradient is

$$\nabla E(x, y) = \sum_{i=1}^{n} \frac{2 (\hat{d}_i - d_i)}{\hat{d}_i} \left( (x - x_i)\,\mathbf{e}_x + (y - y_i)\,\mathbf{e}_y \right)$$

where $\mathbf{e}_x$ is a unit vector of the x axis and $\mathbf{e}_y$ of the y axis.
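To make the gradient concrete, here is a small Python sketch (the name `error_and_gradient` is my own) that evaluates the error and its gradient at a point:

```python
import math

def error_and_gradient(p, anchors, dists):
    """Evaluate the trilateration error E(x, y) and its gradient.

    anchors: list of known (x_i, y_i); dists: measured distances d_i.
    """
    x, y = p
    err, gx, gy = 0.0, 0.0, 0.0
    for (xi, yi), di in zip(anchors, dists):
        d_hat = math.hypot(x - xi, y - yi)   # estimated distance to anchor i
        r = d_hat - di                       # residual
        err += r * r
        # dE/dx += 2 r (x - x_i) / d_hat, and likewise for y
        gx += 2.0 * r * (x - xi) / d_hat
        gy += 2.0 * r * (y - yi) / d_hat
    return err, (gx, gy)
```

At the true tag position with error-free distances, every residual vanishes, so both the error and the gradient are (numerically) zero.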
Given random initial guess coordinates of the tag $(x^{(0)}, y^{(0)})$, we can now update the coordinates iteratively:

$$(x^{(k+1)}, y^{(k+1)}) = (x^{(k)}, y^{(k)}) - \alpha \nabla E(x^{(k)}, y^{(k)})$$

where $\alpha$ is the learning rate of gradient descent. By repeatedly updating the coordinates until the difference between the previous and the updated coordinates becomes smaller than a certain threshold, we can stop the iteration and obtain the estimated coordinates.
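The whole update loop might look like the following Python sketch (my own illustrative code, not the repository's implementation; the learning rate, threshold, and iteration cap are arbitrary choices):

```python
import math

def trilaterate_gd(anchors, dists, init, lr=0.01, tol=1e-9, max_iter=100000):
    """Estimate the tag position by gradient descent on the squared-residual error."""
    x, y = init
    for _ in range(max_iter):
        gx, gy = 0.0, 0.0
        for (xi, yi), di in zip(anchors, dists):
            d_hat = math.hypot(x - xi, y - yi)  # assumes the iterate never lands exactly on an anchor
            r = d_hat - di
            gx += 2.0 * r * (x - xi) / d_hat
            gy += 2.0 * r * (y - yi) / d_hat
        nx, ny = x - lr * gx, y - lr * gy       # descend along the gradient
        if math.hypot(nx - x, ny - y) < tol:    # coordinates barely moved: stop
            return nx, ny
        x, y = nx, ny
    return x, y
```

For example, with three anchors at (0, 0), (10, 0), (0, 10) and exact distances to a tag at (3, 4), starting from (1, 1) the loop converges to the true position.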
One thing to note is that, since the error function is not convex, gradient descent may converge to a wrong local optimum, producing an incorrect result, or fail to converge at all. To prevent this,
- We perform gradient descent multiple times with different random initial guess coordinates
- We stop gradient descent when it runs longer than a certain threshold
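The two safeguards above might be combined as in the sketch below (hypothetical names and arbitrary restart count; an iteration cap stands in for a wall-clock timeout):

```python
import math
import random

def descend(anchors, dists, start, lr=0.01, tol=1e-9, max_iter=20000):
    """One capped gradient-descent run; returns the estimate and its final error."""
    x, y = start
    for _ in range(max_iter):  # cap iterations instead of looping forever
        gx, gy = 0.0, 0.0
        for (xi, yi), di in zip(anchors, dists):
            d_hat = math.hypot(x - xi, y - yi) or 1e-12  # guard: start exactly on an anchor
            r = d_hat - di
            gx += 2.0 * r * (x - xi) / d_hat
            gy += 2.0 * r * (y - yi) / d_hat
        nx, ny = x - lr * gx, y - lr * gy
        if math.hypot(nx - x, ny - y) < tol:
            x, y = nx, ny
            break
        x, y = nx, ny
    err = sum((math.hypot(x - xi, y - yi) - di) ** 2
              for (xi, yi), di in zip(anchors, dists))
    return (x, y), err

def multi_start(anchors, dists, n_starts=20, box=15.0, seed=0):
    """Restart from several random guesses and keep the lowest-error estimate."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_starts):
        start = (rng.uniform(-box, box), rng.uniform(-box, box))
        est, err = descend(anchors, dists, start)
        if err < best_err:
            best, best_err = est, err
    return best, best_err
```

Keeping the run with the smallest final error makes the restart strategy self-checking: a run trapped in a bad local optimum reports a large residual error and is simply discarded.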
An implementation of trilateration using gradient descent can be found in my GitHub repository.