Article

Passive Measurement of Three Optical Beacon Coordinates Using a Simultaneous Method

Department of Aviation Technology, Faculty of Military Technology, University of Defence, 66210 Brno, Czech Republic
* Author to whom correspondence should be addressed.
Sensors 2021, 21(15), 5235; https://doi.org/10.3390/s21155235
Submission received: 4 July 2021 / Revised: 23 July 2021 / Accepted: 29 July 2021 / Published: 2 August 2021
(This article belongs to the Section Navigation and Positioning)

Abstract

Among other approaches, passive methods based on processing images of feature points or beacons captured by an image sensor are used to measure the relative position of objects. At least two cameras usually have to be used to obtain the required information, or a single camera is combined with other sensors working on different physical principles. This paper describes the principle of passively measuring three position coordinates of an optical beacon using a simultaneous method and presents the results of the corresponding experimental tests. The beacon is an artificial geometric structure consisting of several semiconductor light sources. The sources are arranged so that the distance and two position angles, the azimuth and the elevation of the beacon, can be measured passively with a single camera. The mathematical model of the method consists of working equations containing the measured coordinates, the geometric parameters of the beacon, and the geometric parameters of the beacon image captured by the camera. The results of all the experimental tests are presented.

1. Introduction

The article describes how to measure the position of an artificially created object, an optical beacon (hereinafter beacon), relative to one measuring camera (hereinafter camera). The simultaneous analytical method presented below is one of the passive methods for measuring an object's relative position.

1.1. Problem Statement

The measurement of object position is an important problem that is solved in numerous areas of human activities. Many different instruments and methods have been developed for outdoor and indoor positioning.
Passive methods using images of some objects of interest constitute one group of methods of measuring the relative position of objects. Some analytical methods are theoretically developed, experimentally verified, and practically used as well [1,2,3,4,5,6]. There are some studies that use stereo vision for measuring position [7,8] or a combination of one camera and another sensor such as a sonar or laser range finder [9,10]. Cameras are also used along with inertial navigation systems [7,11]. This task can be solved using neural networks as well [12,13].
The proposed passive method does not require complex and extensive instrumentation; it needs one camera as the only sensor. On the other hand, a necessary condition for the application of this method is the use of a specific system of feature points, which have to be defined on a surface of the object of interest. The feature points do not have to transmit a specific optical signal. They can form an artificial beacon, which serves as a reference object for a measuring unit equipped with an image sensor. The paper presents a positioning system of this type. Due to the design of the beacon, the spans of the measured angular coordinates are constrained. However, the beacon layout can be modified so that the span of measurable position angles is significantly extended.

1.2. Literature Review

A number of different techniques have been developed for indoor positioning systems. Some of these methods are based on evaluating either the time of arrival (TOA) or the time difference of arrival (TDOA) of an optical signal at the receiver; others are based on the received signal strength (RSS), which depends on the receiver position [14,15,16,17]; and there are also methods that determine the receiver position from the angle of arrival (AOA) [18,19,20,21,22,23,24]. Time methods use the travel time of a modulated signal between several reference sources and a receiver. Certain angle methods use the dependence of the received signal strength on the incidence angle and on the angle between the source normal and the direction to the receiver, provided the spatial irradiance pattern of the source, e.g., a light-emitting diode (LED), is known [25]. Position angles can also be determined without knowledge of the spatial radiation distribution [16]. Other methods utilize the dependence of the positions or sizes of individual source images in the plane of the image sensor [22,26]. The AOA method is also used in a quadrant angular diversity aperture (QADA) receiver, which is equipped with a quadrant photodiode (PD) and an aperture shifted from the PD plane by a small distance. The QADA receiver can be combined with an image sensor [27].
Measuring systems consist of one or several radiation sources and receivers. The radiation of the fixed reference sources located in a room, for example on the ceiling, can be modulated in accordance with the applied method. The coordinates of the sources have to be known. The predominant, but not the only, type of radiation source is the semiconductor LED. Photodiodes [14,16], image sensors [18,19,20,21], or both together are used to detect radiation on the receiver side. Depending on the method, optoelectronic receivers can be combined with magnetic field sensors [23], accelerometers [14,16], gyroscopes, or inertial navigation systems [24]. Active optoelectronic sensors, e.g., lidars, are utilized too [22]. Utilization of the scale-invariant feature transform (SIFT) method is described in [18]. Indoor positioning systems also use artificial neural networks (see [18,24,28,29]). Feature points can have an arbitrary nature; in order to be identifiable, they need to be sufficiently contrasting. No specific radiation pattern needs to be assumed to determine the beacon position. The measurement error of the position coordinates is on the order of centimeters when the radiation sources and the receivers are at distances on the order of meters.

1.3. Principle of the Simultaneous Analytical Method

The solution presented in this article is based on the utilization of one camera. We use the specific object, beacon, and the simultaneous measurement of individual quantities. This method provides the possibility of measuring three positional coordinates if the transverse axis of the beacon and the transverse axis of the camera lie in parallel planes. The diagram of the beacon made for experimental test purposes is shown in Figure 1 [30,31], where Af is the beacon front wall; Aslp and Asrp are projections of the left and right beacon side walls, respectively; b, d16, α1, and β are parameters of the beacon; CB is the center of the beacon; S1 to S9 are light sources (S1 is the reference light source); S1xByBzB is the beacon coordinate system; and ρh and ρv are the horizontal and vertical planes, respectively.
The beacon enables satisfactory measurement of the distance between the beacon and the camera R (hereinafter beacon distance), the beacon position angle in the horizontal plane ω (hereinafter azimuth), and the beacon elevation angle ψ (hereinafter elevation). The angles are created by rotating the beacon around its transverse axes zρωψ0 and yB, respectively (see Figure 2), where CxCyCzC is the camera coordinate system, S1xωyρωψ0zρωψ0 is the reference coordinate system, and γr is the mutual tilt between the camera and the beacon. Nine semiconductor light sources of the LED type (hereinafter diodes) represent the sources of the optical signals captured by the camera. From the mutual position of the diodes on the beacon and the mutual position of their images in the plane of the camera sensor, the desired quantities can then be determined. The diodes form three square walls of the beacon with side b: a front wall Af and two side walls, left Asl and right Asr. The main parameters of the beacon are the base b and the beacon opening angle β. The side wall projection sizes Aslp and Asrp depend on these parameters.
The positions of the beacon diode images were evaluated manually. The position coordinates were calculated and the measurement results were processed on a personal computer. At present, real-time practical use of this method is not possible. To achieve it, a purpose-built automatic measuring device would have to be made, along with software similar to that described in [32], where a smaller version of the optical beacon is used. A basic unsolved task is the measurement of the real range of the system, i.e., the range over which the position coordinates remain within the limits determined by the permissible errors.
The aim of the paper is to explain the method principle and to present the test results obtained during the experiments verifying the suitability of the simultaneous analytical method for measuring the beacon distance, azimuth, and elevation. The rest of the paper is organized as follows: The simultaneous analytical method is described in Section 2. Section 3 provides experimental test results. Sources of errors are characterized in Section 4. Section 5 details ways of mathematical model adjustment. Section 6 presents the conclusion.

2. Description of the Simultaneous Analytical Method

The simultaneous analytical method for measuring two position coordinates was published in [30,31]. In these publications, we use working equations enabling measurement of the beacon distance R and the azimuth ω. These equations represent a mathematical model of this method. The principle of the method consists in calculating several so-called functional beacon distances (hereinafter functional distances) from the known distances between two selected diodes and the measured distances between the images of these diodes captured by the camera. Measuring two coordinates, as presented in [30,31], enables the determination of the position of an object only in the 2D plane. The only position coordinate on which the working equations then depend is the azimuth ω. Functional distances were calculated for the so-called functional diodes S2, S3, S6, and S8 according to projections onto the yρωψ0 axis. The problem can be solved using only the functional distances for the diodes S6 and S8. When measuring the beacon position in the plane, the functional distance equation for the diode S8, for example, is as follows [30,31]:
R_{81} = \frac{f\, d_{16} \cos(\alpha_1 - \omega)}{b_{81}} - d_{16} \sin(\alpha_1 - \omega),   (1)
where b81 is the distance between the image of the S1 reference diode and the image of the S8 functional diode in the plane of the camera detector and d16 and α1 are the parameters of the beacon (see Figure 1).
For the measurement of the position of an object in 3D, the mathematical model had to be extended by at least one functional distance of one of the functional diodes S4, S5, S7, and S9, which lie on the lower line of the beacon. The working equations also had to be adjusted to include projections onto the zρωψ0 axis, where they exist, which depend on both the azimuth ω and the elevation ψ. At least three functional distances have to be used as working equations. The equations have to be chosen so that, for at least one pair of functional diodes, the two position angles manifest themselves in opposite ways. The chosen calculation uses parallel projections of the distances between the reference diode S1 and all eight functional diodes S2 to S9 onto the appropriate axes of the reference rectangular coordinate system. Its axis xω is identical to the optical axis of the camera xC. Then, for one particular functional diode, its functional distance for the given azimuth and elevation is the calculated distance between the reference diode and the camera. The working equations are derived from the lens equations of the projections of the distances between the relevant diodes.
The angles ω and ψ are unknown variables that are computed as the numbers for which the deviation between the beacon functional distances, for the individual functional diodes, is a minimum. In other words, we are looking for the minimum of the root mean square (rms) of the difference Drms between the individual functional distances and the mean of these functional distances. When the minimum is reached, the set azimuth, the set elevation, and the mean of the calculated functional distances are equal to the desired measured variables ωm, ψm, and Rm, respectively. The following formulas, i.e., the working equations for the functional diodes S3, S5, S8, and S9 represent the mathematical model of this method:
R_{31} = \frac{f\, P_{S3\rho\omega\psi 0}}{b_{31y}} - P_{S3x\omega},   (2)
where PS3xω = 0.5b sin ω and PS3ρωψ0 = 0.5b cos ω;
R_{51} = \frac{f \sqrt{P_{S5\rho\omega\psi 0y}^{2} + P_{S5\rho\omega\psi 0z}^{2}}}{\sqrt{b_{51y}^{2} + b_{51z}^{2}}} - P_{S5x\omega},   (3)
where PS5xω = 0.5b sin ω + δxψ5, PS5ρωψ0y = 0.5b cos ω + δyψ5, PS5ρωψ0z = b cos ψ, δxψ5 = b sin ψ cos ω, and δyψ5 = b sin ψ sin ω;
R_{81} = \frac{f \sqrt{P_{S8\rho\omega\psi 0y}^{2} + P_{S8\rho\omega\psi 0z}^{2}}}{\sqrt{b_{81y}^{2} + b_{81z}^{2}}} - P_{S8x\omega},   (4)
where PS8xω = d16[sin(α1 − ω) − sin α1 cos ω (1 − cos ψ)], PS8ρωψ0y = d16[cos(α1 − ω) − sin α1 sin ω (1 − cos ψ)], and PS8ρωψ0z = d16 sin α1 sin ψ;
R_{91} = \frac{f \sqrt{P_{S9\rho\omega\psi 0y}^{2} + P_{S9\rho\omega\psi 0z}^{2}}}{\sqrt{b_{91y}^{2} + b_{91z}^{2}}} - P_{S9x\omega},   (5)
where PS9xω = d16 sin(α1 − ω) + δxψ9, PS9ρωψ0y = d16 cos(α1 − ω) + δyψ9, PS9ρωψ0z = d17ρv cos(α2ρv + ψ), δxψ9 = (aψ9 − a0) cos ω, δyψ9 = (aψ9 − a0) sin ω, aψ9 = d17ρv sin(α2ρv + ψ), a0 = d16 sin α1, d17ρv = √(b² + (d16 sin α1)²), and α2ρv = arctan(d16 sin α1/b).
The above equations were derived assuming that the yB beacon axis and the yC camera axis are parallel to the horizontal plane ρh (see Figure 2). Their mutual roll angle γr, which is created by rotating the beacon or the camera around the camera optical axis, was considered to be zero. The variables expressed by (2) to (5) are the functional distances Ri1 for the diodes Si, where i = 3, 5, 8, 9, and the reference diode S1. They are the beacon distances computed from the projections of the distances PSi (between the reference diode S1 and the functional diodes) onto the individual axes of the reference rectangular coordinate system and from the projections of the image distances bi1 of these diodes onto the transverse axes yC and zC of the camera coordinate system CxCyCzC. The reference coordinate system is parallel with the camera coordinate system. The axis xω of the reference coordinate system and the axis xC of the camera coordinate system are identical with the camera optical axis. The origin of the reference system is at the intersection of the axis xω and the beacon front wall Af (at the position of the reference diode S1); the axes yρωψ0 and zρωψ0 lie in the plane ρωψ0, which is identical with the beacon front wall Af for ω = ψ = 0°. In this case, yρωψ0 ≡ yB and zρωψ0 ≡ zB. The azimuth and the elevation are formed by the beacon rotation around the axes zρωψ0 and yB, respectively. The parameters d16 and α1 are derived from the beacon base b and the beacon opening angle β (see Figure 1). The working equations for the rest of the functional diodes are derived analogously. The elements PSixω are corrections of the deviation between the object distances of the functional diodes and the measured beacon distance.
For all the functional diodes, the rms of the functional distance differences Drms (m) is as follows:
D_{\mathrm{rms}} = \sqrt{\frac{1}{8} \sum_{i=2}^{9} D_{i1}^{2}},   (6)
where Di1 (m) is the functional distance difference for the diode pair Si and S1. It is given by the formula
D_{i1} = R_{i1} - R_{M},   (7)
where RM (m) is the mean of the calculated functional distances. This mean is expressed by the following formula:
R_{M} = \frac{1}{8} \sum_{i=2}^{9} R_{i1}.   (8)
Changing the azimuth and the elevation in the working equations of the mathematical model leads to changes in the mean of the calculated distances and the rms of the distance differences. Assuming that the input position angles are equal to the true azimuth and the elevation, the beacon parameters are set exactly according to the selected values, and the camera has a high resolution, we could theoretically expect the rms of the distance differences to be practically zero.
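To make the search procedure concrete, the following Python fragment is a minimal sketch of the described computation, not the authors' implementation. It evaluates the working Equations (2)–(5) for the four listed functional diodes only (the full model uses all eight functional diodes of Equations (6)–(8)), assumes that the image projections bi1y and bi1z have already been extracted from the photograph and converted to metres, and uses placeholder values for d16 and α1, which in the real system are derived from the beacon base b and the opening angle β. The position angles are scanned on a grid, and the pair minimising Drms is returned together with the mean functional distance.

```python
import numpy as np
from itertools import product

# Illustrative parameter values only: f and b follow the first experiment, while D16 and
# ALPHA1 are placeholders (the paper derives them from the base b and opening angle beta).
F_CAM = 0.120               # camera focal length (m)
B = 0.470                   # beacon base (m)
D16 = 0.55                  # placeholder distance between diodes S1 and S6 (m)
ALPHA1 = np.deg2rad(40.0)   # placeholder beacon angle alpha1 (rad)

def functional_distances(omega, psi, img, f=F_CAM, b=B, d16=D16, alpha1=ALPHA1):
    """Functional distances R_i1 of Eqs. (2)-(5) for diodes S3, S5, S8 and S9.

    img maps a diode label to its image-distance projections (b_i1y, b_i1z) in metres
    in the detector plane (pixel coordinates of the diode images times the pixel pitch).
    """
    so, co, sp, cp = np.sin(omega), np.cos(omega), np.sin(psi), np.cos(psi)
    sa = np.sin(alpha1)
    R = []

    # S3, Eq. (2): transverse projection onto y only
    R.append(f * (0.5 * b * co) / img["S3"][0] - 0.5 * b * so)

    # S5, Eq. (3)
    P5x = 0.5 * b * so + b * sp * co
    P5y = 0.5 * b * co + b * sp * so
    P5z = b * cp
    R.append(f * np.hypot(P5y, P5z) / np.hypot(*img["S5"]) - P5x)

    # S8, Eq. (4)
    P8x = d16 * (np.sin(alpha1 - omega) - sa * co * (1.0 - cp))
    P8y = d16 * (np.cos(alpha1 - omega) - sa * so * (1.0 - cp))
    P8z = d16 * sa * sp
    R.append(f * np.hypot(P8y, P8z) / np.hypot(*img["S8"]) - P8x)

    # S9, Eq. (5)
    d17 = np.hypot(b, d16 * sa)
    a2 = np.arctan2(d16 * sa, b)
    a_psi, a0 = d17 * np.sin(a2 + psi), d16 * sa
    P9x = d16 * np.sin(alpha1 - omega) + (a_psi - a0) * co
    P9y = d16 * np.cos(alpha1 - omega) + (a_psi - a0) * so
    P9z = d17 * np.cos(a2 + psi)
    R.append(f * np.hypot(P9y, P9z) / np.hypot(*img["S9"]) - P9x)

    return np.array(R)

def measure(img, f=F_CAM, step_deg=0.2):
    """Grid search for the (omega, psi) pair minimising D_rms.

    Returns (R_m, omega_m, psi_m) with the distance in metres and both angles in degrees.
    A coarse-to-fine search would be used in practice; a plain grid keeps the sketch short.
    """
    best = (np.inf, None, None, None)
    for om_deg, ps_deg in product(np.arange(-50.0, 50.0, step_deg),
                                  np.arange(0.0, 40.0, step_deg)):
        R = functional_distances(np.deg2rad(om_deg), np.deg2rad(ps_deg), img, f=f)
        d_rms = np.sqrt(np.mean((R - R.mean()) ** 2))   # Eqs. (6)-(8) over the diodes used here
        if d_rms < best[0]:
            best = (d_rms, R.mean(), om_deg, ps_deg)
    return best[1], best[2], best[3]
```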

3. Experimental Test Results

Three basic experimental tests and several check tests were performed. Their aim was to verify the functionality of the created mathematical model and the suitability of the simultaneous analytical method for measuring the optical beacon distance, azimuth, and elevation. The beacon base b and its opening angle β were equal to 470 mm and 56.2°, respectively (see Figure 1). The beacon was placed on a positioning mechanism, which was mounted on a Thorlabs RBB12A rotation stage. A MOTICAM 1080 camera was used with two lenses of different focal lengths. Figure 3 shows the beacon in the nominal position of ωn = 20° and ψn = 35° during a check test.
The rotation stage was used to set the actual beacon azimuth. The conventionally true azimuth ω0 was measured using the scale of the rotation stage with an error of 2.5′. Nominal azimuth values ωn were selected, around which the actual azimuth was set. The positioning mechanism was used to set the beacon elevation to nominal discrete values ψn in the range from 0 to 35° with a step of 5°. The conventionally true elevation values ψ0 were measured using a Fortum model 4780200 inclinometer with an error of ±0.1°. The conventionally true beacon distances R0 were measured using a Leica Disto D510 laser distance meter with an error of 1 mm. The nominal azimuth ωn and elevation ψn were introduced to label the groups of conventionally true position angles, since the azimuth was set randomly and, due to the relatively loose beacon fixation, the actual beacon elevation differed from the elevation set by the positioning mechanism.
The first experiment was performed using a lens with a focal length fL = 120 mm. The beacon distance R0 was 46,520 mm. The nominal azimuths ωn (°) and elevations ψn (°) were {−3, 0, 3, 10, 20, 35, 46} and {0, 5, 20, 35}, respectively. For each nominal elevation, the azimuths were set around their nominal values and the beacon image was recorded. Five test series were performed for every elevation. The obtained results were used for statistical processing of the errors. The second and third tests were performed with the same beacon but with a lens with a focal length fL = 25 mm. The beacon distances were 13,460 and 46,728 mm, respectively. The setup of these tests was the same as for the first test. One hundred forty photos were taken for each distance and focal length; in total, 420 photos were taken. Table 1 shows examples of the distance Rm, azimuth ωm, and elevation ψm measured during the first experiment in the first and second test series for the nominal elevation of 20°. For comparison, the conventionally true values R0, ω0, and ψ0 are also shown. Figure 4, Figure 5 and Figure 6 show the distance percentage errors δR and the position angle errors Δω and Δψ for the same test in all five test series.
The distance measurements from the first test were as follows: For the nominal elevation ψn = 0°, the mean of the measured distances R ¯ was 46,675 mm. For the nominal elevations of 5, 20, and 35°, the means of the measured distances were 46,668, 46,620, and 46,643 mm, respectively. Corresponding sample standard deviations sR were 100, 127, 197, and 355 mm, respectively.
Results from the second experiment were as follows: For ψn (°) ∈ {0, 5, 20, 35}, the means of the measured distances were 13,513, 13,550, 13,518, and 13,499 mm. The corresponding sample standard deviations sR (mm) ∈ {45, 45, 64, 105}.
From the distances measured in the third experiment, for the same nominal elevations as above, we obtained R̄ (mm) ∈ {47,000, 46,933, 46,933, 46,925} and sR (mm) ∈ {189, 129, 220, 374}. The presented values were determined from the test results of all five series.
The means and sample standard deviations of the distance percentage errors, as well as of the azimuth and elevation errors, obtained from the results of all three experiments, are listed in Table 2, Table 3 and Table 4. The measurement error frequency of these individual quantities is expressed as a percentage of the total number of measurements for the individual nominal elevations. The intervals of the absolute distance percentage errors |δR| are 0.0 to 0.1%, 0.0 to 0.5%, and 0.0 to 1.0%. The intervals of the absolute azimuth errors |Δω| and absolute elevation errors |Δψ| are 0.0 to 0.5°, 0.0 to 1.0°, and 0.0 to 2.0°.
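As an illustration of how the frequencies in Table 2, Table 3 and Table 4 are formed, the following small Python helper (a sketch, not the processing code actually used) counts the percentage of measurements whose absolute error falls within each interval bound:

```python
import numpy as np

def error_frequency(errors, bounds):
    """Percentage of measurements with |error| within each bound, reported as 'a/b/c'."""
    errors = np.abs(np.asarray(errors, dtype=float))
    return "/".join(f"{100.0 * np.mean(errors <= b):.0f}" for b in bounds)

# Example with made-up distance percentage errors and the bounds used for |dR|:
print(error_frequency([-0.31, 0.05, -0.62, 0.08, -0.45], (0.1, 0.5, 1.0)))  # "40/80/100"
```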
The measured azimuth and elevation means and their sample standard deviations are not presented as their conventionally true values mostly differed from the nominal values and were not the same even in the individual test series.
For the azimuth, the error frequency was determined from the deviations between the individual tests and the mean error of the particular test series. The reason for using these deviations was the random default position of the beacon, in azimuth, relative to the rotation stage and to the support base. These random positions manifested themselves as a component of the systematic errors. The default position differed for the individual elevations as a result of the necessary manipulation with the beacon. Thus, the relatively large nonzero mean errors were caused mainly by the unspecified default position of the beacon.
The elevation error frequency was evaluated analogously to the distance errors, i.e., from the differences between the conventionally true and the measured values. The unspecified beacon elevation did not, in fact, shift the measurement errors far outside the intervals selected for the frequency evaluation.

4. Sources of Errors

The measurement errors of the simultaneous analytical method had the following basic causes:
  • Differences between the actual beacon and camera parameters and the parameters that were entered into the mathematical model;
  • Inaccuracies in determining the diode picture coordinates;
  • Aberration of the camera lens;
  • Inaccuracies in determining the beacon and camera mutual position.

4.1. Differences between the Actual Beacon and Camera Parameters and the Mathematical Model Parameters

The beacon base, the opening angle, and the camera focal length are all used in the beacon mathematical model. If the actual values differ from the values entered into the model, methodological measurement errors occur. In general, these errors occur in all three measured coordinates. This fact worsens the measurement accuracy; on the other hand, it makes it possible to adjust the measuring system by finding the model beacon parameters for which the accuracy indicator is the best. By modifying the mathematical model parameters, all the measured quantities can be affected. Manipulation of the camera lens focal length in the mathematical model is especially important: it enables the accuracy of the beacon distance measurement to be optimized while leaving the measured beacon angles unaffected.

4.2. Inaccuracies in Determining the Diode Picture Coordinates

The influence of the pixel number error (pixel coordinate error) was tested on the mathematical model, based on the fifth series of the first experiment results, with the beacon elevation of 5°. The azimuth of 10° was selected.
For one beacon diode, the number of pixels of the yC coordinate was increased by one or by two. Subsequently, the corresponding distance, azimuth, and elevation were determined. The influence of the pixel coordinates was tested for all nine diodes; in each case, only the yC pixel coordinate of a single diode was changed. The obtained results are provided in Table 5.
The changes of the position angles Δωp1, Δψp1 and Δωp2, Δψp2 are the differences between the original azimuth and elevation and the new azimuth and elevation for the increased pixel coordinates yC0 + 1 and yC0 + 2, respectively, where yC0 is the original pixel coordinate. A change of the pixel coordinate in just one direction can affect all three measured quantities. The distance deviations are not presented in Table 5 as they were only on the order of hundredths of a percent; the maximum change of the distance error was about 0.15%.
It is clear from the table that most deviations of both angles were on the order of tenths of a degree. Their maximum magnitude was 0.3° for the change by one pixel and 0.5° for the change by two pixels. In some cases the deviations were zero, not only when the pixel coordinate was increased by one pixel but also when it was increased by two pixels. The nonzero deviations for two pixels were in most cases larger than the deviations for one pixel; they were never smaller.
When the pixel coordinates of several diodes were changed, the measured position angles changed by more than one degree. This was verified by having the pixel coordinates assessed by two persons independently of each other. The deviations of the determined pixel numbers were observable in both the yC and the zC coordinates. The biggest difference found between the measured position angles was 1.5°. It is clear that the method is sensitive to the accuracy of determining the pixel coordinates.
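The sensitivity check just described can be reproduced with a few lines built on top of the Section 2 sketch. This is again only an illustration under assumptions: solve(img) stands for the hypothetical grid-search solver defined there (returning distance, azimuth, and elevation), img maps diode labels to [bi1y, bi1z] image projections in metres, and the pixel pitch value is a placeholder rather than the documented MOTICAM 1080 figure.

```python
PIXEL_PITCH = 3.45e-6   # placeholder detector pixel size (m)

def pixel_sensitivity(img, diode, solve, n_pixels=1, pitch=PIXEL_PITCH):
    """Change of the measured azimuth and elevation when the y_C coordinate of one
    diode image is shifted by n_pixels."""
    _, om0, ps0 = solve(img)                        # reference solution
    perturbed = {k: list(v) for k, v in img.items()}
    perturbed[diode][0] += n_pixels * pitch         # shift only the y_C projection of one diode
    _, om1, ps1 = solve(perturbed)
    return om1 - om0, ps1 - ps0                     # azimuth and elevation changes (degrees)
```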

4.3. Lens Aberration

In general, camera lens aberrations can be the cause of measurement errors. The distortion of all the used lenses was experimentally verified. The distortions were not measurable. Other lens aberrations were compensated manually. For these reasons, the influence of the lens aberrations on the measurement accuracy was not considered.

4.4. Inaccuracies in Determining Mutual Position of the Beacon and the Camera

The azimuth measurement results were burdened with some systematic errors due to the fact that some components of the azimuth were not determined and included in the conventionally true values. These components were the beacon rotation relative to the rotation stage movable disk and the supporting base rotation that the rotation stage was placed on. Another factor that could adversely affect the measurement accuracy of both position angles was the mutual tilt of the camera and the beacon, i.e., a mutual roll of these objects around the camera’s optical axis (see Figure 2). Two check measurements were performed with the aim of quantitatively assessing the mentioned inaccuracies.
The first check measurement helped to assess the influence of the undetermined components of the azimuth. The beacon was set with a maximum azimuth deviation of approximately 1.0° towards the rotation stage movable disk. The measurements were performed for two nominal elevations of 0 and 35°. For each elevation, the supporting base was set for initial azimuths of −6.2, 0, and 6.2°. For each of these azimuths, several conventionally true azimuths were set on the rotation stage. The resulting absolute azimuth deviations, relative to the initial azimuths of the supporting base, did not exceed 0.9 and 0.6° for the nominal elevations of 0 and 35°, respectively. The measurement errors were comparable to the errors obtained from the basic experiments. It is clear that for practical use of this method, a firm fixation between the observed object and the measuring pattern has to be ensured.
The influence of the mutual tilt between the camera and the beacon was also measured at nominal elevations of 0 and 35°. For each elevation, the mutual tilt γr was set at −1.1, 0.0, and 0.9°. The error spans and the maximum measurement errors are provided in Table 6. It shows that the mutual tilt between the camera and the beacon influences the error magnitude; however, for the selected relatively small tilts, it manifested itself only as a small worsening of accuracy. In some cases, the errors were even smaller for a nonzero tilt than for the zero tilt. This phenomenon can be explained by the difference between the beacon parameters used in the mathematical model and the actual beacon parameters.

4.5. Influence of the Mathematical Model

The errors resulting from inaccuracies in determining the pixel coordinates, as well as from the beacon parameter inaccuracies, are also influenced by the character of the mathematical model, which is formed by a system of working equations. The core of the equations is represented by the ratios of the projections of the distances between the reference diode and the functional diodes to the corresponding projections of the distances between the diode images. The object distances are expressed by trigonometric functions of the azimuth and the elevation. Both the azimuth and the elevation are independent variables that are systematically changed until the required solution, based on the above-mentioned rule, is found (see Section 2). The coefficients and the initial angles in the working equations are taken from the beacon parameters. The differences between the parameters used in the mathematical model and the actual beacon parameters contribute to the systematic errors. These differences burden the measurement accuracy of the individual functional diodes to different extents, depending on the actual position angles, and thus cause various functional distance differences for the individual functional diodes.
Sizes of images are determined from the pixel coordinates of the corresponding diodes. Errors of the pixel coordinate determination play a significant role in the measurement errors of all the position coordinates. Minimum systematic errors are given by the size of one pixel as it determines elementary measurement uncertainty that cannot be reduced. A manifestation of this uncertainty depends on the picture size, beacon parameters, and measured position angles.
This fact is illustrated in Figure 7a and Figure 8a. They show the dependence of the functional distances, for the selected functional diodes, on the elevation entered into the mathematical model. Two combinations of the nominal position angles were used. The individual plots correspond to the azimuths at which the minimum of the rms of the distance deviations Drms was achieved. Due to the above-mentioned errors, the Drms minimum had different levels for various combinations of the azimuth and the elevation. In addition, the rate at which Drms approached its minimum depended on the actual position angles. These properties are evident in Figure 7b and Figure 8b.
The presented results are from the first experiment (with a beacon distance of 46,820 mm and a lens with a focal length of 120 mm) for ωn = ψn = 0° (see Figure 7a) and for ωn = 20° and ψn = 0° (see Figure 8a). The plots show not only the functional distances R61, R71, R81, and R91 for the reference diode S1 and the functional diodes S6, S7, S8, and S9, but also the calculated functional distance means RM. In addition, the plots of the rms of the distance deviations Drms for the mentioned diodes and the nominal position angles are shown in Figure 7b and Figure 8b. The functional distances are shown as functions of the elevation; the azimuth entered into the mathematical model acted as the parameter of the respective functions. The rms of the distance deviations was evaluated during the measurement process. Its only unambiguous minimum was determined using the entered azimuth and its appropriate elevation. By changing the azimuth, the minimum Drms was shifted along the elevation axis and changed in size. Generally, a local minimum of a function of two variables was searched for. When the smallest rms was achieved, the mean of the functional distances, the entered azimuth, and the appropriate elevation were the sought beacon coordinates.
It is clear from the plots of the functional distances that the measurement errors depend on the combination of the distance, azimuth, and elevation. If the measurements were burdened only with systematic errors, resulting from the incorrect determination of the actual position coordinates, the mean of the errors would be approximately constant. As the level of the random errors depended on the mentioned combination of the measured quantities, the mean errors expressed as a function of the azimuth often showed a certain trend—most often a significant local extreme (see Figure 4, Figure 5 and Figure 6).
This fact is clear from the functional distance plots. Each diode plot line has a different slope. If the slope is not steep, the deviations between the model parameters and the actual beacon parameters are the significant cause of the measurement errors. This applies especially to the beacon opening angle. It can happen that a small deviation of the beacon parameters has to be compensated by azimuths or elevations that differ very much from the actual values. If the slope of the functional distance line is steep, then the pixel coordinate errors will manifest for the image of the corresponding functional diode. As the distances between the images of the corresponding functional diodes and the reference diode are very small, the deviation of one or two pixels can cause big errors.
An important feature of the mathematical model is the fact that the pixel coordinate errors and the differences between parameters of the real beacon and the mathematical model do not have the same influence on the measurement errors. This is true in the whole range of the measured quantities and their combinations. Thus, the adjustment of the measuring system using optimization of the mathematical model parameters is possible only for some selected intervals of the measured quantities and their combinations. This mathematical model feature was especially apparent for the beacon opening angle.

5. Mathematical Model Adjustments

The influence of the beacon and camera parameters on the measurement results can be utilized for the adjustment of the mathematical model, which contains all the relevant parameters. In general, this means that the measurement errors can be minimized by changing the system parameters in the mathematical model. As the errors do not depend linearly on the measured quantities, the adjustment was performed by optimizing a selected accuracy indicator. This indicator was selected from a group of results containing several combinations of the measured variables. The results were then compared with the conventionally true values of the measured beacon position coordinates; a high accuracy in determining these values is a necessary precondition for a successful adjustment. The measuring system can be adjusted using either the focal length, the beacon opening angle, or the beacon base, which manifests itself similarly to the focal length.

5.1. Mathematical Model Adjustment Using the Model Focal Length

Distance measurement errors can be reduced by selecting a suitable focal length fM for the model (hereinafter model focal length). The position angle errors were affected by the focal length fM only slightly. The focal lengths fM entered into the model were 125.00 and 26.31 mm, whereas the actual focal lengths fL were 120 and 25 mm, respectively. The adjustment performed with the model focal length fM is illustrated in Table 7 for fL = 25 mm, R0 = 46,730 mm, and ψn = 35°. It is clear that small changes in the model focal length significantly affect the distance percentage errors.
Since the measured position angles remain almost constant while the model focal length is changing, the focal length fM can be optimized by finding the minimum of the biggest distance percentage error for a random combination of the position angles in the provided beacon–camera configuration. The optimal focal lengths fM were found for both beacon distances, performed with a lens with a focal length of 25 mm, and for two beacon elevations. Table 8 shows distance percentage errors for the optimal model focal length fMo. Table 9 shows the percentage errors for both beacon distances and for two model focal lengths fM13 and fM46, which were determined as the optimal lengths for the elevation of 5°. From the comparison of the individual optimal focal lengths and the corresponding distance percentage errors, it is clear that the utilization of only one model focal length can be regarded as acceptable, within the selected intervals of the measured distances and angles.
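A possible way to automate this adjustment is sketched below, again under assumptions: solve(img, f=...) stands for the hypothetical grid-search solver from the Section 2 sketch, and shots is a list of (img, R_true) calibration pairs covering the azimuths of interest. The optimal model focal length is the candidate with the smallest worst-case absolute distance percentage error, mirroring the minimax rule described above.

```python
import numpy as np

def optimal_model_focal_length(shots, f_candidates, solve):
    """Return (f_Mo, worst |dR| in %) over a grid of candidate model focal lengths."""
    best_f, best_worst = None, np.inf
    for f_m in f_candidates:
        worst = max(abs(100.0 * (solve(img, f=f_m)[0] - r_true) / r_true)
                    for img, r_true in shots)
        if worst < best_worst:
            best_f, best_worst = f_m, worst
    return best_f, best_worst

# e.g., f_candidates = np.arange(0.0255, 0.0270, 0.00005) around the nominal 25 mm lens
```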

5.2. Mathematical Model Adjustment Using the Beacon Opening Angle

Adjustment of the mathematical model, by entering the beacon opening angle βM into the model (hereinafter model opening angle), consists of position angle error evaluation for different angles βM. As an example, Table 10 shows the results for several angles βM and two combinations of nominal azimuth and elevation, [ωn; ψn] (°). These combinations were [0; 20] and [20; 20]. Results from the first series of the first experiment were used; the beacon distance and lens focal length were 46,820 and 120 mm, respectively.
For the model opening angles βM (°) ∈ 〈56, 64〉 with a step of 0.5°, the azimuth and elevation errors were evaluated. For every βM, the root mean square of the position angle error PArms was determined as PArms = √((Δω² + Δψ²)/2). The model opening angle βM is considered optimal when PArms is minimal. If PArms is minimal for several different angles, the beacon distance percentage error can be taken into consideration.
The model opening angle affects all the measured quantities. It can potentially be used to optimize the measurement of the position angles regardless of their distance errors. Distance errors can be minimized separately by optimizing the model focal length as all the model working equations are linearly dependent on the focal length. However, the adjustment of the mathematical model with the opening angle is not suitable as the optimum opening angles vary significantly with different values and combinations of the measured position angles (see Table 10). In the presented cases, the optimal angles were 63.0° for [0; 20] and 59.5° for [20; 20].
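For completeness, a corresponding sketch of the opening-angle adjustment follows. It assumes a hypothetical wrapper solve_beta(img, beta_M) that rebuilds the model parameters (d16, α1) for the candidate opening angle before running the position search; omega_0 and psi_0 are the conventionally true angles in degrees. Each candidate βM is scored by PArms, and the smallest score wins, as in Table 10.

```python
import numpy as np

def optimal_opening_angle(img, omega_0, psi_0, beta_grid_deg, solve_beta):
    """Return (PA_rms, beta_M) for the model opening angle with the smallest rms
    position-angle error."""
    scores = []
    for beta_m in beta_grid_deg:
        _, om_m, ps_m = solve_beta(img, beta_m)    # measured azimuth and elevation for this beta_M
        pa_rms = np.sqrt(((om_m - omega_0) ** 2 + (ps_m - psi_0) ** 2) / 2.0)
        scores.append((pa_rms, beta_m))
    return min(scores)

# e.g., beta_grid_deg = np.arange(56.0, 64.5, 0.5), as in Table 10
```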

6. Conclusions

The results of the performed experiments show that the presented simultaneous passive method is usable not only for two, but also for three beacon position coordinates. In both cases, the mathematical model was based on the lens equations of the lines connecting the individual functional diodes with the reference diode. The functional distances were derived from these equations as functions of one or two beacon position angles. The task was solved numerically according to the rms of the differences between the individual functional distances and the mean of the functional distances. When the minimum of this rms was reached, the mean of the functional distances and the substituted values of the position angles were taken as the required results. The method can be used in practice to determine the mutual position of the beacon and the camera with sufficient accuracy not only in the 2D plane, but also in 3D. The measurement accuracy was limited mainly by the camera resolution. In addition, the deviations between the real beacon parameters and the mathematical model parameters also have some negative influence on the measurement errors.
The measurement accuracies reached for the selected configurations were comparable between the individual experiments. The distance percentage errors were on the order of tenths of a percent. The mean errors and standard deviations of the azimuth and elevation errors were on the order of tenths of a degree. The increase in the absolute mean azimuth errors above one degree was caused by systematic errors resulting from the undetermined components of the azimuths (see Section 4.4).
The measuring system can be adjusted by a suitable selection of some mathematical model parameters. The effects of the model focal length fM and the model opening angle βM were verified. The accuracy of the beacon distance measurement can be favorably affected by the focal length fM entered into the mathematical model. Thus, it is possible to effectively reduce the distance percentage errors and to find the optimal focal length fMo, usable within all the intervals of the individual measured quantities, while the influence on the azimuth and elevation remains negligible. On the contrary, the influence of the angle βM is reflected in all the measured coordinates. Thus, the angle βM is potentially suitable for adjusting the azimuth and elevation; however, its utilization in practice is not suitable, as its optimal value depends on the current values and combinations of the position angles.
Analogously, the minimum and the maximum measurable beacon distances were affected by the beacon size as much as by the focal length. The layout of the functional diodes, and their mutual position with respect to the reference diode, influenced mainly the span of the measurable position angles. Not all equations of the mathematical model could be used when some diodes appeared in one line and were not distinguishable or visible. This situation occurred for large beacon distances or when the position angles exceeded their limits ωl and ψl. The limits are determined not only by the beacon layout but also, in general, by both position angles; for ψ = 0°, ωl = β, and for ω = 0°, ψl = 90°.
The option to use this method, but with a smaller number of diodes, was verified too. Only five diodes were used in this model. The diode S1 was used as a reference diode, and the diodes S6, S7, S8, and S9 were tested as the functional diodes (see Figure 1). It is clear from the results that the method was also suitable. However, it was necessary to select only diodes for which the azimuth and the elevation had opposite manifestations. In other words, the line projections between the diodes had to extend for some functional diodes and shorten for others. The resultant errors were comparable with errors listed in Table 5, Table 6 and Table 7.
The presented method is potentially suitable for measuring the position of an arbitrary known object. Both the functional and the reference points have to be defined on the surface of this object. In the ideal case, all of these points are visible in the camera pictures. Their images have to be clearly visible so that the distances between the reference point and the individual functional points in the pictures are measurable. Analogously to the purpose-built beacon, the mathematical model of the object of interest has to contain equations relating the projections of the lines between the reference point and the individual functional points in the corresponding plane to the images of these lines in the plane of the camera detector.

Author Contributions

Conceptualization, J.N., M.P.; methodology, J.N.; validation, J.N., M.P.; investigation, J.N., M.P.; writing—original draft preparation, J.N.; writing—review and editing, J.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The work presented in this paper has been supported by the Czech Republic Ministry of Defence—University of Defence development program “AIROPS—Airspace Operations”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bourdonnaye, A.; Doskocil, R.; Krivanek, V.; Stefek, A. Practical Experience with Distance Measurement Based on the Single Visual Camera. Adv. Mil. Technol. 2012, 8, 51–58. [Google Scholar]
  2. Doskocil, R.; Fischer, J.; Krivanek, V.; Stefek, A. Measurement of Distance by Single Visual Camera at Robot Sensor Systems. In Proceedings of the 15th Mechatronika 2012, Prague, Czech Republic, 5–7 December 2012. [Google Scholar]
  3. Bui, M.T.; Doskocil, R.; Krivanek, V.; Ha, T.H.; Bergeon, Y.T.; Kutílek, P. Indirect Method to Estimate Distance Measurement Based on Single Visual Cameras. In Proceedings of the International Conference on Military Technologies, Brno, Czech Republic, 31 May–2 June 2017. [Google Scholar]
  4. Kondo, H.; Okajama, K.; Choi, J.K.; Holta, T.; Kondo, M.; Okazaki, T.; Singh, H.; Chao, Z.; Nitadory, K. Passive acoustic and optical guidance for underwater vehicles. In Proceedings of the 2012 Oceans—Yeosu, Yeosu, Korea, 21–24 May 2012. [Google Scholar]
  5. Bui, M.T.; Doskocil, R.; Krivanek, V. Distance and Angle Measurement Using Monocular Vision. In Proceedings of the 2018 18th International Conference on Mechatronics, Brno, Czech Republic, 5–7 December 2018. [Google Scholar]
  6. Beck, J.H.; Kim, S.H. Vision based distance measurement system using two-dimensional barcode for mobile robot. In Proceedings of the 2017 4th International Conference on Computer Applications and Information Processing Technology (CAIPT), Kuta Bali, Indonesia, 8–10 August 2017. [Google Scholar]
  7. Ramezani, M.; Khoshelham, K. Vehicle positioning in GNSS-deprived urban areas by stereo visual-inertial odometry. IEEE Trans. Intell. Veh. 2018, 3, 208–217. [Google Scholar] [CrossRef]
  8. Mizuchi, Y.; Ogura, T.; Kim, Y.B.; Hagiwara, Y.; Choi, Y. Accuracy evaluation of camera-based position and heading measurement system for vessel positioning at a very close distance. In Proceedings of the 2015 15th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea, 13–16 October 2015. [Google Scholar]
  9. Wang, M.; Liu, Y.; Su, D.; Liao, Y.; Shi, L.; Xu, J.; Miro, J.V. Accurate and real-time 3D tracking for the following robots by fusing vision and ultra-sonar information. IEEE/ASME Trans. Mechatron. 2018, 23, 997–1006. [Google Scholar] [CrossRef]
  10. Saito, T.; Nomura, K.; Yamazaki, Y.; Tatsuno, K.; Sota, K.; Fuziwara, Y.; Inoue, E.; Yoshino, K. Position measurement for a mobile weed moving robot by a camera and a laser rangefinder. In Proceedings of the 2017 International Symposium on Micro-NanoMechatronics and Human Science (MHS), Nagoya, Japan, 3–6 December 2017. [Google Scholar]
  11. Lu, Y. A New Algorithm of Rapid and Precise Position and Azimuth Determination Based on Vehicular Optical-electronic Detector. In Proceedings of the 2009 9th International Conference on Electronic Measurement & Instruments, Beijing, China, 16–19 August 2009. [Google Scholar]
  12. Polasek, M.; Nemecek, J. Optical Positioning Using Neural Network. In Proceedings of the 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018. [Google Scholar]
  13. Polasek, M.; Pucek, V.; Nemecek, J.; Bloudicek, R. Determining the Position Using Neural Network. In Proceedings of the 22nd International Scientific Conference, Transport Means 2018, Trakai, Lithuania, 3–5 October 2018. [Google Scholar]
  14. Yasir, M.; Ho, S.W.; Vellambi, B.N. Indoor Position Tracking Using Multiple Optical Receivers. J. Lightwave Technol. 2016, 34, 1166–1176. [Google Scholar] [CrossRef]
  15. Jaechan, L. Ubiquitous 3D positioning systems by LED-based visible light communications. IEEE Wireless Commun. 2015, 22, 80–85. [Google Scholar]
  16. Yasir, M.; Ho, S.W.; Vellambi, B.N. Indoor positioning system using visible light and accelerometer. IEEE/OSA J. Lightw. Technol. 2014, 32, 3306–3316. [Google Scholar] [CrossRef]
  17. Huang, T.; Gao, X.; Guo, Y.; Li, S.; Li, Q.; Li, C.; Zhu, H.; Wang, Y. Visible light indoor positioning fashioned with a single tilted optical receiver. In Proceedings of the 2015 14th International Conference on Optical Communications and Networks (ICOCN), Nanjing, China, 3–5 July 2015. [Google Scholar]
  18. Zhang, B.; Zhang, M.; Ghassemlooy, Z.; Han, D.; Yu, P. A Visible Light Positioning System with a Novel Positioning Algorithm and Two LEDs. In Proceedings of the 2019 24th OptoElectronics and Communications Conference (OECC) and 2019 International Conference on Photonics in Switching and Computing (PSC), Fukuoka, Japan, 7–11 July 2019. [Google Scholar]
  19. Xu, J.; Gong, C.; Xu, Z. Experimental Indoor Visible Light Positioning Systems with Centimeter Accuracy Based on a Commercial Smartphone Camera. IEEE Photonics J. 2018, 10, 1–17. [Google Scholar] [CrossRef]
  20. Nakazawa, Y.; Makino, H.; Nishimori, K.; Wakatsuki, D.; Komagata, H. Indoor positioning using a high-speed, fish-eye lens-equipped camera in Visible Light Communication. In Proceedings of the IEEE International Conference Indoor Positioning Indoor Navigation, Montbeliard, France, 28–31 October 2013. [Google Scholar]
  21. Kuo, Y.S.; Pannuto, P.; Hsiao, K.J.; Dutta, P. Luxapose: Indoor positioning with mobile phones and visible light. In Proceedings of the MobiCom ‘14: 20th Annual International Conference on Mobile Computing and Networking, Maui, HI, USA, 7–11 September 2014. [Google Scholar]
  22. Shahjalal, M.; Hasan, M.K.; Hossan, M.T.; Chowdhury, M.Z.; Jang, Y.M. Error Mitigation in Optical Camera Communication Based Indoor Positioning System. In Proceedings of the 2018 Tenth International Conference on Ubiquitous and Future Networks (ICUFN), Prague, Czech Republic, 3–6 July 2018. [Google Scholar]
  23. Hou, Y.; Xue, Y.; Chen, C.; Xiao, S. A RSS/AOA based indoor positioning system with a single LED lamp. In Proceedings of the 2015 International Conference on Wireless Communications & Signal Processing (WCSP), Nanjing, China, 15–17 October 2015. [Google Scholar]
  24. Chaudhary, N.; Alves, L.N.; Ghassemblooy, Z. Current Trends on Visible Light Positioning Techniques. In Proceedings of the 2nd West Asian Colloquium on Optical Wireless Communications (WACOWC2019), Tehran, Iran, 27–28 April 2019. [Google Scholar]
  25. Moreno, I.; Sun, C.C. Modeling the radiation pattern of LEDs. Opt. Express 2008, 16, 1808–1819. [Google Scholar] [CrossRef]
  26. Hossan, T.; Chowdhury, M.Z.; Islam, A.; Jang, Y.M. A Novel Indoor Mobile Localization System Based on Optical Camera Communication. Wirel. Commun. Mob. Comput. 2018, 2018, 1–17. [Google Scholar] [CrossRef] [Green Version]
  27. Cincotta, S.; He, C.; Neiled, A.; Armstrong, J. Indoor Visible Light Positioning: Overcoming the Practical Limitations of the Quadrant Angular Diversity Aperture Receiver (QADA) by Using the Two-Stage QADA-Plus Receiver. Sensors 2019, 19, 956. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Lin, C.; Lin, B.; Tang, X.; Zhou, Z.; Zhang, H.; Chaudhary, S.; Ghassemlooy, Z. An Indoor Visible Light Positioning System Using Artificial Neural Network. In Proceedings of the 2018 Asia Communications and Photonics Conference (ACP), Hangzhou, China, 26–29 October 2018. [Google Scholar]
  29. Ifthekhar, M.S.; Saha, N.; Jang, J.M. Neural network based indoor positioning technique in optical camera communication system. In Proceedings of the 2014 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Busan, Korea, 27–30 October 2014. [Google Scholar]
  30. Nemecek, J.; Polasek, M. Measurement of Relative Position of Camera and Optical Beacon by Simultaneous Passive Method. In Proceedings of the 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018. [Google Scholar]
  31. Nemecek, J.; Polasek, M. Measurement of Relative Position of Camera and Optical Beacon by Simultaneous Passive Method (The extended post-conference published version). Int. J. Adv. Telecommun. Electrotech. Signals Syst. 2019, 8, 1–7. [Google Scholar]
  32. Polasek, M.; Pucek, V.; Nemecek, J.; Bloudicek, R. Optical positioning realized on raspberry Pi computer board. In Proceedings of the Transport Means Proceedings of the International Conference, Kaunas, Lithuania, 2–4 October 2019. [Google Scholar]
Figure 1. Beacon layout [30,31].
Figure 2. Diagram of the measuring system.
Figure 3. Beacon, check test, ωn = 20°, ψn = 35°.
Figure 4. Beacon distance percentage errors; first test, ψn = 20°.
Figure 5. Azimuth errors; first test, ψn = 20°.
Figure 6. Elevation errors; first test, ψn = 20°.
Figure 7. (a) Plots of the functional distances for diodes S6, S7, S8, and S9; (b) root mean square of the distance deviations for diodes S6, S7, S8, and S9; R0 = 46,820 mm, fL = 120 mm, ωn = ψn = 0°.
Figure 8. (a) Plots of the functional distances for diodes S6, S7, S8, and S9; (b) root mean square of the distance deviations for diodes S6, S7, S8, and S9; R0 = 46,820 mm, fL = 120 mm, ωn = 20°, ψn = 0°.
Table 1. Nominal, conventionally true, and measured beacon position coordinates; first test.
ωn (°) | −3 | 0 | 3 | 10 | 20 | 35 | 46
Series 1
Rm (mm) | 46,407 | 46,457 | 46,486 | 46,588 | 46,708 | 46,875 | 46,858
ψ0 (°) | 21.6 | 21.6 | 21.6 | 21.5 | 21.5 | 21.3 | 21.2
ψm (°) | 21.3 | 21.5 | 21.5 | 21.3 | 20.8 | 20.5 | 21.1
ω0 (°) | −3.3 | 0 | 3.5 | 10.3 | 19.8 | 35.0 | 45.8
ωm (°) | −4.5 | −1.1 | 2.2 | 9.1 | 18.7 | 33.8 | 44.8
Series 2
Rm (mm) | 46,402 | 46,472 | 46,474 | 46,553 | 46,733 | 46,974 | 46,931
ψ0 (°) | 21.6 | 21.6 | 21.6 | 21.5 | 21.5 | 21.3 | 21.2
ψm (°) | 21.4 | 21.1 | 21.1 | 20.8 | 20.8 | 20.5 | 21.1
ω0 (°) | −3.3 | 0 | 3.4 | 10.3 | 20.0 | 35.0 | 46.1
ωm (°) | −4.5 | −1.3 | 2.2 | 9.0 | 18.8 | 33.7 | 45.0
Table 2. Summary of the distance measurement results.
ψn (°) | 0 | 5 | 20 | 35
fL = 120 mm, R0 = 46,520 mm, first test
δR̄ (%) | −0.31 | −0.33 | −0.43 | −0.38
sδR (%) | 0.21 | 0.27 | 0.42 | 0.76
FδR (%) | 20/74/100 | 23/66/100 | 9/43/94 | 0/17/66
fL = 25 mm, R0 = 13,460 mm, second test
δR̄ (%) | −0.39 | −0.67 | −0.43 | −0.29
sδR (%) | 0.33 | 0.33 | 0.48 | 0.78
FδR (%) | 17/66/91 | 0/34/83 | 31/57/80 | 9/66/74
fL = 25 mm, R0 = 46,728 mm, third test
δR̄ (%) | 0.58 | 0.24 | 0.19 | 0.26
sδR (%) | 0.40 | 0.28 | 0.44 | 0.80
FδR (%) | 6/46/77 | 26/71/100 | 11/69/97 | 9/57/71
δR̄ (%) is the mean of the distance percentage errors. sδR (%) is the sample standard deviation of the distance percentage errors. FδR (%) is the frequency of absolute distance percentage errors within 0.0 to 0.1%/0.0 to 0.5%/0.0 to 1.0%.
Table 3. Summary of the azimuth measurement results.
ψn (°) | 0 | 5 | 20 | 35
fL = 120 mm, R0 = 46,520 mm, first test
Δω̄ (°) | −1.55 | −1.92 | −1.20 | −1.95
sΔω (°) | 0.38 | 0.35 | 0.11 | 0.36
FΔω (%) | 20/74/100 | 23/66/100 | 9/43/94 | 0/17/66
fL = 25 mm, R0 = 13,460 mm, second test
Δω̄ (°) | −4.1 | −1.6 | −1.8 | −2.6
sΔω (°) | 0.1 | 0.2 | 0.3 | 0.5
FΔω (%) | 100/100/100 | 100/100/100 | 97/97/100 | 63/97/100
fL = 25 mm, R0 = 46,728 mm, third test
Δω̄ (°) | −2.0 | −1.5 | −1.8 | −1.9
sΔω (°) | 0.4 | 0.2 | 0.4 | 0.5
FΔω (%) | 94/97/100 | 100/100/100 | 74/89/100 | 57/100/100
Δω̄ (°) is the mean azimuth error. sΔω (°) is the sample standard deviation of the azimuth errors. FΔω (%) is the frequency of absolute azimuth errors within 0.0 to 0.5°/0.0 to 1.0°/0.0 to 2.0°.
Table 4. Summary of the elevation measurement results.
ψn (°) | 0 | 5 | 20 | 35
fL = 120 mm, R0 = 46,520 mm, first test
Δψ̄ (°) | −0.9 | −1.0 | −0.4 | −0.2
sΔψ (°) | 0.5 | 0.4 | 0.3 | 0.3
FΔψ (%) | 20/54/100 | 9/49/100 | 57/91/94 | 77/100/100
fL = 25 mm, R0 = 13,460 mm, second test
Δψ̄ (°) | −0.6 | −0.6 | −0.4 | −0.7
sΔψ (°) | 0.4 | 0.4 | 0.3 | 0.6
FΔψ (%) | 43/80/100 | 37/87/100 | 54/100/100 | 54/69/100
fL = 25 mm, R0 = 46,728 mm, third test
Δψ̄ (°) | −0.8 | −0.8 | −0.3 | −0.2
sΔψ (°) | 0.9 | 0.5 | 0.4 | 0.6
FΔψ (%) | 34/51/89 | 31/69/97 | 74/94/100 | 49/86/100
Δψ̄ (°) is the mean elevation error. sΔψ (°) is the sample standard deviation of the elevation errors. FΔψ (%) is the frequency of absolute elevation errors within 0.0 to 0.5°/0.0 to 1.0°/0.0 to 2.0°.
Table 5. Changes of measured angles when changing pixel coordinates.
Diode | yC0 | zC0 | yC0 + 1 | Δωp1 (°) | Δψp1 (°) | yC0 + 2 | Δωp2 (°) | Δψp2 (°)
1 | 1009 | 239 | 1010 | 0.3 | 0 | 1011 | 0.5 | 0.1
2 | 1230 | 242 | 1231 | 0 | 0.1 | 1232 | 0 | 0.3
3 | 789 | 236 | 790 | 0.1 | −0.3 | 791 | 0.1 | −0.5
4 | 1223 | 688 | 1224 | 0.1 | −0.1 | 1225 | 0.1 | −0.1
5 | 773 | 679 | 774 | 0.1 | −0.1 | 775 | 0.1 | −0.1
6 | 1578 | 225 | 1579 | 0 | 0 | 1580 | −0.1 | 0.2
7 | 1566 | 668 | 1567 | 0 | −0.2 | 1568 | 0 | −0.4
8 | 363 | 202 | 364 | 0 | −0.1 | 365 | 0 | −0.2
9 | 350 | 649 | 351 | 0 | 0 | 352 | 0 | 0
Table 6. Effect of the mutual tilt between the camera and the beacon.
γr (°) | 0 | −1.1 | 0.9
δDmax − δDmin (%)
ψn = 0° | 0.71 | 0.77 | 0.83
ψn = 35° | 1.94 | 1.71 | 1.56
|δD|max (%)
ψn = 0° | 0.36 | 0.45 | 0.53
ψn = 35° | 1.37 | 1.27 | 1.32
Δωmax − Δωmin (°)
ψn = 0° | 0.58 | 0.74 | 0.63
ψn = 35° | 3.30 | 3.40 | 3.29
|Δω|max (°)
ψn = 0° | 0.38 | 0.74 | 0.60
ψn = 35° | 1.82 | 1.88 | 1.87
Δψmax − Δψmin (°)
ψn = 0° | 1.7 | 2.7 | 1.4
ψn = 35° | 1.2 | 2.1 | 0.8
|Δψ|max (°)
ψn = 0° | 1.2 | 1.5 | 1.4
ψn = 35° | 0.9 | 1.4 | 1.0
Table 7. Distance percentage errors δR (%) for different model focal lengths.
ωn (°) | −3 | 0 | 3 | 10 | 20 | 35 | 46
δR (%) for fM = 26.31 mm | −1.23 | −1.11 | −0.88 | −0.75 | −0.36 | 0.59 | 0.77
δR (%) for fM = 26.00 mm | −1.55 | −1.21 | −1.31 | −0.69 | −0.92 | 0.16 | 0.32
δR (%) for fM = 26.50 mm | 0.34 | 0.70 | 0.59 | 1.20 | 0.99 | 2.05 | 2.26
Table 8. Distance percentage errors δR (%) for the optimal model focal length.
ωn (°) | −3 | 0 | 3 | 10 | 20 | 35 | 46
fL = 25 mm, R0 = 13,460 mm, ψn = 5°
δR (%) for fMo = fM13 = 26.15 mm 1 | −0.40 | −0.20 | −0.10 | 0.03 | 0.04 | 0.35 | 0.42
fL = 25 mm, R0 = 13,460 mm, ψn = 35°
δR (%) for fMo = 26.17 mm | −1.04 | −0.85 | −0.86 | −0.62 | −0.12 | 0.76 | 1.06
fL = 25 mm, R0 = 46,730 mm, ψn = 5°
δR (%) for fMo = fM46 = 26.19 mm 2 | −0.37 | −0.10 | −0.07 | −0.06 | −0.17 | −0.07 | 0.39
fL = 25 mm, R0 = 46,730 mm, ψn = 35°
δR (%) for fMo = 26.16 mm | −0.95 | −0.60 | −0.70 | −0.10 | −0.31 | 0.74 | 0.94
1 The model focal length determined as optimal for the beacon distance of 13,460 mm. 2 The model focal length determined as optimal for the beacon distance of 46,730 mm.
Table 9. Errors for various optimal model focal lengths.
ωn (°) | −3 | 0 | 3 | 10 | 20 | 35 | 46
fL = 25 mm, R0 = 13,460 mm, ψn = 5°
δR (%) for fM13 = 26.15 mm 1 | −0.40 | −0.20 | −0.10 | 0.03 | 0.04 | 0.35 | 0.42
δR (%) for fM46 = 26.19 mm 2 | −0.24 | −0.05 | 0.06 | 0.20 | 0.20 | 0.50 | 0.60
fL = 25 mm, R0 = 46,730 mm, ψn = 5°
δR (%) for fM13 = 26.15 mm 1 | −0.52 | −0.26 | −0.22 | 0.20 | 0.30 | 0.20 | 0.20
δR (%) for fM46 = 26.19 mm 2 | −0.37 | −0.10 | −0.07 | 0.06 | 0.17 | 0.07 | 0.39
1 The model focal length determined as optimal for the beacon distance of 13,460 mm. 2 The model focal length determined as optimal for the beacon distance of 46,730 mm.
Table 10. Optimization of the model opening angle.
βM (°) | 56.0 | 56.5 | 57.0 | 57.5 | 58.0 | 59.0 | 59.5 | 60.0 | 60.5 | 61.0 | 62.0 | 63.0 | 64.0
ωn = 0°, ψn = 20°
Δω (°) | −1.11 | −1.11 | −1.11 | −1.11 | −1.11 | −1.21 | −1.21 | −1.21 | −1.21 | −1.21 | −1.31 | −1.31 | −1.41
Δψ (°) | 1.45 | 1.45 | 1.35 | 1.25 | 1.25 | 1.15 | 1.05 | 1.05 | 0.95 | 0.95 | 0.85 | 0.75 | 0.65
PArms (°) | 1.29 | 1.29 | 1.24 | 1.18 | 1.18 | 1.18 | 1.13 | 1.13 | 1.09 | 1.09 | 1.10 | 1.07 | 1.10
ωn = 20°, ψn = 20°
Δω (°) | −1.69 | −1.39 | −1.19 | −0.99 | −0.69 | −0.19 | 0.01 | 0.31 | 0.51 | 0.81 | 1.41 | 1.91 | 2.51
Δψ (°) | 0.20 | 0.20 | 0.30 | 0.30 | 0.30 | 0.40 | 0.40 | 0.50 | 0.50 | 0.50 | 0.60 | 0.70 | 0.80
PArms (°) | 1.20 | 0.99 | 0.87 | 0.73 | 0.53 | 0.31 | 0.28 | 0.42 | 0.51 | 0.67 | 1.08 | 1.44 | 1.86
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
