
A new procedure for onboard calibration of star sensors installed on low Earth orbit satellite

Abstract

This paper suggests a new onboard calibration procedure for star sensors installed on low Earth orbit satellites. In this procedure, all the error factors are investigated, and an accurate model of the star sensor is presented. Then, a new scenario is proposed for producing realistic data from a star sensor installed on a satellite while the satellite is moving in orbit. These sensor data are entered into the unscented Kalman filter as measurements, and the sensor parameters, including the focal length and principal point, are estimated. The estimated parameters are then entered into the linear Kalman filter, and the misalignment error is estimated. Finally, the proposed procedure is simulated, and its performance is evaluated. The simulation results show the high accuracy and fast convergence of the suggested algorithm.

Introduction

One of the important pillars of a satellite is the attitude determination system (ADS), which allows the satellite to be oriented and stabilized. Attitude information is received through sensors, and the torque required to bring the satellite to the proper attitude is then provided using control laws and actuators. The accuracy of the measurement information is of great importance in this subsystem; otherwise, the satellite attitude becomes inaccurate and may even become unstable. Because the sensors are affected by systematic and random errors, their accuracy decreases. Therefore, it is necessary to perform onboard calibration of the sensors, either offline or online [1,2,3].

A star sensor is basically a digital camera with a focal plane made up of many pixels. These pixels are of one of two types: CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor). CCDs have lower noise but are less resistant to radiation and are also unable to provide output at different rates. An important feature of CMOS is that each pixel has a separate data processor, which is an advantage; sensors with this feature are generally called active-pixel sensors (APSs). The star sensor works as follows: starlight passes through the camera lens and is imaged on the focal plane, creating bright spots. The coordinates of the centers of these spots are then obtained by centroiding algorithms. Given these centers together with the optical and distortion parameters, the star vector is calculated in the sensor frame. On the other hand, by comparing the created image with the images pre-stored in memory, the star vector is obtained in the inertial frame. Finally, the satellite attitude matrix is obtained with the help of attitude determination algorithms [4,5,6].

Onboard calibration of the star sensor is divided into two general categories: attitude dependent (ADC) and attitude independent (AIC). AIC methods have received more attention because they are not influenced by attitude errors and therefore achieve higher accuracy. In [7], a principle for onboard calibration of the star sensor was proposed: the inner product between star vectors is the same in the sensor frame and in the inertial frame. Based on this principle, Samaan et al. proposed an online calibration method that combines the least squares (LS) algorithm and a linear Kalman filter (LKF) [7]. In this strategy, the LS algorithm calculates the initial values of the optical parameters, including the principal point and focal length. Then, the optimal values are obtained by applying these initial conditions to the LKF. Finally, these optimal values are used to obtain the distortion parameters. The major disadvantage of this method is that the interaction between the optical parameters and the distortion parameters is not considered. Woodbury and Junkins [8] used an LS regression algorithm to calculate the optical parameters, ignoring the influence of the distortion parameters. In [9], the extended Kalman filter (EKF) was used to calculate only the focal length and the second-order distortion coefficient, and the remaining parameters were assumed to be fixed. The authors in [10] improved the accuracy of this method by modifying the Samaan method; however, the interaction between the optical parameters and the distortion parameters is still not taken into account. In [11], a temperature-dependent model for the star sensor was extracted, and an extended Kalman filter was used to obtain the parameters of this model. Zhou, in reference [12], proposed a three-stage offline calibration method in which the interaction of the optical parameters and the distortion parameters is considered. In the first step, the optical parameters are obtained by applying the Levenberg–Marquardt (LM) algorithm to the model without distortion. In the second step, the distortion parameters are obtained with the help of the values obtained from the first step and the LS algorithm. In the third step, the optimal optical and distortion parameters are obtained by using the values from the first two steps and applying the LM algorithm to the model with distortion. The authors in [13] proposed an LS-EKF strategy for star sensor onboard calibration: the LS algorithm computes an initial estimate of the principal point and focal length from the measured stars and the associated cataloged stars, and this initial estimate is then entered into an EKF, which computes the best estimate of the calibration parameters at each new star tracker acquisition. The authors in [14] proposed an NN-UKF strategy for star sensor onboard calibration, in which a neural network estimates the optical parameters and a UKF is then used to accurately estimate the optical and distortion parameters. In [15], an on-orbit calibration strategy based on a combination of the singular value and eigenvalue sensitivities and the angular distance was proposed. The authors in [16] extracted a new model for the SS-411 sun sensor and proposed LM, EKF, and UKF methods for on-orbit calibration of the sun sensor as an optical sensor. In [17], a state model for the integrated star tracker and gyroscope system based on satellite payload multiplexing was extracted, and the systematic errors were calibrated by a Kalman filter.

A review of previous work on onboard calibration of the star sensor shows that accurate modeling of these sensors is lacking and that all error factors need to be considered. On the other hand, there is no scenario for producing realistic data from a sensor installed on a satellite while the satellite is moving in orbit. Generating realistic data makes the calibration results more valid and gives a better picture of the accuracy and the convergence speed. In addition, when the sensor is installed on the satellite and placed in orbit, high vibrations cause misalignment errors, and as a result, the accuracy of the attitude estimation decreases. Therefore, it is necessary to develop a calibration strategy that also estimates the misalignment error. Accordingly, the main contributions of the article are as follows:

  • A detailed model of the star sensor is provided that takes into account all the error factors.

  • A scenario is presented to generate real sensor data.

  • A two-step algorithm is developed that, in addition to estimating the focal length, principal point, and distortion coefficients, also estimates the misalignment errors.

The remainder of this paper is organized as follows: the star sensor modeling, the output generation scenario, and the proposed onboard calibration algorithm are explained in the “Methods” section. The findings are reported and discussed in the “Results and discussion” section. Finally, the “Conclusions” section states the conclusions.

Methods

Star sensor modeling

Error sources of star sensor

The error sources in the star sensor are divided into three categories: internal errors, misalignment errors, and random errors. Internal errors occur due to changes in the optical parameters as well as the distortion caused by the camera lens. According to Fig. 1, the star sensor includes a focal plane on which the beams of the stars passing through the camera lens are imaged. A coordinate system is defined on this plane, with its origin usually at the center of the plane; the x- and y-axes lie in the plane, and the z-axis is perpendicular to it. The origin of this coordinate system is called the principal point. Also, the focal length is the distance between the principal point and the center of the lens. According to Fig. 2, the starlight is deflected as it passes through the sensor lens, which changes the coordinates of the image point. This phenomenon is called lens distortion. Lens distortion is divided into radial distortion and tangential distortion. Radial distortion is caused by the imperfect curvature of the camera lens elements and displaces the image point inward or outward from its ideal position. Tangential distortion occurs due to the misalignment of the centers of the camera lenses and displaces the image point along the circumference of a circle.

Fig. 1 Interior view of star sensor [18]

Fig. 2 Effect of lens distortion [18]

The vibrations caused by the satellite’s separation from the launch vehicle lead to a loss of alignment between the sensor frame and the body frame, i.e., a misalignment error. The misalignment error is usually represented by an orthogonal rotation matrix that expresses the relationship between the two frames. The random errors of the sensor arise from two important noise types, namely the noise equivalent angle (NEA) and the low-frequency effect (LFE). Although solutions have been proposed to suppress these noise sources, their impact is still considered in this work. Accordingly, the sources of deterministic errors include changes in the principal point, changes in the focal length, changes in the distortion coefficients, internal misalignment errors, and misalignment errors between the sensor frame and the body frame. The sources of random errors include the drift of the optical parameters, the drift of the distortion parameters, the noise equivalent angle, and the low-frequency noise.

Lens distortion modeling

As mentioned in the previous section, lens distortion causes a displacement of the image point. This displacement has both radial and tangential components. In the physics of optics, it is proved that the radial distortion is a function of the distance of the ideal image point from the principal point, and its effect on the displacement of the image point along the x- and y-axes is given by the following equations [12,13,14,15,16,17,18]:

$${\delta }_{xr}=\left(\widehat{x}-{x}_{0}\right)\left({k}_{1}{r}^{2}+{k}_{2}{r}^{4}\right)$$
(1)
$${\delta }_{yr}=\left(\widehat{y}-{y}_{0}\right)\left({k}_{1}{r}^{2}+{k}_{2}{r}^{4}\right)$$
(2)
$$r=\sqrt{{\left(\widehat{x}-{x}_{0}\right)}^{2}+{\left(\widehat{y}-{y}_{0}\right)}^{2}}$$
(3)

where \(\left(\widehat{x},\widehat{y}\right)\) denotes the ideal image point (without distortion), \(\left({x}_{0},{y}_{0}\right)\) denotes the principal point, \({{\text{k}}}_{1}\) is the coefficient of the 2nd-order radial distortions, \({{\text{k}}}_{2}\) is the coefficient of the 4th-order radial distortions, and r is the distance of the ideal image point to the principal point. Also, the effect of tangential distortion on the displacement of the image point is obtained as follows:

$${\delta }_{xt}={\mu }_{1}\left(3{\left(\widehat{x}-{x}_{0}\right)}^{2}+{\left(\widehat{y}-{y}_{0}\right)}^{2}\right)+2{\mu }_{2}\left(\widehat{x}-{x}_{0}\right)\left(\widehat{y}-{y}_{0}\right)$$
(4)
$${\delta }_{yt}=2{\mu }_{1}\left(\widehat{x}-{x}_{0}\right)\left(\widehat{y}-{y}_{0}\right)+{\mu }_{2}\left({\left(\widehat{x}-{x}_{0}\right)}^{2}+3{\left(\widehat{y}-{y}_{0}\right)}^{2}\right)$$
(5)

Finally, the total displacement is as follows:

$${\delta }_{x}={\delta }_{xr}+{\delta }_{xt}$$
(6)
$${\delta }_{y}={\delta }_{yr}+{\delta }_{yt}$$
(7)
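To make the distortion model concrete, the following minimal Python/NumPy sketch evaluates Eqs. (1)–(7) for a single ideal image point. The coefficient values in the usage example are purely illustrative and are not the parameters of any particular sensor.

    import numpy as np

    def distortion_displacement(x_hat, y_hat, x0, y0, k1, k2, mu1, mu2):
        # Total displacement of an ideal image point (x_hat, y_hat), Eqs. (1)-(7).
        dx, dy = x_hat - x0, y_hat - y0
        r2 = dx**2 + dy**2                                         # r^2 from Eq. (3)
        radial = k1 * r2 + k2 * r2**2                              # k1*r^2 + k2*r^4
        delta_xr, delta_yr = dx * radial, dy * radial              # Eqs. (1)-(2)
        delta_xt = mu1 * (3 * dx**2 + dy**2) + 2 * mu2 * dx * dy   # Eq. (4)
        delta_yt = 2 * mu1 * dx * dy + mu2 * (dx**2 + 3 * dy**2)   # Eq. (5)
        return delta_xr + delta_xt, delta_yr + delta_yt            # Eqs. (6)-(7)

    # Illustrative (hypothetical) parameter values:
    dx_tot, dy_tot = distortion_displacement(x_hat=600.0, y_hat=450.0,
                                             x0=512.0, y0=512.0,
                                             k1=2e-8, k2=-1e-14,
                                             mu1=1e-7, mu2=1e-7)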

Complete modeling of the star sensor

Due to the existence of various errors caused by changing the internal parameters of the sensor, installing the sensor on the body, and existing noises, the star sensor’s attitude-dependent model is as follows:

$${{R}_{sensor}}^{T}\times {A}_{body}^{sensor}\times {A}_{inertial}^{body}\times \overrightarrow{s}=\frac{1}{\sqrt{{\left(\widehat{x}-{x}_{0}\right)}^{2}+{\left(\widehat{y}-{y}_{0}\right)}^{2}+{f}^{2}}}\left(\begin{array}{c}\widehat{x}-{x}_{0}\\ \widehat{y}-{y}_{0}\\ f\end{array}\right)$$
(8)

where

$$x=\widehat{x}+\left(\widehat{x}-{x}_{0}\right)\left({k}_{1}{r}^{2}+{k}_{2}{r}^{4}\right)+{n}_{LFE}+{n}_{NEA}$$
(9)
$$y=\widehat{y}+\left(\widehat{y}-{y}_{0}\right)\left({k}_{1}{r}^{2}+{k}_{2}{r}^{4}\right)+{n}_{LFE}+{n}_{NEA}$$
(10)
$$r=\sqrt{{\left(\widehat{x}-{x}_{0}\right)}^{2}+{\left(\widehat{y}-{y}_{0}\right)}^{2}}$$
(11)
$$\overrightarrow{s}=\left(\begin{array}{c}{\text{cos}}\left(\alpha \right){\text{cos}}\left(\beta \right)\\ {\text{cos}}\left(\alpha \right){\text{sin}}\left(\beta \right)\\ {\text{sin}}\left(\alpha \right)\end{array}\right)$$
(12)

In the above equations, (x, y) is the actual image point (with distortion), \(\left(\widehat{x},\widehat{y}\right)\) is the ideal image point (without distortion), \(\left({x}_{0},{y}_{0}\right)\) is the principal point, \(\overrightarrow{s}\) is the star vector in the inertial frame, \({{R}_{sensor}}^{T}\) is the internal misalignment error, \({A}_{body}^{sensor}\) is the misalignment error of the body frame with respect to the sensor frame, \({A}_{inertial}^{body}\) is the attitude matrix of the inertial frame with respect to the body frame, \({n}_{LFE}\) is the low-frequency noise, \({n}_{NEA}\) is the noise equivalent angle, (α, β) are the angular coordinates of the star in the celestial frame, \({k}_{1}\) and \({k}_{2}\) are the radial distortion coefficients, and \(r\) is the distance between the principal point and the actual image point. It should be noted that the effects of tangential distortion are ignored in this modeling because they are insignificant. Using the approximation of replacing the ideal coordinates with the measured ones inside the distortion terms, the above relations are rewritten as follows:

$$\widehat{x}=x+\left(x-{x}_{0}\right)\left({k}_{1}{r}^{2}+{k}_{2}{r}^{4}\right)-{n}_{LFE}-{n}_{NEA}$$
(13)
$$\widehat{y}=y+\left(y-{y}_{0}\right)\left({k}_{1}{r}^{2}+{k}_{2}{r}^{4}\right)-{n}_{LFE}-{n}_{NEA}$$
(14)
$$r=\sqrt{{\left(x-{x}_{0}\right)}^{2}+{\left(y-{y}_{0}\right)}^{2}}$$
(15)

According to the principle that the inner product between the star vectors in the inertial frame is equal to the inner product of the corresponding ideal image-point vectors in the sensor frame, the sensor attitude-independent model will be as follows:

$${\overrightarrow{s}}_{i}^{T}{\overrightarrow{s}}_{j}={\overrightarrow{\widehat{w}}}_{i}^{T}{\overrightarrow{\widehat{w}}}_{j}+{n}_{ij}$$
(16)
$$\overrightarrow{\widehat{w}}=\frac{1}{\sqrt{{\left(\widehat{x}-{x}_{0}\right)}^{2}+{\left(\widehat{y}-{y}_{0}\right)}^{2}+{f}^{2}}}\left(\begin{array}{c}\widehat{x}-{x}_{0}\\ \widehat{y}-{y}_{0}\\ f\end{array}\right)$$
(17)

where the indices i and j represent the i-th star and the j-th star, respectively. Here, we intend to estimate the sensor parameters, including \({x}_{0}\), \({y}_{0}\), f, \({k}_{1}\), and \({k}_{2}\), as well as the misalignment error matrix \({A}_{body}^{sensor}\).
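As an illustration of how the attitude-independent model is evaluated in practice, the following Python/NumPy sketch undistorts the measured image points with Eqs. (13)–(15), forms the unit star vectors of Eq. (17), and stacks the pairwise inner products of Eq. (16). It is only a sketch under assumptions: the parameter values are placeholders, and all star pairs in the image are used here, whereas the paper stacks the pairwise products into an m × 1 vector.

    import numpy as np
    from itertools import combinations

    def undistort(x, y, x0, y0, k1, k2):
        # Approximate ideal image point from the measured one, Eqs. (13)-(15);
        # the measured coordinates are used inside the distortion terms.
        r2 = (x - x0)**2 + (y - y0)**2
        corr = k1 * r2 + k2 * r2**2
        return x + (x - x0) * corr, y + (y - y0) * corr

    def star_vector_sensor(x_hat, y_hat, x0, y0, f):
        # Unit star vector in the sensor frame, Eq. (17).
        w = np.array([x_hat - x0, y_hat - y0, f])
        return w / np.linalg.norm(w)

    def pairwise_inner_products(points, ref_vectors, params):
        # Measured image points -> predicted inner products (Eq. 16),
        # paired with the catalog inner products s_i . s_j.
        x0, y0, f, k1, k2 = params
        w = []
        for x, y in points:
            xh, yh = undistort(x, y, x0, y0, k1, k2)
            w.append(star_vector_sensor(xh, yh, x0, y0, f))
        catalog, predicted = [], []
        for i, j in combinations(range(len(points)), 2):
            catalog.append(ref_vectors[i] @ ref_vectors[j])
            predicted.append(w[i] @ w[j])
        return np.array(catalog), np.array(predicted)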

Sensor output generation scenario

As shown in Fig. 3, to generate the image point, there must be information related to the rotation matrix of the inertial frame with respect to the sensor frame, as well as the star sensor catalog information. This rotation matrix is obtained as follows:

Fig. 3 Star sensor output generation scenario

$${A}_{inertial}^{sensor}={A}_{body}^{sensor}\times {A}_{orbit}^{body}\times {A}_{inertial}^{orbit}$$
(18)

It is necessary to have a full-sky star catalog and to know the characteristics of the sensor in order to produce the sensor star catalog. The full-sky star catalog is a complete list of all known stars in the sky along with their information, including right ascension, declination, and brightness, from which the star vector in the inertial frame is calculated. The SAO star catalog (J2000 standard) is one of the most famous full-sky star catalogs, and the catalogs of many star sensors are prepared according to it. The number of stars in the sensor catalog depends on the field of view of the sensor and the average number of stars in the field of view, and is obtained from the following equation:

$$n=41253\times \frac{\overline{N}}{FOV }$$
(19)

where \(\overline{N }\) is the average number of stars in the field of view and \(FOV\) is the field of view of the sensor. It should be noted that according to the star sensor manufacturing technology, each of these sensors is able to identify stars with a certain minimum brightness. For this reason, the stars in the catalog must have a brightness greater than that certain value. The sensor star catalog is obtained by randomly selecting n stars with a given minimum brightness from the full-sky star catalog. The vectors of the stars in the sensor frame are obtained by multiplying the inertial vectors of the stars in the sensor star catalog by the mentioned rotation matrix. Then, the ideal image points are obtained from the following equation:

$$\widehat{x}=-f\frac{{S}_{sx}}{{S}_{sz}}+{x}_{0}$$
(20)
$$\widehat{y}=-f\frac{{S}_{sy}}{{S}_{sz}}+{y}_{0}$$
(21)

where \({S}_{sx}\), \({S}_{sy}\), and \({S}_{sz}\) are the components of the star vector in the sensor frame. The actual image points are then obtained from Eqs. (9) and (10). Finally, the coordinates that fall outside the focal plane are discarded.
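The scenario above can be summarized in a short simulation sketch. The code below converts catalog angles to inertial star vectors (Eq. 12), rotates them into the sensor frame with the matrix of Eq. (18), projects them with Eqs. (20)–(21), and applies distortion and noise as in Eqs. (9)–(10). The detector size and noise level are hypothetical, and the LFE and NEA terms are lumped into a single Gaussian pixel noise for simplicity.

    import numpy as np

    def celestial_to_inertial(alpha, beta):
        # Unit star vector in the inertial frame from the catalog angles, Eq. (12).
        return np.array([np.cos(alpha) * np.cos(beta),
                         np.cos(alpha) * np.sin(beta),
                         np.sin(alpha)])

    def project_star(s_inertial, A_sensor_inertial, x0, y0, f, k1, k2, sigma_pix):
        # Ideal image point (Eqs. 20-21) plus distortion and noise (Eqs. 9-10).
        s = A_sensor_inertial @ s_inertial              # star vector in the sensor frame
        x_hat = -f * s[0] / s[2] + x0                   # Eq. (20)
        y_hat = -f * s[1] / s[2] + y0                   # Eq. (21)
        r2 = (x_hat - x0)**2 + (y_hat - y0)**2          # Eq. (11)
        corr = k1 * r2 + k2 * r2**2
        x = x_hat + (x_hat - x0) * corr + np.random.normal(0.0, sigma_pix)   # Eq. (9)
        y = y_hat + (y_hat - y0) * corr + np.random.normal(0.0, sigma_pix)   # Eq. (10)
        return x, y

    def on_focal_plane(x, y, n_pix=1024):
        # Keep only points landing on a hypothetical n_pix x n_pix detector.
        return 0.0 <= x < n_pix and 0.0 <= y < n_pix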

Proposed strategy

According to Fig. 4, the information of the reference star vectors and the image points is entered into the first block after each image capture. In this block, the sensor parameters, including the principal point, focal length, and distortion coefficients, are obtained using the UKF. The output of this block is then used as the input to the second block. In the second block, the elements of the misalignment error matrix (\({A}_{body}^{sensor}\)) are extracted using the previous block’s output, the attitude matrix information, and the LKF.

Fig. 4 Online two-stage algorithm

The first stage of the algorithm (UKF)

It is necessary to know the state and the measurement representation of the sensor model in order to apply the UKF. If the sensor parameters are assumed constant, then we have the following:

$$p\left(k+1\right)=p\left(k\right)$$
(22)
$${o}_{ij}={s}_{i}^{T}{s}_{j}={F}_{ij}\left({x}_{0},{y}_{0},f,{k}_{1},{k}_{2}\right)={\overrightarrow{\widehat{w}}}_{i}^{T}{\overrightarrow{\widehat{w}}}_{j}+{n}_{ij}$$
(23)

where,

$$p={\left[\begin{array}{ccccc}{x}_{0}& {y}_{0}& f& {k}_{1}& {k}_{2}\end{array}\right]}^{T}$$
(24)

Considering the presence of p stars in an image, the measurement equation will be as follows:

$$o={\left(\begin{array}{c}{s}_{1}^{T}{s}_{2}\\ \vdots \\ {s}_{p-1}^{T}{s}_{p}\end{array}\right)}_{m\times 1}=F\left({x}_{0},{y}_{0},f,{k}_{1},{k}_{2}\right)+{n}^{\prime}={\left(\begin{array}{c}{F}_{12}\\ \vdots \\ {F}_{p-1,p}\end{array}\right)}_{m\times 1}+{\left(\begin{array}{c}{n}_{12}\\ \vdots \\ {n}_{p-1,p}\end{array}\right)}_{m\times 1}$$
(25)

According to the Eqs. (22) and (25), the UKF is designed as follows:

At first, by selecting the initial states (\({\widehat{p}}_{0}\left(+\right)\)) and the initial error covariance matrix (\({C}_{pp,0}\left(+\right)\)), 11 sigma points are obtained from the following equation:

$${\rho }_{i,k}=\left\{\begin{array}{ll}{\widehat{p}}_{k}(+)& i=0\\ {\widehat{p}}_{k}(+)+{\left(\sqrt{(5+\lambda ){C}_{pp,k}(+)}\right)}_{i}& i=1,\dots ,5\\ {\widehat{p}}_{k}(+)-{\left(\sqrt{(5+\lambda ){C}_{pp,k}(+)}\right)}_{i-5}& i=6,\dots ,10\end{array}\right.$$
(26)

where \({(\sqrt{(5+\lambda ){C}_{pp,k}(+)})}_{i}\) is the i-th column of the square root of the \((5+\lambda ){C}_{pp,k}(+)\) matrix. Then, the weight coefficients are obtained as follows:

$${w}_{i}^{(m)}=\left\{\begin{array}{ll}\frac{\lambda }{5+\lambda }& i=0\\ \frac{1}{2(5+\lambda )}& i=1,\dots ,10\end{array}\right.$$
(27)
$${w}_{i}^{(c)}=\left\{\begin{array}{ll}\frac{\lambda }{5+\lambda }+(1-{\alpha }^{2}+\beta )& i=0\\ \frac{1}{2(5+\lambda )}& i=1,\dots ,10\end{array}\right.$$
(28)
$$\lambda ={\alpha }^{2}(5+\kappa )-5$$
(29)

where \(\lambda\) is the scaling parameter, \(\alpha\) determines the spread of the sigma points, \(\beta\) incorporates prior knowledge of the state distribution, and \(\kappa\) is the secondary scaling parameter.
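A minimal sketch of the sigma-point and weight computation (Eqs. 26–29) for the five-dimensional parameter vector is given below. The Cholesky factor is used as the matrix square root, and the default values of α, β, and κ are common textbook choices, not values taken from this paper.

    import numpy as np

    N_STATE = 5                                  # p = [x0, y0, f, k1, k2]

    def ut_weights(alpha=1.0, beta=2.0, kappa=0.0, n=N_STATE):
        # Scaling parameter and weights, Eqs. (27)-(29).
        lam = alpha**2 * (n + kappa) - n
        w_m = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
        w_c = w_m.copy()
        w_m[0] = lam / (n + lam)
        w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
        return w_m, w_c, lam

    def sigma_points(p_hat, C_pp, lam, n=N_STATE):
        # 2n+1 sigma points around the current estimate, Eq. (26).
        S = np.linalg.cholesky((n + lam) * C_pp)         # matrix square root
        pts = [p_hat]
        pts += [p_hat + S[:, i] for i in range(n)]
        pts += [p_hat - S[:, i] for i in range(n)]
        return np.array(pts)                             # shape (11, 5)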

In the following, the sigma points are propagated based on Eq. (30):

$${\rho_{i,k + 1}} = {\rho_{i,k}}$$
(30)

The a priori state estimate and the corresponding error covariance matrix are as follows:

$${\widehat{p}}_{k+1}(-)=\sum_{i=0}^{10}{w}_{i}^{(m)}{\rho }_{i,k+1}$$
(31)
$${C}_{pp,k+1}(-)=\sum_{i=0}^{10}{w}_{i}{}^{(c)}({\rho }_{i,k+1}-{\widehat{p}}_{k+1}(-))({\rho }_{i,k+1}-{\widehat{p}}_{k+1}(-){)}^{T}$$
(32)

After that, the sigma points are propagated through the measurement equation:

$${o}_{i,k+1}=F({\rho }_{i,k+1})$$
(33)
$${\widehat{o}}_{k+1}(-)=\sum_{i=0}^{10}{w}_{i}{}^{(m)}{o}_{i,k+1}$$
(34)

The measurement covariance matrix and the cross-correlation matrix are as follows, where \({\Sigma }_{k+1}^{\prime}\) is the covariance of the measurement noise \({n}^{\prime}\):

$${C}_{oo,k+1}=\sum_{i=0}^{10}{w}_{i}^{(c)}({o}_{i,k+1}-{\widehat{o}}_{k+1}(-)){({o}_{i,k+1}-{\widehat{o}}_{k+1}(-))}^{T}+{\Sigma }_{k+1}^{\prime}$$
(35)
$${C}_{po,k+1}=\sum_{i=0}^{10}{w}_{i}{}^{(c)}({\rho }_{i,k+1}-{\widehat{p}}_{k+1}(-)){({o}_{i,k+1}-{\widehat{o}}_{k+1}(-))}^{T}$$
(36)

Finally, the gain of the Kalman filter, the posterior states, and the covariance matrix of the posterior state error will be as follows:

$${K}_{k+1}={C}_{po,k+1}{C}_{oo,k+1}{}^{-1}$$
(37)
$${\widehat{p}}_{k+1}(+)={\widehat{p}}_{k+1}(-)+{K}_{k+1}({o}_{k+1}-{\widehat{o}}_{k+1}(-))$$
(38)
$${C}_{pp,k+1}(+)={C}_{pp,k+1}(-)-{K}_{k+1}{C}_{oo,k+1}{K}_{k+1}^{T}$$
(39)

Equations (26) to (39) are repeated at each new image acquisition.
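Building on the sigma-point and weight helpers sketched above, the following fragment performs one UKF iteration (Eqs. 30–39). The measurement function F is assumed to map a candidate parameter vector to the stacked inner products of Eq. (25), for example by reusing the pairwise_inner_products helper sketched earlier, and R_meas stands for the measurement noise covariance Σ′; both are supplied by the caller, so this is a sketch rather than the authors' exact implementation.

    def ukf_update(p_hat, C_pp, meas, F, R_meas, w_m, w_c, lam):
        # One iteration of Eqs. (30)-(39); the process model (Eq. 30) is the identity.
        chi = sigma_points(p_hat, C_pp, lam)                                  # Eq. (26)
        p_pred = w_m @ chi                                                    # Eq. (31)
        dP = chi - p_pred
        C_pred = sum(wc * np.outer(d, d) for wc, d in zip(w_c, dP))           # Eq. (32)
        O = np.array([F(c) for c in chi])                                     # Eq. (33)
        o_pred = w_m @ O                                                      # Eq. (34)
        dO = O - o_pred
        C_oo = sum(wc * np.outer(d, d) for wc, d in zip(w_c, dO)) + R_meas    # Eq. (35)
        C_po = sum(wc * np.outer(dp, do) for wc, dp, do in zip(w_c, dP, dO))  # Eq. (36)
        K = C_po @ np.linalg.inv(C_oo)                                        # Eq. (37)
        p_new = p_pred + K @ (meas - o_pred)                                  # Eq. (38)
        C_new = C_pred - K @ C_oo @ K.T                                       # Eq. (39)
        return p_new, C_new

In use, this update would be called once per image, with the posterior estimate and covariance of one step becoming the prior of the next.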

The second stage of the algorithm (LKF)

This algorithm is used to extract misalignment error parameters. It is possible to obtain the sensor misalignment error matrix by using the sensor parameters obtained from the previous step, the satellite attitude matrix, and the attitude-dependent sensor model. We rewrite the attitude-dependent sensor model by ignoring the internal misalignment error as follows:

$${A}_{body}^{sensor}\times {A}_{inertial}^{body}\times \overrightarrow{s}=\overrightarrow{w}+\overrightarrow{n}$$
(40)

Next, we define the following variables:

$${A}_{body}^{sensor}=\left(\begin{array}{ccc}{r}_{11}& {r}_{12}& {r}_{13}\\ {r}_{21}& {r}_{22}& {r}_{23}\\ {r}_{31}& {r}_{32}& {r}_{33}\end{array}\right)$$
(41)
$$\overrightarrow{v}={A}_{inertial}^{body}\times \overrightarrow{s}$$
(42)

Therefore, the sensor model becomes as follows:

$${A}_{body}^{sensor}\times \overrightarrow{v}=\overrightarrow{w}+\overrightarrow{n}$$
(43)

To convert the sensor model to the desired linear model, we consider the following variables:

$$H=\left(\begin{array}{ccccccccc}v\left(1\right)& v\left(2\right)& v\left(3\right)& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& v\left(1\right)& v\left(2\right)& v\left(3\right)& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& v\left(1\right)& v\left(2\right)& v\left(3\right)\end{array}\right)$$
(44)
$$x=\left(\begin{array}{c}{r}_{11}\\ \vdots \\ {r}_{33}\end{array}\right)$$
(45)
$$y=\overrightarrow{w}$$
(46)

where x is a vector containing the elements of the misalignment error matrix. Therefore, the linear model is expressed as follows:

$$y = Hx + \vec n$$
(47)

If the elements of the misalignment error matrix are assumed constant, then the state and the measurement representation are expressed as follows:

$$\begin{gathered} x(k + 1) = x(k) \hfill \\ y(k) = H(k)x(k) + \vec n(k) \hfill \\ \end{gathered}$$
(48)

Due to (48), the LKF will be as follows:

$${K}_{k}={P}_{k}{H}_{k+1}^{T}{\left({H}_{k+1}{P}_{k}{H}_{k+1}^{T}+{\Sigma }_{k+1}\right)}^{-1}$$
(49)
$${\hat x_{k + 1}} = {\hat x_k} + {K_k}\left( {{y_{k + 1}} - {H_{k + 1}}{{\hat x}_k}} \right)$$
(50)
$${P_{k + 1}} = \left( {I - {K_k}{H_{k + 1}}} \right){P_k}$$
(51)

where \(\Sigma\) is the covariance of \(\vec n\). These steps are repeated.
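A compact sketch of this second stage is given below: for each observed star, the 3 × 9 matrix of Eq. (44) is built from the body-frame vector v of Eq. (42), and the recursion of Eqs. (49)–(51) updates the nine elements of the misalignment matrix. The noise standard deviation and the initial covariance are placeholders; a natural initial state is the identity matrix written as a 9-vector.

    import numpy as np

    def h_block(v):
        # 3 x 9 measurement matrix for one star, Eq. (44).
        H = np.zeros((3, 9))
        H[0, 0:3] = v
        H[1, 3:6] = v
        H[2, 6:9] = v
        return H

    def lkf_step(x_hat, P, v, w, sigma_n=1e-4):
        # One LKF iteration (Eqs. 49-51) for a single star observation.
        # v: catalog star vector rotated into the body frame, Eq. (42);
        # w: measured unit star vector in the sensor frame.
        H = h_block(v)
        Sigma = sigma_n**2 * np.eye(3)                       # covariance of the noise n
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Sigma)     # Eq. (49)
        x_new = x_hat + K @ (w - H @ x_hat)                  # Eq. (50)
        P_new = (np.eye(9) - K @ H) @ P                      # Eq. (51)
        return x_new, P_new

    # Initialization (illustrative): start from a perfectly aligned sensor.
    x_hat = np.eye(3).reshape(9)
    P = 1e-2 * np.eye(9)

After convergence, reshaping the state vector into a 3 × 3 matrix gives the estimated misalignment matrix.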

Results and discussion

Simulation conditions

The simulation in this section aims to examine the proposed procedure. In this regard, the body frame is considered to be aligned with the orbital frame, and the orbital characteristics of the satellite are given in Table 1. Also, the VST-68M star sensor is used for the simulation (Fig. 5). The specifications of this sensor are given in Table 2. Meanwhile, the misalignment error matrix is as follows:

Table 1 Orbital characteristics of the satellite

Fig. 5 VST-68M star sensor

Table 2 VST-68M star sensor specifications
$${A}_{body}^{sensor}=\left(\begin{array}{ccc}0.999695& 0.017449& -0.017452\\ -0.017145& 0.999700& 0.017449\\ 0.017751& -0.017145& 0.999695\end{array}\right)$$
(52)

With this information, an image of the stars on the focal plane can be generated at any time. Figure 6 shows a sample image created on the focal plane of the star sensor.

Fig. 6 A sample image created on the CMOS screen of the star sensor

Evaluation of UKF algorithm

Figures 7 and 8 are obtained by applying 0.2-pixel noise, running the UKF 100 times, and averaging the obtained plots. According to the figures, all parameters converge after 300 s. The values of these parameters after 300 s are listed in Table 3.

Fig. 7 Optical parameters obtained from the first stage of the algorithm

Fig. 8 Radial distortion coefficients obtained from the first stage of the algorithm

Table 3 Estimated parameters by first stage of the algorithm

It is concluded that the error in the x-coordinate of the principal point is 0.01 pixels, the error in the y-coordinate is 0.03 pixels, the error in the focal length is 0.001 mm, the error in the 2nd-order radial distortion coefficient is \(0.03\times {10}^{-8}\), and the error in the 4th-order radial distortion coefficient is \(0.004\times {10}^{-14}\). Figure 9 shows the mean square error of the sensor parameters: it reaches 0.1 pixel after 100 s and 0.0013 pixels after 500 s.

Fig. 9 Mean square error of the optical parameters obtained from the first stage of the algorithm

The residual caused by lens distortion is shown in Fig. 10. The maximum residual occurs at the corner points and is equal to 0.13 pixels. Averaging the values obtained from the plots over the 500-s interval, the average x-coordinate of the principal point is 499.81 pixels, the average y-coordinate is 529.68 pixels, the average focal length is 74.002 mm, the average 2nd-order distortion coefficient is \(1.95\times {10}^{-8}\), and the average 4th-order distortion coefficient is \(-9.64\times {10}^{-15}\).

Fig. 10 Residual caused by lens distortion

According to the values obtained from the first stage of the algorithm and considering the spot coordinate (432.56, 547.51), the correct star vector, the uncalibrated star vector, and the calibrated star vector in the sensor frame are given in Table 4. According to this table, the accuracy is improved.

Table 4 Correct star vector, uncalibrated star vector and calibrated star vector in sensor frame

Evaluation of the effect of noise on calibration performance

The effect of noise on the accuracy of the optical parameters is shown in Fig. 11. The algorithm is executed 100 times for each noise level, and the average of the obtained values is considered. As can be seen, increasing the noise increases the absolute error, as expected.

Fig. 11 Effect of noise on the accuracy of the optical parameters

Evaluation of LKF algorithm

Continuing the simulation, Fig. 12 is obtained by executing the second stage of the algorithm to estimate the misalignment error matrix. According to the figure, all parameters converge after 200 s. The values of these parameters after 200 s are as follows:

Fig. 12 Misalignment matrix error obtained from the second stage of the algorithm

$${A}_{body}^{sensor}=\left(\begin{array}{ccc}0.99969& 0.01735& -0.01742\\ -0.01698& 0.99970& 0.01751\\ 0.01772& -0.01721& 0.99969\end{array}\right)$$
(53)

According to (52) and (53), the average absolute error of the diagonal elements is \(3.3\times {10}^{-6}\). Also, the average absolute error over the elements of the misalignment error matrix is \(4.4\times {10}^{-5}\). These analyses show that the calibration performance is satisfactory. Moreover, the convergence time (< 200 s) is short for a space mission. Therefore, this design is proposed for onboard implementation in satellites.

The actual and estimated angles associated with the misalignment error matrix are given in Table 5. As can be seen, the average absolute error is equal to 0.0076 degrees, which indicates the high accuracy of this algorithm.

Table 5 The actual and estimated angles associated with the misalignment error matrix

Conclusions

An onboard calibration procedure for star sensors deployed on low Earth orbit satellites was proposed in this research. After a thorough investigation of all the error contributors, a precise model of the star sensor was presented. Then, a new scenario was introduced for generating realistic data from a star sensor mounted on a satellite while it orbits the Earth. These measurement data were fed into the unscented Kalman filter to estimate the focal length and principal point of the sensor. The estimated parameters were then fed into the linear Kalman filter, and the misalignment error was estimated. Eventually, the suggested procedure was simulated, and its effectiveness was evaluated. The simulation results confirmed the excellent accuracy and rapid convergence of the suggested algorithm.

Availability of data and materials

The authors do not have permissions to share data.

References

  1. Wertz JR, Everett DF, Puschell JJ (2011) Space mission engineering: the new SMAD

  2. Griffith DT, Singla P, Junkins JL (2002) Autonomous on-orbit calibration approaches for star tracker cameras. Adv Astronaut Sci 112:39–57

  3. Xing F, Dong Y, You Z (2006) Laboratory calibration of star tracker with brightness independent star identification strategy. Opt Eng 45(6):63604

  4. Wertz JR (2012) Spacecraft attitude determination and control, vol 73. Springer Science & Business Media

  5. Kolomenkin M, Pollak S, Shimshoni I, Lindenbaum M (2008) Geometric voting algorithm for star trackers. IEEE Trans Aerosp Electron Syst 44(2):441–456

  6. Spratling BB IV, Mortari D (2009) A survey on star identification algorithms. Algorithms 2(1):93–107

  7. Samaan MA, Griffith T, Singla P, Junkins JL (2001) Autonomous on-orbit calibration of star trackers. In: Core Technologies for Space Systems Conference (Communication and Navigation Session), pp 1–8

  8. Woodbury DP, Junkins JL (2009) Improving camera intrinsic parameter estimates for star tracker applications. In: AIAA Guidance, Navigation, and Control Conference

  9. Shen J, Zhang G, Wei X (2010) Star sensor on-orbit calibration using extended Kalman filter. In: 3rd International Symposium on Systems and Control in Aeronautics and Astronautics. IEEE, pp 958–962

  10. Liu H, Wang J, Tan J, Yang J, Jia H, Li X (2011) Autonomous on-orbit calibration of a star tracker camera. Opt Eng 50(2):23604

  11. Sun Y, Xiao Y, Geng Y (2013) On-orbit calibration of star sensor based on a new lens distortion model. In: Proceedings of the 32nd Chinese Control Conference. IEEE, pp 4989–4994

  12. Zhou F, Ye T, Chai X, Wang X, Chen L (2015) Novel autonomous on-orbit calibration method for star sensors. Opt Lasers Eng 67:135–144

  13. Medaglia E (2016) Autonomous on-orbit calibration of a star tracker. In: 2016 IEEE Metrology for Aerospace (MetroAeroSpace). IEEE, pp 456–461

  14. Zhang H, Niu Y, Lu J, Zhang C, Yang Y (2017) On-orbit calibration for star sensors without priori information. Opt Express 25(15):18393–18409

  15. Liang W, Zijun Z, Qian X, Liwei L (2019) Star sensor on-orbit calibration based on multiple calibration targets. In: 2019 14th IEEE International Conference on Electronic Measurement & Instruments (ICEMI). IEEE, pp 1402–1409

  16. Rahdan A, Bolandi H, Abedi M (2020) Design of on-board calibration methods for a digital sun sensor based on Levenberg–Marquardt algorithm and Kalman filters. Chin J Aeronaut 33(1):339–351

  17. Yang Z, Zhu X, Cai Z, Chen W, Yu J (2021) A real-time calibration method for the systematic errors of a star sensor and gyroscope units based on the payload multiplexed. Optik 225:165731

  18. Yunhai G, Shuang W, Binglong C (2012) Calibration for star tracker with lens distortion. In: 2012 IEEE International Conference on Mechatronics and Automation. IEEE, pp 681–686


Acknowledgements

This work was supported by the self-funded project of Xingtai Science and Technology Bureau: “Research on the Application of Big Data Technology in Car Rental APP” in 2021 (No. 2021ZC026).

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author information

Contributions

All authors contributed to the study conception and design. Data collection, simulation, and analysis were performed by “JZ, HD, and QZ.” The first draft of the manuscript was written by “JZ,” and all authors commented on previous versions of the manuscript. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Jing Zhang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.



Cite this article

Zhang, J., Dong, H. & Zhao, Q. A new procedure for onboard calibration of star sensors installed on low Earth orbit satellite. J. Eng. Appl. Sci. 71, 48 (2024). https://doi.org/10.1186/s44147-024-00385-y
