

KVT, 2016, Issue 185, pp.35-47

UDC 629.7.05


Simakov V.A., Gubarev V.F., Salnikov N.N., Melnichuk S.V.

Space Research Institute of the National Academy of Sciences of Ukraine and State Space Agency of Ukraine, Kyiv, Ukraine

Introduction. Automatic orbital berthing systems require continuous knowledge of the relative position and attitude of a target spacecraft. In the most general case the only source of this information is video filming. Extracting relative pose parameters from a video frame relies on special techniques that fall into two large groups: feature-based and model-based. The major difference between them lies in the data structure used to describe the target (individual points in the feature-based approach versus a rigorous visual model in the model-based one). This article studies the mathematical problem that arises in pose estimation for two orbital spacecraft when a wireframe model of the target is available and video filming is the only source of measurements.
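The article does not specify the exact form of the target's wireframe model; as a minimal illustration of the kind of data structure the model-based approach assumes, the sketch below represents a target as body-frame vertices plus index pairs for visible edges (the cube and all names here are hypothetical, not the authors' model):

```python
import numpy as np

# Minimal wireframe model: vertex coordinates in the target's body frame,
# plus index pairs naming which vertices are joined by an edge.
cube_vertices = np.array([[x, y, z]
                          for x in (-0.5, 0.5)
                          for y in (-0.5, 0.5)
                          for z in (-0.5, 0.5)])

# Corners of an axis-aligned cube share an edge exactly when they differ
# in a single coordinate.
cube_edges = [(i, j)
              for i in range(8) for j in range(i + 1, 8)
              if np.sum(cube_vertices[i] != cube_vertices[j]) == 1]
```

Such a model is rendered under a candidate pose and compared against the video frame, in contrast with the feature-based approach, which matches isolated image keypoints.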

The purpose of the article is to construct a model-based method that provides fast and accurate estimation of the relative position and attitude of the target spacecraft. We discuss possible drawbacks of direct procedures based on straightforward (pixel-wise) image fitting and propose a more refined algorithm that satisfies the formulated requirements.

Results. An algorithm composed of three independent parts (initialization, pose refinement and pose tracking) has been developed and tested on simple initial data. The initialization stage, responsible for rough estimation in the absence of preliminary information, gave relatively poor but sufficient accuracy for the purposes of an initial approximation. The pose refinement stage, implemented as an iterative procedure that exploits the closeness of neighboring frames, demonstrated almost perfect agreement with the actual values. Pose tracking (state estimation based on the equations of motion) proved redundant for our simple example, as it could not improve the result provided by pose refinement.
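The abstract describes pose refinement only at a high level (an iterative procedure that fits the model's projection to the observed frame). A minimal sketch of that general idea, assuming a pinhole camera and a 6-parameter pose (three rotation angles, three translation components), is Gauss-Newton iteration on the reprojection error with a numerical Jacobian; everything named below is an illustrative assumption, not the authors' actual implementation:

```python
import numpy as np

def rot(rx, ry, rz):
    # Rotation matrix from three axis angles (Z*Y*X composition).
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(pose, pts, f=800.0):
    # Pinhole projection of 3-D model points under pose (rx, ry, rz, tx, ty, tz).
    R, t = rot(*pose[:3]), pose[3:]
    cam = pts @ R.T + t
    return f * cam[:, :2] / cam[:, 2:3]

def refine_pose(pose0, pts, obs, iters=20, eps=1e-6):
    # Gauss-Newton on the reprojection residual; the Jacobian is taken
    # numerically by forward differences in each of the six parameters.
    pose = np.array(pose0, dtype=float)
    for _ in range(iters):
        r = (project(pose, pts) - obs).ravel()
        J = np.empty((r.size, 6))
        for k in range(6):
            d = np.zeros(6)
            d[k] = eps
            J[:, k] = ((project(pose + d, pts) - obs).ravel() - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose += step
    return pose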

Conclusions. The constructed algorithm has been tested in a simplified setting and demonstrated very high precision. More realistic conditions, including noise and occlusions, can corrupt the result, which would then need to be recovered; this requires introducing additional steps into the algorithm, as reflected in the text. A notable feature of the algorithm is its high modularity, which allows each stage to be implemented and configured independently according to the available resources and mission requirements.

Keywords: orbital rendezvous, pose estimation, orbital video filming, computer vision.




Received 10.06.16