Issue 2 (216), article 5

DOI:

Cybernetics and Computer Engineering, 2024, 2(216)

Shepetukha Yu.M., PhD (Engineering), Senior Researcher
Leading Researcher of the Intelligent Control Department
https://orcid.org/0000-0002-6256-5248
e-mail: yshep@meta.ua

Bondar S.O., PhD (Engineering),
Head of the Intelligent Control Department,
https://orcid.org/0000-0003-4140-7985
e-mail: seriybrm@gmail.com

Hubsky Ya.M., PhD Student,
https://orcid.org/0009-0009-4484-9544
e-mail: s.gubsky@gmail.com

Popov I.V., PhD Student,
Junior Researcher of the Intelligent Control Department
https://orcid.org/0009-0009-7961-9431
e-mail: popigor7@gmail.com

International Research and Training Center for Information
Technologies and Systems of the National Academy of Sciences
of Ukraine and the Ministry of Education and Science of Ukraine
40, Acad. Glushkov av., 03187, Kyiv, Ukraine

METHODS OF INTELLECTUALISATION OF SPATIAL SCENE MONITORING PROCESSES

Introduction. The development of intelligent technologies requires the active use of advanced tools and innovative approaches for the intellectualization of spatial scene monitoring. The topic is relevant because of the pressing need to improve the quality of video content production, and in particular because of the interest in automating and further intellectualizing shooting processes. New intellectualization methods reduce the errors permitted when creating a creative video project. Intellectualizing the processing of data from markers, namely applying artificial intelligence (AI) methods, makes it possible to obtain a controlled level of quality with minimal human involvement. Intellectualization of stage production contributes to exciting and innovative performances that captivate the audience: it creates new ways of interacting with viewers and gives them unique impressions of cultural events.

The purpose of the paper is to study methods for the intellectualization of processing data from markers when automatic video cameras are used to observe stage action for video and photo shooting.

The results. The paper considers the interaction of markers with cameras in three-dimensional space that exactly matches the constructed 3D model.

Conclusions. The information technology of spatial scene monitoring can increase efficiency and simplify the use of automatic video cameras in tasks of monitoring stage action for video and photo shooting. There is no single universal “best” method, as each algorithm has its own advantages and disadvantages; however, the optical flow gradient calculation method may be considered the most suitable for stage production.

Introducing information technology for spatial scene monitoring based on the optical flow gradient calculation method will improve efficiency and simplify the use of automatic video cameras. It will also reduce the burden on the personnel who maintain and manage the filming process.
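
As an illustration of the gradient-based optical flow family favored above, the following numpy sketch estimates the motion of a small image patch by the classic Lucas-Kanade least-squares scheme. It is a minimal baseline written for this summary, not the implementation used in the paper; the function name and window size are assumptions.

```python
import numpy as np

def lucas_kanade_patch(frame1, frame2, y, x, win=7):
    """Estimate the optical flow (u, v) of the patch centred at (y, x)
    between two grayscale frames using image gradients (Lucas-Kanade)."""
    Iy, Ix = np.gradient(frame1.astype(float))        # spatial gradients
    It = frame2.astype(float) - frame1.astype(float)  # temporal gradient
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    # Brightness constancy over the window: Ix*u + Iy*v = -It
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)      # least-squares (u, v)
    return flow

# Synthetic check: a bright square shifted one pixel to the right
f1 = np.zeros((40, 40)); f1[15:25, 15:25] = 1.0
f2 = np.zeros((40, 40)); f2[15:25, 16:26] = 1.0
u, v = lucas_kanade_patch(f1, f2, 20, 15)  # u close to 1, v close to 0
```

In a production pipeline this per-patch estimate would be computed densely or pyramidally; the sketch only shows where the gradients enter the computation.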

Keywords: intellectualization of data processing, intelligent monitoring, automatic video camera, animation, optical flow gradient, computer vision.


REFERENCES

  1. O.Ye. Volkov, Yu.M. Shepetukha, Yu.P. Bogachuk, M.M. Komar, D.O. Volosheniuk. Experience in development and implementation of intelligent systems for control of dynamic objects. Control Systems and Computers. 2022, Issue 1 (297), pp. 64-81.
  2. Stephen Hughes, Michael Lewis. Robotic camera control for remote exploration. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. April 2004, pp. 511–517.
  3. Lefevre F., Bombardier V., Charpentier P. Context-based camera selection from multiple video streams. Multimed Tools Appl. 2022, Vol. 81, pp. 2803–2826.
  4. E. Stoll, S. Breide, S. Göring, A. Raake. Automatic Camera Selection, Shot Size, and Video Editing in Theater Multi-Camera Recordings. IEEE Access. 2023, Vol. 11, pp. 96673-96692.
  5. N. Dagnes, K. Ben-Mansour, F. Marcolin, F. Marin, F.R. Sarhan, S. Dakpé, E. Vezzetti. What is the best set of markers for facial movements recognition? Annals of Physical and Rehabilitation Medicine. 2018.
  6. G. Mercado. Steadicam shot. The Filmmaker’s Eye. 2022, pp. 197-202.
  7. Griffith D.A. Spatial Filtering. In: Fischer, M., Getis, A. (eds) Handbook of Applied Spatial Analysis. Springer, Berlin, Heidelberg, 2010.
  8. Simsek E., Ozyer B. Selected Three Frame Difference Method for Moving Object Detection. International Journal of Intelligent Systems and Applications in Engineering. 2021, 9(2), pp. 48–54.

Received 06.03.2024

Issue 2 (216), article 4

DOI:

Cybernetics and Computer Engineering, 2024, 2(216)

Gladun A.Ya., PhD (Engineering),
Senior Researcher of the Department of Complex Research
of Information Technologies and Systems
https://orcid.org/0000-0002-4133-8169,
e-mail: glanat@yahoo.com

Khala K.O.,
Researcher of the Department of Complex Research
of Information Technologies and Systems
https://orcid.org/0000-0002-9477-970X,
e-mail: cecerongreat@ukr.net

International Research and Training Center
for Information Technologies and Systems of the National Academy of Sciences
of Ukraine and Ministry of Education and Science of Ukraine
40, Acad. Glushkov av., 03187, Kyiv, Ukraine

ONTOLOGY-ORIENTED MULTI-AGENT SYSTEM FOR DECENTRALIZED CONTROL OF UAV’S GROUP

Introduction. Today, UAVs are becoming an increasingly important tool for performing complex tasks in various fields of application, both civil (economic) and military, as they are particularly effective in dynamically uncertain environments with hard-to-reach areas. In addition, technological advances such as blockchain, artificial intelligence (AI), and machine learning have enabled the development of updated and improved UAV systems. To create and deploy a swarm of UAVs, coordinate their actions, manage them, and exchange data, a model of a multi-agent system (MAS) based on an ontological representation of knowledge is proposed. This model enables a swarm of UAVs to make decisions effectively in various situations while performing assigned tasks, which ensures the safety, reliability, and efficiency of the UAV group's tasks.

The purpose of the paper is to develop the theoretical and practical foundations for integrating a multi-agent system (MAS) based on the ontological representation of knowledge with a UAV network. This involves developing a MAS architecture and a hierarchical set of ontologies of different levels. The goal is to create a common data description language, define data semantics to ensure data uniqueness and consistency, support decision-making during UAV swarm management, and preserve swarm survivability in the event of aircraft failures or loss. It is also necessary to develop algorithms and a method for dividing a complex task into sub-tasks among all MAS agents, to ensure reliable exchange of messages (data) between agents during the joint performance of the assigned task, and to allow dynamic redistribution of roles between UAV agents as needed.

Methods. The research applied the general theory of intelligent information technologies; methods of agent theory, in particular intelligent BDI agents; methods of analyzing the performance of wireless data exchange networks; the theory of combinatorial optimization for dividing tasks into subtasks; methods of ontological analysis and descriptive logic to create an ontological hierarchical model of the subject area; and methods of enriching ontological models from external semantically marked information resources.

Results. The MAS architecture was proposed and its main functions were determined for the decentralized control of a swarm of UAVs. A set of agents with assigned roles was formed that jointly (cooperatively) perform tasks, exchanging messages and information with each other, which ensures the survivability of the system: if a device fails or is lost, its task is distributed among the other drones. Plans and scenarios of MAS actions for various situations, and means of coordinating actions between agents, have been developed for performing a mission by a swarm of UAVs. A hierarchical ontological model of the subject area related to the operation of the UAV swarm has been created. The algorithms and methods are based on the integration of semantic technologies that support the MAS during the execution of the UAV swarm mission, decision-making, assessment of the dynamic environment, and response to its changes.
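
The survivability mechanism described above (redistributing a lost drone's tasks among the remaining agents) can be sketched with a greedy cost-based allocator. This is a hypothetical baseline, not the negotiation-based MAS algorithm of the paper; the drone positions, `travel_cost` function, and load-balancing term are invented for illustration.

```python
import itertools

POS = {"uav1": 0.0, "uav2": 10.0, "uav3": 20.0}  # assumed 1-D drone positions

def travel_cost(agent, task):
    """Hypothetical cost: distance from the drone to the task location."""
    return abs(POS[agent] - task)

def assign_tasks(agents, tasks, cost):
    """Greedily hand the cheapest remaining (agent, task) pair its task;
    the len() term mildly balances load across agents."""
    assignment = {a: [] for a in agents}
    unassigned = list(tasks)
    while unassigned:
        agent, task = min(itertools.product(agents, unassigned),
                          key=lambda p: cost(*p) + len(assignment[p[0]]))
        assignment[agent].append(task)
        unassigned.remove(task)
    return assignment

def reassign_on_failure(assignment, failed, cost):
    """Survivability: give a lost drone's tasks to the surviving agents."""
    orphaned = assignment.pop(failed)
    extra = assign_tasks(list(assignment), orphaned, cost)
    for agent, extra_tasks in extra.items():
        assignment[agent].extend(extra_tasks)
    return assignment

plan = assign_tasks(list(POS), [1.0, 11.0, 21.0, 2.0], travel_cost)
plan = reassign_on_failure(plan, "uav3", travel_cost)  # uav3 is lost
```

In the literature cited by the paper, this centralized greedy step is replaced by distributed protocols such as the contract net or the consensus-based bundle algorithm, in which agents bid for tasks over the message exchange.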

Conclusions. An original approach, algorithms, and a method for improving the decentralized control of a group of UAVs are proposed. It is suggested to expand the functionality of the system for maintaining the interaction of a swarm of unmanned systems on the basis of MAS artificial intelligence. The system is based on ontological models that describe knowledge of the subject area, the processes of UAV swarm operation, scenarios of actions in difficult situations, the distribution of roles to agents, and the principles of planning and coordination. The proposed MAS is integrated with the UAV swarm software platform, which makes it possible to improve the efficiency of the decentralized control system and adapt UAVs to dynamic changes in the environment. The practical result of the work will be a prototype of a software agent system that interacts with ontologies while performing simple tasks. The economic significance of the work lies in its focus on creating new intelligent information technologies based on AI and knowledge of the subject area, which significantly increases the efficiency of modern systems.
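
As a rough illustration of how an ontological representation supports the distribution of roles, a toy triple store can be queried for the drones whose role permits a given task. The triples, predicates (`hasRole`, `canPerform`), and names below are assumptions, far simpler than the hierarchical OWL ontologies developed in the paper.

```python
# A toy set of subject-predicate-object triples (an assumed mini-ontology)
TRIPLES = {
    ("uav1", "hasRole", "Scout"),
    ("uav2", "hasRole", "Relay"),
    ("Scout", "subClassOf", "Agent"),
    ("Relay", "subClassOf", "Agent"),
    ("Scout", "canPerform", "AreaSurvey"),
}

def query(s=None, p=None, o=None):
    """Pattern matching over the triples; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in TRIPLES
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

def agents_for_task(task):
    """Follow canPerform back to roles, then hasRole back to drones."""
    roles = {ts for ts, _, _ in query(p="canPerform", o=task)}
    return {ts for ts, _, to in query(p="hasRole") if to in roles}
```

Here `agents_for_task("AreaSurvey")` yields the scout drone; in a full MAS the same inference is delegated to a description-logic reasoner such as Pellet over OWL ontologies.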

Keywords: multi-agent system, ontology, formalization of knowledge, UAV, drone, decentralized control, task allocation


REFERENCES

  1. M. Bacco, E. Ferro, and A. Gotta, Radio propagation models for UAVs: what is missing? Proc. of the 11th Int. Conf. on Mobile and Ubiquitous Systems: Computing, Networking and Services. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 2014, pp. 391-392.
  2. F. M. Bacco, E. Ferro, A. Gotta, UAVs in WSNs for agricultural applications: an analysis of the two-ray radio propagation model. IEEE SENSORS 2014 Proceedings. IEEE, 2014, pp. 130-133.
  3. G. Chmaj, H. Selvaraj, Distributed processing applications for UAV/drones: a survey. Progress in Systems Engineering. Springer International Publishing, 2015, pp. 449-454.
  4. S. D’Auria, M. Luglio, C. Roseti, R. Strollo, F. Zampognaro, Real Time Transmission of Cultural Heritage 3D Survey in Case of Emergency. 3rd International Conf. on Information and Communication Technologies for Disaster Management, Vienna, Austria. Dec. 2016.
  5. O. Volkov, Intelligent control, localization and mapping in geoinformation systems based on visual data analysis. Cybernetics and Computer Engineering. 2020, no 200, pp. 41–58. URL: http://kvt-journal.org.ua/1488/
  6. I. Bekmezci, O. K. Sahingoz, S. Temel, Flying ad-hoc networks (FANETs): A survey. Ad Hoc Networks. 2013, vol. 11, no. 3, pp. 1254-1270. doi: 10.1016/j.adhoc.2012.12.004.
  7. R. Carney, M. Chyba, C. Gray,  A. Trimble,  Multi-Agents Path Planning for a Swarm of Unmanned Aerial Vehicles. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium. IEEE, Waikoloa, HI, USA. Sept. 2020, pp. 6495-6498. 
  8. Y. Han, H. Wang, Z. Zhang, W. Wang, Boundary-aware vehicle tracking upon uav. Electron. Lett. 2020, Vol. 56, no. 17, P. 873–876. 
  9. H. Jiang, D. Shi, C. Xue, Y. Wang, G. Wang, Y. Zhang, Multi-agent deep reinforcement learning with type-based hierarchical group communication. Appl. Intell. 2021, 51, pp. 5793–5808.
  10. O. Volkov. Modern unmanned aerial vehicle control systems. Control systems and computers. 2017, no 6, pp. 84-88. URL:  http://usim.org.ua/arch/2017/6/11.pdf  
  11. G. Zhan, Z. Gong, Q. Lv, Z. Zhou, Z. Wang, Z. Yang, D. Zhou, Flight test of autonomous formation management for multiple fixed-wing uavs based on missile parallel method. Drones.  2022, 6 (5), 99. 
  12. Military Aerospace Electronics. URL:  https://www.militaryaerospace.com/magazine.
  13. KRATOS. Unmanned Systems Aerial Target & Unmanned Tactical Systems.  URL:  https://www.kratosdefense.com/about/divisions/unmanned-systems?r=kusd.
  14. A. Jevtic. Swarm intelligence: novel tools for optimization, feature extraction, and multi-agent system modeling. Doctoral thesis. Universidad Politecnica de Madrid, 2018, Spain.
  15. J. Zhang, G. Wang, Y. Song, Task assignment of the improved contract net protocol under a multi-agent system. Algorithms. 2019, 12(4), no. 70, pp. 1-13.  
  16. D. Xu, G. Chen, Autonomous and cooperative control of UAV cluster with multi-agent reinforcement learning. The Aeronautical Journal. 2022, Vol. 126, no. 1300, pp. 932-951.
  17. F. Zitouni, S. Harous, R. Maamri, A distributed approach to the multi-robot task allocation problem using the consensus-based bundle algorithm and ant colony system. IEEE Access, 2020, Vol. 8, pp. 27479-27494. 
  18. Y.I. Rudiakov, V. M. Tomashevsky. Swarm intelligence approach for simulation modeling of distributed power systems. Electronics and Control Systems. 2017, Kyiv, Vol. 1, no. 51, pp. 124-127. 
  19. A. Berezhny. Methods and information technology of automated flight route planning of unmanned aerial vehicles to increase the efficiency of searching for objects. PhD thesis. Kharkiv NUPS named after Ivan Kozhedub, Kharkiv, 2020. P. 192. [In Ukrainian]
  20. O. Pogudina, D. Krytskyi, A. Bykov, T. Plastun, M. Pyvovar, Methodology of formation of the intelligent component of the agent system of a swarm of unmanned aerial vehicles. Monograph. Nat. aerospace University named after M.E. Zhukovsky “Kharkiv Aviation Institute”. Kharkiv, Madrid Printing, 2021. P. 211. ISBN 978-617-7988-32-7. [In Ukrainian]
  21. A. Gladun, K. Khala, Multi-Agent Drone Network System for Critical Infrastructure Protection with Ontological Knowledge Representation. III Inter. scient.-pract. conf. Modern computer and information systems and technologies. Zaporizhia, 02-19 Dec. 2022, pp. 431-436. [In Ukrainian]
  22. Yongnan Jia, Siying Tian, Qing Li. Recent development of unmanned aerial vehicle swarms. Acta Aeronautica et Astronautica Sinica. 2020, Vol. 41, no. S1, pp. 4–14.
  23. M. Brambilla,  E. Ferrante,  M. Birattari, M. Dorigo. Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence. 2013, Vol. 7, pp. 1–41. 
  24. M. Yogeswaran, S. Ponnambalam. Swarm robotics: an extensive research review. Ed. I. Fuerstner, Advanced Knowledge Application in Practice (IntechOpen: London, 2010), P. 259.
  25. O. Volkov, M. Komar. Intellectualization of modern systems of automatic control of unmanned aerial vehicles. Cybernetics and Computer Engineering. 2018, no. 191, pp. 45–59. URL: http://dspace.nbuv.gov.ua/bitstream/handle/123456789/131937/03-Gritsenko.pdf?sequence=1. [In Ukrainian]
  26. A. Kolling. Human interaction with robot swarms: a survey. IEEE Transactions on Human–Machine Systems. 2016, Vol. 46, no. 1, pp. 9-26.
  27. M. Zgurovsky, D. Lande, A. Boldak. Linguistic Analysis of Internet Media and Social Network Data in the Problems of Social Transformation Assessment. Cybernetics and Systems Analysis. 2021, Vol. 57, Issue 2, pp. 228–237.
  28. A. Gladun, K. Khala, R. Martinez-Bejar, Development of Object’s Structured Information Field with Specific Properties for Its Semantic Model Building. CEUR Workshop Proceedings, 2021, Vol. 3241, pp. 102–111.
  29. A. Gladun, K. Khala, Using ontological models for formalized knowledge assessment. Computer facilities, networks and systems. 2019, No. 18, pp. 5-10. URL: http://nbuv.gov.ua/UJRN/Kzms_2019_18_3. [In Ukrainian]
  30. R. Burkhart, A Swarm Ontology for Complex Systems Modeling. In Proc. of Symposium on Complex Systems Engineering, Santa Monica, CA, USA, 11-12 Jan. 2007, pp. 1-4.
  31. David Martín-Lammerding, Dronetology. URL: https://www.dronetology.net/dronetology/index-en.html.
  32. Li Xin, Sonia Bilbao, Tamara Martín-Wanton, Joaquim Bastos, Jonathan Rodriguez. SWARMs Ontology: A Common Information Model for the Cooperation of Underwater Robots. J. Sensors (Basel, Switzerland). 2017, Vol. 17, no. 569, n. pag.
  33. E. Moraitou, K. Kotis, S. Angelis, A. Soularidis, Onto4Drone. URL: https://i-lab.aegean.gr/kotis/Ontologies/Onto4drone/index.html.
  34. A. Gladun, K. Khala, Ontology-based semantic similarity to metadata analysis in the information security domain. Probl. Program., 2021, no. 2, pp. 034–041.
  35. E. Sirin, B. Parsia, B.C. Grau, A. Kalyanpur, Y. Katz, Pellet: A practical OWL-DL reasoner. Web Semantics: Science, Services and Agents on the World Wide Web. 2007, Vol. 5, no. 2, pp. 51–53.
  36. T.R. Gruber. A translation approach to portable ontology specifications. Knowledge Acquisition. 1993, Vol. 5, no. 2, pp. 199–220.
  37. M. Hadzic, P. Wongthongtham, T. Dillon, E. Chang, Ontology-Based Multi-Agent Systems. Studies in Computational Intelligence, Springer, 2009, Vol. 219, P. 273.
  38. V.M. Catterson, E.M. Davidson, S.D.J. McArthur. Issues in Integrating Existing Multi-Agent Systems for Power Engineering Applications. Proc. of the 13th Inter. Conf. on Intelligent Systems Application to Power Systems, Arlington, VA, USA, 6–10 Nov. 2005, pp. 1-6.
  39. Foundation for Intelligent Physical Agents. FIPA Ontology Service Specification. URL: http://www.fipa.org/specs/fipa00086/XC00086C.html.
  40. Carney R., Chyba M., Gray C., Wilkens G., Shanbrom C., Multi-agent systems for quadcopters. Journal of Geometric Mechanics. 2022, Vol. 14(1), pp. 1-28.
  41. A. Borgida, Description logics in data management. IEEE Trans. Knowl. Data Eng. 1995, Vol. 7, no. 5, pp. 671–682.
  42. M. Wooldridge, N.R. Jennings, D. Kinny, The Gaia Methodology for Agent-Oriented Analysis and Design. Auton. Agents Multi Agent Syst. 2000, Vol. 3, pp. 285–312.
  43. M. Uslar, M. Specht, S. Rohjans, J. Trefke, J.M. González, The Common Information Model CIM: IEC 61968/61970 and 62325 – A Practical Introduction to the CIM. Springer: Berlin, Germany, 2012. P. 186.
  44. F. Bellifemine, G. Caire, D. Greenwood, Developing multi-agent systems with JADE. John Wiley & Sons, Ltd, 2007. P. 286.
  45. M.H. Dominguez, J.-I. Hernández-Vega, D.-G. Palomares-Gorham, C. Hernández-Santos, J.S. Cuevas, A BDI Agent System for the Collaboration of the Unmanned Aerial Vehicle. Res. Comput. Sci. 2016, Vol. 121, no. 1, pp. 113–124.
  46. A. Gladun, V. Hrytsenko, Yu. Zhuravlev, M. Nesen, A model of a multi-agent system for e-business and its software implementation technology. Programming problems. 2004, No. 2,3, pp. 510–519. [In Ukrainian]
  47. J. Kennedy, R. Eberhart, Particle swarm optimization. Proc. of ICNN’95 - International Conference on Neural Networks. 1995, Vol. 4, pp. 1942–1948.

Received 04.03.2024

Issue 2 (216), article 3

DOI:

Cybernetics and Computer Engineering, 2024, 2(216)

Volosheniuk D.O., PhD (Engineering), Senior Researcher
Head of the Research Laboratory of Unmanned Complexes and Systems
https://orcid.org/0000-0003-3793-7801,
e-mail: p-h-o-e-n-i-x@ukr.net

Tymchyshyn R.M., PhD Student,
Researcher of the Research Laboratory of Unmanned Complexes and Systems
https://orcid.org/0000-0002-4243-4240,
e-mail: romantymchyshyn.rt@gmail.com

International Research and Training Center for Information
Technologies and Systems of the National Academy of Sciences
of Ukraine and the Ministry of Education and Science of Ukraine
40, Acad. Glushkov av., 03187, Kyiv, Ukraine

EDGE DETECTION ALGORITHM FOR MONITORING OF TRANSPORT INFRASTRUCTURE

Introduction. Technologies for monitoring transport infrastructure have evolved rapidly in recent years, absorbing innovations and the latest developments. The main direction of their development has been the implementation of continuous monitoring and control of different aspects of transport infrastructure to ensure its safety and to allow efficient and timely management. Computer vision has played the main role in the evolution of these technologies and has made unmanned aerial vehicles (UAVs) the most cost-efficient remote monitoring tool.

Purpose. Among the main tasks in the field are monitoring traffic and the condition of road surfaces and markings. Fast and accurate monitoring systems enable quick responses and minimize negative consequences for citizens. Despite the active development of computer vision algorithms, there is no universal algorithm that suits all scenarios: algorithms depend on the task, the conditions, and even the UAV trajectory, and even a slight change in the visual scene can cause suboptimal results.

Lately, significant progress has been made in the development of edge detection algorithms. However, they do not consider the specifics of the task of monitoring road markings. The algorithm should consider the characteristics of the objects of interest – their geometric and color features, and the presence of many other objects in the images.

The goal of this paper is to present an algorithm crafted specifically for the task of monitoring transport infrastructure.

Methods. Computer vision, threshold filtering, Sobel operator, noise removal, probabilistic Hough transform, histograms.

Results. The main features of the task of monitoring transport infrastructure using visual data obtained from surveillance cameras or unmanned aerial vehicles have been analyzed. An algorithm for edge detection in images has been developed that addresses the shortcomings of known methods and is specifically adapted to the domain of transport infrastructure monitoring. The algorithm leverages the narrow specialization of the task to improve the obtained results. It is based on the features of the HSL color model, filtering in the saturation and lightness channels using gradients obtained from the Sobel operator, segment detection based on the probabilistic Hough transform, and a developed algorithm for boundary extraction of point clusters using histograms.
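
The Sobel gradient stage of this pipeline can be sketched in plain numpy. The sketch below is a generic Sobel edge extractor with threshold filtering (the kernel choice, helper names, and threshold value are assumptions); the full algorithm additionally filters the HSL saturation and lightness channels, detects segments with the probabilistic Hough transform, and extracts cluster boundaries with histograms.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Valid-mode 2D correlation of img with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_edges(gray, threshold=1.0):
    """Binary edge map: gradient magnitude above a fixed threshold."""
    gx = convolve2d(gray, SOBEL_X)
    gy = convolve2d(gray, SOBEL_Y)
    return np.hypot(gx, gy) > threshold

# A vertical bright stripe, roughly how a road marking appears from above
img = np.zeros((10, 10)); img[:, 4:6] = 1.0
edges = sobel_edges(img)  # True along both sides of the stripe
```

The resulting binary map is the kind of input the Hough stage consumes to turn edge pixels into line segments.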

Conclusion. The proposed algorithm can be used in automated and semi-automated decision-making systems, UAV design bureaus, UAV manufacturing enterprises, and information-analytical centers to develop unmanned aviation systems and aerial monitoring technologies to enhance human safety and the economic development of the state. The use of automatic remote monitoring data processing methods allows for faster acquisition of necessary results and improves the efficiency of using geospatial data.

Keywords: computer vision, object detection, edge detection, image filtering, transportation infrastructure, information technology, monitoring.


REFERENCES

1. Johan Schubert, Joel Brynielsson, Mattias Nilsson, Peter Svenmarck. Artificial Intelligence for Decision Support in Command and Control Systems. 2018.

2. Kozub A.M., Suvorova N.O., Chernjavskiy V.M. Analysis of the Means of Gathering Information for Geographic Information Systems. Sistemi ozbroênnâ ì vìjsʹkova tehnìka. 2011, No. 3, pp. 42-47. ISSN 1997-9568. [In Ukrainian]

3. Bhanu B. Automatic target recognition: State of the art survey. IEEE Trans. Aerosp. Electron. Syst. 1986, Vol. AES-22, pp. 364–379.

4. Danyk Yu.H., Protsenko M.M. Choice of color model for the digital processing of images in unmanned aircraft system. The Journal of Zhytomyr State Technological University. 2013, No. 2 (65). [In Ukrainian]

5. Sobel I. An Isotropic 3×3 Image Gradient Operator. Presentation at the Stanford A.I. Project, 1968 (published 2014).

6. N. Jamil, T. M. T. Sembok, Z. A. Bakar. Noise removal and enhancement of binary images using morphological operations. 2008 International Symposium on Information Technology, Kuala Lumpur, Malaysia, 2008, pp. 1-6, doi: 10.1109/ITSIM.2008.4631954.

7. N. Kiryati, Y. Eldar, A.M. Bruckstein. A probabilistic Hough transform. Pattern Recognition. 1991, Vol. 24, Issue 4, pp. 303-316, doi: 10.1016/0031-3203(91)90073-E.

8. J. Illingworth, J. Kittler. A survey of the Hough transform. Computer Vision, Graphics, and Image Processing. 1988, Vol. 44, Issue 1, pp. 87-116, doi: 10.1016/S0734-189X(88)80033-1.

9. Thomas Risse. Hough transform for line recognition: Complexity of evidence accumulation and cluster detection. Computer Vision, Graphics, and Image Processing. 1989, Vol. 46, Issue 3, pp. 327-345, doi: 10.1016/0734-189X(89)90036-4.

Received 29.04.2024

Issue 2 (216), article 2

DOI:

Cybernetics and Computer Engineering, 2024, 2(216)

Smirnov A.O., PhD Student,
https://orcid.org/0009-0002-6509-4135,
e-mail: tonysmn97@gmail.com

International Research and Training Center
for Information Technologies and Systems
of the National Academy of Sciences of Ukraine
and the Ministry of Education and Science of Ukraine.
40, Acad. Glushkov av., 03187, Kyiv, Ukraine

CAMERA POSE ESTIMATION USING A 3D GAUSSIAN SPLATTING RADIANCE FIELD

Introduction. Accurate camera pose estimation is crucial for many applications, ranging from robotics to virtual and augmented reality. The process of determining an agent's pose from a set of observations is called odometry. This work focuses on visual odometry, which uses only camera images as input data.

The purpose of the paper is to demonstrate an approach for small-scale camera pose estimation using 3D Gaussians as the environment representation. 

Methods. Given the rise of neural volumetric representations for environment reconstruction, this work relies on the Gaussian Splatting algorithm for a high-fidelity volumetric representation.

Results. Given a trained Gaussian Splatting model and a target image unseen during training, the camera pose is estimated using differentiable rendering and gradient-based optimization. Gradients with respect to the camera pose are computed directly from an image-space per-pixel loss function via backpropagation.

The choice of Gaussian Splatting as the representation is particularly appealing because it allows for end-to-end estimation and removes several stages that are common in more classical algorithms. In addition, differentiable rasterization as the image formation algorithm provides real-time performance, which facilitates its use in real-world applications.
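
The idea of backpropagating a per-pixel loss into the camera pose can be reduced to a toy example: a differentiable "renderer" that draws a single 2D Gaussian splat, with the splat position standing in for the camera pose. The renderer, loss, learning rate, and step count below are invented for illustration and are far simpler than the actual 3D Gaussian Splatting rasterizer; the pose gradient is written out analytically, playing the role of automatic differentiation.

```python
import numpy as np

SIGMA = 3.0  # fixed splat width (an assumption of this toy model)

def render(pose, size=32):
    """Differentiable toy renderer: one isotropic Gaussian at (cx, cy)."""
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - pose[0])**2 + (ys - pose[1])**2) / (2 * SIGMA**2))

def estimate_pose(target, init, lr=0.8, steps=300):
    """Gradient descent on the per-pixel L2 loss between the rendering
    and the target image, differentiating through the renderer."""
    pose = np.array(init, dtype=float)
    size = target.shape[0]
    ys, xs = np.mgrid[0:size, 0:size]
    for _ in range(steps):
        img = render(pose, size)
        resid = img - target                    # dL/d(pixel)
        d_cx = img * (xs - pose[0]) / SIGMA**2  # chain rule into the pose
        d_cy = img * (ys - pose[1]) / SIGMA**2
        grad = np.array([np.sum(resid * d_cx), np.sum(resid * d_cy)])
        pose -= lr * grad
    return pose

target = render((20.0, 12.0))                    # image "taken" at the true pose
pose = estimate_pose(target, init=(14.0, 18.0))  # recovers roughly (20, 12)
```

In the full method the renderer is the 3D Gaussian rasterizer, the pose lives in SE(3), and the gradients come from backpropagation rather than a hand-derived formula, but the optimization loop has the same shape.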

Conclusions. This end-to-end approach greatly simplifies camera pose estimation, avoids the compounding errors that are common in multi-stage algorithms, and provides high-quality camera pose estimates.

Keywords: radiance fields, scientific computing, odometry, SLAM, pose estimation, Gaussian splatting, differentiable rendering.


REFERENCES

1. Jeff Bezanson. Julia: A Fast Dynamic Language for Technical Computing, 2012, arXiv: 1209.5145 [cs.PL].

2. Bernhard Kerbl. 3D Gaussian Splatting for Real-Time Radiance Field Rendering, 2023, arXiv: 2308.04079 [cs.GR].

3. Kai Li Lim, Thomas Braunl. A Review of Visual Odometry Methods and Its Applications for Autonomous Driving, 2020, arXiv: 2009.09193 [cs.CV].

4. Tony Lindeberg. Scale Invariant Feature Transform. Scholarpedia, Vol. 7, May 2012, doi: 10.4249/scholarpedia.10491.

5. Ben Mildenhall. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, 2020, arXiv: 2003.08934 [cs.CV].

6. Thomas Muller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics 41.4 (July 2022), pp. 1–15. ISSN: 1557-7368. doi: 10.1145/3528223.3530127. URL: http://dx.doi.org/10.1145/3528223.3530127.

7. Raul Mur-Artal, J. M. M. Montiel, Juan D. Tardos. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics 31.5 (Oct. 2015), pp. 1147–1163. ISSN: 1941-0468. doi: 10.1109/tro.2015.2463671. URL: http://dx.doi.org/10.1109/TRO.2015.2463671.

8. Jim Nilsson, Tomas Akenine-Moller. Understanding SSIM, 2020, arXiv: 2006.13846 [eess.IV].

9. Shashi Poddar, Rahul Kottath, Vinod Karar. Evolution of Visual Odometry Techniques, 2018, arXiv: 1804.11142 [cs.CV].

10. Kerui Ren. Octree-GS: Towards Consistent Real-time Rendering with LOD-Structured 3D Gaussians, 2024, arXiv: 2403.17898 [cs.CV].

11. Johannes Lutz Schonberger, Jan-Michael Frahm. Structure-from-Motion Revisited. Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

12. Rich Sutton. The Bitter Lesson, 2019. URL: http://www.incompleteideas.net/IncIdeas/BitterLesson.html.

Received 29.03.2024

Issue 2 (216), article 1

DOI:

Cybernetics and Computer Engineering, 2024, 2(216)

Volkov O.Ye., PhD (Engineering), Senior Researcher,
Director
https://orcid.org/0000-0002-5418-6723,
e-mail: alexvolk@ukr.net

Dzhebrailov R.Yu., PhD Student,
Junior Researcher of the Research Laboratory of Unmanned Complexes and Systems,
https://orcid.org/0000-0002-4473-9670,
e-mail: rombik1197@gmail.com

International Research and Training Center
for Information Technologies and Systems
of the National Academy of Sciences of Ukraine
and the Ministry of Education and Science of Ukraine.
40, Acad. Glushkov av., 03187, Kyiv, Ukraine

TESTING THE METHOD OF TOPOGRAPHIC AFFINITY OF IMAGES ON IMAGES OF THE EARTH’S SURFACE

Introduction. The development of the method of topographic affinity of images made it necessary to test the method against the criteria of workability and efficiency.

The purpose of the paper is to test the method of determining the topographic affinity of images, which takes into account special zones detected on images of the natural landscape, for the autonomous navigation of UAVs.

Results. In testing on three tasks, the method demonstrated 100% effectiveness.

Conclusions. The method of topographic affinity of images can process, with high efficiency, a large number of complex and diverse images of the Earth’s surface that cannot be analyzed by other known methods. It can be used to build a system of autonomous UAV navigation, either separately or together with other methods.

Keywords: unmanned aerial vehicle, unmanned aircraft complex, autonomous navigation, special points, special areas, method of analysis of special areas of images.


REFERENCES

1. Irani G., Christ J. Image processing for Tomahawk scene matching. Johns Hopkins APL Technical Digest. 1994, No. 3, pp. 250–264.

2. Ronghao Li, Pengqi Gao, Xiangyuan Cai, Xiaotong Chen, Jiangnan Wei, Yinqian Cheng, Hongying Zhao. A Real-Time Incremental Video Mosaic Framework for UAV. Remote Sens. 2023, Vol. 15, pp. 250–264.

3. Autonomous navigation system for unmanned aerial vehicle based on topographic clustering of visual images: pat. UA 121833 C2 Ukraine: 2020.01. No. a 2019 05904; application filed 29.05.2019; published 27.07.2020, Bulletin No. 14. 18 p. [In Ukrainian]

4. Dzhebrailov R.Yu., Gospodarchuk O.Yu. Detection of Special Zones as a Basis for the Method of Topographic Affinity of Images. Cybernetics and Computer Engineering. 2024, No. 1, pp. 20–34. https://doi.org/10.15407/kvt215.01.020 [In Ukrainian]

Received 23.03.2024