Cybernetics and Computer Engineering, 2021, 1(203)
Anisimov A.V., DSc (Phys & Math), Corresponding Member
of National Academy of Sciences of Ukraine,
Dean of the Faculty of Computer Science and Cybernetics
Bevza M.V., PhD student
Bobyl B.V., PhD student
Taras Shevchenko National University of Kyiv
60, Volodymyrska st., Kyiv, 01033, Ukraine
PREDICTION OF AUDIENCE REACTION TO TEXT-VISUAL CONTENT USING NEURAL NETWORKS
Introduction. Social networks create highly personalized experiences for their users, giving them the opportunity to follow pages of other users that publish content relevant and interesting to them. Content authors create visual and textual posts that later receive feedback from their followers in the form of likes, shares and comments.
The purpose of the paper is to build a system that predicts the audience's reaction to a post while accounting for the specifics of the page itself, its audience, the author, and the variety of possible reactions. We describe a training procedure that allows the neural network to be trained for each particular page and audience, improving the quality of the algorithm's predictions.
Results. We have created a system that processes both the visual and the textual parts of the content, giving the algorithm the full context of the publication it processes. Features of the textual and visual parts of the content were extracted with state-of-the-art neural networks, BERT and VGG-16 respectively.
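The fusion step described above can be sketched as follows. This is a minimal illustrative example, not the authors' exact architecture: it assumes a 768-dimensional BERT text embedding and a 4096-dimensional VGG-16 image embedding, concatenated and passed through a small dense head that predicts three reaction counts (likes, shares, comments). All layer sizes and weights here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard rectified-linear activation.
    return np.maximum(0.0, x)

def predict_reactions(text_emb, image_emb, W1, b1, W2, b2):
    """Fuse text and image features and predict three reaction counts."""
    fused = np.concatenate([text_emb, image_emb])   # shape (768 + 4096,)
    hidden = relu(W1 @ fused + b1)                  # shape (256,)
    return W2 @ hidden + b2                         # shape (3,): likes, shares, comments

# Randomly initialised weights stand in for a model trained per page/audience.
W1 = rng.normal(0, 0.01, (256, 768 + 4096))
b1 = np.zeros(256)
W2 = rng.normal(0, 0.01, (3, 256))
b2 = np.zeros(3)

text_emb = rng.normal(size=768)     # would come from BERT in the real system
image_emb = rng.normal(size=4096)   # would come from VGG-16 in the real system
pred = predict_reactions(text_emb, image_emb, W1, b1, W2, b2)
print(pred.shape)  # (3,)
```

In the described system the placeholder weights would be learned separately for each page, so the head adapts to that page's audience.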
Conclusions. The result of this work is a state-of-the-art algorithm that predicts audience reactions to each publication on the personal page of a social media user.
Keywords: artificial intelligence, natural language processing, computer vision, social networks.
1. De Fina A. Storytelling and audience reactions in social media. Language in Society. 2016. Vol. 45. pp. 473-498.
2. Gaspar R., Pedro C., Panagiotopoulos P., Seibt B. Beyond positive or negative: Qualitative sentiment analysis of social media reactions to unexpected stressful events. Computers in Human Behavior. 2016. Vol. 56. pp. 179-191.
3. Cliche M. BB_twtr at SemEval-2017 Task 4: Twitter sentiment analysis with CNNs and LSTMs. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). 2017. pp. 573-580.
4. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser L., Polosukhin I. Attention is all you need. Advances in Neural Information Processing Systems. 2017. pp. 6000-6010.
5. Devlin J., Chang M.-W., Lee K., Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
6. Simonyan K., Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv e-prints, 2014.
7. Russakovsky O. et al. ImageNet Large Scale Visual Recognition Challenge. arXiv e-prints, 2014.
8. He K., Zhang X., Ren S., Sun J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. IEEE International Conference on Computer Vision (ICCV). 2015. pp. 1026-1034.
9. Glorot X., Bengio Y. Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:249-256, 2010.
10. Bishop C. M. Neural networks and machine learning. Berlin: Springer, 1998. 353 p.
11. He K., Zhang X., Ren S., Sun J. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV: IEEE, 2016. pp. 770-778.