3D Human Motion Capture and Context-Aware Emotion Recognition Technologies, Augmented Reality Body and Face Filters, and Affective Computing and Image Processing Algorithms for Idealized Appearance and Imagery
Raluca-Ștefania Balica*

ABSTRACT. The purpose of this paper is to explore 3D human motion capture and context-aware emotion recognition technologies, augmented reality body and face filters, and affective computing and image processing algorithms for idealized appearance and imagery. In this research, previous findings were cumulated, showing that real-time beauty touch-ups and AI blemish remover tools apply virtual makeup effects and styles and smooth skin for face retouching and reshaping. Digital twin mapping and immersive technologies, shared knowledge and action mechanisms, and neuromorphic and bio-inspired computing systems can be harnessed for context-aware virtual garment modeling, 3D human skeleton motion analysis and tracking, and autonomous simulated virtual agent intentionality. The evidence map visualization tools, machine learning classifiers, and reference management software harnessed include Abstrackr, CADIMA, the R package and Shiny app citationchaser, EPPI-Reviewer, MMAT, and SWIFT-Active Screener. The case studies cover Fotor’s artificial intelligence beauty filter apps and virtual makeup try-on tools, BodyTune, the BIGVU beauty face filter app, YouCam Makeup’s artificial intelligence face editing tools and augmented reality beauty filters, and Perfect365 Video.
Keywords: 3D human motion capture; context-aware emotion recognition; augmented reality; affective computing; image processing; idealized appearance and imagery
How to cite: Balica, R.-Ș. (2025). “3D Human Motion Capture and Context-Aware Emotion Recognition Technologies, Augmented Reality Body and Face Filters, and Affective Computing and Image Processing Algorithms for Idealized Appearance and Imagery,” Journal of Research in Gender Studies 15(1): 25–32. doi: 10.22381/JRGS15120252.
Received 12 February 2025 • Received in revised form 17 July 2025
Accepted 20 July 2025 • Available online 27 July 2025
