1. Real-time gesture transmission with a robotic hand: embodied signals for non-verbal remote communication
Lea Pajnič, Matjaž Kljun, Anuradhi Maheshya W. Weerasinghe Arachchillage, Klen Čopič Pucihar, 2025, published scientific conference contribution
Description: This work explores how computer vision and robotics can support gesture-based embodied signals for expressing presence and emotion in remote communication. We present an initial proof of concept in which users interact through robotic hands placed on their desks: one user's hand gestures are captured in real time by a camera, transmitted over a network, and reproduced by a robotic hand at the remote location. The prototype uses the InMoov robotic hand and MediaPipe Hands for gesture tracking across varied lighting conditions, viewing angles, and backgrounds. Our preliminary tests show that gestures can be reliably recognised and consistently reproduced over a stable network connection. While still at an early stage, the project illustrates the potential of combining affordable robotics with computer vision to create accessible alternatives to voice communication and new forms of remote communication.
Keywords: robotic hand, gesture transmission, embodied signals, non-verbal communication, remote communication, computer vision
Published in RUP: 30.01.2026; Views: 186; Downloads: 2
Full text (642,14 KB)
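The record itself contains no code, but the capture-and-transmit pipeline described in the abstract can be illustrated with a minimal sketch, assuming a local webcam, the MediaPipe Hands solution, and a hypothetical UDP endpoint (REMOTE_HOST, REMOTE_PORT) where the robotic-hand controller listens; the servo mapping on the receiving side is omitted.

```python
import cv2
import json
import socket
import mediapipe as mp

REMOTE_HOST, REMOTE_PORT = "192.0.2.10", 9000  # hypothetical robotic-hand endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # Send the 21 normalised (x, y, z) landmarks; the receiver maps them to servo angles.
        payload = json.dumps([(p.x, p.y, p.z) for p in lm])
        sock.sendto(payload.encode(), (REMOTE_HOST, REMOTE_PORT))

cap.release()
hands.close()
```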
2. Enhanced precision in axle configuration inference for bridge weigh-in-motion systems using computer vision and deep learning
Domen Šoberl, Jan Kalin, Andrej Anžlin, Maja Kreslin, Klen Čopič Pucihar, Matjaž Kljun, Doron Hekič, Aleš Žnidarič, 2025, original scientific article
Description: Heavy goods vehicles (HGVs) have a significant impact on road and bridge infrastructure, with overloaded vehicles accelerating structural deterioration and increasing safety risks. Bridge weigh-in-motion (B-WIM) systems estimate gross vehicle weight (GVW) using strain measurements, but inaccuracies in axle configuration recognition can reduce reliability. This study presents a low-cost computer vision (CV) extension for existing B-WIM installations that verifies strain-inferred axle configurations using traffic camera images and flags GVW estimates as reliable or unreliable. Experiments on a data set of over 30,000 HGV records show that by combining convolutional neural networks with strain-based heuristics, GVW reliability can be improved from 96.7% to 99.89%, effectively excluding nearly all erroneous measurements. The approach operates without interrupting ongoing B-WIM operations and can be applied retrospectively to historical data. A limitation is the inability to detect raised axles (RAs), which the method excludes as unreliable. The approach provides a practical, high-precision enhancement for structural health monitoring of bridges.
Keywords: B-WIM, computer vision, deep learning
Published in RUP: 16.01.2026; Views: 156; Downloads: 4
Full text (2,01 MB); multiple files available.
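The abstract pairs a convolutional neural network on traffic-camera images with strain-based heuristics to flag unreliable GVW estimates. The sketch below illustrates only the agreement check, assuming a hypothetical pretrained Keras model (axle_count_model.h5) that predicts axle counts from an image crop; the actual B-WIM heuristics, class encoding, and thresholds are not published in this record.

```python
import numpy as np
import tensorflow as tf

# Hypothetical pretrained CNN that predicts the number of axles from a vehicle image crop.
model = tf.keras.models.load_model("axle_count_model.h5")

def verify_axle_configuration(image_crop: np.ndarray, strain_axle_count: int) -> bool:
    """Return True when the vision-based axle count agrees with the strain-inferred one.

    image_crop: HxWx3 uint8 traffic-camera crop of the HGV.
    strain_axle_count: axle count inferred by the existing B-WIM strain sensors.
    """
    x = tf.image.resize(image_crop, (224, 224)) / 255.0      # normalise to the model's input size
    probs = model.predict(x[tf.newaxis, ...], verbose=0)[0]  # class i = i + 1 axles (assumed encoding)
    cv_axle_count = int(np.argmax(probs)) + 1
    # Disagreement flags the GVW estimate as unreliable rather than correcting it.
    return cv_axle_count == strain_axle_count
```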
5. ImproVisAR: designing augmented reality piano roll for teaching improvisation
Jordan Aiko Deja, Sandi Štor, Ilonka Pucihar, Anuradhi Maheshya W. Weerasinghe Arachchillage, Rafael Marco Balbin, Klen Čopič Pucihar, Matjaž Kljun, 2025, original scientific article
Description: Improvisation is an important skill in learning a musical instrument, yet it remains a rarely taught topic in traditional piano education. To improvise effectively, learners must develop musical vocabulary, creative confidence, and comfort in performance. These demands make piano improvisation a complex teaching challenge where technology interventions may offer support. Prior short-term studies on augmented piano roll visualisations have shown promise for teaching sight-reading and motor coordination to novice students. However, how such approaches can support advanced learners in acquiring improvisational skills remains under-explored. To address this gap, we present ImproVisAR, an interactive piano training system that teaches improvisation through augmented piano roll visualisations. Concepts and tools derived from a co-design process with improvisation experts are integrated as structured learning modes. We validated the system through a four-day controlled study (n = 6) comparing an AR-based condition with a traditional sheet music condition, following a mixed-methods approach to data analysis. We collected and analysed subjective ratings of cognitive load, creativity support, and user experience, expert evaluation of performances, interaction logs, and qualitative insights from daily post-study interviews. Our findings show that participants experienced reduced cognitive load over time and sustained engagement across sessions, and that AR participants received higher expert-rated scores, particularly in rhythm, flow, musicality, and overall musical impression. Participants also reported greater immersion, freedom to create musical content, and motivation to continue playing. We discuss these findings in relation to user experience and creativity support, and offer design recommendations for AR systems that aim to teach complex, expressive skills such as musical improvisation.
Keywords: augmented reality, projections, piano, jazz, improvisation, training system
Published in RUP: 09.09.2025; Views: 540; Downloads: 8
Full text (2,68 MB); multiple files available.
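The ImproVisAR rendering pipeline is not described at code level in the abstract. As a rough illustration of what an augmented piano roll computes, the sketch below maps MIDI-style note events to falling-note rectangles, assuming a hypothetical projection geometry (key lane width, scroll speed) that is not taken from the paper.

```python
from dataclasses import dataclass

KEYBOARD_X0 = 40      # hypothetical x-position of the lowest rendered key lane (pixels)
KEY_WIDTH = 18        # hypothetical width of one key lane (pixels)
SCROLL_SPEED = 120.0  # pixels per second the roll scrolls towards the keys

@dataclass
class NoteEvent:
    pitch: int       # MIDI pitch number
    onset: float     # seconds from the start of the exercise
    duration: float  # seconds

def note_rectangle(note: NoteEvent, now: float) -> tuple[float, float, float, float]:
    """Map a note event to an (x, y, w, h) rectangle on the projected piano roll.

    y = 0 is the line of the physical keys; the note reaches it at its onset time.
    """
    x = KEYBOARD_X0 + (note.pitch - 21) * KEY_WIDTH  # 21 = lowest key (A0) on an 88-key piano
    y = (note.onset - now) * SCROLL_SPEED            # distance above the key line
    h = note.duration * SCROLL_SPEED
    return (x, y, KEY_WIDTH, h)

# Example: a C4 quarter note starting two seconds into the exercise, viewed at t = 1.5 s.
print(note_rectangle(NoteEvent(pitch=60, onset=2.0, duration=0.5), now=1.5))
```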
8. Gesture recognition on deformable objects using millimeter-wave radar
Nuwan Attygalle, Matjaž Kljun, Klen Čopič Pucihar, 2025, published scientific conference contribution
Description: Although deformable objects are not typically designed for digital interaction, they offer largely unexplored potential: any such object could be repurposed as a medium for controlling digital content. While existing approaches embed sensors into deformable objects to enable interaction, this limits the scalability and practicality of such systems. An alternative is to perform gesture recognition on deformable objects using a wrist-worn radar sensor. However, when analysing reflected radar signals it is difficult to separate reflections originating from the continuous deformation of the object's shape from those originating from the user's hand and fingers. Additionally, the continuous shape changes of deformable objects introduce changes in the radar cross-section, affecting signal variability. Furthermore, user ergonomics, such as variations in hand size, finger dexterity, and strength, are likely to influence the degree of object deformation during interaction. In this paper, we explore whether radar sensing can be used for robust gesture detection on deformable objects, focusing on how well a system generalises to previously unseen users and what can be done to improve such generalisability. In pursuit of this goal, we record a dataset of 4.3k labelled gestures with a Google Soli millimeter-wave radar sensor on a plush toy and demonstrate robust classification performance, achieving an accuracy of up to 90% on a five-gesture set. Furthermore, we investigate model generalisability and show that transfer learning improves recognition for previously unseen users, yielding performance gains of up to 20%. These findings highlight the potential of radar-based sensing for spontaneous and practical interaction with deformable objects.
Keywords: gesture recognition, deformable objects, millimeter-wave radar
Published in RUP: 23.06.2025; Views: 735; Downloads: 11
Full text (9,65 MB); multiple files available.
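The abstract mentions transfer learning to adapt the gesture classifier to previously unseen users. A minimal sketch of such an adaptation step in PyTorch is given below, assuming a hypothetical RadarGestureNet trained on stacks of range-Doppler frames; the architecture, input shape, and calibration loader are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class RadarGestureNet(nn.Module):
    """Hypothetical classifier over per-gesture stacks of range-Doppler frames."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def adapt_to_new_user(model: RadarGestureNet, calibration_loader, epochs: int = 5):
    """Fine-tune only the classification head on a few gestures from an unseen user."""
    for p in model.features.parameters():
        p.requires_grad = False                   # keep the pretrained feature extractor fixed
    opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for frames, labels in calibration_loader:  # small labelled set from the new user
            opt.zero_grad()
            loss = loss_fn(model(frames), labels)
            loss.backward()
            opt.step()
    return model
```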
9. Assessing medical training skills via eye and head movements
Kayhan Latifzadeh, Luis A. Leiva, Klen Čopič Pucihar, Matjaž Kljun, Iztok Devetak, Lili Steblovnik, 2025, published scientific conference contribution
Description: We examined eye and head movements to gain insights into skill development in clinical settings. A total of 24 practitioners participated in simulated baby delivery training sessions. We calculated key metrics, including pupillary response rate, fixation duration, and angular velocity. Our findings indicate that eye and head tracking can effectively differentiate between trained and untrained practitioners, particularly during labor tasks. For example, head-related features achieved an F1 score of 0.85 and an AUC of 0.86, whereas pupil-related features achieved an F1 score of 0.77 and an AUC of 0.85. The results lay the groundwork for computational models that support implicit skill assessment and training in clinical settings by using commodity eye-tracking glasses as a complementary device to more traditional evaluation methods such as subjective scores.
Keywords: eye movements, head movements, simulation training
Published in RUP: 23.06.2025; Views: 888; Downloads: 14
Full text (3,50 MB); multiple files available.
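The abstract reports F1 and AUC scores for classifiers built on eye- and head-movement features. The sketch below shows how such an evaluation could be set up with scikit-learn, assuming hypothetical per-session feature vectors and placeholder trained/untrained labels; the actual feature extraction and evaluation protocol are those of the paper, not of this code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per training session, with columns such as
# mean fixation duration, pupillary response rate, and head angular velocity.
rng = np.random.default_rng(0)
X = rng.normal(size=(24, 3))            # 24 practitioners, 3 example features (placeholder data)
y = np.array([1] * 12 + [0] * 12)       # 1 = trained, 0 = untrained (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
prob = clf.predict_proba(X_test)[:, 1]

print("F1 :", f1_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, prob))
```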