Motion Laboratory

Effect of Fake Heart Sounds on Anxiety While Experiencing High-Altitude VR Content

In recent years, VR technology has attracted attention as a new method for treating acrophobia. In my research, I aimed to verify the effect of fake heart sounds on anxiety in the treatment of acrophobia, exploiting the placebo effect in which people feel as if their own heart is beating audibly when they hear fake heart sounds. I created a high-altitude VR experience using Unity and conducted an experiment in which fake heart sounds that gradually became faster were presented during the experience. The subjects wore a head-mounted display (HMD) and performed the task of walking to the edge of a board protruding from a skyscraper in a virtual space to pick up a box. The subjects were divided into two groups that performed the task in different orders under two experimental conditions: presence and absence of fake heart sounds. I evaluated how the subjects' anxiety changed using a questionnaire and pulse measurement. In an experiment with six subjects, it was found that fear and anxiety were higher after experiencing the heart-sound condition than after the no-heart-sound condition, and an increase in pulse rate was also confirmed. A limited placebo effect was thus confirmed by both the questionnaire and the pulse measurements.

OpenPose Analysis of Shooting Form for Free Throws in Basketball

In basketball, the free throw is an important scoring opportunity that occurs frequently during a game. This study aims to reveal the differences between success and failure in free throws by comparing the shooting forms of successful and failed free throws by experienced players. The posture during the free-throw motion was analyzed using OpenPose. The free-throw movements of two people who have been playing basketball for over five years were recorded on video. The frames at the timings of stance and release were manually extracted from the video, and joint points were obtained from each extracted image with OpenPose. Joint angles were calculated using the dot product of vectors. The angles of the shoulders, elbows, knees, and ankles in successful and failed shots were compared. Consequently, it was found that the shot success rate was higher when the difference in the upper-body angles was smaller than that in the lower-body angles. It was also found that the difference between success and failure was greater in the elbow angle than in the shoulder angle. Taken together, these results imply that the elbow angle has the greatest influence on a free-throw shot.
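
As an illustration of the angle computation, the following is a minimal sketch of calculating a joint angle from three OpenPose keypoints using the dot product; the keypoint coordinates and function name are illustrative, not part of the study's code.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, via the dot product."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Hypothetical shoulder/elbow/wrist keypoints in image pixels
shoulder, elbow, wrist = (320, 180), (350, 260), (390, 320)
print(joint_angle(shoulder, elbow, wrist))  # elbow angle in degrees
```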

Abstraction and Visualization of Movement to Support Understanding of Physical Movements of a CG Character

In this study, I propose a method for abstracting human body movements and a method for visualizing kinematic elements using 3DCG, with the aim of supporting dance creation and content production for creators who deal with human body movements. The positions of the center of gravity and the base of support of the human body, as well as the characteristics of the movement, were visualized in real time. The center of gravity and the base of support are kinematic elements, and the characteristics of the movement are abstracted using a spline. A composite center-of-gravity position was calculated from the centers of gravity of the individual body segments and their mass ratios so that it follows changes in posture. The base of support must cover the outer periphery of the surfaces in contact with the ground, so it was generated by placing markers on the CG character to capture those surfaces and then applying a convex-hull scan algorithm to the marker coordinates. The characteristics of the movement were expressed as a curve generated by placing spline control points at regular intervals along the coordinates of an arbitrarily chosen joint of the CG character. This was done to express the movement abstractly while omitting its details. To verify the validity of the visualization method, the composite center of gravity was compared with the position of the center of gravity in a standing posture, and the base of support was compared between the proposed method and a simple method that calculates it from the joint coordinates of the feet. The comparison results show that the proposed method could visualize the center of gravity and the base of support more accurately than the simple method.
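
The following sketch illustrates the two kinematic computations described above, assuming segment centers of gravity, mass ratios, and ground-contact marker positions are already available; the values and the use of SciPy's ConvexHull are illustrative assumptions, not the implementation used in the study.

```python
import numpy as np
from scipy.spatial import ConvexHull  # convex hull of ground-contact markers

# Hypothetical segment centers of gravity (x, y, z) and mass ratios (sum to 1)
segment_cog = np.array([[0.0, 1.0, 0.0],    # trunk
                        [0.1, 1.4, 0.0],    # head
                        [0.2, 0.6, 0.0],    # right leg
                        [-0.2, 0.6, 0.0]])  # left leg
mass_ratio = np.array([0.5, 0.1, 0.2, 0.2])

# Composite center of gravity: mass-weighted average of segment centers
composite_cog = (segment_cog * mass_ratio[:, None]).sum(axis=0)

# Base of support: convex hull of marker positions on the ground plane (x, z)
contact_markers = np.array([[0.15, 0.0], [0.25, 0.25], [-0.15, 0.0],
                            [-0.25, 0.25], [0.2, -0.05], [-0.2, -0.05]])
hull = ConvexHull(contact_markers)
base_polygon = contact_markers[hull.vertices]  # outer boundary of the support area

print(composite_cog, base_polygon)
```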

OpenPose Analysis of Body Angle and Throwing Distance in Soccer Long Throw Motion

In recent years, more and more soccer teams have adopted the long throw as a new attacking strategy. In this research, with the goal of helping beginners learn how to perform long throws, I compared throwing motions from soccer videos to identify the techniques best suited to throwing far. Long-throw videos were taken of five people, from beginners to experienced players, with each person making five throws. OpenPose was used to obtain each joint position in every frame of the videos, from the start of the run-up to the end of the throw. Based on the skeletal coordinates obtained for each joint, I calculated the angles at the elbows, shoulders, hips, and knees, and then compared the differences in angles between subjects. I also graphed the changes in the angles frame by frame and compared these changes between subjects. As a result, it was found that to throw far, it is important to throw quickly after swinging the arm widely and bending the elbow deeply, while taking care not to lose speed in the run-up to the throwing phase.
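
As a rough sketch of the per-frame angle analysis, the following reads OpenPose's per-frame JSON output (BODY_25 keypoint indices) and plots the right-elbow angle over time; the file path and the choice of joint are illustrative.

```python
import glob
import json
import numpy as np
import matplotlib.pyplot as plt

def angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c."""
    v1, v2 = np.subtract(a, b), np.subtract(c, b)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

elbow_angles = []
# OpenPose writes one JSON file per frame; BODY_25 indices:
# 2 = right shoulder, 3 = right elbow, 4 = right wrist. Path is illustrative.
for path in sorted(glob.glob("output_json/throw_*_keypoints.json")):
    with open(path) as f:
        kp = json.load(f)["people"][0]["pose_keypoints_2d"]
    pts = np.array(kp).reshape(-1, 3)[:, :2]  # drop the confidence column
    elbow_angles.append(angle(pts[2], pts[3], pts[4]))

plt.plot(elbow_angles)
plt.xlabel("frame")
plt.ylabel("right elbow angle (deg)")
plt.show()
```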

Interface to Edit Time and Space Parameters for Creating Dance Movements Using VR Controllers

In this study, I developed a system that converts movements input through an HMD and controllers into full-body movements based on patterns, to support the creation of dance movements. To increase the variety of dance movements the user can create, I proposed an interface for editing the time and space parameters included in the patterns. By editing these parameters with the VR controllers in VR space, the user can create movements with various degrees of conversion. To input space parameters such as distance and amplitude intuitively, two interfaces for inputting length were developed based on measuring the distance between the controllers. To input time parameters such as period and speed easily, their values are increased by tilting the controller stick to the right and decreased by tilting it to the left. To verify the necessity and operability of editing the time and space parameters for creating dance movements, four dancers were asked to use the system. As a result, the necessity of changing the time and space parameters was rated highly, and the operability was good.
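
The parameter-editing logic might be sketched as follows; the actual system runs in a VR engine, so the update rate, dead zone, and function names here are illustrative assumptions.

```python
def update_time_parameter(value, stick_x, dt, rate=0.5, deadzone=0.1):
    """Tilt the controller stick right to increase, left to decrease the value."""
    if abs(stick_x) > deadzone:
        value += rate * stick_x * dt
    return max(value, 0.0)

def space_parameter_from_controllers(left_pos, right_pos):
    """Use the distance between the two controllers as a length input."""
    return sum((l - r) ** 2 for l, r in zip(left_pos, right_pos)) ** 0.5

speed = 1.0
speed = update_time_parameter(speed, stick_x=0.8, dt=1 / 90)  # one 90 Hz frame
amplitude = space_parameter_from_controllers((0.2, 1.1, 0.30), (0.6, 1.2, 0.35))
print(speed, amplitude)
```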

  • Takuto Hirakida, Asako Soga, VR Interface for Creating and Editing Dance Movements with Time and Space Parameters, Proc. of International Workshop on Advanced Image Technology (IWAIT) 2024 (Langkawi, Malaysia), Jan. 2024
Investigation into the Effect of Delayed Vision in VR Space on the Sense of Body Ownership for Tracking Inputs around the Mouth

With the spread of VR devices, many studies have examined the sense of body ownership. In recent years, it has become possible to track facial movements and detect facial expressions. The purpose of this study is to investigate the effect of delayed vision on the sense of body ownership, focusing on the movements around the mouth. I developed a delay-processing system in which the movements around the character's mouth are delayed relative to the corresponding real movements. A psychological experiment was conducted to investigate how much delay in the tracked mouth movements can be introduced without affecting the sense of body ownership. The subjects operated a first-person-perspective character using an HMD and a facial tracker. Under six different delay conditions, the subjects answered a seven-point semantic differential (SD) questionnaire on the occurrence of body ownership. As a result, it was clarified that a sense of body ownership could be obtained with a delay of up to 300 ms, while this sense disappeared when the delay exceeded 400 ms.
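
A delay of the tracked mouth parameters can be sketched with a simple ring buffer, as below; the frame rate, parameter format, and class name are illustrative assumptions rather than the system's implementation.

```python
from collections import deque

class DelayedMouthTracking:
    """Replays tracked mouth parameters after a fixed delay (ring buffer).
    The 60 fps update rate and the parameter format are illustrative."""
    def __init__(self, delay_ms, frame_rate=60):
        delay_frames = max(1, round(delay_ms / 1000 * frame_rate))
        self.buffer = deque(maxlen=delay_frames + 1)

    def push(self, mouth_params):
        self.buffer.append(mouth_params)
        return self.buffer[0]  # oldest stored frame = delayed output

delay = DelayedMouthTracking(delay_ms=300)
for t in range(30):
    tracked = {"jaw_open": t / 29}  # hypothetical tracked blend-shape value
    applied = delay.push(tracked)   # what the character's face actually shows
print(applied)
```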

Proposal of 3D Puzzle System for Understanding the Structure of Objects in VR Space

In this study, I developed a 3D puzzle system in which furniture parts are fitted into the silhouette of a model, to support a user's understanding of the structure of objects in VR space. The user can move furniture parts by picking and grabbing gestures with one hand, which is displayed using hand tracking. To have users learn the structure through the experience of assembling the puzzle, two assistive functions for completing the puzzle were implemented. One is a hint function that indicates in text whether the distances and angles between the parts and the silhouettes are close. The other is a function that corrects the position and angle of a part to those of the silhouette when the part is released at a position and angle close to the silhouette's. Proximity to the correct answer was determined by whether the distance and angle were below threshold values. Seven subjects were asked to use the system and complete a questionnaire. All of them reported a better understanding of the structure, and 70% said the system increased their motivation to understand the structure more fully.
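
The proximity check and snapping correction might look like the following sketch; the thresholds and the yaw-only angle comparison are illustrative simplifications of the full 3D case.

```python
import math

POS_THRESHOLD = 0.05    # metres (illustrative value)
ANGLE_THRESHOLD = 15.0  # degrees (illustrative value)

def try_snap(part_pos, part_yaw, target_pos, target_yaw):
    """Snap a released part to the silhouette's pose if it is close enough."""
    dist = math.dist(part_pos, target_pos)
    angle_diff = abs((part_yaw - target_yaw + 180) % 360 - 180)
    if dist < POS_THRESHOLD and angle_diff < ANGLE_THRESHOLD:
        return target_pos, target_yaw  # snap to the silhouette's position/angle
    return part_pos, part_yaw          # leave the part where it was released

print(try_snap((0.02, 0.0, 0.01), 10.0, (0.0, 0.0, 0.0), 0.0))
```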

VR System to Support the Evaluation and Mastery of Badminton Smash

In recent years, the widespread adoption of VR technology, coupled with decreasing development costs, has made VR content more accessible. In this study, I developed a VR system aimed at enhancing the motivation of beginner badminton players by focusing on improving their smashes. Users wear a head-mounted display (HMD) and use a controller as a racket in the VR space to hit the shuttlecock. The system evaluates each swing based on five criteria, displaying the score and providing advice on the HMD screen. The evaluation criteria include the distance between the collision point and the center of the racket surface, as well as the position and angle of the racket, with each criterion assessed at three levels. Weighted ratios are applied in evaluating the distance between the collision point and the center of the racket surface. An additional score is awarded for swing speed if the racket's speed exceeds a predefined threshold at the moment of collision with the shuttlecock. Points are also added for hitting the virtual shuttlecock with the target spot of the racket. To gauge the system's effectiveness, ten beginners used the VR system and responded to a questionnaire. The results indicate that all of the participants became more aware of the racket's surface center and the hitting point. Moreover, 80% of the subjects reported increased motivation to practice badminton through this VR-based approach.
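
A simplified sketch of the per-swing scoring idea is shown below, covering only some of the five criteria; the level boundaries, weights, and speed threshold are illustrative values, not those of the system.

```python
def score_swing(hit_offset, racket_pos_error, racket_angle_error, swing_speed):
    """Combine three-level ratings with weighted ratios, plus a speed bonus."""
    def level(value, good, ok):
        return 3 if value <= good else 2 if value <= ok else 1

    offset_score = level(hit_offset, 0.03, 0.08)         # m from racket centre
    pos_score = level(racket_pos_error, 0.10, 0.25)      # m
    angle_score = level(racket_angle_error, 10.0, 25.0)  # degrees
    total = 0.5 * offset_score + 0.25 * pos_score + 0.25 * angle_score

    if swing_speed > 8.0:  # m/s bonus threshold at the moment of impact
        total += 1.0
    return total

print(score_swing(hit_offset=0.02, racket_pos_error=0.12,
                  racket_angle_error=8.0, swing_speed=9.5))
```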

Visualization of a Statue of Prince Shotoku and an Experiential VR Theater for Enhanced Understanding

The creation of Buddha statues involves a unique technique that blends the artisan's delicate skills and artistry, but presenting these aspects has proved challenging. Thus, a VR theater for experiencing a statue of Prince Shotoku was developed to aid in understanding the structure of Buddha statuary. The target of this study was the "Standing Prince Shotoku in His Youth" statue at Kosho-ji in Hiroshima, and both the exterior and interior of the statue were digitized. The exterior of the statue was photographed from multiple angles in 143 images and converted into a 3D model using photogrammetry. The interior was modeled manually, using imagery from a fiberscope as a reference to replicate the carving marks and the structure of the statue's inlaid crystal eyes. This model was then used to develop a VR theater system. Using an HMD and hand tracking, the system lets the user control the flow of the experience through folded-hand gestures detected by collision detection. To interactively convey the feel of the carving method, the VR space includes a feature that lets participants work a chisel on a board. The system was exhibited for two days at Ryukoku Museum, alongside the actual statue of Prince Shotoku. The results of 90 visitor surveys confirmed that the participants, most of them elderly, could operate the system without difficulty, demonstrating its effectiveness as a support system for understanding an exhibit.

  • Wenze Song, Asako Soga, Experiential VR System for Visualization and Enhanced Understanding of a Statue of Prince Shotoku, Proc. of the 28th Annual Conference of the Virtual Reality Society of Japan, 2C1-01, pp.1-4, Sep. 2023
Analysis of Human Body Features for Machine Learning and Application for Content Using Dance Motion Archive

The purpose of this research is to generate dance movements using motion archives and machine learning to support the creation of dance, and to incorporate the output in museum exhibits. In this study, posture discrimination and motion prediction for Ryukyu dances were investigated as basic research toward generating dance motion with machine learning. To verify the possibility of classifying dance movements using characteristic postures as learning data, the postures of male and female Ryukyu dances were discriminated. A total of 100 characteristic posture frames, 50 each for the male and female dances, were extracted from the motion data and used for learning. The female-dance data came from Ryukyu dance works consisting of 48 basic movements, and the male-dance data included 7 basic movements. To confirm the discrimination accuracy, an expert was asked to discriminate between male and female dances from CG character images of 90 postures, 45 of which had been judged closer to male dance and 45 closer to female dance by a real-time pose-judgment system. The agreement rate between the system's results and the expert's discrimination was 64.4%. The results could possibly be improved by matching the numbers of basic movements used to excerpt the postures and by increasing the amount of learning data. To verify the feasibility of predicting dance movements by applying machine learning to time-series data, motion data of periodic "suriashi," a basic movement of Japanese dance, and 60-second motion data of part of a Ryukyu dance work were used for learning. The results show that periodic movements could be predicted, but complex movements, such as those used in a Ryukyu dance, were difficult to predict with the learning model used in this study. As a content application using a dance motion archive, motion data were introduced into the "AR Sarira Casket" as CG animation of dance and displayed in the permanent exhibition at Ryukoku Museum. An analysis of visitors' viewing behavior suggests the effectiveness of the exhibit in stimulating interest in cultural properties.
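
The posture-discrimination step could be sketched as follows with a generic classifier; the feature representation, the SVM choice, and the placeholder data are illustrative assumptions, since the study's actual learning method is not detailed here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 100 characteristic-posture frames, each reduced to a feature vector
# (placeholder random features standing in for, e.g., 20 joint angles)
X = rng.normal(size=(100, 20))
y = np.array([0] * 50 + [1] * 50)  # 0 = female dance, 1 = male dance

clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```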

A Real-time Visualization System for Body Motion Using Muscle-Analysis Data and a VR Device

The purpose of this study is to clarify, in real time, the muscle activity corresponding to an input body motion. To this end, I developed a system for real-time visualization of pseudo-muscle activity using muscle-analysis data and VR devices. The user controls a CG avatar in VR space by inputting body motions with the VR device. In this study, I constructed three types of models to output muscle activity corresponding to the posture of the CG avatar: a simplified model based on kinematic theory, predictive models estimated from muscle-analysis data, and learning models based on machine learning of muscle-analysis data. With this system, the user checks the locations of active muscles on the CG avatar in VR space. In addition, the system can present different muscle activities depending on the condition of the upper extremity in response to elbow flexion-extension movements. I implemented a visualization method that expresses the magnitude of muscle activity by color shading at the locations of the CG avatar's active muscles. I also implemented a method that represents muscle expansion by controlling the scale of the CG avatar's body parts. A learning model was built from measured motion data of flexion-extension movements with different hand orientations, upper-extremity conditions, and flexion speeds. Because creating a learning model requires a large amount of data, I focused on squat movements and verified the validity of muscle-force estimation using processed motion data. The results show that valid muscle-force estimation is possible when the degree of processing is small. Experiments were conducted with 30 subjects to assess the effectiveness of the system's concept and to explore its visualization methods. Based on the subjects' feedback, the system was highly evaluated as a means of understanding and improving body motion, confirming its usefulness.
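
The color-shading and scaling visualization described above might be sketched as follows; the color ramp and maximum expansion are illustrative choices rather than the system's exact mapping.

```python
def activity_to_color(activity):
    """Map estimated muscle activity (0..1) to an RGB shading colour.
    Low activity -> pale, high activity -> saturated red (illustrative ramp)."""
    a = min(max(activity, 0.0), 1.0)
    return (255, int(220 * (1 - a)), int(220 * (1 - a)))

def activity_to_scale(activity, max_expansion=0.15):
    """Expand the muscle's mesh region by up to 15% at full activity."""
    return 1.0 + max_expansion * min(max(activity, 0.0), 1.0)

for a in (0.1, 0.5, 0.9):
    print(a, activity_to_color(a), activity_to_scale(a))
```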

  • Taichi Yano, Asako Soga, Kunihiko Oda, A Real-time Visualization System of Muscle Activity in Movements Using VR Device, Proc. of International Workshop on Advanced Image Technology (IWAIT) 2024 (Langkawi, Malaysia), Jan. 2024