Motion Laboratory

Proposal of a Joystick-Type Input System that is Easy for People with Physical Disabilities to Operate

Some people with physical disabilities have difficulty pressing the buttons on a game controller and therefore cannot enjoy games such as VR titles. I developed a joystick-type input system that is easy for them to operate. The input device combines an existing VR controller with the joystick of an electric wheelchair, connected by a joint. Users can perform the operations that correspond to button presses simply by tilting this device. Using this system, I created a game in which the player avoids obstacles by making full use of jumps and bomb attacks. The system determines which of four postures the input device is in, tilted forward, backward, left, or right, and assigns the four button inputs A, B, Y, and X to these postures. With these four inputs, users operate contents intended for VR. To determine the posture, the system uses the rotation angle obtained from the existing VR controller in every frame. This system may expand the range of contents that people with physical disabilities can enjoy, in VR and beyond. In addition, able-bodied users can also operate the system by sitting in an electric wheelchair, so contents that people with and without physical disabilities can enjoy in the same way can be realized.
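As a rough illustration of the posture determination described above, the sketch below classifies the device tilt into the four directions and maps each to a button. It assumes that pitch and roll angles (in degrees) can be read from the VR controller every frame; the tilt threshold and the angle convention are assumptions, not the system's actual parameters.

```python
# Hedged sketch: classify the device tilt into one of four postures and map it
# to a button. Pitch/roll angles per frame are assumed inputs; the 20-degree
# threshold is an assumed value.
TILT_THRESHOLD = 20.0

BUTTON_FOR_POSTURE = {
    "forward": "A",
    "backward": "B",
    "left": "Y",
    "right": "X",
}

def classify_posture(pitch_deg: float, roll_deg: float) -> str | None:
    """Return the dominant tilt direction, or None if the stick is upright."""
    if abs(pitch_deg) < TILT_THRESHOLD and abs(roll_deg) < TILT_THRESHOLD:
        return None
    if abs(pitch_deg) >= abs(roll_deg):
        return "forward" if pitch_deg > 0 else "backward"
    return "left" if roll_deg > 0 else "right"

def button_for_frame(pitch_deg: float, roll_deg: float) -> str | None:
    posture = classify_posture(pitch_deg, roll_deg)
    return BUTTON_FOR_POSTURE.get(posture) if posture else None

# Example: a forward tilt of 30 degrees is treated as an "A" button press.
print(button_for_frame(30.0, 5.0))  # -> "A"
```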

Discrimination of Hand Posture and Synthesis into Human Motion for CG Representation of Ryukyu Dance

In this study, we reused accumulated motion data, in which the hand movements of the dance are represented only in a simplified form. We attempted to develop a system that discriminates hand postures acquired with LeapMotion and synthesizes them into human motion data in real time. A hand posture is identified as one of the four postures commonly used in Ryukyu dance, using as features the distances from the tips of the index and middle fingers to the center of the palm and the sum of the rotation angles of the second joints of the four fingers from the index finger to the little finger. While the human motion data of a Ryukyu dance is played back, the user inputs hand gestures with LeapMotion, and the hand posture output from the discrimination result is synthesized into the body motion. Twelve participants compared videos from before and after the hand postures were synthesized into the human motion and evaluated them in a questionnaire. The results confirmed the importance of synthesizing hand postures into human body motion and that this approach is suitable for reusing motion data.
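The following sketch illustrates the two features and a simple nearest-template classification into four postures. The palm and fingertip positions and the joint angles are assumed to come from LeapMotion; the template values, feature scaling, and posture keys are placeholders rather than the study's actual parameters.

```python
import numpy as np

# Hedged sketch of the two hand-posture features described above. Template
# values are placeholders, not measured data.
POSTURE_TEMPLATES = {
    "posture_1": np.array([0.9, 40.0]),
    "posture_2": np.array([0.5, 180.0]),
    "posture_3": np.array([0.9, 120.0]),
    "posture_4": np.array([0.3, 300.0]),
}

def extract_features(palm_center, index_tip, middle_tip, second_joint_angles):
    """Average fingertip-to-palm distance (index, middle) and the sum of the
    second-joint flexion angles for the index through little fingers."""
    d_index = np.linalg.norm(np.asarray(index_tip) - np.asarray(palm_center))
    d_middle = np.linalg.norm(np.asarray(middle_tip) - np.asarray(palm_center))
    angle_sum = float(sum(second_joint_angles))  # four angles, index..little
    return np.array([(d_index + d_middle) / 2.0, angle_sum])

def classify_posture(features, scale=np.array([1.0, 0.01])):
    """Pick the nearest template; `scale` balances the two feature ranges."""
    scaled = features * scale
    return min(POSTURE_TEMPLATES,
               key=lambda k: np.linalg.norm(scaled - POSTURE_TEMPLATES[k] * scale))
```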

Integrated Simulation System for Simple Visualization of Yosakoi Performance

When creating a new performance of the traditional Yosakoi dance, it is difficult to visualize the whole performance through imagination alone. The purpose of this study is to visualize an entire Yosakoi performance in a simple way. I developed a performance simulation system that integrates four elements of a performance, i.e., costumes, choreography, formation, and props. The system can easily simulate an entire Yosakoi performance in 3DCG simply by selecting the components of the performance. The user only needs to input an image of a pattern or illustration, which is then reflected in the costumes and props as 3D textures and thus visually expresses the theme of the performance. Choreography and formations commonly used in Yosakoi dance were prepared in advance. Motion data were used for the choreography, and the fluttering of the costume sleeves driven by the choreography was reproduced with a physics simulation. To evaluate the usefulness of this system, I conducted an experiment with nine experienced Yosakoi performers. Eight of the participants answered that they could easily simulate the entire performance, indicating that the system can be used to visualize a Yosakoi performance easily and thus support the production of such dance performances.
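As a rough illustration of how sleeve fluttering driven by the choreography could be reproduced with physics, the sketch below steps a chain of sleeve particles with Verlet integration, anchored at the wrist position taken from the motion data. This is only an assumption-laden stand-in; the actual system presumably relies on a game engine's cloth physics, and all constants here are made up.

```python
import numpy as np

# Hedged sketch of a cloth-like physics step for a costume sleeve.
GRAVITY = np.array([0.0, -9.8, 0.0])
DT = 1.0 / 60.0
REST_LENGTH = 0.05  # spacing between sleeve particles (m, assumed)

def step_sleeve(points, prev_points, wrist_pos):
    """One Verlet step for a chain of sleeve particles anchored at the wrist."""
    new_points = 2.0 * points - prev_points + GRAVITY * DT ** 2
    new_points[0] = wrist_pos  # the sleeve follows the choreography's wrist
    # Relax distance constraints so neighboring particles keep their spacing.
    for _ in range(5):
        for i in range(len(new_points) - 1):
            delta = new_points[i + 1] - new_points[i]
            dist = np.linalg.norm(delta)
            if dist > 1e-9:
                correction = (dist - REST_LENGTH) / dist * delta
                if i == 0:
                    new_points[i + 1] -= correction
                else:
                    new_points[i] += 0.5 * correction
                    new_points[i + 1] -= 0.5 * correction
    return new_points, points  # caller keeps `points` as the new prev_points
```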

3D Virtual Fitting System for Supporting a Museum Exhibit of the Sarira Casket

For conventional museum exhibits, it is difficult to obtain detailed information about cultural properties by just looking at the actual objects. In this study, I developed a 3D virtual fitting system for the costumes of dancers and the musical instruments depicted on the Sarira Casket to promote understanding and arouse interest in cultural properties. This system uses Azure Kinect to recognize a user's motion without any contact and changes the costume of the CG avatar. Users can change the mask, the jacket, and the pants of the CG avatar by swiping their hand to the side. When one of the four correct costume combinations is selected, a description of the costume is displayed. Users can also display the CG objects of instruments and tools to see their size and learn how to hold them. This entertaining and educational system uses intuitive gestures and randomly changes costume combinations when it is reset. It was exhibited at the Ryukoku Museum in Kyoto, and an evaluation experiment was conducted with 51 visitors. Based on questionnaire results, over 80% of them felt that this system deepened their understanding of the dancer costumes, instruments, and tools. These results suggest its usefulness as an exhibition support system for museums.
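The following sketch shows one way the side-swipe gesture could be detected from body-tracking data: the horizontal displacement of the hand over a short time window is compared with a threshold. The per-frame hand position is assumed to come from Azure Kinect body tracking; the window length and swipe distance are assumed values, not the system's actual ones.

```python
from collections import deque

# Hedged sketch of side-swipe detection from per-frame hand x-positions (m).
WINDOW_SECONDS = 0.5
FRAME_RATE = 30
SWIPE_DISTANCE_M = 0.35

class SwipeDetector:
    def __init__(self):
        self.history = deque(maxlen=int(WINDOW_SECONDS * FRAME_RATE))

    def update(self, hand_x_m: float) -> str | None:
        """Feed one frame; return 'left' or 'right' when a swipe is detected."""
        self.history.append(hand_x_m)
        if len(self.history) < self.history.maxlen:
            return None
        displacement = self.history[-1] - self.history[0]
        if abs(displacement) >= SWIPE_DISTANCE_M:
            self.history.clear()  # avoid re-triggering on the same motion
            return "right" if displacement > 0 else "left"
        return None
```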

Creation of AR contents linked to the actual exhibit of a casket and its operation in a museum

In museum exhibits of cultural properties, it is difficult to fully understand the displayed objects even by reading the explanatory text. Therefore, in this study, I created augmented reality (AR) contents linked to the actual exhibit of a casket and deployed them in a museum to enhance understanding of the historical container and support its exhibition. When a tablet computer is held over the images of celestial beings or dancers drawn on the container, which serve as markers, 3DCG models of the musical instruments and dancers are superimposed on them. In addition, through the graphical user interface (GUI), the user can play animation and sound and zoom in on, zoom out from, and rotate the 3DCG. Operating AR contents in a museum imposes certain constraints on distance, viewing time, and lighting conditions, so I studied display and operation methods such as dividing the display arrangement, implementing intuitive and unified operation methods, and applying contrast enhancement to increase the recognition rate of the marker images. The AR contents created in this research were linked to the actual exhibit of the casket and used by visitors at the Ryukoku Museum. From the questionnaire results, more than 90% of the visitors answered that the AR technology was effective in exhibiting the casket, confirming the usefulness of the AR contents produced in this study.
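A contrast-enhancement step like the following could be applied to the marker images to improve recognition under museum lighting. The specific method shown here (CLAHE on the lightness channel via OpenCV) is my assumption, since the study only states that contrast enhancement was used.

```python
import cv2

# Hedged sketch of contrast enhancement for a marker image; CLAHE on the
# L channel is an assumed choice, not necessarily the study's method.
def enhance_marker_contrast(image_path: str, output_path: str) -> None:
    bgr = cv2.imread(image_path)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)  # equalize contrast locally on the lightness channel
    enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite(output_path, enhanced)

# Example (hypothetical file names):
# enhance_marker_contrast("dancer_marker.jpg", "dancer_marker_clahe.jpg")
```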

Proposal for CG contents to support learning of rugby tackling form

In this study, I created CG contents using motion capture for the purpose of learning correct tackling form in rugby. Four types of rugby tackling motions were acquired with an optical motion capture system. CG animations of ideal tackles and of dangerous tackles that cannot be safely filmed were created by editing keyframes of the acquired motions. From the acquired motion data, the power of the tackle and the speed of the wrist during the arm-wrapping motion were calculated as feature values. Because the power of the tackle cannot be obtained from the motion data alone, it was pseudo-calculated using the equation of motion. The feature values were visualized with gauges, and the differences between the acquired motions could be compared by watching the CG animations of the tackling motions simultaneously. To evaluate the usefulness of this content, I conducted an evaluation experiment with eight people with rugby experience. About 80% of the subjects gave positive answers, confirming that this content can be used to learn correct tackling form.
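The sketch below illustrates how the two feature values could be derived from the motion data: wrist speed from frame-to-frame differences and a pseudo tackle power from the equation of motion F = m·a. The frame rate, body mass, and the choice of tracked points are assumed values for illustration only.

```python
import numpy as np

# Hedged sketch of the two feature values described above.
FRAME_RATE = 120.0   # optical mocap frame rate (assumed)
BODY_MASS_KG = 80.0  # tackler's mass (assumed)

def speeds(positions: np.ndarray) -> np.ndarray:
    """Speed per frame (m/s) from an (N, 3) array of 3D positions."""
    velocity = np.diff(positions, axis=0) * FRAME_RATE
    return np.linalg.norm(velocity, axis=1)

def peak_wrist_speed(wrist_positions: np.ndarray) -> float:
    """Peak wrist speed during the arm-wrapping motion."""
    return float(speeds(wrist_positions).max())

def pseudo_tackle_force(torso_positions: np.ndarray) -> float:
    """Peak |F| = m * |a|, with acceleration from second differences."""
    velocity = np.diff(torso_positions, axis=0) * FRAME_RATE
    acceleration = np.diff(velocity, axis=0) * FRAME_RATE
    return float(BODY_MASS_KG * np.linalg.norm(acceleration, axis=1).max())
```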

Continuous Dog Training Experience by Gesture Recognition Using Kinect

In this research, I developed a system that allows users to experience dog training continuously, as a way to give them a greater sense of responsibility for owning a dog, improve their understanding of dog behavior, and instruct them in training methods. In this system, Kinect recognizes six types of gestures used in actual dog training and displays the corresponding 3DCG dog animation, allowing the user to train the dog in real time. A gesture made by the user is judged as "normal" or "ideal" depending on the conditions met, and the dog's animation changes accordingly. To reproduce the process by which a dog becomes trained, the success rate of training varies with the number of consecutive days the user has used the system. A total of 12 students experienced this system and evaluated its continuity and overall performance. Eleven of these respondents reported that they could understand the process by which a dog becomes trained and the importance of continuous training, confirming the usefulness of this system.
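A minimal sketch of how the success rate could vary with consecutive days of use is shown below. The linear ramp, its constants, and the bonus for an "ideal" gesture are all assumptions, since the abstract does not specify the actual rule.

```python
import random

# Hedged sketch: training success rate grows with consecutive days of use,
# mimicking a dog gradually becoming trained. Constants are assumed.
BASE_SUCCESS_RATE = 0.3
DAILY_INCREASE = 0.1
IDEAL_BONUS = 0.15  # an "ideal" gesture is assumed to succeed more often

def success_rate(consecutive_days: int, gesture_quality: str) -> float:
    rate = min(1.0, BASE_SUCCESS_RATE + DAILY_INCREASE * (consecutive_days - 1))
    if gesture_quality == "ideal":
        rate = min(1.0, rate + IDEAL_BONUS)
    return rate

def training_succeeds(consecutive_days: int, gesture_quality: str) -> bool:
    """Decide whether the CG dog obeys this command."""
    return random.random() < success_rate(consecutive_days, gesture_quality)
```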

System for Visualizing Muscle Activity Using Body-Motion Input and Muscle Analysis Data

The aim of this study is to display muscle analysis data from body-motion input in real time. We developed a system that visualizes muscle activity using body-motion input and modeled muscle analysis data. To create an approximate model, we used the 16 muscles that occupy about 70% of the volume of the lower body and created, for each muscle, an approximate equation of its activity as a function of the knee flexion angle. In this prototype, a squatting motion was used for modeling and visualizing muscle activity. We divided the squatting motion into four phases and created an approximate equation of each muscle's activity for each phase. The locations of the muscles and the amount of their activity were visualized by coloring the corresponding parts of a CG human model. The knee flexion angle was calculated from the angles of two trackers attached to the right leg. Using the calculated values and the approximate models, we presented the muscles used and the amount of their activity in CG in real time. We experimentally evaluated our prototype system with nine students, and about 90% of them could recognize their muscle activity from the visualization.
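The sketch below illustrates the two computational steps described above: estimating the knee flexion angle from the orientations of two leg trackers and evaluating a phase-specific approximate equation of muscle activity. The phase boundaries (approximated here by angle ranges), the example muscle, and the polynomial coefficients are placeholders, not the fitted models from the study.

```python
import numpy as np

# Hedged sketch of the knee-angle estimate and the per-phase activity model.
PHASE_BOUNDARIES_DEG = [0.0, 30.0, 60.0, 90.0]  # assumed split into 4 phases

# activity(angle) ~= a*angle^2 + b*angle + c, one (a, b, c) per phase (placeholders)
EXAMPLE_MUSCLE_MODEL = [
    (0.0, 0.010, 0.05),
    (0.0, 0.012, 0.02),
    (0.0, 0.015, -0.05),
    (0.0, 0.018, -0.15),
]

def knee_flexion_angle(thigh_dir: np.ndarray, shank_dir: np.ndarray) -> float:
    """Angle (degrees) between thigh and shank direction vectors, each derived
    from one tracker's orientation."""
    cos_a = np.dot(thigh_dir, shank_dir) / (
        np.linalg.norm(thigh_dir) * np.linalg.norm(shank_dir))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

def muscle_activity(angle_deg: float, model=EXAMPLE_MUSCLE_MODEL) -> float:
    """Evaluate the phase-specific approximate equation for one muscle."""
    phase = int(np.searchsorted(PHASE_BOUNDARIES_DEG, angle_deg, side="right")) - 1
    phase = min(max(phase, 0), len(model) - 1)
    a, b, c = model[phase]
    return a * angle_deg ** 2 + b * angle_deg + c
```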
