signing avatars & 3d landscapes

ML2 aims to advance the design and development of quality signed content through motion capture technology. Using 3D digital tools (Oculus VR, Kinect, LEAP Motion, Unity, and more), we are creating methods for building immersive learning experiences.

We seek to explore and map the digital frontier.

Working with BL2 (Petitto, PI), we are using motion capture to develop original patterned ASL texts and wordplay for integration into a Robot-Avatar-Thermal Enhanced Learning Tool, which will provide infants with early exposure to visual language.

[Image: Izumi Sketch_4__June 3.jpg]

motion capture

Our mocap system allows us to capture signing as 3D data, giving us new tools for creating avatars, for detecting the rhythmic/temporal patterning of natural languages, and an entirely new way to document and preserve sign language. Mocap also offers a way to separate a signed text from its author: the captured movement can be performed by an anonymous avatar.
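To make the rhythm-detection idea concrete, here is a minimal sketch of how temporal patterning could be estimated from a captured trajectory. It assumes mocap data exported as a NumPy array of wrist positions; the file name, marker choice, and 120 fps rate are illustrative assumptions, not our actual pipeline.

```python
# a minimal sketch of rhythm analysis on mocap data; the file format,
# marker name, and 120 fps rate are assumptions, not the lab's pipeline
import numpy as np
from scipy.signal import find_peaks

FPS = 120  # assumed capture rate

def signing_rhythm(wrist_xyz: np.ndarray, fps: int = FPS):
    """Estimate rhythmic/temporal patterning from a (T, 3) wrist trajectory.

    Returns the times (in seconds) of movement peaks and the intervals
    between them, a rough proxy for the rhythm of a signed text.
    """
    # frame-to-frame velocity of the wrist marker
    velocity = np.diff(wrist_xyz, axis=0) * fps          # (T-1, 3) units/s
    speed = np.linalg.norm(velocity, axis=1)             # (T-1,)

    # movement peaks = moments of maximal articulator motion
    peaks, _ = find_peaks(speed, height=speed.mean(), distance=fps // 10)
    peak_times = peaks / fps
    intervals = np.diff(peak_times)                      # inter-peak timing
    return peak_times, intervals

# usage: load a captured trajectory and inspect its timing structure
wrist = np.load("session01_wrist.npy")  # hypothetical export from a mocap rig
times, gaps = signing_rhythm(wrist)
print(f"{len(times)} movement peaks, mean interval {gaps.mean():.3f}s")
```

The inter-peak intervals give a simple, signer-independent timing signature that can be compared across recordings.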

[Image: outside land.png]

avatar creation

We are creating avatars of two kinds: virtual humans and characters/creatures. Virtual humans are realistic-looking models, while characters/creatures welcome artistic exploration. The image above is a basic mesh onto which our lab applies "photo-mapped" skin to create a 3D face.
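One common way to realize this kind of photo-mapping is to generate UV texture coordinates by projecting the mesh onto the plane of a frontal photograph. The sketch below shows that projection step under stated assumptions; the mesh file and array layout are placeholders, not our actual toolchain.

```python
# a minimal sketch of the "photo-mapping" idea: line a head mesh up with a
# frontal photo by generating planar UV coordinates; file names are
# hypothetical placeholders
import numpy as np

def planar_uvs(vertices: np.ndarray) -> np.ndarray:
    """Map (N, 3) mesh vertices to (N, 2) UV coords via a frontal
    (X/Y plane) projection, normalized into [0, 1] texture space."""
    xy = vertices[:, :2]                      # drop depth (Z) for a front view
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    return (xy - lo) / (hi - lo)              # normalize to the photo's extent

# usage: the UVs line each vertex up with a pixel in the portrait photo,
# so a renderer (Unity, Blender, etc.) can drape the photo over the mesh
verts = np.load("base_head_mesh_vertices.npy")   # hypothetical (N, 3) array
uvs = planar_uvs(verts)
np.save("base_head_mesh_uvs.npy", uvs)
```

A flat projection like this distorts toward the sides of the head, which is why production pipelines typically blend several photos or unwrap the mesh properly; the sketch only illustrates the core mapping.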

[Image: SILVIA_books633.jpg]

3d landscaping

With LEAP Motion technology, we are building a pilot project in which a handshape makes an object move. (For instance, to make a ball bounce, you form a claw handshape and move your hand in a bouncing motion.)
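A minimal sketch of that interaction loop follows, assuming per-frame hand-tracking input like the grab-strength value LEAP Motion reports. The claw threshold, physics, and update rate are toy stand-ins for the actual Unity scene, chosen only to show the handshape-to-motion mapping.

```python
# a minimal sketch of the handshape-to-motion pilot: a claw-like grip
# (approximated by a hand-tracking "grab strength" value) triggers a
# bouncing ball; the physics is a toy stand-in for the actual scene
from dataclasses import dataclass

GRAVITY = -9.8
CLAW_THRESHOLD = 0.6   # assumed grab-strength cutoff for a "claw" pose
DT = 1 / 60            # 60 Hz update step

@dataclass
class Ball:
    height: float = 1.0
    velocity: float = 0.0

    def step(self, claw_detected: bool):
        # integrate simple vertical physics each frame
        self.velocity += GRAVITY * DT
        self.height += self.velocity * DT
        if self.height <= 0.0:
            self.height = 0.0
            # bounce only while the claw handshape is held
            self.velocity = 4.0 if claw_detected else 0.0

def is_claw(grab_strength: float) -> bool:
    """Treat a mostly-curled hand (fingers bent, short of a full fist) as a claw."""
    return CLAW_THRESHOLD <= grab_strength < 1.0

# usage with a fake stream of tracking frames (real input would come
# from the hand-tracking controller each frame)
ball = Ball()
for grab in [0.7] * 240:           # four seconds of a held claw pose
    ball.step(is_claw(grab))
print(f"ball height after simulation: {ball.height:.2f} m")
```

Gating the bounce on the handshape, rather than on hand position, keeps the mapping legible to signers: the pose itself is the command.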