Introduction
Development of NUI (Natural User Interface) technology as a new style of interface has been increasing. An NUI allows intuitive operation using voice and body movement. DTK (Depth Tool Kit) is a development kit for infrared depth sensors. It recognizes users from the distance information provided by the sensor and detects body movement in 3D. Unlike conventional motion capture, one of its merits is that no sensors or markers need to be attached to the body.
Infrared depth sensor
Overview of DTK
DTK is a toolkit that simplifies the use of infrared depth sensors. User recognition and even acquisition of 3D motion can be done simply by sending a command, as can acquisition of the depth map and RGB camera images (Figure 3). DTK can also recognize and handle several users at the same time, and its gesture interface and 2D menu creation function allow it to be used as an input device immediately. Entirely different applications can also be created from the depth map information and the 3D motion data.
How to use DTK
DTK can be used regardless of language or platform (Figure 4). In C++ environments such as Visual Studio(R), DTK is used as a class library: the class is instantiated, the required member functions are called, and the processing is executed. An application using the infrared depth sensor can be created simply by describing the processing that matches each event. For other languages and platforms, DTK works as a server using UDP communication, and the information from the infrared depth sensor is obtained by communicating with DTK from a client program. Because DTK runs as a separate application in this mode, it can be used regardless of language or platform.
After connecting to the server over UDP, clients send DTK the processing to be executed and the necessary information, and DTK transfers data back to the client program as required. It is also possible to have DTK display the depth map and RGB image, or to use it purely as a background program.
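The class-library style of use can be illustrated with a short sketch. DTK's actual class and member names are not given in this article, so the DepthToolKit class, its member functions, and the stub bodies below are hypothetical stand-ins, used only to show the flow of initializing the sensor, updating each frame, and reading user data.

#include <iostream>

// Hypothetical joint position in sensor coordinates (metres).
struct Joint3D { float x, y, z; };

// Hypothetical wrapper class standing in for the DTK class library.
// Stub bodies return dummy values so the sketch compiles and runs.
class DepthToolKit {
public:
    bool Initialize()         { return true; }          // stub: open the depth sensor
    bool Update()             { return ++frame_ <= 3; } // stub: fetch the next frame
    int  GetUserCount() const { return 1; }             // stub: number of recognized users
    Joint3D GetRightHand(int /*userId*/) const { return {0.1f, 0.2f, 1.5f}; } // stub: 3D motion
private:
    int frame_ = 0;
};

int main() {
    DepthToolKit dtk;
    if (!dtk.Initialize()) return 1;                    // sensor not found

    // Event-style loop: call member functions each frame and react to the result.
    while (dtk.Update()) {
        for (int u = 0; u < dtk.GetUserCount(); ++u) {
            Joint3D hand = dtk.GetRightHand(u);         // 3D position of the right hand
            std::cout << "user " << u << " right hand z = " << hand.z << " m\n";
        }
    }
    return 0;
}

A client program for the UDP server mode would follow the same pattern, except that the commands and results are exchanged as UDP messages instead of member function calls.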
AirDriving & Gesture Interface
AirDriving & Gesture Interface is a driving system that needs no physical steering wheel or pedals, combining 3D motion input and gesture input (Figure 5). A vehicle can be driven on a UC-win/Road model using the hand and foot information obtained from the infrared depth sensor (Figure 6). To steer, move both hands as if holding a real steering wheel; to operate the accelerator and brake, move the right foot as if stepping on a pedal. Because even slight movements are picked up, smooth driving is possible.
Switching between drive and reverse is done by gesture. A gesture is input by moving the right hand as in Figure 7 while the left hand is held still; in AirDriving, gesture 1 selects drive and gesture 2 selects reverse. By assigning the other gestures to scenarios, everything from the start of driving to the end can be operated without a mouse or keyboard.
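As a rough illustration of this kind of mapping, the sketch below derives a steering angle from the tilt of the line between the two hands and an accelerator value from how far the right foot is pushed forward. The joint positions, calibration distances, and scaling used here are assumptions for illustration only and do not describe AirDriving's actual processing.

#include <cmath>
#include <algorithm>
#include <iostream>

struct Joint3D { float x, y, z; };   // joint position in metres

// Steering angle in degrees from the roll of the left-hand/right-hand line,
// as if both hands were holding a real steering wheel.
float SteeringAngleDeg(const Joint3D& leftHand, const Joint3D& rightHand) {
    float dy = rightHand.y - leftHand.y;   // vertical offset between the hands
    float dx = rightHand.x - leftHand.x;   // horizontal distance between the hands
    return std::atan2(dy, dx) * 180.0f / 3.14159265f;
}

// Accelerator value 0..1 from how far the right foot is pushed forward,
// like stepping on a pedal. restZ and pressZ are assumed calibration distances.
float Accelerator(const Joint3D& rightFoot, float restZ = 1.6f, float pressZ = 1.3f) {
    float t = (restZ - rightFoot.z) / (restZ - pressZ);
    return std::clamp(t, 0.0f, 1.0f);
}

int main() {
    // Example joint positions: hands tilted slightly, foot pressed halfway.
    Joint3D lh{-0.20f, 0.05f, 1.2f}, rh{0.20f, -0.05f, 1.2f}, rf{0.15f, -0.8f, 1.45f};
    std::cout << "steer " << SteeringAngleDeg(lh, rh) << " deg, "
              << "accel " << Accelerator(rf) << "\n";
    return 0;
}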
Examples of applications using the infrared depth sensor
Starting with AirDriving, applications that use the infrared depth sensor, such as teleoperation of robots, are under development at FORUM8 (Tables 2 and 3).
RoboCar(R) is a 1/10-scale model car from ZMP Inc. A driving simulation in which UC-win/Road and RoboCar(R) are synchronized through AirDriving and the gesture interface is possible. AGUL AR.Drone (P9, Up&Coming vol. 95) is a quadricopter from Parrot (Figure 8); it flies stably and can be used for aerial photography. The AGUL AR.Drone and a robotic arm (Figure 9) can be controlled using 3D motion capture and the Gesture Interface.
SLAM (Simultaneous Localization And Mapping) is a technique that estimates self-location and builds an environment map at the same time (Figure 10). By combining the distance information from the infrared depth sensor with the colour information from the RGB camera, a 3D point cloud model (the environment map) is created in real time, and a continuous space is built up by comparing successive images, which is how the self-location is estimated. This is used for robot control and 3D mapping.
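The point cloud creation step can be sketched as follows: each valid depth pixel is back-projected into a 3D point with a pinhole camera model and coloured from the registered RGB image. The camera intrinsics and the tiny frame used in main() are placeholder values, not those of any particular sensor, and the rest of the SLAM pipeline is not shown.

#include <cstdint>
#include <vector>
#include <iostream>

struct ColoredPoint { float x, y, z; uint8_t r, g, b; };

std::vector<ColoredPoint> DepthToPointCloud(
        const std::vector<uint16_t>& depthMm,   // depth in millimetres, row-major
        const std::vector<uint8_t>&  rgb,       // registered RGB image, 3 bytes per pixel
        int width, int height,
        float fx, float fy, float cx, float cy) // pinhole intrinsics in pixels
{
    std::vector<ColoredPoint> cloud;
    cloud.reserve(depthMm.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            uint16_t d = depthMm[v * width + u];
            if (d == 0) continue;               // 0 means no depth measurement
            float z = d * 0.001f;               // millimetres -> metres
            float x = (u - cx) * z / fx;        // pinhole back-projection
            float y = (v - cy) * z / fy;
            const uint8_t* c = &rgb[(v * width + u) * 3];
            cloud.push_back({x, y, z, c[0], c[1], c[2]});
        }
    }
    return cloud;
}

int main() {
    // Tiny synthetic 2x2 frame just to exercise the function.
    std::vector<uint16_t> depth{1000, 0, 1500, 2000};
    std::vector<uint8_t>  rgb(2 * 2 * 3, 128);
    auto cloud = DepthToPointCloud(depth, rgb, 2, 2, 285.f, 285.f, 1.f, 1.f);
    std::cout << cloud.size() << " points\n";   // expect 3 (one pixel had no depth)
    return 0;
}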
DTK supports the development of these applications.
(Up&Coming '12 New Year Issue)