Support Topics / UC-win/Road Maintenance/Support
SfM Plug-in
 

The SfM Plug-in is a new function of UC-win/Road Ver.11. It analyzes multiple photos and reconstructs 3D coordinates (a point cloud) in VR space.

 What is the SfM (Structure from Motion) plug-in?

The SfM Plug-in creates a 3D point cloud model of the objects in a space from multiple photos. It also estimates the positions of the cameras and displays the models in UC-win/Road. The point cloud and the camera positions are obtained by applying the SfM method to the loaded photo images.
The SfM Plug-in is suitable for a variety of applications. For example, it is possible to represent the area around a road or a railway track as a point cloud simply by taking pictures while walking along it. One of the benefits is that real objects can be reproduced easily in VR space without special equipment such as a 3D laser scanner.
-SfM (Structure from Motion): A technology that estimates the 3D locations of feature points in pictures, together with the camera positions and orientations, by analyzing multiple photos.
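
As a rough illustration of the SfM method itself (not the plug-in's own implementation), the Python sketch below uses OpenCV to match feature points between two overlapping photos, estimate the relative camera pose, and triangulate the matches into a small point cloud. The file names and the camera matrix values are assumptions made for this example.

import cv2
import numpy as np

# Assumed camera matrix (focal length and principal point); in practice this
# comes from Exif data or a calibration file
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Two overlapping photos taken from slightly different positions (placeholders)
img1 = cv2.imread("photo1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect feature points and match them between the two images
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:500]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the relative camera pose from the essential matrix
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the matches into 3D points (a small point cloud)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print(points3d.shape, "3D points reconstructed")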

 Functions of SfM Plug-in
The SfM Plug-in has the functions below.

Create point clouds from a video
Point clouds are created based on images extracted from photos and videos taken with a digital camera.

Display and arrangement of the point cloud
Analyzed and output point clouds can be adjusted so that they are displayed properly in UC-win/Road. They can be rotated and adjusted to the actual scale and position, which makes it possible to combine and display multiple point cloud data sets and to overlay point cloud data on VR models.
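
As a rough geometric picture of what such an adjustment does, the Python sketch below scales, rotates, and translates a point cloud; the function name, array shapes, and parameter values are illustrative assumptions and not the plug-in's interface.

import numpy as np

def adjust_point_cloud(points, scale, yaw_deg, offset):
    """Scale an (N, 3) point cloud, rotate it about the vertical axis, then translate it."""
    yaw = np.radians(yaw_deg)
    rotation = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                         [np.sin(yaw),  np.cos(yaw), 0.0],
                         [0.0,          0.0,         1.0]])
    return (points * scale) @ rotation.T + np.asarray(offset)

# Placeholder cloud; a real one would come from the SfM analysis
cloud = np.random.rand(1000, 3)
aligned = adjust_point_cloud(cloud, scale=2.5, yaw_deg=30.0, offset=[10.0, 5.0, 0.0])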

Calibration file of the camera
By taking a photo of a chessboard pattern and loading it, a file is created that corrects the distortion in the images by analyzing the distances and features.

Create a Visual Words file
A Visual Words file, which is used during point cloud creation, is created and output.


Fig. 1 Flow of the analysis

Taking photos
Take photos of the space where you want to create 3D point clouds with a digital camera. Multiple photos must be taken from slightly different positions. In addition, the camera properties are necessary as parameters for the SfM analysis. If the pictures do not contain Exif information, you need to take a photo of a chessboard pattern and create a calibration file.
-Exif (Exchangeable image file format): An image format that allows information about the shooting conditions to be attached to digital camera photo data. Metadata such as the shooting date, model name, resolution, exposure time, aperture value, focal length, ISO speed, and color space are saved with the image.
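
For reference, the Python sketch below reads a few of these Exif tags with Pillow (the file name is a placeholder; recent Pillow versions store the shooting parameters in the Exif sub-IFD, which is why get_ifd is used).

from PIL import Image
from PIL.ExifTags import TAGS

# Read Exif metadata from a digital camera photo ("photo.jpg" is a placeholder)
with Image.open("photo.jpg") as img:
    exif = img.getexif()
    sub_ifd = exif.get_ifd(0x8769)  # shooting parameters live in the Exif sub-IFD

for tag_id, value in {**dict(exif), **dict(sub_ifd)}.items():
    name = TAGS.get(tag_id, tag_id)
    if name in ("Model", "DateTime", "FocalLength", "ExposureTime", "FNumber", "ISOSpeedRatings"):
        print(name, value)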

Create a camera calibration file
Create a calibration file that captures the characteristics of the digital camera by using the photo of the chessboard pattern.
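
For reference, the Python sketch below shows a typical chessboard calibration of this kind with OpenCV; the board size (9 x 6 inner corners) and the file name pattern are assumptions, and the plug-in's own calibration file format is not reproduced here.

import glob
import cv2
import numpy as np

board = (9, 6)  # assumed number of inner corners per row and column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("chessboard_*.jpg"):  # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Camera matrix and lens distortion coefficients used to undistort the photos
# (assumes at least one chessboard image was found)
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("camera matrix:\n", K, "\ndistortion coefficients:", dist.ravel())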

Create a Visual Words file
Create a Visual Words file, which is needed for quickly judging the similarity of images during the SfM analysis. Photos of the analysis target are used for its creation. Because creating a Visual Words file takes an enormous amount of time, it is recommended to use the sample files supplied with the plug-in.
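
Conceptually, a Visual Words file is a vocabulary in the "bag of visual words" sense: feature descriptors collected from the training photos are clustered, and each cluster centre becomes one visual word that allows images to be compared quickly. The Python sketch below illustrates this idea with OpenCV's BOWKMeansTrainer; the folder name, vocabulary size, and output file are assumptions, and the plug-in's actual file format may differ.

import glob
import cv2
import numpy as np

sift = cv2.SIFT_create()
trainer = cv2.BOWKMeansTrainer(500)  # assumed vocabulary size of 500 visual words

# Collect SIFT descriptors from the photos of the analysis target (placeholder folder)
for path in glob.glob("photos/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is not None:
        trainer.add(np.float32(descriptors))

# Cluster the descriptors; each of the 500 cluster centres is one visual word
vocabulary = trainer.cluster()
np.save("visual_words.npy", vocabulary)  # placeholder output file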

SfM Analysis
Import the photos, the camera calibration file, and the Visual Words file, set the analysis conditions, and then perform the analysis.

Visualization in UC-win/Road
As the SfM analysis runs, viewpoint positions, direction arrows, and point clouds are displayed.

Fig. 2 Photo of the showroom
Fig. 3 Analysis result of the showroom
Types of analysis

The SfM plug-in can perform real-time analysis and batch analysis. Real-time analysis analyzes images uploaded to a folder instantly, but it requires a large amount of memory.
In batch analysis mode, the target pictures are selected before the analysis, and only they are analyzed to output point clouds. It uses less memory and can analyze more pictures than real-time analysis. However, no analysis result is obtained if discontinuous photos are entered, because this mode processes only continuous photos.

Batch analysis
Select the photos to be analyzed and set the analysis conditions. After the analysis starts, the view positions of the selected photos are calculated and displayed in the 3D space. As the analysis progresses, the number of points increases and the structure gradually becomes clear. Check the point cloud in the space when all the image analyses have finished. If there are only a few points, select "give priority to the number of the point clouds" in the analysis conditions and perform the analysis again.
If the analysis stops partway through, perform it again after changing the starting image or removing the image where the analysis stops.

Feature point detection algorithm
The feature point detection algorithm can be set to either SIFT or SURF.

SIFT (Scale Invariant Feature Transform)
This algorithm detects feature points and describes their feature values in a way that is invariant to scale changes and rotation. Its processing speed is slower than SURF, but it is said to have higher recognition precision.

SURF (Speeded-Up Robust Features)
This algorithm is an improved, faster version of SIFT. It is considered to have a faster processing speed but lower recognition precision than SIFT.
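
For reference, the Python sketch below detects feature points with both algorithms using OpenCV. The file name is a placeholder; note that SIFT is included in the main module of recent OpenCV versions, while SURF requires an opencv-contrib build and may be disabled as a non-free algorithm.

import cv2

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT: slower but generally higher recognition precision
sift = cv2.SIFT_create()
kp_sift, des_sift = sift.detectAndCompute(gray, None)
print("SIFT keypoints:", len(kp_sift))

# SURF: faster, available only in opencv-contrib builds
try:
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_surf, des_surf = surf.detectAndCompute(gray, None)
    print("SURF keypoints:", len(kp_surf))
except (AttributeError, cv2.error):
    print("SURF is not available in this OpenCV build")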

Fig. 4 Photo
Fig. 5 Screenshot of SfM plug-in
Fig. 6 Analysis result by the SfM plug-in: point cloud, camera position and direction (a white globe and arrows)
 Tips on using the SfM plug-in

Shoot each photo so that it includes objects that appear in photos already shot. Otherwise, no point cloud will be created because the positions in the space cannot be recognized. (For batch analysis, objects appearing in the last four photos must be included.)

Keep the brightness constant, because recognizing objects is difficult if the brightness changes greatly from the previously analyzed photo. For example, indoor shooting should be done under room lighting with the curtains closed to block outside light. For outdoor shooting, cloudy weather is desirable because brightness changes between sunlit and shaded areas. If you have both kinds of photos, analyze them separately and connect the results in VR space with the point cloud adjustment function.

The first two pictures in the analysis affect the number of points output afterwards. If the point cloud is not output properly, it may be improved by changing the order of the photos input to the SfM plug-in.

If the analysis stops immediately, select "give priority to the number of the point clouds" as the priority setting of the feature detection algorithm. This gives lower point cloud precision but a higher detection ratio of the camera positions.



(Up&Coming 2016 Summer issue)