We successfully drove the Kinect and retrieved both the video and depth streams from its sensor. This information is also integrated with the OpenCV image processing library, so we currently have full access to all the Kinect functionality we need.
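A common step when bridging a Kinect driver and OpenCV is rescaling the raw 11-bit depth frame into an 8-bit grayscale image that standard display and processing routines accept. The sketch below shows this idea with numpy only (the function name and synthetic frame are ours for illustration; real code would wrap the driver's buffer and hand the result to OpenCV):

```python
import numpy as np

def depth_to_gray8(depth_raw, max_raw=2047):
    """Scale a raw 11-bit Kinect depth frame (values 0..2047) into an
    8-bit grayscale image suitable for OpenCV display functions."""
    depth = np.asarray(depth_raw, dtype=np.float32)
    gray = np.clip(depth / max_raw, 0.0, 1.0) * 255.0
    return gray.astype(np.uint8)

# Synthetic 4x4 "depth frame" standing in for a real Kinect capture.
frame = np.array([[0, 1023, 2047, 2047]] * 4, dtype=np.uint16)
gray = depth_to_gray8(frame)
print(gray[0])  # nearest pixels map to 0, farthest to 255
```

The division by the sensor's maximum raw value keeps the full depth range visible; a real pipeline might instead clip to the working distance of the training setup for better contrast.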
After a discussion with Dr. Badler, we concluded that the flat ARTag ( http://www.artag.net/ ) mentioned in the project proposal is not an ideal tracking method for the training environment. Although it provides accurate tracking, it only functions properly when the tag faces the camera or sits within a small angle of it, while the trainee might need to rotate the sensor or the needle during training.
Based on that conclusion, we decided to switch to a color tracking method. The screen capture above shows that we can track a particular color in the environment, look up the depth at the corresponding position, and thereby recover the third dimension. The next step is to track two or more markers, compute the vector between them, and align that vector with the sensor and needle we want to track.
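The marker-pair idea can be sketched end to end: threshold a frame for each marker color, take the centroid of the matching pixels, look up the depth there to get a 3-D point, and subtract the two points to obtain the tool's axis. The helpers below are hypothetical numpy stand-ins (production code would use OpenCV calls such as `cv2.inRange` and `cv2.moments` on live frames):

```python
import numpy as np

def track_marker(rgb, target, tol=30):
    """Return the (row, col) centroid of pixels within `tol` of the
    `target` color, or None if no pixel matches."""
    diff = np.abs(rgb.astype(np.int32) - np.array(target)).max(axis=2)
    ys, xs = np.nonzero(diff <= tol)
    if len(ys) == 0:
        return None
    return int(ys.mean()), int(xs.mean())

def marker_3d(rgb, depth, target):
    """Combine the 2-D color centroid with the depth map to form a
    3-D point (x, y, raw depth)."""
    pos = track_marker(rgb, target)
    if pos is None:
        return None
    r, c = pos
    return np.array([c, r, depth[r, c]], dtype=np.float64)

# Synthetic frame: red marker near the top-left, green near bottom-right.
rgb = np.zeros((120, 160, 3), dtype=np.uint8)
rgb[20:24, 30:34] = (255, 0, 0)      # red patch
rgb[100:104, 140:144] = (0, 255, 0)  # green patch
depth = np.full((120, 160), 800, dtype=np.uint16)
depth[100:104, 140:144] = 1200       # green marker is farther away

p_red = marker_3d(rgb, depth, (255, 0, 0))
p_green = marker_3d(rgb, depth, (0, 255, 0))
axis = p_green - p_red               # vector aligned with the tracked tool
print(axis)
```

In practice the thresholding would be done in HSV space rather than raw RGB to be robust against lighting changes, and the raw depth value would be converted to metric units before the vector is used for alignment.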