A framework for non-expert robot programming facilitated by a self-localizing smart device
This was my master's thesis project at KUKA Roboter GmbH, Augsburg; it ran for six months. For this project I carried out the requirements analysis, implemented the Hough Circle and Cylinder Model Segmentation algorithms with the OpenCV and PCL C++ libraries for better performance, developed further algorithms and implemented them in C++ on the ROS platform, and wrote Java APIs to teach the LBR iiwa to pick and place objects.
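To illustrate the perception side, below is a minimal sketch (not the thesis code itself) of RANSAC-based cylinder model segmentation with PCL, roughly the kind of routine used to locate a cylindrical object in the depth data; the parameter values are illustrative assumptions and would need tuning for the actual sensor.

```cpp
// Hedged sketch: fit a cylinder model to a point cloud with PCL's
// RANSAC-based segmentation. Parameter values are illustrative only.
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/features/normal_3d.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/search/kdtree.h>

pcl::ModelCoefficients::Ptr segmentCylinder(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  // Estimate surface normals, which the cylinder model requires.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setSearchMethod(tree);
  ne.setInputCloud(cloud);
  ne.setKSearch(50);
  ne.compute(*normals);

  // RANSAC cylinder segmentation.
  pcl::SACSegmentationFromNormals<pcl::PointXYZ, pcl::Normal> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_CYLINDER);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setNormalDistanceWeight(0.1);
  seg.setMaxIterations(10000);
  seg.setDistanceThreshold(0.05);   // metres; assumed value
  seg.setRadiusLimits(0.0, 0.1);    // expected object radius range; assumed
  seg.setInputCloud(cloud);
  seg.setInputNormals(normals);

  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  seg.segment(*inliers, *coefficients);  // point on axis, axis direction, radius
  return coefficients;
}
```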
Demo
KUKA LBR iiwa robot taught to pick and place an object on the table.
As you can see in the video, the robot was taught through an Android application running on a Google Tango device. I developed this application, which contains three buttons (pick, place and play) and a camera preview. In the camera preview I selected an object on the table and, by pressing the pick button, asked the robot to pick it up. I then chose a hole in which to place the object using the place button. From then on, the robot repeats the same action without being taught again (without the pick and place buttons): when I press the play button, the robot picks the same object no matter where it lies on the table and places it in the same hole.
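The replay ("play") step can be pictured roughly as in the ROS C++ sketch below. This is not the actual implementation: it assumes the vision pipeline publishes the detected object pose in the camera frame, re-expresses that pose in the robot base frame, and forwards it as the pick target, which is what makes the pick independent of where the object lies on the table. The topic and frame names ("/detected_object", "/pick_target", "robot_base") are assumptions for illustration.

```cpp
// Hedged sketch of the replay step: transform the detected object pose
// from the camera frame into the robot base frame and forward it as the
// pick target. Topic and frame names are hypothetical.
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

class PlayAction
{
public:
  explicit PlayAction(ros::NodeHandle& nh)
    : tfListener_(tfBuffer_)
  {
    pickPub_ = nh.advertise<geometry_msgs::PoseStamped>("/pick_target", 1);
    sub_ = nh.subscribe("/detected_object", 1, &PlayAction::onDetection, this);
  }

private:
  void onDetection(const geometry_msgs::PoseStamped::ConstPtr& objectInCamera)
  {
    geometry_msgs::PoseStamped objectInBase;
    try {
      // Re-express the detected pose in the robot base frame so the pick
      // works regardless of where the object lies on the table.
      tfBuffer_.transform(*objectInCamera, objectInBase, "robot_base",
                          ros::Duration(0.5));
      pickPub_.publish(objectInBase);
    } catch (const tf2::TransformException& ex) {
      ROS_WARN("Transform failed: %s", ex.what());
    }
  }

  tf2_ros::Buffer tfBuffer_;
  tf2_ros::TransformListener tfListener_;
  ros::Publisher pickPub_;
  ros::Subscriber sub_;
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "play_action");
  ros::NodeHandle nh;
  PlayAction node(nh);
  ros::spin();
  return 0;
}
```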