Master's Thesis Project at KUKA Roboter GmbH
A framework for intuitive robot programming facilitated by a self-localizing smart device. This innovative system enables non-experts to teach industrial robots complex pick-and-place operations using augmented reality and computer vision, eliminating the need for traditional programming expertise.
Comprehensive research and development spanning computer vision, robotics, and mobile computing
Implemented advanced algorithms including Hough Circle detection and Cylinder Model Segmentation using OpenCV and PCL C++ libraries for robust object recognition and tracking.
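The core idea behind Hough Circle detection is that edge points vote for candidate circle centers. A minimal, dependency-free Python sketch of that voting step (the thesis implementation used OpenCV in C++; the known radius and integer-grid centers here are simplifying assumptions):

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, angle_steps=360):
    """Accumulate votes for candidate circle centers at a known radius.

    Every edge point votes for all integer-grid centers lying `radius`
    away from it; the true center collects the most votes.
    """
    votes = Counter()
    for x, y in edge_points:
        for i in range(angle_steps):
            theta = 2.0 * math.pi * i / angle_steps
            center = (round(x - radius * math.cos(theta)),
                      round(y - radius * math.sin(theta)))
            votes[center] += 1
    return votes.most_common(1)[0][0]

# Synthetic edge points on a circle of radius 4 centered at (10, 10).
points = [(round(10 + 4 * math.cos(a)), round(10 + 4 * math.sin(a)))
          for a in (2.0 * math.pi * k / 24 for k in range(24))]
center = hough_circle_center(points, 4)
print(center)  # peak lands at or immediately next to (10, 10)
```

OpenCV's `HoughCircles` generalizes this by also searching over the radius and by using gradient directions to prune the vote space.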
Developed an Android application on Google Tango platform enabling intuitive robot programming through augmented reality, eliminating complex coding requirements.
Created Java APIs for KUKA LBR iiwa robot control, enabling precise pick-and-place operations with real-time position feedback and collision avoidance.
Implemented algorithms in C++ on the Robot Operating System (ROS) for seamless communication between the vision system, mobile device, and robot controller.
Leveraged Google Tango's self-localization capabilities to establish precise spatial relationships between mobile device, objects, and robot workspace.
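Under the hood, self-localization reduces to chaining rigid-body transforms: an object observed in the device (Tango) frame is mapped into the robot base frame through the device's calibrated pose. A minimal sketch with homogeneous 4x4 matrices (frame names and numeric values are illustrative, not from the thesis):

```python
import math

def make_transform(yaw, tx, ty, tz):
    """Homogeneous 4x4 transform: rotation about z by `yaw`, then translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def transform_point(T, p):
    """Apply a homogeneous transform to a 3D point."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
                 for i in range(3))

# Device pose in the robot base frame (would come from calibration).
robot_T_device = make_transform(math.pi / 2, 1.0, 0.0, 0.5)
# Object position observed by the device in its own frame.
p_device = (0.2, 0.0, 0.3)
# Same object expressed in the robot base frame, ready for motion planning.
p_robot = transform_point(robot_T_device, p_device)
print(p_robot)  # -> approximately (1.0, 0.2, 0.8)
```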
Developed an intelligent learning system that allows the robot to remember taught tasks and autonomously execute them even when object positions change.
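One way to make a taught task robust to moved objects is to store the grasp pose relative to the object at teach time, then re-anchor it to the newly detected object pose at replay. A hedged sketch of that idea (helper functions and poses are illustrative, not the thesis code):

```python
import math

def make_transform(yaw, tx, ty, tz):
    """Homogeneous 4x4 transform: rotation about z by `yaw`, then translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(T):
    """Inverse of a rigid transform: transpose rotation, negate rotated translation."""
    R = [[T[j][i] for j in range(3)] for i in range(3)]  # rotation transpose
    t = [-(R[i][0] * T[0][3] + R[i][1] * T[1][3] + R[i][2] * T[2][3])
         for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

# --- Teaching: record the grasp pose relative to the object ---
world_T_object = make_transform(0.0, 0.4, 0.2, 0.0)  # object pose at teach time
world_T_grasp = make_transform(0.0, 0.4, 0.2, 0.1)   # grasp 10 cm above it
object_T_grasp = mat_mul(invert_rigid(world_T_object), world_T_grasp)

# --- Replay: the object moved, so re-derive the grasp from its new pose ---
world_T_object_new = make_transform(math.pi, 0.7, -0.1, 0.0)
world_T_grasp_new = mat_mul(world_T_object_new, object_T_grasp)
print([round(world_T_grasp_new[i][3], 3) for i in range(3)])  # -> [0.7, -0.1, 0.1]
```

Because `object_T_grasp` is fixed, any new object pose from the vision pipeline yields a valid grasp without re-teaching.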
Industry-standard tools and frameworks for robotics and computer vision development
Computer Vision Library
Point Cloud Library
Robot Operating System
API Development
Mobile Application
High-Performance Computing
Six months of intensive research, development, and implementation
Conducted comprehensive requirement analysis to identify key challenges in traditional robot programming and define the scope for an intuitive, non-expert-friendly programming framework.
Implemented and optimized computer vision algorithms for object detection and localization, including Hough Circle detection (OpenCV) and Cylinder Model Segmentation (PCL).
Developed C++ nodes on the ROS platform for seamless integration between vision processing, mobile-device communication, and robot control, connected through a publish-subscribe architecture.
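The publish-subscribe decoupling that ROS topics provide can be sketched with a tiny in-process message bus (illustrative only; the actual nodes were written with roscpp, and the topic name below is hypothetical):

```python
from collections import defaultdict

class TopicBus:
    """Minimal in-process stand-in for ROS-style topic pub/sub."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []

# The robot-control node listens for object poses from the vision node.
bus.subscribe("/object_pose", received.append)  # hypothetical topic name

# The vision node publishes a detected object position (x, y, z).
bus.publish("/object_pose", (0.4, 0.2, 0.05))
print(received)  # -> [(0.4, 0.2, 0.05)]
```

The key property is that the publisher never references its subscribers: vision, mobile-device, and robot-control nodes can be developed and restarted independently as long as they agree on topic names and message types.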
Created an Android application on Google Tango with three-button interface (Pick, Place, Play) and camera preview for intuitive object selection and robot teaching.
Developed comprehensive Java APIs to interface with KUKA LBR iiwa robot controller, enabling precise motion control and task execution with safety constraints.
Watch the KUKA LBR iiwa robot execute pick-and-place operations taught via smartphone
The demonstration showcases the complete workflow of teaching a KUKA LBR iiwa robot using an Android application running on Google Tango.
The robot learns the task once and can autonomously execute it repeatedly, adapting to different object positions on the table through computer vision and spatial mapping.