Non-Expert Robot Programming Framework

Master's Thesis Project at KUKA Roboter GmbH

A framework for intuitive robot programming facilitated by a self-localizing smart device. This innovative system enables non-experts to teach industrial robots complex pick-and-place operations using augmented reality and computer vision, eliminating the need for traditional programming expertise.

April - September 2015
KUKA Roboter GmbH
Augsburg, Germany
KUKA LBR iiwa
Google Tango
ROS

Project Highlights

Comprehensive research and development spanning computer vision, robotics, and mobile computing

Computer Vision

Implemented computer vision algorithms, including Hough Circle detection with OpenCV and cylinder model segmentation with PCL, in C++ for robust object recognition and tracking.
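
As an illustration of the OpenCV part, here is a minimal C++ sketch of circle detection with the Hough gradient method. It assumes the OpenCV 3.x API and a grayscale input frame; the function name and all parameter values are illustrative, not the ones used in the thesis.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Detect circular object candidates in a grayscale camera frame.
// The Hough gradient parameters below are illustrative placeholders.
std::vector<cv::Vec3f> detectCircles(const cv::Mat& gray)
{
    cv::Mat blurred;
    cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2.0);  // suppress sensor noise

    std::vector<cv::Vec3f> circles;  // each entry: (center_x, center_y, radius)
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                     1.0,               // accumulator resolution (same as image)
                     blurred.rows / 8,  // minimum distance between circle centers
                     100,               // Canny high threshold
                     30,                // accumulator threshold (lower = more circles)
                     10, 80);           // min / max radius in pixels
    return circles;
}
```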

AR-Based Teaching

Developed an Android application on the Google Tango platform that enables intuitive robot programming through augmented reality, eliminating complex coding requirements.

Industrial Robotics

Created Java APIs for KUKA LBR iiwa robot control, enabling precise pick-and-place operations with real-time position feedback and collision avoidance.

ROS Integration

Implemented algorithms in C++ on the Robot Operating System (ROS) platform for seamless communication between the vision system, the mobile device, and the robot controller.

Smart Localization

Leveraged Google Tango's self-localization capabilities to establish precise spatial relationships between mobile device, objects, and robot workspace.
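
To illustrate the idea, here is a minimal C++ sketch (using Eigen, which the project description does not name) of how the object pose could be expressed in the robot base frame by chaining a one-time robot-to-world calibration, the Tango device pose, and the vision-based object pose. All frame and function names are illustrative assumptions.

```cpp
#include <Eigen/Geometry>

// Compose the object pose in the robot base frame from three rigid transforms:
//   robot_T_world   : one-time calibration between robot base and Tango world origin
//   world_T_device  : Tango self-localization pose of the smart device
//   device_T_object : object pose detected by the vision pipeline in the device frame
// All names are illustrative of the idea, not taken from the thesis code.
Eigen::Isometry3d objectInRobotFrame(const Eigen::Isometry3d& robot_T_world,
                                     const Eigen::Isometry3d& world_T_device,
                                     const Eigen::Isometry3d& device_T_object)
{
    return robot_T_world * world_T_device * device_T_object;
}
```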

Adaptive Learning

Developed an intelligent learning system that allows the robot to remember taught tasks and execute them autonomously even when object positions change.

Technologies Used

Industry-standard tools and frameworks for robotics and computer vision development

  • OpenCV: Computer Vision Library
  • PCL: Point Cloud Library
  • ROS: Robot Operating System
  • Java: API Development
  • Android: Mobile Application
  • C++: High-Performance Computing

Development Process

Six months of intensive research, development, and implementation

Requirement Analysis

Conducted comprehensive requirement analysis to identify key challenges in traditional robot programming and define the scope for an intuitive, non-expert-friendly programming framework.

Algorithm Development

Implemented and optimized computer vision algorithms for object detection and localization:

  • Hough Circle Transform: For circular object detection and tracking
  • Cylinder Model Segmentation: For 3D object recognition using point clouds (see the sketch after this list)
  • Performance optimization using C++ for real-time processing
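
The cylinder segmentation referenced above can be sketched with PCL's RANSAC-based model fitting roughly as follows. The thresholds, radius limits, and function name are illustrative assumptions rather than the thesis values.

```cpp
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>

// Fit a cylinder model to a point cloud using RANSAC on points and normals.
// Parameter values are illustrative placeholders.
pcl::ModelCoefficients::Ptr segmentCylinder(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
    // Estimate surface normals, which the cylinder model requires.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.setSearchMethod(tree);
    ne.setInputCloud(cloud);
    ne.setKSearch(50);
    ne.compute(*normals);

    // RANSAC cylinder segmentation.
    pcl::SACSegmentationFromNormals<pcl::PointXYZ, pcl::Normal> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_CYLINDER);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setNormalDistanceWeight(0.1);
    seg.setMaxIterations(1000);
    seg.setDistanceThreshold(0.03);   // 3 cm inlier threshold
    seg.setRadiusLimits(0.0, 0.1);    // expect cylinders up to 10 cm radius
    seg.setInputCloud(cloud);
    seg.setInputNormals(normals);

    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
    seg.segment(*inliers, *coefficients);  // axis point, axis direction, radius
    return coefficients;
}
```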

ROS Platform Integration

Developed C++ nodes on the ROS platform, using its publish-subscribe architecture to integrate vision processing, mobile device communication, and robot control.
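
A minimal roscpp sketch of the publish-subscribe pattern described above: a node that receives object poses from the vision pipeline and republishes them as pick targets for the robot-control side. The topic names, message type, and node name are illustrative assumptions.

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

// Relays detected object poses as pick targets. Names are illustrative.
ros::Publisher g_target_pub;

void objectPoseCallback(const geometry_msgs::PoseStamped::ConstPtr& msg)
{
    ROS_INFO("Object detected at (%.3f, %.3f, %.3f)",
             msg->pose.position.x, msg->pose.position.y, msg->pose.position.z);
    g_target_pub.publish(*msg);  // forward the pose as the current pick target
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "pick_target_relay");
    ros::NodeHandle nh;

    g_target_pub = nh.advertise<geometry_msgs::PoseStamped>("pick_target", 10);
    ros::Subscriber sub = nh.subscribe("object_pose", 10, objectPoseCallback);

    ros::spin();  // process callbacks until shutdown
    return 0;
}
```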

Mobile Application Development

Created an Android application on Google Tango with a three-button interface (Pick, Place, Play) and a camera preview for intuitive object selection and robot teaching.

Java API Integration

Developed comprehensive Java APIs to interface with KUKA LBR iiwa robot controller, enabling precise motion control and task execution with safety constraints.

Live Demonstration

Watch the KUKA LBR iiwa robot execute pick-and-place operations taught via smartphone

How It Works

The demonstration showcases the complete workflow of teaching a KUKA LBR iiwa robot using an Android application running on Google Tango.

  • Pick: Select object on table using camera preview
  • Place: Choose target hole for placement
  • Play: Robot autonomously repeats the task
  • Adaptive: Works regardless of object position
  • No Coding: Zero programming knowledge required

The robot learns the task once and can autonomously execute it repeatedly, adapting to different object positions on the table through computer vision and spatial mapping.
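
One common way to realize such adaptation, shown here as a hedged sketch rather than the thesis implementation, is to store the taught grasp relative to the object and recompose it with the freshly detected object pose at execution time. The Eigen-based helper and all frame names below are illustrative.

```cpp
#include <Eigen/Geometry>

// Teach time: store the grasp relative to the detected object,
//   object_T_grasp = (robot_T_object_teach)^-1 * robot_T_grasp_teach
// Play time: re-detect the object and recompute the grasp in the robot frame.
// Names are illustrative of the idea, not taken from the thesis code.
Eigen::Isometry3d adaptGrasp(const Eigen::Isometry3d& robot_T_object_now,
                             const Eigen::Isometry3d& object_T_grasp_taught)
{
    return robot_T_object_now * object_T_grasp_taught;
}
```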