Category Archives: Robotics

ROS Basics – a short Introduction to ROS

The Robot Operating System (ROS) is an open-source meta-operating system that provides essential features such as hardware abstraction, low-level device control, commonly needed functionality like visualisation, simulation and testing, and message passing between concurrently running processes [O’K13]. Furthermore, it offers implementations of widely used functionality as installable packages, which even cover complex
algorithms like Simultaneous Localization and Mapping (SLAM) and Visual Object Recognition (VOR).

ROS moreover contains tools and libraries for obtaining, building, writing and running code across multiple heterogeneous computers and therefore includes language- and platform-independent tools. For example, ROS supports multiple client libraries, namely roscpp for C++, rospy for Python, roslisp for Lisp and many others. It is also possible to link application-related code and external libraries like OpenCV for computer vision or Eigen3 for efficient linear algebra computation. Furthermore, ROS can successfully be wrapped around other frameworks like the Player Project.

ROS is largely free of charge and has been developed as open-source software under the BSD licence, which offers a variety of advantages for a low-cost robot.

Unsurprisingly, the high complexity of ROS comes with one of the steepest learning curves of all robotic frameworks. In addition, because main characteristics changed rapidly between major versions, nearly all books and most tutorials on the internet have become unreliable, which is often very confusing for a beginner. But once past the steepest part of the curve, a lot of things are self-explanatory and complex features can be implemented very quickly.

Unfortunately, another point to mention and one of the main disadvantages of ROS is its dependency on the ROS host and its Operating System (OS). If you do not develop on an x86 32-bit system, a lot of the automation does not work and requires patience to sort out.
In particular, package support on armhf, the ARM release repository, is not very usable yet. Additionally, despite the importance of reactivity and low latency, ROS is, like all other frameworks, not a real-time OS.

ROS general terminology

ROS is a message-based, concurrently running, heterogeneous peer-to-peer network application. Its structure can be imagined as a mostly undirected graph with an obligatory central process node, called roscore. Broadly speaking, this single master node keeps track of every other part of the robotic network, including running processes and their interfaces. The centralistic design plays to its strengths by offering global debugging possibilities and error logging. The master furthermore only mediates direct connections between graph nodes on request, so that heavy traffic does not have to pass through it. This becomes very useful in cases like image processing, where routing the traffic over the central node would burden the whole system with additional network usage and processing power.
Still simplifying, the other parts of the graph are organised into namespaces, called rosnodes, which in turn may contain further rosnodes or process edges that, depending on their function, are called rostopics or rosservices. A rosnode in a ROS environment can therefore be a robot, a processing server for navigation or even a human interaction device such as a laptop. Usually a rosnode does not physically cross the border of a single computing system, but a single system can often run multiple namespaces. Rosnodes can also profit from zero-copy shared-memory handling between their topics by using the ROS nodelet manager, which significantly reduces memory consumption. Every rosnode offers at least one rostopic, a message provider that multiple peers can subscribe to, or a rosservice, a bidirectional one-to-one connection between peers that carries parameters.
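
A quick way to get a feeling for these terms is to inspect a running system from the command line; the following is a minimal sketch and assumes that a roscore and at least one rosnode are already running:

rosnode list                          # all rosnodes registered at the master
rosnode info /rosout                  # topics and services offered by a single node
rostopic list                         # all advertised rostopics
rostopic echo /rosout                 # subscribe to a rostopic and print its messages
rosservice list                       # all offered rosservices
rosservice call /rosout/get_loggers   # call a rosservice and print its response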

ROS history

In 2007, the first robot running a version of ROS was the STanford Artificial Intelligence Robot (STAIR), which was developed by the Stanford Artificial Intelligence Laboratory (SAIL). At that time ROS was called switchyard but already followed its main principles such as inter-process communication, concurrency and heterogeneous environments. After that, Willow Garage primarily developed ROS until February 2013. By then ROS had reached the critical mass every open source project needs to survive without being mainly driven by external funding. Since then the stewardship of ROS has been with the Open Source Robotics Foundation, and development subsequently left Willow Garage.
Major versions of ROS are called distributions and are named using adjectives that start with successive letters of the alphabet: box turtle, C Turtle, diamondback, electric, fuerte, groovy, hydro and finally Indigo, which has been available since May 2014.

ROS Basics – Using ROS Indigo/Jade with a Webcam by the uvc_camera (USB Video Class) package

There are several ways to use ROS Indigo/Jade with a webcam. The approach that works on most computers is a ROS package called uvc_camera, which was created by Ken Tossell. UVC in this context stands for USB Video Class, a standard that covers almost all consumer webcams.
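
Before installing anything, a quick sanity check (plain Linux tools, nothing ROS-specific) shows whether the webcam is detected at all:

lsusb              # the webcam should appear in the list of USB devices
ls /dev/video*     # a UVC camera typically shows up as /dev/video0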

Unfortunately there is currently no step-by-step tutorial on how to use the package, which is why I created this page. In order to run the package you will need a local catkin workspace, as created in another post, because the package available from the repositories is outdated and does not contain any launch files.

Step by Step Guide

We start by cloning the files into our workspace ./src directory, solving the dependencies with rosdep and finally building the workspace with catkin_make:

cd ~/catkin_ws/src/ #change directory to your source folder
git clone https://github.com/ktossell/camera_umd.git #clone the package from its repo
rosdep install camera_umd uvc_camera jpeg_streamer #resolve and install missing system dependencies
cd .. #go one dir up to catkin_ws
catkin_make #build the workspace

Before I could build the workspace with the newly cloned files, I also had to install the Video4Linux support libraries in their development version:

sudo apt-get install libv4l-dev

After catkin_make has finished (and after sourcing devel/setup.bash so that roscd can find the newly built package), you can launch the uvc_camera node:

roscd uvc_camera/launch/
roslaunch ./camera_node.launch

With a roscore running, launching camera_node.launch should give you output similar to the following:

opening /dev/video0
pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)'
  discrete: 640x480:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 160x120:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 176x144:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x176:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 352x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 432x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 544x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 640x360:   1/30 1/25 1/20 1/15 1/10 1/5 
pixfmt 1 = 'MJPG' desc = 'MJPEG'
  discrete: 640x480:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 160x120:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 176x144:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x176:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 352x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 432x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 544x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 640x360:   1/30 1/25 1/20 1/15 1/10 1/5 
  int (Brightness, 0, id = 980900): 0 to 255 (1)
  int (Contrast, 0, id = 980901): 0 to 255 (1)
  int (Saturation, 0, id = 980902): 0 to 255 (1)
  bool (White Balance Temperature, Auto, 0, id = 98090c): 0 to 1 (1)
  int (Gain, 0, id = 980913): 0 to 255 (1)
  menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
    0: Disabled
    1: 50 Hz
    2: 60 Hz
  int (White Balance Temperature, 16, id = 98091a): 0 to 10000 (10)
  int (Sharpness, 0, id = 98091b): 0 to 255 (1)
  int (Backlight Compensation, 0, id = 98091c): 0 to 1 (1)
  menu (Exposure, Auto, 0, id = 9a0901): 0 to 3 (1)
  int (Exposure (Absolute), 16, id = 9a0902): 1 to 10000 (1)
  bool (Exposure, Auto Priority, 0, id = 9a0903): 0 to 1 (1)
Setting auto_focus is not supported
Setting focus_absolute is not supported

where you can see the supported run modes, which you can now configure in your custom launch file:

<launch>
  <node pkg="uvc_camera" type="uvc_camera_node" name="uvc_camera" output="screen">
    <param name="width" type="int" value="640" /> 
    <!-- we raised the value by the factor 2, as it is supported by previous output -->
    <param name="height" type="int" value="480" /> 
    <!-- we raised the value by the factor 2 -->
    <param name="fps" type="int" value="30" />
    <param name="frame" type="string" value="wide_stereo" />

    <param name="auto_focus" type="bool" value="False" />
    <param name="focus_absolute" type="int" value="0" />
    <!-- other supported params: auto_exposure, exposure_absolute, brightness, power_line_frequency -->
    <!-- in case you want to use a different video input device, change the value below -->
    <param name="device" type="string" value="/dev/video0" /> 
    <param name="camera_info_url" type="string" value="file://$(find uvc_camera)/example.yaml" />
  </node>
</launch>
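
Assuming you save this file as, for example, my_camera.launch next to the existing camera_node.launch (the file name is only an example), you can start it and verify that images are actually being published:

roscd uvc_camera/launch/
roslaunch ./my_camera.launch     # start the camera with the custom parameters
rostopic hz /image_raw           # should report roughly the configured frame rate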

You can now start rqt and its plugin Visualization > Image View, choose e.g. the /image_raw topic, and in case you have a Club-Mate and a copy of The Hitchhiker's Guide to the Galaxy by Douglas Adams around, you'll get the following output:

RQT Image View UVC camera

ROS Basics – depthimage_to_laserscan with low cost depth sensors Asus Xtion or Microsoft Kinect

Most of my work depended on the efficient connection between the [amazon &title=Asus Xtion&text=Asus Xtion] and the [amazon &title=CubieTruck&text=CubieTruck] as a low-cost laser scanner. As the [amazon &title=Asus Xtion&text=Asus Xtion] usually delivers 3D sensor_msgs/PointCloud data and most SLAM algorithms need 2D sensor_msgs/LaserScan messages to work properly, we need a solution to this mismatch: depthimage_to_laserscan.
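
If they are not installed yet, all the packages used here are available from the ROS repositories (package names for Indigo; adjust the prefix for other distributions):

sudo apt-get install ros-indigo-openni2-camera ros-indigo-openni2-launch ros-indigo-depthimage-to-laserscan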

If you already managed to use ros-indigo-openni2-camera and ros-indigo-openni2-launch, you can use the following launch file:

<!-- this code originates from https://github.com/turtlebot/turtlebot/blob/hydro/turtlebot_bringup/launch/3dsensor.launch -->
<launch>
  <!-- "camera" should uniquely identify the device. All topics are pushed down
       into the "camera" namespace, and it is prepended to tf frame ids. -->
  <arg name="camera"      default="camera"/>
  <arg name="publish_tf"  default="true"/>

  <!-- Factory-calibrated depth registration -->
  <arg name="depth_registration"              default="true"/>
  <arg     if="$(arg depth_registration)" name="depth" value="depth_registered" />
  <arg unless="$(arg depth_registration)" name="depth" value="depth" />

  <!-- Processing Modules -->
  <arg name="rgb_processing"                  default="true"/>
  <arg name="ir_processing"                   default="true"/>
  <arg name="depth_processing"                default="true"/>
  <arg name="depth_registered_processing"     default="true"/>
  <arg name="disparity_processing"            default="true"/>
  <arg name="disparity_registered_processing" default="true"/>
  <arg name="scan_processing"                 default="true"/>

  <!-- Worker threads for the nodelet manager -->
  <arg name="num_worker_threads" default="4" />

  <!-- Laserscan topic -->
  <arg name="scan_topic" default="scan"/>

  <include file="$(find openni2_launch)/launch/openni2.launch">
    <arg name="camera"                          value="$(arg camera)"/>
    <arg name="publish_tf"                      value="$(arg publish_tf)"/>
    <arg name="depth_registration"              value="$(arg depth_registration)"/>
    <arg name="num_worker_threads"              value="$(arg num_worker_threads)" />

    <!-- Processing Modules -->
    <arg name="rgb_processing"                  value="$(arg rgb_processing)"/>
    <arg name="ir_processing"                   value="$(arg ir_processing)"/>
    <arg name="depth_processing"                value="$(arg depth_processing)"/>
    <arg name="depth_registered_processing"     value="$(arg depth_registered_processing)"/>
    <arg name="disparity_processing"            value="$(arg disparity_processing)"/>
    <arg name="disparity_registered_processing" value="$(arg disparity_registered_processing)"/>
  </include>

   <!--                        Laserscan 
     This uses lazy subscribing, so will not activate until scan is requested.
   -->
  <group if="$(arg scan_processing)">
    <node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan" args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet $(arg camera)/$(arg camera)_nodelet_manager">
      <!-- Pixel rows to use to generate the laserscan. For each column, the scan will
           return the minimum value for those pixels centered vertically in the image. -->
      <param name="scan_height" value="10"/>
      <param name="output_frame_id" value="/$(arg camera)_depth_frame"/>
      <param name="range_min" value="0.45"/>
      <remap from="image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="scan" to="$(arg scan_topic)"/>

      <remap from="$(arg camera)/image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="$(arg camera)/scan" to="$(arg scan_topic)"/>
    </node>
    
  </group>
</launch>

As you can see, depthimage_to_laserscan is loaded as a nodelet into the camera's nodelet manager.

Nodelets are designed to provide a way to run multiple algorithms on a single machine, in a single process, without incurring copy costs when passing messages intraprocess. (quote from the ROS wiki)

They hugely improve the performance of our 3D point clouds and allow significantly higher publishing rates.
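
With the launch file above running, you can quickly check that the conversion works and how fast the scan is published (topic names as remapped above):

rostopic hz /scan            # publishing rate of the generated laser scan
rostopic echo -n 1 /scan     # print a single sensor_msgs/LaserScan message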

An important property of a robot is the rate at which it produces data. A low rate influences most higher-level algorithms and leads to incorrect results. In most cases especially the depth sensors are required to publish enough material to create detailed maps. The [amazon &title=Asus Xtion&text=Asus Xtion] Pro driver OpenNI2 and the ROS package openni2_camera offer multiple run modes, which can be set via dynamic_reconfigure (I suggest using it in combination with rqt). Another essential option influencing performance is the data_skip parameter, which allows the system to skip a certain number of frames the hardware produces before loading them into memory and thereby remarkably reduces the computational load. It can be set to an integer value between zero, which means not to skip any frames at all, and ten, which leads to every eleventh frame being processed.
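
For example, data_skip can be changed at runtime via dynamic_reconfigure, either through the rqt plugin or from the command line; the node name below assumes the default openni2_launch setup, where the driver runs as /camera/driver:

rosrun rqt_reconfigure rqt_reconfigure                                 # graphical way
rosrun dynamic_reconfigure dynparam set /camera/driver data_skip 5    # process only every 6th frame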

Performance Check

The different combinations of resolutions, maximum frequencies and the data_skip parameter, run on the aMoSeRo (my low-cost [amazon &title=CubieTruck&text=CubieTruck] robot), are illustrated in the table below. As can be seen, especially the number of frames that has to be processed per second highly influences the complete system.

[table: resolution, frame rate and data_skip combinations measured on the aMoSeRo]

In conclusion, the depthimage_to_laserscan package is really useful when working with low-cost depth sensors like the [amazon &title=Asus Xtion&text=Asus Xtion] or the [amazon &title=Kinect&text=Microsoft Kinect]. It is furthermore essential when interfacing SLAM algorithms.

ROS Basics – Step by step guide to a working ROS Indigo Ubuntu 14.04 Laptop/PC

We are beginning with a blank Xubuntu 14.04 Trusty x86 on a [amazon asin=B004URCE4O&text=Lenovo Thinkpad T520]. Any other working Ubuntu 14.04 x86 installation should be compatible with this tutorial.

Setup Ubuntu environment:

If you are a complete beginner with Linux and Ubuntu, I would advise you to install several tools that are necessary or at least helpful while working with ROS. To install them, use the following command and allow sudo to run with administrative permissions by entering your password when asked:

sudo apt-get install fail2ban ufw terminator git

In short, fail2ban protects you from brute-force attacks by banning hosts after repeated failed logins, and ufw is a 'human readable interface' to iptables that allows easy organisation of firewall rules. Next, terminator displays multiple terminals in one window, so you can operate several shells at once without leaving the keyboard. Another essential tool is git, a source code versioning system.

There are more tools that are helpful, but can be considered as optional:

sudo apt-get install vim vnstat htop bmon chromium-browser

Setup ROS desktop environment Ubuntu 14.04 Trusty:

To install ROS itself we can simply follow the well-written tutorial provided by the ROS wiki: http://wiki.ros.org/indigo/Installation/Ubuntu.

In short, the commands are as shown below:

  • sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
  • wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -
  • sudo apt-get update
  • sudo apt-get install ros-indigo-desktop-full
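
The wiki additionally recommends initialising rosdep after the installation, so that system dependencies of packages can be resolved later on:

  • sudo rosdep init
  • rosdep update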

Setup ~/.bashrc part I:

In order to work correctly, ROS requires several bash environment variables that are not very well documented in the install tutorial. You can either enter the following commands every time you start a new bash, or add them to ~/.bashrc, the script that gets executed every time a bash is started.

The most important command:

source /opt/ros/indigo/setup.bash

enables bash to provide all ROS-related commands like roscore and rostopic.

In order to work in a network environment (see the ROS wiki), ROS also requires three more variables, namely:

export ROS_MASTER_URI="http://127.0.0.1:11311"
export ROS_HOSTNAME="127.0.0.1"
export ROS_IP="127.0.0.1"

Here ROS_MASTER_URI defines the IP address and port of the roscore, while the other two define the IP of the local instance. As you can see, in the example all IPs are the localhost address 127.0.0.1 and need to be changed accordingly for a networked setup to work properly.

To simplify the IP settings, I suggest modifying the commands like this:

export ROS_MASTER_URI="http://`ifconfig wlan0 | grep "inet " | awk -F'[: ]+' '{ print $4 }'`:11311"
export ROS_HOSTNAME="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"
export ROS_IP="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"

This sets the IPs to the address of the local wlan0 adapter; adjust the interface name if your machine uses a different one.
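
Putting part I together, the corresponding block in ~/.bashrc could look like this (a sketch for a wlan0-based setup; replace the interface name or use the static variant above if your network differs):

# ROS Indigo environment
source /opt/ros/indigo/setup.bash
# network configuration derived from the wlan0 address
export ROS_MASTER_URI="http://`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`:11311"
export ROS_HOSTNAME="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"
export ROS_IP="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"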

Create catkin workspace:

To use non-packaged versions of ROS packages, or the latest versions that have not yet been compiled into the repository, you'll need a local catkin workspace. Catkin is the ROS build tool that is required to build packages from source. It allows multiple programming languages per package and handles linking dependencies. To create a local workspace you can follow the ROS wiki tutorial: http://wiki.ros.org/catkin/Tutorials/create_a_workspace.

In short, you can also follow these commands:

  • mkdir -p ~/catkin_ws/src
  • cd ~/catkin_ws/src
  • catkin_init_workspace

We will now build the empty work space as a first test:

  • cd ~/catkin_ws/
  • catkin_make
  • source devel/setup.bash
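
To verify that the new workspace is overlayed correctly, check that its src directory appears at the front of the package path:

  • echo $ROS_PACKAGE_PATH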

Setup ~/.bashrc part II:

We also need to reference the newly created local workspace in our .bashrc. Without doing that, tools like roslaunch and rosrun wouldn't be able to find the custom packages.

source /home/insert-your-username/catkin_ws/devel/setup.bash

You can now build, clone or fork custom packages and therefore call your PC a working ROS Indigo environment! 🙂

ROS Tools you can try:
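
For a first impression, a few of the standard tools shipped with the desktop-full installation are worth trying (most of them expect a running roscore):

rqt_graph        # visualise the graph of nodes and topics
rviz             # 3D visualisation of sensor data, tf frames and maps
rqt              # plugin-based GUI (image view, console, dynamic reconfigure, ...)
rostopic list    # inspect the currently advertised topics
roswtf           # diagnose common configuration problems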

Synchronize the time in ROS offline environments without chrony

As our [amazon &title=CubieTruck&text=CubieTruck] faces strange issues when using chrony, and internet access is not a general prerequisite of ROS setups, I needed to figure out a new way to synchronize the time when no internet NTP server is available. For some reason even my local NTP setup was broken, which is why I set the time on all clients according to the ROS master with this simple bash command:

ntpdate `echo $ROS_MASTER_URI | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`

It simply extracts the IPv4 part of the $ROS_MASTER_URI environment variable and uses ntpdate to set the time on the executing client system.

In case you only want to know the exact time deviation, consider using the ntpdate parameter -q, which only queries the server instead of setting the clock.
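
For example, the query-only variant of the command above:

ntpdate -q `echo $ROS_MASTER_URI | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`   # report the offset without setting the clock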

aMoSeRo – mapping the reality

Today is the day of the first accurate aMoSeRo map. I tried several SLAM algorithms, e.g. the hector_mapping package, but the data had been too bad. So after reviewing nearly all my code and fixing a lot of unit issues and publishing rates, today the first map has been created, and it really is a map of the place I am living in!

Using Chrony on CubieTruck

Don’t.

Unless you really know what you are doing.

To synchronize the clock and fix a minimal time shift I was detecting, I followed the TurtleBot2's idea of using chrony. Chrony is a little daemon that connects to your Linux clock or hwclock and detects shifts. For some reason this led to total chaos on the aMoSeRo.
I suppose chrony hasn't been built for multi-core processors with dynamic frequency scaling like the A20, which is why the shift was erratic and up to 2 seconds per minute.

sudo apt-get remove chrony

Removing it fixed all timing errors on the [amazon &title=CubieTruck&text=CubieTruck]. It is also a bit disturbing how much damage small changes can inflict on complex setups.

Resistance to Odometry is futile

It sure is. But good odometry in a robotic context is an objective that is hard to achieve. For a robot like the aMoSeRo only two main velocities are relevant: linear and angular speed. Both do not occur at the same time, but still, correctly determining either of them is essential, as most higher-level algorithms like SLAM and planning highly depend on it. For me, with a thesis that is running out of time, this task may be the biggest challenge that is still somewhat open.

All other system parts like gmapping, robot_pose_ekf, tf broadcasts, sensor code, drivers and dynamic_reconfigure (insert a long list of other important things here) are up and running well enough. Most of the thesis is written; only the evaluation (experiments) and the conclusion (the big round-up at the end) are still missing.

Therefore I am really looking forward to a time after my thesis – full of well deserved sleep and a university degree 🙂

Screenshot 24.09.2014

thesis writing intermediate state

It has been really quiet on this blog recently. This is because I am currently writing my thesis in LaTeX. Therefore I thought today is a good moment to tell you about some things I have learned since the last post.

I have been putting lots of effort into the history of mobile robotics, researching sources and gaining some knowledge about the recent development of Willow Garage, Boston Dynamics and some universities like TU Darmstadt.

Hence I now distinguish between mobile robots, UGVs and AGVs, consider Willow Garage as possibly dead (even if it officially isn't, since the majority of employees moved on in February) and feel deep respect for what is possible with legged robots, as shown by Boston Dynamics.

Time is running out; it is scary and spectacular at the same time what is possible with mobile robots. My first 10k words have been written and there is still a lot more to be done until the 29th of September 2014.

TurtleBot Inventors Interview

Today I found a very nice interview with Tully Foote and Melonee Wise on IEEE. I think it is a must-read for everybody interested in low-cost mobile ROS robots.

It's amazing to read the thoughts they had while inventing the TurtleBot: lowering the entry barrier into ROS and keeping costs low for educational reasons. That is the same thing I am trying to achieve with my thesis and the aMoSeRo 🙂

Most of the time they speak right from my heart; the only thing I would disagree with is this:

Melonee: I believe that the thing that robotics needs most is people who know how to program robots, and not as much people who focus on building robots. I’d like to see that shift in robotics competitions in general, where it’s more about what the robot is doing, as opposed to how it’s built. 

I've met a lot of engineers who are capable of building robots but clearly underestimate the heavy lifting done by the computer science part. Then again, some computer scientists do not have a single clue how to move a real-world motor by the power of code. Someone needs to build a stable bridge between engineering and computing, and this is a person who at least understands both worlds, or better, masters them, because as she also said:

Melonee: Because building robots is hard!