Category Archives: ROS Basics

ROS Basics – challenges in the robotic low cost context

Applications in robotics need to solve many computationally intensive tasks. While some of them can be outsourced to an externally powered device like a laptop or a server, others must essentially be computed on the UGV itself.

Examples are collecting sensor data, receiving and executing commands, and streaming data. Balancing these is a challenging task, because on concurrently executing systems all processes can influence each other, especially when computational power is cut down to the limit in order to save energy. Like most libraries, frameworks and software environments, ROS requires additional resources compared to a single purpose application.

In conclusion, providing enough computational power while using a reasonable amount of energy is an important problem to solve.

Physical properties
Physical dimensions and requirements result from a tradeoff between cost and size, as smaller UGVs tend to be more expensive and complex. On the other hand, an upper bound is set, among other things, by the need to remain manageable in terms of transport and storage.

The low cost target UGV is a ground robot driven by four wheels or two tracks, with physical dimensions below 150 mm * 300 mm * 300 mm (height, width, length). The drive should deliver an effective torque of more than 100 Ncm for moving, or as holding torque on steeper slopes. Tracks are the preferred primary propulsion system, as they have better grip and only require simple motor control. A nice-to-have would be the capability of turning on the spot, which allows operating in small areas and facilitates 3D scans of rooms without moving further than required. Another optional point, if the robot is going to be used outside of buildings or around kids, is a splash-proof case, which would increase the robot's lifetime. Furthermore, modular extensibility would increase the usability of the robot significantly.

Modular design

In order to meet the demanding requirements of robotics in a low cost context, we need to think outside the box and structure the challenges into solvable problems. As the following diagram shows, the functionality of a UGV can be divided into four main modules: first, Sensors, the parts the robot requires to sense the outside world; next, Accumulators, storing and supplying power; followed by Processors, the units processing the information gathered by the sensors; and finally, Actuators, which provide physical movement. These areas in turn are separated into further sections, which we will discuss one by one in the next posts.

[Diagram: the functionality of a UGV divided into the four main modules Sensors, Accumulators, Processors and Actuators]

ROS Basics – ROS in a low cost robotic context

UGVs as they are found in industry, education or do-it-yourself (DIY) communities are currently not affordable for average technology enthusiasts, teachers in schools or sometimes even universities. The concept of low cost robots tries to solve that issue.

What is low cost in a robotic context?

The traditional interpretation of low cost is minimizing expenses while keeping the most important features. Within the bounds of mostly expensive robotics, this term needs the same differentiation as between cheap, meaning a significantly reduced price and quality, and keen, meaning a certain level of quality maintained at a reduced total cost. For example, the 50,000 USD UBR-1 is a low cost version of the 250,000 to 400,000 USD PR2 by Willow Garage, but is still far away from cheap in the common sense. Another example, and at the same time another robot Melonee Wise worked on, is the TurtleBot, which was constructed with the aim of being the lowest cost version of a ROS robot at its time of creation.1

How to achieve low cost?

There is no general solution to this problem. But one approach in the robotic context is to replace expensive single purpose solutions, produced by companies in low quantities, with mass produced products that get customized to suit the application.

A demonstration of this positive misuse are the first versions of the TurtleBot. Instead of constructing the robot with an expensive 3D laser scanner, its creators used a Microsoft Kinect originating from the gaming industry. Furthermore, it used an iRobot Roomba and later an iRobot Create as a low cost mobile base, as constructing a custom movable footprint would have been far more expensive. The mass produced product also came at a lower cost and with an intact warranty. An important side effect of such replaceable parts is independence from unique, cost intensive and, due to customs regulations, sometimes hard to obtain parts. The freedom to choose a cheap replacement at any time reduces overall expenses and total risk.

As a consequence, a low cost UGV should be easy to build and reproduce, affordable for education and able to run ROS with some kind of 3D measuring device. It should further consist of easily obtainable and replaceable parts. In conclusion, these properties lead to a modular design concept with communication interfaces between the inexpensive components. A certain degree of flexibility is also required to maintain extensibility and independence from expensive parts.

ROS Basics – a short introduction to ROS

The Robot Operating System (ROS) is an open-source meta-operating system which provides essential features, namely hardware abstraction, low-level device control, environment functionality such as visualisation, simulation and testing, and message-passing between concurrently running processes [O’K13]. Furthermore, it offers implementations of commonly used functionality in installable packages, which cover even complex algorithms like Simultaneous Localization and Mapping (SLAM) and Visual Object Recognition (VOR).
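Such packages can usually be installed straight from the package manager, following the common ros-<distro>-<name> naming scheme. A hedged example for Indigo (assuming the gmapping SLAM package is the one you want):

sudo apt-get install ros-indigo-slam-gmapping #installs the gmapping SLAM stack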

ROS moreover contains tools and libraries for obtaining, building, writing and running code across multiple heterogeneous computers, and therefore includes language- and platform-independent tools. For example, ROS supports multiple client libraries, namely roscpp for C++, rospy for Python, roslisp for Lisp and many others. It is also possible to link application code against external libraries like OpenCV for computer vision or Eigen3 for efficient linear algebra computation. Furthermore, ROS can successfully be wrapped around other frameworks like the Player Project.

ROS is free of charge and has been developed as open source software under the BSD licence, which offers a variety of advantages for a low cost robot.

Unsurprisingly, the high complexity of ROS comes with one of the steepest learning curves of all robotic frameworks. Moreover, because main characteristics changed rapidly between major versions, nearly all books and most tutorials on the internet have become unreliable, which is often very confusing for beginners. But once past the top of the curve, a lot of things are self-explanatory and complex features can be implemented very quickly.

Unfortunately, another point to mention, and one of the main disadvantages of ROS, is the dependency on the ROS host and its Operating System (OS). If you do not develop on an x86 32-bit system, a lot of automations do not work and require patience to be fixed. In particular, package support on armhf, the ARM release repository, is not very usable yet. Additionally, despite the importance of reactivity and low latency, ROS is, like all comparable frameworks, not a realtime OS.

ROS general terminology

ROS is a message-based, concurrently running, heterogeneous peer-to-peer network application. Its structure can be imagined as a mostly undirected graph with an obligatory central process node, called roscore. Broadly speaking, this one master node tracks every other part of the robotic network, including running processes and their interfaces. The centralistic design uses its advantages by offering global debugging and error logging. It further mediates direct connections between any two graph nodes on request. This becomes very useful in cases like image processing, where running traffic over the central node would burden the global system by increasing network usage and processing load.

Still simplifying, the other parts of the graph are organized in namespaces containing rosnodes, which in turn may contain further rosnodes, and process edges called, depending on their function, rostopics or rosservices. A rosnode in a ROS environment can therefore be a robot, a processing server for navigation or even a human interaction device like a laptop. Usually rosnodes do not physically cross the border of a single computing system, but a single system can often run multiple namespaces. Also, rosnodes can profit from zero-copy shared memory handling between their topics by using the ROS nodelet manager, which significantly reduces memory consumption. Every rosnode offers at least one rostopic, a message provider that multiple peers can subscribe to, or a rosservice, a bidirectional one-to-one connection between peers carrying parameters.
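To make these terms tangible, the standard command line tools can be used to inspect a running graph. A small sketch (the topic /chatter and its message are made up for illustration):

rostopic list #shows all currently advertised rostopics
rostopic pub /chatter std_msgs/String "data: 'hello'" -r 1 #publish a string at 1 Hz
rostopic echo /chatter #subscribe and print incoming messages (run in a second terminal)
rosservice list #shows all currently offered rosservices
rosservice call /rosout/get_loggers #a bidirectional request/response to a node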

ROS history

In 2007, the first robot running a version of ROS was the STanford Artificial Intelligence Robot (STAIR), developed by the Stanford Artificial Intelligence Laboratory (SAIL). At that time ROS was called switchyard, but it already followed its main principles such as inter-process communication, concurrency and heterogeneous environments. After that, Willow Garage primarily developed ROS until February 2013, by which time ROS had reached the critical mass every open source project needs to survive without being mainly driven by external funding. Since then the stewardship of ROS has rested with the Open Source Robotics Foundation, and development subsequently left Willow Garage.

Major versions of ROS are called distributions and are named using adjectives that start with successive letters of the alphabet: box turtle, C Turtle, diamondback, electric, fuerte, groovy, hydro and finally Indigo, which has been available since May 2014.

ROS Basics – Using ROS Indigo/Jade with a webcam via the uvc_camera (USB Video Class) package

There are several ways to use ROS Indigo/Jade with a webcam. The one that works on most computers uses a ROS package called uvc_camera, created by Ken Tossell. UVC in this context stands for USB Video Class, a standard that covers almost all consumer webcams.

Unfortunately there is currently no step by step tutorial on how to use the package, which is why I created this page. In order to run the package, you will need a local catkin workspace as we created it in another post. This is because the packaged version is outdated and does not contain any launch files.

Step by Step Guide

We start by cloning the files into our workspace ./src directory, solving the dependencies with rosdep and finally building the workspace with catkin_make:

cd ~/catkin_ws/src/ #change directory to your source folder
git clone https://github.com/ktossell/camera_umd.git #clone the package from its repo
rosdep install camera_umd uvc_camera jpeg_streamer
cd .. #go one dir up to catkin_ws
catkin_make #build the workspace

Before I could build my workspace with the newly cloned files, I also had to install the video4linux support libraries in their development version:

sudo apt-get install libv4l-dev

After catkin_make has finished, you can launch the uvc_camera node:

roscd uvc_camera/launch/
roslaunch ./camera_node.launch

Provided a roscore is running, the camera_node.launch file should give you output like the following:

opening /dev/video0
pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)'
  discrete: 640x480:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 160x120:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 176x144:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x176:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 352x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 432x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 544x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 640x360:   1/30 1/25 1/20 1/15 1/10 1/5 
pixfmt 1 = 'MJPG' desc = 'MJPEG'
  discrete: 640x480:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 160x120:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 176x144:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x176:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 320x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 352x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 432x240:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 544x288:   1/30 1/25 1/20 1/15 1/10 1/5 
  discrete: 640x360:   1/30 1/25 1/20 1/15 1/10 1/5 
  int (Brightness, 0, id = 980900): 0 to 255 (1)
  int (Contrast, 0, id = 980901): 0 to 255 (1)
  int (Saturation, 0, id = 980902): 0 to 255 (1)
  bool (White Balance Temperature, Auto, 0, id = 98090c): 0 to 1 (1)
  int (Gain, 0, id = 980913): 0 to 255 (1)
  menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
    0: Disabled
    1: 50 Hz
    2: 60 Hz
  int (White Balance Temperature, 16, id = 98091a): 0 to 10000 (10)
  int (Sharpness, 0, id = 98091b): 0 to 255 (1)
  int (Backlight Compensation, 0, id = 98091c): 0 to 1 (1)
  menu (Exposure, Auto, 0, id = 9a0901): 0 to 3 (1)
  int (Exposure (Absolute), 16, id = 9a0902): 1 to 10000 (1)
  bool (Exposure, Auto Priority, 0, id = 9a0903): 0 to 1 (1)
Setting auto_focus is not supported
Setting focus_absolute is not supported

where you can see the supported run modes, which you can now configure in your custom launch file:

<launch>
  <node pkg="uvc_camera" type="uvc_camera_node" name="uvc_camera" output="screen">
    <param name="width" type="int" value="640" />
    <!-- width and height doubled (factor 2) compared to the default, as the output above lists 640x480 as supported -->
    <param name="height" type="int" value="480" />
    <param name="fps" type="int" value="30" />
    <param name="frame" type="string" value="wide_stereo" />

    <param name="auto_focus" type="bool" value="False" />
    <param name="focus_absolute" type="int" value="0" />
    <!-- other supported params: auto_exposure, exposure_absolute, brightness, power_line_frequency -->
    <!-- in case you want to use a different video input device, change the value below -->
    <param name="device" type="string" value="/dev/video0" /> 
    <param name="camera_info_url" type="string" value="file://$(find uvc_camera)/example.yaml" />
  </node>
</launch>
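To try this configuration, save it as a launch file inside the package and start it as before (the filename my_camera.launch is just a hypothetical example):

roscd uvc_camera/launch/
roslaunch ./my_camera.launch #launches the custom configuration shown above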

You can now start rqt and its plugin Visualization > Image View, choose e.g. the /image_raw topic and, in case you have a Club-Mate and a copy of The Hitchhiker’s Guide to the Galaxy by Douglas Adams around, you’ll get the following output:

[Screenshot: rqt Image View displaying the webcam’s /image_raw stream]

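If you prefer the command line over rqt, the stream can also be checked with the standard tools (assuming the default topic name /image_raw from above):

rostopic hz /image_raw #prints the measured frame rate, which should be close to the configured fps
rosrun image_view image_view image:=/image_raw #opens a minimal window displaying the stream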

ROS Basics – depthimage_to_laserscan with low cost depth sensors Asus Xtion or Microsoft Kinect

Most of my work depended on the efficient connection between the Asus Xtion and the CubieTruck as a low cost laser scanner. As the Asus Xtion usually delivers 3D sensor_msgs/PointCloud data and most SLAM algorithms need 2D sensor_msgs/LaserScan messages to work properly, we need a solution to this issue: depthimage_to_laserscan.

If you have already managed to use ros-indigo-openni2-camera and ros-indigo-openni2-launch, you can use the following code:

<!-- this code originates from https://github.com/turtlebot/turtlebot/blob/hydro/turtlebot_bringup/launch/3dsensor.launch -->
<launch>
  <!-- "camera" should uniquely identify the device. All topics are pushed down
       into the "camera" namespace, and it is prepended to tf frame ids. -->
  <arg name="camera"      default="camera"/>
  <arg name="publish_tf"  default="true"/>

  <!-- Factory-calibrated depth registration -->
  <arg name="depth_registration"              default="true"/>
  <arg     if="$(arg depth_registration)" name="depth" value="depth_registered" />
  <arg unless="$(arg depth_registration)" name="depth" value="depth" />

  <!-- Processing Modules -->
  <arg name="rgb_processing"                  default="true"/>
  <arg name="ir_processing"                   default="true"/>
  <arg name="depth_processing"                default="true"/>
  <arg name="depth_registered_processing"     default="true"/>
  <arg name="disparity_processing"            default="true"/>
  <arg name="disparity_registered_processing" default="true"/>
  <arg name="scan_processing"                 default="true"/>

  <!-- Worker threads for the nodelet manager -->
  <arg name="num_worker_threads" default="4" />

  <!-- Laserscan topic -->
  <arg name="scan_topic" default="scan"/>

  <include file="$(find openni2_launch)/launch/openni2.launch">
    <arg name="camera"                          value="$(arg camera)"/>
    <arg name="publish_tf"                      value="$(arg publish_tf)"/>
    <arg name="depth_registration"              value="$(arg depth_registration)"/>
    <arg name="num_worker_threads"              value="$(arg num_worker_threads)" />

    <!-- Processing Modules -->
    <arg name="rgb_processing"                  value="$(arg rgb_processing)"/>
    <arg name="ir_processing"                   value="$(arg ir_processing)"/>
    <arg name="depth_processing"                value="$(arg depth_processing)"/>
    <arg name="depth_registered_processing"     value="$(arg depth_registered_processing)"/>
    <arg name="disparity_processing"            value="$(arg disparity_processing)"/>
    <arg name="disparity_registered_processing" value="$(arg disparity_registered_processing)"/>
  </include>

   <!--                        Laserscan 
     This uses lazy subscribing, so will not activate until scan is requested.
   -->
  <group if="$(arg scan_processing)">
    <node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan" args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet $(arg camera)/$(arg camera)_nodelet_manager">
      <!-- Pixel rows to use to generate the laserscan. For each column, the scan will
           return the minimum value for those pixels centered vertically in the image. -->
      <param name="scan_height" value="10"/>
      <param name="output_frame_id" value="/$(arg camera)_depth_frame"/>
      <param name="range_min" value="0.45"/>
      <remap from="image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="scan" to="$(arg scan_topic)"/>

      <remap from="$(arg camera)/image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="$(arg camera)/scan" to="$(arg scan_topic)"/>
    </node>
    
  </group>
</launch>

As you can see, depthimage_to_laserscan gets loaded as a nodelet into the camera’s nodelet manager.

Nodelets are designed to provide a way to run multiple algorithms on a single machine, in a single process, without incurring copy costs when passing messages intraprocess. (quote from the ROS wiki)

They hugely improve the performance of our 3D point cloud processing and allow significantly higher publishing rates.
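For illustration, the same nodelet could also be loaded by hand; a small sketch (my_manager is a hypothetical manager name, while the launch file above reuses the camera’s own manager):

rosrun nodelet nodelet manager __name:=my_manager #starts an empty nodelet manager
rosrun nodelet nodelet load depthimage_to_laserscan/DepthImageToLaserScanNodelet my_manager #loads the nodelet into it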

An important property of a robot is the rate of data creation. A low rate influences most higher-level algorithms and leads to incorrect results. In most cases especially the depth sensors are required to publish sufficient material to create detailed maps. The Asus Xtion Pro driver OpenNI2 and the ROS package openni2_camera offer multiple run modes, which can be set via dynamic_reconfigure (I suggest using it in combination with rqt). Another essential option influencing performance is the data_skip parameter, which allows the system to skip a certain number of frames the hardware produces before loading them into memory, thereby remarkably reducing computational load. It can be set to an integer value between zero, meaning no frames are skipped at all, and ten, meaning only every eleventh frame is processed.
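As a sketch, data_skip can be changed at runtime either via the rqt_reconfigure GUI or with the dynparam command line tool (the node name /camera/driver is an assumption based on the openni2_launch defaults):

rosrun rqt_reconfigure rqt_reconfigure #GUI variant
rosrun dynamic_reconfigure dynparam set /camera/driver data_skip 4 #process only every fifth frame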

Performance Check

The different combinations of resolutions, maximum frame rates and the data_skip parameter, run on the aMoSeRo (my low cost CubieTruck robot), are illustrated in the table below. As can be seen, it is especially the number of frames that have to be processed per second that highly influences the complete system.

[Table: measured combinations of resolution, frame rate and data_skip on the aMoSeRo]
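Such rates can be reproduced with the standard ROS tooling; a minimal sketch (the topic names assume the openni2_launch defaults used above):

rostopic hz /camera/depth/image_raw #measured depth image rate
rostopic hz /scan #resulting laserscan rate
rostopic bw /camera/depth/image_raw #bandwidth consumed by the depth images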

In conclusion, the depthimage_to_laserscan package is really useful when working with low cost depth sensors like the Asus Xtion or the Microsoft Kinect. It is furthermore essential when interfacing SLAM algorithms.

ROS Basics – Step by step guide to a working ROS Indigo Ubuntu 14.04 Laptop/PC

We are beginning with a blank Xubuntu 14.04 Trusty x86 on a Lenovo Thinkpad T520. Any other working Ubuntu 14.04 x86 installation should be compatible with this tutorial.

Setup Ubuntu environment:

If you are a complete beginner with Linux and Ubuntu, I would advise you to install several tools that are necessary, or at least helpful, while working with ROS. To install them, use the following command and allow sudo to run with administrative permissions by entering your password when asked:

sudo apt-get install fail2ban ufw terminator git

In short, fail2ban is an advanced firewall tool that protects you from bruteforce attacks, and ufw is a ‘human readable interface’ to iptables that allows easy organisation of firewall rules. Next, terminator is a terminal multiplexer that provides multiple terminals at once without leaving the keyboard. Another essential tool is git, a source code versioning system.

There are more helpful tools, which can be considered optional:

sudo apt-get install vim vnstat htop bmon chromium-browser

Setup ROS desktop environment Ubuntu 14.04 Trusty:

To install ROS itself we can simply follow the well written tutorial provided by the ROS wiki: http://wiki.ros.org/indigo/Installation/Ubuntu.

In short, the commands are as shown below:

  • sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
  • wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -
  • sudo apt-get update
  • sudo apt-get install ros-indigo-desktop-full

Setup ~/.bashrc part I:

In order to work correctly, ROS requires several bash environment variables that are not very well documented in the install tutorial. You can either enter the following commands every time you start a new bash, or add them to ~/.bashrc, the script that gets executed every time a bash starts.

The most important command:

source /opt/ros/indigo/setup.bash

enables bash to provide all ROS related commands like roscore  and rostopic .
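A quick sanity check that the environment works (each command in its own sourced terminal):

roscore #starts the ROS master
rostopic list #run in a second terminal; should at least show /rosout and /rosout_agg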

In order to work in a network environment (see the ROS wiki), ROS also requires three more variables, namely:

export ROS_MASTER_URI="http://127.0.0.1:11311"
export ROS_HOSTNAME="127.0.0.1"
export ROS_IP="127.0.0.1"

Here ROS_MASTER_URI defines the IP location of the roscore, and the other two define the IP of the local instance. As you can see, in the example all IPs are the localhost IP 127.0.0.1; they need to be changed accordingly in order to work properly.
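As a hedged example of a typical two machine setup (the addresses 192.168.1.10 and 192.168.1.20 are made up), a robot running the roscore and a laptop connecting to it would export:

#on the robot (192.168.1.10), which runs the roscore:
export ROS_MASTER_URI="http://192.168.1.10:11311"
export ROS_HOSTNAME="192.168.1.10"
export ROS_IP="192.168.1.10"

#on the laptop (192.168.1.20), pointing to the robot's master:
export ROS_MASTER_URI="http://192.168.1.10:11311"
export ROS_HOSTNAME="192.168.1.20"
export ROS_IP="192.168.1.20"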

To simplify the IP settings, I suggest modifying the commands like this:

export ROS_MASTER_URI="http://`ifconfig wlan0 | grep "inet " | awk -F'[: ]+' '{ print $4 }'`:11311"
export ROS_HOSTNAME="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"
export ROS_IP="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"

This sets the IPs to the address of the local wlan0 adapter.

Create catkin workspace:

To use non-packaged ROS packages, or the latest versions that have not yet been compiled into the repository, you’ll need a local catkin workspace. Catkin is the ROS build tool required to build packages from source. It allows multiple programming languages per package and handles linking dependencies. To create a local workspace you can follow the ROS wiki tutorial: http://wiki.ros.org/catkin/Tutorials/create_a_workspace.

In short, you can also follow these commands:

  • mkdir -p ~/catkin_ws/src
  • cd ~/catkin_ws/src
  • catkin_init_workspace

We will now build the empty work space as a first test:

  • cd ~/catkin_ws/
  • catkin_make
  • source devel/setup.bash

Setup ~/.bashrc part II:

We also need to reference the newly created local workspace in our ~/.bashrc. Without doing that, tools like roslaunch and rosrun would not be able to find our custom packages.

source ~/catkin_ws/devel/setup.bash

You can now build, clone or fork custom packages, and can therefore call your PC a working ROS Indigo environment! 🙂
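As a first exercise, you could create an empty package in the new workspace (my_first_package and its dependency list are just placeholders):

cd ~/catkin_ws/src
catkin_create_pkg my_first_package std_msgs rospy roscpp #creates package.xml and CMakeLists.txt
cd ~/catkin_ws && catkin_make #rebuild so the ROS tools can find the new package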

ROS Tools you can try: