Tag Archives: ROS Indigo

ROS Basics – Step by step guide to a working ROS Indigo Ubuntu 14.04 Laptop/PC

We are beginning with a blank Xubuntu 14.04 Trusty x86 on a Lenovo Thinkpad T520. Any other working Ubuntu 14.04 x86 installation should be compatible with this tutorial.

Setup Ubuntu environment:

If you are a complete beginner with Linux and Ubuntu, I would advise you to install several tools that are necessary, or at least helpful, when working with ROS. To install them, use the following command and confirm the sudo prompt with your password so it runs with administrative permissions:

sudo apt-get install fail2ban ufw terminator git

In short: fail2ban is an intrusion prevention tool that protects you from brute-force login attempts, and ufw is a ‘human readable interface’ to iptables that allows easy organisation of firewall rules. Next, terminator is a terminal emulator that provides multiple terminals at once without leaving the keyboard while operating. Another essential tool is git, a source code version control system.
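If you decide to use ufw and fail2ban, a minimal setup could look like the following sketch; the allowed service (ssh) is just an example and should be adapted to what your machine actually runs:

# allow SSH before enabling the firewall so you don't lock yourself out
sudo ufw allow ssh
sudo ufw enable
# check that fail2ban is running and which jails are active
sudo service fail2ban status
sudo fail2ban-client status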

There are more tools that are helpful, but can be considered as optional:

sudo apt-get install vim vnstat htop bmon chromium-browser

Setup ROS desktop environment Ubuntu 14.04 Trusty:

To install ROS itself we can simply follow the well written tutorial provided by the ROS wiki: http://wiki.ros.org/indigo/Installation/Ubuntu.

In short, the commands are as shown below:

  • sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
  • wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -
  • sudo apt-get update
  • sudo apt-get install ros-indigo-desktop-full
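The wiki install page also recommends initializing rosdep, which is used later on to install system dependencies for packages built from source; roughly:

sudo rosdep init
rosdep update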

Setup ~/.bashrc part I:

In order to work correctly, ROS requires several bash environment variables that are not very well documented in the install tutorial. You can either enter the following commands every time you start a new bash, or add them to ~/.bashrc, the script that gets executed every time a bash is started.

The most important command:

source /opt/ros/indigo/setup.bash

enables bash to provide all ROS related commands like roscore and rostopic.
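If you want this to happen automatically, you can append the line to ~/.bashrc and then check that the right distribution is picked up, for example like this:

echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
source ~/.bashrc
rosversion -d   # should print: indigo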

In order to work in a network environment (see the ROS wiki), ROS also requires three more variables, namely:

export ROS_MASTER_URI="http://127.0.0.1:11311"
export ROS_HOSTNAME="127.0.0.1"
export ROS_IP="127.0.0.1"

ROS_MASTER_URI defines the IP address and port of the roscore, while the other two define the IP of the local instance. As you can see, in the example all IPs are set to the localhost address 127.0.0.1 and need to be changed accordingly for a networked setup to work properly.
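As an illustration, a typical two-machine setup could look like the sketch below; the addresses 192.168.1.10 (robot running the roscore) and 192.168.1.20 (laptop) are made up and only serve as placeholders:

# on the robot (runs the roscore):
export ROS_MASTER_URI="http://192.168.1.10:11311"
export ROS_HOSTNAME="192.168.1.10"
export ROS_IP="192.168.1.10"

# on the laptop (connects to the robot's roscore):
export ROS_MASTER_URI="http://192.168.1.10:11311"
export ROS_HOSTNAME="192.168.1.20"
export ROS_IP="192.168.1.20"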

To simplify the IP settings, I suggest modifying the commands like this:

export ROS_MASTER_URI="http://`ifconfig wlan0 | grep "inet " | awk -F'[: ]+' '{ print $4 }'`:11311"
export ROS_HOSTNAME="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"
export ROS_IP="`ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'`"

This sets the IPs to the address of the local wlan0 adapter (adjust the interface name if your machine uses a different one).
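After sourcing, a quick sanity check is to print the resolved variables:

env | grep "^ROS_"   # ROS_MASTER_URI, ROS_HOSTNAME and ROS_IP should show your wlan0 address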

Create catkin workspace:

To use non-packaged ROS packages or the latest versions that have not yet been built into the repositories, you’ll need a local catkin workspace. Catkin is the ROS build system that is required to build packages from source. It allows multiple programming languages per package and handles linking dependencies. To create a local workspace you can follow the ROS wiki tutorial: http://wiki.ros.org/catkin/Tutorials/create_a_workspace.

In short, you can also follow these commands:

  • mkdir -p ~/catkin_ws/src
  • cd ~/catkin_ws/src
  • catkin_init_workspace

We will now build the empty workspace as a first test:

  • cd ~/catkin_ws/
  • catkin_make
  • source devel/setup.bash
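If catkin_make finishes without errors, you can create a first package inside the workspace to verify that everything is wired up; the package name my_first_package is of course just an example:

cd ~/catkin_ws/src
catkin_create_pkg my_first_package std_msgs rospy roscpp
cd ~/catkin_ws
catkin_make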

Setup ~/.bashrc part II:

We also need to reference the newly created local workspace in our ~/.bashrc. Without that, tools like roslaunch and rosrun wouldn’t be able to find our custom packages.

source ~/catkin_ws/devel/setup.bash
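Again, you can append this line to ~/.bashrc and verify that the workspace shows up in the package path; roughly like this:

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
echo $ROS_PACKAGE_PATH   # should now start with /home/<user>/catkin_ws/src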

You can now build, clone or fork custom packages, and can therefore call your PC a working ROS Indigo environment! 🙂

ROS Tools you can try:

ROS Transformation and Odometry based on URDF Convention REP103

In order to get ROS working correctly, you need a lot of things to be set up according to ROS-defined conventions: for instance the ‘Standard Units of Measure and Coordinate Conventions’ (REP103), which clearly defines which units a geometry_msgs/Twist should use and in which direction your robot needs to face inside its URDF file (x forward, y left, z up).

For me this meant redoing the driving direction of my xacro (URDF macro) defined robot and its corresponding tf links. Since I have already gained some experience in creating robot models, I tried to improve the model a bit too:

I will explain the robot’s hardware setup in another post as soon as it’s possible to drive it via keyboard teleop while publishing accurate odometry. Odometry messages aren’t simply ROS transformations like moving parts of the robot: because the robot belongs to the physical world, where for example friction exists and wheels can jam, all the calculated position data needs to be verified. Sensor data suited for this task includes ultrasonic sensor ranges, motor potentiometer or stepper motor positions, or OpenNI2 data provided by the Asus Xtion. After publishing these odometry messages to the /odom topic, the ROS navigation packages can generate geometry_msgs/Twist messages to correct the position so that it matches the simulation again in case there has been some deviation.
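Until that post is ready, a rough way to test such a setup from the command line is to drive the robot with a keyboard teleop node while watching the odometry and the odom→base_link transform; the sketch below assumes the teleop_twist_keyboard package is installed and that your frames are actually named odom and base_link:

# drive the robot by keyboard (publishes geometry_msgs/Twist on /cmd_vel)
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
# in a second terminal: watch the odometry the robot reports
rostopic echo /odom
# in a third terminal: watch the resulting transformation
rosrun tf tf_echo odom base_link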

More on this topic soon.

ROS Hydro / Indigo PointCloud to Laserscan for 2D Navigation

UPDATE: see my step by step guide for depthimage_to_laserscan here.

PointClouds cause a lot of processing and network traffic load. For 2D navigation, LaserScans are a good option to reduce these loads, in my case to below 30% of the original.
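To get an idea of the difference on your own setup, you can compare the bandwidth of the point cloud topic and the resulting laser scan topic with rostopic bw; the topic names below assume the default openni2 camera namespace:

rostopic bw /camera/depth/points
rostopic bw /scan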

So after starting my customized openni2_launch (the launcher for openni_camera and both ROS drivers) with a custom OpenNI2 driver directory:

OPENNI2_DRIVERS_PATH=/usr/lib/OpenNI2/Drivers/ roslaunch /home/user/catkin_ws/src/robot_bringup/launch/openni2.launch

(OPENNI2_DRIVERS_PATH fixes all issues with the camera driver nodelet)

it is possible to process this down to a /scan topic by running:

rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw

or directly in a launch file using the handy nodelet manager:

  <group>
    <node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan" args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet $(arg camera)/$(arg camera)_nodelet_manager">
      <param name="scan_height" value="10"/>
      <param name="output_frame_id" value="/$(arg camera)_depth_frame"/>
      <param name="range_min" value="0.45"/>
      <remap from="image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="scan" to="$(arg scan_topic)"/>

      <remap from="$(arg camera)/image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="$(arg camera)/scan" to="$(arg scan_topic)"/>
    </node>
  </group>
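Either way, you can quickly check that the conversion is running and producing data before opening rviz (note that $(arg camera), $(arg depth) and $(arg scan_topic) in the snippet above have to be declared in the surrounding launch file):

rostopic hz /scan
rostopic echo -n 1 /scan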

After that it is possible to visualize the new topic using rviz, getting something like this:

Of course the robot is still able to laserscan with the Xtion (now in rainbow colors indicating the z value of the camera).