Tag Archives: ROS

aMoSeRo – first Simultaneous Planning Localization and Mapping (SPLAM)

Far from done, but well on the way – the aMoSeRo did its first 2D path planning today. There are still a lot of adjustments needed before the mapping works properly, but it is already impressive to see ROS working.

first SPLAM – first navigation through a map

The node graph is still growing and will need some changes before it can be used with multiple robots, but the organising goes on 🙂

first SPLAM – it is getting even worse, more topics, more nodes

aMoSeRo – very first mapping view available

After days of LaTeX and of struggling with all the sensor data a mobile robot needs, today is the first day ROS showed me a small map view. It's anything but stable and I can't claim to understand everything yet – but because I haven't had anything to report for some time now, here is a small demonstration:

First Mapping Experience

Screenshot of the map view

Topics overview – amosero, a distributed system still far from optimal

Because my IMU doesn't work as it should, I've used a WiiMote with Motion Plus and run it via a common ROS driver over Bluetooth.
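For reference, bringing up the WiiMote looks roughly like this – a minimal sketch assuming the standard wiimote package from the joystick_drivers stack and its usual topic names:

    # pair the WiiMote over Bluetooth (press 1+2 on the controller when prompted) and start the driver
    rosrun wiimote wiimote_node.py
    # the IMU readings should then show up on a topic such as /imu/data
    rostopic echo /imu/data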

Yeeeha 🙂

ROS Hydro / Indigo PointCloud to Laserscan for 2D Navigation

UPDATE: see my step-by-step guide for depthimage_to_laserscan here.

PointClouds require a lot of processing power and network bandwidth. For 2D navigation, LaserScans are a good option to reduce that load – in my case to below 30% of the original.

So after starting my customised openni2_launch (which wraps openni_camera and both ROS drivers) with a custom OpenNI2 driver directory:
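Roughly like this – a sketch assuming a stock openni2_launch install, with the driver directory only a placeholder path:

    # point the camera driver nodelet to a custom OpenNI2 driver directory (placeholder path)
    export OPENNI2_DRIVERS_PATH=/path/to/OpenNI2/Drivers
    # bring up the Xtion (publishes /camera/depth/points among other topics)
    roslaunch openni2_launch openni2.launch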

(OPENNI2_DRIVERS_PATH fixes all issues with the camera driver nodelet)

it is possible to process the point cloud down to a /scan topic with:
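A sketch of the standalone variant, assuming the pointcloud_to_laserscan package (Indigo-style nodelet name) and the Xtion cloud on /camera/depth/points:

    # run the converter as a standalone nodelet and remap its input to the camera cloud
    rosrun nodelet nodelet standalone pointcloud_to_laserscan/pointcloud_to_laserscan_nodelet cloud_in:=/camera/depth/points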

or directly in a launch file using the handy nodelet manager:
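In launch-file form that could look roughly like the following – again only a sketch, with the manager name and the remappings as assumptions:

    <launch>
      <!-- a nodelet manager to load the converter into -->
      <node pkg="nodelet" type="nodelet" name="scan_manager" args="manager" output="screen"/>

      <!-- load the pointcloud_to_laserscan nodelet and wire it to the Xtion cloud -->
      <node pkg="nodelet" type="nodelet" name="pointcloud_to_laserscan"
            args="load pointcloud_to_laserscan/pointcloud_to_laserscan_nodelet scan_manager">
        <remap from="cloud_in" to="/camera/depth/points"/>
        <remap from="scan" to="/scan"/>
      </node>
    </launch>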

After that it is possible to visualize the new topic in rviz, getting something like this:
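A quick way to check and view the result (nothing special, just the standard tools):

    # verify the converted scan is actually being published
    rostopic hz /scan
    # start rviz, then add a LaserScan display subscribed to /scan
    rosrun rviz rviz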

Of course the robot is still able to produce laser scans with the Xtion (now shown in rainbow colours indicating the z value of the camera).

ROS is about: simulation, simulation and simulation …

ROS needs to know everything about the physics of a robot. It starts with the dimensions, which are needed to avoid collisions – both with the outside world and with the robot itself (e.g. if it is using two robot arms at once). Furthermore it is relevant where the sensors are – or in my case where the Asus Xtion is located in relation to the robot's base. Another interesting piece of information is the robot's joints, which are needed to drive the wheels and to rotate the camera. For all that, a detailed description and representation of the robot in a format that a computer understands is essential.
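In ROS this description usually takes the form of a URDF file. The fragment below is only an illustrative sketch with made-up link names and dimensions, not the actual amosero description – it shows how a base, a drive wheel joint and a fixed camera mount can be expressed:

    <robot name="amosero_sketch">
      <!-- main body of the robot (made-up dimensions) -->
      <link name="base_link">
        <visual>
          <geometry><box size="0.3 0.2 0.1"/></geometry>
        </visual>
      </link>

      <!-- one drive wheel, attached with a continuous joint so it can spin -->
      <link name="left_wheel_link">
        <visual>
          <geometry><cylinder radius="0.05" length="0.02"/></geometry>
        </visual>
      </link>
      <joint name="left_wheel_joint" type="continuous">
        <parent link="base_link"/>
        <child link="left_wheel_link"/>
        <origin xyz="0 0.12 -0.03" rpy="-1.5708 0 0"/>
        <axis xyz="0 0 1"/>
      </joint>

      <!-- the Xtion, mounted at a fixed position relative to the base -->
      <link name="camera_link"/>
      <joint name="camera_joint" type="fixed">
        <parent link="base_link"/>
        <child link="camera_link"/>
        <origin xyz="0.1 0 0.15" rpy="0 0 0"/>
      </joint>
    </robot>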

Today I've made a huge step in the simulation field, so struggling with the motors in the real world for the last few days doesn't feel too bad – at least I can generate some nice pictures now:

For me an interesting journey has started, with a lot of ups and downs – and I'm really excited to see where we will be in 18 weeks, because 2 weeks of my thesis have already passed.