This surely doesn’t look amazing from the outside – but this has been a day of hard work and a very important one.
first EKF – the package graph
first EKF – the new TF-Tree
first EKF – slamming, mapping
The robot_pose_ekf package is working! It is neither the setup I am going to use nor is it very stable – but it proves some points.
First I needed to write my own IMU driver for ROS – 9 degrees of freedom (DOF) for 30€ and a bag of problems. This is a lot cheaper than the often used Razor IMU from SparkFun (around 100€), which has existing ROS code, and exactly 3 DOF better than the WiiMote (with Motion+, also about 60–70€) I have been experimenting with. There is still a lot of work left to improve the stability and the calibration of the LSM9DS0, and there was a lot more to find out about magnetic fields and strange units that needed converting into other ones than I would have ever expected – but so far the /imu_data topic serves some not totally wrong data. Normalisation minimizes the issues with the 3–5 scales per axis I had to deal with.
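The calibration and normalisation step can be sketched as plain Python. The scale and offset constants below are hypothetical placeholders – real values have to be measured on your own LSM9DS0 – but the structure shows how raw counts become REP103-conformant units and how normalising the magnetometer vector sidesteps per-axis scale mismatches:

```python
import math

# Hypothetical calibration constants -- replace with values measured
# on your own sensor. After conversion, units follow REP103 (m/s^2).
ACCEL_LSB_TO_MS2 = 0.000061 * 9.80665    # +-2 g range: 0.061 mg/LSB
MAG_OFFSET = (12.0, -8.0, 3.0)           # hard-iron offsets, raw counts

def accel_to_ms2(raw):
    """Convert raw accelerometer counts to m/s^2 per axis."""
    return tuple(r * ACCEL_LSB_TO_MS2 for r in raw)

def normalized_mag(raw):
    """Subtract the hard-iron offset and scale to a unit vector,
    so differing per-axis scales matter less for heading estimation."""
    centred = [r - o for r, o in zip(raw, MAG_OFFSET)]
    norm = math.sqrt(sum(c * c for c in centred))
    if norm == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(c / norm for c in centred)

def heading_deg(mag_unit):
    """Yaw in degrees from the horizontal magnetometer components
    (valid only while the sensor is held roughly flat)."""
    return math.degrees(math.atan2(mag_unit[1], mag_unit[0])) % 360.0
```

A driver node would run this conversion on every raw sample before filling the sensor_msgs/Imu message it publishes on /imu_data.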
Next I needed to increase my bad odometry sources – if not in quality, then at least in quantity – and added a GPS sensor as the /vo topic. The CubieTruck is abstruse when handling some supposedly easy things, like using one of its eight possible UART connections… for today I’ve not managed to get it running over UART, only with an external USB serial controller. Another area that needs heavy improvement…
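To feed GPS fixes into the /vo topic they first have to become metric coordinates relative to a reference point. A minimal sketch of that projection, using a spherical-Earth equirectangular approximation (plenty accurate over a yard-sized area; x east, y north, per REP103):

```python
import math

EARTH_RADIUS = 6371000.0  # metres, spherical approximation

def gps_to_local_xy(lat, lon, ref_lat, ref_lon):
    """Project a GPS fix (degrees) to metres relative to a reference
    fix, using the equirectangular approximation. x points east,
    y points north."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    x = EARTH_RADIUS * d_lon * math.cos(math.radians(ref_lat))
    y = EARTH_RADIUS * d_lat
    return x, y
```

The first valid fix can serve as the reference; every later fix then yields an (x, y) that a node can wrap into the nav_msgs/Odometry message robot_pose_ekf expects on /vo.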
Finally I had to deal with the REP105 issue that I described in a previous post. The gmapping algorithm needed adjusted parameters, which came along with some serious confusion caused by inconsistent syncing across my four working devices (rosBrain with roscore and slamming, rosDev – my development machine, the aMoSeRo, and the seafile backup server)…
…but after that all parts worked together for the first time, including a simulated aMoSeRo moving on screen while being held at different angles in the real world.
Everything is still far from done – but right on the way – and we’ve already found out how wide the road is 🙂
For reasons of documentation, and because I had never done something like that before (fun), I have created a little video of the aMoSeRo driving around in the yard today. It is my longest video so far and I hope you enjoy it:
Far from done, but right on the way – the aMoSeRo did its first 2D planning today. There are still a lot of adjustments needed for the mapping to work properly, but it’s already impressive to see ROS working.
first SPLAM – first navigation through a map
The node graph still grows and will need some changes when used with multiple robots, but organising goes on 🙂
After days with LaTeX and struggling with all the sensor data a mobile robot needs, today is the first day of ROS showing me a small map view. It’s anything but stable and I can’t claim to understand everything – but because I haven’t had anything to report for some time now, here is a small demonstration:
Topics Overview – the amosero, a distributed system still far from optimal
Because my IMU doesn’t do its work as it should, I’ve used a WiiMote with Motion+ and run it with a common ROS driver over Bluetooth.
ROS is amazing. After installing the Xbox controller driver (xboxdrv) on Linux, following some well written instructions and writing about 100 lines of my own code – the aMoSeRo is now able to be controlled by an Xbox controller.
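The core of such a teleop node is just a mapping from normalised stick axes to REP103 velocities. A minimal sketch of that mapping as a plain function – the speed limits and deadzone below are hypothetical values, not the ones from my actual node:

```python
def joy_to_twist(axis_forward, axis_turn,
                 max_lin=0.3, max_ang=1.5, deadzone=0.1):
    """Map gamepad stick axes in [-1, 1] to (linear x in m/s,
    angular z in rad/s) for a geometry_msgs/Twist. The deadzone
    cuts off stick noise so the robot stands still when untouched."""
    def shape(v):
        return 0.0 if abs(v) < deadzone else v
    return shape(axis_forward) * max_lin, shape(axis_turn) * max_ang
```

In the real node these two numbers would be written into twist.linear.x and twist.angular.z and published on /cmd_vel at a fixed rate.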
Driving the robot around the house revealed the real power behind the two RB-35 motors (1:30). Not too fast to control, but very strong – driving over piles of books, the motors seemed to be a good choice.
Xbox – day 1
Xbox – day 1 front
Xbox – day 1 top
Some issues with the wheels – a lot of force being at work, especially along the positive and negative y-axis (see the REP103 post) – will be solved soon by some super glue 🙂
So demonstrating the robot in the future will be a lot easier and more controllable – and a lot more fun!
Today we’ve had the honor of informing young high school students about the education possibilities of the Technical University Bergakademie Freiberg at their Open Day. In four hours I’ve learned how to explain everything about the aMoSeRo in a few sentences. Sadly we weren’t able to drive around because everything was very crowded, but we could demonstrate the 3D PointClouds a bit. So everybody was able to see the mathematics behind informatics by example 🙂
The only chance to take some photos was before the day started, but here are some impressions:
Today the BHT, a mining research forum in Freiberg, Germany, took place. As the amosero should someday run as a support robot in mining, this was a great chance to show off for the first time what we’ve got so far. After 4 weeks from zero to robot:
BHT – front view
BHT – 10:05 am waiting in front of the lecture hall
BHT – other drones are near by
BHT – moments before the presentation, the IR Camera is clearly running
BHT – during the lecture held by my academic advisor
BHT – view inside the box – not yet well organized, but working
So we were able to demonstrate the [amazon &title=Asus Xtion&text=Asus Xtion] features like a live IR image, some 1 fps RGB DepthCloud visualized in RVIZ, and driving around including spot turns.
The sheet-metal cookie box we used had a negative effect on the WLAN reception, which we need to address soon, e.g. by changing the material or placing the antenna outside.
It has been a nice experience showing that little low-cost ROS robot to the public, and I am still very excited about where the journey leads in the remaining 4 months of my thesis.
It’s still not easy finding the right combination and arrangement of all robot parts. As mentioned in the previous post, SketchUp is a nice tool for easy 3D visualization using real physical dimensions. So I spent some time on it again:
NEMA17 probably getting replaced by RB-35s
concept phase II – ISO
concept phase II – ISO Xray
concept phase II – front
concept phase II – xray
concept phase II – top
concept phase II – top wireframe
concept phase II – back
concept phase II – back Xray
concept phase II – bottom Xray with dimensions
concept phase II – front wireframe with dimensions
So tomorrow I am trying to buy the planned box and the new motors, hopefully posting real-world photos soon.
Today I am trying to set up all parts of the coming robot. Because assembling and disassembling would take hours until I found the right configuration, and saving intermediate steps would be impossible, I thought of a better way. Using my shopworn SketchUp skills and way more time than I expected, there is finally a non-perfect but practicable model with the most important parts that are going to be installed. All of them already have the correct physical dimensions, which also means that it would be possible to deploy the robot in rviz later, or at least parts of it.
Here are some early stage impressions:
concept phase: iso view
concept phase: bottom view
concept phase: back view
concept phase: front view
The wireframe boxes represent the space required by USB plugs or power jacks. These need to be accessible and can’t be blocked by anything else.
This robot is not ready yet; everything needs to be rearranged and boxed soon. Some parts are still missing, and no cables are shown, so everything will be more packed than it looks.
In order to get ROS working correctly, you need a lot of things to be set up according to ROS-defined conventions: for instance the ‘Standard Units of Measure and Coordinate Conventions’ (REP103), which clearly defines which units geometry_msgs.Twist should use and in what direction of movement your robot needs to be oriented inside its URDF file.
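A quick sketch of what REP103 means in practice for a Twist: linear velocities in m/s, angular velocities in rad/s, x forward, z up. The helper below converts more "human" units into the convention before they would be filled into a message (the function name is my own, for illustration):

```python
import math

def twist_from_human(kmh_forward, deg_per_s_turn):
    """Convert a forward speed in km/h and a turn rate in deg/s into
    the REP103 units a geometry_msgs/Twist expects:
    (linear.x in m/s, angular.z in rad/s, counter-clockwise positive)."""
    linear_x = kmh_forward / 3.6               # km/h -> m/s
    angular_z = math.radians(deg_per_s_turn)   # deg/s -> rad/s
    return linear_x, angular_z
```

Getting these units wrong is easy to miss in simulation and very visible on real hardware, which is exactly why the convention exists.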
For me this meant redoing my xacro (URDF macro) defined robot driving direction and its corresponding tf links. Since I’ve already gained some experience in creating robot models, I tried to improve it a bit too:
Now the robot has modeled tracks and pretty much its real physical dimensions
The xtion is a SketchUp .dae file I’ve borrowed from the turtlebot
now the robot aims towards the positive x-axis
the frame tree slightly grew
of course the robot is still able to laserscan with the xtion (now in rainbow indicating z of the camera)
even while publishing a fake transform
driving further, adding a decay time of 10
and activated point cloud
the laserscan shows data according to the camera position relative to the robot’s base link
I will explain the robot’s hardware setup in another post as soon as it’s possible to run it by keyboard teleop while publishing its accurate odometry. Odometry messages aren’t simply ROS transformations like moving parts of the robot. Because the robot belongs to the physical world, where for example friction exists and wheel jamming can happen, all the calculated position data needs to be verified. Qualified for this task is sensor data like ultrasonic sensor ranges, motor potentiometer or stepper motor positions, or OpenNI2 data provided by the [amazon &title=Xtion&text=Asus Xtion]. After publishing these odometry messages to the /odom topic, the ROS navigation packages can generate geometry/Twist messages to correct the position to match the simulation again in case there has been some deviation.
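The dead-reckoning part of such odometry can be sketched in a few lines. This is a minimal differential-drive integration step, assuming a hypothetical track separation of 0.20 m (not the aMoSeRo’s measured value); a real node would run it per encoder update and fill the result into a nav_msgs/Odometry message:

```python
import math

WHEEL_BASE = 0.20  # metres between the tracks -- a hypothetical value

def integrate_odometry(x, y, theta, d_left, d_right):
    """One odometry step for a differential drive: given the distance
    (m) each side travelled since the last update, return the new pose.
    theta follows REP103 (radians, counter-clockwise positive,
    x pointing forward); the midpoint heading reduces arc error."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta
```

Exactly because friction and wheel jamming make these integrated poses drift, the result only becomes trustworthy once robot_pose_ekf fuses it with the IMU and /vo sources.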