Hi there,
RB1 (blog post 'Robot following signs') has evolved into RB2. First of all, I switched to a 4-wheel chassis to get more control over the movements. The Dagu Magician frame is OK, but on a tiled surface the rear castor caused a lot of unexpected swerving. Adding weight solved that, but then the hard tires started to slip when torque was applied. RB2 has an aluminium frame (DFRobot), 4 motors (installed as 2x2) and softer wheels. It doesn't have the funny looks of the Dagu, but the ugly bastard runs like a clock. The rest stayed unchanged: RPi B+, Dagu Mini Driver, and the camera and websocket classes from Dawn Robotics.
The script has evolved as well, and RB2 now operates reliably at an acceptable speed.
Video:
https://youtu.be/7bVeIi_Izqg
The major differences in this script are:
* Color tracking alone is used for detection and movement; it's about 50x faster than the full detection routine. I used OpenCV's bounding box to get a more accurate centroid (contours sometimes cover only part of the sign). The bounding box also gives the width of the sign, which is used to keep focus. The difference is shown in the picture by red lines (contours) and a green rectangle (bounding box).
* A range routine was added, which derives distance from the sign's width and a constant factor. Range isn't needed anymore to adjust position and direction, but it can be used to keep distance (so, more fun than functional).
* A heads-up display of the center coordinates and range was added. Also just for fun (but who knows).
* A time-out routine was added, because Python's time.sleep() is only reliable at very small intervals and I needed an accurate time-out to enable exact turns.
* The grabbed image is stored in a global variable (saves a lot of typing and a little memory).
* Readings while moving are throttled with time-outs. Otherwise the routine produces more than a hundred readings in a couple of seconds, overloading the webserver and the Pi's memory.
* After reading a sign, the script waits for the latest image, using the max-time variable.
* Finally, the full detection routine is used to compare the sign with the reference images. This routine detects the white inner rectangle, shown in the picture as a blue rectangle.
If you're interested, more details are well commented in the script itself, which can be found at:
https://bitbucket.org/RoboBasics/raspberry-robo-cars/src/1434877c12f39efc2c9b2ff99172ad605236914f/Scripts/reading_signs.py?at=master
(I also moved from GitHub to Bitbucket, for the ease of using SourceTree.)
The script can easily be extended with all kinds of routines. I will be working on logging through a digital compass and the encoders. (I noticed that Allan has been working on some preliminary classes for an encoder PID; I'm curious about the result.)
Have Fun!