Explanation of the code

The brains of your robot

Postby dirk » Sat Sep 13, 2014 4:56 pm

Hi Alan,

I like the robot very much.
I'm doing some tests and it seems that it is working fine.
But could you maybe upload a video or a blog post explaining how the code works?
I know a bit about coding in Python, but not very much.
It would be great if you could put together a quick overview of the code (then I'd know where to start).

Regards,
Dirk
(from The Netherlands :P )
dirk
 
Posts: 3
Joined: Sat Sep 13, 2014 4:47 pm

Re: Explanation of the code

Postby Alan » Wed Sep 17, 2014 9:34 am

Hi Dirk,

Great to see you on the forums. :) Sorry for the delayed reply (I still haven't worked out how to get notifications of first-time posters...)

I'm tied up all day with other work, but I'll try to put together an outline this evening.

Regards

Alan
Alan
Site Admin
 
Posts: 311
Joined: Fri Jun 14, 2013 10:09 am

Re: Explanation of the code

Postby Alan » Wed Sep 17, 2014 8:13 pm

Hi Dirk,

So I'll give a run-through of the code, and if there's any extra info you want on any areas, please let me know.

There's a high level overview in the blog post I wrote on programming the robot using Python, which also has a diagram that might make things a bit clearer.

The software is organised in a number of levels, so working from the bottom up we have:

Mini Driver Firmware

The Mini Driver is an Arduino compatible board that drives the motors and provides a good place to attach sensors. On bootup, the Pi will try to connect to the Mini Driver to check what version of firmware it's running. If it doesn't find the correct version then it will upload the correct version using a Python library called Ino. The main Mini Driver firmware file is an Arduino sketch called mini_driver_firmware.ino. The code is rather dense because I'm trying to fit a fair bit into the Mini Driver's limited 8KB of memory. This would have been epic space for veteran coders, but for someone like me, used to GB of RAM, it feels tight. :)

The main part to look at here is the 'loop' routine. This runs continuously, doing the following:

  • Receiving and processing messages that come over the USB serial connection
  • Adjusting the frequency of the PWM signals used to drive the motors
  • Writing out servo motor commands
  • Reading from the sensors and sending the data back; this happens at a rate of 100Hz

So the firmware is fairly simple really. It just sits there, taking commands from the Pi and pumping sensor readings back up to the Pi. The communication routines are probably a bit over-complicated. Data is sent back and forth in binary to save space, and a simple checksum is used to detect whether data gets corrupted as it's sent over serial.
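
To make the checksum idea concrete, here's a minimal Python sketch of framing a binary message with a one-byte additive checksum. The message layout here is invented purely for illustration; the real format lives in the firmware and in mini_driver.py.

Code:
# Illustrative sketch only -- the real message layout differs, but this
# shows the general binary-framing-plus-checksum idea.
import struct

def build_message(command_id, payload):
    # Pack a command id and payload length into a small binary header
    body = bytearray(struct.pack("BB", command_id, len(payload)))
    body += bytearray(payload)
    # Append a simple one-byte additive checksum over the whole message
    body.append(sum(body) % 256)
    return body

def checksum_ok(message):
    # The receiver recomputes the checksum and compares it with the
    # final byte; a mismatch means the message was corrupted in transit
    return sum(message[:-1]) % 256 == message[-1]

msg = build_message(0x01, struct.pack("bb", 50, 50))  # e.g. two motor speeds
print(checksum_ok(msg))   # True, unless the bytes got mangled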

MiniDriver Python Class

One level up from the Mini Driver firmware is the MiniDriver Python class, which lives in mini_driver.py. Again, this is probably a bit more complicated than it needs to be, but the basic idea is that this is the Python class that establishes and manages the serial connection with the Mini Driver. It also takes care of checking that the firmware is correct, and of uploading a new version if it's not there. The MiniDriver class uses a Connection class to manage the serial connection, and a SerialReadProcess (which, despite its name, is actually a thread) so that we can read data coming from the USB serial port without interrupting the rest of the program. Whenever you call a routine like getUltrasonicReading on the MiniDriver class, it will ask the Connection class what data it has received. Similarly, when you call the routine setOutputs, the call gets passed to the Connection class, which then sends a message over the USB serial connection.

You can actually use the MiniDriver class in your own programs without using the rest of our software. Simply shut down the robot web server by running

Code:
sudo service robot_web_server stop


and then look at either robot_control_test.py or robot_sensor_test.py for examples of how to do this.
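
As a rough illustration, a script using the class directly might look something like the sketch below. The construction and connect/disconnect calls are my assumptions about the API (the example scripts above are the authoritative reference); getUltrasonicReading is the routine mentioned earlier.

Code:
# Rough sketch only -- see robot_control_test.py and robot_sensor_test.py
# for real usage. The constructor and connect/disconnect calls here are
# assumptions; getUltrasonicReading is the routine described above.
import time
from mini_driver import MiniDriver

miniDriver = MiniDriver()
if not miniDriver.connect():       # assumed: opens the USB serial link
    raise Exception("Couldn't connect to the Mini Driver")

for i in range(10):
    print("Ultrasonic reading:", miniDriver.getUltrasonicReading())
    time.sleep(0.1)                # sensor data arrives at 100Hz

miniDriver.disconnect()            # assumed: closes the serial link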

Robot Web Server

Now, we could have just stopped at the MiniDriver Python class. That's what a lot of other companies that produce robot kits do. However, we wanted to produce a more complete experience for people getting started with their robot. So, making use of the MiniDriver Python class, we've implemented a Python web server using the Tornado library. This is started automatically as a service when the Pi starts up (see the bottom of the installation instructions here for how we do this).

The main file of the web server is robot_web_server.py. This will serve all of the web interface files that are found in the www directory, and also exposes a websocket interface. In robot_web_server.py, websocket communication is handled by the ConnectionHandler class, which uses the sockjs.tornado library (SockJS is an implementation of websockets). Commands that come over websockets are simply text strings, things like 'GetRobotStatus' and 'SetMotorSpeeds 50.0 50.0'.

The ConnectionHandler class communicates with a RobotController class in robot_controller.py. The RobotController class is the 'robot' that the web server sees and controls. The reason for not using the MiniDriver class directly is that I wanted to make it possible for people to use their own robots with our software. So for example, if you have your own robot control board and have created a Python class that can talk to it, you could modify robot_controller.py to use your Python class rather than the MiniDriver class. You would simply modify all the parts of robot_controller.py where it uses self.miniDriver, as sketched below. In the future, I hope to be able to swap between robot hardware using something like a configuration file.
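
As a purely hypothetical sketch of that swap (MyMotorBoard and its methods are invented stand-ins for your own hardware class, and the RobotController method shown is simplified):

Code:
# Hypothetical sketch of robot_controller.py using a custom control
# board. MyMotorBoard is invented for illustration -- substitute your
# own class wherever the real file currently uses self.miniDriver.
from my_motor_board import MyMotorBoard   # your own module

class RobotController:

    def __init__(self):
        # Was: self.miniDriver = mini_driver.MiniDriver()
        self.board = MyMotorBoard()
        self.board.connect()

    def setMotorSpeeds(self, leftMotorSpeed, rightMotorSpeed):
        # Was: self.miniDriver.setOutputs( ... )
        self.board.setMotorSpeeds(leftMotorSpeed, rightMotorSpeed)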

Camera code

Camera streaming is handled by a separate C/C++ program called raspberry_pi_camera_streamer. This program is started up whenever someone sends 'StartStreaming' to the web server, and that command must be resent every 2 seconds or so, otherwise the camera streamer program will be shut down. So if you shut down the web interface, for example, you should see the red LED on the camera go off after a bit.

The camera streamer is a separate web server which sends out JPEG images from the camera (640x480 for normal viewing, 160x120 for computer vision work), and it also sends out motion vectors. The camera streaming could probably have been written in Python using the picamera library, but I wanted to learn how to use MMAL.

Web Interface

As discussed above, the web interface is served by the web server and is everything in the www directory. This is fairly basic HTML5 code and JavaScript that I've cobbled together from a number of examples on the internet. The web interface communicates with the web server using websockets in order to get a low latency interface. The reason for using an HTML5 web interface is so that the interface of the robot works on as many devices as possible. With limited time and resources, I didn't want to have to create an Android app, an iPhone app, etc. There are lots of ways I'd like to improve the web interface in the future.

The py_websockets_bot library

The py_websockets_bot library is a Python library that lets you write control scripts for the robot. I did initially think about getting people to shut down the web server before writing scripts that used the MiniDriver Python class, but thought that this would be a bit naff; it would be better without that complication. Therefore I decided to take advantage of the fact that the robot was already running a web server that spoke websockets, and made use of that.

This actually gives a number of advantages. Firstly, people can keep using the web interface to observe what the robot is doing whilst their control scripts are running. Secondly, it's really easy to write control scripts that run on another, more powerful computer and control the robot over the network, which is great if you need more power for computer vision etc. Thirdly, if you do want to run a script on the Pi, it's really easy to move it over to the Pi and it should still work; you just tell the script that the address of the web server is localhost rather than a remote IP address.
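
A minimal control script along these lines looks something like the sketch below. The constructor and motor routine names are my best guesses at the API -- check the scripts in py_websockets_bot/examples for the real names.

Code:
# Sketch of a py_websockets_bot control script. The constructor and
# motor/disconnect routine names are assumptions -- the scripts in
# py_websockets_bot/examples show the real API.
import time
import py_websockets_bot

# "localhost" if the script runs on the Pi itself, or the robot's IP
# address if it runs on another computer on the network
bot = py_websockets_bot.WebsocketsBot("localhost")

bot.set_motor_speeds(50.0, 50.0)   # assumed name: drive forwards...
time.sleep(2.0)
bot.set_motor_speeds(0.0, 0.0)     # ...then stop

bot.disconnect()                   # assumed: tidy up the connection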

Ideally what I'd like to do is let people put scripts that use py_websockets_bot on the Pi, and then activate them using the web interface. So you could write a 'dance script' for example, and then activate it by pressing a button in the web interface. This shouldn't be difficult to do, but finding the time for it is tough.

So...

That was all a bit of a brain dump, so thanks if you stuck with it to the end. Hopefully that's given you a good insight into most of the code base. Let me know if you'd like more detail on anything specific.

Cheers

Alan
Alan
Site Admin
 
Posts: 311
Joined: Fri Jun 14, 2013 10:09 am

Re: Explanation of the code

Postby dirk » Wed Sep 17, 2014 9:38 pm

Hi Alan,

Thank you very much for your great explanation!
I understand the basics of the code now and can't wait to play with it. I want to make a robot that follows a ball with the camera. Now that I understand the code, I'm a great step further along in the process. I'll let you know when I have it done!

Regards,
Dirk
dirk
 
Posts: 3
Joined: Sat Sep 13, 2014 4:47 pm

Re: Explanation of the code

Postby Alan » Fri Sep 19, 2014 11:13 am

Great stuff. For that project I'd recommend starting off with something like the get_image and motor_test example scripts in py_websockets_bot/examples, and working from there.

Always happy to offer advice, and looking forward to seeing how it turns out. :)
Alan
Site Admin
 
Posts: 311
Joined: Fri Jun 14, 2013 10:09 am

Re: Explanation of the code

Postby dirk » Tue Sep 23, 2014 2:30 pm

Hi Alan,

I've looked at get_image.py and tried to work from there, but I don't know how to do this.
On YouTube there are plenty of videos of people who have managed to track a tennis ball, but I don't know how to do it. Here is a site with a Python script that seems useful: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_video/py_meanshift/py_meanshift.html.

But you need numpy, and I don't know which values to fill in on the cv2.inRange() line.
Can you help me with this, or do you have a better idea of how I can do this?
If you search on YouTube for 'image tracking opencv' you can find a good tutorial, but that's in C/C++.
Can you run that on your Pi?

I've also looked at the vector picture, but I don't think that's going to work.

Thanks for your help! :)

Dirk
dirk
 
Posts: 3
Joined: Sat Sep 13, 2014 4:47 pm

Re: Explanation of the code

Postby Alan » Wed Sep 24, 2014 10:21 am

Hi Dirk,

I think that if you're installing py_websockets_bot on Linux or the Raspberry Pi, numpy should already be installed. If you're installing on Windows, then you should see that the instructions call for installing numpy.

In fact, I'm pretty sure that if you've got the get_image.py example to work then you have NumPy installed, as we return camera images as numpy arrays. :)

In case you're not familiar with it, numpy is a Python library for efficiently accessing arrays of data. It can take a bit of time to get your head around, but it's very powerful. The nice thing about the current OpenCV Python bindings is that all the images and matrices that you send to OpenCV, and get back from its routines, are numpy arrays.
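
You can see this for yourself with a few lines:

Code:
# OpenCV's Python bindings work directly on numpy arrays
import numpy as np
import cv2

image = np.zeros((120, 160, 3), dtype=np.uint8)  # black 160x120 BGR image
print(image.shape)                               # (120, 160, 3)

hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)     # result is another array
print(type(hsv))                                 # numpy.ndarray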

Returning to your program: the meanshift script you've found looks like a good start, and should mostly work. The main bit you need to change is to remove the references to VideoCapture (which gets images from a video file) and instead use the py_websockets_bot get_latest_camera_image routine. So instead of the line

Code:
ret,frame = cap.read()


you would have

Code:
frame, frame_time = bot.get_latest_camera_image()


The values you need to put into the inRange routine depend upon the range of colours you want to track. In this case, I think that you might be able to leave the values as they are, because what the program is doing is looking at a rectangle you define and building up a histogram that records the frequency of all the different colours in the rectangle. This means that if you set the rectangle around the region of the image occupied by your tennis ball, then the histogram will record all of the pixel colours that appear in the tennis ball.
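
If you do want to experiment with inRange directly, a sketch like the one below masks out pixels in a given HSV range. The bounds here are just a rough guess at tennis-ball yellow-green; you'd need to tune them for your own lighting and camera.

Code:
# Rough sketch of colour thresholding with cv2.inRange. The HSV bounds
# are a guess at tennis-ball yellow-green and will need tuning.
import numpy as np
import cv2

frame = cv2.imread("tennis_ball.jpg")  # or bot.get_latest_camera_image()

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower = np.array((25, 80, 80), dtype=np.uint8)   # hue, saturation, value
upper = np.array((45, 255, 255), dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)            # 255 where in range, else 0

cv2.imshow("mask", mask)
cv2.waitKey(0)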

Hope that helps.

Regards

Alan
Alan
Site Admin
 
Posts: 311
Joined: Fri Jun 14, 2013 10:09 am

Re: Explanation of the code

Postby darkerdude » Sun Mar 22, 2015 6:17 pm

I've gone through the code and forums, and found a lot of information. This is a really well done project. I just have a question about how to configure something. As I'm not very knowledgeable in Python coding, I'm having some trouble doing something that is probably very simple to do.

I would like to remove the Mini Driver from the setup and just use the Raspberry Pi's GPIO pins and an L298N to control 2 motors for driving. I'm aware that the current code allows various speeds depending on joystick position, but I would like one constant speed once the joystick is out of its deadzone. I have been circling around robot_controller.py and other files trying to insert something simple (if motor speed > 0 then power a GPIO pin, etc.) to control the motors, with no luck. I would eventually like to do the same to eliminate the pan/tilt, as I'd like to control another system on the outputs with the left joystick. Can anyone point me to where I can do this?

Any help would be appreciated
darkerdude
 
Posts: 2
Joined: Sat Mar 21, 2015 4:45 am

Re: Explanation of the code

Postby Alan » Wed Mar 25, 2015 3:11 pm

Hi there,

I've replied to your other post, but thought I'd write something here too.

It looks like you're on the right path. I would put the code to set up the GPIO pins in the __init__ routine in robot_controller.py, and then put your code to control the GPIO pins round about line 278 of robot_controller.py (where we currently call self.miniDriver.setOutputs). If you wanted to make your code a bit more modular, then you could create something like a GPIOControl class and use it in the same way that we use the MiniDriver class to control the hardware, as sketched below.
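
As a very rough sketch (the pin numbers are invented, and this only does full-speed on/off rather than PWM, which is what you wanted):

Code:
# Very rough sketch of a GPIOControl class for driving an L298N from
# the Pi's GPIO pins. The pin numbers are invented -- use whichever
# pins you've actually wired to the L298N's inputs.
import RPi.GPIO as GPIO

LEFT_FORWARD_PIN = 17      # invented pin assignments (BCM numbering)
LEFT_BACKWARD_PIN = 18
RIGHT_FORWARD_PIN = 22
RIGHT_BACKWARD_PIN = 23

class GPIOControl:

    def __init__(self):
        GPIO.setmode(GPIO.BCM)
        for pin in (LEFT_FORWARD_PIN, LEFT_BACKWARD_PIN,
                    RIGHT_FORWARD_PIN, RIGHT_BACKWARD_PIN):
            GPIO.setup(pin, GPIO.OUT)

    def setMotorSpeeds(self, leftMotorSpeed, rightMotorSpeed):
        # Constant speed: any speed past the deadzone simply switches
        # the relevant direction pin fully on
        GPIO.output(LEFT_FORWARD_PIN, leftMotorSpeed > 0)
        GPIO.output(LEFT_BACKWARD_PIN, leftMotorSpeed < 0)
        GPIO.output(RIGHT_FORWARD_PIN, rightMotorSpeed > 0)
        GPIO.output(RIGHT_BACKWARD_PIN, rightMotorSpeed < 0)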

Hope that helps.

Regards

Alan
Alan
Site Admin
 
Posts: 311
Joined: Fri Jun 14, 2013 10:09 am

Re: Explanation of the code

Postby adamrobots » Mon Nov 16, 2015 5:00 pm

Hi Alan
What files would I need to look at to add control of a further sensor via the web streamer...?
adamrobots
 
Posts: 2
Joined: Mon Nov 16, 2015 4:31 pm
