Dawn Robotics Forum Support and community forums for Dawn Robotics Ltd 2014-08-22T01:53:18+01:00 http://forum.dawnrobotics.co.uk/feed.php?f=5&t=1264 2014-08-22T01:53:18+01:00 2014-08-22T01:53:18+01:00 http://forum.dawnrobotics.co.uk/viewtopic.php?t=1264&p=1395#p1395 <![CDATA[Updated code for RPI face tracking using opencv]]> This script uses the picamera python module to implement a stream from the camera. It moves the pan/tilt to keep a detected face in the frame. If no face is found, it starts a search pattern after a specified time (currently 60 seconds).
See code for details

Code:
#!/usr/bin/env python
# opencv-picam-face.py - Opencv using picamera for face tracking using pan/tilt search and lock
# written by Claude Pageau -
# This is a little laggy but does work OK.
# Uses the pipan.py module from the openelectron.com RPI camera pan/tilt kit
# to control camera tracking. If you are not using openelectrons.com pan/tilt
# hardware, use your own pan/tilt module and modify the code accordingly.
# Also picamera python module must be installed as well as opencv
# To install opencv and python for opencv
# sudo apt-get install libopencv-dev python-opencv
# To install picamera python module
# sudo apt-get install python-picamera
# You will also need to install python picamera.array, which includes numpy
# sudo pip install "picamera[array]"
#    Note
# v4l2 driver is not used since stream is created using picamera module
# using picamera.array
# If you have any questions email pageauc@gmail.com

import io
import time
import picamera
import picamera.array
import cv2
# pipan.py is the openelectron.com python module from the OpenElectron RPI
# camera pan/tilt kit. Copy pipan.py to the same folder as this script.
import pipan

p = pipan.PiPan()

# Approx Center of Pan/Tilt motion
pan_x_c = 150
pan_y_c = 140

# bounds checking for pan/tilt search.
limit_y_bottom = 80
limit_y_top = 180
limit_y_level = 140
limit_x_left = 60
limit_x_right = 240

# To speed things up, lower the resolution of the camera
CAMERA_WIDTH = 320
CAMERA_HEIGHT = 240

# Camera center of image
cam_cx = CAMERA_WIDTH / 2
cam_cy = CAMERA_HEIGHT / 2

# Face detection opencv center of face box
face_cx = cam_cx
face_cy = cam_cy

# Pan/Tilt motion center point
pan_cx = pan_x_c
pan_cy = pan_y_c

# Amount pan/tilt moves when searching
pan_move_x = 30
pan_move_y = 20

# Timer seconds to wait before starting pan/tilt search for face.
wait_time = 60

# load a cascade file for detecting faces. This file must be in
# same folder as this script. Can usually be found as part of opencv
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Saving the picture to an in-program stream rather than a file
stream = io.BytesIO()

# Move the pan/tilt to a specific location. Has built-in limit checks.
def pan_goto(x,y):
   p.do_pan (int(x))
   p.do_tilt (int(y))

# Start Main Program
with picamera.PiCamera() as camera:
   camera.resolution = (CAMERA_WIDTH, CAMERA_HEIGHT)
   camera.vflip = True
   time.sleep(2)

   # Put camera in a known good position.
   pan_goto(pan_cx, pan_cy)   
   face_found = False
   start_time = time.time()

   while(True):
      with picamera.array.PiRGBArray(camera) as stream:
         camera.capture(stream, format='bgr')
         # At this point the image is available as stream.array
         image = stream.array

      # Convert to grayscale, which is easier
      gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
      # Look for faces over the given image using the loaded cascade file
      faces = face_cascade.detectMultiScale(gray, 1.3, 5)

      for (x,y,w,h) in faces:
          # Any detection counts as a face found, so reset the search timer
          face_found = True
          start_time = time.time()

          # Opencv has built in image manipulation functions
          cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)
          face_cx = x + w/2
          Nav_LR = cam_cx - face_cx
          pan_cx = pan_cx - Nav_LR /5
         
          face_cy = y + h/2
          Nav_UD = cam_cy - face_cy
          pan_cy = pan_cy - Nav_UD /4
          pan_goto(pan_cx, pan_cy)

          # Print Navigation required to center face in image
          print " Nav LR=%s UD=%s " % (Nav_LR, Nav_UD)

      elapsed_time = time.time() - start_time

      # start pan/tilt search for face if timer runs out
      if elapsed_time > wait_time:
          face_found = False
          print "Timer=%d  > %s seconds" % (elapsed_time, wait_time)
          pan_cx = pan_cx + pan_move_x
          if pan_cx > limit_x_right:
             pan_cx = limit_x_left         
             pan_cy = pan_cy + pan_move_y
             if pan_cy > limit_y_top:
                pan_cy = limit_y_bottom

          pan_goto (pan_cx, pan_cy)

      # Use opencv built in window to show the image
      # Leave out if your Raspberry Pi isn't set up to display windows
      cv2.imshow('Test Image',image)

      if cv2.waitKey(1) & 0xFF == ord('q'):
         # Close Window
         cv2.destroyAllWindows()
         break
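The proportional correction at the heart of the tracking loop can be pulled out as a pure function and tested off the robot. A minimal sketch (the function name is mine, but the divisors and image geometry match the script above):

```python
# Isolates the proportional pan/tilt correction from the tracking loop:
# the face centre's offset from the image centre is divided down (/5 and /4,
# as in the script, to avoid servo overshoot) and applied to the current
# pan/tilt position.

CAMERA_WIDTH = 320
CAMERA_HEIGHT = 240
CAM_CX = CAMERA_WIDTH // 2    # 160
CAM_CY = CAMERA_HEIGHT // 2   # 120

def track_update(face_box, pan_cx, pan_cy):
    """Return new (pan, tilt) servo targets for one detected face.

    face_box is an (x, y, w, h) tuple as returned by detectMultiScale.
    """
    x, y, w, h = face_box
    face_cx = x + w // 2
    face_cy = y + h // 2
    nav_lr = CAM_CX - face_cx    # positive means face is left of centre
    nav_ud = CAM_CY - face_cy    # positive means face is above centre
    return (pan_cx - nav_lr / 5.0, pan_cy - nav_ud / 4.0)
```

The main loop would then simply call pan_goto() with the returned pair for each detected face.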

Statistics: Posted by pageauc — Fri Aug 22, 2014 1:53 am


]]>
2014-08-21T13:17:22+01:00 2014-08-21T13:17:22+01:00 http://forum.dawnrobotics.co.uk/viewtopic.php?t=1264&p=1394#p1394 <![CDATA[Re: How do I change camera image eg flip etc]]>
I will have a look at the minidriver to receive navigation info from my opencv python code. Will need to integrate minidriver to work with self balancing arduino code. I have sample self-balancing code but not tested yet and most likely will need to adapt for my gyro/accelerometer chip.

What strikes me is how un-standardized personal robotics is compared to, let's say, cars/computers and other products. You would think there would at least be standard cabling and connectors for sensors/devices/motors, as well as a HAL (hardware abstraction layer) so different hardware could be swapped and software could interoperate more seamlessly. It seems like the early days of personal computers (Apple ][, TRS-80, Commodore, Acorn, etc.) when everyone had a different solution. I can remember the can of worms with printers using DOS.

Bye for now

Statistics: Posted by pageauc — Thu Aug 21, 2014 1:17 pm


]]>
2014-08-21T07:58:54+01:00 2014-08-21T07:58:54+01:00 http://forum.dawnrobotics.co.uk/viewtopic.php?t=1264&p=1393#p1393 <![CDATA[Re: How do I change camera image eg flip etc]]>
That looks like a cool project you're working on there. I agree that robots are much more interesting if they're autonomous in some way. :)

Using v4l2 or picamera would both, I think, have been valid ways of constructing the streamer. My thinking at the time was that using MMAL directly gave me maximum flexibility, and using C made it easy for me to get the streaming over HTTP working quickly, as I was essentially ripping out the HTTP streaming code from the mjpg-streamer project.

One cool feature that I think our software setup offers is that it makes it very easy to do image processing on another computer, which can be very useful if you want to do computationally intensive work that would cause the Pi to struggle. If you haven't had a chance to play with this yet, the blog post on py_websockets_bot shows how to set it up.

Also, it is possible to use your own camera code with the robot web server, as you can change the camera streaming program started in camera_streamer.py in the startStreaming routine.

Thinking about it further, it's also possible to use the robot without robot_web_server.py (just run sudo service robot_web_server stop) and instead use the MiniDriver class in mini_driver.py to control the robot hardware from your own Python program (see the script robot_control_test.py for an example of how this can be done).
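A rough sketch of what direct control might look like. The set_motor_speeds method name on the stub below is a placeholder of mine, not verified against mini_driver.py, so check robot_control_test.py for the real interface; the stub just records commands so the control logic can run anywhere:

```python
# Sketch of controlling the robot directly from Python, bypassing
# robot_web_server.py. MiniDriverStub stands in for mini_driver.MiniDriver
# (which talks to the Arduino Mini Driver over serial); its method name is
# an assumption, not the verified API.

class MiniDriverStub:
    """Records motor commands instead of talking to the hardware."""
    def __init__(self):
        self.commands = []

    def set_motor_speeds(self, left, right):
        self.commands.append((left, right))

def spin_in_place(driver, speed=50):
    """Turn on the spot by running the wheels in opposite directions."""
    driver.set_motor_speeds(-speed, speed)

driver = MiniDriverStub()
spin_in_place(driver)
driver.set_motor_speeds(0, 0)   # stop
```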

Regards

Alan


Statistics: Posted by Alan — Thu Aug 21, 2014 7:58 am


]]>
2014-08-21T06:16:19+01:00 2014-08-21T06:16:19+01:00 http://forum.dawnrobotics.co.uk/viewtopic.php?t=1264&p=1392#p1392 <![CDATA[Re: How do I change camera image eg flip etc]]> The custom github bot streamer code interfaces with the v4l2 code, and would only need to implement calls to the built-in v4l2 vflip/hflip, set up as program parameters. It is not necessary or desirable to use RPI camera code, since the streamer code can use other types of cameras if need be, and the v4l2 code is already functional.

Statistics: Posted by pageauc — Thu Aug 21, 2014 6:16 am


]]>
2014-08-21T05:06:05+01:00 2014-08-21T05:06:05+01:00 http://forum.dawnrobotics.co.uk/viewtopic.php?t=1264&p=1391#p1391 <![CDATA[Re: How do I change camera image eg flip etc]]>
Thanks for the response. I did look at the bot streamer code and help. You are right: limited capability as far as image settings go, although the v4l2 driver has a lot. I also looked at v4l2-ctl, which has vflip and hflip options and a lot more. I installed the v4l2ucp control panel, but it is session based and cannot change global settings prior to or during the bot streamer operation session. I did not look at whether the web display of the stream could be flipped.

Since I only wanted an opencv interface to control the camera pan/tilt to track a face, I wrote my own python interface using the picamera python module to set up a stream (very extensive capability) and the openelectron.com pan/tilt python module (pipan.py) downloaded from their web site.
I now have a working python script that uses the picamera module to stream images to opencv for face detection and adjusts the camera pan/tilt to keep the face in frame. It is a little laggy, but probably acceptable for my intended purposes. The code needs cleanup, and I also have to write a routine to have the camera go into a search pattern if a face is not detected within a specified time period. If no face is found, the camera will be set to detect motion using my python pimotion detect program. I may also use grive as well.
http://www.raspberrypi.org/forums/viewtopic.php?p=362504#p362504
Once motion is detected, the code will activate the opencv face detect sequence using the camera pan/tilt, and eventually the robot turning around to search for a face/object. I just used face detection since it was easy, but I hope to set up training vectors for various objects, signs, etc. I am planning on integrating the interface with the self balancing robot's drive wheels so the robot can, for example, navigate to a person, mimic their movements, or perform other actions. I will make another YouTube video on my project progress so far. Here is the first video with opencv working on a laptop: http://youtu.be/kAMaUuBVK9I This code did not come over to the RPI very cleanly, so I ended up rewriting it using the picamera module (see sample code below).
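The search routine could step the servos through a raster scan of their range. A minimal sketch (the function name and structure are mine; the limits and step sizes are taken from the pan/tilt bounds used in the code):

```python
# Sketch of a pan/tilt raster search: step the pan servo across its range,
# drop down one tilt step at the end of each row, and wrap back to the start
# when the whole range has been covered. Limits match the pipan bounds in
# the script; the function name is my own.

LIMIT_X_LEFT, LIMIT_X_RIGHT = 60, 240
LIMIT_Y_BOTTOM, LIMIT_Y_TOP = 80, 180
PAN_MOVE_X, PAN_MOVE_Y = 30, 20

def search_step(pan_cx, pan_cy):
    """Return the next (pan, tilt) position in the search pattern."""
    pan_cx += PAN_MOVE_X
    if pan_cx > LIMIT_X_RIGHT:        # end of row: back to left edge
        pan_cx = LIMIT_X_LEFT
        pan_cy += PAN_MOVE_Y
        if pan_cy > LIMIT_Y_TOP:      # end of scan: wrap to bottom
            pan_cy = LIMIT_Y_BOTTOM
    return pan_cx, pan_cy
```

The main loop would call this (followed by pan_goto) whenever no face has been seen for the timeout period.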

Although opencv is a little laggy on the RPI with a 320x240 stream image frame size, I think it will be acceptable for the basic stuff I am planning on doing. The Arduino will do the self balancing and get navigation information from the RPI, initially over serial, but I may use the I2C interface when I learn a bit more. I already have the gyro/accelerometer/compass chip (still in the box). I am currently preparing to fabricate the robot self balancing chassis (the design drawing is ready and joint testing is complete). I had to build a v-groove cutting tool for the foam board to make nice strong corners (xacto makes one, but it is expensive and has poor reviews; mine is rugged and simple and works great). The chassis will be made from Elmer's foam board and Elmer's Xtreme glue with reinforcing pins to strengthen the corners and joints, so it will be very light and quite strong. I am waiting on a solar panel usb power supply to be added as a robot back pack, and will need to test
functionality. It does not have a power management interface, but I did not want to design my own, although there are RPI GPIO boards available.
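For the serial link to the Arduino, a simple line-based record keeps the parsing trivial on both ends. The "NAV,lr,ud" framing below is entirely my own sketch, not an existing protocol; on the Pi the encoded string would be written out with pyserial, and the Arduino would read up to the newline and split on commas:

```python
# Sketch of a line-based message format for sending the navigation offsets
# (Nav_LR, Nav_UD from the tracking code) to the Arduino over serial.
# The framing is invented for illustration. On the Pi side the record would
# be sent with pyserial, e.g. serial.Serial('/dev/ttyAMA0', 9600).write(...).

def encode_nav(nav_lr, nav_ud):
    """Pack the two offsets into one newline-terminated ASCII record."""
    return "NAV,%d,%d\n" % (nav_lr, nav_ud)

def decode_nav(line):
    """Inverse of encode_nav; returns (nav_lr, nav_ud)."""
    tag, lr, ud = line.strip().split(",")
    assert tag == "NAV"
    return int(lr), int(ud)
```

Keeping the record ASCII and newline-terminated makes it easy to debug with a serial monitor before the self-balancing code consumes it.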

At any rate thanks for your feedback. I did think about adding the vflip code to the streamer but decided to do it in python instead. Can still use the bot streamer to operate robot using web interface. I think it is better and more interesting to work on a somewhat autonomous robot rather than simply a web remote controlled camera robot.

Rough code sample fyi

Code:
#!/usr/bin/env python
# opencv-test7 - Opencv face tracking with pan/tilt
# written by Claude Pageau - Still a work in progress so excuse the mess
# and unused variables.  Just trying to get something to work.
# This is a little laggy but does work OK.
# Uses the pipan.py module from the openelectron.com RPI camera pan/tilt kit
# to control camera tracking, so you will need to use your own pan/tilt
# module if you are not using openelectrons.com hardware.
# Also the picamera python module must be installed, as well as opencv
# and the v4l2 driver (execute the sudo modprobe bcm2835-v4l2 command to
# install the /dev/video0 device).
# numpy is not used directly, so the import can be removed since the code
# was changed to use picamera.array instead.
# Still need to write a pan/tilt search for face routine to try to
# find a face if none found in current frame.   

import io
import time
import picamera
import picamera.array
import cv2
import numpy as np
import pipan

p = pipan.PiPan()
sleep_time = 0.5
pan_x_c = 150
pan_y_c = 130

# these variables are not currently used.  pipan module does
# bounds checking already so pan/tilt will not exceed limits.
limit_y_bottom = 80
limit_y_top = 180
limit_y_level = 140
limit_x_left = 60
limit_x_right = 240

#load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

#saving the picture to an in-program stream rather than a file
stream = io.BytesIO()

#to speed things up, lower the resolution of the camera
CAMERA_WIDTH = 320
CAMERA_HEIGHT = 240

cam_cx = 160
cam_cy = 120

pan_cx = pan_x_c
pan_cy = pan_y_c

face_cx = cam_cx
face_cy = cam_cy

cx_ratio = limit_x_right/cam_cx
cy_ratio = limit_y_top/cam_cy

def pan_goto(x,y):
   p.do_pan (int(x))
   p.do_tilt (int(y))

with picamera.PiCamera() as camera:
   camera.resolution = (CAMERA_WIDTH, CAMERA_HEIGHT)
   camera.vflip = True
   time.sleep(2)

   # put camera in a known good position.
   pan_goto(pan_cx, pan_cy)   

   while(True):
      with picamera.array.PiRGBArray(camera) as stream:
         camera.capture(stream, format='bgr')
         # At this point the image is available as stream.array
         image = stream.array

      # convert to grayscale, which is easier
      gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
      # look for faces over the given image using the loaded cascade file
      faces = face_cascade.detectMultiScale(gray, 1.3, 5)

      for (x,y,w,h) in faces:
          #opencv has built in image manipulation functions
          cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)
          face_cx = x + w/2
          Nav_LR = cam_cx - face_cx
          pan_cx = pan_cx - Nav_LR /5
         
          face_cy = y + h/2
          Nav_UD = cam_cy - face_cy
          pan_cy = pan_cy - Nav_UD /4
          pan_goto(pan_cx, pan_cy)

          print " Nav LR=%s UD=%s " % (Nav_LR, Nav_UD)
   
      # use opencv built in window to show the image
      # leave out if your Raspberry Pi isn't set up to display windows
      cv2.imshow('Test Image',image)

      if cv2.waitKey(1) & 0xFF == ord('q'):
         # Close Window
         cv2.destroyAllWindows()
         break

Statistics: Posted by pageauc — Thu Aug 21, 2014 5:06 am


]]>
2014-08-20T18:21:30+01:00 2014-08-20T18:21:30+01:00 http://forum.dawnrobotics.co.uk/viewtopic.php?t=1264&p=1389#p1389 <![CDATA[Re: How do I change camera image eg flip etc]]>
Welcome to the forums. :)

Unfortunately, setting options on the camera is not massively straightforward at the moment (unless you know C). Basically, the camera streaming is done using a custom program called raspberry_pi_camera_streamer. This doesn't yet provide command line options for vflip and hflip, although they can be added fairly easily by copying the relevant bit of code from the raspivid program.

I've added an issue for this feature to the raspberry_pi_camera_streamer repository, but I'm unlikely to have time to look at it in the next 3-4 weeks. Your main options at the moment, therefore, are:

  • Physically turn the camera around on your robot (if you're following our instructions then the camera will be the right way up, but obviously this may not apply if you're building your own custom robot).
  • If you feel up to it then you can change the code of raspberry_pi_camera_streamer and recompile it.
  • You could flip the image at a later point in the process. It should be possible to modify the web interface to flip the image, which might be an easier task if you're more familiar with HTML.
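On the third option: a flip at a later processing stage is just a row or column reversal of the pixel grid. In an OpenCV pipeline that's cv2.flip(image, 0) for a vertical flip and cv2.flip(image, 1) for a horizontal one; the sketch below shows the same operation on a plain nested list so it runs anywhere:

```python
# Demonstrates what a vflip/hflip does to pixel data: a vertical flip
# reverses the row order, a horizontal flip reverses each row.
# cv2.flip(img, 0) and cv2.flip(img, 1) perform the same reversals on a
# numpy image array.

def vflip(img):
    return img[::-1]                   # reverse row order

def hflip(img):
    return [row[::-1] for row in img]  # reverse each row

frame = [[1, 2],
         [3, 4]]
```

Applying both reversals together gives a 180-degree rotation, which is usually what an upside-down camera needs.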

Sorry there's not a straightforward solution yet. Let me know if you need any pointers for following one of my suggestions.

Regards

Alan

Statistics: Posted by Alan — Wed Aug 20, 2014 6:21 pm


]]>
2014-08-19T14:18:24+01:00 2014-08-19T14:18:24+01:00 http://forum.dawnrobotics.co.uk/viewtopic.php?t=1264&p=1385#p1385 <![CDATA[How do I change camera image eg flip etc]]>
I want to change camera parameters for the stream, e.g. hflip, vflip, etc., as well as other camera parameter settings. Can I change these, and if so, where and how? I tried to find a config file, but no joy.

Can I control the camera via the python camera module? I'm looking to process the stream via opencv.

Thanks

Statistics: Posted by pageauc — Tue Aug 19, 2014 2:18 pm


]]>