
Self Balancing Autonomous Robot Project

PostPosted: Fri Sep 12, 2014 12:51 am
by pageauc
Here is my latest project update for the Self-Balancing Autonomous Robot project
http://youtu.be/Lc0isSkMcZA

Here is a playlist for the previous updates and the video that inspired me
https://www.youtube.com/playlist?list=PLLXJw_uJtQLYspXORG_To3sDPQ3N1BIFB

Re: Self Balancing Autonomous Robot Project

PostPosted: Fri Sep 12, 2014 1:57 am
by pageauc
Here is the Python code if you are interested. It is still very much a work in progress, so don't expect too much at this point. See the code comments for install details.

Code:
#!/usr/bin/env python
# opencv-find-face : Opencv face tracking with pan/tilt search and lock
# written by Claude Pageau -
# This is a little laggy but does work OK.
# Uses the Dawn Robotics mini_driver module for camera pan/tilt and wheel
# control. If you are not using the Dawn Robotics pan/tilt hardware,
# use your own pan/tilt module and modify the code accordingly.
# Also picamera python module must be installed as well as opencv
# To install opencv and python for opencv
# sudo apt-get install libopencv-dev python-opencv
# To install picamera python module
# sudo apt-get install python-picamera
# You will also need to install the python picamera.array module that includes numpy
# sudo pip install "picamera[array]"
# copy /usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml
# to same folder that this python script is in.
#    Note
# v4l2 driver is not used since stream is created using picamera module
# using picamera.array
# If you have any questions email pageauc@gmail.com
 
print "Initializing ...."
import io
import time
import picamera
import picamera.array
import cv2
import numpy as np
import mini_driver

miniDriver = mini_driver.MiniDriver()
connected = miniDriver.connect()

left_wheel_stop = 0
left_wheel_min = 10
left_wheel_max = 200
right_wheel_stop = 0
right_wheel_min = 10 
right_wheel_max = 200


# To speed things up, lower the resolution of the camera
CAMERA_WIDTH = 320
CAMERA_HEIGHT = 240
# Show opencv window
show_window = False

# Camera center of image
cam_cx = CAMERA_WIDTH / 2
cam_cy = CAMERA_HEIGHT / 2
inch = 9.0   # approx real-world face width in inches, used for the rough range estimate

max_pan_left = 10
max_pan_right = 160
max_tilt_top = 50
max_tilt_bottom = 100

# Approx Center of Pan/Tilt motion
pan_center = 90
tilt_center = 90

# bounds checking for local pan/tilt search.
local_pan_left = int(pan_center - 20)
local_pan_right = int(pan_center + 25)
local_tilt_top = int(tilt_center - 40)
local_tilt_bottom = int(tilt_center + 20)

print "PL=%s PR=%s TB=%s TT=%s"  % (local_pan_left, local_pan_right, local_tilt_bottom, local_tilt_top)

# Set Initial starting position of pan/tilt
current_pan_pos = pan_center
current_tilt_pos = tilt_center

# Amount pan/tilt moves when searching
pan_search_unit = 25
tilt_search_unit = 20

# Timer seconds to wait before starting pan/tilt search for face.
# local face search
face_timer1 = 15
# Wide face search
face_timer2 = 60
# Stop pan/tilt and start Motion Detect.
face_timer3 = 120

# Motion scan settings
motion_detected = False
# sensitivity - How much the color value (0-255) needs to change to be considered a change
sensitivity = 25
# threshold - How many pixels must change to be considered motion
threshold = CAMERA_WIDTH * CAMERA_HEIGHT * 2 / 110

# Move the pan/tilt to a specific location. has built in limit checks.
#def pan_goto(x,y):
#   p.do_pan (int(x))
#   p.do_tilt (int(y))

def motion_scan(x,y):
   print "motion_scan - Scan for Motion at cx=%d cy=%d" % (x, y)
   #pan_goto(x,y)
   miniDriver.setOutputs( left_wheel_stop, right_wheel_stop, x, y )

   numImages = 0
   step = 1  # use this to toggle where the image gets saved
   captureCount = 0 # flag used to begin a sequence capture
   stream = io.BytesIO()
   with picamera.PiCamera() as camera:
      camera.resolution = (CAMERA_WIDTH, CAMERA_HEIGHT)
      time.sleep(2)

      # begin motion scan
      while sensitivity > 0:
         camera.resolution = (CAMERA_WIDTH, CAMERA_HEIGHT)
 
         if step == 1:
             stream.seek(0)
             camera.capture(stream, 'bgra', True)
             data1 = np.fromstring(stream.getvalue(), dtype=np.uint8)
             step = 2
         else:
             stream.seek(0)
             camera.capture(stream, 'bgra', True)
             data2 = np.fromstring(stream.getvalue(), dtype=np.uint8)
             step = 1
         numImages = numImages + 1
         if numImages > 4:  # ignore first few images because if the camera is not quite ready it will register as motion right away
             if captureCount <= 0:
                 # not capturing, test for motion (very simplistic, but works good enough for my purposes)
                 # cast to int first so uint8 subtraction does not wrap around
                 data3 = np.abs(data1.astype(int) - data2.astype(int))  # get difference between 2 successive images

                 # There are 4 times the number of values per pixel due to the bgra format
                 numTriggers = np.count_nonzero(data3 > sensitivity) / 4 / sensitivity

                 if numTriggers > threshold:
                    print "motion_scan - Motion Detected. Threshold=%d Triggers=%d"  % (threshold, numTriggers)
                    stream.close()
                    return True
   
def face_scan(x,y):
  print "connected =", connected
  if connected:
    print "face_scan - Start Scan at cx=%d cy=%d" % (x, y)
    pan_parked = False
    # Face detection opencv center of face box
    face_cx = x
    face_cy = y
    # Pan/Tilt motion center point
    current_pan_pos = x
    current_tilt_pos = y
   
    # Put pan/tilt in a known good position.
    #pan_goto(pan_center, tilt_center)   
    miniDriver.setOutputs( left_wheel_stop, right_wheel_stop, pan_center, tilt_center )
    time.sleep( 1.0 )
    face_found = False
    start_time = time.time()
 
    # load a cascade file for detecting faces. This file must be in
    # same folder as this script. or loaded as part of opencv at /usr/share/opencv/haarcascades/
    face_cascade = cv2.CascadeClassifier('/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml')
    # face_cascade = cv2.CascadeClassifier('/usr/share/opencv/haarcascades/haarcascade_profileface.xml')
    with picamera.PiCamera() as camera:
      camera.resolution = (CAMERA_WIDTH, CAMERA_HEIGHT)
      camera.vflip = False
   
      while(True):
        with picamera.array.PiRGBArray(camera) as stream:
          camera.capture(stream, format='bgr')
          # At this point the image is available as stream.array
          image = stream.array
       
        # Convert to grayscale, which is easier
        gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
        # Look for faces over the given image using the loaded cascade file
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
   
        for (x,y,w,h) in faces:
          face_found = True
          #print "face_scan - Face Found at x=%d y=%d w=%d h=%d" % (x, y, w, h)
          pan_parked = False
          start_time = time.time()
          distance = ((CAMERA_WIDTH/w) * inch)/12
          face_cx = x + w/2
          Nav_LR = (cam_cx - face_cx) / 6
          current_pan_pos = current_pan_pos + Nav_LR
          face_cy = y + h/2
          Nav_UD = (cam_cy - face_cy) / 5
          current_tilt_pos = current_tilt_pos - Nav_UD
          # Print Navigation required to center face in image
          if ( abs(Nav_LR)>3 or abs(Nav_UD)>3 ):
            if Nav_LR < 0:
               direction = 1
            elif Nav_LR > 0:
               direction = -1
            else:
               direction = 0  # no left/right correction needed when Nav_LR is 0

            miniDriver.setOutputs( left_wheel_stop, (abs(Nav_LR)) * direction, current_pan_pos, current_tilt_pos )

            #pan_goto(current_pan_pos, current_tilt_pos)
            #print "face_scan - Nav LR=%s UD=%s  Range=%.1f ft" % (Nav_LR, Nav_UD, distance)   
       
          if show_window:
            # Opencv has built in image manipulation functions
            cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)

        # Use opencv built in window to show the image
        # Leave out if your Raspberry Pi isn't set up to display windows
        if show_window:
          cv2.imshow('Face Image', image)
          cv2.waitKey(1)  # needed so the opencv window actually refreshes

        elapsed_time = time.time() - start_time         
        # start pan/tilt search for face if timer runs out
        if elapsed_time > face_timer3:
          if not pan_parked:
            pan_parked = True
            stream.close()
            # Return and start motion capture
            return (current_pan_pos, current_tilt_pos, False)
         
        elif elapsed_time > face_timer2:
          face_found = False
          #print "face_scan - Wide Search Timer2=%d  > %s seconds" % (elapsed_time, face_timer2)
          current_pan_pos = current_pan_pos + pan_search_unit
          if (current_pan_pos > max_pan_right):
            current_pan_pos = max_pan_left         
            current_tilt_pos = current_tilt_pos - tilt_search_unit
            if (current_tilt_pos < max_tilt_top):
              current_tilt_pos = max_tilt_bottom
          miniDriver.setOutputs( left_wheel_stop, right_wheel_stop, current_pan_pos, current_tilt_pos )
          time.sleep( 1.0 )
          #pan_goto (current_pan_pos, current_tilt_pos)
   
        elif elapsed_time > face_timer1:
          face_found = False
          print "face_scan - Local Search  Timer1=%d  > %s seconds" % (elapsed_time, face_timer1)
          current_tilt_pos = current_tilt_pos - tilt_search_unit
          if (current_tilt_pos < local_tilt_top):
             current_tilt_pos = local_tilt_bottom       
             current_pan_pos = current_pan_pos + pan_search_unit
             if (current_pan_pos > local_pan_right):
                current_pan_pos = local_pan_left
          #print "C_Pan_Pos=%s  C_Tilt_Pos=%s" % (current_pan_pos, current_tilt_pos)
          miniDriver.setOutputs( left_wheel_stop, right_wheel_stop, current_pan_pos, current_tilt_pos )
          time.sleep( 1.0 )
          # pan_goto (current_pan_pos, current_tilt_pos)
         
       
# ---------------- Start Main Program

try:
  motion_detected = False
  while (True):
    if motion_detected:
      (current_pan_pos, current_tilt_pos, motion_detected) = face_scan(pan_center, tilt_center)
    else:
      motion_detected = motion_scan(pan_center, tilt_center)
finally:
  print "Exiting Program ...."
  if show_window:
    cv2.destroyAllWindows()
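As a side note, the frame-differencing test inside motion_scan can be tried on its own, without the camera or robot hardware, using a pair of synthetic frames. A minimal sketch (the frame contents and the simulated scene change are just illustrative; the frames are cast to int before subtracting because raw uint8 arithmetic wraps around):

```python
import numpy as np

sensitivity = 25
threshold = 320 * 240 * 2 / 110   # same formula as in the script

# Two fake 240x320 bgra frames standing in for successive captures
frame1 = np.zeros((240, 320, 4), dtype=np.uint8)
frame2 = frame1.copy()
frame2[:, :, :3] = 200            # simulate a big scene change

# Cast to int first: subtracting raw uint8 arrays wraps around and
# can report tiny differences as huge ones
diff = np.abs(frame1.astype(int) - frame2.astype(int))
triggers = np.count_nonzero(diff > sensitivity) / 4 / sensitivity
print(triggers > threshold)       # prints True for this synthetic change
```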


 



Re: Self Balancing Autonomous Robot Project

PostPosted: Fri Sep 12, 2014 1:44 pm
by Alan
Hi Cedric,

That looks really good, thank you for sharing.

Looking forward to future updates as your robot develops, and I especially like the 'bling' foamcore. :)

Regards

Alan

Re: Self Balancing Autonomous Robot Project

PostPosted: Fri Sep 12, 2014 1:47 pm
by Alan
Beg pardon, that should have been hi Claude... :oops:

Not enough coffee today... ;)

Re: Self Balancing Autonomous Robot Project

PostPosted: Sun Dec 14, 2014 2:25 pm
by DannyM
Hi Claude,

In the code you mention an xml file.

'/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml'

In my Windows install I cannot find this file, and I can't find it on the Pi either. Is it possible for you to post the contents of the file in a reply?

The project is a goal for me, just not with the self-balancing robot; maybe later.

Thanks, Danny
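For reference, the cascade files ship with the OpenCV packages rather than with picamera; on a Raspbian install with libopencv-dev / python-opencv they normally land under /usr/share/opencv/haarcascades/. A quick sketch to check for the file from Python (the path is the one hard-coded in the script above):

```python
import os.path

# Path used by the script above; on Raspbian the opencv packages
# install the Haar cascade XML files in this directory.
CASCADE_DIR = '/usr/share/opencv/haarcascades'
CASCADE_FILE = 'haarcascade_frontalface_default.xml'

cascade_path = os.path.join(CASCADE_DIR, CASCADE_FILE)
if os.path.isfile(cascade_path):
    print('Found cascade: %s' % cascade_path)
else:
    print('Not found - install the opencv packages, or copy the XML '
          'from an OpenCV source checkout')
```

The file itself is plain XML from the OpenCV source tree (data/haarcascades/), so copying it from any OpenCV checkout works just as well as pasting it into a reply.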