I am still waiting on delivery of my Dawn Robot but have a STEM PI-BOT to learn and experiment with.
Thanks for the response. I did look at the bot streamer code and help, and you are right: it has limited capability as far as image settings go, although the v4l2 driver has a lot. I also looked at v4l2-ctl, which has vflip and hflip options and much more. I installed the v4l2ucp control panel, but it is session based, so it cannot change global settings before or during a bot streamer session. I did not look at whether the web display of the stream could be flipped.
Since I only wanted an opencv interface to control the camera pan/tilt to track a face, I wrote my own python interface using the picamera python module to set up a stream (very extensive capability) and the openelectron.com pan/tilt python module (pipan.py) downloaded from their web site.
I now have a working python script that uses the picamera module to stream images to opencv for face detection and adjusts the camera pan/tilt to keep the face in frame. It is a little laggy but probably acceptable for my intended purposes. The code needs cleanup, and I still have to write a routine to put the camera into a search pattern if a face is not detected within a specified time period. If no face is found, the camera will be set to detect motion using my python pimotion detect program. I may also use grive.
http://www.raspberrypi.org/forums/viewtopic.php?p=362504#p362504

Once motion is detected, the code will activate the opencv face detect sequence using the camera pan/tilt, and eventually the robot turning around, to search for a face/object. I just used face detection since it was easy, but I hope to set up training vectors for various objects, signs, etc. I am planning on integrating the interface with the self balancing robot's drive wheels so the robot can, for example, navigate to a person or mimic their movements. I will make another YouTube video on my project progress so far. Here is the first video with opencv working on a laptop:
http://youtu.be/kAMaUuBVK9I

That code did not come over to the RPI very cleanly, so I ended up rewriting it using the picamera module (see sample code below).
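The motion trigger mentioned above can be done with simple frame differencing on grayscale frames. This is just a sketch of the idea, not my actual pimotion code; the threshold numbers are placeholders you would tune for your camera:

```python
import numpy as np

def motion_detected(prev_gray, cur_gray, pixel_threshold=25, count_threshold=50):
    # difference the two grayscale frames and count how many pixels
    # changed by more than pixel_threshold grey levels
    diff = np.abs(cur_gray.astype(int) - prev_gray.astype(int))
    changed = np.count_nonzero(diff > pixel_threshold)
    return changed > count_threshold

# two synthetic 240x320 frames: the second has a bright "moving" blob
prev_frame = np.zeros((240, 320), dtype=np.uint8)
cur_frame = prev_frame.copy()
cur_frame[100:120, 150:170] = 200   # 400 changed pixels
print(motion_detected(prev_frame, cur_frame))  # True
```

On the robot, prev_frame and cur_frame would be successive grayscale captures from the picamera stream, and a True result would kick off the face detect sequence.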
Although opencv is a little laggy on the RPI with a 320x240 stream image frame size, I think it will be acceptable for the basic stuff I am planning on doing. The Arduino will do the self balancing and get navigation information from the RPI, initially over serial, but I may use the I2C interface when I learn a bit more. I already have the gyro/accelerometer/compass chip (still in the box). I am currently preparing to fabricate the robot self balancing chassis (the design drawing is ready and joint testing is complete). I had to build a v-groove cutting tool for the foam board to make nice strong corners (X-Acto makes one, but it is expensive and does not get good reviews; mine is rugged and simple and works great). The chassis will be made from Elmer's foam board and Elmer's Xtreme glue with reinforcing pins to strengthen the corners and joints. It will be very light and quite strong. I am waiting on a solar panel usb power supply to be added as a robot back pack and will need to test functionality. It does not have a power management interface, but I did not want to design my own, although there are RPI GPIO boards available.
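Since the serial link to the Arduino is still to come, here is a rough sketch of how the RPI side might package the navigation numbers from the face tracker. The one-letter command format and the /dev/ttyACM0 port are my assumptions, not a finished protocol, and the pyserial part is commented out so the formatting can be tried without hardware:

```python
def nav_command(nav_lr, nav_ud):
    # pack the left/right and up/down offsets from the face tracker
    # into a simple text line an Arduino sketch could parse.
    # "N" for navigate is a made-up command letter.
    return "N,%d,%d\n" % (nav_lr, nav_ud)

# example: face is 25 px left of centre and 10 px above it
cmd = nav_command(25, 10)
print(cmd.strip())  # N,25,10

# with pyserial installed and the Arduino attached it would be sent like:
# import serial
# ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
# ser.write(cmd)
```

The Arduino side would just read lines, split on commas, and feed the numbers into its steering logic alongside the balancing loop.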
At any rate, thanks for your feedback. I did think about adding the vflip code to the streamer but decided to do it in python instead. I can still use the bot streamer to operate the robot through the web interface. I think it is better and more interesting to work on a somewhat autonomous robot rather than simply a web remote controlled camera robot.
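For reference, doing the flip in python instead of in the driver is a one-liner on the numpy array that picamera.array hands you. This is just a sketch with a tiny stand-in array; `frame` would be the captured `stream.array` in the real script:

```python
import numpy as np

def vflip(frame):
    # flip the image top-to-bottom in software, same effect as
    # camera.vflip = True or v4l2-ctl's vertical flip control
    return frame[::-1]

def hflip(frame):
    # flip the image left-to-right (mirror)
    return frame[:, ::-1]

# tiny 2x2 "image" to show the effect
frame = np.array([[1, 2],
                  [3, 4]])
print(vflip(frame))  # rows reversed: [[3, 4], [1, 2]]
```

Doing it on the array costs a little CPU per frame, which is why I set camera.vflip on the camera itself in the script below and keep this only as a fallback.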
Rough code sample, FYI:
#!/usr/bin/env python
# opencv-test7 - Opencv face tracking with pan/tilt
# written by Claude Pageau - Still a work in progress so excuse the mess
# and unused variables. Just trying to get something to work.
# This is a little laggy but does work OK.
# Uses pipan.py module for openelectron.com RPI camera pan/tilt to control
# camera tracking so you will need to use your own pan/tilt module if
# you are not using openelectrons.com hardware.
# Also the picamera python module must be installed as well as opencv
# and the v4l2 driver (execute sudo modprobe bcm2835-v4l2 command to
# install the /dev/video0 device).
# numpy and io imports were removed since the code was changed to
# use picamera.array instead of an io.BytesIO stream.
# Still need to write a pan/tilt search for face routine to try to
# find a face if none is found in the current frame.
import time
import picamera
import picamera.array
import cv2
import pipan

p = pipan.PiPan()
sleep_time = 0.5
pan_x_c = 150
pan_y_c = 130

# these variables are not currently used. pipan module does
# bounds checking already so pan/tilt will not exceed limits.
limit_y_bottom = 80
limit_y_top = 180
limit_y_level = 140
limit_x_left = 60
limit_x_right = 240

# load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# to speed things up, lower the resolution of the camera
CAMERA_WIDTH = 320
CAMERA_HEIGHT = 240
cam_cx = 160
cam_cy = 120
pan_cx = pan_x_c
pan_cy = pan_y_c
face_cx = cam_cx
face_cy = cam_cy
cx_ratio = limit_x_right / cam_cx
cy_ratio = limit_y_top / cam_cy

def pan_goto(x, y):
    p.do_pan(int(x))
    p.do_tilt(int(y))

with picamera.PiCamera() as camera:
    camera.resolution = (CAMERA_WIDTH, CAMERA_HEIGHT)
    camera.vflip = True
    time.sleep(2)
    # put camera in a known good position.
    pan_goto(pan_cx, pan_cy)
    while True:
        with picamera.array.PiRGBArray(camera) as stream:
            camera.capture(stream, format='bgr')
            # At this point the image is available as stream.array
            image = stream.array
            # convert to grayscale, which is easier
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            # look for faces over the given image using the loaded cascade file
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                # opencv has built in image manipulation functions
                cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 0), 2)
                face_cx = x + w/2
                Nav_LR = cam_cx - face_cx
                pan_cx = pan_cx - Nav_LR / 5
                face_cy = y + h/2
                Nav_UD = cam_cy - face_cy
                pan_cy = pan_cy - Nav_UD / 4
                pan_goto(pan_cx, pan_cy)
                print " Nav LR=%s UD=%s " % (Nav_LR, Nav_UD)
            # use opencv built in window to show the image
            # leave out if your Raspberry Pi isn't set up to display windows
            cv2.imshow('Test Image', image)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                # Close Window
                cv2.destroyAllWindows()
                break
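The pan/tilt search routine I still need to write could step through a zigzag sweep of the pan/tilt range. Here is a sketch of the idea using the same bounds as the variables above; the generator is pure so it can be tried without hardware, and the pan_goto calls are shown separately as comments:

```python
def search_positions(x_left=60, x_right=240, y_bottom=80, y_top=180, step=30):
    # yield a zigzag sweep of (pan, tilt) positions covering the
    # pan/tilt limits, alternating sweep direction on each row
    xs = list(range(x_left, x_right + 1, step))
    direction = 1
    for tilt in range(y_bottom, y_top + 1, step):
        row = xs if direction > 0 else xs[::-1]
        for pan in row:
            yield (pan, tilt)
        direction = -direction

positions = list(search_positions())
print(len(positions))   # 28 stops in the sweep
print(positions[0])     # (60, 80)

# on the robot each position would be visited with a pause for a
# face check, roughly:
# for (px, py) in search_positions():
#     pan_goto(px, py)
#     time.sleep(sleep_time)
#     # grab a frame here and run face_cascade.detectMultiScale(...)
#     # break out of the sweep as soon as a face is found
```

The alternating row direction keeps the servo from slewing all the way back to the left edge between rows, which should make the sweep a bit quicker and smoother.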