The rest of the story.....

Problems with ximprov:
   Ok, the people who wrote ximprov didn't implement their graphics routines properly in version 4.1.   The program would give black screens due to improper color-depth handling on various systems.   I started looking at the graphics calls they were using, and it wasn't readily apparent how to fix them. Then again, I didn't want to learn how to program X, I wanted to find a beach ball.
   What ximprov does have going for it is that some of its graphics processing routines are self-contained and could be used for other things.   This might be worth exploring, but I haven't yet.   Instead I abandoned the approach and started looking for other software approaches.

Problems with hardware:
   Ok, the ThinkPad kind of died.   Maybe trying to fit everything on a 486 with a 200 megabyte drive isn't as easy as it should be.   Red Hat 8.0 is over a gigabyte installed.   Yeah, it might be a nice system for the casual desktop user, but adapting it to a small, low-resource machine is a task... a bigger task than I had thought.

   More recently I have been looking at smaller distributions, and at building for small targets.   Some candidates were single-floppy distributions (such as Tom's Root Boot, tomsrtbt) and some embedded packages.   Midori Linux from Transmeta is tempting, but getting it to produce a working build is a task in itself.   The hardware problems I will foist off to another task/document, since we are supposed to be talking pattern recognition, not embedded Linux.   Right?
Block Diagram of New System
   The microcontroller subsystem for controlling the motors kind of happened, but differently.   I became fascinated by an embedded Ethernet controller and some serial buses for stringing chips together.   I ended up stripping the stepper-control bridge circuit from an old Miniscribe 20 MB hard disk and adding a few control chips, and I had the Ethernet controller running a little Java program that made the motor bridge controller operable over an Ethernet link.   This distributed networked control system is a candidate for a long past due thesis.

Below is a picture of the robot base with the laptop platform and the quickcam removed:
Beach Ball Robot Base

The new approach:

Since hacking the Ximprov code didn't work, I needed another approach to pull data from the QuickCam.   I pulled the source for cqcam, a GPL'd Linux QuickCam driver, and started hacking on the command line utility.   I wrote a separate source module with my image processing routines.

The code looks for the largest mostly red object it can find in its field of view.   The object of the robot's passion is a red toy playground ball.   I bought it thinking it was very red, but luckily, as it turns out, it seems to be an almost perfect red.

I started by using the cqcam code to take a picture, importing it into the GIMP, and running a histogram on the color spectrum.   The red components in the area of the ball appear to be about twice as large as the green or blue components, so I start by searching the image for pixels that are twice as red as they are blue or green.

Incandescent-lit room. A typical view can be seen here, in a room with a wood floor and incandescent light.   One interesting thing to note is that the picture isn't in focus.   Since we are looking for large areas of red, the focus is not as important.

Incandescent picture with the red pixels turned white. Next the color filtering is applied.   All channel values are set to 0xff if red is dominant in the pixel; otherwise the red value is left alone and the green and blue values are set to 0x00.
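The filtering step above can be sketched as follows.   This is my own illustration of the described logic, not the actual C code from the cqcam hack; the pixel representation and function names are assumptions.

```python
def is_red_dominant(r, g, b):
    """A pixel counts as red when red is at least twice green and twice blue."""
    return r >= 2 * g and r >= 2 * b

def color_filter(pixels):
    """pixels: list of rows of (r, g, b) tuples.
    Red-dominant pixels become white (0xff, 0xff, 0xff); all other
    pixels keep their red value but have green and blue zeroed."""
    return [[(0xFF, 0xFF, 0xFF) if is_red_dominant(r, g, b)
             else (r, 0x00, 0x00)
             for (r, g, b) in row]
            for row in pixels]
```

For example, a strongly red pixel like (200, 90, 80) turns white, while a gray pixel like (100, 100, 100) becomes (100, 0, 0).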
Here is the image with the detected red pixels turned white.

Fluorescent-lit picture with red pixels turned white. Interestingly enough, the quality and type of light has a huge effect on the colors in the image.   Notice that the wood floor has a brown color, which has a red component to it.   So glare is detected as red, and there is some splotching from the floor color.   Now, if I turn on a fluorescent desk lamp (with a GE F15T8SW soft white bulb and a Sylvania F15T8-D Daylight bulb), the color map shifts enough to take the detected red out of the wooden floor, and the image cleans up significantly.   With sunlight, in a slightly different room, there was good color separation from the floor colors, but the light levels and the floor finish were such that a reflection of the ball was picked up by the color recognition routine.

The next task is to clean up the image.   First we apply the despeckle routine from the cqcam package.

Next we crop the image edges off (since we were getting noise in those areas anyway), and start looking for regions that are solid red.   To decide whether a red pixel belongs to a solid region, the code counts how many of its eight adjacent pixels are also red.   If a pixel has more than seven red neighbors (that is, it is fully surrounded), it is kept; otherwise the pixel is turned black.   (The threshold can be lowered to allow fuzzier areas to be kept, but keeping only fully surrounded points seems to work well enough.)   This tends to filter noisy regions and single specks here and there.
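The neighbor-count cleanup can be sketched like this, again as my own illustration rather than the original C.   The 0/1 mask representation and the adjustable threshold are assumptions.

```python
def neighbor_filter(mask, min_neighbors=8):
    """mask: list of rows of 0/1 (1 = red pixel).
    A red pixel survives only if at least min_neighbors of its eight
    adjacent pixels are also red; everything else is turned black (0).
    Lowering min_neighbors keeps fuzzier regions."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            count = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        count += 1
            if count >= min_neighbors:
                out[y][x] = 1
    return out
```

On a solid 3x3 block of red, only the fully surrounded center pixel survives; isolated specks vanish entirely, which is the point of the filter.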

Ball found with center marked.
Next, we search for red areas and measure their size.   This is done by first copying the image and then scanning it for red areas.   When it finds a red area, it does a flood fill to black, keeping track of the x and y coordinates and the number of pixels in the object.   The "center" of the object is calculated by taking the average of the coordinates filled.   The object with the largest area is kept, along with the coordinates of its center.   Here the calculated average center is marked with a green blotch.
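The region search can be sketched as below.   This is a hedged reconstruction of the described scan-and-flood-fill logic; the explicit stack, 4-way connectivity, and return convention are my choices, not details from the actual code.

```python
def largest_red_region(mask):
    """mask: mutable list of rows of 0/1 (1 = red).  Scans for red
    pixels, flood-fills each region to black while accumulating its
    coordinates, and returns (area, (cx, cy)) for the biggest region,
    where the center is the average of the filled coordinates.
    Returns (0, None) if there are no red pixels."""
    h, w = len(mask), len(mask[0])
    best_area, best_center = 0, None
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            # Flood fill this region to black, tracking coordinate sums.
            stack, area, sum_x, sum_y = [(x, y)], 0, 0, 0
            mask[y][x] = 0
            while stack:
                px, py = stack.pop()
                area += 1
                sum_x += px
                sum_y += py
                for nx, ny in ((px + 1, py), (px - 1, py),
                               (px, py + 1), (px, py - 1)):
                    if 0 <= nx < w and 0 <= ny < h and mask[ny][nx]:
                        mask[ny][nx] = 0
                        stack.append((nx, ny))
            if area > best_area:
                best_area = area
                best_center = (sum_x // area, sum_y // area)
    return best_area, best_center
```

Flood-filling to black as we go doubles as the "visited" bookkeeping, which is presumably why the original copies the image first: the copy gets destroyed by the scan.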

This location can then be used to navigate the base.   The system can adjust the motors to try to keep the largest red object in the center of the field of view.
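A minimal steering rule along these lines might look like the following.   The deadband value and the command names are purely illustrative assumptions; the document doesn't describe how the motor commands are actually chosen.

```python
def steer(center_x, image_width, deadband=10):
    """Pick a turn direction from the detected center's horizontal
    offset relative to the image midline.  A small deadband keeps the
    base from twitching when the ball is roughly centered."""
    error = center_x - image_width // 2
    if error < -deadband:
        return "left"
    if error > deadband:
        return "right"
    return "forward"
```

With a 320-pixel-wide image, a ball centered at x = 20 would steer left, x = 300 right, and x = 160 straight ahead.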