Visually Guided Robots


Vision and locomotion are coupled: a walking system must see in order to move. Vision is the most important exteroceptive sense in humans. While many traditional robots rely on laser rangefinders, rangefinders reveal only geometric information and ignore non-geometric cues. We seek to create robots that:

    • Step over obstacles
    • Navigate among large moving objects
    • Stop and turn
    • Change elevation
    • Walk using sparse footholds
    • Resist perturbations


We seek a solution that will serve as a scaffold for future work on intelligent, biologically based robots, with particular emphasis on the role of cortical processing and its interplay with the cerebellum and spinal cord. The model will be sufficiently general that elements of it apply to four-, six-, and eight-legged robots as well as tracked and wheeled platforms. Our framework distinguishes two basic classes of visual cues: geometric cues, derived from stereopsis and motion flow fields, and non-geometric cues, which indicate surface stability.
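The two cue classes above can be combined into a single traversability estimate. The following is a minimal illustrative sketch, not the framework's actual fusion rule; the scoring formula and the equal weighting of the two cue classes are assumptions.

```python
import math

def traversability(depth_m, flow_divergence, surface_stability):
    """Score a surface patch in [0, 1] from both cue classes.

    depth_m           -- distance from stereopsis (geometric cue)
    flow_divergence   -- expansion rate of the motion flow field (geometric cue)
    surface_stability -- estimated stability in [0, 1] (non-geometric cue)
    """
    # Geometric term: nearby, rapidly expanding patches are imminent
    # obstacles, so they score low (formula is an illustrative assumption).
    geometric = math.exp(-max(flow_divergence, 0.0)) * min(depth_m / 2.0, 1.0)
    # Weight the geometric and non-geometric classes equally (an assumption).
    return 0.5 * geometric + 0.5 * surface_stability
```

A distant, stable patch with little flow expansion scores near 1, while a close, looming, unstable patch scores near 0.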



Vision and Locomotion are conjoined problems

Traditionally, vision and locomotion have been treated as two separate problems. In fact, gaze direction, image stabilization, and the use of optic flow are all tightly coupled to the gait cycle.

We have shown that by incorporating knowledge from the Central Pattern Generator (CPG) into visual processing, we can use optic flow with far greater sensitivity for the control of locomotion.
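One way this coupling can work is sketched below, under stated assumptions: a sinusoidal CPG model predicts the flow the robot's own gait will induce, and subtracting that prediction from the measured flow leaves a residual that is sensitive to externally caused motion. The model and gain are illustrative, not the implementation described here.

```python
import math

def cpg_predicted_flow(phase, stride_gain=1.0):
    """Forward optic flow expected from the robot's own gait.

    Assumes a simple sinusoidal CPG whose swing half-cycle (sin > 0)
    produces forward flow proportional to stride_gain.
    """
    return stride_gain * max(math.sin(phase), 0.0)

def flow_residual(measured_flow, phase, stride_gain=1.0):
    """Flow not explained by locomotion; signals obstacles or perturbations."""
    return measured_flow - cpg_predicted_flow(phase, stride_gain)
```

Because self-generated flow is discounted by the CPG prediction, even small residuals can be attributed to the environment rather than to the gait itself.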


Stepping Over Obstacles using a Neural Network

Finding and using sparse footholds.

In this experiment, we use vision to detect candidate footholds for a robot. The robot then selects a proper foothold based on rules encoded in a neural network. These rules reflect how humans choose footholds, as derived from experiments with human subjects.
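The selection step can be sketched as a tiny feed-forward network that scores each candidate foothold. The hand-set weights below are illustrative assumptions that mimic rules like "prefer footholds near the nominal step length on stable ground"; the actual network and rules come from the human experiments.

```python
import math

def foothold_score(distance_m, stability, nominal_step=0.6):
    """Score a candidate foothold in (0, 1) with a 2-2-1 network.

    distance_m -- distance from the current stance foot
    stability  -- visually estimated surface stability in [0, 1]
    All weights and the 0.6 m nominal step are assumed values.
    """
    # Input features: deviation from nominal step length, stability.
    x = (abs(distance_m - nominal_step), stability)
    # Hidden layer: unit 0 penalizes step-length deviation,
    # unit 1 rewards stable surfaces.
    w_h = ((-4.0, 0.0), (0.0, 3.0))
    b_h = (2.0, -1.5)
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w_h, b_h)]
    # Output: logistic score.
    z = 1.5 * h[0] + 1.5 * h[1]
    return 1.0 / (1.0 + math.exp(-z))

# The robot evaluates all detected footholds and steps on the best:
candidates = [(0.35, 0.9), (0.62, 0.95), (0.80, 0.4)]
best = max(candidates, key=lambda c: foothold_score(*c))
```

Here the second candidate wins: it sits closest to the nominal step length on the most stable surface.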