Until now, even the most advanced robots have been
unable to move autonomously and reliably without visual orientation
aids in the environment and without geographic maps loaded into the
system. While such orientation aids or hard-coded geographic
maps allowed robots to move within a particular space, these robots have had
no “awareness” of their location and
movement. Recent products on the market can
recognize objects that were not pre-programmed into the system, but
they are still unable to avoid or bypass arbitrary objects and
obstacles in a controlled manner at high speeds.
The software developed by JSSC enables
mobile robots to move rapidly, independent of external markers
or pre-programmed geographic maps in unfamiliar space. This
is made possible by JSSC’s novel technology,
Cogniton™, which allows the robots to be continuously
aware of their environment and their own position. As the
robot drives through the natural structure of its surroundings, the
current environment is stored in a 3-dimensional model, and the
robot’s relative position and speed are determined from the
changes in the images produced by the cameras and other
sensors. Furthermore, the software can reconstruct in real
time a 3D model of any object in its environment.
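To illustrate the underlying principle (not JSSC’s proprietary Cogniton™ algorithm, which is not public), the sketch below estimates a camera’s sideways motion from the pixel displacement of tracked features between two frames, assuming a simple pinhole model with known focal length and feature depths; all numbers are made up for the example:

```python
# Illustrative sketch only: under a pinhole camera model, a lateral
# camera translation tx shifts a feature at depth Z by
# dx = focal * tx / Z pixels.  Inverting this per tracked feature and
# averaging yields a crude motion estimate from image changes alone.

def estimate_translation(points_t0, points_t1, depths, focal=500.0):
    """Average camera translation (metres) implied by feature shifts."""
    tx_estimates = []
    ty_estimates = []
    for (x0, y0), (x1, y1), z in zip(points_t0, points_t1, depths):
        tx_estimates.append((x1 - x0) * z / focal)
        ty_estimates.append((y1 - y0) * z / focal)
    n = len(tx_estimates)
    return sum(tx_estimates) / n, sum(ty_estimates) / n

# Features matched across two consecutive frames, with assumed depths (m):
p0 = [(100.0, 50.0), (200.0, 80.0), (-150.0, -40.0)]
p1 = [(110.0, 50.0), (210.0, 80.0), (-140.0, -40.0)]
z = [5.0, 5.0, 5.0]
tx, ty = estimate_translation(p0, p1, z)
print(tx, ty)  # ~0.1 m of lateral motion, no vertical motion
```

A production system would of course track hundreds of features, estimate rotation as well, and reject outliers, but the principle is the same: ego-motion is recovered from relative changes in the images.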
These advances are made possible
by novel approaches to solving the vision problem, which are
based on insights into the mechanisms of the biological and
psychological processes that enable human vision, perception and
orientation. This approach enables the system to use its control of the
sensors and vehicle motion to dramatically increase the speed
and accuracy of 3-dimensional perception, thereby enabling the robot to
navigate quickly and safely in unknown environments.
The software is able to construct detailed maps of the environment from
the relative changes of the images of the surrounding objects,
including the structure of the ground. At the same time the
system can be instructed to search for specified objects.
The system will have the capability to collect
and integrate positional data from GPS units, cell tower signals, and
cameras, and will provide a natural basis for a collaborative architecture
in which a group of robots can operate completely autonomously or be
supervised and directed by human operators. By providing a robust basis
for reliable operation in unknown and unstructured territory, this
technology enables a new level of fast and safe autonomous navigation
and enhanced path planning, enabling autonomous operation and data
collection at previously unreachable locations and at an unprecedented
pace. The algorithms are so efficient that no expensive,
special-purpose hardware is required.
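The kind of multi-source positional fusion described above can be sketched with an inverse-variance weighted average, the basic building block of a Kalman filter; the sensor names and noise figures below are illustrative assumptions, not JSSC specifications:

```python
# Minimal sketch of fusing position estimates from several sources
# (e.g. GPS, cell towers, visual odometry).  Each reading is weighted
# by 1/variance, so more precise sensors dominate the fused result.

def fuse(estimates):
    """Fuse (position, variance) pairs into one (position, variance)."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    position = sum(w * pos for w, (pos, _) in zip(weights, estimates)) / total
    return position, 1.0 / total  # fused variance shrinks as sources add up

readings = [
    (10.0, 25.0),   # GPS fix: coarse (std dev ~5 m), assumed values
    (12.0, 1.0),    # visual odometry: precise (std dev ~1 m)
    (8.0, 100.0),   # cell-tower triangulation: very coarse (~10 m)
]
pos, var = fuse(readings)
print(round(pos, 3), round(var, 3))  # fused estimate sits near the odometry
```

Note how the fused variance is smaller than that of any single sensor, which is what makes combining coarse and fine sources worthwhile.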
The vision software is integrated with
sophisticated planning software. To perform a given task, the
program generates a plan (a sequence of actions) that leads from the
current state to the "goal" state, in which the task is
completed. The system will "understand" written and oral
instructions; that is, it will translate these verbal instructions into
goal configurations in the 3D model space, which are used by an
intelligent planning system to quickly generate efficient
plans. The system then performs the sequence of actions
specified by the plan until the goal is achieved. If problems occur
along the way and the current plan is no longer applicable, a
new plan is generated and executed.
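The plan-execute-replan loop described above can be sketched on a toy grid world; the breadth-first planner and the map representation here are illustrative stand-ins for JSSC’s actual planning system:

```python
# Sketch of plan -> execute -> replan: a BFS planner produces a sequence
# of moves; when execution hits an obstacle the plan did not anticipate,
# the map is updated and a new plan is generated, as described above.

from collections import deque

def plan(grid, start, goal):
    """Breadth-first search over free cells; returns a path or None."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

def execute(real_grid, believed_grid, start, goal):
    """Follow the current plan; on an unexpected obstacle, replan."""
    pos = start
    while pos != goal:
        path = plan(believed_grid, pos, goal)
        if path is None:
            return None            # no plan can reach the goal
        for r, c in path[1:]:
            if real_grid[r][c]:    # sensed obstacle: update map, replan
                believed_grid[r][c] = 1
                break
            pos = (r, c)
    return pos

real = [[0, 0, 0],
        [0, 0, 0],
        [1, 0, 0]]                 # obstacle at (2, 0), unknown to the robot
believed = [[0, 0, 0],
            [0, 0, 0],
            [0, 0, 0]]             # the robot's initial (incomplete) map
result = execute(real, believed, (0, 0), (2, 2))
print(result)  # the robot replans around the obstacle and reaches (2, 2)
```

In the run above the first plan routes through the blocked cell; execution detects this, records the obstacle, and the second plan succeeds, mirroring the replanning behaviour described in the text.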
The video below shows a prototype version of the software
as it processes the video signals from a camera mounted on a car. As the
car drives along the road, the software reconstructs the environment to
determine the location and movement of the car and to identify obstacles (right panel).
This information would allow a vehicle to autonomously drive to a
destination point when given access to the vehicle’s controls.