Common Reality

Rationale

Cognitively speaking, it is often easiest to communicate complex ideas with analogies. Take a moment and imagine the human nervous system, from the lowest little pressure neuron on the skin, through the spinal cord, up to the lower and midbrain, and on to the frontal lobes. One can draw a distinction between those regions of the nervous system that act automatically and those that are under conscious control. Let’s take this distinction seriously for a moment. All the neurons leading to the spinal cord, on up to the cerebellum and on to the perceptual cortices, are functionally analogous to the concerns of much of robotics. The frontal cortices, the memory and learning systems, and the regions implicated in mental imagery fall under the umbrella of the cognitive sciences.


Taking this analogy one step further, the Common Reality project would be that grey area between the higher and lower brain regions (pun intended). The sensory regions send afferent signals (I’ll avoid the terms input and output, since those labels depend on the perspective of the receiver) to the cognitive regions. These in turn process the information in some task-specific manner and can make efferent requests of the sensory systems, e.g. pressing a button or shifting the eyes to another location.
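To make those two directions of flow concrete, here is a minimal Java sketch of the channels Common Reality mediates. Every name in it (Percept, EfferentCommand, Mediator) is an illustrative assumption, not the actual CommonReality API:

    // Afferent traffic: sensor side -> cognitive side. The names here are
    // hypothetical stand-ins, not the real CommonReality interfaces.
    interface Percept {
        String identifier();     // stable identity of the perceived thing
    }

    // Efferent traffic: cognitive side -> sensor/effector side.
    interface EfferentCommand {
        String action();         // e.g. "press-button" or "shift-gaze"
    }

    // Common Reality sits between the two, routing in both directions.
    interface Mediator {
        void dispatchAfferent(Percept percept);          // toward the models
        void dispatchEfferent(EfferentCommand command);  // toward the sensors
    }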

[Figure: Common Reality schematic]

Two Examples


The interplay between a cognitive architecture and a robotic system through Common Reality is best illustrated with two examples, one from each discipline’s perspective.


Cognitive Science: Small Forces Infiltration

Imagine that the U.S. Army contracts a group of cognitive scientists to develop realistic models of small-forces infiltration tactics in an urban environment, with the purpose of improving the training of military personnel. This is an extremely complex task that encompasses many abstract concepts (strategy, planning, prediction) as well as many concrete perceptual tasks (target discrimination, aiming, taking cover). One could do it all “in the head” of the models, but it would be much easier, and much higher in fidelity, if the models could interact in the same training environment as the trainees (in this case the video game Unreal Tournament).


The researchers could attach an interface to the Unreal Tournament game on the sensing side of CommonReality. This interface would be responsible for translating the world of game objects (players, items, and structures) into perceivable entities to be utilized by the models. For example, if an opponent becomes visible to a model, the interface would dispatch a perception of that opponent through CommonReality to the model. The CommonReality system would also handle the perceptual-stability issue of making sure that opponent A is always perceived as opponent A.
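A hedged sketch of what that translator might look like, building on the illustrative Mediator and Percept interfaces above (UnrealSensor, its visibility hook, and the integer game-object ids are assumptions for the example; the real interface would speak Unreal Tournament’s own protocol):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sensing-side translator. It assigns each game object a
    // stable percept identifier, so the same opponent is always perceived
    // as the same entity (the perceptual-stability guarantee).
    final class UnrealSensor {
        private final Mediator mediator;
        private final Map<Integer, String> stableIds = new HashMap<>();
        private int nextId = 0;

        UnrealSensor(Mediator mediator) {
            this.mediator = mediator;
        }

        // called by the (assumed) game hook when an opponent comes into view
        void onOpponentVisible(int gameObjectId) {
            String stableId = stableIds.computeIfAbsent(
                gameObjectId, id -> "opponent-" + nextId++);
            mediator.dispatchAfferent(() -> stableId);  // route the percept to the models
        }
    }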


The models developed of the squad leader and the other soldiers would then be added to the CommonReality system (note: many models, not necessarily running on the same cognitive architecture, can be attached). As they process the perceptions (delivered as AfferentEvents), they dispatch commands to CommonReality, which in turn routes them to the Unreal interface, allowing the modeled players to engage the world.
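On the model side, the loop is the mirror image. A sketch under the same assumed names (SoldierModel is hypothetical, and in a real system the cognitive architecture, not hand-written Java, would decide how to respond):

    // Hypothetical model participant: consumes percepts, answers with
    // efferent commands that CommonReality routes to the Unreal interface.
    final class SoldierModel {
        private final Mediator mediator;

        SoldierModel(Mediator mediator) {
            this.mediator = mediator;
        }

        // invoked for each incoming perception (an AfferentEvent)
        void onAfferent(Percept percept) {
            // stand-in for real cognition: any visible opponent provokes cover
            if (percept.identifier().startsWith("opponent-")) {
                mediator.dispatchEfferent(() -> "take-cover");
            }
        }
    }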


Given that this example relies on time-synchronized simulation, the CommonReality system also provides a mechanism for time control and monitoring. This permits all of the models to operate on the same time scale as the Unreal game itself. However, there is no strict requirement to couple the models or the simulation to this time control; it is merely provided for those situations in which it is required.
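A sketch of what such a clock could look like for a participant that opts in; the SimulationClock interface and its blocking waitForTimeChange method are assumptions for illustration, not the documented mechanism:

    // Hypothetical shared clock: participants that opt in block until the
    // clock advances, so everyone moves on the simulation's time scale.
    interface SimulationClock {
        double currentTime();                                    // simulated seconds
        double waitForTimeChange() throws InterruptedException;  // blocks, returns new time
    }

    final class ClockedModel {
        void run(SimulationClock clock) throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                double now = clock.waitForTimeChange();  // all participants advance together
                step(now);
            }
        }

        private void step(double simulatedTime) {
            // one cognitive cycle at the given simulated time (stand-in)
        }
    }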


Robotics: Hide and Seek

Let’s imagine that the very same funders of the small-forces simulation are also funding research on autonomous robots that can hide in the face of danger or detection. This project requires a robot that can move across a variety of terrains and that can perceive objects both for identification and for navigation. These aren’t small tasks. Add to that the requirement that the robot understand what hiding actually entails: skills like these require the ability to perform cognitive tasks such as taking the perspective of the seeker, in order to ensure that the robot cannot be seen.


In addition to the robotics work required, researchers could develop a model of a person playing hide and seek, complete with a memory system of good and bad hiding places and, most importantly, the abstract ability to view a scene from imaginary angles. A possible extension would be to maintain two cognitive models: one for the robot (hider) and one for the opponent (seeker). The robot’s model could control what perceptual information reaches the seeker model, effectively allowing it to simulate the various situations that the seeker might be presented with. Anywhere that the seeker model would look first would immediately be eliminated from the set of candidate hiding places.
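That elimination step is easy to sketch. Assuming a hypothetical SeekerModel that can be asked where it would look first, the hider’s planner simply filters its candidates:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical internal seeker model: given the simulated scene, would
    // the seeker check this hiding place first?
    interface SeekerModel {
        boolean wouldLookFirst(String hidingPlace);
    }

    // The hider's planner keeps only the places the seeker overlooks.
    final class HiderPlanner {
        List<String> filterCandidates(List<String> candidates, SeekerModel seeker) {
            List<String> survivors = new ArrayList<>();
            for (String place : candidates) {
                if (!seeker.wouldLookFirst(place)) {
                    survivors.add(place);
                }
            }
            return survivors;
        }
    }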


Further Affordances

A significant motivator for all of this work is to more effectively leverage specialization and the division of labor. But the added layer of abstraction brings with it even more. Some of the following ideas have come up during brainstorming sessions: