December 18, 2007

Unconscious Perception: Adding a Dorsal Stream to IDA

SCR Feature, theory, unconscious processes — alice @ 10:47 pm

Minds, Agents, and the Only Question That Matters:

For the past decade or more, my research team has pursued an understanding of how minds work: human minds, animal minds, and artificial minds. Minds? To us, a mind is a control structure for an autonomous agent. Autonomous agent? An autonomous agent (Franklin & Graesser 1997) is a system situated in, and part of, an environment, which senses that environment and acts on it, over time, in pursuit of its own agenda, and in such a way that its actions may affect its future sensing. Biological examples of autonomous agents include humans and other animals. Non-biological examples include some mobile robots and various computational agents, including artificial life agents (Langton 1989), software agents (Franklin & Graesser 1997), and many computer viruses.

Every autonomous agent, including you, me, my cat, my software agent IDA and the thermostat in this room, must continually answer for itself the only question that really matters: What do I do next? That’s what minds, that is, control structures, are for: to answer this question, to choose what to do next. How do minds so choose?

Cognition and the Cognitive Cycle:

They choose by means of frequent iteration of a sense-process-act cycle that begins with sampling (sensing) of the environment, continues by processing the incoming stimuli, and concludes by acting on the agent’s world. We refer to the processing portion of each such cycle as cognition, and to the cycle itself as the cognitive cycle. A very simple agent, such as a thermostat, may have a trivial cognitive cycle consisting only of sense-reflex-act. At the other end of the spectrum may be a human cognitive cycle, sense-cognition-act, whose quite complex cognition includes perception, working memory, episodic memory, consciousness, procedural memory, and action selection.
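The thermostat's trivial sense-reflex-act cycle can be sketched in a few lines. This is purely illustrative; the class and method names are invented here, not drawn from any LIDA code.

```python
class Thermostat:
    """Trivial autonomous agent: its entire 'cognition' is one reflex rule."""

    def __init__(self, setpoint: float, tolerance: float = 0.5):
        self.setpoint = setpoint
        self.tolerance = tolerance

    def sense(self, room_temperature: float) -> float:
        # Sensing is just reading the input.
        return room_temperature

    def reflex(self, temperature: float) -> str:
        # The whole "process" step: a single stimulus-response rule.
        if temperature < self.setpoint - self.tolerance:
            return "heat_on"
        if temperature > self.setpoint + self.tolerance:
            return "heat_off"
        return "no_op"  # acting may mean doing nothing at all

    def act(self, room_temperature: float) -> str:
        # One full sense-reflex-act cycle.
        return self.reflex(self.sense(room_temperature))


thermostat = Thermostat(setpoint=20.0)
print(thermostat.act(18.0))  # heat_on
print(thermostat.act(20.2))  # no_op
```

Note that even this trivial agent answers "What do I do next?" on every cycle; the human cycle differs in the richness of the processing step, not in the shape of the loop.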

In humans other, higher-level, cognitive processes such as high-level perception, deliberation, volition, self, metacognition, etc., are accomplished using multiple cognitive cycles. Thus, a cognitive cycle is best thought of as a cognitive atom or a cognitive moment, the fundamental unit of cognition out of which everything else is built. Let’s look at an example.

The LIDA Model and her Cognitive Cycle

Designed around Baars’ global workspace theory (1988, 1997) and adhering to a number of other psychological theories (Baddeley 2000, Barsalou 1999, Conway 2002, Ericsson and Kintsch 1995, Glenberg 1997), IDA is a software agent that does personnel work for the US Navy (Franklin 2001). IDA was completely hand crafted; what she knows was built into her by her designers.

LIDA is a second-order acronym standing for Learning IDA, that is, the IDA model with perceptual, episodic, and procedural learning added (D’Mello et al. 2006). Working from the diagram in Figure 1, we’ll briefly describe LIDA’s cognitive cycle. (For more complete descriptions of this cognitive cycle in context, please see Baars and Franklin 2003, Franklin et al. 2005.)

Figure 1. The LIDA Cognitive Cycle

For convenience, we’ll divide the LIDA cognitive cycle into nine steps.

  1. Incoming sensory stimuli are filtered through preconscious perception, where meaning is added and a percept is produced.
  2. The current percept moves to preconscious working memory where it participates, along with undecayed percepts from previous cycles, in the structure building of higher-level perception.
  3. The current structure from working memory cues transient episodic memory and declarative memory producing local associations, which are stored in long-term working memory.
  4. Coalitions of the contents of long-term working memory compete for consciousness thus training attention on the most relevant, urgent, important, etc.
  5. The conscious broadcast, à la global workspace theory, occurs, enabling the various forms of learning and the recruitment of internal resources. The broadcast is hypothesized to be the time of phenomenal consciousness.
  6. Receiving the contents of the conscious broadcast, appropriate schemes from procedural memory respond.
  7. Responding schemes instantiate copies of themselves in the action selection mechanism, bind variables, and pass activation.
  8. The action selection mechanism chooses an action for this cognitive cycle.
  9. LIDA then acts on her environment.
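The nine steps above can be rendered as a toy end-to-end loop. Everything in this sketch, the data structures, the activation arithmetic, the decay rate, is invented for illustration; only the ordering of the steps comes from the LIDA model.

```python
def cognitive_cycle(stimulus, working_memory, episodic_memory, procedural_memory):
    # 1. Preconscious perception: add meaning, producing a percept.
    percept = {"content": stimulus, "activation": 1.0}

    # 2. Percepts from earlier cycles decay; the new percept joins them.
    for old in working_memory:
        old["activation"] *= 0.5
    working_memory.append(percept)

    # 3. Cue episodic memory with the current structure for local associations.
    associations = [m for m in episodic_memory if m["content"] == stimulus]

    # 4. Coalitions compete for consciousness on total activation.
    coalitions = [[p] + associations for p in working_memory]
    winner = max(coalitions, key=lambda c: sum(x["activation"] for x in c))

    # 5. Conscious broadcast: the winning coalition's content goes out globally.
    broadcast = winner[0]["content"]

    # 6-7. Schemes whose context matches the broadcast respond and instantiate.
    behaviors = [s for s in procedural_memory if s["context"] == broadcast]

    # 8-9. Select the most active behavior; its action is taken this cycle.
    chosen = max(behaviors, key=lambda s: s["activation"])
    return chosen["action"]


procedural_memory = [
    {"context": "food", "action": "approach", "activation": 0.9},
    {"context": "predator", "action": "flee", "activation": 0.8},
]
print(cognitive_cycle("predator", [], [], procedural_memory))  # flee
```

The point of the sketch is the control flow: nothing after step 5 sees anything except what won the competition for consciousness, which is exactly the global-workspace claim.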

Human cognitive cycles, as modeled by LIDA, are hypothesized to sample the environment and act on it asynchronously every 100 to 300 ms. (The running IDA software agent did not sample at this rate.) This timing is compatible with neuroscience evidence (Halgren et al. 2002, Freeman, Burke and Holmes 2003, Lehmann, Ozaki and Pal 1987, Lehmann et al. 1998). Cycles may cascade, with several cycles having parts running simultaneously in parallel. The seriality of consciousness must, however, be preserved. Since so many actions, for example saccades of the eyes, simply redirect the senses, one could argue that cycles should begin with step 9, an action.
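The cascading-with-seriality constraint can be sketched with threads: preconscious stages of several cycles overlap freely, but a lock forces the conscious broadcasts (step 5) to occur one at a time. The timings and names here are illustrative only.

```python
import threading
import time

broadcast_lock = threading.Lock()
broadcast_order = []  # record of serialized conscious broadcasts

def cycle(stimulus):
    # Preconscious stages of many cycles may run simultaneously in parallel.
    time.sleep(0.01)
    # But only one conscious broadcast can occur at a time.
    with broadcast_lock:
        broadcast_order.append(stimulus)
        time.sleep(0.005)

threads = [threading.Thread(target=cycle, args=(s,)) for s in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(broadcast_order))  # 5: every cycle broadcast exactly once, serially
```

However the five cycles interleave their early stages, the lock guarantees a strict serial order of broadcasts, which is the property the model must preserve.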

The LIDA model is a unified theory of cognition (Newell 1990), and perhaps the most complete such theory to date. Including both its cognitive cycle and the higher-level cognitive processes mentioned above, the LIDA model aims to be a cognitive “theory of everything,” that is, to model all of human cognition. This is, of course, an unattainable ambition, since human cognition is too rich and multi-faceted to ever be modeled completely. There will always be gaps, that is, cognitive processes not incorporated into the model. However, we hope to make the basic building block of the LIDA model, its cognitive cycle, sufficiently complete that any particular gap in the model can be filled by addition alone, that is, without any significant reworking of the existing model. As an example, let’s look at filling such a gap by adding a dorsal stream to LIDA’s perception.

Unconscious Perception via the Dorsal Stream

Perception acts on incoming sensory stimuli to produce information by adding meaning (Oyama 1985). Thus perception enables an agent to model its world. Note that not every agent needs this kind of modeling. A thermostat causally transforms its sensory input, a temperature, into an action, possibly a no-operation. On the other hand, essentially every animal requires perception to identify food items, mates, predators, nest mates, and so on. An animal’s knowledge of its world is, at best, approximate, and arises from sensory stimuli via perception.

As Bateson so succinctly points out, “The processes of perception are inaccessible; only the products are conscious and, of course, it is the products that are necessary” (1979, p. 32). Looking at the LIDA cognitive cycle above, we see that all of perception is preconscious in that it occurs in the cycle prior to the conscious broadcast, the moment of phenomenal consciousness. But some of the contents (products) of perception eventually come to consciousness during the cycle. Can all of the contents of perceptual associative memory potentially come to consciousness? Before learning about the dorsal stream (Milner and Goodale 1995, Goodale and Humphrey 1998, Goodale and Milner 2004) we thought so. Dorsal stream? What’s that?

The work of Goodale and Milner, as described in their eminently readable short book Sight Unseen (2004), describes two divergent visual perception streams, the ventral stream and the dorsal stream, often referred to as the “what” stream and the “how” stream. The ventral stream is concerned with making sense of the current scene, while the dorsal stream is used to guide actions. (Recent additions to our knowledge have shown that what was called the ventral stream should actually be two distinct streams, but that’s beyond the scope of this essay; for our purposes, let’s refer to both together as the ventral stream.) Our main focus is on adding a dorsal stream to the LIDA model.

Both streams can be thought of as starting in the early visual areas of the occipital lobe (V1, V2, V3), and later diverging. Starting at the back of the head in the occipital lobe, the ventral stream winds its way around the side and into the temporal lobe, before sending out connections to other temporal and frontal lobe structures housing episodic memory, decision making, and the like. Starting nearby, the dorsal stream moves upward through the occipital lobe into the parietal lobe and continues until it makes contact with the primary somatosensory cortex and the primary motor cortex. There are also direct pathways from the dorsal stream to lower parts of the brain such as the superior colliculus. But why so much brain geography, and why two separate visual perceptual streams?

Goodale and Milner describe quite convincing empirical studies that show decidedly different functions for the two perceptual streams. Each being part of perception, both streams produce information by adding meaning to incoming sensory stimuli. They differ as to the kind of meaning they add. Let’s look at the needs and functions of the two visual perception streams individually.

The ventral or “what” stream is concerned with making sense of a scene. This requires recognizing and categorizing objects, and the relations between them, that is, situations. This recognition and categorization must be accomplished independent of scale. The scene must be understood whether it appears on a small TV screen, in an actual physical room, or on a huge screen in a movie theater. This understanding must also be independent of position. What’s needed is the relative position of objects, what’s to the left of what, what’s on what, etc. Approximate distance metrics suffice. The function of the ventral stream is to provide information of use in choosing the next action, as outlined above in the description of the LIDA cognitive cycle.

The dorsal, or “how,” stream is concerned with providing information on how to carry out an action. For instance, grasping requires not a relative, but an exact, location relative to the hand, and an exact size. Again, this process is part of perception creating information by adding meaning, but it’s a quite different sort of meaning.
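The contrast between the two kinds of meaning can be made concrete with a toy scene. The scene format, field names, and the grip-margin number below are all invented for illustration, not taken from Goodale and Milner.

```python
# One scene, two "meanings": relational and approximate for the ventral
# stream, metric and hand-relative for the dorsal stream.
scene = {
    "cup":  {"x_mm": 412.0, "y_mm": 165.0, "diameter_mm": 82.0},
    "book": {"x_mm": 250.0, "y_mm": 160.0, "diameter_mm": 190.0},
}

def ventral_description(scene):
    """'What' stream: relative positions; approximate metrics suffice."""
    cup, book = scene["cup"], scene["book"]
    side = "right of" if cup["x_mm"] > book["x_mm"] else "left of"
    return f"cup {side} book"

def dorsal_grasp_parameters(scene, hand_x_mm, hand_y_mm):
    """'How' stream: exact target location relative to the hand, exact size."""
    cup = scene["cup"]
    return {
        "reach_dx_mm": cup["x_mm"] - hand_x_mm,
        "reach_dy_mm": cup["y_mm"] - hand_y_mm,
        "grip_aperture_mm": cup["diameter_mm"] + 10.0,  # illustrative margin
    }

print(ventral_description(scene))                # cup right of book
print(dorsal_grasp_parameters(scene, 300.0, 100.0))
```

The ventral description survives any rescaling of the scene unchanged; the dorsal parameters are useless unless they are exact and hand-relative. That asymmetry is the functional difference between the streams.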

The Goodale and Milner work began with a patient whose ventral stream was severely impaired. She could consciously see some color and texture, but was completely unable to recognize shapes and, therefore, couldn’t identify objects. But with her intact dorsal stream, the patient could accurately grasp objects she reported not seeing, and could navigate through rooms without bumping into chairs that she couldn’t report seeing. Careful experimentation, some using optical illusions, convinced Goodale and Milner that the dorsal stream was not only preconscious, but unconscious; that is, its contents never come to consciousness. Thus we arrive at the title of this section, unconscious perception via the dorsal stream.

But, is dorsal stream perception really unconscious, or is it only that people can’t report it because they have no episodic memory of it? This situation occurs with dream amnesia, and with so-called unconscious driving. In both cases we have concluded that the problem in reporting was with episodic memory and not with consciousness (Franklin et al. 2005). Might this also be true of the inability to report the contents of the dorsal stream?

Although the neural correlates of consciousness are not precisely known (Koch 2004), we’ve concluded that’s not the case on the basis of known neural connections. If one follows the LIDA cognitive cycle through brain areas known to be involved with the various LIDA processes, one doesn’t arrive at the endpoint of the dorsal stream until action selection has taken place in the cycle.

One can bolster this conclusion by considering the functions of the two streams. The contents of the ventral stream, the sense of the scene, would be expected to be of use in choosing what to do next and, according to global workspace theory, should come to consciousness. On the other hand, the contents of the dorsal stream are only needed to effect an action after it has been chosen. Consciousness would be irrelevant at this point. This is an argument from computational needs.

The end result of these considerations points to a gap in the LIDA model that needs to be filled to accommodate an unconscious dorsal stream. Filling this gap required the addition of a sensory memory and a sensory-motor memory connected by a dorsal stream. Sensory memory holds incoming sensory stimuli. Such stimuli can be external, that is, generated by the environment, or internal, that is, generated by proprioception or other internal processes. Sensory memory also holds early feature detectors that begin to make sense of the stimuli. Sensory memory feeds into perceptual associative memory en route to consciousness, but its nodes are not permitted to become part of the percept. At a much faster time scale, sensory memory also feeds multiple times into each executing sensory-motor automatism (SMA), taken from sensory-motor memory, which operates without benefit of consciousness. All of this addition can be seen illustrated online in the LIDA brief tutorial.
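The dual routing just described can be sketched as one sensory memory feeding two routes at two time scales. All class names, tick counts, and the buffer format are invented here to illustrate the structure, not to reproduce the LIDA implementation.

```python
class SensoryMemory:
    """Holds the most recent incoming sensory stimulus."""
    def __init__(self):
        self.buffer = None

    def receive(self, stimulus):
        self.buffer = stimulus


class GraspSMA:
    """A sensory-motor automatism: consumes raw sensory data directly and
    never contributes to the percept, so its contents stay unconscious."""
    def __init__(self):
        self.updates = []

    def update(self, stimulus):
        self.updates.append(stimulus)  # fine motor correction would go here


sensory_memory = SensoryMemory()
sma = GraspSMA()
percepts = []

for cycle in range(3):        # three cognitive cycles (the slow, ventral route)
    for tick in range(5):     # fast inner loop: several samples per cycle
        sensory_memory.receive((cycle, tick))
        sma.update(sensory_memory.buffer)      # dorsal route, every tick
    percepts.append(sensory_memory.buffer)     # toward perception, once per cycle

print(len(sma.updates), len(percepts))  # 15 3
```

The same buffer serves both routes; what differs is the sampling rate and the destination, which is exactly the asymmetry the dorsal-stream addition introduces into the model.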

This example illustrates the relative ease of filling gaps in the LIDA model without changing its overall structure. This ease adds to the believability of the model in the absence of direct empirical verification, since current experimental technology lacks either the temporal or spatial resolution, or the scope, required (Franklin et al. 2005). We hope to have such verification or falsification in the future as the experimental technology improves.


  1. Baars, B. J. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
  2. Baars, B. J. 1997. In the Theater of Consciousness. Oxford: Oxford University Press.
  3. Baars, B. J., and S. Franklin. 2003. How conscious experience and working memory interact. Trends in Cognitive Science 7:166-172.
  4. Baddeley, A. D. 2000. The episodic buffer: a new component of working memory? Trends in Cognitive Science 4:417-423.
  5. Barsalou, L. W. 1999. Perceptual symbol systems. Behavioral and Brain Sciences 22:577-609.
  6. Bateson, G. 1979. Mind and Nature: a Necessary Unity. New York: Dutton.
  7. Conway, M. A. 2002. Sensory-perceptual episodic memory and its context: autobiographical memory. In Episodic Memory, ed. A. Baddeley, M. Conway, and J. Aggleton. Oxford: Oxford University Press.
  8. D’Mello, S. K., S. Franklin, U. Ramamurthy, and B. J. Baars. 2006. A cognitive science based machine learning architecture. In AAAI 2006 Spring Symposium Series. Stanford University, Palo Alto, California: American Association for Artificial Intelligence.
  9. Ericsson, K. A., and W. Kintsch. 1995. Long-term working memory. Psychological Review 102:211-245.
  10. Franklin, S. 2001. Automating Human Information Agents. In Practical Applications of Intelligent Agents, ed. Z. Chen, and L. C. Jain. Berlin: Springer-Verlag.
  11. Franklin, S., B. J. Baars, U. Ramamurthy, and M. Ventura. 2005. The Role of Consciousness in Memory. Brains, Minds and Media 1:1-38.
  12. Franklin, S., and A. C. Graesser. 1997. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. In Intelligent Agents III. Berlin: Springer Verlag.
  13. Freeman, W. J., B. C. Burke, and M. D. Holmes. 2003. Aperiodic Phase Re-Setting in Scalp EEG of Beta-Gamma Oscillations by State Transitions at Alpha-Theta Rates. Human Brain Mapping 19:248-272.
  14. Glenberg, A M. 1997. What memory is for. Behavioral and Brain Sciences 20: 1-19.
  15. Goodale, M. A., and G. K. Humphrey. 1998. The objects of action and perception. Cognition 67:181-208.
  16. Goodale, M. A., and D. Milner. 2004. Sight Unseen. Oxford: Oxford University Press.
  17. Halgren, E., C. Boujon, J. Clarke, C. Wang, and P. Chauvel. 2002. Rapid distributed fronto-parieto-occipital processing stages during working memory in humans. Cerebral Cortex 12:710-728.
  18. Koch, C. 2004. The Quest for Consciousness: A neurobiological approach. Englewood, Colorado: Roberts & Co.
  19. Langton, C. 1989. Artificial Life. Redwood City, Calif.: Addison-Wesley.
  20. Lehmann, D., H. Ozaki, and I. Pal. 1987. EEG alpha map series: brain micro-states by space-oriented adaptive segmentation. Electroencephalogr. Clin. Neurophysiol. 67:271-288.
  21. Lehmann, D., W. K. Strik, B. Henggeler, T. Koenig, and M. Koukkou. 1998. Brain electric microstates and momentary conscious mind states as building blocks of spontaneous thinking: I. Visual imagery and abstract thoughts. Int. J. Psychophysiol. 29:1-11.
  22. Milner, A. D., and M. A. Goodale. 1995. The Visual Brain in Action. Oxford Psychology Series. Oxford: Oxford University Press.
  23. Newell, A. 1990. Unified Theories of Cognition. Cambridge MA: Harvard University Press.
  24. Oyama, S. 1985. The Ontogeny of Information. Cambridge: Cambridge University Press.
