Conceptual Adaptations 3: Society

(under construction 5 January 2002)


Some dimensions of human cognitive adaptations
(links refer to the glossary):

 


Adaptations relating to social life
 

Non-verbal communication

Facial expressions

For evidence of a panhuman design for inferring emotions from facial expressions, see Fridlund 1994; Baron-Cohen, Riviere, et al. 1996. For a more detailed discussion of reading mental states and emotions, see the section in Conceptual Adaptations 1: Folk Psychology and the section below on The Role of Emotions in Regulating Cooperation.

Body language

Body language should be considered both in conjunction with and independently of verbal language. Even if it should turn out that verbal language has evolved from gestural language (see next), there may be aspects of body language that did not carry over well into words, or a channel of communication that is not subject to the same restrictions as verbal language. There may also be aspects of body language that serve as mnemonics rather than as a medium of communication; Laura Spinney (see next) makes some suggestions and provides an overview of the motor theory.

The motor theory of language

The notion that verbal language evolved from a gestural language was first proposed by Condillac in 1746. Good evidence on the subject is hard to come by, but some recent material is not entirely speculative. For an overview, see Laura Spinney's Body Talk (April 2000).

Goldin-Meadow and Mylander report in Nature (January 15, 1998) on the spontaneous sign language of children in Taiwan and the United States, complementing the work of Kegl and McWhorter on the gesture language invented by children in Nicaragua. This work provides observational evidence for an innate relation between gesture and language, and supports the hypothesis that language evolved as an exaptation of a previously existing complex cerebral motor control system. It also shows that the effects of the motor origin of language can be observed and demonstrated in the lexicon and syntax of languages generally. See Robin Allott's site Language and Evolution for more material on the motor theory of language origin, including animated gestural equivalents of a considerable number of words in English, Japanese, and French (with more cursory treatment of Hebrew, Korean, Finnish, Hungarian, and Basque). Viewing the animations on the Web requires a recent version of Microsoft Internet Explorer or Netscape (but not necessarily Java).

There is evidently also neuroimaging work, possibly by Laura-Ann Petitto, showing that the same cortical areas involved in spoken language in hearing individuals are involved in signing in deaf persons.

Literature:

Arbib, M. A. and G. Rizzolatti (1997). Neural expectations: a possible evolutionary path from manual skills to language. Communication and Cognition 29: 393.

Corballis, Michael C. (1999). The Gestural Origins of Language. American Scientist 87.2 (Mar-Apr): 138. (Full external text.)

Donald, Merlin (1991). Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition. Cambridge, MA: Harvard University Press. Author's precis in Behavioral and Brain Sciences article (external). See BBS (1993) 16(4): 737-791 and BBS (1996) 19(1) for peer commentary and author reply.

Gilroy, Peter (1996). Meaning without Words: Philosophy and Non-verbal Communication. Brookfield, VT: Avebury.

Goldin-Meadow, S. and D. McNeill (1999). The role of gesture and mimetic representation in making language the province of speech. In Corballis, M.C. and Lea, S. (eds.). The Descent of Mind. New York, NY: Oxford UP. Collection reviewed by Larry Fiddick.

Iverson, J. M. and E. Thelen (1999). Hand, mouth and brain: the dynamic emergence of speech and gesture. Journal of Consciousness Studies 6 (11-12): 19.

Spinney, Laura (2000). Body Talk. New Scientist, 8 April 2000.

Speech

Speech, in the sense of a vocabulary, is attested in vervet monkeys (Seyfarth and Cheney). The communication is symbolic and probably operates as cues that trigger the recall of memories (possibly semantic, possibly episodic) in a simple form of simulation.

A number of cognitive systems have been suggested to constitute the ancestor of grammatical language. Fine-motor control systems are plausibly involved; see body language above. More generally, language may have evolved to compensate for the lack of a graphic output device for the eye and the imagination.

Parents tend to sing to pre-linguistic infants; this may represent a supernormal stimulus. The infant needs to train the ability to classify certain sounds as phonemes. The rising and falling pitch patterns of singing exaggerate these cues; the added rhythm exaggerates the rhythms characteristic of speech.


 

Phoneme parsing

Spoken language is a discrete combinatorial system. Natural languages contain from around 16 (some Hawaiian varieties) to around 40 or 50 different phonemes (I have no firm data on the maximum). If we pick an average of 35 and restrict the length of lexical strings to ten phonemes (the real length ranges from 1 to somewhere around 50, with most strings clustering between 3 and 15), the total combinatorial space is 35^10, or several million billion (about 2.8 x 10^15) lexical strings. While it is a tricky matter to count the words in a language, this is several orders of magnitude more than any language actually uses, which is a few hundred thousand or a few million, depending on how one counts--in any case less than 35^5.

Of course, combinatorial exclusion rules thin out this space a bit. These are predicated on limitations of the speech production system (say, certain consonant clusters are hard to pronounce) or of the speech parsing and short-term memory systems (say, no more than two tokens of a phoneme can occur in succession). Still, the exclusion rules leave an abundance of lexical strings that can arbitrarily be associated with meanings.
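To make the magnitudes concrete, here is a short Python calculation of the combinatorial space, using the figures assumed above (an inventory of 35 phonemes, strings up to ten phonemes long) and the illustrative "no more than two identical phonemes in succession" exclusion rule; the numbers are a sketch, not data about any actual language.

# Back-of-the-envelope calculation of the lexical combinatorial space.
# The inventory size (35) and length cap (10) are the figures assumed
# in the text above; the exclusion rule is the illustrative one given.

PHONEMES = 35      # assumed average inventory size
MAX_LEN = 10       # assumed cap on lexical string length

# Raw space: all strings of length 1..10 over 35 phonemes.
raw = sum(PHONEMES ** n for n in range(1, MAX_LEN + 1))
print(f"raw space: {raw:.3e}")   # ~2.8e15, several million billion

def no_triple_runs(n: int, k: int = PHONEMES) -> int:
    """Count length-n strings with no phoneme repeated three times
    in a row: a = strings ending in a single symbol, b = strings
    ending in a double, both free of triple runs."""
    a, b = k, 0
    for _ in range(n - 1):
        a, b = (a + b) * (k - 1), a
    return a + b

thinned = sum(no_triple_runs(n) for n in range(1, MAX_LEN + 1))
print(f"after exclusion rule: {thinned:.3e}")   # still astronomical

# Even a generous vocabulary of a few million words uses a vanishing
# fraction of this space -- well under 35^5, as the text notes.
print(f"35^5 = {35**5:.3e}")   # ~5.3e7

The point of the sketch is simply that even aggressive exclusion rules barely dent the space: the supply of pronounceable, parseable strings vastly exceeds any actual lexicon.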

A major researcher in this area is developmental psychologist Peter W. Jusczyk; see especially The Discovery of Spoken Language (1997). Jusczyk presents evidence that human infants are capable of making a large number of categorical distinctions between phonemes in the first few months, in some cases from the age of one month, including distinctions not made in the language of their local culture. See also Perceptual Adaptations: The Phonological Parser.

The neurological work of Näätänen et al. (1997) supports the model that infant speech recognition is a domain-specific faculty with local languages filling open parameters.

Bruck (1992) found that dyslexics show a persistent inability to become conscious of phonemes. This suggests that learning to read involves conscious access to processes that are normally unconscious (phoneme parsing). (See also Shaywitz (1996) on the same issue.) This raises a couple of questions. First, why should most people be able to become conscious of phonemes at all? This would seem an unlikely adaptation. Second, what is the role of consciousness in making information from one cognitive module available to another? Phenomenological awareness in this case appears to provide a locus or language for shared access.

McGurk Effect
Our subjective experience of speech sounds depends not only on hearing but also on seeing. In certain cases, when shown lips forming phonemes that differ from the sound played, subjects will tend to hear the sound formed by the lips (the McGurk Effect). For example, the audio /ba/ paired with the visual /va/ is perceived as /va/ by pre-linguistic English-exposed infants as early as 20 weeks, while the audio /da/ paired with the visual /va/ is perceived as /da/. The effect persists in adults.

McGurk, Harry and Sarah Saqi (1986). Serial recall of static and dynamic stimuli by deaf and hearing children. British Journal of Developmental Psychology 4 (4): 305-310.

Rosenblum, Lawrence D., Mark A. Schmuckler, and Jennifer A. Johnson (1997). The McGurk effect in infants. Perception & Psychophysics 59 (3): 347-357.
 

Lexical access

In the previous section, I suggested that the child is building a virtual computer running on a base-35 language, creating a gigantic, multidimensional informational storage area. The task of learning words can then be defined as matching conventional word meanings to the right lexical strings. Learning the vocabulary of a language involves the same process of rote memorization performed by every other speaker.

Morrison et al. (1997) found that object naming speed varies with the age at which the object's name was acquired rather than with the frequency of exposure to it. This suggests lexical acquisition is a specialized faculty rather than a general learning ability. Pinker (1994) has some data on the remarkable efficiency of this process--a word mentioned in passing to a child is typically retained two weeks later; for detailed norms, see Dale & Fenson (1996).

Is word recognition a fully bottom-up autonomous process, dependent only on phoneme parsing, or is it sensitive to context and lexical expectations? A long history of research on multiple lexical access (bibliography with abstracts) suggests there are strong modular effects with little top-down influence.
Samuel (1997), however, found a top-down effect, suggesting that word selection involves a sublexical level influenced by phoneme-like representations that arise from top-down lexical-to-phonemic activation.
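To make the contrast between bottom-up and top-down processing concrete, here is a toy Python sketch in the spirit of interactive-activation models such as McClelland and Elman's TRACE; the mini-lexicon, evidence values, and feedback weight are all invented for illustration and do not reproduce Samuel's experiments or any published model.

# Toy sketch of top-down lexical-to-phonemic feedback. The lexicon,
# evidence values, and feedback weight are invented for illustration.

LEXICON = {"cat": "kat", "cab": "kab"}   # hypothetical mini-lexicon

def word_scores(evidence):
    """Bottom-up: score each word from per-position phoneme evidence."""
    return {w: sum(evidence[i].get(p, 0.0) for i, p in enumerate(phon))
            for w, phon in LEXICON.items()}

def feedback_pass(evidence, fb=0.2):
    """Top-down: partially activated words boost their own phonemes."""
    scores = word_scores(evidence)
    for w, phon in LEXICON.items():
        for i, p in enumerate(phon):
            evidence[i][p] = evidence[i].get(p, 0.0) + fb * scores[w]
    return evidence

# The final phoneme is acoustically ambiguous between /t/ and /p/,
# but only /t/ completes a word in this lexicon.
evidence = {0: {"k": 1.0}, 1: {"a": 1.0}, 2: {"t": 0.5, "p": 0.5}}
evidence = feedback_pass(evidence)
print(evidence[2])   # /t/ now dominates /p/: lexical knowledge has
                     # "restored" the ambiguous phoneme (top-down)

A purely bottom-up system would leave /t/ and /p/ tied; the single feedback pass is what breaks the tie, which is the signature of the lexical-to-phonemic activation Samuel describes.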

Vakoch and Wurm (1997) suggest that the affective lexicon is structured to avoid threats to the individual, while the general lexicon appears to be designed for obtaining scarce but valuable resources.

Friedemann Pulvermüller found some evidence that words are stored and processed in several areas of the cortex associated with the meanings of those words. For instance, words referring to movement could be coded in the motor cortex, while hearing-related words may be coded in the auditory cortex. See Alison Motluk, Mind files (New Scientist 21 Nov 98) (external).

Neural dissociation of speaking and writing

Kathleen Baynes and co-workers report evidence for a neural dissociation of spoken and written language in a subject with a resection of the corpus callosum. The subject, a left-handed woman with left-hemisphere dominance for spoken language, elected to undergo the surgery for a severe epileptic disorder. After the operation, she had no difficulty speaking aloud words flashed to the dominant left hemisphere, but she was unable to write them. When words were flashed to her right hemisphere, she could not speak them aloud but she could write them with her left hand.

The authors suggest this marked dissociation supports the view that spoken and written language output can be controlled by independent hemispheres, even if before hemispheric disconnection spoken and written language appear to be inseparable cognitive entities.

Contact: Kathleen Baynes (kbaynes@ucdavis.edu)

See also Baynes, Kathleen and James C. Eliassen (1998). The visual lexicon: Its access and organization in commissurotomy patients. In Mark Beeman and Christine Chiarello (eds.). Right Hemisphere Language Comprehension: Perspectives from Cognitive Neuroscience. Mahwah, NJ: Erlbaum. 79-104.
 

Domain specificity and neural plasticity

Evidence for specialized brain functions does not mean that only a specific area of the brain can perform a particular task. In cases of severe epilepsy in very young children, a full hemispherectomy, or removal of one half of the brain, is sometimes the only option. The most remarkable aspect is that when the surgical procedure is successful, not only are the seizures eliminated, but the child can function as well or almost as well as any other child. This is an example of a phenomenon well known to neurobiologists as "brain plasticity": the ability of the brain to recover the function of a damaged or removed region by assigning that function to an undamaged location. The language area of the brain, for example, is often considered to be fixed on the left side by genetics, but in truth it is not so fixed: if the left side of the brain is removed at an early age, the right side will quickly develop a language center and there will be little functional impairment.

See for instance Vining, Eileen P.G. (1997). Why would you remove half a brain? JAMA 278 (24): 2124F.

Several different models attempt to account for this phenomenon. In one view, plasticity is evidence that the brain is a domain-general information processor, able to deploy its neural resources freely in response to the problems that need to be solved. On the other hand, the definition of what constitutes a problem, and the specifically human ways of solving it - for instance by generating language capacities - suggest that functional specialization is a higher-level phenomenon than cortical localization, and that the two issues should not be confused. In this view, the precise location of functionally specialized capacities will vary from person to person according to local conditions during ontogeny (although there are likely to be default locations), but the cognitive primitives and typical inference rules will develop along systematic lines independently of variations in location.

Recent work suggests that specific genes may play a role in the uniquely human capacity for language.

Enard, Wolfgang et al. (2002). Molecular evolution of FOXP2, a gene involved in speech and language. Nature 418: 869-872.

2002-08-14 "First language gene discovered" (BBC Online presentation of results) (external)


Syntactic structures

For a discussion of the conditions under which the evolution of syntax would be favored, see Nowak et al. (2000). This is a computational analysis that leaves out considerations of how syntax is transmitted.
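The combinatorial logic that such analyses build on can be illustrated with a toy Python calculation. This is not a reproduction of Nowak et al.'s model; the event counts below are invented, and the comparison captures only the basic trade-off between memorizing holistic signals and composing syntactic ones.

# Toy illustration of the combinatorial logic behind computational
# analyses of syntax evolution (cf. Nowak et al. 2000). Not a
# reproduction of their model; all numbers here are invented.

def signals_to_learn(actors: int, actions: int, syntactic: bool) -> int:
    """Non-syntactic communication needs one holistic signal per
    actor-action event; syntactic communication composes messages
    from separate actor and action signals."""
    if syntactic:
        return actors + actions      # learn parts, compose events
    return actors * actions          # memorize every event whole

for n in (3, 5, 10, 30):
    holistic = signals_to_learn(n, n, syntactic=False)
    composed = signals_to_learn(n, n, syntactic=True)
    print(f"{n*n:4d} expressible events: "
          f"{holistic:4d} holistic vs {composed:3d} compositional signals")

With few relevant events the two strategies cost about the same (9 versus 6 signals); as the number of events grows, composition wins decisively (900 versus 60). Nowak et al. derive the precise threshold at which natural selection favors the syntactic strategy.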

Are elements of syntax part of normal ontogenetic development? It is somewhat unclear what the claim for an innate grammar amounts to, since language was part of the natural environment of human babies in our ancestral environment. According to the principle of efficient information storage, it is thus plausible that knowledge of grammar is simply stored in the child's environment--that is, in the minds of its relatives. What the child minimally requires is a set of capacities, most likely specialized and dedicated, for acquiring that knowledge. A somewhat stronger claim would be that a child will spontaneously structure symbols syntactically; the study below provides some evidence for this view.
 

Spontaneous sign systems created by deaf children in two cultures

Deaf children whose access to usable conventional linguistic input, signed or spoken, is severely limited nevertheless use gesture to communicate. These gestures resemble natural language in that they are structured at the level both of sentence and of word. Although the inclination to use gesture may be traceable to the fact that the deaf children's hearing parents, like all speakers, gesture as they talk, the children themselves are responsible for introducing language-like structure into their gestures. The authors have explored the robustness of this phenomenon by observing deaf children of hearing parents in two cultures, an American and a Chinese culture, that differ in their child-rearing practices and in the way gesture is used in relation to speech. The spontaneous sign systems developed in these cultures shared a number of structural similarities: patterned production and deletion of semantic elements in the surface structure of a sentence; patterned ordering of those elements within the sentence; and concatenation of propositions within a sentence. These striking similarities offer critical empirical input towards resolving the ongoing debate about the 'innateness' of language in human infants.

Goldin-Meadow, S. and C. Mylander (1998). Spontaneous sign systems created by deaf children in two cultures (Letter to Nature). Nature 391: 279.
 
 

Conceptual structures

Does language reflect the organization of the brain? An important corrective to the modular view of language is that grammar maps relations between conceptual structures through blending and conceptual integration. In this view, grammar is closely integrated into the pre-linguistic structure of the brain (see for instance Mark Turner's review of Terrence Deacon's The Symbolic Species). This view can cogently be argued to be ecologically more plausible than the view that there is a wholly distinct language module, a view espoused at times by Chomsky.

However, even a view of language as closely integrated with, and indeed performing the task of conceptual integration between, pre-linguistic structures remains in principle compatible with a specific proclivity to integrate information along certain lines. The mild version of the language instinct would be precisely this: linguistic structures emerge out of non-linguistic structures, blending and integrating information from different domains, but they do so according to a universal grammar.

Carruthers, Peter (2002). The cognitive functions of language. (BBS Online preprint in 2002)
 

Art
 

Auditory

Music

Abdulla, Sara (2000). Harking Back (flutes from 9000 BP China; a possible Neanderthal flute). Nature, 23 Feb 2000. (External text.)

Allott, Robin. The Pythagorean Perspective: The Arts and Sociobiology (external).

Benzon, William (2001). Beethoven's Anvil: Music in Mind and Culture. New York: Basic Books. Publisher's presentation.

Born to Sing? News article. 17 July 2000.

Critchley, Macdonald and R.A. Henson (eds.) (1977). Music and the Brain. London: Heinemann.

Deliege, Irene and John Sloboda (eds) (1997). Perception and cognition of music. Psychology Press/Erlbaum.

Economist, The (2000). The Biology of Music.

Gray, Patricia M., Bernie Krause, Jelle Atema, Roger Payne, Carol Krumhansl, and Luis Baptista (2001). The Music of Nature and the Nature of Music. Science 291 (January 5): 52-54. Full text (requires subscription). News report.

Hagen, Edward. Music As a Coalition-Signaling Adaptation (11/98)

Jackendoff, Ray (1987). Consciousness and the Computational Mind. Cambridge, MA: MIT Press. One of the chapters is on music.

Jourdain, Robert (1997). Music, the Brain, and Ecstasy: How Music Captures Our Imagination. New York: William Morrow. Review by Robert Zatorre.

Knobloch, Ferdinand (1995). The Interpersonal Meaning of Music and Ethology. ASCAP 6. 7. Full text and further bibliography.

Leutwyler, Kristin (2001). Exploring the Musical Brain. Scientific American, 22 January 2001. (Popular overview). Full text.

Schellenberg, E. Glenn and Sandra E. Trehub (1996). Natural musical intervals: Evidence from infant listeners. Psychological Science 7 (5): 272-277. Abstract and interview.

Schellenberg, E. Glenn and Sandra E. Trehub (1996). Children's discrimination of melodic intervals. Developmental Psychology 32 (6): 1039-1050. Abstract.

Schneider, Peter, M. Scherg, H. G. Dosch, H. J. Specht, A. Gutschalk and A. Rupp (2002). Morphology of Heschl's gyrus reflects enhanced activation in the auditory cortex of musicians. Nature Neuroscience 10.1038/nn871. Full text (subscription). BBC news report (external).

Todd, Neil (2000). Interviewed in New Scientist about rock music and the sacculus, an organ in the inner ear with a possibly vestigial acoustic sensitivity.

Leng, Xiaodan and Gordon L. Shaw (1991). Toward a Neural Theory of Higher Brain Function Using Music as a Window. Concepts in Neuroscience 2 (2): 229-258.
 

Dance

Dance and Music and the Evolution of Humans at the Library of Excerpts (external)
 

Cooperation

One of the central and persistent problems human beings faced in evolutionary history is that of cooperating with others. Evolutionary psychology suggests this has given rise to a complex set of cognitive adaptations, such as an intuitive understanding of social exchange (trade, exchange of favors), emotions and emotional signaling (see below) as well as mind-reading and empathy (see Conceptual Adaptations 1: Mind Reading), and various elements of coalitional psychology including mechanisms for establishing group identity (see below).

Game theory has informed work in this area; see game theory and social science under Bibliography and the special bibliography on evolution and sociology. Cooperative behavior may be tracked through reputation, or image scoring; see Wedekind and Milinski (2000).
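As a rough illustration of the image-scoring idea, here is a minimal Python simulation; the group size, payoff values, and the discriminating strategy are invented, and the sketch makes no attempt to reproduce Wedekind and Milinski's experimental design.

# Minimal sketch of image-scoring (indirect reciprocity). Payoffs,
# group size, and the strategy are invented for illustration only.
import random

N_PLAYERS, ROUNDS = 20, 200
COST, BENEFIT = 1, 4          # giving costs the donor 1, yields 4

score = [0] * N_PLAYERS       # public image score: +1 give, -1 refuse
payoff = [0] * N_PLAYERS

def discriminator_gives(recipient_score: int) -> bool:
    """Help only recipients whose image score is non-negative."""
    return recipient_score >= 0

random.seed(1)
for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N_PLAYERS), 2)
    if discriminator_gives(score[recipient]):
        payoff[donor] -= COST
        payoff[recipient] += BENEFIT
        score[donor] += 1     # generosity is publicly recorded
    else:
        score[donor] -= 1     # so is refusal

print(f"mean payoff: {sum(payoff) / N_PLAYERS:.2f}")
print(f"mean image score: {sum(score) / N_PLAYERS:.2f}")

The mechanism the experiments point to is visible even in this caricature: because giving raises one's public score, and discriminators give preferentially to high scorers, generosity is repaid through later donations from third parties rather than from the original recipient.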

 

Group identity

Henri Tajfel's work with the minimal group experiment showed that people will feel some loyalty to a group even if it was constituted just that morning and even if it is based on a meaningless shared trait, like preferring Klee to Kandinsky. Common projects heighten this effect. Howard Rachlin, in his article "Self and Self-Control," describes experiments indicating that humans bond not on the basis of genes, but on the basis of what he calls "functioning together" (p. 89). In other words, humans often act most altruistically not toward those who share their genes, but toward those with whom they have regularly labored for a common cause.

Allport, Gordon Willard (1954). The Nature of Prejudice. Cambridge, MA: Addison-Wesley.

Hagen, Edward. Music as a Coalition-Signaling Adaptation

Tajfel, Henri (1981). Human groups and social categories: studies in social psychology.  Cambridge : Cambridge University Press.

Rachlin, Howard (1997). Self and self-control. In Joan Gay Snodgrass, Robert L. Thompson, et al. (eds.). The Self across Psychology: Self-recognition, Self-awareness, and the Self Concept. Annals of the New York Academy of Sciences, vol. 818. New York, NY: New York Academy of Sciences. 85-97.

Robinson, W. Peter (ed.) (1996). Social groups and identities: developing the legacy of Henri Tajfel.  Boston: Butterworth-Heinemann.

Social Emotions

A large number of emotions play a role in regulating cooperative behavior. We can distinguish between three functions of emotional adaptations:

Emotions motivate behavior, spurring people to cooperate under certain conditions (such as reciprocity, resource availability, or relatedness) and not others; emotions at times function in social signaling, indicating a willingness or unwillingness to cooperate, or an intention to retaliate (as in conditional cooperation); and emotional adaptations appear to involve an ability to detect and interpret emotions, as well as to feel empathy (see Conceptual Adaptations 1: Reading Emotions).

Justice

See Mikula, Gerold, Klaus R. Scherer, and Ursula Athenstaedt (1998). The role of injustice in the elicitation of differential emotional reactions. Personality & Social Psychology Bulletin 24 (7): 769-783. Abstract.

Status

See Waldron, Deborah. Status perception deficiency (external pdf file). "Ralph Adolphs, Andrea Herberlein and myself are involved in an experiment looking at patients with lesions to the amygdala, and patients with lesions to areas of the ventral medial frontal cortex. This research though is in its earliest stages, with data still being collected. Results so far are interesting, but at this early stage are only that." (September 1999.)

In "Staying alive: Evolution, culture, and women's intra-sexual aggression," Anne Cambell argues that "females' tendency to place a high value on protecting their own life enhanced their reproductive success in the environment of evolutionary adaptation because infant survival depended more upon maternal (rather than paternal) care and defence." She concludes that "among females, disputes do not carry with them implications for status as they do among males, but are chiefly connected with the acquisition and defence of scarce resources." Full text (external BBS draft, no date).


Religion

Saler, Benson (1993/2000). Conceptualizing Religion.

Boyer, Pascal (2001). Religion Explained. Publisher's presentation.

Boyer, Pascal (1994). The Naturalness of Religious Ideas: A Cognitive Theory of Religion, Berkeley, CA: University of California Press.  See also his chapter in Hirschfeld & Gelman (1994).

Pyysiäinen, Ilkka and Veikko Anttonen (eds.) (2002). Current Approaches in the Cognitive Science of Religion. London: Continuum.

Lewis-Williams, David (2002). The Mind in the Cave.

Social Exchange

See the main bibliography under the sections Social reasoning and Economics.
 

Work

Judging from the frequency of men's positive attitudes toward hunting and fishing--on the face of it often dangerous, tiring, and monotonous activities--we take a liking to activities that were rewarding in the environment in which we evolved. If so, the resistance and boredom we often feel in work situations in civilized society are based on cost-benefit analyses of fitness enhancements proper to the EEA, though quite likely off the mark in our actual environment. In this sense, our hunter-gatherer ancestors did not work at all; it would have been as seriously maladaptive for them to dislike doing what they needed to do to survive as it would be for a cow to dislike grazing. It is therefore misleading to add up the total number of hours they hunted and foraged in order to compare it with the number of hours a modern human spends working in the fields, at the office, or on the assembly line. If we hold that pleasure and pain are adaptations that encourage optimal behavior in the EEA, hunting and foraging should be pleasurable. Leisure beyond a certain amount should be experienced as painful, and be optimally relieved by hunting and foraging. This pleasure should be productive, in the sense that actually catching or finding food should bring strong reinforcement, and failing to do so should be experienced as saddening.
 
 
 
 

© 1998 Francis F. Steen, Communication Studies, University of California, Los Angeles