Research on audiovisual speech perception has demonstrated that the domain of speech perception is not limited to the auditory sensory modality. The visual correlates of speech can be perceived accurately by adults (Berger, 1972; Bernstein, Demorest, & Tucker, 2000; Campbell & Dodd, 1980; Jeffers, 1971; Walden, Prosek, Montgomery, Scherr, & Jones, 1977) and by children (Erber, 1972, 1974).

Furthermore, auditory and visual stimuli can combine to elicit illusory perceptions. In a classic demonstration of this effect, McGurk and MacDonald (MacDonald & McGurk, 1978; McGurk & MacDonald, 1976) combined the auditory form of a person saying /baba/ with the visual form of the same person saying /gaga/. When asked to identify the multimodal stimulus display, 98% of subjects responded /dada/, indicating that the different sources of information from the two sensory modalities were integrated at some point during the process of speech perception. This effect has been replicated many times and under many circumstances (see Massaro, 1998).

More practically, visual information about speech has also been shown to enhance auditory speech perception in noise (Erber, 1969; Middleweerd & Plomp, 1987). In their pioneering study, Sumby and Pollack (1954) found that the addition of visual information about articulation to an auditory signal can improve speech intelligibility performance in noise; these gains were equal to a +15-dB gain in signal-to-noise (S/N) ratio under auditory-alone conditions (MacLeod & Summerfield, 1987; Summerfield, 1987). They found that absolute gains in speech perception accuracy were most dramatic at S/N ratios where auditory-alone performance was low. However, when the gains were expressed relative to the possible improvement over auditory-alone performance, the contribution of visual information to speech perception accuracy remained constant over the entire range of S/N ratios tested (from −30 dB to 0 dB). In addition, Reisberg, McLean, and Goldberg (1987) showed that concurrently presented visual information facilitated the repetition of foreign-accented speech and semantically complex sentences. These findings demonstrate that visual information about speech is useful and informative, and is not simply compensatory in situations where auditory information is insufficient to support perception.

The discovery of these "audiovisual speech phenomena" has raised several general theoretical questions about the domain of speech perception (Bernstein et al., 2000). Clearly, any comprehensive theory of speech perception must be able to explain the utility and importance of visual speech information (Summerfield, 1987). In an effort to construct such a theory, investigators have compiled a large and growing body of work concerning the nature of the phonetic information contained in the visual signal (Brancazio, Miller, & Paré, 1999; Green & Kuhl, 1991; Green & Miller, 1985; Jordan & Bevan, 1997; Jordan, McCotter, & Thomas, 2000; Kanzaki & Campbell, 1999). Frequently, these studies use susceptibility to the McGurk effect or degree of auditory enhancement (as in Sumby & Pollack, 1954) as their dependent variable. Based on these results, investigators have drawn several general conclusions about the nature of visual speech information and how it combines with auditory speech information during the process of speech perception. Several models of audiovisual speech perception, however, specifically incorporate assumptions about the independence of information arriving from disparate modalities (Braida, 1991; Massaro, 1998). The job of the perceptual system, on these views, is therefore to assemble the independent signals into a coherent, multimodal perceptual object. For example, one account of audiovisual integration, the Fuzzy Logical Model of Perception (FLMP; Massaro, 1998; Massaro & Cohen, 1995), relies on a priori assumptions that the perceptual system has tacit knowledge of the relations that exist across sensory modalities, by virtue of audiovisual representations of speech sounds stored in memory. Speech information is assessed by determining the presence or absence of visual or auditory features independently.
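The idea of expressing visual gain "relative to possible improvement over auditory-alone performance" can be made concrete with a short sketch. This is a minimal illustration, not the authors' own analysis code: it assumes the common normalization (audiovisual gain divided by the room left for improvement), and the proportion-correct scores used below are hypothetical, chosen only to show how very different absolute gains can yield the same relative visual contribution.

```python
def relative_gain(a_alone, av):
    """Visual contribution expressed relative to the possible improvement
    over auditory-alone accuracy. Inputs are proportions correct in [0, 1].
    (Illustrative normalization; values below are hypothetical.)"""
    if a_alone >= 1.0:
        return 0.0  # no room left to improve
    return (av - a_alone) / (1.0 - a_alone)

# Hypothetical scores: a large absolute gain at a poor S/N ratio...
low_snr = relative_gain(a_alone=0.10, av=0.55)   # absolute gain = 0.45
# ...and a much smaller absolute gain near 0 dB S/N...
high_snr = relative_gain(a_alone=0.80, av=0.90)  # absolute gain = 0.10
# ...work out to the same relative contribution of vision (0.5 in both cases).
```

Under this normalization, a constant relative contribution across S/N ratios is exactly the pattern Sumby and Pollack reported, even though the absolute audiovisual gains shrink as auditory-alone performance rises.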
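The FLMP's independence assumption can likewise be sketched numerically. The snippet below is a simplified illustration of the model's standard two-alternative prediction, in which independently evaluated auditory and visual feature supports are multiplied and normalized against the competing alternative; the support values are hypothetical, and the function name is ours, not Massaro's.

```python
def flmp_response_prob(a, v):
    """Simplified two-alternative FLMP prediction: `a` and `v` are
    independently assessed auditory and visual supports (in [0, 1]) for
    one response alternative; the competitor gets the complementary
    support. Supports are multiplied, then normalized."""
    return (a * v) / (a * v + (1.0 - a) * (1.0 - v))

# Hypothetical supports: an ambiguous auditory token (0.6) paired with
# strong visual support (0.9) yields a confident multimodal percept.
p = flmp_response_prob(a=0.6, v=0.9)
```

The multiplicative combination is what gives the model its characteristic behavior: a weakly informative signal in one modality is dominated by a strongly informative signal in the other, while two ambiguous signals (both 0.5) leave the percept ambiguous.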