Multimodality

Multimodality research is currently of the highest interest due to the rapidly growing proportion of cultural artefacts (multimedia artworks, films, websites, comics, computer games, presentations, etc.) that are inherently multimodal. A further challenge lies in the ever-growing semiotic complexity of these artefacts, which increasingly blur traditional distinctions and include interactive dimensions.

Multimodal texts and artefacts combine the use of various semiotic modes (sign systems) such as language, images, gesture, typography, graphics, icons, or sound, which in some cases are transmitted via different perceptual modes such as auditory, visual, or haptic perception.

The investigation of multimodality is a rapidly growing interdisciplinary research field spanning linguistics, semiotics, aesthetics, visual studies, cognitive psychology, and further disciplines. To keep up with current developments in artistic production and to analyse the aesthetic complexity of today’s cultures, these disciplines have to combine their theoretical expertise and develop new approaches, models, and analytical tools adequate for multimodal texts.

For multimodal research, it is crucial to understand how semiotic modes work together in multimodal texts. A more precise account is reached with a layered description of modes, in which structural, semantic, and stylistic properties are extracted and represented for each mode separately, as well as in interaction with (the same or other) layers of other modes. At the core of multimodality research lies the question of how the interpretation of a textual unit is influenced by its co-text: other textual units that are transmitted in different media and/or perceived via different perceptual modes. In recent research (Siefkes 2015), I have proposed a model for text and discourse representation that distinguishes three strata (or levels): (1) the stratum of form (in the sense of “organization of material”, which includes properties of structure or arrangement without considering possible meanings, as well as syntactic properties in some modes such as language); (2) the stratum of semantics (including discourse relations); (3) the stratum of style.
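
To make the layered format more concrete, the following is a minimal, purely illustrative sketch of how such a representation could be encoded as a data structure, with one description per semiotic mode and separate slots for the three strata. All class, attribute, and key names are hypothetical assumptions for illustration and are not the notation used in Siefkes (2015).

```python
from dataclasses import dataclass, field
from typing import Dict

# Purely illustrative sketch: a layered, three-strata description per mode.
# Names and example values are hypothetical, not the formalism of Siefkes (2015).

@dataclass
class StratumDescription:
    form: Dict[str, str] = field(default_factory=dict)       # structure/arrangement; syntax in some modes
    semantics: Dict[str, str] = field(default_factory=dict)  # content, incl. discourse relations
    style: Dict[str, str] = field(default_factory=dict)      # stylistic properties

@dataclass
class MultimodalUnit:
    unit_id: str
    # one layered description per semiotic mode (e.g. "language", "image", "sound")
    modes: Dict[str, StratumDescription] = field(default_factory=dict)

# Example: a film shot combining spoken language and image
shot = MultimodalUnit(
    unit_id="shot_42",
    modes={
        "language": StratumDescription(
            form={"syntax": "declarative clause"},
            semantics={"proposition": "character announces departure"},
            style={"register": "formal"},
        ),
        "image": StratumDescription(
            form={"composition": "close-up, centred"},
            semantics={"depicted": "the speaker's face"},
            style={"lighting": "low-key"},
        ),
    },
)

print(shot.modes["image"].style["lighting"])  # -> low-key
```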

On the basis of this format for text and discourse representation, which makes it possible to analyse formal (= expression) properties, semantic content, and stylistic properties, different types of interactions between modes can be defined and included in discourse representation formats that have to be specifically defined, building on existing discourse representation models such as Segmented Discourse Representation Theory (SDRT).
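
Continuing the illustrative sketch above, an interaction between modes could then be recorded as a typed link between strata of different modes within a discourse segment; the relation labels below are placeholder assumptions, not relations defined in SDRT or in the proposed model.

```python
from dataclasses import dataclass

# Hypothetical continuation of the sketch above: a typed intermodal relation
# linking a stratum of one mode to a stratum of another mode.
# Relation labels are illustrative placeholders only.

@dataclass
class IntermodalRelation:
    relation_type: str   # e.g. "elaboration", "contrast", "style_matching"
    source_mode: str     # e.g. "image"
    source_stratum: str  # "form", "semantics", or "style"
    target_mode: str     # e.g. "language"
    target_stratum: str

rel = IntermodalRelation(
    relation_type="elaboration",
    source_mode="image",
    source_stratum="semantics",
    target_mode="language",
    target_stratum="semantics",
)

print(f"{rel.source_mode}/{rel.source_stratum} -> "
      f"{rel.target_mode}/{rel.target_stratum}: {rel.relation_type}")
```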

Bibliography

– Baldry, Anthony & Thibault, Paul (2006), Multimodal Transcription and Text Analysis. London: Equinox.
– Bateman, John (2008), Multimodality and Genre. A Foundation for the Systematic Analysis of Multimodal Documents. London: Palgrave Macmillan.
– Bateman, John (2014), Text and Image. A Critical Introduction to the Visual-Verbal Divide. New York: Routledge.
– Bateman, John A. & Wildfeuer, Janina (2014), “A multimodal discourse theory of visual narrative.” Journal of Pragmatics 74, 180–208.
– Calvert, Gemma, Spence, Charles & Stein, Barry (eds.) (2004), The Handbook of Multisensory Processes. Cambridge, MA: MIT Press.
– Deppermann, Arnulf (ed.) (2010), Conversation Analytic Studies of Multimodal Interaction. Special issue, Journal of Pragmatics 46(1).
– Deppermann, Arnulf & Linke, Angelika (eds.) (2010), Sprache intermedial: Stimme und Schrift – Bild und Ton. Berlin/New York: de Gruyter.
– Forceville, Charles & Urios-Aparisi, Eduardo (eds.) (2009), Multimodal Metaphor. Berlin/New York: de Gruyter.
– Fricke, Ellen (2012), Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin/New York: de Gruyter.
– Fricke, Ellen (2013), “Towards a unified grammar of gesture and speech: A multimodal approach”, in: Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill & Sedinha Teßendorf (eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. Berlin: de Gruyter, vol. 1, 733–754.
– Hess-Lüttich, Ernest W.B. & Wenz, Karin (eds.) (2006), Stile des Intermedialen. Tübingen: Narr.
– Holly, Werner (2011), “Social Styles in Multimodal Texts: Discourse Analysis as Culture Study”, in: Claudio Baraldi, Andrea Borsari & Augusto Carli (eds.), Hybrids, Differences, Visions. Aurora: Davies Group, 191–206.
– Jäger, Ludwig (2010), “Intermedialität – Intramedialität – Transkriptivität. Überlegungen zu einigen Prinzipien der kulturellen Semiosis”, in: Deppermann/Linke 2010, 301–324.
– Jewitt, Carey (ed.) (2009), The Routledge Handbook of Multimodal Analysis. London/New York: Routledge.
– Kress, Gunther & van Leeuwen, Theo (1996), Reading Images. The Grammar of Visual Design. London: Routledge.
– Kress, Gunther & van Leeuwen, Theo (2001), Multimodal Discourse. London: Arnold.
– O’Halloran, Kay (ed.) (2004), Multimodal Discourse Analysis. Systemic Functional Perspectives. London: Continuum.
– Siefkes, Martin (2015), “How semiotic modes work together in multimodal texts: Defining and representing intermodal relations”. 10plus1 – Living Linguistics 1, 113–131.
– Siefkes, Martin & Arielli, Emanuele (2015), “An Experimental Approach to Multimodality: How Musical and Architectural Styles Interact in Aesthetic Perception”, in: J. Wildfeuer (ed.), Building Bridges for Multimodal Research. International Perspectives on Theories and Practices of Multimodal Analysis. Bern/New York: Lang.
– Stöckl, Hartmut (2004), “In between Modes. Language and Image in Printed Media”, in: Eija Ventola, Cassily Charles & Martin Kaltenbacher (eds.), Perspectives on Multimodality. Amsterdam: Benjamins, 9–30.
– Wildfeuer, Janina (2014), Film Discourse Interpretation. Towards a New Paradigm for Multimodal Film Analysis. London/New York: Routledge.