In the social semiotic approach to multimodality, a metafunctional hypothesis is posited. This hypothesis states that all semiotic modes serve three metafunctions in order to function as a full system of communication (cf. Kress and van Leeuwen 1996: 40). These metafunctions organize the various elements and systems that constitute a mode into three distinct domains of meaning, i.e. the ideational, the interpersonal and the textual metafunction.
The ideational metafunction organizes the resources we use when we construe our experience of both the inner (mental) and the external (social and physical) world. The ideational metafunction is
…concerned with the content of language [or any other mode], its function as a means of the expression of our experience, both of the external world and of the inner world of our own consciousness – together with what is perhaps a separate sub-component expressing certain basic logical relations (Halliday 1973: 66).
It is possible to distinguish between two sub-components of the ideational metafunction (cf. the quotation above), i.e. the experiential and the logical metafunction. The experiential metafunction construes meaning as distinct yet related parts of a whole (typically labelled ‘constituency’; cf. Halliday 1979: 63). An experiential configuration of meaning relates a process to one or more participants and frames this relation circumstantially; that is, an experiential configuration signifies an ‘event’. The logical metafunction is concerned with the connection between events and construes meaning in a more abstract way than the experiential metafunction. Where a direct reference to things and states of affairs in ‘real life’ is at play in the experiential metafunction, logical relations are “independent of and make no reference to things” (Halliday 1979: 73). The logical metafunction is central to language but is more difficult to describe in other modes: only language has a clearly delineated, multivariate structure (the clause) as its primary means of realizing events, and the logical metafunction is realized by those items which connect and combine events (conjunctions within and between clauses). It is therefore problematic to describe the logical metafunction in modes that do not operate with clauses.
The interpersonal metafunction concerns the interaction between the producer and the perceiver (of a text). It organizes the resources we use when we take on different, complementary dialogical roles in an exchange of meaning. In other words, it functions as
…the mediator of role, including all that may be understood by the expression of our own personalities and personal feelings on the one hand, and forms of interaction and social interplay with other participants in the communication situation on the other hand (Halliday 1973: 66).
Interpersonal meanings are not realized as distinctive parts making up a whole (as is the case for experiential meanings); instead, interpersonal meaning is “distributed like a prosody throughout a continuous stretch of discourse” (Halliday 1979: 66). The interpersonal metafunction is also concerned with expressions of modality, i.e. the modal status of the represented ‘goings-on’ in a text.
The third metafunction, the textual, organizes the resources we use to create cohesive and context-sensitive texts when we choose to exchange a certain experiential meaning. The textual metafunction
…is the component that enables the speaker to organize what he is saying in such a way that it makes sense in the context and fulfils its function as a message (Halliday 1973: 66).
Textual meaning is not realized by constituency or by prosodic structure:
What the textual component does is to express the particular semantic status of elements in the discourse by assigning them to the boundaries (…); this marks off units of the message as extending from one peak of prominence to the next (Halliday 1979: 69).
The hypothesis about the three (or four) metafunctions was originally suggested by Halliday when he worked on the description of Cantonese, and he supported it theoretically with ideas from Whorf, Malinowski and Mathesius. Whorf’s work inspired Halliday to develop the ideational metafunction, while the interpersonal metafunction was inspired by Malinowski’s work and the textual metafunction by the work of Mathesius. Halliday pays tribute to these scholars as follows:
For Malinowski, language was a means of action; and since symbols cannot act on things, this meant as means of interaction – acting on other people. Language needs not (and often did not) match the reality; but since it derived its meaning potential from use, it typically worked. For Whorf, on the other hand, language was a means of thought. It provided a model of reality; but when the two did not match, since experience was interpreted within the limitations of this model, it could be disastrous in action […]. Mathesius showed how language varied to suit the context. Each sentence of the text was organized by the speaker so as to convey the message he wanted at that juncture, and the total effect was what we recognize as discourse. Their work provides the foundation for a systemic functional semantics (Halliday 1984: 311).
At the beginning of this entry, it was stated, in line with Kress and van Leeuwen’s original line of thought, that all semiotic modes serve all metafunctions; i.e. all metafunctions have to be present in order for a particular mode to be constituted. This is indeed debatable and ties in with the discussion of what a mode is (cf. the entry on “mode”). In a more recent publication, van Leeuwen (2015) discusses the principle of multi-metafunctionality in and across modes:
The use of metafunctions in thinking about other semiotic modes has been an incredibly important step, and an excellent heuristic. And it has still further to go, particularly in relation to the idea of communicative acts, or multimodal acts, whatever you wish to call them. But when I wrote Speech, Music, Sound (1999) I commented on the applicability of the metafunctions, because it seemed to me that in sound and music the ideational often has to piggy-back on the interpersonal, because sound is so fundamentally interactional. Then I thought, but what about the visual? Aren’t the phenomena we interpreted as interpersonal in images not always representations of interpersonal relations rather than that they are directly interpersonal? Doesn’t the interpersonal in images have to piggy-back on the ideational? Close distance to the viewer, for instance, is never actual close distance, only a representation of it. Again, in studying PowerPoint I found that the written language on the slides is often entirely devoid of any interpersonal things. There are just nominal groups. No mood structure. The metafunctions are distributed across the modes in the multimodal mix, and not every one of the modes in that mix has all three. So there are issues to discuss. What kind of work are the different semiotic modes given to do? […] We need to be alert to the way in which the metafunctional work is divided among the modes in a multimodal text or communicative event, and it can also be the case that certain uses of language are not fully trifunctional, e.g. the use of language on many PowerPoint slides, because there simply are no interpersonal signifiers. It may also be that either the ideational or the interpersonal is, at a given point in time, more developed in one mode than in another, or used less in one mode than in another – that is what I am beginning to think.
You could say that in multimodal communication we always need the three metafunctions, so that all three are present in any act of multimodal communication, but which metafunction is mostly or solely carried by which kind of mode in the mix may differ. And when looking at modes separately, you may find that some develop the ideational metafunction more than others, and others the interpersonal. Multimodality requires the metafunctions to be rethought and not taken for granted. (Andersen et al. 2015: 106-107)
Citing this entry
Andersen, Thomas Hæstbæk. 2016. “Metafunctions.” In Key Terms in Multimodality: Definitions, Issues, Discussions, edited by Nina Nørgaard. www.sdu.dk/multimodalkeyterms. Retrieved dd.mm.yyyy.
References
Andersen, Thomas Hestbæk, Morten Boeriis, Eva Maagerø and Elise Seip Tønnesen (2015). Social Semiotics. Key Figures, New Directions. London: Routledge.
Halliday, M.A.K. (1973). Explorations in the Functions of Language. London: Arnold.
Halliday, M.A.K. (1979). “Modes of meaning and modes of expression: types of grammatical structure and their determination by different semantic functions”. In Allerton, D.J., Edward Carney & David Holdcroft (eds), Function and Context in Linguistic Analysis – A Festschrift for William Haas. Cambridge: Cambridge University Press.
Halliday, M.A.K. (1984). “On the Ineffability of Grammatical Categories.” In M.A.K. Halliday (2002), On Grammar. Collected Works of M.A.K. Halliday, Volume 1. London and New York: Continuum.
Kress, Gunther and van Leeuwen, Theo (1996). Reading Images. The Grammar of Visual Design. London: Routledge.