Nothing means anything all by itself. There is always a supplement.
Materialism, in all its many variants, is about meaning. That's because it expresses a long-standing concern humans have had about form and content. Something of a chicken-or-egg problem, but that framing leads toward questions of causality, which are not my concern here. I'm interested in something more general: a pattern of thinking, a schema, that can be seen at work across many disciplines of study, even where there is no obvious connection between them.
This form/content (F/C) debate emerges across disciplines, a ghost discourse dressed in different terminologies and shaped by siloed histories. For example, the study of meaning in linguistics is largely captured by semantics (or semiology). It is a field of study that applies as much to understanding how human beings create and consume "content" as it does to emerging artificial intelligence (AI) programs that can make meaningful, independent decisions. Very often semantics is concerned with the utterance itself, sometimes as a "speech act" and other times as an expression of an inherent grammatical competency. Disciplines require boundaries, so expanding the study of meaning within linguistics to include environment, gesture, culture, or other influences could blur the focus on "pure" language, bleeding into something like anthropology. Yet there are linguists who argue that pragmatics—those extra-linguistic elements that speakers use to convey meaning—is not subsequent or subordinate to semantics. Rather, pragmatics (form) must be considered alongside semantics (content) when evaluating what are called "truth-conditional" statements.
The F/C argument is not new. Aristotle wrote on it, as did the Mahayana Buddhist scholars centuries after him. Today it is best exemplified by AI, both in its actual applications and in the theory driving it. AI expresses a very literal materialism and, as such, is a project that aims to end the F/C debate once and for all: content will forever be the servant of form.
We can break down F/C into different subcategories—these are only preliminary, not exhaustive:
- form/content: biology (F/CB) – philosophy, religious thought, medicine, political thought, humanities, social sciences
- form/content: technology (F/CT) – cognitive science, artificial intelligence, information technology
- form/content: space (F/CS) – architecture, geography, urban planning, design
- form/content: authorship (F/CA) – literature, advertising, law, copyright
Below is an example of how this model might be used: an F/CT analysis of how a 2020 ad from Google presents the relationship between form and content when it comes to trusting our various media as tools for remembering friends and family who have died.
F/CT analysis: Google Home Assistant Ad (2020)
The COVID-19 pandemic is, more likely than not, the most technologically mediated event in human history. Stuck at home and bereft of social interaction, we’ve learned that much more of everyday life can be reduced to the dimensions of a screen than we thought.
Even if our trust in the content we receive through digital media has all but collapsed into cynicism, our more basic trust in the technology that brings it to us has only grown deeper. That is, we doubt the truth-conditional statements of content (everything from "that show isn't believable" to "don't believe the fake news"), but we accept the basic trustworthiness of the technology (the form) conveying that content to us. This isn't a matter of extending conspiracy thinking to technology in Neo-Luddite fashion; it's about understanding that the semantic experience of digital content is inseparable from its medium.
Marshall McLuhan is probably the most well-known champion of this view, with his slogan, "The medium is the message."
The medium is the message because it is the medium that shapes and controls the scale and form of human association and action. The content or uses of such media are as diverse as they are ineffectual in shaping the form of human associations. Indeed, it is only too typical that the content of any medium blinds us to the character of the medium.
Marshall McLuhan, Understanding Media
The pandemic exposes something about how our trust in the technology we live with is formed, and it has fast-tracked that formation. Normally, such trust takes time to develop across a wide spectrum of consumers, requiring considerable outreach (advertising and education) on the part of companies and states.
An ad that Google ran during the last Super Bowl before COVID-19 shows how trust in a new form of technology is achieved not by a direct plea or an exposition of the form itself, but through an otherwise benign narrative. This sleight of hand obscures the industrial and economic conditions that may not support, and may even contradict, the intimate, personal ways in which trust is usually formed.
George is elderly and his memory is deteriorating. His daughter is worried that this will quickly lead to his inability to live independently. Assisted-living homes are expensive, especially ones that are nice. She searches the phrase “how not to forget” in Google and reads a WikiHow article that says that repeatedly going over the details of old memories can counter the onset of old-age dementia. She creates a Google account for her dad and uploads all his old photo albums, home videos, and Super 8 films after converting them to digital formats.
Next, the family all pitches in and buys him a Chromebook and a smartphone. She gets a few of his tech-savvy grandchildren—the ones he liked to chide with his joke that “smartphones are for dumb people”—to spend a couple of hours with him each week to teach him how to use the new technology. After a while, he gets the hang of it.
All of this is implied. What we actually see in the ad is what the old man sees as he interacts with the company's AI interface, a manifestation of Google's vast store of data as a personable virtual assistant.
“Hey Google, show me photos of me and Loretta,” he says. Each photo or video lets him relive some of the happiest memories of his life. All the while, he responds to what he sees and Google takes note. “Remember that Loretta hated my mustache,” he tells Google. “OK, I’ll remember that,” the algorithm responds. “Remember that Loretta loved going to Alaska and scallops.” “OK, I’ll remember that.”
As the sentimental music crests, we see a cascade of memories that the old man has logged over the course of months: details about Loretta and things she said. The summit of affect comes with Loretta's last words to him before she died. With her plea that he "get out of the dang house" when she's gone still in our minds, we hear him take his dog out for a walk (the sounds of which move away from us, possibly because we experience only what the Google device does). Then the product benefit message: "A little help with the little things."
Although much of what I suggest is conjecture, what we get to know of the old man is exactly what Google gets to know. We experience him as the machine does. This is an effective narrative trick because it helps us to empathize with the AI and to understand its purpose and intentions. It helps us to evaluate it and to come to trust it. After all, the more we can appreciate someone's perspective, the better able we are to trust them as friends.
Things that make us cry or laugh somehow help to win our trust. CNN reported that many people who saw Google's ad ended up crying by the end of it, a "tear-jerker." But, knowing our love for extra-diegetic twists and the sense of an imminent realism lurking beyond the edges of narrative, Google pulls on our heartstrings even further: Lorraine Twohill, Google's Chief Marketing Officer, stated in a release that "the voice you hear throughout 'Loretta' is the grandfather of a Googler, whose story we drew from to create the ad. At 85, to an audience of millions, he'll be making his film debut. We couldn't be happier for him."
The release is a publicity strategy that gives the impression that Google is pulling back the curtain for us, the tearful audience of George’s drama. The ad reflects Google’s honest goal of building “products that help people in their daily lives, in both big and small ways. Sometimes that’s finding a location, sometimes it’s playing a favorite movie, and sometimes it’s using the Google Assistant to remember meaningful details.”
It goes without saying that it is in Google's interest that we trust its product. For that, we need to believe that the technology will keep George from losing his mind, allowing him to keep whatever remains of his happiest memories of Loretta for as long as he can. George trusts Google as part of an intimate relationship, one in which he shares memories that we have long reserved only for family or close friends. Experiencing George as the Google virtual assistant does achieves two important objectives: it teaches us how machine learning works, and it puts us in the position of the AI, which encourages us to understand it as a subject interacting with humans, not an object of human labor.
The already blurry borders between form and content become only more obscure with Google Assistant, making it hard to detach the content—in this case, our own memories and sense of trust—from the wires and technology acting as a silent curator. We could imagine that the binary has collapsed in AI, the form becoming entirely the expression of content we ourselves create. There is a very real sense of this that many of us feel when AI applications like Google Assistant accommodate our most inner, subjective experience so closely—like when you realize millions of other people have searched the exact same (embarrassing) question as you. But far from confirming that tech has finally accessed our deepest depths, reflecting the infinitesimal detail of our unique selves, it achieves the very opposite: what we imagine to be at stake—some unfathomable inner individual essence—was never there in the first place.