Olia Lialina: Once Again, The Doorknob


OLIA LIALINA

Abstract

Based on the author’s keynote lecture at the 2018 ‘Rethinking Affordance’ symposium (Stuttgart, Germany), this essay offers a comprehensive survey of the tensions between J.J. Gibson’s and Don Norman’s perspectives on the concept of affordance, and formulates an incisive critique of how Norman reconfigured Gibson’s initial theory. The essay’s key arguments are triangulated in a critical dialogue between design practices, affordance theory, and design pedagogy. Drawing on her own practice as a pioneering net artist and digital folklore researcher, the author moves from early internet design practices through human-computer interaction and user experience design towards a speculative consideration of the affordances of human-robot interaction.

Keywords

AI, Affordance, Interface Design, UX

  

Once Again, The Doorknob: Affordance, Forgiveness, and Ambiguity in Human-Computer Interaction and Human-Robot Interaction

OLIA LIALINA

Merz Akademie, Stuttgart, Germany

 

Introduction

This essay aims to rethink the concept of affordance through a triangulated analysis of correspondence with design practitioners, critical re-readings of canonical texts, and reflexive engagement with my own creative and pedagogical practices. As both a net artist and an instructor in the field of digital design, I strive to reflect critically on the medium that I work with, in part by way of exploring and showing its underlying properties. Furthermore, as a web archivist and digital folklore researcher, I am also interested in examining how users deal with the worlds that they are thrown into by designers. These areas of research and practice rely and build upon the core tenets of human-computer interaction (HCI) and interface design – both of which provide the conceptual frameworks within which the term ‘affordance’ is now embedded, as well as the contexts in relation to which it is primarily discussed and interpreted. To rethink affordance, then, it is necessary to think critically about interface design and the contemporary status of human-computer interaction (or, as will be discussed below – human-robot interaction).

 

Interface Design

In the entry on the concept of the interface in Software Studies: A Lexicon, M. Fuller and F. Cramer define interfaces as links that connect “software and hardware to each other and to their human users or other sources of data.” After defining five types of interfaces, the authors note that the fifth, the “user interface,” i.e., the “symbolic handles” that make software accessible to users, is “often mistaken in media studies for ‘interface’ as a whole” (Fuller, 2008: 149). The following text is no exception. It brackets software-to-software interfaces, hardware-to-software interfaces, as well as other types of interfaces that belong to engineering and computer science, and deliberately discusses only the surfaces, the clues and “links” provided to the human user by software designers.

To say that the design of user interfaces powerfully influences our daily lives is both a commonplace observation and a serious understatement. User interfaces influence users’ understanding of a multitude of processes, and help shape their relations with companies that provide digital services. From this perspective, interfaces define the roles computer users get to play in computer culture.

As a field of practice, interface design is effectively devoted to decision-making – or rather, to the facilitation of decision-making processes. Decisions are often made gently and silently. Often, they are made with good intentions, and more often still, with no intention at all. The key point is that decisions are made – just like metaphors are chosen, idioms learned, and affordances introduced. The banality, or ‘common-sense’ orientation, of this process in no way reflects the gravity of the interface’s effects. From this perspective, to think of the interface, particularly in relation to the concept of ‘affordance,’ means to reflect both on the ideological stakes of the design choices underpinning decision-making processes and on the decision-making practices that they encourage users to undertake. Such a reflection must also include the question of what exactly professional interface designers study in order to be able (and to be allowed) to make these choices. In other words: what should students who will become interface designers (or “front end developers,” or “UX designers” – there are many different terms and each of them could be a subject of investigation) be taught?

From a pedagogical standpoint, there are a number of important paradigms that can be established right away, in an effort to foreground (rather than obscure) the ideological constitution and implications of the interface: Students studying interface design, front-end development, user experience (UX), or those seeking opportunities to reflect critically on these fields, should not begin by making an ‘improved’ prototype of an interface that already exists. Nor should they be guided towards ‘mastering’ design functions (such as, for example, drop shadows or rounded corners). Perhaps a less intuitive alternative approach should be followed, but what might this be? Should they begin the work of designing interfaces by studying philosophy, cybernetics, Marxism, dramaturgy and the arts more generally, and only afterwards set out to create the first button or begin to complete any similarly rudimentary interface design tasks?

As a workable compromise, interface design students might be introduced to key texts that reveal the power that user interface designers have. It is critical that they come to understand that there is no objective reality or reasoning, no nature of things, no laws, no commandments that underpin this field. There is only this: decisions that were and will be made, consciously or unconsciously, and the critical implications of wielding the power to structure these decision-making processes. This sentiment is advanced by Jay Bolter and Diana Gromala in Windows and Mirrors (2003), a now canonical text in the field, when they state that “[i]t is important for designers and builders of computer applications to understand the history of transparency, so that they can understand that they have a choice” (Bolter and Gromala, 2003: 35). The text is relatively well-known in the field of media theory as one of its authors coined the concept of remediation (Bolter and Grusin, 2000); however, it is largely ignored in interface design. This is an unfortunate example of how a text that usefully questions mainstream practices of interface design is acknowledged in theoretical, reflective discourse, but disregarded in more practice-based contexts, which continue to rely on the postulate that the best interfaces are intuitive and transparent, to the point where users might assume no interface exists at all.

While artists working with digital technologies are more likely to choose reflexivity over transparency in an effort to re-think, re-imagine, and problematize the workings of interfaces, designers are traditionally less likely to do so. When the artist Johannes Osterhoff – who identifies as an “interface artist,” and who is known for witty, long-term performances including Google, iPhone live, and Dear Jeff Bezos (Osterhoff, 2001; 2012; 2013) – was invited to teach a university course on basic interface design, he chose to name the course after the book, Windows and Mirrors. In his teaching, he guided students through the creation of projects that focused on looking at interfaces, reflecting on metaphors and idioms, and, ultimately, rethinking affordances. Soon after, Johannes took on the position of Senior UX Designer at SAP, one of the world’s biggest enterprise software corporations, and I took over the course from him a few years ago. Approximately a decade on, in beginning a critical conversation about interface design, one might still start with some of the essays in Brenda Laurel’s perennially useful book, The Art of Human-Computer Interface Design (1990). Published approximately five years after graphical user interfaces had begun to be popularized, the book reflects on some of the issues and problems that arose during this process. It contains essays by practitioners, many of whom, almost three decades after the book’s initial publication, have either turned into pop stars of the electronic age or have by now been forgotten (as well as some who have recently been rediscovered). A particularly pertinent text in this regard is “Why Interfaces Don’t Work” by Don Norman (1990). The text contains numerous statements that are repeatedly quoted, referenced and internalized by generation after generation of interface designers. Norman’s most cited claims include:

“The problem with the interface is that there is an interface” (Norman, 1990: 217).

“Computers exist to make life easier for the user” (ibid.).

“The designer should always aim to make the task dominate, while making the tools invisible” (ibid.).

And, “The computer of the future should be invisible” (218).

While these particular points are not typographically foregrounded or emphasized by the author himself, they have, nevertheless, become a kind of manifesto and mainstream paradigm for thinking about computers, human-computer interaction and, by extension, about the affordances of the technologies under consideration. As each of these statements suggests, in sentence after sentence, metaphor after metaphor, Norman argues that users of computers are not interested in computers themselves; what they desire, he claims, is to spend the least possible amount of time with a computer as such. As a theoretician – and, more importantly, as a designer working for Apple – Norman was thus pushing for the development of invisible or ‘transparent’ interfaces. In fact, it is through his work that the term “transparent” started to become synonymous with the terms “invisible” and “simple” in interface design circles. Sherry Turkle sums up this swift development in the 2004 introduction to her 1984 book, The Second Self:

“In only a few years the ‘Macintosh meaning’ of the word Transparency had become a new lingua franca. By the mid-1990s, when people said that something was transparent, they meant that they could immediately make it work, not that they knew how it worked” (Turkle, 2004: 7).

The idea that users should not even notice the presence of an interface had thus become widely accepted, and generally perceived as a blessing. Jef Raskin, initiator of the Macintosh project, and author of many thoughtful texts on the subject, writes at the outset of The Humane Interface (2000): “Users do not care what is inside the box, as long as the box does what they need done. […] What users want is convenience and results” (8). In practice, however, this perspective is contradicted by the work of many media artists, discussed, for example, in the aforementioned Windows and Mirrors, and likewise by many websites created by everyday users in the early 1990s. In fact, such websites may offer the best arguments to counter the assumption that users do not want to think about interfaces. Early DIY web design shows, very much against the core assumptions formulated by Norman, that users were constantly busy envisioning and developing interfaces that were not only visible, but even foregrounded. Many examples of such sites are collected in my One Terabyte of Kilobyte Age archive (Figures 1 and 2), and show that users indeed often work actively against the idealized invisibility and transparency of interfaces.

Figure 1. From One Terabyte of Kilobyte Age (2009, ongoing), Olia Lialina and Dragan Espenschied

In order to support his intention of removing the interface from even the peripheral view of the user, Norman, quoting himself from The Psychology of Everyday Things (1988), lifted the well-known doorknob metaphor from industrial design and imported it into the world of HCI: “A door has an interface – the doorknob and other hardware – but we should not have to think of ourselves as using the interface to the door: we simply think about ourselves as going through the doorway, or closing or opening the door” (Norman, 1990: 218). There is probably no other mantra of interface design that has been quoted more often than this statement. Given the preceding discussion of interface design in this article, does it appear appropriate that Norman’s writing is almost universally assigned as core reading for budding interface design students? Perhaps it is, if one were to consider the sentence following the passage just quoted: “The computer really is special: it is not just another mechanical device” (ibid.). Here, Norman momentarily slips and acknowledges the computer as a complex, difficult system. But he quickly catches himself, and immediately following this statement, he reasserts his claim that the computer’s purpose is primarily to simplify lives.

Figure 2. From One Terabyte of Kilobyte Age (2009, ongoing), Olia Lialina and Dragan Espenschied

In contrast to the trajectory Norman seeks to prescribe, his tangential observation that a computer is “not just another mechanical device” points to what is perhaps the most important idea students of interface design should take to heart: the complexity and beauty of general purpose computers. The purpose of such a device is not to simplify life (although this may sometimes be an effect of its many uses). Rather, one could think of the computer’s potential purpose as enabling a kind of human-computer symbiosis. When writing the programmatic “Man-Computer Symbiosis” (1960), J.C.R. Licklider appropriately quoted the French philosopher Henri Poincaré’s proclamation that “the question is not what is the answer, the question is what is the question” (75). In doing so, he indicated that if computers were to be considered collaborators or colleagues, they should also be involved in the process of formulating questions, rather than simply being put to task answering them. Similarly complex purposes of the computer have been formulated, for example that of bootstrapping (as developed by Engelbart),[1] and that of ‘realising opportunities’,[2] as Vilém Flusser put it in Digitaler Schein (1997: 213) – incidentally in the same year that Norman’s text was published. All of these observations certainly point to significantly more complex affordances of computer technology than simply that of “making life easier.”

Not only is Norman’s simplification and erasure of the computer interface at odds with critical approaches adopted by other prominent theorists of the time, but one can also sense that Norman’s contemporaries were not particularly excited about his treatment of the doorknob. In a short introductory article, “What is Interface,” Brenda Laurel diplomatically notes that, in fact, doors and doorknobs involve significant complexity with regard to issues of control and power; indeed, they necessitate difficult determinations of “who is doing what to whom” (1990: xii). “An interface is a contact surface. It reflects the physical properties of the interactors, the functions to be performed, and the balance of power and control,” continues Laurel (ibid.).

Similarly, when Bruno Latour published “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts” (1992), the essay’s reference list suggests that he was well acquainted with Norman’s writing. The text contains a highly pertinent section entitled “Description of the Door,” which canonizes the door as a “miracle of technology” that “maintains the wall hole in a reversible state.” Word by word, Latour’s analysis of a note pinned to a door (“The groom is on Strike, For God’s Sake, Keep the door Closed”) and his elaborate remarks on every mechanical detail – knobs, hinges, grooms – fully dismantle Norman’s attempt to portray the doorknob as something simple, obvious, and intuitive.

“Why Interfaces Don’t Work” does not mention the term affordance, but the doorknob symbolizes the term very well, and has accompanied the concept across most design manuals. What is important to emphasize is that it was Don Norman who first adapted ‘affordance,’ originally coined by ecological psychologist J.J. Gibson, for the world of human-computer interaction. Victor Kaptelinin provides a good summary of this topic in his entry on affordances in the 2nd edition of The Encyclopedia of Human-Computer Interaction, a highly recommended resource. Here, affordance is “[…] considered a fundamental concept in HCI research and described as a basic design principle in HCI and interaction design” (Kaptelinin, 2018, author’s emphasis). “For designers of interactive technologies the concept signified the promise of exploiting the power of perception in order to make everyday things more intuitive and, in general, more usable.”

Significantly, the entry pertains to Norman’s figuration of affordance, not Gibson’s. Within the fields of HCI and interface design it is Norman’s reconfiguration of ‘affordance’ that seems to have become the assumed source of the concept itself. A widely quoted table found in Joanna McGrenere and Wayne Ho’s “Affordances: Clarifying and Evolving a Concept” demonstrates the key differences between the two theorists’ conceptualisation of the term, and summarizes the conceptual shift as follows: “Norman […] is specifically interested in manipulating or designing the environment so that utility can be perceived easily” (2000: 8). By contrast, Gibson’s definition does not include “Norman’s inclusion of an object’s perceived properties, or rather, the information that specifies how the object can be used,” and instead notes that an “affordance is independent of the actor’s ability to perceive it” (McGrenere and Ho, 2000: 3).

As is well known, Norman later conceded that he had misinterpreted Gibson’s term (Norman, 2008a), and corrected his general definition to pertain more specifically to “perceived affordances.”[3] Elsewhere, he elaborates:

“Far too often I hear graphic designers claim that they have added an affordance to the screen design when they have done nothing of the sort. Usually they mean that some graphical depiction suggests to the user that a certain action is possible. This is not affordance, either real or perceived. Honest, it isn’t. It is a symbolic communication, one that works only if it follows a convention understood by the user” (Norman 2008b).

Almost two decades later, the community of interface designers has grown vastly, but claims about supposed affordances have become even more ridiculous, to the point where the term is used by UX designers with extremely wide-ranging meanings, and has become a substitute for almost any front-end term. A recent article, “How to use affordances in UX,” published online by Tubik Studio, demonstrates this well (Tubik Studio, 2018). The title immediately indicates considerable confusion, suggesting that an ‘affordance’ is perhaps simply an element of an app that can be used alongside other design elements such as ‘menu,’ ‘button,’ ‘illustration,’ ‘logo,’ or ‘photo.’ The article then goes on to reference a recent text in which a taxonomy of six rather absurd types of affordances is proposed, categorised as explicit, hidden, pattern, metaphorical, false, and negative (Borowska, 2015). Here, the designer not only moves further away from Gibson’s binary perspective (that an affordance either exists or does not exist), but also extends Norman’s notion of the “perceived” affordance to the level of the absurd. This terminological mess is nothing new for the field of design, in which varied and divergent usages of the term affordance point to many troubling issues including, for example, the careless imprecision with which concepts such as “transparency” and “experience” are used.

Could these careless games with the term ‘affordance’ be ignored, or perhaps even be perceived positively, as a commendable attempt to bring sense into a confusing world of clicking, swiping and drag-and-dropping, as a good intention to contextualize these interactions? It certainly merits emphasizing that neither the desire to define ‘affordance’ nor the careless use of the term is quite as innocent as it may sometimes appear. As a cornerstone of the HCI paradigm of ‘User Centered Design’ – coined and conceptualized (once again) by Don Norman in the mid-1980s – the concept of affordance is equally important to the User Experience bubble initiated (yet again!) by Norman (Merholz, 2007). The two concepts more or less collapsed into one another around 1993, when Norman became head of research at Apple. From then on, User Experience – or UX – swallowed other possible ways of imagining what an interface might be, and how it might be used. I wrote about the danger of scripting and orchestrating user experiences in “Rich User Experience, UX and Desktopization of War,” where I noted that such scripting raises “user illusion” to a level where users are asked to believe that there is no computer, no algorithms, no input (Lialina, 2015a). But as I noted in an earlier piece, “Turing Complete User” (Lialina, 2012), it is very difficult to criticize the concept of UX itself, because it has developed such a strong aura of doing the right thing, of “seeing more,” “seeing beyond,” etc.

Statements by many contemporary UX designers confirm this perception. For example, when asked about his interpretation of UX, Johannes Osterhoff noted that:

“When I say UX I usually mean the processes that I set up so that a product meets customers’ (i.e., users’) needs. [I say] ‘processes’ because usually I deal with complicated tools that take a long time to develop and refine – much beyond an initial mock-up and a quick subsequent implementation. So when I say UX I mean the interplay of measures that have to be taken to enhance a special piece of software on the long run: this involves several disciplines such as user research, usability testing, interaction design, information visualization, prototyping, scientific and cultural research, and some visual design. In a big software company, strategy and psychology is part of this, too. And also streams of communication; which form and frequency is updated; what works in cross-located teams and what does not” (Correspondence with the author, June 3, 2018).

In response to the same question, Florian Dusch, principal of the Stuttgart-based software design and research company “zigzag,” also refers to UX as “many things,” “holistic,” and “not only pretty images” (Correspondence with the author, June 2, 2018). Golden Krishna, a designer employed at Google, in a text with the telling title The Best Interface Is No Interface (2015), offers this list of terms to define UX: “People, happiness, solving problems, understanding needs, love, efficiency, entertainment, pleasure, delight, smiles, soul, warmth, […] etc. etc. etc.” (47). And, finally, the German academic Marc Hassenzahl approximates a definition of UX by introducing himself thus on his website: “He is interested in designing meaningful moments through interactive technologies – in short: Experience Design” (Hassenzahl, n.d.). This small sample of quotes from individuals who have been in the design profession for a long time serves well to convey the sense that UX is growing ever more complex, and is maturing into a very large field. The paradox is that, when it comes to practice, the products of User Experience Design often contradict the image and aura of the field. UX is about nailing things down; it has no place for ambiguity or open-ended processes.

Figure 3. From One Terabyte of Kilobyte Age (2009, ongoing), Olia Lialina and Dragan Espenschied
Figure 4. From One Terabyte of Kilobyte Age (2009, ongoing), Olia Lialina and Dragan Espenschied

Marc Hassenzahl, quoted above, contributes to the field not only through poetic statements and interviews. In Experience Design: Technology for All the Right Reasons (2010), he offers “the algorithm for providing the experience” (12), in which the “why” is a crucial component, a hallmark that justifies UX’s distinguished position. In a series of video interviews that Hassenzahl recorded with the Interaction Design Foundation (n.d.), the multitude of reasons that can lie behind a phone call is used to illustrate this idea: business, a goodnight kiss, checking whether a kid is at home, ordering food. Ideally, each of the “whys” behind these calls would drive and result in the design of a specific user experience, with regard to both the software and the hardware involved. From this perspective, an ideal UX phone would thus be one that adjusts to different needs, or which at least offers a different app for different types of calls. In this sense, the ‘why’ of UX is not a philosophical question but a pragmatic one – it could be substituted with “what exactly?” and “who exactly?” User Experience Design could thus be seen as a successful attempt to overcome the historic accident that Don Norman holds responsible for the difficult-to-use interfaces of the late 1980s: “We have adapted a general purpose technology to very specialized tasks while still using general tools” (Norman, 1990: 218).

“We can design in affordances of experiences,” said Norman in 2014 (Interaction Design Foundation, n.d.). What a poetic expression – if we allow ourselves to forget that ‘affordance’ in HCI means an immediate, unambiguous clue, and that an ‘experience’ is an interface scripted for a very particular, narrow scenario.

There are many such examples of tightly scoped scenarios. To name one that has received significant public attention recently (in the aftermath of the Cambridge Analytica scandal): Facebook recently announced an app for long-term relationships (Machkovech, 2018) – real long-term relationships, not just “hook-ups” (to quote Mark Zuckerberg). I have elaborated my position on general purpose computers and general purpose users elsewhere (see “Turing Complete User” and “Do You Believe in Users?” [2009]), and, following that perspective, I believe that there should be no dating apps at all; not because dating is wrong, but because individuals can actually date using general purpose software: they can date in email, in chats, in Excel and Etherpad. If the free market demands dating software, it should be made without asking “why?” or “what exactly?”, “hook-up or long-term relationship?”, etc. – a general purpose dating app, instead of one that compartmentalises and pigeonholes. The “why” of UX should be left to the users, as well as their right to change the answer and still continue to use the same software.

In the One Terabyte of Kilobyte Age archive, I introduced a “before_” identifier that is assigned to pages created with purposes in mind that have nowadays been taken over by industrialized, centralized tools and platforms. One such category is “before_flickr;” another is “before_googlemaps.” The last figure reminds me of ratemyprofessors.com, so I tagged it “before_ratemyprofessor” (Figures 3 and 4). The webpages collected in my archive are dead, and none of them became successful, but they are examples of users finding individual ways of doing what they desire, in an environment not custom-designed for their specific goals. In contrast to the visions of interface design presented above – with their restrictive views on what kinds of experiences the web affords – this is what I would call a true user experience, even though it runs completely against what has become the dominant ideology of UX.

Apart from contradicting Don Norman’s definition and insisting that computers of the future should be visible, I also propose that the term affordance should finally be severed from Norman’s perspective. This means to disconnect ‘affordance’ from experience, from the ability to perceive it directly (as described in Gibson), and consequently, to also disconnect it from the requirements and goals of experience design. It means to position ‘affordances’ as possibilities of action. The computer’s core ‘affordance,’ then, corresponds to its conceptualization as a ‘general purpose’ device – capable of becoming anything, provided that one is given the option to program it. Ultimately, such a perspective on the concept of affordance (particularly within the fields of HCI and design) means to allow oneself and others to recognize (and, potentially, to act upon) the opportunities and risks of a world that is no longer restrained by mechanical-age conventions, assumptions, and design choices.

In the latest edition of the influential interaction design manual, About Face, the authors observe:

“A knob can open a door because it is connected to a latch. However in a digital world, an object does what it does because a developer imbued it with the power to do something […] On a computer screen though, we can see a raised three dimensional rectangle that clearly wants to be pushed like a button, but this doesn’t necessarily mean that it should be pushed. It could literally do almost anything” (Cooper et al., 2007: 284).

Throughout the chapter, designers are advised to resist this opportunity to design interfaces that could ‘literally do almost anything,’ and instead to consistently follow recognized conventions. Because everything, in the world of zeroes and ones, is, in principle, possible, the authors introduce the notion of a “contract” as a means of establishing constraints and therefore limiting users’ potential recognition of affordances: “When we render a button on the screen, we are making a contract with the user […]” (285). This notion postulates that if there is what appears to be a button on the screen, users should be able to press it – not, for example, drag-and-drop it. The designed object, in other words, should respond appropriately to the expectations of the users. However, this proposition is correct only as long as the envisioned interface is limited to the horizon of preconceived uses and functions of buttons.
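To make this concrete, consider a minimal sketch in TypeScript – a hypothetical illustration, with invented element ids and a stubbed saveDocument routine, not an example drawn from About Face – of two identically rendered buttons whose actual behaviour is simply whatever a developer has imbued them with:

// Hypothetical sketch: #save and #trap render as identical buttons on screen;
// only the developer-assigned handlers decide what each one actually does.
function saveDocument(): void {
  console.log("document saved"); // stand-in for a real save routine
}

const save = document.querySelector<HTMLButtonElement>("#save");
const trap = document.querySelector<HTMLButtonElement>("#trap");

// This button honours the 'contract': pressing it does what its label promises.
save?.addEventListener("click", saveDocument);

// This one breaks it: it looks exactly the same, but a click does nothing,
// while the element can instead be dragged around the screen.
trap?.addEventListener("click", (event) => event.preventDefault());
if (trap) trap.draggable = true;

The visual form, in other words, guarantees nothing; only the convention – the contract – does.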

When Bruno Latour wanted his readers to think about a world without doors he wrote:

“[…] imagine people destroying walls and rebuilding them every time they wish to enter or leave the building… or the work that would have to be done to keep inside or outside all the things and people that left to themselves would go the wrong way” (Freeman et al., 2008: 154).

A beautiful thought experiment, and indeed unimaginable in the material world – but not in a computer-generated world, where we do not really need doors. You can go through walls, you can have no walls at all, you can introduce rules that would make walls obsolete, or simply change their ‘behaviour.’ Since rules and contracts – not the behaviors of knobs – are the future of user interfaces, this emphasizes once again the importance of thinking through the politics of how they are established. The need to be thoughtful and careful in structuring the education of interface designers should be obvious.

 

From human-computer interaction to human-robot interaction

The title of this essay announces two further concepts – forgiveness and human-robot interaction (HRI) – that have not yet been addressed. I will turn to them now in sketching answers to two questions: How does the preoccupation with strong clues and strictly bounded experiences – what might also be described as affordances and UX – affect the beautiful concept of “forgiveness” (often encountered as an ‘undo’ function), which should, at least in theory, be part of every interactive system? And, following on from this, how does HRI refract concepts including transparency, affordance, user experience, the above-mentioned forgiveness, and the idea that ‘form follows function’ or that ‘form follows emotion’?[4]

Apple’s 2006 Human Interface Guidelines give, I think, a very good indication of what exactly might be meant by forgiveness in the context of designing user interfaces (Apple Computer Inc., 2006: 45):

Forgiveness

Encourage people to explore your application by building in forgiveness – that is, making most actions easily reversible. People need to feel that they can try things without damaging the systems or jeopardizing their data. Create safety nets, such as Undo and Revert to Saved commands, so that people will feel comfortable learning and using your product.

Warn users when they initiate a task that will cause irreversible loss of data. If alerts appear frequently, however, it may mean that the product has some design flaws. When options are presented clearly and feedback is timely, using an application should be relatively error-free.

Anticipate common problems and alert users to potential side effects. Provide extensive feedback and communication at every stage so users feel that they have enough information to make the right choices.

In essence, this recommendation intends to make actions reversible, to offer users stable perceptual cues for a sense of ‘home,’ and to always allow the ‘undoing’ of any action. Roughly a decade after these guidelines were published, Bruce Tognazzini and Don Norman noticed that the principle of forgiveness had vanished from Apple’s iOS guidelines and, in reaction, co-authored an essay expressing their irritation under the heading “How Apple Is Giving Design a Bad Name” (Tognazzini and Norman, 2015).[5]

Users of Apple, Android, and all other mobile phones without hardware keyboards noticed the disappearance of forgiveness even earlier, because there was no equivalent to the standard keyboard shortcut for undoing actions, well known from virtually all contemporary desktop operating systems.

Figure 5. External Undo Button, Teja Metez; part of the author’s Undo-Reloaded project (2015)

In my view of the world of HCI, ‘undo’ should be a constitutional right. (It is, accordingly, the top demand in my project User Rights [Lialina, 2013].) First of all, ‘undo’ has a historical importance: it marks the beginning of the period when computers started to be used by people who didn’t program them. Secondly, ‘undo’ is one of very few generic (“stupid”) commands. It follows a convention without sticking its nose into the user’s business, and never asks “why” a user decided to undo an action. In the present context it should be foregrounded that the hype around the affordance concept and UX developed in parallel with the disappearance of the ‘undo’ function. This is not a coincidence: single-purpose applications with one button per screen are designed to guide users through life without any need for ‘undo’.
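A minimal sketch, assuming nothing beyond the convention itself (the Command type and UndoHistory class below are invented for illustration, not any particular toolkit’s API), shows why ‘undo’ can stay so generic: the system merely records how to reverse each action, and never has to ask what an action meant or why the user wants it undone.

// Each action declares only how to apply and how to reverse itself.
type Command = { apply: () => void; revert: () => void };

class UndoHistory {
  private done: Command[] = [];

  run(cmd: Command): void {
    cmd.apply();
    this.done.push(cmd);
  }

  // One convention-following entry point, identical for every kind of action.
  undo(): void {
    this.done.pop()?.revert();
  }
}

// Usage: any action fits, from typing a letter to deleting an entry.
const text: string[] = [];
const history = new UndoHistory();
history.run({ apply: () => { text.push("a"); }, revert: () => { text.pop(); } });
history.undo(); // text is empty again; no "why" was ever asked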

As part of more general new media dynamics, the field of HCI is considered vibrant and ‘pluralistic.’ Tasks for interface designers, therefore, are to be found far beyond ‘Submit’ buttons and the screens of personal computers. There are new challenges, such as Virtual Reality and Augmented Reality, Conversational and Voice User Interfaces, even Brain-Computer Interaction. These areas are not new in and of themselves. They are contemporary with the emergence of graphical user interfaces, but could accurately be described as “trending right now” (or “trending right now again”) in HCI papers and in the culture industry more generally. The current moment (in movies, literature, and consumer products) is all about artificial intelligence, neural networks, and anthropomorphic robots. Allowing this development to infect my curriculum as well, I introduced the rewriting of an ELIZA script (see Landsteiner, 2005) as a task in my interface design course; a sketch of the principle follows below. This prepares students for designing interfaces that talk to users and pretend to understand them. I personally have a bot (see Lialina, 2015b), and this talk will be fed into its algorithm and become a part of the bot’s performance. In a few more years this bot might be injected into a manufactured body that looks something like me and will go to give lectures or write essays in my place.
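To give an impression of the task, here is a minimal ELIZA-style sketch in TypeScript (the rules are invented for illustration, and are far cruder than Landsteiner’s elizabot.js): the interface ‘talks’ and ‘understands’ through nothing more than pattern matching, reflection, and a catch-all reply.

// Toy rules: match a pattern, reflect a fragment of the user's input back.
const rules: [RegExp, (m: RegExpMatchArray) => string][] = [
  [/I need (.*)/i, (m) => `Why do you need ${m[1]}?`],
  [/I am (.*)/i, (m) => `How long have you been ${m[1]}?`],
  [/.*/, () => "Please tell me more."], // catch-all keeps the conversation going
];

function reply(input: string): string {
  for (const [pattern, respond] of rules) {
    const match = input.match(pattern);
    if (match) return respond(match);
  }
  return "I see."; // unreachable, thanks to the catch-all rule
}

console.log(reply("I need a vacation")); // "Why do you need a vacation?"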

Considering the slew of films and TV series in which robots are the main protagonists, and considering popular media coverage of the adventures of human-looking robots such as Sophia, it requires less and less specialization to dive into complex contemporary issues concerning robots that were exotic not too long ago; relevant examples include the difference between symbolic and strong AI, the ethics of robotics, and trans-humanism. This being said, the omnipresence of robots, even if merely in mediated forms, provokes delusions: “We expect our intelligent machines to love us, to be unselfish. By the same measure we consider their rising against us to be the ultimate treason” (Zarkadakis, 2017: 51). Delusions lead to paradoxes: “Robots, which enchant us into increasingly intense relationships with the inanimate, are here proposed as a cure for our too-intense immersion in digital connectivity. Robots, the Japanese hope, will pull us back toward the physical real and thus each other” (Turkle, 2012: 147). Paradoxes then lead on to more questions: “Do we really want to be in the business of manufacturing friends that will never be friends?” (ibid., 101). Should robots have rights? Should robots and bots be required to reveal themselves as what they are?

This last question suddenly entered the discourse after Google’s recent demo of the Duplex AI assistant (Grubb, 2018), when Internet users began to debate whether the tool should be allowed to say “hmmm,” “oh,” “errr,” or to use interjections at all.

Figure 6. Sophia, First Robot Citizen at the AI for Good Global Summit 2018. (Image credit: CC BY 2.0, AI for Good Global Summit)

Perhaps without even noticing, the general public is now engaging in discussions of difficult ethical as well as interface design questions and decisions. By extension, this is also a debate building on the evolving recognition of the potentially much less restrictive affordances of emerging technologies such as AI assistants. And I hope it will stay like this for some time. “Why Is Sophia’s (Robot) Head Transparent?” (Quora, n.d.), users ask. Is it just to look like the lead character from Ex Machina, or is it for better maintenance? Does it perhaps mark a comeback of transparency in the initial, pre-Macintosh meaning of the word? Curiously, when scientists and interaction designers talk about transparency at the moment, they oscillate between the desire to convey meaning and explain algorithms, on the one hand, and that of simplifying communication with a robot, on the other. The following series of recent publication titles is indicative of this trend: “Designing and implementing transparency for real time inspection of autonomous robots” (Theodorou et al., 2017); “Robot Transparency: Improving Understanding of Intelligent Behaviour for Designers and Users” (Wortham et al., 2017a); “Improving robot transparency: real-time visualisation of robot AI substantially improves understanding in naive observers” (Wortham et al., 2017b).

Joanna J. Bryson, who co-authored the aforementioned papers, holds a very clear position on ethics. “Should robots have rights?” is not a question for her. Instead, she asks why we should wish to design machines that raise such questions in the first place (Theodorou et al., 2017). There are, however, enough studies arguing that humanoids (anthropomorphic robots) that perform morality are the right approach for situations in which robots work with, and not instead of, people. This could be described as the social robot scenario, in which the “social robot is a metaphor that allows human-like communication patterns between humans and machines,” as Frank Hegel wrote (Hegel, 2016: 104). Hegel’s essay doesn’t announce paradigm-shifting insights, but rather states quite obvious things, such as that “human-likeness in robots correlates highly with anthropomorphism” (ibid., 111), or that “aesthetically pleasing robots are thought to possess more social capabilities” (ibid., 112). Calmly and subtly, he introduces his principle for fair robot design: the “fulfilling anthropomorphic form” (ibid., 106), which should immediately lead humans to understand a robot’s purpose and capabilities. Such principles indicate a consideration of affordances for a new age.

Robots are here – no longer industrial machines, but social or even “lovable” entities. Their main purpose is not to replace people, but to be among people. They are anthropomorphic; they look more and more realistic. They have ‘eyes’ – not, however, because they need them to see, but because their eyes inform us that ‘seeing’ is among the robot’s functions. If a robot has a ‘nose,’ it is, likewise, to inform the user that it can ‘smell,’ perhaps detect gas and pollution; if it has ‘arms,’ it can obviously carry heavy items; if it has ‘hands,’ it will be designed to grasp smaller items; and if these hands have ‘fingers,’ you might expect that the robot can play a musical instrument. Robots’ eyes beam usability; their bodies express affordances. Faces literally become an interface.

How can this be contextualised with Norman’s wisdom?

“Affordances provide strong clues to the operations of things. Plates are for pushing. Knobs are for turning. Slots are for inserting things into. Balls are for throwing or bouncing. When affordances are taken advantage of, the user knows what to do just by looking: no picture, label, or instruction needed” (Norman, 1988: 9).

Manual affordances (“strong clues”) are easy to comprehend and to accept when they are part of a GUI (graphical user interface): they are graphically represented and located somewhere on a screen. Things became quite a bit more complex for both designers and users when we entered the so-called “post-GUI” realm, in which gestures in virtual, augmented, and invisible space figure importantly. Yet all of this cannot be compared with the astonishing level of complexity that is reached when our thoughts move from human-computer interaction to human-robot interaction.

Figure 7. Video still image from Concept for Swimming Lifesaver Robot (2018), Andreas Eisenhut.

The figure above is from a selection of sketches in which students were tasked with embracing the principle of the fulfilling anthropomorphic form and taking it to the limit. What could an anthropomorphic design be if everything that does not signal a function is removed? If the robot cannot smell, there must be no nose. And why should there be a pair of hands if you only need one? What could this un-ambiguity mean for interaction and product design? Is there a chance for robots not to manifest “what?”, and for humans not to answer “why?”

This leads us to the concluding question regarding the coexistence of affordance and forgiveness in anthropomorphic scenarios: How does the human-computer interaction principle of ‘undo’ appear in human-robot interaction?

In contrast to the current situation in graphical and touch-based user interfaces, forgiveness is doing very well in the realms of robots and AI. It is built in: “[t]he external observer of an intelligent system can’t be separated from the system” (Zarkadakis, 2017: 71). Robot companions are here “[n]ot because we have built robots worthy of our company but because we are ready for theirs,” and “[t]he robots are shaping us as well, teaching us how to behave so they can flourish” (Turkle, 2012: 55). These statements remind us once more of Licklider’s man-computer symbiosis, Engelbart’s concept of bootstrapping, and other advanced projections for the coexistence of man and computer – except that this time, it concerns human and robot, not human and computer-on-the-table. Forgiveness is built in, but in HRI it is always already built into the human part. It is all ours to give. Here, we are witnessing how the most valuable concept of HCI – ‘undo’ – meets a fundamental principle of symbolic AI: scripting the human interactor.[6] It remains to be seen what affordances will further emerge – and who will undo whom once symbolic AI is replaced by strong or, as scientists and mass media refer to it now, “Real” and “Full” AI.

 

References

Bardini, T. (2000) Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing, 1st ed. Stanford: Stanford University Press.

Bolter, J.D. & R. Grusin (2000) Remediation: Understanding New Media. Cambridge: MIT Press.

Bolter, J.D. & D. Gromala (2003) Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge: MIT Press.

Borowska, P. (2015) “6 Types of Digital Affordance That Impact Your UX,” Webdesigner Depot. https://www.webdesignerdepot.com/2015/04/6-types-of-digital-affordance-that-impact-your-ux/

Cooper, A., R. Reimann, & D. Cronin (2007) About Face 3: The Essentials of Interaction Design, 3rd edition. Indianapolis, In.: Wiley.

Eisenhut, A. (2018) Concept for Swimming Lifesaver Robot (video).

Flusser, V. (1997) Medienkultur, 5th ed. Frankfurt am Main: Fischer Taschenbuch.

Frogdesign. “About Us.” https://www.frogdesign.com/about Accessed August 18, 2018.

Grubb, J. “Google Duplex: A.I. Assistant Calls Local Businesses to Make Appointments,” YouTube. https://www.youtube.com/watch?v=D5VN56jQMWM Accessed July 28, 2018.

Hassenzahl, M. & J. Carroll (2010) Experience Design: Technology for All the Right Reasons. San Rafael, Ca.: Morgan and Claypool Publishers.

Hegel, F. (2016) “Social Robots: Interface Design between Man and Machine,” in Hadler, F. & J. Haupt (Eds.) Interface Critique. Berlin: Kulturverlag Kadmos.

Interaction Design Foundation (n.d.) “‘User Experience and Experience Design,’ by Marc Hassenzahl.” Accessed July 28, 2018. https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/user-experience-and-experience-design

Kaptelinin, V. (n.d.) “Affordances,” in The Encyclopedia of Human-Computer Interaction, 2nd ed. (Interaction Design Foundation). Accessed July 28, 2018. https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/affordances

Kay, A. (1990) “User Interface: A Personal View,” in Laurel, B. (Ed.) The Art of Human-Computer Interface Design. Reading, Mass.: Addison-Wesley, pp.191–207.

Krishna, G. (2015) The Best Interface Is No Interface: The Simple Path to Brilliant Technology. Berkeley, Ca.: New Riders.

Landsteiner, N. (2005) “Eliza (Elizabot.Js),” https://www.masswerk.at/elizabot/.

Latour, B. (1994) “Where Are the Missing Masses?,” in Bijker, W. et al. (Eds.) Shaping Technology / Building Society: Studies in Sociotechnical Change, Reissue edition. Cambridge, Mass.: MIT Press, pp.225–59.

Laurel, B., Ed. (1990) The Art of Human-Computer Interface Design, 1st ed. Reading, Mass.: Addison Wesley Publishing Corporation.

Lialina, O. (2012) “Turing Complete User.” http://contemporary-home-computing.org/turing-complete-user/

Lialina, O. (2013) “User Rights,” http://userrights.contemporary-home-computing.org/

Lialina, O. (2015a) “Rich User Experience, UX and Desktopization of War.” http://contemporary-home-computing.org/RUE/

Lialina, O. (2015b) “GIFmodel_ebooks,” Twitter Bot, https://twitter.com/GIFmodel_ebooks.

Lialina, O., & D. Espenschied (2009) “Do You Believe in Users?” in Digital Folklore, Stuttgart: Merz und Solitude.

Lialina, O., & D. Espenschied (2009, ongoing) One Terabyte of Kilobyte Age. http://blog.geocities.institute/

Licklider, J. (2003) “Man-Computer Symbiosis,” in Wardrip-Fruin, N. & N. Montfort (Eds.) The New Media Reader. Cambridge: MIT Press.

Machkovech, S. (2018) “Mark Zuckerberg Announces Facebook Dating,” Ars Technica, https://arstechnica.com/information-technology/2018/05/mark-zuckerberg-announces-facebook-dating/

McGrenere, J. & W. Ho (2000) “Affordances: Clarifying and Evolving a Concept,” Proceedings of Graphics Interface. http://teaching.polishedsolid.com/spring2006/iti/read/affordances.pdf

Merholz, P. (2007) “Peter in Conversation with Don Norman About UX & Innovation,” Adaptive Path. Accessed July 29, 2018. https://www.adaptivepath.com/ideas/e000862/

Metez, T. (2015) External Undo Button. https://newmedia.merz-akademie.de/~teja.metez/undo_reloaded/#undo-keyboard

Murray, J. (1997). Hamlet on the Holodeck: The Future of Narrative in Cyberspace, 1st edition. New York: Free Press.

Norman, D. (1988) The Psychology of Everyday Things. New York: Basic Books.

Norman, D. (1990) “Why Interfaces Don’t Work,” in Laurel, B. (Ed.) The Art of Human-Computer Interface Design. Reading, Mass.: Addison Wesley Publishing Corporation.

Norman, D. (2008a) “Affordances and Design,” accessed July 28, 2018. https://jnd.org/affordances_and_design/

Norman, D. (2008b) “Affordance, Conventions and Design (Part 2),” accessed August 20, 2018. https://jnd.org/affordance_conventions_and_design_part_2/

Osterhoff, J. (2001) Google (performance) http://google.johannes-p-osterhoff.com/

Osterhoff, J. (2012) iPhone live (online performance) http://iphone-live.net/

Osterhoff, J. (2013) Dear Jeff Bezos (online performance) http://www.bezos.cc/

Raskin, J. (2000) The Humane Interface: New Directions for Designing Interactive Systems. Reading, Mass.: Pearson Education.

Theodorou, A., R. Wortham, & J. Bryson (2017) “Designing and Implementing Transparency for Real Time Inspection of Autonomous Robots,” Connection Science 29(3): 230–41.

Tognazzini, B. (2012) “About Tog,” https://asktog.com/atc/about-bruce-tognazzini/.

Tognazzini, B. & D. Norman, “How Apple Is Giving Design A Bad Name,” Fast Company, November 10, 2015. https://www.fastcompany.com/3053406/how-apple-is-giving-design-a-bad-name

Tubik Studio (2018) “UX Design Glossary: How to Use Affordances in User Interfaces,” UX Planet, May 8, 2018. https://uxplanet.org/ux-design-glossary-how-to-use-affordances-in-user-interfaces-393c8e9686e4

Turkle, S. (2004) The Second Self: Computers and the Human Spirit. Cambridge: MIT Press.

Turkle, S. (2012) Alone Together: Why We Expect More from Technology and Less from Each Other. New York, NY: Basic Books.

Wortham, R., A. Theodorou, & J. Bryson (2017) “Robot Transparency: Improving Understanding of Intelligent Behaviour for Designers and Users,” in Gao, Y., S. Fallah, Y. Jin, & C. Lekakou (Eds.) Towards Autonomous Robotic Systems. TAROS 2017. Lecture Notes in Computer Science Vol. 10454.

Quora.com (n.d.) “Why Is Sophia’s (Robot) Head Transparent?” https://www.quora.com/Why-is-Sophias-robot-head-transparent.

Zarkadakis, G. (2017) In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence, 1st ed. New York: Pegasus Books.

 

Notes

[1] See Bardini’s discussion of this issue: “Engelbart took what he called ‘a bootstrapping approach,’ considered as an iterative and coadaptive learning experience” (Bardini, 2000: 24).

[2] “Verwirklichen von Möglichkeiten”

[3] This should remind us of another term that has existed in HCI since 1970, at least at the Xerox PARC lab: User Illusion, which at the end of the day is the same principle, and also a foundation of interfaces as we know them. “At PARC we coined the phrase user illusion to describe what we were about when designing user interfaces” (see Kay, 1990: 191–207).

[4] Form Follows Emotion is a credo of the German industrial designer Hartmut Esslinger, which became a slogan for frog, the company he founded in 1969. See: “About Us,” Frog Design, https://www.frogdesign.com/about (accessed August 18, 2018); “FORM FOLLOWS EMOTION,” Forbes.com, https://www.forbes.com/asap/1999/1112/237.html (accessed August 18, 2018).

[5] Bruce Tognazzini has himself authored eight editions of Apple’s Human Interface Guidelines, starting in 1978, and is known for conceptualizing interface design in the context of illusion and stage magic (see Tognazzini, 2012).

[6] “A successful chatterbot author must therefore script the interactor as well as the program, must establish a dramatic framework in which the human interactor knows what kinds of things to say […]” (Murray, 1997: 202).

 

Born in Moscow in 1971 and now based in Germany, Olia Lialina is an early-days, network-based art pioneer, among the best-known participants of the 1990s net.art scene. Her early work had a great impact on recognizing the Internet as a medium for artistic expression and storytelling. This century, her continuous and close attention to Internet architecture, “net.language” and vernacular web – in both artistic and publishing projects – has made her an important voice in contemporary art and new media theory.

Lialina has, for the past two decades, produced many influential works of network-based art: My Boyfriend Came Back from the War (1996), Agatha Appears (1997), First Real Net Art Gallery (1998), Last Real Net Art Museum (2000), Online Newspapers (2004-2018), Summer (2013), Self-Portrait (2018). Lialina is also known for using herself as a GIF model, and is credited with founding one of the earliest web galleries, Art Teleportacia. She is cofounder and keeper of the One Terabyte of Kilobyte Age archive and a professor for New Media Design at Merz Akademie in Stuttgart, Germany.

Email: olia.lialina@merz-akademie.de

 


This article is from the special issue, Rethinking Affordance (Media Theory 3.1), edited by Ashley Scarlett and Martin Zeilinger.

The official version of record is available here: http://journalcontent.mediatheoryjournal.org/index.php/mt/article/view/79

The full issue is available here: http://journalcontent.mediatheoryjournal.org/index.php/mt/issue/view/4

 

 
