
Futures of interaction: Engaging with intelligent systems

The Data Handbook

How to use data to improve your customer journey and get better business outcomes in digital sales. Interviews, use cases, and deep-dives.

Get the book
Ram Sankar

When systems expand to multiple modalities and dissolve into the technosphere, it's time to rethink app screens, search engine listings and the idea behind explicit user actions.

The grand work of interaction design is about developing systems that understand us. We imbue them with qualities and affordances that shield us against our clumsy organics. They extend our working memories and draw our attention contextually across space and time. The reality most of us find ourselves in would be hopeless to grapple with in the absence of their assistance.

Then there are the physical cocoons and muscular extensions they provide us in traversing bleak terrains and completing Herculean tasks we were never made for. From the first pulleys to memories of bison herds transcribed on rock faces, we have been relying on systems for a long time.

But until now, and to a degree even now, these systems have been held back by their own computational limits. Their movements could be mathematically elegant yet lack versatility. Their memories could extend beyond the collective knowledge of humankind yet lack context. That is about to change. Without being prepared for it, we suddenly find ourselves at the cusp of supranaturally capable systems that combine vast computational abilities with access to real-time, real-world data and the sophistication to handle it. How do we design our interactions with them?

Here we cover a few ideas that have been discussed in this regard. Some are tangential to the topic but ask essential questions, such as what an intelligent system is and how to delimit its agency.

To be intelligent

We can safely assume that we will never agree on what intelligence means with the semantic rigour demanded by philosophy. But that isn't necessary. When we refer to intelligent systems, we are referring to their ability to tackle challenges with sound logical reasoning and the inference of meaning. The ability to generalise this across multiple scenarios with spatiotemporal situatedness, and to reach a level of behaviour that could be called sentient, is the evolutionary next step we call general intelligence.

At present, there are already numerous systems that are hyper-intelligent within their domains but somewhat obscured behind interface ideas that precede them. Finding the middle step between screens and beings is difficult. Chat-based interfaces, VUIs and augmented reality headsets have all been attempting to fill that gap before we head towards futures with seamless, fully integrated technospheres.

But in the realm of computing, it's also not particularly rewarding to create systems that merely exhibit human-like behaviour. Many researchers do not even concern themselves with the Turing Test (sidestepping its criticisms). The pursuit of perfect mimesis has been compared to building mechanical flying birds: futile, when a technology could instead be approached on its own terms. Creating everything in our image brings imperfections into the mix.

Yet this is where the practice of interaction design would differ. It is compelling to be able to interact with a system in a way that is deeply human. And by extension, it would be compelling for any sentient creature to be able to interact with a system in its own manner. When this is almost but not quite achieved, we are faced with complications such as the uncanny valley effect, eeriness arising out of emotional dissonance and a lack of trust that threatens to overturn most of the benefits a hyper-intelligent system would have to offer.

So it is essential not to overstep, and still to seek ways in which these systems of the near future could engage with us. If the voice is uncanny, perhaps underplay the emotions. If they know too little about our social context, ask rather than assume.

 


From explicit to implicit

A key idea prevalent in the design of interactions is that of explicit user action. Over time, some actions have become recurring events and certain others have been automated, but most desired outcomes are still achieved through intentional, explicit actions on our part. This is a natural progression of how we have used tools from the very beginning, and it does not call to be completely overturned. But it is worth pondering how much time and effort could be saved if the need for them were reduced.

For instance, I frequent a certain travel app. I have used it for years, but no matter how many times I have made similar decisions over and over again, it remains oblivious to my preferred flight durations and fares over certain distances. Each time, I sort and filter from scratch, and it never presents me with alternative destination or date suggestions by default. In fact, my dates frequently coincide with the same holidays and my destinations with the same times of the year, yet the app remains indifferent every time I initiate the process.

Once the booking is made, there are predictable follow-up tasks I have to undertake, such as accommodation, local travel, and perhaps some museum bookings. The app does make some broad recommendations immediately, but never anything of relevance. In all these years, I have never booked a rental car (I don't drive), yet I keep being offered deals from Sixt.

This, to be sure, is a relief for the privacy-centric. It is also a reflection of just how much work still needs to be done in optimising data streams and how they connect. But if that were achieved, then in terms of interaction, the whole idea of having to open an app screen and start typing in destinations becomes entirely unnecessary.

An intelligent system of the near future could act as an add-on to my phone's operating system, initiated by a voice command or by searching for the destination on the home screen. It could integrate with my phone's understanding of my behaviour, initially make selections on my behalf, and only ask me to verify and modify. It wouldn't even need a standalone app. If there were ways to ensure that my private data is used only for my benefit in this limited context, it would be a perfect solution (for me).
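The quiet preselection described here could be sketched as a simple filter-and-rank step over learned preferences. Everything below is hypothetical: the `FlightOption` fields, the thresholds and the ranking rule are invented for illustration, not any real app's logic.

```python
from dataclasses import dataclass

@dataclass
class FlightOption:
    destination: str
    duration_h: float   # total flight time in hours
    fare_eur: float

# Hypothetical preferences a system might infer from years of bookings:
# the durations and fares I have historically been willing to accept.
LEARNED_PREFS = {
    "max_duration_h": 11.0,   # I never booked flights longer than this
    "max_fare_eur": 650.0,    # the typical fare ceiling I accepted
}

def preselect(options: list[FlightOption], prefs: dict) -> list[FlightOption]:
    """Filter and rank options the way I end up doing by hand every time,
    so the system can propose a shortlist and merely ask me to confirm."""
    viable = [o for o in options
              if o.duration_h <= prefs["max_duration_h"]
              and o.fare_eur <= prefs["max_fare_eur"]]
    # Rank by the trade-off I historically preferred: shorter first, then cheaper.
    return sorted(viable, key=lambda o: (o.duration_h, o.fare_eur))
```

The point is not the two thresholds but the interaction shift: the system acts first and asks for verification, rather than waiting for me to sort and filter yet again.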

In the more distant future, perhaps this might all be replaced by an all-knowing, all-doing super system. But until then, even small steps forward bring big implications for the way we navigate these challenges. Do we need separate search engines, and would the order in which things are listed matter? What happens to SEO? And likely most important of all: how do we want to interact?

Degrees of freedom

When the computational abilities of systems were rudimentary, we often had to learn how to use them. The industrial machines of the Steam Age were unforgiving. The earliest data processors required the minds of scientists. The modes through which we could hope to engage with them were often limited to just one: keystrokes to signals. Then, with the advent of GUIs, came more sophisticated methods for converting mechanical actions into electrical signals. As systems reached higher levels of intelligence, these, too, have expanded.

To list a few, we have added Voice User Interfaces, conversational text bots and spatial sensors to the mix, and with them, new degrees of freedom for our interactions. When do we opt for screens, and when for voice? How long can spatial interactions be expected to continue before physical exhaustion sets in? Some systems might offer multiple modalities at the same time, and some might be transmodal, switching from one to another as circumstances dictate.

 


 

One way to approach this would be to mirror the physical world: let the expressive abilities of speech lead the way, supplemented with direct manipulation and spatial gestures as needed. Only a few gestures are understood universally, so perhaps systems might learn to personalise them by inference.

But we would also need to supplement these with visual interfaces, since spatial movements are restricted in dense environments, as is the clarity of speech. Would a multimodal approach be more practical for most applications, even if it invites unforeseen complications? And how does a system maintain a consistent personality across these modalities? The concept of a brand is unlikely to disappear; perhaps it would be limited to content alone.
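A transmodal system of this kind needs some policy for when to switch. As a toy sketch, with entirely made-up thresholds for ambient noise and crowd density:

```python
# A toy policy for transmodal switching: given a reading of the
# environment, choose the interaction modality least likely to fail.
# All thresholds are hypothetical placeholders, not empirical values.

def choose_modality(noise_db: float, crowd_density: float, hands_free: bool) -> str:
    """Pick one modality for the current context; a real system would
    blend several and carry the same personality across all of them."""
    if noise_db > 70:                 # speech recognition degrades in noise
        return "screen"
    if crowd_density > 0.7:           # broad gestures are impractical in a crowd
        return "voice" if hands_free else "screen"
    if hands_free:
        return "voice"
    return "touch"
```

Even this crude rule set makes the design question concrete: the thresholds, their ordering, and the fallbacks are all interaction decisions, not engineering details.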

Beyond all these, there are the transhumanists who advocate and extol the benefits of Brain-Machine Interfaces, where mere thought is all it takes to engage. How do we prepare for that? The futures that await suddenly look unimaginably complex, with no interaction paradigms in our roster to deploy.

Frameworks of action

There is, however, a level above immediate interactions where we would be able to design the ways in which such a system would operate. Incidentally, this might also be the way forward when we think about domains such as product design and services. I refer to it as frameworks of action.

Essentially, it is about designing models or frameworks that guide the system's actions. The model would have weighted priorities, constraints and desired outcomes, but no predesigned flows or even screens. According to the context and the people it engages with, it would tap into a library of content and components to exhibit the desired behaviour. Even now, many of the products we use have personalised interfaces; this would take that a step further.
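A minimal sketch of such a framework, assuming invented component names, scores and weights: weighted priorities rank the components, hard constraints filter them, and no flow or screen is predesigned.

```python
# Sketch of a "framework of action": weighted priorities plus hard
# constraints select from a component library per context.
# Component names, scores and weights are illustrative inventions.

COMPONENT_LIBRARY = [
    {"name": "voice_prompt",  "modality": "voice",   "accessibility": 0.9, "speed": 0.8},
    {"name": "card_list",     "modality": "screen",  "accessibility": 0.7, "speed": 0.6},
    {"name": "gesture_panel", "modality": "spatial", "accessibility": 0.4, "speed": 0.9},
]

# Weighted priorities set by human designers; the weights sum to 1.
PRIORITIES = {"accessibility": 0.7, "speed": 0.3}

def assemble(context: dict) -> list[str]:
    """Drop components the context's hard constraints forbid, then rank
    the rest by their weighted score against the design priorities."""
    allowed = [c for c in COMPONENT_LIBRARY
               if c["modality"] not in context.get("forbidden_modalities", [])]
    ranked = sorted(allowed,
                    key=lambda c: sum(w * c[k] for k, w in PRIORITIES.items()),
                    reverse=True)
    return [c["name"] for c in ranked]
```

The human contribution lives in the library, the weights and the constraints; the assembly per context is left to the system.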

While we still remain with GUIs, this approach could combine with other systems, such as the brand's communication system and digital design system, to create screens on demand. It could rapidly test different combinations and update itself. We wouldn't need to run separate split tests and multivariate tests; we would only set goals and limits.
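Self-updating testing of this sort resembles a multi-armed bandit: instead of a hand-run split test, the system keeps exploring screen variants under a set goal (a conversion rate) and a limit (an exploration rate). A minimal epsilon-greedy sketch, with invented variant statistics:

```python
import random

# Epsilon-greedy selection over screen variants: mostly exploit the
# best-performing variant observed so far, occasionally explore another.
# The success/trial counts in the test are invented for the demo.

def epsilon_greedy(successes: list[int], trials: list[int],
                   epsilon: float = 0.1, rng=random) -> int:
    """Return the index of the variant to show next."""
    if rng.random() < epsilon:
        # Exploration: try a random variant within the allowed rate.
        return rng.randrange(len(trials))
    # Exploitation: show the variant with the best observed conversion rate.
    rates = [s / t if t else 0.0 for s, t in zip(successes, trials)]
    return rates.index(max(rates))
```

Here "goals and limits" map directly onto the conversion metric and the epsilon parameter; the designer sets both, and the system runs the experiment continuously.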

Beyond GUIs, the system could choose the modality that best accommodates the context and characteristics of people interacting with it and offer bespoke interface touchpoints on demand. Eventually, it might become a combination of semi-human-made content and machine-made interactions working within a human-designed framework of values, ethics and outcomes. It's hard to comprehend all the issues that might arise out of such systems of the near and distant futures but an obvious one is safety, and for that, we need to ensure affordances for both human and machine error.

 


What does it mean to be a designer?

Not unlike the definition of intelligence, the definition of design is a wildly slippery one. Here I take it in the broadest sense possible, but when we think of design practitioners, it is also incumbent on this discussion to consider what it might mean for the different areas of design practice. In my own work, I have often been disappointed by the repetitiveness of simple but essential things, such as building an accessible button for the hundredth time, each varying only moderately in appearance and behaviour. As systems get more intelligent, I think such inefficiencies will gradually be corrected.

Then we also have the more expressive areas of design, where human creativity and that uniquely human way of seeing things, rooted in time and place, are predominant. While systems can overtake us in creativity on their own terms, they do not see the world as we do and will always have fractures in their mimetic displays. This is where I believe we will continue to produce work of unparalleled merit. But it also necessitates moving away from trite obsessions with trends: the less personality something has, the easier it is to mimic.

The balance of power between the system and design practitioners is another important aspect where we might see a need for clear definitions. Decisions are rarely made with perfect insight, as information itself is imperfect and incomplete. So how do we agree on who is right when disagreements arise: by authority, or by legitimacy of argument?

At this point in time, we are unsurprisingly left with more questions than answers. But to speculate on these is to expand our understanding of interaction design, and as such, it is essential. There is little doubt that we will be engaging with hyper-intelligent systems in the near future, and that the landscape will always be unevenly distributed. But how we approach our interactions with them, and how we manage the transitions between them, will define how we function as a society in the years to come.
