Incorporating AI into design – a framework of considerations

I've recently been doing some work that involves incorporating AI into design. As part of this work, and out of a broader interest, I've been doing quite a lot of research and thinking about how we can approach the use of AI technologies within design work.

Before we get started, I want to clarify that I use 'AI' as a broad term covering a wide range of learning technologies, from simple robotic process automation through machine learning to deep learning. The exact definition doesn't matter too much for my purposes here.

To help frame my thinking and support a consistent approach to my design work, I've developed a framework which I think covers the key considerations for most cases.

AI Design Framework

Here is a brief overview of each section, with some thoughts. Over time I plan to expand and refine it, so any feedback is welcome:

1) Design for People

When I first started to draft the framework I wrote 'AI' in the middle. I rapidly realised that this was wrong and replaced it with 'People'. First and foremost we need to remember that (until Skynet takes over) we are designing for people, and all the normal considerations apply: we need to do research and observation to get a really good understanding of the people we are designing for, their context, and the need we are trying to create a solution for.

AI is a set of potential technology capabilities, not the starting point. At the moment there is a lot of hype around AI, and this results in it being thrown at problems where a better, and simpler, solution may exist.

2) Design for Transparent Value

We're still in the early stages of the 'AI revolution', and for the majority of the general public the perception of AI is based on exposure through sci-fi and the press, both of which generally present AI in a negative light.

As a result, it's really important that the user interacting with the AI has a clear understanding of why they are getting a better outcome than they would from another experience, particularly if the alternative in previous similar encounters was with a human.

3) Design for Failure

AI technologies rely on statistical probability for much of what they do. As a result there will be situations where the system acts in the 'wrong' way, or produces an output or decision that isn't in line with human expectation.

As a result it is really important to ensure that the implications of a failure are considered and designed for. The confusion matrix is a great starting point for this: considering the four possible outcomes (true positive, false positive, false negative, true negative) and designing for each of them appropriately is a useful exercise.
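To make that concrete, here's a minimal sketch in Python of counting the four outcomes for a hypothetical binary classifier and pairing each with a designed response. The labels, data and design responses are purely illustrative assumptions, not a prescription:

```python
from collections import Counter

def confusion_counts(y_true, y_pred):
    """Count the four confusion-matrix outcomes for binary labels."""
    counts = Counter()
    for truth, pred in zip(y_true, y_pred):
        if pred and truth:
            counts["true_positive"] += 1    # system acted, and was right
        elif pred and not truth:
            counts["false_positive"] += 1   # system acted, but was wrong
        elif not pred and truth:
            counts["false_negative"] += 1   # system missed something real
        else:
            counts["true_negative"] += 1    # system correctly did nothing
    return counts

# Each outcome needs its own designed experience, not just the happy path.
design_responses = {
    "true_positive": "confirm the action and explain why it was taken",
    "false_positive": "make it easy to undo and to report the mistake",
    "false_negative": "provide a manual route so the user isn't blocked",
    "true_negative": "stay out of the way",
}

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
for outcome, count in confusion_counts(y_true, y_pred).items():
    print(f"{outcome}: {count} -> {design_responses[outcome]}")
```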

The impact of failure should also be considered; depending on this, it may be appropriate to hand off to a human unless there is very high confidence in the action of the system. Alternatively, it may be appropriate to use the AI to augment human decision making and leave the final action to the person involved in the process, although this has its own potential disadvantages.
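As a rough illustration of that kind of hand-off, here's a minimal sketch assuming a hypothetical model that returns a label and a confidence score between 0 and 1. The threshold and the behaviours are assumptions for illustration only:

```python
AUTO_ACTION_THRESHOLD = 0.95  # only act automatically when very confident

def handle_prediction(label: str, confidence: float) -> str:
    """Decide whether the system acts itself or hands off to a person."""
    if confidence >= AUTO_ACTION_THRESHOLD:
        return f"auto: apply '{label}' and tell the user why"
    # Below the threshold the AI only augments the human decision:
    # surface the suggestion and its confidence, but leave the final
    # action to the person in the process.
    return f"handoff: suggest '{label}' ({confidence:.0%}) for human review"

print(handle_prediction("approve claim", 0.98))  # auto
print(handle_prediction("approve claim", 0.71))  # handoff
```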

4) Design for Learning

A key part of any machine learning system is its ongoing improvement and adaptation based on usage and the growing data set in the live environment. After an initial training phase, it is normal to continue using data from the live environment on an ongoing basis to continually refine, improve and adapt the algorithm.

This whole process needs consideration to ensure that it continues to improve, and not degrade, the capability of the system. While the initial training data used to set up the algorithm may have been carefully curated, it is likely that the inputs and human responses in a live environment will not be as closely controlled. Therefore how the feedback loop operates, and how the data is quality controlled, is of critical importance.
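As an illustration of what quality control on that feedback loop might look like, here's a minimal sketch that gates live examples before they re-enter the training set. The field names and checks are assumptions, not a prescription:

```python
def accept_for_retraining(example: dict) -> bool:
    """Decide whether a live interaction is clean enough to learn from."""
    # Drop records with missing or malformed inputs.
    if not example.get("input") or example.get("label") is None:
        return False
    # Only trust labels confirmed by a human, or backed by a clear
    # downstream signal (e.g. the user accepted the suggestion).
    if example.get("label_source") not in {"human_review", "user_accepted"}:
        return False
    # Guard against a feedback loop where the model's own low-confidence
    # outputs get recycled as ground truth.
    if (example.get("label_source") == "user_accepted"
            and example.get("model_confidence", 0) < 0.8):
        return False
    return True

live_batch = [
    {"input": "example user input", "label": 1, "label_source": "human_review"},
    {"input": "example user input", "label": 1, "label_source": "model_only"},
    {"input": None, "label": 0, "label_source": "human_review"},
]
retraining_set = [ex for ex in live_batch if accept_for_retraining(ex)]
print(f"kept {len(retraining_set)} of {len(live_batch)} live examples")
```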

5) Design for Ethics

The whole field of ethics around AI and machine learning is still very new and is the subject of much research and debate. Putting aside some of the more existential questions and focusing on nearer-term practical considerations for narrow AI, there are a few points worth considering. For anyone working on use cases where the outcome has a significant impact on the user, a much more thorough examination of this area is clearly needed.

Two of the key considerations I see at the moment are filter bubbles, where users see increasingly narrow results based on a feedback loop linked to their previous interactions, and inclusivity, where it is important to use a training data set that is a good, broad representation of the overall population, not skewed to represent only a small subsection to the detriment of others. There have been numerous examples where a poorly representative training set has resulted in fairly shocking failures in a live environment.
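On the inclusivity point, one simple practical step is to compare the make-up of your training data against the population it will serve. Here's a minimal sketch of such a check; the attribute, groups and reference shares are illustrative assumptions, and real checks should use attributes relevant to the use case:

```python
from collections import Counter

def representation_report(records, attribute, reference_shares, tolerance=0.1):
    """Compare group shares in training data against a reference population."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "expected": expected,
            "actual": round(actual, 3),
            "under_represented": actual < expected - tolerance,
        }
    return report

# Illustrative data: a training set heavily skewed towards younger users.
training_data = ([{"age_band": "18-34"}] * 70
                 + [{"age_band": "35-64"}] * 25
                 + [{"age_band": "65+"}] * 5)
reference = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

for group, row in representation_report(training_data, "age_band", reference).items():
    print(group, row)
```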

Summary:

In this short post I've barely scratched the surface, but I hope these five areas give you a good starting point for considering the implications of including AI tech in your design work. As the field evolves and we see more published use cases, I'm sure the learnings will rapidly evolve, so I'm keen to hear feedback on how others have implemented AI and any learnings (good or bad!) you have.

Further Reading:

Rise of the Racist Robots

Google Design – Human Centred Machine Learning

Juvet Agenda

Fast Co. – Designers Guide to the AI industry

IDEO – Applying Human centred design to emerging technologies