Computer scientists, software engineers and academics currently carry the load of responsibility for the ethical implications of applied AI (specifically machine learning). As a result, AI is often inferred to have its own agency. This emerging separation of technology from people is alarming, considering it is people who make it and use it to inform important decisions. The work belongs to a wider group – namely development teams and their parent organisations. Key to moving forward to a positive future is having cross-discipline teams making this technology, mindful ethical approaches, and clear methods of communicating this process on an ongoing basis with teams, customers, clients and society.
This talk is aimed at designers and product managers and proposes an ethical framework for discovery and solution design. We look at where UX can fit in via the application of existing methods, the relationships between the people involved (including how to anticipate power relationships), and finally proposed approaches to solution design that foster trust and control and reduce the mystery of machine learning systems. A robust list of references and further reading is also provided.
- Australian Government Privacy Laws
- CSIRO Ethics Resources for Researchers (including indigenous communities)
Papers and Reports
- Ethically Aligned Design Version 2, by the IEEE
- Algorithmic Impact Assessments – framework for public agency accountability by AI Now
- AI Now Institute 2017 Report
- The Three Laws of Robotics in the Age of Big Data by Jack M. Balkin
- Mechanics of Trust by Jens Riegelsberger, M. Angela Sasse & John D. McCarthy
- Auditing Algorithms from the Outside
- Future of Life AI Principles
- Revealing Uncertainty for Information Visualisation
- Ethics By Numbers How To Build Machine Learning That Cares by Lachlan McCalman
- Computer says no: why making AIs fair, accountable and transparent is crucial by Ian Sample Science editor, The Guardian
- Don Norman: Designing for People
- Why AI is still waiting for its ethics transplant by Scott Rosenberg, WIRED
- Ethical Machine Learning (from 8:00 to 19:20)
- Will Tech Companies Ever Take Ethics Seriously? by Evan Selinger
- 3A Institute, ANU
- Centre for Humane Tech (working on components)
- Greater Than Experience – Design Agency (working on components)
- The Ethics Centre, Australia (drafting guidelines for technology, yet to be released)
- FAT* Conference
- ODI Data Ethics Canvas
- Ten Usability Heuristics by NNGroup
- Humane AI Newsletter by Roya Pakzad
- Fairness Measures managed by Meike Zehlike
Hilary Cinis – Crafting Ethical AI Products and Services – a UX Guide
Data61 focuses on every aspect of data R&D – everything from data collection to insights and the interfaces used to consume data.
Discussion of data and ethics quickly leads to philosophy. There are two general philosophical schools – utilitarian (judging by outcomes) and deontological (sense of duty).
AI and machine learning do not have their own agency – they are created, the agency remains with the humans who make them.
So why have an ethical practice for technology? What informs it, and how achievable is it? There is a great opportunity right now for UX people to get deeply involved in the ethics of business.
The IEEE make many great points about having to underpin philosophy with legislation, to cover the people you simply cannot control or appeal to. Systems need to be sustainable, accountable and scrutinised. These are not unreasonable things to expect, as this impacts human rights.
The AI Now 2017 Report talks about the importance of user consent and privacy impacts, particularly when business needs (profit) are controlling the development of an autonomous system.
In all data situations there is a trust vs utility tradeoff. Facebook’s recent problems stem from allowing private data to become public data – pushing too far towards utility and breaking trust.
So how can we capture and anticipate ethical implications – what are the tradeoffs and compromises? We need efficiency but we also need to avoid harm. If we can write a sentencing algorithm for the legal system, can we trust it to do what we expect as a society? What level of error is acceptable?
How can we create and use perceived affordances for users to understand what a system is doing, and make a decision on whether they can trust it? Does the system use language people can understand if they don’t know enough to vet the underlying algorithms?
“Edge cases” is a terrible phrase. Everyone is an edge case. We can’t devolve our responsibilities by claiming it won’t happen.
The questions to guide you:
- can we – legally? ethically?
- should we – does it give a commercial return?
- who and why – UX and product (duty and intent)
- how to actually build it – development (encoded utility)
Where does this leave us as designers? Many people place the responsibility for ethical systems on the product and design teams. This scrutiny is actually a great opportunity to go to the business and push for the support to do great work.
The discovery phase of these new systems isn’t really that different. The who and why should still be considered before the how and what. The most powerful person is still the CEO or whoever is paying the bills; the people with least power are the users… and yet the system is based on their private data. The technical experts sit between the two. Our natural skillsets in service design enable us to guide others.
When we communicate to users about data, there are many layers underneath that affect the credibility and trustworthiness of the experience. If the data isn’t precise or complete, the inferences drawn from it will be questionable; this leads to disagreement, and credibility is lost.
Some industries, like aviation safety, have this kind of debugging nailed; we should look to them for inspiration.
The myth of the black box has to stop. Hiding algorithms behind IP doesn’t hold up, they can be audited without giving away the IP. People need to know what algorithms are doing, how they work and why they’re being used. We can’t wait for crisis points to have these conversations, it should be going on all the time. You should be able to explain what data is being collected, what you’re doing with it and where it is going.
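Auditing from the outside is more practical than it sounds. A hypothetical sketch, assuming nothing about the model's internals: treat it as a black box, feed it records, and compare the rate of positive outcomes across groups. The function names, toy model and data below are all illustrative assumptions, not a real audit tool.

```python
# Hypothetical sketch of an external (black-box) audit: we only need the
# model's predictions, not its internals or IP, to compare outcome rates.
def audit_selection_rates(predict, records, group_key):
    """Compare positive-outcome rates across groups using only model outputs."""
    totals, positives = {}, {}
    for record in records:
        group = record[group_key]
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if predict(record) else 0)
    return {g: positives[g] / totals[g] for g in totals}

# Toy model that (unfairly) keys off postcode, and some made-up applicants:
toy_model = lambda r: r["postcode"] < 3000
applicants = [
    {"postcode": 2000, "group": "A"},
    {"postcode": 2100, "group": "A"},
    {"postcode": 4000, "group": "B"},
    {"postcode": 2500, "group": "B"},
]
print(audit_selection_rates(toy_model, applicants, "group"))
# {'A': 1.0, 'B': 0.5} – a disparity worth interrogating
```

A large gap between groups doesn’t prove wrongdoing on its own, but it is exactly the kind of signal an auditor can surface without ever seeing the algorithm’s source.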
These questions of who, why and what are the heart of UX practice. They’re not new or scary – we know what to do! Inclusive design needs to come to the front; everyone needs to be included. The NNGroup’s ten usability heuristics still apply:
- visibility of system status
- match between system and real world
- user control and freedom
- consistency and standards
- error prevention
- recognition rather than recall
- flexibility and efficiency of use
- aesthetic and minimalist design
- help people recognise, diagnose and recover from errors
- help and documentation
A simple point: treat your users like smart people! If you are making a prediction with a level of uncertainty, explain that. People understand that the weather forecast isn’t 100% certain – a 90% chance of rain means it probably will rain, but might not.
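The weather-forecast framing above can be sketched in code. This is a minimal, assumed example (the function, thresholds and wording are all illustrative) of translating a raw model probability into plain language a non-expert can act on:

```python
# Sketch: turn a raw model probability into forecast-style plain language,
# so uncertainty is surfaced rather than hidden. Thresholds are illustrative.
def describe_prediction(label, probability):
    """Render a model score as a sentence a non-expert can act on."""
    percent = round(probability * 100)
    if probability >= 0.9:
        qualifier = "very likely"
    elif probability >= 0.7:
        qualifier = "likely"
    elif probability >= 0.5:
        qualifier = "possible"
    else:
        qualifier = "unlikely"
    return f"{label} is {qualifier} ({percent}% confidence)"

print(describe_prediction("Rain tomorrow", 0.9))
# Rain tomorrow is very likely (90% confidence)
```

The exact thresholds and phrasing should be tested with real users; the point is that the uncertainty is stated, not buried.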
Strive for ethical practices; reframe good design practices to apply to the new work; set up diverse, multi-disciplinary teams; challenge “can we” with “should we”. Just because something can be made doesn’t mean it should be, and not everything can be an experiment when people’s lives are involved.
Remember this needs to be an ongoing conversation. Get involved in whatever piece you can, share what you can, keep asking questions.
@hi1z | article
Crafting Ethical AI Products and Services:
Part 1 looks at the reasons why an ethical mindset and practice is key to technology production and positions the ownership as a multidisciplinary activity.
Part 2 is a set of proposed methods for user experience designers and product managers working in businesses that are building new technologies specifically with machine learning AI.
The audience for this guide was initially the Data61 user experience designers and product managers tasked with assisting in the development of products and systems that use data (sensitive or public) and machine learning (algorithms that make predictions, assist with decision making, reveal insights from data, or act autonomously). These products are expected to deliver information to a range of users and provide the basis for contextually supported decisions. I’m hoping a wider audience will find it helpful.
I have since expanded it, and after recently presenting on this topic at Web Directions Design 2018 the revisions include updates in response to further input and interest.
Machine learning computer scientists, software engineers, data scientists, anthropologists and other highly skilled technical or social science professionals are very welcome to read this guide to enhance their understanding of user experience concerns – and maybe even refer to it.