Towards Transparency in AI: Methods and Challenges


Timnit Gebru • Google

Automated decision-making tools are currently used in high-stakes scenarios. From natural language processing tools used to automatically determine one's suitability for a job, to health diagnostic systems trained to predict a patient's outcome, machine learning models are used to make decisions that can have serious consequences for people's lives. Despite the consequential nature of these use cases, vendors of such models are not required to perform tests demonstrating their suitability for a given task, to provide documentation describing their models' characteristics, or to disclose the results of algorithmic audits ensuring that certain groups are not treated unfairly. I will show examples of the dire consequences of basing decisions entirely on machine-learning-based systems, and discuss work on auditing and exposing the gender and skin-tone bias found in commercial gender classification systems. I will end with the concepts of datasheets for datasets and model cards for model reporting, which standardize information about datasets and pre-trained models in order to push the field as a whole towards transparency and accountability. Recently, many powerful entities in academia and industry have announced initiatives related to AI ethics. I will spend some time in this talk discussing how we can learn from the mistakes and evolution of other disciplines that have performed, and continue to perform, what some call parachute research: research that uses the pain of marginalized communities without centering their voices or benefiting them.
