Integrating a Framework for Human-Centered AI

Shaping a Human-Centered Artificial Intelligence Framework

Artificial Intelligence is now deeply intertwined with how we interact with information technology, with government, and with one another. Tech companies face a crisis of trust over the misuse of user data and over technology capable of unprecedented surveillance and censorship, which demands a delicate balance between regulation and freedom of expression. The problems of data and algorithmic bias that produce discrimination remain unresolved. AI now mediates the content and information underlying many of our products and services, and yet the question remains: how do we evaluate the effects of AI on humans across the full spectrum of rights?

The first step is to establish a human-centered institutional framework for developing responsible, do-good AI technology fully vested in the human interest. The best AI fulfills its potential to serve humanity, enhances human ability, and fosters collaboration between human and machine. Supporting such development requires an acceptable and universal framework, which can be achieved by expanding upon existing infrastructure that is universally applicable. Combining the Human Rights Framework and the United Nations Declaration of Democracy with the more abstract Asilomar principles and Google’s Equality of Opportunity principles can together provide a rich, practical basis for ensuring that AI goals align with human interests and protect our rights: to work, to privacy, and most importantly, to civil and political rights such as free expression.[1]

Framework Components

1) Establish Interdisciplinary Teams

A review framework that keeps AI focused on a human-centered approach must take social dynamics, regulatory checks, and outcomes into consideration. The framework should be flexible and adaptive, built on interdisciplinary collaboration across a diverse community of humanists and scientists. These interdisciplinary teams ensure that AI technology responsibly integrates social dynamics, and they evaluate its social outcomes.

2) Identify Potential Threats, Account for Liability, and Establish Checklists

We must maintain an emphasis on incorporating social objectives into research priorities. Weighing the pros and cons of an AI technology means considering whom it helps, whom it harms, its intended benefits and unintended consequences, and its potential for misuse. The ethical debate around AI revolves around moral agency and liability: when an AI acts maliciously, does the designer bear full responsibility? At present, no mechanism governs designer ethics beyond abstract statements such as the Asilomar principles. We need a critical assessment of the checks and balances in how technicians build complex computational models in machine learning. It is also crucial to prepare unbiased fact sheets, since how the data is assembled shapes the outcomes of machine learning, and to establish checklists for identifying and acknowledging bias in algorithms (a minimal example follows below). Algorithms should be developed with the goal of reducing structural bias and disparities across gender, ethnicity, and age, thereby improving equity.[2]
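As a minimal sketch of what one such checklist item could look like in code (assuming a pandas DataFrame with hypothetical group and outcome columns, not any particular production system), the check below compares positive-outcome rates across demographic groups and flags disparities for the review team:

```python
import pandas as pd

def outcome_rate_disparity(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group, expressed as a ratio to the best-off group.

    A ratio well below 1.0 for any group is a flag for the review team to
    investigate, not proof of discrimination on its own.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical example: loan approvals broken down by a protected attribute.
applicants = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1, 1, 0, 1, 0, 0, 0],  # 1 = approved
})
print(outcome_rate_disparity(applicants, "group", "outcome"))
# group A = 1.0, group B = 0.375; the checklist would surface this disparity.
```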

3) Apply Algorithmic Techniques to Auditing

While the review framework guides interdisciplinary teams of experts in their operations, the problem of biased data producing discriminatory labels can be addressed with the Algorithmic Fairness approach proposed by Cynthia Dwork[3]. Differential Privacy offers a complementary guarantee: a set of techniques that allow analysis of a large database while ensuring that no individual within it can be identified[4]. Additionally, Google’s Equality of Opportunity in Supervised Learning[5] can serve as a guideline for reducing bias in data models. Multidisciplinary collaboration is essential to reducing discrimination in machine learning.
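To make these techniques concrete, here is a minimal, illustrative sketch (not the published implementations of Dwork or of Hardt et al.): one function checks the equality-of-opportunity criterion, which asks that true positive rates be roughly equal across groups, and another releases an aggregate count through the Laplace mechanism that underlies differential privacy. The function names, data, and epsilon value are all assumptions for illustration:

```python
import numpy as np

def true_positive_rates(y_true, y_pred, groups):
    """Equality of opportunity (Hardt et al.): among truly positive cases,
    the rate of positive predictions should be roughly equal across groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(y_pred[(groups == g) & (y_true == 1)].mean())
        for g in np.unique(groups)
    }

def laplace_count(true_count, epsilon=0.5, sensitivity=1.0):
    """Release a count with Laplace noise scaled to sensitivity / epsilon,
    the basic mechanism behind differential privacy. The epsilon here is an
    illustrative choice; real deployments tune it carefully."""
    return true_count + np.random.laplace(scale=sensitivity / epsilon)

# Hypothetical audit: are qualified applicants approved at equal rates?
tprs = true_positive_rates(
    y_true=[1, 1, 0, 1, 1, 0],
    y_pred=[1, 1, 0, 0, 1, 1],
    groups=["A", "A", "A", "B", "B", "B"],
)
print(tprs)               # {'A': 1.0, 'B': 0.5}: unequal TPRs violate the criterion
print(laplace_count(128)) # a noisy count that is safer to publish
```

Checks like these do not replace the interdisciplinary review described above; they give reviewers quantitative evidence to debate.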

Citation:
Jasmine Poon, APRU NYU Case Competition 2018, July 31, 2018.

[1] Stanford GDPi, “Human-Centered AI: Building Trust, Democracy and Human Rights by Design,” Medium, July 9, 2018, accessed July 31, 2018, https://medium.com/stanfords-gdpi/human-centered-ai-building-trust-democracy-and-human-rights-by-design-2fc14a0b48af.

[2] See Fig. 2: the Human-Centered AI development goal of Equity & Inclusion.

[3] James Zou and Londa Schiebinger, “AI Can Be Sexist and Racist — It’s Time to Make It Fair,” Nature 559, no. 7714 (July 19, 2018), doi:10.1038/d41586-018-05707-8.

[4] Kevin Hartnett, “Making Algorithms Fair: An Interview with Cynthia Dwork,” Quanta Magazine, November 23, 2016, https://www.quantamagazine.org/making-algorithms-fair-an-interview-with-cynthia-dwork-20161123/.

[5] Google, “Attacking Discrimination with Smarter Machine Learning,” https://research.google.com/bigpicture/attacking-discrimination-in-ml/.

[6] Zen Soo and Philia Siu, “SenseTime Joins Alibaba Group to Nurture AI Start-Ups in Hong Kong,” South China Morning Post, May 21, 2018, https://www.scmp.com/tech/china-tech/article/2147055/sensetime-joins-forces-alibaba-group-nurture-ai-start-ups-hong-kong.

 


One comment on “Integrating a Framework for Human-Centered AI”

  1. Hi Jasmine – I enjoyed your blog, and thought your references and frameworks were very thoughtful and applicable to the topic.
    While your perspective is very pragmatic, I’m curious why you believe NGOs and governments need to be involved in regulating AI. I don’t disagree, but am curious which risks you think need to be avoided. For example, I’m not convinced that “we must maintain an emphasis on incorporating social objectives into research priorities,” or how that relates to AI.

