The Need for Caution in Government Use of AI

To date, machine learning has been used primarily to determine product offerings: What movies should Netflix recommend to its users? What products should Amazon suggest? More recently, however, tech companies have begun marketing machine learning applications to governments and public agencies. For example, PredPol uses “real-time epidemic-type aftershock sequence crime forecasting” and historical crime data to predict when and where new crimes will take place,[1] and its software has been used by the Los Angeles Police Department (LAPD) to fight “pre-crimes.”[2] Chinese firm CloudWalk combines facial recognition with gait analysis to flag suspicious behavior and movements, and police in China are using this technology to estimate the probability that someone will commit a crime.[3]
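
To make the “aftershock” idea concrete, here is a minimal sketch of a self-exciting intensity model, in which each past incident temporarily raises the expected rate of new incidents in the same grid cell. This is not PredPol’s actual model; the incidents, grid cells, and parameters below are all invented for illustration.

```python
# A toy "aftershock"-style forecast: past incidents boost the expected rate of
# near-future incidents in the same cell, with the boost decaying over time.
# All data and parameters are hypothetical.
import math

# Hypothetical historical incidents: (day, grid_cell) pairs for one small grid.
incidents = [(1, (2, 3)), (2, (2, 3)), (5, (0, 1)), (6, (2, 4)), (9, (2, 3))]

MU = 0.1     # assumed background rate per cell per day
ALPHA = 0.5  # assumed boost contributed by each past incident
DECAY = 0.3  # assumed per-day decay of that boost

def intensity(cell, day, history):
    """Estimated incident rate for `cell` on `day`, given past incidents."""
    rate = MU
    for past_day, past_cell in history:
        if past_cell == cell and past_day < day:
            rate += ALPHA * math.exp(-DECAY * (day - past_day))
    return rate

# Rank cells by predicted intensity on day 10 -- the "where to patrol" output.
cells = {cell for _, cell in incidents}
print(sorted(cells, key=lambda c: intensity(c, 10, incidents), reverse=True))
```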

While it is commendable that the public sector is moving towards more modern, cutting-edge technology, the adoption of AI tools in public agencies presents a grave risk not present in the private sector. Private-sector companies mostly use their data and AI to sell a product or service; if their data are labeled incorrectly or their algorithms don’t work, they lose a sale. With governments and public agencies, the stakes are much higher. If a police department or social services agency relies on bad data or faulty algorithms, the result could be false arrests, wrongful imprisonment, or the unjust denial of food stamps.

Exacerbating this risk are the lack of “good” public data and the inability of public agencies to run A/B tests. The data that public agencies have access to are often riddled with institutional biases. For example, police arrest data reflect the income, race, and gender biases of the officers and department,[4] and unemployment data can reflect job market discrimination against certain ages, genders, and races.[5] Such data should not be used as the basis for AI models behind government services, as they would only perpetuate these biases. Engadget writer Chris Ip summarizes it well: “For an AI to be fair, then, it needs to not reflect the world, but create a utopia, a perfect model of fairness.”[6]
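
As a toy illustration of how collection bias becomes model bias, the sketch below (entirely synthetic, with assumed numbers) gives two groups the same underlying offense rate but unequal enforcement; a naive “risk score” learned from the resulting arrest records then paints the more heavily policed group as riskier.

```python
# Synthetic demonstration: identical true offense rates, unequal arrest rates.
# A model trained only on arrest records inherits the enforcement disparity.
import random

random.seed(0)
TRUE_OFFENSE_RATE = 0.05             # same for both groups by construction
ARREST_PROB = {"A": 0.9, "B": 0.3}   # assumed enforcement disparity

def observed_arrest_rate(group, n=100_000):
    """Fraction of people recorded as arrested -- what the data 'say' about risk."""
    arrests = sum(
        1
        for _ in range(n)
        if random.random() < TRUE_OFFENSE_RATE      # an offense occurs
        and random.random() < ARREST_PROB[group]    # and it leads to an arrest
    )
    return arrests / n

for group in ("A", "B"):
    print(group, round(observed_arrest_rate(group), 3))
# Group A appears roughly 3x "riskier" purely because of how the data were collected.
```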

As the public sector moves toward AI, it is important that these agencies work to minimize the harm and bias their algorithms and tools can produce. They should be careful about how they collect, analyze, and use their data, and they should watch for biases, faulty assumptions, and limitations along the way. This could be accomplished by making data and algorithms transparent to the public, or by letting third parties conduct audits and examinations for discrimination.[7]
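
As one concrete example of what a third-party examination might involve, here is a minimal sketch of a disparate-impact check: compare how often each group receives the favorable decision and flag large gaps. The decision records and the 0.8 “four-fifths rule” threshold are illustrative assumptions, not a complete audit methodology.

```python
# Minimal disparate-impact check on hypothetical decisions from a
# benefits-eligibility model: approved = 1, denied = 0.

def approval_rates(records):
    """records: iterable of (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Lowest group's approval rate divided by the highest; below 0.8 is a red flag."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: group B is approved noticeably less often.
decisions = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 55 + [("B", 0)] * 45
rates = approval_rates(decisions)
print(rates, round(disparate_impact(rates), 2))  # ~0.69 -> warrants further review
```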

Despite the data issues and risks that the public sector faces in adopting AI and more modern technology, I think it is the right step if done well, and I am excited to see what the future holds.

[1] https://www.techemergence.com/ai-crime-prevention-5-current-applications/

[2] https://www.washingtonpost.com/local/public-safety/police-are-using-software-to-predict-crime-is-it-a-holy-grail-or-biased-against-minorities/2016/11/17/525a6649-0472-440a-aae1-b283aa8e5de8_story.html?noredirect=on&utm_term=.6a20d76e79ac

[3] http://www.chinadaily.com.cn/china/2017-07/31/content_30303525.htm

[4] http://theconversation.com/why-big-data-analysis-of-police-activity-is-inherently-biased-72640

[5] https://www.bls.gov/opub/ted/2017/unemployment-rate-and-employment-population-ratio-vary-by-race-and-ethnicity.htm

[6] https://www.engadget.com/2017/12/21/algorithmic-bias-in-2018/

[7] https://theconversation.com/why-big-data-analysis-of-police-activity-is-inherently-biased-72640


One comment on “The Need for Caution in Government Use of AI”

  1. Great post!

    I agree with your claims and think that algorithmic bias may lead to criminal injustice not only through wrongful imprisonment but also in decisions about who should or should not get parole. Institutionalized discrimination is another major concern when considering the government’s use of AI.

    To add to your post, when used by public agencies, ML algorithms may filter information and create filter bubbles. This may, in turn, lead to echo chambers and polarize information, which could hamper the government’s work in areas such as security and transportation.

    That said, some of the most exciting discussion right now is about how AI in the public sector, in this case in healthcare, could positively impact millions of lives. It is claimed that ML algorithms are being designed to detect early signs of dementia by observing changes in the human voice. This could allow many people to be treated early and, if successful, could be extended to algorithms that detect other neurological and cognitive diseases.

