10 organizations leading the way in ethical AI

By Dmitrijs Martinovs, Product Assistant at SAGE Ocean

As technology advances, considering the ethical implications of those changes becomes a crucial part of technological development. Initiatives such as the Responsible Computer Science Challenge, which is awarding up to $3.5 million to universities taking innovative approaches to teaching ethics alongside computer science, recognize the need for ethical training to be built into the research process from the outset.
Ethical considerations are particularly important in the field of artificial intelligence. AI is not only susceptible to misuse, but also reflects biases that exist in society. For example, facial recognition technology has been shown to be far more effective at identifying white men than black women.
In their guidance on involving disabled people in social research, Martin Farmer and Fraser Macleod (2011) argue that research on disabled people adds value only if people with disabilities are involved in every step of the research process. Likewise, it is critical that people from historically disadvantaged groups, and from groups that suffer social stigma, discrimination and/or oppression, are involved and have a voice in the conversation around ethics in AI.

Some efforts towards ethical AI have failed precisely because of a lack of diversity in this space. Google’s recent U-turn on forming an AI ethics committee is one example, with reports of Google staff members opposing appointments to the committee because of the appointees’ conservative views on diversity and inclusion.

Fortunately, there are a number of institutions and groups that are addressing ethical questions in AI head-on. Here’s our top 10:

1.       The Berkman Klein Center for Internet & Society is based at Harvard University and focuses on the study of cyberspace. One of their current projects, Privacy Tools for Sharing Research Data, examines how to maintain research participants’ privacy in the context of information technology, advances in statistical computing, and big data.

2.       The Center for the Governance of AI is housed at the Future of Humanity Institute at the University of Oxford. In a recent report, the center’s director Allan Dafoe outlines a research agenda that would shed light on the risks associated with AI, providing policy guidance that seeks to ensure the beneficial development and use of advanced AI.

3.       The Data & Society Research Institute addresses the social implications of innovative technologies, such as the abuse of sociotechnical practices to invade people’s privacy, and the provision of new tools of discrimination. A current project examines ‘frameworks to help policymakers and ethics bodies better understand what they should demand of AI development, procurement, and assessment processes.’

4.       AI Now at New York University is a research institute that seeks to understand the social consequences of AI. Their longitudinal project, Discriminating Systems, examines the intersection of gender, race, and power in AI.

5.       The founding principle of the Human-Centered Artificial Intelligence Institute at Stanford University was the need for AI developers to reflect the wider population in terms of gender, ethnicity, age, nationality and culture. The institute is working on an AI auditing project that would enable machines to discover and correct their own biases.

6.       The Ethics of AI Lab at the University of Toronto runs an interdisciplinary workshop series, Ethics of AI in Context; all sessions are recorded and available to watch free online. The lab also runs a graduate course of the same name, with speakers from a wide range of academic disciplines.

7.       The Ethics and Governance of Artificial Intelligence Initiative is a joint project between the MIT Media Lab and the Berkman Klein Center for Internet & Society. Founded in 2017, it seeks to ensure that automation and machine learning are researched, developed and deployed in accordance with ethical principles.

8.       The MIT Media Lab is engaging in impact-oriented pilot projects to support the use of AI for the public good. Through the Algorithmic Decision Making and Governance in the Age of AI project, the team is working with governments around the world to develop best practices for using AI in decision-making.

9.       The Alan Turing Institute was founded in 2015 by the UK Government and operates across a number of top universities. The primary aim of the institute’s AI program is to advance world-class research into AI’s applications and implications for society. One of their current projects explores how AI can be designed and deployed inclusively to benefit people with disabilities.

10.   The Center on Privacy & Technology at Georgetown University addresses privacy and surveillance law and policy, and the communities they affect. In May 2019, the center published a report examining how police agencies have misused face recognition technology.