AI ethics can help foster a world with less bias and more fairness as technology advances. Here's what the field is and why it matters.
AI ethics are the moral principles that guide companies toward responsible, fair development and use of AI. As artificial intelligence (AI) becomes increasingly important to society, experts in the field have identified a need for ethical boundaries when creating and implementing new AI tools. Although there's currently no wide-scale governing body to write and enforce these rules, many technology companies have adopted their own version of AI ethics or an AI code of conduct.
In this article, we'll explore what ethics in AI are, why they matter, and some challenges and benefits of developing an AI code of conduct. Afterward, if you'd like to learn more about AI ethics, consider taking IBM's Generative AI: Impact, Considerations, and Ethical Issues course.
AI ethics are the guiding principles that stakeholders (from engineers to government officials) use to ensure artificial intelligence technology is developed and used responsibly. This means taking a safe, secure, humane, and environmentally friendly approach to AI.
A strong AI code of ethics can include avoiding bias, protecting the privacy of users and their data, and mitigating environmental risks. Corporate codes of ethics and government regulatory frameworks are the two main ways AI ethics are put into practice. By addressing ethical AI issues at the global and national levels and laying the policy groundwork for ethical AI within companies, the two approaches complement each other in regulating AI technology.
More broadly, the discussion around AI ethics has moved beyond the realm of academic research and non-profit organizations. Today, big tech companies like IBM, Google, and Meta have assembled teams to tackle the ethical issues that arise from collecting massive amounts of data. At the same time, government and intergovernmental entities have begun to devise regulations and ethics policies grounded in that academic research.
Designing ethical principles for responsible AI use and development requires collaboration between industry actors, business leaders, and government representatives. Stakeholders must examine how social, economic, and political issues intersect with AI and determine how machines and humans can coexist harmoniously by limiting potential risks or unintended consequences.
Each of these actors plays an important role in reducing bias and risk in AI technologies:
Academics: Researchers and professors develop the theory, statistics, and ideas that governments, corporations, and non-profit organizations can draw on.
Government: Agencies and committees within a government can help facilitate AI ethics in a nation. A good example is the Preparing for the Future of Artificial Intelligence report, developed by the National Science and Technology Council (NSTC) in 2016, which outlines AI's relationship to public outreach, regulation, governance, the economy, and security.
Intergovernmental entities: Entities like the United Nations and the World Bank are responsible for raising awareness and drafting agreements on AI ethics globally. For example, UNESCO's 193 member states adopted the first-ever global agreement on the Ethics of AI in November 2021 to promote human rights and dignity.
Non-profit organizations: Non-profit organizations like Black in AI and Queer in AI help diverse groups gain representation within AI technology. The Future of Life Institute created 23 guidelines, now known as the Asilomar AI Principles, which outline specific risks, challenges, and desirable outcomes for AI technologies.
Private companies: Executives at Google, Meta, and other tech companies, as well as leaders in banking, consulting, health care, and other private-sector industries that use AI, are responsible for creating ethics teams and codes of conduct, often setting a standard for other companies to follow.
AI ethics are important because AI technology is meant to augment or replace human intelligence. But when technology is designed to replicate human decision-making, the same issues that can cloud human judgment can seep into the technology.
AI projects built on biased or inaccurate data can have harmful consequences, particularly for underrepresented or marginalized groups and individuals. And if AI algorithms and machine learning models are built too hastily, correcting learned biases after the fact can become unmanageable for engineers and product managers. It's far easier to incorporate a code of ethics during development and mitigate risks before they materialize.
Science fiction, in books, film, and television, has long toyed with the notion of ethics in artificial intelligence. In Spike Jonze's 2013 film Her, a computer user falls in love with his operating system because of her seductive voice. It's entertaining to imagine the ways machines could influence human lives and push the boundaries of "love," but it also highlights the need for thoughtfulness around these developing systems.
It may be easiest to illustrate the ethics of artificial intelligence with real-life examples. In December 2022, the app Lensa AI used artificial intelligence to generate stylized, cartoon-like profile photos from people's ordinary images. From an ethical standpoint, some people criticized the app for not crediting or adequately compensating the artists who created the original digital art the AI was trained on [1]. According to The Washington Post, Lensa was trained on billions of photographs scraped from the internet without consent [2].
Another example is ChatGPT, an AI model that lets users produce original content by asking it questions. ChatGPT is trained on data from the internet and can answer a question in a variety of forms, whether a poem, Python code, or a business proposal. One ethical dilemma is that people are using ChatGPT to win coding contests or to write essays they submit as their own. It also raises questions similar to Lensa's, but with text rather than images.
These are just two popular examples of AI ethics. As AI has grown in recent years, influencing nearly every industry and having a huge positive impact on industries like health care, the topic of AI ethics has become even more salient. How do we ensure bias-free AI? What can be done to mitigate risks in the future? There are many potential solutions, but stakeholders must act responsibly and collaboratively to ensure positive outcomes across the globe.
Read more: Generative AI Ethics: AI Risks, Benefits, and Best Practices
There are plenty of real-life challenges that can help illustrate AI ethics. Here are just a few.
If an AI system isn't trained on data that accurately represents the population, its decisions are susceptible to historical biases. In 2018, Amazon came under fire for an AI recruiting tool that downgraded resumes featuring the word "women" (as in "Women's International Business Society") [3]. In essence, the tool discriminated against women and created legal risk for the tech giant.
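To make that kind of audit concrete, here is a minimal sketch of a pre-deployment bias check. The screening outcomes below are invented for illustration, not drawn from Amazon's system; the four-fifths rule it applies is a common first-pass test for disparate impact.

```python
# A minimal bias check comparing selection rates across groups.
# All outcomes below are made-up illustrative data.

def selection_rate(outcomes):
    """Fraction of candidates the model advanced to the next stage."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions: 1 = resume advanced, 0 = resume rejected.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # e.g., one demographic group
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # e.g., another demographic group

rate_a = selection_rate(group_a)  # 0.75
rate_b = selection_rate(group_b)  # 0.375

# Four-fifths rule: flag the model if one group's selection rate falls
# below 80% of another group's.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
if ratio < 0.8:
    print(f"Potential disparate impact: selection-rate ratio = {ratio:.2f}")
```

A check like this catches only the crudest disparities, which is why it belongs early in development, before a model's learned biases become entrenched and expensive to unwind.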
As illustrated earlier with the Lensa AI example, AI relies on data pulled from internet searches, social media photos and comments, online purchases, and more. While this helps to personalize the customer experience, there are questions about the apparent lack of true consent for these companies to access our personal information.
Some AI models are large and require significant amounts of energy to train on data. While research is being done to devise methods for energy-efficient AI, more could be done to incorporate environmental ethical concerns into AI-related policies.
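To give a sense of scale, here is a rough back-of-envelope sketch of how training energy and emissions are often estimated. Every figure in it is an illustrative assumption, not a measurement of any particular model:

```python
# A back-of-envelope estimate of training energy use and emissions.
# All figures below are illustrative assumptions.

num_gpus = 1000            # accelerators used for training (assumed)
gpu_power_kw = 0.4         # average draw per GPU, in kilowatts (assumed)
training_hours = 24 * 30   # one month of continuous training (assumed)
pue = 1.2                  # data-center overhead factor (assumed)

energy_kwh = num_gpus * gpu_power_kw * training_hours * pue

grid_kg_co2_per_kwh = 0.4  # emissions intensity of the grid (assumed)
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Energy: {energy_kwh:,.0f} kWh")        # ~345,600 kWh
print(f"Emissions: {emissions_tonnes:,.0f} t CO2")  # ~138 tonnes
```

Even with these modest assumptions, a single training run consumes as much electricity as dozens of households use in a year, which is why energy reporting is increasingly part of the AI ethics conversation.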
Creating more ethical AI requires a close look at the ethical implications of policy, education, and technology. Regulatory frameworks can ensure that technologies benefit society rather than harm it. Globally, governments are beginning to enforce policies for ethical AI, including how companies should deal with legal issues if bias or other harm arises.
Anyone who encounters AI should understand the risks and potential negative impacts of unethical AI and AI-generated fakes. Creating and disseminating accessible resources can help mitigate these risks.
It may seem counterintuitive to use technology to detect unethical behavior in other technology, but AI tools can help determine whether video, audio, or text is genuine or fabricated, and they can surface unethical data sources and bias faster and at greater scale than human reviewers.
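As a minimal sketch of that idea, the toy detector below learns to separate human-written from machine-generated text. The handful of sentences and labels are invented for illustration; real detectors follow the same pattern with far larger corpora and models:

```python
# A toy AI-generated-text detector trained on a hand-labeled dataset.
# The texts and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "honestly the movie dragged but the ending got me",     # human (assumed)
    "As an AI language model, I can provide an overview",   # machine (assumed)
    "lol my cat just knocked over my coffee again",         # human (assumed)
    "In conclusion, there are several key considerations",  # machine (assumed)
]
labels = [0, 1, 0, 1]  # 0 = human-written, 1 = AI-generated

# Turn each text into word- and phrase-frequency features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(texts)

# Fit a simple linear classifier on the labeled examples.
classifier = LogisticRegression().fit(features, labels)

sample = ["In conclusion, I can provide several key considerations"]
print(classifier.predict(vectorizer.transform(sample)))  # likely [1]
```

The design choice is the same one production systems make: rather than hand-writing rules for what "fake" looks like, label examples and let a model learn the telltale patterns.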
The ultimate question for our society to answer is, how do we control machines that are more intelligent than we are? If questions like this intrigue you, consider enrolling in one of these AI courses on Coursera:
For an overview of AI ethics, try Lund University’s Artificial Intelligence: Ethics & Societal Challenges. In this beginner-level course, you'll explore the ethical and societal impact of AI technologies, ranging from algorithmic bias and surveillance to AI in democratic vs. authoritarian regimes.
To explore AI's role in addressing complex challenges, consider enrolling in DeepLearning.AI's AI for Good Specialization. There, you'll learn a step-by-step framework for the development of AI projects, build AI projects focused on achieving positive environmental outcomes, and explore real-world case studies related to health, climate change, and disaster management.
To enhance your work and daily life with generative AI, explore IBM's Generative AI Fundamentals Specialization. In addition to reviewing AI ethics, you'll also learn fundamental AI concepts, apply prompting techniques to achieve your desired outcome with generative AI, and identify areas in your work and life that can be enhanced by AI.
[1] NBC News. "Lensa, the AI portrait app, has soared in popularity. But many artists question the ethics of AI art." https://www.nbcnews.com/tech/internet/lensa-ai-artist-controversy-ethics-privacy-rcna60242. Accessed February 3, 2025.
[2] The Washington Post. "Your selfies are helping AI learn. You did not consent to this." https://www.washingtonpost.com/technology/2022/12/09/chatgpt-lensa-ai-ethics/. Accessed February 3, 2025.
[3] Reuters. "Amazon scraps secret AI recruiting tool that showed bias against women." https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G. Accessed February 3, 2025.
This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.