A bruised reputation, stakeholder divestment, talent flight: many businesses are incorporating AI tools into their processes, but few are aware of the risks they run when AI is used without appropriate governance and oversight.
As businesses race ahead, it’s critical they establish a responsible AI framework to ensure tools are used ethically and accurately.
A responsible and ethical AI framework is essential for organizations building or investing in these technologies. To avoid negative consequences, businesses should adhere to the key principles of ethical AI usage and follow best practices for developing a responsible AI framework.
What Is Responsible AI?
Responsible AI is a term used to describe how businesses deploy AI tactically. Businesses using AI responsibly are focused on fairness and mitigating any biases in the technology. As AI development accelerates, a clear framework to guide AI usage is essential.
Michael Impink, an instructor with Harvard Division of Continuing Education’s Professional & Executive Development, explains it this way:
“Responsible AI means you’re paying attention to fairness outcomes, cutting biases, and going back and forth with the development team to remediate any issues to make sure the AI is appropriate for all groups,” he says.
Impink, who teaches AI Ethics in Business, says there is not yet a one-size-fits-all solution to adopting a responsible approach to AI.
“It depends what you’re doing with AI, how central it is to your business,” Impink says. “Banks, hospitals, or other organizations in regulated industries will need to make sure their AI works well for all groups — across race and gender, for example — whereas those concerns may not be top of mind for a company in the unregulated product market.”
Taking a responsible approach to AI allows businesses to deliver results without undermining their mission and values or exposing themselves to litigation.
What Is Ethical vs. Responsible AI?
The terms “ethical AI” and “responsible AI” are often used interchangeably, but there are key differences between the two.
Ethical AI refers to an approach to AI that is philosophical and focused on abstract principles (like fairness and privacy) while also examining the broader societal implications of widespread AI usage. For example, researchers investigating AI’s impact on the environment or its potential for workforce disruption are examining AI ethics.
Responsible AI is more narrowly focused on how AI is being used. AI responsibility deals with issues related to accountability, transparency, and regulatory compliance. For example, in a medical research setting, a responsible AI framework would ensure there was sufficient transparency into the AI algorithm to understand and eliminate any biases.
When it comes to AI, responsibility and ethics are interconnected concepts: both investigate AI for potential ethical blind spots. While most business leaders will deal with responsible AI issues on a day-to-day basis, gaining an understanding of the broader ethical implications of AI use will support them in making informed decisions.
Why Are Ethics Important in AI?
AI is an incredibly powerful tool with manifold potential uses: machine learning, robotics, content creation, digitization, research, and more. However, human beings remain responsible for AI outcomes.
As managers and executives begin to assume responsibility for AI, they need to have an understanding of the ethical issues at hand. Should the AI make a significant error, the management team would be held accountable.
Understanding AI ethics helps leaders mitigate the liability associated with inappropriate data or AI usage, protecting them from potential legal and ethical risks. Managers who have a clear understanding of the ethical issues AI raises will have a competitive advantage in protecting their company and product, and quickly identifying any potential ethical issues that arise.
What Are the 5 Key Principles of Ethical AI for Organizations?
Organizations that use AI ethically follow five key principles: fairness, transparency, accountability, privacy, and security. These principles outline the best ways to limit an organization’s exposure to the risks associated with AI.
Principle #1: Fairness
Fairness in AI relates to the output of the AI. To be “fair” in this sense means the outputs meet a fairness criterion. Depending on the task or problem being solved, that criterion could concern equitable allocation, error rates, or accurate representation. Criteria are typically tied to legally protected attributes, like race and gender, and “fairness” ensures outcomes are equitable across these populations.
Organizations that want to ensure their AI is delivering fair outcomes across protected classes will need to build models that appropriately weigh different criteria for different groups.
However, some people in the data set may not want to share information related to race, gender, or religion, making it more difficult to create fair outcomes across groups.
“There’s a trade-off between privacy and transparency,” Impink says. The more transparent the data, the easier it is to get a fair outcome — but this could infringe on an individual’s right to privacy.
But building “fair” algorithms is complicated by differing views of what counts as “fair.” The firm Equivant (formerly Northpointe) built the COMPAS algorithm, which judges used to predict which defendants were most likely to reoffend. The algorithm took a strict quantitative approach to fairness and didn’t weight groups differently.
The algorithm was predictively correct at the same rates for Black and white defendants, but ProPublica found that when it was wrong, it was wrong in different ways depending on race: Black defendants who would not be arrested again within two years were labeled “high risk” at twice the rate of white defendants who also would not be arrested again in that period.
The algorithm didn’t take discriminatory policing practices against Black people into account, resulting in incorrect recidivism predictions for that population. At the time, Northpointe argued that the algorithm was fair because its model showed the same likelihood of reoffense across all groups. ProPublica countered that COMPAS was not fair in terms of treating likes alike: when the algorithm was wrong, its errors fell disproportionately on Black defendants, so it did not meet a quantitative definition of fairness, especially since race is a protected class.
Organizations that want to ensure fairness in their AI algorithms need to develop robust fairness criteria across protected classes and other social groups.
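As a concrete illustration, the sketch below shows one way a check along these lines might look in code: comparing false positive rates across groups and flagging the model for human review when they diverge. The column names, the review threshold, and the toy data are assumptions made for this example only, not references to any real dataset or vendor tool.

```python
# Illustrative sketch only: column names, threshold, and data are assumptions.
import pandas as pd

def false_positive_rate_by_group(df: pd.DataFrame) -> pd.Series:
    """False positive rate per group: flagged high risk, but did not reoffend."""
    negatives = df[df["reoffended"] == 0]  # people who were not arrested again
    return negatives.groupby("group")["predicted_high_risk"].mean()

def needs_review(rates: pd.Series, max_ratio: float = 1.25) -> bool:
    """Flag the model for review if one group's FPR exceeds another's by max_ratio."""
    return rates.max() / rates.min() > max_ratio

# Toy data: group A's non-reoffenders are flagged "high risk" twice as often as group B's.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "predicted_high_risk": [1,   1,   0,   0,   1,   1,   0,   0,   0,   1],
    "reoffended":          [0,   0,   0,   0,   1,   0,   0,   0,   0,   1],
})
rates = false_positive_rate_by_group(df)
print(rates)                # A: 0.50, B: 0.25
print(needs_review(rates))  # True -> the disparity warrants human review
```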
Principle #2: Transparency
If fairness relates to the outcomes of using AI, transparency is knowing what goes into an algorithm. Creating transparent AI tools helps ensure the tool is unbiased, which is critical for delivering accurate outcomes.
AI algorithms can develop biases in numerous ways.
“The programmers creating the AI could be biased,” Impink says. “If they were all white men who went to Northwestern, for example, there could be blind spots.”
Next, the algorithm itself could be biased, overweighting a certain kind of data.
Finally, the learning materials that trained the AI could be biased. If some relevant information was excluded — purposely or otherwise — the algorithm could become biased.
Organizations can ensure their AI framework is transparent by having programmers consider diverse perspectives when building the tool and by conducting rigorous bias testing. Organizations would also benefit from having an AI bias expert on staff who can closely monitor the algorithm’s outcomes to identify areas of bias. This expert could also examine training materials to make sure they draw from broad, unbiased sources.
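One small piece of that bias testing might be an audit of how different groups are represented in the training data. The sketch below is a minimal illustration under an assumed column name and threshold, not a complete testing regime.

```python
# Illustrative sketch only: the column name "group" and the 10% floor are
# assumptions for this example; real audits would cover many more attributes.
import pandas as pd

def underrepresented_groups(train_df: pd.DataFrame,
                            group_col: str = "group",
                            min_share: float = 0.10) -> pd.Series:
    """Return the groups whose share of the training data falls below min_share."""
    shares = train_df[group_col].value_counts(normalize=True)
    return shares[shares < min_share]

# Toy training set: group A dominates, so B and C are flagged for attention.
train_df = pd.DataFrame({"group": ["A"] * 90 + ["B"] * 8 + ["C"] * 2})
print(underrepresented_groups(train_df))  # B: 0.08, C: 0.02
```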
Principle #3: Accountability
Accountability in AI means someone needs to be held accountable for the outcomes AI produces. AI itself cannot experience consequences, so organizations need to build a solid framework defining who will be held responsible for the AI. As an IBM training manual from 1979 puts it: “A computer can never be held accountable. Therefore a computer must never make a management decision.”
“When something goes wrong,” Impink says, “you need a throat to squeeze.”
But determining who is responsible for AI malfunction isn’t always easy. Impink shares this potential example of the complexities associated with accountability.
“Say a driverless Uber runs someone over. Is Toyota, the carmaker, responsible? Is it the software developer who built the algorithm? Is it the passenger?”
Organizations can establish clear hierarchies outlining responsibilities for each AI element. A clearly delineated structure will help determine who will be held responsible if something goes wrong.
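One lightweight way to make such a structure explicit is to record it alongside the system itself, for example as an ownership map that names an accountable person for each element. The components, titles, and escalation paths below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch: a simple ownership map for the elements of an AI tool.
# The components, roles, and escalation paths are invented for illustration.
AI_ACCOUNTABILITY = {
    "training_data":      {"owner": "Head of Data Governance",  "escalation": "Chief Data Officer"},
    "model_development":  {"owner": "ML Engineering Lead",      "escalation": "VP of Engineering"},
    "deployment":         {"owner": "Platform Operations Lead", "escalation": "CTO"},
    "outcome_monitoring": {"owner": "AI Ethics Officer",        "escalation": "General Counsel"},
}

def responsible_party(component: str) -> str:
    """Look up who answers for a given element of the AI system."""
    entry = AI_ACCOUNTABILITY[component]
    return f"{entry['owner']} (escalates to {entry['escalation']})"

print(responsible_party("outcome_monitoring"))
# AI Ethics Officer (escalates to General Counsel)
```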
Principle #4: Privacy
Privacy in AI relates to keeping the data AI uses secure. Personally Identifiable Information (PII), such as someone’s name, Social Security number, address, or phone number, must be kept private to protect individuals from fraud and identity theft.
When it comes to AI, privacy and security are closely linked, and organizations must be in compliance with data privacy laws.
“Security is what makes privacy work,” Impink says. “Without it, people would just steal your data.”
Organizations can ensure their AI framework protects user privacy by establishing a strong security system to keep PII safe within their AI tool.
Principle #5: Security
To maintain privacy, organizations have a responsibility to keep user data secure. When using AI, that security means protecting internal, private data from external attacks or internal corruption.
A strong data security protocol is absolutely necessary for any organization using AI. This includes: strong encryption protocols for data (both at rest and in transit), strict identity and access management (IAM) policies to limit who has access to the data, and the anonymization of personal data used for training purposes.
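As a small illustration of the anonymization piece, the sketch below shows what pseudonymizing PII fields before data reaches a training pipeline might look like. The field names and the salted-hash approach are assumptions for this example, and pseudonymization is only one layer of a full protocol.

```python
# Illustrative sketch only: field names and the salted-hash approach are assumptions;
# encryption at rest/in transit and IAM controls are still required around this step.
import hashlib

PII_FIELDS = {"name", "ssn", "address", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace PII values with salted hash tokens: records stay linkable, not identifiable."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()
            cleaned[key] = digest[:16]  # truncated token stands in for the raw value
        else:
            cleaned[key] = value
    return cleaned

record = {"name": "Jane Doe", "ssn": "123-45-6789", "age": 42, "outcome": "approved"}
print(pseudonymize(record, salt="example-salt"))
```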
Building a Responsible AI Strategy: Best Practices
Establishing a governance mechanism is the best strategy for organizations building responsible and ethical AI practices.
“A governance mechanism tends to be more valuable than an AI framework,” Impink says.
Those governing bodies can come in different forms: a technical board, a council, or even a single person who is deeply embedded in the process.
When it comes to building an ethical AI strategy, “it has to have teeth,” Impink says. “There has to be some consequence.”
This is where frameworks and policies frequently fall short. When there’s no group or individual directly responsible for enforcing policies or ensuring they’re being followed, organizations can easily slide into unethical or irresponsible AI behaviors. Moreover, since AI is developing so rapidly, a policy from as little as six months ago could be inadequate.
Whatever kind of governing body organizations choose, they should be able to do the following:
- Create, implement, and enforce specific guidelines for AI development and usage.
- Establish a consistent decision-making framework for ethical dilemmas.
- Regularly review and update their guidelines as AI develops.
- Designate a person or persons who are responsible for each element of an AI tool.
Discover Harvard’s Ethics of AI Program
Most business leaders feel unprepared to handle the ethically complex situations AI presents. Without training, these leaders leave their businesses exposed to reputational, operational, and litigation risks if their AI tools fail to meet responsible and ethical standards.
Professional & Executive Development’s AI Ethics in Business: Managing Bias and Ethical Usage program is tailor-made for leaders actively engaged in using AI technologies at their organizations. Ideal for leaders with 10 or more years of leadership experience, the course examines how AI is being used in business and explores the big-picture issues that exist.
This program covers topics including: the current macro-economic and cultural AI trends impacting businesses, the diverse ethical AI situations and management strategies for various industries and organizations, and strategies for managing biases (both algorithmic and human).