Episode 44: The risks and ethics of AI - Part 1
- Embedded IT

- Jun 23, 2025
- 3 min read
Artificial intelligence is increasingly being built into tools and systems used across organisations. While the opportunities are significant, so are the risks, and many of these risks stem from how AI models are trained and how they make decisions. This episode explores the most important areas procurement professionals need to understand, from bias to privacy to ethical decision-making.
Before diving into the ethical considerations, it is worth revisiting how organisations can practically approach adoption, which we cover in how to prepare for artificial intelligence.
The challenge of bias in AI systems
One of the biggest risks in any AI project is bias. AI models learn from historical data, which means they often reflect the patterns and prejudices found in that data. If a model has been trained on skewed or unrepresentative information, its recommendations will reflect those same flaws.
Two real-world examples highlight this clearly:
- COMPAS, used in the US correctional system, was designed to predict the likelihood of reoffending. Because the underlying data it learned from was biased, the system falsely flagged Black offenders as high risk at almost double the rate of white offenders.
- Amazon’s 2015 recruitment tool analysed past hiring data to identify ideal candidates. Because the industry had historically hired mostly men, the tool learned to favour male applicants, replicating existing inequality instead of reducing it.
These cases show how critical it is to understand the training data behind any AI tool. Procurement teams must ask direct questions about how the model was built, what data was used, and what steps have been taken to monitor and reduce bias.
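One practical way to surface this kind of skew is to compare error rates across groups, which is essentially how the COMPAS disparity came to light. As a minimal sketch, assuming you can obtain the model’s risk labels alongside actual outcomes (all names and data below are illustrative), a simple audit in Python might compare false positive rates by group:

```python
from collections import defaultdict

def false_positive_rates(records):
    """Share of people who did NOT reoffend but were still labelled
    high risk, broken down by group."""
    counts = defaultdict(lambda: {"fp": 0, "neg": 0})
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:                # actual non-reoffenders only
            counts[group]["neg"] += 1
            if predicted_high_risk:
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["neg"] for g, c in counts.items() if c["neg"]}

# Toy data: (group, model said "high risk", actually reoffended)
sample = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]
print(false_positive_rates(sample))
# {'A': 0.67, 'B': 0.33} -> group A is falsely flagged twice as often
```

Even a crude check like this, run regularly against real outcomes, gives procurement teams concrete evidence to put in front of a supplier.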
Ethical decision-making and the complexity of AI behaviour
Beyond bias, AI raises deeper ethical questions. Some AI-driven systems will one day need to make decisions that humans themselves would struggle with. A common example is the self-driving car scenario. If a collision is unavoidable, how should the AI choose between two harmful outcomes? What ethical rules should govern that choice?
These are not academic questions for anyone procuring tools that involve automation, prediction, or real-world intervention. Organisations must understand how a system has been programmed to respond in ethically sensitive situations, and whether those rules align with their own values and legal responsibilities.
As AI becomes more embedded in public services, transport, healthcare, and safety-critical environments, ethical design will become a core procurement requirement.
Data privacy and the risks of uncontrolled access
Privacy is another major concern when adopting AI tools. Many modern AI assistants, such as Copilot, work across a company’s entire digital environment. If configured incorrectly, they may have access to sensitive information such as HR records or financial data.
Accidental exposure can happen easily if access controls are mismanaged. While major providers often include robust protections and clear configuration options, not all suppliers operate at the same standard. Procurement teams must therefore assess:
- what data the AI can access
- how permissions are applied and controlled
- how the system prevents unintended disclosure
Getting this wrong could expose confidential information to the whole organisation.
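To make the access-control point concrete, here is a minimal sketch of permission-trimmed retrieval, assuming a hypothetical document index with per-item entitlements (the data model and function names are illustrative, not any vendor’s API):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    allowed_groups: set = field(default_factory=set)  # who may read this item

def retrieve_for_user(query, index, user_groups):
    """Filter search results by the requesting user's entitlements BEFORE
    the AI assistant sees them: it cannot leak what it never receives."""
    matches = [d for d in index if query.lower() in d.title.lower()]
    return [d for d in matches if d.allowed_groups & user_groups]

index = [
    Document("Q3 supplier price list", {"procurement"}),
    Document("HR salary review 2025", {"hr"}),   # sensitive: HR group only
]

# A procurement user searching broadly never surfaces the HR record.
print([d.title for d in retrieve_for_user("2025", index, {"procurement"})])  # []
print([d.title for d in retrieve_for_user("salary", index, {"hr"})])
# ['HR salary review 2025']
```

The design point is that trimming happens in the retrieval layer, so a misconfigured prompt or an over-eager assistant cannot disclose what it is never given.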
Preparing contracts to manage AI risk
AI-related risks are manageable if they are understood early and addressed in contract design. Procurement teams have significant influence here. Contracts should clearly set out how suppliers will handle bias, privacy, decision-making processes, ethical considerations, liability, and future model changes.
This is especially important when purchasing tools that involve safety, public services, or any form of automated decision-making that could affect people’s lives.
For organisations looking to strengthen their approach to assessing AI risks and ethics in technology procurement, get in touch.

