Managing Bias: How to Forge a World of Responsible AI

Randy Ginsburg
July 16, 2023
“AI policies must be Human-centered, Accountable, Safe, Transparent, and Ethical”— Simon Greenman, co-founder, Best Practice AI

Bias in artificial intelligence is almost inescapable, but so is AI itself. Mitigating that bias is the challenge. Randy Ginsburg speaks with AI experts about managing the risks of AI bias and how to champion responsible AI adoption.

Artificial intelligence has had a bumpy ride to changing the world. First envisioned at the dawn of computing in the 1940s, AI has weathered numerous ‘AI winters’ and booming summers. Today, it finds itself in a boom period, not only of enormous technological progress but of unprecedented uptake by the wider world.

Generative models like GPT and DALL·E have transformed the way we work, democratising even abstract tasks from creativity to research. Experts once believed that a computer could only be taught routine tasks, or precise ones that could be explained step by step to someone else. Now, language, image, and video generation systems have been deployed across a wide range of industries for goals like autonomous driving and biometric identity verification, while deep learning algorithms are used to predict health risks and enable drug development.

Inside an autonomous car, powered by artificial intelligence.

But these models could not exist without the human-curated datasets on which they are trained. And for the foreseeable future, while AI-generated ‘synthetic data’ creates more problems than it solves, human involvement at the very foundations of AI models, to create those datasets, remains a necessity. Yet that same human involvement is what brings bias into AI models in the first place.

“Systemic racism and biases in AI can result in unequal opportunities in areas from job recruitment to healthcare to safety and security,” says Simon Greenman, a member of the World Economic Forum’s Global AI Council and co-founder of AI governance consultancy Best Practice AI.

Given how powerful artificial intelligence can be, its use in day-to-day life is inevitable. That should force society to confront crucial questions, such as whether it is possible to create truly objective AI systems when those systems are trained on data that inherently contains biases. More importantly, what needs to happen to give society the best chance of interacting with these models safely? How can we ensure that artificial intelligence reflects the best of us, and not the worst?

This discussion has taken centre stage in recent months, with many leading AI researchers calling for a six-month pause on training the most powerful AI systems in order to let society explore the implications of this new, frontier-pushing technology.

Greenman believes that regulation has a key role to play in that discussion. “Governments must implement regulation to ensure that high-risk AI use cases, such as biometric identification or employee performance management, are carried out with the necessary assurance and transparency,” he says, emphasising an AI responsibility framework that he calls ‘HASTE’. “Businesses and AI providers must commit to responsible AI policies that are Human-centered, Accountable, Safe, Transparent, and Ethical.”

This is not about curtailing innovation, he stresses, but about ensuring balance, fairness, and safety in the innovation to come. Developing AI that reflects our best qualities demands meaningful dialogue about how these systems truly function. Only once we understand how AI algorithms operate can society prepare for their full impact.

Researchers at Google-owned AI company DeepMind solved a riddle that had stood for decades: how do proteins fold themselves into their proper shapes? (Getty Images)

However, regulation is not a one-size-fits-all solution. AI models that push technical boundaries to the point where they might pose new risks warrant very different regulation from the narrower AI systems deployed by businesses around the world in areas like recruitment or healthcare.

Part of this remedy, suggests Thomas Forss, CEO of AI training data platform StageZero Technologies, is to require organisations that deploy AI models to disclose when AI software is in use and to acknowledge the important biases it may contain. “The simplest way to spread awareness is to analyse the bias of models and notify the users about these imperfections,” he says.
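
To make that concrete, here is a minimal sketch, in Python, of the kind of bias analysis Forss describes: measuring the gap in positive-prediction rates between demographic groups, a metric often called demographic parity, for a hypothetical hiring model. The model outputs, group labels, and disclosure threshold are illustrative assumptions, not details of StageZero’s platform.

from collections import defaultdict

def demographic_parity(predictions, groups):
    """Return the positive-prediction rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 0 (rejected) or 1 (shortlisted)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs for eight applicants in two groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = demographic_parity(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(f"Shortlisting rate by group: {rates}")

# The 20% threshold is a policy choice for illustration, not a standard.
if gap > 0.2:
    print(f"Warning: selection-rate gap of {gap:.0%} between groups -- notify users.")

A real audit would use far larger samples and several fairness metrics, but even a report this simple is the kind of user notification Forss has in mind.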

In particular, Forss argues that regulation should focus on transparency and user agency. “Models are only as good as the data they are fed, which means that diversity in data will increase in importance,” he notes, arguing that this will require a collective demand for transparency from those developing and deploying AI models. “What the public can do is to demand transparency in what data is fed to AI, and to have the option to opt out of certain AI-powered services.”

Lavina Ramkissoon, Executive Board Member of the Global AI Ethics Institute, echoes the view that the first step towards responsible AI is to acknowledge the risks that come with a paradigm-shifting technology. This means prioritising autonomy, privacy, and data protection, and remaining wary of over-reliance on software that society does not fully understand. From there, Ramkissoon emphasises the importance of initiating open discussions about AI, not only with top researchers and thinkers but with all members of society.

“The potential and challenges of this technology cannot be fully explored until people learn more and understand how to best apply it,” she argues. “There should be conversations like these at dinner tables, families, offices, and governments alike.”

As we navigate the AI revolution, striking a balance between its potential and its pitfalls is paramount. Collaboration, conversation, and education are crucial to driving ethical progress, and, while technology has already proven to be a force to be reckoned with, it’s the responsibility of humans to ensure that AI is a force for good.

Written by
Randy Ginsburg

Randy is the founder of Digital Fashion Daily and Third Wall Creative, a web3 marketing agency. Straddling the worlds of retail and emerging technology, Randy has worked with many companies including nft now, Shopify, and Touchcast to create compelling and educational web3 content. Previously, Randy worked at Bombas, developing the most comfortable socks in the history of feet.
