James Lawn

Responsible AI in Context

AI has been the foundational technology of Responsible Business Intelligence (RBI) for decades - enabling the responsible application of BI and AI by responsible businesses and organisations. But today, when we talk about AI, we are probably referring to the AI that emerged in late 2022 with the launch of OpenAI's ChatGPT. Since then, the pace of AI advancement has been accelerating, as has the level of insight - and noise - about whether we can trust AI. Nevertheless, according to Sarah Murray at the Financial Times, in her FT Moral Money Forum article What does AI mean for a responsible business?, the message for the corporate sector is clear:


"any company claiming to be responsible must implement AI technologies without creating threats to society - or risks to the business itself, and the people who depend on it."

In principle, how a company executes on the responsible use of AI is already well defined. Like the application of any other new technology by business, it is covered by the United Nations Guiding Principles on Business and Human Rights, endorsed by the UN Human Rights Council back in 2011. More recently, in 2018, Dunstan Allison-Hope and Mark Hodge published three papers for BSR describing a human rights-based blueprint for responsible business practice with regard to AI. In their first paper, they outlined 10 beliefs to govern and guide the use of AI:


  1. The development and use of AI should be done in ways that respect all clearly articulated and internationally agreed-upon human rights.

  2. We need both governance and technical solutions for the responsible development and use of AI.

  3. Special attention should be paid to the State-business nexus, especially the use of private sector AI solutions in public service delivery.

  4. All actors in all industries across the AI value chain have responsibilities—including those buying and using AI solutions outside of the technology sector.

  5. Responsible business conduct is about the business models and strategies used by companies to take AI to market, not just the risks and merits of specific AI technologies.

  6. AI brings new and previously unforeseen human rights challenges, and the onus is on businesses to proactively “know and show” how they address the actual or potential adverse impacts of AI.

  7. AI also brings challenges like those previously experienced in other industries, and we can learn from them.

  8. User-centered design should address the experiences and views of people, especially vulnerable populations, who may be negatively impacted by new technologies.

  9. Those whose human rights have been violated—however unintentionally—by the deployment of AI solutions should have access to remedy.

  10. We should stretch business and human rights methodologies to suit the nature and pace of AI development and deployment. For example, we should explore rights-based approaches for maximizing the positive impacts of AI, and experiment with the use of foresight methodologies in human rights due diligence. 


Since the launch of ChatGPT, countries across the world have been racing to draft the rules for AI. Given the pace of AI innovation, agile and iterative regulatory frameworks - such as the UK's proposed framework - are likely to be essential if they are to stand the test of time, just as BSR's blueprint beliefs have. Like other pro-innovation AI frameworks being developed around the world, the UK's aims to give consumers the confidence to use AI products and services, and to give businesses the clarity they need to invest in AI and innovate responsibly:


  • Drive growth and prosperity by making responsible innovation easier and reducing regulatory uncertainty. This will encourage investment in AI and support its adoption throughout the economy, creating jobs and helping everyone to do them more efficiently.

  • Increase public trust in AI by addressing risks and protecting society's fundamental values. Trust is a critical driver for AI adoption, for the public and for government alike. If people do not trust AI, they will be reluctant to use it; such reluctance reduces demand for AI products and hinders innovation.

  • Take a global leadership position in responsible AI. The development of AI technologies can address some of the most pressing global challenges, from climate change to future pandemics. There is also growing international recognition that AI requires new regulatory responses to guide responsible innovation. International governance and regulation, alongside national frameworks, will be critical if we are to maximise opportunities and build trust in AI.


As ever, the devil is in the detail and it's not all plain sailing. In the case of the UK framework, for example, significant consensus has been achieved across most stakeholder groups, but not yet on copyright and AI. The UK Intellectual Property Office convened a working group of copyright holders and AI developers to produce a new code of practice on copyright and AI, one that strikes the right balance between the interests of the two groups, but the group has not yet been able to agree an effective voluntary code.


Consequently, it now falls to the UK government to take the lead on this high-profile matter - a significant challenge to be resolved, and also a sign of an agile and iterative regulatory framework for Responsible AI and Responsible Business Intelligence in action.
