Artificial intelligence (AI) and machine learning are no longer mere buzzwords; organisations are keen to align their initiatives with industry guidelines and recommendations. As we push the boundaries of what AI can achieve, governing bodies must create a framework within which organisations consider the ethical implications not merely as policy, but as a moral obligation to those it affects.
A responsible approach
AI is revolutionising businesses, streamlining operations and reducing costs. However, alongside abundant opportunity, there is risk. The key to responsible innovation lies in carefully balancing ethics with advancement.
The rapid advances in machine learning (ML) and artificial intelligence (AI) have given rise to a host of ethical concerns. For example:
- Gender bias in text-based generative AI tools: research by Isobel Daley, Data Scientist at 6point6, has shown how AI can reinforce traditional gender roles.
- Facial recognition technology has exhibited bias, as reported by studies such as the Gender Shades project, performing unevenly across demographic groups, with wider impacts on society.
- AI-driven hiring tools have faced scrutiny for discriminatory outcomes in hiring practices.
- Predictive policing systems have raised concerns about racial bias; systems built on inadequate or partial data risk embedding a feedback loop of bias in existing police practices.
- Ethical dilemmas in autonomous vehicles have been widely debated.
The proliferation of deepfake technology and its use in crime, widespread AI surveillance, and data privacy breaches have all been the subject of extensive reporting and research over the past few years. What do these examples tell us? They demonstrate that, to integrate AI successfully, organisations must not only comply with existing legislation, government guidance and cyber security principles, but also ensure that responsible and ethical AI is at the heart of their innovations.
Protecting data and people
Whilst businesses increasingly turn to AI, it is essential to highlight the data protection risks inherent in developing machine learning models. These models often require access to substantial datasets, frequently containing sensitive or personal information. Such data is subject to the stringent safeguards of the GDPR and other data protection laws, yet there is often a limited understanding of risk management and legal compliance when using personal data to develop AI systems.
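To make this concrete, the sketch below illustrates one common safeguard: pseudonymising direct identifiers before data is used for model development. The column names and hashing approach are illustrative assumptions, not a prescription, and it is worth noting that pseudonymised data still counts as personal data under the GDPR.

```python
# A minimal sketch of pseudonymising a training dataset before model
# development: direct identifiers are replaced with salted hashes and
# free-text identifying columns are dropped. Column names are illustrative.
# Note: salted hashing is pseudonymisation, not anonymisation, under GDPR.
import hashlib

SALT = b"replace-with-a-secret-salt"  # assumption: stored separately from the data

def pseudonymise(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

records = [
    {"name": "Jane Doe", "email": "jane@example.com", "age": 34, "outcome": 1},
    {"name": "John Roe", "email": "john@example.com", "age": 29, "outcome": 0},
]

# Keep only a pseudonymous key plus the fields the model actually needs.
training_rows = [
    {"subject_id": pseudonymise(r["email"]), "age": r["age"], "outcome": r["outcome"]}
    for r in records
]
print(training_rows)
```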
Beyond protecting data, businesses must also consider the potential impact of employing AI on people. The European Commission succinctly encapsulates these concerns as AI’s “opacity, complexity, bias, a certain degree of unpredictability and partially autonomous behaviour”. As a result, businesses must explore ways in which explainability can be incorporated into models, address the risks of bias and discrimination, and implement safeguards to mitigate the consequences of unexpected outcomes. To achieve this, some of the key principles outlined below should be considered.
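As an illustration of what incorporating explainability can look like in practice, the sketch below applies permutation feature importance, one widely used model-agnostic technique, to a synthetic dataset. The model, data and library choices here are assumptions for demonstration rather than a recommendation.

```python
# A minimal sketch of one explainability technique: permutation feature
# importance, which measures how much a model's test score degrades when
# a single feature is shuffled. Dataset and feature names are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in sorted(zip(feature_names, result.importances_mean),
                         key=lambda pair: -pair[1]):
    print(f"{name}: {mean:.3f}")
```

An importance score near zero suggests the model barely uses that feature; a ranked list like this gives stakeholders a first, if coarse, view into what drives a model's decisions.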
Governance for responsible AI: Key principles
In light of these risks and ethical concerns, businesses should conscientiously explore the AI governance principles and recommendations outlined below. These approaches seek to strike a balance between the imperative to foster innovation and the ethical and legal responsibility to effectively manage risk.
Protection of fundamental rights
AI and data protection law are intertwined because of the ethical concerns around the use of personal data and the potential for bias in AI systems. To overcome these challenges, organisations using AI systems should build in transparency and consent mechanisms, and address the potential impact of automated decision-making on individuals, in line with data protection legislation such as the GDPR. Techniques such as federated learning enable AI models to be trained on decentralised data without exposing sensitive records, safeguarding data privacy.
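As a rough illustration of that pattern, the toy sketch below implements federated averaging: each client trains locally and only model weights, never raw records, are shared with the server. The data, model and number of rounds are placeholders; a production system would add further safeguards such as secure aggregation and differential privacy.

```python
# A minimal sketch of federated averaging (FedAvg): each client trains a
# model locally on its own data, and only the weights are averaged by the
# server. Client data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)      # gradient step
    return w

# Three clients, each holding private data that never leaves the client.
clients = [(rng.normal(size=(100, 4)), rng.integers(0, 2, 100).astype(float))
           for _ in range(3)]

global_weights = np.zeros(4)
for _ in range(10):
    # Each client trains locally; the server only ever sees weight updates.
    local_weights = [local_update(global_weights, X, y) for X, y in clients]
    global_weights = np.mean(local_weights, axis=0)  # federated averaging

print("Global model weights after 10 rounds:", np.round(global_weights, 3))
```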
Accountable, safe, transparent and efficient development and operation
AI systems must be designed to facilitate end-to-end accountability, safety, transparency and efficiency. To accomplish this, regular reviews by humans should be conducted, and the internal workings of an AI system should be transparent to its stakeholders. An organisation should be able to justify the ethical permissibility and the public trustworthiness both of a system's outcomes and of the processes behind its design and use. To ensure integrity and balance, human-in-the-loop features must be built into the system plan.
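A human-in-the-loop feature can be as simple as a confidence gate that routes uncertain predictions to a reviewer instead of actioning them automatically. The sketch below is a minimal illustration of that idea; the threshold, queue and case identifiers are assumptions for demonstration only.

```python
# A minimal sketch of a human-in-the-loop gate: predictions below a
# confidence threshold are escalated to a human reviewer rather than
# actioned automatically. The threshold value is an illustrative choice.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumption: tuned per use case and risk appetite

@dataclass
class ReviewQueue:
    """Collects low-confidence cases for human review and later audit."""
    pending: list = field(default_factory=list)

    def escalate(self, case_id: str, prediction: str, confidence: float) -> None:
        self.pending.append((case_id, prediction, confidence))

def decide(case_id: str, prediction: str, confidence: float,
           queue: ReviewQueue) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-approved: {prediction}"   # still logged for audit
    queue.escalate(case_id, prediction, confidence)
    return "escalated to human reviewer"

queue = ReviewQueue()
print(decide("case-001", "approve", 0.97, queue))
print(decide("case-002", "approve", 0.62, queue))
print("Awaiting human review:", queue.pending)
```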
Compliance with existing legislation
AI systems within an organisation should be evaluated in the context of current national and international laws. In addition, any new laws should be reviewed for applicability.
AI governance policy implementation is in its infancy in the UK and will continue to evolve as the technology takes an increasingly operational role in both the public and private sectors. The government has set out general principles in the National AI Strategy, whilst the Guide to using AI in the public sector, developed in cooperation with the Alan Turing Institute, sets out ways to ensure the ethical and safe operational implementation of AI.
Additionally, there are several international guidelines:
- The EU AI Act
- OECD AI Principles
- Singapore’s Approach to AI Governance
- Microsoft's Responsible AI Standard
- ENISA's publications on Artificial Intelligence (AI)
- IEEE Global Initiative
It is advisable to review organisational AI strategy in the context of the available national guidance, and to refer to the international equivalents to capture as many perspectives as possible. These guidelines are likely to form the basis of future laws.
Fostering collaborative and sustainable innovation
Innovation, sustainability and collaboration are related dimensions of organisational policy and practice. In particular, sustainable innovation is rooted in collaborative effort and stakeholder integration: incorporating external stakeholders' preferences when shaping innovation practices is important. The impact of this collaboration on sustainable innovation can be analysed using a Sustainable Innovation Matrix (SIM) leadership model.
What is the road ahead?
In navigating the path ahead, businesses must embrace responsibility by recognising exclusion, embedding fairness in algorithms and ensuring AI models know when to seek human input. To successfully and safely integrate AI, organisations need a clear strategy that is aligned with existing and anticipated legislation, high ethical standards and robust security principles. This strategy will of course need to evolve in response to the ever-changing AI governance landscape. Those who embrace this sustainable approach will find themselves well-prepared to harness the advantages of AI and reap its benefits.
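To give one concrete sense of what embedding fairness in algorithms can mean, the sketch below computes a demographic parity difference, a common fairness metric comparing selection rates across groups. The data and the choice of metric are illustrative assumptions; real deployments should assess several metrics against their legal and ethical context.

```python
# A minimal sketch of one fairness check: demographic parity difference,
# the gap in positive-prediction rates between two groups. Predictions
# and group labels here are illustrative placeholders.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in selection rates between group 0 and group 1."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # model decisions (e.g. hire = 1)
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0 means equal rates
```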
How can 6point6 help?
At 6point6 we have an established approach to supporting our clients in incorporating AI into their business:
- Conducting an in-depth review of the business to assess the need for implementing AI
- Understanding the organisation's goals and objectives and aligning them with the requirements of an AI implementation
- Conducting an in-depth review of the pros and cons of these changes to the business, in particular analysing the effectiveness of incorporating AI
- Preparing employee capabilities for technical change
- Building, integrating and testing newly built systems
Getting started
In today’s world, AI can solve a range of issues and improve bottom lines across numerous industries. It can also improve the efficiency of a business by reducing the time and effort required to complete tasks, freeing up employees to focus on more complex and innovative aspects of the business.
If you are thinking of incorporating AI into your business, contact us to find out more about the proven 6point6 approach to incorporating AI in a responsible manner.
Aditi Ramachandran
Aditi manages the delivery of security assurance services, providing governance, risk, compliance, and information assurance to clients in the public sector. Having worked in cyber security for over a decade, Aditi has vast experience in information risk management and eDiscovery, with expertise forged by years spent working across a breadth of sectors including aviation, energy, insurance, financial services, and retail.