
EU AI Act prompts UK firms to adapt for seamless compliance

Thu, 1st Aug 2024

As the EU Artificial Intelligence (AI) Act comes into effect on 1 August 2024, UK-based companies that sell or deploy AI systems within the European Union must adapt swiftly to remain compliant with the new regulations.

Luke Dash, CEO of ISMS.online, and Will Hiley, Head of Solutioning at Syniti, have shared insights on the significance and implications of this landmark legislation.

Luke Dash highlights the transformative potential of the EU AI Act, describing it as a watershed moment in AI regulation. "The EU AI Act is a pivotal moment in AI regulation and any UK-based companies wishing to sell or install AI services in the EU must follow the rules. It will have far-reaching consequences across sectors, from technology and healthcare to finance and even individual business functions crucial to operations," he notes. He further emphasises that this regulation will soon be mirrored globally, influencing regions like the United States, which is considering the Algorithmic Accountability Act, and China, which has already issued AI ethics guidelines.

The EU AI Act imposes a range of compliance requirements on companies, starting with the classification of their AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. This categorisation helps businesses determine which systems need closer scrutiny and adjustment to meet transparency mandates, and which fall under outright bans on dangerous applications such as predictive policing. Generative AI models such as OpenAI's GPT-4 and Google Gemini will also come under increased examination.
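
To make the categorisation concrete, here is a minimal illustrative sketch, in Python, of how a firm might keep an internal inventory of its AI systems against the Act's four risk tiers. The system names, the example tier assignments, and the idea of an inventory script are hypothetical illustrations for this article, not an official classification tool or legal advice.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited uses, e.g. social scoring
    HIGH = "high"                  # heavily regulated, e.g. recruitment screening
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # largely unregulated, e.g. spam filters

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical examples of how a firm might log its own systems.
inventory = [
    AISystemRecord("support-chatbot", "customer service assistant", RiskTier.LIMITED),
    AISystemRecord("cv-screening", "candidate shortlisting", RiskTier.HIGH),
    AISystemRecord("product-recommender", "on-site recommendations", RiskTier.MINIMAL),
]

# Systems flagged for closer scrutiny under the Act.
needs_review = [s for s in inventory if s.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)]
for system in needs_review:
    print(f"Review required: {system.name} ({system.tier.value} risk)")
```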

Luke Dash suggests that businesses should not only adhere to the EU AI Act but also utilise the internationally recognised ISO 42001 standard for AI management systems. "ISO 42001 is more than just a compliance checklist. It represents a fundamental shift in how businesses approach AI, recognising that responsible AI is not a hindrance to success but a catalyst for sustainable growth, innovation, and positive change," he explains. By embedding ethical considerations into their AI strategies, companies can foster trust, mitigate risks, and unlock the full potential of AI technologies.

Expanding on the topic, Will Hiley of Syniti addresses the challenges related to data quality, a fundamental aspect of successful AI implementations. He warns against overlooking the quality of data fed into AI systems, stating, "We could easily tie ourselves up in knots about the pros and cons of the Act, but the reality is AI is only as good as the data it uses: give it biased, inaccurate data, and it will give you biased, inaccurate results."

Hiley points out the critical nature of data quality and governance, especially given findings from a recent HFS survey that cites improving operational data availability as the foremost challenge in implementing AI technologies. "Without a real focus on data quality and governance, AI simply won't be able to deliver on its promise," he asserts. Despite concerns that the EU AI Act might stifle innovation due to legal complexities, he sees the regulation as an opportunity for businesses to prioritise data quality and, consequently, enhance their AI projects.
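
Hiley's point about data quality can be illustrated with a short sketch, assuming tabular operational data held in a pandas DataFrame. The column names, example records, and checks are hypothetical; a real data governance programme covers far more ground than this, but even basic profiling of duplicates and missing values is a reasonable first step before data is used to train or drive an AI system.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, key_columns: list[str]) -> dict:
    """Very simple data quality checks of the kind a governance process
    might run before data feeds an AI system."""
    return {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df[key_columns].isna().sum().to_dict(),
    }

# Hypothetical operational data with a few obvious problems.
orders = pd.DataFrame({
    "customer_id": [101, 102, 102, None],
    "order_value": [49.99, 120.00, 120.00, 15.50],
    "country": ["UK", "DE", "DE", None],
})

report = basic_quality_report(orders, key_columns=["customer_id", "country"])
print(report)
# e.g. {'row_count': 4, 'duplicate_rows': 1,
#       'missing_by_column': {'customer_id': 1, 'country': 1}}
```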

Businesses across the UK and EU find themselves at a crucial juncture. While the EU AI Act introduces stringent requirements that may initially seem daunting, it also offers a structured approach to harnessing AI technologies responsibly and effectively. By aligning their practices with both the EU AI Act and ISO 42001, companies can ensure compliance, mitigate risks, and pave the way for innovative and ethical AI deployments. Attention to data quality and governance, as highlighted by experts, will be vital in realising the full potential of AI, driving sustainable growth, and maintaining consumer trust in an increasingly regulated landscape.
