Unilever Preps For EU AI Act


EUROPE | The EU AI Act, the first comprehensive AI regulation, comes into force on 1 August 2024 to govern the risks of AI systems and protect the fundamental rights of EU citizens.

The EU AI Act is the world’s first comprehensive AI law aimed at mitigating the risks of AI use. It is widely expected to become the blueprint for similar AI regulatory regimes developing worldwide.

Unilever’s comprehensive AI assurance process reviews proposed AI use cases and projects to identify, manage and mitigate foreseeable risks. Potential use cases involving new AI systems are assessed by a cross-functional team of subject matter experts, ensuring all relevant AI risks are considered before deployment.

Responsible use of AI has been a priority at Unilever for years. Its AI assurance journey began in 2019, when, as the global debate on digital transparency and privacy escalated, the business started reviewing its approach to data and AI ethics.

In 2021, Unilever strengthened its business principles to commit to the responsible, ethical, and fair use of data. Over the following years, it built the AI assurance process, scaling this up for the generative AI boom.

“Regulatory compliance is a key component of our Responsible AI Framework, and we are proactively monitoring and addressing upcoming legal developments that may impact Unilever,” said Christine Lee, Chief Privacy Officer.

“To ensure regulatory compliance, a cross-functional team of subject matter experts, including our partners at Holistic AI, assesses potential new projects using AI systems at Unilever. They review the project’s needs, manage risks, and suggest improvements or mitigation strategies that might be needed before deployment, as well as any ongoing monitoring.”

This triage system is further supported by Unilever’s Responsible AI Principles, which reaffirm its commitment to developing, deploying, and using AI technologies in accordance with Unilever’s Code of Business Principles, legal and regulatory requirements, and UN standards.

This process is now fully integrated across the organisation, and the team recently reached its 150 ‘projects assured’ milestone.

“Taking proof of concept projects using AI systems through a thorough assurance process at an early stage is enabling us to be more innovative and fully deploy trustworthy AI systems more quickly,” said Andy Hill, Chief Data Officer.

With over 500 AI systems in operation globally, ranging from AI-driven R&D that enables faster innovation cycles to machine-activated stock control and generative AI-powered consumer experiences in the marketing space, a Responsible AI Framework that governs the development, deployment and usage of AI is critical for Unilever.

“We see potential in using AI to drive productivity, creativity, and growth at Unilever. With augmentation, we support our teams through learning, enhanced decision-making, and new experiences. By creating autonomous systems, we believe AI can drive productivity within our business. As the deployments of these systems grow, we cannot underestimate the importance of making sure they work responsibly.”

Having full visibility of Unilever’s AI estate has ensured effective risk management and made it easier to keep pace with new global legal frameworks such as the EU AI Act and the US White House Executive Order on AI.

Hill mentioned that as demand for AI systems, technology, and capabilities continues to grow, regulations are expected to evolve as different countries discuss their own approaches to governing AI.

Unilever will continue to stay in step with legal developments that affect its business and brands – from copyright ownership of AI-generated materials to data privacy laws and advertising regulations.

“I’m very excited to see where the integration of digital innovation and the latest technologies can take us. However, we must also ensure that our data is well governed, effective, and responsible whenever Unilever uses AI.”
