Quo Vadis, GenAI? Innovating Responsibly in the AI Era
PwC Luxembourg | 2:29 pm, 11th December
THE PACE OF AI INNOVATION CONTINUES TO ACCELERATE
Major developments this past month demonstrate the rapid progress of generative AI (GenAI). OpenAI has introduced GPT-4 Turbo, which increases the speed and reduces the cost of its most advanced AI tools. Anthropic, a US-based large language model (LLM) developer, introduced Claude 2.1, narrowing the performance gap with OpenAI by offering a larger context window. Aleph Alpha, a German startup working on sovereign GenAI, closed one of the largest funding rounds ever raised by a European technology startup. The capabilities of ChatGPT and its peers allow them to generate written content, answer queries, hold conversations, summarise texts and develop code with increasing coherence. However, these advances have sparked intense debates in Europe over suitable governance strategies as the finalisation of the EU AI Act approaches. Discussions extend even to which terminology should be used - foundation models, GenAI, general-purpose AI or large language models - indicating that much deliberation is still needed before a consensus can be reached.
In parallel to the European discussions on regulation, US President Biden's recent Executive Order on AI emphasises enhanced algorithmic transparency, accountability and fairness as a means to address similar concerns stateside. Amidst these regulatory dialogues taking place globally, businesses are tasked with identifying an ethical yet innovative course forward.
BALANCING INNOVATION AND SENSIBLE REGULATION
With the finalisation of the European AI Act looming large, investments surging and impatience around adoption growing, organisations face pressure from all sides. The key is striking the right equilibrium between enabling innovation to drive business value and ensuring ethics, safety, and responsible use. The European AI Act moves toward providing guidelines but still leaves many questions unanswered, some of which concern how the rules will be implemented across the vast array of AI systems and technologies. The law also foresees a multitude of instruments to facilitate AI innovation in Europe, including regulatory sandboxes as well as research and innovation funding. However, EU member states are not necessarily limited to the regulation itself: it is important to understand how they will respond to the act and whether additional innovation policies will emerge at the national level.
OPERATIONALISING AI RESPONSIBLY
Successfully leveraging AI in business requires not just innovation but responsible governance and deployment:
- Securing and Monitoring GenAI Systems
As interest shifts from ChatGPT to enterprise applications, security and governance become critical. To prevent data exposure or system misuse as GenAI is deployed for content creation, query answering and task automation, companies must implement access controls, usage monitoring and cybersecurity measures (a minimal illustration of such a gateway follows this section).
- Enabling On-Premises Deployment
While the most powerful models require cloud infrastructure, smaller, more tailored models can be deployed locally. This allows organisations to process sensitive data on-site and integrate GenAI into their legacy infrastructure. Rather than compromising data security to utilise GenAI, companies can deploy increasingly powerful AI models responsibly on-premises (a minimal local-deployment sketch also follows this section).
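To make the point on securing and monitoring GenAI systems concrete, the sketch below shows one possible shape of an internal GenAI gateway that enforces role-based access control and writes an audit trail before any prompt reaches a model. It is a minimal sketch only: the roles, the ACCESS_POLICY mapping and the call_llm placeholder are hypothetical and would be replaced by an organisation's own identity provider, policy engine and model endpoint.

```python
# Minimal sketch of an internal GenAI gateway: every request is checked against
# a role-based allow-list and logged before it reaches the model. Role names,
# ACCESS_POLICY and call_llm are illustrative placeholders, not a product API.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai_audit")

# Hypothetical role-based policy: which roles may use which GenAI capabilities.
ACCESS_POLICY = {
    "marketing": {"content_generation"},
    "support": {"query_answering"},
    "engineering": {"content_generation", "query_answering", "code_assist"},
}


def call_llm(prompt: str) -> str:
    """Placeholder for the actual model call (cloud API or on-premises endpoint)."""
    return f"[model response to: {prompt[:40]}...]"


def handle_request(user: str, role: str, capability: str, prompt: str) -> str:
    """Enforce access control and record an audit trail for every GenAI request."""
    allowed = ACCESS_POLICY.get(role, set())
    if capability not in allowed:
        audit_log.warning("DENIED user=%s role=%s capability=%s", user, role, capability)
        raise PermissionError(f"Role '{role}' may not use '{capability}'")

    audit_log.info(
        "ALLOWED user=%s role=%s capability=%s at=%s prompt_chars=%d",
        user, role, capability, datetime.now(timezone.utc).isoformat(), len(prompt),
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(handle_request("a.smith", "support", "query_answering", "Summarise ticket status"))
```

Routing all GenAI traffic through a single gateway of this kind is one way to keep usage monitoring and cybersecurity controls in one auditable place rather than scattered across individual applications.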
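For the on-premises point, the following sketch illustrates, assuming the open-source Hugging Face transformers library is available on internal hardware, how a small open-weight model can be run entirely within the organisation's own infrastructure so that prompts containing sensitive data never leave it. The model name distilgpt2 is purely illustrative; in practice a larger open-weight model would be mirrored internally.

```python
# Minimal sketch of running a small, locally hosted model with the Hugging Face
# `transformers` library, so prompts with sensitive data stay on-premises.
from transformers import pipeline

# Weights are downloaded once and cached locally; setting HF_HUB_OFFLINE=1
# afterwards prevents any further outbound calls to the model hub.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Summary of the internal credit policy update:"
result = generator(prompt, max_new_tokens=60, do_sample=False)

print(result[0]["generated_text"])
```

Smaller models of this kind trade some capability for data control and easier integration with legacy systems, which is often the decisive factor for regulated organisations.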
UNDERSTANDING THE REGULATORY TRAJECTORY
The EU AI Act is a sweeping governance framework that can directly impact a multitude of organisations. Of particular relevance are the stipulations and restrictions around high-risk AI applications, which affect not only the developers of AI systems but any organisation that uses those tools. The current negotiations around the EU AI Act centre on specific rules defining what constitutes a high-risk AI system and, more importantly, what does not, with industry groups trying to lower the regulatory burden. There is a strong push to finalise the regulation in the coming weeks. This urgency reflects growing global momentum toward managing risks and providing oversight for the deployment of AI, with the EU seeking the driver's seat in this regard. In the US, President Biden's AI Executive Order calls for an AI Bill of Rights to address issues like algorithmic bias, data privacy and explainability, signalling greater US scrutiny of AI development going forward. Meanwhile, Canada recently unveiled similar plans to establish an AI and data commissioner's office focused on compliance. And China has already introduced governance policies centred on ethics, transparency, and risk mitigation in AI systems, with a stronger focus on social cohesion.
The evolving regulatory environment is a call to action for business leaders to adopt a strategic approach to integrating AI into their organisations, ensuring alignment with regulatory demands. Understanding the potential impacts across these areas allows proper governance to be instituted through board policies and standard operating procedures.
MOVING BEYOND COMPLIANCE WITH AN AI STRATEGY
We are now on the brink of a generational (societal, economic, and technological) change driven by AI. To reap its benefits, a proactive response to the impending regulation is paramount, and organisations need to find the right approach towards an AI strategy. The PwC Responsible AI (RAI) Framework proposes a set of strategies, controls, responsible approaches, and core AI practices to help us and our clients navigate the landscape of GenAI at an enterprise level. It takes stock of global regulatory trajectories and supports a proactive self-assessment of risk factors paired with appropriate controls, enabling organisations to use GenAI in a responsible manner.
The PwC RAI Framework is augmented by a specific GenAI toolkit that offers a forward-looking governance framework for organisations, covering the range from strategic AI governance planning to pragmatic operational execution. It includes diagnostic surveys to benchmark against standards, roadmaps for integrating AI responsibly, and operational tools for control, bias testing, and ensuring model integrity. This comprehensive approach facilitates seamless, ethical AI adoption, preparing organisations for the dynamic regulatory future, including the EU AI Act.
KEY TAKEAWAYS
1. Cultivate a Culture Empowering Employees to Innovate Safely with AI: Fostering an environment where employees are encouraged and equipped to experiment with AI is crucial. This involves providing the necessary training, resources, and ethical guidelines to ensure that innovation is pursued safely and responsibly, aligning with the broader goals of the organisation.
2. Rapid AI Innovation Meets Regulatory Scrutiny: The acceleration of generative AI technologies, such as OpenAI's GPT-4 Turbo and Anthropic's Claude 2.1, coincides with intense regulatory discussions, particularly regarding the EU AI Act. This situation underscores a crucial global effort to harmonise ground-breaking AI advancements with comprehensive governance frameworks.
3. Prioritising Responsible AI Deployment Amid Regulatory Changes: When integrating AI into business processes, it is vital to do so with a strategy that places ethics and explainability at the forefront. Tools such as the PwC RAI Framework become essential in aiding organisations to navigate these complexities, enabling ethical AI integration and preparedness for the dynamic regulatory landscape, including the forthcoming EU AI Act.
Saharnaz Dilmaghani, Senior Associate Artificial Intelligence & Data Science, PwC Luxembourg
Thierry Kremser, Deputy Advisory and Data & AI Leader, PwC Luxembourg