Since the momentum that proceeded the launch of Chat GPT in 2022, culminating with the rise and widespread adoption of Generative Ai by organizations of all type and size, Fujitsu has leveraged on its engineering and R&D capability to unveil new services and platform to support organizations and businesses on their AI adoption journey.
“With businesses and citizens reacting with enthusiasm and eager to embrace AI, achieving trust in AI systems is a key to sustaining the current trend as well as empowering them to confidently explore, implement and benefit from the potential offered by AI” says Moussa OUEDRAOGO, Head of Enterprise & Cybersecurity for Fujitsu Luxembourg & Belgium.
Join us at our upcoming conference "Embracing AI & Data Space" on May 16th to learn more about building trust in generative AI and navigating the security landscape.
Here, we delve into how Fujitsu Kozuchi platform addresses as a priority, the inherent shortcomings pertaining to AI systems. AI trustworthiness for instance is addressed by intertwining explanability in AI model output and bias detection in LLM models ensure the output of the generative AI is achieved through a fair and balanced reasoning process. Bias from an AI could result from the data used during the training phase.
Let us consider the case of a LLM being trained to serve as loan assistant, with the goal of deciding on whether to grant loan to applicants. Assuming the model is trained on historical loan data with decision predominantly made consciously or unconsciously with bias, the model will replicate the same trend to the detriment of certain social classes, postcode, gender, or race. To account for this and allow the systematic identification of bias in data, (beside the LLM bias covered by Kozuchi), Fujitsu Belux team has achieved a complementary solution. The model also offers the possibility to mitigate identified bias within the data set, thus contributing to an AI output that is fairer. Our model could be of value for business sectors such as Insurance for policy quotations, banking and finance for decision making on loan application or investment, the healthcare sector on treatment plan, and for HR services in their quest for fair and less subjective salary offer and promotion.
The current state of play reveals that, while most organizations in Belux region and beyond, regardless of their size (big organizations and SMEs) and their verticals (retails, manufacturing, healthcare, Insurance, finance etc.) are on a path to accelerate the adoption and testing of AI model in response to new business challenges and/ or for creating business value, a clear strategy integrating security and privacy considerations is still to emerge. One of the goals of our upcoming conference on AI and security next 16 May is to help shed some light on this and provide some guidance to local organizations and businesses on securely implementing a generative AI strategy.
The extent of security required from a given organization should cover the three main phases of the generative AI life cycle: training phase, deployment and run phase on production.
For each of those phases the CISO, who must be involved in the early phase of discussions around the business value of adopting the Generative Ai should perform some risk assessment. Such an assessment should account for the deployment mode adopted:
- On prems/local deployment with the LLM trained on local data for internal uses. Even in this case, attention should be paid on potential external sources that may be used for achieving a RAG (Retrieval Augmented Generation). RAG is used by Generative AI developers to make LLM outputs more reliable and aligned to a given context. Data used by the LLM should be classified with required controls set to prevent any adverse interaction either from internal users, including developers, administrators and external adversaries who my take advantage of any connectivity with external sources.
- As a pay per use model through third-party service provider. In this case the best practices related to Cloud security should apply atop the specific controls required for risks related to prompt injection or manipulation which aim at impairing the model behavior or exposing the data it has access to.
Because the integration of the LLM with local system could be source of risks, security testing should set it amongst target assets. This would include vulnerability scanning but also security audits to ascertain the security of key features such as libraries and Plugins.
Fujitsu due diligence assessment of generative AI uses structured assurance cases as model to support claim of security with respect to a given risk or an overall guideline such as the latest LLM top 10 OWASP along with supporting evidence. For a CISO this ultimately serve as a compass to make recommendations for reinforcing the security of the LLM and allow internal services and user to securely harness the power of generative AI. Evidence of mitigation of risks through implementation of relevant controls are available to serve future need including audits.
“Fujitsu's BeLux team is really driving home the importance of generative AI as a game-changer for cybersecurity. It's not just about protecting organizations from attacks, but also empowering security teams to make better decisions and respond to threats more effectively” explains Moussa OUEDRAOGO.
In that vein, Fujitsu has in its service diverse AI based solution to:
- Determine/ predict the scope of propagation of an incident within interdependent services/ system.
- Proposed actionable remediation plan (capable of being triggered if required) after analysing the incident’s Indicators of Compromise.
While this is of relevance to most organizations, it is particularly timely for NIS2 regulated organisations.
Don't miss out! Join us on May 16th for "Embracing AI & Data Space" to gain a deeper understanding of generative AI security and its potential for your organization: https://my.weezevent.com/embracing-ai-data-space
Subscribe to our Newsletters
Stay up to date with our latest news
more news
The pivotal role of cybersecurity in the Digital Equilibrium
by Excellium Services I 11:19 am, 14th November
In the intricate dance of a digital ecosystem, achieving Digital Equilibrium is akin to balancing a complex, multifaceted scale. At the heart of maintaining this delicate balance lies cybersecurity, a fundamental binder ensuring that every component operates harmoniously, efficiently, and securely.
"Small is Beautiful": Post Cyberforce, Wins GSMA Telecommunication-ISAC Award
by Kamel Amroune I 7:32 am, 28th February
Embodying the principle that "Small is Beautiful," Post Cyberforce, under the exemplary leadership of Mohamed Ourdane, and Alexandre De Oliveira for his investment in GSMA T-ISAC have been honored with the prestigious GSMA Telecommunication-ISAC awards.
load more