Cybersecurity implications of AI

Excellium Services | 10:23 am, 11th April

The use of AI in our daily lives, whether professional or personal, is increasingly becoming the norm. As with any new technology, the cybersecurity issues are significant and must be addressed now: without a framework for governing these data processing models, abuses are inevitable.

AI at the service of cybersecurity solutions… and threat actors 

AI models empower cybersecurity solutions to strengthen both proactive and reactive measures, efficiently processing large datasets to identify abnormal behavior and detect malicious activity. By automating monotonous security tasks, AI frees resources for core business functions. Its capacity to process extensive datasets rapidly and precisely accelerates threat identification beyond human capabilities. This swift detection minimizes response times to security incidents and helps reduce associated cybersecurity costs.
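To make the idea of "identifying abnormal behavior in large datasets" concrete, here is a deliberately minimal sketch, not a production detector: real solutions use far richer models, but the underlying principle is the same. The example, including the data and the z-score threshold, is hypothetical; it models "normal" activity statistically and flags points that deviate strongly from it.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=3.0):
    """Return the indices of data points whose z-score exceeds the
    threshold -- a toy illustration of the statistical idea behind
    AI-driven anomaly detection: model normal behavior, flag outliers."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, x in enumerate(event_counts)
            if abs(x - mu) / sigma > threshold]

# Hypothetical hourly login counts; the spike at index 5 is abnormal.
logins = [12, 14, 11, 13, 12, 250, 13, 12, 11, 14, 13, 12]
print(flag_anomalies(logins))  # → [5]
```

In practice, machine-learning detectors learn a multidimensional notion of "normal" from historical telemetry rather than a single statistic, which is what lets them outpace manual review at scale.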

On the other hand, threat actors leverage AI to enhance their cyber-attacks, employing sophisticated techniques for greater efficiency and evasiveness. One example is AI-driven phishing, where machine learning models analyze vast datasets to craft highly personalized and convincing phishing emails, which makes such attacks difficult for traditional defenses to detect.


Additionally, AI is employed in adversarial machine learning, enabling attackers to manipulate models and evade detection by creating malicious inputs that mimic legitimate data. These tactics showcase the growing sophistication of threat actors who exploit AI capabilities to refine their attack methodologies and increase the potency of their malicious campaigns. 
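The evasion tactic described above can be sketched in a few lines. This is a toy example under stated assumptions: a hypothetical linear "malware score" detector and a gradient-sign perturbation in the spirit of attacks such as FGSM. The weights, features, and step size are all invented for illustration; real adversarial attacks target far more complex models.

```python
def score(weights, features, bias=0.0):
    """Hypothetical linear detector: a positive score means 'malicious'."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def evade(weights, features, step=0.1):
    """Nudge each feature slightly against the sign of its weight,
    lowering the detection score -- the core idea behind
    gradient-based evasion of ML detectors."""
    return [x - step * (1 if w > 0 else -1 if w < 0 else 0)
            for w, x in zip(weights, features)]

weights = [2.0, -1.0, 3.0]        # made-up detector weights
sample = [0.6, 0.2, 0.4]          # initially flagged: score = 2.2

adv = sample
while score(weights, adv) > 0:    # perturb until the detector is fooled
    adv = evade(weights, adv)

print(score(weights, sample) > 0)  # → True  (original is detected)
print(score(weights, adv) > 0)     # → False (perturbed input evades)
```

Each small perturbation keeps the input close to legitimate data while pushing it across the decision boundary, which is exactly why such inputs "mimic legitimate data" from the defender's point of view.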

"The use of AI must be supervised at the company level: the organization must support its users in controlled use by integrating AI cybersecurity issues into its awareness program."

Framing the use of AI in your organization 

The primary risks associated with AI models include biases, where algorithms may reflect and perpetuate existing societal prejudices, and hallucinations, wherein models generate misleading or false outputs. Biases can lead to unfair decisions, reinforcing social inequities, while hallucinations may result in AI systems producing inaccurate or deceptive information, impacting the reliability and trustworthiness of the models. These risks highlight the importance of rigorous testing, ethical considerations, and continuous monitoring to mitigate potential negative consequences stemming from AI model deployment.  

The need to control and master the use of AI, the creation of models, and their underlying components is essential. Within the European Union, this is reflected in the forthcoming AI legislation (the "AI Act"), which aims to promote the development and adoption, by both private and public actors, of safe and trustworthy AI throughout the EU single market. Standards are also adapting: December 2023 saw the publication of the first ISO standard on the subject, ISO/IEC 42001, which specifies the requirements for implementing an AI management system.

Therefore, educating and raising awareness on the use of AI is crucial to foster responsible and ethical AI practices. With proper education, individuals and organizations can understand the potential biases, risks, and societal impacts associated with AI technologies. Awareness helps users make informed decisions, encourages ethical AI development, and mitigates the inadvertent perpetuation of biases. Moreover, an educated user base contributes to a more inclusive and transparent AI landscape, fostering collaboration and shared responsibility. In this way, education and awareness are key elements in ensuring the responsible and beneficial integration of AI technologies into our society.  

Author: Johann ALESSANDRONI, Information Security Governance Team Leader, Excellium Services
