Generative AI

Michaël Renotte | 9:30 am, 28th August

AI can be trained by feeding content into generative adversarial networks, transformers, and variational autoencoders to create new content that’s meaningful to people. 

Generative AI is a type of semi-supervised machine learning that uses neural networks to create new content or interpret complex signal information. By training the models on a large amount of content, they can be made to generate new works similar to what people would create. 

The uses for generative AI go beyond creating imagery. It could help businesses with predictive maintenance or improving cybersecurity analytics. It could help generate new ideas for drugs or assist in quality analysis and medical diagnoses. 

AI is picking up steam, with more organizations adopting it in 2023. According to a survey by Info-Tech Research Group, AI will receive the most net new investment by organizations by the end of 2023: while 35% of organizations say they have already invested in it, 44% say they plan to invest in it next year. With a 9% gap between committed and planned investment, AI leads all technologies, followed by data lakes and data mesh at 5% each. 

Generative AI can play a role in enhancing the top use cases of AI. Many businesses struggle with making use of unstructured data for analysis. Generative AI can interpret that data and transform it into structured data, making it usable not only for analytics but also as training data for robotic process automation (RPA). Generative AI can also detect anomalies in network and application behavior, helping security systems identify threats. 
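To make the anomaly-detection idea concrete, here is a minimal sketch (not tied to any product named in this article) that flags network measurements deviating sharply from a learned baseline. The function name, traffic numbers, and threshold are all hypothetical; production systems use far richer models.

```python
from statistics import mean, stdev

def zscore_flags(baseline, new_points, threshold=3.0):
    """Flag new values whose z-score against the baseline exceeds the threshold."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in new_points if abs(x - mu) / sigma > threshold]

# Hypothetical requests-per-minute history for a service under normal load.
baseline = [120, 118, 125, 130, 122, 119, 121, 124, 117]

# A value near the baseline passes; a sudden burst is flagged for review.
print(zscore_flags(baseline, [123, 900]))  # -> [900]
```

Fitting the baseline on a window of known-normal traffic, rather than on the data being scored, keeps a single large outlier from inflating the standard deviation and masking itself.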


Data needs to be collected and synthesized 

The need to manage unstructured data 

As organizations collect ever more data in the hope of becoming more data-driven, they are struggling to manage unstructured data. Unstructured data makes up the majority of data collected, covering everything from written communications to images to presentation decks: essentially, everything that's not in a database or spreadsheet. Without AI to make sense of it, businesses can't search this information or turn it into actionable insights. 

Not enough data 

In some areas of business, the problem is a lack of the specific data needed to train an algorithm. The medical field often faces this problem because of the sensitivity of patient data. One solution is to create synthetic data: data generated by AI that closely approximates real examples. Synthetic data is already being used to train various AI algorithms, from models that detect brain tumors on MRI scans to self-driving cars. 
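As a simplified illustration of the idea, synthetic records can be sampled from distributions fitted to aggregate statistics of the real data, so no individual patient record is exposed. The field names and numbers below are hypothetical, and real synthetic-data generators (GANs, VAEs) model far more structure than independent normal distributions.

```python
import random

def synthesize_records(n, stats, seed=0):
    """Generate n synthetic records by sampling each field from a normal
    distribution matching the real data's (mean, standard deviation)."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [
        {field: round(rng.gauss(mu, sigma), 1) for field, (mu, sigma) in stats.items()}
        for _ in range(n)
    ]

# Hypothetical aggregate statistics per field: (mean, standard deviation).
vitals = {"heart_rate": (72, 8), "systolic_bp": (120, 12)}
records = synthesize_records(100, vitals)
print(records[0])
```

The synthetic records share the real data's summary statistics, which is often enough to pretrain or augment a model before fine-tuning on the scarce real data.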

Desire to compete 

Organizations are increasing their spending on AI because of the potential benefits it offers. It can enable workers to do more work faster, reducing costs by automating more tasks. It can help discover new products more quickly and increase revenues. With more commercialized options available to deploy generative AI and more organizations investing, those that don’t invest may fall behind. 

Every creative field will be impacted by AI 

Speech recognition 

OpenAI's stated mission is to pursue artificial general intelligence, so it has several types of AI projects in progress. On September 21, 2022, it open-sourced Whisper, a neural net that recognizes English speech with near human-level robustness and accuracy, and it encourages developers to use Whisper to add voice interfaces to their applications (Source: OpenAI, 2022).  

Fraud detection  

Identifying a user through digital data, such as the small details of how they move a mouse or what network their smartphone is connected to at the moment, is now possible with AI-developed fraud prevention algorithms. These small signals can be added up by an algorithm that predicts the risk of fraud on any given transaction and flags it for further inspection by a human (Source: IT Business Edge, 2022). 
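The "adding up small details" step can be sketched as a weighted risk score: each behavioral signal contributes a learned weight, and transactions above a threshold are routed to a human reviewer. The signal names, weights, and threshold here are invented for illustration; real systems learn them from labeled fraud data.

```python
def fraud_risk(signal_weights, observed):
    """Sum the weights of the behavioral signals present in a transaction."""
    return sum(w for name, w in signal_weights.items() if observed.get(name))

# Hypothetical weights a model might have learned for each signal.
WEIGHTS = {
    "new_device": 0.3,
    "unusual_mouse_path": 0.25,
    "unfamiliar_network": 0.2,
    "odd_hour": 0.1,
}
THRESHOLD = 0.5  # above this, route the transaction to a human reviewer

txn = {"new_device": True, "unfamiliar_network": True, "odd_hour": True}
score = fraud_risk(WEIGHTS, txn)
print(score > THRESHOLD)  # -> True: flagged for inspection
```

No single signal exceeds the threshold on its own; it is the combination of weak signals that pushes a transaction over the line, which is exactly why humans find this kind of fraud hard to spot manually.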

Software development 

AI-powered tools that write code will help non-technical workers create new applications. GENIO is one example, described as a low-code software development program that can generate code for both modern web architectures and back-office solutions.  

Video generation 
Meta unveiled its Make-A-Video system in September 2022, which lets users type words describing a scene to generate a video several seconds long that matches the prompt. In a Facebook post, Meta CEO Mark Zuckerberg wrote, "It's much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they'll change over time." Currently, no one outside of Meta has access to the model (Source: The Verge, Sept. 2022). 

Generative AI causes controversy 

When considering the risks of adopting AI, consider some of these AI controversies covered by the media in 2022. How could your organization avoid receiving unwanted attention like this? In April, OpenAI released its DALL-E 2 image generator, which produced biases that reinforce stereotypes: for example, women were more likely to be depicted as nurses, and men were more likely to be depicted as builders. OpenAI released a fix to improve image diversity, but users then found it was less accurate at turning their prompts into useful images (Source: NBC News, 2022). In June, Google engineer Blake Lemoine claimed that the chatbot LaMDA was sentient and published an existential conversation he had with the bot to the web. He was later fired (Source: Washington Post, 2022). In August, Jason Allen won the Colorado State Fair's fine arts competition with a piece generated using the AI image generator Midjourney, titled Théâtre d'Opéra Spatial, stoking controversy among artists (Source: The New York Times, 2022). 

AI governance 

New legislation in various jurisdictions, including Europe, is defining new rules around when and how AI can be applied. Organizations using AI in situations that governments deem high risk will be required to do more to mitigate those risks. Yet today, 55% of organizations are doing nothing to govern AI. As more organizations invest in AI and apply it to more decision-making processes, IT leaders should put governance structures in place before new regulations force them to. 

Also important to watch will be the progress of new legislation to regulate AI. On 21 April 2021, the European Commission published the EU AI Act proposal, draft legislation to promote "trustworthy AI" in the EU. The regulation could enter into force in 2023, followed by a transitional period. 
