Predictions 2024: Generative AI Goes Mainstream
By Kesava Reddy
2023 was a year of innovation for AI, reshaping businesses on a global scale. McKinsey dubbed it ‘AI’s breakout year’, noting a 40% increase in global investment. The Economic Magazine reported that the share-price index of major tech companies jumped by nearly 80%. As we step into the new year, we’re left wondering: how will AI adoption shape 2024?
In this new year, non-tech companies are expected to join the AI wave, leveraging it to cut costs and improve productivity. And as AI becomes more seamlessly integrated into our daily lives, ethical considerations will take center stage. From the battle against misinformation to tackling biases and ensuring transparency, there will be a growing emphasis on ethical methodologies and practices.
This article delves into our AI predictions, exploring the top trends that will make generative AI go mainstream.
Enterprises Will Adopt Smaller, Open-Source AI Models
A significant shift towards smaller, open-source AI models is poised to become a key trend in the upcoming year.
The year 2023 began with trillion-parameter models pushing the boundaries of what state-of-the-art AI technology could achieve. However, building these models requires computing resources beyond the reach of most companies and researchers. This led to efforts to push back against the logic of scaling.
In one seminal paper, DeepMind’s 70 Bn parameter Chinchilla model outperformed the 175 Bn parameter GPT-3 by training on roughly five times as much data. Meta then used the same approach to train smaller models like Llama 2 (7 Bn, 13 Bn and 70 Bn parameters) and released them open source. More recently, a key tactic to boost efficiency, called Mixture of Experts (MoE), has emerged: instead of running one monolithic network, each input is routed through a small subset of specialized expert sub-networks. Mistral used this approach in its Mixtral 8x7B model, which delivers comparable capabilities at a much smaller effective size.
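The routing idea behind MoE can be sketched in a few lines. This is a toy illustration, not Mixtral's actual architecture: all names, sizes, and weights below are invented for the example. The point is that a router scores the experts per input and only the top-k experts run, so most parameters stay idle on any single token.

```python
# Toy Mixture-of-Experts (MoE) forward pass: a router picks the top-k
# experts for each input; the other experts never execute.
import numpy as np

rng = np.random.default_rng(0)

DIM, NUM_EXPERTS, TOP_K = 8, 4, 2  # illustrative sizes, not Mixtral's

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through the top-k experts only."""
    logits = x @ router                       # score every expert
    top = np.argsort(logits)[-TOP_K:]         # keep the k best-scoring ones
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over selected experts
    # Weighted sum of the chosen experts' outputs; the rest stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)  # same dimensionality as the input, but only 2 of 4 experts ran
```

Because only `TOP_K` of `NUM_EXPERTS` expert matrices are multiplied per token, compute per token grows far more slowly than total parameter count, which is how a model like Mixtral 8x7B keeps inference cost close to that of a much smaller dense model.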
In other words, AI models are shrinking in size and being released open source, making them accessible to enterprises of all scales. In 2024, this will drive AI adoption, with enterprises fine-tuning these models or building AI applications highly tailored to their use-cases, without depending on API access to platforms that hold a monopoly on proprietary models.
AI Adoption In India Will Be Driven by Indic-Language LLMs
India’s linguistic landscape is one of the most diverse in the world, with the 2011 Census recognizing 122 major languages and more than 1,500 other languages and dialects. Around 30 languages in India are spoken by over a million people each. If we could unlock the capabilities of Generative AI for Indian languages, we could create significant positive impact by enhancing communication, inclusivity, and accessibility across our diverse linguistic landscape.
However, AI models like Llama 2 and Mistral have very limited support for Indic languages. This is partly due to the scarcity of diverse, high-quality Indic-language content, and partly due to inefficient tokenization of Indic scripts in these models.
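One concrete reason tokenization is inefficient for Indic scripts: models with byte-level vocabularies trained mostly on English text tend to fall back to raw UTF-8 bytes for Devanagari, and every Devanagari character occupies three bytes in UTF-8. The snippet below (with illustrative sentences of my choosing, not from any model's training data) shows how the same short question starts from far more bytes in Hindi than in English before any merging even happens:

```python
# Devanagari characters take 3 bytes each in UTF-8, while ASCII takes 1.
# A byte-level tokenizer with few learned Indic merges therefore spends
# many more tokens on Hindi text than on English text of the same meaning.
english = "How are you?"
hindi = "आप कैसे हैं?"  # the same question in Hindi

print(len(english), len(english.encode("utf-8")))  # character count equals byte count
print(len(hindi), len(hindi.encode("utf-8")))      # byte count is far larger
```

More bytes per sentence means more tokens per sentence, which shrinks the effective context window and raises inference cost for Indic-language users; Indic-focused models like OpenHathi address this in part by extending the tokenizer vocabulary.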
Several initiatives are underway to tackle this. For instance, in December 2023, Sarvam AI released OpenHathi, a 7 Bn parameter Hindi-language LLM based on Llama 2. Bhashini, an AI-based language translation platform by MeitY, was recently used at the Kashi Tamil Sangamam event in Varanasi by Prime Minister Narendra Modi to communicate with the Tamil-speaking attendees. Bhashini aims to break language barriers by enabling real-time translation of various Indian languages.
Democratization of the Creative Stack in the M&E Industry
Historically, technology platforms created network effects and wider distribution channels. With Generative AI, however, we are seeing, for the first time, a disruption of the creative process itself. This has powerful implications for the media and entertainment (M&E) industry, where technology has historically had a massive impact. Smartphones, streaming services, 3D animation and immersive audio technologies have all helped create the kind of movies, shows and content we have come to expect today. With Generative AI, however, the bar moves a notch higher.
Generative AI is so powerful in the M&E domain because it democratizes the creative process. With models like Stable Diffusion and Stable Video Diffusion, creating sophisticated artwork and animation costs a fraction of what it otherwise would. With AudioCraft, we can now create rich audio and music from text prompts. LLMs can help writers produce a first draft, speeding up the creative process.
In 2024, I expect Generative AI technologies to become part of the creative workflow of numerous M&E professionals. I also believe the content industry will start using them to create advertisements, music videos, and films, unlocking new forms of creative content we haven’t seen before.
Coding Assistants to Become a Part of Every Programmer’s Workflow
In 2023, we already witnessed the capabilities of LLMs in understanding and debugging code, translating code from one language to another, and generating accurate code when given the right prompts. Studies have shown that AI assistants can reduce development time by up to 25% by automating repetitive tasks like boilerplate code generation and finding relevant libraries. However, the risk of leaking sensitive company IP has prevented enterprises from actively using proprietary LLMs hosted on third-party platforms.
With the emergence of open-source AI models like StarCoder, Code Llama, Mistral and Falcon, it is now possible for enterprises to offer their developers fine-tuned AI coding assistants in a safe and secure way. Developers can then spend less time on routine tasks and focus on the more strategic aspects of coding, cutting development time and lowering costs.
In 2024, I expect to see an increasing number of businesses integrating AI coding assistants into the workflow of their development teams. This will drive adoption of Generative AI amongst developers rapidly.
LLM Powered Conversational AI to Transform Customer Service
Imagine seamless, 24/7 support where chatbots understand complex queries and craft personalized responses in real time. With the emergence of LLMs and AI technologies like vector databases and knowledge graphs, developers can now build sophisticated Conversational AI platforms grounded in the knowledge base a company already has. Such chatbots can handle routine inquiries like order tracking and billing, freeing human agents for intricate issues. Generative AI can also elevate interactions: offering recommendations, generating FAQs on the fly, and even translating between languages.
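The grounding step described above is usually implemented as retrieval: embed the company's knowledge-base entries, find the one closest to the user's query, and hand it to the LLM as context. Below is a minimal sketch of that retrieval step; the FAQ entries are invented, and a toy bag-of-words vector stands in for a real embedding model or vector database.

```python
# Minimal retrieval step of a knowledge-grounded support bot: find the
# knowledge-base entry most similar to the query, then (in a real system)
# prompt the LLM with it as context.
from collections import Counter
import math

# Hypothetical company FAQ entries standing in for a real knowledge base.
knowledge_base = [
    "Track your order from the Orders page using your order ID.",
    "Billing questions: invoices are emailed on the 1st of each month.",
    "Returns are accepted within 30 days with the original receipt.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the knowledge-base entry most similar to the query."""
    return max(knowledge_base, key=lambda doc: cosine(embed(query), embed(doc)))

context = retrieve("where can I track my order?")
print(context)
# A real pipeline would now prompt the LLM, e.g.:
# "Answer using only this context: <context>\n\nQuestion: <query>"
```

Because the answer is drawn from the company's own documents rather than the model's parametric memory, this pattern keeps responses current and reduces hallucination, which is what makes it practical for customer service.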
In 2024, we will see numerous new use-cases of Conversational AI powered by these emerging technologies, transforming how customer service has historically worked. Conversational AI bots will be able to help plan trips, make reservations, and offer assistance around the clock. We will, in short, witness AI that gets things done for customers.
Multimodal AI Will Become the Norm
Multimodal AI integrates multiple data modalities, including text, images, and audio, to enhance the accuracy and robustness of artificial intelligence systems. This enables better inference and richer interaction, and it opens up a wide range of creative applications, such as generating text from images, composing music from text, and designing products based on user preferences.
In other words, multimodal AI is closer to how we naturally interact as humans. With this approach, one could generate evocative music inspired by a painting, compose a poem that captures the essence of a video, or design a product that seamlessly blends form and function.
Glimpses of AGI and Final Note
In the quest for Artificial General Intelligence (AGI), 2024 might witness some strides, but they will be confined to nascent, sandboxed environments. Several approaches are currently being explored, such as hybrid architectures that combine multiple AI paradigms. Imagine specialized modules such as ‘perception’, ‘memory’ and ‘reasoning’, where ‘perception’ is powered by a multimodal AI system, ‘memory’ by vector database and knowledge graph powered RAG pipelines, and ‘reasoning’ by a deep learning model.
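The module wiring described above can be sketched with stubs. Everything here is invented for illustration: each class stands in for a real subsystem (a multimodal model for perception, a RAG pipeline for memory, a learned model for reasoning), and only the way the modules hand data to each other is the point.

```python
# Toy sketch of a hybrid 'perception' / 'memory' / 'reasoning' architecture.
# Each module is a stub; a real system would swap in actual models.
from dataclasses import dataclass, field

@dataclass
class Perception:
    """Stands in for a multimodal model turning raw input into observations."""
    def observe(self, raw: str) -> dict:
        return {"modality": "text", "content": raw.lower()}

@dataclass
class Memory:
    """Stands in for a vector-DB / knowledge-graph RAG pipeline."""
    facts: list = field(default_factory=list)
    def store(self, obs: dict) -> None:
        self.facts.append(obs)
    def recall(self, query: str) -> list:
        return [f for f in self.facts if query in f["content"]]

@dataclass
class Reasoning:
    """Stands in for a learned reasoning/planning model."""
    def decide(self, relevant: list) -> str:
        return "act" if relevant else "explore"

class Agent:
    def __init__(self):
        self.perception, self.memory, self.reasoning = Perception(), Memory(), Reasoning()
    def step(self, raw_input: str, goal: str) -> str:
        # perceive -> remember -> recall what is relevant -> decide
        self.memory.store(self.perception.observe(raw_input))
        return self.reasoning.decide(self.memory.recall(goal))

agent = Agent()
print(agent.step("Door is locked", "door"))   # memory holds a relevant fact -> "act"
print(agent.step("Sky is blue", "treasure"))  # nothing relevant -> "explore"
```

The appeal of this decomposition is that each module can be upgraded independently as its underlying paradigm improves, rather than betting everything on one monolithic model.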
AGI remains elusive and hard to achieve. However, in a domain-specific sandboxed environment, we may see an early AGI-like model emerge in 2024: for example, an AI that adapts to and exhibits understanding of a simulated environment, or masters a video game.
2024, therefore, promises to be the year when AI graduates to the mainstream. From driving actual applications within enterprises, to powering new content formats, understanding a wider set of languages, enabling richer interactions, and demonstrating reasoning and problem-solving capabilities that seem human-like, I believe we will see AI make the world a better and more interesting place this year.
(The author is Kesava Reddy, Chief Revenue Officer, E2E Networks Ltd – India’s fastest-growing accelerated cloud computing platform, and the views expressed in this article are his own)