On Wednesday, March 8th, 2023, Squirro, provider of Semantic Enterprise Search and Insights Applications and Gartner Magic Quadrant Visionary for Insight Engines in 2021 and 2022, hosted the webinar “Generative AI for Enterprise: How to Tame a Stochastic Parrot”, presenting the results of an international survey on the current state, relevant use cases, and requirements for deploying Generative AI in enterprises. Most importantly, Squirro showcased an enterprise-ready LLM architecture able to overcome the most critical limitations of Generative AI applications in an enterprise context.
The overwhelmingly positive reactions Squirro received from the many registrants underscore the need to integrate Large Language Models (LLMs) into the enterprise context. Thomas Diggelman, Squirro Machine Learning Engineer, gave an overview of LLMs and what they can achieve:
“LLMs are statistical models trained on enormous amounts of text data to detect patterns and connections between words, phrases, and higher-order relations. These models are highly versatile and can be applied to a wide range of natural language processing tasks. For instance, they can be used for generative tasks such as machine translation, summarization, and chatbots. They can also be used for spell-checking, classification, and entity recognition, which are all examples of discriminative tasks. LLMs have two primary abilities: language modeling and knowledge retrieval through associative memory. Language modeling is a versatile tool for various tasks; knowledge retrieval is an almost unintentional side effect.”
Thomas also highlighted the unwanted characteristics of this side effect, which prevent the deployment of most currently available LLMs, trained on large amounts of public data, in an enterprise context:
“Compressing vast amounts of information into an LLM has drawbacks. Approximate reproduction of that information can result in inaccuracies or fabricated content, and the models cannot even detect when they are hallucinating. This can lead to severe consequences in practical applications. Furthermore, LLMs inherit biases from their training data, and their black-box nature makes it challenging to trace how answers are generated and on what grounds.”
Squirro CTO Saurabh Jain then demonstrated in a live demo what his team has achieved to overcome these limitations. He showcased an enterprise-ready LLM architecture that resolves the issues currently obstructing the use of LLMs on internal enterprise data:
“Squirro counters hallucinations with transparency and explainability: the sources of the information are added to the LLM’s reply when it is prompted with a question. Furthermore, using Squirro’s Composite AI platform and Insight Engine technology, it can ingest and locally up-train LLMs with private and premium data sources via any existing workbench, ensuring that users get the correct information when they need it, in the applications they already use.” In the webinar, the team also stressed the importance of security, privacy, and compliance with internal Access Control Lists (ACLs) for a successful internal deployment of LLMs, and provided insights and approaches for addressing these requirements.
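The grounding-with-sources idea described above follows the general retrieval-augmented pattern: retrieve relevant enterprise documents first, constrain the model to answer from them, and surface the sources alongside the reply. The sketch below illustrates that pattern only; the document store, the naive keyword scoring, and the prompt template are illustrative assumptions, not Squirro’s actual implementation.

```python
# Illustrative retrieval-augmented sketch (assumed setup, not Squirro's API).

# Hypothetical in-memory document store; in practice this would be an
# insight-engine index with ACL-aware search.
DOCUMENTS = {
    "policy-2023.pdf": "Employees may carry over up to five vacation days per year.",
    "handbook.pdf": "Vacation requests must be approved by a line manager.",
    "security.pdf": "All laptops must use full-disk encryption.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), source, text)
        for source, text in DOCUMENTS.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [(source, text) for score, source, text in scored[:k] if score > 0]

def build_prompt(question: str, passages: list[tuple[str, str]]) -> str:
    """Instruct the model to answer only from the passages and cite them."""
    context = "\n".join(f"[{src}] {text}" for src, text in passages)
    return (
        "Answer using only the context below and cite the source in brackets.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

question = "How many vacation days can employees carry over?"
passages = retrieve(question)
prompt = build_prompt(question, passages)  # sent to the LLM (call omitted)
sources = [src for src, _ in passages]     # surfaced alongside the reply
print(sources)
```

Because the sources travel with the answer, a user can verify every claim against the cited document instead of trusting the model’s unchecked recall.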
If you missed the webinar, you still have the opportunity to see the solution in action. Due to high demand, Squirro is organizing a second webinar specifically for APAC and EMEA audiences, taking place on March 15th at 09:00 AM CET / 4:00 PM SGT. The webinar will also cover the results of a survey of decision-makers across the US, EMEA, and APAC regions about implementing LLMs in the enterprise.
The survey link and the registration form can be found here: https://squirro.com/events/generative-ai-for-enterprise-how-to-tame-a-stochastic-parrot-apac/