
Rethinking Analytics for Real-world Decision Making

By Brian Neumann, SVP of Engineering at Decile

Analytics have always been a lifeline for ecommerce merchants, providing the information they need to make decisions and keep operations rolling. For a while, dashboard-based analysis worked, but as data volumes grew, operations became more sophisticated, and decision cycles shortened, the cracks in dashboard-based analytics began to show.

Setting aside the issue of data silos covered elsewhere, dashboard-based analytics failed in three core ways:

1. Many analyses spanned multiple datasets, requiring analytics workers to export multiple CSVs and stitch the data together outside of the dashboard.
2. Setting the right business context for a report often required customizing the dashboards, such as setting filters or adding dimensions and metrics.
3. Even after the data had been collected and the reports adjusted, dashboards merely presented the data; it was up to the user to sift through it and mine for insights. At best, this was a slow process. At worst, it left insights undiscovered.

As a result of these issues, traditional dashboard-based analytics forced business users either to spend hours making data-driven decisions or to hire data teams to implement new reports, with answers often lagging as new dashboards and data features were developed to address each question. Neither approach scaled well, especially when decisions needed to happen quickly.

The Promise of AI

With the advent of new AI capabilities, ecommerce merchants and SaaS providers in the ecommerce space rushed to see how AI could solve the pain points of a dashboard-based analytics approach.

Many merchants now download a raw dataset from a dashboard in their ecommerce platform and upload it to an AI-based interface like ChatGPT. Compared with the old spreadsheet-style approach, users can extract insights in minutes instead of hours.

But could AI solve the other two issues raised above? What if the right dataset isn't available in the standard dashboards? Could AI assist in pulling it together? And if so, could AI apply the right business context to ensure the accuracy of the data and the resulting analysis?

There was quite a bit of early interest in AI copilots, as organizations layered them on top of their existing data and hoped for the best. Unfortunately, as these AI solutions went into production, the results were often disappointing because the model was missing context. Trust was also difficult to establish, since many systems returned answers whose methodology was missing or difficult to interpret. Without visibility into how conclusions were reached, responses were viewed with hesitation even when technically correct.

Building a modern analyst: how natural-language AI changes how teams work with data

At the end of the day, the ideal state for a business user is to ask a question in natural language and receive a complete answer, including the methodology behind it, engendering trust and confidence in the results.

A quality analyst was already meeting this need, albeit with a lag. What traits make an analyst effective?

1. Effective analysts know the specifics of how the business operates. They can translate a business user's question into specific data features and scope the data appropriately. They also know how key metrics are defined for the business and can adjust reporting to accommodate those definitions.
2. They have unfettered access to the data and can write SQL to pull ad hoc datasets when needed, without requiring another BI report buildout.
3. They present assumptions and methodology as part of the response to the business user, creating a shared understanding of the solution and providing a foundation for further refinement of the analysis.

To break the paradigm of dashboard-based reporting, AI needs to be equipped with these same abilities, but at machine speed rather than human speed.

Reasoning directly against governed internal data

An AI-based analytics approach can start from the same jumping-off point of a natural-language question, but rather than relying on the shared or abstracted reporting layers of a BI tool, with their slow implementation cycles, brands need an architecture that allows AI to reason directly against governed data.

Cloud software providers such as Google, Snowflake, and Databricks have made it easier to deploy text-to-SQL capabilities that translate natural-language questions into SQL and, effectively, into datasets ready for human use. This frees the data from the constraints of a traditional BI tool, which requires developers to model data before it can be accessed. But even with these off-the-shelf tools, additional care is required for AI to pull data accurately and efficiently.
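As a rough sketch of the text-to-SQL step (the function name, prompt wording, and schema below are illustrative assumptions, not the API of any particular provider), the core pattern is to hand the model the question plus the schema it is allowed to query:

# Minimal text-to-SQL sketch. `call_llm` is a placeholder for whichever
# model endpoint a team uses; it is not a real library call.
def generate_sql(question: str, schema_ddl: str, call_llm) -> str:
    prompt = (
        "You are an analytics assistant. Using only the tables below, "
        "write one SQL query that answers the question.\n\n"
        f"Schema:\n{schema_ddl}\n\n"
        f"Question: {question}\n\n"
        "Return only the SQL."
    )
    return call_llm(prompt)

# Hypothetical usage:
# sql = generate_sql(
#     "What was net revenue by month for the last 6 months?",
#     "CREATE TABLE orders (order_id INT, order_date DATE, net_revenue NUMERIC);",
#     call_llm=my_model_client,
# )

On its own, though, a schema-only prompt like this lacks the business context described next.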

To inform the models how to get to the right data, two additional pieces need to come into play:

โ— A modeled semantic view describing the data structures available and their potential applications, allowing the LLM to understand not just what dimensions/metrics are available but how they can be translated to business terms or leveraged in different questions
โ— Merchant-specific context providing knowledge of the key business concepts (e.g. I run a small wholesale business in conjunction with my e-commerce site, but want to restrict reporting to e-commerce) to avoid prompt repetition (providing this info over and over again in the conversational interface)

Armed with this information, AI can begin to operate like an analyst, with knowledge of the business to apply key constraints, access to the full dataset through SQL without requiring additional report buildout, and the inherent capabilities of the LLM to reason on top of this data.

The last essential part of replicating an analyst is how the AI interfaces with the business user. Instructing the underlying agent to mimic the presentation skills of an analyst is a start, but there is also an opportunity to craft the user interface so the solution can unfold its assumptions and methodology as the user desires.

The goal of conversational AI interfaces is to reduce translation rather than hide complexity. Teams will ultimately be more confident in the results they receive when AI models are able to easily show their logic and approach, allowing business users to interact on their own terms.
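One way to support this, sketched here under assumed names rather than as a prescribed design, is to have the agent return a structured response so the interface can progressively reveal the SQL, assumptions, and methodology behind each answer:

# Hypothetical response structure for an analyst-style answer.
from dataclasses import dataclass, field

@dataclass
class AnalystResponse:
    answer: str                     # Plain-language answer shown first
    sql: str                        # Query the agent ran, revealed on demand
    assumptions: list[str] = field(default_factory=list)   # e.g., "Wholesale orders excluded"
    methodology: list[str] = field(default_factory=list)   # Step-by-step approach taken

# The interface renders `answer` up front and lets the user expand the
# remaining fields, mirroring how an analyst would walk through their work.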

Looking ahead: Implications of conversational AI for analytics

When paired with strong governance and business context, conversational AI is on the cusp of supplanting traditional dashboard-based analytics as the primary means for a business user to make data-driven decisions.

Analysis becomes more fluid, with teams able to test assumptions without waiting for report updates or custom builds, and unplanned questions no longer disrupt existing workflows.

Roles sharpen as analysts spend less time responding to ad hoc requests and more time focusing on governance and accuracy, while business users gain more autonomy within approved boundaries. Collaboration improves because both groups operate from a common base.

And ultimately, data is more accessible to help guide decisions across the entire business.
