Self-service Business Intelligence Tools: The Intersection Of Computer System Innovation As Well As Business Understandings

A few weeks ago, Microsoft announced a new data analytics product called Fabric. One of the most attractive features of Fabric is a chat interface that allows users to ask data questions in human language. So instead of waiting in the data request queue, everyone gets instant answers to their data questions.

After the release of OpenAI's models and open-source LLMs, analytics vendors large and small are scrambling to integrate modern LLMs into their products. Between Microsoft's announcement and data vendors' shiny demos and big promises, the future looks bright.


There is a big difference, however, between being fast and being reliable.

LLMs Are Not Enough (for Self-Service Analytics)

Even the SQL query generated by the AI assistant in Microsoft's own Fabric demo was incorrect:

Guys, can you test the SQL query before doing the demo video?#MicrosoftBuild #MicrosoftFabric— JP Monteiro (@JPedro_Monteiro) May 23, 2023

This is not a rare case. In this article, an analyst used ChatGPT to generate SQL in a head-to-head test between human and machine. ChatGPT was accurate about 50% of the time. That is fine when the tool is used by analysts who can check the validity of the output, but it is a complete disaster for a self-service analytics tool aimed at non-technical business users.
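One pragmatic mitigation is to at least verify that generated SQL parses and plans against the real schema before anything reaches a business user. Below is a minimal sketch using Python's built-in SQLite; the helper name `is_valid_sql` and the toy schema are hypothetical illustrations, not any vendor's actual API:

```python
import sqlite3

def is_valid_sql(conn: sqlite3.Connection, query: str) -> bool:
    """Check that a query at least parses and plans against the real schema.

    Passing this check does NOT mean the query answers the user's question
    correctly -- that is exactly the gap a human analyst fills today.
    """
    try:
        conn.execute(f"EXPLAIN QUERY PLAN {query}")
        return True
    except sqlite3.Error:
        return False

# Toy schema for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

print(is_valid_sql(conn, "SELECT SUM(amount) FROM orders"))  # True
print(is_valid_sql(conn, "SELECT total FROM orderz"))        # False
```

A syntactic check like this only catches the first class of errors (a query that does not run at all); it cannot catch a query that runs but answers the wrong question, which is the harder failure mode.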

This is not to say that LLMs have no place in the BI world. We’ve all seen the power of LLMs in the hands of data analysts: quickly generating SQL/Python code for ad hoc analysis, brainstorming ideas during research, or fluently summarizing complex analyses.


But before anyone convinces you that an LLM can simply be bolted onto your current BI tool to make the self-service dream come true, let's take the claim with a healthy dose of skepticism. We seek evidence, not from glossy brochures and polished demos, but from first-principles reasoning and an understanding of the components that make up an LLM.

In the first part (this post), we'll try to convince you that, yes, LLMs will change the landscape of self-service analytics, but it won't happen quickly, nor is the technology strong enough on its own to power the next generation of self-service analytics tools. We show why the data interpretation layer is a critical component in building an LLM-powered self-service analytics system.

In part two (a follow-up post), we show why we think the current design of the data interpretation layer is inadequate for such a task, and what the missing ingredients are.

Imagine a perfect world where business users and analysts live in peace, free of report clutter, dashboard sprawl, and request-queue despair, where the LLM has become an indispensable cog in every business intelligence machine. Forget the technicalities. Forget all the current limitations of LLMs. Imagine a conversation between a business user and an AI. What would it look like?


👶 User: How can I double our company's revenue?

🤖 AI: To double revenue, consider focusing on the following areas: conversion rate, activation rate, retention rate, expanding product offerings, entering new markets, and/or changing pricing strategies.

👶 User: Extract conversion rate

🤖 AI: I'm sorry, could you please provide more context? What conversion rate data are you referring to? We currently have these predefined metrics: Web to Signup Conversion, Signup to Paid Conversion.

👶 User: Show me Web to Signup Conversion Rate

🤖 AI: Our average Web to Signup conversion rate over the last 3 months is 5%. By increasing the conversion rate to 12%, you have the potential to double your revenue.

👶 User: Give me a breakdown of our conversion rate over the last 12 months, organized by city.

🤖 AI: Here's a breakdown of our conversion rate over the last 12 months, organized by city. San Francisco started at 5.0% and peaked at 5.8% in December. New York started at 4.8%, rose to 5.6% in September, and fluctuated between 5.2% and 5.6% through December. Seattle started at 4.2%, rose to 6.2% in November, and eased to 5.4% in December. Overall, Seattle had the highest conversion rate, peaking at 6.2% in November. San Francisco showed a gradual increase, while New York had mixed trends. This data helps us identify cities where we may need to focus marketing and sales efforts to improve conversion rates.

👶 User: Add this chart to my “My KPI” dashboard and set a daily alert to be sent to me via Slack

🤖 AI: Done! I've added the chart to your “My KPI” dashboard and set up a daily Slack alert. You will receive a notification every day with the latest conversion rate data for San Francisco, New York, and Seattle.

If you look at the abundance of demos from BI vendors, all of these capabilities look like things LLMs can already do. We'll go through them and show how difficult they are to implement correctly.

To get a sense of that difficulty, let's go through the requirements for integrating an LLM into a self-service analytics system.


First, depending on the capability, the system needs to understand users' requests and either map them to predefined metrics and queries or create new metrics.

It should be able to formulate appropriate queries, send them to the customer’s database for execution, and then visualize the output.

Furthermore, the system should be able to analyze and interpret the output to recommend appropriate actions to end users.

None of these are inherent capabilities of LLMs, so the LLM must be adapted to the analytics domain. That is not as quick or easy as we'd like, but more on that later.
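To see how many moving parts those requirements imply, here is a deliberately naive Python skeleton of the pipeline: understand the request, formulate a query, execute it, and interpret the result. Every function here is a hypothetical stub; in a real system the first and last stages would involve an LLM, and the middle stages a real warehouse:

```python
# Each stage can fail independently, and an error anywhere invalidates the
# final answer -- which is why end-to-end reliability is so hard.

def understand(question: str) -> dict:
    # NLU stage: in a real system an LLM would parse intent; here a stub.
    return {"metric": "conversion_rate", "group_by": "city", "months": 12}

def to_query(intent: dict) -> str:
    # Query-formulation stage: render the intent as SQL.
    return (f"SELECT {intent['group_by']}, AVG(converted) AS {intent['metric']} "
            f"FROM sessions GROUP BY {intent['group_by']}")

def execute(query: str) -> list[tuple]:
    # Execution stage: stub standing in for the customer's warehouse.
    return [("San Francisco", 0.058), ("Seattle", 0.062)]

def interpret(rows: list[tuple]) -> str:
    # Interpretation stage: summarize results for the end user.
    top = max(rows, key=lambda r: r[1])
    return f"{top[0]} has the highest conversion rate at {top[1]:.1%}."

answer = interpret(execute(to_query(understand(
    "Break down conversion rate by city over the last 12 months"))))
print(answer)
```

Even in this toy form, four distinct competencies are chained together, and the overall accuracy is roughly the product of the stage accuracies.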


Second, in any self-service business intelligence (SSBI) system, reliability is paramount. Without it, the system risks presenting incorrect data and misinterpretations, which erode users' trust, and without that trust the system loses its raison d'être.

To avoid this pitfall, the system needs to preserve integrity across the entire process, from understanding user inputs to processing them and generating the final output. If there is any ambiguity in a request, the system should identify it and ask the user clarifying questions. As the rest of this article makes clear, this is incredibly difficult to get right.
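The "Extract conversion rate" exchange in the earlier dialog illustrates this: when a request matches more than one predefined metric, the system should ask rather than guess. A toy sketch of that behavior follows; the metric names and the word-overlap scoring are illustrative assumptions, not a real disambiguation algorithm:

```python
# Ask-before-answering: ambiguous requests trigger a clarifying question
# instead of a silently chosen (and possibly wrong) metric.
KNOWN_METRICS = ["Web to Signup Conversion", "Signup to Paid Conversion"]

def handle(request: str) -> str:
    req_words = set(request.lower().split())
    scores = {m: len(req_words & set(m.lower().split())) for m in KNOWN_METRICS}
    best = max(scores.values())
    matches = [m for m, s in scores.items() if s == best and s > 0]
    if len(matches) == 1:
        return f"Computing: {matches[0]}"
    if matches:
        return "Which did you mean? " + " or ".join(matches)
    return "I don't recognize that metric."

print(handle("show me the conversion rate"))  # ambiguous -> clarify
print(handle("web to signup conversion"))     # unique match -> compute
```

The hard part in practice is not writing this branch, but reliably detecting ambiguity in open-ended natural language in the first place.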

Third, since LLMs are prone to hallucinating their output, the system needs to ensure that the generated queries, analyses, and interpretations are not fabricated but closely grounded in the business's actual data and context. This is almost impossible to guarantee without human intervention: you need both business expertise and the critical-thinking skills of a human data analyst.


Furthermore, security and control are also key concerns when it comes to self-service analytics. Any SSBI system needs to ensure that users cannot access unauthorized data. Integrating an LLM into an SSBI system while maintaining security remains difficult, as LLMs are vulnerable to attacks such as prompt injection, and there is no easy technical solution to the problem.
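Some of this risk can be pushed down to the database layer, so that even a prompt-injected or hallucinated query cannot read unauthorized tables. Here is a sketch using SQLite's authorizer hook; the per-user allow-list is a hypothetical stand-in for real table- and row-level permissions:

```python
import sqlite3

# Hypothetical per-user allow-list: this user may only read "orders".
ALLOWED_TABLES = {"orders"}

def authorizer(action, arg1, arg2, db_name, trigger):
    # Deny any column read on a table outside the allow-list.
    if action == sqlite3.SQLITE_READ and arg1 not in ALLOWED_TABLES:
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL)")
conn.execute("CREATE TABLE salaries (amount REAL)")
conn.set_authorizer(authorizer)

conn.execute("SELECT amount FROM orders")        # permitted
try:
    conn.execute("SELECT amount FROM salaries")  # denied by the authorizer
except sqlite3.Error as e:
    print("blocked:", e)
```

Enforcing permissions in the database rather than in the prompt means security does not depend on the LLM behaving, which is the safer design given how easily prompts can be subverted.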

The risk of data leakage persists as long as commercially hosted LLMs send data to third-party AI services such as OpenAI or Anthropic. And the risks are hardly theoretical: Samsung has already banned internal use of ChatGPT after internal data leaks. Compliance requirements such as SOC 2 make this an even greater challenge.

Finally, cost and speed affect the feasibility of deploying and using an SSBI system. Advanced models such as GPT-4 demand deep pockets, while cheaper models fail to meet the aforementioned requirements for capability and reliability.

In terms of speed, ChatGPT and similar tools have set an expectation of instant answers. An AI analytics assistant, however, must run through several complex steps, including retrieving context, generating queries, and executing them, which takes considerable time to produce the desired results. Balancing the trade-off between capability, reliability, and speed is no easy task.


In the previous section, we listed the key capabilities required for an SSBI system to provide a natural language interface to users. These capabilities are not an inherent part of most LLMs on the market today: they are general-purpose models, built on a broad foundation, that must then be adapted to specific domains.

Any specific domain has two components: the knowledge component (the “what” of the domain) and the skills component (the “how” of the domain).

For example, suppose we want to adapt an LLM into a business analyst assistant for an e-commerce company. In that case, the knowledge component is e-commerce business knowledge, and the skills component is data analysis.

At the intersection of these two, you have metrics particularly useful for e-commerce, such as conversion rate, cart abandonment rate, and inventory turnover, and, more importantly, the causal structure of those metrics: e.g., to optimize cart abandonment rate, you need to increase the search relevance score, decrease shipping costs, etc.
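That causal structure can be made explicit as a driver graph the assistant traverses when asked "how do I improve X?". The metrics and edges below are illustrative assumptions, not real data:

```python
# Hypothetical causal structure of e-commerce metrics: each metric lists the
# driver metrics that influence it.
DRIVERS = {
    "revenue": ["conversion_rate", "average_order_value", "traffic"],
    "conversion_rate": ["search_relevance", "cart_abandonment_rate"],
    "cart_abandonment_rate": ["shipping_cost", "checkout_friction"],
}

def levers_for(metric: str, depth: int = 2) -> list[str]:
    """Collect direct and indirect drivers of a metric, breadth-first."""
    frontier, found = [metric], []
    for _ in range(depth):
        frontier = [d for m in frontier for d in DRIVERS.get(m, [])]
        found.extend(frontier)
    return found

print(levers_for("conversion_rate"))
```

Encoding this graph is exactly the kind of domain knowledge that does not come built into a general-purpose LLM; it has to be supplied by the business.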


In short, to apply any large language model to a specific domain, we must incorporate both knowledge and skills in that domain into the model.

Fine-tuning takes a pre-trained model and

