As the demand for high-quality data accelerates, driven in large part by generative AI, data teams face mounting pressure to deliver more, faster. To meet that demand, a wide range of analysts - from technical data experts to business-focused users who prefer visual interfaces - need a faster, more reliable path from question to insight. That demand, however, creates tension between speed and good data governance. In any given company, there are usually several data analysts for every data engineer, which makes it unrealistic for every data request to flow through the data engineering team. With long request queues and competing priorities, engineers simply don’t have the bandwidth to support every ad-hoc query or model analysts need.
This means more analysts are using self-serve tooling, and in many cases, spinning up their own data marts to work around engineering bottlenecks. But when these workflows live outside of governed pipelines, relying on inconsistent logic and unstructured data assets, they introduce serious risks, including duplicated work, data security concerns, and rising cloud costs from redundant or unmanaged assets.
The solution isn't more dashboards. It's empowering the right analysts with the right self-service tools, without sacrificing governance. We'll look at how the role of the analyst is changing in response to this demand, and how companies can use governed collaboration to increase data velocity without compromising on quality.
The evolving role of the analyst
Data analysts are taking an increasingly active role in shaping data within their companies, driven both by business demand for data and by changes in the underlying technology. As a result, analysts are:
- Getting closer to raw and modeled data sources
- Becoming more familiar with data tooling
- Incorporating AI
Let’s take a look at each of these areas in detail and what’s driving them.
Getting closer to raw and modeled data sources
Data quality is still a leading concern across industries. In dbt Labs’ 2024 State of Analytics Engineering report, 57% of data professionals cited data quality as the largest data-related issue they face, up from 41% in 2022.
Companies expect analysts to be more than passive consumers of data. Analysts increasingly need the skills and tools to verify that the datasets they use are accurate, up to date, and properly cleaned for business use.
The need for more high-quality data is also driving analysts to seek out useful data sources they can incorporate into their work. Data silos - islands of data that are independent from, and often incompatible with, more governed and highly structured data - still plague most companies. Data analysts play a pivotal role in finding and transforming this data so that it’s compatible with the company’s governed datasets.
Becoming more familiar with data tooling
In the past, data pipelines were solely the province of data engineering teams. They were often written in a mix of languages and hidden away as stored procedure code in a database or data warehouse, as hard to find as they were to use and manage.
Today, with tools like dbt, anyone with knowledge of SQL or Python can contribute to data transformation code. dbt provides a common and governed approach to data transformation backed by software development best practices like documentation, version control, and testing.
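As a minimal sketch of what that contribution looks like, here is the kind of dbt model an analyst might write; the stg_orders upstream model and the column names are hypothetical:

```sql
-- models/marts/fct_daily_orders.sql
-- Hypothetical example: aggregates a cleaned staging model into a
-- daily orders fact table. ref() resolves the upstream model and
-- records the dependency in dbt's lineage graph.
select
    order_date,
    count(*)         as order_count,
    sum(order_total) as total_revenue
from {{ ref('stg_orders') }}
where order_status != 'cancelled'
group by order_date
```

Because a model like this lives in a dbt project, it is versioned, testable, and documented like any other piece of code.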
As a result, analysts are becoming more familiar with the technical tools required to create and maintain data pipelines, including source control systems such as Git. That enables data engineers and analysts to collaborate on analytics code, data tests, documentation, and data metrics in ways that weren’t previously possible.
In other words, analysts - many of whom were already quite technical - are becoming more comfortable with engineering tools and workflows, blurring the lines between business and data roles.
Incorporating AI
dbt Labs co-founder Tristan Handy has noted how AI is disrupting the way we do data engineering. The advent of GenAI means that analysts can do more and do it more quickly than ever before:
- Beginner analysts can query data using natural language prompts to a large language model (LLM), which translates their requests into SQL and runs the resulting queries against the source systems
- Experienced analysts can use AI to edit, extend, or understand complex queries that might otherwise take significant time to write and debug
- All analysts can leverage AI to generate boilerplate code for new data pipelines and tests, as well as base documentation for data models (see the sketch after this list)
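As one concrete example of that boilerplate, an AI-generated data test might look like the following sketch of a dbt singular test; the fct_daily_orders model is hypothetical:

```sql
-- tests/assert_no_negative_revenue.sql
-- Hypothetical example of a dbt singular test: dbt runs this query
-- and the test fails if any rows are returned.
select
    order_date,
    total_revenue
from {{ ref('fct_daily_orders') }}
where total_revenue < 0
```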
Of course, AI doesn’t replace the analyst; it supports them. High-quality reports and data products still require human judgment, context, and oversight. AI simply accelerates the work, keeping a skilled analyst in the loop every step of the way.
The challenges that analysts face
All this means that, more than ever, data analysts can dive headlong into data and find the answers they need without waiting on an already overtaxed data engineering team. However, analysts also run multiple risks when dealing directly with ungoverned and unstructured data:
No mechanisms to ensure data quality. Data stored in multiple systems often isn’t rationalized or harmonized. It may exist in different formats across different data stores. Key data values - e.g., revenue - may even differ from system to system, leading to doubts about which system is the “source of truth.”
Missing (or unavailable) metadata. Ungoverned data often lacks appropriate or complete metadata - data about data. This can include technical metadata (tables, columns, data types, relationships, last update time, upstream source) as well as business metadata (owner, description, method of calculation, business meaning, and usage). Without metadata, it can be difficult to tell who’s responsible for a given dataset or how certain values were calculated.
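When the platform itself doesn’t surface metadata, analysts often fall back on ad-hoc queries like the sketch below; this assumes a warehouse that exposes the standard information_schema, and the analytics schema name is hypothetical:

```sql
-- Hypothetical fallback: inspect technical metadata (columns,
-- data types, nullability) for tables in a given schema.
select
    table_name,
    column_name,
    data_type,
    is_nullable
from information_schema.columns
where table_schema = 'analytics'
order by table_name, ordinal_position
```

Queries like this recover some technical metadata, but business metadata - owner, meaning, method of calculation - has no equivalent fallback.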
Documentation is light or nonexistent. A critical form of metadata is documentation about the meaning and purpose of a given dataset, and it’s essential for collaborating across roles. However, without a tool that supports documenting datasets alongside the data model itself, such rich metadata is difficult to capture and maintain.
The tools that data analysts can use to collaborate on governed data
In the end, data analysts are concerned primarily with delivering high-quality data and insights to their stakeholders as quickly as possible. It’s the job of a company’s data engineering and central governance teams to set standards and monitor data to ensure that this data is well-governed, secure, and compliant.
With the right tools, data analysts can play a more active role, contributing to structured, governed data by building on shared models, documenting usage, and working within trusted workflows. Together, analysts and governance teams can raise the bar for data quality, security, and compliance.
dbt serves as a data control plane for analytics and AI that centralizes your analytics workflows so that teams can ship and use trusted data, faster. With the rise of AI and self-service analytics, the definition of an “analyst” is evolving. Today’s analysts span a wide spectrum, from SQL-fluent data experts to business users and data scientists who rely on visual tools or natural language. dbt opens the door for all of them to contribute to high-quality, trusted data products - without compromising on governance, speed, or security. Using dbt, analysts and data teams can collaborate on creating reliable, well-documented data sets, all within one central, governed environment by leveraging:
Easy model building. dbt Canvas is a visual tool that any analyst can use to contribute to data models. Using dbt Canvas’ visual, drag-and-drop experience and built-in context-aware AI powered by dbt Copilot, analysts can create model changes that compile to production-ready SQL with all the benefits of dbt, including version control, orchestration, and discovery.
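As a rough illustration, a change sketched visually in Canvas - say, joining customer attributes onto orders - might compile to ordinary dbt SQL of this shape; the model and column names here are hypothetical:

```sql
-- Hypothetical sketch of compiled output: a join added visually
-- becomes standard dbt SQL, so it inherits version control,
-- testing, and lineage like any hand-written model.
select
    o.order_id,
    o.order_total,
    c.customer_segment
from {{ ref('stg_orders') }} as o
left join {{ ref('dim_customers') }} as c
    on o.customer_id = c.customer_id
```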
Collaborative discovery. dbt provides access to a company’s data transformation models and associated metadata via dbt Catalog. This feature provides a full view of your data estate, including non-dbt data objects in Snowflake. Engineers, analysts, and business decision-makers can collaborate on code and documentation as part of a single collaborative workflow.
dbt supports writing documentation as an intrinsic part of each data model. Once a data pipeline is pushed to production, analysts can find a governed dataset, examine its metadata, and read its associated documentation before putting it to use.
Frictionless data insights. With dbt Insights, analysts can freely query, validate, visualize, and share trusted data. Analysts can write SQL queries from scratch or use context-aware AI, powered by dbt Copilot, to generate new queries from natural language prompts. This means analysts, regardless of technical skill, can explore data, uncover insights, and make decisions, all within a secure, governed environment built for trusted self-service. dbt Insights is available in Preview.
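For example, an analyst might ask “what was monthly revenue over the past year?” and review a generated query along these lines before running it; the fct_daily_orders model is hypothetical and a Snowflake-style SQL dialect is assumed:

```sql
-- Hypothetical example: monthly revenue for the past year,
-- built on a governed dbt model rather than raw source tables.
select
    date_trunc('month', order_date) as order_month,
    sum(total_revenue)              as revenue
from {{ ref('fct_daily_orders') }}
where order_date >= dateadd('year', -1, current_date)
group by 1
order by 1
```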
Data lineage. dbt generates a visual lineage graph that maps how data flows across your models, helping teams understand upstream sources and downstream dependencies.
Using these lineage maps, analysts can trace issues to their origin without filing a support ticket or digging through disconnected tooling. When an analyst detects an issue in the data and reports it, data engineers can quickly assess the impact and resolve the problem.
AI-powered workflows. dbt Copilot is our context-aware, AI-powered solution that supports engineers, analysts, and business users at every step of the data lifecycle. It understands the structure, logic, and metadata of your dbt project, so it can help analysts generate SQL, create data tests, and draft documentation with accuracy and speed. Engineers can use it to accelerate development and enforce standards. Analysts can safely explore models with guardrails in place. As more people engage with analytics, dbt Copilot helps maintain governance, quality, and consistency by keeping everyone aligned on trusted, project-specific context.
To learn more about how dbt brings analysts of all technical skill levels into one central, governed workflow powering faster, more trusted analytics for tomorrow’s data solutions, request a demo today.