Freedom and governance are not mortal enemies

About a month ago, I had the pleasure of being a guest on Ross Helenius’s podcast, Unlocking Value with Data. Ross, the Director of Analytics Engineering at Mimecast, and I explored the nuances of curating and governing data models for analytics teams. We talked about how to give your team of Analytics Engineers the power to govern data models in dbt™ without compromising analyst freedom. If podcasts aren’t your thing, I’ve turned the conversation into a blog post for you. Dive in and see how it applies to your team. Ready, set, go!

Who am I to talk to you about analytics?

I’m Sarah Levy, the co-founder and CEO of Euno, a data analytics company that’s been around for about a year and a half. But I’ve been working with data for over 20 years, alongside my co-founder and partner, Eyal Firstenberg, our CTO. Together, we’ve tackled challenges across a range of industries. I spent over a decade in cybersecurity, working on data-driven products. After that, I led a MedTech company that used AI and computer vision for blood testing. Then, I moved into FinTech, where we applied data analytics to real estate.

Throughout this journey, Eyal and I built numerous homegrown solutions to handle data modeling challenges—things like managing terms, definitions, and calculations at scale. When we founded Euno, we immediately zeroed in on solving these tough problems. It’s a complex space, but that’s what excites us, and it’s what led us to build a data analytics startup that thrives on tackling hard problems head-on.

The number one problem for data teams

When we started Euno, I interviewed over 200 data leaders from companies of various sizes, and one challenge consistently stood out—trusting data models and business logic as you scale. We had experienced it ourselves, and it came up again and again.

The goal of the entire data stack—and the significant investments in tools and talent—is to drive data-based decisions. But if you can’t trust the numbers reported by your analytics tools, whether AI-driven or self-serve, the whole system loses its value. So, we drilled down into the issue, focusing on the quality of data models, business logic, and terminology.

Nearly every organization we speak with today is in the process of centralizing and shifting logic outside the BI layer. For years, logic was trapped in BI tools—buried in workbooks as custom fields. But now, with technology that enables version control, documentation, and proper management, teams can code and govern this logic effectively.

However, a new problem emerges. Analysts, who were used to flexibility with data, now find themselves losing autonomy—waiting on long engineering workflows. While these workflows ensure consistency, they also slow down the business, which can’t afford to wait for data-driven insights.

This trade-off between freedom and governance is the heart of what Euno focuses on. Businesses need the freedom to iterate, define terms, and adjust logic as the landscape evolves. But at the same time, they need governance to maintain consistency, trust, and scale. Striking that balance is tricky.

Too much freedom creates silos, duplicates, and inconsistencies—leading to collaboration and communication problems between data teams and business stakeholders. The key is finding the balance between governance and freedom because both are critical at every stage.

The balance between freedom and governance 

Let me break down the concept of a data model governance platform, focusing on the core idea without diving into tools and demos. At the heart of what we’re building, the primary users are analytics engineers and BI engineers—the ones responsible for ensuring trust in data models as a company scales. What makes this platform unique is its recognition that freedom is not the problem but part of the solution. You can’t expect analysts to just follow new guidelines, wait for tickets to be resolved, and stop delivering results. They need a certain level of freedom to do their work effectively.

Our platform is built on three key concepts:

Let analysts be analysts

Analysts should continue using the tools they’re skilled in—whether it’s SQL, Looker, Tableau, or Power BI. These are the environments where they craft logic, test ideas, and build reports. 

Mapping and lineage

We use technology to map everything that exists in the data environment, from warehouse tables and semantic models to BI dashboards and calculated fields. By understanding how all these pieces connect and tracking their actual utilization, we provide a clear picture of your entire data lineage. This allows us to introduce governance metrics like the percentage of Tableau dashboards covered by dbt, or the percentage of queries governed. 
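To make this concrete, here’s a minimal Python sketch of how a coverage-style governance metric could be computed once lineage is mapped. The data structures, and the rule that an asset counts as covered only when every upstream table is a dbt model, are illustrative assumptions for this post, not Euno’s actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BIAsset:
    """A consumption-side asset, e.g. a Tableau dashboard or Looker view."""
    name: str
    upstream_sources: frozenset  # table identifiers this asset reads from

def dbt_coverage(assets: list[BIAsset], dbt_managed_tables: set[str]) -> float:
    """Share of BI assets whose upstream tables are all built by dbt.

    An asset counts as 'covered' only if every table it depends on
    is governed (i.e., produced by a dbt model).
    """
    if not assets:
        return 0.0
    covered = sum(1 for a in assets if a.upstream_sources <= dbt_managed_tables)
    return covered / len(assets)

# Illustrative usage: two dashboards, one fully backed by dbt models.
assets = [
    BIAsset("revenue_dashboard", frozenset({"analytics.fct_orders"})),
    BIAsset("ad_hoc_churn", frozenset({"scratch.tmp_churn_extract"})),
]
print(dbt_coverage(assets, {"analytics.fct_orders"}))  # 0.5
```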

Automation

One of the biggest challenges in governing business logic is the sheer amount of work involved. There’s a lot of engineering effort required. That’s why we’ve automated 80% of the day-to-day work of analytics engineers, using auto-coding tools. This frees them to focus on the more strategic, complex tasks that still need manual attention. 

By combining freedom for analysts with automated governance, we give organizations the ability to elevate their governance score while maintaining the speed and flexibility teams need to deliver results.

Measuring success with a governance score

When it comes to data governance, one key challenge is knowing what to focus on amidst the vast amount of content and contributions from different teams. Understanding how everything connects and the impact of those connections is critical. Let me break this down into two parts: the main problems Euno is solving for customers and the measurable business impact. 

First, we help customers build a central metrics layer in dbt with minimal effort. Many organizations recognize the need for this layer, especially when introducing AI-driven tools, but the engineering workload required can be overwhelming. We simplify this process, making it easier and faster. This frees analysts to focus on what they’re really meant to do—answer business questions, not get bogged down in engineering workflows. Many teams lose sight of this, often needing to hire more people to fill the gaps. 

Another key benefit is gaining complete visibility over business logic, which is the first step to regaining control. This control, often lost in silos and duplicate efforts, is essential and measurable. We also cut analytics engineering bottlenecks, with a significant ROI. In fact, for every team we work with, we measure ROI in terms of engineering hours saved. On average, customers experience a 10x ROI compared to the tool’s cost. 

In terms of business impact, by freeing up analysts and cutting bottlenecks, you can deliver data products faster and increase overall productivity. Trust in your data models improves, which saves time and ensures you get the value you originally sought from data. Plus, we help you build a foundation for AI-driven self-serve, which is huge. 

The two key metrics we focus on are ROI and the governance score. What’s unique is that we help elevate your governance score while optimizing ROI—two metrics that typically work against each other. Usually, improving governance requires more investment, but we manage to optimize both, which we believe is a game-changer. 

Many organizations view data governance as an overhead that slows things down, but we offer a unique approach that maximizes both governance and speed—helping you balance them for long-term success.

A unique approach to data modeling 

Many of the processes we see prospects and customers using are manual, homegrown solutions they’ve built in-house—automations and workflows they’ve crafted themselves. We’ve taken those manual efforts and productized them, saving teams time and effort. Essentially, we’re offering a tool-based solution that does what they’ve been doing manually, but more efficiently.

In this way, we help organizations get the combined value of a catalog and lineage tool, like Amundsen or DataHub, along with a semantic layer, all by leveraging the power of dbt’s open-source libraries for both semantics and transformations. This is especially valuable for companies that have already made the transition to dbt but are struggling to unlock its full potential. Many have invested significant time in dbt migrations, yet find it challenging to deliver real business value using the existing tools. That’s where we step in, giving them the value they can’t quite achieve on their own. 

One success story comes from a large micromobility unicorn. They use Looker and dbt, and integrating dbt’s semantics with Looker has been a real challenge, as there’s no out-of-the-box integration between dbt Cloud and Looker. One of their data leaders posted about this on LinkedIn, and after I commented, we connected and started discussing her situation. She had been doing a tremendous amount of manual work to make dbt and Looker work together. However, after 10 years of using Looker and with so many domains depending on it, migrating to a new tool was a three-year project she didn’t want to undertake.

Her team loved Looker, but they had also made the transition to dbt for its version control, documentation, and open-source technology. However, making the two systems work together had been a huge headache. That’s when we stepped in with a solution. We automated her entire manual workflow—now, she uses Euno to sync dbt with Looker seamlessly. Every new piece of code written in dbt is automatically exposed to Looker. We’ve also introduced pre-aggregated metrics: when she writes metrics in dbt, there’s a button to pre-materialize them, with flexible options. Once pre-materialized, the metrics are always up to date, and any changes are synced to Looker.
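To illustrate the flavor of such a sync, here’s a toy Python sketch that renders a dbt-style metric definition as a LookML measure. The metric dict is a simplified stand-in for dbt’s semantic-layer spec, and the output is hand-rolled LookML, not what Euno actually generates.

```python
# Simplified stand-in for a dbt semantic-layer metric definition.
metric = {
    "name": "total_revenue",
    "description": "Sum of completed order amounts.",
    "agg": "sum",
    "expr": "order_amount",
}

def to_lookml_measure(m: dict) -> str:
    """Render a dbt-style metric as a LookML measure block."""
    return (
        f"measure: {m['name']} {{\n"
        f"  type: {m['agg']}\n"
        f"  sql: ${{TABLE}}.{m['expr']} ;;\n"
        f"  description: \"{m['description']}\"\n"
        f"}}"
    )

print(to_lookml_measure(metric))
# measure: total_revenue {
#   type: sum
#   sql: ${TABLE}.order_amount ;;
#   description: "Sum of completed order amounts."
# }
```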

By optimizing her entire analytics engineering process, we’ve drastically improved efficiency—and the ROI has been phenomenal.

Across the data stack 

It’s always valuable to have tools that can reduce friction and give people their time back, especially at high-touch points where processes can slow down. These tools provide significant returns, helping teams focus on what really matters. We’ve touched on various platforms like Looker and dbt, but you might be wondering: What technologies does Euno support in its role as that connective layer? 

Euno is warehouse-agnostic. We support all major cloud data warehouses, including Databricks, Snowflake, Redshift, and Azure. We’re integrated with dbt for your ELT, so if your tables are built in dbt, you’re good to go. Down the road, we’ll also support Snowflake tables directly, allowing more flexibility for teams not yet fully onboarded with dbt. 

On the BI side, we currently integrate with Looker and Tableau. Coming soon are Power BI and Sigma, which will expand the options for teams using different tools in their stack. Additionally, we provide an interface for data scientists using Jupyter Notebooks, enabling them to access metrics directly from dbt. We also offer a solution for SQL-savvy users working in Snowflake, giving them seamless access to the same metrics. 
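As a rough sketch of what notebook access to centrally defined metrics can look like, here’s a Python snippet that posts a GraphQL query for a dbt-defined metric. The endpoint, authentication scheme, and query shape are all placeholders I’ve made up for illustration; substitute whatever interface your semantic layer actually exposes.

```python
import os
import requests

# Placeholder endpoint and token; real values depend on your
# semantic-layer provider, not on any specific Euno or dbt API.
URL = os.environ.get("SEMANTIC_LAYER_URL", "https://example.com/api/graphql")
TOKEN = os.environ["SEMANTIC_LAYER_TOKEN"]

# Hypothetical query shape: ask for one dbt-defined metric by month.
query = """
query {
  metricValues(metrics: ["total_revenue"], groupBy: ["metric_time__month"]) {
    rows
  }
}
"""

resp = requests.post(
    URL,
    json={"query": query},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # rows of (month, total_revenue)
```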

By supporting these platforms, Euno acts as the bridge between your data warehouse, dbt, BI tools, and even data science environments—streamlining workflows across the entire data stack.

The only constant thing is change

When people talk about post-mortem strategies, it’s often about reflecting on past projects to figure out what went wrong. But we also advocate for pre-mortem strategies—anticipating potential pitfalls and steering towards success from the start. So, how do you apply that forward-thinking mindset in data projects?

For us at Euno, constant conversation is key. We never assume we know better than our customers, because we don’t have all the answers from our own experience alone. Instead, we engage with data leaders daily. Personally, I have at least three to four conversations each day, asking data professionals about their challenges, how they’re solving them, their most urgent pains, and how much time and management focus is directed toward these problems.

These pain points shift over time. A year and a half ago, many companies were in the early stages of adopting dbt. Some were still training analysts to become analytics engineers. Today, we’re seeing a shift: dbt is moving back into the hands of engineers, and organizations are realizing they need their analysts focused on business work. Over the past year, metrics in dbt have become more prominent, which has also impacted how teams work.

To stay relevant, we keep the conversation going and remain flexible with our features, priorities, and roadmap. We’re not just building a product—we’re solving our customers’ problems, and that requires us to stay adaptable as their needs evolve.

Shift left

You might have encountered the phrase “shift left” before—it’s a concept borrowed from cybersecurity and software development, and it’s highly relevant in the data world too.

In the data stack, if you visualize it from left to right, you have data sources on the left, followed by the warehouse, transformations, and then consumption tools like BI tools and notebooks on the right. The idea of shifting left is about centralizing business logic as much as possible on the left side of this stack. This ensures that all your tools, data users, and applications align with a consistent set of business logic.

However, not everything should be shifted left. For instance, if you’re testing an idea or a proof of concept that doesn’t gain traction, it might not be worth centralizing. Instead, focus on high-utilization items—things that have proven valuable and widely used. Aligning definitions for these high-usage elements ensures consistency where it matters most.

Another key reason to shift left is to optimize cost and performance. By centralizing elements that require optimization, you can improve scheduling and computational efficiency in the warehouse.
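Here’s a sketch of how utilization can drive that decision, assuming you already have a query log per field (from warehouse query history or BI audit logs); both the log format and the threshold are invented for illustration:

```python
from collections import Counter

# Illustrative query log: one entry per query, naming the BI field it used.
query_log = [
    "total_revenue", "total_revenue", "total_revenue",
    "net_revenue_experiment", "total_revenue", "churn_rate",
    "churn_rate", "total_revenue",
]

def shift_left_candidates(log: list[str], min_uses: int = 3) -> list[str]:
    """Fields queried at least `min_uses` times, most-used first.

    High-utilization fields are worth centralizing in dbt; one-off
    experiments can stay in the BI layer.
    """
    counts = Counter(log)
    return [field for field, n in counts.most_common() if n >= min_uses]

print(shift_left_candidates(query_log))  # ['total_revenue']
```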

At Euno, we provide our customers with the flexibility to identify what needs to be shifted left and the automation to make it happen. This approach helps them align their business logic and optimize their data operations efficiently.

What about GenAI?

Data and analytics are always evolving rapidly, and the rise of generative AI has shifted the trajectory of how we think about them. So what will data and analytics look like three years from now, once the immediate hype cycle has had time to settle?

When we began our startup journey, the buzz was all about GenAI-driven analytics and tools like ChatGPT for business users. It seemed strange that you’d still need to find the right dashboard to get an answer rather than just typing your question. So we decided to tackle this for business users. In researching it, we found that if you’re not in a specific vertical, like a product for SaaS companies or financial services, but are dealing with horizontal BI, which 90% of large organizations use, it’s incredibly hard to provide trustworthy answers with evidence. Most large organizations have five to ten, or even fifty, different definitions of “total revenue”.

Our conclusion, and my vision for the future, is that data modeling—which was once seen as a given, just building pipelines—is now the foundation for AI-driven analytics. There will be a new category of data modeling that covers business logic, semantics, and transformations. Regardless of the language or platform, your data model will need to be up-to-date, evolve safely, and be continuously maintained. This is how we envision the future: a platform dedicated to managing your data models.

What about you?

I’m always looking to connect with people experiencing these challenges and to hear how you view them. So, I invite you to reach out and schedule a chat; it will benefit you now, and it keeps the conversation going, which we all want as we tackle these issues together. The data and analytics community is a tight-knit group that frequently crosses paths at conferences and on LinkedIn. I believe everyone in this field is searching for solutions that reduce the overhead in our work and reclaim time to focus on valuable outcomes. As data professionals—whether data engineers, analytics engineers, or analysts—we often feel overwhelmed by the technical debt that accumulates, making us feel detached from the impact we have on our organizations. Let’s flip the script, shall we?

***

We’re heading to Coalesce Conference 2024! Explore the Euno way to cut engineering bottlenecks, see what analysts are up to, and shift your business logic left to dbt. Oh, and grab swag no analytics engineer could resist: Euno at Coalesce.
