A fresh perspective on how data leaders can better understand and prepare for AI-driven analytics.
Just a month ago Tableau announced the beta launch of Einstein Copilot for Tableau: a powerful AI assistant that uses natural language and generative AI to accelerate data exploration and analytics. A game-changer, but only if the data models it’s based on are consistent.
Data modeling thus becomes not just a nice-to-have but a crucial necessity for producing trustworthy AI-driven insights. Without a solid governance strategy, your AI-generated data outputs might be completely unreliable.
The present future of AI and Analytics
The integration of generative AI is undoubtedly transforming the landscape of automated data analysis and SQL query optimization. In the foreseeable future, it may even extend to tasks such as constructing dashboards and calculating metrics upon request.
But in a context where business analysts are increasingly embracing self-serve analytics through their preferred BI tools, the inclusion of AI demands a careful approach from data practitioners.
The challenge of AI-driven data integrity
It’s no secret that business analysts have developed a profound affinity for Tableau – and the reasons are crystal clear. Beyond its rich array of visualization tools, Tableau has spearheaded a revolution in AI-driven self-serve analytics.
There is no doubt that we want analysts to harness BI tools like Tableau, Looker, and Power BI. We want them to flex their self-service and AI muscles for rapid prototyping and dashboard creation. But, and there’s always a but, this may also have unintentionally created a Wild West where analysts run free with no sheriff in sight.
The reliability of AI-assisted analytics is only as good as the data models underneath it. If those models contain inconsistencies or discrepancies, the insights the AI generates from them cannot be trusted.
The evolving role of analysts
In a recent episode of DataCamp’s podcast about the impact of AI on professional domains, General Partner at Theory Ventures and data trailblazer Tomasz Tunguz highlighted a critical concern: data quality issues can potentially derail AI projects. Tomasz emphasized the critical need for data teams to prioritize data governance and embrace engineering practices to enhance the reliability and accuracy of AI outputs.
Complicating matters further is the evolving role of analysts, who are no longer merely consumers of data but also contributors, shaping new business logic daily. This evolving logic of metrics and calculated fields must be continuously coded into the data layer through a governed process to keep data models consistent. It is our best guarantee for the production of quality data through AI-based analytics.
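As a minimal sketch of what "coding business logic into the data layer" can mean in practice (the metric name and formula here are hypothetical examples, not taken from any specific tool): a calculated field that an analyst prototypes in a BI tool is codified once, in a single governed definition, so that every downstream consumer – dashboard, notebook, or AI assistant – computes the metric the same way.

```python
# Hypothetical example: a governed, single-source definition of a metric.
# Instead of each dashboard re-implementing "net revenue" as its own
# calculated field (with subtly different logic), the formula lives once
# in the data layer and everything downstream calls it.

def net_revenue(gross_revenue: float, refunds: float, discounts: float) -> float:
    """Single governed definition of the 'net revenue' metric."""
    return gross_revenue - refunds - discounts

# Any consumer -- a dashboard, an ad-hoc query, or an AI-generated
# analysis -- uses the same function, so the numbers always agree.
print(net_revenue(1000.0, 50.0, 120.0))  # 830.0
```

The design choice is the point: changes to the metric go through one reviewed definition rather than drifting across dozens of copies, which is exactly the consistency an AI assistant needs to produce trustworthy answers.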
Data governance at the forefront
Perhaps the most important principle when adopting modern analytics is that self-service and governance are not mortal enemies. Governance is what makes self-service possible. The same applies to AI-powered insights. The entire purpose behind them is to enable everyone to ask questions and make informed business decisions, fast.
With AI becoming a part of analytics, data modeling and governance return to center stage. The space for innovation lies in developing data modeling practices that offer a more comprehensive approach to semantic alignment and capturing business logic effectively across different layers.
The pivotal question for data leaders is this: How do you lay the groundwork to make AI work for your analytics team?
***
Euno it! Euno is officially on the dbt Cloud integrations list. Check us out under Data Catalogs.