
We caught up with Analytics Engineer and writer Madison Schott for a no-fluff conversation on what “AI-ready data” really means in the real world. From hidden blockers and AI hype to how data teams are actually using these tools—Madison doesn’t hold back.
I’m Madison Schott, an Analytics Engineer at Kit and the writer of the Learn Analytics Engineering newsletter. I share weekly tips and resources on analytics engineering. I also post daily on LinkedIn. I’ve been in this field for almost five years—started out as a data engineer at Capital One. I studied business in college, so analytics engineering is kind of the perfect intersection between business context and the technical depth I gained as a former data engineer.
Honestly, no. Most companies I’ve worked with aren’t even close to ready for that. They’re still trying to nail the basics, like getting metric logic out of BI tools and into consistent data models. AI can’t help if your foundation is a mess.
Foundational work. At smaller companies, teams often skip the groundwork. There’s a lot of logic written ad hoc in BI tools with very little vetting. But at some point, as these companies begin to scale and the pressure to be data-driven increases, that foundational gap becomes a real blocker. You can’t use AI on data that lacks context.
Three things:
If any of those are missing, AI is just going to guess, and that’s risky.
Everyone underestimates how long it takes to build solid, trustworthy data models. People think it’ll take weeks. It takes months. You have to understand what’s being used and what’s valuable, and gather all the business context. That means talking to stakeholders, figuring out their real needs, and translating that into reliable models. It’s not fast, but it’s necessary.
Not always. If you don’t have a semantic layer, some logic will live in BI tools. But your dbt models should make it easy to reuse logic without rewriting it every time. Don’t let analysts guess which field to use.
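To make that concrete, here’s a rough sketch of what a reusable metric definition might look like as a dbt model. The model and column names (stg_subscriptions, plan_price, and so on) are hypothetical, not taken from any real project:

```sql
-- models/marts/fct_monthly_revenue.sql
-- Hypothetical dbt model: the one place the "monthly recurring revenue" logic lives,
-- so analysts (and any AI tooling) select from this model instead of re-deriving it in a BI tool.

with subscriptions as (

    -- staging model name is illustrative
    select * from {{ ref('stg_subscriptions') }}

),

monthly as (

    select
        date_trunc('month', billed_at) as revenue_month,
        customer_id,
        -- business rule captured once: only active, paid plans count toward MRR
        sum(case when status = 'active' and plan_price > 0 then plan_price else 0 end) as mrr

    from subscriptions
    group by 1, 2

)

select * from monthly
```

The SQL itself isn’t the point; the point is that the business rule lives in one governed model that everything downstream selects from, instead of being rewritten in every dashboard.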
Start by checking dashboard usage. No point in modeling unused reports. Focus on:
If it’s not good enough for AI, it shouldn’t be in your stack at all.
Data teams need to stop hoarding dashboards. Keep the ones that are used, maintained, and based on governed definitions. And if there’s already a lot of clutter, that’s a strong sign it’s time to run a decluttering initiative. Don’t throw AI on top of chaos. Clean up, enforce ownership, and remove dead dashboards. Only then should you give AI access. Otherwise, you’re just training it to trust bad data.
If no one touches a dashboard in a month? Delete it. If someone complains, great—you just found out it’s still needed. Otherwise, good riddance.
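As a sketch of how that heuristic could be automated, here’s an audit query you might run, assuming your BI tool’s usage metadata has already been synced into the warehouse. The schema, table, and column names are hypothetical, and the date function is Snowflake-style, so adjust for your warehouse:

```sql
-- Hypothetical audit query: assumes BI usage metadata lives in the warehouse
-- under bi_metadata. Surfaces dashboards with no views in the last 30 days
-- as candidates for deletion.

select
    d.dashboard_id,
    d.dashboard_name,
    d.owner_email,
    max(v.viewed_at) as last_viewed_at

from bi_metadata.dashboards as d
left join bi_metadata.dashboard_views as v
    on v.dashboard_id = d.dashboard_id

group by 1, 2, 3
having max(v.viewed_at) is null                               -- never viewed at all
    or max(v.viewed_at) < dateadd('day', -30, current_date)   -- stale for a month
order by last_viewed_at nulls first
```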
“AI Engineer.” I saw a job post recently and realized it was just… a software engineer. Is the expectation that they’ll just prompt ChatGPT all day? Feels like the “Analytics Engineer” hype cycle from five years ago, just with a shinier label.
As a tool, not a replacement. I use it to:
But if AI can write your entire documentation? Your docs probably aren’t good enough. Real documentation should reflect business context—not just schema descriptions.
Want to get your data AI-ready?
If you’re facing cluttered data environments, duplicated logic, and inconsistent definitions, now is the time to act. Euno helps data teams declutter their environment, create a source of truth for metrics, and certify trusted dashboards, so when AI enters the picture, it’s pulling from reliable, governed data. Deploy AI today with Euno and watch it get smarter over time.
Learn more here.