7 min read
On the rise of AI at work. A take from August '25

I hate having to write about AI because we’re in the middle of the bubble. In fact, the craze is such that many have already warned about the rapid adoption of AI tools being a financial threat for companies. Well, good. I hope this financial threat teaches us all a lesson so that we get rid of the bubble faster and can move on with whatever creativity remains in our daily lives.

Look, I’m using AI on a daily basis myself. It does increase my perceived productivity, and it occasionally provides me with micro dopamine rushes when I’m able to solve a small problem twice as fast as I would have a year ago. But at the same time, I’m very much aware that it actively debilitates me. The habit of delegating is addictive, just ask any manager.

Coding faster but questionably

I’m using Cursor at work. At first I was blown away. I have to admit that the application of Gen AI on top of corporate context is where it brings the most incremental value to our routines. I had the same experience with Glean at first, as it was able to read all of our corporate jargon from Confluence, Slack and Miro and feed it back to us as guidance, often at a quality level good enough to save me the searching and let me jump across the different sources faster. Cursor brought yet another increment to my professional, technical day-to-day work.

As a data analyst, I build models. Whether I need to create something new from scratch, migrate or refactor, there’s always a significant amount of context required to understand what’s happening inside a dbt project. Cursor stands out because it’s able to read across the entire project and understand its set of interdependencies, at a micro level where the structure and framework of dbt is inherently understood and some of the blockers are anticipated and handled upfront. In addition, it autonomously interacts with the user’s terminal to run queries, just like an assistant or junior analyst would independently attempt while you, the almighty, occasionally review the output and try to critically assess the quality of the work being produced.

By now, the majority of tasks I need to complete through a data model pass through a varying number of AI chat agent prompts to get semi-assisted coding in place. I don’t know to what extent the nature of data analysts’ code is particularly suited to Cursor’s abilities. To some degree, analysts need to repeat mundane tasks when they want to add a new data source to their data lake, or when they need to join an existing model to another. Maybe it’s less relevant when you’re doing advanced data science.
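The “join an existing model to another” chore is exactly the kind of boilerplate that gets delegated. A minimal sketch of what such a join boils down to, using Python’s built-in sqlite3 as a stand-in warehouse (all table and column names are invented for illustration):

```python
import sqlite3

# In-memory stand-in for a warehouse; "orders" and "customers" are invented models.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
con.execute("CREATE TABLE customers (customer_id INTEGER, country TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 5.0), (2, 7.5)])
con.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "FR"), (2, "DE")])

# The mundane join an assistant would happily draft on request:
rows = con.execute(
    """
    SELECT o.customer_id, o.amount, c.country
    FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.customer_id
    ORDER BY o.customer_id, o.amount
    """
).fetchall()

print(rows)  # [(1, 10.0, 'FR'), (2, 5.0, 'DE'), (2, 7.5, 'DE')]
```

In a dbt project, the same pattern just swaps the raw table names for `ref()` calls, which is precisely why the task is so repetitive and so easily delegated.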

One major breakthrough still to be realized is a fully synced connection with a data warehouse system, with the ability to run queries independently and interpret their results. Once that’s achieved, we could expect background agents to work independently on creative refactoring tasks during the night. Employees would review that work at the start of the working day. Or, even more likely, another set of trained agents, powered by alternative models, would critically review the work that was produced.
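Since this setup is speculative, here is only a toy sketch of its shape: a worker agent proposes a refactor overnight and a second reviewer agent gates what gets kept. Both agents are stubbed with plain functions and invented heuristics; in reality each would be a call to a separate model:

```python
def refactor_agent(model_sql: str) -> str:
    # Stub worker agent: pretend it tightens a lazy SELECT * (invented heuristic).
    return model_sql.replace("SELECT *", "SELECT customer_id, amount")

def reviewer_agent(old_sql: str, new_sql: str) -> bool:
    # Stub reviewer agent: approve only proposals that avoid SELECT * (invented rule).
    return "SELECT *" not in new_sql

def nightly_run(models: dict) -> dict:
    """Run the worker agent on every model; keep only reviewer-approved proposals."""
    approved = {}
    for name, sql in models.items():
        proposal = refactor_agent(sql)
        if reviewer_agent(sql, proposal):
            approved[name] = proposal
    return approved

result = nightly_run({"orders_enriched": "SELECT * FROM orders"})
print(result)  # {'orders_enriched': 'SELECT customer_id, amount FROM orders'}
```

The interesting design question is entirely in the reviewer: whether a second model with different training is actually an independent check, or just a correlated rubber stamp.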

The long-term negative impact

I am convinced that the long-term impact will be negative for human data analysts. I don’t want to extrapolate to many other professions because I’m convinced that there is a degree of field specificity that will ultimately determine the extent to which these AI assistants can scale.

In terms of data analysis, it’s clear that the time-saving aspect will get to our heads and we will increasingly rely on those assistants to create and productionize code that then becomes available in BI reporting tools. Our end stakeholders, AI and human together, will then interpret the results to drive meaningful actions. But deciding on what is meaningful is one thing. Garbage in, garbage out.

I see two main challenges:

  1. Some of the insights will be interpreted automatically by other AI assistants, creating a cloud of repetitive reports and assumptions that come out too bland, lacking the storytelling creativity we need in order to understand the big picture. The result I’ve observed is an overall fatigue and disinterest among human readers. Keeping them engaged will be challenging. Even prompting the interpreting agent to be more creative will only lead to an increased rate of hallucinations and overinterpretations that could drive disastrous business decisions.
  2. The increased reliance on AI coding assistants will also lead to more errors or superfluous work being merged into production. As an analyst, you traditionally work on models at different priority levels. There are core entities whose processes are reviewed by the masses to ensure a properly efficient, high-velocity pipeline. On the other hand, there are lower-impact models that are usually used for specific ad-hoc analyses or reporting in a sub-field of a given department. I foresee review quality for those specific models plummeting. Obsessed with the time gain achievable from coding agents, we will lower our reviewing standards and let more things slip into production. At first, it will be a single column that was wrongly added and passed a test that was also created by an AI. Eventually, it will be an erroneous business definition, itself sourced from a wrongly imported dataset.

Time and creativity

A general problem for creative work

Obviously Gen AI is problematic for all types of creative industries, and data analysis is far from the most urgent area where we want to sustain creativity (although I would argue that a minimum threshold is needed, as mentioned earlier regarding the importance of storytelling). But to try and generalize, I’d say the incremental gain in time doesn’t justify what we lose in terms of creativity. And whether we’re really conscious of it or not, creativity is essential to all of us.

I believe we will eventually realize this more consciously as we start hitting roadblocks that can be linked to the overuse of and dependence on these new technologies. Once the business and financial metrics start taking a hit because of infrastructure issues. Once the overall sentiment and excitement about a given service or product drops because it has regressed to the mean. Once company CEOs realize that cutting their workforce in half and spending additional budget on AI assistants didn’t lead to that mouth-watering growth figure the McKinsey consultants announced (wait, did they actually crunch the numbers independently?). Then we will start opting out of all of those additional tools and properly reflecting on them.