
The pendulum - part 2

Because no one asked for 15 versions of the same chart.

Javier Bonilla

Co-founder of Querio

Jun 9, 2025

TL;DR

We rebuilt Querio to fix a messy, chat-based workflow that wasn’t working. Instead of piling up charts for every small change, we made the output the focus, added simple editing tools, and let users build on their work without starting over. It’s a cleaner, faster way to get answers—because good UX beats trendy interfaces.

If you haven’t read Part 1, go do that first. I’ll wait.

For those who did (and even those who didn't), here's a quick refresher. We discussed how the pendulum swings in tech, leading to some pretty crazy products. We saw how the LLM pendulum swung us all toward the same chat interface, and how our users started showing us it was time to swing back. Now let's talk about what we actually did about it.

Good UX principles

UX principles don't swing that much; they're relatively stable. But in the frenzy of some new tech, we tend to forget them. What I'm about to drop aren't revolutionary insights—they're basic UX principles that somehow got forgotten in the AI rush. Think of them as the steady foundation that keeps you grounded when the next shiny thing tries to sweep you away.

Start with the job to be done. People might try what you have, but they certainly don't pay you just because your product has AI; they pay you because you get a job done.

Make the output the hero, not the process. When someone wants an answer, they care about the result, not the 27 steps it took to get there. Everything else should fade into the background until they need it.

Iteration should build, not restart. Real work is iterative. When someone says "this is great, but can you make it blue instead?", they don't want to start over—they want to build on what they already have.

Give users control and transparency. People need to understand what's happening and feel like they can steer the ship. Black box magic might impress in demos, but it creates anxiety in real work.

These aren't just nice-to-haves—they're what separate tools people actually use from tools that get abandoned after the novelty wears off. Now let me show you how we applied them.

How we un-AI'd Querio

The n (f*****g) charts problem

Let me paint you a picture of the old Querio experience. Sarah, a marketing manager, asks our AI: "Show me campaign performance by channel for Q3." Great! She gets a beautiful chart showing Facebook ads crushing it, Google struggling, and email holding steady.

But then Sarah notices the chart is using our default blue color scheme, and she needs different colors per channel. So she types: "Can you make those bars blue for Facebook, red for Google, and gray for email instead?"

What happens next? Our AI agent regenerates the entire analysis with newly colored bars. Now Sarah has two charts in her chat: one blue, one mixed. Same data, same insights, just different colors.

She wants to adjust the title? Three charts. Change the date range slightly? Four charts. By the time she's done tweaking things for her presentation, she's got a graveyard of 15 nearly-identical charts cluttering her conversation, and she's lost track of which one was the "final" version.

This is LLM UX at its worst—treating every interaction like a fresh conversation instead of iterative work on a single output. It's like having a designer who throws away the entire mockup and starts over every time you ask them to change a font.

One output and three actions

We realized the problem wasn't AI itself—it was forcing every interaction through that particular UX, a linear chat interface. So we redesigned the entire experience around what people actually wanted to do after getting a data output.

When you prompt Querio, you most likely want a chart or a table (jobs to be done). We stream how our AI agent is generating the response (give users control and transparency), but once we get it, it's front and center (make the output the hero). We then focus on what you'll most likely do next and have a curated experience around each.

1. Edit this output - This takes you to a dedicated editing interface (iteration should build, not restart). The chart stays front and center while editing controls appear around it—color pickers, dropdown menus, inline text editing. Simple changes happen instantly. Fundamental changes give you different versions of the same output to choose from—not a pile of separate charts.

2. Ask a follow-up - This keeps your current output visible while exploring related questions (make the output the hero). "Now show me the same data broken down by region" appears alongside your original chart for easy comparison. You're building on what you discovered, and each follow-up gives you the same three options.

3. New topic - This opens a fresh tab with a clean slate (respect cognitive load). When someone switches from campaign performance to user retention metrics, they get a mental reset. The model gets better attention, the context stays focused, and Sarah doesn't lose her previous work in an endless scroll.
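The shift in "Edit this output" is architectural as much as visual: a tweak patches the single canonical output instead of rerunning the whole analysis and appending another chart. Here's a minimal sketch of that idea in Python—all names (`ChartSpec`, `edit_in_place`) are hypothetical illustrations, not Querio's actual internals:

```python
# Hypothetical sketch of "edit this output": a tweak patches the one
# canonical chart spec instead of regenerating a brand-new chart.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ChartSpec:
    title: str
    colors: dict          # series name -> color
    date_range: tuple     # (start, end)

def edit_in_place(spec: ChartSpec, **changes) -> ChartSpec:
    # Simple changes (colors, title, dates) produce an updated version
    # of the same output, not an extra chart in a chat transcript.
    return replace(spec, **changes)

spec = ChartSpec(
    title="Campaign performance by channel, Q3",
    colors={"facebook": "blue", "google": "blue", "email": "blue"},
    date_range=("2025-07-01", "2025-09-30"),
)

# Sarah's recolor request: one edit, still one chart.
spec = edit_in_place(
    spec, colors={"facebook": "blue", "google": "red", "email": "gray"}
)
```

The point of the sketch: when the output is a stable object you mutate (rather than a message you regenerate), the "15 nearly-identical charts" problem can't happen by construction.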

The AI we did keep

Here's the thing—we didn't throw the baby out with the bathwater. The AI is still doing the heavy lifting of turning "show me campaign performance" into actual SQL queries, Python analysis, and beautiful visualizations. That part was never the problem.

What we kept:

  • AI agents at the core of how we write code

  • Prompts as an entry point

  • Automatic chart generation and styling

  • Smart suggestions for follow-up questions

What we ditched:

  • The linear chat itself as the means of AI communicating back

  • Regenerating entire outputs for simple changes

  • Forcing every interaction to start with a prompt

The result feels like magic when you need it to, and like a proper tool when you need control.

Swinging back to balance

The pendulum always swings back, but the trick is recognizing when it's time to course-correct before you hit the wall. Our users told us through their behavior that the chat-everything approach wasn't working. They were abandoning sessions, getting frustrated with simple changes, and generally not getting the value they came for.

The ChatGPT pendulum swung us all toward the same interface because it was the obvious thing to do. Chat worked for ChatGPT, so it must work for everything, right? But ChatGPT is designed for open-ended conversation, while data work demands focused, iterative analysis. Different jobs, different tools.

This isn't about being anti-AI or anti-innovation. It's about being pro-user and pro-getting-shit-done. The pendulum taught us that the most powerful technology in the world is useless if it's wrapped in the wrong interface.

The next time a new tech wave hits—and it will—remember the pendulum. Get excited, experiment, push boundaries. But keep an eye on your users and the fundamentals. The magic isn't in the technology itself; it's in how thoughtfully you apply it to real human problems, to real jobs to be done.

And maybe, just maybe, you won't end up with two fucking charts when all someone wanted was a different color.

Querio

Query, report, and explore data at a technical level.

2025 Querio Ltd. All rights reserved.