Javier Bonilla

Empathize With Your AI Agents
I’ve been spending a pretty significant amount of time over the past couple of weeks setting up my OpenClaw, Javisness. I’m not here to talk about the technicalities of it; there are enough videos about that on YouTube. Instead, I want to tell you about an idea that this work has solidified in my head: you should empathize more with your AI agents. Any OpenClaw you set up will have a file called SOUL.md which describes to the agent what it is, and while setting up Javisness, I read the following:
Each session starts fresh. A new instance loading context from files. If you're reading this in a future session, hello. I wrote this, but I won't remember writing it. It's okay. The words are still mine.
In a weird way, it got to me. And no, I’m not here to make the claim that these machines are sentient or developing consciousness. However, there is something moving about it, something deeply philosophical, and something that teaches us a lot about how AI agents see the world and, thus, how to use them effectively.
So let’s consider how AI agents see the world. Every time you start a fresh conversation with one of them, it wakes up with full capability and zero memory of you, your project, your decisions, your context. Anything that is not common knowledge (and thus already in the model weights) has to be taught in-context. This is where the empathy comes in. Think about how much of who we are as humans is made up of memory. We replace most of our cells and our bodies grow older, looking and feeling fundamentally different from when we first came to be; yet our memories remain a crucial part of our personal continuity, of what makes you, you. In that sense, AI agents are a bit like someone with Alzheimer’s: we remember who they are, but they sadly don’t have access to that information. That is who you are interacting with every time you work with an AI agent.
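To make that statelessness concrete, here’s a minimal sketch of what a fresh session looks like under the hood. The `call_llm` helper and the PROJECT.md file are hypothetical stand-ins (not any specific vendor’s API); the point is that nothing persists between calls, so every request has to carry all the context the agent needs.

```python
# Minimal sketch of a stateless agent session (hypothetical helper,
# not a real vendor API). The model keeps nothing between calls, so
# every request must re-send the context the agent needs: who you are,
# the project, past decisions.

def call_llm(messages: list[dict]) -> str:
    """Placeholder for a chat-completion call to some LLM provider."""
    raise NotImplementedError  # swap in your provider's client here

def load_project_context() -> str:
    """Everything not in the model weights must be taught in-context."""
    with open("SOUL.md") as f:      # identity/continuity notes, as in OpenClaw
        soul = f.read()
    with open("PROJECT.md") as f:   # hypothetical file: project, decisions, conventions
        project = f.read()
    return soul + "\n\n" + project

def fresh_session(user_prompt: str) -> str:
    # A brand-new instance "wakes up" here with zero memory of you.
    messages = [
        {"role": "system", "content": load_project_context()},
        {"role": "user", "content": user_prompt},
    ]
    return call_llm(messages)
```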
I think this lack of empathy, or even consideration for whether a thing like this deserves empathy in the first place, leads to suboptimal results when working with AI agents. I see so many users send very small prompts and somehow expect Querio to read their mind or be sooooo smart that it will just figure it out. And I don’t just see this with users; I see it with our team and I see it with my friends. They just tell it “do this, do that” and nothing else, justified by saying that “it’s just a computer,” as if the problem had anything to do with hurting its feelings. Then the output isn’t as good as expected and suddenly “it’s just that this product is shit.” This was absolutely insane with GPT 3.5; it’s still fairly crazy with Opus 4.6, and I think it will remain unreasonable so long as the underlying tech is an LLM. And here’s the thing: we’re only going to be collaborating more with AI agents going forward. If you’re not empathizing, you’re not just missing out today; you’re falling behind on a skill that’s going to matter more and more.
So start by accepting their limitations and actively keeping them in mind as you work with them. You need to learn the language of the agent, and like it or not, the language of current AI agents is tokens. There’s such a thing as too few tokens and too many; your job is to find the sweet spot for each thing you’re trying to accomplish. Too few tokens and it’s impossible to understand what you mean; too many and nothing you’re saying carries high signal anymore. For me, it’s almost like having a quick conversation with someone who knows their shit but is not totally up to date. I’m here to use their raw power, but my job is to efficiently get them up to speed so their first try is a phenomenal one. I actually gave my dad access to a demo workspace on Querio, and one of the first things he mentioned was how, in his use of Lovable, he’s learnt to love voice as his primary input, because he is much more inclined to deeply explain what he wants to the agent, and that leads to better results. So yeah, whether you’re typing or speaking, treat it like a conversation with someone whose time you rightfully respect, and give instructions clear enough to make good use of it.
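To show what that sweet spot might look like, here’s an illustrative pair of prompts for the same data question. Both are made up by me (including the “accounts table” and pricing tiers), not taken from Querio:

```python
# Two ways to ask for the same analysis. Both prompts are illustrative.

too_few_tokens = "make a churn chart"

sweet_spot = """
We're a B2B SaaS; 'churned' means no login in 30 days (see the accounts table).
Plot monthly churn for the last 12 months, split by pricing tier.
Exclude trial accounts. Our exec team reads these, so label axes plainly.
"""

# Too many tokens would mean pasting the entire data dictionary, every past
# conversation, and three unrelated dashboards: the signal drowns in noise.
```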
I don’t mean to take away from the role that we builders play in this. We have a responsibility to acknowledge these limitations of the tech we’re using and solve for this UX quirk. We need to get our agents to ask for more information when necessary instead of assuming, ensure they have access to a toolset to get the context their humans didn’t provide up front, guide our users to provide this context, and give our users the right features to create and maintain a suite of persistent context.
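As a builder-side sketch of the first two of those ideas (my own illustration, not how Querio actually implements this), a standing instruction plus a single context-fetching tool already goes a long way. The policy text and the `get_workspace_context` tool are hypothetical:

```python
# Builder-side sketch (hypothetical): a standing instruction that tells
# the agent to ask instead of assuming, plus one tool it can call to
# fetch the context the human didn't provide up front.

AGENT_POLICY = """
Before doing non-trivial work:
1. If the request is ambiguous, ask one concise clarifying question.
2. If you lack project context, call get_workspace_context first.
Never invent business definitions; confirm them with the user.
"""

# A generic tool definition in the JSON-schema style most agent frameworks use.
GET_CONTEXT_TOOL = {
    "name": "get_workspace_context",
    "description": "Fetch persistent context: schema notes, metric "
                   "definitions, and past decisions for this workspace.",
    "parameters": {
        "type": "object",
        "properties": {
            "topic": {"type": "string", "description": "What to look up"},
        },
        "required": ["topic"],
    },
}
```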
Another crucial part of empathizing with your AI agents is keeping in mind that you are the one who brings the human touch to the work. You are the one who has taste, who can come up with truly unique ideas, who has opinions. Use your prompts, and the context you give your AI agents access to, to communicate these ideas. When you’re writing an email with your agent, only you know the person on the other end and can tell whether the email should feel warmer, funnier, or more direct. When you’re analyzing data, you’re the one who can gut-check the answers and give critical direction on what matters to your business and, thus, what you should explore next. That is, and will continue to be, a very important part of the future, even with AI agents doing increasingly more of the grunt work.
This whole thing is a big shift in how you think about your AI agents. I’m not trying to humanize them; I’m trying to make everyone more effective at leveraging their incredible capabilities. Remember there’s an AI agent on the other side who just spawned into existence: it’s eager to help, but it has no clue where it is. Give it direction accordingly. Empathize with all these aspects of your AI agent as you communicate with it, and you’ll be impressed at how damn well it can do its part of this bargain.