Large Language Models (LLMs) are often marketed as the ultimate productivity boost for developers: “Write code faster! Debug with AI! No more manual work!” After a recent experience, I can confirm that LLMs are incredibly useful for writing and even structuring code (I’ll probably write about this in a later blog post).
But when it comes to debugging, you should make really sure that the tool has access to all the relevant context (and don’t switch off your brain). But let’s see what happened:
For a couple of days (well, nights mostly, after work), I had been writing a web application. The Copilot experience was very good and it really helped tremendously. I never really ran into a situation where I had to debug. And I was curious when (if?) I’d run into that – and how things would turn out then.
Chapter 1: The First Sign of Trouble
The code worked flawlessly on my local machine. I deployed it to my shared web host, and — success! — most of the application ran without a hitch. But there was one directory, just one, that kept throwing a 500 Internal Server Error.
I copied the error message, pasted it into the coding assistant’s chat box – and waited, curious what it would make of it.
Chapter 2: The Experiment Begins
My assumption already was: this wasn’t a code problem. The rest of the application worked fine, and the error only occurred in one specific directory. That usually means server configuration – something outside the codebase.
But I wanted to test the assistant. Could it figure this out? How would I have to use the tool so that it caught the issue – or was I missing something obvious?
Entering the Death Loop
The assistant sprang into action. It analyzed the code, suggested tweaks, and even generated debug scripts. But nothing worked. Even though I tried to guide it step by step – just like I would a human – it kept missing the point.
After a while of going in circles (really seeing the same messages over and over again), I accepted that the assistant wasn’t just failing to fix the issue – it wasn’t asking the right questions.
Okay, stop. This is getting absurd
What happened? It never once considered:
- What’s different about this directory?
- Could this be a server configuration issue?
- What does the local environment look like compared to the server?
It was stuck in its own context – the project files – blind to everything else. Of course it didn’t have access to the prod environment. But the fact that it didn’t even ASK for that information was a bit disappointing.
Chapter 3: Assist the Assistant
Okay, time to step in and help, I decided. I still didn’t know for sure what the issue was, but I had a strong suspicion: it had to be a specific part of the server config.
I created two simple scripts to dump local and server configs. I fed both files to the assistant: “Compare the configs from server and local. Could the differences explain the error?”
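The comparison step itself is something an LLM handles well, but it can also be sketched in a few lines of Python. A minimal sketch using the standard-library `difflib` – the config lines shown here are hypothetical placeholders for the real dumps:

```python
import difflib

# Two hypothetical config dumps — in practice these would be read from
# the files produced by the dump scripts, locally and on the server.
local = ["PHP SAPI: apache2handler\n", "mod_rewrite: loaded\n"]
server = ["PHP SAPI: cgi-fcgi\n", "mod_rewrite: loaded\n"]

# unified_diff shows only what differs between the two environments,
# prefixed with - (local only) and + (server only).
diff = list(difflib.unified_diff(local, server,
                                 fromfile="local", tofile="server"))
print("".join(diff))
```

A unified diff like this is also a compact thing to paste back into the assistant’s chat: instead of two full dumps, it sees only the handful of lines that actually differ.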
The Breakthrough
And lo and behold – the assistant immediately highlighted the issue! PHP is installed as FastCGI on the server, which makes mod_rewrite behave a little differently. Something I had not replicated in my local dev setup (because it had never been an issue before).
I was quite happy that I had been right in suspecting mod_rewrite as a candidate – but also that the assistant immediately spotted the difference after being nudged in the right direction. Moments later, the fix was applied, uploaded, and confirmed working.
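The post doesn’t spell out which directive ultimately changed, so here is just one well-known example of how mod_rewrite behaves differently when PHP runs as FastCGI instead of mod_php: Apache strips the HTTP Authorization header before it reaches PHP unless a rewrite rule re-exports it. An illustrative .htaccess fragment – an assumption for illustration, not necessarily the fix applied here:

```apache
RewriteEngine On
# Under FastCGI/CGI, Apache does not pass the Authorization header
# through to PHP; re-export it as an environment variable so that
# $_SERVER['HTTP_AUTHORIZATION'] is populated again.
RewriteCond %{HTTP:Authorization} .
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
```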
Chapter 4: The Lesson
What Went Wrong
The assistant’s scope was limited to the project directory. It analyzed the code and my local Docker setup — but not the production environment, where the issue was hiding.
Even after I mentioned the server environment, it didn’t request the missing context (like config dumps) or expand its focus beyond the codebase. That was a bit disappointing.
Debugging isn’t just about fixing code — it’s about understanding the system. The assistant couldn’t do that on its own. Not yet, at least.
I had hoped it would ask for the server configs itself — but it didn’t. That’s the gap between a tool and an engineer.
The Bigger Picture
This experience taught me – or rather confirmed – three things:
- LLMs are incredible tools – but they’re not engineers. They do not (yet?) consider or request context outside of what they are given, such as anything beyond the project codebase.
- Context is everything (as I’ve written before). If you don’t provide the full picture, the AI can’t help. And sometimes, you don’t even know what the full picture is until you start digging (as I wrote here as well).
- Experience and a human eye help a lot, because we humans usually have, know, or see more context than any system (unless the system really has access to everything – and isn’t overwhelmed by that huge context).
Closing: A Partnership, Not a Replacement
I still really appreciate the coding assistant. It saves me hours of work, and some projects I simply wouldn’t take on without it, because my private time is limited. But I always use it with care:
- For writing code, it’s a powerhouse!
- For debugging and fixing, it’s usually great as well.
- I must always understand the overall system! When the tool fails, it falls back on me to guide it. If the whole thing is a black box, then both the tool and I have no clue – and that’s not good.
The next time you hit a weird bug, remember: The AI doesn’t see what you don’t tell it. And sometimes, the problem isn’t in the code — it’s in the world around it.