Tag: SoftwareEngineering

  • AI Can Write Code, But It Can’t Debug Without Context

Large Language Models (LLMs) are often marketed as the ultimate productivity boost for developers: “Write code faster! Debug with AI! No more manual work!” After a recent experience, I can confirm that LLMs are incredibly useful for writing and even structuring code (I’ll probably write about this in a later blog post).

    But when it comes to debugging, you should make really sure that the tool has access to all the relevant context (and don’t switch off your brain). But let’s see what happened:

    For a couple of days (well, mostly nights, after work) I had been writing a web application. The Copilot experience was very good and it really helped tremendously. I never ran into a situation where I had to debug, and I was curious when (if?) I’d run into that – and how things would turn out then.

    (more…)
  • I Stopped Manually Committing – Here’s Why

    I don’t code much in my day job anymore, but I still love building things. So last weekend, I finally took the time to test GitHub Copilot’s Agents feature — specifically, a Commit Agent. I’ve seen agents.md and knew the theory, but I wanted the live experience: Could this actually improve my workflow, or was it just another layer of automation hype?

    Even when working alone, I sometimes need to revert—and that’s when I really appreciate clean, atomic commits. But let’s be honest: I’m not always disciplined enough to enforce that myself. So I figured, why not seek the help of an agent?

    (more…)
  • Stop Reinventing the Wheel – go to Community Events!

    I can only advise everyone in tech: Go to events, meetups, and webinars — talk to people, exchange ideas. We all face similar challenges. You don’t have to solve everything alone. And if an event turns out to be a dud? Well, so be it — at least you might have grabbed some free food.

    Recently, after a long time, I went back to a Meetup from Munich Datageeks e.V., and it reminded me: Just being able to discuss a few half-baked ideas or questions with someone can make a huge difference. Chances are, the other person has already tried some of them — and that alone can save you a ton of time!

    At that particular meetup I got some practical ideas for topics we’re discussing at work and where we don’t have a clear solution yet. The other company had already tried some of these things and confirmed some of my (theoretical) concerns.

    Later at the same event I talked about AI-assisted coding with some other folks. I simply don’t have the time to try out all the tools! Speaking to real developers – not just watching YouTube videos or listening to podcasts – and hearing their real-life experience is just precious.

  • One of the most potentially dangerous failure modes of LLM-based coding assistants …

    I really like having Jason Gorman’s blog posts in my RSS reader. Especially when he’s highlighting some critical issues with AI assisted coding.

    This paragraph for example really made me smile:

    For example, a common strategy they use when they’re not able to fix a problem they created is to delete failing tests, or remove testing from the build completely,

    What Makes AI Agents Particularly Dangerous Is “Silent Failure”

    I just had to smile because I probably would have been quite surprised to see that happening.

    But okay. It’s another thing I’ve put on my mental list of things to watch out for when doing AI-assisted coding.

    Check out his post: https://codemanship.wordpress.com/2026/02/27/what-makes-ai-agents-particularly-dangerous-is-silent-failure/

  • Agent finops

    The start of this article made me laugh:

    The first time my team shipped an agent into a real SaaS workflow, the product demo looked perfect. The production bill did not.

    FinOps for agents: Loop limits, tool-call caps and the new unit economics of agentic SaaS

    I wasn’t laughing out of malicious joy, but because it’s something that quite a lot of people don’t think about when they start AI / agentic coding: whenever you give the program flow the ability to make queries based on its own judgement, think about the case that the thing (I don’t want to call it AI) could run into an infinite loop. And every query to the LLM generates real costs.

    And by “costs” I don’t just mean “a busy CPU” like in traditional infinite loops. More like “costs” in the sense of Lambda horror stories: suddenly, every loop iteration querying your LLM provider hits your budget.

    And that might get even more interesting in the case of vibe coding, where such an infinite loop is buried in thousands of lines of auto-generated code. Oh, we have interesting times ahead!
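    The fix the article proposes – loop limits and tool-call caps – boils down to a hard budget around the agent loop. Here is a minimal sketch in Python; `call_llm`, the per-call cost figure, and the loop shape are all illustrative assumptions on my part, not any real provider’s API:

```python
# Sketch: a hard cap on LLM round-trips per task, so a runaway agent loop
# fails fast instead of silently burning budget.
# (call_llm and COST_PER_CALL are illustrative assumptions, not a real API.)

MAX_CALLS = 20        # hard cap on LLM queries per task
COST_PER_CALL = 0.01  # assumed average cost per query, in dollars


class BudgetExceeded(Exception):
    """Raised when the agent loop hits its call cap."""


def run_agent(task, call_llm, max_calls=MAX_CALLS):
    """Drive the agent loop, refusing to make more than max_calls LLM queries."""
    state, calls = task, 0
    while True:
        if calls >= max_calls:
            raise BudgetExceeded(
                f"stopped after {calls} calls (~${calls * COST_PER_CALL:.2f})"
            )
        calls += 1
        state, done = call_llm(state)  # one (counted) round-trip to the LLM
        if done:
            return state, calls
```

    The point is the unconditional counter: even if the model never signals “done”, the loop terminates after a known, bounded spend.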

    Check out the article: https://www.infoworld.com/article/4138748/finops-for-agents-loop-limits-tool-call-caps-and-the-new-unit-economics-of-agentic-saas.html

  • AI amplifies DevOps

    DevOps is the backbone of modern software delivery. The latest insights from Developer Tech on Perforce’s AI-driven tools highlight why — again.

    70 percent of the organisations report their DevOps maturity materially affects their success with AI. Rather than replacing established delivery practices, proper foundational workflows serve as the prerequisite for scaling these capabilities.

    Perforce Software: How AI is amplifying DevOps | developer-tech.com

    What’s remarkable isn’t just the AI integration. It’s how it amplifies DevOps’ core strengths: bridging team gaps, automating repetitive tasks, and ensuring reliability at scale.

    Collaboration, Speed, and Resilience

    DevOps thrives on collaboration, speed, and resilience. AI doesn’t replace these principles — it supercharges them. Perforce’s tools streamline code reviews, predict deployment risks, and optimize workflows. They’re not just upgrades. They’re force multipliers for teams drowning in complexity.

    It’s not an “either or”

    The article also points out that DevOps without AI risks obsolescence. Manual processes become bottlenecks, while AI-driven tooling – whether in testing, monitoring, or incident response – turns the huge amounts of data into actionable insights.

    That’s not hype. It’s a competitive edge. The future isn’t about choosing between DevOps and AI. It’s about how well you integrate them.

    Check out the article: https://www.developer-tech.com/news/perforce-software-how-ai-is-amplifying-devops/

  • Rules don’t always work on AI agents

    A recent Mastodon post from @solomonneas highlights an annoying issue: an AI agent pushed to the main branch 12x, despite clear instructions not to.

    […] My agent pushed to main 12 times despite explicit instructions.

    Fix: git pre-push hooks on 39 repos. Agent can’t push code to main because git rejects it. No willpower needed. […]

    Mechanical enforcement > written instructions.

    @solomonneas@infosec.exchange

    The post really speaks for itself:

    • Agent rules are not 100% reliable
    • KISS: Keep it Simple, Stupid. Don’t make it more complex than necessary. (= don’t start fiddling around with additional AI)
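    Mechanical enforcement like this is easy to reproduce. A sketch of such a pre-push hook, assuming `main` is the protected branch (the real file would live at `.git/hooks/pre-push` and must be executable):

```shell
#!/bin/sh
# Sketch of a .git/hooks/pre-push hook that refuses pushes to main.
# git feeds the hook one line per ref being pushed, on stdin:
#   <local-ref> <local-sha1> <remote-ref> <remote-sha1>

check_refs() {
    while read -r local_ref local_sha remote_ref remote_sha; do
        if [ "$remote_ref" = "refs/heads/main" ]; then
            echo "pre-push hook: refusing to push to main" >&2
            return 1
        fi
    done
    return 0
}

# In the actual hook file, the last line runs the check on git's stdin:
# check_refs
```

    A non-zero exit status from the hook makes git abort the push – no willpower (and no agent compliance) needed.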
  • AI Won’t Turn Everyone Into Developers – Because Most People Don’t Want to Be Developers

    The AI hype claims that LLMs will make everyone a coder. I say: that’s pretty much BS. Most people don’t want to build software. They want their problems solved, preferably without lifting a finger.

    Joan Westenberg nails this so well in her recent article: The “everyone will code” myth ignores decades of proof. We’ve had WordPress (since 2003) and desktop publishing tools (since the 1980s), yet most still pay for solutions or use templates.

    The real shift? AI will make existing tools smarter — not turn everybody into vibe-coders.

    Read her article that’s just so spot on: https://www.joanwestenberg.com/ai-twitters-favourite-lie-everyone-wants-to-be-a-developer/

  • How a User Helped Fix 4 Bugs with AI (and No Expertise)

    I see a lot of AI skepticism in the dev community — and some of it is fair (okay, maybe “a lot”). Vague bug reports, monster commits, untested code … We’ve probably read it all, and maybe even seen it all.

    Even Mitchell Hashimoto, whose post appeared in my timeline yesterday, writes “Slop drives me crazy and it feels like 95+% of bug reports”. But … he continues with an impressive story: a Ghostty user with no Zig or macOS experience took crash logs, fed them through AI, reached out on Discord, and explained what was done. And the result:

    (more…)
  • Skipping AI is not going to help you or your career

    I just saw a post on Simon Willison’s blog where he linked to “Don’t fall into the anti-AI hype”. And I pretty much agree with what antirez writes there:

    Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career.
    Don’t fall into the anti-AI hype – <antirez>
    (more…)