Tag: SoftwareEngineering

  • GenAI coding needs more than just a Licence

    I just found this blog post from Rob Bowley in my RSS feed, and one paragraph resonated strongly with what I read and hear so often:

    For CEOs and founders hoping to benefit, the answer isn’t as simple as handing out Claude licences […]. It’s investing in the engineering culture and practices. Unglamorous, slow work, but there’s no way around it.

    Rob Bowley

    I’ve been trying this for a while now in my own projects, and I can only say: to really leverage it, you must adjust the way you work. And that is a learning curve you must be willing to take.

  • Why Settle for One AI Assistant When You Can Have Two?

    Two weeks ago, I discovered that Mistral.ai also provides a coding assistant similar to GitHub Copilot (GHC), called Mistral Vibe (GitHub page).

    In those two weeks I’ve been using Mistral Vibe in parallel to GHC, just because I wanted to try it and see the difference! And after only a couple of days I noticed that the agent definitions in Mistral Vibe are a bit different from GHC’s (in hindsight: of course!). This, of course, leads to a dual configuration in my project so that both assistants can work properly.

    And just today I noticed that I’m making commits for dual-agent support … hardly thinkable just half a year ago:

    - Add documentation references:
      * AGENTS.md: Add reference for AI coding assistants
      * README.md: Add reference in Further Reading section
    
    - Enhance dual AI support:
      * Update AGENTS.md to reference both .github/skills/ (GitHub Copilot) and .vibe/skills/ (Mistral Vibe)
      * Clarify which skill directory each AI assistant should use
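
    A minimal AGENTS.md section along these lines might look like the following sketch (the wording is hypothetical; only the two directories come from the commit above):

```markdown
## Skills directories

- GitHub Copilot: use the skills in `.github/skills/`
- Mistral Vibe: use the skills in `.vibe/skills/`
```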

    So far I’m quite happy and impressed by the performance of the coding assistants. However, it still makes sense to review the code every now and then. Even though the tools discover a lot of vulnerabilities themselves, which helps me create a safer result, I had a couple of findings myself over the last few days:

    For example: API endpoints not being protected by login (well, I hadn’t instructed it to do so), constructed URLs lacking URL encoding, or tests being written that checked for an outcome I didn’t want (e.g. I wanted a certain function to strip whitespace, whereas the test assumed whitespace should be retained).
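
    Two of those findings can be sketched in a few lines of Python (function names and the URL are made up for illustration): percent-encoding a user-supplied query instead of concatenating it raw, and the whitespace-stripping behaviour the generated test got backwards:

```python
from urllib.parse import quote


def build_search_url(base: str, query: str) -> str:
    # Percent-encode the user-supplied query; naive string concatenation
    # breaks on characters like spaces, '&' or '?'.
    return f"{base}?q={quote(query)}"


def normalize_name(name: str) -> str:
    # Intended behaviour: strip surrounding whitespace.
    # The AI-written test asserted the whitespace would be *retained*.
    return name.strip()


print(build_search_url("https://example.com/search", "a b&c"))
# -> https://example.com/search?q=a%20b%26c
print(normalize_name("  Alice  "))
# -> Alice
```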

    Anyway, my own commit about a multi-agent (multi-vendor) setup really showed me how much things have changed in the last months. And for sure, there’s more to come …

  • AI Can Write Code, But It Can’t Debug Without Context

    Large Language Models (LLMs) are often marketed as the ultimate productivity boost for developers: “Write code faster! Debug with AI! No more manual work!” After a recent experience, I can confirm that LLMs are incredibly useful for writing and even structuring code (I’ll write about this probably in a later blog post).

    But when it comes to debugging, you should make really sure that the tool has access to all the relevant context (and don’t switch off your brain). But let’s see what happened:

    For a couple of days (uhm … nights mostly, after work) I had been writing a web application. The Copilot experience was very good, and it really helped tremendously. I never really ran into a situation where I had to debug, and I was curious when (if?) I’d run into that – and how things would turn out then.

    (more…)
  • I Stopped Manually Committing – Here’s Why

    I don’t code much in my day job anymore, but I still love building things. So last weekend, I finally took the time to test GitHub Copilot’s Agents feature — specifically, a Commit Agent. I’ve seen agents.md and knew the theory, but I wanted the live experience: could this actually improve my workflow, or was it just another layer of automation hype?

    Even when working alone, I sometimes need to revert—and that’s when I really appreciate clean, atomic commits. But let’s be honest: I’m not always disciplined enough to enforce that myself. So I figured, why not seek the help of an agent?

    (more…)
  • Stop Reinventing the Wheel – go to Community Events!

    I can only advise everyone in tech: Go to events, meetups, and webinars — talk to people, exchange ideas. We all face similar challenges. You don’t have to solve everything alone. And if an event turns out to be a dud? Well, so be it — at least you might have grabbed some free food.

    Recently, after a long time, I went back to a Meetup from Munich Datageeks e.V., and it reminded me: Just being able to discuss a few half-baked ideas or questions with someone can make a huge difference. Chances are, the other person has already tried some of them — and that alone can save you a ton of time!

    At that particular Meetup I got some practical ideas for topics we’re discussing at work and for which we don’t have a clear solution yet. Another attendee’s company had already tried some of these things and confirmed some of my (theoretical) concerns.

    Later at the same event, I talked with some other folks about AI-assisted coding. I simply don’t have the time to try out all the tools! Speaking to real developers – not just watching YouTube videos or listening to podcasts – and hearing their real-life experience is just precious.

  • One of the most potentially dangerous failure modes of LLM-based coding assistants …

    I really like having Jason Gorman’s blog posts in my RSS reader. Especially when he’s highlighting some critical issues with AI assisted coding.

    This paragraph for example really made me smile:

    For example, a common strategy they use when they’re not able to fix a problem they created is to delete failing tests, or remove testing from the build completely,

    What Makes AI Agents Particularly Dangerous Is “Silent Failure”

    I just had to smile because I probably would have been quite surprised to see that happening.

    But okay. It’s another thing I put onto my mental list to care about when doing AI assisted coding.

    Check out his post: https://codemanship.wordpress.com/2026/02/27/what-makes-ai-agents-particularly-dangerous-is-silent-failure/

  • Agent finops

    The start of this article made me laugh:

    The first time my team shipped an agent into a real SaaS workflow, the product demo looked perfect. The production bill did not.

    FinOps for agents: Loop limits, tool-call caps and the new unit economics of agentic SaaS

    I wasn’t laughing out of schadenfreude, but because it’s something quite a lot of people don’t think about when they start AI/agentic coding: whenever you give the program flow the ability to make queries on its own judgement, consider the case that the thing (I don’t want to call it AI) runs into an infinite loop. And every query to the LLM generates real costs.

    And by “costs” I don’t just mean “a busy CPU” as in traditional infinite loops. More like “costs” in the sense of Lambda horror stories: suddenly, every loop iteration querying your LLM provider hits your budget.

    And that might get even more interesting in the case of vibe coding, where such an infinite loop is buried in thousands of lines of auto-generated code. Oh, we have interesting times ahead!
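
    A simple guard along the lines the article suggests (the class and names here are mine, not from the article) caps both the number of LLM calls and the accumulated cost, and aborts the agent loop hard when either limit is hit:

```python
class BudgetExceeded(Exception):
    """Raised when an agent loop exceeds its call or cost budget."""


class AgentLoopGuard:
    def __init__(self, max_calls: int = 25, max_cost_usd: float = 1.0):
        self.max_calls = max_calls
        self.max_cost_usd = max_cost_usd
        self.calls = 0
        self.cost_usd = 0.0

    def charge(self, cost_usd: float) -> None:
        # Call this once per LLM/tool invocation; it counts the call,
        # accumulates cost, and raises once either budget is blown.
        self.calls += 1
        self.cost_usd += cost_usd
        if self.calls > self.max_calls or self.cost_usd > self.max_cost_usd:
            raise BudgetExceeded(
                f"aborted after {self.calls} calls, ${self.cost_usd:.2f}"
            )


# Even a genuinely infinite agent loop now dies after max_calls iterations,
# instead of billing you forever:
guard = AgentLoopGuard(max_calls=3, max_cost_usd=10.0)
```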

    Check out the article: https://www.infoworld.com/article/4138748/finops-for-agents-loop-limits-tool-call-caps-and-the-new-unit-economics-of-agentic-saas.html

  • AI amplifies DevOps

    DevOps is the backbone of modern software delivery. The latest insights from Developer Tech on Perforce’s AI-driven tools highlight why — again.

    70 percent of the organisations report their DevOps maturity materially affects their success with AI. Rather than replacing established delivery practices, proper foundational workflows serve as the prerequisite for scaling these capabilities.

    Perforce Software: How AI is amplifying DevOps | developer-tech.com

    What’s remarkable isn’t just the AI integration. It’s how it amplifies DevOps’ core strengths: bridging team gaps, automating repetitive tasks, and ensuring reliability at scale.

    Collaboration, Speed, and Resilience

    DevOps thrives on collaboration, speed, and resilience. AI doesn’t replace these principles — it supercharges them. Perforce’s tools streamline code reviews, predict deployment risks, and optimize workflows. They’re not just upgrades. They’re force multipliers for teams drowning in complexity.

    It’s not an “either/or”

    The article also points out that DevOps without AI risks obsolescence. Manual processes become bottlenecks, but AI-driven tooling — whether in testing, monitoring, or incident response — turns the huge amounts of data into actionable insights.

    That’s not hype. It’s a competitive edge. The future isn’t about choosing between DevOps and AI. It’s about how well you integrate them.

    Check out the article: https://www.developer-tech.com/news/perforce-software-how-ai-is-amplifying-devops/

  • Rules don’t always work on AI agents

    A recent Mastodon post from @solomonneas highlights an annoying issue: an AI agent pushed to the main branch 12x, despite clear instructions not to.

    […] My agent pushed to main 12 times despite explicit instructions.

    Fix: git pre-push hooks on 39 repos. Agent can’t push code to main because git rejects it. No willpower needed. […]

    Mechanical enforcement > written instructions.

    @solomonneas@infosec.exchange

    The post really speaks for itself:

    • Agent rules are not 100% reliable
    • KISS: Keep it Simple, Stupid. Don’t make it more complex than necessary. (= don’t start fiddling around with additional AI)
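
    The mechanical enforcement from the quoted post boils down to a pre-push hook. Shell is the usual choice; the check itself is trivial, sketched here in Python (git runs any executable placed at `.git/hooks/pre-push` and rejects the push on a non-zero exit):

```python
# Sketch of the check behind a .git/hooks/pre-push hook. Git feeds the hook
# one line per ref being pushed on stdin:
#   <local ref> <local sha> <remote ref> <remote sha>

PROTECTED = "refs/heads/main"


def push_allowed(lines, protected=PROTECTED):
    """Return False if any pushed ref targets the protected branch."""
    for line in lines:
        parts = line.split()
        if len(parts) == 4 and parts[2] == protected:
            return False
    return True


# In the actual hook script (made executable), wire it up like:
#   import sys
#   if not push_allowed(sys.stdin):
#       sys.exit("pre-push: direct pushes to main are blocked")
```

    No willpower needed: the agent can try to push to main as often as it likes, and git simply refuses.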
  • AI Won’t Turn Everyone Into Developers – Because Most People Don’t Want to Be Developers

    The AI hype claims that LLMs will make everyone a coder. I say: that’s pretty much BS. Most people don’t want to build software. They want their problems solved, preferably without lifting a finger.

    Joan Westenberg nails this so well in her recent article: the “everyone will code” myth ignores decades of proof. We’ve had WordPress (since 2003) and desktop publishing tools (since the 1980s), yet most people still pay for solutions or use templates.

    The real shift? AI will make existing tools smarter — not turn everybody into vibe-coders.

    Read her article that’s just so spot on: https://www.joanwestenberg.com/ai-twitters-favourite-lie-everyone-wants-to-be-a-developer/