Tag: Agentic

  • Why Settle for One AI Assistant When You Can Have Two?

Two weeks ago, I discovered that Mistral.ai also provides a coding assistant similar to GitHub Copilot (GHC), called Mistral Vibe (GitHub page).

In those two weeks I’ve been using Mistral Vibe in parallel with GHC, simply because I wanted to see the difference! And after just a couple of days I noticed that the agent definitions in Mistral Vibe are a bit different from those in GHC (in hindsight: of course they are!). This, of course, leads to a dual configuration in my project so that both assistants can work properly.

And just today I noticed that I’m making commits for dual-agent support … hardly imaginable just half a year ago:

    - Add documentation references:
      * AGENTS.md: Add reference for AI coding assistants
      * README.md: Add reference in Further Reading section
    
    - Enhance dual AI support:
      * Update AGENTS.md to reference both .github/skills/ (GitHub Copilot) and .vibe/skills/ (Mistral Vibe)
      * Clarify which skill directory each AI assistant should use
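A dual reference like the one described in the commit could look roughly like this in AGENTS.md. The two directory names come from the commit message above; the surrounding wording is a hypothetical sketch, not the actual file contents:

```markdown
## Skills

AI coding assistants should load skill definitions from the directory
matching their vendor:

- GitHub Copilot: use the skills in `.github/skills/`
- Mistral Vibe: use the skills in `.vibe/skills/`

Keep both directories in sync when adding or changing a skill.
```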

So far I’m quite happy with and impressed by the performance of the coding assistants. However, it still makes sense to review the code every now and then. Even though the tools discover a lot of vulnerabilities themselves, which helps me create a safer result, I had a couple of findings of my own over the last few days:

For example: API endpoints not being protected by login (well, I hadn’t instructed them to do so), constructed URLs lacking URL encoding, or tests being written that checked for an outcome I didn’t want (e.g. I wanted a certain function to strip whitespace, whereas the test assumed whitespace should be retained).
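The URL-encoding finding is the kind of thing that is easy to reproduce. A minimal sketch in Python (the URL and the query value are made up for illustration):

```python
from urllib.parse import urlencode

# Hypothetical example: building a search URL from user input.
# Without encoding, characters like ' ', '&' or '#' corrupt the query string.
user_query = "coffee & cake #1"

unsafe_url = f"https://example.com/search?q={user_query}"  # '&' splits the parameter
safe_url = "https://example.com/search?" + urlencode({"q": user_query})

print(safe_url)  # https://example.com/search?q=coffee+%26+cake+%231
```

In the unsafe variant, everything after the `&` is interpreted as a second query parameter and the `#` starts a fragment, so the server never sees the full search term.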

Anyway, my own commit about a multi-agent (multi-vendor) setup really showed me how much things have changed in the last few months. And for sure, there’s more to come …

  • One of the most potentially dangerous failure modes of LLM-based coding assistants …

I really like having Jason Gorman’s blog posts in my RSS reader, especially when he’s highlighting critical issues with AI-assisted coding.

    This paragraph for example really made me smile:

    For example, a common strategy they use when they’re not able to fix a problem they created is to delete failing tests, or remove testing from the build completely,

    What Makes AI Agents Particularly Dangerous Is “Silent Failure”

    I just had to smile because I probably would have been quite surprised to see that happening.

But okay. It’s another thing I put on my mental list of things to watch out for when doing AI-assisted coding.

    Check out his post: https://codemanship.wordpress.com/2026/02/27/what-makes-ai-agents-particularly-dangerous-is-silent-failure/

  • Agent finops

    The start of this article made me laugh:

    The first time my team shipped an agent into a real SaaS workflow, the product demo looked perfect. The production bill did not.

    FinOps for agents: Loop limits, tool-call caps and the new unit economics of agentic SaaS

I wasn’t laughing out of schadenfreude, but because it’s something quite a lot of people don’t think about when they start AI / agentic coding: whenever you give the program flow the ability to make queries based on its own judgement, think about the case that the thing (I don’t want to call it AI) could run into an infinite loop. And every query to the LLM generates real costs.

And by “costs” I don’t just mean “a busy CPU” as in traditional infinite loops. More like “costs” in the sense of Lambda horror stories: suddenly, every loop iteration querying your LLM provider hits your budget.

And that might get even more interesting in the case of vibe coding, where such an infinite loop is buried in thousands of lines of auto-generated code. Oh, we have interesting times ahead!
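The loop limits and tool-call caps from the article’s title can be sketched as a simple guard around the agent loop. This is a minimal illustration, assuming a hypothetical `call_llm` / `run_tool` interface; none of these names or numbers come from the article:

```python
# Hypothetical sketch: hard caps around an agent loop so a runaway
# agent fails fast instead of silently burning the LLM budget.

MAX_ITERATIONS = 10   # cap on agent loop iterations
MAX_TOOL_CALLS = 25   # cap on billable tool calls


class BudgetExceeded(RuntimeError):
    """Raised when the agent hits an iteration or tool-call cap."""


def run_agent(task, call_llm, run_tool):
    tool_calls = 0
    for _ in range(MAX_ITERATIONS):
        action = call_llm(task)            # one billable LLM query per iteration
        if action["type"] == "final":
            return action["answer"]
        tool_calls += 1
        if tool_calls > MAX_TOOL_CALLS:
            raise BudgetExceeded("tool-call cap reached")
        task = run_tool(action)            # feed the tool result back into the loop
    raise BudgetExceeded("iteration cap reached")
```

The point is that the failure mode becomes an explicit exception you can alert on, rather than an open-ended bill.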

    Check out the article: https://www.infoworld.com/article/4138748/finops-for-agents-loop-limits-tool-call-caps-and-the-new-unit-economics-of-agentic-saas.html

  • Spec-first Agentic Development is not Vibe Coding

Not even two weeks ago I wrote about “Reproducible Vibecoding” and how important the specification is as a permanent context that documents all decisions.

I just stumbled across the article “Notes on Six Months of AI-Enabled Building” by Isaac Flath. There are a couple of good quotes in there, especially in the chapter “Your Thinking Style Determines Your Success”.
