Tag: LLMS

  • AI vs. the Legacy Black Box Codebase

    If you want to see a cool example of how Generative AI can be used to tackle one of the nastiest problems in enterprise IT, you might want to invest some minutes in reading “From Black Box to Blueprint.”

    What it’s about: Large organizations often have to rely on systems that are both business-critical and poorly understood due to lots of legacy. The article describes how Thoughtworks approached such a case: combining a multi-lens strategy (UI reconstruction, logic inference, change data capture, etc.) with AI-assisted “binary archaeology.”

    I really like the idea as it’s not about replacing humans, not hype but solving a real problem: complex — legacy — codebases. Something no one of us likes to lay their hands on.

    Fediverse Reactions
  • The Double-Edged Sword of Generative AI in Linux Troubleshooting

    I’ve recently been experimenting with how generative AI can support Linux debugging. The experience was both impressive and frustrating — depending on whether I was diagnosing or actually fixing a problem.

    (more…)
    Fediverse Reactions
  • AI Agents: Loyal Only to the Prompt

    Recently I thought “If AI scrapers are scraping my website, would a prompt injection work? Just adding invisible Prompt commands …?”

    And just today, a colleague sent me this link to an article about prompt injection in GitLab Duo: Remote Prompt Injection in GitLab Duo Leads to Source Code Theft:

    TL;DR: A hidden comment was enough to make GitLab Duo leak private source code and inject untrusted HTML into its responses.

    https://www.legitsecurity.com/blog/remote-prompt-injection-in-gitlab-duo

    Well – it shows: damit! Someone else was faster! :-D

    But besides that: it confirms a paranoid thought that I have been harboring for quite a while. Any output of an AI system must not be trusted blindly.

    (more…)