• Now on the Fediverse!

    I wrote a couple of times that I reduced my activity on and put more effort into the blog here – and I did! And I’m quite happy about that.

    In parallel I had an anonymous account for IT-related topics, but honestly, I missed posting under my true identity. Also, I couldn’t point my LinkedIn contacts to a fediverse identity (okay, as if they’d all just follow eagerly into the fedi …).

    Anyways! Last week I requested an account at Hachyderm.io, and there I am now! I’ll be posting longer articles here and smaller ideas, thoughts, etc. on the fediverse. I’d be happy if you drop me a “hi” there as well :-)

    Honestly, I was very, very tempted to start a small self-hosted GoToSocial instance. But I put that idea on the backlog for now. I don’t want to have too many open projects in parallel.

  • How to Choose Your Level of Digital Sovereignty

    In the context of https://di.day, I noticed discussions on whether Tool A or Service B is “sovereign enough.” But the more I thought about it, the clearer it became: digital sovereignty isn’t binary. What works (or is acceptable) for one person or organization might not fit another.

    Over the past weeks, I’ve realized that these discussions often miss a key point: context. Not everyone aims for — or even needs — the same level of sovereignty. Some prioritize data privacy (Level 1), others focus on avoiding proprietary software (Level 6) or geopolitical risks (Level 10).

    So, before arguing about the ‘right’ level, maybe we should first clarify what the ‘right’ level is for each of us. For some, a higher level is a must; for others, it’s just optional. And well – it might be okay to disagree.

    Below is a breakdown of 10 levels of digital sovereignty, from individual control to systemic independence. This isn’t meant to be a definitive guide — it’s just my attempt to structure the problem. I also don’t claim it is complete or universally applicable, but I found it interesting to think about the nuances. The layers are not always clearly separable, and some companies and products can be found in multiple layers.

    Levels of Sovereignty

    The levels are structured from the most immediate and individual issues (data privacy, software choices) to systemic dependencies (infrastructure, hardware, geopolitics). It is a bottom-up approach to digital sovereignty, where early steps are more actionable for individuals / organizations, while later steps require larger-scale efforts or policy changes.

    | Level | Goal | Negative Examples |
    | ----- | ---- | ----------------- |
    | 1 | Avoid services that use data as currency. | Meta (Facebook, Instagram, WhatsApp, …), TikTok, … |
    | 2 | Control over internal data usage. | Microsoft 365, Apple iCloud, GitHub Copilot, most free-tier services |
    | 3 | Avoid dependence on tech giants. | Google (Search, YouTube, …), Microsoft (Windows/Office), Apple ecosystem, Amazon |
    | 4 | Reduce risks from SaaS/niche providers. | Atlassian (Jira, Confluence), Slack, Google Analytics, PayPal, Adobe |
    | 5 | Protection from government data access (e.g., CLOUD Act, FATCA, or other foreign laws). | AWS (USA/CLOUD Act), Alibaba Cloud (China), Google Cloud (USA), Stripe (payments), … |
    | 6 | Transparency and control over software (file formats, online registration, most SaaS solutions). | Microsoft Office, Adobe products, Windows 11, Slack, Zoom, Google Workspace, … |
    | 7 | Resilient, independent infrastructure. | AWS, GCP, Azure, Cloudflare, Akamai, Alibaba |
    | 8 | Control over hardware and supply chain. | Chipsets with closed-source firmware, smartphones without custom-ROM support, … |
    | 9 | Internal control over knowledge/processes. | External IT providers, knowledge monopolies, missing redundancy |
    | 10 | Reduce geopolitical hardware risks. | Only very few manufacturers for RAM, storage, CPUs, GPUs; risk of oligopoly, forced obsolescence, backdoors |
  • KI-Katalog.de – An Independent Directory for AI Tools

    Finding the right AI tool can be overwhelming. With new solutions emerging constantly, it’s easy to get lost.

    KI-Katalog.de offers an alternative: an independent, German-focused directory that compares over 1,000 AI tools, including pricing information and DSGVO/GDPR compatibility.

    https://ki-katalog.de

  • Why Settle for One AI Assistant When You Can Have Two?

    Two weeks ago, I discovered that Mistral.ai also provides a coding assistant, similar to GitHub Copilot (GHC), called Mistral Vibe (GitHub page).

    In those two weeks I’ve been using Mistral Vibe in parallel to GHC, just because I wanted to try it and see the difference! And after just a couple of days I noticed that the agent definitions in Mistral Vibe are a bit different from GHC’s (in hindsight: of course!). This, of course, leads to a dual configuration in my project so that both assistants can work properly.

    And just today I noticed that I’m making commits for dual-agent support … hardly thinkable just half a year ago:

    - Add documentation references:
      * AGENTS.md: Add reference for AI coding assistants
      * README.md: Add reference in Further Reading section
    
    - Enhance dual AI support:
      * Update AGENTS.md to reference both .github/skills/ (GitHub Copilot) and .vibe/skills/ (Mistral Vibe)
      * Clarify which skill directory each AI assistant should use
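
    A minimal sketch of what such a dual-assistant section in AGENTS.md might look like (the two directory names come from the commit above; the exact wording is hypothetical):

    ```markdown
    ## Skills

    - GitHub Copilot: use the skill definitions in `.github/skills/`
    - Mistral Vibe: use the skill definitions in `.vibe/skills/`

    Both directories describe the same skills; keep them in sync.
    ```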

    So far I’m quite happy and impressed by the performance of the coding assistants. However, it still makes sense to review the code every now and then. Even though the tools discover a lot of vulnerabilities themselves, which helps me create a safer result, I still had a couple of findings of my own over the last few days:

    For example: API endpoints not being protected by a login (well, I hadn’t instructed it to do so), constructed URLs lacking URL encoding, or tests being written that checked for an outcome I didn’t want (e.g., I wanted a certain function to strip whitespace, whereas the test assumed whitespace should be retained).
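
    To make that last failure mode concrete, here is a minimal sketch (the function name `normalize_name` and the example strings are made up for illustration): the implementation strips whitespace, while a test like the generated one would assert the opposite.

    ```python
    def normalize_name(raw: str) -> str:
        """Return the user-supplied name with surrounding whitespace stripped."""
        return raw.strip()

    # Intended behavior: surrounding whitespace is removed.
    assert normalize_name("  Alice ") == "Alice"

    # The AI-written test asserted the opposite (whitespace retained),
    # so it would fail against the intended implementation:
    #   assert normalize_name("  Alice ") == "  Alice "
    ```

    A green test suite alone doesn’t catch this; you have to read what the assertions actually claim.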

    Anyways. My own commit about a multi-agent(vendor)-setup really showed me how much things have changed in the last months. And for sure, there’s more to come …

  • AI Can Write Code, But It Can’t Debug Without Context

    Large Language Models (LLMs) are often marketed as the ultimate productivity boost for developers: “Write code faster! Debug with AI! No more manual work!” After a recent experience, I can confirm that LLMs are incredibly useful for writing and even structuring code (I’ll probably write about this in a later blog post).

    But when it comes to debugging, you should make really sure that the tool has access to all the relevant context (and don’t disable your brain). But … let’s see what happened:

    For a couple of days (uhm … nights mostly, after work), I had been writing a web application. The Copilot experience was very good and it really helped tremendously. I never really ran into a situation where I had to debug – and I was curious when (if?) I’d run into that, and how things would turn out then.

    (more…)
  • I Stopped Manually Committing – Here’s Why

    I don’t code much in my day job anymore, but I still love building things. So last weekend, I finally took the time to test GitHub Copilot’s Agents feature — specifically, a Commit Agent. I’d seen agents.md and knew the theory, but I wanted the live experience: Could this actually improve my workflow, or was it just another layer of automation hype?

    Even when working alone, I sometimes need to revert—and that’s when I really appreciate clean, atomic commits. But let’s be honest: I’m not always disciplined enough to enforce that myself. So I figured, why not seek the help of an agent?

    (more…)
  • BuzzFeed’s AI Gamble Backfired – The pivot to AI isn’t going so great

    I just came across the article BuzzKill – BuzzFeed Nearing Bankruptcy After Disastrous Turn Toward AI and thought it might be worth sharing. Not because of schadenfreude but as a reminder that going all-in on a technology that you haven’t fully mastered is a gamble that risks the company’s existence.

    The article starts with …

    In January 2023, BuzzFeed CEO Jonah Peretti announced in a memo to staff […] a hard pivot to AI […], two months after OpenAI unveiled […] ChatGPT

    “What could possibly go wrong” is literally the only thing that comes to my mind.

    It’s so insane because they didn’t just bet on AI. They bet against their own strengths: human creativity, editorial judgment, and the hard-won trust of an audience.

    They had a Pulitzer-winning investigative unit (!) and a content machine that understood what people wanted. The issue might have been that Facebook changed the rules, and BuzzFeed’s response wasn’t adaptation — it was surrender. Instead of doubling down on what made them unique (award-winning journalism), they doubled down on what made them cheap. A desperate race to the bottom that you simply can’t win against a behemoth like Facebook.

    BuzzFeed’s story isn’t about AI failure. To me, it’s a cautionary tale about mistaking

    • hype for strategy,
    • automation for innovation, and
    • desperation for disruption.

    The next time someone declares a ‘hard pivot’ to the latest flavor-of-the-month tech (keep in mind WHEN this pivot was decided!), let’s remember BuzzKill: Are they innovating — or just paying Silicon Valley to automate themselves into obsolescence?

    Read the article on futurism.com: https://futurism.com/artificial-intelligence/buzzfeed-disastrous-earnings-ai

  • Are we now coding / writing for other agents?

    I just wanted to tick off another article that I had marked as “read later”. In Claude Code is blowing me away, Nick Hodges writes about his surprise at how well Claude Code wrote a website plus payment integration for him.

    The story itself is impressive, no doubt. But a key sentence (to me) comes later when he writes:

    The lesson here is that much of what we are doing now is not coding for humans—we are now coding for other agents.

    Nick Hodges

    … and, well, I pretty much agree. Whenever I see an LLM chat system like Perplexity or ChatGPT in my access logs, I see what he means as well. And – I don’t complain about it. This might be confusing, but the fediverse changed my mind.

    Wait … the Fediverse?

    Yes, the Fediverse!

    I was (and am) happy and proud when people find their way to my website and — hopefully — find something that they find useful! And when I enabled the WordPress Fediverse plugin on my website, I was happy to open the content up to the fediverse.

    And when I don’t just publish a teaser, the whole post can be read completely in the respective fediverse client – well, the same holds for RSS, but with the fediverse it became really apparent to me. In both scenarios (RSS or Fedi), the reader doesn’t come to my website via a browser; they might just stay in their RSS reader or Fedi client.

    And now? Agents come along as another “client”?

    Should I care? Well, yes! Maybe I should keep in mind to make the website agent-friendly (just text, no CSS, …)? As long as my content generates value for a visitor, I might just feel fine. No matter which client is used.

    Of course, this attitude doesn’t hold for anyone who needs to make money from website visits (like showing ads) or aims for a branding effect! But in my case … I could just as well post my how-tos on Stack Overflow and wouldn’t get branding effects or credit for them either …

    Maybe it’s naive. Maybe not. Maybe it’s just the future. I don’t know. But for this website, I don’t want to care too much.

  • Stop Reinventing the Wheel – go to Community Events!

    I can only advise everyone in tech: Go to events, meetups, and webinars — talk to people, exchange ideas. We all face similar challenges. You don’t have to solve everything alone. And if an event turns out to be a dud? Well, so be it — at least you might have grabbed some free food.

    Recently, after a long time, I went back to a Meetup from Munich Datageeks e.V., and it reminded me: Just being able to discuss a few half-baked ideas or questions with someone can make a huge difference. Chances are, the other person has already tried some of them — and that alone can save you a ton of time!

    At that particular meetup I got some practical ideas for topics we’re discussing at work and where we don’t have a clear solution yet. The other company had already tried some things and confirmed some of my (theoretical) concerns.

    Later at the same event I talked about AI-assisted coding with some other folks. I simply don’t have the time to try out all the tools! Speaking to real developers – not just watching YouTube videos or listening to podcasts – and hearing their real-life experience is just precious.

  • One of the most potentially dangerous failure modes of LLM-based coding assistants …

    I really like having Jason Gorman’s blog posts in my RSS reader. Especially when he’s highlighting some critical issues with AI assisted coding.

    This paragraph for example really made me smile:

    For example, a common strategy they use when they’re not able to fix a problem they created is to delete failing tests, or remove testing from the build completely,

    What Makes AI Agents Particularly Dangerous Is “Silent Failure”

    I just had to smile because I probably would have been quite surprised to see that happening.

    But okay. It’s another thing I put onto my mental list to care about when doing AI assisted coding.

    Check out his post: https://codemanship.wordpress.com/2026/02/27/what-makes-ai-agents-particularly-dangerous-is-silent-failure/
