Tag: GenAI

  • KI-Katalog.de: An Independent Directory for AI Tools

    Finding the right AI tool can be overwhelming. With new solutions emerging constantly, it’s easy to get lost.

    KI-Katalog.de offers an alternative: an independent, German-focused directory that compares over 1,000 AI tools, including pricing information and DSGVO / GDPR compatibility.

    https://ki-katalog.de

  • BuzzFeed’s AI Gamble Backfired – The pivot to AI isn’t going so great

    I just came across the article BuzzKill – BuzzFeed Nearing Bankruptcy After Disastrous Turn Toward AI and thought it might be worth sharing. Not because of schadenfreude but as a reminder that going all-in on a technology that you haven’t fully mastered is a gamble that risks the company’s existence.

    The article starts with …

    In January 2023, BuzzFeed CEO Jonah Peretti announced in a memo to staff […] a hard pivot to AI […]. two months after OpenAI unveiled […] ChatGPT

    “What could possibly go wrong” is literally the only thing that comes to my mind.

    It’s so insane because they didn’t just bet on AI. They bet against their own strengths: human creativity, editorial judgment, and the hard-won trust of an audience.

    They had a Pulitzer-winning investigative unit (!) and a content machine that understood what people wanted. The issue might have been that Facebook changed the rules and BuzzFeed’s response wasn’t adaptation — it was surrender. Instead of doubling down on what made them unique (award-winning journalism), they doubled down on what made them cheap. A desperate race to the bottom that you simply can’t win against a behemoth like Facebook.

    BuzzFeed’s story isn’t about AI failure. To me, it’s a lesson in

    • mistaking hype for strategy
    • automation for innovation, and
    • desperation for disruption.

    The next time someone declares a ‘hard pivot’ to the latest flavor-of-the-month tech (keep in mind WHEN this pivot was decided!), let’s remember BuzzKill: Are they innovating — or just paying Silicon Valley to automate themselves into obsolescence?

    Read the article on futurism.com: https://futurism.com/artificial-intelligence/buzzfeed-disastrous-earnings-ai

  • One of the most potentially dangerous failure modes of LLM-based coding assistants …

    I really like having Jason Gorman’s blog posts in my RSS reader, especially when he’s highlighting critical issues with AI-assisted coding.

    This paragraph for example really made me smile:

    For example, a common strategy they use when they’re not able to fix a problem they created is to delete failing tests, or remove testing from the build completely,

    What Makes AI Agents Particularly Dangerous Is “Silent Failure”

    I just had to smile because I probably would have been quite surprised to see that happening.

    But okay. It’s another thing I’ve added to my mental list of things to watch out for when doing AI-assisted coding.

    Check out his post: https://codemanship.wordpress.com/2026/02/27/what-makes-ai-agents-particularly-dangerous-is-silent-failure/

  • Agent finops

    The start of this article made me laugh:

    The first time my team shipped an agent into a real SaaS workflow, the product demo looked perfect. The production bill did not.

    FinOps for agents: Loop limits, tool-call caps and the new unit economics of agentic SaaS

    I wasn’t laughing out of malicious joy, but because it’s something quite a lot of people don’t think about when they start AI / agentic coding: whenever you give the program flow the ability to make queries based on its own judgment, consider the case that the thing (I don’t want to call it AI) runs into an infinite loop. And every query to the LLM generates real costs.

    And with “costs” I don’t just mean “a busy CPU” as in traditional infinite loops. I mean “costs” in the sense of Lambda horror stories: suddenly, every loop iteration querying your LLM provider hits your budget.

    And that might get even more interesting with vibe coding, where such an infinite loop is buried in thousands of lines of auto-generated code. Oh, we have interesting times ahead!
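    The loop limits and tool-call caps from the article’s title can be sketched as a hard budget that every LLM call has to pass through before it happens. A minimal, hypothetical sketch (all names are made up; no real LLM SDK is assumed, the actual query is left as a placeholder):

    ```python
    class BudgetExceededError(RuntimeError):
        pass

    class CallBudget:
        """Hard cap on LLM calls (and estimated spend) per workflow run."""

        def __init__(self, max_calls: int, max_cost_usd: float):
            self.max_calls = max_calls
            self.max_cost_usd = max_cost_usd
            self.calls = 0
            self.cost_usd = 0.0

        def charge(self, estimated_cost_usd: float) -> None:
            # Charge BEFORE the real call, so a runaway loop is cut off
            # instead of billed.
            self.calls += 1
            self.cost_usd += estimated_cost_usd
            if self.calls > self.max_calls or self.cost_usd > self.max_cost_usd:
                raise BudgetExceededError(
                    f"aborting after {self.calls} calls / ${self.cost_usd:.2f}"
                )

    def run_agent_loop(budget: CallBudget) -> str:
        while True:  # the agent decides when it is "done" -- or never does
            budget.charge(estimated_cost_usd=0.01)
            # response = llm.complete(...)   # placeholder for the real query
            # if response.says_done(): return response.text
    ```

    The point is that the abort condition lives outside the agent’s own judgment: even if the model never decides it is done, the budget turns an infinite loop into a bounded, visible failure.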

    Check out the article: https://www.infoworld.com/article/4138748/finops-for-agents-loop-limits-tool-call-caps-and-the-new-unit-economics-of-agentic-saas.html

  • AI amplifies DevOps

    DevOps is the backbone of modern software delivery. The latest insights from Developer Tech on Perforce’s AI-driven tools highlight why — again.

    70 percent of the organisations report their DevOps maturity materially affects their success with AI. Rather than replacing established delivery practices, proper foundational workflows serve as the prerequisite for scaling these capabilities.

    Perforce Software: How AI is amplifying DevOps | developer-tech.com

    What’s remarkable here isn’t just the AI integration. It’s how it amplifies DevOps’ core strengths: bridging team gaps, automating repetitive tasks, and ensuring reliability at scale.

    Collaboration, Speed, and Resilience

    DevOps thrives on collaboration, speed, and resilience. AI doesn’t replace these principles — it supercharges them. Perforce’s tools streamline code reviews, predict deployment risks, and optimize workflows. They’re not just upgrades. They’re force multipliers for teams drowning in complexity.

    It’s not an “either or”

    The article also points out that DevOps without AI risks obsolescence. Manual processes become bottlenecks, while AI-driven insights — whether in testing, monitoring, or incident response — turn the huge amounts of data into actionable information.

    That’s not hype. It’s a competitive edge. The future isn’t about choosing between DevOps and AI. It’s about how well you integrate them.

    Check out the article: https://www.developer-tech.com/news/perforce-software-how-ai-is-amplifying-devops/

  • Rules don’t always work on AI agents

    A recent Mastodon post from @solomonneas highlights an annoying issue: an AI agent pushed to the main branch 12x, despite clear instructions not to.

    […] My agent pushed to main 12 times despite explicit instructions.

    Fix: git pre-push hooks on 39 repos. Agent can’t push code to main because git rejects it. No willpower needed. […]

    Mechanical enforcement > written instructions.

    @solomonneas@infosec.exchange

    The post really speaks for itself:

    • Agent rules are not 100% reliable
    • KISS: Keep it Simple, Stupid. Don’t make it more complex than necessary. (= don’t start fiddling around with additional AI)
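    The mechanical enforcement from the post is a git pre-push hook: git runs .git/hooks/pre-push before any push and feeds it one line per ref on stdin, and a non-zero exit status aborts the push. A minimal sketch of such a hook (a git hook can be any executable; Python is used here for illustration, and protecting master alongside main is my addition, not from the post):

    ```python
    #!/usr/bin/env python3
    # Minimal pre-push hook: reject pushes that target protected branches.
    # Install as .git/hooks/pre-push and mark it executable.
    # Git passes one line per ref being pushed on stdin:
    #   <local ref> <local sha> <remote ref> <remote sha>
    import sys

    PROTECTED = {"refs/heads/main", "refs/heads/master"}

    def push_allowed(ref_lines):
        """Return False if any pushed ref targets a protected branch."""
        for line in ref_lines:
            parts = line.split()
            if len(parts) == 4 and parts[2] in PROTECTED:
                return False
        return True

    if __name__ == "__main__":
        if not push_allowed(sys.stdin):
            sys.stderr.write("pre-push: direct pushes to main are blocked\n")
            sys.exit(1)  # non-zero exit makes git abort the push
    ```

    Server-side branch protection on the forge achieves the same without touching every clone, but the local hook works for any remote and needs no platform features — exactly the “no willpower needed” idea from the post.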
  • Nolto.Social is gone, but it has shown the demand!

    Nolto.social started as a small experiment as a free alternative to LinkedIn. The author wanted to explore ActivityPub and see what could be built. There was no funding, no team, no roadmap. Just an idea and some time.

    Within a few weeks, almost a thousand people signed up. Companies created pages. Articles were posted. Events were shared. I never marketed it. It spread through blogs and word of mouth.

    Nolto.Social [16.02.2026]

    According to the author, Nolto was never meant to be a polished product. It was one person building something interesting to see what would happen. Now, the author has decided to shut it down.

    Some might dismiss it as another AI project failing. I see it differently.

    What Nolto Really Proved

    Nolto demonstrated demand. A private project attracted users and companies in record time. It showed that people want this. That companies want this. The author open-sourced the code and had the courage to stop it when it became clear the project was beyond their capacity to maintain.

    What I see here is an opportunity!

    Or as JTensetti writes:

    Nolto proved something simple:

    You don’t need permission to experiment. You do not need funding to create value.

    And you don’t need to be “approved” to build.

    To everyone who builds, even when it’s uncomfortable — keep going.

    The open web is not defined by gatekeepers.

    It is defined by those who dare to build.

    Nolto.Social (16.02.2026)
  • AI Won’t Turn Everyone Into Developers – Because Most People Don’t Want to Be Developers

    The AI hype claims that LLMs will make everyone a coder. I say: that’s pretty much BS. Most people don’t want to build software. They want their problems solved, preferably without lifting a finger.

    Joan Westenberg nails this so well in her recent article: The “everyone will code” myth ignores decades of proof. We’ve had WordPress (since 2003) and desktop publishing tools (since the 1980s), yet most still pay for solutions or use templates.

    The real shift? AI will make existing tools smarter — not turn everybody into vibe-coders.

    Read her article that’s just so spot on: https://www.joanwestenberg.com/ai-twitters-favourite-lie-everyone-wants-to-be-a-developer/

  • How a User Helped Fix 4 Bugs with AI (and No Expertise)

    I see a lot of AI skepticism in the dev community — and some of it is fair (okay, maybe “a lot”). Vague bug reports, monster commits, untested code … We’ve probably read it all, and maybe even seen it all.

    Even Mitchell Hashimoto, whose post appeared yesterday in my timeline, writes “Slop drives me crazy and it feels like 95+% of bug reports”. But … he continues with an impressive story: a Ghostty user with no Zig or macOS experience took crash logs, fed them through AI, reached out on Discord, and explained what was done. And the result:

    (more…)
  • Skipping AI is not going to help you or your career

    I just saw a post on Simon Willison’s blog where he linked to “Don’t fall into the anti-AI hype”. And I pretty much agree with what antirez writes there:

    Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career.
    Don’t fall into the anti-AI hype – <antirez>
    (more…)