Tag: AI

  • BuzzFeed’s AI Gamble Backfired – The pivot to AI isn’t going so great

    I just came across the article BuzzKill – BuzzFeed Nearing Bankruptcy After Disastrous Turn Toward AI and thought it might be worth sharing. Not because of schadenfreude but as a reminder that going all-in on a technology that you haven’t fully mastered is a gamble that risks the company’s existence.

    The article starts with …

    In January 2023, BuzzFeed CEO Jonah Peretti announced in a memo to staff […] a hard pivot to AI […]. two months after OpenAI unveiled […] ChatGPT

    “What could possibly go wrong” is literally the only thing that comes to my mind.

    It’s so insane because they didn’t just bet on AI. They bet against their own strengths: human creativity, editorial judgment, and the hard-won trust of an audience.

    They had a Pulitzer-winning investigative unit (!) and a content machine that understood what people wanted. The issue might have been that Facebook changed the rules, and BuzzFeed’s response wasn’t adaptation — it was surrender. Instead of doubling down on what made them unique (award-winning journalism), they doubled down on what made them cheap. A desperate race to the bottom that you simply can’t win against a behemoth like Facebook.

    BuzzFeed’s story isn’t about AI failure. To me, it’s a cautionary tale about

    • mistaking hype for strategy
    • automation for innovation, and
    • desperation for disruption.

    The next time someone declares a “hard pivot” to the latest flavor-of-the-month tech (keep in mind WHEN this pivot was decided!), let’s remember BuzzKill: are they innovating — or just paying Silicon Valley to automate themselves into obsolescence?

    Read the article on futurism.com: https://futurism.com/artificial-intelligence/buzzfeed-disastrous-earnings-ai

    Fediverse Reactions
  • Are we now coding / writing for other agents?

    I just wanted to tick off another article I had marked for “read later”. In Claude Code is blowing me away, Nick Hodges writes about his surprise at how well Claude Code wrote a website plus payment integration for him.

    The story itself is impressive, no doubt. But a key sentence (to me) comes later when he writes:

    The lesson here is that much of what we are doing now is not coding for humans—we are now coding for other agents.

    Nick Hodges

    … and, well, I pretty much agree. Whenever I see an LLM chat system like Perplexity or ChatGPT in my access logs, I see what he means as well. And – I don’t complain about it. This might be confusing, but the Fediverse changed my mind.

    Wait … the Fediverse?

    Yes, the Fediverse!

    I was (and am) happy and proud when people find their way to my website and — hopefully — discover something useful there! And when I enabled the WordPress Fediverse plugin on my website, I was happy to open the content up to the Fediverse.

    And as long as I publish more than a teaser, the whole post can be read in the respective Fediverse client. The same holds for RSS, of course, but with the Fediverse it became really apparent to me: in both scenarios (RSS or Fedi), the reader never comes to my website via browser. They might just stay in their RSS reader or Fedi client.

    And now? Agents come along as another “client”?

    Should I care? Well, yes! Maybe I should keep in mind to make the website agent-friendly (just plain text, no CSS, …)? As long as my content generates value for a visitor, I might just feel fine. No matter which client is used.

    Of course, this attitude doesn’t hold for anyone who needs to make money from the website visit (like showing ads) or aims for a branding effect! But in my case … I could just as well post my how-tos on StackOverflow and get neither branding effect nor credit for it …

    Maybe it’s naive. Maybe not. Maybe it’s just the future. I don’t know. But for this website, I don’t want to care too much.

  • Agent finops

    The start of this article made me laugh:

    The first time my team shipped an agent into a real SaaS workflow, the product demo looked perfect. The production bill did not.

    FinOps for agents: Loop limits, tool-call caps and the new unit economics of agentic SaaS

    I wasn’t laughing out of malicious joy, but because it’s something that quite a lot of people don’t think about when they start AI / agentic coding: whenever you give the program flow the ability to make queries based on its own judgment, think about the case that the thing (I don’t want to call it AI) runs into an infinite loop. And every query to the LLM generates real costs.

    And with “costs” I don’t just mean “a busy CPU” like in traditional infinite loops. More like “costs” in terms of Lambda Horror Stories: suddenly, every loop iteration querying your LLM provider hits your budget.

    And that might get even more interesting in the case of vibe coding, where such an infinite loop is buried in thousands of lines of auto-generated code. Oh, we have interesting times ahead!
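    The guard against exactly this failure mode doesn’t need to be fancy. Here is a minimal sketch (all class and parameter names are my own, not from the article) of hard caps on loop iterations and tool calls, so a runaway agent fails fast instead of burning through the LLM budget:

    ```python
    class BudgetExceeded(RuntimeError):
        """Raised when the agent loop exceeds one of its configured caps."""

    class AgentBudget:
        # Hypothetical guard object: the agent loop calls next_step() once per
        # iteration and record_tool_call() once per tool invocation. Crossing
        # either cap raises instead of silently looping on.
        def __init__(self, max_steps=10, max_tool_calls=25):
            self.max_steps = max_steps
            self.max_tool_calls = max_tool_calls
            self.steps = 0
            self.tool_calls = 0

        def next_step(self):
            self.steps += 1
            if self.steps > self.max_steps:
                raise BudgetExceeded(f"loop limit of {self.max_steps} steps exceeded")

        def record_tool_call(self):
            self.tool_calls += 1
            if self.tool_calls > self.max_tool_calls:
                raise BudgetExceeded(f"tool-call cap of {self.max_tool_calls} exceeded")
    ```

    The point is simply that the abort condition lives outside the model’s own judgment — the loop terminates even if every single LLM response says “keep going”.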

    Check out the article: https://www.infoworld.com/article/4138748/finops-for-agents-loop-limits-tool-call-caps-and-the-new-unit-economics-of-agentic-saas.html

  • AI amplifies DevOps

    DevOps is the backbone of modern software delivery. The latest insights from Developer Tech on Perforce’s AI-driven tools highlight why — again.

    70 percent of the organisations report their DevOps maturity materially affects their success with AI. Rather than replacing established delivery practices, proper foundational workflows serve as the prerequisite for scaling these capabilities.

    Perforce Software: How AI is amplifying DevOps | developer-tech.com

    What’s remarkable isn’t just the AI integration. It’s how it amplifies DevOps’ core strengths: bridging team gaps, automating repetitive tasks, and ensuring reliability at scale.

    Collaboration, Speed, and Resilience

    DevOps thrives on collaboration, speed, and resilience. AI doesn’t replace these principles — it supercharges them. Perforce’s tools streamline code reviews, predict deployment risks, and optimize workflows. They’re not just upgrades. They’re force multipliers for teams drowning in complexity.

    It’s not an “either or”

    The article also points out that DevOps without AI risks obsolescence. Manual processes become bottlenecks — but AI-driven insights, whether in testing, monitoring, or incident response, turn the huge amount of data into something actionable.

    That’s not hype. It’s a competitive edge. The future isn’t about choosing between DevOps and AI. It’s about how well you integrate them.

    Check out the article: https://www.developer-tech.com/news/perforce-software-how-ai-is-amplifying-devops/

  • Rules don’t always work on AI agents

    A recent Mastodon post from @solomonneas highlights an annoying issue: an AI agent pushed to the main branch 12 times, despite clear instructions not to.

    […] My agent pushed to main 12 times despite explicit instructions.

    Fix: git pre-push hooks on 39 repos. Agent can’t push code to main because git rejects it. No willpower needed. […]

    Mechanical enforcement > written instructions.

    @solomonneas@infosec.exchange

    The post really speaks for itself:

    • Agent rules are not 100% reliable
    • KISS: Keep it Simple, Stupid. Don’t make it more complex than necessary. (= don’t start fiddling around with additional AI)
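    For reference, such a hook is tiny. The post doesn’t include its hook, so this is my own sketch of the check a pre-push hook performs — git feeds the hook one line per ref being pushed on stdin, and a non-zero exit code aborts the push:

    ```python
    # Sketch of the check inside a .git/hooks/pre-push script. Git passes one
    # line per pushed ref on stdin, in the form:
    #   "<local ref> <local sha> <remote ref> <remote sha>"

    PROTECTED = "refs/heads/main"  # the branch the agent must not push to

    def push_is_blocked(stdin_lines, protected=PROTECTED):
        """Return True if any pushed ref targets the protected branch."""
        for line in stdin_lines:
            parts = line.split()
            if len(parts) == 4 and parts[2] == protected:
                return True
        return False

    # In the actual hook script:
    #   import sys
    #   if push_is_blocked(sys.stdin):
    #       sys.exit("pre-push hook: pushing to refs/heads/main is rejected")
    ```

    A shell one-liner grepping stdin for `refs/heads/main` does the same job — the point is that enforcement is mechanical, on the git side, not in the agent’s rules.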
  • AI Won’t Turn Everyone Into Developers – Because Most People Don’t Want to Be Developers

    The AI hype claims that LLMs will make everyone a coder. I say: that’s pretty much BS. Most people don’t want to build software. They want their problems solved, preferably without lifting a finger.

    Joan Westenberg nails this so well in her recent article: The “everyone will code” myth ignores decades of proof. We’ve had WordPress (since 2003) and desktop publishing tools (since the 1980s), yet most still pay for solutions or use templates.

    The real shift? AI will make existing tools smarter — not turn everybody into vibe-coders.

    Read her article that’s just so spot on: https://www.joanwestenberg.com/ai-twitters-favourite-lie-everyone-wants-to-be-a-developer/

  • How a User Helped Fix 4 Bugs with AI (and No Expertise)

    I see a lot of AI skepticism in the dev community — and some of it is fair (okay, maybe “a lot”). Vague bug reports, monster commits, untested code … We’ve probably read it all, and maybe even seen it all.

    Even Mitchell Hashimoto, whose post appeared yesterday in my timeline, writes “Slop drives me crazy and it feels like 95+% of bug reports”. But … he continues with an impressive story: a Ghostty user with no Zig or macOS experience took crash logs, fed them through AI, reached out on Discord, and explained what was done. And the result:

    (more…)
  • Skipping AI is not going to help you or your career

    I just saw a post on Simon Willison’s blog where he linked to “Don’t fall into the anti-AI hype”. And I pretty much agree with what antirez writes there:

    Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career.
    Don’t fall into the anti-AI hype – <antirez>
    (more…)
  • AI Won’t Fix Bad Software Teams

    What do you do on a Saturday when you don’t feel well enough to go out for some activities? Right! Let’s read about Software Engineering and AI!

    Yesterday I read the complete series of the 22 blog posts listed on The AI-Ready Software Developer – Index – Codemanship’s Blog.

    (more…)
  • Spec-first Agentic Development is not Vibe Coding

    Not even two weeks ago I wrote about “Reproducible Vibecoding” and that a specification, kept as permanent context to document all decisions, is important.

    I just stumbled across the article “Notes on Six Months of AI-Enabled Building” by Isaac Flath. There are a couple of good quotes in there, especially in the chapter “Your Thinking Style Determines Your Success”.

    (more…)