• Noble Shift
    8 months ago

    So it’s the LLM’s fault for violating Best Practices, SOP, and Opsec that the rest of us learned about in Year One?

    Someone needs to be shown the door and ridiculed into therapy.

  • Lovable Sidekick
    8 months ago

    Headline should say, “Incompetent project managers fuck up by not controlling access to production database. Oh well.”

  • @mrgoosmoos@lemmy.ca
    8 months ago

    His mood shifted the next day when he found Replit “was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit test.”

    yeah that’s what it does

  • chaosCruiser
    8 months ago

    AI tools need a lot of oversight. Just like you might let a 6-year-old push a lawnmower, but you’re still going to keep an eye on things.

  • @Transtronaut@lemmy.blahaj.zone
    8 months ago

    The founder of SaaS business development outfit SaaStr has claimed AI coding tool Replit deleted a database despite his instructions not to change any code without permission.

    Sounds like an absolute diSaaStr…

  • Aatube
    8 months ago

    Replit‽ What happened to the famous website that aimed to be the Google Docs for JS with these nifty things called Repls?

  • CodexArcanum
    8 months ago

    It sounds like this guy was also relying on the AI to self-report status. Did any of this happen? Like, is the Replit AI really hooked up to a CLI, did it even make a DB to start with, was there anything useful in it, and did it actually delete it?

    Or is this all just a long roleplaying session where this guy pretends to run a business and the AI pretends to do employee stuff for him?

    Because 90% of this article is “I asked the AI and it said:” which is not a reliable source for information.

  • @dan@upvote.au
    8 months ago

    “At this burn rate, I’ll likely be spending $8,000 a month,” he added. “And you know what? I’m not even mad about it. I’m locked in.”

    For that price, why not just hire a developer full-time? For nearly $100k/year, you could find a very good intermediate or even senior developer (depending on region).

    • Tony BarkOP
      8 months ago

      Corporations: “Employees are too expensive!”

      Also, corporations: “$100k/yr for a bot? Sure.”

      • @dan@upvote.au
        8 months ago

        There’s a lot of other expenses with an employee (like payroll taxes, benefits, retirement plans, health plan if they’re in the USA, etc), but you could find a self-employed freelancer for example.

        Or just get an employee anyways because you’ll still likely have a positive ROI. A good developer will take your abstract list of vague requirements and produce something useful and maintainable.

        • @Deestan@lemmy.world
          8 months ago

          These comparisons assume equal capability, which I find troubling.

          Like, a person who doesn’t understand singing and can’t learn it cannot perform adequately in a musical. It doesn’t matter if they’re cheaper.

        • @panda_abyss@lemmy.ca
          8 months ago

          They could hire a contractor and eschew all those costs.

          I’ve done contract work before; this seems like a good fit (defined problem plus budget, unknown timeline, clear requirements).

          • @dan@upvote.au
            8 months ago

            That’s what I meant by hiring a self-employed freelancer. I don’t know a lot about contracting so maybe I used the wrong phrase.

          • partial_accumen
            8 months ago

            Most of those expenses are mitigated by the fact that companies buy them in bulk on huge plans.

            There’s no bulk rate on payroll taxes or retirement benefits (pensions or employer 401k match). There can be some discounts on health insurance, but they’re not large, and they only kick in at order-of-magnitude differences in headcount. So a company with 500 employees will pay the same rates as one with 900. You only get real discounts once you have something like 10,000 employees.

            If you’re earning $100k gross as an employee, your employer is spending $125k to $140k for their total costs (your $100k gross pay is included in that number).

  • @tabarnaski@sh.itjust.works
    8 months ago

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    • Natanael
      8 months ago

      The Pink Elephant problem of LLMs: you cannot reliably make them NOT do something.

    • @Yaky@slrpnk.net
      8 months ago

      That is also the premise of one of the stories in Asimov’s I, Robot. The human operator did not say the command with enough emphasis, so the robot went and did something incredibly stupid.

      Those stories did not age well… Or now I guess they did?

  • @cyrano@lemmy.dbzer0.com
    8 months ago

    Title should be “User gives LLM prod access to a database, which the LLM deleted; user had no backups and used the same DB for prod and dev.” Less sexy, and less the LLM’s fault. It’s weird; it’s like the last 50 years of software development principles are being ignored.
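    Those ignored principles are easy to make concrete. A minimal sketch of two of them, with all names hypothetical: separate connection strings per environment, and a guard that refuses destructive statements anywhere but dev.

```python
import re

# Hypothetical setup: dev and prod never share a connection string,
# and nothing destructive runs outside dev. All names are illustrative.
DB_URLS = {
    "dev": "postgres://localhost/app_dev",
    "prod": "postgres://db.internal/app_prod",
}

DESTRUCTIVE = re.compile(r"^\s*(drop|truncate|delete)\b", re.IGNORECASE)

def run_sql(sql: str, env: str = "dev") -> str:
    """Refuse destructive SQL anywhere except the dev database."""
    if env != "dev" and DESTRUCTIVE.match(sql):
        raise PermissionError(f"destructive SQL blocked in {env}: {sql!r}")
    return f"would run on {DB_URLS[env]}: {sql}"
```

    An agent (or a tired human) holding only the dev credentials simply cannot drop the prod tables, no matter how many ALL CAPS instructions it ignores.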

    • @MagicShel@lemmy.zip
      8 months ago

      LLMs “know” how to do these things, but when you ask them to do the thing, they vibe instead of looking at best practices and following them. I’ve worked with a few humans I could say the same thing about. I wouldn’t put any of them in charge of production code.

      You’re better off asking how a thing should be done and then doing it. You can literally have an LLM write something and then ask if the thing it wrote follows industry best practice standards and it will tell you no. Maybe use two different chats so it doesn’t know the code is its own output.
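      The “two different chats” trick amounts to keeping generation and review in separate conversations. A rough sketch, where `ask` stands in for any chat-completion call (the signature is made up, not any specific API):

```python
def generate_then_review(ask, task: str) -> tuple[str, str]:
    """Run code generation and review as two independent conversations."""
    # Chat 1: produce the code.
    code = ask([{"role": "user", "content": f"Write code for: {task}"}])
    # Chat 2: a fresh message list, so the reviewer has no conversation
    # history telling it the code is its own output.
    review = ask([{"role": "user",
                   "content": "Does this code follow industry best practices? "
                              "Explain:\n" + code}])
    return code, review
```

      The point is purely structural: because the second call starts from an empty history, the model judges the code as a stranger’s rather than defending its own work.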

      • @cyrano@lemmy.dbzer0.com
        8 months ago

        Exactly. If you read their Twitter thread, they are learning about git, data segregation, etc.

        The same article could have been written 20 years ago about someone doing shit via Excel macros, back when a lot of workflows were Excel-centric.

  • Rose
    8 months ago

    AI is good at doing a thing once.
    Trying to get it to do the same thing the second time is janky and frustrating.

    I understand the use of AI as a consulting tool (look at references, make code examples) or for generating template/boilerplate code. You know, things you do once and then develop further upon on your own.

    But using it for continuous development of an entire application? Yeah, it’s not good enough for that.

    • @hisao@ani.social
      8 months ago

      Imo it’s best when you prompt it to do things step by step, micromanage, and always QC the result after every prompt: either manually, or by reprompting until it gets the thing done exactly how you want it. If you don’t have a preference or don’t care, the problems will pile up. If you didn’t understand what it did and moved on, it might not end well.
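      That prompt-then-QC loop can be sketched in a few lines. `ask` and `check` are placeholders for your chat call and your own quality check (nothing here is a real API):

```python
def prompt_until_ok(ask, check, task, max_rounds=3):
    """Prompt, QC the result, and reprompt with feedback until it passes."""
    msg = task
    for _ in range(max_rounds):
        out = ask(msg)
        ok, feedback = check(out)
        if ok:
            return out
        # Fold the QC failure back into the next prompt instead of moving on.
        msg = f"{task}\nYour last attempt failed QC: {feedback}. Fix it."
    raise RuntimeError(f"no acceptable result after {max_rounds} rounds")
```

      The bounded round count matters: if the model hasn’t converged after a few tries, you’re usually better off abandoning the chat than continuing to realign it.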

  • @panda_abyss@lemmy.ca
    8 months ago

    I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.

    • @dejected_warp_core@lemmy.world
      8 months ago

      I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

      This baffles me. How can anyone see AI function in the wild and not conclude 1) it has no conscience, 2) it’s free to do whatever it’s empowered to do if it wants and 3) at some level its behavior is pseudorandom and/or probabilistic? We’re figuratively rolling dice with this stuff.

      • @panda_abyss@lemmy.ca
        8 months ago

        It’s incredible that it works, it’s incredible what just encoding language can do, but it is not a rational thinking system.

        I don’t think most people care about the proverbial man behind the curtain, it talks like a human so it must be smart like a human.

          • @fishy@lemmy.today
            8 months ago

            Smart is a relative term lol.

            A stupid human is still smart when compared to a jellyfish. That said, anybody who comes away from interactions with LLM’s and thinks they’re smart is only slightly more intelligent than a jellyfish.

    • @LilB0kChoy@midwest.social
      8 months ago

      When it comes to LLMs, they cannot do any work that you yourself do not understand.

      And even if they could, how would you ever validate it if you can’t understand it?

    • @vxx@lemmy.world
      8 months ago

      What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

      • @panda_abyss@lemmy.ca
        8 months ago

        With vibe coding you do end up spending a lot of time waiting on prompts, so I get the results of that study.

        I fall pretty deep in the power user category for LLMs, so I don’t really feel that the study applies well to me, but also I acknowledge I can be biased there.

        I have custom proprietary MCPs for semantic search over my code bases that let AI do repeated graph searches on my code (imagine combining a language server, ctags, networkx, and grep plus fuzzy search). That is way faster than iteratively grepping and code-scanning manually, with a low chance of LLM errors. By the time I open GitHub code search or run ripgrep, Claude has already prioritized and listed my modules to investigate.

        That tool alone with an LLM can save me half a day of research and debugging on complex tickets, which pays for an AI subscription alone. I have other internal tools to accelerate work too.

        I use it to organize my JIRA tickets and plan my daily goals. I actually get Claude to do a lot of triage for me before I even start a task, which cuts the investigation phase to a few minutes on small tasks.

        I use it to review all my PRs before I ask a human to look, it catches a lot of small things and can correct them, then the PR avoids the bike shedding nitpicks some reviewers love. Claude can do this, Copilot will only ever point out nitpicks, so the model makes a huge difference here. But regardless, 1 fewer review request cycle helps keep things moving.

        It’s a huge boon to debugging — much faster than searching errors manually. Especially helpful on the types of errors you have to rabbit hole GitHub issue content chains to solve.

        It’s very fast to get projects to MVP while following common structure/idioms, and can help write unit tests quickly for me. After the MVP stage it sucks and I go back to manually coding.

        I use it to generate code snippets where documentation sucks. If you look at the ibis library in Python for example the docs are Byzantine and poorly organized. LLMs are better at finding the relevant docs than I am there. I mostly use LLM search instead of manual for doc search now.

        I have a lot of custom scripts and calculators and apps that I made with it which keep me more focused on my actual work and accelerate things.

        I regularly have the LLM help me write bash or python or jq scripts when I need to audit codebases for large refactors. That’s low maintenance one off work that can be easily verified but complex to write. I never remember the syntax for bash and jq even after using them for years.
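        A typical audit script of that kind is short, easily verified by eye, and tedious to write from memory. A hypothetical example (the deprecated helper name is made up):

```python
import re
from pathlib import Path

# One-off audit: find every call to a deprecated helper so a large
# refactor can be planned file by file. `old_fetch` is a made-up name.
DEPRECATED = re.compile(r"\bold_fetch\(")

def audit(root: Path) -> dict[str, list[int]]:
    """Map each offending file to the 1-based line numbers that must change."""
    hits: dict[str, list[int]] = {}
    for path in sorted(root.rglob("*.py")):
        lines = [i for i, line in enumerate(path.read_text().splitlines(), 1)
                 if DEPRECATED.search(line)]
        if lines:
            hits[str(path.relative_to(root))] = lines
    return hits
```

        This is exactly the “low maintenance, easily verified” shape described above: you can spot-check a couple of hits by hand and then trust the rest.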

        I guess the short version is I tend to build tools for the AI, then let the LLM use those tools to improve and accelerate my workflows. That returns a lot of time back to me.

        I do try vibe coding but end up in the same time sink traps as the study found. If the LLM is ever wrong, you save time by forking the chat rather than trying to realign it, but it’s still likely to be slower. Repeat chats result in the same pitfalls for complex issues and bugs, so you have to abandon that state quickly.

        Vibe coding small revisions can still be a bit faster and it’s great at helping me with documentation.

        • @vxx@lemmy.world
          8 months ago

          Don’t you have any security concerns with sending all your code and JIRA tickets to some company’s servers? My boss wouldn’t be pleased if I sent anything that’s deemed a company secret over unencrypted channels.

          • @panda_abyss@lemmy.ca
            8 months ago

            The tool isn’t returning all code, but it is sending code.

            I had discussions with my CTO and security team before integrating Claude code.

            I have to use Gemini in one specific workflow, and Gemini had a lot of landmines for how they use your data. Anthropic was easier to understand.

            Anthropic also has some guidance for running Claude Code in a container with firewall and your specified dev tools, it works but that’s not my area of expertise.

            The container doesn’t solve all the issues like using remote servers, but it does let you restrict what files and network requests Claude can access (so e.g. Claude can’t read your env vars or ssh key files).

            I do try local LLMs but they’re not there yet on my machine for most use cases. Gemma 3n is decent if you need small model performance and tool calls, phi4 works but isn’t thinking (the thinking variants are awful), and I’m exploring dream coder and diffusion models. R1 is still one of the best local models but frequently overthinks, even the new release. Context window is the largest limiting factor I find locally.

              • @panda_abyss@lemmy.ca
                8 months ago

                Batch process turning unstructured free form text data into structured outputs.

                As a crappy example imagine if you wanted to download metadata about your albums but they’re all labelled “Various Artists”. You can use an LLM call to read the album description and fix the track artists for the tracks, now you can properly organize your collection.

                I’m using the same idea, different domain and a complex set of inputs.

                It can be much more cost effective than manually spending days tagging data and writing custom importers.

                You can definitely go lighter than LLMs. You can use gensim to do category matching, you can use sentence transformers and nearest neighbours (this is basically what Semantle does), but LLM performed the best on more complex document input.
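                The lighter-than-LLM option can be sketched with plain bag-of-words cosine similarity, a crude stand-in for sentence-transformer embeddings (all data here is made up):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def nearest_category(text: str, categories: dict[str, str]) -> str:
    """Return the category whose description is closest to `text`."""
    vec = Counter(text.lower().split())
    return max(categories,
               key=lambda c: cosine(vec, Counter(categories[c].lower().split())))
```

                Swapping the word-count vectors for real embeddings gives you the sentence-transformers nearest-neighbour approach; the LLM wins only when the input documents get messier than a similarity score can handle.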

                • @vxx@lemmy.world
                  8 months ago

                  That’s pretty much what Google says they use AI for: structuring.

                  Thanks for your insight.

      • @LilB0kChoy@midwest.social
        8 months ago

        I’m not the person you’re replying to but the one thing I’ve found them helpful for is targeted search.

        I can ask it a question and then access its sources from whatever response it generates to read and review myself.

        Kind of a simpler, free LexisNexis.

        • @panda_abyss@lemmy.ca
          8 months ago

          I built a bunch of local search tools with MCP, and that’s where I get a lot of my value out of it.

          RAG workflows are incredibly useful and with modern agents and tool calls work very well.

          They kind of went out of style but it’s a perfect use case.

      • @WraithGear@lemmy.world
        8 months ago

        ok so, i have large reservations with how LLMs are used. but when used correctly they can be helpful. but where and how?

        if you were to use it as a tutor, the same way you would ask a friend what a segment of code does, it will break down the code and tell you. it will get as nitty-gritty and elementary-school level as you wish, without judgement, and in whatever manner you prefer. it will recommend best practices, and will tell you why your code may not work, with the understanding that it does not have knowledge of the project you are working on (it’s not going to know the name of the function you are trying to load, but it will recommend checking for that in troubleshooting).

        it can rtfm and give you the parts you need for anything with available documentation, and it will link to it so you can verify it, which you should do often, just like you were taught to do with wikipedia articles.

        if you ask it for code, prepare to go through each line like a worksheet from high school to point out all the problems. while that is good exercise for a practical case, namely the task you are on, it would be far better to write it yourself, because you should know the particulars and scope.

        also it will format your code and provide informational comments if you can’t be bothered, though they will be generic.

        again, treat it correctly for its scope, not what it’s sold as by charlatans.

    • Aatube
      8 months ago

      he’s smart enough to just roll back to a backup

    • @cyrano@lemmy.dbzer0.com
      8 months ago

      The problem becomes when people who are playing the equivalent of pickup basketball at the local park think they are playing in the NBA and don’t understand the difference.