• mx_smith@lemmy.world · +5/−5 · 10 hours ago

    I feel like AI is a 5G language, in that we've moved on from writing code directly to writing md files that command the bots to write the code. It's a higher abstraction over the code: it makes you think less about the code itself and more about the bigger picture, but you still need those skills to check the bot's output.

    • nucleative@lemmy.world · +3/−1 · 10 hours ago

      Don’t know why you got downvoted; you’re 100% right. It’s just another layer of abstraction: a very high-level, non-deterministic one.

      • Philippe23@lemmy.ca · +3 · 10 hours ago

        I would say that it doesn’t count as a “5G language” if you have to understand and check the underlying “assembly code” it outputs every time you use it.

      • mabeledo@lemmy.world · +2/−1 · 8 hours ago

        If natural languages were just another level of abstraction, we would already have a successful English-like programming language.

        • Zexks@lemmy.world · +1 · 6 hours ago

          They do, and they have been around for years. They’re not successful because they’re full of fluff: it’s language, not instructions.

      • mx_smith@lemmy.world · +2/−1 · 10 hours ago

        It seems there are a lot of people on Lemmy who dislike anything AI. I have no choice at my work, so I have to make the best of it; I’m not leaving my job in this economy.

        • mabeledo@lemmy.world · +3/−2 · 8 hours ago

          You will learn to like something because you’re being coerced into using it.

          Sounds about right.

    • kescusay@lemmy.world · +3/−2 · 9 hours ago

      This is a recipe for SQL injections, race conditions, memory leaks, and keys being placed directly in code.

      Trust the output of an LLM at your peril. Literally.
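To make that first failure mode concrete, here’s a minimal Python sketch (standard-library sqlite3 only; the table, column names, and payload are illustrative) of the string-concatenation pattern behind SQL injection, next to the parameterized form that avoids it:

```python
import sqlite3

# In-memory demo database with a single user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: concatenating input lets the payload rewrite the WHERE clause,
# so the query matches every row and leaks the secret.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the payload as plain data,
# so no user is literally named "' OR '1'='1" and nothing matches.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)  # [('hunter2',)]
print(safe)    # []
```

Both queries look similar in generated code, which is exactly why a reviewer who isn’t reading every line can miss the vulnerable one.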

        • kescusay@lemmy.world · +2/−1 · 9 hours ago

          Unless you’re checking every line and have a comprehensive enough understanding of the codebase to spot the subtle bugs it will introduce that aren’t caught by your tests, you’re still opening yourself up to problems.

          • mx_smith@lemmy.world · +1 · 7 hours ago

            You’re right, and that check is part of the workflow. You really should only do this process in a language you’re really familiar with, so you know exactly how you would do it without the bot’s assistance.

    • mabeledo@lemmy.world · +2/−2 · 8 hours ago

      Many people believe this, and it couldn’t be more wrong. It’s like saying that a product manager can code if their tickets are detailed enough to give a general vision of a piece of software.

      Implementation still matters. Context still matters. Vibe-coded projects all follow the same pattern, where each change is a thousand lines of code out, two thousand in. And there’s a breaking point where reading and understanding those changes is not only impractical, but also counterproductive.

      But then there’s the bigger question of language expressivity and determinism: even if LLMs could achieve a certain level of consistency of outputs given certain inputs, how do we make a natural language like English expressive enough and, more importantly, unambiguous enough, to work like an actual programming language?

      • mx_smith@lemmy.world · +1 · 7 hours ago

        I think everyone’s development experience with these tools is different. We’re not letting it work the ticket blindly off some prompt; we’re having it do small tasks that would normally take a few minutes and are now done in seconds. We don’t allow these bots to commit code, or even write the commit message, and the devs are still responsible at the end of the day for the code they commit.

        • mabeledo@lemmy.world · +1/−1 · 5 hours ago

          That’s not at all what the previous message said. You cannot call it a “programming language” and then add all these caveats that programming languages don’t suffer from.