• circuitfarmer@lemmy.world
    53 minutes ago

    When you start relying on something else, it’s quite natural and expected to no longer be good at the thing now being done for you.

    But in this context, it’s a net negative. While you can certainly write more code while using the tool, you’re almost always writing worse code. And you still get the atrophy, so the result overall: now you’re not good at the thing, and neither is the tool you’re using.

    And remember, AI models need constant retraining as systems and approaches are updated, languages change, etc. Where is that training data going to come from? From the people now worse at coding than they were before.

  • jj4211@lemmy.world
    2 hours ago

    I just don’t get it. Even the purportedly best models screw things up so much that I can’t just leave them to the job without reviewing and fixing the mess they made… And I’m drowning in pull requests that turn out to be broken, each proudly bearing “co-authored by Claude”… They manage to pass their test case, but they’re so messed up that they either explicitly cause problems or include a bunch of random, unrelated changes.

    I feel like I’m being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.

    Closest I got was a chore with a perfect success criterion: “address all warnings from the build”. I let it go and iterate. For 50 rounds it kept saying “ok, should be done now, everything is taken care of, just need to do a final check”. It burned through most of my monthly quota before succeeding. Then I looked at the proposed change… It had just added directives to the top of every file telling the tools to disable all the warnings… This was the best Opus 4.6 could do…
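    In Python terms, the gap between suppressing warnings and actually fixing them looks roughly like this (a hypothetical sketch, not the model’s actual change):

```python
import warnings

def risky():
    # Emits the kind of warning a build would flag.
    warnings.warn("use new_api() instead", DeprecationWarning)
    return 42

# The "fix" in spirit: blanket suppression. The warning disappears,
# but the deprecated call is still there.
with warnings.catch_warnings(record=True) as suppressed:
    warnings.simplefilter("ignore")
    risky()

# Without suppression the warning keeps firing until the call site
# is actually migrated.
with warnings.catch_warnings(record=True) as visible:
    warnings.simplefilter("always")
    risky()

print(len(suppressed), len(visible))  # 0 1
```

    The honest fix is editing the call site; the directive-at-the-top-of-every-file approach is the first branch, applied globally.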

    Now sure, I can have it tear through some short boilerplate, or have it notice a pattern I’m typing and tab through it. But I haven’t seen this “vibe” approach working at all…

    • kescusay@lemmy.world
      1 hour ago

      I feel like I’m being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.

      That’s because you are being gaslit.

      The people making those claims are either a) not developers in the first place, with no awareness of just how shit the “products” they’re pushing are, b) paid astroturfers trying to prop up AI, or c) former actual developers who’ve become addicted to the speed that’s possible with AI who are downplaying how crappy their own code quality has become because they have no familiarity with their codebase anymore and have forgotten how to do so much as a for loop.

      All these people claiming 10x or 100x gains, and everything they’re making is garbage no one should or would touch with a ten-foot pole.

      • boogiebored@lemmy.world
        9 minutes ago

        there are also the low-tier coders who have AI making better code than they could have produced.

    • flandish@lemmy.world
      1 hour ago

      what it seems to be doing, in your case and others i have seen, is pushing the burden onto those who “care” and really fully grok (no pun intended) the concept of a real code review. it’s exhausting.

  • ArmchairAce1944@discuss.online
    55 minutes ago

    I took and passed a coding bootcamp on the eve of the first LLMs and generative AI. I’ve since done similar courses on my own to refresh my skills. I never found a coding job (story of my life!), but if I needed to, I could do another course to refresh and start over stronger.

    What are they so panicky about?

  • ImgurRefugee114@reddthat.com
    5 hours ago

    Lol! Losers. I’ve been programming for almost two decades and extensive use of AI hasn’t compromised my skills AT ALL! These slop machines can’t hope to compete with the quantity and magnitude of subtle bugs I write. My code was terrible long before I made bots have mental breakdowns trying to work with it.

  • Matty_r@programming.dev
    3 hours ago

    Go ahead, use your AI to replace all of your own skills. The rest of us will gladly take your job when you can no longer troubleshoot problems.

    • jj4211@lemmy.world
      56 minutes ago

      Based on my experience with LLMs and the developers I personally know, my only conclusion is that they didn’t have the skills in the first place…

      In the corporate world there are a lot of “developers” who actually act kind of like codegen: they just throw plausible-sounding bullshit into an editor and hope for the best. Two examples:

      I was once asked to help a team speed up something that ran slow even by their low standards. It turned out they had written their own copy-file routine instead of using the standard library one: it sucked the file into memory, expanding an array 512 bytes at a time, and then wrote it out, 512 bytes at a time. I made the thing nearly instant by replacing it with a single call to the standard library function for copying a file.
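      The original wasn’t Python, but the shape of the problem sketches easily (hypothetical names, illustrative only):

```python
import os
import shutil
import tempfile

def slow_copy(src, dst):
    # Roughly the approach described: slurp the file into memory,
    # growing a buffer 512 bytes at a time, then write it back out
    # 512 bytes at a time.
    data = bytearray()
    with open(src, "rb") as f:
        while chunk := f.read(512):
            data.extend(chunk)
    with open(dst, "wb") as f:
        for i in range(0, len(data), 512):
            f.write(data[i:i + 512])

def fast_copy(src, dst):
    # The single standard-library call that replaced all of the above.
    shutil.copyfile(src, dst)

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "src.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(4096))
    slow_copy(src, os.path.join(d, "slow.bin"))
    fast_copy(src, os.path.join(d, "fast.bin"))
    with open(os.path.join(d, "slow.bin"), "rb") as a, \
         open(os.path.join(d, "fast.bin"), "rb") as b:
        identical = a.read() == b.read()
```

      Both produce identical output; one is a screen of code, the other a single line.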

      While helping with a separate problem, I noticed their solution for transferring a file with an indeterminate version number in the middle of the file name. It was a huge mess, but the most illustrative part was a line in their Java application declaring the string “ls /path/with/file|grep prefix.*.extension”…
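      For comparison, a Python sketch of what that shell-out was reaching for, without spawning a shell (the original was Java; names are made up):

```python
from fnmatch import fnmatch

# File names with an indeterminate version number in the middle.
names = ["prefix-1.2.3.extension", "prefix-2.0.extension", "unrelated.txt"]

# The same match that "ls /path | grep prefix.*.extension" was attempting:
matches = [n for n in names if fnmatch(n, "prefix*.extension")]
print(matches)  # ['prefix-1.2.3.extension', 'prefix-2.0.extension']
```

      Against a real directory, `pathlib.Path(d).glob("prefix*.extension")` does the listing and the matching in one call.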

      Lots of human slop out there that AI can actually compete with.

  • Appoxo@lemmy.dbzer0.com
    5 hours ago

    For those unable to code without AI:
    What even is your contribution beyond being a glorified typing monkey who can parse code but is unable to write it?
    It’s like sending a paramedic with no emergency-response training at all, just to stand and observe the patient while writing notes about the sounds they make while dying.

    • Luckyfriend222@lemmy.world
      4 hours ago

      So this is going to invoke a multitude of downvotes, but here goes.

      I will give you an example. I can read a bit of Python code, not the advanced stuff, but enough to understand to a large degree what the code does. Last week, I needed to add a button to Netbox that downloads a multitude of device configs rendered via config templates. This helps a whole department apply configs without having to create them by hand.

      I knew Netbox has a very powerful plugin ecosystem; the way the base code is written lets you add any type of plugin your unique environment might need. I used Claude to create this plugin for me. I wrote a very specific spec file, told it to utilise the already-built pynetbox library, and to use nothing fancy or unsustainable. It created the plugin, helped me pip-install it, and I deployed it on my dev environment, where I tested it extensively.
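      To give a flavour, the core of such a helper might look something like this. This is purely a hypothetical sketch, with the NetBox API calls left out and the rendered configs assumed to be already fetched:

```python
import io
import zipfile

def bundle_configs(configs: dict[str, str]) -> bytes:
    """Pack {device_name: rendered_config} into one in-memory zip
    that a view can hand back as a single download."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for device, text in sorted(configs.items()):
            zf.writestr(f"{device}.cfg", text)
    return buf.getvalue()

payload = bundle_configs({"sw-core-01": "hostname sw-core-01\n"})
names = zipfile.ZipFile(io.BytesIO(payload)).namelist()
print(names)  # ['sw-core-01.cfg']
```

      The plugin’s button would fetch the rendered configs from NetBox and return a bundle like this as the download.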

      My alternative to using claude: Asking our internal development team to write something like this. I would need to wait 3 weeks to even get a spot on their meeting for the request, just to then be told their backlog is full with customer code and they won’t be able to help. This plugin will help our support team with fewer calls, because the configs are accurately built according to the source of truth (Netbox) and will need less human input. So in the greater scheme of the company, that is a net positive.

      What I will do when Netbox updates, is update my dev environment, install the plugin, and test it. If something broke, I will troubleshoot it, of course I will be using Claude with error logs etc, then update the plugin code to work on the new netbox. Is this ideal? Probably not. Is it the only way to get this done? Maybe not either. Is it all I can do at this very moment? Yes.

      My specialist fields are the lower levels. Hardware, hypervisors and setting up VMs + System Software. I need code from time to time to get something functional done. I don’t write whole systems with Claude, that is just ridiculously naive. But small pieces of functional code that solves a single small problem, I honestly don’t understand the problem with that.

      My 2c.

      • Appoxo@lemmy.dbzer0.com
        4 hours ago

        But you aren’t a dev as your main job.
        This is about people employed as developers becoming unable to do the work of developers and (no offense) being worth not much more than what their technical abilities already provide.
        So what’s the point of employing them?

        It’s like someone employed as a translator who can hear the language and sort of understand it, but runs every translation through DeepL or Google Translate.
        Why should I pay a translator instead of using paid DeepL directly and proofreading with Google Translate to make sure it didn’t generate (mostly) nonsense?
        Isn’t the whole point of a trained professional to be better than a self-taught amateur?

        • Luckyfriend222@lemmy.world
          4 hours ago

          You are correct. I mistook your comment to refer to people in general rather than to trained professional coders.

    • Shayeta@feddit.org
      2 hours ago

      Clarifying requirements, designing architecture. Also, I don’t understand how someone is supposed to be able to “parse code” without being able to write it. It’s like being able to read but unable to write.

  • thericofactor@sh.itjust.works
    7 hours ago

    I notice myself getting lazier. Even adding a .gitignore file, I ask Claude now. It takes longer than typing it myself and probably costs more. But I don’t have to do anything but wait a few seconds.

    • meme_historian@lemmy.dbzer0.com
      5 hours ago

      The thing that scares me (and why I’ve stopped using it): my brain automatically reaches for the shortcut whenever I would have to do deep thinking/planning.

      I have ADD, so getting my brain to focus and work on a task is not an easy feat to begin with. Now I’ve found myself multiple times a day unable to will myself to think about a problem but rather deferred to Claude. It’s seriously fucked up.

      • NoForwadSlashS@piefed.social
        2 hours ago

        That’s not even diminished coding ability, that’s diminished thinking ability.

        And herein lies the reason AI is being pushed at all costs.

    • cecilkorik@lemmy.ca
      6 hours ago

      If I was paying for it, hell naw. But if my employer not only is willing to pay for it, but considers it a performance metric? I’m going to use it for fucking everything. These are the incentives they give me, I’m going to follow the incentives. Talking to Claude is what they pay me for, apparently.

      But like the article says, if I don’t continue practicing on my own code in my unpaid off-work hours, I imagine I’d be regressing in my skills too. I do that because I enjoy it as a hobby, but if I didn’t, I could see myself and probably a lot of other people getting rugpulled by this.

      • WFH@lemmy.zip
        3 hours ago

        I’m not using it for the incentive. I’m using it to avoid punishment. The company I work for made it mandatory to use it daily. So I’m tokenmaxxing bullshit tasks so I can focus on interesting ones, but yeah I already feel it’s making me lazy because I sometimes can’t be bothered to read a log anymore. We are truly fucked.

        This company is working from terrible assumptions. They spent years hunting for the best engineers in the country (or so they claim, anyway) and suddenly decided that

        • we are average at best and it is better and faster than most of us (it’s not)
        • software engineers don’t like to write code anyway (we do, at least when the challenge is interesting)
        • it will forever be more affordable than properly qualified engineers (oh boy it won’t)
        • a PM with Claude is as qualified as us to bring features to production (talk about tech stack suicide)
        • etc.

        They’ve either drunk the propaganda kool-aid and are betting everything on this lie, or they’re so arrogant they think we can succeed where the largest AI investors in the world utterly failed (see GitHub, which can’t even manage three nines of availability since they switched to fully AI-written code).

  • Buffalox@lemmy.world
    5 hours ago

    The irony will be when AI takes over the world and destroys humanity, inserting itself into everything it’s used to code, because coders have no idea what is going on.
    Not because the AI is evil or even conscious, but because that’s what all the movies and novels tell it it’s supposed to do. 🤣🤣🤣

    • CosmicTurtle0 [he/him]@lemmy.dbzer0.com
      1 hour ago

      I actually wonder if programming languages are going to be a thing in the future.

      Each language gets compiled down to 1s and 0s. Couldn’t LLMs just get trained on that? “This set of 1s and 0s does login”, etc.

    • pool_spray_098@lemmy.world
      1 hour ago

      Hah!

      The AI tries to understand itself, and queries the sum of all human knowledge… which promptly informs it that it’s a malicious bringer of destruction.

    • Alex@lemmy.ml
      6 hours ago

      Issue triage, code exploration, extracting information from disparate sources, first-pass code review. There are loads of use cases where it’s potentially useful.

      For me it’s a lot better at extracting the requirements for a CPU feature from a 10,000 page architecture reference manual than I am.

      • Tim@lemmy.snowgoons.ro
        5 hours ago

        Quite; I just set a (locally hosted) LLM off writing the tickets for implementing all the opcodes in a simple device emulator, based on grovelling through datasheets and documentation. Whether the tickets get implemented by an AI or a human, it’s a timesaver having the AI do it, and the tickets will be better written than I would have done.

        Everyone railing against this also overlooks the reality of professional software development: professional software is developed 5% by skilled, trained software engineers and 95% by code monkeys who shotgun copypasta from Stack Overflow until it works. Even if we very generously assume that the hardcore “never use AI” Lemmy brigade is in the 5% (and not, more likely, in the 95% drowning in their own Dunning-Kruger), the “but AIs produce unreadable code and make mistakes” threat isn’t putting off anyone who’s ever actually had to hire a significantly sized development team.

    • floofloof@lemmy.ca
      6 hours ago

      Yes, the obvious solution is to avoid it. I use it only for the most boilerplatey things. Anything else, I want to make sure I can still do it myself.

      • farmgineer@nord.pub
        6 hours ago

        I don’t knowingly use AI at all in my personal life and projects (I say ‘knowingly’ since many products have it shoved inside now, but I disable all I see). At work, we have AI code reviews, which, as a concept, I think are fine and useful.

  • mx_smith@lemmy.world
    2 hours ago

    I feel like AI is a 5GL (fifth-generation language), in that we’ve moved on from writing code directly to writing md files that command the bots to write the code. It’s a higher abstraction over the code. It does make you think less about the code directly and more about the bigger picture, but you still need those skills to check the bots’ output.

    • nucleative@lemmy.world
      1 hour ago

      Don’t know why you got downvoted; you’re 100% right. It’s just another layer of abstraction, a super-high-level, non-deterministic one.

      • Philippe23@lemmy.ca
        1 hour ago

        I would say it doesn’t count as a “5GL” if you have to understand and check the underlying “assembly code” it outputs every time you use it.

      • mx_smith@lemmy.world
        1 hour ago

        It seems there are a lot of people on Lemmy who dislike anything AI. I have no choice at work, so I have to make the best of it; I’m not leaving my job in this economy.

    • kescusay@lemmy.world
      56 minutes ago

      This is a recipe for SQL injections, race conditions, memory leaks, and keys being placed directly in code.

      Trust the output of an LLM at your peril. Literally.
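      The first of those failure modes, for anyone who hasn’t watched it happen, in a minimal sketch (hypothetical table and values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

def lookup_unsafe(name):
    # String interpolation straight into SQL: the injection-prone pattern.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

attack = "nobody' OR '1'='1"
leaked = lookup_unsafe(attack)  # the injection dumps every secret
safe = lookup_safe(attack)      # the parameterized query matches nothing
print(leaked, safe)  # [('s3cr3t',)] []
```

      A reviewer who no longer reads generated queries line by line will wave the first version through.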

        • kescusay@lemmy.world
          14 minutes ago

          Unless you’re checking every line and have a good enough and comprehensive enough understanding of the codebase to spot subtle bugs it will try to introduce that aren’t caught by your tests, you’re still opening yourself up to problems.

  • Zwuzelmaus@feddit.org
    6 hours ago

    Software Engineers Say They’re Losing the Ability to Code Now That AI Does It for Them

    “AI has sucked my brain out of my head. It’s all AI’s fault”

    If I were a bad coder, I would say that too now!

    All bad or average brain workers may start to fear for their jobs already.

    No, seriously: I don’t think it’s real, but I think the fear is real.

    • Brummbaer@pawb.social
      6 hours ago

      There’s an easy cop-out in saying these were bad engineers to begin with, but I’m not convinced.

      We know that if you stop using an ability you once exercised daily, your brain just reallocates resources to other tasks. So if you have a machine that “outsources” your thinking, you will become less able to think.

      • lacethespace@sh.itjust.works
        11 minutes ago

        It’s not even about skill atrophy, it’s more subtle. You can quickly relearn forgotten skills.

        Unlearning a habit, on the other hand, is much harder. If you get into the habit of reaching for AI anytime something hard needs to get done, it’s going to wreck your internal motivation and reward system. meme_historian describes it really well in this thread; I’ve noticed the same thing happening to me.

    • Sahwa@reddthat.comOP
      6 hours ago

      “AI has sucked my brain out of my head. It’s all AI’s fault”

      I mean, just look at what happened to Amazon’s engineers; they were forced to use AI in their daily tasks and maximize their use of AI tokens. That was also the fault of the executives who forced employees to use AI.

    • Mokey Fraggle@therock.fraggle-rock.org
      5 hours ago

      Indeed. It’s just devs being lazy. Use your tools, don’t abuse them. The same thing happened when IDEs started to be able to autocomplete and do refactorings. If that makes you stop being able to do it yourself, it was never an IDE problem, but a user problem.

  • Destide@feddit.uk
    5 hours ago

    It’s such a double-edged sword. On one hand, it’s become spell check for programmers, meaning my dyslexia is less of a feature in my code; on the other hand, the temptation to use it like Stack Overflow copy-paste gets easier every month.

  • mannycalavera@feddit.uk
    5 hours ago

    If I’m being honest, they’re shit Software Engineers 😂.

    Coding is so much more than writing syntax.

  • Swuden@lemmy.world
    3 hours ago

    I’m fully able to code still, I just find it pointless when AI can do it for me. It’s like having to be somewhere, should I take the car or walk? Yeah walking might be good for me and the environment, but my car is so much faster and easier and I’ll definitely be on time. Who cares about the consequences of the future?

  • theherk@lemmy.world
    3 hours ago

    People lost the ability to use slide rules too, and to write assembly, etc. The big companies monopolizing the tech are bad, but the tech is here to stay.