• 0 Posts
  • 208 Comments
Joined 2 years ago
Cake day: June 10th, 2023

  • dingus@lemmy.world to Lemmy Shitpost@lemmy.world · Poor pugs
    10 upvotes · 26 days ago

    Actually, that is how bones work for many species. When you are born, the ends of your bones are not yet “ossified” and are mostly made of cartilage, so they will not show up on an X-ray.

    The picture could obviously still be fake, but that really is how bones mostly work for the fetuses and infants of many species.





  • Yeah, I drive around 3 hours on the highway every few weeks. Sometimes on my drive there’s obviously traffic, and a lot of the time it’s something like rush hour, a crash, construction, etc.

    But then, like…a good portion of the time, when I finally reach the very front of the “clog”, I find it’s just a blockade of people driving incredibly slowly across all lanes, refusing to move over even though they’re going under the speed limit.


  • Well, I mean, I guess I get what you’re saying, but I don’t necessarily agree. I don’t really ever see it being pushed as a mental health tool. Rather, I think the sycophantic nature of it (which does seem to be programmed in) is the reason for these issues. If it simply gave the most “common” answers instead of the most sycophantic ones, I don’t know that we’d have such a large problem of this nature.



  • dingus@lemmy.world to Lemmy Shitpost@lemmy.world · Just a little... why not?
    1 upvote · 1 downvote · 2 months ago

    Yeah, ChatGPT is incredibly sycophantic. It’s like it’s basically just programmed to try to make you feel good and affirm you, even if these things are actually counterproductive and damaging. If you talk to it enough, you end up seeing how much of a brown-nosing kiss-ass they’ve made it.

    My friend with a mental illness wants to stop taking her medication? She explains this to ChatGPT. ChatGPT “sees” that she dislikes having to take meds, so it encourages her to stop to make her “feel better”.

    A meth user is struggling to quit? They tell this to ChatGPT. ChatGPT “sees” how the user is suffering and encourages them to take meth to help ease their suffering.

    Thing is, they have actually programmed some responses into it that are firmly against self-harm. Suicide is one: thankfully, even if you use flowery language to describe it, ChatGPT will vehemently oppose you.


  • My friend with schizoaffective disorder decided to stop taking her meds after a long chat with ChatGPT, which convinced her she was fine to stop taking them. It went… incredibly poorly, as you’d expect. Thankfully she’s been back on her meds for some time.

    I think the people programming these really need to be careful about mental health issues. I noticed that ChatGPT seems to be hard-coded to convince you NOT to kill yourself, for example; it gives you numbers for hotlines and stuff instead. But they should probably hard-code guardrails for other potentially dangerous requests too, like telling psych patients to go off their meds or telling meth addicts to have just a little bit of meth.