• AdolfSchmitler@lemmy.world
    link
    fedilink
    arrow-up
    3
    ·
    36 minutes ago

    Aren’t there still forums where people can say their fish tried it or swim had some or something like that? Or am I just that old these things don’t really exist anymore? Anyone else remember those?

  • captainlezbian@lemmy.world
    link
    fedilink
    arrow-up
    18
    ·
    2 hours ago

    Remember kids, your drug buddy needs to have experience with the substance, basic first aid skills, the ability to call an emergency line, the ability to administer antidotes if they’re easy and readily available (that’s really just for opiates at the moment, but it is vital for them), and most importantly be human. Anything else is just someone you do drugs with. The drug buddy is a friend and a good time amplifier sure, but they’re also a safety figure.

  • jpreston2005@lemmy.world
    link
    fedilink
    arrow-up
    3
    ·
    1 hour ago

    Here’s a news article about this, and what the snipped image doesn’t tell you, is that it did actually give dosage recommendations.

    It gave him specific doses of illegal substances, and in one chat, it wrote, “Hell yes—let’s go full trippy mode,” before recommending Sam take twice as much cough syrup so he would have stronger hallucinations.

    It’s one thing to be so isolated from your community that you rely extensively on on-line relationships, but it’s quite a bit different to take that a step further, relying on a machine. Like, what do you think pets are for, my guy? Get a dog, man.

  • Lorindól@sopuli.xyz
    link
    fedilink
    arrow-up
    13
    ·
    3 hours ago

    Lasr summer I asked ChatGPT about Liberty Caps - just to see how bad advice it would give me. It showed me pictures of Death Caps and Destroying Angels and claimed they were Liberty Caps.

    After that I was certain that someone was going to die just like that poor guy.

  • ceoofanarchism@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    5
    ·
    3 hours ago

    I’m writing a book about drug use that needs accurate advice or say “theoretically” and it will tell you anything. Unfortunately it must already be high because it will give you made up advice.

    • nightlily@leminal.space
      link
      fedilink
      English
      arrow-up
      1
      ·
      27 minutes ago

      Have you talked to people who use LLMs regularly? They’ll acknowledge hallucinations but will downplay them as much as possible - saying they’re low frequency and they can support them, while telling you about how they’re using it in an area they’re unfamiliar with. Dunning Kruger strikes again.

    • FartMaster69@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      83
      ·
      10 hours ago

      It tells them it knows what it’s talking about and it speaks with confidence.

      Meanwhile companies and governments won’t stfu about how powerful and great this tech supposedly is, so a percentage of people will believe the propaganda.

      • arrow74@lemmy.zip
        link
        fedilink
        arrow-up
        6
        ·
        5 hours ago

        I’d love students to be given a lesson on tricking AI into giving a false answer. It’s not hard and should be pretty eye opening

        • cb900f_bodhi@lemmynsfw.com
          link
          fedilink
          arrow-up
          4
          arrow-down
          2
          ·
          5 hours ago

          I’d love for high school students to be taught how to ID and manage narcissists and psychopaths but that’s not gonna happen either, unfortunately

          • jaybone@lemmy.zip
            link
            fedilink
            English
            arrow-up
            5
            ·
            5 hours ago

            I’d love for high school students to be taught critical thinking skills, so none of this nonsense would be happening now.

            • jpreston2005@lemmy.world
              link
              fedilink
              arrow-up
              2
              ·
              2 hours ago

              I’d love it if teachers pay was doubled and class sizes were cut in half. That’s literally the answer to all the “what would make education better in the US?” questions. Pay teachers what they deserve, and quit shoving more and more students into already full classrooms.

    • Technus@lemmy.zip
      link
      fedilink
      arrow-up
      45
      arrow-down
      1
      ·
      10 hours ago

      I think some people are so eager to offload all critical thinking to the machine because they’re barely capable of it themselves to begin with.

  • Arkthos@pawb.social
    link
    fedilink
    arrow-up
    43
    arrow-down
    2
    ·
    10 hours ago

    Might sound cold, but this is really just a Darwin award. Yeah, the guardrails also suck, but what a dumbass.

    • I_Has_A_Hat@lemmy.world
      link
      fedilink
      arrow-up
      8
      ·
      4 hours ago

      People got mad at me for pointing out this is the case when people die because they listen to an AI chatbot, but it’s true. AI 100% needs more regulation, but introduce any new tool to everyone all at once, and some idiots will use it to remove themselves from the gene pool. If you sent everyone in the world a thin, 2in rod of inert iron, there would be a handful of people who would figure out a way to kill themselves with it.

  • Zozano@aussie.zone
    link
    fedilink
    English
    arrow-up
    13
    arrow-down
    5
    ·
    8 hours ago

    Holy fucking outrage machine.

    Are you guys seriously pissed off that an LLM said “I’m not a doctor, I will not suggest dosage amounts of a potentially deadly drug. However, if you want me, I can give you the link for the DDWIWDD music video”

    • Jesus_666@lemmy.world
      link
      fedilink
      arrow-up
      22
      ·
      7 hours ago

      I think it’s a bit more than that. A known failure mode of LLMs is that in a long enough conversation about a topic, eventually the guardrails against that topic start to lose out against the overarching directive to be a sycophant. This kinda smells like that.

      We don’t have many informations here but it’s possible that the LLM had already been worn down to the point of giving passively encouraging answers. My takeaway is once more that LLMs as used today are unreliable, badly engineered, and not actually ready to market.

    • Rothe@piefed.social
      link
      fedilink
      English
      arrow-up
      2
      ·
      6 hours ago

      Who are you directing your comment at? I am not reading anybody commenting anything resembling the straw man you describe in your comment.