• 0 Posts
  • 85 Comments
Joined 2 years ago
Cake day: July 19th, 2023






  • I feel like this won’t stop anyone who was already refusing to use a Microsoft account for Windows. Anyone who was already bypassing the account requirement will still do so; it will just be more difficult. They’ve accomplished nothing except further pissing off some of their most competent users.


  • Definitely agree. Most printers are sold at a loss with the plan to milk the buyer long term through ink and other services. EcoTank printers are more expensive, but Epson makes their money at the time of purchase. The ink is extremely cheap, and there’s no way for them to tell if you use 3rd party ink at all. We’ve been printing out textbooks with ours, which would be financially disastrous with a traditional inkjet printer.

    Overall I’ve bought two, one for home and one for the office at work. The cheaper ink has paid for the printers several times over now.


  • I wasn’t sure from the title if it was “Nearly half of U.S. adults believe LLMs are smarter than [the US adults] are.” or “Nearly half of U.S. adults believe LLMs are smarter than [the LLMs actually] are.” It’s the former, although you could probably argue the latter is true too.

    Either way, I’m not surprised that people rate LLMs’ intelligence highly. They obviously have a limited scope in what they can do, and hallucinating false info is a serious issue, but you can ask them a lot of questions that your typical person couldn’t answer and get a decent answer. I feel like they’re generally good at meeting people’s expectations of a “smart person”, even if they have major shortcomings in other areas.






  • Fubarberry@sopuli.xyz to Technology@lemmy.world · Why I am not impressed by A.I.
    8 months ago

    I asked mistral/brave AI and got this response:

    How Many Rs in Strawberry

    The word “strawberry” contains three "r"s. This simple question has highlighted a limitation in large language models (LLMs), such as GPT-4 and Claude, which often incorrectly count the number of "r"s as two. The error stems from the way these models process text through a process called tokenization, where text is broken down into smaller units called tokens. These tokens do not always correspond directly to individual letters, leading to errors in counting specific letters within words.
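    The counting failure is easy to reproduce from the text side. Below is a minimal sketch (assuming the `tiktoken` package and its `cl100k_base` encoding, which are not mentioned in the quoted answer) showing that a word is split into multi-character tokens rather than letters, while a plain character count trivially gets the right answer:

    ```python
    # Sketch: tokenization vs. character counting (assumes tiktoken is installed)
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")

    # The word becomes a handful of multi-character tokens (e.g. "str", "aw", "berry"),
    # so no token corresponds to an individual "r" the model could count directly.
    print([enc.decode([t]) for t in tokens])

    # A direct character count, which the model never performs, gives the right answer.
    print("strawberry".count("r"))  # 3
    ```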