

well obviously it won’t, that’s why you need ethical output restrictions
In case you haven’t seen it, the paper is here - https://machinelearning.apple.com/research/illusion-of-thinking (PDF linked on the left).
The puzzles the researchers have chosen are spatial and logical reasoning puzzles - so certainly not the natural domain of LLMs. Unfortunately the paper doesn’t give a clear definition of reasoning; I’d surmise it as “analysing a scenario and extracting rules that allow you to achieve a desired outcome”.
They also don’t provide the prompts they use - not even for the cases where they say they provide the algorithm in the prompt, which makes that aspect less convincing to me.
What I did find noteworthy was how the models were able to provide around 100 correct steps for larger Tower of Hanoi problems, but only 4 or 5 correct steps for larger River Crossing problems. I think the River Crossing problem is like the one where a boatman wants to get a fox, a chicken and a bag of rice across a river, but can only take two in his boat at one time? In any case, the researchers suggest this could be because there are plenty of published examples of Tower of Hanoi with larger numbers of disks, but far fewer examples of River Crossing with many more than the usual number of items being ferried across. They take this as more evidence that LLMs (and LRMs) are merely recalling examples they’ve seen, rather than genuinely working the puzzles out.
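To put those step counts in perspective: the textbook recursive solution to Tower of Hanoi takes 2^n - 1 moves for n disks, so around 100 correct steps only gets you to about 7 disks. A rough sketch of that recursion (my own illustration, not code from the paper):

```c
#include <stdio.h>

/* Textbook recursive Tower of Hanoi. Moving n disks always takes 2^n - 1
 * moves, so a 7-disk puzzle already needs 127 correct moves in a row. */
static void hanoi(int n, char from, char to, char via, long *moves)
{
    if (n == 0)
        return;
    hanoi(n - 1, from, via, to, moves);   /* shift the n-1 smaller disks out of the way */
    printf("move disk %d: %c -> %c\n", n, from, to);
    (*moves)++;
    hanoi(n - 1, from, to, via, moves);   /* stack them back on top of the big one */
}

int main(void)
{
    long moves = 0;
    hanoi(7, 'A', 'C', 'B', &moves);
    printf("total moves: %ld\n", moves);  /* prints 127, i.e. 2^7 - 1 */
    return 0;
}
```

Each bigger puzzle is just the same pattern repeated, which is perhaps also why long Hanoi traces come more easily than even a few River Crossing steps.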
I think it’s an easy mistake to confuse sentience and intelligence. It happens in Hollywood all the time - “Skynet began learning at a geometric rate, on July 23 2004 it became self-aware” yadda yadda
But that’s not how sentience works. We don’t have to be as intelligent as Skynet supposedly was in order to be sentient. We don’t start our lives as unthinking robots, and then one day - once we’ve finally got a handle on calculus or a deep enough understanding of the causes of the fall of the Roman empire - we suddenly blink into consciousness. On the contrary, even the stupidest humans are accepted as being sentient. Even a young child, not yet able to walk or do anything more than vomit on their parents’ new sofa, is considered a conscious individual.
So there is no reason to think that AI - whenever it should be achieved, if ever - will be conscious any more than the dumb computers that precede it.
You must be able to see that giving your daughter your mother’s name as a middle name is not at all the same as giving your son your own name?
Vanity, isn’t it? Pathetic male vanity. Never hear women doing it, do you?
a sign of utter desperation on the human’s part.
Yes, it seems to be the same underlying issue that leads some people to throw money at OnlyFans streamers and the like: a complete starvation of personal contact that leads people to willingly live in a fantasy world.
It was also a true AI wasn’t it? It ran locally and was never turned off, so conversations with it were private and it continued to “exist” and develop by itself.
I think they mean that ARM became dominant by widely licensing its RISC architecture to pretty much anyone. This startup wants to make RISC-V designs and license them to various chip manufacturers - so they won’t be in the business of making chips themselves, just the designs.
But as long as they are RISC-V chips, they would run the same software as any other RISC-V chip.
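As a rough illustration of what I mean (the toolchain name and the assembly are just my assumptions, nothing from the article): you compile once for the RISC-V ISA and the binary doesn’t care whose silicon it runs on.

```c
/* Trivial example: built with a RISC-V cross-compiler, e.g.
 *   riscv64-linux-gnu-gcc -O2 add.c -o add
 * the resulting binary targets the ISA spec rather than any particular
 * vendor's chip, so any conforming RV64 Linux board should run it unchanged. */
#include <stdio.h>

int add(int a, int b)
{
    /* with the standard calling convention this compiles down to
     * something like: addw a0, a0, a1 ; ret */
    return a + b;
}

int main(void)
{
    printf("%d\n", add(2, 3));
    return 0;
}
```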
Would that be a risk? Isn’t the whole point of RISC-V that its ISA is open and free to use? That’s not the case for ARM or Intel’s x86 architecture.
Yes, we can assume that is indeed what they think, but it’s not actually what they said with the sentence “This wouldn’t hold up in modern court let alone Victorian age court”. So perhaps they accidentally used incorrect phrasing, but even so, the logic doesn’t follow - if something doesn’t hold up in a modern-day court, that tells us nothing about whether it would hold up in Victorian times, when standards of evidence were indeed lower.
Yes you’re right, sorry, I went off on a tangent about the reasons for the intense negativity in the Lemmyverse about LLMs. I’ve been using Lemmy for four years, and I definitely don’t think there have ever been any positive feelings towards LLMs here, especially as ChatGPT’s arrival predates the first big surge of users on Lemmy (and the subsequent appearance of all the instances we see today). On reddit, yes, and there are still many people there who think OpenAI is great.
I think it’s another example of “internet bubbles” - people with similar views tend to congregate together, and this is particularly true on the internet, where going elsewhere is always just a mouse-click away.
When ChatGPT first launched, Lemmy was still pretty much a ghost town, and it did cause a lot of optimistic excitement e.g. on reddit. Lemmy got a big surge in numbers when reddit did its infamous API changes - enshittification driven by spez’s and other reddit executives’ insatiable lust to exploit the site for more and more money.
Perhaps for this reason, people on Lemmy are more averse to the enshittification trend and the generally exploitative nature of large tech companies. I think this is what people on Lemmy object to - tech companies’ concentration of power and profits by ripping off the general public - not so much the concept of LLMs themselves, but the fact that they could easily be used to further inequality in society.
It’s interesting isn’t it? “Guys” can include women, and can even be a group of only women, but you can’t talk about a single woman as a guy - “I snogged this gorgeous guy last night”.
You think if people who publish their work publicly didn’t research things like this, they would just never be discovered?
At least this way, we all know about the possibility, and further research can be done to see what can mitigate it.