Isn’t that the Walgreens W on the wall? Stop gaslighting us! /s
I didn’t mean that it’s just an evolutionary advantage for neurodivergent people. I mean, hell, I know several neurodivergent folks with the opposite problem of being unable to keep themselves from eating.
I meant that people in general might have the ability to tune out senses while on a hunt or escaping danger, prioritizing focus on the biggest danger or the biggest stressor. Since we’re always stressed nowadays, and the danger of starving isn’t likely to be as immediately detrimental as it used to be, some people’s bodies naturally tune down those urges to eat and drink.
And yeah, I used to hike and camp a lot, and when I did, I tended to feel hunger and thirst more often. I tend to feel calmer in general too. That seems to support my theory that it’s the constant stress of needing to be productive (and the stress of seeing the news and watching the government drag people from their homes) that contributes to the dulling of our urges to eat or drink.
Out on fire camps in Nevada and California, 113°F days will wreck you fast if you’re not downing water and Gatorade constantly. Good news: when you’re in the middle of nowhere and only need to do manual labor, there’s not much else to think about besides how beautiful the land is (before you get sick of it lol), and not much to distract you from your body’s indicators.
Anyway, I doubt it has much to do with “drinking coke and other crap.” Sure, if you get thirsty and the closest drink is always a Monster Energy, you’re probably not going to drink much else. But that’s not really the fault of the Monster Energy, is it?
Hell, I don’t really drink soda at all, but my sister and her husband both drink energy drinks multiple times a day and eat way more snack/junk food than I do, and I’d still be willing to bet they remember to drink more water than I do.
Lots of neurodivergent people don’t have signals as clear as neurotypical people do. Some ADHD people, like me, don’t get the urge to eat. Even before getting diagnosed and medicated, I only really knew it was time to eat if I started feeling shaky.
I also don’t typically feel thirsty, but eventually my mouth will get dry or I’ll see my water bottle and think, “ah yeah, I should probably drink something.”
I’d imagine lots of people have varying degrees of how strong their bodily urges are and how easily they can ignore them.
It also seems like it’d be evolutionarily advantageous for our ancestors to be able to tune out hunger and thirst when focused on a task. Since there’s always shit going on in the world and we’re constantly stressed about being “productive” thanks to capitalism, I don’t think it’s all that surprising that many people (even those who are otherwise neurotypical) are distracted from the urge to eat or drink.
AnarchoEngineer@lemmy.dbzer0.com to Lemmy Shitpost@lemmy.world • How Saturday night ended
15 points · 2 months ago
The real question is should you? And the answer is obviously yes
AnarchoEngineer@lemmy.dbzer0.com to Technology@lemmy.world • A Love Letter To Internet Relay Chat (English)
20 points · 2 months ago
Serial Experiments Lain, Layer 10: LOVE
As with every episode I’ve seen so far, it’s a confusing avant-garde mess, but with cassette-cyberpunk aesthetics (the best kind of cyberpunk aesthetics) so I guess that’s okay.
But that episode in particular is weird: people confessing psycho love for a protocol.
To be fair, I’m less weirded out by someone falling for a concept and much more weirded out by the fact that this protocol looks like, and thinks she is, an 11-year-old girl, and these creepy adult idiots are confessing their love to her. Seriously, what the fuck, Japan?
AnarchoEngineer@lemmy.dbzer0.com to Lemmy Shitpost@lemmy.world • I saw what you did there
29 points · 2 months ago
I don’t think the second bucket would be all that useful.
If the blade is cutting down into the wood like it’s supposed to, most likely the blood would either go down into the primary bucket or fly all the way around and start turning the walls and ceiling of the garage into a Jackson Pollock painting.
AnarchoEngineer@lemmy.dbzer0.com to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English)
5 points · 3 months ago
Thanks, I almost didn’t post because it was an essay of a comment lol, glad you found it insightful
As for Wolfram Alpha, I’m definitely not an expert but I’d guess the reason it was good at math was that it would simply translate your problem from natural language into commands that could be sent to a math engine that would do the actual calculation.
So it basically acts like a language translator, but from typed-out math into a programming language for some advanced calculation program (like Wolfram Mathematica).
Again, this is just speculation because I’m a bit too tired to look into it rn, but it seems plausible since we had basic language translators online back then (I think…) and I’d imagine parsing written math is probably easier than natural language translation
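To make that idea concrete, here’s a toy sketch of that kind of pipeline. SymPy stands in as the math engine and the phrase table is entirely made up; whatever Wolfram Alpha actually does under the hood is certainly far more sophisticated.

```python
# Toy "translate, then hand off to a math engine" pipeline.
# SymPy is a stand-in backend; the phrase table is invented for illustration.
import sympy as sp

REPLACEMENTS = {
    "plus": "+", "minus": "-", "times": "*", "divided by": "/",
    "squared": "**2", "to the power of": "**",
}

def to_expression(text: str) -> sp.Expr:
    """Crudely rewrite typed-out math into something SymPy can parse."""
    for phrase, symbol in REPLACEMENTS.items():
        text = text.replace(phrase, symbol)
    return sp.sympify(text)  # hand the cleaned-up string to the math engine

print(to_expression("3 squared plus 4 squared"))  # 25
print(to_expression("x squared minus 1"))         # x**2 - 1
```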
AnarchoEngineer@lemmy.dbzer0.com to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English)
281 points · 3 months ago
Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.
I loathe Python irrationally (and I guess I’m a masochist who likes to reinvent the wheel, programming-wise, lol), so I’ve written my own neural nets from scratch a few times.
Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and actual outcome to calculate a change in weights that would minimize that error.
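For the curious, here’s roughly what that update looks like for a single linear neuron. This is just a minimal sketch of gradient descent on squared error, not anything from a real framework.

```python
import numpy as np

# Toy gradient descent on one linear neuron: y_hat = w . x + b
# Squared error E = (y_hat - y)^2; the update follows the negative gradient.
rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
lr = 0.1

def step(x, y):
    global w, b
    y_hat = w @ x + b
    error = y_hat - y      # difference between actual and desired output
    w -= lr * error * x    # proportional to dE/dw (constant factor folded into lr)
    b -= lr * error
    return error ** 2

x, y = np.array([1.0, 2.0, -1.0]), 3.0
for _ in range(50):
    loss = step(x, y)
print(loss)  # error shrinks toward 0 as the weights converge
```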
This has two major preventative issues for AGI: input size limits, and determinism.
The weight matrices are set for a certain number of inputs. Unfortunately, you can’t just add a new unit of input and assume the weights will be nearly the same; instead, you have to retrain the entire network. (Look up transfer learning if you want to learn more about this problem.)
This input constraint is prohibitive for AGI because it means a network trained like this cannot take an input larger than a certain size. That’s a problem, since the illusion of memory that LLMs like ChatGPT have comes from the fact that they run the entire conversation through the net every time. It’s also a problem from a size and training-time perspective, because increasing the input size blows up compute and memory (for transformer-style models, the attention cost grows roughly quadratically with input length).
Point is, current models are only able to simulate memory by literally holding onto all the information and processing all of it for each new word which means there is a limit to its memory unless you retrain the entire net to know the answers you want. (And it’s slow af) Doesn’t sound like a mind to me…
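A crude illustration of that “memory” trick; the model call is a placeholder and the token counting is fake, but structurally this is all that’s happening:

```python
# Every turn, the ENTIRE history is glued back together and pushed through the net.
# Once it exceeds the context window, the oldest turns simply fall off.
CONTEXT_LIMIT = 4096  # "tokens"; made-up number for illustration

history = []

def chat(user_message: str, model) -> str:
    history.append(("user", user_message))
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    while len(prompt.split()) > CONTEXT_LIMIT:  # crude stand-in for token counting
        history.pop(0)                          # "forget" the oldest turn
        prompt = "\n".join(f"{role}: {text}" for role, text in history)
    reply = model(prompt)   # the net reprocesses everything, every single time
    history.append(("assistant", reply))
    return reply
```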
Now determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They literally are just a complicated predictive algorithm like linear regression. I’m dead serious. It’s basically regression just in a very high dimensional vector space.
ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.
All these models do is what they were trained to do. They were trained to predict human responses, so yeah, they sound pretty human. They were trained to reproduce answers from Stack Overflow and Reddit etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on, because those are similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying numbers that were set during training to find the most likely next word.
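Stripped down to the bone, the generation loop is basically this. The vocabulary, weights, and “embed” step here are stand-ins; real models run stacks of attention layers where the fake encoder sits.

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]  # stand-in vocabulary
rng = np.random.default_rng(1)
W = rng.normal(size=(len(vocab), 8))             # stand-in "trained" weights

def embed(tokens):
    """Fake 'encode the whole context' step; real models do attention here."""
    return rng.normal(size=8)

def next_word(tokens):
    logits = W @ embed(tokens)                    # just matrix multiplication
    probs = np.exp(logits) / np.exp(logits).sum() # softmax over the vocabulary
    return vocab[int(np.argmax(probs))]           # pick the most likely word

sentence = ["the", "cat"]
for _ in range(4):
    sentence.append(next_word(sentence))          # append, then re-feed everything
print(" ".join(sentence))
```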
This is why LLMs can’t do math: they don’t actually see the numbers, and they don’t know what numbers are. They don’t know anything at all, because they’re incapable of thought. Instead, there are simply patterns in which certain numbers show up, and the model gets trained on some of them. But you can get it to make incredibly simple math mistakes by phrasing the math slightly differently, or just by surrounding it with different words, because the model was never trained for that scenario.
Models can only “know” as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just tell it “you were wrong,” because the model can’t change itself from inputs alone. You have to train it with the correct response in mind to get it to “learn,” which again takes time and really isn’t learning or intelligence at all.
Now, there are some more exotic neural network architectures that could surpass these limitations.
Currently I’m experimenting with Spiking Neural Nets (SNNs), which are much more capable of transfer learning and more closely model biological neurons, along with other cool features like handling temporal changes in input well.
However, there are significant obstacles with these networks and not as much research, because they only run well on specialized hardware (they’re meant to mimic biological neurons, which all run in parallel), and you kind of have to train them slowly.
You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).
SNNs with time based learning rules (typically some form of STDP which mimics Hebbian learning as per biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in “this could have discrete time dependent waves of continuous self modifying spike patterns which could theoretically be thoughts” not as in “we can make something that thinks.”
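For a feel of what a pair-based STDP rule looks like, here’s a bare-bones sketch. The amplitudes and time constant are illustrative textbook-style values, not anything from a specific model or my own work.

```python
import math

# Pair-based STDP: if the presynaptic spike precedes the postsynaptic one,
# strengthen the synapse; if it arrives after, weaken it. Closer in time = bigger change.
A_PLUS, A_MINUS = 0.01, 0.012  # learning-rate-like amplitudes (illustrative)
TAU = 20.0                     # time constant in ms (illustrative)

def stdp_dw(t_pre: float, t_post: float) -> float:
    dt = t_post - t_pre
    if dt > 0:                              # pre fired before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU)
    else:                                   # pre fired after post -> depression
        return -A_MINUS * math.exp(dt / TAU)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)  # causal pairing: weight goes up
w += stdp_dw(t_pre=30.0, t_post=22.0)  # anti-causal pairing: weight goes down
print(round(w, 4))
```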
Like these neural nets are good with sensory input and that’s about as far as we’ve gotten (hyperbole but not by that much). But these networks are still fascinating, and they do help us test theories about how the human brain works so eventually maybe we’ll make a real intelligent being with them, but that day isn’t even on the horizon currently
In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.
The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me, I’m trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move toward AGI territory.
Lying to say we are close to AGI when we aren’t at all close, however, is economically favorable which is why you get headlines like this.
AnarchoEngineer@lemmy.dbzer0.com to Lemmy Shitpost@lemmy.world • You can do it. It's an easy one
8 points · 4 months ago
“I ate sigma pie and it was delicious!” Sounds like something that’d show up on my university’s YikYak, alluding to eating out a sorority chick from Sigma Pi lol
Idk if that’s a legitimate sorority, but I know that regardless of the sorority mentioned someone would reply something like “wait till you try a pi phi 😜” and/or someone would say you’re going to get an STD from that particular sorority
AnarchoEngineer@lemmy.dbzer0.com to Lemmy Shitpost@lemmy.world • Who remembers alt.fan.tonya.harding.whack.whack.whack ?
5 points · 4 months ago
Lloyd Braun, I just wanted serenity
But you had to go testin’ me, gave me suicide tendencies
Wait didn’t Reddit have its own .onion domain? I suppose I shouldn’t be surprised by the hypocrisy at this point lol