• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: July 6th, 2023

  • I’m from the Americas, but not a crazy ass gringo. I’m 32, engaged, got a good job, a good group of friends, don’t struggle too much in life, and everything’s good. School and high school still legitimately terrify me, I get nightmares over it, and I actually got snipped to avoid even the chance of having to put someone through that shit all over again…among a couple of other reasons.


  • If we can’t say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal undermining of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we’re developing on a so-far unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don’t know if we’re a few steps away from massive AI breakthroughs, and we don’t know if we already have pieces of algorithms that closely resemble our brains’ own. Our experience of reality could very well be broken down into simple inputs and outputs of an algorithmic infinite loop; it’s our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we’ve been down this road with animals before, claiming they don’t have souls or aren’t conscious beings, that somehow because they don’t clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), theirs is an inferior or less valid existence.

    You’re describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly due to cost and hardware constraints, not algorithmic ones. On the subject of change, it’s already incredibly taxing to train a model, so of course continuous, uninterrupted training to more closely mimic our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it’s meant as an insult.

    I’m not saying LLMs are alive, and they clearly don’t experience reality the way we do, but to say there’s no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations…is kind of stupid. My point is, intelligence might be hard to define, but it might not be as hard to crack algorithmically if it’s an emergent property, and enforcing this “intelligence” separation only hinders our ability to properly recognize whether we’re on the right path to achieving a completely artificial being that can experience reality. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn’t let our hubris cloud that judgment.


  • What I never understood about this argument is…why we are fighting over whether something that speaks like us, knows more than us, bullshits and gets shit wrong like us, loses its mind like us, and sometimes seemingly seeks self-preservation like us isn’t enough to fit the very self-explanatory term “artificial…intelligence”. That name does not claim the entity experiences the world the way other living beings do, it does not proclaim absolute excellence in everything the entity does, and it doesn’t even say what kind of intelligence it is. It simply says something has an intelligence of some sort, and that it’s artificial. We’ve had AI in games for decades; it’s not sci-fi AI, but it’s still code taking in multiple inputs and producing behavior as an outcome of those inputs alongside whatever historical data it may have. That fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken input and produce an output. They bullshit all the time and don’t know when they’re lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they’re repeating came from, or that it’s even a factoid, so why is it so crazy when the machine does it?

    I keep hearing the word “anthropomorphize” thrown around a lot, as if we can’t be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special, certainly not special enough to grant us a whole degree of separation from other beings and entities, and that maybe we should instead bring ourselves down to the same domain as the rest of reality. The cold hard truth is, we don’t know that consciousness isn’t just an emergent property of various different large models working together to present a cohesive image. If it is, would that be so bad? Hell, we don’t even really know if we have free will or if we live in a superdeterministic world, where every single particle moves along a path predetermined since the very beginning of everything. What makes us think we’re so much better than other beings, to the point where we decide whether their existence is even recognizable?


  • I saw a brilliant explanation some time ago that I’m about to butcher back into a terrible one, bear with me:

    Think about 2 particles traveling together. When one gets tugged, it in turn tugs the other one with it. This tug takes some time, since one particle essentially “tells” the other particle to come with it, meaning there’s some level of information exchange happening between these two particles, and that exchange happens at the speed of light. Think about the travel distance between these two particles: it would be pretty linear and pretty short, so you essentially don’t notice this effect since it’s so fast.

    Now think about what happens when those 2 particles start going faster. The information exchange still happens, and it still happens at the speed of light, but now that the particles are moving faster in some direction, the exchange would seem to still go linearly from particle A to particle B, when in reality it travels “diagonally”, since it has to cover the extra distance added by the particles moving in that direction. This is the crucial part: what happens when those particles start getting closer to the speed of light? Well, the information exchange has to cover the very small distance between the particles, plus the added distance from traveling close to the speed of light. At first it’s pretty easy to cover this distance, but eventually you’re having to cover the entire distance light travels in a given moment, PLUS the distance between the two particles, which…can’t happen, since nothing can go faster than that speed.

    That’s essentially why you can never reach the speed of light, and why the more massive an object, the less speed it can achieve: all those particles have to communicate with each other, and that takes longer and longer the closer to the speed of light the whole object moves.

    See, this also perfectly explains what you’re asking: from the frame of reference of the particles, the information appears to travel in a straight line to them, so time acts normally for them. From an external perspective, though, that information moves along a diagonal vector, taking a long time to reach the other particle since it has to cover the distance from near-light-speed motion in one direction, plus the distance between the two particles in another, for a total vector distance that is enormous rather than negligible. At some point, you never see the information reach the other particle; in other words, time for that whole object has slowed to a near halt. This explains why time feels normal for the party traveling fast: they can’t know they’re slowed down, since the information exchange is essentially the telling of time, but the external observer sees that slowdown happen, and in fact gets a compounded effect, since the particles also communicate their state to the observer at the speed of light, and the distance between the observer and the particles keeps changing.

    This also explains why the particles might be able to also see everything around them happening a lot faster than it should: not only is it taking them longer to get updates about themselves between themselves, but they’re also running into the information from everything around them pretty fast, essentially receiving information from external sources faster than they do from themselves, thus causing this effect of seeing everything happening faster and faster, until it seems to all happen at once at the speed of light.
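    For what it’s worth, the picture above is essentially the classic light-clock argument, and you can sketch the math behind it in a few lines. Here Δt′ is the time the particles themselves measure for the exchange, Δt is the time the outside observer measures, v is the pair’s speed, and c is the speed of light:

    ```latex
    % Light-clock sketch of the "diagonal" information path.
    % In the particles' frame, the signal crosses the gap L in time \Delta t' = L/c.
    % For the outside observer, the pair moves at speed v, so the signal path is
    % the hypotenuse of a right triangle: one leg L = c\,\Delta t', the other v\,\Delta t.
    \[
      (c\,\Delta t)^2 = (v\,\Delta t)^2 + (c\,\Delta t')^2
      \quad\Longrightarrow\quad
      \Delta t = \frac{\Delta t'}{\sqrt{1 - v^2/c^2}}
    \]
    % As v approaches c, the denominator approaches 0: the external observer sees
    % the exchange take arbitrarily long, which is the "time slows to a near halt"
    % effect described above.
    ```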

    Here’s the guy who made it all click for me, since I’m pretty sure I tangled more than one of you up with this long read: https://youtu.be/Vitf8YaVXhc


  • At least learn a little bit about the technology you’re criticizing, such as the difference between fission (aka not fusion) and fusion (aka…fusion), before going on a rant about how it’ll never work.

    None of the reactors are being built with output capture in mind at the moment, because output capture is trivial compared to actually having an output, let alone an output that’s greater than the input and can be sustained. As you’ve clearly learned in this thread, we’re already past having an output, we’re still testing ways to get an output greater than the input, with at least one reactor having done so, and we still need to tackle the sustained-output part, which you can see actively progressing in real time. Getting the energy out is the same as it’s always been: putting steam through a turbine.

    Fission is what nuclear reactors do. It has been used all over the world, it’s being phased out by tons of countries due to people’s ignorance of the technology as well as fearmongering from parties with a vested interest in seeing nuclear fail, it’s still safer than any other energy generation method, and it would realistically solve our short-term issues alongside renewables while we figure out fusion…but as I said, stupid, ignorant people keep talking shit about it and getting it shut down…remind you of anyone?



  • I’m calling out your streaming counterpoint: in the beginning, there was Netflix. It had almost everything from almost all studios, didn’t care about password sharing, and was easily very affordable, even more so if you split costs between everyone sharing accounts. The best part? No ads. The content kept getting better, the show formats kept getting more accessible.

    It was clearly more convenient for everyone to just have Netflix, even more convenient than piracy, but now? Every studio, every company, they all veered away from Netflix and decided to create their own services. Then the price wars started, then the crackdowns on password sharing, and the ad-supported tiers, and then they started canceling shit, good shit, in order to claim them as losses in their tax declarations. And then we all lost, because now we can’t find most content in a single place, we have to endure ads if we want to save money, and we cannot even use some services while traveling since there are limits to devices linked to the accounts. Oh and that show you liked? David Zaslav wanted a bonus this year, so it got shelved even though it was a huge success. It’s no longer convenient to use streaming services, at least not as convenient as it used to be.

    You know what’s convenient now? Piracy, through Plex, Jellyfin, and Emby, all with automations, all easily shareable between friends. That’s what I’m doing now, friends chip in when more storage space is needed, or when some additional service is needed. It’s more work for the more tech-oriented of us, but hell if it isn’t fun to just sail the high seas, giving the finger to these companies, while giving friends a good experience.