

How is the definition of theft determined? Typically the definition is determined by the government. Why would the government define its own funding source as theft?


Small correction, but this bit isn’t quite correct:
If you go just below light speed, you’ll see the world outside go past like it’s being fast forwarded, and when you return, 8 years will have been compressed into something that seems much shorter to you.
While you are at a constant velocity just below light speed, clocks that are “stationary” will appear to run slow to you, and clocks moving with you will appear to run slow to a “stationary” observer. As I mentioned in a comment on another reply, the trip would feel short to you because the distance to your destination would contract to nearly zero. “Fast forwarding” (i.e., having both you and a stationary observer agree that more time has passed on the stationary observer’s clock) would happen during the periods of acceleration and deceleration at the beginning and end of the trip.
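To put rough numbers on it, here’s a quick back-of-the-envelope sketch (the 0.99c speed and 8-light-year distance are illustrative picks of mine, and it ignores the acceleration and deceleration phases):

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.99        # speed as a fraction of c (illustrative)
distance_ly = 8.0  # Earth-frame distance to the destination, in light-years

gamma = lorentz_gamma(beta)            # ~7.09 at 0.99c
earth_years = distance_ly / beta       # time elapsed in the Earth frame
contracted_ly = distance_ly / gamma    # distance in the traveler's frame
traveler_years = contracted_ly / beta  # time elapsed for the traveler

print(f"gamma = {gamma:.2f}")
print(f"Earth frame:    {earth_years:.2f} years to cover {distance_ly:.1f} ly")
print(f"Traveler frame: {traveler_years:.2f} years to cover {contracted_ly:.2f} ly")
```

Both frames agree the traveler ages a bit over a year; they just disagree about whether that’s because the clock ran slow or because the distance was short.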


Just want to add that what the person on the ship observes is length contraction. When their ship is at near light speed, the distance to their destination contracts to nearly zero (because it is moving at near light speed relative to them); this is why the trip seems short to them.


If I ran the zoo, then any AI that trained on intellectual property as if it were public domain would automatically become public domain itself.


I installed linux on my PC a couple months ago. The other day I wanted to log back into my windows partition for the first time in a while in order to clean up some of the files on that partition (even though the drive is mounted in linux, the windows “fast boot” option apparently leaves it in a state that linux considers read-only). Windows apparently wouldn’t let me log in without a microsoft account, instead of just using my regular windows username.
So yeah, that partition’s gone now. No going back!



Cherry-picking a couple of points I want to respond to together:
It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems that we know of has multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory that we see in biological systems.
It is also purely linguistic analysis, without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead end towards an AGI.
I have trouble with this line of reasoning for a couple of reasons. First, it feels overly simplistic to me to write off what LLMs do as purely linguistic analysis. Language is certainly the input and the output, but the same could be said of communicating with a person over email, and I don’t think you’d say that that person wasn’t sentient. And the way that LLMs embed tokens into multidimensional space is, I think, very much analogous to how a person interprets the ideas behind the words that they read.
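To gesture at what I mean by that last point, here’s a toy sketch with hand-picked 3-d vectors (real models learn thousands of dimensions from data; nothing here is any actual model’s internals):

```python
import numpy as np

# Toy, hand-picked "embeddings" -- purely illustrative.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "apple": np.array([0.1, 0.2, 0.5]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words with related meanings sit near each other in the space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower
```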
As a component of a system, it becomes much more promising.
It sounds to me like you’re more strict about what you’d consider to be “the LLM” than I am; I tend to think of the whole system as the LLM. I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone’s brain is sentient.
Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience.
I’m not sure how to make a philosophical distinction between an amnesiac person with a sufficiently developed psyche, and an LLM with a sufficiently trained model. For now, at least, it just seems that the LLMs are not sufficiently complex to pass scrutiny compared to a person.


LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology
Do you have an example I could check out? I’m curious how a study would show a process to be “fundamentally incapable” in this way.
LLMs do not synthesize. They do not have persistent context.
That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider that “conversations” with an LLM really just include all previous parts of the conversation in each new prompt (see the sketch below). Isn’t that analogous to short-term memory? Now suppose you were to take all of an LLM’s conversations throughout the day and retrain it overnight using those conversations as additional training data. There’s no technical reason this can’t be done, although in practice it’s computationally expensive. Would you consider that LLM system to have persistent context?
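A minimal sketch of the “the conversation is just a growing prompt” idea (generate() here is a hypothetical stand-in, not any real API):

```python
# Hypothetical sketch: each "turn" re-sends the entire history to the model.
history: list[str] = []

def generate(prompt: str) -> str:
    # Stand-in for a real model call; echoes context size to stay runnable.
    return f"(reply conditioned on {len(prompt)} chars of context)"

def chat_turn(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)  # the whole conversation so far, every time
    reply = generate(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat_turn("Hello!"))
print(chat_turn("What did I just say?"))  # "memory" is only the re-sent text
```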
On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?


lol, yeah, I guess the Socratic method is pretty widely frowned upon. My bad. =D


I don’t think it’s just a question of whether AGI can exist. I think AGI is possible, but I don’t think current LLMs can be considered sentient. But I’m also not sure how I’d draw a line between something that is sentient and something that isn’t (or something that “writes” rather than “generates”). That’s kinda why I asked in the first place. I think it’s too easy to say “this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it’s not real thought”. I haven’t heard any good answers as to why numbers passing through matrices isn’t thought, but electrical charges passing through neurons is.


Sure, I’m not entitled to anything. And I appreciate your original reply. I’m just saying that your subsequent comments have been useless and condescending. If you didn’t have time to discuss further then… you could have just not replied.


“You’re wrong, but I’m just too busy to say why!”
Still useless.


I’m a software developer, and have worked plenty with LLMs. If you don’t want to address the content of my post, then fine. But “go research” is a pretty useless answer. An LLM could do better!


The only humans with no training (in this sense) are babies. So no, they can’t.


So, I will grant that right now humans are better writers than LLMs. And fundamentally, I don’t think the way that LLMs work right now is capable of mimicking actual human writing, especially as the complexity of the topic increases. But I have trouble with some of these kinds of distinctions.
So, not to be pedantic, but:
AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.
Couldn’t you say the same thing about a person? A person couldn’t write something without having learned to read first, and without having read things similar to what they want to write.
LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent.
This is kind of the classic Chinese room philosophical question, though, right? Can you prove to someone that you are intelligent, and that you think? As LLMs improve and become better at sounding like a real, thinking person, does there come a point at which we’d say that the LLM is actually thinking? And if you say no, that the LLM is just an algorithm generating probabilities from its training data (see the toy example below) or whatever techniques might be used in the future, how can you show that your own thoughts aren’t just some algorithm, formed out of neurons that have been trained on data passed to them over the course of your lifetime?
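Here’s what that “generating probabilities” step looks like in toy form (the numbers are made up; a real model computes the distribution with a huge trained network, but the final sampling step looks like this):

```python
import random

# Toy next-token distribution for a fixed prefix -- invented numbers.
next_token_probs = {
    "mat": 0.6,
    "roof": 0.3,
    "moon": 0.1,
}

prefix = "The cat sat on the"
token = random.choices(
    list(next_token_probs), weights=list(next_token_probs.values())
)[0]
print(prefix, token)
```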
And when they start hallucinating, it’s because they don’t understand how they sound…
People do this too, though… It’s just that LLMs do it more frequently right now.
I guess I’m a bit wary about drawing a line in the sand between what humans do and what LLMs do. As I see it, the difference is how good the results are.


What’s the difference?


I think it’s logically consistent to say “It was incorrect that I was not declared the winner of the election, and I should have served the corresponding term. But since the government did not recognize my election victory and I did not serve the term, I am still eligible to serve another term”. I think it’s inconsistent to say that Trump was elected for the purposes of the 22nd amendment, but was not elected for the purposes of serving the term.
(Please don’t mistake me though; although I think Trump’s position in this particular matter is logically self-consistent, it is not consistent with reality. He lost that election.)


Some problems get harder as the numbers get bigger. Take breaking a number into its factors: the bigger the number, the harder it is to find them. Contrast this with, say, telling whether a number is even, which is easy even for very, very large numbers.
There is a measure of how quickly problems get harder as the numbers grow, called polynomial time; this is the P in P, NP, etc. I’ll skip the details of what polynomial time means exactly, because if you can’t guess from the name, the details aren’t particularly important here. It’s just a benchmark for how quick or hard a problem is to solve (there’s a toy sketch of the idea below).
So for the various types of problems:
P is the class of problems that can be solved in polynomial time.
NP is the class of problems where a proposed answer can be checked in polynomial time, even if finding one is hard. Factoring is the classic example: finding a factor of a huge number is slow, but checking a proposed factor is a single division.
NP-complete problems are, loosely, the hardest problems in NP: a polynomial-time solution to any one of them would yield one for everything in NP.
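Here’s a rough Python sketch of that finding-versus-checking asymmetry (plain trial division, which real factoring algorithms improve on, but the shape of the problem is the same):

```python
def is_even(n: int) -> bool:
    # Easy no matter how big n gets: just look at the last digit/bit.
    return n % 2 == 0

def find_factor(n: int) -> int | None:
    # Finding a factor by trial division: the work grows with the size of n.
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return None  # no factor found: n is prime

def check_factor(n: int, f: int) -> bool:
    # Checking a proposed factor: a single division, cheap at any size.
    return 1 < f < n and n % f == 0

print(is_even(10**100))                        # instant, even at 101 digits
print(check_factor(999983 * 1000003, 999983))  # instant
print(find_factor(999983 * 1000003))           # ~a million loop iterations already
```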


Does Navidrome support Chromecast? I’ve had a hard time finding a self-hosted music solution that will actually cast. I do have a public-facing domain name with certs that, as far as I can tell, is working correctly.