

I switched to Niagara a few years back because Nova didn’t have good support for foldables, and tbh I haven’t looked back. It’s very different, but once you get used to it, it’s much faster than a traditional launcher.
Any legal precedent for this has to be a win, right?
That’s reasonable; I just wouldn’t have called my wife’s laptop my laptop I guess. It was either that or there was probably an interesting story behind it.
How many laptops do you own lol?
I am fairly certain SmartTubeNext was a rebrand by the same dev.
If you make it only Toonami ads, I’m in tho
DRM is already the primary purpose of trusted compute if you read shareholder meeting transcripts; security is a marketing side effect.
Yes they were, so I’m offering you a theory as to why this may actually be true, yet be difficult to “prove”.
Smoking was bad for your health long before anyone sat down and took the time to prove it. Autoregressive LLMs and their tokenizers are a very new field of computer science, and it’s going to take a while for the community to collectively understand everything we’re currently doing by trial and error.
Anecdotally, I use it a lot and I feel like my responses are better when I’m polite. I have a couple of theories as to why.
1. More tokens in the context window of your question, and a clear separator between ideas in a conversation, make it easier for the tokenizer at inference time to recognize disparate ideas.
2. Higher-quality datasets contain American boomer/millennial notions of “politeness”, and when your prompt is structured in kind, the response is more likely to draw tokens from those higher-quality datasets.
I haven’t mathematically proven any of this within the llama.cpp tokenizer, but I strongly suspect I could at least show a correlation between polite input tokens and output tokens drawn from those higher-quality datasets.
They are indeed just that keen on our data.
They know they can’t get rid of it for all of their customers, but they do want to make it as hard as possible for random users to do so.
The problem with this is it doesn’t work for home users who want to pay for their software. Crazy… I know… but those people do exist.
For people with “that one game” there is a middle ground. Mine is Destiny 2, which uses a version of Easy Anti-Cheat that refuses to run on Linux. My solution was to buy a $150 used Dell on eBay, add a $180 GPU to be able to output to my 4 high-res displays, and install Debian + Moonlight on it. I moved my gaming PC downstairs, and a combination of wake-on-LAN + Sunshine means that I can game at functionally native performance, streaming from the basement. In my setup, Windows only exists to play games on.
The added bonus here is now I can also stream games to my phone, or other “thin clients” in the house, saving me upgrade costs if I want to play something in the living room or upstairs. All you need is the bare minimum for native-framerate, native-res decoding, which you can find in just about anything made in the last 5-10 years.
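The wake-on-LAN half of this setup needs nothing fancy; the “magic packet” is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, broadcast over UDP. A minimal sketch (the MAC address shown is made up):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the 6-byte target MAC repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet over UDP; the sleeping machine's NIC wakes it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# send_wol("00:11:22:33:44:55")  # hypothetical MAC of the basement PC
```

Most distros also ship a `wakeonlan` or `etherwake` CLI that does the same thing; the point is there’s no daemon needed on the client side, just one UDP packet.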
“Open source” in ML is a really bad description for what it is. “Free binary with a bit of metadata” would be more accurate. The code used to create DeepSeek is not open source, nor are the training datasets. 99% of “open source” models are this way. The only interesting part of the open sourcing is the architecture used to run the models, as it lends a lot of insight into the training process, and allows for derivatives via post-training.
It’s a little deeper than that; a lot of advertising works on engagement-based heuristics. Today, most people would call it “AI”, but it’s fundamentally just a reinforcement learning network that trains itself constantly on user interactions. It’s difficult-to-impossible to determine why input X is associated with output Y, but we can measure in aggregate how subtle changes propagate across engagement metrics.
It is absolutely truthful to say we don’t know how a modern reinforcement learning network got to the state it’s in today, because transactions on the network usually aren’t journaled, just periodically snapshotted for A/B testing.
To be clear, that’s not an excuse for undesirable heuristic behavior. Somebody somewhere made the choice to do this, and they should be liable for the output of their code.
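To illustrate the “trains itself constantly on interactions, with no journal of how it got here” point, here’s a toy epsilon-greedy bandit, the simplest version of this family of systems. The ad names and click rates are invented; the final weights reflect thousands of interactions with no record of how any individual one moved them:

```python
import random

random.seed(0)  # deterministic for this demo

# Hypothetical ad variants with hidden true click-through rates.
# The learner never sees these; it only sees clicks.
true_click_rates = {"ad_a": 0.02, "ad_b": 0.05, "ad_c": 0.11}
estimates = {arm: 0.0 for arm in true_click_rates}
counts = {arm: 0 for arm in true_click_rates}
epsilon = 0.1  # fraction of traffic spent exploring

for _ in range(20_000):
    if random.random() < epsilon:
        arm = random.choice(list(estimates))       # explore a random variant
    else:
        arm = max(estimates, key=estimates.get)    # exploit current belief
    clicked = random.random() < true_click_rates[arm]  # simulated user
    counts[arm] += 1
    # Running-mean update: each interaction nudges the estimate and is
    # then thrown away — exactly the "not journaled" property above.
    estimates[arm] += (clicked - estimates[arm]) / counts[arm]

print(max(estimates, key=estimates.get))  # converges to the high-engagement ad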
I’ve always wondered why board partners didn’t just raise prices to scalper levels and take the $2200 profit per card sold themselves.
And tbh, it’s Nvidia’s fault that the partners don’t have enough dies; I’d much rather a partner take the margin than an unnecessary middleman.
Why?
There’s nothing preventing you from forking a Lemmy client or server to prototype this. Depending on how you implement the ActivityPub backend, you might be able to make it transparent to the user by presenting an algorithm as an array of cross-posts via a /c/ on a server.
Anything more might require forking a client, which might be easier to implement but harder to convince a large userbase to migrate to.
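One hypothetical shape for that /c/-as-algorithm idea: the server materializes the ranked feed as an ActivityPub OrderedCollection of Announce activities (the fediverse’s boost/cross-post verb), so a stock client just sees a community full of cross-posts. All URLs and the community name below are made up:

```python
import json

# Hypothetical output of some made-up ranking step: (post URL, score).
ranked_posts = [
    ("https://example.social/post/42", 0.93),
    ("https://example.social/post/17", 0.71),
]

# An ActivityStreams OrderedCollection: each item is an Announce by the
# fake "algo_hot" community actor pointing at an original post, which is
# how an unmodified client would render an ordinary cross-post.
outbox = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example.social/c/algo_hot/outbox",
    "type": "OrderedCollection",
    "totalItems": len(ranked_posts),
    "orderedItems": [
        {
            "type": "Announce",
            "actor": "https://example.social/c/algo_hot",
            "object": url,
        }
        for url, _score in sorted(ranked_posts, key=lambda p: -p[1])
    ],
}

print(json.dumps(outbox, indent=2))
```

The ranking logic lives entirely server-side behind that outbox, which is what would make it transparent to existing clients.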
I think you’ve correctly identified their self-interest over altruism, but you’ve misidentified the internal value of discouraging clickbait. YouTube is a treasure trove for building training datasets, and its value increases when metadata like thumbnails, descriptions, titles, and tags can be trusted.
It’s the AI gold rush; notice how this coincides with options to limit or disable third-party training but not first-party training? It coincides but is definitely not a coincidence.
Idk, this was kind of a rare combination of “write secure function; proceed to ignore secure function and rawdog strings instead” plus “it can be exploited by entering a string with a semicolon”. Neither of those is anywhere near as egregious as a use-after-free or buffer overflow. I get programming is hard, but like, yikes. It should have been caught on both ends.
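For anyone unfamiliar with the pattern: the “secure function” and the “rawdog strings” version usually sit one line apart. A generic sqlite3 sketch of both (table and payload are made up, not the actual bug in question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "x'; DROP TABLE users; --"  # the classic semicolon payload

# The "rawdog strings" version: attacker text is spliced into the SQL,
# so the quote and semicolon are interpreted as syntax, not data.
# (Never execute this form with untrusted input.)
unsafe_sql = f"SELECT * FROM users WHERE name = '{user_input}'"

# The "secure function" version: the driver binds the value, the whole
# payload stays inert data, and no extra statement ever reaches the parser.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] — no user is literally named after a DROP TABLE statement
```

The parameterized form is the same length as the f-string, which is what makes skipping it so baffling.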
Back in the day, when our community was switching from XMPP to Discord, our solution was to write a bot on either end that relayed messages from one to the other. The XMPP bot got more and more naggy over time until eventually we put the XMPP side in read-only for everyone except the relay bot. It did a good enough job of building momentum that the final holdouts came over when we went r/o.
You might consider building something similar if you want to make a genuine effort to switch to Matrix or IRC. A relay bot solves the problem of the first movers being punished by virtue of being first.
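The relay logic itself is trivial; the one gotcha worth knowing up front is loop prevention, since each side must ignore what the relay itself posted or the two bots echo forever. A toy in-memory model (real channels and APIs swapped for plain lists, names invented):

```python
BOT = "relaybot"  # the relay's own identity on both networks

class Channel:
    """Stand-in for one chat network's channel; real code would wrap
    an XMPP/Matrix/Discord client instead of a plain list."""
    def __init__(self, name: str):
        self.name = name
        self.log: list[tuple[str, str]] = []

    def post(self, author: str, text: str) -> None:
        self.log.append((author, text))

def relay(src: Channel, dst: Channel, seen: set) -> None:
    for i, (author, text) in enumerate(src.log):
        key = (src.name, i)
        if author == BOT or key in seen:
            continue  # skip our own posts and anything already relayed
        seen.add(key)
        dst.post(BOT, f"[{src.name}] {author}: {text}")

xmpp, discord = Channel("xmpp"), Channel("discord")
seen: set = set()
xmpp.post("alice", "anyone around?")
discord.post("bob", "yeah, over here")
relay(xmpp, discord, seen)
relay(discord, xmpp, seen)
relay(xmpp, discord, seen)  # idempotent: nothing new, no echo loop

print(discord.log)  # alice's message appears exactly once
```

Tagging relayed messages with the source network (the `[xmpp]` prefix) also keeps the conversation legible for the holdouts on either side.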