I’m a lonely smut writer in Portugal! Feel free to say hello! :3

  • 0 Posts
  • 26 Comments
Joined 5 months ago
Cake day: November 4th, 2025

  • I think I’m following what you mean. To me, though, (using your house analogy) it isn’t that your ex has a key, it’s that the government is demanding that your door remain open. Sure, it’s already off the hinges, but it’s a whole lot easier to put a door back on than to fight the government about it. It’s not currently illegal to protect your data through extreme measures, but this is the beginning of laws that make it illegal. That is why this is worth fighting over to me. What’s more, I can hate and fight against more than one thing, so it’s not a huge issue to be against this.

    And sure, all this data is out there, but that isn’t true for future generations. Old data becomes stale. It just seems like such a defeatist attitude to me to cede ground on this, especially when the laws you mentioned actually being worried about would use this as precedent. It’s certainly easier to argue for an ID requirement when you have the data on millions of users lying about their age and use it as justification for a more controlled implementation.

    But either way, I think I need to step away here. I feel like I understand you, I just disagree, and continuing beyond this without doing more reading on the topic, laws, and trends won’t really help, I think (the last I saw for the New York law was that determining what constituted an adequate attempt to verify age fell to the AG, who seemed to be leaning towards third-party verification. I’m already out of date with developments there).


  • I… didn’t say that? Not sure if you replied to the wrong person? But I’ll try to respond to what I can?

    Oh whoops, if I did, my bad. That’s what I was understanding your comment about “it’s literally the same check we already have” to be. You’re saying there are already age checks for certain sites (and analysis of your web traffic and associated data being sold) and that this is no different, if I understand correctly. It is worth pointing out that while the California law requires no verification, the New York law potentially requires more than just a declaration of age. It’s worse elsewhere in the world.

    All of that is the same thing. It is about building profiles…

    Right, but you see how this is also a bad thing, right? Given that the FBI has now spoken about buying this data and uses it to target people, I would think that we would all want stronger privacy protections, not weaker ones.

    1. This is not exclusive to the US.

    I don’t see how that should sway opinion about this being a good or a bad thing. It’s a bad thing for everyone, right?

    1. I never said this is “the first step towards something worse”.

    No, I am saying that. I was saying that calling this a slippery slope doesn’t feel like it is based in the history of privacy erosion. I’d love to learn more about the original sin in all of this, but just because it isn’t the first step doesn’t mean we shouldn’t fight against consolidated, government-mandated privacy violations, right?

    Yes? I am sorry that protecting your privacy takes effort? I am sure that if you pay a random sponsor on an LTT video that they’ll claim to do everything for you? Like… I really don’t know what to tell you?

    I think you’re misunderstanding me. I’m not complaining that it’s difficult. I’m asking why we don’t try and just fix the problem instead of letting something like this slide by because there are other, similar issues.


  • Can I ask you to explain your point, “age doesn’t matter, your digital footprint carries over?” You mention solutions to protect yourself from the digital footprint carry over, but this law would just make it easier to overcome those solutions.

    Now instead of having to figure out the various unique patterns of accessing the internet to determine info about you, you just tell them your age (or that you’re an adult, whatever) on those systems directly.

    I also think it’s a bit disingenuous to call ‘this is the first step towards something worse’ a slippery slope when that is exactly how the creeping erosion of privacy has gone in the US historically, but especially the last few decades.

    You acknowledge that a lot of people don’t fully understand how to protect themselves (and offer solutions that require more money, time, and education to accomplish), and in the same breath argue that this is why it’s okay to make data collection easier.

    I know this probably comes across as accusatory, but I really don’t mean it that way. I’m genuinely trying to understand what your perspective is.




  • There was an interview I saw recently with Amodei where he said that Anthropic isn’t categorically against autonomous weapons, only that they don’t think the systems are ready, seemingly implying they would make mistakes similar to how LLMs hallucinate. A lot of the media coverage around them has implied that they hold a higher ethical standard than the others, and I mean… maybe? I guess it could be argued that wanting to minimize collateral damage is more ethical, but regardless, I think it’s important to keep perspective when we see how they act in the coming weeks and months.



  • For your first question, what you’re describing is a problem with education and staffing, not a problem with the tool itself. I’m not suggesting you keep around ‘one old man who hates AI’; my pitch is that you bar the use of AI for the human-level checks.

    For your second, yes I saw the part about how news and media are representing AI in healthcare, but I don’t really see how news or media are relevant here. Could you explain this a bit for me?

    I don’t intend to gloss over the issues with Generative AI/LLMs, I tried to be specific in my separation of ML from them in my original comment where I said LLMs in their public facing version (ChatGPT, Claude, whatever) aren’t very useful.

    The original comment I replied to asked “is “AI” even useful (etc)” but also mentioned LLMs. I was trying to make the point that LLMs aren’t the only type of AI and that others can be employed to great effect. If that was unclear, that’s my bad but that was my intention.

    The reason I don’t want to engage with a hypothetical is because I could just as easily counter with “what if it diagnoses at a 100% success rate? What if fear of losing skills results in doctors never wanting to use AI, resulting in more deaths?” Neither hypothetical argument is really very helpful for the discussion. I promise you I’ve thought about this a lot (but again, I’m not an expert, nor am I in the field), but more importantly I have friends finishing doctorates in the bioinformatics field whom I get some insight from, and I’m, at least at this point, convinced of the benefits.


  • I read both articles you linked, but I’m not really seeing how they support your point. The first article seemed to support the idea that healthcare staff would welcome more seamless, user-friendly AI tools in the field and the second discussed biases within tools they selected for cancer diagnoses and a tool they used to reduce those biases. Am I misunderstanding what you’re saying somewhere?

    Also, with regard to the reduction in diagnostic accuracy of diagnosticians with AI, I would need to see the specific article to be sure, but if it’s the one that was posted across reddit a few months back, I read through that one as well. It seemed to agree with a similar article about students writing papers with and without the use of ChatGPT (group A writes with it, group B writes without it, and afterwards they are asked to both write without the LLM. Group B’s essay was shown to be better. This is a hugely reductive description of the experiment, but gets the idea across). Again, it makes sense that if you use a tool to facilitate an action, that tool is replacing that skill and you get “rusty”. It does not mean that the existence of a tool would reduce skill in those who do not use it, though. My suggestion of using it as a screening tool wouldn’t affect the diagnostician’s skill unless they also used it, which sorta defeats the purpose of them being a human check on the process, post-screening flag.

    I can’t speak to your other points as they’re hypothetical. Obviously, I wouldn’t advocate for an inaccurate tool that causes an already overworked field to take on more work. I’m only suggesting that ML is a tool that has use-cases and can be used to supplement current processes to improve outcomes. They can, and are, being improved constantly. If they’re employed thoughtfully, I just think they can be a huge benefit.


  • Regarding the doctor’s signature thing, it seems a bit premature to say a single flawed study invalidates the entire field and technology, especially when the tech was working as intended in that case and the problem was user error in the study.

    And of course, like any tool it should be utilized thoughtfully. Any form of technology directly takes away from the skill previously used to get results. Flint and steel took away from the skill of rubbing sticks together. The combustion engine took away from many different professional skills.

    Consider that, in this case, we don’t have to replace diagnosis outright but could augment it instead. What if every hospital around the world could augment regular medical care with a single machine processing results? Every single check-up could include a quick cancer screening. If the machine flags you as ‘at risk’, a doctor could then see you for human diagnosis and validation. The skill of diagnosis is still needed and utilized, but now everyone can have regular screening instead of overwhelming an already overtaxed healthcare system.

    Again, all I’m saying is that there are practical, useful use-cases for the technology, they’re just not what we are doing with them.

    Edit: as an afterthought, I’m no expert here. As far as I understand, LLMs are a type of ML, but ML encompasses a way broader category of ‘AI’. I’m mostly against LLMs for general use as they are currently. I am advocating for ML as a whole, with thoughtful application.