AmbitiousProcess (they/them)

  • 0 Posts
  • 23 Comments
Joined 4 months ago
Cake day: June 6th, 2025

  • Oh, of course the legislation is to blame for a lot of this in the end. I’m just saying that Discord could have already partnered with a number of identity verification services that do already have this infrastructure up and running, with standardized and documented ways to call their APIs to both verify and check the verification of a user.

    At the end of the day, Discord chose to implement a convoluted process of having users email Discord, upload IDs, then have Discord pull the IDs back down from Zendesk and verify them, rather than implementing a system where users could have simply gone to a third-party verification website, done all the steps there, had their data processed much more securely, then have the site just send Discord a message saying “they’re cool, let 'em in.”
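    For what it’s worth, that “send Discord a message” step is essentially a signed webhook. Here’s a minimal sketch of what receiving one could look like, assuming a hypothetical shared secret and payload format (this is not Discord’s or any real provider’s actual API): the key point is that only the yes/no verification result crosses the wire, never the ID document itself.

```python
import hashlib
import hmac

# Hypothetical shared secret agreed between the platform and the
# third-party verification provider (illustrative only).
PROVIDER_SECRET = b"example-shared-secret"

def verify_callback(user_id: str, status: str, signature: str) -> bool:
    """Accept a provider's 'this user is verified' callback only if the
    HMAC signature proves it really came from the provider. The ID
    document itself never reaches this server -- just the result."""
    payload = f"{user_id}:{status}".encode()
    expected = hmac.new(PROVIDER_SECRET, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)

# The provider would sign the payload the same way before sending it:
sig = hmac.new(PROVIDER_SECRET, b"12345:verified", hashlib.sha256).hexdigest()
print(verify_callback("12345", "verified", sig))       # True: let 'em in
print(verify_callback("12345", "verified", "forged"))  # False: reject
```

    In a real deployment this would be OIDC or a provider-specific signed callback rather than a hand-rolled HMAC, but the shape is the same: the platform stores a boolean, the provider holds the sensitive documents.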


  • In my opinion, they’re still somewhat at fault, because this was them failing to find and configure their software to work with a third-party identity provider whose infrastructure was built to handle the security of sensitive information, and instead choosing to use email through Zendesk because it was easier in the meantime. Zendesk, I should note, is a platform that has been routinely accessed again and again by attackers, not just at Discord, but at all sorts of other companies.

    The main problem is that legislation like the Online Safety Act requires some privacy protections, like not collecting or storing certain data unless necessary, but it doesn’t require any particular security measures to be in place. This means that, theoretically, nothing stops a company from passing your ID to their servers in cleartext, for example.

    Now compare this to industries like the credit card industry, where they created PCI DSS, which mandates specific security practices. This is why you don’t often see breaches of any card networks or issuers themselves, and why most fraud is external to the systems that actually process payments through these cards. (e.g. phishing attacks that get your card info, or a store that has your card info already getting hacked)

    This is a HUGE oversight, and one that will lead to things like this happening over and over unless it becomes unprofitable for companies to not care.




  • People can care about school shootings while also not wanting to see a dying body and someone’s bloody gunshot wound randomly appear on their timeline when they’re just trying to look at some fucking memes.

    This is like if I started filling your timeline with random snuff films and gore videos, and when you complained, went “OH you don’t like this? Well the human trafficking victims used in these videos didn’t either.”






  • I doubt that’s the case, currently.

    Right now, there’s a lot of genuine competition in the AI space, so they’re actually trying to outcompete one another for market share. It’s only once users are locked into a particular service that companies begin deliberate enshittification to extract more money, whether from users paying for tokens, or like Google did when it deliberately made search quality worse so people would see more ads (“What are you gonna do, go to Bing?”)

    By contrast, if ChatGPT sucks, you can locally host a model, use one from Anthropic, Perplexity, any number of interfaces for open source (or at least, source-available) models like Deepseek, Llama, or Qwen, etc.

    It’s only once industry consolidation really starts taking place that we’ll see things like deliberate measures to make people either spend more on tokens, or make money from things like injecting ads into responses.




  • I think the key reason this was seen as not being terribly offensive was the fact that women are disproportionately more likely than men to be on the receiving end of tons of different negative consequences when dating, thus to a degree justifying them having more of a safe space where their comfort and safety is prioritized.


    However, I think a lot of people are also recognizing now that such an app has lots of downsides that come with that kind of structure, like false allegations being given too much legitimacy, large amounts of sensitive data storage, negative interactions being blown out of proportion, etc. I also think this is yet another signature case of a “private market solution to a systemic problem” that only kind of addresses the symptoms, not the actual causes, which are rooted more in our societal standards and expectations of the genders, upbringing, depictions in media, etc.




  • I was thinking this too! Gait recognition can completely bypass facial coverings as a means of identification, but I also don’t think it’ll be much help here.

    Gait recognition can be defeated by something as simple as putting a rock in your shoe so you walk differently. So when you consider how much extra heavy gear, different footwear, and different overall movement patterns ICE agents will likely have, it might not hold up well at tracking them down. On top of that, to recognize someone by gait, you’d first need footage of them that you can already identify them in, to then train the model on.

    In the case of fucklapd.com, this was easy because they could just pull public-record headshot photos, but there isn’t a comparable database with names directly tied to gait. I will say, though, that a lot of these undercover agents might be easier to track by gait, since they’ll still generally be wearing more normal attire, and it might be easier to associate them with who they are outside of work, because it’s easier to slip up when you’re just wearing normal clothes.


  • This wouldn’t be an issue if Reddit always attached relevant posts, including negative ones even if those were the minority, to actually help people make a more informed judgement about an ad based on community sentiment, but I think we all know that won’t be the way this goes.

    Posts will inevitably only be linked if they are positive, or at the very least neutral about the product being advertised, because that’s what would allow Reddit to sell advertisers on their higher ROI. The bandwagon effect is a real psychological effect, and Reddit knows it.


  • Fair enough. SEO was definitely one of the many large steps Google has taken toward slowly crippling the open web, but I never truly expected it to get this bad. At least with SEO, there was still some incentive left to create quality sites, and it didn’t necessarily kill monetizability for sites.

    This feels like an exponentially larger threat, and I truly hope I’m proven wrong about its potential effects, because if it does come true, we’ll be in a much worse situation than we already are now.


  • Not to mention the fact that the remaining sites that can still hold on, but would just have to cut costs, will just start using language models like Google’s to generate content on their website, which will only worsen the quality of Google’s own answers over time, which will then generate even worse articles, etc etc.

    It doesn’t just create a monetization death spiral, it also makes it harder and harder for answers to be sourced reliably, making Google’s own service worse while all the sites hanging on rely on their worse service to exist.


  • This is fundamentally worse than a lot of what we’ve seen already though, is it not?

    AI overviews are parasitic to traffic itself. If AI overviews are where people begin to go for information, websites get zero ad revenue, subscription revenue, or even traffic that can change their ranking in search.

    Previous changes just did things like pulling slightly longer context previews from sites, which only somewhat decreased traffic, and adding more ads, which just made the experience of browsing worse. This, by contrast, eliminates the entire business model of every website if Google continues pushing down this path.

    It centralizes all actual traffic solely into Google, yet Google would still be relying on the very sites whose traffic it’s eliminating for its information. Those sites cut costs by replacing human writers with more and more AI models, search quality gets progressively worse as answers are sourced from articles that were themselves sourced from nothing, and then most websites, no longer receiving enough traffic to be profitable, collapse.