• 0 Posts
  • 33 Comments
Joined 3 years ago
Cake day: August 4th, 2023



  • More like, large corporations with no investment in local communities are now empowered to run roughshod over local governance processes. They’re actually more likely to pay people to stall slow approval processes so they can take advantage of this law and start building, especially when the permit would likely have been denied because it didn’t account for easements, fire or flood risks, building codes and local regulatory standards, or any manner of other things. So this actually increases the likelihood of bribes, ensures that corporations pay less to your local government and more to the personal pockets of those being bribed, and simultaneously makes the buildouts less safe and less compliant, with greater risks to the local community. Basically a lose-lose for local folks and a win-win for a giant corporation.

    A better version of these flawed tactics would have been for failure to meet timelines to open the project to a public vote, and for every project to require a public option (e.g., a government-supplied bid on the infrastructure) to compete. That way, if the timeline expires, the project isn’t automatically awarded to people who have a vested interest in letting it expire at the community’s expense; it could be awarded to a local municipal project instead.






  • Bookshop.org just recently added ebooks, and I believe they have a UK store, for anyone trying to buy ebooks in a more ethical way. It lets you select a local bookstore of your choosing and support them when you purchase books. They take a small fee to cover their warehousing and shipping, I think, but pass along a lot of the profit (80%) to the local bookstore. They’re a certified B Corp, and their bylaws say they can’t sell to a major retailer (e.g., Amazon).


  • Maybe. I’d say it’s more corporate for Sonos to develop yet another closed wireless audio sync protocol just to force users to sign in through their app so they can scrape your data. In the absence of a true open wireless sync protocol (maybe there is one and I’m unaware, in which case I’d like to be educated!), I’d rather they use a more widely adopted protocol than roll their own.

    Edit: I think maybe I misunderstood the comment I replied to and they were agreeing with this statement in general.






  • I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but look like things that were (that is, they can probabilistically interpolate between what they’ve seen in training and what you prompted them with, which is why prompting can be so important). Chain of thought is essentially automated prompt engineering; if the model has seen a similar process (e.g., in an online help forum or study materials), it can emulate that process with different keywords and phrases. The models themselves, however, cannot perform “A is B, therefore B is A,” arguably the cornerstone of symbolic reasoning. This is in part because they have no state model or true grounding, only the probability of observing a token given some context. So even with chain of thought, the model is not reasoning; it’s doing very fancy interpolation over the words and phrases in the initial prompt to generate a prompt that will probably yield a better answer, not because of reasoning but because of a stochastic process.
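To make the “A is B, therefore B is A” point concrete, here’s a deliberately crude toy sketch (not a real language model; the names and the `complete` helper are made up for illustration). A system that only memorizes the forward phrasing of a fact has nothing to retrieve for the reversed query, which mirrors the failure described above:

```python
# Toy illustration, NOT an actual LLM: facts are "learned" only in the
# forward direction, the way they appeared in the "training" text.
forward_facts = {
    ("Valentina", "mother_of"): "Olaf",  # seen as "Valentina is the mother of Olaf"
}

def complete(subject, relation):
    # Look up the exact pattern that was "seen"; no symbolic inversion
    # ever happens, so the reverse question finds nothing.
    return forward_facts.get((subject, relation), "unknown")

print(complete("Valentina", "mother_of"))  # forward direction: "Olaf"
print(complete("Olaf", "child_of"))        # reversed query: "unknown"
```

Real models are interpolating over continuous probabilities rather than doing dictionary lookups, but the asymmetry is the same: the reversed statement was never a high-probability continuation in training, so it isn’t derivable at inference time.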




  • While this is true, algorithmic feeds virtually guarantee that echo chambers already exist within a platform. Fascists won’t leave YouTube because they feel it’s “too woke” or offers varying viewpoints; they’ll leave because the people they already watch there tell them to go to the other service. So I think it’s possible Elon attracts the fascists, destroys YouTube’s ability to monetize that part of its algorithm, and YouTube consequently has to improve service for everyone else to ensure other fringe echo chambers don’t follow suit.


  • They don’t, but with quantization and distillation, as well as clever use of fast SSD storage (they published a paper on exactly this topic last year), you can get a really decent model to work on device. People are already doing this with models like OpenHermes and Mistral (granted, those are 7B models, but I could easily see Apple doubling RAM, optimizing models with the research I mentioned above, and getting 40B models running entirely locally). If the first stage of the pipeline is good, a 40B model could handle the vast majority of Siri queries without ever reaching out to a server.

    For what it’s worth, according to their WWDC notes, this is basically what they’re trying to do.
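The back-of-the-envelope math behind the 40B claim is just parameter count times bits per weight. This sketch covers weights only (ignoring KV cache and activations), and the RAM comparisons in the comments are my assumptions, not published specs:

```python
def model_memory_gb(params_b, bits_per_weight):
    # params_b: parameter count in billions.
    # Weights only; real deployments also need room for KV cache,
    # activations, and the OS.
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

print(model_memory_gb(40, 16))  # 80.0 GB at fp16: far beyond any phone's RAM
print(model_memory_gb(40, 4))   # 20.0 GB at 4-bit: conceivable with more RAM plus flash offload
print(model_memory_gb(7, 4))    # 3.5 GB: why 7B quantized models already run on-device
```

This is why quantization plus SSD offload matters: a 4-bit 40B model is a quarter the size of its fp16 version, and streaming rarely-used weights from fast flash shrinks the resident working set further.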