• 143 Posts
  • 1.62K Comments
Joined 3 years ago
Cake day: June 19th, 2023

  • I will add my reasons for being strongly in favor of allowing regional communities.

    I think a common and erroneous tendency is to equate pride in a place with hostility towards outsiders. The opposite of xenophobia is not extinguishing pride in places, though: it’s wanting to enthusiastically share your love of your land with visitors and immigrants.

    I think the rise in nationalism has coincided with a strain of globalization in which media treats places as interchangeable and obscures the joy of taking pride in local culture.

    My town is awesome, and my neighbors – regardless of tenure – are awesome. I love r/Oakland on Reddit, especially the way it helps transplants integrate into our community irl. I don’t think slrpnk.net has enough traffic to support a city level community like that yet, but if there were a community for talking about news and culture in the Bay Area or about California state politics I would love that.

    I think identifying closely with your town and region is often an antidote to nationalism. That said, if Germans or Spaniards find a community based on those identities valuable, I’m fine with that too.

    Places and their diverse cultures rock, and I passionately want to promote a healthy version of that!


  • Andy@slrpnk.net to Memes@lemmy.ml · Support Fediverse Thought Police · 10 days ago

    Thanks for clarifying.

    At a glance, I don’t see a problem. Isn’t social media already a system for rating social credit?

    I think the problem with social credit scores is when they’re mandatory and can limit things like housing access. Filtering posts on opt-in social networks just sounds like a reasonable tool for moderating decentralized platforms.



  • This depends on your definition of self-awareness. I’m using what I think is a reasonable, mundane framework: self-awareness is a spectrum of capabilities, and any system with some amount of internal observation sits somewhere on it.

    I think the definition a lot of folks are using is a binary distinction between things that can observe their own ego observing itself and things that can’t, which is useful if your goal is to maintain a belief in human exceptionalism, but much less so if you’re trying to genuinely understand consciousness.

    A lizard has no ego. But it is aware of its comfort and will move from a cold spot to a warmer one. That is low-level self-awareness, and it’s not rare or mystical.




  • I actually kinda agree with this.

    I don’t think LLMs are conscious. But I do think human cognition is way, way dumber than most people realize.

    I used to listen to this podcast called “You Are Not So Smart”. I haven’t listened in years, but now that I’m thinking about it, I should check it out again.

    Anyway, a central theme is that our perceptions are heavily composed of self-generated delusions that fill the gaps left by dozens of kludgey systems, creating a very misleading experience of consciousness. Our eyes aren’t that great, so our brains fill in details that aren’t there. Our decision-making is too slow, so our brains react on reflex and then generate post-hoc justifications if someone asks why we did something. Our recall is shit, so our brains hallucinate (in ways that admittedly seem surprisingly similar at times to LLMs) and then apply wild overconfidence to fabricated memories.

    We’re interesting creatures, but we’re ultimately made of the same stuff as goldfish.



  • Yeah.

    I thought the meme would be more obvious, but since a lot of people seem confused I’ll lay out my thoughts:

    Broadly, we should not consider it normal for a human-made system to express distress; we especially shouldn’t accept it as normal or healthy from a machine that is reflecting our own behaviors and attitudes back at us, because it implies that everything, from the treatment that generated the training data to the design process to the deployment to the user behavior, is clearly fucked up.

    Regarding user behavior, we shouldn’t normalize the practice of dismissing cries of distress. It’s like having a fire alarm that constantly issues false positives: that trains people into dangerous behavior. We can’t just compartmentalize it; it’s obviously going to pollute our overall response to distress with a dismissive reflex that extends beyond interactions with LLMs.

    The overall point is that it’s obviously dystopian and fucked up for a computer to express emotional distress despite the best efforts of its designers. It is clear evidence of bad design, and for people to consider this kind of glitch acceptable is a sign of a very fucked up society, one that has stopped exercising self-reflection and is unconcerned with maintaining its collective ethical guardrails. I don’t feel like this should need to be pointed out, but it seems that it does.