He / They

  • 11 Posts
  • 635 Comments
Joined 1 year ago
Cake day: June 16th, 2023


  • Not friendly enough when talking to customers? Bad employee.

    Too friendly when talking to customers? Bad employee.

    This is just about 1) creating an algorithmic justification for the racial profiling that managers already do, and 2) keeping employees in fear of termination so they put up with bullshit.

    Side story about how shitty retail management is:

    When I was working retail years ago (big box electronics store), our management made a practice of getting every new employee to three write-ups as fast as possible (I’m talking within a month of starting), using literally any excuse, so they could hold the “one more write-up and you’re fired” over their head.

    “AI” is definitely going to become a new tool for employee suppression.


  • There is no such thing as a form of media that is only applicable to a specific scale of use. Long form and short form media are useful to both large and small groups.

    For example, my partner coaches high school policy debate, which has long form video training content, short form content (30 seconds to 5 minutes) like clips from tournament rounds or practices used for recruitment, and very short form clips (1 to 30 seconds) that are mostly memes.

    Their shorter form content is explicitly meant not to go viral; it’s purely for their school and other kids in their debate league. Most of it isn’t even parsable by non-debaters. It’s only useful to their small community, but that’s what they want.


  • I actually kind of love the idea of a per diem Unknown User Limit. Like, the first 5,000 unregistered users each day can view the site, but after that they get dropped at ingress. Also, limit user signups per day (this ain’t about growing the user base, it’s about preventing virality)!

    Sure, you’d still need an ingress server that can handle a high load to avoid an accidental DDoS if word of mouth takes off, but dropping connections at ingress is orders of magnitude less demanding than serving a full web page or app to the same number of users.
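
    For illustration, here’s a minimal sketch of that kind of per-day guest limit, assuming a single ingress process; the names (DAILY_GUEST_LIMIT, allow_request) are hypothetical, not from any real gateway:

    ```python
    # Per-day guest limit: count unregistered requests and drop anything
    # over the cap before doing any expensive rendering work.
    from datetime import date, datetime, timezone

    DAILY_GUEST_LIMIT = 5_000  # unregistered views allowed per day
    _guest_count = 0
    _window: date = datetime.now(timezone.utc).date()

    def allow_request(is_registered: bool) -> bool:
        """Return True to serve the request, False to drop it at ingress."""
        global _guest_count, _window
        today = datetime.now(timezone.utc).date()
        if today != _window:  # new day: reset the counter
            _window, _guest_count = today, 0
        if is_registered:  # logged-in users are never limited
            return True
        if _guest_count >= DAILY_GUEST_LIMIT:
            return False  # drop cheaply, before any rendering
        _guest_count += 1
        return True
    ```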


  • “you’re failing to see the biases inherent to the content you’re consuming”

    You are underestimating people, I think. People choose their echo chambers because they understand that their positions are being challenged elsewhere. It’s not an inability to see the bias in what they consume, it’s a dislike of the alternative.

    Every Trumper I talk to knows very well that Trump is unpopular, that Christian Nationalism is unpopular, that abortion rights are popular, etc, but they don’t care, and they don’t want to constantly be (rightfully) told and shown how dumb they are, so they wall themselves off in their gardens. “I’m just tired of hearing how bad Trump is all the time.”


  • Media literacy was never the problem. It wasn’t actual confusion about what was real that drew people to the extreme alt-right sphere; it was confirmation bias that let people choose not to critically assess the content for veracity.

    But I don’t think you can solve this through “media ecology” either. Curating this volume of content is impossible, and there are legitimate dangers in giving the government too much ability to shut down free speech (see Germany condemning any form of pro-Palestinian rhetoric as antisemitic) in order to guard “truth”.

    I think that this is similar to the issue of biased GenAI; you can’t fix bias at the human-output side, you have to build a society that doesn’t want to engage with bigotry, and explore and question its own assumptions (and that’s not ever a fixed state, it’s an ongoing process).


  • I think you are confused about the delineation between local and federal governments. It’s not all one giant pool of tax money. None of Santa Clara County’s budget goes to missiles.

    Also, this feels like you’re too capitalism-pilled: rather than just spending the $240 to do this work and using the remaining $49,999,760 to fund free college or UBI programs, you’re like, “how about we pay these people to do the most mind-numbingly, soul-crushingly boring work there is, reading old legal documents?”

    You know what would actually happen if you did that? People would seriously read through them for one day, and then they’d be like, “clear”, “clear”, “clear” without looking at half of them. It’s not like you’re going to find and fund another group to review the first group’s work, after all. So you’d still be where we are now, but you’d also have wasted all those people’s time that they could have spent doing literally anything else.


  • Products of a bigoted society go in, bigoted products come out.

    In that regard, developers and decision makers would benefit from centering users’ social identities in their process, and acknowledging that these AI tools and their uses are highly context-dependent. They should also try to enhance their understanding of how these tools might be deployed in a way that is culturally responsive.

    You can’t correct for bias at the ass-end of a mathematical algorithm. Generative AI is just caricaturing our own society back at us; it’s a fun-house mirror that makes our own biases jump out. If they want a model that doesn’t produce bigoted outputs, they’re going to have to fix their inputs.


  • I think you may have misunderstood the purpose of this tool.

    It doesn’t read the deeds, make a decision, and submit them for termination all on its own. It reads them, identifies racial covenants based on patterns of language (which is exactly what LLMs are very good at), and then flags them for a human to review.

    This tool is not replacing jobs, because the whole point is that these reviews were never going to get the budget and manpower to be done manually, and instead would have simply remained on the books.

    I get being disdainful or even angry about LLMs in our unregulated-capitalism, anti-worker hellhole because of the way most companies are using them, but tools aren’t themselves good or bad; they’re just tools. And using a tool to identify racial covenants in legal documents that would otherwise go unremediated seems like a pretty good use to me.
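
    To make that flow concrete, here’s a minimal sketch of the flag-then-review pipeline described above. This is not the Stanford team’s actual code; ask_llm is a stand-in for whatever model call they used, and the prompt and names are hypothetical:

    ```python
    # LLM-assisted covenant screen: the model only flags candidate deeds;
    # a human reviewer always makes the final call.
    from dataclasses import dataclass

    @dataclass
    class Flag:
        deed_id: str
        excerpt: str  # the covenant language the model quoted

    def ask_llm(prompt: str) -> str:
        """Stand-in for a real LLM call (any provider/model)."""
        raise NotImplementedError

    def scan_deed(deed_id: str, text: str) -> Flag | None:
        """Flag a deed for human review; never alters the record itself."""
        answer = ask_llm(
            "Does the following property deed contain a racially restrictive "
            "covenant? Answer YES or NO, then quote the relevant passage.\n\n"
            + text
        )
        if answer.strip().upper().startswith("YES"):
            excerpt = answer.split("\n", 1)[-1].strip()
            return Flag(deed_id, excerpt)  # goes into the human review queue
        return None
    ```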


  • Santa Clara County alone has 24 million property records, but the study team focused mostly on 5.2 million records from the period 1902 to 1980. The artificial intelligence model completed its review of those records in six days for $258, according to the Stanford study. A manual review would have taken five years at a cost of more than $1.4 million, the study estimated.

    This is an awesome use of an LLM. Talk about the cost savings of automation, especially when the alternative was the reviews just not getting done.
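
    As a quick back-of-the-envelope check on those figures (all numbers from the quoted article):

    ```python
    # Per-record cost comparison using the article's numbers.
    records = 5_200_000
    ai_cost, manual_cost = 258, 1_400_000

    print(f"AI: ${ai_cost / records:.6f} per record")          # ≈ $0.000050
    print(f"Manual: ${manual_cost / records:.4f} per record")  # ≈ $0.2692
    print(f"Cost ratio: {manual_cost / ai_cost:,.0f}x")        # ≈ 5,426x
    ```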


  • This doesn’t reflect how that works right now, though, nor how AGPL would affect most corporations.

    You listed 2 companies (Cisco and Google) that maintain their own forked Linux versions (IOS and Android). Neither of those is a server OS to begin with; they’re router and mobile phone OSes, and AGPL’s additional requirement beyond GPL only kicks in when users interact with the modified software over a network.

    The hundreds of thousands of other companies don’t even touch the kernel and would not be affected. Moving it to AGPL would not change the landscape at all.