

AI would be fine. we do not have artificial intelligence. full stop. none of the technologies being talked about even approaches intelligence. it’s literally just autocorrect. do you know how the autocorrect on your phone’s software keyboard works? then you know how a large language model works. it’s exactly the same formula, just scaled up and recursed a bunch. I could have endless debates about what ‘intelligence’ is, and I don’t know that there’s a single position I would commit to very hard, but I know, dead certain, that it is not this. turing and minsky agreed when they first threw this garbage away in 1951: too many hazards, too few benefits, and insane, unreasonable costs.
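for what it’s worth, the mechanism being gestured at here, predict the most likely next word from counts of what usually follows, really can be sketched in a few lines. this is a toy bigram model for illustration only; the corpus and the function names are made up, and no actual product works from a table this small:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent continuation seen in training data."""
    candidates = model.get(word.lower())
    if not candidates:
        return None  # never saw this word: no suggestion
    return candidates.most_common(1)[0][0]

# tiny made-up corpus: "the" is followed by "cat" twice and "mat" once,
# so the model will always suggest "cat" after "the"
corpus = "the cat sat on the mat and the cat ate"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat"
```

the "scaled up" part of the comparison is that a large language model replaces this count table with a trained network conditioned on the whole preceding context, but the task, emit the likeliest next token, is the same.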
but there’s more to it than that. large (whatever) models are inherently politically conservative. they are made of the past. they do not struggle, they do not innovate, and they do not integrate new concepts, because they don’t integrate any concepts at all; they just pattern match. you cannot have social progress when decisions are made by large (whatever) models. you cannot have new insights. you cannot have better policies. you cannot improve. you can only cleave closer and closer to the past, and reinforce it by feeding it its own decisions.
it could perhaps be argued, in a society that had once been perfect and was still doing pretty well, that this would be tolerable in some sectors, as long as someone kept an eye on it. right now we’re a smouldering sacrifice zone of a society. that means any training data would be toxic horror, or toxic horror THAT IS ON FIRE. this is bad. these systems are bad. anyone who advocates for these systems outside extremely niche uses that probably all belong in a lab is a bad person.
and I think, if that isn’t your enemy, your priorities are deeply fucked, to the point you belong in a padded room or a pine box.
not even that. it’s an inherently more regressive version of whatever data that person feeds it.
there are two arguments for deploying this shit outside of very narrow laboratory uses (where everyone was already using other statistical models anyway):
A. this is one last grasp at fukuyama’s ‘end of history’, one last desperate scream of the liberal order that they want to be regressive shit heads and build the abdication machine as their grand industrial-philosophical project, so they can do whatever horrible shit they want, and claim that they’re still compassionate and only doing it because computer said so.
B. this is a project by literal monarchists. people who wish to kill democracy. to murder truth and collaboration; replace it with blind tribalistic loyalty to a fuhrer/king. the rhetoric coming from a lot of the funders of these things supports this.
this technology is existentially evil, and will be the end of our society either way. it must be stopped. the people who work on it must be stopped. the people who fund it must be hanged.