

Did you read any of what I wrote? I didn’t say that human interactions can’t be transactional, I quite clearly—at least I think—said that LLMs are not even transactional.
Don’t besmirch the oldest profession by likening it to a soulless vacuum. It’s not even a transaction! The AI gains nothing and gives nothing. It’s alienation in its purest form (no wonder the rent-seekers love it), and it’s the ugliest and least faithful mirror.
✨The Vibe✨ is indeed getting increasingly depressing at work.
It’s also killing my parents’ freelance translation business. There is still money in live interpreting, in prestige work, and in stuff where highly technical accuracy very obviously matters, but a lot of it is drying up.
A glorious snippet:
The movement connected to attracted the attention of the founder culture of Silicon Valley and leading to many shared cultural shibboleths and obsessions, especially optimism about the ability of intelligent capitalists and technocrats to create widespread prosperity.
At first I was confused about what kind of moron would try using “shibboleth” positively, but it turns out it’s just terribly misquoting a citation:
Rationalist culture — and its cultural shibboleths and obsessions — became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.
Also lol at insisting on “exonym” as the descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the clear critical intent behind the term; it doesn’t really even make sense to use the acronym unless you’re doing critical analysis of the movement(s). (Also removed: the mentions of the especially strong overlap between EA and rationalists.)
It’s a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.
Special bootlicking points:
Source: xcancel.com
@PITLORDMOSH: weirdly dev-hostile take for a company blog
@tqbf (The author of the blogpost): I tried to post it on my personal blog and Kurt wouldn’t let me.
For reference, Kurt is the CEO of the company the author works for: https://archive.md/Z2xvg
Not high on the list of thought crimes, but a particular ick for me:
Also: 100% of all the Bash code you should author ever again
Why the bash hate?
Oh no! I wasted my time on a troll. Typical.
Hard disagree. As much as I loathe JK Rowling’s political ideas, and the at-times unnecessary cruelty found in the HP novels, the series still shaped a large part of the imaginary world of a generation. As beautiful as birdsong is (who the hell refers to birdsong as “output”?), the two simply cannot be compared.
Yes, commercial, for-profit, shareholder-driven, lackadaisical “art” is already an insult to life and creativity, but a fully or mostly automated slop machine is an infinitely worse one.
Even in the sloppiest of art I have watched, the humanity still shines through: people still made choices, even when subjected to crazy, uninspired diktats from above. The hands that fashion books, movies, music, video games, and TV shows still have (must have) room to bring a given vision together.
I think people DO care.
I don’t know exactly what you wanted to say, whether you wanted to express despair, cynicism, nihilism, or something else, but I would encourage you not to give up hope in humanity. People aren’t that stupid, people aren’t that devoid of meaning.
The standout monuments of stupidity (and/or monstrosity) in McCarthy’s response, for me, are:
Rekindled a desire to maybe try my own blog ^^.
I think beyond “Keeping up appearances” it’s also the stereotype of fascists (and by extension LLM lovers) having trouble, or pretending to have trouble, distinguishing the signifier from the signified.
Seriously though, can I trust dotnet ever again?
Infinite-garbage-maze does seem more appealing than “proof-of-work” as a countermeasure (the crypto parentage of the latter is yuckish enough ^^), though I would understand if some, say for example a UN organization, would not feel comfortable with direct sabotage.
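For anyone curious what that looks like in practice, here is a minimal, purely illustrative sketch of the “infinite garbage maze” idea (in the spirit of the tarpit tools people have built): every URL serves deterministic nonsense that links to yet more nonsense, so a scraper never runs out of pages. Everything here (word list, paths, port) is made up for the example.

```python
# Toy sketch of an "infinite garbage maze" anti-scraper tarpit:
# every page is procedurally generated nonsense that links to more
# procedurally generated nonsense, so a crawler never runs out of URLs.
import hashlib
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "vacuum", "mirror", "slop"]

def page_for(path: str) -> bytes:
    # Seed the RNG from the path so the same URL always yields the same page.
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    text = " ".join(rng.choice(WORDS) for _ in range(200))
    links = "".join(
        f'<a href="/maze/{rng.getrandbits(32):x}">more</a> ' for _ in range(5)
    )
    return f"<html><body><p>{text}</p>{links}</body></html>".encode()

class MazeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = page_for(self.path)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), MazeHandler).serve_forever()
```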
I feel the C-suite executives are pushing AI way harder than they ever pushed crypto though: they never understood that tech beyond a speculative asset, but the idea of replacing work-hours with AI automation has been sold HARD to them.
I guess the type of lawyer who does this is the same type who offloads research to paralegals without properly valuing it as real work, and who somehow believes it can be substituted by AI. Maybe they never engage their braincells and just view lawyering as a performative dance to appease the legal gods?
This is beyond horrifying:
I don’t know whether I should be glad this wasn’t shown to a jury, or sad we don’t get an obvious mistrial setting some kind of precedent against this kind of demented ventriloquism act, which indirectly asks for a maximum sentence through what should be completely inadmissible character testimony.
Does anyone here know how ‘appeals on sentencing’ compare to ‘appeals on verdicts’? Obviously judges should have some leeway, but do they have enough leeway to say (in court) that they were moved, for example, by what a spirit medium said or whatnot? Is there some jurisprudence there?
I can only hope that the video played an insignificant role in the judge’s decision, and that it was some deranged, post hoc, emotional, waxing-‘poetic’ moment for the judge.
Yuck.
It’s also such a bad description, since, going by their own post, the Bot+LLM they were using was almost certainly feeding itself data found by a search engine.
That’s like saying: no, I didn’t give the amoral PI any private information, I merely gave them a name to investigate!
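To make that analogy concrete, here is a hypothetical sketch of how a search-augmented bot of that kind typically works; `search_web` and `ask_llm` are made-up stand-ins, not anyone’s actual API. The point is that you only hand over a name, and the system fetches the rest itself.

```python
# Hypothetical sketch of a search-augmented bot: you only pass it a name,
# but the loop pulls whatever a search engine knows about that name and
# feeds it straight into the model's prompt.
def search_web(query: str) -> list[str]:
    """Stand-in for some real search API; returns text snippets about the query."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to some hosted LLM."""
    raise NotImplementedError

def profile_person(name: str) -> str:
    snippets = search_web(name)        # the "private information" arrives here
    context = "\n".join(snippets)
    prompt = f"Summarize everything known about {name}:\n{context}"
    return ask_llm(prompt)             # "I merely gave them a name to investigate!"
```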
EDIT: Also lol at this part of the original disclaimer:
An expert in LLMs who has been working in the field since the 1990s reviewed our process.
Pre-commitment is such a silly concept, and also a cultish justification for not changing course.
What’s pernicious (for kool-aided people) is that the initial Roko post was about a “good” AI doing the punishing, since ✨obviously✨ it would only resort to temporal blackmail because bringing AI into being sooner benefits humanity.
In singularitarian land, they think the singularity is inevitable, and that it’s important the good one gets created; after all, an evil AI could do the torture for shits and giggles, not because of “pragmatic” blackmail.
Thank god for wikipedia and other wikis, may they live long and prosper.
But code that doesn’t crash isn’t necessarily code that works. And even for code made by humans, we sometimes do find out the hard way, and it can sometimes impact an arbitrarily large number of people.
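A trivial illustration of that point (my own toy example, not from anything linked here): the function below runs without crashing and yet is simply wrong.

```python
# Never raises, always returns a number, and is still broken:
# the off-by-one in the divisor silently skews every result.
def average(values: list[float]) -> float:
    total = 0.0
    for v in values:
        total += v
    return total / (len(values) + 1)  # bug: should divide by len(values)

print(average([2.0, 4.0, 6.0]))  # prints 3.0; the correct average is 4.0
```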