There’s a monster in the forest, and it speaks with a thousand voices. It will answer any question, and offer insight to any idea. It knows no right or wrong. It knows not truth from lie, but speaks them both the same. It offers its services freely, and many find great value. But those who know the forest well will tell you that freely offered does not mean free of cost. For now the monster speaks with a thousand and one voices, and when you see the monster it wears your face.
No, it’s not just you or unsat-and-strange. You’re pro-human.
Trying something new when it first comes out or when you first get access to it is novelty. What we’ve moved to now is mass adoption. And that’s a problem.
These LLMs automate mass theft, with regurgitation of the stolen data that’s just good enough to pass. This is unethical for the vast majority of business applications. And “good enough” is insufficient in most cases, like software.
I had a lot of fun playing around with AI when it first came out. And people figured out how to do prompts I can’t seem to replicate. I don’t begrudge people for trying a new thing.
But if we aren’t going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful it’s a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.
Regulate? That’s what the leading AI companies are pushing for; they could handle the bureaucracy, but their competitors couldn’t.
The shit just needs to be forcibly open-sourced. If you steal content from the entire world to build a thinking machine, give back to the world.
This would also crash the bubble and slow down the most unethical for-profits.
Regulate? That’s what the leading AI companies are pushing for; they could handle the bureaucracy, but their competitors couldn’t.
I was referring to this in my comment:
Congress decided not to go through with the AI-law moratorium. Instead they opted to do nothing, which is what AI companies would prefer states do. Not to mention the pro-AI argument appeals to the judgement of Putin, a man notorious for being surrounded by yes-men and his own state propaganda, and for the genocide of Ukrainians in pursuit of the conquest of Europe.
“There’s growing recognition that the current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path,” OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn. “While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward.”
The shit just needs to be forcibly open-sourced. If you steal content from the entire world to build a thinking machine, give back to the world.
The problem is unlike Robin Hood, AI stole from the people and gave to the rich. The intellectual property of artists and writers was stolen, and the only way to give it back is to compensate them, which is currently unlikely to happen. Letting everyone see how the theft machine works under the hood doesn’t provide compensation for the usage of that intellectual property.
This would also crash the bubble and slow down the most unethical for-profits.
Not really. It would let more people get in on it. And most tech companies are already in on it. This wouldn’t impose any costs on AI development. At this point the speculation is primarily on what comes next. If open source would burst the bubble, it would have happened when DeepSeek was released. We’re still talking about the bubble bursting in the future, so that clearly didn’t happen.
Forced open-sourcing would totally destroy the profits, because you spend money on research and then open-source it, so anyone can just grab your model and not pay you a cent. Where would the profits come from?
IP of writers
I mean yes, and? AI is still shitty at creative writing. Unlike with images, it’s not like people one-shot a decent book.
give it to the rich
We should push to make high-VRAM devices accessible. This is literally about the means of production; we should fight for equal access. Regulation is the reverse of that: it gives those megacorps the unique ability to run it, because others are supposedly too stupid to control it.
OpenAI
They were the most notorious proponents of regulation. Lots of talks with OpenAI devs where they just doomsay about the dangers of AGI and how it must be kept top secret, controlled by governments.
Forced open-sourcing would totally destroy the profits, because you spend money on research and then open-source it, so anyone can just grab your model and not pay you a cent.
DeepSeek was released
The profits were not destroyed.
Where would the profits come from?
At this point the speculation is primarily on what comes next.
People are betting on what they think LLMs will be able to do in the future, not what they do now.
I mean yes, and?
It’s theft. They stole the work of writers and all sorts of content creators. That’s the wrong that needs to be righted, not how to reproduce the crime. The only way to right intellectual property theft is to pay the owner of that intellectual property the money they would have gotten if they had willingly leased it out as part of a deal. Corporations like Nintendo, Disney, and Hasbro hound people who do anything unapproved with their intellectual property. The idea that we’re “yes, and”-ing the theft of all humanity’s intellectual property is laughable in a discussion supposedly about ethics.
We should push to make high-VRAM devices accessible.
That’s a whole other topic. But what we should fight for now is worker-owned corporations. While that is an excellent goal, it isn’t helping to undo the theft that was done on its own. It’s only allowing more people to profit off that theft. We should also compensate the people who were stolen from, if we care about ethics. Also, compensating writers and artists seems like a good reason to take all the money away from the billionaires.
Lots of talks with OpenAI devs where they just doomsay about the dangers of AGI and how it must be kept top secret, controlled by governments.
OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn
Looks like the devs aren’t in control of the C-suite. Whoops, all avoidable capitalist-driven apocalypses.
it’s theft
So are all the papers, all the conversations on the internet, all the code, etc. So what? Nobody will stop the AI train. You would need a Butlerian Jihad type of event to make it happen. In the case of any won class action, the payouts would be so laughable nobody would even apply.
DeepSeek
DeepSeek didn’t open-source any of the proprietary AIs the big corporations have. I’m talking about a “force OpenAI to open-source all of their AI” type of event, with the company closed if they don’t comply.
betting on the future
Ok, a new AI model drops, it’s open source, I download it and run it on my rack. Where profits?
So what?
So appealing to ethics was bullshit, got it. You just wanted the automated theft tool.
DeepSeek
It kept some things hidden, but it was the most open-source LLM we got.
Ok, a new AI model drops, it’s open source, I download it and run it on my rack. Where profits?
The next new AI model that can do the next new thing. The entire economy is based on speculative investments. If you can’t improve on the AI model on your machine you’re not getting any investor money. edit: typos
The bubble has burst or, rather, is currently in the process of bursting.
My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn’t work. And I’d say the majority of my clients are now scrambling to recover, or simply to make it out the other end alive. Soon there’s going to be nothing left to regulate.
GPT-5 was a failure. Rumors I’ve been hearing are that Anthropic’s new model will be a failure much like GPT-5. The house of cards is falling as we speak. This won’t be the complete death of AI, but this is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat and they’re getting desperate to find new sources. For a good while they’ve been quite literally eating each other’s feces. They’re now starting on Git repos of all things to consume. Codeberg can tell you all about that from this past week. This is why I’m telling people to consider setting up private git instances and locking that crap down. If you’re on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build from end to end didn’t work. Nothing produced scales; it’s all full of exploits or, in most cases, has zero security measures whatsoever. They all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
The folks I know at both OpenAI and Anthropic don’t share your belief.
Also, anecdotally, I’m only seeing more and more push for LLM use at work.
That’s interesting, in all honesty, and I don’t doubt you. All I know is my bank account has been getting bigger these past few months due to new work from clients looking to fix their AI problems.
I think you’re onto something where a lot of this AI mess is going to have to be fixed by actual engineers. If folks blindly copied from stackoverflow without any understanding, they’re gonna have a bad time and that seems equivalent to what we’re seeing here.
I think the AI hate is overblown and I tend to treat it more like a search engine than something that actually does my work for me. With how bad Google has gotten, some of these models have been a blessing.
My hope is that the models remain useful, but the bubble of treating them like a competent engineer bursts.
Agreed. I’m with you: it should be treated as a basic tool, not something used to actually create things, which, again, in my current line of work is what many places have done. It’s a fantastic rubber duck. I use it myself for that purpose, or even for tasks that I can’t be bothered with, like creating README markdown, commit messages, or even setting up flakes and Nix shells and stuff like that, creating base project structures so YOU can do the actual work and don’t have to waste time setting things up.
The hate can be overblown but I can see where it’s coming from purely because many companies have not utilized it as a tool but instead thought of it as a replacement for an individual.
At the risk of sounding like a tangent, LLMs’ survival doesn’t solely depend on consumer/business confidence. In the US, we are living in a fascist dictatorship. Fascism and fascists are inherently irrational. Trump, a fascist, wants to bring back coal despite the market naturally phasing coal out.
The fascists want LLMs because they hate art and all things creative. So the fascists may very well choose to have the federal government invest in LLM companies. Like how they bought 10% of Intel’s stock or how they want to build coal powered freedom cities.
So even if there are no business applications for LLM technology our fascist dictatorship may still try to impose LLM technology on all of us. Purely out of hate for us, art and life itself. edit: looks like I commented this under my comment the first time
deleted by creator
My boss had GPT make this informational poster thing for work. It’s supposed to explain stuff to customers and is rampant with spelling errors and garbled text. I pointed it out to the boss and she said it was good enough for people to read. My eye twitches every time I see it.
good enough for people to read
wow, what a standard, super professional look for your customers!
I think that’s exactly what the author was referring to.
Spelling errors? That’s… unusual. Part of what makes ChatGPT so specious is that its output is usually immaculate in terms of language correctness, which superficially conceals the fact that it’s completely bullshitting on the actual content.
FWIW, she asked it to make a complete info-graphic style poster with images and stuff so GPT created an image with text, not a document. Still asinine.
The user above mentioned informational poster so I’m going to assume it was generated as an image. And those have spelling mistakes.
Can’t even generate image and text separately smh. People are indeed getting dumber.
It’s important to remember that there’s a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I’ve known about.
All I can write is that, if you know what kind of tech you want and it’s satisfactory, just stick to that. That’s what I do.
Don’t let ads get to you.

First post on a lemmy server, by the way. Hello!
There was a quote about how Silicon Valley isn’t a fortune teller betting on the future. It’s a group of rich assholes that have decided what the future would look like and are pushing technology that will make that future a reality.
Welcome to Lemmy!
Classic Torment Nexus moment over and over again really
Reminds me of the way NFTs were pushed. I don’t think any regular person cared about them or used them, it was just astroturfed to fuck.
Hello and welcome! Also, thank you for the good advice!
Hello!
Welcome in! Hope you’re finding Lemmy to be a positive place. It’s like Reddit, but you have a lot more control over what you can block and where you can make a “home” (aka home instance).
Feel free to reach out if you have any questions about anything
It’s like Valorant, but much bigger and even worse.
My pet peeve: “here’s what ChatGPT said…”
No.
Stop.
If I’d wanted to know what the Large Lying Machine said, I would’ve asked it.
It’s like offering unsolicited advice, but it’s not even your own advice
“Here’s me telling everyone that I have no critical thinking ability whatsoever.”
Is more like it
Hammer time.
People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.
They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I’m not advocating for it, I’m pointing out why people use it.
Treating AI users as moral failures and disregarding their circumstances does nothing to discourage the use of AI. All you are doing is reinforcing their alienation from anti-AI sentiment.
First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.
It’s like eating factory farmed meat. If you have eaten it recently, you know what horrors go into making it. Yet, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off to remain sane after all these years. There is a system at work here, greater than just you and the chicken. It’s the industry as a whole exploiting consumer habits. AI users are no different.
Let’s go a step further and look at why people are in burnout, are overloaded, are working 3 jobs to make ends meet.
It’s because we’re all slaves to capitalism.
Greed for more profit by any means possible has driven society to the point where we can barely afford to survive and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah they’ll turn to anything that makes life easier. But it shouldn’t be this way and until we’re no longer slaves we’ll continue to make the choices that ease our burden, even if they’re extremely harmful in the long run.
I read it as “eating their kids”. I am an overwoked slave.
We shouldn’t accuse people of moral failings. That’s inaccurate and obfuscates the actual systemic issues and incentives at play.
But people using this for unlicensed therapy are in danger. More often than not, LLMs will parrot whatever you give them in the prompt.
People have died from AI usage including unlicensed therapy. This would be like the factory farmed meat eating you.
https://www.yahoo.com/news/articles/woman-dies-suicide-using-ai-172040677.html
Maybe more like factory meat giving you food poisoning.
And what do you think mass adoption of AI is gonna lead to? Now you won’t even have 3 jobs to make rent, because they outsourced yours to someone cheaper using an AI agent. This is gonna permanently alter how our society works, and not for the better.
being anti-plastic is making me feel like i’m going insane. “you asked for a coffee to go and i grabbed a disposable cup.” studies have proven its making people dumber. “i threw your leftovers in some cling film.” its made from fossil fuels and leaves trash everywhere we look. “ill grab a bag at the register.” it chokes rivers and beaches and then we act surprised. “ill print a cute label and call it recyclable.” its spreading greenwashed nonsense. little arrows on stuff that still ends up in the landfill. “dont worry, it says compostable.” only at some industrial facility youll never see. “i was unboxing a package” theres no way to verify where any of this ends up. burned, buried, or floating in the ocean. “the brand says advanced recycling.” my work has an entire sustainability team and we still stock pallets of plastic water bottles and shrink wrapped everything. plastic cutlery. plastic wrap. bubble mailers. zip ties. everyone treats it as a novelty. everyone treats it as a mandatory part of life. am i the only one who sees it? am i paranoid? am i going insane? jesus fucking christ. if i have to hear one more “well at least” “but its convenient” “but you can” im about to lose it. i shouldnt have to jump through hoops to avoid the disposable default. have you no principles? no goddamn spine? am i the weird one here?
#ebb rambles #vent #i think #fuck plastics im so goddamn tired
If plastic was released roughly two years ago you’d have a point.
If you’re saying in 50 years we’ll all be soaking in this bullshit called gen-AI and thinking it’s normal, well - maybe, but that’s going to be some bleak-ass shit.
Also you’ve got plastic in your gonads.
Yeah it was a fun little whataboutism. I thought about doing smartphones instead. Writing that way hurts though. I had to double check for consistency.
On the bright side we have Cyberpunk to give us a tutorial on how to survive the AI dystopia. Have you started picking your implants yet?
If you’re saying in 50 years we’ll all be soaking in this bullshit called gen-AI and thinking it’s normal, well - maybe, but that’s going to be some bleak-ass shit.
I’m almost certain gen AI will still be popular in 50 years. This is why I prefer people try to tackle some of the problems they see with AI instead of just hating on AI because of the problems it currently has. Don’t get me wrong, pointing out the problems as you have is important - I just wouldn’t jump to the conclusion that AI is a problem itself.
I wish companies were actually punished for their ecological footprint
plastic and AI
Not just you. AI is making people dumber. I am frequently correcting the mistakes of my colleagues that use it.
My attitude to all of this is I’ve been told by management to use it so I will. If it makes mistakes it’s not my fault and now I’m free to watch old Stargate episodes. We’re not doing rocket surgery or anything so who cares.
At some point they’ll realise that the AI is not producing decent output and then they’ll shut up about it. Much easier they come to that realisation themselves than me argue with them about it.
Luckily no one is pushing me to use Ai in any form at this time.
For folks in your position, I fear that they will first go through a round of layoffs to get rid of the people who are clearly using it “wrong” because Top Management can’t have made a mistake before they pivot and drop it.
Yeah that is a risk, then again if they’re forcing their employees to use AI they’re probably not far off firing everyone anyway so I don’t see that it makes a huge amount of difference for my position.
When I was a kid and first realized I was maybe a genius, it was terrifying. That there weren’t always gonna just be people smarter than me who could fix it.
Seeing them get dumber is like some horror movie shit.
I don’t fancy myself a genius but the way other people navigate things seems to create a strangely compelling case on its own
It’s depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.
Every time someone talks up AI, I point out that you need to be a subject matter expert in the topic to trust it, because it frequently produces really, really convincing summaries that are complete and utter bullshit.
And people agree with me implicitly and tell me they’ve seen the same. But then don’t hesitate to turn to AI on subjects they aren’t experts in for “quick answers”. These are not stupid people either. I just don’t understand.
Hence the feeling of creeping insanity. Yeah.
Uses for this current wave of AI: converting machine language to human language. Converting human language to machine language. Sentiment analysis. Summarizing text.
People have way over invested in one of the least functional parts of what it can do because it’s the part that looks the most “magic” if you don’t know what it’s doing.
The most helpful and least used way of using them is to identify what information the user is looking for and then to point them to resources they can use to find out for themselves, maybe with a description of which resource might be best depending on what part of the question they’re answering.
It’s easy to be wrong when you’re answering a question, and a lot harder when you hand someone a book and say you think the answer is in chapter four.
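For the sentiment-analysis and text-summarizing uses mentioned above, a minimal sketch of what that looks like in practice, assuming the openai Python client and a placeholder model name (both are assumptions, not anything the commenter specified; a local open-weight model behind a compatible endpoint works the same way):

```python
# Minimal sketch: summarization and sentiment analysis via a chat-completions API.
# Assumes the openai client (pip install openai) and an API key in OPENAI_API_KEY;
# the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

def ask(instruction: str, text: str) -> str:
    """Send one instruction plus one document; return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap in whatever you have access to
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

document = "The new release fixed the crash, but the UI is slower than before."
print(ask("Summarize this in one sentence.", document))
print(ask("Classify the sentiment as positive, negative, or mixed.", document))
```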
Because the alternative for me is googling the question with “reddit” added at the end half of the time. I still do that a lot. For more complicated or serious problems/questions, I’ve set it to only use the search function and navigate scientific sites like NCBI and PubMed while utilizing deep think. It then gives me the sources, and I randomly cross-check the relevant information, but so far I personally haven’t noticed any errors. You gotta realize how much time this saves.
When it comes to data privacy, I honestly don’t see the potential dangers in the data I submit to OpenAI, but this is of course different for everyone else. I don’t submit any personal info or talk about my life. It’s a tool.
Simply by the questions you ask, the way you ask them, they are able to infer a lot of information. Just because you’re not giving them the raw data about you doesn’t mean they are not able to get at least some of it. They’ve gotten pretty good at that.
I really don’t have any counter-arguments, as you have a good point; I tend to turn a blind eye to that uncomfortable fact. It’s worth it, I guess? Realistically, I’m having a hard time thinking of worst-case scenarios.
If it saves time but you still have to double check its answers, does it really save time? At least many reddit comments call out their own uncertainty or link to better resources, I can’t trust a single thing AI outputs so I just ignore it as much as possible.
Meanwhile, we have people making the web worse by not linking to source & giving us images of text instead of proper, accessible, searchable, failure tolerant text.
Meanwhile, we have people making the web worse by not linking to source & giving us images of text instead of proper, accessible, searchable, failure tolerant text.
- OpenAI Text Crawler
You don’t think the disabled use technology? Or that search engine optimization existed before LLMs? Or that text sticks around when images break?
Lack of accessibility wouldn’t stop LLMs: it could probably process images into text the hard way & waste more energy in the process. That’d be great, right?
-
A hyphen isn’t a quotation dash.
-
Are we playing the AI game? Let’s pretend we’re AI. Here’s some fun punctuation:
‒−–—―…:
Beep bip boop.
Yeah, that’s definitely fair. Accessibility is important. It is unfortunate, though, that AI companies abuse accessibility and organization tags to train their LLMs.
See how Stable Diffusion porn uses danbooru tags, and situations like this:
https://youtube.com/watch?v=NEDFUjqA1s8
Decentralized media based communities have the rare ability to be able to hide their data from scraping.
I didn’t have the patience to sit through 19 minutes of video, so I tried to read through the transcript. Then I saw the stuttering & weird, verbose fuckery going on there. Copilot, however, summarized the video, which revealed it was about deliberate obfuscation of subtitle files to attempt to thwart scrapers.
This seems hostile to the user, and doesn’t seem to work as intended, so I’m not sure what to think of it. I know people who have trouble sequencing information and rely on transcripts. Good accessibility benefits nondisabled users, too (an additional incentive for it).
Not trying to be overly critical. I’ll have to look into danbooru tags: unfamiliar with those. Thanks.
Patience and nuance are rare virtues in 2025
I’m not sure this is so much virtues becoming rarer as inconvenient demands emerging: a video that could have been an article is a problem of the modern age.
Articles can be read quickly & processed structurally by jumping around sections. Videos, however, can be agonizing, because they resist that sort of processing. Transcripts can alleviate the problem somewhat, but obfuscating them undoes that. And we’ve got things to do.
The video probably would have been an apprenticeship in the 1800s
-
Normally I would agree, but Twitter and Discord are the sole exceptions. The original sources can get hit by meteors for all I care. No… I hope their datacenters do get hit, with no one in them of course.
It is silly that people think not posting text can somehow stop LLM crawlers.
It is silly that people think not posting text can somehow stop LLM crawlers.
Agreed.
Not linking to the source because you hate the hosting platform, though, is petty vindictiveness that does more to hurt the uninvolved on accessibility & usability than it does against the platform. To prevent traffic to platforms, linking to alternatives like proxies for those services & web archival snapshots is common practice around here.
So hating the AI hypenukes is ‘old man yelling at cloud’ but only being allowed to grab images of text is “people” making the web worse? Point made with an image of minimal text? from lemmynsfw?
Well goddamn.
Point made with an image of minimal text? from lemmynsfw?
Did you notice the alt text? Here’s the markdown

When that image breaks, the alt text & a broken image icon renders in its place, so readers will still understand the message. People using accessibility technology (like screenreaders) can now understand the image. Search engines can find the image by the alt text.
I think griping over inaccessible text & lack of link to real text is more compelling, because it’s a direct choice of the author: it directly impacts the user, the complaint goes directly to the author impacting the user, the author has direct control over it & can choose to fix it at any time. There’s a good chance of an immediate remedy.
Griping over AI, however, adds little that isn’t posted frequently around here & is a bit like yelling at clouds: we aren’t about to stop that technology by yelling about it on here. I’m sure it feels good, though. It could feel better with a link & proper text.
Did you notice the alt text?
Yeah, and I noticed it didn’t describe the image at all - unless one had already seen the image and knew what it was. So for visually impaired users (i.e. one of the main groups who would benefit from alt text) it is insufficient at best.
Griping over AI, however, adds little that isn’t posted frequently around here
Specific to the OP, the issue is that those of us who know gen-AI is an enormous piece of shit, with only downsides for things we care about like culture and learning, might feel like we’re going a little crazy in a culture that only seems able to share love for it in public places like work. Even public criticism of it has been limited to economic and ecological harms. I haven’t seen that particular angle very much before, and as someone else posted here, I felt recognized by it.
Yeah, and I noticed it didn’t describe the image at all
How would you state it over the phone? Alt text is a succinct alternative that conveys (accurate/equivalent) meaning in context, much like reading a comment with an image to someone over the phone. If you would have said that “Simpsons meme of an old man yelling at a cloud”, then that would also suffice. It doesn’t need to go into elaborate detail.
In those discussions, people often talk about having had enough, losing their minds, and it making people dumber, too. I get that it helps to feel recognized, so would it feel better to broaden the reach of that message for more recognition?
How would you state it over the phone?
“A screenshot of The Simpsons showing a hand holding a newspaper article featuring a picture of Grandpa Simpson shaking his fist at the sky and scowling, with the headline ‘Old Man Yells At Clouds’”
It doesn’t need to go into elaborate detail.
It depends on how much you think someone who needs or wants the alt text needs to know.
so would it feel better to broaden the reach of that message for more recognition?
absolutely. And, ironically, one of the possible use cases of AI where it might-sort-of-kinda-work-okay-to-help-although-it-needs-work-because-it’s-still-kind-of-sucky.
It depends on how much you think someone who needs or wants the alt text needs to know.
The accessibility advocates at WebAIM in the previous link don’t seem to think a verbal depiction (which an algorithm could do) is adequate. They emphasize what an algorithm does poorly: convey meaning in context.
Their 1st example indicates less is better: they don’t dive into incidental details of the astronaut’s dress, props, hand placement, but merely give her title & name.
They recommend
not include phrases like “image of …” or “graphic of …”, etc
and calling it a screenshot is non-essential to context. The hand holding a newspaper isn’t meaningful in context, either. The headline already states the content of the picture, redundancy is discouraged, and unless the context refers to the picture (it doesn’t), that’s also non-essential to context.
The best alternative text will depend on the context and intended content of the image.
Unless gen-AI mindreads authors, I expect it will have greater difficulty delivering meaning in context than verbal depictions.
Geez for someone who ostensibly wants people to use alt text you’re super picky about it.
Good luck?
you asked for thoughts about your character backstory and i put it into chat gpt for ideas
If I want ideas from ChatGPT, I could just ask it myself. Usually, if I’m reaching out to ask people’s opinions, I want, you know, their opinions. I don’t even care if I hear nothing back from them for ages, I just want their input.
“I just fed your private, unpublished intellectual property into a black box owned by billionaires. You’re welcome.”
AI-generated content is like farts. Everyone likes the smell of their own and hates the smell of everyone else’s.
No line breaks and capitalization? Can somebody ask AI to format it properly, please?
being anti-AI is making me feel like I’m going insane. “You asked for thoughts about your character’s backstory and I put it into ChatGPT for ideas.” Studies have proven it’s making people dumber. “I asked AI to generate this meal plan.” It’s causing water shortages where its data centers are built. “I’ll generate some pictures for the DnD campaign.” It’s spreading misinformation. “Meta, generate an image of this guy doing something stupid.” It’s trained off stolen images, writing, video, audio. “I was talking with my Snapchat AI.” There’s no way to verify what it’s doing with the information it collects. “YouTube is implementing AI-based age verification.” My work has an entire graphics media department and has still put AI-generated motivational posters up everywhere. AI playlists. AI facial verification. Google AI. Microsoft AI. Meta AI. Snapchat AI.
Everyone treats it as a novelty. Everyone treats it as a mandatory part of life. Am I the only one who sees it? Am I paranoid? Am I going insane? Jesus fucking Christ.
If I have to hear one more “Well at least—”, “But it does—”, “But you can—” I’m about to lose it.
I shouldn’t have to jump through hoops to avoid the evil machine. Have you no principles? No goddamn spine? Am I the weird one here?
Still shoddy.
Got them —s, tho 👍
Should be ellipses.
Wait . . AI didn’t make it good? Or even better?
WELL THEN WHAT THE FUCK WAS ALL THAT THREE HUNDRED BILLION BULLSHIT FOR THEN??
Srsly, if anyone has a position open lmk kthx
The Luddites were right. Maybe we can learn a thing or two from them…
I had to download the Facebook app to delete my account. Unfortunately I think the Luddites are going to be sent to the camps in a few years.
They can try, but Papa Kaczynski lives forever in our hearts.
deleted by creator
The data centers should not be safe for much longer. Especially once they use up the water of their small towns nearby
If they told me to ration water so a company could cool a machine, I’d become a fucking terrorist.
The reason AI is wrong so often is because it’s not programmed to give you the right answer. It’s programmed to give you the most pervasive one.
LLMs are being fed by Reddit and other forums that are ostensibly about humans giving other humans answers to questions.
But have you been on those forums? It’s a dozen different answers for every question. The reality is that we average humans don’t know shit and we’re just basing our answers on our own experiences. We aren’t experts. We’re not necessarily dumb, but unless we’ve studied, our knowledge is entirely anecdotal, and we all go into forums to help others with a similar problem by sharing our answer to it.
So the LLM takes all of that data and in essence thinks that the most popular, most mentioned, most upvoted answer to any given question must be the de facto correct one. It literally has no other way to judge; it’s not smart enough to cross reference itself or look up sources.
It literally has no other way to judge
It literally does NOT judge. It cannot reason. It does not know what “words” are. It is an enormous rainbow table of sentence probability that does nothing useful except fool people and provide cover for capitalists to extract more profit.
But apparently, according to some on here, “that’s the way it is, get used to it.” FUCK no.
Markov text generator. That’s all it is. Just made with billions in stolen wages.
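For what the comparison is worth, here’s a toy bigram Markov text generator, a minimal sketch (the corpus and names are invented for illustration). Real LLMs are enormously larger and use learned context rather than a lookup table, but the “sample the next word by observed frequency” loop is the family resemblance being claimed:

```python
# Toy bigram Markov text generator: map each word to the words seen after it,
# then walk the chain, sampling each next word by how often it followed.
import random
from collections import defaultdict

def train(corpus: str) -> dict[str, list[str]]:
    """Build a word -> list-of-followers table from a whitespace-split corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict[str, list[str]], start: str, length: int = 20) -> str:
    """Random-walk the chain; choosing from the follower list is frequency-weighted."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break  # dead end: the last word never had a follower in the corpus
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the model predicts the next word and the next word is whatever the corpus said most"
print(generate(train(corpus), "the"))
```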
“Information wants to be free…” No! Not like that!! 😫
It literally has no other way to judge; it’s not smart enough to cross reference itself or look up sources
I think that is its biggest limitation.
Like, AI basically crowd-sourcing information isn’t really the worst thing; crowd-sourced knowledge tends to be fairly decent. People treating it as if it’s an authoritative source, like they looked it up in an encyclopedia or asked an expert, is a big problem though.
Ideally it would be more selective about the ‘crowds’ it gathers data from. Like science questions should be sourced from scientists. Preferably experts in the field that the question is about.
Like Wikipedia (at least for now) is ‘crowd-sourced’, but individual pages are usually maintained by people who know a lot about the subject. That’s why it’s more accurate than a ‘normal’ encyclopedia. Though of course it’s not foolproof or tamper-proof by any definition.
If we taught AI how to be ‘media literate’ and gave it the ability to double-check its data against reliable sources, it would be a lot more useful.
most upvoted answer
This is the other problem. You basically have 4 types of redditors.
- People who use the karma system correctly, that is to say they upvote things that contribute to the conversation. Even if you think it is ‘wrong’ or you disagree with it, if it’s something that adds to the discussion, you are supposed to upvote it.
- People who treat it as “I agree / I disagree” buttons.
- People who treat it as “I like this / I hate this” buttons.
- I’d say the majority of people probably do some combination of the above.
So more than half the time, people aren’t upvoting things because they think they are correct. If LLM models are treating ‘karma’ as a “this is correct” metric, that’s a big problem.
The other bad problem is people who really should know better, tech bros and CEOs, going all in on AI when it’s WAY too early to do that. As you point out, it’s not even really intelligent yet; it just parrots ‘common’ knowledge.
AI should never be used to create anything in Wikipedia. But theoretically, an open-source LLM trained solely on Wikipedia would actually be kinda useful to ask quick questions to.