• MrsDoyle@sh.itjust.works · +4 · 2 days ago

    I was trying to take a photo of a piece of jewellery in my hand tonight and accidentally activated my phone’s AI. It threw up a big Clippy-type message: “How can I help you?” I muttered “fuck off” as I stabbed at the back button. “I’m sorry you feel that way!” it said.

    Yeah, I hate it. At least Clippy didn’t give snark.

  • Zement@feddit.nl · +23 · 3 days ago

    My thermostat hides no-brainer features behind an “AI” subscription. Switching off the heating when the weather will be warm that day doesn’t need AI… that’s not even machine learning, that’s a simple PID controller (see the sketch below).
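
    A minimal sketch of that logic, with made-up gains and thresholds; `room_temp_c` and `forecast_high_c` stand in for whatever the thermostat already measures and downloads:

    ```python
    class PID:
        """Textbook PID loop: no ML, no subscription."""

        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measured, dt):
            error = self.setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def heating_demand(pid, room_temp_c, forecast_high_c, dt=60.0):
        # Warm day ahead? Plain threshold: heating off. Otherwise the PID drives the valve.
        if forecast_high_c >= 20.0:
            return 0.0
        return max(0.0, pid.update(room_temp_c, dt))

    # e.g. heating_demand(PID(1.2, 0.01, 0.0, setpoint=21.0), 18.5, forecast_high_c=12.0)
    ```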

    • jol@discuss.tchncs.de · +13 · 3 days ago

      I’m so glad I switched to just Home Assistant and Zigbee devices; my radiators were dumb, so I could replace their valves with Zigbee ones. Fuck making everything “smart” a subscription.

      • Zement@feddit.nl · +1 · 2 days ago

        I think I’ll try ESPHome; half of my appliances are Tasmota-based now, I was just too lazy to research compatible thermostats… (painful hindsight)…

    • frayedpickles@lemmy.cafe · +4 · 2 days ago

      Even the supposed efficiency benefits of the Nest basically come down to “if you leave the house and forget to turn the air down, we’ll do it for you automatically”.

  • dQw4w9WgXcQ@lemm.ee · +37 · 3 days ago

    I got a Christmas card from my company. As part of the Christmas greeting, they promoted AI, something to the effect of “We wish you a merry Christmas, much like the growth of AI technologies within our company”, or something like that.

    Please no.

  • CaptPretentious@lemmy.world · +133/-3 · 4 days ago

    I hated it when everything became ‘smart’.

    Now everything has ‘AI’.

    Nothing was smart. And that’s not AI.

    Everything costs more, everything has a stupid app that gets abandoned, and an IoT backend that’s on life support from the moment it’s turned on. Subscriptions everywhere! Everything is built to lower quality, lower standards.

  • ozoned@lemmy.world · +18/-1 · 3 days ago

    Containerize everything!

    Crypto everything!

    NFT everything!

    Metaverse everything!

    This too shall pass.

    • SuspiciousUser@lemmy.ml · +3 · 1 day ago

      Put a curved screen on everything, microwave your Thanksgiving turkey, put EVERYTHING, including hot dogs, ham, and olives, in gelatin. In the future only useful things will have AI in them, and I have a hard time convincing the hardcore anti-AI crowd of that.

      • CarrotsHaveEars@lemmy.ml · +7 · 3 days ago

        Docker is only useful in so many scenarios. Nowadays people wrap basic binaries like tar in a container and call it a platform-agnostic solution. Sometimes people are just incompetent and know `docker pull` as the only solution.

        • Mio@feddit.nu · +3 · edited · 2 days ago

          Docker has many benefits: containers can be more secure, they’re easy to update, and, something many overlook, a Dockerfile is a set of detailed install instructions that actually works (as long as the container works), which is handy when the wiki isn’t updated.

          Another benefit is that the application owner can change the underlying infrastructure without users needing to care. Example: Pi-hole v5 is backend DNS + lighttpd for the web UI + PHP, all in one container. In v6 (beta) they removed lighttpd and PHP and built that functionality into the core service. In my tests it went from 100 MB of RAM usage to 20 MB. They also changed the base image from Debian to Alpine and the image size shrank a lot.

          Next benefit: I am moving from x86 to ARM for my home server. Docker itself will figure out the right architecture and pull the matching image.

          Sure, Ansible exists as one attempt to solve the install-instructions problem, but it’s not as popular, so the community is smaller, and a failed run can leave you in a bad state (unlike containers, which you can delete and recreate fresh easily). Then there are VMs, but IMO they waste too many resources.

      • tetris11@lemmy.ml · +2 · 2 days ago

        LXC – natively containerize an application (or multiple)

        systemd-run – can natively limit CPU shares and RAM usage
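
        For example (a sketch; the property names are from systemd.resource-control(5)): something like `systemd-run --scope -p MemoryMax=1G -p CPUWeight=50 some-command` caps RAM and deprioritizes CPU for an ordinary process, with `some-command` as a placeholder and no image or registry involved.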

      • stinky@redlemmy.com · +5 · 3 days ago

        I think the complaint is that apps are being designed with containerization in mind when they don’t need it.

        • azertyfun@sh.itjust.works · +2 · 2 days ago

          Any examples spring to mind? I’ve built apps that are only distributed as containers (because for their specific purpose it made sense and I am also the operator of the service), but if ya don’t want to run it in a container… just follow the Dockerfile’s “instructions” to package the app yourself? I’m sure I could come up with a contrived example where that would be impractical, but in almost every case a container app is just a basic install script written for a standard distro and therefore easily translatable to something else.

          FOSS developers don’t owe you a pre-packaged .deb. If you think distributing one would be useful, read up on debhelper. But as someone who’s done both: a Dockerfile is certainly much easier than debhelper. So “don’t need it” is a statement that only favors native packaging from the user’s perspective, not the maintainer’s. Can’t really fault a FOSS developer for doing the bare minimum when packaging an app.

          • stinky@redlemmy.com · +2 · 2 days ago

            also! it’s worth noting that not all FOSS developers are Debian (or even Linux) devs. Developers of open-source projects, including .NET Core, don’t “owe” us packaging of any kind, but the topic here is unnecessary containerization, not a social contract to provide it.

          • stinky@redlemmy.com · +1/-1 · 2 days ago

            I am not the person who posted the original comment so this is speculation, but when they criticized “containerizing everything” I suspect they meant “Yes client, I can build that app for you, and even though your app doesn’t need it I’m going to spend additional time containerizing it so that I can bill you more” but again you’d have to ask them.

          • stinky@redlemmy.com · +2 · 2 days ago

            “Yes, client! I can build that app for you! I’m going to bill you these extra items for containerization so I can get paid more”

      • chiliedogg@lemmy.world · +7 · edited · 3 days ago

        Not as bad as the IR touch screens. They had an IR field projected just above the surface of the screen that would be broken by your finger, which would register a touch at that location.

        Or a fly landing on your screen and walking a few steps could drag a file into the recycle bin.

  • pyre@lemmy.world · +85/-9 · 4 days ago

    they don’t care. you’re not the audience. the tech industry lives on hype. now it’s ai because before that they did it with nft and that failed. and crypto failed. tech needs a grift going to keep investors investing. when the bubble bursts again they’ll come up with some other bullshit grift because making useful things is hard work.

    • edric@lemm.ee · +27/-3 · 4 days ago

      Yup, you can see it in the talks at annual tech conferences. Last year it was APIs, this year it’s all AI. They’ll just move on to the next trendy thing next year.

      • Tyfud@lemmy.world · +40 · 3 days ago

        To be fair, APIs have been around since the 70s, and they aren’t trendy; they’re just required as a common interface for applications to request and perform actions with each other.

        But yeah, AI is mostly trendy bullshit.

        • edric@lemm.ee · +5 · 3 days ago

          I was referring mostly to security conferences. Last year almost every vendor was selling API security products. Now it’s all AI-infused products.

        • edric@lemm.ee · +6/-1 · edited · 3 days ago

          Did you go to any appsec conferences last year? It was all API security. This year it was all AI-leveraged CI/CD, code/vulnerability review, etc.

    • takeda@lemm.ee · +19/-1 · 3 days ago

      I was OK with crypto and NFTs because it was up to me to decide whether I wanted to get involved or not.

      AI does seem to have an impact on jobs; at the very least, employers are trying it out to see if it will actually let them hire fewer staff. I see that happening for SWE. I don’t think AI will do much there, though.

      • pyre@lemmy.world · +16 · edited · 3 days ago

        it’s not up to you, it just failed before it could be implemented. many publishers had already committed to in-game nfts before they had to back down because it fell apart too quickly (and some still haven’t). if it had held on for just a couple more years there wouldn’t be a single aaa title without nfts today.

        crypto was more complicated: unlike these two you can’t just bolt it on and say “here, this is all crypto now”, because it requires prior commitment and it’s way too complicated for the average person. plus it doesn’t have much benefit: people already give you money and buy fake coins anyway.

        I’m giving examples from games because it’s the most exploitative market but these would also seep into other apps and services if not for the hurdles and failures. so now we’re stuck with this. everyone’s doing it because it’s a gold rush except instead of gold it’s literal shit, and instead of a rush it’s literal shit.

        — tangent —

        … and just today I realized I had to switch back to Google assistant because literally the only thing gemini can do is talk back to me, but it can’t do anything useful, including the simplest shit like converting currency.

        “I’m sorry, I’m still learning” – why, bitch? why don’t you already know this? what good are you if, when I ask you to do something for convenience, you tell me to do it manually and start explaining how to do the most basic shit that you can’t do, as if I’m the fucking idiot.

    • Bronzebeard@lemm.ee · +4/-6 · 3 days ago

      NFTs didn’t fail; it was just the idiotic selling of jpgs for obscene amounts that crashed (most of that was likely money laundering anyway). The tech still has a use.

      Wouldn’t exactly call crypto a failure, either, when we’re in the midst of another bull run.

  • Bamboodpanda@lemmy.world · +24/-2 · 3 days ago

    AI is one of the most powerful tools available today, and as a heavy user, I’ve seen firsthand how transformative it can be. However, there’s a trend right now where companies are trying to force AI into everything, assuming they know the best way for you to use it. They’re focused on marketing to those who either aren’t using AI at all or are using it ineffectively, promising solutions that often fall short in practice.

    Here’s the truth: the real magic of AI doesn’t come from adopting prepackaged solutions. It comes when you take the time to develop your own use cases, tailored to the unique problems you want to solve. AI isn’t a one-size-fits-all tool; its strength lies in its adaptability. When you shift your mindset from waiting for a product to deliver results to creatively using AI to tackle your specific challenges, it stops being just another tool and becomes genuinely life-changing.

    So, don’t get caught up in the hype or promises of marketing tags. Start experimenting, learning, and building solutions that work for you. That’s when AI truly reaches its full potential.

    • fine_sandy_bottom@discuss.tchncs.de · +6 · 3 days ago

      I think there are specific industrial problems for which AI is indeed transformative.

      Just one example I’m aware of is the AI-accelerated Nazca Lines survey that revealed many more geoglyphs than we were previously aware of.

      However, this type of use case just isn’t relevant to most people, whose reliance on LLMs is “write an email to a client saying xyz” or “summarise this email that someone sent to me”.

      • Hexarei@programming.dev · +2 · 3 days ago

        One of my favorite examples is “smart paste”. Got separate address fields (city, state, zip, etc.)? Have the user copy the full address, click “Smart paste”, and feed the clipboard to an LLM with a prompt to transform it into the data your form needs. Absolutely game-changing imho.
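
        A rough sketch of the idea, assuming an OpenAI-compatible endpoint; the model name, prompt, and field names here are illustrative, not our actual product code:

        ```python
        import json

        from openai import OpenAI  # any OpenAI-compatible server works

        client = OpenAI()  # assumes API key / base_url are configured elsewhere

        PROMPT = (
            "Split this pasted address into JSON with exactly the keys "
            '"street", "city", "state", "zip". Reply with JSON only.\n\n'
        )

        def smart_paste_address(clipboard_text):
            """Best-effort split of a free-form pasted address into form fields."""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative; pin whatever model you actually run
                messages=[{"role": "user", "content": PROMPT + clipboard_text}],
                temperature=0,
            )
            fields = json.loads(resp.choices[0].message.content)
            # Pre-fill the form; the user still reviews before submitting.
            return {k: fields.get(k, "") for k in ("street", "city", "state", "zip")}
        ```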

        Or data ingestion from email: many of my customers get emails from their customers with instructions that someone at the company has to convert into form fields in the app. Instead, we provide an email address (some-company-inbound@myapp.domain) and feed the incoming emails into an LLM, asking it to extract any details it can (number of copies, post-processing, page numbers, etc.) so they auto-fill into fields for the customer to review before approving the incoming details.

        So many incredibly powerful use cases, and folks are doing wasteful and pointless things with them instead.

        • fine_sandy_bottom@discuss.tchncs.de · +4/-1 · 3 days ago

          If I’m brutally honest, I don’t find these use cases very compelling.

          Separate fields for addresses could be easily solved without an LLM. The only reason there isn’t already a common solution is that it just isn’t that much of a problem.
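
          For the boring majority of cases it really is a one-regex job; a naive sketch for US-style addresses (the janky ones will of course fall through):

          ```python
          import re

          # Naive "street, city, ST 12345" splitter; plenty of real-world
          # addresses will break it, but it covers the common case.
          ADDR = re.compile(
              r"^(?P<street>.+),\s*(?P<city>[^,]+),\s*(?P<state>[A-Z]{2})\s+(?P<zip>\d{5})$"
          )

          def split_address(text):
              m = ADDR.match(text.strip())
              return m.groupdict() if m else None

          # split_address("123 Main St, Springfield, IL 62704")
          # -> {'street': '123 Main St', 'city': 'Springfield', 'state': 'IL', 'zip': '62704'}
          ```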

          Data ingestion from email will never be as efficient and accurate as simply having a customer fill out a form directly.

          These things might make someone mildly more efficient at their job, but given the resources required for LLMs is it really worth it?

          • Hexarei@programming.dev · +2 · 3 days ago

            Well, the address one was an example. Smart paste is useful for more than just addresses - Think non-standard data formats where a customer provided janky data and it needs wrangling. Happens often enough and with unique enough data that an LLM is going to be better than a bespoke algo.

            The email one though? We absolutely have dedicated forms, but that doesn’t stop end users from sending emails to our customer anyway - The email ingestion via LLM is so our customer can just have their front desk folks forward the email in and have it make a best guess to save some time. When the customer is a huge shop that handles thousands of incoming jobs per day, the small value adds here and there add up to quite the savings for them (and thus, value we offer).

            Given we run the LLMs on low power machines in-house … Yeah they’re worth it.

            • fine_sandy_bottom@discuss.tchncs.de · +2 · 2 days ago

              Yeah, still not convinced.

              I work in a field which is not dissimilar. Teaching customers to email you their requirements so your LLM can have a go at filling out the form just seems ludicrous to me.

              Additionally, the models you’re using require stupid amounts of power to produce so that you can run them on low power machines.

              Anyhow, neither of us is going to change our minds without actual data which neither of us have. Who knows, a decade from now I might be forwarding client emails to an LLM so it can fill out a form for me, at which time I’ll know I was wrong.

      • AnarchistArtificer@slrpnk.net · +2 · 3 days ago

        That’s really neat, thanks for sharing that example.

        In my field (biochemistry), there are also quite a few truly awesome use cases for LLMs and other machine learning stuff, but I have been dismayed by how the hype train on AI stuff has been working. Mainly, I just worry that the overhyped nonsense will drown out the legitimately useful stuff, and that the useful stuff may struggle to get coverage/funding once the hype has burnt everyone out.

        • fine_sandy_bottom@discuss.tchncs.de · +4 · 3 days ago

          I suspect that this is “grumpy old man” type thinking, but my concern is the loss of fundamental skills.

          As an example, like many other people I’ve spent the last few decades developing written communication skills, emailing clients regarding complex topics. Communication requires not only an understanding of the subject, but an understanding of the recipient’s circumstances, and the likelihood of the thoughts and actions that may arise as a result.

          Over the last year or so I’ve noticed my assistants using LLMs to draft emails, with deleterious results. In many cases this reduces my thinking, feeling, experienced, and trained assistant to an automaton regurgitating words from publicly available references. The usual response to this concern is that my assistants are using the tool incorrectly, which is certainly the case, but my argument is that using the tool precludes the expenditure of the requisite time and effort to really learn.

          Perhaps this is a kind of circular argument, like why do kids need to learn handwriting when nothing needs to be handwritten.

          It does seem as though we’re on a trajectory towards stupider professional services though, where my bot emails your bot who replies and after n iterations maybe they’ve figured it out.

          • AnarchistArtificer@slrpnk.net · +2 · 3 days ago

            Oh yeah, I’m pretty worried about that from what I’ve seen in biochemistry undergraduate students. I was already concerned about how little structured writing support science students receive, and now I’m seeing a lot of over-reliance on ChatGPT.

            With emails and the like, I find that I struggle with the pressure of a blank page/screen, so rewriting a mediocre draft is immensely helpful, but that strategy is only viable if you’re prepared to go in and do some heavy editing. If it were a case of people honing their editing skills, that might not be so bad, but I have been seeing lots of output with the unmistakable ChatGPT tone.

            In short, I think it is definitely “grumpy old man” thinking, but that doesn’t mean it’s not valid (I say this as someone who is probably too young to be a grumpy old crone yet)

    • stringere@sh.itjust.works · +6/-1 · 3 days ago

      I think of AI like I do apps: every company thinks they need an app now instead of just a website. They don’t, but they’ll sure as hell pay someone to develop an app that serves as a walled garden front end for their website. Most companies don’t need AI for anything, and as you said: they are shoehorning it in anywhere they can without regard to whether it is effective or not.

  • Monstrosity@lemm.ee · +64/-9 · 3 days ago

    I kind of like AI, sorry.

    But it should all be freely available & completely open sourced since they were all built with our collective knowledge. The crass commercialization/hoarding is what’s gross.

    • JovialMicrobial@lemm.ee · +50 · 3 days ago

      I like what we could be doing with AI.

      For example, there’s one AI I read about a while back that was given data sets on all the known human diseases and the medications used to treat them.

      Then it was given data sets of all the known chemical compounds (or something like that, I can’t remember the exact wording).

      Then it was used to find new potential treatments for diseases, like new antibiotics. Basically it gives medical researchers leads to follow.

      That’s fucking cool and beneficial to everyone. It’s a wonderful application of the tech. Do more of that please.

      • just_an_average_joe@lemmy.dbzer0.com · +44/-2 · 3 days ago

        What you’re talking about is machine learning, which is called AI. What the post is talking about is LLMs, which are also called AI.

        By definition, AI means anything that exhibits intelligent behavior and is not natural in origin.

        So when you use GMaps to find the shortest path between two points, that’s also AI (specifically graph search, with algorithms like Dijkstra’s or A*).
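
        Route finding is classic graph search, no neural net in sight; a toy Dijkstra sketch (made-up graph format, obviously nothing like Google’s actual implementation):

        ```python
        import heapq

        def dijkstra(graph, start, goal):
            """graph: {node: [(neighbor, cost), ...]} -> (total_cost, path)."""
            queue = [(0, start, [start])]  # (cost so far, node, path taken)
            seen = set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == goal:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for neighbor, step in graph.get(node, []):
                    if neighbor not in seen:
                        heapq.heappush(queue, (cost + step, neighbor, path + [neighbor]))
            return float("inf"), []

        # dijkstra({"A": [("B", 5), ("C", 2)], "C": [("B", 1)]}, "A", "B") -> (3, ["A", "C", "B"])
        ```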

        It is pointless to argue or discuss AI if nobody even knows which type they’re specifically talking about.

        • Allero@lemmy.today · +14 · 3 days ago

          The issue is, people tend to overgeneralize, and they get put off when some buzzword is repeated too much.

          So this negatively affects the entire field of AI, whatever the flavor.

        • JovialMicrobial@lemm.ee · +8/-19 · 3 days ago

          I’m talking about AI in the context of this conversation.

          I’m sorry it upsets you that capitalism has yet again redefined another word to sell us something else, but nobody here is specifically responsible for the language we’ve been given to talk about LLMs.

          Perhaps writing to Merriam-Webster about the issue could reap the results you’re looking for.

          • Jesus_666@lemmy.world · +18 · 3 days ago

            LLMs are an instance of AI. There are many. Typically, the newest promising one is what the media will refer to as “AI”, because the media don’t do subtlety.

            There was a time when expert systems were the one thing the media considered to be AI (and they were overhyped to the point of articles wondering if they’d make jobs like general practitioner obsolete). Now it’s generative neural nets. In twenty years it’ll be something else.

    • blind3rdeye@lemm.ee · +3 · 2 days ago

      Yeah. I’ve been interested in AI for most of my life. I’ve followed AI developments, and tinkered with a lot of AI stuff myself. I was pretty excited when ChatGPT first launched… but that excitement turned very sour after about a month.

    I hate what the world has become. Money corrupts everything. We get the cheapest, most exploitative version of every possible idea, and when it comes to AI, that’s a pretty big net negative on the world.

    • mm_maybe@sh.itjust.works · +1/-5 · edited · 3 days ago

      I mean, you’re technically correct from a copyright standpoint, since it would be easier to claim fair use for non-commercial research purposes. And bots built for one’s own amusement with open-source tools are way less concerning to me than black-box commercial chatbots that purport to contain “facts” when they are known to contain errors and biases, not to mention vast amounts of stolen copyrighted creative work. But even non-commercial generative AI has to reckon with its failure to recognize “data dignity”, that is, the right of individuals to control how data generated by their online activities is shared and used… virtually nobody except maybe Jaron Lanier and the folks behind Brave is even thinking about this issue, but it’s at the core of why people really hate AI.

      • ClamDrinker@lemmy.world · +7 · edited · 3 days ago

        You had me in the first half, but you lost me in the second half with the claim of stolen material. There is no such material inside the AI, just the ideas that can be extracted from it. People hate having their ideas taken by others, but this happens all the time, even to the people who claim that’s why they don’t like AI. It’s somewhat of a rite of passage for your work to become so liked by others that they take your ideas, and every artist or creative person at that point has to swallow the tough pill that their ideas are not their property, even when their way of expressing them is. The alternative would be dystopian, since the same companies we all hate, which abuse current genAI as well, would hold the rights to every idea possible.

        If you publicize your work, having your ideas ripped from it is an inevitability. People learn from the works they see, and some of them try to understand why certain works are so interesting, extracting the ideas that make them so; that is what AI does as well. If you hate AI for this, you must also hate pretty much all creative people for doing the exact same thing. There’s even a famous quote for that from before AI was a thing: “Good artists copy, great artists steal.”

        I’d argue that the abuse of AI to replace (or consider replacing) artists and other working creatives, to spread misinformation, to simplify scams, to waste resources where AI doesn’t belong, and any other unethical use is far worse than AI tapping into the same freedom we all already enjoy. People actually using AI for good will not be pumping out cheap AI slop; they weave it into their process to the point that it isn’t even clear AI was used at the end. The two are not the same and should not be confused.

        • petrol_sniff_king@lemmy.blahaj.zone · +1/-5 · 3 days ago

          a rite of passage for your work to become so liked by others that they take your ideas,

          ChatGPT is not a person.

          People learn from the works they see […] and that is what AI does as well.

          ChatGPT is not a person.

          It’s actually really easy: we can say that chatgpt, which is not a person, is also not an artist, and thus cannot make art.

          The mathematical trick of putting real images into a blender and then outputting a Legally Distinct™ one does not absolve the system of its source material.

          but are instead weaving it into their process to the point it is not even clear AI was used at the end.

          The only examples of AI in media that I like are ones that make it painfully obvious what they’re doing.

  • Diplomjodler@lemmy.world · +60/-3 · 4 days ago

    The problem isn’t AI. The problem is greedy, clueless management pushing half-baked products of dubious value on consumers.