Fully retired now and one of the things I’d like to do is get back into hobby programming through the exploration of new and new-to-me programming languages. Who knows, I might even write something useful someday!

  • 1 Post
  • 156 Comments
Joined 1 year ago
Cake day: July 4th, 2023

  • They are just the biggest asshole in the room.

    So one day the different body parts were arguing over who should be in charge.

    The eyes said they should be in charge, because they were the primary source of information about the world.

    The stomach said it should be in charge because digestion was the source of energy.

    The brain said it should be in charge because it was in charge of information processing and decision-making.

    The rectum said nothing, just closed up shop.

    Before long, the vision was blurry, the stomach was queasy, and the brain was foggy.

    Assholes have been in charge ever since.


  • Then I must be among the manliest of men. :)

    I learned all the different ways to use the keyboard in Windows and never looked back. The best of both worlds, although relearning everything now that I’ve switched to Linux is proving a challenge. I’m starting to think that the Linux GUIs don’t have true keyboard accessibility.


  • Why not? For the last decade before semi-retirement, I had all the different ways to get in touch with me restricted to my phone. My work computer had no email client, no messengers, nothing. I even helped lead the charge to eliminate desk phones.

    That little device may have been the single greatest productivity booster ever. It stayed on a shelf across the room, on do not disturb. The only people allowed past the DnD were my wife and my son. If there really was a work emergency, a manager or coworker knew where to find me to tap me on the shoulder.



  • That is actually my point. I may not have made it clear in this thread, but my claim is not that our brains behave like LLMs, but that they are LLMs.

    That is, our LLM research is not just emulating our mental processes, but showing us how they actually work.

    Most people think there is something magic in our thinking, that mind is separate from brain, that thinking is, in effect, supernatural. I’m making the claim that LLMs are actual demonstrations that thinking is nothing more than the statistical rearrangement of that which has been ingested through our senses, our interactions with the world, and our experience of what has and has not worked.

    Searle proposed a thought experiment called the “Chinese Room” in an attempt to discredit the idea that a machine could either think or understand. My contention is that our brains, being machines, are in fact just suitably sophisticated “Chinese Rooms”.


  • Thanks! I’ve been working on this idea for quite a while. I post summaries and random thoughts occasionally hoping to refine my thinking to the point at which I’ll feel comfortable writing a proper essay.

    I like the name you’ve given the overarching system. That’s been a bit of a struggle for me, so you’ve given me a better concept to work with. “Large Sensory Input Model” captures my thoughts better than my own “the brain is just a kind of LLM.” That its abbreviation “LSIM” also conjures connections to “simulation” is a bonus for me, because that also addresses my thoughts on how we understand some things and other people.

    There is a fairly old hypothesis that something called “Theory of Mind” is basically our brain modelling and simulating other brains as a way to understand and predict the behaviour of others. That has explanatory power: empathy, stereotypes, in/out groups, better accuracy with closer relationships, “living on” through powerful simulations of those closest to us who have died, etc.

    Thanks for the feedback!


  • Soon kids will start talking like LLMs.

    Always have, always will.

    My pet hypothesis is that our brains are, in effect, LLMs that are trained via input from our senses and by the output of the other LLMs (brains) in our environment.

    It explains why we so often get stuck in unproductive loops like flat Earth theories.

    It explains why new theories are treated as “hallucinations” regardless of their veracity (cf. Copernicus, Galileo, Bruno). It explains why certain “prompts” cause mass “hallucination” (Wakefield and anti-vaxxers). It explains why the vast majority of people spend the vast majority of their time just coasting on “local inputs” and “common sense” (personal models of the world that, in their simplicity, often have substantial overlap with others).

    It explains why we spend so much time on “prompt engineering” (propaganda, sound bites, just-so stories, PR “spin”, etc) and so little on “model development” (education and training). (And why so much “education” is more like prompt engineering than model development.)

    Finally, it explains why “scientific” methods of thinking are so rare, even among those who are actually good at it. To think scientifically requires not just the right training, but an actual change in the underlying model. One of the most egregious examples is Linus Pauling, winner of the Nobel Prize in chemistry and vitamin C wackadoodle.