LLMs spew hallucinations by their nature. But what if you could get the LLM to correct itself? Or if you just claimed your LLM could correct itself? Last week, Matt Shumer of AI startup HyperWrite/…
Another valiant attempt to get "promptfonder" into more common currency.
It's funny because they're trying to get rid of the only thing that's interesting about gen AI.
Edit: Well, in this case, I'm not sure this counts as "trying".