I’m surprised these models don’t have something like a “ground truth layer” by now.
Given that ChatGPT, for example, is completely unspecialized, I would have expected there to be a way to hand-encode axiomatic knowledge: specialized domain knowledge, or even just basic math. Even tiered data (i.e. more/less trusted sources) doesn’t seem to be part of the design.
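To illustrate what I mean by tiered data, here’s a toy sketch (not how any real system does it, and the names are made up): weight each training example’s loss by how trusted its source is, so curated reference material counts more than scraped text.

```python
# Toy illustration of "tiered" training data: scale each example's loss
# by a trust weight for its source. Hypothetical tiers, not a real API.
import torch
import torch.nn.functional as F

TIER_WEIGHTS = {1: 1.0, 2: 0.5, 3: 0.1}  # 1 = curated/axiomatic, 3 = scraped web

def tiered_loss(logits, targets, source_tiers):
    """Per-example cross-entropy scaled by the trust tier of each example's source."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.tensor([TIER_WEIGHTS[t] for t in source_tiers],
                           dtype=per_example.dtype)
    return (weights * per_example).mean()

# Example: a batch of 3 examples over a 10-way output
logits = torch.randn(3, 10)
targets = torch.tensor([2, 7, 4])
loss = tiered_loss(logits, targets, source_tiers=[1, 2, 3])
```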
I think this is something that’s easier said than done. Maybe at our current level, but as these AIs get more advanced… what is truth? Sure, mathematics seems like an easy target, until we consider that one of the best use cases for AI could be theory. An AI could have a fresh take on our interpretation of mathematics, where these base-level assumptions would actually be a hindrance.