Discussion about this post

Hollis Robbins

Yes, this is exactly the conversation and set of refinements I hoped would blossom after "last mile." Human knowledge is rich, varied, and outside what LLMs will reach (though I expect they'll get closer and closer).

Julian

I’ve seen a lot of versions of this "tacit knowledge" argument against AI, and I think it’s a bad argument. It would hold against "logicist" or "symbolic" AI, which is built from axioms and inferences drawn from those axioms. But I don’t see how it holds against the "connectionist" AI behind the current AI boom. The whole point of machine learning is that it picks up patterns from a data set without having to be explicitly taught, which makes it far more flexible and better at picking up context-dependent cues. It isn’t trained on explicit rules like traditional (GOFAI) systems; rather, it extracts patterns from large amounts of data. Think of how LLMs learned to use language: not by being taught the rules of language, but by being exposed to a vast corpus of language use and picking up statistical regularities. This isn’t exactly like human tacit knowledge, but it’s similar enough to make the argument unconvincing. Granted, we are still talking about disembodied AIs; they aren’t learning in an embodied way like human beings. Still, many of the arguments that were valid against symbolic AI don’t hold against connectionist AI. One argument I think is still valid is that human intelligence is intentional and conscious, whereas AIs follow a mechanical, mathematical process and are not aware of what they are doing. This is why they keep hallucinating.
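
To make the symbolic-versus-connectionist contrast Julian describes concrete, here is a minimal sketch (the corpus, rule table, and function names are invented for illustration): a hand-coded rule next to a toy model that learns only bigram statistics from text, with no rules supplied.

```python
# Toy illustration (not from the post): a hand-coded "symbolic" rule
# versus a model that learns purely statistical regularities from data.
from collections import defaultdict, Counter

# Tiny invented corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Symbolic flavour: the allowed continuations are written down explicitly.
RULES = {"the": ["cat", "dog"], "sat": ["on"], "on": ["the"]}

def symbolic_next(word):
    return RULES.get(word, [])

# Statistical flavour (a stand-in for the connectionist idea): nothing is
# written down by hand; the regularities are estimated from the data itself.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def statistical_next(word):
    total = sum(counts[word].values())
    if total == 0:
        return {}
    return {w: c / total for w, c in counts[word].items()}

print(symbolic_next("the"))     # only what the rule author anticipated
print(statistical_next("the"))  # e.g. {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```

The bigram counter is of course far simpler than a neural network, but it captures the point at issue: the behaviour comes from exposure to data, not from rules anyone wrote down.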
