By John Mount
Artificial intelligence, like machine learning before it, is making big money off what I call the “sell ∀ ∃ as ∃ ∀ scam.”
The scam works as follows.
- Build a system that solves problems, but with an important user-facing control. For AI systems like GPT-X this is “prompt engineering.” For machine learning it is commonly hyper-parameters.
- Convince the user that it is their job to find an instantiation or setting of this control that makes the system work for their tasks. Soften this by implying there is a setting of the control that works for all of their problems, so that finding that setting is worth the trouble. This is the “∃ ∀” claim: that there exists (∃) a setting or configuration that makes the system work for all (∀) of your examples.
- In practice, just make the setting or control complicated enough to permit memorization or over-fitting. That is: exploit the fact that for sufficiently rich systems it is relatively easy to provide a “∀ ∃” system. That is a system where for every (∀) task, there exists (∃) a setting that gives the correct answer for that one task. It is just that there is no one setting useful for all tasks. This can devolve into