Then, Bayes came along. He is not recent; it is just that computation finally caught up. I used him for a paper five decades ago and heard tsk-tsk. But then, they laughed at Leontief, too. Other tricks evolved as well. Systems became (much) more powerful. Cheap energy was everywhere. And, seemingly, notions of constraint were left on the side of the road.
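As a side sketch (my own illustration, not from that old paper), here is Bayes' rule done the brute-force numeric way that cheap computation made routine. The grid, the coin, and the counts are all made up for the example.

```python
# A minimal sketch: Bayes' rule as a grid update. The math is old; what
# "caught up" is the cheap arithmetic that lets us grind out posteriors
# numerically instead of by hand.
import numpy as np

# Hypotheses: possible biases of a coin, on a crude grid.
theta = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(theta) / theta.size   # flat prior

# Data: say 7 heads in 10 flips (made-up numbers for illustration).
heads, flips = 7, 10
likelihood = theta**heads * (1 - theta)**(flips - heads)

posterior = prior * likelihood
posterior /= posterior.sum()               # normalize

print("posterior mean bias:", (theta * posterior).sum())  # roughly 0.67
```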
Available data ballooned, though there are many issues left to discuss. That allowed a whole industry (data science) to grow. AI/ML/DL took advantage of the situation. Games were won that had not been within grasp before.
- DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker (various authors, Canada and the Czech Republic)
- Google's AI surfs the "gamescape" to conquer game theory (ZDNet)
The stuff does not resonate. But I'm talking about talents not yet recognized, so that's not even on the table.
To me, numeric abilities allowed a peanut-buttering over of problems without this being obvious. They force convergence. To hear the hype, it looked as if we were all now in a rosy time (colored glasses, for sure). But, again, the old guy will point to twelve years ago, when the risk experts were saying it was a piece of cake and there would never be another downturn (on the eve of the very thing).
What gives? We need to discuss this. For one thing, Aviad shows hardness. There are regular discussions at Stanford and elsewhere.
- Hardness of Approximation Between P and NP (Aviad Rubinstein, Stanford)
Used for a Quora question: with Nash equilibria being shown to be intractable, ought we consider that there are numeric limits to AI that are being ignored?
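For a sense of scale, a hedged sketch (my illustration, not Rubinstein's construction): a tiny zero-sum game whose minimax mixed strategy falls out of a small linear program via SciPy. The hardness results concern approximating equilibria in general games with many players and actions, where nothing this tidy is available.

```python
# A minimal sketch: the row player's minimax mixed strategy for a tiny
# zero-sum game, via linear programming. Easy at this size; intractable
# to approximate in general, which is the point of the hardness work.
import numpy as np
from scipy.optimize import linprog

# Matching pennies payoff matrix for the row player (+1 win, -1 loss).
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
m, n = A.shape

# Variables: x_1..x_m (row mixed strategy) and v (guaranteed value).
# Maximize v  <=>  minimize -v.
c = np.zeros(m + 1)
c[-1] = -1.0

# For every column j:  v - sum_i x_i * A[i, j] <= 0.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# Probabilities sum to one.
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])

bounds = [(0, 1)] * m + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

print("row strategy:", res.x[:m])   # about [0.5, 0.5]
print("game value:", res.x[-1])     # about 0.0
```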
Remarks: Modified: 08/08/2019
08/08/2019 --