My news feed offered up an article that seems to give more perspective on the practical effectiveness of contemporary AI facilities than anything else I've seen recently: "AI agents wrong ~70% of time: Carnegie Mellon study" (The Register).

In retrospect, what struck me most about it is how thoroughly I had lacked such a handle before, without recognizing that until I saw this article. Here's what I make of that.

In the midst of the huge changes that accompany the emergence of AI facilities that are practically effective to any significant degree, it's hard to get a handle on how effective they actually are and aren't. That may be especially so for those of us not fully occupied with tracking the developments, but given the circulating hype and the drastic pace of change in the industry, I suspect those deeply involved in the actual efforts may also have a difficult time getting a sense of proportion and absolute status, for similar and some different reasons ...
Some fine guidance:

If you give a hungry man a fish, you feed him for a day. If you teach him how to fish, you feed him for a lifetime. -- ?

Some more:

You can tell whether a man is clever by his answers. You can tell whether a man is wise by his questions. -- Naguib Mahfouz

Problems that remain persistently insoluble should always be suspected as questions asked in the wrong way. -- Alan Watts

Yesterday I connected the importance of useful questions with the lesson of the fishing parable, like so:

Fish <=> Useful answer
Fishing <=> Useful question
Learning to fish <=> Learning to ask useful questions (<=> Learning to learn?)