Now AI can read minds, according to a recent study.  With all the hype surrounding the latest AI and ChatGPT developments, it seems like we shouldn’t let this new ability pass without comment.  Of course, spies will be thrilled to learn that they can read the minds of their counterparts with only an fMRI scanner and 15 or 16 hours of training.  But the rest of us need to pay attention as well.  From “the dog ate my homework,” to “I love your new outfit,” to “that used Audi is a real peach of a car,” and “that’s pure 24-carat gold,” separating fact from fiction in our interactions surely will be the next frontier, and then we’ll all have to stop with the white lies and tell only the unvarnished truth.  And we’ll certainly want to have our pocket mind-readers handy in the future for those conversations with used car salesmen and our negotiating counterparts, to check on the truthiness of the daily interactions in our lives, both large and small.

Now, before you get too excited – or alarmed – you can still thwart the AI system by thinking of teddy bears or something else rather than the conversational matter at hand.  That confuses the system enough, at this point, that it returns nonsense.  So, the spies are safe for now.  But at the rate AI is developing, probably not for long.  Pretty soon AI will be able to peer into our minds with clarity and efficiency.  And then Zen-like detachment and simplicity will be the only way forward.  I pity the poor politicians of this polarized era forced to actually tell the truth.  What a nightmare for them!  Every time they open their mouths, another few votes will be lost.

Politicians aside, the real issue is the indeterminacy of language. Beyond the white lies and polarized tribal warfare, mostly we use language to tell the truth as best we can about what we see, what we want, and what we want to change.  But we fumble for the right words, and our imprecise rendering of our thoughts leaves us stymied and our audiences confused.  Developing powerful communications requires the willingness to get it wrong the first few times, to keep trying, and to edit our remarks until they finally achieve the high gloss of clarity and wisdom.

It won’t help much to have an AI program reading our minds because until we’ve worked on our thinking for long enough, what’s in there is mostly a first draft – messy, imprecise, and unclear even to ourselves.  So beyond spies and politicians, the practical uses of this research are limited for now to tragic cases such as stroke victims and paralysis patients – for whom this technique could indeed produce inestimable benefits by giving a voice to someone deprived of it.

I’m reminded of Bill Gates’ insight that a technological development has less effect in the short run than we imagine, and more in the long run.  I suspect that the longer-term effects of computers reading our minds are much weirder than we can possibly envision right now.  Stay tuned for a very odd decade indeed.