It's crystal-clear that a model trained specifically to fool expert interrogators in a Turing test would, in fact, be able to do so. You'd have to sandbag it just to keep it from tipping its hand by being too good.
We don't have any such models right now, AFAIK, so we can't run that test. They wouldn't be much good for anything else, and they'd likely raise ethical concerns given the potential for misuse. But I have no doubt that it's possible to train for the Turing test.