LLMs can mimic examples of reasoning from their training data, so they can re-use lines of reasoning they have already seen. If the network generalizes well enough across that data, it can come close to reproducing general reasoning. But it can't fully get there yet, of course.
Yes, that's basically what I'm saying, just less bluntly. It's slightly more nuanced than "LLMs cannot reason", because lines of reasoning are often present in the training data and the model can sometimes apply them. It's just that the model can't be relied on to pick the correct line of reasoning for a given situation.