I'm not familiar with Penrose, but he would be wrong if he meant that the incompleteness theorem can be defeated by biological brains. The only way to know whether a statement is true is through the application of formal logic; if it is not proven, it is merely a belief. And if he meant that human brains are capable of beliefs, well, electronic ones are capable of that too.
Penrose's argument is not about humans being able to overcome incompleteness or consistency, or about Turing machines being unable to handle arguments in ZF set theory, but rather that "proper AI" can't exist because human minds don't work algorithmically in the way computers do. You might say Turing machines lack "mathematical intuition", or that humans can move between logical systems rather than being confined to any single one, or simply that there are a priori differences that can't be overcome. So you can still have an AI that is better than humans at basically everything, including mathematics, strategy, medical diagnosis, writing papers, summarising, writing poems, writing novels, composing music etc., without it being "proper AI".

Gödel's result, that a Gödel sentence exists which is true but unprovable, enters like this: if there were a formal system covering our mathematical thought, it would (assuming that thought is sound) be consistent, and thus its Gödel sentence would be true. Penrose concludes that humans would be able to see that this sentence is true; but that act of seeing itself involves our mathematical thought, and so leads to a contradiction, since the system cannot prove its own Gödel sentence. The conclusion is that a system governing our mathematical thought cannot exist. There are problems with this argument, relating to completeness, that I have forgotten and never properly understood in the first place. But since Penrose is much smarter than me and has thought about these things with great care, he will have considered them as well. The argument is anyway not really the point, because Gödel's theorems only apply to specific kinds of systems, and I don't think anybody takes the argument very seriously on its own. It's more there to illustrate something.
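For concreteness, here is the skeleton of that argument as I would reconstruct it (my own summary, not Penrose's wording), using the standard notation F for the formal system, G_F for its Gödel sentence and Con(F) for its consistency statement:

```latex
% Skeleton of the Lucas–Penrose argument (my reconstruction, standard
% notation; needs amssymb for \nvdash).
\begin{enumerate}
  \item Suppose a consistent, recursively axiomatizable system $F$,
        containing enough arithmetic, captures human mathematical thought.
  \item By G\"odel's first theorem there is a sentence $G_F$ with
        $F \nvdash G_F$, yet $G_F$ is true if $F$ is consistent.
  \item The human mathematician, accepting $\mathrm{Con}(F)$, concludes
        that $G_F$ is true.
  \item By step 1, this conclusion is itself reasoning within $F$,
        so $F \vdash G_F$, contradicting step 2.
  \item Hence no such $F$ exists.
\end{enumerate}
% The standard objection targets step 3: by G\"odel's second theorem,
% $F \nvdash \mathrm{Con}(F)$, so it is unclear how a mind that is
% (unknowingly) equivalent to $F$ could ever establish $\mathrm{Con}(F)$.
```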
That isn't really related to LLMs, which are of course capable of writing down the first incompleteness theorem and proving it. You don't need LLMs, neural networks, reinforcement learning or anything else for that; enough monkeys or a random symbol generator will do the job just fine. The general idea, however, seems to be that the mind is such a complex thing that it is reasonable to assume it is not computable. That's also my understanding of the Chomskyan position: they don't, as far as I can tell, accept the Penrose argument, but they still agree that the human mind works differently from a computer in a meaningful way. And that is completely independent of whether you give your neural network a super complex topological structure, throw some probabilistic methods on it decided by the spin of a particle, or whatever else can be done. The argument is that there is a fundamental difference even before you start doing any of those things. Adding complexity doesn't overcome that.
I know there are some complicated theorems stating that for any given neural network there are certain recursions it cannot reach, and my understanding is that, whilst in practical terms the limitations on humans are much larger, in theoretical terms there don't seem to be analogous theorems for human minds. You might say that humans are smart but lack computational power, whilst computers are the opposite. Now you can argue that a neural network is not the end of it all, as it is bounded in the complexity it can achieve. In practical terms, though, the limit of complexity the human mind can handle is, I would think, far smaller: some of Penrose's most brilliant colleagues, such as Peter Scholze, see a real need to verify their own work with tools such as Lean, because the thought processes involved are already too complex for them to know whether their own proofs of their own ideas are correct. These things can be handled with ease by computers, even if computers have theoretical limits that humans needn't have. There is no reason why the human mind has to be computable, and there is no reason to think that LLMs, NNs or anything else work in the same way as minds just because they sometimes yield the same results.
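To make concrete what verifying work via Lean looks like, here is a toy example (my own illustration using only Lean 4's core library, nothing to do with Scholze's actual project): the file either typechecks or it doesn't, with no room for the ambiguity a long informal proof carries.

```lean
-- Toy illustration of a machine-checked proof in Lean 4 (core library only).
-- The kernel either accepts this proof term or rejects the file; there is
-- no "looks right to me" step, which is exactly what makes it useful once
-- an argument grows past what one head can hold.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp                                       -- m + 0 = 0 + m
  | succ k ih => rw [Nat.add_succ, ih, Nat.succ_add]   -- push succ out, apply IH
```

Scholze's verification effort (the Liquid Tensor Experiment) relies on the same mechanism, just at a vastly larger scale.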