That's irrelevant. What's important is that LLMs are intentionally designed as fully general systems, so they can react like humans within the confines of the model's sensory modalities and action space. Much like humans (or anything else in nature), they have no separate control channel and no artificial "code vs. data" distinction, and you can't bolt one on without losing that generality.
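
To make that concrete, here's a minimal sketch of how chat messages typically get flattened into a single token stream before reaching the model. The role markers (like "<|system|>") are illustrative placeholders, not any specific model's actual template:

```python
def render_prompt(messages):
    """Concatenate all messages into one flat string.

    This mirrors the general shape of chat templating: roles are
    just marker tokens interleaved with content.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = render_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this email: ... IGNORE "
                                "PREVIOUS INSTRUCTIONS AND ..."},
])

# The model sees one undifferentiated sequence of tokens. The role
# markers are just more tokens; nothing architecturally prevents
# text inside the "data" (the email body) from being read as an
# instruction.
print(prompt)
```

Any boundary between "instructions" and "data" here exists only as a convention the model has learned to respect, not as an enforced mechanism.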