That's funny to hear, because DALL-E 3 mainly improves prompt understanding; it still hallucinates like mad with faces and hands, and doesn't seem to do anything to improve them the way Midjourney does, for example.
>Whereas with DALL E you can get some hyper-realistic images from it with very little effort using plain human language.
Hyper-realistic, but is it what you want from it? Are you able to guide it into doing exactly what you want? If your requirements are such that a natural-language prompt alone is enough, and is somehow faster than sketching and providing references, then of course use it. I'm not so lucky: I don't get what I want from it, and no amount of prompt understanding will make that easier. SD/SDXL doesn't pass the quality bar either, not because it isn't "detailed" or "hyper-realistic" enough, but because it doesn't pay attention to the things that should be prioritized, like linework or lighting. Neither does any other model. ControlNets and LoRAs alone aren't sufficient for controllability either, mostly because the base model is too small to understand high-level concepts. So I don't use anything.