Mixtral 8x7B does indeed behave this way: it tends to ignore structured-output requirements and often pads its replies with chatty, unnecessary text. In my experience, models like Qwen 72B are noticeably more controllable in this respect, at least on par with GPT-3.5.
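As a workaround for that failure mode, here is a minimal, hypothetical sketch (the function name `parse_json_response` and the example reply are my own, not from any library) of how one might still recover a JSON payload from a chatty response that wraps it in preamble or markdown fences:

```python
import json
import re

def parse_json_response(text: str):
    """Try to recover a JSON object from a chatty model response."""
    # Strip ```json ... ``` fences if present.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    candidate = fenced.group(1) if fenced else text
    # Fall back to the outermost {...} span for responses with extra prose.
    brace = re.search(r"\{.*\}", candidate, re.DOTALL)
    if brace:
        candidate = brace.group(0)
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None  # no parseable JSON in the response

# Example: a "casual" reply that still contains the payload.
reply = 'Sure! Here is the JSON:\n```json\n{"name": "test"}\n```\nHope that helps!'
print(parse_json_response(reply))  # {'name': 'test'}
```

Of course, this only papers over the problem; a model that reliably follows the output format in the first place is still preferable.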