On the contrary, XHTML was stricter. If you put an element where it’s not supposed to be, the whole document would (probably?) not render.
But the point is moot: XHTML is completely dead, lacks newer HTML elements like <details>, and no browser includes an XHTML parser anyway—they’ll use the HTML parser.
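The strictness was of a specific kind: one well-formedness error, such as an unclosed tag, and a conforming XML processor has to refuse the whole document. A minimal sketch (my own example):

    <?xml version="1.0" encoding="UTF-8"?>
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>Broken</title></head>
      <body>
        <p>This paragraph is never closed.
      </body>
    </html>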
It's rendered properly in Chrome. If some of the XML is wrong, it errors out. So at least Chrome supports XHTML with a strict XML validator. And the <details> tag works as expected, and it's even nested inside the <p> tag in the DOM.
That XHTML won't pass validator.w3.org (<details> can't be inside <p>), but Chrome does not care.
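For anyone who wants to reproduce it, here is my guess at the test setup: serve something like this with Content-Type: application/xhtml+xml. It's well-formed XML, so it renders, and the DOM keeps <details> inside <p>, even though that nesting is invalid:

    <?xml version="1.0" encoding="UTF-8"?>
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>details in p</title></head>
      <body>
        <p>Before.
          <details>
            <summary>Click me</summary>
            Hidden content.
          </details>
        After.</p>
      </body>
    </html>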
Huh, I thought the XML serialisation of HTML had been evicted from all the browsers, but you’re right, and Firefox behaves the same way. https://html.spec.whatwg.org/#html-vs-xhtml describes briefly what is expected to occur (though that section is non-normative).

Also given that I know that browsers can successfully load XML documents with XSLT (I’ve seen one or two web pages that actually do this), and that I know that the XML parser is accessible via DOMParser (this can be useful for some kinds of markup sanitisation), I’m not sure why I thought XHTML was completely dead in this way. Something’s niggling at the back of my mind, but I can’t place it.
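On the DOMParser point, a minimal sketch (standard API, nothing assumed): ill-formed XML doesn’t throw, it hands back a document containing a parsererror element you can test for:

    // Mismatched tags violate well-formedness, so the XML parser bails out
    // and returns a document wrapping a <parsererror> element instead.
    const doc = new DOMParser().parseFromString('<a><b></a>', 'application/xhtml+xml');
    console.log(doc.querySelector('parsererror') !== null); // true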
I believe browsers would be quite within their rights to do DTD validation and consequently reject such a document as this, but they don’t actually do so. (Even if the DTD is specified correctly for XHTML 1.0 Strict, nothing changes.)
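(For the curious, “specified correctly” means the standard doctype below. A validating processor consulting that DTD would reject <details>, which isn’t declared in it, but browsers never fetch or apply the DTD at all:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
)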
Thanks for correcting me. I have updated my mental knowledge repository.
> I’m not sure why I thought XHTML was completely dead in this way. Something’s niggling at the back of my mind, but I can’t place it.
Browsers were always non-validating XML processors. Maybe you're thinking of the well-formedness constraints? These constraints check whether an XML document follows the XML metasyntax, and any XML processor is required to fail [1] on errors against these constraints. And browsers do that.
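Side by side (examples are mine): the first snippet breaks a well-formedness constraint, so every XML processor must reject it; the second is well-formed but invalid against the XHTML DTDs, and a non-validating processor accepts it without complaint:

    <!-- Not well-formed: <b> and <i> overlap; every XML processor must fail -->
    <p><b>bold <i>both</b> italic</i></p>

    <!-- Well-formed but invalid per the DTD: fine for browsers, which never validate -->
    <p><details><summary>hi</summary></details></p>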
In practice, most well-formedness violations were nesting errors like missing start or end tags, common when generating XML via text concatenation or templates, which a lot of people did in the dark times, even when they shouldn't have. [2] That's why the HTML5 parsing algorithm is so complex it's almost unimplementable: its ambition is error correction. It tries to produce a DOM from any document, well-formed or not.
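The correction is deterministic even for the nastiest inputs, e.g. mis-nested formatting elements, which the spec handles with the "adoption agency algorithm". A quick sketch (standard DOMParser, nothing assumed):

    const doc = new DOMParser().parseFromString('<b>1<i>2</b>3</i>', 'text/html');
    console.log(doc.body.innerHTML);
    // "<b>1<i>2</i></b><i>3</i>": the overlap is rewritten into a valid tree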
Rather, it would be incorrect for a rendering engine not to close the paragraph at that point.
(HTML parsing is defined exhaustively. If two browsers parse a given input string to a different DOM tree, at least one of them is buggy.)
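This is easy to check from script, reusing the <p><details> example from upthread:

    const html = new DOMParser().parseFromString(
      '<p>a<details>b</details>c</p>', 'text/html');
    console.log(html.body.innerHTML);
    // "<p>a</p><details>b</details>c<p></p>": the HTML parser closes the <p>,
    // and the stray </p> at the end produces an empty paragraph.

    const xml = new DOMParser().parseFromString(
      '<p xmlns="http://www.w3.org/1999/xhtml">a<details>b</details>c</p>',
      'application/xhtml+xml');
    console.log(new XMLSerializer().serializeToString(xml));
    // The XML parser performs no tree surgery: <details> stays inside <p>.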