The first issue is relevant only on systems that have more than one NUMA node, which is probably every meaningful physical server and essentially no VM (at least on Xen, a multiprocessor VM sees a single NUMA node), since it does not make much sense to advertise NUMA topology to guest VMs.
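For reference, a quick way to check what the kernel actually sees is to count the node directories under sysfs. This is only a minimal sketch (assuming Linux; the sysfs path is standard and not something from this thread), and on a typical Xen or KVM guest it reports a single node regardless of the host's topology:

    # Count the NUMA nodes the running kernel exposes via Linux sysfs.
    # Inside a guest VM this is usually 1, even on a multi-socket host.
    import glob

    nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
    print(f"NUMA nodes visible to this kernel: {len(nodes)}")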
The second issue is relevant for PostgreSQL mostly if you use very large shared_buffers, which is not recommended for general workloads anyway. Writing a page that already exists on disk and was not read shortly before is not an especially common thing to do.
NUMA can absolutely bite you in virtual servers, but without access to the hypervisor you'll never know why it's happening (JVMs straddling NUMA regions have caused me pain in the past, when the guest was split across memory regions).
The point is that the kernel inside the VM guest knows nothing about NUMA, so it cannot do any kind of NUMA optimization; hence such optimizations cannot hurt performance, as they do not happen at all.