So now you have another bit in your deploy pipeline that needs to construct curl requests instead of declaring your services against a registrar. Can you share these registrations across servers? I'd hate to have to hit every one of my web servers just to register a new API process, and I'm at a small scale (~200 servers).
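For context, the deploy-pipeline step I'm picturing is something like this, once per load balancer (a rough sketch only; I'm assuming NGINX Plus's dynamic upstream API here, and the "api_backend" upstream name, the version number in the path, and the lb.internal host are all made up):

    # Hypothetical: tell one NGINX Plus box about a freshly deployed API process
    # by POSTing it into the "api_backend" upstream via the on-the-fly config API.
    # The /api/<version>/ prefix differs between NGINX Plus releases.
    curl -X POST \
      -H 'Content-Type: application/json' \
      -d '{"server": "10.1.2.3:8080"}' \
      http://lb.internal:8080/api/6/http/upstreams/api_backend/servers

Multiply that by every box that needs to know about the new process and it gets tedious fast.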
Have you done this? Does it work? Does nginx actually close the file and re-open it every time it does a write, or does it mmap the file as would be sensible on local disk?
Even if nginx gets this right, now you're relying on the consistency of your shared disk implementation. Popular options include:
1. A single UNIX machine. Now you have a single point of failure that all your traffic depends on. If you're okay with that, you can just do that and skip NFS. If you're okay with home-brewing failover solutions for your former single point of failure and its backup, you can just do that and skip NFS.
2. A fancy cluster of filers that attempts to promise you distributed close-to-open consistency, and gets it right with very high but not 100% probability.
3. A fancy cluster of filers that relaxes some of the guarantees on NFS consistency, or only lets one person successfully open a file at once, or something.
4. Something opaque from Amazon, which could be any of the above options, or something else entirely, and you have no idea which. Also, a single NFS export from Amazon EFS only runs within a single availability zone. If you're okay with a single AZ as a SPOF, again, you can skip NFS.
(My employer runs NFS at very large scale in production; basically everything in the company touches NFS one way or another, and we have lots of infrastructure to ensure availability and geographic redundancy in NFS. Every time it fails, things get weird, because application software rarely expects files to have the same problems distributed systems have. It's no more magic than any other distributed system, and possibly quite a bit less magic.)
At my company, every server has its own internal load balancer (previously haproxy, now nginx). While it's definitely possible, I don't think sharing a single file across hundreds of boxes is a "best practice".
That said, this certainly would work for companies with a ton of money to burn (nginx+ licenses aren't cheap) or for those at smaller scales. If it works, ship it!
One way to handle "service discovery" is to assign a distinct port to every service, run an haproxy on every host, and always make cross-service calls through localhost:port. Health checks and config management keep the haproxies up to date, and applications don't have to know which hosts to ring.
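Concretely, each host ends up with something like this (a minimal sketch; the service names, ports, health-check path, and backend addresses are invented, and in practice config management generates the server lines and reloads haproxy when they change):

    # /etc/haproxy/haproxy.cfg fragment -- hypothetical per-host config.
    # "users" is always reachable at localhost:9001 on every box, "billing"
    # at localhost:9002; haproxy does the host selection and health checking.
    listen users_service
        bind 127.0.0.1:9001
        mode http
        balance roundrobin
        option httpchk GET /healthz
        server users-01 10.0.1.11:8080 check
        server users-02 10.0.1.12:8080 check

    listen billing_service
        bind 127.0.0.1:9002
        mode http
        balance roundrobin
        option httpchk GET /healthz
        server billing-01 10.0.2.21:8080 check

Applications just call localhost:9001 or localhost:9002 and never need to care which backend host actually serves the request.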
Having only a handful of load balancers forces all your internal traffic through a few bottlenecks and points of failure, and you still need custom logic in your applications to fail over to another load balancer when the primary is dead.