The question is the same: why would you use bigint instead of the native UUID type?
Why does the OP compare text and UUID instead of char(32) and UUID?
What advantage would there be for database abstraction libraries like SQLAlchemy and Django to implement the UUID type with bigint or bigserial instead of the native pg UUID type?
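For context, both already map to the native type on Postgres: Django's UUIDField is stored in the uuid datatype on PostgreSQL (and char(32) on databases without one), and SQLAlchemy's PostgreSQL dialect has a UUID type that emits the native column. A minimal sketch of the SQLAlchemy side, assuming the PostgreSQL dialect; the model name is made up:

    import uuid

    from sqlalchemy import Column, Integer
    from sqlalchemy.dialects.postgresql import UUID
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Gadget(Base):
        __tablename__ = "gadget"
        id = Column(Integer, primary_key=True)
        # Renders as a native "uuid" column in the generated DDL
        # and round-trips Python uuid.UUID objects.
        public_id = Column(UUID(as_uuid=True), default=uuid.uuid4,
                           unique=True, nullable=False)

So the question is what bigint would buy on top of that.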
Also, I think you're misunderstanding the article. They aren't talking about storing a UUID in a bigint. They're talking about having two different IDs: an incrementing bigint used internally within the DB for PKs and FKs, and a separate UUID used as an external identifier that's exposed by your API.
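A minimal sketch of that pattern, assuming SQLAlchemy with the PostgreSQL dialect (table and column names here are made up):

    import uuid

    from sqlalchemy import BigInteger, Column, ForeignKey
    from sqlalchemy.dialects.postgresql import UUID
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Account(Base):
        __tablename__ = "account"
        # Internal surrogate key (bigserial): compact, index-friendly,
        # and the only thing other tables reference.
        id = Column(BigInteger, primary_key=True, autoincrement=True)
        # External identifier: the only id the API ever exposes.
        public_id = Column(UUID(as_uuid=True), default=uuid.uuid4,
                           unique=True, nullable=False)

    class Order(Base):
        __tablename__ = "orders"
        id = Column(BigInteger, primary_key=True, autoincrement=True)
        # FKs point at the internal bigint, never at the UUID.
        account_id = Column(BigInteger, ForeignKey("account.id"),
                            nullable=False)
        public_id = Column(UUID(as_uuid=True), default=uuid.uuid4,
                           unique=True, nullable=False)

An API request resolves public_id to the internal id once, and every join and foreign key stays on the narrow bigint index.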
Many people store UUIDs as text in the database. Needless to say, this is bad. TFA starts by proposing that it's bad, then does some tests to show why.
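To make the contrast concrete, a small sketch of the two column definitions, again assuming SQLAlchemy on PostgreSQL with a made-up model:

    import uuid

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.dialects.postgresql import UUID
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Event(Base):
        __tablename__ = "event"
        pk = Column(Integer, primary_key=True)
        # The pattern TFA argues against: the hyphenated text form,
        # 36 characters per value, compared as a string.
        id_as_text = Column(String(36),
                            default=lambda: str(uuid.uuid4()))
        # Native uuid: 16 bytes on disk, binary comparison,
        # and the value is validated as a UUID on input.
        id_native = Column(UUID(as_uuid=True), default=uuid.uuid4)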
I'm not quite sure what all the links have to do with the topic at hand.
Which link, specifically, are you concerned about the topicality of?
Shouldn't we then link to the docs on how many bits wide DB datatypes are, whether a datatype is prefix- or suffix-searchable, whether there's data leakage from UUID namespacing with the primary NIC's MAC address or from UUIDv7, and whether there will be overflow with a datatype less wasteful than text for UUIDs, when there is already a UUID datatype for UUIDs that one could argue to improve if there is a potential performance benefit?