Agreed on optimizing core objects. I recently wrote a C base class (https://jcristharif.com/quickle/#structs-and-enums) for defining dataclass-like types that's noticeably faster (~5-10x) to init/copy/serialize/compare than other options (dataclasses, pydantic, namedtuples...). For some applications I write, this has a non-negligible performance impact, without requiring deep interpreter changes. Using the base class is nice - my application objects are still defined in normal Python code, but all the heavy lifting is done in the C extension.
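To give a feel for where that kind of speedup comes from (this is a pure-Python analogue of the idea, not quickle's actual C implementation): much of the per-instance cost in ordinary classes is the `__dict__` backing attribute storage. A minimal sketch using `__slots__`, which fixes the attribute layout up front:

```python
from dataclasses import dataclass

@dataclass
class PointDict:
    # Ordinary dataclass: attributes live in a per-instance __dict__
    x: int
    y: int

@dataclass
class PointSlots:
    # Fixed attribute slots: no per-instance dict, so instances are
    # smaller and attribute access/init is cheaper. A C base class can
    # push this further by storing fields in a C struct directly.
    __slots__ = ("x", "y")
    x: int
    y: int

p = PointSlots(1, 2)
print(p.x, p.y)                 # works like a normal dataclass
print(hasattr(p, "__dict__"))   # False - no dict to allocate or fill
```

Names like `PointSlots` here are just for illustration; the real gains in the linked library come from doing init/compare/serialize in C rather than from slots alone.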
However, this speedup comes at the cost of being less dynamic. I'm not sure how much more optimized core Python objects could be without sacrificing the dynamism some programs rely on. Python dicts are already pretty optimized as is.
>I recently wrote a C base class (https://jcristharif.com/quickle/#structs-and-enums) for defining dataclass-like-types that's noticeably faster (~5-10x) to init/copy/serialize/compare than other options (dataclasses, pydantic, namedtuples...)
YouTube ran into the same problem. Their solution boils down to "never use pickle, because it's slow - use custom serialization instead".