
At my previous company we made heavy use of its lossy compression feature.



What lossy compression? Were you guys throwing bits into /dev/null?


Is that related to the cool "hash compression" technology I hear about? Apparently it can compress an arbitrarily large file into just a few bytes, amazing!


You would get compression with Postgres running on ZFS.


PostgreSQL has compression on by default for large text and other large fields that benefit greatly from it. From the documentation: "The technique is affectionately known as TOAST (or "the best thing since sliced bread")." https://www.postgresql.org/docs/8.0/static/storage-toast.htm...
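As a rough sketch of what that looks like in practice (the database and table names here are made up): text columns default to EXTENDED storage, which lets TOAST compress values and move them out of line once a row gets large.

  # "mydb" and "notes" are example names, not anything from the thread
  psql -d mydb -c "CREATE TABLE notes (id serial PRIMARY KEY, body text);"
  # \d+ prints a Storage column per attribute; EXTENDED means compression
  # plus out-of-line storage is allowed, EXTERNAL is out-of-line only
  psql -d mydb -c '\d+ notes'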


Lossy compression, not lossless.


A database plus a copy-on-write file system sounds like a bad idea. I am imagining a modest 100 GB database being rewritten for every change.

I am sure there is some way to work around this, but wouldn't this be the default behavior with a typical database and typical COW file system?


You might be interested in reading this paper: https://people.freebsd.org/~seanc/postgresql/scale15x-2017-p...


I am interested, and I appreciate the thought and the link. However, these appear to be the slides for a talk without the talk itself; if so, they are of limited use, since slides never contain everything the presenter said and are often just a summary.

Is this the correct talk: https://youtu.be/dwMQXLOXUco?t=5380 ?


Yes, sorry about that. That is the talk corresponding to the slides. Thanks for pointing it out.


First, you have to make sure the page sizes for the FS and the DB match; that is a critical requirement. But yes, you will get write amplification twice.
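On ZFS that usually means setting the recordsize to Postgres's default 8 kB page size when creating the dataset. A minimal sketch, assuming a pool named "tank" (the dataset name is made up):

  # Postgres writes 8 kB pages by default, so match the ZFS recordsize;
  # lz4 provides the cheap compression discussed above
  zfs create -o recordsize=8k -o compression=lz4 -o atime=off tank/pgdata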


With a random cloud provider you may not get a filesystem with compression on your machine.



