The Bloat Busters: pg_repack vs. pg_squeeze
31 points by radimm 1 year ago | 12 comments

- Valodim 1 year ago
- radimm 1 year ago
Oh thanks. Will definitely try it as well.
- glenjamin 1 year ago
A key data point not mentioned in the article is "Can this be used on AWS RDS?", and I believe the only option that meets that criterion right now is `pg_repack`.
- radimm 1 year ago
Thanks for the good point. You are right, pg_squeeze is not available there. I will update the post tonight for clarity. I know it's available on GCP, though.
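For reference, a rough sketch of how pg_repack is typically run against RDS, after a one-time `CREATE EXTENSION pg_repack;` in the target database. The host, user, and table names below are placeholders, and the client binary's version has to match the extension version installed on the server:

    # RDS has no true superuser, hence --no-superuser-check
    pg_repack -h mydb.xxxx.rds.amazonaws.com -U app_user -d appdb \
        -t public.big_table --no-superuser-check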
- ksynwa 1 year ago
> You migrate one or more column data types over a longer period (and no, using ALTER COLUMN name TYPE new_type is not the best option).
What is the best option then? Renaming the existing column to something else, creating a new column with `name` and then dropping the renamed column?
- radimm 1 year ago
Yes, any time you have a nontrivial amount of data, you go for:

1. Create the new column.
2. Backfill the new column over time (manually, with triggers, etc.).
3. During a small downtime window, drop/rename, i.e. swap the columns (rough sketch below).
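An illustrative sketch of those three steps for a type change, assuming a made-up `orders.amount` column going from integer to numeric; the trigger is just one way to keep new writes in sync, and EXECUTE FUNCTION needs PostgreSQL 11+:

    -- 1. create the new column alongside the old one
    ALTER TABLE orders ADD COLUMN amount_new numeric;

    -- keep new writes in sync while the backfill runs
    CREATE FUNCTION orders_sync_amount() RETURNS trigger AS $$
    BEGIN
      NEW.amount_new := NEW.amount;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_sync_amount_trg
      BEFORE INSERT OR UPDATE ON orders
      FOR EACH ROW EXECUTE FUNCTION orders_sync_amount();

    -- 2. backfill in batches to keep row locks and WAL bursts small
    UPDATE orders
    SET amount_new = amount
    WHERE id BETWEEN 1 AND 10000 AND amount_new IS NULL;

    -- 3. during the short maintenance window, swap the columns
    BEGIN;
    DROP TRIGGER orders_sync_amount_trg ON orders;
    ALTER TABLE orders DROP COLUMN amount;
    ALTER TABLE orders RENAME COLUMN amount_new TO amount;
    COMMIT;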
- krembo 1 year ago
Plus moving all FKs and constraints; both operations add a new level of complexity if you want to do them live while the DB is online and serving traffic.
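For the constraint part, the usual online-friendly trick is to add the constraint as NOT VALID and validate it afterwards (the names below are made up):

    -- re-create a FK against the swapped column without a long blocking scan
    ALTER TABLE order_items
      ADD CONSTRAINT order_items_order_id_fkey
      FOREIGN KEY (order_id) REFERENCES orders (id) NOT VALID;

    -- validation scans the existing rows under a much lighter lock
    ALTER TABLE order_items VALIDATE CONSTRAINT order_items_order_id_fkey;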
- stuaxo 1 year ago
How big is a nontrivial amount of data?

I feel like there is a whole middle ground of websites I've worked on where I haven't needed this (or maybe it's because a lot of them were sites that may be complex but have small numbers of users, e.g. internal tools).
- pjd7 1 year ago
Really depends on some factors:
Do you have billions of rows with only a very small % in use/locked, or millions of rows that are JSONB blobs with a relatively high % in use/locked?

Is your workload mostly write once, read many, or is it evenly split between reads and writes?

How fast is your I/O? Are you using network storage (EBS, for example) or local NVMe?

How much other load are you contending with on the system?
I have a JSONB-heavy workload with lots of updates. The JSONB blobs average around 100KB but can go up to 20MB. We can see anywhere from fewer than 10 updates on a blob all the way up to thousands of updates on the larger ones.

We use Citus for this workload and can move shards around; that operation uses logical replication to another host, which effectively cleans up the bloat along the way.
We also have some wide multi-column indexes over text and date fields that see a fair bit of rewrite activity as well. Those indexes get quite bloated too, and we run reindexes every few months (it used to be every 6-12 months, but as the workload gets busier we're reindexing a bit more frequently now).
The index bloat is far more of a problem than the raw table bloat.
In the past, when we were on a single RDS instance, I would use pg_repack.
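For the index-bloat side of this, a minimal sketch of an online rebuild (index and table names are illustrative; REINDEX ... CONCURRENTLY needs PostgreSQL 12+, and on older versions the usual workaround is to create a replacement index concurrently and swap it in):

    -- rebuild one bloated index without blocking writes
    REINDEX INDEX CONCURRENTLY idx_orders_customer_created_at;

    -- or rebuild every index on the table
    REINDEX TABLE CONCURRENTLY orders;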
- radimm 1 year ago
Consider the time it takes to rewrite the table, the downtime you can afford, and the disk space you have. My goal is to write a bit more about the cases that go beyond the usual "works on my 100MB" scenario.
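To put numbers on that decision, one way to gauge the bloat and on-disk footprint is the pgstattuple contrib extension (the table name below is a placeholder):

    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- dead_tuple_percent and free_percent show how much of the heap is reclaimable
    SELECT table_len, dead_tuple_percent, free_percent
    FROM pgstattuple('public.orders');

    -- total on-disk size, including indexes and TOAST
    SELECT pg_size_pretty(pg_total_relation_size('public.orders'));

Note that pgstattuple reads the whole table; pgstattuple_approx from the same extension is cheaper on very large relations.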