
I love PG, and don't plan to ever move away from it.

But I'm starting to wish they had in-place upgrades. The application I develop is getting to the point of "too large to make a copy of the whole db whenever I want to upgrade Postgres".

I can postpone things for a little while by moving all our large objects to external file storage -- but the tables themselves are growing quickly.

That'd be my favorite "big data" feature.



You can run pg_upgrade with the --link option to have it hardlink most of the data files into the new cluster's location. It'll still rewrite some things, but I believe that most of the actual table data remains unaltered.


Yeah, unless you have huge system catalogs (where the table definitions are stored), pg_upgrade should be almost instant if you use hardlinks.
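For anyone who hasn't used it, the invocation looks roughly like this (paths and version numbers are placeholders for your own layout; both clusters must be on the same filesystem for hardlinks to work, and the old server must be stopped first):

    /usr/lib/postgresql/16/bin/pg_upgrade \
      --old-bindir  /usr/lib/postgresql/15/bin \
      --new-bindir  /usr/lib/postgresql/16/bin \
      --old-datadir /var/lib/postgresql/15/data \
      --new-datadir /var/lib/postgresql/16/data \
      --link

One caveat: once the new cluster has been started after a --link upgrade, the old cluster is no longer safe to start, since the two share data files. Run it with --check first to do a dry-run compatibility check.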


That's beautiful. Had no idea about that option.



