Londiste Memory Usage

During our last major software release we made a pretty massive schema update to the primary postgres database. We pulled several columns out of one of our core tables and moved them into a separate table. We added triggers, dropped tables, added columns, etc. After all the dust had settled, one of the lead developers noticed that our londiste replay processes were each taking up 7.6GB of resident memory (per top output). Needless to say this was troubling, since we had two of these processes running and only 32GB of RAM in the server.

I did find some info here that suggested that a massive number of events submitted to the event queue over a relatively small number of ticks would increase CPU and memory usage. So I added pgq_lazy_fetch to my .ini file and reloaded the londiste config.
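For reference, the change is a one-line addition to the replay job's .ini file. The section and connection settings below are illustrative placeholders, not our actual config; the fetch count of 500 is likewise just an example value. With pgq_lazy_fetch set, the consumer pulls events from the queue through a cursor in chunks of that size instead of loading an entire batch into memory at once:

```ini
[londiste]
job_name = replica_replay

; placeholder connection strings
provider_db = dbname=primary
subscriber_db = dbname=replica

; fetch events 500 at a time via a cursor
; instead of materializing the whole batch
pgq_lazy_fetch = 500
```

A reload is enough to pick the setting up for future batches, though as noted below it did nothing for memory the process had already allocated.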

No change in memory usage. Bummer.

Luckily we still had our regular batch processing disabled while we sorted out the changes for the software release so replication lag was <5 seconds on my replicas. So I stopped the replay processes and then restarted them again.
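The restart itself is just the standard skytools script invocations. This is a sketch assuming a skytools 2-style setup with an ini file named replica.ini (a placeholder name); the -s and -d switches are the usual DBScript flags for a graceful stop and a daemonized start:

```shell
# ask the replay daemon to stop gracefully after its current batch
londiste.py replica.ini replay -s

# start it back up as a daemon
londiste.py replica.ini replay -d
```

Because replication lag was under 5 seconds, the replicas caught back up almost immediately after the restart.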

This freed up 15GB of memory. Sweet. Since this was the first time we left replication running during a software release, it was also the first time we encountered this issue. Now that we have pgq_lazy_fetch enabled I don’t expect to see it again.
