
Correct way of database data shrinking

PostPosted: Fri Sep 29, 2017 12:19 pm
by imoskovko/gmail_com
Hello

We noticed that our PostgreSQL DB has grown to 12 TB. It looks like some devices have too short a sync period.
I have truncated the largest tables with the *_change suffix using the pgAdmin 4 GUI tool.

What is the correct way of doing database housekeeping? Can change history data be discarded using AggreGate tools or commands?

Regards,
Igor

Re: Correct way of database data shrinking

PostPosted: Fri Oct 06, 2017 2:56 pm
by sergey_eritsyan/tibbo_com
Hi, Igor!

You can check the statistics in the DB and find out which tables are the most loaded.
For DB housekeeping you should run the vacuumlo operation using the DB tools. It will clean up and compact the data.
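For example, the largest tables can be found with a query along these lines (a minimal sketch using standard PostgreSQL catalog views, not an AggreGate-specific tool):

-- ten largest tables by total size (heap + indexes + TOAST)
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;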

Best regards, AggreGate support team.

Re: Correct way of database data shrinking

PostPosted: Tue Oct 10, 2017 7:23 pm
by imoskovko/gmail_com
I found this in the AggreGate manual. It helps:

http://aggregate.tibbo.com/docs/en/ls_d ... b=vacuumlo


# install the package that provides the vacuumlo utility
yum install postgresql-contrib

su - postgres
# remove orphaned large objects from the aggregate database
vacuumlo -U postgres -v -w aggregate
# full vacuum of all databases (verbose, no password prompt)
vacuumdb -fav -w

Re: Correct way of database data shrinking

PostPosted: Fri Oct 13, 2017 10:40 am
by sergey_eritsyan/tibbo_com
Thank you for the feedback.

Re: Correct way of database data shrinking

PostPosted: Wed Jan 17, 2018 11:39 am
by imoskovko/gmail_com
Following up on this thread.

The documentation recommends attaching triggers to all tables:
http://aggregate.tibbo.com/docs/en/ls_d ... %2BBlobDel

I attached them:

su postgres
psql aggregate

In it, we create the procedure:

CREATE OR REPLACE FUNCTION "BlobDel"() RETURNS trigger AS $BODY$
BEGIN
    delete from pg_catalog.pg_largeobject_metadata where pg_catalog.pg_largeobject_metadata.oid = OLD.ag_data;
    delete from pg_catalog.pg_largeobject where pg_catalog.pg_largeobject.loid = OLD.ag_data;
    return OLD;
END;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;

ALTER FUNCTION "BlobDel"() OWNER TO aggregate;
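For a single table the trigger is attached like this (the table name ag_test_change is only an illustrative placeholder):

-- example: attach the trigger to one (hypothetical) table
CREATE TRIGGER ag_test_change BEFORE DELETE ON ag_test_change
    FOR EACH ROW EXECUTE PROCEDURE "BlobDel"();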


Then we exit back to the postgres user's command line and attach the triggers to all tables automatically:

echo "SELECT table_name FROM information_schema.tables where table_schema='public' ORDER BY table_name;" | psql aggregate | grep -vE 'строк|^$|table_name|-----' | awk '// {print "CREATE TRIGGER ",$1,"BEFORE DELETE ON ",$1,"FOR EACH ROW EXECUTE PROCEDURE \"BlobDel\"();"}' | psql aggregate

It runs through every table.
The greps and awk could be combined into a single awk expression, but I haven't bothered; it works as is.
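An alternative sketch without the greps, using psql's -A (unaligned) and -t (tuples only) flags so that the header and the locale-dependent row-count footer never appear:

psql -Atc "SELECT table_name FROM information_schema.tables WHERE table_schema='public' ORDER BY table_name;" aggregate \
    | awk '{print "CREATE TRIGGER", $1, "BEFORE DELETE ON", $1, "FOR EACH ROW EXECUTE PROCEDURE \"BlobDel\"();"}' \
    | psql aggregate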

I'm keeping an eye on the database.
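The overall size can be checked with a quick query (a sketch; the database name aggregate comes from the setup above):

-- current size of the aggregate database
SELECT pg_size_pretty(pg_database_size('aggregate'));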

The postgres user's crontab runs the cleanup and a nightly backup:

-bash-4.2$ crontab -l
@hourly /usr/bin/vacuumlo -v -w aggregate > /var/log/vacuumlo.log 2>&1
15 3 * * * /usr/bin/vacuumdb -fav -w > /var/log/vacuumdb.log 2>&1
30 5 * * * mkdir /backup/pg_basebackup_$(date +'\%Y.\%m.\%d'); /usr/bin/pg_basebackup -D /backup/pg_basebackup_$(date +'\%Y.\%m.\%d')/ -z -Z9 -Ft -Xf -v > /var/log/postgres.backup.log 2>&1