Correct way of database data shrinking

Questions related to relational databases, key-value databases, NoSQL databases, round-robin databases, and file-based storage.

Correct way of database data shrinking

Postby imoskovko/gmail_com » Fri Sep 29, 2017 12:19 pm

Hello

We noticed that our PostgreSQL DB grew to 12 TB. It looks like some devices have too short a sync period.
I truncated the largest tables with the *_change suffix using the pgAdmin 4 GUI tool.
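(For reference, the same truncation can be done from psql; the table name below is a hypothetical example.)

-- hypothetical example: empty one history table; note that TRUNCATE reclaims
-- the table's heap space but does not remove large objects it referenced
TRUNCATE TABLE ag_events_change;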

What is the correct way of doing database housekeeping? Can historical change data be discarded using AggreGate tools or commands?

Regards,
Igor
imoskovko/gmail_com
 
Posts: 18
Joined: Wed Aug 30, 2017 9:39 am

Re: Correct way of database data shrinking

Postby sergey_eritsyan/tibbo_com » Fri Oct 06, 2017 2:56 pm

Hi, Igor!

You can check the statistics in the DB to find the most heavily loaded tables.
For DB housekeeping you should run the vacuumlo operation using the DB tools. It will clean up and compact the data.
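A minimal sketch of such a statistics query, using standard PostgreSQL catalog views from psql:

-- list the ten largest tables by total size (table + indexes + TOAST)
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;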

Best regards, AggreGate support team.
sergey_eritsyan/tibbo_com
 
Posts: 77
Joined: Fri Jul 07, 2017 10:42 am

Re: Correct way of database data shrinking

Postby imoskovko/gmail_com » Tue Oct 10, 2017 7:23 pm

I found it in the AggreGate manuals, and it helps:

http://aggregate.tibbo.com/docs/en/ls_d ... b=vacuumlo


# install the package that provides the vacuumlo utility
yum install postgresql-contrib

# switch to the postgres user
su - postgres
# remove orphaned large objects from the "aggregate" database
vacuumlo -U postgres -v -w aggregate
# full vacuum + analyze of all databases, verbose, no password prompt
vacuumdb -fav -w
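A quick way to see whether the cleanup had an effect is to compare the large-object count before and after the run (a sketch, executed in psql against the aggregate database):

-- number of large objects currently stored in the database
SELECT count(*) FROM pg_catalog.pg_largeobject_metadata;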
imoskovko/gmail_com
 
Posts: 18
Joined: Wed Aug 30, 2017 9:39 am

Re: Correct way of database data shrinking

Postby sergey_eritsyan/tibbo_com » Fri Oct 13, 2017 10:40 am

Thank you for the feedback.
sergey_eritsyan/tibbo_com
 
Posts: 77
Joined: Fri Jul 07, 2017 10:42 am

Re: Correct way of database data shrinking

Postby imoskovko/gmail_com » Wed Jan 17, 2018 11:39 am

Following up on this thread.

The documentation recommends attaching triggers to all tables:
http://aggregate.tibbo.com/docs/en/ls_d ... %2BBlobDel

I attached them.

su postgres
psql aggregate

In psql, create the procedure:

CREATE OR REPLACE FUNCTION "BlobDel"() RETURNS trigger AS $BODY$
BEGIN
    -- remove the large object referenced by the deleted row's ag_data column
    DELETE FROM pg_catalog.pg_largeobject_metadata
        WHERE pg_catalog.pg_largeobject_metadata.oid = OLD.ag_data;
    DELETE FROM pg_catalog.pg_largeobject
        WHERE pg_catalog.pg_largeobject.loid = OLD.ag_data;
    RETURN OLD;
END;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;

ALTER FUNCTION "BlobDel"() OWNER TO aggregate;
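For a single table, the trigger attachment generated by the one-liner further down looks like this (the table name is a hypothetical example):

-- attach the cleanup trigger to one table; the shell pipeline below
-- generates this statement for every table in the public schema
CREATE TRIGGER ag_events_change
    BEFORE DELETE ON ag_events_change
    FOR EACH ROW EXECUTE PROCEDURE "BlobDel"();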


Then drop back to the postgres user's shell and attach the triggers to all tables automatically:

echo "SELECT table_name FROM information_schema.tables where table_schema='public' ORDER BY table_name;" | psql aggregate | grep -vE 'строк|^$|table_name|-----' | awk '// {print "CREATE TRIGGER ",$1,"BEFORE DELETE ON ",$1,"FOR EACH ROW EXECUTE PROCEDURE \"BlobDel\"();"}' | psql aggregate

It runs through every table.
The greps and awk could be combined into a single awk expression, but I didn't bother - it works as is.
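To verify that the triggers are in place, the system catalog can be queried (a sketch):

-- list user-created triggers and the tables they are attached to
SELECT tgname, tgrelid::regclass AS table_name
FROM pg_catalog.pg_trigger
WHERE NOT tgisinternal
ORDER BY tgname;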

I am keeping an eye on the database.
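For that monitoring, a one-liner like this shows the overall on-disk size (assuming the database is named aggregate):

-- current on-disk size of the aggregate database
SELECT pg_size_pretty(pg_database_size('aggregate'));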

The postgres user's crontab runs the cleanup and the nightly backup:

-bash-4.2$ crontab -l
@hourly /usr/bin/vacuumlo -v -w aggregate > /var/log/vacuumlo.log 2>&1
15 3 * * * /usr/bin/vacuumdb -fav -w > /var/log/vacuumdb.log 2>&1
30 5 * * * mkdir /backup/pg_basebackup_$(date +'\%Y.\%m.\%d'); /usr/bin/pg_basebackup -D /backup/pg_basebackup_$(date +'\%Y.\%m.\%d')/ -z -Z9 -Ft -Xf -v > /var/log/postgres.backup.log 2>&1
imoskovko/gmail_com
 
Posts: 18
Joined: Wed Aug 30, 2017 9:39 am

