TECHZEN Zenoss User Community ARCHIVE  

zodb_object_state has grown to ~200GB

Subject: zodb_object_state has grown to ~200GB
Author: [Not Specified]
Posted: 2016-02-04 02:19

My Zenoss server's disk became full after a Zenoss backup.

I ran du -a / | sort -n -r | head -n 10 to find the largest files on the disk.

The database file /var/lib/mysql/ibdata1 was the largest file in the file system, and by running the SQL query below against the database I discovered that zodb.object_state is the cause.

Now what? What is the best way to reduce the size?

Regards

/Per

SELECT	concat(table_schema,'.',table_name),
concat(round(table_rows/1000000,2),'M') rows,
concat(round(data_length/(1024*1024),2),'MB') DATA,
concat(round(index_length/(1024*1024),2),'MB') idx,
concat(round((data_length+index_length)/(1024*1024),2),'MB') total_size,
round(index_length/data_length,2) idxfrac
FROM information_schema.TABLES
WHERE table_schema = 'my_database_name'
ORDER BY data_length+index_length DESC LIMIT 10;
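(For reference, the same size check can be run non-interactively from the shell. A minimal sketch, assuming the Zenoss default where the ZODB lives in a schema named 'zodb' and that you have MySQL root credentials; adjust the login and schema name to your environment.)

mysql -u root -p -e "
SELECT concat(table_schema,'.',table_name) AS tbl,
       concat(round((data_length+index_length)/(1024*1024),2),'MB') AS total_size
FROM information_schema.TABLES
WHERE table_schema = 'zodb'
ORDER BY data_length+index_length DESC LIMIT 10;"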




Subject: I hate to say this, but you
Author: Andrew Kirch
Posted: 2016-02-08 12:41

I hate to say this, but you're in it pretty deep. We have a tool called zenossdbpack, which should be run as part of routine maintenance. It will take time and a HUGE amount of resources (200+ GB of RAM) to run successfully, and even then it may not work.
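(A minimal sketch of running it manually, assuming a Zenoss 4.x install where zenossdbpack is on the zenoss user's PATH; since a pack of this size can run for days, keep it alive with nohup or screen/tmux.)

su - zenoss
nohup zenossdbpack > /tmp/zenossdbpack.log 2>&1 &
tail -f /tmp/zenossdbpack.log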

Andrew Kirch

akirch@gvit.com

Need Zenoss support, consulting or custom development? Look no further. Email or PM me!

Ready for Distributed Topology (collectors) for Zenoss 5? Coming May 1st from GoVanguard.



Subject: Lucky for me, I have a
Author: [Not Specified]
Posted: 2016-02-09 02:06

Lucky for me, I have a reasonably fresh backup. I'll do it that way instead.

Thank you for the rapid response!
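(If you go the restore route, a hedged sketch assuming a Zenoss 4.x zenbackup archive; the exact option names may differ on your version, so check zenrestore --help first. The backup file name below is hypothetical.)

su - zenoss
zenoss stop
zenrestore --file /opt/zenoss/backups/zenbackup_YYYYMMDD.tgz
zenoss start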



Subject: Unfortunately we have the
Author: Martin
Posted: 2016-02-09 03:05

Unfortunately we have the same problem:

mysql> SELECT concat(table_schema,'.',table_name), concat(round(table_rows/1000000,2),'M') rows, concat(round(data_length/(1024*1024),2),'MB') DATA, concat(round(index_length/(1024*1024),2),'MB') idx, concat(round((data_length+index_length)/(1024*1024),2),'MB') total_size, round(index_length/data_length,2) idxfrac FROM information_schema.TABLES WHERE table_schema = 'zodb' ORDER BY data_length+index_length DESC LIMIT 10;
+-------------------------------------+---------+-------------+------------+-------------+---------+
| concat(table_schema,'.',table_name) | rows    | DATA        | idx        | total_size  | idxfrac |
+-------------------------------------+---------+-------------+------------+-------------+---------+
| zodb.object_state                   | 796.13M | 296006.61MB | 38024.89MB | 334031.50MB |    0.13 |
| zodb.object_ref                     | 180.35M | 4299.83MB   | 7412.30MB  | 11712.13MB  |    1.72 |
| zodb.pack_object                    | 163.82M | 2968.36MB   | 4696.29MB  | 7664.66MB   |    1.58 |
| zodb.object_refs_added              | 163.82M | 2655.90MB   | 2252.81MB  | 4908.71MB   |    0.85 |
| zodb.connection_info                | 0.00M   | 6.40MB      | 0.06MB     | 6.46MB      |    0.01 |
| zodb.blob_chunk                     | 0.00M   | 0.02MB      | 0.02MB     | 0.03MB      |    1.00 |
| zodb.new_oid                        | 0.00M   | 0.01MB      | 0.02MB     | 0.02MB      |    2.05 |
| zodb.schema_version                 | 0.00M   | 0.02MB      | 0.00MB     | 0.02MB      |    0.00 |
+-------------------------------------+---------+-------------+------------+-------------+---------+
8 rows in set (0.02 sec)

I think the reason for the huge object_state table is that zenossdbpack hasn't run for a long time, because it runs out of memory after a while:

Out of memory: Kill process 26070 (zenossdbpack) score 862 or sacrifice child
Killed process 26070, UID 1337, (zenossdbpack) total-vm:42838616kB, anon-rss:28809668kB, file-rss:1436kB

Is there any way to delete old unused records manually? Or any other way to clean up the database?

Any help is appreciated.

Thank you, Martin



Subject: In the meantime i installed a
Author: Martin
Posted: 2016-02-10 00:36

In the meantime I installed a 500 GB SSD as a swap partition and started zenossdbpack, and it looks good so far. The problem is that it will take an incredible amount of time. zenossdbpack has now been running for 1 day and it says:

2016-02-10 07:35:20,561 [relstorage.adapters.packundo] INFO objects analyzed: 376300/615741024

So it looks like it will take weeks to complete :D

Any ideas please?
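(For reference, a minimal sketch of the swap setup described above, run as root; /dev/sdb1 is a hypothetical device name, substitute the partition on your SSD.)

mkswap /dev/sdb1                                   # format the partition as swap
swapon /dev/sdb1                                   # enable it immediately
swapon -s                                          # confirm the new swap is active
echo '/dev/sdb1 none swap sw 0 0' >> /etc/fstab    # make it persistent across reboots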



Subject: Our Level 2 support spent
Author: Andrew Kirch
Posted: 2016-02-10 16:12

Our Level 2 support spent SIGNIFICANT time on a zodb.object_state table that grew to 180 GB. There is NOT a fix other than to pack it, which will take time. You may want to disable the OOM killer.
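(A hedged sketch of exempting the running pack process from the OOM killer on a reasonably recent kernel; older kernels expose /proc/<pid>/oom_adj instead, where -17 has the same effect. Run as root while zenossdbpack is running.)

PID=$(pgrep -f zenossdbpack | head -n 1)    # find the pack process
echo -1000 > /proc/$PID/oom_score_adj       # -1000 tells the OOM killer to never pick it
cat /proc/$PID/oom_score_adj                # verify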

Andrew Kirch

akirch@gvit.com

Need Zenoss support, consulting or custom development? Look no further. Email or PM me!

Ready for Distributed Topology (collectors) for Zenoss 5? Coming May 1st from GoVanguard.



Subject: Any solution ?
Author: [Not Specified]
Posted: 2016-12-05 08:41

Any solution?



Subject: AFAIK there is no solution
Author: Martin
Posted: 2016-12-07 05:30

AFAIK there is no solution for this. I've learned that it is very important to run zenossdbpack on a regular basis, and since my last "incident" I keep an eye on it ;)
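(For anyone landing here later: a minimal sketch of scheduling the pack weekly via cron, assuming the default Zenoss 4.x ZENHOME of /opt/zenoss; the file name, log path and schedule below are just an example.)

# /etc/cron.d/zenossdbpack -- run the pack every Sunday at 02:00 as the zenoss user
0 2 * * 0 zenoss /opt/zenoss/bin/zenossdbpack >> /opt/zenoss/log/zenossdbpack.log 2>&1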


