Subject: zodbscan reports "objects not reachable" every Monday morning
Author: [Not Specified]
Posted: 2014-11-18 07:09
Hi folks,
I run zodbscan from cron every day (without -f) to keep an eye out for dangling references/POSKeyErrors. Since I started doing that, every Monday morning zodbscan begins to report unreachable items: "10.67% of zodb objects not reachable - examine your zenossdbpack settings". Each time it has been about 10%, and the figure then grows each day.
zenossdbpack runs from cron.weekly, which currently fires on Wednesday; the last time it ran was November 12.
What could be happening every week between Sunday's zodbscan and Monday's scan that affects the zodbscan results?
2014-11-18 05:57:59,678 INFO zodbscan: Examining 3582179 items in zodb database
2014-11-18 06:19:22,281 INFO zodbscan: 12.15% of zodb objects not reachable - examine your zenossdbpack settings
2014-11-18 06:19:22,282 INFO zodbscan: 0 Dangling References were detected
2014-11-17 05:57:50,964 INFO zodbscan: Examining 3516942 items in zodb database
2014-11-17 06:18:43,354 INFO zodbscan: 10.54% of zodb objects not reachable - examine your zenossdbpack settings
2014-11-17 06:18:43,354 INFO zodbscan: 0 Dangling References were detected
2014-11-16 05:58:34,696 INFO zodbscan: Examining 3441658 items in zodb database
2014-11-16 06:19:58,297 INFO zodbscan: 0 Dangling References were detected
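For what it's worth, a quick way to watch this trend is to scrape the daily cron output. Here is a minimal sketch (the sample log lines are copied from this thread; the helper function and its name are my own invention, not part of the Zenoss toolbox):

```python
import re

# Matches zodbscan's daily summary line and captures the date and
# the "not reachable" percentage.
pattern = re.compile(
    r"^(\d{4}-\d{2}-\d{2}).*?([\d.]+)% of zodb objects not reachable")

def unreachable_trend(lines):
    """Return [(date, percent), ...] extracted from zodbscan log lines."""
    trend = []
    for line in lines:
        m = pattern.search(line)
        if m:
            trend.append((m.group(1), float(m.group(2))))
    return trend

sample = [
    "2014-11-17 06:18:43,354 INFO zodbscan: 10.54% of zodb objects "
    "not reachable - examine your zenossdbpack settings",
    "2014-11-18 06:19:22,281 INFO zodbscan: 12.15% of zodb objects "
    "not reachable - examine your zenossdbpack settings",
]
print(unreachable_trend(sample))  # [('2014-11-17', 10.54), ('2014-11-18', 12.15)]
```

Feeding it a week of cron output makes the Monday jump and the day-over-day growth easy to see at a glance.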
Subject: And to follow on to this,
Author: [Not Specified]
Posted: 2014-11-18 07:49
And to follow on to this: after running zenossdbpack, POSKeyErrors often show up.
Just now I ran zenossdbpack by hand, and it removed a bunch of things:
2014-11-18 08:27:26,620 [relstorage.adapters.packundo] INFO pack: removed 430300 (99.1%) state(s)
2014-11-18 08:27:27,571 [relstorage.adapters.packundo] INFO pack: cleaning up
2014-11-18 08:28:51,306 [relstorage.adapters.packundo] INFO pack: finished successfully
2014-11-18 08:29:01,058 [relstorage.adapters.packundo] INFO pack: removed 16500 (99.9%) state(s)
2014-11-18 08:29:01,065 [relstorage.adapters.packundo] INFO pack: cleaning up
2014-11-18 08:29:01,380 [relstorage.adapters.packundo] INFO pack: finished successfully
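As a side note, the pack totals are in the same ballpark as the unreachable percentage from that morning's scan. A back-of-the-envelope check (my own arithmetic, using only the figures quoted in this thread):

```python
# Assumption: the states removed by zenossdbpack roughly correspond to
# the objects zodbscan flagged as unreachable earlier the same day.
examined = 3582179        # objects zodbscan examined on 2014-11-18
removed = 430300 + 16500  # states removed by the two pack passes above
pct = round(100.0 * removed / examined, 2)
print(pct)  # 12.47 -- close to the 12.15% zodbscan reported
```

That suggests the pack is garbage-collecting the same objects zodbscan counts as unreachable, which would explain why the warning resets after each weekly pack.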
And now when I run zodbscan:
12 Dangling References
2014-11-18 08:44:28,286 CRITICAL zodbscan: DANGLING REFERENCE (POSKeyError) FOUND:
PATH: zport/dmd/Devices/Network/Router/Cisco/devices/MIGTORSW05/os/interfaces/FastEthernet0_1
TYPE:
OID: 0x006b7d5a '\x00\x00\x00\x00\x00k}Z' 7044442
Refers to a missing object:
NAME: None
TYPE:
OID: 0x006b7e22 '\x00\x00\x00\x00\x00k~"' 7044642
2014-11-18 08:44:28,302 CRITICAL zodbscan: DANGLING REFERENCE (POSKeyError) FOUND:
PATH: zport/dmd/Devices/Network/Router/Cisco/devices/MIGTORSW05/os/interfaces/FastEthernet0_1
TYPE:
OID: 0x006b7d5a '\x00\x00\x00\x00\x00k}Z' 7044442
Refers to a missing object:
NAME: None
TYPE:
OID: 0x006b7e23 '\x00\x00\x00\x00\x00k~#' 7044643
(etc.)
There are serious database integrity issues with the tools here...
I am running 4.2.5 SP203
Subject: Have you tried running
Author: [Not Specified]
Posted: 2014-11-18 11:50
Have you tried running findposkeyerror -f and zenrelationscan -f from the toolbox? Do those detect and/or address the PKE issues that you found?
Subject: No they don't, these only
Author: [Not Specified]
Posted: 2014-11-18 12:37
No, they don't; these only show up in zodbscan.
Subject: Possible code segment to help (fails to format properly)
Author: [Not Specified]
Posted: 2014-11-18 15:36
Dpirmann, here is a segment of code (shared with me by someone else); perhaps you can adapt it for your instance.
def fixRels(obj):
    for name, relobj in obj._relations:
        try:
            getattr(obj, name)()
            continue
        except Exception:
            try:
                obj._delOb(name)
            except Exception:
                pass
            obj._setObject(name, relobj.createRelation(name))
    commit()

##################

for iface in dev.os.interfaces():
    fixRels(iface)
    try:
        iface.index_object()
    except POSKeyError:
        dev.os.interfaces._delOb(iface.id)
commit()