TECHZEN Zenoss User Community ARCHIVE  

Zenoss 5.0 -> zenping problem

Subject: Zenoss 5.0 -> zenping problem
Author: [Not Specified]
Posted: 2015-03-13 03:52

Hi,

unfortunately zenping doesn't work in our installation.
We get the following error message:

nmap did not execute correctly: ('-iL', '/tmp/zenping_nmap_wE8svh', '-sn', '-PE', '-n', '--privileged', '--send-ip', '-T5', '--min-rtt-timeout', '1.5s', '--max-rtt-timeout', '1.5s', '--max-retries', '1', '--min-rate', '13', '--min-parallelism', '19', '-oX'
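
The event only shows the argument list zenping handed to nmap, not nmap's own error output. To see why nmap fails, I could attach to the zenping container and run zenping in the foreground with debug logging (just a sketch; the service name and exact command are assumptions based on what the controller logs):

# on the Control Center host
serviced service attach zenping
# inside the container, run zenping in the foreground with debug output (Ctrl-C to stop)
su - zenoss -c "/opt/zenoss/bin/zenping run -v10"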

Any ideas?

regards
Achim



Subject: could you possibly pastebin
Author: Andrew Kirch
Posted: 2015-03-13 08:59

Could you possibly pastebin the serviced log for further review?

Andrew Kirch

akirch@gvit.com

Need Zenoss support, consulting or custom development? Look no further. Email or PM me!

Ready for Distributed Topology (collectors) for Zenoss 5? Coming May 1st from GoVanguard.



Subject: Hi Andrew,
Author: [Not Specified]
Posted: 2015-03-14 06:11

Hi Andrew,

zenping log:
I0314 10:28:02.263590 00001 vif.go:58] vif subnet is: 10.3
I0314 10:28:02.263877 00001 lbClient.go:77] ControlPlaneAgent.GetServiceInstance()
I0314 10:28:02.389953 00001 controller.go:291] Allow container to container connections: true
I0314 10:28:02.390675 00001 controller.go:226] Wrote config file /opt/zenoss/etc/global.conf
I0314 10:28:02.394063 00001 controller.go:197] Successfully ran command:'&{/usr/bin/chown [chown zenoss:zenoss /opt/zenoss/etc/global.conf] [] [] 0xc2080e6100 exit status 0 reflect.Value true [0xc20803a080 0xc20803a0a0 0xc20803a0a0] [0xc20803a080 0xc20803a0a0] [0xc20803a098] [0x6de1b0] 0xc2080b6000}' output:
I0314 10:28:02.398483 00001 controller.go:197] Successfully ran command:'&{/usr/bin/chmod [chmod 660 /opt/zenoss/etc/global.conf] [] [] 0xc2080e6da0 exit status 0 reflect.Value true [0xc20803a088 0xc20803a0d0 0xc20803a0d0] [0xc20803a088 0xc20803a0d0] [0xc20803a0c8] [0x6de1b0] 0xc2080b6150}' output:
I0314 10:28:02.398711 00001 controller.go:226] Wrote config file /opt/zenoss/etc/zenping.conf
I0314 10:28:02.400082 00001 controller.go:197] Successfully ran command:'&{/usr/bin/chown [chown zenoss:zenoss /opt/zenoss/etc/zenping.conf] [] [] 0xc2080e7720 exit status 0 reflect.Value true [0xc20803a2c8 0xc20803a2e8 0xc20803a2e8] [0xc20803a2c8 0xc20803a2e8] [0xc20803a2e0] [0x6de1b0] 0xc2080b6310}' output:
I0314 10:28:02.400889 00001 controller.go:197] Successfully ran command:'&{/usr/bin/chmod [chmod 0664 /opt/zenoss/etc/zenping.conf] [] [] 0xc2080e7a20 exit status 0 reflect.Value true [0xc20803a318 0xc20803a350 0xc20803a350] [0xc20803a318 0xc20803a350] [0xc20803a330] [0x6de1b0] 0xc2080b64d0}' output:
I0314 10:28:02.407563 00001 logstash.go:55] Using logstash resourcePath: /usr/local/serviced/resources/logstash
I0314 10:28:02.407716 00001 controller.go:226] Wrote config file /etc/logstash-forwarder.conf
I0314 10:28:02.407775 00001 controller.go:380] pushing network stats to: http://localhost:22350/api/metrics/store
I0314 10:28:02.407850 00001 instance.go:79] about to execute: /usr/local/serviced/resources/logstash/logstash-forwarder , [-idle-flush-time=5s -old-files-hours=26280 -config /etc/logstash-forwarder.conf][4]
I0314 10:28:02.412433 00001 endpoint.go:132] c.zkInfo: {ZkDSN:{"Servers":["192.168.128.62:2181"],"Timeout":15000000000} PoolID:default}
I0314 10:28:02.412725 00001 endpoint.go:173] getting service state: eoq6gg6m77iyo1k6cnqyvt8y2 0
2015/03/14 10:28:02 publisher init
2015/03/14 10:28:02
{
"network": {
"servers": [ "127.0.0.1:5043" ],
"ssl certificate": "/usr/local/serviced/resources/logstash/logstash-forwarder.crt",
"ssl key": "/usr/local/serviced/resources/logstash/logstash-forwarder.key",
"ssl ca": "/usr/local/serviced/resources/logstash/logstash-forwarder.crt",
"timeout": 15
},
"files": [

{
"paths": [ "/opt/zenoss/log/zenping.log" ],
"fields": {"instance":"0","monitor":"localhost","service":"eoq6gg6m77iyo1k6cnqyvt8y2","type":"zenping"}
}
]
}
2015/03/14 10:28:02.417261 Launching harvester on new file: /opt/zenoss/log/zenping.log
2015/03/14 10:28:02.417280 Loading client ssl certificate: /usr/local/serviced/resources/logstash/logstash-forwarder.crt and /usr/local/serviced/resources/logstash/logstash-forwarder.key
2015/03/14 10:28:02.418942 Starting harvester: /opt/zenoss/log/zenping.log
2015/03/14 10:28:02.418964 Current file offset: 746
2015/03/14 10:28:02.596492 Setting trusted CA from file: /usr/local/serviced/resources/logstash/logstash-forwarder.crt
2015/03/14 10:28:02.596733 Connecting to 127.0.0.1:5043 (127.0.0.1)
2015/03/14 10:28:02.596861 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5043: connection refused
Trying to connect to logstash server... 127.0.0.1:5042
I0314 10:28:03.443447 00001 endpoint.go:306] cached imported endpoint[a8u77sofuk3vbk02c2a1zagw1_localhost_zenhubPB]: {endpointID:localhost_zenhubPB instanceID:0 virtualAddress: purpose:import port:8789}
I0314 10:28:03.443486 00001 endpoint.go:306] cached imported endpoint[a8u77sofuk3vbk02c2a1zagw1_localhost_redis]: {endpointID:localhost_redis instanceID:0 virtualAddress: purpose:import port:6379}
I0314 10:28:03.443496 00001 endpoint.go:306] cached imported endpoint[a8u77sofuk3vbk02c2a1zagw1_controlplane_consumer]: {endpointID:controlplane_consumer instanceID:0 virtualAddress: purpose:import port:8444}
I0314 10:28:03.443502 00001 controller.go:393] command: [su - zenoss -c "/opt/zenoss/bin/zenping run -c --duallog --monitor localhost "] [1]
I0314 10:28:03.455995 00001 controller.go:859] Got service endpoints for eoq6gg6m77iyo1k6cnqyvt8y2: map[tcp:5042:[{ServiceID:controlplane_logstash_tcp Application:controlplane_logstash_tcp ContainerPort:5042 HostPort:5042 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5042}] tcp:5043:[{ServiceID:controlplane_logstash_lumberjack Application:controlplane_logstash_lumberjack ContainerPort:5043 HostPort:5043 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5043}] tcp:8444:[{ServiceID:controlplane_consumer Application:controlplane_consumer ContainerPort:8443 HostPort:8443 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:8444}] tcp:443:[{ServiceID:controlplane Application:controlplane ContainerPort:443 HostPort:443 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:443}]]
I0314 10:28:03.456052 00001 controller.go:871] changing key from tcp:443 to a8u77sofuk3vbk02c2a1zagw1_controlplane: {ServiceID:controlplane Application:controlplane ContainerPort:443 HostPort:443 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:443}
I0314 10:28:03.456066 00001 controller.go:871] changing key from tcp:5042 to a8u77sofuk3vbk02c2a1zagw1_controlplane_logstash_tcp: {ServiceID:controlplane_logstash_tcp Application:controlplane_logstash_tcp ContainerPort:5042 HostPort:5042 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5042}
I0314 10:28:03.456077 00001 controller.go:871] changing key from tcp:5043 to a8u77sofuk3vbk02c2a1zagw1_controlplane_logstash_lumberjack: {ServiceID:controlplane_logstash_lumberjack Application:controlplane_logstash_lumberjack ContainerPort:5043 HostPort:5043 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5043}
I0314 10:28:03.456092 00001 controller.go:871] changing key from tcp:8444 to a8u77sofuk3vbk02c2a1zagw1_controlplane_consumer: {ServiceID:controlplane_consumer Application:controlplane_consumer ContainerPort:8443 HostPort:8443 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:8444}
I0314 10:28:03.456116 00001 endpoint.go:620] Attempting port map for: a8u77sofuk3vbk02c2a1zagw1_controlplane -> {ServiceID:controlplane Application:controlplane ContainerPort:443 HostPort:443 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:443}
I0314 10:28:03.456183 00001 endpoint.go:640] Success binding port: a8u77sofuk3vbk02c2a1zagw1_controlplane -> proxy[{controlplane controlplane 443 443 192.168.128.62 127.0.0.1 tcp 0 443}; &{%!s(*net.netFD=&{{0 0 0} 7 2 1 false tcp4 0xc20813e9c0 reflect.Value {140137151719192}})}]=>[]
I0314 10:28:03.456303 00001 endpoint.go:306] cached imported endpoint[a8u77sofuk3vbk02c2a1zagw1_controlplane]: {endpointID:controlplane instanceID:0 virtualAddress: purpose:import port:443}
I0314 10:28:03.456319 00001 endpoint.go:620] Attempting port map for: a8u77sofuk3vbk02c2a1zagw1_controlplane_logstash_tcp -> {ServiceID:controlplane_logstash_tcp Application:controlplane_logstash_tcp ContainerPort:5042 HostPort:5042 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5042}
I0314 10:28:03.456348 00001 endpoint.go:640] Success binding port: a8u77sofuk3vbk02c2a1zagw1_controlplane_logstash_tcp -> proxy[{controlplane_logstash_tcp controlplane_logstash_tcp 5042 5042 192.168.128.62 127.0.0.1 tcp 0 5042}; &{%!s(*net.netFD=&{{0 0 0} 8 2 1 false tcp4 0xc20813ed80 reflect.Value {140137151719368}})}]=>[]
I0314 10:28:03.456415 00001 endpoint.go:306] cached imported endpoint[a8u77sofuk3vbk02c2a1zagw1_controlplane_logstash_tcp]: {endpointID:controlplane_logstash_tcp instanceID:0 virtualAddress: purpose:import port:5042}
I0314 10:28:03.456433 00001 endpoint.go:620] Attempting port map for: a8u77sofuk3vbk02c2a1zagw1_controlplane_logstash_lumberjack -> {ServiceID:controlplane_logstash_lumberjack Application:controlplane_logstash_lumberjack ContainerPort:5043 HostPort:5043 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5043}
I0314 10:28:03.456465 00001 endpoint.go:640] Success binding port: a8u77sofuk3vbk02c2a1zagw1_controlplane_logstash_lumberjack -> proxy[{controlplane_logstash_lumberjack controlplane_logstash_lumberjack 5043 5043 192.168.128.62 127.0.0.1 tcp 0 5043}; &{%!s(*net.netFD=&{{0 0 0} 10 2 1 false tcp4 0xc20813f110 reflect.Value {140137151718312}})}]=>[]
I0314 10:28:03.456519 00001 endpoint.go:306] cached imported endpoint[a8u77sofuk3vbk02c2a1zagw1_controlplane_logstash_lumberjack]: {endpointID:controlplane_logstash_lumberjack instanceID:0 virtualAddress: purpose:import port:5043}
I0314 10:28:03.456532 00001 endpoint.go:620] Attempting port map for: a8u77sofuk3vbk02c2a1zagw1_controlplane_consumer -> {ServiceID:controlplane_consumer Application:controlplane_consumer ContainerPort:8443 HostPort:8443 HostIP:192.168.128.62 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:8444}
I0314 10:28:03.456557 00001 endpoint.go:640] Success binding port: a8u77sofuk3vbk02c2a1zagw1_controlplane_consumer -> proxy[{controlplane_consumer controlplane_consumer 8443 8443 192.168.128.62 127.0.0.1 tcp 0 8444}; &{%!s(*net.netFD=&{{0 0 0} 11 2 1 false tcp4 0xc20813f4a0 reflect.Value {140137151718136}})}]=>[]
I0314 10:28:03.456616 00001 endpoint.go:306] cached imported endpoint[a8u77sofuk3vbk02c2a1zagw1_controlplane_consumer]: {endpointID:controlplane_consumer instanceID:0 virtualAddress: purpose:import port:8443}
I0314 10:28:03.456692 00001 controller.go:665] No prereqs to pass.
I0314 10:28:03.464278 00001 endpoint.go:412] Starting watch for tenantEndpointKey a8u77sofuk3vbk02c2a1zagw1_localhost_redis:
I0314 10:28:03.464308 00001 endpoint.go:412] Starting watch for tenantEndpointKey a8u77sofuk3vbk02c2a1zagw1_localhost_zenhubPB:
I0314 10:28:03.465957 00001 endpoint.go:620] Attempting port map for: a8u77sofuk3vbk02c2a1zagw1_localhost_redis -> {ServiceID:7qlzq75s9fg0dswat6ggtn8t5 Application:localhost_redis ContainerPort:6379 HostPort:49179 HostIP:192.168.128.62 ContainerIP:172.17.0.45 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:6379}
I0314 10:28:03.466032 00001 endpoint.go:640] Success binding port: a8u77sofuk3vbk02c2a1zagw1_localhost_redis -> proxy[{7qlzq75s9fg0dswat6ggtn8t5 localhost_redis 6379 49179 192.168.128.62 172.17.0.45 tcp 0 6379}; &{%!s(*net.netFD=&{{0 0 0} 12 2 1 false tcp4 0xc2088fdad0 reflect.Value {140137151717960}})}]=>[]
I0314 10:28:03.466164 00001 endpoint.go:620] Attempting port map for: a8u77sofuk3vbk02c2a1zagw1_localhost_zenhubPB -> {ServiceID:7vizvtv9n083xaeuu7g56lcnn Application:localhost_zenhubPB ContainerPort:8789 HostPort:49162 HostIP:192.168.128.62 ContainerIP:172.17.0.18 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:8789}
I0314 10:28:03.466208 00001 endpoint.go:640] Success binding port: a8u77sofuk3vbk02c2a1zagw1_localhost_zenhubPB -> proxy[{7vizvtv9n083xaeuu7g56lcnn localhost_zenhubPB 8789 49162 192.168.128.62 172.17.0.18 tcp 0 8789}; &{%!s(*net.netFD=&{{0 0 0} 13 2 1 false tcp4 0xc2088fde30 reflect.Value {140137151717784}})}]=>[]
I0314 10:28:03.472411 00001 controller.go:721] Kicking off health check redis_answering.
I0314 10:28:03.472433 00001 controller.go:722] Setting up health check: /opt/zenoss/bin/healthchecks/redis_answering
I0314 10:28:03.472439 00001 controller.go:721] Kicking off health check running.
I0314 10:28:03.472443 00001 controller.go:722] Setting up health check: pgrep -fu zenoss zenping.py > /dev/null
I0314 10:28:03.472454 00001 controller.go:721] Kicking off health check zenhub_answering.
I0314 10:28:03.472458 00001 controller.go:722] Setting up health check: /opt/zenoss/bin/healthchecks/zenhub_answering
I0314 10:28:03.473566 00001 controller.go:612] Starting service process.
I0314 10:28:03.473596 00001 instance.go:79] about to execute: /bin/sh , [-c exec su - zenoss -c "/opt/zenoss/bin/zenping run -c --duallog --monitor localhost "][2]
2015/03/14 10:28:03.597454 Connecting to 127.0.0.1:5043 (127.0.0.1)
2015/03/14 10:28:03.647977 Connected to 127.0.0.1
Trying to connect to logstash server... 127.0.0.1:5042
Connected to logstash server.
E0314 10:28:06.009456 00001 endpoint.go:510] Setting proxy a8u77sofuk3vbk02c2a1zagw1_localhost_zenhubPB to empty address list
2015/03/14 10:28:17 200 2.570254ms POST /api/metrics/store
W0314 10:28:17.835175 00001 proxy.go:167] No remote services available for prxying proxy[{7vizvtv9n083xaeuu7g56lcnn localhost_zenhubPB 8789 49162 192.168.128.62 172.17.0.18 tcp 0 8789}; &{%!s(*net.netFD=&{{10 0 0} 13 2 1 false tcp4 0xc2088fde30 reflect.Value {140137151717784}})}]=>[]
2015/03/14 10:28:19.924094 Registrar received 1 events
W0314 10:28:20.293120 00001 proxy.go:167] No remote services available for prxying proxy[{7vizvtv9n083xaeuu7g56lcnn localhost_zenhubPB 8789 49162 192.168.128.62 172.17.0.18 tcp 0 8789}; &{%!s(*net.netFD=&{{10 0 0} 13 2 1 false tcp4 0xc2088fde30 reflect.Value {140137151717784}})}]=>[]
W0314 10:28:23.489017 00001 controller.go:777] Health check zenhub_answering failed.
W0314 10:28:26.955403 00001 proxy.go:167] No remote services available for prxying proxy[{7vizvtv9n083xaeuu7g56lcnn localhost_zenhubPB 8789 49162 192.168.128.62 172.17.0.18 tcp 0 8789}; &{%!s(*net.netFD=&{{10 0 0} 13 2 1 false tcp4 0xc2088fde30 reflect.Value {140137151717784}})}]=>[]
2015/03/14 10:28:27.424425 Registrar received 1 events
2015/03/14 10:28:32 200 3.580654ms POST /api/metrics/store
W0314 10:28:33.500609 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:28:34.920853 Registrar received 1 events
W0314 10:28:42.742160 00001 proxy.go:167] No remote services available for prxying proxy[{7vizvtv9n083xaeuu7g56lcnn localhost_zenhubPB 8789 49162 192.168.128.62 172.17.0.18 tcp 0 8789}; &{%!s(*net.netFD=&{{10 0 0} 13 2 1 false tcp4 0xc2088fde30 reflect.Value {140137151717784}})}]=>[]
W0314 10:28:43.519874 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:28:44.919274 Registrar received 1 events
2015/03/14 10:28:47 200 1.60677ms POST /api/metrics/store
2015/03/14 10:28:48 200 1.573719ms POST /api/metrics/store
2015/03/14 10:28:52.419121 Registrar received 4 events
W0314 10:28:53.526917 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:29:02 200 2.279195ms POST /api/metrics/store
W0314 10:29:03.530717 00001 controller.go:777] Health check zenhub_answering failed.
W0314 10:29:13.535207 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:29:17 200 1.923281ms POST /api/metrics/store
E0314 10:29:18.676789 00001 proxy.go:249] Error (net.Dial): dial tcp4 172.17.0.66:8789: connection refused
2015/03/14 10:29:19.918942 Registrar received 1 events
W0314 10:29:23.540863 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:29:32 200 5.756727ms POST /api/metrics/store
W0314 10:29:33.545362 00001 controller.go:777] Health check zenhub_answering failed.
W0314 10:29:43.556294 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:29:47 200 1.79325ms POST /api/metrics/store
2015/03/14 10:29:47 200 1.58317ms POST /api/metrics/store
2015/03/14 10:29:49.918825 Registrar received 2 events
W0314 10:29:53.561517 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:30:02 200 2.078321ms POST /api/metrics/store
W0314 10:30:03.569204 00001 controller.go:777] Health check zenhub_answering failed.
W0314 10:30:13.572765 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:30:17 200 1.557761ms POST /api/metrics/store
W0314 10:30:23.576377 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:30:32 200 1.764353ms POST /api/metrics/store
W0314 10:30:33.580832 00001 controller.go:777] Health check zenhub_answering failed.
W0314 10:30:43.600622 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:30:47 200 2.05294ms POST /api/metrics/store
2015/03/14 10:30:47 200 1.994122ms POST /api/metrics/store
2015/03/14 10:30:49.919090 Registrar received 2 events
W0314 10:30:53.612252 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:31:02 200 1.9814ms POST /api/metrics/store
W0314 10:31:03.622636 00001 controller.go:777] Health check zenhub_answering failed.
W0314 10:31:13.627724 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:31:17 200 2.610878ms POST /api/metrics/store
W0314 10:31:23.632425 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:31:32 200 2.17105ms POST /api/metrics/store
2015/03/14 10:31:32 200 3.904716ms POST /api/metrics/store
W0314 10:31:33.636283 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:31:34.919436 Registrar received 4 events
2015/03/14 10:31:37 200 2.175703ms POST /api/metrics/store
2015/03/14 10:31:42 200 2.200454ms POST /api/metrics/store
W0314 10:31:43.641312 00001 controller.go:777] Health check zenhub_answering failed.
2015/03/14 10:31:47 200 2.594214ms POST /api/metrics/store
2015/03/14 10:31:47 200 2.140354ms POST /api/metrics/store
2015/03/14 10:31:49.927850 Registrar received 2 events
2015/03/14 10:31:52 200 2.018419ms POST /api/metrics/store
2015/03/14 10:31:57 200 1.938357ms POST /api/metrics/store
2015/03/14 10:32:02 200 2.159362ms POST /api/metrics/store
2015/03/14 10:32:02 200 1.90941ms POST /api/metrics/store
2015/03/14 10:32:07 200 2.009277ms POST /api/metrics/store
2015/03/14 10:32:12 200 2.084113ms POST /api/metrics/store
2015/03/14 10:32:17 200 6.754303ms POST /api/metrics/store
2015/03/14 10:32:17 200 1.96519ms POST /api/metrics/store
2015/03/14 10:32:22 200 3.151161ms POST /api/metrics/store
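
What also stands out in this log is that the zenhub_answering health check keeps failing and the connection to localhost_zenhubPB (port 8789) is refused, so zenhub itself might be the real problem. A quick way to test that directly (a sketch; the script path is the one the controller sets up above, the service name is an assumption):

# inside the zenping container
serviced service attach zenping
/opt/zenoss/bin/healthchecks/zenhub_answering; echo "exit status: $?"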

regards
Achim



Subject: Having the same issue as well.
Author: [Not Specified]
Posted: 2015-03-15 20:07

Mine was working just fine until I recently updated the Linux system with patches. Then I started getting a critical error as soon as the system rebooted:

"nmap did not execute correctly: ('-iL', '/tmp/zenping_nmap_PkIPYK', '-sn', '-PE', '-n', '--privileged', '--send-ip', '-T5', '--min-rtt-timeout', '1.5s', '--max-rtt-timeout', '1.5s', '--max-retries', '1', '--min-rate', '7', '-oX', '-')"

One place I looked said to try these commands, which worked for others, but did not work for me:
chown -c root:zenoss /opt/zenoss/bin/pyraw
chown -c root:zenoss /opt/zenoss/bin/zensocket
chown -c root:zenoss /opt/zenoss/bin/nmap
chmod -c 04750 /opt/zenoss/bin/pyraw
chmod -c 04750 /opt/zenoss/bin/zensocket
chmod -c 04750 /opt/zenoss/bin/nmap
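
A quick way to verify whether those commands actually took effect, and whether something else is defeating the setuid bits (a sketch; the nosuid check is only a guess on my part):

# the owner permissions should contain an "s" (setuid), e.g. -rwsr-x---
ls -l /opt/zenoss/bin/pyraw /opt/zenoss/bin/zensocket /opt/zenoss/bin/nmap
# if the filesystem holding /opt is mounted with the nosuid option, the setuid bit is ignored
mount | grep /opt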

I am still stumped on this, but I know it was caused by the system updates.

regards,
Kevin



Subject: which OS, Centos/RHEL or
Author: Andrew Kirch
Posted: 2015-03-16 08:43

Which OS: CentOS/RHEL or Ubuntu?

Andrew Kirch

akirch@gvit.com

Need Zenoss support, consulting or custom development? Look no further. Email or PM me!

Ready for Distributed Topology (collectors) for Zenoss 5? Coming May 1st from GoVanguard.



Subject: Hi, CentOS 7.0
Author: [Not Specified]
Posted: 2015-03-16 10:36

Hi,

CentOS 7.0.

There might be a problem with the zenoss user's rights inside the Zenoss.core container
(serviced service attach Zenoss.core). If I switch to the zenoss user and try to execute
ping, the following error message appears:
effective uid is not 0, is sudo installed setuid root
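
That message usually means a binary that should be setuid root is not (or sits on a filesystem where setuid is ignored). A quick check from inside the container whether the zenoss user can open a raw ICMP socket at all, which is what ping and nmap ultimately need (a sketch):

# should print nothing on success; "Operation not permitted" means the
# zenoss user has no raw-socket privilege and depends on the setuid helpers
su - zenoss -c "python -c 'import socket; socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)'"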

regards
Achim



Subject: My error might be different, I thought it looked similar.
Author: [Not Specified]
Posted: 2015-03-16 17:32

I have the same error with zenping, and the other two services (zensyslog and zentrap) won't start. I tried the setuid fix from my previous post, but it failed.

I am using CentOS 6.6, and the system didn't have any issues until I rebooted after yum updates. When I run zenping as the zenoss user, this is what I get:

zenping run -d clweb-test -v10
2015-03-16 16:11:04,547 DEBUG zen.collector.scheduler: add task NmapPingTask, using 60 second interval
2015-03-16 16:11:04,547 DEBUG zen.zenping: Starting PBDaemon initialization
2015-03-16 16:11:04,547 INFO zen.zenping: Connecting to localhost:8789
2015-03-16 16:11:04,548 DEBUG zen.pbclientfactory: Starting connection...
2015-03-16 16:11:04,548 DEBUG zen.zenping: Logging in as admin
2015-03-16 16:11:04,549 DEBUG zen.pbclientfactory: Connected
2015-03-16 16:11:04,550 DEBUG zen.pbclientfactory: Cancelling connect timeout
2015-03-16 16:11:04,550 DEBUG zen.pbclientfactory: Sending credentials
2015-03-16 16:11:04,553 DEBUG zen.pbclientfactory: Cancelling connect timeout
2015-03-16 16:11:04,553 INFO zen.zenping: Connected to ZenHub
2015-03-16 16:11:04,553 DEBUG zen.zenping: Setting up initial services: EventService, Products.ZenHub.services.PingPerformanceConfig
2015-03-16 16:11:04,554 DEBUG zen.zenping: Chaining getInitialServices with d2
2015-03-16 16:11:04,556 DEBUG zen.zenping: Loaded service EventService from zenhub
2015-03-16 16:11:04,556 DEBUG zen.zenping: Loaded service Products.ZenHub.services.PingPerformanceConfig from zenhub
2015-03-16 16:11:04,556 DEBUG zen.zenping: Queued event (total of 1) {'rcvtime': 1426543864.556478, 'severity': 0, 'component': 'zenping', 'agent': 'zenping', 'summary': 'started', 'manager': 'dew.cablelabs.com', 'device': 'localhost', 'eventClass': '/App/Start', 'monitor': 'localhost'}
2015-03-16 16:11:04,557 DEBUG zen.zenping: Sending 1 events, 0 perf events, 0 heartbeats
2015-03-16 16:11:04,557 DEBUG zen.zenping: Calling connected.
2015-03-16 16:11:04,557 DEBUG zen.collector.config: Heartbeat timeout set to 900s
2015-03-16 16:11:04,558 DEBUG zen.collector.scheduler: add task configLoader, using 1200 second interval
2015-03-16 16:11:04,558 DEBUG zen.zenping: Performing periodic maintenance
2015-03-16 16:11:04,559 DEBUG zen.collector.scheduler: Task configLoader starting (waited 0 seconds) on 1200 second intervals
2015-03-16 16:11:04,559 DEBUG zen.collector.scheduler: Task configLoader changing state from IDLE to QUEUED
2015-03-16 16:11:04,559 DEBUG zen.collector.scheduler: Task configLoader changing state from QUEUED to RUNNING
2015-03-16 16:11:04,559 DEBUG zen.collector.config: configLoader gathering configuration
2015-03-16 16:11:04,559 DEBUG zen.collector.config: Fetching daemon configuration properties
2015-03-16 16:11:04,576 DEBUG zen.collector.scheduler: Task configLoader changing state from RUNNING to FETCHING_MISC_CONFIG
2015-03-16 16:11:04,577 DEBUG zen.zenping: Updated configCycleInterval preference to 360
2015-03-16 16:11:04,577 DEBUG zen.zenping: Changing config task interval from 20 to 360 minutes
2015-03-16 16:11:04,577 DEBUG zen.collector.scheduler: Stopping task configLoader,
2015-03-16 16:11:04,577 DEBUG zen.collector.scheduler: call finished LoopingCall<1200>(CallableTask: configLoader, *(), **{}) : LoopingCall<1200>(CallableTask: configLoader, *(), **{})
2015-03-16 16:11:04,577 INFO zen.collector.scheduler: Detailed Task Statistics:
configLoader Current State: FETCHING_MISC_CONFIG Successful_Runs: 1 Failed_Runs: 0 Missed_Runs: 0

Detailed Task States:
configLoader State: RUNNING Total: 1 Total Elapsed: 0.0174 Min: 0.0174 Max: 0.0174 Mean: 0.0174 StdDev: 0.0000
configLoader State: QUEUED Total: 1 Total Elapsed: 0.0004 Min: 0.0004 Max: 0.0004 Mean: 0.0004 StdDev: 0.0000

2015-03-16 16:11:04,577 DEBUG zen.collector.config: Heartbeat timeout set to 900s
2015-03-16 16:11:04,577 DEBUG zen.collector.scheduler: add task configLoader, using 21600 second interval
2015-03-16 16:11:04,578 DEBUG zen.zenping: Updated defaultRRDCreateCommand preference to ('RRA:AVERAGE:0.5:1:600', 'RRA:AVERAGE:0.5:6:600', 'RRA:AVERAGE:0.5:24:600', 'RRA:AVERAGE:0.5:288:600', 'RRA:MAX:0.5:6:600', 'RRA:MAX:0.5:24:600', 'RRA:MAX:0.5:288:600')
2015-03-16 16:11:04,578 DEBUG zen.collector.config: Fetching threshold classes
2015-03-16 16:11:04,581 DEBUG zen.zenping: Loading classes ['Products.ZenModel.MinMaxThreshold', 'Products.ZenModel.ValueChangeThreshold', 'ZenPacks.community.deviceAdvDetail.thresholds.StatusThreshold']
2015-03-16 16:11:04,583 DEBUG zen.collector.config: Fetching collector thresholds
2015-03-16 16:11:04,594 DEBUG zen.thresholds: Updating threshold ('high event queue', ('localhost collector', ''))
2015-03-16 16:11:04,594 DEBUG zen.thresholds: Updating threshold ('zenmodeler cycle time', ('localhost collector', ''))
2015-03-16 16:11:04,594 DEBUG zen.collector.config: Fetching configurations
2015-03-16 16:11:04,608 DEBUG zen.zenping: updateDeviceConfigs: updatedConfigs=['clweb-test']
2015-03-16 16:11:04,608 DEBUG zen.zenping: Processing configuration for clweb-test
2015-03-16 16:11:04,608 DEBUG zen.daemon: DummyListener: configuration clweb-test added
2015-03-16 16:11:04,608 DEBUG zen.collector.tasks: Splitting config clweb-test
2015-03-16 16:11:04,608 DEBUG zen.NmapPingTask: Creating an IPv4 task: 10.5.0.39
2015-03-16 16:11:04,608 DEBUG zen.zenping: Tasks for config clweb-test: {'clweb-test 60 10.5.0.39': }
2015-03-16 16:11:04,608 DEBUG zen.collector.scheduler: add task clweb-test 60 10.5.0.39, using 3153600000 second interval
2015-03-16 16:11:04,609 DEBUG zen.collector.scheduler: Pausing task clweb-test 60 10.5.0.39
2015-03-16 16:11:04,609 DEBUG zen.collector.scheduler: Task clweb-test 60 10.5.0.39 starting (waited 0 seconds) on 3153600000 second intervals
2015-03-16 16:11:04,609 DEBUG zen.collector.scheduler: Task clweb-test 60 10.5.0.39 changing state from IDLE to PAUSED
2015-03-16 16:11:04,609 DEBUG zen.zenping: purgeOmittedDevices: deletedConfigs=
2015-03-16 16:11:04,609 DEBUG zen.collector.scheduler: Task configLoader finished, result: 'Configuration loaded'
2015-03-16 16:11:09,552 DEBUG zen.collector.scheduler: Task NmapPingTask starting (waited 5 seconds) on 60 second intervals
2015-03-16 16:11:09,552 DEBUG zen.collector.scheduler: Task NmapPingTask changing state from IDLE to QUEUED
2015-03-16 16:11:09,552 DEBUG zen.collector.scheduler: Task NmapPingTask changing state from QUEUED to RUNNING
2015-03-16 16:11:09,552 DEBUG zen.NmapPingTask: ---- BatchPingDevices ----
2015-03-16 16:11:09,553 DEBUG zen.zenping: Queued event (total of 1) {'rcvtime': 1426543869.552975, 'manager': 'dew.cablelabs.com', 'eventGroup': 'Ping', 'severity': 0, 'device': 'zenping', 'eventClass': '/Status/Ping', 'summary': 'nmap was found', 'monitor': 'localhost', 'agent': 'zenping', 'eventKey': 'nmap_missing'}
2015-03-16 16:11:09,553 DEBUG zen.zenping: Queued event (total of 2) {'rcvtime': 1426543869.55373, 'manager': 'dew.cablelabs.com', 'eventGroup': 'Ping', 'severity': 0, 'device': 'zenping', 'eventClass': '/Status/Ping', 'summary': 'ping cycle time (60.0 seconds) is fine', 'monitor': 'localhost', 'agent': 'zenping', 'eventKey': 'cycle_interval'}
2015-03-16 16:11:09,553 DEBUG zen.NmapPingTask: executing nmap -iL /tmp/zenping_nmap_h7XoRz -sn -PE -n --privileged --send-ip -T5 --min-rtt-timeout 1.5s --max-rtt-timeout 1.5s --max-retries 1 --min-rate 1 -oX -
2015-03-16 16:11:09,560 DEBUG zen.zenping: Sending 2 events, 0 perf events, 0 heartbeats
2015-03-16 16:11:09,572 DEBUG zen.NmapPingTask: input file: 10.5.0.39

2015-03-16 16:11:09,572 DEBUG zen.NmapPingTask: stdout:







2015-03-16 16:11:09,572 DEBUG zen.NmapPingTask: stderr: socket troubles in Init: Operation not permitted (1)

2015-03-16 16:11:09,576 DEBUG zen.zenping: Queued event (total of 1) {'rcvtime': 1426543869.575965, 'manager': 'dew.cablelabs.com', 'eventGroup': 'Ping', 'severity': 5, 'device': 'zenping', 'eventClass': '/Status/Ping', 'summary': "nmap did not execute correctly: ('-iL', '/tmp/zenping_nmap_h7XoRz', '-sn', '-PE', '-n', '--privileged', '--send-ip', '-T5', '--min-rtt-timeout', '1.5s', '--max-rtt-timeout', '1.5s', '--max-retries', '1', '--min-rate', '1', '-oX', '-')", 'monitor': 'localhost', 'agent': 'zenping', 'eventKey': 'nmap_execution'}
2015-03-16 16:11:09,576 DEBUG zen.collector.scheduler: Task NmapPingTask finished, result: None
2015-03-16 16:11:09,576 DEBUG zen.collector.scheduler: Task NmapPingTask changing state from RUNNING to IDLE
2015-03-16 16:11:10,836 DEBUG zen.collector.scheduler: tasks to clean KeyedSet([])
2015-03-16 16:11:10,836 DEBUG zen.collector.scheduler: Cleanup on task configLoader
2015-03-16 16:11:10,837 DEBUG zen.collector.scheduler: Scheduler._cleanupTaskComplete: result=None task.name=configLoader
2015-03-16 16:11:14,563 DEBUG zen.zenping: Sending 1 events, 0 perf events, 0 heartbeats
2015-03-16 16:12:09,557 DEBUG zen.collector.scheduler: Task NmapPingTask changing state from IDLE to QUEUED
2015-03-16 16:12:09,558 DEBUG zen.collector.scheduler: Task NmapPingTask changing state from QUEUED to RUNNING
2015-03-16 16:12:09,558 DEBUG zen.NmapPingTask: ---- BatchPingDevices ----

it just keeps looping...
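
The interesting part is the stderr line "socket troubles in Init: Operation not permitted (1)": nmap cannot open a raw socket when started by the zenoss user. Re-running the exact command from the log by hand should confirm it is purely a privilege problem (a sketch; the target address is copied from the log above, the temp file name is made up):

# as zenoss this should fail with the same "Operation not permitted";
# run as root, the very same command should succeed
echo 10.5.0.39 > /tmp/ping_targets
/opt/zenoss/bin/nmap -iL /tmp/ping_targets -sn -PE -n --privileged --send-ip -T5 --min-rtt-timeout 1.5s --max-rtt-timeout 1.5s --max-retries 1 --min-rate 1 -oX -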



Subject: put that in a bug, lets see
Author: Andrew Kirch
Posted: 2015-03-17 17:19

Put that in a bug; let's see what the devs say. I'm running on Ubuntu and haven't seen this.
http://jira.zenoss.com is our bug tracker. Please reply with the bug # and I'll make it public and follow it up.

Andrew Kirch

akirch@gvit.com

Need Zenoss support, consulting or custom development? Look no further. Email or PM me!

Ready for Distributed Topology (collectors) for Zenoss 5? Coming May 1st from GoVanguard.



Subject: Hi,
Author: [Not Specified]
Posted: 2015-03-18 03:34

Hi,

the bug # is zen-17099.

regards
Achim



Subject: I also submitted a ticket
Author: [Not Specified]
Posted: 2015-03-19 11:52

zen-17098



Subject: Hi,
Author: [Not Specified]
Posted: 2015-04-03 03:23

Hi,

unfortunately there has been no further response in the bug tracker.
What a pity.

regards
Achim


