TECHZEN Zenoss User Community ARCHIVE  

Zenoss 5.0.2 -> zenping problem

Subject: Zenoss 5.0.2 -> zenping problem
Author: [Not Specified]
Posted: 2015-04-22 04:39

Hello,

we have installed Zenoss 5.0.2 on CentOS 7.1 with the help of core-autodeploy.sh, but there is once again a zenping problem.
We get the following error message:
nmap did not execute correctly: ('-iL', '/tmp/zenping_nmap_1IU2oa', '-sn', '-PE', '-n', '--privileged', '--send-ip', '-T5', '--min-rtt-timeout', '1.5s', '--max-rtt-timeout', '1.5s', '--max-retries', '1', '--min-rate', '1', '-oX

The zenping logfile contains the following messages:
I0422 08:58:26.140859 00001 vif.go:58] vif subnet is: 10.3
I0422 08:58:26.141422 00001 lbClient.go:77] ControlPlaneAgent.GetServiceInstance()
I0422 08:58:26.200418 00001 controller.go:291] Allow container to container connections: true
I0422 08:58:26.200979 00001 controller.go:226] Wrote config file /opt/zenoss/etc/global.conf
I0422 08:58:26.204199 00001 controller.go:197] Successfully ran command:'&{/usr/bin/chown [chown zenoss:zenoss /opt/zenoss/etc/global.conf] [] [] 0xc2080b91c0 exit status 0 true [0xc20802e120 0xc20802e140 0xc20802e140] [0xc20802e120 0xc20802e140] [0xc20802e138] [0x746510] 0xc20800baa0}' output:
I0422 08:58:26.206450 00001 controller.go:197] Successfully ran command:'&{/usr/bin/chmod [chmod 660 /opt/zenoss/etc/global.conf] [] [] 0xc2080b9b80 exit status 0 true [0xc20802e338 0xc20802e358 0xc20802e358] [0xc20802e338 0xc20802e358] [0xc20802e350] [0x746510] 0xc20800bb60}' output:
I0422 08:58:26.206678 00001 controller.go:226] Wrote config file /opt/zenoss/etc/zenping.conf
I0422 08:58:26.208083 00001 controller.go:197] Successfully ran command:'&{/usr/bin/chown [chown zenoss:zenoss /opt/zenoss/etc/zenping.conf] [] [] 0xc2080b9e80 exit status 0 true [0xc20802e3a0 0xc20802e3c0 0xc20802e3c0] [0xc20802e3a0 0xc20802e3c0] [0xc20802e3b8] [0x746510] 0xc20800bd40}' output:
I0422 08:58:26.209238 00001 controller.go:197] Successfully ran command:'&{/usr/bin/chmod [chmod 0664 /opt/zenoss/etc/zenping.conf] [] [] 0xc20801ede0 exit status 0 true [0xc20802e3f0 0xc20802e410 0xc20802e410] [0xc20802e3f0 0xc20802e410] [0xc20802e408] [0x746510] 0xc20800be00}' output:
I0422 08:58:26.215990 00001 logstash.go:55] Using logstash resourcePath: /usr/local/serviced/resources/logstash
I0422 08:58:26.216199 00001 controller.go:226] Wrote config file /etc/logstash-forwarder.conf
I0422 08:58:26.216313 00001 controller.go:380] pushing network stats to: http://localhost:22350/api/metrics/store
I0422 08:58:26.216413 00001 instance.go:79] about to execute: /usr/local/serviced/resources/logstash/logstash-forwarder , [-idle-flush-time=5s -old-files-hours=26280 -config /etc/logstash-forwarder.conf][4]
I0422 08:58:26.217523 00001 endpoint.go:132] c.zkInfo: {ZkDSN:{"Servers":["192.168.128.222:2181"],"Timeout":15000000000} PoolID:default}
I0422 08:58:26.217878 00001 endpoint.go:173] getting service state: 4n6oju8dqy7lcchz67q354wy6 0
2015/04/22 08:58:26 publisher init
2015/04/22 08:58:26
{
"network": {
"servers": [ "127.0.0.1:5043" ],
"ssl certificate": "/usr/local/serviced/resources/logstash/logstash-forwarder.crt",
"ssl key": "/usr/local/serviced/resources/logstash/logstash-forwarder.key",
"ssl ca": "/usr/local/serviced/resources/logstash/logstash-forwarder.crt",
"timeout": 15
},
"files": [

{
"paths": [ "/opt/zenoss/log/zenping.log" ],
"fields": {"instance":"0","monitor":"localhost","service":"4n6oju8dqy7lcchz67q354wy6","type":"zenping"}
}
]
}
2015/04/22 08:58:26.219866 Launching harvester on new file: /opt/zenoss/log/zenping.log
2015/04/22 08:58:26.219903 Loading client ssl certificate: /usr/local/serviced/resources/logstash/logstash-forwarder.crt and /usr/local/serviced/resources/logstash/logstash-forwarder.key
2015/04/22 08:58:26.222248 Starting harvester: /opt/zenoss/log/zenping.log
2015/04/22 08:58:26.222278 Current file offset: 746
I0422 08:58:26.229448 00001 endpoint.go:306] cached imported endpoint[f51a7xs1uma8h3dgrd8zwyvfq_localhost_zenhubPB]: {endpointID:localhost_zenhubPB instanceID:0 virtualAddress: purpose:import port:8789}
I0422 08:58:26.229495 00001 endpoint.go:306] cached imported endpoint[f51a7xs1uma8h3dgrd8zwyvfq_localhost_redis]: {endpointID:localhost_redis instanceID:0 virtualAddress: purpose:import port:6379}
I0422 08:58:26.229512 00001 endpoint.go:306] cached imported endpoint[f51a7xs1uma8h3dgrd8zwyvfq_controlplane_consumer]: {endpointID:controlplane_consumer instanceID:0 virtualAddress: purpose:import port:8444}
I0422 08:58:26.229525 00001 controller.go:393] command: [su - zenoss -c "/opt/zenoss/bin/zenping run -c --duallog --monitor localhost "] [1]
I0422 08:58:26.237131 00001 controller.go:859] Got service endpoints for 4n6oju8dqy7lcchz67q354wy6: map[tcp:443:[{ServiceID:controlplane Application:controlplane ContainerPort:443 HostPort:443 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:443}] tcp:5042:[{ServiceID:controlplane_logstash_tcp Application:controlplane_logstash_tcp ContainerPort:5042 HostPort:5042 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5042}] tcp:5043:[{ServiceID:controlplane_logstash_lumberjack Application:controlplane_logstash_lumberjack ContainerPort:5043 HostPort:5043 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5043}] tcp:8444:[{ServiceID:controlplane_consumer Application:controlplane_consumer ContainerPort:8443 HostPort:8443 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:8444}]]
I0422 08:58:26.237226 00001 controller.go:871] changing key from tcp:443 to f51a7xs1uma8h3dgrd8zwyvfq_controlplane: {ServiceID:controlplane Application:controlplane ContainerPort:443 HostPort:443 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:443}
I0422 08:58:26.237254 00001 controller.go:871] changing key from tcp:5042 to f51a7xs1uma8h3dgrd8zwyvfq_controlplane_logstash_tcp: {ServiceID:controlplane_logstash_tcp Application:controlplane_logstash_tcp ContainerPort:5042 HostPort:5042 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5042}
I0422 08:58:26.237275 00001 controller.go:871] changing key from tcp:5043 to f51a7xs1uma8h3dgrd8zwyvfq_controlplane_logstash_lumberjack: {ServiceID:controlplane_logstash_lumberjack Application:controlplane_logstash_lumberjack ContainerPort:5043 HostPort:5043 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5043}
I0422 08:58:26.237293 00001 controller.go:871] changing key from tcp:8444 to f51a7xs1uma8h3dgrd8zwyvfq_controlplane_consumer: {ServiceID:controlplane_consumer Application:controlplane_consumer ContainerPort:8443 HostPort:8443 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:8444}
I0422 08:58:26.237333 00001 endpoint.go:620] Attempting port map for: f51a7xs1uma8h3dgrd8zwyvfq_controlplane -> {ServiceID:controlplane Application:controlplane ContainerPort:443 HostPort:443 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:443}
I0422 08:58:26.237412 00001 endpoint.go:640] Success binding port: f51a7xs1uma8h3dgrd8zwyvfq_controlplane -> proxy[{controlplane controlplane 443 443 192.168.128.222 127.0.0.1 tcp 0 443}; &{%!s(*net.netFD=&{{0 0 0} 7 2 1 false tcp4 0xc20809a510 {139708573918320}})}]=>[]
I0422 08:58:26.237599 00001 endpoint.go:306] cached imported endpoint[f51a7xs1uma8h3dgrd8zwyvfq_controlplane]: {endpointID:controlplane instanceID:0 virtualAddress: purpose:import port:443}
I0422 08:58:26.237639 00001 endpoint.go:620] Attempting port map for: f51a7xs1uma8h3dgrd8zwyvfq_controlplane_logstash_tcp -> {ServiceID:controlplane_logstash_tcp Application:controlplane_logstash_tcp ContainerPort:5042 HostPort:5042 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5042}
I0422 08:58:26.237693 00001 endpoint.go:640] Success binding port: f51a7xs1uma8h3dgrd8zwyvfq_controlplane_logstash_tcp -> proxy[{controlplane_logstash_tcp controlplane_logstash_tcp 5042 5042 192.168.128.222 127.0.0.1 tcp 0 5042}; &{%!s(*net.netFD=&{{0 0 0} 8 2 1 false tcp4 0xc20809a8d0 {139708573917360}})}]=>[]
I0422 08:58:26.237794 00001 endpoint.go:306] cached imported endpoint[f51a7xs1uma8h3dgrd8zwyvfq_controlplane_logstash_tcp]: {endpointID:controlplane_logstash_tcp instanceID:0 virtualAddress: purpose:import port:5042}
I0422 08:58:26.237832 00001 endpoint.go:620] Attempting port map for: f51a7xs1uma8h3dgrd8zwyvfq_controlplane_logstash_lumberjack -> {ServiceID:controlplane_logstash_lumberjack Application:controlplane_logstash_lumberjack ContainerPort:5043 HostPort:5043 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:5043}
I0422 08:58:26.237887 00001 endpoint.go:640] Success binding port: f51a7xs1uma8h3dgrd8zwyvfq_controlplane_logstash_lumberjack -> proxy[{controlplane_logstash_lumberjack controlplane_logstash_lumberjack 5043 5043 192.168.128.222 127.0.0.1 tcp 0 5043}; &{%!s(*net.netFD=&{{0 0 0} 10 2 1 false tcp4 0xc20809adb0 {139708573917168}})}]=>[]
I0422 08:58:26.237986 00001 endpoint.go:306] cached imported endpoint[f51a7xs1uma8h3dgrd8zwyvfq_controlplane_logstash_lumberjack]: {endpointID:controlplane_logstash_lumberjack instanceID:0 virtualAddress: purpose:import port:5043}
I0422 08:58:26.238024 00001 endpoint.go:620] Attempting port map for: f51a7xs1uma8h3dgrd8zwyvfq_controlplane_consumer -> {ServiceID:controlplane_consumer Application:controlplane_consumer ContainerPort:8443 HostPort:8443 HostIP:192.168.128.222 ContainerIP:127.0.0.1 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:8444}
I0422 08:58:26.238115 00001 endpoint.go:640] Success binding port: f51a7xs1uma8h3dgrd8zwyvfq_controlplane_consumer -> proxy[{controlplane_consumer controlplane_consumer 8443 8443 192.168.128.222 127.0.0.1 tcp 0 8444}; &{%!s(*net.netFD=&{{0 0 0} 11 2 1 false tcp4 0xc20809b170 {139708573916976}})}]=>[]
I0422 08:58:26.238217 00001 endpoint.go:306] cached imported endpoint[f51a7xs1uma8h3dgrd8zwyvfq_controlplane_consumer]: {endpointID:controlplane_consumer instanceID:0 virtualAddress: purpose:import port:8443}
I0422 08:58:26.238346 00001 controller.go:665] No prereqs to pass.
I0422 08:58:26.251184 00001 endpoint.go:412] Starting watch for tenantEndpointKey f51a7xs1uma8h3dgrd8zwyvfq_localhost_redis:
I0422 08:58:26.251227 00001 endpoint.go:412] Starting watch for tenantEndpointKey f51a7xs1uma8h3dgrd8zwyvfq_localhost_zenhubPB:
I0422 08:58:26.252153 00001 controller.go:721] Kicking off health check redis_answering.
I0422 08:58:26.252176 00001 controller.go:722] Setting up health check: /opt/zenoss/bin/healthchecks/redis_answering
I0422 08:58:26.252187 00001 controller.go:721] Kicking off health check running.
I0422 08:58:26.252195 00001 controller.go:722] Setting up health check: pgrep -fu zenoss zenping.py > /dev/null
I0422 08:58:26.252203 00001 controller.go:721] Kicking off health check zenhub_answering.
I0422 08:58:26.252210 00001 controller.go:722] Setting up health check: /opt/zenoss/bin/healthchecks/zenhub_answering
I0422 08:58:26.253352 00001 controller.go:612] Starting service process.
I0422 08:58:26.253402 00001 instance.go:79] about to execute: /bin/sh , [-c exec su - zenoss -c "/opt/zenoss/bin/zenping run -c --duallog --monitor localhost "][2]
I0422 08:58:26.268762 00001 endpoint.go:620] Attempting port map for: f51a7xs1uma8h3dgrd8zwyvfq_localhost_redis -> {ServiceID:1umnkeablavhxik88p70u109i Application:localhost_redis ContainerPort:6379 HostPort:49153 HostIP:192.168.128.222 ContainerIP:172.17.0.6 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:6379}
I0422 08:58:26.268881 00001 endpoint.go:640] Success binding port: f51a7xs1uma8h3dgrd8zwyvfq_localhost_redis -> proxy[{1umnkeablavhxik88p70u109i localhost_redis 6379 49153 192.168.128.222 172.17.0.6 tcp 0 6379}; &{%!s(*net.netFD=&{{0 0 0} 12 2 1 false tcp4 0xc2089e3aa0 {139708573916784}})}]=>[]
I0422 08:58:26.269167 00001 endpoint.go:620] Attempting port map for: f51a7xs1uma8h3dgrd8zwyvfq_localhost_zenhubPB -> {ServiceID:81m2l8rfetnvo319rlzctalcg Application:localhost_zenhubPB ContainerPort:8789 HostPort:49167 HostIP:192.168.128.222 ContainerIP:172.17.0.25 Protocol:tcp VirtualAddress: InstanceID:0 ProxyPort:8789}
I0422 08:58:26.269253 00001 endpoint.go:640] Success binding port: f51a7xs1uma8h3dgrd8zwyvfq_localhost_zenhubPB -> proxy[{81m2l8rfetnvo319rlzctalcg localhost_zenhubPB 8789 49167 192.168.128.222 172.17.0.25 tcp 0 8789}; &{%!s(*net.netFD=&{{0 0 0} 13 2 1 false tcp4 0xc2089e3e00 {139708573916592}})}]=>[]
2015/04/22 08:58:26.446934 Setting trusted CA from file: /usr/local/serviced/resources/logstash/logstash-forwarder.crt
2015/04/22 08:58:26.447637 Connecting to 127.0.0.1:5043 (127.0.0.1)
2015/04/22 08:58:26.512281 Connected to 127.0.0.1
Trying to connect to logstash server... 127.0.0.1:5042
Connected to logstash server.
2015/04/22 08:58:37 200 6.542704ms POST /api/metrics/store
2015/04/22 08:58:38.724542 Registrar received 4 events
2015/04/22 08:58:41 200 2.977019ms POST /api/metrics/store
2015/04/22 08:58:42 200 2.766809ms POST /api/metrics/store
W0422 08:58:46.273518 00001 controller.go:777] Health check zenhub_answering failed.
2015/04/22 08:58:47 200 3.546938ms POST /api/metrics/store
2015/04/22 08:58:51 200 3.670114ms POST /api/metrics/store
2015/04/22 08:58:56 200 3.23085ms POST /api/metrics/store
W0422 08:58:56.280145 00001 controller.go:777] Health check zenhub_answering failed.
2015/04/22 08:58:56 200 2.950215ms POST /api/metrics/store
2015/04/22 08:59:01 200 3.238064ms POST /api/metrics/store
W0422 08:59:06.287870 00001 controller.go:777] Health check zenhub_answering failed.
2015/04/22 08:59:06 200 6.416951ms POST /api/metrics/store
2015/04/22 08:59:11 200 2.520498ms POST /api/metrics/store
2015/04/22 08:59:11 200 2.818993ms POST /api/metrics/store
2015/04/22 08:59:16 200 4.269792ms POST /api/metrics/store
2015/04/22 08:59:21 200 3.735897ms POST /api/metrics/store
2015/04/22 08:59:26 200 2.907183ms POST /api/metrics/store
2015/04/22 08:59:26 200 2.78844ms POST /api/metrics/store
2015/04/22 08:59:31 200 3.452738ms POST /api/metrics/store
2015/04/22 08:59:36 200 4.483053ms POST /api/metrics/store
2015/04/22 08:59:38.727136 Registrar received 2 events
2015/04/22 08:59:41 200 4.413207ms POST /api/metrics/store
........

If we use the predefined ping command, we get the following error message:

==== hostname.domain.de ====
ping -c2 192.168.10.240
sudo: effective uid is not 0, is sudo installed setuid root
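The "effective uid is not 0" message usually means sudo's setuid-root bit is either missing or being ignored. As a diagnostic sketch (the helper name is ours, and the path assumes a standard CentOS layout), the bit can be checked like this:

```shell
#!/bin/sh
# Check whether a binary is setuid root. "sudo: effective uid is not 0"
# typically means this check fails, or the setuid bit is present but
# ignored by the kernel (e.g. the filesystem is mounted nosuid).
is_setuid_root() {
    owner=$(stat -c '%u' "$1") || return 2   # numeric owner uid
    mode=$(stat -c '%a' "$1")                # octal mode, e.g. 4755
    [ "$owner" -eq 0 ] || return 1           # must be owned by root
    # setuid is the 4-bit in the leading digit: 4xxx, 5xxx, 6xxx, 7xxx
    case "$mode" in
        4???|5???|6???|7???) return 0 ;;
        *) return 1 ;;
    esac
}

is_setuid_root /usr/bin/sudo \
    && echo "sudo is setuid root" \
    || echo "sudo is NOT effectively setuid root"
```

If the mode already reads `4755 root` but sudo still complains, the filesystem itself is the next suspect.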

Any ideas?

regards
Achim



Subject: Hi,
Author: [Not Specified]
Posted: 2015-04-23 03:02

Hi,

the problem was the underlying filesystem: it was mounted with the nosuid option. :-)
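For anyone hitting the same symptom: a nosuid mount silently disables setuid binaries such as sudo and the privileged nmap that zenping relies on. A minimal sketch for checking this (the helper name is illustrative; it parses /proc/mounts with a longest-prefix heuristic):

```shell
#!/bin/sh
# Report whether the filesystem holding a given path is mounted nosuid,
# which silently disables setuid binaries like sudo and privileged nmap.
has_nosuid() {
    # Pick the longest mount point that is a prefix of the path and
    # check its mount options for "nosuid".
    awk -v path="$1" '
        index(path, $2) == 1 && length($2) > best {
            best = length($2); opts = $4
        }
        END { exit ((opts ~ /(^|,)nosuid(,|$)/) ? 0 : 1) }
    ' /proc/mounts
}

if has_nosuid /usr/bin/sudo; then
    echo "/usr/bin/sudo is on a nosuid filesystem"
else
    echo "nosuid not set for /usr/bin/sudo"
fi
```

Remounting the affected filesystem without nosuid (and fixing /etc/fstab so it survives a reboot) restores the setuid behavior.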

regards
Achim


