Subject: | Failure: twisted.conch.error.ConchError: ('Channel closed.', None) |
Author: | [Not Specified] |
Posted: | 2015-06-25 22:10 |
Hi,
I'm running my first install of Zenoss 5, monitoring two Linux servers. One of them (Ubuntu 12.04.4 LTS) works great. The second one (CentOS 7) fails to model 80% of the time with errors about a closed channel from what I believe is the SSH implementation:
2015-06-26 02:55:42,474 INFO zen.ZenModeler: Running 2 clients
Unhandled error in Deferred:
Unhandled Error
Traceback (most recent call last):
Failure: twisted.conch.error.ConchError: ('Channel closed.', None)
Unhandled error in Deferred:
Unhandled Error
Traceback (most recent call last):
Failure: twisted.conch.error.ConchError: ('Channel closed.', None)
Unhandled error in Deferred:
Unhandled Error
Traceback (most recent call last):
Failure: twisted.conch.error.ConchError: ('Channel closed.', None)
Unhandled error in Deferred:
Unhandled Error
Traceback (most recent call last):
Failure: twisted.conch.error.ConchError: ('Channel closed.', None)
2015-06-26 02:55:42,568 INFO zen.CmdClient: command client finished collection for hypervisor.neumann.local
2015-06-26 02:55:42,569 WARNING zen.ZenModeler: The plugin zenoss.cmd.df returned no results.
2015-06-26 02:55:42,569 WARNING zen.ZenModeler: The plugin zenoss.cmd.uname returned no results.
2015-06-26 02:55:42,569 WARNING zen.ZenModeler: The plugin zenoss.cmd.uname_a returned no results.
2015-06-26 02:55:42,569 WARNING zen.ZenModeler: The plugin zenoss.cmd.linux.cpuinfo returned no results.
Any ideas on how to troubleshoot this?
Thanks,
-Doug
Subject: | I found in some docs how to |
Author: | [Not Specified] |
Posted: | 2015-06-25 23:05 |
I found in some docs how to run the modeler at the command line and captured the failure with some debug output. Details at http://pastebin.com/YV2X1DYg (scroll straight to the bottom... I clipped it after the errors). From the debug output, I'm thinking the other end is shutting down my SSH session for some reason. I'll go look in the logs over there.
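For reference, the shape of the command was roughly this (run as the zenoss user; the device name is from my setup, and -v10 raises logging to debug level):

su - zenoss
zenmodeler run -v10 --device=hypervisor.neumann.local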
Subject: | I found a bunch of errors in |
Author: | [Not Specified] |
Posted: | 2015-06-26 00:10 |
I found a bunch of errors in the logs, but I'm not sure what to do about them. I've run "yum update", which I believe means I've got all the latest patches, but the errors persist. Here's what I'm seeing:
Jun 26 01:06:53 hypervisor.neumann.local sshd[6083]: Accepted password for monitor from 172.17.0.65 port 40688 ssh2
Jun 26 01:06:53 hypervisor.neumann.local sshd[6083]: pam_unix(sshd:session): session opened for user monitor by (uid=0)
Jun 26 01:06:53 hypervisor.neumann.local sshd[6089]: fatal: mm_request_receive_expect: read: rtype 115 != type 125
Jun 26 01:06:53 hypervisor.neumann.local sshd[6087]: fatal: mm_request_receive_expect: read: rtype 125 != type 115
Jun 26 01:06:53 hypervisor.neumann.local sshd[6089]: fatal: mm_request_receive_expect: read: rtype 125 != type 123
Jun 26 01:06:53 hypervisor.neumann.local sshd[6087]: fatal: mm_request_receive_expect: read: rtype 123 != type 125
Jun 26 01:06:53 hypervisor.neumann.local sshd[6083]: pam_unix(sshd:session): session closed for user monitor
Subject: | Hello all, |
Author: | Joan Arbona |
Posted: | 2015-07-02 03:24 |
Hello all,
We are having the same problem with two hosts also running CentOS 7, monitored by Zenoss 4.2.4. Executing zenmodeler with verbose output does not show any extra information: zenmodeler run -v10 --device=sso4.sbl.uib.es 2> log.log. I've pasted the whole output of ZenModeler in a pastebin: http://pastebin.com/E3MTMyMn.
As with you, Dtneumann, the CentOS 7 host's sshd log shows the following:
root# tail -f /var/log/secure
Jul 2 10:10:18 sso4 sshd[14305]: fatal: mm_request_receive_expect: read: rtype 115 != type 125
Jul 2 10:10:18 sso4 sshd[14305]: fatal: mm_request_receive_expect: read: rtype 125 != type 123
Jul 2 10:10:18 sso4 sshd[14304]: fatal: mm_request_receive_expect: read: rtype 123 != type 115
Jul 2 10:10:18 sso4 sshd[14298]: pam_unix(sshd:session): session closed for user zencli
Jul 2 10:14:30 sso4 sshd[14496]: fatal: mm_request_receive_expect: read: rtype 115 != type 125
Jul 2 10:14:30 sso4 sshd[14496]: fatal: mm_request_receive_expect: read: rtype 125 != type 123
Jul 2 10:14:30 sso4 sshd[13851]: fatal: mm_request_receive_expect: read: rtype 123 != type 115
Jul 2 10:14:30 sso4 sshd[13845]: pam_unix(sshd:session): session closed for user zencli
Jul 2 10:14:46 sso4 sshd[14509]: Accepted password for zencli from 192.168.41.40 port 54030 ssh2
Jul 2 10:14:46 sso4 sshd[14509]: pam_unix(sshd:session): session opened for user zencli by (uid=0)
Also, both hosts render their graphs in broken fragments (they look like a barcode) and continuously open and close an event saying the following:
"Datasource: FileSystem/idisk - Code: None - Msg: Unknown error code: None"
We have two theories so far:
1- Some kind of misconfiguration on the CentOS 7 host, presumably related somehow to SELinux. We do, however, have other CentOS 7 hosts running SELinux that Zenoss monitors without problems.
2- A firewall blocking and cutting the SSH connection between Zenoss and the host. That would make sense, because the connections to the working CentOS 7 hosts do not go through the firewall. However, I don't know how to square this theory with the fact that I can ssh manually from Zenoss to the host without problems... (one test we may try is sketched below).
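To get closer to what Zenoss actually does than a single manual ssh, one rough test is to run several commands against the host at once (host and user names are from our setup; assumes key-based auth so the sessions don't fight over the password prompt):

for i in 1 2 3 4; do ssh zencli@sso4.sbl.uib.es uname -a & done; wait

Note this still opens separate TCP connections, whereas zenmodeler multiplexes several channels over one connection, so it may not reproduce the failure exactly.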
We would appreciate some help.
Thanks,
Joan
Subject: | Solved |
Author: | Pavel Mlčoch |
Posted: | 2015-08-13 01:58 |
Subject: | Root Cause |
Author: | [Not Specified] |
Posted: | 2015-11-05 15:50 |
This behavior is potentially caused by the following setting, which was introduced in RHEL/CentOS 7's default sshd_config file:
UsePrivilegeSeparation sandbox
We have noticed that it doesn't matter what value you set MaxSessions to. To resolve the issue, comment out the privilege separation option or, as Pavkamic suggested, set zSshConcurrentSessions to 1, and it will work just fine.
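For example, on the monitored CentOS 7 host, a minimal sketch assuming the stock config path:

# In /etc/ssh/sshd_config, comment out the sandbox line:
#UsePrivilegeSeparation sandbox
# Then restart sshd to apply the change:
systemctl restart sshd

Alternatively, in the Zenoss UI, set the zSshConcurrentSessions zProperty to 1 on the affected device (or its device class) so the modeler opens only one channel per connection at a time.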