TECHZEN Zenoss User Community ARCHIVE  


Subject: Unable to predict storage availability" error="Post http://127.0.0.1:8888/api/performance/query
Author: vinay p
Posted: 2021-05-27 10:17

Hello,
I am facing an issue with the serviced service that is affecting all the Zenoss services: their data is not available in the Control Center UI. Also, serviced service status takes forever to return a response.

If I restart serviced, it works momentarily before going back to the same state.

Here is the warning log from serviced:

May 27 07:09:27 stginfrmon101v serviced[26202]: time="2021-05-27T14:09:27Z" level=warning msg="Unable to predict storage availability" error="Post http://127.0.0.1:8888/api/performance/query: net/http: request canceled (Client.Timeout exceeded while awaiting headers)" location="daemon.go:1280" logger=cli.api lookahead=6m0s
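
To confirm the timeout from the command line, you can POST to the same endpoint and time the response (a minimal sketch, assuming curl is available on the Control Center master; the 10-second cap is arbitrary, and an empty POST will most likely return an HTTP error code, but a hang or timeout here confirms the symptom):

curl -m 10 -s -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' \
    -X POST http://127.0.0.1:8888/api/performance/query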

When I look at this specific port in netstat, I see that many connections are either in TIME_WAIT or FIN_WAIT2 status. I don't see any issue with iptables either.

[root@stginfrmon101v ~]# netstat -antpl | grep 8888
tcp 0 0 127.0.0.1:58888 0.0.0.0:* LISTEN 26758/docker-proxy
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 26827/docker-proxy
tcp 0 0 127.0.0.1:54026 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 172.17.0.1:34892 172.17.0.5:8888 TIME_WAIT -
tcp 0 0 172.17.0.1:34794 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 127.0.0.1:54242 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 127.0.0.1:8888 127.0.0.1:53930 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 172.17.0.1:35486 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:34780 172.17.0.5:8888 TIME_WAIT -
tcp 0 0 172.17.0.1:34950 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54600 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:35396 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54026 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:54068 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 172.17.0.1:34886 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:54094 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 127.0.0.1:8888 127.0.0.1:54072 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:34790 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 127.0.0.1:54600 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 172.17.0.1:34894 172.17.0.5:8888 TIME_WAIT -
tcp 0 0 127.0.0.1:54626 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 172.17.0.1:34954 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54038 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 127.0.0.1:54062 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 172.17.0.1:35102 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54242 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54062 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:34902 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 127.0.0.1:54054 127.0.0.1:8888 TIME_WAIT -
tcp 0 0 127.0.0.1:8888 127.0.0.1:54076 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 127.0.0.1:54038 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 172.17.0.1:34922 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:54066 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 127.0.0.1:8888 127.0.0.1:53934 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 172.17.0.1:34946 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:54072 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 127.0.0.1:8888 127.0.0.1:54068 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:54494 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 127.0.0.1:54076 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 127.0.0.1:54050 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 127.0.0.1:8888 127.0.0.1:54494 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 172.17.0.1:35354 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 172.17.0.1:34906 172.17.0.5:8888 TIME_WAIT -
tcp 0 0 127.0.0.1:8888 127.0.0.1:54066 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54040 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:34900 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:34608 172.17.0.5:8888 TIME_WAIT -
tcp 0 0 127.0.0.1:53934 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 127.0.0.1:54536 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 127.0.0.1:53930 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 127.0.0.1:53920 127.0.0.1:8888 TIME_WAIT -
tcp 0 0 172.17.0.1:34754 172.17.0.5:8888 TIME_WAIT -
tcp 0 0 127.0.0.1:8888 127.0.0.1:54572 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54626 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:54572 127.0.0.1:8888 FIN_WAIT2 -
tcp 0 0 127.0.0.1:54030 127.0.0.1:8888 TIME_WAIT -
tcp 0 0 172.17.0.1:34910 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:34948 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:35432 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 172.17.0.1:34914 172.17.0.5:8888 TIME_WAIT -
tcp 0 0 127.0.0.1:54040 127.0.0.1:8888 ESTABLISHED 26202/serviced
tcp 0 0 172.17.0.1:35462 172.17.0.5:8888 ESTABLISHED 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54094 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 172.17.0.1:34944 172.17.0.5:8888 FIN_WAIT2 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54536 CLOSE_WAIT 26827/docker-proxy
tcp 0 0 127.0.0.1:8888 127.0.0.1:54050 ESTABLISHED 26827/docker-proxy
tcp 0 0 172.17.0.1:34918 172.17.0.5:8888 TIME_WAIT -
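
A quick way to summarize these connection states is to count column 6 of the netstat output (a standard shell one-liner using the same tools as above):

netstat -antpl | grep 8888 | awk '{print $6}' | sort | uniq -c | sort -rn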

------------------------------
vinay p
------------------------------


Subject: RE: Unable to predict storage availability" error="Post http://127.0.0.1:8888/api/performance/query
Author: vinay p
Posted: 2021-06-11 05:28

Is anyone else facing a similar issue on version 6.3.2?

------------------------------
vinay p
------------------------------


Subject: RE: Unable to predict storage availability" error="Post http://127.0.0.1:8888/api/performance/query
Author: Michael Rogers
Posted: 2021-06-11 14:50

Vinay,

This is typically caused by corruption in the Control Center OpenTSDB HBase.  The fastest way to correct this is to stop HBase, delete the files from disk, and restart HBase.  The procedure looks like this:

# Stop all OpenTSDB/HBase processes inside the isvcs container
docker exec -it serviced-isvcs_opentsdb supervisorctl -c /opt/zenoss/etc/supervisor.conf stop all
# Remove the HBase data files (the .* glob skips "." and ".." with a warning from rm, which is safe to ignore)
rm -rf /opt/serviced/var/isvcs/opentsdb/hbase/.*
# Restart the processes
docker exec -it serviced-isvcs_opentsdb supervisorctl -c /opt/zenoss/etc/supervisor.conf start all
Note: this will remove graph data from all Control Center pages.  Data for monitored devices is stored in a different HBase and will not be affected by these steps.
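
To confirm everything came back up, you can check the process states inside the container and watch the serviced log until the warning stops recurring (supervisorctl's status subcommand is standard; the journalctl line assumes serviced runs as a systemd unit, as it does on a stock CentOS/RHEL master):

docker exec -it serviced-isvcs_opentsdb supervisorctl -c /opt/zenoss/etc/supervisor.conf status
journalctl -u serviced -f | grep "Unable to predict storage availability"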

------------------------------
Michael J. Rogers
Senior Instructor - Zenoss
Austin TX
------------------------------


Subject: RE: Unable to predict storage availability" error="Post http://127.0.0.1:8888/api/performance/query
Author: vinay p
Posted: 2021-06-14 13:38

Thank you very much, Michael. The steps given have resolved the issue.

------------------------------
vinay p
------------------------------

