Subject: 5x metric updates fail with traceback
Author: [Not Specified]
Posted: 2015-01-08 01:56
Zenoss version - b2090 core-unstable nightly release.
serviced version - jenkins-serviced-build-2102
I am trying to gather metrics for the physical disks (PDs) attached to device 192.168.111.162, as shown below.
The ZenPack works well in Zenoss 4.2.5.
In Zenoss 5, however, the metric updates through zenpython complete successfully, but no graph updates appear.
Looking at the OpenTSDB logs, we see tracebacks like the one below for every metric that is updated.
The metric is represented as '192.168.111.162/pd_counters_ReadIOs', though ideally it is a component-level metric.
The monitored device can have many PDs, and it is a component metric that is being updated.
Does metrics collection work in Zenoss 5?
How can we see the metrics info through OpenTSDB?
Can anyone explain the root cause here?
id: 0x726c7346, /172.17.2.146:60840 => /172.17.2.142:4242] Internal Server Error on /api/query
net.opentsdb.uid.NoSuchUniqueName: No such name for 'metrics': '192.168.111.162/pd_counters_ReadIOs'
    at net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:281) ~[tsdb-2.1.0.jar:]
    at net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:278) ~[tsdb-2.1.0.jar:]
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278) ~[suasync-1.4.0.jar:fe17b98]
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257) ~[suasync-1.4.0.jar:fe17b98]
    at com.stumbleupon.async.Deferred.callback(Deferred.java:1005) ~[suasync-1.4.0.jar:fe17b98]
    at org.hbase.async.HBaseRpc.callback(HBaseRpc.java:506) ~[asynchbase-1.5.0.jar:d543609]
    at org.hbase.async.RegionClient.decode(RegionClient.java:1343) ~[asynchbase-1.5.0.jar:d543609]
    at org.hbase.async.RegionClient.decode(RegionClient.java:89) ~[asynchbase-1.5.0.jar:d543609]
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500) ~[netty-3.9.1.Final.jar:na]
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) ~[netty-3.9.1.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.1.Final.jar:na]
    at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1080) ~[asynchbase-1.5.0.jar:d543609]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.1.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[netty-3.9.1.Final.jar:na]
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:2652) ~[asynchbase-1.5.0.jar:d543609]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty-3.9.1.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty-3.9.1.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty-3.9.1.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [netty-3.9.1.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) [netty-3.9.1.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty-3.9.1.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.9.1.Final.jar:na]
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.9.1.Final.jar:na]
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.9.1.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_55]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_55]
    at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
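For reference, one way to check whether OpenTSDB has ever assigned a UID for this metric name is the /api/suggest endpoint of the OpenTSDB 2.x HTTP API. A minimal sketch in Python 3 (the TSD host and port are taken from the traceback above and may differ in your deployment):

import json
import urllib.parse
import urllib.request

TSD = "http://172.17.2.142:4242"  # TSD address as seen in the traceback
metric = "192.168.111.162/pd_counters_ReadIOs"

# /api/suggest returns metric names matching the given prefix; if our
# name is absent, no UID was ever assigned, which is exactly what the
# NoSuchUniqueName exception means.
url = "%s/api/suggest?type=metrics&max=25&q=%s" % (
    TSD, urllib.parse.quote(metric, safe=""))
with urllib.request.urlopen(url) as resp:
    matches = json.load(resp)

if metric in matches:
    print("metric is registered:", metric)
else:
    print("metric NOT registered (explains NoSuchUniqueName):", metric)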
Subject: net.opentsdb.uid
Author: Jan Garaj
Posted: 2015-01-08 04:02
net.opentsdb.uid.NoSuchUniqueName: No such name for 'metrics': '192.168.111.162/pd_counters_ReadIOs'
=> the metric named '192.168.111.162/pd_counters_ReadIOs' doesn't exist in OpenTSDB
(https://groups.google.com/forum/#!topic/opentsdb/cI_nAl0nOVc)
The next question in the thread is: how do you create a metric in the Zenoss 5 OpenTSDB?
My answer is: dunno - we should wait for someone who has more Zenoss 5 experience ;-)
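For plain OpenTSDB (outside of whatever Zenoss 5 does on top of it), the usual options are to set tsd.core.auto_create_metrics = true in opentsdb.conf, to run "tsdb mkmetric <name>" on the TSD host, or to assign the UID over the HTTP API. A minimal sketch of the HTTP variant, assuming the host/port from the traceback earlier in the thread:

import json
import urllib.request

TSD = "http://172.17.2.142:4242"  # assumed TSD address; adjust as needed
payload = json.dumps({"metric": ["192.168.111.162/pd_counters_ReadIOs"]}).encode()

# POST /api/uid/assign creates UIDs for the listed metric names and
# returns the name -> UID mapping (OpenTSDB 2.x HTTP API). Names that
# already have a UID come back as an HTTP 400 with per-name errors.
req = urllib.request.Request(
    TSD + "/api/uid/assign",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))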
Devops Monitoring Expert advice:
Dockerize/automate/monitor all the things.
DevOps stack:
Docker / Kubernetes / Mesos / Zabbix / Zenoss / Grafana / Puppet / Ansible / Vagrant / Terraform /
Elasticsearch
Subject: In Zenoss 5 - metric collection and graphs seems severely broken
Author: [Not Specified]
Posted: 2015-01-08 23:41
Actually, the issue is much more severe than reported.
In Zenoss 5, try monitoring localhost under the /Server/SSH/Linux device class.
Both the device graphs and the component graphs show no results.
I then manually run zencommand against the device, and everything passes cleanly.
Looking in the OpenTSDB logs, I then see a whole bunch of errors for each of those metrics.
See https://jira.zenoss.com/browse/ZEN-15083
I reported this a long time ago, but it is marked as backlog.
Basic metric collection and plotting seem broken in Zenoss 5.
I wonder how this can be considered backlog.
Any help please...
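To take the Zenoss graphing layer out of the picture, it can help to query OpenTSDB directly and see whether any data points were ever written. A minimal sketch against the OpenTSDB 2.x /api/query endpoint (the host, port, and metric name here are placeholders; substitute one of the metric names from the OpenTSDB errors):

import json
import urllib.error
import urllib.request

TSD = "http://172.17.2.142:4242"  # placeholder TSD address
query = {
    "start": "1h-ago",
    "queries": [{
        "aggregator": "sum",
        "metric": "192.168.111.162/pd_counters_ReadIOs",  # example name
    }],
}

# POST the query; a non-empty result means data points exist for the
# metric even if the Zenoss graphs show nothing.
req = urllib.request.Request(
    TSD + "/api/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req) as resp:
        print(json.dumps(json.load(resp), indent=2))
except urllib.error.HTTPError as err:
    # An unregistered metric name surfaces here as the same
    # NoSuchUniqueName error that appears in the tsdb logs.
    print(err.code, err.read().decode())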
Subject: Monitoring of "localhost" is
Author: Jan Garaj
Posted: 2015-01-09 02:59
Monitoring "localhost" is not a good idea. Zenoss 5 is dockerized, so "localhost" can only be some container.
http://www.zenoss.org/forum/2391