TECHZEN Zenoss User Community ARCHIVE  

How to read the CPU graphs in Zenoss

Subject: How to read the CPU graphs in Zenoss
Author: [Not Specified]
Posted: 2015-04-10 13:45

Hi people,

I am completely new to Zenoss. At my company we adopted Zenoss to monitor about two hundred servers (version 4.2.5, running on VMware). My question is simple: we are customizing the multi-graph reports, and I need to understand how to read the CPU graphs.

Is the graph for one core or aggregated? (We have servers with 4 or more cores, both Linux and Windows.)

If it is not aggregated, how can I make more CPU graphs, or aggregate them all into one graph?

**- We are using only SNMP to monitor the servers -**



Subject: It depends on the template used.
Author: Jan Garaj
Posted: 2015-04-10 14:17

It depends on the template used.

The default Zenoss Linux template uses SNMP, and SNMP reads raw CPU counters, e.g. ssCpuRawSystem (OID .1.3.6.1.4.1.2021.11.52), described in the MIB as:

The number of 'ticks' (typically 1/100s) spent processing system-level code. On a multi-processor system, the 'ssCpuRaw*' counters are cumulative over all CPUs, so their sum will typically be N*100 (for N processors).

CPU time usage is calculated from this value, so it is cumulative over all CPUs.
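
For illustration, here is a minimal Python sketch (not Zenoss code) of that kind of calculation: a 0-100% figure derived from two samples of the cumulative ssCpuRaw* tick counters. The helper name cpu_busy_percent and all the sample numbers are made up.

```python
# A minimal sketch, not Zenoss code: how a 0-100% CPU figure can be
# derived from two samples of the cumulative ssCpuRaw* tick counters
# (user, nice, system, wait, idle), which the agent sums over all CPUs.

def cpu_busy_percent(prev, curr):
    """prev/curr: dicts of raw tick counters taken one polling cycle apart."""
    busy_keys = ('user', 'nice', 'system', 'wait')
    d_busy = sum(curr[k] - prev[k] for k in busy_keys if k in curr)
    d_idle = curr['idle'] - prev['idle']
    d_total = d_busy + d_idle
    # The counters are already summed over all CPUs, so busy/total gives
    # the average utilization across cores on a 0-100 scale.
    return 100.0 * d_busy / d_total if d_total else 0.0

# Made-up example: a 4-core box ticking ~100 times/s per core over a 60 s
# polling interval accumulates roughly 4 * 100 * 60 = 24000 total ticks.
prev = {'user': 500000, 'nice': 0, 'system': 200000, 'wait': 10000, 'idle': 4000000}
curr = {'user': 506000, 'nice': 0, 'system': 202000, 'wait': 10200, 'idle': 4015800}
print(round(cpu_busy_percent(prev, curr), 1))  # ~34.2 -> average over all 4 cores
```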

Probably a per-core CPU SNMP template exists somewhere, but I have never seen one :-) There is a 99.9% probability that your CPU usage values are aggregated over all CPUs.
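
If you do want per-core numbers, one thing worth checking is whether the agent exposes HOST-RESOURCES-MIB hrProcessorLoad (.1.3.6.1.2.1.25.3.3.1.2), which most Linux and Windows SNMP agents return as one row per processor; those values could be graphed individually or summed. The sketch below is only an illustration under assumptions: it requires net-snmp's snmpwalk on the path, and "myserver" and "public" are placeholders for your host and community string.

```python
# A hedged sketch: reading per-core load via HOST-RESOURCES-MIB
# hrProcessorLoad (.1.3.6.1.2.1.25.3.3.1.2), which most Linux and Windows
# SNMP agents expose as one row per processor (0-100 each).
# Assumes net-snmp's snmpwalk is installed; "myserver" and "public" are
# placeholders for your host and community string.
import subprocess

def per_core_load(host, community='public'):
    out = subprocess.run(
        ['snmpwalk', '-v2c', '-c', community, '-On', host,
         '.1.3.6.1.2.1.25.3.3.1.2'],
        capture_output=True, text=True, check=True).stdout
    loads = []
    for line in out.splitlines():
        # Typical line: .1.3.6.1.2.1.25.3.3.1.2.196608 = INTEGER: 12
        if 'INTEGER' in line:
            loads.append(int(line.rsplit(':', 1)[1]))
    return loads  # e.g. [12, 8, 15, 9] -> one value per core

if __name__ == '__main__':
    print(per_core_load('myserver'))
```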

Devops Monitoring Expert advice: Dockerize/automate/monitor all the things.

DevOps stack: Docker / Kubernetes / Mesos / Zabbix / Zenoss / Grafana / Puppet / Ansible / Vagrant / Terraform / Elasticsearch



Subject: Dunno, check description of
Author: Jan Garaj
Posted: 2015-04-10 16:57

Dunno, check the description of the Windows performance counters being used.

Devops Monitoring Expert advice: Dockerize/automate/monitor all the things.

DevOps stack: Docker / Kubernetes / Mesos / Zabbix / Zenoss / Grafana / Puppet / Ansible / Vagrant / Terraform / Elasticsearch


