
Hitachi NAS SNMP

Pack assets​

Templates​

The Monitoring Connector Hitachi NAS brings a host template:

  • HW-Storage-Hitachi-Hnas-SNMP-custom

The connector brings the following service templates (sorted by the host template they are attached to):

| Service Alias | Service Template | Service Description |
|---------------|------------------|----------------------|
| Hardware-Global | HW-Storage-Hitachi-Hnas-Hardware-Global-SNMP | Check all hardware |
| Volume-Usage-Global | HW-Storage-Hitachi-Hnas-Volume-Usage-Global-SNMP | Check volume usages |

The services listed above are created automatically when the HW-Storage-Hitachi-Hnas-SNMP host template is used.

Discovery rules​

Service discovery​

| Rule name | Description |
|-----------|-------------|
| HW-Storage-Hitachi-Hnas-SNMP-Interface-Name | Discover the disk partitions and monitor space occupation |

More information about discovering services automatically is available on the dedicated page and in the following chapter.
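
As an illustration, this discovery rule relies on the plugin's list-interfaces mode. A quick way to preview from the CLI what the rule would discover could look like the command below (a sketch with sample connection values; --disco-show only applies if the mode supports it):

/usr/lib/centreon/plugins//centreon_hitachi_hnas_snmp.pl \
--plugin=storage::hitachi::hnas::snmp::plugin \
--mode=list-interfaces \
--hostname=10.0.0.1 \
--snmp-version='2c' \
--snmp-community='my-snmp-community' \
--disco-show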

Collected metrics & status​

Here is the list of services for this connector, detailing all metrics linked to each service.

| Metric name | Unit |
|-------------|------|
| node#status | N/A |

To obtain this new metric format, include --use-new-perfdata in the EXTRAOPTIONS service macro.
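
For example, the EXTRAOPTIONS macro of a service could be set to the following value (an illustrative value; --verbose is already the template default):

--verbose --use-new-perfdata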

Prerequisites​

SNMP Configuration​

To use this pack, the SNMP service must be properly configured on your resource. Please refer to the official documentation from Hitachi.

Network flow​

The target server must be reachable from the Centreon poller on the UDP/161 SNMP port.
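
A quick way to confirm that the flow is open is to query the device from the poller with the standard Net-SNMP tools (assuming snmpget is installed; values are samples, and the OID below is the standard sysDescr):

snmpget -v 2c -c 'my-snmp-community' 10.0.0.1 .1.3.6.1.2.1.1.1.0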

Installing the monitoring connector​

Pack​

  1. If the platform uses an online license, you can skip the package installation instruction below as it is not required to have the connector displayed within the Configuration > Monitoring Connector Manager menu. If the platform uses an offline license, install the package on the central server with the command corresponding to the operating system's package manager:
dnf install centreon-pack-hardware-storage-hitachi-hnas-snmp
  2. Whatever the license type (online or offline), install the Hitachi NAS connector through the Configuration > Monitoring Connector Manager menu.

Plugin​

Since Centreon 22.04, you can benefit from the 'Automatic plugin installation' feature. When this feature is enabled, you can skip the installation part below.

You still have to manually install the plugin on the poller(s) when:

  • Automatic plugin installation is turned off
  • You want to run a discovery job from a poller that doesn't monitor any resource of this kind yet

More information in the Installing the plugin section.

Use the commands below according to your operating system's package manager:

dnf install centreon-plugin-Hardware-Storage-Hitachi-Hnas-Snmp

Using the monitoring connector​

Using a host template provided by the connector​

  1. Log into Centreon and add a new host through Configuration > Hosts.
  2. Fill the Name, Alias & IP Address/DNS fields according to your resource settings.
  3. Apply the HW-Storage-Hitachi-Hnas-SNMP-custom template to the host.

When using SNMP v3, use the SNMPEXTRAOPTIONS macro to add specific authentication parameters. More information in the Troubleshooting SNMP section.

| Macro | Description | Default value | Mandatory |
|-------|-------------|---------------|-----------|
| SNMPEXTRAOPTIONS | Any extra option you may want to add to every command (e.g. a --verbose flag). All options are listed here. | | |
  4. Deploy the configuration. The host appears in the list of hosts, and on the Resources Status page. The command sent by the connector is displayed in the host's details panel: it shows the values of the macros.
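
If the device uses SNMP v3, the SNMPEXTRAOPTIONS macro mentioned above typically carries the v3 parameters listed in the Available options section. A hedged example with placeholder credentials:

--snmp-version=3 --snmp-username='monitoring' --authprotocol=SHA --authpassphrase='auth-passphrase' --privprotocol=AES --privpassphrase='priv-passphrase'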

Using a service template provided by the connector​

  1. If you have used a host template and checked Create Services linked to the Template too, the services linked to the template have been created automatically, using the corresponding service templates. Otherwise, manually create the services you want and apply a service template to them.
  2. Fill in the macros you want (e.g. to change the thresholds for the alerts). Some macros are mandatory (see the table below).
| Macro | Description | Default value | Mandatory |
|-------|-------------|---------------|-----------|
| UNKNOWNSTATUS | Set unknown threshold for status (Default: '%{state} =~ /unknown/'). You can use the following variables: %{state}, %{display} | %{state} =~ /unknown/ | |
| FILTERNAME | Filter node name (can be a regexp). | | |
| CRITICALSTATUS | Set critical threshold for status (Default: '%{state} =~ /offline/i'). You can use the following variables: %{state}, %{display} | %{state} =~ /offline/i | |
| WARNINGSTATUS | Set warning threshold for status (Default: -). You can use the following variables: %{state}, %{display} | | |
| EXTRAOPTIONS | Any extra option you may want to add to the command (e.g. a --verbose flag). All options are listed here. | --verbose | |
  3. Deploy the configuration. The service appears in the list of services, and on the Resources Status page. The command sent by the connector is displayed in the service's details panel: it shows the values of the macros.
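
As an illustration of the status macros, raising a critical alert whenever a node is not online could be done by overriding CRITICALSTATUS with an expression such as the one below (an assumption on the possible states; adjust it to what your device actually reports):

%{state} !~ /online/i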

How to check in the CLI that the configuration is OK and what are the main options for?​

Once the plugin is installed, log into your Centreon poller's CLI using the centreon-engine user account (su - centreon-engine). Test that the connector is able to monitor a server using a command like this one (replace the sample values with yours):

/usr/lib/centreon/plugins//centreon_hitachi_hnas_snmp.pl \
--plugin=storage::hitachi::hnas::snmp::plugin \
--mode=hardware \
--hostname=10.0.0.1 \
--snmp-version='2c' \
--snmp-community='my-snmp-community' \
--verbose

The expected command output is shown below:

OK: All 118 components are ok [2/2 battery, 8/8 fans, 4/4 psus, 82/82 sysdrives, 22/22 temperatures]. | 'hnas-cluster-1.0#hardware.temperature.celsius'=28C;;;; 'hnas-cluster-1.1#hardware.temperature.celsius'=28C;;;; 'hnas-cluster-1.2#hardware.temperature.celsius'=31C;;;; 'hnas-cluster-1.3#hardware.temperature.celsius'=26C;;;; 'hnas-cluster-1.4#hardware.temperature.celsius'=23C;;;; 'hnas-cluster-1.5#hardware.temperature.celsius'=27C;;;; 'hnas-cluster-1.6#hardware.temperature.celsius'=27C;;;; 'hnas-cluster-1.7#hardware.temperature.celsius'=45C;;;; 'hnas-cluster-1.8#hardware.temperature.celsius'=67C;;;; 'hnas-cluster-1.9#hardware.temperature.celsius'=38C;;;; 'hnas-cluster-1.10#hardware.temperature.celsius'=38C;;;; 'hnas-cluster-2.0#hardware.temperature.celsius'=29C;;;; 'hnas-cluster-2.1#hardware.temperature.celsius'=25C;;;; 'hnas-cluster-2.2#hardware.temperature.celsius'=26C;;;; 'hnas-cluster-2.3#hardware.temperature.celsius'=28C;;;; 'hnas-cluster-2.4#hardware.temperature.celsius'=21C;;;; 'hnas-cluster-2.5#hardware.temperature.celsius'=25C;;;; 'hnas-cluster-2.6#hardware.temperature.celsius'=25C;;;; 'hnas-cluster-2.7#hardware.temperature.celsius'=43C;;;; 'hnas-cluster-2.8#hardware.temperature.celsius'=65C;;;; 'hnas-cluster-2.9#hardware.temperature.celsius'=37C;;;; 'hnas-cluster-2.10#hardware.temperature.celsius'=35C;;;; 'hardware.battery.count'=2;;;; 'hardware.fan.count'=8;;;; 'hardware.psu.count'=4;;;; 'hardware.sysdrive.count'=82;;;; 'hardware.temperature.count'=22;;;;
Checking temperatures
temperature 'hnas-cluster-1.0' status is 'ok' [instance: 1.0] [value: 28]
temperature 'hnas-cluster-1.1' status is 'ok' [instance: 1.1] [value: 28]
temperature 'hnas-cluster-1.2' status is 'ok' [instance: 1.2] [value: 31]
temperature 'hnas-cluster-1.3' status is 'ok' [instance: 1.3] [value: 26]
temperature 'hnas-cluster-1.4' status is 'ok' [instance: 1.4] [value: 23]
temperature 'hnas-cluster-1.5' status is 'ok' [instance: 1.5] [value: 27]
temperature 'hnas-cluster-1.6' status is 'ok' [instance: 1.6] [value: 27]
temperature 'hnas-cluster-1.7' status is 'ok' [instance: 1.7] [value: 45]
temperature 'hnas-cluster-1.8' status is 'ok' [instance: 1.8] [value: 67]
temperature 'hnas-cluster-1.9' status is 'ok' [instance: 1.9] [value: 38]
temperature 'hnas-cluster-1.10' status is 'ok' [instance: 1.10] [value: 38]
temperature 'hnas-cluster-2.0' status is 'ok' [instance: 2.0] [value: 29]
temperature 'hnas-cluster-2.1' status is 'ok' [instance: 2.1] [value: 25]
temperature 'hnas-cluster-2.2' status is 'ok' [instance: 2.2] [value: 26]
temperature 'hnas-cluster-2.3' status is 'ok' [instance: 2.3] [value: 28]
temperature 'hnas-cluster-2.4' status is 'ok' [instance: 2.4] [value: 21]
temperature 'hnas-cluster-2.5' status is 'ok' [instance: 2.5] [value: 25]
temperature 'hnas-cluster-2.6' status is 'ok' [instance: 2.6] [value: 25]
temperature 'hnas-cluster-2.7' status is 'ok' [instance: 2.7] [value: 43]
temperature 'hnas-cluster-2.8' status is 'ok' [instance: 2.8] [value: 65]
temperature 'hnas-cluster-2.9' status is 'ok' [instance: 2.9] [value: 37]
temperature 'hnas-cluster-2.10' status is 'ok' [instance: 2.10] [value: 35]
Checking fans
fan 'hnas-cluster-1.0' status is 'ok' [instance: 1.0] [value: ok]
fan 'hnas-cluster-1.1' status is 'ok' [instance: 1.1] [value: ok]
fan 'hnas-cluster-1.2' status is 'ok' [instance: 1.2] [value: ok]
fan 'hnas-cluster-1.3' status is 'ok' [instance: 1.3] [value: ok]
fan 'hnas-cluster-2.0' status is 'ok' [instance: 2.0] [value: ok]
fan 'hnas-cluster-2.1' status is 'ok' [instance: 2.1] [value: ok]
fan 'hnas-cluster-2.2' status is 'ok' [instance: 2.2] [value: ok]
fan 'hnas-cluster-2.3' status is 'ok' [instance: 2.3] [value: ok]
Checking power supplies
power supply 'hnas-cluster-1.0' status is 'ok' [instance: 1.0].
power supply 'hnas-cluster-1.1' status is 'ok' [instance: 1.1].
power supply 'hnas-cluster-2.0' status is 'ok' [instance: 2.0].
power supply 'hnas-cluster-2.1' status is 'ok' [instance: 2.1].
Checking system drives
system drive '8000000000000000' status is 'online' [instance: 8000000000000000].
system drive '8000000000000001' status is 'online' [instance: 8000000000000001].
system drive '8000000000000002' status is 'online' [instance: 8000000000000002].
system drive '8000000000000003' status is 'online' [instance: 8000000000000003].
system drive '8000000000000004' status is 'online' [instance: 8000000000000004].
system drive '8000000000000005' status is 'online' [instance: 8000000000000005].
system drive '8000000000000006' status is 'online' [instance: 8000000000000006].
system drive '8000000000000007' status is 'online' [instance: 8000000000000007].
system drive '8000000000000008' status is 'online' [instance: 8000000000000008].
system drive '8000000000000009' status is 'online' [instance: 8000000000000009].
system drive '800000000000000A' status is 'online' [instance: 800000000000000A].
system drive '800000000000000B' status is 'online' [instance: 800000000000000B].
system drive '800000000000000C' status is 'online' [instance: 800000000000000C].
system drive '800000000000000D' status is 'online' [instance: 800000000000000D].
system drive '800000000000000E' status is 'online' [instance: 800000000000000E].
system drive '800000000000000F' status is 'online' [instance: 800000000000000F].
system drive '8000000000000010' status is 'online' [instance: 8000000000000010].
system drive '8000000000000011' status is 'online' [instance: 8000000000000011].
system drive '8000000000000012' status is 'online' [instance: 8000000000000012].
system drive '8000000000000013' status is 'online' [instance: 8000000000000013].
system drive '8000000000000014' status is 'online' [instance: 8000000000000014].
system drive '8000000000000015' status is 'online' [instance: 8000000000000015].
system drive '8000000000000016' status is 'online' [instance: 8000000000000016].
system drive '8000000000000017' status is 'online' [instance: 8000000000000017].
system drive '8000000000000018' status is 'online' [instance: 8000000000000018].
system drive '8000000000000019' status is 'online' [instance: 8000000000000019].
system drive '800000000000001A' status is 'online' [instance: 800000000000001A].
system drive '800000000000001B' status is 'online' [instance: 800000000000001B].
system drive '800000000000001C' status is 'online' [instance: 800000000000001C].
system drive '800000000000001D' status is 'online' [instance: 800000000000001D].
system drive '800000000000001E' status is 'online' [instance: 800000000000001E].
system drive '800000000000001F' status is 'online' [instance: 800000000000001F].
system drive '8000000000000020' status is 'online' [instance: 8000000000000020].
system drive '8000000000000021' status is 'online' [instance: 8000000000000021].
system drive '8000000000000022' status is 'online' [instance: 8000000000000022].
system drive '8000000000000023' status is 'online' [instance: 8000000000000023].
system drive '8000000000000024' status is 'online' [instance: 8000000000000024].
system drive '8000000000000025' status is 'online' [instance: 8000000000000025].
system drive '8000000000000026' status is 'online' [instance: 8000000000000026].
system drive '8000000000000027' status is 'online' [instance: 8000000000000027].
system drive '8000000000000028' status is 'online' [instance: 8000000000000028].
system drive '8000000000000029' status is 'online' [instance: 8000000000000029].
system drive '800000000000002A' status is 'online' [instance: 800000000000002A].
system drive '800000000000002B' status is 'online' [instance: 800000000000002B].
system drive '800000000000002C' status is 'online' [instance: 800000000000002C].
system drive '800000000000002D' status is 'online' [instance: 800000000000002D].
system drive '800000000000002E' status is 'online' [instance: 800000000000002E].
system drive '800000000000002F' status is 'online' [instance: 800000000000002F].
system drive '8000000000000030' status is 'online' [instance: 8000000000000030].
system drive '8000000000000031' status is 'online' [instance: 8000000000000031].
system drive '8000000000000032' status is 'online' [instance: 8000000000000032].
system drive '8000000000000033' status is 'online' [instance: 8000000000000033].
system drive '8000000000000034' status is 'online' [instance: 8000000000000034].
system drive '8000000000000035' status is 'online' [instance: 8000000000000035].
system drive '8000000000000036' status is 'online' [instance: 8000000000000036].
system drive '8000000000000037' status is 'online' [instance: 8000000000000037].
system drive '8000000000000038' status is 'online' [instance: 8000000000000038].
system drive '8000000000000039' status is 'online' [instance: 8000000000000039].
system drive '800000000000003A' status is 'online' [instance: 800000000000003A].
system drive '800000000000003B' status is 'online' [instance: 800000000000003B].
system drive '800000000000003C' status is 'online' [instance: 800000000000003C].
system drive '800000000000003D' status is 'online' [instance: 800000000000003D].
system drive '800000000000003E' status is 'online' [instance: 800000000000003E].
system drive '800000000000003F' status is 'online' [instance: 800000000000003F].
system drive '8000000000000040' status is 'online' [instance: 8000000000000040].
system drive '8000000000000041' status is 'online' [instance: 8000000000000041].
system drive '8000000000000042' status is 'online' [instance: 8000000000000042].
system drive '8000000000000043' status is 'online' [instance: 8000000000000043].
system drive '8000000000000044' status is 'online' [instance: 8000000000000044].
system drive '8000000000000045' status is 'online' [instance: 8000000000000045].
system drive '8000000000000046' status is 'online' [instance: 8000000000000046].
system drive '8000000000000047' status is 'online' [instance: 8000000000000047].
system drive '8000000000000048' status is 'online' [instance: 8000000000000048].
system drive '8000000000000049' status is 'online' [instance: 8000000000000049].
system drive '800000000000004A' status is 'online' [instance: 800000000000004A].
system drive '800000000000004B' status is 'online' [instance: 800000000000004B].
system drive '800000000000004C' status is 'online' [instance: 800000000000004C].
system drive '800000000000004D' status is 'online' [instance: 800000000000004D].
system drive '800000000000004E' status is 'online' [instance: 800000000000004E].
system drive '800000000000004F' status is 'online' [instance: 800000000000004F].
system drive '8000000000000050' status is 'online' [instance: 8000000000000050].
system drive '8000000000000051' status is 'online' [instance: 8000000000000051].
Checking batteries
battery 'hnas-cluster-1.0' status is 'ok' [instance: 1.0].
battery 'hnas-cluster-2.0' status is 'ok' [instance: 2.0].

Troubleshooting​

Please refer to the troubleshooting documentation for typical issues with Centreon Plugins.

Available modes​

All available modes can be displayed by adding the --list-mode parameter to the command:

/usr/lib/centreon/plugins//centreon_hitachi_hnas_snmp.pl \
--plugin=storage::hitachi::hnas::snmp::plugin \
--list-mode

The plugin brings the following modes:

| Mode | Linked service template |
|------|-------------------------|
| cluster-status | HW-Storage-Hitachi-Hnas-Cluster-Status-SNMP |
| hardware | HW-Storage-Hitachi-Hnas-Hardware-Global-SNMP |
| interfaces | HW-Storage-Hitachi-Hnas-Interfaces-SNMP |
| list-interfaces | Used for service discovery |
| list-volumes | Not used in this Monitoring Connector |
| virtual-volumes-quotas | HW-Storage-Hitachi-Hnas-Virtual-Volumes-Quotas-SNMP |
| volume-usage | HW-Storage-Hitachi-Hnas-Volume-Usage-Global-SNMP |
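
The other modes are run the same way as the hardware example shown earlier. For instance, a volume-usage check might look like the sketch below (sample connection values; add warning/critical thresholds as needed for your environment):

/usr/lib/centreon/plugins//centreon_hitachi_hnas_snmp.pl \
--plugin=storage::hitachi::hnas::snmp::plugin \
--mode=volume-usage \
--hostname=10.0.0.1 \
--snmp-version='2c' \
--snmp-community='my-snmp-community' \
--verbose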

Available options​

Generic options​

All generic options are listed here:

| Option | Description | Type |
|--------|-------------|------|
| --mode | Define the mode in which you want the plugin to be executed (see --list-mode). | Global |
| --dyn-mode | Specify a mode with the module's path (advanced). | Global |
| --list-mode | List all available modes. | Global |
| --mode-version | Check minimal version of mode. If not, unknown error. | Global |
| --version | Display the plugin's version. | Global |
| --pass-manager | Define the password manager you want to use. Supported managers are: environment, file, keepass, hashicorpvault and teampass. | Global |
| --verbose | Display extended status information (long output). | Output |
| --debug | Display debug messages. | Output |
| --filter-perfdata | Filter perfdata that match the regexp. Eg: adding --filter-perfdata='avg' will remove all metrics that do not contain 'avg' from performance data. | Output |
| --filter-perfdata-adv | Filter perfdata based on a "if" condition using the following variables: label, value, unit, warning, critical, min, max. Variables must be written either %{variable} or %(variable). Eg: adding --filter-perfdata-adv='not (%(value) == 0 and %(max) eq "")' will remove all metrics whose value equals 0 and that don't have a maximum value. | Output |
| --explode-perfdata-max | Create a new metric for each metric that comes with a maximum limit. The new metric will be named identically with a '_max' suffix. Eg: it will split 'used_prct'=26.93%;0:80;0:90;0;100 into 'used_prct'=26.93%;0:80;0:90;0;100 'used_prct_max'=100%;;;; | Output |
| --change-perfdata --extend-perfdata | Change or extend perfdata. Syntax: --extend-perfdata=searchlabel,newlabel,target[,[newuom],[min],[max]]. Common examples: change storage free perfdata to used: --change-perfdata=free,used,invert(); change storage used perfdata to free: --change-perfdata=used,free,invert(); scale traffic values automatically: --change-perfdata=traffic,,scale(auto); scale traffic values in Mbps: --change-perfdata=traffic_in,,scale(Mbps),mbps; change traffic values to percent: --change-perfdata=traffic_in,,percent(). | Output |
| --extend-perfdata-group | Extend perfdata from multiple perfdatas (methods in target are: min, max, average, sum). Syntax: --extend-perfdata-group=searchlabel,newlabel,target[,[newuom],[min],[max]]. Common examples: sum wrong packets from all interfaces (for interfaces, --units-errors=absolute is needed): --extend-perfdata-group=',packets_wrong,sum(packets_(discard\|error)_(in\|out))'; sum traffic by interface: --extend-perfdata-group='traffic_in_(.*),traffic_$1,sum(traffic_(in\|out)_$1)'. | Output |
| --change-short-output --change-long-output | Modify the short/long output that is returned by the plugin. Syntax: --change-short-output=pattern~replacement~modifier. Most commonly used modifiers are i (case insensitive) and g (replace all occurrences). Eg: adding --change-short-output='OK~Up~gi' will replace all occurrences of 'OK', 'ok', 'Ok' or 'oK' with 'Up'. | Output |
| --change-exit | Replace an exit code with one of your choice. Eg: adding --change-exit=unknown=critical will result in a CRITICAL state instead of an UNKNOWN state. | Output |
| --range-perfdata | Change the display of perfdata range thresholds: 1 = a start value equal to '0' is removed, 2 = the threshold range is not displayed. | Output |
| --filter-uom | Mask the units when they don't match the given regular expression. | Output |
| --opt-exit | Replace the exit code in case of an execution error (i.e. wrong option provided, SSH connection refused, timeout, etc.). Default: unknown. | Output |
| --output-ignore-perfdata | Remove all the metrics from the service. The service will still have a status and an output. | Output |
| --output-ignore-label | Remove the status label from the beginning of the output. Eg: 'OK: Ram Total:...' will become 'Ram Total:...'. | Output |
| --output-xml | Display output in XML format. | Output |
| --output-json | Display output in JSON format. | Output |
| --output-openmetrics | Display metrics in OpenMetrics format. | Output |
| --output-file | Write output to a file (can be used with the json and xml options). | Output |
| --disco-format | Display discovery arguments (if the mode manages it). | Output |
| --disco-show | Display discovery values (if the mode manages it). | Output |
| --float-precision | Set the float precision for thresholds (default: 8). | Output |
| --source-encoding | Set the encoding of the monitoring sources (default: 'UTF-8'). | Output |
| --hostname | Hostname to query (required). | SNMP |
| --snmp-community | Read community (defaults to public). | SNMP |
| --snmp-version | Version: 1 for SNMP v1 (default), 2 for SNMP v2c, 3 for SNMP v3. | SNMP |
| --snmp-port | Port (default: 161). | SNMP |
| --snmp-timeout | Timeout in seconds (default: 1) before retries. | SNMP |
| --snmp-retries | Set the number of retries (default: 5) before failure. | SNMP |
| --maxrepetitions | Max repetitions value (default: 50) (only for SNMP v2 and v3). | SNMP |
| --subsetleef | Number of OID values per SNMP request (default: 50) (for the get_leef method; be cautious when you change it and prefer to keep the default value). | SNMP |
| --snmp-autoreduce | Automatically reduce the SNMP request size in case of SNMP errors (by default, the divisor is 2). | SNMP |
| --snmp-force-getnext | Use the SNMP getnext function (even in SNMP v2c and v3). | SNMP |
| --snmp-cache-file | Use an SNMP cache file. | SNMP |
| --snmp-username | Security name (only for SNMP v3). | SNMP |
| --authpassphrase | Authentication protocol pass phrase. | SNMP |
| --authprotocol | Authentication protocol: MD5 or SHA. Since net-snmp 5.9.1: SHA224, SHA256, SHA384, SHA512. | SNMP |
| --privpassphrase | Privacy protocol pass phrase. | SNMP |
| --privprotocol | Privacy protocol: DES or AES. Since net-snmp 5.9.1: AES192, AES192C, AES256, AES256C. | SNMP |
| --contextname | Context name. | SNMP |
| --contextengineid | Context engine ID. | SNMP |
| --securityengineid | Security engine ID. | SNMP |
| --snmp-errors-exit | Exit code for SNMP errors (default: unknown). | SNMP |
| --snmp-tls-transport | TLS transport used for communication (can be: 'dtlsudp', 'tlstcp'). | SNMP |
| --snmp-tls-our-identity | Our X.509 identity to use, which should either be a fingerprint or the filename that holds the certificate. | SNMP |
| --snmp-tls-their-identity | The remote server's identity to connect to, specified as either a fingerprint or a file name. Either this must be specified, or the hostname below along with a trust anchor. | SNMP |
| --snmp-tls-their-hostname | The remote server's hostname that is expected. If their certificate was signed by a CA, then the hostname presented in the certificate must match this value or the connection fails to be established (to avoid man-in-the-middle attacks). | SNMP |
| --snmp-tls-trust-cert | A trusted certificate to use as trust anchor (like a CA certificate) for verifying a remote server's certificate. If a CA certificate is used to validate a certificate, then the TheirHostname parameter must also be specified to ensure the hostname presented in the certificate matches. | SNMP |
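
As an example of combining these options, keeping only the temperature metrics in the perfdata of the hardware mode could be sketched as follows (the regexp is illustrative and matches the metric names shown in the sample output above):

/usr/lib/centreon/plugins//centreon_hitachi_hnas_snmp.pl \
--plugin=storage::hitachi::hnas::snmp::plugin \
--mode=hardware \
--hostname=10.0.0.1 \
--snmp-version='2c' \
--snmp-community='my-snmp-community' \
--filter-perfdata='temperature'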

Modes options​

All modes specific options are listed here:

| Option | Description | Type |
|--------|-------------|------|
| --filter-name | Filter node name (can be a regexp). | Mode |
| --unknown-status | Set unknown threshold for status (Default: '%{state} =~ /unknown/'). You can use the following variables: %{state}, %{display} | Mode |
| --warning-status | Set warning threshold for status (Default: -). You can use the following variables: %{state}, %{display} | Mode |
| --critical-status | Set critical threshold for status (Default: '%{state} =~ /offline/i'). You can use the following variables: %{state}, %{display} | Mode |
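
These options map to the FILTERNAME and *STATUS macros described earlier. Assuming they belong to the cluster-status mode (an assumption based on the node status metric collected by this connector), a filtered run could look like this (node name and expression are placeholders):

/usr/lib/centreon/plugins//centreon_hitachi_hnas_snmp.pl \
--plugin=storage::hitachi::hnas::snmp::plugin \
--mode=cluster-status \
--hostname=10.0.0.1 \
--snmp-version='2c' \
--snmp-community='my-snmp-community' \
--filter-name='hnas-cluster-1' \
--critical-status='%{state} =~ /offline/i'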

All available options for a given mode can be displayed by adding the --help parameter to the command:

/usr/lib/centreon/plugins//centreon_hitachi_hnas_snmp.pl \
--plugin=storage::hitachi::hnas::snmp::plugin \
--mode=interfaces \
--help