Azure Data Factory
Pack assets​
Templates​
The Monitoring Connector Azure Data Factory brings a host template:
- Cloud-Azure-DataFactory-Factories-custom
The connector brings the following service templates (sorted by the host template they are attached to):
- Cloud-Azure-DataFactory-Factories-custom
Service Alias | Service Template | Service Description |
---|---|---|
Factory-Usage | Cloud-Azure-DataFactory-Factories-Factory-Usage-Api-custom | Check factory size and entities |
Integration-Runtime | Cloud-Azure-DataFactory-Factories-Integration-Runtime-Api-custom | Check integration runtime utilization |
The services listed above are created automatically when the Cloud-Azure-DataFactory-Factories-custom host template is used.
Discovery rules​
Host discovery​
The Centreon Monitoring Connector Azure Data Factory includes a Host Discovery provider to automatically discover the Azure instances of a given subscription and add them to the list of monitored hosts. This provider is named Microsoft Azure Data Factories.
This discovery feature is only compatible with the api custom mode. azcli is not supported.
Go to the corresponding chapter to learn more about discovering hosts automatically.
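If you want to check from the command line what the discovery provider would return, you can run the plugin's discovery mode directly. This is a minimal sketch that reuses the authentication options documented later on this page; replace the sample values with your own:
/usr/lib/centreon/plugins/centreon_azure_datafactory_factories_api.pl \
--plugin=cloud::azure::datafactory::factories::plugin \
--mode=discovery \
--custommode='api' \
--subscription='xxxxxxxxx' \
--tenant='xxxxxxxxx' \
--client-id='xxxxxxxxx' \
--client-secret='xxxxxxxxx'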
Collected metrics & status​
Here is the list of services for this connector, detailing all metrics linked to each service.
- Factory-Usage
- Integration-Runtime
Metric name | Unit |
---|---|
azdatafactory.factoryusage.percentage | % |
azdatafactory.factoryusage.resource.percentage | % |
azdatafactory.factoryusage.size.bytes | B |
azdatafactory.factoryusage.resource.count | count |
Metric name | Unit |
---|---|
azdatafactory.integrationruntime.available.memory.bytes | B |
azdatafactory.integrationruntime.available.node.number.count | count |
azdatafactory.integrationruntime.average.pickup.delay.seconds | s |
azdatafactory.integrationruntime.cpu.percentage.percent | % |
azdatafactory.integrationruntime.queue.length.count | count |
Prerequisites​
Please find all the prerequisites needed for Centreon to get information from Azure on the dedicated page.
Installing the monitoring connector​
Pack​
- If the platform uses an online license, you can skip the package installation step below: it is not required for the connector to appear in the Configuration > Monitoring Connectors Manager menu.
- If the platform uses an offline license, install the package on the central server with the command corresponding to the operating system's package manager:
- Alma / RHEL / Oracle Linux 8: dnf install centreon-pack-cloud-azure-datafactory-factories
- Alma / RHEL / Oracle Linux 9: dnf install centreon-pack-cloud-azure-datafactory-factories
- Debian 11 & 12: apt install centreon-pack-cloud-azure-datafactory-factories
- CentOS 7: yum install centreon-pack-cloud-azure-datafactory-factories
- Whatever the license type (online or offline), install the Azure Data Factory connector through the Configuration > Monitoring Connectors Manager menu.
Plugin​
Since Centreon 22.04, you can benefit from the 'Automatic plugin installation' feature. When this feature is enabled, you can skip the installation part below.
You still have to manually install the plugin on the poller(s) when:
- Automatic plugin installation is turned off
- You want to run a discovery job from a poller that doesn't monitor any resource of this kind yet
More information in the Installing the plugin section.
Use the commands below according to your operating system's package manager:
- Alma / RHEL / Oracle Linux 8: dnf install centreon-plugin-Cloud-Azure-DataFactory-Api
- Alma / RHEL / Oracle Linux 9: dnf install centreon-plugin-Cloud-Azure-DataFactory-Api
- Debian 11 & 12: apt install centreon-plugin-cloud-azure-datafactory-api
- CentOS 7: yum install centreon-plugin-Cloud-Azure-DataFactory-Api
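To quickly confirm that the plugin is installed on a poller, you can ask it for its version. This is a minimal check that only relies on the generic --version option described later in this page:
/usr/lib/centreon/plugins/centreon_azure_datafactory_factories_api.pl \
--plugin=cloud::azure::datafactory::factories::plugin \
--version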
Using the monitoring connector​
Using a host template provided by the connector​
- Log into Centreon and add a new host through Configuration > Hosts.
- In the IP Address/DNS field, set the following IP address: 127.0.0.1.
- Apply the Cloud-Azure-DataFactory-Factories-custom template to the host. A list of macros appears. Macros allow you to define how the connector will connect to the resource, and to customize the connector's behavior.
- Fill in the macros you want. Some macros are mandatory. For example, for this connector, you must define the AZURECUSTOMMODE macro (possible values are api or azcli): two communication modes can be used with this resource, either the azcli command-line tool or direct queries to the API.
Macro | Description | Default value | Mandatory |
---|---|---|---|
AZURECLIENTID | Set Azure client ID | | X |
AZURECLIENTSECRET | Set Azure client secret | | X |
AZURECUSTOMMODE | When a plugin offers several ways (CLI, library, etc.) to get information, the desired one must be defined with this option | api | |
AZURERESOURCE | Set resource name or ID (Required) | | |
AZURERESOURCEGROUP | Set resource group (Required if resource's name is used) | | X |
AZURESUBSCRIPTION | Set Azure subscription ID | | X |
AZURETENANT | Set Azure tenant ID | | X |
PROXYURL | Proxy URL. Eg: http://my.proxy:3128 | | |
EXTRAOPTIONS | Any extra option you may want to add to every command (e.g. a --verbose flag). All options are listed here | | |
Two methods can be used to define the resource:
- Full ID of the resource (/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_id>/providers/XXXXX/XXXXX/<resource_name>) in the AZURERESOURCE macro.
- Resource name in the AZURERESOURCE macro, and resource group name in the AZURERESOURCEGROUP macro.
A worked example of a full resource ID is shown after this procedure.
- Deploy the configuration. The host appears in the list of hosts, and on the Resources Status page. The command that is sent by the connector is displayed in the details panel of the host: it shows the values of the macros.
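As an illustration of the first method, a full resource ID for a data factory usually follows the pattern below. The provider segment Microsoft.DataFactory/factories is the standard one for Azure Data Factory resources, and the factory and resource group names are the sample values used in the CLI example further down; replace them with your own:
/subscriptions/<subscription_id>/resourceGroups/RSG1234/providers/Microsoft.DataFactory/factories/FACTORY001ABCD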
Using a service template provided by the connector​
- If you have used a host template and checked Create Services linked to the Template too, the services linked to the template have been created automatically, using the corresponding service templates. Otherwise, manually create the services you want and apply a service template to them.
- Fill in the macros you want (e.g. to change the thresholds for the alerts). Some macros are mandatory (see the table below).
- Factory-Usage
- Integration-Runtime
Macro | Description | Default value | Mandatory |
---|---|---|---|
TIMEFRAME | Set timeframe in seconds (i.e. 3600 to check last hour). | ||
INTERVAL | Set interval of the metric query (can be PT1M, PT5M, PT15M, PT30M, PT1H, PT6H, PT12H, PT24H). | ||
AGGREGATION | Aggregate monitoring. Can apply to: 'minimum', 'maximum', 'average', 'total' and 'count'. Can be called multiple times. | ||
WARNINGFACTORYPERCENTAGEUSAGE | Thresholds. | ||
CRITICALFACTORYPERCENTAGEUSAGE | Thresholds. | ||
WARNINGFACTORYSIZE | Thresholds. | ||
CRITICALFACTORYSIZE | Thresholds. | ||
WARNINGRESOURCECOUNT | Thresholds. | ||
CRITICALRESOURCECOUNT | Thresholds. | ||
WARNINGRESOURCEPERCENTAGEUSAGE | Thresholds. | ||
CRITICALRESOURCEPERCENTAGEUSAGE | Thresholds. | ||
EXTRAOPTIONS | Any extra option you may want to add to the command (e.g. a --verbose flag). All options are listed here |
Macro | Description | Default value | Mandatory |
---|---|---|---|
FILTERMETRIC | Filter metrics (Can be: 'IntegrationRuntimeAvailableMemory', 'IntegrationRuntimeAvailableNodeNumber', 'IntegrationRuntimeAverageTaskPickupDelay', 'IntegrationRuntimeCpuPercentage', 'IntegrationRuntimeQueueLength') (Can be a regexp) | ||
TIMEFRAME | Set timeframe in seconds (i.e. 3600 to check last hour). | ||
INTERVAL | Set interval of the metric query (can be PT1M, PT5M, PT15M, PT30M, PT1H, PT6H, PT12H, PT24H). | ||
AGGREGATION | Aggregate monitoring. Can apply to: 'minimum', 'maximum', 'average', 'total' and 'count'. Can be called multiple times. | ||
WARNINGAVAILABLEMEMORY | Thresholds. | ||
CRITICALAVAILABLEMEMORY | Thresholds. | ||
WARNINGAVAILABLENODENUMBER | Thresholds. | ||
CRITICALAVAILABLENODENUMBER | Thresholds. | ||
WARNINGAVERAGETASKPICKUPDELAY | Thresholds. | ||
CRITICALAVERAGETASKPICKUPDELAY | Thresholds. | ||
WARNINGCPUPERCENTAGE | Thresholds. | ||
CRITICALCPUPERCENTAGE | Thresholds. | ||
WARNINGQUEUELENGTH | Thresholds. | ||
CRITICALQUEUELENGTH | Thresholds. | ||
EXTRAOPTIONS | Any extra option you may want to add to the command (e.g. a --verbose flag). All options are listed here |
- Deploy the configuration. The service appears in the list of services, and on the Resources Status page. The command that is sent by the connector is displayed in the details panel of the service: it shows the values of the macros.
How to check in the CLI that the configuration is OK and what are the main options for?​
Once the plugin is installed, log into your Centreon poller's CLI using the centreon-engine user account (su - centreon-engine). Test that the connector is able to monitor an Azure instance using a command like this one (replace the sample values with your own):
/usr/lib/centreon/plugins/centreon_azure_datafactory_factories_api.pl \
--plugin=cloud::azure::datafactory::factories::plugin \
--mode=integration-runtime \
--custommode='api' \
--resource='FACTORY001ABCD' \
--resource-group='RSG1234' \
--subscription='xxxxxxxxx' \
--tenant='xxxxxxxxx' \
--client-id='xxxxxxxxx' \
--client-secret='xxxxxxxxx' \
--proxyurl='' \
--filter-metric='' \
--timeframe='' \
--interval='' \
--aggregation='' \
--warning-average-task-pickup-delay='' \
--critical-average-task-pickup-delay='' \
--warning-cpu-percentage='' \
--critical-cpu-percentage='' \
--warning-queue-length='' \
--critical-queue-length='' \
--warning-available-node-number='' \
--critical-available-node-number='' \
--warning-available-memory='' \
--critical-available-memory=''
The expected command output is shown below:
OK: Available memory: 33 B Available node number: 53 Average task pickup delay: 10 s Cpu percentage: 54 % Queue length: 41 | 'azdatafactory.integrationruntime.available.memory.bytes'=33B;;;0; 'azdatafactory.integrationruntime.available.node.number.count'=53;;;0; 'azdatafactory.integrationruntime.average.pickup.delay.seconds'=10s;;;0; 'azdatafactory.integrationruntime.cpu.percentage.percent'=54%;;;0;100 'azdatafactory.integrationruntime.queue.length.count'=41;;;0;
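Thresholds use the standard Centreon plugin threshold syntax. For instance, to raise a WARNING when the queue length exceeds 30 tasks and a CRITICAL when it exceeds 50 (illustrative values), you could append the following options to the command above:
--warning-queue-length='30' \
--critical-queue-length='50'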
Troubleshooting​
Please find the troubleshooting documentation for the API-based plugins in this chapter.
Available modes​
In most cases, a mode corresponds to a service template. The mode appears in the execution command for the connector. In the Centreon interface, you don't need to specify a mode explicitly: its use is implied when you apply a service template. However, you will need to specify the correct mode for the template if you want to test the execution command for the connector in your terminal.
All available modes can be displayed by adding the --list-mode parameter to the command:
/usr/lib/centreon/plugins/centreon_azure_datafactory_factories_api.pl \
--plugin=cloud::azure::datafactory::factories::plugin \
--list-mode
The plugin brings the following modes:
Mode | Linked service template |
---|---|
discovery [code] | Used for host discovery |
factory-usage [code] | Cloud-Azure-DataFactory-Factories-Factory-Usage-Api-custom |
integration-runtime [code] | Cloud-Azure-DataFactory-Factories-Integration-Runtime-Api-custom |
Available options​
Generic options​
All generic options are listed here:
Option | Description |
---|---|
--mode | Define the mode in which you want the plugin to be executed (see --list-mode). |
--dyn-mode | Specify a mode with the module's path (advanced). |
--list-mode | List all available modes. |
--mode-version | Check minimal version of mode. If not, unknown error. |
--version | Return the version of the plugin. |
--custommode | When a plugin offers several ways (CLI, library, etc.) to get information the desired one must be defined with this option. |
--list-custommode | List all available custom modes. |
--multiple | Multiple custom mode objects. This may be required by some specific modes (advanced). |
--pass-manager | Define the password manager you want to use. Supported managers are: environment, file, keepass, hashicorpvault and teampass. |
--verbose | Display extended status information (long output). |
--debug | Display debug messages. |
--filter-perfdata | Filter perfdata that match the regexp. Eg: adding --filter-perfdata='avg' will remove all metrics that do not contain 'avg' from performance data. |
--filter-perfdata-adv | Filter perfdata based on a "if" condition using the following variables: label, value, unit, warning, critical, min, max. Variables must be written either %{variable} or %(variable). Eg: adding --filter-perfdata-adv='not (%(value) == 0 and %(max) eq "")' will remove all metrics whose value equals 0 and that don't have a maximum value. |
--explode-perfdata-max | Create a new metric for each metric that comes with a maximum limit. The new metric will be named identically, with a '_max' suffix. Eg: it will split 'used_prct'=26.93%;0:80;0:90;0;100 into 'used_prct'=26.93%;0:80;0:90;0;100 'used_prct_max'=100%;;;; |
--change-perfdata --extend-perfdata | Change or extend perfdata. Syntax: --extend-perfdata=searchlabel,newlabel,target[,[newuom],[min],[max]] Common examples: Convert storage free perfdata into used: --change-perfdata=free,used,invert() Convert storage used perfdata into free: --change-perfdata=used,free,invert() Scale traffic values automatically: --change-perfdata=traffic,,scale(auto) Scale traffic values in Mbps: --change-perfdata=traffic_in,,scale(Mbps),mbps Change traffic values in percent: --change-perfdata=traffic_in,,percent() |
--extend-perfdata-group | Add new aggregated metrics (min, max, average or sum) for groups of metrics defined by a regex match on the metrics' names. Syntax: --extend-perfdata-group=regex,namesofnewmetrics,calculation[,[newuom],[min],[max]] regex: regular expression namesofnewmetrics: how the new metrics' names are composed (can use $1, $2... for groups defined by () in regex). calculation: how the values of the new metrics should be calculated newuom (optional): unit of measure for the new metrics min (optional): lowest value the metrics can reach max (optional): highest value the metrics can reach Common examples: Sum wrong packets from all interfaces (for the interface mode, --units-errors=absolute is required): --extend-perfdata-group=',packets_wrong,sum(packets_(discard|error)_(in|out))' Sum traffic by interface: --extend-perfdata-group='traffic_in_(.*),traffic_$1,sum(traffic_(in|out)_$1)' |
--change-short-output --change-long-output | Modify the short/long output that is returned by the plugin. Syntax: --change-short-output=pattern~replacement~modifier Most commonly used modifiers are i (case insensitive) and g (replace all occurrences). Eg: adding --change-short-output='OK~Up~gi' will replace all occurrences of 'OK', 'ok', 'Ok' or 'oK' with 'Up' |
--change-exit | Replace an exit code with one of your choice. Eg: adding --change-exit=unknown=critical will result in a CRITICAL state instead of an UNKNOWN state. |
--range-perfdata | Rewrite the ranges displayed in the perfdata. Accepted values: 0: nothing is changed. 1: if the lower value of the range is equal to 0, it is removed. 2: remove the thresholds from the perfdata. |
--filter-uom | Mask the units when they don't match the given regular expression. |
--opt-exit | Replace the exit code in case of an execution error (i.e. wrong option provided, SSH connection refused, timeout, etc). Default: unknown. |
--output-ignore-perfdata | Remove all the metrics from the service. The service will still have a status and an output. |
--output-ignore-label | Remove the status label ("OK:", "WARNING:", "UNKNOWN:", "CRITICAL:") from the beginning of the output. Eg: 'OK: Ram Total:...' will become 'Ram Total:...' |
--output-xml | Return the output in XML format (to send to an XML API). |
--output-json | Return the output in JSON format (to send to a JSON API). |
--output-openmetrics | Return the output in OpenMetrics format (to send to a tool expecting this format). |
--output-file | Write output in file (can be combined with json, xml and openmetrics options). E.g.: --output-file=/tmp/output.txt will write the output in /tmp/output.txt. |
--disco-format | Applies only to modes beginning with 'list-'. Returns the list of available macros to configure a service discovery rule (formatted in XML). |
--disco-show | Applies only to modes beginning with 'list-'. Returns the list of discovered objects (formatted in XML) for service discovery. |
--float-precision | Define the float precision for thresholds (default: 8). |
--source-encoding | Define the character encoding of the response sent by the monitored resource. Default: 'UTF-8'. |
--subscription | Set Azure subscription ID. |
--tenant | Set Azure tenant ID. |
--client-id | Set Azure client ID. |
--client-secret | Set Azure client secret. |
--login-endpoint | Set Azure login endpoint URL (Default: 'https://login.microsoftonline.com') |
--management-endpoint | Set Azure management endpoint URL (Default: 'https://management.azure.com') |
--timeframe | Set timeframe in seconds (i.e. 3600 to check last hour). |
--interval | Set interval of the metric query (Can be : PT1M, PT5M, PT15M, PT30M, PT1H, PT6H, PT12H, PT24H). |
--aggregation | Aggregate monitoring. Can apply to: 'minimum', 'maximum', 'average', 'total' and 'count'. Can be called multiple times. |
--zeroed | Set metrics value to 0 if they are missing. Useful when some metrics are undefined. |
--timeout | Set timeout in seconds (Default: 10). |
--http-peer-addr | Set the address you want to connect to. Useful if hostname is only a vhost, to avoid IP resolution. |
--proxyurl | Proxy URL. Eg: http://my.proxy:3128 |
--proxypac | Proxy pac file (can be a URL or a local file). |
--insecure | Accept insecure SSL connections. |
--http-backend | Perl library to use for HTTP transactions. Possible values are: lwp (default) and curl. |
--ssl-opt | Set SSL Options (--ssl-opt="SSL_version => TLSv1" --ssl-opt="SSL_verify_mode => SSL_VERIFY_NONE"). |
--curl-opt | Set CURL Options (--curl-opt="CURLOPT_SSL_VERIFYPEER => 0" --curl-opt="CURLOPT_SSLVERSION => CURL_SSLVERSION_TLSv1_1" ). |
--memcached | Memcached server to use (only one server). |
--redis-server | Redis server to use (only one server). Syntax: address[:port] |
--redis-attribute | Set Redis Options (--redis-attribute="cnx_timeout=5"). |
--redis-db | Set Redis database index. |
--failback-file | Fall back on a local file if the Redis connection fails. |
--memexpiration | Time to keep data in seconds (Default: 86400). |
--statefile-dir | Define the cache directory (default: '/var/lib/centreon/centplugins'). |
--statefile-suffix | Define a suffix to customize the statefile name (Default: ''). |
--statefile-concat-cwd | If used with the '--statefile-dir' option, the latter's value will be used as a sub-directory of the current working directory. Useful on Windows when the plugin is compiled, as the file system and permissions are different from Linux. |
--statefile-format | Define the format used to store the cache. Available formats: 'dumper', 'storable', 'json' (default). |
--statefile-key | Define the key to encrypt/decrypt the cache. |
--statefile-cipher | Define the cipher algorithm to encrypt the cache (Default: 'AES'). |
--filter-dimension | Specify the metric dimension (required for some specific metrics) Syntax example: --filter-dimension="$metricname eq '$metricvalue'" |
--per-sec | Display the statistics based on a per-second period. |
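These generic options can be appended to any of the commands shown on this page. For example, to keep only the CPU metric in the performance data of the integration-runtime check above, you could add the following illustrative filter (it removes every metric whose name does not contain 'cpu'):
--filter-perfdata='cpu'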
Microsoft Azure Rest API specifics​
- To connect to the Azure Rest API, you must register an application (follow the How-to guide).
- The application needs the 'Monitoring Reader' role (See https://docs.microsoft.com/en-us/azure/azure-monitor/platform/roles-permissions-security#monitoring-reader).
- This custom mode uses the 'OAuth 2.0 Client Credentials Grant Flow'.
- For further information, visit https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-oauth2-client-creds-grant-flow.
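As an illustration, the application registration and the 'Monitoring Reader' role assignment can also be done in one step with the Azure CLI. This is a sketch where the application name and the subscription scope are example values to adapt; the command returns the appId (client ID), password (client secret) and tenant values to fill in the corresponding macros:
az ad sp create-for-rbac \
--name 'centreon-monitoring' \
--role 'Monitoring Reader' \
--scopes '/subscriptions/<subscription_id>'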
Modes options​
All available options for each service template are listed below:
- Factory-Usage
- Integration-Runtime
Option | Description |
---|---|
--resource | Set resource name or ID (Required). |
--resource-group | Set resource group (Required if resource's name is used). |
--warning-$metric$ | Warning threshold ($metric$ can be: 'factory-percentage-usage', 'resource-percentage-usage', 'factory-size', 'resource-count'). |
--critical-$metric$ | Critical threshold ($metric$ can be: 'factory-percentage-usage', 'resource-percentage-usage', 'factory-size', 'resource-count'). |
Option | Description |
---|---|
--resource | Set resource name or ID (Required). |
--resource-group | Set resource group (Required if resource's name is used). |
--filter-metric | Filter metrics (Can be: 'IntegrationRuntimeAvailableMemory', 'IntegrationRuntimeAvailableNodeNumber', 'IntegrationRuntimeAverageTaskPickupDelay', 'IntegrationRuntimeCpuPercentage', 'IntegrationRuntimeQueueLength') (Can be a regexp). |
--warning-$metric$ | Warning threshold ($metric$ can be: 'available-memory', 'available-node-number', 'average-task-pickup-delay', 'cpu-percentage', 'queue-length'). |
--critical-$metric$ | Critical threshold ($metric$ can be: 'available-memory', 'available-node-number', 'average-task-pickup-delay', 'cpu-percentage', 'queue-length'). |
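As with the integration-runtime example above, the factory-usage mode can be tested from the poller's CLI. This is a sketch using the same sample resource and authentication values, with illustrative thresholds on the factory usage percentage:
/usr/lib/centreon/plugins/centreon_azure_datafactory_factories_api.pl \
--plugin=cloud::azure::datafactory::factories::plugin \
--mode=factory-usage \
--custommode='api' \
--resource='FACTORY001ABCD' \
--resource-group='RSG1234' \
--subscription='xxxxxxxxx' \
--tenant='xxxxxxxxx' \
--client-id='xxxxxxxxx' \
--client-secret='xxxxxxxxx' \
--warning-factory-percentage-usage='80' \
--critical-factory-percentage-usage='90'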
All available options for a given mode can be displayed by adding the --help parameter to the command:
/usr/lib/centreon/plugins/centreon_azure_datafactory_factories_api.pl \
--plugin=cloud::azure::datafactory::factories::plugin \
--mode=integration-runtime \
--help