(or, approaching this issue from the other side)
Too many disks are causing too many metrics — what now?
If you have a lot of disks per node, you get a lot of disk metrics.
On a newly installed cluster, 50 disks is the default maximum number of disks per node that will have their metrics tracked (for CMDaemon versions earlier than 18795, it is 6 for regular nodes and 100 for the head node).
But you may not want to store all of that data in the CMDaemon database – after all, 1000+ metrics per node get pretty large, slow, and messy, and may be pointless for your needs.
One way to deal with this is to opt not to store any disk metric data at all when the number of disks exceeds the value of MaxAutoDetectDisks.
MaxAutoDetectDisks is thus the parameter that you can set to limit the number of disks per node that will have their metrics tracked. It is an AdvancedConfig setting (see the Admin Manual for more on AdvancedConfig), so you set it in cmd.conf like this:
AdvancedConfig = { "MaxAutoDetectDisks=60" }
You can also set it to 0, which means that no disk metrics are stored at all, regardless of the number of disks per node:
AdvancedConfig = { "MaxAutoDetectDisks=0" }
If MaxAutoDetectDisks is exceeded (or if it is set to 0), then none of the disk metrics that CMDaemon deals with are stored. The Store attribute in the metric configuration of each disk metric will then show that it is not being stored. For example, for the SectorsRead metric on the device dm-0:
[bright61->monitoring->setup[default]->metricconf]% show SectorsRead:dm-0
Parameter Value
------------------ --------------------
Metric SectorsRead
MetricParam dm-0
Store no
...
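If you want a feel for how many block devices a node actually exposes before picking a limit, a quick check (a generic Linux command, not something Bright-specific) is to count the entries under /sys/block on that node:

ls /sys/block | wc -l

Note that this count includes device-mapper devices such as the dm-0 shown above, which also get disk metrics.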