How to determine the ideal number of active shards assigned by ElasticSearch (ES)
Created On 05/18/23 07:46 AM - Last Modified 10/31/25 07:11 AM
Question
- How can I determine when the number of shards is negatively impacting performance for PAN-OS versions prior to 11.1?
- How can I determine when the maximum shard limit is exceeded (or performance is impacted) for PAN-OS 11.1 and later?
- If my shard count exceeds the ideal maximum, how can I bring it back within limits?
Environment
- Panorama in Panorama-mode
- Panorama in Log Collector mode
- PAN-OS 10.0 and above
Answer
1. To determine the ideal maximum number of shards that should be generated across a cluster running PAN-OS versions prior to 11.1:
- Extract the number of data nodes using the command below:
admin@Panorama> show log-collector-es-cluster health
{
"cluster_name" : "__pan_cluster__",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 6,
"number_of_data_nodes" : 4,
"active_primary_shards" : 17186,
"active_shards" : 17192,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
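The health output is JSON, so the fields used in the calculations that follow (number_of_data_nodes and active_shards) can be extracted programmatically. A minimal sketch, assuming the command output above has been captured as a string (the sample below is abbreviated to the relevant fields):

```python
import json

# Abbreviated sample of `show log-collector-es-cluster health` output
health_json = """
{
  "cluster_name": "__pan_cluster__",
  "status": "green",
  "number_of_data_nodes": 4,
  "active_primary_shards": 17186,
  "active_shards": 17192,
  "unassigned_shards": 0
}
"""

health = json.loads(health_json)
data_nodes = health["number_of_data_nodes"]    # used in the shard formulas below
active_shards = health["active_shards"]        # compared against the ideal maximum
print(data_nodes, active_shards)               # 4 17192
```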
- Extract the heap memory (in GB). To find the heap size, run the command below:
admin@Panorama> debug logdb show-heap-size collector-group test
Response from logger 017607000309: Minimum Heap size: 30g Maximum Heap size: 30g
- Use the formula: number_of_data_nodes * heap_memory_GB * 20 = ideal maximum shards. With the example values above, 4 * 30 * 20 = 2,400 shards.
- The number of active shards should always be less than or equal to the computed ideal maximum. In the example output below, active_shards (17192) far exceeds the ideal maximum, so older log data should be purged.
admin@Panorama> show log-collector-es-cluster health
{
"cluster_name" : "__pan_cluster__",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 6,
"number_of_data_nodes" : 4,
"active_primary_shards" : 17186,
"active_shards" : 17192,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
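The pre-11.1 check above can be sketched as a short calculation, using the sample values from the command outputs (4 data nodes, 30 GB heap, 17192 active shards):

```python
# Ideal maximum shard count prior to PAN-OS 11.1:
#   number_of_data_nodes * heap_memory_GB * 20
data_nodes = 4         # "number_of_data_nodes" from cluster health
heap_gb = 30           # Maximum Heap size from debug logdb show-heap-size (30g)
active_shards = 17192  # "active_shards" from cluster health

ideal_max_shards = data_nodes * heap_gb * 20
print(ideal_max_shards)  # 2400

if active_shards > ideal_max_shards:
    print("active_shards exceeds the ideal maximum; purge older log data")
```

In this example the cluster is well beyond the ideal maximum (17192 vs 2400), which is the condition that calls for the log-purging steps at the end of this article.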
2. To determine the ideal maximum number of shards that should be generated across a cluster running PAN-OS 11.1 and later:
- Extract the number of data nodes using the command below:
admin@Panorama> show log-collector-es-cluster health
{
"cluster_name" : "__pan_cluster__",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 6,
"number_of_data_nodes" : 4,
"active_primary_shards" : 17186,
"active_shards" : 17192,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
- The cluster shard limit defaults to 1,000 shards per data node. Use the formula: number_of_data_nodes * 1000 = ideal maximum shards. With the example values above, 4 * 1000 = 4,000 shards.
- The number of active shards should always be less than or equal to the computed ideal maximum. In the example output below, active_shards (17192) exceeds this limit, so older log data should be purged.
admin@Panorama> show log-collector-es-cluster health
{
"cluster_name" : "__pan_cluster__",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 6,
"number_of_data_nodes" : 4,
"active_primary_shards" : 17186,
"active_shards" : 17192,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
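The PAN-OS 11.1+ check is a simpler calculation, again using the sample values from the command outputs (4 data nodes, 17192 active shards):

```python
# PAN-OS 11.1 and later: cluster shard limit defaults to 1000 per data node
SHARDS_PER_DATA_NODE = 1000
data_nodes = 4         # "number_of_data_nodes" from cluster health
active_shards = 17192  # "active_shards" from cluster health

ideal_max_shards = data_nodes * SHARDS_PER_DATA_NODE
print(ideal_max_shards)                    # 4000
print(active_shards <= ideal_max_shards)   # False -> purge older log data
```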
- To manage and purge older log data on Panorama, which will reduce the number of shards, follow these steps:
- Navigate to the Settings: From the Panorama GUI, go to PANORAMA > Setup > Management > Logging and Reporting Settings.
- Click the gear icon.
- Configure Max Days:
- If unset, enable the setting.
- Set the maximum days for storing log data. Configure a lower value to purge some of the older data.
- Verify the Purge:
- Allow the purging process to run for 2-3 days, then re-run show log-collector-es-cluster health to confirm the new active_shards value has taken effect.
If the active_shards count still exceeds the computed ideal maximum after this time, repeat the process (steps 1-3) and set an even lower value.