Data for which of the following indexes will count against an ingest-based license?
summary
main
_metrics
_introspection
Splunk Enterprise ingest-based licensing is measured by the amount of data that is ingested and indexed by the Splunk platform per day. The data that counts against the license is the data stored in indexes that are visible to users and searchable by the Splunk software. By default, that means the main index and any custom indexes created by users or apps. The main index is the default index where Splunk Enterprise stores all data unless another index is specified.
Option B is the correct answer because the data for the main index will count against the ingest-based license, as it is a visible and searchable index by default. Option A is incorrect because the summary index is a special type of index that stores the results of scheduled reports or accelerated data models, which do not count against the license. Option C is incorrect because the _metrics index is an internal index that stores metrics data about the Splunk platform performance, which does not count against the license. Option D is incorrect because the _introspection index is another internal index that stores data about the impact of the Splunk software on the host system, such as CPU, memory, disk, and network usage, which does not count against the license.
What information is needed about the current environment before deploying Splunk? (select all that apply)
List of vendors for network devices.
Overall goals for the deployment.
Key users.
Data sources.
Before deploying Splunk, it is important to gather some information about the current environment, such as:
Overall goals for the deployment: This includes the business objectives, the use cases, the expected outcomes, and the success criteria for the Splunk deployment. This information helps to define the scope, the requirements, the design, and the validation of the Splunk solution.
Key users: This includes the roles, the responsibilities, the expectations, and the needs of the different types of users who will interact with the Splunk deployment, such as administrators, analysts, developers, and end users. This information helps to determine the user access, the user experience, the user training, and the user feedback for the Splunk solution.
Data sources: This includes the types, the formats, the volumes, the locations, and the characteristics of the data that will be ingested, indexed, and searched by the Splunk deployment. This information helps to estimate the data throughput, the data retention, the data quality, and the data analysis for the Splunk solution.
Options B, C, and D are correct because they reflect the essential information needed before deploying Splunk. Option A is incorrect because a list of vendors for network devices is not relevant information for the deployment. The network devices themselves may be data sources, but their vendors do not matter to the Splunk solution.
In which phase of the Splunk Enterprise data pipeline are indexed extraction configurations processed?
Input
Search
Parsing
Indexing
Indexed extraction configurations are processed in the indexing phase of the Splunk Enterprise data pipeline. The data pipeline is the process that Splunk uses to ingest, parse, index, and search data. Indexed extraction configurations are settings that determine how Splunk extracts fields from data at index time rather than at search time. Indexed extraction can improve search performance, but it also increases the size of the index. These configurations are applied in the indexing phase, where Splunk writes the event data and the .tsidx files to the index. The input phase is where Splunk receives data from various sources and formats. The parsing phase is where Splunk breaks the data into events and assigns timestamps and host values. The search phase is where Splunk executes search commands and returns results.
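As an illustration, index-time field extraction is enabled per sourcetype in props.conf; the sourcetype name below is hypothetical, not from the source:

```ini
# props.conf -- hedged sketch; "example_csv" is an illustrative sourcetype name
[example_csv]
# Extract fields from structured data at index time instead of search time
INDEXED_EXTRACTIONS = csv
```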
What is the best method for sizing or scaling a search head cluster?
Estimate the maximum daily ingest volume in gigabytes and divide by the number of CPU cores per search head.
Estimate the total number of searches per day and divide by the number of CPU cores available on the search heads.
Divide the number of indexers by three to achieve the correct number of search heads.
Estimate the maximum concurrent number of searches and divide by the number of CPU cores per search head.
According to the Splunk blog, the best method for sizing or scaling a search head cluster is to estimate the maximum concurrent number of searches and divide by the number of CPU cores per search head. This gives you an idea of how many search heads you need to handle the peak search load without overloading the CPU resources. The other options are false because:
Estimating the maximum daily ingest volume in gigabytes and dividing by the number of CPU cores per search head is not a good method for sizing or scaling a search head cluster, as it does not account for the complexity and frequency of the searches. The ingest volume is more relevant for sizing or scaling the indexers, not the search heads.
Estimating the total number of searches per day and dividing by the number of CPU cores available on the search heads is not a good method for sizing or scaling a search head cluster, as it does not account for the concurrency and duration of the searches. The total number of searches per day is an average metric that does not reflect the peak search load or the search performance.
Dividing the number of indexers by three to achieve the correct number of search heads is not a good method for sizing or scaling a search head cluster, as it does not account for the search load or the search head capacity. The number of indexers is not directly proportional to the number of search heads, as different types of data and searches may require different amounts of resources.
(A customer has converted a CSV lookup to a KV Store lookup. What must be done to make it available for an automatic lookup?)
Add the repFactor=true attribute in collections.conf.
Add the replicate=true attribute in lookups.conf.
Add the replicate=true attribute in collections.conf.
Add the repFactor=true attribute in lookups.conf.
Splunk’s KV Store management documentation specifies that when converting a static CSV lookup to a KV Store lookup, the lookup data is stored in a MongoDB-based collection defined in collections.conf. To ensure that the KV Store lookup is replicated and available across all search head cluster members, administrators must include the attribute replicate=true within the collections.conf file.
This configuration instructs Splunk to replicate the KV Store collection’s data to all members in the Search Head Cluster (SHC), enabling consistent access and reliability across the cluster. Without this attribute, the KV Store collection would remain local to a single search head, making it unavailable for automatic lookups performed by other members.
Here’s an example configuration snippet from collections.conf:
[customer_lookup]
replicate = true
field.name = string
field.age = number
The attribute repFactor=true (mentioned in Options A and D) is unrelated to KV Store behavior—it applies to index replication, not KV Store replication. Similarly, replicate=true in lookups.conf (Option B) has no effect, as KV Store replication is controlled exclusively via collections.conf.
Once properly configured, the lookup can be defined in transforms.conf and referenced in props.conf for automatic lookup functionality.
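A minimal sketch of those two files; the collection, field, and sourcetype names are illustrative, not values from the source:

```ini
# transforms.conf -- define the lookup against the KV Store collection
[customer_lookup]
external_type = kvstore
collection = customer_lookup
fields_list = _key, name, age

# props.conf -- bind it as an automatic lookup for a hypothetical sourcetype
[example_sourcetype]
LOOKUP-customer = customer_lookup name OUTPUT age
```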
References (Splunk Enterprise Documentation):
• KV Store Collections and Configuration – collections.conf Reference
• Managing KV Store Data in Search Head Clusters
• Automatic Lookup Configuration Using KV Store
• Splunk Enterprise Admin Manual – Distributed KV Store Replication Settings
To expand the search head cluster by adding a new member, node2, what first step is required?
splunk bootstrap shcluster-config -mgmt_uri https://node2:8089 -replication_port 9200 -secret supersecretkey
splunk init shcluster-config -master_uri https://node2:8089 -replication_port 9200 -secret supersecretkey
splunk init shcluster-config -mgmt_uri https://node2:8089 -replication_port 9200 -secret supersecretkey
splunk add shcluster-member -new_member_uri https://node2:8089 -replication_port 9200 -secret supersecretkey
To expand the search head cluster by adding a new member, node2, the first step is to initialize the cluster configuration on node2 using the splunk init shcluster-config command. This command sets the required parameters for the cluster member: the management URI, the replication port, and the shared secret key. The management URI must be unique for each cluster member and must match the URI that the deployer uses to communicate with the member. The replication port is the port on which the member receives replicated search artifacts and must not conflict with the management port. The secret key must be identical across all cluster members; Splunk stores it in hashed form. Option C shows the correct syntax and parameters for the splunk init shcluster-config command. Option A is incorrect because splunk bootstrap shcluster-config is used to bring up the first cluster member as the initial captain, not to initialize a new member. Option B is incorrect because it uses the master_uri parameter, which belongs to indexer clustering, instead of mgmt_uri. Option D is incorrect because splunk add shcluster-member joins an already initialized search head to the cluster; it is a later step, not the first one.
References (Splunk Enterprise Documentation):
• https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/SHCdeploymentoverview#Initialize_cluster_members
• https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/SHCconfigurationdetails#Configure_the_cluster_members
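A hedged sketch of the full sequence; the hostnames and secret are examples, not values from the source:

```
# Step 1: initialize the new member on node2, then restart it
splunk init shcluster-config -mgmt_uri https://node2:8089 -replication_port 9200 -secret supersecretkey
splunk restart

# Step 2: join node2 to the running cluster; run on node2, pointing
# at any existing member
splunk add shcluster-member -current_member_uri https://node1:8089
```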
(How is the search log accessed for a completed search job?)
Search for: index=_internal sourcetype=search.
Select Settings > Searches, reports, and alerts, then from the Actions column, select View Search Log.
From the Activity menu, select Show Search Log.
From the Job menu, select Inspect Job, then click the search.log link.
According to the Splunk Search Job Inspector documentation, the search.log file for a completed search job can be accessed through Splunk Web by navigating to the job’s detailed inspection view.
To access it:
Open the completed search in Splunk Web.
Click the Job menu (top right of the search interface).
Select Inspect Job.
In the Job Inspector window, click the search.log link.
This log provides in-depth diagnostic details about how the search was parsed, distributed, and executed across the search head and indexers. It contains valuable performance metrics, command execution order, event sampling information, and any error or warning messages encountered during search processing.
The search.log file is generated for every search job (scheduled, ad-hoc, or background) and is stored in the job's dispatch directory under $SPLUNK_HOME/var/run/splunk/dispatch/.
Other listed options are incorrect:
Option A queries the _internal index, which does not store per-search logs.
Option B is used to view search configurations, not logs.
Option C is not a valid Splunk Web navigation option.
Thus, the only correct and Splunk-documented method is via Job → Inspect Job → search.log.
References (Splunk Enterprise Documentation):
• Search Job Inspector Overview and Usage
• Analyzing Search Performance Using search.log
• Search Job Management and Dispatch Directory Structure
• Splunk Enterprise Admin Manual – Troubleshooting Searches
Which two sections can be expanded using the Search Job Inspector?
Execution costs.
Saved search history.
Search job properties.
Optimization suggestions.
The Search Job Inspector can be used to expand the following sections: Search job properties and Optimization suggestions. The Search Job Inspector is a tool that provides detailed information about a search job, such as the search parameters, the search statistics, the search timeline, and the search log. The Search Job Inspector can be accessed by clicking the Job menu in the Search bar and selecting Inspect Job. The Search Job Inspector has several sections that can be expanded or collapsed by clicking the arrow icon next to the section name. The Search job properties section shows the basic information about the search job, such as the SID, the status, the duration, the disk usage, and the scan count. The Optimization suggestions section shows the suggestions for improving the search performance, such as using transforming commands, filtering events, or reducing fields. The Execution costs and Saved search history sections are not part of the Search Job Inspector, and they cannot be expanded. The Execution costs section is part of the Search Dashboard, which shows the relative costs of each search component, such as commands, lookups, or subsearches. The Saved search history section is part of the Saved Searches page, which shows the history of the saved searches that have been run by the user or by a schedule
Which of the following use cases would be made possible by multi-site clustering? (select all that apply)
Use blockchain technology to audit search activity from geographically dispersed data centers.
Enable a forwarder to send data to multiple indexers.
Greatly reduce WAN traffic by preferentially searching assigned site (search affinity).
Seamlessly route searches to a redundant site in case of a site failure.
According to the Splunk documentation, multi-site clustering is an indexer cluster that spans multiple physical sites, such as data centers. Each site has its own set of peer nodes and search heads. Each site also obeys site-specific replication and search factor rules. The use cases that are made possible by multi-site clustering are:
Greatly reduce WAN traffic by preferentially searching assigned site (search affinity). This means that if you configure each site so that it has both a search head and a full set of searchable data, the search head on each site will limit its searches to local peer nodes. This eliminates any need, under normal conditions, for search heads to access data on other sites, greatly reducing network traffic between sites.
Seamlessly route searches to a redundant site in case of a site failure. This means that by storing copies of your data at multiple locations, you maintain access to the data if a disaster strikes at one location. Multisite clusters provide site failover capability. If a site goes down, indexing and searching can continue on the remaining sites, without interruption or loss of data.
The other options are false because:
Use blockchain technology to audit search activity from geographically dispersed data centers. This is not a use case of multi-site clustering, as Splunk does not use blockchain technology to audit search activity. Splunk uses its own internal logs and metrics to monitor and audit search activity.
Enable a forwarder to send data to multiple indexers. This is not a use case of multi-site clustering, as forwarders can send data to multiple indexers regardless of whether they are in a single-site or multi-site cluster. This is a basic feature of forwarders that allows load balancing and high availability of data ingestion.
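A minimal multisite sketch for server.conf on the manager node; the site names and factor values are illustrative:

```ini
# server.conf on the cluster manager (master) node -- example values only
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
# keep 2 copies at the originating site, 3 copies in total
site_replication_factor = origin:2,total:3
# keep 1 searchable copy at the originating site, 2 in total
site_search_factor = origin:1,total:2
```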
A three-node search head cluster is skipping a large number of searches across time. What should be done to increase scheduled search capacity on the search head cluster?
Create a job server on the cluster.
Add another search head to the cluster.
server.conf captain_is_adhoc_searchhead = true.
Change limits.conf value for max_searches_per_cpu to a higher value.
Changing the limits.conf value for max_searches_per_cpu to a higher value is the best option to increase scheduled search capacity on the search head cluster when a large number of searches are skipped. This value, together with base_max_searches, determines how many concurrent searches can run on each search head relative to its CPU core count. Increasing it allows more scheduled searches to run at the same time, which reduces the number of skipped searches. Creating a job server on the cluster, setting captain_is_adhoc_searchhead = true in server.conf, or adding another search head to the cluster are not the best options for increasing scheduled search capacity. For more information, see [Configure limits.conf] in the Splunk documentation.
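For reference, the relevant limits.conf settings look like the following; the values shown are the shipped defaults, not recommendations:

```ini
# limits.conf -- total concurrency is roughly
# base_max_searches + (max_searches_per_cpu x number_of_cores)
[search]
max_searches_per_cpu = 1
base_max_searches = 6
# scheduled searches may use only this percentage of total concurrency
max_searches_perc = 50
```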
Which CLI command converts a Splunk instance to a license slave?
splunk add licenses
splunk list licenser-slaves
splunk edit licenser-localslave
splunk list licenser-localslave
The splunk edit licenser-localslave command is used to convert a Splunk instance to a license slave. This command will configure the Splunk instance to contact a license master and receive a license from it. This command should be used when the Splunk instance is part of a distributed deployment and needs to share a license pool with other instances. The splunk add licenses command is used to add a license to a Splunk instance, not to convert it to a license slave. The splunk list licenser-slaves command is used to list the license slaves that are connected to a license master, not to convert a Splunk instance to a license slave. The splunk list licenser-localslave command is used to list the license master that a license slave is connected to, not to convert a Splunk instance to a license slave. For more information, see Configure license slaves in the Splunk documentation.
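A hedged example of the command; the license master URI is illustrative:

```
splunk edit licenser-localslave -master_uri https://license-master.example.com:8089
splunk restart
```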
(Which of the following is a minimum search head specification for a distributed Splunk environment?)
A 1Gb Ethernet NIC, optional 2nd NIC for a management network.
An x86 32-bit chip architecture.
128 GB RAM.
Two physical CPU cores, or four vCPU at 2GHz or greater speed per core.
According to the Splunk Enterprise Capacity Planning and Hardware Sizing Guidelines, a distributed Splunk environment’s minimum search head specification must ensure that the system can efficiently manage search parsing, ad-hoc query execution, and knowledge object replication. Splunk officially recommends using a 64-bit x86 architecture system with a minimum of two physical CPU cores (or four vCPUs) running at 2 GHz or higher per core for acceptable performance.
Search heads are CPU-intensive components, primarily constrained by processor speed and the number of concurrent searches they must handle. Memory and disk space should scale with user concurrency and search load, but CPU capability remains the baseline requirement. While 128 GB RAM (Option C) is suitable for high-demand or Enterprise Security (ES) deployments, it exceeds the minimum hardware specification for general distributed search environments.
Splunk no longer supports 32-bit architectures (Option B). While a 1Gb Ethernet NIC (Option A) is common, it is not part of the minimum computational specification required by Splunk for search heads. The critical specification is processor capability — two physical cores or equivalent.
References (Splunk Enterprise Documentation):
• Splunk Enterprise Capacity Planning Manual – Hardware and Performance Guidelines
• Search Head Sizing and System Requirements
• Distributed Deployment Manual – Recommended System Specifications
• Splunk Hardware and Performance Tuning Guide
Which instance can not share functionality with the deployer?
Search head cluster member
License master
Master node
Monitoring Console (MC)
The deployer is a Splunk Enterprise instance that distributes apps and other configurations to the members of a search head cluster.
The deployer can be colocated with other management roles, such as the license master, the master node, or the Monitoring Console, provided the instance has sufficient resources.
However, the deployer must not run on a search head cluster member, because the configuration bundles it pushes would conflict with the member's own configuration and staging directories.
Therefore, the correct answer is A. Search head cluster member, as it is the only instance that cannot share functionality with the deployer.
Which Splunk server role regulates the functioning of indexer cluster?
Indexer
Deployer
Master Node
Monitoring Console
The master node is the Splunk server role that regulates the functioning of the indexer cluster. The master node coordinates the activities of the peer nodes, such as data replication, data searchability, and data recovery. The master node also manages the cluster configuration bundle and distributes it to the peer nodes. The indexer is the Splunk server role that indexes the incoming data and makes it searchable. The deployer is the Splunk server role that distributes apps and configuration updates to the search head cluster members. The monitoring console is the Splunk server role that monitors the health and performance of the Splunk deployment. For more information, see About indexer clusters and index replication in the Splunk documentation.
(Where can files be placed in a configuration bundle on a search peer that will persist after a new configuration bundle has been deployed?)
In the $SPLUNK_HOME/etc/slave-apps//local folder.
In the $SPLUNK_HOME/etc/master-apps//local folder.
Nowhere; the entire configuration bundle is overwritten with each push.
In the $SPLUNK_HOME/etc/slave-apps/_cluster/local folder.
According to the Indexer Clustering Administration Guide, configuration bundles pushed from the Cluster Manager (Master Node) overwrite the contents of the $SPLUNK_HOME/etc/slave-apps/ directory on each search peer (indexer). However, Splunk provides a special persistent location — the _cluster app’s local directory — for files that must survive bundle redeployments.
Specifically, any configuration files placed in:
$SPLUNK_HOME/etc/slave-apps/_cluster/local/
will persist after future bundle pushes because this directory is excluded from the automatic overwrite process.
This is particularly useful for maintaining local overrides or custom configurations that should not be replaced by the Cluster Manager, such as environment-specific inputs, temporary test settings, or monitoring configurations unique to that peer.
Other directories under slave-apps are overwritten each time a configuration bundle is pushed, ensuring consistency across the cluster. Likewise, master-apps exists only on the Cluster Manager and is used for deployment, not persistence.
Thus, the _cluster/local folder is the only safe, Splunk-documented location for configurations that need to survive bundle redeployment.
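For example, a peer-specific input placed in that directory survives bundle pushes; the monitored path, index, and sourcetype below are illustrative:

```ini
# $SPLUNK_HOME/etc/slave-apps/_cluster/local/inputs.conf
[monitor:///var/log/custom_app]
index = main
sourcetype = custom_app
```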
References (Splunk Enterprise Documentation):
• Indexer Clustering: How Configuration Bundles Work
• Maintaining Local Configurations on Clustered Indexers
• slave-apps and _cluster App Structure and Behavior
• Splunk Enterprise Admin Manual – Cluster Configuration Management Best Practices
Which of the following statements describe search head clustering? (Select all that apply.)
A deployer is required.
At least three search heads are needed.
Search heads must meet the high-performance reference server requirements.
The deployer must have sufficient CPU and network resources to process service requests and push configurations.
Search head clustering is a Splunk feature that allows a group of search heads to share configurations, apps, and knowledge objects, and to provide high availability and scalability for searching. Search head clustering has the following characteristics:
A deployer is required. A deployer is a Splunk instance that distributes configurations and apps to the members of the search head cluster. The deployer is not a member of the cluster, but a separate instance that pushes configuration bundles to the cluster members.
At least three search heads are needed. A search head cluster must have at least three search heads to form a quorum for captain election and to ensure high availability. With fewer than three members, the cluster cannot maintain a majority and enters a degraded state.
The deployer must have sufficient CPU and network resources to process service requests and push configurations. The deployer handles requests from the cluster members and pushes configurations and apps to them, so it must have enough CPU and network resources to perform these tasks efficiently and reliably.
Search heads do not need to meet the high-performance reference server requirements, as this is not a mandatory condition for search head clustering. The high-performance reference server requirements are only recommended for optimal performance and scalability of Splunk deployments, but they are not enforced by Splunk.
Why should intermediate forwarders be avoided when possible?
To minimize license usage and cost.
To decrease mean time between failures.
Because intermediate forwarders cannot be managed by a deployment server.
To eliminate potential performance bottlenecks.
Intermediate forwarders are forwarders that receive data from other forwarders and then send that data to indexers. They can be useful in some scenarios, such as when network bandwidth or security constraints prevent direct forwarding to indexers, or when data needs to be routed, cloned, or modified in transit. However, intermediate forwarders also introduce additional complexity and overhead to the data pipeline, which can affect the performance and reliability of data ingestion. Therefore, intermediate forwarders should be avoided when possible, and used only when there is a clear benefit or requirement for them. Some of the drawbacks of intermediate forwarders are:
They increase the number of hops and connections in the data flow, which can introduce latency and increase the risk of data loss or corruption.
They consume more resources on the hosts where they run, such as CPU, memory, disk, and network bandwidth, which can affect the performance of other applications or processes on those hosts.
They require additional configuration and maintenance, such as setting up inputs, outputs, load balancing, security, monitoring, and troubleshooting.
They can create data duplication or inconsistency if they are not configured properly, such as when using cloning or routing rules.
Some of the references that support this answer are:
Configure an intermediate forwarder, which states: “Intermediate forwarding is where a forwarder receives data from one or more forwarders and then sends that data on to another indexer. This kind of setup is useful when, for example, you have many hosts in different geographical regions and you want to send data from those forwarders to a central host in that region before forwarding the data to an indexer. All forwarder types can act as an intermediate forwarder. However, this adds complexity to your deployment and can affect performance, so use it only when necessary.”
Intermediate data routing using universal and heavy forwarders, which states: “This document outlines a variety of Splunk options for routing data that address both technical and business requirements. Overall benefits Using splunkd intermediate data routing offers the following overall benefits: … The routing strategies described in this document enable flexibility for reliably processing data at scale. Intermediate routing enables better security in event-level data as well as in transit. The following is a list of use cases and enablers for splunkd intermediate data routing: … Limitations splunkd intermediate data routing has the following limitations: … Increased complexity and resource consumption. splunkd intermediate data routing adds complexity to the data pipeline and consumes resources on the hosts where it runs. This can affect the performance and reliability of data ingestion and other applications or processes on those hosts. Therefore, intermediate routing should be avoided when possible, and used only when there is a clear benefit or requirement for it.”
Use forwarders to get data into Splunk Enterprise, which states: “The forwarders take the Apache data and send it to your Splunk Enterprise deployment for indexing, which consolidates, stores, and makes the data available for searching. Because of their reduced resource footprint, forwarders have a minimal performance impact on the Apache servers. … Note: You can also configure a forwarder to send data to another forwarder, which then sends the data to the indexer. This is called intermediate forwarding. However, this adds complexity to your deployment and can affect performance, so use it only when necessary.”
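By contrast, a direct, load-balanced outputs.conf on each universal forwarder avoids the intermediate hop entirely; the hostnames and group name are examples:

```ini
# outputs.conf -- forward straight to the indexer tier with load balancing
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
# rotate across the listed indexers every 30 seconds
autoLBFrequency = 30
```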
Where in the Job Inspector can details be found to help determine where performance is affected?
Search Job Properties > runDuration
Search Job Properties > runtime
Job Details Dashboard > Total Events Matched
Execution Costs > Components
This is where details can be found in the Job Inspector to help determine where performance is affected, as it shows the time and resources spent by each component of the search, such as commands, subsearches, lookups, and post-processing. The Execution Costs > Components section can help identify the most expensive or inefficient parts of the search and suggest ways to optimize or improve its performance. The other options are not as useful for finding performance issues. Option A, Search Job Properties > runDuration, shows the total time, in seconds, that the search took to run. This indicates the overall performance of the search, but provides no detail on the specific components or factors that affected it. Option B, Search Job Properties > runtime, shows the time, in seconds, that the search took to run on the search head. This indicates search head performance, but does not account for time spent on the indexers or the network. Option C, Job Details Dashboard > Total Events Matched, shows the number of events that matched the search criteria. This indicates the size and scope of the search, but provides no information on its performance or efficiency. Therefore, option D is the correct answer, and options A, B, and C are incorrect.
When Splunk is installed, where are the internal indexes stored by default?
SPLUNK_HOME/bin
SPLUNK_HOME/var/lib
SPLUNK_HOME/var/run
SPLUNK_HOME/etc/system/default
Splunk internal indexes are the indexes that store Splunk’s own data, such as internal logs, metrics, audit events, and configuration snapshots. By default, Splunk internal indexes are stored in the SPLUNK_HOME/var/lib/splunk directory, along with other user-defined indexes. The SPLUNK_HOME/bin directory contains the Splunk executable files and scripts. The SPLUNK_HOME/var/run directory contains the Splunk process ID files and lock files. The SPLUNK_HOME/etc/system/default directory contains the default Splunk configuration files.
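As an illustration, index locations resolve under $SPLUNK_DB, which defaults to SPLUNK_HOME/var/lib/splunk; the stanza below mirrors the shipped defaults for the _internal index:

```ini
# indexes.conf (system defaults) -- $SPLUNK_DB = $SPLUNK_HOME/var/lib/splunk
[_internal]
homePath = $SPLUNK_DB/_internaldb/db
coldPath = $SPLUNK_DB/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
```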
What log file would you search to verify if you suspect there is a problem interpreting a regular expression in a monitor stanza?
btool.log
metrics.log
splunkd.log
tailing_processor.log
The tailing_processor.log file would be the best place to search if you suspect a problem interpreting a regular expression in a monitor stanza. This log contains information about how Splunk monitors files and directories, including errors or warnings related to interpreting the monitor stanza. The splunkd.log file contains general information about the Splunk daemon, but may not include the specific details about the monitor stanza. The btool.log file contains information about configuration file layering, but does not log the runtime behavior of the monitor stanza. The metrics.log file contains Splunk performance metrics, but does not log file-monitoring issues. For more information, see About Splunk Enterprise logging in the Splunk documentation.
Configurations from the deployer are merged into which location on the search head cluster member?
SPLUNK_HOME/etc/system/local
SPLUNK_HOME/etc/apps/APP_HOME/local
SPLUNK_HOME/etc/apps/search/default
SPLUNK_HOME/etc/apps/APP_HOME/default
Configurations from the deployer are merged into the SPLUNK_HOME/etc/apps/APP_HOME/default directory on each search head cluster member. The deployer distributes apps and other configurations to the search head cluster members in the form of a configuration bundle, built from the contents of the SPLUNK_HOME/etc/shcluster/apps directory on the deployer. In the default push mode (merge_to_default), the deployer merges each app's local and default settings into the app's default directory before distribution, so the pushed configurations arrive on the members under APP_HOME/default. This leaves the members' own local directories free for runtime changes, which take precedence over the pushed default settings. The SPLUNK_HOME/etc/system/local directory is used for system-level configurations, not app-level configurations, and the SPLUNK_HOME/etc/apps/search/default directory holds the default configurations of the search app, not the configurations from the deployer.
(If the maxDataSize attribute is set to auto_high_volume in indexes.conf on a 64-bit operating system, what is the maximum hot bucket size?)
4 GB
750 MB
10 GB
1 GB
According to the indexes.conf reference in Splunk Enterprise, the parameter maxDataSize controls the maximum size (in GB or MB) of a single hot bucket before Splunk rolls it to a warm bucket. When the value is set to auto_high_volume on a 64-bit system, Splunk automatically sets the maximum hot bucket size to 10 GB.
The “auto” settings allow Splunk to choose optimized values based on the system architecture:
auto: Default hot bucket size of 750 MB, regardless of architecture.
auto_high_volume: Specifically tuned for high-ingest indexes; on 64-bit systems this equals 10 GB per hot bucket (1 GB on 32-bit systems).
auto_low_volume: Uses smaller bucket sizes for lightweight indexes.
The purpose of larger hot bucket sizes on 64-bit systems is to improve indexing performance and reduce the overhead of frequent bucket rolling during heavy data ingestion. The documentation explicitly warns that these sizes differ on 32-bit systems due to memory addressing limitations.
Thus, for high-throughput environments running 64-bit operating systems, auto_high_volume = 10 GB is the correct and Splunk-documented configuration.
References (Splunk Enterprise Documentation):
• indexes.conf – maxDataSize Attribute Reference
• Managing Index Buckets and Data Retention
• Splunk Enterprise Admin Manual – Indexer Storage Configuration
• Splunk Performance Tuning: Bucket Management and Hot/Warm Transitions
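In indexes.conf the setting is applied per index; a minimal sketch (the index name my_high_volume is hypothetical):

```ini
# indexes.conf -- hypothetical high-ingest index
[my_high_volume]
homePath   = $SPLUNK_DB/my_high_volume/db
coldPath   = $SPLUNK_DB/my_high_volume/colddb
thawedPath = $SPLUNK_DB/my_high_volume/thaweddb
# auto_high_volume = 10 GB hot buckets on 64-bit systems (1 GB on 32-bit);
# plain "auto" would cap hot buckets at 750 MB instead.
maxDataSize = auto_high_volume
```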
Which of the following artifacts are included in a Splunk diag file? (Select all that apply.)
OS settings.
Internal logs.
Customer data.
Configuration files.
The following artifacts are included in a Splunk diag file:
Internal logs. These are the log files that Splunk generates to record its own activities, such as splunkd.log, metrics.log, audit.log, and others. These logs can help troubleshoot Splunk issues and monitor Splunk performance.
Configuration files. These are the files that Splunk uses to configure various aspects of its operation, such as server.conf, indexes.conf, props.conf, transforms.conf, and others. These files can help understand Splunk settings and behavior. The following artifacts are not included in a Splunk diag file:
OS settings. These are the settings of the operating system that Splunk runs on, such as the kernel version, the memory size, the disk space, and others. These settings are not part of the Splunk diag file, but they can be collected separately using the diag --os option.
Customer data. These are the data that Splunk indexes and makes searchable, such as the rawdata and the tsidx files. These data are not part of the Splunk diag file, as they may contain sensitive or confidential information. For more information, see Generate a diagnostic snapshot of your Splunk Enterprise deployment in the Splunk documentation.
Which of the following strongly impacts storage sizing requirements for Enterprise Security?
The number of scheduled (correlation) searches.
The number of Splunk users configured.
The number of source types used in the environment.
The number of Data Models accelerated.
Data Model acceleration is a feature that enables faster searches over large data sets by summarizing the raw data into a more efficient format. Data Model acceleration consumes additional disk space, as it stores both the raw data and the summarized data. The amount of disk space required depends on the size and complexity of the Data Model, the retention period of the summarized data, and the compression ratio of the data. According to the Splunk Enterprise Security Planning and Installation Manual, Data Model acceleration is one of the factors that strongly impacts storage sizing requirements for Enterprise Security. The other factors are the volume and type of data sources, the retention policy of the data, and the replication factor and search factor of the index cluster. The number of scheduled (correlation) searches, the number of Splunk users configured, and the number of source types used in the environment are not directly related to storage sizing requirements for Enterprise Security1
1: https://docs.splunk.com/Documentation/ES/6.6.0/Install/Plan#Storage_sizing_requirements
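As a back-of-the-envelope sketch of why accelerated Data Models dominate sizing, the extra disk can be estimated as a fraction of raw daily volume multiplied by the summary retention. The ratios below (index storage at roughly 50% of raw after compression, acceleration overhead at roughly 3.5% of daily volume retained for a year) are illustrative assumptions, not Splunk-documented constants:

```python
def es_storage_estimate_gb(daily_ingest_gb, retention_days,
                           compressed_ratio=0.5,
                           dma_overhead_pct=0.035, dma_retention_days=365):
    """Rough sizing sketch: raw index storage plus Data Model
    acceleration (DMA) summaries. All ratios are illustrative assumptions."""
    index_storage = daily_ingest_gb * retention_days * compressed_ratio
    dma_storage = daily_ingest_gb * dma_overhead_pct * dma_retention_days
    return index_storage, dma_storage

# 100 GB/day kept 90 days: 100 * 90 * 0.5 -> 4500 GB of index storage,
# plus 100 * 3.5% * 365 -> roughly 1277 GB of acceleration summaries.
idx, dma = es_storage_estimate_gb(daily_ingest_gb=100, retention_days=90)
```

The point of the sketch: the acceleration term grows with the number of accelerated Data Models and their summary ranges, which is why it is the dominant variable in ES storage sizing.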
(A customer creates a saved search that runs on a specific interval. Which internal Splunk log should be viewed to determine if the search ran recently?)
metrics.log
kvstore.log
scheduler.log
btool.log
According to Splunk’s Search Scheduler and Job Management documentation, the scheduler.log file, located within the _internal index, records the execution of scheduled and saved searches. This log provides a detailed record of when each search is triggered, how long it runs, and its success or failure status.
Each time a scheduled search runs (for example, alerts, reports, or summary index searches), an entry is written to scheduler.log with fields such as:
sid (search job ID)
app (application context)
savedsearch_name (name of the saved search)
user (owner)
status (success, skipped, or failed)
run_time and result_count
By searching the _internal index for sourcetype=scheduler (or directly viewing scheduler.log), administrators can confirm whether a specific saved search executed as expected and diagnose skipped or delayed runs due to resource contention or concurrency limits.
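A search like the following confirms recent runs of a specific saved search (illustrative; "My Saved Search" is a placeholder, and field names follow the scheduler.log format described above):

```
index=_internal sourcetype=scheduler savedsearch_name="My Saved Search"
| stats latest(_time) AS last_run latest(status) AS last_status BY savedsearch_name
| convert ctime(last_run)
```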
Other internal logs serve different purposes:
metrics.log records performance metrics.
kvstore.log tracks KV Store operations.
btool.log records output from configuration validation (btool) runs; it does not track search execution.
Hence, scheduler.log is the definitive and Splunk-documented source for validating scheduled search activity.
References (Splunk Enterprise Documentation):
• Saved Searches and Alerts – Scheduler Operation Details
• scheduler.log Reference – Monitoring Scheduled Search Execution
• Monitoring Console: Search Scheduler Health Dashboard
• Troubleshooting Skipped or Delayed Scheduled Searches
(Which deployer push mode should be used when pushing built-in apps?)
merge_to_default
local_only
full
default only
According to the Splunk Enterprise Search Head Clustering (SHC) Deployer documentation, the “local_only” push mode is the correct option when deploying built-in apps. This mode ensures that the deployer only pushes configurations from the local directory of built-in Splunk apps (such as search, learned, or launcher) without overwriting or merging their default app configurations.
In an SHC environment, the deployer is responsible for distributing configuration bundles to all search head members. Each push can be executed in different modes depending on how the admin wants to handle the app directories:
full: Overwrites both the default and local folders of the app on the members.
merge_to_default: Merges local and default configurations into the default folder on the members (the historical default behavior, used primarily for custom apps).
local_only: Pushes only the app's local configurations, preserving the default settings of built-in apps (the safest method for core Splunk apps).
default_only: Pushes only the app's default folder configurations (not suitable for built-in app updates).
Using the “local_only” mode ensures that default Splunk system apps are not modified, preventing corruption or overwriting of base configurations that are critical for Splunk operation. It is explicitly recommended for pushing Splunk-provided (built-in) apps like search, launcher, and user-prefs from the deployer to all SHC members.
References (Splunk Enterprise Documentation):
• Managing Configuration Bundles with the Deployer (Search Head Clustering)
• Deployer Push Modes and Their Use Cases
• Splunk Enterprise Admin Manual – SHC Deployment Management
• Best Practices for Maintaining Built-in Splunk Apps in SHC Environments
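In recent Splunk versions the push mode is set per app on the deployer, in the app's app.conf under $SPLUNK_HOME/etc/shcluster/apps. A sketch for the built-in search app (the path is illustrative):

```ini
# $SPLUNK_HOME/etc/shcluster/apps/search/local/app.conf (illustrative path)
[shclustering]
# Push only local configurations; leave the built-in app's
# default settings untouched on the members.
deployer_push_mode = local_only
```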
When using ingest-based licensing, what Splunk role requires the license manager to scale?
Search peers
Search heads
There are no roles that require the license manager to scale
Deployment clients
When using ingest-based licensing, there are no Splunk roles that require the license manager to scale. The license manager's workload consists of receiving small, periodic usage reports from licensed instances and enforcing the license quota, and this overhead remains light no matter how large the deployment grows. Ingest-based licensing charges for the data ingested into Splunk, regardless of the data source or use case, and the license manager continues to track usage and generate license usage reports without needing additional capacity. Therefore, option C is the correct answer. Search peers are indexers that participate in a distributed search, search heads coordinate searches across multiple indexers, and deployment clients receive configuration updates and apps from a deployment server; all of these roles grow with the deployment, but each exchanges only lightweight, periodic license messages with the license manager, so none of them requires the license manager itself to scale12
1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/AboutSplunklicensing 2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/HowSplunklicensingworks
Which of the following is an indexer clustering requirement?
Must use shared storage.
Must reside on a dedicated rack.
Must have at least three members.
Must share the same license pool.
An indexer clustering requirement is that the cluster members must share the same license pool and license master. A license pool is a group of licenses that are assigned to a set of Splunk instances. A license master is a Splunk instance that manages the distribution and enforcement of licenses in a pool. In an indexer cluster, all cluster members must belong to the same license pool and report to the same license master, to ensure that the cluster does not exceed the license limit and that the license violations are handled consistently. An indexer cluster does not require shared storage, because each cluster member has its own local storage for the index data. An indexer cluster does not have to reside on a dedicated rack, because the cluster members can be located on different physical or virtual machines, as long as they can communicate with each other. An indexer cluster does not have to have at least three members, because a cluster can have as few as two members, although this is not recommended for high availability
The frequency in which a deployment client contacts the deployment server is controlled by what?
polling_interval attribute in outputs.conf
phoneHomeIntervalInSecs attribute in outputs.conf
polling_interval attribute in deploymentclient.conf
phoneHomeIntervalInSecs attribute in deploymentclient.conf
The frequency with which a deployment client contacts the deployment server is controlled by the phoneHomeIntervalInSecs attribute in deploymentclient.conf. This attribute specifies how often the deployment client checks in with the deployment server to get updates on the apps and configurations that it should receive. The attribute belongs in deploymentclient.conf, not outputs.conf, and polling_interval is not a valid attribute in either file. For more information, see Configure deployment clients in the Splunk documentation.
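A minimal deploymentclient.conf sketch (the server name, port, and interval are placeholders):

```ini
# deploymentclient.conf on the client
[deployment-client]
# check in with the deployment server every 60 seconds
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = deployserver.example.com:8089
```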
Consider a use case involving firewall data. There is no Splunk-supported Technical Add-On, but the vendor has built one. What are the items that must be evaluated before installing the add-on? (Select all that apply.)
Identify number of scheduled or real-time searches.
Validate if this Technical Add-On enables event data for a data model.
Identify the maximum number of forwarders Technical Add-On can support.
Verify if Technical Add-On needs to be installed onto both a search head or indexer.
A Technical Add-On (TA) is a Splunk app that contains configurations for data collection, parsing, and enrichment. It can also enable event data for a data model, which is useful for creating dashboards and reports. Therefore, before installing a TA, it is important to identify the number of scheduled or real-time searches that will use the data model, and to validate if the TA enables event data for a data model. The number of forwarders that the TA can support is not relevant, as the TA is installed on the indexer or search head, not on the forwarder. The installation location of the TA depends on the type of data and the use case, so it is not a fixed requirement
What does setting site=site0 on all Search Head Cluster members do in a multi-site indexer cluster?
Disables search site affinity.
Sets all members to dynamic captaincy.
Enables multisite search artifact replication.
Enables automatic search site affinity discovery.
Setting site=site0 on all Search Head Cluster members disables search site affinity. Search site affinity is a feature that allows search heads to preferentially search the peer nodes that are in the same site as the search head, to reduce network latency and bandwidth consumption. By setting site=site0, which is a special value that indicates no site, the search heads will search all peer nodes regardless of their site. Setting site=site0 does not set all members to dynamic captaincy, enable multisite search artifact replication, or enable automatic search site affinity discovery. Dynamic captaincy, which allows any member to become the captain, is enabled by default and is unrelated to the site setting. Multisite replication of data copies is controlled by site_replication_factor on the indexer cluster, not by site=site0. Automatic search site affinity discovery is not a documented Splunk feature; site membership is always assigned explicitly with the site attribute.
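On each search head, the setting lives in server.conf; a sketch (attribute names vary slightly by version, e.g. master_uri instead of manager_uri in older releases, and the manager host is a placeholder):

```ini
# server.conf on each search head cluster member
[general]
# site0 = no site: disables search site affinity, so searches
# fan out to peers in every site
site = site0

[clustering]
manager_uri = https://cluster-manager.example.com:8089
mode = searchhead
multisite = true
```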
(Which indexes.conf attribute would prevent an index from participating in an indexer cluster?)
available_sites = none
repFactor = 0
repFactor = auto
site_mappings = default_mapping
The repFactor (replication factor) attribute in the indexes.conf file determines whether an index participates in indexer clustering and how many copies of its data are replicated across peer nodes.
When repFactor is set to 0, it explicitly instructs Splunk to exclude that index from participating in the cluster replication and management process. This means:
The index is not replicated across peer nodes.
It will not be managed by the Cluster Manager.
It exists only locally on the indexer where it was created.
Such indexes are typically used for local-only storage, such as _internal, _audit, or other custom indexes that store diagnostic or node-specific data.
By contrast:
repFactor=auto allows the index to inherit the cluster-wide replication policy from the Cluster Manager.
available_sites and site_mappings relate to multisite configurations, controlling where copies of the data are stored, but they do not remove the index from clustering.
Setting repFactor=0 is the only officially supported way to create a non-clustered index within a clustered environment.
References (Splunk Enterprise Documentation):
• indexes.conf Reference – repFactor Attribute Explanation
• Managing Non-Clustered Indexes in Clustered Deployments
• Indexer Clustering: Index Participation and Replication Policies
• Splunk Enterprise Admin Manual – Local-Only and Clustered Index Configurations
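A sketch of a local-only index on a clustered peer (the index name local_diag is hypothetical):

```ini
# indexes.conf on the peer -- hypothetical node-local index
[local_diag]
homePath   = $SPLUNK_DB/local_diag/db
coldPath   = $SPLUNK_DB/local_diag/colddb
thawedPath = $SPLUNK_DB/local_diag/thaweddb
# 0 = do not replicate; the index stays local to this peer
repFactor = 0
```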
Which of the following are possible causes of a crash in Splunk? (select all that apply)
Incorrect ulimit settings.
Insufficient disk IOPS.
Insufficient memory.
Running out of disk space.
All of the options are possible causes of a crash in Splunk. According to the Splunk documentation1, incorrect ulimit settings can lead to file descriptor exhaustion, which can cause Splunk to crash or hang. Insufficient disk IOPS can also cause Splunk to crash or become unresponsive, as Splunk relies heavily on disk performance2. Insufficient memory can cause Splunk to run out of memory and crash, especially when running complex searches or handling large volumes of data3. Running out of disk space can cause Splunk to stop indexing data and crash, as Splunk needs enough disk space to store its data and logs4.
1: Configure ulimit settings for Splunk Enterprise 2: Troubleshoot Splunk performance issues 3: Troubleshoot memory usage 4: Troubleshoot disk space issues
In search head clustering, which of the following methods can you use to transfer captaincy to a different member? (Select all that apply.)
Use the Monitoring Console.
Use the Search Head Clustering settings menu from Splunk Web on any member.
Run the splunk transfer shcluster-captain command from the current captain.
Run the splunk transfer shcluster-captain command from the member you would like to become the captain.
In search head clustering, there are two methods to transfer captaincy to a different member. One method is to use the Search Head Clustering settings menu from Splunk Web on any member. This method allows the user to select a specific member to become the new captain, or to let Splunk choose the best candidate. The other method is to run the splunk transfer shcluster-captain command from the member that the user wants to become the new captain. This method requires the user to know the name of the target member and to have access to the CLI of that member. Using the Monitoring Console is not a method to transfer captaincy, because the Monitoring Console does not have the option to change the captain. Running the splunk transfer shcluster-captain command from the current captain is not a method to transfer captaincy, because this command will fail with an error message
Stakeholders have identified high availability for searchable data as their top priority. Which of the following best addresses this requirement?
Increasing the search factor in the cluster.
Increasing the replication factor in the cluster.
Increasing the number of search heads in the cluster.
Increasing the number of CPUs on the indexers in the cluster.
Increasing the search factor in the cluster will best address the requirement of high availability for searchable data. The search factor determines how many copies of searchable data are maintained by the cluster. A higher search factor means that more indexers can serve the data in case of a failure or a maintenance event. Increasing the replication factor will improve the availability of raw data, but not searchable data. Increasing the number of search heads or CPUs on the indexers will improve the search performance, but not the availability of searchable data. For more information, see Replication factor and search factor in the Splunk documentation.
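The search factor is set on the cluster manager in server.conf; a sketch with illustrative values (older releases use mode = master):

```ini
# server.conf on the cluster manager node
[clustering]
mode = manager
replication_factor = 3
# keep 3 searchable copies of each bucket instead of the default 2,
# so search can continue immediately if a peer fails
search_factor = 3
```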
Which Splunk log file would be the least helpful in troubleshooting a crash?
splunk_instrumentation.log
splunkd_stderr.log
crash-2022-05-13-11:42:57.log
splunkd.log
The splunk_instrumentation.log file is the least helpful in troubleshooting a crash, because it contains information about the Splunk Instrumentation feature, which collects and sends usage data to Splunk Inc. for product improvement purposes. This file does not contain any information about the Splunk processes, errors, or crashes. The other options are more helpful in troubleshooting a crash, because they contain relevant information about the Splunk daemon, the standard error output, and the crash report12
1: https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/WhatSplunklogsaboutitself#splunk_instrumentation.log 2: https://docs.splunk.com/Documentation/Splunk/9.1.2/Troubleshooting/WhatSplunklogsaboutitself#splunkd_stderr.log
When adding or decommissioning a member from a Search Head Cluster (SHC), what is the proper order of operations?
1. Delete Splunk Enterprise, if it exists. 2. Install and initialize the instance. 3. Join the SHC.
1. Install and initialize the instance. 2. Delete Splunk Enterprise, if it exists. 3. Join the SHC.
1. Initialize cluster rebalance operation. 2. Remove master node from cluster. 3. Trigger replication.
1. Trigger replication. 2. Remove master node from cluster. 3. Initialize cluster rebalance operation.
When adding or decommissioning a member from a Search Head Cluster (SHC), the proper order of operations is:
Delete Splunk Enterprise, if it exists.
Install and initialize the instance.
Join the SHC.
This order of operations ensures that the member has a clean and consistent Splunk installation before joining the SHC. Deleting Splunk Enterprise removes any existing configurations and data from the instance. Installing and initializing the instance sets up the Splunk software and the required roles and settings for the SHC. Joining the SHC adds the instance to the cluster and synchronizes the configurations and apps with the other members. The other orderings are incorrect because they either perform the steps in the wrong order or describe indexer cluster operations (removing a master node, rebalancing), not SHC membership changes.
(Which of the following data sources are used for the Monitoring Console dashboards?)
REST API calls
Splunk btool
Splunk diag
metrics.log
According to Splunk Enterprise documentation for the Monitoring Console (MC), the data displayed in its dashboards is sourced primarily from two internal mechanisms — REST API calls and metrics.log.
The Monitoring Console (formerly known as the Distributed Management Console, or DMC) uses REST API endpoints to collect system-level information from all connected instances, such as indexer clustering status, license usage, and search head performance. These REST calls pull real-time configuration and performance data from Splunk’s internal management layer (/services/server/status, /services/licenser, /services/cluster/peers, etc.).
Additionally, the metrics.log file is one of the main data sources used by the Monitoring Console. This log records Splunk’s internal performance metrics, including pipeline latency, queue sizes, indexing throughput, CPU usage, and memory statistics. Dashboards like “Indexer Performance,” “Search Performance,” and “Resource Usage” are powered by searches over the _internal index that reference this log.
Other tools listed — such as btool (configuration troubleshooting utility) and diag (diagnostic archive generator) — are not used as runtime data sources for Monitoring Console dashboards. They assist in troubleshooting but are not actively queried by the MC.
References (Splunk Enterprise Documentation):
• Monitoring Console Overview – Data Sources and Architecture
• metrics.log Reference – Internal Performance Data Collection
• REST API Usage in Monitoring Console
• Distributed Management Console Configuration Guide
(Which of the following is a valid way to determine if a new bundle push will trigger a rolling restart?)
splunk show cluster-bundle-status
splunk apply cluster-bundle
splunk validate cluster-bundle --check-restart
splunk apply cluster-bundle --validate-bundle
The splunk validate cluster-bundle --check-restart command is the officially documented Splunk Enterprise method to determine if a configuration bundle push will trigger a rolling restart within an indexer cluster.
When configuration changes are made on the Cluster Manager (Master Node)—for example, updates to indexes.conf, props.conf, or transforms.conf—Splunk administrators must validate the bundle before pushing it to all peer nodes. Using this command allows the Cluster Manager to simulate the deployment and verify whether the configuration modifications necessitate a restart across peer indexers to take effect.
The --check-restart flag specifically reports whether:
The configuration changes are minor (no restart required).
The changes affect components that require a full or rolling restart (e.g., changes to indexing paths, volume definitions, or replication factors).
Running this validation prior to an actual splunk apply cluster-bundle command prevents service disruption during production operations.
Other commands such as splunk show cluster-bundle-status display deployment status but not restart requirements, and splunk apply cluster-bundle executes the actual deployment, not validation.
References (Splunk Enterprise Documentation):
• Indexer Clustering: Deploy Configuration Bundles with Validation
• splunk validate cluster-bundle Command Reference
• Managing Indexer Clusters – Rolling Restarts and Bundle Deployment Best Practices
• Splunk Enterprise Admin Manual – Cluster Manager Maintenance Commands
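A typical sequence on the cluster manager looks like the following (an illustrative sketch, run from the manager's CLI):

```shell
# 1. Simulate the push and report whether a restart would be needed.
splunk validate cluster-bundle --check-restart
# 2. Check the validation/deployment status.
splunk show cluster-bundle-status
# 3. Apply the bundle once the validation result is acceptable.
splunk apply cluster-bundle --answer-yes
```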
What is the minimum reference server specification for a Splunk indexer?
12 CPU cores, 12GB RAM, 800 IOPS
16 CPU cores, 16GB RAM, 800 IOPS
24 CPU cores, 16GB RAM, 1200 IOPS
28 CPU cores, 32GB RAM, 1200 IOPS
The minimum reference server specification for a Splunk indexer is 12 CPU cores, 12GB RAM, and 800 IOPS. This reference hardware assumes a moderate daily indexing volume and a typical concurrent search load; larger volumes or heavier search workloads call for scaling up or out from this baseline. The other specifications are higher than the minimum requirement. For more information, see Reference hardware in the Splunk documentation.
(The performance of a specific search is performing poorly. The search must run over All Time and is expected to have very few results. Analysis shows that the search accesses a very large number of buckets in a large index. What step would most significantly improve the performance of this search?)
Increase the disk I/O hardware performance.
Increase the number of indexing pipelines.
Set indexed_realtime_use_by_default = true in limits.conf.
Change this to a real-time search using an All Time window.
As per Splunk Enterprise Search Performance documentation, the most significant factor affecting search performance when querying across a large number of buckets is disk I/O throughput. A search that spans "All Time" forces Splunk to inspect all historical buckets (hot, warm, cold, and any thawed buckets), even if only a few events match the query. This dramatically increases the amount of data read from disk, making the search bound by I/O performance rather than CPU or memory.
Increasing the number of indexing pipelines (Option B) only benefits data ingestion, not search performance. Changing to a real-time search (Option D) does not help because real-time searches are optimized for streaming new data, not historical queries. The indexed_realtime_use_by_default setting (Option C) applies only to streaming indexed real-time searches, not historical “All Time” searches.
To improve performance for such searches, Splunk documentation recommends enhancing disk I/O capability — typically through SSD storage, increased disk bandwidth, or optimized storage tiers. Additionally, creating summary indexes or accelerated data models may help for repeated “All Time” queries, but the most direct improvement comes from faster disk performance since Splunk must scan large numbers of buckets for even small result sets.
References (Splunk Enterprise Documentation):
• Search Performance Tuning and Optimization
• Understanding Bucket Search Mechanics and Disk I/O Impact
• limits.conf Parameters for Search Performance
• Storage and Hardware Sizing Guidelines for Indexers and Search Heads
Where does the Splunk deployer send apps by default?
etc/slave-apps/
etc/deploy-apps/
etc/apps/
etc/shcluster/
The Splunk deployer sends apps to the etc/apps/ directory on each search head cluster member by default.
Splunk's documentation directs administrators to place the configuration bundle in the $SPLUNK_HOME/etc/shcluster/apps directory on the deployer itself; when the bundle is pushed, the apps are distributed into $SPLUNK_HOME/etc/apps/ on every cluster member. Within each app's directory, configurations can live under the default or local subdirectories, with local taking precedence over default.
When should multiple search pipelines be enabled?
Only if disk IOPS is at 800 or better.
Only if there are fewer than twelve concurrent users.
Only if running Splunk Enterprise version 6.6 or later.
Only if CPU and memory resources are significantly under-utilized.
Multiple search pipelines should be enabled only if CPU and memory resources are significantly under-utilized. Search pipelines are the processes that execute search commands and return results. Multiple search pipelines can improve the search performance by running concurrent searches in parallel. However, multiple search pipelines also consume more CPU and memory resources, which can affect the overall system performance. Therefore, multiple search pipelines should be enabled only if there are enough CPU and memory resources available, and if the system is not bottlenecked by disk I/O or network bandwidth. The number of concurrent users, the disk IOPS, and the Splunk Enterprise version are not relevant factors for enabling multiple search pipelines
Which of the following statements describe licensing in a clustered Splunk deployment? (Select all that apply.)
Free licenses do not support clustering.
Replicated data does not count against licensing.
Each cluster member requires its own clustering license.
Cluster members must share the same license pool and license master.
The following statements describe licensing in a clustered Splunk deployment: free licenses do not support clustering, replicated data does not count against licensing, and cluster members must share the same license pool and license master. Free licenses are limited to 500 MB of daily indexing volume and do not allow distributed search or clustering; enabling clustering requires a license with a higher volume limit and distributed features. Replicated data is data that is copied from one peer node to another for high availability and load balancing; it does not count against licensing, because only the original data indexed by the peer nodes is metered. Cluster members must share the same license pool and report to the same license master so that license limits are enforced consistently across the cluster. The remaining statement is incorrect: there is no per-member clustering license, because licensing is shared across the cluster through the common pool.
When preparing to ingest a new data source, which of the following is optional in the data source assessment?
Data format
Data location
Data volume
Data retention
Data retention is optional in the data source assessment because it is not directly related to the ingestion process. Data retention is determined by the index configuration and the storage capacity of the Splunk platform. Data format, data location, and data volume are all essential information for planning how to collect, parse, and index the data source.
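Retention, when it is later configured, is controlled per index rather than at ingestion time. As an illustration, a hypothetical indexes.conf stanza (the index name and values are examples, not defaults) might cap retention at roughly 90 days:

```ini
# indexes.conf -- hypothetical example index; name and values are illustrative
[web_logs]
homePath   = $SPLUNK_DB/web_logs/db
coldPath   = $SPLUNK_DB/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb
# Events older than ~90 days (7776000 seconds) roll to frozen (deleted by default)
frozenTimePeriodInSecs = 7776000
```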
A customer currently has many deployment clients being managed by a single, dedicated deployment server. The customer plans to double the number of clients.
What could be done to minimize performance issues?
Modify deploymentclient.conf to change from a Pull to Push mechanism.
Reduce the number of apps in the Manager Node repository.
Increase the current deployment client phone home interval.
Decrease the current deployment client phone home interval.
According to the Splunk documentation, increasing the current deployment client phone home interval can minimize performance issues by reducing the frequency of communication between the clients and the deployment server. This also reduces network traffic and the load on the deployment server. The other options are false because:
Modifying deploymentclient.conf to change from a Pull to a Push mechanism is not possible, as Splunk does not support a Push mechanism for the deployment server.
Reducing the number of apps in the Manager Node repository will not affect the performance of the deployment server, as apps are only downloaded when there is a change in the configuration or a new app is added.
Decreasing the current deployment client phone home interval will worsen performance issues, as it increases the frequency of communication between the clients and the deployment server, resulting in more network traffic and load on the deployment server.
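The phone home interval is set on each client in deploymentclient.conf. A minimal sketch (the hostname and interval value are illustrative, not recommendations):

```ini
# deploymentclient.conf on each deployment client
[deployment-client]
# Raise the check-in interval from the 60-second default to 10 minutes
phoneHomeIntervalInSecs = 600

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
```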
A Splunk user successfully extracted an ip address into a field called src_ip. Their colleague cannot see that field in their search results with events known to have src_ip. Which of the following may explain the problem? (Select all that apply.)
The field was extracted as a private knowledge object.
The events are tagged as communicate, but are missing the network tag.
The Typing Queue, which does regular expression replacements, is blocked.
The colleague did not explicitly use the field in the search and the search was set to Fast Mode.
The following may explain the problem: the field was extracted as a private knowledge object, and the colleague did not explicitly use the field in a search that was set to Fast Mode.
A knowledge object is a Splunk entity that applies some knowledge or intelligence to the data, such as a field extraction, a lookup, or a macro. A knowledge object can have different permission levels: private, app, or global. A private knowledge object is visible only to the user who created it and is not shared with other users. A field extraction is a type of knowledge object that extracts fields from the raw data at index time or search time. If a field extraction is created as a private knowledge object, only the user who created it sees the extracted field in their search results.
A search mode is a setting that determines how Splunk processes and displays search results: Fast, Smart, or Verbose. Fast mode is the fastest and most efficient search mode, but it limits the number of fields and events that are displayed. Fast mode shows only the default fields, such as _time, host, source, sourcetype, and _raw, plus any fields explicitly used in the search. If a field is neither used in the search nor a default field, it will not appear in Fast mode.
The other two options do not explain the problem. Tags are labels applied to fields or field values to make them easier to search; they do not affect the visibility of fields unless they are used as filters in the search. The Typing Queue is a component of the Splunk data pipeline that performs regular expression replacements on the data at index time; it does not affect search-time field extraction, so a blocked Typing Queue would not selectively hide the src_ip field.
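Sharing the extraction addresses the first cause. Permissions live in the app's metadata files; moving the stanza from the user's private metadata into the app's local.meta with read access for everyone makes the field visible to colleagues. A sketch (the app path and extraction name are hypothetical):

```ini
# $SPLUNK_HOME/etc/apps/search/metadata/local.meta
[props/EXTRACT-src_ip]
access = read : [ * ], write : [ admin ]
export = system
```

For the second cause, the colleague can surface the field even in Fast mode by referencing it explicitly (for example, appending `| fields src_ip` to the search) or by switching the search to Verbose mode.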
To optimize the distribution of primary buckets; when does primary rebalancing automatically occur? (Select all that apply.)
Rolling restart completes.
Master node rejoins the cluster.
Captain joins or rejoins cluster.
A peer node joins or rejoins the cluster.
Primary rebalancing automatically occurs when a rolling restart completes, a master node rejoins the cluster, or a peer node joins or rejoins the cluster. These events can cause the distribution of primary buckets to become unbalanced, so the master node initiates a rebalancing process to ensure that each peer node has roughly the same number of primary buckets. Primary rebalancing does not occur when a captain joins or rejoins the cluster, because the captain is a search head cluster component, not an indexer cluster component; the captain is responsible for search head clustering, not indexer clustering.
What is a recommended way to improve search performance?
Use the shortest query possible.
Filter as much as possible in the initial search.
Use non-streaming commands as early as possible.
Leverage the not expression to limit returned results.
Splunk Enterprise search optimization documentation consistently emphasizes that filtering data as early as possible in the search pipeline is the most effective way to improve search performance. The base search (the part before the first pipe |) determines the volume of raw events Splunk retrieves from the indexers. Therefore, by applying restrictive conditions early, such as time ranges, indexed fields, and metadata filters, you can drastically reduce the number of events that need to be fetched and processed downstream.
The best practice is to use indexed field filters (e.g., index=security sourcetype=syslog host=server01) combined with search or where clauses at the start of the query. This minimizes unnecessary data movement between indexers and the search head, improving both search speed and system efficiency.
Using non-streaming commands early (Option C) can degrade performance because they require full result sets before producing output. Likewise, focusing solely on shortening queries (Option A) or excessive use of the not operator (Option D) does not guarantee efficiency, as both may still process large datasets.
Filtering early leverages Splunk’s distributed search architecture to limit data at the indexer level, reducing processing load and network transfer.
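As an illustration, the two searches below return the same result, but the second pushes the filters into the base search so the indexers discard non-matching events before anything crosses the network (the index, sourcetype, and field names are hypothetical):

Inefficient, filtering after the first pipe:

```spl
index=security
| search sourcetype=syslog host=server01 status=failed
| stats count by src_ip
```

Efficient, with restrictive indexed-field filters in the base search:

```spl
index=security sourcetype=syslog host=server01 status=failed
| stats count by src_ip
```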
Which of the following is a good practice for a search head cluster deployer?
The deployer only distributes configurations to search head cluster members when they “phone home”.
The deployer must be used to distribute non-replicable configurations to search head cluster members.
The deployer must distribute configurations to search head cluster members to be valid configurations.
The deployer only distributes configurations to search head cluster members with splunk apply shcluster-bundle.
The following is a good practice for a search head cluster deployer: The deployer must be used to distribute non-replicable configurations to search head cluster members. Non-replicable configurations are the configurations that are not replicated by the search factor, such as the apps and the server.conf settings. The deployer is the Splunk server role that distributes these configurations to the search head cluster members, ensuring that they have the same configuration. The deployer does not only distribute configurations to search head cluster members when they “phone home”, as this would cause configuration inconsistencies and delays. The deployer does not distribute configurations to search head cluster members to be valid configurations, as this implies that the configurations are invalid without the deployer. The deployer does not only distribute configurations to search head cluster members with splunk apply shcluster-bundle, as this would require manual intervention by the administrator. For more information, see Use the deployer to distribute apps and configuration updates in the Splunk documentation.
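In practice the push of non-replicable configurations is triggered manually from the deployer with splunk apply shcluster-bundle. A sketch (the hostname and credentials are placeholders):

```shell
# Run on the deployer, after staging apps under $SPLUNK_HOME/etc/shcluster/apps/
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```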
Which of the following is unsupported in a production environment?
Cluster Manager can run on the Monitoring Console instance in smaller environments.
Search Head Cluster Deployer can run on the Monitoring Console instance in smaller environments.
Search heads in a Search Head Cluster can run on virtual machines.
Indexers in an indexer cluster can run on virtual machines.
Splunk Enterprise documentation clarifies that none of the listed configurations are prohibited in production. Splunk allows the Cluster Manager to be colocated with the Monitoring Console in small deployments because both are management-plane functions and do not handle ingestion or search traffic. The documentation also states that the Search Head Cluster Deployer is not a runtime component and has minimal performance requirements, so it may be colocated with the Monitoring Console or License Master when hardware resources permit.
Splunk also supports virtual machines for both search heads and indexers, provided they are deployed with dedicated CPU, storage throughput, and predictable performance. Splunk’s official hardware guidance specifies that while bare metal often yields higher performance, virtualized deployments are fully supported in production as long as sizing principles are met.
Because Splunk explicitly supports all four configurations under proper sizing and best-practice guidelines, there is no correct selection for “unsupported.” The question is outdated relative to current Splunk Enterprise recommendations.
Because Splunk indexing is read/write intensive, it is important to select the appropriate disk storage solution for each deployment. Which of the following statements is accurate about disk storage?
High performance SAN should never be used.
Enable NFS for storing hot and warm buckets.
The recommended RAID setup is RAID 10 (1 + 0).
Virtualized environments are usually preferred over bare metal for Splunk indexers.
Splunk indexing is read/write intensive, as it involves reading data from various sources, writing data to disk, and reading data from disk for searching and reporting. Therefore, it is important to select the appropriate disk storage solution for each deployment, based on the performance, reliability, and cost requirements. The recommended RAID setup for Splunk indexers is RAID 10 (1 + 0), as it provides the best balance of performance and reliability. RAID 10 combines the advantages of RAID 1 (mirroring) and RAID 0 (striping), which means that it offers both data redundancy and data distribution. RAID 10 can tolerate multiple disk failures, as long as they are not in the same mirrored pair, and it can improve read and write speed, as it can access multiple disks in parallel.
High performance SAN (Storage Area Network) can be used for Splunk indexers, but it is not recommended, as it is more expensive and complex than local disks. SAN also introduces additional network latency and dependency, which can affect the performance and availability of Splunk indexers. SAN is more suitable for Splunk search heads, as they are less read/write intensive and more CPU intensive.
NFS (Network File System) should not be used for storing hot and warm buckets, as it can cause data corruption, data loss, and performance degradation. NFS is a network-based file system that allows multiple clients to access the same files on a remote server. NFS is not compatible with Splunk index replication and search head clustering, as it can cause conflicts and inconsistencies among the Splunk instances. NFS is also slower and less reliable than local disks, as it depends on network bandwidth and availability. NFS can be used for storing cold and frozen buckets, as they are less frequently accessed and less critical for Splunk operations.
Virtualized environments are not usually preferred over bare metal for Splunk indexers, as they can introduce additional overhead and complexity. Virtualized environments can affect the performance and reliability of Splunk indexers, as they share physical resources and the network with other virtual machines. They can also complicate monitoring and troubleshooting, as they add another layer of abstraction and configuration. Virtualized environments can be used for Splunk indexers, but they require careful planning and tuning to ensure optimal performance and availability.
Users who receive a link to a search are receiving an "Unknown sid" error message when they open the link.
Why is this happening?
The users have insufficient permissions.
An add-on needs to be updated.
The search job has expired.
One or more indexers are down.
According to the Splunk documentation, the “Unknown sid” error message means that the search job associated with the link has expired or been deleted. The sid (search ID) is a unique identifier for each search job, and it is used to retrieve the results of the search. If the sid is not found, the search cannot be displayed. The other options are false because:
The users having insufficient permissions would result in a different error message, such as “You do not have permission to view this page” or “You do not have permission to run this search”.
An add-on needing to be updated would not affect the validity of the sid, unless the add-on changes the search syntax or the data source in a way that makes the search invalid or inaccessible.
One or more indexers being down would not cause the “Unknown sid” error, as the sid is stored on the search head, not the indexers. However, it could cause other errors, such as “Unable to distribute to peer” or “Search peer has the following message: not enough disk space”.
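The lifetime of a search job, and therefore how long a shared link remains valid, is governed by the job's TTL. A sketch of the relevant limits.conf settings on the search head (the values shown are illustrative, not recommendations):

```ini
# limits.conf on the search head
[search]
# Lifetime of an ad hoc search job after it completes (seconds)
ttl = 600
# Lifetime of a job whose results the user explicitly saves or shares (seconds)
default_save_ttl = 604800
```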
Which of the following will cause the greatest reduction in disk size requirements for a cluster of N indexers running Splunk Enterprise Security?
Setting the cluster search factor to N-1.
Increasing the number of buckets per index.
Decreasing the data model acceleration range.
Setting the cluster replication factor to N-1.
Decreasing the data model acceleration range will reduce the disk size requirements for a cluster of indexers running Splunk Enterprise Security. Data model acceleration creates tsidx files that consume disk space on the indexers. Reducing the acceleration range limits the amount of data that is accelerated and thus saves disk space. Setting the cluster search factor or replication factor to N-1 would dramatically increase, not reduce, disk usage, because nearly every indexer would then hold a searchable or replicated copy of each bucket. Increasing the number of buckets per index will also increase the disk size requirements, as each bucket has a minimum size. For more information, see Data model acceleration and Bucket size in the Splunk documentation.
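The acceleration range is an attribute of the data model itself and corresponds to acceleration.earliest_time in datamodels.conf; shrinking it from, say, a year to a week reduces the tsidx footprint accordingly. A sketch (the stanza name and range are examples):

```ini
# datamodels.conf
[Network_Traffic]
acceleration = true
# Only accelerate the last 7 days instead of a longer range
acceleration.earliest_time = -7d
```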
Which Splunk Enterprise offering has its own license?
Splunk Cloud Forwarder
Splunk Heavy Forwarder
Splunk Universal Forwarder
Splunk Forwarder Management
The Splunk Universal Forwarder is the only Splunk Enterprise offering that has its own license. The Splunk Universal Forwarder license allows the forwarder to send data to any Splunk Enterprise or Splunk Cloud instance without consuming any license quota. The Splunk Heavy Forwarder does not have its own license, but rather consumes the license quota of the Splunk Enterprise or Splunk Cloud instance that it sends data to. The Splunk Cloud Forwarder and Splunk Forwarder Management are not separate Splunk Enterprise offerings, but rather features of the Splunk Cloud service. For more information, see About forwarder licensing in the Splunk documentation.
Which command should be run to re-sync a stale KV Store member in a search head cluster?
splunk clean kvstore -local
splunk resync kvstore -remote
splunk resync kvstore -local
splunk clean eventdata -local
To resync a stale KV Store member in a search head cluster, stop the search head that has the stale KV Store member, run the command splunk clean kvstore --local, and then restart the search head. This triggers the initial synchronization from the other KV Store members.
The command splunk resync kvstore [-source sourceId] is used to resync the entire KV Store cluster from one of the members, not a single member. This command can only be invoked from the node that is operating as search head cluster captain.
The command splunk clean eventdata -local is used to delete all indexed data from a standalone indexer or a cluster peer node, not to resync the KV Store.
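The documented sequence for a single stale member, run on that member (not on the captain), looks like this:

```shell
# On the stale search head cluster member
splunk stop
splunk clean kvstore --local
splunk start   # triggers an initial sync from the other KV Store members
```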
What is the algorithm used to determine captaincy in a Splunk search head cluster?
Raft distributed consensus.
Rapt distributed consensus.
Rift distributed consensus.
Round-robin distribution consensus.
The algorithm used to determine captaincy in a Splunk search head cluster is Raft distributed consensus. Raft is a consensus algorithm that is used to elect a leader among a group of nodes in a distributed system. In a Splunk search head cluster, Raft is used to elect a captain among the cluster members. The captain is the cluster member responsible for coordinating search activities, replicating configurations and apps, and pushing the knowledge bundles to the search peers. The captain is dynamically elected and can change over time, depending on the availability and performance of the cluster members. Rapt, Rift, and Round-robin are not valid algorithms for determining captaincy in a Splunk search head cluster.
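Which member currently holds captaincy, along with the cluster's election state, can be checked from any member with the splunk show shcluster-status command (credentials are placeholders):

```shell
splunk show shcluster-status -auth admin:changeme
```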
TESTED 30 Nov 2025

