Splunk Enterprise Certified Architect Questions and Answers
In search head clustering, which of the following methods can you use to transfer captaincy to a different member? (Select all that apply.)
Options:
Use the Monitoring Console.
Use the Search Head Clustering settings menu from Splunk Web on any member.
Run the splunk transfer shcluster-captain command from the current captain.
Run the splunk transfer shcluster-captain command from the member you would like to become the captain.
Answer:
B, D
Explanation:
In search head clustering, there are two methods to transfer captaincy to a different member. One method is to use the Search Head Clustering settings menu from Splunk Web on any member. This method allows the user to select a specific member to become the new captain, or to let Splunk choose the best candidate. The other method is to run the splunk transfer shcluster-captain command from the member that the user wants to become the new captain. This method requires the user to know the name of the target member and to have access to the CLI of that member. Using the Monitoring Console is not a method to transfer captaincy, because the Monitoring Console does not have the option to change the captain. Running the splunk transfer shcluster-captain command from the current captain is not a method to transfer captaincy, because this command will fail with an error message
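For illustration, the CLI method is run on the member that should become the new captain, pointing at that member's own management URI (host name and credentials below are hypothetical):
splunk transfer shcluster-captain -mgmt_uri https://sh2.example.com:8089 -auth admin:changeme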
Which of the following commands is used to clear the KV store?
Options:
splunk clean kvstore
splunk clear kvstore
splunk delete kvstore
splunk reinitialize kvstore
Answer:
A
Explanation:
The splunk clean kvstore command is used to clear the KV store. This command will delete all the collections and documents in the KV store and reset it to an empty state. This command can be useful for troubleshooting KV store issues or resetting the KV store data. The splunk clear kvstore, splunk delete kvstore, and splunk reinitialize kvstore commands are not valid Splunk commands. For more information, see Use the CLI to manage the KV store in the Splunk documentation.
What is the expected minimum amount of storage required for data across an indexer cluster with the following input and parameters?
• Raw data = 15 GB per day
• Index files = 35 GB per day
• Replication Factor (RF) = 2
• Search Factor (SF) = 2
Options:
85 GB per day
50 GB per day
100 GB per day
65 GB per day
Answer:
C
Explanation:
The correct answer is C. 100 GB per day. This is the expected minimum amount of storage required for data across an indexer cluster with the given input and parameters. The rawdata journal is replicated to every copy of a bucket, as determined by the Replication Factor, while the index (.tsidx) files exist only in the searchable copies, as determined by the Search Factor1. In this case, the calculation is:
(15 GB x RF) + (35 GB x SF) = (15 x 2) + (35 x 2) = 30 + 70 = 100 GB per day
The Replication Factor is the number of copies of each bucket that the cluster maintains across the set of peer nodes2. The Search Factor is the number of searchable copies of each bucket that the cluster maintains across the set of peer nodes3. Both factors affect the storage requirement, as they determine how many copies of the rawdata are stored and how many of those copies also carry index files. The other options do not match the result of this calculation. Therefore, option C is the correct answer, and options A, B, and D are incorrect.
1: Estimate storage requirements 2: About indexer clusters and index replication 3: Configure the search factor
(What command will decommission a search peer from an indexer cluster?)
Options:
splunk disablepeer --enforce-counts
splunk decommission --enforce-counts
splunk offline --enforce-counts
splunk remove cluster-peers --enforce-counts
Answer:
C
Explanation:
The splunk offline --enforce-counts command is the official and documented method used to gracefully decommission a search peer (indexer) from an indexer cluster in Splunk Enterprise. This command ensures that all replication and search factors are maintained before the peer is removed.
When executed, Splunk initiates a controlled shutdown process for the peer node. The Cluster Manager verifies that sufficient replicated copies of all bucket data exist across the remaining peers according to the configured replication_factor (RF) and search_factor (SF). The --enforce-counts flag specifically enforces that replication and search counts remain intact before the peer fully detaches from the cluster, ensuring no data loss or availability gap.
The sequence typically includes:
Validating cluster state and replication health.
Rolling off the peer’s data responsibilities to other peers.
Removing the peer from the active cluster membership list once replication is complete.
Other options like disablepeer and decommission are not valid Splunk commands, and splunk remove cluster-peers is run on the manager only to drop an already-offline peer from its peer list, not to decommission one gracefully. Therefore, the correct documented method is to use:
splunk offline --enforce-counts
References (Splunk Enterprise Documentation):
• Indexer Clustering: Decommissioning a Peer Node
• Managing Peer Nodes and Maintaining Data Availability
• Splunk CLI Command Reference – splunk offline
• Cluster Manager and Peer Maintenance Procedures
Configurations from the deployer are merged into which location on the search head cluster member?
Options:
SPLUNK_HOME/etc/system/local
SPLUNK_HOME/etc/apps/APP_HOME/local
SPLUNK_HOME/etc/apps/search/default
SPLUNK_HOME/etc/apps/APP_HOME/default
Answer:
D
Explanation:
Configurations from the deployer are merged into the SPLUNK_HOME/etc/apps/APP_HOME/default directory on each search head cluster member. The deployer distributes apps and other configurations to the search head cluster members in the form of a configuration bundle, built from the contents of the SPLUNK_HOME/etc/shcluster/apps directory on the deployer. In the default merge_to_default push mode, the deployer merges each app's local and default settings and places the result in the app's default directory on the members. This leaves the members' local directories free for runtime changes made by users; those local settings take precedence over default and are not overwritten by subsequent pushes. The SPLUNK_HOME/etc/apps/APP_HOME/local directory is therefore reserved for member-side runtime changes, not deployer content. The SPLUNK_HOME/etc/system/local directory is used for system-level configurations, and the SPLUNK_HOME/etc/apps/search/default directory holds the default configurations of the search app, not the configurations from the deployer.
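As a minimal sketch of where an app's settings travel, assuming the default merge_to_default push mode and a hypothetical app named myapp:
# Staged on the deployer:
$SPLUNK_HOME/etc/shcluster/apps/myapp/local/props.conf
# After the push, on each cluster member:
$SPLUNK_HOME/etc/apps/myapp/default/props.conf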
Before users can use a KV store, an admin must create a collection. Where is a collection defined?
Options:
kvstore.conf
collection.conf
collections.conf
kvcollections.conf
Answer:
C
Explanation:
A collection is defined in the collections.conf file, which specifies the name, schema, and permissions of the collection. The kvstore.conf file is used to configure the KV store settings, such as the port, SSL, and replication factor. The other two files do not exist1
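A minimal sketch of a collection definition, together with the KV store lookup definition that exposes it to search (collection, lookup, and field names are hypothetical):
# collections.conf
[user_assets]
field.owner = string
field.asset_count = number
# transforms.conf
[user_assets_lookup]
external_type = kvstore
collection = user_assets
fields_list = _key, owner, asset_count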
Which of the following options in limits.conf may provide performance benefits at the forwarding tier?
Options:
Enable the indexed_realtime_use_by_default attribute.
Increase the maxKBps attribute.
Increase the parallelIngestionPipelines attribute.
Increase the max_searches_per_cpu attribute.
Answer:
B
Explanation:
The correct answer is B. Increase the maxKBps attribute. maxKBps, set in the [thruput] stanza of limits.conf, caps the bandwidth the forwarder uses to send data, and the universal forwarder default of 256 KBps can throttle forwarding on busy hosts3. Raising the value (or setting it to 0 for unlimited) can provide a real performance benefit at the forwarding tier, provided the network and the indexing tier can absorb the extra throughput3. The other options do not provide this benefit from limits.conf. Option A, enabling the indexed_realtime_use_by_default attribute, is a real-time search setting that adds load rather than improving forwarding2. Option C is not a limits.conf setting at all; parallelIngestionPipelines lives in server.conf (as discussed later in this document), so while it can help ingestion, it does not belong to a question about limits.conf1. Option D, increasing the max_searches_per_cpu attribute, only affects search concurrency on search heads and indexers, not forwarding performance4. Therefore, option B is the correct answer, and options A, C, and D are incorrect.
1: Configure parallel ingestion pipelines 2: Configure real-time forwarding 3: Configure forwarder output 4: Configure search performance
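For illustration, a minimal limits.conf sketch for the forwarding tier; setting 0 removes the cap entirely, so raise it only to what the network and indexers can absorb:
[thruput]
# Universal forwarder default is 256 (KB per second); 0 = unlimited
maxKBps = 0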
When should multiple search pipelines be enabled?
Options:
Only if disk IOPS is at 800 or better.
Only if there are fewer than twelve concurrent users.
Only if running Splunk Enterprise version 6.6 or later.
Only if CPU and memory resources are significantly under-utilized.
Answer:
D
Explanation:
Multiple search pipelines should be enabled only if CPU and memory resources are significantly under-utilized. Search pipelines are the processes that execute search commands and return results. Multiple search pipelines can improve the search performance by running concurrent searches in parallel. However, multiple search pipelines also consume more CPU and memory resources, which can affect the overall system performance. Therefore, multiple search pipelines should be enabled only if there are enough CPU and memory resources available, and if the system is not bottlenecked by disk I/O or network bandwidth. The number of concurrent users, the disk IOPS, and the Splunk Enterprise version are not relevant factors for enabling multiple search pipelines
An index has large text log entries with many unique terms in the raw data. Other than the raw data, which index components will take the most space?
Options:
Index files (*.tsidx files).
Bloom filters (bloomfilter files).
Index source metadata (sources.data files).
Index sourcetype metadata (SourceTypes.data files).
Answer:
A
Explanation:
Index files (.tsidx files) are the largest index components after the raw data itself: they store the inverted index of terms (the lexicon and posting lists) that point back to events in the rawdata journal. They take the most space among the non-raw components, especially if the raw data has many unique terms that enlarge the lexicon. Bloom filters, source metadata, and sourcetype metadata are much smaller in comparison and do not grow significantly with the number of unique terms in the raw data.
How many cluster managers are required for a multisite indexer cluster?
Options:
Two for the entire cluster.
One for each site.
One for the entire cluster.
Two for each site.
Answer:
C
Explanation:
A multisite indexer cluster is a type of indexer cluster that spans multiple geographic locations or sites. A multisite indexer cluster requires only one cluster manager, also known as the master node, for the entire cluster. The cluster manager is responsible for coordinating the replication and search activities among the peer nodes across all sites. The cluster manager can reside in any site, but it must be accessible by all peer nodes and search heads in the cluster. Option C is the correct answer. Option A is incorrect because having two cluster managers for the entire cluster would introduce redundancy and complexity. Option B is incorrect because having one cluster manager for each site would create separate clusters, not a multisite cluster. Option D is incorrect because having two cluster managers for each site would be unnecessary and inefficient12
A Splunk architect has inherited the Splunk deployment at Buttercup Games and end users are complaining that the events are inconsistently formatted for a web source. Further investigation reveals that not all weblogs flow through the same infrastructure: some of the data goes through heavy forwarders and some of the forwarders are managed by another department.
Which of the following items might be the cause of this issue?
Options:
The search head may have different configurations than the indexers.
The data inputs are not properly configured across all the forwarders.
The indexers may have different configurations than the heavy forwarders.
The forwarders managed by the other department are an older version than the rest.
Answer:
C
Explanation:
The indexers may have different configurations than the heavy forwarders, which might cause the issue of inconsistently formatted events for a web sourcetype. Heavy forwarders parse the data (event breaking, timestamping, and index-time transforms) before sending it on, so events that pass through them are cooked using the heavy forwarders' props.conf and transforms.conf, while data sent directly to the indexers is parsed there. If the indexers' settings differ from the heavy forwarders' settings, the same sourcetype ends up formatted inconsistently. The search head configurations do not affect the event formatting, as the search head does not parse or index the data. The data inputs configurations on the forwarders do not affect the event formatting, as the data inputs only determine what data to collect and how to monitor it. The forwarder version does not affect the event formatting, as long as the forwarder is compatible with the indexer. For more information, see [Heavy forwarder versus indexer] and [Configure event processing] in the Splunk documentation.
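As an illustration, these are the kinds of props.conf event-breaking and timestamp settings that must be identical on every parsing tier (heavy forwarders and indexers); the sourcetype name and values are hypothetical:
[vendor:weblog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 30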
Which of the following security options must be explicitly configured (i.e. which options are not enabled by default)?
Options:
Data encryption between Splunk Web and splunkd.
Certificate authentication between forwarders and indexers.
Certificate authentication between Splunk Web and search head.
Data encryption for distributed search between search heads and indexers.
Answer:
B
Explanation:
The following security option must be explicitly configured, as it is not enabled by default:
Certificate authentication between forwarders and indexers. This option allows the forwarders and indexers to verify each other’s identity using SSL certificates, which prevents unauthorized data transmission or spoofing attacks. This option is not enabled by default, as it requires the administrator to generate and distribute the certificates for the forwarders and indexers. For more information, see [Secure the communication between forwarders and indexers] in the Splunk documentation. The following security options are enabled by default:
Data encryption between Splunk Web and splunkd. This option encrypts the communication between the Splunk Web interface and the splunkd daemon using SSL, which prevents data interception or tampering. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [About securing Splunk Enterprise with SSL] in the Splunk documentation.
Certificate authentication between Splunk Web and search head. This option allows the Splunk Web interface and the search head to verify each other’s identity using SSL certificates, which prevents unauthorized access or spoofing attacks. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [About securing Splunk Enterprise with SSL] in the Splunk documentation.
Data encryption for distributed search between search heads and indexers. This option encrypts the communication between the search heads and the indexers using SSL, which prevents data interception or tampering. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [Secure your distributed search environment] in the Splunk documentation.
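A minimal sketch of what explicitly configuring option B can look like; certificate paths, host names, and ports are illustrative, and exact attribute names vary by version:
# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
sslVerifyServerCert = true
# inputs.conf on the indexer
[splunktcp-ssl:9997]
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
requireClientCert = true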
Which of the following are client filters available in serverclass.conf? (Select all that apply.)
Options:
DNS name.
IP address.
Splunk server role.
Platform (machine type).
Answer:
A, B, D
Explanation:
The client filters available in serverclass.conf are DNS name, IP address, and platform (machine type). These filters allow the administrator to specify which forwarders belong to a server class and receive the apps and configurations from the deployment server. The Splunk server role is not a valid client filter in serverclass.conf, as it is not a property of the forwarder. For more information, see [Use forwarder management filters] in the Splunk documentation.
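A minimal serverclass.conf sketch combining the three filter types; host patterns, addresses, and the app name are hypothetical:
[serverClass:linux_web]
whitelist.0 = webfwd-*.example.com
whitelist.1 = 10.1.2.*
machineTypesFilter = linux-x86_64
[serverClass:linux_web:app:my_web_inputs]
restartSplunkd = true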
The KV store forms its own cluster within a SHC. What is the maximum number of SHC members KV store will form?
Options:
25
50
100
Unlimited
Answer:
B
Explanation:
The KV store forms its own cluster within a SHC, and the maximum number of SHC members a KV store cluster will form is 50. The KV store is backed by MongoDB replication, and a replica set supports at most 50 members, which is where this limit comes from. A search head cluster larger than that would exceed the supported KV store cluster size, so 50 is the documented ceiling. The values 25, 100, and unlimited do not match this limit.
(Which of the following must be included in a deployment plan?)
Options:
Future topology diagrams of the IT environment.
A comprehensive list of stakeholders, either direct or indirect.
Current logging details and data source inventory.
Business continuity and disaster recovery plans.
Answer:
C
Explanation:
According to Splunk’s Deployment Planning and Implementation Guidelines, one of the most critical elements of a Splunk deployment plan is a comprehensive data source inventory and current logging details. This information defines the scope of data ingestion and directly influences sizing, architecture design, and licensing.
A proper deployment plan should identify:
All data sources (such as syslogs, application logs, network devices, OS logs, databases, etc.)
Expected daily ingest volume per source
Log formats and sourcetypes
Retention requirements and compliance constraints
This data forms the foundation for index sizing, forwarder configuration, and storage planning. Without a well-defined data inventory, Splunk architects cannot accurately determine hardware capacity, indexing load, or network throughput requirements.
While stakeholder mapping, topology diagrams, and continuity plans (Options A, B, D) are valuable in a broader IT project, Splunk’s official guidance emphasizes logging details and source inventory as mandatory for a deployment plan. It ensures that the Splunk environment is properly sized, licensed, and aligned with business data visibility goals.
References (Splunk Enterprise Documentation):
• Splunk Enterprise Deployment Planning Manual – Data Source Inventory Requirements
• Capacity Planning for Indexer and Search Head Sizing
• Planning Data Onboarding and Ingestion Strategies
• Splunk Architecture and Implementation Best Practices
Which of the following is true regarding Splunk Enterprise's performance? (Select all that apply.)
Options:
Adding search peers increases the maximum size of search results.
Adding RAM to existing search heads provides additional search capacity.
Adding search peers increases the search throughput as the search load increases.
Adding search heads provides additional CPU cores to run more concurrent searches.
Answer:
C, D
Explanation:
The following statements are true regarding Splunk Enterprise performance:
Adding search peers increases the search throughput as search load increases. This is because adding more search peers distributes the search workload across more indexers, which reduces the load on each indexer and improves the search speed and concurrency.
Adding search heads provides additional CPU cores to run more concurrent searches. This is because adding more search heads increases the number of search processes that can run in parallel, which improves the search performance and scalability. The following statements are false regarding Splunk Enterprise performance:
Adding search peers does not increase the maximum size of search results. The maximum size of search results is determined by the maxresultrows setting in the limits.conf file, which is independent of the number of search peers.
Adding RAM to an existing search head does not provide additional search capacity. The search capacity of a search head is determined by the number of CPU cores, not the amount of RAM. Adding RAM to a search head may improve the search performance, but not the search capacity. For more information, see Splunk Enterprise performance in the Splunk documentation.
A search head cluster member contains the following in its server.conf. What is the Splunk server name of this member?
Options:
node1
shc4
idxc2
node3
Answer:
D
Explanation:
The Splunk server name of the member can typically be determined by the serverName attribute in the server.conf file, which is not explicitly shown in the provided snippet. However, based on the provided configuration snippet, we can infer that this search head cluster member is configured to communicate with a cluster master (master_uri) located at node1 and a management node (mgmt_uri) located at node3. The serverName is not the same as the master_uri or mgmt_uri; these URIs indicate the location of the master and management nodes that this member interacts with.
Since the serverName is not provided in the snippet, one would typically look for a setting under the [general] stanza in server.conf. However, given the options and the common naming conventions in a Splunk environment, node3 would be a reasonable guess for the server name of this member, since it is indicated as the management URI within the [shclustering] stanza, which suggests it might be the name or address of the server in question.
For accurate identification, you would need to access the full server.conf file or the Splunk Web on the search head cluster member and look under Settings > Server settings > General settings to find the actual serverName. Reference for these details would be found in the Splunk documentation regarding the configuration files, particularly server.conf.
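For reference (this is not the snippet from the question), the server name lives under the [general] stanza of server.conf, separate from the [shclustering] URIs; the values here are purely illustrative:
[general]
serverName = node3
[shclustering]
mgmt_uri = https://node3.example.com:8089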
Other than high availability, which of the following is a benefit of search head clustering?
Options:
Allows indexers to maintain multiple searchable copies of all data.
Input settings are synchronized between search heads.
Fewer network ports are required to be opened between search heads.
Automatic replication of user knowledge objects.
Answer:
D
Explanation:
According to the Splunk documentation1, one of the benefits of search head clustering is the automatic replication of user knowledge objects, such as dashboards, reports, alerts, and tags. This ensures that all cluster members have the same set of knowledge objects and can serve the same search results to the users. The other options are false because:
Allowing indexers to maintain multiple searchable copies of all data is a benefit of indexer clustering, not search head clustering2.
Input settings are not synchronized between search heads, as search head clusters do not collect data from inputs. Data collection is done by forwarders or independent search heads3.
Fewer network ports are not required to be opened between search heads, as search head clusters use several ports for communication and replication among the members4.
When preparing to ingest a new data source, which of the following is optional in the data source assessment?
Options:
Data format
Data location
Data volume
Data retention
Answer:
D
Explanation:
Data retention is optional in the data source assessment because it is not directly related to the ingestion process. Data retention is determined by the index configuration and the storage capacity of the Splunk platform. Data format, data location, and data volume are all essential information for planning how to collect, parse, and index the data source.
Which of the following artifacts are included in a Splunk diag file? (Select all that apply.)
Options:
OS settings.
Internal logs.
Customer data.
Configuration files.
Answer:
B, D
Explanation:
The following artifacts are included in a Splunk diag file:
Internal logs. These are the log files that Splunk generates to record its own activities, such as splunkd.log, metrics.log, audit.log, and others. These logs can help troubleshoot Splunk issues and monitor Splunk performance.
Configuration files. These are the files that Splunk uses to configure various aspects of its operation, such as server.conf, indexes.conf, props.conf, transforms.conf, and others. These files can help understand Splunk settings and behavior. The following artifacts are not included in a Splunk diag file:
OS settings. These are the settings of the operating system that Splunk runs on, such as the kernel version, the memory size, the disk space, and others. These settings are not the focus of the diag file and, where needed, are gathered separately when Splunk Support requests them.
Customer data. These are the data that Splunk indexes and makes searchable, such as the rawdata and the tsidx files. These data are not part of the Splunk diag file, as they may contain sensitive or confidential information. For more information, see Generate a diagnostic snapshot of your Splunk Enterprise deployment in the Splunk documentation.
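For illustration, generating a diag from the CLI is a single command; the output file name pattern may vary slightly by version:
splunk diag
# Typically produces diag-<servername>-<date>.tar.gz containing internal logs and configuration files, but no indexed customer data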
When troubleshooting a situation where some files within a directory are not being indexed, the ignored files are discovered to have long headers. What is the first thing that should be added to inputs.conf?
Options:
Decrease the value of initCrcLength.
Add a crcSalt=
Increase the value of initCrcLength.
Add a crcSalt=
Answer:
C
Explanation:
inputs.conf is a configuration file that contains settings for various types of data inputs, such as files, directories, network ports, scripts, and so on1.
initCrcLength is a setting that specifies the number of characters that the input uses to calculate the CRC (cyclic redundancy check) of a file1. The CRC is a value that uniquely identifies a file based on its content2.
crcSalt is another setting that adds a string to the CRC calculation to force the input to consume files that have matching CRCs1. This can be useful when files have identical headers or when files are renamed or rolled over2.
When troubleshooting a situation where some files within a directory are not being indexed, the ignored files are discovered to have long headers, the first thing that should be added to inputs.conf is to increase the value of initCrcLength. This is because by default, the input only performs CRC checks against the first 256 bytes of a file, which means that files with long headers may have matching CRCs and be skipped by the input2. By increasing the value of initCrcLength, the input can use more characters from the file to calculate the CRC, which can reduce the chances of CRC collisions and ensure that different files are indexed3.
Option C is the correct answer because it reflects the best practice for troubleshooting this situation. Option A is incorrect because decreasing the value of initCrcLength would make the CRC calculation less reliable and more prone to collisions. Option B is incorrect because adding a crcSalt with a static string would not help differentiate files with long headers, as they would still have matching CRCs. Option D is incorrect because adding a crcSalt does not address the root cause either; the recommended first step for long, similar headers is to increase initCrcLength so that the CRC covers content beyond the header.
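A minimal inputs.conf sketch; the monitored path and sourcetype are hypothetical, and the chosen length should cover the longest expected header:
[monitor:///var/log/vendorapp/*.log]
sourcetype = vendor:app
initCrcLength = 1024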
(When determining where a Splunk forwarder is trying to send data, which of the following searches can provide assistance?)
Options:
index=_internal sourcetype=internal metrics destHost | dedup destHost
index=_internal sourcetype=splunkd metrics inputHost | dedup inputHost
index=_metrics sourcetype=splunkd metrics destHost | dedup destHost
index=_internal sourcetype=splunkd metrics destHost | dedup destHost
Answer:
D
Explanation:
To determine where a Splunk forwarder is attempting to send its data, administrators can search within the _internal index using the metrics logs generated by the forwarder’s Splunkd process. The correct and documented search is:
index=_internal sourcetype=splunkd metrics destHost | dedup destHost
The _internal index contains detailed operational logs from the Splunkd process, including metrics on network connections, indexing pipelines, and output groups. The field destHost records the destination indexer(s) to which the forwarder is attempting to send data. Using dedup destHost ensures that only unique destination hosts are shown.
This search is particularly useful for troubleshooting forwarding issues, such as connection failures, misconfigurations in outputs.conf, or load-balancing behavior in multi-indexer setups.
Other listed options are invalid or incorrect because:
sourcetype=internal does not exist.
index=_metrics is not where Splunk stores forwarding telemetry.
The field inputHost identifies the source host, not the destination.
Thus, Option D aligns with Splunk’s official troubleshooting practices for forwarder-to-indexer communication validation.
References (Splunk Enterprise Documentation):
• Monitoring Forwarder Connections and Destinations
• Troubleshooting Forwarding Using Internal Logs
• _internal Index Reference – Metrics and destHost Fields
• outputs.conf – Verifying Forwarder Data Routing and Connectivity
Which component in the splunkd.log will log information related to bad event breaking?
Options:
Audittrail
EventBreaking
IndexingPipeline
AggregatorMiningProcessor
Answer:
D
Explanation:
The AggregatorMiningProcessor component in the splunkd.log file will log information related to bad event breaking. The AggregatorMiningProcessor is responsible for breaking the incoming data into events and applying the props.conf settings. If there is a problem with the event breaking, such as incorrect timestamps, missing events, or merged events, the AggregatorMiningProcessor will log the error or warning messages in the splunkd.log file. The Audittrail component logs information about the audit events, such as user actions, configuration changes, and search activity. The EventBreaking component logs information about the event breaking rules, such as the LINE_BREAKER and SHOULD_LINEMERGE settings. The IndexingPipeline component logs information about the indexing pipeline, such as the parsing, routing, and indexing phases. For more information, see About Splunk Enterprise logging and [Configure event line breaking] in the Splunk documentation.
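As an illustration, a search along these lines surfaces event-breaking complaints from that component (field names assume the standard _internal extractions):
index=_internal sourcetype=splunkd component=AggregatorMiningProcessor (log_level=WARN OR log_level=ERROR) | stats count by host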
If .delta replication fails during knowledge bundle replication, what is the fall-back method for Splunk?
Options:
Restart splunkd.
.delta replication.
.bundle replication.
Restart mongod.
Answer:
C
Explanation:
This is the fall-back method for Splunk if .delta replication fails during knowledge bundle replication. Knowledge bundle replication is the process of distributing the knowledge objects, such as lookups, macros, and field extractions, from the search head cluster to the indexer cluster1. Splunk uses two methods of knowledge bundle replication: .delta replication and .bundle replication1. .Delta replication is the default and preferred method, as it only replicates the changes or updates to the knowledge objects, which reduces the network traffic and disk space usage1. However, if .delta replication fails for some reason, such as corrupted files or network errors, Splunk automatically switches to .bundle replication, which replicates the entire knowledge bundle, regardless of the changes or updates1. This ensures that the knowledge objects are always synchronized between the search head cluster and the indexer cluster, but it also consumes more network bandwidth and disk space1. The other options are not valid fall-back methods for Splunk. Option A, restarting splunkd, is not a method of knowledge bundle replication, but a way to restart the Splunk daemon on a node2. This may or may not fix the .delta replication failure, but it does not guarantee the synchronization of the knowledge objects. Option B, .delta replication, is not a fall-back method, but the primary method of knowledge bundle replication, which is assumed to have failed in the question1. Option D, restarting mongod, is not a method of knowledge bundle replication, but a way to restart the MongoDB daemon on a node3. This is not related to the knowledge bundle replication, but to the KV store replication, which is a different process3. Therefore, option C is the correct answer, and options A, B, and D are incorrect.
1: How knowledge bundle replication works 2: Start and stop Splunk Enterprise 3: Restart the KV store
(What are the possible values for the mode attribute in server.conf for a Splunk server in the [clustering] stanza?)
Options:
[clustering] mode = peer
[clustering] mode = searchhead
[clustering] mode = deployer
[clustering] mode = manager
Answer:
A, B, D
Explanation:
Within the [clustering] stanza of the server.conf file, the mode attribute defines the functional role of a Splunk instance within an indexer cluster. Splunk documentation identifies three valid modes:
mode = manager
Defines the node as the Cluster Manager (formerly called the Master Node).
Responsible for coordinating peer replication, managing configurations, and ensuring data integrity across indexers.
mode = peer
Defines the node as an Indexer (Peer Node) within the cluster.
Handles data ingestion, replication, and search operations under the control of the manager node.
mode = searchhead
Defines a Search Head that connects to the cluster for distributed searching and data retrieval.
The value “deployer” (Option C) is not valid within the [clustering] stanza; it applies to Search Head Clustering (SHC) configurations, where it is defined separately in server.conf under [shclustering].
Each mode must be accompanied by other critical attributes such as manager_uri, replication_port, and pass4SymmKey to enable proper communication and security between cluster members.
References (Splunk Enterprise Documentation):
• Indexer Clustering: Configure Manager, Peer, and Search Head Modes
• server.conf Reference – [clustering] Stanza Attributes
• Distributed Search and Cluster Node Role Configuration
• Splunk Enterprise Admin Manual – Cluster Deployment Architecture
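For illustration of the modes above, minimal server.conf sketches for a manager and a peer; URIs, the replication port, and the key are placeholders, and releases prior to 8.1 use mode = master and master_uri instead:
# On the cluster manager
[clustering]
mode = manager
replication_factor = 3
search_factor = 2
pass4SymmKey = <your_shared_key>
# On a peer node
[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <your_shared_key>
[replication_port://9887]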
Several critical searches that were functioning correctly yesterday are not finding a lookup table today. Which log file would be the best place to start troubleshooting?
Options:
btool.log
web_access.log
health.log
configuration_change.log
Answer:
B
Explanation:
A lookup table is a file that contains a list of values that can be used to enrich or modify the data during search time1. Lookup tables can be stored in CSV files or in the KV Store1. Troubleshooting lookup tables involves identifying and resolving issues that prevent the lookup tables from being accessed, updated, or applied correctly by the Splunk searches. Some of the tools and methods that can help with troubleshooting lookup tables are:
web_access.log: This is a file that contains information about the HTTP requests and responses that occur between the Splunk web server and the clients2. This file can help troubleshoot issues related to lookup table permissions, availability, and errors, such as 404 Not Found, 403 Forbidden, or 500 Internal Server Error34.
btool output: This is a command-line tool that displays the effective configuration settings for a given Splunk component, such as inputs, outputs, indexes, props, and so on5. This tool can help troubleshoot issues related to lookup table definitions, locations, and precedence, as well as identify the source of a configuration setting6.
search.log: This is a file that contains detailed information about the execution of a search, such as the search pipeline, the search commands, the search results, the search errors, and the search performance. This file can help troubleshoot issues related to lookup table commands, arguments, fields, and outputs, such as lookup, inputlookup, outputlookup, lookup_editor, and so on .
Option B is the correct answer because web_access.log is the best place to start troubleshooting lookup table issues, as it can provide the most relevant and immediate information about the lookup table access and status. Option A is incorrect because btool.log only records output from btool configuration checks; it does not capture search-time lookup access. Option C is incorrect because health.log is a file that contains information about the health of the Splunk components, such as the indexer cluster, the search head cluster, the license master, and the deployment server. This file can help troubleshoot issues related to Splunk deployment health, but not necessarily related to lookup tables. Option D is incorrect because configuration_change.log is a file that contains information about the changes made to the Splunk configuration files, such as the user, the time, the file, and the action. This file can help troubleshoot issues related to Splunk configuration changes, but not necessarily related to lookup tables.
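Two quick checks that often narrow this down (the lookup and stanza names are hypothetical): an inputlookup to confirm the table is reachable from search, and btool to confirm its definition still resolves.
# From the search bar:
| inputlookup my_lookup.csv
# From the CLI on the search head:
splunk btool transforms list my_lookup --debug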
In which phase of the Splunk Enterprise data pipeline are indexed extraction configurations processed?
Options:
Input
Search
Parsing
Indexing
Answer:
D
Explanation:
Indexed extraction configurations are processed in the indexing phase of the Splunk Enterprise data pipeline. The data pipeline is the process that Splunk uses to ingest, parse, index, and search data. Indexed extraction configurations are settings that determine how Splunk extracts fields from data at index time, rather than at search time. Indexed extraction can improve search performance, but it also increases the size of the index. Indexed extraction configurations are applied in the indexing phase, which is the phase where Splunk writes the data and the .tsidx files to the index. The input phase is the phase where Splunk receives data from various sources and formats. The parsing phase is the phase where Splunk breaks the data into events, timestamps, and hosts. The search phase is the phase where Splunk executes search commands and returns results.
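A minimal props.conf sketch of an indexed extraction for structured data (sourcetype and field names are hypothetical):
[vendor:csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z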
(Based on the data sizing and retention parameters listed below, which of the following will correctly calculate the index storage required?)
• Daily rate = 20 GB / day
• Compress factor = 0.5
• Retention period = 30 days
• Padding = 100 GB
Options:
(20 * 30 + 100) * 0.5 = 350 GB
20 / 0.5 * 30 + 100 = 1300 GB
20 * 0.5 * 30 + 100 = 400 GB
20 * 30 + 100 = 700 GB
Answer:
C
Explanation:
The Splunk Capacity Planning Manual defines the total required storage for indexes as a function of daily ingest rate, compression factor, retention period, and an additional padding buffer for index management and growth.
The formula is:
Storage = (Daily Data * Compression Factor * Retention Days) + Padding
Given the values:
Daily rate = 20 GB
Compression factor = 0.5 (50% reduction)
Retention period = 30 days
Padding = 100 GB
Plugging these into the formula gives:
20 * 0.5 * 30 + 100 = 400 GB
This result represents the estimated storage needed to retain 30 days of compressed indexed data with an additional buffer to accommodate growth and Splunk’s bucket management overhead.
Compression factor values typically range between 0.5 and 0.7 for most environments, depending on data type. Using compression in calculations is critical, as indexed data consumes less space than raw input after Splunk’s tokenization and compression processes.
Other options either misapply the compression ratio or the order of operations, producing incorrect totals.
References (Splunk Enterprise Documentation):
• Capacity Planning for Indexes – Storage Sizing and Compression Guidelines
• Managing Index Storage and Retention Policies
• Splunk Enterprise Admin Manual – Understanding Index Bucket Sizes
• Indexing Performance and Storage Optimization Guide
What is the best method for sizing or scaling a search head cluster?
Options:
Estimate the maximum daily ingest volume in gigabytes and divide by the number of CPU cores per search head.
Estimate the total number of searches per day and divide by the number of CPU cores available on the search heads.
Divide the number of indexers by three to achieve the correct number of search heads.
Estimate the maximum concurrent number of searches and divide by the number of CPU cores per search head.
Answer:
D
Explanation:
According to the Splunk blog1, the best method for sizing or scaling a search head cluster is to estimate the maximum concurrent number of searches and divide by the number of CPU cores per search head. This gives you an idea of how many search heads you need to handle the peak search load without overloading the CPU resources. The other options are false because:
Estimating the maximum daily ingest volume in gigabytes and dividing by the number of CPU cores per search head is not a good method for sizing or scaling a search head cluster, as it does not account for the complexity and frequency of the searches. The ingest volume is more relevant for sizing or scaling the indexers, not the search heads2.
Estimating the total number of searches per day and dividing by the number of CPU cores available on the search heads is not a good method for sizing or scaling a search head cluster, as it does not account for the concurrency and duration of the searches. The total number of searches per day is an average metric that does not reflect the peak search load or the search performance2.
Dividing the number of indexers by three to achieve the correct number of search heads is not a good method for sizing or scaling a search head cluster, as it does not account for the search load or the search head capacity. The number of indexers is not directly proportional to the number of search heads, as different types of data and searches may require different amounts of resources2.
Consider a use case involving firewall data. There is no Splunk-supported Technical Add-On, but the vendor has built one. What are the items that must be evaluated before installing the add-on? (Select all that apply.)
Options:
Identify number of scheduled or real-time searches.
Validate if this Technical Add-On enables event data for a data model.
Identify the maximum number of forwarders Technical Add-On can support.
Verify if Technical Add-On needs to be installed onto both a search head or indexer.
Answer:
A, B
Explanation:
A Technical Add-On (TA) is a Splunk app that contains configurations for data collection, parsing, and enrichment. It can also enable event data for a data model, which is useful for creating dashboards and reports. Therefore, before installing a TA, it is important to identify the number of scheduled or real-time searches that will use the data model, and to validate if the TA enables event data for a data model. The number of forwarders that the TA can support is not relevant, as the TA is installed on the indexer or search head, not on the forwarder. The installation location of the TA depends on the type of data and the use case, so it is not a fixed requirement
(On which Splunk components does the Splunk App for Enterprise Security place the most load?)
Options:
Indexers
Cluster Managers
Search Heads
Heavy Forwarders
Answer:
C
Explanation:
According to Splunk’s Enterprise Security (ES) Installation and Sizing Guide, the majority of processing and computational load generated by the Splunk App for Enterprise Security is concentrated on the Search Head(s).
This is because Splunk ES is built around a search-driven correlation model — it continuously runs scheduled correlation searches, data model accelerations, and notables generation jobs. These operations rely on the search head tier’s CPU, memory, and I/O resources rather than on indexers. ES also performs extensive data model summarization, CIM normalization, and real-time alerting, all of which are search-intensive operations.
While indexers handle data ingestion and indexing, they are not heavily affected by ES beyond normal search request processing. The Cluster Manager only coordinates replication and plays no role in search execution, and Heavy Forwarders serve as data collection or parsing points with minimal analytical load.
Splunk officially recommends deploying ES on a dedicated Search Head Cluster (SHC) to isolate its high CPU and memory demands from other workloads. For large-scale environments, horizontal scaling via SHC ensures consistent performance and stability.
References (Splunk Enterprise Documentation):
• Splunk Enterprise Security Installation and Configuration Guide
• Search Head Sizing for Splunk Enterprise Security
• Enterprise Security Overview – Workload Distribution and Performance Impact
• Splunk Architecture and Capacity Planning for ES Deployments
Indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. There is ample CPU and memory available on the indexers. Which of the following is most likely to improve indexing performance?
Options:
Increase the maximum number of hot buckets in indexes.conf
Increase the number of parallel ingestion pipelines in server.conf
Decrease the maximum size of the search pipelines in limits.conf
Decrease the maximum concurrent scheduled searches in limits.conf
Answer:
B
Explanation:
Increasing the number of parallel ingestion pipelines in server.conf is most likely to improve indexing performance when indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. The parallel ingestion pipelines allow Splunk to process multiple data streams simultaneously, which increases the indexing throughput and reduces the indexing latency. Increasing the maximum number of hot buckets in indexes.conf will not improve indexing performance, but rather increase the disk space consumption and the bucket rolling time. Decreasing the maximum size of the search pipelines in limits.conf will not improve indexing performance, but rather reduce the search performance and the search concurrency. Decreasing the maximum concurrent scheduled searches in limits.conf will not improve indexing performance, but rather reduce the search capacity and the search availability. For more information, see Configure parallel ingestion pipelines in the Splunk documentation.
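A minimal server.conf sketch on each indexer; each additional pipeline set consumes roughly one more core's worth of CPU and memory, so this assumes the spare capacity described above:
[general]
parallelIngestionPipelines = 2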
A customer plans to ingest 600 GB of data per day into Splunk. They will have six concurrent users, and they also want high data availability and high search performance. The customer is concerned about cost and wants to spend the minimum amount on the hardware for Splunk. How many indexers are recommended for this deployment?
Options:
Two indexers not in a cluster, assuming users run many long searches.
Three indexers not in a cluster, assuming a long data retention period.
Two indexers clustered, assuming high availability is the greatest priority.
Two indexers clustered, assuming a high volume of saved/scheduled searches.
Answer:
C
Explanation:
Two indexers clustered is the recommended deployment for a customer who plans to ingest 600 GB of data per day into Splunk, has six concurrent users, and wants high data availability and high search performance. This deployment will provide enough indexing capacity and search concurrency for the customer’s needs, while also ensuring data replication and searchability across the cluster. The customer can also save on the hardware cost by using only two indexers. Two indexers not in a cluster will not provide high data availability, as there is no data replication or failover. Three indexers not in a cluster will provide more indexing capacity and search concurrency, but also more hardware cost and no data availability. The customer’s data retention period, number of long searches, or volume of saved/scheduled searches are not relevant for determining the number of indexers. For more information, see [Reference hardware] and [About indexer clusters and index replication] in the Splunk documentation.
A search head cluster with a KV store collection can be updated from where in the KV store collection?
Options:
The search head cluster captain.
The KV store primary search head.
Any search head except the captain.
Any search head in the cluster.
Answer:
D
Explanation:
According to the Splunk documentation1, any search head in the cluster can update the KV store collection. The KV store collection is replicated across all the cluster members, and any write operation is delegated to the KV store captain, who then synchronizes the changes with the other members. The KV store primary search head is not a valid term, as there is no such role in a search head cluster. The other options are false because:
The search head cluster captain is not the only node that can update the KV store collection, as any member can initiate a write operation1.
Any search head except the captain can also update the KV store collection, as the write operation will be delegated to the captain1.
The master node distributes configuration bundles to peer nodes. Which directory peer nodes receive the bundles?
Options:
apps
deployment-apps
slave-apps
master-apps
Answer:
C
Explanation:
The master node (cluster manager) distributes configuration bundles to peer nodes, which place them in the slave-apps directory under $SPLUNK_HOME/etc (renamed peer-apps in newer releases). The configuration bundle method is the only supported method for managing common configurations and app deployment across the set of peers, and it ensures that all peers use the same versions of these files1. The manager stages the bundle contents in its own $SPLUNK_HOME/etc/master-apps directory (manager-apps in newer releases) before pushing them out2. Peers must never edit the contents of slave-apps directly, since the next bundle push overwrites them3.
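For illustration, the manager-side commands that validate and push a bundle staged under master-apps (manager-apps in newer releases):
splunk validate cluster-bundle
splunk apply cluster-bundle --answer-yes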
(If a license peer cannot communicate to a license manager for 72 hours or more, what will happen?)
Options:
The license peer is placed in violation, and a warning is generated.
A license warning is generated, and there is no impact to the license peer.
What happens depends on license type.
The license peer is placed in violation, and search is blocked.
Answer:
D
Explanation:
Per the Splunk Enterprise Licensing Documentation, a license peer (such as an indexer or search head) must regularly communicate with its license manager to report data usage and verify license validity. Splunk allows a 72-hour grace period during which the peer continues operating normally even if communication with the license manager fails.
If this communication is not re-established within 72 hours, the peer enters a “license violation” state. In this state, the system blocks all search activities, including ad-hoc and scheduled searches, but continues to ingest and index data. Administrative and licensing-related searches may still run for diagnostic purposes, but user searches are restricted.
The intent of this design is to prevent prolonged unlicensed data ingestion while ensuring the environment remains compliant. The 72-hour rule is hard-coded in Splunk Enterprise and applies uniformly across license types (Enterprise or Distributed). This ensures consistent licensing enforcement across distributed deployments.
Warnings are generated during the grace period, but after 72 hours, searches are automatically blocked until the peer successfully reconnects to its license manager.
References (Splunk Enterprise Documentation):
• Managing Licenses in a Distributed Environment
• License Manager and Peer Communication Workflow
• Splunk License Enforcement and Violation Behavior
• Splunk Enterprise Admin Manual – License Usage and Reporting Policies
Which of the following is unsupported in a production environment?
Options:
Cluster Manager can run on the Monitoring Console instance in smaller environments.
Search Head Cluster Deployer can run on the Monitoring Console instance in smaller environments.
Search heads in a Search Head Cluster can run on virtual machines.
Indexers in an indexer cluster can run on virtual machines.
Answer:
A, D
Explanation:
Splunk Enterprise documentation clarifies that none of the listed configurations are prohibited in production. Splunk allows the Cluster Manager to be colocated with the Monitoring Console in small deployments because both are management-plane functions and do not handle ingestion or search traffic. The documentation also states that the Search Head Cluster Deployer is not a runtime component and has minimal performance requirements, so it may be colocated with the Monitoring Console or Licensing Master when hardware resources permit.
Splunk also supports virtual machines for both search heads and indexers, provided they are deployed with dedicated CPU, storage throughput, and predictable performance. Splunk’s official hardware guidance specifies that while bare metal often yields higher performance, virtualized deployments are fully supported in production as long as sizing principles are met.
Because Splunk explicitly supports all four configurations under proper sizing and best-practice guidelines, there is no correct selection for “unsupported.” The question is outdated relative to current Splunk Enterprise recommendations.
Which of the following should be done when installing Enterprise Security on a Search Head Cluster? (Select all that apply.)
Options:
Install Enterprise Security on the deployer.
Install Enterprise Security on a staging instance.
Copy the Enterprise Security configurations to the deployer.
Use the deployer to deploy Enterprise Security to the cluster members.
Answer:
A, D
Explanation:
When installing Enterprise Security on a Search Head Cluster (SHC), the following steps should be done: Install Enterprise Security on the deployer, and use the deployer to deploy Enterprise Security to the cluster members. Enterprise Security is a premium app that provides security analytics and monitoring capabilities for Splunk. Enterprise Security can be installed on a SHC by using the deployer, which is a standalone instance that distributes apps and other configurations to the SHC members. Enterprise Security should be installed on the deployer first, and then deployed to the cluster members using the splunk apply shcluster-bundle command. Enterprise Security should not be installed on a staging instance, because a staging instance is not part of the SHC deployment process. Enterprise Security configurations should not be copied to the deployer, because they are already included in the Enterprise Security app package.
Which command should be run to re-sync a stale KV Store member in a search head cluster?
Options:
splunk clean kvstore -local
splunk resync kvstore -remote
splunk resync kvstore -local
splunk clean eventdata -local
Answer:
A
Explanation:
To resync a stale KV Store member in a search head cluster, you need to stop the search head that has the stale KV Store member, run the command splunk clean kvstore --local, and then restart the search head. This triggers the initial synchronization from other KV Store members12.
The command splunk resync kvstore [-source sourceId] is used to resync the entire KV Store cluster from one of the members, not a single member. This command can only be invoked from the node that is operating as search head cluster captain2.
The command splunk clean eventdata -local is used to delete all indexed data from a standalone indexer or a cluster peer node, not to resync the KV Store3.
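The member-level recovery described above, as a minimal CLI sketch run on the stale member:
splunk stop
splunk clean kvstore -local
splunk start
# On restart, the member performs an initial sync from the other KV store members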
Which of the following most improves KV Store resiliency?
Options:
Decrease latency between search heads.
Add faster storage to the search heads to improve artifact replication.
Add indexer CPU and memory to decrease search latency.
Increase the size of the Operations Log.
Answer:
A
Explanation:
KV Store is a feature of Splunk Enterprise that allows apps to store and retrieve data within the context of an app1.
KV Store resides on search heads and replicates data across the members of a search head cluster1.
KV Store resiliency refers to the ability of KV Store to maintain data availability and consistency in the event of failures or disruptions2.
One of the factors that affects KV Store resiliency is the network latency between search heads, which can impact the speed and reliability of data replication2.
Decreasing latency between search heads can improve KV Store resiliency by reducing the chances of data loss, inconsistency, or corruption2.
The other options are not directly related to KV Store resiliency. Faster storage, indexer CPU and memory, and Operations Log size may affect other aspects of Splunk performance, but not KV Store345.
Which command is used for thawing the archive bucket?
Options:
Splunk collect
Splunk convert
Splunk rebuild
Splunk dbinspect
Answer:
C
Explanation:
The splunk rebuild command is used for thawing the archive bucket. Thawing is the process of restoring frozen data back to Splunk for searching. Frozen data is data that has been archived or deleted from Splunk after reaching the end of its retention period. To thaw a bucket, the user needs to copy the bucket from the archive location to the thaweddb directory under SPLUNK_HOME/var/lib/splunk and run the splunk rebuild command to rebuild the .tsidx files for the bucket. The splunk collect command is used for collecting diagnostic data from a Splunk instance. The splunk convert command is used for converting configuration files from one format to another. The splunk dbinspect command is used for inspecting the status and properties of the buckets in an index.
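A minimal sketch of the thaw-and-rebuild sequence; the archive path, index, and bucket name are hypothetical:
cp -r /archive/frozen/defaultdb/db_1389230491_1389230488_5 $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/
splunk rebuild $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1389230491_1389230488_5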
Which of the following Splunk deployments has the recommended minimum components for a high-availability search head cluster?
Options:
2 search heads, 1 deployer, 2 indexers
3 search heads, 1 deployer, 3 indexers
1 search head, 1 deployer, 3 indexers
2 search heads, 1 deployer, 3 indexers
Answer:
BExplanation:
The deployment with the recommended minimum components for a high-availability search head cluster is 3 search heads, 1 deployer, and 3 indexers. This configuration ensures that the search head cluster has at least three members, which is the minimum number required for a quorum and failover1. The deployer is a separate instance that manages the configuration updates for the search head cluster2. The indexers are the nodes that store and index the data, and having at least three of them provides redundancy and load balancing3. The other options are not recommended, as they either have fewer than three search heads or fewer than three indexers, which reduces the availability and reliability of the cluster. Therefore, option B is the correct answer, and options A, C, and D are incorrect.
1: About search head clusters 2: Use the deployer to distribute apps and configuration updates 3: About indexer clusters and index replication
(Which deployer push mode should be used when pushing built-in apps?)
Options:
merge_to_default
local_only
full
default only
Answer:
BExplanation:
According to the Splunk Enterprise Search Head Clustering (SHC) Deployer documentation, the “local_only” push mode is the correct option when deploying built-in apps. This mode ensures that the deployer only pushes configurations from the local directory of built-in Splunk apps (such as search, learned, or launcher) without overwriting or merging their default app configurations.
In an SHC environment, the deployer is responsible for distributing configuration bundles to all search head members. Each push can be executed in different modes depending on how the admin wants to handle the app directories:
full: Overwrites both default and local folders of all apps in the bundle.
merge_to_default: Merges configurations into the default folder (used primarily for custom apps).
local_only: Pushes only local configurations, preserving default settings of built-in apps (the safest method for core Splunk apps).
default_only: Pushes only the default folder configurations (rarely used and not ideal for built-in app updates).
Using the “local_only” mode ensures that default Splunk system apps are not modified, preventing corruption or overwriting of base configurations that are critical for Splunk operation. It is explicitly recommended for pushing Splunk-provided (built-in) apps like search, launcher, and user-prefs from the deployer to all SHC members.
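As a concrete sketch, the push mode is set per app in that app's app.conf on the deployer before running the usual splunk apply shcluster-bundle push; this assumes the built-in search app lives under $SPLUNK_HOME/etc/shcluster/apps/search on the deployer:
# $SPLUNK_HOME/etc/shcluster/apps/search/local/app.conf on the deployer
[shclustering]
deployer_push_mode = local_only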
References (Splunk Enterprise Documentation):
• Managing Configuration Bundles with the Deployer (Search Head Clustering)
• Deployer Push Modes and Their Use Cases
• Splunk Enterprise Admin Manual – SHC Deployment Management
• Best Practices for Maintaining Built-in Splunk Apps in SHC Environments
When configuring a Splunk indexer cluster, what are the default values for replication and search factor?
Options:
replication_factor = 2, search_factor = 2
replication_factor = 2, search_factor = 3
replication_factor = 3, search_factor = 2
replication_factor = 3, search_factor = 3
Answer:
CExplanation:
The replication factor and the search factor are two important settings for a Splunk indexer cluster. The replication factor determines how many copies of each bucket are maintained across the set of peer nodes. The search factor determines how many of those copies are searchable. The default value for the replication factor is 3 and the default value for the search factor is 2, which means that each bucket has three copies, two of which are searchable.
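To make the defaults explicit, a manager node configured with these values (or with the settings omitted entirely, since they are the defaults) would carry a server.conf stanza like this sketch:
[clustering]
mode = master
replication_factor = 3
search_factor = 2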
A Splunk instance has the following settings in SPLUNK_HOME/etc/system/local/server.conf:
[clustering]
mode = master
replication_factor = 2
pass4SymmKey = password123
Which of the following statements describe this Splunk instance? (Select all that apply.)
Options:
This is a multi-site cluster.
This cluster's search factor is 2.
This Splunk instance needs to be restarted.
This instance is missing the master_uri attribute.
Answer:
C, DExplanation:
The Splunk instance with the given settings in SPLUNK_HOME/etc/system/local/server.conf is missing the master_uri attribute and needs to be restarted. The master_uri attribute is required for the master node to communicate with the peer nodes and the search head cluster. The master_uri attribute specifies the host name and port number of the master node. Without this attribute, the master node cannot function properly. The Splunk instance also needs to be restarted for the changes in the server.conf file to take effect. The replication_factor setting determines how many copies of each bucket are maintained across the peer nodes. The search factor is a separate setting that determines how many searchable copies of each bucket are maintained across the peer nodes. The search factor is not specified in the given settings, so it uses its default value of 2. This is not a multi-site cluster, because the site attribute is not specified in the clustering stanza. A multi-site cluster is a cluster that spans multiple geographic locations, or sites, and can have different replication and search factors for each site.
Stakeholders have identified high availability for searchable data as their top priority. Which of the following best addresses this requirement?
Options:
Increasing the search factor in the cluster.
Increasing the replication factor in the cluster.
Increasing the number of search heads in the cluster.
Increasing the number of CPUs on the indexers in the cluster.
Answer:
AExplanation:
Increasing the search factor in the cluster will best address the requirement of high availability for searchable data. The search factor determines how many copies of searchable data are maintained by the cluster. A higher search factor means that more indexers can serve the data in case of a failure or a maintenance event. Increasing the replication factor will improve the availability of raw data, but not searchable data. Increasing the number of search heads or CPUs on the indexers will improve the search performance, but not the availability of searchable data. For more information, see Replication factor and search factor in the Splunk documentation.
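If the search factor does need to be raised, one way is the CLI on the manager node; this is a sketch assuming a target search factor of 3, and note that raising the factor triggers bucket fix-up activity across the peers:
splunk edit cluster-config -search_factor 3
splunk restart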
Which of the following can a Splunk diag contain?
Options:
Search history, Splunk users and their roles, running processes, indexed data
Server specs, current open connections, internal Splunk log files, index listings
KV store listings, internal Splunk log files, search peer bundles listings, indexed data
Splunk platform configuration details, Splunk users and their roles, current open connections, index listings
Answer:
BExplanation:
The following artifacts are included in a Splunk diag file:
Server specs. These are the specifications of the server that Splunk runs on, such as the CPU model, the memory size, the disk space, and the network interface. These specs can help understand the Splunk hardware requirements and performance.
Current open connections. These are the connections that Splunk has established with other Splunk instances or external sources, such as forwarders, indexers, search heads, license masters, deployment servers, and data inputs. These connections can help understand the Splunk network topology and communication.
Internal Splunk log files. These are the log files that Splunk generates to record its own activities, such as splunkd.log, metrics.log, audit.log, and others. These logs can help troubleshoot Splunk issues and monitor Splunk performance.
Index listings. These are the listings of the indexes that Splunk has created and configured, such as the index name, the index location, the index size, and the index attributes. These listings can help understand the Splunk data management and retention. The following artifacts are not included in a Splunk diag file:
Search history. This is the history of the searches that Splunk has executed, such as the search query, the search time, the search results, and the search user. This history is not part of the Splunk diag file, but it can be accessed from the Splunk Web interface or the audit.log file.
Splunk users and their roles. These are the users that Splunk has created and assigned roles to, such as the user name, the user password, the user role, and the user capabilities. These users and roles are not part of the Splunk diag file, but they can be accessed from the Splunk Web interface or the authentication.conf and authorize.conf files.
KV store listings. These are the listings of the KV store collections and documents that Splunk has created and stored, such as the collection name, the collection schema, the document ID, and the document fields. These listings are not part of the Splunk diag file, but they can be accessed from the Splunk Web interface or the mongod.log file.
Indexed data. These are the data that Splunk indexes and makes searchable, such as the rawdata and the tsidx files. These data are not part of the Splunk diag file, as they may contain sensitive or confidential information. For more information, see Generate a diagnostic snapshot of your Splunk Enterprise deployment in the Splunk documentation.
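Generating a diag is a single CLI call; the --exclude filter shown in the second line is an optional illustration for keeping sensitive files out of the archive:
splunk diag
splunk diag --exclude "*/passwd"
The resulting diag tar.gz file is typically written under $SPLUNK_HOME.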
Which of the following clarification steps should be taken if apps are not appearing on a deployment client? (Select all that apply.)
Options:
Check serverclass.conf of the deployment server.
Check deploymentclient.conf of the deployment client.
Check the content of SPLUNK_HOME/etc/apps of the deployment server.
Search for relevant events in splunkd.log of the deployment server.
Answer:
A, B, DExplanation:
The following clarification steps should be taken if apps are not appearing on a deployment client:
Check serverclass.conf of the deployment server. This file defines the server classes and the apps and configurations that they should receive from the deployment server. Make sure that the deployment client belongs to the correct server class and that the server class has the desired apps and configurations.
Check deploymentclient.conf of the deployment client. This file specifies the deployment server that the deployment client contacts and the client name that it uses. Make sure that the deployment client is pointing to the correct deployment server and that the client name matches the server class criteria.
Search for relevant events in splunkd.log of the deployment server. This file contains information about the deployment server activities, such as sending apps and configurations to the deployment clients, detecting client check-ins, and logging any errors or warnings. Look for any events that indicate a problem with the deployment server or the deployment client.
Checking the content of SPLUNK_HOME/etc/apps of the deployment server is not a necessary clarification step, as this directory does not contain the apps and configurations that are distributed to the deployment clients. The apps and configurations for the deployment server are stored in SPLUNK_HOME/etc/deployment-apps. For more information, see Configure deployment server and clients in the Splunk documentation.
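For reference, a minimal deploymentclient.conf on the client looks like the sketch below (deploy-server.example.com is a placeholder), and splunk btool can confirm what the client actually resolves at runtime:
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089

splunk btool deploymentclient list --debug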
Which Splunk internal index contains license-related events?
Options:
_audit
_license
_internal
_introspection
Answer:
CExplanation:
The _internal index contains license-related events, such as the license usage, the license quota, the license pool, the license stack, and the license violations. These events are logged by the license manager in the license_usage.log file, which is part of the _internal index. The _audit index contains audit events, such as user actions, configuration changes, and search activity. These events are logged by the audit trail in the audit.log file, which is part of the _audit index. The _license index does not exist in Splunk, as the license-related events are stored in the _internal index. The _introspection index contains platform instrumentation data, such as the resource usage, the disk objects, the search activity, and the data ingestion. These data are logged by the introspection generator in various log files, such as resource_usage.log, disk_objects.log, search_activity.log, and data_ingestion.log, which are part of the _introspection index. For more information, see About Splunk Enterprise logging and [About the _internal index] in the Splunk documentation.
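A quick way to see these events is to search the _internal index directly; this example aggregates daily usage per license pool and assumes the default license_usage.log source:
index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) AS bytes_used by pool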
(A customer creates a saved search that runs on a specific interval. Which internal Splunk log should be viewed to determine if the search ran recently?)
Options:
metrics.log
kvstore.log
scheduler.log
btool.log
Answer:
CExplanation:
According to Splunk’s Search Scheduler and Job Management documentation, the scheduler.log file, located within the _internal index, records the execution of scheduled and saved searches. This log provides a detailed record of when each search is triggered, how long it runs, and its success or failure status.
Each time a scheduled search runs (for example, alerts, reports, or summary index searches), an entry is written to scheduler.log with fields such as:
sid (search job ID)
app (application context)
savedsearch_name (name of the saved search)
user (owner)
status (success, skipped, or failed)
run_time and result_count
By searching the _internal index for sourcetype=scheduler (or directly viewing scheduler.log), administrators can confirm whether a specific saved search executed as expected and diagnose skipped or delayed runs due to resource contention or concurrency limits.
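For example, a search like the following (the saved search name is a placeholder) returns the recent scheduler activity for one search:
index=_internal sourcetype=scheduler savedsearch_name="My Saved Search"
| table _time app user status run_time result_count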
Other internal logs serve different purposes:
metrics.log records performance metrics.
kvstore.log tracks KV Store operations.
btool.log does not exist — btool outputs configuration data to the console, not a log file.
Hence, scheduler.log is the definitive and Splunk-documented source for validating scheduled search activity.
References (Splunk Enterprise Documentation):
• Saved Searches and Alerts – Scheduler Operation Details
• scheduler.log Reference – Monitoring Scheduled Search Execution
• Monitoring Console: Search Scheduler Health Dashboard
• Troubleshooting Skipped or Delayed Scheduled Searches
What is the logical first step when starting a deployment plan?
Options:
Inventory the currently deployed logging infrastructure.
Determine what apps and use cases will be implemented.
Gather statistics on the expected adoption of Splunk for sizing.
Collect the initial requirements for the deployment from all stakeholders.
Answer:
DExplanation:
The logical first step when starting a deployment plan is to collect the initial requirements for the deployment from all stakeholders. This includes identifying the business objectives, the data sources, the use cases, the security and compliance needs, the scalability and availability expectations, and the budget and timeline constraints. Collecting the initial requirements helps to define the scope and the goals of the deployment, and to align the expectations of all the parties involved.
Inventorying the currently deployed logging infrastructure, determining what apps and use cases will be implemented, and gathering statistics on the expected adoption of Splunk for sizing are all important steps in the deployment planning process, but they are not the logical first step. These steps can be done after collecting the initial requirements, as they depend on the information gathered from the stakeholders.
Which of the following server. conf stanzas indicates the Indexer Discovery feature has not been fully configured (restart pending) on the Master Node?
A) through D) (The answer options are server.conf configuration screenshots that are not reproduced in this text; see the descriptions in the explanation below.)
Options:
Option A
Option B
Option C
Option D
Answer:
AExplanation:
The Indexer Discovery feature enables forwarders to dynamically connect to the available peer nodes in an indexer cluster. To use this feature, the manager node must be configured with the [indexer_discovery] stanza and a pass4SymmKey value. The forwarders must also be configured with the same pass4SymmKey value and the master_uri of the manager node. Splunk encrypts the pass4SymmKey value automatically when splunkd restarts, so a plain-text pass4SymmKey indicates that a restart is still pending. Therefore, option A indicates that the Indexer Discovery feature has not been fully configured (restart pending) on the manager node, because the pass4SymmKey value is still in plain text. The other options are not related to the Indexer Discovery feature. Option B shows the configuration of a forwarder that is part of an indexer cluster. Option C shows the configuration of a manager node that is part of an indexer cluster. Option D shows an invalid configuration of the [indexer_discovery] stanza, because the pass4SymmKey value is not encrypted and does not match the forwarders’ pass4SymmKey value12
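For context, a working Indexer Discovery setup looks roughly like the sketch below; the cluster label cluster1, the key, and the hostname are placeholders. The manager node carries the [indexer_discovery] stanza in server.conf, and each forwarder's outputs.conf points at it:
# manager node server.conf
[indexer_discovery]
pass4SymmKey = mySecretKey

# forwarder outputs.conf
[indexer_discovery:cluster1]
pass4SymmKey = mySecretKey
master_uri = https://manager.example.com:8089

[tcpout:cluster1_group]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_group
After a restart, the pass4SymmKey values in both files are stored in encrypted form.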
Which part of the deployment plan is vital prior to installing Splunk indexer clusters and search head clusters?
Options:
Data source inventory.
Data policy definitions.
Splunk deployment topology.
Education and training plans.
Answer:
CExplanation:
According to the Splunk documentation1, the Splunk deployment topology is the part of the deployment plan that is vital prior to installing Splunk indexer clusters and search head clusters. The deployment topology defines the number and type of Splunk components, such as forwarders, indexers, search heads, and deployers, that you need to install and configure for your distributed deployment. The deployment topology also determines the network and hardware requirements, the data flow and replication, the high availability and disaster recovery options, and the security and performance considerations for your deployment2. The other options are false because:
Data source inventory is not the part of the deployment plan that is vital prior to installing Splunk indexer clusters and search head clusters, as it is a preliminary step that helps you identify the types, formats, locations, and volumes of data that you want to collect and analyze with Splunk. Data source inventory is important for planning your data ingestion and retention strategies, but it does not directly affect the installation and configuration of Splunk components3.
Data policy definitions are not the part of the deployment plan that is vital prior to installing Splunk indexer clusters and search head clusters, as they are the rules and guidelines that govern how you handle, store, and protect your data. Data policy definitions are important for ensuring data quality, security, and compliance, but they do not directly affect the installation and configuration of Splunk components4.
Education and training plans are not the part of the deployment plan that is vital prior to installing Splunk indexer clusters and search head clusters, as they are the learning resources and programs that help you and your team acquire the skills and knowledge to use Splunk effectively. Education and training plans are important for enhancing your Splunk proficiency and productivity, but they do not directly affect the installation and configuration of Splunk components5.
Which of the following options can improve reliability of syslog delivery to Splunk? (Select all that apply.)
Options:
Use TCP syslog.
Configure UDP inputs on each Splunk indexer to receive data directly.
Use a network load balancer to direct syslog traffic to active backend syslog listeners.
Use one or more syslog servers to persist data with a Universal Forwarder to send the data to Splunk indexers.
Answer:
A, DExplanation:
Syslog is a standard protocol for sending log messages from various devices and applications to a central server. Syslog can use either UDP or TCP as the transport layer protocol. UDP is faster but less reliable, as it does not guarantee delivery or order of the messages. TCP is slower but more reliable, as it ensures delivery and order of the messages. Therefore, to improve the reliability of syslog delivery to Splunk, it is recommended to use TCP syslog.
Another option to improve the reliability of syslog delivery to Splunk is to use one or more syslog servers to persist data with a Universal Forwarder to send the data to Splunk indexers. This way, the syslog servers can act as a buffer and store the data in case of network or Splunk outages. The Universal Forwarder can then forward the data to Splunk indexers when they are available.
Using a network load balancer to direct syslog traffic to active backend syslog listeners is not a reliable option, as it does not address the possibility of data loss or duplication due to network failures or Splunk outages. Configuring UDP inputs on each Splunk indexer to receive data directly is also not a reliable option, as it exposes the indexers to the network and increases the risk of data loss or duplication due to UDP limitations.
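A common pattern is a syslog server (for example rsyslog or syslog-ng) writing per-host files that a Universal Forwarder monitors; the directory layout below is an assumption about how the syslog server is configured:
# inputs.conf on the Universal Forwarder
[monitor:///var/log/remote-syslog/*/*.log]
sourcetype = syslog
host_segment = 4
host_segment = 4 tells Splunk to take the host value from the fourth path segment (the per-host directory) rather than from the forwarder itself.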
Which Splunk Enterprise offering has its own license?
Options:
Splunk Cloud Forwarder
Splunk Heavy Forwarder
Splunk Universal Forwarder
Splunk Forwarder Management
Answer:
CExplanation:
The Splunk Universal Forwarder is the only Splunk Enterprise offering that has its own license. The Splunk Universal Forwarder license allows the forwarder to send data to any Splunk Enterprise or Splunk Cloud instance without consuming any license quota. The Splunk Heavy Forwarder does not have its own license; the data it forwards counts against the license quota of the Splunk Enterprise or Splunk Cloud deployment that indexes it. There is no separate offering called the Splunk Cloud Forwarder, and Splunk Forwarder Management is not a separate offering either; it is the Splunk Web interface for managing deployment clients through the deployment server. For more information, see [About forwarder licensing] in the Splunk documentation.
At which default interval does metrics.log generate a periodic report regarding license utilization?
Options:
10 seconds
30 seconds
60 seconds
300 seconds
Answer:
CExplanation:
The default interval at which metrics.log generates a periodic report regarding license utilization is 60 seconds. This report contains information about the license usage and quota for each Splunk instance, as well as the license pool and stack. The report is generated every 60 seconds by default, but this interval can be changed by modifying the license_usage stanza in the metrics.conf file. The other intervals (10 seconds, 30 seconds, and 300 seconds) are not the default values, but they can be set by the administrator if needed. For more information, see About metrics.log and Configure metrics.log in the Splunk documentation.
(Which of the following data sources are used for the Monitoring Console dashboards?)
Options:
REST API calls
Splunk btool
Splunk diag
metrics.log
Answer:
A, DExplanation:
According to Splunk Enterprise documentation for the Monitoring Console (MC), the data displayed in its dashboards is sourced primarily from two internal mechanisms — REST API calls and metrics.log.
The Monitoring Console (formerly known as the Distributed Management Console, or DMC) uses REST API endpoints to collect system-level information from all connected instances, such as indexer clustering status, license usage, and search head performance. These REST calls pull real-time configuration and performance data from Splunk’s internal management layer (/services/server/status, /services/licenser, /services/cluster/peers, etc.).
Additionally, the metrics.log file is one of the main data sources used by the Monitoring Console. This log records Splunk’s internal performance metrics, including pipeline latency, queue sizes, indexing throughput, CPU usage, and memory statistics. Dashboards like “Indexer Performance,” “Search Performance,” and “Resource Usage” are powered by searches over the _internal index that reference this log.
Other tools listed — such as btool (configuration troubleshooting utility) and diag (diagnostic archive generator) — are not used as runtime data sources for Monitoring Console dashboards. They assist in troubleshooting but are not actively queried by the MC.
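As an illustration of the metrics.log dependency, a queue-fill search similar in spirit to what the Monitoring Console runs could look like this (the exact searches shipped with the MC differ):
index=_internal source=*metrics.log* group=queue
| timechart avg(current_size_kb) by name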
References (Splunk Enterprise Documentation):
• Monitoring Console Overview – Data Sources and Architecture
• metrics.log Reference – Internal Performance Data Collection
• REST API Usage in Monitoring Console
• Distributed Management Console Configuration Guide
Splunk Enterprise performs a cyclic redundancy check (CRC) against the first and last bytes to prevent the same file from being re-indexed if it is rotated or renamed. What is the number of bytes sampled by default?
Options:
128
512
256
64
Answer:
CExplanation:
Splunk Enterprise performs a CRC check against the first and last 256 bytes of a file by default, as stated in the inputs.conf specification. This is controlled by the initCrcLength parameter, which can be changed if needed. The CRC check helps Splunk Enterprise to avoid re-indexing the same file twice, even if it is renamed or rotated, as long as the content does not change. However, this also means that Splunk Enterprise might miss some files that have the same CRC but different content, especially if they have identical headers. To avoid this, the crcSalt parameter can be used to add some extra information to the CRC calculation, such as the full file path or a custom string. This ensures that each file has a unique CRC and is indexed by Splunk Enterprise. You can read more about crcSalt and initCrcLength in the How log file rotation is handled documentation.
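Both settings live on the monitor stanza in inputs.conf; this is a sketch with illustrative values, not a recommendation:
[monitor:///var/log/app/*.log]
initCrcLength = 1024
crcSalt = <SOURCE>
The literal value <SOURCE> adds the full path of the file to the CRC calculation, as described above.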
When using the props.conf LINE_BREAKER attribute to delimit multi-line events, the SHOULD_LINEMERGE attribute should be set to what?
Options:
Auto
None
True
False
Answer:
DExplanation:
When using the props.conf LINE_BREAKER attribute to delimit multi-line events, the SHOULD_LINEMERGE attribute should be set to false. This tells Splunk not to re-merge the lines that LINE_BREAKER has already split into events. Setting SHOULD_LINEMERGE to true causes Splunk to merge lines back together based on attributes such as BREAK_ONLY_BEFORE_DATE, which defeats the purpose of delimiting events with LINE_BREAKER; auto and none are not valid values for this boolean attribute. For more information, see Configure event line breaking in the Splunk documentation.
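A minimal props.conf sketch for events that begin with a date (the sourcetype name and regular expression are illustrative):
[my_custom_sourcetype]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
Only the text captured by the first group is consumed as the delimiter, so each event starts at the date that follows the line break.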
To improve Splunk performance, parallelIngestionPipelines setting can be adjusted on which of the following components in the Splunk architecture? (Select all that apply.)
Options:
Indexers
Forwarders
Search head
Cluster master
Answer:
A, BExplanation:
The parallelIngestionPipelines setting can be adjusted on the indexers and forwarders to improve Splunk performance. The parallelIngestionPipelines setting determines how many concurrent data pipelines are used to process the incoming data. Increasing the parallelIngestionPipelines setting can improve the data ingestion and indexing throughput, especially for high-volume data sources. The parallelIngestionPipelines setting can be adjusted on the indexers and forwarders by editing the [general] stanza of the server.conf file. The parallelIngestionPipelines setting cannot be adjusted on the search head or the cluster master, because they are not involved in the data ingestion and indexing process.
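A sketch of the relevant server.conf stanza on an indexer or forwarder; two pipelines are shown only as an example, and the right value depends on available CPU and I/O headroom:
[general]
parallelIngestionPipelines = 2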