Three Common Errors Customers Face in Splunk
Author: Grace Dolby
Release Date: 15/11/2022
Working with a large customer base and supporting their Splunk deployments has given us a good understanding of the common issues and queries that come up when deploying or administering a Splunk instance day-to-day. Hopefully, this blog will allow you to confidently investigate and resolve these issues whether you are running on-premises or in the cloud.
1. Data not coming in from a Universal Forwarder or other data input type
This issue is probably the most frustrating: after all your hard work and configuration you go to look in your index in Splunk and, alas, there are no events found! There may be multiple reasons for this; however, using the internal logs to your advantage can narrow it down.
Some things that you can check on your UF first:
Can Splunk read the directory or file you want it to monitor?
Are there communication issues between your UF and your Indexer?
You can see this from the _internal logs, e.g. index=_internal log_level=ERROR or similar. If the UI is not available on the UF, you will have to manually review $SPLUNK_HOME/var/log/splunk/splunkd.log on the forwarder itself (an example search is shown after this list).
Is a restart of the UF required for the changes you have made?
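If the UF's own _internal logs are making it to your indexers, a search along these lines can surface file-monitoring errors from a specific forwarder. This is a rough sketch: <your_uf_hostname> is a placeholder, and the components listed are common file-monitoring components, so you may see others in your own logs.
index=_internal host=<your_uf_hostname> log_level=ERROR (component=TailReader OR component=TailingProcessor OR component=WatchedFile)
| stats count by component
If nothing from the forwarder appears in _internal at all, that in itself points to a connectivity problem rather than an input problem.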
If all of the above looks correct, you can move on to looking at your UF-to-Indexer communication:
Check whether the UF is connecting to the Indexer at all (run this from your search head):
index=_internal source=*metrics.log* group=tcpin_connections | stats count by sourceIp
Check that the index you have specified in your inputs.conf exists (a quick way to list indexes is shown after this list)
Check your time range, adjust your search to “All Time” in case the timestamp is being read incorrectly
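One quick way to check which indexes exist is the eventcount command, which lists every index the search head can see, including empty ones. This is a generic sketch rather than anything specific to your environment:
| eventcount summarize=false index=* index=_* | dedup index | fields index | sort index
If the index named in your inputs.conf is not in the results, create it on the indexers (or via the Indexes page in Splunk Cloud) before expecting events to arrive.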
If you are running Splunk Cloud:
Check you have the credentials package deployed to the UF
Check there are no communication blocks (firewalls, proxies) between the UF and the cloud instance; the search below can confirm from the Cloud side whether the forwarder has ever connected
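From the Splunk Cloud search head, a variation on the metrics.log search above will tell you whether (and when) the forwarder last connected. Treat it as a sketch; <your_uf_hostname> is a placeholder for the forwarder's host name:
index=_internal source=*metrics.log* group=tcpin_connections hostname=<your_uf_hostname>
| stats latest(_time) AS last_connected BY hostname
| eval last_connected=strftime(last_connected, "%Y-%m-%d %H:%M:%S")
If nothing is returned over a generous time range, the forwarder has likely never connected, which points back at the credentials package or a network block.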
2. “Orphaned” knowledge objects
This one usually comes as a surprise. It occurs when the user who created a search, dashboard, lookup, field, etc. has left or moved on and been deactivated in the Splunk instance. This leads to an error relating to orphaned objects, and can cause some things, such as lookups, to break entirely.
Utilise the Orphaned Scheduled Searches, Reports or Alerts dashboard to show you where these objects are. The same information can be seen from the “messages” section that you can review as an admin, but anyone with access to the Orphaned dashboard can review the objects (useful if the person who left was an admin!).
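If you prefer a search-driven approach, a sketch like the one below cross-references saved-search owners against the current user list using the REST endpoints. It only covers saved searches (reports and alerts); adapt the endpoint for other object types, and note that app-owned objects belonging to "nobody" are deliberately excluded:
| rest /servicesNS/-/-/saved/searches splunk_server=local
| rename eai:acl.owner AS owner, eai:acl.app AS app
| search owner!=nobody NOT [| rest /services/authentication/users splunk_server=local | rename title AS owner | fields owner]
| table title app owner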
3. Compatibility Issues
With a number of different moving parts in a Splunk deployment, things can get a little confusing with regard to which versions your UFs can run against which versions of your Indexers. Often, we tend to upgrade our Indexers/Search Heads etc. without upgrading the UFs until we need to. Whilst Indexers are backwards compatible with older forwarder versions, there can be limitations on the features available between the two, and this can cause discrepancies such as expecting metrics to be available when they are not.
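Before an upgrade (or when planning one), it helps to know which forwarder versions are actually connecting to your indexers. The tcpin_connections group in metrics.log records the connecting forwarder's version and type, so a search along these lines gives a quick inventory to compare against the matrices linked below:
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) AS forwarder_version, latest(fwdType) AS forwarder_type BY hostname
| sort forwarder_version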
- By using the compatibility matrix, you can make sure you maintain functionality between your indexers and forwarders, even if your forwarders are on an older version:
- Compatibility between forwarders and Splunk Enterprise indexers
- If you are using a premium app, there are also supported Splunk and app version combinations that can be checked. You can also use the Upgrade Readiness App to automatically comb through your deployment and highlight areas where you may have issues with your upgrade:
- Splunk products version compatibility matrix
- Upgrade Readiness App | Splunkbase
We hope that the above information has armed you with a few more steps to take if you are experiencing an issue, or if you run into one in the future.
If you are interested in any of the topics discussed in this article, please do get in touch with us on our website and we will be happy to speak with you.