Splunk WLM (Workload Management) provides the ability to allocate CPU and memory resources to search, indexing, and other processes such as scripted inputs. This allows you to assign the right resources to each Splunk server based on its role; for example, you may want to allocate more CPU to a Search Head or more memory to an Indexer.
You can also fine-tune those allocations. For example, you can prioritize searches based on a user's role or on how well the search is formed: a user searching index=* can be moved to a slower pool, while an admin searching for important data goes to a faster pool.
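To make that concrete, here is a minimal sketch of how pools might be defined in workload_pools.conf. The pool names and weight values are illustrative assumptions, not recommendations; check the workload_pools.conf spec for your Splunk version before copying anything.

```
# workload_pools.conf -- illustrative sketch only
# Relative CPU/memory split between the search and ingest workload categories
[workload_category:search]
cpu_weight = 70
mem_weight = 70

[workload_category:ingest]
cpu_weight = 20
mem_weight = 20

# A faster pool for well-formed, high-priority searches (assumed name: fast_pool)
[workload_pool:fast_pool]
cpu_weight = 80
mem_weight = 80
category = search
default_category_pool = true

# A slower pool for poorly formed searches such as index=* (assumed name: slow_pool)
[workload_pool:slow_pool]
cpu_weight = 10
mem_weight = 10
category = search
```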
With all these great features, there are some good things and some gotchas with the WLM product. Let’s go through the various things you need to know when implementing WLM.
WLM: the Great!
The ability to allocate resources to specific Splunk server roles can really improve the performance of your Splunk environment. WLM works well enough that Splunk uses it in its own Splunk Cloud environment, which helps ensure that environment runs smoothly for Splunk Cloud customers.
One great use of WLM is the ability to prioritize searches. A common issue among customers is dealing with bad searches, and it can be difficult to train users to write well-formed ones. With WLM, you can favor users with good search habits over those with poor ones. The key is to lower the priority of badly formed searches so that they do not take up unnecessary resources, which keeps your system performing well even while bad searches are running.
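For example, a workload rule can push searches that match a predicate into the slower pool sketched earlier. Again, this is only a hedged sketch: the rule name is made up, and the predicate syntax should be verified against the workload_rules.conf spec for your version.

```
# workload_rules.conf -- illustrative sketch only
# Route wildcard index=* searches from non-admin users into the slow pool
[workload_rule:demote_wildcard_searches]
predicate = index=* AND NOT role=admin
workload_pool = slow_pool
```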
WLM: the Good!
WLM builds on cgroups (control groups), the resource-control functionality that is part of the Linux operating system. Because of this, it can be difficult to get set up, and you may need a Linux admin to configure it. This is not a problem with the WLM product itself but more an issue of dealing with the Linux operating system.
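If you want a quick look at what that setup involves, the standard Linux commands below show whether the cgroup hierarchy is mounted and which controllers are available; the output will vary by distribution.

```
# Check that a cgroup hierarchy is mounted on this host
mount | grep cgroup

# List the controllers (cpu, memory, etc.) available under /sys/fs/cgroup
ls /sys/fs/cgroup
```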
WLM: the Gotchas!
Just like the good part, the problem really isn't with the WLM product but more with Linux. Below are a few gotchas to be aware of when setting up WLM.
- Linux OS Version:
- You need to be on a newer version of the Linux operating system. Even if you are using an older version of Linux that has the cgroups feature, WLM will not work with it, for the reason described in the next gotcha.
- /cgroups versus /sys/fs/cgroup
- When cgroups was first created, it was still an evolving feature, and in the beginning the folder path for cgroups was /cgroups. As the feature matured, the path changed to /sys/fs/cgroup. Splunk's WLM uses the /sys/fs/cgroup path, which is hardcoded into the product, and this prevents you from using those older versions of Linux.
- systemd versus init.d
- As Linux continues to improve, Splunk continues to take advantage of the latest OS features, one of which is systemd. In the past, Splunk used init.d to start the splunkd process; now Splunk lets you use either init.d or systemd. To use WLM, however, you must use systemd, because the operating system needs systemd to manage the splunkd process and its cgroups. A quick check and the relevant boot-start commands are sketched after this list.
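As a hedged sketch of that last gotcha: you can confirm the host is running systemd and then re-enable Splunk's boot-start in systemd-managed mode. The enable boot-start command and its -systemd-managed flag come from the Splunk CLI; the splunk user and $SPLUNK_HOME path here are just examples.

```
# Confirm PID 1 is systemd (prints "systemd" on a systemd-managed host)
ps -p 1 -o comm=

# Example: switch splunkd from init.d-managed to systemd-managed boot-start
# (run as root; the user name and install path are examples)
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk disable boot-start
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunk
$SPLUNK_HOME/bin/splunk start
```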
WLM: the Extras
Even with the gotchas, WLM continues to improve, with changes based on customer feedback. With version 8.0, Splunk added new features that really improve the product:
- Schedule-based workload rules: You can schedule certain rules to run only at specified times.
- New conditions to classify searches: search_type, search_mode, search_time_range.
- Monitor and take automated remediation action: Create rules that monitor long-running searches and take automated action on them, as sketched below.
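To make these 8.0 features concrete, here is a rough sketch of what such rules might look like in workload_rules.conf. The rule names, predicate values, and the action/runtime settings are assumptions for illustration; verify the exact setting names and supported values against the workload_rules.conf spec for your 8.0+ deployment.

```
# workload_rules.conf -- illustrative sketch only (8.0+ features)

# Classify searches with the newer conditions: ad hoc, all-time searches
# get pushed into the slow pool (pool name assumed from the earlier sketch)
[workload_rule:demote_alltime_adhoc]
predicate = search_type=adhoc AND search_time_range=alltime
workload_pool = slow_pool

# Automated remediation (assumed syntax): abort ad hoc searches that have
# been running for more than 30 minutes and tell the user why
[workload_rule:abort_long_running_adhoc]
predicate = search_type=adhoc AND runtime>30m
action = abort
user_message = Your search exceeded 30 minutes and was stopped by workload management.
```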
About SP6
SP6 is a Splunk consulting firm focused on Splunk professional services, including Splunk deployment, ongoing Splunk administration, and Splunk development. SP6 also has a separate division that offers Splunk recruitment and the placement of Splunk professionals into direct-hire (FTE) roles for companies that need help acquiring full-time staff in today's challenging market.