Splunk and Docker
Logging infrastructures can become hard to maintain, especially at high logging volumes. In this article we explain how you can move your Splunk environment into containers, how doing so makes Splunk easier to maintain, and how it gives you a standard configuration across your infrastructure.
Docker is a container manager that makes it easy to create, deploy, and run applications in containers. Containers let you build a package with all the configuration and components the application needs to run. That package can be shipped to any machine that supports Docker and it will work as expected.
Splunk is a very powerful log processing tool with very fast search capabilities. It also helps you analyze and visualize data gathered from devices, applications, and websites, create alerts based on events, and much more.
Why use Splunk in containers
The main reasons to use Splunk with Docker Containers:
Reduced time and cost of upgrades
Easier and faster rollbacks
Easier support and maintenance
Setting Up Splunk in containers
The image above shows a simple infrastructure with a NodeJS app and a Laravel app. Both run in containers and have their log folders mapped with Docker volumes. These volumes are shared between the application container, which writes logs there, and the Universal Forwarder container, which reads the logs and sends them to the Splunk Indexer. This way, we don't need to install the Splunk UF inside the application container or on the host. And if we move the app to another host, we can ship the UF container along with it, or even create a docker-compose file that starts the UF container every time the app is deployed.
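As a sketch of that last idea, such a docker-compose file might look like the following. The application image name (my-node-app), the volume name (app_logs), the app's log path, and the indexer address (splunk.example.com) are placeholders, not names from this setup:

```yaml
# Sketch only: image name, volume name, log path, and indexer address
# are placeholders to adapt to your environment.
version: "2"
services:
  app:
    image: my-node-app                     # your application image
    volumes:
      - app_logs:/usr/src/app/logs         # the app writes its logs here
  forwarder:
    image: splunk/universalforwarder
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_FORWARD_SERVER=splunk.example.com:9997
    volumes:
      - app_logs:/opt/universalforwarder/var/logs/
volumes:
  app_logs:
```

Starting the app with `docker-compose up -d` then brings up the forwarder alongside it on any host.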
First we set up the Splunk Indexer.
Setting up Splunk in a container is very simple. We can use the pre-configured Splunk Enterprise and Universal Forwarder images from Docker Hub.
First, install Docker on the machine using the instructions from docker.com, or directly with the installation script from get.docker.com. Once Docker is installed, we can start Splunk with the following docker command:
docker run -d --name splunk -e "SPLUNK_START_ARGS=--accept-license" \
-e "SPLUNK_USER=root" \
-v /path/on/local/host/etc:/opt/splunk/etc \
-v /path/on/local/host/var:/opt/splunk/var \
-p 8000:8000 -p 9997:9997 -p 8089:8089 splunk/splunk
The first time, Docker will pull the image locally and then start the container. This brings up Splunk running on port 8000. It also mounts Splunk's etc and var directories on the local host, which ensures persistent data and easier backups.
Once the Web Console is available, we may want to configure the Server Classes and the app to be deployed in the Forwarder Management menu. This ensures that the logs from the Universal Forwarder are indexed into the right indexes and sourcetypes.
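For reference, the forwarder side of that deployment typically boils down to a monitor stanza in an inputs.conf shipped with the app. The path, index name, and sourcetype below are placeholders, not values from this setup:

```
# inputs.conf deployed to the Universal Forwarder; path, index, and
# sourcetype are placeholders to adapt.
[monitor:///opt/universalforwarder/var/logs/]
index = app_logs
sourcetype = app_log
disabled = false
```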
Running the Splunk Indexer in a container makes it very easy to maintain: downtime is reduced during upgrades and rollbacks.
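For example, under the assumptions above (a container named splunk with etc and var mounted from the host), an upgrade is just a matter of replacing the container. This is a sketch, not an official upgrade procedure:

```shell
# Upgrade sketch: because /opt/splunk/etc and /opt/splunk/var live on host
# volumes, replacing the container keeps configuration and indexed data.
docker pull splunk/splunk
docker stop splunk && docker rm splunk
docker run -d --name splunk -e "SPLUNK_START_ARGS=--accept-license" \
  -v /path/on/local/host/etc:/opt/splunk/etc \
  -v /path/on/local/host/var:/opt/splunk/var \
  -p 8000:8000 -p 9997:9997 -p 8089:8089 splunk/splunk
# Rollback: the same steps, pinning the previous image tag
# (splunk/splunk:<previous-version>) instead of the default.
```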
To start the Splunk Universal Forwarder and configure it to send logs to the Splunk Indexer, we run the following command:
docker run -d -e "SPLUNK_START_ARGS=--accept-license" \
-e "SPLUNK_FORWARD_SERVER=splunk:9997" \
-e "SPLUNK_USER=root" \
-v volume_name:/opt/universalforwarder/var/logs/ \
-v etc_volume:/opt/universalforwarder/etc/ \
splunk/universalforwarder
This brings up the Splunk UF, configures it to send logs to the desired Splunk Indexer instance, and mounts both the volume holding the logs we want to index and the /opt/universalforwarder/etc folder. As soon as the container starts, we will see logs coming in. Every time we run the application, regardless of the host, whether in a testing environment or in production, the application's logs will be in Splunk.
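A quick way to smoke-test the pipeline is to append a line to the shared volume (volume_name matches the run command above; the app.log file name is a placeholder) and check that it shows up in Splunk search shortly after:

```shell
# Append a test line to the shared log volume; the Universal Forwarder
# monitoring that volume should forward it to the indexer.
docker run --rm -v volume_name:/logs alpine \
  sh -c 'echo "smoke-test $(date -u)" >> /logs/app.log'
```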
Deploying Splunk using Rancher
We are now able to deploy the Splunk Indexer and Splunk UF using Docker. But if we want to scale in the near future, doing it manually with Docker can become difficult.
One solution for this would be Rancher.
Rancher is open-source software for managing and running your containers in production. It has nice enterprise-level features, and its GUI makes it easy to manage your infrastructure. If it's your first time installing Rancher, you can check the instructions here.
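If you just want something to experiment with, the Rancher 1.x server of this era can be launched as a single container. Treat this as a sketch; check the Rancher documentation for the current image tag and port:

```shell
# Start the Rancher server; the management UI becomes available on port 8080.
docker run -d --restart=unless-stopped --name rancher-server \
  -p 8080:8080 rancher/server:stable
```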
Once you have Rancher up and running, we first add the Splunk Indexer stack.
Configure the stack by providing a name and an optional description.
Now that we have our stack, we can add a new service:
We configure the name of the container, the image it will use, and the ports to be mapped on the host. We add the environment variables:
And the volumes:
We can now create the service. This launches the Splunk Indexer container, ready to be accessed on port 8000. The image and configuration are the same as in the manual run of the container, but Rancher simplifies the deployment. Upgrading and rolling back are easier, you can put the built-in load balancer in front of the Splunk Indexer, or simply deploy multiple instances on multiple hosts and run clusters of search heads.
Deploying the Splunk UF using Rancher
Using Rancher to deploy the Splunk Universal Forwarder is pretty much the same as for the Indexer. You only need to change the image, in this case to splunk/universalforwarder.
One important thing is to link the indexer service, which ensures that the forwarder can communicate with the indexer. To do so, add a service link and then choose the service you want to link:
Now, when setting the SPLUNK_FORWARD_SERVER environment variable, you need to specify the name of the linked service, in this case indexer. So the variable would look like this:
SPLUNK_FORWARD_SERVER=indexer:9997
This will ensure that the Universal Forwarder service will always communicate with the Indexer service.
This way of deploying Splunk reduces the time and cost of upgrades: all you need to do is deploy a new image with the newer version, which is very quick. Maintaining Splunk and shipping it along with the application becomes easier, and rolling back to a previous version is as simple as changing the image tag to the last working version.
Having everything pre-configured and packed into one container makes it easy to maintain a standard configuration across the infrastructure.