
Dockerizing Splunk: moving your logging infrastructure into containers


Splunk and Docker

Logging infrastructures can become hard to maintain, especially at high logging volumes. In this article we are going to explain how you can move your Splunk environment into containers, how this makes your Splunk easier to maintain, and how it gives you standard configuration across your infrastructure.

Docker is a container manager that makes it easy to create, deploy and run applications in containers. Containers allow you to build a package with all the configuration and parts needed for the application to run. The package can be shipped to any machine that supports docker and it will work as expected.

Splunk is a very powerful log processing tool with very fast search capabilities. It also helps you analyze and visualize data gathered from devices, applications and websites, create alerts based on events, and much more.

Why use Splunk in containers
The main reasons to use Splunk with Docker Containers:

Reduced time and cost of upgrades
Easier and faster rollbacks
Easier support and maintenance
Standard configuration

Setting Up Splunk in containers

The image above shows a simple infrastructure with a NodeJS app and a Laravel app. Both run in containers and have their log folders mapped with docker volumes. These docker volumes are shared between the application container, which stores its logs there, and the Universal Forwarder container, which reads the logs and sends them to the Splunk indexer. By doing so, we don't need to install the Splunk UF inside the application container or on the host. Also, if we move the app to another host, we can ship the UF container along with it, or even create a docker-compose file that starts the UF container every time the app is deployed.
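As a sketch of that idea, a docker-compose file for one of the apps might look like the following (the service names, application image and log path are illustrative assumptions, not from a real deployment):

```yaml
version: '2'
services:
  app:
    image: mycompany/nodejs-app      # hypothetical application image
    volumes:
      - app_logs:/usr/src/app/logs   # the app writes its logs here
  splunkuf:
    image: splunk/universalforwarder
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_FORWARD_SERVER=splunk.example.com:9997  # indexer address (assumed)
    volumes:
      - app_logs:/opt/universalforwarder/var/logs/     # same volume, read side
volumes:
  app_logs:
```

Starting this stack brings up the app and its forwarder together, so the logs follow the app to whatever host it is deployed on.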

First we set up the Splunk indexer.
Setting up Splunk in a container is very simple. We can use the pre-configured images of Splunk Enterprise and the Universal Forwarder from Docker Hub.
First, install docker on the machine using the official installation instructions or installation script. After docker has been installed, we can start splunk directly using the following docker command:

docker run -d --name splunk -e "SPLUNK_START_ARGS=--accept-license" \
-e "SPLUNK_USER=root" \
-v /path/on/local/host/etc:/opt/splunk/etc \
-v /path/on/local/host/var:/opt/splunk/var \
-p 8000:8000 -p 9997:9997 -p 8089:8089 splunk/splunk

The first time, docker will pull the image locally and then start the container. This will bring up splunk running on port 8000. It will also mount splunk's etc and var directories on the local host, which ensures persistent data and easier backups.
After the web console is available, we may want to configure the server classes and the app to be deployed in the Forwarder Management menu. This will ensure that the logs from the Universal Forwarder are indexed into the right indexes and sourcetypes.
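As a sketch, an app pushed from Forwarder Management might carry an inputs.conf along these lines (the monitored path, index and sourcetype names here are illustrative assumptions):

```
# monitored path, index and sourcetype names are assumptions
[monitor:///var/log/app/]
index = app_logs
sourcetype = nodejs_app
disabled = false
```

Whatever names you choose, the index referenced here must already exist on the indexer for the events to land in the right place.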
Running the Splunk indexer in a container makes it very easy to maintain.
Downtime is reduced during upgrades and rollbacks.

In order to start the Splunk Universal Forwarder and configure it to send the logs to the Splunk indexer, we run the following:

docker run -d -e "SPLUNK_START_ARGS=--accept-license" \
-e "SPLUNK_FORWARD_SERVER=splunk:9997" \
-e "SPLUNK_USER=root" \
-v volume_name:/opt/universalforwarder/var/logs/ \
-v etc_volume:/opt/universalforwarder/etc/ \
splunk/universalforwarder

This brings up the Splunk UF, configures it to send logs to the desired Splunk indexer instance, and mounts the volume holding the logs we want to index, as well as the etc folder. As soon as the container starts, we will see logs coming in. Every time we run the application, independent of the host, whether in a testing environment or in production, we will have the application's logs in Splunk.
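For reference, the SPLUNK_FORWARD_SERVER variable results in forwarding configuration equivalent to an outputs.conf roughly like this sketch (group name shown is the usual default):

```
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunk:9997
```

Knowing this equivalence is useful if you later want to bake the forwarding target into the etc volume instead of passing it as an environment variable.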

Deploying Splunk using Rancher

We are now able to deploy the Splunk indexer and the Splunk UF using docker. But if we want to scale in the near future, doing it manually with docker can become difficult.
One solution for this would be Rancher.
Rancher is open-source software for managing and running your containers in production. It has nice enterprise-level features, and its GUI makes it easy to manage your infrastructure. If it's your first time installing rancher, you can check some instructions here.

Once you have Rancher up and running, we will first add the Splunk Indexer stack.

Configure the stack by providing a name and an optional description.

Now that we have our stack, we can add a new service:

We configure the name of the container, the image it will use, and the ports that will be mapped on the host. We add the environment variables:

And the volumes:

We can create the service now. This will launch the splunk indexer container, ready to be accessed on port 8000. The image and configuration are the same as in the manual run of the container, but using Rancher we simplify the deployment. Upgrading and rolling back are easier, and you can use the built-in load balancer in front of the splunk indexer, or simply deploy multiple instances on multiple hosts and run clusters of search heads.

Deploying the Splunk UF using Rancher

Using Rancher to deploy the Splunk Universal Forwarder is pretty much the same as with the indexer. You just need to change the image used; here we will use splunk/universalforwarder.
One important thing is to link the indexer service. This ensures that the forwarder can communicate with the indexer. To do so, add a service link and then choose the service you want to link:

Now, when using the SPLUNK_FORWARD_SERVER environment variable, you need to specify the name of the service, in this case indexer. So the variable would look like this:

SPLUNK_FORWARD_SERVER=indexer:9997

This will ensure that the Universal Forwarder service will always communicate with the Indexer service.

This way of deploying Splunk reduces the time and cost of upgrades, as all you need to do is deploy a new image with the newer version, which is very quick.
Maintaining the forwarder and shipping it along with the application becomes easier. Also, rolling back to a previous version is as simple as changing the tag of the image to the older working version.
Having everything pre-configured and packed into one container makes it easy to maintain a standard configuration across the infrastructure.


Introduction to Rancher

Getting Started with Rancher


Docker by itself is a great tool for building scalable infrastructure. It helps you build stateless services, making it easier to maintain high availability and scalability, but in large infrastructures it can become difficult to manage manually.

The official solutions to this problem, Docker Swarm and Compose, are also great tools. They allow you to build large and elastic docker clusters, which are easily scaled to multiple machines. Having said this, they lack some crucial features like a built-in load balancer and cross-machine service discovery.

This is the part where Rancher comes in.
Rancher is an open source Docker PaaS that includes these features and many more. It's a great tool for managing and running your containers in production. It has nice enterprise-level features, and its GUI makes it easy to manage your infrastructure. A vast variety of features like service discovery, DNS, load balancing, cross-host networking, multi-node support, multi-tenancy and health checks makes Rancher a very compact tool that can also be deployed in a few minutes.

In this post we will be installing rancher, doing the basic configuration, managing environments, adding hosts and starting your first stack from a catalog.


– This guide assumes you have a Linux machine running (even a VM)
– Docker installed
– Network access to pull images

Deploying Rancher using docker

In this tutorial on how to install rancher, we will deploy it using the prebuilt docker images. If you haven't already installed docker, please do so using the official docker guide here.
We are going to use two containers for this deployment: rancher/server and mysql for the database. First we want to start the database using the following command:

$ docker run -d --restart=unless-stopped --name mysql \
-v /my/data/dir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-pass mysql

This will start mysql and store its data locally, so it will be persistent.

Now we need to create the rancher database. Get shell access to the container by running the following command:

$ docker exec -it mysql bash

Once inside the container, open mysql cli by running:

$ mysql -u root -p

Run the following commands to initialize the rancher database:

> CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';
> GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'password';
> GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY 'password';

The above SQL queries will create a database named cattle and the user cattle, which has access to it.

Now that we have our database ready, we can start the rancher/server container.

$ docker run -d --restart=unless-stopped -p 8080:8080 \
--link mysql:mysql rancher/server \
--db-host mysql --db-port 3306 --db-user cattle \
--db-pass password --db-name cattle

This will start the rancher container and connect it to the database we set up earlier. The service should be reachable on port 8080. Using a browser, go to your machine's IP on port 8080 and you should see Rancher up and running:
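The two docker run commands above can also be captured in a single docker-compose file, which makes the deployment reproducible. A sketch (the data path and passwords are placeholders, as above):

```yaml
version: '2'
services:
  mysql:
    image: mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: my-pass
    volumes:
      - /my/data/dir:/var/lib/mysql
  rancher:
    image: rancher/server
    restart: unless-stopped
    # same flags we passed to docker run above
    command: --db-host mysql --db-port 3306 --db-user cattle --db-pass password --db-name cattle
    ports:
      - "8080:8080"
    links:
      - mysql
```

Note that the cattle database still needs to be created inside mysql, as shown above, before rancher/server can use it.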

Configuring Rancher

The first time you log in to rancher, you need to create your first user account. To do so go to Admin -> Access Control:

You can choose different ways of managing your Access Control, but for this demo we will go with the Local one.

We set up a local admin user and enable Access Control with Local Authentication.

The next thing we want to look at is Environments.
Rancher uses Environments to group resources. Each environment has its own set of services, infrastructure and network. Environments can be owned and used by one or more users or groups, depending on the Access Control you are using.
A good example, as pointed out on the Environment tab in Rancher, would be to separate "dev", "test" and "production" environments. This will help you keep things isolated from each other and have different ways of granting and restricting access to your teams.

For this demo we will create a Testing environment. Open the Environments menu (right now we have only the "Default" environment):

Adding a new environment is very simple. You need to provide the name, the template that will be used (we will use Cattle, the rancher default template, in this demo) and the users that will have access:

After creating the new environment and switching to it, the next thing we need to do is add a host. Adding a host is also very simple. Before adding the host, check the supported versions of Docker. To add a new host go to Infrastructure -> Hosts.
The first time you try to add a new host, you will need to configure the Rancher Host Registration URL, which the agents will use to connect.

After saving the configuration, you will be redirected to the new host menu.

You can add hosts from Amazon, Azure, etc., but for this demo we will add the local machine where rancher is running.
All you need to do is copy the command under section 5 and paste it in the host. After the agent is up and running, we can see the new host present in the Hosts tab:

Now that we have our environment and a host configured, we can continue to add our first stack.
You can add a stack manually by adding its services one by one, or by adding a docker-compose file. Another way is to use one of the rancher or community catalogs available. They cover a wide variety of services and come pre-configured to make deployment very easy. Go to Catalog and select All. Here you will see all the available catalogs:

For this demo we will use the wordpress catalog entry. Search for wordpress or scroll down to the end. When you choose the wordpress template, you will be presented with some configuration options:

Here you can customize your wordpress settings, like the database user/password, the port wordpress will run on, and the admin password for wordpress. After you are done, hit Launch and in a few seconds you will have wordpress running.
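Under the hood, a catalog entry like this is essentially a docker-compose template. A minimal equivalent for the two services might look like this sketch (image choices and passwords are illustrative, not the actual catalog contents):

```yaml
version: '2'
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder password
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example # must match the db password above
    ports:
      - "80:80"                      # the port you picked in the form
    links:
      - db
```

The configuration options in the catalog form map onto environment variables and ports like these.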

As you can see, we have two services running for this stack, the database and wordpress. We can simply access our wordpress site on the port we configured.

Rancher makes it very easy and simple to maintain docker containers in small or large infrastructures. You can deploy full stacks in seconds, and deploy stacks across multiple hosts on a cross-host network. Its built-in features like service discovery, DNS, load balancing and more make it a fantastic tool when it comes to high availability and scalability.


Dockerizing Nessus Scanners

Dockerizing your vulnerability assessment architecture

Nessus and Docker
Nessus is a professional vulnerability management tool. It's a great tool designed to automate the testing and discovery of known security problems. One of the very powerful features of Nessus is its client-server architecture. Servers can be placed at various strategic points on a network, allowing tests to be conducted from various points of view. A central client or multiple distributed clients can control all the servers.

Docker is a container manager that makes it easy to create, deploy and run applications in containers. Containers allow you to build a package with all the configuration and parts needed for the application to run. This makes sure you have an identical setup on all the machines running that container. The package can be shipped to any machine that supports docker and it will work as expected.

Having automated and preconfigured scanners running in your environment can be very useful in terms of operational costs, consistency of configuration, ease of update, ease of deployment and migration.
In this tutorial we will cover how to build a new image running the Nessus scanner and how to deploy images containing custom configuration, making it simple to manage and maintain your scanners.

Why run Nessus Scanner in containers
One of the main reasons is standard configuration. With a pre-built container running the Nessus scanner, you can start that container on any of your hosts that will act as a scanner and get the pre-built standard configuration. This may include local user configuration, how the scanner will be managed, etc. Another important benefit is fast deployment. With a normal installation, it takes some time to install the scanner, configure it and start the service. With a container, deployment is very fast, and you will have the service up and running in a very short time. Other benefits are easier and faster upgrades and rollbacks.
If you are familiar with the SecurityCenter API and how to start scans with it, you could run the container only during scanning. By doing so, you would save resources on the server, as they would be used only when scanning. This approach can also be very effective with the Nessus Agent for host-based scanning, as you would not need to have the Nessus Agent up and running on your machines all the time, but only during the scans.

Building the Nessus Scanner Image
Docker installed on the host where the Nessus scanner will run
A local/private docker registry to save the image and deploy it more easily (Docker Hub can also be used, but note that only one private repo is available on the free plan)
The latest Nessus scanner deb file for Debian (we'll be running it on a Debian container, but you can use any OS of your liking)

Building the image:
To build the image, I would suggest doing it on your local machine and then deploying it to your scanner hosts using the docker registry.
First download the latest scanner version, which as of now is 7.0.3. We are going to use the 64-bit scanner.

After downloading the .deb package, create a directory named nessus_scanner. Copy the deb file inside it and also create a new file called Dockerfile with the following content:

FROM debian:latest

ADD Nessus-7.0.3-debian6_amd64.deb /tmp/Nessus-7.0.3-debian6_amd64.deb

RUN apt-get update -y \
&& apt-get install -y apt-utils tzdata \
&& dpkg -i /tmp/Nessus-7.0.3-debian6_amd64.deb \
&& rm -r /tmp/Nessus-7.0.3-debian6_amd64.deb

EXPOSE 8834

CMD service nessusd start && tail -f /dev/null

This tells docker to use the latest version of debian as the base image. It copies the Nessus deb file to the container's /tmp folder and installs the dependencies and the Nessus software in the container. After the installation is done it removes the deb file, as it won't be needed anymore. Port 8834, where the Nessus service runs, is exposed, and at the end the CMD line starts the Nessus service every time the container starts.
tail -f /dev/null is just a workaround to keep the container from stopping. Because the nessusd service runs in the background, docker would think the application has stopped and would stop the container. To prevent this, we run a foreground process which does not affect the nessusd service in any way.

If you are running this on a private network and need a proxy to access the internet, you also need to configure apt with a proxy that has access to the Debian repositories, so that the Debian container can download the necessary dependencies. To do so we only need to add a line to our Dockerfile. Create an apt.conf file with the necessary proxy configuration inside the nessus_scanner directory, then add a line right after the line that copies the deb package:

ADD Nessus-7.0.3-debian6_amd64.deb /tmp/Nessus-7.0.3-debian6_amd64.deb
ADD apt.conf /etc/apt/apt.conf
RUN apt-get update -y \

Note that this might change depending on the OS you decided to use.
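The apt.conf itself only needs the proxy directives. A minimal sketch (the proxy address is an illustrative assumption):

```
Acquire::http::Proxy "http://proxy.example.com:3128/";
Acquire::https::Proxy "http://proxy.example.com:3128/";
```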

If you want to use the proxy only during the build and restore the default configuration afterwards, just add a default apt.conf file to your folder and add this line before the EXPOSE command:

&& rm -r /tmp/Nessus-7.0.3-debian6_amd64.deb
ADD apt.conf.default /etc/apt/apt.conf

Now that we have the Dockerfile, we can build the image using the following command:

docker build -t nessus/nessus-scanner:7.0.3 .

This will create a new image tagged nessus/nessus-scanner:7.0.3.
If you are using a private docker registry, you can tag it using your registry:

docker build -t myprivateregistry:5000/nessus/nessus-scanner:7.0.3 .

myprivateregistry is the IP or name of the server where you are running your docker registry, usually on port 5000. nessus is the name of the project, nessus-scanner is the name of the repository where the image will be saved, and 7.0.3 is the tag, which can be used to track the version of the service running in the container. The last "." tells docker where to look for the Dockerfile, in this case the current folder. If the Dockerfile is in another path, use that one.

The image will require some time to build. After the image has been built, you can push it to your registry using the docker push command and then pull it on the host where you want to run the container. For more information on pushing and pulling docker images, you can check the official docker documentation.

To run the container we use the following command:

docker run -d --name nessus-scanner -p 8834:8834 nessus/nessus-scanner:7.0.3

The above command tells docker to run the container in detached mode (-d), gives the container the name nessus-scanner and exposes port 8834 on the host, mapped to port 8834 of the container. This starts the container with the service running on port 8834 of the host. You can connect to serverip:8834 using your web browser and you will see the initial Nessus page.

First we need to create a username/password for the Nessus web console. (This will also be used to connect SecurityCenter to the scanner, if you are managing the scanner with SecurityCenter.)

In the next step you need to define how the scanner will be managed. This will depend on your infrastructure: whether you are using SecurityCenter or just a simple standalone Nessus scanner.

After that, the scanner will initialize its data for a few seconds. (Note that the first initialization is faster when running the scanner in a container than with the default installation.)

We still don't have any custom configuration in the image.
The way the container starts now, all the data is stored inside the container and lost when the container is removed.

Having Persistent Data
In order to have persistent data, we first create a docker volume, where the data will be stored:

docker volume create nessus_scanner_var

This will create a new docker volume, located under /var/lib/docker/volumes.
To have the nessus data stored there, we start the container using this command:

docker run -d --name nessus-scanner \
-v nessus_scanner_var:/opt/nessus/var/nessus \
-p 8834:8834 nessus/nessus-scanner:7.0.3

This starts the container and stores the data in /opt/nessus/var/nessus inside the container into the nessus_scanner_var volume. This ensures the data won't be lost, independent of the container. Note that data present in host volumes or folders has higher priority over data present in the container.

Custom configuration in the image
Currently we have managed to build and deploy a default Nessus scanner container. This image does not have any custom configuration. To have the image pre-configured before deploying it, we need to do a few simple tasks.

First, configure a local Nessus Scanner.
During the first initialization, the Nessus scanner requires creating a new user and choosing how the scanner will be managed. After you start the container and apply the standard configuration that will be used across the infrastructure, you will need to create a new image.
To do so, first find the id of the container using the docker ps command.

Once you have the container id, run the following commands to push the new image to the docker registry:

docker commit cont_id nessus-scanner-conf

docker tag nessus-scanner-conf \
myprivateregistry:5000/nessus/nessus-scanner-conf:7.0.3

docker push myprivateregistry:5000/nessus/nessus-scanner-conf:7.0.3

This set of docker commands pushes the new image to the docker registry:
docker commit creates a new image from the container, docker tag tags it with the private registry name, and docker push pushes it to the registry.

To run this new image we can use the same command as with the first one, only changing the image name. This time when the container starts, it will not require any configuration; it will be pre-configured and ready to be used.

Deploying Nessus Scanner using Rancher
At this point we have the base Nessus image and the image with custom configuration.
Now we need to deploy our containers, manage multiple environments like staging and production, deploy containers on multiple hosts, and set up clusters of containers with a few clicks. This is where Rancher comes in handy.
Rancher is open-source software for managing and running your containers in production. It has nice enterprise-level features, and its GUI makes it easy to manage your infrastructure. To get started with Rancher, you can check our Introduction to Rancher here.

After you have Rancher up and running and have added the hosts where the container will be deployed, we will first configure the docker registry (the private one, or your account credentials if the repository you created is private).
To do so go to Infrastructure -> Registries

After clicking on Add Registry, we can choose the type of registry we want to add:

After the registry has been added, rancher will be able to pull images from it. This means you don't need to configure the registry on all your hosts, as you would have needed to do if you had deployed the container locally.

Once we have the registry set up, we create a new stack that will contain our services:

Configuring the Stack is pretty simple, just add the name and some description

Once we have the stack up, we create a new service, which will be our Nessus Container.

Here is where we configure how the service will be deployed. We can decide how the service will be scaled, how many containers will be running, and on which hosts they will run. We configure the image that will be used and the ports that will be exposed.

In the Volumes tab, we can configure the mapping of the Volumes.

In the Scheduling tab, we configure on which hosts the containers will be deployed. We can deploy containers on multiple hosts based on the labels we have configured for those hosts, or simply run the container on a specific host.

After we have finished with all the necessary configuration, we can create the service, and the containers will be deployed.

Running the Nessus scanner in containers can greatly reduce configuration and maintenance time and cost. Maintaining a standard configuration is simplified. New scanner deployments are faster and easier to manage. Running the containers on demand through the SecurityCenter API can be very efficient, reducing resource utilization.
Also, using Rancher, deployment in wide infrastructures and environments becomes easier to manage.
