Case Studies


Ansible Best Practice

Overview

Ansible is an open source configuration management tool. Its simplicity makes it very popular, but being a simple tool does not limit its usage: Ansible can be used for simple automation tasks, but also for complex provisioning and orchestration. In this post we are going to show you some best practices you should take into consideration when using Ansible. We assume you are familiar with the basic concepts of Ansible; if not, you can check our previous post here with instructions on how to get started with Ansible.

Project Structure

One of the first things you have to start with is the directory structure and layout.
The default layout should be something similar to the following:
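A sketch of that layout, with example file names, matching the items explained below:

production                 # inventory file for production servers
staging                    # inventory file for staging servers
group_vars/
   webservers.yml          # variables assigned to the webservers group
host_vars/
   hostname1.yml           # variables assigned to a single host
library/                   # custom modules (optional)
module_utils/              # custom module_utils (optional)
filter_plugins/            # custom filter plugins (optional)
site.yml                   # master playbook
webserver.yml              # playbook for the web server tier
dbserver.yml               # playbook for the db server tier
roles/
    common/                # everything in here belongs to the common role
        tasks/
        handlers/
        ...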

– production and staging are our inventory files. It's always good practice to keep the production and staging hosts in separate files; having them in one file can get messy.
– group_vars and host_vars are used for assigning variables to a specific group of hosts or to a single host.
– library and module_utils are used for any custom modules or module_utils.
– filter_plugins is used for custom filter plugins.
– site.yml is the master playbook.
– webserver.yml and dbserver.yml are playbooks for the web and db server tiers used as examples in this demo. You can also have other tiers, depending on what you are provisioning.
– roles is the directory where all the roles are stored.
– roles/common : the directories inside the roles directory, like common, represent the roles. This means that everything inside the common directory is part of the common role.
Structure inside a role:
tasks/main.yml : this is where the tasks of the role are specified
handlers/main.yml : handlers file
templates/ : files to be used with the template resource
files/ : files to be used with the copy and script resource
vars/main.yml : variables associated with this role
defaults/main.yml : default lower priority variables for this role
meta/main.yml : role dependencies
library/ : custom modules for this role
module_utils/ : custom module_utils for this role
lookup_plugins/ : custom plugins for this role

Naming

If you plan on maintaining your playbooks, you need to keep the naming consistent between your inventories, roles, plays, variables, etc. For example, use the name of the role to separate the variables in each group. Let's say you are installing Apache and PHP on your web servers. You can separate the variables used for each of them into group_vars/webservers/apache.yml and group_vars/webservers/php.yml.
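For instance, a small sketch of how those files might look, with variables prefixed by the role they belong to (the variable names here are only illustrative):

group_vars/webservers/apache.yml:
apache_listen_port: 80
apache_server_admin: webmaster@example.com

group_vars/webservers/php.yml:
php_memory_limit: 256M
php_display_errors: "Off"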

Staging

As mentioned above, you should keep your staging or testing environment separate from your production environment by using different inventory files. Using the -i flag, you can choose which inventory to run the tasks against. Testing your playbooks before running them in production is always a good idea.
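For example, assuming the staging and production inventory files from the layout above, a typical workflow is to run against staging first and only then against production:

$ ansible-playbook -i staging site.yml
$ ansible-playbook -i production site.yml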

Encrypt passwords

There is a high chance that you will end up using passwords or certificates in your playbooks. Storing them in plain text files is never a good idea. Ansible offers ansible-vault, which can be used to encrypt sensitive data. Instead of using plain text files, you store your sensitive data in encrypted files. When running the playbooks, you need to use a flag, --ask-vault-pass or --vault-password-file, which will then decrypt the files (in memory only). The only drawback is that, at the moment, you can only use one password for encrypting your files.
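A minimal sketch of the workflow (the file names are illustrative):

# encrypt the file containing sensitive variables
$ ansible-vault encrypt group_vars/webservers/vault.yml

# run the playbook, supplying the vault password interactively
$ ansible-playbook -i production site.yml --ask-vault-pass

# or read the vault password from a file
$ ansible-playbook -i production site.yml --vault-password-file ~/.vault_pass.txt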

Version Control

Use version control on all your playbooks. Keep your files in a git repository and commit whenever you make changes. This will ensure you have an audit trail describing why and when changes were made to your playbooks.

Modules before run commands

Run commands like the command and shell modules enable the user to execute shell commands on the hosts. At first this may seem convenient, but in the long run it is always better to use the built-in modules when possible. Running your tasks as shell commands may work the first time, but there is a high chance they will fail on a second run, for example when something already exists. Using the built-in modules also makes it easier to achieve idempotency in your playbooks.
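A small illustration of the difference, as a sketch for a Debian/Ubuntu host:

# fragile: reports "changed" on every run and knows nothing about the package state
- name: Install Apache with a shell command
  shell: apt-get install -y apache2

# preferred: the apt module checks the current state and only acts when needed
- name: Install Apache
  apt: name=apache2 state=present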

Idempotency
Idempotency means running the same playbook multiple times on the same host and expecting the same results. This means that after you run your playbook the first time on a host and all the changes are applied, the second run should return everything green and make no changes. Here are some ideas to start with when trying to achieve idempotency.
Always use built-in modules instead of run commands when possible, as explained above.
Always use the same inputs.
The argument force: no can be used on some modules to make sure a task is run only once. For example, if you are using a template and want it copied only if it does not already exist, this will ensure that it is not copied again on the second run.
When forced to use the command or shell modules, always use the creates argument. Otherwise, the tasks will always be reported as "changed".
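For instance, a sketch of both techniques (the paths and file names are illustrative):

# copy the template only if the destination file does not already exist
- name: Deploy initial configuration
  template: src=app.conf.j2 dest=/etc/app/app.conf force=no

# the shell task is skipped once /opt/app/.installed exists
- name: Run the installer script
  shell: /opt/app/install.sh creates=/opt/app/.installed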


Getting Started with Ansible

Overview

When working in devops or system administration, one of the main tools you need to keep your eyes on is configuration management. Whether it's a set of complex orchestrations or a simple automated task, a configuration management tool can make your day easier.
Let's be honest, we are living in the age of automation. If you have to do a task twice, it's time to consider automating it. For this post we have decided to go with Ansible.

Ansible is an open source configuration management tool. What makes Ansible so popular is its agent-less, push-based approach. In a nutshell, other solutions like Chef or Puppet work by installing an agent on the hosts that you manage. The agent is responsible for pulling changes from a master host, usually over the tool's own channel rather than SSH.
The advantage of Ansible is the simplicity with which it operates. Pushes are applied over SSH; basically it is the same as connecting to your machines and running a set of commands, except that a script does the whole thing automatically, with the bonus of a lot of modules that make the script even easier to write.

In this post we will show you how to install ansible, run some simple starting scripts and get started using it.

Installing Ansible

Installing Ansible is also very simple. You can follow the instructions here, depending on your machine. If you use pip, you can install it by running this command:

$ sudo pip install ansible

Make sure everything went OK by checking the Ansible version:

$ ansible --version

Adding hosts to inventory

Once you have Ansible installed, the first thing to do before running your tasks is to specify the hosts that will be managed by Ansible.
Ansible uses a file called hosts, inside the Ansible configuration directory (on Linux it should be /etc/ansible/hosts), where you define your hosts. You can add single hosts or groups of hosts, like web servers or database servers, where you can run the same task on multiple hosts.
Let's start by adding one host to a webservers group. To do so, modify the file and add two lines like this:

[webservers]
app1Server.example.com

[webservers] is the name of the group, or the group header. app1Server.example.com is the name of the host. You can also use IPs.

Save the file and close it.
Now that we have at least one host configured, let's run some simple tasks with Ansible.

$ ansible webservers -m ping -u root

This should return:
app1Server.example.com | SUCCESS => {
"changed": false,
"ping": "pong"
}

As you can see, the ping module worked.

Running Ansible playbooks

In this part, we will start using Ansible playbooks.
Playbooks combine tasks to be run against hosts when executed. For this demo, we will provision a LAMP stack. The playbook will install the LAMP components: Apache, PHP and MySQL. It will start Apache and then show a "Hello World" page.

Let's start by writing a simple web server playbook with only PHP and Apache.

The playbook.yml should look like this:

---
- hosts: all
  tasks:
    - name: Install Apache
      apt: name=apache2 state=present

    - name: Install PHP module for Apache
      apt: name=libapache2-mod-php5 state=present

    - name: Start Apache
      service: name=apache2 state=started enabled=yes

    - name: Install Hello World PHP script
      copy: src=index.php dest=/var/www/index.php mode=0664

Playbooks are written in YAML. First comes the hosts entry; this tells the playbook where the tasks will be run. Then comes the tasks section, where you define the tasks that the playbook will run. For each of them we set a name, so it is easier to understand what is going on during playbook execution. For this playbook we use the following modules: apt, service and copy. You can find more detailed information on each of them, and more, on the Ansible docs pages.
Since the last task uses the copy module, we need to have the index.php file that will be copied to the host.

<?php
echo "Hello World";
?>

A simple Hello World will do for the purpose of this post.

To run the playbook, you need to run the following:

$ ansible-playbook playbook.yml -u root

(for this demo I am using the root user; you can also use non-root users)

After the playbook has been executed we will have a simple web server running on our hosts.

Ansible Roles

What we did in the previous step is write a simple Ansible playbook which installs a web server using Apache and PHP and serves a Hello World page. Now, what if our application also needs a database? Normally, we would include another task in the playbook which installs the database. But in our production environments, databases are located on different servers. We don't want to install Apache on the database server, just as we don't want a database installed on the web servers. So including everything in one playbook won't work in this case. This is where we need Ansible roles.

A role is a set of tasks and configuration grouped by a common functionality. For instance, a web server could be a role and a database server could be another role.
Before we start writing the roles, we need to understand the project directory structure.

In the end, this is the directory structure we will have for the project. First is the hosts file, where we will define the web servers and the database servers. Then comes playbook.yml, which defines how the roles will be run.
The roles directory is where the tasks and the files are stored. All the tasks are written in the main.yml file of their corresponding folder.
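Laid out as a tree, the project described here looks roughly like this:

hosts
playbook.yml
roles/
    webserver/
        tasks/
            main.yml
        files/
            index.php
    database/
        tasks/
            main.yml
        files/
            dump.sql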
We already have the webserver main.yml file ready; all we need to do is remove the hosts entry and copy the rest.
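The resulting roles/webserver/tasks/main.yml then contains just the task list:

---
- name: Install Apache
  apt: name=apache2 state=present

- name: Install PHP module for Apache
  apt: name=libapache2-mod-php5 state=present

- name: Start Apache
  service: name=apache2 state=started enabled=yes

- name: Install Hello World PHP script
  copy: src=index.php dest=/var/www/index.php mode=0664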
Now we need to write the main playbook.yml. It should look like this:

---
- name: Apply web server configuration
  hosts: webserver
  roles:
    - webserver

- name: Apply database server configuration
  hosts: database
  roles:
    - database

By doing so, we are sure that the webserver tasks will be run only on the web servers and the database tasks only on the database servers.

Now we need to write the last one, the database main.yml.
It should look like this:

---
- name: Add APT GPG signing key
  apt_key: url=http://keyserver.ubuntu.com/pks/lookup?op=get&search=0xCBCB082A1BB943DB state=present

- name: Add APT repository
  apt_repository: repo='deb http://ftp.osuosl.org/pub/mariadb/repo/10.0/ubuntu {{ ansible_distribution_release }} main' state=present update_cache=yes

- name: Install MariaDB
  apt: name=mariadb-server state=present

- name: Start Mysql Service
  service: name=mysql state=started enabled=true

- name: Install python Mysql package # required for the mysql_db tasks
  apt: name=python-mysqldb state=present

- name: Create a new database
  mysql_db: name=demo state=present collation=utf8_general_ci

- name: Create a database user
  mysql_user: name=demo password=demo priv=*.*:ALL host=localhost state=present

- name: Copy sample data
  copy: src=dump.sql dest=/tmp/dump.sql

- name: Insert sample data
  shell: cat /tmp/dump.sql | mysql -u demo -pdemo demo

- name: Install MySQL extension for PHP
  apt: name=php5-mysql state=present

The above tasks will install and configure our database.
We also add a simple db.php file with the connection string and a simple query to fetch the data from the database:

<?php
$connection = new PDO('mysql:host=localhost;dbname=demo', 'demo', 'demo');
$statement  = $connection->query('SELECT message FROM demo');
echo $statement->fetchColumn();
?>

Also, to have the data stored in the database in the first place, we can set up a simple dump.sql file:

CREATE TABLE IF NOT EXISTS demo (
  message varchar(255) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
INSERT INTO demo (message) VALUES('Hello World!');

After we have all the files ready, we run the playbook again:

$ ansible-playbook -i hosts playbook.yml -u root

This will again bring up a web server showing a Hello World page, but this time the web server uses a database to fetch the data.

Conclusion

Ansible is really easy and simple to set up and use. The YAML syntax is simple and easy to understand, with extensive documentation and examples. The more structured your projects are, the easier they will be to understand and the less complex they will become. Ansible can be used to handle small tasks or large automation processes, even orchestration.


Dockerizing Splunk, moving your logging infrastructure into containers

Overview

Splunk and Docker

Logging infrastructures can become hard to maintain, especially at high logging volumes. In this article we are going to explain how you can move your Splunk environment into containers, how doing so makes Splunk easier to maintain, and how to keep a standard configuration across your infrastructure.

Docker is a container manager that makes it easy to create, deploy and run applications in containers. Containers allow you to build a package with all the configuration and parts that are needed for the application to run. The package can be shipped to any machine which supports Docker and it will work as expected.

Splunk is a very powerful log processing tool with very fast search capabilities. It also helps you analyze and visualize data gathered from devices, applications and websites, create alerts based on events, and much more.

Why use Splunk in containers
The main reasons to use Splunk with Docker containers are:

Reduced time and management costs for upgrades
Easier and faster rollbacks
Easier support and maintenance
Standard configurations

Setting Up Splunk in containers

The image above shows a simple infrastructure with a NodeJS app and a Laravel app. Both run in containers and have their log folders mapped to Docker volumes. Those volumes are shared between the application container, which stores its logs there, and the Universal Forwarder container, which reads the logs and sends them to the Splunk indexer. By doing so, we don't need to install the Splunk UF inside the application container or on the host. Also, if we move the app to another host, we can ship the UF container along with it, or even create a docker-compose file that starts the UF container every time the app is deployed.
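As a sketch of that idea, a docker-compose file pairing an application with its own forwarder might look something like this (the application image, volume names and indexer address are assumptions for the example):

version: "2"
services:
  app:
    image: my-node-app:latest                        # hypothetical application image
    volumes:
      - app_logs:/usr/src/app/logs                   # the app writes its log files here
  forwarder:
    image: splunk/universalforwarder
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_FORWARD_SERVER=splunk-indexer:9997    # address of the Splunk indexer
      - SPLUNK_USER=root
    volumes:
      - app_logs:/opt/universalforwarder/var/logs/   # same volume, read by the UF
volumes:
  app_logs: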

First we set up the Splunk indexer.
Setting up Splunk in a container is very simple. It is possible to use the pre-configured images of Splunk Enterprise and the Universal Forwarder from Docker Hub.
First, it is necessary to install Docker on the machine, using the instructions from docker.com or directly the installation script from get.docker.com. After Docker has been installed, we can start Splunk using the following docker command:

docker run -d --name splunk -e "SPLUNK_START_ARGS=--accept-license" \
-e "SPLUNK_USER=root" \
-v /path/on/local/host:/opt/splunk/etc \
-v /path/on/local/host:/opt/splunk/var \
-p 8000:8000 -p 9997:9997 -p 8089:8089 splunk/splunk

The first time, Docker will pull the image locally and then start the container. This will bring up Splunk running on port 8000. It will also mount Splunk's etc and var directories on the local host, which ensures persistent data and easier backups.
After the Web Console is available, we might want to configure the Server Classes and the app to be deployed in the Forwarder Management menu. This will ensure that the logs from the Universal Forwarder are indexed into the right indexes and sourcetypes.
Running the Splunk indexer in a container makes it very easy to maintain.
Downtime is reduced during upgrades and rollbacks.

In order to start the Splunk Universal Forwarder and configure it to send the logs to the Splunk indexer, we run the following:

docker run -d -e "SPLUNK_START_ARGS=--accept-license" \
-e "SPLUNK_FORWARD_SERVER=splunk:9997" \
-e "SPLUNK_USER=root" \
-v volume_name:/opt/universalforwarder/var/logs/ \
-v etc_volume:/opt/universalforwarder/etc/ \
splunk/universalforwarder

This will bring up the Splunk UF, configure it to send logs to the desired Splunk indexer instance, and also mount the volume containing the logs we want to index, as well as the /opt/universalforwarder/etc folder. As soon as the container starts, we will see logs coming in. Every time we run the application, regardless of the host, whether in a testing environment or in production, we will have the application's logs in Splunk.

Deploying Splunk using Rancher

We are now able to deploy the Splunk indexer and Splunk UF using Docker. But if in the near future we want to scale, doing it manually with Docker can become difficult.
One solution for this is Rancher.
Rancher is open-source software for managing and running your containers in production. It has nice enterprise-level features, and its GUI makes it easy to manage your infrastructure. If it's your first time installing Rancher, you can check some instructions here.

Once you have Rancher up and running, first we will add the Splunk Indexer Stack.

Configure the stack by providing a name and an optional description.

Now that we have our stack, we can add a new service:

We configure the name of the container, the image that it will use and the ports that will be mapped on the host. We add the environment variables:

And the volumes:

We can now create the service. This will launch the Splunk indexer container, ready to be accessed on port 8000. The image and configuration are the same as in the manual docker run of the container, but using Rancher simplifies the deployment. Upgrading and rolling back are easier, you can use the built-in load balancer in front of the Splunk indexer, or simply deploy multiple instances on multiple hosts and run clusters of search heads.

Deploying the Splunk UF using Rancher

Using Rancher to deploy the Splunk Universal Forwarder is pretty much the same as with the indexer. You need to change the image used; here splunk/universalforwarder will be used.
One important thing is to link the indexer service. This will ensure that the forwarder can communicate with the indexer. To do so, you need to add a service link and then choose the service you want to link:

Now, when using the SPLUNK_FORWARD_SERVER environment variable, you need to specify the name of the service, in this case indexer, together with the receiving port. So the variable would look something like this:
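SPLUNK_FORWARD_SERVER=indexer:9997

(assuming the indexer keeps its default receiving port, 9997, as configured earlier)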

This will ensure that the Universal Forwarder service will always communicate with the Indexer service.

Conclusion
This way of deploying Splunk will reduce the time and cost of upgrades, as all you need to do is deploy a new image with the newer version, which is very quick.
Maintaining Splunk and shipping it along with the application becomes easier. Rolling back to a previous version is as simple as changing the tag of the image to the older working version.
Having everything pre-configured and packed into one container makes it easy to maintain a standard configuration across the infrastructure.


Introduction to Rancher

Getting Started with Rancher

Overview

Docker by itself is a great tool for building scalable infrastructure. It helps you build pieces of stateless services, making it easier to maintain high availability and scalability, but in large infrastructures it can become difficult to manage manually.

The official solutions to this problem, Docker Swarm and Compose, are also great tools. They allow you to build giant and elastic Docker clusters, which are also easily scaled across multiple machines. Having said this, they lack some crucial features like a built-in load balancer and cross-machine service discovery.

This is the part where Rancher comes in.
Rancher is an open source Docker PaaS that includes these features and many more. It's a great tool for managing and running your containers in production. It has nice enterprise-level features, and its GUI makes it easy to manage your infrastructure. A wide variety of features like service discovery, DNS, load balancing, cross-host networking, multi-node support, multi-tenancy and health checks makes Rancher a very compact tool that can be deployed in a few minutes.

In this post we will be installing Rancher, doing the basic configuration, managing environments, adding hosts and starting your first stack from a catalog.

Pre-reqs

– This guide assumes you have a Linux machine running (even a VM)
– Docker installed
– Access to docker.com to pull images

Deploying Rancher using docker

In this tutorial on how to install Rancher, we will deploy it using the prebuilt Docker images. If you haven't already installed Docker, please do so by using the official Docker guide here.
We are going to use two containers for this deployment: rancher/server and mysql for the database. First we start the database using the following command:

$ docker run -d --restart=unless-stopped --name mysql \
-v /my/data/dir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-pass mysql

This will start MySQL and store its data locally, so it is persistent.

Now we need to create the Rancher database. Get shell access to the container by running the following command:

$ docker exec -it mysql bash

Once inside the container, open the mysql CLI by running:

$ mysql -u root -p

Run the following commands to initialize the rancher database:

> CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';
> GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'password';
> GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY 'password';

The above SQL statements will create a database named cattle and a user cattle which has access to it.

Now that we have our database ready, we can start the rancher/server container.

$ docker run -d --restart=unless-stopped -p 8080:8080 \
--link mysql:mysql rancher/server \
--db-host mysql --db-port 3306 --db-user cattle \
--db-pass password --db-name cattle

This will start the Rancher container and connect it to the database we set up earlier. The service should be reachable on port 8080. Using a browser, go to your machine's IP on port 8080 and you should see Rancher up and running:

Configuring Rancher

The first time you log in to Rancher, you need to create your first user account. To do so, go to Admin -> Access Control:

You can choose different ways of managing your Access Control, but for this demo we will go with the Local one.

We set up a local admin user and enable Access Control with Local Authentication.

The next thing we want to look at is Environments.
Rancher uses environments to group resources. Each environment has its own set of services, infrastructure and network. Environments can be owned and used by one or more users or groups, depending on the Access Control you are using.
A good example, as pointed out on the Environments tab in Rancher, would be to separate "dev", "test" and "production" environments. This will help you keep things isolated from each other and have different ways of providing and restricting access to your teams.

For this demo we will create a Testing environment. Open the Environments menu (right now we have only the "Default" environment):

Adding a new environment is very simple. You need to provide the name, the template that will be used (we will use Cattle, the Rancher default template, in this demo) and the users that will have access:

After creating the new environment and switching to it, the next thing we need to do is add a host. Adding a host is also very simple. Before adding the host you need to check the supported versions of Docker. To add a new host go to Infrastructure -> Hosts.
The first time you try to add a new host, you will need to configure the Rancher Host Registration URL, which the agents will use to connect to the server.

After saving the configuration, you will be redirected to the new host menu.

You can add hosts from Amazon, Azure, etc., but for this demo we will add the local machine where Rancher is running.
All you need to do is copy the command under section 5 and paste it on the host. After the agent is up and running, we can see the new host present in the Hosts tab:

Now that we have our environment and a host configured, we can continue to add our first stack.
You can add a stack manually by adding its services one by one, or even by adding a docker-compose file. Another way is to use one of the Rancher or community catalogs available. They cover a wide variety of services and come pre-configured to make deployment very easy. Go to Catalog and select All. Here you will have all the available catalogs:

For this demo we will use the wordpress catalog entry. Search for wordpress or scroll down to the end. When you choose the wordpress template, you will be presented with some configuration options:

Here you can customize your wordpress settings, like the database user/password, the port that wordpress will run on, and the admin password for wordpress. After you are done, hit Launch and in a few seconds you will have wordpress running.

As you can see, we have two services running for this stack, the database and wordpress. We can simply access our wordpress site on the port we configured.
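For reference, the same two-service stack expressed as a docker-compose file (which Rancher can also import, as mentioned above) might look roughly like this; the image tags, credentials and published port are illustrative:

version: "2"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example-pass
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    links:
      - db
    ports:
      - "8081:80"                     # the port you will browse to
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example-pass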

Conclusion
Rancher makes it very easy and simple to maintain Docker containers in small or large infrastructures. You can deploy full stacks in seconds, and you can deploy stacks across multiple hosts on a cross-host network. Its built-in features like service discovery, DNS, load balancing and more make it a fantastic tool when it comes to high availability and scalability.


Dockerizing Nessus Scanners

Dockerizing your vulnerability assessment architecture

Nessus and Docker
Nessus is a professional vulnerability management tool. It's a great tool designed to automate the testing and discovery of known security problems. One of the very powerful features of Nessus is its client-server technology. Servers can be placed at various strategic points on a network, allowing tests to be conducted from various points of view. A central client or multiple distributed clients can control all the servers.

Docker is a container manager that makes it easy to create, deploy and run applications in containers. Containers allow you to build a package with all the configuration and parts that are needed for the application to run. This makes sure that you have an identical setup on all the machines running that container. The package can be shipped to any machine which supports Docker and it will work as expected.

Having automated and preconfigured scanners running in your environment can be very useful in terms of operational costs, consistency of configuration, ease of updates, ease of deployment and migration.
In this tutorial we will cover how to build a new image running the Nessus scanner and how to deploy images containing custom configuration, making it simple to manage and maintain your scanners.

Why run Nessus Scanner in containers
One of the main reasons is standard configuration. By having a pre-built container running the Nessus scanner, you will be able to start that container on any of your hosts that will act as a scanner, and have the pre-built standard configuration. This may include local user configuration, how the scanner will be managed, etc. Another important benefit is fast deployment. With a normal installation, it takes some time to install the scanner, configure it and start the service. When running it in a container, deployment is very fast and you will have the service up and running in a very short time. Other benefits are easier and faster upgrades and rollbacks.
If you are familiar with the Security Center API and how to start scans with it, you could run the container only during the scans. By doing so, you would save resources on the server, as they would be used only when scanning. This approach can also be very effective when used with the Nessus Agent for host-based scanning, as you would not need to have the Nessus Agent up and running on your machines all the time, but run it only during the scans.

Building the Nessus Scanner Image
Prerequisites:
Docker installed on the host where the Nessus scanner will run
A local/private Docker registry in order to save the image and deploy it more easily (hub.docker.com can also be used, but note that only one private repo is available)
The latest Nessus scanner deb file for Debian (we'll be running it on a Debian container; you can use any OS of your liking)

Building the image:
For building the image, I would suggest doing it on your local machine and then, using the Docker registry, deploying it on your scanner hosts.
First download the latest scanner version, which as of now is 7.0.3. We are going to use the 64-bit scanner.

After downloading the .deb package, create a directory named nessus_scanner. Copy the deb file inside it and also create a new file called Dockerfile with the following content:

FROM debian:latest

ADD Nessus-7.0.3-debian6_amd64.deb /tmp/Nessus-7.0.3-debian6_amd64.deb

RUN apt-get update -y \
&& apt-get install -y apt-utils tzdata \
&& dpkg -i /tmp/Nessus-7.0.3-debian6_amd64.deb \
&& rm -r /tmp/Nessus-7.0.3-debian6_amd64.deb

EXPOSE 8834

CMD service nessusd start && tail -f /dev/null

This tells Docker to use the latest version of Debian as the base image. It will copy the Nessus deb file to the container's /tmp folder, then install the dependencies and the Nessus software in the container. After the installation is done it will remove the deb file, as it won't be needed anymore. Port 8834, where the Nessus service runs, will be exposed, and in the end it will run the command that starts the Nessus service every time the container starts.
tail -f /dev/null is just a workaround to keep the container from stopping. Without it, the nessusd service would run in the background, making Docker think that the application has stopped, which would cause Docker to stop the container. To prevent this, we run a foreground process which does not affect the nessusd service in any way.

If you are running this on a private network and need a proxy to access the internet, you also need to configure apt.conf with a proxy that has access to the Debian repositories, so the Debian container can download the necessary dependencies. To do so, we only need to add a line to our Dockerfile. Create an apt.conf file with the necessary proxy configuration inside the nessus_scanner directory, then add a line right after the line that copies the deb package:

……
ADD Nessus-7.0.3-debian6_amd64.deb /tmp/Nessus-7.0.3-debian6_amd64.deb
ADD apt.conf /etc/apt/apt.conf
RUN apt-get update -y \
……

Note that this might change depending on the OS you decided to use.
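The apt.conf file itself only needs the proxy directives; a minimal sketch, with a hypothetical proxy address:

Acquire::http::Proxy "http://proxy.example.com:3128";
Acquire::https::Proxy "http://proxy.example.com:3128";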

If you want to use the proxy only during the build and then restore the default configuration, just add a default apt.conf file to your folder and this line before the EXPOSE command:

……
&& rm -r /tmp/Nessus-7.0.3-debian6_amd64.deb
ADD apt.conf.default /etc/apt/apt.conf
EXPOSE 8834
……

Now that we have the Dockerfile, we can build the image using the following command:

docker build -t nessus/nessus-scanner:7.0.3 .

This will create a new image tagged nessus/nessus-scanner:7.0.3.
If you are using a private docker registry, you can tag it using your registry:

docker build -t myprivateregistry:5000/nessus/nessus-scanner:7.0.3 .

myprivateregistry is the IP or name of the server where you are running your Docker registry, usually on port 5000. nessus is the name of the project, nessus-scanner is the name of the repository where the image will be saved, and 7.0.3 is the tag, which can be used to indicate the version of the service running in the container. The last "." tells Docker where to look for the Dockerfile, in this case the current folder. If the Dockerfile is in another path, use that one.

The image will take some time to build. After the image has been built, you can push it to your registry using the docker push command and then pull it on the host where you want to run the container. For more information on pushing and pulling Docker images, you can check the official Docker documentation at https://docs.docker.com

To run the container we use the following command:

docker run -d --name nessus-scanner -p 8834:8834 nessus/nessus-scanner:7.0.3

The above command tells Docker to run the container in detached mode (-d), give it the name nessus-scanner and expose port 8834 on the host, which is mapped to port 8834 of the container. This will start the container and the service will be running on port 8834 of the host. You can connect to serverip:8834 using your web browser and you will see the initial Nessus page.

First we need to create a username/password for the Nessus web console. (This will also be used to connect SecurityCenter to the scanner if you are managing the scanner with SecurityCenter.)

In the next step you need to define how the scanner will be managed. This will depend on your infrastructure: whether you are using Security Center, Tenable.io, or just a simple standalone Nessus scanner.

After that, the scanner will initialize its data for a few seconds. (Here we can note that the first initialization is faster when running the scanner in a container than with the default installation.)

We still don't have any custom configuration in the image.
The way the container is started now, all the data will be stored inside the container and lost when the container is removed.

Having Persistent Data
In order to have persistent data, we first create a Docker volume where the data will be stored:

docker volume create nessus_scanner_var

This will create a new Docker volume, located under /var/lib/docker/volumes.
In order to have the Nessus data stored there, we start the container using this command:

docker run -d --name nessus-scanner \
-v nessus_scanner_var:/opt/nessus/var/nessus \
-p 8834:8834 nessus/nessus-scanner:7.0.3

This will start the container and store the data present in /opt/nessus/var/nessus on the container into the nessus_scanner_var volume. This ensures that the data won't be lost, independently of the container. Note that data present on host volumes or folders has higher priority than the data present in the container.

Custom configuration in the image
Currently we have managed to build and deploy a default Nessus scanner container. This image does not have any custom configuration. To have the image pre-configured before deploying it, we need to do a few simple tasks.

First, configure a local Nessus Scanner.
During the first initialization, the Nessus scanner will require creating a new user and defining how the scanner will be managed. After you start the container and apply the standard configuration that will be used across the infrastructure, you will need to create a new image.
To do so, first find the id of the container using the docker ps command.

Once you have the container id, run the following commands to create the new image and push it to the Docker registry:

docker commit cont_id nessus-scanner-conf

docker tag nessus-scanner-conf \
myprivateregistry:5000/nessus/nessus-scanner-conf:7.0.3

docker push myprivateregistry:5000/nessus/nessus-scanner-conf:7.0.3

This set of Docker commands will create the new image and push it to the Docker registry.
docker commit creates a new image from the container, docker tag tags it using the private registry name, and docker push pushes it to the registry.

To run this new image we can use the same command as with the first one, only changing the image name. This time when the container starts, it will not require any configuration; it will be pre-configured and ready to be used.

Deploying Nessus Scanner using Rancher
At this point we have the built Nessus image and the image with the custom configuration.
Now we need to deploy our containers, manage multiple environments like staging and production, deploy on multiple hosts and set up clusters of containers with a few clicks. This is where Rancher comes in handy.
Rancher is open-source software for managing and running your containers in production. It has nice enterprise-level features, and its GUI makes it easy to manage your infrastructure. To start with Rancher, you can check our Introduction to Rancher here.

After you have Rancher installed, up and running, and have added the hosts where the containers will be deployed, we first configure the Docker registry (the private one, or your hub.docker.com account if the repository you created there is private).
To do so, go to Infrastructure -> Registries.

After clicking on Add Registry, we can choose the type of registry we want to add:

After the registry has been added, Rancher will be able to pull images from it. This means that you don't need to configure the registry on all your hosts, as you would have needed to do if you had deployed the container locally.

Once we have the registry set up, we create a new stack that will contain our services:

Configuring the stack is pretty simple, just add a name and a short description.

Once we have the stack up, we create a new service, which will be our Nessus container.

Here is where we configure how the service will be deployed. We can decide how the service will be scaled, how many containers will be running for it, and on which hosts they will be running. We configure the image that will be used and the ports that will be exposed.

In the Volumes tab, we can configure the mapping of the Volumes.

In the Scheduling tab, we configure on which hosts the containers will be deployed. We can deploy containers on multiple hosts based on the labels we have configured for those hosts, or simply run the container on a specific host.

After we have finished with all the necessary configuration, we can create the service, and the containers will be deployed.

Running the Nessus scanner in containers can greatly reduce configuration and maintenance times and costs. Maintaining a standard configuration is simplified. New scanner deployments are faster and easier to manage. Running the containers alongside the Security Center API can be very efficient, reducing resource utilization.
Also, using Rancher, deployment across wide infrastructures and environments becomes easier to manage.
