The purpose of this project is to educate others who want to break into the world of DevOps, or who just want to bring more automation into their homelab. Plus, it serves as a fun way for me to automate all the things and get better at documentation and code control. I will be taking a break from playing Ark long enough to go over my continuous deployment plan and execution. Ark server configs will be saved to GitHub, along with the Jenkins pipeline and Ansible playbooks needed to test and deploy the Ark server. When a commit is pushed to GitHub, Jenkins will see it and pull in the code. When all tests have passed, Jenkins will trigger an Ansible template through an AWX (the open-source upstream of Ansible Tower) API call, which will pull in any environment variables, build all directories and config files, and finally deploy the whole thing to Docker Swarm with a docker-stack.yml file.
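Under the hood, that Jenkins-to-AWX handoff is just an HTTP POST against the job template's launch endpoint in the AWX API. A rough sketch of the call, with a hypothetical AWX host, template ID, and credentials standing in for the real values:

```shell
# Hypothetical values -- substitute your own AWX host, template ID, and API user.
AWX_HOST="awx.example.com"
TEMPLATE_ID=7

# AWX exposes a launch endpoint per job template:
LAUNCH_URL="http://${AWX_HOST}/api/v2/job_templates/${TEMPLATE_ID}/launch/"
echo "POST ${LAUNCH_URL}"

# The actual call (commented out so this sketch runs anywhere):
# curl -s -u jenkins:password -X POST "$LAUNCH_URL"
```

The Jenkins Ansible Tower plugin wraps this same call; the sketch is just to show there's no magic in the handoff.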

Jenkins to AWX


The host I'm running this on is a very basic install of Ubuntu 18.04 running Docker in Swarm mode. Jenkins and Ansible AWX are both running in Docker and will also soon be deployed in the same manner as I am preparing to deploy the Ark server, with a Jenkinsfile and Ansible playbooks. That should make for some fun problems. Data persistence is accomplished by mounting Docker volumes at stack deployment time. A lot of this server build has been manual, but is slowly being put into Ansible playbooks, like this one, as I have time.

So, from the beginning: a system to run this on. I won't go into too much detail on installing Ubuntu or setting up Docker Swarm, as they have already been documented extensively, beyond my abilities. But once a system is ready and running Docker Swarm, Jenkins and Ansible AWX are ready to be deployed. They are both handled with a docker-stack.yml file and deployed to Docker Swarm. /data/ is the root directory on the system where Docker will store volumes.


Jenkins is being deployed with a pretty standard stack file. I haven't complicated it too much yet, but I do have plans to add a slave or two also running in Docker, like the master. As part of this project, I will be hooking up an old laptop to the Jenkins master to act as a slave and handle Vagrant and VirtualBox for spinning up VMs, as well as Docker for container-driven tests and Docker deployments to Swarm. It will also be a manager in the Swarm cluster and have direct access to all docker stack deploy commands. That slave is not included in this config file, but will be mentioned again later.


version: '3'

services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - '8080:8080'
      - '50000:50000'
    # environment:
    #   JAVA_OPTS: "-Djava.awt.headless=true"
    volumes:
      - /data/jenkins:/var/jenkins_home

This stack is deployed manually, with the following commands:

sudo mkdir -p /data/jenkins
sudo chown -R 1000:1000 /data/jenkins
docker stack deploy -c jenkins-stack.yml jenkins

Follow the logs as it builds...

docker service logs -f jenkins_jenkins

Once it's up and running you'll see that it's written files to the /data/jenkins/ directory.

ls /data/jenkins/


Take note of the config.xml file in that listing! You'll need it when you forget your password and lock yourself out of Jenkins.

Browse to http://your_server_ip:8080/ or http://localhost:8080/ if you're on the box running Swarm.

  • Set up a user and password, then enable security through Jenkins > Manage Jenkins > Configure Global Security > Enable Security, http://localhost:8080/configureSecurity/
  • tick Allow users to sign up
    • untick this after you have created an account to disable further accounts being created
  • tick Logged-in users can do anything
  • untick Allow anonymous read access
  • tick Enable Agent → Master Access Control
  • Apply and Save



Next, bring up the Ansible AWX stack. It's a bit more complicated than the Jenkins stack and runs multiple containers. You'll want to change the passwords; these are only examples. Eventually, I will get this streamlined with environment variables in the stack file that are populated at build time, either by Jenkins as it hands off to AWX, or pulled into AWX from Vault or something similar.
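One wrinkle with that plan: docker stack deploy does not substitute ${VARS} in the stack file the way docker-compose does, so the file has to be rendered first. A minimal sketch of the workaround, using a hypothetical awx-stack.yml.tpl template (envsubst from the gettext package is the usual tool; plain sed works too):

```shell
# Write a tiny stack-file fragment with a placeholder. Normally this template
# would live in the repo -- awx-stack.yml.tpl is a hypothetical name.
printf 'DATABASE_PASSWORD: ${DB_PASS}\n' > awx-stack.yml.tpl

# Render the template: replace ${DB_PASS} with the exported value.
export DB_PASS='awxpass'
sed "s/\${DB_PASS}/${DB_PASS}/" awx-stack.yml.tpl > awx-stack.yml

cat awx-stack.yml
# Then deploy the rendered file:
# docker stack deploy -c awx-stack.yml awx
```

The same trick works for any of the passwords in the stack files below.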


version: '3'

services:
  web:
    image: ansible/awx_web:latest
    depends_on:
      - rabbitmq
      - memcached
      - postgres
    ports:
      - "80:8052"
    hostname: awxweb
    user: root
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        # max_attempts: 3
        window: 60s
    environment:
      SECRET_KEY: awxsecret
      DATABASE_NAME: awx
      DATABASE_USER: awx
      DATABASE_PASSWORD: awxpass
      DATABASE_PORT: 5432
      DATABASE_HOST: postgres
      RABBITMQ_USER: guest
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
      MEMCACHED_HOST: memcached
      MEMCACHED_PORT: 11211
      AWX_ADMIN_USER: admin
      AWX_ADMIN_PASSWORD: password

  task:
    image: ansible/awx_task:latest
    depends_on:
      - rabbitmq
      - memcached
      - web
      - postgres
    hostname: awx
    user: root
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        # max_attempts: 3
        window: 60s
    environment:
      SECRET_KEY: awxsecret
      DATABASE_NAME: awx
      DATABASE_USER: awx
      DATABASE_PASSWORD: awxpass
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      RABBITMQ_USER: guest
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
      MEMCACHED_HOST: memcached
      MEMCACHED_PORT: 11211
      AWX_ADMIN_USER: admin
      AWX_ADMIN_PASSWORD: password

  rabbitmq:
    image: rabbitmq:3
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        # max_attempts: 3
        window: 60s

  memcached:
    image: memcached:alpine
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        # max_attempts: 3
        window: 60s

  postgres:
    image: postgres:9.6
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        # max_attempts: 3
        window: 60s
    volumes:
      - /data/awx:/var/lib/postgresql/data:Z
    environment:
      POSTGRES_USER: awx
      POSTGRES_PASSWORD: awxpass
      POSTGRES_DB: awx
      PGDATA: /var/lib/postgresql/data/pgdata

Bring up the stack with the following:

sudo mkdir -p /data/awx
sudo chown -R 999:999 /data/awx
docker stack deploy -c awx-stack.yml awx

First, you'll want to follow logs from the postgres service as it builds...

docker service logs -f awx_postgres

Then follow awx_web and awx_task as they prep the database and perform any upgrades:

docker service logs -f awx_web
docker service logs -f awx_task

It takes a bit for AWX to build. Give it a good 10-20 minutes before you give up on it.
Browse to http://your_server_ip:80/#/login or http://localhost/#/login if you're on the box and you'll be greeted with the AWX login screen! Huzzah!

The default credentials are admin / password, as set by AWX_ADMIN_USER and AWX_ADMIN_PASSWORD in the stack file.



Create a user for Jenkins to use through the API.


Add that user and password back in Jenkins at Jenkins > Credentials > System > Global credentials


Browse to Jenkins > Manage Jenkins > Ansible Tower and add the newly built Ansible AWX to Jenkins. Give it a name and the URL to AWX, and use the credentials made in the last step. Hit "test" to verify a connection.



Back in AWX, there is more to configure. First up are the credentials of a user on the Docker host with passwordless SSH and sudo access.


Create an inventory containing the Docker Swarm host. I'm calling mine by hostname, shredder. An IP works just as well. I'm populating this inventory by pulling it in as a project from GitHub and configuring AWX to use the inventory.ini file.
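For reference, the inventory file pulled in from the repo can be as small as a single host entry. A minimal sketch along those lines (the group name and connection user here are examples, not the repo's actual values):

```ini
; inventory.ini -- one-host inventory for the Docker Swarm manager
[swarm]
shredder ansible_user=deploy

[swarm:vars]
ansible_become=true
```

AWX is then pointed at this file as the inventory source for the project.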


Create a project to pull in the code from GitHub. This project will pull in this very repository.


Once a project is created and we're pulling in code, a template can be constructed that Jenkins will call through the API. This template uses the inventory and credentials created above, the project being pulled in (docker-ark), and a playbook in that project to call the main task.


This playbook is still very basic, and only does some minor directory prep and config file generation, but it's a great start to test results. Here is what a run looks like through AWX.
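To give a feel for the shape of it, a playbook doing that kind of directory prep and config generation might look like the following. This is a hedged sketch, not the repo's actual playbook: the group name, template name, and file names are placeholders.

```yaml
# deploy-ark.yml -- directory prep and config generation (illustrative names)
- hosts: swarm
  become: true
  tasks:
    - name: Create the Ark data directory
      file:
        path: /data/ark
        state: directory
        owner: "1000"
        group: "1000"

    - name: Render the server config from a template
      template:
        src: GameUserSettings.ini.j2
        dest: /data/ark/GameUserSettings.ini
        owner: "1000"
        group: "1000"
```

Running it through AWX gives the same play recap you'd see from ansible-playbook on the command line, just captured in the job output.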



In order to get Jenkins to fire off this templated playbook, a few plugins need to be installed first.

Browse to Jenkins > Manage Jenkins > Manage Plugins and install any plugins needed.


With the plugins installed, they can be called in the Jenkinsfile. I'm starting with a very basic Pipeline that will grow into more as I build out tests, call Docker builds, test Ansible playbooks with Molecule, etc. During the deploy stage, the ansibleTower plugin is called and populated with the values that were configured above when connecting Jenkins to Ansible AWX. The following two variables should be all that's needed to get started: the Tower server itself and the template to point at.

  towerServer: 'Ansible AWX'
  jobTemplate: 'ark'


#!/usr/bin/env groovy

node('master') {

    try {

        stage('build') {
            // Clean workspace
            cleanWs()
            // Checkout the app at the given commit sha from the webhook
            checkout scm
        }

        stage('test') {
            // Run any testing suites
            sh "echo 'WE ARE TESTING'"
        }

        stage('deploy') {
            sh "echo 'WE ARE DEPLOYING'"
            wrap([$class: 'AnsiColorBuildWrapper', colorMapName: "xterm"]) {
                ansibleTower(
                    towerServer: 'Ansible AWX',
                    jobTemplate: 'ark',
                    importTowerLogs: true,
                    inventory: '',
                    jobTags: '',
                    limit: '',
                    removeColor: false,
                    verbose: true,
                    credential: '',
                    extraVars: ''
                )
            }
        }

    } catch (error) {
        throw error

    } finally {
        // Any cleanup operations needed, whether we hit an error or not
    }
}

Be sure to add the Jenkins user to the Permissions tab on the template itself, or an error will come back along the lines of 'template does not exist', because the user accessing the AWX API cannot see that template unless they're an admin. Give it at least Execute capabilities.
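A quick way to confirm the permissions took is to list job templates as the Jenkins API user; if the template shows up in the results, the plugin will be able to find it too. Hypothetical host and credentials again:

```shell
# Hypothetical values -- substitute your own AWX host and API user.
AWX_HOST="awx.example.com"
API_USER="jenkins"

# The job templates list only shows templates this user can see:
TEMPLATES_URL="http://${AWX_HOST}/api/v2/job_templates/"
echo "GET ${TEMPLATES_URL} as ${API_USER}"

# The actual check (commented out so this sketch runs anywhere):
# curl -s -u "${API_USER}:password" "$TEMPLATES_URL" | grep -o '"name": *"ark"'
```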



Configure a webhook to trigger a build in Jenkins for every push to GitHub. This is accomplished by browsing to the docker-ark repo > Settings > Integrations and Services > Add Service > Jenkins (GitHub plugin). The URL will be the public IP of your Jenkins Master, on port 8080.


I created a NAT rule to forward traffic hitting port 8080 on the public WAN IP and redirect it to the Docker host's private IP, so it hits the Jenkins Master at port 8080.


If this is something you'd like to try to keep internal and private to your homelab, you can easily bring up a Bitbucket server running in Docker Swarm, which will give you much of the same functionality as GitHub and remain free for up to 5 users. Here is a stack file to bring up a Bitbucket server in Docker Swarm. Deploy it and log in with your Atlassian account or create a new one.


version: '3'

services:
  bitbucket:
    image: atlassian/bitbucket-server
    ports:
      - '7990:7990'
      - '7999:7999'
    volumes:
      - /data/bitbucket:/var/atlassian/application-data/bitbucket

Deploy the stack with:

sudo mkdir -p /data/bitbucket
sudo chown -R daemon:daemon /data/bitbucket
docker stack deploy -c bitbucket-stack.yml bb

Verify it's running at http://localhost:7990/login



And with that, ladies and gentlemen, a push to GitHub should trigger a build in Jenkins, which will then hit the AWX API and deploy a playbook to the Docker Swarm host! As I've been writing and pushing files up in this project, I've been watching the builds go by, and it's a lot of fun! The possibilities seem endless with a pipeline like this. It will make a great template for deploying more things to my Docker Swarm cluster and building out the homelab.

Jenkins Pipeline

Ansible Playbook


Finally, to the point of deploying and maintaining the Ark server on Docker Swarm! The main reason I like this game is that it runs natively on Linux when installed through Steam. It's been a great time-waster lately and has me excited about gaming again. I'm still playing solo on the server I'm hosting, but it is publicly available for others to join. Long term plans are to get more people playing with me and stress testing the server it runs on. I'm feeling more comfortable in the back end of the Ark server as far as config files and whatnot go. I've got it backing up every 15 minutes and have the 'restore from backup' down to a science now after multiple catastrophic disasters and losing all my dinos to Alpha Raptor attacks! Fuckers! Yes, it might be considered cheating... but I'm the only one keeping me accountable right now. I'd have to take that into consideration if more people start playing on the server and I can no longer roll it back as I wish. Another thing I've been trying to get working is mods, but no luck with that yet. I have been able to tweak harvest and taming multipliers, though, and make the game feel a bit less grindy. I want to spin up a couple more of the expansions as Docker services running alongside the Island server currently being hosted.

To deploy Ark to Docker Swarm, I started with the TuRz4m Ark Docker repo, upgraded its docker-compose.yml file to Docker Compose v3, and renamed it docker-stack.yml.

You can see there are some subtle differences, like the formatting of the environment variables.

sdiff docker-compose.yml docker-stack.yml

ark:                            | version: '3'
  container_name: ark           |
  image: turzam/ark             | services:
  environment:                  |   island:
    - SESSIONNAME=Ark Docker    |     image: turzam/ark
    - SERVERMAP=TheIsland       |     environment:
    - SERVERPASSWORD=""         |       SESSIONNAME: Ark Docker
    - ADMINPASSWORD=""          |       SERVERMAP: TheIsland
    - BACKUPONSTART=1           |       SERVERPASSWORD: ""
    - UPDATEONSTART=1           |       ADMINPASSWORD: ""
    - TZ=Europe/Paris           |       BACKUPONSTART: 1
    - GID=1000                  |       UPDATEONSTART: 1
    - UID=1000                  |       AUTOBACKUP: 15
  volumes:                      |       TZ: US/Pacific
    - /data/ark:/ark            |       GID: 1000
  ports:                        |       UID: 1000
   - 7778:7778/udp              |     volumes:
   - 7778:7778                  |       - /data/ark:/ark
   - 27015:27015/udp            |     ports:
   - 27015:27015                |       - '7778:7778/udp'
   - 32330:32330                |       - '27015:27015/udp'
                                >       - '32330:32330'

For a manual deployment, this one works just as easily as AWX and Jenkins did. It can be deployed with the following.

sudo mkdir -p /data/ark
sudo chown -R 1000:1000 /data/ark
docker stack deploy -c docker-stack.yml ark

But, I want Jenkins and Ansible to handle this deployment for me. While putting this all together, here is the commit that finally made Ansible deploy this to the Swarm! Now I'm getting somewhere!

Here is the Jenkins run from that commit.

And the Ansible job that was kicked off.

Log into the server and check it out.

docker ps to list running docker processes

docker ps

CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS                PORTS                                                                            NAMES
015cc0306a64        turzam/ark:latest                                     "/home/steam/"    43 minutes ago      Up 43 minutes         7778/tcp, 7778/udp, 27015/tcp, 32330/tcp, 27015/udp                              ark_ark.1.b5ifdto69xfdzw9p0wy800dhu

docker exec -it container_id bash to open a shell

docker exec -it 015cc0306a64 bash

root@015cc0306a64:/ark#

Verify that Ark is running with arkmanager commands

arkmanager status

Running command 'status' for instance 'main'
 Server running:   Yes 
 Server listening:   Yes 
Server Name: Ark Docker - (v279.275)
Players: 0 / 70
 Server online:   No 
 Server version:   2762965 

Then, sign into Steam and test that the server is accessible.

Navigate to View > Servers


Next, click on Favorites > ADD A SERVER. Put in the private IP of the Docker Swarm host, then click FIND GAMES AT THIS ADDRESS, or specify the port if you know it. Finally, select the Docker Ark server and click ADD SELECTED GAME SERVER TO FAVORITES.


Start the game and find the server under the favorites tab.


To open this up to the world, add a couple of NAT rules to a public facing IP.

Port 7778


And Port 27015





Things I plan to do next:

  • Because I'm deploying the Jenkins Master to Docker Swarm,
    • I'm unable to build any sort of virtualisation on the master itself.
    • I need a separate hardware box to act as a Jenkins slave for this.
      • Using an old laptop with a base install of Ubuntu 18.04
      • Basic packages installed.
        • Ansible
        • Vagrant
        • Virtualbox
      • For running all tests on the playbooks before sending it to production.
      • I'm going with Ubuntu Desktop for when it comes time to test a VM that needs a GUI for something.
    • I'm currently able to run tests in molecule locally in the workstation.
      • I want that functionality in Jenkins as well.
      • A slave will be the easiest way to accomplish this.
  • I still need to figure out a way to pass the environment variable to the stack on deployment...
    • Set variables in AWX inventory after it's already pulled from Github
    • Set ENV vars in Jenkins and pass them in during the API call to AWX
  • Only deploy master branch, but test other branches
  • Create accessibility between Jenkins and the arkmanager command available in the container
  • Generate a WARNING in Ark, when the server is going down for maintenance
    • Trigger a 15 minute timer and have Jenkins sleep for 15 minutes before doing anything
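On the environment variable question, one option worth noting: the AWX launch endpoint accepts an extra_vars payload, so Jenkins could post the values at launch time. A sketch of the request body with made-up variable names:

```shell
# Build the JSON body Jenkins would POST to the template's launch endpoint.
# The variable name "sessionname" is an example, not a fixed AWX field.
ARK_SESSION_NAME="Ark Docker"
EXTRA_VARS=$(printf '{"extra_vars": {"sessionname": "%s"}}' "$ARK_SESSION_NAME")
echo "$EXTRA_VARS"

# The actual call (commented out so this sketch runs anywhere):
# curl -s -u jenkins:password -H 'Content-Type: application/json' \
#      -X POST -d "$EXTRA_VARS" http://awx.example.com/api/v2/job_templates/7/launch/
```

The same values could instead go in the extraVars parameter of the ansibleTower step in the Jenkinsfile; the template has to have "Prompt on launch" enabled for extra variables either way.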


Girlfriend was kind enough to take the time today and learn some git commands and help me test the Jenkins webhook! She cloned this repo locally, created a branch, made edits to this README, and created a pull request full of spelling fixes and edits. When she pushed the branch up to GitHub, it kicked off Jenkins and created a new branch in the dashboard as well.


With that, it ran through the usual basic Pipeline I have configured and also kicked off the Ansible AWX run, which deployed the Ark server to Docker. Pretty cool! I will definitely need to put some checks in place to only run tests on branches that are not master, don't deploy to production when a collaborator is only pushing edits to a README, etc...

When the pull request was merged, Jenkins was triggered again for a build and deploy!