Visualisation of Docker and Kubernetes

With Weave Scope you get monitoring, visualisation and management for Docker and Kubernetes in your browser within seconds. I will show a simple example using Docker-Selenium.

Preconditions

Let's go…

# create new Docker VM (local)
$ docker-machine create -d virtualbox WeaveScope

# pointing shell to WeaveScope VM (local)
$ eval $(docker-machine env WeaveScope)

# SSH into WeaveScope VM (local -> VM)
$ docker-machine ssh WeaveScope

# become root (VM)
$ sudo su -

# download scope (VM)
$ wget -O /usr/local/bin/scope https://git.io/scope

# change access rights of scope (VM)
$ chmod a+x /usr/local/bin/scope

# launch scope (VM)
# Do not forget the shown URL!!!
$ scope launch

# exit root and ssh (VM -> local)
$ exit

# create Selenium Hub (local)
$ docker run -d -p 4444:4444 --name selenium-hub selenium/hub:2.53.0

# create Selenium Chrome Node (local)
$ docker run -d --link selenium-hub:hub selenium/node-chrome:2.53.0

# create Selenium Firefox Node (local)
$ docker run -d --link selenium-hub:hub selenium/node-firefox:2.53.0

# show running containers (optional)
$ docker ps -a
...
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS                    NAMES
a3e7b11c5a5f        selenium/node-firefox:2.53.0   "/opt/bin/entry_point"   9 seconds ago       Up 8 seconds                                 cocky_wing
699bf05681e8        selenium/node-chrome:2.53.0    "/opt/bin/entry_point"   42 seconds ago      Up 42 seconds                                distracted_darwin
bbc5f545261b        selenium/hub:2.53.0            "/opt/bin/entry_point"   2 minutes ago       Up 2 minutes        0.0.0.0:4444->4444/tcp   selenium-hub
9fe4e406fb50        weaveworks/scope:0.16.0        "/home/weave/entrypoi"   5 minutes ago       Up 5 minutes                                 weavescope

That’s it! Now start your browser and open the URL.
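
If you missed the URL shown by "scope launch", you can also derive it from the VM's IP; by default the Scope UI listens on port 4040:

# get the IP of the WeaveScope VM (local)
$ docker-machine ip WeaveScope

# the Scope UI should then be reachable at http://<IP>:4040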

weavescope browser

Real-time log monitoring

During automated test runs you may need to watch several different log files. With log.io you can simply monitor log files in your browser! This tutorial shows how easy it is.

Preconditions

Preparation

Create a new project with the following structure and files.

# create new project LogIO
$ mkdir -p ~/Projects/LogIO/data

# go into new Project
$ cd ~/Projects/LogIO

# create needed files in data
$ touch data/{harvester.conf,log_server.conf,web_server.conf,log.io}

# create Vagrantfile
$ touch Vagrantfile

# show files
$ tree .
.
├── Vagrantfile
└── data
    ├── harvester.conf
    ├── log.io
    ├── log_server.conf
    └── web_server.conf

File contents

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.require_version ">= 1.8.1"
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.box = "centos/7"
  config.vm.network "public_network"
  config.vm.synced_folder "./data", "/vagrant", disabled: false

  config.vm.provider "virtualbox" do |vb|
    vb.name = "LogIO"
    vb.cpus = "2"
    vb.memory = "2048"
    vb.gui = false
  end

  config.vm.provision "shell", inline: <<-SHELL
    # install needed packages
    sudo yum update -y && sudo yum install -y epel-release
    sudo yum install -y vim net-tools npm nodejs
    sudo yum clean all
    # install log.io for user <root>
    sudo npm install -g log.io --user "root"
    # provide custom files for user <root>
    sudo rm -f /root/.log.io/*
    sudo cp /vagrant/*.conf /root/.log.io/
    sudo chown root:root /root/.log.io/*.conf
    # provide init.d for log.io
    sudo cp /vagrant/log.io /usr/local/bin/log.io
    sudo chmod +x /usr/local/bin/log.io
    sudo chown root:root /usr/local/bin/log.io
  SHELL

end

Configure your Harvesters…

exports.config = {
  nodeName: "application_server",
  logStreams: {
    apache: [
      "/var/log/apache2/access.log",
      "/var/log/apache2/error.log"
    ]
  },
  server: {
    // connect to log.io server
    host: '127.0.0.1',
    port: 28777
  }
}

Configure your log server…

exports.config = {
  host: '0.0.0.0',
  port: 28777
}

Configure your web server…

exports.config = {
  host: '0.0.0.0',
  port: 28778,

  /*
  // Enable HTTP Basic Authentication
  auth: {
    user: "admin",
    pass: "1234"
  },
  */

  /*
  // Enable HTTPS/SSL
  ssl: {
    key: '/path/to/privatekey.pem',
    cert: '/path/to/certificate.pem'
  },
  */

  /*
  // Restrict access to websocket (socket.io)
  // Uses socket.io 'origins' syntax
  restrictSocket: '*:*',
  */

  /*
  // Restrict access to http server (express)
  restrictHTTP: [
    "192.168.29.39",
    "10.0.*"
  ]
  */

}

Create simple init script…

#!/bin/bash

start() {
  echo "Starting log.io process..."
  /usr/bin/log.io-server &
  /usr/bin/log.io-harvester &
}

stop() {
  echo "Stopping io-log process..."
  pkill node
}

status() {
  echo "Status io-log process..."
  netstat -tlp | grep node
}

case "$1" in
  start) start;;
  stop) stop;;
  status) status;;
  *) echo "Usage: start|stop|status";;
esac

Usage

# start VM via vagrant
$ vagrant up

# SSH into VM
$ vagrant ssh

# become root
$ sudo su -

# start log.io
$ log.io start

# get ip
$ ip addr

Now open your browser at http://<ip>:28778.
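
To see something in the web UI, you can write a few test lines into one of the watched log files; the paths below come from harvester.conf above and may first have to be created on this CentOS box:

# create the watched log file (VM, as root)
$ mkdir -p /var/log/apache2 && touch /var/log/apache2/access.log

# append a test entry
$ echo "$(date) test entry" >> /var/log/apache2/access.log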

Apache Jmeter and Docker

Okay, this time we will create a Docker-Jmeter test environment. We create some simple (and tiny) Docker images/containers which can be reused for future test runs; here we follow the DRY principle. In the end, we want to be able to run the following:

$ java -jar apache-jmeter-3.0/bin/ApacheJMeter.jar -t /tutorial/jmx/simple.jmx -n -l /tutorial/results/log.jtl

Preconditions

Preparation

# create all directories
$ mkdir -p /tutorial/{java,jmeter,jmx,results}

# create Dockerfile for JAVA
$ cd /tutorial/java && vim Dockerfile

# build JAVA image
$ docker build -t alpine/java .

# create Dockerfile for Jmeter
$ cd /tutorial/jmeter && vim Dockerfile

# build Jmeter image
$ docker build -t alpine/jmeter .

# list Docker images (optional)
$ docker images
...
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
alpine/jmeter       latest              0864228be6ef        15 seconds ago      150.2 MB
alpine/java         latest              068005f45866        5 minutes ago       102.8 MB
alpine              latest              f70c828098f5        5 days ago          4.795 MB

Dockerfile JAVA

FROM alpine

RUN apk --update add openjdk8-jre-base

ENTRYPOINT ["/usr/bin/java"]

Dockerfile Jmeter

FROM alpine

RUN apk --update add wget
RUN wget http://mirror.switch.ch/mirror/apache/dist//jmeter/binaries/apache-jmeter-3.0.tgz
RUN tar zxvf apache-jmeter-3.0.tgz
RUN apk del wget
RUN rm -f apache-jmeter-3.0.tgz
RUN rm -fr /apache-jmeter-3.0/docs

VOLUME /apache-jmeter-3.0

CMD ["/bin/true"]

JMX file

Create a new JMX file (or copy an existing one) into the folder “/tutorial/jmx” with the name “simple.jmx”.

# show JMX (optional)
$ ls /tutorial/jmx
...
simple.jmx

Create Jmeter container

# create jmeter container
$ docker create --name jmeter alpine/jmeter

# list containers (optional)
$ docker ps -a
...
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS      PORTS       NAMES
0916ff8f25bb        alpine/jmeter       "/bin/true"         7 seconds ago       Created                 jmeter

Note: The container was created but is not running! It only serves as a data-volume container for the JMeter installation (used via --volumes-from below).

Test execution

# test run JAVA container (optional)
$ docker run -ti --rm --volumes-from jmeter alpine/java -jar /apache-jmeter-3.0/bin/ApacheJMeter.jar --version

# run Jmeter (without report)
$ docker run -ti --rm -v /tutorial/jmx:/jmx --volumes-from jmeter alpine/java -jar /apache-jmeter-3.0/bin/ApacheJMeter.jar -t /jmx/simple.jmx -n

# run Jmeter (with report)
$ docker run -ti --rm -v /tutorial/jmx:/jmx -v /tutorial/results:/results --volumes-from jmeter alpine/java -jar /apache-jmeter-3.0/bin/ApacheJMeter.jar -t /jmx/simple.jmx -n -l /results/log.jtl

# show report
$ cat /tutorial/results/log.jtl

🙂

Python, Selenium Grid and Docker

With Docker you can quickly and easily install, configure and use Selenium Grid. This tutorial shows the respective steps that you need as a software tester (or developer). Instead of Python you can also use any other language that is supported by Selenium.

Preconditions

Preparation of files

# create new project
$ mkdir -p ~/Projects/SeleniumTutorial && cd ~/Projects/SeleniumTutorial

# create docker-compose.yml (version 1)
$ vim v1-docker-compose.yml

# or create docker-compose.yml (version 2)
$ vim v2-docker-compose.yml

# create python example.py
$ vim example.py

Note: You only need one of the two docker-compose.yml versions!

Version: 1

---
selenium_hub:
  image: selenium/hub
  ports:
    - 4444:4444
node_1:
  image: selenium/node-chrome
  links:
    - selenium_hub:hub
node_2:
  image: selenium/node-firefox
  links:
    - selenium_hub:hub

Version: 2

---
version: '2'
services:
  selenium_hub:
    image: selenium/hub
    ports:
      - 4444:4444
  node_1:
    image: selenium/node-chrome
    depends_on:
      - selenium_hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium_hub
  node_2:
    image: selenium/node-firefox
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium_hub
    depends_on:
      - selenium_hub

example.py

import os
import datetime
import time
import unittest
from selenium import webdriver


class Example(unittest.TestCase):

    def setUp(self):

        self.driver = webdriver.Remote(
            command_executor='http://192.168.99.100:4444/wd/hub',
            desired_capabilities={
                'browserName': 'firefox',
                'javascriptEnabled': True
            }
        )

        self.driver.get('http://softwaretester.info/')

    def test_something(self):

        dt_format = '%Y%m%d_%H%M%S'
        cdt = datetime.datetime.fromtimestamp(time.time()).strftime(dt_format)
        current_location = os.getcwd()
        img_folder = current_location + '/images/'

        if not os.path.exists(img_folder):
            os.mkdir(img_folder)

        picture = img_folder + cdt + '.png'
        self.driver.save_screenshot(picture)

    def tearDown(self):

        self.driver.quit()


if __name__ == "__main__":

    unittest.main(verbosity=1)

Create environment

# create new VM
$ docker-machine create -d virtualbox Grid

# pointing shell
$ eval $(docker-machine env Grid)

# show status (optional)
$ docker-machine ls
...
NAME   ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
Grid   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.11.1 

# run docker-compose (Version: 1)
$ docker-compose -f v1-docker-compose.yml up -d

# run docker-compose (Version: 2)
$ docker-compose -f v2-docker-compose.yml up -d

# show status (Version: 1)
$ docker-compose -f v1-docker-compose.yml ps
...
             Name                         Command           State           Ports          
------------------------------------------------------------------------------------------
seleniumtutorial_node_1_1         /opt/bin/entry_point.sh   Up                           
seleniumtutorial_node_2_1         /opt/bin/entry_point.sh   Up                           
seleniumtutorial_selenium_hub_1   /opt/bin/entry_point.sh   Up      0.0.0.0:4444->4444/tcp

# show status (Version: 2)
$ docker-compose -f v2-docker-compose.yml ps
...
             Name                         Command           State           Ports          
------------------------------------------------------------------------------------------
seleniumtutorial_node_1_1         /opt/bin/entry_point.sh   Up                           
seleniumtutorial_node_2_1         /opt/bin/entry_point.sh   Up                           
seleniumtutorial_selenium_hub_1   /opt/bin/entry_point.sh   Up      0.0.0.0:4444->4444/tcp

Open Browser

Selenium Grid Console
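
The grid console shown above is reachable via the IP of the Grid VM and the mapped hub port, for example:

# get the IP of the Grid VM
$ docker-machine ip Grid

# the grid console should then be reachable at http://<IP>:4444/grid/console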

Run Python script

# run python selenium script
$ python -B ~/Projects/SeleniumTutorial/example.py

Note: Via browserName (example.py) you can choose the respective browser (firefox or chrome)!

Note: Via docker-compose scale you can add/remove node instances!

# create 2 instances (Version: 1)
$ docker-compose -f v1-docker-compose.yml scale node_1=2

# create 3 instances (Version: 2)
$ docker-compose -f v2-docker-compose.yml scale node_2=3
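
After scaling, you can verify the additional node containers; they should also show up in the grid console:

# show the scaled containers (Version: 1)
$ docker-compose -f v1-docker-compose.yml ps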

docker-compose and Jenkins

In this tutorial I show an example of how to install Jenkins (version 2.0) via docker-compose (and docker-machine).

Preconditions

Preparation

# create example directories
$ mkdir -p ~/Projects/Example/Jenkins_HOME && cd ~/Projects/Example

# create new and edit compose files
$ vim docker-compose.yml
---
version: '2'
services:
  jenkins:
    image: jenkins:2.0
    container_name: jenkins
    restart: always
    ports:
      - 8080:8080
    volumes:
      - ./Jenkins_HOME:/var/jenkins_home

Build and run

# create new VM
$ docker-machine create -d virtualbox --virtualbox-memory "2048" example-vm

# point shell
$ eval $(docker-machine env example-vm)

# show current state
$ docker-machine ls
...
NAME         ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
example-vm   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.11.0 

# run docker-compose
$ docker-compose up -d

# show state
$ docker-compose ps
...
Name                Command               State                 Ports               
------------------------------------------------------------------------------------
jenkins   /bin/tini -- /usr/local/bi ...   Up      50000/tcp, 0.0.0.0:8080->8080/tcp

# get administrator password
$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
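
To open Jenkins in the browser, you need the IP of the VM, for example:

# get the IP of the VM (Jenkins should then be reachable at http://<IP>:8080)
$ docker-machine ip example-vm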

Run Browser

docker jenkins container

Vagrant and YAML

Ruby's stdlib provides a YAML module for data serialization. By using this module, we can maintain a single Vagrantfile which reads its configuration from YAML files. If you are thinking of build servers, only YAML files are then needed to generate complete environments.

Preconditions

Preparation

# create new directory
$ mkdir ~/Projects/Tutorial && cd ~/Projects/Tutorial

# create new files
$ touch Vagrantfile && touch server-config.yml

# list available BaseBoxes (optional)
$ vagrant box list

# edit Vagrantfile
$ vim Vagrantfile

# edit server-config.yml
$ vim server-config.yml

Vagrantfile

# -*- mode: ruby -*-

require 'yaml'
servers = YAML.load_file('server-config.yml')
API_VERSION = "2"

Vagrant.configure(API_VERSION) do |config|

  servers.each do |server|

    config.vm.define server["name"] do |machine|

      machine.vm.box = server["box"]
      machine.vm.network :forwarded_port, guest: 22, host: server["ssh"], id: 'ssh'

      machine.vm.provider :virtualbox do |vb|
        vb.name = server["name"]
        vb.memory = server["memory"]
        vb.cpus = server["cpus"]
      end

    end

  end

end

server-config.yml

---
- name: box_centos7_a
  box: lupin/centos7
  ssh: 2221
  memory: 1024
  cpus: 2
- name: box_centos7_b
  box: lupin/centos7
  ssh: 2222
  memory: 1024
  cpus: 2

Usage

# start run
$ vagrant up

# check status
$ vagrant status

# SSH example
$ vagrant ssh [name]
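
For example, with the machine names defined in server-config.yml above:

# SSH into the first box
$ vagrant ssh box_centos7_a

# stop all machines
$ vagrant halt

# remove all machines
$ vagrant destroy -f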

Deploy with Vagrant on KVM/libvirt

In this tutorial I show how to extend Vagrant to convert BaseBoxes and deploy them to KVM/libvirt.

Preconditions

  • Vagrant installed (min. version 1.5)

Install Vagrant-Mutate and Vagrant-Libvirt

# Ubuntu, Debian etc.
$ apt-get install qemu-utils libvirt-dev libxslt-dev libxml2-dev zlib1g-dev ruby-dev

# CentOS, Fedora, Red Hat etc.
$ yum install qemu-img libvirt-devel ruby-libvirt ruby-devel libxslt-devel libxml2-devel libguestfs-tools-c

# install Vagrant-Mutate
$ vagrant plugin install vagrant-mutate

# install Vagrant-libvirt
$ vagrant plugin install vagrant-libvirt

Convert existing VirtualBox BaseBox

# Syntax
$ vagrant mutate [box-name | url] [target provider]

# Example for libvirt
$ vagrant mutate lupin/centos7 libvirt

# Show boxes
$ vagrant box list

Supported conversions by Vagrant-mutate

  • VirtualBox to KVM
  • VirtualBox to libvirt
  • libvirt to KVM
  • KVM to libvirt

Vagrantfile example

# -*- mode: ruby -*-
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'

Vagrant.configure("2") do |config|

  config.vm.provider :libvirt do |libvirt|
    libvirt.host = '<target>'
    libvirt.username = '<user>'
    libvirt.id_ssh_key_file = '<key>'
    libvirt.connect_via_ssh = true
  end

  config.vm.define :my_vm do |machine|

    machine.vm.box = "trusty64"
    machine.vm.network :public_network, :dev => "br0", :mode => 'bridge'

    machine.vm.provider :libvirt do |setting|
      setting.memory = 1024
      setting.cpus = 1
      setting.random_hostname = true
    end

  end

end

Note: Read the documentation; many more settings are available!

Usage

Common Vagrant commands like up, destroy, suspend, resume, halt and ssh are available.
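
A minimal usage sketch, assuming the Vagrantfile example above:

# start the machine on the libvirt provider
$ vagrant up --provider=libvirt

# SSH into the machine
$ vagrant ssh my_vm

# remove the machine again
$ vagrant destroy -f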

Create desktop environments on the fly

In this tutorial we will create desktop environments on the fly via Docker. These environments can be used for development and/or testing purposes. For example, you could expand them with Selenium Grid nodes or provide manual testers with everything they need.

Preconditions

Steps

# create new VM
$ docker-machine create -d virtualbox xserver

# ssh into VM
$ docker-machine ssh xserver

# create Dockerfile
$ vi Dockerfile
FROM centos:centos7

MAINTAINER Lupin3000

RUN yum update -y
RUN yum install -y epel-release
RUN yum install -y x2goserver x2goserver-xsession
RUN yum groupinstall -y Xfce
RUN yum install -y firefox

RUN /usr/bin/ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
RUN /usr/bin/ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N ''
RUN /usr/bin/ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''

RUN adduser testuser
RUN echo 'testuser:test123' | chpasswd
RUN echo 'root:test123' | chpasswd

EXPOSE 22

CMD ["/usr/sbin/sshd", "-D"]


# build Docker image from Dockerfile
$ docker build -t centos7/xserver .

# run Docker container from image
$ docker run -p 2222:22 -d --name centos7-xserver centos7/xserver
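
Before connecting with the x2go client, you can optionally check that the SSH daemon inside the container is reachable (assuming the default docker-machine IP 192.168.99.100 and the mapped port from above):

# optional: verify SSH access (log in as testuser)
$ ssh -p 2222 testuser@192.168.99.100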

Connect with x2go client

The following example shows the client configuration. The important values are the host (192.168.99.100), the port (2222) and the session type (XFCE).

x2go client settings

Now it’s up to you to add more users, tools etc. – or to integrate everything into a build process.
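
For example, adding another user could look like this (user name and password are just placeholders):

# add another user inside the running container
$ docker exec centos7-xserver adduser tester2
$ docker exec centos7-xserver /bin/bash -c "echo 'tester2:test123' | chpasswd"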