Steffen Lorenz (softwaretester.info). I'm a test enthusiast. I love tricky tasks, automation and new technologies. I'm familiar with Windows, Linux and Mac OS X. The only problem is that I spend too much time in front of my screen. ;)
Okay… the pipeline already has the two steps "Build" and "Deploy" running, but the last step, "Test", is missing. In this part I will show a simple example with Python, Selenium and Docker (standalone-chrome) for the test step.
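To make the idea concrete before we start: the test step boils down to starting a selenium/standalone-chrome container and pointing the Python tests at it as a Remote WebDriver. The commands below are only a sketch; the container name, the port and the script name are my assumptions, not the exact files from this part.

```shell
# start a standalone Chrome container (exposes the WebDriver on port 4444)
docker run -d --name chrome -p 4444:4444 selenium/standalone-chrome

# a Python/Selenium script would then connect to http://localhost:4444/wd/hub
python3 test_example.py

# clean up the container afterwards
docker rm -f chrome
```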
Create a very small AWS ECS cluster in the region "Frankfurt" (eu-central-1). To do so, open Amazon ECS > Clusters and press the button "Create Cluster".
Select the template "EC2 Linux + Networking" and continue to the next step.
In the section "Configure cluster", enter a name like "ExampleCluster".
In the section "Instance configuration", select "On-Demand Instance", "t2.micro", "1", "22" and "None – unable to SSH".
In the section "Networking" you have to be careful now, as your values will differ from mine! Under VPC, select the same value as for the EC2 Jenkins instance (I selected the default VPC). Now you can choose one of the subnets. We created the security group together with the EC2 Jenkins instance, so select "ExampleSecurityGroup" here.
Okay, press the button "Create" and wait until the cluster is created. This can take a while, so please be patient.
AWS ECS Task Definition
The cluster is running, so the "Task Definition" can be created next. Press the button "Create new Task Definition".
Select "EC2" on the launch type compatibility page and press the button "Next step".
In the section "Configure task and container definitions", enter "ExampleTask" in the input field "Task Definition Name" and select "<default>" for "Network Mode".
In the section "Container Definition", press the button "Add Container". A new panel will slide in. Enter "ExampleContainer" as the "Container name" and, under "Image", add your latest image version from ECR (my latest is 24). Set "128" for "Memory Limits (MiB)" and "80:80" for "Port mappings", then press the button "Add".
You are done with your task definition configuration; scroll down and press the button "Create".
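Behind the console wizard, the task definition created above corresponds roughly to the following JSON fragment. The image tag 24 matches my latest ECR version from the steps above; "<ECR URI>" is the placeholder for your own repository URI.

```json
{
  "family": "ExampleTask",
  "containerDefinitions": [
    {
      "name": "ExampleContainer",
      "image": "<ECR URI>:24",
      "memory": 128,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}
```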
Before we can go through the next steps, we need to adjust the group policy for “PipelineExampleGroup”. You must add the “AmazonECS_FullAccess” policy. _For our example this is okay, but never use this policy in production!_
Run task on ECS cluster (via Jenkins)
Now you only need to modify two files in your repository. Replace the contents of "deploy.sh" and the "Jenkinsfile" with the following.
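As a rough sketch of what the new "deploy.sh" has to do: it runs the registered task definition on the cluster we just created. The cluster and task names come from the steps above; everything else in this sketch is an assumption, not the exact file from this tutorial.

```shell
#!/bin/bash
# sketch: start the registered task definition on the ExampleCluster
aws ecs run-task \
  --cluster ExampleCluster \
  --task-definition ExampleTask \
  --count 1 \
  --region eu-central-1
```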
This tutorial series should enable you to create your own pipelines with Jenkins on AWS. We will therefore cover all the needed basics of AWS IAM, EC2, ECR and ECS. Some of our configurations are recommended for learning purposes only; don't use them in production! Why? Because these lessons are for people who are just starting out on these topics, and I try to make all steps and configurations as easy as possible without focusing on security. In this part we will create the environment and set up the "build step".
AWS account (e.g. free tier)
Git account (e.g. GitLab, Bitbucket, GitHub, etc.)
The first preparation is done in the AWS IAM Management Console. Here you create and configure a new group. The benefit of this group is that you can easily reconfigure the policies for all assigned users at any time. Please name the group "PipelineExampleGroup".
Now search for the EC2 Container Registry policies and enable the checkbox for "AmazonEC2ContainerRegistryPowerUser". For our example this policy is enough, but please don't do that in production!
After the group is created, a user needs to be assigned to it. Name the user "PipelineExampleUser" and enable the checkbox "Programmatic access" for this user.
Assign the user to the group.
Before you finish the process, please choose Download .csv and then save the file to a safe location.
AWS Jenkins EC2 Instance
Now you can launch the EC2 instance. Do this in the region "Frankfurt" (eu-central-1). Of course you can choose any other region, but please remember your choice later. In the very first step, select the template "Amazon Linux 2 AMI (HVM), SSD Volume Type".
The instance type "t2.micro" is enough for our example. For production you will need something else, depending on your needs.
Now you need to be a little careful. In the instance details step, select "Enable" for "Auto-assign Public IP" and "Stop" for "Shutdown Behavior". The defaults should be fine for all other values. I selected my default VPC and "No preference…" for the subnet.
15 GB of disk space is fine. For production you would need to estimate differently.
A tag makes it easier to identify the instance later in the console view. Enter "Name" for "Key" and "Jenkins" for "Value".
Create a new security group with the name "ExampleSecurityGroup" and allow ports 22, 80 and 8080 (IPv4 only). You can change this configuration at any time later. In a production environment you should use other ports (like 443) and IP restrictions.
Create a new key pair with name “ExampleKeyPair”. Don’t forget to save the key (“Download Key Pair”) and press “Launch Instances”!
Install and run Jenkins
The EC2 instance is running, and you can connect via SSH to perform all needed installations and configurations. Attention: your public IP/DNS will be different (and will change after every stop/start); via the button "Connect" you can easily figure out your connection values. I will just use the placeholder "<EC2 IP|DNS>" in my description.
# move SSH keys (mine are downloaded under Downloads)
Note: I have a space after "etc" because of the security settings of my hosting provider.
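For orientation, connecting and installing Jenkins on Amazon Linux 2 usually looks like the following commands. The key file name matches this tutorial; the package names and repository URLs are the usual ones at the time of writing, but please check them against the official Jenkins installation documentation.

```shell
# restrict key permissions and connect to the instance
chmod 400 ~/Downloads/ExampleKeyPair.pem
ssh -i ~/Downloads/ExampleKeyPair.pem ec2-user@<EC2 IP|DNS>

# install Java, Docker, Git and Jenkins (Amazon Linux 2)
sudo yum update -y
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum install -y java-1.8.0-openjdk jenkins docker git
sudo usermod -aG docker jenkins
sudo systemctl enable --now docker jenkins

# read the initial admin password for the browser setup
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```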
Do not close the SSH connection yet. Start your browser and follow the Jenkins installation steps there. The URL is similar to your SSH connection: http://<EC2 IP|DNS>:8080. You should see the following screen; paste the initial password there.
On the next screen, press the button "Install suggested plugins" and wait for the screen to create an administrator account. Fill in your credentials and finish the installation steps. The remaining configuration (in the browser) will be done later.
Before you can push images to ECR, you need to create a new repository. On the ECR page, press the button "Create repository". Your AWS ECR console screen may look a little different.
Enter the repository name "example/nginx" and press the button "Create repository".
Done, your ECR repository is created. On the overview page you can see all the needed information, like the repository name and URI. Your repository URI will be different from mine; I will just use the placeholder "<ECR URI>" in my description.
Okay, now enable the jenkins user to connect to ECR. Go back to the terminal and execute the following steps. You will need the credentials for "PipelineExampleUser" from the downloaded CSV file.
# change to jenkins user
sudo su - jenkins
# show docker info (optional)
docker info
# configure AWS-CLI options
aws configure
AWS Access Key ID [None]: <credentials.csv>
AWS Secret Access Key [None]: <credentials.csv>
Default region name [None]: eu-central-1
Default output format [None]: json
# list repositories in registry (optional)
aws ecr describe-repositories
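With the credentials configured, the jenkins user can also log Docker in to ECR. With AWS CLI v1 (current when this tutorial was written) this is typically done like this; the v2 variant in the comment is given for completeness:

```shell
# obtain and execute the docker login command for ECR (AWS CLI v1)
$(aws ecr get-login --no-include-email --region eu-central-1)

# with AWS CLI v2 the equivalent would be:
# aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin <ECR URI>
```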
I assume that you are familiar with Git. Now create a Git repository and create the following folders and files in it. I will use my own private GitLab repository.
Inside the folder "dev_credentials" I store the credentials.csv from AWS. The contents of this folder exist only on my local machine, because I exclude the folder and its files from Git via .gitignore.
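The exclusion in .gitignore can be as simple as this single rule:

```
# keep the AWS credentials out of the repository
dev_credentials/
```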
Jenkins job configuration
I will not use this tutorial to explain security topics for Jenkins, so we start directly with the configuration of the job (resp. project). On the main page, press the button "New Item" or the link "create new jobs". Enter the name "ExamplePipeline", select "Pipeline" and press the button "OK".
To save some disk space, enable the checkbox "Discard old builds" (5 builds are enough).
Normally you would create a webhook to trigger the build after each commit, but our EC2 instance changes its public IP/DNS on every stop/start. That's why we instead poll Git for revision changes every 5 minutes and trigger the job if something has changed.
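In the build trigger "Poll SCM", a 5-minute interval corresponds to this cron-style schedule (the leading H spreads the load across Jenkins jobs):

```
H/5 * * * *
```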
Add the repository (credentials may be needed) and configure the branch and the Jenkinsfile path.
Press the button "Save", _cross your fingers_ and trigger the build manually. If you did nothing wrong, the job will run without issues and ECR will contain your images (one per build, depending on how often you trigger it).
The next part of this tutorial series will be about deployment to ECS.
While surfing the internet I stumbled across Sitespeed.io. It's an amazing collection of open-source tools which makes performance measuring super easy for developers and testers. I tried it out and was immediately impressed. Here's a little tutorial on how to use Jenkins and Sitespeed.
With a minimum of 2 commands, the environment (via Docker) is already created. Most of the time will be needed for the plugin installation.
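The "2 commands" might look like the following; the volume name and the LTS image tag are my assumptions, not prescribed by the tutorial:

```shell
# create a persistent volume and start Jenkins LTS in Docker
docker volume create jenkins_home
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home --name jenkins jenkins/jenkins:lts
```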
When Jenkins is ready (restarted), install the HTML Publisher plugin (no restart is required after the plugin installation).
Create a new free-style project named SiteSpeed.
Attention: You will later need to specify the absolute path to the local directory /target/workspace/SiteSpeed. If you do not know it, press "Save", start the build without any job configuration (an empty job) and follow the optional instructions.
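As the build step, a single "Execute shell" command is enough. The URL and output folder below match the report path used in this job; the image tag is an assumption, so check the sitespeed.io documentation for the current invocation:

```shell
# run sitespeed.io via Docker and write the HTML report into the workspace
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io \
  https://www.sitespeed.io/ --outputFolder target/workspace/SiteSpeed
```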
Via the post-build action "Publish HTML reports" you can open the report very easily from the job project page.
Save everything and run the job. After a short time you can look at the HTML report. See "Pages" > "https://www.sitespeed.io/" for screenshots, HAR and video files. The sitespeed.io website has very detailed documentation and many more examples. Have fun!
In the last tutorial of this series, we create the dashboards for testing and support. Part 1, Part 2, Part 3 and Part 4 should have been successfully completed.
First, I will show the result. With a little drag & drop and resizing, your dashboard might look like this. But after all, it's up to your creativity how the result looks.
I will now show only the most important configurations; you should already know the rest from the 4th part.
Add the following panels: 1x Graph panel, 2x Singlestat panel and 1x Pie Chart panel. Now edit the Graph panel.
Under tab General enter for Title: Results: $Testers. On tab Metrics select Data Source InfluxDB_test_db and enter From: default suite WHERE: qa =~ /^$Testers$/, SELECT: field(passed) alias(Test Passed) field(skipped) alias(Tests Skipped) field(failed) alias(Tests Failed), FORMAT AS: Time series and ALIAS BY: $col.
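The query that the Metrics tab builds corresponds roughly to the following InfluxQL statement, shown here via the influx CLI so you can inspect it directly. The database name test_db is taken from the data source name, and a concrete value ("QA1") is substituted for the $Testers template variable; both substitutions are my assumptions.

```shell
# inspect the panel query directly against InfluxDB (sketch)
influx -database 'test_db' -execute \
  "SELECT \"passed\" AS \"Test Passed\", \"skipped\" AS \"Tests Skipped\", \"failed\" AS \"Tests Failed\" FROM \"suite\" WHERE \"qa\" =~ /^QA1$/"
```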
On the tab Axes, just enable the Show checkbox for Left Y and X-Axis, with Unit: short, Scale: linear and Mode: Time. On the tab Legend, enable the checkboxes Show, As Table, Min, Max and Current. On the tab Display, enable only the checkbox Lines.
Now you can edit the Singlestat panels (one after the other). Here are the important screens for Passed Total: $Testers:
For Singlestat Failed Total: $Testers, just change field(passed) to field(failed).
The Pie Chart Average: $Testers is simple, too. Here are the most important settings.
You’re done with 2nd dashboard. Don’t forget to save (incl. variables and so on)!
Let’s get to the last dashboard (Support). Again, the result.
Now add the following panels. 1x Graph panel and 1x Pie Chart panel (we dynamically generate the others).
Here are the important settings. On the tab General, pay attention to Repeat!
The tab Legend for the Graph panel has only the Show checkbox enabled. Let's go to the Pie Chart settings. On the tab General, pay attention to Repeat again!
That's it for this series. I hope you now have all the knowledge you need to create awesome QA dashboards with Grafana.
Finally, we can create the first dashboard. The prerequisite for this is that you have successfully completed the previous tutorials (Part 1, Part 2, Part 3).
Pipeline graph panel
Now go to the still empty dashboard Pipeline. Add the Graph Panel here and select Edit in the title of panel.
In the Metrics tab you now enter the following values. Data Source: InfluxDB_pipeline_db, From: default, pipeline, WHERE: stage =~ /^$Stage$/, SELECT: field(duration), FORMAT AS: Time series and ALIAS BY: Duration.
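Expressed as raw InfluxQL, this panel query is roughly the following; the database name pipeline_db is taken from the data source name, and "Build" is substituted for the $Stage template variable as an example value:

```shell
# the pipeline graph query as raw InfluxQL (sketch)
influx -database 'pipeline_db' -execute \
  "SELECT \"duration\" AS \"Duration\" FROM \"pipeline\" WHERE \"stage\" =~ /^Build$/"
```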
Note: In case your graph is not displayed correctly, select on the Time picker dropdown: Last 30 days.
Change to General tab and enter for Title: Duration: $Stage.
Change to Axes tab and enable checkboxes Show for Left Y and X-Axis. The Unit value for Left Y should be: minutes(m).
Under the tab Legend, enable the following checkboxes: Show, As Table, Min and Max. For Decimals, enter 2.
Our last action for the Graph panel will be done in tab Display. Here we add the Threshold. T1: gt, 15, Color critical, enable checkbox Fill and Y-Axis: left.
We are done with Graph panel … don’t forget to save!
Pipeline singlestat panel (Last Status)
Now we add 2 more singlestat panels. One should show the Last Status and the other Last Duration. Press Edit for Last Status panel.
In the Metrics tab you now enter the following values. Data Source: InfluxDB_pipeline_db, From: default, pipeline, WHERE: stage =~ /^$Stage$/, SELECT: field(status)last() and FORMAT AS: Time series.
Change to General tab and enter for Title: Last status: $Stage.
Close Panel edit mode and save.
Pipeline singlestat panel (Last Duration)
Last Singlestat will have following Metrics. Data Source: InfluxDB_pipeline_db, From: default, pipeline, WHERE: stage =~ /^$Stage$/, SELECT: field(duration) and FORMAT AS: Time series. Under tab General just add Title: Last Duration: $Stage.
For tab Options select Stat: Current, Font size: 50%, Unit: minutes(m), Thresholds: 10,15, enable checkbox Show for Gauge, Min: 0, Max: 30 and enable checkboxes Threshold labels plus Threshold markers. Close Edit mode and save.
Final Pipeline Dashboard
Now you can play with the size and placement of the panels. My Pipeline dashboard now looks like this:
If you change the variables (S1, S2, S3), the values of the panels should change.
This leaves only 2 dashboards. See you in the next tutorial.