Archives for September 30, 2021

Developer Pipelines

Introduction:

As part of coding, developers usually create a feature branch from the main branch and implement the required features. This involves building the code, deploying it to a local environment, and testing it manually to make sure it has no bugs. For highly complicated features, it can involve more than one developer working in parallel on the same feature branch.

Developer’s challenges:

As a developer, I have gone through this pre-merge process many times. Apart from spending a lot of effort on writing code, I have spent a lot of time populating data in the local environment and tweaking automation test scripts to run according to its configuration. Manual testing is done with assumptions about real-world use cases, to make sure the source code will run as expected in the actual environment.

Because testing happens mostly in the local environment, the feature usually appears to function well. But even when the feature looks good during quick testing, that may be because the local infrastructure lacks proper data, or because infrastructure-related configuration doesn't match. These gaps can break the team's shared test environment, which disrupts continuous integration of other developers' features and can completely block the QA team from testing other features. A small flutter of the butterfly's wings can have effects greater than we developers realize or intend.

Additionally, every code change requires the developer to build and deploy the source code to the local box. Because of this manual intervention, environment-related configuration can go wrong. This is a process just waiting to blow up.

As multiple developers may work in parallel on a feature branch, one developer has no visibility into what another developer has done. If the feature doesn't work as expected, it is difficult to troubleshoot and to determine whether a bug arises from another developer's code.

In some instances, the QA team must be involved to run tests on complicated features. As code from different developers flows into the test environment, it is difficult to identify which features have been implemented and to filter the test scripts to be executed. Passing test cases end up being re-run again and again, a never-ending process until the sprint is completed.

A lot of manual configuration changes must be made in the local environment. We might think we can avoid local environment changes, but it is not practical to expose infrastructure-related information like build/deploy scripts to developers. Everyone would end up playing around with the environment, degrading script quality and the stability of the environment, and it all ends in a mess. I have seen this happen.

On the other hand, if the scripts are controlled by the deployment team, every developer must contact that team to push even the simplest feature to the dev/test environment. This hurts the productivity of both developers and DevOps engineers and introduces delays in deployment lead time.

Nowadays, optimizing the release process has become the new normal. With CI pipelines available to push source code from one environment in the workflow through to production, visibility into where the source code stands has become clearer. The same visibility is needed before merging source code from the feature branch to the main branch: it allows failures to be troubleshot at an earlier stage, from the feature branch itself, and shows other developers and the QA team whether the code is ready for testing or the feature is still under development.

ReleaseIQ’s Pre-merge (Developer) pipeline:

Based on real customer use cases and the challenge of providing visibility into pre-merged source code, ReleaseIQ has built a pre-merge pipeline capability. It is implemented end to end by focusing on developers' needs and the specific problems they face deploying their code to the dev/test environment. Developers need control over creating the pre-merge CI pipelines while, at the same time, not having access to the merged pipelines configured in the CI tool, which contain critical information. As a first step, the ReleaseIQ admin decides which CI tools developers can access, and also which jobs/pipelines they can access. In this way, jobs with access to critical/high environments can be restricted from developers.

Developers commit their source code to a source control system (like GitHub, Bitbucket, GitLab, SVN, or Perforce), and the ReleaseIQ pipeline immediately creates the build, deploys it to the test environment, and runs the tests configured by the developer. Upon successful testing, developers can immediately raise a pull request to the approver. On approval, the end-to-end ReleaseIQ pipeline starts to deploy to real environments like stage, prod, etc.
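As a sketch, the flow just described can be modeled as a short stage sequence: build, deploy, and test run in order for each commit, and the commit is marked ready for a pull request only when every stage succeeds. This is illustrative Python, not ReleaseIQ's actual implementation:

```python
# Illustrative sketch of the pre-merge flow. Stage implementations are
# stand-ins; a real pipeline would invoke the CI tool at each step.

def run_premerge_pipeline(commit, stages):
    """Run stages in order, stopping at the first failure."""
    results = []
    for name, stage in stages:
        ok = stage(commit)
        results.append((name, "passed" if ok else "failed"))
        if not ok:
            return "failed", results
    return "ready_for_pr", results

# Hypothetical stages for illustration only.
stages = [
    ("build",  lambda commit: True),
    ("deploy", lambda commit: True),
    ("test",   lambda commit: commit != "broken-commit"),
]
```

The point of the model is the ordering guarantee: a pull request is never raised for a commit whose build, deploy, or test stage failed.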

Use case: Build/Deploy/Test the feature and create a pull request

Pre-merge pipeline creation in ReleaseIQ

CI tool setup:

Admin/DevOps can do a one-time setup of CI tools (such as Jenkins) and give developers access only to limited jobs that cannot impact critical environments like PROD.

Setting up CI tools for developer pipeline

Pipeline creation:

Admins can create pipelines in their CI tools that have access only to dev/test environments. Developers are then given access to those jobs in the CI tool settings.
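Conceptually, this access model is a whitelist: the admin approves a subset of CI jobs, and developers see only that subset. A minimal sketch, with assumed job names and an assumed data shape (not the actual ReleaseIQ data model):

```python
# Jobs the admin has approved for developer use; jobs that touch critical
# environments (e.g. "deploy-prod") are simply never added to this set.
DEVELOPER_ALLOWED_JOBS = {"build-dev", "deploy-test"}

def jobs_visible_to_developer(all_jobs):
    """Filter the CI tool's full job list down to the admin-approved subset."""
    return [job for job in all_jobs if job in DEVELOPER_ALLOWED_JOBS]
```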

Pipeline creation by Developers:

Developer Pipeline

Developers can create their own customized pipelines that build their source code, deploy it, and run tests. Once the tests pass, the developer can approve the pipeline, which creates a pull request and assigns it to the specified approvers.

Pipeline execution:

Pipeline Execution

The Commits screen in the ReleaseIQ product provides E2E visibility of pipeline execution, and developers have complete information at each commit level.

Pipeline Execution

Troubleshooting:

If the pipeline fails, developers can access the logs, identify the bugs, and fix them on their own, without CI tool access. It doesn't require access to dev machines or manual intervention from DevOps/admin.

Pipeline Failure

With a single click on the failed pipeline stage, developers can root-cause the issue and fix it.

Troubleshooting Logs
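The kind of triage this enables can be sketched as a simple scan of the failed stage's log for failure markers. The log format and marker list below are invented for illustration:

```python
# Scan a build/deploy log for lines that indicate failure, so the developer
# can jump straight to the root cause instead of reading the whole log.

def extract_errors(log_text, markers=("ERROR", "FAILED", "Exception")):
    """Return the log lines containing any of the given failure markers."""
    return [line for line in log_text.splitlines()
            if any(marker in line for marker in markers)]
```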

If this is interesting, check out our ReleaseIQ for Jenkins edition and get to work on your developer pipelines.

End-to-End Release Pipelines for Kubernetes Apps using ReleaseIQ

Introduction

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications, increasing developer productivity. Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. Kubernetes applications can be deployed in the cloud and to Kubernetes clusters hosted in other environments. By using containerized applications, we can have testing environments similar to production, which catches issues earlier in the process. Kubernetes deployments allow you to build application services that run in multiple containers, schedule those containers across a cluster, scale them, and manage their health over time. Kubernetes has transformed the application development model.

Continuous Integration (CI)

Continuous Integration (CI) is a process that allows developers to integrate work into a shared environment, for example by pushing code to development quite frequently. This enhances collaborative development. Frequent integration helps dissolve silos and reduces the size of each commit, lowering the chance of an issue in the pre-production or production environments. A CI workflow starts by listening to the source code management (SCM) system, then builds the source code, deploys it to a shared development environment, and runs any integration tests.

Continuous Delivery/Continuous Deployment (CD)

Continuous delivery extends the continuous integration process by deploying builds that pass the integration test suite to a pre-production environment. This makes it straightforward to evaluate each build in a production-like environment, so developers can easily validate bug fixes or test new features without additional work.

Once a build passes automated tests in a pre-production environment, a continuous deployment system can automatically deploy the build to production servers. This enables teams to release new features and bug fixes instantly.
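The promotion logic described in the last two paragraphs can be sketched as a small decision function: a build's test results determine the next environment it may move to. The field names below are assumptions for illustration:

```python
# Decide where a build may go next based on which test suites it has passed.
# A build that fails integration stays in CI; one that passes integration is
# promotable to pre-production; one that also passes there may go to prod.

def next_environment(build):
    """Return the next environment for a build, or None if not promotable."""
    if not build.get("integration_passed"):
        return None
    if not build.get("preprod_passed"):
        return "pre-production"
    return "production"
```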

Importance of End-to-End Pipeline

The advantage of the end-to-end pipeline in ReleaseIQ is that the developer, project manager, team lead, and DevOps team all have visibility from code check-in until it is deployed to production. The key is the visibility to debug the issue when any step fails. Deployment to a specific environment can be tested thoroughly using automated or manual tests. Deployment control can be given to the application owners, who decide whether to proceed with or reject a deployment. Team members can trace back performance and infrastructure issues as well.

Single Microservice E2E Pipeline

We can use ReleaseIQ to deploy a single microservice from SCM to production. The workflow listens to the SCM, builds the source code, uploads the binary to a build repository, deploys to a test environment, runs automated tests for deployment verification, and, if verification succeeds, deploys to the production environment.

Use Case #1 – Using ReleaseIQ Internal Capabilities

The workflow for this use case listens to the SCM, builds the source code using Maven or Gradle, pushes a Docker image to a build repository, and deploys to Kubernetes using Helm via ReleaseIQ.

Prerequisites:

  1. Create Product, Team, and Component for the Use Case.
  2. Configure the Source Control Management in Settings.
  3. Configure the Build Repository in Settings.
  4. Configure the Environment with Appropriate Kubernetes Cluster Configuration.
  5. Verify the Helm configuration files with these samples.

Configure the Internal Build Configuration as below in Build Step:

  • Choose the Build Tool Name as Maven
  • Select the Maven version from the drop-down
  • Enter the Maven goal
  • Select the Java version
  • Select the Build Repository name and File Repository
  • If you want to scan the deliverable, select the checkbox
  • If you want the pipeline to fail when any vulnerability is found, select the fail pipeline checkbox
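One way to think about the fields above is as assembling a build-step configuration. A hedged sketch of that mapping, with key names chosen to mirror the form rather than ReleaseIQ internals:

```python
# Assemble the build-step settings implied by the form fields above.
# Keys and values are illustrative, not ReleaseIQ's actual schema.

def build_step_config(goal, maven_version, java_version,
                      scan=False, fail_on_vulnerability=False):
    """Model the Maven build step as a plain configuration dict."""
    return {
        "tool": "maven",
        "version": maven_version,                 # from the drop-down
        "command": ["mvn", goal],                 # e.g. ["mvn", "clean install"]
        "java": java_version,
        "scan_deliverable": scan,                 # the scan checkbox
        "fail_pipeline_on_vulnerability": fail_on_vulnerability,
    }
```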

Kubernetes Build Set up

Configure the Internal Deployer Configuration as below in Deploy Step:

  • Select the Environment which is configured with the Kubernetes cluster
  • Select the Deployer Tool Name as “Internal Deployer Tool”
  • Choose the endpoint of the cluster
  • Select Deployment Type as “Helm”

Kubernetes Deploy Step

Choose Add Helm Configuration:

  • Select the SCM, Repository, Branch name, and path which contains the Helm Deployment files.
  • If you have multiple YAML files for Deployment, add those file names in the Deployment files text box.
  • Enter the Namespace where you want to deploy the Application.
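The fields in the Helm configuration map naturally onto a `helm upgrade --install` invocation. A sketch of that mapping (the release name and file names below are examples, not values ReleaseIQ generates):

```python
# Compose the helm command implied by the Add Helm Configuration form:
# chart path, target namespace, and any number of values/deployment files.

def helm_deploy_command(release, chart_path, namespace, value_files=()):
    """Build the argv for `helm upgrade --install` from the form fields."""
    cmd = ["helm", "upgrade", "--install", release, chart_path,
           "--namespace", namespace]
    for f in value_files:                 # multiple YAML deployment files
        cmd += ["-f", f]
    return cmd
```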

Add Helm Configuration

The overall End-to-End Pipeline will look like this:

End to End Pipeline

The pipeline execution will look like this:

Pipeline execution

Use Case #2 – Using Jenkins to Push to the Build Repository

The workflow for this use case listens to the SCM, builds the source code, pushes a Docker image to a build repository using Jenkins, and deploys to Kubernetes using Helm via ReleaseIQ.

Prerequisites:

  1. Create Product, Team, and Component for the Use Case.
  2. Configure the Source Control Management in Settings.
  3. Configure the Jenkins CI Tool in Settings.
  4. Configure the Environment with Appropriate Kubernetes Cluster Configuration.
  5. Verify the Helm configuration files with these samples.
  6. Archive the Build Location and Build ID in the Jenkins Job/Pipeline.

Configure the Build Step as Below:

  • Select the Jenkins CI tool configured as the Build Tool Name.
  • Select the Jenkins job or Jenkins pipeline that builds and pushes the Docker image to the Build Repository.
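Selecting a Jenkins job here can be thought of as arranging for that job to be triggered through Jenkins' remote-access API. A sketch of composing the trigger URL (the host and job name are placeholders; `/build` and `/buildWithParameters` are Jenkins' standard remote-trigger endpoints):

```python
# Build the URL used to trigger a Jenkins job remotely. A real client would
# POST to this URL with credentials; that part is omitted here.

def jenkins_trigger_url(base_url, job_name, params=None):
    """Compose the remote-trigger URL for a Jenkins job."""
    endpoint = "buildWithParameters" if params else "build"
    return f"{base_url.rstrip('/')}/job/{job_name}/{endpoint}"
```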

Kubernetes Build configuration

Configure the Internal Deployer Configuration as below in Deploy Step:

  • Select the Environment which is configured with the Kubernetes cluster
  • Select the Deployer Tool Name as “Internal Deployer Tool”
  • Choose the endpoint of the cluster
  • Select Deployment Type as “Helm”

Kubernetes Deployment configuration

Choose Add Helm Configuration:

  • Select the SCM, Repository, Branch name, and path which contains the Helm Deployment files.
  • If you have multiple YAML files for Deployment, add those file names in the Deployment files text box.
  • Enter the Namespace where you want to deploy the Application.

Helm Configuration

The Overall E2E Pipeline will look like this:

End-to-End Pipeline

The Pipeline Execution will look like this:

Pipeline Execution

Multiple Microservice CI/CD E2E Pipeline:

There are many scenarios where an application has multiple microservices. These microservices may be dependent or independent. We may want to deploy independent microservices one by one, and deploy dependent services together. We may also want to deploy microservices together at a scheduled time, or require an approval before deployment. ReleaseIQ can easily implement these scenarios and many others.
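The scheduling idea can be sketched as simple batching: services with dependencies are grouped and deployed together, while independent services get their own batch. Service names and the dependency map below are illustrative:

```python
# Group services into deployment batches: a service and the services it
# depends on go out together; independent services are deployed solo.

def deployment_batches(services, depends_on):
    """Return a list of batches, each deployed as one unit, in input order."""
    batches, seen = [], set()
    for service in services:
        if service in seen:
            continue
        group = [service] + [d for d in depends_on.get(service, [])
                             if d not in seen]
        seen.update(group)
        batches.append(group)
    return batches
```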

Use Case: The workflow for this use case is two CI pipelines connected to a CD pipeline.

CI Pipeline #1:

Listen to the SCM, build the source code using Maven or Gradle, push the Docker image to a build repository, and deploy to Kubernetes using Helm via ReleaseIQ. Configure this CI pipeline the same as single-microservice Use Case #1 and add a pipeline connector to connect it to the CD pipeline.

CI Pipeline

CI Pipeline #2

Listen to the SCM, build the source code and push the Docker image to a build repository using the Jenkins tool, and deploy to Kubernetes using Helm via ReleaseIQ. Configure this CI pipeline the same as Use Case #2 and add a pipeline connector to connect it to the CD pipeline.

CI Pipeline

CD Pipeline

Prerequisites:

  1. Create Product, Team and Component for the Use Case.
  2. Configure the Source Control Management in Settings.
  3. Configure the Build Repository in Settings.
  4. Configure the Environment with Appropriate Kubernetes Cluster Configuration.
  5. Verify the Helm configuration files with these samples.

Configure the Internal Deployer Configuration as below in Deploy Step:

  • Select the Environment which is configured with the Kubernetes cluster
  • Select the Deployer Tool Name as “Internal Deployer Tool”
  • Choose the endpoint of the cluster
  • Select Deployment Type as “Helm”

CD Pipeline

Choose Add Helm Configuration:

  • Select the Linked CI Pipeline Name as CI Pipeline #1.
  • Select the SCM, Repository, Branch name, and path which contains the Helm Deployment files.
  • If you have multiple YAML files for Deployment, add those file names in the Deployment files text box.
  • Enter the Namespace where you want to deploy the Application.
  • Click Add new and follow the above steps for CI Pipeline #2.

Helm configuration

The CD Pipeline will look like this:

CD Pipeline

The Pipeline Execution will look like this:

Pipeline Execution

For more information on our Essentials for Jenkins, click here.