
🚀 End-to-End DevSecOps Implementation: CI/CD, Security, and Monitoring

Project Overview

This DevSecOps project implements a fully automated CI/CD pipeline that integrates security scanning, observability, and infrastructure automation. The project ensures secure application deployment by incorporating static code analysis, container security scanning, infrastructure as code (IaC) validation, and real-time monitoring using industry-leading tools like OpenTelemetry, Prometheus, Grafana, Trivy, and TFSec.

Key Objectives

✅ Continuous Integration & Deployment (CI/CD): Automate application builds, testing, and deployments using Jenkins, Docker, and Node.js to ensure seamless software delivery.

✅ Secrets Management: Securely handle sensitive credentials using HashiCorp Vault, ensuring encrypted access to secrets.

✅ Infrastructure as Code (IaC) Security: Automate infrastructure provisioning with Terraform, enforce best practices, and enhance security using tfsec for IaC validation.

✅ Static Code & Dependency Analysis: Ensure code quality, security, and compliance with SonarQube for static analysis, Snyk for dependency vulnerability scanning, and Trivy for container image security.

✅ Monitoring & Observability: Implement real-time performance tracking and security monitoring using Prometheus, Grafana, and OpenTelemetry to gain full visibility into system health.

✅ Artifact Management: Store, manage, and distribute application artifacts efficiently using Nexus Repository to improve version control and software traceability.

✅ Configuration Management: Automate and standardize system configurations with Ansible for consistent and scalable infrastructure setup.

✅ Team Collaboration & Alerting: Enhance developer communication and incident response by integrating Slack notifications for build failures, security alerts, and deployment updates.

Architecture




Cost Optimization Decisions

To ensure a cost-effective solution without compromising on functionality, all tools used in this DevSecOps project have been integrated into the Jenkins server. This approach avoids the need for additional servers or infrastructure, reducing operational costs.

Rationale

  • Centralized Integration: Running all tools (e.g. Prometheus, Grafana, Trivy, TFsec, SonarQube) on the same server minimizes resource utilization and eliminates the cost of multiple servers.
  • Simplified Management: Centralized integration simplifies maintenance, monitoring, and updates for all tools.
  • Efficient Resource Usage: Using the Jenkins server for multi-purpose tasks optimizes the allocated resources, leveraging idle capacity during pipeline executions.

Implementation

  • All tools are installed and configured on the Jenkins server instance.
  • Prometheus and Grafana are set up to run on separate ports to avoid conflicts.
  • Tools like Trivy and tfsec run as CLI tools or containers, leveraging Docker where applicable.

Running the Application in the Background

If you want to keep the server running while using the terminal for other tasks, execute the provided command to run it in the background.

Why Background Running? Running the server in the background avoids the need to open duplicate terminals when integrating other tools or performing additional tasks.
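The pattern above can be sketched as a small shell helper (a minimal sketch: the `start_bg` name is illustrative, and `sleep 30` stands in for `node server.js`):

```shell
# Start a command in the background, detached from the terminal,
# and remember its PID so it can be stopped later.
start_bg() {
  nohup "$@" >/dev/null 2>&1 &
  echo $!                      # print the background PID
}

pid=$(start_bg sleep 30)       # stand-in for: start_bg node server.js
kill -0 "$pid" && echo "running with PID $pid"
kill "$pid"                    # stop it when no longer needed
```

A trailing `&` alone is enough for a quick session; `nohup` additionally keeps the process alive after the terminal closes.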


Create Jenkins Server

Creating an EC2 Instance:

  1. Log in to the AWS Management Console.
  2. Navigate to EC2 > Instances > Launch Instances.
  3. Configure the instance:
    • AMI: Amazon Linux 2.
    • Instance Type: t3.xlarge.
    • Key pair: create one (store it in a secure place) and assign it to the instance.
    • Storage: 50 GiB gp2.
    • Security Group: allow SSH (port 22) and HTTP (port 8080).
  4. Launch the instance and wait for it to initialize.

Commands to Run After Launch:

sudo yum update -y
sudo yum install git -y

Install OpenTelemetry and Project Dependencies

1. Install Node.js and npm

Node.js is required to run the project, and npm (Node Package Manager) manages the project's dependencies.

Installation Commands:

curl -sL https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum install nodejs -y

Verify Installation:

node -v
npm -v


2. Install Project Dependencies

This project uses OpenTelemetry (OTel) for distributed tracing and observability. Install the necessary OpenTelemetry libraries:

Clone the Project Repository

  1. Clone the Repository: Run the following command to clone the GitHub repository:

    git clone https://github.com/DevopsProjects05/DevSecOps-End-to-End-Project.git
    cd DevSecOps-End-to-End-Project/src
  2. Install OpenTelemetry libraries:

    npm install @opentelemetry/sdk-trace-node
    npm install @opentelemetry/exporter-trace-otlp-http

Library Overview:

  • @opentelemetry/sdk-trace-node: Enables OpenTelemetry tracing in the Node.js application.
  • @opentelemetry/exporter-trace-otlp-http: Sends trace data from the application to the OpenTelemetry Collector over HTTP using the OTLP protocol.
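A minimal initialization sketch tying the two libraries together — the file name tracing.js and the 1.x SDK API shown here are assumptions for illustration, not taken from the repository (@opentelemetry/sdk-trace-base ships as a dependency of the Node SDK):

```javascript
// tracing.js — hedged sketch of OpenTelemetry trace setup (1.x JS SDK).
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new OTLPTraceExporter({ url: 'http://<collector-ip>:4318/v1/traces' })
  )
);
provider.register(); // installs this provider as the global tracer provider
```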

3. Update the Collector URL in server.js

Configure the application to send trace data to the OpenTelemetry Collector.

Steps:

  1. Open the server.js file:

    vi server.js
  2. Locate and update the following line:

    url: 'http://<collector-ip>:4318/v1/traces'
  3. Replace <collector-ip> with the public IP address of your OpenTelemetry Collector:

    url: 'http://public-ip:4318/v1/traces'
  4. Save and exit the file.

4. Start the Application

Run the application to generate and send telemetry data to the OpenTelemetry Collector.

Steps:

  1. Start the server:

    node server.js

    To run it in the background (keeps the process running without opening duplicate terminals):

    node server.js &
  2. Access the application at:

    http://<public-ip>:3000
    

Application Successfully Running on Port 3000


  3. To stop the server (optional, if you ran it without &):
    Ctrl + C

Jenkins Installation

  1. Add the Jenkins repository:
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io-2023.key
sudo yum upgrade -y
  2. Install Java 17:
sudo amazon-linux-extras enable corretto17
sudo yum install -y java-17-amazon-corretto
java --version

Command Line Execution Result



  3. Install Jenkins:
sudo yum install jenkins -y
  4. Enable and start Jenkins:
sudo systemctl enable jenkins
sudo systemctl start jenkins
sudo systemctl status jenkins

Terminal Display of Running Commands



Access Jenkins

Once you access Jenkins at http://<Jenkins-Instance-IP>:8080, you will see the following page:

Jenkins Server is Up and Running on Port 8080



Retrieving the Initial Admin Password

Run the following command in the terminal:

cat /var/lib/jenkins/secrets/initialAdminPassword

Terminal Output Screenshot



Copy the output (the initial admin password) and paste it into the Jenkins setup page to continue.

After entering the initial admin password, you will be redirected to a page to install plugins, as shown below:



Select Install suggested plugins to install the necessary plugins.

You will see the plugins being installed once you click Install suggested plugins:



Create Jenkins User

After installing plugins, you will be redirected to a page to set up a Jenkins user account. Fill in the required details:

Here’s the Output You Should See



Provide the necessary details to create your Jenkins account.

Once the plugins are installed, you will see the Jenkins URL as shown below:


Click Save and Finish to start using Jenkins.


Configuring Jenkins for CI-CD with Additional Tools

1. Install Essential Plugins

  1. Go to Jenkins Dashboard > Manage Jenkins > Plugins.

  2. Navigate to the Available tab and search for these plugins:

    • Git Plugin: For integrating Git repositories (pre-installed).

    • Pipeline Plugin: For creating declarative or scripted pipelines.

      • Pipeline: Stage View
      • Pipeline: Declarative Agent API
    • Terraform Plugin: For running Terraform commands in Jenkins.

    • HashiCorp Vault: To pull secrets from Vault (optional, based on your goals).

    • HashiCorp Vault Pipeline

    • SonarQube Scanner Plugin: For static code analysis integration.

    • Docker: To run Docker-related commands within Jenkins.

    • Snyk Security: For code and dependency scanning.

    • Ansible Plugin: To automate configuration management.

    • Prometheus: For Monitoring and Observability

    • OpenTelemetry Agent Host Metrics Monitor Plugin

    • Slack Notification: to send build, security, and deployment alerts to Slack.

Install plugins as shown below:



Restarting Jenkins

After installing plugins or making configuration changes, you may need to restart your Jenkins server. You can do this in one of the following ways:

  1. Using the systemctl command (Linux systems):
    sudo systemctl restart jenkins
  2. Using the Jenkins UI: If you're on the plugin installation page, check the "Restart Jenkins when installation is complete and no jobs are running" box at the bottom of the page. Alternatively, navigate to the following URL in your browser to restart Jenkins:
http://<public-ip>:8080/restart

Replace <public-ip> with your Jenkins server's public IP address. Ensure Jenkins has fully restarted before proceeding with further tasks.

Troubleshooting: Updating Jenkins IP Address

If you stop the Jenkins instance and start it again, you may experience slowness or broken links when accessing Jenkins. This happens because the instance receives a new public IP address after a restart, while Jenkins still stores the old one in its configured URL.

To resolve this issue, follow these steps to update the latest IP address in Jenkins:

  1. Open the Jenkins configuration file:

    sudo vi /var/lib/jenkins/jenkins.model.JenkinsLocationConfiguration.xml
  2. Update the Jenkins URL field with the new public IP address of the instance.

  3. Save the changes and restart Jenkins:

    sudo systemctl restart jenkins
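The manual edit above can also be scripted; a hedged sketch using sed (the file path comes from the steps above, and the `<jenkinsUrl>` element is where Jenkins stores its configured URL):

```shell
# Rewrite the <jenkinsUrl> element in Jenkins' location configuration
# to point at a new public IP. Run as root on the Jenkins server,
# then restart Jenkins.
update_jenkins_url() {
  config="$1"; new_ip="$2"
  sed -i "s|<jenkinsUrl>[^<]*</jenkinsUrl>|<jenkinsUrl>http://${new_ip}:8080/</jenkinsUrl>|" "$config"
}

# Example (on the server; the IP is a placeholder):
# update_jenkins_url /var/lib/jenkins/jenkins.model.JenkinsLocationConfiguration.xml <new-public-ip>
```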

Configure Tools

Install Terraform:

sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum install -y terraform
terraform --version

Command Output Snapshot



Install tfsec:

curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec --version


Install Trivy:

curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh
sudo mv /root/bin/trivy /usr/local/bin/trivy

Troubleshooting Trivy Installation

If you encounter an issue where trivy is not found after installation, follow these steps to locate and move it manually:

Issue

You may see the following error when trying to move trivy:

mv: cannot stat '/root/bin/trivy': No such file or directory

Solution

Find where trivy was installed by running:

find / -name trivy 2>/dev/null

This should return a path similar to:

/root/DevSecOps-End-to-End-Project/src/bin/trivy

Move trivy to /usr/local/bin/ for system-wide access:

mv /root/DevSecOps-End-to-End-Project/src/bin/trivy /usr/local/bin/trivy

Verify the installation:

trivy --version

If the installation was successful, this should output the installed Trivy version.

Command Execution Status



Install Snyk CLI:

npm install -g snyk
snyk --version

Terminal Session Output



Install Ansible

Install ansible using pip:

sudo yum install python3-pip -y
sudo pip3 install ansible
ansible --version

Execution Log in Terminal



Configuring Global Tools in Jenkins

  1. Git:
    • Go to Manage Jenkins > Global Tool Configuration.
    • Under Git, click Add Git and set the path to /usr/bin/git.


Refer to the above screenshot to configure Terraform & Ansible.

Note: Ensure that you uncheck Install automatically.

  2. Terraform:

    • Add Terraform under Terraform installations.
    • Ensure the binary is installed at /usr/bin/.
  3. Ansible:

    • Add an Ansible installation and set the path to /usr/bin/.

Click on Apply & Save to continue.


Create Your First Job to Verify Jenkins

Follow these steps to create a Freestyle Project in Jenkins to verify that Jenkins is properly configured with additional tools:

  1. Create a Freestyle Project:

    • Go to the Jenkins Dashboard and click on New Item.
    • Enter a name for your job (e.g. Verify-Jenkins) and select Freestyle Project.
  2. Configure the Build Steps:

    • Scroll down to the Build section and click Add build step.
    • Select Execute shell and add the following commands:
      echo "Jenkins is configured with additional tools!"
      tfsec --version
      trivy --version
      snyk --version
  3. Save and Build:

    • Click Save to create the job.
    • Go back to the project dashboard and click Build Now to execute the job.
  4. Verify the Output:

    • Navigate to the Console Output of the build to verify that the commands ran successfully and the versions of tfsec, trivy, and snyk are displayed.

Check the console output to verify the installed versions.



Deploying SonarQube as a Container

Steps to Install and Configure SonarQube:

  1. Install Docker:
sudo yum install docker -y
sudo systemctl enable docker
sudo systemctl start docker
  2. Check Docker status:
sudo systemctl status docker

Terminal View of Executed Commands



Run the SonarQube Container

Execute the following command to pull and run the latest SonarQube container in detached mode (-d), mapping it to port 9000:

docker run -d --name sonarcontainer -p 9000:9000 sonarqube:latest
  • -d β†’ Runs the container in detached mode (in the background).
  • --name sonarcontainer β†’ Assigns a custom name (sonarcontainer) to the container for easy management.
  • -p 9000:9000 β†’ Maps port 9000 of the container to port 9000 of the host machine.
  • sonarqube:latest β†’ Uses the latest available SonarQube image from Docker Hub.

Verify if the container is running using:

docker ps

Command Execution Status



SonarQube Successfully Running on Port 9000

Once the container is running, access the SonarQube web interface:

  • Open a browser and navigate to:
    http://<your-ec2-ip>:9000

Console Output After Command Execution



Default Credentials

  • Username: admin
  • Password: admin

Upon first login, you will be prompted to change the default password for security.

Provide a new password, e.g. Example@12345 (suggested).



Adding SonarQube Configuration to Jenkins

  1. Go to Manage Jenkins > System.
  2. Scroll to SonarQube Servers and click Add SonarQube.
  3. Enter the following details:
    • Name: SonarQube server (or any identifier).
    • Server URL: http://<your-sonarqube-server-ip>:9000.


Save the configuration.

Important Note:

Before proceeding, make sure to securely store the following values, as they will be required later:

  • sonar.projectName
  • sonar.projectKey
  • Token

Create a New Project in SonarQube

  1. Log in to SonarQube.
  2. Click Create a local project and provide the project name (e.g. Sample E-Commerce Project).
  3. Set the branch to main, then click Next.
  4. Select Use the global setting, then click Create Project.

Generate an Authentication Token

  1. Navigate to My Account > Security.
  2. Under Generate Tokens, enter a token name (e.g. Sample Project Token).
  3. Select Global Analysis from the dropdown.
  4. Click Generate and copy the token (save it securely; it will not be displayed again).

Install Sonar Scanner

  1. Create a directory for Sonar Scanner:
    mkdir -p /downloads/sonarqube
    cd /downloads/sonarqube
  2. Download the latest Sonar Scanner:
    wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-5.0.1.3006-linux.zip
    unzip sonar-scanner-cli-5.0.1.3006-linux.zip
    sudo mv sonar-scanner-5.0.1.3006-linux /opt/sonar-scanner
  3. Add Sonar Scanner to the PATH:
    vi ~/.bashrc
    export PATH="/opt/sonar-scanner/bin:$PATH"

Add the path as shown below:



source ~/.bashrc

Verify the installation:

sonar-scanner --version

Execution Log in Terminal



Ensure the SonarQube Scanner plugin is installed in Jenkins.

Analyze Code with Sonar Scanner

  1. Navigate to the src directory.

    cd src
  2. Create and edit the sonar-project.properties file:

    vi sonar-project.properties           

    Add the following content:

    # Unique project identifier in SonarQube
    sonar.projectKey=Sample-E-Commerce-Project     
    
    # Display name of the project
    sonar.projectName=Sample E-Commerce Project 
    
    # Directory where source code is located (relative to this file)
    sonar.sources=.
    
    # URL of the SonarQube server
    sonar.host.url=http://<your-sonarqube-server-ip>:9000    
    
    # Authentication token from SonarQube
    sonar.login=<your-authentication-token>    
    

    Important: Ensure that you replace the SonarQube server IP and authentication token before scanning.

  3. Run the Sonar Scanner:

    /opt/sonar-scanner/bin/sonar-scanner

You will see the following result after running Sonar Scanner:



  4. For debugging issues, use:

    /opt/sonar-scanner/bin/sonar-scanner -X

    If you get an error:

    • Ensure your SonarQube server IP is configured in Jenkins.
    • Verify that your project key and authentication token are correct.
    • Make sure you are in the correct path (/src).
    • Confirm that the sonar-project.properties file exists in the /src directory.

View Results in SonarQube

  1. Open your browser and navigate to http://<your-sonarqube-server-ip>:9000.
  2. Log in to the SonarQube dashboard.
  3. Locate the project (e.g., Sample E-Commerce Project).
  4. View analysis results, including security issues, reliability, maintainability, and code coverage.

The following output will be visible upon successful execution:



Installing HashiCorp Vault for Secure Secrets Management

HashiCorp Vault is used to securely manage AWS credentials and other sensitive secrets, including:

  • Nexus Credentials
  • Docker Hub Credentials
  • Snyk Token
  • SonarQube Token
  • Other Confidential Secrets

By integrating Vault, we ensure that secrets are securely stored and dynamically accessed, reducing security risks.

  1. Why Vault? HashiCorp Vault is used to:

    • Securely store and manage sensitive information.
    • Dynamically generate AWS credentials for Terraform.
  2. Steps for Vault Integration: Before proceeding, we need to integrate Vault:

    • Install Vault:

      sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
      sudo yum install -y vault
    • Start Vault in Development Mode:

      vault server -dev -dev-listen-address="0.0.0.0:8200"

    Note: Copy the root token from the output; you will need it to log in to the Vault server.

    • Run Vault in Background (Optional):
      vault server -dev -dev-listen-address="0.0.0.0:8200" &
  3. Access Vault Server

    http://<public-ip>:8200

Vault is Up and Running at IP:8200



Enter the root token to log in.

Integrate Vault for Secrets Management

Open a Separate Terminal for Configuration

  1. Right-click on the tab of your terminal session.
  2. From the context menu, select the option 'Duplicate Session'.
  3. This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
  4. After entering the duplicate terminal, switch to the root user (e.g. sudo su -) and follow the steps below.

Step-by-Step Configuration

  1. Set Vault's Environment Variables:
    export VAULT_ADDR=http://0.0.0.0:8200
    export VAULT_TOKEN=<your-root-token>
  2. Enable the AWS Secrets Engine:
    vault secrets enable -path=aws aws
  3. Configure AWS Credentials in Vault:
    vault write aws/config/root \
     access_key=<your-Access-key> \
     secret_key=<your-Secret-key>
  4. Create a Vault Role for AWS Credentials
vault write aws/roles/dev-role \
 credential_type=iam_user \
 policy_document=-<<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "sts:GetCallerIdentity"],
            "Resource": "*"
        }
    ]
}
EOF

πŸ” Secure Credentials in Jenkins Using HashiCorp Vault

πŸ›‘οΈ How to Store Vault Secrets in Jenkins Securely?

Go to Jenkins Dashboard β†’ Click Manage Jenkins
Navigate to: Manage Credentials β†’ Global β†’ Add Credentials
Select "Secret Text" under Kind
Add Vault Address:

  • Secret: Paste your Vault Address
  • ID: VAULT_ADDR
  • Click OK Add Vault Token:
  • Secret: Paste your Vault Token
  • ID: VAULT_TOKEN
  • Click OK

🎯 How to Use It in a Jenkins Pipeline

environment {
    VAULT_ADDR = credentials('VAULT_ADDR')
    VAULT_TOKEN = credentials('VAULT_TOKEN')
}
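Building on the snippet above, here is a fuller, hypothetical declarative pipeline sketch — it assumes the Vault CLI is installed on the Jenkins agent and that the dev-role from the earlier Vault setup exists:

```groovy
pipeline {
    agent any
    environment {
        VAULT_ADDR  = credentials('VAULT_ADDR')
        VAULT_TOKEN = credentials('VAULT_TOKEN')
    }
    stages {
        stage('Fetch AWS credentials') {
            steps {
                // Read short-lived AWS credentials from Vault.
                // Avoid echoing secret values in real pipelines.
                sh 'vault read -format=json aws/creds/dev-role > aws_creds.json'
            }
        }
    }
}
```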

Testing HashiCorp Vault in a Freestyle Jenkins Job

To verify Vault integration with Jenkins, follow these steps:

Add Vault URL to Jenkins

  1. Go to Manage Jenkins > System > Vault Plugin.
  2. Enter the Vault URL: http://<public-ip>:8200.
  3. Click Apply and Save.

Steps to Create and Configure the Jenkins Job

  1. Create a New Freestyle Job:

    • Go to Jenkins Dashboard > New Item.
    • Enter a job name (e.g., Test-Vault).
    • Select Freestyle Project and click OK.
  2. Add Build Step:

    • Under Build, click on Add Build Step.
    • Select Execute Shell.
  3. Add the Following Shell Script:

    # Export Vault address and token
    export VAULT_ADDR=http://<public-ip>:8200
    export VAULT_TOKEN=<YOUR_VAULT_TOKEN>
    
    echo "Testing Vault Connection..."
    # Read AWS credentials from Vault
    vault read -format=json aws/creds/dev-role > aws_creds.json
    jq -r '.data.access_key' aws_creds.json
    jq -r '.data.secret_key' aws_creds.json
  4. Run the Job:

    • Click Save and then Build Now.
  5. Verify the Output:

    • Check the Console Output to ensure:
      • The Vault connection is successful.
      • The AWS credentials are retrieved and displayed (acceptable for this connectivity test; avoid printing secrets in real pipelines).


Integrating Tfsec to Enhance Terraform Security Scanning

To scan Terraform files for potential security vulnerabilities using tfsec, follow these steps:

  1. Ensure a Terraform File Exists:

    • Confirm that the required Terraform file (.tf) is available in the directory.
  2. Navigate to the Terraform Directory:

    cd /root/DevSecOps-End-to-End-Project/terraform
  3. Run tfsec:

    • Execute the following command to perform the security scan:

    tfsec .
  4. Analyze the Output:

    • Review the results of the scan for any identified security issues and resolve them as needed.
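The same scan can gate a pipeline. Below is a hedged sketch of a declarative Jenkins stage (--minimum-severity is a tfsec CLI option; the terraform directory name follows this repository's layout):

```groovy
// Hedged sketch: fail the build only on HIGH or CRITICAL findings.
stage('Terraform Security Scan') {
    steps {
        dir('terraform') {
            // Remove --minimum-severity to report every finding.
            sh 'tfsec . --minimum-severity HIGH'
        }
    }
}
```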


Integrating Trivy to Enhance Container Image Scanning

Install Docker

If Docker is not already installed, use the following command to install Docker:

sudo yum install docker -y

Navigate to Docker directory

cd DevSecOps-End-to-End-Project/Docker

Steps to Integrate Trivy for Image Scanning

  1. Create the Dockerfile:

    • The Dockerfile is already available in the /Docker directory. Below is an example of the Dockerfile:
    FROM node:16-alpine
    WORKDIR /app
    COPY package.json .
    RUN npm install
    COPY . .
    EXPOSE 3000
    CMD ["npm", "start"]
    • Save this file in the root directory of your project.
  2. Build the Docker Image:

    • Navigate to your project directory and run:
    docker build -t sample-ecommerce-app .

What You Can Expect After Running the Command



  3. Run the Docker Container (Optional for Testing):

    • To test the container, run:

    docker run -p 3000:3000 sample-ecommerce-app

    • Access the application in your browser at:

    http://<your-server-ip>:3000

  4. Scan the Image with Trivy:

    • Use Trivy to scan the Docker image for vulnerabilities:

    trivy image sample-ecommerce-app
  5. Analyze the Output:

    • Review the vulnerabilities identified in the scan and address them by updating dependencies or modifying the Dockerfile.
  6. Clean Up:

    • Stop and remove the running container (if applicable):

    docker stop <container-id>
    docker rm <container-id>
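In a pipeline, the scan above can fail the build automatically. A hedged sketch of a Jenkins stage (--severity and --exit-code are standard Trivy flags):

```groovy
// Hedged sketch: gate the pipeline on image vulnerabilities.
stage('Image Scan') {
    steps {
        // --exit-code 1 makes trivy return non-zero (failing the build)
        // when HIGH or CRITICAL vulnerabilities are found.
        sh 'trivy image --severity HIGH,CRITICAL --exit-code 1 sample-ecommerce-app'
    }
}
```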

Push Docker Image to a Container Registry

To store and share your Docker image, push it to a container registry like Docker Hub, Amazon ECR, or Azure ACR.

  1. Log in to Docker Hub:

    docker login -u <your-dockerhub-username> -p <your-dockerhub-password>
  2. Tag the Docker Image:

    docker tag sample-ecommerce-app <your-registry>/sample-ecommerce-app:nodejs
  3. Push the Image to the Registry:

    docker push <your-registry>/sample-ecommerce-app:nodejs
  4. Verify on Docker Hub.

After running the steps, the output will appear as follows:



Deploying Nexus Repository as a Docker Container

Run the Nexus Container

To deploy the Nexus Repository as a container, run the following command:

docker run -d -p 8081:8081 --name nexus sonatype/nexus3

Access Nexus

  1. Open your browser and navigate to:
    http://<your-host-ip>:8081
    


  2. Retrieve the Admin Password:

    • Run the following command to get the admin password:

      docker exec -it nexus cat /nexus-data/admin.password

    • Click on the sign-in icon in the top-right corner.
  3. Log in with the default credentials:

    • Username: admin
    • Password: the retrieved password
  4. Update your password after the first login, as shown below.
  5. Select Enable anonymous access → click Next → finish the setup.

Configure Nexus

Create a Docker Repository as shown below

  1. Navigate to Nexus Repositories:

    • Click on the "Settings" (gear icon) → "Repositories".
  2. Create a New Repository:

    • Click on "Create repository".
    • Choose "docker (hosted)" for pushing Docker images.
  3. Configure the Repository:

    • Name: Enter a name for the repository (e.g., docker-hosted).
    • Allow anonymous Docker pull: Enable this option if needed.
  4. Click on Create repository.

Securely Managing Credentials with HashiCorp Vault

Storing and Accessing Credentials in Vault

Storing Nexus Credentials

1. Enable the KV Secrets Engine

Ensure the KV secrets engine is enabled in Vault to securely store credentials.

vault secrets enable -path=nexus kv

2. Store Nexus Credentials

Use the vault kv put command to securely store your Nexus credentials and repository URL or token:

vault kv put nexus/credentials \
    username="your-nexus-username" \
    password="your-nexus-password" \
    repo_url=https://nexus.example.com

Replace https://nexus.example.com with your Nexus repository URL.

How to Find Your Nexus Repository URL

  • Log in to your Nexus Repository.
  • Navigate to Nexus Repositories:
    • Click on the "Settings" (gear icon) → "Repositories".
  • Identify the repository that you previously created, click on it.
  • Copy the repository URL displayed under the repository details as shown below.


3. Retrieve Nexus Credentials

To fetch the stored credentials:

  • Retrieve all stored credentials:
    vault kv get nexus/credentials

Storing Docker Credentials:

vault kv put secret/docker username="<user-name>" password="<your-password>"

Retrieve Docker Credentials:

  • Fetch the username:
    vault kv get -field=username secret/docker
  • Fetch the password:
    vault kv get -field=password secret/docker

Storing Snyk Token

1. Sign In to Snyk

  • Go to the Snyk login page.
  • Log in using your preferred method (e.g., email/password, GitHub, GitLab, or SSO).

2. Navigate to Your API Token

  • Click on your organization name in the bottom-left corner.
  • Select Account Settings from the dropdown menu.
  • Locate Auth Token and click "click to show" to reveal the token.
  • Below is a screenshot for your reference.


3. Enable the KV Secrets Engine (if not already enabled)

vault secrets enable -path=snyk kv
  • -path=snyk: Specifies a custom path for storing Snyk-related secrets. You can customize this path as needed.

4. Store the Snyk Token

vault kv put snyk/token api_token="your-snyk-token"

Replace your-snyk-token with your actual Snyk token.

5. Retrieve the Snyk Token

To fetch the token programmatically or manually:

vault kv get -field=api_token snyk/token

The -field=api_token flag extracts only the token value.


Monitoring with Prometheus and Grafana

Add Prometheus to Node.js Application

  1. Install Prometheus client:
    npm install prom-client
  2. Expose metrics in server.js (the repository's server.js already includes the metrics endpoint).
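For reference, the exposed endpoint typically looks like the following sketch — it assumes server.js uses Express and that `app` is the existing application object; the handler shape follows the prom-client documentation:

```javascript
// Sketch: expose Prometheus metrics from the Node.js app.
const client = require('prom-client');
client.collectDefaultMetrics(); // CPU, memory, event-loop metrics, etc.

app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});
```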

Next Steps

  1. Test this updated server.js:
    node server.js 

To Run in the Background:
This allows you to keep the process running without needing to open duplicate terminals.

node server.js &

Access Prometheus metrics at: http://<public-ip>:3000/metrics to ensure it is working as expected.

You will see the metrics once you access the URL:



Open a Separate Terminal for Prometheus Setup

Important: Jump to Install and Configure Prometheus if you are already running Node.js in the background.

  1. Right-click on the tab of your terminal session.
  2. From the context menu, select the option 'Duplicate Session'.
  3. This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
  4. After entering into the duplicate terminal, get sudo access and navigate to:
    cd DevSecOps-End-to-End-Project/src

Install and Configure Prometheus

  1. Download and run Prometheus:
    wget https://github.com/prometheus/prometheus/releases/download/v2.47.0/prometheus-2.47.0.linux-amd64.tar.gz
    tar -xvzf prometheus-2.47.0.linux-amd64.tar.gz
    cd prometheus-2.47.0.linux-amd64

Configure Prometheus

  1. Find the prometheus.yml file: ensure the configuration file exists in the current directory.

  2. Verify the file

    ls
  3. Edit the File

    vi prometheus.yml
  4. Locate the scrape_configs: section.

  5. Replace the following in your prometheus.yml file:

  • Job Name: Change it to "node-js-app".
  • Host: Replace localhost with your public IP address.
  • Port: Update the port to 3000.

Here’s an example of the updated configuration:

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "node-js-app"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["<public IP>:3000"]   # Replace with your Public IP

Save the file in the same directory as Prometheus.

Run Prometheus

  • To start the Prometheus server, use the following command:

     ./prometheus --config.file=prometheus.yml

To Run in the Background:
This allows you to keep the process running without needing to open duplicate terminals.

./prometheus --config.file=prometheus.yml &
  • Open the Prometheus server in your browser.
    http://<public-ip>:9090/

Prometheus Successfully Running on Port 9090



  • Navigate to the Status tab.

  • Choose Targets from the dropdown.

Below is the result you can expect:



Open a Separate Terminal for Grafana Setup

Important: Jump to Install and Configure Grafana if you are already running Prometheus in the background.

  1. Right-click on the tab of your terminal session.
  2. From the context menu, select the option 'Duplicate Session'.
  3. This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
  4. After entering into the duplicate terminal, get sudo access and navigate to:
    cd DevSecOps-End-to-End-Project/src

Install and Configure Grafana

  1. Download and run Grafana:

    wget https://dl.grafana.com/oss/release/grafana-10.0.0.linux-amd64.tar.gz
    tar -xvzf grafana-10.0.0.linux-amd64.tar.gz
    cd grafana-10.0.0/bin

    Run Grafana

    ./grafana-server

Resolve Port Conflict with Grafana

You may encounter an error because Grafana tries to access port 3000, which is already occupied by Node.js. To resolve this, we need to change the Grafana port to 3001.

Steps to Change the Grafana Port

1. Find the defaults.ini file by running the following command:

find / -name defaults.ini 2>/dev/null

2. Navigate to the conf directory:

cd ../conf

3. Edit the defaults.ini file:

vi defaults.ini

4. Add the following line to set the Grafana port to 3001, as shown below:

http_port = 3001


Restart Grafana

Now, navigate back to the Grafana execution folder:

cd /root/DevSecOps-End-to-End-Project/src/prometheus-2.47.0.linux-amd64/grafana-10.0.0/bin

Run Grafana again:

./grafana-server  

To Run in the Background:
Appending & keeps Grafana running while freeing the terminal, so you do not need to open duplicate terminals.

./grafana-server &

Access Grafana:

http://<server-ip>:3001

Grafana Running and Accessible on the Browser



Login with Default Credentials

  1. Use the following default credentials to log in:
    • Username: admin
    • Password: admin

You will be prompted to change the password on your first login. Follow the instructions to set a strong, secure password.

Once you log in, you will see the Grafana dashboard as shown below:



Configure Prometheus as a Data Source

Add Prometheus as a data source.

  • On the Grafana home page, click on Data sources.

The following page will appear:



Select Prometheus from the list.

Enter the Prometheus server URL (http://<public-ip>:9090) as shown below



Click on Save to continue.

Import a Pre-Built Dashboard

Go to Home Page > toggle menu > Dashboards > New > Import in Grafana.



Load the Node.js Dashboard

  1. Enter the Dashboard ID:

    • Use the Dashboard ID for Node.js: 11159.
  2. Click Load to fetch the dashboard configuration.

  3. In the next step, select Prometheus as the data source to proceed.

The interface will appear as follows:



Click on Import.

Here is the Node.js application dashboard after the import completes:



From Scans to Dashboards: Unlocking Security Insights with Trivy, TFsec, Prometheus, and Grafana

Trivy and TFsec are powerful security scanning tools for containers and Infrastructure as Code (IaC), but they lack a built-in graphical interface for visualizing vulnerabilities. This project bridges that gap by integrating Trivy and TFsec with Prometheus and Grafana, transforming raw security scan data into insightful, real-time dashboards for better monitoring and decision-making. πŸš€

Trivy Integration

Run Trivy and Generate JSON Output

Navigate to Docker directory:

cd DevSecOps-End-to-End-Project/Docker

Scan Docker Image with Trivy

trivy image --format json --severity HIGH,CRITICAL <image-name> > trivy-results.json

After running this command, a trivy-results.json file will be created.

Generate Prometheus Metrics from Trivy Results

Create a file generate-trivy-metrics.js and add the following content:

vi generate-trivy-metrics.js
const fs = require('fs');

try {
    console.log('Reading Trivy results...');
    const trivyResults = JSON.parse(fs.readFileSync('trivy-results.json', 'utf8'));

    console.log('Generating Prometheus metrics...');
    const metrics = [
        '# HELP trivy_vulnerabilities Trivy vulnerability scan results',
        '# TYPE trivy_vulnerabilities gauge',
    ];
    // Results (and a result's Vulnerabilities) can be absent when nothing
    // is found, so fall back to empty arrays to avoid a TypeError.
    (trivyResults.Results || []).forEach((result) => {
        (result.Vulnerabilities || []).forEach((vuln) => {
            metrics.push(`trivy_vulnerabilities{image="${result.Target}",severity="${vuln.Severity}",id="${vuln.VulnerabilityID}"} 1`);
        });
    });

    console.log('Writing metrics to trivy-metrics.prom...');
    fs.writeFileSync('trivy-metrics.prom', metrics.join('\n'));
    console.log('Metrics file trivy-metrics.prom created successfully.');
} catch (error) {
    console.error('Error:', error.message);
}

Run the script:

node generate-trivy-metrics.js

A trivy-metrics.prom file will be created.
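For reference, here is a minimal sketch of the JSON shape the script expects. The field names (Results, Target, Vulnerabilities, Severity, VulnerabilityID) come from Trivy's JSON output; the image name and CVE values below are invented for illustration:

```javascript
// Invented sample mirroring Trivy's JSON layout (values are illustrative).
const sample = {
  Results: [
    {
      Target: "sample-app:latest",
      Vulnerabilities: [
        { VulnerabilityID: "CVE-2023-0001", Severity: "HIGH" },
        { VulnerabilityID: "CVE-2023-0002", Severity: "CRITICAL" },
      ],
    },
  ],
};

// Each vulnerability becomes one gauge sample with value 1:
const lines = [];
sample.Results.forEach((result) => {
  (result.Vulnerabilities || []).forEach((vuln) => {
    lines.push(
      `trivy_vulnerabilities{image="${result.Target}",severity="${vuln.Severity}",id="${vuln.VulnerabilityID}"} 1`
    );
  });
});

console.log(lines.join("\n"));
// trivy_vulnerabilities{image="sample-app:latest",severity="HIGH",id="CVE-2023-0001"} 1
// trivy_vulnerabilities{image="sample-app:latest",severity="CRITICAL",id="CVE-2023-0002"} 1
```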

Move the file into the metrics directory (create it first with mkdir -p metrics if it does not exist):

mv trivy-metrics.prom metrics

Start a simple HTTP server to expose metrics on port 8085:

python3 -m http.server 8085

Access Trivy metrics:

http://<public-ip>:8085/

Execution Log in Terminal



Trivy Metrics Are Now Live at <public-ip>:8085



Open a Separate Terminal

  1. Right-click on the tab of your terminal session.
  2. From the context menu, select the option 'Duplicate Session'.
  3. This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
  4. After entering into the duplicate terminal, get sudo access and navigate to:
    cd /root/DevSecOps-End-to-End-Project/src/prometheus-2.47.0.linux-amd64

Add Trivy to Prometheus Configuration

vi prometheus.yml

Append the following job under the existing scrape_configs section:

  - job_name: "trivy"
    static_configs:
      - targets: ["<your-server-ip>:8085"]

Below is the screenshot for your reference:



Reload Prometheus to Apply Changes

Since Prometheus is already running in the background, follow these steps to reload it with the updated configuration:

Steps to Reload Prometheus

  1. Find the Process ID (PID): Use the following command to locate the PID of the Prometheus process:

    ps aux | grep prometheus
  2. Stop the Current Process: Terminate the Prometheus process using the PID from the previous step:

    kill <PID>

Replace <PID> with the actual process ID. (Alternatively, Prometheus re-reads its configuration file when sent a SIGHUP: kill -HUP <PID>.)

  3. Restart Prometheus in the Background: Start Prometheus with the updated configuration in the background:

./prometheus --config.file=prometheus.yml &

Prometheus is successfully scraping metrics from Trivy



The above screenshot confirms that Prometheus is successfully scraping metrics from Trivy.

Visualize Trivy Metrics in Grafana

Follow these steps to visualize Trivy metrics in Grafana:

1. Open Grafana

Access Grafana in your browser:

http://<your-server-ip>:3001

2. Create a New Dashboard

  • Navigate to the Grafana Home Page.
  • Click on Create your first Dashboard.

You will see the following page:



3. Add a Visualization

Click on Add visualization, and you will be redirected to the following page:


Select Prometheus as the data source.

4. Run a Query

  • Enter the PromQL query trivy_vulnerabilities under the metrics field.
  • Click on Run Query.
  • Choose a visualization type, such as Table, Gauge, or Time Series.

Example:



5. View Trivy Vulnerabilities

The dashboard will display the number of vulnerabilities:



6. Save the Dashboard

  • Once you are satisfied with the visualization, save the dashboard for future use.

Verify Trivy Metrics Manually

To ensure the metrics are accurate, you can verify them manually by scanning an image directly with Trivy:

  1. Run the following command:

    trivy image <image-name>
  2. The output will display the same number of vulnerabilities as seen in Grafana:



TFsec Integration

Run TFsec and Generate JSON Output

Navigate to the terraform directory:

cd /root/DevSecOps-End-to-End-Project/terraform
tfsec . --format=json > tfsec-results.json

After running this command, a tfsec-results.json file will be created.

Generate Prometheus Metrics from TFsec Results

Create a file generate-tfsec-metrics.js and add the following content:

vi generate-tfsec-metrics.js
const fs = require('fs');

console.log("Reading TFsec results...");
const tfsecResults = JSON.parse(fs.readFileSync('tfsec-results.json', 'utf8'));

console.log("Generating Prometheus metrics...");
let metrics = "# HELP tfsec_vulnerabilities TFsec vulnerability scan results\n";
metrics += "# TYPE tfsec_vulnerabilities gauge\n";

// results can be null or absent when tfsec finds nothing, so fall back to [].
(tfsecResults.results || []).forEach(result => {
    metrics += `tfsec_vulnerabilities{severity="${result.severity}",rule_id="${result.rule_id}",description="${result.description.replace(/"/g, '\\"')}"} 1\n`;
});

console.log("Writing metrics to tfsec-metrics.prom...");
fs.writeFileSync('tfsec-metrics.prom', metrics);

console.log("Metrics file tfsec-metrics.prom created successfully.");

Run the script:

node generate-tfsec-metrics.js

A tfsec-metrics.prom file will be created.
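One caveat: the script escapes double quotes in the description, but the Prometheus text format also requires escaping backslashes and line breaks in label values. A fuller helper might look like this (a hypothetical sketch, not part of the repository):

```javascript
// Escape a string for use as a Prometheus label value: backslash first
// (so later escapes are not double-escaped), then double quotes, then
// newlines, per the exposition-format rules.
const escapeLabelValue = (value) =>
  String(value)
    .replace(/\\/g, "\\\\")
    .replace(/"/g, '\\"')
    .replace(/\n/g, "\\n");

console.log(escapeLabelValue('Bucket "logs" has no\nencryption'));
// Bucket \"logs\" has no\nencryption
```

Swapping result.description.replace(/"/g, '\\"') for escapeLabelValue(result.description) keeps the generated file parseable even when a finding's description contains backslashes or newlines.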

Move the file into the metrics directory (create it first with mkdir -p metrics if it does not exist):

mv tfsec-metrics.prom metrics

Start a simple HTTP server to expose metrics on port 8086:

python3 -m http.server 8086 &

Execution Log in Terminal



TFsec Metrics Are Now Live at <public-ip>:8086



Open a Separate Terminal

  1. Right-click on the tab of your terminal session.
  2. From the context menu, select the option 'Duplicate Session'.
  3. This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
  4. After entering into the duplicate terminal, get sudo access and navigate to:
    cd /root/DevSecOps-End-to-End-Project/src/prometheus-2.47.0.linux-amd64

Add TFsec to Prometheus Configuration

vi prometheus.yml

Append the following job under the existing scrape_configs section:

  - job_name: "tfsec"
    static_configs:
      - targets: ["<your-server-ip>:8086"]

Below is the screenshot for your reference:



Reload Prometheus to Apply Changes

Since Prometheus is already running in the background, follow these steps to reload it with the updated configuration:

Steps to Reload Prometheus

  1. Find the Process ID (PID): Use the following command to locate the PID of the Prometheus process:

    ps aux | grep prometheus
  2. Stop the Current Process: Terminate the Prometheus process using the PID from the previous step:

    kill <PID>

Replace <PID> with the actual process ID.

  3. Restart Prometheus in the Background: Start Prometheus with the updated configuration in the background:

./prometheus --config.file=prometheus.yml &

Prometheus is successfully scraping metrics from Tfsec



The above screenshot confirms that Prometheus is successfully scraping metrics from Tfsec.

Visualize Tfsec Metrics in Grafana

Follow these steps to visualize Tfsec metrics in Grafana:

1. Open Grafana

Access Grafana in your browser:

http://<your-server-ip>:3001

2. Create a New Dashboard

  • Navigate to the Grafana Home Page.
  • Click on Create your first Dashboard.

You will see the following page:



3. Add a Visualization

Click on Add visualization, and you will be redirected to the following page:


Select Prometheus as the data source.

4. Run a Query

  • Enter the PromQL query tfsec_vulnerabilities under the metrics field.
  • Click on Run Query.
  • Choose a visualization type, such as Table, Gauge, or Time Series.

Example:



5. View Tfsec Vulnerabilities

The dashboard will display the number of vulnerabilities:



Note: Previously, when TFSec was run, no vulnerabilities were detected, and the scan results were clear. To demonstrate the Grafana dashboard functionality, I intentionally made changes to the main.tf file to introduce vulnerabilities.

6. Save the Dashboard

  • Once you are satisfied with the visualization, save the dashboard for future use.

Verify Tfsec Metrics Manually

To ensure the metrics are accurate, you can verify them manually by scanning the Terraform files directly with tfsec:

  1. Run the following command:

    tfsec .
  2. The output will display the same number of vulnerabilities as seen in Grafana:



OpenTelemetry Setup and Configuration

Since we have already installed the OpenTelemetry-related dependencies and updated the collectorUrl in server.js earlier, let's proceed with downloading the OpenTelemetry Collector.



Download and Set Up OpenTelemetry Collector

The OpenTelemetry Collector processes and exports telemetry data from the application to a desired backend.

Download the Collector

wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.83.0/otelcol-contrib_0.83.0_linux_amd64.tar.gz

Extract the File

tar -xvf otelcol-contrib_0.83.0_linux_amd64.tar.gz

Move the Binary to a System-Wide Location

sudo mv otelcol-contrib /usr/local/bin/otelcol
  • Why: Places the binary in a directory included in your system’s PATH, so you can run it from anywhere.
  • Result: The Collector is installed and ready to use.

Verify Installation

otelcol --version

  • Why: Confirms that the Collector is installed correctly.
  • Result: Displays the version of the Collector.

Run the OpenTelemetry Collector

Ensure the otel-collector-config.yaml file is present in your directory. Run the Collector with the configuration file.

otelcol --config otel-collector-config.yaml
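For reference, the configuration loaded above might look roughly like the sketch below. The OTLP ports, the logging exporter, and the telemetry address are assumptions based on common Collector defaults; treat the otel-collector-config.yaml shipped with this repository as the source of truth.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  logging:                     # prints received spans to the Collector log

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]
  telemetry:
    metrics:
      address: 0.0.0.0:8888    # internal metrics, scraped by Prometheus
```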

Check the Collector logs to confirm traces are being received:

INFO    TracesExporter  {"kind": "exporter", "data_type": "traces", "resource spans": 1, "spans": 1}

Metrics Endpoint

The OpenTelemetry Collector exposes its internal metrics at the /metrics endpoint. These metrics are in Prometheus format and provide insights into the Collector's performance and health.

Access the metrics at:

http://your-public-ip:8888/metrics


8. Enhance Tracing in the Application (This step is already included in server.js)

Add custom spans to improve the observability of specific routes.

Edit server.js to Add Custom Spans

vi /root/DevSecOps-End-to-End-Project/src/server.js

Add the following code to create a custom span for the /custom route:

app.get('/custom', (req, res) => {
    // Start a span for this request, send the response, then end the
    // span so its duration covers the handler's work.
    const span = tracer.startSpan('Custom Route Span');
    res.send('This is a custom route');
    span.end();
});
  • Why: Custom spans provide detailed observability for specific operations.
  • Result: Requests to /custom will generate a new span named Custom Route Span.

Restart the Application

node server.js
  • Why: Applies the changes to the server.
  • Result: The application restarts with the new custom span functionality.

Test the Custom Route

Visit the custom route in your browser or using curl:

http://<your-server-ip>:3000/custom


  • Why: Generates traffic to test the new custom span.
  • Result: A span is created for the /custom route and sent to the Collector.

Configure Prometheus to Scrape OpenTelemetry Metrics

vi prometheus.yml

Append the following job under the existing scrape_configs section:

  - job_name: 'otel-collector'
    scrape_interval: 5s
    static_configs:
      - targets: ["your-public-ip:8888"]

(Replace your-public-ip with your actual server IP)

Below is the screenshot for your reference:



Reload Prometheus to Apply Changes

Since Prometheus is already running in the background, follow these steps to reload it with the updated configuration:

Steps to Reload Prometheus

  1. Find the Process ID (PID): Use the following command to locate the PID of the Prometheus process:

    ps aux | grep prometheus
  2. Stop the Current Process: Terminate the Prometheus process using the PID from the previous step:

    kill <PID>

Replace <PID> with the actual process ID.

  3. Restart Prometheus in the Background: Start Prometheus with the updated configuration in the background:

./prometheus --config.file=prometheus.yml &

Prometheus is successfully scraping metrics from OpenTelemetry



The above screenshot confirms that Prometheus is successfully scraping metrics from OpenTelemetry.

Visualize OpenTelemetry Metrics in Grafana

Follow these steps to visualize OpenTelemetry metrics in Grafana:

1. Open Grafana

Access Grafana in your browser:

http://<your-server-ip>:3001

2. Create a New Dashboard

  • Navigate to the Grafana Home Page.
  • Click on Create your first Dashboard.

You will see the following page:



3. Add a Visualization

Click on Add visualization, and you will be redirected to the following page:


Select Prometheus as the data source.

4. Run a Query

  • Enter the PromQL queries in the metrics field, run the query, and choose a visualization type like Table, Gauge, or Time Series.

    • CPU Usage Visualization
    rate(process_cpu_seconds_total[5m])
    

    Visualization Type: Time Series



    • Memory Usage Visualization
    process_resident_memory_bytes
    

    Visualization Type: Gauge



    • Uptime Visualization
    time() - process_start_time_seconds
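To make these queries concrete: rate() over a counter is the per-second increase across the window, so the CPU query comes out as cores consumed, and the uptime query is just current time minus the start-time gauge. A small numeric sketch (all sample values invented):

```javascript
// rate(counter[window]) ≈ (last - first) / window-seconds.
function rate(firstValue, lastValue, windowSeconds) {
  return (lastValue - firstValue) / windowSeconds;
}

// 30 CPU-seconds consumed across a 5-minute (300 s) window:
const cpu = rate(120, 150, 300);
console.log(cpu); // 0.1 → the process kept ~10% of one core busy

// time() - process_start_time_seconds, with invented Unix timestamps:
const uptimeSeconds = 1700000600 - 1700000000;
console.log(uptimeSeconds); // 600 → the process has been up 10 minutes
```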
    


5. Save the Dashboard

  • Once you are satisfied with the visualization, save the dashboard for future use.

πŸ”” Jenkins Slack Notification Configuration

πŸ“’ Automate Build Status Alerts with Slack Integration in Jenkins!

βœ… Follow this Guide:

πŸ“Œ Step-by-Step Slack Notification Setup in Jenkins



⚠️ Configuring Grafana Alerts for Security and Performance Monitoring

πŸ“Œ Step 1: Open Grafana & Navigate to Alerting

  • Go to Grafana Dashboard (http://your-grafana-ip:3001).
  • Click on "Alerting" β†’ "Alert Rules".
  • Click "Create Alert Rule".

πŸ“Œ Step 2: Select Data Source & Define Alert Conditions

  • Choose Prometheus as the data source.
  • Define a PromQL query to specify alert conditions.

πŸ“Œ Step 3: Define Alert Queries for Each Tool

βœ… Trivy Security Alerts (Critical Vulnerabilities)

trivy_vulnerabilities{severity="CRITICAL"} > 0
  • Condition: If CRITICAL vulnerabilities exist, trigger an alert.

βœ… TFsec Security Alerts (Terraform Misconfigurations)

  • PromQL Query:
    tfsec_vulnerabilities > 0
    
  • Condition: If TFsec detects infrastructure misconfigurations, trigger an alert.

βœ… OpenTelemetry Performance Alerts (High CPU Usage)

  • PromQL Query:
    rate(process_cpu_seconds_total[5m]) > 0.9
    
  • Condition: If CPU usage exceeds 90% for 5 minutes, trigger an alert.
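Each of these rules boils down to "fire when some matching series crosses the threshold". A sketch of the Trivy rule's logic over hypothetical scraped series:

```javascript
// Hypothetical series scraped from the trivy_vulnerabilities metric;
// in this setup every sample carries the value 1.
const series = [
  { labels: { severity: "HIGH", id: "CVE-2023-0001" }, value: 1 },
  { labels: { severity: "CRITICAL", id: "CVE-2023-0002" }, value: 1 },
];

// trivy_vulnerabilities{severity="CRITICAL"} > 0 fires when at least one
// CRITICAL series has a value above zero:
const firing = series.some(
  (s) => s.labels.severity === "CRITICAL" && s.value > 0
);
console.log(firing); // true → the alert fires and notifications go out
```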

πŸ“Œ Configure Notification Channels

  • Navigate to "Alerting" β†’ "Notification Policies".
  • Click "Add a New Contact Point".

βœ… Slack Integration

  • Select "Slack" as the notification type.
  • Paste the Slack Webhook URL.
  • Click "Send Test Notification" to verify.


βœ… Microsoft Teams Integration

  • Select "Webhook" as the notification type.
  • Paste the Microsoft Teams Webhook URL.

βœ… Email Notification Setup

  • Select "Email" as the notification type.
  • Enter the recipient email (e.g., [email protected]).

Enable Alerting & Save

  • Click "Save & Apply".
  • Ensure alert rules are active.
  • Generate a test alert by manually increasing vulnerabilities or CPU usage.

Verify Alerts

  • Check if Slack, Teams, or Email notifications are received.
  • Adjust alert thresholds to minimize false positives.

Automating with Jenkins Pipeline

Creating Jenkins Pipeline

  1. In the Jenkins dashboard, click on + New Item or New Job.
  2. Provide a name (e.g., Devsecops Pipeline).
  3. Select Pipeline and click OK to proceed.
  4. In the left-side menu, click Configure.
  5. In the Pipeline section:
    • In Definition, select Pipeline script from SCM.
    • Under SCM, select Git.
    • Provide the repository URL: https://github.com/DevopsProjects05/DevSecOps-End-to-End-Project.git.
    • In Branches to build, enter: */main.
    • For Script Path, enter: Jenkinsfile/Jenkinsfile.
  6. Click Apply and Save.

Store the Private Key in Jenkins Credentials

Once the EC2 instance is launched, store the private key in Jenkins Global Credentials for secure SSH access.

  1. Open the .pem file (e.g., ci-cd-key.pem) using Notepad or any text editor.
  2. Copy the entire private key, including: -----BEGIN RSA PRIVATE KEY----- (Private Key Content) -----END RSA PRIVATE KEY-----
  3. Go to Jenkins Dashboard β†’ Manage Jenkins β†’ Manage Credentials.
  4. Under Global credentials, click Add Credentials.
  5. Set the following values:
  • Kind: Select SSH Username with Private Key.
  • ID: Enter "ANSIBLE_SSH_KEY".
  • Username: Enter "ec2-user".
  • Private Key: Select "Enter directly", then paste the copied private key.
  6. Click Add, then Apply & Save.

βœ… Your private key is now securely stored in Jenkins, ready for automated deployments! πŸš€

# βš οΈπŸš€ **IMPORTANT NOTE: BEFORE RUNNING THE PIPELINE** πŸš€βš οΈ

Before executing the Jenkins pipeline, ensure the following critical conditions are met to prevent failures:

## ❌ Delete Any Existing EC2 Instance
- Before running the pipeline, delete any existing EC2 instance to avoid conflicts and unexpected errors.

## πŸ” Ensure Proper Permissions for Ansible SSH Key
- The SSH key used by Ansible must have correct permissions to enable a secure connection.

### βœ… Run the Following Command (replace with your workspace name)

chmod 600 /var/lib/jenkins/workspace/your-workspace-name/ansible/ansible_ssh_key.pem

Note: ssh refuses private keys that other users can read, so use a restrictive mode such as 600 rather than 777.

Build the Jenkins Pipeline

Once everything is set up, follow these steps to execute the Jenkins pipeline:

  1. Go to the Jenkins Dashboard.
  2. Click on the pipeline that you have created.
  3. Click "Build Now" to start executing the pipeline.

βœ… Jenkins will now trigger the pipeline execution and automate the deployment process! πŸš€


CI-CD Pipeline: Automated DevSecOps Deployment

This Jenkins pipeline automates the secure deployment of a Node.js application by integrating Vault for secrets management, SonarQube for code quality, Snyk for security scanning, and TFScan for Terraform security checks. It provisions AWS infrastructure using Terraform, builds and scans Docker images with Trivy, stores artifacts in Nexus, and deploys via Ansible. Additionally, it ensures continuous monitoring and automated notifications via Slack. This end-to-end DevSecOps pipeline guarantees a robust, secure, and efficient software delivery process.

You will see the stage view of your pipeline:



Who Can Use This Project?

βœ… DevOps Engineers - To implement security in CI/CD pipelines.
βœ… Security Teams - To monitor vulnerabilities & enforce compliance.
βœ… Developers - To integrate security scanning into development workflows.
βœ… Cloud Architects - To manage Infrastructure as Code (IaC) securely.
βœ… Interview Candidates - To showcase hands-on experience in DevSecOps.


Challenges and Learnings

Challenges Faced:

Tool Integration Complexity: Integrating multiple tools (Jenkins, Terraform, Trivy, SonarQube, etc.) required careful configuration and troubleshooting to ensure compatibility and smooth operation.

Security and Secrets Management: Securely managing sensitive credentials using HashiCorp Vault and ensuring they were not exposed in logs or environment variables was a critical challenge.

Monitoring and Observability: Setting up Prometheus, Grafana, and OpenTelemetry for real-time monitoring and tracing required learning new tools and configuring custom metrics.

Cost Optimization: Balancing cost and functionality while running all tools on a single EC2 instance required careful resource allocation and performance tuning.

Pipeline Failures: Debugging pipeline failures (e.g., Terraform errors, Docker build issues) required a systematic approach to identify and resolve root causes.

Key Learnings:

Automation is Key: Automating repetitive tasks (e.g., infrastructure provisioning, security scans) reduces human error and speeds up deployments.

Security is Non-Negotiable: Integrating security tools (e.g., Trivy, TFsec, Snyk) early in the pipeline ensures vulnerabilities are caught before production.

Observability is Crucial: Real-time monitoring and alerting (via Prometheus, Grafana, and OpenTelemetry) provide actionable insights into system performance and security.

Documentation Matters: Clear and detailed documentation is essential for onboarding new team members and troubleshooting issues.

Collaboration is Vital: Integrating Slack notifications ensures team members are informed about pipeline status and security alerts.


Future Enhancements

Kubernetes Integration: Migrate the application and tools to a Kubernetes cluster for better scalability and resource management.

Advanced Security Measures: Integrate OWASP ZAP for dynamic application security testing (DAST) and runtime security monitoring using tools like Falco.

Multi-Cloud Support: Extend the pipeline to support multi-cloud deployments (e.g., AWS, Azure, GCP) using Terraform or Crossplane.

Enhanced Monitoring: Add distributed tracing using Jaeger or Zipkin and implement log aggregation using the ELK Stack or Fluentd.

Pipeline Optimization: Implement parallel stages in the Jenkins pipeline and add automated rollback mechanisms for deployment failures.

Compliance and Governance: Integrate compliance checks using tools like Open Policy Agent (OPA) or HashiCorp Sentinel.

AI/ML Integration: Use machine learning models to predict and prevent potential security vulnerabilities or performance bottlenecks.

Improved Collaboration: Integrate Jira or Trello for task management and add more notification channels (e.g., Microsoft Teams, PagerDuty).

Cost Management: Implement cost monitoring and optimization tools like AWS Cost Explorer and add automated resource scaling.

User-Friendly Dashboards: Create a centralized dashboard for all pipeline metrics, security insights, and deployment statuses using Grafana or Kibana.
