- Project Overview
- Key Objectives
- Architecture
- Create Jenkins Server
- Install OpenTelemetry and Project Dependencies
- Jenkins Installation
- Configure Tools
- Configuring Global Tools in Jenkins
- Create Your First Job to Verify Jenkins
- Deploying SonarQube as a Container
- Installing HashiCorp Vault for Secure Secrets Management
- Integrating Tfsec to Enhance Terraform Security Scanning
- Integrating Trivy to Enhance Container Image Scanning
- Push Docker Image to a Container Registry
- Deploying Nexus Repository as a Docker Container
- Securely Managing Credentials with HashiCorp Vault
- Monitoring with Prometheus and Grafana
- From Scans to Dashboards: Unlocking Security Insights with Trivy, Tfsec, Prometheus, and Grafana
- OpenTelemetry Setup and Configuration
- Jenkins Slack Notification Configuration
- Configuring Grafana Alerts for Security and Performance Monitoring
- Automating with Jenkins Pipeline
- CI/CD Pipeline: Automated DevSecOps Deployment
- Who Can Use This Project?
- Challenges and Learnings
- Future Enhancements
This DevSecOps project implements a fully automated CI/CD pipeline that integrates security scanning, observability, and infrastructure automation. The project ensures secure application deployment by incorporating static code analysis, container security scanning, infrastructure as code (IaC) validation, and real-time monitoring using industry-leading tools like OpenTelemetry, Prometheus, Grafana, Trivy, and TFSec.
✅ Continuous Integration & Deployment (CI/CD): Automate application builds, testing, and deployments using Jenkins, Docker, and Node.js to ensure seamless software delivery.
✅ Secrets Management: Securely handle sensitive credentials using HashiCorp Vault, ensuring encrypted access to secrets.
✅ Infrastructure as Code (IaC) Security: Automate infrastructure provisioning with Terraform, enforce best practices, and enhance security using TFSec for IaC validation.
✅ Static Code & Dependency Analysis: Ensure code quality, security, and compliance with SonarQube for static analysis, Snyk for dependency vulnerability scanning, and Trivy for container image security.
✅ Monitoring & Observability: Implement real-time performance tracking and security monitoring using Prometheus, Grafana, and OpenTelemetry to gain full visibility into system health.
✅ Artifact Management: Store, manage, and distribute application artifacts efficiently using Nexus Repository to improve version control and software traceability.
✅ Configuration Management: Automate and standardize system configurations with Ansible for consistent and scalable infrastructure setup.
✅ Team Collaboration & Alerting: Enhance developer communication and incident response by integrating Slack notifications for build failures, security alerts, and deployment updates.
To ensure a cost-effective solution without compromising on functionality, all tools used in this DevSecOps project have been integrated into the Jenkins server. This approach avoids the need for additional servers or infrastructure, reducing operational costs.
- Centralized Integration: Running all tools (e.g. Prometheus, Grafana, Trivy, TFsec, SonarQube) on the same server minimizes resource utilization and eliminates the cost of multiple servers.
- Simplified Management: Centralized integration simplifies maintenance, monitoring, and updates for all tools.
- Efficient Resource Usage: Using the Jenkins server for multi-purpose tasks optimizes the allocated resources, leveraging idle capacity during pipeline executions.
- All tools are installed and configured on the Jenkins server instance.
- Prometheus and Grafana are set up to run on separate ports to avoid conflicts.
- Tools like Trivy and TFsec are run as CLI tools or containers, leveraging Docker where applicable.
If you want to run the server continuously while using the terminal for other tasks, execute the provided command to run it in the background
Why Background Running? Running the server in the background avoids the need to open duplicate terminals when integrating other tools or performing additional tasks.
- Log in to the AWS Management Console.
- Navigate to EC2 > Instances > Launch Instances.
- Configure the instance:
- AMI: Amazon Linux 2.
- Instance Type: t3.xlarge
- Create a key pair (store it in a secure place).
- Storage: 50 GiB gp2.
- Security Group: Allow SSH (port 22) and HTTP (port 8080).
- Assign a key pair.
- Launch the instance and wait for it to initialize.
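For repeatability, the same instance can also be launched from the AWS CLI instead of the console. This is a sketch under assumptions: `<ami-id>`, `<your-key>`, and `<sg-id>` are placeholders for your region's Amazon Linux 2 AMI, your key pair, and a security group that allows ports 22 and 8080.

```shell
# Sketch: launch the Jenkins server via the AWS CLI (placeholders must be replaced).
aws ec2 run-instances \
  --image-id <ami-id> \
  --instance-type t3.xlarge \
  --key-name <your-key> \
  --security-group-ids <sg-id> \
  --block-device-mappings 'DeviceName=/dev/xvda,Ebs={VolumeSize=50,VolumeType=gp2}' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=jenkins-server}]'
```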
sudo yum update -y
sudo yum install git -y
Node.js is required to run the project, and npm (Node Package Manager) manages the project's dependencies.
curl -sL https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum install nodejs -y
node -v
npm -v
This project uses OpenTelemetry (OTel) for distributed tracing and observability. Install the necessary OpenTelemetry libraries:
- Clone the Repository: Run the following command to clone the GitHub repository:
git clone https://github.com/DevopsProjects05/DevSecOps-End-to-End-Project.git
cd DevSecOps-End-to-End-Project/src
- Install OpenTelemetry libraries:
npm install @opentelemetry/sdk-trace-node
npm install @opentelemetry/exporter-trace-otlp-http
- @opentelemetry/sdk-trace-node: Enables OpenTelemetry tracing in the Node.js application.
- @opentelemetry/exporter-trace-otlp-http: Sends trace data from the application to the OpenTelemetry Collector over HTTP using the OTLP protocol.
Configure the application to send trace data to the OpenTelemetry Collector.
- Open the server.js file:
vi server.js
- Locate and update the following line:
url: 'http://<collector-ip>:4318/v1/traces'
- Replace <collector-ip> with the public IP address of your OpenTelemetry Collector:
url: 'http://public-ip:4318/v1/traces'
- Save and exit the file.
Run the application to generate and send telemetry data to the OpenTelemetry Collector.
- Executing node server.js in the background
To run in the foreground:
node server.js
To run in the background (this keeps the process running without needing to open duplicate terminals):
node server.js &
- Access the application at:
http://<public-ip>:3000
- To stop the server (if you ran it without &, optional):
Ctrl + C
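If you started the server with `&`, Ctrl + C will not stop it. A minimal sketch for stopping the background process, assuming it was started as `node server.js`:

```shell
# Find the PID of the background Node.js server, then stop it with SIGTERM.
pgrep -f "node server.js"
kill "$(pgrep -f 'node server.js')"
```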
- Add the Jenkins repository:
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io-2023.key
sudo yum upgrade -y
- Install Java 17:
amazon-linux-extras enable corretto17
sudo yum install -y java-17-amazon-corretto
java --version
- Install Jenkins:
sudo yum install jenkins -y
- Enable and start Jenkins:
sudo systemctl enable jenkins
sudo systemctl start jenkins
sudo systemctl status jenkins
Once you access Jenkins at http://<Jenkins-Instance-IP>:8080, you will see the following page:
Run the following command in the terminal:
cat /var/lib/jenkins/secrets/initialAdminPassword
Copy the output (the initial admin password) and paste it into the Jenkins setup page to continue.
After entering the initial admin password, you will be redirected to a page to install plugins as shown below:
Select Install suggested plugins to install the necessary plugins.
After installing plugins, you will be redirected to a page to set up a Jenkins user account. Fill in the required details:
Provide the necessary details to create your Jenkins account.
Click Save and Finish to start using Jenkins.
- Go to Jenkins Dashboard > Manage Jenkins > Plugins.
- Navigate to the Available tab and search for these plugins:
  - Git Plugin: For integrating Git repositories (pre-installed).
  - Pipeline Plugin: For creating declarative or scripted pipelines.
    - Pipeline: Stage View
    - Pipeline: Declarative Agent API
  - Terraform Plugin: For running Terraform commands in Jenkins.
  - HashiCorp Vault: To pull secrets from Vault (optional, based on your goals).
  - HashiCorp Vault Pipeline
  - SonarQube Scanner Plugin: For static code analysis integration.
  - Docker: To run Docker-related commands within Jenkins.
  - Snyk Security: For code and dependency scanning.
  - Ansible Plugin: To automate configuration management.
  - Prometheus: For monitoring and observability.
  - OpenTelemetry Agent Host Metrics Monitor Plugin
  - Slack Notification
- Install plugins as shown below:
After installing plugins or making configuration changes, you may need to restart your Jenkins server. You can do this in one of the following ways:
- Using the systemctl command (Linux systems):
systemctl restart jenkins
- Using the Jenkins UI: If you're on the plugin installation page, check the "Restart Jenkins when installation is complete and no jobs are running" box at the bottom of the page. Alternatively, navigate to the following URL in your browser to restart Jenkins:
http://<public-ip>:8080/restart
Replace <public-ip> with your Jenkins server's public IP address. Ensure Jenkins has fully restarted before proceeding with further tasks.
If you stop the Jenkins instance and start it again, you may experience slowness when accessing Jenkins or making changes. This happens because the instance's public IP address changes after a restart, while Jenkins is still configured with the old URL.
To resolve this issue, follow these steps to update the latest IP address in Jenkins:
- Open the Jenkins configuration file:
vi /var/lib/jenkins/jenkins.model.JenkinsLocationConfiguration.xml
- Update the Jenkins URL field with the new public IP address of the instance.
- Save the changes and restart Jenkins:
systemctl restart jenkins
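This can also be done non-interactively. A hedged sketch: it assumes the default `<jenkinsUrl>` element in JenkinsLocationConfiguration.xml, and `203.0.113.10` is a placeholder for your instance's new public IP; verify the element name in your copy of the file first.

```shell
# Rewrite the stored Jenkins URL in place, then restart Jenkins.
NEW_IP=203.0.113.10   # placeholder: your instance's new public IP
sudo sed -i "s|<jenkinsUrl>.*</jenkinsUrl>|<jenkinsUrl>http://${NEW_IP}:8080/</jenkinsUrl>|" \
  /var/lib/jenkins/jenkins.model.JenkinsLocationConfiguration.xml
sudo systemctl restart jenkins
```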
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum install -y terraform
terraform --version
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec --version
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh
sudo mv /root/bin/trivy /usr/local/bin/trivy
If you encounter an issue where trivy is not found after installation, follow these steps to locate and move it manually:
You may see the following error when trying to move trivy:
mv: cannot stat '/root/bin/trivy': No such file or directory
Locate the binary:
find / -name trivy 2>/dev/null
This should return a path similar to:
/root/DevSecOps-End-to-End-Project/src/bin/trivy
mv /root/DevSecOps-End-to-End-Project/src/bin/trivy /usr/local/bin/trivy
trivy --version
If the installation was successful, this should output the installed Trivy version.
npm install -g snyk
snyk --version
yum install pip -y
pip install ansible
ansible --version
- Git:
- Go to Manage Jenkins > Global Tool Configuration.
- Under Git, click Add Git and set the path to /usr/bin/git.
Refer to the above screenshot to configure Terraform & Ansible.
Note: Ensure that you uncheck Install automatically.
- Terraform:
  - Add Terraform under Terraform installations.
  - Ensure the binary is installed at /usr/bin/.
- Ansible:
  - Add Ansible installation and set the path to /usr/bin/.
Click on Apply & Save to continue.
Follow these steps to create a Freestyle Project in Jenkins to verify that Jenkins is properly configured with additional tools:
- Create a Freestyle Project:
  - Go to the Jenkins Dashboard and click on New Item.
  - Enter a name for your job (e.g. Verify-Jenkins) and select Freestyle Project.
- Configure the Build Steps:
  - Scroll down to the Build section and click Add build step.
  - Select Execute shell and add the following commands:
echo "Jenkins is configured with additional tools!"
tfsec --version
trivy --version
snyk --version
- Save and Build:
  - Click Save to create the job.
  - Go back to the project dashboard and click Build Now to execute the job.
- Verify the Output:
  - Navigate to the Console Output of the build to verify that the commands ran successfully and the versions of tfsec, trivy, and snyk are displayed.
- Install Docker:
sudo yum install docker -y
sudo systemctl enable docker
sudo systemctl start docker
- Check Docker Status:
sudo systemctl status docker
Execute the following command to pull and run the latest SonarQube container in detached mode (-d), mapping it to port 9000:
docker run -d --name sonarcontainer -p 9000:9000 sonarqube:latest
-d → Runs the container in detached mode (in the background).
--name sonarcontainer → Assigns a custom name (sonarcontainer) to the container for easy management.
-p 9000:9000 → Maps port 9000 of the container to port 9000 of the host machine.
sonarqube:latest → Uses the latest available SonarQube image from Docker Hub.
Verify if the container is running using:
docker ps
Once the container is running, access the SonarQube web interface:
- Open a browser and navigate to:
http://<your-ec2-ip>:9000
- Username: admin
- Password: admin
Upon first login, you will be prompted to change the default password for security.
Provide a new password, e.g. Example@12345 (suggested).
- Go to Manage Jenkins > System.
- Scroll to SonarQube Servers and click Add SonarQube.
- Enter the following details:
- Name: SonarQube server (or any identifier).
- Server URL: http://<your-sonarqube-server-ip>:9000
Save the configuration.
Before proceeding, make sure to securely store the following values, as they will be required later:
- sonar.projectName
- sonar.projectKey
- Token
- Log in to SonarQube.
- Click Create a local project and provide the project name (e.g. Sample E-Commerce Project).
- Branch should be main, then click Next.
- Select Use the global setting, then click Create Project.
- Navigate to My Account > Security.
- Under Generate Tokens, enter a token name (e.g. Sample Project Token).
- Select Global Analysis from the dropdown.
- Click Generate and copy the token (save it securely; it will not be displayed again).
- Create a directory for Sonar Scanner:
mkdir -p /downloads/sonarqube
cd /downloads/sonarqube
- Download the latest Sonar Scanner:
wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-5.0.1.3006-linux.zip
unzip sonar-scanner-cli-5.0.1.3006-linux.zip
sudo mv sonar-scanner-5.0.1.3006-linux /opt/sonar-scanner
- Add Sonar Scanner to the PATH:
vi ~/.bashrc
Add the path as shown below:
export PATH="/opt/sonar-scanner/bin:$PATH"
source ~/.bashrc
Verify the installation:
sonar-scanner --version
Ensure the SonarQube Scanner plugin is installed.
- Navigate to the src directory:
cd src
- Create and edit the sonar-project.properties file:
vi sonar-project.properties
Add the following content:
# Unique project identifier in SonarQube
sonar.projectKey=Sample-E-Commerce-Project
# Display name of the project
sonar.projectName=Sample E-Commerce Project
# Directory where source code is located (relative to this file)
sonar.sources=.
# URL of the SonarQube server
sonar.host.url=http://<your-sonarqube-server-ip>:9000
# Authentication token from SonarQube
sonar.login=<your-authentication-token>
Important: Ensure that you replace the SonarQube server IP and token before scanning.
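As an alternative to hard-coding the server URL and token in the properties file, Sonar Scanner accepts the same settings as `-D` command-line properties, which keeps the token out of files committed to version control. A sketch with the same placeholders as above:

```shell
# Override sonar.host.url and sonar.login at invocation time instead of in the file.
/opt/sonar-scanner/bin/sonar-scanner \
  -Dsonar.host.url=http://<your-sonarqube-server-ip>:9000 \
  -Dsonar.login=<your-authentication-token>
```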
- Run the Sonar Scanner:
/opt/sonar-scanner/bin/sonar-scanner
- For debugging issues, use:
/opt/sonar-scanner/bin/sonar-scanner -X
If you get an error:
- Ensure your SonarQube server IP is configured in Jenkins.
- Verify that your project key and authentication token are correct.
- Make sure you are in the correct path (/src).
- Confirm that the sonar-project.properties file exists in the /src directory.
- Open your browser and navigate to http://<your-sonarqube-server-ip>:9000.
- Log in to the SonarQube dashboard.
- Locate the project (e.g., Sample E-Commerce Project).
- View analysis results, including security issues, reliability, maintainability, and code coverage.
HashiCorp Vault is used to securely manage AWS credentials and other sensitive secrets, including:
- Nexus Credentials
- Docker Hub Credentials
- Snyk Token
- SonarQube Token
- Other Confidential Secrets
By integrating Vault, we ensure that secrets are securely stored and dynamically accessed, reducing security risks.
- Why Vault? HashiCorp Vault is used to:
  - Securely store and manage sensitive information.
  - Dynamically generate AWS credentials for Terraform.
- Steps for Vault Integration: Before proceeding, we need to integrate Vault:
- Install Vault:
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum install -y vault
- Start Vault in Development Mode:
vault server -dev -dev-listen-address="0.0.0.0:8200"
Note: Copy the root token; you will need it to log in to the HashiCorp Vault server.
- Run Vault in Background (Optional):
vault server -dev -dev-listen-address="0.0.0.0:8200" &
- Access the Vault Server:
http://<public-ip>:8200
Enter the root token to log in.
- Right-click on the tab of your terminal session.
- From the context menu, select the option 'Duplicate Session'.
- This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
- After entering the duplicate terminal, get sudo access and follow the steps below.
- Set Vault's Environment Variables:
export VAULT_ADDR=http://0.0.0.0:8200
export VAULT_TOKEN=<your-root-token>
- Enable the AWS Secrets Engine:
vault secrets enable -path=aws aws
- Configure AWS Credentials in Vault:
vault write aws/config/root \
    access_key=<your-Access-key> \
    secret_key=<your-Secret-key>
- Create a Vault Role for AWS Credentials
vault write aws/roles/dev-role \
credential_type=iam_user \
policy_document=-<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["ec2:*", "sts:GetCallerIdentity"],
"Resource": "*"
}
]
}
EOF
🛡️ How to Store Vault Secrets in Jenkins Securely?
Go to Jenkins Dashboard → Click Manage Jenkins
Navigate to: Manage Credentials → Global → Add Credentials
Select "Secret Text" under Kind
Add Vault Address:
- Secret: Paste your Vault Address
- ID: VAULT_ADDR
- Click OK
Add Vault Token:
- Secret: Paste your Vault Token
- ID: VAULT_TOKEN
- Click OK
environment {
VAULT_ADDR = credentials('VAULT_ADDR')
VAULT_TOKEN = credentials('VAULT_TOKEN')
}
To verify Vault integration with Jenkins, follow these steps:
- Go to Manage Jenkins > System > Vault Plugin.
- Enter the Vault URL: http://<public-ip>:8200
- Click Apply and Save.
- Create a New Freestyle Job:
  - Go to Jenkins Dashboard > New Item.
  - Enter a job name (e.g., Test-Vault).
  - Select Freestyle Project and click OK.
- Add Build Step:
  - Under Build, click on Add Build Step.
  - Select Execute Shell.
- Add the Following Shell Script:
# Export Vault address and token
export VAULT_ADDR=http://<public-ip>:8200
export VAULT_TOKEN=<YOUR_VAULT_TOKEN>
echo "Testing Vault Connection..."
# Read AWS credentials from Vault
vault read -format=json aws/creds/dev-role > aws_creds.json
jq -r '.data.access_key' aws_creds.json
jq -r '.data.secret_key' aws_creds.json
- Run the Job:
  - Click Save and then Build Now.
- Verify the Output:
  - Check the Console Output to ensure:
    - The Vault connection is successful.
    - The AWS credentials are retrieved and displayed.
To scan Terraform files for potential security vulnerabilities using tfsec, follow these steps:
- Ensure a Terraform File Exists:
  - Confirm that the required Terraform files (.tf) are available in the directory.
- Navigate to the Terraform Directory:
cd /root/DevSecOps-End-to-End-Project/terraform
- Run tfsec:
  - Execute the following command to perform the security scan:
tfsec .
- Analyze the Output:
  - Review the results of the scan for any identified security issues and resolve them as needed.
If Docker is not already installed, use the following command to install Docker:
sudo yum install docker -y
Navigate to the Docker directory:
cd DevSecOps-End-to-End-Project/Docker
- Create the Dockerfile:
  - The Dockerfile is already available in the /Docker directory. Below is an example of the Dockerfile:
FROM node:16-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
  - Save this file in the root directory of your project.
- Build the Docker Image:
  - Navigate to your project directory and run:
docker build -t sample-ecommerce-app .
- Run the Docker Container (Optional for Testing):
  - To test the container, run:
docker run -p 3000:3000 sample-ecommerce-app
- Scan the Image with Trivy:
  - Use Trivy to scan the Docker image for vulnerabilities:
trivy image sample-ecommerce-app
- Analyze the Output:
  - Review the vulnerabilities identified in the scan and address them by updating dependencies or modifying the Dockerfile.
- Clean Up:
  - Stop and remove the running container (if applicable):
docker stop <container-id>
docker rm <container-id>
To store and share your Docker image, push it to a container registry like Docker Hub, Amazon ECR, or Azure ACR.
- Log in to Docker Hub:
docker login -u <your-dockerhub-username> -p <your-dockerhub-password>
- Tag the Docker Image:
docker tag sample-ecommerce-app <your-registry>/sample-ecommerce-app:nodejs
- Push the Image to the Registry:
docker push <your-registry>/sample-ecommerce-app:nodejs
- Verify on Docker Hub.
To deploy the Nexus Repository as a container, run the following command:
docker run -d -p 8081:8081 --name nexus sonatype/nexus3
- Open your browser and navigate to:
http://<your-host-ip>:8081
- Retrieve the Admin Password:
  - Run the following command to get the admin password:
docker exec -it nexus cat /nexus-data/admin.password
- The default credentials are:
  - Username: admin
  - Password: the retrieved password.
- Update your password after the first login as shown below.
- Select Enable anonymous access → Click Next → Finish the setup.
- Navigate to Nexus Repositories:
  - Click on the "Settings" (gear icon) → "Repositories".
- Create a New Repository:
  - Click on "Create repository".
  - Choose "docker (hosted)" for pushing Docker images.
- Configure the Repository:
  - Name: Enter a name for the repository (e.g., docker-hosted).
  - Allow anonymous Docker pull: Enable this option if needed.
- Click on "Create repository".
Ensure the KV secrets engine is enabled in Vault to securely store credentials.
vault secrets enable -path=nexus kv
Use the vault kv put command to securely store your Nexus credentials and repository URL or token:
vault kv put nexus/credentials \
username="your-nexus-username" \
password="your-nexus-password" \
repo_url=https://nexus.example.com
Replace https://nexus.example.com with your Nexus repository URL.
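Individual fields of this secret can later be pulled out with `-field` for use in scripts, so a pipeline never needs the full secret dump. A sketch:

```shell
# Fetch single fields from the stored Nexus secret for scripting.
NEXUS_USER="$(vault kv get -field=username nexus/credentials)"
NEXUS_PASS="$(vault kv get -field=password nexus/credentials)"
NEXUS_URL="$(vault kv get -field=repo_url nexus/credentials)"
echo "Publishing to ${NEXUS_URL} as ${NEXUS_USER}"
```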
- Log in to your Nexus Repository.
- Navigate to Nexus Repositories:
- Click on the "Settings" (gear icon) → "Repositories".
- Identify the repository that you previously created, click on it.
- Copy the repository URL displayed under the repository details as shown below.
To fetch the stored credentials:
- Retrieve all stored credentials:
vault kv get nexus/credentials
Store your Docker Hub credentials:
vault kv put secret/docker username="<user-name>" password="<your-password>"
- Fetch the username:
vault kv get -field=username secret/docker
- Fetch the password:
vault kv get -field=password secret/docker
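The two lookups can be combined so the Docker Hub password never appears in shell history or a process listing: `docker login --password-stdin` reads it from standard input. A sketch, assuming the secret was stored at `secret/docker` as above:

```shell
# Log in to Docker Hub with credentials fetched from Vault.
vault kv get -field=password secret/docker | \
  docker login -u "$(vault kv get -field=username secret/docker)" --password-stdin
```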
- Go to the Snyk login page.
- Log in using your preferred method (e.g., email/password, GitHub, GitLab, or SSO).
- Click on your Organization in the bottom-left corner.
- Select Account Settings from the dropdown menu.
- Locate Auth Token and click on "click to show" to view the token.
- Below is the screenshot for your reference:
vault secrets enable -path=snyk kv
-path=snyk: Specifies a custom path for storing Snyk-related secrets. You can customize this path as needed.
vault kv put snyk/token api_token="your-snyk-token"
Replace your-snyk-token with your actual Snyk token.
To fetch the token programmatically or manually:
vault kv get -field=api_token snyk/token
The -field=api_token flag extracts only the token value.
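The retrieved token can be handed straight to the Snyk CLI, which reads the `SNYK_TOKEN` environment variable (or accepts the token via `snyk auth`), so it never has to be typed by hand:

```shell
# Authenticate the Snyk CLI with the token stored in Vault.
export SNYK_TOKEN="$(vault kv get -field=api_token snyk/token)"
snyk auth "$SNYK_TOKEN"
```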
- Install Prometheus client:
npm install prom-client
- Expose metrics in server.js (metric exposure is already included in server.js).
- Test the updated server.js:
node server.js
To run in the background (this keeps the process running without needing to open duplicate terminals):
node server.js &
Access the Prometheus metrics at http://<public-ip>:3000/metrics to ensure they are working as expected.
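Before wiring up Prometheus, a quick curl check confirms the endpoint is serving metrics in the Prometheus text format. This assumes the app is running on the same host on port 3000:

```shell
# Fetch the metrics endpoint and show the first exposition lines
# (expect "# HELP" / "# TYPE" comment lines followed by samples).
curl -s http://localhost:3000/metrics | head -n 10
```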
Important: Jump to Install and Configure Prometheus if you run Node.js in the background.
- Right-click on the tab of your terminal session.
- From the context menu, select the option 'Duplicate Session'.
- This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
- After entering into the duplicate terminal, get sudo access and navigate to:
cd DevSecOps-End-to-End-Project/src
- Download and run Prometheus:
wget https://github.com/prometheus/prometheus/releases/download/v2.47.0/prometheus-2.47.0.linux-amd64.tar.gz
tar -xvzf prometheus-2.47.0.linux-amd64.tar.gz
cd prometheus-2.47.0.linux-amd64
- Find the prometheus.yml file: ensure the prometheus.yml configuration file exists in the current directory.
- Verify the file:
ls
- Edit the file:
vi prometheus.yml
- Locate the scrape_configs: section.
- Update the following in your prometheus.yml file:
  - Job Name: Change it to "node-js-app".
  - Host: Replace localhost with your public IP address.
  - Port: Update the port to 3000.
Here's an example of the updated configuration:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "node-js-app"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["<public IP>:3000"] # Replace with your public IP
Save the file in the same directory as Prometheus.
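Before (re)starting Prometheus, the edited file can be validated with `promtool`, which ships in the same tarball as the `prometheus` binary:

```shell
# Validate prometheus.yml from the extracted Prometheus directory.
# A valid file is reported as SUCCESS; errors point at the offending line.
./promtool check config prometheus.yml
```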
- To start the Prometheus server, use the following command:
./prometheus --config.file=prometheus.yml
To run in the background (this keeps the process running without needing to open duplicate terminals):
./prometheus --config.file=prometheus.yml &
- Open the Prometheus server in your browser:
http://<public-ip>:9090/
- Navigate to the Status tab.
- Choose Targets from the dropdown.
Important: Jump to Install and Configure Grafana if you run Prometheus in the background.
- Right-click on the tab of your terminal session.
- From the context menu, select the option 'Duplicate Session'.
- This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
- After entering into the duplicate terminal, get sudo access and navigate to:
cd DevSecOps-End-to-End-Project/src
- Download and run Grafana:
wget https://dl.grafana.com/oss/release/grafana-10.0.0.linux-amd64.tar.gz
tar -xvzf grafana-10.0.0.linux-amd64.tar.gz
cd grafana-10.0.0/bin
Run Grafana:
./grafana-server
You may encounter an error because Grafana tries to access port 3000, which is already occupied by Node.js. To resolve this, we need to change the Grafana port to 3001.
1. Find the defaults.ini file by running the following command:
find / -name defaults.ini 2>/dev/null
2. Navigate to the conf directory:
cd ../conf
3. Edit the defaults.ini file:
vi defaults.ini
4. Add the following line to set the Grafana port to 3001 as shown below:
http_port = 3001
Now, navigate back to the Grafana execution folder:
cd /root/DevSecOps-End-to-End-Project/src/prometheus-2.47.0.linux-amd64/grafana-10.0.0/bin
Run Grafana again:
./grafana-server
To run in the background (this keeps the process running without needing to open duplicate terminals):
./grafana-server &
Access Grafana:
http://<server-ip>:3001
- Use the following default credentials to log in:
  - Username: admin
  - Password: admin
After the initial login, you will be prompted to change the password. Follow the instructions to set a strong, secure password.
Add Prometheus as a data source.
- On the Grafana home page, click on Data sources.
Select Prometheus from the list.
Enter Prometheus URL as shown below
Click on Save to continue.
Go to Home Page > toggle menu > Dashboards > New > Import in Grafana.
- Enter the Dashboard ID:
  - Use the Dashboard ID for Node.js: 11159.
- Click Load to fetch the dashboard configuration.
- In the next step, select Prometheus as the data source to proceed.
- Click on Import.
Trivy and TFsec are powerful security scanning tools for containers and Infrastructure as Code (IaC), but they lack a built-in graphical interface for visualizing vulnerabilities. This project bridges that gap by integrating Trivy and TFsec with Prometheus and Grafana, transforming raw security scan data into insightful, real-time dashboards for better monitoring and decision-making. 🚀
Navigate to Docker directory:
cd DevSecOps-End-to-End-Project/Docker
trivy image --format json --severity HIGH,CRITICAL <image-name> > trivy-results.json
After running this command, a trivy-results.json file will be created.
Create a file generate-trivy-metrics.js and add the following content:
vi generate-trivy-metrics.js
const fs = require('fs');
try {
console.log('Reading Trivy results...');
const trivyResults = JSON.parse(fs.readFileSync('trivy-results.json', 'utf8'));
console.log('Generating Prometheus metrics...');
  const metrics = [];
  // Emit the HELP/TYPE header once; the Prometheus text format expects it only once per metric.
  metrics.push('# HELP trivy_vulnerabilities Trivy vulnerability scan results');
  metrics.push('# TYPE trivy_vulnerabilities gauge');
  (trivyResults.Results || []).forEach((result) => {
    // Targets with no findings may omit the Vulnerabilities array entirely.
    (result.Vulnerabilities || []).forEach((vuln) => {
      metrics.push(`trivy_vulnerabilities{image="${result.Target}",severity="${vuln.Severity}",id="${vuln.VulnerabilityID}"} 1`);
    });
  });
console.log('Writing metrics to trivy-metrics.prom...');
fs.writeFileSync('trivy-metrics.prom', metrics.join('\n'));
console.log('Metrics file trivy-metrics.prom created successfully.');
} catch (error) {
console.error('Error:', error.message);
}
Run the script:
node generate-trivy-metrics.js
A trivy-metrics.prom file will be created.
Move the file to the metrics directory:
mv trivy-metrics.prom metrics
Start a simple HTTP server to expose metrics on port 8085:
python3 -m http.server 8085
Access Trivy metrics:
http://<public-ip>:8085/
- Right-click on the tab of your terminal session.
- From the context menu, select the option 'Duplicate Session'.
- This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
- After entering into the duplicate terminal, get sudo access and navigate to:
cd /root/DevSecOps-End-to-End-Project/src/prometheus-2.47.0.linux-amd64
vi prometheus.yml
Add a new scrape job:
  - job_name: "trivy"
    static_configs:
      - targets: ["<your-server-ip>:8085"]
Below is the screenshot for your reference:
Since Prometheus is already running in the background, follow these steps to reload it with the updated configuration:
- Find the Process ID (PID): Use the following command to locate the PID of the Prometheus process:
ps aux | grep prometheus
- Stop the Current Process: Terminate the Prometheus process using the PID from the previous step:
kill <PID>
Replace <PID> with the actual process ID.
- Restart Prometheus in the Background: Start Prometheus with the updated configuration in the background:
./prometheus --config.file=prometheus.yml &
The above screenshot confirms that Prometheus is successfully scraping metrics from Trivy.
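As a lighter-weight alternative to killing and restarting the process, Prometheus re-reads its configuration on SIGHUP, or via the `/-/reload` endpoint if it was started with `--web.enable-lifecycle`. A sketch:

```shell
# Reload the running Prometheus configuration without a full restart.
kill -HUP "$(pgrep -f 'prometheus --config.file')"
# Alternative (only if started with --web.enable-lifecycle):
# curl -X POST http://localhost:9090/-/reload
```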
Follow these steps to visualize Trivy metrics in Grafana:
Access Grafana in your browser:
http://<your-server-ip>:3001
- Navigate to the Grafana Home Page.
- Click on Create your first Dashboard.
Click on Add visualization, and you will be redirected to the following page:
Select Prometheus as the data source.
- Enter the PromQL query trivy_vulnerabilities under the metrics field.
- Click on Run Query.
- Choose a visualization type, such as Table, Gauge, or Time Series.
The dashboard will display the number of vulnerabilities:
- Once you are satisfied with the visualization, save the dashboard for future use.
To ensure the metrics are accurate, you can verify them manually by scanning an image directly with Trivy:
- Run the following command:
trivy image <image-name>
- The output will display the same number of vulnerabilities as seen in Grafana:
Navigate to terraform directory:
cd /root/DevSecOps-End-to-End-Project/terraform
tfsec . --format=json > tfsec-results.json
After running this command, a tfsec-results.json file will be created.
Create a file generate-tfsec-metrics.js and add the following content:
```
vi generate-tfsec-metrics.js
```
```javascript
// Read the tfsec JSON report and emit metrics in Prometheus exposition format.
const fs = require('fs');

console.log("Reading TFsec results...");
const tfsecResults = JSON.parse(fs.readFileSync('tfsec-results.json', 'utf8'));

console.log("Generating Prometheus metrics...");
let metrics = "# HELP tfsec_vulnerabilities TFsec vulnerability scan results\n";
metrics += "# TYPE tfsec_vulnerabilities gauge\n";

// Emit one gauge sample per finding, labelled with severity, rule ID, and description.
tfsecResults.results.forEach(result => {
  metrics += `tfsec_vulnerabilities{severity="${result.severity}",rule_id="${result.rule_id}",description="${result.description.replace(/"/g, '\\"')}"} 1\n`;
});

console.log("Writing metrics to tfsec-metrics.prom...");
fs.writeFileSync('tfsec-metrics.prom', metrics);
console.log("Metrics file tfsec-metrics.prom created successfully.");
```
Run the script:
```
node generate-tfsec-metrics.js
```
A tfsec-metrics.prom file will be created.
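If you want to sanity-check the exposition format the script emits, the metric-line template can be exercised in isolation. The finding below is illustrative, not real tfsec output:

```javascript
// Build one Prometheus metric line the same way generate-tfsec-metrics.js does.
// The sample finding is illustrative; real entries come from tfsec-results.json.
const finding = {
  severity: "HIGH",
  rule_id: "aws-s3-enable-bucket-encryption",
  description: 'Bucket does not have "at rest" encryption enabled'
};

// Escape double quotes so the label value stays valid Prometheus syntax.
const line = `tfsec_vulnerabilities{severity="${finding.severity}",rule_id="${finding.rule_id}",description="${finding.description.replace(/"/g, '\\"')}"} 1`;

console.log(line);
```

Each line ends with the value `1`, so summing the series in PromQL yields the total number of findings.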
Move the file to the metrics directory:
```
mv tfsec-metrics.prom metrics
```
Start a simple HTTP server to expose the metrics on port 8086:
```
python3 -m http.server 8086 &
```
- Right-click on the tab of your terminal session.
- From the context menu, select the option 'Duplicate Session'.
- This will open a new tab with a duplicate of your current terminal session, which you can use to continue the setup process.
- After entering the duplicate terminal, gain sudo access and navigate to the Prometheus directory, then open the configuration file:
```
cd /root/DevSecOps-End-to-End-Project/src/prometheus-2.47.0.linux-amd64
vi prometheus.yml
```
Add a scrape job for Tfsec:
```yaml
  - job_name: "tfsec"
    static_configs:
      - targets: ["<your-server-ip>:8086"]
```
Below is the screenshot for your reference:
Since Prometheus is already running in the background, follow these steps to reload it with the updated configuration:

1. Find the Process ID (PID) of the Prometheus process:
```
ps aux | grep prometheus
```
2. Stop the current process, replacing `<PID>` with the actual Process ID:
```
kill <PID>
```
3. Restart Prometheus in the background with the updated configuration:
```
./prometheus --config.file=prometheus.yml &
```
The above screenshot confirms that Prometheus is successfully scraping metrics from Tfsec.
Follow these steps to visualize Tfsec metrics in Grafana:
Access Grafana in your browser:
```
http://<your-server-ip>:3001
```
- Navigate to the Grafana Home Page.
- Click on Create your first Dashboard.
Click on Add visualization, and you will be redirected to the following page:
Select Prometheus as the data source.
- Enter the PromQL query `tfsec_vulnerabilities` in the metrics field.
- Click on Run Query.
- Choose a visualization type, such as Table, Gauge, or Time Series.
The dashboard will display the number of vulnerabilities:
Note: Previously, when TFSec was run, no vulnerabilities were detected, and the scan results were clear. To demonstrate the Grafana dashboard functionality, I intentionally made changes to the main.tf file to introduce vulnerabilities.
- Once you are satisfied with the visualization, save the dashboard for future use.
To ensure the metrics are accurate, you can verify them manually by scanning the Terraform files directly with tfsec:

1. Run the following command:
```
tfsec .
```
2. The output will display the same number of vulnerabilities as seen in Grafana:
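The dashboard count can also be cross-checked programmatically against tfsec-results.json. The sketch below counts findings per severity from an inline sample object shaped like the report the metrics script parses (the sample entries are illustrative):

```javascript
// Count tfsec findings per severity; the inline sample stands in for
// JSON.parse(fs.readFileSync('tfsec-results.json', 'utf8')).
const tfsecResults = {
  results: [
    { severity: "HIGH", rule_id: "example-rule-1", description: "example finding" },
    { severity: "HIGH", rule_id: "example-rule-2", description: "example finding" },
    { severity: "LOW",  rule_id: "example-rule-3", description: "example finding" }
  ]
};

// Tally findings by their severity label.
const bySeverity = {};
for (const r of tfsecResults.results) {
  bySeverity[r.severity] = (bySeverity[r.severity] || 0) + 1;
}

console.log(tfsecResults.results.length, bySeverity); // 3 { HIGH: 2, LOW: 1 }
```

The total should match both the `tfsec .` output and the Grafana panel.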
Since we have already installed the OpenTelemetry-related dependencies and updated the collectorUrl in server.js earlier, let's proceed with downloading the OpenTelemetry Collector.
The OpenTelemetry Collector processes and exports telemetry data from the application to a desired backend.
```
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.83.0/otelcol-contrib_0.83.0_linux_amd64.tar.gz
tar -xvf otelcol-contrib_0.83.0_linux_amd64.tar.gz
sudo mv otelcol-contrib /usr/local/bin/otelcol
```
- Why: Places the binary in a directory included in your system's PATH, so you can run it from anywhere.
- Result: The Collector is installed and ready to use.
```
otelcol --version
```
- Why: Confirms that the Collector is installed correctly.
- Result: Displays the version of the Collector.
Ensure the otel-collector-config.yaml file is present in your directory. Run the Collector with the configuration file.
```
otelcol --config otel-collector-config.yaml
```
Check the Collector logs to confirm traces are being received:
```
INFO TracesExporter {"kind": "exporter", "data_type": "traces", "resource spans": 1, "spans": 1}
```
The OpenTelemetry Collector exposes its internal metrics at the /metrics endpoint. These metrics are in Prometheus format and provide insights into the Collector's performance and health.
Access the metrics at:
```
http://your-public-ip:8888/metrics
```
Add custom spans to improve the observability of specific routes.
```
vi /root/DevSecOps-End-to-End-Project/src/server.js
```
Add the following code to create a custom span for the /custom route:
```javascript
// Custom span for the /custom route.
app.get('/custom', (req, res) => {
  const span = tracer.startSpan('Custom Route Span');
  res.send('This is a custom route');
  span.end();
});
```
- Why: Custom spans provide detailed observability for specific operations.
- Result: Requests to `/custom` will generate a new span named `Custom Route Span`.
```
node server.js
```
- Why: Applies the changes to the server.
- Result: The application restarts with the new custom span functionality.
Visit the custom route in your browser or using curl:
```
http://<your-server-ip>:3000/custom
```
- Why: Generates traffic to test the new custom span.
- Result: A span is created for the `/custom` route and sent to the Collector.
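One caveat with the custom-span snippet: if the handler throws before `span.end()` runs, the span never closes. A try/finally wrapper avoids that. The sketch below is self-contained, using a minimal stand-in for a span rather than the real OpenTelemetry SDK, so the pattern itself can be run and inspected:

```javascript
// Minimal stand-in for an OpenTelemetry span so the pattern runs without the SDK.
function makeSpan(name) {
  return { name, ended: false, end() { this.ended = true; } };
}

// Wrap a route handler so its span is always ended, even when the handler throws.
function withSpan(name, handler) {
  return (req, res) => {
    const span = makeSpan(name);
    try {
      handler(req, res);
    } finally {
      span.end(); // runs on both success and error paths
    }
    return span;
  };
}

// Usage sketch: app.get('/custom', withSpan('Custom Route Span', (req, res) => { ... }))
const wrapped = withSpan('Custom Route Span', (req, res) => res.send('ok'));
const span = wrapped({}, { send: () => {} });
console.log(span.ended); // true
```

The real OpenTelemetry JS API also offers `tracer.startActiveSpan`, which takes a callback and manages span context for you.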
Open the Prometheus configuration and add a scrape job for the OpenTelemetry Collector:
```
vi prometheus.yml
```
```yaml
  - job_name: 'otel-collector'
    scrape_interval: 5s
    static_configs:
      - targets: ["your-public-ip:8888"]
```
(Replace your-public-ip with your actual server IP)
Below is the screenshot for your reference:
Since Prometheus is already running in the background, follow these steps to reload it with the updated configuration:

1. Find the Process ID (PID) of the Prometheus process:
```
ps aux | grep prometheus
```
2. Stop the current process, replacing `<PID>` with the actual Process ID:
```
kill <PID>
```
3. Restart Prometheus in the background with the updated configuration:
```
./prometheus --config.file=prometheus.yml &
```
The above screenshot confirms that Prometheus is successfully scraping metrics from the OpenTelemetry Collector.
Follow these steps to visualize OpenTelemetry metrics in Grafana:
Access Grafana in your browser:
```
http://<your-server-ip>:3001
```
- Navigate to the Grafana Home Page.
- Click on Create your first Dashboard.
Click on Add visualization, and you will be redirected to the following page:
Select Prometheus as the data source.
Enter the PromQL queries in the metrics field, run the query, and choose a visualization type like Table, Gauge, or Time Series.
- CPU Usage Visualization: `rate(process_cpu_seconds_total[5m])` (Visualization Type: Time Series)
- Memory Usage Visualization: `process_resident_memory_bytes` (Visualization Type: Gauge)
- Uptime Visualization: `time() - process_start_time_seconds`
- Once you are satisfied with the visualization, save the dashboard for future use.
π’ Automate Build Status Alerts with Slack Integration in Jenkins!
π Step-by-Step Slack Notification Setup in Jenkins
- Go to the Grafana Dashboard (http://your-grafana-ip:3001).
- Click on "Alerting" → "Alert Rules".
- Click "Create Alert Rule".
- Choose Prometheus as the data source.
- Define a PromQL query to specify alert conditions:
```
trivy_vulnerabilities{severity="CRITICAL"} > 0
```
- Condition: If CRITICAL vulnerabilities exist, trigger an alert.
- PromQL Query: `tfsec_vulnerabilities > 0`
- Condition: If TFsec detects infrastructure misconfigurations, trigger an alert.
- PromQL Query: `rate(process_cpu_seconds_total[5m]) > 0.9`
- Condition: If CPU usage exceeds 90% for 5 minutes, trigger an alert.
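The same label-matching syntax supports broader conditions. For example, a hedged variant that fires on either HIGH or CRITICAL Trivy findings (label values assumed to be uppercase, as in the rule above):

```promql
# Fire when any HIGH or CRITICAL vulnerabilities are present
sum(trivy_vulnerabilities{severity=~"HIGH|CRITICAL"}) > 0
```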
- Navigate to "Alerting" → "Notification Policies".
- Click "Add a New Contact Point".
- Select "Slack" as the notification type.
- Paste the Slack Webhook URL.
- Click "Send Test Notification" to verify.
- Select "Webhook" as the notification type.
- Paste the Microsoft Teams Webhook URL.
- Select "Email" as the notification type.
- Enter the recipient email (e.g., [email protected]).
- Click "Save & Apply".
- Ensure alert rules are active.
- Generate a test alert by manually increasing vulnerabilities or CPU usage.
- Check if Slack, Teams, or Email notifications are received.
- Adjust alert thresholds to minimize false positives.
- In the Jenkins dashboard, click on + New Item or New Job.
- Provide a name (e.g., Devsecops Pipeline).
- Select Pipeline and click OK to proceed.
- In the left-side menu, click Configure.
- In the Pipeline section:
- In Definition, select Pipeline script from SCM.
- Under SCM, select Git.
- Provide the repository URL: `https://github.com/DevopsProjects05/DevSecOps-End-to-End-Project.git`
- In Branches to build, enter: `*/main`
- For Script Path, enter: `Jenkinsfile/Jenkinsfile`
- Click Apply and Save.
Once the EC2 instance is launched, store the private key in Jenkins Global Credentials for secure SSH access.
- Open the `.pem` file (e.g., `ci-cd-key.pem`) using Notepad or any text editor.
- Copy the entire private key, including:
```
-----BEGIN RSA PRIVATE KEY-----
(Private Key Content)
-----END RSA PRIVATE KEY-----
```
- Go to Jenkins Dashboard → Manage Jenkins → Manage Credentials.
- Under Global credentials, click Add Credentials.
- Set the following values:
- Kind: Select SSH Username with Private Key.
- ID: Enter "ANSIBLE_SSH_KEY".
- Username: Enter "ec2-user".
- Private Key: Select "Enter directly", then paste the copied private key.
- Click Add, then Apply & Save.
β Your private key is now securely stored in Jenkins, ready for automated deployments! π
# β οΈπ **IMPORTANT NOTE: BEFORE RUNNING THE PIPELINE** πβ οΈ
**Before executing the Jenkins pipeline, ensure the following critical conditions are met to prevent failures:**
## β Delete Any Existing EC2 Instance
- Before running the pipeline, delete any existing EC2 instance to avoid conflicts and unexpected errors.
## π Ensure Proper Permissions for Ansible SSH Key
- The SSH key used by Ansible must have correct permissions to enable a secure connection.
### β Run the Following Command
```
# Replace with your workspace name
chmod 777 /var/lib/jenkins/workspace/your-workspace-name/ansible/ansible_ssh_key.pem
```
Note: 777 is overly permissive for a private key; if the SSH client rejects the key as "too open," tighten the mode to 600 instead.
Once everything is set up, follow these steps to execute the Jenkins pipeline:
- Go to the Jenkins Dashboard.
- Click on the pipeline that you have created.
- Click "Build Now" to start executing the pipeline.
β Jenkins will now trigger the pipeline execution and automate the deployment process! π
This Jenkins pipeline automates the secure deployment of a Node.js application by integrating Vault for secrets management, SonarQube for code quality, Snyk for security scanning, and TFScan for Terraform security checks. It provisions AWS infrastructure using Terraform, builds and scans Docker images with Trivy, stores artifacts in Nexus, and deploys via Ansible. Additionally, it ensures continuous monitoring and automated notifications via Slack. This end-to-end DevSecOps pipeline guarantees a robust, secure, and efficient software delivery process.
β DevOps Engineers - To implement security in CI/CD pipelines.
β Security Teams - To monitor vulnerabilities & enforce compliance.
β Developers - To integrate security scanning into development workflows.
β Cloud Architects - To manage Infrastructure as Code (IaC) securely.
β Interview Candidates - To showcase hands-on experience in DevSecOps.
Tool Integration Complexity: Integrating multiple tools (Jenkins, Terraform, Trivy, SonarQube, etc.) required careful configuration and troubleshooting to ensure compatibility and smooth operation.
Security and Secrets Management: Securely managing sensitive credentials using HashiCorp Vault and ensuring they were not exposed in logs or environment variables was a critical challenge.
Monitoring and Observability: Setting up Prometheus, Grafana, and OpenTelemetry for real-time monitoring and tracing required learning new tools and configuring custom metrics.
Cost Optimization: Balancing cost and functionality while running all tools on a single EC2 instance required careful resource allocation and performance tuning.
Pipeline Failures: Debugging pipeline failures (e.g., Terraform errors, Docker build issues) required a systematic approach to identify and resolve root causes.
Automation is Key: Automating repetitive tasks (e.g., infrastructure provisioning, security scans) reduces human error and speeds up deployments.
Security is Non-Negotiable: Integrating security tools (e.g., Trivy, TFsec, Snyk) early in the pipeline ensures vulnerabilities are caught before production.
Observability is Crucial: Real-time monitoring and alerting (via Prometheus, Grafana, and OpenTelemetry) provide actionable insights into system performance and security.
Documentation Matters: Clear and detailed documentation is essential for onboarding new team members and troubleshooting issues.
Collaboration is Vital: Integrating Slack notifications ensures team members are informed about pipeline status and security alerts.
Kubernetes Integration: Migrate the application and tools to a Kubernetes cluster for better scalability and resource management.
Advanced Security Measures: Integrate OWASP ZAP for dynamic application security testing (DAST) and runtime security monitoring using tools like Falco.
Multi-Cloud Support: Extend the pipeline to support multi-cloud deployments (e.g., AWS, Azure, GCP) using Terraform or Crossplane.
Enhanced Monitoring: Add distributed tracing using Jaeger or Zipkin and implement log aggregation using the ELK Stack or Fluentd.
Pipeline Optimization: Implement parallel stages in the Jenkins pipeline and add automated rollback mechanisms for deployment failures.
Compliance and Governance: Integrate compliance checks using tools like Open Policy Agent (OPA) or HashiCorp Sentinel.
AI/ML Integration: Use machine learning models to predict and prevent potential security vulnerabilities or performance bottlenecks.
Improved Collaboration: Integrate Jira or Trello for task management and add more notification channels (e.g., Microsoft Teams, PagerDuty).
Cost Management: Implement cost monitoring and optimization tools like AWS Cost Explorer and add automated resource scaling.
User-Friendly Dashboards: Create a centralized dashboard for all pipeline metrics, security insights, and deployment statuses using Grafana or Kibana.