ELK Stack Deployment on Kali Linux for Log Management
In this article we will install the ELK Stack. The ELK (Elasticsearch, Logstash, and Kibana) stack is a centralized logging solution that gives users comprehensive log search in a single location. Installing it on Kali Linux for a SOC environment involves several steps.
Step 1: Install Java
As Elasticsearch, the core component of the ELK Stack, is written in Java, it requires a Java runtime to function.
sudo apt-get install default-jdk

Step 2: Add the Elasticsearch GPG Key and Repository
We need the Elasticsearch GPG key and repository to ensure a secure and reliable installation of Elasticsearch packages on the system. Since these are open-source packages, a malicious actor could distribute a tampered copy; the GPG key lets apt verify package signatures and prevents installing tampered or malicious packages.
1. Import the GPG key for Elastic:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

The warning above appears because apt-key is deprecated on newer Debian-based systems such as Kali, so the key may not be added this way. Instead, import the key into a dedicated keyring and reference it from the repository entry:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg

2. Add the Elastic repo to the system repo list:
sudo sh -c 'echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" > /etc/apt/sources.list.d/elastic-7.x.list'

3. Update the repo:
sudo apt update
Step 3: Install Elasticsearch
With the GPG key and repository added, we can install Elasticsearch and then configure it to run as a node.
sudo apt-get update
sudo apt-get install elasticsearch

Now configure Elasticsearch using the following command.
sudo nano /etc/elasticsearch/elasticsearch.yml
Scroll down and find network.host; uncomment it and set it to localhost, and uncomment http.port: 9200. In the Discovery section, add one more line: since we are configuring a single-node cluster, it tells Elasticsearch not to search for other nodes to form a cluster and to function as a standalone instance.
discovery.type: single-node
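After these edits, the relevant lines of elasticsearch.yml should look roughly like this (assuming a localhost-only, single-node setup):

```yaml
network.host: localhost      # bind only to the local interface
http.port: 9200              # default REST API port
discovery.type: single-node  # run standalone, do not look for other nodes
```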

Now start the Elasticsearch service with the following commands; enabling the service also makes it run on boot.
sudo systemctl start elasticsearch.service
sudo systemctl enable elasticsearch.service
Now open http://localhost:9200/ in a browser (or run curl http://localhost:9200); Elasticsearch should return a JSON document with the cluster name and version.

We have successfully installed Elasticsearch on Linux; next we proceed with the Kibana and Logstash installations.
Step 4: Install Kibana
sudo apt install kibana

After installation we will proceed with the configuration.
sudo nano /etc/kibana/kibana.yml
We will have to uncomment (and adjust) the following lines:
server.port: 5601
server.name: "your-hostname"
elasticsearch.hosts: ["http://localhost:9200"]

Save the file with CTRL + X, then Y, then Enter.
We will start Kibana service by:
sudo systemctl start kibana
And to launch Kibana on boot, run the following command.
sudo systemctl enable kibana
We have to open the Kibana port in the firewall (ufw) by using:
sudo ufw allow 5601/tcp
Now, to verify the Kibana installation, browse to http://localhost:5601.

Now we have successfully installed Kibana. Next, we will install Logstash.
Step 5: Install Logstash
sudo apt install logstash

After the installation, we will start the Logstash service.
sudo systemctl start logstash
sudo systemctl enable logstash #for running on boot
Logstash configuration is a highly customizable part of the ELK stack. Once installed, configure its input, filter, and output pipelines according to your use case. All custom Logstash configuration files are stored in:
/etc/logstash/conf.d/
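As a sketch, a minimal pipeline that accepts logs from Beats agents, parses syslog lines, and ships them to the local Elasticsearch might look like the following (the file name, port, and grok pattern are illustrative, not part of a default install):

```
# /etc/logstash/conf.d/01-beats-syslog.conf (illustrative name)
input {
  beats {
    port => 5044                           # Beats agents (e.g. Filebeat) ship logs here
  }
}

filter {
  grok {
    # Parse standard syslog lines; SYSLOGLINE is a built-in grok pattern
    match => { "message" => "%{SYSLOGLINE}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]     # the single-node instance configured earlier
    index => "logstash-%{+YYYY.MM.dd}"     # daily indices
  }
}
```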
Now that we have deployed all three components, we can move on to installing an agent on Windows or Linux hosts to collect logs for security and data analysis.
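For example, a minimal Filebeat configuration (filebeat.yml) on a Linux host that forwards system logs to Logstash might look like this; the log paths and the assumption that Logstash listens for Beats on port 5044 are illustrative:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log        # system logs to collect (adjust per host)

output.logstash:
  hosts: ["localhost:5044"]   # assumed Logstash Beats input port
```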
Conclusion:
The most popular use case for the ELK Stack is log management and analysis. Beyond that, ELK is also highly valuable for reporting, alerting, and improving observability.