Cloud - Poking at public Kubes

Kubernetes - What can be found?


I selected Kubernetes as the topic for this post because I believe that container platforms like Kubernetes offer some interesting research, provide some good learning opportunities, and still have low-hanging fruit. I do not want to exclude Docker from this conversation, as Docker can fall victim to the same habits/risks I will break down below; I just like Kubernetes. Let's dive in!

Containers have a chance of falling victim to misconfiguration because of the requirements of the applications they run. An application can mandate any number of unique communication requirements, which the Kubernetes service must take into account in order to serve it successfully. A relational database, for example, can require a separate network connection for administrator and standard user connectivity; it can also require different ports for internal and external bound traffic. While these requirements are relatively straightforward, the more unique the requirements, the greater the chance of a misconfiguration occurring when building a Kubernetes container to house the application.

First, a Use-Case

A development team builds an application, which mandates strict security controls regarding how inbound and outbound traffic must use specific ports. Since we live in a fast-paced world, the development team is under a tight deadline, and the application is completed quickly and handed off for deployment. The deployment engineer, who may also be under a deadline, does not fully realize that the inbound traffic must use TCP port 8083 and outbound traffic must use TCP port 3300. The engineer opens both ports for bi-directional traffic, tests that the application and container are 100% successful, and clears the container for deployment. This misconfiguration could expose sensitive data and result in several security issues.
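
To make that gap concrete, below is a minimal sketch of what the intended restriction could look like as a Kubernetes NetworkPolicy; the policy name and the app: my-app pod label are hypothetical, chosen only for illustration. The engineer's shortcut amounts to listing both ports under both ingress and egress (or skipping the policy entirely), which is exactly the bi-directional exposure described above.

  kind: NetworkPolicy
  apiVersion: networking.k8s.io/v1
  metadata:
    name: my-app-ports          # hypothetical policy name
  spec:
    podSelector:
      matchLabels:
        app: my-app             # hypothetical pod label for the application
    policyTypes:
      - Ingress
      - Egress
    ingress:
      - ports:
          - protocol: TCP
            port: 8083          # inbound traffic only on TCP 8083
    egress:
      - ports:
          - protocol: TCP
            port: 3300          # outbound traffic only to TCP 3300

Note that a NetworkPolicy is only enforced when the cluster's network plugin supports it, which is one more place where a gap like this can quietly slip through.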

On to the Testing!!

On the surface, Kubernetes containers appear pretty secure in their construction. They are almost solely controlled through a REST API, with all communication passing over HTTPS (port 443), which supports TLS 1.2 encryption. They can be configured to require identity and access management (IAM) and two-factor authentication (2FA), resulting in highly controlled access to managed containers. Additionally, both of these controls can be logged to a SIEM to provide user behavior correlation and alerting. So on the surface, the base Kubernetes platform appears secure, but as my previous use case stated, services configured inaccurately, or the passage of time paired with a change in a team's mission, can result in services experiencing scope creep and thus... insecurity!

At its most basic, Kubernetes relies on YAML files to house the configuration for each cluster. An example application Service YAML configuration file:

  kind: Service
  apiVersion: v1
  metadata:
    name: my-service
  spec:
    selector:
      app: MyApp
    ports:
      - name: http
        protocol: TCP
        port: 80
        targetPort: 9376
      - name: https
        protocol: TCP
        port: 443
        targetPort: 9377

A single Kubernetes cluster could have several different YAML configuration files, covering things such as service configuration, endpoint configuration, multi-port configuration, load balancing, and specific applications like nginx or redis-master. Each of these configuration types can expose aspects of the Kubernetes containers or of the cluster as a whole.
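
To illustrate one of the types above, here is a minimal sketch of an endpoint configuration; the name my-service, the address 192.0.2.42 (a TEST-NET address), and the port are hypothetical and chosen only for illustration. A manifest like this maps a Service onto concrete backend addresses, which is exactly the kind of internal detail an exposed configuration can leak.

  kind: Endpoints
  apiVersion: v1
  metadata:
    name: my-service            # must match the name of the Service it backs
  subsets:
    - addresses:
        - ip: 192.0.2.42        # hypothetical backend address (TEST-NET range)
      ports:
        - port: 9376            # port the Service routes traffic to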

The Real World

To give a specific example, I went to Shodan to see if I could find misconfigured Kubernetes deployments. The first target I selected was the master-slave hierarchy of redis-master. Redis by default listens on TCP port 6379, and in looking at the Kubernetes GitHub examples, I found that the redis-master service uses that same default port. Switching over to Shodan and using the search term "port:6379 redis-master", 54 unique hits were returned, with at least 17 resolving to cloud providers like Amazon or Tencent Cloud Computing.

Figure 1: "port:6379 redis-master" - Shodan

I next performed a search on just the term "kubernetes" itself, which resulted in 20,435 unique hits. Nothing earth-shattering, and this number is too large to pull any useful data from at face value. However, the results do show that, to some degree, the term "kubernetes" is a default naming convention and is actively used.

Figure 2: "kubernetes" - Shodan

Finally, I performed a search on the term "k8s". This abbreviation is often used for Kubernetes instances, as there are 8 (eight) characters between the 'k' and 's' within the word 'kubernetes'. While this is an interesting aside, the search did return an openly exposed result! Which turned out to be an Elasticsearch database!!!

Figure 3: "k8s" – Shodan

Figure 4: Open Kubernetes Site

Using a VPN to provide simple cover, I connected to the identified site and, surprisingly, pulled back plaintext JSON. I entered the following wildcard search command: http://35.xxx.xxx.109/_search?q=?, and was able to pull back data from a backend Kibana dashboard detailing a Kubernetes logging service. Account names, machine names, object instance IDs, Kubernetes health stats, internal IP addresses, and several other pieces of information could be gleaned from the data.

Figure 5: Kibana API Query
Figure 6: Internal IP Address and Failure Logs
Figure 7: Kubernetes Error Format and Internal IPs

I was able to query the database for other interesting pieces of data, like dashboard configuration templates. It may be telling that I was able to find a public Kubernetes deployment in relatively short order, and I am confident I could repeat the same steps again. Another interesting data point was time: the timestamp of the last application update was 2018-10-12, so the system has been up for a significant period and appears to have been public since at least that date, most likely even earlier.

Kubernetes appears to be a ripe target for additional threat research. Given larger concentrations of data, I am confident it will be possible to identify and leverage misconfigurations, API command manipulation against Kubernetes master systems, poor resource utilization within Kubernetes pods, and suspicious user behavior.

Mitigation

1) Check your public IP space!!!
    a) If you see exposed ports or services coming from your public space... perhaps take a look at those!
2) When deploying Kubernetes clusters that require some level of extended network access, USE Network Policies! (a starting-point sketch follows this list)
    a) Here is a good resource for Kubernetes Network Policy configuration steps: >> HERE <<
    b) If you are using Docker... While not a Network Policy, Docker relies on 'iptables' rules, so... here is a link: >> HERE <<
3) Use a Vulnerability Scanner!
    a) There are several commercial options to scan containers for vulnerabilities: Tenable Nessus, Qualys, and others!
    b) There are also free open source scanners to use as well... OWASP maintains a good list >> HERE <<
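
For item 2, a reasonable starting point is a default-deny policy per namespace, with explicit allow rules (like the sketch in the use-case section) layered on top for only the traffic each application needs. A minimal sketch, assuming a hypothetical policy name and a network plugin that enforces NetworkPolicy:

  kind: NetworkPolicy
  apiVersion: networking.k8s.io/v1
  metadata:
    name: default-deny-all      # hypothetical policy name
  spec:
    podSelector: {}             # empty selector matches every pod in the namespace
    policyTypes:
      - Ingress
      - Egress

With a default-deny in place, any port a deployment engineer forgets to explicitly allow stays closed rather than silently exposed.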

In Closing

Threat research within the cloud is a largely untapped environment. While the bar to enter cloud threat research is relatively high, several areas still provide low-hanging fruit, even under the constraint of only having access to public data. My research pointed out that Kubernetes clusters are operating within public view, but it is really the services housed on those clusters that pose the true security risk if left unchecked.
