Fleet – Breaking out the osquery API & Web UI

Note: Kolide Fleet has been retired. FleetDM is a drop-in replacement, forked from Kolide Fleet by the team over at FleetDM.com. FleetDM has replaced Kolide Fleet in Security Onion and in my osquery course, and it is what I now recommend for osquery management. The content below applies directly to FleetDM as-is.


I was a very early user of Kolide’s open source osquery fleet manager, Fleet. I have used it in production for my osquery endpoints and within my osquery course (Osquery For Security Analysis), and it is now deeply integrated into the next major version of Security Onion (Hybrid Hunter).

When you deploy Fleet, there are two ways to manage it: through a CLI or through a web UI. The web interface is the more common option, and in the background it drives a set of API endpoints published at /api/v1/kolide/. When osquery agents connect to Fleet for management tasks, they use /api/v1/osquery/ or gRPC. Unfortunately, Fleet itself has no way to split the osquery management APIs from the web management APIs, which means that if you make Fleet Internet-accessible (so that non-VPN roaming endpoints can check in), you also expose the web UI to the public Internet. From a security perspective, we want to reduce the risk to an acceptable level; in this case, it would be best if the Internet-accessible system allowed osquery endpoints through while restricting web UI requests in some form.
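To make the distinction concrete, here is a rough sketch of the two API families as a client would hit them (the hostname is a placeholder, and the exact request bodies may vary by Fleet version):

    # Web management API: what we want to keep off the public Internet.
    # For example, an admin login that starts a web session:
    curl -k https://fleet.example.com/api/v1/kolide/login \
      -H 'Content-Type: application/json' \
      -d '{"username":"admin","password":"..."}'

    # osquery management API: what roaming endpoints need to reach.
    # For example, agent enrollment using the shared enroll secret:
    curl -k https://fleet.example.com/api/v1/osquery/enroll \
      -H 'Content-Type: application/json' \
      -d '{"enroll_secret":"...","host_identifier":"somehost"}'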

The way I recently handled this with Security Onion was to break out the web UI interface and the osquery interface using a reverse proxy, Nginx; here is the relevant Nginx config I used. As you can see, there are two config blocks. The first one, for port 443, allows access to both the web interface and the osquery API:

    server {
        listen       443 ssl http2 default_server;
        server_name  _;
        root         /opt/socore/html/packages;
        index        index.html;

        ssl_certificate "/etc/pki/nginx/server.crt";
        ssl_certificate_key "/etc/pki/nginx/server.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Proxy the Fleet web UI (and the APIs behind it) to the
        # Fleet server listening on 8080
        location /fleet/ {
            proxy_pass https://{{ MAINIP }}:8080;
            proxy_read_timeout    90;
            proxy_connect_timeout 90;
            proxy_set_header      Host $host;
            proxy_set_header      X-Real-IP $remote_addr;
            proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header      Proxy "";
        }
    }

The second configuration block sets Nginx to listen on port 8090 and only passes a connection through to Fleet if the URL matches one of the pre-defined patterns. Kolide Launcher speaks gRPC, so this block uses Nginx’s grpc_pass directive rather than proxy_pass.

    server {
        listen       8090 ssl http2 default_server;
        server_name  _;
        root         /opt/socore/html;
        index        blank.html;

        ssl_certificate "/etc/pki/nginx/server.crt";
        ssl_certificate_key "/etc/pki/nginx/server.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Only the Launcher gRPC methods listed here are proxied through to
        # Fleet; any other request falls through to the blank static root
        location ~ ^/kolide.agent.Api/(RequestEnrollment|RequestConfig|RequestQueries|PublishLogs|PublishResults|CheckHealth)$ {
            grpc_pass  grpcs://{{ MAINIP }}:8080;
            grpc_set_header Host $host;
            grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering off;
        }

    }
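A quick sanity check of the split, assuming the config above (the hostname is again a placeholder): any path outside the whitelisted gRPC methods never reaches Fleet on port 8090.

    # Serves the empty blank.html from the static root:
    curl -k https://fleet.example.com:8090/

    # 404 - does not match the gRPC location block, so it is never proxied:
    curl -k https://fleet.example.com:8090/api/v1/kolide/login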

(I figured this out thanks to these posts on the osquery Slack.)

The way this works is that the system’s firewall (iptables in this case) is set to allow only Fleet admins access to port 443, while all osquery endpoints are allowed inbound access to port 8090. You can actually set IP restrictions within Nginx itself, but for this particular use case it worked better to manage the IP restrictions at the system firewall level rather than within Nginx.
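For reference, here is a minimal iptables sketch of that policy, assuming the Fleet admin workstations live in 10.1.2.0/24 (adjust addresses and interfaces for your environment):

    # Fleet admins only: web UI on 443
    iptables -A INPUT -p tcp --dport 443 -s 10.1.2.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j DROP

    # Any osquery endpoint may reach the gRPC listener on 8090
    iptables -A INPUT -p tcp --dport 8090 -j ACCEPT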

-Josh
