Archive for the ‘Containers’ Category

Monitoring a Kubernetes environment using Prometheus or Sysdig

January 11, 2020


If you have a Kubernetes cluster with pods coming online and going offline at frequent intervals, it may be worth looking at Prometheus.  In Greek mythology, Prometheus the Titan stole fire from the gods and gave it to mankind.  In much the same way, Prometheus the open-source monitoring and alerting toolkit sheds light on your cloud-native applications running as microservices in containers, on-premises or in the public cloud.  Prometheus collects metrics over HTTP; it does not focus on logging or events.  It ships as a single binary that you install on your server.

Prometheus uses a pull-based mechanism, has its own query language called PromQL (not SQL), uses a text-based metrics format and includes its own time-series database.  So what’s the catch?  As a user of Prometheus you have to instrument your cloud-native application with Prometheus client libraries, which are available for a variety of programming languages including Java, Go, Python, .NET, PHP and Ruby.  If your aim is to visualize these metrics you need another open-source tool such as Grafana.  The other caveat is that Prometheus may not scale well for very large environments, and out of the box it does not offer long-term storage or anomaly detection.
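To give a feel for what that instrumentation looks like, here is a minimal sketch using the official Python client library, prometheus_client (installed with pip install prometheus-client).  The metric names, the port and the handle_request() function are my own illustrative choices, not anything prescribed by Prometheus:

```python
# Minimal Prometheus instrumentation sketch using the official Python
# client library, prometheus_client. Metric names, port number and
# handle_request() are illustrative.
import time

from prometheus_client import Counter, Histogram, generate_latest, start_http_server

REQUESTS = Counter("app_requests", "Total requests handled")  # exposed as app_requests_total
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()  # records how long each call takes
def handle_request():
    REQUESTS.inc()  # counters only ever increase; Prometheus pulls the current value

def serve_metrics():
    """Expose /metrics on port 8000 so a Prometheus server can scrape it."""
    start_http_server(8000)
    while True:
        handle_request()
        time.sleep(1)
```

Pointing a Prometheus server at port 8000 and running a PromQL query such as rate(app_requests_total[5m]) would then chart the request rate, and Grafana could visualize that query.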

If rolling your own using open-source tools is not your thing, off-the-shelf alternatives like Sysdig are worth exploring.    The name Sysdig brought to my mind an excavator digging alongside humans in containerized homes, especially since Sysdig claims to work at the Linux kernel level and below the containers…

Digging for info

Sysdig aims to be a single platform for monitoring, run-time detection, security and forensics.  You install a Sysdig agent in a container on each host that is to be monitored; the Sysdig back-end can run in the cloud or on-premises.  Sysdig uses eBPF (extended Berkeley Packet Filter) in a passive manner.  Since eBPF runs in ring 0 (kernel mode), beneath the containers, Sysdig can capture every read/write and inter-process communication with the goal of identifying user activity within the host.

Traditionally, to monitor Tomcat, Redis or Elasticsearch you needed sidecar containers: small containers holding a logging agent and running in the same Kubernetes pod as your application.  The sidecar container shares a volume with the application container.  The application container writes logs to the shared volume, from where the logging agent in the sidecar container reads them.  Since Sysdig operates at the OS kernel level, you benefit from not having to run logging agents in sidecars.
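The sidecar pattern above can be sketched as two toy functions: one plays the application container appending to a log file on the shared volume, the other plays the logging agent in the sidecar reading it back.  The file path is illustrative; in a real pod both containers would mount the same emptyDir volume:

```python
# Toy sketch of the sidecar logging pattern: application writes to a shared
# volume, the sidecar's logging agent reads from it. The path is illustrative;
# in Kubernetes both containers would mount the same emptyDir volume.
import os
import tempfile

SHARED_LOG = os.path.join(tempfile.gettempdir(), "app-shared.log")

def app_write(line):
    """Application container: append one log line to the shared volume."""
    with open(SHARED_LOG, "a", encoding="utf-8") as f:
        f.write(line + "\n")

def sidecar_collect():
    """Sidecar logging agent: read the accumulated lines to forward to a log backend."""
    with open(SHARED_LOG, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]
```

Sysdig's pitch is that because the agent already sees every write at the kernel level, this extra per-pod plumbing becomes unnecessary.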

Sysdig gives you KPIs, which it refers to as “golden signals”, for the Kubernetes cluster.  These KPIs are in the areas of availability, performance, forensics and compliance.  The forensics KPIs become useful, for instance, if you want to go back in time to before a violation occurred and observe who shredded a bash history.  From a compliance perspective, Sysdig helps you detect compliance violations.  The policy engine uses Falco, the open-source runtime security project Sysdig created, to write detailed policy rules at the file, process or container level.  There is a relatively good ecosystem around Sysdig, as you can send alerts from Sysdig via email or to Slack, PagerDuty, ServiceNow, Splunk, syslog, Google Cloud Security Command Center or AWS Security Hub.

While I’d planned to talk about the VMware response to Kubernetes, monitoring seemed more interesting to me today, hence this article.  Monitoring in a Kubernetes environment is an emerging area and you can expect to see more commercial alternatives to Sysdig in the months to come.

Cloud-native apps, containers and Kubernetes

January 10, 2020


In my previous article we discussed the move away from monolithic applications to cloud-native applications using microservices.  These new applications are polyglot, written using a variety of languages and frameworks.  With this new model you can change any microservice without having to rebuild your entire (previously monolithic) application.  If you are an enterprise with over 500 servers and are moving to containers, you would turn to a tool like Docker to “containerize” an application, ship it and run it.  Docker is installed on the host OS, which could be Ubuntu or some other flavor of Linux.

Customers like Business Insider who use containers and Docker can now create a local development environment that can be shipped to development, QA and production, with the assurance that the same stack is running everywhere.  Lyft, the ride-share vendor, moved away from a monolithic application to a microservices architecture using Docker; when running tests they no longer need to clear a database, they simply tear down the container and restart it, returning to the same state as before in under five minutes, something that wasn’t possible using virtual machines.  Yelp, the online review company, uses Docker to run Cassandra, an open-source NoSQL database management system, in containers.

An over-simplified analogy would be that Docker helps you create and roll-out toys.  However what if you want the toy to do more, what if you want to deploy the toys beyond just one location, what if you wanted the equivalent of a puppet-show?  This is where a puppet-master would come in.  Kubernetes is that puppet-master!


Docker is all about running microservices in containers on a single machine.  Docker alone doesn’t help when you want to run containers across a cluster of nodes, or across data centers, with fail-over, networking and storage.  This is where an open-source Docker container orchestration tool like Kubernetes is needed.

With Kubernetes you can use open-source tools like Graylog and Apache Kafka to collect and digest logs from containers.  For monitoring containers, you could have your applications post metrics to a time-series data store like InfluxDB and use an open-source dashboard tool like Grafana to visualize these metrics.
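As a sketch of that metrics pipeline, a data point destined for InfluxDB can be rendered in its text “line protocol” before being POSTed to the database’s HTTP /write endpoint.  The measurement, tag and field names below are made up for illustration:

```python
# Render one data point in InfluxDB's line protocol:
#   measurement,tag1=v1,tag2=v2 field1=v1 timestamp_ns
# The measurement, tags and fields here are illustrative.
def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = to_line_protocol(
    "container_cpu",
    tags={"namespace": "prod", "pod": "web-1"},
    fields={"usage": 0.42},
    ts_ns=1578700000000000000,
)
```

An HTTP client would then POST strings like this to InfluxDB’s /write endpoint, and a Grafana panel could chart them with a query against the container_cpu measurement.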

From a storage perspective, if you are a VMware shop you could use VMware vSAN.  If you are a Dell EMC or NetApp storage shop, you could use a Container Storage Interface (CSI) driver running in a Kubernetes pod on a worker node to enable storage provisioning of your legacy EMC or NetApp shared storage.

Does the move to cloud-native applications, containers, Docker, Kubernetes mean that you no longer need hypervisors and related licensing from VMware?  Not if VMware has anything to say about it.  That is a topic for my next article.  Stay tuned.