Monitored Infrastructure is Not Fun... But it's Critical

Scaling and automation only work when detailed monitoring is in place. Monitoring is critical, yet many companies don’t make time for it. It can be complicated, and thinking about risk management is difficult (and not fun).

What can go wrong? A lot. The very thought can be overwhelming.

Laying the Foundation

One suggestion is to start small. To have a properly monitored infrastructure, it helps to begin with a baseline, and then every time something happens you can add a new monitoring endpoint, as warranted. The AWS outage, for example, was a great exercise in monitoring. Every time a company or a customer of that company noticed a problem with the application, it was an opportunity to incorporate future monitoring endpoints into the infrastructure.

With server instances, here are a few basic checks you can do:

  1. Are my instances up or not? (Kubernetes can monitor server status for you.)
  2. Does my instance/server have enough resources – disk space, CPU and RAM? (Kubernetes can also monitor these resources for you.)
  3. Can all of my cluster nodes talk to each other?
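The first two checks above map directly onto Kubernetes primitives. As a minimal sketch (the app name, image, port, probe path and thresholds are all illustrative placeholders, not taken from this post), a Deployment can declare a liveness probe and resource requests so the cluster itself watches instance health and capacity:

```yaml
# Hypothetical Deployment fragment; names and values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0        # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:                # check 1: is my instance up?
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          resources:                    # check 2: enough CPU and RAM?
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```

With a fragment like this, Kubernetes restarts containers that fail the probe and refuses to schedule pods onto nodes that lack the requested resources.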

It’s important to be able to tell the difference between an infrastructure problem and an application problem. For that reason, you need both network and application monitoring. Shipping your server logs and your cluster logs to a centralized logging system is just as important as logging all of the errors that occur in your application.
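One lightweight way to make application errors easy to ship to a centralized log store is to emit them as structured JSON, one record per line, which any log shipper can forward. This is a hedged sketch, not from the original post; the field names and logger name are illustrative:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line, ready for a log shipper."""

    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Include the traceback so application errors are distinguishable
        # from infrastructure noise in the central store.
        if record.exc_info:
            payload["exception"] = self.formatException(record.exc_info)
        return json.dumps(payload)


def make_logger(name="app"):
    """Build a logger that writes JSON lines to stderr."""
    logger = logging.getLogger(name)
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

A node-level agent such as Fluentd or Filebeat can then tail these lines and route them to the centralized system alongside cluster logs.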

Infrastructure Done Right

While the importance of automation and monitoring is fairly well understood, sufficient automation and monitoring aren’t always implemented. Scaling, on the other hand, is something most companies get wrong. Because they don’t understand how to scale properly, companies often throw hardware at the problem, a solution that is neither cheap nor especially effective.

Companies that turn their attention to Kubernetes quickly learn about its scaling capabilities and the savings that come from making full use of resources. These are big selling points. Kubernetes lets you deploy easily and run your application reliably, but it does something else too: it can schedule and use your resources in the most advantageous way possible.
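Scaling without throwing hardware at the problem is exactly what the Horizontal Pod Autoscaler provides out of the box. As an illustrative fragment (the target name and thresholds are placeholders, not from this post), an HPA can grow and shrink a Deployment based on observed CPU utilization:

```yaml
# Hypothetical HPA; it assumes a Deployment named example-app exists.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 2          # floor: keep some headroom for failover
  maxReplicas: 10         # ceiling: cap spend
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```

Because scaling is driven by measured utilization, this only works as well as the monitoring feeding it, which is the point of this post.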

Containerization in general, and Kubernetes in particular, are intriguing to a lot of companies. But if those companies try to implement Kubernetes in-house, they often end up hiring people and giving those resources the time-consuming task of learning a complex new technology. (They then hope those people will stick around.) That’s where an experienced DevOps partner comes in – someone to take the effort and costs off your plate.

In sum, Kubernetes can take care of your infrastructure scaling, automation and monitoring needs for you. And the right DevOps partner can implement a strategic solution to make sure that your scaling, automation and monitoring are working exactly the way they’re supposed to. With the help of your partner, you can make sure that your Kubernetes infrastructure is done right.
