How Kasten K10 Integrates With Red Hat Advanced Cluster Security for Security Monitoring

In this blog, we will showcase an integration between Kasten K10, a Kubernetes-native data protection solution, and Red Hat Advanced Cluster Security (RHACS), a platform that allows organizations to more securely build, deploy, and run cloud native applications.

As the recognized leader in data protection for containerized environments, Kasten K10 ensures adequate protection of all your sensitive and mission-critical data in the event of unforeseen downtime. Kasten K10's new integrations with Amazon GuardDuty and Red Hat Advanced Cluster Security provide additional layers of security by enabling early threat detection through holistic monitoring and correlation of relevant events across the environment. This makes it possible to counter potential attacks earlier than before, alerting SecOps teams and thereby buying them extra time to mount a response and keep the blast radius to a minimum.

We'll be covering the following areas:

  • A perspective of cloud native security and Kasten K10’s role in data recovery
  • The Kubernetes audit log and how Kasten K10 extends it
  • What Red Hat Advanced Cluster Security is and the benefits it provides
  • A quick start guide to getting Red Hat Advanced Cluster Security to monitor your critical K10 resources

Introduction

With the ever-increasing amount of data flowing through the cloud, and the trope "data is king" becoming more true with each passing day, it's more important than ever to protect against the rising wave of cyber attacks; 2021 saw around 4,100 security breaches with an estimated 22 billion records exposed. The adoption of Security Information and Event Management (SIEM) systems aims to detect these threats in real time and respond quickly to minimize the damage caused, but it doesn't protect against the data lost during these breaches.

Kasten K10 makes backing up and restoring data easy in the event of a security breach or unintended or unauthorized data manipulation. It's a powerful cloud native application that automates application stack replication to a standby cluster for fast failovers, securely replicates backups to off-site storage, protects against broad infrastructure and hardware failures, and provides robust ransomware protection.

Kasten K10's cloud native philosophy makes it easy to integrate into security monitoring systems. Security monitoring is an important part of every cloud application: any third-party application running in a cloud environment should be easy to monitor for possible threats, such as unauthorized activity that could signal the first signs of an attack. Kasten K10 is no exception and, as we'll see in this article, integrates natively with Red Hat Advanced Cluster Security by making use of the Kubernetes audit log.

We will discuss the Kubernetes Audit and how Kasten K10 natively integrates with it, walk through an introduction to Red Hat Advanced Cluster Security (RHACS) and provide a quick guide on how to deploy and utilize the RHACS services within a cluster that has Kasten K10 deployed.

Kubernetes Audit and Kasten K10

All activity processed by the kube-apiserver can be logged as audit events, which can be used for security monitoring. This means any call to the core Kubernetes API, or to an extended API set up via the aggregation layer, can be logged.

The criteria for logging events are based on the audit policy supplied to the server at startup. In a managed service such as OCP, you cannot customize the server; however, the generated audit policy file for OCP pulls in all registered API groups and logs them at the Metadata level. For more information on the Kubernetes audit, please refer to this document.
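On OCP, for example, you can't change the policy, but you can still read the generated audit logs from the control plane nodes. A minimal sketch using the standard oc adm node-logs command (log paths can vary between OpenShift versions):

# List the kube-apiserver audit log files on the control plane nodes
oc adm node-logs --role=master --path=kube-apiserver/

# View a specific audit log from one node (node name is a placeholder)
oc adm node-logs <node-name> --path=kube-apiserver/audit.log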

Configuring your cluster's audit log will depend on your Kubernetes distribution. For example, when deploying with k3d, use the following flags to launch the kube-apiserver with a log audit backend:

k3d cluster create kube-audit-test \
--volume "PATH_TO_POLICY/audit-policy-minimal.yaml:/etc/kubernetes/audit/policy.yaml@server:0" \
--k3s-arg "--kube-apiserver-arg=audit-policy-file=/etc/kubernetes/audit/policy.yaml@server:0" \
--k3s-arg "--kube-apiserver-arg=audit-log-path=/etc/kubernetes/audit/audit.log@server:0" \
--k3s-arg "--kube-apiserver-arg=audit-log-maxsize=300@server:0" \
--k3s-arg "--kube-apiserver-arg=audit-log-maxbackup=3@server:0"

There are two backend types for the Kubernetes audit: log and webhook. The log backend writes all audit events locally and is ephemeral, while the webhook backend allows you to send the data to an external server. Extra options are provided for each type for flexible configuration, but they must be passed as flags to the kube-apiserver at startup, something you may not have access to depending on where your clusters are deployed.
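As an illustration, if you do have access to the kube-apiserver flags, a webhook backend is configured by pointing the server at a kubeconfig-format file describing the external audit service. The flag names below are standard kube-apiserver options; the file paths are placeholders:

--audit-policy-file=/etc/kubernetes/audit/policy.yaml
--audit-webhook-config-file=/etc/kubernetes/audit/webhook-kubeconfig.yaml
--audit-webhook-mode=batch            # buffer events and send them asynchronously
--audit-webhook-batch-max-wait=5s     # flush the buffer at least every five seconds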

The Kubernetes audit is also extensible: you can write new backends that implement the backend interface and pass them to extended API servers, adding flexibility in where the audit data is sent.

Audit events can be logged at four different levels: None, Metadata, Request, and RequestResponse. Each level adds more information to the event object that's logged, culminating in the full request and response bodies at the RequestResponse level. There is a potential scalability concern depending on how open the audit policy is, how big each audit event object is (Metadata vs. RequestResponse), and which backend type is used.

Metadata provides the best balance between information for threat detection and scalability. An example event for getting a K10 passkey object is shown below:

{ 
   "kind":"Event",
   "apiVersion":"audit.k8s.io/v1",
   "level":"Metadata",
   "auditID":"ac1735bc-9713-407f-880f-6f6c28c88caf",
   "stage":"ResponseComplete",
   "requestURI":"/apis/vault.kio.kasten.io/v1alpha1/passkeys/k10MasterKey",
   "verb":"get",
   "user":{
      "username":"system:admin",
      "groups":[
         "system:masters",
         "system:authenticated"
      ]
   },
   "sourceIPs":[
      "{IP_ADDRESS}"
   ],
   "userAgent":"kubectl/v1.25.0 (darwin/arm64) kubernetes/a866cbe",
   "objectRef":{
      "resource":"passkeys",
      "name":"k10MasterKey",
      "apiGroup":"vault.kio.kasten.io",
      "apiVersion":"v1alpha1"
   },
   "responseStatus":{
      "metadata":{},
      "code":200
   },
   "requestReceivedTimestamp":"2022-12-22T00:06:47.042962Z",
   "stageTimestamp":"2022-12-22T00:06:47.047325Z",
   "annotations":{
      "authorization.k8s.io/decision":"allow",
      "authorization.k8s.io/reason":""
   }
}

You can see the sourceIPs, userAgent, and user being provided. Some managed Kubernetes providers will add extra information, such as credentials, which we'll show later as it applies to an OpenShift deployment.
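With the log backend, events like the one above are written as JSON lines, so it's straightforward to pull out the Kasten K10 activity. A minimal example using jq, assuming the audit log path from the k3d setup earlier:

# Show every audit event that touched a Kasten K10 API group
jq 'select(.objectRef.apiGroup // "" | endswith("kio.kasten.io"))' /etc/kubernetes/audit/audit.log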

For a self-deployed Kubernetes cluster, a good policy to include all current Kasten K10 groups and resources would be the following:

apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
- level: None
  nonResourceURLs:
    - /healthz*
    - /version
    - /openapi/v2*
    - /timeout*
- level: Metadata
  resources:
  - group: "actions.kio.kasten.io"
    resources: ["backupactions", "restoreactions", "exportactions", "importactions", "backupclusteractions", "restoreclusteractions", "retireactions", "runactions", "cancelactions", "reportactions", "upgradeactions"]
  - group: "apps.kio.kasten.io"
    resources: ["restorepointcontents", "clusterrestorepoints", "restorepoints", "ApplicationResource"]
  - group: "vault.kio.kasten.io"
    resources: ["passkeys"]
  - group: "repositories.kio.kasten.io"
    resources: ["restorepointrepositories", "storagerepositories"]
  - group: "config.kio.kasten.io"
  - group: "dist.kio.kasten.io"
  - group: "auth.kio.kasten.io"
  - group: "reporting.kio.kasten.io"
  verbs: ["create", "update", "patch", "delete", "get"]

We don't include the list verb, as the UI makes many list calls that can quickly overwhelm the logs. Some of the groups have their resources listed explicitly, and these can easily be changed to include only what you're interested in.

They are also all logged at the Metadata level, but some could be moved to the RequestResponse level to capture the request and response bodies if you're interested in that level of detail, such as for the upcoming Event custom resource.
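For example, to capture the full request and response bodies for passkey operations while leaving everything else at Metadata, a more specific rule could be placed above the broader Metadata rule in the policy (the first matching rule wins):

- level: RequestResponse
  resources:
  - group: "vault.kio.kasten.io"
    resources: ["passkeys"]
  verbs: ["create", "update", "patch", "delete", "get"]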

How Kasten K10 activity gets captured by the Kube audit

There are many custom resources that power Kasten K10, created either via a Custom Resource Definition or the Aggregated API. The API groups and associated resources created by the CRDs are:

auth.kio.kasten.io
  - k10clusterrolebindings, k10clusterroles
config.kio.kasten.io
  - policies, policypresets, profiles
dist.kio.kasten.io
  - bootstraps, clusters, distributions
reporting.kio.kasten.io
  - reports

And those created from the Aggregated API:

actions.kio.kasten.io
  - backupactions, restoreactions, exportactions, importactions, backupclusteractions,
  - restoreclusteractions, retireactions, runactions, cancelactions, reportactions,
  - upgradeactions
apps.kio.kasten.io
  - restorepointcontents, clusterrestorepoints, restorepoints, applicationresource
vault.kio.kasten.io
  - passkeys
repositories.kio.kasten.io
  - restorepointrepositories, storagerepositories

You can see a list of these by running:

kubectl get apiservice | grep kio.kasten.io

This will also show you the version of each API service and how it was created.
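If you want to drill into the resources served by any one of these groups, kubectl can enumerate them directly, for example:

# List every resource type served by the actions.kio.kasten.io group
kubectl api-resources --api-group=actions.kio.kasten.io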

No matter how these custom resources are interacted with, the interaction is processed through the kube-apiserver. This flow creates an audit event based on the criteria set in the audit policy and the flags passed to the kube-apiserver, such as which backend type to use (log or webhook) and the configurable options for each.

This means that all external interactions with Kasten K10 natively leverage the Kubernetes audit, and any security monitoring system that uses it as a data source can monitor Kasten K10 as well.

Activity within Kasten K10 that does not relate to a custom resource will not be logged by the Kubernetes audit, since those interactions were never processed by the kube-apiserver and thus were not related to the core Kubernetes API or to extended APIs via the aggregation layer.

The upcoming Event custom resource is intended to change this: since it will be a custom resource served by the Aggregated API, all activity related to it will flow through the kube-apiserver and thus generate audit events according to the audit policy. To get more specific information about each of these internal events, the audit level would need to be Request or RequestResponse. This implies that managed services such as OCP will not log this information, and an extended backend would need to be built within K10 to provide this functionality.

As shown above, Kasten K10 natively leverages the Kubernetes audit by virtue of its architecture, and thus any system that leverages it for security monitoring automatically monitors K10 as well; Red Hat Advanced Cluster Security for Kubernetes (RHACS) falls into this category by leveraging the Kubernetes audit for cluster monitoring.

Deploying RHACS alongside Kasten K10 in your cluster provides a powerful combination of security monitoring, disaster recovery, backup and restore, and ransomware protection, helping defend against ever-increasing cyber security threats. Since you depend on K10 to protect your data, the monitoring that Red Hat Advanced Cluster Security brings adds an extra layer to your security posture.

What is RHACS?

Red Hat Advanced Cluster Security for Kubernetes is a Kubernetes security platform that's deployable in your cluster alongside all your applications and provides constant security monitoring, alerting, and action. It leverages Falco libraries and eBPF for container runtime and network security, Kubernetes audit logs for cluster monitoring, and its own container image scanner for vulnerability information (CVEs and CVSS scores) at the build, deploy, and runtime stages.

Red Hat Advanced Cluster Security can be easily installed on any Red Hat OpenShift environment, including OCP, OKE, OSD, Azure Red Hat OpenShift, and Red Hat OpenShift Service on AWS. You can also install it on EKS, GKE, and AKS clusters (with equal functionality, but varying amounts of support for each managed service; see the support policy for more information). Since it leverages Falco libraries for container runtime security, it does need to be installed on x86 Linux, and you can see the list of supported operating systems here.

Red Hat Advanced Cluster Security works across multiple clusters, with the centralized services installed on a single cluster and the secured cluster services installed on every cluster. The centralized services include Central, which handles API interactions, and Scanner, which scans container images, their associated databases, and packages installed by package managers.

  • Central: Red Hat Advanced Cluster Security application management interface and services. Handles data persistence, API interactions and UI access. One central instance is all that is needed for managing multiple clusters.
  • Scanner: Certified vulnerability scanner for scanning container images and their associated databases. Analyzes image layers to check for known vulnerabilities from the Common Vulnerabilities and Exposures (CVE) list. It also identifies vulnerabilities in packages installed by package managers and in dependencies for multiple languages.

To secure each cluster, secured cluster services are installed on each and contain the Sensor, Admission controller, Collector, and Scanner. The Sensor analyzes and monitors the cluster; the Admission controller prevents the creation of workloads that violate security policies; the Collector leverages Falco to monitor container activity and sends its data to the Sensor; Scanner is a lightweight version of the central scanner that scans images on the specific cluster.
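Once installed, these components run as ordinary workloads that you can inspect like any other. For example, assuming the default stackrox namespace used by most installation methods:

# Sensor, Collector, the admission controller, and (on the central cluster) Central and Scanner
# all appear as pods in the stackrox namespace
kubectl get pods -n stackrox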

Red Hat Advanced Cluster Security provides many integrations for getting its data out, such as with CI/CD pipelines to shift security as far left as possible, and with SIEMs such as Splunk and Sumo Logic.

To view all of the security policies that Red Hat Advanced Cluster Security ships with, please see this. Custom security policies can be created to cover less common, situation-specific needs. In addition, Red Hat Advanced Cluster Security provides a network policy graph that shows how pods are allowed to communicate with each other and with other network endpoints. To see it used in conjunction with Kasten K10 to monitor your cluster's security posture, check out this technical how-to guide.

Conclusion

Red Hat Advanced Cluster Security provides broad security monitoring, including container runtime security, Kubernetes audit log monitoring, and image scanning, and by virtue of Kasten K10's cloud native approach, all of these can be used to monitor the security of Kasten K10.

Deployed together in your cluster, Kasten K10 and Red Hat Advanced Cluster Security are a powerful combination to help battle the increase in cyber attacks and the security vulnerabilities currently being exploited. Try Kasten K10 free today!
