
Kubernetes Response Engine, Part 3: Falcosidekick + Knative
This blog post is part of a series of articles about how to create a Kubernetes response engine with Falco, Falcosidekick, and a FaaS. See the other posts:
- Kubernetes Response Engine, Part 1 : Falcosidekick + Kubeless
- Kubernetes Response Engine, Part 2 : Falcosidekick + OpenFaas
- Kubernetes Response Engine, Part 4 : Falcosidekick + Tekton
- Kubernetes Response Engine, Part 5 : Falcosidekick + Argo
- Kubernetes Response Engine, Part 6 : Falcosidekick + Cloud Run
- Kubernetes Response Engine, Part 7: Falcosidekick + Cloud Functions
As the Cloud Native ecosystem grows, the idea that an integrator can browse the offerings and slap them together like an à la carte menu resonates more and more. We call this Thinking Cloud Native.
Falco already produces events, but in the form of a webhook with bespoke payloads, which is fine, unless you would like to integrate into an ecosystem for event routing. To enable this for Falco, we had to think about how these events are moved from producer to consumer via something else. Enter: CloudEvents.
What is CloudEvents? It is a specification for translating an event and its metadata onto a specific protocol and back. What? It lets you think about the event in a generic way, without it being tied to the particular choices the integration is making today; and with minor effort, CloudEvents lets that integration change the protocol choice without changing the meaning of the event.
This lossless property of CloudEvents means the integrator is free to choose middleware that also speaks CloudEvents and has its own choices of persistence and protocol, but the consumer of the event need not be aware of these translations that have happened between the producer and consumer.
There are several choices that support CloudEvents today: Serverless.com Event Gateway, Argo, Google Cloud Pub/Sub, Azure Event Grid, and Knative Eventing. A more complete list is in the cloudevents/spec repo.
For this blog post, we are going to focus on Falco+Knative and see what we can do with that à la carte selection.
Falco+Knative
What is Knative? It is two things: Knative Serving and Knative Eventing. Serving provides container-based scale-to-zero (and scale-real-big) functionality, as well as rainbow deploys, auto-TLS, domain mappings, and various knobs to control concurrency and scaling traits. Eventing provides a thin abstraction on top of traditional message brokers (think Kafka or AMQP) that lets you compose your application without considering the message persistence choices in the moment (CloudEvents).
From Knative Eventing, we will use two components: Broker and Trigger. A Knative Eventing Broker represents an event delivery and persistence layer, sort of an eventing mesh. A Knative Eventing Trigger works with the Broker to ask that a consumer be invoked for any CloudEvent that matches some specified attributes. So the Broker is the stream of events, and the Trigger is how you select events out of the stream and get them delivered.
With Falco producing CloudEvents, we can point our alerts from Falco at the Knative Eventing Broker. Then create a Trigger that selects the Falco event we want to react to. But we also need something to consume the event and react!
From Knative Serving, we can leverage a Knative Serving Service (KService). A KService looks a lot like a Kubernetes Deployment, but it is realized on the cluster as an autoscaling, routable component without the need to manually create additional Kubernetes Services. A KService can run any container as long as it is stateless and its lifecycle is defined only in the context of an active HTTP request.
To tie this up in a picture,
Falco --[via Sidekick]--> Broker --[via Trigger]--> KService
We are free to make the subscriber of the Trigger anything we want, as long as it is routable from the Broker and it accepts HTTP POSTs. The request will be a CloudEvent in binary mode, and Falco emits JSON events, so the payload will be the standard JSON Falco is known for. In fact, we can replace the KService with a Kubeless function and it will still work.
Demo
To demonstrate this, we have prepared a simple example: We will detect root shell creations and delete that pod.
Prerequisites
K3s Cluster
For this blog post, we will show the demo on a k3s cluster created with multipass. Here are the cluster creation commands:
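A sketch of the setup (VM sizes are to taste; newer multipass releases spell --mem as --memory):

```shell
# Create an Ubuntu VM to host the cluster.
multipass launch --name k3s --cpus 2 --mem 4G --disk 20G

# Install k3s inside the VM.
multipass exec k3s -- bash -c "curl -sfL https://get.k3s.io | sh -"

# Copy the kubeconfig out and point kubectl at it
# (replace 127.0.0.1 in the file with the VM address from `multipass info k3s`).
multipass exec k3s -- sudo cat /etc/rancher/k3s/k3s.yaml > k3s.yaml
export KUBECONFIG=$PWD/k3s.yaml
```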
Once the node reports Ready, we have a bare-bones k3s cluster!
Install Knative
To install the rest of Knative into k3s:
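The install is a series of kubectl apply commands against the Knative release manifests; the version numbers below are illustrative, so check the Knative releases for current ones:

```shell
export KNATIVE_VERSION="v0.20.0"

# Knative Serving (CRDs + core).
kubectl apply -f https://github.com/knative/serving/releases/download/${KNATIVE_VERSION}/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/${KNATIVE_VERSION}/serving-core.yaml

# A networking layer for Serving; Kourier is a lightweight choice.
kubectl apply -f https://github.com/knative/net-kourier/releases/download/${KNATIVE_VERSION}/kourier.yaml
kubectl patch configmap/config-network -n knative-serving \
  --type merge -p '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'

# Knative Eventing (CRDs + core + in-memory channel + multi-tenant channel broker).
kubectl apply -f https://github.com/knative/eventing/releases/download/${KNATIVE_VERSION}/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/${KNATIVE_VERSION}/eventing-core.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/${KNATIVE_VERSION}/in-memory-channel.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/${KNATIVE_VERSION}/mt-channel-broker.yaml

# Create a Broker in the default namespace for Falcosidekick to target.
kubectl apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
EOF
```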
See also the knative.dev install instructions for installing these components into your own cluster.
Falco/Falcosidekick/sidekick UI
We'll use helm to install Falco, Falcosidekick, and Falcosidekick UI.
First, add the falcosecurity helm repo:
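The commands are:

```shell
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```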
In a real project, you should get the whole chart with helm pull falcosecurity/falco --untar and then configure the values.yaml.
For this tutorial, we will try to keep things as easy as possible and set the configs directly on the helm install command line:
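Something like the following; the cloudevents address value here assumes the Knative multi-tenant Broker named default in the default namespace:

```shell
helm install falco falcosecurity/falco \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true \
  --set falcosidekick.config.cloudevents.address=http://broker-ingress.knative-eventing.svc.cluster.local/default/default
```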
You should get a confirmation output from helm, and then you can see your new Falco, Falcosidekick, and Falcosidekick UI pods with kubectl get pods.
The arguments --set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true enable Falcosidekick and its UI.
You can now test it with a typical port-forwarding:
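For example (2802 is the default Falcosidekick UI port; the service name follows the falco release name):

```shell
kubectl port-forward svc/falco-falcosidekick-ui 2802:2802
```

Then browse to http://localhost:2802.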
Drop demo
Install the demo with:
This will install a Knative Service that will consume the Falco events sent by falcosidekick (to the broker), some RBAC to enable that service to delete pods, and a Knative Trigger to register this consumer for events from the Knative Eventing Broker.
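From a checkout of the falco-drop repo, the install is roughly the following; the config/ directory layout and the KO_DOCKER_REPO value are assumptions, so adjust for your checkout and registry:

```shell
# ko builds the Go binary, pushes the image, and applies the manifests in one step.
export KO_DOCKER_REPO=docker.io/<your-user>
ko apply -f config/
```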
Consumer KService
The simplified Go code in use looks like the following:
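The real implementation uses the CloudEvents Go SDK and client-go to actually delete the pod; the sketch below is a stdlib-only simplification, with the Kubernetes call left as a comment. The attribute values and output_fields names come from the CloudEvent shown later in this post.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// falcoEvent mirrors the JSON payload Falcosidekick puts in the CloudEvent data.
type falcoEvent struct {
	Output       string                 `json:"output"`
	Priority     string                 `json:"priority"`
	Rule         string                 `json:"rule"`
	OutputFields map[string]interface{} `json:"output_fields"`
}

// podFromEvent validates the CloudEvent attributes (trust, but verify) and
// extracts the namespace and pod name to delete from the Falco payload.
func podFromEvent(ceType, ceSource string, data []byte) (ns, pod string, ok bool) {
	if ceType != "falco.rule.output.v1" || ceSource != "falco.org" {
		return "", "", false
	}
	var ev falcoEvent
	if err := json.Unmarshal(data, &ev); err != nil {
		return "", "", false
	}
	ns, _ = ev.OutputFields["k8s.ns.name"].(string)
	pod, _ = ev.OutputFields["k8s.pod.name"].(string)
	return ns, pod, ns != "" && pod != ""
}

func main() {
	// A trimmed-down version of the event body Falcosidekick delivers.
	data := []byte(`{"rule":"Terminal shell in container","output_fields":{"k8s.ns.name":"default","k8s.pod.name":"alpine"}}`)
	if ns, pod, ok := podFromEvent("falco.rule.output.v1", "falco.org", data); ok {
		// In the full implementation this is a client-go call:
		//   clientset.CoreV1().Pods(ns).Delete(ctx, pod, metav1.DeleteOptions{})
		fmt.Printf("deleting pod %s/%s\n", ns, pod) // prints "deleting pod default/alpine"
	}
}
```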
The full implementation can be found in the falco-drop repo.
Pro-tip: if you are developing in Go for Kubernetes, take a look at ko. ko enables containerizing Go applications without needing a Dockerfile.
Even though the Trigger only delivers events that match the Trigger filter, it is a good idea to validate the event that the function is receiving, which is why we are validating again in the above code (trust, but verify).
Eventing Triggers
The Trigger configures the Broker for a subscriber to be invoked when the Broker ingresses an event that matches the spec.filter settings.
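The Trigger manifest looks like this:

```
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: falco-drop
spec:
  broker: default
  filter:
    attributes:
      source: falco.org
      type: falco.rule.output.v1
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: drop
```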
Note: the kind: Service, name: drop resource is the Knative Service we created above.
Here we are requesting that the Broker only deliver events that have the CloudEvent attributes source=falco.org and type=falco.rule.output.v1. These events are delivered to our subscriber KService.
Want to learn how that spec.subscriber.ref works?! It is duck typing, and you can learn more, but tl;dr: it is basically doing this (except fancy):

kubectl get ksvc drop -o jsonpath='{.status.address.url}'
Test
First, we will create a pod that we can execute commands in later:
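For example:

```shell
kubectl run alpine --image=alpine --restart=Never -- sh -c "sleep 600"
```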
You should see two pods running, drop-00001-* and alpine.
Next, we will execute a command in that alpine pod:
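For example (the command itself does not matter much; attaching a terminal and spawning a shell is what trips the Falco rule):

```shell
kubectl exec -i --tty alpine -- sh -c "uptime"
```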
The alpine pod will be terminated by the drop function once the events are processed.
Or simply start a hanging shell:
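For example:

```shell
kubectl exec -i --tty alpine -- sh
```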
And the shell will be closed.
The event that the drop function is reacting to is a CloudEvent that looks something like this:
Context Attributes,
specversion: 1.0
type: falco.rule.output.v1
source: falco.org
id: f7628198-3822-4c98-ac3f-71770e272a16
time: 2021-01-11T23:46:19.82302759Z
datacontenttype: application/json
Extensions,
foo: bar
priority: Notice
rule: Terminal shell in container
Data,
{
"output": "23:46:19.823027590: Notice A shell was spawned in a container with an attached terminal (user=root user_loginuid=-1 k8s.ns=default k8s.pod=alpine container=f29b261f8831 shell=bash parent=runc cmdline=bash -il terminal=34816 container_id=f29b261f8831 image=mysql) k8s.ns=default k8s.pod=mysql-db-7d59548d75-wh44s container=f29b261f8831",
"priority": "Notice",
"rule": "Terminal shell in container",
"time": "2021-01-11T23:46:19.82302759Z",
"output_fields": {
"container.id": "f29b261f8831",
"container.image.repository": "mysql",
"evt.time": 1610408779823027700,
"k8s.ns.name": "default",
"k8s.pod.name": "alpine",
"proc.cmdline": "bash -il",
"proc.name": "bash",
"proc.pname": "runc",
"proc.tty": 34816,
"user.loginuid": -1,
"user.name": "root"
}
}
The KService consumes this event and simply deletes the pod. You can also see this activity in the falcosidekick UI.
Conclusion
Thinking Cloud Native is a mindset of picking the right tool for the job and assembling these tools into something greater than the sum of their parts. Falco is a great tool for detection and alerting, and it gets really interesting once we can react to those events in ways we never imagined, because integrators are creative and innovative.
What will you build?
Knative
If you would like to find out more about Knative:
- Get started in knative.dev.
- Check out the Knative Project in GitHub.
- Meet the maintainers on the Knative Slack.
- Follow @KnativeProject on Twitter.
Falco
If you would like to find out more about Falco:
- Get started in Falco.org.
- Check out the Falco project in GitHub.
- Get involved in the Falco community.
- Meet the maintainers on the Falco Slack.
- Follow @falco_org on Twitter.