Falco from A to Y
When managing a server fleet, it is often challenging to have visibility into what is happening on the servers. We can never truly know when a user is attempting to bypass our system’s security.
Logs (if they exist) are usually buried in the noise, making it difficult to detect abnormal behaviors that could be a sign of an intrusion.
Using a log aggregator like Loki or Elasticsearch can be an effective solution for centralizing logs and making them more easily exploitable. However, this is not enough to detect dangerous actions such as creating a reverse shell, writing to a sensitive directory, searching for SSH keys, etc., which do not generate logs.
Apart from attacks on exposed services (VPNs, web servers, SSH daemons, etc.), we are completely blind to what is happening on our servers. If a malicious actor gains access through a compromised SSH key, they can move around freely, and it will be very difficult to detect them (SELinux and AppArmor do block some attempts). This is where Falco comes in.
What is Falco?
Falco is a threat detection engine for your systems. It is particularly well-suited for containerized environments (Docker, Kubernetes) but is not limited to them.
It works by monitoring system calls and comparing them to predefined rules. When an abnormal event is detected, Falco sends an alert. These rules are written in YAML and can be fine-tuned to match your environment.
In this article, we will explore what Falco is and how to be alerted of abnormal events on our servers, as well as how to set it up in a Kubernetes environment.
It’s worth noting that Falco is a project accepted by the CNCF (Cloud Native Computing Foundation) in the “graduated” category, alongside projects like Cilium, Rook, ArgoCD, and Prometheus.
Let’s dive into Falco from A to Y… with a good cup of coffee! (The “Z” is out of reach for software that never stops evolving.)
Installation on a Debian 12 server
Let’s start by installing Falco on a Debian 12 server. For this, we will use an official APT repository.
curl -fsSL https://falco.org/repo/falcosecurity-packages.asc | \
sudo gpg --dearmor -o /usr/share/keyrings/falco-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/falco-archive-keyring.gpg] https://download.falco.org/packages/deb stable main" | sudo tee /etc/apt/sources.list.d/falcosecurity.list
sudo apt-get update && sudo apt-get install -y falco
During the installation, you will be asked to choose a “driver”. This refers to how Falco will monitor system calls and events. You can choose between:
- Kernel module (Legacy)
- eBPF
- Modern eBPF
The kernel module is best avoided, as it is less performant than eBPF and provides no additional functionality. The difference between eBPF and Modern eBPF lies in their implementation. Classic eBPF requires Falco to download a probe built for the exact kernel version in use, which must be re-downloaded after every kernel update. Modern eBPF works differently, following the BPF CO-RE (Compile Once, Run Everywhere) paradigm: the code to inject into the kernel ships inside the Falco binary itself, so no external driver is needed (it does, however, require kernel 5.8 or later, whereas classic eBPF works from kernel 4.14).
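The version thresholds above can be checked mechanically. Here is a small bash sketch (the `choose_falco_driver` helper is my own illustration, not a Falco tool) that picks the most capable driver from a kernel version, using the minimums mentioned above:

```shell
#!/usr/bin/env bash
# Hypothetical helper: pick the most capable Falco driver for a kernel version.
# Assumed thresholds (from the text): Modern eBPF >= 5.8, classic eBPF >= 4.14.
choose_falco_driver() {
  local kver="$1"                 # e.g. "6.1.0"
  local major="${kver%%.*}"
  local rest="${kver#*.}"
  local minor="${rest%%.*}"
  if (( major > 5 || (major == 5 && minor >= 8) )); then
    echo "modern_ebpf"
  elif (( major > 4 || (major == 4 && minor >= 14) )); then
    echo "ebpf"
  else
    echo "kmod"
  fi
}

# On the article's Debian 12 machine (kernel 6.1.0), this prints modern_ebpf.
choose_falco_driver "$(uname -r | cut -d- -f1)"
```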
My machine runs kernel 6.1.0, so I will choose Modern eBPF.
I then verify that Falco is installed correctly with the command falco --version.
$ falco --version
Thu Apr 04 10:29:22 2024: Falco version: 0.37.1 (x86_64)
Thu Apr 04 10:29:22 2024: Falco initialized with configuration file: /etc/falco/falco.yaml
Thu Apr 04 10:29:22 2024: System info: Linux version 6.1.0-16-amd64 ([email protected]) (gcc-12 (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.1.67-1 (2023-12-12)
{"default_driver_version":"7.0.0+driver","driver_api_version":"8.0.0","driver_schema_version":"2.0.0","engine_version":"31","engine_version_semver":"0.31.0","falco_version":"0.37.1","libs_version":"0.14.3","plugin_api_version":"3.2.0"}
I can then start the Falco service corresponding to my driver (in my case falco-modern-bpf.service, the other services being falco-kmod.service and falco-bpf.service).
$ systemctl status falco-modern-bpf.service
● falco-modern-bpf.service - Falco: Container Native Runtime Security with modern ebpf
Loaded: loaded (/lib/systemd/system/falco-modern-bpf.service; enabled; preset: enabled)
Active: active (running) since Thu 2024-04-04 10:31:07 CEST; 44s ago
Docs: https://falco.org/docs/
Main PID: 2511 (falco)
Tasks: 9 (limit: 3509)
Memory: 32.8M
CPU: 828ms
CGroup: /system.slice/falco-modern-bpf.service
└─2511 /usr/bin/falco -o engine.kind=modern_ebpf
avril 04 10:31:07 falco-linux falco[2511]: Falco initialized with configuration file: /etc/falco/falco.yaml
avril 04 10:31:07 falco-linux falco[2511]: System info: Linux version 6.1.0-16-amd64 ([email protected]) (gcc-12 (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.4>
avril 04 10:31:07 falco-linux falco[2511]: Loading rules from file /etc/falco/falco_rules.yaml
avril 04 10:31:07 falco-linux falco[2511]: Loading rules from file /etc/falco/falco_rules.local.yaml
avril 04 10:31:07 falco-linux falco[2511]: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
avril 04 10:31:07 falco-linux falco[2511]: Starting health webserver with threadiness 4, listening on 0.0.0.0:8765
avril 04 10:31:07 falco-linux falco[2511]: Loaded event sources: syscall
avril 04 10:31:07 falco-linux falco[2511]: Enabled event sources: syscall
avril 04 10:31:07 falco-linux falco[2511]: Opening 'syscall' source with modern BPF probe.
avril 04 10:31:07 falco-linux falco[2511]: One ring buffer every '2' CPUs.
We will discuss driver configuration in more detail in an upcoming chapter.
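For reference, the buffer size and per-CPU ring allocation seen in the logs above map to settings in /etc/falco/falco.yaml. This excerpt is illustrative (key names as shipped in recent Falco configs; treat the values as assumptions to verify against your own file):

```yaml
# Illustrative excerpt of /etc/falco/falco.yaml driver tuning
syscall_buf_size_preset: 4        # preset 4 corresponds to 8 MiB ring buffers
modern_bpf:
  cpus_for_each_syscall_buffer: 2 # matches "One ring buffer every '2' CPUs" in the logs
```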
Creating a First Falco Rule
For those who are impatient, here is a first Falco rule that detects writes to binary directories that are not caused by package managers.
By default, the rules are located in the following files:
- /etc/falco/falco_rules.yaml → Default rules
- /etc/falco/falco_rules.local.yaml → Custom rules
- /etc/falco/rules.d/* → Custom rules, one file per rule set
So, let’s create our first rule in /etc/falco/falco_rules.local.yaml.
- macro: bin_dir
  condition: (fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin))

- list: package_mgmt_binaries
  items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, python_package_managers, sane-utils.post, alternatives, chef-client, apk, snapd]

- macro: package_mgmt_procs
  condition: (proc.name in (package_mgmt_binaries))

- rule: Write below binary dir
  desc: >
    Trying to write to any file below specific binary directories can serve as an auditing rule to track general system changes.
    Such rules can be noisy and challenging to interpret, particularly if your system frequently undergoes updates. However, careful
    profiling of your environment can transform this rule into an effective rule for detecting unusual behavior associated with system
    changes, including compliance-related cases.
  condition: >
    open_write and evt.dir=<
    and bin_dir
    and not package_mgmt_procs
  output: File below a known binary directory opened for writing (file=%fd.name pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
  priority: ERROR
We will see later how a rule is composed; for now, let’s simply accept that this rule detects writes to binary directories that are not performed by package managers.
After creating our rule in /etc/falco/falco_rules.local.yaml, Falco should automatically reload it (if not, we can do it manually with sudo systemctl reload falco-modern-bpf.service).
Let’s try to trigger the alert we just created. I open a first terminal displaying the Falco logs with journalctl -u falco-modern-bpf.service -f, and in a second one, execute the command touch /bin/toto.
avril 04 11:18:38 falco-linux falco[2511]: 11:18:38.057651538: Error File below a known binary directory opened for writing (file=/bin/toto pcmdline=bash gparent=sshd evt_type=openat user=root user_uid=0 user_loginuid=0 process=touch proc_exepath=/usr/bin/touch parent=bash command=touch /bin/toto terminal=34816 container_id=host container_name=host)
Victory, we triggered an alert! 🥳
Now, it should not trigger if I install a package via apt:
$ apt install -y apache2
$ which apache2
/usr/sbin/apache2
The apache2 binary is located in /usr/sbin (a directory monitored by our Falco rule), yet no alert was triggered.
We are already starting to understand how Falco works. Let’s dive a little deeper into the details.
Falco Architecture
Natively, Falco is designed to work with eBPF (Extended Berkeley Packet Filter) probes. These probes are programs that are loaded directly into the kernel to notify Falco (in user-land) of what is happening. It is also possible to run Falco using a kernel module.
Info
What is eBPF?
eBPF, which stands for extended Berkeley Packet Filter, is a technology that allows running programs in a sandbox environment within the Linux kernel without the need to modify kernel code or load external modules. eBPF has been available since kernel 3.18 (2014) and has the ability to attach to an event. This way, the injected code is only called when a system call related to our need is executed. This prevents overloading the kernel with unnecessary programs.
To learn more, I invite you to watch the video by Laurent GRONDIN.
System calls (syscall)
Falco is an agent that monitors system calls using eBPF or a kernel module (when eBPF is not possible). Programs typically make system calls to interact with the kernel (essential for reading/writing a file, making a request, etc.).
To view the system calls of a process, you can use the strace command, which launches a program and displays the system calls it makes.
$ strace echo "P'tit kawa?"
execve("/usr/bin/echo", ["echo", "P'tit kawa?"], 0x7fff2442be98 /* 66 vars */) = 0
brk(NULL) = 0x629292ad2000
arch_prctl(0x3001 /* ARCH_??? */, 0x7fff1aa14580) = -1 EINVAL (Argument invalide)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7604dfe2e000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (Aucun fichier ou dossier de ce nom)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=110843, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 110843, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7604dfe12000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\237\2\0\0\0\0\0"..., 832) = 832
pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
pread64(3, "\4\0\0\0 \0\0\0\5\0\0\0GNU\0\2\0\0\300\4\0\0\0\3\0\0\0\0\0\0\0"..., 48, 848) = 48
pread64(3, "\4\0\0\0\24\0\0\0\3\0\0\0GNU\0\302\211\332Pq\2439\235\350\223\322\257\201\326\243\f"..., 68, 896) = 68
newfstatat(3, "", {st_mode=S_IFREG|0755, st_size=2220400, ...}, AT_EMPTY_PATH) = 0
pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
mmap(NULL, 2264656, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7604dfa00000
mprotect(0x7604dfa28000, 2023424, PROT_NONE) = 0
mmap(0x7604dfa28000, 1658880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x28000) = 0x7604dfa28000
mmap(0x7604dfbbd000, 360448, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bd000) = 0x7604dfbbd000
mmap(0x7604dfc16000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x215000) = 0x7604dfc16000
mmap(0x7604dfc1c000, 52816, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7604dfc1c000
close(3) = 0
mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7604dfe0f000
arch_prctl(ARCH_SET_FS, 0x7604dfe0f740) = 0
set_tid_address(0x7604dfe0fa10) = 12767
set_robust_list(0x7604dfe0fa20, 24) = 0
rseq(0x7604dfe100e0, 0x20, 0, 0x53053053) = 0
mprotect(0x7604dfc16000, 16384, PROT_READ) = 0
mprotect(0x6292924c8000, 4096, PROT_READ) = 0
mprotect(0x7604dfe68000, 8192, PROT_READ) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
munmap(0x7604dfe12000, 110843) = 0
getrandom("\xe1\x15\x54\xd3\xf5\xa1\x30\x4d", 8, GRND_NONBLOCK) = 8
brk(NULL) = 0x629292ad2000
brk(0x629292af3000) = 0x629292af3000
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=15751120, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 15751120, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7604dea00000
close(3) = 0
newfstatat(1, "", {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x3), ...}, AT_EMPTY_PATH) = 0
write(1, "P'tit kawa?\n", 12P'tit kawa?
) = 12
close(1) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
These are precisely the system calls that Falco monitors to detect potential anomalies.
But why not just use strace and avoid the need for eBPF?
strace is not a viable solution for several reasons, starting with the fact that it cannot monitor multiple processes at once. It also works in the opposite direction of what we want: it shows the system calls of a process we chose to observe, but it cannot react to system calls in order to flag a malicious process.
Sysdig (the company behind Falco) also offers an eponymous tool, sysdig, to visualize and record system calls to a file, making it easier to create Falco rules.
VERSION="0.36.0"
wget https://github.com/draios/sysdig/releases/download/${VERSION}/sysdig-${VERSION}-x86_64.deb
dpkg -i sysdig-${VERSION}-x86_64.deb
I am pinning the version to 0.36.0 so that you get the same results as I do, but of course, feel free to use the latest available version.
Using the command sysdig proc.name=chmod, we can visualize the system calls related to this command.
153642 15:18:57.468174047 1 chmod (72012.72012) < execve res=0 exe=chmod args=777.README.md. tid=72012(chmod) pid=72012(chmod) ptid=71438(bash) cwd=<NA> fdlimit=1024 pgft_maj=0 pgft_min=33 vm_size=432 vm_rss=4 vm_swap=0 comm=chmod cgroups=cpuset=/user.slice.cpu=/user.slice/user-0.slice/session-106.scope.cpuacct=/.i... env=SHELL=/bin/bash.LESS= -R.PWD=/root.LOGNAME=root.XDG_SESSION_TYPE=tty.LS_OPTIO... tty=34823 pgid=72012(chmod) loginuid=0(root) flags=1(EXE_WRITABLE) cap_inheritable=0 cap_permitted=1FFFFFFFFFF cap_effective=1FFFFFFFFFF exe_ino=654358 exe_ino_ctime=2023-12-29 16:08:00.974303000 exe_ino_mtime=2022-09-20 17:27:27.000000000 uid=0(root) trusted_exepath=/usr/bin/chmod
153643 15:18:57.468204086 1 chmod (72012.72012) > brk addr=0
153644 15:18:57.468205471 1 chmod (72012.72012) < brk res=55B917192000 vm_size=432 vm_rss=4 vm_swap=0
153645 15:18:57.468270944 1 chmod (72012.72012) > mmap addr=0 length=8192 prot=3(PROT_READ|PROT_WRITE) flags=10(MAP_PRIVATE|MAP_ANONYMOUS) fd=-1(EPERM) offset=0
We will reuse this tool in a future chapter to create Falco rules.
Getting alerted in case of an event
It’s great to be aware of an intrusion, but if no one is there to react, it’s not very useful!
That’s why we’re going to install a second application: Falco Sidekick.
Its role is to receive alerts from Falco and redirect them to external tools (Mail, Alertmanager, Slack, etc.). To install it, we can directly download the binary and create a systemd service (ideally, it should be deployed on an isolated machine to prevent an attacker from disabling it).
VER="2.28.0"
wget -c https://github.com/falcosecurity/falcosidekick/releases/download/${VER}/falcosidekick_${VER}_linux_amd64.tar.gz -O - | tar -xz
chmod +x falcosidekick
sudo mv falcosidekick /usr/local/bin/
sudo touch /etc/systemd/system/falcosidekick.service
sudo chmod 664 /etc/systemd/system/falcosidekick.service
Edit the file /etc/systemd/system/falcosidekick.service
and add the following content:
[Unit]
Description=Falcosidekick
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/falcosidekick -c /etc/falcosidekick/config.yaml
[Install]
WantedBy=multi-user.target
But before running it, we need to create the configuration file /etc/falcosidekick/config.yaml.
I manage my alerts with Alertmanager and I want to continue using it for Falco alerts because it allows me to route notifications to Gotify or Email based on their priority.
debug: false

alertmanager:
  hostport: "http://192.168.1.89:9093"
To start the service, we can use systemctl.
systemctl daemon-reload
systemctl enable --now falcosidekick
Now we need to instruct our Falco agent to send events to Falcosidekick. To do this, I edit /etc/falco/falco.yaml to enable the HTTP output.
Here are the values to edit:
json_output: true
json_include_output_property: true

http_output:
  enabled: true
  url: http://192.168.1.105:2801/ # Falcosidekick URL
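With json_output enabled, each alert becomes a single JSON object (with keys such as rule, priority, output, and output_fields). As a quick sketch, here is one way to pull the rule name out of a sample alert with plain sed; the alert below is abridged and illustrative, not a full Falco payload:

```shell
#!/usr/bin/env bash
# Abridged, illustrative alert, shaped like what Falco emits with json_output: true
alert='{"priority":"Error","rule":"Write below binary dir","output_fields":{"fd.name":"/bin/toto","user.name":"root"}}'

# Crude extraction of the rule name without jq (sed back-reference on the "rule" key)
rule=$(printf '%s' "$alert" | sed -n 's/.*"rule":"\([^"]*\)".*/\1/p')
echo "$rule"
```

In practice, jq is more robust for this (jq -r .rule); the sketch only shows the shape of the payload.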
I restart Falco to apply this output configuration, then create a first test alert:
curl -sI -XPOST http://192.168.1.105:2801/test
I receive this test alert on my alertmanager and even by email (thanks to the integration of an SMTP in my alertmanager):
Now, let’s try to trigger the alert of the Write below binary dir rule that we created earlier.
The touch /bin/coffee is directly reported to my Alertmanager!
We can also see details such as the process that initiated the system call, touch (spawned by bash, itself launched by tmux).
To monitor Falco-Sidekick, you can use the following endpoints:
- /ping → returns “pong” in plain text
- /healthz → returns { 'status': 'ok' }
- /metrics → exposes Prometheus metrics
Web Interface for Falco
Coupling Falcosidekick with Alertmanager to be notified of incidents is a good start, but if we want something simpler, with a web interface to browse the alerts, we can use Falcosidekick UI.
Falcosidekick UI is a web interface that retrieves alerts from Falcosidekick. It relies on a Redis database that stores the alerts in order to display them in a dashboard.
We can install it with Docker containers as follows:
version: '3'
services:
  falco-sidekick-ui:
    image: falcosecurity/falcosidekick-ui:2.3.0-rc2
    restart: always
    ports:
      - "2802:2802"
    environment:
      - FALCOSIDEKICK_UI_REDIS_URL=redis:6379
      - FALCOSIDEKICK_UI_USER=weare:coffeelovers
    depends_on:
      - redis
  redis:
    image: redis/redis-stack:7.2.0-v9
Once the images are downloaded, we can start the containers with docker-compose up -d and authenticate on the web interface with the credentials weare:coffeelovers.
Next, I need to instruct Falcosidekick to also route the alerts to Falcosidekick UI. To do this, I edit my configuration file /etc/falcosidekick/config.yaml:
debug: false

alertmanager:
  hostport: "http://192.168.1.89:9093"

webui:
  url: "http://192.168.1.105:2802"
With this new configuration, both the web interface and AlertManager receive the events.
After a few triggered alerts, my interface starts to fill up a bit.
I can then sort the alerts by type, rule, priority, etc.
I can review previous alerts to identify any recurring patterns on the monitored machines.
Info
In the current case, there is no data persistence in the Redis database. If you restart the container, you will lose all previous alerts. If Falco-Sidekick UI is used in production, it is necessary to set up a persistent database.
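As a sketch of such persistence, the redis service from the compose file above could mount a volume and enable append-only persistence. The volume path and the REDIS_ARGS value are my assumptions, to be adapted to your setup:

```yaml
# Hypothetical adaptation of the redis service for persistence
services:
  redis:
    image: redis/redis-stack:7.2.0-v9
    volumes:
      - ./redis-data:/data           # assumption: host directory for Redis data files
    environment:
      - REDIS_ARGS=--appendonly yes  # assumption: enable AOF persistence
```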
Now that we have everything we need to visualize the alerts, let’s see how to create Falco rules.
Falco Rules
A Falco rule consists of several elements: a name, a description, a condition, an output, a priority, and tags.
Here is an example of a Falco rule:
- rule: Program run with disallowed http proxy env
  desc: >
    Detect curl or wget usage with HTTP_PROXY environment variable. Attackers can manipulate the HTTP_PROXY variable's
    value to redirect application's internal HTTP requests. This could expose sensitive information like authentication
    keys and private data.
  condition: >
    spawned_process
    and (proc.name in (http_proxy_binaries))
    and (proc.env icontains HTTP_PROXY or proc.env icontains HTTPS_PROXY)
  output: Curl or wget run with disallowed HTTP_PROXY environment variable (env=%proc.env evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty exe_flags=%evt.arg.flags %container.info)
  priority: NOTICE
  tags: [maturity_incubating, host, container, users, mitre_execution, T1204]
The name, description, tags, and priority will be displayed in the report of a suspicious event. It is important to provide meaningful content for these fields to facilitate the reading of alerts from an external tool.
While proc.env icontains HTTP_PROXY is easy enough to understand, what about spawned_process or http_proxy_binaries?
Let’s start by explaining the fields we use to contextualize an alert.
Fields
Fields are the variables used in Falco rules. They differ depending on the event (system call) we want to monitor. There are several classes of fields, for example:
- evt: generic event information (evt.time, evt.type, evt.dir)
- proc: process information (proc.name, proc.cmdline, proc.tty)
- user / group: information about the user who triggered the event (user.name, group.gid)
- fd: file and connection information (fd.ip, fd.name)
- container: Docker container or Kubernetes pod information (container.id, container.name, container.image)
- k8s: Kubernetes object information (k8s.ns.name, k8s.pod.name)
The definition of a field to monitor always starts with a system call (an evt field) before being contextualized with other fields (such as the process name, the user who executed it, or the opened file).
To see the complete list of fields, refer to the official documentation or run the falco --list=syscall command directly.
Operators
Conditions combine fields using the following operators:
Operators | Description
---|---
=, != | Equality and inequality operators.
<=, <, >=, > | Comparison operators for numeric values.
contains, icontains | For strings, returns true if one string contains another (icontains is the case-insensitive version). For flags, returns true if the flag is set. Examples: proc.cmdline contains "-jar", evt.arg.flags contains O_TRUNC.
startswith, endswith | Checks the prefix or suffix of a string.
glob | Evaluates standard glob patterns. Example: fd.name glob "/home/*/.ssh/*".
in | True if the provided set (which can have a single element) is entirely contained within another set. Example: (b,c,d) in (a,b,c) returns FALSE since d is not contained in the compared set (a,b,c).
intersects | True if the provided set (which can have a single element) has at least one element in common with another set. Example: (b,c,d) intersects (a,b,c) returns TRUE since both sets contain b and c.
pmatch | (Prefix match) Compares a file path to a set of file or directory prefixes. Example: fd.name pmatch (/tmp/hello) returns true against /tmp/hello and /tmp/hello/world, but not against /tmp/hello_world.
exists | Checks whether a field is defined. Example: k8s.pod.name exists.
bcontains, bstartswith | (Binary contains) These operators work like contains and startswith but match raw bytes, accepting a hexadecimal string as input. Examples: evt.buffer bcontains CAFE, evt.buffer bstartswith CAFE.
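To make this concrete, here is a hypothetical condition fragment (my own illustration, not one of the shipped rules) combining several of these operators:

```yaml
# Hypothetical condition combining =, glob, in, and contains
condition: >
  evt.type = openat
  and fd.name glob "/home/*/.ssh/*"
  and not proc.name in (sshd, ssh-agent)
  and proc.cmdline contains "id_rsa"
```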
Thus, to compare a process name, we can use conditions such as proc.name = sshd or proc.name contains sshd.
When part of a condition carries no operator, it is a macro. Let’s see what that is about.
Macros and Lists
Macros are “variables” that can be used in Falco rules to facilitate the reading and maintenance of rules or to reuse frequently used conditions.
For example, instead of repeating the condition proc.name in (rpm, dpkg, apt, yum) in the ten rules that need it, we can create a macro package_mgmt_procs and a list package_mgmt_binaries containing the names of the package management processes.
- list: package_mgmt_binaries
  items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, python_package_managers, sane-utils.post, alternatives, chef-client, apk, snapd]

- macro: package_mgmt_procs
  condition: (proc.name in (package_mgmt_binaries))
Thus, in the following rule, we can simply use package_mgmt_procs to check whether the process is a package manager; it returns true if it is.
- rule: Write below binary dir
  desc: >
    Trying to write to any file below specific binary directories can serve as an auditing rule to track general system changes.
    Such rules can be noisy and challenging to interpret, particularly if your system frequently undergoes updates. However, careful
    profiling of your environment can transform this rule into an effective rule for detecting unusual behavior associated with system
    changes, including compliance-related cases.
  condition: >
    open_write and evt.dir=<
    and bin_dir
    and not package_mgmt_procs
  output: File below a known binary directory opened for writing (file=%fd.name pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)
  priority: ERROR
We get a simple, readable rule whose components can be reused in other contexts.
The above rule comes from the official Falco documentation. But by knowing the fields and operators, we can easily create our own rules.
In the following example, I want to detect if a user is trying to search for sensitive files (such as SSH keys or Kubernetes configuration files).
- list: searching_binaries
  items: ['grep', 'fgrep', 'egrep', 'rgrep', 'locate', 'find']

- rule: search for sensitives files
  desc: Detect if someone is searching for a sensitive file
  condition: >
    spawned_process and proc.name in (searching_binaries) and
    (
      proc.args contains "id_rsa" or
      proc.args contains "id_ed25519" or
      proc.args contains "kube/config"
    )
  output: Someone is searching for a sensitive file (file=%proc.args pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty)
  priority: INFO
  tags: [sensitive_data, ssh]
Writing a Rule
To write a rule like the one above (search for sensitives files), it is possible to proceed blindly as we did (write, then test), but the Falco maintainers suggest using sysdig instead. Let’s see the benefits of this method.
Sysdig offers a feature to record system calls to a file and use this recording with the same syntax as Falco rules to see if an event is properly detected.
Before we start, it is important to know which event we want to react to. Some manpages can help us determine which system calls a program uses.
In a first terminal, we will start recording system calls to a file.
I will perform the action I want to monitor (in my case, I want to be alerted on a chmod 777). In the man page of chmod (man 2 chmod), I see that the system calls used are fchmodat and fchmod.
Therefore, I will record the system calls fchmodat, fchmod, and chmod to a file named dumpfile.scap.
sysdig -w dumpfile.scap "evt.type in (fchmod,chmod,fchmodat)"
In a second terminal, I execute:
chmod 777 /tmp/test
I stop the recording with Ctrl+C. I can then use sysdig to read this file and see the system calls that were recorded.
$ sysdig -r dumpfile.scap
1546 16:21:59.800433848 1 <NA> (-1.76565) > fchmodat
1547 16:21:59.800449701 1 <NA> (-1.76565) < fchmodat res=0 dirfd=-100(AT_FDCWD) filename=/tmp/test mode=0777(S_IXOTH|S_IWOTH|S_IROTH|S_IXGRP|S_IWGRP|S_IRGRP|S_IXUSR|S_IWUSR|S_IRUSR)
Our chmod has been successfully recorded.
I can now replay this recording while adding conditions, to build a filter that reacts to this event. After a few tests, I end up with the following:
$ sysdig -r ~/dumpfile.scap "evt.type in (fchmod,chmod,fchmodat) and (evt.arg.mode contains S_IXOTH and evt.arg.mode contains S_IWOTH and evt.arg.mode contains S_IROTH and evt.arg.mode contains S_IXGRP and evt.arg.mode contains S_IWGRP and evt.arg.mode contains S_IRGRP and evt.arg.mode contains S_IXUSR and evt.arg.mode contains S_IWUSR and evt.arg.mode contains S_IRUSR)"
1547 16:21:59.800449701 1 <NA> (-1.76565) < fchmodat res=0 dirfd=-100(AT_FDCWD) filename=/tmp/test mode=0777(S_IXOTH|S_IWOTH|S_IROTH|S_IXGRP|S_IWGRP|S_IRGRP|S_IXUSR|S_IWUSR|S_IRUSR)
All that remains is to turn my sysdig filter into a Falco rule.
- macro: chmod_777
  condition: (evt.arg.mode contains S_IXOTH and evt.arg.mode contains S_IWOTH and evt.arg.mode contains S_IROTH and evt.arg.mode contains S_IXGRP and evt.arg.mode contains S_IWGRP and evt.arg.mode contains S_IRGRP and evt.arg.mode contains S_IXUSR and evt.arg.mode contains S_IWUSR and evt.arg.mode contains S_IRUSR)

- rule: chmod 777
  desc: Detect if someone is trying to chmod 777 a file
  condition: >
    evt.type in (fchmod,chmod,fchmodat) and chmod_777
  output: Someone is trying to chmod 777 a file (file=%fd.name pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty)
  priority: NOTICE
  tags: [chmod, security]
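The chmod_777 macro has to list all nine S_I* flags because sysdig decodes the numeric mode into that flag set, and only mode 0777 sets every one of them. The small bash function below (my own illustration, unrelated to Falco itself) expands an octal mode the same way, which helps sanity-check such macros:

```shell
#!/usr/bin/env bash
# Expand an octal mode into the S_* flag names sysdig prints, lowest bit first.
mode_flags() {
  local m=$(( 8#$1 ))   # parse the argument as octal, e.g. 777 -> 511
  local names=(S_IXOTH S_IWOTH S_IROTH S_IXGRP S_IWGRP S_IRGRP S_IXUSR S_IWUSR S_IRUSR)
  local out="" i
  for i in {0..8}; do
    if (( m & (1 << i) )); then out="${out}|${names[i]}"; fi
  done
  echo "${out#|}"
}

mode_flags 777   # all nine flags: the chmod_777 macro matches
mode_flags 644   # only four flags: the macro (correctly) does not match
```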
Overrides and exceptions
A complex case to handle is that of exceptions. For example, I discover that a specific user needs to search for sensitive files in a maintenance script.
The search for sensitives files rule will fire for every find process looking for SSH keys. However, I want this user to be able to do so without triggering an alert.
- list: searching_binaries
  items: ['grep', 'fgrep', 'egrep', 'rgrep', 'locate', 'find']

- rule: search for sensitives files
  desc: Detect if someone is searching for a sensitive file
  condition: >
    spawned_process and proc.name in (searching_binaries) and
    (
      proc.args contains "id_rsa" or
      proc.args contains "id_ed25519" or
      proc.args contains "kube/config"
    )
  output: Someone is searching for a sensitive file (file=%proc.args pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty)
  priority: INFO
  tags: [sensitive_data, ssh]
I can do this in two ways:
- Add a patch to the search for sensitive files rule to add an additional condition.
- Add an exception in the search for sensitive files rule to ignore alerts for a specific user.
Why not modify the condition to exclude this case?
I have X machines with the same Falco rules (we will see how to synchronize rules across machines below), and I want as few differences as possible between them. I prefer to add a patch file that is exclusive to this machine.
I then add a file /etc/falco/rules.d/patch-rules.yaml with the following content:
- rule: search for sensitives files
  condition: and user.name != "mngmt"
  override:
    condition: append
Tip
It is also possible to use override on lists and macros:
- list: searching_binaries
  items: ['grep', 'fgrep', 'egrep', 'rgrep', 'locate', 'find']

- list: searching_binaries
  items: ['rg', 'hgrep', 'ugrep']
  override:
    items: append
We have just used an override to append a condition to our rule. But override can also replace an entire condition of a rule. For example:
- rule: search for sensitives files
  condition: >
    spawned_process and proc.name in (searching_binaries) and
    (
      proc.args contains "id_rsa" or
      proc.args contains "id_ed25519" or
      proc.args contains "kube/config"
    ) and user.name != "mngmt"
  override:
    condition: replace
The downside of this method is that it can become complicated to manage if we have many rules and overrides in multiple files. It’s easy to get lost.
Instead of adding an override to modify a condition to exclude a context, it is possible to use exceptions.
- rule: search for sensitives files
  desc: Detect if someone is searching for a sensitive file
  condition: >
    spawned_process and proc.name in (searching_binaries) and
    (
      proc.args contains "id_rsa" or
      proc.args contains "id_ed25519" or
      proc.args contains "kube/config"
    )
  exceptions:
    - name: ssh_script
      fields: user.name
      values: [mngmt]
  output: Someone is searching for a sensitive file (file=%proc.args pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty container_id=%container.id container_image=%container.image.repository container_image_tag=%container.image.tag container_name=%container.name)
  priority: INFO
  tags: [sensitive_data, ssh]
This writing is cleaner and allows me to add many exceptions without having to modify the main rule. I can also refine my exception by adding other fields to be more precise.
- rule: search for sensitives files
  desc: Detect if someone is searching for a sensitive file
  condition: >
    spawned_process and proc.name in (searching_binaries) and
    (
      proc.args contains "id_rsa" or
      proc.args contains "id_ed25519" or
      proc.args contains "kube/config"
    )
  exceptions:
    - name: context_1
      fields: [user.name, proc.cwd, user.shell]
      values:
        - [mngmt, /home/mngmt/, /bin/sh]
  output: Someone is currently searching for a sensitive file (%proc.cwd file=%proc.args pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty container_id=%container.id container_image=%container.image.repository container_image_tag=%container.image.tag container_name=%container.name)
  priority: INFO
  tags: [sensitive_data, ssh]
The use of exceptions helps to avoid having a “condition” field that is too long and difficult to maintain.
My example here is deliberately simple, but it is genuinely dangerous to let a user search for sensitive files. I invite you to reconsider how you manage access to these files rather than creating an exception in Falco.
Falco and Containers
Falco integrates well with containers and is capable of detecting events that occur within them.
Containers can be monitored by using the values of the container class (for example, container.id, container.name, container.image).
- macro: container_started
  condition: ((evt.type = container) or (spawned_process and proc.vpid=1))

- rule: New container with tag latest
  desc: Detect if a new container with the tag "latest" is started
  condition: >
    container_started and container.image.tag="latest"
  output: A new container with the tag "latest" is started (container_id=%container.id container_image=%container.image.repository container_image_tag=%container.image.tag container_name=%container.name k8s_ns=%k8s.ns.name k8s_pod_name=%k8s.pod.name)
  priority: INFO
  tags: [container, invalid_tag]
The existing rules are already adapted to work within containers (such as rules for shell detection, sensitive file search, etc.). However, it is necessary to add the container.id, container.image, and container.name fields to the output to have information about the relevant container when an alert is triggered.
Just a clarification: without container_started, the rule would have no event-type anchor, and Falco would be forced to evaluate every system call to check whether it comes from a container with the latest tag, which poses significant performance issues, as specified in the chapter Falco Rules.
For example, let’s reuse the search for sensitives files rule to add the container.id, container.image, and container.name fields to the output.
- rule: search for sensitives files
  desc: Detect if someone is searching for a sensitive file
  condition: >
    spawned_process and proc.name in (searching_binaries) and
    (
      proc.args contains "id_rsa" or
      proc.args contains "id_ed25519" or
      proc.args contains "kube/config"
    )
  output: Someone is searching for a sensitive file (file=%proc.args pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty container_id=%container.id container_image=%container.image.repository container_image_tag=%container.image.tag container_name=%container.name)
  priority: INFO
  tags: [sensitive_data, ssh]
When an alert is triggered, we obtain information about the relevant container (such as the ID, name, image, etc.).
{
"hostname": "falco-linux",
"output": "22:50:25.195959798: Informational Someone is searching for a sensitive file (file=-r id_rsa pcmdline=ash gparent=containerd-shim evt_type=execve user=root user_uid=0 user_loginuid=-1 process=grep proc_exepath=/bin/busybox parent=ash command=grep -r id_rsa terminal=34816 container_id=b578c3492ecf container_image=alpine container_image_tag=latest container_name=sharp_lovelace)",
"priority": "Informational",
"rule": "search for sensitives files",
"source": "syscall",
"tags": [
"sensitive_data",
"ssh"
],
"time": "2024-04-06T20:50:25.195959798Z",
"output_fields": {
"container.id": "b578c3492ecf",
"container.image.repository": "alpine",
"container.image.tag": "latest",
"container.name": "sharp_lovelace",
"evt.time": 1712436625195959800,
"evt.type": "execve",
"proc.aname[2]": "containerd-shim",
"proc.args": "-r id_rsa",
"proc.cmdline": "grep -r id_rsa",
"proc.exepath": "/bin/busybox",
"proc.name": "grep",
"proc.pcmdline": "ash",
"proc.pname": "ash",
"proc.tty": 34816,
"user.loginuid": -1,
"user.name": "root",
"user.uid": 0
}
}
To determine whether a rule should apply to containers or not, you can add the following conditions:
- The condition container.id=host for rules that should not apply to containers.
- The condition container.id!=host for rules that should apply only to containers.

Without these conditions, the rules will apply to all processes, including those that are not in containers.
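As a sketch, here is a host-only variant of the earlier detection; the rule name and the trimmed-down condition are mine, for illustration:

```yaml
# Illustration only: alert on sensitive-file searches on the host,
# ignoring anything happening inside containers.
- rule: search for sensitive files on host only
  desc: Detect sensitive file searches outside of containers
  condition: >
    spawned_process and proc.name in (searching_binaries) and
    container.id = host and
    proc.args contains "id_rsa"
  output: Sensitive file search on host (user=%user.name command=%proc.cmdline)
  priority: INFO
  tags: [sensitive_data, host]
```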
XZ Vulnerability (CVE-2024-3094)
Let’s talk about the latest news! A vulnerability has been discovered in the liblzma library that allows an attacker in possession of a specific private SSH key to bypass SSHD authentication. This vulnerability is referenced as CVE-2024-3094 and is critical.
Sysdig has published a rule to detect whether the vulnerable liblzma library is loaded by SSHD. Here it is:
- rule: Backdoored library loaded into SSHD (CVE-2024-3094)
  desc: A version of the liblzma library was seen loading which was backdoored by a malicious user in order to bypass SSHD authentication.
  condition: open_read and proc.name=sshd and (fd.name endswith "liblzma.so.5.6.0" or fd.name endswith "liblzma.so.5.6.1")
  output: SSHD Loaded a vulnerable library (| file=%fd.name | proc.pname=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] image=%container.image.repository | proc.cmdline=%proc.cmdline | container.name=%container.name | proc.cwd=%proc.cwd proc.pcmdline=%proc.pcmdline user.name=%user.name user.loginuid=%user.loginuid user.uid=%user.uid user.loginname=%user.loginname image=%container.image.repository | container.id=%container.id | container_name=%container.name| proc.cwd=%proc.cwd )
  priority: WARNING
  tags: [host, container]
If I add this famous liblzma library (version 5.6.0) to the directory /lib/x86_64-linux-gnu/ and restart the sshd service so that it loads it, Falco should raise the alert…
{
"hostname": "falco-linux",
"output": "23:11:24.780959791: Warning SSHD Loaded a vulnerable library (| file=/lib/x86_64-linux-gnu/liblzma.so.5.6.0 | proc.pname=systemd gparent=<NA> ggparent=<NA> gggparent=<NA> image=<NA> | proc.cmdline=sshd -D | container.name=host | proc.cwd=/ proc.pcmdline=systemd install user.name=root user.loginuid=-1 user.uid=0 user.loginname=<NA> image=<NA> | container.id=host | container_name=host| proc.cwd=/ )",
"priority": "Warning",
"rule": "Backdoored library loaded into SSHD (CVE-2024-3094)",
"source": "syscall",
"tags": [
"container",
"host"
],
"time": "2024-04-06T21:11:24.780959791Z",
"output_fields": {
"container.id": "host",
"container.image.repository": null,
"container.name": "host",
"evt.time": 1712437884780959700,
"fd.name": "/lib/x86_64-linux-gnu/liblzma.so.5.6.0",
"proc.aname[2]": null,
"proc.aname[3]": null,
"proc.aname[4]": null,
"proc.cmdline": "sshd -D",
"proc.cwd": "/",
"proc.pcmdline": "systemd install",
"proc.pname": "systemd",
"user.loginname": "<NA>",
"user.loginuid": -1,
"user.name": "root",
"user.uid": 0
}
}
Bingo! Falco has successfully detected the alert, and we can react accordingly to fix this vulnerability.
The driver-loader on Falco
When I first installed Falco, I could see that it needed to load a ‘driver’. This is essential if you are using the kmod (kernel module) mode or the classic ebpf mode; the modern_ebpf mode, on the other hand, does not require a driver.
If you are using a mode that requires a driver, you will see in the pod logs (in a Kubernetes context) that the driver is downloaded and installed in an emptyDir to be mounted in the Falco pod.
$ kubectl logs -l app.kubernetes.io/name=falco -n falco -c falco-driver-loader
2024-04-17 21:49:01 INFO Removing eBPF probe symlink
└ path: /root/.falco/falco-bpf.o
2024-04-17 21:49:01 INFO Trying to download a driver.
└ url: https://download.falco.org/driver/7.0.0%2Bdriver/x86_64/falco_debian_6.1.76-1-cloud-amd64_1.o
2024-04-17 21:49:01 INFO Driver downloaded.
└ path: /root/.falco/7.0.0+driver/x86_64/falco_debian_6.1.76-1-cloud-amd64_1.o
2024-04-17 21:49:01 INFO Symlinking eBPF probe
├ src: /root/.falco/7.0.0+driver/x86_64/falco_debian_6.1.76-1-cloud-amd64_1.o
└ dest: /root/.falco/falco-bpf.o
2024-04-17 21:49:01 INFO eBPF probe symlinked
Outside of Kubernetes, if we start Falco without a driver, we will encounter an error:
$ falco -o engine.kind=ebpf
Fri Apr 19 08:20:07 2024: Falco version: 0.37.1 (x86_64)
Fri Apr 19 08:20:07 2024: Loading rules from file /etc/falco/falco_rules.yaml
Fri Apr 19 08:20:07 2024: Loading rules from file /etc/falco/falco_rules.local.yaml
Fri Apr 19 08:20:07 2024: Loaded event sources: syscall
Fri Apr 19 08:20:07 2024: Enabled event sources: syscall
Fri Apr 19 08:20:07 2024: Opening 'syscall' source with BPF probe. BPF probe path: /root/.falco/falco-bpf.o
Fri Apr 19 08:20:07 2024: An error occurred in an event source, forcing termination...
Events detected: 0
Rule counts by severity:
Triggered rules by rule name:
Error: can't open BPF probe '/root/.falco/falco-bpf.o'
On Kubernetes, an initContainer automatically downloads the driver. Outside the cluster, you need to download it yourself with the falcoctl driver install --type ebpf command.
I’m not too fond of this setup. What would happen if the driver were tampered with by an attacker? Or if the download server were unavailable while I was adding new machines to my infrastructure? The ideal scenario is a completely air-gapped operation for Falco (no connection leaving the cluster).
It is possible to create a Docker image of Falco with the driver already integrated, but this can pose maintenance issues (you will need to update the image for each new version of Falco, and it may not be compatible with all kernel versions).
A solution proposed by Sysdig is to have its own web server to host Falco drivers and download them from this server (this can even be a pod accompanied by a Kubernetes service).
To specify a different download URL, you need to set the environment variable FALCOCTL_DRIVER_REPOS in the falco-driver-loader container. In the Helm chart, this is done under the key driver.loader.initContainer.env:
driver:
  kind: ebpf
  loader:
    initContainer:
      env:
        - name: FALCOCTL_DRIVER_REPOS
          value: "http://driver-httpd/"
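As a sketch of the “pod plus Service” option mentioned above: a plain nginx pod serving a volume that mirrors the driver tree. Everything here (names, image, the pre-populated falco-drivers claim) is my own assumption; the served directory layout must mirror download.falco.org (e.g. 7.0.0+driver/x86_64/…):

```yaml
# Hypothetical in-cluster mirror of the Falco drivers, served by nginx.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: driver-httpd
  namespace: falco
spec:
  replicas: 1
  selector:
    matchLabels:
      app: driver-httpd
  template:
    metadata:
      labels:
        app: driver-httpd
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          volumeMounts:
            - name: drivers
              mountPath: /usr/share/nginx/html
      volumes:
        - name: drivers
          persistentVolumeClaim:
            claimName: falco-drivers   # assumed pre-populated with the driver tree
---
apiVersion: v1
kind: Service
metadata:
  name: driver-httpd
  namespace: falco
spec:
  selector:
    app: driver-httpd
  ports:
    - port: 80
```

With this in place, the FALCOCTL_DRIVER_REPOS value "http://driver-httpd/" from the snippet above resolves inside the cluster.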
Falco on Kubernetes
I couldn’t leave you without showing you how to install Falco on a Kubernetes cluster! 😄
In addition to being compatible with containers, Falco rules are ready to use on Kubernetes. From a cluster, we have access to new fields to build our rules (or to use in outputs):
- k8s.ns.name: Kubernetes namespace name.
- k8s.pod.name: Kubernetes pod name.
- k8s.pod.label / k8s.pod.labels: Kubernetes pod labels.
- k8s.pod.cni.json: CNI information of the Kubernetes pod.
It is worth noting that since Falco monitors system calls, the Falco rules installed by the Helm chart are also valid on hosts (outside of pods).
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco -n falco -f values.yaml --create-namespace
tty: true

falcosidekick:
  enabled: true
  webui:
    enabled: true
  # You can add additional configurations for Falco-Sidekick to route notifications as below
  # slack:
  #   webhookurl: https://hooks.slack.com/services/XXXXX/XXXXX/XXXXX
  #   channel: "#falco-alerts"

driver:
  kind: modern_ebpf
When you want to add a rule, the ‘simple’ method is to put it in the values.yaml of the Helm chart and update the deployment via helm upgrade --reuse-values -n falco falco falcosecurity/falco -f falco-rules.yaml. As soon as the customRules field is present, the Helm chart will create a ConfigMap with the custom rules; you can also use it to modify existing rules.
customRules:
  restrict_tag.yaml: |
    - macro: container_started
      condition: >
        ((evt.type = container or
        (spawned_process and proc.vpid=1)))
    - list: namespace_system
      items: [kube-system, cilium-system]
    - macro: namespace_allowed_to_use_latest_tag
      condition: not (k8s.ns.name in (namespace_system))
    - rule: Container with latest tag outside kube-system and cilium-system
      desc: Detects containers with the "latest" tag started in namespaces other than kube-system and cilium-system
      condition: >
        container_started
        and container.image endswith "latest"
        and namespace_allowed_to_use_latest_tag
      output: "Container with 'latest' tag detected outside kube-system and cilium-system (user=%user.name container_id=%container.id image=%container.image)"
      priority: WARNING
      tags: [k8s, container, latest_tag]
Note: I wrote the rule reacting to a container with the latest tag to show that it is possible to react to pod metadata, but it would be more appropriate to delegate this function to Kyverno or an Admission Policy.
For those who are not satisfied with this, we will see below how to automate the addition of rules from external artifacts (and without updating our Helm deployment to add new rules).
Tip
You are not required to have a single values.yaml file containing both the configuration AND the Falco rules. You can very well separate the two for more clarity.
helm install falco falcosecurity/falco -n falco -f values.yml -f falco-rules.yaml --create-namespace
Now, if I start a container with the image nginx:latest in the default namespace:
kubectl run nginx --image=nginx:latest -n default
Falco successfully detects the alert and notifies me that the nginx container with the nginx:latest image was started in the default namespace.
Detection, alert, and response
For now, the main difference between Falco and its competitors on Kubernetes is the ability to react to an alert raised for a pod: Falco can only detect and alert, it cannot respond on its own (such as killing the pod or quarantining it).
For this purpose, there is a tool called Falco Talon, developed by a Falco maintainer.
This is a tool that reacts to alerts and works in the same way as Falco-Sidekick (it is even possible to use them together). Upon receiving an alert, Falco Talon will react by applying the actions configured in its rules.
⚠️ Attention, the program is still under development, you may have differences with what I will show you. ⚠️
Here are the possible actions with Falco Talon:
- Terminate the pod (kubernetes:terminate)
- Add / remove a label (kubernetes:labelize)
- Create a NetworkPolicy (kubernetes:networkpolicy)
- Run a command / script in the pod (kubernetes:exec and kubernetes:script)
- Delete a resource other than the pod (kubernetes:delete)
- Block outbound traffic (calico:networkpolicy) (requires Calico)
Each action is configurable, and there is nothing stopping you from having the same action multiple times with different parameters.
To install it, a Helm chart is available in the GitHub repository (an index will of course be available later):
git clone https://github.com/Falco-Talon/falco-talon.git
cd falco-talon/deployment/helm/
helm install falco-talon . -n falco --create-namespace
Once Talon is installed, we need to configure Falco-Sidekick to send alerts to Talon. To do this, we modify our values.yaml to add the following configuration:
falcosidekick:
  enabled: true
  config:
    webhook:
      address: "http://falco-talon:2803"
There you go! Falco-Sidekick will send alerts to Falco Talon, which will be able to react accordingly. It’s up to us to configure it to react as we wish to our alerts.
The files to modify are directly available in the GitHub repository (in the Helm chart) at ./deployment/helm/rules.yaml and ./deployment/helm/rules_overrides.yaml.
Here is an excerpt from the default configuration of Talon (in the file rules.yaml):
- action: Labelize Pod as Suspicious
  actionner: kubernetes:labelize
  parameters:
    labels:
      suspicious: true

- rule: Terminal shell in container
  match:
    rules:
      - Terminal shell in container
    output_fields:
      - k8s.ns.name!=kube-system, k8s.ns.name!=falco
  actions:
    - action: Labelize Pod as Suspicious
This creates the Labelize Pod as Suspicious action, which adds a suspicious: true label to the pod when a terminal is detected in a container. The action only applies if the Terminal shell in container alert is triggered and its k8s.ns.name field differs from kube-system and falco.
Simple, right? 😄
To test this, let’s start an nginx pod:
$ kubectl run nginx --image=nginx:latest -n default
pod/nginx created
$ kubectl get pods nginx --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 10s run=nginx
No suspicious label is present. But if I start a terminal in the nginx pod…
$ kubectl exec pod/nginx -it -- bash
root@nginx:/# id
uid=0(root) gid=0(root) groups=0(root)
root@nginx:/#
exit
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 16m run=nginx,suspicious=true
A suspicious label has been added to the nginx pod! 😄
We can also remove a label so that a service no longer points to this pod.
Another concern is that as long as the pod is running, the attacker can continue their attack: download payloads, access other services, etc. One solution is to add a NetworkPolicy blocking the pod’s outgoing traffic.
Here is my new set of rules:
- action: Disable outbound connections
  actionner: kubernetes:networkpolicy
  parameters:
    allow:
      - 127.0.0.1/32

- action: Labelize Pod as Suspicious
  actionner: kubernetes:labelize
  parameters:
    labels:
      app: ""
      suspicious: true

- rule: Terminal shell in container
  match:
    rules:
      - Terminal shell in container
    output_fields:
      - k8s.ns.name!=kube-system, k8s.ns.name!=falco
  actions:
    - action: Disable outbound connections
    - action: Labelize Pod as Suspicious
What do you think will happen if I start a terminal in a pod?
Let’s find out with a Deployment creating a netshoot pod (an image used for network testing).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netshoot
  template:
    metadata:
      labels:
        app: netshoot
    spec:
      containers:
        - name: netshoot
          image: nicolaka/netshoot
          command: ["/bin/bash"]
          args: ["-c", "while true; do sleep 60;done"]
Natively (without any NetworkPolicy), the netshoot pod can access the internet and other cluster services. But once I have applied my rule and started a terminal in the netshoot pod, outgoing traffic is blocked, the suspicious label is added to the pod, and the app label is removed.
Since we remove the app=netshoot label, the Deployment will recreate a new pod to maintain the number of replicas.
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
netshoot-789557564b-8gc7m 1/1 Running 0 13m app=netshoot,pod-template-hash=789557564b
$ kubectl exec deploy/netshoot -it -- bash
netshoot-789557564b-8gc7m:~# ping -c1 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
netshoot-789557564b-8gc7m:~# curl http://kubernetes.default.svc.cluster.local:443 # No response
netshoot-789557564b-8gc7m:~# exit
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
netshoot-789557564b-8gc7m 1/1 Running 0 14m pod-template-hash=789557564b,suspicious=true
netshoot-789557564b-grg8k 1/1 Running 0 54s app=netshoot,pod-template-hash=789557564b
Thus, we can easily isolate pods in quarantine and block all outgoing traffic without impacting the Deployment.
The NetworkPolicy created by Talon uses the label suspicious=true as a selector. However, once we have deleted the pod, the NetworkPolicy is still present and will need to be removed manually.
In addition to being notified by Falco-Sidekick during an alert, Talon can also send us a small message to inform us of the action taken.
By default, Talon will generate events in the namespace of the alert. To view them, simply run: kubectl get events --sort-by=.metadata.creationTimestamp.
24m Normal falco-talon:kubernetes:networkpolicy:success pod Status: success...
24m Normal falco-talon:kubernetes:labelize:success pod Status: success...
But it is possible to configure Talon to send messages to a webhook, a Slack channel, an SMTP, a request in Loki, etc.
In my case, I will configure it to send messages to a webhook. I update my Falco Talon values.yaml with the following values:
defaultNotifiers:
  - webhook
  - k8sevents

notifiers:
  webhook:
    url: "https://webhook.site/045451d8-ab16-45d9-a65e-7d1858f8c5b7"
    http_method: "POST"
I update my Helm chart with the command helm upgrade --reuse-values -n falco falco-talon . -f values.yaml. As soon as Talon performs an action, I will receive a message on my webhook.
For example, when Talon deploys a NetworkPolicy to block outgoing traffic from a pod, I will receive a message like this:
{
"objects": {
"Namespace": "default",
"Networkpolicy": "netshoot-789557564b-lwgfs",
"Pod": "netshoot-789557564b-lwgfs"
},
"trace_id": "51ac05de-f97c-45c9-9ac7-cb6f93062f8a",
"rule": "Terminal shell in container",
"event": "A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=0 user_loginuid=-1 process=bash proc_exepath=/bin/bash parent=containerd-shim command=bash terminal=34816 exe_flags=EXE_WRITABLE container_id=0a1197d327ff container_image=docker.io/nicolaka/netshoot container_image_tag=latest container_name=netshoot k8s_ns=default k8s_pod_name=netshoot-789557564b-lwgfs)",
"message": "action",
"output": "the networkpolicy 'netshoot-789557564b-lwgfs' in the namespace 'default' has been updated",
"actionner": "kubernetes:networkpolicy",
"action": "Disable outbound connections",
"status": "success"
}
GitOps and Falco
Since my article on ArgoCD where I talked to you about GitOps and X-as-Code, you might suspect that I will talk to you about managing Falco rules ‘as code’. And you are right!
Let’s see how to manage our Falco configuration in “Pull” mode from Falco.
If I have an infrastructure of ~100 machines, I am not going to connect to each of them to modify or update the Falco rules machine by machine.
This is why Falco has a tool called falcoctl, which can retrieve the Falco configuration (plugins, rules, etc.) from an external server (it is also this tool that downloads the drivers). For this, Falco relies on OCI artifacts to distribute rules and plugins.
$ falcoctl artifact list
INDEX ARTIFACT TYPE REGISTRY REPOSITORY
falcosecurity application-rules rulesfile ghcr.io falcosecurity/rules/application-rules
falcosecurity cloudtrail plugin ghcr.io falcosecurity/plugins/plugin/cloudtrail
falcosecurity cloudtrail-rules rulesfile ghcr.io falcosecurity/plugins/ruleset/cloudtrail
falcosecurity dummy plugin ghcr.io falcosecurity/plugins/plugin/dummy
falcosecurity dummy_c plugin ghcr.io falcosecurity/plugins/plugin/dummy_c
falcosecurity falco-incubating-rules rulesfile ghcr.io falcosecurity/rules/falco-incubating-rules
falcosecurity falco-rules rulesfile ghcr.io falcosecurity/rules/falco-rules
falcosecurity falco-sandbox-rules rulesfile ghcr.io falcosecurity/rules/falco-sandbox-rules
# ...
When installing Falco, the package already contains the default rules (those in the falco-rules artifact), and it is possible to add another rule set in this way:
$ falcoctl artifact install falco-incubating-rules
2024-04-17 18:36:11 INFO Resolving dependencies ...
2024-04-17 18:36:12 INFO Installing artifacts refs: [ghcr.io/falcosecurity/rules/falco-incubating-rules:latest]
2024-04-17 18:36:12 INFO Preparing to pull artifact ref: ghcr.io/falcosecurity/rules/falco-incubating-rules:latest
2024-04-17 18:36:12 INFO Pulling layer d306556e1c90
2024-04-17 18:36:13 INFO Pulling layer 93a62ab52683
2024-04-17 18:36:13 INFO Pulling layer 5e734f96181c
2024-04-17 18:36:13 INFO Verifying signature for artifact
└ digest: ghcr.io/falcosecurity/rules/falco-incubating-rules@sha256:5e734f96181cda9fc34e4cc6a1808030c319610e926ab165857a7829d297c321
2024-04-17 18:36:13 INFO Signature successfully verified!
2024-04-17 18:36:13 INFO Extracting and installing artifact type: rulesfile file: falco-incubating_rules.yaml.tar.gz
2024-04-17 18:36:13 INFO Artifact successfully installed
├ name: ghcr.io/falcosecurity/rules/falco-incubating-rules:latest
├ type: rulesfile
├ digest: sha256:5e734f96181cda9fc34e4cc6a1808030c319610e926ab165857a7829d297c321
└ directory: /etc/falco
Once the command is executed, falcoctl downloads the rules from the OCI image ghcr.io/falcosecurity/rules/falco-incubating-rules:latest and installs them in the /etc/falco directory (in this case, the file /etc/falco/falco-incubating_rules.yaml is created).
However, for a sustainable infrastructure, typing commands by hand every time I want to install new rules does not scale. Let’s take a look at the falcoctl configuration file (which should have been generated during your first run of falcoctl), located at /etc/falcoctl/config.yaml.
artifact:
  follow:
    every: 6h0m0s
    falcoversions: http://localhost:8765/versions
    refs:
      - falco-rules:0
driver:
  type: modern_ebpf
  name: falco
  repos:
    - https://download.falco.org/driver
  version: 7.0.0+driver
  hostroot: /
indexes:
  - name: falcosecurity
    url: https://falcosecurity.github.io/falcoctl/index.yaml
We will first focus on the follow part of the configuration. This allows us to track Falco rule updates and install them automatically.
To continue the previous example, instead of installing the falco-incubating-rules on a one-time basis, we add them to the falcoctl configuration so that they are installed and kept up to date by the falcoctl artifact follow command:
artifact:
  follow:
    every: 6h0m0s
    falcoversions: http://localhost:8765/versions
    refs:
      - falco-incubating-rules
# ...
Next, the falcoctl artifact follow command will install the falco-incubating-rules and automatically update them every six hours if a new version is available on the latest tag (by comparing the artifact’s digest).
$ falcoctl artifact follow
2024-04-17 18:50:06 INFO Creating follower artifact: falco-incubating-rules:latest check every: 6h0m0s
2024-04-17 18:50:06 INFO Starting follower artifact: ghcr.io/falcosecurity/rules/falco-incubating-rules:latest
2024-04-17 18:50:06 INFO Found new artifact version followerName: ghcr.io/falcosecurity/rules/falco-incubating-rules:latest tag: latest
2024-04-17 18:50:10 INFO Artifact correctly installed
├ followerName: ghcr.io/falcosecurity/rules/falco-incubating-rules:latest
├ artifactName: ghcr.io/falcosecurity/rules/falco-incubating-rules:latest
├ type: rulesfile
├ digest: sha256:d4c03e000273a0168ee3d9b3dfb2174e667b93c9bfedf399b298ed70f37d623b
└ directory: /etc/falco
Info
Quick reminder: falco automatically reloads the rules when a file configured in /etc/falco/falco.yaml is modified.
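On standalone servers, falcoctl artifact follow has to be running continuously for this synchronization to happen. One way to keep it alive is a small systemd unit; this sketch is my own (your falcoctl package may already ship an equivalent unit, and the binary path is an assumption):

```ini
# /etc/systemd/system/falcoctl-artifact-follow.service (hypothetical unit)
[Unit]
Description=Falcoctl artifact follower
After=network-online.target
Wants=network-online.target

[Service]
# Assumes falcoctl is installed at /usr/bin/falcoctl
ExecStart=/usr/bin/falcoctl artifact follow
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabled with systemctl enable --now falcoctl-artifact-follow, the rules on every machine then stay in sync with the configured refs.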
By default, falcoctl uses the latest tag for OCI images, but I strongly recommend pinning a specific version (e.g., 1.0.0). To find out the available versions of an artifact, we have the falcoctl artifact info command. For example:
$ falcoctl artifact info falco-incubating-rules
REF TAGS
ghcr.io/falcosecurity/rules/falco-incubating-rules 2.0.0-rc1, sha256-8b8dd8ee8eec6b0ba23b6a7bc3926a48aaa8e56dc42837a0ad067988fdb19e16.sig, 2.0.0, 2.0, 2, latest, sha256-1391a1df4aa230239cff3efc7e0754dbf6ebfa905bef5acadf8cdfc154fc1557.sig, 3.0.0-rc1, sha256-de9eb3f8525675dc0ffd679955635aa1a8f19f4dea6c9a5d98ceeafb7a665170.sig, 3.0.0, 3.0, 3, sha256-555347ba5f7043f0ca21a5a752581fb45050a706cd0fb45aabef82375591bc87.sig, 3.0.1, sha256-5e734f96181cda9fc34e4cc6a1808030c319610e926ab165857a7829d297c321.sig
Creating custom artifacts
We have seen how to use the artifacts provided by falcoctl from the falcosecurity registry; now it’s time to create our own artifacts!
I will start by creating a file never-chmod-777.yaml containing a Falco rule:
- macro: chmod_777
  condition: (evt.arg.mode contains S_IXOTH and evt.arg.mode contains S_IWOTH and evt.arg.mode contains S_IROTH and evt.arg.mode contains S_IXGRP and evt.arg.mode contains S_IWGRP and evt.arg.mode contains S_IRGRP and evt.arg.mode contains S_IXUSR and evt.arg.mode contains S_IWUSR and evt.arg.mode contains S_IRUSR)

- rule: chmod 777
  desc: Detect if someone is trying to chmod 777 a file
  condition: >
    evt.type=fchmodat and chmod_777
  output: Someone is trying to chmod 777 a file (file=%fd.name pcmdline=%proc.pcmdline gparent=%proc.aname[2] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty)
  priority: ERROR
  tags: [chmod, security]
To package this file into an OCI artifact, we will start by authenticating to a Docker registry (here, Github Container Registry):
docker login ghcr.io
To create our image, no docker build is needed; we can do it using falcoctl:
$ falcoctl registry push --type rulesfile \
--version 1.0.0 \
ghcr.io/cuistops/never-chmod-777:1.0.0 \
never-chmod-777.yaml
2024-04-17 19:03:10 INFO Preparing to push artifact name: ghcr.io/cuistops/never-chmod-777:1.0.0 type: rulesfile
2024-04-17 19:03:10 INFO Pushing layer ac9ec4319805
2024-04-17 19:03:11 INFO Pushing layer a449a3b9a393
2024-04-17 19:03:12 INFO Pushing layer 9fa17441da69
2024-04-17 19:03:12 INFO Artifact pushed
├ name: ghcr.io/cuistops/never-chmod-777:1.0.0
├ type:
└ digest: sha256:9fa17441da69ec590f3d9c0a58c957646d55060ffa2deae84d99b513a5041e6d
Here we go! We have our OCI image ghcr.io/cuistops/never-chmod-777:1.0.0 containing our Falco rule. All that’s left is to add it to the falcoctl configuration:
artifact:
  follow:
    every: 6h0m0s
    falcoversions: http://localhost:8765/versions
    refs:
      - falco-rules:3
      - ghcr.io/cuistops/never-chmod-777:1.0.0
Automatically, falcoctl artifact follow will create the file /etc/falco/never-chmod-777.yaml containing my rule.
$ falcoctl artifact follow
2024-04-17 19:07:29 INFO Creating follower artifact: falco-rules:3 check every: 6h0m0s
2024-04-17 19:07:29 INFO Creating follower artifact: ghcr.io/cuistops/never-chmod-777:1.0.0 check every: 6h0m0s
2024-04-17 19:07:29 INFO Starting follower artifact: ghcr.io/falcosecurity/rules/falco-rules:3
2024-04-17 19:07:29 INFO Starting follower artifact: ghcr.io/cuistops/never-chmod-777:1.0.0
2024-04-17 19:07:30 INFO Found new artifact version followerName: ghcr.io/falcosecurity/rules/falco-rules:3 tag: latest
2024-04-17 19:07:30 INFO Found new artifact version followerName: ghcr.io/cuistops/never-chmod-777:1.0.0 tag: 1.0.0
2024-04-17 19:07:33 INFO Artifact correctly installed
├ followerName: ghcr.io/cuistops/never-chmod-777:1.0.0
├ artifactName: ghcr.io/cuistops/never-chmod-777:1.0.0
├ type: rulesfile
├ digest: sha256:9fa17441da69ec590f3d9c0a58c957646d55060ffa2deae84d99b513a5041e6d
└ directory: /etc/falco
2024-04-17 19:07:33 INFO Artifact correctly installed
├ followerName: ghcr.io/falcosecurity/rules/falco-rules:3
├ artifactName: ghcr.io/falcosecurity/rules/falco-rules:3
├ type: rulesfile
├ digest: sha256:d4c03e000273a0168ee3d9b3dfb2174e667b93c9bfedf399b298ed70f37d623b
└ directory: /etc/falco
In a Kubernetes cluster, the artifacts to follow must be specified in the values.yaml of the Helm chart, under falcoctl.config.artifact.follow.refs:
falcoctl:
  config:
    artifact:
      follow:
        refs:
          - falco-rules:3
          - ghcr.io/cuistops/never-chmod-777:1.0.0
Note that when creating an artifact with falcoctl, it copies the path provided in the falcoctl registry push command and deposits the file at that same path, taking /etc/falco as the root. This means that if I provide the path ./rules.d/never-chmod-777.yaml when generating my artifact, the file will be deposited at /etc/falco/rules.d/never-chmod-777.yaml. It is therefore important to ensure that this path exists and/or that falco is configured to read rules from this location.
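The path handling described above can be illustrated with a short Python sketch (a simplification for illustration, not falcoctl's actual code): the relative path used at push time is recreated under the install root.

```python
import os

def install_path(pushed_path: str, root: str = "/etc/falco") -> str:
    """Mimic where falcoctl deposits a rules file: the relative path
    given at push time is re-rooted under the install directory."""
    return os.path.join(root, pushed_path.lstrip("./"))

print(install_path("rules.d/never-chmod-777.yaml"))
# /etc/falco/rules.d/never-chmod-777.yaml
```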
As a reminder, Falco automatically reloads its configuration at the following locations:
/etc/falco/falco_rules.yaml
/etc/falco/rules.d/*
/etc/falco/falco_rules.local.yaml
If I place a rule file at /etc/falco/never-chmod-777.yaml, Falco will not (by default) read it. A viable solution is to place the rules in the /etc/falco/rules.d directory (already present in Falco’s configuration), but this directory is not created by default.
In Kubernetes, I added an initContainer to create the rules.d directory in the rulesfiles-install-dir volume, and I instruct falcoctl to deposit the rules in this directory (knowing that the rulesfiles-install-dir volume is mounted at /etc/falco).
extra:
initContainers:
- name: create-rulesd-dir
image: busybox
command: ["mkdir", "-p", "/etc/falco/rules.d"]
volumeMounts:
- name: rulesfiles-install-dir
mountPath: /etc/falco
falcoctl:
artifact:
install:
args: ["--log-format=json", "--rulesfiles-dir", "/rulesfiles/rules.d/"]
follow:
args: ["--log-format=json", "--rulesfiles-dir", "/rulesfiles/rules.d/"]
config:
artifact:
follow:
refs:
- falco-rules:3
- ghcr.io/cuistops/never-chmod-777:1.0.0
With this setup, my artifacts are directly installed in a directory monitored by Falco, and I do not need to modify its configuration to specify the new rule files.
GitHub Actions to generate artifacts
To automate the generation of artifacts, we can use GitHub Actions (or GitLab CI, Jenkins, etc.). Here is an example workflow to generate OCI images from a GitHub repository:
name: Generate Falco Artifacts
on:
push:
env:
OCI_REGISTRY: ghcr.io
jobs:
build-image:
name: Build OCI Image
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
id-token: write
strategy:
matrix:
include:
- rule_file: config/rules/never-chmod-777.yaml
name: never-chmod-777
version: 1.0.0
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226 # v3.0.0
- name: Log into registry ${{ env.OCI_REGISTRY }}
if: github.event_name != 'pull_request'
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
with:
registry: ${{ env.OCI_REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout Falcoctl Repo
uses: actions/checkout@v3
with:
repository: falcosecurity/falcoctl
ref: main
path: tools/falcoctl
- name: Setup Golang
uses: actions/setup-go@v4
with:
go-version: '^1.20'
cache-dependency-path: tools/falcoctl/go.sum
- name: Build falcoctl
run: make
working-directory: tools/falcoctl
- name: Install falcoctl in /usr/local/bin
run: |
mv tools/falcoctl/falcoctl /usr/local/bin
- name: Checkout Rules Repo
uses: actions/checkout@v3
- name: force owner name to lowercase # Required for organizations/users with uppercase characters in their GitHub name (which is my case...)
run: |
owner=$(echo $reponame | cut -d'/' -f1 | tr '[:upper:]' '[:lower:]')
echo "owner=$owner" >>${GITHUB_ENV}
env:
reponame: '${{ github.repository }}'
- name: Upload OCI artifacts
run: |
cp ${rule_file} $(basename ${rule_file})
falcoctl registry push \
--config /dev/null \
--type rulesfile \
--version ${version} \
${OCI_REGISTRY}/${owner}/${name}:${version} $(basename ${rule_file})
env:
version: ${{ matrix.version }}
rule_file: ${{ matrix.rule_file }}
name: ${{ matrix.name }}
To add a Falco rule, simply create the file in the repository and specify the path, name, and version in the workflow matrix.
jobs:
strategy:
matrix:
include:
- rule_file: config/rules/never-chmod-777.yaml
name: never-chmod-777
version: 1.0.0
- rule_file: config/rules/search-for-aws-credentials.yaml
name: search-for-aws-credentials
version: 0.1.1
I also share with you the method from Thomas Labarussias, which takes advantage of semver notation to create multi-tagged images (e.g. 1.0.0, 1.0, 1, latest):
- name: Upload OCI artifacts
run: |
MAJOR=$(echo ${version} | cut -f1 -d".")
MINOR=$(echo ${version} | cut -f1,2 -d".")
cp ${rule_file} $(basename ${rule_file})
falcoctl registry push \
--config /dev/null \
--type rulesfile \
--version ${version} \
--tag latest --tag ${MAJOR} --tag ${MINOR} --tag ${version} \
${OCI_REGISTRY}/${owner}/${name}:${version} $(basename ${rule_file})
env:
version: ${{ matrix.version }}
rule_file: ${{ matrix.rule_file }}
name: ${{ matrix.name }}
Thus, version 1.0.0 will have the tags latest, 1, 1.0, and 1.0.0. This way, I can specify the tag 1.0 in my falcoctl configuration, which will always resolve to the latest version of the 1.0.x branch (and likewise, 1 resolves to the latest 1.x.x).
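The MAJOR/MINOR extraction done with cut above can be sketched as a small helper (my own illustration, mirroring the workflow’s shell logic):

```python
def semver_tags(version: str) -> list[str]:
    """Derive the tag list pushed for a semver version, e.g. '1.0.0'
    -> ['latest', '1', '1.0', '1.0.0'], mirroring the MAJOR/MINOR
    extraction done with `cut` in the workflow."""
    major, minor, _patch = version.split(".")
    return ["latest", major, f"{major}.{minor}", version]

print(semver_tags("1.0.0"))  # ['latest', '1', '1.0', '1.0.0']
```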
Create your own artifacts index
Let’s now move on to creating our own index to reference our OCI images from falcoctl. For this, we need an HTTP server that exposes a YAML file describing the OCI images.
To do this, I will simply expose an index.yaml describing my OCI image ghcr.io/cuistops/never-chmod-777:1.0.0 from a GitHub repository.
- name: cuistops-never-chmod-777
type: rulesfile
registry: ghcr.io
repository: cuistops/never-chmod-777
description: Never use 'chmod 777' or 'chmod a+rwx'
home: https://github.com/CuistOps/falco
keywords:
- never-chmod-777
license: apache-2.0
maintainers:
- email: [email protected]
name: Quentin JOLY
sources:
- https://raw.githubusercontent.com/CuistOps/falco/main/config/rules/never-chmod-777.yaml
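As an illustration (my own helper, not part of falcoctl), here is a minimal check that an index entry carries the fields falcoctl needs to resolve the artifact into a full OCI reference:

```python
# A parsed index entry, as it would come out of index.yaml.
entry = {
    "name": "cuistops-never-chmod-777",
    "type": "rulesfile",
    "registry": "ghcr.io",
    "repository": "cuistops/never-chmod-777",
}

def artifact_ref(entry: dict) -> str:
    """Build the OCI reference resolved from an index entry,
    checking that the key fields are present first."""
    for field in ("name", "type", "registry", "repository"):
        if field not in entry:
            raise ValueError(f"index entry missing field: {field}")
    return f"{entry['registry']}/{entry['repository']}"

print(artifact_ref(entry))  # ghcr.io/cuistops/never-chmod-777
```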
This YAML file is publicly accessible. I then add it as an index in falcoctl.
$ falcoctl index add cuistops-security https://raw.githubusercontent.com/CuistOps/falco/main/config/index/index.yaml
2024-04-17 20:32:28 INFO Adding index name: cuistops-security path: https://raw.githubusercontent.com/CuistOps/falco/main/config/index/index.yaml
2024-04-17 20:32:28 INFO Index successfully added
I can see the artifacts available in my cuistops-security index:
$ falcoctl artifact list --index cuistops-security
INDEX ARTIFACT TYPE REGISTRY REPOSITORY
cuistops-security cuistops-never-chmod-777 rulesfile ghcr.io cuistops/never-chmod-777
I can now follow my rule by specifying only the artifact name from my index.yaml. Here is my complete falcoctl configuration file:
artifact:
follow:
every: 6h0m0s
falcoversions: http://localhost:8765/versions
refs:
- cuistops-never-chmod-777:1.0.0
driver:
type: modern_ebpf
name: falco
repos:
- https://download.falco.org/driver
version: 7.0.0+driver
hostroot: /
indexes:
- name: cuistops-security
url: https://raw.githubusercontent.com/CuistOps/falco/main/config/index/index.yaml
backend: "https"
As soon as I run the command falcoctl artifact follow, the never-chmod-777 rule is automatically installed.
$ falcoctl artifact follow
2024-04-17 20:57:56 INFO Creating follower artifact: cuistops-never-chmod-777:1.0.0 check every: 6h0m0s
2024-04-17 20:57:56 INFO Starting follower artifact: ghcr.io/cuistops/never-chmod-777:1.0.0
2024-04-17 20:57:57 INFO Found new artifact version followerName: ghcr.io/cuistops/never-chmod-777:1.0.0 tag: 1.0.0
2024-04-17 20:58:00 INFO Artifact correctly installed
├ followerName: ghcr.io/cuistops/never-chmod-777:1.0.0
├ artifactName: ghcr.io/cuistops/never-chmod-777:1.0.0
├ type: rulesfile
├ digest: sha256:9fa17441da69ec590f3d9c0a58c957646d55060ffa2deae84d99b513a5041e6d
└ directory: /etc/falco
Warning
During my experiments, I encountered a small issue with the default configuration of falcoctl: the falcosecurity index was not working correctly and was not recognized.
$ falcoctl index list
NAME URL ADDED UPDATED
This is due to an error in the indexes configuration (/etc/falcoctl/config.yaml): the backend field is mandatory and must be filled in.
indexes:
- name: falcosecurity
url: https://falcosecurity.github.io/falcoctl/index.yaml
backend: "https" # <--- Add this line
I have submitted a small PR to fix this issue, but until it is released (in the next version, v0.8.0), you may encounter the same problem.
In the Falco Helm chart, the artifact indexes to follow can be specified in the values.yaml file:
falcoctl:
config:
indexes:
- name: cuistops-security
url: https://raw.githubusercontent.com/CuistOps/falco/main/config/index/index.yaml
backend: "https"
Conclusion
Falco is a very interesting product that convinced me with its ease of use and flexibility.
However, I do have a reservation about the requirement for OCI artifacts, which I find a bit cumbersome to manage (following a plain Git tag would have been convenient, and being able to track an OCI tag along semver would have been useful as well). I also regret the lack of Kubernetes integration for adding rules; a CRD for rules would have been a plus.
EDIT: After discussing with Thomas Labarussias, he provided me with a Github Workflow example to simulate semver tracking with OCI images by giving multiple tags to the image. This allows tracking major, minor, and patch versions of an OCI image. Additionally, he explained that integration with a CRD for Kubernetes is planned in a future version of Falco.
I will soon test a competing tool to get a clearer idea of what is being done in this field. If you have any suggestions, feel free to share them with me!
Until then, thank you for reading this article; I hope it has been helpful and has inspired you to try Falco. If you wish, you can also support my work by buying me a coffee via Ko-fi.
I also thank Jérémy for being there to proofread and give his opinion on what I write.