opensnitch-1.6.9/ (commit d423e7addc501738bccd663497bd374bf8bd3166)

opensnitch-1.6.9/.github/FUNDING.yml
# These are supported funding model platforms
github: gustavo-iniguez-goya
patreon: # Replace with a single patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
opensnitch-1.6.9/.github/ISSUE_TEMPLATE/bug_report.md
---
name: 🐛 Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
Please check the FAQ and Known Problems pages before creating a bug report:
https://github.com/evilsocket/opensnitch/wiki/FAQs
GUI related issues:
https://github.com/evilsocket/opensnitch/wiki/GUI-known-problems
Daemon related issues:
- Run `opensnitchd -check-requirements` to see if your kernel is compatible.
- https://github.com/evilsocket/opensnitch/wiki/daemon-known-problems
**Describe the bug**
A clear and concise description of what the bug is.
Include the following information:
- OpenSnitch version.
- OS: [e.g. Debian GNU/Linux, ArchLinux, Slackware, ...]
- Version [e.g. Buster, 10.3, 20.04]
- Window Manager: [e.g. GNOME Shell, KDE, enlightenment, i3wm, ...]
- Kernel version: output of `uname -a`
**To Reproduce**
Describe what happened in as much detail as you can.
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Post error logs:**
If it's a crash of the GUI:
- Launch it from a terminal and reproduce the issue.
- Post the errors logged to the terminal.
If the daemon doesn't start or doesn't intercept connections:
- Run `opensnitchd -check-requirements` to see if your kernel is compatible.
- Post last 15 lines of the log file `/var/log/opensnitchd.log`
- Or launch it from a terminal as root (`# /usr/bin/opensnitchd -rules-path /etc/opensnitchd/rules`) and post the errors logged to the terminal.
If the deb or rpm packages fail to install:
- Install them from a terminal (`$ sudo dpkg -i opensnitch*` / `$ sudo yum install opensnitch*`), and post the errors logged to stdout.
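The log-gathering steps above can be collected into one small script. This is only a sketch: `/var/log/opensnitchd.log` is the path named in this template, and the sample-file fallback is an addition of this example so the snippet still runs on a machine where the daemon log does not exist.

```shell
#!/bin/sh
# Gather opensnitchd diagnostics to attach to a bug report.
LOG=/var/log/opensnitchd.log
if [ ! -r "$LOG" ]; then
    # No readable daemon log on this machine: create a sample file so
    # the tail invocation below still has something to show.
    LOG=$(mktemp)
    for i in $(seq 1 20); do echo "log line $i"; done > "$LOG"
fi
# Post the last 15 lines of the daemon log with your report.
tail -n 15 "$LOG"
```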
**Expected behavior (optional)**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots or videos to help explain your problem; they can make the issue much easier to understand.
**Additional context**
Add any other context about the problem here.
opensnitch-1.6.9/.github/ISSUE_TEMPLATE/config.yml
contact_links:
  - name: 🙋 Question
    url: https://github.com/evilsocket/opensnitch/discussions/new
    about: Ask your question here
opensnitch-1.6.9/.github/ISSUE_TEMPLATE/feature-request.md
---
name: 💡 Feature request
about: Suggest an idea
title: '[Feature Request] '
labels: feature
assignees: ''
---
### Summary:
opensnitch-1.6.9/.github/workflows/build_ebpf_modules.yml
# This is a basic workflow to help you get started with Actions
name: CI - build eBPF modules

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events, but only for paths that affect the eBPF modules
  push:
    paths:
      - 'ebpf_prog/*'
      - '.github/workflows/build_ebpf_modules.yml'
  pull_request:
    paths:
      - 'ebpf_prog/*'
      - '.github/workflows/build_ebpf_modules.yml'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build".
  # The matrix configuration executes the steps once per combination of dimensions:
  #   kernel 5.8 + tag 1.5.0, kernel 5.8 + tag master, kernel 6.0 + tag 1.5.0, etc.
  build:
    strategy:
      matrix:
        kernel: ["6.0"]
        tag: ["1.6.0"]
    runs-on: ubuntu-22.04
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v3
        with:
          # ref: can be a branch name, tag, commit, etc.
          ref: ${{ matrix.tag }}
      - name: Get dependencies
        run: |
          sudo apt-get install git dpkg-dev rpm flex bison ca-certificates wget python3 rsync bc libssl-dev clang llvm libelf-dev libzip-dev git libnetfilter-queue-dev libpcap-dev protobuf-compiler python3-pip dh-golang golang-any golang-golang-x-net-dev golang-google-grpc-dev golang-goprotobuf-dev libmnl-dev golang-github-vishvananda-netlink-dev golang-github-evilsocket-ftrace-dev golang-github-google-gopacket-dev golang-github-fsnotify-fsnotify-dev linux-headers-$(uname -r)
      - name: Download kernel sources and compile eBPF modules
        run: |
          kernel_version="${{ matrix.kernel }}"
          if [ ! -d utils/packaging/ ]; then
            mkdir -p utils/packaging/
          fi
          wget https://raw.githubusercontent.com/evilsocket/opensnitch/master/utils/packaging/build_modules.sh -O utils/packaging/build_modules.sh
          bash utils/packaging/build_modules.sh "$kernel_version"
          sha1sum ebpf_prog/modules/opensnitch*o > ebpf_prog/modules/checksums.txt
      - uses: actions/upload-artifact@v4
        with:
          name: opensnitch-ebpf-modules-${{ matrix.kernel }}-${{ matrix.tag }}
          path: ebpf_prog/modules/*
opensnitch-1.6.9/.github/workflows/generic_validations.yml
name: Test resources validation

on:
  # Trigger this workflow only when the UI resources change.
  push:
    paths:
      - 'ui/resources/*'
      - '.github/workflows/generic_validations.yml'
  pull_request:
    paths:
      - 'ui/resources/*'
      - '.github/workflows/generic_validations.yml'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  build:
    name: Install tools
    runs-on: ubuntu-latest
    steps:
      - name: Check out git code
        uses: actions/checkout@v2
      - name: Get and prepare dependencies
        run: |
          set -e
          set -x
          sudo apt install desktop-file-utils appstream
      - name: Validate resources
        run: |
          set -e
          set -x
          desktop-file-validate ui/resources/opensnitch_ui.desktop
          appstreamcli validate ui/resources/io.github.evilsocket.opensnitch.appdata.xml
opensnitch-1.6.9/.github/workflows/go.yml
name: Build status

on:
  push:
    paths:
      - 'daemon/**'
      - '.github/workflows/go.yml'
  pull_request:
    paths:
      - 'daemon/**'
      - '.github/workflows/go.yml'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Set up Go 1.20.10
        uses: actions/setup-go@v3
        with:
          go-version: 1.20.10
        id: go
      - name: Check out code into the Go module directory
        uses: actions/checkout@v3
      - name: Get dependencies
        run: |
          sudo apt-get install git libnetfilter-queue-dev libmnl-dev libpcap-dev protobuf-compiler
          export GOPATH=~/go
          export PATH=$PATH:$GOPATH/bin
          go install github.com/golang/protobuf/protoc-gen-go@latest
          go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.1
          go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.3.0
          cd proto
          make ../daemon/ui/protocol/ui.pb.go
          cd ../daemon
          go mod tidy; go mod vendor
      - name: Build
        run: |
          cd daemon
          go build -v .
      - name: Test
        run: |
          cd daemon
          sudo PRIVILEGED_TESTS=1 NETLINK_TESTS=1 go test ./...
opensnitch-1.6.9/.gitignore
*.sock
*.pyc
*.profile
opensnitch-1.6.9/LICENSE
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
opensnitch-1.6.9/Makefile:
all: protocol opensnitch_daemon gui
install:
@cd daemon && make install
@cd ui && make install
protocol:
@cd proto && make
opensnitch_daemon:
@cd daemon && make
gui:
@cd ui && make
clean:
@cd daemon && make clean
@cd proto && make clean
@cd ui && make clean
run:
cd ui && pip3 install --upgrade . && cd ..
opensnitch-ui --socket unix:///tmp/osui.sock &
./daemon/opensnitchd -rules-path /etc/opensnitchd/rules -ui-socket unix:///tmp/osui.sock -cpu-profile cpu.profile -mem-profile mem.profile
test:
clear
make clean
clear
mkdir -p rules
make
clear
make run
adblocker:
clear
make clean
clear
make
clear
python make_ads_rules.py
clear
cd ui && pip3 install --upgrade . && cd ..
opensnitch-ui --socket unix:///tmp/osui.sock &
./daemon/opensnitchd -rules-path /etc/opensnitchd/rules -ui-socket unix:///tmp/osui.sock
opensnitch-1.6.9/README.md:
OpenSnitch is a GNU/Linux application firewall.
## Key features
* Interactive outbound connections filtering.
* [Block ads, trackers or malware domains](https://github.com/evilsocket/opensnitch/wiki/block-lists) system wide.
* Ability to [configure system firewall](https://github.com/evilsocket/opensnitch/wiki/System-rules) from the GUI (nftables).
- Configure input policy, allow inbound services, etc.
* Manage [multiple nodes](https://github.com/evilsocket/opensnitch/wiki/Nodes) from a centralized GUI.
* [SIEM integration](https://github.com/evilsocket/opensnitch/wiki/SIEM-integration)
## Download
Download deb/rpm packages for your system from https://github.com/evilsocket/opensnitch/releases
## Installation
#### deb
> $ sudo apt install ./opensnitch*.deb ./python3-opensnitch-ui*.deb
#### rpm
> $ sudo yum localinstall opensnitch-1*.rpm; sudo yum localinstall opensnitch-ui*.rpm
Then run: `$ opensnitch-ui` or launch the GUI from the Applications menu.
Please, refer to [the documentation](https://github.com/evilsocket/opensnitch/wiki/Installation) for detailed information.
## OpenSnitch in action
Examples of OpenSnitch intercepting unexpected connections:
https://github.com/evilsocket/opensnitch/discussions/categories/show-and-tell
Have you seen a connection you didn't expect? [submit it!](https://github.com/evilsocket/opensnitch/discussions/new?category=show-and-tell)
## In the press
- 2017 [PenTest Magazine](https://twitter.com/pentestmag/status/857321886807605248)
- 11/2019 [It's Foss](https://itsfoss.com/opensnitch-firewall-linux/)
- 03/2020 [Linux Format #232](https://www.linux-magazine.com/Issues/2020/232/Firewalld-and-OpenSnitch)
- 08/2020 [Linux Magazine Polska #194](https://linux-magazine.pl/archiwum/wydanie/387)
- 08/2021 [Linux Format #280](https://github.com/evilsocket/opensnitch/discussions/631)
- 02/2022 [Linux User](https://www.linux-community.de/magazine/linuxuser/2022/03/)
- 06/2022 [Linux Magazine #259](https://www.linux-magazine.com/Issues/2022/259/OpenSnitch)
## Donations
If you find OpenSnitch useful and want to donate to the dedicated developers, you can do it from the **Sponsor this project** section on the right side of this repository.
You can see who the current maintainers of OpenSnitch are here:
https://github.com/evilsocket/opensnitch/commits/master
## Contributors
[See the list](https://github.com/evilsocket/opensnitch/graphs/contributors)
## Translating
opensnitch-1.6.9/daemon/.gitignore:
opensnitchd
vendor
opensnitch-1.6.9/daemon/Gopkg.toml:
[[constraint]]
name = "github.com/fsnotify/fsnotify"
version = "1.4.7"
[[constraint]]
name = "github.com/google/gopacket"
version = "~1.1.14"
[[constraint]]
name = "google.golang.org/grpc"
version = "~1.11.2"
[[constraint]]
name = "github.com/evilsocket/ftrace"
version = "~1.2.0"
[prune]
go-tests = true
unused-packages = true
opensnitch-1.6.9/daemon/Makefile:
# SRC contains all *.go, *.c and *.h files in daemon/ and its subfolders
SRC := $(shell find . -type f -name '*.go' -o -name '*.h' -o -name '*.c')
PREFIX?=/usr/local
all: opensnitchd
install:
@mkdir -p $(DESTDIR)/etc/opensnitchd/rules
@install -Dm755 opensnitchd \
-t $(DESTDIR)$(PREFIX)/bin/
@install -Dm644 opensnitchd.service \
-t $(DESTDIR)/etc/systemd/system/
@install -Dm644 default-config.json \
-t $(DESTDIR)/etc/opensnitchd/
@install -Dm644 system-fw.json \
-t $(DESTDIR)/etc/opensnitchd/
@systemctl daemon-reload
opensnitchd: $(SRC)
@go get
@go build -o opensnitchd .
clean:
@rm -rf opensnitchd
opensnitch-1.6.9/daemon/conman/connection.go:
package conman
import (
"errors"
"fmt"
"net"
"os"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/dns"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/netfilter"
"github.com/evilsocket/opensnitch/daemon/netlink"
"github.com/evilsocket/opensnitch/daemon/netstat"
"github.com/evilsocket/opensnitch/daemon/procmon"
"github.com/evilsocket/opensnitch/daemon/procmon/audit"
"github.com/evilsocket/opensnitch/daemon/procmon/ebpf"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
"github.com/google/gopacket/layers"
)
// Connection represents an outgoing connection.
type Connection struct {
Entry *netstat.Entry
Process *procmon.Process
Pkt *netfilter.Packet
Protocol string
DstHost string
SrcIP net.IP
DstIP net.IP
SrcPort uint
DstPort uint
}
var showUnknownCons = false
// Parse extracts the IP layers from a network packet to determine what
// process generated a connection.
func Parse(nfp netfilter.Packet, interceptUnknown bool) *Connection {
showUnknownCons = interceptUnknown
if nfp.IsIPv4() {
con, err := NewConnection(&nfp)
if err != nil {
log.Debug("%s", err)
return nil
} else if con == nil {
return nil
}
return con
}
if core.IPv6Enabled == false {
return nil
}
con, err := NewConnection6(&nfp)
if err != nil {
log.Debug("%s", err)
return nil
} else if con == nil {
return nil
}
return con
}
func newConnectionImpl(nfp *netfilter.Packet, c *Connection, protoType string) (cr *Connection, err error) {
// no errors, but not enough info either
if c.parseDirection(protoType) == false {
return nil, nil
}
log.Debug("new connection %s => %d:%v -> %v (%s):%d uid: %d, mark: %x", c.Protocol, c.SrcPort, c.SrcIP, c.DstIP, c.DstHost, c.DstPort, nfp.UID, nfp.Mark)
c.Entry = &netstat.Entry{
Proto: c.Protocol,
SrcIP: c.SrcIP,
SrcPort: c.SrcPort,
DstIP: c.DstIP,
DstPort: c.DstPort,
UserId: -1,
INode: -1,
}
pid := -1
uid := -1
if procmon.MethodIsEbpf() {
swap := false
c.Process, swap, err = ebpf.GetPid(c.Protocol, c.SrcPort, c.SrcIP, c.DstIP, c.DstPort)
if swap {
c.swapFields()
}
if c.Process != nil {
c.Entry.UserId = c.Process.UID
return c, nil
}
if err != nil {
log.Debug("ebpf warning: %v", err)
return nil, nil
}
} else if procmon.MethodIsAudit() {
if aevent := audit.GetEventByPid(pid); aevent != nil {
audit.Lock.RLock()
c.Process = procmon.NewProcess(pid, aevent.ProcName)
c.Process.Path = aevent.ProcPath
c.Process.ReadCmdline()
c.Process.CWD = aevent.ProcDir
audit.Lock.RUnlock()
// if the proc dir contains non alpha-numeric chars the field is empty
if c.Process.CWD == "" {
c.Process.ReadCwd()
}
c.Process.ReadEnv()
c.Process.CleanPath()
procmon.AddToActivePidsCache(uint64(pid), c.Process)
return c, nil
}
}
// Sometimes when using eBPF, the PID is not found by the connection's parameters,
// but falling back to legacy methods helps to find it and avoid "unknown/kernel pop-ups".
//
// One of the reasons is because after coming back from suspend state, for some reason (bug?),
// gobpf/libbpf is unable to delete ebpf map entries, so when they reach the maximum capacity no
// more entries are added, nor updated.
if pid < 0 {
// 0. lookup uid and inode via netlink. Can return several inodes.
// 1. lookup uid and inode using /proc/net/(udp|tcp|udplite)
// 2. lookup pid by inode
// 3. if this is coming from us, just accept
// 4. lookup process info by pid
var inodeList []int
uid, inodeList = netlink.GetSocketInfo(c.Protocol, c.SrcIP, c.SrcPort, c.DstIP, c.DstPort)
if len(inodeList) == 0 {
procmon.GetInodeFromNetstat(c.Entry, &inodeList, c.Protocol, c.SrcIP, c.SrcPort, c.DstIP, c.DstPort)
}
for n, inode := range inodeList {
pid = procmon.GetPIDFromINode(inode, fmt.Sprint(inode, c.SrcIP, c.SrcPort, c.DstIP, c.DstPort))
if pid != -1 {
log.Debug("[%d] PID found %d [%d]", n, pid, inode)
c.Entry.INode = inode
break
}
}
}
if pid == os.Getpid() {
// return a Process object with our PID, to be able to exclude our own connections
// (to the UI on a local socket for example)
c.Process = procmon.NewProcess(pid, "")
return c, nil
}
if nfp.UID != 0xffffffff {
uid = int(nfp.UID)
}
c.Entry.UserId = uid
if c.Process == nil {
if c.Process = procmon.FindProcess(pid, showUnknownCons); c.Process == nil {
return nil, fmt.Errorf("Could not find process by its pid %d for: %s", pid, c)
}
}
return c, nil
}
// NewConnection creates a new Connection object, and returns the details of it.
func NewConnection(nfp *netfilter.Packet) (c *Connection, err error) {
ipv4 := nfp.Packet.Layer(layers.LayerTypeIPv4)
if ipv4 == nil {
return nil, errors.New("Error getting IPv4 layer")
}
ip, ok := ipv4.(*layers.IPv4)
if !ok {
return nil, errors.New("Error getting IPv4 layer data")
}
c = &Connection{
SrcIP: ip.SrcIP,
DstIP: ip.DstIP,
DstHost: dns.HostOr(ip.DstIP, ""),
Pkt: nfp,
}
return newConnectionImpl(nfp, c, "")
}
// NewConnection6 creates a new IPv6 Connection object, and returns the details of it.
func NewConnection6(nfp *netfilter.Packet) (c *Connection, err error) {
ipv6 := nfp.Packet.Layer(layers.LayerTypeIPv6)
if ipv6 == nil {
return nil, errors.New("Error getting IPv6 layer")
}
ip, ok := ipv6.(*layers.IPv6)
if !ok {
return nil, errors.New("Error getting IPv6 layer data")
}
c = &Connection{
SrcIP: ip.SrcIP,
DstIP: ip.DstIP,
DstHost: dns.HostOr(ip.DstIP, ""),
Pkt: nfp,
}
return newConnectionImpl(nfp, c, "6")
}
func (c *Connection) parseDirection(protoType string) bool {
ret := false
if tcpLayer := c.Pkt.Packet.Layer(layers.LayerTypeTCP); tcpLayer != nil {
if tcp, ok := tcpLayer.(*layers.TCP); ok == true && tcp != nil {
c.Protocol = "tcp" + protoType
c.DstPort = uint(tcp.DstPort)
c.SrcPort = uint(tcp.SrcPort)
ret = true
if tcp.DstPort == 53 {
c.getDomains(c.Pkt, c)
}
}
} else if udpLayer := c.Pkt.Packet.Layer(layers.LayerTypeUDP); udpLayer != nil {
if udp, ok := udpLayer.(*layers.UDP); ok == true && udp != nil {
c.Protocol = "udp" + protoType
c.DstPort = uint(udp.DstPort)
c.SrcPort = uint(udp.SrcPort)
ret = true
if udp.DstPort == 53 {
c.getDomains(c.Pkt, c)
}
}
} else if udpliteLayer := c.Pkt.Packet.Layer(layers.LayerTypeUDPLite); udpliteLayer != nil {
if udplite, ok := udpliteLayer.(*layers.UDPLite); ok == true && udplite != nil {
c.Protocol = "udplite" + protoType
c.DstPort = uint(udplite.DstPort)
c.SrcPort = uint(udplite.SrcPort)
ret = true
}
} else if sctpLayer := c.Pkt.Packet.Layer(layers.LayerTypeSCTP); sctpLayer != nil {
if sctp, ok := sctpLayer.(*layers.SCTP); ok == true && sctp != nil {
c.Protocol = "sctp" + protoType
c.DstPort = uint(sctp.DstPort)
c.SrcPort = uint(sctp.SrcPort)
ret = true
}
} else if icmpLayer := c.Pkt.Packet.Layer(layers.LayerTypeICMPv4); icmpLayer != nil {
if icmp, ok := icmpLayer.(*layers.ICMPv4); ok == true && icmp != nil {
c.Protocol = "icmp"
c.DstPort = 0
c.SrcPort = 0
ret = true
}
} else if icmp6Layer := c.Pkt.Packet.Layer(layers.LayerTypeICMPv6); icmp6Layer != nil {
if icmp6, ok := icmp6Layer.(*layers.ICMPv6); ok == true && icmp6 != nil {
c.Protocol = "icmp" + protoType
c.DstPort = 0
c.SrcPort = 0
ret = true
}
}
return ret
}
// swapFields swaps connection's fields.
// Used to workaround an issue where outbound connections
// have the fields swapped (procmon/ebpf/find.go).
func (c *Connection) swapFields() {
oEntry := c.Entry
c.Entry = &netstat.Entry{
Proto: c.Protocol,
SrcIP: oEntry.DstIP,
DstIP: oEntry.SrcIP,
SrcPort: oEntry.DstPort,
DstPort: oEntry.SrcPort,
UserId: oEntry.UserId,
INode: oEntry.INode,
}
c.SrcIP = oEntry.DstIP
c.DstIP = oEntry.SrcIP
c.DstPort = oEntry.SrcPort
c.SrcPort = oEntry.DstPort
}
func (c *Connection) getDomains(nfp *netfilter.Packet, con *Connection) {
domains := dns.GetQuestions(nfp)
if len(domains) < 1 {
return
}
for _, dns := range domains {
con.DstHost = dns
}
}
// To returns the destination host of a connection.
func (c *Connection) To() string {
if c.DstHost == "" {
return c.DstIP.String()
}
return fmt.Sprintf("%s (%s)", c.DstHost, c.DstIP)
}
func (c *Connection) String() string {
if c.Entry == nil {
return fmt.Sprintf("%d:%s ->(%s)-> %s:%d", c.SrcPort, c.SrcIP, c.Protocol, c.To(), c.DstPort)
}
if c.Process == nil {
return fmt.Sprintf("%d:%s (uid:%d) ->(%s)-> %s:%d", c.SrcPort, c.SrcIP, c.Entry.UserId, c.Protocol, c.To(), c.DstPort)
}
return fmt.Sprintf("%s (%d) -> %s:%d (proto:%s uid:%d)", c.Process.Path, c.Process.ID, c.To(), c.DstPort, c.Protocol, c.Entry.UserId)
}
// Serialize returns a connection serialized.
func (c *Connection) Serialize() *protocol.Connection {
return &protocol.Connection{
Protocol: c.Protocol,
SrcIp: c.SrcIP.String(),
SrcPort: uint32(c.SrcPort),
DstIp: c.DstIP.String(),
DstHost: c.DstHost,
DstPort: uint32(c.DstPort),
UserId: uint32(c.Entry.UserId),
ProcessId: uint32(c.Process.ID),
ProcessPath: c.Process.Path,
ProcessArgs: c.Process.Args,
ProcessEnv: c.Process.Env,
ProcessCwd: c.Process.CWD,
}
}
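The numbered fallback steps inside `newConnectionImpl` (netlink lookup, then `/proc/net/(udp|tcp|udplite)`, then inode-to-PID) start from `/proc/net/tcp`-style entries, where addresses are hex-encoded per 32-bit word. A standalone sketch of parsing one such entry — the helper name and the sample line are illustrative, not part of the daemon, and the byte order shown assumes a little-endian machine:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseProcNetLine extracts the local IP:port and the socket inode from a
// single /proc/net/tcp entry. Field layout:
// sl local_address rem_address st tx:rx tr:tm->when retrnsmt uid timeout inode ...
func parseProcNetLine(line string) (ip string, port uint64, inode uint64, err error) {
	fields := strings.Fields(line)
	local := strings.Split(fields[1], ":")
	raw, err := strconv.ParseUint(local[0], 16, 32)
	if err != nil {
		return "", 0, 0, err
	}
	// each 32-bit word is stored little-endian on x86
	ip = fmt.Sprintf("%d.%d.%d.%d", byte(raw), byte(raw>>8), byte(raw>>16), byte(raw>>24))
	port, err = strconv.ParseUint(local[1], 16, 16)
	if err != nil {
		return "", 0, 0, err
	}
	inode, err = strconv.ParseUint(fields[9], 10, 64)
	return ip, port, inode, err
}

func main() {
	// hypothetical sample entry in /proc/net/tcp format
	line := "   0: 6D01A8C0:BA3C 01010101:0017 01 00000000:00000000 00:00000000 00000000  1000        0 123456 1 0000000000000000 100 0 0 10 0"
	ip, port, inode, err := parseProcNetLine(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip, port, inode)
}
```

Once the inode is known, the daemon maps it to a PID by scanning `/proc/<pid>/fd` for a matching `socket:[inode]` link (see `procmon.GetPIDFromINode`).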
opensnitch-1.6.9/daemon/conman/connection_test.go:
package conman
import (
"fmt"
"net"
"testing"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/evilsocket/opensnitch/daemon/netfilter"
)
// Adding new packets:
// wireshark -> right click -> Copy as HexDump -> create []byte{}
func NewTCPPacket() gopacket.Packet {
// 47676:192.168.1.100 -> 1.1.1.1:23
testTCPPacket := []byte{0x4c, 0x6e, 0x6e, 0xd5, 0x79, 0xbf, 0x00, 0x28, 0x9d, 0x43, 0x7f, 0xd7, 0x08, 0x00, 0x45, 0x10,
0x00, 0x3c, 0x1d, 0x07, 0x40, 0x00, 0x40, 0x06, 0x59, 0x8e, 0xc0, 0xa8, 0x01, 0x6d, 0x01, 0x01,
0x01, 0x01, 0xba, 0x3c, 0x00, 0x17, 0x47, 0x7e, 0xf3, 0x0b, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02,
0xfa, 0xf0, 0x4c, 0x27, 0x00, 0x00, 0x02, 0x04, 0x05, 0xb4, 0x04, 0x02, 0x08, 0x0a, 0x91, 0xfb,
0xb5, 0xf4, 0x00, 0x00, 0x00, 0x00, 0x01, 0x03, 0x03, 0x0a}
return gopacket.NewPacket(testTCPPacket, layers.LinkTypeEthernet, gopacket.Default)
}
func NewUDPPacket() gopacket.Packet {
// 29517:192.168.1.109 -> 1.0.0.1:53
testUDPPacketDNS := []byte{
0x4c, 0x6e, 0x6e, 0xd5, 0x79, 0xbf, 0x00, 0x28, 0x9d, 0x43, 0x7f, 0xd7, 0x08, 0x00, 0x45, 0x00,
0x00, 0x40, 0x54, 0x1a, 0x40, 0x00, 0x3f, 0x11, 0x24, 0x7d, 0xc0, 0xa8, 0x01, 0x6d, 0x01, 0x00,
0x00, 0x01, 0x73, 0x4d, 0x00, 0x35, 0x00, 0x2c, 0xf1, 0x17, 0x05, 0x51, 0x00, 0x20, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x02, 0x70, 0x69, 0x04, 0x68, 0x6f, 0x6c, 0x65, 0x00, 0x00,
0x01, 0x00, 0x01, 0x00, 0x00, 0x29, 0x10, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00,
}
return gopacket.NewPacket(testUDPPacketDNS, layers.LinkTypeEthernet, gopacket.Default)
}
func EstablishConnection(proto, dst string) (net.Conn, error) {
c, err := net.Dial(proto, dst)
if err != nil {
fmt.Println(err)
return nil, err
}
return c, nil
}
func ListenOnPort(proto, port string) (net.Listener, error) {
l, err := net.Listen(proto, port)
if err != nil {
fmt.Println(err)
return nil, err
}
return l, nil
}
func NewPacket(pkt gopacket.Packet) *netfilter.Packet {
return &netfilter.Packet{
Packet: pkt,
UID: 666,
NetworkProtocol: netfilter.IPv4,
}
}
func NewDummyConnection(src, dst net.IP) *Connection {
return &Connection{
SrcIP: src,
DstIP: dst,
}
}
// Test TCP parseDirection()
func TestParseTCPDirection(t *testing.T) {
srcIP := net.IP{192, 168, 1, 100}
dstIP := net.IP{1, 1, 1, 1}
c := NewDummyConnection(srcIP, dstIP)
// 47676:192.168.1.100 -> 1.1.1.1:23
pkt := NewPacket(NewTCPPacket())
c.Pkt = pkt
// parseDirection extracts the src and dst port from a network packet.
if c.parseDirection("") == false {
t.Error("parseDirection() should not be false")
t.Fail()
}
if c.SrcPort != 47676 {
t.Error("parseDirection() SrcPort mismatch:", c)
t.Fail()
}
if c.DstPort != 23 {
t.Error("parseDirection() DstPort mismatch:", c)
t.Fail()
}
if c.Protocol != "tcp" {
t.Error("parseDirection() Protocol mismatch:", c)
t.Fail()
}
}
// Test UDP parseDirection()
func TestParseUDPDirection(t *testing.T) {
srcIP := net.IP{192, 168, 1, 100}
dstIP := net.IP{1, 0, 0, 1}
c := NewDummyConnection(srcIP, dstIP)
// 29517:192.168.1.109 -> 1.0.0.1:53
pkt := NewPacket(NewUDPPacket())
c.Pkt = pkt
// parseDirection extracts the src and dst port from a network packet.
if c.parseDirection("") == false {
t.Error("parseDirection() should not be false")
t.Fail()
}
if c.SrcPort != 29517 {
t.Error("parseDirection() SrcPort mismatch:", c)
t.Fail()
}
if c.DstPort != 53 {
t.Error("parseDirection() DstPort mismatch:", c)
t.Fail()
}
if c.Protocol != "udp" {
t.Error("parseDirection() Protocol mismatch:", c)
t.Fail()
}
}
opensnitch-1.6.9/daemon/core/core.go:
package core
import (
"fmt"
"os"
"os/exec"
"os/user"
"path/filepath"
"strings"
"time"
)
const (
defaultTrimSet = "\r\n\t "
)
// Trim removes leading and trailing whitespace (spaces, tabs, newlines) from a string.
func Trim(s string) string {
return strings.Trim(s, defaultTrimSet)
}
// Exec spawns a new process and returns the output.
func Exec(executable string, args []string) (string, error) {
path, err := exec.LookPath(executable)
if err != nil {
return "", err
}
raw, err := exec.Command(path, args...).CombinedOutput()
if err != nil {
return "", err
}
return Trim(string(raw)), nil
}
// Exists checks if a path exists.
func Exists(path string) bool {
if _, err := os.Stat(path); os.IsNotExist(err) {
return false
}
return true
}
// ExpandPath replaces '~' shorthand with the user's home directory.
func ExpandPath(path string) (string, error) {
// Check if path is empty
if path != "" {
if strings.HasPrefix(path, "~") {
usr, err := user.Current()
if err != nil {
return "", err
}
// Replace only the first occurrence of ~
path = strings.Replace(path, "~", usr.HomeDir, 1)
}
return filepath.Abs(path)
}
return "", nil
}
// IsAbsPath reports whether a path is absolute.
func IsAbsPath(path string) bool {
	return len(path) > 0 && path[0] == '/' // guard against empty paths
}
}
// GetFileModTime returns the last modification time of a file.
func GetFileModTime(filepath string) (time.Time, error) {
fi, err := os.Stat(filepath)
if err != nil || fi.IsDir() {
return time.Now(), fmt.Errorf("GetFileModTime() Invalid file")
}
return fi.ModTime(), nil
}
// ConcatStrings joins the provided strings.
func ConcatStrings(args ...string) string {
return strings.Join(args, "")
}
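`Exec` above resolves the executable through `$PATH` with `exec.LookPath` before running it, then trims the combined stdout/stderr output. A minimal standalone sketch of the same pattern (`runCommand` is a hypothetical name, not part of the package):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runCommand mirrors core.Exec: resolve the binary via $PATH first,
// then capture and trim its combined output.
func runCommand(executable string, args ...string) (string, error) {
	path, err := exec.LookPath(executable)
	if err != nil {
		return "", err
	}
	out, err := exec.Command(path, args...).CombinedOutput()
	if err != nil {
		return "", err
	}
	return strings.Trim(string(out), "\r\n\t "), nil
}

func main() {
	out, err := runCommand("echo", "hello", "world")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Resolving with `LookPath` first lets the caller distinguish "binary not installed" from "binary ran and failed", which a bare `exec.Command(executable, ...)` call would conflate.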
opensnitch-1.6.9/daemon/core/ebpf.go:
package core
import (
"fmt"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/iovisor/gobpf/elf"
)
// LoadEbpfModule loads the given eBPF module, from the given path if specified.
// Otherwise it'll try to load the module from several default paths.
func LoadEbpfModule(module, path string) (m *elf.Module, err error) {
var (
modulesDir = "/opensnitchd/ebpf"
paths = []string{
fmt.Sprint("/usr/local/lib", modulesDir),
fmt.Sprint("/usr/lib", modulesDir),
fmt.Sprint("/etc/opensnitchd"), // Deprecated: will be removed in future versions.
}
)
// if path has been specified, try to load the module from there.
if path != "" {
paths = []string{path}
}
modulePath := ""
moduleError := fmt.Errorf(`Module not found (%s) in any of the paths.
You may need to install the corresponding package`, module)
for _, p := range paths {
modulePath = fmt.Sprint(p, "/", module)
log.Debug("[eBPF] trying to load %s", modulePath)
if !Exists(modulePath) {
continue
}
m = elf.NewModule(modulePath)
if m.Load(nil) == nil {
log.Info("[eBPF] module loaded: %s", modulePath)
return m, nil
}
moduleError = fmt.Errorf(`
unable to load eBPF module (%s). Your kernel version (%s) might not be compatible.
If this error persists, change process monitor method to 'proc'`, module, GetKernelVersion())
}
return m, moduleError
}
opensnitch-1.6.9/daemon/core/gzip.go:
package core
import (
"compress/gzip"
"io/ioutil"
"os"
)
// ReadGzipFile reads a gzip-compressed file and returns its decompressed content.
func ReadGzipFile(filename string) ([]byte, error) {
fd, err := os.Open(filename)
if err != nil {
return nil, err
}
defer fd.Close()
gz, err := gzip.NewReader(fd)
if err != nil {
return nil, err
}
defer gz.Close()
s, err := ioutil.ReadAll(gz)
if err != nil {
return nil, err
}
return s, nil
}
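`ReadGzipFile` is the read half of a gzip round trip (it is used below to read `/proc/config.gz`). A self-contained sketch of the same decompression logic, using an in-memory buffer instead of a file so it needs no fixture (helper names are mine):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io/ioutil"
)

// gzipBytes compresses data into an in-memory gzip stream.
func gzipBytes(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(data); err != nil {
		return nil, err
	}
	// Close flushes the gzip footer; without it the stream is truncated.
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// gunzipBytes mirrors ReadGzipFile: wrap the source in a gzip.Reader
// and drain it.
func gunzipBytes(data []byte) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return ioutil.ReadAll(zr)
}

func main() {
	compressed, err := gzipBytes([]byte("CONFIG_NETFILTER=y\n"))
	if err != nil {
		panic(err)
	}
	plain, err := gunzipBytes(compressed)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(plain))
}
```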
opensnitch-1.6.9/daemon/core/system.go:
package core
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"regexp"
"strings"
"github.com/evilsocket/opensnitch/daemon/log"
)
var (
// IPv6Enabled indicates if IPv6 protocol is enabled in the system
IPv6Enabled = Exists("/proc/sys/net/ipv6")
)
// GetHostname returns the name of the host where the daemon is running.
func GetHostname() string {
hostname, _ := ioutil.ReadFile("/proc/sys/kernel/hostname")
return strings.Replace(string(hostname), "\n", "", -1)
}
// GetKernelVersion returns the kernel version.
func GetKernelVersion() string {
version, _ := ioutil.ReadFile("/proc/sys/kernel/osrelease")
return strings.Replace(string(version), "\n", "", -1)
}
// CheckSysRequirements checks system features we need to work properly
func CheckSysRequirements() {
type checksT struct {
RegExps []string
Reason string
}
type ReqsList struct {
Item string
Checks checksT
}
kVer := GetKernelVersion()
log.Raw("\n\t%sChecking system requirements for kernel version %s%s\n", log.FG_WHITE+log.BG_LBLUE, kVer, log.RESET)
log.Raw("%s------------------------------------------------------------------------------%s\n\n", log.FG_WHITE+log.BG_LBLUE, log.RESET)
confPaths := []string{
fmt.Sprint("/boot/config-", kVer),
"/proc/config.gz",
// Fedora SilverBlue
fmt.Sprint("/usr/lib/modules/", kVer, "/config"),
}
var fileContent []byte
var err error
for _, confFile := range confPaths {
if !Exists(confFile) {
err = fmt.Errorf("%s not found", confFile)
log.Debug(err.Error())
continue
}
if confFile[len(confFile)-2:] == "gz" {
fileContent, err = ReadGzipFile(confFile)
} else {
fileContent, err = ioutil.ReadFile(confFile)
}
if err == nil {
break
}
}
if err != nil {
fmt.Printf("\n\t%s kernel config not found (%s) in any of the expected paths.\n", log.Bold(log.Red("✘")), kVer)
fmt.Printf("\tPlease, open a new issue on github specifying your kernel and distro version (/etc/os-release).\n\n")
return
}
// TODO: check loaded/configured modules (nfnetlink, nfnetlink_queue, xt_NFQUEUE, etc)
// Other items to check:
// CONFIG_NETFILTER_NETLINK
// CONFIG_NETFILTER_NETLINK_QUEUE
const reqsList = `
[
{
"Item": "kprobes",
"Checks": {
"Regexps": [
"CONFIG_KPROBES=y",
"CONFIG_KPROBES_ON_FTRACE=y",
"CONFIG_HAVE_KPROBES=y",
"CONFIG_HAVE_KPROBES_ON_FTRACE=y",
"CONFIG_KPROBE_EVENTS=y"
],
"Reason": " - KPROBES not fully supported by this kernel."
}
},
{
"Item": "uprobes",
"Checks": {
"Regexps": [
"CONFIG_UPROBES=y",
"CONFIG_UPROBE_EVENTS=y"
],
"Reason": " * UPROBES not supported. Common error => cannot open uprobe_events: open /sys/kernel/debug/tracing/uprobe_events"
}
},
{
"Item": "ftrace",
"Checks": {
"Regexps": [
"CONFIG_FTRACE=y"
],
"Reason": " - CONFIG_TRACE=y not set. Common error => Error while loading kprobes: invalid argument."
}
},
{
"Item": "syscalls",
"Checks": {
"Regexps": [
"CONFIG_HAVE_SYSCALL_TRACEPOINTS=y",
"CONFIG_FTRACE_SYSCALLS=y"
],
"Reason": " - CONFIG_FTRACE_SYSCALLS or CONFIG_HAVE_SYSCALL_TRACEPOINTS not set. Common error => error enabling tracepoint tracepoint/syscalls/sys_enter_execve: cannot read tracepoint id"
}
},
{
"Item": "nfqueue",
"Checks": {
"Regexps": [
"CONFIG_NETFILTER_NETLINK_QUEUE=[my]",
"CONFIG_NFT_QUEUE=[my]",
"CONFIG_NETFILTER_XT_TARGET_NFQUEUE=[my]"
],
"Reason": " * NFQUEUE netfilter extensions not supported by this kernel (CONFIG_NETFILTER_NETLINK_QUEUE, CONFIG_NFT_QUEUE, CONFIG_NETFILTER_XT_TARGET_NFQUEUE)."
}
},
{
"Item": "netlink",
"Checks": {
"Regexps": [
"CONFIG_NETFILTER_NETLINK=[my]",
"CONFIG_NETFILTER_NETLINK_QUEUE=[my]",
"CONFIG_NETFILTER_NETLINK_ACCT=[my]"
],
"Reason": " * NETLINK extensions not supported by this kernel (CONFIG_NETFILTER_NETLINK, CONFIG_NETFILTER_NETLINK_QUEUE, CONFIG_NETFILTER_NETLINK_ACCT)."
}
},
{
"Item": "net diagnostics",
"Checks": {
"Regexps": [
"CONFIG_INET_DIAG=[my]",
"CONFIG_INET_TCP_DIAG=[my]",
"CONFIG_INET_UDP_DIAG=[my]",
"CONFIG_INET_DIAG_DESTROY=[my]"
],
"Reason": " * One or more socket monitoring interfaces are not enabled (CONFIG_INET_DIAG, CONFIG_INET_TCP_DIAG, CONFIG_INET_UDP_DIAG, CONFIG_DIAG_DESTROY (Reject feature))."
}
}
]
`
reqsFullfiled := true
dec := json.NewDecoder(strings.NewReader(reqsList))
for {
var reqs []ReqsList
if err := dec.Decode(&reqs); err == io.EOF {
break
} else if err != nil {
log.Error("%s", err)
break
}
for _, req := range reqs {
checkOk := true
for _, trex := range req.Checks.RegExps {
fmt.Printf("\tChecking => %s\n", trex)
re, err := regexp.Compile(trex)
if err != nil {
fmt.Printf("\t%s %s\n", log.Bold(log.Red("Invalid regexp =>")), log.Red(trex))
continue
}
if re.Find(fileContent) == nil {
fmt.Printf("\t%s\n", log.Red(req.Checks.Reason))
checkOk = false
}
}
if checkOk {
fmt.Printf("\n\t* %s\t %s\n", log.Bold(log.Green(req.Item)), log.Bold(log.Green("✔")))
} else {
reqsFullfiled = false
fmt.Printf("\n\t* %s\t %s\n", log.Bold(log.Red(req.Item)), log.Bold(log.Red("✘")))
}
fmt.Println()
}
}
if !reqsFullfiled {
log.Raw("\n%sWARNING:%s Your kernel doesn't support some of the features OpenSnitch needs:\nRead more: https://github.com/evilsocket/opensnitch/issues/774\n", log.FG_WHITE+log.BG_YELLOW, log.RESET)
}
}
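The core of CheckSysRequirements is data-driven: decode a JSON list of named checks, then require every regexp of a check to match the kernel config blob. A stripped-down sketch of that loop (with a simplified `check` struct and a hypothetical `runChecks` helper):

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

type check struct {
	Item    string
	Regexps []string
}

// runChecks returns the items whose regexps all match the config blob,
// mirroring the matching loop in CheckSysRequirements.
func runChecks(rawChecks string, config []byte) ([]string, error) {
	var checks []check
	if err := json.Unmarshal([]byte(rawChecks), &checks); err != nil {
		return nil, err
	}
	var passed []string
	for _, c := range checks {
		ok := true
		for _, expr := range c.Regexps {
			re, err := regexp.Compile(expr)
			if err != nil || re.Find(config) == nil {
				ok = false
				break
			}
		}
		if ok {
			passed = append(passed, c.Item)
		}
	}
	return passed, nil
}

func main() {
	conf := []byte("CONFIG_KPROBES=y\nCONFIG_FTRACE=y\nCONFIG_NFT_QUEUE=m\n")
	checks := `[
	  {"Item": "kprobes", "Regexps": ["CONFIG_KPROBES=y"]},
	  {"Item": "nfqueue", "Regexps": ["CONFIG_NFT_QUEUE=[my]"]},
	  {"Item": "uprobes", "Regexps": ["CONFIG_UPROBES=y"]}
	]`
	passed, err := runChecks(checks, conf)
	fmt.Println(passed, err) // only kprobes and nfqueue pass
}
```

Note that patterns like `CONFIG_NFT_QUEUE=[my]` accept both built-in (`y`) and module (`m`) kernel options.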
// daemon/core/version.go
package core
// version related consts
const (
Name = "opensnitch-daemon"
Version = "1.6.9"
Author = "Simone 'evilsocket' Margaritelli"
Website = "https://github.com/evilsocket/opensnitch"
)
daemon/data/rules/000-allow-localhost.json
{
"created": "2023-07-05T10:46:47.904024069+01:00",
"updated": "2023-07-05T10:46:47.921828104+01:00",
"name": "000-allow-localhost",
"description": "Allow connections to localhost. See this link for more information:\nhttps://github.com/evilsocket/opensnitch/wiki/Rules#localhost-connections",
"enabled": true,
"precedence": true,
"action": "allow",
"duration": "always",
"operator": {
"type": "regexp",
"operand": "dest.ip",
"sensitive": false,
"data": "^(127\\.0\\.0\\.1|::1)$",
"list": []
}
}
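The rule's `regexp` operator compiles the `data` field and matches it against the destination IP. A small sketch of how that pattern behaves once the JSON escaping is removed (the `isLocalhost` helper is illustrative): note it matches only the two exact addresses `127.0.0.1` and `::1`, not the whole 127.0.0.0/8 range.

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the rule's "data" field, with JSON escaping removed.
var localhostRe = regexp.MustCompile(`^(127\.0\.0\.1|::1)$`)

func isLocalhost(ip string) bool {
	return localhostRe.MatchString(ip)
}

func main() {
	for _, ip := range []string{"127.0.0.1", "::1", "127.0.0.2", "192.168.1.1"} {
		fmt.Println(ip, isLocalhost(ip))
	}
}
```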
daemon/default-config.json
{
"Server":
{
"Address":"unix:///tmp/osui.sock",
"LogFile":"/var/log/opensnitchd.log"
},
"DefaultAction": "allow",
"DefaultDuration": "once",
"InterceptUnknown": false,
"ProcMonitorMethod": "ebpf",
"LogLevel": 2,
"LogUTC": true,
"LogMicro": false,
"Firewall": "nftables",
"Rules": {
"Path": "/etc/opensnitchd/rules/"
},
"Stats": {
"MaxEvents": 150,
"MaxStats": 25,
"Workers": 6
},
"Internal": {
"GCPercent": 100,
"FlushConnsOnStart": true
}
}
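The daemon reads this file with `encoding/json`; a partial struct with matching field names is enough to pull out individual settings. The `daemonConfig`/`parseConfig` names below are illustrative and only mirror a few of the fields shown above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig is a partial mirror of the configuration above;
// only the fields inspected here are declared.
type daemonConfig struct {
	DefaultAction     string
	ProcMonitorMethod string
	Stats             struct {
		MaxEvents int
		Workers   int
	}
}

func parseConfig(raw []byte) (daemonConfig, error) {
	var c daemonConfig
	err := json.Unmarshal(raw, &c)
	return c, err
}

func main() {
	raw := []byte(`{"DefaultAction":"allow","ProcMonitorMethod":"ebpf","Stats":{"MaxEvents":150,"Workers":6}}`)
	c, err := parseConfig(raw)
	fmt.Printf("%+v %v\n", c, err)
}
```

Go's JSON decoder matches keys case-insensitively, so the exported field names line up with the file's keys without struct tags.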
// daemon/dns/ebpfhook.go
package dns
import (
"bytes"
"debug/elf"
"encoding/binary"
"errors"
"fmt"
"net"
"os"
"os/signal"
"strings"
"syscall"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
bpf "github.com/iovisor/gobpf/elf"
)
/*
#cgo LDFLAGS: -ldl
#define _GNU_SOURCE
#include <dlfcn.h>
#include <link.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char* find_libc() {
void *handle;
struct link_map * map;
handle = dlopen(NULL, RTLD_NOW);
if (handle == NULL) {
fprintf(stderr, "EBPF-DNS dlopen() failed: %s\n", dlerror());
return NULL;
}
if (dlinfo(handle, RTLD_DI_LINKMAP, &map) == -1) {
fprintf(stderr, "EBPF-DNS: dlinfo failed: %s\n", dlerror());
return NULL;
}
while(1){
if(map == NULL){
break;
}
if(strstr(map->l_name, "libc.so")){
fprintf(stderr,"found %s\n", map->l_name);
return map->l_name;
}
map = map->l_next;
}
return NULL;
}
*/
import "C"
type nameLookupEvent struct {
AddrType uint32
IP [16]uint8
Host [252]byte
}
func findLibc() (string, error) {
ret := C.find_libc()
if ret == nil {
return "", errors.New("Could not find path to libc.so")
}
str := C.GoString(ret)
return str, nil
}
// Iterates over all symbols in an elf file and returns the offset matching the provided symbol name.
func lookupSymbol(elffile *elf.File, symbolName string) (uint64, error) {
symbols, err := elffile.DynamicSymbols()
if err != nil {
return 0, err
}
for _, symb := range symbols {
if symb.Name == symbolName {
return symb.Value, nil
}
}
return 0, fmt.Errorf("Symbol: '%s' not found", symbolName)
}
// ListenerEbpf starts listening for DNS events.
func ListenerEbpf(ebpfModPath string) error {
m, err := core.LoadEbpfModule("opensnitch-dns.o", ebpfModPath)
if err != nil {
log.Error("[eBPF DNS]: %s", err)
return err
}
defer m.Close()
// libbcc resolves the offsets for us. Without bcc, the offsets for uprobes must be parsed from the ELF files:
// somehow 0 must be replaced with the offset of getaddrinfo; bcc does this using bcc_resolve_symname.
// Attaching to the uprobe using perf_event_open might be a better approach; it requires https://github.com/iovisor/gobpf/pull/277
libcFile, err := findLibc()
if err != nil {
log.Error("EBPF-DNS: Failed to find libc.so: %v", err)
return err
}
libcElf, err := elf.Open(libcFile)
if err != nil {
log.Error("EBPF-DNS: Failed to open %s: %v", libcFile, err)
return err
}
probesAttached := 0
for uprobe := range m.IterUprobes() {
probeFunction := strings.Replace(uprobe.Name, "uretprobe/", "", 1)
probeFunction = strings.Replace(probeFunction, "uprobe/", "", 1)
offset, err := lookupSymbol(libcElf, probeFunction)
if err != nil {
log.Warning("EBPF-DNS: Failed to find symbol for uprobe %s (offset: %d): %s\n", uprobe.Name, offset, err)
continue
}
err = bpf.AttachUprobe(uprobe, libcFile, offset)
if err != nil {
log.Warning("EBPF-DNS: Failed to attach uprobe %s : %s, (%s, %d)\n", uprobe.Name, err, libcFile, offset)
continue
}
probesAttached++
}
if probesAttached == 0 {
log.Warning("EBPF-DNS: Failed to find symbols for uprobes.")
return errors.New("Failed to find symbols for uprobes")
}
// Reading Events
channel := make(chan []byte)
//log.Warning("EBPF-DNS: %+v\n", m)
perfMap, err := bpf.InitPerfMap(m, "events", channel, nil)
if err != nil {
log.Error("EBPF-DNS: Failed to init perf map: %s\n", err)
return err
}
sig := make(chan os.Signal, 1)
exitChannel := make(chan bool)
signal.Notify(sig,
syscall.SIGHUP,
syscall.SIGINT,
syscall.SIGTERM,
syscall.SIGKILL,
syscall.SIGQUIT)
for i := 0; i < 5; i++ {
go spawnDNSWorker(i, channel, exitChannel)
}
perfMap.PollStart()
<-sig
log.Info("EBPF-DNS: Received signal: terminating ebpf dns hook.")
perfMap.PollStop()
for i := 0; i < 5; i++ {
exitChannel <- true
}
return nil
}
func spawnDNSWorker(id int, channel chan []byte, exitChannel chan bool) {
log.Debug("dns worker initialized #%d", id)
var event nameLookupEvent
for {
select {
case <-time.After(1 * time.Millisecond):
continue
case <-exitChannel:
goto Exit
default:
data := <-channel
if len(data) > 0 {
log.Debug("(%d) EBPF-DNS: LookupEvent %d %x %x %x", id, len(data), data[:4], data[4:20], data[20:])
}
err := binary.Read(bytes.NewBuffer(data), binary.LittleEndian, &event)
if err != nil {
log.Warning("(%d) EBPF-DNS: Failed to decode ebpf nameLookupEvent: %s\n", id, err)
continue
}
// Convert C string (null-terminated) to Go string
host := string(event.Host[:bytes.IndexByte(event.Host[:], 0)])
var ip net.IP
// 2 -> AF_INET (ipv4)
if event.AddrType == 2 {
ip = net.IP(event.IP[:4])
} else {
ip = net.IP(event.IP[:])
}
log.Debug("(%d) EBPF-DNS: Tracking Resolved Message: %s -> %s\n", id, host, ip.String())
Track(ip.String(), host)
}
}
Exit:
log.Debug("DNS worker #%d closed", id)
}
// daemon/dns/parse.go
package dns
import (
"github.com/evilsocket/opensnitch/daemon/netfilter"
"github.com/google/gopacket/layers"
)
// GetQuestions retrieves the domain names a process is trying to resolve.
func GetQuestions(nfp *netfilter.Packet) (questions []string) {
dnsLayer := nfp.Packet.Layer(layers.LayerTypeDNS)
if dnsLayer == nil {
return questions
}
dns, _ := dnsLayer.(*layers.DNS)
for _, dnsQuestion := range dns.Questions {
questions = append(questions, string(dnsQuestion.Name))
}
return questions
}
// daemon/dns/systemd/monitor.go
// Package systemd defines several utilities to interact with systemd.
//
// ResolvedMonitor:
// * To debug systemd-resolved queries and inspect the protocol:
// - resolvectl monitor
// * Resources:
// - https://github.com/systemd/systemd/blob/main/src/resolve/resolvectl.c
// - The protocol used to send and receive data is varlink:
// https://github.com/varlink/go
// https://github.com/systemd/systemd/blob/main/src/resolve/resolved-varlink.c
// - https://systemd.io/RESOLVED-VPNS/
package systemd
import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/varlink/go/varlink"
)
// resolvedCallback is invoked whenever there's a new DNS response.
// The second parameter is a MonitorResponse struct that will be filled
// with data.
type resolvedCallback func(context.Context, interface{}) (uint64, error)
const (
// SuccessState is the string returned by systemd-resolved when a DNS query is successful.
// Other states: https://github.com/systemd/systemd/blob/main/src/resolve/resolved-dns-transaction.c#L3608
SuccessState = "success"
socketPath = "/run/systemd/resolve/io.systemd.Resolve.Monitor"
resolvedSubscribeMethod = "io.systemd.Resolve.Monitor.SubscribeQueryResults"
// DNSTypeA A
DNSTypeA = 1
// DNSTypeAAAA AAAA
DNSTypeAAAA = 28
// DNSTypeCNAME cname
DNSTypeCNAME = 5
)
// QuestionMonitorResponse represents a DNS query
// "question": [{"class": 1, "type": 28,"name": "images.site.com"}],
type QuestionMonitorResponse struct {
Name string `json:"name"`
Class int `json:"class"`
Type int `json:"type"`
}
// KeyType holds question that generated the answer
/*answer: [{
"rr": {
"key": {
"class": 1,
"type": 28,
"name": "images.site.com"
},
"address": [100, 13, 45, 111]
},
"raw": "DFJFKE343443EFKEREKET=",
"ifindex": 3
}]*/
type KeyType struct {
Name string `json:"name"`
Class int `json:"class"`
Type int `json:"type"`
}
// RRType represents a DNS answer
// if the response is a CNAME, Address will be nil, and Name a domain name.
type RRType struct {
Key QuestionMonitorResponse `json:"key"`
Address []byte `json:"address"`
Name string `json:"name"`
}
// AnswerMonitorResponse represents the DNS answer of a DNS query.
type AnswerMonitorResponse struct {
RR RRType `json:"rr"`
Raw string `json:"raw"`
Ifindex int `json:"ifindex"`
}
// MonitorResponse represents the systemd-resolved protocol message
// sent over the wire, that holds the answer to a DNS query.
type MonitorResponse struct {
State string `json:"state"`
Question []QuestionMonitorResponse `json:"question"`
// CollectedQuestions
// "collectedQuestions":[{"class":1,"type":1,"name":"translate.google.com"}]
Answer []AnswerMonitorResponse `json:"answer"`
Continues bool `json:"continues"`
}
// ResolvedMonitor represents a systemd-resolved monitor
type ResolvedMonitor struct {
mu *sync.RWMutex
Ctx context.Context
Cancel context.CancelFunc
// connection with the systemd-resolved unix socket:
// /run/systemd/resolve/io.systemd.Resolve.Monitor
Conn *varlink.Connection
// channel where all the DNS responses will be sent
ChanResponse chan *MonitorResponse
// error channel to signal any problem
ChanConnError chan error
// callback invoked when systemd-resolved resolves a domain name.
receiverCb resolvedCallback
connected bool
}
// NewResolvedMonitor returns a new ResolvedMonitor object.
// With this object you can passively read DNS answers.
func NewResolvedMonitor() (*ResolvedMonitor, error) {
if core.Exists(socketPath) == false {
return nil, fmt.Errorf("%s doesn't exist", socketPath)
}
ctx, cancel := context.WithCancel(context.Background())
return &ResolvedMonitor{
mu: &sync.RWMutex{},
Ctx: ctx,
Cancel: cancel,
ChanResponse: make(chan *MonitorResponse),
ChanConnError: make(chan error),
}, nil
}
// Connect opens a unix socket with systemd-resolved
func (r *ResolvedMonitor) Connect() (*varlink.Connection, error) {
r.mu.Lock()
defer r.mu.Unlock()
var err error
r.Conn, err = varlink.NewConnection(r.Ctx, fmt.Sprintf("unix://%s", socketPath))
if err != nil {
return nil, err
}
r.connected = true
go r.connPoller()
return r.Conn, nil
}
// if we're connected to the unix socket, check every few seconds if we're still
// connected, and if not, reconnect, to survive to systemd-resolved restarts.
func (r *ResolvedMonitor) connPoller() {
for {
select {
case <-time.After(5 * time.Second):
if r.isConnected() {
continue
}
log.Debug("ResolvedMonitor not connected")
if _, err := r.Connect(); err == nil {
r.Subscribe()
}
goto Exit
}
}
Exit:
log.Debug("ResolvedMonitor connection poller exit.")
}
// Subscribe sends the instruction to systemd-resolved to start monitoring
// DNS answers.
func (r *ResolvedMonitor) Subscribe() error {
if r.isConnected() == false {
return errors.New("Not connected")
}
var err error
type emptyT struct{}
empty := &emptyT{}
r.receiverCb, err = r.Conn.Send(r.Ctx, resolvedSubscribeMethod, empty, varlink.Continues|varlink.More)
if err != nil {
return err
}
go r.monitor(r.Ctx, r.ChanResponse, r.ChanConnError, r.receiverCb)
return nil
}
// monitor will listen for DNS answers from systemd-resolved.
func (r *ResolvedMonitor) monitor(ctx context.Context, chanResponse chan *MonitorResponse, chanConnError chan error, callback resolvedCallback) {
for {
m := &MonitorResponse{}
continues, err := callback(ctx, m)
if err != nil {
chanConnError <- err
goto Exit
}
if continues != varlink.Continues {
goto Exit
}
log.Debug("ResolvedMonitor >> new response: %#v", m)
chanResponse <- m
}
Exit:
r.mu.Lock()
r.connected = false
r.mu.Unlock()
log.Debug("ResolvedMonitor.monitor() exit.")
}
// GetDNSResponses returns a channel that you can use to read responses.
func (r *ResolvedMonitor) GetDNSResponses() chan *MonitorResponse {
return r.ChanResponse
}
// Exit returns a channel to listen for connection errors.
func (r *ResolvedMonitor) Exit() chan error {
return r.ChanConnError
}
// Close closes the unix socket with systemd-resolved
func (r *ResolvedMonitor) Close() {
r.ChanConnError <- nil
r.Cancel()
}
func (r *ResolvedMonitor) isConnected() bool {
r.mu.RLock()
defer r.mu.RUnlock()
return r.connected
}
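The varlink payload documented in the comments above is plain JSON, so a trimmed struct is enough to pull name/address pairs out of a message. This sketch is illustrative (`monitorMsg`/`parseAnswers` are not daemon names); the address is declared as `[]int` here because Go's `encoding/json` would otherwise expect a base64 string for a `[]byte` field, while the wire format shown is a JSON number array.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// monitorMsg is a trimmed mirror of the resolved monitor message.
type monitorMsg struct {
	State  string `json:"state"`
	Answer []struct {
		RR struct {
			Key struct {
				Name string `json:"name"`
			} `json:"key"`
			Address []int `json:"address"`
		} `json:"rr"`
	} `json:"answer"`
}

// parseAnswers extracts domain -> IP pairs from a raw monitor message.
func parseAnswers(raw []byte) (map[string]string, error) {
	var m monitorMsg
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	out := make(map[string]string)
	for _, a := range m.Answer {
		ip := make(net.IP, len(a.RR.Address))
		for i, b := range a.RR.Address {
			ip[i] = byte(b)
		}
		out[a.RR.Key.Name] = ip.String()
	}
	return out, nil
}

func main() {
	raw := []byte(`{"state":"success","answer":[{"rr":{"key":{"name":"images.site.com"},"address":[100,13,45,111]}}]}`)
	res, err := parseAnswers(raw)
	fmt.Println(res, err)
}
```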
// daemon/dns/track.go
package dns
import (
"net"
"sync"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
)
var (
responses = make(map[string]string, 0)
lock = sync.RWMutex{}
)
// TrackAnswers obtains the resolved domains of a DNS query.
// If the packet is UDP DNS, the domain names are added to the list of resolved domains.
func TrackAnswers(packet gopacket.Packet) bool {
udpLayer := packet.Layer(layers.LayerTypeUDP)
if udpLayer == nil {
return false
}
udp, ok := udpLayer.(*layers.UDP)
if ok == false || udp == nil {
return false
}
if udp.SrcPort != 53 {
return false
}
dnsLayer := packet.Layer(layers.LayerTypeDNS)
if dnsLayer == nil {
return false
}
dnsAns, ok := dnsLayer.(*layers.DNS)
if ok == false || dnsAns == nil {
return false
}
for _, ans := range dnsAns.Answers {
if ans.Name != nil {
if ans.IP != nil {
Track(ans.IP.String(), string(ans.Name))
} else if ans.CNAME != nil {
Track(string(ans.CNAME), string(ans.Name))
}
}
}
return true
}
// Track adds a resolved domain to the list.
func Track(resolved string, hostname string) {
lock.Lock()
defer lock.Unlock()
if len(resolved) > 3 && resolved[0:4] == "127." {
return
}
if resolved == "::1" || resolved == hostname {
return
}
responses[resolved] = hostname
log.Debug("New DNS record: %s -> %s", resolved, hostname)
}
// Host returns if a resolved domain is in the list.
func Host(resolved string) (host string, found bool) {
lock.RLock()
defer lock.RUnlock()
host, found = responses[resolved]
return
}
// HostOr checks if an IP has a domain name already resolved.
// If the domain is in the list it's returned, otherwise the IP will be returned.
func HostOr(ip net.IP, or string) string {
if host, found := Host(ip.String()); found == true {
// host might have been CNAME; go back until we reach the "root"
seen := make(map[string]bool) // prevent possibility of loops
for {
orig, had := Host(host)
if seen[orig] {
break
}
if !had {
break
}
seen[orig] = true
host = orig
}
return host
}
return or
}
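The CNAME-chasing loop in HostOr can be isolated from the package state: starting from a resolved entry, follow the chain backwards until a name has no further mapping, with a `seen` set to guard against loops. `resolveChain` is an illustrative stand-alone version of that loop.

```go
package main

import "fmt"

// resolveChain follows CNAME-style entries back to the original name,
// with loop protection, mirroring the loop in HostOr.
func resolveChain(responses map[string]string, start string) string {
	host, found := responses[start]
	if !found {
		return start // unknown entries come back unchanged, like HostOr's "or"
	}
	seen := make(map[string]bool)
	for {
		orig, had := responses[host]
		if !had || seen[orig] {
			break
		}
		seen[orig] = true
		host = orig
	}
	return host
}

func main() {
	responses := map[string]string{
		"93.184.216.34":   "cdn.example.net", // IP -> CNAME
		"cdn.example.net": "www.example.com", // CNAME -> original name
	}
	fmt.Println(resolveChain(responses, "93.184.216.34")) // www.example.com
	fmt.Println(resolveChain(responses, "10.0.0.1"))      // unknown: returned as-is
}
```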
// daemon/firewall/common/common.go
package common
import (
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/log"
)
// default arguments for various functions
var (
EnableRule = true
DoLogErrors = true
ForcedDelRules = true
ReloadRules = true
RestoreChains = true
BackupChains = true
ReloadConf = true
)
type (
callback func()
callbackBool func() bool
// Common holds common fields and functionality of both firewalls,
// iptables and nftables.
Common struct {
RulesChecker *time.Ticker
ErrChan chan string
QueueNum uint16
stopChecker chan bool
Running bool
Intercepting bool
FwEnabled bool
sync.RWMutex
}
)
// ErrorsChan returns the channel where the errors are sent to.
func (c *Common) ErrorsChan() <-chan string {
return c.ErrChan
}
// ErrChanEmpty checks if the errors channel is empty.
func (c *Common) ErrChanEmpty() bool {
return len(c.ErrChan) == 0
}
// SendError sends an error to the channel of errors.
func (c *Common) SendError(err string) {
log.Warning("%s", err)
if len(c.ErrChan) >= cap(c.ErrChan) {
log.Debug("fw errors channel full, emptying errChan")
for e := range c.ErrChan {
log.Warning("%s", e)
if c.ErrChanEmpty() {
break
}
}
return
}
select {
case c.ErrChan <- err:
case <-time.After(100 * time.Millisecond):
log.Warning("SendError() channel locked? REVIEW")
}
}
// SetQueueNum sets the queue number used by the firewall.
// It's the queue where all intercepted connections will be sent.
func (c *Common) SetQueueNum(qNum *int) {
c.Lock()
defer c.Unlock()
if qNum != nil {
c.QueueNum = uint16(*qNum)
}
}
// IsRunning returns whether the firewall is running.
func (c *Common) IsRunning() bool {
c.RLock()
defer c.RUnlock()
return c != nil && c.Running
}
// IsFirewallEnabled returns whether the firewall is enabled.
func (c *Common) IsFirewallEnabled() bool {
c.RLock()
defer c.RUnlock()
return c != nil && c.FwEnabled
}
// IsIntercepting returns whether the firewall is intercepting connections.
func (c *Common) IsIntercepting() bool {
c.RLock()
defer c.RUnlock()
return c != nil && c.Intercepting
}
// NewRulesChecker starts monitoring interception rules.
// We expect to have 2 rules loaded: one to intercept DNS responses and another one
// to intercept network traffic.
func (c *Common) NewRulesChecker(areRulesLoaded callbackBool, reloadRules callback) {
c.Lock()
defer c.Unlock()
if c.RulesChecker != nil {
c.RulesChecker.Stop()
select {
case c.stopChecker <- true:
case <-time.After(5 * time.Millisecond):
log.Error("NewRulesChecker: timed out stopping monitor rules")
}
}
c.stopChecker = make(chan bool, 1)
c.RulesChecker = time.NewTicker(time.Second * 10)
go startCheckingRules(c.stopChecker, c.RulesChecker, areRulesLoaded, reloadRules)
}
// StartCheckingRules monitors if our rules are loaded.
// If the rules to intercept traffic are not loaded, we'll try to insert them again.
func startCheckingRules(exitChan <-chan bool, rulesChecker *time.Ticker, areRulesLoaded callbackBool, reloadRules callback) {
for {
select {
case <-exitChan:
goto Exit
case _, active := <-rulesChecker.C:
if !active {
goto Exit
}
if areRulesLoaded() == false {
reloadRules()
}
}
}
Exit:
log.Info("exit checking firewall rules")
}
// StopCheckingRules stops checking if firewall rules are loaded.
func (c *Common) StopCheckingRules() {
c.Lock()
defer c.Unlock()
if c.RulesChecker != nil {
select {
case c.stopChecker <- true:
close(c.stopChecker)
case <-time.After(5 * time.Millisecond):
// We should not arrive here
log.Error("StopCheckingRules: timed out stopping monitor rules")
}
c.RulesChecker.Stop()
c.RulesChecker = nil
}
}
func (c *Common) reloadCallback(callback func()) {
callback()
}
// daemon/firewall/config/config.go
// Package config provides functionality to load and monitor the system
// firewall rules.
// It's inherited by the different firewall packages (iptables, nftables).
//
// The firewall rules defined by the user are reloaded in these cases:
// - When the file system-fw.json changes.
// - When the firewall rules are not present when listing them.
package config
import (
"encoding/json"
"io/ioutil"
"os"
"sync"
"github.com/evilsocket/opensnitch/daemon/firewall/common"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/fsnotify/fsnotify"
)
// ExprValues holds the statements' options:
// "Name": "ct",
// "Values": [
// {
// "Key": "state",
// "Value": "established"
// },
// {
// "Key": "state",
// "Value": "related"
// }]
type ExprValues struct {
Key string
Value string
}
// ExprStatement holds the definition of matches to use against connections.
//{
// "Op": "!=",
// "Name": "tcp",
// "Values": [
// {
// "Key": "dport",
// "Value": "443"
// }
// ]
//}
type ExprStatement struct {
Op string // ==, !=, ... Only one per expression set.
Name string // tcp, udp, ct, daddr, log, ...
Values []*ExprValues // dport 8000
}
// Expressions holds the array of expressions that create the rules
type Expressions struct {
Statement *ExprStatement
}
// FwRule holds the fields of a rule
type FwRule struct {
*sync.RWMutex
// we need to keep old fields in the struct. Otherwise when receiving a conf from the GUI, the legacy rules would be deleted.
Chain string // TODO: deprecated, remove
Table string // TODO: deprecated, remove
Parameters string // TODO: deprecated, remove
UUID string
Description string
Target string
TargetParameters string
Expressions []*Expressions
Position uint64 `json:",string"`
Enabled bool
}
// FwChain holds the information that defines a firewall chain.
// It also contains the firewall table definition that it belongs to.
type FwChain struct {
// table fields
Table string
Family string
// chain fields
Name string
Description string
Priority string
Type string
Hook string
Policy string
Rules []*FwRule
}
// IsInvalid checks if the chain has been correctly configured.
func (fc *FwChain) IsInvalid() bool {
return fc.Name == "" || fc.Family == "" || fc.Table == ""
}
type rulesList struct {
Rule *FwRule
}
type chainsList struct {
Rule *FwRule // TODO: deprecated, remove
Chains []*FwChain
}
// SystemConfig holds the list of rules to be added to the system
type SystemConfig struct {
SystemRules []*chainsList
sync.RWMutex
Version uint32
Enabled bool
}
// Config holds the functionality to re/load the firewall configuration from disk.
// This is the configuration to manage the system firewall (iptables, nftables).
type Config struct {
watcher *fsnotify.Watcher
monitorExitChan chan bool
// preloadCallback is called after daemon startup, whilst reloadCallback is called when a modification is performed.
preloadCallback func()
// reloadCallback is called after the configuration is written.
reloadCallback func()
file string
SysConfig SystemConfig
sync.Mutex
}
// NewSystemFwConfig initializes config fields
func (c *Config) NewSystemFwConfig(preLoadCb, reLoadCb func()) (*Config, error) {
var err error
watcher, err := fsnotify.NewWatcher()
if err != nil {
log.Warning("Error creating firewall config watcher: %s", err)
return nil, err
}
c.Lock()
defer c.Unlock()
c.file = "/etc/opensnitchd/system-fw.json"
c.monitorExitChan = make(chan bool, 1)
c.preloadCallback = preLoadCb
c.reloadCallback = reLoadCb
c.watcher = watcher
return c, nil
}
// SetFile sets the path of the firewall configuration file.
func (c *Config) SetFile(file string) {
c.file = file
}
// LoadDiskConfiguration reads and loads the firewall configuration from disk
func (c *Config) LoadDiskConfiguration(reload bool) error {
c.Lock()
defer c.Unlock()
raw, err := ioutil.ReadFile(c.file)
if err != nil {
log.Error("Error reading firewall configuration from disk %s: %s", c.file, err)
return err
}
if err = c.loadConfiguration(raw); err != nil {
return err
}
// we need to monitor the configuration file for changes, regardless if it's
// malformed or not.
c.watcher.Remove(c.file)
if err := c.watcher.Add(c.file); err != nil {
log.Error("Could not watch firewall configuration: %s", err)
return err
}
if reload {
c.reloadCallback()
return nil
}
go c.monitorConfigWorker()
return nil
}
// loadConfiguration reads the system firewall rules from disk.
// Then the rules are added based on the configuration defined.
func (c *Config) loadConfiguration(rawConfig []byte) error {
c.SysConfig.Lock()
defer c.SysConfig.Unlock()
// delete old system rules, that may be different from the new ones
c.preloadCallback()
if err := json.Unmarshal(rawConfig, &c.SysConfig); err != nil {
// we only log the parser error, giving the user a chance to write a valid config
log.Error("Error parsing firewall configuration %s: %s", c.file, err)
return err
}
log.Info("fw configuration loaded")
return nil
}
// SaveConfiguration saves configuration to disk.
// This event dispatches a reload of the configuration.
func (c *Config) SaveConfiguration(rawConfig string) error {
conf, err := json.MarshalIndent([]byte(rawConfig), " ", " ")
if err != nil {
log.Error("saving json firewall configuration: %s %s", err, conf)
return err
}
if err = os.Chmod(c.file, 0600); err != nil {
log.Warning("unable to set system-fw.json permissions: %s", err)
}
if err = ioutil.WriteFile(c.file, []byte(rawConfig), 0600); err != nil {
log.Error("writing firewall configuration to disk: %s", err)
return err
}
return nil
}
// StopConfigWatcher stops the configuration watcher and stops the subroutine.
func (c *Config) StopConfigWatcher() {
c.Lock()
defer c.Unlock()
if c.monitorExitChan != nil {
c.monitorExitChan <- true
close(c.monitorExitChan)
}
if c.watcher != nil {
c.watcher.Remove(c.file)
c.watcher.Close()
}
}
func (c *Config) monitorConfigWorker() {
for {
select {
case <-c.monitorExitChan:
goto Exit
case event := <-c.watcher.Events:
if (event.Op&fsnotify.Write == fsnotify.Write) || (event.Op&fsnotify.Remove == fsnotify.Remove) {
c.LoadDiskConfiguration(common.ReloadConf)
}
}
}
Exit:
log.Debug("stop monitoring firewall config file")
c.Lock()
c.monitorExitChan = nil
c.Unlock()
}
// daemon/firewall/iptables/iptables.go
package iptables
import (
"bytes"
"encoding/json"
"os/exec"
"regexp"
"strings"
"sync"
"github.com/evilsocket/opensnitch/daemon/firewall/common"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
"github.com/golang/protobuf/jsonpb"
)
// Action is the modifier we apply to a rule.
type Action string
const (
// Name is the name that identifies this firewall
Name = "iptables"
// SystemRulePrefix prefix added to each system rule
SystemRulePrefix = "opensnitch-filter"
)
// Actions we apply to the firewall.
const (
ADD = Action("-A")
INSERT = Action("-I")
DELETE = Action("-D")
FLUSH = Action("-F")
NEWCHAIN = Action("-N")
DELCHAIN = Action("-X")
POLICY = Action("-P")
DROP = Action("DROP")
ACCEPT = Action("ACCEPT")
)
// SystemRule holds a loaded system firewall rule, along with the table and chain it belongs to.
type SystemRule struct {
Rule *config.FwRule
Table string
Chain string
}
// SystemChains keeps track of the fw rules that have been added to the system.
type SystemChains struct {
Rules map[string]*SystemRule
sync.RWMutex
}
// Iptables struct holds the fields of the iptables fw
type Iptables struct {
regexRulesQuery *regexp.Regexp
regexSystemRulesQuery *regexp.Regexp
bin string
bin6 string
chains SystemChains
common.Common
config.Config
sync.Mutex
}
// Fw initializes a new Iptables object
func Fw() (*Iptables, error) {
if err := IsAvailable(); err != nil {
return nil, err
}
reRulesQuery := regexp.MustCompile(`NFQUEUE.*ctstate NEW,RELATED.*NFQUEUE num.*bypass`)
reSystemRulesQuery := regexp.MustCompile(SystemRulePrefix + ".*")
ipt := &Iptables{
bin: "iptables",
bin6: "ip6tables",
regexRulesQuery: reRulesQuery,
regexSystemRulesQuery: reSystemRulesQuery,
chains: SystemChains{
Rules: make(map[string]*SystemRule),
},
}
return ipt, nil
}
// Name returns the firewall name
func (ipt *Iptables) Name() string {
return Name
}
// Init inserts the firewall rules and starts monitoring for firewall
// changes.
func (ipt *Iptables) Init(qNum *int) {
if ipt.IsRunning() {
return
}
ipt.SetQueueNum(qNum)
ipt.ErrChan = make(chan string, 100)
// In order to clean up any existing firewall rule before start,
// we need to load the fw configuration first to know what rules
// were configured.
ipt.NewSystemFwConfig(ipt.preloadConfCallback, ipt.reloadRulesCallback)
ipt.LoadDiskConfiguration(!common.ReloadConf)
// start from a clean state
ipt.CleanRules(false)
ipt.EnableInterception()
ipt.AddSystemRules(!common.ReloadRules, common.BackupChains)
ipt.Running = true
}
// Stop deletes the firewall rules, allowing network traffic.
func (ipt *Iptables) Stop() {
if !ipt.Running {
return
}
ipt.StopConfigWatcher()
ipt.StopCheckingRules()
ipt.CleanRules(log.GetLogLevel() == log.DEBUG)
ipt.Running = false
}
// IsAvailable checks if iptables is installed in the system.
// If it's not, we'll default to nftables.
func IsAvailable() error {
_, err := exec.Command("iptables", "-V").CombinedOutput()
return err
}
// EnableInterception adds fw rules to intercept connections.
func (ipt *Iptables) EnableInterception() {
if err4, err6 := ipt.QueueConnections(common.EnableRule, true); err4 != nil || err6 != nil {
log.Fatal("Error while running conntrack firewall rule: %s %s", err4, err6)
} else if err4, err6 = ipt.QueueDNSResponses(common.EnableRule, true); err4 != nil || err6 != nil {
log.Error("Error while running DNS firewall rule: %s %s", err4, err6)
}
// start monitoring firewall rules to intercept network traffic
ipt.NewRulesChecker(ipt.AreRulesLoaded, ipt.reloadRulesCallback)
}
// DisableInterception removes firewall rules to intercept outbound connections.
func (ipt *Iptables) DisableInterception(logErrors bool) {
ipt.StopCheckingRules()
ipt.QueueDNSResponses(!common.EnableRule, logErrors)
ipt.QueueConnections(!common.EnableRule, logErrors)
}
// CleanRules deletes the rules we added.
func (ipt *Iptables) CleanRules(logErrors bool) {
ipt.DisableInterception(logErrors)
ipt.DeleteSystemRules(common.ForcedDelRules, common.BackupChains, logErrors)
}
// Serialize converts the configuration from json to protobuf
func (ipt *Iptables) Serialize() (*protocol.SysFirewall, error) {
sysfw := &protocol.SysFirewall{}
jun := jsonpb.Unmarshaler{
AllowUnknownFields: true,
}
rawConfig, err := json.Marshal(&ipt.SysConfig)
if err != nil {
log.Error("iptables.Serialize() struct to string error: %s", err)
return nil, err
}
// string to proto
if err := jun.Unmarshal(strings.NewReader(string(rawConfig)), sysfw); err != nil {
log.Error("iptables.Serialize() string to protobuf error: %s", err)
return nil, err
}
return sysfw, nil
}
// Deserialize converts a protobuf structure to json.
func (ipt *Iptables) Deserialize(sysfw *protocol.SysFirewall) ([]byte, error) {
jun := jsonpb.Marshaler{
OrigName: true,
EmitDefaults: false,
Indent: " ",
}
var b bytes.Buffer
if err := jun.Marshal(&b, sysfw); err != nil {
log.Error("iptables.Deserialize() error: %s", err)
return nil, err
}
return b.Bytes(), nil
}
// opensnitch-1.6.9/daemon/firewall/iptables/monitor.go
package iptables
import (
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/firewall/common"
"github.com/evilsocket/opensnitch/daemon/log"
)
// AreRulesLoaded checks if the firewall rules for intercept traffic are loaded.
func (ipt *Iptables) AreRulesLoaded() bool {
var outMangle6 string
outMangle, err := core.Exec("iptables", []string{"-n", "-L", "OUTPUT", "-t", "mangle"})
if err != nil {
return false
}
if core.IPv6Enabled {
outMangle6, err = core.Exec("ip6tables", []string{"-n", "-L", "OUTPUT", "-t", "mangle"})
if err != nil {
return false
}
}
systemRulesLoaded := true
ipt.chains.RLock()
if len(ipt.chains.Rules) > 0 {
for _, rule := range ipt.chains.Rules {
if chainOut4, err4 := core.Exec("iptables", []string{"-n", "-L", rule.Chain, "-t", rule.Table}); err4 == nil {
if ipt.regexSystemRulesQuery.FindString(chainOut4) == "" {
systemRulesLoaded = false
break
}
}
if core.IPv6Enabled {
if chainOut6, err6 := core.Exec("ip6tables", []string{"-n", "-L", rule.Chain, "-t", rule.Table}); err6 == nil {
if ipt.regexSystemRulesQuery.FindString(chainOut6) == "" {
systemRulesLoaded = false
break
}
}
}
}
}
ipt.chains.RUnlock()
result := ipt.regexRulesQuery.FindString(outMangle) != "" &&
systemRulesLoaded
if core.IPv6Enabled {
result = result && ipt.regexRulesQuery.FindString(outMangle6) != ""
}
return result
}
// reloadRulesCallback gets called when the interception rules are not present or after the configuration file changes.
func (ipt *Iptables) reloadRulesCallback() {
log.Important("firewall rules changed, reloading")
ipt.CleanRules(false)
ipt.AddSystemRules(common.ReloadRules, common.BackupChains)
ipt.EnableInterception()
}
// preloadConfCallback gets called before the fw configuration is reloaded
func (ipt *Iptables) preloadConfCallback() {
log.Info("iptables config changed, reloading")
ipt.DeleteSystemRules(common.ForcedDelRules, common.BackupChains, log.GetLogLevel() == log.DEBUG)
}
// opensnitch-1.6.9/daemon/firewall/iptables/rules.go
package iptables
import (
"fmt"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/vishvananda/netlink"
)
// RunRule inserts or deletes a firewall rule.
func (ipt *Iptables) RunRule(action Action, enable bool, logError bool, rule []string) (err4, err6 error) {
if !enable {
action = DELETE
}
rule = append([]string{string(action)}, rule...)
ipt.Lock()
defer ipt.Unlock()
if _, err4 = core.Exec(ipt.bin, rule); err4 != nil {
if logError {
log.Error("Error while running firewall rule, ipv4 err: %s", err4)
log.Error("rule: %s", rule)
}
}
// On some systems IPv6 is disabled
if core.IPv6Enabled {
if _, err6 = core.Exec(ipt.bin6, rule); err6 != nil {
if logError {
log.Error("Error while running firewall rule, ipv6 err: %s", err6)
log.Error("rule: %s", rule)
}
}
}
return
}
// QueueDNSResponses redirects DNS responses to us, in order to keep a cache
// of resolved domains.
// INPUT --protocol udp --sport 53 -j NFQUEUE --queue-num 0 --queue-bypass
func (ipt *Iptables) QueueDNSResponses(enable bool, logError bool) (err4, err6 error) {
return ipt.RunRule(INSERT, enable, logError, []string{
"INPUT",
"--protocol", "udp",
"--sport", "53",
"-j", "NFQUEUE",
"--queue-num", fmt.Sprintf("%d", ipt.QueueNum),
"--queue-bypass",
})
}
// QueueConnections inserts the firewall rule which redirects connections to us.
// Connections are queued until the user denies/accept them, or reaches a timeout.
// OUTPUT -t mangle -m conntrack --ctstate NEW,RELATED -j NFQUEUE --queue-num 0 --queue-bypass
func (ipt *Iptables) QueueConnections(enable bool, logError bool) (error, error) {
err4, err6 := ipt.RunRule(ADD, enable, logError, []string{
"OUTPUT",
"-t", "mangle",
"-m", "conntrack",
"--ctstate", "NEW,RELATED",
"-j", "NFQUEUE",
"--queue-num", fmt.Sprintf("%d", ipt.QueueNum),
"--queue-bypass",
})
if enable {
// flush conntrack as soon as netfilter rule is set. This ensures that already-established
// connections will go to netfilter queue.
if err := netlink.ConntrackTableFlush(netlink.ConntrackTable); err != nil {
log.Error("error in ConntrackTableFlush %s", err)
}
}
return err4, err6
}
// opensnitch-1.6.9/daemon/firewall/iptables/system.go
package iptables
import (
"strings"
"github.com/evilsocket/opensnitch/daemon/firewall/common"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
)
// CreateSystemRule creates the custom firewall chains and adds them to the system.
func (ipt *Iptables) CreateSystemRule(rule *config.FwRule, table, chain, hook string, logErrors bool) bool {
ipt.chains.Lock()
defer ipt.chains.Unlock()
if rule == nil {
return false
}
if table == "" {
table = "filter"
}
if hook == "" {
hook = rule.Chain
}
chainName := SystemRulePrefix + "-" + hook
if _, ok := ipt.chains.Rules[table+"-"+chainName]; ok {
return false
}
ipt.RunRule(NEWCHAIN, common.EnableRule, logErrors, []string{chainName, "-t", table})
// Insert the rule at the top of the chain
if err4, err6 := ipt.RunRule(INSERT, common.EnableRule, logErrors, []string{hook, "-t", table, "-j", chainName}); err4 == nil && err6 == nil {
ipt.chains.Rules[table+"-"+chainName] = &SystemRule{
Table: table,
Chain: chain,
Rule: rule,
}
}
return true
}
// AddSystemRules creates the system firewall from configuration.
func (ipt *Iptables) AddSystemRules(reload, backupExistingChains bool) {
// Version 0 has no Enabled field, so it'd always be false
if !ipt.SysConfig.Enabled && ipt.SysConfig.Version > 0 {
return
}
for _, cfg := range ipt.SysConfig.SystemRules {
if cfg.Rule != nil {
ipt.CreateSystemRule(cfg.Rule, cfg.Rule.Table, cfg.Rule.Chain, cfg.Rule.Chain, common.EnableRule)
ipt.AddSystemRule(ADD, cfg.Rule, cfg.Rule.Table, cfg.Rule.Chain, common.EnableRule)
continue
}
if cfg.Chains != nil {
for _, chn := range cfg.Chains {
if chn.Hook != "" && chn.Type != "" {
ipt.ConfigureChainPolicy(chn.Type, chn.Hook, chn.Policy, true)
}
}
}
}
}
// DeleteSystemRules deletes the system rules.
// If force is false and the rule has not been previously added,
// it won't try to delete the rules. Otherwise it'll try to delete them.
func (ipt *Iptables) DeleteSystemRules(force, backupExistingChains, logErrors bool) {
ipt.chains.Lock()
defer ipt.chains.Unlock()
for _, fwCfg := range ipt.SysConfig.SystemRules {
if fwCfg.Rule == nil {
continue
}
chain := SystemRulePrefix + "-" + fwCfg.Rule.Chain
if _, ok := ipt.chains.Rules[fwCfg.Rule.Table+"-"+chain]; !ok && !force {
continue
}
ipt.RunRule(FLUSH, common.EnableRule, false, []string{chain, "-t", fwCfg.Rule.Table})
ipt.RunRule(DELETE, !common.EnableRule, logErrors, []string{fwCfg.Rule.Chain, "-t", fwCfg.Rule.Table, "-j", chain})
ipt.RunRule(DELCHAIN, common.EnableRule, false, []string{chain, "-t", fwCfg.Rule.Table})
delete(ipt.chains.Rules, fwCfg.Rule.Table+"-"+chain)
for _, chn := range fwCfg.Chains {
if chn.Table == "" {
chn.Table = "filter"
}
chain := SystemRulePrefix + "-" + chn.Hook
if _, ok := ipt.chains.Rules[chn.Type+"-"+chain]; !ok && !force {
continue
}
ipt.RunRule(FLUSH, common.EnableRule, logErrors, []string{chain, "-t", chn.Type})
ipt.RunRule(DELETE, !common.EnableRule, logErrors, []string{chn.Hook, "-t", chn.Type, "-j", chain})
ipt.RunRule(DELCHAIN, common.EnableRule, logErrors, []string{chain, "-t", chn.Type})
delete(ipt.chains.Rules, chn.Type+"-"+chain)
}
}
}
// DeleteSystemRule deletes a rule from the system firewall.
func (ipt *Iptables) DeleteSystemRule(action Action, rule *config.FwRule, table, chain string, enable bool) (err4, err6 error) {
chainName := SystemRulePrefix + "-" + chain
if table == "" {
table = "filter"
}
r := []string{chainName, "-t", table}
if rule.Parameters != "" {
r = append(r, strings.Split(rule.Parameters, " ")...)
}
r = append(r, []string{"-j", rule.Target}...)
if rule.TargetParameters != "" {
r = append(r, strings.Split(rule.TargetParameters, " ")...)
}
return ipt.RunRule(action, enable, true, r)
}
// AddSystemRule inserts a new rule.
func (ipt *Iptables) AddSystemRule(action Action, rule *config.FwRule, table, chain string, enable bool) (err4, err6 error) {
if rule == nil {
return nil, nil
}
ipt.RLock()
defer ipt.RUnlock()
chainName := SystemRulePrefix + "-" + chain
if table == "" {
table = "filter"
}
r := []string{chainName, "-t", table}
if rule.Parameters != "" {
r = append(r, strings.Split(rule.Parameters, " ")...)
}
r = append(r, []string{"-j", rule.Target}...)
if rule.TargetParameters != "" {
r = append(r, strings.Split(rule.TargetParameters, " ")...)
}
return ipt.RunRule(action, enable, true, r)
}
// ConfigureChainPolicy configures chains policy.
func (ipt *Iptables) ConfigureChainPolicy(table, hook, policy string, logError bool) {
// TODO: list all policies before modify them, and restore the original state on exit.
// still, if we exit abruptly, we might leave the system badly configured.
ipt.RunRule(POLICY, true, logError, []string{
hook,
strings.ToUpper(policy),
"-t", table,
})
}
// opensnitch-1.6.9/daemon/firewall/nftables/chains.go
package nftables
import (
"fmt"
"strings"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables"
)
// getChainKey returns the identifier that will be used to link chains and rules.
// When adding a new chain the key is stored, then later when adding a rule we get
// the chain that the rule belongs to by this key.
func getChainKey(name string, table *nftables.Table) string {
if table == nil {
return ""
}
return fmt.Sprintf("%s-%s-%d", name, table.Name, table.Family)
}
// GetChain gets an existing chain
func GetChain(name string, table *nftables.Table) *nftables.Chain {
key := getChainKey(name, table)
if ch, ok := sysChains.Load(key); ok {
return ch.(*nftables.Chain)
}
return nil
}
// AddChain adds a new chain to nftables.
// https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks#Priority_within_hook
func (n *Nft) AddChain(name, table, family string, priority *nftables.ChainPriority, ctype nftables.ChainType, hook *nftables.ChainHook, policy nftables.ChainPolicy) *nftables.Chain {
if family == "" {
family = exprs.NFT_FAMILY_INET
}
tbl := n.GetTable(table, family)
if tbl == nil {
log.Error("%s addChain, Error getting table: %s, %s", logTag, table, family)
return nil
}
var chain *nftables.Chain
// Verify if the chain already exists, and reuse it if it does.
// In some systems it fails adding a chain when it already exists, whilst in others
// it doesn't.
key := getChainKey(name, tbl)
chain = n.GetChain(name, tbl, family)
if chain != nil {
if _, exists := sysChains.Load(key); exists {
sysChains.Delete(key)
}
chain.Policy = &policy
n.Conn.AddChain(chain)
} else {
// nft list chains
chain = n.Conn.AddChain(&nftables.Chain{
Name: strings.ToLower(name),
Table: tbl,
Type: ctype,
Hooknum: hook,
Priority: priority,
Policy: &policy,
})
if chain == nil {
log.Debug("%s AddChain() chain == nil", logTag)
return nil
}
}
sysChains.Store(key, chain)
return chain
}
// GetChain returns the chain with the given name in the given table, or nil if it doesn't exist.
func (n *Nft) GetChain(name string, table *nftables.Table, family string) *nftables.Chain {
if chains, err := n.Conn.ListChains(); err == nil {
for _, c := range chains {
if name == c.Name && table.Name == c.Table.Name && GetFamilyCode(family) == c.Table.Family {
return c
}
}
}
return nil
}
// regular chains are user-defined chains, to better organize fw rules.
// https://wiki.nftables.org/wiki-nftables/index.php/Configuring_chains#Adding_regular_chains
func (n *Nft) addRegularChain(name, table, family string) error {
tbl := n.GetTable(table, family)
if tbl == nil {
return fmt.Errorf("%s addRegularChain, Error getting table: %s, %s", logTag, table, family)
}
chain := n.Conn.AddChain(&nftables.Chain{
Name: name,
Table: tbl,
})
if chain == nil {
return fmt.Errorf("%s error adding regular chain: %s", logTag, name)
}
key := getChainKey(name, tbl)
sysChains.Store(key, chain)
return nil
}
// AddInterceptionChains adds the needed chains to intercept traffic.
func (n *Nft) AddInterceptionChains() error {
var filterPolicy nftables.ChainPolicy
var manglePolicy nftables.ChainPolicy
filterPolicy = nftables.ChainPolicyAccept
manglePolicy = nftables.ChainPolicyAccept
tbl := n.GetTable(exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET)
if tbl != nil {
key := getChainKey(exprs.NFT_HOOK_INPUT, tbl)
ch, found := sysChains.Load(key)
if key != "" && found {
filterPolicy = *ch.(*nftables.Chain).Policy
}
}
tbl = n.GetTable(exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET)
if tbl != nil {
key := getChainKey(exprs.NFT_HOOK_OUTPUT, tbl)
ch, found := sysChains.Load(key)
if key != "" && found {
manglePolicy = *ch.(*nftables.Chain).Policy
}
}
// nft list tables
n.AddChain(exprs.NFT_HOOK_INPUT, exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET,
nftables.ChainPriorityFilter, nftables.ChainTypeFilter, nftables.ChainHookInput, filterPolicy)
if !n.Commit() {
return fmt.Errorf("Error adding DNS interception chain input-filter-inet")
}
n.AddChain(exprs.NFT_HOOK_OUTPUT, exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET,
nftables.ChainPriorityMangle, nftables.ChainTypeRoute, nftables.ChainHookOutput, manglePolicy)
if !n.Commit() {
log.Error("(1) Error adding interception chain mangle-output-inet, trying with type Filter instead of Route")
// Workaround for kernels 4.x and maybe others.
// @see firewall/nftables/utils.go:GetChainPriority()
chainPrio, chainType := GetChainPriority(exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_OUTPUT)
n.AddChain(exprs.NFT_HOOK_OUTPUT, exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET,
chainPrio, chainType, nftables.ChainHookOutput, manglePolicy)
if !n.Commit() {
return fmt.Errorf("(2) Error adding interception chain mangle-output-inet with type Filter. Report it on github please, specifying the distro and the kernel")
}
}
return nil
}
// DelChain deletes a chain from the system.
func (n *Nft) DelChain(chain *nftables.Chain) error {
n.Conn.DelChain(chain)
sysChains.Delete(getChainKey(chain.Name, chain.Table))
if !n.Commit() {
return fmt.Errorf("[nftables] error deleting chain %s, %s", chain.Name, chain.Table.Name)
}
return nil
}
// backupExistingChains saves chains with Accept policy.
// If the user configures the chain policy to Drop, we need to set it back to Accept,
// in order not to block incoming connections.
func (n *Nft) backupExistingChains() {
if chains, err := n.Conn.ListChains(); err == nil {
for _, c := range chains {
if c.Policy != nil && *c.Policy == nftables.ChainPolicyAccept {
log.Debug("%s backing up existing chain with policy ACCEPT: %s, %s", logTag, c.Name, c.Table.Name)
origSysChains[getChainKey(c.Name, c.Table)] = c
}
}
}
}
func (n *Nft) restoreBackupChains() {
for _, c := range origSysChains {
log.Debug("%s Restoring chain policy to accept: %s, %s", logTag, c.Name, c.Table.Name)
*c.Policy = nftables.ChainPolicyAccept
n.Conn.AddChain(c)
}
n.Commit()
}
// opensnitch-1.6.9/daemon/firewall/nftables/chains_test.go
package nftables_test
import (
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables"
)
func TestChains(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
if nftest.Fw.AddInterceptionTables() != nil {
t.Error("Error adding interception tables")
}
t.Run("AddChain", func(t *testing.T) {
filterPolicy := nftables.ChainPolicyAccept
chn := nftest.Fw.AddChain(
exprs.NFT_HOOK_INPUT,
exprs.NFT_CHAIN_FILTER,
exprs.NFT_FAMILY_INET,
nftables.ChainPriorityFilter,
nftables.ChainTypeFilter,
nftables.ChainHookInput,
filterPolicy)
if chn == nil {
t.Error("chain input-filter-inet not created")
}
if !nftest.Fw.Commit() {
t.Error("error adding input-filter-inet chain")
}
})
t.Run("getChain", func(t *testing.T) {
tblfilter := nftest.Fw.GetTable(exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET)
if tblfilter == nil {
t.Error("table filter-inet not created")
}
chn := nftest.Fw.GetChain(exprs.NFT_HOOK_INPUT, tblfilter, exprs.NFT_FAMILY_INET)
if chn == nil {
t.Error("chain input-filter-inet not added")
}
})
t.Run("delChain", func(t *testing.T) {
tblfilter := nftest.Fw.GetTable(exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET)
if tblfilter == nil {
t.Error("table filter-inet not created")
}
chn := nftest.Fw.GetChain(exprs.NFT_HOOK_INPUT, tblfilter, exprs.NFT_FAMILY_INET)
if chn == nil {
t.Error("chain input-filter-inet not added")
}
if err := nftest.Fw.DelChain(chn); err != nil {
t.Error("error deleting chain input-filter-inet")
}
})
nftest.Fw.DelSystemTables()
}
// TestAddInterceptionChains checks if the needed tables and chains have been created.
// We use 2: output-mangle-inet for intercepting outbound connections, and input-filter-inet for DNS responses interception
func TestAddInterceptionChains(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
if err := nftest.Fw.AddInterceptionTables(); err != nil {
t.Errorf("Error adding interception tables: %s", err)
}
if err := nftest.Fw.AddInterceptionChains(); err != nil {
t.Errorf("Error adding interception chains: %s", err)
}
nftest.Fw.DelSystemTables()
}
// opensnitch-1.6.9/daemon/firewall/nftables/exprs/counter.go
package exprs
import (
"github.com/google/nftables/expr"
)
// NewExprCounter returns a counter for packets or bytes.
func NewExprCounter(counterName string) *[]expr.Any {
return &[]expr.Any{
&expr.Objref{
Type: 1,
Name: counterName,
},
}
}
// opensnitch-1.6.9/daemon/firewall/nftables/exprs/counter_test.go
package exprs_test
import (
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables"
)
func TestExprNamedCounter(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
// we must create the table before the counter object.
tbl, _ := nftest.Fw.AddTable("yyy", exprs.NFT_FAMILY_INET)
nftest.Fw.Conn.AddObj(
&nftables.CounterObj{
Table: &nftables.Table{
Name: "yyy",
Family: nftables.TableFamilyINet,
},
Name: "xxx-counter",
Bytes: 0,
Packets: 0,
},
)
r, _ := nftest.AddTestRule(t, conn, exprs.NewExprCounter("xxx-counter"))
if r == nil {
t.Error("Error adding counter rule")
return
}
objs, err := nftest.Fw.Conn.GetObjects(tbl)
if err != nil {
t.Errorf("Error retrieving objects from table %s: %s", tbl.Name, err)
}
if len(objs) != 1 {
t.Errorf("%d objects found, expected 1", len(objs))
}
counter, ok := objs[0].(*nftables.CounterObj)
if !ok {
t.Errorf("returned Obj is not CounterObj: %+v", objs[0])
}
if counter.Name != "xxx-counter" {
t.Errorf("CounterObj name differs: %s, expected 'xxx-counter'", counter.Name)
}
}
// opensnitch-1.6.9/daemon/firewall/nftables/exprs/ct.go
package exprs
import (
"fmt"
"strconv"
"strings"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
)
// NewExprCtMark returns a new ct mark expression, either to set the mark on a
// connection or to match it.
//
// Example: https://github.com/google/nftables/blob/master/nftables_test.go#L1234
// https://wiki.nftables.org/wiki-nftables/index.php/Setting_packet_metainformation
//
// set mark:
// nft --debug netlink add rule filter output mark set 1
// ip filter output
// [ immediate reg 1 0x00000001 ]
// [ meta set mark with reg 1 ]
//
// match mark:
// nft --debug netlink add rule mangle prerouting ct mark 123
// [ ct load mark => reg 1 ]
// [ cmp eq reg 1 0x0000007b ]
func NewExprCtMark(setMark bool, value string, cmpOp *expr.CmpOp) (*[]expr.Any, error) {
mark, err := strconv.Atoi(value)
if err != nil {
return nil, fmt.Errorf("Invalid conntrack mark: %s (%s)", err, value)
}
exprCtMark := []expr.Any{}
exprCtMark = append(exprCtMark, []expr.Any{
&expr.Immediate{
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(uint32(mark)),
},
&expr.Ct{
Key: expr.CtKeyMARK,
Register: 1,
SourceRegister: setMark,
},
}...)
if !setMark {
exprCtMark = append(exprCtMark, []expr.Any{
&expr.Cmp{Op: *cmpOp, Register: 1, Data: binaryutil.NativeEndian.PutUint32(uint32(mark))},
}...)
}
return &exprCtMark, nil
}
// NewExprCtState returns a new ct expression.
func NewExprCtState(ctFlags []*config.ExprValues) (*[]expr.Any, error) {
mask := uint32(0)
for _, flag := range ctFlags {
found, msk, err := parseInlineCtStates(flag.Value)
if err != nil {
return nil, err
}
if found {
mask |= msk
continue
}
msk, err = getCtState(flag.Value)
if err != nil {
return nil, err
}
mask |= msk
}
return &[]expr.Any{
&expr.Ct{
Register: 1, SourceRegister: false, Key: expr.CtKeySTATE,
},
&expr.Bitwise{
SourceRegister: 1,
DestRegister: 1,
Len: 4,
Mask: binaryutil.NativeEndian.PutUint32(mask),
Xor: binaryutil.NativeEndian.PutUint32(0),
},
}, nil
}
func parseInlineCtStates(flags string) (found bool, mask uint32, err error) {
// a "state" flag may be compounded of multiple values, separated by commas:
// related,established
fgs := strings.Split(flags, ",")
if len(fgs) > 0 {
for _, fg := range fgs {
msk, err := getCtState(fg)
if err != nil {
return false, 0, err
}
mask |= msk
found = true
}
}
return
}
func getCtState(flag string) (mask uint32, err error) {
switch strings.ToLower(flag) {
case CT_STATE_NEW:
mask |= expr.CtStateBitNEW
case CT_STATE_ESTABLISHED:
mask |= expr.CtStateBitESTABLISHED
case CT_STATE_RELATED:
mask |= expr.CtStateBitRELATED
case CT_STATE_INVALID:
mask |= expr.CtStateBitINVALID
default:
return 0, fmt.Errorf("Invalid conntrack flag: %s", flag)
}
return
}
// opensnitch-1.6.9/daemon/firewall/nftables/exprs/ct_test.go
package exprs_test
import (
"fmt"
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
)
func TestExprCtMark(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
type ctTestsT struct {
nftest.TestsT
setMark bool
}
cmp := expr.CmpOpEq
tests := []ctTestsT{
{
TestsT: nftest.TestsT{
Name: "test-ct-set-mark-666",
Parms: "666",
ExpectedExprsNum: 2,
ExpectedExprs: []interface{}{
&expr.Immediate{
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(uint32(666)),
},
&expr.Ct{
Key: expr.CtKeyMARK,
Register: 1,
SourceRegister: true,
},
},
},
setMark: true,
},
{
TestsT: nftest.TestsT{
Name: "test-ct-check-mark-666",
Parms: "666",
ExpectedExprsNum: 3,
ExpectedExprs: []interface{}{
&expr.Immediate{
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(uint32(666)),
},
&expr.Ct{
Key: expr.CtKeyMARK,
Register: 1,
SourceRegister: false,
},
&expr.Cmp{
Op: cmp,
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(uint32(666)),
},
},
},
setMark: false,
},
{
TestsT: nftest.TestsT{
Name: "test-invalid-ct-check-mark",
Parms: "0x29a",
ExpectedExprsNum: 3,
ExpectedExprs: []interface{}{},
ExpectedFail: true,
},
setMark: false,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
ctExpr, err := exprs.NewExprCtMark(test.setMark, test.TestsT.Parms, &cmp)
if err != nil && !test.ExpectedFail {
t.Errorf("Error creating expr Ct: %s", err)
return
} else if err != nil && test.ExpectedFail {
return
}
r, _ := nftest.AddTestRule(t, conn, ctExpr)
if r == nil && !test.ExpectedFail {
t.Error("Error adding rule with Ct expression")
}
if !nftest.AreExprsValid(t, &test.TestsT, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
func TestExprCtState(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
tests := []nftest.TestsT{
{
Name: "test-ct-single-state",
Parms: "",
Values: []*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_CT_STATE,
Value: exprs.CT_STATE_NEW,
},
},
ExpectedExprsNum: 2,
ExpectedExprs: []interface{}{
&expr.Ct{
Register: 1, SourceRegister: false, Key: expr.CtKeySTATE,
},
&expr.Bitwise{
SourceRegister: 1,
DestRegister: 1,
Len: 4,
Mask: binaryutil.NativeEndian.PutUint32(expr.CtStateBitNEW),
Xor: binaryutil.NativeEndian.PutUint32(0),
},
},
ExpectedFail: false,
},
{
Name: "test-ct-multiple-states",
Parms: "",
Values: []*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_CT_STATE,
Value: fmt.Sprint(exprs.CT_STATE_NEW, ",", exprs.CT_STATE_ESTABLISHED),
},
},
ExpectedExprsNum: 2,
ExpectedExprs: []interface{}{
&expr.Ct{
Register: 1, SourceRegister: false, Key: expr.CtKeySTATE,
},
&expr.Bitwise{
SourceRegister: 1,
DestRegister: 1,
Len: 4,
Mask: binaryutil.NativeEndian.PutUint32(expr.CtStateBitNEW | expr.CtStateBitESTABLISHED),
Xor: binaryutil.NativeEndian.PutUint32(0),
},
},
ExpectedFail: false,
},
{
Name: "test-invalid-ct-state",
Parms: "",
Values: []*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_CT_STATE,
Value: "xxx",
},
},
ExpectedExprsNum: 2,
ExpectedExprs: []interface{}{},
ExpectedFail: true,
},
{
Name: "test-invalid-ct-states",
Parms: "",
Values: []*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_CT_STATE,
Value: "new,xxx",
},
},
ExpectedExprsNum: 2,
ExpectedExprs: []interface{}{},
ExpectedFail: true,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
ctExpr, err := exprs.NewExprCtState(test.Values)
if err != nil && !test.ExpectedFail {
t.Errorf("Error creating expr Ct: %s", err)
return
} else if err != nil && test.ExpectedFail {
return
}
r, _ := nftest.AddTestRule(t, conn, ctExpr)
if r == nil && !test.ExpectedFail {
t.Error("Error adding rule with Ct expression")
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
// opensnitch-1.6.9/daemon/firewall/nftables/exprs/enums.go
package exprs
// keywords used in the configuration to define rules.
const (
// https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks#Priority_within_hook
NFT_CHAIN_MANGLE = "mangle"
NFT_CHAIN_FILTER = "filter"
NFT_CHAIN_RAW = "raw"
NFT_CHAIN_SECURITY = "security"
NFT_CHAIN_NATDEST = "natdest"
NFT_CHAIN_NATSOURCE = "natsource"
NFT_CHAIN_CONNTRACK = "conntrack"
NFT_CHAIN_SELINUX = "selinux"
NFT_HOOK_INPUT = "input"
NFT_HOOK_OUTPUT = "output"
NFT_HOOK_PREROUTING = "prerouting"
NFT_HOOK_POSTROUTING = "postrouting"
NFT_HOOK_INGRESS = "ingress"
NFT_HOOK_EGRESS = "egress"
NFT_HOOK_FORWARD = "forward"
NFT_TABLE_INET = "inet"
NFT_TABLE_NAT = "nat"
// TODO
NFT_TABLE_ARP = "arp"
NFT_TABLE_BRIDGE = "bridge"
NFT_TABLE_NETDEV = "netdev"
NFT_FAMILY_IP = "ip"
NFT_FAMILY_IP6 = "ip6"
NFT_FAMILY_INET = "inet"
NFT_FAMILY_BRIDGE = "bridge"
NFT_FAMILY_ARP = "arp"
NFT_FAMILY_NETDEV = "netdev"
VERDICT_ACCEPT = "accept"
VERDICT_DROP = "drop"
VERDICT_REJECT = "reject"
VERDICT_RETURN = "return"
VERDICT_QUEUE = "queue"
VERDICT_JUMP = "jump"
// TODO
VERDICT_GOTO = "goto"
VERDICT_STOP = "stop"
VERDICT_STOLEN = "stolen"
VERDICT_CONTINUE = "continue"
VERDICT_MASQUERADE = "masquerade"
VERDICT_DNAT = "dnat"
VERDICT_SNAT = "snat"
VERDICT_REDIRECT = "redirect"
VERDICT_TPROXY = "tproxy"
NFT_PARM_TO = "to"
NFT_QUEUE_NUM = "num"
NFT_QUEUE_BY_PASS = "queue-bypass"
NFT_MASQ_RANDOM = "random"
NFT_MASQ_FULLY_RANDOM = "fully-random"
NFT_MASQ_PERSISTENT = "persistent"
NFT_PROTOCOL = "protocol"
NFT_SPORT = "sport"
NFT_DPORT = "dport"
NFT_SADDR = "saddr"
NFT_DADDR = "daddr"
NFT_ICMP_CODE = "code"
NFT_ICMP_TYPE = "type"
NFT_ETHER = "ether"
NFT_IIFNAME = "iifname"
NFT_OIFNAME = "oifname"
NFT_LOG = "log"
NFT_LOG_PREFIX = "prefix"
// TODO
NFT_LOG_LEVEL = "level"
NFT_LOG_LEVEL_EMERG = "emerg"
NFT_LOG_LEVEL_ALERT = "alert"
NFT_LOG_LEVEL_CRIT = "crit"
NFT_LOG_LEVEL_ERR = "err"
NFT_LOG_LEVEL_WARN = "warn"
NFT_LOG_LEVEL_NOTICE = "notice"
NFT_LOG_LEVEL_INFO = "info"
NFT_LOG_LEVEL_DEBUG = "debug"
NFT_LOG_LEVEL_AUDIT = "audit"
NFT_LOG_FLAGS = "flags"
NFT_CT = "ct"
NFT_CT_STATE = "state"
NFT_CT_SET_MARK = "set"
NFT_CT_MARK = "mark"
CT_STATE_NEW = "new"
CT_STATE_ESTABLISHED = "established"
CT_STATE_RELATED = "related"
CT_STATE_INVALID = "invalid"
NFT_NOTRACK = "notrack"
NFT_QUOTA = "quota"
NFT_QUOTA_UNTIL = "until"
NFT_QUOTA_OVER = "over"
NFT_QUOTA_USED = "used"
NFT_QUOTA_UNIT_BYTES = "bytes"
NFT_QUOTA_UNIT_KB = "kbytes"
NFT_QUOTA_UNIT_MB = "mbytes"
NFT_QUOTA_UNIT_GB = "gbytes"
NFT_COUNTER = "counter"
NFT_COUNTER_NAME = "name"
NFT_COUNTER_PACKETS = "packets"
NFT_COUNTER_BYTES = "bytes"
NFT_LIMIT = "limit"
NFT_LIMIT_OVER = "over"
NFT_LIMIT_BURST = "burst"
NFT_LIMIT_UNITS_RATE = "rate-units"
NFT_LIMIT_UNITS_TIME = "time-units"
NFT_LIMIT_UNITS = "units"
NFT_LIMIT_UNIT_SECOND = "second"
NFT_LIMIT_UNIT_MINUTE = "minute"
NFT_LIMIT_UNIT_HOUR = "hour"
NFT_LIMIT_UNIT_DAY = "day"
NFT_LIMIT_UNIT_KBYTES = "kbytes"
NFT_LIMIT_UNIT_MBYTES = "mbytes"
NFT_META = "meta"
NFT_META_MARK = "mark"
NFT_META_SET_MARK = "set"
NFT_META_PRIORITY = "priority"
NFT_META_NFTRACE = "nftrace"
NFT_META_SET = "set"
NFT_META_SKUID = "skuid"
NFT_META_SKGID = "skgid"
NFT_META_L4PROTO = "l4proto"
NFT_META_PROTOCOL = "protocol"
NFT_PROTO_UDP = "udp"
NFT_PROTO_UDPLITE = "udplite"
NFT_PROTO_TCP = "tcp"
NFT_PROTO_SCTP = "sctp"
NFT_PROTO_DCCP = "dccp"
NFT_PROTO_ICMP = "icmp"
NFT_PROTO_ICMPX = "icmpx"
NFT_PROTO_ICMPv6 = "icmpv6"
NFT_PROTO_AH = "ah"
NFT_PROTO_ETHERNET = "ethernet"
NFT_PROTO_GRE = "gre"
NFT_PROTO_IP = "ip"
NFT_PROTO_IPIP = "ipip"
NFT_PROTO_L2TP = "l2tp"
NFT_PROTO_COMP = "comp"
NFT_PROTO_IGMP = "igmp"
NFT_PROTO_ESP = "esp"
NFT_PROTO_RAW = "raw"
NFT_PROTO_ENCAP = "encap"
ICMP_NO_ROUTE = "no-route"
ICMP_PROT_UNREACHABLE = "prot-unreachable"
ICMP_PORT_UNREACHABLE = "port-unreachable"
ICMP_NET_UNREACHABLE = "net-unreachable"
ICMP_ADDR_UNREACHABLE = "addr-unreachable"
ICMP_HOST_UNREACHABLE = "host-unreachable"
ICMP_NET_PROHIBITED = "net-prohibited"
ICMP_HOST_PROHIBITED = "host-prohibited"
ICMP_ADMIN_PROHIBITED = "admin-prohibited"
ICMP_REJECT_ROUTE = "reject-route"
ICMP_REJECT_POLICY_FAIL = "policy-fail"
ICMP_ECHO_REPLY = "echo-reply"
ICMP_ECHO_REQUEST = "echo-request"
ICMP_SOURCE_QUENCH = "source-quench"
ICMP_DEST_UNREACHABLE = "destination-unreachable"
ICMP_REDIRECT = "redirect"
ICMP_TIME_EXCEEDED = "time-exceeded"
ICMP_INFO_REQUEST = "info-request"
ICMP_INFO_REPLY = "info-reply"
ICMP_PARAMETER_PROBLEM = "parameter-problem"
ICMP_TIMESTAMP_REQUEST = "timestamp-request"
ICMP_TIMESTAMP_REPLY = "timestamp-reply"
ICMP_ROUTER_ADVERTISEMENT = "router-advertisement"
ICMP_ROUTER_SOLICITATION = "router-solicitation"
ICMP_ADDRESS_MASK_REQUEST = "address-mask-request"
ICMP_ADDRESS_MASK_REPLY = "address-mask-reply"
ICMP_PACKET_TOO_BIG = "packet-too-big"
ICMP_NEIGHBOUR_SOLICITATION = "neighbour-solicitation"
ICMP_NEIGHBOUR_ADVERTISEMENT = "neighbour-advertisement"
)
opensnitch-1.6.9/daemon/firewall/nftables/exprs/ether.go 0000664 0000000 0000000 00000002617 15003540030 0023342 0 ustar 00root root 0000000 0000000 package exprs
import (
"encoding/hex"
"fmt"
"strings"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/google/nftables/expr"
)
// NewExprEther creates a new expression to match ethernet MAC addresses
func NewExprEther(values []*config.ExprValues) (*[]expr.Any, error) {
etherExpr := []expr.Any{}
macDir := uint32(6)
for _, eth := range values {
if eth.Key == NFT_DADDR {
macDir = uint32(0)
} else {
macDir = uint32(6)
}
macaddr, err := parseMACAddr(eth.Value)
if err != nil {
return nil, err
}
etherExpr = append(etherExpr, []expr.Any{
&expr.Meta{Key: expr.MetaKeyIIFTYPE, Register: 1},
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{0x01, 0x00},
},
&expr.Payload{
DestRegister: 1,
Base: expr.PayloadBaseLLHeader,
Offset: macDir,
Len: 6,
},
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: macaddr,
},
}...)
}
return ðerExpr, nil
}
func parseMACAddr(macValue string) ([]byte, error) {
mac := strings.Split(macValue, ":")
macaddr := make([]byte, 0)
if len(mac) != 6 {
return nil, fmt.Errorf("Invalid MAC address: %s", macValue)
}
	for _, m := range mac {
		mm, err := hex.DecodeString(m)
		if err != nil || len(mm) != 1 {
			return nil, fmt.Errorf("Invalid MAC byte: %s (%s)", m, macValue)
		}
		macaddr = append(macaddr, mm[0])
	}
return macaddr, nil
}
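// Illustrative usage of parseMACAddr (values are hypothetical, not part of the API):
//
//	parseMACAddr("de:ad:be:af:ca:fe") // -> []byte{0xde, 0xad, 0xbe, 0xaf, 0xca, 0xfe}, nil
//	parseMACAddr("de:ad:be")          // -> nil, "Invalid MAC address: de:ad:be"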
opensnitch-1.6.9/daemon/firewall/nftables/exprs/ether_test.go 0000664 0000000 0000000 00000005273 15003540030 0024402 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"bytes"
"reflect"
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/expr"
)
func TestExprEther(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
values := []*config.ExprValues{
&config.ExprValues{
Key: "ether",
Value: "de:ad:be:af:ca:fe",
},
}
	etherExpr, err := exprs.NewExprEther(values)
	if err != nil {
		t.Errorf("Error creating Ether expression: %s, %+v", err, values)
		return
	}
r, _ := nftest.AddTestRule(t, conn, etherExpr)
if r == nil {
t.Error("Error adding Ether rule")
return
}
if len(r.Exprs) != 4 {
t.Errorf("invalid rule created, we expected 4 expressions, got: %d", len(r.Exprs))
}
/*
expr Meta
expr Cmp
expr Payload
expr Cmp
*/
t.Run("test-ether-expr meta", func(t *testing.T) {
e := r.Exprs[0] // meta
if reflect.TypeOf(e).String() != "*expr.Meta" {
t.Errorf("first expression should be *expr.Meta, instead of: %s", reflect.TypeOf(e))
}
lMeta, ok := e.(*expr.Meta)
if !ok {
t.Errorf("invalid meta expr: %T", e)
}
if lMeta.Key != expr.MetaKeyIIFTYPE {
t.Errorf("invalid meta Key: %d, instead of %d", lMeta.Key, expr.MetaKeyIIFTYPE)
}
})
t.Run("test-ether-expr cmp", func(t *testing.T) {
e := r.Exprs[1] // cmp
if reflect.TypeOf(e).String() != "*expr.Cmp" {
t.Errorf("second expression should be *expr.Cmp, instead of: %s", reflect.TypeOf(e))
}
lCmp, ok := e.(*expr.Cmp)
if !ok {
t.Errorf("invalid cmp expr: %T", e)
}
if !bytes.Equal(lCmp.Data, []byte{0x01, 0x00}) {
t.Errorf("invalid cmp data: %v", lCmp.Data)
}
})
t.Run("test-ether-expr payload", func(t *testing.T) {
e := r.Exprs[2] // payload
if reflect.TypeOf(e).String() != "*expr.Payload" {
t.Errorf("third expression should be *expr.Payload, instead of: %s", reflect.TypeOf(e))
}
lPayload, ok := e.(*expr.Payload)
if !ok {
t.Errorf("invalid payload expr: %T", e)
}
if lPayload.Base != expr.PayloadBaseLLHeader || lPayload.Offset != 6 || lPayload.Len != 6 {
t.Errorf("invalid payload data: %v", lPayload)
}
})
	t.Run("test-ether-expr cmp-mac", func(t *testing.T) {
e := r.Exprs[3] // cmp
if reflect.TypeOf(e).String() != "*expr.Cmp" {
t.Errorf("fourth expression should be *expr.Cmp, instead of: %s", reflect.TypeOf(e))
}
lCmp, ok := e.(*expr.Cmp)
if !ok {
t.Errorf("invalid cmp expr: %T", e)
}
if !bytes.Equal(lCmp.Data, []byte{222, 173, 190, 175, 202, 254}) {
t.Errorf("invalid cmp data: %q", lCmp.Data)
}
})
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/iface.go 0000664 0000000 0000000 00000001262 15003540030 0023275 0 ustar 00root root 0000000 0000000 package exprs
import (
"github.com/google/nftables/expr"
)
// NewExprIface returns a new network interface expression
func NewExprIface(iface string, isOut bool, cmpOp expr.CmpOp) *[]expr.Any {
keyDev := expr.MetaKeyIIFNAME
if isOut {
keyDev = expr.MetaKeyOIFNAME
}
return &[]expr.Any{
&expr.Meta{Key: keyDev, Register: 1},
&expr.Cmp{
Op: cmpOp,
Register: 1,
Data: ifname(iface),
},
}
}
// https://github.com/google/nftables/blob/master/nftables_test.go#L81
func ifname(n string) []byte {
buf := make([]byte, 16)
	length := len(n)
	if length == 0 {
		return buf
	}
	// allow wildcards
	if n[length-1:] == "*" {
return []byte(n[:length-1])
}
copy(buf, []byte(n+"\x00"))
return buf
}
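// A quick sketch of what ifname produces (illustrative values):
//
//	ifname("eth0")  // -> 16-byte buffer: "eth0\x00\x00..." (NUL-padded)
//	ifname("wlan*") // -> []byte("wlan"), unpadded, to match any wlanN interface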
opensnitch-1.6.9/daemon/firewall/nftables/exprs/iface_test.go 0000664 0000000 0000000 00000004342 15003540030 0024336 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"bytes"
"reflect"
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/expr"
)
// https://github.com/evilsocket/opensnitch/blob/master/daemon/firewall/nftables/exprs/iface.go#L22
func ifname(n string) []byte {
buf := make([]byte, 16)
	length := len(n)
	if length == 0 {
		return buf
	}
	// allow wildcards
	if n[length-1:] == "*" {
return []byte(n[:length-1])
}
copy(buf, []byte(n+"\x00"))
return buf
}
func TestExprIface(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
type ifaceTestsT struct {
name string
iface string
out bool
}
tests := []ifaceTestsT{
{"test-in-iface-xxx", "in-iface0", false},
{"test-out-iface-xxx", "out-iface0", true},
{"test-out-iface-xxx-wildcard", "out-iface*", true},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
ifaceExpr := exprs.NewExprIface(test.iface, test.out, expr.CmpOpEq)
r, _ := nftest.AddTestRule(t, conn, ifaceExpr)
			if r == nil {
				t.Error("Error adding rule with iface expression")
				return
			}
if total := len(r.Exprs); total != 2 {
t.Errorf("expected 2 expressions, got %d: %+v", total, r.Exprs)
}
e := r.Exprs[0]
if reflect.TypeOf(e).String() != "*expr.Meta" {
t.Errorf("first expression should be *expr.Meta, instead of: %s", reflect.TypeOf(e))
}
lExpr, ok := e.(*expr.Meta)
if !ok {
t.Errorf("invalid iface meta expr: %T", e)
}
if test.out && lExpr.Key != expr.MetaKeyOIFNAME {
t.Errorf("iface Key should be MetaKeyOIFNAME instead of: %+v", lExpr)
} else if !test.out && lExpr.Key != expr.MetaKeyIIFNAME {
t.Errorf("iface Key should be MetaKeyIIFNAME instead of: %+v", lExpr)
}
e = r.Exprs[1]
if reflect.TypeOf(e).String() != "*expr.Cmp" {
t.Errorf("second expression should be *expr.Cmp, instead of: %s", reflect.TypeOf(e))
}
lCmp, ok := e.(*expr.Cmp)
if !ok {
t.Errorf("invalid iface cmp expr: %T", e)
}
if !bytes.Equal(lCmp.Data, ifname(test.iface)) {
t.Errorf("iface Cmp does not match: %v, expected: %v", lCmp.Data, ifname(test.iface))
}
})
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/ip.go 0000664 0000000 0000000 00000007477 15003540030 0022654 0 ustar 00root root 0000000 0000000 package exprs
import (
"fmt"
"net"
"strings"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
// NewExprIP returns a new IP expression.
// You can use multiple statements to specify daddr + saddr, or combine them
// in a single statement expression:
// Example 1 (filtering by source and dest address):
// "Name": "ip",
// "Values": [ {"Key": "saddr": "Value": "1.2.3.4"},{"Key": "daddr": "Value": "1.2.3.5"} ]
// Example 2 (filtering by multiple dest addrs IPs):
// "Name": "ip",
// "Values": [
// {"Key": "daddr": "Value": "1.2.3.4"},
// {"Key": "daddr": "Value": "1.2.3.5"}
// ]
// Example 3 (filtering by network range):
// "Name": "ip",
// "Values": [
// {"Key": "daddr": "Value": "1.2.3.4-1.2.9.254"}
// ]
// TODO (filter by multiple dest addrs separated by commas):
// "Values": [
// {"Key": "daddr": "Value": "1.2.3.4,1.2.9.254"}
// ]
func NewExprIP(family string, ipOptions []*config.ExprValues, cmpOp expr.CmpOp) (*[]expr.Any, error) {
var exprIP []expr.Any
// if the table family is inet, we need to specify the protocol of the IP being added.
if family == NFT_FAMILY_INET {
exprIP = append(exprIP, &expr.Meta{Key: expr.MetaKeyNFPROTO, Register: 1})
exprIP = append(exprIP, &expr.Cmp{Op: expr.CmpOpEq, Register: 1, Data: []byte{unix.NFPROTO_IPV4}})
}
for _, ipOpt := range ipOptions {
// TODO: ipv6
switch ipOpt.Key {
case NFT_SADDR, NFT_DADDR:
payload := getExprIPPayload(ipOpt.Key)
exprIP = append(exprIP, payload)
			if !strings.Contains(ipOpt.Value, "-") {
exprIPtemp, err := getExprIP(ipOpt.Value, cmpOp)
if err != nil {
return nil, err
}
exprIP = append(exprIP, *exprIPtemp...)
} else {
exprIPtemp, err := getExprRangeIP(ipOpt.Value, cmpOp)
if err != nil {
return nil, err
}
exprIP = append(exprIP, *exprIPtemp...)
}
case NFT_PROTOCOL:
payload := getExprIPPayload(ipOpt.Key)
exprIP = append(exprIP, payload)
protoCode, err := getProtocolCode(ipOpt.Value)
if err != nil {
return nil, err
}
exprIP = append(exprIP, []expr.Any{
&expr.Cmp{
Op: cmpOp,
Register: 1,
Data: []byte{byte(protoCode)},
},
}...)
}
}
return &exprIP, nil
}
func getExprIPPayload(what string) *expr.Payload {
switch what {
	case NFT_PROTOCOL:
		return &expr.Payload{
			DestRegister: 1,
			Offset:       9, // protocol field of the IPv4 header
			Base:         expr.PayloadBaseNetworkHeader,
			Len:          1,
		}
case NFT_DADDR:
// NOTE 1: if "what" is daddr and SourceRegister is part of the Payload{} expression,
// the rule is not added.
return &expr.Payload{
DestRegister: 1,
Offset: 16, // daddr
Base: expr.PayloadBaseNetworkHeader,
Len: 4, // 16 ipv6
}
default:
return &expr.Payload{
SourceRegister: 1,
DestRegister: 1,
Offset: 12, // saddr
Base: expr.PayloadBaseNetworkHeader,
Len: 4, // 16 ipv6
}
}
}
// Supported IP types: a.b.c.d, a.b.c.d-w.x.y.z
// TODO: support IPs separated by commas: a.b.c.d, e.f.g.h,...
func getExprIP(value string, cmpOp expr.CmpOp) (*[]expr.Any, error) {
ip := net.ParseIP(value)
if ip == nil {
return nil, fmt.Errorf("Invalid IP: %s", value)
}
return &[]expr.Any{
&expr.Cmp{
Op: cmpOp,
Register: 1,
Data: ip.To4(),
},
}, nil
}
// Supported IP types: a.b.c.d, a.b.c.d-w.x.y.z
// TODO: support IPs separated by commas: a.b.c.d, e.f.g.h,...
func getExprRangeIP(value string, cmpOp expr.CmpOp) (*[]expr.Any, error) {
	ips := strings.Split(value, "-")
	if len(ips) != 2 {
		return nil, fmt.Errorf("Invalid IPs range: %s", value)
	}
	ipSrc := net.ParseIP(ips[0])
	ipDst := net.ParseIP(ips[1])
	if ipSrc == nil || ipDst == nil {
		return nil, fmt.Errorf("Invalid IPs range: %v", ips)
	}
return &[]expr.Any{
&expr.Range{
Op: cmpOp,
Register: 1,
FromData: ipSrc.To4(),
ToData: ipDst.To4(),
},
}, nil
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/ip_test.go 0000664 0000000 0000000 00000014433 15003540030 0023701 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"net"
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
func TestExprIP(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
tests := []nftest.TestsT{
{
"test-ip-daddr",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "1.1.1.1",
},
},
2,
[]interface{}{
&expr.Payload{
SourceRegister: 0,
DestRegister: 1,
Offset: 16,
Base: expr.PayloadBaseNetworkHeader,
Len: 4,
},
&expr.Cmp{
Data: net.ParseIP("1.1.1.1").To4(),
},
},
false,
},
{
"test-ip-saddr",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "saddr",
Value: "1.1.1.1",
},
},
2,
[]interface{}{
&expr.Payload{
SourceRegister: 1,
DestRegister: 1,
Offset: 12,
Base: expr.PayloadBaseNetworkHeader,
Len: 4,
},
&expr.Cmp{
Data: net.ParseIP("1.1.1.1").To4(),
},
},
false,
},
{
"test-inet-daddr",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "1.1.1.1",
},
},
4,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeyNFPROTO, Register: 1,
},
&expr.Cmp{
Data: []byte{unix.NFPROTO_IPV4},
},
&expr.Payload{
SourceRegister: 0,
DestRegister: 1,
Offset: 16,
Base: expr.PayloadBaseNetworkHeader,
Len: 4,
},
&expr.Cmp{
Data: net.ParseIP("1.1.1.1").To4(),
},
},
false,
},
{
"test-ip-daddr-invalid",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "1.1.1",
},
},
0,
[]interface{}{},
true,
},
{
			"test-ip-daddr-invalid-double-dot",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "1..1.1.1",
},
},
0,
[]interface{}{},
true,
},
{
			"test-ip-daddr-invalid-hostname",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "www.test.com",
},
},
0,
[]interface{}{},
true,
},
{
			"test-ip-daddr-invalid-empty",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "",
},
},
0,
[]interface{}{},
true,
},
{
"test-inet-saddr",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "saddr",
Value: "1.1.1.1",
},
},
4,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeyNFPROTO, Register: 1,
},
&expr.Cmp{
Data: []byte{unix.NFPROTO_IPV4},
},
&expr.Payload{
SourceRegister: 1,
DestRegister: 1,
Offset: 12,
Base: expr.PayloadBaseNetworkHeader,
Len: 4,
},
&expr.Cmp{
Data: net.ParseIP("1.1.1.1").To4(),
},
},
false,
},
{
"test-inet-daddr-invalid",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "1..1.1.1",
},
},
0,
[]interface{}{},
true,
},
{
"test-inet-saddr-invalid",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "saddr",
Value: "1..1.1.1",
},
},
0,
[]interface{}{},
true,
},
{
"test-inet-range-daddr",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "1.1.1.1-2.2.2.2",
},
},
4,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeyNFPROTO, Register: 1,
},
&expr.Cmp{
Data: []byte{unix.NFPROTO_IPV4},
},
&expr.Payload{
SourceRegister: 0,
DestRegister: 1,
Offset: 16,
Base: expr.PayloadBaseNetworkHeader,
Len: 4,
},
&expr.Range{
Register: 1,
FromData: net.ParseIP("1.1.1.1").To4(),
ToData: net.ParseIP("2.2.2.2").To4(),
},
},
false,
},
{
"test-inet-range-saddr",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "saddr",
Value: "1.1.1.1-2.2.2.2",
},
},
4,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeyNFPROTO, Register: 1,
},
&expr.Cmp{
Data: []byte{unix.NFPROTO_IPV4},
},
&expr.Payload{
SourceRegister: 1,
DestRegister: 1,
Offset: 12,
Base: expr.PayloadBaseNetworkHeader,
Len: 4,
},
&expr.Range{
Register: 1,
FromData: net.ParseIP("1.1.1.1").To4(),
ToData: net.ParseIP("2.2.2.2").To4(),
},
},
false,
},
{
"test-inet-daddr-range-invalid",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "1.1.1.1--2.2.2.2",
},
},
0,
[]interface{}{},
true,
},
{
			"test-inet-daddr-range-invalid-ip",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
Value: "1.1.1.1-1..2.2.2",
},
},
0,
[]interface{}{},
true,
},
{
			"test-inet-daddr-range-cidr-unsupported",
exprs.NFT_FAMILY_INET,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: "daddr",
// TODO: not supported yet
Value: "1.1.1.1/24",
},
},
0,
[]interface{}{},
true,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
ipExpr, err := exprs.NewExprIP(test.Family, test.Values, expr.CmpOpEq)
if err != nil && !test.ExpectedFail {
				t.Errorf("Error creating expr IP: %s", err)
return
} else if err != nil && test.ExpectedFail {
return
}
r, _ := nftest.AddTestRule(t, conn, ipExpr)
if r == nil && !test.ExpectedFail {
t.Error("Error adding rule with IP expression")
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/limit.go 0000664 0000000 0000000 00000004014 15003540030 0023342 0 ustar 00root root 0000000 0000000 package exprs
import (
"fmt"
"strconv"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/google/nftables/expr"
)
// NewExprLimit returns a new limit expression.
// limit rate [over] 1/second
// to express bytes units, we use: 10-mbytes instead of nft's 10 mbytes
func NewExprLimit(statement *config.ExprStatement) (*[]expr.Any, error) {
var err error
exprLimit := &expr.Limit{
Type: expr.LimitTypePkts,
Over: false,
Unit: expr.LimitTimeSecond,
}
for _, values := range statement.Values {
switch values.Key {
case NFT_LIMIT_OVER:
exprLimit.Over = true
case NFT_LIMIT_UNITS:
exprLimit.Rate, err = strconv.ParseUint(values.Value, 10, 64)
if err != nil {
return nil, fmt.Errorf("Invalid limit rate: %s", values.Value)
}
case NFT_LIMIT_BURST:
limitBurst := 0
limitBurst, err = strconv.Atoi(values.Value)
if err != nil || limitBurst == 0 {
return nil, fmt.Errorf("Invalid burst limit: %s, err: %s", values.Value, err)
}
exprLimit.Burst = uint32(limitBurst)
case NFT_LIMIT_UNITS_RATE:
// units rate must be placed AFTER the rate
exprLimit.Type, exprLimit.Rate = getLimitRate(values.Value, exprLimit.Rate)
case NFT_LIMIT_UNITS_TIME:
exprLimit.Unit = getLimitUnits(values.Value)
}
}
return &[]expr.Any{exprLimit}, nil
}
func getLimitUnits(units string) (limitUnits expr.LimitTime) {
switch units {
case NFT_LIMIT_UNIT_MINUTE:
limitUnits = expr.LimitTimeMinute
case NFT_LIMIT_UNIT_HOUR:
limitUnits = expr.LimitTimeHour
case NFT_LIMIT_UNIT_DAY:
limitUnits = expr.LimitTimeDay
default:
limitUnits = expr.LimitTimeSecond
}
return limitUnits
}
func getLimitRate(units string, rate uint64) (limitType expr.LimitType, limitRate uint64) {
switch units {
case NFT_LIMIT_UNIT_KBYTES:
limitRate = rate * 1024
limitType = expr.LimitTypePktBytes
case NFT_LIMIT_UNIT_MBYTES:
limitRate = (rate * 1024) * 1024
limitType = expr.LimitTypePktBytes
	default:
		// any other unit keeps the rate as given, counted in packets
		limitType = expr.LimitTypePkts
		limitRate = rate
}
return
}
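// Example conversions performed by getLimitRate (illustrative values):
//
//	getLimitRate("kbytes", 10) // -> (expr.LimitTypePktBytes, 10240)
//	getLimitRate("mbytes", 1)  // -> (expr.LimitTypePktBytes, 1048576)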
opensnitch-1.6.9/daemon/firewall/nftables/exprs/log.go 0000664 0000000 0000000 00000003422 15003540030 0023007 0 ustar 00root root 0000000 0000000 package exprs
import (
"fmt"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
// NewExprLog returns a new log expression.
func NewExprLog(statement *config.ExprStatement) (*[]expr.Any, error) {
prefix := "opensnitch"
logExpr := expr.Log{
Key: 1 << unix.NFTA_LOG_PREFIX,
Data: []byte(prefix),
}
for _, values := range statement.Values {
switch values.Key {
case NFT_LOG_PREFIX:
if values.Value == "" {
return nil, fmt.Errorf("Invalid log prefix, it's empty")
}
logExpr.Data = []byte(values.Value)
case NFT_LOG_LEVEL:
lvl, err := getLogLevel(values.Value)
if err != nil {
log.Warning("%s", err)
return nil, err
}
logExpr.Key |= 1 << unix.NFTA_LOG_LEVEL
logExpr.Level = lvl
// TODO
// https://github.com/google/nftables/blob/main/nftables_test.go#L623
//case exprs.NFT_LOG_FLAGS:
//case exprs.NFT_LOG_GROUP:
//case exprs.NFT_LOG_QTHRESHOLD:
}
}
return &[]expr.Any{
&logExpr,
}, nil
}
func getLogLevel(what string) (expr.LogLevel, error) {
switch what {
// https://github.com/google/nftables/blob/main/expr/log.go#L28
case NFT_LOG_LEVEL_EMERG:
return expr.LogLevelEmerg, nil
case NFT_LOG_LEVEL_ALERT:
return expr.LogLevelAlert, nil
case NFT_LOG_LEVEL_CRIT:
return expr.LogLevelCrit, nil
case NFT_LOG_LEVEL_ERR:
return expr.LogLevelErr, nil
case NFT_LOG_LEVEL_WARN:
return expr.LogLevelWarning, nil
case NFT_LOG_LEVEL_NOTICE:
return expr.LogLevelNotice, nil
case NFT_LOG_LEVEL_INFO:
return expr.LogLevelInfo, nil
case NFT_LOG_LEVEL_DEBUG:
return expr.LogLevelDebug, nil
case NFT_LOG_LEVEL_AUDIT:
return expr.LogLevelAudit, nil
}
return 0, fmt.Errorf("Invalid log level: %s", what)
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/log_test.go 0000664 0000000 0000000 00000006302 15003540030 0024046 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
exprs "github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
func TestExprLog(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
type logTestsT struct {
nftest.TestsT
statem *config.ExprStatement
}
tests := []logTestsT{
{
TestsT: nftest.TestsT{
Name: "test-log-prefix-simple",
Values: []*config.ExprValues{
&config.ExprValues{
Key: "prefix",
Value: "counter-test",
},
},
ExpectedExprs: []interface{}{
&expr.Log{
Key: 1 << unix.NFTA_LOG_PREFIX,
Data: []byte("counter-test"),
},
},
ExpectedExprsNum: 1,
ExpectedFail: false,
},
statem: &config.ExprStatement{
Op: "==",
Name: "log",
},
},
{
TestsT: nftest.TestsT{
Name: "test-log-prefix-emerg",
Values: []*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_LOG_PREFIX,
Value: "counter-test-emerg",
},
&config.ExprValues{
Key: exprs.NFT_LOG_LEVEL,
Value: exprs.NFT_LOG_LEVEL_EMERG,
},
},
ExpectedExprs: []interface{}{
&expr.Log{
Key: (1 << unix.NFTA_LOG_PREFIX) | (1 << unix.NFTA_LOG_LEVEL),
Level: expr.LogLevelEmerg,
Data: []byte("counter-test-emerg"),
},
},
ExpectedExprsNum: 1,
ExpectedFail: false,
},
statem: &config.ExprStatement{
Op: "==",
Name: "log",
},
},
{
TestsT: nftest.TestsT{
Name: "test-invalid-log-prefix",
Values: []*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_LOG_PREFIX,
Value: "",
},
&config.ExprValues{
Key: exprs.NFT_LOG_LEVEL,
Value: exprs.NFT_LOG_LEVEL_EMERG,
},
},
ExpectedExprs: []interface{}{},
ExpectedExprsNum: 0,
ExpectedFail: true,
},
statem: &config.ExprStatement{
Op: "==",
Name: "log",
},
},
{
TestsT: nftest.TestsT{
Name: "test-invalid-log-level",
Values: []*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_LOG_PREFIX,
Value: "counter-invalid-level",
},
&config.ExprValues{
Key: exprs.NFT_LOG_LEVEL,
Value: "",
},
},
ExpectedExprs: []interface{}{},
ExpectedExprsNum: 0,
ExpectedFail: true,
},
statem: &config.ExprStatement{
Op: "==",
Name: "log",
},
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
test.statem.Values = test.TestsT.Values
logExpr, err := exprs.NewExprLog(test.statem)
if err != nil && !test.ExpectedFail {
				t.Errorf("Error creating expr Log: %s", err)
return
} else if err != nil && test.ExpectedFail {
return
}
r, _ := nftest.AddTestRule(t, conn, logExpr)
			if r == nil {
				t.Error("Error adding rule with log expression")
				return
			}
if !nftest.AreExprsValid(t, &test.TestsT, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/meta.go 0000664 0000000 0000000 00000006643 15003540030 0023164 0 ustar 00root root 0000000 0000000 package exprs
import (
"fmt"
"strconv"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
)
// NewExprMeta creates a new meta selector to match or set packet metainformation.
// https://wiki.nftables.org/wiki-nftables/index.php/Matching_packet_metainformation
func NewExprMeta(values []*config.ExprValues, cmpOp *expr.CmpOp) (*[]expr.Any, error) {
setMark := false
metaExpr := []expr.Any{}
for _, meta := range values {
switch meta.Key {
case NFT_META_SET_MARK:
setMark = true
continue
case NFT_META_MARK:
metaKey, err := getMetaKey(meta.Key)
if err != nil {
return nil, err
}
metaVal, err := getMetaValue(meta.Value)
if err != nil {
return nil, err
}
if setMark {
metaExpr = append(metaExpr, []expr.Any{
&expr.Immediate{
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(uint32(metaVal)),
}}...)
metaExpr = append(metaExpr, []expr.Any{
&expr.Meta{Key: metaKey, Register: 1, SourceRegister: setMark}}...)
} else {
metaExpr = append(metaExpr, []expr.Any{
&expr.Meta{Key: metaKey, Register: 1, SourceRegister: setMark},
&expr.Cmp{
Op: *cmpOp,
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(uint32(metaVal)),
}}...)
}
setMark = false
return &metaExpr, nil
case NFT_META_L4PROTO:
mexpr, err := NewExprProtocol(meta.Key)
if err != nil {
return nil, err
}
metaExpr = append(metaExpr, *mexpr...)
return &metaExpr, nil
case NFT_META_PRIORITY,
NFT_META_SKUID, NFT_META_SKGID,
NFT_META_PROTOCOL:
metaKey, err := getMetaKey(meta.Key)
if err != nil {
return nil, err
}
metaVal, err := getProtocolCode(meta.Value)
if err != nil {
return nil, err
}
metaExpr = append(metaExpr, []expr.Any{
&expr.Meta{Key: metaKey, Register: 1, SourceRegister: setMark},
&expr.Cmp{
Op: *cmpOp,
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(uint32(metaVal)),
}}...)
setMark = false
return &metaExpr, nil
case NFT_META_NFTRACE:
mark, err := getMetaValue(meta.Value)
if err != nil {
return nil, err
}
if mark != 0 && mark != 1 {
				return nil, fmt.Errorf("nftables: invalid nftrace value: %d, only 0 or 1 allowed", mark)
}
// TODO: not working yet
return &[]expr.Any{
&expr.Meta{Key: expr.MetaKeyNFTRACE, Register: 1},
&expr.Cmp{
Op: *cmpOp,
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(uint32(mark)),
},
}, nil
default:
// not supported yet
}
}
	return nil, fmt.Errorf("nftables: meta keyword not supported yet, open a new issue on github")
}
func getMetaValue(value string) (int, error) {
metaVal, err := strconv.Atoi(value)
if err != nil {
return 0, err
}
return metaVal, nil
}
// https://github.com/google/nftables/blob/main/expr/expr.go#L168
func getMetaKey(value string) (expr.MetaKey, error) {
switch value {
case NFT_META_MARK:
return expr.MetaKeyMARK, nil
case NFT_META_PRIORITY:
return expr.MetaKeyPRIORITY, nil
case NFT_META_SKUID:
return expr.MetaKeySKUID, nil
case NFT_META_SKGID:
return expr.MetaKeySKGID, nil
// ip, ip6, arp, vlan
case NFT_META_PROTOCOL:
return expr.MetaKeyPROTOCOL, nil
case NFT_META_L4PROTO:
return expr.MetaKeyL4PROTO, nil
}
return expr.MetaKeyPRANDOM, fmt.Errorf("meta key %s not supported (yet)", value)
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/meta_test.go 0000664 0000000 0000000 00000007777 15003540030 0024234 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
)
func TestExprMeta(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
tests := []nftest.TestsT{
{
"test-meta-mark",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_META_MARK,
Value: "666",
},
},
2,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeyMARK,
Register: 1,
SourceRegister: false,
},
&expr.Cmp{
Data: binaryutil.NativeEndian.PutUint32(uint32(666)),
},
},
false,
},
{
"test-meta-set-mark",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_META_SET_MARK,
Value: "",
},
&config.ExprValues{
Key: exprs.NFT_META_MARK,
Value: "666",
},
},
2,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: binaryutil.NativeEndian.PutUint32(666),
},
&expr.Meta{
Key: expr.MetaKeyMARK,
Register: 1,
SourceRegister: true,
},
},
false,
},
{
"test-meta-priority",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_META_PRIORITY,
Value: "1",
},
},
2,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeyPRIORITY,
Register: 1,
SourceRegister: false,
},
&expr.Cmp{
Data: binaryutil.NativeEndian.PutUint32(uint32(1)),
},
},
false,
},
{
"test-meta-skuid",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_META_SKUID,
Value: "1",
},
},
2,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeySKUID,
Register: 1,
SourceRegister: false,
},
&expr.Cmp{
Data: binaryutil.NativeEndian.PutUint32(uint32(1)),
},
},
false,
},
{
"test-meta-skgid",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_META_SKGID,
Value: "1",
},
},
2,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeySKGID,
Register: 1,
SourceRegister: false,
},
&expr.Cmp{
Data: binaryutil.NativeEndian.PutUint32(uint32(1)),
},
},
false,
},
{
"test-meta-protocol",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_META_PROTOCOL,
Value: "15",
},
},
2,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeyPROTOCOL,
Register: 1,
SourceRegister: false,
},
&expr.Cmp{
Data: binaryutil.NativeEndian.PutUint32(uint32(15)),
},
},
false,
},
// tested more in depth in protocol_test.go
{
"test-meta-l4proto",
exprs.NFT_FAMILY_IP,
"",
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_META_L4PROTO,
Value: "15",
},
},
1,
[]interface{}{
&expr.Meta{
Key: expr.MetaKeyL4PROTO,
Register: 1,
SourceRegister: false,
},
},
false,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
cmp := expr.CmpOpEq
metaExpr, err := exprs.NewExprMeta(test.Values, &cmp)
if err != nil && !test.ExpectedFail {
t.Errorf("Error creating expr Meta: %s", err)
return
} else if err != nil && test.ExpectedFail {
return
}
r, _ := nftest.AddTestRule(t, conn, metaExpr)
if r == nil && !test.ExpectedFail {
t.Error("Error adding rule with Meta expression")
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/nat.go 0000664 0000000 0000000 00000010067 15003540030 0023013 0 ustar 00root root 0000000 0000000 package exprs
import (
"fmt"
"net"
"strconv"
"strings"
"github.com/google/nftables"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
// NewExprNATFlags returns the configured NAT flags,
// common to masquerade, snat and dnat.
func NewExprNATFlags(parms string) (random, fullrandom, persistent bool) {
masqParms := strings.Split(parms, ",")
for _, mParm := range masqParms {
switch mParm {
case NFT_MASQ_RANDOM:
random = true
case NFT_MASQ_FULLY_RANDOM:
fullrandom = true
case NFT_MASQ_PERSISTENT:
persistent = true
}
}
return
}
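// Illustrative usage (editor's sketch, not part of the original source),
// assuming the flag constants map to "random", "fully-random" and "persistent":
//
//	random, fullRandom, persistent := NewExprNATFlags("random,persistent")
//	// random == true, fullRandom == false, persistent == true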
// NewExprNAT parses the redirection target of the redirect, snat, dnat, tproxy and masquerade verdicts:
// to x.y.z.a:abcd
// If only the IP is specified (to 1.2.3.4), only NAT.RegAddrMin must be present (regAddr == true)
// If only the port is specified (to :1234), only NAT.RegPortMin must be present (regPort == true)
// If both addr and port are specified (to 1.2.3.4:1234), NAT.RegPortMin and NAT.RegAddrMin must be present.
func NewExprNAT(parms, verdict string) (bool, bool, *[]expr.Any, error) {
regAddr := false
regProto := false
exprNAT := []expr.Any{}
NATParms := strings.Split(parms, " ")
idx := 0
// exclude first parameter if it's "to"
if NATParms[idx] == NFT_PARM_TO {
idx++
}
if idx == len(NATParms) {
return regAddr, regProto, &exprNAT, fmt.Errorf("Invalid parms: %s", parms)
}
dParms := strings.Split(NATParms[idx], ":")
// masquerade doesn't allow "to IP"
if dParms[0] != "" && verdict != VERDICT_MASQUERADE {
dIP := dParms[0]
destIP := net.ParseIP(dIP)
if destIP == nil {
return regAddr, regProto, &exprNAT, fmt.Errorf("Invalid IP: %s", dIP)
}
exprNAT = append(exprNAT, []expr.Any{
&expr.Immediate{
Register: 1,
Data: destIP.To4(),
}}...)
regAddr = true
}
if len(dParms) == 2 {
dPort := dParms[1]
// TODO: support ranges. 9000-9100
destPort, err := strconv.Atoi(dPort)
if err != nil {
return regAddr, regProto, &exprNAT, fmt.Errorf("Invalid Port: %s", dPort)
}
reg := uint32(2)
toPort := binaryutil.BigEndian.PutUint16(uint16(destPort))
// if reg=1 (RegAddrMin=1) is not set, this error appears listing the rules
// "netlink: Error: NAT statement has no proto expression"
if verdict == VERDICT_TPROXY || verdict == VERDICT_MASQUERADE || verdict == VERDICT_REDIRECT {
// according to https://github.com/google/nftables/blob/8a10f689006bf728a5cff35787713047f68e308a/nftables_test.go#L4871
// Masquerade ports should be specified like this:
// toPort = binaryutil.BigEndian.PutUint32(uint32(destPort) << 16)
// but then it's not added/listed correctly with nft.
reg = 1
}
exprNAT = append(exprNAT, []expr.Any{
&expr.Immediate{
Register: reg,
Data: toPort,
}}...)
regProto = true
}
return regAddr, regProto, &exprNAT, nil
}
// NewExprMasquerade returns a new masquerade expression.
func NewExprMasquerade(toPorts, random, fullRandom, persistent bool) *[]expr.Any {
exprMasq := &expr.Masq{
ToPorts: toPorts,
Random: random,
FullyRandom: fullRandom,
Persistent: persistent,
}
if toPorts {
exprMasq.RegProtoMin = 1
}
return &[]expr.Any{
exprMasq,
}
}
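// Illustrative usage (editor's sketch, not part of the original source):
//
//	masq := NewExprMasquerade(true, false, false, false)
//	// masquerade with "to :port" redirection: RegProtoMin is set to 1,
//	// so a previous Immediate expression must have loaded the port into register 1.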
// NewExprRedirect returns a new redirect expression.
func NewExprRedirect() *[]expr.Any {
return &[]expr.Any{
// Redirect is a special case of DNAT where the destination is the current machine
&expr.Redir{
RegisterProtoMin: 1,
},
}
}
// NewExprSNAT returns a new snat expression.
func NewExprSNAT() *expr.NAT {
return &expr.NAT{
Type: expr.NATTypeSourceNAT,
Family: unix.NFPROTO_IPV4,
}
}
// NewExprDNAT returns a new dnat expression.
func NewExprDNAT() *expr.NAT {
return &expr.NAT{
Type: expr.NATTypeDestNAT,
Family: unix.NFPROTO_IPV4,
}
}
// NewExprTproxy returns a new tproxy expression.
// XXX: is "to x.x.x.x:1234" supported by google/nftables lib? or only "to :1234"?
// it creates an erroneous rule.
func NewExprTproxy() *[]expr.Any {
return &[]expr.Any{
&expr.TProxy{
Family: byte(nftables.TableFamilyIPv4),
TableFamily: byte(nftables.TableFamilyIPv4),
RegPort: 1,
}}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/nat_test.go 0000664 0000000 0000000 00000030042 15003540030 0024045 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"net"
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
func TestExprVerdictSNAT(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
// TODO: test random, fully-random, persistent flags.
tests := []nftest.TestsT{
{
"test-nat-snat-to-127001",
exprs.NFT_FAMILY_IP,
"to 127.0.0.1",
nil,
2,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.NAT{
Type: expr.NATTypeSourceNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 1,
},
},
false,
},
{
"test-nat-snat-127001",
exprs.NFT_FAMILY_IP,
"127.0.0.1",
nil,
2,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.NAT{
Type: expr.NATTypeSourceNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 1,
},
},
false,
},
{
"test-nat-snat-to-127001:12345",
exprs.NFT_FAMILY_IP,
"to 127.0.0.1:12345",
nil,
3,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.Immediate{
Register: uint32(2),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.NAT{
Type: expr.NATTypeSourceNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 1,
RegProtoMin: 2,
},
},
false,
},
{
"test-nat-snat-to-:12345",
exprs.NFT_FAMILY_IP,
"to :12345",
nil,
2,
[]interface{}{
&expr.Immediate{
Register: uint32(2),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.NAT{
Type: expr.NATTypeSourceNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 0,
RegProtoMin: 2,
},
},
false,
},
{
"test-nat-snat-127001:12345",
exprs.NFT_FAMILY_IP,
"127.0.0.1:12345",
nil,
3,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.Immediate{
Register: uint32(2),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.NAT{
Type: expr.NATTypeSourceNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 1,
RegProtoMin: 2,
},
},
false,
},
{
"test-invalid-nat-snat-to-",
exprs.NFT_FAMILY_IP,
"to",
nil,
3,
[]interface{}{},
true,
},
{
"test-invalid-nat-snat-to-invalid-ip",
exprs.NFT_FAMILY_IP,
"to 127..0.0.1",
nil,
3,
[]interface{}{},
true,
},
{
"test-invalid-nat-snat-to-invalid-port",
exprs.NFT_FAMILY_IP,
"to 127.0.0.1:aaa",
nil,
3,
[]interface{}{},
true,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
verdExpr := exprs.NewExprVerdict(exprs.VERDICT_SNAT, test.Parms)
if !test.ExpectedFail && verdExpr == nil {
t.Errorf("error creating snat verdict")
} else if test.ExpectedFail && verdExpr == nil {
return
}
r, _ := nftest.AddTestSNATRule(t, conn, verdExpr)
if r == nil {
t.Errorf("Error adding rule")
return
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
func TestExprVerdictDNAT(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
tests := []nftest.TestsT{
{
"test-nat-dnat-to-127001",
exprs.NFT_FAMILY_IP,
"to 127.0.0.1",
nil,
2,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.NAT{
Type: expr.NATTypeDestNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 1,
},
},
false,
},
{
"test-nat-dnat-127001",
exprs.NFT_FAMILY_IP,
"127.0.0.1",
nil,
2,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.NAT{
Type: expr.NATTypeDestNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 1,
},
},
false,
},
{
"test-nat-dnat-to-127001:12345",
exprs.NFT_FAMILY_IP,
"to 127.0.0.1:12345",
nil,
3,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.Immediate{
Register: uint32(2),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.NAT{
Type: expr.NATTypeDestNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 1,
RegProtoMin: 2,
},
},
false,
},
{
"test-nat-dnat-to-:12345",
exprs.NFT_FAMILY_IP,
"to :12345",
nil,
2,
[]interface{}{
&expr.Immediate{
Register: uint32(2),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.NAT{
Type: expr.NATTypeDestNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 0,
RegProtoMin: 2,
},
},
false,
},
{
"test-nat-dnat-127001:12345",
exprs.NFT_FAMILY_IP,
"127.0.0.1:12345",
nil,
3,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.Immediate{
Register: uint32(2),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.NAT{
Type: expr.NATTypeDestNAT,
Family: unix.NFPROTO_IPV4,
Random: false,
FullyRandom: false,
Persistent: false,
RegAddrMin: 1,
RegProtoMin: 2,
},
},
false,
},
{
"test-invalid-nat-dnat-to-",
exprs.NFT_FAMILY_IP,
"to",
nil,
3,
[]interface{}{},
true,
},
{
"test-invalid-nat-dnat-to-invalid-ip",
exprs.NFT_FAMILY_IP,
"to 127..0.0.1",
nil,
3,
[]interface{}{},
true,
},
{
"test-invalid-nat-dnat-to-invalid-port",
exprs.NFT_FAMILY_IP,
"to 127.0.0.1:aaa",
nil,
3,
[]interface{}{},
true,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
verdExpr := exprs.NewExprVerdict(exprs.VERDICT_DNAT, test.Parms)
if !test.ExpectedFail && verdExpr == nil {
t.Errorf("error creating verdict")
} else if test.ExpectedFail && verdExpr == nil {
return
}
r, _ := nftest.AddTestDNATRule(t, conn, verdExpr)
if r == nil {
t.Errorf("Error adding rule")
return
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
func TestExprVerdictMasquerade(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
tests := []nftest.TestsT{
{
"test-nat-masq-to-:12345",
exprs.NFT_FAMILY_IP,
"to :12345",
nil,
2,
[]interface{}{
&expr.Immediate{
Register: uint32(1),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.Masq{
ToPorts: true,
Random: false,
FullyRandom: false,
Persistent: false,
},
},
false,
},
{
"test-nat-masq-flags",
exprs.NFT_FAMILY_IP,
"random,fully-random,persistent",
nil,
1,
[]interface{}{
&expr.Masq{
ToPorts: false,
Random: true,
FullyRandom: true,
Persistent: true,
},
},
false,
},
{
"test-nat-masq-empty",
exprs.NFT_FAMILY_IP,
"",
nil,
1,
[]interface{}{
&expr.Masq{},
},
false,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
verdExpr := exprs.NewExprVerdict(exprs.VERDICT_MASQUERADE, test.Parms)
if !test.ExpectedFail && verdExpr == nil {
t.Errorf("error creating verdict")
} else if test.ExpectedFail && verdExpr == nil {
return
}
r, _ := nftest.AddTestSNATRule(t, conn, verdExpr)
if r == nil {
t.Errorf("Error adding rule")
return
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
func TestExprVerdictRedirect(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
tests := []nftest.TestsT{
{
"test-nat-redir-to-127001:12345",
exprs.NFT_FAMILY_IP,
"to 127.0.0.1:12345",
nil,
3,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.Immediate{
Register: uint32(1),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.Redir{
RegisterProtoMin: 1,
},
},
false,
},
{
"test-nat-redir-to-:12345",
exprs.NFT_FAMILY_IP,
"to :12345",
nil,
2,
[]interface{}{
&expr.Immediate{
Register: uint32(1),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.Redir{
RegisterProtoMin: 1,
},
},
false,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
verdExpr := exprs.NewExprVerdict(exprs.VERDICT_REDIRECT, test.Parms)
if !test.ExpectedFail && verdExpr == nil {
t.Errorf("error creating verdict")
} else if test.ExpectedFail && verdExpr == nil {
return
}
r, _ := nftest.AddTestDNATRule(t, conn, verdExpr)
if r == nil {
t.Errorf("Error adding rule")
return
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
func TestExprVerdictTProxy(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
tests := []nftest.TestsT{
{
"test-nat-tproxy-to-127001:12345",
exprs.NFT_FAMILY_IP,
"to 127.0.0.1:12345",
nil,
4,
[]interface{}{
&expr.Immediate{
Register: 1,
Data: net.ParseIP("127.0.0.1").To4(),
},
&expr.Immediate{
Register: uint32(1),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.TProxy{
Family: byte(nftables.TableFamilyIPv4),
TableFamily: byte(nftables.TableFamilyIPv4),
RegPort: 1,
},
&expr.Verdict{
Kind: expr.VerdictAccept,
},
},
false,
},
{
"test-nat-tproxy-to-:12345",
exprs.NFT_FAMILY_IP,
"to :12345",
nil,
3,
[]interface{}{
&expr.Immediate{
Register: uint32(1),
Data: binaryutil.BigEndian.PutUint16(uint16(12345)),
},
&expr.TProxy{
Family: byte(nftables.TableFamilyIPv4),
TableFamily: byte(nftables.TableFamilyIPv4),
RegPort: 1,
},
&expr.Verdict{
Kind: expr.VerdictAccept,
},
},
false,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
verdExpr := exprs.NewExprVerdict(exprs.VERDICT_TPROXY, test.Parms)
if !test.ExpectedFail && verdExpr == nil {
t.Errorf("error creating verdict")
} else if test.ExpectedFail && verdExpr == nil {
return
}
r, _ := nftest.AddTestDNATRule(t, conn, verdExpr)
if r == nil {
t.Errorf("Error adding rule")
return
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/notrack.go 0000664 0000000 0000000 00000000304 15003540030 0023663 0 ustar 00root root 0000000 0000000 package exprs
import "github.com/google/nftables/expr"
// NewNoTrack returns a new expression to disable connection tracking (notrack).
func NewNoTrack() *[]expr.Any {
return &[]expr.Any{
&expr.Notrack{},
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/operator.go 0000664 0000000 0000000 00000001107 15003540030 0024057 0 ustar 00root root 0000000 0000000 package exprs
import (
"github.com/google/nftables/expr"
)
// NewOperator translates a string comparison operator to an nftables operator.
func NewOperator(operator string) expr.CmpOp {
switch operator {
case "!=":
return expr.CmpOpNeq
case ">":
return expr.CmpOpGt
case ">=":
return expr.CmpOpGte
case "<":
return expr.CmpOpLt
case "<=":
return expr.CmpOpLte
}
return expr.CmpOpEq
}
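// Illustrative usage (editor's sketch, not part of the original source):
//
//	op := NewOperator(">=") // expr.CmpOpGte
//	op = NewOperator("foo") // any unknown operator falls back to expr.CmpOpEq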
// NewExprOperator returns a new Cmp expression with the given comparison operator.
func NewExprOperator(op expr.CmpOp) *[]expr.Any {
return &[]expr.Any{
&expr.Cmp{
Register: 1,
Op: op,
},
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/port.go 0000664 0000000 0000000 00000004321 15003540030 0023211 0 ustar 00root root 0000000 0000000 package exprs
import (
"fmt"
"strconv"
"strings"
"github.com/google/nftables"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
)
// NewExprPort returns a new port expression with the given matching operator.
func NewExprPort(port string, op *expr.CmpOp) (*[]expr.Any, error) {
eport, err := strconv.Atoi(port)
if err != nil {
return nil, err
}
return &[]expr.Any{
&expr.Cmp{
Register: 1,
Op: *op,
Data: binaryutil.BigEndian.PutUint16(uint16(eport))},
}, nil
}
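// Illustrative usage (editor's sketch, not part of the original source):
//
//	op := expr.CmpOpEq
//	e, err := NewExprPort("53", &op)
//	// e compares register 1 against port 53, encoded as a big-endian uint16;
//	// a non-numeric port ("45,", "") makes strconv.Atoi fail and err is returned.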
// NewExprPortRange returns a new port range expression.
func NewExprPortRange(sport string, cmpOp *expr.CmpOp) (*[]expr.Any, error) {
ports := strings.Split(sport, "-")
iport, err := strconv.Atoi(ports[0])
if err != nil {
return nil, err
}
eport, err := strconv.Atoi(ports[1])
if err != nil {
return nil, err
}
return &[]expr.Any{
&expr.Range{
Op: *cmpOp,
Register: 1,
FromData: binaryutil.BigEndian.PutUint16(uint16(iport)),
ToData: binaryutil.BigEndian.PutUint16(uint16(eport)),
},
}, nil
}
// NewExprPortSet returns a new set of ports.
func NewExprPortSet(portv string) *[]nftables.SetElement {
setElements := []nftables.SetElement{}
ports := strings.Split(portv, ",")
for _, portv := range ports {
portExpr := exprPortSubSet(portv)
if portExpr != nil {
setElements = append(setElements, *portExpr...)
}
}
return &setElements
}
func exprPortSubSet(portv string) *[]nftables.SetElement {
port, err := strconv.Atoi(portv)
if err != nil {
return nil
}
return &[]nftables.SetElement{
{Key: binaryutil.BigEndian.PutUint16(uint16(port))},
}
}
// NewExprPortDirection returns a new expression to match connections based on
// the direction of the connection (source, dest)
func NewExprPortDirection(direction string) (*expr.Payload, error) {
switch direction {
case NFT_DPORT:
return &expr.Payload{
DestRegister: 1,
Base: expr.PayloadBaseTransportHeader,
Offset: 2,
Len: 2,
}, nil
case NFT_SPORT:
return &expr.Payload{
DestRegister: 1,
Base: expr.PayloadBaseTransportHeader,
Offset: 0,
Len: 2,
}, nil
default:
return nil, fmt.Errorf("Not valid protocol direction: %s", direction)
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/port_test.go 0000664 0000000 0000000 00000006244 15003540030 0024256 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"bytes"
"fmt"
"reflect"
"testing"
exprs "github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
)
type portTestsT struct {
port string
portVal int
cmp expr.CmpOp
shouldFail bool
}
func TestExprPort(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
portTests := []portTestsT{
{"53", 53, expr.CmpOpEq, false},
{"80", 80, expr.CmpOpEq, false},
{"65535", 65535, expr.CmpOpEq, false},
{"45,", 0, expr.CmpOpEq, true},
{"", 0, expr.CmpOpEq, true},
}
for _, test := range portTests {
t.Run(fmt.Sprint("test-", test.port), func(t *testing.T) {
portExpr, err := exprs.NewExprPort(test.port, &test.cmp)
if err != nil {
if !test.shouldFail {
t.Errorf("Error creating expr port: %v, %s", test, err)
}
return
}
//fmt.Printf("%s, %+v\n", test.port, *portExpr)
r, _ := nftest.AddTestRule(t, conn, portExpr)
if r == nil {
t.Errorf("Error adding rule with port (%s) expression", test.port)
return
}
e := r.Exprs[0]
cmp, ok := e.(*expr.Cmp)
if !ok {
t.Errorf("%s - invalid port expr: %T", test.port, e)
}
//fmt.Printf("%s, %+v\n", reflect.TypeOf(e).String(), e)
if reflect.TypeOf(e).String() != "*expr.Cmp" {
t.Errorf("%s - first expression should be *expr.Cmp, instead of: %s", test.port, reflect.TypeOf(e))
}
portVal := binaryutil.BigEndian.PutUint16(uint16(test.portVal))
if !bytes.Equal(cmp.Data, portVal) {
t.Errorf("%s - invalid port in expr.Cmp: %d", test.port, cmp.Data)
}
})
}
}
func TestExprPortRange(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
portTests := []portTestsT{
{"53-5353", 53, expr.CmpOpEq, false},
{"80-8080", 80, expr.CmpOpEq, false},
{"1-65535", 65535, expr.CmpOpEq, false},
{"1,45,", 0, expr.CmpOpEq, true},
{"1-2.", 0, expr.CmpOpEq, true},
}
for _, test := range portTests {
t.Run(fmt.Sprint("test-", test.port), func(t *testing.T) {
portExpr, err := exprs.NewExprPortRange(test.port, &test.cmp)
if err != nil {
if !test.shouldFail {
t.Errorf("Error creating expr port range: %v, %s", test, err)
}
return
}
//fmt.Printf("%s, %+v\n", test.port, *portExpr)
r, _ := nftest.AddTestRule(t, conn, portExpr)
if r == nil {
t.Errorf("Error adding rule with port range (%s) expression", test.port)
return
}
e := r.Exprs[0]
_, ok := e.(*expr.Range)
if !ok {
t.Errorf("%s - invalid port range expr: %T", test.port, e)
}
//fmt.Printf("%s, %+v\n", reflect.TypeOf(e).String(), e)
if reflect.TypeOf(e).String() != "*expr.Range" {
t.Errorf("%s - first expression should be *expr.Range, instead of: %s", test.port, reflect.TypeOf(e))
}
/*portVal := binaryutil.BigEndian.PutUint16(uint16(test.portVal))
if !bytes.Equal(range.FromData, portVal) {
t.Errorf("%s - invalid port range in expr.Cmp: %d", test.port, cmp.Data)
}*/
})
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/protocol.go 0000664 0000000 0000000 00000005361 15003540030 0024073 0 ustar 00root root 0000000 0000000 package exprs
import (
"fmt"
"strings"
"github.com/google/nftables"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
// NewExprProtocol creates a new expression to filter connections by protocol
func NewExprProtocol(proto string) (*[]expr.Any, error) {
protoExpr := expr.Meta{Key: expr.MetaKeyL4PROTO, Register: 1}
switch strings.ToLower(proto) {
case NFT_META_L4PROTO:
return &[]expr.Any{
&protoExpr,
}, nil
case NFT_PROTO_UDP:
return &[]expr.Any{
&protoExpr,
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_UDP},
},
}, nil
case NFT_PROTO_TCP:
return &[]expr.Any{
&protoExpr,
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_TCP},
},
}, nil
case NFT_PROTO_UDPLITE:
return &[]expr.Any{
&protoExpr,
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_UDPLITE},
},
}, nil
case NFT_PROTO_SCTP:
return &[]expr.Any{
&protoExpr,
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_SCTP},
},
}, nil
case NFT_PROTO_DCCP:
return &[]expr.Any{
&protoExpr,
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_DCCP},
},
}, nil
case NFT_PROTO_ICMP:
return &[]expr.Any{
&protoExpr,
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_ICMP},
},
}, nil
case NFT_PROTO_ICMPv6:
return &[]expr.Any{
&protoExpr,
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_ICMPV6},
},
}, nil
/*TODO: could be simplified
default:
proto, err := getProtocolCode(value)
if err != nil {
return nil, err
}
return &[]expr.Any{
protoExpr,
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{byte(proto)},
},
}, nil
*/
default:
return nil, fmt.Errorf("Not valid protocol rule, invalid or not supported protocol: %s", proto)
}
}
// NewExprProtoSet creates a new list of SetElements{}, to match
// multiple protocol values.
func NewExprProtoSet(l4prots string) *[]nftables.SetElement {
protoList := strings.Split(l4prots, ",")
protoSet := []nftables.SetElement{}
for _, name := range protoList {
pcode, err := getProtocolCode(name)
if err != nil {
continue
}
protoSet = append(protoSet,
[]nftables.SetElement{
{Key: []byte{byte(pcode)}},
}...)
}
return &protoSet
}
// NewExprL4Proto returns a new expression to match a protocol.
func NewExprL4Proto(name string, cmpOp *expr.CmpOp) *[]expr.Any {
proto, _ := getProtocolCode(name)
return &[]expr.Any{
&expr.Cmp{
Op: *cmpOp,
Register: 1,
Data: []byte{byte(proto)},
},
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/protocol_test.go 0000664 0000000 0000000 00000004504 15003540030 0025130 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"fmt"
"reflect"
"testing"
exprs "github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
func TestExprProtocol(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
testProtos := []string{
exprs.NFT_PROTO_TCP,
exprs.NFT_PROTO_UDP,
exprs.NFT_PROTO_UDPLITE,
exprs.NFT_PROTO_SCTP,
exprs.NFT_PROTO_DCCP,
exprs.NFT_PROTO_ICMP,
exprs.NFT_PROTO_ICMPv6,
}
protoValues := []byte{
unix.IPPROTO_TCP,
unix.IPPROTO_UDP,
unix.IPPROTO_UDPLITE,
unix.IPPROTO_SCTP,
unix.IPPROTO_DCCP,
unix.IPPROTO_ICMP,
unix.IPPROTO_ICMPV6,
}
for idx, proto := range testProtos {
t.Run(fmt.Sprint("test-protoExpr-", proto), func(t *testing.T) {
protoExpr, err := exprs.NewExprProtocol(proto)
if err != nil {
t.Errorf("%s - Error creating expr Protocol: %s", proto, err)
return
}
r, _ := nftest.AddTestRule(t, conn, protoExpr)
if r == nil {
t.Errorf("Error adding rule with proto %s expression", proto)
return
}
if len(r.Exprs) != 2 {
t.Errorf("%s - expected 2 Expressions, found %d", proto, len(r.Exprs))
}
e := r.Exprs[0]
meta, ok := e.(*expr.Meta)
if !ok {
t.Errorf("%s - invalid proto expr: %T", proto, e)
}
//fmt.Printf("%s, %+v\n", reflect.TypeOf(e).String(), e)
if reflect.TypeOf(e).String() != "*expr.Meta" {
t.Errorf("%s - first expression should be *expr.Meta, instead of: %s", proto, reflect.TypeOf(e))
}
if meta.Key != expr.MetaKeyL4PROTO {
t.Errorf("%s - invalid proto expr.Meta.Key: %d", proto, expr.MetaKeyL4PROTO)
}
e = r.Exprs[1]
cmp, ok := e.(*expr.Cmp)
if !ok {
t.Errorf("%s - invalid proto cmp expr: %T", proto, e)
}
//fmt.Printf("%s, %+v\n", reflect.TypeOf(e).String(), e)
if reflect.TypeOf(e).String() != "*expr.Cmp" {
t.Errorf("%s - second expression should be *expr.Cmp, instead of: %s", proto, reflect.TypeOf(e))
}
if cmp.Op != expr.CmpOpEq {
t.Errorf("%s - expr.Cmp should be CmpOpEq, instead of: %d", proto, cmp.Op)
}
if cmp.Data[0] != protoValues[idx] {
t.Errorf("%s - expr.Data differs: %d<->%d", proto, cmp.Data, protoValues[idx])
}
})
}
}
opensnitch-1.6.9/daemon/firewall/nftables/exprs/quota.go 0000664 0000000 0000000 00000003210 15003540030 0023352 0 ustar 00root root 0000000 0000000 package exprs
import (
"fmt"
"strconv"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/google/nftables/expr"
)
// NewQuota returns a new quota expression.
// TODO: named quotas
func NewQuota(opts []*config.ExprValues) (*[]expr.Any, error) {
over := false
bytes := int64(0)
used := int64(0)
for _, opt := range opts {
switch opt.Key {
case NFT_QUOTA_OVER:
over = true
case NFT_QUOTA_UNIT_BYTES:
b, err := strconv.ParseInt(opt.Value, 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid quota bytes: %s", opt.Value)
}
bytes = b
case NFT_QUOTA_USED:
// TODO: support for other size units
b, err := strconv.ParseInt(opt.Value, 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid quota initial consumed bytes: %s", opt.Value)
}
used = b
case NFT_QUOTA_UNIT_KB:
b, err := strconv.ParseInt(opt.Value, 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid quota bytes: %s", opt.Value)
}
bytes = b * 1024
case NFT_QUOTA_UNIT_MB:
b, err := strconv.ParseInt(opt.Value, 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid quota bytes: %s", opt.Value)
}
bytes = (b * 1024) * 1024
case NFT_QUOTA_UNIT_GB:
b, err := strconv.ParseInt(opt.Value, 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid quota bytes: %s", opt.Value)
}
bytes = ((b * 1024) * 1024) * 1024
default:
return nil, fmt.Errorf("invalid quota key: %s", opt.Key)
}
}
if bytes == 0 {
return nil, fmt.Errorf("quota bytes cannot be 0")
}
return &[]expr.Any{
&expr.Quota{
Bytes: uint64(bytes),
Consumed: uint64(used),
Over: over,
},
}, nil
}
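// Illustrative usage (editor's sketch, not part of the original source):
//
//	q, err := NewQuota([]*config.ExprValues{
//		{Key: NFT_QUOTA_OVER, Value: ""},
//		{Key: NFT_QUOTA_UNIT_MB, Value: "10"},
//	})
//	// q matches once more than 10 MB (10 * 1024 * 1024 bytes) have been seen.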
opensnitch-1.6.9/daemon/firewall/nftables/exprs/quota_test.go 0000664 0000000 0000000 00000010375 15003540030 0024423 0 ustar 00root root 0000000 0000000 package exprs_test
import (
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables/expr"
)
func TestExprQuota(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
tests := []nftest.TestsT{
{
"test-quota-over-bytes-12345",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_OVER,
Value: "",
},
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_BYTES,
Value: "12345",
},
},
1,
[]interface{}{
&expr.Quota{
Bytes: uint64(12345),
Consumed: 0,
Over: true,
},
},
false,
},
{
"test-quota-over-kbytes-1",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_OVER,
Value: "",
},
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_KB,
Value: "1",
},
},
1,
[]interface{}{
&expr.Quota{
Bytes: uint64(1024),
Consumed: 0,
Over: true,
},
},
false,
},
{
"test-quota-over-mbytes-1",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_OVER,
Value: "",
},
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_MB,
Value: "1",
},
},
1,
[]interface{}{
&expr.Quota{
Bytes: uint64(1024 * 1024),
Consumed: 0,
Over: true,
},
},
false,
},
{
"test-quota-over-gbytes-1",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_OVER,
Value: "",
},
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_GB,
Value: "1",
},
},
1,
[]interface{}{
&expr.Quota{
Bytes: uint64(1024 * 1024 * 1024),
Consumed: 0,
Over: true,
},
},
false,
},
{
"test-quota-until-gbytes-1",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_GB,
Value: "1",
},
},
1,
[]interface{}{
&expr.Quota{
Bytes: uint64(1024 * 1024 * 1024),
Consumed: 0,
Over: false,
},
},
false,
},
{
"test-quota-consumed-bytes-1024",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_GB,
Value: "1",
},
&config.ExprValues{
Key: exprs.NFT_QUOTA_USED,
Value: "1024",
},
},
1,
[]interface{}{
&expr.Quota{
Bytes: uint64(1024 * 1024 * 1024),
Consumed: 1024,
Over: false,
},
},
false,
},
{
"test-invalid-quota-key",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: "gbyte",
Value: "1",
},
},
1,
[]interface{}{},
true,
},
{
"test-invalid-quota-value",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_GB,
Value: "1a",
},
},
1,
[]interface{}{},
true,
},
{
"test-invalid-quota-value",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_GB,
Value: "",
},
},
1,
[]interface{}{},
true,
},
{
"test-invalid-quota-bytes-0",
"", // family
"", // parms
[]*config.ExprValues{
&config.ExprValues{
Key: exprs.NFT_QUOTA_UNIT_GB,
Value: "0",
},
},
1,
[]interface{}{},
true,
},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
quotaExpr, err := exprs.NewQuota(test.Values)
if err != nil && !test.ExpectedFail {
t.Errorf("Error creating expr Quota: %s", quotaExpr)
return
} else if err != nil && test.ExpectedFail {
return
}
r, _ := nftest.AddTestRule(t, conn, quotaExpr)
if r == nil && !test.ExpectedFail {
t.Error("Error adding rule with Quota expression")
}
if !nftest.AreExprsValid(t, &test, r) {
return
}
if test.ExpectedFail {
t.Errorf("test should have failed")
}
})
}
}
// file: opensnitch-1.6.9/daemon/firewall/nftables/exprs/utils.go
package exprs
import (
"strconv"
"github.com/google/gopacket/layers"
"golang.org/x/sys/unix"
)
// GetICMPRejectCode returns the code by its name.
func GetICMPRejectCode(reason string) uint8 {
switch reason {
case ICMP_HOST_UNREACHABLE, ICMP_ADDR_UNREACHABLE:
return layers.ICMPv4CodeHost
case ICMP_PROT_UNREACHABLE:
return layers.ICMPv4CodeProtocol
case ICMP_PORT_UNREACHABLE:
return layers.ICMPv4CodePort
case ICMP_ADMIN_PROHIBITED:
return layers.ICMPv4CodeCommAdminProhibited
case ICMP_HOST_PROHIBITED:
return layers.ICMPv4CodeHostAdminProhibited
case ICMP_NET_PROHIBITED:
return layers.ICMPv4CodeNetAdminProhibited
}
return layers.ICMPv4CodeNet
}
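The lookup above is a plain name-to-code switch with a net-unreachable fallback. A minimal, self-contained sketch of the same pattern (the reason strings and numeric codes below are assumptions mirroring the ICMP_* constants and gopacket's ICMPv4 destination-unreachable codes, not the package's actual identifiers):

```go
package main

import "fmt"

// Hypothetical stand-ins for gopacket's layers.ICMPv4Code* constants
// (IANA ICMP type 3, "destination unreachable", codes).
const (
	codeNet      uint8 = 0
	codeHost     uint8 = 1
	codeProtocol uint8 = 2
	codePort     uint8 = 3
)

// rejectCode resolves a human-readable reject reason to its ICMPv4 code,
// following the same switch-on-name shape as GetICMPRejectCode.
// Unknown reasons fall back to net-unreachable (code 0).
func rejectCode(reason string) uint8 {
	switch reason {
	case "host-unreachable", "addr-unreachable":
		return codeHost
	case "prot-unreachable":
		return codeProtocol
	case "port-unreachable":
		return codePort
	}
	return codeNet
}

func main() {
	fmt.Println(rejectCode("port-unreachable")) // 3
	fmt.Println(rejectCode("bogus-reason"))     // 0
}
```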
// GetICMPxRejectCode returns the code by its name.
func GetICMPxRejectCode(reason string) uint8 {
// https://github.com/torvalds/linux/blob/master/net/netfilter/nft_reject.c#L96
// https://github.com/google/gopacket/blob/3aa782ce48d4a525acaebab344cedabfb561f870/layers/icmp4.go#L37
switch reason {
case ICMP_HOST_UNREACHABLE, ICMP_NET_UNREACHABLE:
return unix.NFT_REJECT_ICMP_UNREACH // results in -> net-unreachable???
case ICMP_PROT_UNREACHABLE:
return unix.NFT_REJECT_ICMPX_HOST_UNREACH // results in -> prot-unreachable???
case ICMP_PORT_UNREACHABLE:
return unix.NFT_REJECT_ICMPX_PORT_UNREACH // results in -> host-unreachable???
case ICMP_NO_ROUTE:
return unix.NFT_REJECT_ICMPX_NO_ROUTE // results in -> net-unreachable
}
return unix.NFT_REJECT_ICMP_UNREACH // results in -> net-unreachable???
}
// GetICMPType returns an ICMP type code
func GetICMPType(icmpType string) uint8 {
switch icmpType {
case ICMP_ECHO_REPLY:
return layers.ICMPv4TypeEchoReply
case ICMP_ECHO_REQUEST:
return layers.ICMPv4TypeEchoRequest
case ICMP_SOURCE_QUENCH:
return layers.ICMPv4TypeSourceQuench
case ICMP_DEST_UNREACHABLE:
return layers.ICMPv4TypeDestinationUnreachable
case ICMP_ROUTER_ADVERTISEMENT:
return layers.ICMPv4TypeRouterAdvertisement
case ICMP_ROUTER_SOLICITATION:
return layers.ICMPv4TypeRouterSolicitation
case ICMP_REDIRECT:
return layers.ICMPv4TypeRedirect
case ICMP_TIME_EXCEEDED:
return layers.ICMPv4TypeTimeExceeded
case ICMP_INFO_REQUEST:
return layers.ICMPv4TypeInfoRequest
case ICMP_INFO_REPLY:
return layers.ICMPv4TypeInfoReply
case ICMP_PARAMETER_PROBLEM:
return layers.ICMPv4TypeParameterProblem
case ICMP_TIMESTAMP_REQUEST:
return layers.ICMPv4TypeTimestampRequest
case ICMP_TIMESTAMP_REPLY:
return layers.ICMPv4TypeTimestampReply
case ICMP_ADDRESS_MASK_REQUEST:
return layers.ICMPv4TypeAddressMaskRequest
case ICMP_ADDRESS_MASK_REPLY:
return layers.ICMPv4TypeAddressMaskReply
}
return 0
}
// GetICMPv6Type returns an ICMPv6 type code
func GetICMPv6Type(icmpType string) uint8 {
switch icmpType {
case ICMP_DEST_UNREACHABLE:
return layers.ICMPv6TypeDestinationUnreachable
case ICMP_PACKET_TOO_BIG:
return layers.ICMPv6TypePacketTooBig
case ICMP_TIME_EXCEEDED:
return layers.ICMPv6TypeTimeExceeded
case ICMP_PARAMETER_PROBLEM:
return layers.ICMPv6TypeParameterProblem
case ICMP_ECHO_REQUEST:
return layers.ICMPv6TypeEchoRequest
case ICMP_ECHO_REPLY:
return layers.ICMPv6TypeEchoReply
case ICMP_ROUTER_SOLICITATION:
return layers.ICMPv6TypeRouterSolicitation
case ICMP_ROUTER_ADVERTISEMENT:
return layers.ICMPv6TypeRouterAdvertisement
case ICMP_NEIGHBOUR_SOLICITATION:
return layers.ICMPv6TypeNeighborSolicitation
case ICMP_NEIGHBOUR_ADVERTISEMENT:
return layers.ICMPv6TypeNeighborAdvertisement
case ICMP_REDIRECT:
return layers.ICMPv6TypeRedirect
}
return 0
}
// GetICMPv6RejectCode returns the code by its name.
func GetICMPv6RejectCode(reason string) uint8 {
switch reason {
case ICMP_HOST_UNREACHABLE, ICMP_NET_UNREACHABLE, ICMP_NO_ROUTE:
return layers.ICMPv6CodeNoRouteToDst
case ICMP_ADDR_UNREACHABLE:
return layers.ICMPv6CodeAddressUnreachable
case ICMP_PORT_UNREACHABLE:
return layers.ICMPv6CodePortUnreachable
case ICMP_REJECT_POLICY_FAIL:
return layers.ICMPv6CodeSrcAddressFailedPolicy
case ICMP_REJECT_ROUTE:
return layers.ICMPv6CodeRejectRouteToDst
}
return layers.ICMPv6CodeNoRouteToDst
}
// getProtocolCode will try to return the code of the given protocol.
// If the protocol is not in our list, we'll use the value as decimal.
// So for example IPPROTO_ENCAP (0x62) must be specified as 98.
// https://pkg.go.dev/golang.org/x/sys/unix#pkg-constants
func getProtocolCode(value string) (byte, error) {
switch value {
case NFT_PROTO_TCP:
return unix.IPPROTO_TCP, nil
case NFT_PROTO_UDP:
return unix.IPPROTO_UDP, nil
case NFT_PROTO_UDPLITE:
return unix.IPPROTO_UDPLITE, nil
case NFT_PROTO_SCTP:
return unix.IPPROTO_SCTP, nil
case NFT_PROTO_DCCP:
return unix.IPPROTO_DCCP, nil
case NFT_PROTO_ICMP:
return unix.IPPROTO_ICMP, nil
case NFT_PROTO_ICMPv6:
return unix.IPPROTO_ICMPV6, nil
case NFT_PROTO_AH:
return unix.IPPROTO_AH, nil
case NFT_PROTO_ETHERNET:
return unix.IPPROTO_ETHERNET, nil
case NFT_PROTO_GRE:
return unix.IPPROTO_GRE, nil
case NFT_PROTO_IP:
return unix.IPPROTO_IP, nil
case NFT_PROTO_IPIP:
return unix.IPPROTO_IPIP, nil
case NFT_PROTO_L2TP:
return unix.IPPROTO_L2TP, nil
case NFT_PROTO_COMP:
return unix.IPPROTO_COMP, nil
case NFT_PROTO_IGMP:
return unix.IPPROTO_IGMP, nil
case NFT_PROTO_ESP:
return unix.IPPROTO_ESP, nil
case NFT_PROTO_RAW:
return unix.IPPROTO_RAW, nil
case NFT_PROTO_ENCAP:
return unix.IPPROTO_ENCAP, nil
}
prot, err := strconv.Atoi(value)
if err != nil {
return 0, err
}
return byte(prot), nil
}
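The function above tries the known NFT_PROTO_* names first and only then falls back to parsing the value as a decimal protocol number. A sketch of that two-step resolution (the map below is a hypothetical subset; the real function uses golang.org/x/sys/unix IPPROTO_* constants):

```go
package main

import (
	"fmt"
	"strconv"
)

// knownProtocols is an assumed subset of the name -> IPPROTO_* mapping.
var knownProtocols = map[string]byte{
	"tcp": 6,  // unix.IPPROTO_TCP
	"udp": 17, // unix.IPPROTO_UDP
}

// protocolCode mimics getProtocolCode: known names map directly, anything
// else must be given as a decimal number (e.g. IPPROTO_ENCAP as "98").
func protocolCode(value string) (byte, error) {
	if code, ok := knownProtocols[value]; ok {
		return code, nil
	}
	n, err := strconv.Atoi(value)
	if err != nil {
		return 0, err
	}
	return byte(n), nil
}

func main() {
	c, _ := protocolCode("tcp")
	fmt.Println(c) // 6
	c, _ = protocolCode("98") // IPPROTO_ENCAP, given as decimal
	fmt.Println(c) // 98
	if _, err := protocolCode("encapx"); err != nil {
		fmt.Println("unknown protocol:", err)
	}
}
```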
// file: opensnitch-1.6.9/daemon/firewall/nftables/exprs/verdict.go
package exprs
import (
"strconv"
"strings"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
// NewExprVerdict constructs a new verdict to apply on connections.
func NewExprVerdict(verdict, parms string) *[]expr.Any {
switch strings.ToLower(verdict) {
case VERDICT_ACCEPT:
return NewExprAccept()
case VERDICT_DROP:
return &[]expr.Any{&expr.Verdict{
Kind: expr.VerdictDrop,
}}
// FIXME: this verdict is not added to nftables
case VERDICT_STOP:
return &[]expr.Any{&expr.Verdict{
Kind: expr.VerdictStop,
}}
case VERDICT_REJECT:
reject := NewExprReject(parms)
return &[]expr.Any{reject}
case VERDICT_RETURN:
return &[]expr.Any{&expr.Verdict{
Kind: expr.VerdictReturn,
}}
case VERDICT_JUMP:
return &[]expr.Any{
&expr.Verdict{
Kind: expr.VerdictKind(unix.NFT_JUMP),
Chain: parms,
},
}
case VERDICT_QUEUE:
queueNum := 0
var err error
p := strings.Split(parms, " ")
if len(p) == 0 {
log.Warning("invalid Queue expr parameters")
return nil
}
// TODO: allow to configure this flag
if p[0] == NFT_QUEUE_NUM {
queueNum, err = strconv.Atoi(p[len(p)-1])
if err != nil {
log.Warning("invalid Queue num: %s", err)
return nil
}
}
return &[]expr.Any{
&expr.Queue{
Num: uint16(queueNum),
Flag: expr.QueueFlagBypass,
}}
case VERDICT_SNAT:
snat := NewExprSNAT()
snat.Random, snat.FullyRandom, snat.Persistent = NewExprNATFlags(parms)
snatExpr := &[]expr.Any{snat}
regAddr, regProto, natParms, err := NewExprNAT(parms, VERDICT_SNAT)
if err != nil {
log.Warning("error adding snat verdict: %s", err)
return nil
}
if regAddr {
snat.RegAddrMin = 1
}
if regProto {
snat.RegProtoMin = 2
}
*snatExpr = append(*natParms, *snatExpr...)
return snatExpr
case VERDICT_DNAT:
dnat := NewExprDNAT()
dnat.Random, dnat.FullyRandom, dnat.Persistent = NewExprNATFlags(parms)
dnatExpr := &[]expr.Any{dnat}
regAddr, regProto, natParms, err := NewExprNAT(parms, VERDICT_DNAT)
if err != nil {
log.Warning("error adding dnat verdict: %s", err)
return nil
}
if regAddr {
dnat.RegAddrMin = 1
}
if regProto {
dnat.RegProtoMin = 2
}
*dnatExpr = append(*natParms, *dnatExpr...)
return dnatExpr
case VERDICT_MASQUERADE:
m := &expr.Masq{}
m.Random, m.FullyRandom, m.Persistent = NewExprNATFlags(parms)
masqExpr := &[]expr.Any{m}
if parms == "" {
return masqExpr
}
// if any of the flags is set, toPorts must be false
toPorts := !(m.Random || m.FullyRandom || m.Persistent)
masqExpr = NewExprMasquerade(toPorts, m.Random, m.FullyRandom, m.Persistent)
_, _, natParms, err := NewExprNAT(parms, VERDICT_MASQUERADE)
if err != nil {
log.Warning("error adding masquerade verdict: %s", err)
}
*masqExpr = append(*natParms, *masqExpr...)
return masqExpr
case VERDICT_REDIRECT:
_, _, rewriteParms, err := NewExprNAT(parms, VERDICT_REDIRECT)
if err != nil {
log.Warning("error adding redirect verdict: %s", err)
return nil
}
redirExpr := NewExprRedirect()
*redirExpr = append(*rewriteParms, *redirExpr...)
return redirExpr
case VERDICT_TPROXY:
_, _, rewriteParms, err := NewExprNAT(parms, VERDICT_TPROXY)
if err != nil {
log.Warning("error adding tproxy verdict: %s", err)
return nil
}
tproxyExpr := &[]expr.Any{}
*tproxyExpr = append(*tproxyExpr, *rewriteParms...)
tVerdict := NewExprTproxy()
*tproxyExpr = append(*tproxyExpr, *tVerdict...)
*tproxyExpr = append(*tproxyExpr, *NewExprAccept()...)
return tproxyExpr
}
// target can be empty, "ct set mark" or "log" for example
return &[]expr.Any{}
}
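The VERDICT_QUEUE branch above splits the parameters on spaces, checks the leading keyword, and parses the last token as the queue number. A standalone sketch of that parsing step (the literal "num" stands in for the NFT_QUEUE_NUM constant, and the error handling is an assumption; the real branch logs a warning and returns nil):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseQueueNum extracts the queue number from a verdict parameter
// string such as "num 1".
func parseQueueNum(parms string) (uint16, error) {
	p := strings.Split(parms, " ")
	if len(p) < 2 || p[0] != "num" {
		return 0, fmt.Errorf("invalid queue parameters: %q", parms)
	}
	n, err := strconv.Atoi(p[len(p)-1])
	if err != nil {
		return 0, err
	}
	return uint16(n), nil
}

func main() {
	n, err := parseQueueNum("num 1")
	fmt.Println(n, err) // 1 <nil>
}
```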
// NewExprAccept creates the accept verdict.
func NewExprAccept() *[]expr.Any {
return &[]expr.Any{&expr.Verdict{
Kind: expr.VerdictAccept,
}}
}
// NewExprReject creates new Reject expression
// icmpx rejects both IPv4 and IPv6 traffic; icmp is for IPv4, icmpv6 for IPv6.
// Ex.: "Target": "reject", "TargetParameters": "with tcp reset"
// https://wiki.nftables.org/wiki-nftables/index.php/Rejecting_traffic
func NewExprReject(parms string) *expr.Reject {
reject := &expr.Reject{}
reject.Code = unix.NFT_REJECT_ICMP_UNREACH
reject.Type = unix.NFT_REJECT_ICMP_UNREACH
parmList := strings.Split(parms, " ")
length := len(parmList)
if length <= 1 {
return reject
}
what := parmList[1]
how := parmList[length-1]
switch what {
case NFT_PROTO_TCP:
reject.Type = unix.NFT_REJECT_TCP_RST
reject.Code = unix.NFT_REJECT_TCP_RST
case NFT_PROTO_ICMP:
reject.Type = unix.NFT_REJECT_ICMP_UNREACH
reject.Code = GetICMPRejectCode(how)
return reject
case NFT_PROTO_ICMPX:
// icmp and icmpv6
reject.Type = unix.NFT_REJECT_ICMPX_UNREACH
reject.Code = GetICMPxRejectCode(how)
return reject
case NFT_PROTO_ICMPv6:
reject.Type = 1
reject.Code = GetICMPv6RejectCode(how)
default:
}
return reject
}
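NewExprReject tokenizes "TargetParameters" such as "with tcp reset" or "with icmpx no-route": the second token selects the reject family and the last token the reason. A minimal sketch of that tokenization (names here are illustrative, not the package's):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRejectParms returns the reject family ("what") and reason ("how")
// from a parameter string, mirroring how NewExprReject indexes the tokens.
// ok is false when there is nothing beyond "with", in which case the
// defaults (icmp unreachable) apply.
func splitRejectParms(parms string) (what, how string, ok bool) {
	p := strings.Split(parms, " ")
	if len(p) <= 1 {
		return "", "", false
	}
	return p[1], p[len(p)-1], true
}

func main() {
	what, how, _ := splitRejectParms("with icmpx no-route")
	fmt.Println(what, how) // icmpx no-route
}
```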
// file: opensnitch-1.6.9/daemon/firewall/nftables/exprs/verdict_test.go
package exprs_test
import (
"fmt"
"reflect"
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
type verdictTestsT struct {
name string
verdict string
parms string
expectedExpr string
expectedKind expr.VerdictKind
}
func TestExprVerdict(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
// we must create a custom chain before using JUMP verdict.
tbl, _ := nftest.Fw.AddTable("yyy", exprs.NFT_FAMILY_INET)
nftest.Fw.Conn.AddChain(&nftables.Chain{
Name: "custom-chain",
Table: tbl,
})
nftest.Fw.Commit()
verdictTests := []verdictTestsT{
{"test-accept", exprs.VERDICT_ACCEPT, "", "*expr.Verdict", expr.VerdictAccept},
{"test-AcCept", "AcCePt", "", "*expr.Verdict", expr.VerdictAccept},
{"test-ACCEPT", "ACCEPT", "", "*expr.Verdict", expr.VerdictAccept},
{"test-drop", exprs.VERDICT_DROP, "", "*expr.Verdict", expr.VerdictDrop},
//{"test-stop", exprs.VERDICT_STOP, "", "*expr.Verdict", expr.VerdictStop},
{"test-return", exprs.VERDICT_RETURN, "", "*expr.Verdict", expr.VerdictReturn},
{"test-jump", exprs.VERDICT_JUMP, "custom-chain", "*expr.Verdict", expr.VerdictJump},
// empty verdict must be valid at this level.
// it can be used with "log" or "ct set mark"
{"test-empty-verdict", "", "", "*expr.Verdict", expr.VerdictAccept},
}
for _, test := range verdictTests {
t.Run(test.name, func(t *testing.T) {
verdExpr := exprs.NewExprVerdict(test.verdict, test.parms)
r, _ := nftest.AddTestRule(t, conn, verdExpr)
if r == nil {
t.Errorf("Error adding rule with verdict expression %s", test.verdict)
return
}
if test.name == "test-empty-verdict" {
return
}
e := r.Exprs[0]
if reflect.TypeOf(e).String() != test.expectedExpr {
t.Errorf("first expression should be *expr.Verdict, instead of: %s", reflect.TypeOf(e))
return
}
verd, ok := e.(*expr.Verdict)
if !ok {
t.Errorf("invalid verdict: %T", e)
return
}
if verd.Kind != test.expectedKind {
t.Errorf("invalid verdict kind: %+v, expected: %+v", verd.Kind, test.expectedKind)
return
}
})
}
}
func TestExprVerdictReject(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
type rejectTests struct {
name string
parms string
family string
what string
parmType byte
parmCode byte
}
tests := []rejectTests{
{
"test-reject-tcp-RST",
"with tcp reset",
exprs.NFT_FAMILY_INET,
exprs.NFT_PROTO_TCP,
unix.NFT_REJECT_TCP_RST,
unix.NFT_REJECT_TCP_RST,
},
{
"test-reject-icmp-host-unreachable",
fmt.Sprint("with icmp ", exprs.ICMP_HOST_UNREACHABLE),
exprs.NFT_FAMILY_IP,
exprs.NFT_PROTO_ICMP,
unix.NFT_REJECT_ICMP_UNREACH,
exprs.GetICMPRejectCode(exprs.ICMP_HOST_UNREACHABLE),
},
{
"test-reject-icmp-addr-unreachable",
fmt.Sprint("with icmp ", exprs.ICMP_ADDR_UNREACHABLE),
exprs.NFT_FAMILY_IP,
exprs.NFT_PROTO_ICMP,
unix.NFT_REJECT_ICMP_UNREACH,
exprs.GetICMPRejectCode(exprs.ICMP_ADDR_UNREACHABLE),
},
{
"test-reject-icmp-prot-unreachable",
fmt.Sprint("with icmp ", exprs.ICMP_PROT_UNREACHABLE),
exprs.NFT_FAMILY_IP,
exprs.NFT_PROTO_ICMP,
unix.NFT_REJECT_ICMP_UNREACH,
exprs.GetICMPRejectCode(exprs.ICMP_PROT_UNREACHABLE),
},
{
"test-reject-icmp-port-unreachable",
fmt.Sprint("with icmp ", exprs.ICMP_PORT_UNREACHABLE),
exprs.NFT_FAMILY_IP,
exprs.NFT_PROTO_ICMP,
unix.NFT_REJECT_ICMP_UNREACH,
exprs.GetICMPRejectCode(exprs.ICMP_PORT_UNREACHABLE),
},
{
"test-reject-icmp-admin-prohibited",
fmt.Sprint("with icmp ", exprs.ICMP_ADMIN_PROHIBITED),
exprs.NFT_FAMILY_IP,
exprs.NFT_PROTO_ICMP,
unix.NFT_REJECT_ICMP_UNREACH,
exprs.GetICMPRejectCode(exprs.ICMP_ADMIN_PROHIBITED),
},
{
"test-reject-icmp-host-prohibited",
fmt.Sprint("with icmp ", exprs.ICMP_HOST_PROHIBITED),
exprs.NFT_FAMILY_IP,
exprs.NFT_PROTO_ICMP,
unix.NFT_REJECT_ICMP_UNREACH,
exprs.GetICMPRejectCode(exprs.ICMP_HOST_PROHIBITED),
},
{
"test-reject-icmp-net-prohibited",
fmt.Sprint("with icmp ", exprs.ICMP_NET_PROHIBITED),
exprs.NFT_FAMILY_IP,
exprs.NFT_PROTO_ICMP,
unix.NFT_REJECT_ICMP_UNREACH,
exprs.GetICMPRejectCode(exprs.ICMP_NET_PROHIBITED),
},
// icmpx
{
"test-reject-icmpx-net-unreachable",
fmt.Sprint("with icmpx ", exprs.ICMP_NET_UNREACHABLE),
exprs.NFT_FAMILY_INET,
exprs.NFT_PROTO_ICMPX,
unix.NFT_REJECT_ICMPX_UNREACH,
exprs.GetICMPxRejectCode(exprs.ICMP_NET_UNREACHABLE),
},
{
"test-reject-icmpx-host-unreachable",
fmt.Sprint("with icmpx ", exprs.ICMP_HOST_UNREACHABLE),
exprs.NFT_FAMILY_INET,
exprs.NFT_PROTO_ICMPX,
unix.NFT_REJECT_ICMPX_UNREACH,
exprs.GetICMPxRejectCode(exprs.ICMP_HOST_UNREACHABLE),
},
{
"test-reject-icmpx-prot-unreachable",
fmt.Sprint("with icmpx ", exprs.ICMP_PROT_UNREACHABLE),
exprs.NFT_FAMILY_INET,
exprs.NFT_PROTO_ICMPX,
unix.NFT_REJECT_ICMPX_UNREACH,
exprs.GetICMPxRejectCode(exprs.ICMP_PROT_UNREACHABLE),
},
{
"test-reject-icmpx-port-unreachable",
fmt.Sprint("with icmpx ", exprs.ICMP_PORT_UNREACHABLE),
exprs.NFT_FAMILY_INET,
exprs.NFT_PROTO_ICMPX,
unix.NFT_REJECT_ICMPX_UNREACH,
exprs.GetICMPxRejectCode(exprs.ICMP_PORT_UNREACHABLE),
},
{
"test-reject-icmpx-no-route",
fmt.Sprint("with icmpx ", exprs.ICMP_NO_ROUTE),
exprs.NFT_FAMILY_INET,
exprs.NFT_PROTO_ICMPX,
unix.NFT_REJECT_ICMPX_UNREACH,
exprs.GetICMPxRejectCode(exprs.ICMP_NO_ROUTE),
},
// icmpv6
{
"test-reject-icmpv6-net-unreachable",
fmt.Sprint("with icmpv6 ", exprs.ICMP_NET_UNREACHABLE),
exprs.NFT_FAMILY_IP6,
exprs.NFT_PROTO_ICMPv6,
1,
exprs.GetICMPv6RejectCode(exprs.ICMP_NET_UNREACHABLE),
},
{
"test-reject-icmpv6-addr-unreachable",
fmt.Sprint("with icmpv6 ", exprs.ICMP_ADDR_UNREACHABLE),
exprs.NFT_FAMILY_IP6,
exprs.NFT_PROTO_ICMPv6,
1,
exprs.GetICMPv6RejectCode(exprs.ICMP_ADDR_UNREACHABLE),
},
{
"test-reject-icmpv6-host-unreachable",
fmt.Sprint("with icmpv6 ", exprs.ICMP_HOST_UNREACHABLE),
exprs.NFT_FAMILY_IP6,
exprs.NFT_PROTO_ICMPv6,
1,
exprs.GetICMPv6RejectCode(exprs.ICMP_HOST_UNREACHABLE),
},
{
"test-reject-icmpv6-port-unreachable",
fmt.Sprint("with icmpv6 ", exprs.ICMP_PORT_UNREACHABLE),
exprs.NFT_FAMILY_IP6,
exprs.NFT_PROTO_ICMPv6,
1,
exprs.GetICMPv6RejectCode(exprs.ICMP_PORT_UNREACHABLE),
},
{
"test-reject-icmpv6-no-route",
fmt.Sprint("with icmpv6 ", exprs.ICMP_NO_ROUTE),
exprs.NFT_FAMILY_IP6,
exprs.NFT_PROTO_ICMPv6,
1,
exprs.GetICMPv6RejectCode(exprs.ICMP_NO_ROUTE),
},
{
"test-reject-icmpv6-reject-policy-fail",
fmt.Sprint("with icmpv6 ", exprs.ICMP_REJECT_POLICY_FAIL),
exprs.NFT_FAMILY_IP6,
exprs.NFT_PROTO_ICMPv6,
1,
exprs.GetICMPv6RejectCode(exprs.ICMP_REJECT_POLICY_FAIL),
},
{
"test-reject-icmpv6-reject-route",
fmt.Sprint("with icmpv6 ", exprs.ICMP_REJECT_ROUTE),
exprs.NFT_FAMILY_IP6,
exprs.NFT_PROTO_ICMPv6,
1,
exprs.GetICMPv6RejectCode(exprs.ICMP_REJECT_ROUTE),
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
verdExpr := exprs.NewExprVerdict(exprs.VERDICT_REJECT, test.parms)
r, _ := nftest.AddTestRule(t, conn, verdExpr)
if r == nil {
t.Errorf("Error adding rule with reject verdict %s", "")
return
}
e := r.Exprs[0]
if reflect.TypeOf(e).String() != "*expr.Reject" {
t.Errorf("first expression should be *expr.Verdict, instead of: %s", reflect.TypeOf(e))
return
}
verd, ok := e.(*expr.Reject)
if !ok {
t.Errorf("invalid verdict: %T", e)
return
}
//fmt.Printf("reject verd: %+v\n", verd)
if verd.Code != uint8(test.parmCode) {
t.Errorf("invalid reject verdict code: %d, expected: %d", verd.Code, test.parmCode)
}
})
}
}
func TestExprVerdictQueue(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
verdExpr := exprs.NewExprVerdict(exprs.VERDICT_QUEUE, "num 1")
r, _ := nftest.AddTestRule(t, conn, verdExpr)
if r == nil {
t.Errorf("Error adding rule with Queue verdict")
return
}
e := r.Exprs[0]
if reflect.TypeOf(e).String() != "*expr.Queue" {
t.Errorf("first expression should be *expr.Queue, instead of: %s", reflect.TypeOf(e))
return
}
verd, ok := e.(*expr.Queue)
if !ok {
t.Errorf("invalid verdict: %T", e)
return
}
if verd.Num != 1 {
t.Errorf("invalid queue verdict Num: %d", verd.Num)
}
}
// file: opensnitch-1.6.9/daemon/firewall/nftables/monitor.go
package nftables
import (
"time"
"github.com/evilsocket/opensnitch/daemon/firewall/common"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
)
// AreRulesLoaded checks if the firewall rules for intercept traffic are loaded.
func (n *Nft) AreRulesLoaded() bool {
n.Lock()
defer n.Unlock()
nRules := 0
chains, err := n.Conn.ListChains()
if err != nil {
log.Warning("[nftables] error listing nftables chains: %s", err)
return false
}
for _, c := range chains {
rules, err := n.Conn.GetRule(c.Table, c)
if err != nil {
log.Warning("[nftables] Error listing rules: %s", err)
continue
}
for rdx, r := range rules {
if string(r.UserData) == InterceptionRuleKey {
if c.Table.Name == exprs.NFT_CHAIN_FILTER && c.Name == exprs.NFT_HOOK_INPUT && rdx != 0 {
log.Warning("nftables DNS rule not in 1st position (%d)", rdx)
return false
}
nRules++
if c.Table.Name == exprs.NFT_CHAIN_MANGLE && rdx < len(rules)-2 {
log.Warning("nfables queue rule is not the latest of the list (%d/%d), reloading", rdx, len(rules))
return false
}
}
}
}
// we expect to have exactly 3 rules (2 queue and 1 dns). If there are
// fewer or more, then we need to reload them.
if nRules != 3 {
log.Warning("nftables filter rules not loaded: %d", nRules)
return false
}
return true
}
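AreRulesLoaded enforces a simple invariant: interception rules carry a user-data tag, and the firewall is healthy only when exactly the expected number of tagged rules is present (2 queue rules plus 1 DNS rule in the daemon). A reduced sketch of that counting check (the Rule type and tag value below are simplified stand-ins, not the nftables library's types):

```go
package main

import "fmt"

// Rule is a stripped-down stand-in for a firewall rule with user data.
type Rule struct {
	UserData string
}

// interceptionLoaded reports whether exactly `expected` rules tagged with
// `tag` are present, mirroring the nRules check in AreRulesLoaded.
func interceptionLoaded(rules []Rule, tag string, expected int) bool {
	n := 0
	for _, r := range rules {
		if r.UserData == tag {
			n++
		}
	}
	return n == expected
}

func main() {
	tag := "opensnitch-key-interception"
	rules := []Rule{{tag}, {"unrelated"}, {tag}, {tag}}
	fmt.Println(interceptionLoaded(rules, tag, 3)) // true
}
```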
// ReloadConfCallback gets called after the configuration changes.
func (n *Nft) ReloadConfCallback() {
log.Important("reloadConfCallback changed, reloading")
n.DeleteSystemRules(!common.ForcedDelRules, !common.RestoreChains, log.GetLogLevel() == log.DEBUG)
n.AddSystemRules(common.ReloadRules, !common.BackupChains)
}
// ReloadRulesCallback gets called when the interception rules are not present.
func (n *Nft) ReloadRulesCallback() {
log.Important("nftables firewall rules changed, reloading")
n.DisableInterception(log.GetLogLevel() == log.DEBUG)
time.Sleep(time.Millisecond * 500)
n.EnableInterception()
}
// PreloadConfCallback gets called before the fw configuration is loaded
func (n *Nft) PreloadConfCallback() {
log.Info("nftables config changed, reloading")
n.DeleteSystemRules(!common.ForcedDelRules, common.RestoreChains, log.GetLogLevel() == log.DEBUG)
}
// file: opensnitch-1.6.9/daemon/firewall/nftables/monitor_test.go
package nftables_test
import (
"testing"
"time"
"github.com/evilsocket/opensnitch/daemon/firewall/common"
nftb "github.com/evilsocket/opensnitch/daemon/firewall/nftables"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables"
)
// mimic EnableInterception() but without NewRulesChecker()
func addInterceptionRules(nft *nftb.Nft, t *testing.T) {
if err := nft.AddInterceptionTables(); err != nil {
t.Errorf("Error while adding interception tables: %s", err)
return
}
if err := nft.AddInterceptionChains(); err != nil {
t.Errorf("Error while adding interception chains: %s", err)
return
}
if err, _ := nft.QueueDNSResponses(common.EnableRule, common.EnableRule); err != nil {
t.Errorf("Error while running DNS nftables rule: %s", err)
}
if err, _ := nft.QueueConnections(common.EnableRule, common.EnableRule); err != nil {
t.Errorf("Error while running conntrack nftables rule: %s", err)
}
}
func _testMonitorReload(t *testing.T, conn *nftables.Conn, nft *nftb.Nft) {
tblfilter := nft.GetTable(exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET)
if tblfilter == nil || tblfilter.Name != exprs.NFT_CHAIN_FILTER {
t.Error("table filter-inet not in the list")
}
chnFilterInput := nftest.Fw.GetChain(exprs.NFT_HOOK_INPUT, tblfilter, exprs.NFT_FAMILY_INET)
if chnFilterInput == nil {
t.Error("chain input-filter-inet not in the list")
}
rules, _ := conn.GetRules(tblfilter, chnFilterInput)
if len(rules) == 0 {
t.Error("DNS interception rule not added")
}
conn.FlushChain(chnFilterInput)
nftest.Fw.Commit()
// the rules checker checks the rules every 10s
reloaded := false
for i := 0; i < 15; i++ {
if r, _ := getRule(t, conn, tblfilter.Name, exprs.NFT_HOOK_INPUT, nftb.InterceptionRuleKey, 0); r != nil {
reloaded = true
break
}
time.Sleep(time.Second)
}
if !reloaded {
t.Error("rules under input-filter-inet not reloaded after 10s")
}
}
func TestAreRulesLoaded(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
addInterceptionRules(nftest.Fw, t)
if !nftest.Fw.AreRulesLoaded() {
t.Error("interception rules not loaded, and they should")
}
nftest.Fw.DelInterceptionRules()
if nftest.Fw.AreRulesLoaded() {
t.Error("interception rules are loaded, and the shouldn't")
}
}
func TestMonitorReload(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
nftest.Fw.EnableInterception()
// test that rules are reloaded after being deleted, but also
// that the monitor is not stopped after the first reload.
_testMonitorReload(t, conn, nftest.Fw)
_testMonitorReload(t, conn, nftest.Fw)
_testMonitorReload(t, conn, nftest.Fw)
}
// file: opensnitch-1.6.9/daemon/firewall/nftables/nftables.go
package nftables
import (
"bytes"
"encoding/json"
"strings"
"sync"
"github.com/evilsocket/opensnitch/daemon/firewall/common"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/iptables"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
"github.com/golang/protobuf/jsonpb"
"github.com/google/nftables"
)
// Action is the modifier we apply to a rule.
type Action string
// Actions we apply to the firewall.
const (
fwKey = "opensnitch-key"
InterceptionRuleKey = fwKey + "-interception"
SystemRuleKey = fwKey + "-system"
Name = "nftables"
)
var (
filterTable = &nftables.Table{
Family: nftables.TableFamilyINet,
Name: exprs.NFT_CHAIN_FILTER,
}
mangleTable = &nftables.Table{
Family: nftables.TableFamilyINet,
Name: exprs.NFT_CHAIN_MANGLE,
}
)
// Nft holds the fields of our nftables firewall
type Nft struct {
Conn *nftables.Conn
chains iptables.SystemChains
common.Common
config.Config
sync.Mutex
}
// NewNft creates a new nftables object
func NewNft() *nftables.Conn {
return &nftables.Conn{}
}
// Fw initializes a new nftables object
func Fw() (*Nft, error) {
n := &Nft{
chains: iptables.SystemChains{
Rules: make(map[string]*iptables.SystemRule),
},
}
return n, nil
}
// Name returns the name of the firewall
func (n *Nft) Name() string {
return Name
}
// Init inserts the firewall rules and starts monitoring for firewall
// changes.
func (n *Nft) Init(qNum *int) {
if n.IsRunning() {
return
}
n.ErrChan = make(chan string, 100)
InitMapsStore()
n.SetQueueNum(qNum)
n.Conn = NewNft()
// In order to clean up any existing firewall rule before start,
// we need to load the fw configuration first to know what rules
// were configured.
n.NewSystemFwConfig(n.PreloadConfCallback, n.ReloadConfCallback)
n.LoadDiskConfiguration(!common.ReloadConf)
// start from a clean state
// The daemon may have exited unexpectedly, leaving residual fw rules, so we
// need to clean them up to avoid duplicated rules.
n.DelInterceptionRules()
n.AddSystemRules(!common.ReloadRules, common.BackupChains)
n.EnableInterception()
n.Running = true
}
// Stop deletes the firewall rules, allowing network traffic.
func (n *Nft) Stop() {
if !n.IsRunning() {
return
}
n.StopConfigWatcher()
n.StopCheckingRules()
n.CleanRules(log.GetLogLevel() == log.DEBUG)
n.Lock()
n.Running = false
n.Unlock()
}
// EnableInterception adds firewall rules to intercept connections
func (n *Nft) EnableInterception() {
if err := n.AddInterceptionTables(); err != nil {
log.Error("Error while adding interception tables: %s", err)
return
}
if err := n.AddInterceptionChains(); err != nil {
log.Error("Error while adding interception chains: %s", err)
return
}
if err, _ := n.QueueDNSResponses(common.EnableRule, common.EnableRule); err != nil {
log.Error("Error while running DNS nftables rule: %s", err)
}
if err, _ := n.QueueConnections(common.EnableRule, common.EnableRule); err != nil {
log.Error("Error while running conntrack nftables rule: %s", err)
}
// start monitoring firewall rules to intercept network traffic.
n.NewRulesChecker(n.AreRulesLoaded, n.ReloadRulesCallback)
}
// DisableInterception removes firewall rules to intercept outbound connections.
func (n *Nft) DisableInterception(logErrors bool) {
n.StopCheckingRules()
n.DelInterceptionRules()
}
// CleanRules deletes the rules we added.
func (n *Nft) CleanRules(logErrors bool) {
n.DisableInterception(logErrors)
n.DeleteSystemRules(common.ForcedDelRules, common.RestoreChains, logErrors)
}
// Commit applies the queued changes, creating new objects (tables, chains, etc).
// You add rules, chains or tables, and after calling Flush() they're added to the system.
// NOTE: it's very important not to call Flush() without queued tasks.
func (n *Nft) Commit() bool {
if err := n.Conn.Flush(); err != nil {
log.Warning("%s error applying changes: %s", logTag, err)
return false
}
return true
}
// Serialize converts the configuration from json to protobuf
func (n *Nft) Serialize() (*protocol.SysFirewall, error) {
sysfw := &protocol.SysFirewall{}
jun := jsonpb.Unmarshaler{
AllowUnknownFields: true,
}
rawConfig, err := json.Marshal(&n.SysConfig)
if err != nil {
log.Error("nftables.Serialize() struct to string error: %s", err)
return nil, err
}
// string to proto
if err := jun.Unmarshal(strings.NewReader(string(rawConfig)), sysfw); err != nil {
log.Error("nftables.Serialize() string to protobuf error: %s", err)
return nil, err
}
return sysfw, nil
}
// Deserialize converts a protobuf structure to a byte array.
func (n *Nft) Deserialize(sysfw *protocol.SysFirewall) ([]byte, error) {
jun := jsonpb.Marshaler{
OrigName: true,
EmitDefaults: true,
Indent: " ",
}
// NOTE: '<' and '>' characters are encoded to unicode (\u003c).
// This has no effect on adding rules to nftables.
// Users can still write "<" if they want to, rules are added ok.
var b bytes.Buffer
if err := jun.Marshal(&b, sysfw); err != nil {
log.Error("nfables.Deserialize() error 2: %s", err)
return nil, err
}
return b.Bytes(), nil
}
// file: opensnitch-1.6.9/daemon/firewall/nftables/nftest/nftest.go
package nftest
import (
"os"
"runtime"
"testing"
nftb "github.com/evilsocket/opensnitch/daemon/firewall/nftables"
"github.com/google/nftables"
"github.com/vishvananda/netns"
)
var (
conn *nftables.Conn
newNS netns.NsHandle
// Fw represents the nftables Fw object.
Fw, _ = nftb.Fw()
)
func init() {
nftb.InitMapsStore()
}
// SkipIfNotPrivileged will skip the test from where it's invoked,
// to skip the test if we don't have root privileges.
// This may occur when executing the tests on restricted environments,
// such as containers, chroots, etc.
func SkipIfNotPrivileged(t *testing.T) {
if os.Getenv("PRIVILEGED_TESTS") == "" {
t.Skip("Set PRIVILEGED_TESTS to 1 to launch these tests, and launch them as root, or as a user allowed to create new namespaces.")
}
}
// OpenSystemConn opens a new connection with the kernel in a new namespace.
// https://github.com/google/nftables/blob/8f2d395e1089dea4966c483fbeae7e336917c095/internal/nftest/system_conn.go#L15
func OpenSystemConn(t *testing.T) (*nftables.Conn, netns.NsHandle) {
t.Helper()
// We lock the goroutine into the current thread, as namespace operations
// such as those invoked by `netns.New()` are thread-local. This is undone
// in nftest.CleanupSystemConn().
runtime.LockOSThread()
ns, err := netns.New()
if err != nil {
t.Fatalf("netns.New() failed: %v", err)
}
t.Log("OpenSystemConn() with NS:", ns)
c, err := nftables.New(nftables.WithNetNSFd(int(ns)))
if err != nil {
t.Fatalf("nftables.New() failed: %v", err)
}
return c, ns
}
// CleanupSystemConn closes the given namespace.
func CleanupSystemConn(t *testing.T, newNS netns.NsHandle) {
defer runtime.UnlockOSThread()
if err := newNS.Close(); err != nil {
t.Fatalf("newNS.Close() failed: %v", err)
}
}
// file: opensnitch-1.6.9/daemon/firewall/nftables/nftest/test_utils.go
package nftest
import (
"bytes"
"reflect"
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/google/nftables"
"github.com/google/nftables/expr"
)
// TestsT defines the fields of a test.
type TestsT struct {
Name string
Family string
Parms string
Values []*config.ExprValues
ExpectedExprsNum int
ExpectedExprs []interface{}
ExpectedFail bool
}
// AreExprsValid checks if the expressions defined in the given rule are valid
// according to the expected expressions defined in the tests.
func AreExprsValid(t *testing.T, test *TestsT, rule *nftables.Rule) bool {
total := len(rule.Exprs)
if total != test.ExpectedExprsNum {
t.Errorf("expected %d expressions, found %d", test.ExpectedExprsNum, total)
return false
}
for idx, e := range rule.Exprs {
if reflect.TypeOf(e).String() != reflect.TypeOf(test.ExpectedExprs[idx]).String() {
t.Errorf("expression %d should be %s, instead of: %s", idx, reflect.TypeOf(test.ExpectedExprs[idx]), reflect.TypeOf(e))
return false
}
switch e.(type) {
case *expr.Meta:
lExpr, ok := e.(*expr.Meta)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Meta)
if !ok || !okExpected {
t.Errorf("invalid Meta expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if lExpr.Key != lExpect.Key {
t.Errorf("invalid Meta.Key,\ngot: %+v\nexpected: %+v\n", lExpr.Key, lExpect.Key)
}
if lExpr.SourceRegister != lExpect.SourceRegister {
t.Errorf("invalid Meta.SourceRegister,\ngot: %+v\nexpected: %+v\n", lExpr.SourceRegister, lExpect.SourceRegister)
}
if lExpr.Register != lExpect.Register {
t.Errorf("invalid Meta.Register,\ngot: %+v\nexpected: %+v\n", lExpr.Register, lExpect.Register)
}
case *expr.Immediate:
lExpr, ok := e.(*expr.Immediate)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Immediate)
if !ok || !okExpected {
t.Errorf("invalid Immediate expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if !bytes.Equal(lExpr.Data, lExpect.Data) && !test.ExpectedFail {
t.Errorf("invalid Immediate.Data,\ngot: %+v,\nexpected: %+v", lExpr.Data, lExpect.Data)
return false
}
case *expr.TProxy:
lExpr, ok := e.(*expr.TProxy)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.TProxy)
if !ok || !okExpected {
t.Errorf("invalid TProxy expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if lExpr.Family != lExpect.Family || lExpr.TableFamily != lExpect.TableFamily || lExpr.RegPort != lExpect.RegPort {
t.Errorf("invalid TProxy expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
case *expr.Redir:
lExpr, ok := e.(*expr.Redir)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Redir)
if !ok || !okExpected {
t.Errorf("invalid Redir expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if lExpr.RegisterProtoMin != lExpect.RegisterProtoMin {
t.Errorf("invalid Redir expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
case *expr.Masq:
lExpr, ok := e.(*expr.Masq)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Masq)
if !ok || !okExpected {
t.Errorf("invalid Masq expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if lExpr.ToPorts != lExpect.ToPorts ||
lExpr.Random != lExpect.Random ||
lExpr.FullyRandom != lExpect.FullyRandom ||
lExpr.Persistent != lExpect.Persistent {
t.Errorf("invalid Masq expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
case *expr.NAT:
lExpr, ok := e.(*expr.NAT)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.NAT)
if !ok || !okExpected {
t.Errorf("invalid NAT expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if lExpr.RegProtoMin != lExpect.RegProtoMin ||
lExpr.RegAddrMin != lExpect.RegAddrMin ||
lExpr.Random != lExpect.Random ||
lExpr.FullyRandom != lExpect.FullyRandom ||
lExpr.Persistent != lExpect.Persistent {
t.Errorf("invalid NAT expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
case *expr.Quota:
lExpr, ok := e.(*expr.Quota)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Quota)
if !ok || !okExpected {
t.Errorf("invalid Quota expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if lExpr.Bytes != lExpect.Bytes ||
lExpr.Over != lExpect.Over ||
lExpr.Consumed != lExpect.Consumed {
t.Errorf("invalid Quota fields,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
case *expr.Ct:
lExpr, ok := e.(*expr.Ct)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Ct)
if !ok || !okExpected {
t.Errorf("invalid Ct expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if lExpr.Key != lExpect.Key || lExpr.Register != lExpect.Register || lExpr.SourceRegister != lExpect.SourceRegister {
t.Errorf("invalid Ct parms,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
case *expr.Bitwise:
lExpr, ok := e.(*expr.Bitwise)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Bitwise)
if !ok || !okExpected {
t.Errorf("invalid Bitwise expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if lExpr.Len != lExpect.Len ||
!bytes.Equal(lExpr.Mask, lExpect.Mask) ||
!bytes.Equal(lExpr.Xor, lExpect.Xor) ||
lExpr.DestRegister != lExpect.DestRegister ||
lExpr.SourceRegister != lExpect.SourceRegister {
t.Errorf("invalid Bitwise parms,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
case *expr.Log:
lExpr, ok := e.(*expr.Log)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Log)
if !ok || !okExpected {
t.Errorf("invalid Log expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if !bytes.Equal(lExpr.Data, lExpect.Data) && !test.ExpectedFail {
t.Errorf("invalid Log.Data,\ngot: %+v,\nexpected: %+v", lExpr.Data, lExpect.Data)
return false
}
if lExpr.Key != lExpect.Key ||
lExpr.Level != lExpect.Level ||
lExpr.Group != lExpect.Group ||
lExpr.Snaplen != lExpect.Snaplen ||
lExpr.QThreshold != lExpect.QThreshold {
t.Errorf("invalid Log fields,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
case *expr.Cmp:
lExpr, ok := e.(*expr.Cmp)
lExpect, okExpected := test.ExpectedExprs[idx].(*expr.Cmp)
if !ok || !okExpected {
t.Errorf("invalid Cmp expr,\ngot: %+v,\nexpected: %+v", lExpr, lExpect)
return false
}
if !bytes.Equal(lExpr.Data, lExpect.Data) && !test.ExpectedFail {
t.Errorf("invalid Cmp.Data,\ngot: %+v,\nexpected: %+v", lExpr.Data, lExpect.Data)
return false
}
}
}
return true
}
opensnitch-1.6.9/daemon/firewall/nftables/nftest/utils.go
package nftest
import (
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/google/nftables"
"github.com/google/nftables/expr"
)
// AddTestRule adds a generic table, chain and rule with the given expression.
func AddTestRule(t *testing.T, conn *nftables.Conn, exp *[]expr.Any) (*nftables.Rule, *nftables.Chain) {
_, err := Fw.AddTable("yyy", exprs.NFT_FAMILY_INET)
if err != nil {
t.Errorf("pre step add_table() yyy-inet failed: %s", err)
return nil, nil
}
chn := Fw.AddChain(
exprs.NFT_HOOK_INPUT,
"yyy",
exprs.NFT_FAMILY_INET,
nftables.ChainPriorityFilter,
nftables.ChainTypeFilter,
nftables.ChainHookInput,
nftables.ChainPolicyAccept)
if chn == nil {
t.Error("pre step add_chain() input-yyy-inet failed")
return nil, nil
}
//nft.Commit()
r, err := Fw.AddRule(
exprs.NFT_HOOK_INPUT, "yyy", exprs.NFT_FAMILY_INET,
0,
"key-yyy",
exp)
if err != nil {
t.Errorf("Error adding rule: %s", err)
return nil, nil
}
t.Logf("Rule: %+v", r)
return r, chn
}
// AddTestSNATRule adds a table, a NAT postrouting chain and a rule with the given expression.
func AddTestSNATRule(t *testing.T, conn *nftables.Conn, exp *[]expr.Any) (*nftables.Rule, *nftables.Chain) {
_, err := Fw.AddTable("uuu", exprs.NFT_FAMILY_INET)
if err != nil {
t.Errorf("pre step add_table() uuu-inet failed: %s", err)
return nil, nil
}
chn := Fw.AddChain(
exprs.NFT_HOOK_POSTROUTING,
"uuu",
exprs.NFT_FAMILY_INET,
nftables.ChainPriorityNATSource,
nftables.ChainTypeNAT,
nftables.ChainHookPostrouting,
nftables.ChainPolicyAccept)
if chn == nil {
t.Error("pre step add_chain() input-uuu-inet failed")
return nil, nil
}
//nft.Commit()
r, err := Fw.AddRule(
exprs.NFT_HOOK_POSTROUTING, "uuu", exprs.NFT_FAMILY_INET,
0,
"key-uuu",
exp)
if err != nil {
t.Errorf("Error adding rule: %s", err)
return nil, nil
}
t.Logf("Rule: %+v", r)
return r, chn
}
// AddTestDNATRule adds a table, a NAT prerouting chain and a rule with the given expression.
func AddTestDNATRule(t *testing.T, conn *nftables.Conn, exp *[]expr.Any) (*nftables.Rule, *nftables.Chain) {
_, err := Fw.AddTable("iii", exprs.NFT_FAMILY_INET)
if err != nil {
t.Errorf("pre step add_table() iii-inet failed: %s", err)
return nil, nil
}
chn := Fw.AddChain(
exprs.NFT_HOOK_PREROUTING,
"iii",
exprs.NFT_FAMILY_INET,
nftables.ChainPriorityNATDest,
nftables.ChainTypeNAT,
nftables.ChainHookPrerouting,
nftables.ChainPolicyAccept)
if chn == nil {
t.Error("pre step add_chain() input-iii-inet failed")
return nil, nil
}
//nft.Commit()
r, err := Fw.AddRule(
exprs.NFT_HOOK_PREROUTING, "iii", exprs.NFT_FAMILY_INET,
0,
"key-iii",
exp)
if err != nil {
t.Errorf("Error adding rule: %s", err)
return nil, nil
}
t.Logf("Rule: %+v", r)
return r, chn
}
opensnitch-1.6.9/daemon/firewall/nftables/parser.go
package nftables
import (
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables"
"github.com/google/nftables/expr"
)
// nftables rules are composed of expressions, for example:
// tcp dport 443 ip daddr 192.168.1.1
// \-----------/ \------------------/
// each with this format:
// keyword1 keyword2 value ...
//
// Here we parse the expression and, based on keyword1, build the rule with the given options.
//
// If the rule has multiple values (tcp dport 80,443,8080), no spaces are allowed,
// and the separator is a ",", instead of the usual nft format { 80, 443, 8080 }.
//
// In order to debug invalid expressions, or how to build new ones, use the following command:
// # nft --debug netlink add rule filter output mark set 1
// ip filter output
// [ immediate reg 1 0x00000001 ]
// [ meta set mark with reg 1 ]
//
// Debugging added rules:
// nft --debug netlink list ruleset
//
// https://wiki.archlinux.org/title/Nftables#Expressions
// https://wiki.nftables.org/wiki-nftables/index.php/Building_rules_through_expressions
func (n *Nft) parseExpression(table, chain, family string, expression *config.Expressions) *[]expr.Any {
var exprList []expr.Any
cmpOp := exprs.NewOperator(expression.Statement.Op)
switch expression.Statement.Name {
case exprs.NFT_CT:
exprCt := n.buildConntrackRule(expression.Statement.Values, &cmpOp)
if exprCt == nil {
log.Warning("%s Ct statement error", logTag)
return nil
}
exprList = append(exprList, *exprCt...)
case exprs.NFT_META:
metaExpr, err := exprs.NewExprMeta(expression.Statement.Values, &cmpOp)
if err != nil {
log.Warning("%s meta statement error: %s", logTag, err)
return nil
}
for _, exprValue := range expression.Statement.Values {
switch exprValue.Key {
case exprs.NFT_META_L4PROTO:
l4rule, err := n.buildL4ProtoRule(table, family, exprValue.Value, &cmpOp)
if err != nil {
log.Warning("%s meta.l4proto statement error: %s", logTag, err)
return nil
}
*metaExpr = append(*metaExpr, *l4rule...)
case exprs.NFT_DPORT, exprs.NFT_SPORT:
exprPDir, err := exprs.NewExprPortDirection(exprValue.Key)
if err != nil {
log.Warning("%s ports statement error: %s", logTag, err)
return nil
}
*metaExpr = append(*metaExpr, exprPDir)
portsRule, err := n.buildPortsRule(table, family, exprValue.Value, &cmpOp)
if err != nil {
log.Warning("%s meta.l4proto.ports statement error: %s", logTag, err)
return nil
}
*metaExpr = append(*metaExpr, *portsRule...)
}
}
return metaExpr
case exprs.NFT_ETHER:
etherExpr, err := exprs.NewExprEther(expression.Statement.Values)
if err != nil {
log.Warning("%s ether statement error: %s", logTag, err)
return nil
}
return etherExpr
// TODO: support iif, oif
case exprs.NFT_IIFNAME, exprs.NFT_OIFNAME:
isOut := expression.Statement.Name == exprs.NFT_OIFNAME
iface := expression.Statement.Values[0].Key
if iface == "" {
log.Warning("%s network interface statement error: %s", logTag, expression.Statement.Name)
return nil
}
exprList = append(exprList, *exprs.NewExprIface(iface, isOut, cmpOp)...)
case exprs.NFT_FAMILY_IP, exprs.NFT_FAMILY_IP6:
exprIP, err := exprs.NewExprIP(family, expression.Statement.Values, cmpOp)
if err != nil {
log.Warning("%s addr statement error: %s", logTag, err)
return nil
}
exprList = append(exprList, *exprIP...)
case exprs.NFT_PROTO_ICMP, exprs.NFT_PROTO_ICMPv6:
exprICMP := n.buildICMPRule(table, family, expression.Statement.Name, expression.Statement.Values)
if exprICMP == nil {
log.Warning("%s icmp statement error", logTag)
return nil
}
exprList = append(exprList, *exprICMP...)
case exprs.NFT_LOG:
exprLog, err := exprs.NewExprLog(expression.Statement)
if err != nil {
log.Warning("%s log statement error", logTag)
return nil
}
exprList = append(exprList, *exprLog...)
case exprs.NFT_LIMIT:
exprLimit, err := exprs.NewExprLimit(expression.Statement)
if err != nil {
log.Warning("%s %s", logTag, err)
return nil
}
exprList = append(exprList, *exprLimit...)
case exprs.NFT_PROTO_UDP, exprs.NFT_PROTO_TCP, exprs.NFT_PROTO_UDPLITE, exprs.NFT_PROTO_SCTP, exprs.NFT_PROTO_DCCP:
exprProto, err := exprs.NewExprProtocol(expression.Statement.Name)
if err != nil {
log.Warning("%s proto statement error: %s", logTag, err)
return nil
}
exprList = append(exprList, *exprProto...)
for _, exprValue := range expression.Statement.Values {
switch exprValue.Key {
case exprs.NFT_DPORT, exprs.NFT_SPORT:
exprPDir, err := exprs.NewExprPortDirection(exprValue.Key)
if err != nil {
log.Warning("%s ports statement error: %s", logTag, err)
return nil
}
exprList = append(exprList, exprPDir)
portsRule, err := n.buildPortsRule(table, family, exprValue.Value, &cmpOp)
if err != nil {
log.Warning("%s proto.ports statement error: %s", logTag, err)
return nil
}
exprList = append(exprList, *portsRule...)
}
}
case exprs.NFT_QUOTA:
exprQuota, err := exprs.NewQuota(expression.Statement.Values)
if err != nil {
log.Warning("%s quota statement error: %s", logTag, err)
return nil
}
exprList = append(exprList, *exprQuota...)
case exprs.NFT_NOTRACK:
exprList = append(exprList, *exprs.NewNoTrack()...)
case exprs.NFT_COUNTER:
defaultCounterName := "opensnitch"
counterObj := &nftables.CounterObj{
Table: &nftables.Table{Name: table, Family: nftables.TableFamilyIPv4},
Name: defaultCounterName,
Bytes: 0,
Packets: 0,
}
for _, counterOption := range expression.Statement.Values {
switch counterOption.Key {
case exprs.NFT_COUNTER_NAME:
defaultCounterName = counterOption.Value
counterObj.Name = defaultCounterName
case exprs.NFT_COUNTER_BYTES:
// TODO: allow to set initial bytes/packets?
counterObj.Bytes = 1
case exprs.NFT_COUNTER_PACKETS:
counterObj.Packets = 1
}
}
n.Conn.AddObj(counterObj)
exprList = append(exprList, *exprs.NewExprCounter(defaultCounterName)...)
}
return &exprList
}
opensnitch-1.6.9/daemon/firewall/nftables/rule_helpers.go
package nftables
import (
"fmt"
"strings"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables"
"github.com/google/nftables/expr"
)
// rules examples: https://github.com/google/nftables/blob/master/nftables_test.go
func (n *Nft) buildICMPRule(table, family string, icmpProtoVersion string, icmpOptions []*config.ExprValues) *[]expr.Any {
tbl := n.GetTable(table, family)
if tbl == nil {
return nil
}
offset := uint32(0)
icmpType := uint8(0)
setType := nftables.SetDatatype{}
switch icmpProtoVersion {
case exprs.NFT_PROTO_ICMP:
setType = nftables.TypeICMPType
case exprs.NFT_PROTO_ICMPv6:
setType = nftables.TypeICMP6Type
default:
return nil
}
exprICMP, _ := exprs.NewExprProtocol(icmpProtoVersion)
ICMPrule := []expr.Any{}
ICMPrule = append(ICMPrule, *exprICMP...)
ICMPtemp := []expr.Any{}
setElements := []nftables.SetElement{}
for _, icmp := range icmpOptions {
switch icmp.Key {
case exprs.NFT_ICMP_TYPE:
icmpTypeList := strings.Split(icmp.Value, ",")
for _, icmpTypeStr := range icmpTypeList {
if exprs.NFT_PROTO_ICMPv6 == icmpProtoVersion {
icmpType = exprs.GetICMPv6Type(icmpTypeStr)
} else {
icmpType = exprs.GetICMPType(icmpTypeStr)
}
exprCmp := &expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{icmpType},
}
ICMPtemp = append(ICMPtemp, exprCmp)
// fill setElements. If there's more than one ICMP type, we'll use them later.
setElements = append(setElements,
[]nftables.SetElement{
{
Key: []byte{icmpType},
},
}...)
}
case exprs.NFT_ICMP_CODE:
// TODO
offset = 1
}
}
ICMPrule = append(ICMPrule, []expr.Any{
&expr.Payload{
DestRegister: 1,
Base: expr.PayloadBaseTransportHeader,
Offset: offset, // 0 type, 1 code
Len: 1,
},
}...)
if len(setElements) == 1 {
ICMPrule = append(ICMPrule, ICMPtemp...)
} else {
set := &nftables.Set{
Anonymous: true,
Constant: true,
Table: tbl,
KeyType: setType,
}
if err := n.Conn.AddSet(set, setElements); err != nil {
log.Warning("%s AddSet() error: %s", logTag, err)
return nil
}
sysSets = append(sysSets, set)
ICMPrule = append(ICMPrule, []expr.Any{
&expr.Lookup{
SourceRegister: 1,
SetName: set.Name,
SetID: set.ID,
}}...)
}
return &ICMPrule
}
func (n *Nft) buildConntrackRule(ctOptions []*config.ExprValues, cmpOp *expr.CmpOp) *[]expr.Any {
exprList := []expr.Any{}
setMark := false
for _, ctOption := range ctOptions {
switch ctOption.Key {
// we expect to have multiple "state" keys:
// { "state": "established", "state": "related" }
case exprs.NFT_CT_STATE:
ctExprState, err := exprs.NewExprCtState(ctOptions)
if err != nil {
log.Warning("%s ct set state error: %s", logTag, err)
return nil
}
exprList = append(exprList, *ctExprState...)
exprList = append(exprList,
&expr.Cmp{Op: expr.CmpOpNeq, Register: 1, Data: []byte{0, 0, 0, 0}},
)
// we only need to iterate once here
goto Exit
case exprs.NFT_CT_SET_MARK:
setMark = true
case exprs.NFT_CT_MARK:
ctExprMark, err := exprs.NewExprCtMark(setMark, ctOption.Value, cmpOp)
if err != nil {
log.Warning("%s ct mark error: %s", logTag, err)
return nil
}
exprList = append(exprList, *ctExprMark...)
goto Exit
default:
log.Warning("%s invalid conntrack option: %s", logTag, ctOption)
return nil
}
}
Exit:
return &exprList
}
// buildL4ProtoRule helper builds a new protocol rule to match ports and protocols.
//
// nft --debug=netlink add rule filter input meta l4proto { tcp, udp } th dport 53
// __set%d filter 3 size 2
// __set%d filter 0
// element 00000006 : 0 [end] element 00000011 : 0 [end]
// ip filter input
// [ meta load l4proto => reg 1 ]
// [ lookup reg 1 set __set%d ]
// [ payload load 2b @ transport header + 2 => reg 1 ]
// [ cmp eq reg 1 0x00003500 ]
func (n *Nft) buildL4ProtoRule(table, family, l4prots string, cmpOp *expr.CmpOp) (*[]expr.Any, error) {
tbl := n.GetTable(table, family)
if tbl == nil {
return nil, fmt.Errorf("Invalid table (%s, %s)", table, family)
}
exprList := []expr.Any{}
if strings.Contains(l4prots, ",") {
set := &nftables.Set{
Anonymous: true,
Constant: true,
Table: tbl,
KeyType: nftables.TypeInetProto,
}
protoSet := exprs.NewExprProtoSet(l4prots)
if err := n.Conn.AddSet(set, *protoSet); err != nil {
log.Warning("%s protoSet, AddSet() error: %s", logTag, err)
return nil, err
}
exprList = append(exprList, &expr.Lookup{
SourceRegister: 1,
SetName: set.Name,
SetID: set.ID,
})
} else {
exprProto := exprs.NewExprL4Proto(l4prots, cmpOp)
exprList = append(exprList, *exprProto...)
}
return &exprList, nil
}
func (n *Nft) buildPortsRule(table, family, ports string, cmpOp *expr.CmpOp) (*[]expr.Any, error) {
tbl := n.GetTable(table, family)
if tbl == nil {
return nil, fmt.Errorf("Invalid table (%s, %s)", table, family)
}
exprList := []expr.Any{}
if strings.Contains(ports, ",") {
set := &nftables.Set{
Anonymous: true,
Constant: true,
Table: tbl,
KeyType: nftables.TypeInetService,
}
setElements := exprs.NewExprPortSet(ports)
if err := n.Conn.AddSet(set, *setElements); err != nil {
log.Warning("%s portSet, AddSet() error: %s", logTag, err)
return nil, err
}
exprList = append(exprList, &expr.Lookup{
SourceRegister: 1,
SetName: set.Name,
SetID: set.ID,
})
sysSets = append(sysSets, set)
} else if strings.Contains(ports, "-") {
portRange, err := exprs.NewExprPortRange(ports, cmpOp)
if err != nil {
log.Warning("%s invalid portRange: %s, %s", logTag, ports, err)
return nil, err
}
exprList = append(exprList, *portRange...)
} else {
exprPort, err := exprs.NewExprPort(ports, cmpOp)
if err != nil {
return nil, err
}
exprList = append(exprList, *exprPort...)
}
return &exprList, nil
}
opensnitch-1.6.9/daemon/firewall/nftables/rules.go
package nftables
import (
"fmt"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables"
"github.com/google/nftables/binaryutil"
"github.com/google/nftables/expr"
"golang.org/x/sys/unix"
)
// QueueDNSResponses redirects DNS responses to us, in order to keep a cache
// of resolved domains.
// This rule must be added at the top of the system rules, otherwise it may get bypassed.
// nft insert rule ip filter input udp sport 53 queue num 0 bypass
func (n *Nft) QueueDNSResponses(enable bool, logError bool) (error, error) {
if n.Conn == nil {
return nil, nil
}
families := []string{exprs.NFT_FAMILY_INET}
for _, fam := range families {
table := n.GetTable(exprs.NFT_CHAIN_FILTER, fam)
if table == nil {
log.Error("QueueDNSResponses() Error getting table: %s-filter", fam)
continue
}
// look up the chain only after verifying the table, to avoid a nil dereference
chain := GetChain(exprs.NFT_HOOK_INPUT, table)
if chain == nil {
log.Error("QueueDNSResponses() Error getting chain: %s-%d", table.Name, table.Family)
continue
}
// nft list ruleset -a
n.Conn.InsertRule(&nftables.Rule{
Position: 0,
Table: table,
Chain: chain,
Exprs: []expr.Any{
&expr.Meta{Key: expr.MetaKeyL4PROTO, Register: 1},
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_UDP},
},
&expr.Payload{
DestRegister: 1,
Base: expr.PayloadBaseTransportHeader,
Offset: 0,
Len: 2,
},
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: binaryutil.BigEndian.PutUint16(uint16(53)),
},
&expr.Queue{
Num: n.QueueNum,
Flag: expr.QueueFlagBypass,
},
},
// rule key, so the rule can be retrieved later by key
UserData: []byte(InterceptionRuleKey),
})
}
// apply changes
if !n.Commit() {
return fmt.Errorf("Error adding DNS interception rules"), nil
}
return nil, nil
}
// QueueConnections inserts the firewall rule which redirects connections to us.
// Connections are queued until the user denies/accept them, or reaches a timeout.
// This rule must be added after all the other rules; that way we can add
// rules above this one to exclude a service/app from being intercepted.
// nft insert rule ip mangle OUTPUT ct state new queue num 0 bypass
func (n *Nft) QueueConnections(enable bool, logError bool) (error, error) {
if n.Conn == nil {
return nil, fmt.Errorf("nftables QueueConnections: netlink connection not active")
}
table := n.GetTable(exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET)
if table == nil {
return nil, fmt.Errorf("QueueConnections() Error getting table mangle-inet")
}
chain := GetChain(exprs.NFT_HOOK_OUTPUT, table)
if chain == nil {
return nil, fmt.Errorf("QueueConnections() Error getting outputChain: output-%s", table.Name)
}
n.Conn.AddRule(&nftables.Rule{
Position: 0,
Table: table,
Chain: chain,
Exprs: []expr.Any{
&expr.Meta{Key: expr.MetaKeyL4PROTO, Register: 1},
&expr.Cmp{
Op: expr.CmpOpNeq,
Register: 1,
Data: []byte{unix.IPPROTO_TCP},
},
&expr.Ct{Register: 1, SourceRegister: false, Key: expr.CtKeySTATE},
&expr.Bitwise{
SourceRegister: 1,
DestRegister: 1,
Len: 4,
Mask: binaryutil.NativeEndian.PutUint32(expr.CtStateBitNEW | expr.CtStateBitRELATED),
Xor: binaryutil.NativeEndian.PutUint32(0),
},
&expr.Cmp{Op: expr.CmpOpNeq, Register: 1, Data: []byte{0, 0, 0, 0}},
&expr.Queue{
Num: n.QueueNum,
Flag: expr.QueueFlagBypass,
},
},
// rule key, so the rule can be retrieved later by key
UserData: []byte(InterceptionRuleKey),
})
/* nft --debug=netlink add rule inet mangle output tcp flags '& (fin|syn|rst|ack) == syn' queue bypass num 0
[ meta load l4proto => reg 1 ]
[ cmp eq reg 1 0x00000006 ]
[ payload load 1b @ transport header + 13 => reg 1 ]
[ bitwise reg 1 = ( reg 1 & 0x00000002 ) ^ 0x00000000 ]
[ cmp neq reg 1 0x00000000 ]
[ queue num 0 bypass ]
Intercept packets *only* with the SYN flag set.
Using 'ct state NEW' causes to intercept packets with other flags set, which
sometimes means that we receive outbound connections not in the expected order:
443:1.1.1.1 -> 192.168.123:12345 (bits ACK, ACK+PSH or SYN+ACK set)
*/
n.Conn.AddRule(&nftables.Rule{
Position: 0,
Table: table,
Chain: chain,
Exprs: []expr.Any{
&expr.Meta{Key: expr.MetaKeyL4PROTO, Register: 1},
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{unix.IPPROTO_TCP},
},
&expr.Payload{
DestRegister: 1,
Base: expr.PayloadBaseTransportHeader,
Offset: 13,
Len: 1,
},
&expr.Bitwise{
DestRegister: 1,
SourceRegister: 1,
Len: 1,
Mask: []byte{0x17},
Xor: []byte{0x00},
},
&expr.Cmp{
Op: expr.CmpOpEq,
Register: 1,
Data: []byte{0x02},
},
&expr.Queue{
Num: n.QueueNum,
Flag: expr.QueueFlagBypass,
},
},
// rule key, so the rule can be retrieved later by key
UserData: []byte(InterceptionRuleKey),
})
// apply changes
if !n.Commit() {
return fmt.Errorf("Error adding interception rule"), nil
}
return nil, nil
}
// InsertRule inserts a rule at the top of rules list.
func (n *Nft) InsertRule(chain, table, family string, position uint64, exprs *[]expr.Any) error {
tbl := n.GetTable(table, family)
if tbl == nil {
return fmt.Errorf("%s getting table: %s, %s", logTag, table, family)
}
chainKey := getChainKey(chain, tbl)
chn, chok := sysChains.Load(chainKey)
if !chok {
return fmt.Errorf("%s getting chain: %s (table: %s, %s)", logTag, chain, table, family)
}
rule := &nftables.Rule{
Position: position,
Table: tbl,
Chain: chn.(*nftables.Chain),
Exprs: *exprs,
UserData: []byte(SystemRuleKey),
}
n.Conn.InsertRule(rule)
if !n.Commit() {
return fmt.Errorf("rule not added")
}
return nil
}
// AddRule adds a rule to the system.
func (n *Nft) AddRule(chain, table, family string, position uint64, key string, exprs *[]expr.Any) (*nftables.Rule, error) {
tbl := n.GetTable(table, family)
if tbl == nil {
return nil, fmt.Errorf("%s getting table: %s, %s", logTag, table, family)
}
chainKey := getChainKey(chain, tbl)
chn, chok := sysChains.Load(chainKey)
if !chok {
return nil, fmt.Errorf("getting chain: %s (table: %s, %s)", chain, table, family)
}
rule := &nftables.Rule{
Position: position,
Table: tbl,
Chain: chn.(*nftables.Chain),
Exprs: *exprs,
UserData: []byte(key),
}
n.Conn.AddRule(rule)
if !n.Commit() {
return nil, fmt.Errorf("adding %s rule", logTag)
}
return rule, nil
}
func (n *Nft) delRulesByKey(key string) error {
chains, err := n.Conn.ListChains()
if err != nil {
return fmt.Errorf("error listing nftables chains (%s): %s", key, err)
}
for _, c := range chains {
rules, err := n.Conn.GetRule(c.Table, c)
if err != nil {
log.Warning("Error listing rules (%s): %s", key, err)
continue
}
delRules := 0
for _, r := range rules {
if string(r.UserData) != key {
continue
}
// just passing the r object doesn't work.
if err := n.Conn.DelRule(&nftables.Rule{
Table: c.Table,
Chain: c,
Handle: r.Handle,
}); err != nil {
log.Warning("[nftables] error deleting rule (%s): %s", key, err)
continue
}
delRules++
}
if delRules > 0 {
if !n.Commit() {
log.Warning("%s error deleting rules: %s", logTag, err)
}
}
if len(rules) == 0 || len(rules) == delRules {
_, chfound := sysChains.Load(getChainKey(c.Name, c.Table))
if chfound {
n.DelChain(c)
}
}
}
return nil
}
// DelInterceptionRules deletes our interception rules, by key.
func (n *Nft) DelInterceptionRules() {
n.delRulesByKey(InterceptionRuleKey)
}
opensnitch-1.6.9/daemon/firewall/nftables/rules_test.go
package nftables_test
import (
"testing"
nftb "github.com/evilsocket/opensnitch/daemon/firewall/nftables"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables"
)
func getRulesList(t *testing.T, conn *nftables.Conn, family, tblName, chnName string) ([]*nftables.Rule, int) {
chains, err := conn.ListChains()
if err != nil {
return nil, -1
}
for rdx, c := range chains {
if c.Table.Family == nftb.GetFamilyCode(family) && c.Table.Name == tblName && c.Name == chnName {
rules, err := conn.GetRule(c.Table, c)
if err != nil {
return nil, -1
}
return rules, rdx
}
}
return nil, -1
}
func getRule(t *testing.T, conn *nftables.Conn, tblName, chnName, key string, ruleHandle uint64) (*nftables.Rule, int) {
chains, err := conn.ListChains()
if err != nil {
return nil, -1
}
for _, c := range chains {
rules, err := conn.GetRule(c.Table, c)
if err != nil {
continue
}
for rdx, r := range rules {
//t.Logf("Table: %s<->%s, Chain: %s<->%s, Rule Handle: %d<->%d, UserData: %s<->%s", c.Table.Name, tblName, c.Name, chnName, r.Handle, ruleHandle, string(r.UserData), key)
if c.Table.Name == tblName && c.Name == chnName {
if ruleHandle > 0 && r.Handle == ruleHandle {
return r, rdx
}
if key != "" && string(r.UserData) == key {
return r, rdx
}
}
}
}
return nil, -1
}
func TestAddRule(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
r, chn := nftest.AddTestRule(t, conn, exprs.NewNoTrack())
/*
_, err := nft.AddTable("yyy", exprs.NFT_FAMILY_INET)
if err != nil {
t.Error("pre step add_table() yyy-inet failed")
}
chn := nft.AddChain(
exprs.NFT_HOOK_INPUT,
"yyy",
exprs.NFT_FAMILY_INET,
nftables.ChainPriorityFilter,
nftables.ChainTypeFilter,
nftables.ChainHookInput,
nftables.ChainPolicyAccept)
if chn == nil {
t.Error("pre step add_chain() input-yyy-inet failed")
}
r, err := nft.addRule(
exprs.NFT_HOOK_INPUT, "yyy", exprs.NFT_FAMILY_INET,
0,
"key-yyy",
exprs.NewNoTrack())
if err != nil {
t.Errorf("Error adding rule: %s", err)
}
*/
rules, err := conn.GetRules(chn.Table, chn)
if err != nil || len(rules) != 1 {
t.Errorf("Rule not added, total: %d", len(rules))
}
t.Log(r.Handle)
}
func TestInsertRule(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
_, err := nftest.Fw.AddTable("yyy", exprs.NFT_FAMILY_INET)
if err != nil {
t.Error("pre step add_table() yyy-inet failed")
}
chn := nftest.Fw.AddChain(
exprs.NFT_HOOK_INPUT,
"yyy",
exprs.NFT_FAMILY_INET,
nftables.ChainPriorityFilter,
nftables.ChainTypeFilter,
nftables.ChainHookInput,
nftables.ChainPolicyAccept)
if chn == nil {
t.Error("pre step add_chain() input-yyy-inet failed")
}
err = nftest.Fw.InsertRule(
exprs.NFT_HOOK_INPUT, "yyy", exprs.NFT_FAMILY_INET,
0,
exprs.NewNoTrack())
if err != nil {
t.Errorf("Error inserting rule: %s", err)
}
rules, err := conn.GetRules(chn.Table, chn)
if err != nil || len(rules) != 1 {
t.Errorf("Rule not inserted, total: %d", len(rules))
}
}
func TestQueueConnections(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
_, err := nftest.Fw.AddTable(exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET)
if err != nil {
t.Error("pre step add_table() mangle-inet failed")
}
chn := nftest.Fw.AddChain(
exprs.NFT_HOOK_OUTPUT, exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET,
nftables.ChainPriorityFilter,
nftables.ChainTypeFilter,
nftables.ChainHookOutput,
nftables.ChainPolicyAccept)
if chn == nil {
t.Error("pre step add_chain() output-mangle-inet failed")
}
if err1, err2 := nftest.Fw.QueueConnections(true, true); err1 != nil && err2 != nil {
t.Errorf("rule to queue connections not added: %s, %s", err1, err2)
}
r, _ := getRule(t, conn, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_OUTPUT, nftb.InterceptionRuleKey, 0)
if r == nil {
t.Fatal("rule to queue connections not in the list")
}
if string(r.UserData) != nftb.InterceptionRuleKey {
t.Errorf("invalid UserData: %s", string(r.UserData))
}
}
func TestQueueDNSResponses(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
_, err := nftest.Fw.AddTable(exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET)
if err != nil {
t.Error("pre step add_table() filter-inet failed")
}
chn := nftest.Fw.AddChain(
exprs.NFT_HOOK_INPUT, exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET,
nftables.ChainPriorityFilter,
nftables.ChainTypeFilter,
nftables.ChainHookInput,
nftables.ChainPolicyAccept)
if chn == nil {
t.Error("pre step add_chain() input-filter-inet failed")
}
if err1, err2 := nftest.Fw.QueueDNSResponses(true, true); err1 != nil && err2 != nil {
t.Errorf("rule to queue DNS responses not added: %s, %s", err1, err2)
}
r, _ := getRule(t, conn, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_INPUT, nftb.InterceptionRuleKey, 0)
if r == nil {
t.Fatal("rule to queue DNS responses not in the list")
}
if string(r.UserData) != nftb.InterceptionRuleKey {
t.Errorf("invalid UserData: %s", string(r.UserData))
}
// nftables.DelRule() does not accept rule handles == 0
// https://github.com/google/nftables/blob/8f2d395e1089dea4966c483fbeae7e336917c095/rule.go#L200
// sometimes when adding this rule in new namespaces it's added with rule.Handle == 0, so it fails deleting the rule, thus failing the test.
// can it happen on "prod" environments?
/*if err1, err2 := nft.QueueDNSResponses(false, true); err1 != nil && err2 != nil {
t.Errorf("rule to queue DNS responses not deleted: %s, %s", err1, err2)
}
r, _ = getRule(t, conn, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_INPUT, nftb.InterceptionRuleKey, 0)
if r != nil {
t.Error("rule to queue DNS responses should have been deleted")
}*/
}
opensnitch-1.6.9/daemon/firewall/nftables/system.go
package nftables
import (
"fmt"
"strings"
"sync"
"github.com/evilsocket/opensnitch/daemon/firewall/config"
"github.com/evilsocket/opensnitch/daemon/firewall/iptables"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables"
"github.com/google/nftables/expr"
"github.com/google/uuid"
)
// sysTablesT stores the tables added to the system.
type sysTablesT struct {
tables map[string]*nftables.Table
sync.RWMutex
}
func (t *sysTablesT) Add(name string, tbl *nftables.Table) {
t.Lock()
defer t.Unlock()
t.tables[name] = tbl
}
func (t *sysTablesT) Get(name string) *nftables.Table {
t.RLock()
defer t.RUnlock()
return t.tables[name]
}
func (t *sysTablesT) List() map[string]*nftables.Table {
t.RLock()
defer t.RUnlock()
return t.tables
}
func (t *sysTablesT) Del(name string) {
t.Lock()
defer t.Unlock()
delete(t.tables, name)
}
var (
logTag = "nftables:"
sysTables *sysTablesT
sysChains *sync.Map
origSysChains map[string]*nftables.Chain
sysSets []*nftables.Set
)
// InitMapsStore initializes the internal stores of tables and chains.
func InitMapsStore() {
sysTables = &sysTablesT{
tables: make(map[string]*nftables.Table),
}
sysChains = &sync.Map{}
origSysChains = make(map[string]*nftables.Chain)
}
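sysTablesT above is a plain map guarded by a `sync.RWMutex`, so concurrent goroutines can add, look up, and delete tables safely. The same pattern can be sketched standalone; the `store` type below is illustrative, not part of OpenSnitch:

```go
package main

import (
	"fmt"
	"sync"
)

// store is a concurrency-safe map of names to values,
// mirroring the sysTablesT pattern: writes take the
// exclusive lock, reads take the shared lock.
type store struct {
	sync.RWMutex
	items map[string]string
}

func newStore() *store {
	return &store{items: make(map[string]string)}
}

func (s *store) Add(k, v string) {
	s.Lock()
	defer s.Unlock()
	s.items[k] = v
}

func (s *store) Get(k string) string {
	s.RLock()
	defer s.RUnlock()
	return s.items[k]
}

func (s *store) Del(k string) {
	s.Lock()
	defer s.Unlock()
	delete(s.items, k)
}

func main() {
	s := newStore()
	s.Add("mangle-inet", "table")
	fmt.Println(s.Get("mangle-inet"))
}
```

Embedding the mutex (rather than a named field) lets callers hold the lock directly on the store when they need to iterate, which is how sysTablesT.List is used.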
// CreateSystemRule creates the custom firewall chains and adds them to the system.
// nft insert rule ip opensnitch-filter opensnitch-input udp dport 1153
func (n *Nft) CreateSystemRule(chain *config.FwChain, logErrors bool) bool {
if chain.IsInvalid() {
log.Warning("%s CreateSystemRule(), Chain's field Name and Family cannot be empty", logTag)
return false
}
tableName := chain.Table
n.AddTable(chain.Table, chain.Family)
// regular chains don't have a hook or a type
if chain.Hook == "" && chain.Type == "" {
n.addRegularChain(chain.Name, tableName, chain.Family)
return n.Commit()
}
chainPolicy := nftables.ChainPolicyAccept
if iptables.Action(strings.ToLower(chain.Policy)) == exprs.VERDICT_DROP {
chainPolicy = nftables.ChainPolicyDrop
}
chainHook := GetHook(chain.Hook)
chainPrio, chainType := GetChainPriority(chain.Family, chain.Type, chain.Hook)
if chainPrio == nil {
log.Warning("%s Invalid system firewall combination: %s, %s", logTag, chain.Type, chain.Hook)
return false
}
if ret := n.AddChain(chain.Name, chain.Table, chain.Family, chainPrio,
chainType, chainHook, chainPolicy); ret == nil {
log.Warning("%s error adding chain: %s, table: %s", logTag, chain.Name, chain.Table)
return false
}
return n.Commit()
}
// AddSystemRules creates the system firewall from configuration.
func (n *Nft) AddSystemRules(reload, backupExistingChains bool) {
n.SysConfig.RLock()
defer n.SysConfig.RUnlock()
if !n.SysConfig.Enabled {
log.Important("[nftables] AddSystemRules() fw disabled")
return
}
if backupExistingChains {
n.backupExistingChains()
}
for _, fwCfg := range n.SysConfig.SystemRules {
for _, chain := range fwCfg.Chains {
if !n.CreateSystemRule(chain, true) {
log.Info("CreateSystemRule failed: %s %s", chain.Name, chain.Table)
continue
}
for i := len(chain.Rules) - 1; i >= 0; i-- {
if chain.Rules[i].UUID == "" {
uuid := uuid.New()
chain.Rules[i].UUID = uuid.String()
}
if chain.Rules[i].Enabled {
if err4, _ := n.AddSystemRule(chain.Rules[i], chain); err4 != nil {
n.SendError(fmt.Sprintf("%s (%s)", err4, chain.Rules[i].UUID))
}
}
}
}
}
}
// DeleteSystemRules deletes the system rules.
// If force is false and the rule has not been previously added,
// it won't try to delete the tables and chains. Otherwise it'll try to delete them.
func (n *Nft) DeleteSystemRules(force, restoreExistingChains, logErrors bool) {
n.Lock()
defer n.Unlock()
if err := n.delRulesByKey(SystemRuleKey); err != nil {
log.Warning("error deleting interception rules: %s", err)
}
if restoreExistingChains {
n.restoreBackupChains()
}
if force {
n.DelSystemTables()
}
}
// AddSystemRule inserts a new rule.
func (n *Nft) AddSystemRule(rule *config.FwRule, chain *config.FwChain) (err4, err6 error) {
n.Lock()
defer n.Unlock()
exprList := []expr.Any{}
for _, expression := range rule.Expressions {
exprsOfRule := n.parseExpression(chain.Table, chain.Name, chain.Family, expression)
if exprsOfRule == nil {
return fmt.Errorf("%s invalid rule parameters: %v", rule.UUID, expression), nil
}
exprList = append(exprList, *exprsOfRule...)
}
if len(exprList) > 0 {
exprVerdict := exprs.NewExprVerdict(rule.Target, rule.TargetParameters)
if exprVerdict == nil {
return fmt.Errorf("%s invalid verdict %s %s", rule.UUID, rule.Target, rule.TargetParameters), nil
}
exprList = append(exprList, *exprVerdict...)
if err := n.InsertRule(chain.Name, chain.Table, chain.Family, rule.Position, &exprList); err != nil {
return err, nil
}
}
return nil, nil
}
opensnitch-1.6.9/daemon/firewall/nftables/system_test.go
package nftables_test
import (
"testing"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
)
type sysChainsListT struct {
family string
table string
chain string
expectedRules int
}
func TestAddSystemRules(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
cfg, err := nftest.Fw.NewSystemFwConfig(nftest.Fw.PreloadConfCallback, nftest.Fw.ReloadConfCallback)
if err != nil {
t.Logf("Error creating fw config: %s", err)
}
cfg.SetFile("./testdata/test-sysfw-conf.json")
if err := cfg.LoadDiskConfiguration(false); err != nil {
t.Errorf("Error loading config from disk: %s", err)
}
nftest.Fw.AddSystemRules(false, false)
rules, _ := getRulesList(t, conn, exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_INPUT)
// filter-input defines 1 enabled rule.
if len(rules) != 1 {
t.Errorf("test-sysfw-conf.json filter-input should contain 1 rule, got %d", len(rules))
for _, r := range rules {
t.Logf("%+v", r)
}
}
rules, _ = getRulesList(t, conn, exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_OUTPUT)
// mangle-output defines 3 enabled rules.
if len(rules) != 3 {
t.Errorf("test-sysfw-conf.json mangle-output should contain 3 rules, got %d", len(rules))
for _, r := range rules {
t.Log(r)
}
}
rules, _ = getRulesList(t, conn, exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_FORWARD)
// mangle-forward defines 1 enabled rule.
if len(rules) != 1 {
t.Errorf("test-sysfw-conf.json mangle-forward should contain 1 rule, got %d", len(rules))
for _, r := range rules {
t.Log(r)
}
}
}
func TestFwConfDisabled(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
cfg, err := nftest.Fw.NewSystemFwConfig(nftest.Fw.PreloadConfCallback, nftest.Fw.ReloadConfCallback)
if err != nil {
t.Logf("Error creating fw config: %s", err)
}
cfg.SetFile("./testdata/test-sysfw-conf.json")
if err := cfg.LoadDiskConfiguration(false); err != nil {
t.Errorf("Error loading config from disk: %s", err)
}
nftest.Fw.AddSystemRules(false, false)
tests := []sysChainsListT{
{
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_OUTPUT, 3,
},
{
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_FORWARD, 1,
},
{
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_INPUT, 1,
},
}
for _, tt := range tests {
rules, _ := getRulesList(t, conn, tt.family, tt.table, tt.chain)
if len(rules) != 0 {
t.Logf("%d rules found, there should be 0", len(rules))
}
}
}
func TestDeleteSystemRules(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
cfg, err := nftest.Fw.NewSystemFwConfig(nftest.Fw.PreloadConfCallback, nftest.Fw.ReloadConfCallback)
if err != nil {
t.Logf("Error creating fw config: %s", err)
}
cfg.SetFile("./testdata/test-sysfw-conf.json")
if err := cfg.LoadDiskConfiguration(false); err != nil {
t.Errorf("Error loading config from disk: %s", err)
}
nftest.Fw.AddSystemRules(false, false)
tests := []sysChainsListT{
{
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_OUTPUT, 3,
},
{
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_FORWARD, 1,
},
{
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_INPUT, 1,
},
}
for _, tt := range tests {
rules, _ := getRulesList(t, conn, tt.family, tt.table, tt.chain)
if len(rules) != tt.expectedRules {
t.Errorf("%d rules found, there should be %d", len(rules), tt.expectedRules)
}
}
t.Run("test-delete-system-rules", func(t *testing.T) {
nftest.Fw.DeleteSystemRules(false, false, true)
for _, tt := range tests {
rules, _ := getRulesList(t, conn, tt.family, tt.table, tt.chain)
if len(rules) != 0 {
t.Errorf("%d rules found, there should be 0", len(rules))
}
tbl := nftest.Fw.GetTable(tt.table, tt.family)
if tbl == nil {
t.Errorf("table %s-%s should exist", tt.table, tt.family)
}
/*chn := nft.getChain(tt.chain, tbl, tt.family)
if chn == nil {
if chains, err := conn.ListChains(); err == nil {
for _, c := range chains {
}
}
t.Errorf("chain %s-%s-%s should exist", tt.family, tt.table, tt.chain)
}*/
}
})
t.Run("test-delete-system-rules+chains", func(t *testing.T) {
})
}
opensnitch-1.6.9/daemon/firewall/nftables/tables.go
package nftables
import (
"fmt"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables"
)
// AddTable adds a new table to nftables.
func (n *Nft) AddTable(name, family string) (*nftables.Table, error) {
famCode := GetFamilyCode(family)
tbl := &nftables.Table{
Family: famCode,
Name: name,
}
n.Conn.AddTable(tbl)
if !n.Commit() {
return nil, fmt.Errorf("%s error adding system firewall table: %s, family: %s (%d)", logTag, name, family, famCode)
}
key := getTableKey(name, family)
sysTables.Add(key, tbl)
return tbl, nil
}
// GetTable retrieves a table previously added to the system.
func (n *Nft) GetTable(name, family string) *nftables.Table {
return sysTables.Get(getTableKey(name, family))
}
func getTableKey(name string, family interface{}) string {
return fmt.Sprint(name, "-", family)
}
// AddInterceptionTables adds the needed tables to intercept traffic.
func (n *Nft) AddInterceptionTables() error {
if _, err := n.AddTable(exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET); err != nil {
return err
}
if _, err := n.AddTable(exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET); err != nil {
return err
}
return nil
}
// Unlike iptables, nftables has no predefined tables or chains.
// The convention, though, is to use the iptables names by default.
// We need at least the mangle and filter tables, inet family (IPv4 and IPv6).
func (n *Nft) addSystemTables() {
n.AddTable(exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET)
n.AddTable(exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET)
}
// nonSystemRules returns the number of rules present in the given table's chains, or -1 on error.
func (n *Nft) nonSystemRules(tbl *nftables.Table) int {
chains, err := n.Conn.ListChains()
if err != nil {
return -1
}
t := 0
for _, c := range chains {
if tbl.Name != c.Table.Name || tbl.Family != c.Table.Family {
continue
}
rules, err := n.Conn.GetRules(c.Table, c)
if err != nil {
return -1
}
t += len(rules)
}
return t
}
// DelSystemTables deletes tables created from fw configuration.
func (n *Nft) DelSystemTables() {
for k, tbl := range sysTables.List() {
if n.nonSystemRules(tbl) != 0 {
continue
}
n.Conn.DelTable(tbl)
if !n.Commit() {
log.Warning("error deleting system table: %s", k)
continue
}
sysTables.Del(k)
}
}
opensnitch-1.6.9/daemon/firewall/nftables/tables_test.go
package nftables_test
import (
"testing"
nftb "github.com/evilsocket/opensnitch/daemon/firewall/nftables"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/nftest"
"github.com/google/nftables"
)
func tableExists(t *testing.T, conn *nftables.Conn, origtbl *nftables.Table, family string) bool {
tables, err := conn.ListTablesOfFamily(
nftb.GetFamilyCode(family),
)
if err != nil {
return false
}
found := false
for _, tbl := range tables {
if origtbl != nil && tbl.Name == origtbl.Name {
found = true
break
}
}
return found
}
func TestAddTable(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
t.Run("inet family", func(t *testing.T) {
tblxxx, err := nftest.Fw.AddTable("xxx", exprs.NFT_FAMILY_INET)
if err != nil {
t.Error("table xxx-inet not added:", err)
}
if tableExists(t, nftest.Fw.Conn, tblxxx, exprs.NFT_FAMILY_INET) == false {
t.Error("table xxx-inet not in the list")
}
nftest.Fw.DelSystemTables()
if tableExists(t, nftest.Fw.Conn, tblxxx, exprs.NFT_FAMILY_INET) {
t.Error("table xxx-inet still exists")
}
})
t.Run("ip family", func(t *testing.T) {
tblxxx, err := nftest.Fw.AddTable("xxx", exprs.NFT_FAMILY_IP)
if err != nil {
t.Error("table xxx-ip not added:", err)
}
if tableExists(t, nftest.Fw.Conn, tblxxx, exprs.NFT_FAMILY_IP) == false {
t.Error("table xxx-ip not in the list")
}
nftest.Fw.DelSystemTables()
if tableExists(t, nftest.Fw.Conn, tblxxx, exprs.NFT_FAMILY_IP) {
t.Error("table xxx-ip still exists")
}
})
t.Run("ip6 family", func(t *testing.T) {
tblxxx, err := nftest.Fw.AddTable("xxx", exprs.NFT_FAMILY_IP6)
if err != nil {
t.Error("table xxx-ip6 not added:", err)
}
if tableExists(t, nftest.Fw.Conn, tblxxx, exprs.NFT_FAMILY_IP6) == false {
t.Error("table xxx-ip6 not in the list")
}
nftest.Fw.DelSystemTables()
if tableExists(t, nftest.Fw.Conn, tblxxx, exprs.NFT_FAMILY_IP6) {
t.Error("table xxx-ip6 still exists")
}
})
}
// TestAddInterceptionTables checks if the needed tables have been created.
// We use 2: mangle-inet for intercepting outbound connections, and filter-inet for DNS responses interception
func TestAddInterceptionTables(t *testing.T) {
nftest.SkipIfNotPrivileged(t)
conn, newNS := nftest.OpenSystemConn(t)
defer nftest.CleanupSystemConn(t, newNS)
nftest.Fw.Conn = conn
if err := nftest.Fw.AddInterceptionTables(); err != nil {
t.Errorf("addInterceptionTables() error: %s", err)
}
t.Run("mangle-inet", func(t *testing.T) {
tblmangle := nftest.Fw.GetTable(exprs.NFT_CHAIN_MANGLE, exprs.NFT_FAMILY_INET)
if tblmangle == nil {
t.Error("interception table mangle-inet not in the list")
}
if tableExists(t, nftest.Fw.Conn, tblmangle, exprs.NFT_FAMILY_INET) == false {
t.Error("table mangle-inet not in the list")
}
})
t.Run("filter-inet", func(t *testing.T) {
tblfilter := nftest.Fw.GetTable(exprs.NFT_CHAIN_FILTER, exprs.NFT_FAMILY_INET)
if tblfilter == nil {
t.Error("interception table filter-inet not in the list")
}
if tableExists(t, nftest.Fw.Conn, tblfilter, exprs.NFT_FAMILY_INET) == false {
t.Error("table filter-inet not in the list")
}
})
}
opensnitch-1.6.9/daemon/firewall/nftables/testdata/test-sysfw-conf.json
{
"Enabled": true,
"Version": 1,
"SystemRules": [
{
"Chains": [
{
"Name": "input",
"Table": "filter",
"Family": "inet",
"Priority": "",
"Type": "filter",
"Hook": "input",
"Policy": "accept",
"Rules": [
{
"Enabled": true,
"Position": "0",
"Description": "Allow SSH server connections when input policy is DROP",
"Parameters": "",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "tcp",
"Values": [
{
"Key": "dport",
"Value": "22"
}
]
}
}
],
"Target": "accept",
"TargetParameters": ""
}
]
},
{
"Name": "output",
"Table": "mangle",
"Family": "inet",
"Priority": "",
"Type": "mangle",
"Hook": "output",
"Policy": "accept",
"Rules": [
{
"Enabled": true,
"Position": "0",
"Description": "Allow ICMP",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "icmp",
"Values": [
{
"Key": "type",
"Value": "echo-request"
},
{
"Key": "type",
"Value": "echo-reply"
}
]
}
}
],
"Target": "accept",
"TargetParameters": ""
},
{
"Enabled": true,
"Position": "0",
"Description": "Allow ICMPv6",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "icmpv6",
"Values": [
{
"Key": "type",
"Value": "echo-request"
},
{
"Key": "type",
"Value": "echo-reply"
}
]
}
}
],
"Target": "accept",
"TargetParameters": ""
},
{
"Enabled": true,
"Position": "0",
"Description": "Exclude WireGuard VPN from being intercepted",
"Parameters": "",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "udp",
"Values": [
{
"Key": "dport",
"Value": "51820"
}
]
}
}
],
"Target": "accept",
"TargetParameters": ""
}
]
},
{
"Name": "forward",
"Table": "mangle",
"Family": "inet",
"Priority": "",
"Type": "mangle",
"Hook": "forward",
"Policy": "accept",
"Rules": [
{
"UUID": "7d7394e1-100d-4b87-a90a-cd68c46edb0b",
"Enabled": true,
"Position": "0",
"Description": "Intercept forwarded connections (docker, etc)",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "ct",
"Values": [
{
"Key": "state",
"Value": "new"
}
]
}
}
],
"Target": "queue",
"TargetParameters": "num 0"
}
]
}
]
}
]
}
opensnitch-1.6.9/daemon/firewall/nftables/utils.go
package nftables
import (
"strings"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/nftables"
)
// GetFamilyCode returns the nftables table family that corresponds to the given family name.
func GetFamilyCode(family string) nftables.TableFamily {
famCode := nftables.TableFamilyINet
switch family {
// [filter]: prerouting forward input output postrouting
// [nat]: prerouting, input output postrouting
// [route]: output
case exprs.NFT_FAMILY_IP6:
famCode = nftables.TableFamilyIPv6
case exprs.NFT_FAMILY_IP:
famCode = nftables.TableFamilyIPv4
case exprs.NFT_FAMILY_BRIDGE:
// [filter]: prerouting forward input output postrouting
famCode = nftables.TableFamilyBridge
case exprs.NFT_FAMILY_ARP:
// [filter]: input output
famCode = nftables.TableFamilyARP
case exprs.NFT_FAMILY_NETDEV:
// [filter]: egress, ingress
famCode = nftables.TableFamilyNetdev
}
return famCode
}
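GetFamilyCode above falls back to the dual-stack inet family for any unrecognized name. A simplified standalone sketch shows the same mapping; the local constants are stand-ins for the real `nftables.TableFamily*` codes and `exprs.NFT_FAMILY_*` names:

```go
package main

import "fmt"

// Local stand-ins for the nftables.TableFamily* codes.
const (
	familyINet = iota
	familyIPv4
	familyIPv6
)

// familyCode mirrors GetFamilyCode: unknown strings
// fall back to the inet (IPv4+IPv6) family.
func familyCode(family string) int {
	switch family {
	case "ip":
		return familyIPv4
	case "ip6":
		return familyIPv6
	}
	return familyINet
}

func main() {
	fmt.Println(familyCode("ip"), familyCode("ip6"), familyCode("bogus")) // prints: 1 2 0
}
```

Defaulting to inet is a deliberate choice: a chain created under inet intercepts both IPv4 and IPv6 traffic, which is the safest fallback for a firewall.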
// GetHook returns the nftables chain hook that corresponds to the given hook name.
func GetHook(chain string) *nftables.ChainHook {
hook := nftables.ChainHookOutput
// https://github.com/google/nftables/blob/master/chain.go#L33
switch strings.ToLower(chain) {
case exprs.NFT_HOOK_INPUT:
hook = nftables.ChainHookInput
case exprs.NFT_HOOK_PREROUTING:
hook = nftables.ChainHookPrerouting
case exprs.NFT_HOOK_POSTROUTING:
hook = nftables.ChainHookPostrouting
case exprs.NFT_HOOK_FORWARD:
hook = nftables.ChainHookForward
case exprs.NFT_HOOK_INGRESS:
hook = nftables.ChainHookIngress
}
return hook
}
// GetChainPriority gets the corresponding priority for the given chain, based
// on the following configuration matrix:
// https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks#Priority_within_hook
// https://github.com/google/nftables/blob/master/chain.go#L48
// man nft (table 6.)
func GetChainPriority(family, cType, hook string) (*nftables.ChainPriority, nftables.ChainType) {
// types: route, nat, filter
chainType := nftables.ChainTypeFilter
// priorities: raw, conntrack, mangle, natdest, filter, security
chainPrio := nftables.ChainPriorityFilter
family = strings.ToLower(family)
cType = strings.ToLower(cType)
hook = strings.ToLower(hook)
// constraints
// https://www.netfilter.org/projects/nftables/manpage.html#lbAQ
if (cType == exprs.NFT_CHAIN_NATDEST || cType == exprs.NFT_CHAIN_NATSOURCE) && hook == exprs.NFT_HOOK_FORWARD {
log.Warning("[nftables] invalid nat combination of tables and hooks. chain: %s, hook: %s", cType, hook)
return nil, chainType
}
if family == exprs.NFT_FAMILY_NETDEV && (cType != exprs.NFT_CHAIN_FILTER || hook != exprs.NFT_HOOK_INGRESS) {
log.Warning("[nftables] invalid netdev combination of tables and hooks. chain: %s, hook: %s", cType, hook)
return nil, chainType
}
if family == exprs.NFT_FAMILY_ARP && (cType != exprs.NFT_CHAIN_FILTER || (hook != exprs.NFT_HOOK_OUTPUT && hook != exprs.NFT_HOOK_INPUT)) {
log.Warning("[nftables] invalid arp combination of tables and hooks. chain: %s, hook: %s", cType, hook)
return nil, chainType
}
if family == exprs.NFT_FAMILY_BRIDGE && (cType != exprs.NFT_CHAIN_FILTER || (hook == exprs.NFT_HOOK_EGRESS || hook == exprs.NFT_HOOK_INGRESS)) {
log.Warning("[nftables] invalid bridge combination of tables and hooks. chain: %s, hook: %s", cType, hook)
return nil, chainType
}
// Standard priority names, family and hook compatibility matrix
// https://www.netfilter.org/projects/nftables/manpage.html#lbAQ
switch cType {
case exprs.NFT_CHAIN_FILTER:
if family == exprs.NFT_FAMILY_BRIDGE {
// bridge all filter -200 NF_BR_PRI_FILTER_BRIDGED
chainPrio = nftables.ChainPriorityConntrack
switch hook {
case exprs.NFT_HOOK_PREROUTING: // -300
chainPrio = nftables.ChainPriorityRaw
case exprs.NFT_HOOK_OUTPUT: // -100
chainPrio = nftables.ChainPriorityNATSource
case exprs.NFT_HOOK_POSTROUTING: // 300
chainPrio = nftables.ChainPriorityConntrackHelper
}
}
case exprs.NFT_CHAIN_MANGLE:
// hooks: all
// XXX: check hook input?
chainPrio = nftables.ChainPriorityMangle
// https://wiki.nftables.org/wiki-nftables/index.php/Configuring_chains#Base_chain_types
// (...) equivalent semantics to the mangle table but only for the output hook (for other hooks use type filter instead).
// Despite of what is said on the wiki, mangle chains must be of filter type,
// otherwise on some kernels (4.19.x) table MANGLE hook OUTPUT chain is not created
chainType = nftables.ChainTypeFilter
case exprs.NFT_CHAIN_RAW:
// hook: all
chainPrio = nftables.ChainPriorityRaw
case exprs.NFT_CHAIN_CONNTRACK:
chainPrio, chainType = GetConntrackPriority(hook)
case exprs.NFT_CHAIN_NATDEST:
// hook: prerouting
chainPrio = nftables.ChainPriorityNATDest
switch hook {
case exprs.NFT_HOOK_OUTPUT:
chainPrio = nftables.ChainPriorityNATSource
}
chainType = nftables.ChainTypeNAT
case exprs.NFT_CHAIN_NATSOURCE:
// hook: postrouting
chainPrio = nftables.ChainPriorityNATSource
chainType = nftables.ChainTypeNAT
case exprs.NFT_CHAIN_SECURITY:
// hook: all
chainPrio = nftables.ChainPrioritySecurity
case exprs.NFT_CHAIN_SELINUX:
// hook: all
if hook != exprs.NFT_HOOK_POSTROUTING {
chainPrio = nftables.ChainPrioritySELinuxLast
} else {
chainPrio = nftables.ChainPrioritySELinuxFirst
}
}
return chainPrio, chainType
}
// GetConntrackPriority returns the chain priority and type to use for conntrack chains, based on the hook.
// https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks#Priority_within_hook
func GetConntrackPriority(hook string) (*nftables.ChainPriority, nftables.ChainType) {
chainType := nftables.ChainTypeFilter
chainPrio := nftables.ChainPriorityConntrack
switch hook {
case exprs.NFT_HOOK_PREROUTING:
chainPrio = nftables.ChainPriorityConntrack
// ChainTypeNAT not allowed here
case exprs.NFT_HOOK_OUTPUT:
chainPrio = nftables.ChainPriorityNATSource // 100 - ChainPriorityConntrack
case exprs.NFT_HOOK_POSTROUTING:
chainPrio = nftables.ChainPriorityConntrackHelper
chainType = nftables.ChainTypeNAT
case exprs.NFT_HOOK_INPUT:
// can also be hook == NFT_HOOK_POSTROUTING
chainPrio = nftables.ChainPriorityConntrackConfirm
}
return chainPrio, chainType
}
opensnitch-1.6.9/daemon/firewall/nftables/utils_test.go
package nftables_test
import (
"testing"
nftb "github.com/evilsocket/opensnitch/daemon/firewall/nftables"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables/exprs"
"github.com/google/nftables"
)
type chainPrioT struct {
test string
errorReason string
family string
chain string
hook string
checkEqual bool
chainPrio *nftables.ChainPriority
chainType nftables.ChainType
}
// TestGetConntrackPriority test basic Conntrack chains priority configurations.
// https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks#Priority_within_hook
func TestGetConntrackPriority(t *testing.T) {
t.Run("hook-prerouting", func(t *testing.T) {
cprio, ctype := nftb.GetConntrackPriority(exprs.NFT_HOOK_PREROUTING)
if cprio != nftables.ChainPriorityConntrack || ctype != nftables.ChainTypeFilter {
t.Errorf("invalid conntrack priority or type for hook PREROUTING: %+v, %+v", cprio, ctype)
}
})
t.Run("hook-output", func(t *testing.T) {
cprio, ctype := nftb.GetConntrackPriority(exprs.NFT_HOOK_OUTPUT)
if cprio != nftables.ChainPriorityNATSource || ctype != nftables.ChainTypeFilter {
t.Errorf("invalid conntrack priority or type for hook OUTPUT: %+v, %+v", cprio, ctype)
}
})
t.Run("hook-postrouting", func(t *testing.T) {
cprio, ctype := nftb.GetConntrackPriority(exprs.NFT_HOOK_POSTROUTING)
if cprio != nftables.ChainPriorityConntrackHelper || ctype != nftables.ChainTypeNAT {
t.Errorf("invalid conntrack priority or type for hook POSTROUTING: %+v, %+v", cprio, ctype)
}
})
t.Run("hook-input", func(t *testing.T) {
cprio, ctype := nftb.GetConntrackPriority(exprs.NFT_HOOK_INPUT)
if cprio != nftables.ChainPriorityConntrackConfirm || ctype != nftables.ChainTypeFilter {
t.Errorf("invalid conntrack priority or type for hook INPUT: %+v, %+v", cprio, ctype)
}
})
}
// https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks#Priority_within_hook
// https://github.com/google/nftables/blob/master/chain.go#L48
// man nft (table 6.)
func TestGetChainPriority(t *testing.T) {
matrixTests := []chainPrioT{
// https://wiki.nftables.org/wiki-nftables/index.php/Configuring_chains#Base_chain_types
// (...) equivalent semantics to the mangle table but only for the output hook (for other hooks use type filter instead).
// Despite of what is said on the wiki, mangle chains must be of filter type,
// otherwise on some kernels (4.19.x) table MANGLE hook OUTPUT chain is not created
{
"inet-mangle-output",
"invalid MANGLE chain priority or type: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_MANGLE, exprs.NFT_HOOK_OUTPUT,
true,
nftables.ChainPriorityMangle, nftables.ChainTypeFilter,
},
{
"inet-natdest-output",
"invalid NATDest-output chain priority or type: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_NATDEST, exprs.NFT_HOOK_OUTPUT,
true,
nftables.ChainPriorityNATSource, nftables.ChainTypeNAT,
},
{
"inet-natdest-prerouting",
"invalid NATDest-prerouting chain priority or type: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_NATDEST, exprs.NFT_HOOK_PREROUTING,
true,
nftables.ChainPriorityNATDest, nftables.ChainTypeNAT,
},
{
"inet-natsource-postrouting",
"invalid NATSource-postrouting chain priority or type: %+v-%+v, %v-%v",
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_NATSOURCE, exprs.NFT_HOOK_POSTROUTING,
true,
nftables.ChainPriorityNATSource, nftables.ChainTypeNAT,
},
// constraints
// https://www.netfilter.org/projects/nftables/manpage.html#lbAQ
{
"inet-natdest-forward",
"invalid natdest-forward chain: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_NATDEST, exprs.NFT_HOOK_FORWARD,
true,
nil, nftables.ChainTypeFilter,
},
{
"inet-natsource-forward",
"invalid natsource-forward chain: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_NATSOURCE, exprs.NFT_HOOK_FORWARD,
true,
nil, nftables.ChainTypeFilter,
},
{
"netdev-filter-ingress",
"invalid netdev chain prio or type: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_NETDEV, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_INGRESS,
true,
nftables.ChainPriorityFilter, nftables.ChainTypeFilter,
},
{
"arp-filter-input",
"invalid arp chain prio or type: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_ARP, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_INPUT,
true,
nftables.ChainPriorityFilter, nftables.ChainTypeFilter,
},
{
"bridge-filter-prerouting",
"invalid bridge-prerouting chain prio or type: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_BRIDGE, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_PREROUTING,
true,
nftables.ChainPriorityRaw, nftables.ChainTypeFilter,
},
{
"bridge-filter-output",
"invalid bridge-output chain prio or type: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_BRIDGE, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_OUTPUT,
true,
nftables.ChainPriorityNATSource, nftables.ChainTypeFilter,
},
{
"bridge-filter-postrouting",
"invalid bridge-postrouting chain prio or type: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_BRIDGE, exprs.NFT_CHAIN_FILTER, exprs.NFT_HOOK_POSTROUTING,
true,
nftables.ChainPriorityConntrackHelper, nftables.ChainTypeFilter,
},
}
for _, testChainPrio := range matrixTests {
t.Run(testChainPrio.test, func(t *testing.T) {
chainPrio, chainType := nftb.GetChainPriority(testChainPrio.family, testChainPrio.chain, testChainPrio.hook)
if testChainPrio.checkEqual {
if chainPrio != testChainPrio.chainPrio || chainType != testChainPrio.chainType {
t.Errorf(testChainPrio.errorReason, chainPrio, chainType, testChainPrio.chainPrio, testChainPrio.chainType)
}
} else {
if chainPrio == testChainPrio.chainPrio && chainType == testChainPrio.chainType {
t.Errorf(testChainPrio.errorReason, chainPrio, chainType, testChainPrio.chainPrio, testChainPrio.chainType)
}
}
})
}
}
func TestInvalidChainPriority(t *testing.T) {
matrixTests := []chainPrioT{
{
"inet-natdest-forward",
"natdest-forward chain should be invalid: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_NATDEST, exprs.NFT_HOOK_FORWARD,
true,
nil, nftables.ChainTypeFilter,
},
{
"inet-natsource-forward",
"natsource-forward chain should be invalid: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_INET, exprs.NFT_CHAIN_NATSOURCE, exprs.NFT_HOOK_FORWARD,
true,
nil, nftables.ChainTypeFilter,
},
{
"netdev-natsource-forward",
"netdev chain should be invalid: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_NETDEV, exprs.NFT_CHAIN_NATSOURCE, exprs.NFT_HOOK_FORWARD,
true,
nil,
nftables.ChainTypeFilter,
},
{
"arp-natsource-forward",
"arp chain should be invalid: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_ARP, exprs.NFT_CHAIN_NATSOURCE, exprs.NFT_HOOK_FORWARD,
true,
nil, nftables.ChainTypeFilter,
},
{
"bridge-natsource-forward",
"bridge chain should be invalid: %+v-%+v <-> %v-%v",
exprs.NFT_FAMILY_BRIDGE, exprs.NFT_CHAIN_NATSOURCE, exprs.NFT_HOOK_FORWARD,
true,
nil, nftables.ChainTypeFilter,
},
}
for _, testChainPrio := range matrixTests {
t.Run(testChainPrio.test, func(t *testing.T) {
chainPrio, chainType := nftb.GetChainPriority(testChainPrio.family, testChainPrio.chain, testChainPrio.hook)
if testChainPrio.checkEqual {
if chainPrio != testChainPrio.chainPrio || chainType != testChainPrio.chainType {
t.Errorf(testChainPrio.errorReason, chainPrio, chainType, testChainPrio.chainPrio, testChainPrio.chainType)
}
} else {
if chainPrio == testChainPrio.chainPrio && chainType == testChainPrio.chainType {
t.Errorf(testChainPrio.errorReason, chainPrio, chainType, testChainPrio.chainPrio, testChainPrio.chainType)
}
}
})
}
}
opensnitch-1.6.9/daemon/firewall/rules.go 0000664 0000000 0000000 00000007615 15003540030 0020431 0 ustar 00root root 0000000 0000000 package firewall
import (
"fmt"
"github.com/evilsocket/opensnitch/daemon/firewall/common"
"github.com/evilsocket/opensnitch/daemon/firewall/iptables"
"github.com/evilsocket/opensnitch/daemon/firewall/nftables"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
// Firewall is the interface that all firewalls (iptables, nftables) must implement.
type Firewall interface {
Init(*int)
Stop()
Name() string
IsRunning() bool
SetQueueNum(num *int)
SaveConfiguration(rawConfig string) error
EnableInterception()
DisableInterception(bool)
QueueDNSResponses(bool, bool) (error, error)
QueueConnections(bool, bool) (error, error)
CleanRules(bool)
AddSystemRules(bool, bool)
DeleteSystemRules(bool, bool, bool)
Serialize() (*protocol.SysFirewall, error)
Deserialize(sysfw *protocol.SysFirewall) ([]byte, error)
ErrorsChan() <-chan string
ErrChanEmpty() bool
}
var (
fw Firewall
queueNum = 0
)
// Init initializes the firewall and loads firewall rules.
// We'll try to use the firewall configured in the configuration (iptables/nftables).
// If iptables is not installed, we can add nftables rules directly to the kernel,
// without relying on any binaries.
func Init(fwType string, qNum *int) (err error) {
if fwType == iptables.Name {
fw, err = iptables.Fw()
if err != nil {
log.Warning("iptables not available: %s", err)
}
}
if fwType == nftables.Name || err != nil {
fw, err = nftables.Fw()
if err != nil {
log.Warning("nftables not available: %s", err)
}
}
if err != nil {
return fmt.Errorf("firewall error: %s, neither iptables nor nftables is available or usable. Please report it on GitHub", err)
}
if fw == nil {
return fmt.Errorf("Firewall not initialized")
}
fw.Stop()
fw.Init(qNum)
queueNum = *qNum
log.Info("Using %s firewall", fw.Name())
return
}
// IsRunning returns if the firewall is running or not.
func IsRunning() bool {
return fw != nil && fw.IsRunning()
}
// ErrorsChan returns the channel where the errors are sent to.
func ErrorsChan() <-chan string {
return fw.ErrorsChan()
}
// ErrChanEmpty checks if the errors channel is empty.
func ErrChanEmpty() bool {
return fw.ErrChanEmpty()
}
// CleanRules deletes the rules we added.
func CleanRules(logErrors bool) {
if fw == nil {
return
}
fw.CleanRules(logErrors)
}
// ChangeFw stops current firewall and initializes a new one.
func ChangeFw(fwtype string) (err error) {
Stop()
err = Init(fwtype, &queueNum)
return
}
// Reload deletes existing firewall rules and re-adds them.
func Reload() {
fw.Stop()
fw.Init(&queueNum)
}
// ReloadSystemRules deletes existing rules and adds them again.
func ReloadSystemRules() {
fw.DeleteSystemRules(!common.ForcedDelRules, common.RestoreChains, true)
fw.AddSystemRules(common.ReloadRules, common.BackupChains)
}
// EnableInterception adds the rules to intercept outbound connections.
func EnableInterception() error {
if fw == nil {
return fmt.Errorf("firewall not initialized when trying to enable interception; please report it on GitHub")
}
fw.EnableInterception()
return nil
}
// DisableInterception removes the rules to intercept outbound connections.
func DisableInterception() error {
if fw == nil {
return fmt.Errorf("firewall not initialized when trying to disable interception; please report it on GitHub")
}
fw.DisableInterception(true)
return nil
}
// Stop deletes the firewall rules, allowing network traffic.
func Stop() {
if fw == nil {
return
}
fw.Stop()
}
// SaveConfiguration saves configuration string to disk
func SaveConfiguration(rawConfig []byte) error {
return fw.SaveConfiguration(string(rawConfig))
}
// Serialize transforms firewall json configuration to protobuf
func Serialize() (*protocol.SysFirewall, error) {
return fw.Serialize()
}
// Deserialize transforms firewall protobuf configuration to json
func Deserialize(sysfw *protocol.SysFirewall) ([]byte, error) {
return fw.Deserialize(sysfw)
}
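The fallback logic in `Init()` above — try the configured backend first, then fall back to the other one when it errors — can be sketched in isolation. Everything here (`selectFw`, `newIptables`, `newNftables`, the `backend` type) is a hypothetical stand-in for illustration only, not the daemon's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// backend is a stand-in for a Firewall implementation.
type backend struct{ name string }

func (b *backend) Name() string { return b.name }

// newIptables always fails here, to force the fallback path.
func newIptables() (*backend, error) { return nil, errors.New("iptables not available") }

// newNftables always succeeds.
func newNftables() (*backend, error) { return &backend{name: "nftables"}, nil }

// selectFw mirrors the shape of firewall.Init(): try the configured
// backend first, then fall back to nftables if that attempt errored.
func selectFw(fwType string) (*backend, error) {
	var fw *backend
	var err error
	if fwType == "iptables" {
		fw, err = newIptables()
	}
	if fwType == "nftables" || err != nil {
		fw, err = newNftables()
	}
	if err != nil {
		return nil, fmt.Errorf("firewall error: %s", err)
	}
	if fw == nil {
		return nil, errors.New("firewall not initialized")
	}
	return fw, nil
}

func main() {
	fw, _ := selectFw("iptables") // iptables fails, so we fall back
	fmt.Println(fw.Name())        // nftables
}
```

Note that the `err != nil` condition in the second branch is what makes the fallback implicit: a failed iptables attempt flows straight into the nftables constructor without any explicit retry code.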
opensnitch-1.6.9/daemon/go.mod 0000664 0000000 0000000 00000002311 15003540030 0016235 0 ustar 00root root 0000000 0000000 module github.com/evilsocket/opensnitch/daemon
go 1.17
require (
github.com/fsnotify/fsnotify v1.4.7
github.com/golang/protobuf v1.5.0
github.com/google/gopacket v1.1.14
github.com/google/nftables v0.1.0
github.com/google/uuid v1.3.0
github.com/iovisor/gobpf v0.2.0
github.com/varlink/go v0.4.0
github.com/vishvananda/netlink v0.0.0-20210811191823-e1a867c6b452
github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae
golang.org/x/net v0.0.0-20211209124913-491a49abca63
golang.org/x/sys v0.0.0-20211205182925-97ca703d548d
google.golang.org/grpc v1.32.0
)
require (
github.com/BurntSushi/toml v0.4.1 // indirect
github.com/google/go-cmp v0.5.6 // indirect
github.com/josharian/native v0.0.0-20200817173448-b6b71def0850 // indirect
github.com/mdlayher/netlink v1.4.2 // indirect
github.com/mdlayher/socket v0.0.0-20211102153432-57e3fa563ecb // indirect
golang.org/x/mod v0.5.1 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/tools v0.1.8 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 // indirect
google.golang.org/protobuf v1.26.0 // indirect
honnef.co/go/tools v0.2.2 // indirect
)
opensnitch-1.6.9/daemon/go.sum 0000664 0000000 0000000 00000046372 15003540030 0016301 0 ustar 00root root 0000000 0000000 cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v0.4.1 h1:GaI7EiDXDRfa8VshkTj7Fym7ha+y8/XxIgD2okUIjLw=
github.com/BurntSushi/toml v0.4.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cilium/ebpf v0.5.0/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.5.0 h1:LUVKkCeviFUMKqHa4tXIIij/lbhnMbP7Fn5wKdKkRh4=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gopacket v1.1.14 h1:1+TEhSu8Mh154ZBVjyd1Nt2Bb7cnyOeE3GQyb1WGLqI=
github.com/google/gopacket v1.1.14/go.mod h1:UCLx9mCmAwsVbn6qQl1WIEt2SO7Nd2fD0th1TBAsqBw=
github.com/google/nftables v0.1.0 h1:T6lS4qudrMufcNIZ8wSRrL+iuwhsKxpN+zFLxhUWOqk=
github.com/google/nftables v0.1.0/go.mod h1:b97ulCCFipUC+kSin+zygkvUVpx0vyIAwxXFdY3PlNc=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/iovisor/gobpf v0.2.0 h1:34xkQxft+35GagXBk3n23eqhm0v7q0ejeVirb8sqEOQ=
github.com/iovisor/gobpf v0.2.0/go.mod h1:WSY9Jj5RhdgC3ci1QaacvbFdQ8cbrEjrpiZbLHLt2s4=
github.com/josharian/native v0.0.0-20200817173448-b6b71def0850 h1:uhL5Gw7BINiiPAo24A2sxkcDI0Jt/sqp1v5xQCniEFA=
github.com/josharian/native v0.0.0-20200817173448-b6b71def0850/go.mod h1:7X/raswPFr05uY3HiLlYeyQntB6OO7E/d2Cu7qoaN2w=
github.com/jsimonetti/rtnetlink v0.0.0-20190606172950-9527aa82566a/go.mod h1:Oz+70psSo5OFh8DBl0Zv2ACw7Esh6pPUphlvZG9x7uw=
github.com/jsimonetti/rtnetlink v0.0.0-20200117123717-f846d4f6c1f4/go.mod h1:WGuG/smIU4J/54PblvSbh+xvCZmpJnFgr3ds6Z55XMQ=
github.com/jsimonetti/rtnetlink v0.0.0-20201009170750-9c6f07d100c1/go.mod h1:hqoO/u39cqLeBLebZ8fWdE96O7FxrAsRYhnVOdgHxok=
github.com/jsimonetti/rtnetlink v0.0.0-20201216134343-bde56ed16391/go.mod h1:cR77jAZG3Y3bsb8hF6fHJbFoyFukLFOkQ98S0pQz3xw=
github.com/jsimonetti/rtnetlink v0.0.0-20201220180245-69540ac93943/go.mod h1:z4c53zj6Eex712ROyh8WI0ihysb5j2ROyV42iNogmAs=
github.com/jsimonetti/rtnetlink v0.0.0-20210122163228-8d122574c736/go.mod h1:ZXpIyOK59ZnN7J0BV99cZUPmsqDRZ3eq5X+st7u/oSA=
github.com/jsimonetti/rtnetlink v0.0.0-20210212075122-66c871082f2b/go.mod h1:8w9Rh8m+aHZIG69YPGGem1i5VzoyRC8nw2kA8B+ik5U=
github.com/jsimonetti/rtnetlink v0.0.0-20210525051524-4cc836578190/go.mod h1:NmKSdU4VGSiv1bMsdqNALI4RSvvjtz65tTMCnD05qLo=
github.com/jsimonetti/rtnetlink v0.0.0-20211022192332-93da33804786 h1:N527AHMa793TP5z5GNAn/VLPzlc0ewzWdeP/25gDfgQ=
github.com/jsimonetti/rtnetlink v0.0.0-20211022192332-93da33804786/go.mod h1:v4hqbTdfQngbVSZJVWUhGE/lbTFf9jb+ygmNUDQMuOs=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/mdlayher/ethtool v0.0.0-20210210192532-2b88debcdd43/go.mod h1:+t7E0lkKfbBsebllff1xdTmyJt8lH37niI6kwFk9OTo=
github.com/mdlayher/ethtool v0.0.0-20211028163843-288d040e9d60 h1:tHdB+hQRHU10CfcK0furo6rSNgZ38JT8uPh70c/pFD8=
github.com/mdlayher/ethtool v0.0.0-20211028163843-288d040e9d60/go.mod h1:aYbhishWc4Ai3I2U4Gaa2n3kHWSwzme6EsG/46HRQbE=
github.com/mdlayher/genetlink v1.0.0 h1:OoHN1OdyEIkScEmRgxLEe2M9U8ClMytqA5niynLtfj0=
github.com/mdlayher/genetlink v1.0.0/go.mod h1:0rJ0h4itni50A86M2kHcgS85ttZazNt7a8H2a2cw0Gc=
github.com/mdlayher/netlink v0.0.0-20190409211403-11939a169225/go.mod h1:eQB3mZE4aiYnlUsyGGCOpPETfdQq4Jhsgf1fk3cwQaA=
github.com/mdlayher/netlink v1.0.0/go.mod h1:KxeJAFOFLG6AjpyDkQ/iIhxygIUKD+vcwqcnu43w/+M=
github.com/mdlayher/netlink v1.1.0/go.mod h1:H4WCitaheIsdF9yOYu8CFmCgQthAPIWZmcKp9uZHgmY=
github.com/mdlayher/netlink v1.1.1/go.mod h1:WTYpFb/WTvlRJAyKhZL5/uy69TDDpHHu2VZmb2XgV7o=
github.com/mdlayher/netlink v1.2.0/go.mod h1:kwVW1io0AZy9A1E2YYgaD4Cj+C+GPkU6klXCMzIJ9p8=
github.com/mdlayher/netlink v1.2.1/go.mod h1:bacnNlfhqHqqLo4WsYeXSqfyXkInQ9JneWI68v1KwSU=
github.com/mdlayher/netlink v1.2.2-0.20210123213345-5cc92139ae3e/go.mod h1:bacnNlfhqHqqLo4WsYeXSqfyXkInQ9JneWI68v1KwSU=
github.com/mdlayher/netlink v1.3.0/go.mod h1:xK/BssKuwcRXHrtN04UBkwQ6dY9VviGGuriDdoPSWys=
github.com/mdlayher/netlink v1.4.0/go.mod h1:dRJi5IABcZpBD2A3D0Mv/AiX8I9uDEu5oGkAVrekmf8=
github.com/mdlayher/netlink v1.4.1/go.mod h1:e4/KuJ+s8UhfUpO9z00/fDZZmhSrs+oxyqAS9cNgn6Q=
github.com/mdlayher/netlink v1.4.2 h1:3sbnJWe/LETovA7yRZIX3f9McVOWV3OySH6iIBxiFfI=
github.com/mdlayher/netlink v1.4.2/go.mod h1:13VaingaArGUTUxFLf/iEovKxXji32JAtF858jZYEug=
github.com/mdlayher/socket v0.0.0-20210307095302-262dc9984e00/go.mod h1:GAFlyu4/XV68LkQKYzKhIo/WW7j3Zi0YRAz/BOoanUc=
github.com/mdlayher/socket v0.0.0-20211007213009-516dcbdf0267/go.mod h1:nFZ1EtZYK8Gi/k6QNu7z7CgO20i/4ExeQswwWuPmG/g=
github.com/mdlayher/socket v0.0.0-20211102153432-57e3fa563ecb h1:2dC7L10LmTqlyMVzFJ00qM25lqESg9Z4u3GuEXN5iHY=
github.com/mdlayher/socket v0.0.0-20211102153432-57e3fa563ecb/go.mod h1:nFZ1EtZYK8Gi/k6QNu7z7CgO20i/4ExeQswwWuPmG/g=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/varlink/go v0.4.0 h1:+/BQoUO9eJK/+MTSHwFcJch7TMsb6N6Dqp6g0qaXXRo=
github.com/varlink/go v0.4.0/go.mod h1:DKg9Y2ctoNkesREGAEak58l+jOC6JU2aqZvUYs5DynU=
github.com/vishvananda/netlink v0.0.0-20210811191823-e1a867c6b452 h1:xe1bLd/sNkKVWdZuAb2+4JeMQMYyQ7Av38iRrE1lhm8=
github.com/vishvananda/netlink v0.0.0-20210811191823-e1a867c6b452/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae h1:4hwBBUfQCFe3Cym0ZtKyq7L16eZUtYKs+BaHDN6mAns=
github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.5.1 h1:OJxoQ/rynoF0dcCdI7cLPktw/hR2cueqYfjm43oqK38=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191007182048-72f939374954/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201010224723-4f7140c49acb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201216054612-986b41b23924/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210928044308-7d9f5e0b762b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211020060615-d418f374d309/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211201190559-0a0e4e1bb54c/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211209124913-491a49abca63 h1:iocB37TsdFuN6IBRZ+ry36wrkoV51/tl5vOWqkcPGvY=
golang.org/x/net v0.0.0-20211209124913-491a49abca63/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190411185658-b44545bcd369/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201009025420-dfb3f7c4e634/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201118182958-a01c418693c7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201218084310-7d0127a74742/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210110051926-789bb1bd4061/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210123111255-9b0068b26619/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210216163648-f7da38b97c65/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210525143221-35b2ab0089ea/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211205182925-97ca703d548d h1:FjkYO/PPp4Wi0EAUOVLxePm7qVW4r4ctbWpURyuOD0E=
golang.org/x/sys v0.0.0-20211205182925-97ca703d548d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
golang.org/x/tools v0.1.8 h1:P1HhGGuLW4aAclzjtmJdf0mJOjVUZUzOTqkAkWL+l6w=
golang.org/x/tools v0.1.8/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.32.0 h1:zWTV+LMdc3kaiJMSTOFz2UgSBgx8RNQoTGiZu3fR9S0=
google.golang.org/grpc v1.32.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.2.1/go.mod h1:lPVVZ2BS5TfnjLyizF7o7hv7j9/L+8cZY2hLyjP9cGY=
honnef.co/go/tools v0.2.2 h1:MNh1AVMyVX23VUHE2O27jm6lNj3vjO5DexS4A1xvnzk=
honnef.co/go/tools v0.2.2/go.mod h1:lPVVZ2BS5TfnjLyizF7o7hv7j9/L+8cZY2hLyjP9cGY=
opensnitch-1.6.9/daemon/log/ 0000775 0000000 0000000 00000000000 15003540030 0015713 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/log/formats/ 0000775 0000000 0000000 00000000000 15003540030 0017366 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/log/formats/csv.go 0000664 0000000 0000000 00000001706 15003540030 0020514 0 ustar 00root root 0000000 0000000 package formats
import (
"fmt"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
// CSV name of the output format, used in json configs
const CSV = "csv"
// Csv object
type Csv struct {
}
// NewCSV returns a new CSV transformer object.
func NewCSV() *Csv {
return &Csv{}
}
// Transform takes input arguments and formats them to CSV.
func (c *Csv) Transform(args ...interface{}) (out string) {
p := args[0]
values := p.([]interface{})
for _, val := range values {
switch val.(type) {
case *protocol.Connection:
con := val.(*protocol.Connection)
out = fmt.Sprint(out,
con.SrcIp, ",",
con.SrcPort, ",",
con.DstIp, ",",
con.DstHost, ",",
con.DstPort, ",",
con.Protocol, ",",
con.ProcessId, ",",
con.UserId, ",",
//con.ProcessComm, ",",
con.ProcessPath, ",",
con.ProcessArgs, ",",
con.ProcessCwd, ",",
)
default:
out = fmt.Sprint(out, val, ",")
}
}
if len(out) > 0 {
out = out[:len(out)-1]
}
return
}
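The concatenate-then-trim approach used by `Csv.Transform` can be shown in a self-contained sketch (the `toCSV` helper is hypothetical, for illustration only). The length guard at the end avoids a panic when the input is empty, which the slicing alone would not prevent:

```go
package main

import (
	"fmt"
	"strings"
)

// toCSV appends every value with a trailing comma, then slices the
// last comma off, mirroring the structure of Csv.Transform.
func toCSV(values ...interface{}) string {
	var b strings.Builder
	for _, v := range values {
		fmt.Fprint(&b, v, ",")
	}
	out := b.String()
	if len(out) > 0 {
		out = out[:len(out)-1]
	}
	return out
}

func main() {
	fmt.Println(toCSV("10.0.0.2", 443, "tcp")) // 10.0.0.2,443,tcp
}
```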
opensnitch-1.6.9/daemon/log/formats/formats.go 0000664 0000000 0000000 00000000476 15003540030 0021377 0 ustar 00root root 0000000 0000000 package formats
// LoggerFormat is the common interface that every format must meet.
// Transform expects an arbitrary number of arguments and types, and
// it must transform them to a string.
// Arguments can be of type Connection, string, int, etc.
type LoggerFormat interface {
Transform(...interface{}) string
}
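A minimal implementation of the `LoggerFormat` contract can look like the following sketch. The `Plain` type is hypothetical (not part of the package); it simply joins every argument with a space, which is the smallest useful `Transform`:

```go
package main

import (
	"fmt"
	"strings"
)

// LoggerFormat mirrors the interface defined above.
type LoggerFormat interface {
	Transform(...interface{}) string
}

// Plain is a minimal LoggerFormat that joins arguments with spaces.
type Plain struct{}

// Transform satisfies LoggerFormat.
func (p *Plain) Transform(args ...interface{}) string {
	parts := make([]string, 0, len(args))
	for _, a := range args {
		parts = append(parts, fmt.Sprint(a))
	}
	return strings.Join(parts, " ")
}

func main() {
	var f LoggerFormat = &Plain{}
	fmt.Println(f.Transform("allow", 443, "tcp")) // allow 443 tcp
}
```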
opensnitch-1.6.9/daemon/log/formats/json.go 0000664 0000000 0000000 00000003013 15003540030 0020663 0 ustar 00root root 0000000 0000000 package formats
import (
"encoding/json"
"fmt"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
// JSON name of the output format, used in our json config
const JSON = "json"
// events types
const (
EvConnection = iota
EvExec
)
// JSONEventFormat object to be sent to the remote service.
// TODO: Expand as needed: ebpf events, etc.
type JSONEventFormat struct {
Event interface{} `json:"Event"`
Rule string `json:"Rule"`
Action string `json:"Action"`
Type uint8 `json:"Type"`
}
// NewJSON returns a new Json format, to send events as json.
// The json is the protobuffer in json format.
func NewJSON() *JSONEventFormat {
return &JSONEventFormat{}
}
// Transform takes input arguments and formats them to JSON format.
func (j *JSONEventFormat) Transform(args ...interface{}) (out string) {
p := args[0]
jObj := &JSONEventFormat{}
values := p.([]interface{})
for n, val := range values {
switch val.(type) {
// TODO:
// case *protocol.Rule:
// case *protocol.Process:
// case *protocol.Alerts:
case *protocol.Connection:
// XXX: All fields of the Connection object are sent, is this what we want?
// or should we send an anonymous json?
jObj.Event = val.(*protocol.Connection)
jObj.Type = EvConnection
case string:
// action
// rule name
if n == 1 {
jObj.Action = val.(string)
} else if n == 2 {
jObj.Rule = val.(string)
}
}
}
rawCfg, err := json.Marshal(&jObj)
if err != nil {
return
}
out = fmt.Sprint(string(rawCfg), "\n\n")
return
}
opensnitch-1.6.9/daemon/log/formats/rfc3164.go 0000664 0000000 0000000 00000003025 15003540030 0021005 0 ustar 00root root 0000000 0000000 package formats
import (
"fmt"
"log/syslog"
"os"
"time"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
// RFC3164 name of the output format, used in our json config
const RFC3164 = "rfc3164"
// Rfc3164 object
type Rfc3164 struct {
seq int
}
// NewRfc3164 returns a new Rfc3164 object, that transforms a message to
// RFC3164 format.
func NewRfc3164() *Rfc3164 {
return &Rfc3164{}
}
// Transform takes input arguments and formats them to RFC3164 format.
func (r *Rfc3164) Transform(args ...interface{}) (out string) {
hostname := ""
tag := ""
arg1 := args[0]
// TODO: passing hostname/tag as positional args is fragile; this could be done better.
if len(args) > 1 {
hostname = args[1].(string)
tag = args[2].(string)
}
values := arg1.([]interface{})
for n, val := range values {
switch val.(type) {
case *protocol.Connection:
con := val.(*protocol.Connection)
out = fmt.Sprint(out,
" SRC=\"", con.SrcIp, "\"",
" SPT=\"", con.SrcPort, "\"",
" DST=\"", con.DstIp, "\"",
" DSTHOST=\"", con.DstHost, "\"",
" DPT=\"", con.DstPort, "\"",
" PROTO=\"", con.Protocol, "\"",
" PID=\"", con.ProcessId, "\"",
" UID=\"", con.UserId, "\"",
//" COMM=", con.ProcessComm, "\"",
" PATH=\"", con.ProcessPath, "\"",
" CMDLINE=\"", con.ProcessArgs, "\"",
" CWD=\"", con.ProcessCwd, "\"",
)
default:
out = fmt.Sprint(out, " ARG", n, "=\"", val, "\"")
}
}
out = fmt.Sprintf("<%d>%s %s %s[%d]: [%s]\n",
syslog.LOG_NOTICE|syslog.LOG_DAEMON,
time.Now().Format(time.RFC3339),
hostname,
tag,
os.Getpid(),
out[1:])
return
}
opensnitch-1.6.9/daemon/log/formats/rfc5424.go 0000664 0000000 0000000 00000003044 15003540030 0021007 0 ustar 00root root 0000000 0000000 package formats
import (
"fmt"
"log/syslog"
"os"
"time"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
// RFC5424 name of the output format, used in our json config
const RFC5424 = "rfc5424"
// Rfc5424 object
type Rfc5424 struct {
seq int
}
// NewRfc5424 returns a new Rfc5424 object, that transforms a message to
// RFC5424 format (sort of).
func NewRfc5424() *Rfc5424 {
return &Rfc5424{}
}
// Transform takes input arguments and formats them to RFC5424 format.
func (r *Rfc5424) Transform(args ...interface{}) (out string) {
hostname := ""
tag := ""
arg1 := args[0]
if len(args) > 1 {
arg2 := args[1]
arg3 := args[2]
hostname = arg2.(string)
tag = arg3.(string)
}
values := arg1.([]interface{})
for n, val := range values {
switch val.(type) {
case *protocol.Connection:
con := val.(*protocol.Connection)
out = fmt.Sprint(out,
" SRC=\"", con.SrcIp, "\"",
" SPT=\"", con.SrcPort, "\"",
" DST=\"", con.DstIp, "\"",
" DSTHOST=\"", con.DstHost, "\"",
" DPT=\"", con.DstPort, "\"",
" PROTO=\"", con.Protocol, "\"",
" PID=\"", con.ProcessId, "\"",
" UID=\"", con.UserId, "\"",
//" COMM=", con.ProcessComm, "\"",
" PATH=\"", con.ProcessPath, "\"",
" CMDLINE=\"", con.ProcessArgs, "\"",
" CWD=\"", con.ProcessCwd, "\"",
)
default:
out = fmt.Sprint(out, " ARG", n, "=\"", val, "\"")
}
}
out = fmt.Sprintf("<%d>1 %s %s %s %d TCPOUT - [%s]\n",
syslog.LOG_NOTICE|syslog.LOG_DAEMON,
time.Now().Format(time.RFC3339),
hostname,
tag,
os.Getpid(),
out[1:])
return
}
opensnitch-1.6.9/daemon/log/log.go 0000664 0000000 0000000 00000011406 15003540030 0017025 0 ustar 00root root 0000000 0000000 package log
import (
"fmt"
"os"
"strings"
"sync"
"time"
)
type Handler func(format string, args ...interface{})
// https://misc.flogisoft.com/bash/tip_colors_and_formatting
const (
BOLD = "\033[1m"
DIM = "\033[2m"
RED = "\033[31m"
GREEN = "\033[32m"
BLUE = "\033[34m"
YELLOW = "\033[33m"
FG_BLACK = "\033[30m"
FG_WHITE = "\033[97m"
BG_DGRAY = "\033[100m"
BG_RED = "\033[41m"
BG_GREEN = "\033[42m"
BG_YELLOW = "\033[43m"
BG_LBLUE = "\033[104m"
RESET = "\033[0m"
)
// log level constants
const (
DEBUG = iota
INFO
IMPORTANT
WARNING
ERROR
FATAL
)
//
var (
WithColors = true
Output = os.Stdout
StdoutFile = "/dev/stdout"
DateFormat = "2006-01-02 15:04:05"
MinLevel = INFO
LogUTC = true
LogMicro = false
mutex = &sync.RWMutex{}
labels = map[int]string{
DEBUG: "DBG",
INFO: "INF",
IMPORTANT: "IMP",
WARNING: "WAR",
ERROR: "ERR",
FATAL: "!!!",
}
colors = map[int]string{
DEBUG: DIM + FG_BLACK + BG_DGRAY,
INFO: FG_WHITE + BG_GREEN,
IMPORTANT: FG_WHITE + BG_LBLUE,
WARNING: FG_WHITE + BG_YELLOW,
ERROR: FG_WHITE + BG_RED,
FATAL: FG_WHITE + BG_RED + BOLD,
}
)
// Wrap wraps a text with effects
func Wrap(s, effect string) string {
if WithColors == true {
s = effect + s + RESET
}
return s
}
// Dim dims a text
func Dim(s string) string {
return Wrap(s, DIM)
}
// Bold bolds a text
func Bold(s string) string {
return Wrap(s, BOLD)
}
// Red reds the text
func Red(s string) string {
return Wrap(s, RED)
}
// Green greens the text
func Green(s string) string {
return Wrap(s, GREEN)
}
// Blue blues the text
func Blue(s string) string {
return Wrap(s, BLUE)
}
// Yellow yellows the text
func Yellow(s string) string {
return Wrap(s, YELLOW)
}
// Raw prints out a text without colors
func Raw(format string, args ...interface{}) {
mutex.RLock()
defer mutex.RUnlock()
fmt.Fprintf(Output, format, args...)
}
// SetLogLevel sets the log level
func SetLogLevel(newLevel int) {
mutex.Lock()
defer mutex.Unlock()
MinLevel = newLevel
}
// GetLogLevel returns the current log level configured.
func GetLogLevel() int {
mutex.RLock()
defer mutex.RUnlock()
return MinLevel
}
// SetLogUTC configures UTC timestamps
func SetLogUTC(newLogUTC bool) {
mutex.Lock()
defer mutex.Unlock()
LogUTC = newLogUTC
}
// GetLogUTC returns the current config.
func GetLogUTC() bool {
mutex.RLock()
defer mutex.RUnlock()
return LogUTC
}
// SetLogMicro configures microsecond timestamps
func SetLogMicro(newLogMicro bool) {
mutex.Lock()
defer mutex.Unlock()
LogMicro = newLogMicro
}
// GetLogMicro returns the current config.
func GetLogMicro() bool {
mutex.RLock()
defer mutex.RUnlock()
return LogMicro
}
// Log prints out a text with the given color and format
func Log(level int, format string, args ...interface{}) {
mutex.Lock()
defer mutex.Unlock()
if level >= MinLevel {
label := labels[level]
color := colors[level]
datefmt := DateFormat
if LogMicro == true {
datefmt = DateFormat + ".000000"
}
when := time.Now().UTC().Format(datefmt)
if LogUTC == false {
when = time.Now().Local().Format(datefmt)
}
what := fmt.Sprintf(format, args...)
if strings.HasSuffix(what, "\n") == false {
what += "\n"
}
l := Dim("[%s]")
r := Wrap(" %s ", color) + " %s"
fmt.Fprintf(Output, l+" "+r, when, label, what)
}
}
func setDefaultLogOutput() {
mutex.Lock()
Output = os.Stdout
mutex.Unlock()
}
// OpenFile opens a file to print out the logs
func OpenFile(logFile string) (err error) {
if logFile == StdoutFile {
setDefaultLogOutput()
return
}
if Output, err = os.OpenFile(logFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644); err != nil {
Error("Error opening log: %s %s", logFile, err)
// fall back to stdout
setDefaultLogOutput()
return err
}
Important("Start writing logs to %s", logFile)
return err
}
// Close closes the current output file descriptor
func Close() {
if Output != os.Stdout {
Output.Close()
}
}
// Debug is the log level for debugging purposes
func Debug(format string, args ...interface{}) {
Log(DEBUG, format, args...)
}
// Info is the log level for informative messages
func Info(format string, args ...interface{}) {
Log(INFO, format, args...)
}
// Important is the log level for things that require attention
func Important(format string, args ...interface{}) {
Log(IMPORTANT, format, args...)
}
// Warning is the log level for non-critical errors
func Warning(format string, args ...interface{}) {
Log(WARNING, format, args...)
}
// Error is the log level for errors that should be corrected
func Error(format string, args ...interface{}) {
Log(ERROR, format, args...)
}
// Fatal is the log level for errors that must be corrected before continuing
func Fatal(format string, args ...interface{}) {
Log(FATAL, format, args...)
os.Exit(1)
}
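All the color helpers above funnel through Wrap; this is a minimal stand-alone copy showing both code paths (colors on and off), with only the two constants it needs:

```go
package main

import "fmt"

// ANSI effect and reset sequences, as in log.go.
const (
	DIM   = "\033[2m"
	RESET = "\033[0m"
)

var WithColors = true

// Wrap surrounds a string with an ANSI effect and a reset, or passes it
// through untouched when colors are disabled.
func Wrap(s, effect string) string {
	if WithColors {
		return effect + s + RESET
	}
	return s
}

func main() {
	fmt.Println(Wrap("dimmed", DIM))
	WithColors = false
	fmt.Println(Wrap("plain", DIM))
}
```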
opensnitch-1.6.9/daemon/log/loggers/ 0000775 0000000 0000000 00000000000 15003540030 0017355 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/log/loggers/logger.go 0000664 0000000 0000000 00000004343 15003540030 0021167 0 ustar 00root root 0000000 0000000 package loggers
import "fmt"
const logTag = "opensnitch"
// Logger is the common interface that every logger must meet.
// Serves as a generic holder of different types of loggers.
type Logger interface {
Transform(...interface{}) string
Write(string)
}
// LoggerConfig holds the configuration of a logger
type LoggerConfig struct {
// Name of the logger: syslog, elastic, ...
Name string
// Format: rfc5424, csv, json, ...
Format string
// Protocol: udp, tcp
Protocol string
// Server: 127.0.0.1:514
Server string
// WriteTimeout:
WriteTimeout string
// Tag: opensnitchd, mytag, ...
Tag string
// Workers: number of workers
Workers int
}
// LoggerManager represents the LoggerManager.
type LoggerManager struct {
loggers map[string]Logger
msgs chan []interface{}
count int
}
// NewLoggerManager instantiates all the configured loggers.
func NewLoggerManager() *LoggerManager {
lm := &LoggerManager{
loggers: make(map[string]Logger),
}
return lm
}
// Load loggers configuration and initialize them.
func (l *LoggerManager) Load(configs []LoggerConfig, workers int) {
for _, cfg := range configs {
cfg := cfg // shadow the loop variable: each logger keeps a pointer to its own copy
switch cfg.Name {
case LOGGER_REMOTE:
if lgr, err := NewRemote(&cfg); err == nil {
l.count++
l.loggers[fmt.Sprint(lgr.Name, lgr.cfg.Server, lgr.cfg.Protocol)] = lgr
workers += cfg.Workers
}
case LOGGER_REMOTE_SYSLOG:
if lgr, err := NewRemoteSyslog(&cfg); err == nil {
l.count++
l.loggers[fmt.Sprint(lgr.Name, lgr.cfg.Server, lgr.cfg.Protocol)] = lgr
workers += cfg.Workers
}
case LOGGER_SYSLOG:
if lgr, err := NewSyslog(&cfg); err == nil {
l.count++
l.loggers[lgr.Name] = lgr
workers += cfg.Workers
}
}
}
if workers == 0 {
workers = 4
}
l.msgs = make(chan []interface{}, workers)
for i := 0; i < workers; i++ {
go newWorker(i, l)
}
}
func (l *LoggerManager) write(args ...interface{}) {
for _, logger := range l.loggers {
logger.Write(logger.Transform(args...))
}
}
func newWorker(id int, l *LoggerManager) {
// range exits once the channel is closed; the extra outer loop would spin forever after that
for msg := range l.msgs {
l.write(msg)
}
}
// Log sends data to the loggers.
func (l *LoggerManager) Log(args ...interface{}) {
if l.count > 0 {
go func(args ...interface{}) {
argv := args
l.msgs <- argv
}(args...)
}
}
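Any new backend only has to satisfy the two-method Logger interface above; this sketch uses a hypothetical stdoutLogger (not part of opensnitch) to show the compile-time interface check and the Transform/Write round trip:

```go
package main

import "fmt"

// Logger is a copy of the interface from logger.go.
type Logger interface {
	Transform(...interface{}) string
	Write(string)
}

// stdoutLogger is a made-up backend used only for illustration.
type stdoutLogger struct{ prefix string }

func (s *stdoutLogger) Transform(args ...interface{}) string {
	return s.prefix + fmt.Sprint(args...)
}

func (s *stdoutLogger) Write(msg string) { fmt.Println(msg) }

var _ Logger = (*stdoutLogger)(nil) // compile-time interface check

func main() {
	var l Logger = &stdoutLogger{prefix: "[demo] "}
	l.Write(l.Transform("event ", 42))
}
```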
opensnitch-1.6.9/daemon/log/loggers/remote.go 0000664 0000000 0000000 00000010454 15003540030 0021203 0 ustar 00root root 0000000 0000000 package loggers
import (
"fmt"
"log/syslog"
"net"
"os"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/log/formats"
)
const (
LOGGER_REMOTE = "remote"
)
// Remote defines the logger that writes events to a generic remote server.
// It can write to the local or a remote daemon, UDP or TCP.
// It supports writing events in RFC5424, RFC3164, CSV and JSON formats.
type Remote struct {
Name string
Tag string
Hostname string
Writer *syslog.Writer
logFormat formats.LoggerFormat
cfg *LoggerConfig
netConn net.Conn
Timeout time.Duration
errors uint32
maxErrors uint32
status uint32
mu *sync.RWMutex
}
// NewRemote returns a new object that manipulates and prints outbound connections
// to a remote syslog server, with the given format (RFC5424 by default)
func NewRemote(cfg *LoggerConfig) (*Remote, error) {
var err error
log.Info("NewRemote logger: %v", cfg)
sys := &Remote{
mu: &sync.RWMutex{},
}
sys.Name = LOGGER_REMOTE
sys.cfg = cfg
// list of allowed formats for this logger
sys.logFormat = formats.NewRfc5424()
if cfg.Format == formats.RFC3164 {
sys.logFormat = formats.NewRfc3164()
} else if cfg.Format == formats.JSON {
sys.logFormat = formats.NewJSON()
} else if cfg.Format == formats.CSV {
sys.logFormat = formats.NewCSV()
}
sys.Tag = logTag
if cfg.Tag != "" {
sys.Tag = cfg.Tag
}
sys.Hostname, err = os.Hostname()
if err != nil {
sys.Hostname = "localhost"
}
if cfg.WriteTimeout == "" {
cfg.WriteTimeout = writeTimeout
}
sys.Timeout = (time.Second * 15)
if err = sys.Open(); err != nil {
log.Error("Error loading logger: %s", err)
return nil, err
}
log.Info("[%s] initialized: %v", sys.Name, cfg)
return sys, err
}
// Open opens a new connection with a server or with the daemon.
func (s *Remote) Open() (err error) {
atomic.StoreUint32(&s.errors, 0)
if s.cfg.Server == "" {
return fmt.Errorf("[%s] Server address must not be empty", s.Name)
}
s.mu.Lock()
s.netConn, err = s.Dial(s.cfg.Protocol, s.cfg.Server, s.Timeout*5)
s.mu.Unlock()
if err == nil {
atomic.StoreUint32(&s.status, CONNECTED)
}
return err
}
// Dial opens a new connection with a remote server.
func (s *Remote) Dial(proto, addr string, connTimeout time.Duration) (netConn net.Conn, err error) {
switch proto {
case "udp", "tcp":
netConn, err = net.DialTimeout(proto, addr, connTimeout)
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("[%s] Network protocol %s not supported", s.Name, proto)
}
return netConn, nil
}
// Close closes the writer object
func (s *Remote) Close() (err error) {
s.mu.RLock()
if s.netConn != nil {
err = s.netConn.Close()
//s.netConn.conn = nil
}
s.mu.RUnlock()
atomic.StoreUint32(&s.status, DISCONNECTED)
return
}
// ReOpen tries to reestablish the connection with the writer
func (s *Remote) ReOpen() {
if atomic.LoadUint32(&s.status) == CONNECTING {
return
}
atomic.StoreUint32(&s.status, CONNECTING)
if err := s.Close(); err != nil {
log.Debug("[%s] Close() error: %s", s.Name, err)
}
if err := s.Open(); err != nil {
log.Debug("[%s] ReOpen() error: %s", s.Name, err)
} else {
log.Debug("[%s] ReOpen() ok", s.Name)
}
}
// Transform transforms data for proper ingestion.
func (s *Remote) Transform(args ...interface{}) (out string) {
if s.logFormat != nil {
args = append(args, s.Hostname)
args = append(args, s.Tag)
out = s.logFormat.Transform(args...)
}
return
}
func (s *Remote) Write(msg string) {
deadline := time.Now().Add(s.Timeout)
// BUG: it's fairly common to have write timeouts via udp/tcp.
// Reopening the connection with the server helps to resume sending events to the server,
// and have a continuous stream of events. Otherwise it'd stop working.
// I haven't figured out yet why these write errors occur.
s.mu.Lock()
s.netConn.SetWriteDeadline(deadline)
_, err := s.netConn.Write([]byte(msg))
s.mu.Unlock()
if err == nil {
return
}
log.Debug("[%s] %s write error: %v", s.Name, s.cfg.Protocol, err.(net.Error))
atomic.AddUint32(&s.errors, 1)
if atomic.LoadUint32(&s.errors) > maxAllowedErrors {
s.ReOpen()
return
}
}
func (s *Remote) formatLine(msg string) string {
nl := ""
if !strings.HasSuffix(msg, "\n") {
nl = "\n"
}
return fmt.Sprintf("%s%s", msg, nl)
}
opensnitch-1.6.9/daemon/log/loggers/remote_syslog.go 0000664 0000000 0000000 00000010437 15003540030 0022604 0 ustar 00root root 0000000 0000000 package loggers
import (
"fmt"
"net"
"os"
"sync"
"sync/atomic"
"time"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/log/formats"
)
const (
LOGGER_REMOTE_SYSLOG = "remote_syslog"
writeTimeout = "1s"
// restart syslog connection after these amount of errors
maxAllowedErrors = 10
)
// connection status
const (
DISCONNECTED = iota
CONNECTED
CONNECTING
)
// RemoteSyslog defines the logger that writes traces to the syslog.
// It can write to the local or a remote daemon.
type RemoteSyslog struct {
Syslog
Hostname string
netConn net.Conn
Timeout time.Duration
errors uint32
status uint32
mu *sync.RWMutex
}
// NewRemoteSyslog returns a new object that manipulates and prints outbound connections
// to a remote syslog server, with the given format (RFC5424 by default)
func NewRemoteSyslog(cfg *LoggerConfig) (*RemoteSyslog, error) {
var err error
log.Info("NewRemoteSyslog logger: %v", cfg)
sys := &RemoteSyslog{
mu: &sync.RWMutex{},
}
sys.Name = LOGGER_REMOTE_SYSLOG
sys.cfg = cfg
// list of allowed formats for this logger
sys.logFormat = formats.NewRfc5424()
if cfg.Format == formats.RFC3164 {
sys.logFormat = formats.NewRfc3164()
} else if cfg.Format == formats.CSV {
sys.logFormat = formats.NewCSV()
}
sys.Tag = logTag
if cfg.Tag != "" {
sys.Tag = cfg.Tag
}
sys.Hostname, err = os.Hostname()
if err != nil {
sys.Hostname = "localhost"
}
if cfg.WriteTimeout == "" {
cfg.WriteTimeout = writeTimeout
}
sys.Timeout, _ = time.ParseDuration(cfg.WriteTimeout)
if err = sys.Open(); err != nil {
log.Error("Error loading logger: %s", err)
return nil, err
}
log.Info("[%s] initialized: %v", sys.Name, cfg)
return sys, err
}
// Open opens a new connection with a server or with the daemon.
func (s *RemoteSyslog) Open() (err error) {
atomic.StoreUint32(&s.errors, 0)
if s.cfg.Server == "" {
return fmt.Errorf("[%s] Server address must not be empty", s.Name)
}
s.mu.Lock()
s.netConn, err = s.Dial(s.cfg.Protocol, s.cfg.Server, s.Timeout*5)
s.mu.Unlock()
if err == nil {
atomic.StoreUint32(&s.status, CONNECTED)
}
return err
}
// Dial opens a new connection with a syslog server.
func (s *RemoteSyslog) Dial(proto, addr string, connTimeout time.Duration) (netConn net.Conn, err error) {
switch proto {
case "udp", "tcp":
netConn, err = net.DialTimeout(proto, addr, connTimeout)
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("[%s] Network protocol %s not supported", s.Name, proto)
}
return netConn, nil
}
// Close closes the writer object
func (s *RemoteSyslog) Close() (err error) {
s.mu.RLock()
defer s.mu.RUnlock()
if s.netConn != nil {
err = s.netConn.Close()
//s.netConn.conn = nil
}
atomic.StoreUint32(&s.status, DISCONNECTED)
return
}
// ReOpen tries to reestablish the connection with the writer
func (s *RemoteSyslog) ReOpen() {
if atomic.LoadUint32(&s.status) == CONNECTING {
return
}
atomic.StoreUint32(&s.status, CONNECTING)
if err := s.Close(); err != nil {
log.Debug("[%s] Close() error: %s", s.Name, err)
}
if err := s.Open(); err != nil {
log.Debug("[%s] ReOpen() error: %s", s.Name, err)
return
}
}
// Transform transforms data for proper ingestion.
func (s *RemoteSyslog) Transform(args ...interface{}) (out string) {
if s.logFormat != nil {
args = append(args, s.Hostname)
args = append(args, s.Tag)
out = s.logFormat.Transform(args...)
}
return
}
func (s *RemoteSyslog) Write(msg string) {
deadline := time.Now().Add(s.Timeout)
// BUG: it's fairly common to have write timeouts via udp/tcp.
// Reopening the connection with the server helps to resume sending events to syslog,
// and have a continuous stream of events. Otherwise it'd stop working.
// I haven't figured out yet why these write errors occur.
s.mu.RLock()
s.netConn.SetWriteDeadline(deadline)
_, err := s.netConn.Write([]byte(msg))
s.mu.RUnlock()
if err != nil {
log.Debug("[%s] %s write error: %v", s.Name, s.cfg.Protocol, err.(net.Error))
atomic.AddUint32(&s.errors, 1)
if atomic.LoadUint32(&s.errors) > maxAllowedErrors {
s.ReOpen()
return
}
}
}
// https://cs.opensource.google/go/go/+/refs/tags/go1.18.2:src/log/syslog/syslog.go;l=286;drc=0a1a092c4b56a1d4033372fbd07924dad8cbb50b
func (s *RemoteSyslog) formatLine(msg string) string {
return msg
}
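RemoteSyslog.Write counts consecutive write errors atomically and asks for a reconnect once maxAllowedErrors is crossed (Open() resets the counter). Here is that bookkeeping in isolation; errGuard is a made-up name for the sketch:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// restart the connection after this many consecutive errors, as above
const maxAllowedErrors = 10

type errGuard struct{ errors uint32 }

// onWriteError increments the atomic counter and reports whether the
// caller should reopen the connection; the reset mimics Open().
func (g *errGuard) onWriteError() bool {
	atomic.AddUint32(&g.errors, 1)
	if atomic.LoadUint32(&g.errors) > maxAllowedErrors {
		atomic.StoreUint32(&g.errors, 0)
		return true // caller should ReOpen()
	}
	return false
}

func main() {
	g := &errGuard{}
	for i := 1; i <= 11; i++ {
		if g.onWriteError() {
			fmt.Println("reopen after", i, "errors")
		}
	}
}
```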
opensnitch-1.6.9/daemon/log/loggers/syslog.go 0000664 0000000 0000000 00000003307 15003540030 0021227 0 ustar 00root root 0000000 0000000 package loggers
import (
"log/syslog"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/log/formats"
)
const (
LOGGER_SYSLOG = "syslog"
)
// Syslog defines the logger that writes traces to the syslog.
// It can write to the local or a remote daemon.
type Syslog struct {
Name string
Writer *syslog.Writer
Tag string
logFormat formats.LoggerFormat
cfg *LoggerConfig
}
// NewSyslog returns a new object that manipulates and prints outbound connections
// to syslog (local or remote), with the given format (RFC5424 by default)
func NewSyslog(cfg *LoggerConfig) (*Syslog, error) {
var err error
log.Info("NewSyslog logger: %v", cfg)
sys := &Syslog{
Name: LOGGER_SYSLOG,
cfg: cfg,
}
sys.logFormat = formats.NewRfc5424()
if cfg.Format == formats.CSV {
sys.logFormat = formats.NewCSV()
}
sys.Tag = logTag
if cfg.Tag != "" {
sys.Tag = cfg.Tag
}
if err = sys.Open(); err != nil {
log.Error("Error loading logger: %s", err)
return nil, err
}
log.Info("[%s logger] initialized: %v", sys.Name, cfg)
return sys, err
}
// Open opens a new connection with a server or with the daemon.
func (s *Syslog) Open() error {
var err error
s.Writer, err = syslog.New(syslog.LOG_NOTICE|syslog.LOG_DAEMON, logTag)
return err
}
// Close closes the writer object
func (s *Syslog) Close() error {
return s.Writer.Close()
}
// Transform transforms data for proper ingestion.
func (s *Syslog) Transform(args ...interface{}) (out string) {
if s.logFormat != nil {
out = s.logFormat.Transform(args...)
}
return
}
func (s *Syslog) Write(msg string) {
if err := s.Writer.Notice(msg); err != nil {
log.Error("[%s] write error: %s", s.Name, err)
}
}
opensnitch-1.6.9/daemon/main.go 0000664 0000000 0000000 00000043742 15003540030 0016417 0 ustar 00root root 0000000 0000000 /* Copyright (C) 2018 Simone Margaritelli
// 2021 themighty1
// 2022 calesanz
// 2019-2022 Gustavo Iñiguez Goia
//
// This file is part of OpenSnitch.
//
// OpenSnitch is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// OpenSnitch is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with OpenSnitch. If not, see <https://www.gnu.org/licenses/>.
*/
package main
import (
"bytes"
"context"
"flag"
"fmt"
"io/ioutil"
golog "log"
"net"
"os"
"os/signal"
"runtime"
"runtime/pprof"
"syscall"
"time"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/dns"
"github.com/evilsocket/opensnitch/daemon/dns/systemd"
"github.com/evilsocket/opensnitch/daemon/firewall"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/log/loggers"
"github.com/evilsocket/opensnitch/daemon/netfilter"
"github.com/evilsocket/opensnitch/daemon/netlink"
"github.com/evilsocket/opensnitch/daemon/procmon/ebpf"
"github.com/evilsocket/opensnitch/daemon/procmon/monitor"
"github.com/evilsocket/opensnitch/daemon/rule"
"github.com/evilsocket/opensnitch/daemon/statistics"
"github.com/evilsocket/opensnitch/daemon/ui"
"github.com/evilsocket/opensnitch/daemon/ui/config"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
var (
showVersion = false
checkRequirements = false
procmonMethod = ""
logFile = ""
logUTC = true
logMicro = false
rulesPath = "/etc/opensnitchd/rules/"
configFile = "/etc/opensnitchd/default-config.json"
ebpfModPath = "" // /usr/lib/opensnitchd/ebpf
noLiveReload = false
queueNum = 0
repeatQueueNum int //will be set later to queueNum + 1
workers = 16
debug = false
warning = false
important = false
errorlog = false
uiSocket = ""
uiClient = (*ui.Client)(nil)
cpuProfile = ""
memProfile = ""
ctx = (context.Context)(nil)
cancel = (context.CancelFunc)(nil)
err = (error)(nil)
rules = (*rule.Loader)(nil)
stats = (*statistics.Statistics)(nil)
queue = (*netfilter.Queue)(nil)
repeatPktChan = (<-chan netfilter.Packet)(nil)
pktChan = (<-chan netfilter.Packet)(nil)
wrkChan = (chan netfilter.Packet)(nil)
sigChan = (chan os.Signal)(nil)
exitChan = (chan bool)(nil)
loggerMgr *loggers.LoggerManager
resolvMonitor *systemd.ResolvedMonitor
)
func init() {
flag.BoolVar(&showVersion, "version", debug, "Show daemon version of this executable and exit.")
flag.BoolVar(&checkRequirements, "check-requirements", debug, "Check system requirements for incompatibilities.")
flag.StringVar(&procmonMethod, "process-monitor-method", procmonMethod, "How to search for processes path. Options: ftrace, audit (experimental), ebpf (experimental), proc (default)")
flag.StringVar(&uiSocket, "ui-socket", uiSocket, "Path the UI gRPC service listener (https://github.com/grpc/grpc/blob/master/doc/naming.md).")
flag.IntVar(&queueNum, "queue-num", queueNum, "Netfilter queue number.")
flag.IntVar(&workers, "workers", workers, "Number of concurrent workers.")
flag.BoolVar(&noLiveReload, "no-live-reload", debug, "Disable rules live reloading.")
flag.StringVar(&rulesPath, "rules-path", rulesPath, "Path to load JSON rules from.")
flag.StringVar(&configFile, "config-file", configFile, "Path to the daemon configuration file.")
//flag.StringVar(&ebpfModPath, "ebpf-modules-path", ebpfModPath, "Path to the directory with the eBPF modules.")
flag.StringVar(&logFile, "log-file", logFile, "Write logs to this file instead of the standard output.")
flag.BoolVar(&logUTC, "log-utc", logUTC, "Write logs output with UTC timezone (enabled by default).")
flag.BoolVar(&logMicro, "log-micro", logMicro, "Write logs output with microsecond timestamp (disabled by default).")
flag.BoolVar(&debug, "debug", debug, "Enable debug level logs.")
flag.BoolVar(&warning, "warning", warning, "Enable warning level logs.")
flag.BoolVar(&important, "important", important, "Enable important level logs.")
flag.BoolVar(&errorlog, "error", errorlog, "Enable error level logs.")
flag.StringVar(&cpuProfile, "cpu-profile", cpuProfile, "Write CPU profile to this file.")
flag.StringVar(&memProfile, "mem-profile", memProfile, "Write memory profile to this file.")
}
// Load configuration file from disk, by default from /etc/opensnitchd/default-config.json,
// or from the path specified by configFile.
// This configuration will be loaded again by uiClient(), in order to monitor it for changes.
func loadDiskConfiguration() (*config.Config, error) {
if configFile == "" {
return nil, fmt.Errorf("Configuration file cannot be empty")
}
raw, err := config.Load(configFile)
if err != nil || len(raw) == 0 {
return nil, fmt.Errorf("Error loading configuration %s: %s", configFile, err)
}
clientConfig, err := config.Parse(raw)
if err != nil {
return nil, fmt.Errorf("Error parsing configuration %s: %s", configFile, err)
}
log.Info("Loading configuration file %s ...", configFile)
return &clientConfig, nil
}
func overwriteLogging() bool {
return debug || warning || important || errorlog || logFile != "" || logMicro
}
func setupLogging() {
golog.SetOutput(ioutil.Discard)
if debug {
log.SetLogLevel(log.DEBUG)
} else if warning {
log.SetLogLevel(log.WARNING)
} else if important {
log.SetLogLevel(log.IMPORTANT)
} else if errorlog {
log.SetLogLevel(log.ERROR)
} else {
log.SetLogLevel(log.INFO)
}
log.SetLogUTC(logUTC)
log.SetLogMicro(logMicro)
var logFileToUse string
if logFile == "" {
logFileToUse = log.StdoutFile
} else {
logFileToUse = logFile
}
log.Close()
if err := log.OpenFile(logFileToUse); err != nil {
log.Error("Error opening user defined log: %s %s", logFileToUse, err)
}
}
func setupProfiling() {
if cpuProfile != "" {
if f, err := os.Create(cpuProfile); err != nil {
log.Fatal("%s", err)
} else if err := pprof.StartCPUProfile(f); err != nil {
log.Fatal("%s", err)
}
}
}
func setupSignals() {
sigChan = make(chan os.Signal, 1)
exitChan = make(chan bool, workers+1)
signal.Notify(sigChan,
syscall.SIGHUP,
syscall.SIGINT,
syscall.SIGTERM,
syscall.SIGQUIT)
go func() {
sig := <-sigChan
log.Raw("\n")
log.Important("Got signal: %v", sig)
cancel()
time.AfterFunc(10*time.Second, func() {
log.Error("[REVIEW] closing due to timeout")
os.Exit(0)
})
}()
}
func worker(id int) {
log.Debug("Worker #%d started.", id)
for {
select {
case <-ctx.Done():
goto Exit
default:
pkt, ok := <-wrkChan
if !ok {
log.Debug("worker channel closed %d", id)
goto Exit
}
onPacket(pkt)
}
}
Exit:
log.Debug("worker #%d exit", id)
}
func setupWorkers() {
log.Debug("Starting %d workers ...", workers)
// setup the workers
wrkChan = make(chan netfilter.Packet)
for i := 0; i < workers; i++ {
go worker(i)
}
}
// Listen to events sent from other modules
func listenToEvents() {
for i := 0; i < 5; i++ {
go func(uiClient *ui.Client) {
for evt := range ebpf.Events() {
// for loop vars are per-loop, not per-item
evt := evt
uiClient.PostAlert(
protocol.Alert_WARNING,
protocol.Alert_KERNEL_EVENT,
protocol.Alert_SHOW_ALERT,
protocol.Alert_MEDIUM,
evt)
}
}(uiClient)
}
}
func initSystemdResolvedMonitor() {
resolvMonitor, err = systemd.NewResolvedMonitor() // plain assignment: := would shadow the package-level resolvMonitor used by doCleanup()
if err != nil {
log.Debug("[DNS] Unable to use systemd-resolved monitor: %s", err)
return
}
_, err = resolvMonitor.Connect()
if err != nil {
log.Debug("[DNS] Connecting to systemd-resolved: %s", err)
return
}
err = resolvMonitor.Subscribe()
if err != nil {
log.Debug("[DNS] Subscribing to systemd-resolved DNS events: %s", err)
return
}
go func() {
for {
select {
case exit := <-resolvMonitor.Exit():
if exit == nil {
log.Info("[DNS] systemd-resolved monitor stopped")
return
}
log.Debug("[DNS] systemd-resolved monitor disconnected. Reconnecting...")
case response := <-resolvMonitor.GetDNSResponses():
if response.State != systemd.SuccessState {
log.Debug("[DNS] systemd-resolved monitor response error: %v", response)
continue
}
/*for i, q := range response.Question {
log.Debug("%d SYSTEMD RESPONSE Q: %s", i, q.Name)
}*/
for i, a := range response.Answer {
if a.RR.Key.Type != systemd.DNSTypeA &&
a.RR.Key.Type != systemd.DNSTypeAAAA &&
a.RR.Key.Type != systemd.DNSTypeCNAME {
log.Debug("systemd-resolved, excluding answer: %#v", a)
continue
}
domain := a.RR.Key.Name
ip := net.IP(a.RR.Address)
log.Debug("%d systemd-resolved monitor response: %s -> %s", i, domain, ip)
if a.RR.Key.Type == systemd.DNSTypeCNAME {
log.Debug("systemd-resolved CNAME >> %s -> %s", a.RR.Name, domain)
dns.Track(a.RR.Name, domain)
} else {
dns.Track(ip.String(), domain)
}
}
}
}
}()
}
func doCleanup(queue, repeatQueue *netfilter.Queue) {
log.Info("Cleaning up ...")
firewall.Stop()
monitor.End()
uiClient.Close()
queue.Close()
repeatQueue.Close()
if resolvMonitor != nil {
resolvMonitor.Close()
}
if cpuProfile != "" {
pprof.StopCPUProfile()
}
if memProfile != "" {
f, err := os.Create(memProfile)
if err != nil {
fmt.Printf("Could not create memory profile: %s\n", err)
return
}
defer f.Close()
runtime.GC() // get up-to-date statistics
if err := pprof.WriteHeapProfile(f); err != nil {
fmt.Printf("Could not write memory profile: %s\n", err)
}
}
}
func onPacket(packet netfilter.Packet) {
// DNS response, just parse, track and accept.
if dns.TrackAnswers(packet.Packet) == true {
packet.SetVerdictAndMark(netfilter.NF_ACCEPT, packet.Mark)
stats.OnDNSResponse()
return
}
// Parse the connection state
con := conman.Parse(packet, uiClient.InterceptUnknown())
if con == nil {
applyDefaultAction(&packet, nil)
return
}
// accept our own connections
if con.Process.ID == os.Getpid() {
packet.SetVerdict(netfilter.NF_ACCEPT)
return
}
// search a match in preloaded rules
r := acceptOrDeny(&packet, con)
if r != nil && r.Nolog {
return
}
// XXX: if a connection is not intercepted due to InterceptUnknown == false,
// it's not sent to the server, which leads to missing information.
stats.OnConnectionEvent(con, r, r == nil)
}
func applyDefaultAction(packet *netfilter.Packet, con *conman.Connection) {
if uiClient.DefaultAction() == rule.Allow {
packet.SetVerdictAndMark(netfilter.NF_ACCEPT, packet.Mark)
return
}
if uiClient.DefaultAction() == rule.Reject && con != nil {
netlink.KillSocket(con.Protocol, con.SrcIP, con.SrcPort, con.DstIP, con.DstPort)
}
packet.SetVerdict(netfilter.NF_DROP)
}
func acceptOrDeny(packet *netfilter.Packet, con *conman.Connection) *rule.Rule {
r := rules.FindFirstMatch(con)
if r == nil {
// no rule matched
// Note that as soon as we set a verdict on a packet, the next packet in the netfilter queue
// will begin to be processed even if this function hasn't yet returned
// send a request to the UI client if
// 1) connected and running and 2) we are not already asking
if uiClient.Connected() == false || uiClient.GetIsAsking() == true {
applyDefaultAction(packet, con)
log.Debug("UI is not running or busy, connected: %v, running: %v", uiClient.Connected(), uiClient.GetIsAsking())
return nil
}
uiClient.SetIsAsking(true)
defer uiClient.SetIsAsking(false)
// In order not to block packet processing, we send our packet to a different netfilter queue
// and then immediately pull it back out of that queue
packet.SetRequeueVerdict(uint16(repeatQueueNum))
var o bool
var pkt netfilter.Packet
// don't wait for the packet longer than 1 sec
select {
case pkt, o = <-repeatPktChan:
if !o {
log.Debug("error while receiving packet from repeatPktChan")
return nil
}
case <-time.After(1 * time.Second):
log.Debug("timed out while receiving packet from repeatPktChan")
return nil
}
//check if the pulled out packet is the same we put in
if res := bytes.Compare(packet.Packet.Data(), pkt.Packet.Data()); res != 0 {
log.Error("The packet which was requeued has changed abruptly. This should never happen. Please report this incident to the Opensnitch developers. %v %v ", packet, pkt)
return nil
}
packet = &pkt
// Update the hostname again.
// This is required due to a race between the ebpf dns hook and the actual first packet being sent
if con.DstHost == "" {
con.DstHost = dns.HostOr(con.DstIP, con.DstHost)
}
r = uiClient.Ask(con)
if r == nil {
log.Error("Invalid rule received, applying default action")
applyDefaultAction(packet, con)
return nil
}
ok := false
pers := ""
action := string(r.Action)
if r.Action == rule.Allow {
action = log.Green(action)
} else {
action = log.Red(action)
}
// check if and how the rule needs to be saved
if r.Duration == rule.Always {
pers = "Saved"
// add to the loaded rules and persist on disk
if err := rules.Add(r, true); err != nil {
log.Error("Error while saving rule: %s", err)
} else {
ok = true
}
} else {
pers = "Added"
// add to the rules but do not save to disk
if err := rules.Add(r, false); err != nil {
log.Error("Error while adding rule: %s", err)
} else {
ok = true
}
}
if ok {
log.Important("%s new rule: %s if %s", pers, action, r.Operator.String())
}
}
if packet == nil {
log.Debug("Packet nil after processing rules")
return r
}
if r.Enabled == false {
applyDefaultAction(packet, con)
ruleName := log.Green(r.Name)
log.Info("DISABLED (%s) %s %s -> %s:%d (%s)", uiClient.DefaultAction(), log.Bold(log.Green("✔")), log.Bold(con.Process.Path), log.Bold(con.To()), con.DstPort, ruleName)
} else if r.Action == rule.Allow {
packet.SetVerdictAndMark(netfilter.NF_ACCEPT, packet.Mark)
ruleName := log.Green(r.Name)
if r.Operator.Operand == rule.OpTrue {
ruleName = log.Dim(r.Name)
}
log.Debug("%s %s -> %d:%s => %s:%d, mark: %x (%s)", log.Bold(log.Green("✔")), log.Bold(con.Process.Path), con.SrcPort, log.Bold(con.SrcIP.String()), log.Bold(con.To()), con.DstPort, packet.Mark, ruleName)
} else {
if r.Action == rule.Reject {
netlink.KillSocket(con.Protocol, con.SrcIP, con.SrcPort, con.DstIP, con.DstPort)
}
packet.SetVerdict(netfilter.NF_DROP)
log.Debug("%s %s -> %d:%s => %s:%d, mark: %x (%s)", log.Bold(log.Red("✘")), log.Bold(con.Process.Path), con.SrcPort, log.Bold(con.SrcIP.String()), log.Bold(con.To()), con.DstPort, packet.Mark, log.Red(r.Name))
}
return r
}
func main() {
ctx, cancel = context.WithCancel(context.Background())
defer cancel()
flag.Parse()
if showVersion {
fmt.Println(core.Version)
os.Exit(0)
}
if checkRequirements {
core.CheckSysRequirements()
os.Exit(0)
}
setupLogging()
setupProfiling()
log.Important("Starting %s v%s", core.Name, core.Version)
cfg, err := loadDiskConfiguration()
if err != nil {
log.Fatal("%s", err)
}
if cfg.Rules.Path != "" {
rulesPath = cfg.Rules.Path
}
if rulesPath == "" {
log.Fatal("rules path cannot be empty")
}
rulesPath, err := core.ExpandPath(rulesPath)
if err != nil {
log.Fatal("Error accessing rules path (does it exist?): %s", err)
}
setupSignals()
log.Info("Loading rules from %s ...", rulesPath)
rules, err = rule.NewLoader(!noLiveReload)
if err != nil {
log.Fatal("%s", err)
} else if err = rules.Load(rulesPath); err != nil {
log.Fatal("%s", err)
}
stats = statistics.New(rules)
loggerMgr = loggers.NewLoggerManager()
uiClient = ui.NewClient(uiSocket, configFile, stats, rules, loggerMgr)
// prepare the queue
setupWorkers()
queue, err := netfilter.NewQueue(uint16(queueNum))
if err != nil {
msg := fmt.Sprintf("Error creating queue #%d: %s", queueNum, err)
uiClient.SendWarningAlert(msg)
log.Warning("Is opensnitchd already running?")
log.Fatal(msg)
}
pktChan = queue.Packets()
repeatQueueNum = queueNum + 1
repeatQueue, rqerr := netfilter.NewQueue(uint16(repeatQueueNum))
if rqerr != nil {
msg := fmt.Sprintf("Error creating repeat queue #%d: %s", repeatQueueNum, rqerr)
uiClient.SendErrorAlert(msg)
log.Warning("Is opensnitchd already running?")
log.Warning(msg)
}
repeatPktChan = repeatQueue.Packets()
// queue is ready, run firewall rules and start intercepting connections
if err = firewall.Init(uiClient.GetFirewallType(), &queueNum); err != nil {
log.Warning("%s", err)
uiClient.SendWarningAlert(err)
}
uiClient.Connect()
listenToEvents()
if overwriteLogging() {
setupLogging()
}
// Override the monitor method from the configuration if the user has
// passed the option via the command line.
if procmonMethod != "" {
if err := monitor.ReconfigureMonitorMethod(procmonMethod, cfg.Ebpf.ModulesPath); err != nil {
msg := fmt.Sprintf("Unable to set process monitor method via parameter: %v", err)
uiClient.SendWarningAlert(msg)
log.Warning(msg)
}
}
go func(uiClient *ui.Client, ebpfPath string) {
if err := dns.ListenerEbpf(ebpfPath); err != nil {
msg := fmt.Sprintf("EBPF-DNS: Unable to attach ebpf listener: %s", err)
log.Warning(msg)
// don't display an alert, since this module is not critical
uiClient.PostAlert(
protocol.Alert_ERROR,
protocol.Alert_GENERIC,
protocol.Alert_SAVE_TO_DB,
protocol.Alert_MEDIUM,
msg)
}
}(uiClient, cfg.Ebpf.ModulesPath)
initSystemdResolvedMonitor()
log.Info("Running on netfilter queue #%d ...", queueNum)
for {
select {
case <-ctx.Done():
goto Exit
case pkt, ok := <-pktChan:
if !ok {
goto Exit
}
wrkChan <- pkt
}
}
Exit:
close(wrkChan)
doCleanup(queue, repeatQueue)
os.Exit(0)
}
opensnitch-1.6.9/daemon/netfilter/ 0000775 0000000 0000000 00000000000 15003540030 0017126 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/netfilter/packet.go 0000664 0000000 0000000 00000003021 15003540030 0020720 0 ustar 00root root 0000000 0000000 package netfilter
import "C"
import (
"github.com/google/gopacket"
)
// packet consts
const (
IPv4 = 4
)
// Verdict holds the action to perform on a packet (NF_DROP, NF_ACCEPT, etc)
type Verdict C.uint
// VerdictContainer struct
type VerdictContainer struct {
Mark uint32
Verdict Verdict
Packet []byte
}
// Packet holds the data of a network packet
type Packet struct {
Packet gopacket.Packet
verdictChannel chan VerdictContainer
IfaceInIdx int
IfaceOutIdx int
Mark uint32
UID uint32
NetworkProtocol uint8
}
// SetVerdict emits a verdict on a packet
func (p *Packet) SetVerdict(v Verdict) {
p.verdictChannel <- VerdictContainer{Verdict: v, Packet: nil, Mark: 0}
}
// SetVerdictAndMark emits a verdict on a packet and marks it so that
// it is not analyzed again.
func (p *Packet) SetVerdictAndMark(v Verdict, mark uint32) {
p.verdictChannel <- VerdictContainer{Verdict: v, Packet: nil, Mark: mark}
}
// SetRequeueVerdict applies an NF_QUEUE verdict that sends the packet to another queue
func (p *Packet) SetRequeueVerdict(newQueueID uint16) {
v := uint(NF_QUEUE)
q := (uint(newQueueID) << 16)
v = v | q
p.verdictChannel <- VerdictContainer{Verdict: Verdict(v), Packet: nil, Mark: p.Mark}
}
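SetRequeueVerdict packs the target queue ID into the upper 16 bits of the verdict word, which is how libnetfilter_queue encodes the destination queue for NF_QUEUE. The bit arithmetic in isolation (a sketch with illustrative names):

```go
package main

import "fmt"

// NF_QUEUE is verdict 3; the target queue id travels in the upper 16 bits.
const NF_QUEUE = 3

// requeueVerdict builds the combined verdict exactly as SetRequeueVerdict does.
func requeueVerdict(newQueueID uint16) uint {
	return uint(NF_QUEUE) | (uint(newQueueID) << 16)
}

func main() {
	fmt.Printf("%#x\n", requeueVerdict(1)) // queue #1 -> 0x10003
}
```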
// SetVerdictWithPacket applies a verdict along with a modified packet payload
func (p *Packet) SetVerdictWithPacket(v Verdict, packet []byte) {
p.verdictChannel <- VerdictContainer{Verdict: v, Packet: packet, Mark: 0}
}
// IsIPv4 reports whether the packet is IPv4
func (p *Packet) IsIPv4() bool {
return p.NetworkProtocol == IPv4
}
opensnitch-1.6.9/daemon/netfilter/queue.c 0000664 0000000 0000000 00000000024 15003540030 0020412 0 ustar 00root root 0000000 0000000 #include "queue.h"
opensnitch-1.6.9/daemon/netfilter/queue.go 0000664 0000000 0000000 00000015232 15003540030 0020604 0 ustar 00root root 0000000 0000000 package netfilter
/*
#cgo pkg-config: libnetfilter_queue
#cgo CFLAGS: -I/usr/include
#cgo LDFLAGS: -L/usr/lib64/ -ldl
#include "queue.h"
*/
import "C"
import (
"fmt"
"os"
"sync"
"syscall"
"time"
"unsafe"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"golang.org/x/sys/unix"
)
const (
AF_INET = 2
AF_INET6 = 10
NF_DROP Verdict = 0
NF_ACCEPT Verdict = 1
NF_STOLEN Verdict = 2
NF_QUEUE Verdict = 3
NF_REPEAT Verdict = 4
NF_STOP Verdict = 5
NF_DEFAULT_QUEUE_SIZE uint32 = 4096
NF_DEFAULT_PACKET_SIZE uint32 = 4096
)
var (
queueIndex = make(map[uint32]*chan Packet, 0)
queueIndexLock = sync.RWMutex{}
gopacketDecodeOptions = gopacket.DecodeOptions{Lazy: true, NoCopy: true}
)
// VerdictContainerC is the struct that contains the mark, action, length and
// payload of a packet.
// It's defined in queue.h, and filled on go_callback()
type VerdictContainerC C.verdictContainer
// Queue holds the information of a netfilter queue:
// the handles of the connection to the kernel and of the created queue,
// the channel where the intercepted packets are received,
// and the ID of the queue.
type Queue struct {
h *C.struct_nfq_handle
qh *C.struct_nfq_q_handle
packets chan Packet
fd C.int
idx uint32
}
// NewQueue opens a new netfilter queue to receive the packets diverted to the given queue ID.
func NewQueue(queueID uint16) (q *Queue, err error) {
q = &Queue{
idx: uint32(time.Now().UnixNano()),
packets: make(chan Packet),
}
if err = q.create(queueID); err != nil {
return nil, err
} else if err = q.setup(); err != nil {
return nil, err
}
go q.run()
return q, nil
}
func (q *Queue) create(queueID uint16) (err error) {
var ret C.int
if q.h, err = C.nfq_open(); err != nil {
return fmt.Errorf("Error opening Queue handle: %v", err)
} else if ret, err = C.nfq_unbind_pf(q.h, AF_INET); err != nil || ret < 0 {
errmsg := fmt.Errorf("Error %d unbinding existing q handler from AF_INET protocol family: %v", ret, err)
if syscall.Errno(ret) == unix.EINVAL {
errmsg = fmt.Errorf("%s\nRestarting your computer may help to solve this error (see issues: #323 and #912 for more information)", errmsg)
}
return errmsg
} else if ret, err = C.nfq_unbind_pf(q.h, AF_INET6); err != nil || ret < 0 {
return fmt.Errorf("Error (%d) unbinding existing q handler from AF_INET6 protocol family: %v", ret, err)
} else if ret, err := C.nfq_bind_pf(q.h, AF_INET); err != nil || ret < 0 {
return fmt.Errorf("Error (%d) binding to AF_INET protocol family: %v", ret, err)
} else if ret, err := C.nfq_bind_pf(q.h, AF_INET6); err != nil || ret < 0 {
return fmt.Errorf("Error (%d) binding to AF_INET6 protocol family: %v", ret, err)
} else if q.qh, err = C.CreateQueue(q.h, C.uint16_t(queueID), C.uint32_t(q.idx)); err != nil || q.qh == nil {
q.destroy()
return fmt.Errorf("Error binding to queue: %v", err)
}
queueIndexLock.Lock()
queueIndex[q.idx] = &q.packets
queueIndexLock.Unlock()
return nil
}
func (q *Queue) setup() (err error) {
var ret C.int
queueSize := C.uint32_t(NF_DEFAULT_QUEUE_SIZE)
bufferSize := C.uint(NF_DEFAULT_PACKET_SIZE)
totSize := C.uint(NF_DEFAULT_QUEUE_SIZE * NF_DEFAULT_PACKET_SIZE)
if ret, err = C.nfq_set_queue_maxlen(q.qh, queueSize); err != nil || ret < 0 {
q.destroy()
return fmt.Errorf("Unable to set max packets in queue: %v", err)
} else if C.nfq_set_mode(q.qh, C.uint8_t(2), bufferSize) < 0 { // 2 = NFQNL_COPY_PACKET
q.destroy()
return fmt.Errorf("Unable to set packets copy mode: %v", err)
} else if q.fd, err = C.nfq_fd(q.h); err != nil {
q.destroy()
return fmt.Errorf("Unable to get queue file-descriptor. %v", err)
} else if C.nfnl_rcvbufsiz(C.nfq_nfnlh(q.h), totSize) < 0 {
q.destroy()
return fmt.Errorf("Unable to increase netfilter buffer space size")
}
return nil
}
func (q *Queue) run() {
if errno := C.Run(q.h, q.fd); errno != 0 {
fmt.Fprintf(os.Stderr, "Terminating, unable to receive packet due to errno=%d", errno)
}
}
// Close ensures that nfqueue resources are freed and closed.
// C.stop_reading_packets() stops the packet-reading loop, which causes
// the run() goroutine to exit.
// After that, the listening queue is destroyed and closed.
// If any of these steps gets stuck while closing, we exit by timeout.
func (q *Queue) Close() {
C.stop_reading_packets()
q.destroy()
queueIndexLock.Lock()
delete(queueIndex, q.idx)
queueIndexLock.Unlock()
close(q.packets)
}
func (q *Queue) destroy() {
// we'll try to exit cleanly, but sometimes nfqueue gets stuck
time.AfterFunc(5*time.Second, func() {
log.Warning("queue (%d) stuck, closing by timeout", q.idx)
if q != nil {
C.close(q.fd)
q.closeNfq()
}
os.Exit(0)
})
if q.qh != nil {
if ret := C.nfq_destroy_queue(q.qh); ret != 0 {
log.Warning("Queue.destroy() idx=%d, nfq_destroy_queue() not closed: %d", q.idx, ret)
}
}
q.closeNfq()
}
func (q *Queue) closeNfq() {
if q.h != nil {
if ret := C.nfq_close(q.h); ret != 0 {
log.Warning("Queue.destroy() idx=%d, nfq_close() not closed: %d", q.idx, ret)
}
}
}
// Packets returns the channel on which the intercepted packets are delivered.
func (q *Queue) Packets() <-chan Packet {
return q.packets
}
// Note: the //export directive below is mandatory; it exposes go_callback to the C side
//export go_callback
func go_callback(queueID C.int, data *C.uchar, length C.int, mark C.uint, idx uint32, vc *VerdictContainerC, uid, devIn, devOut uint32) {
(*vc).verdict = C.uint(NF_ACCEPT)
(*vc).data = nil
(*vc).mark_set = 0
(*vc).length = 0
queueIndexLock.RLock()
queueChannel, found := queueIndex[idx]
queueIndexLock.RUnlock()
if !found {
fmt.Fprintf(os.Stderr, "Unexpected queue idx %d\n", idx)
return
}
xdata := C.GoBytes(unsafe.Pointer(data), length)
p := Packet{
verdictChannel: make(chan VerdictContainer),
Mark: uint32(mark),
UID: uid,
NetworkProtocol: xdata[0] >> 4, // first 4 bits is the version
IfaceInIdx: int(devIn),
IfaceOutIdx: int(devOut),
}
var packet gopacket.Packet
if p.IsIPv4() {
packet = gopacket.NewPacket(xdata, layers.LayerTypeIPv4, gopacketDecodeOptions)
} else {
packet = gopacket.NewPacket(xdata, layers.LayerTypeIPv6, gopacketDecodeOptions)
}
p.Packet = packet
select {
case *queueChannel <- p:
select {
case v := <-p.verdictChannel:
if v.Packet == nil {
(*vc).verdict = C.uint(v.Verdict)
} else {
(*vc).verdict = C.uint(v.Verdict)
(*vc).data = (*C.uchar)(unsafe.Pointer(&v.Packet[0]))
(*vc).length = C.uint(len(v.Packet))
}
if v.Mark != 0 {
(*vc).mark_set = C.uint(1)
(*vc).mark = C.uint(v.Mark)
}
}
case <-time.After(1 * time.Millisecond):
fmt.Fprintf(os.Stderr, "Timed out while sending packet to queue channel %d\n", idx)
}
}
opensnitch-1.6.9/daemon/netfilter/queue.h 0000664 0000000 0000000 00000006320 15003540030 0020424 0 ustar 00root root 0000000 0000000 #ifndef _NETFILTER_QUEUE_H
#define _NETFILTER_QUEUE_H
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>
#include <dlfcn.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <libnfnetlink/libnfnetlink.h>
#include <libnetfilter_queue/libnetfilter_queue.h>
typedef struct {
unsigned int verdict;
unsigned int mark;
unsigned int mark_set;
unsigned int length;
unsigned char *data;
} verdictContainer;
static void *get_uid = NULL;
extern void go_callback(int id, unsigned char* data, int len, unsigned int mark, uint32_t idx, verdictContainer *vc, uint32_t uid, uint32_t in_dev, uint32_t out_dev);
static uint8_t stop = 0;
static inline void configure_uid_if_available(struct nfq_q_handle *qh){
void *hndl = dlopen("libnetfilter_queue.so.1", RTLD_LAZY);
if (!hndl) {
hndl = dlopen("libnetfilter_queue.so", RTLD_LAZY);
if (!hndl){
printf("WARNING: libnetfilter_queue not available\n");
return;
}
}
if ((get_uid = dlsym(hndl, "nfq_get_uid")) == NULL){
printf("WARNING: nfq_get_uid not available\n");
return;
}
printf("OK: libnetfilter_queue supports nfq_get_uid\n");
#ifdef NFQA_CFG_F_UID_GID
if (qh != NULL && nfq_set_queue_flags(qh, NFQA_CFG_F_UID_GID, NFQA_CFG_F_UID_GID)){
printf("WARNING: UID not available on this kernel/libnetfilter_queue\n");
}
#endif
}
static int nf_callback(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg, struct nfq_data *nfa, void *arg){
if (stop) {
return -1;
}
uint32_t id = -1, idx = 0, mark = 0;
struct nfqnl_msg_packet_hdr *ph = NULL;
unsigned char *buffer = NULL;
int size = 0;
verdictContainer vc = {0};
uint32_t uid = 0xffffffff;
uint32_t in_dev=0, out_dev=0;
in_dev = nfq_get_indev(nfa);
out_dev = nfq_get_outdev(nfa);
mark = nfq_get_nfmark(nfa);
ph = nfq_get_msg_packet_hdr(nfa);
id = ntohl(ph->packet_id);
size = nfq_get_payload(nfa, &buffer);
idx = (uint32_t)((uintptr_t)arg);
#ifdef NFQA_CFG_F_UID_GID
if (get_uid)
nfq_get_uid(nfa, &uid);
#endif
go_callback(id, buffer, size, mark, idx, &vc, uid, in_dev, out_dev);
if( vc.mark_set == 1 ) {
return nfq_set_verdict2(qh, id, vc.verdict, vc.mark, vc.length, vc.data);
}
return nfq_set_verdict(qh, id, vc.verdict, vc.length, vc.data);
}
static inline struct nfq_q_handle* CreateQueue(struct nfq_handle *h, uint16_t queue, uint32_t idx) {
struct nfq_q_handle* qh = nfq_create_queue(h, queue, &nf_callback, (void*)((uintptr_t)idx));
if (qh == NULL){
printf("ERROR: nfq_create_queue() queue not created\n");
} else {
configure_uid_if_available(qh);
}
return qh;
}
static inline void stop_reading_packets() {
stop = 1;
}
static inline int Run(struct nfq_handle *h, int fd) {
char buf[4096] __attribute__ ((aligned));
int rcvd, opt = 1;
setsockopt(fd, SOL_NETLINK, NETLINK_NO_ENOBUFS, &opt, sizeof(int));
while ((rcvd = recv(fd, buf, sizeof(buf), 0)) >= 0) {
if (stop == 1) {
return errno;
}
nfq_handle_packet(h, buf, rcvd);
}
return errno;
}
#endif
opensnitch-1.6.9/daemon/netlink/ 0000775 0000000 0000000 00000000000 15003540030 0016576 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/netlink/ifaces.go 0000664 0000000 0000000 00000002322 15003540030 0020356 0 ustar 00root root 0000000 0000000 package netlink
import (
"net"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/vishvananda/netlink"
)
// https://cs.opensource.google/go/go/+/refs/tags/go1.20.6:src/net/ip.go;l=133
// TODO: remove when upgrading go version.
func isPrivate(ip net.IP) bool {
if ip4 := ip.To4(); ip4 != nil {
return ip4[0] == 10 ||
(ip4[0] == 172 && ip4[1]&0xf0 == 16) ||
(ip4[0] == 192 && ip4[1] == 168)
}
return len(ip) == 16 && ip[0]&0xfe == 0xfc
}
// GetLocalAddrs returns the list of local IPs
func GetLocalAddrs() map[string]netlink.Addr {
localAddresses := make(map[string]netlink.Addr)
addr, err := netlink.AddrList(nil, netlink.FAMILY_ALL)
if err != nil {
log.Error("eBPF error looking up this machine's addresses via netlink: %v", err)
return nil
}
for _, a := range addr {
log.Debug("local addr: %+v\n", a)
localAddresses[a.IP.String()] = a
}
return localAddresses
}
// AddrUpdateToAddr translates AddrUpdate struct to Addr.
func AddrUpdateToAddr(addr *netlink.AddrUpdate) netlink.Addr {
return netlink.Addr{
IPNet: &addr.LinkAddress,
LinkIndex: addr.LinkIndex,
Flags: addr.Flags,
Scope: addr.Scope,
PreferedLft: addr.PreferedLft,
ValidLft: addr.ValidLft,
}
}
opensnitch-1.6.9/daemon/netlink/socket.go 0000664 0000000 0000000 00000020250 15003540030 0020414 0 ustar 00root root 0000000 0000000 package netlink
import (
"fmt"
"net"
"strconv"
"syscall"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/vishvananda/netlink"
"golang.org/x/sys/unix"
)
// GetSocketInfo asks the kernel via netlink for a given connection.
// If the connection is found, we return the uid and the possible
// associated inodes.
// If the outgoing connection is not found but there are entries with the same
// source port and protocol, add all of their inodes to the list.
//
// Some examples:
// outgoing connection as seen by netfilter || connection details dumped from kernel
//
// 47344:192.168.1.106 -> 151.101.65.140:443 || in kernel: 47344:192.168.1.106 -> 151.101.65.140:443
// 8612:192.168.1.5 -> 192.168.1.255:8612 || in kernel: 8612:192.168.1.105 -> 0.0.0.0:0
// 123:192.168.1.5 -> 217.144.138.234:123 || in kernel: 123:0.0.0.0 -> 0.0.0.0:0
// 45015:127.0.0.1 -> 239.255.255.250:1900 || in kernel: 45015:127.0.0.1 -> 0.0.0.0:0
// 50416:fe80::9fc2:ddcf:df22:aa50 -> fe80::1:53 || in kernel: 50416:254.128.0.0 -> 254.128.0.0:53
// 51413:192.168.1.106 -> 103.224.182.250:1337 || in kernel: 51413:0.0.0.0 -> 0.0.0.0:0
func GetSocketInfo(proto string, srcIP net.IP, srcPort uint, dstIP net.IP, dstPort uint) (uid int, inodes []int) {
uid = -1
family := uint8(syscall.AF_INET)
ipproto := uint8(syscall.IPPROTO_TCP)
protoLen := len(proto)
if proto[protoLen-1:protoLen] == "6" {
family = syscall.AF_INET6
}
if proto[:3] == "udp" {
ipproto = syscall.IPPROTO_UDP
if protoLen >= 7 && proto[:7] == "udplite" {
ipproto = syscall.IPPROTO_UDPLITE
}
}
if protoLen >= 4 && proto[:4] == "sctp" {
ipproto = syscall.IPPROTO_SCTP
}
if protoLen >= 4 && proto[:4] == "icmp" {
ipproto = syscall.IPPROTO_RAW
}
if sockList, err := SocketGet(family, ipproto, uint16(srcPort), uint16(dstPort), srcIP, dstIP); err == nil {
for n, sock := range sockList {
if sock.UID != 0xffffffff {
uid = int(sock.UID)
}
log.Debug("[%d/%d] outgoing connection uid: %d, %d:%v -> %v:%d || netlink response: %d:%v -> %v:%d inode: %d - loopback: %v multicast: %v unspecified: %v linklocalunicast: %v ifaceLocalMulticast: %v GlobalUni: %v ",
n, len(sockList),
int(sock.UID),
srcPort, srcIP, dstIP, dstPort,
sock.ID.SourcePort, sock.ID.Source,
sock.ID.Destination, sock.ID.DestinationPort, sock.INode,
sock.ID.Destination.IsLoopback(),
sock.ID.Destination.IsMulticast(),
sock.ID.Destination.IsUnspecified(),
sock.ID.Destination.IsLinkLocalUnicast(),
sock.ID.Destination.IsLinkLocalMulticast(),
sock.ID.Destination.IsGlobalUnicast(),
)
if sock.ID.SourcePort == uint16(srcPort) && sock.ID.Source.Equal(srcIP) &&
(sock.ID.DestinationPort == uint16(dstPort)) &&
((sock.ID.Destination.IsGlobalUnicast() || sock.ID.Destination.IsLoopback()) && sock.ID.Destination.Equal(dstIP)) {
inodes = append([]int{int(sock.INode)}, inodes...)
continue
}
log.Debug("GetSocketInfo() invalid: %d:%v -> %v:%d", sock.ID.SourcePort, sock.ID.Source, sock.ID.Destination, sock.ID.DestinationPort)
}
// handle special cases (see function description): ntp queries (123), broadcasts, incoming connections.
if len(inodes) == 0 && len(sockList) > 0 {
for n, sock := range sockList {
if sockList[n].ID.Destination.Equal(net.IPv4zero) || sockList[n].ID.Destination.Equal(net.IPv6zero) {
inodes = append([]int{int(sock.INode)}, inodes...)
log.Debug("netlink socket not found, adding entry: %d:%v -> %v:%d || %d:%v -> %v:%d inode: %d state: %s",
srcPort, srcIP, dstIP, dstPort,
sockList[n].ID.SourcePort, sockList[n].ID.Source,
sockList[n].ID.Destination, sockList[n].ID.DestinationPort,
sockList[n].INode, TCPStatesMap[sock.State])
} else if sock.ID.SourcePort == uint16(srcPort) && sock.ID.Source.Equal(srcIP) &&
(sock.ID.DestinationPort == uint16(dstPort)) {
inodes = append([]int{int(sock.INode)}, inodes...)
continue
} else {
log.Debug("netlink socket not found, EXCLUDING entry: %d:%v -> %v:%d || %d:%v -> %v:%d inode: %d state: %s",
srcPort, srcIP, dstIP, dstPort,
sockList[n].ID.SourcePort, sockList[n].ID.Source,
sockList[n].ID.Destination, sockList[n].ID.DestinationPort,
sockList[n].INode, TCPStatesMap[sock.State])
}
}
}
} else {
log.Debug("netlink socket error: %v - %d:%v -> %v:%d", err, srcPort, srcIP, dstIP, dstPort)
}
return uid, inodes
}
// GetSocketInfoByInode dumps the kernel sockets table and searches for the
// given inode in it.
func GetSocketInfoByInode(inodeStr string) (*Socket, error) {
inode, err := strconv.ParseUint(inodeStr, 10, 32)
if err != nil {
return nil, err
}
type inetStruct struct{ family, proto uint8 }
socketTypes := []inetStruct{
{syscall.AF_INET, syscall.IPPROTO_TCP},
{syscall.AF_INET, syscall.IPPROTO_UDP},
{syscall.AF_INET6, syscall.IPPROTO_TCP},
{syscall.AF_INET6, syscall.IPPROTO_UDP},
}
for _, socket := range socketTypes {
socketList, err := SocketsDump(socket.family, socket.proto)
if err != nil {
return nil, err
}
for idx := range socketList {
if uint32(inode) == socketList[idx].INode {
return socketList[idx], nil
}
}
}
return nil, fmt.Errorf("Inode not found")
}
// KillSocket kills a socket given the properties of a connection.
func KillSocket(proto string, srcIP net.IP, srcPort uint, dstIP net.IP, dstPort uint) {
family := uint8(syscall.AF_INET)
ipproto := uint8(syscall.IPPROTO_TCP)
protoLen := len(proto)
if proto[protoLen-1:protoLen] == "6" {
family = syscall.AF_INET6
}
if proto[:3] == "udp" {
ipproto = syscall.IPPROTO_UDP
if protoLen >= 7 && proto[:7] == "udplite" {
ipproto = syscall.IPPROTO_UDPLITE
}
}
if sockList, err := SocketGet(family, ipproto, uint16(srcPort), uint16(dstPort), srcIP, dstIP); err == nil {
for _, s := range sockList {
if err := SocketKill(family, ipproto, s.ID); err != nil {
log.Debug("Unable to kill socket: %d, %d, %v", srcPort, dstPort, err)
}
}
}
}
// KillSockets kills all sockets given a family and a protocol.
// Be careful: if you don't exclude local sockets, many local servers may
// misbehave and enter an infinite loop.
func KillSockets(fam, proto uint8, excludeLocal bool) error {
sockListTCP, err := SocketsDump(fam, proto)
if err != nil {
return fmt.Errorf("eBPF could not dump TCP (%d/%d) sockets via netlink: %v", fam, proto, err)
}
for _, sock := range sockListTCP {
if excludeLocal && (isPrivate(sock.ID.Destination) ||
sock.ID.Source.IsUnspecified() ||
sock.ID.Destination.IsUnspecified()) {
continue
}
if err := SocketKill(fam, proto, sock.ID); err != nil {
log.Debug("Unable to kill socket (%+v): %s", sock.ID, err)
}
}
return nil
}
// KillAllSockets kills the sockets for the given families and protocols.
func KillAllSockets() {
type opts struct {
fam uint8
proto uint8
}
optList := []opts{
// add families and protos as needed
{unix.AF_INET, uint8(syscall.IPPROTO_TCP)},
{unix.AF_INET6, uint8(syscall.IPPROTO_TCP)},
{unix.AF_INET, uint8(syscall.IPPROTO_UDP)},
{unix.AF_INET6, uint8(syscall.IPPROTO_UDP)},
{unix.AF_INET, uint8(syscall.IPPROTO_SCTP)},
{unix.AF_INET6, uint8(syscall.IPPROTO_SCTP)},
}
for _, opt := range optList {
KillSockets(opt.fam, opt.proto, true)
}
}
// FlushConnections flushes conntrack as soon as the netfilter rule is set.
// This ensures that already-established connections will go through the netfilter queue.
func FlushConnections() {
if err := netlink.ConntrackTableFlush(netlink.ConntrackTable); err != nil {
log.Error("error flushing ConntrackTable %s", err)
}
if err := netlink.ConntrackTableFlush(netlink.ConntrackExpectTable); err != nil {
log.Error("error flushing ConntrackExpectTable %s", err)
}
// Force established connections to reestablish again.
KillAllSockets()
}
// SocketsAreEqual compares 2 different sockets to see if they match.
func SocketsAreEqual(aSocket, bSocket *Socket) bool {
return ((*aSocket).INode == (*bSocket).INode &&
// inodes are unique enough, so the matches below will never have to be checked
(*aSocket).ID.SourcePort == (*bSocket).ID.SourcePort &&
(*aSocket).ID.Source.Equal((*bSocket).ID.Source) &&
(*aSocket).ID.Destination.Equal((*bSocket).ID.Destination) &&
(*aSocket).ID.DestinationPort == (*bSocket).ID.DestinationPort &&
(*aSocket).UID == (*bSocket).UID)
}
opensnitch-1.6.9/daemon/netlink/socket_linux.go 0000664 0000000 0000000 00000014546 15003540030 0021646 0 ustar 00root root 0000000 0000000 package netlink
import (
"encoding/binary"
"errors"
"fmt"
"net"
"syscall"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/vishvananda/netlink/nl"
)
// This is a modification of https://github.com/vishvananda/netlink socket_linux.go - Apache2.0 license
// which adds support for querying UDP, UDPLITE and IPv6 sockets to SocketGet()
const (
SOCK_DESTROY = 21
sizeofSocketID = 0x30
sizeofSocketRequest = sizeofSocketID + 0x8
sizeofSocket = sizeofSocketID + 0x18
)
var (
native = nl.NativeEndian()
networkOrder = binary.BigEndian
TCP_ALL = uint32(0xfff)
)
// https://elixir.bootlin.com/linux/latest/source/include/net/tcp_states.h
const (
TCP_INVALID = iota
TCP_ESTABLISHED
TCP_SYN_SENT
TCP_SYN_RECV
TCP_FIN_WAIT1
TCP_FIN_WAIT2
TCP_TIME_WAIT
TCP_CLOSE
TCP_CLOSE_WAIT
TCP_LAST_ACK
TCP_LISTEN
TCP_CLOSING
TCP_NEW_SYN_REC
TCP_MAX_STATES
)
// TCPStatesMap holds the list of TCP states
var TCPStatesMap = map[uint8]string{
TCP_INVALID: "invalid",
TCP_ESTABLISHED: "established",
TCP_SYN_SENT: "syn_sent",
TCP_SYN_RECV: "syn_recv",
TCP_FIN_WAIT1: "fin_wait1",
TCP_FIN_WAIT2: "fin_wait2",
TCP_TIME_WAIT: "time_wait",
TCP_CLOSE: "close",
TCP_CLOSE_WAIT: "close_wait",
TCP_LAST_ACK: "last_ack",
TCP_LISTEN: "listen",
TCP_CLOSING: "closing",
}
// SocketID holds the socket information of a request/response to the kernel
type SocketID struct {
SourcePort uint16
DestinationPort uint16
Source net.IP
Destination net.IP
Interface uint32
Cookie [2]uint32
}
// Socket represents a netlink socket.
type Socket struct {
Family uint8
State uint8
Timer uint8
Retrans uint8
ID SocketID
Expires uint32
RQueue uint32
WQueue uint32
UID uint32
INode uint32
}
// SocketRequest holds the request/response of a connection to the kernel
type SocketRequest struct {
Family uint8
Protocol uint8
Ext uint8
pad uint8
States uint32
ID SocketID
}
type writeBuffer struct {
Bytes []byte
pos int
}
func (b *writeBuffer) Write(c byte) {
b.Bytes[b.pos] = c
b.pos++
}
func (b *writeBuffer) Next(n int) []byte {
s := b.Bytes[b.pos : b.pos+n]
b.pos += n
return s
}
// Serialize convert SocketRequest struct to bytes.
func (r *SocketRequest) Serialize() []byte {
b := writeBuffer{Bytes: make([]byte, sizeofSocketRequest)}
b.Write(r.Family)
b.Write(r.Protocol)
b.Write(r.Ext)
b.Write(r.pad)
native.PutUint32(b.Next(4), r.States)
networkOrder.PutUint16(b.Next(2), r.ID.SourcePort)
networkOrder.PutUint16(b.Next(2), r.ID.DestinationPort)
if r.Family == syscall.AF_INET6 {
copy(b.Next(16), r.ID.Source)
copy(b.Next(16), r.ID.Destination)
} else {
copy(b.Next(4), r.ID.Source.To4())
b.Next(12)
copy(b.Next(4), r.ID.Destination.To4())
b.Next(12)
}
native.PutUint32(b.Next(4), r.ID.Interface)
native.PutUint32(b.Next(4), r.ID.Cookie[0])
native.PutUint32(b.Next(4), r.ID.Cookie[1])
return b.Bytes
}
// Len returns the size of a socket request
func (r *SocketRequest) Len() int { return sizeofSocketRequest }
type readBuffer struct {
Bytes []byte
pos int
}
func (b *readBuffer) Read() byte {
c := b.Bytes[b.pos]
b.pos++
return c
}
func (b *readBuffer) Next(n int) []byte {
s := b.Bytes[b.pos : b.pos+n]
b.pos += n
return s
}
func (s *Socket) deserialize(b []byte) error {
if len(b) < sizeofSocket {
return fmt.Errorf("socket data short read (%d); want %d", len(b), sizeofSocket)
}
rb := readBuffer{Bytes: b}
s.Family = rb.Read()
s.State = rb.Read()
s.Timer = rb.Read()
s.Retrans = rb.Read()
s.ID.SourcePort = networkOrder.Uint16(rb.Next(2))
s.ID.DestinationPort = networkOrder.Uint16(rb.Next(2))
if s.Family == syscall.AF_INET6 {
s.ID.Source = net.IP(rb.Next(16))
s.ID.Destination = net.IP(rb.Next(16))
} else {
s.ID.Source = net.IPv4(rb.Read(), rb.Read(), rb.Read(), rb.Read())
rb.Next(12)
s.ID.Destination = net.IPv4(rb.Read(), rb.Read(), rb.Read(), rb.Read())
rb.Next(12)
}
s.ID.Interface = native.Uint32(rb.Next(4))
s.ID.Cookie[0] = native.Uint32(rb.Next(4))
s.ID.Cookie[1] = native.Uint32(rb.Next(4))
s.Expires = native.Uint32(rb.Next(4))
s.RQueue = native.Uint32(rb.Next(4))
s.WQueue = native.Uint32(rb.Next(4))
s.UID = native.Uint32(rb.Next(4))
s.INode = native.Uint32(rb.Next(4))
return nil
}
// SocketKill kills a connection
func SocketKill(family, proto uint8, sockID SocketID) error {
sockReq := &SocketRequest{
Family: family,
Protocol: proto,
ID: sockID,
}
req := nl.NewNetlinkRequest(SOCK_DESTROY, syscall.NLM_F_REQUEST|syscall.NLM_F_ACK)
req.AddData(sockReq)
_, err := req.Execute(syscall.NETLINK_INET_DIAG, 0)
if err != nil {
return err
}
return nil
}
// SocketGet returns the list of active connections in the kernel
// filtered by several fields. Currently it returns connections
// filtered by source port and protocol.
func SocketGet(family uint8, proto uint8, srcPort, dstPort uint16, local, remote net.IP) ([]*Socket, error) {
_Id := SocketID{
SourcePort: srcPort,
Cookie: [2]uint32{nl.TCPDIAG_NOCOOKIE, nl.TCPDIAG_NOCOOKIE},
}
sockReq := &SocketRequest{
Family: family,
Protocol: proto,
States: TCP_ALL,
ID: _Id,
}
return netlinkRequest(sockReq, family, proto, srcPort, dstPort, local, remote)
}
// SocketsDump returns the list of all connections from the kernel
func SocketsDump(family uint8, proto uint8) ([]*Socket, error) {
sockReq := &SocketRequest{
Family: family,
Protocol: proto,
States: TCP_ALL,
}
return netlinkRequest(sockReq, 0, 0, 0, 0, nil, nil)
}
func netlinkRequest(sockReq *SocketRequest, family uint8, proto uint8, srcPort, dstPort uint16, local, remote net.IP) ([]*Socket, error) {
req := nl.NewNetlinkRequest(nl.SOCK_DIAG_BY_FAMILY, syscall.NLM_F_DUMP)
req.AddData(sockReq)
msgs, err := req.Execute(syscall.NETLINK_INET_DIAG, 0)
if err != nil {
return nil, err
}
if len(msgs) == 0 {
return nil, errors.New("Warning, no message nor error from netlink, or no connections found")
}
var sock []*Socket
for n, m := range msgs {
s := &Socket{}
if err = s.deserialize(m); err != nil {
log.Error("[%d] netlink socket error: %s, %d:%v -> %v:%d - %d:%v -> %v:%d",
n, TCPStatesMap[s.State],
srcPort, local, remote, dstPort,
s.ID.SourcePort, s.ID.Source, s.ID.Destination, s.ID.DestinationPort)
continue
}
if s.INode == 0 {
continue
}
sock = append([]*Socket{s}, sock...)
}
return sock, err
}
opensnitch-1.6.9/daemon/netlink/socket_test.go 0000664 0000000 0000000 00000005646 15003540030 0021467 0 ustar 00root root 0000000 0000000 package netlink
import (
"fmt"
"net"
"os"
"strconv"
"strings"
"testing"
)
type Connection struct {
SrcIP net.IP
DstIP net.IP
Protocol string
SrcPort uint
DstPort uint
OutConn net.Conn
Listener net.Listener
}
func EstablishConnection(proto, dst string) (net.Conn, error) {
c, err := net.Dial(proto, dst)
if err != nil {
fmt.Println(err)
return nil, err
}
return c, nil
}
func ListenOnPort(proto, port string) (net.Listener, error) {
// TODO: UDP -> ListenUDP() or ListenPacket()
l, err := net.Listen(proto, port)
if err != nil {
fmt.Println(err)
return nil, err
}
return l, nil
}
func setupConnection(proto string, connChan chan *Connection) {
listnr, _ := ListenOnPort(proto, "127.0.0.1:55555")
conn, err := EstablishConnection(proto, "127.0.0.1:55555")
if err != nil {
connChan <- nil
return
}
laddr := strings.Split(conn.LocalAddr().String(), ":")
daddr := strings.Split(conn.RemoteAddr().String(), ":")
sport, _ := strconv.Atoi(laddr[1])
dport, _ := strconv.Atoi(daddr[1])
lconn := &Connection{
SrcPort: uint(sport),
DstPort: uint(dport),
SrcIP: net.ParseIP(laddr[0]),
DstIP: net.ParseIP(daddr[0]),
Protocol: "tcp",
Listener: listnr,
OutConn: conn,
}
connChan <- lconn
}
// TestNetlinkTCPQueries tests queries to the kernel to get the inode of a connection.
// When using ProcFS as the monitor method, we need that value to get the PID of an application.
// We also need it if for any reason auditd or ebpf doesn't return the PID of the application.
// TODO: test all the cases described in the GetSocketInfo() description.
func TestNetlinkTCPQueries(t *testing.T) {
// netlink tests disabled by default, they cause random failures on restricted
// environments.
if os.Getenv("NETLINK_TESTS") == "" {
t.Skip("Skipping netlink tests. Use NETLINK_TESTS=1 to launch these tests.")
}
connChan := make(chan *Connection)
go setupConnection("tcp", connChan)
conn := <-connChan
if conn == nil {
t.Error("TestParseTCPConnection, conn nil")
}
var inodes []int
uid := -1
t.Run("Test GetSocketInfo", func(t *testing.T) {
uid, inodes = GetSocketInfo("tcp", conn.SrcIP, conn.SrcPort, conn.DstIP, conn.DstPort)
if len(inodes) == 0 {
t.Error("inodes empty")
}
if uid != os.Getuid() {
t.Error("GetSocketInfo UID error:", uid, os.Getuid())
}
})
t.Run("Test GetSocketInfoByInode", func(t *testing.T) {
socket, err := GetSocketInfoByInode(fmt.Sprint(inodes[0]))
if err != nil {
t.Fatal("GetSocketInfoByInode error:", err)
}
if socket == nil {
t.Fatal("GetSocketInfoByInode inode not found")
}
if socket.ID.SourcePort != uint16(conn.SrcPort) {
t.Error("GetSocketInfoByInode srcPort error:", socket)
}
if socket.ID.DestinationPort != uint16(conn.DstPort) {
t.Error("GetSocketInfoByInode dstPort error:", socket)
}
if socket.UID != uint32(os.Getuid()) {
t.Error("GetSocketInfoByInode UID error:", socket, os.Getuid())
}
})
conn.Listener.Close()
}
opensnitch-1.6.9/daemon/netstat/entry.go
package netstat
import (
"net"
)
// Entry holds the information of a /proc/net/* entry.
// For example, /proc/net/tcp:
// sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
// 0: 0100007F:13AD 00000000:0000 0A 00000000:00000000 00:00000000 00000000 1000 0 18083222
type Entry struct {
Proto string
SrcIP net.IP
DstIP net.IP
UserId int
INode int
SrcPort uint
DstPort uint
}
// NewEntry creates a new entry with values from /proc/net/
func NewEntry(proto string, srcIP net.IP, srcPort uint, dstIP net.IP, dstPort uint, userId int, iNode int) Entry {
return Entry{
Proto: proto,
SrcIP: srcIP,
SrcPort: srcPort,
DstIP: dstIP,
DstPort: dstPort,
UserId: userId,
INode: iNode,
}
}
opensnitch-1.6.9/daemon/netstat/find.go
package netstat
import (
"net"
"strings"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
)
// FindEntry looks for the connection in the list of known connections in ProcFS.
func FindEntry(proto string, srcIP net.IP, srcPort uint, dstIP net.IP, dstPort uint) *Entry {
if entry := findEntryForProtocol(proto, srcIP, srcPort, dstIP, dstPort); entry != nil {
return entry
}
ipv6Suffix := "6"
if core.IPv6Enabled && strings.HasSuffix(proto, ipv6Suffix) == false {
otherProto := proto + ipv6Suffix
log.Debug("Searching for %s netstat entry instead of %s", otherProto, proto)
if entry := findEntryForProtocol(otherProto, srcIP, srcPort, dstIP, dstPort); entry != nil {
return entry
}
}
return &Entry{
Proto: proto,
SrcIP: srcIP,
SrcPort: srcPort,
DstIP: dstIP,
DstPort: dstPort,
UserId: -1,
INode: -1,
}
}
func findEntryForProtocol(proto string, srcIP net.IP, srcPort uint, dstIP net.IP, dstPort uint) *Entry {
entries, err := Parse(proto)
if err != nil {
log.Warning("Error while searching for %s netstat entry: %s", proto, err)
return nil
}
for _, entry := range entries {
if srcIP.Equal(entry.SrcIP) && srcPort == entry.SrcPort && dstIP.Equal(entry.DstIP) && dstPort == entry.DstPort {
return &entry
}
}
return nil
}
opensnitch-1.6.9/daemon/netstat/parse.go
package netstat
import (
"bufio"
"encoding/binary"
"net"
"os"
"regexp"
"strconv"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
)
var (
parser = regexp.MustCompile(`(?i)` +
`\d+:\s+` + // sl
`([a-f0-9]{8,32}):([a-f0-9]{4})\s+` + // local_address
`([a-f0-9]{8,32}):([a-f0-9]{4})\s+` + // rem_address
`[a-f0-9]{2}\s+` + // st
`[a-f0-9]{8}:[a-f0-9]{8}\s+` + // tx_queue rx_queue
`[a-f0-9]{2}:[a-f0-9]{8}\s+` + // tr tm->when
`[a-f0-9]{8}\s+` + // retrnsmt
`(\d+)\s+` + // uid
`\d+\s+` + // timeout
`(\d+)\s+` + // inode
`.+`) // stuff we don't care about
)
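As a standalone illustration of what this regexp captures, the following sketch applies the same pattern to a sample /proc/net/tcp line (built from the example quoted in entry.go, with the trailing kernel fields that real entries carry):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the netstat parser, applied to one sample line.
var parser = regexp.MustCompile(`(?i)` +
	`\d+:\s+` + // sl
	`([a-f0-9]{8,32}):([a-f0-9]{4})\s+` + // local_address
	`([a-f0-9]{8,32}):([a-f0-9]{4})\s+` + // rem_address
	`[a-f0-9]{2}\s+` + // st
	`[a-f0-9]{8}:[a-f0-9]{8}\s+` + // tx_queue rx_queue
	`[a-f0-9]{2}:[a-f0-9]{8}\s+` + // tr tm->when
	`[a-f0-9]{8}\s+` + // retrnsmt
	`(\d+)\s+` + // uid
	`\d+\s+` + // timeout
	`(\d+)\s+` + // inode
	`.+`)

func main() {
	line := "0: 0100007F:13AD 00000000:0000 0A 00000000:00000000 00:00000000 00000000 1000 0 18083222 1 0000000000000000 100 0 0 10 0"
	m := parser.FindStringSubmatch(line)
	// m[1]/m[2]: local address and port, m[5]: uid, m[6]: inode
	fmt.Println("local:", m[1], "port:", m[2], "uid:", m[5], "inode:", m[6])
	// local: 0100007F port: 13AD uid: 1000 inode: 18083222
}
```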
func decToInt(n string) int {
d, err := strconv.ParseInt(n, 10, 64)
if err != nil {
log.Fatal("Error while parsing %s to int: %s", n, err)
}
return int(d)
}
func hexToInt(h string) uint {
d, err := strconv.ParseUint(h, 16, 64)
if err != nil {
log.Fatal("Error while parsing %s to int: %s", h, err)
}
return uint(d)
}
func hexToInt2(h string) (uint, uint) {
if len(h) > 16 {
d, err := strconv.ParseUint(h[:16], 16, 64)
if err != nil {
log.Fatal("Error while parsing %s to int: %s", h[:16], err)
}
d2, err := strconv.ParseUint(h[16:], 16, 64)
if err != nil {
log.Fatal("Error while parsing %s to int: %s", h[16:], err)
}
return uint(d), uint(d2)
}
d, err := strconv.ParseUint(h, 16, 64)
if err != nil {
log.Fatal("Error while parsing %s to int: %s", h, err)
}
return uint(d), 0
}
func hexToIP(h string) net.IP {
n, m := hexToInt2(h)
var ip net.IP
if m != 0 {
ip = make(net.IP, 16)
// TODO: Check if this depends on machine endianness?
binary.LittleEndian.PutUint32(ip, uint32(n>>32))
binary.LittleEndian.PutUint32(ip[4:], uint32(n))
binary.LittleEndian.PutUint32(ip[8:], uint32(m>>32))
binary.LittleEndian.PutUint32(ip[12:], uint32(m))
} else {
ip = make(net.IP, 4)
binary.LittleEndian.PutUint32(ip, uint32(n))
}
return ip
}
// Parse scans and retrieves the opened connections, from /proc/net/ files
func Parse(proto string) ([]Entry, error) {
filename := core.ConcatStrings("/proc/net/", proto)
fd, err := os.Open(filename)
if err != nil {
return nil, err
}
defer fd.Close()
entries := make([]Entry, 0)
scanner := bufio.NewScanner(fd)
for lineno := 0; scanner.Scan(); lineno++ {
// skip column names
if lineno == 0 {
continue
}
line := core.Trim(scanner.Text())
m := parser.FindStringSubmatch(line)
if m == nil {
log.Warning("Could not parse netstat line from %s: %s", filename, line)
continue
}
entries = append(entries, NewEntry(
proto,
hexToIP(m[1]),
hexToInt(m[2]),
hexToIP(m[3]),
hexToInt(m[4]),
decToInt(m[5]),
decToInt(m[6]),
))
}
return entries, nil
}
opensnitch-1.6.9/daemon/opensnitchd-dinit
# Application firewall OpenSnitch
type = process
command = /usr/bin/opensnitchd -rules-path /etc/opensnitchd/rules
restart = true
smooth-recovery = yes
restart-delay = 15
stop-timeout = 10
restart-limit-count = 0
opensnitch-1.6.9/daemon/opensnitchd-openrc
#!/sbin/openrc-run
# OpenSnitch firewall service
depend() {
before net
after iptables ip6tables
use logger
provide firewall
}
start_pre() {
/bin/mkdir -p /etc/opensnitchd/rules
/bin/chown -R root:root /etc/opensnitchd
/bin/chown root:root /var/log/opensnitchd.log
/bin/chmod -R 755 /etc/opensnitchd
/bin/chmod -R 0644 /etc/opensnitchd/rules
/bin/chmod 0600 /var/log/opensnitchd.log
}
start() {
ebegin "Starting application firewall"
# only if the verbose flag is not set (rc-service opensnitchd start -v)
if [ -z "$VERBOSE" ]; then
# redirect stdout and stderr to /dev/null
/usr/local/bin/opensnitchd -rules-path /etc/opensnitchd/rules -log-file /var/log/opensnitchd.log > /dev/null 2>&1 &
else
/usr/local/bin/opensnitchd -rules-path /etc/opensnitchd/rules -log-file /var/log/opensnitchd.log
fi
eend $?
}
stop() {
ebegin "Stopping application firewall"
/usr/bin/pkill -SIGINT opensnitchd
eend $?
}
opensnitch-1.6.9/daemon/opensnitchd.service
[Unit]
Description=Application firewall OpenSnitch
Documentation=https://github.com/evilsocket/opensnitch/wiki
[Service]
Type=simple
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /etc/opensnitchd/rules
ExecStart=/usr/local/bin/opensnitchd -rules-path /etc/opensnitchd/rules
Restart=always
RestartSec=30
TimeoutStopSec=10
[Install]
WantedBy=multi-user.target
opensnitch-1.6.9/daemon/procmon/activepids.go
package procmon
import (
"io/ioutil"
"strconv"
"strings"
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
)
type value struct {
Process *Process
//Starttime uniquely identifies a process; it is the 22nd value in /proc/<pid>/stat.
//If another process starts with the same PID, its Starttime will be unique.
Starttime uint64
}
var (
activePids = make(map[uint64]value)
activePidsLock = sync.RWMutex{}
)
//MonitorActivePids checks that each process in activePids
//is still running and if not running (or another process with the same pid is running),
//removes the pid from activePids
func MonitorActivePids() {
for {
time.Sleep(time.Second)
activePidsLock.Lock()
for k, v := range activePids {
data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.FormatUint(k, 10), "/stat"))
if err != nil {
//file does not exist, the pid has quit
delete(activePids, k)
pidsCache.delete(int(k))
continue
}
startTime, err := strconv.ParseInt(strings.Split(string(data), " ")[21], 10, 64)
if err != nil {
log.Error("Could not find or convert Starttime. This should never happen. Please report this incident to the Opensnitch developers: %v", err)
delete(activePids, k)
pidsCache.delete(int(k))
continue
}
if uint64(startTime) != v.Starttime {
//extremely unlikely: the original process has quit and another process
//was started with the same PID - all this in less than 1 second
log.Error("Same PID but different Starttime. Please report this incident to the Opensnitch developers.")
delete(activePids, k)
pidsCache.delete(int(k))
continue
}
}
activePidsLock.Unlock()
}
}
func findProcessInActivePidsCache(pid uint64) *Process {
activePidsLock.Lock()
defer activePidsLock.Unlock()
if value, ok := activePids[pid]; ok {
return value.Process
}
return nil
}
// AddToActivePidsCache adds the given pid to a list of known processes.
func AddToActivePidsCache(pid uint64, proc *Process) {
data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.FormatUint(pid, 10), "/stat"))
if err != nil {
//most likely the process has quit by now
return
}
startTime, err := strconv.ParseInt(strings.Split(string(data), " ")[21], 10, 64)
if err != nil {
log.Error("Could not find or convert Starttime. This should never happen. Please report this incident to the Opensnitch developers: %v", err)
return
}
activePidsLock.Lock()
activePids[pid] = value{
Process: proc,
Starttime: uint64(startTime),
}
activePidsLock.Unlock()
}
opensnitch-1.6.9/daemon/procmon/activepids_test.go
package procmon
import (
"fmt"
"math/rand"
"os"
"os/exec"
"syscall"
"testing"
"time"
)
//TestMonitorActivePids starts helper processes, adds them to activePids
//and then kills them and checks if monitorActivePids() removed the killed processes
//from activePids
func TestMonitorActivePids(t *testing.T) {
if os.Getenv("helperBinaryMode") == "on" {
//we are in the "helper binary" mode, we were started with helperCmd.Start() (see below)
//do nothing, just wait to be killed
time.Sleep(time.Second * 10)
os.Exit(1) //will never get here; but keep it here just in case
}
//we are in a normal "go test" mode
tmpDir := "/tmp/ostest_" + randString()
os.Mkdir(tmpDir, 0777)
fmt.Println("tmp dir", tmpDir)
defer os.RemoveAll(tmpDir)
go MonitorActivePids()
//build a "helper binary" with "go test -c -o /tmp/path" and put it into a tmp dir
helperBinaryPath := tmpDir + "/helper1"
goExecutable, _ := exec.LookPath("go")
cmd := exec.Command(goExecutable, "test", "-c", "-o", helperBinaryPath)
if err := cmd.Run(); err != nil {
t.Error("Error running go test -c", err)
}
var numberOfHelpers = 5
var helperProcs []*Process
//start helper binaries
for i := 0; i < numberOfHelpers; i++ {
var helperCmd *exec.Cmd
helperCmd = &exec.Cmd{
Path: helperBinaryPath,
Args: []string{helperBinaryPath},
Env: []string{"helperBinaryMode=on"},
}
if err := helperCmd.Start(); err != nil {
t.Error("Error starting helper binary", err)
}
go func() {
helperCmd.Wait() //must Wait(), otherwise the helper process becomes a zombie when kill()ed
}()
pid := helperCmd.Process.Pid
proc := NewProcess(pid, helperBinaryPath)
helperProcs = append(helperProcs, proc)
AddToActivePidsCache(uint64(pid), proc)
}
//sleep to make sure all processes started before we proceed
time.Sleep(time.Second * 1)
//make sure all PIDS are in the cache
for i := 0; i < numberOfHelpers; i++ {
proc := helperProcs[i]
pid := proc.ID
foundProc := findProcessInActivePidsCache(uint64(pid))
if foundProc == nil {
t.Error("PID not found among active processes", pid)
}
if proc.Path != foundProc.Path || proc.ID != foundProc.ID {
t.Error("PID or path doesn't match with the found process")
}
}
//kill all helpers except for one
for i := 0; i < numberOfHelpers-1; i++ {
if err := syscall.Kill(helperProcs[i].ID, syscall.SIGTERM); err != nil {
t.Error("error in syscall.Kill", err)
}
}
//give the cache time to remove killed processes
time.Sleep(time.Second * 1)
//make sure only the alive process is in the cache
foundProc := findProcessInActivePidsCache(uint64(helperProcs[numberOfHelpers-1].ID))
if foundProc == nil {
t.Error("last alive PID is not found among active processes", foundProc)
}
if len(activePids) != 1 {
t.Error("more than 1 active PIDs left in cache")
}
}
func randString() string {
rand.Seed(time.Now().UnixNano())
var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
b := make([]rune, 10)
for i := range b {
b[i] = letterRunes[rand.Intn(len(letterRunes))]
}
return string(b)
}
opensnitch-1.6.9/daemon/procmon/audit/client.go
// Package audit reads auditd events from the builtin af_unix plugin, and parses
// the messages in order to proactively monitor pids which make connections.
// Once a connection is made and redirected to us via NFQUEUE, we
// lookup the connection inode in /proc, and add the corresponding PID with all
// the information of the process to a list of known PIDs.
//
// TODO: Prompt the user to allow/deny a connection/program as soon as it's
// started.
//
// Requirements:
// - install auditd and audispd-plugins
// - enable af_unix plugin /etc/audisp/plugins.d/af_unix.conf (active = yes)
// - auditctl -a always,exit -F arch=b64 -S socket,connect,execve -k opensnitchd
// - increase /etc/audisp/audispd.conf q_depth if there are dropped events
// - set write_logs to no if you don't need/want audit logs to be stored on disk.
//
// read messages from the pipe to verify that it's working:
// socat unix-connect:/var/run/audispd_events stdio
//
// Audit event fields:
// https://github.com/linux-audit/audit-documentation/blob/master/specs/fields/field-dictionary.csv
// Record types:
// https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Audit_Record_Types.html
//
// Documentation:
// https://github.com/linux-audit/audit-documentation
package audit
import (
"bufio"
"fmt"
"io"
"net"
"os"
"runtime"
"sort"
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
)
// Event represents an audit event, which in our case can be an event of type
// socket, execve, socketpair or connect.
type Event struct {
Timestamp string // audit(xxxxxxx:nnnn)
Serial string
ProcName string // comm
ProcPath string // exe
ProcCmdLine string // proctitle
ProcDir string // cwd
ProcMode string // mode
TTY string
Pid int
UID int
Gid int
PPid int
EUid int
EGid int
OUid int
OGid int
UserName string // auid
DstHost net.IP
DstPort int
NetFamily string // inet, inet6, local
Success string
INode int
Dev string
Syscall int
Exit int
EventType string
RawEvent string
LastSeen time.Time
}
// MaxEventAge is the maximum number of minutes an audit event is kept without
// new network activity.
const (
MaxEventAge = int(10)
)
var (
// Lock holds a mutex
Lock sync.RWMutex
ourPid = os.Getpid()
// cache of events
events []*Event
eventsCleaner *time.Ticker
eventsCleanerChan = (chan bool)(nil)
// EventChan is an output channel where incoming auditd events will be
// written, once a client has opened it (see StartChannel).
EventChan = (chan Event)(nil)
eventsExitChan = (chan bool)(nil)
auditConn net.Conn
// TODO: we may need arm arch
rule64 = []string{"exit,always", "-F", "arch=b64", "-F", fmt.Sprint("ppid!=", ourPid), "-F", fmt.Sprint("pid!=", ourPid), "-S", "socket,connect", "-k", "opensnitch"}
rule32 = []string{"exit,always", "-F", "arch=b32", "-F", fmt.Sprint("ppid!=", ourPid), "-F", fmt.Sprint("pid!=", ourPid), "-S", "socketcall", "-F", "a0=1", "-k", "opensnitch"}
audispdPath = "/var/run/audispd_events"
)
// OpensnitchRulesKey is the mark we place on every event we are interested in.
const (
OpensnitchRulesKey = "key=\"opensnitch\""
)
// GetEvents returns the list of processes which have opened a connection.
func GetEvents() []*Event {
return events
}
// GetEventByPid returns an event given a pid.
func GetEventByPid(pid int) *Event {
Lock.RLock()
defer Lock.RUnlock()
for _, event := range events {
if pid == event.Pid {
return event
}
}
return nil
}
// sortEvents sorts received events by time and elapsed time since latest network activity.
// newest PIDs will be placed on top of the list.
func sortEvents() {
sort.Slice(events, func(i, j int) bool {
now := time.Now()
elapsedTimeT := now.Sub(events[i].LastSeen)
elapsedTimeU := now.Sub(events[j].LastSeen)
t := events[i].LastSeen.UnixNano()
u := events[j].LastSeen.UnixNano()
return t > u && elapsedTimeT < elapsedTimeU
})
}
// cleanOldEvents deletes the PIDs which do not exist or that are too old to
// live.
// We start searching from the oldest to the newest.
// If the last network activity of a PID has been greater than MaxEventAge,
// then it'll be deleted.
func cleanOldEvents() {
Lock.Lock()
defer Lock.Unlock()
for n := len(events) - 1; n >= 0; n-- {
now := time.Now()
elapsedTime := now.Sub(events[n].LastSeen)
if int(elapsedTime.Minutes()) >= MaxEventAge {
events = append(events[:n], events[n+1:]...)
continue
}
if core.Exists(fmt.Sprint("/proc/", events[n].Pid)) == false {
events = append(events[:n], events[n+1:]...)
}
}
}
func deleteEvent(pid int) {
for n := range events {
if events[n].Pid == pid || events[n].PPid == pid {
deleteEventByIndex(n)
break
}
}
}
func deleteEventByIndex(index int) {
Lock.Lock()
events = append(events[:index], events[index+1:]...)
Lock.Unlock()
}
// AddEvent adds a new event to the list of PIDs which have generated network
// activity.
// If the PID is already in the list, the LastSeen field is updated, to keep
// it alive.
func AddEvent(aevent *Event) {
if aevent == nil {
return
}
Lock.Lock()
defer Lock.Unlock()
for n := 0; n < len(events); n++ {
if events[n].Pid == aevent.Pid && events[n].Syscall == aevent.Syscall {
if aevent.ProcCmdLine != "" || (aevent.ProcCmdLine == events[n].ProcCmdLine) {
events[n] = aevent
}
events[n].LastSeen = time.Now()
sortEvents()
return
}
}
aevent.LastSeen = time.Now()
events = append([]*Event{aevent}, events...)
}
// startEventsCleaner will review if the events in the cache need to be cleaned
// every 5 minutes.
func startEventsCleaner() {
for {
select {
case <-eventsCleanerChan:
goto Exit
case <-eventsCleaner.C:
cleanOldEvents()
}
}
Exit:
log.Debug("audit: cleanerRoutine stopped")
}
func addRules() bool {
r64 := append([]string{"-A"}, rule64...)
r32 := append([]string{"-A"}, rule32...)
_, err64 := core.Exec("auditctl", r64)
_, err32 := core.Exec("auditctl", r32)
if err64 == nil && err32 == nil {
return true
}
log.Error("Error adding audit rules, err32=%v, err64=%v", err32, err64)
return false
}
func configureSyscalls() {
// XXX: what about a i386 process running on a x86_64 system?
if runtime.GOARCH == "386" {
syscallSOCKET = "1"
syscallCONNECT = "3"
syscallSOCKETPAIR = "8"
}
}
func deleteRules() bool {
r64 := []string{"-D", "-k", "opensnitch"}
r32 := []string{"-D", "-k", "opensnitch"}
_, err64 := core.Exec("auditctl", r64)
_, err32 := core.Exec("auditctl", r32)
if err64 == nil && err32 == nil {
return true
}
log.Error("Error deleting audit rules, err32=%v, err64=%v", err32, err64)
return false
}
func checkRules() bool {
// TODO
return true
}
func checkStatus() bool {
// TODO
return true
}
// Reader reads events from audisd af_unix pipe plugin.
// If the auditd daemon is stopped or restarted, the reader handle
// is closed, so we need to re-establish the connection.
func Reader(r io.Reader, eventChan chan<- Event) {
if r == nil {
log.Error("Error reading auditd events. Is auditd running? is af_unix plugin enabled?")
return
}
reader := bufio.NewReader(r)
go startEventsCleaner()
for {
select {
case <-eventsExitChan:
goto Exit
default:
buf, _, err := reader.ReadLine()
if err != nil {
if err == io.EOF {
log.Error("AuditReader: auditd stopped, reconnecting in 30s %s", err)
if newReader, err := reconnect(); err == nil {
reader = bufio.NewReader(newReader)
log.Important("Auditd reconnected, continue reading")
}
continue
}
log.Warning("AuditReader: auditd error %s", err)
break
}
parseEvent(string(buf), eventChan)
}
}
Exit:
log.Debug("audit.Reader() closed")
}
// StartChannel creates a channel to receive events from Audit.
// Launch audit.Reader() in a goroutine:
// go audit.Reader(c, (chan<- audit.Event)(audit.EventChan))
func StartChannel() {
EventChan = make(chan Event, 0)
}
func reconnect() (net.Conn, error) {
deleteRules()
time.Sleep(30 * time.Second)
return connect()
}
func connect() (net.Conn, error) {
addRules()
// TODO: make the unix socket path configurable
return net.Dial("unix", audispdPath)
}
// Stop stops listening for events from auditd and delete the auditd rules.
func Stop() {
if auditConn != nil {
if err := auditConn.Close(); err != nil {
log.Warning("audit.Stop() error closing socket: %v", err)
}
}
if eventsCleaner != nil {
eventsCleaner.Stop()
}
if eventsExitChan != nil {
eventsExitChan <- true
close(eventsExitChan)
}
if eventsCleanerChan != nil {
eventsCleanerChan <- true
close(eventsCleanerChan)
}
deleteRules()
if EventChan != nil {
close(EventChan)
}
}
// Start makes a new connection to the audisp af_unix socket.
func Start() (net.Conn, error) {
auditConn, err := connect()
if err != nil {
log.Error("auditd Start() connection error %v", err)
deleteRules()
return nil, err
}
configureSyscalls()
eventsCleaner = time.NewTicker(time.Minute * 5)
eventsCleanerChan = make(chan bool)
eventsExitChan = make(chan bool)
return auditConn, err
}
opensnitch-1.6.9/daemon/procmon/audit/parse.go
package audit
import (
"encoding/hex"
"fmt"
"net"
"regexp"
"strconv"
"strings"
)
var (
newEvent = false
netEvent = &Event{}
// RegExp for parse audit messages
// https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sec-understanding_audit_log_files
auditRE, _ = regexp.Compile(`([a-zA-Z0-9\-_]+)=([a-zA-Z0-9:'\-\/\"\.\,_\(\)]+)`)
rawEvent = make(map[string]string)
)
// amd64 syscalls definition
// if the platform is not amd64, it's redefined on Start()
var (
syscallSOCKET = "41"
syscallCONNECT = "42"
syscallSOCKETPAIR = "53"
syscallEXECVE = "59"
syscallSOCKETCALL = "102"
)
// /usr/include/x86_64-linux-gnu/bits/socket_type.h
const (
sockSTREAM = "1"
sockDGRAM = "2"
sockRAW = "3"
sockSEQPACKET = "5"
sockPACKET = "10"
// /usr/include/x86_64-linux-gnu/bits/socket.h
pfUNSPEC = "0"
pfLOCAL = "1" // PF_UNIX
pfINET = "2"
pfINET6 = "10"
// /etc/protocols
protoIP = "0"
protoTCP = "6"
protoUDP = "17"
)
// https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Audit_Record_Types.html
const (
AuditTypePROCTITLE = "type=PROCTITLE"
AuditTypeCWD = "type=CWD"
AuditTypePATH = "type=PATH"
AuditTypeEXECVE = "type=EXECVE"
AuditTypeSOCKADDR = "type=SOCKADDR"
AuditTypeSOCKETCALL = "type=SOCKETCALL"
AuditTypeEOE = "type=EOE"
)
var (
syscallSOCKETstr = fmt.Sprint("syscall=", syscallSOCKET)
syscallCONNECTstr = fmt.Sprint("syscall=", syscallCONNECT)
syscallSOCKETPAIRstr = fmt.Sprint("syscall=", syscallSOCKETPAIR)
syscallEXECVEstr = fmt.Sprint("syscall=", syscallEXECVE)
syscallSOCKETCALLstr = fmt.Sprint("syscall=", syscallSOCKETCALL)
)
// parseNetLine parses a SOCKADDR message type of the form:
// saddr string: inet6 host:2001:4860:4860::8888 serv:53
func parseNetLine(line string, decode bool) (family string, dstHost net.IP, dstPort int) {
// 0:4 - type
// 4:8 - port
// 8:16 - ip
switch family := line[0:4]; family {
// local
// case "0100":
// ipv4
case "0200":
octet2 := decodeString(line[4:8])
octet := decodeString(line[8:16])
host := fmt.Sprint(octet[0], ".", octet[1], ".", octet[2], ".", octet[3])
fmt.Printf("dest ip: %s -- %s:%s\n", line[4:8], octet2, host)
// ipv6
//case "0A00":
}
if decode == true {
line = decodeString(line)
}
pieces := strings.Split(line, " ")
family = pieces[0]
if strings.HasPrefix(family, "inet") == false {
return family, dstHost, 0
}
if len(pieces) > 1 && pieces[1][:5] == "host:" {
dstHost = net.ParseIP(strings.Split(pieces[1], "host:")[1])
}
if len(pieces) > 2 && pieces[2][:5] == "serv:" {
_dstPort, err := strconv.Atoi(strings.Split(line, "serv:")[1])
if err != nil {
dstPort = -1
} else {
dstPort = _dstPort
}
}
return family, dstHost, dstPort
}
// decodeString will try to decode a string encoded in hexadecimal.
// If the string can not be decoded, the original string will be returned.
// In that case, usually it means that it's a non-encoded string.
func decodeString(s string) string {
decoded, err := hex.DecodeString(s)
if err != nil {
return s
}
return string(decoded)
}
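decodeString in action: audit hex-encodes fields that contain spaces or special characters, and plain quoted values are returned untouched (the sample values are illustrative):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// Mirror of decodeString above: return the hex-decoded string, or the
// original value when it is not valid hex.
func decodeString(s string) string {
	decoded, err := hex.DecodeString(s)
	if err != nil {
		return s
	}
	return string(decoded)
}

func main() {
	fmt.Println(decodeString("2F7573722F62696E2F6375726C")) // /usr/bin/curl
	fmt.Println(decodeString("\"curl\""))                    // not hex, returned as-is
}
```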
// extractFields parses a raw audit message and extracts all of its fields.
func extractFields(rawMessage string, newEvent *map[string]string) {
Lock.Lock()
defer Lock.Unlock()
if auditRE == nil {
newEvent = nil
return
}
fieldList := auditRE.FindAllStringSubmatch(rawMessage, -1)
if fieldList == nil {
newEvent = nil
return
}
for _, field := range fieldList {
(*newEvent)[field[1]] = field[2]
}
}
// populateEvent populates our Event from a raw parsed message.
func populateEvent(aevent *Event, eventFields *map[string]string) *Event {
if aevent == nil {
return nil
}
Lock.Lock()
defer Lock.Unlock()
for k, v := range *eventFields {
switch k {
//case "a0":
//case "a1":
//case "a2":
case "fam":
if v == "local" {
return nil
}
aevent.NetFamily = v
case "lport":
aevent.DstPort, _ = strconv.Atoi(v)
// TODO
/*case "addr":
fmt.Println("addr: ", v)
case "daddr":
fmt.Println("daddr: ", v)
case "laddr":
aevent.DstHost = net.ParseIP(v)
case "saddr":
parseNetLine(v, true)
fmt.Println("saddr:", v)
*/
case "exe":
aevent.ProcPath = strings.Trim(decodeString(v), "\"")
case "comm":
aevent.ProcName = strings.Trim(decodeString(v), "\"")
// proctitle may be truncated to 128 characters, so don't rely on it; parse the /proc/<pid>/ entries instead
//case "proctitle":
// aevent.ProcCmdLine = strings.Trim(decodeString(v), "\"")
case "tty":
aevent.TTY = v
case "pid":
aevent.Pid, _ = strconv.Atoi(v)
case "ppid":
aevent.PPid, _ = strconv.Atoi(v)
case "uid":
aevent.UID, _ = strconv.Atoi(v)
case "gid":
aevent.Gid, _ = strconv.Atoi(v)
case "success":
aevent.Success = v
case "cwd":
aevent.ProcDir = strings.Trim(decodeString(v), "\"")
case "inode":
aevent.INode, _ = strconv.Atoi(v)
case "dev":
aevent.Dev = v
case "mode":
aevent.ProcMode = v
case "ouid":
aevent.OUid, _ = strconv.Atoi(v)
case "ogid":
aevent.OGid, _ = strconv.Atoi(v)
case "syscall":
aevent.Syscall, _ = strconv.Atoi(v)
case "exit":
aevent.Exit, _ = strconv.Atoi(v)
case "type":
aevent.EventType = v
case "msg":
parts := strings.Split(v[6:], ":")
aevent.Timestamp = parts[0]
aevent.Serial = parts[1][:len(parts[1])-1]
}
}
return aevent
}
// parseEvent parses an auditd event, discards the unwanted ones, and adds
// the ones we're interested in to an array.
// We're only interested in the socket,socketpair,connect and execve syscalls.
// Events from us are excluded.
//
// When we received an event, we parse and add it to the list as soon as we can.
// If the next messages of the set have additional information, we update the
// event.
func parseEvent(rawMessage string, eventChan chan<- Event) {
if newEvent == false && strings.Index(rawMessage, OpensnitchRulesKey) == -1 {
return
}
aEvent := make(map[string]string)
if strings.Index(rawMessage, syscallSOCKETstr) != -1 ||
strings.Index(rawMessage, syscallCONNECTstr) != -1 ||
strings.Index(rawMessage, syscallSOCKETPAIRstr) != -1 ||
strings.Index(rawMessage, syscallEXECVEstr) != -1 ||
strings.Index(rawMessage, syscallSOCKETCALLstr) != -1 {
extractFields(rawMessage, &aEvent)
if aEvent == nil {
return
}
newEvent = true
netEvent = &Event{}
netEvent = populateEvent(netEvent, &aEvent)
AddEvent(netEvent)
} else if newEvent == true && (strings.Index(rawMessage, AuditTypePROCTITLE) != -1 ||
strings.Index(rawMessage, AuditTypeCWD) != -1 ||
strings.Index(rawMessage, AuditTypeEXECVE) != -1 ||
strings.Index(rawMessage, AuditTypePATH) != -1) {
extractFields(rawMessage, &aEvent)
if aEvent == nil {
return
}
netEvent = populateEvent(netEvent, &aEvent)
AddEvent(netEvent)
} else if newEvent == true && strings.Index(rawMessage, AuditTypeSOCKADDR) != -1 {
extractFields(rawMessage, &aEvent)
if aEvent == nil {
return
}
netEvent = populateEvent(netEvent, &aEvent)
AddEvent(netEvent)
if EventChan != nil {
eventChan <- *netEvent
}
} else if newEvent == true && strings.Index(rawMessage, AuditTypeEOE) != -1 {
newEvent = false
AddEvent(netEvent)
if EventChan != nil {
eventChan <- *netEvent
}
}
}
opensnitch-1.6.9/daemon/procmon/cache.go
package procmon
import (
"os"
"sort"
"strconv"
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
)
// InodeItem represents an item of the InodesCache.
type InodeItem struct {
FdPath string
LastSeen int64
Pid int
sync.RWMutex
}
// ProcItem represents an item of the pidsCache
type ProcItem struct {
FdPath string
Descriptors []string
LastSeen int64
Pid int
sync.RWMutex
}
// CacheProcs holds the cache of processes that have established connections.
type CacheProcs struct {
items []*ProcItem
sync.RWMutex
}
// CacheInodes holds the cache of Inodes.
// The key is formed as follow:
// inode+srcip+srcport+dstip+dstport
type CacheInodes struct {
items map[string]*InodeItem
sync.RWMutex
}
var (
// cache of inodes, which helps to avoid iterating over all the pidsCache and
// descriptors of /proc/<pid>/fd/
// 15-50us vs 50-80ms
// we hit this cache when:
// - we've blocked a connection and the process retries it several times until it gives up,
// - or when a process timeouts connecting to an IP/domain and it retries it again,
// - or when a process resolves a domain and then connects to the IP.
inodesCache = NewCacheOfInodes()
maxTTL = 3 // maximum 3 minutes of inactivity in cache. Really rare, usually they lasts less than a minute.
// 2nd cache of already known running pids, which also saves time by
// iterating only over a few pids' descriptors, (30us-20ms vs. 50-80ms)
// since it's more likely that most of the connections will be made by the
// same (running) processes.
// The cache is ordered by time, placing in the first places those PIDs with
// active connections.
pidsCache CacheProcs
pidsDescriptorsCache = make(map[int][]string)
cacheTicker = time.NewTicker(2 * time.Minute)
)
// CacheCleanerTask checks periodically if the inodes in the cache must be removed.
func CacheCleanerTask() {
for {
select {
case <-cacheTicker.C:
inodesCache.cleanup()
}
}
}
// NewCacheOfInodes returns a new cache for inodes.
func NewCacheOfInodes() *CacheInodes {
return &CacheInodes{
items: make(map[string]*InodeItem),
}
}
//******************************************************************************
// items of the caches.
func (i *InodeItem) updateTime() {
i.Lock()
i.LastSeen = time.Now().UnixNano()
i.Unlock()
}
func (i *InodeItem) getTime() int64 {
i.RLock()
defer i.RUnlock()
return i.LastSeen
}
func (p *ProcItem) updateTime() {
p.Lock()
p.LastSeen = time.Now().UnixNano()
p.Unlock()
}
func (p *ProcItem) updateDescriptors(descriptors []string) {
p.Lock()
p.Descriptors = descriptors
p.Unlock()
}
//******************************************************************************
// cache of processes
func (c *CacheProcs) add(fdPath string, fdList []string, pid int) {
c.Lock()
defer c.Unlock()
for n := range c.items {
item := c.items[n]
if item == nil {
continue
}
if item.Pid == pid {
item.updateTime()
return
}
}
procItem := &ProcItem{
Pid: pid,
FdPath: fdPath,
Descriptors: fdList,
LastSeen: time.Now().UnixNano(),
}
c.setItems([]*ProcItem{procItem}, c.items)
}
func (c *CacheProcs) sort(pid int) {
item := c.getItem(0)
if item != nil && item.Pid == pid {
return
}
c.Lock()
defer c.Unlock()
sort.Slice(c.items, func(i, j int) bool {
// most recently seen first; the comparator must be strict for sort.Slice
return c.items[i].LastSeen > c.items[j].LastSeen
})
}
func (c *CacheProcs) delete(pid int) {
c.Lock()
defer c.Unlock()
for n, procItem := range c.items {
if procItem.Pid == pid {
c.deleteItem(n)
inodesCache.delete(pid)
break
}
}
}
func (c *CacheProcs) deleteItem(pos int) {
nItems := len(c.items)
if pos < nItems {
c.setItems(c.items[:pos], c.items[pos+1:])
}
}
func (c *CacheProcs) setItems(newItems []*ProcItem, oldItems []*ProcItem) {
c.items = append(newItems, oldItems...)
}
func (c *CacheProcs) getItem(index int) *ProcItem {
c.RLock()
defer c.RUnlock()
if index >= len(c.items) {
return nil
}
return c.items[index]
}
func (c *CacheProcs) getItems() []*ProcItem {
return c.items
}
func (c *CacheProcs) countItems() int {
c.RLock()
defer c.RUnlock()
return len(c.items)
}
// loop over the processes that have generated connections
func (c *CacheProcs) getPid(inode int, inodeKey string, expect string) (int, int) {
c.Lock()
defer c.Unlock()
for n, procItem := range c.items {
if procItem == nil {
continue
}
if idxDesc, _ := getPidDescriptorsFromCache(procItem.FdPath, inodeKey, expect, &procItem.Descriptors, procItem.Pid); idxDesc != -1 {
procItem.updateTime()
return procItem.Pid, n
}
descriptors := lookupPidDescriptors(procItem.FdPath, procItem.Pid)
if descriptors == nil {
c.deleteItem(n)
continue
}
procItem.updateDescriptors(descriptors)
if idxDesc, _ := getPidDescriptorsFromCache(procItem.FdPath, inodeKey, expect, &descriptors, procItem.Pid); idxDesc != -1 {
procItem.updateTime()
return procItem.Pid, n
}
}
return -1, -1
}
//******************************************************************************
// cache of inodes
func (i *CacheInodes) add(key, descLink string, pid int) {
i.Lock()
defer i.Unlock()
if descLink == "" {
descLink = core.ConcatStrings("/proc/", strconv.Itoa(pid), "/exe")
}
i.items[key] = &InodeItem{
FdPath: descLink,
Pid: pid,
LastSeen: time.Now().UnixNano(),
}
}
func (i *CacheInodes) delete(pid int) {
i.Lock()
defer i.Unlock()
for k, inodeItem := range i.items {
if inodeItem.Pid == pid {
delete(i.items, k)
}
}
}
func (i *CacheInodes) getPid(inodeKey string) int {
if item, ok := i.isInCache(inodeKey); ok {
// sometimes the process may have disappeared at this point
if _, err := os.Lstat(item.FdPath); err == nil {
item.updateTime()
return item.Pid
}
pidsCache.delete(item.Pid)
i.delItem(inodeKey)
}
return -1
}
func (i *CacheInodes) delItem(inodeKey string) {
i.Lock()
defer i.Unlock()
delete(i.items, inodeKey)
}
func (i *CacheInodes) getItem(inodeKey string) *InodeItem {
i.RLock()
defer i.RUnlock()
return i.items[inodeKey]
}
func (i *CacheInodes) getItems() map[string]*InodeItem {
i.RLock()
defer i.RUnlock()
return i.items
}
func (i *CacheInodes) isInCache(inodeKey string) (*InodeItem, bool) {
i.RLock()
defer i.RUnlock()
if item, found := i.items[inodeKey]; found {
return item, true
}
return nil, false
}
func (i *CacheInodes) cleanup() {
now := time.Now()
i.Lock()
defer i.Unlock()
for k := range i.items {
if i.items[k] == nil {
continue
}
lastSeen := now.Sub(
time.Unix(0, i.items[k].getTime()),
)
if core.Exists(i.items[k].FdPath) == false || int(lastSeen.Minutes()) > maxTTL {
delete(i.items, k)
}
}
}
func getPidDescriptorsFromCache(fdPath, inodeKey, expect string, descriptors *[]string, pid int) (int, *[]string) {
for fdIdx := 0; fdIdx < len(*descriptors); fdIdx++ {
descLink := core.ConcatStrings(fdPath, (*descriptors)[fdIdx])
if link, err := os.Readlink(descLink); err == nil && link == expect {
if fdIdx > 0 {
// reordering helps to reduce look up times by a factor of 10.
fd := (*descriptors)[fdIdx]
*descriptors = append((*descriptors)[:fdIdx], (*descriptors)[fdIdx+1:]...)
*descriptors = append([]string{fd}, *descriptors...)
}
if _, ok := inodesCache.isInCache(inodeKey); ok {
inodesCache.add(inodeKey, descLink, pid)
}
return fdIdx, descriptors
}
}
return -1, descriptors
}
opensnitch-1.6.9/daemon/procmon/cache_test.go 0000664 0000000 0000000 00000006605 15003540030 0021247 0 ustar 00root root 0000000 0000000 package procmon
import (
"fmt"
"testing"
"time"
)
func TestCacheProcs(t *testing.T) {
fdList := []string{"0", "1", "2"}
pidsCache.add(fmt.Sprint("/proc/", myPid, "/fd/"), fdList, myPid)
t.Log("Pids in cache: ", pidsCache.countItems())
t.Run("Test addProcEntry", func(t *testing.T) {
if pidsCache.countItems() != 1 {
t.Error("pidsCache should be 1")
}
})
oldPid := pidsCache.getItem(0)
pidsCache.add(fmt.Sprint("/proc/", myPid, "/fd/"), fdList, myPid)
t.Run("Test addProcEntry update", func(t *testing.T) {
if pidsCache.countItems() != 1 {
t.Error("pidsCache should still be 1!", pidsCache)
}
oldTime := time.Unix(0, oldPid.LastSeen)
newTime := time.Unix(0, pidsCache.getItem(0).LastSeen)
if oldTime.Equal(newTime) == false {
t.Error("pidsCache, time not updated: ", oldTime, newTime)
}
})
pidsCache.add("/proc/2/fd", fdList, 2)
pidsCache.delete(2)
t.Run("Test deleteProcEntry", func(t *testing.T) {
if pidsCache.countItems() != 1 {
t.Error("pidsCache should be 1:", pidsCache.countItems())
}
})
pid, _ := pidsCache.getPid(0, "", "/dev/null")
t.Run("Test getPidFromCache", func(t *testing.T) {
if pid != myPid {
t.Error("pid not found in cache", pidsCache.countItems())
}
})
// should not crash, and the number of items should still be 1
pidsCache.deleteItem(1)
t.Run("Test deleteItem check bounds", func(t *testing.T) {
if pidsCache.countItems() != 1 {
t.Error("deleteItem check bounds error", pidsCache.countItems())
}
})
pidsCache.deleteItem(0)
t.Run("Test deleteItem", func(t *testing.T) {
if pidsCache.countItems() != 0 {
t.Error("deleteItem error", pidsCache.countItems())
}
})
t.Log("items in cache:", pidsCache.countItems())
// the key of an inodeCache entry is formed as: inodeNumber + srcIP + srcPort + dstIP + dstPort
inodeKey := "000000000127.0.0.144444127.0.0.153"
// add() expects a path to the inode fd (/proc/<pid>/fd/12345), but as getPid() will check the path in order to retrieve the pid,
// we just set it to "" and it'll use /proc/<pid>/exe
inodesCache.add(inodeKey, "", myPid)
t.Run("Test addInodeEntry", func(t *testing.T) {
if _, found := inodesCache.items[inodeKey]; !found {
t.Error("inodesCache, inode not added:", len(inodesCache.items), inodesCache.items)
}
})
pid = inodesCache.getPid(inodeKey)
t.Run("Test getPidByInodeFromCache", func(t *testing.T) {
if pid != myPid {
t.Error("inode not found in cache", pid, inodeKey, len(inodesCache.items), inodesCache.items)
}
})
// should delete all inodes of a pid
inodesCache.delete(myPid)
t.Run("Test deleteInodeEntry", func(t *testing.T) {
if _, found := inodesCache.items[inodeKey]; found {
t.Error("inodesCache, key found in cache but it should not exist", inodeKey, len(inodesCache.items), inodesCache.items)
}
})
}
// Benchmark getPidDescriptorsFromCache() descriptors (inodes) reordering.
// When an inode (descriptor) is found, it's pushed to the top of the list,
// so the next time we look for it the lookup costs ~10x less.
// Without reordering, inode 0 would always be found at the 10th position,
// taking an average of 100us instead of 30us.
// Benchmark results with reordering: ~5600ns/op, without: ~56000ns/op.
func BenchmarkGetPid(b *testing.B) {
fdList := []string{"10", "9", "8", "7", "6", "5", "4", "3", "2", "1", "0"}
pidsCache.add(fmt.Sprint("/proc/", myPid, "/fd/"), fdList, myPid)
for i := 0; i < b.N; i++ {
pidsCache.getPid(0, "", "/dev/null")
}
}
opensnitch-1.6.9/daemon/procmon/details.go 0000664 0000000 0000000 00000021252 15003540030 0020565 0 ustar 00root root 0000000 0000000 package procmon
import (
"bufio"
"fmt"
"io/ioutil"
"os"
"regexp"
"strconv"
"strings"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/dns"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/netlink"
)
var socketsRegex, _ = regexp.Compile(`socket:\[([0-9]+)\]`)
// GetInfo collects information of a process.
func (p *Process) GetInfo() error {
if os.Getpid() == p.ID {
return nil
}
// if the PID dir doesn't exist, the process may have exited or be a kernel connection
// XXX: can a kernel connection exist without an entry in ProcFS?
if p.Path == "" && p.IsAlive() == false {
log.Debug("/proc/%d can't be read (%s), process may have exited", p.ID, p.Comm)
// The Comm field shouldn't be empty if the proc monitor method is ebpf or audit.
// If it's proc and the corresponding entry doesn't exist, there's nothing we can
// do to inform the user about this process.
if p.Comm == "" {
return fmt.Errorf("Unable to get process information")
}
}
p.ReadCmdline()
p.ReadComm()
p.ReadCwd()
if err := p.ReadPath(); err != nil {
log.Error("GetInfo() path can't be read")
return err
}
p.ReadEnv()
return nil
}
// GetExtraInfo collects information of a process.
func (p *Process) GetExtraInfo() error {
p.ReadEnv()
p.readDescriptors()
p.readIOStats()
p.readStatus()
return nil
}
// ReadComm reads the comm name from ProcFS /proc/<pid>/comm
func (p *Process) ReadComm() error {
if p.Comm != "" {
return nil
}
data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/comm"))
if err != nil {
return err
}
p.Comm = core.Trim(string(data))
return nil
}
// ReadCwd reads the current working directory name from ProcFS /proc/<pid>/cwd
func (p *Process) ReadCwd() error {
if p.CWD != "" {
return nil
}
link, err := os.Readlink(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/cwd"))
if err != nil {
return err
}
p.CWD = link
return nil
}
// ReadEnv reads and parses the environment variables of a process.
func (p *Process) ReadEnv() {
data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/environ"))
if err != nil {
return
}
for _, s := range strings.Split(string(data), "\x00") {
parts := strings.SplitN(core.Trim(s), "=", 2)
if parts != nil && len(parts) == 2 {
key := core.Trim(parts[0])
val := core.Trim(parts[1])
p.Env[key] = val
}
}
}
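`ReadEnv` parses the NUL-separated contents of `/proc/<pid>/environ` into a key/value map. The parsing step in isolation (a sketch; trimming and the ProcFS read are omitted, and `parseEnviron` is an illustrative name):

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnviron splits NUL-separated "KEY=value" records, as found in
// /proc/<pid>/environ, into a map — the same split ReadEnv performs.
func parseEnviron(data string) map[string]string {
	env := make(map[string]string)
	for _, s := range strings.Split(data, "\x00") {
		// SplitN with n=2 keeps '=' characters inside the value intact.
		if parts := strings.SplitN(s, "=", 2); len(parts) == 2 {
			env[parts[0]] = parts[1]
		}
	}
	return env
}

func main() {
	fmt.Println(parseEnviron("HOME=/root\x00LANG=C\x00"))
}
```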
// ReadPath reads the symbolic link that /proc/<pid>/exe points to.
// Note 1: this link might not exist on the root filesystem, it might
// have been executed from a container, so the real path would be:
// /proc/<pid>/root/<path>
//
// Note 2:
// There are at least 3 things that a (regular) kernel connection meets
// from userspace POV:
// - /proc/<pid>/cmdline and /proc/<pid>/maps are empty
// - /proc/<pid>/exe can't be read
func (p *Process) ReadPath() error {
// avoid rereading the path
if p.Path != "" && core.IsAbsPath(p.Path) {
return nil
}
defer func() {
if p.Path == "" {
// determine if this process might be of a kernel task.
if data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/maps")); err == nil && len(data) == 0 {
p.Path = "Kernel connection"
p.Args = append(p.Args, p.Comm)
return
}
p.Path = p.Comm
}
}()
linkName := core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/exe")
if _, err := os.Lstat(linkName); err != nil {
return err
}
// FIXME: this reading can give error: file name too long
link, err := os.Readlink(linkName)
if err != nil {
return err
}
p.SetPath(link)
return nil
}
// SetPath sets the path of the process, and fixes it if it's needed.
func (p *Process) SetPath(path string) {
p.Path = path
p.CleanPath()
}
// ReadCmdline reads the cmdline of the process from ProcFS /proc/<pid>/cmdline
// This file may be empty if the process is of a kernel task.
// It can also be empty for short-lived processes.
func (p *Process) ReadCmdline() {
if len(p.Args) > 0 {
return
}
if data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/cmdline")); err == nil {
if len(data) == 0 {
return
}
for i, b := range data {
if b == 0x00 {
data[i] = byte(' ')
}
}
args := strings.Split(string(data), " ")
for _, arg := range args {
arg = core.Trim(arg)
if arg != "" {
p.Args = append(p.Args, arg)
}
}
}
p.CleanArgs()
}
// CleanArgs applies fixes on the cmdline arguments.
// - AppImages cmdline reports the executable launched as /proc/self/exe,
// instead of the actual path to the binary.
func (p *Process) CleanArgs() {
if len(p.Args) > 0 && p.Args[0] == ProcSelf {
p.Args[0] = p.Path
}
}
func (p *Process) readDescriptors() {
f, err := os.Open(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/fd/"))
if err != nil {
return
}
fDesc, err := f.Readdir(-1)
f.Close()
p.Descriptors = nil
for _, fd := range fDesc {
tempFd := &procDescriptors{
Name: fd.Name(),
}
if link, err := os.Readlink(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/fd/", fd.Name())); err == nil {
tempFd.SymLink = link
socket := socketsRegex.FindStringSubmatch(link)
if len(socket) > 0 {
socketInfo, err := netlink.GetSocketInfoByInode(socket[1])
if err == nil {
tempFd.SymLink = fmt.Sprintf("socket:[%s] - %d:%s -> %s:%d, state: %s", fd.Name(),
socketInfo.ID.SourcePort,
socketInfo.ID.Source.String(),
dns.HostOr(socketInfo.ID.Destination, socketInfo.ID.Destination.String()),
socketInfo.ID.DestinationPort,
netlink.TCPStatesMap[socketInfo.State])
}
}
if linkInfo, err := os.Lstat(link); err == nil {
tempFd.Size = linkInfo.Size()
tempFd.ModTime = linkInfo.ModTime()
}
}
p.Descriptors = append(p.Descriptors, tempFd)
}
}
func (p *Process) readIOStats() {
f, err := os.Open(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/io"))
if err != nil {
return
}
defer f.Close()
p.IOStats = &procIOstats{}
scanner := bufio.NewScanner(f)
for scanner.Scan() {
s := strings.Split(scanner.Text(), " ")
switch s[0] {
case "rchar:":
p.IOStats.RChar, _ = strconv.ParseInt(s[1], 10, 64)
case "wchar:":
p.IOStats.WChar, _ = strconv.ParseInt(s[1], 10, 64)
case "syscr:":
p.IOStats.SyscallRead, _ = strconv.ParseInt(s[1], 10, 64)
case "syscw:":
p.IOStats.SyscallWrite, _ = strconv.ParseInt(s[1], 10, 64)
case "read_bytes:":
p.IOStats.ReadBytes, _ = strconv.ParseInt(s[1], 10, 64)
case "write_bytes:":
p.IOStats.WriteBytes, _ = strconv.ParseInt(s[1], 10, 64)
}
}
}
func (p *Process) readStatus() {
if data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/status")); err == nil {
p.Status = string(data)
}
if data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/stat")); err == nil {
p.Stat = string(data)
}
if data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/stack")); err == nil {
p.Stack = string(data)
}
if data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/maps")); err == nil {
p.Maps = string(data)
}
if data, err := ioutil.ReadFile(core.ConcatStrings("/proc/", strconv.Itoa(p.ID), "/statm")); err == nil {
p.Statm = &procStatm{}
fmt.Sscanf(string(data), "%d %d %d %d %d %d %d", &p.Statm.Size, &p.Statm.Resident, &p.Statm.Shared, &p.Statm.Text, &p.Statm.Lib, &p.Statm.Data, &p.Statm.Dt)
}
}
// CleanPath applies fixes on the path to the binary:
// - Remove extra characters from the link that it points to.
// When the binary of a running process is deleted, the symlink has the bytes
// " (deleted)" appended to the target.
// - If the path is /proc/self/exe, resolve the symlink that it points to.
func (p *Process) CleanPath() {
// Sometimes the path to the binary reported is the symbolic link of the process itself.
// This is not useful to the user, and besides it's a generic path that can represent
// to any process.
// Therefore we cannot use /proc/self/exe directly, because it resolves to our own process.
if strings.HasPrefix(p.Path, ProcSelf) {
if link, err := os.Readlink(core.ConcatStrings(ProcSelf, "/exe")); err == nil {
p.Path = link
return
}
if len(p.Args) > 0 && p.Args[0] != "" {
p.Path = p.Args[0]
return
}
p.Path = p.Comm
}
pathLen := len(p.Path)
if pathLen >= 10 && p.Path[pathLen-10:] == " (deleted)" {
p.Path = p.Path[:len(p.Path)-10]
}
// We may receive relative paths from kernel, but the path of a process must be absolute
if core.IsAbsPath(p.Path) == false {
if err := p.ReadPath(); err != nil {
log.Debug("CleanPath() error reading process path: %s", err)
return
}
}
}
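The " (deleted)" handling in `CleanPath` above is a plain suffix strip; `strings.TrimSuffix` expresses it directly. A standalone sketch (`stripDeletedSuffix` is a hypothetical helper, not the daemon's API):

```go
package main

import (
	"fmt"
	"strings"
)

// stripDeletedSuffix removes the " (deleted)" marker the kernel appends to
// the /proc/<pid>/exe link target once the binary is unlinked on disk.
// TrimSuffix is a no-op when the suffix is absent.
func stripDeletedSuffix(path string) string {
	return strings.TrimSuffix(path, " (deleted)")
}

func main() {
	fmt.Println(stripDeletedSuffix("/usr/bin/curl (deleted)"))
}
```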
// IsAlive checks if the process is still running
func (p *Process) IsAlive() bool {
return core.Exists(core.ConcatStrings("/proc/", strconv.Itoa(p.ID)))
}
opensnitch-1.6.9/daemon/procmon/ebpf/ 0000775 0000000 0000000 00000000000 15003540030 0017523 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/procmon/ebpf/cache.go 0000664 0000000 0000000 00000007117 15003540030 0021123 0 ustar 00root root 0000000 0000000 package ebpf
import (
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/procmon"
)
// NewExecEvent constructs a new execEvent from the arguments.
func NewExecEvent(pid, ppid, uid uint32, path string, comm [16]byte) *execEvent {
ev := &execEvent{
Type: EV_TYPE_EXEC,
PID: pid,
PPID: ppid,
UID: uid,
Comm: comm,
}
length := MaxPathLen
if len(path) < MaxPathLen {
length = len(path)
}
copy(ev.Filename[:], path[:length])
return ev
}
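`NewExecEvent` caps the copied path at `MaxPathLen` before writing it into the fixed-size `Filename` field, truncating anything longer. The bounded copy in isolation (`copyBounded` is an illustrative name; 8 bytes stands in for `MaxPathLen`):

```go
package main

import "fmt"

// copyBounded copies at most max bytes of src into a fixed-size buffer,
// the same truncating copy NewExecEvent performs for the Filename field.
func copyBounded(src string, max int) []byte {
	n := len(src)
	if n > max {
		n = max
	}
	dst := make([]byte, max)
	copy(dst, src[:n])
	return dst
}

func main() {
	fmt.Println(string(copyBounded("/usr/bin/bash", 8)))
}
```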
type execEventItem struct {
Proc procmon.Process
Event execEvent
LastSeen int64
}
type eventsStore struct {
execEvents map[uint32]*execEventItem
sync.RWMutex
}
// NewEventsStore creates a new store of events.
func NewEventsStore() *eventsStore {
return &eventsStore{
execEvents: make(map[uint32]*execEventItem),
}
}
func (e *eventsStore) add(key uint32, event execEvent, proc procmon.Process) {
e.Lock()
defer e.Unlock()
e.execEvents[key] = &execEventItem{
Proc: proc,
Event: event,
}
}
func (e *eventsStore) isInStore(key uint32) (item *execEventItem, found bool) {
e.RLock()
defer e.RUnlock()
item, found = e.execEvents[key]
return
}
func (e *eventsStore) delete(key uint32) {
e.Lock()
defer e.Unlock()
delete(e.execEvents, key)
}
func (e *eventsStore) DeleteOldItems() {
e.Lock()
defer e.Unlock()
for k, item := range e.execEvents {
if item.Proc.IsAlive() == false {
delete(e.execEvents, k)
}
}
}
//-----------------------------------------------------------------------------
type ebpfCacheItem struct {
Key []byte
Proc procmon.Process
LastSeen int64
}
type ebpfCacheType struct {
Items map[interface{}]*ebpfCacheItem
sync.RWMutex
}
var (
maxTTL = 40 // Seconds
maxCacheItems = 5000
ebpfCache *ebpfCacheType
ebpfCacheTicker *time.Ticker
)
// NewEbpfCacheItem creates a new cache item.
func NewEbpfCacheItem(key []byte, proc procmon.Process) *ebpfCacheItem {
return &ebpfCacheItem{
Key: key,
Proc: proc,
LastSeen: time.Now().UnixNano(),
}
}
func (i *ebpfCacheItem) isValid() bool {
lastSeen := time.Now().Sub(
time.Unix(0, i.LastSeen),
)
return int(lastSeen.Seconds()) < maxTTL
}
// NewEbpfCache creates a new cache store.
func NewEbpfCache() *ebpfCacheType {
ebpfCacheTicker = time.NewTicker(1 * time.Minute)
return &ebpfCacheType{
Items: make(map[interface{}]*ebpfCacheItem, 0),
}
}
func (e *ebpfCacheType) addNewItem(key interface{}, itemKey []byte, proc procmon.Process) {
e.Lock()
e.Items[key] = NewEbpfCacheItem(itemKey, proc)
e.Unlock()
}
func (e *ebpfCacheType) isInCache(key interface{}) (item *ebpfCacheItem, found bool) {
leng := e.Len()
e.Lock()
item, found = e.Items[key]
if found {
if item.isValid() {
e.update(key, item)
} else {
found = false
delete(e.Items, key)
}
}
e.Unlock()
if leng > maxCacheItems {
e.DeleteOldItems()
}
return
}
func (e *ebpfCacheType) update(key interface{}, item *ebpfCacheItem) {
item.LastSeen = time.Now().UnixNano()
e.Items[key] = item
}
func (e *ebpfCacheType) Len() int {
e.RLock()
defer e.RUnlock()
return len(e.Items)
}
func (e *ebpfCacheType) DeleteOldItems() {
length := e.Len()
e.Lock()
defer e.Unlock()
for k, item := range e.Items {
if length > maxCacheItems || (item != nil && !item.isValid()) {
delete(e.Items, k)
}
}
}
func (e *ebpfCacheType) delete(key interface{}) {
e.Lock()
defer e.Unlock()
// use the blank identifier: shadowing "key" with the item would make
// the delete below remove nothing.
if _, found := e.Items[key]; found {
delete(e.Items, key)
}
}
func (e *ebpfCacheType) clear() {
if e == nil {
return
}
e.Lock()
defer e.Unlock()
for k := range e.Items {
delete(e.Items, k)
}
if ebpfCacheTicker != nil {
ebpfCacheTicker.Stop()
}
}
opensnitch-1.6.9/daemon/procmon/ebpf/debug.go 0000664 0000000 0000000 00000005151 15003540030 0021142 0 ustar 00root root 0000000 0000000 package ebpf
import (
"fmt"
"os/exec"
"strconv"
"syscall"
"unsafe"
"github.com/evilsocket/opensnitch/daemon/log"
daemonNetlink "github.com/evilsocket/opensnitch/daemon/netlink"
elf "github.com/iovisor/gobpf/elf"
)
// print map contents. used only for debugging
func dumpMap(bpfmap *elf.Map, isIPv6 bool) {
var lookupKey []byte
var nextKey []byte
var value []byte
if !isIPv6 {
lookupKey = make([]byte, 12)
nextKey = make([]byte, 12)
} else {
lookupKey = make([]byte, 36)
nextKey = make([]byte, 36)
}
value = make([]byte, 40)
firstrun := true
i := 0
for {
i++
ok, err := m.LookupNextElement(bpfmap, unsafe.Pointer(&lookupKey[0]),
unsafe.Pointer(&nextKey[0]), unsafe.Pointer(&value[0]))
if err != nil {
log.Error("eBPF LookupNextElement error: %v", err)
return
}
if firstrun {
// on first run lookupKey is a dummy, nothing to delete
firstrun = false
copy(lookupKey, nextKey)
continue
}
fmt.Println("key, value", lookupKey, value)
if !ok { //reached end of map
break
}
copy(lookupKey, nextKey)
}
}
// PrintEverything prints all the stats. Used only for debugging.
func PrintEverything() {
bash, _ := exec.LookPath("bash")
//get the number of the first map
out, err := exec.Command(bash, "-c", "bpftool map show | head -n 1 | cut -d ':' -f1").Output()
if err != nil {
fmt.Println("bpftool map show error: ", err)
}
i, _ := strconv.Atoi(string(out[:len(out)-1]))
fmt.Println("i is", i)
//dump all maps for analysis
for j := i; j < i+14; j++ {
_, _ = exec.Command(bash, "-c", "bpftool map dump id "+strconv.Itoa(j)+" > dump"+strconv.Itoa(j)).Output()
}
alreadyEstablished.RLock()
for sock1, v := range alreadyEstablished.TCP {
fmt.Println(*sock1, v)
}
fmt.Println("---------------------")
for sock1, v := range alreadyEstablished.TCPv6 {
fmt.Println(*sock1, v)
}
alreadyEstablished.RUnlock()
fmt.Println("---------------------")
sockets, _ := daemonNetlink.SocketsDump(syscall.AF_INET, syscall.IPPROTO_TCP)
for idx := range sockets {
fmt.Println("socket tcp: ", sockets[idx])
}
fmt.Println("---------------------")
sockets, _ = daemonNetlink.SocketsDump(syscall.AF_INET6, syscall.IPPROTO_TCP)
for idx := range sockets {
fmt.Println("socket tcp6: ", sockets[idx])
}
fmt.Println("---------------------")
sockets, _ = daemonNetlink.SocketsDump(syscall.AF_INET, syscall.IPPROTO_UDP)
for idx := range sockets {
fmt.Println("socket udp: ", sockets[idx])
}
fmt.Println("---------------------")
sockets, _ = daemonNetlink.SocketsDump(syscall.AF_INET6, syscall.IPPROTO_UDP)
for idx := range sockets {
fmt.Println("socket udp6: ", sockets[idx])
}
}
opensnitch-1.6.9/daemon/procmon/ebpf/ebpf.go 0000664 0000000 0000000 00000013322 15003540030 0020767 0 ustar 00root root 0000000 0000000 package ebpf
import (
"context"
"encoding/binary"
"fmt"
"sync"
"syscall"
"unsafe"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
daemonNetlink "github.com/evilsocket/opensnitch/daemon/netlink"
"github.com/evilsocket/opensnitch/daemon/procmon"
elf "github.com/iovisor/gobpf/elf"
"github.com/vishvananda/netlink"
)
// contains pointers to ebpf maps for a given protocol (tcp/udp/v6)
type ebpfMapsForProto struct {
bpfmap *elf.Map
}
// Not in use; ~4usec faster lookup compared to m.LookupElement()
// mimics union bpf_attr's anonymous struct used by BPF_MAP_*_ELEM commands
// from /include/uapi/linux/bpf.h
type bpf_lookup_elem_t struct {
map_fd uint64 //even though in bpf.h its type is __u32, we must make it 8 bytes long
//because "key" is of type __aligned_u64, i.e. "key" must be aligned on an 8-byte boundary
key uintptr
value uintptr
}
type alreadyEstablishedConns struct {
TCP map[*daemonNetlink.Socket]int
TCPv6 map[*daemonNetlink.Socket]int
sync.RWMutex
}
// list of returned errors
const (
NoError = iota
NotAvailable
EventsNotAvailable
)
// Error returns the error type and a message with the explanation
type Error struct {
Msg error
What int
}
var (
m, perfMod *elf.Module
lock = sync.RWMutex{}
mapSize = uint(12000)
ebpfMaps map[string]*ebpfMapsForProto
modulesPath string
// connections that were already established when opensnitch started
alreadyEstablished = alreadyEstablishedConns{
TCP: make(map[*daemonNetlink.Socket]int),
TCPv6: make(map[*daemonNetlink.Socket]int),
}
ctxTasks context.Context
cancelTasks context.CancelFunc
running = false
maxKernelEvents = 32768
kernelEvents = make(chan interface{}, maxKernelEvents)
// list of local addresses of this machine
localAddresses = make(map[string]netlink.Addr)
hostByteOrder binary.ByteOrder
)
// Start installs ebpf kprobes
func Start(modPath string) *Error {
modulesPath = modPath
setRunning(false)
if err := mountDebugFS(); err != nil {
log.Error("ebpf.Start -> mount debugfs error. Report on github please: %s", err)
return &Error{
fmt.Errorf("ebpf.Start: mount debugfs error. Report on github please: %s", err),
NotAvailable,
}
}
var err error
m, err = core.LoadEbpfModule("opensnitch.o", modulesPath)
if err != nil {
log.Error("%s", err)
dispatchErrorEvent(fmt.Sprint("[eBPF]: ", err.Error()))
return &Error{
fmt.Errorf("[eBPF] Error loading opensnitch.o: %s", err.Error()),
NotAvailable,
}
}
m.EnableOptionCompatProbe()
// if previous shutdown was unclean, then we must remove the dangling kprobe
// and install it again (close the module and load it again)
if err := m.EnableKprobes(0); err != nil {
m.Close()
if err := m.Load(nil); err != nil {
return &Error{
fmt.Errorf("eBPF failed to load /etc/opensnitchd/opensnitch.o (2): %v", err),
NotAvailable,
}
}
if err := m.EnableKprobes(0); err != nil {
return &Error{
fmt.Errorf("eBPF error when enabling kprobes: %v", err),
NotAvailable,
}
}
}
determineHostByteOrder()
ebpfMaps = map[string]*ebpfMapsForProto{
"tcp": {
bpfmap: m.Map("tcpMap")},
"tcp6": {
bpfmap: m.Map("tcpv6Map")},
"udp": {
bpfmap: m.Map("udpMap")},
"udp6": {
bpfmap: m.Map("udpv6Map")},
}
for prot, mfp := range ebpfMaps {
if mfp.bpfmap == nil {
return &Error{
fmt.Errorf("eBPF module opensnitch.o malformed, bpfmap[%s] nil", prot),
NotAvailable,
}
}
}
ctxTasks, cancelTasks = context.WithCancel(context.Background())
ebpfCache = NewEbpfCache()
initEventsStreamer()
saveEstablishedConnections(uint8(syscall.AF_INET))
if core.IPv6Enabled {
saveEstablishedConnections(uint8(syscall.AF_INET6))
}
go monitorCache()
go monitorMaps()
go monitorLocalAddresses()
go monitorAlreadyEstablished()
setRunning(true)
return nil
}
func saveEstablishedConnections(commDomain uint8) error {
// save already established connections
socketListTCP, err := daemonNetlink.SocketsDump(commDomain, uint8(syscall.IPPROTO_TCP))
if err != nil {
log.Debug("eBPF could not dump TCP (%d) sockets via netlink: %v", commDomain, err)
return err
}
for _, sock := range socketListTCP {
inode := int((*sock).INode)
pid := procmon.GetPIDFromINode(inode, fmt.Sprint(inode,
(*sock).ID.Source, (*sock).ID.SourcePort, (*sock).ID.Destination, (*sock).ID.DestinationPort))
alreadyEstablished.Lock()
alreadyEstablished.TCP[sock] = pid
alreadyEstablished.Unlock()
}
return nil
}
func setRunning(status bool) {
lock.Lock()
defer lock.Unlock()
running = status
}
// Stop stops monitoring connections using kprobes
func Stop() {
lock.RLock()
defer lock.RUnlock()
if running == false {
return
}
cancelTasks()
ebpfCache.clear()
if m != nil {
m.Close()
}
for pm := range perfMapList {
if pm != nil {
pm.PollStop()
}
}
for k, mod := range perfMapList {
if mod != nil {
mod.Close()
delete(perfMapList, k)
}
}
if perfMod != nil {
perfMod.Close()
}
}
// make bpf() syscall with bpf_lookup prepared by the caller
func makeBpfSyscall(bpf_lookup *bpf_lookup_elem_t) uintptr {
BPF_MAP_LOOKUP_ELEM := 1 //cmd number
syscall_BPF := 321 //syscall number
sizeOfStruct := 40 //sizeof bpf_lookup_elem_t struct
r1, _, _ := syscall.Syscall(uintptr(syscall_BPF), uintptr(BPF_MAP_LOOKUP_ELEM),
uintptr(unsafe.Pointer(bpf_lookup)), uintptr(sizeOfStruct))
return r1
}
func dispatchErrorEvent(what string) {
log.Error(what)
dispatchEvent(what)
}
func dispatchEvent(data interface{}) {
if len(kernelEvents) > maxKernelEvents-1 {
log.Debug("kernelEvents queue full (%d), dropping oldest event", len(kernelEvents))
<-kernelEvents
}
select {
case kernelEvents <- data:
default:
}
}
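`dispatchEvent` above applies a drop-oldest backpressure policy: when the buffered channel is full, it discards the oldest event before enqueueing the new one, and the `select` with `default` never blocks the producer. A minimal standalone sketch of the same policy (`offer` is a hypothetical name):

```go
package main

import "fmt"

// offer enqueues v without ever blocking: if the buffered channel is full,
// the oldest queued value is dropped first, as dispatchEvent does.
func offer(ch chan int, v int) {
	if len(ch) == cap(ch) {
		<-ch // drop the oldest event
	}
	select {
	case ch <- v:
	default: // another producer raced us to the last slot; drop v
	}
}

func main() {
	ch := make(chan int, 2)
	offer(ch, 1)
	offer(ch, 2)
	offer(ch, 3) // queue full: 1 is dropped
	fmt.Println(<-ch, <-ch)
}
```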
// Events returns the channel where intercepted kernel events are delivered.
func Events() <-chan interface{} {
return kernelEvents
}
opensnitch-1.6.9/daemon/procmon/ebpf/events.go 0000664 0000000 0000000 00000014362 15003540030 0021364 0 ustar 00root root 0000000 0000000 package ebpf
import (
"bytes"
"encoding/binary"
"fmt"
"os"
"os/signal"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/procmon"
elf "github.com/iovisor/gobpf/elf"
)
// MaxPathLen defines the maximum length of a path, as defined by the kernel:
// https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/limits.h#L13
const MaxPathLen = 4096
// MaxArgs defines the maximum number of arguments allowed
const MaxArgs = 20
// MaxArgLen defines the maximum length of each argument that we keep.
// NOTE: the kernel's limit is 131072 (PAGE_SIZE * 32):
// https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/binfmts.h#L16
const MaxArgLen = 256
// TaskCommLen is the maximum number of characters of the comm field
const TaskCommLen = 16
type execEvent struct {
Type uint64
PID uint32
UID uint32
PPID uint32
RetCode uint32
ArgsCount uint8
ArgsPartial uint8
Filename [MaxPathLen]byte
Args [MaxArgs][MaxArgLen]byte
Comm [TaskCommLen]byte
Pad1 uint16
Pad2 uint32
}
// Struct that holds the metadata of a connection.
// When we receive a new connection, we look for it on the eBPF maps,
// and if it's found, this information is returned.
type networkEventT struct {
Pid uint64
UID uint64
Comm [TaskCommLen]byte
}
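The `Comm` fields above are fixed-size, NUL-padded byte arrays copied straight from kernel structs; turning one into a Go string requires trimming the padding. A sketch of that conversion (`commToString` is an illustrative name; 16 matches `TaskCommLen`):

```go
package main

import (
	"bytes"
	"fmt"
)

// commToString converts a fixed-size, NUL-padded kernel comm field
// (TaskCommLen bytes) into a regular Go string.
func commToString(comm [16]byte) string {
	return string(bytes.TrimRight(comm[:], "\x00"))
}

func main() {
	var c [16]byte
	copy(c[:], "curl")
	fmt.Println(commToString(c))
}
```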
// List of supported events
const (
EV_TYPE_NONE = iota
EV_TYPE_EXEC
EV_TYPE_EXECVEAT
EV_TYPE_FORK
EV_TYPE_SCHED_EXIT
)
var (
execEvents = NewEventsStore()
perfMapList = make(map[*elf.PerfMap]*elf.Module)
// total workers spawned by the different events PerfMaps
eventWorkers = 0
perfMapName = "proc-events"
// default value is 8.
// Not enough to handle high loads such as HTTP downloads, torrent traffic, etc.
// (regular desktop usage)
ringBuffSize = 64 // * PAGE_SIZE (4k usually)
)
func initEventsStreamer() {
elfOpts := make(map[string]elf.SectionParams)
elfOpts["maps/"+perfMapName] = elf.SectionParams{PerfRingBufferPageCount: ringBuffSize}
var err error
perfMod, err = core.LoadEbpfModule("opensnitch-procs.o", modulesPath)
if err != nil {
dispatchErrorEvent(fmt.Sprint("[eBPF events]: ", err))
return
}
perfMod.EnableOptionCompatProbe()
if err = perfMod.Load(elfOpts); err != nil {
dispatchErrorEvent(fmt.Sprint("[eBPF events]: ", err))
return
}
tracepoints := []string{
"tracepoint/sched/sched_process_exit",
"tracepoint/syscalls/sys_enter_execve",
"tracepoint/syscalls/sys_enter_execveat",
"tracepoint/syscalls/sys_exit_execve",
"tracepoint/syscalls/sys_exit_execveat",
//"tracepoint/sched/sched_process_exec",
//"tracepoint/sched/sched_process_fork",
}
// Enable tracepoints first; that way, if the kprobes fail to load we'll still have some events
for _, tp := range tracepoints {
err = perfMod.EnableTracepoint(tp)
if err != nil {
dispatchErrorEvent(fmt.Sprintf("[eBPF events] error enabling tracepoint %s: %s", tp, err))
}
}
if err = perfMod.EnableKprobes(0); err != nil {
// if previous shutdown was unclean, then we must remove the dangling kprobe
// and install it again (close the module and load it again)
perfMod.Close()
if err = perfMod.Load(elfOpts); err != nil {
dispatchErrorEvent(fmt.Sprintf("[eBPF events] failed to load /etc/opensnitchd/opensnitch-procs.o (2): %v", err))
return
}
if err = perfMod.EnableKprobes(0); err != nil {
dispatchErrorEvent(fmt.Sprintf("[eBPF events] error enabling kprobes: %v", err))
}
}
sig := make(chan os.Signal, 1)
signal.Notify(sig, os.Interrupt, os.Kill)
go func(sig chan os.Signal) {
<-sig
}(sig)
eventWorkers = 0
initPerfMap(perfMod)
}
func initPerfMap(mod *elf.Module) {
perfChan := make(chan []byte)
lostEvents := make(chan uint64, 1)
var err error
perfMap, err := elf.InitPerfMap(mod, perfMapName, perfChan, lostEvents)
if err != nil {
dispatchErrorEvent(fmt.Sprintf("[eBPF events] Error initializing eBPF events perfMap: %s", err))
return
}
perfMapList[perfMap] = mod
eventWorkers += 4
for i := 0; i < eventWorkers; i++ {
go streamEventsWorker(i, perfChan, lostEvents, kernelEvents, execEvents)
}
perfMap.PollStart()
}
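The worker fan-out above — several goroutines draining a single perf channel — is a standard Go pattern and can be sketched standalone. This is a minimal sketch; `drain` is an illustrative name, not part of the daemon:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// drain spawns `workers` goroutines over one channel, the same fan-out shape
// initPerfMap uses when it starts eventWorkers goroutines over a single perfChan.
// Each event is consumed by exactly one worker.
func drain(events <-chan int, workers int) int64 {
	var processed int64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range events {
				atomic.AddInt64(&processed, 1)
			}
		}()
	}
	wg.Wait() // workers exit when the channel is closed
	return processed
}

func main() {
	events := make(chan int)
	go func() {
		for i := 0; i < 100; i++ {
			events <- i
		}
		close(events)
	}()
	fmt.Println(drain(events, 4)) // → 100
}
```

Closing the channel is what lets the workers terminate; in the daemon the equivalent shutdown goes through `ctxTasks.Done()` instead.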
func streamEventsWorker(id int, chn chan []byte, lost chan uint64, kernelEvents chan interface{}, execEvents *eventsStore) {
var event execEvent
errors := 0
maxErrors := 20 // we should have no errors.
tooManyErrors := func() bool {
errors++
if errors > maxErrors {
log.Error("[eBPF events] too many errors parsing events from kernel")
log.Error("verify that you're using the correct eBPF modules for this version (%s)", core.Version)
return true
}
return false
}
for {
select {
case <-ctxTasks.Done():
goto Exit
case l := <-lost:
log.Debug("Lost ebpf events: %d", l)
case d := <-chn:
if err := binary.Read(bytes.NewBuffer(d), hostByteOrder, &event); err != nil {
log.Debug("[eBPF events #%d] error: %s", id, err)
if tooManyErrors() {
goto Exit
}
} else {
switch event.Type {
case EV_TYPE_EXEC, EV_TYPE_EXECVEAT:
if _, found := execEvents.isInStore(event.PID); found {
log.Debug("[eBPF event inCache] -> %d", event.PID)
continue
}
proc := event2process(&event)
if proc == nil {
continue
}
execEvents.add(event.PID, event, *proc)
case EV_TYPE_SCHED_EXIT:
log.Debug("[eBPF exit event] -> %d", event.PID)
if _, found := execEvents.isInStore(event.PID); found {
log.Debug("[eBPF exit event inCache] -> %d", event.PID)
execEvents.delete(event.PID)
}
}
}
}
}
Exit:
log.Debug("perfMap goroutine exited #%d", id)
}
func event2process(event *execEvent) (proc *procmon.Process) {
proc = procmon.NewProcess(int(event.PID), byteArrayToString(event.Comm[:]))
// trust process path received from kernel
path := byteArrayToString(event.Filename[:])
if path != "" {
proc.SetPath(path)
} else {
if proc.ReadPath() != nil {
return nil
}
}
proc.ReadCwd()
proc.ReadEnv()
proc.UID = int(event.UID)
proc.PPID = int(event.PPID)
if event.ArgsPartial == 0 {
for i := 0; i < int(event.ArgsCount); i++ {
proc.Args = append(proc.Args, byteArrayToString(event.Args[i][:]))
}
proc.CleanArgs()
} else {
proc.ReadCmdline()
}
log.Debug("[eBPF exec event] ppid: %d, pid: %d, %s -> %s", event.PPID, event.PID, proc.Path, proc.Args)
return
}
// daemon/procmon/ebpf/find.go
package ebpf
import (
"encoding/binary"
"fmt"
"net"
"strconv"
"unsafe"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
daemonNetlink "github.com/evilsocket/opensnitch/daemon/netlink"
"github.com/evilsocket/opensnitch/daemon/procmon"
)
// GetPid looks up process pid in a bpf map.
// If it's not found, it searches already-established TCP connections.
// Returns the process if found.
// Additionally, if the process has been found by swapping fields, it'll return
// a flag indicating it.
func GetPid(proto string, srcPort uint, srcIP net.IP, dstIP net.IP, dstPort uint) (*procmon.Process, bool, error) {
if proc := getPidFromEbpf(proto, srcPort, srcIP, dstIP, dstPort); proc != nil {
return proc, false, nil
}
if findAddressInLocalAddresses(dstIP) {
// FIXME: systemd-resolved sometimes makes a TCP Fast Open connection to a DNS server (8.8.8.8 on my machine)
// and we get a packet here with **source** (not destination!) IP 8.8.8.8.
// Maybe it's an in-kernel response with a spoofed IP, caused by resolved's TCP Fast Open packet, not the response.
// Another scenario, when systemd-resolved or dnscrypt-proxy is used, is that every outbound connection has
// the fields swapped:
// 443:public-ip -> local-ip:local-port , as if it were a response (but it's not).
// Swapping the connection fields helps to identify the connection + pid + process, and to continue working as usual
// when systemd-resolved is being used. But we should understand why this is happening.
if proc := getPidFromEbpf(proto, dstPort, dstIP, srcIP, srcPort); proc != nil {
return proc, true, fmt.Errorf("[ebpf conn] FIXME: found swapping fields, systemd-resolved is that you? set DNS=x.x.x.x to your DNS server in /etc/systemd/resolved.conf to workaround this problem")
}
return nil, false, fmt.Errorf("[ebpf conn] unknown source IP: %s", srcIP)
}
//check if it comes from already established TCP
if proto == "tcp" || proto == "tcp6" {
if pid, uid, err := findInAlreadyEstablishedTCP(proto, srcPort, srcIP, dstIP, dstPort); err == nil {
proc := procmon.NewProcess(pid, "")
proc.GetInfo()
proc.UID = uid
return proc, false, nil
}
}
//using netlink.GetSocketInfo to check if UID is 0 (in-kernel connection)
if uid, _ := daemonNetlink.GetSocketInfo(proto, srcIP, srcPort, dstIP, dstPort); uid == 0 {
return nil, false, nil
}
return nil, false, nil
}
// getPidFromEbpf looks up a connection in bpf map and returns PID if found
// the lookup keys and values are defined in opensnitch.c, e.g.:
//
// struct tcp_key_t {
// u16 sport;
// u32 daddr;
// u16 dport;
// u32 saddr;
// }__attribute__((packed));
// struct tcp_value_t{
// u64 pid;
// u64 uid;
// u64 counter;
// char[TASK_COMM_LEN] comm; // 16 bytes
// }__attribute__((packed));
func getPidFromEbpf(proto string, srcPort uint, srcIP net.IP, dstIP net.IP, dstPort uint) (proc *procmon.Process) {
// Some connections, like broadcasts, are only seen in eBPF once,
// but some applications send 1 connection per network interface.
// If we delete the eBPF entry the first time we see it, we won't find
// the connection the next times.
delItemIfFound := true
_, ok := ebpfMaps[proto]
if !ok {
return
}
var value networkEventT
var key []byte
var isIP4 bool = (proto == "tcp") || (proto == "udp") || (proto == "udplite")
if isIP4 {
key = make([]byte, 12)
copy(key[2:6], dstIP)
binary.BigEndian.PutUint16(key[6:8], uint16(dstPort))
copy(key[8:12], srcIP)
} else { // IPv6
key = make([]byte, 36)
copy(key[2:18], dstIP)
binary.BigEndian.PutUint16(key[18:20], uint16(dstPort))
copy(key[20:36], srcIP)
}
hostByteOrder.PutUint16(key[0:2], uint16(srcPort))
k := core.ConcatStrings(
proto,
strconv.FormatUint(uint64(srcPort), 10),
srcIP.String(),
dstIP.String(),
strconv.FormatUint(uint64(dstPort), 10))
if cacheItem, isInCache := ebpfCache.isInCache(k); isInCache {
// should we re-read the info?
// environ vars might have changed
//proc.GetInfo()
deleteEbpfEntry(proto, unsafe.Pointer(&key[0]))
proc = &cacheItem.Proc
log.Debug("[ebpf conn] in cache: %s, %d -> %s", k, proc.ID, proc.Path)
return
}
err := m.LookupElement(ebpfMaps[proto].bpfmap, unsafe.Pointer(&key[0]), unsafe.Pointer(&value))
if err != nil {
// key not found
// sometimes srcIP is 0.0.0.0. Happens especially with UDP sendto()
// for example: 57621:10.0.3.1 -> 10.0.3.255:57621 , reported as: 0.0.0.0 -> 10.0.3.255
if isIP4 {
zeroes := make([]byte, 4)
copy(key[8:12], zeroes)
} else {
zeroes := make([]byte, 16)
copy(key[20:36], zeroes)
}
err = m.LookupElement(ebpfMaps[proto].bpfmap, unsafe.Pointer(&key[0]), unsafe.Pointer(&value))
if err == nil {
delItemIfFound = false
}
}
if err != nil && proto == "udp" && srcIP.String() == dstIP.String() {
// very rarely I see this connection. It has srcIP and dstIP == 0.0.0.0 in ebpf map
// it is a localhost to localhost connection
// srcIP was already set to 0, set dstIP to zero also
// TODO try to reproduce it and look for srcIP/dstIP in other kernel structures
zeroes := make([]byte, 4)
copy(key[2:6], zeroes)
err = m.LookupElement(ebpfMaps[proto].bpfmap, unsafe.Pointer(&key[0]), unsafe.Pointer(&value))
}
if err != nil {
// key not found in bpf maps
return nil
}
proc = findConnProcess(&value, k)
log.Debug("[ebpf conn] adding item to cache: %s", k)
ebpfCache.addNewItem(k, key, *proc)
if delItemIfFound {
deleteEbpfEntry(proto, unsafe.Pointer(&key[0]))
}
return
}
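The packed key built above can be sketched as a standalone helper. This is a minimal sketch: `binary.LittleEndian` stands in for the detected `hostByteOrder` (an x86 assumption), and `buildTCPv4Key`/`keyHex` are illustrative names, not part of the daemon:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

// buildTCPv4Key mirrors the 12-byte packed tcp_key_t layout used by the map
// lookup above: sport (2 bytes, host order) | daddr (4) | dport (2, big
// endian) | saddr (4). binary.LittleEndian stands in for the detected
// hostByteOrder (an x86 assumption).
func buildTCPv4Key(srcPort uint16, srcIP, dstIP net.IP, dstPort uint16) []byte {
	key := make([]byte, 12)
	binary.LittleEndian.PutUint16(key[0:2], srcPort)
	copy(key[2:6], dstIP.To4())
	binary.BigEndian.PutUint16(key[6:8], dstPort)
	copy(key[8:12], srcIP.To4())
	return key
}

// keyHex is a small convenience wrapper for inspecting a key.
func keyHex(srcPort uint16, src, dst string, dstPort uint16) string {
	return fmt.Sprintf("%x", buildTCPv4Key(srcPort, net.ParseIP(src), net.ParseIP(dst), dstPort))
}

func main() {
	fmt.Println(keyHex(12345, "10.0.0.1", "1.2.3.4", 443)) // → 39300102030401bb0a000001
}
```

The byte layout must match the `__attribute__((packed))` C struct exactly, which is why the ports use explicit byte orders rather than a struct copy.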
// findConnProcess finds the process' details of a connection.
// By default we only receive the PID of the process, so we need to get
// the rest of the details.
// TODO: get the details from kernel, with mm_struct (exe_file, fd_path, etc).
func findConnProcess(value *networkEventT, connKey string) (proc *procmon.Process) {
comm := byteArrayToString(value.Comm[:])
proc = procmon.NewProcess(int(value.Pid), comm)
// Use socket's UID. A process may have dropped privileges.
// This is the UID that we've always used.
proc.UID = int(value.UID)
err := proc.ReadPath()
if ev, found := execEvents.isInStore(uint32(value.Pid)); found {
// use socket's UID. See above why ^
ev.Proc.UID = proc.UID
ev.Proc.ReadCmdline()
// if proc's ReadPath() has been successful, and the path received via the execve tracepoint differs,
// use proc's path.
// Sometimes we receive a wrong/non-existent path from the tracepoint.
// Other times we receive a "helper" that executes the real binary which opens the connection.
// Downside: for execveat() executions we won't display the original binary.
if err == nil && ev.Proc.Path != proc.Path {
proc.ReadCmdline()
ev.Proc.Path = proc.Path
ev.Proc.Args = proc.Args
}
proc = &ev.Proc
log.Debug("[ebpf conn] not in cache, but in execEvents: %s, %d -> %s", connKey, proc.ID, proc.Path)
} else {
log.Debug("[ebpf conn] not in cache, NOR in execEvents: %s, %d -> %s", connKey, proc.ID, proc.Path)
// We'll end here if the events module has not been loaded, or if the process is not in cache.
proc.GetInfo()
execEvents.add(uint32(value.Pid),
*NewExecEvent(uint32(value.Pid), 0, uint32(value.UID), proc.Path, value.Comm),
*proc)
}
return
}
// findInAlreadyEstablishedTCP searches those TCP connections which were already
// established at the time opensnitch started
func findInAlreadyEstablishedTCP(proto string, srcPort uint, srcIP net.IP, dstIP net.IP, dstPort uint) (int, int, error) {
alreadyEstablished.RLock()
defer alreadyEstablished.RUnlock()
var _alreadyEstablished map[*daemonNetlink.Socket]int
if proto == "tcp" {
_alreadyEstablished = alreadyEstablished.TCP
} else if proto == "tcp6" {
_alreadyEstablished = alreadyEstablished.TCPv6
}
for sock, v := range _alreadyEstablished {
if (*sock).ID.SourcePort == uint16(srcPort) && (*sock).ID.Source.Equal(srcIP) &&
(*sock).ID.Destination.Equal(dstIP) && (*sock).ID.DestinationPort == uint16(dstPort) {
return v, int((*sock).UID), nil
}
}
return -1, -1, fmt.Errorf("eBPF inode not found")
}
// findAddressInLocalAddresses returns true if addr is in the list of this machine's addresses
func findAddressInLocalAddresses(addr net.IP) bool {
lock.Lock()
defer lock.Unlock()
_, found := localAddresses[addr.String()]
return found
}
// daemon/procmon/ebpf/monitor.go
package ebpf
import (
"syscall"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
daemonNetlink "github.com/evilsocket/opensnitch/daemon/netlink"
"github.com/vishvananda/netlink"
)
// we need to manually remove old connections from a bpf map
// since when a bpf map is full it doesn't allow any more insertions
func monitorMaps() {
for {
select {
case <-ctxTasks.Done():
goto Exit
default:
time.Sleep(time.Second * 5)
for name := range ebpfMaps {
// using a pointer to the map doesn't delete the items.
// bpftool still counts them.
if items := getItems(name, name == "tcp6" || name == "udp6"); items > 500 {
deleted := deleteOldItems(name, name == "tcp6" || name == "udp6", items/2)
log.Debug("[ebpf] old items deleted: %d", deleted)
}
}
}
}
Exit:
}
func monitorCache() {
for {
select {
case <-ctxTasks.Done():
goto Exit
case <-ebpfCacheTicker.C:
ebpfCache.DeleteOldItems()
execEvents.DeleteOldItems()
}
}
Exit:
}
// maintain a list of this machine's local addresses
func monitorLocalAddresses() {
newAddrChan := make(chan netlink.AddrUpdate)
done := make(chan struct{})
defer close(done)
lock.Lock()
localAddresses = daemonNetlink.GetLocalAddrs()
lock.Unlock()
netlink.AddrSubscribeWithOptions(newAddrChan, done,
netlink.AddrSubscribeOptions{
ErrorCallback: func(err error) {
log.Error("AddrSubscribeWithOptions error: %s", err)
},
ListExisting: true,
})
for {
select {
case <-ctxTasks.Done():
done <- struct{}{}
goto Exit
case addr := <-newAddrChan:
if addr.NewAddr && !findAddressInLocalAddresses(addr.LinkAddress.IP) {
log.Debug("local addr added: %+v\n", addr)
lock.Lock()
localAddresses[addr.LinkAddress.IP.String()] = daemonNetlink.AddrUpdateToAddr(&addr)
lock.Unlock()
} else if !addr.NewAddr {
log.Debug("local addr removed: %+v\n", addr)
lock.Lock()
delete(localAddresses, addr.LinkAddress.IP.String())
lock.Unlock()
}
}
}
Exit:
log.Debug("monitorLocalAddresses exited")
}
// monitorAlreadyEstablished makes sure that when an already-established connection is closed,
// it will be removed from alreadyEstablished. If we don't do this and keep the alreadyEstablished entry forever,
// then after the genuine process quits, a malicious process may reuse PID-srcPort-srcIP-dstPort-dstIP
func monitorAlreadyEstablished() {
tcperr := 0
errLimitExceeded := func() bool {
if tcperr > 100 {
log.Debug("monitorAlreadyEstablished() generated too many errors")
return true
}
tcperr++
return false
}
for {
select {
case <-ctxTasks.Done():
goto Exit
default:
time.Sleep(time.Second * 2)
socketListTCP, err := daemonNetlink.SocketsDump(uint8(syscall.AF_INET), uint8(syscall.IPPROTO_TCP))
if err != nil {
log.Debug("monitorAlreadyEstablished(), error dumping TCP sockets via netlink (%d): %s", tcperr, err)
if errLimitExceeded() {
goto Exit
}
continue
}
alreadyEstablished.Lock()
for aesock := range alreadyEstablished.TCP {
found := false
for _, sock := range socketListTCP {
if daemonNetlink.SocketsAreEqual(aesock, sock) {
found = true
break
}
}
if !found {
delete(alreadyEstablished.TCP, aesock)
}
}
alreadyEstablished.Unlock()
if core.IPv6Enabled {
socketListTCPv6, err := daemonNetlink.SocketsDump(uint8(syscall.AF_INET6), uint8(syscall.IPPROTO_TCP))
if err != nil {
if errLimitExceeded() {
goto Exit
}
log.Debug("monitorAlreadyEstablished(), error dumping TCPv6 sockets via netlink (%d): %s", tcperr, err)
continue
}
alreadyEstablished.Lock()
for aesock := range alreadyEstablished.TCPv6 {
found := false
for _, sock := range socketListTCPv6 {
if daemonNetlink.SocketsAreEqual(aesock, sock) {
found = true
break
}
}
if !found {
delete(alreadyEstablished.TCPv6, aesock)
}
}
alreadyEstablished.Unlock()
}
}
}
Exit:
log.Debug("monitorAlreadyEstablished exited")
}
// daemon/procmon/ebpf/utils.go
package ebpf
import (
"bytes"
"encoding/binary"
"fmt"
"unsafe"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
)
func determineHostByteOrder() {
lock.Lock()
//determine host byte order
buf := [2]byte{}
*(*uint16)(unsafe.Pointer(&buf[0])) = uint16(0xABCD)
switch buf {
case [2]byte{0xCD, 0xAB}:
hostByteOrder = binary.LittleEndian
case [2]byte{0xAB, 0xCD}:
hostByteOrder = binary.BigEndian
default:
log.Error("Could not determine host byte order.")
}
lock.Unlock()
}
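The detection trick above can be packaged as a self-checking sketch. `detectByteOrder` and `roundTrips` are illustrative names, not the daemon's API:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"unsafe"
)

// detectByteOrder replicates the trick above: write a known uint16 through an
// unsafe pointer and check which byte lands first in memory.
func detectByteOrder() binary.ByteOrder {
	buf := [2]byte{}
	*(*uint16)(unsafe.Pointer(&buf[0])) = uint16(0xABCD)
	if buf == [2]byte{0xCD, 0xAB} {
		return binary.LittleEndian
	}
	return binary.BigEndian
}

// roundTrips verifies the detected order is self-consistent: a value encoded
// with it must match a raw in-memory read, regardless of the host's endianness.
func roundTrips() bool {
	b := make([]byte, 2)
	detectByteOrder().PutUint16(b, 0x1122)
	return *(*uint16)(unsafe.Pointer(&b[0])) == 0x1122
}

func main() {
	fmt.Println(detectByteOrder(), roundTrips()) // the bool is always true
}
```

The round-trip check is endianness-agnostic, so it passes on both little- and big-endian hosts.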
func mountDebugFS() error {
debugfsPath := "/sys/kernel/debug/"
kprobesPath := fmt.Sprint(debugfsPath, "tracing/kprobe_events")
if core.Exists(kprobesPath) == false {
if _, err := core.Exec("mount", []string{"-t", "debugfs", "none", debugfsPath}); err != nil {
log.Warning("eBPF debugfs error: %s", err)
return fmt.Errorf(`%s
Unable to access debugfs filesystem, needed for eBPF to work, likely caused by a hardened or customized kernel.
Change process monitor method to 'proc' to stop receiving this alert
`, err)
}
}
return nil
}
// Trim null characters, and return the left part of the byte array.
// NOTE: using BPF_MAP_TYPE_PERCPU_ARRAY does not initialize strings to 0,
// so we end up receiving events as follows:
// event.filename -> /usr/bin/iptables
// event.filename -> /bin/lsn/iptables (should be /bin/ls)
// It turns out, that there's a 0x00 character between "/bin/ls" and "n/iptables":
// [47 115 98 105 110 47 100 117 109 112 101 50 102 115 0 0 101 115
// ^^^
// TODO: investigate if there's any way of initializing the struct to 0
// like using __builtin_memset() (can't be used with PERCPU apparently)
func byteArrayToString(arr []byte) string {
temp := bytes.SplitAfter(arr, []byte("\x00"))[0]
return string(bytes.Trim(temp[:], "\x00"))
}
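A standalone sketch of the same NUL-trimming, using the garbled-path example from the comment above (`cString` is an illustrative name):

```go
package main

import (
	"bytes"
	"fmt"
)

// cString returns the bytes up to the first NUL, mirroring byteArrayToString:
// stale data after the terminator (the PERCPU-array quirk described above)
// is discarded.
func cString(arr []byte) string {
	first := bytes.SplitAfter(arr, []byte("\x00"))[0]
	return string(bytes.Trim(first, "\x00"))
}

func main() {
	// "/bin/ls" overwrote a longer stale entry; "n/iptables" is leftover garbage
	// after the terminator.
	fmt.Println(cString([]byte("/bin/ls\x00n/iptables\x00"))) // → /bin/ls
}
```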
func deleteEbpfEntry(proto string, key unsafe.Pointer) bool {
if err := m.DeleteElement(ebpfMaps[proto].bpfmap, key); err != nil {
log.Debug("error deleting ebpf entry: %s", err)
return false
}
return true
}
func getItems(proto string, isIPv6 bool) (items uint) {
isDup := make(map[string]uint8)
var lookupKey []byte
var nextKey []byte
if !isIPv6 {
lookupKey = make([]byte, 12)
nextKey = make([]byte, 12)
} else {
lookupKey = make([]byte, 36)
nextKey = make([]byte, 36)
}
var value networkEventT
firstrun := true
for {
mp, ok := ebpfMaps[proto]
if !ok {
return
}
ok, err := m.LookupNextElement(mp.bpfmap, unsafe.Pointer(&lookupKey[0]),
unsafe.Pointer(&nextKey[0]), unsafe.Pointer(&value))
if !ok || err != nil { //reached end of map
log.Debug("[ebpf] %s map: %d active items", proto, items)
return
}
if firstrun {
// on first run lookupKey is a dummy, nothing to delete
firstrun = false
copy(lookupKey, nextKey)
continue
}
if counter, duped := isDup[string(lookupKey)]; duped && counter > 1 {
deleteEbpfEntry(proto, unsafe.Pointer(&lookupKey[0]))
continue
}
isDup[string(lookupKey)]++
copy(lookupKey, nextKey)
items++
}
return items
}
// deleteOldItems deletes map elements in order to keep the maps below maximum capacity.
// If the ebpf maps are full they don't allow any more insertions, and we end up losing events.
func deleteOldItems(proto string, isIPv6 bool, maxToDelete uint) (deleted uint) {
isDup := make(map[string]uint8)
var lookupKey []byte
var nextKey []byte
if !isIPv6 {
lookupKey = make([]byte, 12)
nextKey = make([]byte, 12)
} else {
lookupKey = make([]byte, 36)
nextKey = make([]byte, 36)
}
var value networkEventT
firstrun := true
i := uint(0)
for {
i++
if i > maxToDelete {
return
}
ok, err := m.LookupNextElement(ebpfMaps[proto].bpfmap, unsafe.Pointer(&lookupKey[0]),
unsafe.Pointer(&nextKey[0]), unsafe.Pointer(&value))
if !ok || err != nil { //reached end of map
return
}
if _, duped := isDup[string(lookupKey)]; duped {
if deleteEbpfEntry(proto, unsafe.Pointer(&lookupKey[0])) {
deleted++
copy(lookupKey, nextKey)
continue
}
return
}
if firstrun {
// on first run lookupKey is a dummy, nothing to delete
firstrun = false
copy(lookupKey, nextKey)
continue
}
if !deleteEbpfEntry(proto, unsafe.Pointer(&lookupKey[0])) {
return
}
deleted++
isDup[string(lookupKey)]++
copy(lookupKey, nextKey)
}
return
}
// daemon/procmon/find.go
package procmon
import (
"os"
"sort"
"strconv"
"github.com/evilsocket/opensnitch/daemon/core"
)
func sortPidsByTime(fdList []os.FileInfo) []os.FileInfo {
sort.Slice(fdList, func(i, j int) bool {
t := fdList[i].ModTime().UnixNano()
u := fdList[j].ModTime().UnixNano()
return t > u
})
return fdList
}
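The sort can be exercised without touching /proc by faking os.FileInfo. All names here are illustrative stand-ins:

```go
package main

import (
	"fmt"
	"os"
	"sort"
	"time"
)

// fakeInfo implements just enough of os.FileInfo to exercise the sort.
type fakeInfo struct {
	name string
	mod  time.Time
}

func (f fakeInfo) Name() string       { return f.name }
func (f fakeInfo) Size() int64        { return 0 }
func (f fakeInfo) Mode() os.FileMode  { return 0 }
func (f fakeInfo) ModTime() time.Time { return f.mod }
func (f fakeInfo) IsDir() bool        { return false }
func (f fakeInfo) Sys() interface{}   { return nil }

// newestFirst sorts entries most-recently-modified first, as sortPidsByTime
// does, so that recently active entries are scanned before long-idle ones.
func newestFirst(fdList []os.FileInfo) []os.FileInfo {
	sort.Slice(fdList, func(i, j int) bool {
		return fdList[i].ModTime().UnixNano() > fdList[j].ModTime().UnixNano()
	})
	return fdList
}

// demoOrder returns the name of the newest entry from a two-element list.
func demoOrder() string {
	now := time.Now()
	list := []os.FileInfo{
		fakeInfo{"old", now.Add(-time.Hour)},
		fakeInfo{"new", now},
	}
	return newestFirst(list)[0].Name()
}

func main() {
	fmt.Println(demoOrder()) // → new
}
```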
// inodeFound searches for the given inode in /proc/<pid>/fd/ or
// /proc/<pid>/task/<tid>/fd/ and gets the symbolic link it points to,
// in order to compare it against the given inode.
// If the inode is found, the cache is updated and sorted.
func inodeFound(pidsPath, expect, inodeKey string, inode, pid int) bool {
fdPath := core.ConcatStrings(pidsPath, strconv.Itoa(pid), "/fd/")
fdList := lookupPidDescriptors(fdPath, pid)
if fdList == nil {
return false
}
for idx := 0; idx < len(fdList); idx++ {
descLink := core.ConcatStrings(fdPath, fdList[idx])
if link, err := os.Readlink(descLink); err == nil && link == expect {
inodesCache.add(inodeKey, descLink, pid)
pidsCache.add(fdPath, fdList, pid)
return true
}
}
return false
}
// lookupPidInProc searches for an inode in /proc.
// First it gets the running PIDs and obtains the opened sockets.
// TODO: If the inode is not found, search again in the task/threads
// of every PID (costly).
func lookupPidInProc(pidsPath, expect, inodeKey string, inode int) int {
pidList := getProcPids(pidsPath)
for _, pid := range pidList {
if inodeFound(pidsPath, expect, inodeKey, inode, pid) {
return pid
}
}
return -1
}
// lookupPidDescriptors returns the list of descriptors inside
// /proc/<pid>/fd/
// TODO: search in /proc/<pid>/task/<tid>/fd/ .
func lookupPidDescriptors(fdPath string, pid int) []string {
f, err := os.Open(fdPath)
if err != nil {
return nil
}
// This is where most of the time is wasted when looking for PIDs.
// Long-running processes like firefox/chrome tend to have a lot of descriptor
// references that point to non-existent files on disk, but that remain in
// memory (those marked " (deleted)").
// This forces us to iterate over 300 to 700 items that are not sockets.
fdList, err := f.Readdir(-1)
f.Close()
if err != nil {
return nil
}
fdList = sortPidsByTime(fdList)
s := make([]string, len(fdList))
for n, f := range fdList {
s[n] = f.Name()
}
return s
}
// getProcPids returns the list of running PIDs, from /proc/ or /proc/<pid>/task/ .
func getProcPids(pidsPath string) (pidList []int) {
f, err := os.Open(pidsPath)
if err != nil {
return pidList
}
ls, err := f.Readdir(-1)
f.Close()
if err != nil {
return pidList
}
ls = sortPidsByTime(ls)
for _, f := range ls {
if f.IsDir() == false {
continue
}
if pid, err := strconv.Atoi(f.Name()); err == nil {
pidList = append(pidList, []int{pid}...)
}
}
return pidList
}
// daemon/procmon/find_test.go
package procmon
import (
"fmt"
"testing"
)
func TestGetProcPids(t *testing.T) {
pids := getProcPids("/proc")
if len(pids) == 0 {
t.Error("getProcPids() should not be 0", pids)
}
}
func TestLookupPidDescriptors(t *testing.T) {
pidsFd := lookupPidDescriptors(fmt.Sprint("/proc/", myPid, "/fd/"), myPid)
if len(pidsFd) == 0 {
t.Error("lookupPidDescriptors() should not be empty", pidsFd)
}
}
func TestLookupPidInProc(t *testing.T) {
// we expect that the inode 1 points to /dev/null
expect := "/dev/null"
foundPid := lookupPidInProc("/proc/", expect, "", myPid)
if foundPid == -1 {
t.Error("lookupPidInProc() should not return -1")
}
}
func BenchmarkGetProcs(b *testing.B) {
for i := 0; i < b.N; i++ {
getProcPids("/proc")
}
}
func BenchmarkLookupPidDescriptors(b *testing.B) {
for i := 0; i < b.N; i++ {
lookupPidDescriptors(fmt.Sprint("/proc/", myPid, "/fd/"), myPid)
}
}
// daemon/procmon/monitor/init.go
package monitor
import (
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/procmon"
"github.com/evilsocket/opensnitch/daemon/procmon/audit"
"github.com/evilsocket/opensnitch/daemon/procmon/ebpf"
)
var (
cacheMonitorsRunning = false
)
// List of errors that this package may return.
const (
NoError = iota
ProcFsErr
AuditdErr
EbpfErr
EbpfEventsErr
)
// Error wraps the type of error with its message
type Error struct {
What int
Msg error
}
// ReconfigureMonitorMethod configures a new method for parsing connections.
func ReconfigureMonitorMethod(newMonitorMethod, ebpfModulesPath string) *Error {
if procmon.GetMonitorMethod() == newMonitorMethod {
return nil
}
oldMethod := procmon.GetMonitorMethod()
if oldMethod == "" {
oldMethod = procmon.MethodProc
}
End()
procmon.SetMonitorMethod(newMonitorMethod)
// if the new monitor method fails to start, rollback the change and exit
// without saving the configuration. Otherwise we can end up with the wrong
// monitor method configured and saved to file.
err := Init(ebpfModulesPath)
if err.What > NoError {
log.Error("Reconf() -> Init() error: %v", err)
procmon.SetMonitorMethod(oldMethod)
return err
}
return nil
}
// End stops the way of parsing new connections.
func End() {
if procmon.MethodIsAudit() {
audit.Stop()
} else if procmon.MethodIsEbpf() {
ebpf.Stop()
}
}
// Init starts parsing connections using the method specified.
func Init(ebpfModulesPath string) (errm *Error) {
errm = &Error{}
if cacheMonitorsRunning == false {
go procmon.MonitorActivePids()
go procmon.CacheCleanerTask()
cacheMonitorsRunning = true
}
if procmon.MethodIsEbpf() {
err := ebpf.Start(ebpfModulesPath)
if err == nil {
log.Info("Process monitor method ebpf")
return errm
}
// the main eBPF module loaded fine, so we can still use ebpf even without the events module
// XXX: this will have to be rewritten when we have more events (bind, listen, etc.)
if err.What == ebpf.EventsNotAvailable {
log.Info("Process monitor method ebpf")
log.Warning("opensnitch-procs.o not available: %s", err.Msg)
return errm
}
// we need to stop this method even if it has failed to start, in order to clean up the kprobes
// It helps with the error "cannot write...kprobe_events: file exists".
ebpf.Stop()
errm.What = err.What
errm.Msg = err.Msg
log.Warning("error starting ebpf monitor method: %v", err)
} else if procmon.MethodIsAudit() {
auditConn, err := audit.Start()
if err == nil {
log.Info("Process monitor method audit")
go audit.Reader(auditConn, (chan<- audit.Event)(audit.EventChan))
return errm
}
errm.What = AuditdErr
errm.Msg = err
log.Warning("error starting audit monitor method: %v", err)
}
// if any of the above methods have failed, fallback to proc
log.Info("Process monitor method /proc")
procmon.SetMonitorMethod(procmon.MethodProc)
return errm
}
// daemon/procmon/parse.go
package procmon
import (
"fmt"
"net"
"time"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/netstat"
"github.com/evilsocket/opensnitch/daemon/procmon/audit"
)
func getPIDFromAuditEvents(inode int, inodeKey string, expect string) (int, int) {
audit.Lock.RLock()
defer audit.Lock.RUnlock()
auditEvents := audit.GetEvents()
for n := 0; n < len(auditEvents); n++ {
pid := auditEvents[n].Pid
if inodeFound("/proc/", expect, inodeKey, inode, pid) {
return pid, n
}
}
for n := 0; n < len(auditEvents); n++ {
ppid := auditEvents[n].PPid
if inodeFound("/proc/", expect, inodeKey, inode, ppid) {
return ppid, n
}
}
return -1, -1
}
// GetInodeFromNetstat tries to obtain the inode of a connection from /proc/net/*
func GetInodeFromNetstat(netEntry *netstat.Entry, inodeList *[]int, protocol string, srcIP net.IP, srcPort uint, dstIP net.IP, dstPort uint) bool {
if netEntry = netstat.FindEntry(protocol, srcIP, srcPort, dstIP, dstPort); netEntry == nil {
log.Debug("Could not find netstat entry for: (%s) %d:%s -> %s:%d", protocol, srcPort, srcIP, dstIP, dstPort)
return false
}
if netEntry.INode > 0 {
log.Debug("connection found in netstat: %#v", netEntry)
*inodeList = append([]int{netEntry.INode}, *inodeList...)
return true
}
log.Debug("<== no inodes found for this connection: %#v", netEntry)
return false
}
// GetPIDFromINode tries to get the PID from a socket inode following these steps:
// 1. Get the PID from the cache of Inodes.
// 2. Get the PID from the cache of PIDs.
// 3. Look for the PID using one of these methods:
// - audit: listening for socket creation from auditd.
// - proc: search /proc
//
// If the PID is not found by the first two methods, it'll be looked up in /proc.
func GetPIDFromINode(inode int, inodeKey string) int {
found := -1
if inode <= 0 {
return found
}
start := time.Now()
expect := fmt.Sprintf("socket:[%d]", inode)
if cachedPidInode := inodesCache.getPid(inodeKey); cachedPidInode != -1 {
log.Debug("Inode found in cache: %v %v %v %v", time.Since(start), inodesCache.getPid(inodeKey), inode, inodeKey)
return cachedPidInode
}
cachedPid, pos := pidsCache.getPid(inode, inodeKey, expect)
if cachedPid != -1 {
log.Debug("Socket found in known pids %v, pid: %d, inode: %d, pos: %d, pids in cache: %d", time.Since(start), cachedPid, inode, pos, pidsCache.countItems())
pidsCache.sort(cachedPid)
inodesCache.add(inodeKey, "", cachedPid)
return cachedPid
}
if MethodIsAudit() {
if aPid, pos := getPIDFromAuditEvents(inode, inodeKey, expect); aPid != -1 {
log.Debug("PID found via audit events: %v, position: %d", time.Since(start), pos)
return aPid
}
}
if found == -1 || methodIsProc() {
found = lookupPidInProc("/proc/", expect, inodeKey, inode)
}
log.Debug("new pid lookup took (%d): %v", found, time.Since(start))
return found
}
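The `socket:[inode]` matching that backs the /proc lookup can be demonstrated standalone on Linux (assumes /proc is mounted; `hasSocketLink` and `demoSocketLink` are illustrative names, not the daemon's API):

```go
package main

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
	"strings"
)

// hasSocketLink reports whether any descriptor under /proc/self/fd resolves to
// a "socket:[<inode>]" symlink — the same pattern GetPIDFromINode builds with
// fmt.Sprintf("socket:[%d]", inode) and matches against readlink results.
func hasSocketLink() bool {
	entries, err := os.ReadDir("/proc/self/fd")
	if err != nil {
		return false
	}
	for _, e := range entries {
		link, err := os.Readlink(filepath.Join("/proc/self/fd", e.Name()))
		if err == nil && strings.HasPrefix(link, "socket:[") {
			return true
		}
	}
	return false
}

// demoSocketLink opens a listener (guaranteeing at least one open socket) and
// looks for its socket symlink among our own descriptors.
func demoSocketLink() (bool, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return false, err
	}
	defer ln.Close()
	return hasSocketLink(), nil
}

func main() {
	found, err := demoSocketLink()
	fmt.Println(found, err)
}
```

The daemon does the same readlink comparison, just for every PID's fd directory instead of only its own.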
// FindProcess checks if a process exists given a PID.
// If it exists in /proc, a new Process{} object is returned with the details
// to identify a process (cmdline, name, environment variables, etc).
func FindProcess(pid int, interceptUnknown bool) *Process {
if interceptUnknown && pid < 0 {
return NewProcess(0, "")
}
if proc := findProcessInActivePidsCache(uint64(pid)); proc != nil {
return proc
}
proc := NewProcess(pid, "")
if err := proc.GetInfo(); err != nil {
log.Debug("[%d] FindProcess() error: %s", pid, err)
return nil
}
AddToActivePidsCache(uint64(pid), proc)
return proc
}
// daemon/procmon/process.go
package procmon
import (
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
var (
cacheMonitorsRunning = false
lock = sync.RWMutex{}
monitorMethod = MethodProc
)
// monitor method supported types
const (
MethodProc = "proc"
MethodAudit = "audit"
MethodEbpf = "ebpf"
KernelConnection = "Kernel connection"
ProcSelf = "/proc/self/"
)
// man 5 proc; man procfs
type procIOstats struct {
RChar int64
WChar int64
SyscallRead int64
SyscallWrite int64
ReadBytes int64
WriteBytes int64
}
type procNetStats struct {
ReadBytes uint64
WriteBytes uint64
}
type procDescriptors struct {
ModTime time.Time
Name string
SymLink string
Size int64
}
type procStatm struct {
Size int64
Resident int64
Shared int64
Text int64
Lib int64
Data int64 // data + stack
Dt int
}
// Process holds the details of a process.
type Process struct {
Env map[string]string
IOStats *procIOstats
NetStats *procNetStats
Statm *procStatm
Maps string
// Path is the absolute path to the binary
Path string
Comm string
CWD string
Status string
Stat string
Stack string
Descriptors []*procDescriptors
// Args is the command that the user typed. It MAY contain the absolute path
// of the binary:
// $ curl https://...
// -> Path: /usr/bin/curl
// -> Args: curl https://....
// $ /usr/bin/curl https://...
// -> Path: /usr/bin/curl
// -> Args: /usr/bin/curl https://....
Args []string
ID int
PPID int
UID int
}
// NewProcess returns a new Process structure.
func NewProcess(pid int, comm string) *Process {
return &Process{
ID: pid,
Comm: comm,
Args: make([]string, 0),
Env: make(map[string]string),
IOStats: &procIOstats{},
NetStats: &procNetStats{},
Statm: &procStatm{},
}
}
// Serialize transforms a Process object to gRPC protocol object
func (p *Process) Serialize() *protocol.Process {
ioStats := p.IOStats
netStats := p.NetStats
if ioStats == nil {
ioStats = &procIOstats{}
}
if netStats == nil {
netStats = &procNetStats{}
}
return &protocol.Process{
Pid: uint64(p.ID),
Ppid: uint64(p.PPID),
Uid: uint64(p.UID),
Comm: p.Comm,
Path: p.Path,
Args: p.Args,
Env: p.Env,
Cwd: p.CWD,
IoReads: uint64(ioStats.RChar),
IoWrites: uint64(ioStats.WChar),
NetReads: netStats.ReadBytes,
NetWrites: netStats.WriteBytes,
}
}
// SetMonitorMethod configures a new method for parsing connections.
func SetMonitorMethod(newMonitorMethod string) {
lock.Lock()
defer lock.Unlock()
monitorMethod = newMonitorMethod
}
// GetMonitorMethod returns the current method used for parsing connections.
func GetMonitorMethod() string {
lock.Lock()
defer lock.Unlock()
return monitorMethod
}
// MethodIsEbpf returns whether the process monitor method is eBPF.
func MethodIsEbpf() bool {
lock.RLock()
defer lock.RUnlock()
return monitorMethod == MethodEbpf
}
// MethodIsAudit returns whether the process monitor method is audit.
func MethodIsAudit() bool {
lock.RLock()
defer lock.RUnlock()
return monitorMethod == MethodAudit
}
func methodIsProc() bool {
lock.RLock()
defer lock.RUnlock()
return monitorMethod == MethodProc
}
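The monitor-method accessors above all follow the same RWMutex-guarded getter/setter pattern over a package-level string. The following is a minimal, self-contained sketch of that pattern (a standalone illustration, not the daemon's actual package — the `lock` and `monitorMethod` variables here stand in for procmon's package-level ones):

```go
package main

import (
	"fmt"
	"sync"
)

// Supported monitor methods, mirroring the constants above.
const (
	MethodProc = "proc"
	MethodEbpf = "ebpf"
)

var (
	lock          sync.RWMutex
	monitorMethod = MethodProc
)

// SetMonitorMethod takes the write lock: only one writer at a time.
func SetMonitorMethod(m string) {
	lock.Lock()
	defer lock.Unlock()
	monitorMethod = m
}

// MethodIsEbpf takes the read lock: many readers may check concurrently.
func MethodIsEbpf() bool {
	lock.RLock()
	defer lock.RUnlock()
	return monitorMethod == MethodEbpf
}

func main() {
	fmt.Println(MethodIsEbpf()) // false: default is "proc"
	SetMonitorMethod(MethodEbpf)
	fmt.Println(MethodIsEbpf()) // true
}
```

The RWMutex lets frequent reads (every connection parsed checks the method) proceed in parallel, while the rare reconfiguration serializes against them.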
// opensnitch-1.6.9/daemon/procmon/process_test.go
package procmon
import (
"os"
"testing"
)
var (
myPid = os.Getpid()
proc = NewProcess(myPid, "fakeComm")
)
func TestNewProcess(t *testing.T) {
if proc.ID != myPid {
t.Error("NewProcess PID not equal to ", myPid)
}
if proc.Comm != "fakeComm" {
t.Error("NewProcess Comm not equal to fakeComm")
}
}
func TestProcPath(t *testing.T) {
if err := proc.ReadPath(); err != nil {
t.Error("Proc path error:", err)
}
if proc.Path == "/fake/path" {
t.Error("Proc path equal to /fake/path, should be different:", proc.Path)
}
}
func TestProcCwd(t *testing.T) {
err := proc.ReadCwd()
if proc.CWD == "" {
t.Error("Proc readCwd() not read:", err)
}
}
func TestProcCmdline(t *testing.T) {
proc.ReadCmdline()
if len(proc.Args) == 0 {
t.Error("Proc Args should not be empty:", proc.Args)
}
}
func TestProcDescriptors(t *testing.T) {
proc.readDescriptors()
if len(proc.Descriptors) == 0 {
t.Error("Proc Descriptors should not be empty:", proc.Descriptors)
}
}
func TestProcEnv(t *testing.T) {
proc.ReadEnv()
if len(proc.Env) == 0 {
t.Error("Proc Env should not be empty:", proc.Env)
}
}
func TestProcIOStats(t *testing.T) {
proc.readIOStats()
if proc.IOStats.RChar == 0 {
t.Error("Proc.IOStats.RChar should not be 0:", proc.IOStats)
}
if proc.IOStats.WChar == 0 {
t.Error("Proc.IOStats.WChar should not be 0:", proc.IOStats)
}
if proc.IOStats.SyscallRead == 0 {
t.Error("Proc.IOStats.SyscallRead should not be 0:", proc.IOStats)
}
if proc.IOStats.SyscallWrite == 0 {
t.Error("Proc.IOStats.SyscallWrite should not be 0:", proc.IOStats)
}
/*if proc.IOStats.ReadBytes == 0 {
t.Error("Proc.IOStats.ReadBytes should not be 0:", proc.IOStats)
}
if proc.IOStats.WriteBytes == 0 {
t.Error("Proc.IOStats.WriteBytes should not be 0:", proc.IOStats)
}*/
}
func TestProcStatus(t *testing.T) {
proc.readStatus()
if proc.Status == "" {
t.Error("Proc Status should not be empty:", proc)
}
if proc.Stat == "" {
t.Error("Proc Stat should not be empty:", proc)
}
/*if proc.Stack == "" {
t.Error("Proc Stack should not be empty:", proc)
}*/
if proc.Maps == "" {
t.Error("Proc Maps should not be empty:", proc)
}
if proc.Statm.Size == 0 {
t.Error("Proc Statm Size should not be 0:", proc.Statm)
}
if proc.Statm.Resident == 0 {
t.Error("Proc Statm Resident should not be 0:", proc.Statm)
}
if proc.Statm.Shared == 0 {
t.Error("Proc Statm Shared should not be 0:", proc.Statm)
}
if proc.Statm.Text == 0 {
t.Error("Proc Statm Text should not be 0:", proc.Statm)
}
if proc.Statm.Lib != 0 {
t.Error("Proc Statm Lib should be 0:", proc.Statm)
}
if proc.Statm.Data == 0 {
t.Error("Proc Statm Data should not be 0:", proc.Statm)
}
if proc.Statm.Dt != 0 {
t.Error("Proc Statm Dt should be 0:", proc.Statm)
}
}
func TestProcCleanPath(t *testing.T) {
proc.Path = "/fake/path/binary (deleted)"
proc.CleanPath()
if proc.Path != "/fake/path/binary" {
t.Error("Proc cleanPath() not cleaned:", proc.Path)
}
}
// opensnitch-1.6.9/daemon/rule/loader.go
package rule
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path"
"path/filepath"
"sort"
"strings"
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/fsnotify/fsnotify"
)
// Loader is the object that holds the rules loaded from disk, as well as the
// rules watcher.
type Loader struct {
rules map[string]*Rule
watcher *fsnotify.Watcher
path string
rulesKeys []string
sync.RWMutex
liveReload bool
liveReloadRunning bool
}
// NewLoader loads rules from disk, and watches for changes made to the rules files
// on disk.
func NewLoader(liveReload bool) (*Loader, error) {
watcher, err := fsnotify.NewWatcher()
if err != nil {
return nil, err
}
return &Loader{
path: "",
rules: make(map[string]*Rule),
liveReload: liveReload,
watcher: watcher,
liveReloadRunning: false,
}, nil
}
// NumRules returns the number of loaded rules.
func (l *Loader) NumRules() int {
l.RLock()
defer l.RUnlock()
return len(l.rules)
}
// GetAll returns the loaded rules.
func (l *Loader) GetAll() map[string]*Rule {
l.RLock()
defer l.RUnlock()
return l.rules
}
// Load loads rules files from disk.
func (l *Loader) Load(path string) error {
if core.Exists(path) == false {
return fmt.Errorf("Path '%s' does not exist\nCreate it if you want to save rules to disk", path)
}
path, err := core.ExpandPath(path)
if err != nil {
return fmt.Errorf("Error accessing rules path: %s.\nCreate it if you want to save rules to disk", err)
}
expr := filepath.Join(path, "*.json")
matches, err := filepath.Glob(expr)
if err != nil {
return fmt.Errorf("Error globbing '%s': %s", expr, err)
}
l.path = path
if len(l.rules) == 0 {
l.rules = make(map[string]*Rule)
}
for _, fileName := range matches {
log.Debug("Reading rule from %s", fileName)
if err := l.loadRule(fileName); err != nil {
log.Warning("%s", err)
continue
}
}
if l.liveReload && l.liveReloadRunning == false {
go l.liveReloadWorker()
}
return nil
}
// Add adds a rule to the list of rules, and optionally saves it to disk.
func (l *Loader) Add(rule *Rule, saveToDisk bool) error {
l.addUserRule(rule)
if saveToDisk {
fileName := filepath.Join(l.path, fmt.Sprintf("%s.json", rule.Name))
return l.Save(rule, fileName)
}
return nil
}
// Replace adds a rule to the list of rules, and optionally saves it to disk.
func (l *Loader) Replace(rule *Rule, saveToDisk bool) error {
if err := l.replaceUserRule(rule); err != nil {
return err
}
if saveToDisk {
l.Lock()
defer l.Unlock()
fileName := filepath.Join(l.path, fmt.Sprintf("%s.json", rule.Name))
return l.Save(rule, fileName)
}
return nil
}
// Save a rule to disk.
func (l *Loader) Save(rule *Rule, path string) error {
rule.Updated = time.Now().Format(time.RFC3339)
raw, err := json.MarshalIndent(rule, "", " ")
if err != nil {
return fmt.Errorf("Error while saving rule %s to %s: %s", rule, path, err)
}
if err = ioutil.WriteFile(path, raw, 0600); err != nil {
return fmt.Errorf("Error while saving rule %s to %s: %s", rule, path, err)
}
return nil
}
// Delete deletes a rule from the list by name.
// If the duration is Always (i.e.: saved on disk), it'll attempt to delete
// it from disk.
func (l *Loader) Delete(ruleName string) error {
l.Lock()
defer l.Unlock()
rule := l.rules[ruleName]
if rule == nil {
return nil
}
l.cleanListsRule(rule)
delete(l.rules, ruleName)
l.sortRules()
if rule.Duration != Always {
return nil
}
log.Info("Delete() rule: %s", rule)
return l.deleteRuleFromDisk(ruleName)
}
func (l *Loader) loadRule(fileName string) error {
raw, err := ioutil.ReadFile(fileName)
if err != nil {
return fmt.Errorf("Error while reading %s: %s", fileName, err)
}
l.Lock()
defer l.Unlock()
var r Rule
err = json.Unmarshal(raw, &r)
if err != nil {
return fmt.Errorf("Error parsing rule from %s: %s", fileName, err)
}
raw = nil
if oldRule, found := l.rules[r.Name]; found {
l.cleanListsRule(oldRule)
}
if !r.Enabled {
// XXX: we only parse and load the Data field if the rule is disabled and the Data field is not empty.
// The rule will remain disabled.
if err = l.unmarshalOperatorList(&r.Operator); err != nil {
return err
}
} else {
if err := r.Operator.Compile(); err != nil {
log.Warning("Operator.Compile() error: %s: %s (%s)", err, r.Operator.Data, r.Name)
return fmt.Errorf("(1) Error compiling rule: %s", err)
}
if r.Operator.Type == List {
for i := 0; i < len(r.Operator.List); i++ {
if err := r.Operator.List[i].Compile(); err != nil {
log.Warning("Operator.Compile() error: %s (%s)", err, r.Name)
return fmt.Errorf("(1) Error compiling list rule: %s", err)
}
}
}
}
if oldRule, found := l.rules[r.Name]; found {
l.deleteOldRuleFromDisk(oldRule, &r)
}
log.Debug("Loaded rule from %s: %s", fileName, r.String())
l.rules[r.Name] = &r
l.sortRules()
if l.isTemporary(&r) {
err = l.scheduleTemporaryRule(r)
}
return err
}
// deleteRule deletes a rule from memory if it has been deleted from disk.
// This is only called if fsnotify's Remove event is fired, thus it doesn't
// have to delete temporary rules (!Always).
func (l *Loader) deleteRule(filePath string) {
fileName := filepath.Base(filePath)
ruleName := fileName[:len(fileName)-5]
l.RLock()
rule, found := l.rules[ruleName]
delRule := found && rule.Duration == Always
l.RUnlock()
if delRule {
l.Delete(ruleName)
}
}
func (l *Loader) deleteRuleFromDisk(ruleName string) error {
path := fmt.Sprint(l.path, "/", ruleName, ".json")
return os.Remove(path)
}
// deleteOldRuleFromDisk deletes a rule from disk if the Duration changes
// from Always (saved on disk), to !Always (temporary).
func (l *Loader) deleteOldRuleFromDisk(oldRule, newRule *Rule) {
if oldRule.Duration == Always && newRule.Duration != Always {
if err := l.deleteRuleFromDisk(oldRule.Name); err != nil {
log.Error("Error deleting old rule from disk: %s", oldRule.Name)
}
}
}
// cleanListsRule erases the list of domains of an Operator of type Lists
func (l *Loader) cleanListsRule(oldRule *Rule) {
if oldRule.Operator.Type == Lists {
oldRule.Operator.StopMonitoringLists()
} else if oldRule.Operator.Type == List {
for i := 0; i < len(oldRule.Operator.List); i++ {
if oldRule.Operator.List[i].Type == Lists {
oldRule.Operator.List[i].StopMonitoringLists()
break
}
}
}
}
func (l *Loader) isTemporary(r *Rule) bool {
return r.Duration != Restart && r.Duration != Always && r.Duration != Once
}
func (l *Loader) isUniqueName(name string) bool {
_, found := l.rules[name]
return !found
}
func (l *Loader) setUniqueName(rule *Rule) {
l.Lock()
defer l.Unlock()
idx := 1
base := rule.Name
for l.isUniqueName(rule.Name) == false {
idx++
rule.Name = fmt.Sprintf("%s-%d", base, idx)
}
}
// Deprecated: rule.Operator.Data no longer holds the operator list in json format as string.
func (l *Loader) unmarshalOperatorList(op *Operator) error {
if op.Type == List && len(op.List) == 0 && op.Data != "" {
if err := json.Unmarshal([]byte(op.Data), &op.List); err != nil {
return fmt.Errorf("error loading rule of type list: %s", err)
}
op.Data = ""
}
return nil
}
func (l *Loader) sortRules() {
l.rulesKeys = make([]string, 0, len(l.rules))
for k := range l.rules {
l.rulesKeys = append(l.rulesKeys, k)
}
sort.Strings(l.rulesKeys)
}
func (l *Loader) addUserRule(rule *Rule) {
if rule.Duration == Once {
return
}
l.setUniqueName(rule)
l.replaceUserRule(rule)
}
func (l *Loader) replaceUserRule(rule *Rule) (err error) {
l.Lock()
oldRule, found := l.rules[rule.Name]
l.Unlock()
if found {
// If the rule has changed from Always (saved on disk) to !Always (temporary),
// we need to delete the rule from disk and keep it in memory.
l.deleteOldRuleFromDisk(oldRule, rule)
// delete loaded lists, if this is a rule of type Lists
l.cleanListsRule(oldRule)
}
if err := l.unmarshalOperatorList(&rule.Operator); err != nil {
log.Error(err.Error())
}
if rule.Enabled {
if err := rule.Operator.Compile(); err != nil {
log.Warning("Operator.Compile() error: %s: %s", err, rule.Operator.Data)
return fmt.Errorf("(2) error compiling rule: %s", err)
}
if rule.Operator.Type == List {
for i := 0; i < len(rule.Operator.List); i++ {
if err := rule.Operator.List[i].Compile(); err != nil {
log.Warning("Operator.Compile() error: %s", err)
return fmt.Errorf("(2) error compiling list rule: %s", err)
}
}
}
}
l.Lock()
l.rules[rule.Name] = rule
l.sortRules()
l.Unlock()
if l.isTemporary(rule) {
err = l.scheduleTemporaryRule(*rule)
}
return err
}
func (l *Loader) scheduleTemporaryRule(rule Rule) error {
tTime, err := time.ParseDuration(string(rule.Duration))
if err != nil {
return err
}
time.AfterFunc(tTime, func() {
l.Lock()
defer l.Unlock()
log.Info("Temporary rule expired: %s - %s", rule.Name, rule.Duration)
if newRule, found := l.rules[rule.Name]; found {
if newRule.Duration != rule.Duration {
log.Debug("%s temporary rule expired, but has new Duration, old: %s, new: %s", rule.Name, rule.Duration, newRule.Duration)
return
}
delete(l.rules, rule.Name)
l.sortRules()
}
})
return nil
}
func (l *Loader) liveReloadWorker() {
l.liveReloadRunning = true
log.Debug("Rules watcher started on path %s ...", l.path)
if err := l.watcher.Add(l.path); err != nil {
log.Error("Could not watch path: %s", err)
l.liveReloadRunning = false
return
}
for {
select {
case event := <-l.watcher.Events:
// a new rule json file has been created or updated
if event.Op&fsnotify.Write == fsnotify.Write {
if strings.HasSuffix(event.Name, ".json") {
log.Important("Ruleset changed due to %s, reloading ...", path.Base(event.Name))
if err := l.loadRule(event.Name); err != nil {
log.Warning("%s", err)
}
}
} else if event.Op&fsnotify.Remove == fsnotify.Remove {
if strings.HasSuffix(event.Name, ".json") {
log.Important("Rule deleted %s", path.Base(event.Name))
// we only need to delete from memory rules of type Always,
// because the Remove event is of a file, i.e.: Duration == Always
l.deleteRule(event.Name)
}
}
case err := <-l.watcher.Errors:
log.Error("File system watcher error: %s", err)
}
}
}
// FindFirstMatch tries to match the connection against the existing rule set.
func (l *Loader) FindFirstMatch(con *conman.Connection) (match *Rule) {
l.RLock()
defer l.RUnlock()
for _, idx := range l.rulesKeys {
rule := l.rules[idx]
if rule.Enabled == false {
continue
}
if rule.Match(con) {
// We have a match.
// Save the rule so we don't ask the user to take action,
// and keep iterating in case a Deny or a Precedence rule appears later.
match = rule
if rule.Action == Reject || rule.Action == Deny || rule.Precedence == true {
return rule
}
}
}
return match
}
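FindFirstMatch's precedence logic can be isolated into a small sketch: rules are walked in sorted name order, the latest Allow match is remembered, and a Deny/Reject or a Precedence rule short-circuits the walk. The following is a self-contained illustration (simplified types, not the daemon's actual `Rule`/`Connection` structs):

```go
package main

import (
	"fmt"
	"sort"
)

// rule is a stripped-down stand-in for the daemon's Rule type.
type rule struct {
	Name       string
	Action     string // "allow", "deny", "reject"
	Precedence bool
	Matches    bool // stand-in for rule.Match(con)
}

// findFirstMatch walks rules in sorted name order, mirroring the loader's
// rulesKeys iteration: remember the last match, return early on deny/priority.
func findFirstMatch(rules map[string]*rule) *rule {
	keys := make([]string, 0, len(rules))
	for k := range rules {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var match *rule
	for _, k := range keys {
		r := rules[k]
		if !r.Matches {
			continue
		}
		match = r
		if r.Action == "deny" || r.Action == "reject" || r.Precedence {
			return r
		}
	}
	return match
}

func main() {
	rules := map[string]*rule{
		"000-allow-chrome": {Name: "000-allow-chrome", Action: "allow", Matches: true},
		"001-deny-chrome":  {Name: "001-deny-chrome", Action: "deny", Matches: true},
	}
	fmt.Println(findFirstMatch(rules).Name) // 001-deny-chrome: deny wins over a plain allow
}
```

This is why rule names double as priorities: sorting the keys means `000-*` rules are evaluated before `001-*`, and a Precedence rule stops evaluation regardless of later denies.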
// opensnitch-1.6.9/daemon/rule/loader_test.go
package rule
import (
"fmt"
"io"
"math/rand"
"os"
"testing"
"time"
)
var tmpDir string
func TestMain(m *testing.M) {
tmpDir = "/tmp/ostest_" + randString()
os.Mkdir(tmpDir, 0777)
defer os.RemoveAll(tmpDir)
os.Exit(m.Run())
}
func TestRuleLoader(t *testing.T) {
t.Parallel()
t.Log("Test rules loader")
var list []Operator
dur1s := Duration("1s")
dummyOper, _ := NewOperator(Simple, false, OpTrue, "", list)
dummyOper.Compile()
inMem1sRule := Create("000-xxx-name", "rule description xxx", true, false, false, Allow, dur1s, dummyOper)
inMemUntilRestartRule := Create("000-aaa-name", "rule description aaa", true, false, false, Allow, Restart, dummyOper)
l, err := NewLoader(false)
if err != nil {
t.Fail()
}
if err = l.Load("/non/existent/path/"); err == nil {
t.Error("non existent path test: err should not be nil")
}
if err = l.Load("testdata/"); err != nil {
t.Error("Error loading test rules: ", err)
}
// we expect 6 valid rules (2 invalid), loaded from testdata/
testNumRules(t, l, 6)
if err = l.Add(inMem1sRule, false); err != nil {
t.Error("Error adding temporary rule")
}
testNumRules(t, l, 7)
// test auto deletion of temporary rule
time.Sleep(time.Second * 2)
testNumRules(t, l, 6)
if err = l.Add(inMemUntilRestartRule, false); err != nil {
t.Error("Error adding temporary rule (2)")
}
testNumRules(t, l, 7)
testRulesOrder(t, l)
testSortRules(t, l)
testFindMatch(t, l)
testFindEnabled(t, l)
testDurationChange(t, l)
}
func TestRuleLoaderInvalidRegexp(t *testing.T) {
t.Parallel()
t.Log("Test rules loader: invalid regexp")
l, err := NewLoader(true)
if err != nil {
t.Fail()
}
t.Run("loadRule() from disk test (simple)", func(t *testing.T) {
if err := l.loadRule("testdata/invalid-regexp.json"); err == nil {
t.Error("invalid regexp rule loaded: loadRule()")
}
})
t.Run("loadRule() from disk test (list)", func(t *testing.T) {
if err := l.loadRule("testdata/invalid-regexp-list.json"); err == nil {
t.Error("invalid regexp rule loaded: loadRule()")
}
})
var list []Operator
dur30m := Duration("30m")
opListData := `[{"type": "regexp", "operand": "process.path", "sensitive": false, "data": "^(/di(rmngr)$"}, {"type": "simple", "operand": "dest.port", "data": "53", "sensitive": false}]`
invalidRegexpOp, _ := NewOperator(List, false, OpList, opListData, list)
invalidRegexpRule := Create("invalid-regexp", "invalid rule description", true, false, false, Allow, dur30m, invalidRegexpOp)
t.Run("replaceUserRule() test list", func(t *testing.T) {
if err := l.replaceUserRule(invalidRegexpRule); err == nil {
t.Error("invalid regexp rule loaded: replaceUserRule()")
}
})
}
// Test rules of type operator.list. There are these scenarios:
// - Enabled rules:
//   * the operator Data field is ignored if it contains the list of operators as a json string.
//   * the operators list is expanded as json objects under "list": []
//   For new rules (> v1.6.3), the Data field will be empty.
//
// - Disabled rules:
//   * (old) the Data field contains the list of operators as a json string, and the list of operators is empty.
//   * the Data field is empty, and the list of operators is expanded.
// In all cases the list of operators must be loaded.
func TestRuleLoaderList(t *testing.T) {
l, err := NewLoader(true)
if err != nil {
t.Fail()
}
testRules := map[string]string{
"rule-with-operator-list": "testdata/rule-operator-list.json",
"rule-disabled-with-operators-list-as-json-string": "testdata/rule-disabled-operator-list.json",
"rule-disabled-with-operators-list-expanded": "testdata/rule-disabled-operator-list-expanded.json",
"rule-with-operator-list-data-empty": "testdata/rule-operator-list-data-empty.json",
}
for name, path := range testRules {
t.Run(fmt.Sprint("loadRule() ", path), func(t *testing.T) {
if err := l.loadRule(path); err != nil {
t.Error(fmt.Sprint("loadRule() ", path, " error:"), err)
}
t.Log("Test: List rule:", name, path)
r, found := l.rules[name]
if !found {
t.Error(fmt.Sprint("loadRule() ", path, " not in the list:"), l.rules)
}
// Starting from > v1.6.3, after loading a rule of type List, the field Operator.Data is emptied, if the Data contained the list of operators as json.
if len(r.Operator.List) != 2 {
t.Error(fmt.Sprint("loadRule() ", path, " operator List not loaded:"), r)
}
if r.Operator.List[0].Type != Simple ||
r.Operator.List[0].Operand != OpProcessPath ||
r.Operator.List[0].Data != "/usr/bin/telnet" {
t.Error(fmt.Sprint("loadRule() ", path, " operator List 0 not loaded:"), r)
}
if r.Operator.List[1].Type != Simple ||
r.Operator.List[1].Operand != OpDstPort ||
r.Operator.List[1].Data != "53" {
t.Error(fmt.Sprint("loadRule() ", path, " operator List 1 not loaded:"), r)
}
})
}
}
func TestLiveReload(t *testing.T) {
t.Parallel()
t.Log("Test rules loader with live reload")
l, err := NewLoader(true)
if err != nil {
t.Fail()
}
if err = Copy("testdata/000-allow-chrome.json", tmpDir+"/000-allow-chrome.json"); err != nil {
t.Error("Error copying rule into a temp dir")
}
if err = Copy("testdata/001-deny-chrome.json", tmpDir+"/001-deny-chrome.json"); err != nil {
t.Error("Error copying rule into a temp dir")
}
if err = l.Load(tmpDir); err != nil {
t.Error("Error loading test rules: ", err)
}
//wait for watcher to activate
time.Sleep(time.Second)
if err = Copy("testdata/live_reload/test-live-reload-remove.json", tmpDir+"/test-live-reload-remove.json"); err != nil {
t.Error("Error copying rules into temp dir")
}
if err = Copy("testdata/live_reload/test-live-reload-delete.json", tmpDir+"/test-live-reload-delete.json"); err != nil {
t.Error("Error copying rules into temp dir")
}
//wait for watcher to pick up the changes
time.Sleep(time.Second)
testNumRules(t, l, 4)
if err = os.Remove(tmpDir + "/test-live-reload-remove.json"); err != nil {
t.Error("Error Remove()ing file from temp dir")
}
if err = l.Delete("test-live-reload-delete"); err != nil {
t.Error("Error Delete()ing file from temp dir")
}
//wait for watcher to pick up the changes
time.Sleep(time.Second)
testNumRules(t, l, 2)
}
func randString() string {
rand.Seed(time.Now().UnixNano())
var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
b := make([]rune, 10)
for i := range b {
b[i] = letterRunes[rand.Intn(len(letterRunes))]
}
return string(b)
}
func Copy(src, dst string) error {
in, err := os.Open(src)
if err != nil {
return err
}
defer in.Close()
out, err := os.Create(dst)
if err != nil {
return err
}
defer out.Close()
_, err = io.Copy(out, in)
if err != nil {
return err
}
return out.Close()
}
func testNumRules(t *testing.T, l *Loader, num int) {
if l.NumRules() != num {
t.Error("rules number should be ", num, ", got: ", l.NumRules())
}
}
func testRulesOrder(t *testing.T, l *Loader) {
if l.rulesKeys[0] != "000-aaa-name" {
t.Error("Rules not in order (0): ", l.rulesKeys)
}
if l.rulesKeys[1] != "000-allow-chrome" {
t.Error("Rules not in order (1): ", l.rulesKeys)
}
if l.rulesKeys[2] != "001-deny-chrome" {
t.Error("Rules not in order (2): ", l.rulesKeys)
}
}
func testSortRules(t *testing.T, l *Loader) {
l.rulesKeys[1] = "001-deny-chrome"
l.rulesKeys[2] = "000-allow-chrome"
l.sortRules()
if l.rulesKeys[1] != "000-allow-chrome" {
t.Error("Rules not in order (1): ", l.rulesKeys)
}
if l.rulesKeys[2] != "001-deny-chrome" {
t.Error("Rules not in order (2): ", l.rulesKeys)
}
}
func testFindMatch(t *testing.T, l *Loader) {
conn.Process.Path = "/opt/google/chrome/chrome"
testFindPriorityMatch(t, l)
testFindDenyMatch(t, l)
testFindAllowMatch(t, l)
restoreConnection()
}
func testFindPriorityMatch(t *testing.T, l *Loader) {
match := l.FindFirstMatch(conn)
if match == nil {
t.Error("FindPriorityMatch didn't match")
}
// test 000-allow-chrome, priority == true
if match.Name != "000-allow-chrome" {
t.Error("findPriorityMatch: priority rule failed: ", match)
}
}
func testFindDenyMatch(t *testing.T, l *Loader) {
l.rules["000-allow-chrome"].Precedence = false
// test 000-allow-chrome, priority == false
// 001-deny-chrome must match
match := l.FindFirstMatch(conn)
if match == nil {
t.Error("FindDenyMatch deny didn't match")
}
if match.Name != "001-deny-chrome" {
t.Error("findDenyMatch: deny rule failed: ", match)
}
}
func testFindAllowMatch(t *testing.T, l *Loader) {
l.rules["000-allow-chrome"].Precedence = false
l.rules["001-deny-chrome"].Action = Allow
// test 000-allow-chrome, priority == false
// 001-deny-chrome must match
match := l.FindFirstMatch(conn)
if match == nil {
t.Error("FindAllowMatch allow didn't match")
}
if match.Name != "001-deny-chrome" {
t.Error("findAllowMatch: allow rule failed: ", match)
}
}
func testFindEnabled(t *testing.T, l *Loader) {
l.rules["000-allow-chrome"].Precedence = false
l.rules["001-deny-chrome"].Action = Allow
l.rules["001-deny-chrome"].Enabled = false
// test 000-allow-chrome, priority == false
// 001-deny-chrome must match
match := l.FindFirstMatch(conn)
if match == nil {
t.Error("FindEnabledMatch, match nil")
}
if match.Name == "001-deny-chrome" {
t.Error("findEnabledMatch: deny rule shouldn't have matched: ", match)
}
}
// test that changing the Duration of a temporary rule doesn't delete
// the new one, ignoring the old timer.
func testDurationChange(t *testing.T, l *Loader) {
l.rules["000-aaa-name"].Duration = "2s"
if err := l.replaceUserRule(l.rules["000-aaa-name"]); err != nil {
t.Error("testDurationChange, error replacing rule: ", err)
}
l.rules["000-aaa-name"].Duration = "1h"
if err := l.replaceUserRule(l.rules["000-aaa-name"]); err != nil {
t.Error("testDurationChange, error replacing rule: ", err)
}
time.Sleep(time.Second * 4)
if _, found := l.rules["000-aaa-name"]; !found {
t.Error("testDurationChange, error: rule has been deleted")
}
}
// opensnitch-1.6.9/daemon/rule/operator.go
package rule
import (
"fmt"
"net"
"reflect"
"regexp"
"strconv"
"strings"
"sync"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
)
// Type is the type of rule.
// Every type has its own way of checking the user data against connections.
type Type string
// Sensitive defines if a rule is case-sensitive or not. By default no.
type Sensitive bool
// Operand is what we check on a connection.
type Operand string
// Available types
const (
Simple = Type("simple")
Regexp = Type("regexp")
Complex = Type("complex") // for future use
List = Type("list")
Network = Type("network")
Lists = Type("lists")
)
// Available operands
const (
OpTrue = Operand("true")
OpProcessID = Operand("process.id")
OpProcessPath = Operand("process.path")
OpProcessCmd = Operand("process.command")
OpProcessEnvPrefix = Operand("process.env.")
OpProcessEnvPrefixLen = 12
OpUserID = Operand("user.id")
OpSrcIP = Operand("source.ip")
OpSrcPort = Operand("source.port")
OpDstIP = Operand("dest.ip")
OpDstHost = Operand("dest.host")
OpDstPort = Operand("dest.port")
OpDstNetwork = Operand("dest.network")
OpSrcNetwork = Operand("source.network")
OpProto = Operand("protocol")
OpIfaceIn = Operand("iface.in")
OpIfaceOut = Operand("iface.out")
OpList = Operand("list")
OpDomainsLists = Operand("lists.domains")
OpDomainsRegexpLists = Operand("lists.domains_regexp")
OpIPLists = Operand("lists.ips")
OpNetLists = Operand("lists.nets")
)
type opCallback func(value interface{}) bool
// Operator represents what we want to filter of a connection, and how.
type Operator struct {
cb opCallback
re *regexp.Regexp
netMask *net.IPNet
lists map[string]interface{}
exitMonitorChan chan (bool)
Operand Operand `json:"operand"`
Data string `json:"data"`
Type Type `json:"type"`
List []Operator `json:"list"`
Sensitive Sensitive `json:"sensitive"`
listsMonitorRunning bool
isCompiled bool
sync.RWMutex
}
// NewOperator returns a new operator object
func NewOperator(t Type, s Sensitive, o Operand, data string, list []Operator) (*Operator, error) {
op := Operator{
Type: t,
Sensitive: s,
Operand: o,
Data: data,
List: list,
}
return &op, nil
}
// Compile translates the operator type field to its callback counterpart
func (o *Operator) Compile() error {
if o.isCompiled {
return nil
}
if o.Type == Simple {
o.cb = o.simpleCmp
} else if o.Type == Regexp {
o.cb = o.reCmp
if o.Sensitive == false {
o.Data = strings.ToLower(o.Data)
}
re, err := regexp.Compile(o.Data)
if err != nil {
return err
}
o.re = re
} else if o.Type == List {
o.Operand = OpList
} else if o.Type == Network {
var err error
_, o.netMask, err = net.ParseCIDR(o.Data)
if err != nil {
return err
}
o.cb = o.cmpNetwork
} else if o.Type == Lists {
if o.Data == "" {
return fmt.Errorf("Operand lists is empty, nothing to load: %s", o)
}
if o.Operand == OpDomainsLists {
o.loadLists()
o.cb = o.domainsListCmp
} else if o.Operand == OpDomainsRegexpLists {
o.loadLists()
o.cb = o.reListCmp
} else if o.Operand == OpIPLists {
o.loadLists()
o.cb = o.ipListCmp
} else if o.Operand == OpNetLists {
o.loadLists()
o.cb = o.ipNetCmp
} else {
return fmt.Errorf("Unknown Lists operand: %s", o.Operand)
}
} else {
return fmt.Errorf("Unknown type: %s", o.Type)
}
log.Debug("Operator compiled: %s", o)
o.isCompiled = true
return nil
}
func (o *Operator) String() string {
how := "is"
if o.Type == Regexp {
how = "matches"
}
return fmt.Sprintf("%s %s '%s'", log.Bold(string(o.Operand)), how, log.Yellow(string(o.Data)))
}
func (o *Operator) simpleCmp(v interface{}) bool {
if o.Sensitive == false {
return strings.EqualFold(v.(string), o.Data)
}
return v == o.Data
}
func (o *Operator) reCmp(v interface{}) bool {
if vt := reflect.ValueOf(v).Kind(); vt != reflect.String {
log.Warning("Operator.reCmp() bad interface type: %T", v)
return false
}
if o.Sensitive == false {
v = strings.ToLower(v.(string))
}
return o.re.MatchString(v.(string))
}
func (o *Operator) cmpNetwork(destIP interface{}) bool {
// 192.0.2.1/24, 2001:db8:a0b:12f0::1/32
if o.netMask == nil {
log.Warning("cmpNetwork() NULL: %s", destIP)
return false
}
return o.netMask.Contains(destIP.(net.IP))
}
func (o *Operator) domainsListCmp(v interface{}) bool {
dstHost := v.(string)
if dstHost == "" {
return false
}
if o.Sensitive == false {
dstHost = strings.ToLower(dstHost)
}
o.RLock()
defer o.RUnlock()
if _, found := o.lists[dstHost]; found {
log.Debug("%s: %s, %s", log.Red("domain list match"), dstHost, o.lists[dstHost])
return true
}
return false
}
func (o *Operator) ipListCmp(v interface{}) bool {
dstIP := v.(string)
if dstIP == "" {
return false
}
o.RLock()
defer o.RUnlock()
if _, found := o.lists[dstIP]; found {
log.Debug("%s: %s, %s", log.Red("IP list match"), dstIP, o.lists[dstIP].(string))
return true
}
return false
}
func (o *Operator) ipNetCmp(dstIP interface{}) bool {
o.RLock()
defer o.RUnlock()
for host, netMask := range o.lists {
n := netMask.(*net.IPNet)
if n.Contains(dstIP.(net.IP)) {
log.Debug("%s: %s, %s", log.Red("Net list match"), dstIP, host)
return true
}
}
return false
}
func (o *Operator) reListCmp(v interface{}) bool {
dstHost := v.(string)
if dstHost == "" {
return false
}
if o.Sensitive == false {
dstHost = strings.ToLower(dstHost)
}
o.RLock()
defer o.RUnlock()
for file, re := range o.lists {
r := re.(*regexp.Regexp)
if r.MatchString(dstHost) {
log.Debug("%s: %s, %s", log.Red("Regexp list match"), dstHost, file)
return true
}
}
return false
}
func (o *Operator) listMatch(con interface{}) bool {
res := true
for i := 0; i < len(o.List); i++ {
res = res && o.List[i].Match(con.(*conman.Connection))
}
return res
}
// Match tries to match parts of a connection with the given operator.
func (o *Operator) Match(con *conman.Connection) bool {
if o.Operand == OpTrue {
return true
} else if o.Operand == OpList {
return o.listMatch(con)
} else if o.Operand == OpProcessPath {
return o.cb(con.Process.Path)
} else if o.Operand == OpProcessCmd {
return o.cb(strings.Join(con.Process.Args, " "))
} else if o.Operand == OpDstHost && con.DstHost != "" {
return o.cb(con.DstHost)
} else if o.Operand == OpDstIP {
return o.cb(con.DstIP.String())
} else if o.Operand == OpDstPort {
return o.cb(strconv.FormatUint(uint64(con.DstPort), 10))
} else if o.Operand == OpDomainsLists {
return o.cb(con.DstHost)
} else if o.Operand == OpIPLists {
return o.cb(con.DstIP.String())
} else if o.Operand == OpUserID {
return o.cb(strconv.Itoa(con.Entry.UserId))
} else if o.Operand == OpDstNetwork {
return o.cb(con.DstIP)
} else if o.Operand == OpSrcNetwork {
return o.cb(con.SrcIP)
} else if o.Operand == OpNetLists {
return o.cb(con.DstIP)
} else if o.Operand == OpDomainsRegexpLists {
return o.cb(con.DstHost)
} else if o.Operand == OpIfaceIn {
if ifname, err := net.InterfaceByIndex(con.Pkt.IfaceInIdx); err == nil {
return o.cb(ifname.Name)
}
} else if o.Operand == OpIfaceOut {
if ifname, err := net.InterfaceByIndex(con.Pkt.IfaceOutIdx); err == nil {
return o.cb(ifname.Name)
}
} else if o.Operand == OpProto {
return o.cb(con.Protocol)
} else if o.Operand == OpSrcIP {
return o.cb(con.SrcIP.String())
} else if o.Operand == OpSrcPort {
return o.cb(strconv.FormatUint(uint64(con.SrcPort), 10))
} else if o.Operand == OpProcessID {
return o.cb(strconv.Itoa(con.Process.ID))
} else if strings.HasPrefix(string(o.Operand), string(OpProcessEnvPrefix)) {
envVarName := core.Trim(string(o.Operand[OpProcessEnvPrefixLen:]))
envVarValue, _ := con.Process.Env[envVarName]
return o.cb(envVarValue)
}
return false
}
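Operator.Compile() translates each operator type into a comparison callback exactly once, so Match() reduces to a single function call per operand. A self-contained sketch of that compile-to-callback pattern (simplified struct, only the `simple` and `regexp` types shown):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// operator is a stripped-down stand-in for the daemon's Operator type.
type operator struct {
	Type      string // "simple" or "regexp"
	Sensitive bool
	Data      string
	cb        func(v string) bool
}

// compile resolves the type field into a callback, like Operator.Compile().
func (o *operator) compile() error {
	switch o.Type {
	case "simple":
		o.cb = func(v string) bool {
			if !o.Sensitive {
				// case-insensitive comparison, as simpleCmp does
				return strings.EqualFold(v, o.Data)
			}
			return v == o.Data
		}
	case "regexp":
		// the pattern is compiled once here, not on every match
		re, err := regexp.Compile(o.Data)
		if err != nil {
			return err
		}
		o.cb = re.MatchString
	default:
		return fmt.Errorf("unknown type: %s", o.Type)
	}
	return nil
}

func main() {
	op := &operator{Type: "regexp", Data: `^/usr/bin/`}
	if err := op.compile(); err != nil {
		panic(err)
	}
	fmt.Println(op.cb("/usr/bin/curl")) // true
	fmt.Println(op.cb("/opt/curl"))    // false
}
```

Compiling up front also surfaces invalid user data (e.g. a bad regexp) at rule-load time instead of on the hot path of every connection, which is why loader.go refuses to load rules whose operators fail to compile.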
// opensnitch-1.6.9/daemon/rule/operator_lists.go
package rule
import (
"fmt"
"io/ioutil"
"net"
"path/filepath"
"regexp"
"runtime/debug"
"strings"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
)
func (o *Operator) monitorLists() {
log.Info("monitor lists started: %s", o.Data)
modTimes := make(map[string]time.Time)
totalFiles := 0
needReload := false
numFiles := 0
expr := filepath.Join(o.Data, "/*.*")
for {
select {
case <-o.exitMonitorChan:
goto Exit
default:
fileList, err := filepath.Glob(expr)
if err != nil {
log.Warning("Error reading directory of domains list: %s, %s", o.Data, err)
goto Exit
}
numFiles = 0
for _, filename := range fileList {
// ignore hidden files
name := filepath.Base(filename)
if name[:1] == "." {
delete(modTimes, filename)
continue
}
// an overwrite operation performs two tasks: truncate the file and save the new content,
// causing the file time to be modified twice.
modTime, err := core.GetFileModTime(filename)
if err != nil {
log.Debug("deleting saved mod time due to error reading the list, %s", filename)
delete(modTimes, filename)
} else if lastModTime, found := modTimes[filename]; found {
if !lastModTime.Equal(modTime) {
log.Debug("list changed: %s, %s, %s", lastModTime, modTime, filename)
needReload = true
}
}
modTimes[filename] = modTime
numFiles++
}
fileList = nil
if numFiles != totalFiles {
needReload = true
}
totalFiles = numFiles
if needReload {
// we can't reload a single list, because the domains of all lists are added to the same map.
// we could keep the domains separated by list/file, but then we'd need to iterate the map in order
// to match a domain. Reloading the lists should only occur once a day.
if err := o.readLists(); err != nil {
log.Warning("%s", err)
}
needReload = false
}
time.Sleep(4 * time.Second)
}
}
Exit:
modTimes = nil
o.ClearLists()
log.Info("lists monitor stopped")
}
// ClearLists deletes all the entries of the lists map
func (o *Operator) ClearLists() {
o.Lock()
defer o.Unlock()
log.Info("clearing domains lists: %d - %s", len(o.lists), o.Data)
for k := range o.lists {
delete(o.lists, k)
}
debug.FreeOSMemory()
}
// StopMonitoringLists stops the monitoring lists goroutine.
func (o *Operator) StopMonitoringLists() {
if o.listsMonitorRunning {
o.exitMonitorChan <- true
o.exitMonitorChan = nil
o.listsMonitorRunning = false
}
}
func (o *Operator) readDomainsList(raw, fileName string) (dups uint64) {
log.Debug("Loading domains list: %s, size: %d", fileName, len(raw))
lines := strings.Split(string(raw), "\n")
for _, domain := range lines {
if len(domain) < 9 {
continue
}
// exclude not valid lines
if domain[:7] != "0.0.0.0" && domain[:9] != "127.0.0.1" {
continue
}
host := domain[8:]
// exclude localhost entries; guard against bare "127.0.0.1" lines,
// which would make the slice below go out of range
if domain[:9] == "127.0.0.1" {
if len(domain) < 11 {
continue
}
host = domain[10:]
}
if host == "local" || host == "localhost" || host == "localhost.localdomain" || host == "broadcasthost" {
continue
}
host = core.Trim(host)
if _, found := o.lists[host]; found {
dups++
continue
}
o.lists[host] = fileName
}
lines = nil
log.Info("%d domains loaded, %s", len(o.lists), fileName)
return dups
}
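readDomainsList expects hosts(5)-style blocklist lines such as `0.0.0.0 ads.example.com`, which is why it keys off the fixed-width `0.0.0.0 ` and `127.0.0.1 ` prefixes. A hedged sketch of the per-line extraction (the daemon additionally skips localhost names and deduplicates across files):

```go
package main

import (
	"fmt"
	"strings"
)

// hostFromLine extracts the blocked hostname from a hosts-file line
// that points a domain at 0.0.0.0 or 127.0.0.1. It returns "" for
// comments and any other format.
func hostFromLine(line string) string {
	for _, prefix := range []string{"0.0.0.0 ", "127.0.0.1 "} {
		if strings.HasPrefix(line, prefix) {
			return strings.TrimSpace(line[len(prefix):])
		}
	}
	return ""
}

func main() {
	fmt.Println(hostFromLine("0.0.0.0 ads.example.com")) // ads.example.com
	fmt.Println(hostFromLine("# just a comment") == "")  // true
}
```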
func (o *Operator) readNetList(raw, fileName string) (dups uint64) {
log.Debug("Loading nets list: %s, size: %d", fileName, len(raw))
lines := strings.Split(string(raw), "\n")
for _, line := range lines {
if line == "" || line[0] == '#' {
continue
}
host := core.Trim(line)
if _, found := o.lists[host]; found {
dups++
continue
}
_, netMask, err := net.ParseCIDR(host)
if err != nil {
log.Warning("Error parsing net from list: %s, (%s)", err, fileName)
continue
}
o.lists[host] = netMask
}
lines = nil
log.Info("%d nets loaded, %s", len(o.lists), fileName)
return dups
}
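The values stored by readNetList are *net.IPNet masks, which the network operands later match with Contains. The parse-and-match round trip looks like this:

```go
package main

import (
	"fmt"
	"net"
)

// inNet reports whether ip falls inside the CIDR range, using the
// same net.ParseCIDR + Contains combination readNetList relies on.
func inNet(cidr, ip string) bool {
	_, netMask, err := net.ParseCIDR(cidr)
	if err != nil {
		return false
	}
	return netMask.Contains(net.ParseIP(ip))
}

func main() {
	fmt.Println(inNet("185.53.178.0/24", "185.53.178.14")) // true
	fmt.Println(inNet("185.53.178.0/24", "8.8.8.8"))       // false
}
```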
func (o *Operator) readRegexpList(raw, fileName string) (dups uint64) {
log.Debug("Loading regexp list: %s, size: %d", fileName, len(raw))
lines := strings.Split(string(raw), "\n")
for n, line := range lines {
if line == "" || line[0] == '#' {
continue
}
host := core.Trim(line)
if _, found := o.lists[host]; found {
dups++
continue
}
re, err := regexp.Compile(host)
if err != nil {
log.Warning("Error compiling regexp from list: %s (%s:%d)", err, fileName, n)
continue
}
o.lists[host] = re
}
lines = nil
log.Info("%d regexps loaded, %s", len(o.lists), fileName)
return dups
}
func (o *Operator) readIPList(raw, fileName string) (dups uint64) {
log.Debug("Loading IPs list: %s, size: %d", fileName, len(raw))
lines := strings.Split(string(raw), "\n")
for _, line := range lines {
if line == "" || line[0] == '#' {
continue
}
ip := core.Trim(line)
if _, found := o.lists[ip]; found {
dups++
continue
}
o.lists[ip] = fileName
}
lines = nil
log.Info("%d IPs loaded, %s", len(o.lists), fileName)
return dups
}
func (o *Operator) readLists() error {
o.ClearLists()
var dups uint64
// this list is particular to this operator and rule
o.Lock()
defer o.Unlock()
o.lists = make(map[string]interface{})
expr := filepath.Join(o.Data, "*.*")
fileList, err := filepath.Glob(expr)
if err != nil {
return fmt.Errorf("Error loading domains lists '%s': %s", expr, err)
}
for _, fileName := range fileList {
// ignore hidden files
name := filepath.Base(fileName)
if name[:1] == "." {
continue
}
raw, err := ioutil.ReadFile(fileName)
if err != nil {
log.Warning("Error reading list of IPs (%s): %s", fileName, err)
continue
}
if o.Operand == OpDomainsLists {
dups += o.readDomainsList(string(raw), fileName)
} else if o.Operand == OpDomainsRegexpLists {
dups += o.readRegexpList(string(raw), fileName)
} else if o.Operand == OpNetLists {
dups += o.readNetList(string(raw), fileName)
} else if o.Operand == OpIPLists {
dups += o.readIPList(string(raw), fileName)
} else {
log.Warning("Unknown lists operand type: %s", o.Operand)
}
}
log.Info("%d lists loaded, %d domains, %d duplicated", len(fileList), len(o.lists), dups)
return nil
}
func (o *Operator) loadLists() {
log.Info("loading domains lists: %s, %s, %s", o.Type, o.Operand, o.Data)
// when loading from disk, we don't use the Operator's constructor, so we need to create this channel
if o.exitMonitorChan == nil {
o.exitMonitorChan = make(chan bool)
o.listsMonitorRunning = true
go o.monitorLists()
}
}
// opensnitch-1.6.9/daemon/rule/operator_test.go
package rule
import (
"encoding/json"
"fmt"
"net"
"testing"
"time"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/netstat"
"github.com/evilsocket/opensnitch/daemon/procmon"
)
var (
defaultProcPath = "/usr/bin/opensnitchd"
defaultProcArgs = "-rules-path /etc/opensnitchd/rules/"
defaultDstHost = "opensnitch.io"
defaultDstPort = uint(443)
defaultDstIP = "185.53.178.14"
defaultUserID = 666
netEntry = &netstat.Entry{
UserId: defaultUserID,
}
proc = &procmon.Process{
ID: 12345,
Path: defaultProcPath,
Args: []string{"-rules-path", "/etc/opensnitchd/rules/"},
}
conn = &conman.Connection{
Protocol: "TCP",
SrcPort: 66666,
SrcIP: net.ParseIP("192.168.1.111"),
DstIP: net.ParseIP(defaultDstIP),
DstPort: defaultDstPort,
DstHost: defaultDstHost,
Process: proc,
Entry: netEntry,
}
)
func compileListOperators(list *[]Operator, t *testing.T) {
op := *list
for i := 0; i < len(*list); i++ {
if err := op[i].Compile(); err != nil {
t.Error("NewOperator List, Compile() subitem error:", err)
}
}
}
func unmarshalListData(data string, t *testing.T) (op *[]Operator) {
if err := json.Unmarshal([]byte(data), &op); err != nil {
t.Error("Error unmarshalling list data:", err, data)
return nil
}
return op
}
func restoreConnection() {
conn.Process.Path = defaultProcPath
conn.DstHost = defaultDstHost
conn.DstPort = defaultDstPort
conn.Entry.UserId = defaultUserID
}
func TestNewOperatorSimple(t *testing.T) {
t.Log("Test NewOperator() simple")
var list []Operator
opSimple, err := NewOperator(Simple, false, OpTrue, "", list)
if err != nil {
t.Error("NewOperator simple.err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Fail()
}
if opSimple.Match(nil) == false {
t.Error("Test NewOperator() simple.case-insensitive doesn't match")
t.Fail()
}
t.Run("Operator Simple proc.id", func(t *testing.T) {
// proc.id not sensitive
opSimple, err = NewOperator(Simple, false, OpProcessID, "12345", list)
if err != nil {
t.Error("NewOperator simple.case-insensitive.proc.id err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Error("NewOperator simple.case-insensitive.proc.id Compile() err:", err)
t.Fail()
}
if opSimple.Match(conn) == false {
t.Error("Test NewOperator() simple proc.id doesn't match")
t.Fail()
}
})
opSimple, err = NewOperator(Simple, false, OpProcessPath, defaultProcPath, list)
t.Run("Operator Simple proc.path case-insensitive", func(t *testing.T) {
// proc path not sensitive
if err != nil {
t.Error("NewOperator simple proc.path err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Error("NewOperator simple.case-insensitive.proc.path Compile() err:", err)
t.Fail()
}
if opSimple.Match(conn) == false {
t.Error("Test NewOperator() simple proc.path doesn't match")
t.Fail()
}
})
t.Run("Operator Simple proc.path sensitive", func(t *testing.T) {
// proc path sensitive
opSimple.Sensitive = true
conn.Process.Path = "/usr/bin/OpenSnitchd"
if opSimple.Match(conn) == true {
t.Error("Test NewOperator() simple proc.path sensitive match")
t.Fail()
}
})
opSimple, err = NewOperator(Simple, false, OpDstHost, defaultDstHost, list)
t.Run("Operator Simple con.dstHost case-insensitive", func(t *testing.T) {
// proc dst host not sensitive
if err != nil {
t.Error("NewOperator simple proc.path err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Error("NewOperator simple.case-insensitive.dstHost Compile() err:", err)
t.Fail()
}
if opSimple.Match(conn) == false {
t.Error("Test NewOperator() simple.conn.dstHost.not-sensitive doesn't match")
t.Fail()
}
})
t.Run("Operator Simple con.dstHost case-insensitive different host", func(t *testing.T) {
conn.DstHost = "www.opensnitch.io"
if opSimple.Match(conn) == true {
t.Error("Test NewOperator() simple.conn.dstHost.not-sensitive doesn't MATCH")
t.Fail()
}
})
t.Run("Operator Simple con.dstHost sensitive", func(t *testing.T) {
// proc dst host sensitive
opSimple, err = NewOperator(Simple, true, OpDstHost, "OpEnsNitCh.io", list)
if err != nil {
t.Error("NewOperator simple.dstHost.sensitive err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Error("NewOperator simple.dstHost.sensitive Compile() err:", err)
t.Fail()
}
conn.DstHost = "OpEnsNitCh.io"
if opSimple.Match(conn) == false {
t.Error("Test NewOperator() simple.dstHost.sensitive doesn't match")
t.Fail()
}
})
t.Run("Operator Simple proc.args case-insensitive", func(t *testing.T) {
// proc args case-insensitive
opSimple, err = NewOperator(Simple, false, OpProcessCmd, defaultProcArgs, list)
if err != nil {
t.Error("NewOperator simple proc.args err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Error("NewOperator simple proc.args Compile() err: ", err)
t.Fail()
}
if opSimple.Match(conn) == false {
t.Error("Test NewOperator() simple proc.args doesn't match")
t.Fail()
}
})
t.Run("Operator Simple con.dstIp case-insensitive", func(t *testing.T) {
// proc dstIp case-insensitive
opSimple, err = NewOperator(Simple, false, OpDstIP, defaultDstIP, list)
if err != nil {
t.Error("NewOperator simple conn.dstip.err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Error("NewOperator simple con.dstIp Compile() err: ", err)
t.Fail()
}
if opSimple.Match(conn) == false {
t.Error("Test NewOperator() simple conn.dstip doesn't match")
t.Fail()
}
})
t.Run("Operator Simple UserId case-insensitive", func(t *testing.T) {
// conn.uid case-insensitive
opSimple, err = NewOperator(Simple, false, OpUserID, fmt.Sprint(defaultUserID), list)
if err != nil {
t.Error("NewOperator simple conn.userid.err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Error("NewOperator simple UserId Compile() err: ", err)
t.Fail()
}
if opSimple.Match(conn) == false {
t.Error("Test NewOperator() simple conn.userid doesn't match")
t.Fail()
}
})
restoreConnection()
}
func TestNewOperatorNetwork(t *testing.T) {
t.Log("Test NewOperator() network")
var dummyList []Operator
opSimple, err := NewOperator(Network, false, OpDstNetwork, "185.53.178.14/24", dummyList)
if err != nil {
t.Error("NewOperator network.err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Fail()
}
if opSimple.Match(conn) == false {
t.Error("Test NewOperator() network doesn't match")
t.Fail()
}
opSimple, err = NewOperator(Network, false, OpDstNetwork, "8.8.8.8/24", dummyList)
if err != nil {
t.Error("NewOperator network.err should be nil: ", err)
t.Fail()
}
if err = opSimple.Compile(); err != nil {
t.Fail()
}
if opSimple.Match(conn) == true {
t.Error("Test NewOperator() network doesn't match:", conn.DstIP)
t.Fail()
}
restoreConnection()
}
func TestNewOperatorRegexp(t *testing.T) {
t.Log("Test NewOperator() regexp")
var dummyList []Operator
opRE, err := NewOperator(Regexp, false, OpProto, "^TCP$", dummyList)
if err != nil {
t.Error("NewOperator regexp.err should be nil: ", err)
t.Fail()
}
if err = opRE.Compile(); err != nil {
t.Fail()
}
if opRE.Match(conn) == false {
t.Error("Test NewOperator() regexp doesn't match")
t.Fail()
}
restoreConnection()
}
func TestNewOperatorInvalidRegexp(t *testing.T) {
t.Log("Test NewOperator() invalid regexp")
var dummyList []Operator
opRE, err := NewOperator(Regexp, false, OpProto, "^TC(P$", dummyList)
if err != nil {
t.Error("NewOperator regexp.err should be nil: ", err)
t.Fail()
}
if err = opRE.Compile(); err == nil {
t.Error("NewOperator() invalid regexp. It should fail: ", err)
t.Fail()
}
restoreConnection()
}
func TestNewOperatorRegexpSensitive(t *testing.T) {
t.Log("Test NewOperator() regexp sensitive")
var dummyList []Operator
sensitive := Sensitive(true)
conn.Process.Path = "/tmp/cUrL"
opRE, err := NewOperator(Regexp, sensitive, OpProcessPath, "^/tmp/cUrL$", dummyList)
if err != nil {
t.Error("NewOperator regexp.case-sensitive.err should be nil: ", err)
t.Fail()
}
if err = opRE.Compile(); err != nil {
t.Fail()
}
if opRE.Match(conn) == false {
t.Error("Test NewOperator() RE sensitive doesn't match:", conn.Process.Path)
t.Fail()
}
t.Run("Operator regexp proc.path case-sensitive", func(t *testing.T) {
conn.Process.Path = "/tmp/curl"
if opRE.Match(conn) == true {
t.Error("Test NewOperator() RE sensitive match:", conn.Process.Path)
t.Fail()
}
})
opRE, err = NewOperator(Regexp, !sensitive, OpProcessPath, "^/tmp/cUrL$", dummyList)
if err != nil {
t.Error("NewOperator regexp.case-insensitive.err should be nil: ", err)
t.Fail()
}
if err = opRE.Compile(); err != nil {
t.Fail()
}
if opRE.Match(conn) == false {
t.Error("Test NewOperator() RE not sensitive match:", conn.Process.Path)
t.Fail()
}
restoreConnection()
}
func TestNewOperatorList(t *testing.T) {
t.Log("Test NewOperator() List")
var list []Operator
listData := `[{"type": "simple", "operand": "dest.ip", "data": "185.53.178.14", "sensitive": false}, {"type": "simple", "operand": "dest.port", "data": "443", "sensitive": false}]`
// simple list
opList, err := NewOperator(List, false, OpProto, listData, list)
t.Run("Operator List simple case-insensitive", func(t *testing.T) {
if err != nil {
t.Error("NewOperator list.regexp.err should be nil: ", err)
t.Fail()
}
if err = opList.Compile(); err != nil {
t.Fail()
}
opList.List = *unmarshalListData(opList.Data, t)
compileListOperators(&opList.List, t)
if opList.Match(conn) == false {
t.Error("Test NewOperator() list simple doesn't match")
t.Fail()
}
})
t.Run("Operator List regexp case-insensitive", func(t *testing.T) {
// list with regexp, case-insensitive
listData = `[{"type": "regexp", "operand": "process.path", "data": "^/usr/bin/.*", "sensitive": false},{"type": "simple", "operand": "dest.ip", "data": "185.53.178.14", "sensitive": false}, {"type": "simple", "operand": "dest.port", "data": "443", "sensitive": false}]`
opList.List = *unmarshalListData(listData, t)
compileListOperators(&opList.List, t)
if err = opList.Compile(); err != nil {
t.Fail()
}
if opList.Match(conn) == false {
t.Error("Test NewOperator() list regexp doesn't match")
t.Fail()
}
})
t.Run("Operator List regexp case-sensitive", func(t *testing.T) {
// list with regexp, case-sensitive
// "data": "^/usr/BiN/.*" must match conn.Process.Path (sensitive)
listData = `[{"type": "regexp", "operand": "process.path", "data": "^/usr/BiN/.*", "sensitive": false},{"type": "simple", "operand": "dest.ip", "data": "185.53.178.14", "sensitive": false}, {"type": "simple", "operand": "dest.port", "data": "443", "sensitive": false}]`
opList.List = *unmarshalListData(listData, t)
compileListOperators(&opList.List, t)
conn.Process.Path = "/usr/BiN/opensnitchd"
opList.Sensitive = true
if err = opList.Compile(); err != nil {
t.Fail()
}
if opList.Match(conn) == false {
t.Error("Test NewOperator() list.regexp.sensitive doesn't match:", conn.Process.Path)
t.Fail()
}
})
t.Run("Operator List regexp case-insensitive 2", func(t *testing.T) {
// "data": "^/usr/BiN/.*" must not match conn.Process.Path (insensitive)
opList.Sensitive = false
conn.Process.Path = "/USR/BiN/opensnitchd"
if err = opList.Compile(); err != nil {
t.Fail()
}
if opList.Match(conn) == false {
t.Error("Test NewOperator() list.regexp.insensitive match:", conn.Process.Path)
t.Fail()
}
})
t.Run("Operator List regexp case-insensitive 3", func(t *testing.T) {
// "data": "^/usr/BiN/.*" must match conn.Process.Path (insensitive)
opList.Sensitive = false
conn.Process.Path = "/USR/bin/opensnitchd"
if err = opList.Compile(); err != nil {
t.Fail()
}
if opList.Match(conn) == false {
t.Error("Test NewOperator() list.regexp.insensitive match:", conn.Process.Path)
t.Fail()
}
})
restoreConnection()
}
func TestNewOperatorListsSimple(t *testing.T) {
t.Log("Test NewOperator() Lists simple")
var dummyList []Operator
opLists, err := NewOperator(Lists, false, OpDomainsLists, "testdata/lists/domains/", dummyList)
if err != nil {
t.Error("NewOperator Lists, shouldn't be nil: ", err)
t.Fail()
}
if err = opLists.Compile(); err != nil {
t.Error("NewOperator Lists, Compile() error:", err)
}
time.Sleep(time.Second)
t.Log("testing Lists, DstHost:", conn.DstHost)
// The list contains 4 lines, 1 is a comment and there's a domain duplicated.
// We should only load lines that start with 0.0.0.0 or 127.0.0.1
if len(opLists.lists) != 2 {
t.Error("NewOperator Lists, number of domains error:", opLists.lists, len(opLists.lists))
}
if opLists.Match(conn) == false {
t.Error("Test NewOperator() lists doesn't match")
}
opLists.StopMonitoringLists()
time.Sleep(time.Second)
opLists.Lock()
if len(opLists.lists) != 0 {
t.Error("NewOperator Lists, number should be 0 after stop:", opLists.lists, len(opLists.lists))
}
opLists.Unlock()
restoreConnection()
}
func TestNewOperatorListsIPs(t *testing.T) {
t.Log("Test NewOperator() Lists ips")
var subOp *Operator
var list []Operator
listData := `[{"type": "simple", "operand": "user.id", "data": "666", "sensitive": false}, {"type": "lists", "operand": "lists.ips", "data": "testdata/lists/ips/", "sensitive": false}]`
opLists, err := NewOperator(List, false, OpList, listData, list)
if err != nil {
t.Error("NewOperator Lists domains_regexp, shouldn't be nil: ", err)
t.Fail()
}
if err := opLists.Compile(); err != nil {
t.Error("NewOperator Lists domains_regexp, Compile() error:", err)
}
opLists.List = *unmarshalListData(opLists.Data, t)
for i := 0; i < len(opLists.List); i++ {
if err := opLists.List[i].Compile(); err != nil {
t.Error("NewOperator Lists domains_regexp, Compile() subitem error:", err)
}
if opLists.List[i].Type == Lists {
subOp = &opLists.List[i]
}
}
time.Sleep(time.Second)
if opLists.Match(conn) == false {
t.Error("Test NewOperator() Lists domains_regexp, doesn't match:", conn.DstHost)
}
subOp.Lock()
listslen := len(subOp.lists)
subOp.Unlock()
if listslen != 2 {
t.Error("NewOperator Lists domains_regexp, number of domains error:", subOp.lists)
}
//t.Log("checking lists.domains_regexp:", tries, conn.DstHost)
if opLists.Match(conn) == false {
// we don't care about if it matches, we're testing race conditions
t.Log("Test NewOperator() Lists domains_regexp, doesn't match:", conn.DstHost)
}
subOp.StopMonitoringLists()
time.Sleep(time.Second)
subOp.Lock()
if len(subOp.lists) != 0 {
t.Error("NewOperator Lists number should be 0:", subOp.lists, len(subOp.lists))
}
subOp.Unlock()
restoreConnection()
}
func TestNewOperatorListsNETs(t *testing.T) {
t.Log("Test NewOperator() Lists nets")
var subOp *Operator
var list []Operator
listData := `[{"type": "simple", "operand": "user.id", "data": "666", "sensitive": false}, {"type": "lists", "operand": "lists.nets", "data": "testdata/lists/nets/", "sensitive": false}]`
opLists, err := NewOperator(List, false, OpList, listData, list)
if err != nil {
t.Error("NewOperator Lists domains_regexp, shouldn't be nil: ", err)
t.Fail()
}
if err := opLists.Compile(); err != nil {
t.Error("NewOperator Lists domains_regexp, Compile() error:", err)
}
opLists.List = *unmarshalListData(opLists.Data, t)
for i := 0; i < len(opLists.List); i++ {
if err := opLists.List[i].Compile(); err != nil {
t.Error("NewOperator Lists domains_regexp, Compile() subitem error:", err)
}
if opLists.List[i].Type == Lists {
subOp = &opLists.List[i]
}
}
time.Sleep(time.Second)
if opLists.Match(conn) == false {
t.Error("Test NewOperator() Lists domains_regexp, doesn't match:", conn.DstHost)
}
subOp.Lock()
listslen := len(subOp.lists)
subOp.Unlock()
if listslen != 2 {
t.Error("NewOperator Lists domains_regexp, number of domains error:", subOp.lists)
}
//t.Log("checking lists.domains_regexp:", tries, conn.DstHost)
if opLists.Match(conn) == false {
// we don't care about if it matches, we're testing race conditions
t.Log("Test NewOperator() Lists domains_regexp, doesn't match:", conn.DstHost)
}
subOp.StopMonitoringLists()
time.Sleep(time.Second)
subOp.Lock()
if len(subOp.lists) != 0 {
t.Error("NewOperator Lists number should be 0:", subOp.lists, len(subOp.lists))
}
subOp.Unlock()
restoreConnection()
}
func TestNewOperatorListsComplex(t *testing.T) {
t.Log("Test NewOperator() Lists complex")
var subOp *Operator
var list []Operator
listData := `[{"type": "simple", "operand": "user.id", "data": "666", "sensitive": false}, {"type": "lists", "operand": "lists.domains", "data": "testdata/lists/domains/", "sensitive": false}]`
opLists, err := NewOperator(List, false, OpList, listData, list)
if err != nil {
t.Error("NewOperator Lists complex, shouldn't be nil: ", err)
t.Fail()
}
if err := opLists.Compile(); err != nil {
t.Error("NewOperator Lists complex, Compile() error:", err)
}
opLists.List = *unmarshalListData(opLists.Data, t)
for i := 0; i < len(opLists.List); i++ {
if err := opLists.List[i].Compile(); err != nil {
t.Error("NewOperator Lists complex, Compile() subitem error:", err)
}
if opLists.List[i].Type == Lists {
subOp = &opLists.List[i]
}
}
time.Sleep(time.Second)
subOp.Lock()
if len(subOp.lists) != 2 {
t.Error("NewOperator Lists complex, number of domains error:", subOp.lists)
}
subOp.Unlock()
if opLists.Match(conn) == false {
t.Error("Test NewOperator() Lists complex, doesn't match")
}
subOp.StopMonitoringLists()
time.Sleep(time.Second)
subOp.Lock()
if len(subOp.lists) != 0 {
t.Error("NewOperator Lists number should be 0:", subOp.lists, len(subOp.lists))
}
subOp.Unlock()
restoreConnection()
}
func TestNewOperatorListsDomainsRegexp(t *testing.T) {
t.Log("Test NewOperator() Lists domains_regexp")
var subOp *Operator
var list []Operator
listData := `[{"type": "simple", "operand": "user.id", "data": "666", "sensitive": false}, {"type": "lists", "operand": "lists.domains_regexp", "data": "testdata/lists/regexp/", "sensitive": false}]`
opLists, err := NewOperator(List, false, OpList, listData, list)
if err != nil {
t.Error("NewOperator Lists domains_regexp, shouldn't be nil: ", err)
t.Fail()
}
if err := opLists.Compile(); err != nil {
t.Error("NewOperator Lists domains_regexp, Compile() error:", err)
}
opLists.List = *unmarshalListData(opLists.Data, t)
for i := 0; i < len(opLists.List); i++ {
if err := opLists.List[i].Compile(); err != nil {
t.Error("NewOperator Lists domains_regexp, Compile() subitem error:", err)
}
if opLists.List[i].Type == Lists {
subOp = &opLists.List[i]
}
}
time.Sleep(time.Second)
if opLists.Match(conn) == false {
t.Error("Test NewOperator() Lists domains_regexp, doesn't match:", conn.DstHost)
}
subOp.Lock()
listslen := len(subOp.lists)
subOp.Unlock()
if listslen != 2 {
t.Error("NewOperator Lists domains_regexp, number of domains error:", subOp.lists)
}
//t.Log("checking lists.domains_regexp:", tries, conn.DstHost)
if opLists.Match(conn) == false {
// we don't care about if it matches, we're testing race conditions
t.Log("Test NewOperator() Lists domains_regexp, doesn't match:", conn.DstHost)
}
subOp.StopMonitoringLists()
time.Sleep(time.Second)
subOp.Lock()
if len(subOp.lists) != 0 {
t.Error("NewOperator Lists number should be 0:", subOp.lists, len(subOp.lists))
}
subOp.Unlock()
restoreConnection()
}
// Must be launched with -race, to verify that we don't cause data races.
// A race occurred on operator.go:241, reListCmp().MatchString(),
// fixed here: 53419fe
func TestRaceNewOperatorListsDomainsRegexp(t *testing.T) {
t.Log("Test NewOperator() Lists domains_regexp")
var subOp *Operator
var list []Operator
listData := `[{"type": "simple", "operand": "user.id", "data": "666", "sensitive": false}, {"type": "lists", "operand": "lists.domains_regexp", "data": "testdata/lists/regexp/", "sensitive": false}]`
opLists, err := NewOperator(List, false, OpList, listData, list)
if err != nil {
t.Error("NewOperator Lists domains_regexp, shouldn't be nil: ", err)
t.Fail()
}
if err := opLists.Compile(); err != nil {
t.Error("NewOperator Lists domains_regexp, Compile() error:", err)
}
opLists.List = *unmarshalListData(opLists.Data, t)
for i := 0; i < len(opLists.List); i++ {
if err := opLists.List[i].Compile(); err != nil {
t.Error("NewOperator Lists domains_regexp, Compile() subitem error:", err)
}
if opLists.List[i].Type == Lists {
subOp = &opLists.List[i]
}
}
// touch domains list in background, to force a reload.
go func() {
touches := 1000
for {
if touches < 0 {
break
}
core.Exec("/bin/touch", []string{"testdata/lists/regexp/domainsregexp.txt"})
touches--
time.Sleep(100 * time.Millisecond)
//t.Log("touching:", touches)
}
}()
time.Sleep(time.Second)
subOp.Lock()
listslen := len(subOp.lists)
subOp.Unlock()
if listslen != 2 {
t.Error("NewOperator Lists domains_regexp, number of domains error:", subOp.lists)
}
tries := 10000
for {
if tries < 0 {
break
}
//t.Log("checking lists.domains_regexp:", tries, conn.DstHost)
if opLists.Match(conn) == false {
// we don't care about if it matches, we're testing race conditions
t.Log("Test NewOperator() Lists domains_regexp, doesn't match:", conn.DstHost)
}
tries--
time.Sleep(10 * time.Millisecond)
}
subOp.StopMonitoringLists()
time.Sleep(time.Second)
subOp.Lock()
if len(subOp.lists) != 0 {
t.Error("NewOperator Lists number should be 0:", subOp.lists, len(subOp.lists))
}
subOp.Unlock()
restoreConnection()
}
// opensnitch-1.6.9/daemon/rule/rule.go
package rule
import (
"fmt"
"time"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
// Action of a rule
type Action string
// Actions of rules
const (
Allow = Action("allow")
Deny = Action("deny")
Reject = Action("reject")
)
// Duration of a rule
type Duration string
// Possible rule durations
const (
Once = Duration("once")
Restart = Duration("until restart")
Always = Duration("always")
)
// Rule represents an action on a connection.
// The fields match the ones saved as json to disk.
// If a .json rule file is modified on disk, it's reloaded automatically.
type Rule struct {
// Save date fields as string, to avoid issues marshalling Time (#1140).
Created string `json:"created"`
Updated string `json:"updated"`
Name string `json:"name"`
Description string `json:"description"`
Action Action `json:"action"`
Duration Duration `json:"duration"`
Operator Operator `json:"operator"`
Enabled bool `json:"enabled"`
Precedence bool `json:"precedence"`
Nolog bool `json:"nolog"`
}
// Create creates a new rule object with the specified parameters.
func Create(name, description string, enabled, precedence, nolog bool, action Action, duration Duration, op *Operator) *Rule {
return &Rule{
Created: time.Now().Format(time.RFC3339),
Enabled: enabled,
Precedence: precedence,
Nolog: nolog,
Name: name,
Description: description,
Action: action,
Duration: duration,
Operator: *op,
}
}
func (r *Rule) String() string {
enabled := "Disabled"
if r.Enabled {
enabled = "Enabled"
}
return fmt.Sprintf("[%s] %s: if(%s){ %s %s }", enabled, r.Name, r.Operator.String(), r.Action, r.Duration)
}
// Match evaluates the rule's operator against a connection, to determine
// whether the connection must be allowed or denied.
func (r *Rule) Match(con *conman.Connection) bool {
return r.Operator.Match(con)
}
// Deserialize translates back the rule received to a Rule object
func Deserialize(reply *protocol.Rule) (*Rule, error) {
if reply.Operator == nil {
log.Warning("Deserialize rule, Operator nil")
return nil, fmt.Errorf("invalid operator")
}
operator, err := NewOperator(
Type(reply.Operator.Type),
Sensitive(reply.Operator.Sensitive),
Operand(reply.Operator.Operand),
reply.Operator.Data,
make([]Operator, 0),
)
if err != nil {
log.Warning("Deserialize rule, NewOperator() error: %s", err)
return nil, err
}
newRule := Create(
reply.Name,
reply.Description,
reply.Enabled,
reply.Precedence,
reply.Nolog,
Action(reply.Action),
Duration(reply.Duration),
operator,
)
if Type(reply.Operator.Type) == List {
newRule.Operator.Data = ""
reply.Operator.Data = ""
for i := 0; i < len(reply.Operator.List); i++ {
newRule.Operator.List = append(
newRule.Operator.List,
Operator{
Type: Type(reply.Operator.List[i].Type),
Sensitive: Sensitive(reply.Operator.List[i].Sensitive),
Operand: Operand(reply.Operator.List[i].Operand),
Data: string(reply.Operator.List[i].Data),
},
)
}
}
return newRule, nil
}
// Serialize translates a Rule to the protocol object
func (r *Rule) Serialize() *protocol.Rule {
if r == nil {
return nil
}
created, err := time.Parse(time.RFC3339, r.Created)
if err != nil {
log.Warning("Error parsing rule Created date (it should be in RFC3339 format): %s (%s)", err, string(r.Name))
created = time.Now()
log.Warning("using current time instead: %s", created)
}
protoRule := &protocol.Rule{
Created: created.Unix(),
Name: string(r.Name),
Description: string(r.Description),
Enabled: bool(r.Enabled),
Precedence: bool(r.Precedence),
Nolog: bool(r.Nolog),
Action: string(r.Action),
Duration: string(r.Duration),
Operator: &protocol.Operator{
Type: string(r.Operator.Type),
Sensitive: bool(r.Operator.Sensitive),
Operand: string(r.Operator.Operand),
Data: string(r.Operator.Data),
},
}
if r.Operator.Type == List {
r.Operator.Data = ""
for i := 0; i < len(r.Operator.List); i++ {
protoRule.Operator.List = append(protoRule.Operator.List,
&protocol.Operator{
Type: string(r.Operator.List[i].Type),
Sensitive: bool(r.Operator.List[i].Sensitive),
Operand: string(r.Operator.List[i].Operand),
Data: string(r.Operator.List[i].Data),
})
}
}
return protoRule
}
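Rules persist Created as an RFC3339 string (see the comment on the struct, #1140), while the protocol carries Unix seconds; Serialize bridges the two. The conversion, isolated:

```go
package main

import (
	"fmt"
	"time"
)

// createdToUnix converts a rule's RFC3339 Created string to the Unix
// timestamp Serialize puts on the wire; on a parse error it falls
// back to the current time, as the daemon does.
func createdToUnix(created string) int64 {
	t, err := time.Parse(time.RFC3339, created)
	if err != nil {
		t = time.Now()
	}
	return t.Unix()
}

func main() {
	fmt.Println(createdToUnix("2024-01-02T03:04:05Z")) // 1704164645
}
```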
// opensnitch-1.6.9/daemon/rule/rule_test.go
package rule
import (
"testing"
)
func TestCreate(t *testing.T) {
t.Log("Test: Create rule")
var list []Operator
oper, _ := NewOperator(Simple, false, OpTrue, "", list)
r := Create("000-test-name", "rule description 000", true, false, false, Allow, Once, oper)
t.Run("New rule must not be nil", func(t *testing.T) {
if r == nil {
t.Error("Create() returned nil")
t.Fail()
}
})
t.Run("Rule name must be 000-test-name", func(t *testing.T) {
if r.Name != "000-test-name" {
t.Error("Rule name error:", r.Name)
t.Fail()
}
})
t.Run("Rule must be enabled", func(t *testing.T) {
if r.Enabled == false {
t.Error("Rule Enabled is false:", r)
t.Fail()
}
})
t.Run("Rule Precedence must be false", func(t *testing.T) {
if r.Precedence == true {
t.Error("Rule Precedence is true:", r)
t.Fail()
}
})
t.Run("Rule Action must be Allow", func(t *testing.T) {
if r.Action != Allow {
t.Error("Rule Action is not Allow:", r.Action)
t.Fail()
}
})
t.Run("Rule Duration should be Once", func(t *testing.T) {
if r.Duration != Once {
t.Error("Rule Duration is not Once:", r.Duration)
t.Fail()
}
})
}
func TestRuleSerializers(t *testing.T) {
t.Log("Test: Serializers()")
var opList []Operator
opList = append(opList, Operator{
Type: Simple,
Operand: OpProcessPath,
Data: "/path/x",
})
opList = append(opList, Operator{
Type: Simple,
Operand: OpDstPort,
Data: "23",
})
op, _ := NewOperator(List, false, OpTrue, "", opList)
// this string must be erased after Deserialize()
op.Data = "[\"test\": true]"
r := Create("000-test-serializer-list", "rule description 000", true, false, false, Allow, Once, op)
rSerialized := r.Serialize()
t.Run("Serialize() must not return nil", func(t *testing.T) {
if rSerialized == nil {
t.Error("rule.Serialize() returned nil")
t.Fail()
}
})
rDeser, err := Deserialize(rSerialized)
t.Run("Deserialize must not return error", func(t *testing.T) {
if err != nil {
t.Error("rule.Deserialize() returned error:", err)
t.Fail()
}
})
// commit: b93051026e6a82ba07a5ac2f072880e69f04c238
t.Run("Deserialize. Operator.Data must be empty", func(t *testing.T) {
if rDeser.Operator.Data != "" {
t.Error("rule.Deserialize() Operator.Data not emptied:", rDeser.Operator.Data)
t.Fail()
}
})
t.Run("Deserialize. Operator.List must be expanded", func(t *testing.T) {
if len(rDeser.Operator.List) != 2 {
t.Error("rule.Deserialize() invalid Operator.List:", rDeser.Operator.List)
t.Fail()
}
if rDeser.Operator.List[0].Operand != OpProcessPath {
t.Error("rule.Deserialize() invalid Operator.List 1:", rDeser.Operator.List)
t.Fail()
}
if rDeser.Operator.List[1].Operand != OpDstPort {
t.Error("rule.Deserialize() invalid Operator.List 2:", rDeser.Operator.List)
t.Fail()
}
if rDeser.Operator.List[0].Type != Simple || rDeser.Operator.List[1].Type != Simple {
t.Error("rule.Deserialize() invalid Operator.List 3:", rDeser.Operator.List)
t.Fail()
}
if rDeser.Operator.List[0].Data != "/path/x" || rDeser.Operator.List[1].Data != "23" {
t.Error("rule.Deserialize() invalid Operator.List 4:", rDeser.Operator.List)
t.Fail()
}
})
}
opensnitch-1.6.9/daemon/rule/testdata/ 0000775 0000000 0000000 00000000000 15003540030 0017712 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/rule/testdata/000-allow-chrome.json 0000664 0000000 0000000 00000000570 15003540030 0023475 0 ustar 00root root 0000000 0000000 {
"created": "2020-12-13T18:06:52.209804547+01:00",
"updated": "2020-12-13T18:06:52.209857713+01:00",
"name": "000-allow-chrome",
"enabled": true,
"precedence": true,
"action": "allow",
"duration": "always",
"operator": {
"type": "simple",
"operand": "process.path",
"sensitive": false,
"data": "/opt/google/chrome/chrome",
"list": []
}
} opensnitch-1.6.9/daemon/rule/testdata/001-deny-chrome.json 0000664 0000000 0000000 00000000567 15003540030 0023325 0 ustar 00root root 0000000 0000000 {
"created": "2020-12-13T17:54:49.067148304+01:00",
"updated": "2020-12-13T17:54:49.067213602+01:00",
"name": "001-deny-chrome",
"enabled": true,
"precedence": false,
"action": "deny",
"duration": "always",
"operator": {
"type": "simple",
"operand": "process.path",
"sensitive": false,
"data": "/opt/google/chrome/chrome",
"list": []
}
} opensnitch-1.6.9/daemon/rule/testdata/invalid-regexp-list.json 0000664 0000000 0000000 00000001526 15003540030 0024500 0 ustar 00root root 0000000 0000000 {
"created": "2020-12-13T18:06:52.209804547+01:00",
"updated": "2020-12-13T18:06:52.209857713+01:00",
"name": "invalid-regexp-list",
"enabled": true,
"precedence": true,
"action": "allow",
"duration": "always",
"operator": {
"type": "list",
"operand": "list",
"sensitive": false,
"data": "[{\"type\": \"regexp\", \"operand\": \"process.path\", \"sensitive\": false, \"data\": \"^(/di(rmngr$\"}, {\"type\": \"simple\", \"operand\": \"dest.port\", \"data\": \"53\", \"sensitive\": false}]",
"list": [
{
"type": "regexp",
"operand": "process.path",
"sensitive": false,
"data": "^(/di(rmngr)$",
"list": null
},
{
"type": "simple",
"operand": "dest.port",
"sensitive": false,
"data": "53",
"list": null
}
]
}
}
opensnitch-1.6.9/daemon/rule/testdata/invalid-regexp.json 0000664 0000000 0000000 00000000574 15003540030 0023531 0 ustar 00root root 0000000 0000000 {
"created": "2020-12-13T18:06:52.209804547+01:00",
"updated": "2020-12-13T18:06:52.209857713+01:00",
"name": "invalid-regexp",
"enabled": true,
"precedence": true,
"action": "allow",
"duration": "always",
"operator": {
"type": "regexp",
"operand": "process.path",
"sensitive": false,
"data": "/opt/((.*)google/chrome/chrome",
"list": []
}
}
opensnitch-1.6.9/daemon/rule/testdata/lists/ 0000775 0000000 0000000 00000000000 15003540030 0021050 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/rule/testdata/lists/domains/ 0000775 0000000 0000000 00000000000 15003540030 0022502 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/rule/testdata/lists/domains/domainlists.txt 0000664 0000000 0000000 00000000164 15003540030 0025572 0 ustar 00root root 0000000 0000000 # this line must be ignored, 0.0.0.0 www.test.org
0.0.0.0 www.test.org
127.0.0.1 www.test.org
0.0.0.0 opensnitch.io
opensnitch-1.6.9/daemon/rule/testdata/lists/ips/ 0000775 0000000 0000000 00000000000 15003540030 0021643 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/rule/testdata/lists/ips/ips.txt 0000664 0000000 0000000 00000000227 15003540030 0023200 0 ustar 00root root 0000000 0000000 # this line must be ignored, 0.0.0.0 www.test.org
# empty lines are also ignored
1.1.1.1
185.53.178.14
# duplicated entries should be ignored
1.1.1.1
opensnitch-1.6.9/daemon/rule/testdata/lists/nets/ 0000775 0000000 0000000 00000000000 15003540030 0022021 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/rule/testdata/lists/nets/nets.txt 0000664 0000000 0000000 00000000240 15003540030 0023527 0 ustar 00root root 0000000 0000000 # this line must be ignored, 0.0.0.0 www.test.org
# empty lines are also ignored
1.1.1.0/24
185.53.178.0/24
# duplicated entries should be ignored
1.1.1.0/24
opensnitch-1.6.9/daemon/rule/testdata/lists/regexp/ 0000775 0000000 0000000 00000000000 15003540030 0022342 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/rule/testdata/lists/regexp/domainsregexp.txt 0000664 0000000 0000000 00000000132 15003540030 0025744 0 ustar 00root root 0000000 0000000 # this line must be ignored, 0.0.0.0 www.test.org
www.test.org
www.test.org
opensnitch.io
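The list files above follow hosts-file conventions: lines starting with `#` are skipped, empty lines are ignored, and duplicates collapse to a single entry. A loader for that format can be sketched as follows (a standalone illustration, not the daemon's actual list loader):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseHostsList extracts unique entries from a hosts-style list:
// comment and empty lines are ignored, and for "IP host" pairs only
// the host field is kept; bare entries are kept as-is.
func parseHostsList(content string) map[string]bool {
	hosts := make(map[string]bool)
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		fields := strings.Fields(line)
		hosts[fields[len(fields)-1]] = true
	}
	return hosts
}

func main() {
	list := "# comment\n\n0.0.0.0 www.test.org\n127.0.0.1 www.test.org\n0.0.0.0 opensnitch.io\n"
	hosts := parseHostsList(list)
	fmt.Println(len(hosts), hosts["www.test.org"], hosts["opensnitch.io"])
}
```

Using a map both deduplicates entries and gives O(1) lookups when matching connections against the list.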
opensnitch-1.6.9/daemon/rule/testdata/live_reload/ 0000775 0000000 0000000 00000000000 15003540030 0022177 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/rule/testdata/live_reload/test-live-reload-delete.json 0000664 0000000 0000000 00000000620 15003540030 0027510 0 ustar 00root root 0000000 0000000 {
"created": "2020-12-13T18:06:52.209804547+01:00",
"updated": "2020-12-13T18:06:52.209857713+01:00",
"name": "test-live-reload-delete",
"enabled": true,
"precedence": true,
"action": "deny",
"duration": "always",
"operator": {
"type": "simple",
"operand": "process.path",
"sensitive": false,
"data": "/usr/bin/curl",
"list": []
}
} opensnitch-1.6.9/daemon/rule/testdata/live_reload/test-live-reload-remove.json 0000664 0000000 0000000 00000000620 15003540030 0027543 0 ustar 00root root 0000000 0000000 {
"created": "2020-12-13T18:06:52.209804547+01:00",
"updated": "2020-12-13T18:06:52.209857713+01:00",
"name": "test-live-reload-remove",
"enabled": true,
"precedence": true,
"action": "deny",
"duration": "always",
"operator": {
"type": "simple",
"operand": "process.path",
"sensitive": false,
"data": "/usr/bin/curl",
"list": []
}
} opensnitch-1.6.9/daemon/rule/testdata/rule-disabled-operator-list-expanded.json 0000664 0000000 0000000 00000001253 15003540030 0027712 0 ustar 00root root 0000000 0000000 {
"created": "2023-10-03T18:06:52.209804547+01:00",
"updated": "2023-10-03T18:06:52.209857713+01:00",
"name": "rule-disabled-with-operators-list-expanded",
"enabled": false,
"precedence": true,
"action": "allow",
"duration": "always",
"operator": {
"type": "list",
"operand": "list",
"sensitive": false,
"data": "",
"list": [
{
"type": "simple",
"operand": "process.path",
"sensitive": false,
"data": "/usr/bin/telnet",
"list": null
},
{
"type": "simple",
"operand": "dest.port",
"sensitive": false,
"data": "53",
"list": null
}
]
}
}
opensnitch-1.6.9/daemon/rule/testdata/rule-disabled-operator-list.json 0000664 0000000 0000000 00000001104 15003540030 0026117 0 ustar 00root root 0000000 0000000 {
"created": "2023-10-03T18:06:52.209804547+01:00",
"updated": "2023-10-03T18:06:52.209857713+01:00",
"name": "rule-disabled-with-operators-list-as-json-string",
"enabled": false,
"precedence": true,
"action": "allow",
"duration": "always",
"operator": {
"type": "list",
"operand": "list",
"sensitive": false,
"data": "[{\"type\": \"simple\", \"operand\": \"process.path\", \"sensitive\": false, \"data\": \"/usr/bin/telnet\"}, {\"type\": \"simple\", \"operand\": \"dest.port\", \"data\": \"53\", \"sensitive\": false}]",
"list": [
]
}
}
opensnitch-1.6.9/daemon/rule/testdata/rule-operator-list-data-empty.json 0000664 0000000 0000000 00000001243 15003540030 0026421 0 ustar 00root root 0000000 0000000 {
"created": "2023-10-03T18:06:52.209804547+01:00",
"updated": "2023-10-03T18:06:52.209857713+01:00",
"name": "rule-with-operator-list-data-empty",
"enabled": true,
"precedence": true,
"action": "allow",
"duration": "always",
"operator": {
"type": "list",
"operand": "list",
"sensitive": false,
"data": "",
"list": [
{
"type": "simple",
"operand": "process.path",
"sensitive": false,
"data": "/usr/bin/telnet",
"list": null
},
{
"type": "simple",
"operand": "dest.port",
"sensitive": false,
"data": "53",
"list": null
}
]
}
}
opensnitch-1.6.9/daemon/rule/testdata/rule-operator-list.json 0000664 0000000 0000000 00000001537 15003540030 0024364 0 ustar 00root root 0000000 0000000 {
"created": "2023-10-03T18:06:52.209804547+01:00",
"updated": "2023-10-03T18:06:52.209857713+01:00",
"name": "rule-with-operator-list",
"enabled": true,
"precedence": true,
"action": "allow",
"duration": "always",
"operator": {
"type": "list",
"operand": "list",
"sensitive": false,
"data": "[{\"type\": \"simple\", \"operand\": \"process.path\", \"sensitive\": false, \"data\": \"/usr/bin/telnet\"}, {\"type\": \"simple\", \"operand\": \"dest.port\", \"data\": \"53\", \"sensitive\": false}]",
"list": [
{
"type": "simple",
"operand": "process.path",
"sensitive": false,
"data": "/usr/bin/telnet",
"list": null
},
{
"type": "simple",
"operand": "dest.port",
"sensitive": false,
"data": "53",
"list": null
}
]
}
}
opensnitch-1.6.9/daemon/statistics/ 0000775 0000000 0000000 00000000000 15003540030 0017324 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/statistics/event.go 0000664 0000000 0000000 00000001251 15003540030 0020773 0 ustar 00root root 0000000 0000000 package statistics
import (
"time"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/rule"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
type Event struct {
Time time.Time
Connection *conman.Connection
Rule *rule.Rule
}
func NewEvent(con *conman.Connection, match *rule.Rule) *Event {
return &Event{
Time: time.Now(),
Connection: con,
Rule: match,
}
}
func (e *Event) Serialize() *protocol.Event {
return &protocol.Event{
Time: e.Time.Format("2006-01-02 15:04:05"),
Connection: e.Connection.Serialize(),
Rule: e.Rule.Serialize(),
Unixnano: e.Time.UnixNano(),
}
}
opensnitch-1.6.9/daemon/statistics/stats.go 0000664 0000000 0000000 00000014375 15003540030 0021023 0 ustar 00root root 0000000 0000000 package statistics
import (
"strconv"
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/log/loggers"
"github.com/evilsocket/opensnitch/daemon/rule"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
)
// StatsConfig holds the stats configuration
type StatsConfig struct {
MaxEvents int `json:"MaxEvents"`
MaxStats int `json:"MaxStats"`
Workers int `json:"Workers"`
}
type conEvent struct {
con *conman.Connection
match *rule.Rule
wasMissed bool
}
// Statistics holds the connections and statistics the daemon intercepts.
// The connections are stored in the Events slice.
type Statistics struct {
logger *loggers.LoggerManager
rules *rule.Loader
Started time.Time
ByExecutable map[string]uint64
ByPort map[string]uint64
ByProto map[string]uint64
ByAddress map[string]uint64
ByHost map[string]uint64
jobs chan conEvent
ByUID map[string]uint64
Events []*Event
Dropped int
// max number of events to keep in the buffer
maxEvents int
// max number of entries for each By* map
maxStats int
DNSResponses int
Connections int
Ignored int
Accepted int
RuleHits int
RuleMisses int
sync.RWMutex
}
// New returns a new Statistics object and initializes the go routines to update the stats.
func New(rules *rule.Loader) (stats *Statistics) {
stats = &Statistics{
Started: time.Now(),
Events: make([]*Event, 0),
ByProto: make(map[string]uint64),
ByAddress: make(map[string]uint64),
ByHost: make(map[string]uint64),
ByPort: make(map[string]uint64),
ByUID: make(map[string]uint64),
ByExecutable: make(map[string]uint64),
rules: rules,
jobs: make(chan conEvent),
maxEvents: 150,
maxStats: 25,
}
return stats
}
// SetLoggers sets the configured loggers where we'll write the events.
func (s *Statistics) SetLoggers(loggers *loggers.LoggerManager) {
s.logger = loggers
}
// SetLimits configures the max events to keep in the backlog before sending
// the stats to the UI, or while the UI is not connected.
// if the backlog is full, it'll be shifted by one.
func (s *Statistics) SetLimits(config StatsConfig) {
if config.MaxEvents > 0 {
s.maxEvents = config.MaxEvents
}
if config.MaxStats > 0 {
s.maxStats = config.MaxStats
}
wrks := config.Workers
if wrks == 0 {
wrks = 6
}
log.Info("Stats, max events: %d, max stats: %d, max workers: %d", s.maxEvents, s.maxStats, wrks)
for i := 0; i < wrks; i++ {
go s.eventWorker(i)
}
}
// OnConnectionEvent sends the details of a new connection throughout a channel,
// in order to add the connection to the stats.
func (s *Statistics) OnConnectionEvent(con *conman.Connection, match *rule.Rule, wasMissed bool) {
s.jobs <- conEvent{
con: con,
match: match,
wasMissed: wasMissed,
}
action := ""
rname := ""
if match != nil {
action = string(match.Action)
rname = string(match.Name)
}
s.logger.Log(con.Serialize(), action, rname)
}
// OnDNSResponse increases the counter of dns and accepted connections.
func (s *Statistics) OnDNSResponse() {
s.Lock()
defer s.Unlock()
s.DNSResponses++
s.Accepted++
}
// OnIgnored increases the counter of ignored and accepted connections.
func (s *Statistics) OnIgnored() {
s.Lock()
defer s.Unlock()
s.Ignored++
s.Accepted++
}
func (s *Statistics) incMap(m *map[string]uint64, key string) {
if val, found := (*m)[key]; !found {
// do we have enough space left?
nElems := len(*m)
if nElems >= s.maxStats {
// find the element with less hits
nMin := uint64(9999999999)
minKey := ""
for k, v := range *m {
if v < nMin {
minKey = k
nMin = v
}
}
// remove it
if minKey != "" {
delete(*m, minKey)
}
}
(*m)[key] = 1
} else {
(*m)[key] = val + 1
}
}
func (s *Statistics) eventWorker(id int) {
log.Debug("Stats worker #%d started.", id)
for job := range s.jobs {
s.onConnection(job.con, job.match, job.wasMissed)
}
}
func (s *Statistics) onConnection(con *conman.Connection, match *rule.Rule, wasMissed bool) {
s.Lock()
defer s.Unlock()
s.Connections++
if wasMissed {
s.RuleMisses++
} else {
s.RuleHits++
}
if wasMissed == false && match.Action == rule.Allow {
s.Accepted++
} else {
s.Dropped++
}
s.incMap(&s.ByProto, con.Protocol)
s.incMap(&s.ByAddress, con.DstIP.String())
if con.DstHost != "" {
s.incMap(&s.ByHost, con.DstHost)
}
s.incMap(&s.ByPort, strconv.FormatUint(uint64(con.DstPort), 10))
s.incMap(&s.ByUID, strconv.Itoa(con.Entry.UserId))
s.incMap(&s.ByExecutable, con.Process.Path)
// if we reached the limit, shift everything back
// by one position
nEvents := len(s.Events)
if nEvents == s.maxEvents {
s.Events = s.Events[1:]
}
if wasMissed {
return
}
s.Events = append(s.Events, NewEvent(con, match))
}
func (s *Statistics) serializeEvents() []*protocol.Event {
nEvents := len(s.Events)
serialized := make([]*protocol.Event, nEvents)
for i, e := range s.Events {
serialized[i] = e.Serialize()
}
return serialized
}
// emptyStats empties the stats once we've sent them to the GUI.
// We don't need them anymore here.
func (s *Statistics) emptyStats() {
s.Lock()
if len(s.Events) > 0 {
s.Events = make([]*Event, 0)
}
s.Unlock()
}
// Serialize returns the collected statistics.
// After return the stats, the Events are emptied, to keep collecting more stats
// and not miss connections.
func (s *Statistics) Serialize() *protocol.Statistics {
s.Lock()
defer s.emptyStats()
defer s.Unlock()
return &protocol.Statistics{
DaemonVersion: core.Version,
Rules: uint64(s.rules.NumRules()),
Uptime: uint64(time.Since(s.Started).Seconds()),
DnsResponses: uint64(s.DNSResponses),
Connections: uint64(s.Connections),
Ignored: uint64(s.Ignored),
Accepted: uint64(s.Accepted),
Dropped: uint64(s.Dropped),
RuleHits: uint64(s.RuleHits),
RuleMisses: uint64(s.RuleMisses),
Events: s.serializeEvents(),
ByProto: s.ByProto,
ByAddress: s.ByAddress,
ByHost: s.ByHost,
ByPort: s.ByPort,
ByUid: s.ByUID,
ByExecutable: s.ByExecutable,
}
}
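incMap above keeps each By* map bounded: when a new key arrives and the map is already at maxStats entries, the entry with the fewest hits is evicted to make room. The same policy in isolation (the helper name is illustrative, not from the codebase):

```go
package main

import "fmt"

// incBounded increments counters[key], evicting the least-hit entry
// when a new key would push the map past maxEntries.
func incBounded(counters map[string]uint64, key string, maxEntries int) {
	if _, found := counters[key]; !found && len(counters) >= maxEntries {
		minKey := ""
		var minVal uint64
		first := true
		for k, v := range counters {
			if first || v < minVal {
				minKey, minVal, first = k, v, false
			}
		}
		delete(counters, minKey)
	}
	counters[key]++
}

func main() {
	m := map[string]uint64{}
	incBounded(m, "tcp", 2)
	incBounded(m, "tcp", 2)
	incBounded(m, "udp", 2)
	incBounded(m, "icmp", 2) // map is full: "udp" (1 hit) is evicted
	fmt.Println(len(m), m["tcp"], m["icmp"])
}
```

Tracking a `first` flag (or starting from math.MaxUint64) avoids relying on a magic sentinel value for the minimum search.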
opensnitch-1.6.9/daemon/system-fw.json 0000664 0000000 0000000 00000014772 15003540030 0017776 0 ustar 00root root 0000000 0000000 {
"Enabled": true,
"Version": 1,
"SystemRules": [
{
"Rule": {
"Table": "mangle",
"Chain": "OUTPUT",
"Enabled": false,
"Position": "0",
"Description": "Allow icmp",
"Parameters": "-p icmp",
"Expressions": [],
"Target": "ACCEPT",
"TargetParameters": ""
},
"Chains": []
},
{
"Chains": [
{
"Name": "forward",
"Table": "filter",
"Family": "inet",
"Priority": "",
"Type": "filter",
"Hook": "forward",
"Policy": "accept",
"Rules": []
},
{
"Name": "output",
"Table": "filter",
"Family": "inet",
"Priority": "",
"Type": "filter",
"Hook": "output",
"Policy": "accept",
"Rules": []
},
{
"Name": "input",
"Table": "filter",
"Family": "inet",
"Priority": "",
"Type": "filter",
"Hook": "input",
"Policy": "accept",
"Rules": [
{
"Enabled": false,
"Position": "0",
"Description": "Allow SSH server connections when input policy is DROP",
"Parameters": "",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "tcp",
"Values": [
{
"Key": "dport",
"Value": "22"
}
]
}
}
],
"Target": "accept",
"TargetParameters": ""
}
]
},
{
"Name": "filter-prerouting",
"Table": "nat",
"Family": "inet",
"Priority": "",
"Type": "filter",
"Hook": "prerouting",
"Policy": "accept",
"Rules": []
},
{
"Name": "prerouting",
"Table": "mangle",
"Family": "inet",
"Priority": "",
"Type": "mangle",
"Hook": "prerouting",
"Policy": "accept",
"Rules": []
},
{
"Name": "postrouting",
"Table": "mangle",
"Family": "inet",
"Priority": "",
"Type": "mangle",
"Hook": "postrouting",
"Policy": "accept",
"Rules": []
},
{
"Name": "prerouting",
"Table": "nat",
"Family": "inet",
"Priority": "",
"Type": "natdest",
"Hook": "prerouting",
"Policy": "accept",
"Rules": []
},
{
"Name": "postrouting",
"Table": "nat",
"Family": "inet",
"Priority": "",
"Type": "natsource",
"Hook": "postrouting",
"Policy": "accept",
"Rules": []
},
{
"Name": "input",
"Table": "nat",
"Family": "inet",
"Priority": "",
"Type": "natsource",
"Hook": "input",
"Policy": "accept",
"Rules": []
},
{
"Name": "output",
"Table": "nat",
"Family": "inet",
"Priority": "",
"Type": "natdest",
"Hook": "output",
"Policy": "accept",
"Rules": []
},
{
"Name": "output",
"Table": "mangle",
"Family": "inet",
"Priority": "",
"Type": "mangle",
"Hook": "output",
"Policy": "accept",
"Rules": [
{
"Enabled": true,
"Position": "0",
"Description": "Allow ICMP",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "icmp",
"Values": [
{
"Key": "type",
"Value": "echo-request,echo-reply,destination-unreachable"
}
]
}
}
],
"Target": "accept",
"TargetParameters": ""
},
{
"Enabled": true,
"Position": "0",
"Description": "Allow ICMPv6",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "icmpv6",
"Values": [
{
"Key": "type",
"Value": "echo-request,echo-reply,destination-unreachable"
}
]
}
}
],
"Target": "accept",
"TargetParameters": ""
},
{
"Enabled": false,
"Position": "0",
"Description": "Exclude WireGuard VPN from being intercepted",
"Parameters": "",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "udp",
"Values": [
{
"Key": "dport",
"Value": "51820"
}
]
}
}
],
"Target": "accept",
"TargetParameters": ""
}
]
},
{
"Name": "forward",
"Table": "mangle",
"Family": "inet",
"Priority": "",
"Type": "mangle",
"Hook": "forward",
"Policy": "accept",
"Rules": [
{
"UUID": "7d7394e1-100d-4b87-a90a-cd68c46edb0b",
"Enabled": false,
"Position": "0",
"Description": "Intercept forwarded connections (docker, etc)",
"Expressions": [
{
"Statement": {
"Op": "",
"Name": "ct",
"Values": [
{
"Key": "state",
"Value": "new"
}
]
}
}
],
"Target": "queue",
"TargetParameters": "num 0"
}
]
}
]
}
]
}
opensnitch-1.6.9/daemon/ui/ 0000775 0000000 0000000 00000000000 15003540030 0015547 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/ui/alerts.go 0000664 0000000 0000000 00000007175 15003540030 0017402 0 ustar 00root root 0000000 0000000 package ui
import (
"time"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/procmon"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/encoding/gzip"
)
// NewWarningAlert builds a new warning alert
func NewWarningAlert(what protocol.Alert_What, data interface{}) *protocol.Alert {
return NewAlert(protocol.Alert_WARNING, what, protocol.Alert_SHOW_ALERT, protocol.Alert_MEDIUM, data)
}
// NewErrorAlert builds a new error alert
func NewErrorAlert(what protocol.Alert_What, data interface{}) *protocol.Alert {
return NewAlert(protocol.Alert_ERROR, what, protocol.Alert_SHOW_ALERT, protocol.Alert_HIGH, data)
}
// NewAlert builds a new generic alert
func NewAlert(atype protocol.Alert_Type, what protocol.Alert_What, action protocol.Alert_Action, prio protocol.Alert_Priority, data interface{}) *protocol.Alert {
a := &protocol.Alert{
Id: uint64(time.Now().UnixNano()),
Type: atype,
Action: action,
What: what,
Priority: prio,
}
switch what {
case protocol.Alert_KERNEL_EVENT:
switch data := data.(type) {
case *procmon.Process:
a.Data = &protocol.Alert_Proc{
data.Serialize(),
}
case string:
a.Data = &protocol.Alert_Text{data}
a.Action = protocol.Alert_SHOW_ALERT
}
case protocol.Alert_CONNECTION:
a.Data = &protocol.Alert_Conn{
data.(*conman.Connection).Serialize(),
}
case protocol.Alert_GENERIC:
a.Data = &protocol.Alert_Text{data.(string)}
}
return a
}
// SendInfoAlert sends an info alert
func (c *Client) SendInfoAlert(data interface{}) {
c.PostAlert(protocol.Alert_INFO, protocol.Alert_GENERIC, protocol.Alert_SHOW_ALERT, protocol.Alert_LOW, data)
}
// SendWarningAlert sends a warning alert
func (c *Client) SendWarningAlert(data interface{}) {
c.PostAlert(protocol.Alert_WARNING, protocol.Alert_GENERIC, protocol.Alert_SHOW_ALERT, protocol.Alert_MEDIUM, data)
}
// SendErrorAlert sends an error alert
func (c *Client) SendErrorAlert(data interface{}) {
c.PostAlert(protocol.Alert_ERROR, protocol.Alert_GENERIC, protocol.Alert_SHOW_ALERT, protocol.Alert_HIGH, data)
}
// alertsDispatcher waits to be connected to the GUI.
// Once connected, dispatches all the queued alerts.
func (c *Client) alertsDispatcher() {
queuedAlerts := make(chan protocol.Alert, 32)
connected := false
isQueueFull := func(qdAlerts chan protocol.Alert) bool { return len(qdAlerts) == cap(qdAlerts) }
isQueueEmpty := func(qdAlerts chan protocol.Alert) bool { return len(qdAlerts) == 0 }
queueAlert := func(qdAlerts chan protocol.Alert, pbAlert protocol.Alert) {
if isQueueFull(qdAlerts) {
v := <-qdAlerts
// drop the oldest queued alert to make room for the new one
log.Debug("Discarding queued alert (%d): %v", len(qdAlerts), v)
}
select {
case qdAlerts <- pbAlert:
default:
log.Debug("Alert not sent to queue, full? (%d)", len(qdAlerts))
}
}
for {
select {
case pbAlert := <-c.alertsChan:
if !connected {
queueAlert(queuedAlerts, pbAlert)
continue
}
c.dispatchAlert(pbAlert)
case ready := <-c.isConnected:
connected = ready
if ready {
log.Important("UI connected, dispatching queued alerts: %d", len(queuedAlerts))
for {
if isQueueEmpty(queuedAlerts) {
// no more queued alerts, exit
break
}
c.dispatchAlert(<-queuedAlerts)
}
}
}
}
}
func (c *Client) dispatchAlert(pbAlert protocol.Alert) {
if c.client == nil {
return
}
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
if _, err := c.client.PostAlert(ctx, &pbAlert, grpc.UseCompressor(gzip.Name)); err != nil {
log.Warning("Error posting alert to the UI service: %s", err)
}
}
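alertsDispatcher above queues alerts in a buffered channel while the GUI is away and discards the oldest one when the buffer fills. That drop-oldest pattern on its own (illustrative helper, safe only when a single goroutine enqueues, as in the dispatcher, since the len/cap check and the send are not atomic together):

```go
package main

import "fmt"

// enqueueDropOldest adds item to a bounded channel; if the channel is
// full it discards the oldest queued item to make room for the new one.
func enqueueDropOldest(q chan string, item string) {
	if len(q) == cap(q) {
		<-q // drop the oldest entry
	}
	q <- item
}

func main() {
	q := make(chan string, 3)
	for _, a := range []string{"a", "b", "c", "d", "e"} {
		enqueueDropOldest(q, a)
	}
	// only the three most recent items remain
	fmt.Println(len(q), <-q, <-q, <-q)
}
```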
opensnitch-1.6.9/daemon/ui/auth/ 0000775 0000000 0000000 00000000000 15003540030 0016510 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/daemon/ui/auth/auth.go 0000664 0000000 0000000 00000005437 15003540030 0020011 0 ustar 00root root 0000000 0000000 package auth
import (
"crypto/tls"
"crypto/x509"
"io/ioutil"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/ui/config"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
)
// client auth types:
// https://pkg.go.dev/crypto/tls#ClientAuthType
var (
clientAuthType = map[string]tls.ClientAuthType{
"no-client-cert": tls.NoClientCert,
"req-cert": tls.RequestClientCert,
"req-any-cert": tls.RequireAnyClientCert,
"verify-cert": tls.VerifyClientCertIfGiven,
"req-and-verify-cert": tls.RequireAndVerifyClientCert,
}
)
const (
// AuthSimple will use WithInsecure()
AuthSimple = "simple"
// AuthTLSSimple will use a common CA certificate, shared between the server
// and all the clients.
AuthTLSSimple = "tls-simple"
// AuthTLSMutual will use a CA certificate and a client cert and key
// to authenticate each client.
AuthTLSMutual = "tls-mutual"
)
// New returns the configuration that the UI will use
// to connect with the server.
func New(config *config.Config) (grpc.DialOption, error) {
config.RLock()
credsType := config.Server.Authentication.Type
tlsOpts := config.Server.Authentication.TLSOptions
config.RUnlock()
if credsType == "" || credsType == AuthSimple {
log.Debug("UI auth: simple")
return grpc.WithInsecure(), nil
}
certPool := x509.NewCertPool()
// use CA certificate to authenticate clients if supplied
if tlsOpts.CACert != "" {
if caPem, err := ioutil.ReadFile(tlsOpts.CACert); err != nil {
log.Warning("reading UI auth CA certificate (%s): %s", credsType, err)
} else {
if !certPool.AppendCertsFromPEM(caPem) {
log.Warning("could not add UI auth CA certificate (%s) to the pool", credsType)
}
}
}
// use server certificate to authenticate clients if supplied
if tlsOpts.ServerCert != "" {
if serverPem, err := ioutil.ReadFile(tlsOpts.ServerCert); err != nil {
log.Warning("reading auth server cert: %s", err)
} else {
if !certPool.AppendCertsFromPEM(serverPem) {
log.Warning("could not add UI auth server cert (%s) to the pool", credsType)
}
}
}
// set config of tls credential
// https://pkg.go.dev/crypto/tls#Config
tlsCfg := &tls.Config{
InsecureSkipVerify: tlsOpts.SkipVerify,
RootCAs: certPool,
}
// https://pkg.go.dev/google.golang.org/grpc/credentials#SecurityLevel
if credsType == AuthTLSMutual {
tlsCfg.ClientAuth = clientAuthType[tlsOpts.ClientAuthType]
clientCert, err := tls.LoadX509KeyPair(
tlsOpts.ClientCert,
tlsOpts.ClientKey,
)
if err != nil {
return nil, err
}
log.Debug(" using client cert: %s", tlsOpts.ClientCert)
log.Debug(" using client key: %s", tlsOpts.ClientKey)
tlsCfg.Certificates = []tls.Certificate{clientCert}
}
return grpc.WithTransportCredentials(
credentials.NewTLS(tlsCfg),
), nil
}
opensnitch-1.6.9/daemon/ui/client.go 0000664 0000000 0000000 00000022554 15003540030 0017364 0 ustar 00root root 0000000 0000000 package ui
import (
"fmt"
"net"
"sync"
"time"
"github.com/evilsocket/opensnitch/daemon/conman"
"github.com/evilsocket/opensnitch/daemon/firewall/iptables"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/log/loggers"
"github.com/evilsocket/opensnitch/daemon/rule"
"github.com/evilsocket/opensnitch/daemon/statistics"
"github.com/evilsocket/opensnitch/daemon/ui/auth"
"github.com/evilsocket/opensnitch/daemon/ui/config"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
"github.com/fsnotify/fsnotify"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/connectivity"
"google.golang.org/grpc/keepalive"
)
var (
configFile = "/etc/opensnitchd/default-config.json"
dummyOperator, _ = rule.NewOperator(rule.Simple, false, rule.OpTrue, "", make([]rule.Operator, 0))
clientDisconnectedRule = rule.Create("ui.client.disconnected", "", true, false, false, rule.Allow, rule.Once, dummyOperator)
// While the GUI is connected, deny by default everything until the user takes an action.
clientConnectedRule = rule.Create("ui.client.connected", "", true, false, false, rule.Deny, rule.Once, dummyOperator)
clientErrorRule = rule.Create("ui.client.error", "", true, false, false, rule.Allow, rule.Once, dummyOperator)
clientConfig config.Config
maxQueuedAlerts = 1024
)
// Client holds the connection information of a client.
type Client struct {
rules *rule.Loader
stats *statistics.Statistics
con *grpc.ClientConn
configWatcher *fsnotify.Watcher
client protocol.UIClient
clientCtx context.Context
clientCancel context.CancelFunc
streamNotifications protocol.UI_NotificationsClient
isConnected chan bool
alertsChan chan protocol.Alert
socketPath string
unixSockPrefix string
// isAsking is set to true if the client is awaiting a decision from the GUI
isAsking bool
isUnixSocket bool
sync.RWMutex
}
// NewClient creates and configures a new client.
func NewClient(socketPath, localConfigFile string, stats *statistics.Statistics, rules *rule.Loader, loggers *loggers.LoggerManager) *Client {
if localConfigFile != "" {
configFile = localConfigFile
}
c := &Client{
stats: stats,
rules: rules,
isUnixSocket: false,
isAsking: false,
isConnected: make(chan bool),
alertsChan: make(chan protocol.Alert, maxQueuedAlerts),
}
go c.alertsDispatcher()
c.clientCtx, c.clientCancel = context.WithCancel(context.Background())
if watcher, err := fsnotify.NewWatcher(); err == nil {
c.configWatcher = watcher
}
c.loadDiskConfiguration(false)
if socketPath != "" {
c.setSocketPath(c.getSocketPath(socketPath))
}
loggers.Load(clientConfig.Server.Loggers, clientConfig.Stats.Workers)
stats.SetLimits(clientConfig.Stats)
stats.SetLoggers(loggers)
return c
}
// Connect starts the connection poller
func (c *Client) Connect() {
go c.poller()
}
// Close cancels the running tasks: pinging the server and (re)connection poller.
func (c *Client) Close() {
c.clientCancel()
}
// ProcMonitorMethod returns the monitor method configured.
// If it's not present in the config file, it'll return an empty string.
func (c *Client) ProcMonitorMethod() string {
clientConfig.RLock()
defer clientConfig.RUnlock()
return clientConfig.ProcMonitorMethod
}
// InterceptUnknown returns whether connections from unknown processes should be intercepted.
func (c *Client) InterceptUnknown() bool {
clientConfig.RLock()
defer clientConfig.RUnlock()
return clientConfig.InterceptUnknown
}
// GetFirewallType returns the firewall to use
func (c *Client) GetFirewallType() string {
clientConfig.RLock()
defer clientConfig.RUnlock()
if clientConfig.Firewall == "" {
return iptables.Name
}
return clientConfig.Firewall
}
// DefaultAction returns the default action configured for new connections, which depends on whether the GUI is connected.
func (c *Client) DefaultAction() rule.Action {
isConnected := c.Connected()
c.RLock()
defer c.RUnlock()
if isConnected {
return clientConnectedRule.Action
}
return clientDisconnectedRule.Action
}
// DefaultDuration returns the default duration configured for a rule.
// For example it can be: once, always, "until restart".
func (c *Client) DefaultDuration() rule.Duration {
c.RLock()
defer c.RUnlock()
return clientDisconnectedRule.Duration
}
// Connected checks if the client has established a connection with the server.
func (c *Client) Connected() bool {
c.RLock()
defer c.RUnlock()
if c.con == nil || c.con.GetState() != connectivity.Ready {
return false
}
return true
}
// GetIsAsking returns the isAsking flag
func (c *Client) GetIsAsking() bool {
c.RLock()
defer c.RUnlock()
return c.isAsking
}
// SetIsAsking sets the isAsking flag
func (c *Client) SetIsAsking(flag bool) {
c.Lock()
defer c.Unlock()
c.isAsking = flag
}
func (c *Client) poller() {
log.Debug("UI service poller started for socket %s", c.socketPath)
wasConnected := false
for {
select {
case <-c.clientCtx.Done():
log.Info("Client.poller() exit, Done()")
goto Exit
default:
isConnected := c.Connected()
if wasConnected != isConnected {
c.onStatusChange(isConnected)
wasConnected = isConnected
}
if !c.Connected() {
// connect and create the client if needed
if err := c.connect(); err != nil {
log.Warning("Error while connecting to UI service: %s", err)
}
}
if c.Connected() {
// if the client is connected and ready, send a ping
if err := c.ping(time.Now()); err != nil {
log.Warning("Error while pinging UI service: %s, state: %v", err, c.con.GetState())
}
}
time.Sleep(1 * time.Second)
}
}
Exit:
log.Info("uiClient exit")
}
func (c *Client) onStatusChange(connected bool) {
if connected {
log.Info("Connected to the UI service on %s", c.socketPath)
go c.Subscribe()
select {
case c.isConnected <- true:
default:
}
} else {
log.Error("Connection to the UI service lost.")
c.disconnect()
}
}
func (c *Client) connect() (err error) {
if c.Connected() {
return
}
if c.con != nil {
if c.con.GetState() == connectivity.TransientFailure || c.con.GetState() == connectivity.Shutdown {
c.disconnect()
} else {
return
}
}
if err := c.openSocket(); err != nil {
log.Debug("connect() %s", err)
c.disconnect()
return err
}
if c.client == nil {
c.client = protocol.NewUIClient(c.con)
}
return nil
}
func (c *Client) openSocket() (err error) {
c.Lock()
defer c.Unlock()
dialOption, err := auth.New(&clientConfig)
if err != nil {
return fmt.Errorf("Invalid client auth options: %s", err)
}
if c.isUnixSocket {
c.con, err = grpc.Dial(c.socketPath, dialOption,
grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
return net.DialTimeout(c.unixSockPrefix, addr, timeout)
}))
} else {
// https://pkg.go.dev/google.golang.org/grpc/keepalive#ClientParameters
var kacp = keepalive.ClientParameters{
Time: 5 * time.Second,
// if there's no activity after ^, wait 22s for the ping ack and then close.
// The server timeout is 20s by default, so this stays slightly above it.
Timeout: 22 * time.Second,
// send pings even without active streams
PermitWithoutStream: true,
}
c.con, err = grpc.Dial(c.socketPath, dialOption, grpc.WithKeepaliveParams(kacp))
}
return err
}
func (c *Client) disconnect() {
c.Lock()
defer c.Unlock()
select {
case c.isConnected <- false:
default:
}
if c.con != nil {
c.con.Close()
c.con = nil
log.Debug("client.disconnect()")
}
c.client = nil
}
func (c *Client) ping(ts time.Time) (err error) {
if !c.Connected() {
return fmt.Errorf("service is not connected")
}
c.Lock()
defer c.Unlock()
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
reqID := uint64(ts.UnixNano())
pReq := &protocol.PingRequest{
Id: reqID,
Stats: c.stats.Serialize(),
}
c.stats.RLock()
pong, err := c.client.Ping(ctx, pReq)
c.stats.RUnlock()
if err != nil {
return err
}
if pong.Id != reqID {
return fmt.Errorf("Expected pong with id 0x%x, got 0x%x", reqID, pong.Id)
}
return nil
}
// Ask sends a request to the server, with the values of a connection to be
// allowed or denied.
func (c *Client) Ask(con *conman.Connection) *rule.Rule {
if c.client == nil {
return nil
}
// FIXME: if timeout is fired, the rule is not added to the list in the GUI
ctx, cancel := context.WithTimeout(context.Background(), time.Second*120)
defer cancel()
reply, err := c.client.AskRule(ctx, con.Serialize())
if err != nil {
log.Warning("Error while asking for rule: %s - %v", err, con)
return nil
}
r, err := rule.Deserialize(reply)
if err != nil {
return nil
}
return r
}
// PostAlert queues a new message to be delivered to the server
func (c *Client) PostAlert(atype protocol.Alert_Type, awhat protocol.Alert_What, action protocol.Alert_Action, prio protocol.Alert_Priority, data interface{}) {
if len(c.alertsChan) >= maxQueuedAlerts {
// pop oldest alert if channel is full
log.Debug("PostAlert() queue full, popping alert (%d)", len(c.alertsChan))
<-c.alertsChan
}
if !c.Connected() {
log.Debug("UI not connected, queueing alert: %d", len(c.alertsChan))
}
c.alertsChan <- *NewAlert(atype, awhat, action, prio, data)
}
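PostAlert implements a "drop the oldest when full" buffer on top of a plain buffered channel, so the newest alerts always fit. A minimal standalone sketch of that pattern (the function name is illustrative, not opensnitch API):

```go
package main

import "fmt"

// enqueueDropOldest sketches the buffering strategy PostAlert() uses:
// when the buffered channel is full, pop the oldest element before
// pushing the new one, so recent values are never lost.
func enqueueDropOldest(ch chan int, v int) {
	if len(ch) == cap(ch) {
		<-ch // drop the oldest queued value
	}
	ch <- v
}

func main() {
	ch := make(chan int, 2)
	for _, v := range []int{1, 2, 3} {
		enqueueDropOldest(ch, v)
	}
	fmt.Println(<-ch, <-ch) // 2 3
}
```

Note this only stays race-free because a single goroutine enqueues; with multiple producers the len/cap check and the send would need a lock around them.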
func (c *Client) monitorConfigWorker() {
for {
select {
case event := <-c.configWatcher.Events:
if (event.Op&fsnotify.Write == fsnotify.Write) || (event.Op&fsnotify.Remove == fsnotify.Remove) {
c.loadDiskConfiguration(true)
}
}
}
}
opensnitch-1.6.9/daemon/ui/client_test.go
package ui
import (
"encoding/json"
"testing"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/firewall/iptables"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/log/loggers"
"github.com/evilsocket/opensnitch/daemon/procmon"
"github.com/evilsocket/opensnitch/daemon/rule"
"github.com/evilsocket/opensnitch/daemon/statistics"
"github.com/evilsocket/opensnitch/daemon/ui/config"
)
var (
defaultConfig = &config.Config{
ProcMonitorMethod: procmon.MethodEbpf,
DefaultAction: "allow",
DefaultDuration: "once",
InterceptUnknown: false,
Firewall: "nftables",
}
reloadConfig = *defaultConfig
)
func restoreConfigFile(t *testing.T) {
// start from a clean state
if _, err := core.Exec("cp", []string{
// unmodified default config
"./testdata/orig-default-config.json",
// config will be modified by some tests
"./testdata/default-config.json",
}); err != nil {
t.Errorf("error copying default config file: %s", err)
}
}
func validateConfig(t *testing.T, uiClient *Client, cfg *config.Config) {
if uiClient.ProcMonitorMethod() != cfg.ProcMonitorMethod {
t.Errorf("not expected ProcMonitorMethod value: %s, expected: %s", uiClient.ProcMonitorMethod(), cfg.ProcMonitorMethod)
}
if uiClient.GetFirewallType() != cfg.Firewall {
t.Errorf("not expected FirewallType value: %s, expected: %s", uiClient.GetFirewallType(), cfg.Firewall)
}
if uiClient.InterceptUnknown() != cfg.InterceptUnknown {
t.Errorf("not expected InterceptUnknown value: %v, expected: %v", uiClient.InterceptUnknown(), cfg.InterceptUnknown)
}
if uiClient.DefaultAction() != rule.Action(cfg.DefaultAction) {
t.Errorf("not expected DefaultAction value: %s, expected: %s", uiClient.DefaultAction(), cfg.DefaultAction)
}
}
func TestClientConfig(t *testing.T) {
restoreConfigFile(t)
cfgFile := "./testdata/default-config.json"
rules, err := rule.NewLoader(false)
if err != nil {
log.Fatal("rule.NewLoader() error: %s", err)
}
stats := statistics.New(rules)
loggerMgr := loggers.NewLoggerManager()
uiClient := NewClient("unix:///tmp/osui.sock", cfgFile, stats, rules, loggerMgr)
t.Run("validate-load-config", func(t *testing.T) {
validateConfig(t, uiClient, defaultConfig)
})
t.Run("validate-reload-config", func(t *testing.T) {
reloadConfig.ProcMonitorMethod = procmon.MethodProc
reloadConfig.DefaultAction = string(rule.Deny)
reloadConfig.InterceptUnknown = true
reloadConfig.Firewall = iptables.Name
reloadConfig.Server.Address = "unix:///run/user/1000/opensnitch/osui.sock"
plainJSON, err := json.Marshal(reloadConfig)
if err != nil {
t.Errorf("Error marshalling config: %s", err)
}
if err = config.Save(configFile, string(plainJSON)); err != nil {
t.Errorf("error saving config to disk: %s", err)
}
time.Sleep(time.Second * 3)
validateConfig(t, uiClient, &reloadConfig)
})
}
opensnitch-1.6.9/daemon/ui/config/config.go
package config
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"reflect"
"sync"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/log/loggers"
"github.com/evilsocket/opensnitch/daemon/statistics"
)
type (
serverTLSOptions struct {
CACert string `json:"CACert"`
ServerCert string `json:"ServerCert"`
ServerKey string `json:"ServerKey"`
ClientCert string `json:"ClientCert"`
ClientKey string `json:"ClientKey"`
// https://pkg.go.dev/crypto/tls#ClientAuthType
ClientAuthType string `json:"ClientAuthType"`
// https://pkg.go.dev/crypto/tls#Config
SkipVerify bool `json:"SkipVerify"`
// https://pkg.go.dev/crypto/tls#Conn.VerifyHostname
// VerifyHostname bool
// https://pkg.go.dev/crypto/tls#example-Config-VerifyConnection
// VerifyConnection bool
// VerifyPeerCertificate bool
}
serverAuth struct {
// token?, google?, simple-tls, mutual-tls
Type string `json:"Type"`
TLSOptions serverTLSOptions `json:"TLSOptions"`
}
serverConfig struct {
Loggers []loggers.LoggerConfig `json:"Loggers"`
Address string `json:"Address"`
LogFile string `json:"LogFile"`
Authentication serverAuth `json:"Authentication"`
}
rulesOptions struct {
Path string `json:"Path"`
}
ebpfOptions struct {
ModulesPath string `json:"ModulesPath"`
}
// InternalOptions struct
internalOptions struct {
GCPercent int `json:"GCPercent"`
FlushConnsOnStart bool `json:"FlushConnsOnStart"`
}
)
// Config holds the values loaded from configFile
type Config struct {
LogLevel *int32 `json:"LogLevel"`
Firewall string `json:"Firewall"`
DefaultAction string `json:"DefaultAction"`
DefaultDuration string `json:"DefaultDuration"`
ProcMonitorMethod string `json:"ProcMonitorMethod"`
Ebpf ebpfOptions `json:"Ebpf"`
Rules rulesOptions `json:"Rules"`
Server serverConfig `json:"Server"`
Stats statistics.StatsConfig `json:"Stats"`
Internal internalOptions `json:"Internal"`
InterceptUnknown bool `json:"InterceptUnknown"`
LogUTC bool `json:"LogUTC"`
LogMicro bool `json:"LogMicro"`
sync.RWMutex
}
// Parse determines if the given configuration is ok.
func Parse(rawConfig interface{}) (conf Config, err error) {
if vt := reflect.ValueOf(rawConfig).Kind(); vt == reflect.String {
err = json.Unmarshal([]byte((rawConfig.(string))), &conf)
} else {
err = json.Unmarshal(rawConfig.([]uint8), &conf)
}
return conf, err
}
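Parse accepts either a string (from the GUI notification path) or a byte slice (from disk) and dispatches on the runtime kind via reflect. A minimal standalone sketch of that dual-type unmarshalling; the `cfg`/`parseAny` names are illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

type cfg struct {
	DefaultAction string `json:"DefaultAction"`
}

// parseAny mirrors Parse(): it accepts either a string or a []byte
// payload and unmarshals it into the config struct.
func parseAny(raw interface{}) (c cfg, err error) {
	if reflect.ValueOf(raw).Kind() == reflect.String {
		err = json.Unmarshal([]byte(raw.(string)), &c)
	} else {
		// []uint8 is the same type as []byte
		err = json.Unmarshal(raw.([]byte), &c)
	}
	return c, err
}

func main() {
	c1, _ := parseAny(`{"DefaultAction":"allow"}`)
	c2, _ := parseAny([]byte(`{"DefaultAction":"deny"}`))
	fmt.Println(c1.DefaultAction, c2.DefaultAction) // allow deny
}
```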
// Load loads the content of a file from disk.
func Load(configFile string) ([]byte, error) {
raw, err := ioutil.ReadFile(configFile)
if err != nil || len(raw) == 0 {
return nil, err
}
return raw, nil
}
// Save writes daemon configuration to disk.
func Save(configFile, rawConfig string) (err error) {
if _, err = Parse(rawConfig); err != nil {
return fmt.Errorf("Error parsing configuration %s: %s", rawConfig, err)
}
if err = os.Chmod(configFile, 0600); err != nil {
log.Warning("unable to set permissions to default config: %s", err)
}
// Note: WriteFile's mode argument only applies when the file is created;
// for an existing file, the Chmod above is what takes effect.
if err = ioutil.WriteFile(configFile, []byte(rawConfig), 0644); err != nil {
log.Error("writing configuration to disk: %s", err)
return err
}
return nil
}
opensnitch-1.6.9/daemon/ui/config_utils.go
package ui
import (
"fmt"
"strings"
"runtime/debug"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/netlink"
"github.com/evilsocket/opensnitch/daemon/procmon/monitor"
"github.com/evilsocket/opensnitch/daemon/rule"
"github.com/evilsocket/opensnitch/daemon/ui/config"
)
func (c *Client) getSocketPath(socketPath string) string {
c.Lock()
defer c.Unlock()
if strings.HasPrefix(socketPath, "unix:") {
c.isUnixSocket = true
c.unixSockPrefix = "unix"
return socketPath[5:]
}
if strings.HasPrefix(socketPath, "unix-abstract:") {
c.isUnixSocket = true
c.unixSockPrefix = "unix-abstract"
return socketPath[14:]
}
c.isUnixSocket = false
return socketPath
}
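getSocketPath recognizes two unix-socket prefixes and passes anything else through as a TCP address. The standalone sketch below replicates that prefix handling; `splitSocketAddress` is an illustrative name, not part of the daemon's API:

```go
package main

import (
	"fmt"
	"strings"
)

// splitSocketAddress mirrors getSocketPath(): "unix:" and
// "unix-abstract:" addresses are unix sockets and get their prefix
// stripped; anything else (e.g. "127.0.0.1:50051") is returned as-is.
func splitSocketAddress(addr string) (network, path string, isUnix bool) {
	switch {
	case strings.HasPrefix(addr, "unix:"):
		return "unix", addr[len("unix:"):], true
	case strings.HasPrefix(addr, "unix-abstract:"):
		return "unix-abstract", addr[len("unix-abstract:"):], true
	default:
		return "", addr, false
	}
}

func main() {
	network, path, isUnix := splitSocketAddress("unix:///tmp/osui.sock")
	fmt.Println(network, path, isUnix) // unix ///tmp/osui.sock true
}
```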
func (c *Client) setSocketPath(socketPath string) {
c.Lock()
defer c.Unlock()
c.socketPath = socketPath
}
func (c *Client) isProcMonitorEqual(newMonitorMethod string) bool {
clientConfig.RLock()
defer clientConfig.RUnlock()
return newMonitorMethod == clientConfig.ProcMonitorMethod
}
func (c *Client) loadDiskConfiguration(reload bool) {
raw, err := config.Load(configFile)
if err != nil || len(raw) == 0 {
// Sometimes we may receive 2 Write events on monitorConfigWorker,
// which may lead to reading 0 bytes.
log.Warning("Error loading configuration from disk %s: %s", configFile, err)
return
}
if ok := c.loadConfiguration(raw); ok {
if err := c.configWatcher.Add(configFile); err != nil {
log.Error("Could not watch path: %s", err)
return
}
}
if reload {
return
}
go c.monitorConfigWorker()
}
func (c *Client) loadConfiguration(rawConfig []byte) bool {
var err error
clientConfig, err = config.Parse(rawConfig)
if err != nil {
msg := fmt.Sprintf("Error parsing configuration %s: %s", configFile, err)
log.Error(msg)
c.SendWarningAlert(msg)
return false
}
clientConfig.Lock()
defer clientConfig.Unlock()
// load the log level first, so any further errors can be reported
if clientConfig.LogLevel != nil {
log.SetLogLevel(int(*clientConfig.LogLevel))
}
log.SetLogUTC(clientConfig.LogUTC)
log.SetLogMicro(clientConfig.LogMicro)
if clientConfig.Server.LogFile != "" {
log.Close()
log.OpenFile(clientConfig.Server.LogFile)
}
if clientConfig.Server.Address != "" {
tempSocketPath := c.getSocketPath(clientConfig.Server.Address)
if tempSocketPath != c.socketPath {
// disconnect, and let the connection poller reconnect to the new address
c.disconnect()
}
c.setSocketPath(tempSocketPath)
}
if clientConfig.DefaultAction != "" {
clientDisconnectedRule.Action = rule.Action(clientConfig.DefaultAction)
clientErrorRule.Action = rule.Action(clientConfig.DefaultAction)
// TODO: reconfigure connected rule if changed, but not save it to disk.
//clientConnectedRule.Action = rule.Action(clientConfig.DefaultAction)
}
if clientConfig.DefaultDuration != "" {
clientDisconnectedRule.Duration = rule.Duration(clientConfig.DefaultDuration)
clientErrorRule.Duration = rule.Duration(clientConfig.DefaultDuration)
}
reloaded := false
if clientConfig.ProcMonitorMethod != "" {
err := monitor.ReconfigureMonitorMethod(clientConfig.ProcMonitorMethod, clientConfig.Ebpf.ModulesPath)
if err != nil {
msg := fmt.Sprintf("Unable to set new process monitor (%s) method from disk: %v", clientConfig.ProcMonitorMethod, err.Msg)
log.Warning(msg)
c.SendWarningAlert(msg)
} else {
reloaded = true
}
}
if reloaded && clientConfig.Internal.FlushConnsOnStart {
log.Debug("[config] flushing established connections")
netlink.FlushConnections()
} else {
log.Debug("[config] not flushing established connections")
}
if clientConfig.Internal.GCPercent > 0 {
oldgcpercent := debug.SetGCPercent(clientConfig.Internal.GCPercent)
log.Info("GC percent set to %d, previously was %d", clientConfig.Internal.GCPercent, oldgcpercent)
}
// TODO:
//c.stats.SetLimits(clientConfig.Stats)
//loggers.Load(clientConfig.Server.Loggers, clientConfig.Stats.Workers)
//stats.SetLoggers(loggers)
return true
}
opensnitch-1.6.9/daemon/ui/notifications.go
package ui
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"strconv"
"strings"
"time"
"github.com/evilsocket/opensnitch/daemon/core"
"github.com/evilsocket/opensnitch/daemon/firewall"
"github.com/evilsocket/opensnitch/daemon/log"
"github.com/evilsocket/opensnitch/daemon/procmon"
"github.com/evilsocket/opensnitch/daemon/procmon/monitor"
"github.com/evilsocket/opensnitch/daemon/rule"
"github.com/evilsocket/opensnitch/daemon/ui/config"
"github.com/evilsocket/opensnitch/daemon/ui/protocol"
"golang.org/x/net/context"
)
var stopMonitoringProcess = make(chan int)
// NewReply constructs a new protocol notification reply
func NewReply(rID uint64, replyCode protocol.NotificationReplyCode, data string) *protocol.NotificationReply {
return &protocol.NotificationReply{
Id: rID,
Code: replyCode,
Data: data,
}
}
func (c *Client) getClientConfig() *protocol.ClientConfig {
raw, _ := ioutil.ReadFile(configFile)
nodeName := core.GetHostname()
nodeVersion := core.GetKernelVersion()
var ts time.Time
rulesTotal := len(c.rules.GetAll())
ruleList := make([]*protocol.Rule, rulesTotal)
idx := 0
for _, r := range c.rules.GetAll() {
ruleList[idx] = r.Serialize()
idx++
}
sysfw, err := firewall.Serialize()
if err != nil {
log.Warning("firewall.Serialize() error: %s", err)
}
return &protocol.ClientConfig{
Id: uint64(ts.UnixNano()),
Name: nodeName,
Version: nodeVersion,
IsFirewallRunning: firewall.IsRunning(),
Config: strings.Replace(string(raw), "\n", "", -1),
LogLevel: uint32(log.MinLevel),
Rules: ruleList,
SystemFirewall: sysfw,
}
}
func (c *Client) monitorProcessDetails(pid int, stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
p := procmon.NewProcess(pid, "")
p.GetInfo()
ticker := time.NewTicker(2 * time.Second)
for {
select {
case _pid := <-stopMonitoringProcess:
if _pid != pid {
continue
}
goto Exit
case <-ticker.C:
if err := p.GetExtraInfo(); err != nil {
c.sendNotificationReply(stream, notification.Id, notification.Data, err)
goto Exit
}
pJSON, err := json.Marshal(p)
notification.Data = string(pJSON)
if errs := c.sendNotificationReply(stream, notification.Id, notification.Data, err); errs != nil {
goto Exit
}
}
}
Exit:
ticker.Stop()
}
func (c *Client) handleActionChangeConfig(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
log.Info("[notification] Reloading configuration")
// Parse received configuration first, to get the new proc monitor method.
newConf, err := config.Parse(notification.Data)
if err != nil {
log.Warning("[notification] error parsing received config: %v", notification.Data)
c.sendNotificationReply(stream, notification.Id, "", err)
return
}
if c.GetFirewallType() != newConf.Firewall {
firewall.ChangeFw(newConf.Firewall)
}
if err := monitor.ReconfigureMonitorMethod(
newConf.ProcMonitorMethod,
clientConfig.Ebpf.ModulesPath,
); err != nil {
c.sendNotificationReply(stream, notification.Id, "", err.Msg)
return
}
// this save operation triggers a re-loadConfiguration()
err = config.Save(configFile, notification.Data)
if err != nil {
log.Warning("[notification] CHANGE_CONFIG not applied %s", err)
}
c.sendNotificationReply(stream, notification.Id, "", err)
}
func (c *Client) handleActionEnableRule(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
var err error
for _, rul := range notification.Rules {
log.Info("[notification] enable rule: %s", rul.Name)
// protocol.Rule(protobuf) != rule.Rule(json)
r, _ := rule.Deserialize(rul)
r.Enabled = true
// save to disk only if the duration is rule.Always
err = c.rules.Replace(r, r.Duration == rule.Always)
}
c.sendNotificationReply(stream, notification.Id, "", err)
}
func (c *Client) handleActionDisableRule(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
var err error
for _, rul := range notification.Rules {
log.Info("[notification] disable rule: %s", rul)
r, _ := rule.Deserialize(rul)
r.Enabled = false
err = c.rules.Replace(r, r.Duration == rule.Always)
}
c.sendNotificationReply(stream, notification.Id, "", err)
}
func (c *Client) handleActionChangeRule(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
var rErr error
for _, rul := range notification.Rules {
r, err := rule.Deserialize(rul)
if r == nil {
rErr = fmt.Errorf("Invalid rule, %s", err)
continue
}
log.Info("[notification] change rule: %s %d", r, notification.Id)
if err := c.rules.Replace(r, r.Duration == rule.Always); err != nil {
log.Warning("[notification] Error changing rule: %s %s", err, r)
rErr = err
}
}
c.sendNotificationReply(stream, notification.Id, "", rErr)
}
func (c *Client) handleActionDeleteRule(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
var err error
for _, rul := range notification.Rules {
log.Info("[notification] delete rule: %s %d", rul.Name, notification.Id)
err = c.rules.Delete(rul.Name)
if err != nil {
log.Error("[notification] Error deleting rule: %s %s", err, rul)
}
}
c.sendNotificationReply(stream, notification.Id, "", err)
}
func (c *Client) handleActionMonitorProcess(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
pid, err := strconv.Atoi(notification.Data)
if err != nil {
log.Error("parsing PID to monitor: %d, err: %s", pid, err)
return
}
if !core.Exists(fmt.Sprint("/proc/", pid)) {
c.sendNotificationReply(stream, notification.Id, "", fmt.Errorf("The process is no longer running"))
return
}
go c.monitorProcessDetails(pid, stream, notification)
}
func (c *Client) handleActionStopMonitorProcess(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
pid, err := strconv.Atoi(notification.Data)
if err != nil {
log.Error("parsing PID to stop monitor: %d, err: %s", pid, err)
c.sendNotificationReply(stream, notification.Id, "", fmt.Errorf("Error stopping monitor: %s", notification.Data))
return
}
stopMonitoringProcess <- pid
c.sendNotificationReply(stream, notification.Id, "", nil)
}
func (c *Client) handleActionReloadFw(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
log.Info("[notification] reloading firewall")
sysfw, err := firewall.Deserialize(notification.SysFirewall)
if err != nil {
log.Warning("firewall.Deserialize() error: %s", err)
c.sendNotificationReply(stream, notification.Id, "", fmt.Errorf("Error reloading firewall, invalid rules"))
return
}
if err := firewall.SaveConfiguration(sysfw); err != nil {
c.sendNotificationReply(stream, notification.Id, "", fmt.Errorf("Error saving system firewall rules: %s", err))
return
}
// TODO:
// - add new API endpoints to delete, add or change rules atomically.
// - a global goroutine where errors can be sent to the server (GUI).
go func(c *Client) {
var errors string
for {
select {
case fwerr := <-firewall.ErrorsChan():
errors = fmt.Sprint(errors, fwerr, ",")
if firewall.ErrChanEmpty() {
goto ExitWithError
}
// FIXME: can this operation last longer than 2s? if there're more than.. 100...10000 rules?
case <-time.After(2 * time.Second):
log.Debug("[notification] reload firewall. timeout fired, no errors?")
c.sendNotificationReply(stream, notification.Id, "", nil)
goto Exit
}
}
ExitWithError:
c.sendNotificationReply(stream, notification.Id, "", fmt.Errorf("%s", errors))
Exit:
}(c)
}
func (c *Client) handleNotification(stream protocol.UI_NotificationsClient, notification *protocol.Notification) {
switch {
case notification.Type == protocol.Action_MONITOR_PROCESS:
c.handleActionMonitorProcess(stream, notification)
case notification.Type == protocol.Action_STOP_MONITOR_PROCESS:
c.handleActionStopMonitorProcess(stream, notification)
case notification.Type == protocol.Action_CHANGE_CONFIG:
c.handleActionChangeConfig(stream, notification)
case notification.Type == protocol.Action_ENABLE_INTERCEPTION:
log.Info("[notification] starting interception")
if err := firewall.EnableInterception(); err != nil {
log.Warning("firewall.EnableInterception() error: %s", err)
c.sendNotificationReply(stream, notification.Id, "", err)
return
}
c.sendNotificationReply(stream, notification.Id, "", nil)
case notification.Type == protocol.Action_DISABLE_INTERCEPTION:
log.Info("[notification] stopping interception")
if err := firewall.DisableInterception(); err != nil {
log.Warning("firewall.DisableInterception() error: %s", err)
c.sendNotificationReply(stream, notification.Id, "", err)
return
}
c.sendNotificationReply(stream, notification.Id, "", nil)
case notification.Type == protocol.Action_RELOAD_FW_RULES:
c.handleActionReloadFw(stream, notification)
// ENABLE_RULE just replaces the rule on disk
case notification.Type == protocol.Action_ENABLE_RULE:
c.handleActionEnableRule(stream, notification)
case notification.Type == protocol.Action_DISABLE_RULE:
c.handleActionDisableRule(stream, notification)
case notification.Type == protocol.Action_DELETE_RULE:
c.handleActionDeleteRule(stream, notification)
// CHANGE_RULE can add or replace an existing rule.
case notification.Type == protocol.Action_CHANGE_RULE:
c.handleActionChangeRule(stream, notification)
}
}
func (c *Client) sendNotificationReply(stream protocol.UI_NotificationsClient, nID uint64, data string, err error) error {
reply := NewReply(nID, protocol.NotificationReplyCode_OK, data)
if err != nil {
reply.Code = protocol.NotificationReplyCode_ERROR
reply.Data = fmt.Sprint(err)
}
if err := stream.Send(reply); err != nil {
log.Error("Error replying to notification: %s %d", err, reply.Id)
return err
}
return nil
}
// Subscribe opens a connection with the server (UI), to start
// receiving notifications.
// It first sends the daemon status and configuration.
func (c *Client) Subscribe() {
ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
defer cancel()
clientCfg, err := c.client.Subscribe(ctx, c.getClientConfig())
if err != nil {
log.Error("Subscribing to GUI %s", err)
// When connecting to the GUI via TCP, sometimes the notifications channel is
// not established, and the main channel is never closed.
// We need to disconnect everything after a timeout and try it again.
c.disconnect()
return
}
if tempConf, err := config.Parse(clientCfg.Config); err == nil {
c.Lock()
clientConnectedRule.Action = rule.Action(tempConf.DefaultAction)
c.Unlock()
}
c.listenForNotifications()
}
// Notifications is the channel where the daemon receives messages from the server.
// It consists of 2 gRPC streams (send/receive) that are never closed,
// so we can exchange messages in real time.
// If the GUI is closed, we'll receive an error reading from the channel.
func (c *Client) listenForNotifications() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// open the stream channel
streamReply := &protocol.NotificationReply{Id: 0, Code: protocol.NotificationReplyCode_OK}
notisStream, err := c.client.Notifications(ctx)
if err != nil {
log.Error("establishing notifications channel %s", err)
return
}
// send the first notification
if err := notisStream.Send(streamReply); err != nil {
log.Error("sending notification HELLO %s", err)
return
}
log.Info("Start receiving notifications")
for {
select {
case <-c.clientCtx.Done():
goto Exit
default:
noti, err := notisStream.Recv()
if err == io.EOF {
log.Warning("notification channel closed by the server")
goto Exit
}
if err != nil {
log.Error("getting notifications: %s %s", err, noti)
goto Exit
}
c.handleNotification(notisStream, noti)
}
}
Exit:
notisStream.CloseSend()
log.Info("Stop receiving notifications")
c.disconnect()
}
opensnitch-1.6.9/daemon/ui/protocol/.gitkeep
opensnitch-1.6.9/daemon/ui/testdata/default-config.json
{"Server":{"Address":"unix:///run/user/1000/opensnitch/osui.sock","Authentication":{"Type":"","TLSOptions":{"CACert":"","ServerCert":"","ServerKey":"","ClientCert":"","ClientKey":"","SkipVerify":false,"ClientAuthType":""}},"LogFile":"","Loggers":null},"DefaultAction":"deny","DefaultDuration":"once","InterceptUnknown":true,"ProcMonitorMethod":"proc","LogLevel":null,"LogUTC":false,"LogMicro":false,"Firewall":"iptables","Stats":{"MaxEvents":0,"MaxStats":0,"Workers":0}}
opensnitch-1.6.9/daemon/ui/testdata/orig-default-config.json
{
"Server":
{
"Address":"unix:///tmp/osui.sock",
"LogFile":"/var/log/opensnitchd.log"
},
"DefaultAction": "allow",
"DefaultDuration": "once",
"InterceptUnknown": false,
"ProcMonitorMethod": "ebpf",
"LogLevel": 2,
"LogUTC": true,
"LogMicro": false,
"Firewall": "nftables",
"Stats": {
"MaxEvents": 150,
"MaxStats": 25,
"Workers": 6
}
}
opensnitch-1.6.9/ebpf_prog/Makefile
# OpenSnitch - 2023
#
# On Debian based distros we need the following 2 directories.
# Otherwise, just use the kernel headers from the kernel sources.
#
KERNEL_DIR ?= /lib/modules/$(shell uname -r)/source
KERNEL_HEADERS ?= /usr/src/linux-headers-$(shell uname -r)/
CLANG ?= clang
LLC ?= llc
LLVM_STRIP ?= llvm-strip -g
ARCH ?= $(shell uname -m)
# as in /usr/src/linux-headers-*/arch/
# TODO: extract the archs correctly, and add more if needed.
ifeq ($(ARCH),x86_64)
ARCH := x86
else ifeq ($(ARCH),i686)
ARCH := x86
else ifeq ($(ARCH),armv7l)
ARCH := arm
else ifeq ($(ARCH),aarch64)
ARCH := arm64
endif
ifeq ($(ARCH),arm)
# on previous archs, it fails with "SMP not supported on pre-ARMv6"
EXTRA_FLAGS = "-D__LINUX_ARM_ARCH__=7"
endif
BIN := opensnitch.o opensnitch-procs.o opensnitch-dns.o
CLANG_FLAGS = -I. \
-I$(KERNEL_HEADERS)/arch/$(ARCH)/include/generated/ \
-I$(KERNEL_HEADERS)/include \
-include $(KERNEL_DIR)/include/linux/kconfig.h \
-I$(KERNEL_DIR)/include \
-I$(KERNEL_DIR)/include/uapi \
-I$(KERNEL_DIR)/include/generated/uapi \
-I$(KERNEL_DIR)/arch/$(ARCH)/include \
-I$(KERNEL_DIR)/arch/$(ARCH)/include/generated \
-I$(KERNEL_DIR)/arch/$(ARCH)/include/uapi \
-I$(KERNEL_DIR)/arch/$(ARCH)/include/generated/uapi \
-I$(KERNEL_DIR)/tools/testing/selftests/bpf/ \
-D__KERNEL__ -D__BPF_TRACING__ -Wno-unused-value -Wno-pointer-sign \
-D__TARGET_ARCH_$(ARCH) -Wno-compare-distinct-pointer-types \
$(EXTRA_FLAGS) \
-Wno-gnu-variable-sized-type-not-at-end \
-Wno-address-of-packed-member -Wno-tautological-compare \
-Wno-unknown-warning-option \
-g -O2 -emit-llvm
all: $(BIN)
%.o: %.c
$(CLANG) $(CLANG_FLAGS) -c $< -o $@.partial
$(LLC) -march=bpf -mcpu=generic -filetype=obj -o $@ $@.partial
rm -f $@.partial
clean:
rm -f *.o *.partial
opensnitch-1.6.9/ebpf_prog/README
Compilation requires getting kernel sources for now.
There's a helper script to automate this process:
https://github.com/evilsocket/opensnitch/blob/master/utils/packaging/build_modules.sh
The basic steps to compile the modules are:
sudo apt install clang llvm libelf-dev libzip-dev flex bison libssl-dev bc rsync python3
cd opensnitch
wget https://github.com/torvalds/linux/archive/v5.8.tar.gz
tar -xf v5.8.tar.gz
cp ebpf_prog/opensnitch*.c ebpf_prog/common* ebpf_prog/Makefile linux-5.8/samples/bpf/
cp -r ebpf_prog/bpf_headers/ linux-5.8/samples/bpf/
cd linux-5.8 && yes "" | make oldconfig && make prepare && make headers_install # (1 min)
cd samples/bpf && make KERNEL_DIR=../../linux-5.8/
objdump -h opensnitch.o # you should see many sections, number 1 should be called kprobe/tcp_v4_connect
llvm-strip -g opensnitch*.o # remove debug info
sudo cp opensnitch*.o /usr/lib/opensnitchd/ebpf/ # or /etc/opensnitchd for < v1.6.x
cd ../../../daemon
Since v1.6.0, opensnitchd expects to find the opensnitch*.o modules under:
/usr/local/lib/opensnitchd/ebpf/
/usr/lib/opensnitchd/ebpf/
/etc/opensnitchd/ # deprecated, only on < v1.5.x
start opensnitchd with:
opensnitchd -rules-path /etc/opensnitchd/rules -process-monitor-method ebpf
---
### Compiling for Fedora (and other rpm-based systems)
You need to install the kernel-devel, clang and llvm packages.
Then: `cd ebpf_prog/ ; make KERNEL_DIR=/usr/src/kernels/$(uname -r)/`
(or just pass the kernel version you want)
### Notes
The kernel where you intend to run it must have some options activated:
$ grep BPF /boot/config-$(uname -r)
CONFIG_CGROUP_BPF=y
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_EVENTS=y
CONFIG_KPROBES=y
CONFIG_KPROBE_EVENTS=y
For the opensnitch-procs.o module to work, this option must be enabled:
$ grep FTRACE_SYSCALLS /boot/config-$(uname -r)
CONFIG_FTRACE_SYSCALLS=y
(https://github.com/iovisor/bcc/blob/master/docs/kernel_config.md)
Also, in some distributions debugfs is not mounted automatically.
Since v1.6.0 we try to mount it automatically. If you're running
an older version, you'll need to mount it manually:
$ sudo mount -t debugfs none /sys/kernel/debug
In order to make it permanent add it to /etc/fstab:
debugfs /sys/kernel/debug debugfs defaults 0 0
opensnitch-procs.o and opensnitch-dns.o are only compatible with kernels >= 5.5;
the bpf_probe_read_user*() helpers were added in that kernel version:
https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md#helpers
opensnitch-1.6.9/ebpf_prog/arm-clang-asm-fix.patch 0000664 0000000 0000000 00000000601 15003540030 0022053 0 ustar 00root root 0000000 0000000 --- ../../arch/arm/include/asm/unified.h 2021-04-20 10:47:54.075834124 +0000
+++ ../../arch/arm/include/asm/unified-clang-fix.h 2021-04-20 10:47:38.943811970 +0000
@@ -11,7 +11,10 @@
#if defined(__ASSEMBLY__)
.syntax unified
#else
-__asm__(".syntax unified");
+//__asm__(".syntax unified");
+#ifndef __clang__
+ __asm__(".syntax unified");
+#endif
#endif
#ifdef CONFIG_CPU_V7M
opensnitch-1.6.9/ebpf_prog/bpf_headers/ 0000775 0000000 0000000 00000000000 15003540030 0020074 5 ustar 00root root 0000000 0000000 opensnitch-1.6.9/ebpf_prog/bpf_headers/bpf_core_read.h 0000664 0000000 0000000 00000046132 15003540030 0023025 0 ustar 00root root 0000000 0000000 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
#ifndef __BPF_CORE_READ_H__
#define __BPF_CORE_READ_H__
/*
* enum bpf_field_info_kind is passed as a second argument into
* __builtin_preserve_field_info() built-in to get a specific aspect of
* a field, captured as a first argument. __builtin_preserve_field_info(field,
* info_kind) returns __u32 integer and produces BTF field relocation, which
* is understood and processed by libbpf during BPF object loading. See
* selftests/bpf for examples.
*/
enum bpf_field_info_kind {
BPF_FIELD_BYTE_OFFSET = 0, /* field byte offset */
BPF_FIELD_BYTE_SIZE = 1,
BPF_FIELD_EXISTS = 2, /* field existence in target kernel */
BPF_FIELD_SIGNED = 3,
BPF_FIELD_LSHIFT_U64 = 4,
BPF_FIELD_RSHIFT_U64 = 5,
};
/* second argument to __builtin_btf_type_id() built-in */
enum bpf_type_id_kind {
BPF_TYPE_ID_LOCAL = 0, /* BTF type ID in local program */
BPF_TYPE_ID_TARGET = 1, /* BTF type ID in target kernel */
};
/* second argument to __builtin_preserve_type_info() built-in */
enum bpf_type_info_kind {
BPF_TYPE_EXISTS = 0, /* type existence in target kernel */
BPF_TYPE_SIZE = 1, /* type size in target kernel */
BPF_TYPE_MATCHES = 2, /* type match in target kernel */
};
/* second argument to __builtin_preserve_enum_value() built-in */
enum bpf_enum_value_kind {
BPF_ENUMVAL_EXISTS = 0, /* enum value existence in kernel */
BPF_ENUMVAL_VALUE = 1, /* enum value value relocation */
};
#define __CORE_RELO(src, field, info) \
__builtin_preserve_field_info((src)->field, BPF_FIELD_##info)
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define __CORE_BITFIELD_PROBE_READ(dst, src, fld) \
bpf_probe_read_kernel( \
(void *)dst, \
__CORE_RELO(src, fld, BYTE_SIZE), \
(const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
#else
/* semantics of LSHIFT_64 assumes loading values into low-ordered bytes, so
* for big-endian we need to adjust destination pointer accordingly, based on
* field byte size
*/
#define __CORE_BITFIELD_PROBE_READ(dst, src, fld) \
bpf_probe_read_kernel( \
(void *)dst + (8 - __CORE_RELO(src, fld, BYTE_SIZE)), \
__CORE_RELO(src, fld, BYTE_SIZE), \
(const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
#endif
/*
* Extract bitfield, identified by s->field, and return its value as u64.
* All this is done in a relocatable manner, so bitfield changes such as
* signedness, bit size, or offset changes are handled automatically.
* This version of the macro uses bpf_probe_read_kernel() to read the underlying
* integer storage. The macro functions as an expression and its return type is
* bpf_probe_read_kernel()'s return value: 0, on success, <0 on error.
*/
#define BPF_CORE_READ_BITFIELD_PROBED(s, field) ({ \
unsigned long long val = 0; \
\
__CORE_BITFIELD_PROBE_READ(&val, s, field); \
val <<= __CORE_RELO(s, field, LSHIFT_U64); \
if (__CORE_RELO(s, field, SIGNED)) \
val = ((long long)val) >> __CORE_RELO(s, field, RSHIFT_U64); \
else \
val = val >> __CORE_RELO(s, field, RSHIFT_U64); \
val; \
})
/*
* Extract bitfield, identified by s->field, and return its value as u64.
* This version of the macro uses direct memory reads and should be used from
* BPF program types that support such functionality (e.g., typed raw
* tracepoints).
*/
#define BPF_CORE_READ_BITFIELD(s, field) ({ \
const void *p = (const void *)s + __CORE_RELO(s, field, BYTE_OFFSET); \
unsigned long long val; \
\
/* This is a so-called barrier_var() operation that makes specified \
* variable "a black box" for optimizing compiler. \
* It forces compiler to perform BYTE_OFFSET relocation on p and use \
* its calculated value in the switch below, instead of applying \
* the same relocation 4 times for each individual memory load. \
*/ \
asm volatile("" : "=r"(p) : "0"(p)); \
\
switch (__CORE_RELO(s, field, BYTE_SIZE)) { \
case 1: val = *(const unsigned char *)p; break; \
case 2: val = *(const unsigned short *)p; break; \
case 4: val = *(const unsigned int *)p; break; \
case 8: val = *(const unsigned long long *)p; break; \
} \
val <<= __CORE_RELO(s, field, LSHIFT_U64); \
if (__CORE_RELO(s, field, SIGNED)) \
val = ((long long)val) >> __CORE_RELO(s, field, RSHIFT_U64); \
else \
val = val >> __CORE_RELO(s, field, RSHIFT_U64); \
val; \
})
#define ___bpf_field_ref1(field) (field)
#define ___bpf_field_ref2(type, field) (((typeof(type) *)0)->field)
#define ___bpf_field_ref(args...) \
___bpf_apply(___bpf_field_ref, ___bpf_narg(args))(args)
/*
* Convenience macro to check that field actually exists in the target kernel's BTF.
* Returns:
* 1, if matching field is present in target kernel;
* 0, if no matching field found.
*
* Supports two forms:
* - field reference through variable access:
* bpf_core_field_exists(p->my_field);
* - field reference through type and field names:
* bpf_core_field_exists(struct my_type, my_field).
*/
#define bpf_core_field_exists(field...) \
__builtin_preserve_field_info(___bpf_field_ref(field), BPF_FIELD_EXISTS)
/*
* Convenience macro to get the byte size of a field. Works for integers,
* struct/unions, pointers, arrays, and enums.
*
* Supports two forms:
* - field reference through variable access:
* bpf_core_field_size(p->my_field);
* - field reference through type and field names:
* bpf_core_field_size(struct my_type, my_field).
*/
#define bpf_core_field_size(field...) \
__builtin_preserve_field_info(___bpf_field_ref(field), BPF_FIELD_BYTE_SIZE)
/*
* Convenience macro to get field's byte offset.
*
* Supports two forms:
* - field reference through variable access:
* bpf_core_field_offset(p->my_field);
* - field reference through type and field names:
* bpf_core_field_offset(struct my_type, my_field).
*/
#define bpf_core_field_offset(field...) \
__builtin_preserve_field_info(___bpf_field_ref(field), BPF_FIELD_BYTE_OFFSET)
/*
* Convenience macro to get BTF type ID of a specified type, using a local BTF
* information. Return 32-bit unsigned integer with type ID from program's own
* BTF. Always succeeds.
*/
#define bpf_core_type_id_local(type) \
__builtin_btf_type_id(*(typeof(type) *)0, BPF_TYPE_ID_LOCAL)
/*
* Convenience macro to get BTF type ID of a target kernel's type that matches
* specified local type.
* Returns:
* - valid 32-bit unsigned type ID in kernel BTF;
* - 0, if no matching type was found in a target kernel BTF.
*/
#define bpf_core_type_id_kernel(type) \
__builtin_btf_type_id(*(typeof(type) *)0, BPF_TYPE_ID_TARGET)
/*
* Convenience macro to check that provided named type
* (struct/union/enum/typedef) exists in a target kernel.
* Returns:
* 1, if such type is present in target kernel's BTF;
* 0, if no matching type is found.
*/
#define bpf_core_type_exists(type) \
__builtin_preserve_type_info(*(typeof(type) *)0, BPF_TYPE_EXISTS)
/*
* Convenience macro to check that provided named type
* (struct/union/enum/typedef) "matches" that in a target kernel.
* Returns:
* 1, if the type matches in the target kernel's BTF;
* 0, if the type does not match any in the target kernel
*/
#define bpf_core_type_matches(type) \
__builtin_preserve_type_info(*(typeof(type) *)0, BPF_TYPE_MATCHES)
/*
* Convenience macro to get the byte size of a provided named type
* (struct/union/enum/typedef) in a target kernel.
* Returns:
* >= 0 size (in bytes), if type is present in target kernel's BTF;
* 0, if no matching type is found.
*/
#define bpf_core_type_size(type) \
__builtin_preserve_type_info(*(typeof(type) *)0, BPF_TYPE_SIZE)
/*
* Convenience macro to check that provided enumerator value is defined in
* a target kernel.
* Returns:
* 1, if specified enum type and its enumerator value are present in target
* kernel's BTF;
* 0, if no matching enum and/or enum value within that enum is found.
*/
#define bpf_core_enum_value_exists(enum_type, enum_value) \
__builtin_preserve_enum_value(*(typeof(enum_type) *)enum_value, BPF_ENUMVAL_EXISTS)
/*
* Convenience macro to get the integer value of an enumerator value in
* a target kernel.
* Returns:
* 64-bit value, if specified enum type and its enumerator value are
* present in target kernel's BTF;
* 0, if no matching enum and/or enum value within that enum is found.
*/
#define bpf_core_enum_value(enum_type, enum_value) \
__builtin_preserve_enum_value(*(typeof(enum_type) *)enum_value, BPF_ENUMVAL_VALUE)
/*
* bpf_core_read() abstracts away bpf_probe_read_kernel() call and captures
* offset relocation for source address using __builtin_preserve_access_index()
* built-in, provided by Clang.
*
* __builtin_preserve_access_index() takes as an argument an expression of
* taking an address of a field within struct/union. It makes compiler emit
* a relocation, which records BTF type ID describing root struct/union and an
* accessor string which describes exact embedded field that was used to take
* an address. See detailed description of this relocation format and
* semantics in comments to struct bpf_field_reloc in libbpf_internal.h.
*
* This relocation allows libbpf to adjust BPF instruction to use correct
* actual field offset, based on target kernel BTF type that matches original
* (local) BTF, used to record relocation.
*/
#define bpf_core_read(dst, sz, src) \
bpf_probe_read_kernel(dst, sz, (const void *)__builtin_preserve_access_index(src))
/* NOTE: see comments for BPF_CORE_READ_USER() about the proper types use. */
#define bpf_core_read_user(dst, sz, src) \
bpf_probe_read_user(dst, sz, (const void *)__builtin_preserve_access_index(src))
/*
* bpf_core_read_str() is a thin wrapper around bpf_probe_read_str()
* additionally emitting BPF CO-RE field relocation for specified source
* argument.
*/
#define bpf_core_read_str(dst, sz, src) \
bpf_probe_read_kernel_str(dst, sz, (const void *)__builtin_preserve_access_index(src))
/* NOTE: see comments for BPF_CORE_READ_USER() about the proper types use. */
#define bpf_core_read_user_str(dst, sz, src) \
bpf_probe_read_user_str(dst, sz, (const void *)__builtin_preserve_access_index(src))
#define ___concat(a, b) a ## b
#define ___apply(fn, n) ___concat(fn, n)
#define ___nth(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, __11, N, ...) N
/*
* return number of provided arguments; used for switch-based variadic macro
* definitions (see ___last, ___arrow, etc below)
*/
#define ___narg(...) ___nth(_, ##__VA_ARGS__, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
/*
* return 0 if no arguments are passed, N - otherwise; used for
* recursively-defined macros to specify termination (0) case, and generic
* (N) case (e.g., ___read_ptrs, ___core_read)
*/
#define ___empty(...) ___nth(_, ##__VA_ARGS__, N, N, N, N, N, N, N, N, N, N, 0)
#define ___last1(x) x
#define ___last2(a, x) x
#define ___last3(a, b, x) x
#define ___last4(a, b, c, x) x
#define ___last5(a, b, c, d, x) x
#define ___last6(a, b, c, d, e, x) x
#define ___last7(a, b, c, d, e, f, x) x
#define ___last8(a, b, c, d, e, f, g, x) x
#define ___last9(a, b, c, d, e, f, g, h, x) x
#define ___last10(a, b, c, d, e, f, g, h, i, x) x
#define ___last(...) ___apply(___last, ___narg(__VA_ARGS__))(__VA_ARGS__)
#define ___nolast2(a, _) a
#define ___nolast3(a, b, _) a, b
#define ___nolast4(a, b, c, _) a, b, c
#define ___nolast5(a, b, c, d, _) a, b, c, d
#define ___nolast6(a, b, c, d, e, _) a, b, c, d, e
#define ___nolast7(a, b, c, d, e, f, _) a, b, c, d, e, f
#define ___nolast8(a, b, c, d, e, f, g, _) a, b, c, d, e, f, g
#define ___nolast9(a, b, c, d, e, f, g, h, _) a, b, c, d, e, f, g, h
#define ___nolast10(a, b, c, d, e, f, g, h, i, _) a, b, c, d, e, f, g, h, i
#define ___nolast(...) ___apply(___nolast, ___narg(__VA_ARGS__))(__VA_ARGS__)
#define ___arrow1(a) a
#define ___arrow2(a, b) a->b
#define ___arrow3(a, b, c) a->b->c
#define ___arrow4(a, b, c, d) a->b->c->d
#define ___arrow5(a, b, c, d, e) a->b->c->d->e
#define ___arrow6(a, b, c, d, e, f) a->b->c->d->e->f
#define ___arrow7(a, b, c, d, e, f, g) a->b->c->d->e->f->g
#define ___arrow8(a, b, c, d, e, f, g, h) a->b->c->d->e->f->g->h
#define ___arrow9(a, b, c, d, e, f, g, h, i) a->b->c->d->e->f->g->h->i
#define ___arrow10(a, b, c, d, e, f, g, h, i, j) a->b->c->d->e->f->g->h->i->j
#define ___arrow(...) ___apply(___arrow, ___narg(__VA_ARGS__))(__VA_ARGS__)
#define ___type(...) typeof(___arrow(__VA_ARGS__))
#define ___read(read_fn, dst, src_type, src, accessor) \
read_fn((void *)(dst), sizeof(*(dst)), &((src_type)(src))->accessor)
/* "recursively" read a sequence of inner pointers using local __t var */
#define ___rd_first(fn, src, a) ___read(fn, &__t, ___type(src), src, a);
#define ___rd_last(fn, ...) \
___read(fn, &__t, ___type(___nolast(__VA_ARGS__)), __t, ___last(__VA_ARGS__));
#define ___rd_p1(fn, ...) const void *__t; ___rd_first(fn, __VA_ARGS__)
#define ___rd_p2(fn, ...) ___rd_p1(fn, ___nolast(__VA_ARGS__)) ___rd_last(fn, __VA_ARGS__)
#define ___rd_p3(fn, ...) ___rd_p2(fn, ___nolast(__VA_ARGS__)) ___rd_last(fn, __VA_ARGS__)
#define ___rd_p4(fn, ...) ___rd_p3(fn, ___nolast(__VA_ARGS__)) ___rd_last(fn, __VA_ARGS__)
#define ___rd_p5(fn, ...) ___rd_p4(fn, ___nolast(__VA_ARGS__)) ___rd_last(fn, __VA_ARGS__)
#define ___rd_p6(fn, ...) ___rd_p5(fn, ___nolast(__VA_ARGS__)) ___rd_last(fn, __VA_ARGS__)
#define ___rd_p7(fn, ...) ___rd_p6(fn, ___nolast(__VA_ARGS__)) ___rd_last(fn, __VA_ARGS__)
#define ___rd_p8(fn, ...) ___rd_p7(fn, ___nolast(__VA_ARGS__)) ___rd_last(fn, __VA_ARGS__)
#define ___rd_p9(fn, ...) ___rd_p8(fn, ___nolast(__VA_ARGS__)) ___rd_last(fn, __VA_ARGS__)
#define ___read_ptrs(fn, src, ...) \
___apply(___rd_p, ___narg(__VA_ARGS__))(fn, src, __VA_ARGS__)
#define ___core_read0(fn, fn_ptr, dst, src, a) \
___read(fn, dst, ___type(src), src, a);
#define ___core_readN(fn, fn_ptr, dst, src, ...) \
___read_ptrs(fn_ptr, src, ___nolast(__VA_ARGS__)) \
___read(fn, dst, ___type(src, ___nolast(__VA_ARGS__)), __t, \
___last(__VA_ARGS__));
#define ___core_read(fn, fn_ptr, dst, src, a, ...) \
___apply(___core_read, ___empty(__VA_ARGS__))(fn, fn_ptr, dst, \
src, a, ##__VA_ARGS__)
/*
* BPF_CORE_READ_INTO() is a more performance-conscious variant of
* BPF_CORE_READ(), in which final field is read into user-provided storage.
* See BPF_CORE_READ() below for more details on general usage.
*/
#define BPF_CORE_READ_INTO(dst, src, a, ...) ({ \
___core_read(bpf_core_read, bpf_core_read, \
dst, (src), a, ##__VA_ARGS__) \
})
/*
* Variant of BPF_CORE_READ_INTO() for reading from user-space memory.
*
* NOTE: see comments for BPF_CORE_READ_USER() about the proper types use.
*/
#define BPF_CORE_READ_USER_INTO(dst, src, a, ...) ({ \
___core_read(bpf_core_read_user, bpf_core_read_user, \
dst, (src), a, ##__VA_ARGS__) \
})
/* Non-CO-RE variant of BPF_CORE_READ_INTO() */
#define BPF_PROBE_READ_INTO(dst, src, a, ...) ({ \
___core_read(bpf_probe_read, bpf_probe_read, \
dst, (src), a, ##__VA_ARGS__) \
})
/* Non-CO-RE variant of BPF_CORE_READ_USER_INTO().
*
* As no CO-RE relocations are emitted, source types can be arbitrary and are
* not restricted to kernel types only.
*/
#define BPF_PROBE_READ_USER_INTO(dst, src, a, ...) ({ \
___core_read(bpf_probe_read_user, bpf_probe_read_user, \
dst, (src), a, ##__VA_ARGS__) \
})
/*
* BPF_CORE_READ_STR_INTO() does same "pointer chasing" as
* BPF_CORE_READ() for intermediate pointers, but then executes (and returns
* corresponding error code) bpf_core_read_str() for final string read.
*/
#define BPF_CORE_READ_STR_INTO(dst, src, a, ...) ({ \
___core_read(bpf_core_read_str, bpf_core_read, \
dst, (src), a, ##__VA_ARGS__) \
})
/*
* Variant of BPF_CORE_READ_STR_INTO() for reading from user-space memory.
*
* NOTE: see comments for BPF_CORE_READ_USER() about the proper types use.
*/
#define BPF_CORE_READ_USER_STR_INTO(dst, src, a, ...) ({ \
___core_read(bpf_core_read_user_str, bpf_core_read_user, \
dst, (src), a, ##__VA_ARGS__) \
})
/* Non-CO-RE variant of BPF_CORE_READ_STR_INTO() */
#define BPF_PROBE_READ_STR_INTO(dst, src, a, ...) ({ \
___core_read(bpf_probe_read_str, bpf_probe_read, \
dst, (src), a, ##__VA_ARGS__) \
})
/*
* Non-CO-RE variant of BPF_CORE_READ_USER_STR_INTO().
*
* As no CO-RE relocations are emitted, source types can be arbitrary and are
* not restricted to kernel types only.
*/
#define BPF_PROBE_READ_USER_STR_INTO(dst, src, a, ...) ({ \
___core_read(bpf_probe_read_user_str, bpf_probe_read_user, \
dst, (src), a, ##__VA_ARGS__) \
})
/*
* BPF_CORE_READ() is used to simplify BPF CO-RE relocatable read, especially
* when there are few pointer chasing steps.
* E.g., what in non-BPF world (or in BPF w/ BCC) would be something like:
* int x = s->a.b.c->d.e->f->g;
* can be succinctly achieved using BPF_CORE_READ as:
* int x = BPF_CORE_READ(s, a.b.c, d.e, f, g);
*
* BPF_CORE_READ will decompose above statement into 4 bpf_core_read (BPF
* CO-RE relocatable bpf_probe_read_kernel() wrapper) calls, logically
* equivalent to:
* 1. const void *__t = s->a.b.c;
* 2. __t = __t->d.e;
* 3. __t = __t->f;
* 4. return __t->g;
*
* Equivalence is logical, because there is a heavy type casting/preservation
* involved, as well as all the reads are happening through
* bpf_probe_read_kernel() calls using __builtin_preserve_access_index() to
* emit CO-RE relocations.
*
* N.B. Only up to 9 "field accessors" are supported, which should be more
* than enough for any practical purpose.
*/
#define BPF_CORE_READ(src, a, ...) ({ \
___type((src), a, ##__VA_ARGS__) __r; \
BPF_CORE_READ_INTO(&__r, (src), a, ##__VA_ARGS__); \
__r; \
})
/*
* Variant of BPF_CORE_READ() for reading from user-space memory.
*
* NOTE: all the source types involved are still *kernel types* and need to
* exist in kernel (or kernel module) BTF, otherwise CO-RE relocation will
* fail. Custom user types are not relocatable with CO-RE.
* The typical situation in which BPF_CORE_READ_USER() might be used is to
* read kernel UAPI types from the user-space memory passed in as a syscall
* input argument.
*/
#define BPF_CORE_READ_USER(src, a, ...) ({ \
___type((src), a, ##__VA_ARGS__) __r; \
BPF_CORE_READ_USER_INTO(&__r, (src), a, ##__VA_ARGS__); \
__r; \
})
/* Non-CO-RE variant of BPF_CORE_READ() */
#define BPF_PROBE_READ(src, a, ...) ({ \
___type((src), a, ##__VA_ARGS__) __r; \
BPF_PROBE_READ_INTO(&__r, (src), a, ##__VA_ARGS__); \
__r; \
})
/*
* Non-CO-RE variant of BPF_CORE_READ_USER().
*
* As no CO-RE relocations are emitted, source types can be arbitrary and are
* not restricted to kernel types only.
*/
#define BPF_PROBE_READ_USER(src, a, ...) ({ \
___type((src), a, ##__VA_ARGS__) __r; \
BPF_PROBE_READ_USER_INTO(&__r, (src), a, ##__VA_ARGS__); \
__r; \
})
#endif
opensnitch-1.6.9/ebpf_prog/bpf_headers/bpf_helper_defs.h 0000664 0000000 0000000 00000504244 15003540030 0023365 0 ustar 00root root 0000000 0000000 /* This is auto-generated file. See bpf_doc.py for details. */
/* Forward declarations of BPF structs */
struct bpf_fib_lookup;
struct bpf_sk_lookup;
struct bpf_perf_event_data;
struct bpf_perf_event_value;
struct bpf_pidns_info;
struct bpf_redir_neigh;
struct bpf_sock;
struct bpf_sock_addr;
struct bpf_sock_ops;
struct bpf_sock_tuple;
struct bpf_spin_lock;
struct bpf_sysctl;
struct bpf_tcp_sock;
struct bpf_tunnel_key;
struct bpf_xfrm_state;
struct linux_binprm;
struct pt_regs;
struct sk_reuseport_md;
struct sockaddr;
struct tcphdr;
struct seq_file;
struct tcp6_sock;
struct tcp_sock;
struct tcp_timewait_sock;
struct tcp_request_sock;
struct udp6_sock;
struct unix_sock;
struct task_struct;
struct __sk_buff;
struct sk_msg_md;
struct xdp_md;
struct path;
struct btf_ptr;
struct inode;
struct socket;
struct file;
struct bpf_timer;
struct mptcp_sock;
struct bpf_dynptr;
struct iphdr;
struct ipv6hdr;
/*
* bpf_map_lookup_elem
*
* Perform a lookup in *map* for an entry associated to *key*.
*
* Returns
* Map value associated to *key*, or **NULL** if no entry was
* found.
*/
static void *(*bpf_map_lookup_elem)(void *map, const void *key) = (void *) 1;
/*
* bpf_map_update_elem
*
* Add or update the value of the entry associated to *key* in
* *map* with *value*. *flags* is one of:
*
* **BPF_NOEXIST**
* The entry for *key* must not exist in the map.
* **BPF_EXIST**
* The entry for *key* must already exist in the map.
* **BPF_ANY**
* No condition on the existence of the entry for *key*.
*
* Flag value **BPF_NOEXIST** cannot be used for maps of types
* **BPF_MAP_TYPE_ARRAY** or **BPF_MAP_TYPE_PERCPU_ARRAY** (all
* elements always exist), the helper would return an error.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_map_update_elem)(void *map, const void *key, const void *value, __u64 flags) = (void *) 2;
/*
* bpf_map_delete_elem
*
* Delete entry with *key* from *map*.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_map_delete_elem)(void *map, const void *key) = (void *) 3;
/*
* bpf_probe_read
*
* For tracing programs, safely attempt to read *size* bytes from
* kernel space address *unsafe_ptr* and store the data in *dst*.
*
* Generally, use **bpf_probe_read_user**\ () or
* **bpf_probe_read_kernel**\ () instead.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_probe_read)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) 4;
/*
* bpf_ktime_get_ns
*
* Return the time elapsed since system boot, in nanoseconds.
* Does not include time the system was suspended.
* See: **clock_gettime**\ (**CLOCK_MONOTONIC**)
*
* Returns
* Current *ktime*.
*/
static __u64 (*bpf_ktime_get_ns)(void) = (void *) 5;
/*
* bpf_trace_printk
*
* This helper is a "printk()-like" facility for debugging. It
* prints a message defined by format *fmt* (of size *fmt_size*)
* to file *\/sys/kernel/debug/tracing/trace* from DebugFS, if
* available. It can take up to three additional **u64**
* arguments (as an eBPF helpers, the total number of arguments is
* limited to five).
*
* Each time the helper is called, it appends a line to the trace.
* Lines are discarded while *\/sys/kernel/debug/tracing/trace* is
* open, use *\/sys/kernel/debug/tracing/trace_pipe* to avoid this.
* The format of the trace is customizable, and the exact output
* one will get depends on the options set in
* *\/sys/kernel/debug/tracing/trace_options* (see also the
* *README* file under the same directory). However, it usually
* defaults to something like:
*
* ::
*
* telnet-470 [001] .N.. 419421.045894: 0x00000001: <formatted msg>
*
* In the above:
*
* * ``telnet`` is the name of the current task.
* * ``470`` is the PID of the current task.
* * ``001`` is the CPU number on which the task is
* running.
* * In ``.N..``, each character refers to a set of
* options (whether irqs are enabled, scheduling
* options, whether hard/softirqs are running, level of
* preempt_disabled respectively). **N** means that
* **TIF_NEED_RESCHED** and **PREEMPT_NEED_RESCHED**
* are set.
* * ``419421.045894`` is a timestamp.
* * ``0x00000001`` is a fake value used by BPF for the
* instruction pointer register.
* * ```` is the message formatted with
* *fmt*.
*
* The conversion specifiers supported by *fmt* are similar, but
* more limited than for printk(). They are **%d**, **%i**,
* **%u**, **%x**, **%ld**, **%li**, **%lu**, **%lx**, **%lld**,
* **%lli**, **%llu**, **%llx**, **%p**, **%s**. No modifier (size
* of field, padding with zeroes, etc.) is available, and the
* helper will return **-EINVAL** (but print nothing) if it
* encounters an unknown specifier.
*
* Also, note that **bpf_trace_printk**\ () is slow, and should
* only be used for debugging purposes. For this reason, a notice
* block (spanning several lines) is printed to kernel logs and
* states that the helper should not be used "for production use"
* the first time this helper is used (or more precisely, when
* **trace_printk**\ () buffers are allocated). For passing values
* to user space, perf events should be preferred.
*
* Returns
* The number of bytes written to the buffer, or a negative error
* in case of failure.
*/
static long (*bpf_trace_printk)(const char *fmt, __u32 fmt_size, ...) = (void *) 6;
/*
* bpf_get_prandom_u32
*
* Get a pseudo-random number.
*
* From a security point of view, this helper uses its own
* pseudo-random internal state, and cannot be used to infer the
* seed of other random functions in the kernel. However, it is
* essential to note that the generator used by the helper is not
* cryptographically secure.
*
* Returns
* A random 32-bit unsigned value.
*/
static __u32 (*bpf_get_prandom_u32)(void) = (void *) 7;
/*
* bpf_get_smp_processor_id
*
* Get the SMP (symmetric multiprocessing) processor id. Note that
* all programs run with migration disabled, which means that the
* SMP processor id is stable during all the execution of the
* program.
*
* Returns
* The SMP id of the processor running the program.
*/
static __u32 (*bpf_get_smp_processor_id)(void) = (void *) 8;
/*
* bpf_skb_store_bytes
*
* Store *len* bytes from address *from* into the packet
* associated to *skb*, at *offset*. *flags* are a combination of
* **BPF_F_RECOMPUTE_CSUM** (automatically recompute the
* checksum for the packet after storing the bytes) and
* **BPF_F_INVALIDATE_HASH** (set *skb*\ **->hash**, *skb*\
* **->swhash** and *skb*\ **->l4hash** to 0).
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_store_bytes)(struct __sk_buff *skb, __u32 offset, const void *from, __u32 len, __u64 flags) = (void *) 9;
/*
* bpf_l3_csum_replace
*
* Recompute the layer 3 (e.g. IP) checksum for the packet
* associated to *skb*. Computation is incremental, so the helper
* must know the former value of the header field that was
* modified (*from*), the new value of this field (*to*), and the
* number of bytes (2 or 4) for this field, stored in *size*.
* Alternatively, it is possible to store the difference between
* the previous and the new values of the header field in *to*, by
* setting *from* and *size* to 0. For both methods, *offset*
* indicates the location of the IP checksum within the packet.
*
* This helper works in combination with **bpf_csum_diff**\ (),
* which does not update the checksum in-place, but offers more
* flexibility and can handle sizes larger than 2 or 4 for the
* checksum to update.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_l3_csum_replace)(struct __sk_buff *skb, __u32 offset, __u64 from, __u64 to, __u64 size) = (void *) 10;
/*
* bpf_l4_csum_replace
*
* Recompute the layer 4 (e.g. TCP, UDP or ICMP) checksum for the
* packet associated to *skb*. Computation is incremental, so the
* helper must know the former value of the header field that was
* modified (*from*), the new value of this field (*to*), and the
* number of bytes (2 or 4) for this field, stored on the lowest
* four bits of *flags*. Alternatively, it is possible to store
* the difference between the previous and the new values of the
* header field in *to*, by setting *from* and the four lowest
* bits of *flags* to 0. For both methods, *offset* indicates the
* location of the IP checksum within the packet. In addition to
* the size of the field, *flags* can be added (bitwise OR) actual
* flags. With **BPF_F_MARK_MANGLED_0**, a null checksum is left
* untouched (unless **BPF_F_MARK_ENFORCE** is added as well), and
* for updates resulting in a null checksum the value is set to
* **CSUM_MANGLED_0** instead. Flag **BPF_F_PSEUDO_HDR** indicates
* the checksum is to be computed against a pseudo-header.
*
* This helper works in combination with **bpf_csum_diff**\ (),
* which does not update the checksum in-place, but offers more
* flexibility and can handle sizes larger than 2 or 4 for the
* checksum to update.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_l4_csum_replace)(struct __sk_buff *skb, __u32 offset, __u64 from, __u64 to, __u64 flags) = (void *) 11;
/*
* bpf_tail_call
*
* This special helper is used to trigger a "tail call", or in
* other words, to jump into another eBPF program. The same stack
* frame is used (but values on stack and in registers for the
* caller are not accessible to the callee). This mechanism allows
* for program chaining, either for raising the maximum number of
* available eBPF instructions, or to execute given programs in
* conditional blocks. For security reasons, there is an upper
* limit to the number of successive tail calls that can be
* performed.
*
* Upon call of this helper, the program attempts to jump into a
* program referenced at index *index* in *prog_array_map*, a
* special map of type **BPF_MAP_TYPE_PROG_ARRAY**, and passes
* *ctx*, a pointer to the context.
*
* If the call succeeds, the kernel immediately runs the first
* instruction of the new program. This is not a function call,
* and it never returns to the previous program. If the call
* fails, then the helper has no effect, and the caller continues
* to run its subsequent instructions. A call can fail if the
* destination program for the jump does not exist (i.e. *index*
* is superior to the number of entries in *prog_array_map*), or
* if the maximum number of tail calls has been reached for this
* chain of programs. This limit is defined in the kernel by the
* macro **MAX_TAIL_CALL_CNT** (not accessible to user space),
* which is currently set to 33.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_tail_call)(void *ctx, void *prog_array_map, __u32 index) = (void *) 12;
/*
* bpf_clone_redirect
*
* Clone and redirect the packet associated to *skb* to another
* net device of index *ifindex*. Both ingress and egress
* interfaces can be used for redirection. The **BPF_F_INGRESS**
* value in *flags* is used to make the distinction (ingress path
* is selected if the flag is present, egress path otherwise).
* This is the only flag supported for now.
*
* In comparison with **bpf_redirect**\ () helper,
* **bpf_clone_redirect**\ () has the associated cost of
* duplicating the packet buffer, but this can be executed out of
* the eBPF program. Conversely, **bpf_redirect**\ () is more
* efficient, but it is handled through an action code where the
* redirection happens only after the eBPF program has returned.
*
 * A call to this helper may change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_clone_redirect)(struct __sk_buff *skb, __u32 ifindex, __u64 flags) = (void *) 13;
/*
* bpf_get_current_pid_tgid
*
* Get the current pid and tgid.
*
* Returns
* A 64-bit integer containing the current tgid and pid, and
* created as such:
* *current_task*\ **->tgid << 32 \|**
* *current_task*\ **->pid**.
*/
static __u64 (*bpf_get_current_pid_tgid)(void) = (void *) 14;
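/*
 * The two halves of the return value can be split as follows
 * (illustrative sketch):
 *
 * ::
 *
 *	__u64 pid_tgid = bpf_get_current_pid_tgid();
 *	__u32 tgid = pid_tgid >> 32;   // process ID as seen by user space
 *	__u32 pid  = (__u32)pid_tgid;  // kernel thread ID
 */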
/*
* bpf_get_current_uid_gid
*
* Get the current uid and gid.
*
* Returns
* A 64-bit integer containing the current GID and UID, and
* created as such: *current_gid* **<< 32 \|** *current_uid*.
*/
static __u64 (*bpf_get_current_uid_gid)(void) = (void *) 15;
/*
* bpf_get_current_comm
*
* Copy the **comm** attribute of the current task into *buf* of
* *size_of_buf*. The **comm** attribute contains the name of
* the executable (excluding the path) for the current task. The
* *size_of_buf* must be strictly positive. On success, the
* helper makes sure that the *buf* is NUL-terminated. On failure,
* it is filled with zeroes.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_get_current_comm)(void *buf, __u32 size_of_buf) = (void *) 16;
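/*
 * Typical usage sketch; the buffer size of 16 matches TASK_COMM_LEN on
 * Linux:
 *
 * ::
 *
 *	char comm[16];
 *	if (bpf_get_current_comm(&comm, sizeof(comm)) == 0) {
 *		// comm now holds the NUL-terminated executable name
 *	}
 */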
/*
* bpf_get_cgroup_classid
*
* Retrieve the classid for the current task, i.e. for the net_cls
* cgroup to which *skb* belongs.
*
* This helper can be used on TC egress path, but not on ingress.
*
* The net_cls cgroup provides an interface to tag network packets
* based on a user-provided identifier for all traffic coming from
* the tasks belonging to the related cgroup. See also the related
* kernel documentation, available from the Linux sources in file
* *Documentation/admin-guide/cgroup-v1/net_cls.rst*.
*
* The Linux kernel has two versions for cgroups: there are
* cgroups v1 and cgroups v2. Both are available to users, who can
* use a mixture of them, but note that the net_cls cgroup is for
* cgroup v1 only. This makes it incompatible with BPF programs
* run on cgroups, which is a cgroup-v2-only feature (a socket can
* only hold data for one version of cgroups at a time).
*
 * This helper is only available if the kernel was compiled with
* the **CONFIG_CGROUP_NET_CLASSID** configuration option set to
* "**y**" or to "**m**".
*
* Returns
* The classid, or 0 for the default unconfigured classid.
*/
static __u32 (*bpf_get_cgroup_classid)(struct __sk_buff *skb) = (void *) 17;
/*
* bpf_skb_vlan_push
*
* Push a *vlan_tci* (VLAN tag control information) of protocol
* *vlan_proto* to the packet associated to *skb*, then update
* the checksum. Note that if *vlan_proto* is different from
* **ETH_P_8021Q** and **ETH_P_8021AD**, it is considered to
* be **ETH_P_8021Q**.
*
 * A call to this helper may change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_vlan_push)(struct __sk_buff *skb, __be16 vlan_proto, __u16 vlan_tci) = (void *) 18;
/*
* bpf_skb_vlan_pop
*
* Pop a VLAN header from the packet associated to *skb*.
*
 * A call to this helper may change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_vlan_pop)(struct __sk_buff *skb) = (void *) 19;
/*
* bpf_skb_get_tunnel_key
*
* Get tunnel metadata. This helper takes a pointer *key* to an
* empty **struct bpf_tunnel_key** of **size**, that will be
* filled with tunnel metadata for the packet associated to *skb*.
* The *flags* can be set to **BPF_F_TUNINFO_IPV6**, which
* indicates that the tunnel is based on IPv6 protocol instead of
* IPv4.
*
* The **struct bpf_tunnel_key** is an object that generalizes the
* principal parameters used by various tunneling protocols into a
* single struct. This way, it can be used to easily make a
* decision based on the contents of the encapsulation header,
* "summarized" in this struct. In particular, it holds the IP
* address of the remote end (IPv4 or IPv6, depending on the case)
* in *key*\ **->remote_ipv4** or *key*\ **->remote_ipv6**. Also,
* this struct exposes the *key*\ **->tunnel_id**, which is
* generally mapped to a VNI (Virtual Network Identifier), making
* it programmable together with the **bpf_skb_set_tunnel_key**\
* () helper.
*
* Let's imagine that the following code is part of a program
* attached to the TC ingress interface, on one end of a GRE
* tunnel, and is supposed to filter out all messages coming from
* remote ends with IPv4 address other than 10.0.0.1:
*
* ::
*
* int ret;
* struct bpf_tunnel_key key = {};
*
* ret = bpf_skb_get_tunnel_key(skb, &key, sizeof(key), 0);
* if (ret < 0)
* return TC_ACT_SHOT; // drop packet
*
* if (key.remote_ipv4 != 0x0a000001)
* return TC_ACT_SHOT; // drop packet
*
* return TC_ACT_OK; // accept packet
*
* This interface can also be used with all encapsulation devices
* that can operate in "collect metadata" mode: instead of having
* one network device per specific configuration, the "collect
* metadata" mode only requires a single device where the
* configuration can be extracted from this helper.
*
* This can be used together with various tunnels such as VXLan,
* Geneve, GRE or IP in IP (IPIP).
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_get_tunnel_key)(struct __sk_buff *skb, struct bpf_tunnel_key *key, __u32 size, __u64 flags) = (void *) 20;
/*
* bpf_skb_set_tunnel_key
*
 * Populate tunnel metadata for the packet associated to *skb*. The
* tunnel metadata is set to the contents of *key*, of *size*. The
* *flags* can be set to a combination of the following values:
*
* **BPF_F_TUNINFO_IPV6**
* Indicate that the tunnel is based on IPv6 protocol
* instead of IPv4.
* **BPF_F_ZERO_CSUM_TX**
* For IPv4 packets, add a flag to tunnel metadata
* indicating that checksum computation should be skipped
* and checksum set to zeroes.
* **BPF_F_DONT_FRAGMENT**
* Add a flag to tunnel metadata indicating that the
* packet should not be fragmented.
* **BPF_F_SEQ_NUMBER**
* Add a flag to tunnel metadata indicating that a
* sequence number should be added to tunnel header before
* sending the packet. This flag was added for GRE
* encapsulation, but might be used with other protocols
* as well in the future.
*
* Here is a typical usage on the transmit path:
*
* ::
*
* struct bpf_tunnel_key key;
* populate key ...
* bpf_skb_set_tunnel_key(skb, &key, sizeof(key), 0);
* bpf_clone_redirect(skb, vxlan_dev_ifindex, 0);
*
* See also the description of the **bpf_skb_get_tunnel_key**\ ()
* helper for additional information.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_set_tunnel_key)(struct __sk_buff *skb, struct bpf_tunnel_key *key, __u32 size, __u64 flags) = (void *) 21;
/*
* bpf_perf_event_read
*
* Read the value of a perf event counter. This helper relies on a
* *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. The nature of
* the perf event counter is selected when *map* is updated with
* perf event file descriptors. The *map* is an array whose size
* is the number of available CPUs, and each cell contains a value
* relative to one CPU. The value to retrieve is indicated by
* *flags*, that contains the index of the CPU to look up, masked
* with **BPF_F_INDEX_MASK**. Alternatively, *flags* can be set to
* **BPF_F_CURRENT_CPU** to indicate that the value for the
* current CPU should be retrieved.
*
 * Note that before Linux 4.13, only hardware perf events can be
* retrieved.
*
* Also, be aware that the newer helper
* **bpf_perf_event_read_value**\ () is recommended over
* **bpf_perf_event_read**\ () in general. The latter has some ABI
* quirks where error and counter value are used as a return code
* (which is wrong to do since ranges may overlap). This issue is
* fixed with **bpf_perf_event_read_value**\ (), which at the same
* time provides more features over the **bpf_perf_event_read**\
* () interface. Please refer to the description of
* **bpf_perf_event_read_value**\ () for details.
*
* Returns
* The value of the perf event counter read from the map, or a
* negative error code in case of failure.
*/
static __u64 (*bpf_perf_event_read)(void *map, __u64 flags) = (void *) 22;
/*
* bpf_redirect
*
* Redirect the packet to another net device of index *ifindex*.
* This helper is somewhat similar to **bpf_clone_redirect**\
* (), except that the packet is not cloned, which provides
* increased performance.
*
* Except for XDP, both ingress and egress interfaces can be used
* for redirection. The **BPF_F_INGRESS** value in *flags* is used
* to make the distinction (ingress path is selected if the flag
* is present, egress path otherwise). Currently, XDP only
* supports redirection to the egress interface, and accepts no
* flag at all.
*
* The same effect can also be attained with the more generic
* **bpf_redirect_map**\ (), which uses a BPF map to store the
* redirect target instead of providing it directly to the helper.
*
* Returns
* For XDP, the helper returns **XDP_REDIRECT** on success or
* **XDP_ABORTED** on error. For other program types, the values
* are **TC_ACT_REDIRECT** on success or **TC_ACT_SHOT** on
* error.
*/
static long (*bpf_redirect)(__u32 ifindex, __u64 flags) = (void *) 23;
/*
* bpf_get_route_realm
*
 * Retrieve the realm of the route, that is to say the
* **tclassid** field of the destination for the *skb*. The
* identifier retrieved is a user-provided tag, similar to the
* one used with the net_cls cgroup (see description for
* **bpf_get_cgroup_classid**\ () helper), but here this tag is
* held by a route (a destination entry), not by a task.
*
* Retrieving this identifier works with the clsact TC egress hook
* (see also **tc-bpf(8)**), or alternatively on conventional
* classful egress qdiscs, but not on TC ingress path. In case of
* clsact TC egress hook, this has the advantage that, internally,
* the destination entry has not been dropped yet in the transmit
* path. Therefore, the destination entry does not need to be
* artificially held via **netif_keep_dst**\ () for a classful
* qdisc until the *skb* is freed.
*
* This helper is available only if the kernel was compiled with
* **CONFIG_IP_ROUTE_CLASSID** configuration option.
*
* Returns
* The realm of the route for the packet associated to *skb*, or 0
* if none was found.
*/
static __u32 (*bpf_get_route_realm)(struct __sk_buff *skb) = (void *) 24;
/*
* bpf_perf_event_output
*
* Write raw *data* blob into a special BPF perf event held by
* *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. This perf
* event must have the following attributes: **PERF_SAMPLE_RAW**
* as **sample_type**, **PERF_TYPE_SOFTWARE** as **type**, and
* **PERF_COUNT_SW_BPF_OUTPUT** as **config**.
*
* The *flags* are used to indicate the index in *map* for which
* the value must be put, masked with **BPF_F_INDEX_MASK**.
* Alternatively, *flags* can be set to **BPF_F_CURRENT_CPU**
* to indicate that the index of the current CPU core should be
* used.
*
* The value to write, of *size*, is passed through eBPF stack and
* pointed by *data*.
*
 * The context of the program *ctx* also needs to be passed to the
* helper.
*
* On user space, a program willing to read the values needs to
* call **perf_event_open**\ () on the perf event (either for
* one or for all CPUs) and to store the file descriptor into the
* *map*. This must be done before the eBPF program can send data
* into it. An example is available in file
* *samples/bpf/trace_output_user.c* in the Linux kernel source
* tree (the eBPF program counterpart is in
* *samples/bpf/trace_output_kern.c*).
*
* **bpf_perf_event_output**\ () achieves better performance
* than **bpf_trace_printk**\ () for sharing data with user
 * space, and is better suited to streaming data from eBPF
* programs.
*
* Note that this helper is not restricted to tracing use cases
* and can be used with programs attached to TC or XDP as well,
* where it allows for passing data to user space listeners. Data
* can be:
*
* * Only custom structs,
* * Only the packet payload, or
* * A combination of both.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_perf_event_output)(void *ctx, void *map, __u64 flags, void *data, __u64 size) = (void *) 25;
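/*
 * A minimal sketch of sending a custom struct to user space
 * (illustrative; the event struct and map name are assumptions, and
 * ctx is the context passed to the eBPF program):
 *
 * ::
 *
 *	struct event { __u32 pid; __u64 ts; };
 *
 *	struct bpf_map_def SEC("maps") events = {
 *		.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
 *		.key_size = sizeof(int),
 *		.value_size = sizeof(__u32),
 *	};
 *
 *	struct event e = { .pid = bpf_get_current_pid_tgid() >> 32 };
 *	bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
 *			      &e, sizeof(e));
 */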
/*
* bpf_skb_load_bytes
*
* This helper was provided as an easy way to load data from a
* packet. It can be used to load *len* bytes from *offset* from
* the packet associated to *skb*, into the buffer pointed by
* *to*.
*
* Since Linux 4.7, usage of this helper has mostly been replaced
* by "direct packet access", enabling packet data to be
* manipulated with *skb*\ **->data** and *skb*\ **->data_end**
* pointing respectively to the first byte of packet data and to
* the byte after the last byte of packet data. However, it
* remains useful if one wishes to read large quantities of data
* at once from a packet into the eBPF stack.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_load_bytes)(const void *skb, __u32 offset, void *to, __u32 len) = (void *) 26;
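/*
 * Reading a fixed-size header into the eBPF stack (sketch; the offset
 * and return code are illustrative):
 *
 * ::
 *
 *	struct ethhdr eth;
 *	if (bpf_skb_load_bytes(skb, 0, &eth, sizeof(eth)) < 0)
 *		return TC_ACT_OK;  // packet shorter than an Ethernet header
 */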
/*
* bpf_get_stackid
*
* Walk a user or a kernel stack and return its id. To achieve
* this, the helper needs *ctx*, which is a pointer to the context
* on which the tracing program is executed, and a pointer to a
* *map* of type **BPF_MAP_TYPE_STACK_TRACE**.
*
* The last argument, *flags*, holds the number of stack frames to
* skip (from 0 to 255), masked with
* **BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
* a combination of the following flags:
*
* **BPF_F_USER_STACK**
* Collect a user space stack instead of a kernel stack.
* **BPF_F_FAST_STACK_CMP**
* Compare stacks by hash only.
* **BPF_F_REUSE_STACKID**
* If two different stacks hash into the same *stackid*,
* discard the old one.
*
 * The stack id retrieved is a 32-bit integer handle which
* can be further combined with other data (including other stack
* ids) and used as a key into maps. This can be useful for
* generating a variety of graphs (such as flame graphs or off-cpu
* graphs).
*
* For walking a stack, this helper is an improvement over
* **bpf_probe_read**\ (), which can be used with unrolled loops
* but is not efficient and consumes a lot of eBPF instructions.
* Instead, **bpf_get_stackid**\ () can collect up to
* **PERF_MAX_STACK_DEPTH** both kernel and user frames. Note that
* this limit can be controlled with the **sysctl** program, and
* that it should be manually increased in order to profile long
* user stacks (such as stacks for Java programs). To do so, use:
*
* ::
*
* # sysctl kernel.perf_event_max_stack=
*
* Returns
* The positive or null stack id on success, or a negative error
* in case of failure.
*/
static long (*bpf_get_stackid)(void *ctx, void *map, __u64 flags) = (void *) 27;
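/*
 * Composing *flags* (sketch): skip the first two frames of a user
 * space stack and compare stacks by hash only; stack_traces is an
 * assumed map of type **BPF_MAP_TYPE_STACK_TRACE**:
 *
 * ::
 *
 *	__u64 flags = (2 & BPF_F_SKIP_FIELD_MASK) |
 *		      BPF_F_USER_STACK | BPF_F_FAST_STACK_CMP;
 *	long id = bpf_get_stackid(ctx, &stack_traces, flags);
 */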
/*
* bpf_csum_diff
*
* Compute a checksum difference, from the raw buffer pointed by
* *from*, of length *from_size* (that must be a multiple of 4),
* towards the raw buffer pointed by *to*, of size *to_size*
* (same remark). An optional *seed* can be added to the value
* (this can be cascaded, the seed may come from a previous call
* to the helper).
*
* This is flexible enough to be used in several ways:
*
* * With *from_size* == 0, *to_size* > 0 and *seed* set to
* checksum, it can be used when pushing new data.
* * With *from_size* > 0, *to_size* == 0 and *seed* set to
* checksum, it can be used when removing data from a packet.
* * With *from_size* > 0, *to_size* > 0 and *seed* set to 0, it
* can be used to compute a diff. Note that *from_size* and
* *to_size* do not need to be equal.
*
* This helper can be used in combination with
* **bpf_l3_csum_replace**\ () and **bpf_l4_csum_replace**\ (), to
* which one can feed in the difference computed with
* **bpf_csum_diff**\ ().
*
* Returns
* The checksum result, or a negative error code in case of
* failure.
*/
static __s64 (*bpf_csum_diff)(__be32 *from, __u32 from_size, __be32 *to, __u32 to_size, __wsum seed) = (void *) 28;
/*
* bpf_skb_get_tunnel_opt
*
* Retrieve tunnel options metadata for the packet associated to
* *skb*, and store the raw tunnel option data to the buffer *opt*
* of *size*.
*
* This helper can be used with encapsulation devices that can
* operate in "collect metadata" mode (please refer to the related
* note in the description of **bpf_skb_get_tunnel_key**\ () for
* more details). A particular example where this can be used is
* in combination with the Geneve encapsulation protocol, where it
 * allows for pushing (with the **bpf_skb_set_tunnel_opt**\ () helper)
* and retrieving arbitrary TLVs (Type-Length-Value headers) from
* the eBPF program. This allows for full customization of these
* headers.
*
* Returns
* The size of the option data retrieved.
*/
static long (*bpf_skb_get_tunnel_opt)(struct __sk_buff *skb, void *opt, __u32 size) = (void *) 29;
/*
* bpf_skb_set_tunnel_opt
*
* Set tunnel options metadata for the packet associated to *skb*
* to the option data contained in the raw buffer *opt* of *size*.
*
* See also the description of the **bpf_skb_get_tunnel_opt**\ ()
* helper for additional information.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_set_tunnel_opt)(struct __sk_buff *skb, void *opt, __u32 size) = (void *) 30;
/*
* bpf_skb_change_proto
*
* Change the protocol of the *skb* to *proto*. Currently
 * supported are transitions from IPv4 to IPv6, and from IPv6 to
* IPv4. The helper takes care of the groundwork for the
* transition, including resizing the socket buffer. The eBPF
* program is expected to fill the new headers, if any, via
 * **bpf_skb_store_bytes**\ () and to recompute the checksums with
* **bpf_l3_csum_replace**\ () and **bpf_l4_csum_replace**\
* (). The main case for this helper is to perform NAT64
* operations out of an eBPF program.
*
* Internally, the GSO type is marked as dodgy so that headers are
* checked and segments are recalculated by the GSO/GRO engine.
* The size for GSO target is adapted as well.
*
* All values for *flags* are reserved for future usage, and must
* be left at zero.
*
 * A call to this helper may change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_change_proto)(struct __sk_buff *skb, __be16 proto, __u64 flags) = (void *) 31;
/*
* bpf_skb_change_type
*
* Change the packet type for the packet associated to *skb*. This
* comes down to setting *skb*\ **->pkt_type** to *type*, except
* the eBPF program does not have a write access to *skb*\
* **->pkt_type** beside this helper. Using a helper here allows
* for graceful handling of errors.
*
* The major use case is to change incoming *skb*s to
* **PACKET_HOST** in a programmatic way instead of having to
 * recirculate via **bpf_redirect**\ (..., **BPF_F_INGRESS**), for
* example.
*
* Note that *type* only allows certain values. At this time, they
* are:
*
* **PACKET_HOST**
* Packet is for us.
* **PACKET_BROADCAST**
* Send packet to all.
* **PACKET_MULTICAST**
* Send packet to group.
* **PACKET_OTHERHOST**
* Send packet to someone else.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_change_type)(struct __sk_buff *skb, __u32 type) = (void *) 32;
/*
* bpf_skb_under_cgroup
*
* Check whether *skb* is a descendant of the cgroup2 held by
* *map* of type **BPF_MAP_TYPE_CGROUP_ARRAY**, at *index*.
*
* Returns
* The return value depends on the result of the test, and can be:
*
* * 0, if the *skb* failed the cgroup2 descendant test.
* * 1, if the *skb* succeeded the cgroup2 descendant test.
* * A negative error code, if an error occurred.
*/
static long (*bpf_skb_under_cgroup)(struct __sk_buff *skb, void *map, __u32 index) = (void *) 33;
/*
* bpf_get_hash_recalc
*
* Retrieve the hash of the packet, *skb*\ **->hash**. If it is
* not set, in particular if the hash was cleared due to mangling,
* recompute this hash. Later accesses to the hash can be done
* directly with *skb*\ **->hash**.
*
 * Calling **bpf_set_hash_invalid**\ (), changing a packet
 * protocol with **bpf_skb_change_proto**\ (), or calling
 * **bpf_skb_store_bytes**\ () with the
 * **BPF_F_INVALIDATE_HASH** flag are actions that may clear
 * the hash and trigger a new computation for the next call to
 * **bpf_get_hash_recalc**\ ().
*
* Returns
* The 32-bit hash.
*/
static __u32 (*bpf_get_hash_recalc)(struct __sk_buff *skb) = (void *) 34;
/*
* bpf_get_current_task
*
* Get the current task.
*
* Returns
* A pointer to the current task struct.
*/
static __u64 (*bpf_get_current_task)(void) = (void *) 35;
/*
* bpf_probe_write_user
*
* Attempt in a safe way to write *len* bytes from the buffer
* *src* to *dst* in memory. It only works for threads that are in
* user context, and *dst* must be a valid user space address.
*
* This helper should not be used to implement any kind of
* security mechanism because of TOC-TOU attacks, but rather to
* debug, divert, and manipulate execution of semi-cooperative
* processes.
*
* Keep in mind that this feature is meant for experiments, and it
* has a risk of crashing the system and running programs.
* Therefore, when an eBPF program using this helper is attached,
* a warning including PID and process name is printed to kernel
* logs.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_probe_write_user)(void *dst, const void *src, __u32 len) = (void *) 36;
/*
* bpf_current_task_under_cgroup
*
 * Check whether the probe is being run in the context of a given
* subset of the cgroup2 hierarchy. The cgroup2 to test is held by
* *map* of type **BPF_MAP_TYPE_CGROUP_ARRAY**, at *index*.
*
* Returns
* The return value depends on the result of the test, and can be:
*
* * 1, if current task belongs to the cgroup2.
* * 0, if current task does not belong to the cgroup2.
* * A negative error code, if an error occurred.
*/
static long (*bpf_current_task_under_cgroup)(void *map, __u32 index) = (void *) 37;
/*
* bpf_skb_change_tail
*
* Resize (trim or grow) the packet associated to *skb* to the
* new *len*. The *flags* are reserved for future usage, and must
* be left at zero.
*
* The basic idea is that the helper performs the needed work to
* change the size of the packet, then the eBPF program rewrites
* the rest via helpers like **bpf_skb_store_bytes**\ (),
 * **bpf_l3_csum_replace**\ (), **bpf_l4_csum_replace**\ ()
* and others. This helper is a slow path utility intended for
* replies with control messages. And because it is targeted for
* slow path, the helper itself can afford to be slow: it
* implicitly linearizes, unclones and drops offloads from the
* *skb*.
*
 * A call to this helper may change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_change_tail)(struct __sk_buff *skb, __u32 len, __u64 flags) = (void *) 38;
/*
* bpf_skb_pull_data
*
* Pull in non-linear data in case the *skb* is non-linear and not
* all of *len* are part of the linear section. Make *len* bytes
* from *skb* readable and writable. If a zero value is passed for
* *len*, then all bytes in the linear part of *skb* will be made
* readable and writable.
*
* This helper is only needed for reading and writing with direct
* packet access.
*
* For direct packet access, testing that offsets to access
* are within packet boundaries (test on *skb*\ **->data_end**) is
* susceptible to fail if offsets are invalid, or if the requested
* data is in non-linear parts of the *skb*. On failure the
* program can just bail out, or in the case of a non-linear
* buffer, use a helper to make the data available. The
* **bpf_skb_load_bytes**\ () helper is a first solution to access
 * the data. Another one consists of using **bpf_skb_pull_data**\ ()
 * to pull in the non-linear parts once, then retesting and
 * eventually accessing the data.
*
* At the same time, this also makes sure the *skb* is uncloned,
* which is a necessary condition for direct write. As this needs
* to be an invariant for the write part only, the verifier
* detects writes and adds a prologue that is calling
* **bpf_skb_pull_data()** to effectively unclone the *skb* from
* the very beginning in case it is indeed cloned.
*
 * A call to this helper may change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_pull_data)(struct __sk_buff *skb, __u32 len) = (void *) 39;
/*
* bpf_csum_update
*
* Add the checksum *csum* into *skb*\ **->csum** in case the
* driver has supplied a checksum for the entire packet into that
* field. Return an error otherwise. This helper is intended to be
* used in combination with **bpf_csum_diff**\ (), in particular
* when the checksum needs to be updated after data has been
* written into the packet through direct packet access.
*
* Returns
* The checksum on success, or a negative error code in case of
* failure.
*/
static __s64 (*bpf_csum_update)(struct __sk_buff *skb, __wsum csum) = (void *) 40;
/*
* bpf_set_hash_invalid
*
* Invalidate the current *skb*\ **->hash**. It can be used after
* mangling on headers through direct packet access, in order to
* indicate that the hash is outdated and to trigger a
* recalculation the next time the kernel tries to access this
* hash or when the **bpf_get_hash_recalc**\ () helper is called.
*
* Returns
* void.
*/
static void (*bpf_set_hash_invalid)(struct __sk_buff *skb) = (void *) 41;
/*
* bpf_get_numa_node_id
*
* Return the id of the current NUMA node. The primary use case
* for this helper is the selection of sockets for the local NUMA
* node, when the program is attached to sockets using the
* **SO_ATTACH_REUSEPORT_EBPF** option (see also **socket(7)**),
* but the helper is also available to other eBPF program types,
* similarly to **bpf_get_smp_processor_id**\ ().
*
* Returns
* The id of current NUMA node.
*/
static long (*bpf_get_numa_node_id)(void) = (void *) 42;
/*
* bpf_skb_change_head
*
* Grows headroom of packet associated to *skb* and adjusts the
* offset of the MAC header accordingly, adding *len* bytes of
* space. It automatically extends and reallocates memory as
* required.
*
* This helper can be used on a layer 3 *skb* to push a MAC header
* for redirection into a layer 2 device.
*
* All values for *flags* are reserved for future usage, and must
* be left at zero.
*
 * A call to this helper may change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_change_head)(struct __sk_buff *skb, __u32 len, __u64 flags) = (void *) 43;
/*
* bpf_xdp_adjust_head
*
* Adjust (move) *xdp_md*\ **->data** by *delta* bytes. Note that
* it is possible to use a negative value for *delta*. This helper
* can be used to prepare the packet for pushing or popping
* headers.
*
 * A call to this helper may change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_xdp_adjust_head)(struct xdp_md *xdp_md, int delta) = (void *) 44;
/*
* bpf_probe_read_str
*
* Copy a NUL terminated string from an unsafe kernel address
* *unsafe_ptr* to *dst*. See **bpf_probe_read_kernel_str**\ () for
* more details.
*
* Generally, use **bpf_probe_read_user_str**\ () or
* **bpf_probe_read_kernel_str**\ () instead.
*
* Returns
* On success, the strictly positive length of the string,
* including the trailing NUL character. On error, a negative
* value.
*/
static long (*bpf_probe_read_str)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) 45;
/*
* bpf_get_socket_cookie
*
* If the **struct sk_buff** pointed by *skb* has a known socket,
* retrieve the cookie (generated by the kernel) of this socket.
* If no cookie has been set yet, generate a new cookie. Once
* generated, the socket cookie remains stable for the life of the
* socket. This helper can be useful for monitoring per socket
* networking traffic statistics as it provides a global socket
* identifier that can be assumed unique.
*
* Returns
 * An 8-byte long unique number on success, or 0 if the socket
* field is missing inside *skb*.
*/
static __u64 (*bpf_get_socket_cookie)(void *ctx) = (void *) 46;
/*
* bpf_get_socket_uid
*
 * Get the owner UID of the socket associated to *skb*.
*
* Returns
* The owner UID of the socket associated to *skb*. If the socket
* is **NULL**, or if it is not a full socket (i.e. if it is a
* time-wait or a request socket instead), **overflowuid** value
* is returned (note that **overflowuid** might also be the actual
* UID value for the socket).
*/
static __u32 (*bpf_get_socket_uid)(struct __sk_buff *skb) = (void *) 47;
/*
* bpf_set_hash
*
* Set the full hash for *skb* (set the field *skb*\ **->hash**)
* to value *hash*.
*
* Returns
* 0
*/
static long (*bpf_set_hash)(struct __sk_buff *skb, __u32 hash) = (void *) 48;
/*
* bpf_setsockopt
*
* Emulate a call to **setsockopt()** on the socket associated to
* *bpf_socket*, which must be a full socket. The *level* at
* which the option resides and the name *optname* of the option
* must be specified, see **setsockopt(2)** for more information.
* The option value of length *optlen* is pointed by *optval*.
*
* *bpf_socket* should be one of the following:
*
* * **struct bpf_sock_ops** for **BPF_PROG_TYPE_SOCK_OPS**.
* * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**
* and **BPF_CGROUP_INET6_CONNECT**.
*
* This helper actually implements a subset of **setsockopt()**.
* It supports the following *level*\ s:
*
* * **SOL_SOCKET**, which supports the following *optname*\ s:
* **SO_RCVBUF**, **SO_SNDBUF**, **SO_MAX_PACING_RATE**,
* **SO_PRIORITY**, **SO_RCVLOWAT**, **SO_MARK**,
* **SO_BINDTODEVICE**, **SO_KEEPALIVE**.
* * **IPPROTO_TCP**, which supports the following *optname*\ s:
* **TCP_CONGESTION**, **TCP_BPF_IW**,
* **TCP_BPF_SNDCWND_CLAMP**, **TCP_SAVE_SYN**,
* **TCP_KEEPIDLE**, **TCP_KEEPINTVL**, **TCP_KEEPCNT**,
* **TCP_SYNCNT**, **TCP_USER_TIMEOUT**, **TCP_NOTSENT_LOWAT**.
* * **IPPROTO_IP**, which supports *optname* **IP_TOS**.
* * **IPPROTO_IPV6**, which supports *optname* **IPV6_TCLASS**.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_setsockopt)(void *bpf_socket, int level, int optname, void *optval, int optlen) = (void *) 49;
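/*
 * Usage sketch for the helper above (illustrative, not part of the
 * generated header): switching the congestion control algorithm from a
 * sock_ops program once an active connection is established. Program
 * and variable names are hypothetical.
 *
 * ::
 *
 *	SEC("sockops")
 *	int set_cc(struct bpf_sock_ops *skops)
 *	{
 *		char cc[] = "cubic";
 *
 *		if (skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB)
 *			bpf_setsockopt(skops, IPPROTO_TCP, TCP_CONGESTION,
 *				       cc, sizeof(cc));
 *		return 1;
 *	}
 */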
/*
* bpf_skb_adjust_room
*
* Grow or shrink the room for data in the packet associated to
* *skb* by *len_diff*, and according to the selected *mode*.
*
* By default, the helper will reset any offloaded checksum
* indicator of the skb to CHECKSUM_NONE. This can be avoided
* by the following flag:
*
* * **BPF_F_ADJ_ROOM_NO_CSUM_RESET**: Do not reset offloaded
* checksum data of the skb to CHECKSUM_NONE.
*
* There are two supported modes at this time:
*
* * **BPF_ADJ_ROOM_MAC**: Adjust room at the mac layer
* (room space is added or removed below the layer 2 header).
*
* * **BPF_ADJ_ROOM_NET**: Adjust room at the network layer
* (room space is added or removed below the layer 3 header).
*
* The following flags are supported at this time:
*
* * **BPF_F_ADJ_ROOM_FIXED_GSO**: Do not adjust gso_size.
* Adjusting mss in this way is not allowed for datagrams.
*
* * **BPF_F_ADJ_ROOM_ENCAP_L3_IPV4**,
* **BPF_F_ADJ_ROOM_ENCAP_L3_IPV6**:
* Any new space is reserved to hold a tunnel header.
* Configure skb offsets and other fields accordingly.
*
* * **BPF_F_ADJ_ROOM_ENCAP_L4_GRE**,
* **BPF_F_ADJ_ROOM_ENCAP_L4_UDP**:
* Use with ENCAP_L3 flags to further specify the tunnel type.
*
* * **BPF_F_ADJ_ROOM_ENCAP_L2**\ (*len*):
* Use with ENCAP_L3/L4 flags to further specify the tunnel
* type; *len* is the length of the inner MAC header.
*
* * **BPF_F_ADJ_ROOM_ENCAP_L2_ETH**:
* Use with BPF_F_ADJ_ROOM_ENCAP_L2 flag to further specify the
* L2 type as Ethernet.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_adjust_room)(struct __sk_buff *skb, __s32 len_diff, __u32 mode, __u64 flags) = (void *) 50;
/*
* bpf_redirect_map
*
* Redirect the packet to the endpoint referenced by *map* at
* index *key*. Depending on its type, this *map* can contain
* references to net devices (for forwarding packets through other
* ports), or to CPUs (for redirecting XDP frames to another CPU;
* but this is only implemented for native XDP (with driver
* support) as of this writing).
*
* The lower two bits of *flags* are used as the return code if
* the map lookup fails. This is so that the return value can be
* one of the XDP program return codes up to **XDP_TX**, as chosen
* by the caller. The higher bits of *flags* can be set to
* BPF_F_BROADCAST or BPF_F_EXCLUDE_INGRESS as defined below.
*
* With BPF_F_BROADCAST the packet will be broadcasted to all the
* interfaces in the map, with BPF_F_EXCLUDE_INGRESS the ingress
 * interface will be excluded from the broadcast.
*
* See also **bpf_redirect**\ (), which only supports redirecting
* to an ifindex, but doesn't require a map to do so.
*
* Returns
* **XDP_REDIRECT** on success, or the value of the two lower bits
* of the *flags* argument on error.
*/
static long (*bpf_redirect_map)(void *map, __u32 key, __u64 flags) = (void *) 51;
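/*
 * Usage sketch for the helper above (illustrative, not part of the
 * generated header): an XDP program forwarding every frame through a
 * devmap, with **XDP_PASS** encoded in the lower bits of *flags* as
 * the fallback when the lookup fails. The map name is hypothetical.
 *
 * ::
 *
 *	struct {
 *		__uint(type, BPF_MAP_TYPE_DEVMAP);
 *		__uint(max_entries, 8);
 *		__type(key, __u32);
 *		__type(value, __u32);
 *	} tx_ports SEC(".maps");
 *
 *	SEC("xdp")
 *	int xdp_fwd(struct xdp_md *ctx)
 *	{
 *		return bpf_redirect_map(&tx_ports, 0, XDP_PASS);
 *	}
 */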
/*
* bpf_sk_redirect_map
*
* Redirect the packet to the socket referenced by *map* (of type
* **BPF_MAP_TYPE_SOCKMAP**) at index *key*. Both ingress and
* egress interfaces can be used for redirection. The
* **BPF_F_INGRESS** value in *flags* is used to make the
* distinction (ingress path is selected if the flag is present,
* egress path otherwise). This is the only flag supported for now.
*
* Returns
* **SK_PASS** on success, or **SK_DROP** on error.
*/
static long (*bpf_sk_redirect_map)(struct __sk_buff *skb, void *map, __u32 key, __u64 flags) = (void *) 52;
/*
* bpf_sock_map_update
*
* Add an entry to, or update a *map* referencing sockets. The
* *skops* is used as a new value for the entry associated to
* *key*. *flags* is one of:
*
* **BPF_NOEXIST**
* The entry for *key* must not exist in the map.
* **BPF_EXIST**
* The entry for *key* must already exist in the map.
* **BPF_ANY**
* No condition on the existence of the entry for *key*.
*
* If the *map* has eBPF programs (parser and verdict), those will
* be inherited by the socket being added. If the socket is
* already attached to eBPF programs, this results in an error.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_sock_map_update)(struct bpf_sock_ops *skops, void *map, void *key, __u64 flags) = (void *) 53;
/*
* bpf_xdp_adjust_meta
*
* Adjust the address pointed by *xdp_md*\ **->data_meta** by
* *delta* (which can be positive or negative). Note that this
* operation modifies the address stored in *xdp_md*\ **->data**,
* so the latter must be loaded only after the helper has been
* called.
*
* The use of *xdp_md*\ **->data_meta** is optional and programs
* are not required to use it. The rationale is that when the
* packet is processed with XDP (e.g. as DoS filter), it is
* possible to push further meta data along with it before passing
* to the stack, and to give the guarantee that an ingress eBPF
* program attached as a TC classifier on the same device can pick
* this up for further post-processing. Since TC works with socket
* buffers, it remains possible to set from XDP the **mark** or
* **priority** pointers, or other pointers for the socket buffer.
* Having this scratch space generic and programmable allows for
* more flexibility as the user is free to store whatever meta
* data they need.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_xdp_adjust_meta)(struct xdp_md *xdp_md, int delta) = (void *) 54;
/*
* bpf_perf_event_read_value
*
* Read the value of a perf event counter, and store it into *buf*
* of size *buf_size*. This helper relies on a *map* of type
* **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. The nature of the perf event
* counter is selected when *map* is updated with perf event file
* descriptors. The *map* is an array whose size is the number of
* available CPUs, and each cell contains a value relative to one
* CPU. The value to retrieve is indicated by *flags*, that
* contains the index of the CPU to look up, masked with
* **BPF_F_INDEX_MASK**. Alternatively, *flags* can be set to
* **BPF_F_CURRENT_CPU** to indicate that the value for the
* current CPU should be retrieved.
*
* This helper behaves in a way close to
* **bpf_perf_event_read**\ () helper, save that instead of
* just returning the value observed, it fills the *buf*
* structure. This allows for additional data to be retrieved: in
* particular, the enabled and running times (in *buf*\
* **->enabled** and *buf*\ **->running**, respectively) are
* copied. In general, **bpf_perf_event_read_value**\ () is
* recommended over **bpf_perf_event_read**\ (), which has some
* ABI issues and provides fewer functionalities.
*
* These values are interesting, because hardware PMU (Performance
* Monitoring Unit) counters are limited resources. When there are
* more PMU based perf events opened than available counters,
* kernel will multiplex these events so each event gets certain
* percentage (but not all) of the PMU time. In case that
* multiplexing happens, the number of samples or counter value
* will not reflect the case compared to when no multiplexing
* occurs. This makes comparison between different runs difficult.
* Typically, the counter value should be normalized before
* comparing to other experiments. The usual normalization is done
* as follows.
*
* ::
*
* normalized_counter = counter * t_enabled / t_running
*
* Where t_enabled is the time enabled for event and t_running is
* the time running for event since last normalization. The
* enabled and running times are accumulated since the perf event
* open. To achieve scaling factor between two invocations of an
* eBPF program, users can use CPU id as the key (which is
* typical for perf array usage model) to remember the previous
* value and do the calculation inside the eBPF program.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_perf_event_read_value)(void *map, __u64 flags, struct bpf_perf_event_value *buf, __u32 buf_size) = (void *) 55;
/*
* bpf_perf_prog_read_value
*
 * For an eBPF program attached to a perf event, retrieve the
* value of the event counter associated to *ctx* and store it in
* the structure pointed by *buf* and of size *buf_size*. Enabled
* and running times are also stored in the structure (see
* description of helper **bpf_perf_event_read_value**\ () for
* more details).
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_perf_prog_read_value)(struct bpf_perf_event_data *ctx, struct bpf_perf_event_value *buf, __u32 buf_size) = (void *) 56;
/*
* bpf_getsockopt
*
* Emulate a call to **getsockopt()** on the socket associated to
* *bpf_socket*, which must be a full socket. The *level* at
* which the option resides and the name *optname* of the option
* must be specified, see **getsockopt(2)** for more information.
* The retrieved value is stored in the structure pointed by
 * *optval* and of length *optlen*.
*
* *bpf_socket* should be one of the following:
*
* * **struct bpf_sock_ops** for **BPF_PROG_TYPE_SOCK_OPS**.
* * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**
* and **BPF_CGROUP_INET6_CONNECT**.
*
* This helper actually implements a subset of **getsockopt()**.
* It supports the following *level*\ s:
*
* * **IPPROTO_TCP**, which supports *optname*
* **TCP_CONGESTION**.
* * **IPPROTO_IP**, which supports *optname* **IP_TOS**.
* * **IPPROTO_IPV6**, which supports *optname* **IPV6_TCLASS**.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_getsockopt)(void *bpf_socket, int level, int optname, void *optval, int optlen) = (void *) 57;
/*
* bpf_override_return
*
* Used for error injection, this helper uses kprobes to override
* the return value of the probed function, and to set it to *rc*.
* The first argument is the context *regs* on which the kprobe
* works.
*
* This helper works by setting the PC (program counter)
* to an override function which is run in place of the original
* probed function. This means the probed function is not run at
* all. The replacement function just returns with the required
* value.
*
* This helper has security implications, and thus is subject to
* restrictions. It is only available if the kernel was compiled
* with the **CONFIG_BPF_KPROBE_OVERRIDE** configuration
* option, and in this case it only works on functions tagged with
* **ALLOW_ERROR_INJECTION** in the kernel code.
*
* Also, the helper is only available for the architectures having
* the CONFIG_FUNCTION_ERROR_INJECTION option. As of this writing,
* x86 architecture is the only one to support this feature.
*
* Returns
* 0
*/
static long (*bpf_override_return)(struct pt_regs *regs, __u64 rc) = (void *) 58;
/*
* bpf_sock_ops_cb_flags_set
*
* Attempt to set the value of the **bpf_sock_ops_cb_flags** field
* for the full TCP socket associated to *bpf_sock_ops* to
* *argval*.
*
* The primary use of this field is to determine if there should
* be calls to eBPF programs of type
* **BPF_PROG_TYPE_SOCK_OPS** at various points in the TCP
* code. A program of the same type can change its value, per
* connection and as necessary, when the connection is
* established. This field is directly accessible for reading, but
* this helper must be used for updates in order to return an
* error if an eBPF program tries to set a callback that is not
* supported in the current kernel.
*
* *argval* is a flag array which can combine these flags:
*
* * **BPF_SOCK_OPS_RTO_CB_FLAG** (retransmission time out)
* * **BPF_SOCK_OPS_RETRANS_CB_FLAG** (retransmission)
* * **BPF_SOCK_OPS_STATE_CB_FLAG** (TCP state change)
* * **BPF_SOCK_OPS_RTT_CB_FLAG** (every RTT)
*
 * Therefore, this function can be used to clear a callback flag by
 * setting the appropriate bit to zero, e.g. to disable the RTO
 * callback:
*
* **bpf_sock_ops_cb_flags_set(bpf_sock,**
* **bpf_sock->bpf_sock_ops_cb_flags & ~BPF_SOCK_OPS_RTO_CB_FLAG)**
*
* Here are some examples of where one could call such eBPF
* program:
*
* * When RTO fires.
* * When a packet is retransmitted.
* * When the connection terminates.
* * When a packet is sent.
* * When a packet is received.
*
* Returns
* Code **-EINVAL** if the socket is not a full TCP socket;
* otherwise, a positive number containing the bits that could not
* be set is returned (which comes down to 0 if all bits were set
* as required).
*/
static long (*bpf_sock_ops_cb_flags_set)(struct bpf_sock_ops *bpf_sock, int argval) = (void *) 59;
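/*
 * Usage sketch for the helper above (illustrative, not part of the
 * generated header): enabling the RTO and retransmission callbacks
 * once an active connection is established; the program name is
 * hypothetical.
 *
 * ::
 *
 *	SEC("sockops")
 *	int enable_cbs(struct bpf_sock_ops *skops)
 *	{
 *		if (skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB)
 *			bpf_sock_ops_cb_flags_set(skops,
 *						  BPF_SOCK_OPS_RTO_CB_FLAG |
 *						  BPF_SOCK_OPS_RETRANS_CB_FLAG);
 *		return 1;
 *	}
 */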
/*
* bpf_msg_redirect_map
*
* This helper is used in programs implementing policies at the
* socket level. If the message *msg* is allowed to pass (i.e. if
* the verdict eBPF program returns **SK_PASS**), redirect it to
* the socket referenced by *map* (of type
* **BPF_MAP_TYPE_SOCKMAP**) at index *key*. Both ingress and
* egress interfaces can be used for redirection. The
* **BPF_F_INGRESS** value in *flags* is used to make the
* distinction (ingress path is selected if the flag is present,
* egress path otherwise). This is the only flag supported for now.
*
* Returns
* **SK_PASS** on success, or **SK_DROP** on error.
*/
static long (*bpf_msg_redirect_map)(struct sk_msg_md *msg, void *map, __u32 key, __u64 flags) = (void *) 60;
/*
* bpf_msg_apply_bytes
*
* For socket policies, apply the verdict of the eBPF program to
* the next *bytes* (number of bytes) of message *msg*.
*
* For example, this helper can be used in the following cases:
*
* * A single **sendmsg**\ () or **sendfile**\ () system call
* contains multiple logical messages that the eBPF program is
* supposed to read and for which it should apply a verdict.
* * An eBPF program only cares to read the first *bytes* of a
* *msg*. If the message has a large payload, then setting up
* and calling the eBPF program repeatedly for all bytes, even
* though the verdict is already known, would create unnecessary
* overhead.
*
* When called from within an eBPF program, the helper sets a
* counter internal to the BPF infrastructure, that is used to
* apply the last verdict to the next *bytes*. If *bytes* is
* smaller than the current data being processed from a
* **sendmsg**\ () or **sendfile**\ () system call, the first
* *bytes* will be sent and the eBPF program will be re-run with
* the pointer for start of data pointing to byte number *bytes*
* **+ 1**. If *bytes* is larger than the current data being
* processed, then the eBPF verdict will be applied to multiple
* **sendmsg**\ () or **sendfile**\ () calls until *bytes* are
* consumed.
*
* Note that if a socket closes with the internal counter holding
* a non-zero value, this is not a problem because data is not
* being buffered for *bytes* and is sent as it is received.
*
* Returns
* 0
*/
static long (*bpf_msg_apply_bytes)(struct sk_msg_md *msg, __u32 bytes) = (void *) 61;
/*
* bpf_msg_cork_bytes
*
* For socket policies, prevent the execution of the verdict eBPF
* program for message *msg* until *bytes* (byte number) have been
* accumulated.
*
* This can be used when one needs a specific number of bytes
* before a verdict can be assigned, even if the data spans
* multiple **sendmsg**\ () or **sendfile**\ () calls. The extreme
* case would be a user calling **sendmsg**\ () repeatedly with
* 1-byte long message segments. Obviously, this is bad for
* performance, but it is still valid. If the eBPF program needs
* *bytes* bytes to validate a header, this helper can be used to
 * prevent the eBPF program from being called again until *bytes* have
* been accumulated.
*
* Returns
* 0
*/
static long (*bpf_msg_cork_bytes)(struct sk_msg_md *msg, __u32 bytes) = (void *) 62;
/*
* bpf_msg_pull_data
*
* For socket policies, pull in non-linear data from user space
* for *msg* and set pointers *msg*\ **->data** and *msg*\
* **->data_end** to *start* and *end* bytes offsets into *msg*,
* respectively.
*
* If a program of type **BPF_PROG_TYPE_SK_MSG** is run on a
* *msg* it can only parse data that the (**data**, **data_end**)
* pointers have already consumed. For **sendmsg**\ () hooks this
* is likely the first scatterlist element. But for calls relying
* on the **sendpage** handler (e.g. **sendfile**\ ()) this will
* be the range (**0**, **0**) because the data is shared with
* user space and by default the objective is to avoid allowing
* user space to modify data while (or after) eBPF verdict is
* being decided. This helper can be used to pull in data and to
* set the start and end pointer to given values. Data will be
* copied if necessary (i.e. if data was not linear and if start
* and end pointers do not point to the same chunk).
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* All values for *flags* are reserved for future usage, and must
* be left at zero.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_msg_pull_data)(struct sk_msg_md *msg, __u32 start, __u32 end, __u64 flags) = (void *) 63;
/*
* bpf_bind
*
* Bind the socket associated to *ctx* to the address pointed by
* *addr*, of length *addr_len*. This allows for making outgoing
* connection from the desired IP address, which can be useful for
 * example when all processes inside a cgroup should use one
 * single IP address on a host that has multiple IPs configured.
*
* This helper works for IPv4 and IPv6, TCP and UDP sockets. The
* domain (*addr*\ **->sa_family**) must be **AF_INET** (or
* **AF_INET6**). It's advised to pass zero port (**sin_port**
* or **sin6_port**) which triggers IP_BIND_ADDRESS_NO_PORT-like
* behavior and lets the kernel efficiently pick up an unused
* port as long as 4-tuple is unique. Passing non-zero port might
* lead to degraded performance.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_bind)(struct bpf_sock_addr *ctx, struct sockaddr *addr, int addr_len) = (void *) 64;
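/*
 * Usage sketch for the helper above (illustrative, not part of the
 * generated header): pinning outgoing IPv4 connections of a cgroup to
 * a fixed source address while leaving port selection to the kernel.
 * The address and program name are examples.
 *
 * ::
 *
 *	SEC("cgroup/connect4")
 *	int bind_src(struct bpf_sock_addr *ctx)
 *	{
 *		struct sockaddr_in sa = {
 *			.sin_family = AF_INET,
 *			.sin_port = 0,	// zero port: kernel picks one
 *			.sin_addr.s_addr = bpf_htonl(0xc0a80102), // 192.168.1.2
 *		};
 *
 *		bpf_bind(ctx, (struct sockaddr *)&sa, sizeof(sa));
 *		return 1;	// allow the connect() to proceed
 *	}
 */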
/*
* bpf_xdp_adjust_tail
*
* Adjust (move) *xdp_md*\ **->data_end** by *delta* bytes. It is
* possible to both shrink and grow the packet tail.
 * Shrinking is done by passing a negative *delta*.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_xdp_adjust_tail)(struct xdp_md *xdp_md, int delta) = (void *) 65;
/*
* bpf_skb_get_xfrm_state
*
* Retrieve the XFRM state (IP transform framework, see also
* **ip-xfrm(8)**) at *index* in XFRM "security path" for *skb*.
*
* The retrieved value is stored in the **struct bpf_xfrm_state**
* pointed by *xfrm_state* and of length *size*.
*
* All values for *flags* are reserved for future usage, and must
* be left at zero.
*
* This helper is available only if the kernel was compiled with
* **CONFIG_XFRM** configuration option.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_get_xfrm_state)(struct __sk_buff *skb, __u32 index, struct bpf_xfrm_state *xfrm_state, __u32 size, __u64 flags) = (void *) 66;
/*
* bpf_get_stack
*
* Return a user or a kernel stack in bpf program provided buffer.
* To achieve this, the helper needs *ctx*, which is a pointer
* to the context on which the tracing program is executed.
* To store the stacktrace, the bpf program provides *buf* with
* a nonnegative *size*.
*
* The last argument, *flags*, holds the number of stack frames to
* skip (from 0 to 255), masked with
* **BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
* the following flags:
*
* **BPF_F_USER_STACK**
* Collect a user space stack instead of a kernel stack.
* **BPF_F_USER_BUILD_ID**
* Collect buildid+offset instead of ips for user stack,
* only valid if **BPF_F_USER_STACK** is also specified.
*
* **bpf_get_stack**\ () can collect up to
* **PERF_MAX_STACK_DEPTH** both kernel and user frames, subject
 * to a sufficiently large buffer size. Note that
* this limit can be controlled with the **sysctl** program, and
* that it should be manually increased in order to profile long
* user stacks (such as stacks for Java programs). To do so, use:
*
* ::
*
 * # sysctl kernel.perf_event_max_stack=&lt;new value&gt;
*
* Returns
* The non-negative copied *buf* length equal to or less than
* *size* on success, or a negative error in case of failure.
*/
static long (*bpf_get_stack)(void *ctx, void *buf, __u32 size, __u64 flags) = (void *) 67;
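/*
 * Usage sketch for the helper above (illustrative, not part of the
 * generated header): capturing a user-space stack from a kprobe; the
 * probed symbol, buffer size and names are hypothetical.
 *
 * ::
 *
 *	SEC("kprobe/do_sys_openat2")
 *	int grab_stack(struct pt_regs *ctx)
 *	{
 *		__u64 ips[32];
 *		long n = bpf_get_stack(ctx, ips, sizeof(ips),
 *				       BPF_F_USER_STACK);
 *
 *		// on success, n is the number of bytes copied into ips
 *		return 0;
 *	}
 */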
/*
* bpf_skb_load_bytes_relative
*
* This helper is similar to **bpf_skb_load_bytes**\ () in that
* it provides an easy way to load *len* bytes from *offset*
* from the packet associated to *skb*, into the buffer pointed
* by *to*. The difference to **bpf_skb_load_bytes**\ () is that
* a fifth argument *start_header* exists in order to select a
* base offset to start from. *start_header* can be one of:
*
* **BPF_HDR_START_MAC**
* Base offset to load data from is *skb*'s mac header.
* **BPF_HDR_START_NET**
* Base offset to load data from is *skb*'s network header.
*
* In general, "direct packet access" is the preferred method to
* access packet data, however, this helper is in particular useful
* in socket filters where *skb*\ **->data** does not always point
* to the start of the mac header and where "direct packet access"
* is not available.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_load_bytes_relative)(const void *skb, __u32 offset, void *to, __u32 len, __u32 start_header) = (void *) 68;
/*
* bpf_fib_lookup
*
* Do FIB lookup in kernel tables using parameters in *params*.
* If lookup is successful and result shows packet is to be
* forwarded, the neighbor tables are searched for the nexthop.
 * If successful (i.e., FIB lookup shows forwarding and nexthop
* is resolved), the nexthop address is returned in ipv4_dst
* or ipv6_dst based on family, smac is set to mac address of
* egress device, dmac is set to nexthop mac address, rt_metric
* is set to metric from route (IPv4/IPv6 only), and ifindex
* is set to the device index of the nexthop from the FIB lookup.
*
* *plen* argument is the size of the passed in struct.
* *flags* argument can be a combination of one or more of the
* following values:
*
* **BPF_FIB_LOOKUP_DIRECT**
* Do a direct table lookup vs full lookup using FIB
* rules.
* **BPF_FIB_LOOKUP_OUTPUT**
* Perform lookup from an egress perspective (default is
* ingress).
*
* *ctx* is either **struct xdp_md** for XDP programs or
 * **struct sk_buff** for tc cls_act programs.
*
* Returns
* * < 0 if any input argument is invalid
* * 0 on success (packet is forwarded, nexthop neighbor exists)
* * > 0 one of **BPF_FIB_LKUP_RET_** codes explaining why the
* packet is not forwarded or needs assist from full stack
*
* If lookup fails with BPF_FIB_LKUP_RET_FRAG_NEEDED, then the MTU
* was exceeded and output params->mtu_result contains the MTU.
*/
static long (*bpf_fib_lookup)(void *ctx, struct bpf_fib_lookup *params, int plen, __u32 flags) = (void *) 69;
/*
* bpf_sock_hash_update
*
* Add an entry to, or update a sockhash *map* referencing sockets.
* The *skops* is used as a new value for the entry associated to
* *key*. *flags* is one of:
*
* **BPF_NOEXIST**
* The entry for *key* must not exist in the map.
* **BPF_EXIST**
* The entry for *key* must already exist in the map.
* **BPF_ANY**
* No condition on the existence of the entry for *key*.
*
* If the *map* has eBPF programs (parser and verdict), those will
* be inherited by the socket being added. If the socket is
* already attached to eBPF programs, this results in an error.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_sock_hash_update)(struct bpf_sock_ops *skops, void *map, void *key, __u64 flags) = (void *) 70;
/*
* bpf_msg_redirect_hash
*
* This helper is used in programs implementing policies at the
* socket level. If the message *msg* is allowed to pass (i.e. if
* the verdict eBPF program returns **SK_PASS**), redirect it to
* the socket referenced by *map* (of type
* **BPF_MAP_TYPE_SOCKHASH**) using hash *key*. Both ingress and
* egress interfaces can be used for redirection. The
* **BPF_F_INGRESS** value in *flags* is used to make the
* distinction (ingress path is selected if the flag is present,
* egress path otherwise). This is the only flag supported for now.
*
* Returns
* **SK_PASS** on success, or **SK_DROP** on error.
*/
static long (*bpf_msg_redirect_hash)(struct sk_msg_md *msg, void *map, void *key, __u64 flags) = (void *) 71;
/*
* bpf_sk_redirect_hash
*
* This helper is used in programs implementing policies at the
* skb socket level. If the sk_buff *skb* is allowed to pass (i.e.
* if the verdict eBPF program returns **SK_PASS**), redirect it
* to the socket referenced by *map* (of type
* **BPF_MAP_TYPE_SOCKHASH**) using hash *key*. Both ingress and
* egress interfaces can be used for redirection. The
* **BPF_F_INGRESS** value in *flags* is used to make the
* distinction (ingress path is selected if the flag is present,
 * egress path otherwise). This is the only flag supported for now.
*
* Returns
* **SK_PASS** on success, or **SK_DROP** on error.
*/
static long (*bpf_sk_redirect_hash)(struct __sk_buff *skb, void *map, void *key, __u64 flags) = (void *) 72;
/*
* bpf_lwt_push_encap
*
* Encapsulate the packet associated to *skb* within a Layer 3
* protocol header. This header is provided in the buffer at
* address *hdr*, with *len* its size in bytes. *type* indicates
* the protocol of the header and can be one of:
*
* **BPF_LWT_ENCAP_SEG6**
* IPv6 encapsulation with Segment Routing Header
* (**struct ipv6_sr_hdr**). *hdr* only contains the SRH,
* the IPv6 header is computed by the kernel.
* **BPF_LWT_ENCAP_SEG6_INLINE**
* Only works if *skb* contains an IPv6 packet. Insert a
* Segment Routing Header (**struct ipv6_sr_hdr**) inside
* the IPv6 header.
* **BPF_LWT_ENCAP_IP**
* IP encapsulation (GRE/GUE/IPIP/etc). The outer header
* must be IPv4 or IPv6, followed by zero or more
* additional headers, up to **LWT_BPF_MAX_HEADROOM**
* total bytes in all prepended headers. Please note that
* if **skb_is_gso**\ (*skb*) is true, no more than two
* headers can be prepended, and the inner header, if
* present, should be either GRE or UDP/GUE.
*
* **BPF_LWT_ENCAP_SEG6**\ \* types can be called by BPF programs
* of type **BPF_PROG_TYPE_LWT_IN**; **BPF_LWT_ENCAP_IP** type can
* be called by bpf programs of types **BPF_PROG_TYPE_LWT_IN** and
* **BPF_PROG_TYPE_LWT_XMIT**.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_lwt_push_encap)(struct __sk_buff *skb, __u32 type, void *hdr, __u32 len) = (void *) 73;
/*
* bpf_lwt_seg6_store_bytes
*
* Store *len* bytes from address *from* into the packet
* associated to *skb*, at *offset*. Only the flags, tag and TLVs
* inside the outermost IPv6 Segment Routing Header can be
* modified through this helper.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_lwt_seg6_store_bytes)(struct __sk_buff *skb, __u32 offset, const void *from, __u32 len) = (void *) 74;
/*
* bpf_lwt_seg6_adjust_srh
*
* Adjust the size allocated to TLVs in the outermost IPv6
* Segment Routing Header contained in the packet associated to
* *skb*, at position *offset* by *delta* bytes. Only offsets
* after the segments are accepted. *delta* can be as well
* positive (growing) as negative (shrinking).
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_lwt_seg6_adjust_srh)(struct __sk_buff *skb, __u32 offset, __s32 delta) = (void *) 75;
/*
* bpf_lwt_seg6_action
*
* Apply an IPv6 Segment Routing action of type *action* to the
* packet associated to *skb*. Each action takes a parameter
* contained at address *param*, and of length *param_len* bytes.
* *action* can be one of:
*
* **SEG6_LOCAL_ACTION_END_X**
* End.X action: Endpoint with Layer-3 cross-connect.
* Type of *param*: **struct in6_addr**.
* **SEG6_LOCAL_ACTION_END_T**
* End.T action: Endpoint with specific IPv6 table lookup.
* Type of *param*: **int**.
* **SEG6_LOCAL_ACTION_END_B6**
* End.B6 action: Endpoint bound to an SRv6 policy.
* Type of *param*: **struct ipv6_sr_hdr**.
* **SEG6_LOCAL_ACTION_END_B6_ENCAP**
* End.B6.Encap action: Endpoint bound to an SRv6
* encapsulation policy.
* Type of *param*: **struct ipv6_sr_hdr**.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_lwt_seg6_action)(struct __sk_buff *skb, __u32 action, void *param, __u32 param_len) = (void *) 76;
/*
* bpf_rc_repeat
*
* This helper is used in programs implementing IR decoding, to
* report a successfully decoded repeat key message. This delays
* the generation of a key up event for previously generated
* key down event.
*
* Some IR protocols like NEC have a special IR message for
* repeating last button, for when a button is held down.
*
* The *ctx* should point to the lirc sample as passed into
* the program.
*
 * This helper is only available if the kernel was compiled with
* the **CONFIG_BPF_LIRC_MODE2** configuration option set to
* "**y**".
*
* Returns
* 0
*/
static long (*bpf_rc_repeat)(void *ctx) = (void *) 77;
/*
* bpf_rc_keydown
*
* This helper is used in programs implementing IR decoding, to
* report a successfully decoded key press with *scancode*,
* *toggle* value in the given *protocol*. The scancode will be
* translated to a keycode using the rc keymap, and reported as
* an input key down event. After a period a key up event is
* generated. This period can be extended by calling either
* **bpf_rc_keydown**\ () again with the same values, or calling
* **bpf_rc_repeat**\ ().
*
* Some protocols include a toggle bit, in case the button was
* released and pressed again between consecutive scancodes.
*
* The *ctx* should point to the lirc sample as passed into
* the program.
*
* The *protocol* is the decoded protocol number (see
* **enum rc_proto** for some predefined values).
*
 * This helper is only available if the kernel was compiled with
* the **CONFIG_BPF_LIRC_MODE2** configuration option set to
* "**y**".
*
* Returns
* 0
*/
static long (*bpf_rc_keydown)(void *ctx, __u32 protocol, __u64 scancode, __u32 toggle) = (void *) 78;
/*
* bpf_skb_cgroup_id
*
* Return the cgroup v2 id of the socket associated with the *skb*.
* This is roughly similar to the **bpf_get_cgroup_classid**\ ()
* helper for cgroup v1 by providing a tag or identifier that
* can be matched on or used for map lookups e.g. to implement
* policy. The cgroup v2 id of a given path in the hierarchy is
* exposed in user space through the f_handle API in order to get
* to the same 64-bit id.
*
* This helper can be used on TC egress path, but not on ingress,
* and is available only if the kernel was compiled with the
* **CONFIG_SOCK_CGROUP_DATA** configuration option.
*
* Returns
* The id is returned or 0 in case the id could not be retrieved.
*/
static __u64 (*bpf_skb_cgroup_id)(struct __sk_buff *skb) = (void *) 79;
/*
* bpf_get_current_cgroup_id
*
* Get the current cgroup id based on the cgroup within which
* the current task is running.
*
* Returns
* A 64-bit integer containing the current cgroup id based
* on the cgroup within which the current task is running.
*/
static __u64 (*bpf_get_current_cgroup_id)(void) = (void *) 80;
/*
* bpf_get_local_storage
*
* Get the pointer to the local storage area.
* The type and the size of the local storage is defined
* by the *map* argument.
* The *flags* meaning is specific for each map type,
* and has to be 0 for cgroup local storage.
*
* Depending on the BPF program type, a local storage area
* can be shared between multiple instances of the BPF program,
* running simultaneously.
*
* The user is responsible for synchronization, for example by
* using the **BPF_ATOMIC** instructions to alter the shared
* data.
*
* Returns
* A pointer to the local storage area.
*/
static void *(*bpf_get_local_storage)(void *map, __u64 flags) = (void *) 81;
/*
* bpf_sk_select_reuseport
*
* Select a **SO_REUSEPORT** socket from a
* **BPF_MAP_TYPE_REUSEPORT_SOCKARRAY** *map*.
* It checks that the selected socket matches the incoming
* request in the socket buffer.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_sk_select_reuseport)(struct sk_reuseport_md *reuse, void *map, void *key, __u64 flags) = (void *) 82;
/*
* bpf_skb_ancestor_cgroup_id
*
* Return id of cgroup v2 that is ancestor of cgroup associated
* with the *skb* at the *ancestor_level*. The root cgroup is at
* *ancestor_level* zero and each step down the hierarchy
* increments the level. If *ancestor_level* == level of cgroup
* associated with *skb*, then return value will be same as that
* of **bpf_skb_cgroup_id**\ ().
*
* The helper is useful to implement policies based on cgroups
* that are upper in hierarchy than immediate cgroup associated
* with *skb*.
*
* The format of returned id and helper limitations are same as in
* **bpf_skb_cgroup_id**\ ().
*
* Returns
* The id is returned or 0 in case the id could not be retrieved.
*/
static __u64 (*bpf_skb_ancestor_cgroup_id)(struct __sk_buff *skb, int ancestor_level) = (void *) 83;
/*
* bpf_sk_lookup_tcp
*
* Look for TCP socket matching *tuple*, optionally in a child
* network namespace *netns*. The return value must be checked,
* and if non-**NULL**, released via **bpf_sk_release**\ ().
*
* The *ctx* should point to the context of the program, such as
* the skb or socket (depending on the hook in use). This is used
* to determine the base network namespace for the lookup.
*
* *tuple_size* must be one of:
*
* **sizeof**\ (*tuple*\ **->ipv4**)
* Look for an IPv4 socket.
* **sizeof**\ (*tuple*\ **->ipv6**)
* Look for an IPv6 socket.
*
* If the *netns* is a negative signed 32-bit integer, then the
* socket lookup table in the netns associated with the *ctx*
* will be used. For the TC hooks, this is the netns of the device
* in the skb. For socket hooks, this is the netns of the socket.
* If *netns* is any other signed 32-bit value greater than or
* equal to zero then it specifies the ID of the netns relative to
* the netns associated with the *ctx*. *netns* values beyond the
* range of 32-bit integers are reserved for future use.
*
* All values for *flags* are reserved for future usage, and must
* be left at zero.
*
* This helper is available only if the kernel was compiled with
* **CONFIG_NET** configuration option.
*
* Returns
* Pointer to **struct bpf_sock**, or **NULL** in case of failure.
* For sockets with reuseport option, the **struct bpf_sock**
* result is from *reuse*\ **->socks**\ [] using the hash of the
* tuple.
*/
static struct bpf_sock *(*bpf_sk_lookup_tcp)(void *ctx, struct bpf_sock_tuple *tuple, __u32 tuple_size, __u64 netns, __u64 flags) = (void *) 84;
/*
* bpf_sk_lookup_udp
*
* Look for UDP socket matching *tuple*, optionally in a child
* network namespace *netns*. The return value must be checked,
* and if non-**NULL**, released via **bpf_sk_release**\ ().
*
* The *ctx* should point to the context of the program, such as
* the skb or socket (depending on the hook in use). This is used
* to determine the base network namespace for the lookup.
*
* *tuple_size* must be one of:
*
* **sizeof**\ (*tuple*\ **->ipv4**)
* Look for an IPv4 socket.
* **sizeof**\ (*tuple*\ **->ipv6**)
* Look for an IPv6 socket.
*
* If the *netns* is a negative signed 32-bit integer, then the
* socket lookup table in the netns associated with the *ctx*
* will be used. For the TC hooks, this is the netns of the device
* in the skb. For socket hooks, this is the netns of the socket.
* If *netns* is any other signed 32-bit value greater than or
* equal to zero then it specifies the ID of the netns relative to
* the netns associated with the *ctx*. *netns* values beyond the
* range of 32-bit integers are reserved for future use.
*
* All values for *flags* are reserved for future usage, and must
* be left at zero.
*
* This helper is available only if the kernel was compiled with
* **CONFIG_NET** configuration option.
*
* Returns
* Pointer to **struct bpf_sock**, or **NULL** in case of failure.
* For sockets with reuseport option, the **struct bpf_sock**
* result is from *reuse*\ **->socks**\ [] using the hash of the
* tuple.
*/
static struct bpf_sock *(*bpf_sk_lookup_udp)(void *ctx, struct bpf_sock_tuple *tuple, __u32 tuple_size, __u64 netns, __u64 flags) = (void *) 85;
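The *netns* selection rules documented above apply to both lookup helpers. As a userspace sketch of how the argument is interpreted (the enum, function name, and return codes below are illustrative, not kernel API): any negative signed 32-bit value selects the netns of the *ctx*, values from 0 up to the 32-bit maximum are a relative netns id, and anything beyond the 32-bit range is reserved.

```c
#include <stdint.h>

/* Illustrative classification of the u64 netns argument, per the
 * documented rules for bpf_sk_lookup_tcp()/bpf_sk_lookup_udp(). */
enum netns_sel { NETNS_CURRENT, NETNS_RELATIVE, NETNS_RESERVED };

static enum netns_sel classify_netns(uint64_t netns)
{
    int64_t s = (int64_t)netns;

    if (s >= INT32_MIN && s < 0)
        return NETNS_CURRENT;   /* e.g. (u64)-1: use the ctx's netns */
    if (s >= 0 && s <= INT32_MAX)
        return NETNS_RELATIVE;  /* id relative to the ctx's netns */
    return NETNS_RESERVED;      /* beyond 32-bit range: reserved */
}
```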
/*
* bpf_sk_release
*
* Release the reference held by *sock*. *sock* must be a
* non-**NULL** pointer that was returned from
* **bpf_sk_lookup_xxx**\ ().
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_sk_release)(void *sock) = (void *) 86;
/*
* bpf_map_push_elem
*
* Push an element *value* in *map*. *flags* is one of:
*
* **BPF_EXIST**
* If the queue/stack is full, the oldest element is
* removed to make room for this.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_map_push_elem)(void *map, const void *value, __u64 flags) = (void *) 87;
/*
* bpf_map_pop_elem
*
* Pop an element from *map*.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_map_pop_elem)(void *map, void *value) = (void *) 88;
/*
* bpf_map_peek_elem
*
* Get an element from *map* without removing it.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_map_peek_elem)(void *map, void *value) = (void *) 89;
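Taken together, the three stubs above expose FIFO semantics for queue maps. A minimal userspace model of a **BPF_MAP_TYPE_QUEUE** (the capacity, flag value, and error codes here are illustrative assumptions, not the kernel implementation): push appends, **BPF_EXIST** drops the oldest element when the queue is full, pop removes from the front, and peek reads without removing.

```c
/* Userspace model of BPF_MAP_TYPE_QUEUE push/pop/peek semantics.
 * QCAP and BPF_EXIST_FLAG are illustrative values for this sketch. */
#define QCAP 3
#define BPF_EXIST_FLAG 2

struct queue { int buf[QCAP]; int head, len; };

static long q_push(struct queue *q, int value, unsigned long flags)
{
    if (q->len == QCAP) {
        if (!(flags & BPF_EXIST_FLAG))
            return -7;                   /* -E2BIG: full, no overwrite */
        q->head = (q->head + 1) % QCAP;  /* drop oldest to make room */
        q->len--;
    }
    q->buf[(q->head + q->len) % QCAP] = value;
    q->len++;
    return 0;
}

static long q_pop(struct queue *q, int *value)
{
    if (q->len == 0)
        return -2;                       /* -ENOENT: queue empty */
    *value = q->buf[q->head];
    q->head = (q->head + 1) % QCAP;
    q->len--;
    return 0;
}

static long q_peek(struct queue *q, int *value)
{
    if (q->len == 0)
        return -2;                       /* -ENOENT: queue empty */
    *value = q->buf[q->head];            /* read without removing */
    return 0;
}
```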
/*
* bpf_msg_push_data
*
* For socket policies, insert *len* bytes into *msg* at offset
* *start*.
*
* If a program of type **BPF_PROG_TYPE_SK_MSG** is run on a
* *msg* it may want to insert metadata or options into the *msg*.
* This can later be read and used by any of the lower layer BPF
* hooks.
*
* This helper may fail if under memory pressure (a malloc
* fails) in these cases BPF programs will get an appropriate
* error and BPF programs will need to handle them.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_msg_push_data)(struct sk_msg_md *msg, __u32 start, __u32 len, __u64 flags) = (void *) 90;
/*
* bpf_msg_pop_data
*
* Remove *len* bytes from *msg*, starting at byte *start*.
* This may result in **ENOMEM** errors under certain situations if
* an allocation and copy are required due to a full ring buffer.
* However, the helper will try to avoid doing the allocation
* if possible. Other errors can occur if input parameters are
* invalid, either due to the *start* byte not being a valid part
* of the *msg* payload and/or the *len* value being too large.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_msg_pop_data)(struct sk_msg_md *msg, __u32 start, __u32 len, __u64 flags) = (void *) 91;
/*
* bpf_rc_pointer_rel
*
* This helper is used in programs implementing IR decoding, to
* report a successfully decoded pointer movement.
*
* The *ctx* should point to the lirc sample as passed into
* the program.
*
* This helper is only available if the kernel was compiled with
* the **CONFIG_BPF_LIRC_MODE2** configuration option set to
* "**y**".
*
* Returns
* 0
*/
static long (*bpf_rc_pointer_rel)(void *ctx, __s32 rel_x, __s32 rel_y) = (void *) 92;
/*
* bpf_spin_lock
*
* Acquire a spinlock represented by the pointer *lock*, which is
* stored as part of a map value. Taking the lock allows the
* program to safely update the rest of the fields in that value. The
* spinlock can (and must) later be released with a call to
* **bpf_spin_unlock**\ (\ *lock*\ ).
*
* Spinlocks in BPF programs come with a number of restrictions
* and constraints:
*
* * **bpf_spin_lock** objects are only allowed inside maps of
* types **BPF_MAP_TYPE_HASH** and **BPF_MAP_TYPE_ARRAY** (this
* list could be extended in the future).
* * BTF description of the map is mandatory.
* * The BPF program can take ONE lock at a time, since taking two
* or more could cause deadlocks.
* * Only one **struct bpf_spin_lock** is allowed per map element.
* * When the lock is taken, calls (either BPF to BPF or helpers)
* are not allowed.
* * The **BPF_LD_ABS** and **BPF_LD_IND** instructions are not
* allowed inside a spinlock-ed region.
* * The BPF program MUST call **bpf_spin_unlock**\ () to release
* the lock, on all execution paths, before it returns.
* * The BPF program can access **struct bpf_spin_lock** only via
* the **bpf_spin_lock**\ () and **bpf_spin_unlock**\ ()
* helpers. Loading or storing data into the **struct
* bpf_spin_lock** *lock*\ **;** field of a map is not allowed.
* * To use the **bpf_spin_lock**\ () helper, the BTF description
* of the map value must be a struct and have **struct
* bpf_spin_lock** *anyname*\ **;** field at the top level.
* Nested lock inside another struct is not allowed.
* * The **struct bpf_spin_lock** *lock* field in a map value must
* be aligned on a multiple of 4 bytes in that value.
* * Syscall with command **BPF_MAP_LOOKUP_ELEM** does not copy
* the **bpf_spin_lock** field to user space.
* * Syscall with command **BPF_MAP_UPDATE_ELEM**, or update from
* a BPF program, do not update the **bpf_spin_lock** field.
* * **bpf_spin_lock** cannot be on the stack or inside a
* networking packet (it can only be inside a map value).
* * **bpf_spin_lock** is available to root only.
* * Tracing programs and socket filter programs cannot use
* **bpf_spin_lock**\ () due to insufficient preemption checks
* (but this may change in the future).
* * **bpf_spin_lock** is not allowed in inner maps of map-in-map.
*
* Returns
* 0
*/
static long (*bpf_spin_lock)(struct bpf_spin_lock *lock) = (void *) 93;
/*
* bpf_spin_unlock
*
* Release the *lock* previously locked by a call to
* **bpf_spin_lock**\ (\ *lock*\ ).
*
* Returns
* 0
*/
static long (*bpf_spin_unlock)(struct bpf_spin_lock *lock) = (void *) 94;
/*
* bpf_sk_fullsock
*
* This helper gets a **struct bpf_sock** pointer such
* that all the fields in this **bpf_sock** can be accessed.
*
* Returns
* A **struct bpf_sock** pointer on success, or **NULL** in
* case of failure.
*/
static struct bpf_sock *(*bpf_sk_fullsock)(struct bpf_sock *sk) = (void *) 95;
/*
* bpf_tcp_sock
*
* This helper gets a **struct bpf_tcp_sock** pointer from a
* **struct bpf_sock** pointer.
*
* Returns
* A **struct bpf_tcp_sock** pointer on success, or **NULL** in
* case of failure.
*/
static struct bpf_tcp_sock *(*bpf_tcp_sock)(struct bpf_sock *sk) = (void *) 96;
/*
* bpf_skb_ecn_set_ce
*
* Set ECN (Explicit Congestion Notification) field of IP header
* to **CE** (Congestion Encountered) if current value is **ECT**
* (ECN Capable Transport). Otherwise, do nothing. Works with IPv6
* and IPv4.
*
* Returns
* 1 if the **CE** flag is set (either by the current helper call
* or because it was already present), 0 if it is not set.
*/
static long (*bpf_skb_ecn_set_ce)(struct __sk_buff *skb) = (void *) 97;
/*
* bpf_get_listener_sock
*
* Return a **struct bpf_sock** pointer in **TCP_LISTEN** state.
* **bpf_sk_release**\ () is unnecessary and not allowed.
*
* Returns
* A **struct bpf_sock** pointer on success, or **NULL** in
* case of failure.
*/
static struct bpf_sock *(*bpf_get_listener_sock)(struct bpf_sock *sk) = (void *) 98;
/*
* bpf_skc_lookup_tcp
*
* Look for TCP socket matching *tuple*, optionally in a child
* network namespace *netns*. The return value must be checked,
* and if non-**NULL**, released via **bpf_sk_release**\ ().
*
* This function is identical to **bpf_sk_lookup_tcp**\ (), except
* that it also returns timewait or request sockets. Use
* **bpf_sk_fullsock**\ () or **bpf_tcp_sock**\ () to access the
* full structure.
*
* This helper is available only if the kernel was compiled with
* **CONFIG_NET** configuration option.
*
* Returns
* Pointer to **struct bpf_sock**, or **NULL** in case of failure.
* For sockets with reuseport option, the **struct bpf_sock**
* result is from *reuse*\ **->socks**\ [] using the hash of the
* tuple.
*/
static struct bpf_sock *(*bpf_skc_lookup_tcp)(void *ctx, struct bpf_sock_tuple *tuple, __u32 tuple_size, __u64 netns, __u64 flags) = (void *) 99;
/*
* bpf_tcp_check_syncookie
*
* Check whether *iph* and *th* contain a valid SYN cookie ACK for
* the listening socket in *sk*.
*
* *iph* points to the start of the IPv4 or IPv6 header, while
* *iph_len* contains **sizeof**\ (**struct iphdr**) or
* **sizeof**\ (**struct ipv6hdr**).
*
* *th* points to the start of the TCP header, while *th_len*
* contains the length of the TCP header (at least
* **sizeof**\ (**struct tcphdr**)).
*
* Returns
* 0 if *iph* and *th* are a valid SYN cookie ACK, or a negative
* error otherwise.
*/
static long (*bpf_tcp_check_syncookie)(void *sk, void *iph, __u32 iph_len, struct tcphdr *th, __u32 th_len) = (void *) 100;
/*
* bpf_sysctl_get_name
*
* Get the name of the sysctl in /proc/sys/ and copy it into the
* buffer *buf* of size *buf_len* provided by the program.
*
* The buffer is always NUL terminated, unless it's zero-sized.
*
* If *flags* is zero, full name (e.g. "net/ipv4/tcp_mem") is
* copied. Use **BPF_F_SYSCTL_BASE_NAME** flag to copy base name
* only (e.g. "tcp_mem").
*
* Returns
* Number of characters copied (not including the trailing NUL).
*
* **-E2BIG** if the buffer wasn't big enough (*buf* will contain
* truncated name in this case).
*/
static long (*bpf_sysctl_get_name)(struct bpf_sysctl *ctx, char *buf, unsigned long buf_len, __u64 flags) = (void *) 101;
/*
* bpf_sysctl_get_current_value
*
* Get the current value of the sysctl as it is presented in
* /proc/sys (incl. newline, etc), and copy it as a string into
* the buffer *buf* of size *buf_len* provided by the program.
*
* The whole value is copied, no matter what file position user
* space issued e.g. sys_read at.
*
* The buffer is always NUL terminated, unless it's zero-sized.
*
* Returns
* Number of characters copied (not including the trailing NUL).
*
* **-E2BIG** if the buffer wasn't big enough (*buf* will contain
* truncated name in this case).
*
* **-EINVAL** if current value was unavailable, e.g. because
* sysctl is uninitialized and read returns -EIO for it.
*/
static long (*bpf_sysctl_get_current_value)(struct bpf_sysctl *ctx, char *buf, unsigned long buf_len) = (void *) 102;
/*
* bpf_sysctl_get_new_value
*
* Get the new value being written by user space to the sysctl
* (before the actual write happens) and copy it as a string into
* the buffer *buf* of size *buf_len* provided by the program.
*
* User space may write new value at file position > 0.
*
* The buffer is always NUL terminated, unless it's zero-sized.
*
* Returns
* Number of characters copied (not including the trailing NUL).
*
* **-E2BIG** if the buffer wasn't big enough (*buf* will contain
* truncated name in this case).
*
* **-EINVAL** if sysctl is being read.
*/
static long (*bpf_sysctl_get_new_value)(struct bpf_sysctl *ctx, char *buf, unsigned long buf_len) = (void *) 103;
/*
* bpf_sysctl_set_new_value
*
* Override new value being written by user space to sysctl with
* value provided by program in buffer *buf* of size *buf_len*.
*
* *buf* should contain a string in the same form as provided by user
* space on sysctl write.
*
* User space may write new value at file position > 0. To override
* the whole sysctl value file position should be set to zero.
*
* Returns
* 0 on success.
*
* **-E2BIG** if the *buf_len* is too big.
*
* **-EINVAL** if sysctl is being read.
*/
static long (*bpf_sysctl_set_new_value)(struct bpf_sysctl *ctx, const char *buf, unsigned long buf_len) = (void *) 104;
/*
* bpf_strtol
*
* Convert the initial part of the string from buffer *buf* of
* size *buf_len* to a long integer according to the given base
* and save the result in *res*.
*
* The string may begin with an arbitrary amount of white space
* (as determined by **isspace**\ (3)) followed by a single
* optional '**-**' sign.
*
* Five least significant bits of *flags* encode base, other bits
* are currently unused.
*
* Base must be either 8, 10, 16 or 0 to detect it automatically
* similar to user space **strtol**\ (3).
*
* Returns
* Number of characters consumed on success. Must be positive but
* no more than *buf_len*.
*
* **-EINVAL** if no valid digits were found or unsupported base
* was provided.
*
* **-ERANGE** if resulting value was out of range.
*/
static long (*bpf_strtol)(const char *buf, unsigned long buf_len, __u64 flags, long *res) = (void *) 105;
/*
* bpf_strtoul
*
* Convert the initial part of the string from buffer *buf* of
* size *buf_len* to an unsigned long integer according to the
* given base and save the result in *res*.
*
* The string may begin with an arbitrary amount of white space
* (as determined by **isspace**\ (3)).
*
* Five least significant bits of *flags* encode base, other bits
* are currently unused.
*
* Base must be either 8, 10, 16 or 0 to detect it automatically
* similar to user space **strtoul**\ (3).
*
* Returns
* Number of characters consumed on success. Must be positive but
* no more than *buf_len*.
*
* **-EINVAL** if no valid digits were found or unsupported base
* was provided.
*
* **-ERANGE** if resulting value was out of range.
*/
static long (*bpf_strtoul)(const char *buf, unsigned long buf_len, __u64 flags, unsigned long *res) = (void *) 106;
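A userspace analogue of the *flags* handling shared by **bpf_strtol**() and **bpf_strtoul**() above, assuming only the documented behavior: the five least-significant bits of *flags* carry the base, the remaining bits must be zero, and the base must be 8, 10, 16 or 0 (auto-detect). The function name and error values below are illustrative; -22 mirrors the documented **-EINVAL**.

```c
#include <stdlib.h>

/* Sketch of the documented bpf_strtol() flag/base handling, built on
 * user space strtol(3). Returns characters consumed, or -22 (-EINVAL). */
static long parse_with_flags(const char *buf, unsigned long long flags,
                             long *res)
{
    int base = (int)(flags & 0x1f);      /* five LSBs encode the base */
    char *end;

    if ((flags & ~0x1fULL) != 0)
        return -22;                      /* other flag bits are unused */
    if (base != 0 && base != 8 && base != 10 && base != 16)
        return -22;                      /* unsupported base */
    *res = strtol(buf, &end, base);
    if (end == buf)
        return -22;                      /* no valid digits found */
    return end - buf;                    /* characters consumed */
}
```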
/*
* bpf_sk_storage_get
*
* Get a bpf-local-storage from a *sk*.
*
* Logically, it can be thought of as getting the value from
* a *map* with *sk* as the **key**. From this
* perspective, the usage is not much different from
* **bpf_map_lookup_elem**\ (*map*, **&**\ *sk*) except this
* helper enforces that the key must be a full socket and that the
* map must also be a **BPF_MAP_TYPE_SK_STORAGE**.
*
* Underneath, the value is stored locally at *sk* instead of
* the *map*. The *map* is used as the bpf-local-storage
* "type". The bpf-local-storage "type" (i.e. the *map*) is
* searched against all bpf-local-storages residing at *sk*.
*
* *sk* is a kernel **struct sock** pointer for LSM program.
* *sk* is a **struct bpf_sock** pointer for other program types.
*
* An optional *flags* (**BPF_SK_STORAGE_GET_F_CREATE**) can be
* used such that a new bpf-local-storage will be
* created if one does not exist. *value* can be used
* together with **BPF_SK_STORAGE_GET_F_CREATE** to specify
* the initial value of a bpf-local-storage. If *value* is
* **NULL**, the new bpf-local-storage will be zero initialized.
*
* Returns
* A bpf-local-storage pointer is returned on success.
*
* **NULL** if not found or there was an error in adding
* a new bpf-local-storage.
*/
static void *(*bpf_sk_storage_get)(void *map, void *sk, void *value, __u64 flags) = (void *) 107;
/*
* bpf_sk_storage_delete
*
* Delete a bpf-local-storage from a *sk*.
*
* Returns
* 0 on success.
*
* **-ENOENT** if the bpf-local-storage cannot be found.
* **-EINVAL** if sk is not a fullsock (e.g. a request_sock).
*/
static long (*bpf_sk_storage_delete)(void *map, void *sk) = (void *) 108;
/*
* bpf_send_signal
*
* Send signal *sig* to the process of the current task.
* The signal may be delivered to any of this process's threads.
*
* Returns
* 0 on success or successfully queued.
*
* **-EBUSY** if work queue under nmi is full.
*
* **-EINVAL** if *sig* is invalid.
*
* **-EPERM** if no permission to send the *sig*.
*
* **-EAGAIN** if bpf program can try again.
*/
static long (*bpf_send_signal)(__u32 sig) = (void *) 109;
/*
* bpf_tcp_gen_syncookie
*
* Try to issue a SYN cookie for the packet with corresponding
* IP/TCP headers, *iph* and *th*, on the listening socket in *sk*.
*
* *iph* points to the start of the IPv4 or IPv6 header, while
* *iph_len* contains **sizeof**\ (**struct iphdr**) or
* **sizeof**\ (**struct ipv6hdr**).
*
* *th* points to the start of the TCP header, while *th_len*
* contains the length of the TCP header with options (at least
* **sizeof**\ (**struct tcphdr**)).
*
* Returns
* On success, the lower 32 bits hold the generated SYN cookie,
* followed by 16 bits which hold the MSS value for that cookie;
* the top 16 bits are unused.
*
* On failure, the returned value is one of the following:
*
* **-EINVAL** SYN cookie cannot be issued due to error
*
* **-ENOENT** SYN cookie should not be issued (no SYN flood)
*
* **-EOPNOTSUPP** kernel configuration does not enable SYN cookies
*
* **-EPROTONOSUPPORT** IP packet version is not 4 or 6
*/
static __s64 (*bpf_tcp_gen_syncookie)(void *sk, void *iph, __u32 iph_len, struct tcphdr *th, __u32 th_len) = (void *) 110;
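The packed success value described above can be split with two small helpers. These are illustrative accessors, not kernel API: bits 0-31 carry the SYN cookie, bits 32-47 the MSS, and the top 16 bits are unused.

```c
#include <stdint.h>

/* Illustrative decoding of a successful bpf_tcp_gen_syncookie()
 * return value (a non-negative __s64). */
static uint32_t syncookie_of(int64_t ret)
{
    return (uint32_t)ret;                /* bits 0-31: SYN cookie */
}

static uint16_t syncookie_mss(int64_t ret)
{
    return (uint16_t)(ret >> 32);        /* bits 32-47: MSS value */
}
```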
/*
* bpf_skb_output
*
* Write raw *data* blob into a special BPF perf event held by
* *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. This perf
* event must have the following attributes: **PERF_SAMPLE_RAW**
* as **sample_type**, **PERF_TYPE_SOFTWARE** as **type**, and
* **PERF_COUNT_SW_BPF_OUTPUT** as **config**.
*
* The *flags* are used to indicate the index in *map* for which
* the value must be put, masked with **BPF_F_INDEX_MASK**.
* Alternatively, *flags* can be set to **BPF_F_CURRENT_CPU**
* to indicate that the index of the current CPU core should be
* used.
*
* The value to write, of *size*, is passed through eBPF stack and
* pointed by *data*.
*
* *ctx* is a pointer to in-kernel struct sk_buff.
*
* This helper is similar to **bpf_perf_event_output**\ () but
* restricted to raw_tracepoint bpf programs.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_skb_output)(void *ctx, void *map, __u64 flags, void *data, __u64 size) = (void *) 111;
/*
* bpf_probe_read_user
*
* Safely attempt to read *size* bytes from user space address
* *unsafe_ptr* and store the data in *dst*.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_probe_read_user)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) 112;
/*
* bpf_probe_read_kernel
*
* Safely attempt to read *size* bytes from kernel space address
* *unsafe_ptr* and store the data in *dst*.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_probe_read_kernel)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) 113;
/*
* bpf_probe_read_user_str
*
* Copy a NUL terminated string from an unsafe user address
* *unsafe_ptr* to *dst*. The *size* should include the
* terminating NUL byte. In case the string length is smaller than
* *size*, the target is not padded with further NUL bytes. If the
* string length is larger than *size*, just *size*-1 bytes are
* copied and the last byte is set to NUL.
*
* On success, returns the number of bytes that were written,
* including the terminal NUL. This makes this helper useful in
* tracing programs for reading strings, and more importantly to
* get its length at runtime. See the following snippet:
*
* ::
*
* SEC("kprobe/sys_open")
* void bpf_sys_open(struct pt_regs *ctx)
* {
* char buf[PATHLEN]; // PATHLEN is defined to 256
* int res = bpf_probe_read_user_str(buf, sizeof(buf),
* ctx->di);
*
* // Consume buf, for example push it to
* // userspace via bpf_perf_event_output(); we
* // can use res (the string length) as event
* // size, after checking its boundaries.
* }
*
* In comparison, using **bpf_probe_read_user**\ () helper here
* instead to read the string would require to estimate the length
* at compile time, and would often result in copying more memory
* than necessary.
*
* Another useful use case is when parsing individual process
* arguments or individual environment variables navigating
* *current*\ **->mm->arg_start** and *current*\
* **->mm->env_start**: using this helper and the return value,
* one can quickly iterate at the right offset of the memory area.
*
* Returns
* On success, the strictly positive length of the output string,
* including the trailing NUL character. On error, a negative
* value.
*/
static long (*bpf_probe_read_user_str)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) 114;
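The copy-and-truncate semantics above can be modeled in plain C. This is a sketch of the documented behavior only (the function name is illustrative, and it reads from a safe local buffer rather than an unsafe user address): at most *size* - 1 bytes are copied, the destination is always NUL terminated unless *size* is zero, and the return value counts the bytes written including the trailing NUL.

```c
/* Userspace model of the bpf_probe_read_user_str() copy semantics. */
static long read_str_model(char *dst, unsigned int size, const char *src)
{
    unsigned int i;

    if (size == 0)
        return 0;
    for (i = 0; i < size - 1 && src[i] != '\0'; i++)
        dst[i] = src[i];   /* no NUL padding past the copied bytes */
    dst[i] = '\0';         /* destination is always NUL terminated */
    return i + 1;          /* bytes written, including the NUL */
}
```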
/*
* bpf_probe_read_kernel_str
*
* Copy a NUL terminated string from an unsafe kernel address *unsafe_ptr*
* to *dst*. Same semantics as with **bpf_probe_read_user_str**\ () apply.
*
* Returns
* On success, the strictly positive length of the string, including
* the trailing NUL character. On error, a negative value.
*/
static long (*bpf_probe_read_kernel_str)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) 115;
/*
* bpf_tcp_send_ack
*
* Send out a tcp-ack. *tp* is the in-kernel struct **tcp_sock**.
* *rcv_nxt* is the ack_seq to be sent out.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_tcp_send_ack)(void *tp, __u32 rcv_nxt) = (void *) 116;
/*
* bpf_send_signal_thread
*
* Send signal *sig* to the thread corresponding to the current task.
*
* Returns
* 0 on success or successfully queued.
*
* **-EBUSY** if work queue under nmi is full.
*
* **-EINVAL** if *sig* is invalid.
*
* **-EPERM** if no permission to send the *sig*.
*
* **-EAGAIN** if bpf program can try again.
*/
static long (*bpf_send_signal_thread)(__u32 sig) = (void *) 117;
/*
* bpf_jiffies64
*
* Obtain the 64-bit jiffies.
*
* Returns
* The 64-bit jiffies.
*/
static __u64 (*bpf_jiffies64)(void) = (void *) 118;
/*
* bpf_read_branch_records
*
* For an eBPF program attached to a perf event, retrieve the
* branch records (**struct perf_branch_entry**) associated to *ctx*
* and store it in the buffer pointed by *buf* up to size
* *size* bytes.
*
* Returns
* On success, number of bytes written to *buf*. On error, a
* negative value.
*
* The *flags* can be set to **BPF_F_GET_BRANCH_RECORDS_SIZE** to
* instead return the number of bytes required to store all the
* branch entries. If this flag is set, *buf* may be NULL.
*
* **-EINVAL** if arguments invalid or **size** not a multiple
* of **sizeof**\ (**struct perf_branch_entry**\ ).
*
* **-ENOENT** if architecture does not support branch records.
*/
static long (*bpf_read_branch_records)(struct bpf_perf_event_data *ctx, void *buf, __u32 size, __u64 flags) = (void *) 119;
/*
* bpf_get_ns_current_pid_tgid
*
* Returns 0 on success; the values for *pid* and *tgid* as seen
* from the current *namespace* are returned in *nsdata*.
*
* Returns
* 0 on success, or one of the following in case of failure:
*
* **-EINVAL** if the *dev* and *ino* supplied don't match the dev_t
* and inode number of the current task's nsfs, or if the *dev*
* conversion to dev_t lost high bits.
*
* **-ENOENT** if the pidns does not exist for the current task.
*/
static long (*bpf_get_ns_current_pid_tgid)(__u64 dev, __u64 ino, struct bpf_pidns_info *nsdata, __u32 size) = (void *) 120;
/*
* bpf_xdp_output
*
* Write raw *data* blob into a special BPF perf event held by
* *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. This perf
* event must have the following attributes: **PERF_SAMPLE_RAW**
* as **sample_type**, **PERF_TYPE_SOFTWARE** as **type**, and
* **PERF_COUNT_SW_BPF_OUTPUT** as **config**.
*
* The *flags* are used to indicate the index in *map* for which
* the value must be put, masked with **BPF_F_INDEX_MASK**.
* Alternatively, *flags* can be set to **BPF_F_CURRENT_CPU**
* to indicate that the index of the current CPU core should be
* used.
*
* The value to write, of *size*, is passed through eBPF stack and
* pointed by *data*.
*
* *ctx* is a pointer to in-kernel struct xdp_buff.
*
* This helper is similar to **bpf_perf_event_output**\ () but
* restricted to raw_tracepoint bpf programs.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_xdp_output)(void *ctx, void *map, __u64 flags, void *data, __u64 size) = (void *) 121;
/*
* bpf_get_netns_cookie
*
* Retrieve the cookie (generated by the kernel) of the network
* namespace the input *ctx* is associated with. The network
* namespace cookie remains stable for its lifetime and provides
* a global identifier that can be assumed unique. If *ctx* is
* NULL, then the helper returns the cookie for the initial
* network namespace. The cookie itself is very similar to that
* of **bpf_get_socket_cookie**\ () helper, but for network
* namespaces instead of sockets.
*
* Returns
* An 8-byte long opaque number.
*/
static __u64 (*bpf_get_netns_cookie)(void *ctx) = (void *) 122;
/*
* bpf_get_current_ancestor_cgroup_id
*
* Return id of cgroup v2 that is ancestor of the cgroup associated
* with the current task at the *ancestor_level*. The root cgroup
* is at *ancestor_level* zero and each step down the hierarchy
* increments the level. If *ancestor_level* == level of cgroup
* associated with the current task, then return value will be the
* same as that of **bpf_get_current_cgroup_id**\ ().
*
* The helper is useful to implement policies based on cgroups
* that are upper in hierarchy than immediate cgroup associated
* with the current task.
*
* The format of returned id and helper limitations are same as in
* **bpf_get_current_cgroup_id**\ ().
*
* Returns
* The id is returned or 0 in case the id could not be retrieved.
*/
static __u64 (*bpf_get_current_ancestor_cgroup_id)(int ancestor_level) = (void *) 123;
/*
* bpf_sk_assign
*
* Helper is overloaded depending on BPF program type. This
* description applies to **BPF_PROG_TYPE_SCHED_CLS** and
* **BPF_PROG_TYPE_SCHED_ACT** programs.
*
* Assign the *sk* to the *skb*. When combined with appropriate
* routing configuration to receive the packet towards the socket,
* will cause *skb* to be delivered to the specified socket.
* Subsequent redirection of *skb* via **bpf_redirect**\ (),
* **bpf_clone_redirect**\ () or other methods outside of BPF may
* interfere with successful delivery to the socket.
*
* This operation is only valid from TC ingress path.
*
* The *flags* argument must be zero.
*
* Returns
* 0 on success, or a negative error in case of failure:
*
* **-EINVAL** if specified *flags* are not supported.
*
* **-ENOENT** if the socket is unavailable for assignment.
*
* **-ENETUNREACH** if the socket is unreachable (wrong netns).
*
* **-EOPNOTSUPP** if the operation is not supported, for example
* a call from outside of TC ingress.
*
* **-ESOCKTNOSUPPORT** if the socket type is not supported
* (reuseport).
*/
static long (*bpf_sk_assign)(void *ctx, void *sk, __u64 flags) = (void *) 124;
/*
* bpf_ktime_get_boot_ns
*
* Return the time elapsed since system boot, in nanoseconds.
* Does include the time the system was suspended.
* See: **clock_gettime**\ (**CLOCK_BOOTTIME**)
*
* Returns
* Current *ktime*.
*/
static __u64 (*bpf_ktime_get_boot_ns)(void) = (void *) 125;
/*
* bpf_seq_printf
*
* **bpf_seq_printf**\ () uses seq_file **seq_printf**\ () to print
* out the format string.
* The *m* represents the seq_file. The *fmt* and *fmt_size* are for
* the format string itself. The *data* and *data_len* are format string
* arguments. The *data* are a **u64** array and corresponding format string
* values are stored in the array. For strings and pointers where pointees
* are accessed, only the pointer values are stored in the *data* array.
* The *data_len* is the size of *data* in bytes - must be a multiple of 8.
*
 * Formats **%s** and **%p{i,I}{4,6}** require reading kernel memory.
* Reading kernel memory may fail due to either invalid address or
* valid address but requiring a major memory fault. If reading kernel memory
* fails, the string for **%s** will be an empty string, and the ip
* address for **%p{i,I}{4,6}** will be 0. Not returning error to
* bpf program is consistent with what **bpf_trace_printk**\ () does for now.
*
* Returns
* 0 on success, or a negative error in case of failure:
*
* **-EBUSY** if per-CPU memory copy buffer is busy, can try again
* by returning 1 from bpf program.
*
* **-EINVAL** if arguments are invalid, or if *fmt* is invalid/unsupported.
*
* **-E2BIG** if *fmt* contains too many format specifiers.
*
* **-EOVERFLOW** if an overflow happened: The same object will be tried again.
*/
static long (*bpf_seq_printf)(struct seq_file *m, const char *fmt, __u32 fmt_size, const void *data, __u32 data_len) = (void *) 126;
/*
* bpf_seq_write
*
* **bpf_seq_write**\ () uses seq_file **seq_write**\ () to write the data.
* The *m* represents the seq_file. The *data* and *len* represent the
* data to write in bytes.
*
* Returns
* 0 on success, or a negative error in case of failure:
*
* **-EOVERFLOW** if an overflow happened: The same object will be tried again.
*/
static long (*bpf_seq_write)(struct seq_file *m, const void *data, __u32 len) = (void *) 127;
/*
* bpf_sk_cgroup_id
*
* Return the cgroup v2 id of the socket *sk*.
*
* *sk* must be a non-**NULL** pointer to a socket, e.g. one
* returned from **bpf_sk_lookup_xxx**\ (),
* **bpf_sk_fullsock**\ (), etc. The format of returned id is
* same as in **bpf_skb_cgroup_id**\ ().
*
* This helper is available only if the kernel was compiled with
* the **CONFIG_SOCK_CGROUP_DATA** configuration option.
*
* Returns
* The id is returned or 0 in case the id could not be retrieved.
*/
static __u64 (*bpf_sk_cgroup_id)(void *sk) = (void *) 128;
/*
* bpf_sk_ancestor_cgroup_id
*
* Return id of cgroup v2 that is ancestor of cgroup associated
* with the *sk* at the *ancestor_level*. The root cgroup is at
* *ancestor_level* zero and each step down the hierarchy
* increments the level. If *ancestor_level* == level of cgroup
* associated with *sk*, then return value will be same as that
* of **bpf_sk_cgroup_id**\ ().
*
* The helper is useful to implement policies based on cgroups
* that are upper in hierarchy than immediate cgroup associated
* with *sk*.
*
* The format of returned id and helper limitations are same as in
* **bpf_sk_cgroup_id**\ ().
*
* Returns
* The id is returned or 0 in case the id could not be retrieved.
*/
static __u64 (*bpf_sk_ancestor_cgroup_id)(void *sk, int ancestor_level) = (void *) 129;
/*
* bpf_ringbuf_output
*
* Copy *size* bytes from *data* into a ring buffer *ringbuf*.
* If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
* of new data availability is sent.
* If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
* of new data availability is sent unconditionally.
* If **0** is specified in *flags*, an adaptive notification
* of new data availability is sent.
*
* An adaptive notification is a notification sent whenever the user-space
* process has caught up and consumed all available payloads. In case the user-space
* process is still processing a previous payload, then no notification is needed
* as it will process the newly added payload automatically.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_ringbuf_output)(void *ringbuf, void *data, __u64 size, __u64 flags) = (void *) 130;
/*
* bpf_ringbuf_reserve
*
* Reserve *size* bytes of payload in a ring buffer *ringbuf*.
* *flags* must be 0.
*
* Returns
* Valid pointer with *size* bytes of memory available; NULL,
* otherwise.
*/
static void *(*bpf_ringbuf_reserve)(void *ringbuf, __u64 size, __u64 flags) = (void *) 131;
/*
* bpf_ringbuf_submit
*
* Submit reserved ring buffer sample, pointed to by *data*.
* If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
* of new data availability is sent.
* If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
* of new data availability is sent unconditionally.
* If **0** is specified in *flags*, an adaptive notification
* of new data availability is sent.
*
* See 'bpf_ringbuf_output()' for the definition of adaptive notification.
*
* Returns
* Nothing. Always succeeds.
*/
static void (*bpf_ringbuf_submit)(void *data, __u64 flags) = (void *) 132;
/*
* bpf_ringbuf_discard
*
* Discard reserved ring buffer sample, pointed to by *data*.
* If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
* of new data availability is sent.
* If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
* of new data availability is sent unconditionally.
* If **0** is specified in *flags*, an adaptive notification
* of new data availability is sent.
*
* See 'bpf_ringbuf_output()' for the definition of adaptive notification.
*
* Returns
* Nothing. Always succeeds.
*/
static void (*bpf_ringbuf_discard)(void *data, __u64 flags) = (void *) 133;
/*
* bpf_ringbuf_query
*
* Query various characteristics of provided ring buffer. What
 * exactly is queried is determined by *flags*:
*
* * **BPF_RB_AVAIL_DATA**: Amount of data not yet consumed.
* * **BPF_RB_RING_SIZE**: The size of ring buffer.
* * **BPF_RB_CONS_POS**: Consumer position (can wrap around).
* * **BPF_RB_PROD_POS**: Producer(s) position (can wrap around).
*
* Data returned is just a momentary snapshot of actual values
* and could be inaccurate, so this facility should be used to
 * power heuristics and for reporting, not to make 100% correct
 * calculations.
*
* Returns
* Requested value, or 0, if *flags* are not recognized.
*/
static __u64 (*bpf_ringbuf_query)(void *ringbuf, __u64 flags) = (void *) 134;
/*
* bpf_csum_level
*
* Change the skbs checksum level by one layer up or down, or
* reset it entirely to none in order to have the stack perform
* checksum validation. The level is applicable to the following
* protocols: TCP, UDP, GRE, SCTP, FCOE. For example, a decap of
* | ETH | IP | UDP | GUE | IP | TCP | into | ETH | IP | TCP |
* through **bpf_skb_adjust_room**\ () helper with passing in
* **BPF_F_ADJ_ROOM_NO_CSUM_RESET** flag would require one call
* to **bpf_csum_level**\ () with **BPF_CSUM_LEVEL_DEC** since
* the UDP header is removed. Similarly, an encap of the latter
* into the former could be accompanied by a helper call to
* **bpf_csum_level**\ () with **BPF_CSUM_LEVEL_INC** if the
* skb is still intended to be processed in higher layers of the
* stack instead of just egressing at tc.
*
 * The following level operations are supported at this time:
*
* * **BPF_CSUM_LEVEL_INC**: Increases skb->csum_level for skbs
* with CHECKSUM_UNNECESSARY.
* * **BPF_CSUM_LEVEL_DEC**: Decreases skb->csum_level for skbs
* with CHECKSUM_UNNECESSARY.
* * **BPF_CSUM_LEVEL_RESET**: Resets skb->csum_level to 0 and
* sets CHECKSUM_NONE to force checksum validation by the stack.
* * **BPF_CSUM_LEVEL_QUERY**: No-op, returns the current
* skb->csum_level.
*
* Returns
* 0 on success, or a negative error in case of failure. In the
* case of **BPF_CSUM_LEVEL_QUERY**, the current skb->csum_level
* is returned or the error code -EACCES in case the skb is not
* subject to CHECKSUM_UNNECESSARY.
*/
static long (*bpf_csum_level)(struct __sk_buff *skb, __u64 level) = (void *) 135;
/*
* bpf_skc_to_tcp6_sock
*
* Dynamically cast a *sk* pointer to a *tcp6_sock* pointer.
*
* Returns
* *sk* if casting is valid, or **NULL** otherwise.
*/
static struct tcp6_sock *(*bpf_skc_to_tcp6_sock)(void *sk) = (void *) 136;
/*
* bpf_skc_to_tcp_sock
*
* Dynamically cast a *sk* pointer to a *tcp_sock* pointer.
*
* Returns
* *sk* if casting is valid, or **NULL** otherwise.
*/
static struct tcp_sock *(*bpf_skc_to_tcp_sock)(void *sk) = (void *) 137;
/*
* bpf_skc_to_tcp_timewait_sock
*
* Dynamically cast a *sk* pointer to a *tcp_timewait_sock* pointer.
*
* Returns
* *sk* if casting is valid, or **NULL** otherwise.
*/
static struct tcp_timewait_sock *(*bpf_skc_to_tcp_timewait_sock)(void *sk) = (void *) 138;
/*
* bpf_skc_to_tcp_request_sock
*
* Dynamically cast a *sk* pointer to a *tcp_request_sock* pointer.
*
* Returns
* *sk* if casting is valid, or **NULL** otherwise.
*/
static struct tcp_request_sock *(*bpf_skc_to_tcp_request_sock)(void *sk) = (void *) 139;
/*
* bpf_skc_to_udp6_sock
*
* Dynamically cast a *sk* pointer to a *udp6_sock* pointer.
*
* Returns
* *sk* if casting is valid, or **NULL** otherwise.
*/
static struct udp6_sock *(*bpf_skc_to_udp6_sock)(void *sk) = (void *) 140;
/*
* bpf_get_task_stack
*
* Return a user or a kernel stack in bpf program provided buffer.
* To achieve this, the helper needs *task*, which is a valid
* pointer to **struct task_struct**. To store the stacktrace, the
* bpf program provides *buf* with a nonnegative *size*.
*
* The last argument, *flags*, holds the number of stack frames to
* skip (from 0 to 255), masked with
* **BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
* the following flags:
*
* **BPF_F_USER_STACK**
* Collect a user space stack instead of a kernel stack.
* **BPF_F_USER_BUILD_ID**
* Collect buildid+offset instead of ips for user stack,
* only valid if **BPF_F_USER_STACK** is also specified.
*
* **bpf_get_task_stack**\ () can collect up to
 * **PERF_MAX_STACK_DEPTH** both kernel and user frames, subject
 * to a sufficiently large buffer size. Note that
* this limit can be controlled with the **sysctl** program, and
* that it should be manually increased in order to profile long
* user stacks (such as stacks for Java programs). To do so, use:
*
* ::
*
 * # sysctl kernel.perf_event_max_stack=<new value>
*
* Returns
* The non-negative copied *buf* length equal to or less than
* *size* on success, or a negative error in case of failure.
*/
static long (*bpf_get_task_stack)(struct task_struct *task, void *buf, __u32 size, __u64 flags) = (void *) 141;
/*
* bpf_load_hdr_opt
*
* Load header option. Support reading a particular TCP header
* option for bpf program (**BPF_PROG_TYPE_SOCK_OPS**).
*
* If *flags* is 0, it will search the option from the
* *skops*\ **->skb_data**. The comment in **struct bpf_sock_ops**
* has details on what skb_data contains under different
* *skops*\ **->op**.
*
* The first byte of the *searchby_res* specifies the
* kind that it wants to search.
*
* If the searching kind is an experimental kind
 * (i.e. 253 or 254 according to RFC6994), it also
* needs to specify the "magic" which is either
* 2 bytes or 4 bytes. It then also needs to
* specify the size of the magic by using
* the 2nd byte which is "kind-length" of a TCP
* header option and the "kind-length" also
* includes the first 2 bytes "kind" and "kind-length"
* itself as a normal TCP header option also does.
*
* For example, to search experimental kind 254 with
* 2 byte magic 0xeB9F, the searchby_res should be
* [ 254, 4, 0xeB, 0x9F, 0, 0, .... 0 ].
*
* To search for the standard window scale option (3),
* the *searchby_res* should be [ 3, 0, 0, .... 0 ].
 * Note, kind-length must be 0 for a regular option.
*
* Searching for No-Op (0) and End-of-Option-List (1) are
* not supported.
*
* *len* must be at least 2 bytes which is the minimal size
* of a header option.
*
* Supported flags:
*
* * **BPF_LOAD_HDR_OPT_TCP_SYN** to search from the
* saved_syn packet or the just-received syn packet.
*
*
* Returns
* > 0 when found, the header option is copied to *searchby_res*.
* The return value is the total length copied. On failure, a
* negative error code is returned:
*
* **-EINVAL** if a parameter is invalid.
*
* **-ENOMSG** if the option is not found.
*
* **-ENOENT** if no syn packet is available when
* **BPF_LOAD_HDR_OPT_TCP_SYN** is used.
*
* **-ENOSPC** if there is not enough space. Only *len* number of
* bytes are copied.
*
* **-EFAULT** on failure to parse the header options in the
* packet.
*
* **-EPERM** if the helper cannot be used under the current
* *skops*\ **->op**.
*/
static long (*bpf_load_hdr_opt)(struct bpf_sock_ops *skops, void *searchby_res, __u32 len, __u64 flags) = (void *) 142;
/*
* bpf_store_hdr_opt
*
* Store header option. The data will be copied
* from buffer *from* with length *len* to the TCP header.
*
* The buffer *from* should have the whole option that
* includes the kind, kind-length, and the actual
* option data. The *len* must be at least kind-length
* long. The kind-length does not have to be 4 byte
* aligned. The kernel will take care of the padding
* and setting the 4 bytes aligned value to th->doff.
*
* This helper will check for duplicated option
* by searching the same option in the outgoing skb.
*
* This helper can only be called during
* **BPF_SOCK_OPS_WRITE_HDR_OPT_CB**.
*
*
* Returns
* 0 on success, or negative error in case of failure:
*
 * **-EINVAL** if a parameter is invalid.
 *
 * **-ENOSPC** if there is not enough space in the header.
 * Nothing has been written.
*
* **-EEXIST** if the option already exists.
*
 * **-EFAULT** on failure to parse the existing header options.
*
* **-EPERM** if the helper cannot be used under the current
* *skops*\ **->op**.
*/
static long (*bpf_store_hdr_opt)(struct bpf_sock_ops *skops, const void *from, __u32 len, __u64 flags) = (void *) 143;
/*
* bpf_reserve_hdr_opt
*
* Reserve *len* bytes for the bpf header option. The
* space will be used by **bpf_store_hdr_opt**\ () later in
* **BPF_SOCK_OPS_WRITE_HDR_OPT_CB**.
*
* If **bpf_reserve_hdr_opt**\ () is called multiple times,
* the total number of bytes will be reserved.
*
* This helper can only be called during
* **BPF_SOCK_OPS_HDR_OPT_LEN_CB**.
*
*
* Returns
* 0 on success, or negative error in case of failure:
*
* **-EINVAL** if a parameter is invalid.
*
* **-ENOSPC** if there is not enough space in the header.
*
* **-EPERM** if the helper cannot be used under the current
* *skops*\ **->op**.
*/
static long (*bpf_reserve_hdr_opt)(struct bpf_sock_ops *skops, __u32 len, __u64 flags) = (void *) 144;
/*
* bpf_inode_storage_get
*
* Get a bpf_local_storage from an *inode*.
*
* Logically, it could be thought of as getting the value from
* a *map* with *inode* as the **key**. From this
* perspective, the usage is not much different from
* **bpf_map_lookup_elem**\ (*map*, **&**\ *inode*) except this
* helper enforces the key must be an inode and the map must also
* be a **BPF_MAP_TYPE_INODE_STORAGE**.
*
* Underneath, the value is stored locally at *inode* instead of
* the *map*. The *map* is used as the bpf-local-storage
* "type". The bpf-local-storage "type" (i.e. the *map*) is
* searched against all bpf_local_storage residing at *inode*.
*
* An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
* used such that a new bpf_local_storage will be
* created if one does not exist. *value* can be used
* together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
* the initial value of a bpf_local_storage. If *value* is
* **NULL**, the new bpf_local_storage will be zero initialized.
*
* Returns
* A bpf_local_storage pointer is returned on success.
*
* **NULL** if not found or there was an error in adding
* a new bpf_local_storage.
*/
static void *(*bpf_inode_storage_get)(void *map, void *inode, void *value, __u64 flags) = (void *) 145;
/*
* bpf_inode_storage_delete
*
* Delete a bpf_local_storage from an *inode*.
*
* Returns
* 0 on success.
*
* **-ENOENT** if the bpf_local_storage cannot be found.
*/
static int (*bpf_inode_storage_delete)(void *map, void *inode) = (void *) 146;
/*
* bpf_d_path
*
* Return full path for given **struct path** object, which
* needs to be the kernel BTF *path* object. The path is
* returned in the provided buffer *buf* of size *sz* and
* is zero terminated.
*
*
* Returns
* On success, the strictly positive length of the string,
* including the trailing NUL character. On error, a negative
* value.
*/
static long (*bpf_d_path)(struct path *path, char *buf, __u32 sz) = (void *) 147;
/*
* bpf_copy_from_user
*
* Read *size* bytes from user space address *user_ptr* and store
* the data in *dst*. This is a wrapper of **copy_from_user**\ ().
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_copy_from_user)(void *dst, __u32 size, const void *user_ptr) = (void *) 148;
/*
* bpf_snprintf_btf
*
* Use BTF to store a string representation of *ptr*->ptr in *str*,
* using *ptr*->type_id. This value should specify the type
* that *ptr*->ptr points to. LLVM __builtin_btf_type_id(type, 1)
* can be used to look up vmlinux BTF type ids. Traversing the
* data structure using BTF, the type information and values are
* stored in the first *str_size* - 1 bytes of *str*. Safe copy of
* the pointer data is carried out to avoid kernel crashes during
* operation. Smaller types can use string space on the stack;
* larger programs can use map data to store the string
* representation.
*
* The string can be subsequently shared with userspace via
* bpf_perf_event_output() or ring buffer interfaces.
* bpf_trace_printk() is to be avoided as it places too small
* a limit on string size to be useful.
*
* *flags* is a combination of
*
* **BTF_F_COMPACT**
* no formatting around type information
* **BTF_F_NONAME**
* no struct/union member names/types
* **BTF_F_PTR_RAW**
* show raw (unobfuscated) pointer values;
* equivalent to printk specifier %px.
* **BTF_F_ZERO**
* show zero-valued struct/union members; they
* are not displayed by default
*
*
* Returns
* The number of bytes that were written (or would have been
* written if output had to be truncated due to string size),
* or a negative error in cases of failure.
*/
static long (*bpf_snprintf_btf)(char *str, __u32 str_size, struct btf_ptr *ptr, __u32 btf_ptr_size, __u64 flags) = (void *) 149;
/*
* bpf_seq_printf_btf
*
* Use BTF to write to seq_write a string representation of
* *ptr*->ptr, using *ptr*->type_id as per bpf_snprintf_btf().
* *flags* are identical to those used for bpf_snprintf_btf.
*
* Returns
* 0 on success or a negative error in case of failure.
*/
static long (*bpf_seq_printf_btf)(struct seq_file *m, struct btf_ptr *ptr, __u32 ptr_size, __u64 flags) = (void *) 150;
/*
* bpf_skb_cgroup_classid
*
* See **bpf_get_cgroup_classid**\ () for the main description.
* This helper differs from **bpf_get_cgroup_classid**\ () in that
* the cgroup v1 net_cls class is retrieved only from the *skb*'s
* associated socket instead of the current process.
*
* Returns
* The id is returned or 0 in case the id could not be retrieved.
*/
static __u64 (*bpf_skb_cgroup_classid)(struct __sk_buff *skb) = (void *) 151;
/*
* bpf_redirect_neigh
*
* Redirect the packet to another net device of index *ifindex*
* and fill in L2 addresses from neighboring subsystem. This helper
* is somewhat similar to **bpf_redirect**\ (), except that it
* populates L2 addresses as well, meaning, internally, the helper
* relies on the neighbor lookup for the L2 address of the nexthop.
*
* The helper will perform a FIB lookup based on the skb's
* networking header to get the address of the next hop, unless
* this is supplied by the caller in the *params* argument. The
* *plen* argument indicates the len of *params* and should be set
* to 0 if *params* is NULL.
*
* The *flags* argument is reserved and must be 0. The helper is
* currently only supported for tc BPF program types, and enabled
* for IPv4 and IPv6 protocols.
*
* Returns
* The helper returns **TC_ACT_REDIRECT** on success or
* **TC_ACT_SHOT** on error.
*/
static long (*bpf_redirect_neigh)(__u32 ifindex, struct bpf_redir_neigh *params, int plen, __u64 flags) = (void *) 152;
/*
* bpf_per_cpu_ptr
*
* Take a pointer to a percpu ksym, *percpu_ptr*, and return a
* pointer to the percpu kernel variable on *cpu*. A ksym is an
* extern variable decorated with '__ksym'. For ksym, there is a
* global var (either static or global) defined of the same name
* in the kernel. The ksym is percpu if the global var is percpu.
* The returned pointer points to the global percpu var on *cpu*.
*
* bpf_per_cpu_ptr() has the same semantic as per_cpu_ptr() in the
* kernel, except that bpf_per_cpu_ptr() may return NULL. This
* happens if *cpu* is larger than nr_cpu_ids. The caller of
* bpf_per_cpu_ptr() must check the returned value.
*
* Returns
* A pointer pointing to the kernel percpu variable on *cpu*, or
* NULL, if *cpu* is invalid.
*/
static void *(*bpf_per_cpu_ptr)(const void *percpu_ptr, __u32 cpu) = (void *) 153;
/*
* bpf_this_cpu_ptr
*
* Take a pointer to a percpu ksym, *percpu_ptr*, and return a
* pointer to the percpu kernel variable on this cpu. See the
* description of 'ksym' in **bpf_per_cpu_ptr**\ ().
*
* bpf_this_cpu_ptr() has the same semantic as this_cpu_ptr() in
* the kernel. Different from **bpf_per_cpu_ptr**\ (), it would
* never return NULL.
*
* Returns
* A pointer pointing to the kernel percpu variable on this cpu.
*/
static void *(*bpf_this_cpu_ptr)(const void *percpu_ptr) = (void *) 154;
/*
* bpf_redirect_peer
*
* Redirect the packet to another net device of index *ifindex*.
* This helper is somewhat similar to **bpf_redirect**\ (), except
* that the redirection happens to the *ifindex*' peer device and
* the netns switch takes place from ingress to ingress without
* going through the CPU's backlog queue.
*
* The *flags* argument is reserved and must be 0. The helper is
* currently only supported for tc BPF program types at the ingress
* hook and for veth device types. The peer device must reside in a
* different network namespace.
*
* Returns
* The helper returns **TC_ACT_REDIRECT** on success or
* **TC_ACT_SHOT** on error.
*/
static long (*bpf_redirect_peer)(__u32 ifindex, __u64 flags) = (void *) 155;
/*
* bpf_task_storage_get
*
* Get a bpf_local_storage from the *task*.
*
* Logically, it could be thought of as getting the value from
* a *map* with *task* as the **key**. From this
* perspective, the usage is not much different from
* **bpf_map_lookup_elem**\ (*map*, **&**\ *task*) except this
 * helper enforces that the key must be a task_struct and the map must also
* be a **BPF_MAP_TYPE_TASK_STORAGE**.
*
* Underneath, the value is stored locally at *task* instead of
* the *map*. The *map* is used as the bpf-local-storage
* "type". The bpf-local-storage "type" (i.e. the *map*) is
* searched against all bpf_local_storage residing at *task*.
*
* An optional *flags* (**BPF_LOCAL_STORAGE_GET_F_CREATE**) can be
* used such that a new bpf_local_storage will be
* created if one does not exist. *value* can be used
* together with **BPF_LOCAL_STORAGE_GET_F_CREATE** to specify
* the initial value of a bpf_local_storage. If *value* is
* **NULL**, the new bpf_local_storage will be zero initialized.
*
* Returns
* A bpf_local_storage pointer is returned on success.
*
* **NULL** if not found or there was an error in adding
* a new bpf_local_storage.
*/
static void *(*bpf_task_storage_get)(void *map, struct task_struct *task, void *value, __u64 flags) = (void *) 156;
/*
* bpf_task_storage_delete
*
* Delete a bpf_local_storage from a *task*.
*
* Returns
* 0 on success.
*
* **-ENOENT** if the bpf_local_storage cannot be found.
*/
static long (*bpf_task_storage_delete)(void *map, struct task_struct *task) = (void *) 157;
/*
* bpf_get_current_task_btf
*
* Return a BTF pointer to the "current" task.
* This pointer can also be used in helpers that accept an
* *ARG_PTR_TO_BTF_ID* of type *task_struct*.
*
* Returns
* Pointer to the current task.
*/
static struct task_struct *(*bpf_get_current_task_btf)(void) = (void *) 158;
/*
* bpf_bprm_opts_set
*
* Set or clear certain options on *bprm*:
*
* **BPF_F_BPRM_SECUREEXEC** Set the secureexec bit
* which sets the **AT_SECURE** auxv for glibc. The bit
* is cleared if the flag is not specified.
*
* Returns
* **-EINVAL** if invalid *flags* are passed, zero otherwise.
*/
static long (*bpf_bprm_opts_set)(struct linux_binprm *bprm, __u64 flags) = (void *) 159;
/*
* bpf_ktime_get_coarse_ns
*
* Return a coarse-grained version of the time elapsed since
* system boot, in nanoseconds. Does not include time the system
* was suspended.
*
* See: **clock_gettime**\ (**CLOCK_MONOTONIC_COARSE**)
*
* Returns
* Current *ktime*.
*/
static __u64 (*bpf_ktime_get_coarse_ns)(void) = (void *) 160;
/*
* bpf_ima_inode_hash
*
 * Returns the stored IMA hash of the *inode* (if it's available).
 * If the hash is larger than *size*, then only *size*
 * bytes will be copied to *dst*.
*
* Returns
* The **hash_algo** is returned on success,
 * **-EOPNOTSUPP** if IMA is disabled or **-EINVAL** if
* invalid arguments are passed.
*/
static long (*bpf_ima_inode_hash)(struct inode *inode, void *dst, __u32 size) = (void *) 161;
/*
* bpf_sock_from_file
*
* If the given file represents a socket, returns the associated
* socket.
*
* Returns
* A pointer to a struct socket on success or NULL if the file is
* not a socket.
*/
static struct socket *(*bpf_sock_from_file)(struct file *file) = (void *) 162;
/*
* bpf_check_mtu
*
* Check packet size against exceeding MTU of net device (based
* on *ifindex*). This helper will likely be used in combination
* with helpers that adjust/change the packet size.
*
 * The argument *len_diff* can be used for querying with a planned
 * size change. This allows checking the MTU prior to changing the
 * packet ctx. Providing a *len_diff* adjustment that is larger than
 * the actual packet size (resulting in a negative packet size) will
 * in principle not exceed the MTU, which is why it is not considered
 * a failure. Other BPF helpers are needed for performing the planned
 * size change, which is why the responsibility for catching a
 * negative packet size belongs in those helpers.
*
* Specifying *ifindex* zero means the MTU check is performed
* against the current net device. This is practical if this isn't
* used prior to redirect.
*
 * On input, *mtu_len* must be a valid pointer, else the verifier
 * will reject the BPF program. If the value *mtu_len* is initialized
 * to zero then the ctx packet size is used. When the value *mtu_len*
 * is provided as input, it specifies the L3 length that the MTU
 * check is done against. Remember that XDP and TC lengths operate
 * at L2, but this value is L3, as it correlates to the MTU and
 * IP-header tot_len values, which are L3 (similar behavior to
 * bpf_fib_lookup).
*
* The Linux kernel route table can configure MTUs on a more
* specific per route level, which is not provided by this helper.
* For route level MTU checks use the **bpf_fib_lookup**\ ()
* helper.
*
* *ctx* is either **struct xdp_md** for XDP programs or
* **struct sk_buff** for tc cls_act programs.
*
* The *flags* argument can be a combination of one or more of the
* following values:
*
* **BPF_MTU_CHK_SEGS**
 * This flag only works for *ctx* **struct sk_buff**.
 * If the packet context contains extra packet segment buffers
 * (often known as a GSO skb), then the MTU check is harder to
 * perform at this point, because in the transmit path it is
 * possible for the skb packet to get re-segmented
 * (depending on net device features). This could still be
 * an MTU violation, so this flag enables performing the MTU
 * check against segments, with a different violation
 * return code to tell it apart. The check cannot use len_diff.
*
 * On return, the *mtu_len* pointer contains the MTU value of the net
 * device. Remember that the net device's configured MTU is the L3
 * size, which is what is returned here, while XDP and TC lengths
 * operate at L2. The helper takes this into account for you, but
 * remember it when using the MTU value in your BPF code.
*
*
* Returns
* * 0 on success, and populate MTU value in *mtu_len* pointer.
*
* * < 0 if any input argument is invalid (*mtu_len* not updated)
*
 * MTU violations return positive values, but also populate the MTU
 * value in the *mtu_len* pointer, as this can be needed for
 * implementing PMTU handling:
*
* * **BPF_MTU_CHK_RET_FRAG_NEEDED**
* * **BPF_MTU_CHK_RET_SEGS_TOOBIG**
*/
static long (*bpf_check_mtu)(void *ctx, __u32 ifindex, __u32 *mtu_len, __s32 len_diff, __u64 flags) = (void *) 163;
/*
* bpf_for_each_map_elem
*
* For each element in **map**, call **callback_fn** function with
* **map**, **callback_ctx** and other map-specific parameters.
* The **callback_fn** should be a static function and
* the **callback_ctx** should be a pointer to the stack.
* The **flags** is used to control certain aspects of the helper.
* Currently, the **flags** must be 0.
*
* The following are a list of supported map types and their
* respective expected callback signatures:
*
* BPF_MAP_TYPE_HASH, BPF_MAP_TYPE_PERCPU_HASH,
* BPF_MAP_TYPE_LRU_HASH, BPF_MAP_TYPE_LRU_PERCPU_HASH,
* BPF_MAP_TYPE_ARRAY, BPF_MAP_TYPE_PERCPU_ARRAY
*
* long (\*callback_fn)(struct bpf_map \*map, const void \*key, void \*value, void \*ctx);
*
* For per_cpu maps, the map_value is the value on the cpu where the
* bpf_prog is running.
*
* If **callback_fn** return 0, the helper will continue to the next
* element. If return value is 1, the helper will skip the rest of
* elements and return. Other return values are not used now.
*
*
* Returns
* The number of traversed map elements for success, **-EINVAL** for
* invalid **flags**.
*/
static long (*bpf_for_each_map_elem)(void *map, void *callback_fn, void *callback_ctx, __u64 flags) = (void *) 164;
/*
* bpf_snprintf
*
* Outputs a string into the **str** buffer of size **str_size**
* based on a format string stored in a read-only map pointed by
* **fmt**.
*
* Each format specifier in **fmt** corresponds to one u64 element
* in the **data** array. For strings and pointers where pointees
* are accessed, only the pointer values are stored in the *data*
* array. The *data_len* is the size of *data* in bytes - must be
* a multiple of 8.
*
 * Formats **%s** and **%p{i,I}{4,6}** require reading kernel
* memory. Reading kernel memory may fail due to either invalid
* address or valid address but requiring a major memory fault. If
* reading kernel memory fails, the string for **%s** will be an
* empty string, and the ip address for **%p{i,I}{4,6}** will be 0.
* Not returning error to bpf program is consistent with what
* **bpf_trace_printk**\ () does for now.
*
*
* Returns
* The strictly positive length of the formatted string, including
* the trailing zero character. If the return value is greater than
* **str_size**, **str** contains a truncated string, guaranteed to
* be zero-terminated except when **str_size** is 0.
*
* Or **-EBUSY** if the per-CPU memory copy buffer is busy.
*/
static long (*bpf_snprintf)(char *str, __u32 str_size, const char *fmt, __u64 *data, __u32 data_len) = (void *) 165;
/*
* bpf_sys_bpf
*
* Execute bpf syscall with given arguments.
*
* Returns
* A syscall result.
*/
static long (*bpf_sys_bpf)(__u32 cmd, void *attr, __u32 attr_size) = (void *) 166;
/*
* bpf_btf_find_by_name_kind
*
* Find BTF type with given name and kind in vmlinux BTF or in module's BTFs.
*
* Returns
* Returns btf_id and btf_obj_fd in lower and upper 32 bits.
*/
static long (*bpf_btf_find_by_name_kind)(char *name, int name_sz, __u32 kind, int flags) = (void *) 167;
/*
* bpf_sys_close
*
* Execute close syscall for given FD.
*
* Returns
* A syscall result.
*/
static long (*bpf_sys_close)(__u32 fd) = (void *) 168;
/*
* bpf_timer_init
*
* Initialize the timer.
* First 4 bits of *flags* specify clockid.
* Only CLOCK_MONOTONIC, CLOCK_REALTIME, CLOCK_BOOTTIME are allowed.
* All other bits of *flags* are reserved.
* The verifier will reject the program if *timer* is not from
* the same *map*.
*
* Returns
* 0 on success.
* **-EBUSY** if *timer* is already initialized.
* **-EINVAL** if invalid *flags* are passed.
* **-EPERM** if *timer* is in a map that doesn't have any user references.
* The user space should either hold a file descriptor to a map with timers
* or pin such map in bpffs. When map is unpinned or file descriptor is
* closed all timers in the map will be cancelled and freed.
*/
static long (*bpf_timer_init)(struct bpf_timer *timer, void *map, __u64 flags) = (void *) 169;
/*
* bpf_timer_set_callback
*
* Configure the timer to call the static function *callback_fn*.
*
* Returns
* 0 on success.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier.
* **-EPERM** if *timer* is in a map that doesn't have any user references.
* The user space should either hold a file descriptor to a map with timers
* or pin such map in bpffs. When map is unpinned or file descriptor is
* closed all timers in the map will be cancelled and freed.
*/
static long (*bpf_timer_set_callback)(struct bpf_timer *timer, void *callback_fn) = (void *) 170;
/*
* bpf_timer_start
*
* Set timer expiration N nanoseconds from the current time. The
* configured callback will be invoked in soft irq context on some cpu
* and will not repeat unless another bpf_timer_start() is made.
* In that case the next invocation can migrate to a different cpu.
* Since struct bpf_timer is a field inside a map element, the map
* owns the timer. bpf_timer_set_callback() will increment the refcnt
* of the BPF program to make sure that the callback_fn code stays valid.
* When user space reference to a map reaches zero all timers
* in a map are cancelled and corresponding program's refcnts are
* decremented. This is done to make sure that Ctrl-C of a user
* process doesn't leave any timers running. If map is pinned in
* bpffs the callback_fn can re-arm itself indefinitely.
* bpf_map_update/delete_elem() helpers and user space sys_bpf commands
* cancel and free the timer in the given map element.
* The map can contain timers that invoke callback_fn-s from different
* programs. The same callback_fn can serve different timers from
* different maps if key/value layout matches across maps.
* Every bpf_timer_set_callback() can have different callback_fn.
*
*
* Returns
* 0 on success.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier
* or invalid *flags* are passed.
*/
static long (*bpf_timer_start)(struct bpf_timer *timer, __u64 nsecs, __u64 flags) = (void *) 171;
/*
* bpf_timer_cancel
*
* Cancel the timer and wait for callback_fn to finish if it was running.
*
* Returns
* 0 if the timer was not active.
* 1 if the timer was active.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier.
* **-EDEADLK** if callback_fn tried to call bpf_timer_cancel() on its
* own timer which would have led to a deadlock otherwise.
*/
static long (*bpf_timer_cancel)(struct bpf_timer *timer) = (void *) 172;
/*
* bpf_get_func_ip
*
* Get address of the traced function (for tracing and kprobe programs).
*
* Returns
* Address of the traced function.
*/
static __u64 (*bpf_get_func_ip)(void *ctx) = (void *) 173;
/*
* bpf_get_attach_cookie
*
* Get bpf_cookie value provided (optionally) during the program
* attachment. It might be different for each individual
* attachment, even if BPF program itself is the same.
* Expects BPF program context *ctx* as a first argument.
*
* Supported for the following program types:
* - kprobe/uprobe;
* - tracepoint;
* - perf_event.
*
* Returns
* Value specified by user at BPF link creation/attachment time
* or 0, if it was not specified.
*/
static __u64 (*bpf_get_attach_cookie)(void *ctx) = (void *) 174;
/*
* bpf_task_pt_regs
*
* Get the struct pt_regs associated with **task**.
*
* Returns
* A pointer to struct pt_regs.
*/
static long (*bpf_task_pt_regs)(struct task_struct *task) = (void *) 175;
/*
* bpf_get_branch_snapshot
*
* Get branch trace from hardware engines like Intel LBR. The
* hardware engine is stopped shortly after the helper is
* called. Therefore, the user needs to filter branch entries
* based on the actual use case. To capture branch trace
* before the trigger point of the BPF program, the helper
* should be called at the beginning of the BPF program.
*
* The data is stored as struct perf_branch_entry into output
* buffer *entries*. *size* is the size of *entries* in bytes.
* *flags* is reserved for now and must be zero.
*
*
* Returns
* On success, the number of bytes written to *entries*. On error, a
* negative value.
*
* **-EINVAL** if *flags* is not zero.
*
* **-ENOENT** if architecture does not support branch records.
*/
static long (*bpf_get_branch_snapshot)(void *entries, __u32 size, __u64 flags) = (void *) 176;
/*
* bpf_trace_vprintk
*
* Behaves like **bpf_trace_printk**\ () helper, but takes an array of u64
* to format and can handle more format args as a result.
*
* Arguments are to be used as in **bpf_seq_printf**\ () helper.
*
* Returns
* The number of bytes written to the buffer, or a negative error
* in case of failure.
*/
static long (*bpf_trace_vprintk)(const char *fmt, __u32 fmt_size, const void *data, __u32 data_len) = (void *) 177;
/*
* bpf_skc_to_unix_sock
*
* Dynamically cast a *sk* pointer to a *unix_sock* pointer.
*
* Returns
* *sk* if casting is valid, or **NULL** otherwise.
*/
static struct unix_sock *(*bpf_skc_to_unix_sock)(void *sk) = (void *) 178;
/*
* bpf_kallsyms_lookup_name
*
* Get the address of a kernel symbol, returned in *res*. *res* is
* set to 0 if the symbol is not found.
*
* Returns
* On success, zero. On error, a negative value.
*
* **-EINVAL** if *flags* is not zero.
*
* **-EINVAL** if string *name* is not the same size as *name_sz*.
*
* **-ENOENT** if symbol is not found.
*
* **-EPERM** if caller does not have permission to obtain kernel address.
*/
static long (*bpf_kallsyms_lookup_name)(const char *name, int name_sz, int flags, __u64 *res) = (void *) 179;
/*
* bpf_find_vma
*
* Find vma of *task* that contains *addr*, call *callback_fn*
* function with *task*, *vma*, and *callback_ctx*.
* The *callback_fn* should be a static function and
* the *callback_ctx* should be a pointer to the stack.
* The *flags* is used to control certain aspects of the helper.
* Currently, the *flags* must be 0.
*
* The expected callback signature is
*
* long (\*callback_fn)(struct task_struct \*task, struct vm_area_struct \*vma, void \*callback_ctx);
*
*
* Returns
* 0 on success.
* **-ENOENT** if *task->mm* is NULL, or no vma contains *addr*.
* **-EBUSY** if failed to try lock mmap_lock.
* **-EINVAL** for invalid **flags**.
*/
static long (*bpf_find_vma)(struct task_struct *task, __u64 addr, void *callback_fn, void *callback_ctx, __u64 flags) = (void *) 180;
/*
* bpf_loop
*
* For **nr_loops**, call **callback_fn** function
* with **callback_ctx** as the context parameter.
* The **callback_fn** should be a static function and
* the **callback_ctx** should be a pointer to the stack.
* **flags** is used to control certain aspects of the helper and
* must currently be 0. **nr_loops** is currently limited to
* 1 << 23 (~8 million) loops.
*
* long (\*callback_fn)(u32 index, void \*ctx);
*
* where **index** is the current index in the loop. The index
* is zero-indexed.
*
* If **callback_fn** returns 0, the helper will continue to the next
* loop. If return value is 1, the helper will skip the rest of
* the loops and return. Other return values are not used now,
* and will be rejected by the verifier.
*
*
* Returns
* The number of loops performed, **-EINVAL** for invalid **flags**,
* **-E2BIG** if **nr_loops** exceeds the maximum number of loops.
*/
static long (*bpf_loop)(__u32 nr_loops, void *callback_fn, void *callback_ctx, __u64 flags) = (void *) 181;
/*
* bpf_strncmp
*
* Do strncmp() between **s1** and **s2**. **s1** doesn't need
* to be null-terminated and **s1_sz** is the maximum storage
* size of **s1**. **s2** must be a read-only string.
*
* Returns
* An integer less than, equal to, or greater than zero
* if the first **s1_sz** bytes of **s1** are found to be
* less than, to match, or be greater than **s2**.
*/
static long (*bpf_strncmp)(const char *s1, __u32 s1_sz, const char *s2) = (void *) 182;
/*
* bpf_get_func_arg
*
* Get **n**-th argument (zero based) of the traced function (for tracing programs)
* returned in **value**.
*
*
* Returns
* 0 on success.
* **-EINVAL** if *n* >= the argument count of the traced function.
*/
static long (*bpf_get_func_arg)(void *ctx, __u32 n, __u64 *value) = (void *) 183;
/*
* bpf_get_func_ret
*
* Get return value of the traced function (for tracing programs)
* in **value**.
*
*
* Returns
* 0 on success.
* **-EOPNOTSUPP** for tracing programs other than BPF_TRACE_FEXIT or BPF_MODIFY_RETURN.
*/
static long (*bpf_get_func_ret)(void *ctx, __u64 *value) = (void *) 184;
/*
* bpf_get_func_arg_cnt
*
* Get number of arguments of the traced function (for tracing programs).
*
*
* Returns
* The number of arguments of the traced function.
*/
static long (*bpf_get_func_arg_cnt)(void *ctx) = (void *) 185;
/*
* bpf_get_retval
*
* Get the syscall's return value that will be returned to userspace.
*
* This helper is currently supported by cgroup programs only.
*
* Returns
* The syscall's return value.
*/
static int (*bpf_get_retval)(void) = (void *) 186;
/*
* bpf_set_retval
*
* Set the syscall's return value that will be returned to userspace.
*
* This helper is currently supported by cgroup programs only.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static int (*bpf_set_retval)(int retval) = (void *) 187;
/*
* bpf_xdp_get_buff_len
*
* Get the total size of a given xdp buff (linear and paged area)
*
* Returns
* The total size of a given xdp buffer.
*/
static __u64 (*bpf_xdp_get_buff_len)(struct xdp_md *xdp_md) = (void *) 188;
/*
* bpf_xdp_load_bytes
*
* This helper is provided as an easy way to load data from a
* xdp buffer. It can be used to load *len* bytes from *offset* from
* the frame associated to *xdp_md*, into the buffer pointed by
* *buf*.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_xdp_load_bytes)(struct xdp_md *xdp_md, __u32 offset, void *buf, __u32 len) = (void *) 189;
/*
* bpf_xdp_store_bytes
*
* Store *len* bytes from buffer *buf* into the frame
* associated to *xdp_md*, at *offset*.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_xdp_store_bytes)(struct xdp_md *xdp_md, __u32 offset, void *buf, __u32 len) = (void *) 190;
/*
* bpf_copy_from_user_task
*
* Read *size* bytes from user space address *user_ptr* in *tsk*'s
* address space, and store the data in *dst*. *flags* is not
* used yet and is provided for future extensibility. This helper
* can only be used by sleepable programs.
*
* Returns
* 0 on success, or a negative error in case of failure. On error
* *dst* buffer is zeroed out.
*/
static long (*bpf_copy_from_user_task)(void *dst, __u32 size, const void *user_ptr, struct task_struct *tsk, __u64 flags) = (void *) 191;
/*
* bpf_skb_set_tstamp
*
* Change the __sk_buff->tstamp_type to *tstamp_type*
* and set __sk_buff->tstamp to *tstamp* at the same time.
*
* If there is no need to change the __sk_buff->tstamp_type,
* the tstamp value can be directly written to __sk_buff->tstamp
* instead.
*
* BPF_SKB_TSTAMP_DELIVERY_MONO is the only tstamp that
* will be kept during bpf_redirect_*(). A non zero
* *tstamp* must be used with the BPF_SKB_TSTAMP_DELIVERY_MONO
* *tstamp_type*.
*
* A BPF_SKB_TSTAMP_UNSPEC *tstamp_type* can only be used
* with a zero *tstamp*.
*
* Only IPv4 and IPv6 skb->protocol are supported.
*
* This function is most useful when one needs to set a
* mono delivery time to __sk_buff->tstamp and then
* bpf_redirect_*() to the egress of an iface. For example,
* changing the (rcv) timestamp in __sk_buff->tstamp at
* ingress to a mono delivery time and then bpf_redirect_*()
* to sch_fq@phy-dev.
*
* Returns
* 0 on success.
* **-EINVAL** for invalid input
* **-EOPNOTSUPP** for unsupported protocol
*/
static long (*bpf_skb_set_tstamp)(struct __sk_buff *skb, __u64 tstamp, __u32 tstamp_type) = (void *) 192;
/*
* bpf_ima_file_hash
*
* Returns a calculated IMA hash of the *file*.
* If the hash is larger than *size*, then only *size*
* bytes will be copied to *dst*.
*
* Returns
* The **hash_algo** is returned on success,
* **-EOPNOTSUPP** if the hash calculation failed or **-EINVAL** if
* invalid arguments are passed.
*/
static long (*bpf_ima_file_hash)(struct file *file, void *dst, __u32 size) = (void *) 193;
/*
* bpf_kptr_xchg
*
* Exchange kptr at pointer *map_value* with *ptr*, and return the
* old value. *ptr* can be NULL, otherwise it must be a referenced
* pointer which will be released when this helper is called.
*
* Returns
* The old value of kptr (which can be NULL). The returned pointer,
* if not NULL, is a reference which must be released using its
* corresponding release function, or moved into a BPF map before
* program exit.
*/
static void *(*bpf_kptr_xchg)(void *map_value, void *ptr) = (void *) 194;
/*
* bpf_map_lookup_percpu_elem
*
* Perform a lookup in *percpu map* for an entry associated to
* *key* on *cpu*.
*
* Returns
* Map value associated to *key* on *cpu*, or **NULL** if no entry
* was found or *cpu* is invalid.
*/
static void *(*bpf_map_lookup_percpu_elem)(void *map, const void *key, __u32 cpu) = (void *) 195;
/*
* bpf_skc_to_mptcp_sock
*
* Dynamically cast a *sk* pointer to a *mptcp_sock* pointer.
*
* Returns
* *sk* if casting is valid, or **NULL** otherwise.
*/
static struct mptcp_sock *(*bpf_skc_to_mptcp_sock)(void *sk) = (void *) 196;
/*
* bpf_dynptr_from_mem
*
* Get a dynptr to local memory *data*.
*
* *data* must be a ptr to a map value.
* The maximum *size* supported is DYNPTR_MAX_SIZE.
* *flags* is currently unused.
*
* Returns
* 0 on success, -E2BIG if the size exceeds DYNPTR_MAX_SIZE,
* -EINVAL if flags is not 0.
*/
static long (*bpf_dynptr_from_mem)(void *data, __u32 size, __u64 flags, struct bpf_dynptr *ptr) = (void *) 197;
/*
* bpf_ringbuf_reserve_dynptr
*
* Reserve *size* bytes of payload in a ring buffer *ringbuf*
* through the dynptr interface. *flags* must be 0.
*
* Please note that a corresponding bpf_ringbuf_submit_dynptr or
* bpf_ringbuf_discard_dynptr must be called on *ptr*, even if the
* reservation fails. This is enforced by the verifier.
*
* Returns
* 0 on success, or a negative error in case of failure.
*/
static long (*bpf_ringbuf_reserve_dynptr)(void *ringbuf, __u32 size, __u64 flags, struct bpf_dynptr *ptr) = (void *) 198;
/*
* bpf_ringbuf_submit_dynptr
*
* Submit reserved ring buffer sample, pointed to by *data*,
* through the dynptr interface. This is a no-op if the dynptr is
* invalid/null.
*
* For more information on *flags*, please see
* 'bpf_ringbuf_submit'.
*
* Returns
* Nothing. Always succeeds.
*/
static void (*bpf_ringbuf_submit_dynptr)(struct bpf_dynptr *ptr, __u64 flags) = (void *) 199;
/*
* bpf_ringbuf_discard_dynptr
*
* Discard reserved ring buffer sample through the dynptr
* interface. This is a no-op if the dynptr is invalid/null.
*
* For more information on *flags*, please see
* 'bpf_ringbuf_discard'.
*
* Returns
* Nothing. Always succeeds.
*/
static void (*bpf_ringbuf_discard_dynptr)(struct bpf_dynptr *ptr, __u64 flags) = (void *) 200;
/*
* bpf_dynptr_read
*
* Read *len* bytes from *src* into *dst*, starting from *offset*
* into *src*.
* *flags* is currently unused.
*
* Returns
* 0 on success, -E2BIG if *offset* + *len* exceeds the length
* of *src*'s data, -EINVAL if *src* is an invalid dynptr or if
* *flags* is not 0.
*/
static long (*bpf_dynptr_read)(void *dst, __u32 len, struct bpf_dynptr *src, __u32 offset, __u64 flags) = (void *) 201;
/*
* bpf_dynptr_write
*
* Write *len* bytes from *src* into *dst*, starting from *offset*
* into *dst*.
* *flags* is currently unused.
*
* Returns
* 0 on success, -E2BIG if *offset* + *len* exceeds the length
* of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
* is a read-only dynptr or if *flags* is not 0.
*/
static long (*bpf_dynptr_write)(struct bpf_dynptr *dst, __u32 offset, void *src, __u32 len, __u64 flags) = (void *) 202;
/*
* bpf_dynptr_data
*
* Get a pointer to the underlying dynptr data.
*
* *len* must be a statically known value. The returned data slice
* is invalidated whenever the dynptr is invalidated.
*
* Returns
* Pointer to the underlying dynptr data, NULL if the dynptr is
* read-only, if the dynptr is invalid, or if the offset and length
* are out of bounds.
*/
static void *(*bpf_dynptr_data)(struct bpf_dynptr *ptr, __u32 offset, __u32 len) = (void *) 203;
/*
* bpf_tcp_raw_gen_syncookie_ipv4
*
* Try to issue a SYN cookie for the packet with corresponding
* IPv4/TCP headers, *iph* and *th*, without depending on a
* listening socket.
*
* *iph* points to the IPv4 header.
*
* *th* points to the start of the TCP header, while *th_len*
* contains the length of the TCP header (at least
* **sizeof**\ (**struct tcphdr**)).
*
* Returns
* On success, the lower 32 bits hold the generated SYN cookie,
* followed by 16 bits which hold the MSS value for that cookie;
* the top 16 bits are unused.
*
* On failure, the returned value is one of the following:
*
* **-EINVAL** if *th_len* is invalid.
*/
static __s64 (*bpf_tcp_raw_gen_syncookie_ipv4)(struct iphdr *iph, struct tcphdr *th, __u32 th_len) = (void *) 204;
/*
* bpf_tcp_raw_gen_syncookie_ipv6
*
* Try to issue a SYN cookie for the packet with corresponding
* IPv6/TCP headers, *iph* and *th*, without depending on a
* listening socket.
*
* *iph* points to the IPv6 header.
*
* *th* points to the start of the TCP header, while *th_len*
* contains the length of the TCP header (at least
* **sizeof**\ (**struct tcphdr**)).
*
* Returns
* On success, the lower 32 bits hold the generated SYN cookie,
* followed by 16 bits which hold the MSS value for that cookie;
* the top 16 bits are unused.
*
* On failure, the returned value is one of the following:
*
* **-EINVAL** if *th_len* is invalid.
*
* **-EPROTONOSUPPORT** if CONFIG_IPV6 is not builtin.
*/
static __s64 (*bpf_tcp_raw_gen_syncookie_ipv6)(struct ipv6hdr *iph, struct tcphdr *th, __u32 th_len) = (void *) 205;
/*
* bpf_tcp_raw_check_syncookie_ipv4
*
* Check whether *iph* and *th* contain a valid SYN cookie ACK
* without depending on a listening socket.
*
* *iph* points to the IPv4 header.
*
* *th* points to the TCP header.
*
* Returns
* 0 if *iph* and *th* are a valid SYN cookie ACK.
*
* On failure, the returned value is one of the following:
*
* **-EACCES** if the SYN cookie is not valid.
*/
static long (*bpf_tcp_raw_check_syncookie_ipv4)(struct iphdr *iph, struct tcphdr *th) = (void *) 206;
/*
* bpf_tcp_raw_check_syncookie_ipv6
*
* Check whether *iph* and *th* contain a valid SYN cookie ACK
* without depending on a listening socket.
*
* *iph* points to the IPv6 header.
*
* *th* points to the TCP header.
*
* Returns
* 0 if *iph* and *th* are a valid SYN cookie ACK.
*
* On failure, the returned value is one of the following:
*
* **-EACCES** if the SYN cookie is not valid.
*
* **-EPROTONOSUPPORT** if CONFIG_IPV6 is not builtin.
*/
static long (*bpf_tcp_raw_check_syncookie_ipv6)(struct ipv6hdr *iph, struct tcphdr *th) = (void *) 207;
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
#ifndef __BPF_HELPERS__
#define __BPF_HELPERS__
/*
* Note that bpf programs need to include either
* vmlinux.h (auto-generated from BTF) or linux/types.h
* in advance since bpf_helper_defs.h uses such types
* as __u64.
*/
#include "bpf_helper_defs.h"
#define __uint(name, val) int (*name)[val]
#define __type(name, val) typeof(val) *name
#define __array(name, val) typeof(val) *name[]
/*
* Helper macro to place programs, maps, license in
* different sections in elf_bpf file. Section names
* are interpreted by libbpf depending on the context (BPF programs, BPF maps,
* extern variables, etc).
* To allow use of SEC() with externs (e.g., for extern .maps declarations),
* make sure __attribute__((unused)) doesn't trigger compilation warning.
*/
#if __GNUC__ && !__clang__
/*
* Pragma macros are broken on GCC
* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55578
* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90400
*/
#define SEC(name) __attribute__((section(name), used))
#else
#define SEC(name) \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wignored-attributes\"") \
__attribute__((section(name), used)) \
_Pragma("GCC diagnostic pop")
#endif
/* Avoid 'linux/stddef.h' definition of '__always_inline'. */
#undef __always_inline
#define __always_inline inline __attribute__((always_inline))
#ifndef __noinline
#define __noinline __attribute__((noinline))
#endif
#ifndef __weak
#define __weak __attribute__((weak))
#endif
/*
* Use __hidden attribute to mark a non-static BPF subprogram effectively
* static for BPF verifier's verification algorithm purposes, allowing more
* extensive and permissive BPF verification process, taking into account
* subprogram's caller context.
*/
#define __hidden __attribute__((visibility("hidden")))
/* When utilizing vmlinux.h with BPF CO-RE, user BPF programs can't include
* any system-level headers (such as stddef.h, linux/version.h, etc), and
* commonly-used macros like NULL and KERNEL_VERSION aren't available through
* vmlinux.h. This just adds unnecessary hurdles and forces users to re-define
* them on their own. So as a convenience, provide such definitions here.
*/
#ifndef NULL
#define NULL ((void *)0)
#endif
#ifndef KERNEL_VERSION
#define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + ((c) > 255 ? 255 : (c)))
#endif
/*
* Helper macros to manipulate data structures
*/
#ifndef offsetof
#define offsetof(TYPE, MEMBER) ((unsigned long)&((TYPE *)0)->MEMBER)
#endif
#ifndef container_of
#define container_of(ptr, type, member) \
({ \
void *__mptr = (void *)(ptr); \
((type *)(__mptr - offsetof(type, member))); \
})
#endif
/*
* Compiler (optimization) barrier.
*/
#ifndef barrier
#define barrier() asm volatile("" ::: "memory")
#endif
/* Variable-specific compiler (optimization) barrier. It's a no-op which makes
* compiler believe that there is some black box modification of a given
* variable and thus prevents compiler from making extra assumption about its
* value and potential simplifications and optimizations on this variable.
*
* E.g., compiler might often delay or even omit 32-bit to 64-bit casting of
* a variable, making some code patterns unverifiable. Putting barrier_var()
* in place will ensure that cast is performed before the barrier_var()
* invocation, because compiler has to pessimistically assume that embedded
* asm section might perform some extra operations on that variable.
*
* This is a variable-specific variant of more global barrier().
*/
#ifndef barrier_var
#define barrier_var(var) asm volatile("" : "=r"(var) : "0"(var))
#endif
/*
* Helper macro to throw a compilation error if __bpf_unreachable() gets
* built into the resulting code. This works given BPF back end does not
* implement __builtin_trap(). This is useful to assert that certain paths
* of the program code are never used and hence eliminated by the compiler.
*
* For example, consider a switch statement that covers known cases used by
* the program. __bpf_unreachable() can then reside in the default case. If
* the program gets extended such that a case is not covered in the switch
* statement, then it will throw a build error due to the default case not
* being compiled out.
*/
#ifndef __bpf_unreachable
# define __bpf_unreachable() __builtin_trap()
#endif
/*
* Helper function to perform a tail call with a constant/immediate map slot.
*/
#if __clang_major__ >= 8 && defined(__bpf__)
static __always_inline void
bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
{
if (!__builtin_constant_p(slot))
__bpf_unreachable();
/*
* Provide a hard guarantee that LLVM won't optimize setting r2 (map
* pointer) and r3 (constant map index) from _different paths_ ending
* up at the _same_ call insn as otherwise we won't be able to use the
* jmpq/nopl retpoline-free patching by the x86-64 JIT in the kernel
* given they mismatch. See also d2e4c1e6c294 ("bpf: Constant map key
* tracking for prog array pokes") for details on verifier tracking.
*
* Note on clobber list: we need to stay in-line with BPF calling
* convention, so even if we don't end up using r0, r4, r5, we need
* to mark them as clobber so that LLVM doesn't end up using them
* before / after the call.
*/
asm volatile("r1 = %[ctx]\n\t"
"r2 = %[map]\n\t"
"r3 = %[slot]\n\t"
"call 12"
:: [ctx]"r"(ctx), [map]"r"(map), [slot]"i"(slot)
: "r0", "r1", "r2", "r3", "r4", "r5");
}
#endif
/*
* Helper structure used by eBPF C program
* to describe BPF map attributes to libbpf loader
*/
struct bpf_map_defold {
unsigned int type;
unsigned int key_size;
unsigned int value_size;
unsigned int max_entries;
unsigned int map_flags;
} __attribute__((deprecated("use BTF-defined maps in .maps section")));
enum libbpf_pin_type {
LIBBPF_PIN_NONE,
/* PIN_BY_NAME: pin maps by name (in /sys/fs/bpf by default) */
LIBBPF_PIN_BY_NAME,
};
enum libbpf_tristate {
TRI_NO = 0,
TRI_YES = 1,
TRI_MODULE = 2,
};
#define __kconfig __attribute__((section(".kconfig")))
#define __ksym __attribute__((section(".ksyms")))
#define __kptr __attribute__((btf_type_tag("kptr")))
#define __kptr_ref __attribute__((btf_type_tag("kptr_ref")))
#ifndef ___bpf_concat
#define ___bpf_concat(a, b) a ## b
#endif
#ifndef ___bpf_apply
#define ___bpf_apply(fn, n) ___bpf_concat(fn, n)
#endif
#ifndef ___bpf_nth
#define ___bpf_nth(_, _1, _2, _3, _4, _5, _6, _7, _8, _9, _a, _b, _c, N, ...) N
#endif
#ifndef ___bpf_narg
#define ___bpf_narg(...) \
___bpf_nth(_, ##__VA_ARGS__, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
#endif
#define ___bpf_fill0(arr, p, x) do {} while (0)
#define ___bpf_fill1(arr, p, x) arr[p] = x
#define ___bpf_fill2(arr, p, x, args...) arr[p] = x; ___bpf_fill1(arr, p + 1, args)
#define ___bpf_fill3(arr, p, x, args...) arr[p] = x; ___bpf_fill2(arr, p + 1, args)
#define ___bpf_fill4(arr, p, x, args...) arr[p] = x; ___bpf_fill3(arr, p + 1, args)
#define ___bpf_fill5(arr, p, x, args...) arr[p] = x; ___bpf_fill4(arr, p + 1, args)
#define ___bpf_fill6(arr, p, x, args...) arr[p] = x; ___bpf_fill5(arr, p + 1, args)
#define ___bpf_fill7(arr, p, x, args...) arr[p] = x; ___bpf_fill6(arr, p + 1, args)
#define ___bpf_fill8(arr, p, x, args...) arr[p] = x; ___bpf_fill7(arr, p + 1, args)
#define ___bpf_fill9(arr, p, x, args...) arr[p] = x; ___bpf_fill8(arr, p + 1, args)
#define ___bpf_fill10(arr, p, x, args...) arr[p] = x; ___bpf_fill9(arr, p + 1, args)
#define ___bpf_fill11(arr, p, x, args...) arr[p] = x; ___bpf_fill10(arr, p + 1, args)
#define ___bpf_fill12(arr, p, x, args...) arr[p] = x; ___bpf_fill11(arr, p + 1, args)
#define ___bpf_fill(arr, args...) \
___bpf_apply(___bpf_fill, ___bpf_narg(args))(arr, 0, args)
/*
* BPF_SEQ_PRINTF to wrap bpf_seq_printf to-be-printed values
* in a structure.
*/
#define BPF_SEQ_PRINTF(seq, fmt, args...) \
({ \
static const char ___fmt[] = fmt; \
unsigned long long ___param[___bpf_narg(args)]; \
\
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
___bpf_fill(___param, args); \
_Pragma("GCC diagnostic pop") \
\
bpf_seq_printf(seq, ___fmt, sizeof(___fmt), \
___param, sizeof(___param)); \
})
/*
* BPF_SNPRINTF wraps the bpf_snprintf helper with variadic arguments instead of
* an array of u64.
*/
#define BPF_SNPRINTF(out, out_size, fmt, args...) \
({ \
static const char ___fmt[] = fmt; \
unsigned long long ___param[___bpf_narg(args)]; \
\
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
___bpf_fill(___param, args); \
_Pragma("GCC diagnostic pop") \
\
bpf_snprintf(out, out_size, ___fmt, \
___param, sizeof(___param)); \
})
#ifdef BPF_NO_GLOBAL_DATA
#define BPF_PRINTK_FMT_MOD
#else
#define BPF_PRINTK_FMT_MOD static const
#endif
#define __bpf_printk(fmt, ...) \
({ \
BPF_PRINTK_FMT_MOD char ____fmt[] = fmt; \
bpf_trace_printk(____fmt, sizeof(____fmt), \
##__VA_ARGS__); \
})
/*
* __bpf_vprintk wraps the bpf_trace_vprintk helper with variadic arguments
* instead of an array of u64.
*/
#define __bpf_vprintk(fmt, args...) \
({ \
static const char ___fmt[] = fmt; \
unsigned long long ___param[___bpf_narg(args)]; \
\
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
___bpf_fill(___param, args); \
_Pragma("GCC diagnostic pop") \
\
bpf_trace_vprintk(___fmt, sizeof(___fmt), \
___param, sizeof(___param)); \
})
/* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
* Otherwise use __bpf_vprintk
*/
#define ___bpf_pick_printk(...) \
___bpf_nth(_, ##__VA_ARGS__, __bpf_vprintk, __bpf_vprintk, __bpf_vprintk, \
__bpf_vprintk, __bpf_vprintk, __bpf_vprintk, __bpf_vprintk, \
__bpf_vprintk, __bpf_vprintk, __bpf_printk /*3*/, __bpf_printk /*2*/,\
__bpf_printk /*1*/, __bpf_printk /*0*/)
/* Helper macro to print out debug messages */
#define bpf_printk(fmt, args...) ___bpf_pick_printk(args)(fmt, ##args)
#endif
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
#ifndef __BPF_TRACING_H__
#define __BPF_TRACING_H__
#include "bpf_helpers.h"
/* Scan the ARCH passed in from ARCH env variable (see Makefile) */
#if defined(__TARGET_ARCH_x86)
#define bpf_target_x86
#define bpf_target_defined
#elif defined(__TARGET_ARCH_s390)
#define bpf_target_s390
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm)
#define bpf_target_arm
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm64)
#define bpf_target_arm64
#define bpf_target_defined
#elif defined(__TARGET_ARCH_mips)
#define bpf_target_mips
#define bpf_target_defined
#elif defined(__TARGET_ARCH_powerpc)
#define bpf_target_powerpc
#define bpf_target_defined
#elif defined(__TARGET_ARCH_sparc)
#define bpf_target_sparc
#define bpf_target_defined
#elif defined(__TARGET_ARCH_riscv)
#define bpf_target_riscv
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arc)
#define bpf_target_arc
#define bpf_target_defined
#else
/* Fall back to what the compiler says */
#if defined(__x86_64__)
#define bpf_target_x86
#define bpf_target_defined
#elif defined(__s390__)
#define bpf_target_s390
#define bpf_target_defined
#elif defined(__arm__)
#define bpf_target_arm
#define bpf_target_defined
#elif defined(__aarch64__)
#define bpf_target_arm64
#define bpf_target_defined
#elif defined(__mips__)
#define bpf_target_mips
#define bpf_target_defined
#elif defined(__powerpc__)
#define bpf_target_powerpc
#define bpf_target_defined
#elif defined(__sparc__)
#define bpf_target_sparc
#define bpf_target_defined
#elif defined(__riscv) && __riscv_xlen == 64
#define bpf_target_riscv
#define bpf_target_defined
#elif defined(__arc__)
#define bpf_target_arc
#define bpf_target_defined
#endif /* no compiler target */
#endif
#ifndef __BPF_TARGET_MISSING
#define __BPF_TARGET_MISSING "GCC error \"Must specify a BPF target arch via __TARGET_ARCH_xxx\""
#endif
#if defined(bpf_target_x86)
#if defined(__KERNEL__) || defined(__VMLINUX_H__)
#define __PT_PARM1_REG di
#define __PT_PARM2_REG si
#define __PT_PARM3_REG dx
#define __PT_PARM4_REG cx
#define __PT_PARM5_REG r8
#define __PT_RET_REG sp
#define __PT_FP_REG bp
#define __PT_RC_REG ax
#define __PT_SP_REG sp
#define __PT_IP_REG ip
/* syscall uses r10 for PARM4 */
#define PT_REGS_PARM4_SYSCALL(x) ((x)->r10)
#define PT_REGS_PARM4_CORE_SYSCALL(x) BPF_CORE_READ(x, r10)
#else
#ifdef __i386__
#define __PT_PARM1_REG eax
#define __PT_PARM2_REG edx
#define __PT_PARM3_REG ecx
/* i386 kernel is built with -mregparm=3 */
#define __PT_PARM4_REG __unsupported__
#define __PT_PARM5_REG __unsupported__
#define __PT_RET_REG esp
#define __PT_FP_REG ebp
#define __PT_RC_REG eax
#define __PT_SP_REG esp
#define __PT_IP_REG eip
#else /* __i386__ */
#define __PT_PARM1_REG rdi
#define __PT_PARM2_REG rsi
#define __PT_PARM3_REG rdx
#define __PT_PARM4_REG rcx
#define __PT_PARM5_REG r8
#define __PT_RET_REG rsp
#define __PT_FP_REG rbp
#define __PT_RC_REG rax
#define __PT_SP_REG rsp
#define __PT_IP_REG rip
/* syscall uses r10 for PARM4 */
#define PT_REGS_PARM4_SYSCALL(x) ((x)->r10)
#define PT_REGS_PARM4_CORE_SYSCALL(x) BPF_CORE_READ(x, r10)
#endif /* __i386__ */
#endif /* __KERNEL__ || __VMLINUX_H__ */
#elif defined(bpf_target_s390)
struct pt_regs___s390 {
unsigned long orig_gpr2;
};
/* s390 provides user_pt_regs instead of struct pt_regs to userspace */
#define __PT_REGS_CAST(x) ((const user_pt_regs *)(x))
#define __PT_PARM1_REG gprs[2]
#define __PT_PARM2_REG gprs[3]
#define __PT_PARM3_REG gprs[4]
#define __PT_PARM4_REG gprs[5]
#define __PT_PARM5_REG gprs[6]
#define __PT_RET_REG gprs[14]
#define __PT_FP_REG gprs[11] /* Works only with CONFIG_FRAME_POINTER */
#define __PT_RC_REG gprs[2]
#define __PT_SP_REG gprs[15]
#define __PT_IP_REG psw.addr
#define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1_CORE_SYSCALL(x)
#define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___s390 *)(x), orig_gpr2)
#elif defined(bpf_target_arm)
#define __PT_PARM1_REG uregs[0]
#define __PT_PARM2_REG uregs[1]
#define __PT_PARM3_REG uregs[2]
#define __PT_PARM4_REG uregs[3]
#define __PT_PARM5_REG uregs[4]
#define __PT_RET_REG uregs[14]
#define __PT_FP_REG uregs[11] /* Works only with CONFIG_FRAME_POINTER */
#define __PT_RC_REG uregs[0]
#define __PT_SP_REG uregs[13]
#define __PT_IP_REG uregs[12]
#elif defined(bpf_target_arm64)
struct pt_regs___arm64 {
unsigned long orig_x0;
};
/* arm64 provides struct user_pt_regs instead of struct pt_regs to userspace */
#define __PT_REGS_CAST(x) ((const struct user_pt_regs *)(x))
#define __PT_PARM1_REG regs[0]
#define __PT_PARM2_REG regs[1]
#define __PT_PARM3_REG regs[2]
#define __PT_PARM4_REG regs[3]
#define __PT_PARM5_REG regs[4]
#define __PT_RET_REG regs[30]
#define __PT_FP_REG regs[29] /* Works only with CONFIG_FRAME_POINTER */
#define __PT_RC_REG regs[0]
#define __PT_SP_REG sp
#define __PT_IP_REG pc
#define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1_CORE_SYSCALL(x)
#define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___arm64 *)(x), orig_x0)
#elif defined(bpf_target_mips)
#define __PT_PARM1_REG regs[4]
#define __PT_PARM2_REG regs[5]
#define __PT_PARM3_REG regs[6]
#define __PT_PARM4_REG regs[7]
#define __PT_PARM5_REG regs[8]
#define __PT_RET_REG regs[31]
#define __PT_FP_REG regs[30] /* Works only with CONFIG_FRAME_POINTER */
#define __PT_RC_REG regs[2]
#define __PT_SP_REG regs[29]
#define __PT_IP_REG cp0_epc
#elif defined(bpf_target_powerpc)
#define __PT_PARM1_REG gpr[3]
#define __PT_PARM2_REG gpr[4]
#define __PT_PARM3_REG gpr[5]
#define __PT_PARM4_REG gpr[6]
#define __PT_PARM5_REG gpr[7]
#define __PT_RET_REG regs[31]
#define __PT_FP_REG __unsupported__
#define __PT_RC_REG gpr[3]
#define __PT_SP_REG sp
#define __PT_IP_REG nip
/* powerpc does not select ARCH_HAS_SYSCALL_WRAPPER. */
#define PT_REGS_SYSCALL_REGS(ctx) ctx
#elif defined(bpf_target_sparc)
#define __PT_PARM1_REG u_regs[UREG_I0]
#define __PT_PARM2_REG u_regs[UREG_I1]
#define __PT_PARM3_REG u_regs[UREG_I2]
#define __PT_PARM4_REG u_regs[UREG_I3]
#define __PT_PARM5_REG u_regs[UREG_I4]
#define __PT_RET_REG u_regs[UREG_I7]
#define __PT_FP_REG __unsupported__
#define __PT_RC_REG u_regs[UREG_I0]
#define __PT_SP_REG u_regs[UREG_FP]
/* Should this also be a bpf_target check for the sparc case? */
#if defined(__arch64__)
#define __PT_IP_REG tpc
#else
#define __PT_IP_REG pc
#endif
#elif defined(bpf_target_riscv)
#define __PT_REGS_CAST(x) ((const struct user_regs_struct *)(x))
#define __PT_PARM1_REG a0
#define __PT_PARM2_REG a1
#define __PT_PARM3_REG a2
#define __PT_PARM4_REG a3
#define __PT_PARM5_REG a4
#define __PT_RET_REG ra
#define __PT_FP_REG s0
#define __PT_RC_REG a0
#define __PT_SP_REG sp
#define __PT_IP_REG pc
/* riscv does not select ARCH_HAS_SYSCALL_WRAPPER. */
#define PT_REGS_SYSCALL_REGS(ctx) ctx
#elif defined(bpf_target_arc)
/* arc provides struct user_pt_regs instead of struct pt_regs to userspace */
#define __PT_REGS_CAST(x) ((const struct user_regs_struct *)(x))
#define __PT_PARM1_REG scratch.r0
#define __PT_PARM2_REG scratch.r1
#define __PT_PARM3_REG scratch.r2
#define __PT_PARM4_REG scratch.r3
#define __PT_PARM5_REG scratch.r4
#define __PT_RET_REG scratch.blink
#define __PT_FP_REG __unsupported__
#define __PT_RC_REG scratch.r0
#define __PT_SP_REG scratch.sp
#define __PT_IP_REG scratch.ret
/* arc does not select ARCH_HAS_SYSCALL_WRAPPER. */
#define PT_REGS_SYSCALL_REGS(ctx) ctx
#endif
#if defined(bpf_target_defined)
struct pt_regs;
/* allow some architectures to override `struct pt_regs` */
#ifndef __PT_REGS_CAST
#define __PT_REGS_CAST(x) (x)
#endif
#define PT_REGS_PARM1(x) (__PT_REGS_CAST(x)->__PT_PARM1_REG)
#define PT_REGS_PARM2(x) (__PT_REGS_CAST(x)->__PT_PARM2_REG)
#define PT_REGS_PARM3(x) (__PT_REGS_CAST(x)->__PT_PARM3_REG)
#define PT_REGS_PARM4(x) (__PT_REGS_CAST(x)->__PT_PARM4_REG)
#define PT_REGS_PARM5(x) (__PT_REGS_CAST(x)->__PT_PARM5_REG)
#define PT_REGS_RET(x) (__PT_REGS_CAST(x)->__PT_RET_REG)
#define PT_REGS_FP(x) (__PT_REGS_CAST(x)->__PT_FP_REG)
#define PT_REGS_RC(x) (__PT_REGS_CAST(x)->__PT_RC_REG)
#define PT_REGS_SP(x) (__PT_REGS_CAST(x)->__PT_SP_REG)
#define PT_REGS_IP(x) (__PT_REGS_CAST(x)->__PT_IP_REG)
#define PT_REGS_PARM1_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_PARM1_REG)
#define PT_REGS_PARM2_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_PARM2_REG)
#define PT_REGS_PARM3_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_PARM3_REG)
#define PT_REGS_PARM4_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_PARM4_REG)
#define PT_REGS_PARM5_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_PARM5_REG)
#define PT_REGS_RET_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_RET_REG)
#define PT_REGS_FP_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_FP_REG)
#define PT_REGS_RC_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_RC_REG)
#define PT_REGS_SP_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_SP_REG)
#define PT_REGS_IP_CORE(x) BPF_CORE_READ(__PT_REGS_CAST(x), __PT_IP_REG)
#if defined(bpf_target_powerpc)
#define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ (ip) = (ctx)->link; })
#define BPF_KRETPROBE_READ_RET_IP BPF_KPROBE_READ_RET_IP
#elif defined(bpf_target_sparc)
#define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ (ip) = PT_REGS_RET(ctx); })
#define BPF_KRETPROBE_READ_RET_IP BPF_KPROBE_READ_RET_IP
#else
#define BPF_KPROBE_READ_RET_IP(ip, ctx) \
({ bpf_probe_read_kernel(&(ip), sizeof(ip), (void *)PT_REGS_RET(ctx)); })
#define BPF_KRETPROBE_READ_RET_IP(ip, ctx) \
({ bpf_probe_read_kernel(&(ip), sizeof(ip), (void *)(PT_REGS_FP(ctx) + sizeof(ip))); })
#endif
#ifndef PT_REGS_PARM1_SYSCALL
#define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1(x)
#endif
#define PT_REGS_PARM2_SYSCALL(x) PT_REGS_PARM2(x)
#define PT_REGS_PARM3_SYSCALL(x) PT_REGS_PARM3(x)
#ifndef PT_REGS_PARM4_SYSCALL
#define PT_REGS_PARM4_SYSCALL(x) PT_REGS_PARM4(x)
#endif
#define PT_REGS_PARM5_SYSCALL(x) PT_REGS_PARM5(x)
#ifndef PT_REGS_PARM1_CORE_SYSCALL
#define PT_REGS_PARM1_CORE_SYSCALL(x) PT_REGS_PARM1_CORE(x)
#endif
#define PT_REGS_PARM2_CORE_SYSCALL(x) PT_REGS_PARM2_CORE(x)
#define PT_REGS_PARM3_CORE_SYSCALL(x) PT_REGS_PARM3_CORE(x)
#ifndef PT_REGS_PARM4_CORE_SYSCALL
#define PT_REGS_PARM4_CORE_SYSCALL(x) PT_REGS_PARM4_CORE(x)
#endif
#define PT_REGS_PARM5_CORE_SYSCALL(x) PT_REGS_PARM5_CORE(x)
#else /* defined(bpf_target_defined) */
#define PT_REGS_PARM1(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM2(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM3(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM4(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM5(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_RET(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_FP(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_RC(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_SP(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_IP(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM1_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM2_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM3_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM4_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM5_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_RET_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_FP_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_RC_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_SP_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_IP_CORE(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define BPF_KRETPROBE_READ_RET_IP(ip, ctx) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM2_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM3_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM4_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM5_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM1_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM2_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM3_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM4_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#define PT_REGS_PARM5_CORE_SYSCALL(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#endif /* defined(bpf_target_defined) */
/*
* When invoked from a syscall handler kprobe, returns a pointer to a
* struct pt_regs containing syscall arguments and suitable for passing to
* PT_REGS_PARMn_SYSCALL() and PT_REGS_PARMn_CORE_SYSCALL().
*/
#ifndef PT_REGS_SYSCALL_REGS
/* By default, assume that the arch selects ARCH_HAS_SYSCALL_WRAPPER. */
#define PT_REGS_SYSCALL_REGS(ctx) ((struct pt_regs *)PT_REGS_PARM1(ctx))
#endif
#ifndef ___bpf_concat
#define ___bpf_concat(a, b) a ## b
#endif
#ifndef ___bpf_apply
#define ___bpf_apply(fn, n) ___bpf_concat(fn, n)
#endif
#ifndef ___bpf_nth
#define ___bpf_nth(_, _1, _2, _3, _4, _5, _6, _7, _8, _9, _a, _b, _c, N, ...) N
#endif
#ifndef ___bpf_narg
#define ___bpf_narg(...) ___bpf_nth(_, ##__VA_ARGS__, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
#endif
#define ___bpf_ctx_cast0() ctx
#define ___bpf_ctx_cast1(x) ___bpf_ctx_cast0(), (void *)ctx[0]
#define ___bpf_ctx_cast2(x, args...) ___bpf_ctx_cast1(args), (void *)ctx[1]
#define ___bpf_ctx_cast3(x, args...) ___bpf_ctx_cast2(args), (void *)ctx[2]
#define ___bpf_ctx_cast4(x, args...) ___bpf_ctx_cast3(args), (void *)ctx[3]
#define ___bpf_ctx_cast5(x, args...) ___bpf_ctx_cast4(args), (void *)ctx[4]
#define ___bpf_ctx_cast6(x, args...) ___bpf_ctx_cast5(args), (void *)ctx[5]
#define ___bpf_ctx_cast7(x, args...) ___bpf_ctx_cast6(args), (void *)ctx[6]
#define ___bpf_ctx_cast8(x, args...) ___bpf_ctx_cast7(args), (void *)ctx[7]
#define ___bpf_ctx_cast9(x, args...) ___bpf_ctx_cast8(args), (void *)ctx[8]
#define ___bpf_ctx_cast10(x, args...) ___bpf_ctx_cast9(args), (void *)ctx[9]
#define ___bpf_ctx_cast11(x, args...) ___bpf_ctx_cast10(args), (void *)ctx[10]
#define ___bpf_ctx_cast12(x, args...) ___bpf_ctx_cast11(args), (void *)ctx[11]
#define ___bpf_ctx_cast(args...) ___bpf_apply(___bpf_ctx_cast, ___bpf_narg(args))(args)
/*
* BPF_PROG is a convenience wrapper for generic tp_btf/fentry/fexit and
* similar kinds of BPF programs, that accept input arguments as a single
* pointer to untyped u64 array, where each u64 can actually be a typed
 * pointer or integer of different size. Instead of requiring user to write
* manual casts and work with array elements by index, BPF_PROG macro
* allows user to declare a list of named and typed input arguments in the
* same syntax as for normal C function. All the casting is hidden and
* performed transparently, while user code can just assume working with
* function arguments of specified type and name.
*
* Original raw context argument is preserved as well as 'ctx' argument.
* This is useful when using BPF helpers that expect original context
* as one of the parameters (e.g., for bpf_perf_event_output()).
*/
#define BPF_PROG(name, args...) \
name(unsigned long long *ctx); \
static __attribute__((always_inline)) typeof(name(0)) \
____##name(unsigned long long *ctx, ##args); \
typeof(name(0)) name(unsigned long long *ctx) \
{ \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
return ____##name(___bpf_ctx_cast(args)); \
_Pragma("GCC diagnostic pop") \
} \
static __attribute__((always_inline)) typeof(name(0)) \
____##name(unsigned long long *ctx, ##args)
struct pt_regs;
#define ___bpf_kprobe_args0() ctx
#define ___bpf_kprobe_args1(x) ___bpf_kprobe_args0(), (void *)PT_REGS_PARM1(ctx)
#define ___bpf_kprobe_args2(x, args...) ___bpf_kprobe_args1(args), (void *)PT_REGS_PARM2(ctx)
#define ___bpf_kprobe_args3(x, args...) ___bpf_kprobe_args2(args), (void *)PT_REGS_PARM3(ctx)
#define ___bpf_kprobe_args4(x, args...) ___bpf_kprobe_args3(args), (void *)PT_REGS_PARM4(ctx)
#define ___bpf_kprobe_args5(x, args...) ___bpf_kprobe_args4(args), (void *)PT_REGS_PARM5(ctx)
#define ___bpf_kprobe_args(args...) ___bpf_apply(___bpf_kprobe_args, ___bpf_narg(args))(args)
/*
* BPF_KPROBE serves the same purpose for kprobes as BPF_PROG for
* tp_btf/fentry/fexit BPF programs. It hides the underlying platform-specific
* low-level way of getting kprobe input arguments from struct pt_regs, and
* provides a familiar typed and named function arguments syntax and
 * semantics of accessing kprobe input parameters.
*
* Original struct pt_regs* context is preserved as 'ctx' argument. This might
* be necessary when using BPF helpers like bpf_perf_event_output().
*/
#define BPF_KPROBE(name, args...) \
name(struct pt_regs *ctx); \
static __attribute__((always_inline)) typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args); \
typeof(name(0)) name(struct pt_regs *ctx) \
{ \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
return ____##name(___bpf_kprobe_args(args)); \
_Pragma("GCC diagnostic pop") \
} \
static __attribute__((always_inline)) typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args)
#define ___bpf_kretprobe_args0() ctx
#define ___bpf_kretprobe_args1(x) ___bpf_kretprobe_args0(), (void *)PT_REGS_RC(ctx)
#define ___bpf_kretprobe_args(args...) ___bpf_apply(___bpf_kretprobe_args, ___bpf_narg(args))(args)
/*
* BPF_KRETPROBE is similar to BPF_KPROBE, except, it only provides optional
* return value (in addition to `struct pt_regs *ctx`), but no input
* arguments, because they will be clobbered by the time probed function
* returns.
*/
#define BPF_KRETPROBE(name, args...) \
name(struct pt_regs *ctx); \
static __attribute__((always_inline)) typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args); \
typeof(name(0)) name(struct pt_regs *ctx) \
{ \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
return ____##name(___bpf_kretprobe_args(args)); \
_Pragma("GCC diagnostic pop") \
} \
static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
/* If the kernel doesn't have CONFIG_ARCH_HAS_SYSCALL_WRAPPER, ctx already is the syscall's pt_regs and can be read directly */
#define ___bpf_syscall_args0() ctx
#define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (void *)PT_REGS_PARM1_SYSCALL(regs)
#define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (void *)PT_REGS_PARM2_SYSCALL(regs)
#define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (void *)PT_REGS_PARM3_SYSCALL(regs)
#define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (void *)PT_REGS_PARM4_SYSCALL(regs)
#define ___bpf_syscall_args5(x, args...) ___bpf_syscall_args4(args), (void *)PT_REGS_PARM5_SYSCALL(regs)
#define ___bpf_syscall_args(args...) ___bpf_apply(___bpf_syscall_args, ___bpf_narg(args))(args)
/* If the kernel has CONFIG_ARCH_HAS_SYSCALL_WRAPPER, the syscall pt_regs sits behind a pointer, so we have to BPF_CORE_READ from it */
#define ___bpf_syswrap_args0() ctx
#define ___bpf_syswrap_args1(x) ___bpf_syswrap_args0(), (void *)PT_REGS_PARM1_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args2(x, args...) ___bpf_syswrap_args1(args), (void *)PT_REGS_PARM2_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args3(x, args...) ___bpf_syswrap_args2(args), (void *)PT_REGS_PARM3_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args4(x, args...) ___bpf_syswrap_args3(args), (void *)PT_REGS_PARM4_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args5(x, args...) ___bpf_syswrap_args4(args), (void *)PT_REGS_PARM5_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args(args...) ___bpf_apply(___bpf_syswrap_args, ___bpf_narg(args))(args)
/*
* BPF_KSYSCALL is a variant of BPF_KPROBE, which is intended for
* tracing syscall functions, like __x64_sys_close. It hides the underlying
* platform-specific low-level way of getting syscall input arguments from
* struct pt_regs, and provides a familiar typed and named function arguments
* syntax and semantics of accessing syscall input parameters.
*
* Original struct pt_regs * context is preserved as 'ctx' argument. This might
* be necessary when using BPF helpers like bpf_perf_event_output().
*
* At the moment BPF_KSYSCALL does not transparently handle all the calling
* convention quirks for the following syscalls:
*
* - mmap(): __ARCH_WANT_SYS_OLD_MMAP.
* - clone(): CONFIG_CLONE_BACKWARDS, CONFIG_CLONE_BACKWARDS2 and
* CONFIG_CLONE_BACKWARDS3.
* - socket-related syscalls: __ARCH_WANT_SYS_SOCKETCALL.
* - compat syscalls.
*
* This may or may not change in the future. User needs to take extra measures
* to handle such quirks explicitly, if necessary.
*
* This macro relies on BPF CO-RE support and virtual __kconfig externs.
*/
#define BPF_KSYSCALL(name, args...) \
name(struct pt_regs *ctx); \
extern _Bool LINUX_HAS_SYSCALL_WRAPPER __kconfig; \
static __attribute__((always_inline)) typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args); \
typeof(name(0)) name(struct pt_regs *ctx) \
{ \
struct pt_regs *regs = LINUX_HAS_SYSCALL_WRAPPER \
? (struct pt_regs *)PT_REGS_PARM1(ctx) \
: ctx; \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
if (LINUX_HAS_SYSCALL_WRAPPER) \
return ____##name(___bpf_syswrap_args(args)); \
else \
return ____##name(___bpf_syscall_args(args)); \
_Pragma("GCC diagnostic pop") \
} \
static __attribute__((always_inline)) typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args)
#define BPF_KPROBE_SYSCALL BPF_KSYSCALL
#endif
// opensnitch-1.6.9/ebpf_prog/common.h
#ifndef OPENSNITCH_COMMON_H
#define OPENSNITCH_COMMON_H
#include "common_defs.h"
//https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/limits.h#L13
#ifndef MAX_PATH_LEN
#define MAX_PATH_LEN 4096
#endif
//https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/binfmts.h#L16
#define MAX_CMDLINE_LEN 4096
// max args that I've been able to use before hitting the error:
// "dereference of modified ctx ptr disallowed"
#define MAX_ARGS 20
#define MAX_ARG_SIZE 256
// flags to indicate if we were able to read all the cmdline arguments,
// or if one of the arguments is >= MAX_ARG_SIZE, or there are more than MAX_ARGS
#define COMPLETE_ARGS 0
#define INCOMPLETE_ARGS 1
#ifndef TASK_COMM_LEN
#define TASK_COMM_LEN 16
#endif
#define BUF_SIZE_MAP_NS 256
#define GLOBAL_MAP_NS "256"
enum events_type {
EVENT_NONE = 0,
EVENT_EXEC,
EVENT_EXECVEAT,
EVENT_FORK,
EVENT_SCHED_EXIT,
};
struct trace_ev_common {
short common_type;
char common_flags;
char common_preempt_count;
int common_pid;
};
struct trace_sys_enter_execve {
struct trace_ev_common ext;
int __syscall_nr;
char *filename;
const char *const *argv;
const char *const *envp;
};
struct trace_sys_enter_execveat {
struct trace_ev_common ext;
int __syscall_nr;
char *filename;
const char *const *argv;
const char *const *envp;
int flags;
};
struct trace_sys_exit_execve {
struct trace_ev_common ext;
int __syscall_nr;
long ret;
};
struct data_t {
u64 type;
u32 pid; // PID as in the userspace term (i.e. task->tgid in kernel)
u32 uid;
// Parent PID as in the userspace term (i.e task->real_parent->tgid in kernel)
u32 ppid;
u32 ret_code;
u8 args_count;
u8 args_partial;
char filename[MAX_PATH_LEN];
char args[MAX_ARGS][MAX_ARG_SIZE];
char comm[TASK_COMM_LEN];
u16 pad1;
u32 pad2;
};
//-----------------------------------------------------------------------------
// maps
struct bpf_map_def SEC("maps/heapstore") heapstore = {
.type = BPF_MAP_TYPE_PERCPU_ARRAY,
.key_size = sizeof(u32),
.value_size = sizeof(struct data_t),
.max_entries = 1
};
#endif
// opensnitch-1.6.9/ebpf_prog/common_defs.h
#ifndef OPENSNITCH_COMMON_DEFS_H
#define OPENSNITCH_COMMON_DEFS_H
#include <linux/version.h>
#include <linux/types.h>
#include <linux/bpf.h>
#include "bpf_headers/bpf_helpers.h"
#include "bpf_headers/bpf_tracing.h"
//#include
#define BUF_SIZE_MAP_NS 256
#define MAPSIZE 12000
// even though we only need 32 bits of pid, on x86_32 the eBPF verifier complained when the pid type was set to u32
typedef u64 pid_size_t;
typedef u64 uid_size_t;
//-------------------------------map definitions
// which github.com/iovisor/gobpf/elf expects
typedef struct bpf_map_def {
unsigned int type;
unsigned int key_size;
unsigned int value_size;
unsigned int max_entries;
unsigned int map_flags;
unsigned int pinning;
char namespace[BUF_SIZE_MAP_NS];
} bpf_map_def;
enum bpf_pin_type {
PIN_NONE = 0,
PIN_OBJECT_NS,
PIN_GLOBAL_NS,
PIN_CUSTOM_NS,
};
//-----------------------------------
#endif
// opensnitch-1.6.9/ebpf_prog/opensnitch-dns.c
/* Copyright (C) 2022 calesanz
// 2023-2024 Gustavo Iñiguez Goya
//
// This file is part of OpenSnitch.
//
// OpenSnitch is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// OpenSnitch is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with OpenSnitch. If not, see <https://www.gnu.org/licenses/>.
*/
#define KBUILD_MODNAME "opensnitch-dns"
#include <linux/version.h>
#include <linux/types.h>
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <linux/socket.h>
#include <linux/in.h>
#include <linux/in6.h>
#include "common_defs.h"
#include "bpf_headers/bpf_helpers.h"
#include "bpf_headers/bpf_tracing.h"
//-----------------------------------
// random values
#define MAX_ALIASES 5
#define MAX_IPS 30
struct nameLookupEvent {
u32 addr_type;
u8 ip[16];
char host[252];
} __attribute__((packed));
struct hostent {
char *h_name; /* Official name of host. */
char **h_aliases; /* Alias list. */
int h_addrtype; /* Host address type. */
int h_length; /* Length of address. */
char **h_addr_list; /* List of addresses from name server. */
#ifdef __USE_MISC
#define h_addr h_addr_list[0] /* Address, for backward compatibility.*/
#endif
};
struct addrinfo {
int ai_flags; /* Input flags. */
int ai_family; /* Protocol family for socket. */
int ai_socktype; /* Socket type. */
int ai_protocol; /* Protocol for socket. */
size_t ai_addrlen; /* Length of socket address. */
struct sockaddr *ai_addr; /* Socket address for socket. */
char *ai_canonname; /* Canonical name for service location. */
struct addrinfo *ai_next; /* Pointer to next in list. */
};
struct addrinfo_args_cache {
struct addrinfo **addrinfo_ptr;
char node[256];
};
// define temporary array for data
struct bpf_map_def SEC("maps/addrinfo_args_hash") addrinfo_args_hash = {
.type = BPF_MAP_TYPE_HASH,
.max_entries = 256, // max entries at any time
.key_size = sizeof(u32),
.value_size = sizeof(struct addrinfo_args_cache),
};
// BPF output events
struct bpf_map_def SEC("maps/events") events = {
.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
.key_size = sizeof(u32),
.value_size = sizeof(u32),
.max_entries = 256, // max cpus
};
/**
* Hooks gethostbyname calls and emits multiple nameLookupEvent events.
 * It supports at most MAX_IPS addresses.
*/
SEC("uretprobe/gethostbyname")
int uretprobe__gethostbyname(struct pt_regs *ctx) {
// bpf_printk("Called gethostbyname %d\n", 1);
struct nameLookupEvent data = {0};
if (!PT_REGS_RC(ctx))
return 0;
struct hostent *host = (struct hostent *)PT_REGS_RC(ctx);
char * hostnameptr = {0};
bpf_probe_read(&hostnameptr, sizeof(hostnameptr), &host->h_name);
bpf_probe_read_str(&data.host, sizeof(data.host), hostnameptr);
char **ips = {0};
bpf_probe_read(&ips, sizeof(ips), &host->h_addr_list);
#pragma clang loop unroll(full)
for (int i = 0; i < MAX_IPS; i++) {
char *ip={0};
bpf_probe_read(&ip, sizeof(ip), &ips[i]);
if (ip == NULL) {
return 0;
}
bpf_probe_read_user(&data.addr_type, sizeof(data.addr_type),
&host->h_addrtype);
if (data.addr_type == AF_INET) {
// Only copy the 4 relevant bytes
bpf_probe_read_user(&data.ip, 4, ip);
} else {
bpf_probe_read_user(&data.ip, sizeof(data.ip), ip);
}
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &data,
sizeof(data));
// char **alias = host->h_aliases;
char **aliases = {0};
bpf_probe_read(&aliases, sizeof(aliases), &host->h_aliases);
#pragma clang loop unroll(full)
for (int j = 0; j < MAX_ALIASES; j++) {
char *alias = {0};
bpf_probe_read(&alias, sizeof(alias), &aliases[j]);
if (alias == NULL) {
return 0;
}
bpf_probe_read_user(&data.host, sizeof(data.host), alias);
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &data,
sizeof(data));
}
}
return 0;
}
// capture the getaddrinfo call and store the relevant arguments in a hash map.
SEC("uprobe/getaddrinfo")
int addrinfo(struct pt_regs *ctx) {
struct addrinfo_args_cache addrinfo_args = {0};
if (!PT_REGS_PARM1(ctx))
return 0;
if (!PT_REGS_PARM4(ctx))
return 0;
u64 pid_tgid = bpf_get_current_pid_tgid();
u32 tid = (u32)pid_tgid;
addrinfo_args.addrinfo_ptr = (struct addrinfo **)PT_REGS_PARM4(ctx);
bpf_probe_read_user_str(&addrinfo_args.node, sizeof(addrinfo_args.node),
(char *)PT_REGS_PARM1(ctx));
bpf_map_update_elem(&addrinfo_args_hash, &tid, &addrinfo_args,
0 /* flags */);
return 0;
}
SEC("uretprobe/getaddrinfo")
int ret_addrinfo(struct pt_regs *ctx) {
struct nameLookupEvent data = {0};
struct addrinfo_args_cache *addrinfo_args = {0};
u64 pid_tgid = bpf_get_current_pid_tgid();
u32 tid = (u32)pid_tgid;
addrinfo_args = bpf_map_lookup_elem(&addrinfo_args_hash, &tid);
if (addrinfo_args == 0) {
return 0; // missed start
}
struct addrinfo **res_p={0};
bpf_probe_read(&res_p, sizeof(res_p), &addrinfo_args->addrinfo_ptr);
#pragma clang loop unroll(full)
for (int i = 0; i < MAX_IPS; i++) {
struct addrinfo *res={0};
bpf_probe_read(&res, sizeof(res), res_p);
if (res == NULL) {
goto out;
}
bpf_probe_read(&data.addr_type, sizeof(data.addr_type),
&res->ai_family);
if (data.addr_type == AF_INET) {
struct sockaddr_in *ipv4={0};
bpf_probe_read(&ipv4, sizeof(ipv4), &res->ai_addr);
// Only copy the 4 relevant bytes
bpf_probe_read_user(&data.ip, 4, &ipv4->sin_addr);
} else if(data.addr_type == AF_INET6) {
struct sockaddr_in6 *ipv6={0};
bpf_probe_read(&ipv6, sizeof(ipv6), &res->ai_addr);
bpf_probe_read_user(&data.ip, sizeof(data.ip), &ipv6->sin6_addr);
} else {
goto out;
}
bpf_probe_read_kernel_str(&data.host, sizeof(data.host),
&addrinfo_args->node);
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &data,
sizeof(data));
struct addrinfo * next={0};
bpf_probe_read(&next, sizeof(next), &res->ai_next);
if (next == NULL){
goto out;
}
res_p = &next;
}
out:
bpf_map_delete_elem(&addrinfo_args_hash, &tid);
return 0;
}
char _license[] SEC("license") = "GPL";
u32 _version SEC("version") = 0xFFFFFFFE;
// opensnitch-1.6.9/ebpf_prog/opensnitch-procs.c
#define KBUILD_MODNAME "opensnitch-procs"
#include "common.h"
struct bpf_map_def SEC("maps/proc-events") events = {
// Since kernel 4.4
.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
.key_size = sizeof(u32),
.value_size = sizeof(u32),
.max_entries = 256, // max cpus
};
struct bpf_map_def SEC("maps/execMap") execMap = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(u32),
.value_size = sizeof(struct data_t),
.max_entries = 256,
};
static __always_inline void new_event(struct data_t* data)
{
// initializing variables with __builtin_memset() is required
// for compatibility with bpf on kernel 4.4
struct task_struct *task;
struct task_struct *parent;
__builtin_memset(&task, 0, sizeof(task));
__builtin_memset(&parent, 0, sizeof(parent));
task = (struct task_struct *)bpf_get_current_task();
bpf_probe_read(&parent, sizeof(parent), &task->real_parent);
data->pid = bpf_get_current_pid_tgid() >> 32;
#if !defined(__arm__) && !defined(__i386__)
// on i686 -> invalid read from stack
bpf_probe_read(&data->ppid, sizeof(u32), &parent->tgid);
#endif
data->uid = bpf_get_current_uid_gid() & 0xffffffff;
bpf_get_current_comm(&data->comm, sizeof(data->comm));
};
/*
* send to userspace the result of the execve* call.
*/
static __always_inline void __handle_exit_execve(struct trace_sys_exit_execve *ctx)
{
u64 pid_tgid = bpf_get_current_pid_tgid();
struct data_t *proc = bpf_map_lookup_elem(&execMap, &pid_tgid);
// don't delete the pid from execMap here, delegate it to sched_process_exit
if (proc == NULL) { return; }
if (ctx->ret != 0) { return; }
proc->ret_code = ctx->ret;
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, proc, sizeof(*proc));
}
// https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-4.html
// bprm_execve REGS_PARM3
// https://elixir.bootlin.com/linux/latest/source/fs/exec.c#L1796
SEC("tracepoint/sched/sched_process_exit")
int tracepoint__sched_sched_process_exit(struct pt_regs *ctx)
{
u64 pid_tgid = bpf_get_current_pid_tgid();
struct data_t *proc = bpf_map_lookup_elem(&execMap, &pid_tgid);
// if the pid is not in the execMap cache (because it's not a pid we've
// previously intercepted), do not send the event to userspace, because
// we won't do anything with it and it consumes CPU cycles (too much in some
// scenarios).
if (proc == NULL) { return 0; }
int zero = 0;
struct data_t *data = bpf_map_lookup_elem(&heapstore, &zero);
if (!data){ return 0; }
new_event(data);
data->type = EVENT_SCHED_EXIT;
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, data, sizeof(*data));
bpf_map_delete_elem(&execMap, &pid_tgid);
return 0;
};
SEC("tracepoint/syscalls/sys_exit_execve")
int tracepoint__syscalls_sys_exit_execve(struct trace_sys_exit_execve *ctx)
{
__handle_exit_execve(ctx);
return 0;
};
SEC("tracepoint/syscalls/sys_exit_execveat")
int tracepoint__syscalls_sys_exit_execveat(struct trace_sys_exit_execve *ctx)
{
__handle_exit_execve(ctx);
return 0;
};
SEC("tracepoint/syscalls/sys_enter_execve")
int tracepoint__syscalls_sys_enter_execve(struct trace_sys_enter_execve* ctx)
{
int zero = 0;
struct data_t *data = {0};
data = (struct data_t *)bpf_map_lookup_elem(&heapstore, &zero);
if (!data){ return 0; }
new_event(data);
data->type = EVENT_EXEC;
// bpf_probe_read_user* helpers were introduced in kernel 5.5
// Since the args can be overwritten anyway, maybe we could get them from
// mm_struct instead for a wider kernel version support range?
bpf_probe_read_user_str(&data->filename, sizeof(data->filename), (const char *)ctx->filename);
const char *argp={0};
data->args_count = 0;
data->args_partial = INCOMPLETE_ARGS;
// FIXME: on i386 arch, the following code fails with permission denied.
#if !defined(__arm__) && !defined(__i386__)
#pragma unroll
for (int i = 0; i < MAX_ARGS; i++) {
bpf_probe_read_user(&argp, sizeof(argp), &ctx->argv[i]);
if (!argp){ data->args_partial = COMPLETE_ARGS; break; }
if (bpf_probe_read_user_str(&data->args[i], MAX_ARG_SIZE, argp) >= MAX_ARG_SIZE){
break;
}
data->args_count++;
}
#endif
// FIXME: on aarch64 we fail to save the event to execMap, so send it to userspace here.
#if defined(__aarch64__)
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, data, sizeof(*data));
#else
// in case of failure adding the item to the map, send it directly
u64 pid_tgid = bpf_get_current_pid_tgid();
if (bpf_map_update_elem(&execMap, &pid_tgid, data, BPF_ANY) != 0) {
// With some commands this helper fails with error -28 (ENOSPC); possibly a misleading error, or the command itself failed.
// BUG: after resuming from suspend, this helper fails with error -95 (EOPNOTSUPP).
// Possible workaround: count the -95 errors, and from userspace reinitialize the perf streamer once errors >= N.
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, data, sizeof(*data));
}
#endif
return 0;
};
SEC("tracepoint/syscalls/sys_enter_execveat")
int tracepoint__syscalls_sys_enter_execveat(struct trace_sys_enter_execveat* ctx)
{
int zero = 0;
struct data_t *data = (struct data_t *)bpf_map_lookup_elem(&heapstore, &zero);
if (!data){ return 0; }
new_event((void *)data);
data->type = EVENT_EXECVEAT;
// bpf_probe_read_user* helpers were introduced in kernel 5.5
// Since the args can be overwritten anyway, maybe we could get them from
// mm_struct instead for a wider kernel version support range?
bpf_probe_read_user_str(&data->filename, sizeof(data->filename), (const char *)ctx->filename);
const char *argp={0};
data->args_count = 0;
data->args_partial = INCOMPLETE_ARGS;
// FIXME: on i386 arch, the following code fails with permission denied.
#if !defined(__arm__) && !defined(__i386__)
#pragma unroll
for (int i = 0; i < MAX_ARGS; i++) {
bpf_probe_read_user(&argp, sizeof(argp), &ctx->argv[i]);
if (!argp){ data->args_partial = COMPLETE_ARGS; break; }
if (bpf_probe_read_user_str(&data->args[i], MAX_ARG_SIZE, argp) >= MAX_ARG_SIZE){
break;
}
data->args_count++;
}
#endif
// FIXME: on aarch64 we fail to save the event to execMap, so send it to userspace here.
#if defined(__aarch64__)
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, data, sizeof(*data));
#else
// in case of failure adding the item to the map, send it directly
u64 pid_tgid = bpf_get_current_pid_tgid();
if (bpf_map_update_elem(&execMap, &pid_tgid, data, BPF_ANY) != 0) {
// With some commands this helper fails with error -28 (ENOSPC); possibly a misleading error, or the command itself failed.
// BUG: after resuming from suspend, this helper fails with error -95 (EOPNOTSUPP).
// Possible workaround: count the -95 errors, and from userspace reinitialize the perf streamer once errors >= N.
bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, data, sizeof(*data));
}
#endif
return 0;
};
char _license[] SEC("license") = "GPL";
// this number will be interpreted by the elf loader
// to set the current running kernel version
u32 _version SEC("version") = 0xFFFFFFFE;
opensnitch-1.6.9/ebpf_prog/opensnitch.c
#define KBUILD_MODNAME "dummy"
#include "common_defs.h"
#include
#include
#include
#include
struct tcp_key_t {
u16 sport;
u32 daddr;
u16 dport;
u32 saddr;
}__attribute__((packed));
struct tcp_value_t {
pid_size_t pid;
uid_size_t uid;
char comm[TASK_COMM_LEN];
}__attribute__((packed));
// not using unsigned __int128 because it is not supported on x86_32
struct ipV6 {
u64 part1;
u64 part2;
}__attribute__((packed));
struct tcpv6_key_t {
u16 sport;
struct ipV6 daddr;
u16 dport;
struct ipV6 saddr;
}__attribute__((packed));
struct tcpv6_value_t{
pid_size_t pid;
uid_size_t uid;
char comm[TASK_COMM_LEN];
}__attribute__((packed));
struct udp_key_t {
u16 sport;
u32 daddr;
u16 dport;
u32 saddr;
} __attribute__((packed));
struct udp_value_t{
pid_size_t pid;
uid_size_t uid;
char comm[TASK_COMM_LEN];
}__attribute__((packed));
struct udpv6_key_t {
u16 sport;
struct ipV6 daddr;
u16 dport;
struct ipV6 saddr;
}__attribute__((packed));
struct udpv6_value_t{
pid_size_t pid;
uid_size_t uid;
char comm[TASK_COMM_LEN];
}__attribute__((packed));
// on x86_32 "struct sock" is arranged differently from x86_64 (at least on Debian kernels).
// We hardcode offsets of IP addresses.
struct sock_on_x86_32_t {
u8 data_we_dont_care_about[40];
struct ipV6 daddr;
struct ipV6 saddr;
};
// Adding +1, +2, +3, etc. to the map sizes makes it easier to distinguish the maps in bpftool's output
struct bpf_map_def SEC("maps/tcpMap") tcpMap = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(struct tcp_key_t),
.value_size = sizeof(struct tcp_value_t),
.max_entries = MAPSIZE+1,
};
struct bpf_map_def SEC("maps/tcpv6Map") tcpv6Map = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(struct tcpv6_key_t),
.value_size = sizeof(struct tcpv6_value_t),
.max_entries = MAPSIZE+2,
};
struct bpf_map_def SEC("maps/udpMap") udpMap = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(struct udp_key_t),
.value_size = sizeof(struct udp_value_t),
.max_entries = MAPSIZE+3,
};
struct bpf_map_def SEC("maps/udpv6Map") udpv6Map = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(struct udpv6_key_t),
.value_size = sizeof(struct udpv6_value_t),
.max_entries = MAPSIZE+4,
};
// for TCP the IP-tuple can be copied from "struct sock" only upon return from tcp_connect().
// We stash the socket here to look it up upon return.
struct bpf_map_def SEC("maps/tcpsock") tcpsock = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(u64),
// using u64 instead of sizeof(struct sock *)
// to avoid pointer size related quirks on x86_32
.value_size = sizeof(u64),
.max_entries = 300,
};
struct bpf_map_def SEC("maps/tcpv6sock") tcpv6sock = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(u64),
.value_size = sizeof(u64),
.max_entries = 300,
};
struct bpf_map_def SEC("maps/icmpsock") icmpsock = {
.type = BPF_MAP_TYPE_HASH,
.key_size = sizeof(u64),
.value_size = sizeof(u64),
.max_entries = 300,
};
// initializing variables with __builtin_memset() is required
// for compatibility with bpf on kernel 4.4
SEC("kprobe/tcp_v4_connect")
int kprobe__tcp_v4_connect(struct pt_regs *ctx)
{
#if defined(__i386__)
// On x86_32 platforms I couldn't get the function arguments using PT_REGS_PARM1,
// so we access the registers directly
struct sock *sk = (struct sock *)((ctx)->ax);
#else
struct sock *sk = (struct sock *)PT_REGS_PARM1(ctx);
#endif
u64 skp = (u64)sk;
u64 pid_tgid = bpf_get_current_pid_tgid();
bpf_map_update_elem(&tcpsock, &pid_tgid, &skp, BPF_ANY);
return 0;
};
SEC("kretprobe/tcp_v4_connect")
int kretprobe__tcp_v4_connect(struct pt_regs *ctx)
{
u64 pid_tgid = bpf_get_current_pid_tgid();
u64 *skp = bpf_map_lookup_elem(&tcpsock, &pid_tgid);
if (skp == NULL) {return 0;}
struct sock *sk;
__builtin_memset(&sk, 0, sizeof(sk));
sk = (struct sock *)*skp;
struct tcp_key_t tcp_key;
__builtin_memset(&tcp_key, 0, sizeof(tcp_key));
bpf_probe_read(&tcp_key.dport, sizeof(tcp_key.dport), &sk->__sk_common.skc_dport);
bpf_probe_read(&tcp_key.sport, sizeof(tcp_key.sport), &sk->__sk_common.skc_num);
bpf_probe_read(&tcp_key.daddr, sizeof(tcp_key.daddr), &sk->__sk_common.skc_daddr);
bpf_probe_read(&tcp_key.saddr, sizeof(tcp_key.saddr), &sk->__sk_common.skc_rcv_saddr);
struct tcp_value_t tcp_value={0};
__builtin_memset(&tcp_value, 0, sizeof(tcp_value));
tcp_value.pid = pid_tgid >> 32;
tcp_value.uid = bpf_get_current_uid_gid() & 0xffffffff;
bpf_get_current_comm(&tcp_value.comm, sizeof(tcp_value.comm));
bpf_map_update_elem(&tcpMap, &tcp_key, &tcp_value, BPF_ANY);
bpf_map_delete_elem(&tcpsock, &pid_tgid);
return 0;
};
SEC("kprobe/tcp_v6_connect")
int kprobe__tcp_v6_connect(struct pt_regs *ctx)
{
#if defined(__i386__)
struct sock *sk = (struct sock *)((ctx)->ax);
#else
struct sock *sk = (struct sock *)PT_REGS_PARM1(ctx);
#endif
u64 skp = (u64)sk;
u64 pid_tgid = bpf_get_current_pid_tgid();
bpf_map_update_elem(&tcpv6sock, &pid_tgid, &skp, BPF_ANY);
return 0;
};
SEC("kretprobe/tcp_v6_connect")
int kretprobe__tcp_v6_connect(struct pt_regs *ctx)
{
u64 pid_tgid = bpf_get_current_pid_tgid();
u64 *skp = bpf_map_lookup_elem(&tcpv6sock, &pid_tgid);
if (skp == NULL) {return 0;}
struct sock *sk;
__builtin_memset(&sk, 0, sizeof(sk));
sk = (struct sock *)*skp;
struct tcpv6_key_t tcpv6_key;
__builtin_memset(&tcpv6_key, 0, sizeof(tcpv6_key));
bpf_probe_read(&tcpv6_key.dport, sizeof(tcpv6_key.dport), &sk->__sk_common.skc_dport);
bpf_probe_read(&tcpv6_key.sport, sizeof(tcpv6_key.sport), &sk->__sk_common.skc_num);
#if defined(__i386__)
struct sock_on_x86_32_t sock;
__builtin_memset(&sock, 0, sizeof(sock));
bpf_probe_read(&sock, sizeof(sock), *(&sk));
tcpv6_key.daddr = sock.daddr;
tcpv6_key.saddr = sock.saddr;
#else
bpf_probe_read(&tcpv6_key.daddr, sizeof(tcpv6_key.daddr), &sk->__sk_common.skc_v6_daddr.in6_u.u6_addr32);
bpf_probe_read(&tcpv6_key.saddr, sizeof(tcpv6_key.saddr), &sk->__sk_common.skc_v6_rcv_saddr.in6_u.u6_addr32);
#endif
struct tcpv6_value_t tcpv6_value={0};
__builtin_memset(&tcpv6_value, 0, sizeof(tcpv6_value));
tcpv6_value.pid = pid_tgid >> 32;
tcpv6_value.uid = bpf_get_current_uid_gid() & 0xffffffff;
bpf_get_current_comm(&tcpv6_value.comm, sizeof(tcpv6_value.comm));
bpf_map_update_elem(&tcpv6Map, &tcpv6_key, &tcpv6_value, BPF_ANY);
bpf_map_delete_elem(&tcpv6sock, &pid_tgid);
return 0;
};
SEC("kprobe/udp_sendmsg")
int kprobe__udp_sendmsg(struct pt_regs *ctx)
{
#if defined(__i386__)
struct sock *sk = (struct sock *)((ctx)->ax);
struct msghdr *msg = (struct msghdr *)((ctx)->dx);
#else
struct sock *sk = (struct sock *)PT_REGS_PARM1(ctx);
struct msghdr *msg = (struct msghdr *)PT_REGS_PARM2(ctx);
#endif
u64 msg_name; //pointer
__builtin_memset(&msg_name, 0, sizeof(msg_name));
bpf_probe_read(&msg_name, sizeof(msg_name), &msg->msg_name);
struct sockaddr_in * usin = (struct sockaddr_in *)msg_name;
struct udp_key_t udp_key;
__builtin_memset(&udp_key, 0, sizeof(udp_key));
bpf_probe_read(&udp_key.dport, sizeof(udp_key.dport), &usin->sin_port);
if (udp_key.dport != 0){ //likely
bpf_probe_read(&udp_key.daddr, sizeof(udp_key.daddr), &usin->sin_addr.s_addr);
}
else {
//rarely, the dport is not in msg_name and can only be found in skc_dport
bpf_probe_read(&udp_key.dport, sizeof(udp_key.dport), &sk->__sk_common.skc_dport);
bpf_probe_read(&udp_key.daddr, sizeof(udp_key.daddr), &sk->__sk_common.skc_daddr);
}
bpf_probe_read(&udp_key.sport, sizeof(udp_key.sport), &sk->__sk_common.skc_num);
bpf_probe_read(&udp_key.saddr, sizeof(udp_key.saddr), &sk->__sk_common.skc_rcv_saddr);
// TODO: armhf
#if !defined(__arm__)
// extract the source IP from the ancillary message (IP_PKTINFO).
if (udp_key.saddr == 0){
u64 cmsg=0;
bpf_probe_read(&cmsg, sizeof(cmsg), &msg->msg_control);
struct in_pktinfo *inpkt = (struct in_pktinfo *)CMSG_DATA(cmsg);
bpf_probe_read(&udp_key.saddr, sizeof(udp_key.saddr), &inpkt->ipi_spec_dst.s_addr);
}
#endif
struct udp_value_t *lookedupValue = bpf_map_lookup_elem(&udpMap, &udp_key);
u64 pid = bpf_get_current_pid_tgid() >> 32;
if (lookedupValue == NULL || lookedupValue->pid != pid) {
struct udp_value_t udp_value={0};
__builtin_memset(&udp_value, 0, sizeof(udp_value));
udp_value.pid = pid;
udp_value.uid = bpf_get_current_uid_gid() & 0xffffffff;
bpf_get_current_comm(&udp_value.comm, sizeof(udp_value.comm));
bpf_map_update_elem(&udpMap, &udp_key, &udp_value, BPF_ANY);
}
//else nothing to do
return 0;
};
SEC("kprobe/udpv6_sendmsg")
int kprobe__udpv6_sendmsg(struct pt_regs *ctx)
{
#if defined(__i386__)
struct sock *sk = (struct sock *)((ctx)->ax);
struct msghdr *msg = (struct msghdr *)((ctx)->dx);
#else
struct sock *sk = (struct sock *)PT_REGS_PARM1(ctx);
struct msghdr *msg = (struct msghdr *)PT_REGS_PARM2(ctx);
#endif
u64 msg_name; //a pointer
__builtin_memset(&msg_name, 0, sizeof(msg_name));
bpf_probe_read(&msg_name, sizeof(msg_name), &msg->msg_name);
struct udpv6_key_t udpv6_key;
__builtin_memset(&udpv6_key, 0, sizeof(udpv6_key));
bpf_probe_read(&udpv6_key.dport, sizeof(udpv6_key.dport), &sk->__sk_common.skc_dport);
if (udpv6_key.dport != 0){ //likely
bpf_probe_read(&udpv6_key.daddr, sizeof(udpv6_key.daddr), &sk->__sk_common.skc_v6_daddr.in6_u.u6_addr32);
}
else {
struct sockaddr_in6 * sin6 = (struct sockaddr_in6 *)msg_name;
bpf_probe_read(&udpv6_key.dport, sizeof(udpv6_key.dport), &sin6->sin6_port);
bpf_probe_read(&udpv6_key.daddr, sizeof(udpv6_key.daddr), &sin6->sin6_addr.in6_u.u6_addr32);
}
bpf_probe_read(&udpv6_key.sport, sizeof(udpv6_key.sport), &sk->__sk_common.skc_num);
bpf_probe_read(&udpv6_key.saddr, sizeof(udpv6_key.saddr), &sk->__sk_common.skc_v6_rcv_saddr.in6_u.u6_addr32);
if (udpv6_key.saddr.part1 == 0){
u64 cmsg=0;
bpf_probe_read(&cmsg, sizeof(cmsg), &msg->msg_control);
struct in6_pktinfo *inpkt = (struct in6_pktinfo *)CMSG_DATA(cmsg);
bpf_probe_read(&udpv6_key.saddr, sizeof(udpv6_key.saddr), &inpkt->ipi6_addr.s6_addr32);
}
#if defined(__i386__)
struct sock_on_x86_32_t sock;
__builtin_memset(&sock, 0, sizeof(sock));
bpf_probe_read(&sock, sizeof(sock), *(&sk));
udpv6_key.daddr = sock.daddr;
udpv6_key.saddr = sock.saddr;
#endif
struct udpv6_value_t *lookedupValue = bpf_map_lookup_elem(&udpv6Map, &udpv6_key);
u64 pid = bpf_get_current_pid_tgid() >> 32;
if ( lookedupValue == NULL || lookedupValue->pid != pid) {
struct udpv6_value_t udpv6_value={0};
__builtin_memset(&udpv6_value, 0, sizeof(udpv6_value));
bpf_get_current_comm(&udpv6_value.comm, sizeof(udpv6_value.comm));
udpv6_value.pid = pid;
udpv6_value.uid = bpf_get_current_uid_gid() & 0xffffffff;
bpf_map_update_elem(&udpv6Map, &udpv6_key, &udpv6_value, BPF_ANY);
}
//else nothing to do
return 0;
};
// TODO: armhf
#if !defined(__arm__)
SEC("kprobe/inet_dgram_connect")
int kprobe__inet_dgram_connect(struct pt_regs *ctx)
{
#if defined(__i386__)
struct socket *skt = (struct socket *)PT_REGS_PARM1(ctx);
struct sockaddr *saddr = (struct sockaddr *)PT_REGS_PARM2(ctx);
#else
struct socket *skt = (struct socket *)PT_REGS_PARM1(ctx);
struct sockaddr *saddr = (struct sockaddr *)PT_REGS_PARM2(ctx);
#endif
u64 pid_tgid = bpf_get_current_pid_tgid();
u64 skp = (u64)skt;
u64 sa = (u64)saddr;
bpf_map_update_elem(&tcpsock, &pid_tgid, &skp, BPF_ANY);
bpf_map_update_elem(&icmpsock, &pid_tgid, &sa, BPF_ANY);
return 0;
}
SEC("kretprobe/inet_dgram_connect")
int kretprobe__inet_dgram_connect(struct pt_regs *ctx)
{
u64 pid_tgid = bpf_get_current_pid_tgid();
u64 *skp = bpf_map_lookup_elem(&tcpsock, &pid_tgid);
if (skp == NULL) { goto out; }
u64 *sap = bpf_map_lookup_elem(&icmpsock, &pid_tgid);
if (sap == NULL) { goto out; }
struct sock *sk;
struct socket *skt;
__builtin_memset(&sk, 0, sizeof(sk));
__builtin_memset(&skt, 0, sizeof(skt));
skt = (struct socket *)*skp;
bpf_probe_read(&sk, sizeof(sk), &skt->sk);
u8 proto = 0;
u8 type = 0;
u8 fam = 0;
bpf_probe_read(&proto, sizeof(proto), &sk->sk_protocol);
bpf_probe_read(&type, sizeof(type), &sk->sk_type);
bpf_probe_read(&fam, sizeof(fam), &sk->sk_family);
struct udp_value_t udp_value={0};
__builtin_memset(&udp_value, 0, sizeof(udp_value));
udp_value.pid = pid_tgid >> 32;
udp_value.uid = bpf_get_current_uid_gid() & 0xffffffff;
bpf_get_current_comm(&udp_value.comm, sizeof(udp_value.comm));
if (fam == AF_INET){
struct sockaddr_in *ska;
struct udp_key_t udp_key;
__builtin_memset(&ska, 0, sizeof(ska));
__builtin_memset(&udp_key, 0, sizeof(udp_key));
ska = (struct sockaddr_in *)*sap;
bpf_probe_read(&udp_key.daddr, sizeof(udp_key.daddr), &ska->sin_addr.s_addr);
bpf_probe_read(&udp_key.dport, sizeof(udp_key.dport), &ska->sin_port);
if (udp_key.dport == 0){
bpf_probe_read(&udp_key.dport, sizeof(udp_key.dport), &sk->__sk_common.skc_dport);
bpf_probe_read(&udp_key.daddr, sizeof(udp_key.daddr), &sk->__sk_common.skc_daddr);
}
bpf_probe_read(&udp_key.sport, sizeof(udp_key.sport), &sk->__sk_common.skc_num);
bpf_probe_read(&udp_key.saddr, sizeof(udp_key.saddr), &sk->__sk_common.skc_rcv_saddr);
udp_key.sport = (udp_key.sport >> 8) | ((udp_key.sport << 8) & 0xff00);
// There are several reasons for these fields to be empty, e.g.:
// - saddr may be empty if sk_state is 7 (TCP_CLOSE)
if (udp_key.dport == 0 || udp_key.daddr == 0){
goto out;
}
if (proto == IPPROTO_UDP){
bpf_map_update_elem(&udpMap, &udp_key, &udp_value, BPF_ANY);
}
} else if (fam == AF_INET6){
struct sockaddr_in6 *ska;
struct udpv6_key_t udpv6_key;
__builtin_memset(&ska, 0, sizeof(ska));
__builtin_memset(&udpv6_key, 0, sizeof(udpv6_key));
ska = (struct sockaddr_in6 *)*sap;
bpf_probe_read(&udpv6_key.dport, sizeof(udpv6_key.dport), &sk->__sk_common.skc_dport);
if (udpv6_key.dport != 0){ //likely
bpf_probe_read(&udpv6_key.daddr, sizeof(udpv6_key.daddr), &sk->__sk_common.skc_v6_daddr.in6_u.u6_addr32);
}
else {
bpf_probe_read(&udpv6_key.dport, sizeof(udpv6_key.dport), &ska->sin6_port);
bpf_probe_read(&udpv6_key.daddr, sizeof(udpv6_key.daddr), &ska->sin6_addr.in6_u.u6_addr32);
}
bpf_probe_read(&udpv6_key.sport, sizeof(udpv6_key.sport), &sk->__sk_common.skc_num);
bpf_probe_read(&udpv6_key.saddr, sizeof(udpv6_key.saddr), &sk->__sk_common.skc_v6_rcv_saddr.in6_u.u6_addr32);
#if defined(__i386__)
struct sock_on_x86_32_t sock;
__builtin_memset(&sock, 0, sizeof(sock));
bpf_probe_read(&sock, sizeof(sock), *(&sk));
udpv6_key.daddr = sock.daddr;
udpv6_key.saddr = sock.saddr;
#endif
if (udpv6_key.dport == 0){
goto out;
}
if (proto == IPPROTO_UDP){
bpf_map_update_elem(&udpv6Map, &udpv6_key, &udp_value, BPF_ANY);
}
}
//if (proto == IPPROTO_UDP && type == SOCK_DGRAM && udp_key.dport == 1025){
// udp_key.dport = 0;
// udp_key.sport = 0;
// bpf_map_update_elem(&icmpMap, &udp_key, &udp_value, BPF_ANY);
//}
//else if (proto == IPPROTO_UDP && type == SOCK_DGRAM && udp_key.dport != 1025){
// bpf_map_update_elem(&icmpMap, &udp_key, &udp_value, BPF_ANY);
//} else if (proto == IPPROTO_TCP && type == SOCK_RAW){
// sport always 6 and dport 0
// bpf_map_update_elem(&tcpMap, &udp_key, &udp_value, BPF_ANY);
//}
return 0;
out:
bpf_map_delete_elem(&tcpsock, &pid_tgid);
bpf_map_delete_elem(&icmpsock, &pid_tgid);
return 0;
};
#endif
// TODO: for 32bits
#if !defined(__arm__) && !defined(__i386__)
SEC("kprobe/iptunnel_xmit")
int kprobe__iptunnel_xmit(struct pt_regs *ctx)
{
struct sk_buff *skb = (struct sk_buff *)PT_REGS_PARM3(ctx);
u32 src = (u32)PT_REGS_PARM4(ctx);
u32 dst = (u32)PT_REGS_PARM5(ctx);
u16 sport = 0;
unsigned char *head;
u16 pkt_hdr;
__builtin_memset(&head, 0, sizeof(head));
__builtin_memset(&pkt_hdr, 0, sizeof(pkt_hdr));
bpf_probe_read(&head, sizeof(head), &skb->head);
bpf_probe_read(&pkt_hdr, sizeof(pkt_hdr), &skb->transport_header);
struct udphdr *udph;
__builtin_memset(&udph, 0, sizeof(udph));
udph = (struct udphdr *)(head + pkt_hdr);
bpf_probe_read(&sport, sizeof(sport), &udph->source);
sport = (sport >> 8) | ((sport << 8) & 0xff00);
struct udp_key_t udp_key;
struct udp_value_t udp_value;
__builtin_memset(&udp_key, 0, sizeof(udp_key));
__builtin_memset(&udp_value, 0, sizeof(udp_value));
bpf_probe_read(&udp_key.sport, sizeof(udp_key.sport), &sport);
bpf_probe_read(&udp_key.dport, sizeof(udp_key.dport), &udph->dest);
bpf_probe_read(&udp_key.saddr, sizeof(udp_key.saddr), &src);
bpf_probe_read(&udp_key.daddr, sizeof(udp_key.daddr), &dst);
struct udp_value_t *lookedupValue = bpf_map_lookup_elem(&udpMap, &udp_key);
u64 pid = bpf_get_current_pid_tgid() >> 32;
if ( lookedupValue == NULL || lookedupValue->pid != pid) {
bpf_get_current_comm(&udp_value.comm, sizeof(udp_value.comm));
udp_value.pid = pid;
udp_value.uid = bpf_get_current_uid_gid() & 0xffffffff;
bpf_map_update_elem(&udpMap, &udp_key, &udp_value, BPF_ANY);
}
return 0;
};
#endif
char _license[] SEC("license") = "GPL";
// this number will be interpreted by the elf loader
// to set the current running kernel version
u32 _version SEC("version") = 0xFFFFFFFE;
opensnitch-1.6.9/proto/.gitignore
*.pyc
opensnitch-1.6.9/proto/Makefile
all: ../daemon/ui/protocol/ui.pb.go ../ui/opensnitch/ui_pb2.py
../daemon/ui/protocol/ui.pb.go: ui.proto
protoc -I. ui.proto --go_out=../daemon/ui/protocol/ --go-grpc_out=../daemon/ui/protocol/ --go_opt=paths=source_relative --go-grpc_opt=paths=source_relative
../ui/opensnitch/ui_pb2.py: ui.proto
python3 -m grpc_tools.protoc -I. --python_out=../ui/opensnitch/ --grpc_python_out=../ui/opensnitch/ ui.proto
clean:
@rm -rf ../daemon/ui/protocol/ui.pb.go
@rm -rf ../daemon/ui/protocol/ui_grpc.pb.go
@rm -rf ../ui/opensnitch/ui_pb2.py
@rm -rf ../ui/opensnitch/ui_pb2_grpc.py
opensnitch-1.6.9/proto/ui.proto
syntax = "proto3";
package protocol;
option go_package = "github.com/evilsocket/opensnitch/daemon/ui/protocol";
service UI {
rpc Ping(PingRequest) returns (PingReply) {}
rpc AskRule (Connection) returns (Rule) {}
rpc Subscribe (ClientConfig) returns (ClientConfig) {}
rpc Notifications (stream NotificationReply) returns (stream Notification) {}
rpc PostAlert(Alert) returns (MsgResponse) {}
}
/**
- Send error messages (kernel not compatible, etc)
- Send warnings (eBPF modules failed loading, etc)
- Send kernel events: new execs, bytes recv/sent, ...
- Alert of events defined by the user: alert when a rule matches
*/
message Alert {
enum Priority {
LOW = 0;
MEDIUM = 1;
HIGH = 2;
}
enum Type {
ERROR = 0;
WARNING = 1;
INFO = 2;
}
enum Action {
NONE = 0;
SHOW_ALERT = 1;
SAVE_TO_DB = 2;
}
// What caused the alert
enum What {
GENERIC = 0;
PROC_MONITOR = 1;
FIREWALL = 2;
CONNECTION = 3;
RULE = 4;
NETLINK = 5;
// bind, exec, etc
KERNEL_EVENT = 6;
}
uint64 id = 1;
Type type = 2;
// TODO: group of actions: SHOW_ALERT | SAVE_TO_DB
Action action = 3;
Priority priority = 4;
What what = 5;
// https://developers.google.com/protocol-buffers/docs/reference/go-generated#oneof
oneof data {
// errors, messages, etc
string text = 6;
// proc events: send/recv bytes, etc
Process proc = 8;
// conn events: bind, listen, etc
Connection conn = 9;
Rule rule = 10;
FwRule fwrule = 11;
}
}
message MsgResponse {
uint64 id = 1;
}
message Event {
string time = 1;
Connection connection = 2;
Rule rule = 3;
int64 unixnano = 4;
}
message Statistics {
string daemon_version = 1;
uint64 rules = 2;
uint64 uptime = 3;
uint64 dns_responses = 4;
uint64 connections = 5;
uint64 ignored = 6;
uint64 accepted = 7;
uint64 dropped = 8;
uint64 rule_hits = 9;
uint64 rule_misses = 10;
map<string, uint64> by_proto = 11;
map<string, uint64> by_address = 12;
map<string, uint64> by_host = 13;
map<string, uint64> by_port = 14;
map<string, uint64> by_uid = 15;
map<string, uint64> by_executable = 16;
repeated Event events = 17;
}
message PingRequest {
uint64 id = 1;
Statistics stats = 2;
}
message PingReply {
uint64 id = 1;
}
message Process {
uint64 pid = 1;
uint64 ppid = 2;
uint64 uid = 3;
string comm = 4;
string path = 5;
repeated string args = 6;
map<string, string> env = 7;
string cwd = 8;
uint64 io_reads = 9;
uint64 io_writes = 10;
uint64 net_reads = 11;
uint64 net_writes = 12;
}
message Connection {
string protocol = 1;
string src_ip = 2;
uint32 src_port = 3;
string dst_ip = 4;
string dst_host = 5;
uint32 dst_port = 6;
uint32 user_id = 7;
uint32 process_id = 8;
string process_path = 9;
string process_cwd = 10;
repeated string process_args = 11;
map<string, string> process_env = 12;
}
message Operator {
string type = 1;
string operand = 2;
string data = 3;
bool sensitive = 4;
repeated Operator list = 5;
}
message Rule {
int64 created = 1;
string name = 2;
string description = 3;
bool enabled = 4;
bool precedence = 5;
bool nolog = 6;
string action = 7;
string duration = 8;
Operator operator = 9;
}
enum Action {
NONE = 0;
ENABLE_INTERCEPTION = 1;
DISABLE_INTERCEPTION = 2;
ENABLE_FIREWALL = 3;
DISABLE_FIREWALL = 4;
RELOAD_FW_RULES = 5;
CHANGE_CONFIG = 6;
ENABLE_RULE = 7;
DISABLE_RULE = 8;
DELETE_RULE = 9;
CHANGE_RULE = 10;
LOG_LEVEL = 11;
STOP = 12;
MONITOR_PROCESS = 13;
STOP_MONITOR_PROCESS = 14;
}
message StatementValues {
string Key = 1;
string Value = 2;
}
message Statement {
string Op = 1;
string Name = 2;
repeated StatementValues Values = 3;
}
message Expressions {
Statement Statement = 1;
}
message FwRule {
// DEPRECATED: for backward compatibility with iptables
string Table = 1;
string Chain = 2;
string UUID = 3;
bool Enabled = 4;
uint64 Position = 5;
string Description = 6;
string Parameters = 7;
repeated Expressions Expressions = 8;
string Target = 9;
string TargetParameters = 10;
}
message FwChain {
string Name = 1;
string Table = 2;
string Family = 3;
string Priority = 4;
string Type = 5;
string Hook = 6;
string Policy = 7;
repeated FwRule Rules = 8;
}
message FwChains {
// DEPRECATED: backward compatibility with iptables
FwRule Rule = 1;
repeated FwChain Chains = 2;
}
message SysFirewall {
bool Enabled = 1;
uint32 Version = 2;
repeated FwChains SystemRules = 3;
}
// client configuration sent on Subscribe()
message ClientConfig {
uint64 id = 1;
string name = 2;
string version = 3;
bool isFirewallRunning = 4;
// daemon configuration as json string
string config = 5;
uint32 logLevel = 6;
repeated Rule rules = 7;
SysFirewall systemFirewall = 8;
}
// notification sent to the clients (daemons)
message Notification {
uint64 id = 1;
string clientName = 2;
string serverName = 3;
// CHANGE_CONFIG: 6, data: {"default_timeout": 1, ...}
Action type = 4;
string data = 5;
repeated Rule rules = 6;
SysFirewall sysFirewall = 7;
}
// notification reply sent to the server (GUI)
message NotificationReply {
uint64 id = 1;
NotificationReplyCode code = 2;
string data = 3;
}
enum NotificationReplyCode {
OK = 0;
ERROR = 1;
}
opensnitch-1.6.9/release.sh
#!/bin/bash
# nothing to see here, just a utility i use to create new releases ^_^
CURRENT_VERSION=$(grep Version daemon/core/version.go | cut -d '"' -f 2)
TO_UPDATE=(
daemon/core/version.go
ui/version.py
)
echo -n "Current version is $CURRENT_VERSION, select new version: "
read NEW_VERSION
echo -e "Creating version $NEW_VERSION ...\n"
for file in "${TO_UPDATE[@]}"
do
echo "Patching $file ..."
sed -i "s/$CURRENT_VERSION/$NEW_VERSION/g" "$file"
git add "$file"
done
git commit -m "Releasing v$NEW_VERSION"
git push
git tag -a v$NEW_VERSION -m "Release v$NEW_VERSION"
git push origin v$NEW_VERSION
echo
echo "All done, v$NEW_VERSION released ^_^"
opensnitch-1.6.9/screenshots/opensnitch-ui-general-tab-deny.png
(binary PNG screenshot: OpenSnitch UI, general tab with a deny action; data omitted)
*ˆŠ+îZîjæ’KšÖke¥¯–fšVšù˲̲E}µÌ\3-—r/5·ÜrßBPPPP‘]¶™9¿?hN›ˆ(æÜŸëš8˳3ÃsŸç9g4ä3jÔ(%ÿ2!„B!„ÿ>óæÍÓäý[gúåµ×^SêׯOãÆqtt|ð%B!„BQfRSS VΞ=Ëܹs5 È
þ
„»»{ù–P!„B!D™JNNfåÊ•Ì;W£¨_¿¾B!„Bñruu¥^½z X5JéÞ½;666å\,!„B!„÷ƒ»»;¶¶¶ÿ§äž?!„B!Ä¿‚Á` ++‹ììlŠ¢ ÕjÑjµØØØ`gg‡V«•üòqqqò<F!„B!VŠ¢‘‘Aff&F£Q]¹A“ÑhD¯×“™™‰-h4šâ’´Èü$ B!„B<ÔE!--ììlE¡B…
T¨P'''t:z½ž´´4nܸABB‚D999•*Hz”ó“ P!„BñÐR…Û·o“““ƒµµ5µjÕÂÁÁA]o4Ñjµ¸¸¸àââ‚··7.\ ''‡ôôtï*HzÔó+»ÉªB!„BQƲ³³ÉÉÉA§ÓQ·n]ììì0E¾ìíí©S§VVVèõz²³³%¿<î: T…œœRSS¹yó&7oÞ$55•œœuŽªBˆò KlR6òiü¨)‹c+ç‡âßGQôz=z½žêÕ«£Ñh0w|iµZªW¯Ž^¯W¤"ùåºë) z½žôôtœ©U« 111¤¦¦âè舵µõÝ&)„åKŸHèï›Ø}æ2ÑWoatñ¡Š=Ú<Ù•FàgZNçöþʶ?ùz#‰,3žÞÔïЛ>M+ÝñŠ’|€ÿ·‚PM]ž{ÿUZ¹‚!;“ƒ:;t%™¢?ÎÂqßr¢bޞ܋*Åeª$±÷«wYnÄ¡þsLÙ
u{=§Oà룞<ùÎ$žô»céᆲeénê
%8V¥Oáû<¶Å%Xi!ÄÃÁh4b0pqqÁÖÖV}8JIØÙÙáââBff&ƒîΡÏÒߢE‹¨Q£íÚµS·Ý»w/6¬ÔùA)F 333qrr¢~ýú¸¹¹áææFýúõqrr"33ón“Bˆr¥¤‡³þ‹˜»þ a×<ý«âšss‡7±à£™üx:™’ôßc2'¾ÿœ¹k÷s!AÁÝ??…[—Î}»dßÓª±ñÄ·²+®>~xÚj@Ifÿ¼ ¼1áìKºŸµP¸ý×~9•f6²tW£L¬¬e $Ǫ´õ)b¿Çö~§!„ ÓìC777³°k¶ìêÕ«8pÀl™››999÷œŸé•‘‘ÁçŸNFFF¡ëË*¿:uê°hÑ"vìØÁ``÷îÝjPx/ùA)F ³²²ð÷÷G¯×›-÷òò"<<ggç»MR!ÊIý¼”ß.ÝÆ£ÉxíùÇñ²1[¿eÁæóìù~-µ¦¾HcÇûÛaV®íg˱[àÓ7ß~
ÿ¿}&ÙØ•ìj}mú¾õ1}ÕDïOYÐhВ±u›hUûjÛ? |ËI™«»•ÿØ–WBQrrrÔûâ 7h:vì«Vb„ x{{ÏÌ™3©Y³&Íš5SŠbkk‹^¯¿«‡¤äÏÏÄ`00gÎBCC™={6¯¿þ:VVVfÛ”U~mÚ´ÁÚÚš¹sçráÂþüóOFŒAÛ¶mÉÈȸ§üJõPÓwPä_&„ÿ&JúöMû†ôøÞ¦ÙžZ'jöB·sÓYwñ$ûO¥Ò¨Eß[BR—ç¾¹›½¡×Èq i÷<ÕÂ@I»ÄîŸa_h4·ôNT®×–¾ý;SÃYû÷Ãï¸ÚêÚ±ëTÉÚ
Ôí0ˆÁ]k`—–BªÀ€Þ €uî¹Fg‡©ÀwHÃÉp–åoÏã ®-¯ðñÿ{—•áàN)O%ÍWÖ©m‰øì~¹âN—7ÿþÕ ã‹ß_Æ©ú¼3šŽ´(釘?i¡Uúóã:SAÏñMëØvò"ñ©V¸W©G›§úÒ±ºýqŽ[ÈÅÇ^` ãQ6Œ$Õ§£G¹›—5ãë>›Íïqn<þß·x®¦ªît¬Ô©±Û> 9”í[vr<<šëÉYX»úP«å“ô}¢>žÚ¢÷TòÛ¤Ë&öð~Ù~‚Ë7R18TÀÛ·)}^lMü‚’¦QÌyPÍXxúÞ$èð…Vkö]x&ãÆã«¯¾â“O>á…^`ùòå2vìXÒÓÓÕí4
F£±ÄRQù,X@tt4S¦LaΜ9Ì™3‡‘#GšM½,«üÒÓÓiÞ¼9:ubÇŽ´k׎Ö[“––f¶ÿÝæ¥˜jkkKll¬z³¢é‹íBˆ‡„!æ"—³AW=˜Îù>85iÖ¼:ZEOÌåXr?’sˆüu);®9á_³2V7ÃØµd.ëÎe>–ßþ÷«$àT·-ºpóÈÏÌ_vˆ[êõ1#×ö®dÃy…JÕ|pHåøú%l
ÏÆÊ7ˆZÎŒWgöÿÍdÁOÛ9v1‰œ£xE§aÎŽª-:RÏSêuìNµqÓ Ê
öΟɷÛN“`]u}±MËD±ÿç*¦’x–?/‚—¿7¶iÑ[·˜ÍEE”U[‰†|±R‰¸pÐ_ çBJ&Y9‘äD†q)GKÕFÁxj®³çë™,ÜJŠS
êÕp$5â k¾üŠMQzµ\Iû–²pW4xzáîãEż‡_Iç¯uËØyMƒOççPÿŸà(Á±*¦í¹AĹ›XyѸQ ®9±ß²%»ãQŠÝÏœ>r#ß,ÝIx¶Úv¦uý*¹·+yÅE§_ø) „÷‹V«E£Ñm6õ199™1cÆÈœ9s¨Vo¾ù&©©©fÛegg£ÑhÐjKö–Ÿ^¯çÛo¿åòåË|ôÑGøûûóÁÃÂ…Õ±”U~¦×¶mÛØ¹s';vdß¾}lÞ¼¹À6w›”bÐÎÎŽÔÔTÎ;G¥J• ¸~ý:ÙÙÙ2ýSñ¯¢d¤‘X98Rð–(-N®.è4™™õÏRï^}ë)ªéÌãƒïÿâÐî„d†²=Ê€W×WýT5tJ“'ñý¹?9“òír÷×x´gÄäÔ²†ëÛ?eÚº(ÂB¯¡ÔnÄ áý0,ß̉ë—9±ë2'v¯Ç9 C†
‘ç?ìE¥a¬‘§ø;üïLðá]ü•Tú]zÑáï'´è#v±-<›šxsL'*å½òwŒ¤©Ð‘‘“Ÿ¦¦µ‘k¿~Âôõ1„Ÿ‹ÇX³J¡W5öAôîߌ3ŽðëšC4ÙÂl}æÙߊnŸÔÇh_DY•†ñÝMì…HÒºV"%2’Tgw\Ó“¸táúÇêp%<‚tmº4¨€!b5ÛÂ2°=˜ñ£Ûâ©É!zógÌÜÍ{Âè>äïÙ×fÀ„ÿÒÁ4ì«?þ÷
#ÉÇdÅþlúðb¯Øç¿6àt§cUtÛãÑžW§·ÿç¼¾O§ãÒ™0’;¶/z?³I6
Ù7âIRÀÁ§wlE€‡í?Aj‰Ò }dQçÂí‹Å¤/„V«ÅÊÊŠÛ·oãääd¶.))‰ñãÇ3{ölÆŒCbbb§a¦§§£ÓéÐjµ%š±XT~mÚ´aذaèt:nß¾N§ãÃ?$<<EQÔ|Ë*¿ƒòÃ?0bÄÚµkGƒ
˜;w.Š¢Ð¶mÛR×J êt:œÉÌÌäÊ•+@î¨ ³³s‰Ÿ<#„=¶(d¤¦p[³®BæíÛpvt@C¶V Üê7Äßê,áñqDÇÇ“¥‰ûm£Ë“ŒUÙyïͶ²Fg•›‘‡W%¬5—ÉÎÉBAƒc`^y¯
7.œäÈá?ùóx87.îfér/üÆ´§ÂÓ(®²¦_Œ$G]!IÑRµ^}*Z±¹ZO-½½Ði¢s¿î§È´8÷¥wƒ¿X~f¿œªCcuk…¤¸¶YYAãL#ßMD_>ÏÅ캤\ˆÇ1x ®bÃ…p¢sÜ9> okTRH:E²¢Å¿n]Ü5 Öø5nDåWˆ#Ùè‘›°“/Õ*|«’z‚5?^'IãË“ƒºàWèC`Kp¬4f›ça$1l7Û÷'üÊM’RÓÈ0‚6;›ìüV‘—»ÚѬB‡Nýħ§×áä×=zÑ¥¡¶%J£¸óàéK$(„xÀILLÄÞ¾àô¤¤$ÆÇÍ›7
€’’’̾T½´ù5lØììl²²r/
¬iذ¡Ù”Ó²ÊïÊ•+¼öÚk4nܘ7nШQ#^{í5NžUñÓi»tŒS·ZÑ6Ï(J
gO] ±¥j@e´œ/˜€V›;f¥ÅÚÚÐàñøŒèTùŸQ2îžÚ£/ÀßO³©’Z;*=F Çéü+_ÎØÀ¥‹a\ÎiO…‚µÂÒ(–ýßO3‹ßGíçk4%ýѺñØ€'9t~5ÇÖmÂÁëŸ ÐÚæíSd!|nèÃæM‘\ˆ¼HÊkj¶kB°û6m‰äB¤;áqàÓ½!>
×ÿþ§hÌÛà:Vyª¥*$;?œ’¹~ñGEÐ¥JvEU¾¸cUă[s"~æ«ÿýNr•vôØÖòaE·@áY»5aè{5húOŽž:ÉÇÙ¼ Šô7Þc`;ï§ó ØôkÊ×= !EQpqqáÊ•+ܾ};;ó¹èF£‘¸¸¸B÷ÍÌÌ$==ªU«šM¥É/55µÀ¶YYYj@XÖù1‚ÌÌL’““HII¡aÆ´lÙ’7n”:?(Å=€Bñ¨Ð¸6䱺öΖ÷aêgsíàOl<“îÍhüÏ=`Æô[$f)€BúùP¢üªá_nZ…¤KWȩ䇟ß߯Êp,É'’Dìå2óÄ-ZkëÜi;ìKõim…••È!'ÇT
.•*b1ríÌiâKþÿ¢D4Û2°k5¬²7ÔtÏ·ªUqÓ×>…•5·¼•5ÂG›HøÎýDèý
´Ç»N]*(1œúõ8—/‚ƒ½Ñ ÁÝÛ[‘˜S'¹a P¸u.”«F
îU«âr§v´ò¢ýàÚ¹¾w%ëÃ3Š%:V…ÕG!!â<7õZª4¡upMüÜíò×EµC>FÕš„ÐÿÅ7ÕÍ’HLLÆ¥q‡ó Øô…âÁQFCÅŠ‰-ô^¹Â^ÙÙÙÄÄÄP©R%4Í]}1ûÃ_\\IIIfy$%%wOùA)Ÿ*„Í>KxüRžYͧSÿÀ¿Z4IÑ\ŠMFïX'†>Em;Ô{㔄},˜qƒúU!úL8éV~ôèRG?oºÔ:ÈOá;™?+•6Mü°M‹åÍÖ¿>wzv†áÒï|óÙï$9VÀ»’;öÆDb¢orÛhKõN¨eýOJ^?|*»£9w™ß—}OZÝ:´èÞß:iéy‚½—ÖóŬhšV·#åj&µ½H;Ï»mÄütøvHÇ£³Ø~Õ€òwüaØŽ.A‡ŠnŸ¢Êªw#yoasè9´Õž¢¦³C]j»ngOx$ZŸî4ôÍìlêu¡S•lŽ\ÏŸ]¢¶g
N]$Û¡.]ÛWʤ;ÖÀʧ#Ït?ɧë/ñǪõ4|{ µó–èXQX}šâ^±"všh"·~Dz„@”‹ò—!ÏÕØ¢Ú!_õVóá÷1øÕNE›T.¹†ÑªþþÎh5š¥a_ÌyÐ*©˜ôïúœBˆ{“““ƒ‹‹z½ž+W®àíí]ìÃ'³²²ˆ‹‹ÃÃãTßUþ¨ç'ŸãB‹¦ñhÂso¿Å]›èžÃµˆHn+P»u?ÆL~ƒžAŽf”·@j¹&pîÌô•ñä¨Qô¨¢FN䥆TÌ
cÏæMì8ÞEz †L×ú„tkN
w-)q—¸—“oº<7–‘]«PºIwVtÊS|!ö(ûåb‚ìë0`ìHz6©Š}Âöí;AT¦CæÝF˜E°ñç‰ñÔæ½™ïNíSDY´>7òF£h¨P«fî;uÔrDƒ–Š
âg:Hº*<1ê5ú6¯†Ýsœ:ŸŠsý®Ÿ8‚¶…Ρ-ŒßÎCèî¯Ãx}/+ ##Ï…Õ’«Âê£àФ/Ïu¬…§1–SGÎRµ;Ý‚ò^(¦þ)9VžTóÈàÒñ½ü¾ç Ž5iÿÜpzèJ˜Åœ9wH_!¼ÌÌL<==ñööæÚµk\»v””²²²Ðëõdee‘’’ÂÕ«W¹ví>>>xzz’]Ĭ-8?ͨQ£”Q£F•ª Ba1ôÇY8î[NúôaòÛÝñ¹‡a(ä6GÁ%E/½»4J_¦ÒoõoQ–µ)÷–)£™L¹WP!rÙÙÙ¡ÓéHLL$))‰ŒŒôz=:{{{ÜÜÜððð ''ç®GÆ,!¿yóæÉP!„¸[÷Ú.¸á)—OIÓ(©’íýhE eY›ro™2*@1Bˆ‡Bff&Zggg<<<°²²Rï3}?yzzz‰¿Áó“ P!„Bñ¯a4ÉÌÌ,“7KÌOîB!„B!#€BQº&¼<{~y—B!„âžÈ B!„BX …B!„ÂBH („B!„B@!„B!„°E>Æ××÷A–C!„B!D‰-t¹Œ
!„B!„… P!„B!,„€B!„Ba!$ B!„B! B!„BX …B!„ÂBH („B!„B@!„B!„°
!„B!„… P!„B!,„®¬Šˆˆ 55•„„„2IÏÓÓWWWªW¯^&é !„B!„¥+“ 0""FCûöíË"9ÕñãljŒŒ$00°LÓB!„BKT&S@SSSiܸqY$e¦I“&¤¤¤”yºB!„Ba‰Êd0!!EQÔ¿9ó?øˆL}ú]¥c§sd`ãI<Õà
³´…B!„BÜ»2»0o ¸êø|;à"Vš¢“oÓ©+ûvn3[fPô¼²¦:}ê-«b !„B!„ø[™€ðO˜mÈÄÙÑÏ>ÿ¤ÐíÞ÷ NÖe2Õt4MYO!„B!,Z™Ž š7k+[Œ¼5aR‘ÛŸ:ògezCNî¾yF…B!„B”24é<‘ÁK+mÈ,°nÚàUlKüˆ}¿ž.°ÎÆÊŽþÁïG‘„B!„ÂâÝ—À~
&ЯÁ„Û\æbö÷OC§Œ
!„B!DÙ»/`^W’B™·$¶:‡ëÆþÒ'[þÓôjUlYVEB!„BQˆ2ù@ø' Ìÿš»o8íšu¥SËžx©ÛxѯÃKÔ¨S•O~XäþB!„B!ÊÆ}T# 9ɲwàÙöÿÜßצe®+gˆË%=['ÁžB!„BÜg÷å!0y=Ûd_îï ®\ÀMçÀ¥ÌƒÄFÝ$îR
oµ_}¿‹!„B!„ï¾N=|ô Û<ÃõŸë¶;…#O•t†Œô,NýIØî"–Uá…ï°ÿЙ*„B!„÷Ñ}ZÁ³"S&½Ã›™¯£(
6Ö6XYéÈÑg“S5‡œ®ÙdÌ&;'‡Ó§OHÀ'„B!„÷Q™ BÁQÀS§O‘œœÄ[ ܼqÎFƒµ-Y™Y\¼IVfVh‰‹‹—Ñ?!„B!„¸îëà•è+äès0$&%ƹ°P233±¶¶¡Cûö XÛØ_` …B!„¢ìÜׇÀÄÇ]ÇÆÆ[\]Ý ä±ÇZ¹B!„Bˆû美 ÆÇÇ3⿯ÜUÅý-„B!„¢ôîk 8ÿßÜuÅý-„B!„¢ôÊ$ twwÇh4¢Ñh0e‘$Z£Ñˆ»»{™¤'„B!„–®LžêêêJhh¨–ÅËh4Š««kYQ!„B!,^™Œ Ö¨Qƒ.püøqË"IÜÝÝquu¥Fe’žB!„BXº2»°fÍše•”B!„Bˆû L¿^!„B!ÄÃK@!„B!„°
!„B!„… P!„B!,„€B!„Ba!Êì) ¤¦¦’PVI>²<==quu¥zõêå]!„B!„)“ 0""VKÇŽË"9‹pìØ1"## ,ï¢!„B!,D™€©©©üÝ¥¦M›²k×.¢££¹}û6iiiå]¤ûÊÉÉ '''|}}-¦Î&–\÷MÚº|XZ»[Z}MòÖ[!Ä¿W™€2í³tpss£zõêh44Myé¾PEQˆ‰‰!<<›G¾Î&–\÷MÚº|XZ»[Z}MòÖ;&&??¿ò.’BˆR*³{ Eéøùù¡Ñh0å]”ûJ«ÕâççGxx¸ÚYzÔëlbÉuФˇ¥µ»¥Õ×ÄTï‹/–wQ„B܃2 E)«¤,Š¥tŒF#Z–¬¬,‹©³‰%×ýA“¶.–Öî–V_S½-aº«B<Êäk ÊÙ£