
[{"content":"","date":"7 April 2026","externalUrl":null,"permalink":"/tags/ansible/","section":"Tags","summary":"","title":"Ansible","type":"tags"},{"content":"","date":"7 April 2026","externalUrl":null,"permalink":"/tags/k8s/","section":"Tags","summary":"","title":"K8s","type":"tags"},{"content":"","date":"7 April 2026","externalUrl":null,"permalink":"/","section":"Nikola Simić","summary":"","title":"Nikola Simić","type":"page"},{"content":"","date":"7 April 2026","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":" Introduction # Recently, my colleague Luka and I started our journey of learning Kubernetes. Both of us had some basic knowledge but no real experience, so we decided the first thing we should do was create a real K8S cluster that we could use for practice. There are alternatives to this approach, mainly setting up a local environment (you can check a guide here), but in addition to a local environment not being great for collaborative learning, we wanted to get as close as possible to a production-grade setup, excluding managed services like EKS or AKS.\nIn the beginning we used a Cloud Guru subscription that allowed us to create VMs and resources in an Azure environment lasting 4 hours, which was plenty of time for our learning sessions. After a couple of sessions we realized that provisioning infrastructure resources and configuring the cluster manually with our bash scripts was tedious and time-consuming, so, with me being more interested in DevOps, we decided I should automate it.\nInfrastructure # For the most basic setup, we need two nodes (VMs) that can communicate with each other. We decided to go the cloud route, since it will bring us some benefits in the future, mostly with setting up custom domains and load balancers, and also because my homelab currently has no available resources for such a feat :D. 
AWS was the platform of choice, but I strongly advise you to experiment, since we are not utilizing anything that other platforms do not provide. If you are looking for the best price-to-performance ratio, a Hetzner cost-optimized VPS is my recommendation.\nInfrastructure diagram I went for the t3a.xlarge type for both EC2 instances, which comes with 4 vCPUs and 16GB of RAM. To be honest, after some testing, our practice cluster never had more than 25% of its resources consumed, so I advise going with cheaper instance types. I believe 2 vCPUs and 8GB of RAM is enough for a project like this one, and you can always scale up or down depending on your needs. Since that instance type does not come with default storage, both instances have a 15GB EBS volume attached to them. Also, for now we didn\u0026rsquo;t want to complicate networking, so we have only one public-facing subnet.\nAutomation # For the automation part, the IaC tool of choice was Terraform, for no particular reason other than that it\u0026rsquo;s the most popular and widely used in the industry. For now the whole setup fits in just a few files, but I\u0026rsquo;ll surely have to refactor it later to use modules. 
Directory structure\ninfra/ ├─ providers.tf # Defines terraform and provider requirements ├─ outputs.tf # Defines public IPs of the instances as outputs ├─ network.tf # Defines network resources ├─ nodes.tf # Defines EC2 resources ├─ security.tf # Defines Security groups and rules network.tf\nresource \u0026#34;aws_vpc\u0026#34; \u0026#34;k8s_practice\u0026#34; { cidr_block = \u0026#34;10.0.0.0/16\u0026#34; tags = { Name = \u0026#34;k8s_practice\u0026#34; } } resource \u0026#34;aws_subnet\u0026#34; \u0026#34;public\u0026#34; { vpc_id = aws_vpc.k8s_practice.id cidr_block = \u0026#34;10.0.0.0/24\u0026#34; tags = { Name = \u0026#34;public\u0026#34; } } resource \u0026#34;aws_internet_gateway\u0026#34; \u0026#34;portal\u0026#34; { vpc_id = aws_vpc.k8s_practice.id tags = { Name = \u0026#34;portal\u0026#34; } } resource \u0026#34;aws_route_table\u0026#34; \u0026#34;public\u0026#34; { vpc_id = aws_vpc.k8s_practice.id route { cidr_block = \u0026#34;0.0.0.0/0\u0026#34; gateway_id = aws_internet_gateway.portal.id } } resource \u0026#34;aws_route_table_association\u0026#34; \u0026#34;public\u0026#34; { subnet_id = aws_subnet.public.id route_table_id = aws_route_table.public.id } nodes.tf\ndata \u0026#34;aws_ami\u0026#34; \u0026#34;k8s_node\u0026#34; { most_recent = true owners = [\u0026#34;136693071363\u0026#34;] # Official Debian account filter { name = \u0026#34;name\u0026#34; values = [\u0026#34;debian-13-amd64*\u0026#34;] #https://wiki.debian.org/Cloud/AmazonEC2Image/Trixie } } resource \u0026#34;aws_instance\u0026#34; \u0026#34;control_1\u0026#34; { ami = data.aws_ami.k8s_node.id instance_type = \u0026#34;t3a.xlarge\u0026#34; associate_public_ip_address = true subnet_id = aws_subnet.public.id vpc_security_group_ids = [aws_security_group.k8s_node.id] root_block_device { delete_on_termination = true volume_size = 15 volume_type = \u0026#34;gp3\u0026#34; } key_name = \u0026#34;dxkp\u0026#34; tags = { Name = \u0026#34;control_1\u0026#34; K8SType = \u0026#34;control\u0026#34; 
} } resource \u0026#34;aws_instance\u0026#34; \u0026#34;worker_1\u0026#34; { ami = data.aws_ami.k8s_node.id instance_type = \u0026#34;t3a.xlarge\u0026#34; associate_public_ip_address = true subnet_id = aws_subnet.public.id vpc_security_group_ids = [aws_security_group.k8s_node.id] root_block_device { delete_on_termination = true volume_size = 15 volume_type = \u0026#34;gp3\u0026#34; } key_name = \u0026#34;dxkp\u0026#34; tags = { Name = \u0026#34;worker_1\u0026#34; K8SType = \u0026#34;worker\u0026#34; } } security.tf\nresource \u0026#34;aws_security_group\u0026#34; \u0026#34;k8s_node\u0026#34; { description = \u0026#34;Security group for all k8s nodes\u0026#34; name = \u0026#34;sg_k8s_node\u0026#34; vpc_id = aws_vpc.k8s_practice.id } resource \u0026#34;aws_vpc_security_group_ingress_rule\u0026#34; \u0026#34;allow_ssh\u0026#34; { security_group_id = aws_security_group.k8s_node.id description = \u0026#34;Allow SSH ingress from anywhere\u0026#34; cidr_ipv4 = \u0026#34;0.0.0.0/0\u0026#34; from_port = \u0026#34;22\u0026#34; to_port = \u0026#34;22\u0026#34; ip_protocol = \u0026#34;tcp\u0026#34; } resource \u0026#34;aws_vpc_security_group_ingress_rule\u0026#34; \u0026#34;allow_node_communication\u0026#34; { security_group_id = aws_security_group.k8s_node.id description = \u0026#34;Allow K8S node communication\u0026#34; referenced_security_group_id = aws_security_group.k8s_node.id from_port = \u0026#34;-1\u0026#34; to_port = \u0026#34;-1\u0026#34; ip_protocol = \u0026#34;-1\u0026#34; } resource \u0026#34;aws_vpc_security_group_egress_rule\u0026#34; \u0026#34;allow_any\u0026#34; { security_group_id = aws_security_group.k8s_node.id description = \u0026#34;Allow all egress traffic\u0026#34; cidr_ipv4 = \u0026#34;0.0.0.0/0\u0026#34; from_port = \u0026#34;-1\u0026#34; to_port = \u0026#34;-1\u0026#34; ip_protocol = \u0026#34;-1\u0026#34; } It\u0026rsquo;s worth noting that we allow SSH access to all of our nodes only because we still connect and work on them, but the goal is that 
none of those have public SSH access. One cool trick is to define default tags inside the AWS provider block. That will tag all of the provisioned resources with those tags, in our case: ManagedBy=Terraform and Project=k8s_practice.\nIn addition to this simple Terraform setup, I created two Github Actions workflows that we can use to deploy and destroy the infrastructure in one click. For additional security and cost optimization, the \u0026ldquo;Destroy Infrastructure\u0026rdquo; workflow runs every midnight in case we forget to destroy the infra ourselves.\nI plan on further expanding the infrastructure with AWS Auto Scaling Groups that will help simulate scaling of the cluster itself under load.\nCluster Configuration # For the cluster setup, we initially created a pair of bash scripts containing the necessary steps to set up the cluster, one for the control plane and one for the worker node. Thanks to the official Kubernetes documentation we haven\u0026rsquo;t had many issues with it. It turned out that even the couple of obstacles we encountered were there because we hadn\u0026rsquo;t followed the guide thoroughly :D. Initially we started with Ubuntu Server images, but later decided to continue with Debian as it does not use snap and has a smaller memory footprint.\nAutomation # Again, I needed a tool for configuration management, and again I chose the most widely used in the industry - Ansible. Unlike Terraform, this was not that simple a choice, as there is a lot of good configuration management software available right now, though ultimately Ansible\u0026rsquo;s agentless approach is why I went with it. Since we already had the bash scripts written, I just needed to adjust those steps and express them the Ansible way. One small challenge was making a dynamic inventory to accommodate our procedure of frequently destroying and creating instances without static IPs assigned to them. 
For that I used the AWS dynamic inventory plugin\nplugin: amazon.aws.aws_ec2 regions: - eu-central-1 cache: true cache_plugin: jsonfile cache_timeout: 300 cache_connection: /tmp/ansible_aws_ec2_cache hostnames: - ip-address filters: instance-state-name: running tag:Project: k8s_practice compose: aws_tags: tags keyed_groups: - key: tags.ManagedBy prefix: managed_by separator: \u0026#34;_\u0026#34; - key: tags.K8SType prefix: k8s_node_type separator: \u0026#34;_\u0026#34; The crucial part here is the group keyed by the resource tag K8SType. We use this group to separate the Ansible roles (sets of tasks) for the control plane nodes from those for the worker nodes. Directory structure\nconfig/ ├── ansible.cfg ├── README.md ├── requirements.txt ├── requirements.yaml ├── inventory/ │ └── aws_ec2.yaml ├── playbooks/ │ └── site.yaml └── roles/ ├── addons/ │ ├── files/ │ │ └── argocd-values.yaml │ └── tasks/ │ ├── 01-argocd.yaml │ └── main.yaml ├── common/ │ ├── files/ │ │ └── containerd-config.toml │ ├── handlers/ │ │ └── main.yaml │ └── tasks/ │ ├── 01-system_setup.yaml │ ├── 02-common_packages.yaml │ ├── 03-containerd.yaml │ ├── 04-kubernetes_dependencies.yaml │ └── main.yaml ├── control_plane/ │ ├── files/ │ │ └── k8s-cluster-config.yaml │ └── tasks/ │ ├── 01-kubernetes_cluster.yaml │ ├── 02-kubernetes_qol.yaml │ ├── 03-kubernetes_networking.yaml │ ├── 04-helm.yaml │ └── main.yaml └── worker/ └── tasks/ └── main.yaml Admittedly, this setup looks a bit complicated, but I assure you, if you have a solid understanding of Linux and Bash it\u0026rsquo;s pretty easy to understand. 
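To make the keyed_groups part concrete, here is roughly how a tag on an instance ends up as an inventory group name. This is a simplified illustration of the naming scheme, not the plugin\u0026rsquo;s actual code:

```python
# Sketch of how keyed_groups combines prefix, separator and tag value
# into an Ansible group name (illustrative re-implementation only).
def group_name(prefix: str, value: str, separator: str = '_') -> str:
    return f'{prefix}{separator}{value}'

# An instance tagged K8SType=control lands in a group like this,
# which the playbook can then target with its control_plane role.
control_group = group_name('k8s_node_type', 'control')
worker_group = group_name('k8s_node_type', 'worker')
```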
Under the site.yaml playbook we have defined 4 roles:\ncommon - Runs on all nodes, sets up the system, common packages and Kubernetes dependencies control_plane - Runs on control plane nodes, initializes the K8S cluster, sets up some bash aliases for the admin users, sets up Flannel as the CNI and installs Helm worker - Runs on worker nodes, joins them to the cluster addons - Runs on control plane nodes, installs ArgoCD So as not to clutter this post with a thousand more files, and for the sake of keeping my cluster definition private, I won\u0026rsquo;t be sharing my Ansible setup here, but if you are interested, be sure to contact me over email or LinkedIn. This playbook is set to run on a successful infrastructure deployment utilizing the workflow_run trigger inside Github Actions.\nPrice (As of April 2026) # I\u0026rsquo;ll start with the assumption that both Luka and I spend 10 hours (probably a bit overestimated) a week working on this project with the cluster deployed. Here are the numbers on a monthly basis:\nAWS [~40 hrs/month]\nResource Qty Unit Price Monthly Usage Monthly Cost t3a.xlarge (control plane) 1 $0.1728/hr ~40 hrs $6.91 t3a.xlarge (worker node) 1 $0.1728/hr ~40 hrs $6.91 EBS gp3 15GB (per node) 2 $0.08/GB-month ~40 hrs (prorated) $0.13 Total ~$13.95/month As I mentioned, a cheaper instance type like t3a.large ($0.0864/hr) would still be sufficient and cut the cost roughly in half - down to ~$7.05/month total.\nNot that far off from a Netflix subscription price, depending on where you live :D. 
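The arithmetic behind the table can be reproduced in a few lines. Prices and hours are the ones quoted above; the ~730-hour month used for the EBS proration is my assumption:

```python
# Back-of-the-envelope check of the AWS cost table.
def line_cost(qty: int, hourly_rate: float, hours: float) -> float:
    # per-line cost, rounded to cents like the table does
    return round(qty * hourly_rate * hours, 2)

control = line_cost(1, 0.1728, 40)        # t3a.xlarge control plane -> 6.91
worker = line_cost(1, 0.1728, 40)         # t3a.xlarge worker node   -> 6.91
ebs = round(2 * 15 * 0.08 * 40 / 730, 2)  # 2 x 15GB gp3, prorated to ~40 of ~730 hrs
total = control + worker + ebs            # ~13.95

# The cheaper t3a.large variant mentioned above:
small_total = 2 * line_cost(1, 0.0864, 40) + ebs  # ~7.05
```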
But if you think this is low for a fancy cluster setup like this, take a look at the Hetzner alternative I mentioned before:\nHetzner [~40 hrs/month]\nResource Qty Unit Price Monthly Usage Monthly Cost CX43 (control plane) 1 $0.0224/hr ~40 hrs $0.90 CX43 (worker node) 1 $0.0224/hr ~40 hrs $0.90 Total ~$1.79/month Now, looking at this, Hetzner seems like a no-brainer (and trust me, it is in so many cases), but I still decided to go with the AWS setup (with the smaller t3a.large instance type) since it offers some services that Hetzner does not, like a private container registry with ECR and a managed K8S solution that I want to test as this project progresses. But who knows, I might set up some of those services in my homelab and ditch AWS? I\u0026rsquo;ll make sure to document that in a blog post as well :D\nConclusion # With just a couple of USD a month you can have a fully automated, version-controlled, production-like Kubernetes cluster setup without a managed service. I enjoyed working on this one and I highly recommend that people who are starting with Kubernetes try a project like this one, as there is no better way of learning Kubernetes and DevOps in general than working on a real cluster and infrastructure.\nReferences # Kubernetes documentation: https://kubernetes.io/docs/home/ Terraform documentation: https://developer.hashicorp.com/terraform/docs EC2 instance type On demand pricing: https://aws.amazon.com/ec2/pricing/on-demand/ Hetzner VPS: https://www.hetzner.com/cloud/ Github Actions documentation: https://docs.github.com/en/actions Ansible documentation: https://docs.ansible.com/ ","date":"7 April 2026","externalUrl":null,"permalink":"/posts/simple-k8s-lab/","section":"Posts","summary":"I am currently learning Kubernetes and of course, there is no better way to learn than to practice on a real cluster. 
In this post I\u0026rsquo;ll present a simple, cheap and automated way of building a real Kubernetes cluster with Terraform and Ansible","title":"Simple, cheap and automated K8S cluster setup","type":"posts"},{"content":"","date":"7 April 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"7 April 2026","externalUrl":null,"permalink":"/tags/terraform/","section":"Tags","summary":"","title":"Terraform","type":"tags"},{"content":"","date":"10 June 2025","externalUrl":null,"permalink":"/tags/containers/","section":"Tags","summary":"","title":"Containers","type":"tags"},{"content":"","date":"10 June 2025","externalUrl":null,"permalink":"/tags/docker/","section":"Tags","summary":"","title":"Docker","type":"tags"},{"content":"","date":"10 June 2025","externalUrl":null,"permalink":"/tags/podman/","section":"Tags","summary":"","title":"Podman","type":"tags"},{"content":" Why alternatives? # Chances are that the first container-related technology you heard of is Docker, and for a good reason. It\u0026rsquo;s a well-established platform for working with container workloads, with a developer-friendly interface and, at the same time, very robust functionality. Besides that, Docker as an organization was one of the creators of the OCI, a project that defines important standards for container technologies.\nSo\u0026hellip; Why would you look at the alternatives?\nWell, let\u0026rsquo;s be real, Docker is not going anywhere for now, BUT I think it is beneficial to have some knowledge of different technologies, as those might better fit some use-cases you encounter. 
Besides that, in my opinion, some points of discussion can be made about Docker\u0026rsquo;s architecture, which is based on a daemon process that by default runs with root privileges.\nDocker licensing # While Docker Engine, the heart of Docker itself, is open source and free to use, you can find some discussions online about the licensing/pricing of the Docker platform itself. I took some time to do some research and I actually find the pricing fair. As I said, those subscription plans include some additional features regarding the ecosystem itself, like Docker Hub and Docker Desktop. You can easily use alternatives for those platform offerings as well, so you are not really locked into the ecosystem, which I appreciate.\nDocker subscription plans Docker architecture # Docker architecture Here is the architecture diagram from the official Docker documentation. From here we can see that we have a standard client-server architecture between the Docker Client and the Docker daemon. The Docker daemon listens for Docker API requests and manages Docker objects using containerd as its underlying core. What\u0026rsquo;s great about this architecture is that the Docker Client doesn\u0026rsquo;t have to be on the same host machine that is running dockerd.\nNow, the trick about dockerd is that by default it\u0026rsquo;s running with root privileges, and that makes it a target for threat actors. Besides that, the default way of running Docker containers is running them as the root user. Even though it is not at all trivial, if a threat actor is able to escape the container isolation they\u0026rsquo;ll have root user access to the host.\nThese security risks can be mitigated by using Rootless mode, which has some prerequisites and limitations but is way more secure than the default configuration.\nPodman # Podman is a tool for managing and running OCI-compliant containers, developed by Red Hat. It comes with a CLI that is going to be very familiar to anyone who has used the Docker CLI. 
There is a running joke in the community that you can just create an alias docker=podman and use Podman just like you would Docker. Additional features for building and manipulating container images are wrapped into the Buildah and Skopeo tools. GUI software is also available with Podman Desktop.\nPodman Architecture # In contrast to Docker, Podman\u0026rsquo;s architecture is daemonless. To be more specific, when executing the Podman command to run a container, Podman will use what is called a fork/exec model, meaning the container process is a child forked from the Podman process. Therefore the container process loginuid (the id of the process owner) will be the uid of the user who executed the command, allowing for rootless containers. One of the multiple benefits of this approach is also auditing, as the container process can be traced back to the user who started it.\nPodman containers can be easily integrated with the systemd Linux init system. This makes running containers as services a matter of a few steps. Porting multi-service apps that were hosted on VMs to containers can also be done by integrating systemd into a container. Of course, the recommendation for that case would still be to separate those services into multiple containers, but you might lack the resources to achieve that, so you have this alternative.\nSimilar to Kubernetes pods, Podman can create pods, which are groups of containers that share resources. Each pod runs an infra container by default that serves as a controller for the other containers inside the pod. 
These pods allow developers to create Kubernetes-like environments for local development and can also be exported as a K8S YAML file.\nFinally, Podman has support for a RESTful API which allows users to run podman commands from remote clients.\nUsing Podman # As a simple example, I\u0026rsquo;ll run an nginx web server container with a test HTML page using Podman:\nmkdir /tmp/test-web echo \u0026#34;\u0026lt;h1\u0026gt;Test\u0026lt;/h1\u0026gt;\u0026#34; \u0026gt; /tmp/test-web/index.html podman run -d -p 8080:80 -v /tmp/test-web:/usr/share/nginx/html:Z docker.io/library/nginx There are three main things worth noticing:\nUnlike Docker, Podman does not create a volume directory by default. This is by design. :Z added at the end of a volume bind is a tag that enables the container to write to the desired directory without SELinux interrupting it. This has to be done for Docker containers as well if you have SELinux enabled on your system. The image is referenced by its full URL because Podman has a few default image repositories. If not stated, you\u0026rsquo;ll be prompted to select the image from the desired supplier. If you wanted to run a container inside a pod, you would first create the pod and then run a container inside it, like this:\npodman pod create --name pod-web --infra --publish 8080:80 podman run -d --pod pod-web -v /tmp/test-web:/usr/share/nginx/html:Z docker.io/library/nginx Podman limitations # Podman does not support Windows containers Even though Docker-ready images are OCI compatible, you can run into issues running some of those using Podman Podman\u0026rsquo;s podman compose is a script written in Python which traditionally had some issues, unlike Docker\u0026rsquo;s built-in docker compose. Podman is not compatible with Docker Swarm The Podman ecosystem is not as robust as Docker\u0026rsquo;s Conclusion # Recently I have seen more people talking about Podman, so it\u0026rsquo;s definitely worth taking a look into. 
Who knows, it might even suit some of your use-cases better than Docker.\nReferences # What is Docker: https://docs.docker.com/get-started/docker-overview/ What is Podman: https://www.redhat.com/en/topics/containers/what-is-podman Understanding rootless Podman\u0026rsquo;s user namespace modes: https://www.redhat.com/en/blog/rootless-podman-user-namespace-modes Podman volumes and SELinux: https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/ Migrating from Docker: https://podman-desktop.io/docs/migrating-from-docker/importing-saved-containers ","date":"10 June 2025","externalUrl":null,"permalink":"/posts/podman-docker-alternative/","section":"Posts","summary":"I wanted to take a deeper dive into the Podman tool for container management and compare it to the well-established Docker platform","title":"Podman - daemonless Docker alternative","type":"posts"},{"content":" Introduction # ICMP tunneling is a method of encapsulating network packets (or any data, really) inside the ICMP packet itself. I first heard of this method in the Internet Networks course at the University. It instantly intrigued me because this method allows potential malicious actors to bypass network restrictions set by system administrators. As I researched more about it, I found out that it can be used for data exfiltration and C2 (command-and-control) attacks, though recent advancements in firewall software have made this method harder to pull off. 
Either way, as I was looking for a project that includes network programming, this was a perfect thing to look into.\nDesign # The goal of this software is to route traffic filtered by the firewall through the tunnel and access the Internet from a machine that does not have access to it.\nIllustration of the software functionality If you look closer at this illustration, it should be easy to recognize the problems that we have to solve:\nHow to route the traffic from the client PC\u0026rsquo;s inner processes to the process that encapsulates it and sends it through the tunnel? How to extract the original traffic on the server and route it to its original destination, but in a way that the responses from the destination are routed back to the client? How to extract response traffic on the client from the tunnel and route it to its original sender? I designed the solution to separate responsibility between encapsulating traffic and sending it through the tunnel on one side, and receiving the encapsulated traffic from the tunnel, extracting it, and forwarding it to the original source/destination on the other. Considering that the client and server roles share the same requirement of encapsulating/extracting the traffic, and the only difference between them is routing configuration, I decided to separate these two roles with a flag which is passed to the program as an input argument.\nImplementation # ICMP # First, a short explanation of ICMP itself. ICMP (Internet Control Message Protocol) is an OSI layer 3 protocol, defined in RFC 792, used for network diagnostics and debugging. 
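Since everything that follows rides on ICMP echo packets, a minimal sketch of building one may help. The header layout and the ones-complement checksum follow RFC 792; the helper names and the identifier/sequence defaults are mine, not from the project:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    # RFC 1071 ones-complement sum over 16-bit big-endian words
    if len(data) % 2:
        data += bytes(1)                      # pad odd-length data with a zero byte
    total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    while total >> 16:                        # fold carries back into the low 16 bits
        total = (total >> 16) + (total & 0xFFFF)
    return ~total & 0xFFFF

def build_echo_request(payload: bytes, ident: int = 1, seq: int = 1) -> bytes:
    # type=8 (echo request), code=0; checksum is computed over header + payload
    header = struct.pack('!BBHHH', 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack('!BBHHH', 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(b'tunnelled bytes')
assert icmp_checksum(pkt) == 0    # a correctly checksummed packet verifies to zero
```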
There are multiple ICMP message types, but for this particular solution the echo message type is used.\nICMP echo datagram The data field has no limitation on its length and is not subjected to basic firewall checks, which makes it a perfect place to put the data/packets that users want to tunnel.\nTechnologies # When I started working on this project, somewhat logically, I chose C as the programming language for my solution. I had some experience with it from secondary school and from programming some simple programs for ESP32-based microcontrollers. Unfortunately, when I started, besides having to learn and explore how ICMP tunneling works and the ways to implement it, I had to relearn C, and I didn\u0026rsquo;t have time for that. In the end, I chose Python as my favorite \u0026ldquo;Just make it work\u0026rdquo; language. Of course, this decision comes with a big hit to performance, but the focus of this project is to learn and demonstrate how this method works and not to create the most performant solution.\nClient subsystem # Routing traffic to the client subsystem to encapsulate it into ICMP\nFirstly, I needed a way to route traffic from the other processes on the client PC to the software subsystem which encapsulates the traffic into the ICMP tunnel. Libraries like Scapy could not satisfy my need, as they can\u0026rsquo;t directly manipulate packets from the kernel but just register them and create a copy to modify. While looking for an adequate solution I stumbled upon TUN/TAP interfaces in Linux. TUN/TAP interfaces are virtual network devices that operate on the network/IP layer (TUN) and the data link/ethernet layer (TAP). With the Linux built-in tools, I have created a TUN interface to which all the traffic from the client PC will be redirected. 
My software will then bind and read from the said interface and modify and encapsulate the traffic.\nip tuntap add name tun0b mode tun # Create TUN interface with the name tun0b ip link set tun0b up # Enable TUN interface ip addr add 10.0.0.2 peer 10.0.0.1 dev tun0b # Set the client\u0026rsquo;s IP on the TUN interface, with 10.0.0.1 as the point-to-point peer ip route add default via 10.0.0.1 dev tun0b # Create default route which sends all traffic to the TUN interface Receiving traffic from the client processes, ICMP encapsulation and forwarding to the tunnel\nReceiving the traffic from the TUN interface is done in a few lines of code using the pytuntap library. One daemon thread is listening on the TUN interface and reading from it. The buffer read from the TUN interface is encapsulated as ICMP using a custom-made ICMPPacket class which implements all needed features for the ICMP echo/echo reply packets. Finally, using raw sockets, the software sends the ICMP packet to the server whose address it got from the input arguments.\ndef wrap_into_icmp(self, stop_event: Event) -\u0026gt; None: while not stop_event.is_set(): buffer = self.tun.read() if buffer: icmp_packet = ICMPPacket( self.destination, self.mode == Modes.SERVER ) icmp_packet.payload = buffer self.icmp_socket.sendto( icmp_packet.get_raw(), ( self.destination, 1001 ) ) Receiving traffic from the tunnel, extracting it and forwarding to client processes\nAs I explored further, I found the libnetfilter_queue project from the creators of iptables and nftables, software used for configuring packet filtering rules. This library provides a packet queue to which a program can bind and have access to packets queued in the kernel. Forwarding packets to the queue is done by creating nftables or iptables rules. 
This simplifies the process of receiving the traffic from the tunnel because the software doesn\u0026rsquo;t have to manage reading variable-length packets directly from the socket.\nThe inner workings of iptables and nftables chains are fairly complicated, so I\u0026rsquo;ll attach some useful links in the References section and maybe create a post about them sometime.\nTo get packets to the netfilter queue, I created a simple rule that forwards ICMP packets sent from the server IP address to the queue.\niptables -t filter -A INPUT -s $SERVER_IP -p icmp -j NFQUEUE --queue-num 1 After that, using NetfilterQueue, a Python wrapper for the libnetfilter_queue library, the software can access the packets from the queue. A daemon thread handles packets from the queue, extracts the original traffic from the ICMP and forwards it back to the client processes via the TUN interface.\ndef handle_queue(self, packet: Packet, stop_event: Event) -\u0026gt; None: if stop_event.is_set(): raise Exception raw_ip_icmp = packet.get_payload() complete_header_len = ( raw_ip_icmp[0] \u0026amp; 0xF ) * 4 + 8 secret_payload = raw_ip_icmp[complete_header_len:] self.tun.write(secret_payload) packet.drop() With this, the client subsystem is complete.\nServer subsystem # Similar to the client subsystem, traffic from the tunnel is forwarded to the netfilter_queue, from which the ICMP packets are read and the original traffic is extracted. The original traffic is sent via the TUN interface to the server kernel. The difference between this TUN interface and the one set on the client machine is that this one is not configured as a peer, which means that the server kernel is told that behind that TUN interface is a network with multiple hosts. When configured like this, the route to the network behind the TUN interface is automatically added to the route tables.\nNow, the original traffic has to be forwarded to its destination, which can be done by setting the kernel parameter net.ipv4.ip_forward=1. 
This way the server won\u0026rsquo;t drop a packet which is not intended for it, but rather forward it to its destination. The only issue now is that the client\u0026rsquo;s original traffic still carries the client\u0026rsquo;s source address. If the server forwards it like that, the packet won\u0026rsquo;t come back to it and the traffic won\u0026rsquo;t be routed through the ICMP tunnel, which means the client won\u0026rsquo;t receive it behind the firewall. The solution to this is Network Address Translation, or NAT for short. This mechanism allows the server to rewrite the source address, and optionally the port, to its own and forward the packet. This translation will be recorded in the NAT table, so when the server receives the response to the forwarded packet, it knows to forward it to its original source. Fun fact: that\u0026rsquo;s what your router does every time you access the internet!\nNAT example iptables has built-in support for NAT, and it\u0026rsquo;s configured using the MASQUERADE target.\niptables -t nat -A POSTROUTING -s $CLIENT_IP -j MASQUERADE With this problem resolved, routing response traffic back to the tunnel is again the same as on the client subsystem. We already have a route to the TUN interface, and we can encapsulate the traffic into an ICMP packet and send it to the tunnel via the socket.\nConfiguration # The configurations explained in the last two subsections are defined as separate bash scripts for the client and the server. When the software is started, the right configuration script is executed depending on the mode flag, which determines whether it runs as a client or a server. 
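The NAT bookkeeping described above can be modeled as a toy translation table. This is purely conceptual - netfilter keeps its state differently - and all addresses and ports here are made up for illustration:

```python
# Toy model of MASQUERADE: outbound flows get the server's public address
# and a fresh port; replies are mapped back via the recorded entry.
class NatTable:
    def __init__(self, public_ip: str, first_port: int = 30000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out = {}    # (client_ip, client_port, dst_ip) -> nat_port
        self.back = {}   # nat_port -> (client_ip, client_port)

    def translate_out(self, client_ip, client_port, dst_ip):
        # rewrite an outbound packet's source; remember the mapping
        key = (client_ip, client_port, dst_ip)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = (client_ip, client_port)
            self.next_port += 1
        return self.public_ip, self.out[key]

    def translate_in(self, nat_port):
        # map a reply back to the original source behind the NAT
        return self.back[nat_port]

nat = NatTable('203.0.113.10')
src = nat.translate_out('10.0.0.2', 51515, '198.51.100.7')  # client -> some remote host
reply_dst = nat.translate_in(src[1])                        # remote reply -> client
```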
At the end of execution, a cleanup script will revert all changes made to the system.\nDemonstration # To demonstrate how this software works, I\u0026rsquo;ll first introduce the network configuration, then I\u0026rsquo;ll try to access the internet from the PC which has its traffic restricted, first without and then with my software enabled.\nExample network configuration In the image above, I have described a simple network configuration that you can set up at home as well. The client PC is restricted by the firewall between it and its router. I\u0026rsquo;ve simulated the firewall by adding some simple iptables rules on the client PC which block all non-ICMP traffic to the client, with one exception: DNS queries. Since the router in this setup is resolving DNS queries, I allowed the queries from the client to make the setup less complicated.\nFor the test, I\u0026rsquo;ll be running a simple git command to clone the repository of this project to my PC. As I said, the first run is without the software, and it results in failure, as you might have guessed.\nTest without the software enabled If we inspect the traffic on the client PC using Wireshark we get something like this:\nWireshark capture without the software enabled We can see a lot of TCP Retransmission segments. Since the firewall doesn\u0026rsquo;t block outbound TCP traffic, the client is able to start the TCP handshake process by sending the TCP segment with the SYN flag. 
Now the firewall blocks the response from the destination, which triggers TCP retransmission on the client: having received no response from the server, it sends the SYN segment again\u0026hellip; and again\u0026hellip;\nNow, if we enable our software on the client and server side, we get a successful test.\nTest with the software enabled If we inspect the traffic on the client PC using Wireshark, we can see a lot of ICMP packets:\nWireshark capture with the software enabled Because of the size of the ICMP packets with embedded client PC traffic, we can see IP fragmentation happening on some of the packets. Also, if you take a closer look, you will notice that all of the ECHO (or outbound) ICMP packets have a mark which says: no response found. For an ICMP echo request to be considered matched, the Data segment of the packet has to be the same in both the ECHO and ECHO REPLY packets; since our tunnel puts response traffic into the reply, the data differs. Newer versions of firewall software are often configured to drop such traffic, as it looks like an ECHO REPLY to an ECHO that was never sent from the said system. Looking into the data segment of one of the ICMP packets and decoding it with this cool online tool, we can see that the embedded data is a TCP ACK segment.\nDecoded data segment of the captured ICMP packet Conclusion # Newer and more advanced firewalls are more capable of protecting your network infrastructure from this kind of attack, but doing a project like this one can teach you a lot about computer networking and operating system concepts, so I highly recommend you try something like this. 
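To make the mechanism concrete, the encapsulation step described in this post can be sketched in a few lines of Python (a simplified illustration; the function names and the identifier/sequence values are mine, not taken from the actual project):\n
```python
import struct

def icmp_checksum(data: bytes) -> int:
    # RFC 792 Internet checksum: ones-complement sum of 16-bit words,
    # folded back into 16 bits, then complemented.
    if len(data) % 2:
        data += b'\x00'
    total = sum(struct.unpack(f'!{len(data) // 2}H', data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate(payload: bytes, ident: int = 0x1234, seq: int = 1) -> bytes:
    # Wrap arbitrary captured traffic (e.g. a raw packet read from the TUN
    # interface) in an ICMP echo request: type 8, code 0, checksum, id, seq.
    header = struct.pack('!BBHHH', 8, 0, 0, ident, seq)
    checksum = icmp_checksum(header + payload)
    return struct.pack('!BBHHH', 8, 0, checksum, ident, seq) + payload
```
\nA real implementation would also have to read raw packets from the TUN interface, send the result over a raw socket, and use echo replies in the server-to-client direction.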
If you want to check out my solution, you can find it here:\ndXellor/icmp-bifrost ICMP tunneling utility Python 0 0 References # ICMP specification: https://datatracker.ietf.org/doc/html/rfc792 Netfilter project: https://netfilter.org/ Iptables packet flow: https://rakhesh.com/linux-bsd/iptables-packet-flow-and-various-others-bits-and-bobs/ Netfilter hooks: https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks NAT: https://www.cisco.com/c/en/us/products/routers/network-address-translation.html ","date":"24 March 2025","externalUrl":null,"permalink":"/posts/bypassing-network-restrictions-icmp-tunneling/","section":"Posts","summary":"When I learned about the concept of ICMP tunneling, I was fascinated. I ended up researching it and creating software that utilizes this method as a part of my Bachelor's thesis","title":"Bypassing network restrictions using ICMP tunneling","type":"posts"},{"content":"","date":"24 March 2025","externalUrl":null,"permalink":"/tags/cybersecurity/","section":"Tags","summary":"","title":"Cybersecurity","type":"tags"},{"content":"","date":"24 March 2025","externalUrl":null,"permalink":"/tags/networking/","section":"Tags","summary":"","title":"Networking","type":"tags"},{"content":"","date":"24 March 2025","externalUrl":null,"permalink":"/tags/personal-project/","section":"Tags","summary":"","title":"Personal Project","type":"tags"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":" icmp-bifrost Since I am interested in cyber security and networking, this was a perfect project for me to combine those two topics. I have built from scratch a Linux client and server application that uses ICMP tunneling to bypass network restrictions. 
This project is a part of my Bachelor\u0026rsquo;s thesis.\ndXellor/icmp-bifrost ICMP tunneling utility Python 0 0 Xd programming language Like most programmers, I had a wish to create my own programming language. I got that chance when I was attending a compilers course at the university. Unfortunately, it currently only has the front-end part of a compiler (meaning only lexical and syntax analysis), but I would like to return to it sometime in the future and add code generation to complete it.\ndXellor/Xd-Programming-Language Xtremely dumb programming language Lex 1 0 Tutours Tutours is one of the bigger projects I\u0026rsquo;ve had a chance to work on outside the corporate environment. It was a part of a Software Design course at the university. With 15 other colleagues on the team, we built a Tours marketplace application based on a Modular Monolith architecture, guided by the principles of Domain Driven Design.\nRA2020PSW8/tourism-api Tours marketplace system designed as a monolith architecture - backend C# 0 5 RA2020PSW8/tourism-webapp Tours marketplace system designed as a monolith architecture - frontend TypeScript 0 5 BSEP-T12 In this project, the team was tasked with implementing core security features into a simple marketing web application.\ndXellor/bsep-t12 Implementation of the security mechanisms on a marketing web application HTML 1 1 Other You can find more of my personal projects on my GitHub profile :D\n","externalUrl":null,"permalink":"/projects/","section":"Nikola Simić","summary":"","title":"Projects","type":"page"},{"content":" Summary A DevOps engineer coming from a software engineering background, experienced in various fields of the IT sphere. Excels at handling non-trivial issues that require broad knowledge of multiple related topics. Communicative, disciplined, respectful.\nExperience DevOps Engineer Aug 2025 - Present TIAC d.o.o, Novi Sad, Serbia Domain: BI for US Higher Education Institutions. 
About: Managing infrastructure with AWS CDK as IaC. Managing serverless workloads utilizing AWS Lambda, ECS Fargate, S3, DynamoDB, RDS Aurora. Managing CI/CD pipelines utilizing GitHub Actions. Optimizing application performance and infrastructure costs. Redesigning infrastructure to a less coupled multiple-stacks approach. Working on improving the HECVAT score impacted by the infrastructure and security of the application solution. Working on backup \u0026 recovery strategies. Maintaining IAM roles and permissions by following the least-privilege principle. Working with: AWS VPC, CloudWatch, Cognito, Bedrock, Bedrock Agent Core, Secrets Manager. Skills: AWS CDK, AWS serverless technologies (S3, ECS Fargate, DynamoDB, RDS, Lambda, AppSync), GitHub Actions, AWS CloudWatch, Grafana, AWS Cognito, AWS Bedrock and Bedrock Agent Core, Node.js with TypeScript, Angular + NgRx DevOps \u0026amp; Software Engineer Dec 2023 - Aug 2025 TIAC d.o.o, Novi Sad, Serbia Domain: US Federal Student Financial Aid. About: Worked on migrating a legacy Windows Forms application to modern web technologies, improving stability and security, and designing new features in the process. Was in charge of application deployment utilizing a combination of GitHub CI pipelines and custom scripts written in PowerShell. Designed an improved authentication process that utilizes refresh tokens for a better user experience. Worked on maintaining and improving file-based integration systems implemented as Windows services. Skills: DotNet, Angular, NgRx, Windows Server, GitHub Actions, PowerShell, MS SQL, Jira DevOps \u0026amp; Software Engineer Contractor 2023 Ingel Agro, Novi Sad, Serbia Domain: Remote control and data aggregation of automated agricultural machines. About: Improved communication performance of an existing IoT setup by more than 50% by designing and developing a solution that utilizes a protocol better suited for the use case. 
Set up a Debian-based VPS hosted on Hetzner with an MQTT broker and a WireGuard VPN to serve as a central point through which remote nodes can send data and receive commands. Skills: Node.js, Node-RED, WireGuard VPN, Linux administration, networking Education Master\u0026#39;s degree in Electrical and Computer Engineering Pursuing Faculty of Technical Sciences, University of Novi Sad, Serbia Currently pursuing a Master's degree in Electrical and Computer Engineering Bachelor\u0026#39;s degree in Electrical and Computer Engineering 2020-2024 Faculty of Technical Sciences, University of Novi Sad, Serbia Graduated from the Applied Computer Sciences and Informatics submodule with a GPA of 9.29/10.0 Bachelor thesis: Bypassing network restrictions using ICMP tunneling This thesis describes the implementation of a software solution that allows a user to bypass restrictions defined on the network to which their computer is connected. This is possible thanks to the ICMP tunneling method, which encapsulates the client\u0026#39;s network traffic into ICMP packets; ICMP is allowed by default on most networks because its purpose is network diagnostics. The ICMP packets are routed to the server, which unpacks the original packet and forwards it to its destination. The response to said packets is encapsulated into an ICMP packet again by the server and sent back to the client, which unpacks and receives the original traffic. Certifications ","externalUrl":null,"permalink":"/resume/","section":"Nikola Simić","summary":"","title":"Resume","type":"page"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]