Our Journey Through Linux/Unix Landscapes
From Arch to Alpine: Our Journey Through Linux Landscapes. Discover how we slashed storage, vulnerabilities, and maintenance time!

Hey there! Recently, we took a trip to Canadian University Dubai to scout for some fresh talent. One of the key questions we always ask potential interns is: "Which operating system are you using?"

Oh, and by the way, we judge people based on their answer—very harshly:
- Windows: You are probably not a fit. Because if you're using Windows, you're either lost or just here for the free snacks.
- Kali: You're a script kiddie. Because nothing says "I'm a hacker" like using a distro that does all the work for you.
- Debian: You are lost in the past; ditch the COBOL and move to something modern. Because if you love stability so much, why not just use a typewriter?
- Ubuntu: You're basic. Because if you wanted to be unique, you wouldn't be using the Linux equivalent of a pumpkin spice latte.
- Fedora: You're a developer who loves bleeding-edge software but hates when things actually work. Because who needs stability when you can have the latest bugs?
- Arch Linux: You're a masochist. Because nothing says "I love pain" like manually configuring everything and pretending it's fun. It's the distribution of choice for our employees at Kalvad. Welcome aboard!
- CentOS: You're an enterprise drone. Because if you wanted excitement, you wouldn't be using the Linux equivalent of a corporate cubicle.
- Gentoo: You're a control freak. Because if compiling everything from source is your idea of a good time, you might need to reevaluate your life choices.
- OpenSUSE: You're a jack-of-all-trades, master of none. Because if you wanted to be taken seriously, you wouldn't be using a distro that's as confusing as it is versatile.
This little question got me reminiscing about an article we wrote three years ago about using Arch Linux in production. Boy, have things changed since then! It's high time for an update, so buckle up!
Why Does It Matter to Change the OS? What Is the Impact?
Let's be clear: at Kalvad, we're running a service business, and our goal is to make our customers happy. How do we do it? We focus on two key pillars: the quality of the product and reliability. While the quality of the product is mostly derived from the code—which I won't bore you with here—reliability is a whole different ball game.
Reliability is a vast topic; it encompasses everything from deployment and backups to cybersecurity and beyond. If I had to boil it down to a single target, it would be: 0% HTTP 5xx errors. Now, I know that's about as likely as finding a unicorn, but it's a target worth striving for. What does it mean? It means that your entire system is only as strong as its weakest link. So, if one part of your system is a dumpster fire, guess what? Your whole system is a dumpster fire.
So having file-system-level snapshots, strong security protection, low maintenance, and better control, all while reducing the human aspect of it, is crucial. By minimizing human intervention, behind-the-scenes magic, and other shenanigans, we reduce the risk of errors and increase the efficiency and consistency of our systems. Because, let's face it, humans are great at making mistakes.
Finally, some people might call me a fake ecologist—after all, I live in Dubai and run on AC six months a year—but energy and resources do matter. I'm tired of seeing systems that could run on 16GB of RAM and 4 CPUs needing 256GB of RAM and 64 CPUs to operate. It's not just about saving costs; it's about being responsible and efficient with the resources we have. By optimizing our infrastructure, we not only improve performance but also contribute to a more sustainable approach to technology. Because, honestly, who needs a server that's as power-hungry as a small country?
How Do We Investigate an OS?
We focus on multiple aspects when evaluating an operating system to ensure it meets our rigorous standards and requirements:
Building Packages Ourselves
Because we build our own software, we need the ability to package it ourselves and to control the versions we use. This autonomy allows us to tailor the software to our specific needs and avoid unnecessary components. The best example of this is my usual rant against NGINX: who in their right mind needs a mail gateway activated in NGINX on production systems nowadays? By building our own packages, we can strip out such superfluous features and optimize the software for our exact use cases. Because, you know, who doesn't love a good decluttering session?
$ docker run -it nginx:latest bash
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
254e724d7786: Pull complete
913115292750: Pull complete
3e544d53ce49: Pull complete
4f21ed9ac0c0: Pull complete
d38f2ef2d6f2: Pull complete
40a6e9f4e456: Pull complete
d3dc5ec71e9d: Pull complete
Digest: sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66
Status: Downloaded newer image for nginx:latest
root@81e6ea6c3242:/# nginx -v
nginx version: nginx/1.27.5
root@81e6ea6c3242:/# nginx -vvvvv
nginx version: nginx/1.27.5
root@81e6ea6c3242:/# nginx -V
nginx version: nginx/1.27.5
built by gcc 12.2.0 (Debian 12.2.0-14)
built with OpenSSL 3.0.15 3 Sep 2024
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/run/nginx.pid --lock-path=/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module
--with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module
--with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module
--with-http_sub_module --with-http_v2_module --with-http_v3_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module
--with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -ffile-prefix-map=/home/builder/debuild/nginx-1.27.5/debian/debuild-base/nginx-1.27.5=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'
Look at all those nginx:latest options, mail modules and all!
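For comparison, here's the kind of stripped-down build we aim for. This is just a sketch: the module list below is an assumption for a plain HTTP(S) reverse proxy, not a recipe, so adjust it to your own workload.

# Hypothetical minimal nginx build: HTTP(S) and HTTP/2 only, no mail, no stream proxy.
# The module selection here is illustrative, not our exact production configure line.
./configure \
    --prefix=/etc/nginx \
    --sbin-path=/usr/sbin/nginx \
    --with-threads \
    --with-file-aio \
    --with-http_ssl_module \
    --with-http_v2_module \
    --without-http_autoindex_module \
    --without-http_ssi_module \
    --without-http_scgi_module \
    --without-http_uwsgi_module
make && make install

Run nginx -V on the result and the difference in the configure arguments speaks for itself.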
Upstream First
The second most important part of our evaluation is upstream compatibility. If a piece of software requires massive changes to run on our OS, or worse, if the distribution likes to randomly insert modifications, like patching malloc in some C code (hello, Debian), it's a red flag. We prioritize operating systems that stay as close as possible to the upstream source, ensuring that we benefit from the latest features, security updates, and community support without unnecessary deviations. Because, let's face it, nobody has time for a distribution that thinks it's a better coder than the original developers.
Freshness
By default, how outdated is your kernel? The kernel, especially once you start to use containers and newer features (io_uring, eBPF, etc.), is the biggest driver of performance. Having a completely outdated kernel is a big no-no for us. We need an OS that keeps its kernel and core components up to date to leverage the latest performance improvements, security patches, and hardware support. Freshness in the kernel and software packages ensures that we can provide the best possible performance and reliability to our customers. Because, honestly, running on outdated kernels is like trying to win a race with a horse and buggy.
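If you want a quick, unscientific check of what your kernel actually offers, something like this does the job; note that /proc/config.gz only exists if the kernel was built with CONFIG_IKCONFIG_PROC, and the /boot path below is a Debian/Ubuntu-style convention.

# Which kernel are we actually running?
uname -r
# Look for modern features such as io_uring and eBPF in the kernel config.
zgrep -E 'CONFIG_IO_URING|CONFIG_BPF_SYSCALL' /proc/config.gz 2>/dev/null \
    || grep -E 'CONFIG_IO_URING|CONFIG_BPF_SYSCALL' "/boot/config-$(uname -r)"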
Choices
I'm going to review a few distributions/systems that we considered and explain why we didn't choose them for production. We're talking about the top 5 on Distrowatch in the server category:
- Debian: Outdated as hell, not following upstream, and doing some crazy backporting. Because who needs the latest features when you can have a museum piece, right?
- Ubuntu: Same as Debian but slightly less outdated, and on top of that, it does a lot of "magic" (like the auto-start of services). Because nothing says "reliable" like services that start themselves without any warning.
- Fedora: More up-to-date, but comes with a massive issue: it's very big and crazily heavy. Furthermore, it uses the second-worst package management system after Deb: RPM. Because who doesn't love a bloated system with a package manager that feels like it's from the Stone Age?
- openSUSE: Better than Fedora, but still uses RPM. Vade retro, Satana! Because if you're going to use RPM, you might as well be summoning the devil himself.
- NixOS: NixOS is the only one that we didn't hate. Technically, it's a marvel, but the learning curve is too high. Because who has time to learn a new system when you can just stick with what you know and complain about it?
Where Are We Now?

We've bid adieu to Ubuntu, Alma, Rocky Linux, and even Arch Linux for our servers, though Arch Linux still holds a special place as the main OS for the folks at Kalvad.
Instead, we've gone all-in on Alpine Linux and FreeBSD. Because who needs mainstream when you can have the best?
Here's our current game plan:
- If it needs storage or high network throughput (like a load balancer), we roll with FreeBSD. Because when you need reliability and performance, FreeBSD is like that dependable old friend who never lets you down—unlike some other OSes we could mention.
- If it's stateless, we go with Alpine—whether it's bare metal, a VM, or as a Docker base. Because nothing says "lightweight and efficient" like Alpine Linux in a container. It's like the Swiss Army knife of stateless systems—small, sharp, and ready for anything. Unlike some other distros that are as bloated as a Thanksgiving turkey.
- And if we need virtualization, we also use FreeBSD, with bhyve. Because why complicate things with multiple hypervisors when FreeBSD can handle it all?
Why FreeBSD?

Argument 1: ZFS
ZFS is like the Swiss Army knife of file systems and logical volume managers, originally designed by Sun Microsystems. It's renowned for its data integrity, support for massive storage capacities, and efficient data compression. What sets ZFS apart is its use of a Copy-on-Write (CoW) transactional model, which ensures that data is never overwritten. Instead, changes are written to new blocks, and metadata is updated to point to these new blocks only after the write is complete. This approach not only enhances data integrity but also enables features like snapshots and clones, which are incredibly useful for backups and testing.
Now, let's talk about btrfs. While btrfs also uses a Copy-on-Write model and offers many advanced features, it has historically struggled with stability and performance issues. ZFS, on the other hand, has been battle-tested in enterprise environments for years and is known for its robustness and reliability. ZFS's mature codebase and comprehensive feature set make it a superior choice for mission-critical applications where data integrity and performance are paramount.
Oh, and did I mention that ZFS has tier 1 support? Imagine that—a file system that actually does what it promises!
Advantages of ZFS:
- Data Integrity: Uses checksums to ensure your data stays corruption-free. This means that ZFS can detect and correct errors automatically, providing a robust layer of data protection that is crucial for mission-critical applications.
- Snapshots and Clones: Makes backups and quick recoveries a breeze. With ZFS, you can take snapshots of your file system at any point in time, allowing you to restore data quickly and efficiently. This feature is invaluable for disaster recovery and testing new software without risking data loss (a quick sketch follows this list).
- Scalability: Handles massive amounts of data like a champ. ZFS is designed to scale effortlessly, making it suitable for environments with petabytes of data. Its architecture allows for efficient data management and high performance, even as data volumes grow.
- Compression: ZFS supports various compression algorithms like LZ4, gzip, and Zstd, which can significantly reduce storage space requirements without compromising performance. This feature is particularly beneficial for environments with large datasets, as it optimizes storage efficiency and can improve I/O throughput by reducing the amount of data written to disk.
- ZRAID: ZFS also boasts built-in RAID functionality, known as RAID-Z, which provides redundancy without the need for separate hardware RAID controllers. This makes it a cost-effective solution for ensuring data availability and protection against disk failures. Additionally, ZFS supports dynamic striping, which optimizes performance by distributing data across all available disks.
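Here's a minimal sketch of the snapshot workflow, as it looks on FreeBSD; the pool and dataset names (zroot/var/db) are just examples.

# Take a point-in-time snapshot of a dataset.
zfs snapshot zroot/var/db@pre-upgrade
# List existing snapshots.
zfs list -t snapshot
# Upgrade went sideways? Roll the dataset back.
zfs rollback zroot/var/db@pre-upgrade
# Or clone the snapshot to rehearse the upgrade without touching the live data.
zfs clone zroot/var/db@pre-upgrade zroot/var/db-test

Cheap, instant, and no third-party backup agent involved.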
Argument 2: PF
PF (Packet Filter) is a robust firewall and network address translation tool, originally developed as part of the OpenBSD project. It's like having a highly efficient bouncer for your network, providing advanced packet filtering, stateful inspection, and traffic shaping. PF is renowned for its simplicity, performance, and reliability, making it a favorite among network administrators who value both security and ease of use.
One of the standout features of PF is its ability to handle complex filtering rules with ease. It uses a straightforward syntax that is both powerful and easy to understand, allowing administrators to define intricate rulesets without getting bogged down in complexity. Additionally, PF supports features like Quality of Service (QoS), which enables traffic shaping to prioritize certain types of network traffic over others.
Now, let's talk about iptables. While iptables is a powerful and flexible firewall tool in its own right, it has a reputation for being more complex and less intuitive than PF. Iptables uses a rule-based system that can be difficult to manage, especially for those who are not already familiar with its intricacies. The syntax can be cumbersome, and the lack of built-in support for some advanced features means that additional tools and scripts are often required to achieve the same level of functionality as PF.
In summary, PF offers a more streamlined and user-friendly approach to firewall management, with a focus on simplicity and performance. It's like having a well-trained bouncer who knows exactly who to let in and who to keep out, without any unnecessary fuss. Iptables, on the other hand, is more like a versatile but somewhat unwieldy security guard who requires a bit more effort to manage effectively. So, while some people might call PF a "working iptables," let's be honest—we're way too classy for that comparison.
Benefits of PF:
- Security: Offers strong protection against network threats. PF allows you to define complex rule sets to control network traffic, providing a robust defense against unauthorized access and attacks. Its stateful inspection capabilities ensure that only legitimate traffic is allowed through, enhancing overall network security.
- Flexibility: Allows for complex rule sets and traffic management. With PF, you can create detailed rules to manage network traffic based on various criteria, such as source and destination addresses, ports, and protocols. This flexibility enables you to tailor your network security policies to meet specific requirements (a minimal ruleset sketch follows this list).
- Performance: Optimized for high throughput and low latency. PF is designed to handle high volumes of network traffic efficiently, ensuring that your network remains responsive and performant even under heavy loads. Its optimized architecture minimizes latency, making it ideal for high-performance networking environments.
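To show what that simplicity looks like, here's a minimal /etc/pf.conf sketch; the interface name (vtnet0) and the open ports are assumptions for illustration, not our production ruleset.

# Minimal pf.conf example; interface and ports are illustrative.
ext_if = "vtnet0"

set skip on lo0               # don't filter loopback traffic
scrub in all                  # normalize incoming packets

block in all                  # default deny inbound
pass out all keep state       # allow all outbound, statefully

# Allow SSH, HTTP and HTTPS in, with stateful tracking.
pass in on $ext_if inet proto tcp to ($ext_if) port { 22 80 443 } keep state

Check it with pfctl -nf /etc/pf.conf before loading it with pfctl -f /etc/pf.conf, then try expressing the same thing in iptables and see how far you get before reaching for a helper script.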
Argument 3: sysrc (and sysinit)
sysrc is a utility for managing rc.conf, the main configuration file for FreeBSD. It simplifies the process of configuring system services, making it a breeze to manage.
Advantages of sysrc:
- Ease of Use: Simplifies the management of system configurations. sysrc provides a user-friendly interface for managing system services, allowing you to enable, disable, and configure services with ease (a short example follows this list). This simplicity reduces the risk of configuration errors and makes it easier to maintain consistent system settings.
- Consistency: Ensures a consistent approach to system configuration. By using sysrc, you can standardize the configuration of system services across multiple servers, ensuring that all systems are configured uniformly. This consistency is crucial for maintaining a stable and predictable environment.
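A quick sketch of what day-to-day sysrc usage looks like; the service names are just examples.

# Enable services declaratively in /etc/rc.conf (idempotent and scriptable).
sysrc nginx_enable="YES"
sysrc ntpd_enable="YES"
# Read a value back, or delete one.
sysrc nginx_enable
sysrc -x ntpd_enable
# Then drive the service as usual.
service nginx start

Because every change is a one-liner, it drops straight into whatever configuration management you already use.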
Argument 4: pkg and poudriere
The pkg format is highly readable and reliable. Plus, we can customize packages and build them locally on the server or through poudriere and distribute them to all servers.
Benefits of pkg and poudriere:
- Customization: Allows for tailored package builds. With pkg and poudriere, you can create custom packages that meet your specific requirements (a rough walkthrough follows this list). This customization enables you to include only the components you need, reducing the risk of conflicts and improving system performance.
- Efficiency: Streamlines package management and distribution. pkg and poudriere provide a robust framework for managing software packages, making it easy to install, update, and remove packages across multiple servers. This efficiency simplifies the process of maintaining a consistent software environment and ensures that all systems are up-to-date.
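Roughly, the build-and-distribute loop looks like this; the jail name, FreeBSD version, ports, and repository URL are examples, not our real setup.

# One-time setup: create a build jail and a ports tree.
poudriere jail -c -j builder -v 14.2-RELEASE
poudriere ports -c
# Pick the compile-time options we actually want (e.g. strip nginx down).
poudriere options www/nginx
# Build the package set; the result is a pkg repository under poudriere's data dir.
poudriere bulk -j builder www/nginx
# On each server, point pkg at that repository via a file such as
# /usr/local/etc/pkg/repos/kalvad.conf (name and URL are hypothetical):
#   kalvad: {
#     url: "https://pkg.example.internal/packages/builder-default",
#     enabled: yes,
#     priority: 10
#   }
pkg update && pkg upgrade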
A Few Downsides
- Software Updates: Sometimes our software isn't quite up to date—like our beloved niche language, Crystal, and our darling tool, LavinMQ. And, shame on us, we haven't been bothering to submit those pesky upstream updates. Who needs the latest and greatest anyway, right?
- Docker Support: No support for Docker (though Podman support is available).
- Tool Compatibility: Some tools just don't play nice with FreeBSD, mainly because it's not Linux—despite what some cybersecurity specialists might tell me about it being the same as RedHat. Because, you know, if you ignore the differences, you might as well say a bicycle is the same as a motorcycle.
Why Alpine?

ISO Customization
Alpine Linux makes it a breeze to customize your ISO, allowing you to add software or build specific variants tailored to your needs. One popular example is the "virt" variant, which features a slimmed-down kernel optimized for virtual systems. This flexibility is perfect for creating lightweight, efficient environments that meet your exact specifications.
To get started with customizing your Alpine Linux ISO, you can follow the detailed guide available on the Alpine Linux wiki: How to make a custom ISO image with mkimage. This guide provides step-by-step instructions on using the mkimage tool to create a custom ISO image. Whether you're looking to include additional packages, configure specific settings, or optimize for particular use cases, the wiki offers comprehensive information to help you achieve your goals.
Additionally, the Alpine Linux community and documentation are excellent resources for further customization tips and best practices. By leveraging these resources, you can ensure that your custom ISO is not only functional but also optimized for performance and reliability.
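From an aports checkout, the build itself boils down to something like this; the profile name is ours and hypothetical, and the flags come from the wiki guide mentioned above.

# In aports/scripts: a profile is just a small shell function, e.g. in a file
# named mkimg.kalvad.sh (hypothetical), building on the virt profile:
#   profile_kalvad() {
#       profile_virt
#       apks="$apks openssh chrony curl"
#   }
# Then build the ISO for that profile:
sh mkimage.sh --tag v3.21 \
    --outdir ~/iso \
    --arch x86_64 \
    --repository https://dl-cdn.alpinelinux.org/alpine/v3.21/main \
    --profile kalvad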
Diskless Mode
Alpine Linux shines with its support for diskless operation, allowing systems to run entirely from memory. Because who needs disks when you can have everything floating in the digital ether? This feature is particularly useful for environments where minimizing disk I/O is crucial, such as in certain embedded systems or secure environments where disk writes are about as welcome as a bull in a china shop. By leveraging diskless mode, Alpine Linux can significantly enhance performance and reduce wear on storage devices—because nothing says "efficiency" like pretending you don't need a hard drive. This capability not only streamlines operations but also opens up new possibilities for deploying lightweight, efficient systems in various scenarios. We'll dive deeper into how to set up and optimize diskless mode in a future article, so stay tuned—if you can handle the excitement!
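We'll keep the details for that future article, but as a teaser, the moving part is lbu, Alpine's local backup utility, which persists your changes to the boot media between reboots; the path below is just an example.

# Track an extra path and persist /etc (plus included paths) to the boot media.
lbu include /root/.ssh
lbu commit -d
# See what would be saved.
lbu status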
Speed
Alpine Linux is lightning-fast to boot—it takes us less than 3 seconds to boot on KalvadOS (as a VM). Sure, we're still light-years away from the sub-50ms boot time of Imil with SmolBSD, but hey, we'll take what we can get, and it's more than acceptable for mere mortals like us.
And get this—our custom ISO for VMs is a mere 75MB, including all the needed tools and custom repos. That means when we deploy on a new host, we're not shuffling gigabytes around like some kind of digital hoarders. Speaking of which, have you seen some of those Docker containers out there? Clocking in at over 2GB, moving them around is about as fun as watching paint dry.
Oh, and did I mention we reboot faster than you can say "Why is this taking so long?"—less than 5 seconds, thank you very much. And installation? Less than a minute. Because who has time to wait around all day?
Easy Mirroring
With a simple rsync script, you can create a local replica:
#!/bin/sh
set -e # Exit on error
# Configuration
ALPINE_VERSION="v3.21"
BASE_SRC="rsync://rsync.alpinelinux.org/alpine/${ALPINE_VERSION}"
BASE_DEST="/storage/alpine/mirror/${ALPINE_VERSION}"
REPOS="main community releases"
ARCHITECTURES="x86_64" # Added support for multiple architectures
LOCK_FILE="/tmp/alpine_sync.lock"
LOG_FILE="/var/log/alpine_sync.log"
# Logging function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# Cleanup function
cleanup() {
    rm -f "$LOCK_FILE"
    log "Sync process ended"
}

# Check if another sync is running
if [ -f "$LOCK_FILE" ]; then
    log "Error: Another sync process is running"
    exit 1
fi

# Create lock file
touch "$LOCK_FILE"
trap cleanup EXIT

# Start sync process
log "Starting Alpine Linux mirror sync for version ${ALPINE_VERSION}"

# Iterate through each repository
for repo in $REPOS; do
    log "Syncing $repo repository..."

    # Create destination directory if it doesn't exist
    mkdir -p "$BASE_DEST/$repo"

    # Sync each architecture
    for arch in $ARCHITECTURES; do
        log "Syncing $repo/$arch..."

        # Perform rsync with architecture-specific inclusion
        if ! /usr/local/bin/rsync -v --stats --progress \
            --archive \
            --update \
            --hard-links \
            --delete \
            --delete-after \
            --delay-updates \
            --timeout=600 \
            "$BASE_SRC/$repo/$arch" "$BASE_DEST/$repo"; then
            log "Error: Failed to sync $repo/$arch"
            exit 1
        fi

        log "Finished syncing $repo/$arch"
    done
done
# Update last sync timestamp
date '+%Y-%m-%d %H:%M:%S' > "${BASE_DEST}/.last_sync"
log "Mirror sync completed successfully"
exit 0
And the best part? By targeting the architecture, the total size of the repo is a mere 71GB (as of 2025/05/09)—because who doesn't love a compact powerhouse?
Custom Repository
Creating a custom repository is a piece of cake—so easy, even your cat could probably do it. And forcing installations from this repository? Well, that's just a walk in the park. Because who doesn't love a good, old-fashioned software dictatorship where you get to call all the shots?
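In practice it boils down to three steps; the package name, paths, and repository URL below are placeholders, not our real infrastructure.

# 1. Build your packages with abuild; it signs them and updates the APKINDEX
#    under ~/packages/<repo>/<arch>/ with your packager key.
abuild -r
# 2. Publish that directory on any web server (placeholder host and path).
rsync -a ~/packages/kalvad/ mirror:/var/www/kalvad/
# 3. On the clients: trust the key, add the repo, then install from it.
cp kalvad.rsa.pub /etc/apk/keys/
echo "https://pkg.example.internal/kalvad" >> /etc/apk/repositories
apk update && apk add our-internal-tool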
Minimalism
With Debian/Ubuntu, you often have to deal with a bunch of pre-installed services that you never asked for. For example, a base Ubuntu 24.04 server comes with ModemManager, rsyslog, and systemd-journald—because who doesn't need a modem manager on a VM, right?
But with Alpine, if you want nothing, you get nothing. It's like a blank canvas waiting for your masterpiece! As of today, our VMs clock in at a mere 175MB of RAM used, with just NTP, SSH, and our custom setup. Because why waste resources on stuff you'll never use? It's like moving into a fully furnished house and then spending the first week throwing out all the junk the previous owner left behind.
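For the curious, "our custom setup" is mostly a handful of OpenRC one-liners; chronyd stands in below for whichever NTP daemon you prefer.

# Enable only what we actually need at boot.
rc-update add sshd default
rc-update add chronyd default
# See what is enabled per runlevel, and what is currently running.
rc-update show
rc-status
# And check what the box is actually consuming.
free -m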
Downside: Musl
Musl is a lightweight implementation of the C standard library, designed with a focus on simplicity, correctness, and efficiency. Unlike GLibC, which is more feature-rich but also more complex, Musl aims to provide a lean and mean alternative that prioritizes performance and minimalism.
Key Characteristics of Musl:
- Simplicity and Correctness: Musl is designed to be simple and correct, adhering strictly to standards and avoiding unnecessary complexity. This makes it easier to understand and maintain, reducing the likelihood of bugs and security vulnerabilities.
- Lightweight: Musl is significantly lighter than GLibC, making it ideal for environments where resource efficiency is crucial. This includes embedded systems, containers, and other resource-constrained environments.
- Performance: Musl is optimized for performance, offering faster execution times and lower memory usage compared to GLibC. This makes it a preferred choice for applications where performance is critical.
- Static Linking: Musl supports static linking, which allows for the creation of self-contained binaries that do not depend on external libraries. This is particularly useful for deployment in environments where dynamic linking is not feasible or desirable.
Who Uses Musl?
- Docker Containers: Many Docker containers use Musl due to its lightweight nature and performance benefits. This helps in reducing the overall size of the container images and improving startup times.
- Statically Linked Binaries: Almost every statically linked binary, except those compiled with Zig, uses Musl. This is because Musl's design and features make it well-suited for static linking, providing a robust and efficient runtime environment.
By focusing on simplicity, performance, and efficiency, Musl offers a compelling alternative to GLibC, particularly in environments where these attributes are highly valued.
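As a tiny demonstration of the static-linking point, here's what it looks like on an Alpine box; build-base is the meta package that pulls in gcc and musl-dev.

# Build a fully static binary against musl.
apk add build-base
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from musl"); return 0; }
EOF
gcc -static -o hello hello.c
./hello
# ldd will refuse to list dependencies: the binary is self-contained,
# which is exactly what you want for a FROM scratch container image.
ldd ./hello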
A Few Metrics

Let's dive into some of the "impressive" metrics we've achieved with our shift to FreeBSD and Alpine Linux—because who doesn't love a good brag session?
- Storage Efficiency: FreeBSD has significantly reduced our overall storage size by at least 25%. This reduction isn't just a number; it translates to tangible benefits in managing our data more efficiently and cost-effectively. Who knew that cutting down on storage could be so satisfying and impactful? It's almost like we've discovered the secret to digital decluttering!
- Vulnerability Reduction: By reducing the number of software deployed by 50% (no more systemd 😄), we've effectively slashed our vulnerabilities. This strategic reduction means fewer potential entry points for security threats, leading to a more secure and robust infrastructure. It's simple math—fewer software components equate to fewer problems and a more secure environment. Because, let's face it, who needs all that extra software when you can have peace of mind?
- Maintenance Time: We've managed to cut the human time required to maintain our infrastructure by 50%. This significant reduction allows our team to focus on more strategic tasks and innovations rather than getting bogged down by routine maintenance. More time for coffee breaks, high-fives, and focusing on what truly matters—driving our projects forward! Because, honestly, who wants to spend their time fixing things when you can be sipping coffee and dreaming up the next big thing?
Next Step

Our shift to Alpine Linux and FreeBSD has brought some seriously awesome improvements in performance, flexibility, and ease of management. Sure, there are a few downsides, but the benefits totally outweigh them, making these operating systems stellar choices for our production environment.
We are 100% committed to sticking with FreeBSD, thanks to all its advantages. Because, let's face it, why would we ever want to switch when we've found something that actually works?
However, this is probably not our latest move for the stateless part. We are checking and following these operating systems with interest:
- Void Linux: Known for its independence and rolling release model, Void Linux offers a unique approach with its runit init system and xbps package manager. It's lightweight and highly customizable, making it an intriguing option for stateless environments. It is interesting for us because it is using GLibC and is not using Systemd.
- Chimera Linux: A relatively new player in the Linux world, Chimera Linux focuses on simplicity and performance. It uses the llvm/clang toolchain and the apk package manager, offering a fresh perspective on Linux distributions.
- EweOS: A musl-based, lightweight, general-purpose Linux distribution that pairs musl libc and BusyBox with the latest versions of software under a rolling-release model.
We're excited to explore these options and see how they can further enhance our infrastructure!
If you have a problem and no one else can help, maybe you can hire the Kalvad-Team.