Networking – back to basics

After trying to find a good enough solution for DNS serving, I figured that I am simply not satisfied with the current ecosystem of Linux self-hosting. Some solutions give you a choice of which software to use for specific services, if at all, and then abstract the interface to some extent.

My problem with ALL of them is that they simply assume you want to use the well-known software and just simplify its management. If they did that job better, then sure, fine. I am not saying they are doing a bad job; it is simply not enough anymore, and the maintainers of the free and open-source solutions simply don’t get paid enough, if at all, to advance their solutions.

So, to solve it for my needs, I am retiring op-nslookup and getting back to basics: what am I expecting from my self-hosting solution?

I’ll try answering this first as a whole, and then go into detail.

Expectations

  • REST API Management – a GUI is nice, but the API should be a first-class citizen; every option available in the GUI should be an action I can perform through the API
  • Configurability – Every behavior of a service should be configurable
  • Best Practices – Every configuration should default to values considered best practice; allow bad behavior for whatever reason, but document it and produce appropriate log messages
  • Security – Every user interaction should go through security assertions; insecure configurations should be documented and should produce appropriate log messages
  • Usability – Having said the above, the services should be easy to use; for example, while spf2 configurations might be more secure, they make email unusable in most common cases
  • Interoperability – While I would be proud to have my own full-blown ecosystem, I am not operating in a vacuum, so RFCs are an obvious requirement; beyond that, I mean supporting common configuration files and ubiquitous language. There is no reason to reinvent the wheel, though making a better version might be an interesting journey
  • One ring to rule them all – Have a single source of truth for shared settings; for example, when serving a domain over DNS, Email and Web, the settings should live in one place, and the DNS, MX and HTTP services should derive their internal configurations from that single source on change (see the sketch after this list)
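
To make the last point concrete, here is a minimal sketch, in Python, of what "derive from a single source" could look like. The names here (Domain, render_dns_records, render_http_vhost) are hypothetical and not from any existing project; the point is only that one record owns the shared data and every service regenerates its own configuration from it on change.

from dataclasses import dataclass


@dataclass
class Domain:
    """Single source of truth for a served domain (hypothetical model)."""
    name: str
    ipv4: str
    mail_host: str
    web_root: str


def render_dns_records(domain: Domain) -> list:
    # The DNS service derives its zone records from the shared model
    return [
        f"{domain.name}. IN A {domain.ipv4}",
        f"{domain.name}. IN MX 10 {domain.mail_host}.",
    ]


def render_http_vhost(domain: Domain) -> str:
    # The HTTP service derives a virtual host from the same model
    return f"server_name {domain.name}; root {domain.web_root};"


def on_change(domain: Domain) -> None:
    # Every dependent service regenerates its configuration when the domain changes
    print("\n".join(render_dns_records(domain)))
    print(render_http_vhost(domain))


on_change(Domain("example.com", "192.0.2.10", "mail.example.com", "/srv/www/example.com"))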

Details – TBD

I’ll dive into each service in a future post.

PoC – Email

I have already started working on an Email service, Python based, using the Twisted framework. I have looked for other frameworks in the past, and I was hoping the async ecosystem in Python had something better by now, but so far Twisted is the only all-encompassing solution for network-based services, and it already has very good implementations of the individual protocols.

Since Email is one of the fields I know best (after DNS), I am starting with a really simple local submission server (no authentication, only acceptance). The goals of the PoC, with a rough sketch after the list:

  • Support domains
  • Support users
  • Support aliases
  • Support domain catch-all
  • Support local delivery
  • Support server catch-all
  • Support delayed rejections (accept the message, then based on rules, send back a rejection to the sending address or to a postmaster)
  • Support two delivery backends
    • Filesystem (JSON files)
    • SQLite + Filesystem (for message body)
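
To give a feel for the direction, below is a rough sketch of such a local acceptance server using the standard twisted.mail.smtp interfaces (IMessageDelivery and IMessage). This is not the txmx code; the domain list, the port and the file path are placeholders, and real delivery would go through the JSON or SQLite backends listed above.

from zope.interface import implementer
from twisted.internet import defer, reactor
from twisted.mail import smtp

LOCAL_DOMAINS = {b"example.org"}  # placeholder list of served domains


@implementer(smtp.IMessage)
class FileMessage:
    """Collects one message and hands it to a (placeholder) filesystem backend."""

    def __init__(self):
        self.lines = []

    def lineReceived(self, line):
        self.lines.append(line)

    def eomReceived(self):
        # Placeholder delivery: the PoC would route this to the JSON or SQLite backend
        with open("/tmp/inbox.eml", "wb") as fp:
            fp.write(b"\n".join(self.lines))
        self.lines = None
        return defer.succeed(None)

    def connectionLost(self):
        # The connection dropped mid-message; discard what we collected
        self.lines = None


@implementer(smtp.IMessageDelivery)
class LocalDelivery:
    def receivedHeader(self, helo, origin, recipients):
        return b"Received: by txmx PoC"

    def validateFrom(self, helo, origin):
        # Accept any sender; this is a local-only, no-authentication PoC
        return origin

    def validateTo(self, user):
        # Accept recipients in served domains only, reject everything else
        if user.dest.domain in LOCAL_DOMAINS:
            return FileMessage
        raise smtp.SMTPBadRcpt(user)


class SubmissionFactory(smtp.SMTPFactory):
    def buildProtocol(self, addr):
        proto = smtp.SMTPFactory.buildProtocol(self, addr)
        proto.delivery = LocalDelivery()
        return proto


if __name__ == "__main__":
    reactor.listenTCP(8025, SubmissionFactory())  # placeholder port
    reactor.run()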

Common services are separated between MSA, MTA, MDA and other acronyms of services with specific jobs, which will probably still be represented in my solution, but the lines have become blurrier as those systems grew less naive (SSL/TLS, anti-spam, anti-virus, rate limits, etc.). However, I will try to keep the common concepts in place, so if you know your postfix / dovecot / exim you will feel almost at home.

The work is in a repository I started a few years ago: https://gitlab.com/uda/txmx

I will do my best to keep up with my own standards 😉

Update #1: op-nslookup

This is an update on Journey to new DNS servers, which I am now calling op-nslookup.

I was playing around with PowerDNS on one of my existing servers, using a separate port, then ran simple dig queries and wanted to see what the difference would be. TL;DR: I wasn’t impressed.

What I tried:

  • Installed the PowerDNS 4.6 server from their official repo, on an Ubuntu 18.04 server (yeah, I know)
  • Modified the pdns.conf file and pointed the bind backend at the existing named.conf file, so it loads the existing zones
  • Ran zone2sql into an sqlite3 file and configured the gsqlite3 backend

In both cases (bind or gsqlite3 as the backend) the performance was the same; I managed to tune it so the average query time was on par with the existing BIND.

The bad part here was that while the minimum was the same (120-124ms), the maximum was way higher for PowerDNS (400-600ms, while BIND was usually up to 300ms). So I think my numbers are not very scientific, and if I want to be sure I’ll need to benchmark using a reliable tool and not simply run queries and copy-paste into calc…
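
As a small step toward something more reproducible, even a short script that runs dig in a loop and reports min/avg/max would beat copy-pasting into calc. A rough sketch in Python; the server address, port and query name are placeholders:

import re
import statistics
import subprocess

# Placeholder target: point this at the instance under test
CMD = ["dig", "@192.0.2.53", "-p", "5300", "example.com", "A", "+tries=1"]

times = []
for _ in range(100):
    output = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    match = re.search(r"Query time: (\d+) msec", output)
    if match:
        times.append(int(match.group(1)))

print(f"min={min(times)}ms avg={statistics.mean(times):.1f}ms max={max(times)}ms over {len(times)} queries")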

But this isn’t all, and I will definitely need to invest more time in comparing features, since I am sure there are configurations where I can get more out of it, even speed-wise.

The fact this is a hobby project does not mean I need to lower my standards.

Journey to new DNS servers

I have been managing my own DNS servers for a long time now, most of the time using ISPConfig to manage two to three BIND instances. It has been fun, but I got tired of ISPConfig’s limitations in some areas, and I want to have my own definition of servers.

I have tried that before, several times, but I made the mistake so many made before me: trying to solve everything in one shot. Learning from my paid work experience, that isn’t worth the time. So I am going on a new journey to build my own infrastructure, starting with my self-hosted DNS servers, one piece of the infrastructure at a time.

It might take time, and I am aware that sometimes there will be days and weeks between bits of progress, but it is a journey of learning and sharing. So let’s begin.

My considerations

  1. Quick response
  2. Quick refresh (of changes)
  3. Support many resource records types and security measures
  4. Ability to support DDNS
  5. Active maintainers

Quick response

If I am in the US and the server is in Europe, another 100ms of response time inside the server itself, on top of the network latency, is really slow. The server must respond very quickly to each request.

Quick refresh

When a change is introduced, taking 60 seconds to update is slow; think about APIs updating DNS records for verification purposes, which need to move on to the next step within that timeframe. So my goal is to apply record changes within 20 seconds.

Support many RR types and security

I expect the server to support most of the RR types, namely:

  • DNAME
  • Some ANAME-like behavior (like CNAME, but usable at the apex domain)
  • CAA

It should also fully support DNSSEC, following current best practices.

Active maintainers

The program I will choose should be actively maintained, otherwise any bug report, feature request or merge request might just remain in limbo, and I will either have to fork it or move to another solution.

Options

  • BIND 9 (9.16.n for now, it has ESV)
  • PowerDNS

These are the only two left after filtering the commonly available options, given that I already use IPv6 (for both serving and records), use wildcard records, and want to use DNSSEC.

I’ll state the obvious: I need the software to be free and open source, well tested, and to run smoothly on a Linux server (I can manage with BSD if that becomes a requirement).

Other options

  • Develop my own server

Well, a lovely idea, I always like to consider it in every project, but this isn’t on the table for now, sorry me.

But, as a consolation, I will be working on all the integration and admin interfaces, and at the slightest hint of trouble with the available solutions I’ll just write my own.

That’s it for now, hitting the road for new DNS servers.

Run tcpdump for a given time using timeout

Lately I needed to run tcpdump on several servers for a given time and then download the pcap files, all in a programmatic way.

So I got to know the useful timeout command, simple and straightforward.

timeout 120s tcpdump -s 0 -A dst port 80

Remember that if you are not running as root and are using sudo, you will need to put sudo before the timeout command so it can actually send the SIGTERM without getting Permission denied.

sudo timeout 120s tcpdump -s 0 -A dst port 80
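
For the "several servers, programmatically" part, a small script can run the same command over SSH and copy the capture back. A rough sketch, assuming key-based SSH access, passwordless sudo, and made-up hostnames and paths; note the capture is written with -w so there is an actual pcap file to download (unlike -A above, which prints to stdout):

import subprocess

SERVERS = ["web1.example.com", "web2.example.com"]  # placeholder hostnames
REMOTE_CMD = "sudo timeout 120s tcpdump -s 0 -w /tmp/capture.pcap dst port 80"

for host in SERVERS:
    # Run the capture remotely; timeout exits with 124 when it stops tcpdump, so don't raise on it
    subprocess.run(["ssh", host, REMOTE_CMD], check=False)
    # Download the resulting pcap, named after the host
    subprocess.run(["scp", f"{host}:/tmp/capture.pcap", f"{host}.pcap"], check=True)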

If you want to learn more about timeout:
https://explainshell.com/explain?cmd=timeout+120s+tcpdump+-s+0+-A+dst+port+80