Networking – back to basics

After trying to find a good enough solution for DNS serving, I realized that I am simply not satisfied with the current ecosystem of Linux self-hosting. Some solutions let you choose the software behind specific services, if at all, and then abstract the interface to some extent.

My problem with ALL of them is that they simply assume you want to use the known software and just simplify its management. If they did a good enough job of that, yeah, sure. I am not saying they are doing a bad job; it is simply not enough anymore, and the maintainers of the free and open-source solutions simply don’t get paid enough, if at all, to advance them.

So, to solve it for my needs, I am retiring op-nslookup and getting back to basics: what am I expecting from my self-hosting solution?

I’ll try answering this first as a whole, and then go into detail.

Expectations

  • REST API Management – a GUI is nice, but the API should be a first-class citizen; every option available in the GUI should be an action I can perform through the API
  • Configurability – every behavior of a service should be configurable
  • Best Practices – every configuration option should default to what is considered best practice; allow bad behavior for whatever reason, but document it and produce appropriate log messages
  • Security – every user interaction should go through security assertions; insecure configurations should be documented and produce appropriate log messages
  • Usability – having said the above, the services should be easy to use; for example, while spf2 configurations might be more secure, they make email unusable in most common cases
  • Interoperability – while I would be proud to have my own full-blown ecosystem, I am not operating in a vacuum; following the RFCs is a given, so what I mean is supporting common configuration files and ubiquitous language. There is no reason to reinvent the wheel, but making a better version might be an interesting journey
  • One ring to rule them all – have a single source of truth for shared settings; for example, when a domain is served over DNS, Email and Web, the settings should live in one place, and the DNS, MX and HTTP services should derive their internal configurations from that single source on change (see the sketch after this list)
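
To make the last point concrete, here is a minimal sketch of the idea; every name and field here is hypothetical, made up just to show one record feeding several services, not an existing tool’s schema:

# A hypothetical single source of truth for a hosted domain
DOMAIN = {
    'name': 'example.com',
    'ipv4': '192.0.2.10',
    'ipv6': '2001:db8::10',
    'mail_host': 'mx1.example.com',
}


def dns_records(domain):
    """Derive the DNS zone entries from the shared record."""
    return [
        (domain['name'], 'A', domain['ipv4']),
        (domain['name'], 'AAAA', domain['ipv6']),
        (domain['name'], 'MX', '10 ' + domain['mail_host']),
    ]


def smtp_config(domain):
    """Derive what the mail service needs to accept mail for this domain."""
    return {'local_domains': [domain['name']], 'mx_host': domain['mail_host']}


# On change, each service re-derives its own view of the same record
print(dns_records(DOMAIN))
print(smtp_config(DOMAIN))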

Details – TBD

I’ll dive into each service in a future post.

PoC – Email

I have already started working on an Email service, Python-based, using the Twisted framework. I tried looking for other frameworks in the past, and I was hoping the async ecosystem in Python had something better, but so far Twisted is the only encompassing solution for network services, and it already has very good implementations of the individual protocols.

Since Email is one of my familiar fields (after DNS), I am starting with a really simple local submission server (no authentication, only acceptance). The goals of the PoC (a minimal sketch follows the list):

  • Support domains
  • Support users
  • Support aliases
  • Support domain catch-all
  • Support local delivery
  • Support server catch-all
  • Support delayed rejections (accept the message, then based on rules, send back a rejection to the sending address or to a postmaster)
  • Support two delivery backends
    • Filesystem (JSON files)
    • SQLite + Filesystem (for message body)
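
To give a taste of what the Twisted side looks like, here is a minimal acceptance-only SMTP server, loosely following Twisted’s own mail server example; the domain list and the console “delivery” are placeholders for illustration, not the actual txmx code:

from zope.interface import implementer
from twisted.internet import defer, reactor
from twisted.mail import smtp

LOCAL_DOMAINS = {b'example.com'}  # placeholder; the real list comes from config


@implementer(smtp.IMessage)
class ConsoleMessage:
    """Collects one message; a real backend would persist it as JSON."""

    def __init__(self):
        self.lines = []

    def lineReceived(self, line):
        self.lines.append(line)

    def eomReceived(self):
        # end of message: "deliver" it, here by printing to the console
        print(b'\n'.join(self.lines).decode('utf-8', errors='replace'))
        self.lines = None
        return defer.succeed(None)

    def connectionLost(self):
        self.lines = None  # connection dropped mid-message, discard it


@implementer(smtp.IMessageDelivery)
class LocalDelivery:
    def receivedHeader(self, helo, origin, recipients):
        return b'Received: from %s by txmx-poc' % (helo[0],)

    def validateFrom(self, helo, origin):
        return origin  # acceptance only: no sender checks in the PoC

    def validateTo(self, user):
        # accept mail only for domains we consider local
        if user.dest.domain in LOCAL_DOMAINS:
            return ConsoleMessage
        raise smtp.SMTPBadRcpt(user)


class LocalSMTPFactory(smtp.SMTPFactory):
    def buildProtocol(self, addr):
        proto = smtp.SMTPFactory.buildProtocol(self, addr)
        proto.delivery = LocalDelivery()
        return proto


reactor.listenTCP(2500, LocalSMTPFactory())  # unprivileged port for testing
reactor.run()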

Common mail setups are separated between an MSA, an MTA, an MDA and other acronyms for services with specific jobs, and those roles will probably still be represented in my solution, but the lines between them have become blurrier as these systems grew less naive (SSL / TLS, anti-spam, anti-virus, rate limits etc.). However, I will try to keep the common concepts in place, so if you know your postfix / dovecot / exim you will feel almost at home.

The work is in a repository I started a few years ago: https://gitlab.com/uda/txmx

I will do my best to keep up with my own standards 😉

FastAPI validate timezones

If you aren’t familiar with Python, FastAPI or Timezones, this might not be the post for you, sorry.

For the rest of you geeks (like me), here is an example of how to validate supported timezones in an input (Query, Path etc.). Scratch that: this is about how to validate dynamic lists of values in FastAPI, using timezones as an example.

As you know, in FastAPI you can validate predefined values by using string Enums (a class subclassing both str and Enum), so how do you validate dynamic lists of values?

According to the docs, this is how you create a dynamic Enum:

>>> from enum import Enum
>>> from datetime import timedelta
>>> class Period(timedelta, Enum):
...     "different lengths of time"
...     _ignore_ = 'Period i'
...     Period = vars()
...     for i in range(367):
...         Period['day_%d' % i] = i

The _ignore_ part removes the temporary names used in the class body (the Period alias for vars() and the loop variable i) so they don’t become enum members.

Back to timezones: according to the docs, you can get the list of timezones supported by the locally installed IANA time zone database:

import zoneinfo
zoneinfo.available_timezones()

And now, on to our mini validation example:

import zoneinfo
from datetime import datetime
from enum import Enum

from fastapi import FastAPI, Query

app = FastAPI()


class Timezone(str, Enum):
    # build the members dynamically from the installed time zone database;
    # _ignore_ keeps the temporary names out of the final enum
    _ignore_ = 'Timezone z'
    Timezone = vars()
    for z in zoneinfo.available_timezones():
        Timezone[z] = z


@app.get('/')
def get_time(zone: Timezone = Query(Timezone.UTC)):
    return {'now': datetime.now(tz=zoneinfo.ZoneInfo(zone))}

This way you can accept a timezone string without passing it into any function before validating that it is a string you can trust, and at the same time you don’t have to maintain that list on your own.

Remember #1: the timezone list is computed when the class body is executed (at app start in this case), so if you run this directly on a machine, remember to restart the app after updating the timezone DB.

Remember #2: most of the Enum items aren’t accessible via attribute notation, since Timezone.America/New_York isn’t valid Python, but you can access them using the dict notation Timezone['America/New_York']. The common names are of course accessible as attributes, like Timezone.UTC and Timezone.EST.
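
A quick illustration of the two access styles:

Timezone['America/New_York']  # dict-style lookup works for every member
Timezone.UTC                  # attribute access works for names that are valid identifiers
# Timezone.America/New_York would be parsed as Timezone.America divided by New_York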

Update #1: op-nslookup

This is an update on Journey to new DNS servers, the project I am now calling op-nslookup.

I was playing around with PowerDNS on one of my existing servers, using a separate port, then ran simple dig queries and wanted to see what the difference would be. TL;DR: I wasn’t impressed.

What I tried:

  • Installed the PowerDNS 4.6 server from their official repo, on an Ubuntu 18.04 server (yeah, I know)
  • Modified the pdns.conf file to point the bind backend at the existing named.conf file, so it loads the existing zones
  • Ran zone2sql into an sqlite3 file, and configured the gsqlite3 backend

In both cases (bind or gsqlite3 as the backend) the performance was the same; I managed to tune it so the average query time was on par with the existing BIND.

The bad part here was that while the minimum was the same (120-124ms), the maximum was way higher for PowerDNS (400-600ms, while BIND was usually up to 300ms). So I think my numbers are not very scientific, and if I want to be sure I’ll need to benchmark using a reliable tool and not simply run queries and copy-paste into calc…
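
Even something as simple as the sketch below would already beat copy-pasting into calc; it uses the dnspython package, and the server IP, port and queried name are placeholders for whatever is being benchmarked:

# A rough query-timing sketch using dnspython; values are placeholders
import statistics
import time

import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['192.0.2.1']  # the server under test
resolver.port = 5300                  # the separate PowerDNS port

timings = []
for _ in range(100):
    start = time.monotonic()
    resolver.resolve('example.com', 'A')
    timings.append((time.monotonic() - start) * 1000)

print(f'min={min(timings):.1f}ms '
      f'avg={statistics.mean(timings):.1f}ms '
      f'max={max(timings):.1f}ms')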

But this isn’t all, and I will definitely need to invest more time in comparing features, since I am sure there are configurations with which I can get more out of it, even speed-wise.

The fact this is a hobby project does not mean I need to lower my standards.

Journey to new DNS servers

I have been managing my own DNS servers for a long time now, most of the time using ISPConfig to manage two to three BIND instances. It has been fun, but I got tired of ISPConfig’s limitations in some areas, and I want to have my own definition of servers.

I have tried that before, several times, but I made the mistake so many made before me: trying to solve everything in one shot. Learning from my paid work experience, it isn’t worth the time. So I am going on a new journey to build my own infrastructure, starting with my self-hosted DNS servers, one piece of the infrastructure at a time.

It might take time, and I am aware that sometimes days and weeks will pass between steps, but it is a journey of learning and sharing. So let’s begin.

My considerations

  1. Quick response
  2. Quick refresh (of changes)
  3. Support many resource records types and security measures
  4. Ability to support DDNS
  5. Active maintainers

Quick response

If I am in the US and the server is in Europe, a 100ms response time in the server itself is really slow. The server must respond very quickly to each request.

Quick refresh

When a change is introduced, taking 60 seconds to update is slow; think about APIs updating DNS records for verification purposes, which need to move to the next step within that timeframe. So my goal is for record changes to take effect within 20 seconds.

Support many RR types and security

I expect the server to support most of the RR types, and specifically:

  • DNAME
  • Some ANAME-like behavior (like CNAME, but usable for the apex domain)
  • CAA

It should fully support DNSSEC with the latest practices.

Active maintainers

The program I choose should be actively maintained; otherwise any bug report, feature request or merge request might just remain in limbo, and I will either have to fork it or move to another solution.

Options

  • BIND 9 (9.16.n for now, it has ESV)
  • PowerDNS

These are the only two left after filtering the commonly available options, given that I already use IPv6 (in both serving and records) and wildcard records, and want to use DNSSEC.

I’ll state the obvious: I need the software to be free and open source, well tested, and to run smoothly on a Linux server (I can manage with BSD if that ever becomes a requirement).

Other options

  • Develop my own server

Well, a lovely idea, I always like to consider it in every project, but this isn’t on the table for now, sorry me.

But, as a consolation, I will be working on all the integration and admin interfaces, and at the slightest hint of trouble using the available solutions I’ll just write my own.

That’s it for now, hitting the road for new DNS servers.

Run tcpdump for a given time using timeout

Lately I needed to run tcpdump on several servers for a given time, and then download the pcap files, all in a programmatic way.

So I got to know the useful timeout command, simple and straightforward.

timeout 120s tcpdump -s 0 -A dst port 80

Remember that if you are not running as root and are using sudo, you will need to put sudo before the timeout command, so it can actually send the SIGTERM without getting Permission denied.

sudo timeout 120s tcpdump -s 0 -A dst port 80
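
Since the original goal was to do this programmatically across several servers, here is a rough sketch of that kind of script; the hosts, paths and the choice of the paramiko library are my illustrative assumptions, and the capture is written with -w so there is an actual pcap file to download:

# Illustrative only: hosts, paths and auth setup are assumptions, not a recipe
import paramiko  # pip install paramiko

HOSTS = ['server1.example.com', 'server2.example.com']
# -w writes a real pcap file we can download afterwards
CMD = 'sudo timeout 120s tcpdump -s 0 -w /tmp/capture.pcap dst port 80'

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host)  # assumes key-based auth and passwordless sudo
    _, stdout, _ = client.exec_command(CMD)
    # timeout exits with 124 when it had to stop tcpdump; that is expected here
    stdout.channel.recv_exit_status()
    sftp = client.open_sftp()
    sftp.get('/tmp/capture.pcap', f'{host}.pcap')
    sftp.close()
    client.close()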

If you want to learn more about timeout:
https://explainshell.com/explain?cmd=timeout+120s+tcpdump+-s+0+-A+dst+port+80

nginx dynamic settings – part 2

In my previous post on nginx dynamic settings, I gave an example of using variables in the index directive to serve a dynamic main file. This time I want to talk about the try_files directive.

In the official examples, linked above, there is one showing how to provide a default placeholder image, which is nice and useful for hard-coded configurations. Most of the other examples are about internal rewrites to language interpreters.

Now, say you host a Drupal multi-site or a WordPress multi-site and want to provide different favicon.ico or robots.txt files per domain; this is where it can come in handy. Here is an example:

location /favicon.ico {
    # look for a per-domain icon first (e.g. example.com.favicon.ico),
    # then fall back to the shared favicon.ico, then 404
    try_files /$http_host.favicon.ico /favicon.ico =404;
    log_not_found off;
    access_log off;
}

This way you can provide a default file for all, and specify a unique one for some.

Notice that for favicon.ico this doesn’t really cover everything, since themes provide “shortcut icon” tags that override the default favicon. But for robots.txt this is very useful.

How to set dynamic nginx settings using variables

Looking through solutions on the internet, I found that for nginx there are plenty of solutions for dynamic root directories, headers and environment variables out there.

Today I was asked about using the same application directory with different cached index files, where the choice is based on the domain being accessed.

The previous solution was to create separate root directories with copies of the same system, which is wrong: just a waste of deployment time and configuration.

A more elegant solution is to use the $http_host variable and define a dynamic index file, like this:

index index.$http_host.php index.php index.html;

Now, be aware that this might not always be the best solution. Also, most of the time this will not be the exact setting or variable to use, but the idea is there.

Short variable swap in PHP >= 5.4

Following David Walsh’s great old post Tweet for Code #2, here is a PHP adaptation of the JavaScript var swap tweet:

// the array literal captures the old $a before $a is reassigned to $b,
// then [0] hands that old value to $b
$b = [$a, $a = $b][0];

Works on PHP 5.4 and up.

I know this is not very practical for daily work, but it can come in handy in a job interview.

♦ ♦ ♦

[Update: June 16, 2016]

In PHP 7.1.x it will finally be possible to use a cleaner short swap, via symmetric array destructuring:

[$a, $b] = [$b, $a];

[/Update]

Gitlab / Github set custom branch as default

When using Gitlab / Github for development with large development groups, with or without a branch per feature, you probably want to use a development branch, and setting it as the default is a good idea, so that a fresh clone automatically checks out the development branch.

Keep in mind that deploying will now require using -b master in the clone command (unless you are using tags, which is really a better idea; though to be fair, in old installations you can’t clone directly into a tag, so you can… no, just upgrade).

I attached screenshots from both Gitlab’s and Github’s settings pages; just change the “Default branch”.

Gitlab:

Github: