- Jonathan Haack
- Haack’s Networking
Tutorial: The tutorial and setup details can be found on the Wiki. The official documentation for Postfix and Dovecot is great, but a big thanks also goes to Linux Babe, again, for a treasure trove of resources and tips.
The mail server tutorial is for users of Debian GNU/Linux who want to set up a proper email server. By proper, I mean that outgoing email must carry valid SPF, DKIM, DMARC, MX, and PTR records, and that the incoming mail server must likewise be able to screen for those records however the sysadmin prefers. Lastly, I personally found Roundcube, postfixadmin, and virtual users to be worthless at best; worse, they add unneeded layers of complexity that get in the way of the purpose: sending and receiving email well. So, in conclusion, I also intend the term proper mail server to mean something bare-bones: just the GNU/Linux pieces required to send and receive good-quality email and function well. The target audience for this setup is therefore high-end residential users, influencers, small businesses (250 and under), and educators.
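To make the record requirements concrete, here is an illustrative zone-file fragment. Everything in it is a placeholder, not my actual records: the domain example.com, the address 203.0.113.25, the DKIM selector `mail`, and the policy values are all assumptions you would replace with your own.

```
; illustrative BIND-style zone fragment -- all names and values are placeholders
example.com.                  IN MX   10 mail.example.com.
mail.example.com.             IN A    203.0.113.25
example.com.                  IN TXT  "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=<your-public-key>"
_dmarc.example.com.           IN TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The PTR record is the odd one out: it lives in the reverse zone for the IP block, which is controlled by whoever owns the address space, not by your domain's zone file.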
Why? Well, I might ask: why not? Why do we all seem to accept that gmail, yahoo, proton mail, sendgrid, etc., are necessary these days for personal email, encrypted email, and newsletters? The common arguments are that self-hosted email servers lack good reputation and will be marked as spam, and/or that they are too hard to maintain. For the first argument, that self-hosted mail gets marked as spam, I present to you four test reports from mail-tester.com. These tests are from haacksnetworking.org and jonathanhaack.com; for each domain, I sent one email from the command line using mail and one from Thunderbird:
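For reference, the command-line send is a one-liner with any mailx-style `mail` client. The sketch below keeps the actual send commented out, since mail-tester.com issues a unique recipient address per test run; the address shown is a placeholder.

```shell
# Compose a throwaway test message (the body is arbitrary).
printf 'Testing deliverability from my new mail server.\n' > /tmp/testbody.txt

# Send it with a mailx-style client; -s sets the subject line.
# The recipient is a placeholder: mail-tester.com generates a unique
# address for each test run, so the send line stays commented out here.
# mail -s "Deliverability test" test-xyz123@mail-tester.com < /tmp/testbody.txt
cat /tmp/testbody.txt
```

After sending, refresh the mail-tester page to see the scored report for that message.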
Okay, so I think I have proven that it is possible, but what is required for the target audience to do this on their own? A basic understanding of common DNS records, and the ability to create them, is of course a requirement. The first uncommon requirement is that you need to own public-facing IPs and be able to create (or request) PTR records for them. In my case, I co-locate a server at Brown Rice Internet. Co-locating gives me the satisfaction of self-hosting with the end-user feel and ease of cloud infrastructure. After all, it is my own cloud infrastructure, if we use the word cloud loosely, of course. But co-locating, although my preference, is hardly required: you can just as easily use a service like Kamatera and get access to PTR records and public IP space.
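When verifying PTR setup, it helps to know exactly what name the reverse lookup queries: the IPv4 octets reversed, under in-addr.arpa. A small sketch, using the documentation placeholder address 203.0.113.25:

```shell
# Build the reverse-DNS (PTR) query name for an IPv4 address.
ptr_name() {
  echo "$1" | awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa\n", $4, $3, $2, $1 }'
}

ptr_name 203.0.113.25   # prints 25.113.0.203.in-addr.arpa

# The actual lookup, once the record exists, is simply:
# dig -x 203.0.113.25 +short
```

The PTR answer should match the hostname your server announces in its SMTP HELO/EHLO, or receiving servers will penalize you.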
As for the next argument, it is always something about “how much work it is,” “how hard it is to maintain,” or “what if it breaks or goes down.” First of all, everything breaks; even so-called enterprise cloud infrastructure breaks. And yes, proper backups and version control are a must, and this solution presumes all of the following are in place:
- Virtualization is in place (virsh, docker, vagrant — whichever, it does not matter).
- Backups are regularly scheduled, including snapshots, files, and databases.
- Version control is conducted at least daily, ideally hourly.
- Hardware redundancy is accounted for: zfs mirrors, RAID, hot spares, etc.
If these are not in place, learn how to do them first. They are not as difficult as the naysayers and others who might try to discourage you may argue. In my case, the two mail servers I created are wholly virtualized with kvm. This means I will never have to rebuild them from scratch; at worst, I will revert to a snapshot or restore point. That is hardly an issue. If you need help creating a virtualization stack, here is a simple recipe. (This tutorial also covers how I set up the redundancy with zfs mirrors.) As for backing up the VMs, you may want to do a full backup monthly or quarterly, and if so, here is a script that will back up and compress (with tar/pbzip2) your VMs for you. Lastly, if you need version control, just use rsync and/or rsnapshot to get your files off-site to your backup workstation. Since I like being very prepared, I also have an identical server, a “hot spare” as it were, ready to go in case this one blows up and dies. The absolute worst that can happen is that I lose 2-3 days traveling to the data center, installing the spare, copying over the tarballs, and importing the latest databases and files. That is the worst case, moreover, and an entirely acceptable timeframe.
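The periodic tar/pbzip2 backup can be sketched as follows. The paths and image name are placeholders (the sketch even creates a fake image so it runs anywhere), and it falls back to plain bzip2 when pbzip2, the parallel variant, is not installed:

```shell
# Sketch: tar + (p)bzip2 a VM image directory into a dated archive.
# SRC/DEST and the image name are placeholders, not my real paths.
SRC="${SRC:-/tmp/demo-vm}"          # directory holding the VM image(s)
DEST="${DEST:-/tmp/vm-backups}"     # where the dated tarballs land

mkdir -p "$DEST"
# Demo payload so the sketch is runnable anywhere:
mkdir -p "$SRC" && echo "fake disk image" > "$SRC/mailserver.qcow2"

# Prefer pbzip2 (parallel) for compression; fall back to bzip2.
if command -v pbzip2 >/dev/null 2>&1; then COMP=pbzip2; else COMP=bzip2; fi

STAMP="$(date +%Y-%m-%d)"
tar -cf - -C "$SRC" . | "$COMP" > "$DEST/vms-$STAMP.tar.bz2"

ls "$DEST"
```

Getting the archives off-site is then one rsync invocation to your backup workstation, e.g. `rsync -avP "$DEST"/ backup-host:vm-backups/` (backup-host being a stand-in for your own machine).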
Okay, so the last question people might ask is about cost. Yes, these solutions cost money. A common co-location setup can run about 2K a year, and the hardware might be an additional 1K or 2K. Those are not pennies. I will note, however, that comparing apples to apples, equivalently sized and provisioned AWS or Digital Ocean services will cost two or three times what I paid for the entire setup. You exchange monthly bills and exorbitant costs, however, for the upkeep and maintenance of your own hardware. But that, my friends, is a hacking gift that keeps on giving.