The Mailroom: Part 4 – Client Access, Webmail, and Series Wrap-Up

By Collin

Time to complete: 1-2 hours
Additional monthly cost: $0 (Roundcube is self-hosted)
Prerequisites: Parts 1-3 completed – mail server sending and receiving, SpamAssassin filtering, fail2ban active

Introduction

Parts 1 through 3 built a functional mail server from scratch — component selection, deployment, outbound relay, spam filtering, and security hardening. What’s missing is the last mile: actually using the thing day to day.

This final article covers:

  • Configuring email clients properly (desktop and mobile)
  • Deploying Roundcube webmail for browser-based access
  • Final deliverability checks and validation
  • A retrospective on the full project — what worked, what didn’t, and what the stack actually costs to run

If you’ve been following along, your server already handles the complete email lifecycle. This article makes it accessible from anywhere and confirms everything is working the way it should be.

A note on the relay: If you followed Part 3 when it was originally published, you may have set up SendGrid as your outbound relay. I’ve since migrated to SMTP2GO after two SendGrid free-tier account lockouts caused by their anti-abuse automation. Part 3 has been updated to reflect this, and everything in this article applies regardless of which relay you’re using. The client configuration is identical — the relay is transparent to everything downstream.

Email Client Configuration

The Quick Version

Every email client needs the same four pieces of information:

Incoming (IMAP):

Server:     mail.[your-domain].com
Port:       993
Security:   SSL/TLS
Username:   [your-username]
Password:   [your-password]

Outgoing (SMTP):

Server:     mail.[your-domain].com
Port:       587
Security:   STARTTLS
Auth:       Normal password
Username:   [your-username]
Password:   [your-password]

These settings are the same regardless of client. The server-side configuration from Parts 2 and 3 already handles everything — Dovecot on 993 for IMAP, Postfix submission on 587 for SMTP, both requiring TLS and authentication.

Thunderbird (Desktop)

Thunderbird’s auto-detect usually gets the IMAP side right but can stumble on SMTP settings, especially with non-standard relay configurations.

  1. Open Thunderbird → Account Settings → Account Actions → Add Mail Account
  2. Enter your name, email address ([username]@[your-domain].com), and password
  3. Click Configure manually (don’t trust auto-detect here)
  4. Set incoming to IMAP, mail.[your-domain].com, port 993, SSL/TLS
  5. Set outgoing to SMTP, mail.[your-domain].com, port 587, STARTTLS
  6. Authentication method: Normal password for both
  7. Click Done

If Thunderbird tries to use port 465 or SSL/TLS for outgoing instead of STARTTLS on 587, it will fail. Port 465 (implicit TLS) isn’t enabled in the master.cf configuration from Part 2 — only the submission service on 587 with STARTTLS is active.

Testing send:

Compose a test email to a Gmail or Outlook address. Check the Sent folder — if the message appears there, Thunderbird successfully authenticated via SASL on port 587 and Postfix relayed it through your outbound provider.

If sending fails with a connection timeout, verify from the server that port 587 is still listening:

sudo ss -tlnp | grep :587
# Should show: LISTEN 0 100 0.0.0.0:587

If it’s listening but Thunderbird can’t connect, check your firewall:

sudo ufw status | grep 587
# Should show: 587/tcp ALLOW IN Anywhere

Common Thunderbird issues:

  • “Login to server failed”: Double-check username. It’s the Linux username (e.g., collin), not the full email address, unless your Dovecot auth is configured for email-based login.
  • “Connection to server timed out” on send: Wrong port or security type. Confirm port 587 with STARTTLS, not 465 with SSL/TLS.
  • Emails send but bounce back: Check the bounce message. If it mentions relay authentication, verify /etc/postfix/sasl_passwd is correct and the hash was regenerated (sudo postmap /etc/postfix/sasl_passwd). This is the single most common gotcha with Postfix relay configuration — editing the file without running postmap means Postfix is still using the old credentials.
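The postmap gotcha in that last bullet lends itself to a quick check: if sasl_passwd has a newer modification time than sasl_passwd.db, the file was edited after postmap last ran. A minimal sketch, using scratch files in the current directory to stand in for the real /etc/postfix/sasl_passwd pair:

```shell
# Scratch files standing in for /etc/postfix/sasl_passwd and sasl_passwd.db
touch -d '1 hour ago' sasl_passwd.db   # pretend postmap ran an hour ago
touch sasl_passwd                      # pretend we just edited the credentials

# [ A -nt B ] is true when file A is newer than file B
if [ sasl_passwd -nt sasl_passwd.db ]; then
  echo "sasl_passwd edited after last postmap run - rerun postmap"
else
  echo "hash file is up to date"
fi
# prints: sasl_passwd edited after last postmap run - rerun postmap
```

On the server, pointing the same test at the real paths makes a decent pre-flight check before blaming the relay.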

iOS Mail (iPhone/iPad)

iOS Mail handles self-hosted servers well, which makes it a solid option for mobile access without installing anything extra.

  1. Settings → Mail → Accounts → Add Account → Other → Add Mail Account
  2. Enter:
    • Name: Your Name
    • Email: [username]@[your-domain].com
    • Password: your password
    • Description: whatever you want
  3. Tap Next — iOS will attempt auto-discovery and fail. That’s expected.
  4. Select IMAP (not POP)
  5. Under Incoming Mail Server:
    • Host Name: mail.[your-domain].com
    • Username: [your-username]
    • Password: your password
  6. Under Outgoing Mail Server:
    • Host Name: mail.[your-domain].com
    • Username: [your-username]
    • Password: your password
  7. Tap Next — iOS will verify the connection

iOS will negotiate TLS automatically. It uses port 993 for IMAP and typically tries 587 with STARTTLS for SMTP.

If verification fails:

iOS shows a generic “Cannot Verify Server Identity” warning for Let’s Encrypt certificates on some older versions. Tap Continue — this is a trust chain issue with the iOS version, not a certificate problem. The connection is still encrypted.

If it genuinely can’t connect, make sure you’re not on a network that blocks port 993 or 587 (some corporate WiFi does this).

Android (Gmail App or K-9 Mail)

The Gmail app on Android supports third-party IMAP accounts:

  1. Open Gmail → Settings → Add account → Other
  2. Enter your email address
  3. Select IMAP
  4. Enter the same server settings as above
  5. For security type, select SSL/TLS for incoming and STARTTLS for outgoing

K-9 Mail follows a similar flow and tends to handle self-hosted servers with fewer quirks than the Gmail app.

macOS Mail

  1. System Settings → Internet Accounts → Add Other Account → Mail account
  2. Enter name, email, password
  3. macOS will fail auto-discovery — click Sign In anyway, then configure manually
  4. Same settings as above

macOS Mail sometimes defaults to checking certificates strictly. If you see certificate warnings, it’s the same Let’s Encrypt trust chain behavior as iOS.

Roundcube Webmail

Why Bother with Webmail

Native email clients (Thunderbird, iOS Mail) are better for daily use. Webmail fills a different role:

  • Accessing email from a machine where you can’t install a client
  • Quick checks when you don’t have your phone
  • Administrative tasks (checking spam folders, bulk operations)
  • Showing other people your setup without handing them your laptop

Part 1 selected Roundcube for its mature plugin ecosystem, active development, and clean interface.

Where to Run It

Roundcube is just a PHP web app that talks IMAP and SMTP to your mail server — it doesn’t need to live on the mail server itself. You have two reasonable options:

On the mail server droplet — simplest setup, everything on one box. Downside: you’re adding a web server and PHP to your mail infrastructure, which increases attack surface.

On a separate machine — a VM, another VPS, or even a Raspberry Pi. Roundcube connects to the mail server over the network using the same ports as any other email client. This keeps your mail server lean.

Both approaches use the same Docker Compose setup. The only difference is whether ROUNDCUBEMAIL_DEFAULT_HOST and ROUNDCUBEMAIL_SMTP_SERVER point to localhost or your mail server’s hostname.

Architecture

Browser → Roundcube (Apache + PHP)
              → IMAP: mail.[your-domain].com:993
              → SMTP: mail.[your-domain].com:587

That’s it. Roundcube is an email client that runs in a browser. Nothing more complicated than that.

Deploy with Docker Compose

You’ll need Docker and Docker Compose installed. If you don’t have them yet:

# Install Docker (Ubuntu/Debian)
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in for group membership to take effect

Create a directory for Roundcube:

mkdir -p ~/roundcube/{config,data}
cd ~/roundcube

Important: Create the config file before starting the container. If Docker starts and the file doesn’t exist, it will create a directory in its place, and the container will fail with a mount error. If this happens to you, docker compose down, rm -rf config/custom.inc.php, create the actual file, then docker compose up -d.
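That recovery can be scripted as a small guard to run before docker compose up. The sketch below simulates the failure mode first (Docker made a directory at the mount path) and then repairs it; the path matches the compose file in this guide:

```shell
# Simulate the failure: Docker created a directory at the bind-mount path
mkdir -p config/custom.inc.php

CONFIG=config/custom.inc.php
# A directory here means the file did not exist when the container started
if [ -d "$CONFIG" ]; then
  rm -rf "$CONFIG"
fi
# Create a minimal valid PHP stub if the file is still missing
[ -f "$CONFIG" ] || printf '<?php\n' > "$CONFIG"

ls -ld "$CONFIG"   # first character '-' confirms it is now a regular file
```

Running this as a pre-step makes the compose setup idempotent: it never hurts when the file already exists, and it fixes the mount error when it doesn't.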

Create the compose file:

nano docker-compose.yml
services:
  roundcube:
    image: roundcube/roundcubemail:latest-apache
    container_name: roundcube
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      - ROUNDCUBEMAIL_DEFAULT_HOST=ssl://mail.[your-domain].com
      - ROUNDCUBEMAIL_DEFAULT_PORT=993
      - ROUNDCUBEMAIL_SMTP_SERVER=tls://mail.[your-domain].com
      - ROUNDCUBEMAIL_SMTP_PORT=587
      - ROUNDCUBEMAIL_UPLOAD_MAX_FILESIZE=25M
      - ROUNDCUBEMAIL_SKIN=elastic
      - ROUNDCUBEMAIL_PLUGINS=archive,zipdownload
      - ROUNDCUBEMAIL_DB_TYPE=sqlite
    volumes:
      - ./data/db:/var/roundcube/db
      - ./data/temp:/tmp/roundcube-temp
      - ./config/custom.inc.php:/var/roundcube/config/custom.inc.php:ro

That’s the entire deployment. The latest-apache image includes Apache and PHP — no separate web server container needed.

If Roundcube runs on the same box as the mail server, change the IMAP/SMTP hosts to localhost:

      - ROUNDCUBEMAIL_DEFAULT_HOST=ssl://localhost
      - ROUNDCUBEMAIL_SMTP_SERVER=tls://localhost

If Roundcube runs on a different machine, use the mail server’s public hostname as shown in the default config above. Make sure the machine running Roundcube can reach your mail server on ports 993 and 587. This is the part that tripped me up — more on that in the firewall section below.

Custom Configuration

Create the config file for UI preferences. Do this before running docker compose up for the first time:

nano config/custom.inc.php
<?php
// Use Elastic skin (modern, responsive)
$config['skin'] = 'elastic';

// Show email threading
$config['default_list_mode'] = 'threads';

// Preview pane layout
$config['layout'] = 'widescreen';

// Auto-save drafts every 60 seconds
$config['draft_autosave'] = 60;

// Display format
$config['date_format'] = 'Y-m-d';
$config['message_show_email'] = true;

// SMTP identity - lock to authenticated user
$config['smtp_user'] = '%u';
$config['smtp_pass'] = '%p';

// Security
$config['ip_check'] = true;
$config['session_lifetime'] = 30;
$config['password_charset'] = 'UTF-8';

Start Roundcube

cd ~/roundcube
docker compose up -d

Watch the logs for startup issues:

docker logs roundcube -f
# Look for Apache startup messages and "ready to handle connections"
# Press Ctrl+C to stop watching

Verify it’s running:

curl -s http://localhost:8080 | head -20
# Should return HTML (the Roundcube login page)

Firewall Gotcha: Remote Roundcube Can’t Reach the Mail Server

If Roundcube runs on a different machine and login times out with “Connection to storage server failed,” the container can’t reach your mail server on ports 993 (IMAP) or 587 (SMTP submission).

The latest-apache container image is stripped down — no ping or nc available. Test connectivity with PHP instead:

docker exec roundcube php -r "var_dump(@fsockopen('ssl://mail.[your-domain].com', 993, \$errno, \$errstr, 10)); echo \$errno . ': ' . \$errstr;"

If that returns bool(false) and 110: Connection timed out, the traffic is being blocked somewhere between Roundcube and the mail server.

Things to check:

  1. Mail server firewall (UFW): Ports 993 and 587 should allow connections from anywhere (or at least from the IP running Roundcube). sudo ufw status on the mail server.
  2. Cloud provider firewall: DigitalOcean, AWS, and others have cloud firewalls that sit in front of the VM. Check your provider’s dashboard — these are separate from UFW and easy to forget about.
  3. Local network firewall: This is the one that got me. If Roundcube runs on a homelab VM behind something like OPNsense or pfSense, your VM might have restricted outbound ports. In my case, the Services VM had an OPNsense alias (Services_Outbound_Ports) that controlled which ports it could reach on the internet. Ports 993 and 587 weren’t in the list because the VM had never needed them before. Adding them to the alias fixed it immediately.

The mail server’s UFW can show everything wide open, and you can still get timeouts if the traffic never leaves your local network. Check outbound firewall rules on the machine running Roundcube, not just inbound rules on the mail server.
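A quick way to test reachability from the Roundcube host itself (outside the container) is bash's /dev/tcp pseudo-device, which needs no extra tools. check_port here is a hypothetical helper, not part of any package:

```shell
# check_port HOST PORT: try to open a TCP connection within 5 seconds
check_port() {
  if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed or filtered"
  fi
}

check_port mail.[your-domain].com 993   # IMAP
check_port mail.[your-domain].com 587   # SMTP submission
```

If the host can reach both ports but the container still times out, the problem is inside Docker networking rather than the firewalls discussed above.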

First Login

  1. Open http://[your-server-ip]:8080 in your browser
  2. Log in with your mail server credentials:
    • Username: your Linux username (not the full email address)
    • Password: your mail account password
  3. You should see your inbox with any existing emails

Send a Test Email

Once logged in, compose a test to your Gmail address. Check:

  1. Does it arrive in Gmail’s inbox (not spam)?
  2. In Gmail, click the three dots → Show original — do SPF, DKIM, and DMARC all show pass?
  3. Reply from Gmail — does the reply come back into Roundcube?

If all three work, Roundcube is fully functional. The email takes the same path as Thunderbird or iOS Mail — through Postfix’s submission service on port 587, then out via your relay.

The Elastic Skin

Roundcube’s default elastic skin is responsive and reasonably modern. It’s not going to be mistaken for Gmail, but it handles the basics well:

  • Threaded conversations
  • Drag-and-drop attachments
  • Responsive on mobile browsers
  • Dark mode (follows system preference)

The widescreen layout set in custom.inc.php gives you a three-column view (folders / message list / preview) which is the most comfortable for daily use.

Optional: Adding HTTPS

The setup above runs on plain HTTP, which is fine if Roundcube is only accessible on your local network. If you’re exposing it to the internet or just want TLS, you have a few options:

Option A: Reverse proxy (Nginx, Caddy, Traefik)

If you already run a reverse proxy for other services, add Roundcube behind it. Remove the ports section from the compose file and connect it to your proxy’s Docker network instead. Point your proxy at the container on port 80.

For Traefik, replace the ports section with labels:

    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.roundcube.rule=Host(`webmail.[your-domain].local`)"
      - "traefik.http.routers.roundcube.entrypoints=websecure"
      - "traefik.http.routers.roundcube.tls=true"
      - "traefik.http.services.roundcube.loadbalancer.server.port=80"
    networks:
      - web

networks:
  web:
    external: true

Make sure the network name matches whatever your Traefik setup actually uses. Run docker network ls to check — it might be web, traefik, traefik_default, or something else entirely depending on how you set it up.

For Caddy, a minimal Caddyfile block handles it:

webmail.[your-domain].com {
    reverse_proxy roundcube:80
}

Option B: Cloudflare Tunnel

If Roundcube runs at home and you don’t want to open ports, a Cloudflare Tunnel can expose it securely. Cloudflare handles TLS termination and you get a public URL without touching your firewall.

Option C: Self-signed cert for local use

If it’s strictly internal, a self-signed cert and a quick Nginx config in front of Roundcube works. You’ll get browser warnings but the traffic will be encrypted on your LAN.

For most people following this guide, plain HTTP on a local network is fine to start with. You can add TLS later without changing anything about Roundcube itself.

Optional: Modern Skin Alternatives

If the elastic skin doesn’t cut it for you:

Roundcube community plugins can be added via the ROUNDCUBEMAIL_PLUGINS environment variable or mounted into the plugins directory. Check the Roundcube plugin repository for UI enhancements.

SnappyMail is a completely separate webmail client (not a Roundcube skin) with a more modern interface. If you find yourself spending more time fighting Roundcube’s UI than reading email, it’s worth evaluating as a replacement. The Docker deployment is similar, and it talks to the same IMAP/SMTP backend.

Final Checks and Validation

Before calling this project done, run through the full verification to confirm everything is actually working end to end.

SPF Verification

# From any machine:
dig TXT [your-domain].com +short

You should see:

"v=spf1 ip4:[your-droplet-ip] include:spf.smtp2go.com include:amazonses.com -all"

If you’re still on SendGrid, the include will be sendgrid.net instead. Either way, the include:amazonses.com is there for when SES gets approved. It doesn’t hurt anything in the meantime, and you won’t have to touch DNS again when you switch relays.
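A tiny structural check on the record string can catch the usual mistakes (missing v=spf1 prefix, missing terminator). check_spf is a hypothetical helper and nowhere near a full RFC 7208 parser, but it covers the failure modes above:

```shell
# check_spf RECORD: rough structural check on an SPF TXT record string
check_spf() {
  case "$1" in
    "v=spf1 "*"-all"|"v=spf1 "*"~all") echo "looks sane" ;;
    "v=spf1"*) echo "warning: record has no -all/~all terminator" ;;
    *) echo "not an SPF record" ;;
  esac
}

# 203.0.113.10 is a documentation address standing in for your droplet IP
check_spf "v=spf1 ip4:203.0.113.10 include:spf.smtp2go.com -all"
# prints: looks sane
```

Feeding it the output of the dig command above (with the surrounding quotes stripped) turns it into a one-line DNS sanity check.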

DKIM Verification

Send an email to a Gmail address, then open it in Gmail:

  1. Open the email
  2. Click the three dots → Show original
  3. Look for the authentication results header:
Authentication-Results: mx.google.com;
    dkim=pass header.d=[your-domain].com
    spf=pass (google.com: domain of [username]@[your-domain].com designates...)
    dmarc=pass (p=NONE sp=NONE dis=NONE)

All three should show pass.

If DKIM shows fail or none, your relay’s domain authentication may not be complete. For SMTP2GO, check the CNAME records:

# Verify SMTP2GO CNAME records exist
dig CNAME em[your-id].[your-domain].com +short
dig CNAME s[your-id]._domainkey.[your-domain].com +short
# Should return smtp2go.net addresses

For SendGrid, the records are s1._domainkey and s2._domainkey pointing to sendgrid.net.

DMARC Status and Upgrade Path

Part 2 set DMARC to p=none (monitoring only). This was deliberate — you want to see reports before enforcing a policy.

Check your current record:

dig TXT _dmarc.[your-domain].com +short
# Should show: "v=DMARC1; p=none; rua=mailto:dmarc@[your-domain].com"

The upgrade schedule:

After 30 days of clean DMARC reports (no legitimate mail failing authentication):

v=DMARC1; p=quarantine; rua=mailto:dmarc@[your-domain].com; pct=100

After 60 days with no issues:

v=DMARC1; p=reject; rua=mailto:dmarc@[your-domain].com; pct=100

p=reject is the end goal — it tells receiving servers to outright reject mail that fails SPF and DKIM. This prevents anyone from spoofing your domain.

Don’t rush to p=reject. If something is misconfigured, you’ll silently lose legitimate email. The monitoring period exists for a reason.

Reading DMARC reports:

The rua address receives XML aggregate reports from providers like Gmail and Outlook. They’re not fun to read raw. Tools like DMARC Analyzer or dmarcian can parse them into something readable.
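As a rough first pass before reaching for a real parser, grep and sed can pull the interesting fields out of an aggregate report. The XML below is a hand-made fragment for illustration, not an actual report:

```shell
# Minimal DMARC aggregate-report fragment (illustrative only)
report='<record><row>
  <source_ip>203.0.113.10</source_ip>
  <policy_evaluated><disposition>none</disposition>
  <dkim>pass</dkim><spf>pass</spf></policy_evaluated>
</row></record>'

# Extract the sending IPs and how receivers disposed of the mail
echo "$report" | grep -o '<source_ip>[^<]*' | sed 's/<source_ip>//'
echo "$report" | grep -o '<disposition>[^<]*' | sed 's/<disposition>//'
# prints: 203.0.113.10
#         none
```

Seeing a disposition other than none for an IP you don't recognize is exactly the signal the monitoring period is meant to surface.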

Mail-Tester Score

mail-tester.com gives you a comprehensive score for your outbound email configuration:

  1. Go to mail-tester.com — it shows you a unique email address
  2. Send an email from your server (via Thunderbird or Roundcube) to that address
  3. Wait 10 seconds, then click Check your score

A score of 9/10 or higher means your configuration is solid. Common deductions:

  • Missing DKIM: Relay domain authentication not complete
  • No reverse DNS: PTR record not set (covered in Part 2)
  • Listed on blacklist: Check if your relay’s sending IP is clean
  • No body content: Send a real email with actual text, not just a subject line

End-to-End Sending Test

Test the full path from each client:

From Thunderbird:

  1. Compose email to a Gmail address
  2. Send
  3. Check Gmail → Show original → Verify SPF/DKIM/DMARC pass
  4. Reply from Gmail
  5. Verify reply arrives in Thunderbird

From iOS/Android:

  1. Same test — send to Gmail, verify headers, reply back
  2. Confirm push notifications work (if configured)

From Roundcube:

  1. Log into webmail
  2. Send to Gmail
  3. Verify headers
  4. Reply from Gmail
  5. Verify reply appears in Roundcube inbox

If all three paths work with passing authentication headers, the mail server is fully operational.

Relay Verification

Confirm your server isn’t an open relay (this was checked in Part 3, but worth verifying again):

telnet mail.[your-domain].com 25
# After connecting:
EHLO test.com
MAIL FROM:<test@example.com>
RCPT TO:<test@example.net>

You should get:

554 5.7.1 <test@example.net>: Relay access denied

If it says 250 OK, your server is an open relay and will be blacklisted within hours. Go back to Part 2 and check smtpd_recipient_restrictions in main.cf.

Series Retrospective

What Got Built

Over four articles, from bare metal to production:

Infrastructure:

  • DigitalOcean droplet running Ubuntu 24.04 LTS
  • Postfix handling SMTP (receiving on 25, submission on 587)
  • Dovecot providing IMAP access on 993
  • SpamAssassin filtering incoming mail
  • SMTP2GO relaying outbound mail (via port 2525)
  • Let’s Encrypt TLS certificates with auto-renewal
  • fail2ban protecting SSH and Postfix
  • Daily S3 backups with 30-day retention
  • Roundcube webmail for browser-based access

DNS and Authentication:

  • MX, A, SPF, DKIM, DMARC, PTR records
  • SMTP2GO domain authentication (CNAME records)
  • DMARC monitoring with upgrade path to enforcement

Client Access:

  • Desktop: Thunderbird
  • Mobile: iOS Mail / Android Gmail
  • Web: Roundcube

What It Actually Costs

Service                                     Monthly Cost
DigitalOcean mail droplet                   $6.00
DigitalOcean WordPress droplet (existing)   $6.00
AWS S3 backups                              ~$0.50
SMTP2GO relay                               $0.00 (free tier)
Roundcube                                   $0.00 (self-hosted)
Total                                       ~$12.50/month

The incremental cost over the existing WordPress hosting is $6.50/month for a fully self-hosted email server with professional deliverability.

For comparison:

  • Google Workspace: $7.20/user/month ($86.40/year)
  • Fastmail: $5/month ($60/year)
  • This setup: $6.50/month incremental ($78/year) — plus you actually understand how email works

The Relay Saga: SendGrid → SMTP2GO → SES

This project has been through three relay providers, which honestly taught more than getting it right the first time would have.

Attempt 1: AWS SES. The original plan. Denied production access because the account was new with no billing history. The response was clear: build up usage of other AWS services over 2-3 billing cycles, then reapply.

Attempt 2: SendGrid. Pivoted here as the backup plan. Worked great initially. Then the first free-tier account got locked by their anti-abuse automation — no warning, no appeal that went anywhere. Created a second account. That got locked too, within weeks. Two accounts, same pattern: low-volume personal email flagged as suspicious by automated systems with no meaningful human review.

Attempt 3: SMTP2GO. The migration took about 15 minutes — new CNAME records in Cloudflare, updated credentials in /etc/postfix/sasl_passwd, postmap, restart. Working ever since with no issues. The free tier gives 1,000 emails/month, which is more than enough for personal use.

The takeaway: free tiers on high-volume commercial platforms aren’t designed for low-volume personal senders. You’re an edge case in their abuse detection models. SMTP2GO’s free tier is genuinely usable for this purpose. And if it weren’t, the two-line relay swap means you’re never locked into anything.

When SES eventually gets approved, the migration is two commands:

# Update relay credentials
sudo nano /etc/postfix/sasl_passwd
# Change: [mail.smtp2go.com]:2525 user:password
# To:     [email-smtp.us-east-2.amazonaws.com]:587 AKIA...:secretkey

sudo postmap /etc/postfix/sasl_passwd

# Update relay host
sudo nano /etc/postfix/main.cf
# Change: relayhost = [mail.smtp2go.com]:2525
# To:     relayhost = [email-smtp.us-east-2.amazonaws.com]:587

sudo systemctl restart postfix

No DNS changes, no client reconfiguration. The relay is transparent to everything upstream. The S3 backup system from Part 2 generates consistent AWS billing (~$0.50/month), and after 60-90 days of that, the SES reapplication should have a better chance.
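The same swap can be done non-interactively with sed. This sketch operates on a scratch main.cf in the current directory; on the server the target would be /etc/postfix/main.cf, followed by the postmap and restart steps already shown:

```shell
# Scratch main.cf standing in for /etc/postfix/main.cf
printf 'myhostname = mail.example.com\nrelayhost = [mail.smtp2go.com]:2525\n' > main.cf

NEW_RELAY='[email-smtp.us-east-2.amazonaws.com]:587'
# -i.bak edits in place and keeps the original as main.cf.bak
sed -i.bak "s|^relayhost = .*|relayhost = ${NEW_RELAY}|" main.cf

grep '^relayhost' main.cf
# prints: relayhost = [email-smtp.us-east-2.amazonaws.com]:587
```

The .bak copy gives you an instant rollback if the new relay misbehaves: restore it, postmap, restart.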

What Actually Went Wrong

Documenting only the successes would be dishonest. Here’s what the series ran into:

SendGrid free tier lockouts: The biggest surprise and the reason Part 3 got updated mid-series. Two accounts locked by anti-abuse automation, both for normal personal email volume. If you go with SendGrid, use a paid plan or be prepared to pivot.

DigitalOcean port 587 blocking: New droplets can’t send on port 587 outbound. This is an anti-spam measure that’s nowhere in the obvious documentation. Both SendGrid and SMTP2GO support port 2525 as an alternative. If you’re on a different VPS provider, check whether they block outbound SMTP ports before you start.

SpamAssassin mail loop: The content filter integration between Postfix and SpamAssassin creates a loop if the port 10025 off-ramp isn’t configured with an empty content_filter=. This is a known gotcha, but it’s the kind of thing that causes a 30-minute debugging session the first time you hit it.

fail2ban Dovecot filter: The default regex filter for Dovecot on Ubuntu 24.04 has compatibility issues. Rather than spend time debugging filter patterns, I left the Dovecot jail disabled; SSH and Postfix SASL protection cover the critical attack vectors.

SendGrid signup rejection: Using a privacy-focused email provider (ProtonMail, Tutanota) to sign up for SendGrid results in silent rejection. Gmail or Outlook addresses work instantly. Makes sense from their anti-abuse perspective, but it’s not documented.

Stale Postfix hash files: Editing /etc/postfix/sasl_passwd without running sudo postmap /etc/postfix/sasl_passwd afterward means Postfix keeps using the old credentials from the .db file. This bit me during the SMTP2GO migration. The fix is always the same: edit the file, run postmap, restart Postfix.

master.cf typo: permit_sasl_authenticated is correct. permit_sasl_authentication (ending in -ion instead of -ed) is not a real Postfix option and silently does nothing. Fun to debug.

Docker volume mount directory bug: If you start a Docker container before creating a file that’s bind-mounted in the compose file, Docker helpfully creates a directory at that path instead of waiting. The container then fails with a mount error about “not a directory.” The fix: docker compose down, delete the directory, create the actual file, then docker compose up -d.

Roundcube firewall timeout (homelab-specific): If Roundcube runs on a homelab VM behind a firewall like OPNsense, the VM’s outbound port restrictions might not include 993 (IMAP) and 587 (SMTP). The mail server’s UFW can be wide open and you’ll still get connection timeouts if the traffic never leaves your local network. Check outbound rules on the Roundcube host, not just inbound rules on the mail server.

Skills Acquired

This is the part that matters for a cybersecurity career. Running a mail server teaches you:

Email authentication protocols: SPF, DKIM, and DMARC aren’t abstract concepts anymore. You’ve configured them, tested them, and seen what happens when they fail. Understanding these protocols is directly relevant to phishing analysis, email forensics, and security architecture.

DNS at a practical level: MX records, CNAME chains, PTR records, TXT records with specific syntax requirements. DNS misconfiguration is behind a significant percentage of email deliverability issues, and understanding DNS deeply is foundational for security work.

Linux service administration: Postfix, Dovecot, SpamAssassin, fail2ban — each with its own configuration format, logging approach, and failure modes. The troubleshooting skills transfer to any Linux service management role.

Multi-cloud architecture: DigitalOcean for compute, AWS for storage and (eventually) email relay, Cloudflare for DNS and CDN, SMTP2GO for relay. Understanding how to integrate services across providers is increasingly relevant.

Log analysis and troubleshooting: journalctl, postfix check, doveconf -n, reading raw email headers — these are the tools you use when something breaks at 2 AM. The debugging process is more valuable than the working configuration.

Security hardening: TLS configuration, firewall rules, fail2ban, SASL authentication, relay restrictions. Each layer serves a specific purpose, and understanding why each control exists matters more than the implementation details.

Incident response (unplanned): The SendGrid lockouts were essentially a service disruption that required rapid diagnosis and migration to an alternative provider. That’s incident response in miniature — identify the impact, find the alternative, execute the migration, verify the fix, update the documentation. Not a bad skill to practice on a personal project rather than in production at 3 AM.

What’s Next (Optional, Not Covered)

For anyone who wants to keep building on this:

  • Greylisting — temporary rejection of first-time senders; legitimate servers retry, spam bots don’t
  • rspamd migration — replace SpamAssassin with rspamd for better performance and ML-based filtering (covered in Part 1’s alternative stack)
  • Multiple users — additional mail accounts for your domain
  • DMARC report analysis — parse the XML aggregate reports for deliverability insights
  • SES migration — swap SMTP2GO for AWS SES when approved (instructions above)
  • Sieve filtering — server-side mail rules via Dovecot’s ManageSieve plugin

Key File Locations (Quick Reference)

Configuration:
  /etc/postfix/main.cf          - Postfix main config
  /etc/postfix/master.cf        - Postfix service definitions
  /etc/postfix/sasl_passwd      - SMTP relay credentials
  /etc/dovecot/conf.d/          - Dovecot configuration files
  /etc/spamassassin/local.cf    - SpamAssassin rules
  /etc/fail2ban/jail.local      - fail2ban jail config

Certificates:
  /etc/letsencrypt/live/mail.[your-domain].com/

Mail Storage:
  /home/[username]/Maildir/     - User mailboxes

Logs:
  sudo journalctl -u postfix    - Postfix logs
  sudo journalctl -u dovecot    - Dovecot logs
  sudo journalctl -u spamassassin
  sudo fail2ban-client status   - Ban statistics

Backups:
  /usr/local/bin/backup-mail.sh - Backup script
  /var/log/mail-backup.log      - Backup logs
  S3: [your-domain]-mail-backups-[timestamp]/

Monitoring:
  /usr/local/bin/check-disk-space.sh
  /usr/local/bin/check-cert-expiry.sh

Important Commands

# Service management
sudo systemctl restart postfix dovecot spamassassin fail2ban
sudo systemctl status postfix dovecot spamassassin fail2ban

# Mail queue
sudo mailq                      # View queue
sudo postsuper -d ALL           # Clear queue (nuclear option)

# Testing
sudo postfix check              # Configuration validation
sudo doveconf -n                # Dovecot effective config
openssl s_client -connect mail.[your-domain].com:993 -quiet  # Test IMAP TLS

# Relay credentials (always run postmap after editing sasl_passwd)
sudo nano /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
sudo systemctl restart postfix

# Monitoring
sudo fail2ban-client status sshd
sudo fail2ban-client status postfix-sasl
aws s3 ls s3://[your-bucket]/ --recursive --human-readable | tail -5

Conclusion

This series started with a question: is self-hosting email in 2026 worth the effort?

The answer depends on what you’re trying to get out of it. If you just need email, buy Fastmail and move on. If you want to understand how one of the internet’s most fundamental protocols actually works — how messages get routed, authenticated, filtered, and delivered — there’s no substitute for building it yourself.

The hybrid approach (self-hosted server with commercial relay) reflects a real engineering tradeoff. IP reputation is a genuine infrastructure problem that takes months to solve organically. Using a relay service for outbound delivery isn’t a compromise — it’s the same architecture that production mail systems use at scale. The difference is you understand every component in the chain.

The relay journey from SES denial to SendGrid lockouts to SMTP2GO stability wasn’t in the original plan, but it ended up demonstrating something important: when you understand your own infrastructure, migrating between providers is a 15-minute job, not a crisis. That portability — knowing exactly what each component does and how to swap it — is the actual value of building this yourself.

Four articles. About 8-10 hours of hands-on work. $6.50/month incremental cost. One fully functional mail server that you can troubleshoot, extend, and explain in a job interview.


This is Part 4 of “The Mailroom” email server series.

Published: [Date] Last updated: [Date] Time to complete: 1-2 hours Part of series: Building a Secure Email Server Previous: Part 3 – Outbound Mail and Security Hardening

From Config Chaos to Automated: Building a Self-Documenting Homelab

By Collin

Overview

If you’ve been running a homelab for more than a year, you probably have configs scattered across half a dozen machines, documentation that’s either missing or six months out of date, and a vague sense of dread about what would happen if you had to rebuild from scratch.

That was me. This is what I did about it — and what I’d do differently if I started over today.


The Problem

My homelab spans a few different environments: local VMs on [your hypervisor, e.g. Proxmox], a baremetal firewall, and a couple of cloud servers handling mail and web hosting. On paper that’s not a huge footprint. In practice it meant configuration files living in completely different places with no consistent structure, no backups, and documentation that existed mainly in my head.

The specific pain points:

  • No config backups. If a VM died, I’d be reconstructing from memory.
  • No documentation. “I’ll remember how I set this up” is a lie I kept telling myself.
  • Manual everything. Deploying a new service meant copying compose files, manually editing reverse proxy configs, and adding dashboard entries one at a time.
  • No visibility. No single place to see what was running, where, and whether it was healthy.

The goal wasn’t to over-engineer it. I wanted something that would handle 90% of the repetitive work automatically, be easy to maintain, and actually get used. That last part matters — a system you abandon after two weeks is worse than no system at all.


The Architecture

Everything runs through a dedicated Services VM — [your OS, e.g. Ubuntu 24.04] running on [your hypervisor]. This VM is the single point of truth for all containerized services. It runs:

  • Traefik — reverse proxy that automatically discovers new containers via Docker labels
  • Homepage — dashboard that also auto-discovers containers via Docker labels
  • MkDocs — documentation site served from the same Git repo that stores all infrastructure code
  • Ansible — runs on a cron job nightly, SSHs into every host, pulls config files, and commits them to GitHub

The key design decision was keeping everything in one Git repository. Configs, documentation, Ansible playbooks, Docker compose files — all of it lives in ~/[your-repo-name]/ and syncs to GitHub automatically. If the VM dies, rebuilding is one Ansible playbook run away.


Auto-Discovery: The Part That Actually Saves Time

The biggest quality-of-life improvement was getting Traefik and Homepage to automatically pick up new containers via Docker labels. No more manually editing reverse proxy configs or dashboard entries every time you spin up a new service.

When you deploy a new container, you add labels like this to the compose file:

labels:
  # Tell Traefik to route this service
  - "traefik.enable=true"
  - "traefik.http.routers.[service-name].rule=Host(`[service-name].[your-domain].local`)"
  - "traefik.http.routers.[service-name].entrypoints=web"
  - "traefik.http.services.[service-name].loadbalancer.server.port=[container-port]"

  # Tell Homepage to show this service on the dashboard
  - "homepage.group=[Dashboard Group Name]"
  - "homepage.name=[Display Name]"
  - "homepage.href=http://[service-name].[your-domain].local"
  - "homepage.description=[Short description]"
  - "homepage.icon=[service-name].png"

Start the container and within seconds it appears on the dashboard with a live status badge, and [service-name].[your-domain].local routes to it through Traefik. No config editing required.

One gotcha worth mentioning: Homepage reads the Docker socket to discover containers, and it needs to run with a group ID that has permission to access that socket. If you see EACCES /var/run/docker.sock in the Homepage logs, check the GID of your docker group (getent group docker) and make sure the PGID environment variable in your Homepage compose file matches it. This will silently blank your entire dashboard and the logs are the only way to know why.
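If you hit that EACCES error, the fix is mechanical: read the GID out of the group database and mirror it into the compose file. A minimal sketch; the sample getent line and GID 988 are assumptions, so substitute the real output from your host:

```shell
# The docker group's GID is the third colon-separated field of
# `getent group docker` output. The sample line below is an assumption;
# run getent on your own host and use whatever it prints.
line="docker:x:988:collin"
pgid=$(printf '%s' "$line" | cut -d: -f3)

# This is the value Homepage's compose file needs:
#   environment:
#     - PGID=988
echo "PGID=$pgid"
```

After changing PGID, recreate the container (docker compose up -d) rather than restarting it, since a restart keeps the old environment.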


Automated Config Backups with Ansible

The backup system uses Ansible’s fetch module to pull config files from every host and store them in the Git repo. What makes it scale cleanly is a metadata-driven approach: instead of hardcoding paths in the playbook, each host has a host_vars file that lists what to back up.

# infrastructure/ansible/inventory/host_vars/[hostname].yml

ansible_host: [host-ip]
ansible_user: [ssh-user]
ansible_ssh_private_key_file: ~/.ssh/[your-key]

config_paths:
  - /etc/[service]/[config-file]
  - /etc/[service]/[another-config]

The playbook itself never needs to change:

- name: Fetch config files
  fetch:
    src: "{{ item }}"
    dest: "~/[repo]/configs/{{ inventory_hostname }}/"
    flat: yes
  loop: "{{ config_paths }}"
  when: config_paths is defined

Adding a new host to backups means adding a host_vars file with the right paths. That’s the whole change. The playbook runs nightly via cron and only commits to GitHub if something actually changed — no noise in the commit history.


# Cron entry — runs at 2am nightly. Two cron quirks to respect: an entry
# must be one physical line (cron doesn't honor backslash continuations),
# and % must be escaped as \% or cron treats it as a newline. The braces
# keep a failed playbook run from falling through to the commit step.
0 2 * * * cd ~/[repo] && ansible-playbook -i infrastructure/ansible/inventory/hosts.yml infrastructure/ansible/playbooks/backup-configs.yml && git add configs/ && { git diff --cached --quiet || { git commit -m "Auto backup $(date +\%Y-\%m-\%d)" && git push; }; }

This approach works across mixed environments without any changes to the playbook — local VMs, cloud servers, whatever. As long as you can SSH to it, Ansible can back it up.
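The commit-only-when-changed guard in that cron job is worth seeing in isolation. Here is the same pattern in a throwaway repo (the repo path and committer identity are placeholders):

```shell
# git diff --cached --quiet exits 0 when the staging area is clean and
# non-zero when there is something to commit, which is exactly the
# condition the nightly job keys off.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "backup@example.invalid"   # placeholder identity
git config user.name "backup-bot"

echo "config v1" > sshd_config
git add sshd_config

if git diff --cached --quiet; then
  echo "no changes - skipping commit"
else
  git commit -qm "Auto backup $(date +%Y-%m-%d)"
  echo "committed"
fi
```

With a staged file the else branch runs; on a clean index the first branch does, so unchanged nights produce no commits and no noise in the history.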


The Service Scaffold Script

Deploying a new service used to mean copying a compose file, editing names and ports, creating the folder structure, adding a host_vars entry, and creating a doc stub — all manually and all prone to inconsistency. I replaced that with a Python script.

python3 ~/[repo]/automation/tools/service-add.py [service-name] --vlan [vlan]

One command creates:

  • Folder structure — services/[vlan]/[service]/config/ and data/
  • docker-compose.yml — with Traefik and Homepage labels already filled in
  • host_vars entry — for Ansible backup registration
  • Documentation stub — pre-populated markdown in docs/Services/

The compose file still needs manual edits for the correct image, port, and any service-specific config. But the skeleton is always consistent, the labels are always right, and you’re not starting from scratch or copy-pasting from another service and forgetting to update a name somewhere.

The script itself is straightforward Python — mostly file I/O, string replacement, and pathlib for directory creation. If you’re learning Python and want a practical first project, something like this is a good place to start. The concepts are simple and the payoff is immediate.
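The real script is Python, but the core move, creating the directory skeleton and rendering a template with the service name substituted, fits in a few lines of any language. A shell sketch with assumed names (whoami, lab) and a trimmed compose template:

```shell
SERVICE=whoami      # assumed example service
VLAN=lab
base=$(mktemp -d)   # stand-in for ~/[repo]

# Folder structure the scaffold guarantees
mkdir -p "$base/services/$VLAN/$SERVICE/config" \
         "$base/services/$VLAN/$SERVICE/data"

# Render a compose skeleton with the labels pre-filled
cat > "$base/services/$VLAN/$SERVICE/docker-compose.yml" <<EOF
services:
  $SERVICE:
    image: CHANGEME   # still edited by hand, as in the real script
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.$SERVICE.rule=Host(\`$SERVICE.home.local\`)"
      - "homepage.name=$SERVICE"
EOF

find "$base/services" -mindepth 3 -type d | sort
```

The real script additionally writes the host_vars entry and the doc stub, but the pattern is identical: one source of truth for the name, substituted everywhere.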


Documentation That Actually Stays Current

The documentation problem is one I’ve seen people solve in a lot of ways, most of which fail because they require too much manual effort to maintain. Writing docs in a separate system that lives outside your infrastructure repo is a good way to ensure they’re always stale.

The approach here: everything is markdown in the same repo. MkDocs renders it into a browsable site at docs.[your-domain].local. Because the docs live next to the configs and compose files, there’s at least a fighting chance of updating them when something changes.

The structure that’s worked well:

docs/
├── index.md
├── Infrastructure/
│   └── Network Topology.md     # Network diagram, VLAN table, host inventory
├── Services/
│   └── [Service Name].md       # One file per service, generated by scaffold script
└── Procedures/
    ├── Initial Setup.md        # How to rebuild from scratch
    ├── Adding a Service.md     # End-to-end deployment workflow
    └── Recovery.md             # What to do when things break

The Recovery doc is the one I’d tell people to write first. It forces you to think through failure scenarios before they happen, and “what would I do if this VM died right now” is a question worth having an answer to before you actually need it.


The Security Angle

A few things worth calling out specifically if you’re coming at this from a security background:

Credentials don’t go in the repo. The GitHub token lives in [your password manager]. SSH private keys live on disk and in [your password manager] as a backup — never committed. Any config files containing secrets (database passwords, API keys) are explicitly gitignored before the backup runs.

The Ansible key is scoped. There’s a dedicated SSH key for Ansible automation, separate from your personal keys. If it’s ever compromised, you revoke it without touching anything else. It’s generated with ed25519 and stored in [your password manager].
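Generating a key like that is one command. The sketch below writes to a temp directory for illustration; in practice you would use ~/.ssh/ and copy the .pub line into each managed host's authorized_keys. The from= restriction shown in the comment is my suggestion for extra scoping, not something the setup requires:

```shell
# Dedicated automation key: ed25519, no passphrase (cron can't type one),
# and a comment that makes it identifiable in authorized_keys listings.
keydir=$(mktemp -d)   # illustration only; normally ~/.ssh
ssh-keygen -t ed25519 -N "" -C "ansible-automation" \
  -f "$keydir/ansible_ed25519" >/dev/null

# The public half is what gets distributed. To limit where the key can be
# used from, prefix its authorized_keys line on each managed host, e.g.:
#   from="10.0.10.5" ssh-ed25519 AAAA... ansible-automation
cat "$keydir/ansible_ed25519.pub"
```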

AdGuard sits between clients and DNS. All local DNS rewrites run through AdGuard Home, which also handles ad/tracker blocking and query logging. That means you have visibility into what’s resolving what — useful for catching unexpected outbound connections from IoT devices or compromised systems.

VLAN segmentation limits blast radius. The Services VM lives in the management LAN. IoT devices, guest devices, and lab VMs are on separate VLANs with default-deny firewall rules. A compromised IoT device can’t reach the management network without explicitly punching through the firewall.

None of this is groundbreaking security, but it’s the difference between a homelab that’s a liability and one that’s reasonably hardened for a home environment.


What I’d Do Differently

Start with the scaffold script earlier. The inconsistency that builds up from manually creating service folders over months is real. Having a standard structure from day one makes everything downstream — backups, documentation, troubleshooting — cleaner.

Write the Recovery doc while you’re building. I wrote mine after the fact and had to fill in gaps from memory. Writing it during the build means it’s accurate and you catch assumptions you didn’t know you were making.

The metadata-driven backup approach is worth the upfront design time. The temptation is to hardcode paths in the playbook and move on. That works until you have 15 hosts, at which point you’re editing a monolithic file instead of dropping in a new host_vars entry.

Don’t underestimate the DNS rewrite step. Every new service needs a local DNS entry pointing the hostname to the Services VM. It sounds obvious but it’s easy to forget mid-deployment and then spend time wondering why Traefik routing isn’t working.


The Stack in Summary

Component            Tool                       Why
Containerization     Docker + Compose           Standard, portable, easy to rebuild
Reverse proxy        Traefik                    Auto-discovers containers via Docker labels
Dashboard            Homepage                   Same label-based auto-discovery, live status
Config backups       Ansible + Git              Agentless, works across mixed environments
Documentation        MkDocs + Material          Markdown in the repo, always accessible
Service scaffolding  Python script              Consistent structure, no manual templating
Secrets              [Your password manager]    Never in the repo

The whole thing cost about a weekend of setup time. Ongoing maintenance overhead is close to zero — Ansible runs itself, the dashboard updates itself, and new services slot in with one command. That’s about the best you can do without going full Kubernetes, which is almost certainly overkill for a homelab.


The Mailroom: Part 3 – Outbound Mail and Security Hardening

By Collin

Time to complete: 2-3 hours
Additional monthly cost: $0 (SMTP2GO free tier)
Prerequisites: Parts 1 & 2 completed – mail server receiving email, IMAP working, S3 backups running


Introduction

In Part 2 we built a mail server that receives email reliably. This article completes the stack: outbound email via SMTP2GO relay, spam filtering with SpamAssassin, brute-force protection with fail2ban, and lightweight monitoring.

What you’ll have by the end:

  • Outbound email sending from your domain
  • Spam filtering on all incoming mail
  • Automated intrusion protection
  • Alerts for the two things that actually matter

Fair warning: this section has more moving parts than Part 2 and more places where things can go sideways. We’ll document the real issues encountered so you can skip the troubleshooting rabbit holes.


SendGrid Setup and Configuration

⚠️ Update: SendGrid Free Tier Reliability

This section documents the original SendGrid setup, which worked — until it didn’t. Two separate SendGrid accounts were locked without explanation during this project. The first account (registered with a domain email) was locked within weeks. The second (registered with Gmail, following their own advice) lasted slightly longer before the same thing happened.

SendGrid’s anti-abuse automation is designed to catch spammers at scale, but it has a high false-positive rate for legitimate low-volume senders. New accounts with no billing history sending 5-10 emails per day match the same pattern as spammers testing stolen credentials — and there’s no reliable way to prevent it on the free tier.

The setup below was migrated to SMTP2GO, which has a 1,000 emails/month free tier and hasn’t shown the same lockout behavior. The Postfix configuration is nearly identical — only the relay hostname and credentials change. If you’re following along, skip to the SMTP2GO configuration below. The SendGrid walkthrough is preserved for reference since the DNS and Postfix concepts apply to any relay provider.

Why SendGrid

AWS SES denied production access for our new account (covered in Part 2). SendGrid’s free tier gives 100 emails/day with no waiting period. That’s enough for professional correspondence and projects while we build AWS billing history for eventual SES approval.

Free tier limits:

  • 100 emails/day
  • 3,000/month
  • No credit card required

Create Your SendGrid Account

  1. Go to: https://signup.sendgrid.com/
  2. Fill out the registration form

Critical: Use a Gmail or Outlook address to sign up – not ProtonMail, Tutanota, or other privacy-focused providers. SendGrid’s automated fraud detection flags privacy email providers and will reject your account outright without explanation. This is a silent failure – you won’t know why you were rejected.

If you get rejected:

  • Don’t appeal (takes days)
  • Create a new account with a mainstream email address
  • Approval is instant with Gmail/Outlook
  3. Verify your email via the confirmation link
  4. Complete the onboarding questionnaire

Authenticate Your Domain

SendGrid needs to verify you own your domain before allowing you to send from it.

  1. In the SendGrid dashboard, go to Settings → Sender Authentication
  2. Click Authenticate Your Domain
  3. Enter your domain: [your-domain].com
  4. Under Advanced Settings:
    • Use automated security – Keep this enabled (auto-rotates DKIM keys)
    • Use custom return path – Leave off (unnecessary complexity)
    • Use a custom link subdomain – Leave off (marketing feature, not needed)
    • Use a custom DKIM selector – Leave off (no conflict with AWS SES selectors)
  5. Click Next

You’ll see 5-6 DNS records to add. The table shows “Host” and “Value” columns. Example:

Type: CNAME  Host: url6981.[your-domain].com      Value: sendgrid.net
Type: CNAME  Host: 59853756.[your-domain].com     Value: sendgrid.net
Type: CNAME  Host: em8538.[your-domain].com        Value: u59853756.wl007.sendgrid.net
Type: CNAME  Host: s1._domainkey.[your-domain].com Value: s1.domainkey.u59853756.wl007.sendgrid.net
Type: CNAME  Host: s2._domainkey.[your-domain].com Value: s2.domainkey.u59853756.wl007.sendgrid.net
Type: TXT    Host: _dmarc.[your-domain].com        Value: v=DMARC1; p=none; rua=...

Your numbers will differ – copy them exactly from SendGrid.

Add DNS Records to Cloudflare

Go to Cloudflare → DNS → Add record.

Important: In the “Name” field, enter only the subdomain portion – not the full hostname. Cloudflare automatically appends your domain.

For example, if SendGrid shows url6981.[your-domain].com:

  • Name field: url6981 ✓
  • Name field: url6981.[your-domain].com ❌ (creates double domain)

Add each CNAME record with Proxy status: DNS only (gray cloud).

Skip the DMARC TXT record if you already have one from Part 2. You can’t have duplicate DMARC records.

After adding all records, click Verify in SendGrid. DNS propagation takes 2-10 minutes.

Update your SPF record in Cloudflare to authorize SendGrid:

Find your existing SPF TXT record (the one starting with v=spf1) and edit it to add include:sendgrid.net:

v=spf1 ip4:[your-droplet-ip] include:sendgrid.net include:amazonses.com -all
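Before saving, sanity-check the record string: a domain must publish exactly one v=spf1 TXT record, and it should end with a single all mechanism. A quick shell check, using the documentation-range placeholder 203.0.113.10 for the droplet IP:

```shell
# One v=spf1 version tag, ending in -all (hard fail) or ~all (soft fail).
spf="v=spf1 ip4:203.0.113.10 include:sendgrid.net include:amazonses.com -all"

case "$spf" in
  "v=spf1 "*) echo "version tag: ok" ;;
  *)          echo "WARNING: record must start with v=spf1" ;;
esac

case "$spf" in
  *-all|*~all) echo "terminal all mechanism: ok" ;;
  *)           echo "WARNING: no terminal all mechanism" ;;
esac
```

Once the record is live, dig +short TXT [your-domain].com shows what resolvers actually see.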

Generate API Key

  1. SendGrid → Settings → API Keys
  2. Click Create API Key
  3. Name: mail-server-smtp-relay
  4. Select Custom Access
  5. In the Access Details list, find Mail Send and slide it to Full Access
  6. Leave everything else at No Access
  7. Click Create & View

Copy the API key immediately (starts with SG.). You cannot view it again.

Configure Postfix to Relay via SendGrid

Create the SASL credentials file:

sudo nano /etc/postfix/sasl_passwd

Add this line (replace with your actual API key):

[smtp.sendgrid.net]:2525 apikey:SG.your-actual-api-key-here

Note the port: 2525, not 587.

DigitalOcean blocks outbound port 587 on new droplets to prevent spam. SendGrid also listens on port 2525, which is not blocked.

Verify the connection works before proceeding:

telnet smtp.sendgrid.net 2525
# Should show: 220 SG ESMTP service ready
# Press Ctrl+] then type quit to exit

If port 2525 connects and 587 times out, you’ve confirmed the block. Use 2525.

Save and exit the sasl_passwd file.

Secure the file and create hash database:

sudo chmod 600 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd

# Verify both files exist
ls -la /etc/postfix/sasl_passwd*

Update Postfix main.cf:

sudo nano /etc/postfix/main.cf

At the end of the file, add:

# ====== SendGrid SMTP Relay ======
relayhost = [smtp.sendgrid.net]:2525
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous
smtp_tls_security_level = encrypt
header_size_limit = 4096000

# SpamAssassin concurrency limits
spamassassin_destination_concurrency_limit = 1
spamassassin_destination_recipient_limit = 1

Check for duplicate settings:

sudo postfix check

If you see warnings about “overriding earlier entry” for relayhost or smtp_tls_security_level, there are duplicate lines from Part 2’s configuration. Comment out the old ones:

# Find and comment out the old empty relayhost line
sudo grep -n "^relayhost" /etc/postfix/main.cf

A critical note on smtpd_* vs smtp_*:

These look nearly identical but do completely different things:

  • smtpd_* (with ‘d’) = Server settings – controls connections into your server (email clients connecting on port 587)
  • smtp_* (no ‘d’) = Client settings – controls connections out of your server (to SendGrid)

You need both sets. The SendGrid section uses smtp_* for outbound. Your existing smtpd_* settings from Part 2 handle inbound submission. Don’t comment out the smtpd_* lines.
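Because the two prefixes differ at the fifth character, a plain grep cleanly separates the families. Shown here against a stand-in sample file; on the server, point the same greps at /etc/postfix/main.cf:

```shell
# Stand-in for /etc/postfix/main.cf
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
smtpd_tls_security_level = may
smtpd_sasl_auth_enable = yes
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
EOF

echo "--- inbound (server, smtpd_*) ---"
grep '^smtpd_' "$cfg"

echo "--- outbound (client, smtp_*) ---"
grep '^smtp_' "$cfg"   # smtpd_ lines don't match: their 5th char is d, not _
```

If you'd rather query Postfix directly, postconf smtp_tls_security_level smtpd_tls_security_level prints both effective values side by side.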

Save and restart Postfix:

sudo postfix check
sudo systemctl restart postfix
sudo systemctl status postfix

Test Outbound Email

# Send test using sendmail directly
/usr/sbin/sendmail [email protected] << EOF
Subject: Test from mail server
From: root@[your-domain].com

Testing SendGrid relay on port 2525
EOF

Check the mail queue:

sudo mailq

If the queue is empty immediately, the mail either delivered or failed fast. Check logs:

sudo journalctl -u postfix --since "2 minutes ago" --no-pager | grep -E "status=|relay="

Successful delivery looks like:

relay=smtp.sendgrid.net[IP]:2525, status=sent (250 Ok: queued as ABC123)

Also verify in SendGrid:

Go to SendGrid → Activity → You should see your sent message listed as “Delivered”.

Delivery reports from failed attempts will appear in /root/Maildir/new/. If you see files there, read them:

sudo cat /root/Maildir/new/* | grep -E "status=|Final-Recipient|Diagnostic" | tail -20

Common issue – queue stuck with port 587 errors:

If you see connect to smtp.sendgrid.net:587: Connection timed out, you edited sasl_passwd but forgot to regenerate the hash:

sudo postmap /etc/postfix/sasl_passwd
sudo postsuper -d ALL  # Clear stuck queue
sudo systemctl restart postfix

SMTP2GO: The Relay That Actually Stayed Working

After two SendGrid lockouts, the requirements shifted from “best free tier” to “free tier that won’t randomly revoke access.”

SMTP2GO free tier:

  • 1,000 emails/month
  • No credit card required
  • Domain verification via DNS (same process as SendGrid)
  • SMTP relay on port 2525 (same workaround for DigitalOcean’s port blocking)

Create Your SMTP2GO Account

  1. Go to: https://www.smtp2go.com/
  2. Sign up with any email address
  3. Verify your email

Authenticate Your Domain

  1. In the SMTP2GO dashboard, go to Sending → Sender Domains
  2. Add your domain: [your-domain].com
  3. You’ll get DNS records to add

Add DNS Records to Cloudflare

If migrating from SendGrid: Delete the old SendGrid CNAME records first (url6981, 59853756, em8538, s1._domainkey, s2._domainkey). Keep any SES DKIM records if you’re still planning that migration.

Add the SMTP2GO records:

Type   Name                        Target
CNAME  em[your-number]             return.smtp2go.net
CNAME  s[your-number]._domainkey   dkim.smtp2go.net
CNAME  link                        track.smtp2go.net

Your numbers will differ — copy them exactly from SMTP2GO.

Update your SPF record to replace sendgrid.net with smtp2go.com:

v=spf1 ip4:[your-droplet-ip] include:smtp2go.com include:amazonses.com -all

Generate SMTP Credentials

  1. SMTP2GO dashboard → Settings → SMTP Users
  2. Create a new SMTP user or use the default
  3. Note the username and password
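Before wiring the credentials into Postfix, you can verify them by hand. AUTH LOGIN exchanges the username and password base64-encoded, so pre-compute both (the credential strings below are placeholders) and paste them into an interactive openssl session against the relay:

```shell
# AUTH LOGIN prompts for username, then password, each base64-encoded.
# printf '%s' avoids baking a trailing newline into the credential.
printf '%s' "your-smtp2go-username" | base64
printf '%s' "your-smtp2go-password" | base64

# Then, interactively:
#   openssl s_client -connect mail.smtp2go.com:2525 -starttls smtp -quiet
#   AUTH LOGIN
#   <paste base64 username at the 334 VXNlcm5hbWU6 prompt>
#   <paste base64 password at the 334 UGFzc3dvcmQ6 prompt>
# A 235 reply means the credentials authenticate.
```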

Configure Postfix for SMTP2GO

Update the SASL credentials file:

sudo nano /etc/postfix/sasl_passwd

Replace the SendGrid line with:

[mail.smtp2go.com]:2525 your-smtp2go-username:your-smtp2go-password

Critical step most people miss: Regenerate the hash database. Postfix reads the .db file, not the text file. If you edit sasl_passwd without running postmap, Postfix will keep using the old credentials.

sudo chmod 600 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd

Update main.cf:

sudo nano /etc/postfix/main.cf

Change the relayhost line:

relayhost = [mail.smtp2go.com]:2525

The rest of the SASL settings from the SendGrid section remain the same — smtp_sasl_auth_enable, smtp_sasl_password_maps, smtp_sasl_security_options, and smtp_tls_security_level don’t change.

Restart and test:

sudo postfix check
sudo systemctl restart postfix
echo "SMTP2GO relay test" | /usr/sbin/sendmail -v [email protected]

Check the logs:

sudo journalctl -u postfix --since "2 minutes ago" --no-pager | grep -E "status=|relay="

Successful delivery looks like:

relay=mail.smtp2go.com[45.79.170.99]:2525, status=sent (250 OK)

SpamAssassin Configuration

SpamAssassin filters incoming spam using rule-based scoring and Bayesian analysis. Getting it integrated with Postfix correctly requires some care – there’s a specific mail loop issue to avoid.

Install SpamAssassin

sudo apt install spamassassin spamc -y

# Verify installation
spamassassin --version

Find the correct spamd binary path – this varies between systems and getting it wrong prevents the service from starting:

which spamd
# Usually: /usr/sbin/spamd

Create SpamAssassin system user:

sudo adduser --system --group --no-create-home spamd

Configure SpamAssassin

sudo nano /etc/spamassassin/local.cf

Add:

# Spam score threshold (5.0 is standard)
required_score 5.0

# Bayesian filtering
use_bayes 1
bayes_auto_learn 1

# Network checks
skip_rbl_checks 0

# Mark spam in subject line
rewrite_header Subject [SPAM]

# Don't modify spam (just add headers)
report_safe 0

Save and exit, then lint the configuration before the daemon loads it: sudo spamassassin --lint exits non-zero and names the offending line on any syntax error, and silence means the rules parsed cleanly.

Enable SpamAssassin daemon:

sudo nano /etc/default/spamassassin

Find and change:

ENABLED=0  →  ENABLED=1
CRON=0     →  CRON=1

Save and exit.

Integrate SpamAssassin with Postfix

This is where most problems occur. The integration requires careful configuration to avoid a mail loop where Postfix sends mail to SpamAssassin, which sends it back to Postfix, which sends it back to SpamAssassin indefinitely.

The correct architecture:

Internet → Postfix (port 25) → SpamAssassin (port 783)
                                      ↓
              Your Maildir ← Postfix (port 10025)

Port 10025 is a dedicated “off-ramp” that receives mail back from SpamAssassin with content filtering disabled.

Edit master.cf:

sudo nano /etc/postfix/master.cf

Step 1: Add content filter to the smtp service (around line 17):

Find the smtp service line:

smtp      inet  n       -       y       -       -       smtpd

Add the content_filter option below it:

smtp      inet  n       -       y       -       -       smtpd
  -o content_filter=spamassassin:127.0.0.1:10025

Step 2: Add the port 10025 off-ramp (add this block, before the spamassassin pipe):

# SpamAssassin return path - content filtering disabled
127.0.0.1:10025 inet  n       -       n       -       10      smtpd
  -o content_filter=
  -o receive_override_options=no_unknown_recipient_checks,no_header_body_checks,no_milters
  -o smtpd_helo_restrictions=
  -o smtpd_client_restrictions=
  -o smtpd_sender_restrictions=
  -o smtpd_recipient_restrictions=permit_mynetworks,reject
  -o mynetworks=127.0.0.0/8
  -o smtpd_authorized_xforward_hosts=127.0.0.0/8

Step 3: Add the SpamAssassin pipe (at the end of the file):

# SpamAssassin filter pipe
spamassassin unix -     n       n       -       -       pipe
  user=spamd argv=/usr/sbin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}

Important: The path is /usr/sbin/spamc – verify this matches your system with which spamc. Getting this wrong causes the spamd service to fail with a path error.

Do not add localhost:10025 to the argv line. Postfix will interpret it as a recipient email address and try to deliver mail to a user named “localhost:10025”, causing bounce messages.

Save and exit. Before restarting, you can confirm Postfix parsed all three entries: sudo postconf -M lists every service defined in master.cf, and both the 127.0.0.1:10025 off-ramp and the spamassassin pipe should appear in its output.

Start SpamAssassin and restart Postfix:

sudo systemctl start spamassassin
sudo systemctl enable spamassassin
sudo systemctl status spamassassin

sudo systemctl restart postfix
sudo systemctl status postfix

If spamassassin fails to start:

# Check logs for exact error
sudo journalctl -u spamassassin -n 50 --no-pager

# Verify spamd path
which spamd
ls -la /usr/sbin/spamd

# If the service file has wrong path, fix it
sudo systemctl edit --full spamassassin
# Find ExecStart= line and correct the path

Verify SpamAssassin is Working

Check that mail is being scanned:

Send an email from an external account to your mail server, then check the headers of the received email in Thunderbird:

Right-click the email → View Source (or Message Source)

Look for:

X-Spam-Checker-Version: SpamAssassin 4.0.0 on mail.[your-domain].com
X-Spam-Status: No, score=-0.0 required=5.0

If you see these headers, SpamAssassin is scanning mail.

Test with GTUBE spam pattern:

Send an email from an external address to your server with this exact text in the body:

XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X

SpamAssassin scores the GTUBE pattern at 1000 (far above the 5.0 threshold). Check that the email arrives with [SPAM] in the subject line. You can also test locally without an external account: pipe the string through spamc and look for X-Spam-Flag: YES in the headers it prints.

If mail stops flowing after SpamAssassin integration:

The most common issue is a mail loop. Check the logs:

sudo journalctl -u postfix --since "5 minutes ago" --no-pager | grep -E "too many hops|loop"

If you see “too many hops” errors, the off-ramp on port 10025 isn’t working correctly. Verify:

  1. The 127.0.0.1:10025 block exists in master.cf with content_filter= (empty value)
  2. The pickup service doesn’t have a content_filter set
  3. Postfix was fully restarted (not just reloaded)

Clear any stuck mail and restart:

sudo postsuper -d ALL
sudo systemctl restart postfix

Security Hardening with fail2ban

fail2ban monitors logs and automatically bans IPs that show malicious behavior – failed login attempts, brute-force attacks, and so on.

Install fail2ban

sudo apt install fail2ban -y

Configure fail2ban

Do not edit /etc/fail2ban/jail.conf – it gets overwritten on package updates. Create a local override:

sudo nano /etc/fail2ban/jail.local

Paste:

[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5

[sshd]

enabled = true

[postfix-sasl]

enabled = true

What this does:

  • Bans IPs for 1 hour (3600 seconds) after 5 failed attempts within 10 minutes
  • Monitors SSH authentication failures
  • Monitors Postfix SASL authentication failures (email login attempts)

Save and exit.

A note on the Dovecot jail:

The default Dovecot filter that ships with fail2ban on Ubuntu 24.04 has regex compatibility issues that cause fail2ban to crash on startup. Don’t enable the Dovecot jail without first verifying the filter works on your system.

SSH and Postfix protection cover the most critical attack vectors. Dovecot authentication failures will often be caught by the Postfix jail anyway since they share the same authentication path.

If you want to attempt Dovecot protection later:

# Test if the filter is valid before enabling
sudo fail2ban-client -t

Only add [dovecot] with enabled = true if the configuration test passes cleanly.

Start fail2ban:

sudo systemctl restart fail2ban
sudo systemctl enable fail2ban
sudo systemctl status fail2ban

Verify jails are active:

sudo fail2ban-client status

Expected output:

Status
|- Number of jail:      2
`- Jail list:   postfix-sasl, sshd

Check each jail:

sudo fail2ban-client status sshd
sudo fail2ban-client status postfix-sasl

You’ll likely already see failed attempts in the SSH jail – internet scanners constantly probe SSH on all IP addresses. If any IP has exceeded the threshold, it will appear in the “Banned IP list”.

Useful fail2ban commands:

# Unban an IP (if you accidentally ban yourself)
sudo fail2ban-client set sshd unbanip [your-ip]

# View fail2ban logs
sudo tail -f /var/log/fail2ban.log

# Check currently banned IPs
sudo fail2ban-client status sshd | grep "Banned IP"

Monitoring: Alerts That Actually Matter

For a personal mail server, you want exception-based monitoring – alerts only when something is broken, not daily status reports. Two things warrant automated alerts:

  1. Disk space – If the disk fills up, mail delivery stops completely
  2. Certificate expiration – If Let’s Encrypt renewal fails silently, TLS breaks and mail stops

Daily summary emails and fail2ban notifications are mostly noise at this scale.

Disk Space Alert

sudo nano /usr/local/bin/check-disk-space.sh

Paste:

#!/bin/bash

THRESHOLD=85
USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
HOSTNAME=$(hostname)

if [ $USAGE -gt $THRESHOLD ]; then
    echo "Disk usage on ${HOSTNAME} is at ${USAGE}%.

Breakdown:
$(df -h /)

Maildir size:
$(du -sh /home/*/Maildir 2>/dev/null)

Top 5 largest directories:
$(du -sh /* 2>/dev/null | sort -rh | head -5)" | \
    mail -s "ALERT: Disk Space ${USAGE}% on ${HOSTNAME}" [your-username]@[your-domain].com
fi

Save, exit, make executable:

sudo chmod +x /usr/local/bin/check-disk-space.sh

Test it:

# Temporarily lower threshold to trigger the alert
sudo sed 's/THRESHOLD=85/THRESHOLD=1/' /usr/local/bin/check-disk-space.sh | sudo bash

You should receive an email within 1-2 minutes.
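The parsing pipeline inside that script is a reusable idiom, so here it is step by step:

```shell
# Step by step: df's last line holds the stats for /; field 5 is the
# use percentage; sed strips the % so the number works with -gt.
df / | tail -1
df / | tail -1 | awk '{print $5}'
usage=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
echo "numeric usage: $usage"
```

The tail -1 also sidesteps the case where a long device name wraps df's output onto a second line.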

Certificate Expiration Alert

sudo nano /usr/local/bin/check-cert-expiry.sh

Paste:

#!/bin/bash

CERT="/etc/letsencrypt/live/mail.[your-domain].com/cert.pem"
WARN_DAYS=30
HOSTNAME=$(hostname)

EXPIRY=$(openssl x509 -enddate -noout -in $CERT | cut -d= -f2)
EXPIRY_EPOCH=$(date -d "$EXPIRY" +%s)
NOW_EPOCH=$(date +%s)
DAYS_LEFT=$(( ($EXPIRY_EPOCH - $NOW_EPOCH) / 86400 ))

if [ $DAYS_LEFT -lt $WARN_DAYS ]; then
    echo "TLS certificate for ${HOSTNAME} expires in ${DAYS_LEFT} days.

Expiry date: ${EXPIRY}
Certificate: ${CERT}

Run 'sudo certbot renew' to renew manually, or check auto-renewal:
sudo systemctl status certbot.timer
sudo journalctl -u certbot -n 50" | \
    mail -s "ALERT: Certificate expires in ${DAYS_LEFT} days on ${HOSTNAME}" [your-username]@[your-domain].com
fi

Save, exit, make executable:

sudo chmod +x /usr/local/bin/check-cert-expiry.sh

Test it:

# Temporarily set threshold to 999 days to trigger the alert
sudo sed 's/WARN_DAYS=30/WARN_DAYS=999/' /usr/local/bin/check-cert-expiry.sh | sudo bash

Check how many days remain on your actual certificate:

openssl x509 -enddate -noout -in /etc/letsencrypt/live/mail.[your-domain].com/cert.pem

Update Backup Script to Alert on Failure

While you’re at it, add failure notification to your backup script from Part 2:

sudo nano /usr/local/bin/backup-mail.sh

Add at the very end of the script:

if [ $? -ne 0 ]; then
    echo "Mail backup failed at $(date). Check the server." | \
    mail -s "ALERT: Backup Failed on $(hostname)" [your-username]@[your-domain].com
fi
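
One caveat: $? reflects only the most recent command, so this check must come immediately after the upload command. If the script does any cleanup afterward, capture the status into a variable first. A minimal sketch of that pattern (run_backup is a hypothetical stand-in for the real upload step):

```shell
#!/bin/bash
# Hypothetical stand-in for the real aws s3 upload step (fails on purpose here)
run_backup() { return 1; }

run_backup
STATUS=$?            # capture immediately; the next command overwrites $?

echo "pruning old local archives..."   # cleanup step; $? is now 0

if [ "$STATUS" -ne 0 ]; then
    echo "backup failed with status $STATUS"
fi
```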

Schedule All Monitoring with Cron

sudo crontab -e

Add these lines (alongside the backup job from Part 2):

# Daily mail backup at 2 AM (from Part 2)
0 2 * * * /usr/local/bin/backup-mail.sh >> /var/log/mail-backup.log 2>&1

# Check disk space every 6 hours
0 */6 * * * /usr/local/bin/check-disk-space.sh

# Check certificate expiry every Monday at 8 AM
0 8 * * 1 /usr/local/bin/check-cert-expiry.sh

Save and verify:

sudo crontab -l

Final Verification

Complete System Check

All ports listening:

sudo ss -tlnp | grep -E ':(25|587|993)'

All three should show LISTEN.

All services running:

sudo systemctl status postfix dovecot spamassassin fail2ban

All should show “active”.

fail2ban jails active:

sudo fail2ban-client status
# Should show: postfix-sasl, sshd

Mail queue empty:

sudo mailq
# Should show: Mail queue is empty

S3 backups working:

aws s3 ls s3://[your-bucket-name]/ --recursive --human-readable | tail -5

End-to-End Email Test

Test receiving: Send an email from an external account to [your-username]@[your-domain].com and verify it appears in Thunderbird with SpamAssassin headers.

Test sending: Send from Thunderbird (using port 587 SMTP with your mail server) to an external Gmail or Outlook address. Verify it arrives in inbox (not spam).

Check email headers in Gmail to verify SPF and DKIM pass:

  1. Open the received email in Gmail
  2. Click three dots → “Show original”
  3. Look for: spf=pass and dkim=pass
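
If you prefer a terminal, download the message ("Show original" → "Download original") and grep the Authentication-Results header. A sketch using a fabricated sample message (a real .eml will carry many more headers):

```shell
# Fabricated sample standing in for a downloaded Gmail message
cat > /tmp/sample.eml <<'EOF'
Authentication-Results: mx.google.com;
       spf=pass (google.com: domain of collin@example.com designates 203.0.113.10 as permitted sender);
       dkim=pass header.i=@example.com
EOF

# Pull out just the verdicts; both should read "pass"
grep -Eo '(spf|dkim)=[a-z]+' /tmp/sample.eml
```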

Conclusion

What You’ve Built

Your mail server now handles the complete email lifecycle:

Incoming mail:
  Internet → Postfix (port 25)
           → SpamAssassin (spam scored and tagged)
           → Delivered to Maildir
           → Accessible via IMAP (port 993)

Outgoing mail:
  Thunderbird → Postfix (port 587, authenticated)
              → SMTP2GO relay (port 2525)
              → Delivered reliably to recipients

Security layers:

  • TLS encryption on all connections
  • fail2ban blocking brute-force attempts
  • SPF, DKIM, DMARC records for sender authentication
  • Strict recipient restrictions (no open relay)

Operational:

  • Daily S3 backups with 30-day retention
  • Disk space alerts
  • Certificate expiration warnings
  • Backup failure notifications

Monthly Costs

Service                          Cost
DigitalOcean mail droplet        $6.00
DigitalOcean WordPress droplet   $6.00
AWS S3 backups                   ~$0.50
SMTP2GO (relay)                  $0.00
Total                            ~$12.50/month

The AWS SES Timeline

We’re running through SMTP2GO while building AWS billing history. (Originally SendGrid, but two account lockouts forced a migration — see above.) The path to SES approval:

  • Now – Month 2: Daily S3 backups generate consistent AWS usage
  • Month 2-3: Two billing cycles complete with charges
  • Month 3: Reapply to SES with evidence of established AWS usage
  • When approved: Swap SMTP2GO for SES by updating relayhost and sasl_passwd – no other changes required
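
The swap itself is two edits and a reload. A hedged sketch of what that looks like (the SES region endpoint and credential placeholders are illustrative; use the SMTP endpoint and credentials from your own SES console):

```
# /etc/postfix/main.cf
relayhost = [email-smtp.us-east-1.amazonaws.com]:2525

# /etc/postfix/sasl_passwd
[email-smtp.us-east-1.amazonaws.com]:2525 [ses-smtp-username]:[ses-smtp-password]
```

After editing, regenerate the hash file and reload: sudo postmap /etc/postfix/sasl_passwd && sudo systemctl reload postfix.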

Real Challenges This Guide Encountered

This wouldn’t be an honest guide without documenting what actually went wrong:

SendGrid free tier lockouts: Two accounts were locked without warning — first using a domain email, then using Gmail. SendGrid’s anti-abuse system is tuned for catching spammers at scale, which means legitimate low-volume personal senders get flagged as false positives. The migration to SMTP2GO resolved this. If you’re using any relay’s free tier, have a backup plan.

DigitalOcean port blocking: Port 587 outbound is blocked. Use port 2525 for your relay provider (both SendGrid and SMTP2GO support it).

SpamAssassin integration: The mail loop between Postfix and SpamAssassin is a real issue. The port 10025 off-ramp with empty content_filter is required. The spamd binary path varies – verify with which spamd before configuring.

fail2ban Dovecot filter: The default filter has regex compatibility issues on Ubuntu 24.04. SSH and Postfix protection work out of the box. The Dovecot jail requires filter testing before enabling.

Configuration duplicates: Adding SendGrid relay settings to a Postfix config that already has some of those parameters creates duplicate warnings. Postfix uses the last value and continues working, but clean up duplicates to avoid confusion.

Stale Postfix hash files: After updating /etc/postfix/sasl_passwd with new relay credentials, Postfix kept authenticating with the old ones. The fix: sudo postmap /etc/postfix/sasl_passwd to regenerate the .db file. Postfix reads the binary hash, not the text file — editing one without updating the other is a silent failure.

master.cf typo — permit_sasl_authentication vs permit_sasl_authenticated: A single missing letter (d) in the submission service restrictions caused a 451 Server configuration error on all outbound mail. The restriction is permit_sasl_authenticated (past tense), not permit_sasl_authentication. Postfix logs the exact unknown restriction name, which makes this easy to find if you know to look.

What’s Next (Optional)

  • Roundcube webmail – Browser-based email access
  • Greylisting – Additional spam defense (temporary reject, legitimate servers retry)
  • Multiple users – Add more mail accounts for your domain
  • DMARC reporting analysis – Review aggregate reports for deliverability insights
  • SES migration – Swap SMTP2GO for AWS SES when approved

You now have a complete, production-ready mail server built from scratch. It sends reliably, receives correctly, filters spam, blocks attackers, backs up automatically, and alerts you when something needs attention.


Published: [Date]
Last updated: [Date]
Time to complete: 1-2 hours
Part of series: Building a Secure Email Server
Previous: Part 3

The Mailroom: Part Two – Deployment


Time to complete: 2-3 hours
Monthly cost: ~$12.50 ($12 DigitalOcean + $0.50 AWS S3)
Prerequisites: Part 1 (component selection), domain registered with Cloudflare DNS, basic Linux familiarity


Introduction

In Part 1, we selected components and justified the hybrid architecture: self-hosted mail server for receiving with commercial relay for sending. This article walks through the actual deployment, ending with a fully functional mail server that can receive email and provide IMAP access.

We’ll also confront the reality of outbound email delivery for small self-hosted servers—specifically why AWS SES denied our production access request and what we’re doing about it.

What you’ll have by the end:

  • Mail server receiving email on your domain
  • IMAP access from any email client
  • TLS encryption throughout
  • Automated backups to AWS S3
  • Foundation for reliable outbound delivery (covered in Part 3)

Preparation

Server Provisioning

Create a $6/month DigitalOcean droplet with these specifications:

  • Image: Ubuntu 24.04 LTS x64
  • Plan: Basic Shared CPU, $6/month (1GB RAM, 25GB SSD, 1000GB transfer)
  • Region: Choose geographically closest to you for lower latency
  • Authentication: SSH keys (if you don’t have keys, generate them on your local machine first)
  • Hostname: mail-[your-domain]
  • Tags: mail, production (optional, for organization)

After the droplet is created, note the IP address and log in:

# Connect to your server
ssh root@[your-droplet-ip]

# You should see the Ubuntu welcome message

Update the system immediately:

# Update package lists
sudo apt update

# Upgrade all packages
sudo apt upgrade -y

# This may take 2-5 minutes

Set the hostname properly:

# Set the fully qualified domain name
sudo hostnamectl set-hostname mail.[your-domain].com

# Verify it was set
hostname -f
# Should output: mail.[your-domain].com

Configure the firewall (UFW) before anything else:

This is critical—configure all ports before enabling the firewall to avoid locking yourself out.

# Check current status (should be inactive)
sudo ufw status

# Set default policies
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH FIRST (critical - don't skip this!)
sudo ufw allow 22/tcp
# Output: Rules updated

# Allow mail server ports
sudo ufw allow 25/tcp    # SMTP (receiving mail from other servers)
sudo ufw allow 587/tcp   # SMTP submission (authenticated sending)
sudo ufw allow 993/tcp   # IMAPS (encrypted IMAP)

# Allow web ports for Let's Encrypt and future webmail
sudo ufw allow 80/tcp    # HTTP (Let's Encrypt challenge)
sudo ufw allow 443/tcp   # HTTPS (webmail)

# Enable the firewall
sudo ufw enable
# Press 'y' when prompted

# Verify all rules are in place
sudo ufw status verbose

Expected output:

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
25/tcp                     ALLOW       Anywhere
587/tcp                    ALLOW       Anywhere
993/tcp                    ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere

Create a non-root administrative user:

Running everything as root is a security risk. Create a dedicated user:

# Create user (you'll be prompted for password and info)
adduser [username]

# Add to sudo group for administrative privileges
usermod -aG sudo [username]

# Verify user was created
id [username]
# Should show uid, gid, and groups including sudo

For the rest of this guide, you can continue as root for service configuration, or switch to your new user with su - [username]. Service installation and configuration typically require sudo/root access.

Common issue: If you enable UFW before allowing port 22, you’ll be locked out immediately. Use DigitalOcean’s web console (droplet → Access → Launch Droplet Console) to regain access and run ufw allow 22/tcp && ufw reload.


DNS Configuration

Configure these records in Cloudflare before installing mail software. DNS propagation takes 2-10 minutes, so we set this up first.

Log into Cloudflare and select your domain.

Navigate to DNS → Records, then add each of these:

1. A Record (Mail Server IP):

Type: A
Name: mail
Content: [your-droplet-ip]
Proxy status: DNS only (gray cloud - CRITICAL)
TTL: Auto

Click “Save”. The proxy status must be DNS only (gray cloud icon, not orange). SMTP and IMAP protocols don’t work through Cloudflare’s HTTP proxy.

2. MX Record (Mail Destination):

Type: MX
Name: @ (or leave blank for root domain)
Mail server: mail.[your-domain].com
Priority: 10
Proxy status: DNS only
TTL: Auto

The priority value (10) determines which server receives mail if you have multiple MX records. Lower numbers have higher priority.
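
For illustration, a hypothetical backup MX would look like this; receiving servers try priority 10 first and fall back to 20 only if the primary is unreachable:

```
Type: MX   Name: @   Mail server: mail.[your-domain].com     Priority: 10
Type: MX   Name: @   Mail server: backup.[your-domain].com   Priority: 20
```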

3. SPF Record (Sender Policy Framework):

Type: TXT
Name: @ (root domain)
Content: v=spf1 ip4:[your-droplet-ip] include:amazonses.com -all
TTL: Auto

This tells receiving servers which IPs are authorized to send mail from your domain. The -all means “reject all other senders.”
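
The trailing qualifier is the policy knob. The standard variants, for reference:

```
v=spf1 ip4:[your-droplet-ip] include:amazonses.com -all   # hard fail: unauthorized senders rejected
v=spf1 ip4:[your-droplet-ip] include:amazonses.com ~all   # soft fail: accepted but marked suspicious
v=spf1 ip4:[your-droplet-ip] include:amazonses.com ?all   # neutral: no assertion either way
```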

4. DMARC Record (Domain-based Message Authentication):

Type: TXT
Name: _dmarc
Content: v=DMARC1; p=none; rua=mailto:dmarc@[your-domain].com
TTL: Auto

DMARC tells receiving servers what to do with mail that fails SPF/DKIM checks. p=none is monitoring mode—good for initial setup. The rua address receives aggregate reports.
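
Once the aggregate reports show your legitimate mail passing consistently, the usual path is to tighten the policy in stages:

```
v=DMARC1; p=none; rua=mailto:dmarc@[your-domain].com         # monitor only (start here)
v=DMARC1; p=quarantine; rua=mailto:dmarc@[your-domain].com   # failing mail goes to spam
v=DMARC1; p=reject; rua=mailto:dmarc@[your-domain].com       # failing mail is rejected outright
```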

Your DNS records should now look like this:

Type    Name     Content                                            Proxy
A       mail     [your-ip]                                          DNS only
A       @        [wordpress-ip]                                     Proxied
MX      @        mail.[your-domain].com (priority 10)               DNS only
TXT     @        v=spf1 ip4:[your-ip] include:amazonses.com -all    DNS only
TXT     _dmarc   v=DMARC1; p=none; rua=mailto:dmarc@…               DNS only
CNAME   www      [your-domain].com                                  Proxied

Configure PTR Record (Reverse DNS):

PTR records map IP addresses back to hostnames. Many mail servers reject email if the sending IP doesn’t have a proper PTR record.

In DigitalOcean:

  1. Go to your droplet page
  2. Click the Networking tab
  3. Scroll to “PTR Record” or “Reverse DNS”
  4. Enter: mail.[your-domain].com
  5. Click “Update” or “Save”

Some DigitalOcean regions show this differently—if you don’t see it, check Networking → Domains or submit a support ticket.

Wait 2-5 minutes for DNS propagation, then verify from your mail server:

# Test A record (should return your droplet IP)
dig mail.[your-domain].com +short

# Expected output:
# [your-droplet-ip]

# Test MX record
dig MX [your-domain].com +short

# Expected output:
# 10 mail.[your-domain].com.

# Test SPF record
dig TXT [your-domain].com +short

# Expected output (among other TXT records):
# "v=spf1 ip4:[your-droplet-ip] include:amazonses.com -all"

# Test PTR record (reverse lookup)
dig -x [your-droplet-ip] +short

# Expected output:
# mail.[your-domain].com.

If any of these fail, wait a few more minutes and try again. DNS propagation can take up to 10 minutes, occasionally longer.

Common issues:

  1. dig mail.[your-domain].com returns 127.0.0.1: The server is resolving via /etc/hosts. Check and remove any lines containing your mail hostname:
# Check hosts file
cat /etc/hosts

# If you see mail.[your-domain].com mapped to 127.0.0.1, edit and remove it
sudo nano /etc/hosts
# Delete the offending line, save and exit

# Flush DNS cache
sudo resolvectl flush-caches
sudo systemctl restart systemd-resolved

# Test again
dig mail.[your-domain].com +short
  2. MX record shows Cloudflare IPs: You accidentally left proxy mode on. Go back to Cloudflare DNS, click the orange cloud next to the mail A record to turn it gray (DNS only).
  3. PTR record not working: Some regions require 24 hours or a support ticket. You can proceed with setup, but deliverability will be limited until PTR is configured.

TLS Certificates

Modern email requires TLS encryption. We’ll use Let’s Encrypt to get free, trusted certificates that auto-renew.

Install Certbot:

# Install certbot and dependencies
sudo apt install certbot -y

# Verify installation
certbot --version
# Should show: certbot 2.x.x

Obtain certificates using standalone mode:

Standalone mode temporarily runs a web server on port 80 for the Let’s Encrypt ACME challenge. This is why we opened port 80 in the firewall earlier.

# Request certificates
sudo certbot certonly --standalone -d mail.[your-domain].com

You’ll be prompted for:

  1. Email address: Enter your email (used for renewal notices)
     Enter email address (used for urgent renewal and security notices): [your-email]
  2. Terms of Service: Type A and press Enter to agree
     Please read the Terms of Service at https://letsencrypt.org/documents/LE-SA-v1.3... (A)gree/(C)ancel: A
  3. Share email with EFF: Type Y or N (your choice)
     Would you be willing to share your email with the Electronic Frontier Foundation? (Y)es/(N)o: N

Expected output on success:

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/mail.[your-domain].com/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/mail.[your-domain].com/privkey.pem
This certificate expires on 2026-05-13.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.

Verify certificate files exist:

# List certificate directory
sudo ls -la /etc/letsencrypt/live/mail.[your-domain].com/

# You should see:
# cert.pem -> ../../archive/mail.[your-domain].com/cert1.pem
# chain.pem -> ../../archive/mail.[your-domain].com/chain1.pem
# fullchain.pem -> ../../archive/mail.[your-domain].com/fullchain1.pem
# privkey.pem -> ../../archive/mail.[your-domain].com/privkey1.pem

The files you’ll use in configuration:

  • fullchain.pem: Certificate + intermediate certificates (for public presentation)
  • privkey.pem: Private key (keep this secure, never share)

Verify auto-renewal is configured:

Let’s Encrypt certificates expire after 90 days. Certbot automatically sets up a systemd timer to renew them.

# Check renewal timer status
sudo systemctl status certbot.timer

# Should show: active (waiting)

Test the renewal process (dry-run):

# Simulate renewal without actually renewing
sudo certbot renew --dry-run

# Should end with: Congratulations, all simulated renewals succeeded

This confirms that renewal will work when needed.

Common issues:

  1. Connection timeout during challenge: Ensure port 80 is open in UFW (sudo ufw status | grep 80). Some hosting providers also have network-level firewalls—check DigitalOcean’s Cloud Firewall settings if enabled.
  2. DNS validation failed: Certbot can’t resolve mail.[your-domain].com. Check that your A record exists and propagated with dig mail.[your-domain].com +short.
  3. Certificate already exists: If you’re re-running certbot, you’ll see “Certificate not yet due for renewal”. This is fine—use the existing certificates.

Mailbox Essentials

Postfix: Receiving Mail

Postfix is the MTA (Mail Transfer Agent) that handles receiving mail from other servers and routing it to user mailboxes.

Install Postfix:

sudo apt install postfix -y

During installation, a configuration screen will appear:

  1. General type of mail configuration:
    • Use arrow keys to select: Internet Site
    • Press Enter
  2. System mail name:
    • Enter: [your-domain].com (without the mail. prefix)
    • This is your email domain, not the server hostname
    • Press Enter

The installation configures Postfix with basic settings. We’ll customize it next.

Backup the original configuration:

# Always backup before editing
sudo cp /etc/postfix/main.cf /etc/postfix/main.cf.backup

# Verify backup exists
ls -la /etc/postfix/*.backup

Edit the main configuration file:

sudo nano /etc/postfix/main.cf

Verify these basic settings (they should already be set from installation, around lines 30-50):

myhostname = mail.[your-domain].com
mydomain = [your-domain].com
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
inet_interfaces = all
inet_protocols = ipv4

What these mean:

  • myhostname: The server’s fully qualified domain name
  • mydomain: Your email domain
  • myorigin: What domain appears in mail sent from this server
  • mydestination: Domains this server accepts mail for (local delivery)
  • inet_interfaces: Listen on all network interfaces (not just localhost)
  • inet_protocols: Use IPv4 only (simplifies configuration)
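
A tip for checking these later: postconf queries the live configuration, which beats grepping main.cf by hand (these commands assume Postfix is installed, so run them on the server):

```
# Show the effective value of one parameter
postconf mydestination

# Show every parameter you've changed from the Postfix defaults
postconf -n
```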

Scroll to the end of the file and add these sections:

# ====== Maildir Configuration ======
home_mailbox = Maildir/
mailbox_command =

# ====== TLS Configuration ======
smtpd_tls_cert_file=/etc/letsencrypt/live/mail.[your-domain].com/fullchain.pem
smtpd_tls_key_file=/etc/letsencrypt/live/mail.[your-domain].com/privkey.pem
smtpd_tls_security_level=may
smtpd_tls_loglevel = 1
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache

smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# ====== SASL Authentication ======
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname

# ====== Recipient Restrictions ======
smtpd_recipient_restrictions = 
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination

Configuration explanations:

Maildir:

  • home_mailbox = Maildir/: Store mail in ~/Maildir/ instead of /var/mail/username
  • mailbox_command =: Empty value disables any custom delivery agent

Why Maildir over mbox:

  • Each email is a separate file (safer, less corruption)
  • Better IMAP performance (no file locking issues)
  • Can handle concurrent access properly
  • Standard for modern mail servers
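
The "one file per message" property is easy to see in a sandbox. A throwaway Maildir, sketched here in /tmp (paths and filename are illustrative; real Maildir filenames encode timestamp, PID, and hostname):

```shell
# Build a throwaway Maildir and drop one message into new/
mkdir -p /tmp/maildir-demo/{new,cur,tmp}
printf 'Subject: demo\n\nhello maildir\n' > /tmp/maildir-demo/new/1739300000.M1.demo

# Each message is an ordinary file; reading the mailbox needs no locking
ls /tmp/maildir-demo/new
cat /tmp/maildir-demo/new/1739300000.M1.demo
```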

TLS Settings:

  • smtpd_tls_cert_file/key_file: Point to Let’s Encrypt certificates
  • smtpd_tls_security_level=may: Offer TLS but don’t require it (some servers don’t support it)
  • smtpd_tls_loglevel = 1: Log TLS connections for debugging
  • Session caches improve performance for repeated connections

SASL Authentication:

  • smtpd_sasl_type = dovecot: Use Dovecot for authentication
  • smtpd_sasl_path = private/auth: Socket path where Dovecot listens
  • smtpd_sasl_auth_enable = yes: Enable SASL authentication
  • noanonymous: Don’t allow anonymous authentication

Recipient Restrictions:

  • permit_mynetworks: Allow mail from localhost
  • permit_sasl_authenticated: Allow mail from authenticated users
  • reject_unauth_destination: Reject mail to external domains (prevents open relay)
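
Once Postfix is restarted, you can confirm the no-open-relay behavior by hand. An illustrative session (exact response codes vary slightly by version); an unauthenticated attempt to relay to an external domain should be refused:

```
$ telnet mail.[your-domain].com 25
220 mail.[your-domain].com ESMTP Postfix
HELO test.example.org
250 mail.[your-domain].com
MAIL FROM:<someone@example.org>
250 2.1.0 Ok
RCPT TO:<someone-else@external-example.net>
554 5.7.1 <someone-else@external-example.net>: Relay access denied
```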

Save and exit: Ctrl+X, Y, Enter

Check for configuration errors:

# Postfix will report syntax errors
sudo postfix check

# If there are no errors, nothing is printed
# If there are errors, fix them before proceeding

Restart Postfix to apply changes:

# Restart the service
sudo systemctl restart postfix

# Check status
sudo systemctl status postfix

Expected output:

● postfix.service - Postfix Mail Transport Agent
     Loaded: loaded (/lib/systemd/system/postfix.service; enabled)
     Active: active (exited) since Wed 2026-02-12 18:10:22 UTC

Verify Postfix is listening on port 25:

sudo ss -tlnp | grep :25

Expected output:

LISTEN 0    100    0.0.0.0:25    0.0.0.0:*    users:(("master",pid=12345,fd=13))

This confirms Postfix is listening on all interfaces (0.0.0.0) on port 25.

Check Postfix logs for any errors:

# View recent Postfix logs
sudo journalctl -u postfix -n 50 --no-pager

# Look for lines like:
# postfix/master[xxxxx]: daemon started -- version 3.x.x

If you see errors, address them before proceeding.

Common issues:

  1. Postfix won’t start – configuration error: Run sudo postfix check for details. Common causes:
    • Typo in the configuration file
    • Missing = sign in a parameter line
    • Wrong certificate path
  2. Port 25 not listening: Check if another service is using port 25:
     sudo lsof -i :25
  3. Permission denied on certificates: Ensure Postfix can read the Let’s Encrypt files:
     sudo ls -la /etc/letsencrypt/live/mail.[your-domain].com/
     # Should be readable by root

Dovecot: IMAP Access

Dovecot provides IMAP access so email clients can retrieve mail. It also handles authentication for Postfix.

Install Dovecot packages:

# Install core Dovecot and IMAP daemon
sudo apt install dovecot-core dovecot-imapd dovecot-lmtpd -y

# Verify installation
dovecot --version
# Should show: 2.3.x or higher

Dovecot configuration is split across multiple files in /etc/dovecot/conf.d/. We’ll edit four key files.

1. Configure Authentication (10-auth.conf):

sudo nano /etc/dovecot/conf.d/10-auth.conf

Find and modify (around line 10):

disable_plaintext_auth = yes

This prevents password transmission without TLS. Uncomment the line if it has a # at the start.

Find and modify (around line 100):

auth_mechanisms = plain login

This enables standard authentication methods. Again, remove # if present.

What this does: Requires TLS before accepting passwords (disable_plaintext_auth), and accepts both PLAIN and LOGIN auth methods (compatible with all email clients).

Save and exit: Ctrl+X, Y, Enter

2. Configure Mail Location (10-mail.conf):

sudo nano /etc/dovecot/conf.d/10-mail.conf

Find and set (around line 30):

mail_location = maildir:~/Maildir

Remove the # if present. This tells Dovecot to look for mail in each user’s ~/Maildir directory.

Save and exit: Ctrl+X, Y, Enter

3. Configure SSL/TLS (10-ssl.conf):

sudo nano /etc/dovecot/conf.d/10-ssl.conf

Find and set (around line 6):

ssl = required

Change from ssl = yes to ssl = required to enforce TLS.

Find and modify (around lines 12-14):

ssl_cert = </etc/letsencrypt/live/mail.[your-domain].com/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.[your-domain].com/privkey.pem

Important: Note the < before the path. This tells Dovecot to read the file contents.

Replace the default paths (pointing to /etc/dovecot/private) with your Let’s Encrypt certificates.

Save and exit: Ctrl+X, Y, Enter

4. Configure Postfix Authentication Socket (10-master.conf):

This is the most critical configuration—it allows Postfix to authenticate users via Dovecot.

sudo nano /etc/dovecot/conf.d/10-master.conf

Find the service auth section (around line 95). It will look like:

service auth {
  # auth_socket_path points to this userdb socket by default. It's typically
  # used by dovecot-lda, doveadm, possibly imap process, etc. Users that have
  # full permissions to this socket are able to get a list of all usernames and
  # get the results of everyone's userdb lookups.
  #
  # The default 0666 mode allows anyone to connect to the socket, but the
  # userdb lookups will succeed only if the userdb returns an "uid" field that
  # matches the caller process's UID. Also if caller's uid or gid matches the
  # socket's uid or gid the lookup succeeds. Anything else causes a failure.
  #
  # To give the caller full permissions to lookup all users, set the mode to
  # something else than 0666 and Dovecot lets the kernel enforce the
  # permissions (e.g. 0777 allows everyone full permissions).
  unix_listener auth-userdb {
    #mode = 0666
    #user = 
    #group = 
  }
}

Replace the entire service auth section with:

service auth {
  unix_listener auth-userdb {
    #mode = 0666
    #user = 
    #group = 
  }
  
  # Postfix smtp-auth
  unix_listener /var/spool/postfix/private/auth {
    mode = 0660
    user = postfix
    group = postfix
  }
}

What this does: Creates a Unix socket at /var/spool/postfix/private/auth that Postfix can use to verify user credentials. The permissions (0660) and ownership (postfix:postfix) allow Postfix to access it.

Save and exit: Ctrl+X, Y, Enter

Add Dovecot user to Postfix group:

This ensures Dovecot has permission to create the socket in Postfix’s directory:

# Add dovecot to postfix group
sudo usermod -aG postfix dovecot

# Verify
groups dovecot
# Should show: dovecot : dovecot postfix

Ensure the private directory exists with correct permissions:

# Check if directory exists
sudo ls -la /var/spool/postfix/private/

# If it doesn't exist, create it
sudo mkdir -p /var/spool/postfix/private/

# Set ownership and permissions
sudo chown postfix:postfix /var/spool/postfix/private/
sudo chmod 750 /var/spool/postfix/private/

Restart Dovecot:

# Restart to apply all configuration changes
sudo systemctl restart dovecot

# Check status
sudo systemctl status dovecot

Expected output:

● dovecot.service - Dovecot IMAP/POP3 email server
     Loaded: loaded (/lib/systemd/system/dovecot.service; enabled)
     Active: active (running) since Wed 2026-02-12 18:15:30 UTC

Verify the authentication socket was created:

sudo ls -la /var/spool/postfix/private/auth

Expected output:

srw-rw---- 1 postfix postfix 0 Feb 12 18:15 /var/spool/postfix/private/auth

The s indicates a socket file, and permissions rw-rw---- mean read/write for user and group (postfix).

Verify Dovecot is listening on port 993:

sudo ss -tlnp | grep :993

Expected output:

LISTEN 0    100    0.0.0.0:993    0.0.0.0:*    users:(("dovecot",pid=31055,fd=37))

Check Dovecot logs:

sudo journalctl -u dovecot -n 50 --no-pager

Look for:

  • master: Dovecot v2.3.x starting up
  • No error messages

Common issues:

  1. Auth socket not created: Most common issue. Solutions:
     # Verify dovecot is in postfix group
     groups dovecot

     # Restart both services in order
     sudo systemctl restart dovecot
     sudo systemctl restart postfix

     # Check again
     sudo ls -la /var/spool/postfix/private/auth
  2. Port 993 not listening: Dovecot may have failed to start. Check logs:
     sudo journalctl -u dovecot -n 100 --no-pager | grep -i error
  3. SSL certificate errors: Verify paths are correct and files are readable:
     sudo ls -la /etc/letsencrypt/live/mail.[your-domain].com/
     sudo dovecot -n | grep ssl_cert
  4. “doveconf: Fatal: Error in configuration file”: Syntax error in a config file. Find it with:
     sudo doveconf -n
     # Will show the error location

Submission Port Configuration

Port 587 is the standard port for mail submission (sending) with authentication. We’ll enable it now so it’s ready for future SMTP relay configuration.

Edit Postfix master configuration:

sudo nano /etc/postfix/master.cf

Find the submission section (around lines 17-30). It will be commented out with # symbols:

#submission inet n       -       y       -       -       smtpd
#  -o syslog_name=postfix/submission
#  -o smtpd_tls_security_level=encrypt
#  -o smtpd_sasl_auth_enable=yes

Uncomment and configure it to look like this (remove ALL # symbols from these lines):

submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_tls_auth_only=yes
  -o smtpd_reject_unlisted_recipient=no
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o smtpd_recipient_restrictions=
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject

Important: The lines starting with -o must have leading spaces (not tabs). They are option overrides for this specific service.

What each option does:

  • syslog_name=postfix/submission: Log entries clearly marked for port 587
  • smtpd_tls_security_level=encrypt: Require TLS (unlike port 25 which is optional)
  • smtpd_sasl_auth_enable=yes: Enable authentication
  • smtpd_tls_auth_only=yes: Only allow authentication over TLS
  • smtpd_reject_unlisted_recipient=no: Don’t reject recipients just because they aren’t in the local recipient table (relaying is still controlled by the relay restrictions below)
  • smtpd_client_restrictions=permit_sasl_authenticated,reject: Only authenticated users
  • smtpd_relay_restrictions=permit_sasl_authenticated,reject: Only relay for authenticated users

Why port 587 instead of 465:

  • Port 587 uses STARTTLS (start with plain text, upgrade to TLS)
  • Port 465 uses implicit TLS (TLS from the start)
  • 587 is the modern standard, supported by all email clients
  • You can enable 465 later if needed, but 587 is sufficient
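
You can observe the difference by hand with openssl once the server is up; -starttls smtp performs the plaintext-greeting-then-upgrade sequence:

```
# Port 587: connect in plaintext, then upgrade via STARTTLS
openssl s_client -connect mail.[your-domain].com:587 -starttls smtp

# Port 465: TLS from the first byte (only relevant if you enable it later)
openssl s_client -connect mail.[your-domain].com:465
```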

Note about port 465: There’s a submissions section further down in master.cf. Leave it commented out for now. We’re only using port 587.

Save and exit: Ctrl+X, Y, Enter

Restart Postfix to enable the submission port:

sudo systemctl restart postfix

# Check for any errors
sudo systemctl status postfix

Verify port 587 is now listening:

sudo ss -tlnp | grep :587

Expected output:

LISTEN 0    100    0.0.0.0:587    0.0.0.0:*    users:(("master",pid=31782,fd=14))

Verify all three mail ports are listening:

sudo ss -tlnp | grep -E ':(25|587|993)'

Expected output:

LISTEN 0    100    0.0.0.0:25     0.0.0.0:*    users:(("master",pid=31782,fd=13))
LISTEN 0    100    0.0.0.0:587    0.0.0.0:*    users:(("master",pid=31782,fd=14))
LISTEN 0    100    0.0.0.0:993    0.0.0.0:*    users:(("dovecot",pid=31055,fd=37))

All three ports should show “LISTEN”.

Common issue: If port 587 doesn’t appear, check that you uncommented all the submission lines and that there are no syntax errors:

# Check for configuration errors
sudo postfix check

# View recent Postfix logs for errors
sudo journalctl -u postfix -n 30 --no-pager

Create Mail User and Test Reception

Now that Postfix and Dovecot are configured, create a user account and test that email actually works.

Create a system user for email:

# Create user (choose a username - we'll use "collin" as an example)
sudo adduser collin

You’ll be prompted for:

  • Password (choose a strong one – you’ll use this to log into email)
  • Full name (optional, press Enter to skip)
  • Room number, phone, etc. (all optional, press Enter through these)
  • Confirm: Y

Create the Maildir structure:

Even though Postfix will create Maildir automatically on first delivery, it’s better to create it now with correct permissions:

# Create Maildir for the user
sudo -u collin maildirmake.dovecot /home/collin/Maildir

# Verify it was created
sudo ls -la /home/collin/

# You should see: drwx------ Maildir/

Set proper permissions:

# Ensure ownership is correct
sudo chown -R collin:collin /home/collin/Maildir

# Ensure permissions are restrictive (only user can read)
sudo chmod -R 700 /home/collin/Maildir

# Verify
sudo ls -la /home/collin/Maildir/

Expected Maildir structure:

drwx------ 5 collin collin 4096 Feb 12 18:49 .
drwx------ 3 collin collin 4096 Feb 12 18:49 ..
drwx------ 2 collin collin 4096 Feb 12 18:49 cur
drwx------ 2 collin collin 4096 Feb 12 18:49 new
drwx------ 2 collin collin 4096 Feb 12 18:49 tmp

The three subdirectories:

  • new/ – Unread messages
  • cur/ – Read messages
  • tmp/ – Temporary files during delivery
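
The rename from new/ to cur/ is how IMAP read status is stored: no database, just a filename suffix. Here's a quick simulation in a throwaway directory (nothing here touches your real mail):

```shell
# Simulate what Dovecot does when a message is read: the file moves
# from new/ to cur/ and gains a ":2," suffix followed by flag letters
# (S=seen, R=replied, F=flagged, T=trashed, D=draft).
demo=$(mktemp -d)
mkdir -p "$demo/Maildir/new" "$demo/Maildir/cur" "$demo/Maildir/tmp"

# A freshly delivered message lands in new/
touch "$demo/Maildir/new/1707683045.M1P2.mailhost"

# On first read, the IMAP server renames it into cur/ with the Seen flag
mv "$demo/Maildir/new/1707683045.M1P2.mailhost" \
   "$demo/Maildir/cur/1707683045.M1P2.mailhost:2,S"

ls "$demo/Maildir/cur/"
# Shows: 1707683045.M1P2.mailhost:2,S

rm -rf "$demo"
```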

Send a test email from external provider:

Using Gmail, ProtonMail, or any other email account, send an email to:

collin@[your-domain].com

Subject: Test Email
Body: Testing my new mail server!

Wait 10-30 seconds for delivery, then check the logs:

# Watch for the delivery in real-time
sudo journalctl -u postfix -f

# Press Ctrl+C to stop watching

Or check recent entries:

# View last 50 Postfix log entries
sudo journalctl -u postfix -n 50 --no-pager

# Search specifically for delivered messages
sudo journalctl -u postfix --no-pager | grep -i delivered

Look for a line like:

postfix/local[32242]: 8AE4841E68: to=<collin@[your-domain].com>, relay=local, delay=0.18, delays=0.16/0.01/0/0, dsn=2.0.0, status=sent (delivered to mailbox)

The key part is status=sent (delivered to mailbox).

Check if the email file exists:

# List new mail
sudo ls -la /home/collin/Maildir/new/

# You should see a file with a name like:
# 1707683045.V801I12345M654321.mail

Read the email (optional):

# View the raw email
sudo cat /home/collin/Maildir/new/*

You’ll see the full email including all headers, which is useful for debugging.
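
Raw messages get noisy fast. If you only want the delivery-relevant headers, a grep against the same file works (path and username as used above):

```shell
# Print only the headers most useful when debugging delivery
sudo grep -E '^(Return-Path|Received|From|To|Subject|Date):' \
    /home/collin/Maildir/new/*
```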

Test IMAP access with an email client:

Now configure an email client to make sure IMAP works. Use any of these:

  • Thunderbird (desktop)
  • iOS Mail (iPhone/iPad)
  • Android Gmail or K-9 Mail
  • macOS Mail

IMAP Settings:

Server: mail.[your-domain].com
Port: 993
Connection security: SSL/TLS
Authentication: Normal password
Username: collin
Password: [the password you set for collin user]

SMTP Settings (for future sending):

Server: mail.[your-domain].com
Port: 587
Connection security: STARTTLS
Authentication: Normal password
Username: collin
Password: [the password you set for collin user]

In Thunderbird, for example:

  1. Open Thunderbird
  2. File → New → Existing Mail Account
  3. Enter:
    • Your name: Your Name
    • Email address: collin@[your-domain].com
    • Password: [collin’s password]
  4. Click “Configure manually”
  5. Enter the IMAP and SMTP settings above
  6. Click “Done”

You should:

  • Connect successfully
  • See your “Test Email” in the inbox
  • Be able to read it

Common issues:

  1. Email doesn’t arrive – check logs:

    # Check for errors or bounces
    sudo journalctl -u postfix -n 100 --no-pager | grep -i error

    # Check mail queue (should be empty if mail was delivered)
    sudo mailq

  2. Email delivered to /var/mail/collin instead of Maildir:

    # Check traditional mailbox location
    sudo ls -la /var/mail/collin

    # If mail is there, Postfix isn't using Maildir
    # Verify main.cf has: home_mailbox = Maildir/
    sudo grep "home_mailbox" /etc/postfix/main.cf

    # Restart Postfix if you changed anything
    sudo systemctl restart postfix

  3. IMAP connection refused:

    # Verify Dovecot is running
    sudo systemctl status dovecot

    # Verify port 993 is listening
    sudo ss -tlnp | grep :993

    # Check Dovecot logs for errors
    sudo journalctl -u dovecot -n 50 --no-pager

  4. Authentication failed in email client:

    # Verify user exists
    id collin

    # Test authentication manually (plain IMAP on localhost)
    telnet localhost 143
    # Type: a1 LOGIN collin password
    # Should return: a1 OK Logged in
    # Type: a2 LOGOUT

  5. TLS/SSL errors:

    # Verify certificates are readable
    sudo ls -la /etc/letsencrypt/live/mail.[your-domain].com/

    # Test TLS connection
    openssl s_client -connect mail.[your-domain].com:993 -quiet
    # Should show certificate chain and connect

If your email client connects and shows the test email, mail reception is fully working. You can now receive email on your domain and access it via IMAP from any device.


The Outbound Challenge

Email Reputation Reality

Your mail server can now receive email perfectly. Sending is where things get complicated.

The problem: Email providers (Gmail, Outlook, etc.) evaluate sender reputation based on:

  1. IP reputation – Is this IP known to send spam?
  2. Sending volume – Do they send enough mail to establish patterns?

Why VPS IPs struggle:

  • Cloud provider IP ranges are constantly cycled by spammers
  • If another DigitalOcean customer in your subnet sends spam, your IP can be blacklisted by association
  • Personal mail servers send 5-10 emails/day, which isn’t enough volume for ML models to build trust

The solution: Use a commercial SMTP relay service with established IP pools and reputation. They handle deliverability; you handle learning mail infrastructure.

This isn’t defeat—it’s pragmatic architecture. You’re still running the full mail stack, just routing outbound through a reliable path.
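
For a preview of what that routing looks like, here is a sketch of the relay settings Part 3 adds to /etc/postfix/main.cf. The hostname and credential path are illustrative placeholders, not values to copy verbatim:

```
# Route all outbound mail through the relay provider
relayhost = [mail.smtp2go.com]:587

# Authenticate to the relay with credentials stored in a lookup table
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

# Require TLS on the relay connection
smtp_tls_security_level = encrypt
```

Note these are smtp_* (client-side) parameters, not the smtpd_* (server-side) ones configured earlier — Postfix acts as a client when it hands mail to the relay.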


AWS SES: Setup and Reality Check

Amazon Simple Email Service (SES) was our first choice for outbound relay due to its excellent deliverability and cost-effectiveness ($0.10 per 1,000 emails). Here’s what happened during setup.

Create or log into your AWS account:

Navigate to: https://console.aws.amazon.com/

Access the SES Console:

  1. In the top search bar, type “SES”
  2. Click Simple Email Service
  3. Important: Check your region in the top-right corner
    • Choose: us-east-2 (Ohio) or us-east-1 (Virginia)
    • SES isn’t available in all regions
    • Remember which region you choose—use it consistently

Verify your domain:

  1. In the SES dashboard, left sidebar → Verified identities
  2. Click Create identity (orange button)
  3. Select: Domain
  4. Domain name: [your-domain].com (without mail. prefix)
  5. Check these boxes:
    • Use a default DKIM signing key (RSA_2048_BIT)
    • Publish DNS records to Route 53 – UNCHECK (we’re using Cloudflare)
  6. Click Create identity

You’ll see a verification screen with DNS records:

AWS generates three DKIM CNAME records. Copy each one to a text file:

[random-string-1]._domainkey.[your-domain].com → [value-1].dkim.amazonses.com
[random-string-2]._domainkey.[your-domain].com → [value-2].dkim.amazonses.com
[random-string-3]._domainkey.[your-domain].com → [value-3].dkim.amazonses.com

Plus a domain verification TXT record:

_amazonses.[your-domain].com → [verification-code]

Add DKIM records to Cloudflare:

Go to Cloudflare DNS → Add record (do this 3 times):

Type: CNAME
Name: [random-string-1]._domainkey
Target: [value-1].dkim.amazonses.com
Proxy: DNS only
TTL: Auto

Repeat for all three DKIM records. After 2-5 minutes, AWS will verify them and show “Successful” next to DKIM configuration.

Generate SMTP credentials:

These credentials allow Postfix to authenticate with SES as an SMTP relay.

  1. SES Console → Left sidebar → SMTP settings
  2. Click Create SMTP credentials
  3. IAM User Name: Keep default (ses-smtp-user-[date]) or customize
  4. Click Create user
  5. Critical: Download or copy the credentials immediately:
    • SMTP username: (starts with AKIA…)
    • SMTP password: (long random string)
    • You cannot retrieve the password later!

Save these in a secure location (password manager). You won’t need them until SES production access is approved; Part 3’s SMTP2GO relay uses its own separate credentials.

Request production access:

By default, SES operates in “sandbox mode,” which only allows sending to verified email addresses. For real-world use (like job applications), you need production access.

  1. SES Console → Left sidebar → Account dashboard
  2. Look for Production access section
  3. Click Request production access button

Fill out the form:

  • Mail type: Transactional
  • Website URL: https://[your-domain].com
  • Use case description: Personal mail server for professional blog ([your-domain].com) and job application correspondence. Low volume (50-100 emails per month). Self-hosted Postfix MTA using SES as SMTP relay for reliable delivery. All mail is individual professional correspondence—no bulk sending or marketing.
  • Describe how you will comply with AWS Service Terms: Monitor SES console daily for bounces and complaints. Remove invalid addresses immediately. Maintain low complaint rate. All recipients are legitimate contacts (job recruiters, blog readers who contacted me).
  • Acknowledge compliance: Check the box
  • Click Submit request

The response (24-48 hours later):

This is what AWS actually sent back:

Hello,

Thank you for submitting your request to increase your sending limits. We would like to gather more information about your use case.

[…]

Due to some limiting factors on your account currently, you are not eligible to send SES messages in US East (Ohio) region. You will need to show a pattern of use of other AWS services and a consistent paid billing history to gain access to this function.

We enforce these limitations on all new accounts. Your continued usage of other AWS services will give us greater flexibility to increase your spending limits in the future.

Please open a new case after you have a successful billing cycle and additional use of other AWS services and we will gladly review your account.

What this means:

AWS is being conservative with new accounts to prevent spammers. They want to see:

  1. “Pattern of use of other AWS services” – Not just SES, but S3, Lambda, CloudWatch, etc.
  2. “Consistent paid billing history” – At least 1-2 billing cycles ($2-10/month minimum)
  3. Time – Proving you’re not a fly-by-night spam operation

This isn’t a permanent “no”—it’s “not yet, prove you’re legitimate first.”

This is frustrating but common. Many new AWS accounts experience this, especially for SES. The solution is to build up AWS usage over 60-90 days, then reapply with evidence of legitimate usage.

Why we’re not giving up on SES:

  • Best long-term economics ($0.10/1000 emails vs $19.95/month for SendGrid at scale)
  • Excellent deliverability when approved
  • AWS experience is valuable for cybersecurity roles
  • Learning AWS infrastructure is part of the project

The strategy:

  1. Use SMTP2GO immediately (1,000 emails/month free tier) for projects and correspondence
  2. Build AWS usage history with S3 backups (next section)
  3. Optionally use Lambda, CloudWatch for additional AWS activity
  4. After 60-90 days, reapply to SES with proof:
    • “I’ve been using S3 for automated backups for 3 months”
    • “Consistent billing history attached”
    • “Professional mail infrastructure in production”
  5. Much higher approval odds the second time

This approach is both pragmatic (email works now via SMTP2GO) and strategic (positioning for SES approval later).


Building AWS Usage History: S3 Backups

The strategy to get SES approved: generate consistent AWS usage and billing history while also creating something useful. Automated mail backups to S3 accomplish both.

Benefits:

  • Generates AWS billing (~$0.50-1/month)
  • Shows regular service usage
  • Actually protects your mail
  • Positions you for SES approval in 60-90 days

Install AWS CLI v2:

The awscli package isn’t in Ubuntu 24.04 repositories. Install the official AWS CLI v2:

# Download the installer
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

# Install unzip if not present
sudo apt install unzip -y

# Unzip the installer
unzip awscliv2.zip

# Run the installer
sudo ./aws/install

# Verify installation
aws --version
# Should show: aws-cli/2.x.x Python/3.x.x Linux/...

# Clean up
rm -rf aws awscliv2.zip

Create IAM user with S3 access:

Don’t use your root AWS account credentials. Create a dedicated IAM user for the mail server:

  1. AWS Console → Search for IAM → Click IAM
  2. Left sidebar → Users
  3. Click Create user (orange button)
  4. User name: mail-server-admin
  5. Click Next

Set permissions:

  1. Select Attach policies directly
  2. Search for and select these policies:
    • AmazonS3FullAccess (for backups)
    • CloudWatchLogsFullAccess (optional, for future log monitoring)
    • AWSLambda_FullAccess (optional, for future automation)
  3. Click Next
  4. Review and click Create user

Create access keys for CLI access:

  1. Click on the user you just created: mail-server-admin
  2. Click Security credentials tab
  3. Scroll down to Access keys section
  4. Click Create access key
  5. Select use case: Command Line Interface (CLI)
  6. Check the confirmation checkbox: “I understand the above recommendation…”
  7. Click Next
  8. Description (optional): “Mail server backups and automation”
  9. Click Create access key

Save your credentials:

You’ll see:

  • Access key ID: AKIA… (20 characters)
  • Secret access key: Long random string (40 characters)

Critical: Click Download .csv file and save it securely. You cannot view the secret again!

Copy both values to use in the next step.

Configure AWS CLI on your mail server:

# Run configuration wizard
aws configure

Enter when prompted:

AWS Access Key ID [None]: AKIA[your-key-here]
AWS Secret Access Key [None]: [your-secret-key-here]
Default region name [None]: us-east-2
Default output format [None]: json

Verify credentials work:

# Test authentication
aws sts get-caller-identity

Expected output:

{
    "UserId": "AIDA...",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/mail-server-admin"
}

If you see your account number and the mail-server-admin user, configuration is correct.

Create S3 bucket for backups:

Bucket names must be globally unique across all AWS accounts. Add a timestamp to ensure uniqueness:

# Create bucket (replace [your-domain] with your actual domain)
aws s3 mb s3://[your-domain]-mail-backups-$(date +%s)

# Example output:
# make_bucket: secblues-mail-backups-1707928699

Note the full bucket name (including the timestamp). You’ll need it for the backup script.

Verify bucket was created:

# List all your buckets
aws s3 ls

# Should show:
# 2026-02-12 20:38:22 [your-domain]-mail-backups-1707928699

Create backup script:

# Create the script file
sudo nano /usr/local/bin/backup-mail.sh

Paste this script (replace placeholders with your actual values):

#!/bin/bash

# Mail backup script for S3
# Backs up Maildir and mail configuration daily

# Stop on the first error, undefined variable, or failed pipeline
# stage so a partial backup never reports success
set -euo pipefail

BACKUP_DATE=$(date +%Y-%m-%d-%H%M%S)
BUCKET_NAME="[your-full-bucket-name-including-timestamp]"
MAIL_USER="[your-mail-username]"
BACKUP_DIR="/tmp/mail-backup-${BACKUP_DATE}"

# Create temporary backup directory
mkdir -p "${BACKUP_DIR}"

# Backup Maildir (user's mail)
tar -czf "${BACKUP_DIR}/maildir-${BACKUP_DATE}.tar.gz" \
    -C "/home/${MAIL_USER}" \
    Maildir/

# Backup mail server configuration
tar -czf "${BACKUP_DIR}/mail-config-${BACKUP_DATE}.tar.gz" \
    /etc/postfix/main.cf \
    /etc/postfix/master.cf \
    /etc/dovecot/

# Upload Maildir backup to S3
aws s3 cp "${BACKUP_DIR}/maildir-${BACKUP_DATE}.tar.gz" \
    "s3://${BUCKET_NAME}/maildir/"

# Upload config backup to S3
aws s3 cp "${BACKUP_DIR}/mail-config-${BACKUP_DATE}.tar.gz" \
    "s3://${BUCKET_NAME}/config/"

# Cleanup temporary files
rm -rf "${BACKUP_DIR}"

echo "Backup completed: ${BACKUP_DATE}"

What this script does:

  • Creates timestamped backups of Maildir and configurations
  • Compresses them with gzip
  • Uploads to separate S3 folders (maildir/ and config/)
  • Cleans up temporary files
  • Logs completion

Save and exit: Ctrl+X, Y, Enter

Make the script executable:

# Set execute permission
sudo chmod +x /usr/local/bin/backup-mail.sh

# Verify permissions
ls -la /usr/local/bin/backup-mail.sh
# Should show: -rwxr-xr-x (executable)

Test the backup script:

# Run manually
sudo /usr/local/bin/backup-mail.sh

Expected output:

Backup completed: 2026-02-12-204716

Verify files were uploaded to S3:

# List all objects in bucket
aws s3 ls s3://[your-bucket-name]/ --recursive

Expected output:

2026-02-12 20:47:21   3529 maildir/maildir-2026-02-12-204716.tar.gz
2026-02-12 20:47:21  28314 config/mail-config-2026-02-12-204716.tar.gz

If you see both files, backups are working!

Schedule automatic daily backups with cron:

# Edit root's crontab
sudo crontab -e

If prompted to select an editor:

  • Choose 1 for nano (easiest)

Add this line at the end:

# Daily mail backup at 2:00 AM
0 2 * * * /usr/local/bin/backup-mail.sh >> /var/log/mail-backup.log 2>&1

What this does:

  • Runs backup script daily at 2:00 AM
  • Logs output to /var/log/mail-backup.log
  • 2>&1 captures both stdout and stderr

Save and exit: Ctrl+X, Y, Enter

Verify cron job was added:

# List crontab
sudo crontab -l

# Should show your backup line

Check cron service is running:

sudo systemctl status cron
# Should show: active (running)

Configure S3 lifecycle policy (auto-delete old backups):

Keep backups for 30 days, then automatically delete them to control costs:

# Create lifecycle policy file
cat > /tmp/lifecycle.json << 'EOF'
{
  "Rules": [
    {
      "Expiration": {
        "Days": 30
      },
      "ID": "DeleteOldBackups",
      "Status": "Enabled",
      "Prefix": ""
    }
  ]
}
EOF

# Apply policy to bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket [your-bucket-name] \
  --lifecycle-configuration file:///tmp/lifecycle.json

# Verify policy was applied
aws s3api get-bucket-lifecycle-configuration \
  --bucket [your-bucket-name]

Expected output:

{
    "Rules": [
        {
            "Expiration": {
                "Days": 30
            },
            "ID": "DeleteOldBackups",
            "Status": "Enabled",
            "Prefix": ""
        }
    ]
}

This keeps your S3 costs predictable—after 30 days, old backups are automatically deleted.

Cost estimation:

  • S3 storage: $0.023/GB/month
  • Estimated usage: 20GB (mail + configs, 30 days retention)
  • Lifecycle policy: Automatically deletes after 30 days
  • Monthly cost: ~$0.50-1.00

Small cost for AWS usage history plus actual backup protection.
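
The arithmetic behind that estimate, if you want to re-run it against your own retention settings:

```shell
# storage_gb × price_per_gb_month = monthly S3 storage cost
# (20 GB retained × $0.023/GB-month)
awk 'BEGIN { printf "$%.2f/month\n", 20 * 0.023 }'
# Prints: $0.46/month
```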

What you’ve accomplished:

✅ AWS CLI configured and authenticated
✅ S3 bucket created
✅ Automated daily backups working
✅ 30-day retention policy
✅ AWS billing started (usage history building)
✅ Actual useful backup system

Timeline for SES:

  • Month 1-2: S3 backups generate consistent usage
  • Month 2-3: 2+ billing cycles complete
  • Month 3: Reapply to SES with evidence: “3 months of S3 usage, consistent billing history”
  • Expected: Much higher approval rate

Testing the backup:

The cron job runs daily at 2 AM. To test it now:

# Run backup manually
sudo /usr/local/bin/backup-mail.sh

# Check the log (will be created after first cron run)
tail /var/log/mail-backup.log

# Verify S3 has new files
aws s3 ls s3://[your-bucket-name]/ --recursive --human-readable

You should see increasing file counts daily.

Restoring from backup (if needed):

# List available backups
aws s3 ls s3://[your-bucket-name]/maildir/

# Download specific backup
aws s3 cp s3://[your-bucket-name]/maildir/maildir-2026-02-12-204716.tar.gz /tmp/

# Extract
cd /home/[username]
tar -xzf /tmp/maildir-2026-02-12-204716.tar.gz

# Verify mail restored
ls -la /home/[username]/Maildir/new/
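
Before extracting an archive over a live Maildir, it's worth confirming it downloaded intact; tar's list mode reads the whole archive and fails loudly on corruption:

```shell
# List contents without extracting; a truncated or corrupt archive
# produces a gzip/tar error and a non-zero exit status
if tar -tzf /tmp/maildir-2026-02-12-204716.tar.gz > /dev/null; then
    echo "archive OK"
else
    echo "archive CORRUPT - download it again"
fi
```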

Verification and Testing

Run through this comprehensive checklist to confirm everything is configured correctly.

System and Network Checks

Verify all ports are listening:

# Check all three mail ports at once
sudo ss -tlnp | grep -E ':(25|587|993)'

Expected output:

LISTEN 0    100    0.0.0.0:25     0.0.0.0:*    users:(("master",pid=31782,fd=13))
LISTEN 0    100    0.0.0.0:587    0.0.0.0:*    users:(("master",pid=31782,fd=14))
LISTEN 0    100    0.0.0.0:993    0.0.0.0:*    users:(("dovecot",pid=31055,fd=37))

All three should show “LISTEN”.

Verify firewall rules:

sudo ufw status verbose | grep -E '(25|587|993|80|443)'

Expected output:

25/tcp                     ALLOW IN    Anywhere
587/tcp                    ALLOW IN    Anywhere
993/tcp                    ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere

DNS Verification

Re-verify all DNS records:

# A record
dig mail.[your-domain].com +short
# Should return: [your-droplet-ip]

# MX record
dig MX [your-domain].com +short
# Should return: 10 mail.[your-domain].com.

# SPF record
dig TXT [your-domain].com +short | grep spf
# Should include: "v=spf1 ip4:[your-ip] include:amazonses.com -all"

# DMARC record
dig TXT _dmarc.[your-domain].com +short
# Should return: "v=DMARC1; p=none; rua=mailto:dmarc@..."

# PTR record (reverse DNS)
dig -x [your-droplet-ip] +short
# Should return: mail.[your-domain].com.

If any fail, revisit the DNS configuration section.

Service Status Checks

Verify all services are running:

# Postfix status
sudo systemctl status postfix --no-pager -l

# Dovecot status
sudo systemctl status dovecot --no-pager -l

# Both should show: Active: active (running)

Check for errors in logs:

# Postfix errors
sudo journalctl -u postfix -n 100 --no-pager | grep -i error

# Dovecot errors
sudo journalctl -u dovecot -n 100 --no-pager | grep -i error

# If both return nothing, that's good - no errors

Postfix Configuration Checks

Verify Postfix is using Maildir:

sudo postconf | grep home_mailbox
# Should show: home_mailbox = Maildir/

Verify TLS certificates are configured:

sudo postconf | grep smtpd_tls_cert_file
sudo postconf | grep smtpd_tls_key_file

# Both should point to /etc/letsencrypt/live/mail.[your-domain].com/

Verify SASL is enabled:

sudo postconf | grep smtpd_sasl_auth_enable
# Should show: smtpd_sasl_auth_enable = yes

Dovecot Configuration Checks

Verify mail location:

sudo doveconf -n | grep mail_location
# Should show: mail_location = maildir:~/Maildir

Verify SSL is required:

sudo doveconf -n | grep "^ssl "
# Should show: ssl = required

Verify auth socket exists:

sudo ls -la /var/spool/postfix/private/auth
# Should show: srw-rw---- 1 postfix postfix

Email Delivery Test

Send test email from external provider:

From Gmail, ProtonMail, or any other email service, send to: [your-username]@[your-domain].com

Monitor delivery in real-time:

# Watch logs
sudo journalctl -u postfix -f

# Press Ctrl+C when you see the delivery

Verify delivery:

# Check for "delivered to mailbox"
sudo journalctl -u postfix --no-pager | grep "status=sent"

# Example output:
# status=sent (delivered to mailbox)

Verify email file exists:

# List new mail
sudo ls -la /home/[your-username]/Maildir/new/

# Should see a file with timestamp in name

IMAP Connection Test

Test with command-line:

# Test IMAP over SSL
openssl s_client -connect mail.[your-domain].com:993 -quiet

# After connection, type:
a1 LOGIN [username] [password]
# Should return: a1 OK Logged in

# List mailboxes
a2 LIST "" "*"
# Should show: * LIST (\HasNoChildren) "." "INBOX"

# Logout
a3 LOGOUT

Test with email client:

Configure any IMAP client with these settings and verify:

  • Connection succeeds
  • Test email appears in inbox
  • Can read the email

Connection details:

IMAP:
  Server: mail.[your-domain].com
  Port: 993
  Security: SSL/TLS
  Username: [your-username]
  Password: [user's password]

SMTP (for future):
  Server: mail.[your-domain].com
  Port: 587
  Security: STARTTLS
  Username: [your-username]
  Password: [user's password]

AWS S3 Backup Verification

Verify backup script runs:

# Run manual backup
sudo /usr/local/bin/backup-mail.sh

# Check output
# Should show: Backup completed: [timestamp]

Verify files in S3:

# List all backups
aws s3 ls s3://[your-bucket-name]/ --recursive --human-readable

# Should show files in maildir/ and config/ folders

Verify cron job:

# Check crontab
sudo crontab -l | grep backup-mail

# Should show: 0 2 * * * /usr/local/bin/backup-mail.sh >> /var/log/mail-backup.log 2>&1

Verify lifecycle policy:

# Check policy
aws s3api get-bucket-lifecycle-configuration \
  --bucket [your-bucket-name]

# Should show 30-day expiration rule

TLS Certificate Verification

Check certificate expiration:

# Check cert details
sudo certbot certificates

# Should show:
# Certificate Name: mail.[your-domain].com
# Expiry Date: [90 days from creation]
# Certificate Path: /etc/letsencrypt/live/mail.[your-domain].com/fullchain.pem

Verify auto-renewal:

# Check renewal timer
sudo systemctl status certbot.timer

# Should show: Active: active (waiting)

Test renewal process:

# Dry run (doesn't actually renew)
sudo certbot renew --dry-run

# Should end with: Congratulations, all simulations succeeded

Security Verification

Check file permissions on sensitive files:

# Maildir should be 700 (user-only access)
sudo ls -ld /home/[username]/Maildir
# Should show: drwx------

# Certificate private key should be 600 (live/ holds symlinks into archive/)
sudo ls -laL /etc/letsencrypt/live/mail.[your-domain].com/privkey.pem
# -L follows the symlink; the real key file should show: -rw-------

# Postfix config should be 644
sudo ls -la /etc/postfix/main.cf
# Should show: -rw-r--r--

Verify no open relay:

# Test if server allows relaying without auth
telnet mail.[your-domain].com 25

# After connection, type:
HELO test.com
MAIL FROM: <test@example.com>
RCPT TO: <someone@gmail.com>

# Should get: 554 5.7.1 Relay access denied
# This is GOOD - server refuses to relay

Final Checklist

Run through this checklist and verify each item:

  • [ ] All ports listening (25, 587, 993)
  • [ ] All DNS records verified (A, MX, SPF, DMARC, PTR)
  • [ ] Postfix running without errors
  • [ ] Dovecot running without errors
  • [ ] TLS certificates valid and auto-renewing
  • [ ] External email received successfully
  • [ ] Email visible in Maildir
  • [ ] IMAP client connection works
  • [ ] Can read received email in client
  • [ ] Auth socket exists and accessible
  • [ ] S3 backups working
  • [ ] Cron job scheduled
  • [ ] AWS usage generating billing
  • [ ] No open relay (relay access denied without auth)

If all items are checked, your mail server is fully operational for receiving email.

What’s working:

  • Receiving email from any sender
  • IMAP access from any email client
  • TLS encryption throughout
  • Automated backups
  • AWS usage history building

What’s pending:

  • Outbound relay configuration (Part 3: SMTP2GO)
  • Spam filtering (Part 3: SpamAssassin)
  • Additional security hardening (Part 3)
  • AWS SES approval (60-90 days)

Conclusion

You’ve built a production-ready mail server from scratch. Let’s review what you’ve accomplished and what comes next.

What You’ve Built

Infrastructure:

  • DigitalOcean droplet running Ubuntu 24.04 LTS
  • Hardened firewall (UFW) with only necessary ports open
  • TLS certificates from Let’s Encrypt with automatic renewal
  • Complete DNS configuration (A, MX, SPF, DMARC, PTR)

Mail Services:

  • Postfix: Receiving mail on port 25, submission on port 587
  • Dovecot: IMAP access on port 993
  • Maildir format: Each email as a separate file
  • TLS encryption: All connections encrypted
  • Authentication: Dovecot SASL integration with Postfix

Backup and AWS Integration:

  • AWS CLI configured and authenticated
  • S3 bucket for automated backups
  • Daily backup script running via cron
  • 30-day retention policy
  • Consistent AWS usage generating billing history

Current Capabilities

Fully Functional:

  • Receiving email: Anyone can send mail to [username]@[your-domain].com
  • IMAP access: Connect from any email client (Thunderbird, iOS Mail, Android, etc.)
  • Multiple devices: Access same mailbox from phone, laptop, tablet
  • Encrypted connections: TLS for SMTP and IMAP
  • Automated backups: Daily backups to S3
  • Professional setup: Proper SPF, DMARC, PTR records

Pending Configuration:

  • Sending email: Requires SMTP relay (Part 3)
  • Spam filtering: SpamAssassin configuration (Part 3)
  • Security hardening: fail2ban, additional restrictions (Part 3)
  • Monitoring: Log analysis, alerting (Part 3)

Monthly Costs

Current expenses:

  • DigitalOcean droplet (mail server): $6.00/month
  • DigitalOcean droplet (WordPress, existing): $6.00/month
  • AWS S3 backups: ~$0.50/month
  • Total: ~$12.50/month

Cost breakdown by function:

  • Receiving email: $6/month (droplet)
  • Backup protection: $0.50/month (S3)
  • AWS usage history: $0.50/month (S3, building toward SES)
  • WordPress hosting: $6/month (existing)

Comparison to alternatives:

  • Google Workspace: $6/user/month = $72/year (email only, no custom setup)
  • Fastmail: $5/month = $60/year (email only)
  • Your setup: $12.50/month = $150/year (email + blog + learning experience)

The AWS SES Situation

Current status: Production access denied due to new account

What we learned: AWS requires established usage history before approving SES for new accounts. This is common anti-spam policy, not a rejection of your use case.

The path forward:

  1. Now – Month 2: S3 backups generate consistent AWS usage and billing
  2. Month 2-3: Additional AWS services (optional: Lambda automation, CloudWatch logging)
  3. Month 3: Reapply to SES with evidence:
    • “90 days of consistent S3 usage”
    • “3 billing cycles with charges”
    • “Professional mail infrastructure in production”
  4. Expected outcome: Much higher approval rate

Why we’re not giving up on SES:

  • Best long-term economics ($0.10/1,000 emails)
  • Excellent deliverability
  • AWS experience valuable for cybersecurity careers
  • Natural fit with existing AWS infrastructure (S3 backups)

Timeline: 60-90 days until SES reapplication

What’s Next: Part 3 and Beyond

The next article will cover:

1. SMTP2GO SMTP Relay (Immediate Solution)

  • Sign up for free tier (1,000 emails/month)
  • Configure Postfix to relay via SMTP2GO
  • Test outbound email delivery
  • Verify deliverability to Gmail, Outlook, etc.

Why SMTP2GO:

  • Works immediately (no waiting period)
  • Free tier sufficient for personal use
  • Proven deliverability
  • Temporary until SES approval

2. SpamAssassin Configuration

  • Install and configure spam filtering
  • Bayesian learning
  • Custom rules
  • Integration with Dovecot

3. Security Hardening

  • fail2ban for brute-force protection
  • Rate limiting
  • Additional Postfix restrictions
  • Log monitoring
  • Intrusion detection

4. Monitoring and Maintenance

  • Log rotation
  • Performance monitoring
  • Disk usage alerts
  • Backup verification
  • Certificate renewal checks

Skills Acquired

Through this deployment, you’ve gained hands-on experience with:

Linux System Administration:

  • User and permissions management
  • Firewall configuration (UFW)
  • Service management (systemd)
  • Cron job automation
  • Log analysis

Email Infrastructure:

  • SMTP protocol and Postfix configuration
  • IMAP protocol and Dovecot setup
  • TLS certificate management
  • DNS records (MX, A, SPF, DMARC, PTR)
  • Maildir format and mail delivery

Cloud Infrastructure:

  • AWS IAM user and policy management
  • AWS CLI configuration
  • S3 bucket creation and lifecycle policies
  • Multi-cloud architecture (DO + AWS + Cloudflare)

Troubleshooting:

  • Service debugging via journalctl
  • DNS propagation issues
  • Permission and socket configuration
  • Log analysis and error resolution

Documentation for Future Reference

Key file locations:

Configuration Files:
/etc/postfix/main.cf          - Postfix main configuration
/etc/postfix/master.cf        - Postfix service configuration
/etc/dovecot/conf.d/          - Dovecot configuration directory
/etc/letsencrypt/live/        - TLS certificates
/usr/local/bin/backup-mail.sh - Backup script

Mail Data:
/home/[username]/Maildir/     - User mailbox
/var/spool/postfix/           - Postfix queue and sockets

Logs:
journalctl -u postfix         - Postfix logs
journalctl -u dovecot         - Dovecot logs
/var/log/mail-backup.log      - Backup logs (after first cron run)

AWS:
~/.aws/credentials            - AWS CLI credentials
~/.aws/config                 - AWS CLI configuration

Important commands:

# Service management
sudo systemctl restart postfix
sudo systemctl restart dovecot
sudo systemctl status postfix dovecot

# View logs
sudo journalctl -u postfix -f
sudo journalctl -u dovecot -n 100

# Check mail queue
sudo mailq

# Test configuration
sudo postfix check
sudo doveconf -n

# Manual backup
sudo /usr/local/bin/backup-mail.sh

# List S3 backups
aws s3 ls s3://[bucket-name]/ --recursive

Final Thoughts

Building a mail server is one of the more complex self-hosting projects, which is why many people avoid it. But you’ve done it the right way:

  • Security-first: TLS, proper authentication, restricted permissions
  • Properly configured: Following best practices for DNS, file permissions, service configuration
  • Documented: Understanding why each component exists and how they interact
  • Backed up: Automated protection for your data
  • Pragmatic: Using commercial relay for deliverability while building toward full self-hosting

You now have a fully functional mail server that receives email reliably, integrates with any email client, and provides the foundation for complete email independence.

Part 3 will complete the picture by adding outbound sending capabilities and additional security layers, giving you a production-ready mail server suitable for professional correspondence.


Time to complete: 2-3 hours
Part of series: Building a Secure Email Server
Next: Part 3 – Outbound Relay, Spam Filtering, and Security Hardening

The Mailroom: Part One – Component Selection and Architecture Decisions

By Collin

Project Overview

Email infrastructure is one of the most complex yet fundamental services on the internet. Despite using email daily, most people—even many in tech—don’t understand how it actually works under the hood. SMTP, IMAP, DKIM, SPF, DMARC, spam filtering, TLS encryption, mail queues, DNS configurations… the list of interacting components is extensive.

This project aims to build a production-grade email server from scratch, not as a quick setup following a tutorial, but as a deep dive into email security, protocols, and infrastructure management. The goals are threefold:

  1. Learn email security from the infrastructure side – understanding how authentication protocols (SPF/DKIM/DMARC) work, how spam filtering detects threats, how mail servers get compromised, and how to harden them.
  2. Gain hands-on experience with production services – managing a real mail server with actual deliverability requirements (job applications, blog correspondence) forces you to understand reliability, monitoring, and troubleshooting in ways that lab environments don’t.
  3. Document the process – turning the learning experience into comprehensive guides that help others understand not just the “how” but the “why” behind email infrastructure decisions.

The server will host email for secblues.com, providing a professional email address for job applications and blog correspondence. This is real-world usage with real consequences if misconfigured—which makes it an excellent learning opportunity.

Why Build Your Own Mail Server in 2026?

It’s a fair question. Commercial email services like Google Workspace, Fastmail, and ProtonMail are reliable, secure, and cost-effective. Why take on the complexity of self-hosting?

The answer: learning. Running your own mail server teaches you:

  • How email authentication prevents spoofing and phishing
  • Why some emails go to spam (and how to fix it)
  • How mail servers defend against abuse and attacks
  • The practical implementation of cryptography (DKIM signing, TLS)
  • DNS configuration beyond basic A records
  • Log analysis and threat detection
  • Incident response when things inevitably break

These are skills that translate directly to security operations, DevOps, and systems administration roles. You can’t learn them by using Gmail.

The Deployment Architecture

For this project, I’m deploying a hybrid architecture using DigitalOcean for hosting with AWS SES for outbound email relay. This combines self-hosted infrastructure for learning with commercial relay for reliable delivery.

Why hybrid instead of pure self-hosted?

The harsh reality of email reputation: even with perfect configuration (SPF, DKIM, DMARC, PTR records), small mail servers face two massive obstacles:

  1. VPS IP reputation: Spammers constantly rotate through VPS IP ranges. If another customer in your /24 subnet sends spam, your IP can be blacklisted by association via services like UCEPROTECT-3.
  2. The volume gap: Gmail and Outlook use sending volume as a trust signal. A personal server sending 5-10 emails per day never generates enough data for their machine learning models to establish trust. You remain in “unknown sender” territory indefinitely, which often defaults to the spam folder.

Building IP reputation takes months of consistent sending, and even then, there are no guarantees. For job applications and professional correspondence, this risk is unacceptable.

The hybrid solution:

Inbound (full learning experience):
  Internet → Your VPS (Postfix receives on port 25)
           → Spam filtering (SpamAssassin)
           → Storage (Dovecot/IMAP)
           → You (email clients via IMAP)

Outbound (reliable delivery):
  You → Your VPS (Postfix on port 587)
      → SMTP Relay (AWS SES)
      → Recipient (delivered from established IP)

What you still learn:

  • Complete Postfix configuration (receiving, routing, queues, relay setup)
  • Dovecot and IMAP protocols
  • SPF/DKIM/DMARC implementation (still required for your domain)
  • Spam filtering and content analysis
  • Mail server security and hardening
  • TLS configuration
  • Log analysis and troubleshooting

What you outsource:

  • IP reputation management for outbound delivery
  • Deliverability monitoring and blacklist management
  • Dealing with major providers’ spam filters
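Concretely, the domain-side authentication you still own boils down to three DNS TXT records. A sketch with illustrative values (the DKIM selector, key, and DMARC policy are placeholders, not drop-in settings):

```
; SPF: authorize the relay (SES) to send as secblues.com
secblues.com.                  TXT  "v=spf1 include:amazonses.com -all"

; DKIM: public key published under a selector ("mail" here is an arbitrary choice)
mail._domainkey.secblues.com.  TXT  "v=DKIM1; k=rsa; p=<base64-public-key>"

; DMARC: tell receivers what to do when SPF/DKIM checks fail
_dmarc.secblues.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@secblues.com"
```

Later parts of the series cover generating the actual DKIM key pair and choosing a DMARC policy; the point here is that these records belong to your domain regardless of which relay delivers the mail.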

The deployment will run on Ubuntu 24.04 VPS with the following stack:

  • Postfix for SMTP (receiving + relay to SES)
  • Dovecot for IMAP (accessing mail from clients)
  • SpamAssassin for spam filtering
  • OpenDKIM for email signing
  • AWS SES for outbound relay
  • Let’s Encrypt for TLS certificates
  • Roundcube for webmail (optional)

Now let’s examine why each of these components was chosen over alternatives.


Component Selection: Understanding the Tradeoffs

Building a mail server requires choosing between multiple competing implementations for each role. Here’s the decision-making process for each component, with comparisons to alternatives.

Mail Transfer Agent (MTA): The SMTP Server

Role: The MTA is the core of any mail server. It accepts incoming mail on port 25, sends outgoing mail to other servers, manages mail queues, and routes messages to local delivery agents.

Postfix ✓ (Recommended)

Homepage: https://www.postfix.org/

Postfix is the industry-standard MTA, originally written by Wietse Venema in the late 1990s as a secure alternative to Sendmail. It’s designed with security as the primary concern—the entire architecture is built around privilege separation and defense in depth.

Why Postfix:

  • Security-first architecture: Runs as multiple processes with different privilege levels. If one component is compromised, the damage is contained by the privilege separation model.
  • Industry standard: Most widely deployed MTA in production environments. What you learn applies directly to enterprise mail infrastructure.
  • Excellent documentation: Decades of deployment experience means extensive guides, troubleshooting resources, and community knowledge.
  • Modular integration: Easy to add spam filters, virus scanners, content filters, and authentication mechanisms.
  • Performance: Efficiently handles high volume while remaining resource-efficient for small deployments.
  • Active maintenance: Regular security updates and active development community.

Logging and troubleshooting: Postfix has exceptional logging. Every mail transaction is logged with clear, parseable messages that make troubleshooting straightforward. For a learning project, this is invaluable.

Alternatives Comparison

Exim (https://www.exim.org/)

  • Pros: Actively maintained, default on Debian, extremely flexible routing rules, powerful ACL system
  • Cons: More complex configuration syntax, historically more CVEs than Postfix, smaller community than Postfix
  • Best for: Environments with complex routing requirements, existing Debian infrastructure, or when you need Exim-specific features

OpenSMTPD (https://www.opensmtpd.org/)

  • Pros: Clean, simple configuration; OpenBSD project (security-focused); minimal attack surface; modern codebase
  • Cons: Smaller ecosystem, fewer third-party integrations, less common in enterprise, steeper learning curve for advanced features
  • Best for: Minimalist setups, OpenBSD environments, security-focused deployments where simplicity matters more than ecosystem

Verdict: Postfix offers the best balance of security, documentation, industry relevance, and learning value. Exim is a solid alternative if you’re already in the Debian ecosystem. OpenSMTPD is interesting for security purists but has less industry adoption. The skills you develop configuring Postfix transfer directly to most professional environments.


IMAP/POP3 Server: Mail Access

Role: While the MTA handles mail transport, users need a way to access their mailboxes. The IMAP server stores mail and provides access for email clients (phones, desktop apps, webmail).

Dovecot ✓ (Recommended)

Homepage: https://www.dovecot.org/

Dovecot is the de facto standard IMAP/POP3 server, designed specifically for high performance and security. It pairs naturally with Postfix—the two projects evolved together and are commonly deployed as a matched set.

Why Dovecot:

  • Performance: Optimized for efficiency with intelligent indexing and caching. Fast even with large mailboxes.
  • Security: Solid track record, supports all modern authentication mechanisms (including 2FA integration).
  • Standards compliance: Excellent implementation of IMAP protocol, handles edge cases properly.
  • Integration: Standard pairing with Postfix, well-documented configuration patterns.
  • Features: Sieve mail filtering, full-text search, quota management, virtual users.
  • Flexibility: Supports multiple storage formats (maildir, mbox) and authentication backends.

Learning value: Understanding Dovecot teaches you how email storage works, authentication mechanisms, and the IMAP protocol’s capabilities and limitations.
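To make the Sieve bullet above concrete, here is what a server-side filing rule looks like. A minimal sketch — the "Junk" folder name is arbitrary, and it assumes SpamAssassin (covered below) is adding its headers:

```
require ["fileinto"];

# File anything SpamAssassin has flagged into the Junk folder
if header :contains "X-Spam-Flag" "YES" {
    fileinto "Junk";
}
```

Dovecot evaluates rules like this at delivery time, so the sorting happens on the server before any client connects.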

Alternatives Comparison

Cyrus IMAP (https://www.cyrusimap.org/)

  • Pros: Actively maintained, extremely scalable, “murder” clustering for large deployments, robust access controls, battle-tested at scale
  • Cons: Complex setup, steeper learning curve, database-backed storage required, significant overhead for small deployments
  • Best for: Large organizations (universities, enterprises), deployments with thousands of users, when you need clustering and high availability

Verdict: Dovecot is the clear choice for modern mail servers under 1000 users. It’s what you’ll encounter in most professional environments, and the Postfix+Dovecot pairing is well-tested and extensively documented. Cyrus is only worth considering if you’re building infrastructure for a large organization or need specific enterprise features like clustering.


Spam Filtering: Defending Your Inbox

Role: Spam filtering analyzes incoming mail and determines whether it’s legitimate or junk. This is critical for both usability (clean inbox) and security (blocking phishing attempts).

SpamAssassin ✓ (Recommended for Learning)

Homepage: https://spamassassin.apache.org/

SpamAssassin is a mature, rule-based spam filter that scores emails based on hundreds of heuristics. It’s been the standard open-source spam solution since 2001.

Why SpamAssassin:

  • Transparent scoring: Every email gets a detailed score breakdown showing which rules triggered. This is invaluable for learning how spam detection works.
  • Highly configurable: Extensive rule sets you can customize and audit. You understand exactly why mail was classified as spam.
  • Bayesian learning: Can be trained on your mail to improve accuracy over time.
  • Integration: Standard content filter for Postfix, well-documented setup.
  • Educational value: Reading SpamAssassin rules teaches you spam techniques and detection methods.

Example scoring output:

X-Spam-Status: Yes, score=8.2 required=5.0
  tests=BAYES_99,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_BRBL_LASTEXT,
  RCVD_IN_XBL,URIBL_BLACK

You can see exactly why the mail was flagged (HTML-only, blacklisted sender IP, URL in known spam database).
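That header is also machine-readable, which is how downstream tooling (Sieve rules, mail clients) acts on the verdict. A quick shell sketch of the parsing, using the example header above:

```shell
# Pull the score and threshold out of an X-Spam-Status header
header='X-Spam-Status: Yes, score=8.2 required=5.0'

score=$(printf '%s\n' "$header" | sed -n 's/.*score=\([0-9.]*\).*/\1/p')
required=$(printf '%s\n' "$header" | sed -n 's/.*required=\([0-9.]*\).*/\1/p')

# awk handles the floating-point comparison the shell cannot do natively
verdict=$(awk -v s="$score" -v r="$required" 'BEGIN { if (s >= r) print "spam"; else print "ham" }')
echo "$verdict"   # prints "spam"
```

In practice SpamAssassin’s Postfix/Dovecot integration does this for you, but it is worth seeing that the “spam or not” decision is just a threshold comparison on that score.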

rspamd (Alternative for Production)

Homepage: https://rspamd.com/

rspamd is the modern alternative to SpamAssassin—faster, more accurate, and feature-rich.

Why rspamd:

  • Performance: Significantly faster than SpamAssassin, uses Redis for caching.
  • Machine learning: Neural network classification and statistical learning.
  • All-in-one: Includes DKIM signing/verification, greylisting, rate limiting, and more.
  • Modern architecture: Asynchronous, event-driven design.

Tradeoff: rspamd is less transparent about scoring. It works better but teaches you less about spam detection techniques.

Alternatives Comparison

rspamd (https://rspamd.com/)

  • Pros: Modern architecture, significantly faster than SpamAssassin, machine learning and neural networks, integrated features (DKIM, ARC, reputation, greylisting), event-driven asynchronous design, uses Redis for caching
  • Cons: Steeper learning curve, less transparent scoring (ML models are “black boxes”), requires Redis infrastructure, more complex initial setup
  • Best for: Production deployments prioritizing performance and accuracy, high-volume mail servers, when you want an all-in-one solution

MailScanner (https://www.mailscanner.info/)

  • Pros: Mature and battle-tested, integrates multiple scanners (SpamAssassin, ClamAV), flexible policy framework, good for corporate environments
  • Cons: Heavier resource usage, slower processing than modern alternatives, Perl-based (performance limitations), smaller active community
  • Best for: Corporate environments needing policy-based routing, deployments requiring virus scanning integration, when you need a centralized mail gateway

Recommendation for this project: Start with SpamAssassin to learn spam filtering fundamentals. The transparent scoring teaches you how spam detection actually works – you can see exactly which rules triggered and why. After understanding the principles, consider migrating to rspamd for better performance.

Article angle: Document both. Show SpamAssassin setup and detailed rule analysis, then migration to rspamd, explaining what you learned from each approach.


Email Authentication: DKIM Signing

Role: DKIM (DomainKeys Identified Mail) cryptographically signs outbound email, allowing recipients to verify the message came from your domain and wasn’t altered in transit.

OpenDKIM ✓ (Recommended)

Homepage: http://www.opendkim.org/

OpenDKIM is the standard open-source implementation of DKIM signing and verification.

Why OpenDKIM:

  • Postfix integration: Standard milter (mail filter) interface, well-documented.
  • Simple: Does one job well—signs outgoing mail, verifies incoming signatures.
  • Debugging tools: Includes utilities to test and validate signatures.
  • Widely deployed: Industry standard implementation you’ll encounter in production.

How it works: OpenDKIM signs outgoing mail with your private key; you publish the corresponding public key in DNS. Recipients verify the signature using the public key, confirming the mail came from your domain.

Alternatives

The reality is that OpenDKIM is the standard implementation. The only alternative worth mentioning:

  • rspamd built-in DKIM (https://rspamd.com/): If you’re using rspamd for spam filtering, it includes DKIM signing and verification. This eliminates the need for a separate OpenDKIM daemon and reduces moving parts.

Verdict: Use OpenDKIM if running the Postfix+SpamAssassin stack. If you’re using rspamd, use its built-in DKIM functionality to keep your architecture simpler.


SMTP Relay: Reliable Outbound Delivery

Role: Accepts mail from your server and delivers it to recipients using established IP addresses with proven reputation. This solves the deliverability problem that plagues small self-hosted mail servers.

Why Use an SMTP Relay?

The harsh reality of email reputation:

Even with perfect technical configuration (SPF, DKIM, DMARC, PTR records), self-hosted mail servers face two significant obstacles:

  1. VPS IP reputation: Cloud provider IP ranges are constantly cycled by spammers. If another customer in your subnet sends spam, your IP can be blacklisted by association through services like UCEPROTECT-3 or Spamhaus. You have no control over your “neighbors.”
  2. Volume-based trust: Gmail, Outlook, and other major providers use sending volume as a trust signal. A personal server sending 5-10 emails per day doesn’t generate enough data for their machine learning models to establish trust. Low-volume senders often remain in “unknown/neutral” status indefinitely, which frequently defaults to spam folder placement.

Building IP reputation can take months of consistent sending, with no guarantee of success. For professional correspondence and job applications, this uncertainty is unacceptable.

How SMTP relay solves this: Relay services maintain large pools of IP addresses with established reputations. They handle deliverability monitoring, blacklist management, and relationships with major email providers. Your mail is delivered from their trusted IPs while still being signed by your domain.
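In Postfix terms, the relay is just a handful of main.cf settings. A minimal sketch — the hostname is a placeholder for whichever provider you choose, and /etc/postfix/sasl_passwd is a credentials file you create yourself:

```
# /etc/postfix/main.cf: hand all outbound mail to the relay provider
# (hostname below is a placeholder for your provider's SMTP endpoint)
relayhost = [smtp.relay.example]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

The sasl_passwd file maps the relay host to your username and password (compile it with postmap after editing). Because the same five settings work for any provider, switching relays later is a small config change rather than a re-architecture.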

Amazon SES ✓ (Recommended)

Homepage: https://aws.amazon.com/ses/

Amazon Simple Email Service is AWS’s managed email platform, designed for both transactional and marketing email at any scale.

Why Amazon SES:

  • Best economics at any scale: $0.10 per 1,000 emails after free tier
    • 10,000 emails/month = $1.00
    • 50,000 emails/month = $5.00
    • No minimum fee, pure pay-per-use
  • AWS ecosystem integration: Learn AWS services (IAM, CloudWatch, SNS)
  • Experience value: “Configured AWS SES integration” demonstrates cloud platform experience
  • Excellent deliverability: Amazon’s reputation and infrastructure
  • Detailed analytics: Bounce tracking, complaint monitoring, delivery metrics
  • Scalability: Grows from personal use to enterprise without migration

Setup complexity: Moderate

  • AWS account verification (24-48 hour wait for production access)
  • IAM user creation and access key management
  • SMTP credentials generation
  • Learning AWS console navigation

Cost example for personal use:

  • 0-3,000 emails/month (job apps + blog): ~$0.30/month
  • Essentially free compared to VPS costs
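The arithmetic is simple enough to sanity-check. A sketch of the pay-per-use pricing quoted above (the free tier is ignored for simplicity):

```python
def ses_monthly_cost(emails_per_month: int, rate_per_thousand: float = 0.10) -> float:
    """SES pay-per-use pricing: $0.10 per 1,000 emails, no minimum fee."""
    return round(emails_per_month / 1000 * rate_per_thousand, 2)

print(ses_monthly_cost(3_000))   # 0.3  (the personal-use estimate above)
print(ses_monthly_cost(50_000))  # 5.0
```

Even at blog-newsletter volumes, the relay cost stays a rounding error next to the droplet itself.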

Alternatives Comparison

SendGrid (https://sendgrid.com/)

  • Free tier: 100 emails/day (3,000/month) forever
  • Paid pricing: $19.95/month for 40,000 emails
  • Pros: Quick setup, good documentation, Twilio-backed
  • Cons: Aggressive anti-abuse automation locks personal accounts without warning; more expensive at scale than SES; vendor lock-in
  • Best for: Only if you’re on a paid plan. Free tier is unreliable for personal mail servers

Mailgun (https://www.mailgun.com/)

  • Free tier: 100 emails/day (3,000/month) for 3 months, then requires paid plan
  • Paid pricing: $35/month for 50,000 emails
  • Pros: Email validation API, good for developers, European region option
  • Cons: Most expensive option at scale, time-limited free tier
  • Best for: Need email validation features, European data residency requirements

SMTP2GO (https://www.smtp2go.com/)

  • Free tier: 1,000 emails/month forever
  • Paid pricing: $10/month for 10,000 emails
  • Pros: Simple pricing, good support, nice dashboard, free tier doesn’t aggressively lock accounts
  • Cons: Smaller company (less proven at scale), smaller community
  • Best for: Personal mail servers on free tier, when you need a relay that won’t randomly lock you out

Postmark (https://postmarkapp.com/)

  • Free tier: None
  • Paid pricing: $15/month for 10,000 emails
  • Pros: Premium service, best deliverability reputation, transactional email focus, excellent support
  • Cons: No free tier, more expensive than alternatives
  • Best for: When deliverability is critical, willing to pay premium for support, transactional email focus

Recommendation for this project: Amazon SES (long-term target)

  • Best long-term economics (matters if blog grows)
  • AWS experience is valuable for cybersecurity roles
  • More interesting article content (AWS integration)
  • Scales without migration or pricing cliffs

Immediate relay while waiting for SES approval: SMTP2GO

  • 15-minute setup, 1,000 emails/month free tier
  • Free tier doesn’t lock personal accounts (unlike SendGrid — see Part 3 for the full story)
  • Same Postfix relay configuration as any other provider — can migrate to SES later with a two-line config change

Setup time comparison:

  • SendGrid: 15 minutes (API key, configure Postfix, done)
  • Mailgun: 20 minutes (similar to SendGrid)
  • Amazon SES: 1-2 hours initial setup + 24-48 hour verification wait
  • SMTP2GO: 15 minutes
  • Postmark: 20 minutes (but costs money immediately)

Why not just use SES for everything (inbound + outbound)?

You could configure SES to receive mail, but then you’d:

  • Lose all the learning about Postfix, Dovecot, spam filtering
  • Have to build custom tooling for mail access (no standard IMAP)
  • Miss the entire point of this project (understanding mail infrastructure)

SES for outbound relay gives you reliable delivery while preserving the learning experience.


TLS Certificates: Encrypting Connections

Role: TLS certificates encrypt SMTP and IMAP connections, preventing eavesdropping on email content and credentials.

Let’s Encrypt + Certbot ✓ (Recommended)

Homepage: https://letsencrypt.org/ and https://certbot.eff.org/

Let’s Encrypt revolutionized TLS by providing free, automated certificates trusted by all major clients.

Why Let’s Encrypt:

  • Free: No cost for certificates that would cost $50-200/year from commercial CAs.
  • Automated renewal: Certbot handles the entire lifecycle—request, install, renew.
  • Trusted: Certificates are trusted by all email clients, browsers, and operating systems.
  • Learning value: Understand the ACME protocol, certificate management, and PKI concepts.
  • Industry standard: Same technology securing most of the web.

Certbot automation:

# Initial certificate
certbot certonly --standalone -d mail.secblues.com

# Automatic renewal (runs twice daily via systemd timer)
systemctl status certbot.timer
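One wrinkle worth noting for mail servers: renewal alone is not enough, because Postfix and Dovecot keep the old certificate loaded until they reload. Certbot runs any executable placed in its deploy-hook directory after each successful renewal, so a two-line script closes the loop (a sketch; remember to chmod +x it):

```
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/reload-mail.sh
# Certbot runs this after every successful renewal
systemctl reload postfix dovecot
```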

Alternatives Comparison

ZeroSSL (https://zerossl.com/)

  • Pros: Free ACME certificates like Let’s Encrypt, 90-day validity, alternative CA for redundancy, includes certificate management dashboard
  • Cons: Slightly less automated than Let’s Encrypt with Certbot, smaller community, requires account creation
  • Best for: When you want a Let’s Encrypt alternative, a multi-CA redundancy strategy, or prefer their management interface

Buypass (https://www.buypass.com/ssl/products/acme)

  • Pros: Free ACME certificates, 180-day validity (longer than LE), Norwegian CA with good reputation, no rate limits
  • Cons: Less common, smaller community, fewer integration guides, Certbot support added later
  • Best for: European deployments, when you need longer validity periods, alternative to Let’s Encrypt

Commercial CAs (DigiCert, Sectigo, GlobalSign)

  • Pros: Extended validation available, paid support, insurance against mis-issuance, wildcard options
  • Cons: Cost ($50-300/year), manual renewal process, no practical advantage for mail servers
  • Best for: Organizations with compliance requirements, when you need EV certificates, or have certificate insurance requirements

Verdict: Let’s Encrypt is the standard choice. Free, automated, and trusted by all email clients. ZeroSSL and Buypass are viable alternatives if you want CA diversity or encounter Let’s Encrypt rate limits, but offer no compelling advantage for a typical mail server deployment.


Webmail Interface (Optional)

Role: Provides browser-based access to email without configuring an IMAP client. Useful for accessing mail from untrusted computers or as a backup access method.

Note: Webmail is optional. Native email clients (iOS Mail, Thunderbird, K-9 Mail) provide better user experience and offline access.

Roundcube ✓ (Recommended)

Homepage: https://roundcube.net/

Roundcube is a relatively modern, feature-complete webmail client with a clean interface and active development.

Why Roundcube:

  • Modern UI: Clean, responsive design similar to Gmail.
  • Feature-complete: Rich text editing, contacts, calendar (via plugins), threaded conversations.
  • Active development: Regular updates, security patches, new features.
  • Plugin ecosystem: Extensible with hundreds of community plugins.
  • Lightweight: PHP-based, runs efficiently on the same server.

Alternatives Comparison

Snappymail (https://snappymail.eu/)

  • Pros: Modern, beautiful UI; active development (Rainloop fork); mobile-responsive; fast performance; regular security updates
  • Cons: Smaller community than Roundcube, newer project (less battle-tested), fewer plugins
  • Best for: When you prioritize modern aesthetics, mobile experience, and performance over ecosystem maturity

SOGo (https://www.sogo.nu/)

  • Pros: Full groupware solution (email + calendar + contacts), Microsoft Exchange ActiveSync compatibility, enterprise features, good mobile support
  • Cons: Requires database backend, GNUstep/SOPE dependencies, complex setup, resource-heavy, overkill for personal use
  • Best for: Organizations replacing Exchange, when you need calendar/contacts integration, corporate groupware requirements

Verdict: Roundcube for reliability, extensive plugin ecosystem, and community support. Snappymail if you prioritize modern UI and performance. SOGo only if you need full groupware features.

Honest assessment: For day-to-day use, native email clients (iOS Mail, Thunderbird, K-9 Mail) provide superior experience and offline access. Webmail is best for:

  • Emergency access from untrusted computers
  • Administrative tasks (checking spam folders, bulk operations)
  • Demonstrating your setup to others
  • Quick checks when you don’t have your devices

Web Server (for Webmail)

Role: Serves the webmail interface over HTTPS.

Nginx ✓ (Recommended)

Homepage: https://nginx.org/

Since you’re already running Nginx for your WordPress blog, using it for webmail eliminates an additional dependency.

Why Nginx:

  • Already deployed: No additional service to manage.
  • Performance: Excellent for serving PHP applications via PHP-FPM.
  • Security: Strong track record, regular updates, good defaults.
  • Reverse proxy: Can easily proxy to additional services later.
  • Documentation: Extensive guides for PHP application deployment.
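For reference, serving Roundcube from the same Nginx instance is one short server block. A sketch only — the hostname, webroot, PHP-FPM socket path, and certificate locations are assumptions that depend on your install:

```
server {
    listen 443 ssl;
    server_name webmail.secblues.com;

    ssl_certificate     /etc/letsencrypt/live/webmail.secblues.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/webmail.secblues.com/privkey.pem;

    root /var/www/roundcube;
    index index.php;

    # Hand PHP requests to PHP-FPM over its local socket
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    }
}
```

Part 4 of the series walks through the full deployment; this is just to show there is no exotic web-server work involved.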

Alternatives Comparison

Apache (https://httpd.apache.org/)

  • Pros: More common for PHP historically, .htaccess support, extensive module ecosystem, well-documented
  • Cons: Heavier resource usage than Nginx, more complex configuration for simple needs, slower for static content
  • Best for: When you need .htaccess support, already have Apache expertise, or require specific Apache modules

Caddy (https://caddyserver.com/)

  • Pros: Automatic HTTPS (no manual cert config), extremely simple configuration, modern design, built-in HTTP/3
  • Cons: Less widespread adoption (smaller ecosystem), fewer guides specific to mail server integration, newer platform
  • Best for: New projects prioritizing simplicity and modern protocols, when learning modern web server architecture

Verdict: Nginx since you’re already using it for your WordPress blog. Apache is equally valid if you prefer its ecosystem or need .htaccess support. Caddy is interesting for greenfield projects but less common in production mail server deployments.


Operating System Choice

Ubuntu 24.04 LTS is the foundation for this deployment. Here’s why:

Why Ubuntu:

  • Long-term support: 5 years of security updates (until 2029).
  • Up-to-date packages: Recent versions of Postfix, Dovecot, and other components.
  • Documentation: Extensive community guides for mail server deployment.
  • Stability: Proven in production environments, good testing before release.
  • Package management: APT ecosystem is mature and well-maintained.

Alternatives:

  • Debian Stable: More conservative (older packages), longer support cycle, excellent stability.
  • Rocky Linux / AlmaLinux: RHEL-compatible, good for enterprise environments using RPM packages.
  • FreeBSD: Excellent security, performance, and documentation, but smaller community for mail server guides.

Verdict: Ubuntu LTS offers the best balance of current packages and long-term support for a learning project.


The Complete Stack

Bringing it all together, here’s the recommended component stack with justifications:

Hybrid Architecture Stack:

Operating System: Ubuntu 24.04 LTS
  └─ Long-term support (5 years), current packages, extensive documentation

MTA: Postfix
  └─ Industry standard, security-first design, excellent logging
  └─ Configured for: receiving mail (port 25) + relay to SES (port 587)

IMAP: Dovecot
  └─ Standard pairing with Postfix, high performance, feature-complete

Spam Filter: SpamAssassin
  └─ Transparent scoring for learning, migrate to rspamd later for production

DKIM: OpenDKIM
  └─ Standard implementation, simple configuration

SMTP Relay: Amazon SES
  └─ Reliable outbound delivery, AWS experience, best long-term economics
  └─ Solves IP reputation and deliverability challenges

TLS: Let's Encrypt + Certbot
  └─ Free, automated, trusted certificates

Web Server: Nginx
  └─ Already deployed for WordPress, efficient, good PHP support

Webmail: Roundcube (optional)
  └─ Modern interface, active development, good feature set

DNS: Cloudflare
  └─ Free DNS, good UI for mail-specific records (MX, SPF, DKIM)

Hosting: DigitalOcean Droplet
  └─ Predictable pricing, simple management, good for learning

Monthly cost breakdown:

  • DigitalOcean droplet (mail server): $6/month
  • DigitalOcean droplet (WordPress blog): $6/month (already paying)
  • Amazon SES (outbound relay): ~$0.10-0.30/month (negligible)
  • Total: ~$12/month (~$6/month incremental over existing blog)

Experience Gained:

  • Complete mail server administration (Postfix, Dovecot, spam filtering)
  • Email authentication protocols (SPF, DKIM, DMARC)
  • SMTP relay integration (hybrid cloud architecture)
  • AWS services (SES, potentially S3 for backups)
  • TLS configuration and certificate management
  • Security hardening and monitoring
  • Multi-cloud architecture patterns

What you outsource:

  • IP reputation management for outbound delivery
  • Deliverability monitoring at scale
  • Dealing with major email provider spam filters

Alternative Stack: The Modern Performance Approach

For comparison, here’s what a performance-optimized stack looks like:

Modern Performance Stack (still hybrid):

MTA: Postfix (unchanged - still the best)
IMAP: Dovecot (unchanged - still the best)
Spam/Auth/Signing: rspamd (replaces SpamAssassin + OpenDKIM)
  └─ Faster, ML-based, all-in-one solution (DKIM built-in)
Caching: Redis
  └─ Required for rspamd, improves performance significantly
SMTP Relay: Amazon SES (unchanged - still needed for deliverability)
TLS: Let's Encrypt (unchanged)
Web Server: Nginx (unchanged)
Webmail: Snappymail
  └─ Modern UI, lightweight, faster than Roundcube

Tradeoff: Better performance and modern architecture, but less educational transparency. You won’t learn as much about spam filtering rules or DKIM internals because rspamd abstracts them behind ML models and integrated functionality.

Recommendation: Start with the learning stack (SpamAssassin/OpenDKIM), then migrate to the performance stack (rspamd) once you understand the fundamentals. Working through both teaches you the evolution from traditional to modern approaches firsthand.

Note on IP reputation: Even the performance stack uses SMTP relay (SES) for outbound delivery. The IP reputation problem affects all small self-hosted servers regardless of which spam filter you use. The relay is not a “learning wheels” component—it’s a pragmatic solution to a real infrastructure challenge.


What to Explicitly Avoid

Turnkey solutions (Mail-in-a-Box, MailCow, iRedMail):

These all-in-one Docker stacks or installer scripts set up a complete mail server in minutes. While they work well for “just get me email,” they defeat the learning purpose:

  • Mail-in-a-Box (https://mailinabox.email/): Excellent turnkey solution, but you learn almost nothing about how email works.
  • MailCow (https://mailcow.email/): Modern Docker-based stack, well-maintained, but opaque configuration.
  • iRedMail (https://www.iredmail.org/): Comprehensive installer, supports multiple Linux distributions, but automated setup teaches you little.

When to use them: Production deployments where time is more valuable than learning. Or as reference implementations to see how components should integrate.

When to avoid them: Learning projects like this one. Build it manually to understand each piece.


Further Reading and Resources

Official Documentation

  • Postfix Documentation: http://www.postfix.org/documentation.html
    • The authoritative reference, well-written and comprehensive
  • Dovecot Documentation: https://doc.dovecot.org/
    • Excellent configuration examples and troubleshooting guides
  • SpamAssassin Wiki: https://wiki.apache.org/spamassassin/
    • Detailed rule explanations and training guides

Email Protocol Standards (RFCs)

  • RFC 5321 – SMTP: https://www.rfc-editor.org/rfc/rfc5321
    • The SMTP protocol specification
  • RFC 3501 – IMAP: https://www.rfc-editor.org/rfc/rfc3501
    • IMAP protocol details
  • RFC 6376 – DKIM: https://www.rfc-editor.org/rfc/rfc6376
    • DKIM signing and verification
  • RFC 7208 – SPF: https://www.rfc-editor.org/rfc/rfc7208
    • Sender Policy Framework
  • RFC 7489 – DMARC: https://www.rfc-editor.org/rfc/rfc7489
    • Domain-based Message Authentication, Reporting & Conformance

Security Hardening Guides

  • NSA Email Server Security Guide: https://media.defense.gov/2023/Sep/12/2003299662/-1/-1/0/CTR_NSA_MAIL_SERVER_HARDENING.PDF
    • Comprehensive security recommendations from the NSA
  • CIS Benchmark for Mail Servers: https://www.cisecurity.org/benchmark/mail_servers
    • Industry-standard security baselines

Deliverability and Reputation

  • Google Postmaster Tools: https://postmaster.google.com/
    • Monitor your domain’s reputation with Gmail
  • Microsoft SNDS: https://sendersupport.olc.protection.outlook.com/snds/
    • Check your IP reputation with Outlook/Hotmail
  • MXToolbox: https://mxtoolbox.com/
    • Test DNS records, check blacklists, verify configuration

SMTP Relay Services

  • Amazon SES Documentation: https://docs.aws.amazon.com/ses/
    • Official AWS SES documentation and integration guides
  • SendGrid Documentation: https://docs.sendgrid.com/
    • SendGrid API and SMTP relay documentation
  • Mailgun Documentation: https://documentation.mailgun.com/
    • Mailgun API and integration guides
  • Email Service Provider Comparison: https://postmarkapp.com/compare
    • Postmark’s comparison of major email sending services

Community Resources

  • r/selfhosted: https://www.reddit.com/r/selfhosted/
    • Community discussions on self-hosting (including mail servers)
  • Postfix Users Mailing List: http://www.postfix.org/lists.html
    • Active community for Postfix questions
  • Server Fault: https://serverfault.com/questions/tagged/postfix
    • Q&A for mail server administration

Books

  • “The Book of Postfix” by Ralf Hildebrandt and Patrick Koetter
    • Comprehensive guide to Postfix administration
  • “Postfix: The Definitive Guide” by Kyle D. Dent
    • In-depth coverage of Postfix configuration and troubleshooting

Testing and Validation Tools

  • mail-tester.com: https://www.mail-tester.com/
    • Test your outbound mail configuration and spam score
  • MX Toolbox DMARC Analyzer: https://mxtoolbox.com/dmarc.aspx
    • Validate DMARC records
  • DKIM Validator: https://dkimvalidator.com/
    • Test DKIM signing

Next Steps

Now that we’ve justified the component choices, the next article in this series will cover the actual deployment:

  1. Initial server setup and hardening (SSH, firewall, updates)
  2. Postfix installation and configuration (SMTP receiving + relay configuration)
  3. AWS SES setup (account verification, SMTP credentials, relay integration)
  4. Dovecot setup (IMAP access)
  5. DNS configuration (MX, SPF, DKIM, DMARC records)
  6. Spam filtering (SpamAssassin integration)
  7. TLS encryption (Let’s Encrypt certificates)
  8. Testing and validation (deliverability checks, relay verification)
  9. Monitoring and maintenance (log analysis, queue management, performance monitoring)

The deployment will follow security best practices from the start—there’s no point in learning how to build an insecure mail server only to harden it later. The hybrid architecture (self-hosted receiving with SES relay for sending) will be configured from the beginning, ensuring reliable delivery for professional correspondence while maintaining the full learning experience.


Conclusion

Email infrastructure is more complex than it appears. Each component serves a specific purpose, and the interactions between them create the complete system. Understanding why we choose Postfix over alternatives, SpamAssassin over rspamd initially, or a hybrid architecture over pure self-hosted is more valuable than just following installation commands.

The hybrid approach—self-hosting for learning while using commercial relay for deliverability—reflects real-world engineering tradeoffs. It acknowledges that:

  • IP reputation is a genuine infrastructure challenge, not a configuration problem
  • Professional deliverability matters when using email for job applications
  • Learning value comes from running the full stack, even if outbound goes through a relay
  • Multi-cloud architecture is increasingly common in production environments

These decisions reflect the tension between idealism and pragmatism. A purely self-hosted mail server is educational but risks poor deliverability. A fully managed service is reliable but teaches nothing. The hybrid approach captures the learning value while ensuring the infrastructure actually works for its intended purpose.

By documenting the reasoning behind each choice, you build transferable knowledge that applies beyond this specific deployment. You’ll understand when to use traditional MTAs vs managed services, when spam filtering transparency matters vs performance, and when to self-host vs outsource. These are the engineering judgment skills that separate following tutorials from building production systems.

The next article will put these components into action, building a secure, functional mail server with reliable delivery from day one.


This is Part 1 of “The Mailroom,” an email server build series.

Published: 12/25

Hardening the Stack: A Comprehensive Guide to WordPress Security

By Collin

A default WordPress installation is a target. To move from “vulnerable” to “hardened,” we must secure the entire stack: the Operating System, the Web Server, the Database, and the Application itself. This guide follows the official WordPress hardening standards to build a fortress, not just a website.

Environment: DigitalOcean Droplet running Ubuntu 24.04, Nginx web server, MariaDB database, managed through Cloudflare (DNS + proxy).


1. The Foundation: Server-Level Hardening (OS)

Before touching WordPress, you must secure the host. If the server is compromised, the application doesn’t matter.

SSH Key-Based Authentication

Standard password logins are susceptible to brute-force attacks. We replace “what you know” (password) with “what you have” (private key).

Action: Disable password-based authentication.

Implementation:

sudo nano /etc/ssh/sshd_config

Find and modify these lines:

PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no

Restart SSH:

sudo systemctl restart ssh

Verification: Try logging in from a different machine without a key – it should fail.
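A scripted check is less error-prone than eyeballing the file. Below is a minimal sketch that greps a config for the three directives; the inline CONFIG sample is illustrative, and on a real server you would read /etc/ssh/sshd_config instead (or use `sudo sshd -T`, which prints effective settings with lowercase keys, so adjust the patterns accordingly):

```shell
# Check that all three hardening directives are present.
# CONFIG is an inline sample so this runs anywhere; substitute
# CONFIG=$(cat /etc/ssh/sshd_config) on the server.
CONFIG='PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no'

STATUS=ok
for directive in 'PasswordAuthentication no' 'PubkeyAuthentication yes' 'PermitRootLogin no'; do
    if printf '%s\n' "$CONFIG" | grep -qx "$directive"; then
        echo "OK: $directive"
    else
        echo "MISSING: $directive"
        STATUS=fail
    fi
done
echo "$STATUS"
```

Run it before closing your current SSH session; if anything prints MISSING, fix the config before you log out.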

Firewall Configuration (UFW)

Ubuntu’s Uncomplicated Firewall (UFW) provides a simple interface for iptables. The principle: deny everything except what you explicitly need.

Action: Configure UFW to allow only HTTP, HTTPS, and SSH.

Implementation:

# Check status first
sudo ufw status

# Default policies: deny incoming, allow outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH (do this FIRST to avoid locking yourself out)
sudo ufw allow 22/tcp

# Allow HTTP and HTTPS
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Enable the firewall
sudo ufw enable

# Verify rules
sudo ufw status verbose

Expected output:

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere

Critical: If you’re using a non-standard SSH port, adjust accordingly. If you use Cloudflare’s proxy (orange cloud), you can optionally restrict port 80/443 to Cloudflare’s IP ranges for additional protection.

Automatic Security Updates

The WordPress Codex emphasizes keeping server software current. We use unattended-upgrades to ensure the OS patches itself.

Action: Enable automated security patching.

Implementation:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

Select “Yes” when prompted. This configures automatic installation of security updates only.

Verification:

sudo systemctl status unattended-upgrades

Configuration file location: /etc/apt/apt.conf.d/50unattended-upgrades

You can optionally configure email notifications for update failures. This is crucial for a production site.
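To wire up those notifications, two directives in /etc/apt/apt.conf.d/50unattended-upgrades control mail delivery. A sketch (the address is a placeholder, and the box needs a working way to send mail, which the mail stack in this series provides):

```
// Where to send unattended-upgrades reports (requires a local MTA or relay)
Unattended-Upgrade::Mail "admin@[your-domain].com";

// "only-on-error" keeps the noise down; "on-change" reports every run that installs packages
Unattended-Upgrade::MailReport "only-on-error";
```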


2. The Perimeter: Nginx & Network Security

The web server is the gatekeeper. By configuring Nginx properly, we stop attacks before they reach the WordPress PHP engine.

Security Headers

We use HTTP headers to instruct the visitor’s browser to block common exploitation techniques.

Action: Add protection against Clickjacking, MIME-sniffing, and referrer leakage.

Location: Add to your Nginx server block (typically in /etc/nginx/sites-available/[your_domain])

Configuration:

server {
    listen 443 ssl http2;
    server_name [your_domain].com www.[your_domain].com;
    
    # Security Headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header X-XSS-Protection "1; mode=block" always;
    
    # Additional recommended headers
    add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
    
    # Rest of your configuration...
}

Header explanations:

  • X-Frame-Options: Prevents your site from being embedded in iframes (clickjacking protection)
  • X-Content-Type-Options: Prevents browsers from MIME-sniffing (changing file types)
  • Referrer-Policy: Controls how much referrer information is sent
  • X-XSS-Protection: Enables browser’s built-in XSS filter (legacy, but doesn’t hurt)
  • Permissions-Policy: Disables unnecessary browser features

Test after applying:

sudo nginx -t
sudo systemctl reload nginx

Disable Directory Indexing

Prevent attackers from browsing your directory structure.

Action: Ensure autoindex is disabled globally.

Configuration: In your main nginx.conf (/etc/nginx/nginx.conf) or site config:

autoindex off;

Test: Navigate to https://[your_domain].com/wp-content/uploads/ – you should get a 403 Forbidden, not a file listing.

Blocking Access to Sensitive Files

Nginx should never serve files like .htaccess, wp-config.php, or other sensitive items.

Configuration: Add to your server block:

# Block access to hidden files
location ~ /\. {
    deny all;
}

# Block access to wp-config.php
location = /wp-config.php {
    deny all;
}

# Block access to readme/license files
location ~* ^/readme\.(txt|html)$ {
    deny all;
}

location ~* ^/license\.txt$ {
    deny all;
}

Rate Limiting (Bonus: Advanced)

Protect against brute-force login attempts at the web server level.

Configuration: In your http block in nginx.conf:

# Define rate limit zone (outside server block)
limit_req_zone $binary_remote_addr zone=wp_login:10m rate=3r/m;

# In your server block, apply to wp-login.php
location = /wp-login.php {
    limit_req zone=wp_login burst=5;
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
}

This limits login attempts to 3 per minute per IP. Up to 5 additional requests are queued and served with a delay; anything beyond that gets a 503. Add nodelay after burst=5 if you would rather reject excess requests immediately instead of queuing them.

Note: Adjust the PHP-FPM socket path to match your PHP version (e.g., php8.1-fpm.sock, php8.2-fpm.sock, php8.3-fpm.sock).


3. The Core: Application-Level Hardening

Now we move into wp-config.php and the WordPress filesystem, following the principle of Least Privilege.

Custom Database Prefix

Automated SQL Injection (SQLi) attacks assume your tables start with wp_. Changing this breaks the vast majority of automated scripts.

Technical Pitfall: Failing to update internal meta references will lock you out of the admin dashboard. Always back up before this operation, and complete every step below before reloading the site.

Action: Change the prefix and rename all tables.

Step 1: Update wp-config.php:

$table_prefix = 'AB_';

Step 2: Rename all database tables:

# Connect to MySQL from the shell; the statements below run at the MySQL prompt
mysql -u [wordpress_user] -p [wordpress_db]

-- Rename core tables
RENAME TABLE wp_users TO AB_users;
RENAME TABLE wp_posts TO AB_posts;
RENAME TABLE wp_postmeta TO AB_postmeta;
RENAME TABLE wp_usermeta TO AB_usermeta;
RENAME TABLE wp_comments TO AB_comments;
RENAME TABLE wp_commentmeta TO AB_commentmeta;
RENAME TABLE wp_terms TO AB_terms;
RENAME TABLE wp_term_taxonomy TO AB_term_taxonomy;
RENAME TABLE wp_term_relationships TO AB_term_relationships;
RENAME TABLE wp_termmeta TO AB_termmeta;
RENAME TABLE wp_options TO AB_options;
RENAME TABLE wp_links TO AB_links;

Step 3 (Critical): Update internal WordPress references:

-- Update options table
UPDATE AB_options 
SET option_name = 'AB_user_roles' 
WHERE option_name = 'wp_user_roles';

-- Update user meta keys
UPDATE AB_usermeta 
SET meta_key = REPLACE(meta_key, 'wp_', 'AB_');

Step 4: If using plugins (e.g., Wordfence, LiteSpeed Cache), you must also rename their tables:

-- Example for Wordfence
RENAME TABLE wp_wfconfig TO AB_wfconfig;
RENAME TABLE wp_wfblocks7 TO AB_wfblocks7;
-- Check SHOW TABLES; for all plugin tables and rename accordingly
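Hand-typing RENAME statements for dozens of tables invites typos. The generation can be scripted; TABLES below is a hypothetical sample, and on the server you would populate it from a real table listing (`mysql -N -e 'SHOW TABLES'` prints bare table names):

```shell
# Generate a RENAME statement for every wp_-prefixed table.
TABLES='wp_users
wp_posts
wp_wfconfig'
NEW_PREFIX='AB_'

SQL=$(printf '%s\n' "$TABLES" | while read -r t; do
    printf 'RENAME TABLE %s TO %s%s;\n' "$t" "$NEW_PREFIX" "${t#wp_}"
done)
printf '%s\n' "$SQL"
```

Review the generated statements before pasting them into the MySQL prompt; generate-then-review is still far safer than typing dozens of RENAMEs by hand.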

High-Entropy Security Salts

Salts encrypt the information stored in user cookies. Default or weak salts are a security risk.

Action: Regenerate secret keys using WordPress’s official API.

Implementation:

# Visit this URL to generate fresh salts (in a browser or via curl)
# https://api.wordpress.org/secret-key/1.1/salt/

# Edit wp-config.php
sudo nano /var/www/[your_wordpress_directory]/wp-config.php

Replace this entire block:

define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');
define('AUTH_SALT',        'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT',   'put your unique phrase here');
define('NONCE_SALT',       'put your unique phrase here');

With fresh values from the API. Each key should be a 64-character random string.

Impact: Regenerating salts will log out all users. Schedule this during maintenance.
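If you would rather not paste values from a browser (or the server is configured non-interactively), equivalent salts can be generated locally. A sketch using openssl; base64 output contains no quotes or backslashes, so the values are safe inside PHP single quotes:

```shell
# Generate eight 64-character salt definitions locally.
SALTS=$(for key in AUTH_KEY SECURE_AUTH_KEY LOGGED_IN_KEY NONCE_KEY \
                   AUTH_SALT SECURE_AUTH_SALT LOGGED_IN_SALT NONCE_SALT; do
    printf "define('%s', '%s');\n" "$key" "$(openssl rand -base64 64 | tr -d '\n' | cut -c1-64)"
done)
printf '%s\n' "$SALTS"
```

Paste the eight lines over the placeholder block in wp-config.php, the same as with the API output.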

Disable File Editing from Dashboard

The WordPress dashboard allows administrators to edit PHP files (plugins, themes) by default. This is the first tool an attacker uses if they compromise an admin account.

Action: Disable the built-in code editor.

Implementation: Add to wp-config.php (above the /* That's all, stop editing! */ line):

// Disable file editing
define('DISALLOW_FILE_EDIT', true);

Verification: Log into WordPress admin → Appearance. The “Theme File Editor” option should be gone.

Trade-off: You’ll need SFTP/SSH access to edit theme files. This is actually proper practice anyway.

File Permissions

Incorrect file permissions are one of the most common vulnerabilities. The rule: only the files that absolutely need to be writable should be writable.

Action: Set restrictive permissions on WordPress files.

Implementation:

# Navigate to WordPress root
cd /var/www/[your_wordpress_directory]

# Set ownership (replace 'www-data' with your web server user if different)
sudo chown -R www-data:www-data .

# Set directory permissions to 755 (owner: rwx, group/others: rx)
# (sudo is needed because the files are now owned by www-data)
sudo find . -type d -exec chmod 755 {} \;

# Set file permissions to 644 (owner: rw, group/others: r)
sudo find . -type f -exec chmod 644 {} \;

# Special case: wp-config.php should be 440 or 400 (read-only, not world-readable)
sudo chmod 440 wp-config.php

# Verify critical files
ls -la wp-config.php
# Should show: -r--r----- (440)

Permission breakdown:

  • 755 for directories: Owner can read/write/execute, others can read/execute
  • 644 for files: Owner can read/write, others can only read
  • 440 for wp-config.php: Only owner and group can read, no one can write

Writeable directories (WordPress needs these for uploads, cache, etc.):

sudo chmod 755 wp-content/uploads
sudo chmod 755 wp-content/cache  # if you use caching

If using a plugin that requires write access to specific directories, only grant it there. Never make the entire wp-content writable.

Additional wp-config.php Hardening

Beyond salts and file editing, there are several other hardening constants.

Add these to wp-config.php:

// --- Hardening Constants ---

// Disable plugin/theme installation from dashboard (set to true after initial setup)
define('DISALLOW_FILE_MODS', false);

// Force SSL for admin area (if using HTTPS, which you should be)
define('FORCE_SSL_ADMIN', true);

// Limit post revisions to save database space
define('WP_POST_REVISIONS', 3);

// Set auto-save interval (default is 60 seconds, increase to reduce DB writes)
define('AUTOSAVE_INTERVAL', 300); // 5 minutes

For Nginx users: To disable PHP execution in uploads, add to your Nginx config:

location ~* /wp-content/uploads/.*\.php$ {
    deny all;
}

4. The Data: Database Hardening

Database integrity is paramount. We must ensure proper privilege separation and backup procedures.

Restrict Database User Privileges

For normal WordPress operations (posting, uploading media, installing plugins), the MySQL user only needs: SELECT, INSERT, UPDATE, DELETE.

Dangerous privileges like DROP, ALTER, GRANT should be revoked.

Action: Create a separate admin user for schema changes, use a restricted user for WordPress.

Implementation:

-- Connect to MySQL as root
sudo mysql -u root

-- Check current privileges
SHOW GRANTS FOR '[wordpress_user]'@'localhost';

-- If the user has excessive privileges, revoke them
REVOKE ALL PRIVILEGES ON [wordpress_db].* FROM '[wordpress_user]'@'localhost';

-- Grant only necessary privileges
GRANT SELECT, INSERT, UPDATE, DELETE ON [wordpress_db].* TO '[wordpress_user]'@'localhost';

-- Apply changes
FLUSH PRIVILEGES;

-- Exit
EXIT;

Critical caveat: Major WordPress updates and some plugins require ALTER and CREATE to modify the database schema. Before running updates:

  1. Temporarily grant privileges:
GRANT ALL PRIVILEGES ON [wordpress_db].* TO '[wordpress_user]'@'localhost';
FLUSH PRIVILEGES;
  2. Run the update
  3. Drop back to the restricted set (revoking everything and re-granting the runtime privileges avoids leaving strays behind):
REVOKE ALL PRIVILEGES ON [wordpress_db].* FROM '[wordpress_user]'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE ON [wordpress_db].* TO '[wordpress_user]'@'localhost';
FLUSH PRIVILEGES;

Better approach for advanced users: Use two database users – one restricted for runtime, one with full access for updates. Switch between them in wp-config.php as needed.
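One way to sketch that switch in wp-config.php (the user names and flag are illustrative, not a WordPress convention):

```php
// Flip to true only while running core or plugin updates
$wp_db_maintenance = false;

if ($wp_db_maintenance) {
    define('DB_USER', 'wp_admin');        // full privileges for schema changes
    define('DB_PASSWORD', 'admin-password-here');
} else {
    define('DB_USER', 'wp_runtime');      // SELECT/INSERT/UPDATE/DELETE only
    define('DB_PASSWORD', 'runtime-password-here');
}
```

Remember that both passwords now live in wp-config.php, which is one more reason the 440 permission on that file matters.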

Database Backups

No hardening guide is complete without backups. The question isn’t if you’ll need them, it’s when.

Action: Set up automated database backups with off-site storage.

Implementation (using mysqldump):

# Create backup directory
sudo mkdir -p /var/backups/wordpress
sudo chown $(whoami):$(whoami) /var/backups/wordpress

# Create backup script
sudo nano /usr/local/bin/backup-wordpress-db.sh

Backup script:

#!/bin/bash
BACKUP_DIR="/var/backups/wordpress"
DB_NAME="[wordpress_db]"
DB_USER="[wordpress_user]"
DB_PASS="[your_db_password]"  # Better: read from secure file
DATE=$(date +%Y%m%d_%H%M%S)
FILENAME="wp_backup_$DATE.sql.gz"

# Dump and compress
mysqldump -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" | gzip > "$BACKUP_DIR/$FILENAME"

# Keep only last 7 days
find "$BACKUP_DIR" -name "wp_backup_*.sql.gz" -mtime +7 -delete

# Optional: Upload to remote storage (S3, Backblaze, etc.)
# aws s3 cp $BACKUP_DIR/$FILENAME s3://your-bucket/backups/

Make executable and schedule:

sudo chmod +x /usr/local/bin/backup-wordpress-db.sh

# Add to crontab (daily at 2 AM)
crontab -e

Add line:

0 2 * * * /usr/local/bin/backup-wordpress-db.sh

Security note: Never store database credentials in plain text. Use a secrets manager or at minimum a restricted-permission file.
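A backup you have never restored is a hope, not a backup. The restore path is the dump pipeline in reverse; the sketch below simulates it with a dummy dump so it can run anywhere, and the commented command shows the real-server equivalent (paths and credentials are placeholders):

```shell
# Simulate the dump-then-restore round trip with a one-line dummy dump.
printf 'CREATE TABLE AB_test (id INT);\n' | gzip > /tmp/wp_backup_test.sql.gz

RESTORED=$(gunzip -c /tmp/wp_backup_test.sql.gz)
printf '%s\n' "$RESTORED"

# Real restore on the server would look like:
#   gunzip -c /var/backups/wordpress/wp_backup_DATE.sql.gz | mysql -u [wordpress_user] -p [wordpress_db]
```

Do a test restore into a scratch database at least once; that is the only way to know the cron job is producing usable dumps.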

Database Connection Security

Ensure WordPress connects to MySQL over localhost (Unix socket) rather than TCP, and uses a strong password.

In wp-config.php:

define('DB_HOST', 'localhost');  // 'localhost' uses the Unix socket; '127.0.0.1' forces TCP

Generate strong database password:

openssl rand -base64 32

Update the password in both wp-config.php and MySQL:

ALTER USER '[wordpress_user]'@'localhost' IDENTIFIED BY 'new_strong_password';
FLUSH PRIVILEGES;

5. Maintenance: Monitoring & Auditing

A hardened site requires continuous vigilance. Security is not a one-time configuration.

Log Monitoring

When issues arise, logs are your forensic evidence.

Action: Configure centralized logging and regular review.

Log locations:

  • Nginx access log: /var/log/nginx/access.log
  • Nginx error log: /var/log/nginx/error.log
  • PHP-FPM error log: /var/log/php8.3-fpm.log (adjust version number)

Real-time monitoring:

# Monitor Nginx errors
sudo tail -f /var/log/nginx/error.log

# Watch rate limiter in action
sudo tail -f /var/log/nginx/access.log | grep "503"

# Monitor all Nginx access (useful during attacks)
sudo tail -f /var/log/nginx/access.log

# Filter for failed login attempts
sudo grep "wp-login.php" /var/log/nginx/access.log | grep "POST"

# PHP health check (adjust to your PHP version)
sudo tail -f /var/log/php8.3-fpm.log

Setting log rotation (prevents logs from filling disk):

# Nginx logs are rotated by default via logrotate
cat /etc/logrotate.d/nginx

Advanced: Install fail2ban for Automated IP Blocking

sudo apt install fail2ban

# Create WordPress jail
sudo nano /etc/fail2ban/jail.local

Fail2ban WordPress jail configuration:

[wordpress-auth]
enabled = true
filter = wordpress-auth
logpath = /var/log/nginx/access.log
maxretry = 3
bantime = 3600
findtime = 600

Create the filter:

sudo nano /etc/fail2ban/filter.d/wordpress-auth.conf

Filter contents:

[Definition]
failregex = ^<HOST> .* "POST /wp-login.php
ignoreregex =

Restart fail2ban:

sudo systemctl restart fail2ban
sudo fail2ban-client status wordpress-auth
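Before trusting the jail, sanity-check the regex. fail2ban ships a tester for exactly this (`fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/wordpress-auth.conf`); the core idea can also be sketched with plain grep against a sample log line (the line and the pattern are illustrative approximations of the filter’s matching):

```shell
# A sample nginx access-log line for a POST to wp-login.php.
LINE='203.0.113.7 - - [01/Jan/2025:12:00:00 +0000] "POST /wp-login.php HTTP/1.1" 200 1523'

# Approximate the failregex: a host at line start, then a POST to wp-login.php.
if printf '%s\n' "$LINE" | grep -qE '^[0-9a-fA-F.:]+ .* "POST /wp-login\.php'; then
    echo match
else
    echo no-match
fi
```

If the tester (or the grep) matches nothing against real traffic, the jail silently bans no one, which is the failure mode you want to catch now rather than during an attack.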

Cloudflare + fail2ban: Restoring Real IPs

The Problem: If you’re using Cloudflare in proxy mode (orange cloud), Nginx sees all requests coming from Cloudflare’s IP addresses, not the actual visitor’s IP. This breaks fail2ban because:

  1. Nginx logs show Cloudflare IPs (e.g., 104.16.x.x) instead of attacker IPs
  2. fail2ban tries to ban Cloudflare’s servers (bad)
  3. Attackers never get blocked (worse)

The Solution: Configure Nginx to trust Cloudflare’s proxy headers and extract the real visitor IP using the ngx_http_realip_module.

Action: Configure Nginx to restore real IPs from Cloudflare headers.

Implementation:

First, verify the realip module is available:

nginx -V 2>&1 | grep -o with-http_realip_module
# Should output: with-http_realip_module

Create a Cloudflare IP configuration file:

sudo nano /etc/nginx/cloudflare-realip.conf

Add Cloudflare’s current IP ranges:

# Cloudflare IPv4 ranges
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 131.0.72.0/22;

# Cloudflare IPv6 ranges
set_real_ip_from 2400:cb00::/32;
set_real_ip_from 2606:4700::/32;
set_real_ip_from 2803:f800::/32;
set_real_ip_from 2405:b500::/32;
set_real_ip_from 2405:8100::/32;
set_real_ip_from 2a06:98c0::/29;
set_real_ip_from 2c0f:f248::/32;

# Use the CF-Connecting-IP header (Cloudflare's real IP header)
real_ip_header CF-Connecting-IP;

# Use X-Forwarded-For as fallback
# real_ip_header X-Forwarded-For;

# Don't trust any other proxies
real_ip_recursive off;

Note: Cloudflare updates their IP ranges occasionally. Get the current list from: https://www.cloudflare.com/ips/

Include in your main Nginx configuration:

sudo nano /etc/nginx/nginx.conf

Add inside the http block (before your server blocks):

http {
    # ... other settings ...
    
    # Cloudflare real IP restoration
    include /etc/nginx/cloudflare-realip.conf;
    
    # ... rest of config ...
}

Test and reload Nginx:

sudo nginx -t
sudo systemctl reload nginx

Verification:

Check that Nginx is now logging real IPs:

# Watch access log
sudo tail -f /var/log/nginx/access.log
# Visit your site from your phone/computer
# You should see YOUR actual IP, not a Cloudflare IP (104.16.x.x)

Test fail2ban sees real IPs:

# Intentionally fail a login 3+ times from your test machine
# Then check fail2ban log
sudo tail -f /var/log/fail2ban.log

# Check if YOUR IP got banned (not Cloudflare's)
sudo fail2ban-client status wordpress-auth
# Should show your actual IP in the banned list

Unban yourself if needed:

sudo fail2ban-client set wordpress-auth unbanip YOUR_IP_ADDRESS

Security note: Only trust Cloudflare IPs if you’re actually behind Cloudflare. If an attacker bypasses Cloudflare and connects directly to your origin server, they could spoof the CF-Connecting-IP header. Protect against this by:

  1. Firewall your origin server to only accept connections from Cloudflare IPs:
# UFW example - allow only Cloudflare
sudo ufw default deny incoming
sudo ufw allow from 173.245.48.0/20 to any port 80
sudo ufw allow from 173.245.48.0/20 to any port 443
# ... repeat for all Cloudflare ranges ...
# Or use a script: https://github.com/Paul-Reed/cloudflare-ufw
  2. Use Authenticated Origin Pulls (Cloudflare’s mTLS) to cryptographically verify requests are from Cloudflare

Alternative approach: If you don’t want to maintain Cloudflare IP lists, use their API to automatically update:

# Create update script
sudo nano /usr/local/bin/update-cloudflare-ips.sh

Script contents:
#!/bin/bash
CF_IPSV4_URL="https://www.cloudflare.com/ips-v4"
CF_IPSV6_URL="https://www.cloudflare.com/ips-v6"
CONF_FILE="/etc/nginx/cloudflare-realip.conf"

# Fetch current IPs
echo "# Auto-generated Cloudflare IPs - $(date)" > $CONF_FILE
echo "" >> $CONF_FILE
echo "# IPv4" >> $CONF_FILE
for ip in $(curl -s $CF_IPSV4_URL); do
    echo "set_real_ip_from $ip;" >> $CONF_FILE
done
echo "" >> $CONF_FILE
echo "# IPv6" >> $CONF_FILE
for ip in $(curl -s $CF_IPSV6_URL); do
    echo "set_real_ip_from $ip;" >> $CONF_FILE
done
echo "" >> $CONF_FILE
echo "real_ip_header CF-Connecting-IP;" >> $CONF_FILE
echo "real_ip_recursive off;" >> $CONF_FILE

# Test and reload Nginx
nginx -t && systemctl reload nginx

Make executable and schedule monthly:

sudo chmod +x /usr/local/bin/update-cloudflare-ips.sh
sudo crontab -e
# Add: 0 3 1 * * /usr/local/bin/update-cloudflare-ips.sh

WordPress Security Plugins

Defense in depth means layering multiple security measures. Here are recommended plugins:

1. Wordfence Security (Free)

  • Web Application Firewall (WAF)
  • Malware scanner
  • Login security & 2FA
  • Real-time traffic monitoring

Installation:

# Via WP-CLI (if installed)
wp plugin install wordfence --activate --allow-root

# Or via dashboard: Plugins → Add New → Search "Wordfence"

Configuration priorities:

  • Enable 2FA for all admin users
  • Configure email alerts for critical events
  • Run scheduled scans weekly
  • Enable “Extended Protection” (free vs premium trade-off)

2. Alternative: Solid Security (formerly iThemes Security)

  • Similar feature set to Wordfence
  • Lighter resource usage
  • Better for shared hosting

Pick one WAF plugin, not both. Running multiple security plugins can cause conflicts.

File Integrity Monitoring

Detect unauthorized changes to core WordPress files.

Action: Use WordPress’s built-in file verification.

Via WP-CLI: (run from within the web root /var/www/[your_domain])

# Verify core files (detects modifications)
wp core verify-checksums --allow-root

# Example output shows modified files:
# Warning: File doesn't verify against checksum: wp-admin/admin.php

For continuous monitoring, install a plugin like WP Security Audit Log or use OSSEC (advanced, server-level).

Keep WordPress Updated

This is basic but critical. Most compromises exploit known, patched vulnerabilities.

Enable automatic updates for minor versions (in wp-config.php):

// Auto-update minor core releases (e.g., 6.4.1 to 6.4.2)
define('WP_AUTO_UPDATE_CORE', 'minor');

// To enable major updates (use with caution):
// define('WP_AUTO_UPDATE_CORE', true);

Plugin updates: Review release notes before updating, especially for major versions. Test on staging first if possible.

Via WP-CLI (safer for automation):

# Update WordPress core
wp core update --allow-root

# Update all plugins
wp plugin update --all --allow-root

# Update all themes
wp theme update --all --allow-root

The 502 Bad Gateway Lesson

Real-world incident: After implementing the database prefix change, the site returned a 502 error.

Root cause: Wordfence couldn’t find its configuration tables because they still had the wp_ prefix while WordPress was looking for AB_ tables.

Investigation:

# Check Nginx error log
sudo tail -f /var/log/nginx/error.log
# Output: "upstream sent invalid header while reading response header from upstream"

# Check PHP-FPM log
sudo tail -f /var/log/php8.3-fpm.log
# Output: "WordPress database error Table 'wordpress_db.AB_wfconfig' doesn't exist"

Solution: Query all database tables and verify prefix consistency:

SHOW TABLES;
-- Found leftover wp_wfconfig, wp_wfblocks7, etc.

RENAME TABLE wp_wfconfig TO AB_wfconfig;
RENAME TABLE wp_wfblocks7 TO AB_wfblocks7;
-- (total: 39 tables needed renaming)

Lesson: When changing database prefixes, audit ALL tables, not just WordPress core tables. Plugins create their own schema.


6. Defense in Depth: The Cloudflare Layer

If you’re using Cloudflare in proxy mode (orange cloud), you get an additional security layer in front of your server.

Cloudflare Security Settings

Navigate to: Security → Settings in Cloudflare dashboard

Recommended configuration:

  • Security Level: High (challenges suspicious visitors)
  • Challenge Passage: 30 minutes
  • Browser Integrity Check: On
  • Privacy Pass Support: On

Cloudflare Firewall Rules

Free plan users get 5 firewall rules. Use them wisely.

Example rule: Block known bad bots

(cf.bot_management.score lt 30)

Action: Block

Note: the cf.bot_management.score field is only available with Cloudflare’s paid Bot Management add-on; on the free plan, build rules from fields like URI path, country, or ASN instead.

Example rule: Rate limit wp-login.php (note: this is configured as a rate limiting rule, a separate feature from the firewall rules above)

(http.request.uri.path contains "/wp-login.php")

Action: Challenge
Rate: 3 requests per minute

Cloudflare Page Rules

Force HTTPS redirect (free with Cloudflare):

URL: http://*[your_domain].com/*
Setting: Always Use HTTPS

SSL/TLS Configuration

Verify SSL/TLS Strict mode is enabled:

  • SSL/TLS → Overview: “Full (strict)” mode selected
  • Edge Certificates → Always Use HTTPS: On
  • HSTS: Enabled with minimum 6 months max-age

On your server, ensure valid SSL certificate:

# If using Let's Encrypt
sudo certbot renew --dry-run

# Check certificate validity
openssl x509 -in /etc/letsencrypt/live/[your_domain].com/cert.pem -noout -dates
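
Checking the dates manually works, but it's easy to forget. Here is a minimal sketch of a function that turns the `openssl` output into a days-remaining number you can alert on; the Let's Encrypt path is an assumption, and it relies on GNU `date` (standard on Ubuntu).

```shell
#!/usr/bin/env bash
# Sketch: report how many days remain before a certificate expires.

days_until_expiry() {
  local cert="$1" end now
  # openssl prints "notAfter=<date>"; strip the key, convert to epoch seconds.
  end=$(date -d "$(openssl x509 -in "$cert" -noout -enddate | cut -d= -f2)" +%s)
  now=$(date +%s)
  echo $(( (end - now) / 86400 ))
}

# Usage, e.g. from a daily cron job (path is an assumption):
#   left=$(days_until_expiry /etc/letsencrypt/live/[your_domain].com/cert.pem)
#   [ "$left" -lt 14 ] && echo "certificate renews in $left days -- check certbot"
```

Certbot's timer normally renews well before expiry, so any alert from this check means the automation itself has broken and needs a look.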

Final Security Checklist

A comprehensive verification list before considering your WordPress installation hardened:

Server Level

  • [ ] SSH key-only authentication (passwords disabled)
  • [ ] UFW firewall configured (ports 22, 80, 443 only)
  • [ ] Automatic security updates enabled
  • [ ] fail2ban installed and configured (optional but recommended)
  • [ ] Non-root user for server management

Web Server (Nginx)

  • [ ] Security headers active (X-Frame-Options, CSP, etc.)
  • [ ] Directory indexing disabled
  • [ ] Sensitive files blocked (.htaccess, wp-config.php, readme.txt)
  • [ ] PHP execution disabled in uploads directory
  • [ ] Rate limiting on wp-login.php (optional)
  • [ ] SSL/TLS configured (Cloudflare handles this)

WordPress Application

  • [ ] Custom database prefix (AB_)
  • [ ] Security salts regenerated
  • [ ] File editing disabled (DISALLOW_FILE_EDIT)
  • [ ] File permissions set correctly (755/644)
  • [ ] wp-config.php readable only by owner (440)
  • [ ] Default content deleted (Hello World post, Sample Page)
  • [ ] Admin username is not “admin”
  • [ ] All user accounts use strong passwords
  • [ ] Two-factor authentication enabled for admins

Database

  • [ ] Database user privileges restricted (SELECT, INSERT, UPDATE, DELETE only)
  • [ ] Strong database password (32+ characters)
  • [ ] Connection over localhost/socket
  • [ ] Automated backups configured (daily minimum)
  • [ ] Backup restoration tested (critical – untested backups are useless)
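
The backup and retention items above can be sketched as a single cron script. This is a minimal sketch, not a complete backup solution: the paths, database name, and the 14-day window (within the 7-30 day on-server guidance) are assumptions, and it covers only the database, not uploads or config.

```shell
#!/usr/bin/env bash
# Sketch of a daily database backup with on-server retention.
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-/var/backups/wordpress}"   # assumed location
RETENTION_DAYS="${RETENTION_DAYS:-14}"               # assumed window

backup_db() {
  mkdir -p "$BACKUP_DIR"
  # Credentials come from ~/.my.cnf; avoid passwords on the command line.
  mysqldump wordpress_db | gzip > "$BACKUP_DIR/db-$(date +%F).sql.gz"
}

prune_old() {
  # Delete on-server backups older than the retention window.
  find "$BACKUP_DIR" -name 'db-*.sql.gz' -mtime +"$RETENTION_DAYS" -delete
}

# Cron, e.g.:  30 2 * * *  /usr/local/bin/wp-db-backup.sh
```

Pair this with an off-site copy (rsync, object storage) for the 90+ day retention, and actually restore a dump into a scratch database at least once -- that is the "restoration tested" checkbox.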

Ongoing Maintenance

  • [ ] WordPress core auto-updates enabled (minor versions)
  • [ ] Plugin updates reviewed weekly
  • [ ] Security plugin installed (Wordfence or Solid Security)
  • [ ] Log monitoring configured
  • [ ] File integrity checks scheduled
  • [ ] Backup retention policy (7-30 days on-server, 90+ days off-site)
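
For the file integrity item above, WP-CLI's `wp core verify-checksums` covers WordPress core files natively. For everything else (plugins, themes), a simple checksum manifest works; this is a minimal sketch under the assumption that you regenerate the manifest after each legitimate deploy.

```shell
#!/usr/bin/env bash
# Sketch: scheduled file integrity check via a sha256 manifest.

make_manifest() {
  # $1 = directory to scan, $2 = manifest file to write
  find "$1" -type f -name '*.php' -exec sha256sum {} + | sort > "$2"
}

check_manifest() {
  # $1 = directory, $2 = stored manifest; prints any changed files.
  local current
  current=$(mktemp)
  find "$1" -type f -name '*.php' -exec sha256sum {} + | sort > "$current"
  diff "$2" "$current" || true
  rm -f "$current"
}

# Usage: run make_manifest once after a known-good deploy, then run
# check_manifest from cron and alert on any non-empty output.
```

Store the manifest outside the docroot; an attacker who can modify your PHP files can just as easily regenerate a manifest sitting next to them.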

Cloudflare (if using)

  • [ ] Security level set to High
  • [ ] Firewall rules configured
  • [ ] SSL/TLS mode: Full (strict)
  • [ ] HSTS enabled
  • [ ] Page rules for HTTPS redirect

Conclusion: Continuous Hardening

Security is not a destination; it’s a continuous process. This guide covers the foundational hardening measures for a WordPress blog, but the threat landscape evolves constantly.

The three pillars of WordPress security:

  1. Prevention: Implement these hardening measures
  2. Detection: Monitor logs, use security plugins, review traffic
  3. Response: Have a recovery plan, maintain tested backups

Next steps:

  • Implement a staging environment for testing updates
  • Consider a Web Application Firewall (beyond Cloudflare’s basic protection)
  • Audit third-party plugins quarterly (delete unused ones)
  • Review access logs monthly for suspicious patterns
  • Subscribe to WordPress security mailing lists


By following this guide, you’ve transformed a default WordPress installation into a hardened, defense-in-depth security architecture. Each layer—from the OS firewall to Cloudflare’s WAF—adds redundancy. If one layer fails, others remain.

The philosophy: Make your blog a harder target than the next one. Attackers seek easy wins; don’t be one.


Last updated: February 2026
WordPress Version: 6.4+
Server: Ubuntu 24.04 LTS / Nginx