Let's encrypt it all

For a couple of months now I have been using Let's Encrypt to generate free and valid certificates for all the services I run.
The free Certificate Authority (CA for short) has spread like wildfire: services small and large have adopted it, and the number of issued certificates grew past 1 million in just four months.
As a visitor to this website you have probably noticed the small green lock in the address bar. The certificate used for this website is accepted as valid by your browser (and also by your operating system).
If you're up for some background knowledge, just read on. If you're up for some hands-on technical stuff, jump right on to the howto.
Just note: This is a veeeeeeery long article in any case.

Certificate Authority

"So, what's so interesting about Let's Encrypt and why would I want to use it?" you might ask now.
To understand the answer to this, you need to learn about the status-quo and how things have been done before Let's Encrypt was around.

While anyone can create a certificate and use it for their service, CAs act as a trusted third party that assures your computer that the certificate presented by a service really belongs to the people running it.
If someone else created a certificate in your name and placed it in front of your service (e.g. in a man-in-the-middle attack), there would be no way to tell whether you are actually talking to the service you wanted to reach, and the attacker could easily use that certificate to read your traffic (steal your passwords and private data, etc.).

Big players

Because of this obvious gap in how encryption between computers on a network is established, CAs were put in place to validate your certificate by signing it, thereby telling your computer that the certificate in use is trusted, because they know you.
The they know you part usually meant that you paid money to a company (until last year CAs were, with the exception of CAcert, private companies) to run a script (e.g. one that is part of the OpenSSL software bundle) which signs your certificate, or even generates it and then signs it for you.
Many of them do only casual checking of identity and/or ownership of the domains applied for (e.g. sending mail to certain addresses on a host is not necessarily a secure way of validating ownership).
As you can imagine, this is a money machine.

More Flaws

This is the moment where you should ask yourself: "Well, what if one of the CAs is a bad boy, or messes up in such a way that it issues certificates to anyone without checking, or even for services that don't belong to the applicants?"
Well... it will come as no surprise: this has happened and will happen (again).
In a way this would also be a perfect time to ask yourself whether encrypted traffic is all that secure after all. A little paranoia never hurt.

The certificates with which the CAs sign other certificates and thereby trust them are called root certificates and are usually shipped by your operating system and/or your web browser. These bundles are used by all applications that understand encrypted traffic to check whether the connection they are about to make is trusted by a CA.
Now, there is no easy way of telling whether your CA bundle is trustworthy or harmful, but at some point you have to trust your operating system. You do trust your operating system, don't you? Well, that's too bad.
To mitigate this flaw in turn, Google and Mozilla introduced certificate pinning in their browsers, which can detect fraudulent certificates for some domains by checking them against checksums (derived from a hash function) of the root certificates these domains are signed with. These checksums (sometimes called hashes) are shipped with the browser, which should make you wonder whether you can trust those either.
Sometimes I think that the more one learns about this topic, the further one wants to move into the woods...
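Before retreating to the woods, you can at least see this chain of trust with your own eyes: OpenSSL will tell you which CA vouches for any given site. A small sketch (example.org is just a placeholder, and the verification result depends on the CA bundle installed on your system):
# print the subject and issuer of the certificate a server presents
openssl s_client -connect example.org:443 -servername example.org < /dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer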

Self-sign

For people or associations that wanted to run their services encrypted, but not go to the lengths of spending a lot of money on top of their hosting costs, a typical move was to simply self-sign their certificates. I've done it myself, and basically there's no harm in being your own CA.
Unfortunately (or luckily) browsers became more restrictive over the years about whom and what they trust, making it increasingly painful to use self-signed certificates. In Firefox you could add an exception for your certificate and that was that, while Chrome/Chromium introduced an annoying red warning page you had to click through each time you encountered a self-signed certificate.
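For reference, being your own CA really is as simple as one OpenSSL command; the result is exactly the kind of certificate those warning pages complain about. A sketch (file names and the 365-day lifetime are arbitrary choices):
# create a self-signed certificate and key, no CA involved
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout selfsigned.key \
  -out selfsigned.crt \
  -subj "/CN=domain.tld"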

Teaming up

In 2003 CAcert emerged as a community-driven project aiming to become a free and non-profit CA. Unfortunately its root certificate was never widely accepted by operating systems or browsers.
Although its web-of-trust system for user validation was a neat idea, the association struggled for acceptance for a long time due to insufficient auditing and verification mechanisms, and ultimately failed to meet Mozilla's CA certificate policy, which led to a poor inclusion status overall.

Encrypt the interwebz

In 2014 another player entered the stage to make all that pain go away. I was very excited to see a presentation on Let's Encrypt amongst the content for 31C3.
Their goal was (and still is) to maximize the share of encrypted traffic on the web, and for that they teamed up with some big players (Mozilla, EFF, Cisco Systems, Akamai, University of Michigan) to develop an automated validation system that offers trusted certificates to anyone running a website, free of charge.

While self-signed and CAcert certificates became increasingly limiting, and the sometimes high cost of buying a trusted certificate kept many from encrypting their traffic altogether, the services certified by the usual CAs stood under the false pretense of being more secure than the former.
Let's Encrypt has indeed come to restore the balance and free what should be free: encrypted traffic.
By issuing certificates through an intermediate that is cross-signed by IdenTrust, Let's Encrypt is able to sign, and thereby make trusted, any certificate it deems valid according to its openly developed validation system.
Its system is fully automated (implementing ACME), easy to use in many ways, and doesn't suffer from the above mentioned pitfalls of the usual validation process.

Using letsencrypt

The Let's Encrypt software bundle is by now packaged for all major Linux distributions (and as a matter of fact, the Internet runs on Linux), so acquiring a valid certificate for your website has become so easy, it's insane.
Let me tell you about how I do things on Arch Linux on this very server.
Although Let's Encrypt lets you handle the ACME handshake yourself (manually or with another client), the community around it wrote a Python-based piece of software that does just that. It's called certbot.
While some may argue that its backward compatibility is dangerous, one could also argue the other way round: unsecured webservers running ancient Python versions, because their admins can't or won't update, are very dangerous too.
In any case: every piece of software has bugs (some more severe than others), but they are also more likely to get fixed quickly the more people stumble upon them.
A unifying piece of software such as certbot is very useful and eases the overall spread of Let's Encrypt.
Nonetheless, if you're able to do the ACME challenge manually and it makes sense in your scenario, you might want to consider that.
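As a rough sketch of that route: certbot also ships a manual mode that walks you through placing the challenge response yourself instead of writing it to a webroot. The exact flags may differ between certbot versions, so treat this as an illustration only:
certbot certonly --manual --preferred-challenges http -d domain.tld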

certbot

Arch Linux has certbot in its repositories, so just install the latest version and all of its dependencies:
pacman -Sy certbot
Now would be a good time to have a look at what the EFF has to tell you about nginx in conjunction with certbot.
Arch Linux of course also has a very useful article on the topic in its wiki.
At this point I am assuming nginx is already installed, configured for non-encrypted service, and that we want to generate certificates for the following domains: www.domain.tld, domain.tld, cloud.domain.tld, www.cloud.domain.tld, mail.domain.tld, www.mail.domain.tld (using Subject Alternative Names (SAN)).
Currently the certbot plugin for nginx is still experimental, so I will refrain from using it and use the webroot method instead.

nginx preparation

Let us have a look at how to configure nginx, so it will be prepared for the ACME challenge.

Snippets

  • we require a directory (.well-known/acme-challenge/) that certbot (running as root) can write to, so it can place a challenge response there for each domain

  • the directory must be servable (readable) by nginx (usually running with the user and group http)

As the directory can be the same for all the challenges on your server, you can of course just create one and point all outside requests to it. We will use /srv/http/letsencrypt/ for it and define a configuration block that we can include anywhere we need it.
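Creating that directory with sensible permissions could look like this (as root; this assumes nginx runs with the group http, as mentioned above, and is only one way of doing it):
mkdir -p /srv/http/letsencrypt
chown root:http /srv/http/letsencrypt
chmod 750 /srv/http/letsencrypt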

  • /etc/nginx/letsencrypt-challenge.conf

    location ~ /\.well-known/acme-challenge {
      root /srv/http/letsencrypt;
      default_type "text/plain";
    }
    
This tells nginx to look inside /srv/http/letsencrypt for requests to /.well-known/acme-challenge on any domain where we include this snippet.

The following short example is an overview of /etc/nginx/nginx.conf. Yours might look different; this one is here for demonstration purposes only!
Anyhow, I like to include the configuration for the different subdomains/domains separately here, so they don't get mixed up and it is easier to add or disable functionality.

  • /etc/nginx/nginx.conf

    worker_processes auto;
    error_log /var/log/nginx/error.log;
    events {
      worker_connections 1024;
    }
    http {
      include mime.types;
      default_type application/octet-stream;
      gzip on;
      sendfile on;
      keepalive_requests 55;
      keepalive_timeout 55;
      # pelican blog
      include domain.conf;
      # ownCloud
      include cloud.domain.conf;
      # roundcube mail interface available only through VPN
      include mail.domain.conf;
    }
    
The initial configuration already shows that we now have three services that will need to be covered by the certificate we want to get. I picked the roundcube webmail service for demonstration purposes as a hidden service. This is not meant to badmouth its security, but to show that you can hide a service behind a VPN if you choose to.
To achieve something like that, you can use the nginx geo module. When you set up a VPN infrastructure, you end up with a separate connection to your server within a private network. For the sake of simplicity let us assume your server has 172.16.0.1 and your client computer 172.16.0.2 as IPs in this setup.
On your server you can now explicitly check for the correct client and allow or deny access. Another block for the nginx configuration can be used to let you include this in your domain configurations:

  • /etc/nginx/geoblock.conf

    geo $is_allowed {
      default 0;
      172.16.0.2 1;
    }
    
Here we define a variable called is_allowed, which defaults to 0. If the request to your server comes from the IP 172.16.0.2, is_allowed is set to 1.
Note: Add this snippet to your hidden service's configuration file right at the top!

There is one downside to this though, if you choose to get a Let's Encrypt certificate for the hidden service: you have to add an extra check that excludes calls to .well-known/acme-challenge from the geo block and makes them publicly accessible.
For that to happen you can define another block for multiple inclusion.

  • /etc/nginx/letsencrypt-request-check.conf

    if ($request_uri ~ \.well-known/acme-challenge) {
      set $is_allowed 1;
    }
    if ($is_allowed = 0){
      return 301 https://domain.tld$request_uri;
    }
    
This snippet sets the previously introduced variable is_allowed to 1 if the request is an ACME challenge, and permanently redirects to the main website otherwise.
As it makes sense to have https enabled on all of your services, the permanent redirect is added to this configuration snippet. You could also separate it out if you like.
Note: You must include letsencrypt-request-check.conf after geoblock.conf, but before letsencrypt-challenge.conf!

You will have to include the above snippets in your configuration for each of your subdomains/domains and make sure that /srv/http/letsencrypt/ has sufficient permissions.
This will roughly look as follows:

  • /etc/nginx/domain.conf & /etc/nginx/cloud.domain.conf

    server {
      listen  80;
      listen [::]:80;
      # ...
      include letsencrypt-challenge.conf;
      # ...
    }
    

  • /etc/nginx/mail.domain.conf

    include geoblock.conf;
    server {
      listen  80;
      listen [::]:80;
      # ...
      include letsencrypt-request-check.conf;
      include letsencrypt-challenge.conf;
      # ...
    }
    
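Before involving the CA at all, it does not hurt to check that the challenge path is actually reachable from the outside. A quick sketch (the file name test and its content are arbitrary):
nginx -t && systemctl reload nginx
mkdir -p /srv/http/letsencrypt/.well-known/acme-challenge
echo "it works" > /srv/http/letsencrypt/.well-known/acme-challenge/test
curl http://domain.tld/.well-known/acme-challenge/test
rm /srv/http/letsencrypt/.well-known/acme-challenge/test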

certbot staging

certbot has a mode called staging that basically gets a "test certificate" for you, so you can check whether everything is working as expected. Sounds safe? Let's do it (as root or with sudo)!
certbot certonly \
  --staging \
  --agree-tos \
  --renew-by-default \
  --email valid@domain.tld \
  --webroot -w /srv/http/letsencrypt \
    -d domain.tld \
    -d www.domain.tld \
    -d cloud.domain.tld \
    -d www.cloud.domain.tld \
    -d mail.domain.tld \
    -d www.mail.domain.tld
All domains are defined separately using the -d flag. The above command will give you an error if something goes wrong, and that error usually is quite explicit.
Note: It is very important to test your setup against the staging environment first, because the production environment is rate-limited (and half-baked certs will not do you any good).
If everything went right, you now have a staging ("test") certificate, which is not trusted by browsers and in itself is still useless.
Let's go for the real deal then, shall we?
certbot certonly \
  --agree-tos \
  --renew-by-default \
  --email valid@domain.tld \
  --webroot -w /srv/http/letsencrypt \
    -d domain.tld \
    -d www.domain.tld \
    -d cloud.domain.tld \
    -d www.cloud.domain.tld \
    -d mail.domain.tld \
    -d www.mail.domain.tld
This should return a success message, noting that your certificate has been saved to /etc/letsencrypt/live/domain.tld/fullchain.pem and until when that certificate is valid.
Congratulations! You just generated a signed certificate that is valid for the above domains and recognized by operating systems and browsers!
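If you are curious, you can inspect what you just got. The first certificate in fullchain.pem is your own (the leaf), followed by the Let's Encrypt intermediate:
# show subject, issuer and validity of the leaf certificate
openssl x509 -in /etc/letsencrypt/live/domain.tld/fullchain.pem -noout -subject -issuer -dates
# list the Subject Alternative Names covered by it
openssl x509 -in /etc/letsencrypt/live/domain.tld/fullchain.pem -noout -text | grep -A1 'Subject Alternative Name'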

Production

Before we include the certificate in the nginx configuration for each domain though, it is time to think about proper SSL/TLS settings (cipher suite, protocols, Diffie-Hellman key exchange) and security headers (Content Security Policy (CSP), Cross-Origin Resource Sharing (CORS), HTTP Strict Transport Security (HSTS), X-Content-Type-Options, X-Frame-Options (XFO), X-XSS-Protection).
Luckily, a lot of other people have already thought about these issues and shared their expertise. Just look at this, this or at Mozilla's SSL config generator for web servers.

moar snippets

To include safe settings for nginx in all domain configurations, we will create some more snippets and will be happy about this form of reusability!

  • /etc/nginx/tls.conf

    ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_dhparam /etc/nginx/dhparam.pem;
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
    resolver 8.8.8.8;
    
Note: I chose a very modern approach towards ssl_protocols by enabling only TLSv1.2 at this point. Depending on your clients, you might want to use 'TLSv1 TLSv1.1 TLSv1.2' instead.
To generate the needed dhparam.pem (2048 bits recommended) we can use OpenSSL as root:
openssl dhparam -out /etc/nginx/dhparam.pem 2048
  • /etc/nginx/security_headers.conf

    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag "none";
    
A little note on the Content-Security-Policy here: Usually one would try to have the directives (default-src, connect-src, img-src, script-src, style-src) set to 'self'. Due to the inline CSS and JavaScript in services such as ownCloud and roundcube this is not possible though, so 'unsafe-inline' and 'unsafe-eval' have to be added for some of them.
At this point you could of course also choose to create differing 'security_headers' inclusions for the services you run.
Depending on which services are running, you will want to monitor the developer console in your browser closely after enabling these security headers. It will tell you if CSP is blocking some resource (and possibly making the service unusable).

domain configurations

Following are the three different configurations for the services (I won't go into detail about uWSGI here, but I will in a coming article).
  • /etc/nginx/domain.conf:

    # redirect all unencrypted traffic to https
    server {
      listen 80 default_server;
      listen [::]:80 default_server;
      server_name domain.tld www.domain.tld;
      return 301 https://domain.tld$request_uri;
    }
    
    # redirect all traffic to www. to the plain url
    server {
      listen 443 ssl;
      listen [::]:443 ssl;
      server_name www.domain.tld;
      return 301 https://domain.tld$request_uri;
    }
    
    server {
      listen 443 ssl default_server;
      listen [::]:443 ssl default_server;
      server_name domain.tld;
      include tls.conf;
      # your pelican blog resides here
      root /srv/http/websites/domain.tld;
      # make sure to log
      access_log /var/log/nginx/access.domain.log;
      error_log /var/log/nginx/error.domain.log;
      error_page 403 404 /404/index.html;
      error_page 500 502 503 504  /50x.html;
      # include security headers
      include security_headers.conf;
      add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'";
      # include the letsencrypt snippet
      include letsencrypt-challenge.conf;
    
      location / {
        index index.html index.htm;
        try_files $uri $uri/ $uri/index.html;
      }
    
      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
      }
    
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }
    

  • /etc/nginx/cloud.domain.conf

    # redirect all unencrypted traffic to https
    server {
      listen 80;
      listen [::]:80;
      server_name cloud.domain.tld www.cloud.domain.tld;
      return 301 https://cloud.domain.tld$request_uri;
    }
    
    # redirect www. to the plain domain
    server {
      listen 443 ssl;
      listen [::]:443 ssl;
      server_name www.cloud.domain.tld;
      return 301 https://cloud.domain.tld$request_uri;
    }
    
    server {
      listen 443 ssl;
      listen [::]:443 ssl;
      server_name cloud.domain.tld;
      include tls.conf;
      error_page 403 /core/templates/403.php;
      error_page 404 /core/templates/404.php;
      # make sure to log
      access_log /var/log/nginx/access.cloud.domain.log;
      error_log /var/log/nginx/error.cloud.domain.log;
      #this is to avoid Request Entity Too Large error
      client_max_body_size 10G;
      # include security headers (the rest are set by ownCloud itself already)
      add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline' 'unsafe-eval'";
      add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
      # include the letsencrypt snippet
      include letsencrypt-challenge.conf;
    
      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
      }
    
      location ~ ^/(?:\.htaccess|data|config|db_structure\.xml|README) {
        deny all;
        log_not_found off;
        access_log off;
      }
    
      location ~ ^(.+\.php)(.*)$ {
        include uwsgi_params;
        uwsgi_modifier1 14;
        uwsgi_pass unix:/run/uwsgi/owncloud.sock;
        uwsgi_intercept_errors on;
      }
    
      location / {
        root /usr/share/webapps/owncloud;
        index index.php;
        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
        rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
        rewrite ^/.well-known/carddav /remote.php/dav/ redirect;
        rewrite ^/.well-known/caldav /remote.php/dav/ redirect;
        rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
        rewrite ^/caldav(.*)$ /remote.php/dav$1 redirect;
        rewrite ^/carddav(.*)$ /remote.php/dav$1 redirect;
        rewrite ^/webdav(.*)$ /remote.php/dav$1 redirect;
        try_files $uri $uri/ /index.php;
      }
    
      location ~ \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
        expires 30d;
        access_log off;
      }
    }
    

  • /etc/nginx/mail.domain.conf

    # include the geoblock snippet
    include geoblock.conf;
    
    # redirect all unencrypted traffic to https
    server {
      listen 80;
      listen [::]:80;
      server_name mail.domain.tld www.mail.domain.tld;
      return 301 https://mail.domain.tld$request_uri;
    }
    
    # redirect www. to the plain domain
    server {
      listen 443 ssl;
      listen [::]:443 ssl;
      server_name www.mail.domain.tld;
      return 301 https://mail.domain.tld$request_uri;
    }
    server {
      listen 443 ssl;
      listen [::]:443 ssl;
      server_name mail.domain.tld;
      include tls.conf;
      # make sure to log
      access_log /var/log/nginx/access.mail.domain.log;
      error_log /var/log/nginx/error.mail.domain.log;
      root /usr/share/webapps/roundcubemail;
      #this is to avoid Request Entity Too Large error
      client_max_body_size 20M;
      # include security headers
      include security_headers.conf;
      add_header Content-Security-Policy "default-src 'self'; connect-src 'self'; img-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'";
      # include the request-check snippet
      include letsencrypt-request-check.conf;
      # include the letsencrypt snippet
      include letsencrypt-challenge.conf;
    
      location / {
        index index.php;
        try_files $uri $uri/$args @roundcubemail;
      }
    
      location @roundcubemail {
        include uwsgi_params;
        uwsgi_modifier1 14;
        uwsgi_pass unix:/run/uwsgi/roundcubemail.sock;
      }
    
      location ~ ^/favicon\.ico$ {
        root /usr/share/webapps/roundcubemail/skins/classic/images;
        log_not_found off;
        access_log off;
        expires max;
      }
    
      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
        expires 30d;
      }
    
      # Deny serving some files
      location ~ ^/(composer\.json-dist|composer\.json|package\.xml|CHANGELOG|INSTALL|LICENSE|README\.md|UPGRADING|bin|config|installer|program\/(include|lib|localization|steps)|SQL|tests)$ {
        deny all;
      }
    
      # Deny serving files beginning with a dot, but allow letsencrypt acme-challenge
      location ~ /\.(?!well-known/acme-challenge) {
        deny all;
        access_log off;
        log_not_found off;
      }
    }
    
As you can see here, you have to exclude .well-known/acme-challenge/ when denying access to all paths beginning with a dot.

Bringing it up

You should now check your nginx configuration (as root):

nginx -t
This will tell you if something is wrong. Make sure to fix all problems, otherwise nginx will not come back up after restarting it!
If all is well, restart the web server (as root):
systemctl restart nginx
Et voila! Your website should now serve over https!
You might want to use the Mozilla Observatory now to scan for issues in your setup and to optimize it.
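If you prefer the command line over an online scanner, a quick sanity check with OpenSSL shows the certificate chain, the negotiated protocol and (once nginx has fetched it after the first few requests) the stapled OCSP response:
openssl s_client -connect domain.tld:443 -servername domain.tld -status < /dev/null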

Postfix

Your mail server can also use this certificate now (if your MX record points to one of the domains the certificate was issued for).

  • /etc/postfix/main.cf

    smtpd_tls_cert_file = /etc/letsencrypt/live/domain.tld/fullchain.pem
    smtpd_tls_key_file = /etc/letsencrypt/live/domain.tld/privkey.pem
    

Dovecot

The same goes for your IMAP server:

  • /etc/dovecot/dovecot.conf

    ssl_cert = </etc/letsencrypt/live/domain.tld/fullchain.pem
    ssl_key = </etc/letsencrypt/live/domain.tld/privkey.pem
    
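After reloading both daemons you can verify that they actually present the new certificate. This assumes the usual ports (25 with STARTTLS for Postfix, 993 for IMAPS on Dovecot); adjust if your setup differs:
systemctl reload postfix dovecot
# SMTP with STARTTLS
openssl s_client -connect mail.domain.tld:25 -starttls smtp < /dev/null 2>/dev/null | grep -E 'subject=|issuer='
# IMAPS
openssl s_client -connect mail.domain.tld:993 < /dev/null 2>/dev/null | grep -E 'subject=|issuer='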

Prosody

The XMPP/Jabber server is unfortunately not able to directly access the files in /etc/letsencrypt/live/domain.tld, because it runs as its own user (prosody).
You can work around this by either changing a lot of permissions, or by copying the files over to /etc/prosody/certs/ and pointing your configuration there.
I recommend the latter, as otherwise you would have to change many file and directory permissions that certbot manages itself, and thus lower the overall security of your system.
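A one-time copy to get started could look like this (as root; the renewal service further down repeats these steps automatically after each renewal):
mkdir -p /etc/prosody/certs
cp /etc/letsencrypt/live/domain.tld/fullchain.pem /etc/letsencrypt/live/domain.tld/privkey.pem /etc/prosody/certs/
chown prosody /etc/prosody/certs/fullchain.pem /etc/prosody/certs/privkey.pem
chmod 400 /etc/prosody/certs/privkey.pem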

  • /etc/prosody/prosody.cfg.lua

    ssl = {
      certificate = "/etc/prosody/certs/fullchain.pem";
      key = "/etc/prosody/certs/privkey.pem";
    }
    

Renewal

During the creation of the certificate you might already have noticed that it is only valid for 90 days. This is ultimately not a bad thing, as renewing it is now very easy.
When you used certbot for the first time to create your real certificate, it automatically saved a configuration file for your domain under /etc/letsencrypt/renewal/domain.tld.conf.
This is a perfect time for testing it (as root):
certbot renew --dry-run
Did everything run smoothly? Great! You can now have certbot renew your certificate once it is due.
The EFF recommends running the renewal twice daily, so that you stay online even if Let's Encrypt has to revoke its certificates for some reason (let us hope that won't happen...).
On Arch Linux a systemd service and timer are the way to do it.

  • /etc/systemd/system/certbot.service

    [Unit]
    Description=Let's Encrypt renewal
    
    [Service]
    Type=oneshot
    ExecStart=/usr/bin/certbot renew --quiet --agree-tos
    ExecStartPost=/usr/bin/cp --remove-destination /etc/letsencrypt/live/domain.tld/fullchain.pem /etc/letsencrypt/live/domain.tld/privkey.pem /etc/prosody/certs/
    ExecStartPost=/usr/bin/chown prosody /etc/prosody/certs/fullchain.pem ; /usr/bin/chown prosody /etc/prosody/certs/privkey.pem ; /usr/bin/chmod u-w,g-r,o-r /etc/prosody/certs/privkey.pem
    StandardError=syslog
    NotifyAccess=all
    KillSignal=SIGQUIT
    PrivateDevices=yes
    PrivateTmp=yes
    ProtectSystem=full
    ReadWriteDirectories=/etc/letsencrypt /etc/prosody/certs
    ProtectHome=yes
    NoNewPrivileges=yes
    
Note: If you are not using prosody, it is safe to remove /etc/prosody/certs from the ReadWriteDirectories directive and to drop the ExecStartPost directives altogether. They are only there to provide prosody with the certificate in a safe manner.

  • /etc/systemd/system/certbot.timer

    [Unit]
    Description=Daily renewal of Let's Encrypt's certificates
    
    [Timer]
    OnCalendar=daily
    RandomizedDelaySec=1day
    Persistent=true
    
    [Install]
    WantedBy=timers.target
    
The above timer will start the renewal process once a day, with a random delay of up to one day.
You can now easily start the service (as root):
systemctl start certbot
Enable and start its timer:
systemctl enable certbot.timer
systemctl start certbot.timer
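To confirm that the timer is actually scheduled, you can ask systemd when it will fire next:
systemctl list-timers certbot.timer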
Enjoy your encrypted services!