• 3 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • I think there are two approaches to infrastructure as code (and even code in general):

    • as steps (ansible, web UI like pihole…)
    • declarative (nix, k8s, nomad, terraform…)

    Both should scale (in my company we use templating a lot), but I find the latter easier to debug because you can ‘see’ the expected end result. Ultimately, though, it boils down to personal preference.

    As for your case, ideally you don’t write custom code to generate your template (I agree with you that it’s tedious!); instead, use the templating tool of your framework of choice. You can see this example; it’s for grimd (which I forked leng from) and Nomad, but it might be useful to you.

    P.S. I also added this to the docs on signal reloading here


  • I have a similar use case where I also need my records to change dynamically.

    Leng doesn’t support nsupdate (feel free to make an issue!), but it supports changing the config file at runtime and having leng reread it by issuing a SIGUSR1 signal. I have not documented this yet (I’ll get to it today), but you can see the code here
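
    The reload mechanism can be sketched with a stand-in shell process (this is not leng itself — the config file name and the records= key are made up for illustration):

```shell
# Stand-in for leng: trap SIGUSR1 and re-read the config file on each signal.
cfg=$(mktemp)
echo "records=v1" > "$cfg"

loaded=""
trap 'loaded=$(cat "$cfg")' USR1   # the "reload" handler

echo "records=v2" > "$cfg"         # edit the config while "running"...
kill -USR1 $$                      # ...then signal it, like: pkill -USR1 -x leng
echo "$loaded"                     # prints records=v2
```

    The point is that the process keeps running across the reload, so in-flight DNS queries are not dropped the way they would be during a restart.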

    Alternatively, you can just reload the service like you do with pihole - I don’t know how quick pihole is to start, but leng should be quick enough that you won’t notice the interim period when it is restarting. This is what I used to do before I implemented signal reloading.

    Edit: my personal recommendation is that you use templating to render the config file with your new records, then reload via SIGUSR1 or restart the service. nsupdate would make leng stateful, which is not something I desire (I consider it an advantage that the config file specifies the server’s behaviour exactly).
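
    A minimal sketch of that template-then-reload flow, with sed standing in for whatever templating tool you use (the customdnsrecords key mirrors grimd-style config but is illustrative — check leng’s docs for the exact schema):

```shell
# 1. keep a template with a placeholder for the dynamic record
echo 'customdnsrecords = ["app.lan. A @RECORD_IP@"]' > leng.toml.tmpl
# 2. render it with the record's current value
sed 's/@RECORD_IP@/10.10.0.2/' leng.toml.tmpl > leng.toml
cat leng.toml   # -> customdnsrecords = ["app.lan. A 10.10.0.2"]
# 3. then reload: pkill -USR1 -x leng   (or restart the service)
```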


  • What you described is correct! How to replicate this will depend heavily on your setup.

    In my specific scenario, I make the containers of all my apps use leng as their DNS server. If you use plain Docker, see here; if you use Docker Compose, you can do:

    version: "2"
    services:
      application:
        dns: [10.10.0.0] # address of leng server here!

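    For plain Docker, the equivalent of the compose dns key is the --dns flag (illustrative — substitute your own image and leng’s actual address):

```shell
# route this container's DNS lookups through leng
docker run --rm --dns 10.10.0.0 alpine nslookup example.com
```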
    Personally, I use Nomad, so I specify that in the job file of each service.

    Then I use WireGuard as my VPN and, on my personal devices, I set the DNS field to the address of the leng server. If you would like more details, I can document this approach better in leng’s docs :). But like I said, the best way to do this won’t be the same if you don’t use Docker or WireGuard.
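
    On a personal device, that DNS field lives in the wg-quick [Interface] section — a sketch with placeholder keys and addresses (adjust the subnets to yours):

```ini
# client wg0.conf (placeholders throughout)
[Interface]
PrivateKey = <device-private-key>
Address = 10.10.0.5/32
DNS = 10.10.0.2            # leng's address on the VPN subnet

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.10.0.0/24  # or 0.0.0.0/0 to route everything through the VPN
```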

    If you are interested in Nomad and calling services by name instead of IP, you can see this tangentially related blog post of mine as well


    • Can you show the diff with your previous WG config?
    • Is 10.11.12.0/24 also on enp3s0?

    I am able to connect and can ping 10.11.12.77, the IP address of the server, but nothing else

    Does that include the wider internet, if you set your phone’s AllowedIPs to 0.0.0.0/0? This makes me think it’s a problem with the NAT rather than WireGuard itself. Also make sure IPv4 forwarding is enabled:

    sysctl -w net.ipv4.conf.default.forwarding=1
    sysctl -w net.ipv4.conf.enp3s0.forwarding=1
    
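    If forwarding is enabled and your phone still can’t reach past the server, also check for a NAT (masquerade) rule on the way out — a sketch using this thread’s subnet and interface (verify both against your own setup):

```shell
# NAT VPN clients' traffic out through the server's LAN interface
iptables -t nat -A POSTROUTING -s 10.11.13.0/24 -o enp3s0 -j MASQUERADE
```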

    Reading this article might help! I know this is not what you asked, but otherwise my approach to accessing devices on my LAN is to include them in the WG VPN as well, so that they all have an IP address on the VPN subnet (in your case 10.11.13.0/24). Bonus points for excluding your LAN guests from your selfhosted subnet.


  • Yep I am using traefik -> nginx. I simply add the traefik tags to the nginx service. I didn’t include that in the example file to keep it simple.

    As for the storage, I use SeaweedFS (it has a CSI plugin, is really cool, and works well with Nomad), but as a CSI volume it’s not suitable for backing postgres’ filesystem: lookups are so noticeably slower that your Lemmy instance will be laggy. So I decided to use a normal host volume, which lets the DB write to disk directly, and you can back that up to an S3-compatible storage with this (also cool). That could be SeaweedFS, AWS, Backblaze…
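
    As a generic sketch of that kind of backup — this is not the linked tool, just the underlying idea, with placeholder names, bucket, and endpoint:

```shell
# dump the DB from the host volume and stream it to S3-compatible storage
pg_dump -h localhost -U lemmy lemmy \
  | gzip \
  | aws s3 cp - "s3://my-backups/lemmy-$(date +%F).sql.gz" \
      --endpoint-url https://s3.example.com
```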

    I think SeaweedFS is suitable for your pictrs storage though, be it through its S3 API (supported by pictrs) or through a SeaweedFS CSI volume that stores the files directly.

    I hope that answers it! Do let me know what you end up with


  • Have you considered running your Lemmy instance on more than a single machine? If it is possible to run two Lemmy containers anyway (i.e., Lemmy is not a singleton), why not run them on separate machines? With load balancing you could achieve a more stable experience. It might also be cheaper to have many mediocre machines rather than a single powerful one, as well as more sustainable long-term (horizontal vs. vertical scaling).

    The downside would be that the set-up would be less obvious than with Docker compose and you would probably need to get into k8s/k3s/nomad territory in order to orchestrate a proper fleet.


  • There are dozens of us!

    • nomad fmt was applied already - granted, it is not a small, easy-to-read job file; it might be easier to split it up into separate jobs
    • I will look into making this into a Pack - I have never built one because I have never shared my config like this before. I don’t know how popular they are among selfhosters either!

    I think an easy first step would be to contribute a sample job file like this to the Lemmy docs website. Then people can adapt it to their setups. I find there is a lot more to configure in Nomad than in Docker Compose, for example, because you stop assuming everything will be in a single box, which changes networking considerably. There is also the question of whether to use Consul, Vault, etc.