haha I mean my instance barely exists, a full pg_dump is just 175MB 😄
I like sysadmin, scripting, manga and football.
Everything wasss fiiiinnee. 30 seconds downtime between db backups and container restarts.
Now I can finally use Jerboa again as a client.
Nothing bad would ever happen from deploying on a friday evening right? right?
YOLO, see you on the other side
Try the Pi for tinkering since it will be cheaper. If you end up seeing issues with performance for your usage, you could start looking at used laptops or OptiPlexes.
I had some used components lying around, so I frankensteined a server out of used parts after buying some disks.
Here comes Arch Linux with a steel chair!!
Pretty sure you configure everything on the entrypoint; for the services running on your home machine it should be transparent.
I remember having to enable forwarding of the initial packet when I used to forward a web server:
iptables -A FORWARD -i eth0 -o wg0 -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT
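For context, that rule only makes sense as part of a fuller setup. A sketch of what the VPS side might look like, assuming eth0 is the public interface, wg0 the WireGuard tunnel, and 10.0.0.2 a made-up peer address behind it (adjust everything to your setup):

```shell
# Sketch, not a drop-in config -- interface names and 10.0.0.2 are assumptions.

# Forwarding is off by default on most distros
sysctl -w net.ipv4.ip_forward=1

# DNAT incoming HTTP on the public interface to the peer behind the tunnel
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.2:80

# Let the initial SYN through the FORWARD chain...
iptables -A FORWARD -i eth0 -o wg0 -p tcp --syn --dport 80 \
  -m conntrack --ctstate NEW -j ACCEPT
# ...and the rest of each established connection, both directions
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Masquerade traffic leaving through the tunnel so replies route back
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```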
Which color is hannah montana linux
My docker containers are all configured via docker compose so I just tar the .yml files and the outside data volumes and backup that to an external drive.
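Roughly like this — a sketch of the tar step, run here inside a throwaway temp sandbox so it's safe to try (swap in your real compose dir, data volumes, and external drive mount; all paths below are invented):

```shell
# Sandbox stand-ins for the real directories (assumptions, not my actual layout)
WORK=$(mktemp -d)
mkdir -p "$WORK/stacks/lemmy" "$WORK/volumes/lemmy-db" "$WORK/backup"
echo "services: {}" > "$WORK/stacks/lemmy/docker-compose.yml"

# One dated archive for the compose files, one for the bind-mounted data
tar czf "$WORK/backup/stacks-$(date +%F).tar.gz"  -C "$WORK" stacks
tar czf "$WORK/backup/volumes-$(date +%F).tar.gz" -C "$WORK" volumes

# List what landed in the stacks archive
tar tzf "$WORK/backup/stacks-$(date +%F).tar.gz"
```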
For configs living in /etc you can also back them all up, but I guess it's harder to remember what you modified and where, which is why you document your setup step by step.
Something nice and easy I use for personal documentation is mdBook.
A Debian sid user is just a future Arch Linux user.
I use arch btw
Idk what’s offsite to you, but if it’s a place you control (like at a friend’s or a family member’s) you could simply bring your device over one day and do the first copy there.
Otherwise maybe rsync a folder at a time.
It has a UPS built in 😇
Jokes aside I used to run a few python bots inside termux on my very old S3 Mini a few years ago. It did the job at least.
Say it properly “Nome”, they deprecated the G
Termux has nginx, postgres, python and plenty of stuff compiled for ARM, so I bet you can. You’d have to be wary of non-standard ports unless you have root access, and make sure Android doesn’t kill Termux or put it to sleep by adding exceptions for the app.
I remember running a few low-traffic Mastodon bots on an S3 Mini years ago and it was decent.
Kompile It yourSSelf
Run docker ps
and check which host port the container claims to be mapped to.
You only receive content updates for the communities your local users are subscribed to.
That being said, federation has been struggling significantly since the Reddit exodus, and there were times when outages of any kind might have made you lose federated content. I don’t think it’s possible to “catch up” on everything you’ve lost.
You can however, for example, search a post URL from a remote instance to force it to federate.
I guess it might increase the load slightly, but I think individual instances actually do more harm when they go down and disappear, because of the timeouts.
I run my own single-user Lemmy instance where I’m by myself, because I don’t want the extra legal burden of random users signing up here. But I also host two communities with around 100 subscribers that aren’t duplicated on other servers, so I guess it does contribute to the decentralization of the federation.
Upgraded with no issues so far, thanks for the quick update