Can confirm that there are zero ingress or egress fees, since this is not an S3-style object storage server, but a simple FTP server that also has a borg & restic module. So it simply doesn't fall into the egress/ingress cost model.
Because using a containerization system to run multiple services on the same machine is vastly superior to running everything bare metal? Both from a security and an ease-of-use standpoint. Why wouldn't you use Docker?
Caddy and Authentik play very nicely together thanks to Caddy's forward_auth directive. Regarding ACLs, you'll have to read some documentation, but it shouldn't be difficult to figure out whatsoever. The documentation and forum are great sources of info.
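As a rough sketch, the Caddy side usually looks something like this (hostnames, ports and the protected app are placeholders here, and the exact outpost path may differ between Authentik versions, so check their docs):

app.yourdomain.com {
	# every request is first sent to the Authentik outpost for an auth decision
	forward_auth authentik:9000 {
		uri /outpost.goauthentik.io/auth/caddy
		copy_headers X-Authentik-Username X-Authentik-Groups X-Authentik-Email
	}
	# only requests Authentik approves reach the actual app
	reverse_proxy app:8080
}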
AdGuard Home supports static clients. Unless the instance is only used over plain DNS (port 53, unencrypted), the far better way is to put client names into the DNS server addresses and unblock the clients based on that.
For DoT: clientname.dns.yourdomain.com
For DoH: https://dns.yourdomain.com/dns-query/clientname
A client, especially a mobile one, simply cannot guarantee always having the same IP address.
If you don't fear using a little bit of terminal, Caddy imo is the better choice. It makes SSL even more brainless (since it's 100% automatic), is very easy to configure (especially for reverse proxying) yet very powerful if you need it, has wonderful documentation and an extensive extension library, doesn't require a MySQL database that eats 200 MB of RAM, and does not have unnecessary limitations due to UI abstractions. There are many more advantages to Caddy over NPM. I have not looked back since I switched.
An example Caddyfile for reverse proxying to a Docker container from a hostname, with automatic SSL certificates, automatic websockets and all the other typical bells and whistles:
https://yourdomain.com {
	reverse_proxy radarr:7878
}
The demo instance would be their commercial service, I suppose: https://ente.io/. Since, in their own words, the GitHub code represents 1:1 the code running on their own servers, the result when self-hosting should be identical.
There's a Dockerfile that you can use for building. It barely changes the flow of how you set up the container. The bigger issue imo is that it literally is the code they use for their premium service, meaning that all the payment stuff is in there. And I don't know if the apps even have support for connecting to a custom instance.
Edit: their docs state that the apps all support custom instances, making this more intriguing.
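If you want to try that route, building from the included Dockerfile would presumably go something like this (image name, tag and published port are made up for illustration, check the repo's README for the real instructions):

# from the repo root, where the Dockerfile lives
docker build -t ente-server .
# run it; the port mapping is a placeholder, not Ente's actual port
docker run -d --name ente-server -p 8080:8080 ente-server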
You make a good point. But I still find that directly exposing a port on my home network feels more dangerous than doing so on a remote server. I want to prevent attackers from sidestepping the proxy and directly accessing the server itself, which feels more likely to allow circumventing the isolation provided by Docker in case of a breach.
Judging from a couple of articles I read online, if I wanted to publicly expose a port on my home network, I should also isolate the public-facing server from the rest of the local LAN with a VLAN. For that I'd need to first replace my router, and learn a whole lot more about networking. Doing it this way, which is basically a homemade Cloudflare Tunnel, lets me rest easier at night.
Very true! For me, that specific server was a chance to try out ARM-based servers. Also, I initially wanted to spin up something billed by the hour for testing, and then it worked so quickly that I just left it running.
But I'll keep my eye out for some low-spec, yearly-billed servers, and move sooner or later.
Your first paragraph hits the nail on the head. From what I've read, bots all over the net will find any openly exposed port in no time and start attacking it blindly, putting strain on your router and introducing a general risk to your home network.
Regarding bandwidth: 100% of the traffic via the domain name (not the local network) runs through the proxy server. But these datacenters have 1 to 10 gigabit uplinks, so the slowest link in the chain is usually your home internet connection. Which, in my case, is 500 Mbit down and 50 Mbit up. And that's easily saturated in both directions by the tunnel and VPS. Plus, streaming a 4K BluRay remux usually only requires between 35 and 40 Mbit of upload speed, so speed is rarely a worry.
Hey! I’m also running my homelab on unraid! :D
The reverse proxy basically allows you to open only one port on your machine for generic web traffic, instead of opening (and exposing) a port for each app individually. You then address each app by a certain hostname / domain path, so either something like movies.myhomelab.com or myhomelab.com/movies (see the sketch below).
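With a reverse proxy like Caddy, for example, those two styles could look roughly like this (a sketch only; the container name jellyfin and port 8096 are placeholders):

movies.myhomelab.com {
	reverse_proxy jellyfin:8096
}

myhomelab.com {
	# path-based variant: strip the /movies prefix before proxying
	handle_path /movies/* {
		reverse_proxy jellyfin:8096
	}
}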
The issue is that you'll have to point your domain directly at your home IP. Which then means that whenever you share a link to an app on your homelab, you also indirectly leak your home location (to the degree that IP geolocation allows). Which I simply do not feel comfortable with. The easy solution is running the traffic through Cloudflare (this can be set up in 15 minutes), but they impose traffic restrictions on free plans, so it's out of the question for media or cloud apps.
That's what my proxy VPS is for. Basically Cloudflare Tunnels rebuilt: an encrypted, direct tunnel between my homelab and a remote server in a datacenter, meaning I expose no port at home, and visitors connect to that datacenter IP instead of my home one. There is also no one in between my two servers, so I don't give up any privacy. Comes with near zero bandwidth loss in both directions too! And it requires near zero computational power, so it's all running on a machine costing me 3.50 a month.
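The comment doesn't name the tunnel software, but a WireGuard tunnel is one common way to build exactly this kind of setup. A minimal sketch, with all addresses, keys and the port being placeholders:

On the VPS side (e.g. /etc/wireguard/wg0.conf):

[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the homelab; only its tunnel IP is allowed through
PublicKey = <homelab-public-key>
AllowedIPs = 10.0.0.2/32

And on the homelab side:

[Interface]
Address = 10.0.0.2/24
PrivateKey = <homelab-private-key>

[Peer]
# the VPS; the homelab dials out, so no port needs to be opened at home
PublicKey = <vps-public-key>
Endpoint = vps.yourdomain.com:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25

The reverse proxy on the VPS then simply forwards public traffic to the homelab's tunnel address (10.0.0.2 here) instead of to a public IP.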
Or take GitHub out of the equation and directly use Cloudflare Pages. It has its own pros and cons, but for a simple static blog it'll be more than enough, and it takes out the CNAME hassle.
Completely off topic, but can anyone pinpoint this Christmas market? Looks hella cozy, but I don’t recognize the buildings around it.
Both UnraidFS and mergerFS can merge drives of different types and sizes into one array. They also allow removing / adding drives without disturbing the array. None of this is possible with traditional RAID (or at least not without a significant time sink for re-making the array), no matter the type of RAID you use.
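For mergerFS, pooling drives is essentially one fstab line; a rough sketch, with the mount points and placement policy being placeholder choices rather than recommendations:

# pool two (or more) already-mounted data drives into /mnt/pool; mfs = new files go to the drive with the most free space
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  allow_other,category.create=mfs  0  0

Adding another drive later is just mounting it and appending it to that branch list, which is exactly the flexibility traditional RAID doesn't give you.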