- cross-posted to:
- [email protected]
AFAIK every NAS just uses unauthenticated connections to pull containers, and I'm not sure how many even let you log in (which would raise the limit to a whopping 40 per hour).
So hopefully systems like /r/unRAID handle the throttling gracefully when clicking “update all”.
Anyone have ideas on how to set up a local Docker Hub proxy to keep the most common containers on-site instead of hitting Docker Hub every time?
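For what it's worth, the simplest approach I know of is Docker's own registry image running in pull-through cache mode. A minimal sketch, assuming you only need to mirror Docker Hub (the port, container name, and cache path are placeholders):

```
# Run the official registry image as a pull-through cache for Docker Hub.
# REGISTRY_PROXY_REMOTEURL switches it into proxy mode; cached blobs land
# in the mounted volume, so repeat pulls never leave your network.
docker run -d --name registry-mirror \
  -p 5000:5000 \
  -v /srv/registry-cache:/var/lib/registry \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
```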
Do you have a good resource for how one can go about this?
You can host your own with Harbor and set up replication per repo (pull upstream tags). If you need a commercial product/support, you can use MSR v4.
Harbor installs on any K8s cluster using Helm, with just a couple of dependencies (cert-manager, a Postgres operator, a Redis operator). The replication stuff is easy to add.
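Roughly what the install looks like with the upstream chart; a sketch only, and the release name, namespace, and hostname here are made up (this also skips the cert-manager/Postgres/Redis wiring mentioned above):

```
# Add the official Harbor chart repo and install into its own namespace.
helm repo add harbor https://helm.goharbor.io
helm repo update

# Minimal install; externalURL and the ingress host are placeholders.
helm install harbor harbor/harbor \
  --namespace harbor --create-namespace \
  --set externalURL=https://harbor.example.internal \
  --set expose.ingress.hosts.core=harbor.example.internal
```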
I have some no-warranty Terraform I could share if there's interest.
That's what we do internally for our OpenShift deployment. If an image isn't in Harbor it reaches out upstream, then caches it there for everyone else to use.
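For reference, once a Harbor project is configured as a proxy cache for Docker Hub, clients just pull through it; something like the below, where the hostname and the `dockerhub` project name are whatever you configured:

```
# Pulls go through the Harbor proxy-cache project: Harbor fetches from
# Docker Hub on a miss and serves from its own storage on every hit.
# Note: official images need the library/ prefix.
docker pull harbor.example.internal/dockerhub/library/nginx:latest
```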
I’ve only done my “is it even possible” research so far, but these look promising:
https://medium.com/@amandubey_6607/docker-registry-caching-a2dfefecfff5
https://github.com/obeone/multi-registry-cache
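If you go the pull-through cache route those links describe, the client side is just pointing the Docker daemon at your mirror. A sketch, with a placeholder mirror URL (note this overwrites any existing daemon.json, and `registry-mirrors` only applies to Docker Hub pulls):

```
# Make the local cache the first stop for Docker Hub pulls; the daemon
# falls back to Docker Hub automatically if the mirror is unreachable.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["http://registry-cache.local:5000"]
}
EOF
sudo systemctl restart docker
```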
Much appreciated <3
https://www.squid-cache.org/ should work too, I think.