Essentially the title. It can be software you can self-host, a tool that helps with self-hosting, or a tool that helps you discover new things. It doesn't matter.

Bonus: Which one of them helped NOT you, but someone you know, like a family member? i.e. is there something that your family heavily prefers that you only just discovered?

I'll start: I learned about this really cool newspaper-like RSS reader frontend called Selfoss, and it saves me from the shitshow that are /r/news and /r/worldnews. My Calibre-web instance, though, which I have been hosting for months, has helped out tons of my friends who constantly ask me if I have x or y book.

Comments (337)

Tubesync, because I'm tired of YouTube videos disappearing.

Next step is to figure out how to get Plex to display something other than the filenames.

Jellyfin has a YouTube Metadata plugin. I have a pipeline from TubeSync to Jellyfin that displays all my videos with the original YouTube metadata.

Plex does too; I'll be back shortly with a link.

Edit: YouTube Agent

RemindME! 5 days "I need to dedicate next weekend to this and possibly message someone here about how"

[deleted]


[deleted]



I tried both TubeSync and this agent, but it doesn't fetch the metadata. Since there seems to be this "weird" concept of YouTube Series and Movies: is this not meant to fetch the metadata of any given YouTube video?

Make sure you’ve read the instructions carefully. It requires the file to have certain elements, namely the video id, inside square brackets. Depending on your goals you may need to lower your expectations for what metadata is fetched: it’s really only thumbnail, title, release date, and description. Anything else, like tagging actors or directors, genres, etc., is out of the scope of this project due to the difficulty of scraping that info quickly and effectively from YouTube itself.

Thanks! I expected TubeSync to already do that, but the brackets were missing around the id. What helped me most overall, though, was turning on the XML NFO and the JSON info file in TubeSync. Plex now picks it up the way I expected!
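For anyone replicating the bracketed-id naming with plain yt-dlp (which TubeSync wraps), an output template along these lines should produce matching filenames plus a JSON info file; the URL is a placeholder, and flags may differ between yt-dlp versions:

```shell
# keep "[videoid]" in the filename and write a .info.json alongside the video
yt-dlp --write-info-json \
  -o '%(title)s [%(id)s].%(ext)s' \
  'https://www.youtube.com/watch?v=VIDEO_ID'
```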

The word "pipeline" triggered something! I've been slowly acquiring a multi-year series with a downloader and manually importing it into Sonarr.

So I just did a search and found https://www.reddit.com/r/Softwarr/comments/try200/sonarr_youtubedl_a_companion_tool_for_sonarr_to/

Thank you so much for this. My family that are fans of Abridged series will thank you through me.

> because I'm tired of YouTube videos disappearing.

Man I want to do something like this but I need much, much more storage before I can start. I have like 7 TB but I have like 20% left...

...Maybe I should do some summer cleaning lol.

I just ordered a pair of 14tb drives for this reason.

How much did it cost? I'm in India and I just want to know the price range, because I think it's sold way too expensively here.

Edit: I'm paying around $40 for 1TB. Damn, I have no idea why they are so expensive.

Right now on Amazon the 14TB Western Digital external (shuckable) is $260 USD; not the cheapest, but pretty typical for the drive when not on sale.

I was incorrect; I went with 2 12TB drives. They were $215.64 apiece. https://smile.amazon.com/gp/aw/d/B07X4V2M3B?ref=ppx_pt2_mob_b_prod_image

$14 or $15/TB is the price where I pull the trigger.

So for a 14TB drive, once it's $210 USD or cheaper I buy

IKR. Storage prices in India are obnoxious. I am looking for good cheap solutions too, so I would suggest looking at Amazon during the upcoming Prime Day sale. Maybe we'll be able to grab something cheap then.

They get it for just 18 USD. I removed GST and it was still around 28 USD for us. Wtf.

Maybe convert some h264 videos to x265. That will save some space.

Tdarr can be used.

Yeah but that needs so much encoding time!

No need to encode all videos at once; I mean dedicate 4 hours a day, maybe at night or when the server isn't busy.
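The h264-to-x265 conversion under discussion can also be done one file at a time with plain ffmpeg; the CRF and preset below are common starting points, not prescribed values:

```shell
# re-encode the video stream to HEVC/x265; copy audio and subtitles untouched
ffmpeg -i input.mkv \
  -c:v libx265 -crf 24 -preset medium \
  -c:a copy -c:s copy \
  output.mkv
```

A higher CRF gives smaller files at lower quality; slower presets trade encoding time for better compression.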

I'm looking into doing this with my entire library.

Youtube Videos take ridiculously little space.

Like 1TB per 10000 videos.

This is partially correct. It depends on the quality and codec. For 1080p you might be right, but not for 4K.

[deleted]

Same, but as like 1/4 of my archive is 4K (or at least 1440p) it doesn’t work out. But for 1080p, the numbers add up.

4K content is just for movies, and only for fantasy stuff; no one cares if Fight Club or The Gray Man is in 4K. 1080p works for 90% of the stuff, even with YouTube's codec.

I mean, rn I only have 1 4K video, but there will probably be more in the future. But that one video is already 9GB or something like that

Yeah, it's ludicrous; you can try to downsize it using the x265 codec.

I don’t want to tho. I want that 4K on my 1080p screen lmao

Aww, I need this! But my storage is almost full; I've just got 22TB left. I need to make some decisions.

Tried it, but had some performance problems. After feeding it 30-something playlists, I couldn't open the page; PHP was failing with a timeout.

I've found YoutubeDL-Material to be a nice alternative.

Oh, and about Plex — you need to:

  • enable creating NFOs for the downloaded videos (I don't remember how to enable that in TubeSync, but it should be something in the config)
  • add one of the library agents that read those NFOs to Plex, and create a library with it. None of them are perfect, but I ended up on TubeSync-NFO.

That seems like an unnecessary requirement. I'll have to see what happens with it.

Edit: I was referring to Elastic, which should be unnecessary since it's heavyweight for such a thing.

I couldn’t get the above YouTube Agent working, but the one this guy is talking about worked for me.

Well, the other option is to cram all the metadata into the filename, and that's not perfect either. NFOs allow displaying the description, channel name and link, etc.
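For context, a minimal Kodi-style NFO of the kind these agents read might look like this; the field names follow the common Kodi convention, and a given agent may only use a subset of them:

```xml
<episodedetails>
  <title>Some Video Title</title>
  <plot>The YouTube description goes here.</plot>
  <studio>Channel Name</studio>
  <aired>2022-07-18</aired>
</episodedetails>
```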

[deleted]

Oh. Indeed, that was TubeArchivist, my bad.

Just an alternative to this: https://www.tubearchivist.com/

Been using this for years. Just a heads up: I used to get all kinds of problems with TubeSync (some kind of collisions between the tasks?) even with the number of worker threads set to 1 or 0, until I switched the backend to Postgres. Since migrating, it's worked great though.

What do you mean disappearing?

Being taken down either by YouTube or by the owner.

Ah, I just tried out yt-dlp, and TubeSync seems to just be a wrapper for it. Pretty cool.

Yeah, no reason to reinvent that wheel.

Tubesync on Linux?

I'm using it in Docker. I don't know what language it is in, or if it is compiled for Windows.

Tubesync

Does this let you download them as audio?

It does, I believe; you choose audio when selecting the resolution to download the video in.

I struggled with getting Plex to display the names of YouTube videos properly. I switched to Jellyfin and it read every video without issue.

> Tubesync, because I'm tired of YouTube videos disappearing.
>
> Next step is to figure out how to get Plex to display something other than the filenames.

This is something that I would do if I started about 15 years ago.

I needed the same thing, but all I wanted was to back up the music videos I save in my playlists as mp3, so I ended up with a cron job that runs the command below daily:

    docker run --rm --user "$(id -u):$(id -g)" -d \
      -v /Music/Backup/Youtube:/media \
      --name youtube-archive tnk4on/yt-dlp \
      -x --audio-format mp3 \
      --embed-thumbnail --embed-metadata \
      --download-archive youtube.txt \
      -o '%(playlist)s/%(title)s.%(ext)s' \
      --cookies cookies.txt \
      --match-filter "playlist_title!='Favorites' & playlist_title!='Liked videos'" \
      https://www.youtube.com/user/MYUSERNAME/playlists

Had to exclude Likes and Favourites, and also generate a cookies file so it backs up my private playlists.
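For reference, the daily schedule itself is just a crontab entry; the script path and time below are placeholders, assuming the docker run command is saved in a script:

```
# run the archive job every night at 03:00 and keep a log
0 3 * * * /home/user/bin/youtube-backup.sh >> /var/log/youtube-backup.log 2>&1
```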

Kavita https://www.kavitareader.com/

It's been really nice as a self-hosted application for keeping all my reading material. The ability to stop reading on one device and immediately pick up right where I was on another device is excellent. Adding it to my phone as a chrome app is seamless.

It's the thing that is making me finally want to figure out how to access my homelab resources from off-site.

Kavita is really nice, but if you're planning on reading with your phone from outside the house, you may want to look at Komga. I went with Komga over Kavita because Komga integrates better with Tachiyomi, the leading android app for comics and manga, which means I can download chapters over wifi and take them with me, rather than use mobile data. Comics can use a surprising amount of data.

EDIT: Also, Komga has komf, a really neat metadata fetcher for manga.

I thought so too before I started using Kavita daily. For some reason I was under the impression I needed perfect Tachiyomi support (btw, Kavita has Tachiyomi support now).

But I'm using the browser now on all my devices and it works really well! The downside is I can't download my stuff, but I mostly have a wifi connection when reading, so that's fine for me.
For manga, Kavita is way superior in my opinion. The last update added a double-page reader for larger screens (like tablets etc.) and I love that so much.

Just my thoughts!

Interesting. I do like Kavita's overall interface better, to the point that I'll probably migrate whenever it rolls out Tachiyomi tracking and metadata fetching, so I'm intrigued. Can you say more about why you like it so much?

As mentioned, the double-page feature is amazing!
Overall the support on their Discord is great too. At least twice I asked there for advice and had people help me out.
I like the UI, which also got a big update in one of the last releases.
It just works very well for me. No problems overall!

Feel free to ask me specific questions :)

Edit: forgot to mention that if you use the browser you get sync across all your devices, which was very important for me!

I used to run Komga, but I would frequently catch it attempting to eat all of my available system resources just to perform a sync. Because it's written in Kotlin and runs on the JVM, it's horrible at resource management and would grind the rest of my server to a halt.

Kavita has none of those problems and I've yet to find a feature missing between the two.

And next update will be focused on performance, so kavita should get even faster!

Wait, are you the dev??? I think I just realised that! Thank you for Kavita -- it's amazing!!!!

No, haha. He just wrote that in the release notes :b

Oh whoops! Well if they see this, I hope they know we love it!

Hm, looks fairly similar to calibre-web but with less management options. The reader might be nicer than c-w though. I'll check it out, ty for sharing.

Yw! I mainly use it for manga and some PDF manuals at the moment, rather than ebooks, so I can't speak to that experience, but for comics and manga it's pretty good.

I currently use Komga for comics/manga. Also really nice, although it doesn't support epub

Barrier.

Lets me share my mouse and keyboard cross-platform.

I'm running the server on an M1 Mac and can move my mouse over to my Windows 11 PC. Makes working from home such a breeze.

Interesting. I took a look at it since it sounded a lot like Synergy. Now I see why: it's forked from the Synergy code.

I bought Synergy several years ago and it's amazing.

It's a fork of synergy.

I bought Synergy and was not able to make it work. Then I discovered Barrier, which just works better.

> I bought synergy

I did that too, because "back then" they promised to implement features for Linux. They didn't and they really won't. I regret buying it. Barrier does everything I need and doesn't disappoint me.

[deleted]

Literally just learned about it and have been low-key wanting something similar for a decade.

Been using it for a couple of years now; it works so seamlessly I don't even think about it. It starts on all of my devices at startup, so it just always works. Should it ever not, it offers logs, and you can edit the config file as a last resort, even on Windows.


I've tried this over the course of like the past six years and have literally never gotten it to work. I match and pair the machines, have services running, then apparently when it's all set up I can just never move to the other machine.

Try turning off SSL on both sides and reversing which side is the server/client.

You have to set the screen name under the settings for each computer.

Also, on the server side, you have to click Configure Server, then click and drag the two computer screens. Double-click each screen, set the screen name for each computer, and click OK.

Then reload each side: server side first, and then the client.

Also under the Barrier menu, there’s a logging window that helps.

All of this sounds familiar. Last time I tried was probably six months ago, though. I guess I'll give it another shot some time soon.

EmulatorJS, a self-hosted, web-based retro emulation front end with ROM and art management.

https://github.com/linuxserver/emulatorjs

I am curious: why did you go this route vs. just installing emulators on your computer?

It's in a browser so I can play from multiple systems and when away from home.

Very cool! What do you play from, if you don't mind my asking?

So far on Windows 11 with an Xbox One controller, which the emulator picked up automatically. I'm looking into options on Android and Fedora but haven't tried it yet.

On Android in Chrome it gives you an on-screen gamepad to play with. It's amazing.

Good call. I just tried Brave browser on Android and also get the onscreen gamepad.

I have deployed this. Very cool, and it allows me to use my Razer Kishi controller on my iPhone. I'm finding that I get no audio on my iPhone though, whereas in a desktop browser it's OK.

Do you use iOS?

Audio works on my Android. I don't have an iOS device to test.

I haven't seen that controller before but it looks nice. Has it worked well for you?

I think the button mapping is a bit off compared to the on-screen one, but it seems to work OK. Just no sound.

I’m confused. Does this load emulators and ROMs from the server and show them on the front end? Can it use any emulator? I tried googling and the docs seem lacking.

It runs everything through the browser.

It has a set emulator for each supported system.

How is the audio sync?

I've only played NES and SNES games so far, but audio sync has been great. There has been some slight audio "crackling" or "popping" on the NES emulator but very minor.



Is there any way to put authentication in front of it?

Probably easiest to put it at your reverse proxy level
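As a sketch, HTTP basic auth at an nginx reverse proxy could look like this; the hostname, upstream address, and htpasswd path are all placeholders:

```nginx
server {
    listen 443 ssl;
    server_name emulator.example.com;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;  # create with: htpasswd -c
        proxy_pass http://127.0.0.1:3000;  # wherever EmulatorJS is listening
    }
}
```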

I'm fairly new to the self-hosting community (this month I upgraded my home server from an old Celeron laptop to a full ex-gaming PC with an i5-3570K).

And my favorite discovery for an iCloud-like, folder-based photo gallery was Photoview.

This solution is quite barebones, but my use case was to make my game screenshots available on any device at the lowest cost possible. It's not perfect, but with my current screenshot library (45k files) it takes about 2 minutes to rescan for new files.

I've tried almost every actively developed photo-gallery solution, but since I don't need a lot of their mandatory functionality (like AI, faces, etc.), Photoview was the closest fit for me.

Interesting. I've just been using Nextcloud in the past for personal photo backup. For anonymous imgur-like image upload, I have a chevereto-free instance.

I didn't try the Nextcloud one; my case is less about backup and more about a timeline and fast thumbnails, with albums created from the folders on disk.

New files are almost always transferred via SMB into the corresponding folders.

Good one. Sticking with Immich for my part, even though it's still heavily WIP.

The UI is just way too important to me when it comes to a photo gallery; Immich's is also quite fast and even has an AI system as a nice bonus.

Just waiting for the album feature that should release soon.

I'm also following releases page of Immich. It looks extremely promising.

It looks sweet! What does the AI bring to the table?

Object detection. The search tab displays some "album-style" with the different objects and environments identified.

They will soon add face detection as well.

It all runs locally, ofc.

Very cool, will have to try it out!

You said any device... Is there an Android app?

It's on the priority list, as far as I can tell from the GitHub issues page, but even the iOS app isn't without bugs, so I'm just using the web version instead.

Ahh yes, web pages, the original apps. Thanks

I've become a fan of Uptime Kuma. I can even track things I want to buy but are currently out of stock. With the right keywords, the tool will send me an email once it's "back online"/back in stock.

BTW take a look at changedetection.io

That's an interesting use-case. I never thought about that, but I can see the idea. Do you know of a brief how-to?

I imagine the trick is to do something along the lines of adding a new monitor and then looking for something on the page that indicates things are available? I have an actual situation where I need to select at least one option on a page before it shows what I'm after; any way around that?

You're on the right track! :-) Just search for the keyword indicating out of stock. Or vice versa and flip the result.

Could you give a rough example. this sounds interesting.

On the dashboard you can add a new HTTP site to monitor, and I basically look for the keyword "out of stock". I let the tool check every 5 minutes.

Unfortunately, Uptime Kuma is a bit basic.

There is no incident history (there has been a PR since February), and there is no maintenance mode (a PR since January).

I use it, as I haven't found a better alternative, but it's very basic at the moment.

There is freshstatus for free status monitoring.

Tdarr! Saved 1.2TB of space from a starting point of 7TB of Linux ISOs.

The UI and setup can make a grown person cry.

Finally got around to using VidCoder for batch conversion. It's not the same, but its batch handling is better than HandBrake's, and the UI is understandable.

Not to be a pooper or anything, but wouldn't it be more effective to find a Linux ISO at your target bitrate that someone has spent a bunch of time working on, making sure it looks almost identical to the source at half the bitrate?

Content in x265 hasn't been widely distributed for very long, so you might be waiting around for a while if something isn't readily available, especially if it's not super popular.

[deleted]

You want your video lib quality to look like YouTube? LOL

[deleted]

Just transcode it then?

Edit: meant on-the-fly transcoding.

[deleted]

Oh sorry, I meant transcoding on-the-fly with something like Jellyfin.

[deleted]

Understandable. Have you looked into hardware transcoding as well? My cheap i5's iGPU can do like 12 streams in parallel, so I wouldn't be surprised if a cheap Nvidia card could do even more.

Which of AV1 or VP9 is superior? I'm still favoring h265 when I want smaller sizes because I'm still undecided on which way to go with the open-source formats.

What Is tdarr?

https://github.com/HaveAGitGat/Tdarr

Tdarr V2 is a cross-platform, conditional-based transcoding application for automating media library transcode/remux management, processing your media files as required. For example, you can set rules for the required codecs, containers, languages, etc. that your media should have, which helps keep things organized and can increase compatibility with your devices. A common use for Tdarr is simply converting video files from h264 to h265 (HEVC), saving 40%-50% in size.

Tdarr is a good way to use a rules-based approach to encoding large media libraries, with the ability to distribute the work to multiple heterogeneous worker nodes.

How did tdarr accomplish this for you?

Took me a sec, but Linux ISOs in this case are most likely media files.

Yeah, me too. I realized, but didn't want to highlight that I'd missed the joke. XD

Solely by converting to x265?

Yes, solely by that.

I have been crunching a ~16TB library and have saved 4TB and counting, nearing the end of the list.

https://i.imgur.com/5rzzGPA.png

Do you get any quality issues?

Haven't encountered any at all yet.

My Tdarr node is an i9-9900K using Quick Sync; the quality of the hardware transcoders in the later Intel Core generations is quite good. Inbuilt checks on bitrate and duration ensure that the output file is within expected norms. The checks do catch the occasional outlier, often hiccups like files with huge numbers of subtitle streams that confuse the pre-process/command generation. Outside of those issues (where I usually ignore it and leave the original file in place, then seek to redownload a more normal file to crunch again), a random sampling of my library has shown more or less identical quality to my eye.

[deleted]

Yes, 100%, and that's more or less the most common use case for Tdarr: download 4K h265, or download 1080p h264 (far more common) and compress it yourself.

With the right settings (and many of the default/community plugins are configured correctly out of the box) you can compress h264 down to around half the bitrate in h265 easily.

That's because x265 isn't much more efficient on lower-res content (1080p and below), especially when you double-encode something and start to add generational losses.

Some people are just finding out that they can't actually tell the difference, which is fine. If you're using Sonarr or Radarr, I strongly suggest sticking to x264 and just downloading lower-bitrate and lower-res (720p, 576p, 480p) content. Don't be afraid of lower resolutions; otherwise you're just going to bit-starve the video. There are a lot of considerations that go into encoding; dedicated groups work hard on optimizing for each video, and this program can't replicate that.

Yes, your mileage may vary of course, but I've so far shrunk 3.2 TB to 2.3 TB.

It's quite CPU-intensive (unless you have hardware encoding via GPU), so I expect it to take at least a year until my Linux ISO collection is done.

Unmanic is also an option for this. I switched away from tdarr to unmanic looking for something that was easier to manage.

Beatbump: YouTube Music in the style of Invidious. It'll automatically create a playlist of similar music based on a song/artist/album, and it will continue playing in the background on Android with the screen off. I just found it and I love it. Apparently a big update, essentially a rewrite, is coming in a day or two as well.

https://github.com/snuffyDev/Beatbump

Docker image here, but it's in its infancy. It listens on port 3000.

https://hub.docker.com/r/snuffydev/beatbump

So basically this picks up the ball that Google dropped? I'm pretty sick of music streaming apps since Google Play Music shut down; YouTube Music is still complete garbage, and Apple Music is at least reliable.

It doesn't let me upload my own music like Google Play Music did, but I've settled with plex/plexamp for that. But it's pretty great for finding new music similar to other songs/artists I like, quickly pulling up an album or something someone requests that I haven't downloaded, or just adding to my wife's home page on her phone so she doesn't have to bother with downloading or asking me to download. And it's ad free. The dev has a link to a full featured demo up in the readme on his github to play around with if you're curious before you go through the work of spinning up your own instance.

I think, based on the math I did years ago, if you only downloaded FLAC files you would only need like 480TB of disks to download all mastered music. So I'm sure if someone hooked this recommendation engine up to Funkwhale or something, it would be pretty feasible to compete with the big streaming services.

Dev here, I just managed to get my Docker install to FINALLY work. So a more official and updated Docker image is in the works!

Dude, you are awesome. The old one works/worked just fine by the way, I have two of them up and running.

Thank you! I'm happy to hear you find my project useful! Working on Docker now (still trying to learn how to use it of course!). While currently the rewrite is somewhat broken on mobile, if you check the official instance that's listed in the GitHub README the rewrite is live.

Thanks again for posting my project! It's appreciated!

Updated the image (hopefully it should work!)

Awesome! Image pulled and updated just fine, seeing all the new settings, but I can't get anything to play. Most of the time nothing happens, occasionally I've gotten the error "no source" or "interrupted due to another request." Unfortunately I'm probably too stupid to help you debug and it's always possible the problem is on my end, but let me know if there's any info I could offer. I've tried http and hls streams. The place for "hls stream proxy" is blank, don't know if that matters because it's blank on your public instance as well, which works just fine.

EDIT: Working like a charm again now

Listening on port 3000 is the only thing it needs to do. Most people just stick these things behind a reverse proxy with automatic TLS via Let's Encrypt.

I recently discovered Portainer; it's a web-based Docker interface, so I can host Pi-hole and Home Assistant directly through it! I installed it on Raspberry Pi OS Lite in case I ever wish to install anything outside of Docker, but I most likely won't have to!

If you're doing Docker Swarm, Portainer agents/Edge are pretty useful too, and allow you to manage all of your Docker instances across your VPSes! Just make sure you have 2377, 8000, 9001, 7946 and 4789 open to your main Portainer instance (they don't tell you about some of these ports even in the install guide). I just did a ufw allow from my local WireGuard subnet.
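The firewall part described above might look like this with ufw; the WireGuard subnet is an example value, and the port list matches the ones mentioned:

```shell
# allow Swarm/Portainer agent ports only from the local WireGuard subnet
for port in 2377 8000 9001 7946 4789; do
  sudo ufw allow from 10.8.0.0/24 to any port "$port"
done
```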

Maybe a bit off topic, but I still fail to see the use of Portainer. I just copy/paste the docker-compose.yml files and up them. Why fiddle with all the pointing and clicking to do that?

I mainly do this to get a centralized overview of all my machines. No SSH into any instance needed. I really like it, although I am more of a console guy.

Same reason I use the Jellyfin settings page instead of sshing into the container and editing the config file

Not really the same thing. Presumably you only need to set up Jellyfin once, while you probably have a bunch of containers, all with existing, well-configured docker-compose files.

Nah, like for maintenance.

Why SSH into a container to rescan my library when I can just click a button

I'm always in ssh anyway. Running another command is no big deal, much less than having to open yet another tab, start Portainer, find what I need to do etc etc. Anyway, I guess we live in different worlds. Whatever works for you :)

I do 90% of everything from my phone. Buttons are just easier for me

I see where you're coming from. I too enjoy having an overview of everything and the slight automation it offers. It's mainly a comfort thing; easily being able to see which port is which without typing a bunch of commands feels nice.

"Hm, I wonder what ip addresses the networks attached to this docker instance are?" (Helpful when you're setting up matrix without having to inspect each of them, it just shows em).

"Hm, I need to clean my docker images" (Instead of doing docker image rm x y z a b c)

"Hm, I need to see what's going on with the swarm and what populated where"

"Hm, I need to see what I already have running categorized nicely across all my VPSes and networks"

etc etc

  • Using nginx-proxy-manager I use a standard network with all my containers so I always know which it is.
  • docker system prune -a
  • don't use swarm
  • docker ps (already have an alias with the details I need)

> Using nginx-proxy-manager I use a standard network with all my containers so I always know which it is.

Do you know what docker IP address the latest mautrix-whatsapp instance started on, or what it might be if you restart docker? :P

> docker system prune -a

That deletes the images I want to keep.

> docker ps (already have an alias with the details I need)

docker ps output is a mess of information lol, and it does not show me shit across the networks.


As I said, it might not be for you or your use cases, but it's pretty darn useful.

Easier to refresh the container if needed.

Particularly, you can use it for an organization, so you can have different access levels. People can then only control what they are given access to, instead of everything.

Refresh? What do you mean, update? How can it be any easier than docker-compose pull && docker-compose up -d ?

Two clicks in portainer. No need to ssh in.

I love Portainer; OpenMediaVault even has a one-click install for it, which is how I learned Docker to begin with.

I've been thinking of making an RPi NAS with openmediavault. I just can't find a solid way to connect storage if I go with external drives or a HDD enclosure.

I definitely recommend it, especially if you're new to Docker/self-hosting, since it simplifies most things to toggle switches instead of 50 CLI commands.

Nextcloud News, a cross-platform RSS reader that I can sync with Nextcloud.

Have you actually started using it?

I tried importing my feeds from Inoreader, but the experience just wasn't that good. I can't put my finger on it, but all the whitespace was part of it. It also didn't seem all that predictable in how/when it marks things as read.

To me, reliable read-marking is key, because I subscribe to some high-volume sites I click through without reading more than headlines, and some very low-volume ones that I want to be sure to get to.

Yes, it is a major improvement over Akregator which was desktop-only.

My only problem is that I am unable to go back to posts that are already marked as read unless they are starred.

I haven't experienced your problem with marking as read.

It's really awesome and I did use it for a while as well. I ultimately switched to FreshRSS, though, because it offers a nicer UI and seems to be a bit faster as well.

Immich! A self-hosted Google Photos alternative. Still in the early stages of development, but it's already stable and awesome.

I tried getting it going, but unfortunately spinning it up is a little beyond my capability right now. Hopefully it's simplified a bit by its first stable version. Still impressed with the goals, though.

Honestly I'm a huge noob and it's really not bad. If you've ever used docker compose, the only thing you need to do is set 2 or 3 basic things in the .env file and then do 'docker compose up' and it does everything for you. It will be online on port 2283
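As an illustrative sketch of that flow (the variable names here are from memory and may differ between Immich versions; check the project's example .env file):

```
# .env (example values only)
UPLOAD_LOCATION=/srv/immich/uploads
DB_PASSWORD=change-me
```

followed by `docker compose up -d` in the same directory.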

Oh, they created images, I guess. Earlier it used to build the images locally, and that was hard.

I was a fan of PhotoPrism because of its ability to back up Live Photos from iOS devices. Guess I'll have to check out Immich.

Edit: no Live Photo support in immich. Issue can be found here.

https://github.com/alextran1502/immich/issues/160

My biggest beef with PhotoPrism is that it only supports one user, and that user is the admin. So if you want to share it with family, you have to give them all the one admin account.

[deleted]

That sounds amazing. Would you happen to have that shortcut sharable?

https://github.com/alexta69/metube#bookmarklet

What other things have you done relevant to self-hosting with iOS shortcuts?

Tandoor, the recipe service. It's girlfriend approved too!

Also check out Mealie

Also check out homechart. I've been running mealie and about to switch over I think.

Technically not self-hosted but it comes with having your own domain.

I've been using Cloudflare mail routing for a while but only this weekend I realized you can integrate the Catch-All feature with Bitwarden to generate a random user@yourdomain.tld in the username generator.

I was today years old when I discovered this trick!
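The generator side of the trick is easy to reproduce; here's a minimal sketch of what a random catch-all alias amounts to (the function name and length are my own choices, not Bitwarden's actual generator):

```python
import secrets
import string

def random_alias(domain, length=12):
    """Build a throwaway catch-all address like k3x9q2abc0de@yourdomain.tld.

    Any local part works because the catch-all route accepts everything;
    a unique random one per signup tells you later who leaked the address.
    """
    alphabet = string.ascii_lowercase + string.digits
    local = "".join(secrets.choice(alphabet) for _ in range(length))
    return f"{local}@{domain}"
```

Use one alias per signup; if spam ever arrives at one of them, you know exactly which service to blame and can drop that route.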

Actually, I'm having a look at FreeIPA to replace my OpenLDAP and PowerDNS (recursor & DNS server) instances.

Think it will integrate well with my Keycloak instance, too.

If you figure out how to do it please let me know

Have you given it a try already?

The actual FreeIPA setup process is straightforward.

It's just that the DNS part gets confusing, and the official instructions assume a lot.

I can confirm that. I thought it would be easy since I have a second domain to build the "new" directory on, but I actually got stuck on switching the recursor over from PDNS to FreeIPA.

These are words, but no idea what they mean. Tell your DHCP server that the FreeIPA instance is the DNS server for clients now (if you're using the DHCP built into IPA).

The DNS part is pretty straightforward if you know about DNS. The complexities come with the SRV records that Active Directory clients require for automatic lookup.

The "all in one" IPA is more or less plug-and-play. Turn off your existing DHCP and DNS, set it up, and run with it.

If you want to use your own DHCP server, then you need to configure it with auth to create DNS records (kerberos requires DNS for host principals, which is also why AD needs it) so DHCP clients can create records.

If you want to use your own DNS, you need to configure it to allow auth for the DHCP server to create records. Same as above.

If you use the IPA builtins (dhcpd, BIND), you don't need to do anything.

To "switch the recursor" (making the IPA server an authoritative recursing DNS server for your local domain), change the address in your DHCP server. To add your existing PowerDNS server/domain as lookups, add it as a forwarder.
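Since IPA drives BIND underneath, that forwarder corresponds to a few lines of named.conf; a hedged sketch with placeholder addresses (in practice you'd let `ipa dnsforwardzone-add` manage this rather than editing BIND's config by hand):

```
// named.conf fragment: send queries for the legacy zone to PowerDNS
zone "home.domainA.de" {
    type forward;
    forward only;
    forwarders { 192.168.10.11; };   // the PowerDNS authoritative server
};
```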

This is my initial setup:

  • DHCP: Ubiquiti UDM-Pro
  • DNS: an LXC with PDNS-Recursor that passes queries for my internal domain (domain A) to another LXC with the PDNS auth server

The first milestone should be switching DNS from the recursor LXC to the FreeIPA VM. I have already set up domain B on FreeIPA, but I'm having problems pointing DNS queries from there to the PDNS auth server so that both internal domains are resolved.

The docs are really clear. ipa dnsrecord-add example.com. fw --ns-rec=<nameserver>, where <nameserver> will be the authoritative one.

The grammar you're using will be confusing, because a "recursor" may not be meaningful. Is it actually just a recursive nameserver (as-in it performs all the lookups itself)? Does it forward requests somewhere if they're not in the authoritative one and cache results?

IPA uses BIND on the backend. Which means that: allow-recursion { any; } should be in /etc/named.conf (IPA has added this by default for a while, but you can look at the BIND docs to filter this down to only trusted hosts if you're worried about security and reflection attacks, assuming your DNS server is available to outside clients).

Realistically, if the PowerDNS authoritative server isn't doing dynamic DHCP/DNS, it would be easiest to just forward/export the zone and let IPA handle both, which is something you'll have to do eventually if you want to decommission it. Forwarding is fine for now.

Again, though, IPA is "DNS+DHCP+krb5+LDAP+ntp" (largely because working time sync is required for krb5, and so is working dynamic DNS). The LDAP is actually 389ds, which comes out of Netscape Directory Server (this also became Sun/Oracle Directory Server, and 389 is used as the backend for Dogtag). It's not quite the same as OpenLDAP, and your LDIFs may need modification.

If you want to use the krb/SSO parts of IPA, your first "milestone" must be "DHCP clients get a DNS record dynamically". That's the hard part. If you use all the built-in stuff, it's easy. If you want to use your own DHCP/DNS, it can be rough. Ubiquiti requires a bunch of global-parameter in the config. Setting up a relay may be easier. Kerberos/SSO will not work without this. You don't have to forward to BIND/IPA if you set up DDNS another way (I never used the IPA DNS/DHCP), but DDNS has to work.

On the other hand, IPA basically "looks like" AD LDAP to outside clients, so that integration is sometimes easier.

First of all, thanks again for all that input! As you can see, I am no DNS expert right now! :-)

The docs are really clear. ipa dnsrecord-add example.com. fw --ns-rec=<nameserver>, where <nameserver> will be the authoritative one.

This i tried already, but i get this error message:

ipa: ERROR: DNS zone home.domainA.de. already exists in DNS and is handled by server(s): 192.168.10.11.

(192.168.10.11 is my pdns auth. server, 192.168.10.150 is my pdns-recursor)

On the PDNS recursor I just had to configure the forward zones to the DNS server for each domain. I would like to do exactly this on FreeIPA at first, so that I can play around with domainB before I make any changes to my "working" domainA:

forward-zones=home.domainA.de=192.168.10.11:53, home.domainB.de=192.168.1.216:53;8.8.8.8

If you want to use the krb/SSO parts of IPA, your first "milestone" must be "DHCP clients get a DNS record dynamically". That's the hard part.

What I want to achieve is that all my hosts, especially LXCs and VMs, are "managed" by FreeIPA.

This means for me:

  • I can use it as user federation for keycloak (already setup with ldap)
  • I can use it as inventory source for ansible
  • I can use it as ldap source for icinga2
  • Management of DNS-entries (best case automatically, worst case manually)
  • User management (SSH-Keys, smb-share permissions, automount)
  • And maybe some more cool stuff... :-)

Seems like there's a long way to go and this "project" will last more than just one month :D

This i tried already, but i get this error message:

ipa: ERROR: DNS zone home.domainA.de. already exists in DNS and is handled by server(s): 192.168.10.11.

ipa dnsforwardzone-add --skip-overlap-check home.domainA.de --forwarder=192.168.10.11 --forward-policy=only

What I want to achieve is that all my hosts, especially LXCs and VMs, are "managed" by FreeIPA.

This means for me:

I can use it as user federation for keycloak (already setup with ldap)
I can use it as inventory source for ansible
I can use it as ldap source for icinga2
Management of DNS-entries (best case automatically, worst case manually)
User management (SSH-Keys, smb-share permissions, automount)
And maybe some more cool stuff... :-)

Seems like there's a long way to go and this "project" will last more than just one month :D

All of the host management is more or less reliant on the other pieces ticking. User management isn't really SSH keys (you could set up Dogtag for SSH cert management), but "SSO via krb+LDAP".

"Best case automatically" is also kinda "doesn't work without this". Not to put too fine a point on it, but IPA doesn't ship with DHCP+DNS integration for nothing. It's because it's part and parcel of krb. Almost everything you "want" from IPA (one user -> one password -> works everywhere with SAML, or ties into OIDC/oauth if you want it that way; hosts in LDAP dynamically for inventory/etc) is gonna hinge on getting DDNS working, not forwarding.

Take me seriously when I say that this should be your first step.

OK, I am going to take your advice seriously, and I think I know what you mean.

I had already activated DDNS in the global settings and on my zone, but I didn't enable it on my test client, so I reinstalled the ipa-client with:

(followed this doc: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-dynamic-dns-updates)

ipa-client-install --enable-dns-updates --mkhomedir

First good thing: the host was automatically added to DNS with its IP (great!)

Then I changed the IP on my UDM-Pro and restarted the test client: the IP changed, but the DNS entry has not been updated so far - maybe I have to wait for the TTL to expire, but I hoped the IP would be updated after a reboot of the machine.

It should be more than this. What actually happens is that, generally, the key should be used to also assign a hash as a TXT record which says: "I created this record for this client". Such as:

xerxes                  A       192.168.0.183
                        TXT     "31bc319ecf2f67520af1c686d18bda0737"

When an address is requested, without any mapping already existing, a brand-new client will say: "hey, my name is Nest-Thermostat", as part of the DHCP option requests (almost nothing requires a manual set-hostname ... in 2022), and it will happily issue an address and create a record.

Again, like:

Nest-Thermostat-BD3B    A       192.168.0.101
                        TXT     "00c0868798312b180055e3d0e30b63e3a5"

You can set static reservations outside of this, but every client which gets an address gets a corresponding DNS record, with no intervention on your part. IPA does this out of the box. So does AD. And neither one works well without it. ipa-client-install configures sssd for you, but it used to be (and almost certainly still is) possible to just configure nsswitch.conf or sssd or krb5.conf or whatever yourself, to "make" IPA talk to systems which don't have ipa-client (but they do support krb5 and LDAP, because basically everything does).
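The A+TXT pairing described above can be sketched in a few lines. This is only an illustration of the idea -- dhcpd's interim-DDNS scheme really does store an MD5-derived token in a TXT record, but the exact input format here is made up:

```python
import hashlib

def ddns_records(hostname, ip, client_id):
    """Sketch the A record plus the ownership TXT record a DDNS-updating
    DHCP server creates for a newly seen client. The TXT token lets the
    server refuse later updates from a different client claiming the
    same name."""
    token = hashlib.md5(client_id.encode()).hexdigest()
    return [
        f"{hostname:<23} A       {ip}",
        f"{'':<23} TXT     \"{token}\"",
    ]
```

A record pair generated this way looks just like the zone excerpts above: the A record answers lookups, and the TXT record proves who created it.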

TTL is just for the lookup on the record. ipa-client --enable-dns-updates can allow "road-warrior" clients which get an address from a VPN concentrator or something hanging off your infrastructure to say "hey, this is my IP, let me kinit now", but it's not required. IPA isn't required at all for these updates to work, but these updates are required for IPA/AD to work with dynamic addressing.

The options in the unifi config I linked earlier say, more or less, "if a client requests an address in this block, update that domain with a specified key to write a record". The records look like the ones above.

I followed your advice and "moved" the zone of domainA to FreeIPA first, so that I could turn off both PDNS instances already. And it was a good decision, because it already solved one problem with DDNS. But DDNS is still an open challenge I am working on. (IPA cannot be resolved…)

But nevertheless: I could already turn off 3 LXCs and 1 Docker container:

  • PDNS-Recursor
  • PDNS-Auth.Server
  • OpenLDAP
  • LDAPmyAdmin

Many thanks to you, @readonly12345! You pointed me the right way!

What do you mean about IPA cannot be resolved for DDNS? That the DHCP server can't authenticate with the key? That the client cannot query DNS?

The dhcp server logs and bind logs from the IPA server may be enlightening.

I already had a look at the logs but didn't dig deeper yet. Think this will be my task for the next few hours.

Here's part of my logs:

* (2022-07-18 22:59:28): [be[home.domainB.de]] [resolv_discover_srv_done] (0x0040): [RID#1] SRV query failed [4]: Domain name not found
********************** BACKTRACE DUMP ENDS HERE *********************************
(2022-07-18 22:59:28): [be[home.domainB.de]] [resolve_srv_done] (0x0040): [RID#1] Unable to resolve SRV [1432158237]: SRV record not found
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2022-07-18 22:59:28): [be[home.domainB.de]] [fo_set_port_status] (0x0100): [RID#1] Marking port 0 of server '(no name)' as 'not working'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [resolve_srv_done] (0x0040): [RID#1] Unable to resolve SRV [1432158237]: SRV record not found
********************** BACKTRACE DUMP ENDS HERE *********************************
(2022-07-18 22:59:28): [be[home.domainB.de]] [fo_get_server_hostent] (0x0020): [RID#1] Bug: Trying to get hostent from a name-less server
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2022-07-18 22:59:28): [be[home.domainB.de]] [set_srv_data_status] (0x0100): [RID#1] Marking SRV lookup of service 'IPA' as 'not resolved'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [be_resolve_server_process] (0x0080): [RID#1] Couldn't resolve server (SRV lookup meta-server), resolver returned [1432158237]: SRV record not found
* (2022-07-18 22:59:28): [be[home.domainB.de]] [be_resolve_server_process] (0x1000): [RID#1] Trying with the next one!
* (2022-07-18 22:59:28): [be[home.domainB.de]] [fo_resolve_service_send] (0x0100): [RID#1] Trying to resolve service 'IPA'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [get_server_status] (0x1000): [RID#1] Status of server 'ds01.home.domainB.de' is 'name not resolved'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [get_port_status] (0x1000): [RID#1] Port status of port 0 for server 'ds01.home.domainB.de' is 'neutral'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [fo_resolve_service_activate_timeout] (0x2000): [RID#1] Resolve timeout [dns_resolver_timeout] set to 6 seconds
* (2022-07-18 22:59:28): [be[home.domainB.de]] [get_server_status] (0x1000): [RID#1] Status of server 'ds01.home.domainB.de' is 'name not resolved'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [resolv_is_address] (0x4000): [RID#1] [ds01.home.domainB.de] does not look like an IP address
* (2022-07-18 22:59:28): [be[home.domainB.de]] [resolv_gethostbyname_step] (0x2000): [RID#1] Querying files
* (2022-07-18 22:59:28): [be[home.domainB.de]] [resolv_gethostbyname_files_send] (0x0100): [RID#1] Trying to resolve A record of 'ds01.home.domainB.de' in files
* (2022-07-18 22:59:28): [be[home.domainB.de]] [set_server_common_status] (0x0100): [RID#1] Marking server 'ds01.home.domainB.de' as 'resolving name'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [set_server_common_status] (0x0100): [RID#1] Marking server 'ds01.home.domainB.de' as 'name resolved'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [be_resolve_server_process] (0x1000): [RID#1] Saving the first resolved server
* (2022-07-18 22:59:28): [be[home.domainB.de]] [be_resolve_server_process] (0x0200): [RID#1] Found address for server ds01.home.domainB.de: [192.168.1.216] TTL 7200
* (2022-07-18 22:59:28): [be[home.domainB.de]] [ipa_resolve_callback] (0x0400): [RID#1] Constructed uri 'ldap://ds01.home.domainB.de'
* (2022-07-18 22:59:28): [be[home.domainB.de]] [fo_get_server_hostent] (0x0020): [RID#1] Bug: Trying to get hostent from a name-less server
********************** BACKTRACE DUMP ENDS HERE *********************************
(2022-07-18 22:59:28): [be[home.domainB.de]] [write_krb5info_file_from_fo_server] (0x0020): [RID#1] Server without name and address found in list.

Ok, so this isn't quite the same. Is domainB the one which was previously managed by powerdns?

Debug logs would help, but it looks like the clients are not on the same domain as the IPA server (is it on domainA?). How are your domains laid out?

Can you compare the records from ipa dns-update-system-records --dry-run to what's there?

The client is a test VM (Ubuntu) I created only for FreeIPA tests / learning.

The FreeIPA server is: ds01.home.domainB.de

The hostname of the test VM is in the same domain:

root@ubuntu-vm01:/home/alexander# hostnamectl
Static hostname: ubuntu-vm01.home.domainB.de

In /etc/hosts I've got:

192.168.50.15 ubuntu-vm01.home.domainB.de ubuntu-vm01
127.0.0.1 localhost
127.0.1.1 ubuntu-vm01.home.domainB.de ubuntu-vm01
192.168.1.216 ds01.home.domainB.de ds01

FreeIPA is:

[root@ds01 ~]# hostnamectl
Static hostname: ds01.home.domainB.de

In /etc/hosts I've got:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.216 ds01.home.domainB.de
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

In FreeIPA DNS, three zones are configured:

Could it be possible that it is an issue with the reverse lookup zone? The client is in a different subnet (192.168.50.17)? (Do I need one for every subnet?)

DomainB wasn't managed by PDNS before.

I reinstalled the client; now I get an error that the domain name is misformatted:

* (2022-07-19 21:30:36): [be[home.domainB.de]] [be_ptask_enable] (0x0080): [RID#1] Task [Dyndns update]: already enabled
* (2022-07-19 21:30:46): [be[home.domainB.de]] [be_ptask_execute] (0x0400): Task [Dyndns update]: executing task, timeout 0 seconds
* (2022-07-19 21:30:46): [be[home.domainB.de]] [ipa_dyndns_update_send] (0x0400): Performing update
* (2022-07-19 21:30:46): [be[home.domainB.de]] [sdap_id_op_connect_step] (0x4000): reusing cached connection
* (2022-07-19 21:30:46): [be[home.domainB.de]] [check_ipv6_addr] (0x0200): Link local IPv6 address fe80::a251:5b50:48b:7ffd
* (2022-07-19 21:30:46): [be[home.domainB.de]] [resolv_is_address] (0x4000): [ubuntu-vm01.home.domainB.de] does not look like an IP address
* (2022-07-19 21:30:46): [be[home.domainB.de]] [resolv_gethostbyname_step] (0x2000): Querying DNS
* (2022-07-19 21:30:46): [be[home.domainB.de]] [resolv_gethostbyname_dns_query] (0x0100): Trying to resolve A record of 'ubuntu-vm01.home.domainB.de' in DNS
* (2022-07-19 21:30:46): [be[home.domainB.de]] [schedule_request_timeout] (0x2000): Scheduling a timeout of 3 seconds
* (2022-07-19 21:30:46): [be[home.domainB.de]] [schedule_timeout_watcher] (0x2000): Scheduling DNS timeout watcher
* (2022-07-19 21:30:46): [be[home.domainB.de]] [unschedule_timeout_watcher] (0x4000): Unscheduling DNS timeout watcher
* (2022-07-19 21:30:46): [be[home.domainB.de]] [resolv_gethostbyname_dns_parse] (0x1000): Parsing an A reply
* (2022-07-19 21:30:46): [be[home.domainB.de]] [request_watch_destructor] (0x0400): Deleting request watch
* (2022-07-19 21:30:46): [be[home.domainB.de]] [resolv_is_address] (0x4000): [ubuntu-vm01.home.domainB.de] does not look like an IP address
* (2022-07-19 21:30:46): [be[home.domainB.de]] [resolv_gethostbyname_step] (0x2000): Querying DNS
* (2022-07-19 21:30:46): [be[home.domainB.de]] [resolv_gethostbyname_dns_query] (0x0100): Trying to resolve AAAA record of 'ubuntu-vm01.home.domainB.de' in DNS
* (2022-07-19 21:30:46): [be[home.domainB.de]] [schedule_request_timeout] (0x2000): Scheduling a timeout of 3 seconds
* (2022-07-19 21:30:46): [be[home.domainB.de]] [schedule_timeout_watcher] (0x2000): Scheduling DNS timeout watcher
* (2022-07-19 21:30:46): [be[home.domainB.de]] [unschedule_timeout_watcher] (0x4000): Unscheduling DNS timeout watcher
* (2022-07-19 21:30:46): [be[home.domainB.de]] [request_watch_destructor] (0x0400): Deleting request watch
* (2022-07-19 21:30:46): [be[home.domainB.de]] [resolv_gethostbyname_done] (0x0040): querying hosts database failed [5]: Eingabe-/Ausgabefehler
********************** BACKTRACE DUMP ENDS HERE *********************************
(2022-07-19 21:30:46): [be[home.domainB.de]] [nsupdate_get_addrs_done] (0x0040): Could not resolve address for this machine, error [5]: Eingabe-/Ausgabefehler, resolver returned: [8]: Misformatted domain name
(2022-07-19 21:30:46): [be[home.domainB.de]] [nsupdate_get_addrs_done] (0x0040): nsupdate_get_addrs_done failed: [5]: [Eingabe-/Ausgabefehler]
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2022-07-19 21:30:46): [be[home.domainB.de]] [nsupdate_get_addrs_done] (0x0040): Could not resolve address for this machine, error [5]: Eingabe-/Ausgabefehler, resolver returned: [8]: Misformatted domain name
* (2022-07-19 21:30:46): [be[home.domainB.de]] [nsupdate_get_addrs_done] (0x0040): nsupdate_get_addrs_done failed: [5]: [Eingabe-/Ausgabefehler]
********************** BACKTRACE DUMP ENDS HERE *********************************
(2022-07-19 21:30:46): [be[home.domainB.de]] [sdap_dyndns_dns_addrs_done] (0x0040): Could not receive list of current addresses [5]: Eingabe-/Ausgabefehler
(2022-07-19 21:30:46): [be[home.domainB.de]] [ipa_dyndns_sdap_update_done] (0x0040): Dynamic DNS update failed [5]: Eingabe-/Ausgabefehler
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2022-07-19 21:30:46): [be[home.domainB.de]] [sdap_dyndns_dns_addrs_done] (0x0040): Could not receive list of current addresses [5]: Eingabe-/Ausgabefehler
* (2022-07-19 21:30:46): [be[home.domainB.de]] [ipa_dyndns_sdap_update_done] (0x0040): Dynamic DNS update failed [5]: Eingabe-/Ausgabefehler
********************** BACKTRACE DUMP ENDS HERE *********************************
(2022-07-19 21:30:46): [be[home.domainB.de]] [be_ptask_done] (0x0040): Task [Dyndns update]: failed with [5]: Eingabe-/Ausgabefehler
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:
* (2022-07-19 21:30:46): [be[home.domainB.de]] [sdap_id_op_destroy] (0x4000): releasing operation connection
* (2022-07-19 21:30:46): [be[home.domainB.de]] [be_ptask_done] (0x0040): Task [Dyndns update]: failed with [5]: Eingabe-/Ausgabefehler
********************** BACKTRACE DUMP ENDS HERE *********************************

Think I found another hint on a fresh reinstall of the client:

Missing A/AAAA record(s) for host ubuntu-vm01.home.domainB.de: 192.168.50.18.

Extra A/AAAA record(s) for host ubuntu-vm01.home.domainB.de: 192.168.50.15, 127.0.1.1.

Incorrect reverse record(s): 192.168.50.18 is pointing to ubuntu-vm01. instead of ubuntu-vm01.home.domainB.de.

But I don't understand where ubuntu-vm01 is coming from; I checked:

  • hostnamectl
  • hostname
  • hostname --fqdn
  • /etc/hosts

/etc/hosts won't matter at all, since it's trying to look up a SRV record. It cannot and should not be faked this way.

So, yes, you should have a reverse lookup zone for every block if you want DDNS to work. The basic config for unifi was in the earlier link.

        global-parameters "option domain-name "home.first";
        global-parameters "key dhcp-edgerouter-home.first. { algorithm HMAC-MD5; secret "KeyFor_home.first"; };"
        global-parameters "zone home.first. { primary 192.168.2.20; key dhcp-edgerouter-home.first.; }"
        global-parameters "ddns-domainname "home.first.";"
        global-parameters "ddns-rev-domainname "in-addr.arpa.";"
        global-parameters "zone home.first. { primary 192.168.2.20; key dhcp-edgerouter-home.first.; }"
        global-parameters "zone 2.168.192.in-addr.arpa. { primary 192.168.2.20; key dhcp-edgerouter-home.first.; }"
        global-parameters "option domain-name "home.second";"
        global-parameters "key dhcp-edgerouter-home.second. { algorithm hmac-md5; secret "KeyFor_home.second"; };"
        global-parameters "zone home.second. { primary 192.168.2.20; key dhcp-edgerouter-home.second.; }"
        global-parameters "ddns-domainname "home.second.";"
        global-parameters "zone in-addr.arpa. { primary 192.168.2.20; key dhcp-edgerouter-home.second.; }"
        global-parameters "zone 3.168.192.in-addr.arpa. { primary 192.168.2.20; key dhcp-edgerouter-home.second.; }"
        global-parameters "update-static-leases on;"
        global-parameters "ddns-updates on;"

This is telling dhcpd "when an address in 192.168.2.0 is issued, update both the zone and the rdns using the specified key to issue a request to BIND".

If you actually query the IPA DNS, does it know about ubuntu-vm01.home.domainB.de? Does it have a PTR for the address? Are there SRV records for _ldap._tcp.home.domainB.de and so on?

this is the result of dig:

; <<>> DiG 9.10.6 <<>> @ds01.home.domainB.de ubuntu-vm01.home.domainB.de
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19972
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232

;; QUESTION SECTION:
;ubuntu-vm01.home.domainB.de. IN A

;; ANSWER SECTION:
ubuntu-vm01.home.domainB.de. 1200 IN A 192.168.50.18

;; Query time: 4 msec
;; SERVER: 192.168.1.216#53(192.168.1.216)
;; WHEN: Tue Jul 19 21:57:26 CEST 2022
;; MSG SIZE rcvd: 69

During the installation of the ipa-client, just the A record and some SSHFP records are added automatically - no PTR or SRV records. The record in the reverse lookup zone is missing, too.

(I already added the reverse lookup zones for each network)

This is telling dhcpd "when an address in 192.168.2.0 is issued, update both the zone and the rdns using the specified key to issue a request to BIND".

Isn't this done by sssd, too? In sssd.conf I set:

dyndns_update = True

The SRV records are added by the IPA server itself.

dig -x 192.168.50.18 returns no records? Again, these should be created as soon as an address is requested, with or without ipa-client. Let's step back and check your DHCP server configuration.

Did you generate a key for it to talk to BIND?
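As an aside, the name that `dig -x` actually queries is just the address reversed under in-addr.arpa, which Python's stdlib can show directly (the function name here is mine):

```python
import ipaddress

def ptr_name(ip):
    """Return the reverse-lookup name that `dig -x <ip>` queries."""
    return ipaddress.ip_address(ip).reverse_pointer
```

ptr_name("192.168.50.18") gives 18.50.168.192.in-addr.arpa, which is why the 50.168.192.in-addr.arpa. zone has to exist and accept dynamic updates before the PTR can appear.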

Yes, it's empty:

alexander@MBPvonAlexander ~ % dig -x 192.168.50.18
; <<>> DiG 9.10.6 <<>> -x 192.168.50.18
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 27139
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;18.50.168.192.in-addr.arpa. IN PTR
;; Query time: 3 msec
;; SERVER: 192.168.50.1#53(192.168.50.1)
;; WHEN: Tue Jul 19 22:21:32 CEST 2022
;; MSG SIZE rcvd: 55

No, I didn't generate any keys for BIND (or can't remember doing so).

Actually, my UDM-Pro is my DHCP server. Sadly I can't use the config from your example, because it's from an EdgeRouter. As far as I know, there is currently no way to do it with the UDM Pro: https://community.ui.com/questions/Is-there-a-plan-to-have-the-UDMPro-DHCP-server-update-a-DNS-server/a3452fb8-e3ae-4922-bfaa-a934e5670761

But sssd (?) generates the A record at installation - why doesn't it update the A record the same way?

OK - I missed the PTR sync and dynamic update settings in the reverse lookup zones.

Now I get a PTR record in 50.168.192.in-addr.arpa. and this from dig:

alexander@MBPvonAlexander ~ % dig -x 192.168.50.111
; <<>> DiG 9.10.6 <<>> -x 192.168.50.111
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23257
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:
;111.50.168.192.in-addr.arpa. IN PTR

;; ANSWER SECTION:
111.50.168.192.in-addr.arpa. 0 IN PTR ubuntu-vm01.

;; Query time: 11 msec
;; SERVER: 192.168.50.1#53(192.168.50.1)
;; WHEN: Tue Jul 19 23:19:25 CEST 2022
;; MSG SIZE rcvd: 81

And sorry: some SRV records are set by FreeIPA, for example _ldap._tcp or _kerberos-master._tcp and some others.

https://community.ui.com/questions/Is-there-a-plan-to-have-the-UDMPro-DHCP-server-update-a-DNS-server/a3452fb8-e3ae-4922-bfaa-a934e5670761

The honest answer is that dnsmasq is trash as an actual implementation (rndc support was requested a decade ago), and that's what the UDM uses. I'd honestly turn that off and replace the DHCP server with an RPi or something, quite apart from Unifi not being the company they were 5 years ago.

The A record is not updated because dnsmasq doesn't do rndc, so it cannot update.

OK, I understand why the UDM does not update the DNS - but why can't the host update it by itself? For me the difference between installation and update is not clear: at installation the A record is generated, but when the host IP changes, or I "trigger" it with an sssd restart, it does not work.

I thought the dyndns_update setting in sssd meant exactly this.

Ok, so essentially, go back in time to the late 1990s. Desktop PCs with self-managed resources were supplanting "Windows for Workgroups", thin clients, etc. Microsoft was trying hard to supplant traditional UNIX, VMS, and Netware. A fair amount of Windows "servers" and clients were still backed by Netware auth.

Microsoft helped write a specification which became what everyone would more or less follow (including Microsoft, though in their fashion at the time, they "extended" it in a such a way that nobody else could implement a client without risking lawsuits).

It looked like this:

  • LDAP is the source of truth for all objects: services, compute (server or desktop), and user accounts
  • Authentication is handled by kerberos, authorization is done via LDAP. Kerberos more or less requires functional time sync and working DNS for host/service principals
  • DNS records should be managed in LDAP, and tied to objects/krb host principals
  • Authorities should be located via SRV records (Microsoft added CLDAP as a requisite part of "joining a domain", probably arguing that Win9x couldn't do SRV lookups or something, but it's far more likely that this was an anti-competitive measure).
  • Defense in depth is crucial. Every layer should be treated as untrusted.

When a client receives an address and it's never been seen before, a DNS record is created.

Ok, great. But that DNS record doesn't have a matching kerberos host principal or an object in LDAP. "Domain login" in Windows won't work, because there's no host principal to look up users. The DNS record for the host (if it previously exists) also won't be updated unless the client passes a host principal which matches. This prevents Joe Hacker from naming his laptop ad1 and becoming a domain controller via DNS.

Service lookup should be via SRV records so servers can be added/removed/replaced without knocking a number of services and users who use explicit hostname lookup for controllers out of the network.

Once a user logs into a computer which is part of the domain, they are issued a ticket. That ticket can be (and is) used for access to services -- mailboxes in Outlook (via Exchange), shared drives, groups, etc. The computer and the user can be part of groups, with their own permissions, and they can be more or less infinitely recursively nested.

In order for the host to update itself, it needs to be able to authenticate with a kerberos host principal. And it needs to be able to locate the "domain" via a SRV lookup. You are seeing SRV lookup failures.
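To make the SRV-discovery step concrete, these are roughly the names a joining client asks DNS for. This is a simplified sketch; sssd and AD also try site-scoped and _msdcs variants:

```python
def srv_candidates(domain):
    """A subset of the SRV names an IPA/AD client looks up to locate its
    controllers; each SRV answer carries priority, weight, port, and a
    target host, so servers can be swapped without touching clients."""
    d = domain.lower().rstrip(".")
    return [
        f"_ldap._tcp.{d}",
        f"_kerberos._tcp.{d}",
        f"_kerberos._udp.{d}",
        f"_kpasswd._tcp.{d}",
    ]
```

If a dig for _ldap._tcp.home.domainB.de returns nothing, you get exactly the "SRV record not found" backtraces pasted earlier in the thread.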

The only reason I know all of this is from re-implementing AD from scratch (well, until I got stuck because there was no open source CLDAP until samba4) in the mid-2000s at some point.

The "canonical" way to do what you are trying to do -- if you do not control the DHCP/DNS infrastructure or are unable to send updates, is to use a delegated subdomain, such as corp.domainB.de (if AD), or whatever, and delegate a DHCP block there which a server which can do this updates, with CNAMEs from the root domain to subdomain mappings for "dynamic" service hosts (SQL, whatever) if those services must/want to participate in the "domain".

If you fix the SRV record lookup so it can find the controller, you may bypass this, but unless you intimately understand all the moving pieces and what they do, it's going to be banging your head against the wall trying to substitute them out piecemeal.

Thank you very much for the detailed explanation!

The challenge with such a big topic is actually finding the right starting point and then building on it bit by bit. But at the very beginning, unfortunately, you usually lack the overall view of the big picture. So I am very grateful for your comments.

Can you recommend beginner-friendly documentation for building a domain controller?

I think I really need to understand this topic from the ground up. I have now changed the DNS settings again in the UDM-Pro and have the feeling that something is wrong with the basic setup of the DNS server in Freeipa.

Also, I'm not sure if there might be another problem with the setup of my domains:
home.domainA.de is routed to my dynamic IP at home via a DynDNS service.
DomainB.de is registered with Cloudflare because my original idea was to route home.domainB.de into my network via a Cloudflare tunnel without open ports.

As mentioned in the original post, the question here was which topics you are currently learning. And I've actually learned a lot about it these days and I'd like to deepen it.

The problem right now is essentially that you need to see the forest and the trees (that's kind of a pun since domains are often referred to as 'forests', but not an intentional one). In order to replace any puzzle piece in the big picture, you need to understand not only what that individual piece does, but how it interlocks with everything else.

The beginner-friendly way to build a working LDAP+DNS+DHCP+NTP setup (the moving pieces in a "domain controller" from a network infrastructure level, without getting into the nuts and bolts of how kerberos handshakes work, what principals are and how they differ from tickets, or how any of that ties into authorization from LDAP) is to make a new subdomain or domain, not get in the way of the DHCP server/DNS server, and let it work as designed.

It's impossible to say whether anything is wrong with the basic setup of the DNS server in FreeIPA without giving examples of what you think is failing.

The alternative way is to build it out piecemeal, starting with a DHCP server which can send RNDC requests to update DNS, then add SRV records on top of that for LDAP lookups, add a time server in the DHCP options, add Kerberos users, and make sure you can kinit. Then bring up an LDAP server and tie Kerberos into that.
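A minimal sketch of what the BIND side of that piecemeal path looks like, assuming placeholder zone names, hosts, and keys (yours will differ):

```
// named.conf: allow the DHCP server to push updates with a TSIG key
key "dhcp-update" {
    algorithm hmac-sha256;
    secret "<base64-secret-here>";
};

zone "home.example.de" {
    type master;
    file "home.example.de.zone";
    allow-update { key "dhcp-update"; };
};

; home.example.de.zone: the SRV records clients use to find LDAP/Kerberos
; format is: priority weight port target
_ldap._tcp.home.example.de.     IN SRV 0 100 389 ds01.home.example.de.
_kerberos._udp.home.example.de. IN SRV 0 100 88  ds01.home.example.de.
```

With allow-update in place, the DHCP server (or nsupdate by hand) can register hosts as they come up, and the SRV records let domain clients discover the services without hardcoding anything.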

I have no idea from the outset whether home.domainA.de is hairpinned or whether you have an authoritative server for domainA, nor do I have any idea whether you have split-horizon set up for domainB. If domainB is somewhere on the internet with records (such as cloudflare) and you want different records on the same domain at home without a subdomain, you'd either need to convince cloudflare to set up split-horizon for you, or configure BIND to use response policy zones, which allow you to set a different "zone" which says "use this set of names as overrides for lookups"

That RPZ can be used for RNDC for your clients. DNS isn't designed to say "I'm an authoritative server for this zone/domain, but if I don't know about a record, then forward the lookup to this other server which is really really authoritative". DNSMasq (and the UDM) don't call the overrides RPZ, but that's more or less what they are.
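For reference, the BIND configuration for an RPZ is small; a hedged sketch, with zone name and records as placeholders:

```
// named.conf: consult the RPZ before answering normally
options {
    response-policy { zone "rpz.local"; };
};

zone "rpz.local" {
    type master;
    file "rpz.local.zone";
};

; rpz.local.zone: override the public record with the internal address
home.domainB.de  IN A 192.168.1.216
```

Any name listed in the RPZ gets the override answer; everything else resolves as usual.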

The "easy" way to get around this (and what virtually every business did/does with AD) is to just delegate a subdomain for your domain.

It's impossible to say whether anything is wrong with the basic setup of the DNS server in FreeIPA without giving examples of what you think is failing.

I am a bit uncertain, because I get this result when I do an nslookup from my MacBook (which is actually not part of the domain):

alexander@MacBook-Pro-von-Alexander ~ % nslookup -q=SRV _ldap._tcp.home.domainB.de
;; Got recursion not available from 192.168.1.216, trying next server
Server: 192.168.50.1
Address: 192.168.50.1#53
_ldap._tcp.home.domainB.de service = 0 100 389 ds01.home.domainB.de.

From FreeIPA Server i get:

[root@ds01 ~]# nslookup -q=SRV _ldap._tcp.home.domainB.de
Server: 127.0.0.1
Address: 127.0.0.1#53
_ldap._tcp.home.domainB.de service = 0 100 389 ds01.home.domainB.de.

And from my Test-VM i get:

root@ubuntu-vm01:/home/administrator# nslookup -q=SRV _ldap._tcp.home.domainB.de
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
_ldap._tcp.home.domainB.de service = 0 100 389 ds01.home.domainB.de.
Authoritative answers can be found from:
ds01.home.domainB.de internet address = 192.168.1.216

I don't understand right now why I get the message that recursion is not available when I call nslookup, and whether this could also be part of my "problems".

The FreeIPA instance was set as the primary DNS server; the secondary DNS is the UDM-Pro gateway for the network (VLAN).

For one, nslookup is more or less deprecated. Use dig. "recursion not available" means that in the response, the DNS server set a flag saying it won't look any further, and possibly/probably also "this isn't a fully qualified request, so nah". Add a . at the end to mark the name as fully qualified and see if you get the same result.
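Concretely, something like this, querying the IPA server at 192.168.1.216 directly (these are standard dig options):

```
# fully qualified (trailing dot) SRV lookup, straight at the IPA server
dig @192.168.1.216 SRV _ldap._tcp.home.domainB.de. +noall +answer +comments

# the same lookup with recursion disabled, to compare the flags in the reply
dig @192.168.1.216 SRV _ldap._tcp.home.domainB.de. +norecurse
```

The flags line in dig's output (qr aa rd ra, etc.) tells you directly whether the server claims authority and whether recursion was offered, which nslookup hides.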

From your test vm, it's impossible to say without seeing some information from resolvectl status. It's running systemd-resolved, and it's not clear which DNS server it's using. I'd suspect that the Test-VM (via resolvectl status) is telling you "I asked 192.168.50.1, and here's the answer it gave me when it looked up a different server, but you should ask that one because this result may be cached".

I have no idea what the resolution settings on your Mac are. The IPA server probably has

nameserver 127.0.0.1
nameserver x.x.x.x
search home.domainB.de

This is also the allow-recursion { ...; } BIND setting I mentioned earlier. You should check it.

freeipa

use this if you're installing on bare OS

Thank you, I will have a look at it!

I use FreeIPA as my LDAP backend for Authentik, works fantastic - it was a "setup once and forget" sort of thing, although admittedly I am only using it as an LDAP backend.

Haven't tried say, joining servers/systems to the domain and actually logging in over that.

I am actually at this point now, too.

did someone say free beer?

RustDesk - I was looking for a TeamViewer alternative for quite some time

Interesting, thanks. I have just started evaluating Mesh Central

I've been playing a bit with Wazuh (wazuh.com) which is an Open Source unified XDR and SIEM platform for your endpoints and cloud. Its pretty damn slick. Just wish I had more time to dig into it.

This looks awesome. Is it free if I host it myself?

not 100% selfhosted, but the docker image from erisamoe/cloudflared is amazing. It's so quick to get services exposed without opening ports. And if you add Cloudflare Access for authentication on top, it's just so quick to get up and running and to share services with others (pretty?) securely.

erisamoe/cloudflared

Why are you using an unofficial image instead of cloudflare/cloudflared though?

Because you have to pin the official image to a particular version. There is no :latest tag.

Hence some people prefer 3rd party images that are just proxies to the latest available by CF.

Not a fan of it but it's an option for lazy people.

is there a reason for not having a latest tag?

If you are using latest, you run the risk of a breaking change in the application. Better to manually upgrade to new versions after reviewing the changes, taking a backup, etc.
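As an illustration, pinning is just a tag in the compose file (the version number here is made up; pick a real release):

```
services:
  cloudflared:
    # pinned: this only changes when you edit the line deliberately
    image: cloudflare/cloudflared:2022.8.0
    command: tunnel run
    restart: unless-stopped
```

With :latest, the next `docker compose pull` could silently bring a breaking change; with a pinned tag, upgrades happen on your schedule.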

yes of course, I always tag versions on the stuff I have containered.

But why not let the user choose if they want to live on the edge? Or is it just to protect themselves from backlash if they release something that doesn't work too well?

Why are you using an unofficial image instead of cloudflare/cloudflared though?

unfortunately the official image is only built for AMD64 hosts, so can't run it on ARM (e.g. raspberry pi)

Interesting. I just have a wireguard routing with nginx that does it for me without me having to potentially have another point of failure that I probably do not have control over. Have you tried it?

DNS on domain points to VPS1 -> VPS1 has Nginx -> {wireguard url} ---> Any of the 4 other VPS/homeservers where service is hosted -> Service:port.

Jeez, I was just looking into this as a next iteration. More involved but wanted to try to learn this bit. Did you follow any guides or have any recommendations? Thinking on using one of oracle free vpcs to begin with, but haven't figured out much more beyond that

Digital Ocean has fairly excellent tutorials for setting up nginx. As for wireguard, use wg-easy.

Oracle VPSes are a great starting point. Just remember to disable their external firewall. You can use ufw inside the VPS, but for some stupid-ass reason Oracle also has an external firewall in the virtual cloud networks (go to virtual cloud network -> Subnet -> default security list -> ingress rules 0.0.0.0/0 all protocols, egress the same). Then go harden your server and enable ufw.

Once there, you can just add a proxy_pass http://<wireguard_client_ip>:<port>; directive to your nginx config on the Oracle VPS etc :)
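A rough sketch of what that vhost on VPS1 might look like (hostname, addresses, and ports are placeholders; TLS certificate directives omitted for brevity):

```
server {
    listen 443 ssl;
    server_name service.example.com;

    location / {
        # 10.8.0.2 = the wireguard address of the homeserver running the service
        proxy_pass http://10.8.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

nginx on the VPS terminates TLS and forwards over the wireguard tunnel, so the homeserver never needs an open port on the public internet.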

Every single cloud provider does this, as does openstack, and pretty much every kubernetes load balancer.

Every single cloud provider does this

?

Sure? I didn't say anything contrary. I just showed them how to do it themselves manually without relying on external parties that they do not control. Y'know, the point of self-hosting. Unless I'm misunderstanding your comment somehow?

for some stupid-ass reason Oracle also has an external firewall in the virtual cloud networks

This part. It is not "for some stupid-ass reason Oracle also has..."

Amazon, Google, Microsoft, Rackspace, self-hosted Openstack, Eucalyptus, and every other public or private cloud provider will do this, along with basically every k8s load balancer (except there, you need to edit a ConfigMap to allow the LB software to bind to non-HTTP[s] ports, generally).

This is not some weird "external party that they do not control thing". It's every IaaS, whether you control it yourself or not.

Ah, fair enough. That doesn't mean I have to like it however. I just don't like having to manage my firewalls at two separate places, and it took me quite a while to figure out wtf was going on when I was new in this game.

I mean, ok, I get that. There is a big difference between "firewalls" for software-defined networks (whether L2 or L3) and system-level firewalls.

Managing the "firewall" for a security group is more or less equivalent to managing the firewall on your router, or handling a "real" switch. Pretty much all of the substrates can do port mirroring, they're bound via GRE/vlan/vxlan (vxlan most commonly in 2022), they may run BGP to determine pathing, etc.

This is not like "I'm running a VM on my laptop and I have to allow the host's firewall to allow the guest's firewall to allow some service". It's more like "I'm running a physical server or a VM hanging directly on the network in a datacenter, and the 'firewall' is the core networking stack".

It's nice that, for the most part, SDN is pretty cross-compatible now. I have no idea what Oracle is running for their network level, but it's entirely possible that the "firewall" you configure on their portal/in their API is directly changing settings on a real switch. Even if it's not, and it's backed by Neutron/Openstack, the port security is telling a virtual overlay network that "packets going to this destination on this port should be allowed", and that virtual overlay network may (and almost certainly does) span an entire region/availability zone, in the same way that, conceptually, one backhaul serves an entire high rise/apartment building and individual units have routers with "firewalls".

This is that "firewall", and it's 100% normal/expected that you would need to "port forward"

cloudflare/cloudflared

First time I hear about that. So it kinda does away with DDNS issues right?

Probably yes, not sure which issues you're thinking about though

The issue of setting up DDNS in general and have it working at all time, regardless of changing IP addresses at home.

Yes, exactly.

Instead of traffic going directly to you, it basically creates a tunnel and keeps it up to date at Cloudflare. Then when you hit Cloudflare, it goes through that.

I have set up my Prowlarr/Radarr/Sonarr/Jellyfin/Jellyseerr stack, along with qBittorrent, FlareSolverr and a subfinder.

I still can't figure out how to correctly enable hardware acceleration in Docker (or maybe my hardware just doesn't have the corresponding capability). And the authentication part is still not tied together. But it's fun to see everything working fine.

For Jellyfin, use the linuxserver image. Then map your /dev/dri inside the container - that's where the GPU devices are listed in the host OS. It's all documented. Then it's a matter of dialing in the settings through the Jellyfin admin panel.

Google your GPU, in case of an integrated GPU, google the CPU model. Look what features it supports, then enable them in the admin panel.

If you are running headless, i.e. no monitor attached, your /dev/dri might be empty. To fix that, reboot with a monitor connected. Some BIOSes will let you init the iGPU at all times; others won't, and you have to keep a monitor plugged in at boot time. That can be fixed with a dummy display plug.
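A minimal compose sketch of that /dev/dri mapping for the linuxserver image (paths are the common defaults; adjust to your layout):

```
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin
    devices:
      - /dev/dri:/dev/dri   # pass the host's GPU render nodes into the container
    volumes:
      - ./config:/config
      - ./media:/data/media
    ports:
      - 8096:8096
```

Once the container can see /dev/dri, the rest (VAAPI/QSV selection, tone mapping, etc.) is configured in Dashboard -> Playback in the Jellyfin admin panel.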

I keep meaning to set up an arr stack and just haven't had the time. Maybe some day...

I spent a decent chunk of the month learning about LXC and alpine.

Holy shit it's awesome. my syncthing relay is taking up 7MB of ram instead of 256MB.

Been working on getting other services switched over, but it's slow going as I'm trying to script their setup.

https://github.com/ThellraAK/lxc-scripts

Have been working on Vaultwarden, but I'm stuck on the database stuff, trying to decide if I want one master database for everything that wants one, or to set up a locked-down one for each service.

Edit: should also be noted that the relay server is available in the alpine repos, I didn't learn that until afterwards

I got everything except networking down for LXC. Getting connectivity is a lot more tricky than I expected for things like webservers.

networking in what way?

anonaddy and simplelogin but I'm not setting it up myself, just cool that you can.

Recently found tt-rss.
Previously I had used FreeRSS, but it bugged out on me multiple times because I run my stuff in a k8s cluster, and I just couldn't be bothered anymore.

Very awesome tool and I've started to add feed after feed in there.

Check out miniflux. I like it a lot more than the others you mentioned.

I've tried a LOT of self hosted RSS readers and keep coming back to TT-RSS.

Don’t go to TT-RSS! The developers are strong right wing dudes.

Aside from the dev's political leanings (which I know nothing about), the overtly hateful attitude demonstrated toward users was rude and aggressive. Never before have I dismissed a piece of software because I didn't "like" the dev. I'm on mobile and don't have any quotes on-hand, but it only took me 10 minutes of poking around in the support forums to see what others were talking about; this guy is an indefensible asshole. I know nothing of the dev other than what that 10 minutes taught me, and I often internally reference that "worst-case" behavior when I'm conducting myself online; a shining example of how not to act.

I don’t care if this dev (or group) created a SaaS that offered me a personal genie; I would boycott it.

Got an alternative with a comparable filter system? Or at least some kind of "saved smart search" function? Because so far the filter system is what's keeping me with ttrss.

What does political affiliation have to do with a piece of software? Didn’t know we have to check if someone’s on the left or right before we use their app.

Nothing, but politics have seemed to infest every aspect of life these days..

Exactly! I’m tired of this shit..

You are not from 'murrica, right? everything is political gender race bullshit over there nowadays.

Software has no political party, it's a tool. It's like not buying a Bosch drill because the name sounds german and oh germans are nazzzzis.

I've used ttrss before but now I'm using nextcloud's news rss fetcher with the android client. It's really nice and simple. Ttrss was too much in my opinion.

RemindMe! 2 months "Stuff I need to add to the already underpowered server"

Kamailio+rtpengine

Self hosted voice is boss

Kamailio+rtpengine

It's been years since I played with sip/voip. You still need to pay for a gateway into the public phone system no?

yup

Where do you get such a gateway? Is there a tutorial somewhere for this? All the things I find when I'm googling don't seem to have anything to do with setting up a gateway to the public phone system and more are focused on businesses using this.

[deleted]

+1 for AudioBookShelf. Great app and the multi app syncing is spot on

Wise

How did you install Wise-Mapping please? Is there a way using docker-compose?

I need to learn to do mapping in general. I feel like it would be so useful for me.

I tested and now run iSpy NVR. It recognised all my ONVIF cameras on the spot and allows me to save to a NFS share while the app is run from a docker container

Is this a good replacement for blue iris?

Depends what you are looking for in such software

Here's a list I found from a quick Google search: https://medevel.com/10-cctv-open-source-solutions/

Will any of these work with Eufy security cameras? Looking for a docker/oss server solution. Sorry if it's a dumb question, I don't know much about security video standards.

Not a full replacement, but you might also enjoy Frigate

OpenEBS Jiva for Kubernetes storage.

It is the simplest Kubernetes distributed file/block storage I've found for small deployments.

It provisions image files on to a hostPath on each node so no block devices are needed. It also needs just two components to run - Jiva itself and local provisioner.

It does one thing well enough and doesn't try to handle a bunch of things like a UI, backup and other things I don't necessarily want bundled.

at work i use Longhorn for those kind of things :)

Longhorn is pretty cool. But Longhorn bundles all the other things the OP of this thread mentioned not wanting bundled (eg. UI, backups)

I use Longhorn at work and in my homelab.


Proxmox! I moved all my self-hosted stuff to it after a hard drive failure. So far it's been amazing. Backups are incredibly easy now.

A couple of days ago I discovered code-server... I love it. I no longer go completely blind when tracking down a bug in nano: no more missing autocomplete, multiple language support, no more having to move files to my work machine to edit them and send them back to the server... I just love it

You should see if you can get GitHub Copilot. It's fairly neat. It takes longer to make it work in code-server than in a VS Code instance (you have to deal with .vsix bullshit), but it works!

I've not got a contribution but great idea for a thread, OP. I'm always looking for this sort of stuff and haven't heard of most of the suggestions.

This thread specifically and this sub in general are a gold mine of interesting new stuff to do.

Even for the stuff I already know and use, it's good to see them often as some form of confirmation. Also nice to see the comments on stuff I know like "have you tried X? I find it does a better job at doing Y"

https://plausible.io

will never use www again i swear

With the 3 w's it gives a security error on the browser. https://plausible.io won't give that alert and you can see their website, as expected. šŸ™Œ

[deleted]

But how will opera know I’m trying to surf the World Wide Web if I don’t put it in the address

I find it pretty useful as a resource type indicator though. I host a whole bunch of things on subdomains, and instead of pointing my root domain name at my landing page, it becomes www instead.

Maybe so, but why not just set up the certificate to allow the domain name with and without www? That has been standard for at least the past 10 years!

I especially love it when speaking -- an acronym of 9 syllables replacing words totaling 3 syllables.

"dub-dub-dub". Nerdy but your tongue won't get twisted in knots.

I'm a 90s kid, I'll use www. until I die.

You should try Gopher, there is a lot more content there and comes with its own search engine.

In my case I actually need a subdomain for TLS. I'm using DuckDNS as my free dynamic DNS service, together with Let's Encrypt (configured through SWAG). It works and gives me an https address for free; however, there's a limitation with DuckDNS where a certificate can only be granted for a sub-subdomain (so https://www.subdomain.duckdns.org is valid, but the non-www equivalent is not). It doesn't have to be www, though, that's just a convention.

That's not true. I use duck dns with no www with no issues.

You have https working with no sub-subdomain?

You have to use 2 subdomains per application you're exposing in swag. So it redirects both www.example.duckdns.org and example.duckdns.org. After that it will work out the certificates itself really.
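A hedged sketch of the SWAG environment for a DuckDNS setup (domain and token are placeholders; with DuckDNS DNS validation a wildcard cert is the usual way to cover both names, so check the linuxserver docs for your version):

```
services:
  swag:
    image: lscr.io/linuxserver/swag
    environment:
      - URL=example.duckdns.org
      - SUBDOMAINS=wildcard        # covers *.example.duckdns.org plus the bare name
      - VALIDATION=duckdns
      - DUCKDNSTOKEN=<your-token>
```

With a cert covering both names, the per-application proxy conf can then list www.example.duckdns.org and example.duckdns.org as server_name entries and redirect one to the other.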

Script-server for running my numerous python scripts and Cronicle to schedule them.

In my case, just 3 weeks ago: WeKan and BookStack.

Thanks for getting me to check out Tachiyomi!

I recently discovered Traccar! It's a service to which you can send devices' GPS coordinates and track them. I'll stop using WhatsApp "share location" soon 😁

Have you seen hauk: https://github.com/bilde2910/Hauk

Yes! But it only has an Android client. Traccar also has iOS and accepts a lot of protocols, so you can also track vehicles, bikes or whatever you want 😁

I found out of Authelia,

  • https://www.authelia.com
  • https://github.com/authelia/authelia

From GitHub,

Authelia is an open-source authentication and authorization server providing two-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion for reverse proxies like nginx, Traefik, caddy or HAProxy to let them know whether requests should either be allowed or redirected to Authelia's portal for authentication.
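The nginx side of that companion pattern is the stock auth_request module; here is a hedged sketch (the Authelia verify endpoint and headers have changed across versions, so treat the paths and hostnames as placeholders and check the current docs):

```
location /internal/authelia {
    internal;
    # verify endpoint from older docs; newer Authelia versions use a different path
    proxy_pass http://authelia:9091/api/verify;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}

location / {
    auth_request /internal/authelia;
    # unauthenticated users get bounced to the Authelia portal
    error_page 401 =302 https://auth.example.com/?rd=$scheme://$http_host$request_uri;
    proxy_pass http://app:8080;
}
```

Every request to the protected app triggers a subrequest to Authelia; a 200 lets it through, a 401 redirects the browser to the login portal.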

I was recently out of town and for some reason my WireGuard VPN kept having issues, like dropping packets or the connection. Not sure if it was the connection at home, where I was, or issues with having multiple devices (gotta look into this).

So I had been looking to see how I could expose some services, but add authentication to them. That led me to authelia. Haven’t gotten yet to try it, but it’ll be a way to connect to certain services. VPN will be rebuilt and kept for other things.

Using Ceph on a Home-NAS

I just discovered SuiteCRM and plan on setting it up soon.

I own my own small consulting firm and will begin a large outreach campaign soon, will need to track clients in a sales funnel.

I was delighted to see that there's a dashboard/CRM I can run myself, I'm sure out of the box it will be more than I need but I could still customize it from there.

I was puzzled to see so many SimpleLogin/Anonaddy mentions until I realized you can self-host them. It's not so obvious on their main page, or is it just me?

Do you get the premium features for free by self hosting it like Bitwarden?

Definitely Shell NGN. I run Proxmox with a ton of LXC containers, it is totally awesome to be able to have all the ssh keys stored and be able to connect automatically to whatever container I need to. Have it behind NPM and authelia. Can access my server from anywhere on the globe. It is coded with responsive design so you can get a terminal up in your phone browser. Bomb!

For the second part of the question I recently (re)discovered Overseerr. It’s a really slick (UI way better than Ombi IMHO) way of allowing your nearest and dearest to request media content to be downloaded to your Plex instance. Do it!

(On an aside, for others posting: it makes your awesome suggestions even more awesome when you link out to them!)

Docker versions:

Add this for the docker selfoss-frontend:

https://github.com/MatthK/Selfoss-Webfront

RemindMe! 3 days

You could use a selfhosted tool for that ;)

Remindme! 3 days.

I'll be back to see what you suggest for this function.

I seem to have invited a buncha smartasses.

Alright fine, I'll find what I can.

RemindMe! 1 week

Remindme! 3 days

That’s why I’m comin’ back in three days. I wanna see what people suggest.

RemindMe! 2 days

docker registry, save so much fucking money

GitLab and GitHub have free docker registries, so not sure what you mean by saving so much money. But this is /r/selfhosted so in the spirit of self-hosting, you win, I suppose.

Why the downvotes? Lol, it has literally saved so much money. There are no well-priced docker registry hosts for closed-source projects; the only option that isn't absurdly expensive is self-hosting it.

Um, can't speak much for GitHub, but GitLab's container registry is also free for private repos. So...

As for the downvotes, I don't know, I didn't downvote you, I just disagree with you. To be honest though, why are you stressed about being downvoted? It's a completely meaningless number system, trust me.

!RemindMe 3 days

!RemindMe 2 days

Anonaddy!

I definitely love Nextcloud. It's been so useful just for personal use, in fact, that it was the first thing I set up for my publishing company.

Remind me! 7 days

!RemindMe 1 week

!RemindMe 1 week

restic

A nice CLI tool that has the user feel of rsync with backend support for a comprehensive selection of storage platforms. restic + crontab is replacing my Duplicati backup solution until I can find something better.
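For reference, the restic + crontab combination is only a few lines; a sketch with the repo, paths and retention as placeholders (the backup/forget flags are restic's standard ones):

```
# crontab entry: run the backup script nightly at 02:30
# 30 2 * * * /usr/local/bin/backup.sh

# /usr/local/bin/backup.sh
export RESTIC_REPOSITORY=sftp:backup@nas:/srv/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-password

restic backup /home /etc
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```

Snapshots are deduplicated and encrypted, and the forget/prune step keeps the retention policy applied automatically.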

Couple things...

  1. I love airsonic-advanced (https://fleet.linuxserver.io/image?name=linuxserver/airsonic-advanced) for not having to keep my entire music library on my phone.

  2. Why calibre-web versus COPS which is a bit lighter/less resource intensive?

Cheers!

RemindMe! 21 days

!RemindMe 5 days