I moved into a new house with my girlfriend in the middle of 2025. After 15 years in a nice apartment, it was the occasion to build a proper homelab. I have self-hosted services since 2004, but the server was always an old desktop, a slow QNAP, that one Zotac ID11 that refused to give up even after 10 years, or, more recently, a refurbished Zotac CI540 Nano. I've wanted a nice server and network for a long time. The idea is to run a Home Assistant instance, like all geeks, and a Frigate instance to monitor things with cameras.
This article focuses on the hardware and some choices I made. Future articles will describe the setup and configuration.
Hardware
Router
I've wanted VLANs for a long time. Until now, I did not have a router that supported them, and I also had no reason for this kind of setup. With cameras, servers and IoT devices, it became necessary to partition the network. I purchased a Flint 2 from GL.iNet. It is not expensive, the interface is based on OpenWrt, and it is known to be configurable enough that you can remove your Proximus router and use this one instead, even with TV. The interface is really user-friendly, and it is possible to install LuCI, the advanced OpenWrt web interface, to configure things. I wanted a setup where people and IoT devices operate on separate networks, with a reverse proxy providing access to Home Assistant and Frigate from the users' devices (laptops, smartphones).
Unfortunately, I had issues with the default firmware, based on OpenWrt 21.02. I could not get everything working as expected: sometimes DHCP would not work on an interface, sometimes I would not get internet access. In the end, I replaced the firmware with vanilla OpenWrt by following the instructions, and everything worked right away. There may be some bugs in the GL.iNet firmware.
If you are experiencing issues with the default firmware, I encourage you to try the vanilla version of OpenWrt once you are comfortable with LuCI. It is not that difficult and there is a lot of help available. Moreover, replacing the firmware is completely reversible: you can go back to the stock firmware if you want.
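For reference, flashing can be done from the stock firmware's upgrade page, or over SSH with sysupgrade. This is only a sketch: the image file name below is a placeholder for the sysupgrade image you download for the Flint 2, and 192.168.8.1 is the GL.iNet default address.

```sh
# Copy the vanilla OpenWrt sysupgrade image to the router, then flash it.
# -n drops the current configuration, which is safer when switching firmwares.
scp openwrt-sysupgrade.bin root@192.168.8.1:/tmp/
ssh root@192.168.8.1 "sysupgrade -n /tmp/openwrt-sysupgrade.bin"
```

After the reboot, the router comes back with the OpenWrt defaults (192.168.1.1, no Wi-Fi), so keep an Ethernet cable at hand.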
In the end, this is my setup:
- 192.168.129.0/24 for humans.
This is the same IP range as the one Proximus creates with their router. It lets me define static IPs and keep them when switching from one router to the other. I broke my configuration by accident several times, and this trick has been useful.
There is a Wi-Fi network on the same range. It lets me share resources between smartphones and the wired laptop/Zotac:
- Smartphones
- Laptop
- Zotac (CIFS network shares, ...)
- Reverse Proxy (Home Assistant, Frigate)
- 10.10.10.0/24 Ethernet
- Ethernet cameras
- Home Assistant
- Frigate
- 10.10.20.0/24 Wi-Fi
- IOT Wi-Fi camera
- Future Wi-Fi devices (not using the Matter protocol)
I've struggled to make a Matter device work with that setup. This is a known difficulty.
In the end, I decided to create another Wi-Fi network for Matter devices, operating on 10.10.10.0/24 with proper firewall rules.
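As an illustration, a dedicated firewall zone for the IoT network can be declared in /etc/config/firewall on OpenWrt with something like the following. The interface and zone names are assumptions based on my layout, not the exact names on my router.

```
config zone
        option name 'iot'
        option input 'REJECT'
        option output 'ACCEPT'
        option forward 'REJECT'
        list network 'iot'

# Let the IoT devices reach the internet, but not the other networks
config forwarding
        option src 'iot'
        option dest 'wan'
```

With no forwarding rule from 'iot' to 'lan', the IoT devices cannot initiate connections to the family network; only the explicitly allowed flows get through.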
Server
After thinking about it for a long time, I decided to buy a Beelink EQ14. I read the recommended hardware for Frigate and saw that it was a good fit:
- Dual Ethernet
- Intel N150 with an iGPU (easy hardware acceleration with Frigate)
- Small form factor
- Low power consumption (25 W max)
- I chose the 1 TB version with 16 GB of RAM. It is possible to add up to two additional NVMe disks.
Cameras
I have not purchased the Ethernet cameras yet, but I will probably buy Dahua cameras. The first priority was to get cameras to check on the dog (yes, I have a dog now!). I wanted to try cheap cameras because they would stay in the cellar; the house is not wired in all rooms. The outdoor cameras will be wired and powered by a PoE switch.
Wireless
I purchased Tapo C210 wireless cameras. They are cheap (~€20) and you can connect to them without internet access through an RTSP URL. Unfortunately, you need their proprietary application to connect the camera to your Wi-Fi network, set up the device and configure the account used in the RTSP URL. The image quality is sufficient for my needs, and they have been reliable so far. I purchased 3 of them.
You can find the relevant URL on the iSpyConnect page once you have activated the user. This website is really useful for easily getting the RTSP URLs of your cameras.
Software
I have installed Proxmox on the Beelink. It is my first experience with that system, but I am really happy with it. So far, I have two VMs and one LXC container.
Home Assistant relies on its own image, but I used a Debian Trixie installation for the LXC container and the other VM. I am familiar with Debian because agayon.be already runs on a Debian system, and I can install the unattended-upgrades package to apply security updates automatically without intervention.
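On Debian, enabling unattended-upgrades boils down to installing the package and making sure the periodic APT jobs are switched on, typically in /etc/apt/apt.conf.d/20auto-upgrades:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

With these two lines, the package lists are refreshed and pending security updates are installed once a day without any intervention.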
Home Assistant
I installed their .qcow2 image because it allows me to easily use the apps.
Home Assistant Operating System: [...] It is the most convenient option in terms of installation and maintenance and it supports apps. Home Assistant Operating System is the recommended installation type for most users.
I preferred this solution to the container setup; it also simplifies the update process.
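For reference, importing a .qcow2 image into Proxmox can be sketched with the qm command-line tool. The VM ID 100, the VM name, and the local-lvm storage below are placeholders; HAOS boots through UEFI, hence the OVMF BIOS and the EFI disk.

```sh
# Create an empty UEFI VM (adjust ID, resources and bridge to your setup)
qm create 100 --name haos --memory 4096 --cores 2 \
   --net0 virtio,bridge=vmbr0 --bios ovmf --machine q35 --ostype l26
qm set 100 --efidisk0 local-lvm:1,efitype=4m
# Import the downloaded disk image, then attach it as the boot disk
# (use the volume name that importdisk reports)
qm importdisk 100 haos_ova-*.qcow2 local-lvm
qm set 100 --scsi0 local-lvm:vm-100-disk-1 --boot order=scsi0
```

This is a sketch under those assumptions, not the exact commands I ran; the web interface import works just as well.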
Frigate
Finally, I created an LXC container to view the camera feeds. I rely on Frigate; it seems more modern than ZoneMinder and easy to set up.
The preferred installation flow relies on a Docker container, and running it inside a Proxmox LXC container is not recommended. I went for that option anyway, and it runs really well. I could have relied on an existing script, but I wanted to control and understand what I do. To achieve it, I had to make the Beelink's iGPU available to the container (shared resources). Most documentation is really bad when it comes to iGPU passthrough. Some advise using a privileged container to access the hardware: this is a really bad idea and not necessary. Others advise setting overly broad permissions on the device (666), so that anybody can access the renderer. This is also a really bad idea; in my case, I just limited who can access it.
I rely on an unprivileged LXC container. I have a udev rule that assigns a dedicated owner and group to the card and renderer, ensuring the device nodes on the host are owned by UID/GID 100000. I also created a 500 GB container volume (CT) mounted at /mnt/storage in the LXC container.
Proxmox host configuration
Mappings
udev rule
[user@proxmox]: cat /etc/udev/rules.d/99-gpu-passthrough.rules
KERNEL=="renderD128", SUBSYSTEM=="drm", MODE="0660", OWNER="100000", GROUP="100000"
KERNEL=="card0", SUBSYSTEM=="drm", MODE="0660", OWNER="100000", GROUP="100000"
Group and User mapping
[user@proxmox]: cat /etc/subgid
root:100000:65536
[user@proxmox]: cat /etc/subuid
root:100000:65536
LXC container configuration
[user@proxmox]: cat /etc/pve/lxc/xxx.conf
#lxc.autodev: 0
arch: amd64
cmode: console
cores: 2
dev0: /dev/dri/renderD128,gid=992,mode=0660,uid=0
dev1: /dev/dri/card0,gid=992,mode=0660,uid=0
features: nesting=1
hostname: lxcship
memory: 2048
mp0: local-lvm:vm-102-disk-1,mp=/mnt/storage,backup=1,size=500G
net0: name=eth0,bridge=vmbr1,firewall=1,hwaddr=BC:24:11:DD:3C:EF,ip=dhcp,ip6=dhcp,type=veth
net1: name=eth1,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:DD:67:EA,ip=dhcp,ip6=dhcp,link_down=1,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-102-disk-0,size=20G
swap: 2048
unprivileged: 1
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
lxc.cgroup.devices.allow: c 10:200 rwm
iGPU host permission
[user@proxmox]: ls -l /dev/dri/
total 0
drwxr-xr-x 2 root root 80 Mar 2 13:46 by-path
crw-rw---- 1 100000 100000 226, 0 Mar 2 13:46 card0
crw-rw---- 1 100000 100000 226, 128 Mar 2 13:46 renderD128
LXC configuration
Inside the LXC container, I've created a group, media_group, with GID 1001:
[user@lxc]: cat /etc/group
[...]
media_group:x:1001:root,frigate
[user@lxc]: cat /etc/subgid
frigate:100000:65536
root:100000:100001
[user@lxc]: cat /etc/subuid
frigate:100000:65536
root:100000:100001
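For reference, a group like this can be created with something along these lines, assuming the frigate user already exists from the Frigate installation:

```sh
# Create the dedicated group with GID 1001 and add the users that need GPU access
groupadd -g 1001 media_group
usermod -aG media_group frigate
usermod -aG media_group root
```

Only members of media_group can then open the card and renderer devices, which is the whole point of avoiding 666 permissions.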
iGPU permissions
[user@lxc]: ls -l /dev/dri/
total 0
crw-rw---- 1 root media_group 226, 0 Mar 2 12:47 card0
crw-rw---- 1 root media_group 226, 128 Mar 2 12:47 renderD128
Frigate docker-compose
[user@lxc]: cat containers/frigate/Dockerfile
FROM ghcr.io/blakeblackshear/frigate:stable
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y --no-install-recommends ffmpeg vainfo
[user@lxc]: cat docker-compose.yml
services:
  frigate:
    build: ./containers/frigate
    # container_name: frigate_nvr
    restart: unless-stopped
    stop_grace_period: 30s # allow enough time to shut down the various services
    shm_size: "512mb"
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128 # For intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "8971:8971"
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    environment:
      FRIGATE_RTSP_PASSWORD: "42"
      FFREPORT: "file=/config/ffmpeg.log:level=32"
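Once the stack is up, hardware acceleration can be sanity-checked from inside the container; vainfo is installed by the Dockerfile above, so a command along these lines should list the VAAPI profiles of the iGPU:

```sh
# Run vainfo inside the running frigate service container
docker compose exec frigate vainfo
```

If vainfo errors out instead of listing profiles, the device passthrough or the group permissions are the first things to check.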
Frigate config.yaml
This is a sample config file for the Tapo C210 cameras.
[user@lxc]: cat config/config.yaml
detectors:
  ov_0:
    type: openvino
    device: GPU

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt

ffmpeg:
  input_args:
    - -hwaccel
    - vaapi
    - -rtsp_transport
    - tcp

logger:
  default: info
  logs:
    frigate.video: info # set to debug to enable FFmpeg debug logs

mqtt:
  enabled: true
  host: 10.10.10.130
  topic_prefix: frigate
  client_id: frigate
  user: mqtt
  password: pass_mqtt
  port: 1338
  tls:
    enabled: false

go2rtc:
  streams:
    CAM0:
      - rtsp://user:password@10.10.20.XXX:554/stream2#video=copy#audio=copy#audio=aac
    CAM1:
      - rtsp://user:password@10.10.20.XXY:554/stream2#video=copy#audio=copy#audio=aac
    CAM2:
      - rtsp://user:password@10.10.20.XXZ:554/stream2#video=copy#audio=copy#audio=aac
  webrtc:
    candidates:
      - stun:8555
  api:
    origin: '*'

cameras:
  CAM0:
    enabled: true
    ffmpeg:
      output_args:
        record: preset-record-generic-audio-aac
      inputs:
        - path: rtsp://user:password@10.10.20.XXX:554/stream1 # <----- The stream you want to use for detection
          input_args: preset-rtsp-restream
          roles:
            - record
            - audio
            - detect
    audio:
      enabled: true
      listen:
        - bark
        - scream
        - yell
      filters:
        speech:
          threshold: 0.6
        glass:
          threshold: 0.5
        fire_alarm:
          threshold: 0.5
        smoke_detector:
          threshold: 0.5
    snapshots:
      enabled: false
      timestamp: false
    record:
      enabled: true
      retain:
        mode: motion
    detect:
      enabled: true
      width: 640
      height: 480
      stationary:
        interval: 25
        threshold: 30
    onvif:
      host: 10.10.20.154
      port: 2020
      user: user
      password: password
      autotracking:
        enabled: false
        track:
          - person
        return_preset: '3'
  # ---------------------------------------------------------------------------
  CAM1:
    enabled: true
    # same settings as CAM0, with its own stream URL
  CAM2:
    enabled: true
    # same settings as CAM0, with its own stream URL

version: 0.16-0

camera_groups:
  Group1:
    order: 1
    icon: LuWifi
    cameras:
      - CAM0
      - CAM1
      - CAM2
HAProxy
The proxy has access to both networks and to the cameras/IoT devices.
It is the only way to reach the interfaces, and it avoids exposing all the ports of the IoT/camera devices on the family network.
Configuration
root@haproxy:/etc/haproxy# cat haproxy.cfg
global
# log levels: debug, info, notice, warning, etc.
log /dev/log local0 info
log /dev/log local1 debug
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
# Performance / stability
maxconn 4096
tune.ssl.default-dh-param 4096
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
option forwardfor
option http-server-close
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend http_in
bind *:80
mode http
redirect scheme https code 301 if !{ ssl_fc }
frontend https_in
bind *:443 ssl crt /etc/letsencrypt alpn h2,http/1.1
mode http
# Security headers
http-response set-header X-Frame-Options SAMEORIGIN # DENY breaks the Mosquitto MQTT iframe
http-response set-header X-Content-Type-Options nosniff
http-response set-header Referrer-Policy no-referrer
http-response set-header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
# ACLs by domain name
acl host_frigate hdr(host) -i frigate.example.com
acl host_homeassistant hdr(host) -i home-assistant.example.com
use_backend frigate_backend if host_frigate
use_backend homeassistant_backend if host_homeassistant
# Default: deny
default_backend deny_backend
backend frigate_backend
mode http
option httpchk GET /
server frigate 10.10.10.XXX:8971 check
backend homeassistant_backend
mode http
option httpchk GET /
server homeassistant 10.10.10.YYY:8123 check
# deny by default
backend deny_backend
mode http
http-request deny deny_status 403
Finally, as I have a lot of subdomains on agayon.be and some hosts are not connected to the internet, I switched to lego to renew the Let’s Encrypt certificates with the DNS challenge.
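A sketch of a lego invocation with the DNS challenge looks like this. The DNS provider (cloudflare here), the email and the domain are placeholders, not my actual setup; each provider expects its credentials through specific environment variables.

```sh
# Credentials for the DNS provider are passed through environment variables
export CLOUDFLARE_DNS_API_TOKEN="..."
# First issuance; subsequent runs use "renew" instead of "run"
lego --accept-tos --email admin@example.com \
     --dns cloudflare \
     --domains "frigate.example.com" \
     run
```

Because the challenge is answered through DNS records, this works even for hosts that are never exposed to the internet.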
The cameras and the Home Assistant instance are accessible on my phone through a WireGuard VPN. I don't use the Home Assistant application, and the services are not accessible from the internet.
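For illustration, a minimal WireGuard client configuration that routes only the internal ranges through the tunnel could look like this; the keys, addresses and endpoint are placeholders.

```
[Interface]
PrivateKey = <phone-private-key>
Address = 10.0.0.2/32
DNS = 192.168.129.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Only route the home networks through the tunnel
AllowedIPs = 192.168.129.0/24, 10.10.10.0/24
PersistentKeepalive = 25
```

Restricting AllowedIPs to the home ranges keeps the rest of the phone's traffic off the tunnel.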
A follow-up article may cover the firewall and other details. Thank you for reading this far!