Merged Config History to Remove Not-Actually-Secret Secrets
This is a combination of 129 commits:

Initial Server Configuration
Add Caddy
Add Jelly Bucket to Minio
Remove Podman DNS
Initialize Server Configuration Directory (also replace Minio Pod w/ Nix Derivation)
Remove Neko/WatchThingz User Configuration (Broken, See Issue)
Disable WatchThingz
Add Cockpit (TODO: Add Cockpit Plugins; TODO: Add Performance Metrics https://github.com/performancecopilot/pcp)
Start adding Gitea (TODO: Gitea-specific postgres config, determine global postgres)
Add Second Mass Storage Drive
Add Gitea in Full
Mount Both Data Dirs for Minio
Add CUDA to Nvidia
Add OCI Based Servers (TODO: Organize into server architecture)
Add Secrets
Add some nice-to-have packages
Massive Server Upgrade: Jelly s3fs mount; stats for things like Minio usage, logs, etc.
VirtualHost & Pod Cleanup: move pod imports into the OCI services that use them; have services define which Caddy virtualhost they belong to
Migrate homeassistant and jellyfin to new dir structure
Headscale and static files
Directory Reorganization: New Module Structure
Headscale is public facing
Headscale User Generation Module
Finish Headscale PreAuth Module (TODO: Activation Script sketch: Tailscale & Container)
Headscale integration
Add Local DNS Resolver & Local Domains
Add Path to Output of ensureUsers
Fix Path Setting
Add Services Dir
Local Join to Tailnet w/ Auth Gen
Togers Uses .tv ...
Move networking config (add networking to configuration.nix)
Update to Bridged Networking (requirement for nspawn)
Fix unit definitions
Cleanup defs for container support
Add Minio Containers to tailnet
Disable PostgreSQL, seems to break things
Migrate to LVM Disk
Fix not Using Headscale Containers
Re-add Nextcloud
Re-Auth Prometheus for Minio
Pretty Graphs
Init: pre-office servers
Init: pre-Pterodactyl server
Fix Jelly VPN
Disable Grafana for Now
Add VaultWarden
Add Anki
Add GC and Store Optimization
Correct Gitea's connection to postgresql
Add Vaultwarden, Remove Anki
Cleanup User Deps for Recognize
Add Nspawn Service
Change to Flake System
Fix flake path bugs
Add Hydra
Add Build Machine
Wings: Migrate to Nix Directly... or do tun/tap. Might do the latter
Try to get Anki to Work (passes args properly now, but not environment variables)
Add NAT Passthrough on Ports
Disable for now, interferes b/c of NAT
Tried to enable actions
Nix Serve Cache
Hydra DynRun
Increase port range
Stop Using Pod
Patch Hydra
Video Group & Patches
libnvidia-container ldconfig patch
More patching
nvidia-podman fix && jellyfin nvidia
Nix cache domain
Update Flake
Container Deployment User & Script
Add Handy Helper Deploy-scheme
Forgotten Flake Update
2023-03-12 -> 2023-03-21 Update Flake
Update Nextcloud 25 -> 26
Update Flake & Nvidia-Podman (update of flake broke nvidia-podman; this fixes it, hopefully)
Latest working version
Update Time!
Use new Gitea Config
Use new Gitea Config, properly (currently borked; need to wait, or go back to earlier working version)
Working now
Updates
Change Hydra Port
Whoops, Keyboard bad
Convert to String
Update Time
NodeJS InSecure for Now
OpenSSL 1.1.1t InSecure
Disable Hydra Tests
More insecure
Update and Ethan
Basic AudioBookshelf impl
Add AudioBookShelf
Fix Group
Test Env Var
Environment Wrong Location
Remove TMP Env
Config Dir
SystemDir: Audiobookshelf
Audiobook: getopt
ExecStart Args for Env
Correct Port
Add Domain: AudioBooks
Git LFS
Hauk Location Tracking (TODO: Change domain to whereis.chris.crompton.cc)
Enable Hauk
Correct Hauk Port
Flake Update
Docker-compat Disable
Recognize Setup
Nextcloud 26 -> 27
Disable Podman-Nvidia (environment is clouded for some reason™️: nvidia-container-tools makes a "docker" command visible)
OctoPrint & Prusa
Samba server
Reorganize for Config Merge
Move Nvidia Fix to File
Migrate to sops-nix
servers -> server
Remove Old Key Things for Agenix
1  .gitignore  vendored
@@ -2,4 +2,3 @@
 # Ignore build outputs from performing a nix-build or `nix build` command
 result
 result-*
-
7  .sops.yaml  new file
@@ -0,0 +1,7 @@
+keys:
+  - &hippocampus age1crymppz88etsdjpckmtdhr397x5xg5wv8jt6tcj23gt2snq73pzs04fuve
+creation_rules:
+  - path_regex: machines/hippocampus/secrets/[^/]+\.(yaml|json|env|ini)$
+    key_groups:
+      - age:
+          - *hippocampus
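The creation rule scopes the hippocampus age key to secret files under machines/hippocampus/secrets/. A minimal sketch of how a file matched by that rule might then be consumed from NixOS via sops-nix (the secret name and file path here are hypothetical, not taken from this diff):

```nix
{ config, ... }: {
  # Hypothetical secrets file matching the path_regex above.
  sops.defaultSopsFile = ./machines/hippocampus/secrets/secrets.yaml;

  # Declaring a secret makes sops-nix decrypt it at activation time;
  # the plaintext lands under /run/secrets by default.
  sops.secrets."minio-root-password" = { };

  # Other options can then reference the decrypted file via
  # config.sops.secrets."minio-root-password".path
}
```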
79  flake.lock  generated  new file
@@ -0,0 +1,79 @@
+{
+  "nodes": {
+    "nixpkgs": {
+      "locked": {
+        "lastModified": 1687898314,
+        "narHash": "sha256-B4BHon3uMXQw8ZdbwxRK1BmxVOGBV4viipKpGaIlGwk=",
+        "owner": "NixOS",
+        "repo": "nixpkgs",
+        "rev": "e18dc963075ed115afb3e312b64643bf8fd4b474",
+        "type": "github"
+      },
+      "original": {
+        "owner": "NixOS",
+        "ref": "nixos-unstable",
+        "repo": "nixpkgs",
+        "type": "github"
+      }
+    },
+    "nixpkgs-stable": {
+      "locked": {
+        "lastModified": 1691874659,
+        "narHash": "sha256-qgmixg0c/CRNT2p9Ad35kaC7NzYVZ6GRooErYI7OGJM=",
+        "owner": "NixOS",
+        "repo": "nixpkgs",
+        "rev": "efeed708ece1a9f4ae0506ae4a4d7da264a74102",
+        "type": "github"
+      },
+      "original": {
+        "owner": "NixOS",
+        "ref": "release-23.05",
+        "repo": "nixpkgs",
+        "type": "github"
+      }
+    },
+    "nixpkgs_2": {
+      "locked": {
+        "lastModified": 1691853136,
+        "narHash": "sha256-wTzDsRV4HN8A2Sl0SVQY0q8ILs90CD43Ha//7gNZE+E=",
+        "owner": "NixOS",
+        "repo": "nixpkgs",
+        "rev": "f0451844bbdf545f696f029d1448de4906c7f753",
+        "type": "github"
+      },
+      "original": {
+        "owner": "NixOS",
+        "ref": "nixpkgs-unstable",
+        "repo": "nixpkgs",
+        "type": "github"
+      }
+    },
+    "root": {
+      "inputs": {
+        "nixpkgs": "nixpkgs",
+        "sops-nix": "sops-nix"
+      }
+    },
+    "sops-nix": {
+      "inputs": {
+        "nixpkgs": "nixpkgs_2",
+        "nixpkgs-stable": "nixpkgs-stable"
+      },
+      "locked": {
+        "lastModified": 1691915920,
+        "narHash": "sha256-4pitrahUZc1ftIw38CelScd+JYGUVZ4mQTMe3VAz44c=",
+        "owner": "Mic92",
+        "repo": "sops-nix",
+        "rev": "32603de0dc988d60a7b80774dd7aed1083cd9629",
+        "type": "github"
+      },
+      "original": {
+        "owner": "Mic92",
+        "repo": "sops-nix",
+        "type": "github"
+      }
+    }
+  },
+  "root": "root",
+  "version": 7
+}
35  flake.nix  new file
@@ -0,0 +1,35 @@
+{
+  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
+  inputs.sops-nix.url = "github:Mic92/sops-nix";
+
+  outputs = { self, nixpkgs, sops-nix }@attrs: let
+    hydraGitea = (final: prev: {
+      hydra_unstable = prev.hydra_unstable.overrideAttrs
+        (old: {
+          doCheck = false;
+          patches = [
+            (final.fetchpatch {
+              name = "hydra-gitea-push-patch";
+              url = "https://patch-diff.githubusercontent.com/raw/NixOS/hydra/pull/1227.patch";
+              sha256 = "A4dN/4zLMKLYaD38lu87lzAWH/3EUM7G5njx7Q4W47w=";
+            })
+          ];
+        });
+    });
+    nvidiaContainer = import ./nvidiacontainer-overlay.nix nixpkgs;
+  in {
+
+    nixosConfigurations.nixos = nixpkgs.lib.nixosSystem {
+      system = "x86_64-linux";
+      specialArgs = attrs;
+      modules =
+        [
+          ({ config, pkgs, ... }: {
+            nixpkgs.overlays = [ hydraGitea nvidiaContainer ];
+          })
+          ./machines/hippocampus/configuration.nix
+          sops-nix.nixosModules.sops
+        ];
+    };
+  };
+}
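The flake layers two overlays onto nixpkgs before importing the machine configuration. As a general pattern (not this repository's code), an overlay is a `final: prev:` function returning replacement attributes, which is how `hydra_unstable` gets its tests disabled and the Gitea push patch applied. A minimal sketch with a hypothetical package and patch file:

```nix
# Sketch of the overlay pattern used above; somePackage and
# local-fix.patch are hypothetical placeholders.
final: prev: {
  somePackage = prev.somePackage.overrideAttrs (old: {
    doCheck = false; # skip the package's test phase
    patches = (old.patches or [ ]) ++ [ ./local-fix.patch ];
  });
}
```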
21  libnvidia-container/avoid-static-libtirpc-build.patch  new file
@@ -0,0 +1,21 @@
+diff --git a/Makefile b/Makefile
+index 00d561e..68221ae 100644
+--- a/Makefile
++++ b/Makefile
+@@ -242,7 +242,7 @@ $(BIN_NAME): $(BIN_OBJS)
+ ##### Public rules #####
+ 
+ all: CPPFLAGS += -DNDEBUG
+-all: shared static tools
++all: shared tools
+ 
+ # Run with ASAN_OPTIONS="protect_shadow_gap=0" to avoid CUDA OOM errors
+ debug: CFLAGS += -pedantic -fsanitize=undefined -fno-omit-frame-pointer -fno-common -fsanitize=address
+@@ -274,7 +274,6 @@ install: all
+ 	# Install header files
+ 	$(INSTALL) -m 644 $(LIB_INCS) $(DESTDIR)$(includedir)
+ 	# Install library files
+-	$(INSTALL) -m 644 $(LIB_STATIC) $(DESTDIR)$(libdir)
+ 	$(INSTALL) -m 755 $(LIB_SHARED) $(DESTDIR)$(libdir)
+ 	$(LN) -sf $(LIB_SONAME) $(DESTDIR)$(libdir)/$(LIB_SYMLINK)
+ ifeq ($(WITH_NVCGO), yes)
13  libnvidia-container/config.tml  new file
@@ -0,0 +1,13 @@
+disable-require = true
+#swarm-resource = "DOCKER_RESOURCE_GPU"
+
+[nvidia-container-cli]
+#root = "/run/nvidia/driver"
+#path = "/usr/bin/nvidia-container-cli"
+environment = []
+#debug = "/var/log/nvidia-container-runtime-hook.log"
+ldcache = "/tmp/ld.so.cache"
+load-kmods = true
+no-cgroups = false
+#user = "root:video"
+ldconfig = "@@glibcbin@/bin/ldconfig"
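`@@glibcbin@` reads as a literal `@` followed by an `@glibcbin@` placeholder: libnvidia-container treats a leading `@` in the `ldconfig` value as a path on the host filesystem, as the comment in src/nvc_ldcache.c notes. One plausible way to fill the placeholder at build time (an assumption; the wiring is not shown in this diff) is `pkgs.substituteAll`:

```nix
# Assumed wiring: substitute @glibcbin@ in config.tml at build time.
# The doubled "@@" leaves one literal "@" in front of the store path,
# which libnvidia-container interprets as host-relative.
{ pkgs }:
pkgs.substituteAll {
  src = ./config.tml;
  glibcbin = pkgs.glibc.bin;
}
```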
14  libnvidia-container/inline-c-struct.patch  new file
@@ -0,0 +1,14 @@
+diff --git a/src/nvcgo.c b/src/nvcgo.c
+index 98789a3..1197302 100644
+--- a/src/nvcgo.c
++++ b/src/nvcgo.c
+@@ -33,7 +33,8 @@
+ void nvcgo_program_1(struct svc_req *, register SVCXPRT *);
+ 
+ static struct nvcgo_ext {
+-	struct nvcgo;
++	struct rpc rpc;
++	struct libnvcgo api;
+ 	bool initialized;
+ 	void *dl_handle;
+ } global_nvcgo_context;
122  libnvidia-container/libnvc-ldconfig-and-path-fix.patch  new file
@@ -0,0 +1,122 @@
+diff --git a/src/ldcache.c b/src/ldcache.c
+index 38bab05..14c1893 100644
+--- a/src/ldcache.c
++++ b/src/ldcache.c
+@@ -108,40 +108,27 @@ ldcache_close(struct ldcache *ctx)
+ 
+ int
+ ldcache_resolve(struct ldcache *ctx, uint32_t arch, const char *root, const char * const libs[],
+-    char *paths[], size_t size, ldcache_select_fn select, void *select_ctx)
++    char *paths[], size_t size, const char* version)
+ {
+ 	char path[PATH_MAX];
+-	struct header_libc6 *h;
+-	int override;
++	char dir[PATH_MAX];
++	char lib[PATH_MAX];
+ 
+-	h = (struct header_libc6 *)ctx->ptr;
+ 	memset(paths, 0, size * sizeof(*paths));
+ 
+-	for (uint32_t i = 0; i < h->nlibs; ++i) {
+-		int32_t flags = h->libs[i].flags;
+-		char *key = (char *)ctx->ptr + h->libs[i].key;
+-		char *value = (char *)ctx->ptr + h->libs[i].value;
+-
+-		if (!(flags & LD_ELF) || (flags & LD_ARCH_MASK) != arch)
+-			continue;
+-
+-		for (size_t j = 0; j < size; ++j) {
+-			if (!str_has_prefix(key, libs[j]))
+-				continue;
+-			if (path_resolve(ctx->err, path, root, value) < 0)
+-				return (-1);
+-			if (paths[j] != NULL && str_equal(paths[j], path))
+-				continue;
+-			if ((override = select(ctx->err, select_ctx, root, paths[j], path)) < 0)
+-				return (-1);
+-			if (override) {
+-				free(paths[j]);
+-				paths[j] = xstrdup(ctx->err, path);
+-				if (paths[j] == NULL)
+-					return (-1);
+-			}
+-			break;
+-		}
++	for (size_t j = 0; j < size; ++j) {
++		snprintf(dir, 100, "/run/opengl-driver%s/lib",
++		    arch == LD_I386_LIB32 ? "-32" : "");
++		if (!strncmp(libs[j], "libvdpau_nvidia.so", 100))
++			strcat(dir, "/vdpau");
++		snprintf(lib, 100, "%s/%s.%s", dir, libs[j], version);
++		if (path_resolve_full(ctx->err, path, "/", lib) < 0)
++			return (-1);
++		if (!file_exists(ctx->err, path))
++			continue;
++		paths[j] = xstrdup(ctx->err, path);
++		if (paths[j] == NULL)
++			return (-1);
+ 	}
+ 	return (0);
+ }
+diff --git a/src/ldcache.h b/src/ldcache.h
+index 33d78dd..2b087db 100644
+--- a/src/ldcache.h
++++ b/src/ldcache.h
+@@ -50,6 +50,6 @@ void ldcache_init(struct ldcache *, struct error *, const char *);
+ int ldcache_open(struct ldcache *);
+ int ldcache_close(struct ldcache *);
+ int ldcache_resolve(struct ldcache *, uint32_t, const char *, const char * const [],
+-    char *[], size_t, ldcache_select_fn, void *);
++    char *[], size_t, const char*);
+ 
+ #endif /* HEADER_LDCACHE_H */
+diff --git a/src/nvc_info.c b/src/nvc_info.c
+index 9e27c3c..c227f5b 100644
+--- a/src/nvc_info.c
++++ b/src/nvc_info.c
+@@ -216,15 +216,13 @@ find_library_paths(struct error *err, struct dxcore_context *dxcore, struct nvc_
+ 	if (path_resolve_full(err, path, root, ldcache) < 0)
+ 		return (-1);
+ 	ldcache_init(&ld, err, path);
+-	if (ldcache_open(&ld) < 0)
+-		return (-1);
+ 
+ 	info->nlibs = size;
+ 	info->libs = array_new(err, size);
+ 	if (info->libs == NULL)
+ 		goto fail;
+ 	if (ldcache_resolve(&ld, LIB_ARCH, root, libs,
+-	    info->libs, info->nlibs, select_libraries_fn, info) < 0)
++	    info->libs, info->nlibs, info->nvrm_version) < 0)
+ 		goto fail;
+ 
+ 	info->nlibs32 = size;
+@@ -232,13 +230,11 @@ find_library_paths(struct error *err, struct dxcore_context *dxcore, struct nvc_
+ 	if (info->libs32 == NULL)
+ 		goto fail;
+ 	if (ldcache_resolve(&ld, LIB32_ARCH, root, libs,
+-	    info->libs32, info->nlibs32, select_libraries_fn, info) < 0)
++	    info->libs32, info->nlibs32, info->nvrm_version) < 0)
+ 		goto fail;
+ 	rv = 0;
+ 
+ fail:
+-	if (ldcache_close(&ld) < 0)
+-		return (-1);
+ 	return (rv);
+ }
+ 
+diff --git a/src/nvc_ldcache.c b/src/nvc_ldcache.c
+index db3b2f6..076b4ba 100644
+--- a/src/nvc_ldcache.c
++++ b/src/nvc_ldcache.c
+@@ -367,7 +367,7 @@ nvc_ldcache_update(struct nvc_context *ctx, const struct nvc_container *cnt)
+ 	if (validate_args(ctx, cnt != NULL) < 0)
+ 		return (-1);
+ 
+-	argv = (char * []){cnt->cfg.ldconfig, "-f", "/etc/ld.so.conf", "-C", "/etc/ld.so.cache", cnt->cfg.libs_dir, cnt->cfg.libs32_dir, NULL};
++	argv = (char * []){cnt->cfg.ldconfig, "-f", "/tmp/ld.so.conf.nvidia-host", "-C", "/tmp/ld.so.cache.nvidia-host", cnt->cfg.libs_dir, cnt->cfg.libs32_dir, NULL};
+ 	if (*argv[0] == '@') {
+ 		/*
+ 		 * We treat this path specially to be relative to the host filesystem.
161  machines/hippocampus/configuration.nix  new file
@@ -0,0 +1,161 @@
+# Edit this configuration file to define what should be installed on
+# your system. Help is available in the configuration.nix(5) man page
+# and in the NixOS manual (accessible by running ‘nixos-help’).
+
+{ config, pkgs, ... }:
+
+{
+  imports =
+    [ # Include the results of the hardware scan.
+      ./hardware-configuration.nix
+
+      # Network configuration
+      ./networking.nix
+
+      # Enable Flakes
+      ./flakes.nix
+
+      # Enable Secrets
+      ./secrets.nix
+
+      # Nvidia Driver Config
+      ./nvidia.nix
+
+      # Enable Containers
+      ./oci.nix
+
+      # Servers: (Nextcloud, minio, and more)
+      ./servers.nix
+
+      # Services: (tailscale, etc.)
+      ./services.nix
+    ];
+
+  nixpkgs.config.permittedInsecurePackages = [
+    "nodejs-14.21.3"
+    "openssl-1.1.1t"
+    "openssl-1.1.1u"
+  ];
+
+  nix.gc = {
+    automatic = true;
+    dates = "weekly";
+    options = "--delete-older-than 30d";
+  };
+  nix.settings.auto-optimise-store = true;
+
+  # Bootloader.
+  boot.loader.systemd-boot.enable = true;
+  boot.loader.efi.canTouchEfiVariables = true;
+  boot.loader.efi.efiSysMountPoint = "/boot/efi";
+
+  networking.hostName = "nixos"; # Define your hostname.
+  # networking.wireless.enable = true; # Enables wireless support via wpa_supplicant.
+
+  # Configure network proxy if necessary
+  # networking.proxy.default = "http://user:password@proxy:port/";
+  # networking.proxy.noProxy = "127.0.0.1,localhost,internal.domain";
+
+  # Enable networking
+  networking.networkmanager.enable = true;
+
+  # Set your time zone.
+  time.timeZone = "America/Toronto";
+
+  # Select internationalisation properties.
+  i18n.defaultLocale = "en_CA.UTF-8";
+
+  # Enable the X11 windowing system.
+  services.xserver.enable = true;
+
+  # Enable the KDE Plasma Desktop Environment.
+  services.xserver.displayManager.sddm.enable = true;
+  services.xserver.desktopManager.plasma5.enable = true;
+
+  # Configure keymap in X11
+  services.xserver = {
+    layout = "us";
+    xkbVariant = "";
+  };
+
+  # Enable CUPS to print documents.
+  services.printing.enable = true;
+
+  # Enable sound with pipewire.
+  sound.enable = true;
+  hardware.pulseaudio.enable = false;
+  security.rtkit.enable = true;
+  services.pipewire = {
+    enable = true;
+    alsa.enable = true;
+    alsa.support32Bit = true;
+    pulse.enable = true;
+    # If you want to use JACK applications, uncomment this
+    #jack.enable = true;
+
+    # use the example session manager (no others are packaged yet so this is enabled by default,
+    # no need to redefine it in your config for now)
+    #media-session.enable = true;
+  };
+
+  # Enable touchpad support (enabled default in most desktopManager).
+  # services.xserver.libinput.enable = true;
+
+  # Define a user account. Don't forget to set a password with ‘passwd’.
+  users.users.server = {
+    isNormalUser = true;
+    description = "server";
+    extraGroups = [ "networkmanager" "wheel" "video" ];
+    packages = with pkgs; [
+    ];
+  };
+
+  # Enable automatic login for the user.
+  services.xserver.displayManager.autoLogin.enable = true;
+  services.xserver.displayManager.autoLogin.user = "server";
+
+  # List packages installed in system profile. To search, run:
+  # $ nix search wget
+  environment.systemPackages = with pkgs; [
+    firefox
+
+    screen
+    btop
+    htop
+
+    git
+    git-lfs
+
+    emacs
+
+    prusa-slicer
+
+    sops
+  ];
+
+  # Some programs need SUID wrappers, can be configured further or are
+  # started in user sessions.
+  # programs.mtr.enable = true;
+  # programs.gnupg.agent = {
+  #   enable = true;
+  #   enableSSHSupport = true;
+  # };
+
+  # List services that you want to enable:
+
+  # Enable the OpenSSH daemon.
+  services.openssh.enable = true;
+
+  # Open ports in the firewall.
+  # networking.firewall.allowedTCPPorts = [ ... ];
+  # networking.firewall.allowedUDPPorts = [ ... ];
+  # Or disable the firewall altogether.
+  networking.firewall.enable = false;
+
+  # This value determines the NixOS release from which the default
+  # settings for stateful data, like file locations and database versions
+  # on your system were taken. It‘s perfectly fine and recommended to leave
+  # this value at the release version of the first install of this system.
+  # Before changing this value read the documentation for this option
+  # (e.g. man configuration.nix or on https://nixos.org/nixos/options.html).
+  system.stateVersion = "22.11"; # Did you read the comment?
+
+}
3  machines/hippocampus/flakes.nix  new file
@@ -0,0 +1,3 @@
+{ pkgs, ... }: {
+  nix.settings.experimental-features = [ "nix-command" "flakes" ];
+}
49  machines/hippocampus/hardware-configuration.nix  new file
@@ -0,0 +1,49 @@
+# Do not modify this file!  It was generated by ‘nixos-generate-config’
+# and may be overwritten by future invocations.  Please make changes
+# to /etc/nixos/configuration.nix instead.
+{ config, lib, pkgs, modulesPath, ... }:
+
+{
+  imports =
+    [ (modulesPath + "/installer/scan/not-detected.nix")
+    ];
+
+  boot.initrd.availableKernelModules = [ "ahci" "xhci_pci" "ehci_pci" "usb_storage" "usbhid" "sd_mod" "sr_mod" ];
+  boot.initrd.kernelModules = [ ];
+  boot.kernelModules = [ "kvm-intel" ];
+  boot.extraModulePackages = [ ];
+
+  fileSystems."/" =
+    { device = "/dev/disk/by-uuid/e68356b4-237d-4508-9dac-dfa253b7a548";
+      fsType = "ext4";
+    };
+
+  fileSystems."/boot/efi" =
+    { device = "/dev/disk/by-uuid/78EA-3351";
+      fsType = "vfat";
+    };
+
+  fileSystems."/mass" =
+    { device = "/dev/mass/red2x6";
+      fsType = "xfs";
+    };
+
+  swapDevices = [ ];
+
+  # Enables DHCP on each ethernet and wireless interface. In case of scripted networking
+  # (the default) this is the recommended approach. When using systemd-networkd it's
+  # still possible to use this option, but it's recommended to use it in conjunction
+  # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
+  networking.useDHCP = lib.mkDefault true;
+  # networking.interfaces.enp0s25.useDHCP = lib.mkDefault false;
+  networking.defaultGateway = "192.168.1.1";
+  networking.nameservers = [
+    "8.8.8.8"
+    "8.8.4.4"
+  ];
+  # networking.interfaces.enp4s0.useDHCP = lib.mkDefault true;
+  # networking.interfaces.enp8s0.useDHCP = lib.mkDefault true;
+
+  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
+  hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
+}
86  machines/hippocampus/modules/containerHeadscale.nix  new file
@@ -0,0 +1,86 @@
+{config, pkgs, lib, ...}: with lib; let
+
+  cfg = config.services.headscale;
+
+  authServer = cfg.settings.server_url;
+
+  connectContainer = name: options: {
+    enableTun = true;
+    bindMounts = {
+      "/var/tailauth" = {
+        hostPath = cfg.ensureUsers."${name}".path;
+      };
+    };
+    config = {config, pkgs, ...}: {
+      imports = [
+        ./tailscale.nix
+      ];
+
+      networking.nameservers = [ "1.1.1.1" ];
+      networking.useHostResolvConf = false;
+
+      networking.firewall = {
+        enable = false;
+      };
+
+      services.tailscale = {
+        enable = true;
+        useRoutingFeatures = "client";
+        authTokenPath = "/var/tailauth";
+        authUrl = authServer;
+      };
+    };
+  };
+
+  mkContainerAfterToken = name: options: {
+    requires = ["headscale-preauth-regen-${name}.service"];
+    after = ["headscale-preauth-regen-${name}.service"];
+  };
+
+  ensureContainerUser = name: options: {
+
+  };
+
+in {
+  # Extend NixOS containers to automatically
+  # create a headscale user with the container
+  # name, generate its auth, and bind it to
+  # the container
+  imports = [
+    ./headscale.nix
+  ];
+
+  options = {
+    services.headscale.containers = mkOption {
+      type = types.attrsOf (types.submodule (
+        {config, options, name, ...}:
+        {
+
+        }
+      ));
+      default = {};
+    };
+  };
+
+  config = {
+    networking.bridges = {
+      "br0" = {
+        interfaces = [];
+      };
+    };
+    networking.interfaces.br0.ipv4.addresses = [{
+      address = "10.0.0.1";
+      prefixLength = 24;
+    }];
+    networking.nat = {
+      enable = true;
+      # Check for hostBridge use vb instead of ve
+      internalInterfaces = (map (n: "vb-${n}") (attrNames cfg.containers)) ++ ["br0"];
+      externalInterface = "enp0s25";
+      enableIPv6 = true;
+    };
+    containers = mapAttrs connectContainer cfg.containers;
+    systemd.services = mapAttrs' (n: v: nameValuePair "container@${n}" (mkContainerAfterToken n v)) cfg.containers;
+    services.headscale.ensureUsers = mapAttrs ensureContainerUser cfg.containers;
+  };
+}
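The containerHeadscale module is driven entirely by `services.headscale.containers`: declaring a name there creates the matching headscale user, generates its preauth token, and bind-mounts it into the container. A hypothetical usage sketch (the container name is invented for illustration):

```nix
{
  # Creates a headscale user named "jellyfin", generates a preauth
  # token for it, and exposes that token to the container at
  # /var/tailauth so tailscale inside can join the tailnet.
  services.headscale.containers.jellyfin = { };
}
```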
166
machines/hippocampus/modules/headscale.nix
Normal file
166
machines/hippocampus/modules/headscale.nix
Normal file
@@ -0,0 +1,166 @@
|
|||||||
|
{ config, lib, pkgs, ...}:
|
||||||
|
|
||||||
|
with lib;
|
||||||
|
let
|
||||||
|
cfg = config.services.headscale;
|
||||||
|
|
||||||
|
userOptions = { config, ... }: {
|
||||||
|
options = {
|
||||||
|
enablePreAuth = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = true;
|
||||||
|
description = lib.mdDoc ''
|
||||||
|
Generate Pre Authorized Token with User
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
preAuthEphemeral = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = false;
|
||||||
|
description = lib.mdDoc ''
|
||||||
|
Should the token be ephemeral, making the user
|
||||||
|
dissappear after no usage
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
preAuthExpiration = mkOption {
|
||||||
|
type = types.str;
|
||||||
|
default = "1h";
|
||||||
|
description = lib.mdDoc ''
|
||||||
|
How long should the token be active for
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
preAuthReusable = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = false;
|
||||||
|
description = lib.mdDoc ''
|
||||||
|
Should the token be able to be used more than once
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
preAuthTags = mkOption {
|
||||||
|
type = types.listOf types.str;
|
||||||
|
default = [];
|
||||||
|
description = lib.mdDoc ''
|
||||||
|
How should this login token be tagged
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
preAuthRegen = mkOption {
|
||||||
|
type = types.bool;
|
||||||
|
default = false;
|
||||||
|
description = lib.mdDoc ''
|
||||||
|
Should a timer be built to regenerate the token
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
path = mkOption {
|
||||||
|
type = types.path;
|
||||||
|
default = "/run/headscale_${config._module.args.name}_auth/auth";
|
||||||
|
description = mdDoc ''
|
||||||
|
Path of generated token
|
||||||
|
'';
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
|
||||||
|
mkUserService = name: options: {
|
||||||
|
"headscale-ensureUser-${name}" = {
|
||||||
|
description = "Ensure '${name}' user exists for headscale";
|
||||||
|
wantedBy = ["multi-user.target" "headscale.service"];
|
||||||
|
after = ["headscale.service"];
|
||||||
|
requires = ["headscale.service"];
|
||||||
|
partOf = ["headscale.service"];
|
||||||
|
|
||||||
|
script = ''
|
||||||
|
${cfg.package}/bin/headscale users create ${name}
|
||||||
|
'';
|
||||||
|
|
||||||
|
serviceConfig = {
|
||||||
|
Type = "oneshot";
|
||||||
|
User = cfg.user;
|
||||||
|
Group = cfg.user;
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
|
||||||
|
# Make Auth token readable with given set of perms and also user?
|
||||||
|
# also generate on timer
|
||||||
|
|
||||||
  mkUserPreAuth = name: options: optionalAttrs options.enablePreAuth {
    "headscale-preauth-${name}" = {
      description = "Generate Headscale Preauth Token for '${name}'";
      wantedBy = ["multi-user.target"];
      after = ["headscale-ensureUser-${name}.service"];
      requires = ["headscale-ensureUser-${name}.service"];
      partOf = ["headscale.service"];

      script = ''
        ${cfg.package}/bin/headscale preauthkeys -u ${name} create \
          ${lib.optionalString options.preAuthEphemeral "--ephemeral"} \
          ${lib.optionalString options.preAuthReusable "--reusable"} \
          --expiration ${options.preAuthExpiration} \
          ${lib.optionalString (options.preAuthTags != []) "--tags ${toString options.preAuthTags}"} \
          > /run/headscale_${name}_auth/auth; echo /run/headscale_${name}_auth/auth created
      '';

      # TODO: Use Activation Script to Generate On Boot
      # with option to regenerate with timer

      serviceConfig = {
        Type = "oneshot";
        PermissionsStartOnly = "true";
        # Directories created here don't belong to / aren't passed along
        # with the specified user, resulting in %t resolving to root's
        # runtime directory
        # https://github.com/containers/podman/issues/12778
        RuntimeDirectory = "headscale_${name}_auth";
        RuntimeDirectoryMode = "0775";
        User = cfg.user;
        Group = cfg.user;
        RemainAfterExit = "yes";
      };
    };
  };

  mkUserPreAuthRegen = name: options: {
    "headscale-preauth-regen-${name}" = {
      wantedBy = ["multi-user.target" "headscale-ensureUser-${name}.service"];
      after = ["headscale-ensureUser-${name}.service"];

      script = ''
        ${pkgs.systemd}/bin/systemctl restart headscale-preauth-${name}.service
      '';
      serviceConfig = {
        Type = "oneshot";
      };
    };
  };

  mkServices = name: options: (mkUserService name options) //
    (mkUserPreAuth name options) //
    (mkUserPreAuthRegen name options);

  mkTimer = name: options: optionalAttrs options.preAuthRegen {
    wantedBy = ["multi-user.target" "headscale-ensureUser-${name}.service"];
    after = ["headscale-ensureUser-${name}.service"];
    requires = ["headscale-ensureUser-${name}.service"];
    bindsTo = ["headscale.service"];
    partOf = ["headscale.service"];

    timerConfig = {
      OnUnitActiveSec = options.preAuthExpiration;
      Unit = "headscale-preauth-regen-${name}.service";
    };
  };
in
{
  options.services.headscale.ensureUsers = mkOption {
    default = {};
    type = types.attrsOf (types.submodule userOptions);
    description = lib.mdDoc "Ensure these users exist in headscale";
  };
  config = lib.mkIf (cfg.enable && cfg.ensureUsers != {}) {
    systemd.services = foldl' (a: b: a // b) {}
      (attrValues (mapAttrs (n: v: mkServices n v) cfg.ensureUsers));

    systemd.timers = filterAttrs (n: v: v != {}) (mapAttrs'
      (n: v: nameValuePair "headscale-preauth-${n}" (mkTimer n v)) cfg.ensureUsers);
  };
}
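A hypothetical consumer of the module above might look like the following sketch; `media` is an illustrative user name, and the option names are the ones `mkUserPreAuth`/`mkTimer` read from the `userOptions` submodule:

```nix
services.headscale.ensureUsers = {
  media = {
    # Options referenced by mkUserPreAuth/mkTimer above:
    enablePreAuth = true;
    preAuthReusable = true;
    preAuthEphemeral = false;
    preAuthExpiration = "24h";
    preAuthTags = [];
    preAuthRegen = true;  # regenerate the key on a timer when it expires
  };
};
```

The generated token would then land at `/run/headscale_media_auth/auth`, the path the preauth service above writes to.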
machines/hippocampus/modules/pods.nix (new file, 203 lines)
@@ -0,0 +1,203 @@
{ config, lib, pkgs, ...}:

with lib;
let
  cfg = config.virtualisation.oci-containers;
  proxy_env = config.networking.proxy.envVars;

  runDirSetup = ''
    if [ -z ''${XDG_RUNTIME_DIR} ]; then
      export XDG_RUNTIME_DIR="/run/";
    fi
  '';

  podOptions =
    { ... }: {
      options = {
        ports = mkOption {
          type = with types; listOf str;
          default = [];
          description = lib.mdDoc ''
            Network ports to publish from the pod to the outer host.

            Valid formats:
            - `<ip>:<hostPort>:<podPort>`
            - `<ip>::<podPort>`
            - `<hostPort>:<podPort>`
            - `<podPort>`

            Both `hostPort` and `podPort` can be specified as a range of
            ports. When specifying ranges for both, the number of pod
            ports in the range must match the number of host ports in the
            range. Example: `1234-1236:1234-1236/tcp`

            When specifying a range for `hostPort` only, the `podPort`
            must *not* be a range. In this case, the pod port is published
            somewhere within the specified `hostPort` range.
            Example: `1234-1236:1234/tcp`
          '';
          example = literalExpression ''
            [
              "8080:9000"
            ]
          '';
        };

        volumes = mkOption {
          type = with types; listOf str;
          default = [];
          description = lib.mdDoc ''
            List of volumes to attach to this pod.

            Note that this is a list of `"src:dst"` strings to
            allow for `src` to refer to `/nix/store` paths, which
            would be difficult with an attribute set. There are
            also a variety of mount options available as a third
            field.
          '';
          example = literalExpression ''
            [
              "volume_name:/path/inside/container"
              "/path/on/host:/path/inside/container"
            ]
          '';
        };

        enableInfra = mkOption {
          type = types.bool;
          default = true;
        };

        infraImage = mkOption {
          type = types.str;
          default = "";
        };

        infraName = mkOption {
          type = types.str;
          default = "";
        };

        dependsOn = mkOption {
          type = with types; listOf str;
          default = [];
          description = lib.mdDoc ''
            Define which other containers this one depends on. They will be added to both After and Requires for the unit.
            Use the same name as the attribute under `virtualisation.oci-containers.containers`.
          '';
          example = literalExpression ''
            virtualisation.oci-containers.containers = {
              node1 = {};
              node2 = {
                dependsOn = [ "node1" ];
              };
            }
          '';
        };

        contains = mkOption {
          type = with types; listOf str;
          default = [];
          description = lib.mdDoc ''
            Define what containers are contained within this pod,
            with shared ports and volumes as described above.
          '';
          example = literalExpression ''
            virtualisation.oci-containers = {
              containers = {
                node1 = {};
                node2 = {};
              };
              pods = {
                nodes = {
                  contains = [
                    "node1"
                    "node2"
                  ];
                };
              };
            }
          '';
        };

        extraOptions = mkOption {
          type = with types; listOf str;
          default = [];
          description = lib.mdDoc "Extra options for {command}`${cfg.backend} run`.";
          example = literalExpression ''
            ["--network=host"]
          '';
        };

        autoStart = mkOption {
          type = types.bool;
          default = true;
          description = lib.mdDoc ''
            When enabled, the container is automatically started on boot.
            If this option is set to false, the container has to be started on-demand via its service.
          '';
        };
      };
    };
  mkService = name: pod: let
    dependsOn = map (x: "${cfg.backend}-${x}.service") pod.dependsOn;
    contains = map (x: "${cfg.backend}-${x}.service") pod.contains;
    escapedName = escapeShellArg "${name}_pod";
    podPID = "$XDG_RUNTIME_DIR/${escapedName}.pid";
  in rec {
    wantedBy = [] ++ optional pod.autoStart "multi-user.target";
    after = dependsOn;
    before = contains;
    requires = dependsOn;
    wants = contains;
    environment = proxy_env;
    path = [ config.virtualisation.podman.package ];

    preStart = runDirSetup + ''
      ${cfg.backend} pod rm --ignore -f --pod-id-file=${podPID} || true
    '';

    script = runDirSetup + (concatStringsSep " \\\n " ([
      "exec ${cfg.backend} pod create"
      "--name=${escapedName}"
      "--pod-id-file=${podPID}"
      "--replace"
    ] ++ map escapeShellArg pod.extraOptions
      ++ (if pod.enableInfra then ([
        "--infra-conmon-pidfile=$XDG_RUNTIME_DIR/${escapedName}-infra.pid"
        (if (pod.infraImage == "") then "" else "--infra-image=${pod.infraImage}")
        "--infra-name=${escapedName}-infra"
      ] ++ (map (p: "-p ${p}") pod.ports)
        ++ (map (v: "-v ${v}") pod.volumes)) else [])
      ++ ["--infra=${lib.trivial.boolToString pod.enableInfra}"]
    ));

    preStop = runDirSetup + "${cfg.backend} pod stop --ignore --pod-id-file=${podPID}";
    postStop = runDirSetup + ''
      ${cfg.backend} pod rm --ignore -f --pod-id-file=${podPID} || true
      rm ${podPID}
    '';

    serviceConfig = {
      RemainAfterExit = "yes";
    };
  };
in
{
  options.virtualisation.oci-containers.pods = mkOption {
    default = {};
    type = types.attrsOf (types.submodule podOptions);
    description = lib.mdDoc "OCI (podman) Pods to run as a systemd service";
  };
  config = let
    merge = p: lib.lists.foldr (e1: e2: e1 // e2) {} p;
    joinPods = pods: lib.lists.foldr lib.attrsets.unionOfDisjoint {}
      (map merge (attrValues (mapAttrs
        (name: pod: map (cont: { "${cont}".extraOptions = ["--pod=${name}_pod"]; }) pod.contains)
        pods)));
    makeBinds = pods: lib.lists.foldr lib.attrsets.unionOfDisjoint {}
      (map merge (attrValues (mapAttrs
        (name: pod: map (cont: { "${cfg.backend}-${cont}".partOf = ["${cfg.backend}_pod-${name}.service"]; }) pod.contains)
        pods)));
  in lib.mkIf (cfg.pods != {}) {
    systemd.services = (mapAttrs'
      (n: v: nameValuePair "${cfg.backend}_pod-${n}"
        (mkService n v)) cfg.pods) // (makeBinds cfg.pods);
    virtualisation.oci-containers.containers = joinPods cfg.pods;
  };
}
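Putting the pods module to use might look like this sketch (container names and images are illustrative); the `contains` list is what `joinPods` above turns into `--pod=…` options on each member container:

```nix
virtualisation.oci-containers = {
  pods.media = {
    ports = [ "8080:80" ];        # published on the pod's infra container
    contains = [ "web" "cache" ];
  };
  containers = {
    web = { image = "docker.io/library/nginx:latest"; };
    cache = { image = "docker.io/library/redis:7"; };
  };
};
```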
machines/hippocampus/modules/tailscale.nix (new file, 35 lines)
@@ -0,0 +1,35 @@
{config, pkgs, lib, ...}:
with lib;
let
  cfg = config.services.tailscale;
  defPath = if config.services.headscale.enable then "${config.services.headscale.settings.server_url}" else null;
in {
  # Configure tailscale to allow specifying user login and auth path
  options.services.tailscale = {
    authTokenPath = mkOption {
      type = types.nullOr types.path;
      default = null;
      description = "Path to an auth key file; when set, tailscale logs in automatically with it";
    };
    authUrl = mkOption {
      type = types.nullOr types.str;
      default = defPath;
      description = "Server URL of head/tailscale";
    };
  };

  config = let
    # FIXME: `name` is not bound in this scope, and authTokenPath (a path) is
    # compared against a server URL, so this list is always empty as written.
    waitGen = optional (cfg.authTokenPath == defPath) "headscale-preauth-regen-${name}";
  in {
    systemd.services.tailscale_autologin = mkIf (cfg.enable && cfg.authTokenPath != null) {
      wantedBy = ["tailscaled.service"];
      after = ["tailscaled.service"] ++ waitGen;
      script = ''
        ${pkgs.tailscale}/bin/tailscale up --login-server ${cfg.authUrl} --authkey $(cat ${cfg.authTokenPath})
      '';
      serviceConfig = {
        Type = "simple";
      };
    };
  };
}
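Combined with the headscale preauth module, enabling autologin could look like this sketch; `media` is a hypothetical user, and the path format matches the `RuntimeDirectory` the preauth service writes into:

```nix
services.tailscale = {
  enable = true;
  # authUrl defaults to the local headscale's server_url when headscale is enabled
  authTokenPath = "/run/headscale_media_auth/auth";
};
```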
machines/hippocampus/networking.nix (new file, 9 lines)
@@ -0,0 +1,9 @@
{
  networking.useDHCP = false;
  networking.interfaces.enp0s25.ipv4.addresses = [
    {
      address = "192.168.1.20";
      prefixLength = 24;
    }
  ];
}
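The fragment pins a static address but no gateway or resolver; a typical companion would be something like the following sketch (the router address is an assumption for a 192.168.1.0/24 LAN; the local resolver refers to the unbound setup imported elsewhere in this config):

```nix
networking.defaultGateway = "192.168.1.1";  # assumed router address
networking.nameservers = [ "127.0.0.1" ];   # local unbound, per servers/private/unbound.nix
```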
machines/hippocampus/nvidia.nix (new file, 17 lines)
@@ -0,0 +1,17 @@
{config, pkgs, ...}:

{
  # NVIDIA drivers are unfree.
  nixpkgs.config.allowUnfree = true;

  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.opengl.enable = true;

  # Optionally, you may need to select the appropriate driver version for your specific GPU.
  hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.legacy_470;

  environment.systemPackages = with pkgs; [
    cudaPackages.cudatoolkit
    cudaPackages.cudnn
  ];
}
machines/hippocampus/oci.nix (new file, 16 lines)
@@ -0,0 +1,16 @@
{ config, pkgs, ... }:

{
  virtualisation = {
    podman = {
      enable = true;
      # enableNvidia = true;
      dockerCompat = true;
    };
  };

  environment.systemPackages = with pkgs; [
    podman-compose
    # nvidia-podman
  ];
}
machines/hippocampus/oci/hauk.nix (new file, 21 lines)
@@ -0,0 +1,21 @@
{ config, pkgs, ...}:
{
  config = {
    virtualisation.oci-containers = {
      containers = {
        hauk = {
          image = "bilde2910/hauk";
          ports = [
            "7888:80"
          ];
          volumes = [
            "/etc/hauk:/etc/hauk"
          ];
          extraOptions = [
            "--pull=newer"
          ];
        };
      };
    };
  };
}
machines/hippocampus/oci/homeassistant.nix (new file, 33 lines)
@@ -0,0 +1,33 @@
{ config, pkgs, ...}:
{
  config = {
    virtualisation.oci-containers = {
      containers = {
        homeassistant = {
          image = "ghcr.io/home-assistant/home-assistant:stable";
          ports = [
            "8123:8123"
          ];
          volumes = [
            "/var/lib/homeassistant/config:/config"
          ];
          extraOptions = [
            "--pull=newer"
            "--device" "/dev/serial/by-id/usb-dresden_elektronik_ingenieurtechnik_GmbH_ConBee_II_DE2471411-if00:/dev/ttyZIG:rwm"
          ];
        };
      };
    };
  };
}

# Original imperative invocation this module replaces:
# podman --runtime /usr/bin/crun run -d \
#   --security-opt label=disable \
#   --annotation run.oci.keep_original_groups=1 \
#   --restart=unless-stopped \
#   --name homeassistant \
#   --cap-add=CAP_NET_RAW,CAP_NET_BIND_SERVICE \
#   -p 8123:8123 \
#   -v /home/server/CONTAINERS/HASS/config:/config:Z \
#   --device /dev/ttyACM0:/dev/ttyZIG:rwm \
#   -e TZ=America/Toronto \
#   ghcr.io/home-assistant/home-assistant:stable
machines/hippocampus/oci/jelly.nix (new file, 233 lines)
@@ -0,0 +1,233 @@
{ config, pkgs, ...}:
{
  imports = [
    ../modules/pods.nix
  ];
  config = let
    baseEnv = {
      TZ = "America/Toronto";
      PUID = "1000";
      PGID = "1000";
    };
    dataDir = "/jelly/data";
    configDir = "/jelly/conf";
  in {
    virtualisation.oci-containers = let
      cnt = config.virtualisation.oci-containers.containers;
      getPorts = l: builtins.concatMap (c: cnt."${c}".ports) l;
    in {
      containers = {
        wireguard = {
          image = "linuxserver/wireguard:latest";
          volumes = [
            "${configDir}/wireguard:/config"
            "${configDir}/wireguard_pia:/opt"
          ];
          # Publish the ports of every container sharing this network namespace
          ports = getPorts [
            "deluge"
            "sonarr"
            "radarr"
            "jellyseerr"
            "bazarr"
            "readarr"
            "prowlarr"
          ];
          environment = {
            TZ = "America/Toronto";
            PIA_USER = "p5062257";
            PIA_PASS = "HEqwg9CvQB";
            AUTOCONNECT = "true";
            PIA_PF = "false";
            DISABLE_IPV6 = "yes";
            PIA_DNS = "true";
            VPN_PROTOCOL = "wireguard";
          };
          extraOptions = [
            "--cap-add=ALL"
            "--pull=newer"
            "--dns=1.1.1.1"
            "--sysctl=net.ipv4.conf.all.src_valid_mark=1"
            "--sysctl=net.ipv6.conf.lo.disable_ipv6=1"
            "--sysctl=net.ipv6.conf.all.disable_ipv6=1"
            "--sysctl=net.ipv6.conf.default.disable_ipv6=1"
          ];
        };
        deluge = {
          image = "linuxserver/deluge:latest";
          volumes = [
            "${dataDir}:/data"
            "${configDir}/deluge:/config"
          ];
          ports = [
            "8112:8112"
            "34325:34325"
            "34325:34325/udp"
            "51413:51413"
            "51413:51413/udp"
          ];
          environment = baseEnv;
          extraOptions = [
            "--pull=newer"
            "--network" "container:wireguard"
          ];
          dependsOn = [
            "wireguard"
          ];
        };

        jellyfin = {
          image = "jellyfin/jellyfin:latest";
          volumes = [
            "${dataDir}:/data"
            "${configDir}/jellyfin:/config"
          ];
          ports = [
            "8096:8096"
          ];
          environment = baseEnv // {
            JELLYFIN_PublishedServerUrl = "127.0.0.1";
            # NVIDIA_VISIBLE_DEVICES = "all";
          };
          extraOptions = [
            # "--runtime=nvidia"
            # "--gpus=all"
            "--pull=newer"
          ];
        };

        jellyseerr = {
          image = "fallenbagel/jellyseerr:latest";
          volumes = [
            "${dataDir}:/data"
            "${configDir}/jellyseerr:/app/config"
          ];
          ports = [
            "5055:5055"
          ];
          environment = baseEnv;
          extraOptions = [
            "--pull=newer"
            "--network" "container:wireguard"
          ];
          dependsOn = [
            "sonarr"
            "radarr"
          ];
        };

        radarr = {
          image = "linuxserver/radarr:latest";
          volumes = [
            "${dataDir}:/data"
            "${configDir}/radarr:/config"
          ];
          ports = [
            "7878:7878"
          ];
          environment = baseEnv;
          extraOptions = [
            "--pull=newer"
            "--network" "container:wireguard"
          ];
          dependsOn = [
            "prowlarr"
          ];
        };
        sonarr = {
          image = "linuxserver/sonarr:latest";
          volumes = [
            "${dataDir}:/data"
            "${configDir}/sonarr:/config"
          ];
          ports = [
            "8989:8989"
          ];
          environment = baseEnv;
          extraOptions = [
            "--pull=newer"
            "--network" "container:wireguard"
          ];
          dependsOn = [
            "prowlarr"
          ];
        };

        bazarr = {
          image = "linuxserver/bazarr:latest";
          volumes = [
            "${dataDir}:/data"
            "${configDir}/bazarr:/config"
          ];
          ports = [
            "6767:6767"
          ];
          environment = baseEnv;
          extraOptions = [
            "--pull=newer"
            "--network" "container:wireguard"
          ];
          dependsOn = [
            "prowlarr"
          ];
        };

        readarr = {
          image = "linuxserver/readarr:nightly";
          volumes = [
            "${dataDir}:/data"
            "${configDir}/readarr:/config"
          ];
          ports = [
            "8787:8787"
          ];
          environment = baseEnv;
          extraOptions = [
            "--pull=newer"
            "--network" "container:wireguard"
          ];
          dependsOn = [
            "prowlarr"
          ];
        };

        prowlarr = {
          image = "linuxserver/prowlarr:nightly";
          volumes = [
            "${configDir}/prowlarr:/config"
          ];
          ports = [
            "9696:9696"
          ];
          environment = baseEnv;
          extraOptions = [
            "--pull=newer"
            "--network" "container:wireguard"
          ];
          dependsOn = [
            "deluge"
          ];
        };
      };
    };
    # TODO: Submit PR for nvidia podman services
    # systemd.services.podman-jellyfin.path = [pkgs.nvidia-podman];
  };
}
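If GPU transcoding is brought back, the pieces commented out above (together with the commented `enableNvidia` in oci.nix) would combine roughly like this sketch, not verified against this setup:

```nix
virtualisation.podman.enableNvidia = true;
virtualisation.oci-containers.containers.jellyfin = {
  environment.NVIDIA_VISIBLE_DEVICES = "all";
  extraOptions = [ "--gpus=all" "--pull=newer" ];
};
```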
machines/hippocampus/oci/watchthingz.nix (new file, 37 lines)
@@ -0,0 +1,37 @@
{ config, pkgs, ...}:
{
  config = {
    # users = {
    #   users.neko = {
    #     isNormalUser = true;
    #     description = "Watchthingz Running User";
    #     group = "neko";
    #   };
    #   groups.neko = {};
    # };
    virtualisation.oci-containers = {
      containers = {
        neko = {
          image = "m1k1o/neko:vivaldi";
          ports = [
            "8080:8080"
            "52000-52100:52000-52100/udp"
          ];
          environment = {
            NEKO_SCREEN = "1920x1080@30";
            NEKO_PASSWORD = "GBGJ";
            NEKO_PASSWORD_ADMIN = "davey";
            NEKO_EPR = "52000-52100";
            NEKO_ICELITE = "1";
          };
          extraOptions = [
            "--pull=newer"
          ];
          # user = "neko";
        };
      };
    };
    # Rootless podman broken by: github.com/nixos/nixpkgs/issues/207050
    # systemd.services.podman-neko.serviceConfig.User = "neko";
  };
}
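Once the linked nixpkgs issue is resolved, the rootless setup sketched by the commented-out lines above would read approximately:

```nix
users.users.neko = {
  isNormalUser = true;
  description = "Watchthingz Running User";
  group = "neko";
};
users.groups.neko = {};
systemd.services.podman-neko.serviceConfig.User = "neko";
```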
machines/hippocampus/secrets.nix (new file, 8 lines)
@@ -0,0 +1,8 @@
{config, pkgs, ...}: {
  sops = {
    age.keyFile = "/root/.config/sops/age/keys.txt";
    defaultSopsFile = "/etc/nixos/machines/hippocampus/secrets/pass.yaml";
    validateSopsFiles = false;
  };
}
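A secret from `pass.yaml` is then declared and consumed per-key; this is the pattern `servers/private/miniio.nix` in this config uses (the key name must match the YAML):

```nix
sops.secrets.minioRoot = {
  owner = "root";
  mode = "0444";
};
# The decrypted file's path is then available as:
#   config.sops.secrets.minioRoot.path
```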
machines/hippocampus/secrets/pass.yaml (new file, 25 lines)
@@ -0,0 +1,25 @@
nextcloud:
    adminPass: ENC[AES256_GCM,data:D2SAD/Somvw8abIm0KX4fWRfuQ==,iv:Y7K14yZZFcu97KVBd0219hwnGY4LEX2DNxxulSegr/8=,tag:aRJAlz1xvQxWodcE2bZLdQ==,type:str]
    s3secret: ENC[AES256_GCM,data:lIVuiZMh376MSuu13UPCu49Q64bVbk+WM/CUEIGzV0Q=,iv:J2vHalppWEupWK07zXsMoiH6avmpsgg0Cqcc7EkZVV4=,tag:pxKwiaH5SZa8Vh71gLGQWw==,type:str]
vaultenv: ENC[AES256_GCM,data:oTMhUU23v0SFImzDNjfqo3wn26ghqHGfArQl+K9E3u3YI9qmwdN/Z0dpLvT7TI01cdEIwM8ToKAd2HueymTHMT0wXMNAWMFVNm5lUot6U9kV+Pwfq3W+c8MygqXL/QVeFCzUsEa4ZvAE647+2JIkcI95H8mIWfenL0wA5O+OLiEz1fFykMbGBvWm7GM5oFWU9RXo0d5BAIaqd7D5oL3tgi2EnrtnVMJ8USgYA+d3TNCEatHO8ARwtCRhC8FK+86RowBlwiylIySuJiMScvzstB8TWVps4wo7xK0lZ8PUicFI2q+N+Q7B3x1hUW0Z2f4pmxAwb8qRxXZtA7B99bBjAwSwh1A301LYMAKJqELNiNOZ9xjl5r12fAOqP3ujJ84eacNVmsKFpA5HxIfUQBlkoHYRXfkd+Z8wz9fhzr53PvWHblr4eS+jCpJzSP98uyou4FYfMXoYOT9kzNNHGsWAoxLxQusehIaHyicG6uVE53wEQw/r9xeJeg==,iv:anKhX3TVyEeatnB/qjlce3g7cifrX8QlBJ/9UzWUa8k=,tag:BDccovkJBW8q0URMLBxbcQ==,type:str]
minioRoot: ENC[AES256_GCM,data:z6+VkyRjWRSh8pu5gO58RRyGXT+Lvl+AVr37A5nXh6aj+q6SevNL7wLf9Joao4xmjXexKVavOhs/9OSBJpmbq0R+MRI=,iv:vrow7hvrTacnMi7sFnsuXwMOHrvr6c8YUTYFUry4E4U=,tag:fWfiEvkuSiXHIFqWnLiMiQ==,type:str]
sops:
    kms: []
    gcp_kms: []
    azure_kv: []
    hc_vault: []
    age:
        - recipient: age1crymppz88etsdjpckmtdhr397x5xg5wv8jt6tcj23gt2snq73pzs04fuve
          enc: |
            -----BEGIN AGE ENCRYPTED FILE-----
            YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBzejFzRTd4UlRGY3JLd0ZZ
            MFBKcDROdnBIcFY1a3lKZUNYazd4MDkwRzFnCm9JTE10MXdmRUFrQ0tzcE1ERDJX
            S3hmbzhFRTN6WVZ6VzdnQjB3Z05NS0EKLS0tIDM5WDdZdFU4SlJ2QnhTMmtTYW5l
            RVUzMlFya3Z0amdTUTJ5YjFRck5kZzQKoWZzExqzPRpQPL4CdqBalc1/dYtjBH6J
            LGR0oImfOWlIJwcaJLv/fc470UvXHHwIji9v/pbV7xMkgMjlJthaYg==
            -----END AGE ENCRYPTED FILE-----
    lastmodified: "2023-08-14T23:26:14Z"
    mac: ENC[AES256_GCM,data:H8FtTQvdV7riqejKTqWa2IBsuc9RbGljAyqpeDRYqixYk5OCbg43DNLNaigbpX/nI4uMke8dCTWTqHVA/n0gYfdFYLIEdDCL2EnIT2tVDUeSRk/hf9CwgMD8EoEoVdE1XbT5cAQvS2X7nENELfasfGu37xVs2YZeflsVRjzyEkU=,iv:ApyHUl5rGHHS9rf2mMGICzdh1d1KDplsy2vwsSoQDpw=,tag:+Zh32Ui21mHiNYy1rQ8+Gw==,type:str]
    pgp: []
    unencrypted_suffix: _unencrypted
    version: 3.7.3
machines/hippocampus/servers.nix (new file, 6 lines)
@@ -0,0 +1,6 @@
{
  imports = [
    ./servers/public.nix
    ./servers/private.nix
  ];
}
machines/hippocampus/servers/jelly-mount.nix (new file, 41 lines)
@@ -0,0 +1,41 @@
{ pkgs, lib, config, ... }:
let
  s3fs = { mount, bucket }: {
    age.secrets.jellyMount = {
      file = /etc/nixos/secrets/jellyMountPass.age;
      owner = "root";
      group = "root";
      mode = "0600";
    };

    systemd.services."s3fs-${bucket}" = {
      description = "Jellyfin Bucket Storage";
      wantedBy = [ "multi-user.target" ];

      serviceConfig = {
        ExecStartPre = [
          "${pkgs.coreutils}/bin/mkdir -m 0500 -pv ${mount}"
          "${pkgs.e2fsprogs}/bin/chattr +i ${mount}" # Stop files being accidentally written to the unmounted directory
        ];
        ExecStart = let
          options = [
            "passwd_file=${config.age.secrets.jellyMount.path}"
            "use_path_request_style"
            "allow_other"
            "url=http://localhost:7500"
            "umask=0077"
          ];
        in
          "${pkgs.s3fs}/bin/s3fs ${bucket} ${mount} -f "
          + lib.concatMapStringsSep " " (opt: "-o ${opt}") options;
        ExecStopPost = "-${pkgs.fuse}/bin/fusermount -u ${mount}";
        KillMode = "process";
        Restart = "on-failure";
      };
    };
  };
in
s3fs {
  mount = "/jelly";
  bucket = "jellyfin";
}
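Mounting a second bucket would mean merging two calls to the helper, roughly as sketched below (the `calibre` bucket is hypothetical; note both calls define the same `age.secrets.jellyMount` entry, so the secret declaration would need to be hoisted out first):

```nix
lib.mkMerge [
  (s3fs { mount = "/jelly"; bucket = "jellyfin"; })
  (s3fs { mount = "/books"; bucket = "calibre"; })  # hypothetical second bucket
]
```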
machines/hippocampus/servers/private.nix (new file, 27 lines)
@@ -0,0 +1,27 @@
{
  imports = [
    # Local Network DNS
    ./private/unbound.nix

    # System Stats and Monitoring
    ./private/cockpit.nix

    # Track Stats of system
    ./private/prometheus.nix

    # Pretty Visuals
    # ./private/grafana.nix

    # Home Monitoring and Control
    ./private/homeassistant.nix

    # Minio S3 Object Storage
    ./private/miniio.nix

    # OctoPrint
    ./private/octoprint.nix

    # Samba Share
    ./private/samba.nix
  ];
}
machines/hippocampus/servers/private/cockpit.nix (new file, 25 lines)
@@ -0,0 +1,25 @@
{ pkgs
, config
, ...}:

{
  services.cockpit = {
    enable = true;
    port = 9090;
  };

  # TODO: Performance Metrics:
  # https://github.com/performancecopilot/pcp

  # environment.systemPackages = let
  #   cockpit-machines = pkgs.stdenv.mkDerivation {
  #     pname = "cockpit-machines";
  #     version = "283";
  #     src = pkgs.fetchFromGitHub
  #   };
  # in [
  #   cockpit-machines
  #   cockpit-containers
  # ];
}
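A fuller sketch of the commented-out packaging idea (owner/repo point at the upstream cockpit project; the hash is a placeholder and a real derivation would still need build/install phases):

```nix
environment.systemPackages = [
  (pkgs.stdenv.mkDerivation {
    pname = "cockpit-machines";
    version = "283";
    src = pkgs.fetchFromGitHub {
      owner = "cockpit-project";
      repo = "cockpit-machines";
      rev = "283";
      sha256 = pkgs.lib.fakeSha256;  # placeholder, replace with the real hash
    };
  })
];
```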
machines/hippocampus/servers/private/grafana.nix (new file, 10 lines)
@@ -0,0 +1,10 @@
{config, pkgs, ...}:

{
  services.grafana = {
    enable = true;

    settings.server = {
      http_addr = "0.0.0.0";
      http_port = 9998;
    };
  };
}
machines/hippocampus/servers/private/homeassistant.nix (new file, 24 lines)
@@ -0,0 +1,24 @@
{pkgs, config, ...}:

{
  imports = [
    ../../oci/homeassistant.nix
  ];
  services.unbound.settings.server = let
    RECORD = ".assistant. IN A 192.168.1.20";
  in {
    local-zone = [
      "assistant. static"
    ];
    local-data = [
      "'home${RECORD}'"
    ];
  };
  services.caddy.virtualHosts = {
    "http://home.assistant" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:8123
      '';
    };
  };
}
machines/hippocampus/servers/private/jellyfin.nix (new file, 43 lines)
@@ -0,0 +1,43 @@
{
  services.unbound.settings.server = let
    RECORD = ".tv. IN A 192.168.1.20";
  in {
    local-zone = [
      "tv. transparent"
    ];
    local-data = [
      "'radarr${RECORD}'"
      "'sonarr${RECORD}'"
      "'prowlarr${RECORD}'"
      "'deluge${RECORD}'"
      "'bazarr${RECORD}'"
    ];
  };
  services.caddy.virtualHosts = {
    "http://radarr.tv" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:7878
      '';
    };
    "http://sonarr.tv" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:8989
      '';
    };
    "http://prowlarr.tv" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:9696
      '';
    };
    "http://deluge.tv" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:8112
      '';
    };
    "http://bazarr.tv" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:6767
      '';
    };
  };
}
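The repeated virtualHost blocks above all follow one pattern and could be generated instead; a sketch, assuming the module took a `{ lib, ... }:` argument (the ports are the ones used above):

```nix
services.caddy.virtualHosts = lib.mapAttrs'
  (name: port: lib.nameValuePair "http://${name}.tv" {
    extraConfig = "reverse_proxy 127.0.0.1:${toString port}";
  })
  { radarr = 7878; sonarr = 8989; prowlarr = 9696; deluge = 8112; bazarr = 6767; };
```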
machines/hippocampus/servers/private/miniio.nix  (new file, 88 lines)
@@ -0,0 +1,88 @@
{ pkgs, config, lib, ... }: let
  mkLocalMinio = {
    path, n
  }: {
    autoStart = true;

    privateNetwork = true;
    hostBridge = "br0";
    localAddress = "10.0.0.${toString (10 + n)}/24";

    # If true it registers a new node every time;
    # need to find where it stores the state
    ephemeral = false;

    bindMounts = {
      "/mnt/disk1/minio" = {
        hostPath = path;
        isReadOnly = false;
      };
      "/rootCreds" = {
        hostPath = config.sops.secrets.minioRoot.path;
        isReadOnly = true;
      };
    };

    config = { pkgs, config, ... }: {
      system.stateVersion = "22.11";

      networking.defaultGateway = "10.0.0.1";

      networking.firewall = {
        allowedTCPPorts = [
          9000
          7501
        ];
      };

      environment.systemPackages = with pkgs; [
        minio
        minio-client
      ];

      services.minio = {
        enable = true;
        listenAddress = ":9000";
        consoleAddress = ":7501";

        dataDir = [
        ];

        rootCredentialsFile = "/rootCreds";
      };
      systemd.services.minio.after = [ "tailscale_autologin.service" ];
      systemd.services.minio.preStart = ''
        sleep 2s
      '';
      systemd.services.minio.environment = {
        MINIO_VOLUMES = "/mnt/disk1/minio";
        # Expandable later, but each pool must have more than 1 disk.
        # https://github.com/minio/minio/issues/16711
        MINIO_SERVER_URL = "http://minio1.minio1.tailnet:9000";
        MINIO_PROMETHEUS_URL = "http://100.64.0.5:9999";
        MINIO_PROMETHEUS_JOB_ID = "minio-job";
      };
    };
  };
in {
  imports = [
    ../../modules/containerHeadscale.nix
  ];

  sops.secrets.minioRoot = {
    owner = "root";
    mode = "0444";
  };

  containers = {
    minio1 = mkLocalMinio {
      path = "/mass/minio";
      n = 1;
    };
  };
  services.headscale.containers = {
    minio1 = {
    };
  };
}
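The `mkLocalMinio` helper is written so that further storage pools can be added later (the inline comment notes each pool must have more than one disk). A hypothetical second container, assuming a second data directory at `/mass2/minio` (a made-up path, not one from this repo), would only need another call:

```nix
# Sketch only: "/mass2/minio" is a hypothetical path.
# Each n maps to localAddress 10.0.0.(10 + n)/24 on br0.
containers = {
  minio1 = mkLocalMinio { path = "/mass/minio";  n = 1; };  # 10.0.0.11
  minio2 = mkLocalMinio { path = "/mass2/minio"; n = 2; };  # 10.0.0.12
};
```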
machines/hippocampus/servers/private/octoprint.nix  (new file, 10 lines)
@@ -0,0 +1,10 @@
{ pkgs
, config
, ... }:

{
  services.octoprint = {
    enable = true;
    port = 7550;
  };
}
machines/hippocampus/servers/private/prometheus.nix  (new file, 24 lines)
@@ -0,0 +1,24 @@
{ config, pkgs, ... }:

{
  services.prometheus = {
    enable = true;
    port = 9999;
    scrapeConfigs = [
      {
        job_name = "minio-job";
        metrics_path = "/minio/v2/metrics/cluster";
        scheme = "http";
        # Turn into secret with bearer_token_file
        bearer_token = "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoiaGlwcG9jYW1wdXMiLCJleHAiOjQ4MzA5ODA0MjB9.C-Y5lCDcpcHPWu87CXcqFdQF3nZ55neNVL-QVhf2NxGaqGQ1GL5AW7svbFZVjLJy1yMzgNn7wlAXB23d7q0GYA";
        static_configs = [
          {
            targets = [
              "100.64.0.4:9000"
            ];
          }
        ];
      }
    ];
  };
}
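The inline comment flags moving the bearer token out of the Nix store. A minimal sketch of that, assuming a sops-nix secret named `minioBearer` (a hypothetical name) and the `bearer_token_file` field of the Prometheus scrape-config options:

```nix
# Sketch, assuming sops-nix manages a secret called "minioBearer" (hypothetical name).
sops.secrets.minioBearer = {
  owner = "prometheus";
  mode = "0400";
};
services.prometheus.scrapeConfigs = [
  {
    job_name = "minio-job";
    metrics_path = "/minio/v2/metrics/cluster";
    scheme = "http";
    # Read the token from a file at runtime instead of embedding it in the config.
    bearer_token_file = config.sops.secrets.minioBearer.path;
    static_configs = [ { targets = [ "100.64.0.4:9000" ]; } ];
  }
];
```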
machines/hippocampus/servers/private/samba.nix  (new file, 28 lines)
@@ -0,0 +1,28 @@
{ config, lib, pkgs, ... }: {
  services.samba-wsdd.enable = true;
  services.samba = {
    enable = true;
    securityType = "user";
    extraConfig = ''
      workgroup = WORKGROUP
      server string = smbnix
      netbios name = smbnix
      security = user
      #use sendfile = yes
      #max protocol = smb2
      # note: localhost is the ipv6 localhost ::1
      hosts allow = 192.168.0. 127.0.0.1 localhost
      hosts deny = 0.0.0.0/0
      guest account = nobody
      map to guest = bad user
    '';
    shares = {
      public = {
        path = "/mass/jelly/media";
        browseable = "yes";
        "read only" = "yes";
        "guest ok" = "yes";
      };
    };
  };
}
machines/hippocampus/servers/private/unbound.nix  (new file, 31 lines)
@@ -0,0 +1,31 @@
{ config, pkgs, lib, ... }:

{
  services.unbound = {
    enable = false;

    settings = {
      server = {
        interface = [
          "0.0.0.0" "::"
        ];
        private-address = "192.168.1.0/24";
        access-control = [
          "127.0.0.0/8 allow"
          "192.168.1.0/24 allow"
        ];
      };
      forward-zone = [
        {
          name = ".";
          forward-addr = [
            "1.1.1.1"
            "1.0.0.1"
            "8.8.8.8"
            "8.8.4.4"
          ];
        }
      ];
    };
  };
}
machines/hippocampus/servers/public.nix  (new file, 42 lines)
@@ -0,0 +1,42 @@
{
  imports = [
    # Reverse Proxy
    ./public/caddy.nix

    # Entrance to Control Plane of Private Network
    ./public/headscale.nix

    # Location tracking of my Dad in Saskatchewan
    ./public/hauk.nix

    # Self Hosted Git Server
    ./public/gitea.nix

    # Hydra Build Server
    ./public/hydra.nix

    # Self Hosted Netflix
    ./public/jellyfin.nix

    # Audio Books
    ./public/audiobookshelf.nix

    # Static Website
    ./public/syzygial.nix

    # Self Hosted Cloud Storage & Services
    ./public/nextcloud.nix

    # Rabb.it at home
    ./public/watchthingz.nix

    # Pterodactyl Game Server
    ./public/pterodactyl.nix

    # Vaultwarden
    ./public/vaultwarden.nix

    # Anki Sync Server
    ./public/anki.nix
  ];
}
machines/hippocampus/servers/public/anki.nix  (new file, 25 lines)
@@ -0,0 +1,25 @@
{ config, pkgs, ... }:

{
  systemd.services.ankisync = {
    enable = false;
    wantedBy = [ "network-online.target" ];
    script = ''
      ${pkgs.anki-bin}/bin/anki --syncserver
    '';
    serviceConfig = {
      Type = "simple";
      DynamicUser = true;
      PrivateTmp = true;
      StateDirectory = "foo";
      StateDirectoryMode = "0750";
    };
  };
  services.caddy.virtualHosts = {
    "anki.syzygial.cc" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:4000
      '';
    };
  };
}
machines/hippocampus/servers/public/audiobookshelf.nix  (new file, 41 lines)
@@ -0,0 +1,41 @@
{ config, pkgs, ... }: let
  stateDir = "/var/lib/audiobookshelf";
in {
  users.users.audiobookshelf = {
    group = config.users.groups.audiobookshelf.name;
    isSystemUser = true;
  };
  users.groups.audiobookshelf = { };

  systemd.services.audiobookshelf = {
    after = [ "network.target" ];
    environment = {
    };
    path = with pkgs; [
      util-linux
    ];
    serviceConfig = {
      # systemd directive names are capitalized: User/Group, not user/group.
      User = config.users.users.audiobookshelf.name;
      Group = config.users.groups.audiobookshelf.name;
      ExecStart = "${pkgs.audiobookshelf}/bin/audiobookshelf --port ${toString 7991}";
      WorkingDirectory = "${stateDir}";
      PrivateTmp = "true";
      PrivateDevices = "true";
      ProtectHome = "true";
      ProtectSystem = "strict";
      AmbientCapabilities = "CAP_NET_BIND_SERVICE";
      StateDirectory = "audiobookshelf";
      StateDirectoryMode = "0700";
      Restart = "always";
    };
    wantedBy = [ "multi-user.target" ];
  };
  services.caddy.virtualHosts = {
    "books.syzygial.cc" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:${toString 7991}
      '';
    };
  };
}
machines/hippocampus/servers/public/caddy.nix  (new file, 16 lines)
@@ -0,0 +1,16 @@
{ config, pkgs, ... }:

{
  services.caddy = {
    enable = true;
    # acmeCA = "https://acme-staging-v02.api.letsencrypt.org/directory";
    email = "davidcrompton1192@gmail.com";
  };
  services.caddy.virtualHosts = {
    "star.zlinger.syzygial.cc" = {
      extraConfig = ''
        reverse_proxy 3.145.117.46:4000
      '';
    };
  };
}
machines/hippocampus/servers/public/gitea.nix  (new file, 41 lines)
@@ -0,0 +1,41 @@
{ pkgs, config, ... }: let
  davesDomain = "syzygial.cc";
in {
  services.gitea = {
    enable = true;
    database = {
      type = "postgres";
      socket = "/run/postgresql";
    };
    settings = {
      server = {
        HTTP_PORT = 5000;
        ROOT_URL = "https://git.${davesDomain}";
      };
      actions = {
        ENABLED = true;
      };
    };
  };

  services.postgresql = {
    enable = true;
    port = 5432;
    ensureUsers = [{
      name = "gitea";
      ensurePermissions = {
        "DATABASE \"gitea\"" = "ALL PRIVILEGES";
      };
      ensureClauses = {
        createdb = true;
      };
    }];
  };
  services.caddy.virtualHosts = {
    "git.${davesDomain}" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:${toString config.services.gitea.settings.server.HTTP_PORT}
      '';
    };
  };
}
machines/hippocampus/servers/public/hauk.nix  (new file, 14 lines)
@@ -0,0 +1,14 @@
{ pkgs, config, ... }:

{
  imports = [
    ../../oci/hauk.nix
  ];
  services.caddy.virtualHosts = {
    "crompton.cc" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:7888
      '';
    };
  };
}
machines/hippocampus/servers/public/headscale.nix  (new file, 57 lines)
@@ -0,0 +1,57 @@
{ config, pkgs, ... }: {
  imports = [
    ../../modules/headscale.nix
  ];
  services.headscale = {
    enable = true;
    # 7000 port addresses are for internal network
    port = 7000;
    settings = {
      server_url = "https://headscale.syzygial.cc";
      # TODO: Generate keys??

      # Postgres seems to be broken
      # db_type = "postgres";
      # db_host = "/var/run/postgresql";
      # db_name = "headscale";
      # db_user = "headscale";

      # Tailscale IP Base:
      ip_prefixes = [
        "100.64.0.0/10"
      ];

      # Give a name to each device
      dns_config = {
        base_domain = "tailnet";
        magic_dns = true;
      };
    };
  };
  # Temporary until systemd units are made
  # TODO: Create automatic systemd units for provisioning auth keys
  environment.systemPackages = with pkgs; [
    headscale
  ];
  services.caddy.virtualHosts = {
    "headscale.syzygial.cc" = {
      extraConfig = ''
        reverse_proxy localhost:7000
      '';
    };
  };
  # services.postgresql = {
  #   enable = true;
  #   port = 5432;
  #   ensureDatabases = [
  #     "headscale"
  #   ];
  #   ensureUsers = [{
  #     name = "headscale";
  #     ensurePermissions = {
  #       "DATABASE \"headscale\"" = "ALL PRIVILEGES";
  #     };
  #   }];
  # };
}
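The TODO above asks for systemd units that provision auth keys automatically. A minimal sketch of a oneshot unit, assuming a headscale user named `server` already exists (recent headscale CLIs take `--user`; older releases used `--namespace`) and using an illustrative output path:

```nix
# Sketch only: unit name and key path are illustrative, not from the repo.
systemd.services.headscale-preauth-server = {
  after = [ "headscale.service" ];
  wantedBy = [ "multi-user.target" ];
  serviceConfig.Type = "oneshot";
  script = ''
    # Write a reusable pre-auth key where tailscale_autologin can read it.
    ${pkgs.headscale}/bin/headscale preauthkeys create \
      --user server --reusable --expiration 24h \
      > /run/headscale/server.authkey
  '';
};
```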
machines/hippocampus/servers/public/hydra.nix  (new file, 89 lines)
@@ -0,0 +1,89 @@
{ config, pkgs, ... }: let
  deploy-container = pkgs.writeScriptBin "deploy-nixos-container" ''
    pushd "$2"
    nixos-container update "$1" --flake "$2#$3"
    git reset --hard HEAD
    git clean -fdx
    git reflog expire --expire=now --all
    git repack -ad # Remove dangling objects from packfiles
    git prune      # Remove dangling loose objects
    popd
  '';
in {
  imports = [
    ./nix-serve.nix
  ];
  services.hydra = {
    enable = true;
    hydraURL = "https://hydra.syzygial.cc";
    port = 3500;
    notificationSender = "hydra@localhost";
    buildMachinesFiles = [];
    useSubstitutes = true;
    extraConfig = ''
      <dynamicruncommand>
        enable = 1
      </dynamicruncommand>
    '';
  };
  systemd.services.hydra = {
    serviceConfig = {
      RestartSec = "20s";
    };
  };
  users.users."hydra" = {
    openssh.authorizedKeys.keys = [
    ];
    packages = [
    ];
  };
  # Deployment User
  users.users.hydra-deploy = {
    isNormalUser = true;
    home = "/var/lib/hydra/deploy";
    description = "Hydra Deployment User";
    extraGroups = [ "hydra" ];
    packages = [
      deploy-container
    ];
  };
  # TODO: Configure authorizedKeys between
  # hydra-queue-runner and hydra-deploy
  security.sudo.extraRules = [
    {
      users = [ "hydra-deploy" ];
      commands = [
        {
          command = "${deploy-container}/bin/deploy-nixos-container *";
          options = [ "NOPASSWD" ];
        }
      ];
    }
  ];
  networking.nat = {
    enable = true;
    internalInterfaces = [
      "ve-newalan"
      "ve-handyhelper"
    ];
    externalInterface = "enp0s25";
    enableIPv6 = true;
  };

  nix.buildMachines = [
    { hostName = "localhost";
      system = "x86_64-linux";
      supportedFeatures = [ "kvm" "nixos-test" "big-parallel" "benchmark" ];
      maxJobs = 8;
    }
  ];
  services.caddy.virtualHosts = {
    "hydra.syzygial.cc" = {
      extraConfig = ''
        reverse_proxy localhost:${toString config.services.hydra.port}
      '';
    };
  };
}
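The TODO about wiring authorizedKeys between hydra-queue-runner and hydra-deploy could be sketched as below; the key string is a placeholder, not a real key from this repo:

```nix
# Sketch: let the queue runner SSH in as the deployment user, which can then
# sudo the deploy-nixos-container wrapper via the NOPASSWD rule above.
# "AAAA...placeholder" stands in for the queue runner's actual public key.
users.users.hydra-deploy.openssh.authorizedKeys.keys = [
  "ssh-ed25519 AAAA...placeholder hydra-queue-runner@hippocampus"
];
```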
machines/hippocampus/servers/public/jellyfin.nix  (new file, 30 lines)
@@ -0,0 +1,30 @@
{ pkgs, config, ... }:

{
  imports = [
    # ./jelly-mount.nix
    # Server component is container based
    ../../oci/jelly.nix

    # Load local network DNS resolution
    ../private/jellyfin.nix
  ];
  services.caddy.virtualHosts = {
    "jelly.syzygial.cc" = {
      serverAliases = [
        "jelly.crompton.cc"
      ];
      extraConfig = ''
        reverse_proxy 127.0.0.1:8096
      '';
    };
    "add.jelly.crompton.cc" = {
      serverAliases = [
        # "add.jelly.syzygial.cc"
      ];
      extraConfig = ''
        reverse_proxy 127.0.0.1:5055
      '';
    };
  };
}
machines/hippocampus/servers/public/nextcloud.nix  (new file, 114 lines)
@@ -0,0 +1,114 @@
{ pkgs, config, ... }: let
  nxperm = {
    owner = "nextcloud";
    group = "nextcloud";
    mode = "0440";
  };
in {
  imports = [
    ./nextcloud/collobara.nix
  ];
  sops.secrets."nextcloud/adminPass" = nxperm;
  sops.secrets."nextcloud/s3secret" = nxperm;

  services.nextcloud = {
    enable = true;
    package = pkgs.nextcloud27;
    hostName = "localhost";

    config = {
      adminuser = "CromptonAdmin";
      adminpassFile = config.sops.secrets."nextcloud/adminPass".path;

      extraTrustedDomains = [
        "cloud.crompton.cc"
        "nextcloud.syzygial.cc"
      ];

      trustedProxies = [
        "cloud.crompton.cc"
        "nextcloud.syzygial.cc"
      ];

      dbtype = "pgsql";
      dbname = "nextcloud";
      dbuser = "nextcloud";

      dbhost = "/run/postgresql";
      overwriteProtocol = "https";

      objectstore.s3 = {
        enable = true;
        bucket = "nextcloud";
        autocreate = false;
        key = "nextcloud";
        secretFile = config.sops.secrets."nextcloud/s3secret".path;
        region = "us-east-1";
        hostname = "100.64.0.4";
        port = 9000;
        useSsl = false;
        usePathStyle = true;
      };
    };
  };

  # systemd.services.nextcloud-setup = {
  #   requires = [ "postgresql.service" ];
  #   after = [ "postgresql.service" ];
  #   path = config.users.users.nextcloud.packages;
  #   script = ''
  #     if [[ ! -e /var/lib/nextcloud/store-apps/recognize/node_modules/@tensorflow/tfjs-node/lib/napi-v8/tfjs_binding.node ]]; then
  #       if [[ -d /var/lib/nextcloud/store-apps/recognize/node_modules/ ]]; then
  #         cd /var/lib/nextcloud/store-apps/recognize/node_modules/
  #         npm rebuild @tensorflow/tfjs-node --build-addon-from-source
  #       fi
  #     fi
  #   '';
  # };

  systemd.services.phpfpm-nextcloud = {
    path = config.users.users.nextcloud.packages;
  };

  users.users.nextcloud = {
    shell = pkgs.bashInteractive;
    packages = with pkgs; [
      # generate video thumbnails with preview generator
      ffmpeg_5-headless
      # required for recognize app
      nodejs-14_x # runtime and installation requirement
      nodejs-14_x.pkgs.node-pre-gyp # installation requirement
      util-linux # runtime requirement for taskset
    ];
  };

  services.nginx.virtualHosts."localhost".listen = [ { addr = "127.0.0.1"; port = 8000; } ];

  services.caddy.virtualHosts = {
    "cloud.crompton.cc" = {
      serverAliases = [
        "nextcloud.syzygial.cc"
      ];
      extraConfig = ''
        reverse_proxy 127.0.0.1:8000
      '';
    };
  };

  services.postgresql = {
    enable = true;
    port = 5432;
    ensureDatabases = [
      "nextcloud"
    ];
    ensureUsers = [{
      name = "nextcloud";
      ensurePermissions = {
        "DATABASE \"nextcloud\"" = "ALL PRIVILEGES";
      };
      ensureClauses = {
        createdb = true;
      };
    }];
  };
}
@@ -0,0 +1,5 @@
{ config, pkgs, ... }:

{

}
machines/hippocampus/servers/public/nextcloud/onlyoffice.nix  (new file, 50 lines)
@@ -0,0 +1,50 @@
{ config, pkgs, ... }:

{
  services.onlyoffice = {
    enable = true;
    port = 7001;

    hostname = "only.office";

    postgresHost = "/run/postgresql";
    postgresName = "onlyoffice";
    postgresUser = "onlyoffice";
  };

  services.nginx.virtualHosts."${config.services.onlyoffice.hostname}".listen = [ { addr = "127.0.0.1"; port = 7002; } ];

  services.unbound.settings.server = let
    RECORD = ".office. IN A 192.168.1.20";
  in {
    local-zone = [
      "office. transparent"
    ];
    local-data = [
      "'only${RECORD}'"
    ];
  };

  services.caddy.virtualHosts = {
    "https://only.office" = {
      extraConfig = ''
        tls internal
        reverse_proxy 127.0.0.1:7001
      '';
    };
  };

  services.postgresql = {
    enable = true;
    port = 5432;
    ensureDatabases = [
      "onlyoffice"
    ];
    ensureUsers = [{
      name = "onlyoffice";
      ensurePermissions = {
        "DATABASE \"onlyoffice\"" = "ALL PRIVILEGES";
      };
    }];
  };
}
machines/hippocampus/servers/public/nix-serve.nix  (new file, 16 lines)
@@ -0,0 +1,16 @@
{ config, pkgs, ... }:

{
  services.nix-serve = {
    enable = true;
    port = 5050;
    secretKeyFile = "/etc/nixos/secrets/cache-priv-key.pem";
  };
  services.caddy.virtualHosts = {
    "nixcache.syzygial.cc" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:${toString config.services.nix-serve.port}
      '';
    };
  };
}
machines/hippocampus/servers/public/pterodactyl.nix  (new file, 75 lines)
@@ -0,0 +1,75 @@
{ config, pkgs, ... }:

{
  systemd.targets.machines.enable = true;
  systemd.services."pterodactyl-container" = {
    enable = true;
    wantedBy = [ "machines.target" ];
    environment = {
      # SYSTEMD_NSPAWN_USE_CGNS = "0";
    };
    script = ''
      exec ${config.systemd.package}/bin/systemd-nspawn --hostname pterodactyl \
        --resolv-conf=off --system-call-filter="add_key keyctl bpf" --bind /dev/fuse \
        -nbD /var/lib/machines/pterodactyl --machine pterodactyl
    '';
    postStart = ''
      ${pkgs.iproute2}/bin/ip link set ve-pterodactyl up || true
      ${pkgs.iproute2}/bin/ip addr add 10.1.0.0 dev ve-pterodactyl || true
      ${pkgs.iproute2}/bin/ip route add 10.1.0.1 dev ve-pterodactyl || true
    '';
    serviceConfig = {
      Type = "notify";
      Slice = "machine.slice";
      Delegate = true;
      DeviceAllow = "/dev/fuse rwm";
    };
  };
  networking.nat = {
    enable = true;
    # Check for hostBridge use vb instead of ve
    internalInterfaces = [ "ve-pterodactyl" ];
    externalInterface = "enp0s25";
    enableIPv6 = true;
    forwardPorts = [
      { sourcePort = "25565:28000";
        destination = "10.1.0.1:25565-25600";
        proto = "tcp";
      }
      { sourcePort = "25565:28000";
        destination = "10.1.0.1:25565-25600";
        proto = "udp";
      }
      { sourcePort = 2022;
        destination = "10.1.0.1:2022";
        proto = "tcp";
      }
      { sourcePort = 2022;
        destination = "10.1.0.1:2022";
        proto = "udp";
      }
    ];
  };
  services.caddy.virtualHosts = {
    "games.syzygial.cc:443" = {
      extraConfig = ''
        reverse_proxy 10.1.0.1:80
      '';
    };
    "games.syzygial.cc:9000" = {
      extraConfig = ''
        reverse_proxy 10.1.0.1:9000
      '';
    };
    "pnode.syzygial.cc:443" = {
      extraConfig = ''
        reverse_proxy 10.1.0.1:9000
      '';
    };
    "pnode.syzygial.cc:9000" = {
      extraConfig = ''
        reverse_proxy 10.1.0.1:9000
      '';
    };
  };
}
machines/hippocampus/servers/public/syzygial.nix  (new file, 14 lines)
@@ -0,0 +1,14 @@
{ config, pkgs, ... }:

{
  services.caddy.virtualHosts = {
    "syzygial.cc" = {
      extraConfig = ''
        file_server {
          root /srv/www/syzygial
          browse
        }
      '';
    };
  };
}
machines/hippocampus/servers/public/vaultwarden.nix  (new file, 37 lines)
@@ -0,0 +1,37 @@
{ config, pkgs, ... }:

{
  sops.secrets.vaultenv = {
    owner = config.systemd.services.vaultwarden.serviceConfig.User;
  };
  services.vaultwarden = {
    enable = true;
    dbBackend = "postgresql";
    environmentFile = config.sops.secrets.vaultenv.path;
    config = {
      DOMAIN = "https://vault.crompton.cc";
      ROCKET_ADDRESS = "127.0.0.1";
      ROCKET_PORT = 8222;
    };
  };
  services.postgresql = {
    enable = true;
    port = 5432;
    ensureDatabases = [
      "vaultwarden"
    ];
    ensureUsers = [{
      name = "vaultwarden";
      ensurePermissions = {
        "DATABASE \"vaultwarden\"" = "ALL PRIVILEGES";
      };
    }];
  };
  services.caddy.virtualHosts = {
    "vault.crompton.cc" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:${toString config.services.vaultwarden.config.ROCKET_PORT}
      '';
    };
  };
}
machines/hippocampus/servers/public/watchthingz.nix  (new file, 19 lines)
@@ -0,0 +1,19 @@
{ pkgs, config, ... }:

{
  imports = [
    ../../oci/watchthingz.nix
  ];
  services.caddy.virtualHosts = {
    "watchthingz.syzygial.cc" = {
      extraConfig = ''
        reverse_proxy 127.0.0.1:8080 {
          header_up Host {host}
          header_up X-Real-IP {remote_host}
          header_up X-Forwarded-For {remote_host}
          header_up X-Forwarded-Proto {scheme}
        }
      '';
    };
  };
}
machines/hippocampus/services.nix  (new file, 5 lines)
@@ -0,0 +1,5 @@
{
  imports = [
    ./services/tailscale.nix
  ];
}
machines/hippocampus/services/tailscale.nix  (new file, 19 lines)
@@ -0,0 +1,19 @@
{ config, pkgs, lib, ... }:
let
  tailUser = "server";
in {
  imports = [
    ../modules/headscale.nix
    ../modules/tailscale.nix
  ];
  services.headscale.ensureUsers = {
    "${tailUser}" = {};
  };
  services.tailscale = {
    enable = true;
    authTokenPath = config.services.headscale.ensureUsers."${tailUser}".path;
  };
  systemd.services.tailscale_autologin = {
    after = [ "headscale-preauth-${tailUser}.service" ];
  };
}
nvidiacontainer-overlay.nix  (new file, 47 lines)
@@ -0,0 +1,47 @@
(nixpkgs: (final: prev: let
  libnvidia-container = ((
    final.callPackage (nixpkgs + "/pkgs/applications/virtualization/libnvidia-container") { }
  ).overrideAttrs (old: rec {
    pname = "libnvidia-container";
    version = "1.12.0";

    src = final.fetchFromGitHub {
      owner = "NVIDIA";
      repo = pname;
      rev = "v${version}";
      sha256 = "Ih8arSrBGGX44SiWcj61qV9z4DRrbi1J+3xxid2GupE=";
    };
    patches = [
      ./libnvidia-container/inline-c-struct.patch
      ./libnvidia-container/avoid-static-libtirpc-build.patch
      ./libnvidia-container/libnvc-ldconfig-and-path-fix.patch
    ];
    postInstall =
      let
        inherit (final.addOpenGLRunpath) driverLink;
        libraryPath = final.lib.makeLibraryPath [ "$out" driverLink "${driverLink}-32" ];
      in
      ''
        remove-references-to -t "${final.go}" $out/lib/libnvidia-container-go.so.${version}
        wrapProgram $out/bin/nvidia-container-cli --prefix LD_LIBRARY_PATH : ${libraryPath}
      '';
  }));
in {
  mkNvidiaContainerPkg = { name, containerRuntimePath, configTemplate, additionalPaths ? [] }:
    let
      nvidia-container-toolkit = final.callPackage (nixpkgs + "/pkgs/applications/virtualization/nvidia-container-toolkit") {
        inherit containerRuntimePath configTemplate libnvidia-container;
      };
    in final.symlinkJoin {
      inherit name;
      paths = [
        libnvidia-container
        nvidia-container-toolkit
      ] ++ additionalPaths;
    };
  nvidia-podman = final.mkNvidiaContainerPkg {
    name = "nvidia-podman";
    containerRuntimePath = "${final.runc}/bin/runc";
    configTemplate = ./libnvidia-container/config.tml;
  };
}))
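Since the file evaluates to a function from a nixpkgs source path to an overlay, applying it from the flake might look like the following sketch (the `inputs.nixpkgs` attribute path is an assumption; the flake's actual wiring is not shown in this diff):

```nix
# Hypothetical wiring: "inputs.nixpkgs" names the flake input this overlay expects.
nixpkgs.overlays = [
  (import ./nvidiacontainer-overlay.nix inputs.nixpkgs)
];
# The overlay then exposes pkgs.nvidia-podman (libnvidia-container + the toolkit,
# joined via symlinkJoin) for the podman nvidia runtime.
```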