Umbrel on WSL2 with Windows 11

Has anyone successfully run Umbrel on WSL2 with Windows 11? I am able to get it up and running, but it has constant trouble staying online. The Umbrel seems to disconnect randomly and then has trouble getting all of the apps back up and running. It is very frustrating.

I wanted to use my desktop computer that runs Windows, because I just got it brand new and it is a nice machine, but I can't simply run Linux on it. I wanted to try this before investing in a new hardware setup just for an Umbrel.

If anyone has experience with this, I would love to know what you did to get it running.
Thanks,

Which Linux distribution and version are you running within the WSL2 subsystem?

There may be some extra dependencies required, and some always-on settings to enable so the machine doesn't go to sleep or get disconnected. I can try to investigate further; if you'd like, feel free to share a debug log generated with the command sudo ~/umbrel/scripts/debug so we can look into it.
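For example, to stop the host from sleeping you can zero out the standby and hibernate timeouts with the built-in powercfg tool (a minimal sketch; 0 means "never"):

# run in an elevated PowerShell or CMD prompt on the Windows host
powercfg /change standby-timeout-ac 0    # never sleep while on AC power
powercfg /change hibernate-timeout-ac 0  # never hibernate while on AC power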

I'm using Ubuntu 20.04.1.
That makes sense. Something seems to happen and it all resets. I've made it so my computer doesn't go to sleep, but that hasn't helped. I copied the debug output below. Any help would be fantastic…

=====================
= Umbrel debug info =
=====================

Umbrel version

0.5.3

Memory usage

           total        used        free      shared  buff/cache   available

Mem: 15G 5.6G 252M 7.3G 9.8G 2.5G
Swap: 4.1G 2.2G 1.9G

total: 35.8%
system: 29.6%
lightning: 1.7%
btcpay-server: 1.4%
bitcoin: 1.4%
thunderhub: 0.6%
nostr-relay: 0.4%
lnplus: 0.4%
lnbits: 0.3%
tailscale: 0%
lightning-shell: 0%
file-browser: 0%

Memory monitor logs

81636 ? S 0:01 bash ./scripts/memory-monitor
Memory monitor is already running
81636 ? S 0:01 bash ./scripts/memory-monitor
Memory monitor is already running
81636 ? S 0:01 bash ./scripts/memory-monitor
Memory monitor is already running
81636 ? S 0:01 bash ./scripts/memory-monitor
Memory monitor is already running
81636 ? S 0:01 bash ./scripts/memory-monitor
Memory monitor is already running

Filesystem information

Filesystem Size Used Avail Use% Mounted on
/dev/sdd 1007G 579G 377G 61% /
/dev/sdd 1007G 579G 377G 61% /

Karen logs

81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
Got signal: change-password
karen is getting triggered!
This script must only be run on Umbrel OS
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running
81624 ? S 0:00 bash ./karen
81666 ? S 0:00 bash ./karen
karen is already running

Docker containers

NAMES STATUS
btcpay-server_web_1 Up 9 seconds
bitcoin_server_1 Up 2 days
btcpay-server_nbxplorer_1 Restarting (139) 13 seconds ago
bitcoin_tor_server_1 Restarting (1) 21 seconds ago
bitcoin_bitcoind_1 Up 2 days
lightning_app_proxy_1 Up 2 days
lightning_app_1 Up 2 days
lightning_tor_server_1 Up 2 days
lightning-shell_web_1 Up 2 days
lnplus_web_1 Restarting (1) 27 seconds ago
lightning-shell_tor_server_1 Restarting (1) 24 seconds ago
lnplus_app_proxy_1 Up 2 days
lnplus_tor_server_1 Up 2 days
thunderhub_web_1 Up 2 days
thunderhub_tor_server_1 Restarting (1) 1 second ago
file-browser_tor_server_1 Restarting (1) 38 seconds ago
btcpay-server_tor_server_1 Up 2 days
nostr-relay_app_proxy_1 Up 2 days
btcpay-server_app_proxy_1 Up 2 days
file-browser_server_1 Up Less than a second (health: starting)
nostr-relay_tor_server_1 Up 2 days
lnbits_web_1 Up 2 days
lnbits_tor_server_1 Restarting (1) 34 seconds ago
tailscale_tor_server_1 Restarting (1) 42 seconds ago
dashboard Up 2 days
manager Up 2 days
tor_server Up 2 days

Umbrel logs

Attaching to manager
manager | $ node ./bin/www
manager | Mon, 20 Feb 2023 14:54:43 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:49:9
manager | Mon, 20 Feb 2023 14:54:43 GMT morgan deprecated default format: use combined format at app.js:49:9
manager | Listening on port 3006
manager | ::ffff:10.21.21.2 - - [Mon, 20 Feb 2023 14:54:51 GMT] “GET /v1/system/update-status HTTP/1.0” 200 65 “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | yarn run v1.22.18
manager | $ node ./bin/www
manager | Mon, 20 Feb 2023 14:56:48 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:49:9
manager | Mon, 20 Feb 2023 14:56:48 GMT morgan deprecated default format: use combined format at app.js:49:9
manager | Listening on port 3006
manager | ::ffff:10.21.21.2 - - [Mon, 20 Feb 2023 14:56:56 GMT] “GET /v1/system/update-status HTTP/1.0” 200 65 “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | yarn run v1.22.18
manager | $ node ./bin/www
manager | Mon, 20 Feb 2023 14:59:05 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:49:9
manager | Mon, 20 Feb 2023 14:59:05 GMT morgan deprecated default format: use combined format at app.js:49:9
manager | Listening on port 3006
manager | yarn run v1.22.18
manager | $ node ./bin/www
manager | Mon, 20 Feb 2023 20:55:54 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:49:9
manager | Mon, 20 Feb 2023 20:55:54 GMT morgan deprecated default format: use combined format at app.js:49:9
manager | Listening on port 3006
manager | yarn run v1.22.18
manager | $ node ./bin/www
manager | Wed, 01 Mar 2023 00:49:41 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:49:9
manager | Wed, 01 Mar 2023 00:49:41 GMT morgan deprecated default format: use combined format at app.js:49:9
manager | Listening on port 3006

Tor Proxy logs

Attaching to tor_proxy
tor_proxy | Feb 20 16:54:41.000 [notice] Bootstrapped 0% (starting): Starting
tor_proxy | Feb 20 16:54:41.000 [notice] Starting with guard context “default”
tor_proxy | Feb 20 16:54:41.000 [notice] Bootstrapped 5% (conn): Connecting to a relay
tor_proxy | Feb 20 16:54:42.000 [notice] Bootstrapped 10% (conn_done): Connected to a relay
tor_proxy | Feb 20 16:54:43.000 [notice] Bootstrapped 14% (handshake): Handshaking with a relay
tor_proxy | Feb 20 16:54:43.000 [notice] Bootstrapped 15% (handshake_done): Handshake with a relay done
tor_proxy | Feb 20 16:54:43.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
tor_proxy | Feb 20 16:54:43.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
tor_proxy | Feb 20 16:54:43.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
tor_proxy | Feb 20 16:54:44.000 [notice] Bootstrapped 100% (done): Done

App logs

bitcoin

Attaching to bitcoin_server_1, bitcoin_app_proxy_1, bitcoin_i2pd_daemon_1, bitcoin_tor_1, bitcoin_tor_server_1, bitcoin_bitcoind_1
app_proxy_1 | Bitcoin Node is now ready…
app_proxy_1 | Listening on port: 2100
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://10.21.22.2:3005
app_proxy_1 | Waiting for 10.21.22.2:3005 to open…
app_proxy_1 | Bitcoin Node is now ready…
app_proxy_1 | Listening on port: 2100
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | yarn run v1.22.19
tor_1 | Feb 28 21:47:14.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Feb 28 21:47:16.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Feb 28 21:49:17.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Feb 28 21:59:51.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_1 | Feb 28 22:03:29.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_1 | Feb 28 22:17:38.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_1 | Feb 28 22:19:28.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Feb 28 22:39:32.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Feb 28 22:40:25.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Feb 28 22:48:34.000 [notice] Catching signal TERM, exiting cleanly.
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t select next hop for ~NYK~U~m3DBwPe~tY~mUOcCPBWLT65~x3B33ZbW5atc=
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t create inbound tunnel, no peers available
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t select next hop for ~NYK~U~m3DBwPe~tY~mUOcCPBWLT65~x3B33ZbW5atc=
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t create inbound tunnel, no peers available
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t select next hop for ~NYK~U~m3DBwPe~tY~mUOcCPBWLT65~x3B33ZbW5atc=
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t create inbound tunnel, no peers available
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t select next hop for ~NYK~U~m3DBwPe~tY~mUOcCPBWLT65~x3B33ZbW5atc=
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t create inbound tunnel, no peers available
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t select next hop for ~NYK~U~m3DBwPe~tY~mUOcCPBWLT65~x3B33ZbW5atc=
i2pd_daemon_1 | 22:48:31@749/error - Tunnels: Can’t create inbound tunnel, no peers available
bitcoind_1 | 2023-03-03T14:18:47Z connect() to 10.21.22.11:7656 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:18:51Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:18:54Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:18:57Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:19:00Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:19:03Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:19:06Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:19:10Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:19:13Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
bitcoind_1 | 2023-03-03T14:19:16Z connect() to 10.21.22.10:9050 failed after wait: Host is unreachable (113)
server_1 | yarn run v1.22.18
server_1 | $ node ./bin/www
server_1 | Mon, 20 Feb 2023 20:55:42 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:33:9
server_1 | Mon, 20 Feb 2023 20:55:42 GMT morgan deprecated default format: use combined format at app.js:33:9
server_1 | Listening on port 3005
server_1 | yarn run v1.22.18
server_1 | $ node ./bin/www
server_1 | Wed, 01 Mar 2023 00:49:44 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:33:9
server_1 | Wed, 01 Mar 2023 00:49:44 GMT morgan deprecated default format: use combined format at app.js:33:9
server_1 | Listening on port 3005

btcpay-server

Attaching to btcpay-server_web_1, btcpay-server_nbxplorer_1, btcpay-server_postgres_1, btcpay-server_tor_server_1, btcpay-server_app_proxy_1
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
nbxplorer_1 | at Npgsql.NpgsqlConnection.<Open>g__OpenAsync|45_0(Boolean async, CancellationToken cancellationToken)
nbxplorer_1 | at NBXplorer.Backends.Postgres.DbConnectionFactory.CreateConnection(Action`1 action) in /source/NBXplorer/Backends/Postgres/DbConnectionFactory.cs:line 54
nbxplorer_1 | at NBXplorer.HostedServices.DatabaseSetupHostedService.StartAsync(CancellationToken cancellationToken) in /source/NBXplorer/HostedServices/DatabaseSetupHostedService.cs:line 33
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.HostedServiceExecutor.ExecuteAsync(Func`2 callback, Boolean throwOnFirstFailure)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHost.StartAsync(CancellationToken cancellationToken)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token, String startupMessage)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token, String startupMessage)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHostExtensions.Run(IWebHost host)
nbxplorer_1 | at NBXplorer.Program.Main(String[] args) in /source/NBXplorer/Program.cs:line 60
web_1 | info: BTCPayServer.Plugins.PluginManager: Loading plugins from /data/plugins
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer - 1.7.5
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.Shopify - 1.7.5
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.Crowdfund - 1.7.5
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.PayButton - 1.7.5
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.PointOfSale - 1.7.5
web_1 | info: Configuration: Supported chains: BTC
web_1 | info: Configuration: BTC: Explorer url is http://btcpay-server_nbxplorer_1:32838/
web_1 | info: Configuration: BTC: Cookie file is /data/.nbxplorer/Main/.cookie
web_1 | info: Configuration: Network: Mainnet
postgres_1 | 2023-02-20 13:08:43.906 UTC [1] LOG: received immediate shutdown request
postgres_1 | 2023-02-20 13:08:43.906 UTC [1] LOG: could not open file “postmaster.pid”: No such file or directory
postgres_1 | 2023-02-20 13:08:43.908 UTC [14530] WARNING: terminating connection because of crash of another server process
postgres_1 | 2023-02-20 13:08:43.908 UTC [14530] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
postgres_1 | 2023-02-20 13:08:43.908 UTC [14530] HINT: In a moment you should be able to reconnect to the database and repeat your command.
postgres_1 | 2023-02-20 13:08:43.910 UTC [31] WARNING: terminating connection because of crash of another server process
postgres_1 | 2023-02-20 13:08:43.910 UTC [31] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
postgres_1 | 2023-02-20 13:08:43.910 UTC [31] HINT: In a moment you should be able to reconnect to the database and repeat your command.
postgres_1 | 2023-02-20 13:08:43.938 UTC [1] LOG: could not write file “pg_stat/pg_stat_statements.stat.tmp”: No such file or directory
postgres_1 | 2023-02-20 13:08:43.952 UTC [1] LOG: database system is shut down

file-browser

Attaching to file-browser_app_proxy_1, file-browser_tor_server_1, file-browser_server_1
server_1 | 2023/03/03 14:10:04 timeout
server_1 | 2023/03/03 14:11:06 timeout
server_1 | 2023/03/03 14:12:07 timeout
server_1 | 2023/03/03 14:13:09 timeout
server_1 | 2023/03/03 14:14:10 timeout
server_1 | 2023/03/03 14:15:11 timeout
server_1 | 2023/03/03 14:16:14 timeout
server_1 | 2023/03/03 14:17:15 timeout
server_1 | 2023/03/03 14:18:17 timeout
server_1 | 2023/03/03 14:19:18 timeout
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://file-browser_server_1:80
app_proxy_1 | Waiting for file-browser_server_1:80 to open…
app_proxy_1 | File Browser is now ready…
app_proxy_1 | Listening on port: 7421
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www

lightning

Attaching to lightning_app_proxy_1, lightning_tor_1, lightning_app_1, lightning_tor_server_1, lightning_lnd_1
app_proxy_1 | Validating token: a661e4d16d19 …
app_proxy_1 | Validating token: a661e4d16d19 …
app_proxy_1 | Validating token: a661e4d16d19 …
app_proxy_1 | Validating token: a661e4d16d19 …
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://10.21.22.3:3006
app_proxy_1 | Waiting for 10.21.22.3:3006 to open…
app_proxy_1 | Lightning Node is now ready…
app_proxy_1 | Listening on port: 2101
app_1 | [backup-monitor] Checking channel backup…
app_1 | [backup-monitor] Sleeping…
app_1 | Waiting for LND…
app_1 | Checking LND status…
app_1 | [backup-monitor] Checking channel backup…
app_1 | [backup-monitor] Sleeping…
app_1 | [backup-monitor] Checking channel backup…
app_1 | [backup-monitor] Sleeping…
app_1 | Waiting for LND…
app_1 | Checking LND status…
lnd_1 | 2023-03-01 08:01:46.165 [INF] LNWL: Started listening for bitcoind transaction notifications via ZMQ on 10.21.21.8:28333
lnd_1 | 2023-03-01 08:01:46.751 [INF] LNWL: The wallet has been unlocked without a time limit
lnd_1 | 2023-03-01 08:01:46.753 [INF] CHRE: LightningWallet opened
lnd_1 | 2023-03-01 08:01:46.765 [INF] SRVR: Proxying all network traffic via Tor (stream_isolation=false)! NOTE: Ensure the backend node is proxying over Tor as well
lnd_1 | 2023-03-01 08:01:46.765 [INF] TORC: Starting tor controller
lnd_1 | 2023-03-01 08:01:49.382 [ERR] RPCS: [/lnrpc.Lightning/ChannelBalance]: the RPC server is in the process of starting up, but not yet ready to accept calls
lnd_1 | 2023-03-01 08:01:49.883 [ERR] LTND: Shutting down because error in main method: unable to initialize tor controller: unable to connect to Tor server: dial tcp 10.21.21.11:29051: connect: no route to host
lnd_1 | 2023-03-01 08:01:49.892 [INF] LTND: Shutdown complete
lnd_1 |
lnd_1 | unable to initialize tor controller: unable to connect to Tor server: dial tcp 10.21.21.11:29051: connect: no route to host
tor_1 | Feb 28 14:55:50.000 [notice] While bootstrapping, fetched this many bytes: 116750 (consensus network-status fetch); 1772 (authority cert fetch); 14686 (microdescriptor fetch)
tor_1 | Feb 28 14:55:50.000 [notice] While not bootstrapping, fetched this many bytes: 5192652 (consensus network-status fetch); 47017 (authority cert fetch); 3238696 (microdescriptor fetch)
tor_1 | Feb 28 14:55:50.000 [notice] Heartbeat: Our onion services received 1 v3 INTRODUCE2 cells and attempted to launch 1 rendezvous circuits.
tor_1 | Feb 28 17:20:21.000 [notice] No circuits are opened. Relaxed timeout for circuit 16143 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
tor_1 | Feb 28 19:19:19.000 [notice] No circuits are opened. Relaxed timeout for circuit 16220 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [1 similar message(s) suppressed in last 7140 seconds]
tor_1 | Feb 28 20:55:50.000 [notice] Heartbeat: Tor’s uptime is 7 days 23:59 hours, with 15 circuits open. I’ve sent 162.17 MB and received 121.75 MB. I’ve received 0 connections on IPv4 and 0 on IPv6. I’ve made 97 connections with IPv4 and 0 with IPv6.
tor_1 | Feb 28 20:55:50.000 [notice] While bootstrapping, fetched this many bytes: 116750 (consensus network-status fetch); 1772 (authority cert fetch); 14686 (microdescriptor fetch)
tor_1 | Feb 28 20:55:50.000 [notice] While not bootstrapping, fetched this many bytes: 5367508 (consensus network-status fetch); 47017 (authority cert fetch); 3329010 (microdescriptor fetch)
tor_1 | Feb 28 20:55:50.000 [notice] Heartbeat: Our onion services received 1 v3 INTRODUCE2 cells and attempted to launch 1 rendezvous circuits.
tor_1 | Feb 28 22:48:34.000 [notice] Catching signal TERM, exiting cleanly.

lightning-shell

Attaching to lightning-shell_web_1, lightning-shell_app_proxy_1, lightning-shell_tor_server_1
web_1 | [2023/03/01 00:49:46:5047] N: terminal type: xterm-256color
web_1 | [2023/03/01 00:49:46:5105] N: /usr/local/lib//libwebsockets-evlib_uv.so
web_1 | [2023/03/01 00:49:46:5128] N: LWS: 4.3.0-a5aae04, NET CLI SRV H1 H2 WS ConMon IPv6-absent
web_1 | [2023/03/01 00:49:46:5256] N: elops_init_pt_uv: Using foreign event loop…
web_1 | [2023/03/01 00:49:46:5261] N: ++ [wsi|0|pipe] (1)
web_1 | [2023/03/01 00:49:46:5266] N: ++ [vh|0|netlink] (1)
web_1 | [2023/03/01 00:49:46:6680] N: ++ [vh|1|default||7681] (2)
web_1 | [2023/03/01 00:49:46:6681] N: [null wsi]: lws_socket_bind: source ads 0.0.0.0
web_1 | [2023/03/01 00:49:46:6681] N: ++ [wsi|1|listen|default||7681] (2)
web_1 | [2023/03/01 00:49:46:6681] N: Listening on port: 7681
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://lightning-shell_web_1:7681
app_proxy_1 | Waiting for lightning-shell_web_1:7681 to open…
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | yarn run v1.22.19

lnbits

Attaching to lnbits_app_proxy_1, lnbits_web_1, lnbits_tor_server_1
app_proxy_1 | Error wating for port: “The address ‘lnbits_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘lnbits_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘lnbits_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘lnbits_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | yarn run v1.22.19
web_1 | 2023-03-03 14:18:42.14 | ERROR | The backend for LndRestWallet isn’t working properly: ‘Unable to connect to https://10.21.21.9:8080.’
web_1 | 2023-03-03 14:18:42.14 | INFO | Retrying connection to backend in 5 seconds…
web_1 | 2023-03-03 14:18:50.30 | ERROR | The backend for LndRestWallet isn’t working properly: ‘Unable to connect to https://10.21.21.9:8080.’
web_1 | 2023-03-03 14:18:50.30 | INFO | Retrying connection to backend in 5 seconds…
web_1 | 2023-03-03 14:18:58.46 | ERROR | The backend for LndRestWallet isn’t working properly: ‘Unable to connect to https://10.21.21.9:8080.’
web_1 | 2023-03-03 14:18:58.46 | INFO | Retrying connection to backend in 5 seconds…
web_1 | 2023-03-03 14:19:06.63 | ERROR | The backend for LndRestWallet isn’t working properly: ‘Unable to connect to https://10.21.21.9:8080.’
web_1 | 2023-03-03 14:19:06.63 | INFO | Retrying connection to backend in 5 seconds…
web_1 | 2023-03-03 14:19:14.78 | ERROR | The backend for LndRestWallet isn’t working properly: ‘Unable to connect to https://10.21.21.9:8080.’
web_1 | 2023-03-03 14:19:14.78 | INFO | Retrying connection to backend in 5 seconds…

Part 2:
lnplus

Attaching to lnplus_web_1, lnplus_app_proxy_1, lnplus_tor_server_1
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘lnplus_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘lnplus_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘lnplus_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘lnplus_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘lnplus_web_1’ cannot be found”
web_1 | WARNING: Skipping key “LN_SERVER_URL”. Already set in ENV.
web_1 | WARNING: Skipping key “CERTIFICATE_PATH”. Already set in ENV.
web_1 | WARNING: Skipping key “API_URL”. Already set in ENV.
web_1 | (rdb:1) /app/config/application.rb:21:module Lnpclient
web_1 | => Booting Puma
web_1 | => Rails 7.0.3.1 application starting in development
web_1 | => Run bin/rails server --help for more startup options
web_1 | A server is already running. Check /app/tmp/pids/server.pid.
web_1 | Exiting
web_1 | (rdb:1) /gems/ruby/3.0.0/gems/railties-7.0.3.1/lib/rails/command.rb:54: ARGV.replace(original_argv)

nostr-relay

Attaching to nostr-relay_relay_1, nostr-relay_app_proxy_1, nostr-relay_tor_server_1, nostr-relay_web_1
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘nostr-relay_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘nostr-relay_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘nostr-relay_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘nostr-relay_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘nostr-relay_web_1’ cannot be found”
relay_1 | Feb 28 22:40:58.198 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 75.751µs (result: Ok, WAL size: 0)
relay_1 | Feb 28 22:41:58.199 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 77.731µs (result: Ok, WAL size: 0)
relay_1 | Feb 28 22:42:58.199 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 77.831µs (result: Ok, WAL size: 0)
relay_1 | Feb 28 22:43:58.201 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 66.99µs (result: Ok, WAL size: 0)
relay_1 | Feb 28 22:44:58.203 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 66.551µs (result: Ok, WAL size: 0)
relay_1 | Feb 28 22:45:58.203 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 101.522µs (result: Ok, WAL size: 0)
relay_1 | Feb 28 22:46:58.204 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 70.191µs (result: Ok, WAL size: 0)
relay_1 | Feb 28 22:47:58.204 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 81.751µs (result: Ok, WAL size: 0)
relay_1 | Feb 28 22:48:34.698 INFO nostr_rs_relay::server: Shutting down webserver due to SIGTERM
relay_1 | Feb 28 22:48:34.702 INFO nostr_rs_relay::db: database connection closed
web_1 | {“level”:“info”,“ts”:1677444945.0472996,“logger”:“tls”,“msg”:“finished cleaning storage units”}
web_1 | {“level”:“info”,“ts”:1677531344.8994474,“logger”:“tls”,“msg”:“cleaning storage unit”,“description”:“FileStorage:/data/caddy”}
web_1 | {“level”:“info”,“ts”:1677531344.901474,“logger”:“tls”,“msg”:“finished cleaning storage units”}
web_1 | {“level”:“info”,“ts”:1677617744.694356,“logger”:“tls”,“msg”:“cleaning storage unit”,“description”:“FileStorage:/data/caddy”}
web_1 | {“level”:“info”,“ts”:1677617744.6959815,“logger”:“tls”,“msg”:“finished cleaning storage units”}
web_1 | {“level”:“info”,“ts”:1677624514.677543,“msg”:“shutting down apps, then terminating”,“signal”:“SIGTERM”}
web_1 | {“level”:“warn”,“ts”:1677624514.6808836,“msg”:“exiting; byeee!! :wave:”,“signal”:“SIGTERM”}
web_1 | {“level”:“info”,“ts”:1677624514.7001746,“logger”:“tls.cache.maintenance”,“msg”:“stopped background certificate maintenance”,“cache”:“0xc0008d0690”}
web_1 | {“level”:“info”,“ts”:1677624514.7036605,“logger”:“admin”,“msg”:“stopped previous server”,“address”:“localhost:2019”}
web_1 | {“level”:“info”,“ts”:1677624514.7040882,“msg”:“shutdown complete”,“signal”:“SIGTERM”,“exit_code”:0}

tailscale

Attaching to tailscale_web_1, tailscale_tor_server_1
web_1 | 2023/02/28 22:48:33 monitor: ip rule deleted: {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:253 Protocol:0 Scope:0 Type:1 Flags:0 Attributes:{Dst: Src: Gateway: OutIface:0 Priority:5230 Table:253 Mark:16711680 Pref: Expires: Metrics: Multipath:[]}}
web_1 | 2023/02/28 22:48:33 monitor: ip rule deleted: {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:0 Protocol:0 Scope:0 Type:7 Flags:0 Attributes:{Dst: Src: Gateway: OutIface:0 Priority:5250 Table:0 Mark:16711680 Pref: Expires: Metrics: Multipath:[]}}
web_1 | 2023/02/28 22:48:33 monitor: ip rule deleted: {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:52 Protocol:0 Scope:0 Type:1 Flags:0 Attributes:{Dst: Src: Gateway: OutIface:0 Priority:5270 Table:52 Mark:0 Pref: Expires: Metrics: Multipath:[]}}
web_1 | 2023/02/28 22:48:34 tailscaled got signal terminated; shutting down
web_1 | 2023/02/28 22:48:34 control: client.Shutdown()
web_1 | 2023/02/28 22:48:34 control: client.Shutdown: inSendStatus=0
web_1 | 2023/02/28 22:48:34 control: mapRoutine: quit
web_1 | 2023/02/28 22:48:34 control: Client.Shutdown done.
web_1 | 2023/02/28 22:48:34 flushing log.
web_1 | 2023/02/28 22:48:34 logger closing down

thunderhub

Attaching to thunderhub_web_1, thunderhub_tor_server_1, thunderhub_app_proxy_1
app_proxy_1 | Error wating for port: “The address ‘thunderhub_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘thunderhub_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘thunderhub_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘thunderhub_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘thunderhub_web_1’ cannot be found”
app_proxy_1 | Retrying…
web_1 | {
web_1 | message: ‘UnableToConnectToAnyNode’,
web_1 | level: ‘error’,
web_1 | timestamp: ‘2023-03-01T00:49:53.184Z’
web_1 | }
web_1 | {
web_1 | level: ‘error’,
web_1 | message: 'Initiating subscriptions failed: ',
web_1 | timestamp: ‘2023-03-01T00:49:53.184Z’
web_1 | }

==== Result ====

The debug script did not automatically detect any issues with your Umbrel.

If you know of any way to set a static IP address for WSL2, that would be fantastic too! I can't find a good tutorial online.

For a static IP on Linux, these steps should be accurate (as also referenced in this guide), though it is probably unrelated to the current error and may not be necessary if you don't want it.
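Also note that WSL2 normally gets a fresh IP from the virtual switch on every restart, so a truly static address is awkward; one common workaround (just a sketch — port 80 here is a placeholder for whichever port you actually expose) is to re-point a Windows port proxy at the current WSL2 address after each boot:

# run in an elevated PowerShell prompt on the Windows host
wsl hostname -I
# forward a fixed Windows port to the address printed above
netsh interface portproxy add v4tov4 listenport=80 listenaddress=0.0.0.0 connectport=80 connectaddress=<current-WSL2-IP>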

It also looks like something may be breaking with your containers, so I would verify the installed components with the following commands:

docker -v

docker-compose -v

python3 -V

You can reference these commands in this guide and, after verifying the components are installed, follow the steps listed there to try to load into your dashboard. Let me know if this is helpful and gets you up and running:
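One more thing to rule out: Ubuntu 20.04 under WSL2 does not run systemd by default, so the Docker daemon won't come back on its own when the WSL VM restarts, which could explain containers dying in bulk. A quick check from inside the WSL2 shell:

# inside the WSL2 Ubuntu shell
sudo service docker status
# if it reports Docker is not running, start it manually before starting Umbrel
sudo service docker start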

So I went through all those steps; the only thing I didn't do was dedicate a hard drive to WSL2. Is this a necessary step in the process? I never knew dedicating hard drive space for WSL2 was a thing. Is it even possible to do this now, after the fact?
I was able to shut down and restart my Umbrel, and it has mostly come back up, except for Tailscale and BTCPay Server. Here is the latest debug info. Maybe when things go awry I need to shut down Umbrel to stop the Docker containers so that I can restart it cleanly?
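For reference, this is how I've been restarting it (assuming the stock scripts that ship in the Umbrel directory are the right way to do this):

sudo ~/umbrel/scripts/stop
sudo ~/umbrel/scripts/start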

=====================
= Umbrel debug info =
=====================

Umbrel version

0.5.3

Memory usage

           total        used        free      shared  buff/cache   available

Mem: 15G 5.3G 202M 26M 10G 10G
Swap: 4.1G 879M 3.2G

total: 33.7%
system: 33.7%
thunderhub: 0%
tailscale: 0%
nostr-relay: 0%
lnplus: 0%
lnbits: 0%
lightning: 0%
lightning-shell: 0%
file-browser: 0%
btcpay-server: 0%
bitcoin: 0%

Memory monitor logs

Memory monitor is already running
256460 ? S 0:00 bash ./scripts/memory-monitor
Memory monitor is already running
256460 ? S 0:00 bash ./scripts/memory-monitor
Memory monitor is already running
2023-03-05 20:48:51 Memory monitor running!
2023-03-05 21:15:09 Memory monitor running!
724 ? S 0:00 bash ./scripts/memory-monitor
195949 ? R 0:00 bash ./scripts/memory-monitor
Memory monitor is already running

Filesystem information

Filesystem Size Used Avail Use% Mounted on
/dev/sdc 1007G 581G 376G 61% /
/dev/sdc 1007G 581G 376G 61% /

Karen logs

karen is getting triggered!
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
Got signal:
karen is getting triggered!
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
Got signal:
karen is getting triggered!
Got signal:
karen is getting triggered!
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
Got signal:
karen is getting triggered!
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
Got signal:
karen is getting triggered!
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
Got signal:
karen is getting triggered!
Got signal:
karen is getting triggered!
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
./karen: line 75: /home/alvareah/umbrel/events/triggers/: Is a directory
712 ? S 0:00 bash ./karen
771 ? S 0:00 bash ./karen
karen is already running
Got signal: backup
karen is getting triggered!
Deriving keys…
Creating backup…
Adding random padding…
1+0 records in
1+0 records out
5455 bytes (5.5 kB, 5.3 KiB) copied, 2.946e-05 s, 185 MB/s
Creating encrypted tarball…
backup/
backup/.padding
backup/channel.backup
Uploading backup…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6608 100 146 100 6462 65 2885 0:00:02 0:00:02 --:--:-- 2951
{“message”:“Successfully uploaded backup 1678106792127.tar.gz.pgp for backup ID ed654c166ca9f125de0c682c8edbb252e5673351860f51f336c200c0d9696d74”}

====== Backup success =======

Got signal: change-password
karen is getting triggered!
This script must only be run on Umbrel OS

Docker containers

NAMES STATUS
btcpay-server_web_1 Restarting (139) Less than a second ago
bitcoin_server_1 Up 11 hours
btcpay-server_nbxplorer_1 Restarting (139) 40 seconds ago
bitcoin_bitcoind_1 Up 11 hours
bitcoin_tor_server_1 Up 11 hours
bitcoin_i2pd_daemon_1 Up 11 hours
bitcoin_tor_1 Up 11 hours
btcpay-server_postgres_1 Restarting (1) 56 seconds ago
btcpay-server_tor_server_1 Up 11 hours
bitcoin_app_proxy_1 Up 11 hours
btcpay-server_app_proxy_1 Up 11 hours
lightning_app_1 Up 11 hours
lightning_tor_server_1 Up 11 hours
lightning_tor_1 Up 11 hours
lightning_app_proxy_1 Up 11 hours
lightning_lnd_1 Up 11 hours
nostr-relay_relay_1 Up 11 hours
nostr-relay_app_proxy_1 Up 11 hours
nostr-relay_web_1 Up 11 hours
nostr-relay_tor_server_1 Up 11 hours
lnplus_app_proxy_1 Up 11 hours
file-browser_app_proxy_1 Up 11 hours
lightning-shell_app_proxy_1 Up 11 hours
lnbits_app_proxy_1 Up 11 hours
thunderhub_app_proxy_1 Up 11 hours
lnbits_web_1 Up 11 hours
lightning-shell_web_1 Up 11 hours
thunderhub_web_1 Up 11 hours
lnbits_tor_server_1 Up 11 hours
thunderhub_tor_server_1 Up 11 hours
file-browser_tor_server_1 Up 11 hours
lightning-shell_tor_server_1 Up 11 hours
lnplus_tor_server_1 Up 11 hours
lnplus_web_1 Up 11 hours
file-browser_server_1 Up 11 hours (healthy)
tailscale_web_1 Up 11 hours
tailscale_tor_server_1 Restarting (1) 33 seconds ago
nginx Up 11 hours
manager Up 11 hours
auth Up 11 hours
tor_server Up 11 hours
tor_proxy Up 11 hours
dashboard Up 11 hours

Umbrel logs

Attaching to manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:25 GMT] “GET /v1/system/memory HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:25 GMT] “GET /v1/system/storage HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:25 GMT] “GET /v1/apps HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:26 GMT] “GET /v1/system/get-update HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:38 GMT] “GET /v1/system/update-status HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:45 GMT] “GET /v1/apps?installed=1 HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:45 GMT] “GET /v1/system/memory HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:45 GMT] “GET /v1/system/storage HTTP/1.0” 200 1426 “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:45 GMT] “GET /v1/apps HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Mon, 06 Mar 2023 12:47:46 GMT] “GET /v1/system/get-update HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager

Tor Proxy logs

Attaching to tor_proxy
tor_proxy | Mar 06 08:15:12.000 [notice] While bootstrapping, fetched this many bytes: 601694 (consensus network-status fetch); 14101 (authority cert fetch); 5609727 (microdescriptor fetch)
tor_proxy | Mar 06 08:15:12.000 [notice] While not bootstrapping, fetched this many bytes: 163754 (consensus network-status fetch); 8863 (authority cert fetch); 325086 (microdescriptor fetch)
tor_proxy | Mar 06 08:15:12.000 [notice] Average packaged cell fullness: 70.356%. TLS write overhead: 2%
tor_proxy | Mar 06 08:15:12.000 [notice] Heartbeat: Our onion service received 0 v3 INTRODUCE2 cells and attempted to launch 0 rendezvous circuits.
tor_proxy | Mar 06 10:29:15.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_proxy | Mar 06 10:31:07.000 [notice] Tried for 120 seconds to get a connection to [scrubbed]:9735. Giving up. (waiting for rendezvous desc)
tor_proxy | Mar 06 10:46:07.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_proxy | Mar 06 10:46:19.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_proxy | Mar 06 11:27:41.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $A53C46F5B157DD83366D45A8E99A244934A14C46~csailmitexit [s0BLh6T2Ruh9+EyOl4tRuv/zVuDxhXRk9S9ILyUjoGc] at 128.31.0.13. Retrying on a new circuit.
tor_proxy | Mar 06 11:27:56.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $A398080A6A72F828DC4476DE45E28C5892CA1070~ForPrivacyNET [AKiZ17aMjR94+BtLdxOQoomNtgKIvw7i1JZdU+7TMuE] at 185.220.101.49. Retrying on a new circuit.

App logs

bitcoin

Attaching to bitcoin_server_1, bitcoin_bitcoind_1, bitcoin_tor_server_1, bitcoin_i2pd_daemon_1, bitcoin_tor_1, bitcoin_app_proxy_1
i2pd_daemon_1 | 12:31:19@914/error - SAM: Naming lookup failed. LeaseSet for 26mn7ghpxrkrfn5kzthxg7fwlyobaoxsnsbzod5gvb26jtqomcla.b32.i2p not found
i2pd_daemon_1 | 12:31:19@148/error - SAM: Read error: End of file
i2pd_daemon_1 | 12:35:42@148/error - Garlic: Failed to decrypt message
i2pd_daemon_1 | 12:37:04@148/error - SAM: Stream read error: Operation canceled
i2pd_daemon_1 | 12:37:13@148/error - Garlic: Failed to decrypt message
i2pd_daemon_1 | 12:37:13@148/error - Garlic: Failed to decrypt message
i2pd_daemon_1 | 12:38:43@148/error - Garlic: Failed to decrypt message
i2pd_daemon_1 | 12:38:57@148/error - Garlic: Failed to decrypt message
i2pd_daemon_1 | 12:41:27@148/error - Garlic: Failed to decrypt message
i2pd_daemon_1 | 12:47:27@148/error - Garlic: Failed to decrypt message
server_1 | yarn run v1.22.18
server_1 | $ node ./bin/www
server_1 | Mon, 06 Mar 2023 02:15:27 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:33:9
server_1 | Mon, 06 Mar 2023 02:15:27 GMT morgan deprecated default format: use combined format at app.js:33:9
server_1 | Listening on port 3005
bitcoind_1 | 2023-03-06T12:36:44Z Socks5() connect to 95.175.104.240:8333 failed: general failure
bitcoind_1 | 2023-03-06T12:38:08Z Socks5() connect to 165.231.182.30:8333 failed: general failure
bitcoind_1 | 2023-03-06T12:38:40Z Socks5() connect to r4sgef7txzfupkzx4ucweiaptmeivhtrvtlc5o7yfzmcpx7bezbbd4id.onion:8333 failed: host unreachable
bitcoind_1 | 2023-03-06T12:39:32Z Socks5() connect to 7mvgzoqdkrwmaqitkmcbye6enwycq7xsav2asg2b3iigvxx2b7sapmad.onion:8333 failed: host unreachable
bitcoind_1 | 2023-03-06T12:39:34Z New outbound peer connected: version: 70016, blocks=779584, peer=3375 (outbound-full-relay)
bitcoind_1 | 2023-03-06T12:41:15Z Socks5() connect to 169.150.197.108:8333 failed: general failure
bitcoind_1 | 2023-03-06T12:45:46Z New outbound peer connected: version: 70016, blocks=779584, peer=3418 (block-relay-only)
bitcoind_1 | 2023-03-06T12:46:29Z Potential stale tip detected, will try using extra outbound peer (last tip update: 2647 seconds ago)
bitcoind_1 | 2023-03-06T12:46:30Z New outbound peer connected: version: 70015, blocks=779584, peer=3428 (outbound-full-relay)
bitcoind_1 | 2023-03-06T12:47:38Z New outbound peer connected: version: 70016, blocks=779584, peer=3433 (block-relay-only)
tor_1 | Mar 06 10:37:19.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Mar 06 10:46:46.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Mar 06 10:54:53.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_1 | Mar 06 11:07:25.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Mar 06 11:20:03.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Mar 06 11:20:25.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Mar 06 11:59:14.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_1 | Mar 06 12:36:44.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Mar 06 12:38:40.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor_1 | Mar 06 12:39:32.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://10.21.22.2:3005
app_proxy_1 | Waiting for 10.21.22.2:3005 to open…
app_proxy_1 | Bitcoin Node is now ready…
app_proxy_1 | Listening on port: 2100

btcpay-server

Attaching to btcpay-server_web_1, btcpay-server_nbxplorer_1, btcpay-server_postgres_1, btcpay-server_tor_server_1, btcpay-server_app_proxy_1
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
app_proxy_1 | Error wating for port: “The address ‘btcpay-server_web_1’ cannot be found”
app_proxy_1 | Retrying…
nbxplorer_1 | at Npgsql.NpgsqlConnection.<Open>g__OpenAsync|45_0(Boolean async, CancellationToken cancellationToken)
nbxplorer_1 | at NBXplorer.Backends.Postgres.DbConnectionFactory.CreateConnection(Action`1 action) in /source/NBXplorer/Backends/Postgres/DbConnectionFactory.cs:line 54
nbxplorer_1 | at NBXplorer.HostedServices.DatabaseSetupHostedService.StartAsync(CancellationToken cancellationToken) in /source/NBXplorer/HostedServices/DatabaseSetupHostedService.cs:line 33
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.HostedServiceExecutor.ExecuteAsync(Func`2 callback, Boolean throwOnFirstFailure)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHost.StartAsync(CancellationToken cancellationToken)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token, String startupMessage)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token, String startupMessage)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHostExtensions.RunAsync(IWebHost host, CancellationToken token)
nbxplorer_1 | at Microsoft.AspNetCore.Hosting.WebHostExtensions.Run(IWebHost host)
nbxplorer_1 | at NBXplorer.Program.Main(String[] args) in /source/NBXplorer/Program.cs:line 60
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer - 1.7.12
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.Shopify - 1.7.12
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.NFC - 1.7.12
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.Crowdfund - 1.7.12
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.PayButton - 1.7.12
web_1 | info: BTCPayServer.Plugins.PluginManager: Adding and executing plugin BTCPayServer.Plugins.PointOfSale - 1.7.12
web_1 | info: Configuration: Supported chains: BTC
web_1 | info: Configuration: BTC: Explorer url is http://btcpay-server_nbxplorer_1:32838/
web_1 | info: Configuration: BTC: Cookie file is /data/.nbxplorer/Main/.cookie
web_1 | info: Configuration: Network: Mainnet
postgres_1 | 2023-03-06 12:46:51.263 UTC [1] LOG: starting PostgreSQL 13.6 (Debian 13.6-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
postgres_1 | 2023-03-06 12:46:51.263 UTC [1] LOG: listening on IPv4 address “0.0.0.0”, port 5432
postgres_1 | 2023-03-06 12:46:51.263 UTC [1] LOG: listening on IPv6 address “::”, port 5432
postgres_1 | 2023-03-06 12:46:51.265 UTC [1] LOG: listening on Unix socket “/var/run/postgresql/.s.PGSQL.5432”
postgres_1 | 2023-03-06 12:46:51.269 UTC [27] LOG: database system was shut down at 2023-02-20 13:07:44 UTC
postgres_1 | 2023-03-06 12:46:51.269 UTC [27] LOG: invalid primary checkpoint record
postgres_1 | 2023-03-06 12:46:51.269 UTC [27] PANIC: could not locate a valid checkpoint record
postgres_1 | 2023-03-06 12:46:51.270 UTC [1] LOG: startup process (PID 27) was terminated by signal 6: Aborted
postgres_1 | 2023-03-06 12:46:51.270 UTC [1] LOG: aborting startup due to startup process failure
postgres_1 | 2023-03-06 12:46:51.271 UTC [1] LOG: database system is shut down

file-browser

Attaching to file-browser_app_proxy_1, file-browser_tor_server_1, file-browser_server_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://file-browser_server_1:80
app_proxy_1 | Waiting for file-browser_server_1:80 to open…
app_proxy_1 | File Browser is now ready…
app_proxy_1 | Listening on port: 7421
server_1 | 2023/03/06 02:15:14 Using config file: /.filebrowser.json
server_1 | 2023/03/06 02:15:14 Listening on [::]:80

lightning

Attaching to lightning_app_1, lightning_tor_server_1, lightning_tor_1, lightning_app_proxy_1, lightning_lnd_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://10.21.22.3:3006
app_proxy_1 | Waiting for 10.21.22.3:3006 to open…
app_proxy_1 | Lightning Node is now ready…
app_proxy_1 | Listening on port: 2101
lnd_1 | 2023-03-06 12:41:55.601 [INF] CRTR: Processed channels=0 updates=51 nodes=2 in last 1m0.000542338s
lnd_1 | 2023-03-06 12:42:32.942 [INF] DISC: Broadcasting 54 new announcements in 6 sub batches
lnd_1 | 2023-03-06 12:42:55.601 [INF] CRTR: Processed channels=0 updates=28 nodes=5 in last 1m0.000001901s
lnd_1 | 2023-03-06 12:43:55.600 [INF] CRTR: Processed channels=0 updates=47 nodes=2 in last 59.999098272s
lnd_1 | 2023-03-06 12:44:02.941 [INF] DISC: Broadcasting 66 new announcements in 7 sub batches
lnd_1 | 2023-03-06 12:44:55.600 [INF] CRTR: Processed channels=0 updates=26 nodes=2 in last 1m0.000275037s
lnd_1 | 2023-03-06 12:45:32.941 [INF] DISC: Broadcasting 44 new announcements in 5 sub batches
lnd_1 | 2023-03-06 12:45:55.601 [INF] CRTR: Processed channels=0 updates=32 nodes=1 in last 1m0.000308785s
lnd_1 | 2023-03-06 12:46:55.600 [INF] CRTR: Processed channels=0 updates=34 nodes=1 in last 59.999532222s
lnd_1 | 2023-03-06 12:47:02.942 [INF] DISC: Broadcasting 62 new announcements in 7 sub batches
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | [backup-monitor] Checking channel backup…
app_1 | [backup-monitor] Sleeping…
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
tor_1 | Mar 06 02:15:47.000 [notice] Bootstrapped 89% (ap_handshake): Finishing handshake with a relay to build circuits
tor_1 | Mar 06 02:15:47.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
tor_1 | Mar 06 02:15:47.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
tor_1 | Mar 06 02:15:48.000 [notice] Bootstrapped 100% (done): Done
tor_1 | Mar 06 04:30:27.000 [notice] No circuits are opened. Relaxed timeout for circuit 237 (a Hidden service: Establishing introduction point 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
tor_1 | Mar 06 08:15:22.000 [notice] Heartbeat: Tor’s uptime is 6:00 hours, with 39 circuits open. I’ve sent 6.29 MB and received 10.50 MB. I’ve received 0 connections on IPv4 and 0 on IPv6. I’ve made 11 connections with IPv4 and 0 with IPv6.
tor_1 | Mar 06 08:15:22.000 [notice] While bootstrapping, fetched this many bytes: 601267 (consensus network-status fetch); 14101 (authority cert fetch); 5600409 (microdescriptor fetch)
tor_1 | Mar 06 08:15:22.000 [notice] While not bootstrapping, fetched this many bytes: 137216 (consensus network-status fetch); 7091 (authority cert fetch); 322278 (microdescriptor fetch)
tor_1 | Mar 06 08:15:22.000 [notice] Heartbeat: Our onion services received 0 v3 INTRODUCE2 cells and attempted to launch 0 rendezvous circuits.
tor_1 | Mar 06 09:37:51.000 [notice] No circuits are opened. Relaxed timeout for circuit 696 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [1 similar message(s) suppressed in last 18480 seconds]

lightning-shell

Attaching to lightning-shell_app_proxy_1, lightning-shell_web_1, lightning-shell_tor_server_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://lightning-shell_web_1:7681
app_proxy_1 | Waiting for lightning-shell_web_1:7681 to open…
app_proxy_1 | Lightning Shell is now ready…
app_proxy_1 | Listening on port: 7681
web_1 | [2023/03/06 02:15:18:6462] N: LWS: 4.3.0-a5aae04, NET CLI SRV H1 H2 WS ConMon IPv6-absent
web_1 | [2023/03/06 02:15:18:6496] N: elops_init_pt_uv: Using foreign event loop…
web_1 | [2023/03/06 02:15:18:6500] N: ++ [wsi|0|pipe] (1)
web_1 | [2023/03/06 02:15:18:6503] N: ++ [vh|0|netlink] (1)
web_1 | [2023/03/06 02:15:18:6503] N: ++ [vh|1|default||7681] (2)
web_1 | [2023/03/06 02:15:18:6504] N: [null wsi]: lws_socket_bind: source ads 0.0.0.0
web_1 | [2023/03/06 02:15:18:6504] N: ++ [wsi|1|listen|default||7681] (2)
web_1 | [2023/03/06 02:15:18:6504] N: Listening on port: 7681
web_1 | [2023/03/06 02:15:21:8243] N: ++ [wsisrv|0|adopted] (1)
web_1 | [2023/03/06 02:15:21:8270] N: – [wsisrv|0|adopted] (0) 2.733ms

lnbits

Attaching to lnbits_app_proxy_1, lnbits_web_1, lnbits_tor_server_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://lnbits_web_1:3007
app_proxy_1 | Waiting for lnbits_web_1:3007 to open…
app_proxy_1 | LNbits is now ready…
app_proxy_1 | Listening on port: 3007
web_1 | 2023-03-06 10:45:42.78 | INFO | Task: pending check finished for 0 payments (took 0.001 s)
web_1 | 2023-03-06 11:15:42.77 | INFO | Task: checking all pending payments (incoming=False, outgoing=True) of last 15 days
web_1 | 2023-03-06 11:15:42.77 | INFO | Task: pending check finished for 0 payments (took 0.001 s)
web_1 | 2023-03-06 11:45:42.76 | INFO | Task: checking all pending payments (incoming=False, outgoing=True) of last 15 days
web_1 | 2023-03-06 11:45:42.77 | INFO | Task: pending check finished for 0 payments (took 0.001 s)
web_1 | 2023-03-06 12:15:42.75 | INFO | Task: checking all pending payments (incoming=False, outgoing=True) of last 15 days
web_1 | 2023-03-06 12:15:42.75 | INFO | Task: pending check finished for 0 payments (took 0.001 s)
web_1 | 2023-03-06 12:45:42.75 | INFO | Task: checking all pending payments (incoming=False, outgoing=True) of last 15 days
web_1 | 2023-03-06 12:45:42.75 | INFO | Task: pending check finished for 0 payments (took 0.001 s)
web_1 | 2023-03-06 12:46:45.38 | INFO | ::ffff:10.21.0.1:0 - “GET / HTTP/1.1” 200

lnplus

Attaching to lnplus_app_proxy_1, lnplus_tor_server_1, lnplus_web_1
web_1 | => Rails 7.0.3.1 application starting in development
web_1 | => Run bin/rails server --help for more startup options
web_1 | Puma starting in single mode…
web_1 | * Puma version: 5.6.4 (ruby 3.0.1-p64) (“Birdie’s Version”)
web_1 | * Min threads: 5
web_1 | * Max threads: 5
web_1 | * Environment: development
web_1 | * PID: 1
web_1 | * Listening on http://0.0.0.0:3777
web_1 | Use Ctrl-C to stop
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://lnplus_web_1:3777
app_proxy_1 | Waiting for lnplus_web_1:3777 to open…
app_proxy_1 | Lightning Network+ is now ready…
app_proxy_1 | Listening on port: 3777

nostr-relay

Attaching to nostr-relay_relay_1, nostr-relay_app_proxy_1, nostr-relay_web_1, nostr-relay_tor_server_1
web_1 | {“level”:“info”,“ts”:1678068920.4438558,“msg”:“using provided configuration”,“config_file”:"/etc/caddy/Caddyfile",“config_adapter”:“caddyfile”}
web_1 | {“level”:“warn”,“ts”:1678068920.446018,“msg”:“Caddyfile input is not formatted; run the ‘caddy fmt’ command to fix inconsistencies”,“adapter”:“caddyfile”,“file”:"/etc/caddy/Caddyfile",“line”:7}
web_1 | {“level”:“info”,“ts”:1678068920.4501143,“logger”:“admin”,“msg”:“admin endpoint started”,“address”:“localhost:2019”,“enforce_origin”:false,“origins”:["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
web_1 | {“level”:“info”,“ts”:1678068920.45203,“logger”:“tls.cache.maintenance”,“msg”:“started background certificate maintenance”,“cache”:“0xc0004127e0”}
web_1 | {“level”:“info”,“ts”:1678068920.4533758,“logger”:“tls”,“msg”:“cleaning storage unit”,“description”:“FileStorage:/data/caddy”}
web_1 | {“level”:“info”,“ts”:1678068920.453431,“logger”:“http.log”,“msg”:“server running”,“name”:“srv0”,“protocols”:[“h1”,“h2”,“h3”]}
web_1 | {“level”:“info”,“ts”:1678068920.4540415,“logger”:“tls”,“msg”:“finished cleaning storage units”}
web_1 | {“level”:“error”,“ts”:1678068920.4540741,“msg”:“unable to autosave config”,“file”:"/config/caddy/autosave.json",“error”:“open /config/caddy/autosave.json: permission denied”}
web_1 | {“level”:“info”,“ts”:1678068920.454087,“msg”:“serving initial configuration”}
web_1 | {“level”:“info”,“ts”:1678106805.1032827,“logger”:“http.log.access.log0”,“msg”:“handled request”,“request”:{“remote_ip”:“10.21.0.15”,“remote_port”:“56874”,“proto”:“HTTP/1.1”,“method”:“GET”,“host”:“172.30.223.24:4848”,“uri”:"/",“headers”:{“Accept-Language”:[“en-US,en;q=0.9”],“Accept-Encoding”:[“gzip, deflate”],“User-Agent”:[“Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36”],“Connection”:[“close”],“X-Forwarded-Proto”:[“http”],“X-Forwarded-Host”:[“172.30.223.24:4848”],“Accept”:["/"],“X-Forwarded-For”:["::ffff:10.21.0.1"]}},“user_id”:"",“duration”:0.02328965,“size”:365,“status”:200,“resp_headers”:{“Content-Type”:[“text/html; charset=utf-8”],“Last-Modified”:[“Mon, 06 Feb 2023 12:09:28 GMT”],“Content-Encoding”:[“gzip”],“Vary”:[“Accept-Encoding”],“Server”:[“Caddy”],“Etag”:["“rpnr3sge”"]}}
relay_1 | Mar 06 12:38:20.097 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 54.07µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:39:20.098 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 51.939µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:40:20.099 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 59.49µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:41:20.101 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 67.249µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:42:20.102 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 96.969µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:43:20.104 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 77.12µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:44:20.105 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 56.68µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:45:20.106 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 65.27µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:46:20.108 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 63.39µs (result: Ok, WAL size: 0)
relay_1 | Mar 06 12:47:20.109 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 72.38µs (result: Ok, WAL size: 0)
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://nostr-relay_web_1:3000
app_proxy_1 | Waiting for nostr-relay_web_1:3000 to open…
app_proxy_1 | Nostr Relay is now ready…
app_proxy_1 | Listening on port: 4848

tailscale

Attaching to tailscale_web_1, tailscale_tor_server_1
web_1 | 2023/03/06 12:47:21 monitor: RTM_DELROUTE: src=, dst=ff00::/8, gw=, outif=9700, table=254
web_1 | 2023/03/06 12:47:21 [RATELIMIT] format(“monitor: %s: src=%v, dst=%v, gw=%v, outif=%v, table=%v”)
web_1 | 2023/03/06 12:47:34 monitor: RTM_DELROUTE: src=, dst=fe80::e4ac:62ff:fe1c:d140/128, gw=, outif=9704, table=254
web_1 | 2023/03/06 12:47:34 monitor: RTM_DELROUTE: src=, dst=fe80::/64, gw=, outif=9704, table=254
web_1 | 2023/03/06 12:47:34 [RATELIMIT] format(“monitor: %s: src=%v, dst=%v, gw=%v, outif=%v, table=%v”)
web_1 | 2023/03/06 12:47:47 [RATELIMIT] format(“monitor: %s: src=%v, dst=%v, gw=%v, outif=%v, table=%v”) (1 dropped)
web_1 | 2023/03/06 12:47:47 monitor: RTM_DELROUTE: src=, dst=fe80::3014:7dff:feaa:4345/128, gw=, outif=9706, table=254
web_1 | 2023/03/06 12:47:47 monitor: RTM_DELROUTE: src=, dst=fe80::/64, gw=, outif=9706, table=254
web_1 | 2023/03/06 12:47:47 monitor: RTM_DELROUTE: src=, dst=ff00::/8, gw=, outif=9706, table=254
web_1 | 2023/03/06 12:47:47 [RATELIMIT] format(“monitor: %s: src=%v, dst=%v, gw=%v, outif=%v, table=%v”)

thunderhub

Attaching to thunderhub_app_proxy_1, thunderhub_web_1, thunderhub_tor_server_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://thunderhub_web_1:3000
app_proxy_1 | Waiting for thunderhub_web_1:3000 to open…
app_proxy_1 | ThunderHub is now ready…
app_proxy_1 | Listening on port: 3000
web_1 | {
web_1 | message: ‘UnableToConnectToAnyNode’,
web_1 | level: ‘error’,
web_1 | timestamp: ‘2023-03-06T02:15:23.482Z’
web_1 | }
web_1 | {
web_1 | level: ‘error’,
web_1 | message: 'Initiating subscriptions failed: ',
web_1 | timestamp: ‘2023-03-06T02:15:23.483Z’
web_1 | }

==== Result ====

The debug script did not automatically detect any issues with your Umbrel.

I have Umbrel running on WSL2 with Windows 11, and it works fine.

My steps:

  1. Installed the Ubuntu app from the Microsoft Store
  2. I didn't want the installation on the C:\ drive, so I exported Ubuntu and imported it to another drive with more space (see the commands after this list; how-to: https://dev.to/mefaba/installing-wsl-on-another-drive-in-windows-5c4a)
  3. Fixed issues with the WSL2 import (like the default user and home folder)
  4. Updated Ubuntu and installed Umbrel with "curl -L https://umbrel.sh | bash"
  5. Installed the Bitcoin Node app and let it fully sync before installing other apps.
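For step 2, the move itself boils down to three wsl.exe commands (a rough sketch; the D:\wsl paths are just examples, and the linked guide covers the default-user and home-folder fixes):

# run in PowerShell on the Windows host
wsl --export Ubuntu D:\wsl\ubuntu-backup.tar
wsl --unregister Ubuntu
wsl --import Ubuntu D:\wsl\Ubuntu D:\wsl\ubuntu-backup.tar --version 2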