Error: System service failed - I've tried all the basics!

Hi everyone, I'm having trouble with my Raspberry Pi Bitcoin node running Umbrel. It was up and running fine, but it lost power during an Umbrel update and since then I've only gotten a "system service failed" error. I can SSH in and I have the IP address, but I still get the failure message. I reformatted the microSD card and reinstalled the Umbrel software on it, and still nothing.

=====================
= Umbrel debug info =
=====================

Umbrel version

0.4.9

Flashed OS version

v0.4.17

Raspberry Pi Model

Revision : d03115
Serial : 10000000a2d918d6
Model : Raspberry Pi 4 Model B Rev 1.5

Firmware

Dec 1 2021 15:01:54
Copyright © 2012 Broadcom
version 71bd3109023a0c8575585ba87cbb374d2eeb038f (clean) (release) (start)

Temperature

temp=39.9'C

Throttling

throttled=0x0

Memory usage

              total        used        free      shared  buff/cache   available
Mem:           7.8G        135M        7.4G        8.0M        228M        7.6G
Swap:          4.1G          0B        4.1G

total: 1.7%
system: 1.7%
tor: 0%
pi-hole: 0%
lnd: 0%
electrs: 0%
bitcoin: 0%

Memory monitor logs

2022-04-27 07:15:23 Memory monitor running!
2022-04-27 07:19:37 Memory monitor running!
2022-05-04 01:03:43 Memory monitor running!
2022-05-08 11:07:23 Memory monitor running!

Filesystem information

Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/root        29G  3.1G    25G   12%  /
/dev/sda1       916G  147G   723G   17%  /home/umbrel/umbrel

Startup service logs

-- Logs begin at Sun 2022-05-08 17:25:21 EDT, end at Sun 2022-05-08 17:48:30 EDT. --
May 08 17:25:40 umbrel systemd[1]: Dependency failed for Umbrel Startup Service.
May 08 17:25:40 umbrel systemd[1]: umbrel-startup.service: Job umbrel-startup.service/start failed with result 'dependency'.

External storage service logs

-- Logs begin at Sun 2022-05-08 17:25:21 EDT, end at Sun 2022-05-08 17:48:30 EDT. --
May 08 17:25:25 umbrel systemd[1]: Starting External Storage Mounter…
May 08 17:25:25 umbrel external storage mounter[472]: Running external storage mount script…
May 08 17:25:25 umbrel external storage mounter[472]: Found device "JMicron "
May 08 17:25:25 umbrel external storage mounter[472]: Blacklisting USB device IDs against UAS driver…
May 08 17:25:26 umbrel external storage mounter[472]: Rebinding USB drivers…
May 08 17:25:26 umbrel external storage mounter[472]: Checking USB devices are back…
May 08 17:25:26 umbrel external storage mounter[472]: Waiting for USB devices…
May 08 17:25:27 umbrel external storage mounter[472]: Waiting for USB devices…
May 08 17:25:28 umbrel external storage mounter[472]: Waiting for USB devices…
May 08 17:25:29 umbrel external storage mounter[472]: Checking if the device is ext4…
May 08 17:25:29 umbrel external storage mounter[472]: Yes, it is ext4
May 08 17:25:29 umbrel external storage mounter[472]: Checking if device contains an Umbrel install…
May 08 17:25:29 umbrel external storage mounter[472]: Yes, it contains an Umbrel install
May 08 17:25:29 umbrel external storage mounter[472]: Bind mounting external storage over local Umbrel installation…
May 08 17:25:29 umbrel external storage mounter[472]: Bind mounting external storage over local Docker data dir…
May 08 17:25:29 umbrel external storage mounter[472]: Bind mounting external storage to /swap
May 08 17:25:29 umbrel external storage mounter[472]: Bind mounting SD card root at /sd-card…
May 08 17:25:29 umbrel external storage mounter[472]: Checking Umbrel root is now on external storage…
May 08 17:25:30 umbrel external storage mounter[472]: Checking /var/lib/docker is now on external storage…
May 08 17:25:30 umbrel external storage mounter[472]: Checking /swap is now on external storage…
May 08 17:25:30 umbrel external storage mounter[472]: Setting up swapfile
May 08 17:25:30 umbrel external storage mounter[472]: Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
May 08 17:25:30 umbrel external storage mounter[472]: no label, UUID=18cd8fde-e957-4076-b5ee-038f324dc9f6
May 08 17:25:30 umbrel external storage mounter[472]: Checking SD Card root is bind mounted at /sd-root…
May 08 17:25:30 umbrel external storage mounter[472]: Starting external drive mount monitor…
May 08 17:25:30 umbrel external storage mounter[472]: Mount script completed successfully!
May 08 17:25:30 umbrel systemd[1]: Started External Storage Mounter.

External storage SD card update service logs

-- Logs begin at Sun 2022-05-08 17:25:21 EDT, end at Sun 2022-05-08 17:48:30 EDT. --
May 08 17:25:39 umbrel systemd[1]: Starting External Storage SDcard Updater…
May 08 17:25:39 umbrel external storage updater[1032]: Checking if SD card Umbrel is newer than external storage…
May 08 17:25:40 umbrel external storage updater[1032]: Yes, SD version is newer.
May 08 17:25:40 umbrel external storage updater[1032]: Checking if the external storage version "0.4.9" satisfies update requirement ">=0.2.1"…
May 08 17:25:40 umbrel external storage updater[1032]: Yes, it does, attempting an automatic update…
May 08 17:25:40 umbrel external storage updater[1032]: =======================================
May 08 17:25:40 umbrel external storage updater[1032]: =============== UPDATE ================
May 08 17:25:40 umbrel external storage updater[1032]: =======================================
May 08 17:25:40 umbrel external storage updater[1032]: ========== Stage: Download ============
May 08 17:25:40 umbrel external storage updater[1032]: =======================================
May 08 17:25:40 umbrel external storage updater[1032]: An update is already in progress. Exiting now.
May 08 17:25:40 umbrel systemd[1]: umbrel-external-storage-sdcard-update.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 08 17:25:40 umbrel systemd[1]: umbrel-external-storage-sdcard-update.service: Failed with result 'exit-code'.
May 08 17:25:40 umbrel systemd[1]: Failed to start External Storage SDcard Updater.

Karen logs

Pulling manager … extracting (83.9%)
Pulling dashboard … extracting (83.9%)
Pulling tor_server … extracting (5.2%)
Pulling tor_server … extracting (6.1%)
Pulling electrs … extracting (100.0%)
Pulling tor_server … extracting (7.9%)
Pulling tor_server … extracting (8.7%)
Pulling manager … extracting (85.0%)
Pulling middleware … extracting (85.0%)
Pulling dashboard … extracting (85.0%)
Pulling tor_server … extracting (9.6%)
Pulling electrs … pull complete
Pulling electrs … extracting (100.0%)
Pulling electrs … extracting (100.0%)
Pulling tor_server … extracting (11.3%)
Pulling tor_server … extracting (12.2%)
Pulling dashboard … extracting (86.1%)
Pulling middleware … extracting (86.1%)
Pulling manager … extracting (86.1%)
Pulling tor_server … extracting (14.0%)
Pulling tor_server … extracting (15.7%)
Pulling electrs … pull complete
Pulling electrs … extracting (1.4%)
Pulling tor_server … extracting (17.4%)
Pulling manager … extracting (87.2%)
Pulling dashboard … extracting (87.2%)
Pulling middleware … extracting (87.2%)
Pulling tor_server … extracting (18.3%)
Pulling tor_server … extracting (19.2%)
Pulling dashboard … extracting (88.3%)
Pulling middleware … extracting (88.3%)
Pulling manager … extracting (88.3%)
Pulling tor_server … extracting (20.1%)
Pulling electrs … extracting (5.5%)
Pulling electrs … extracting (16.4%)
Pulling electrs … extracting (23.3%)
Pulling tor_server … extracting (20.9%)
Pulling dashboard … extracting (89.4%)
Pulling manager … extracting (89.4%)
Pulling middleware … extracting (89.4%)
Pulling electrs … extracting (30.1%)
Pulling electrs … extracting (39.7%)
Pulling tor_server … extracting (22.7%)
Pulling electrs … extracting (49.3%)
Pulling electrs … extracting (60.3%)
Pulling tor_server … extracting (23.6%)
Pulling electrs … extracting (67.1%)
Pulling tor_server … extracting (24.4%)
Pulling electrs … extracting (74.0%)

Docker containers

NAMES STATUS

Umbrel logs

Attaching to middleware, manager
middleware | > node ./bin/www
middleware |
middleware | Sun, 08 May 2022 13:11:37 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:46:9
middleware | Sun, 08 May 2022 13:11:37 GMT morgan deprecated default format: use combined format at app.js:46:9
middleware | (node:59) DeprecationWarning: grpc.load: Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead
middleware | Listening on port 3005
middleware | LndUnlocker: Wallet failed to unlock!
middleware | LndUnlocker: Wallet failed to unlock!
middleware | LndUnlocker: Wallet failed to unlock!
middleware | LndUnlocker: Wallet failed to unlock!
middleware | LndUnlocker: Wallet unlocked!
middleware | Can’t connect, keep trying
middleware | Can’t connect, keep trying
middleware | Can’t connect, keep trying
middleware | { version: ‘umbrel-manager-0.2.16’ }
middleware | Can connect, lets proceed with server starting
middleware | Pre-condition found, Running service
middleware |
middleware | > umbrel-middleware@0.1.12 start /app
middleware | > node ./bin/www
middleware |
middleware | Sun, 08 May 2022 13:33:50 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:46:9
middleware | Sun, 08 May 2022 13:33:50 GMT morgan deprecated default format: use combined format at app.js:46:9
middleware | (node:52) DeprecationWarning: grpc.load: Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead
middleware | Listening on port 3005
middleware | LndUnlocker: Wallet failed to unlock!
middleware | LndUnlocker: Wallet failed to unlock!
middleware | LndUnlocker: Wallet failed to unlock!
middleware | LndUnlocker: Wallet failed to unlock!
middleware | LndUnlocker: Wallet unlocked!
manager | yarn run v1.22.15
manager | $ node ./bin/www
manager | Sun, 08 May 2022 14:47:50 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:40:9
manager | Sun, 08 May 2022 14:47:50 GMT morgan deprecated default format: use combined format at app.js:40:9
manager | Listening on port 3006
manager | yarn run v1.22.15
manager | $ node ./bin/www
manager | Sun, 08 May 2022 14:49:22 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:40:9
manager | Sun, 08 May 2022 14:49:22 GMT morgan deprecated default format: use combined format at app.js:40:9
manager | Listening on port 3006
manager | yarn run v1.22.15
manager | $ node ./bin/www
manager | Sun, 08 May 2022 15:06:42 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:40:9
manager | Sun, 08 May 2022 15:06:42 GMT morgan deprecated default format: use combined format at app.js:40:9
manager | Listening on port 3006
manager | yarn run v1.22.15
manager | $ node ./bin/www
manager | Sun, 08 May 2022 15:12:16 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:40:9
manager | Sun, 08 May 2022 15:12:16 GMT morgan deprecated default format: use combined format at app.js:40:9
manager | Listening on port 3006
manager | yarn run v1.22.15
manager | $ node ./bin/www
manager | Sun, 08 May 2022 17:17:28 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:40:9
manager | Sun, 08 May 2022 17:17:28 GMT morgan deprecated default format: use combined format at app.js:40:9
manager | Listening on port 3006
manager | yarn run v1.22.15
manager | $ node ./bin/www
manager | Sun, 08 May 2022 17:42:42 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:40:9
manager | Sun, 08 May 2022 17:42:42 GMT morgan deprecated default format: use combined format at app.js:40:9
manager | Listening on port 3006

Bitcoin Core logs

Attaching to bitcoin
bitcoin | 2022-05-08T13:42:04Z init message: Done loading
bitcoin | 2022-05-08T13:42:04Z addcon thread start
bitcoin | 2022-05-08T13:42:04Z opencon thread start
bitcoin | 2022-05-08T13:42:04Z net thread start
bitcoin | 2022-05-08T13:43:10Z New outbound peer connected: version: 70015, blocks=735475, peer=0 (outbound-full-relay)
bitcoin | 2022-05-08T13:43:14Z New outbound peer connected: version: 70016, blocks=735475, peer=1 (outbound-full-relay)
bitcoin | 2022-05-08T13:43:21Z P2P peers available. Skipped DNS seeding.
bitcoin | 2022-05-08T13:43:21Z dnsseed thread exit
bitcoin | 2022-05-08T13:43:23Z Synchronizing blockheaders, height: 735475 (~100.00%)
bitcoin | 2022-05-08T13:44:26Z New outbound peer connected: version: 70016, blocks=735475, peer=3 (outbound-full-relay)
bitcoin | 2022-05-08T13:44:49Z Socks5() connect to 146.70.52.144:8333 failed: general failure
bitcoin | 2022-05-08T13:45:42Z New outbound peer connected: version: 70016, blocks=735475, peer=4 (outbound-full-relay)
bitcoin | 2022-05-08T13:46:27Z tor: Thread interrupt
bitcoin | 2022-05-08T13:46:27Z Shutdown: In progress…
bitcoin | 2022-05-08T13:46:27Z addcon thread exit
bitcoin | 2022-05-08T13:46:27Z torcontrol thread exit
bitcoin | 2022-05-08T13:46:27Z msghand thread exit
bitcoin | 2022-05-08T13:46:27Z net thread exit
bitcoin | 2022-05-08T13:46:27Z ERROR: Error while reading proxy response
bitcoin | 2022-05-08T13:46:27Z opencon thread exit
bitcoin | 2022-05-08T13:46:28Z DumpAnchors: Flush 0 outbound block-relay-only peer addresses to anchors.dat started
bitcoin | 2022-05-08T13:46:28Z DumpAnchors: Flush 0 outbound block-relay-only peer addresses to anchors.dat completed (0.01s)
bitcoin | 2022-05-08T13:46:28Z scheduler thread exit
bitcoin | 2022-05-08T13:46:28Z Writing 0 unbroadcast transactions to disk.
bitcoin | 2022-05-08T13:46:28Z Dumped mempool: 9e-06s to copy, 0.002983s to dump
bitcoin | 2022-05-08T13:46:28Z FlushStateToDisk: write coins cache to disk (0 coins, 0kB) started
bitcoin | 2022-05-08T13:46:28Z FlushStateToDisk: write coins cache to disk (0 coins, 0kB) completed (0.00s)
bitcoin | 2022-05-08T13:46:28Z FlushStateToDisk: write coins cache to disk (0 coins, 0kB) started
bitcoin | 2022-05-08T13:46:28Z FlushStateToDisk: write coins cache to disk (0 coins, 0kB) completed (0.00s)
bitcoin | 2022-05-08T13:46:29Z Shutdown: done

LND logs

Attaching to lnd
lnd | 2022-05-08 13:46:27.651 [INF] HLCK: Health monitor shutting down
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping RPC Server
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping SignRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping RouterRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping AutopilotRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping InvoicesRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping VersionRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping WalletKitRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping ChainRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.766 [INF] RPCS: Stopping WatchtowerRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.768 [INF] RPCS: Stopping WatchtowerClientRPC Sub-RPC Server
lnd | 2022-05-08 13:46:27.768 [INF] TORC: Stopping tor controller
lnd | 2022-05-08 13:46:27.830 [INF] BTCN: Lost peer 34.81.31.170:8333 (outbound)
lnd | 2022-05-08 13:46:27.832 [INF] BTCN: Lost peer 109.101.234.205:8333 (outbound)
lnd | 2022-05-08 13:46:27.833 [INF] BTCN: Syncing to block height 735475 from peer 176.9.150.253:8333
lnd | 2022-05-08 13:46:27.833 [INF] BTCN: Fetching set of headers from tip (height=735475) from peer 176.9.150.253:8333
lnd | 2022-05-08 13:46:27.858 [INF] BTCN: Lost peer 67.241.169.206:8333 (outbound)
lnd | 2022-05-08 13:46:27.864 [INF] BTCN: Lost peer 176.9.150.253:8333 (outbound)
lnd | 2022-05-08 13:46:27.865 [ERR] TORC: DEL_ONION got error: undefined response code: 0, err: read tcp 10.21.21.9:34794->10.21.21.11:29051: read: connection reset by peer
lnd | 2022-05-08 13:46:27.865 [ERR] LTND: error stopping tor controller: undefined response code: 0, err: read tcp 10.21.21.9:34794->10.21.21.11:29051: read: connection reset by peer
lnd | 2022-05-08 13:46:27.865 [INF] BTCN: Canceling block subscription: id=1
lnd | 2022-05-08 13:46:27.877 [INF] BTCN: Syncing to block height 735475 from peer 89.233.207.67:8333
lnd | 2022-05-08 13:46:27.877 [INF] BTCN: Fetching set of headers from tip (height=735475) from peer 89.233.207.67:8333
lnd | 2022-05-08 13:46:27.878 [INF] BTCN: Lost peer 89.233.207.67:8333 (outbound)
lnd | 2022-05-08 13:46:27.880 [WRN] BTCN: No sync peer candidates available
lnd | 2022-05-08 13:46:27.919 [INF] BTCN: Block manager shutting down
lnd | 2022-05-08 13:46:27.983 [INF] BTCN: Address manager shutting down
lnd | 2022-05-08 13:46:28.010 [INF] LNWL: Stopping web API fee estimator
lnd | 2022-05-08 13:46:28.058 [INF] LTND: Shutdown complete
lnd |

electrs logs

Attaching to electrs
electrs | [2022-05-08T13:46:03.465Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:04.468Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:05.471Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:06.474Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:07.477Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:08.479Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:09.482Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:10.484Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:11.489Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:12.495Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:13.499Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:14.505Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:15.510Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:16.516Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:17.518Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:18.521Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:19.524Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:20.527Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:21.530Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:22.535Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:23.540Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:24.545Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:25.548Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:26.551Z INFO electrs::daemon] waiting for 267644 blocks to download (IBD)
electrs | [2022-05-08T13:46:27.547Z INFO electrs::signals] notified via SIG2
electrs | [2022-05-08T13:46:27.552Z INFO electrs::db] closing DB at /data/db/bitcoin
electrs | [2022-05-08T13:46:27.556Z INFO electrs::server] electrs stopped: bitcoin RPC polling interrupted
electrs |
electrs | Caused by:
electrs | exiting due to signal

Tor logs

Attaching to umbrel_app_tor_1, umbrel_app_3_tor_1, tor, umbrel_app_2_tor_1
app_2_tor_1 | May 08 13:33:41.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
app_2_tor_1 | May 08 13:33:41.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
app_2_tor_1 | May 08 13:33:41.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
app_2_tor_1 | May 08 13:33:44.000 [notice] Bootstrapped 100% (done): Done
app_2_tor_1 | May 08 13:41:57.000 [notice] Your system clock just jumped 479 seconds forward; assuming established circuits no longer work.
app_2_tor_1 | May 08 13:42:21.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60s after 18 timeouts and 243 buildtimes.
app_2_tor_1 | May 08 13:42:45.000 [notice] Guard Logforme3 ($8F6A78B1EA917F2BF221E87D14361C050A70CCC3) is failing more circuits than usual. Most likely this means the Tor network is overloaded. Success counts are 111/159. Use counts are 60/60. 112 circuits completed, 0 were unusable, 1 collapsed, and 1 timed out. For reference, your timeout cutoff is 60 seconds.
app_2_tor_1 | May 08 13:42:46.000 [warn] Guard Logforme3 ($8F6A78B1EA917F2BF221E87D14361C050A70CCC3) is failing a very large amount of circuits. Most likely this means the Tor network is overloaded, but it could also mean an attack against you or potentially the guard itself. Success counts are 111/223. Use counts are 60/60. 112 circuits completed, 0 were unusable, 1 collapsed, and 1 timed out. For reference, your timeout cutoff is 60 seconds.
app_2_tor_1 | May 08 13:42:49.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60s after 18 timeouts and 173 buildtimes.
app_2_tor_1 | May 08 13:46:27.000 [notice] Catching signal TERM, exiting cleanly.
tor | May 08 13:45:54.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor | May 08 13:45:55.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $6BCB964AB74E23F8986BDA905697D3A6BE08AF28~F3Netze [a1F4rc0sg1V5geDuZS44nmmzL/O6MnRUTnI66M1VoDk] at 185.220.100.252. Retrying on a new circuit.
tor | May 08 13:46:03.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $C6ED7D37AA3CCAA586B22E0A30B395A5E582440A~relayongrankhul [ZYCLOdl5cM3kl2ZmbphkOO9+4IGv6kB+WdOjoNzhLkQ] at 185.220.100.244. Retrying on a new circuit.
tor | May 08 13:46:04.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $C6ED7D37AA3CCAA586B22E0A30B395A5E582440A~relayongrankhul [ZYCLOdl5cM3kl2ZmbphkOO9+4IGv6kB+WdOjoNzhLkQ] at 185.220.100.244. Retrying on a new circuit.
tor | May 08 13:46:10.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $C6ED7D37AA3CCAA586B22E0A30B395A5E582440A~relayongrankhul [ZYCLOdl5cM3kl2ZmbphkOO9+4IGv6kB+WdOjoNzhLkQ] at 185.220.100.244. Retrying on a new circuit.
tor | May 08 13:46:18.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $3A9559477D72F71215850C97FA62A0DA7380964B~PawNetBlue [BEu6bqJyjHOQPqjkvWGPRREBRh/cfGAlfK81GK4drhc] at 185.83.214.69. Retrying on a new circuit.
tor | May 08 13:46:19.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $3A9559477D72F71215850C97FA62A0DA7380964B~PawNetBlue [BEu6bqJyjHOQPqjkvWGPRREBRh/cfGAlfK81GK4drhc] at 185.83.214.69. Retrying on a new circuit.
tor | May 08 13:46:19.000 [notice] Tried for 127 seconds to get a connection to [scrubbed]:8333. Giving up.
tor | May 08 13:46:25.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $3A9559477D72F71215850C97FA62A0DA7380964B~PawNetBlue [BEu6bqJyjHOQPqjkvWGPRREBRh/cfGAlfK81GK4drhc] at 185.83.214.69. Retrying on a new circuit.
tor | May 08 13:46:27.000 [notice] Catching signal TERM, exiting cleanly.
app_3_tor_1 | May 08 13:42:07.000 [notice] Bootstrapped 15% (handshake_done): Handshake with a relay done
app_3_tor_1 | May 08 13:42:07.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
app_3_tor_1 | May 08 13:42:07.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
app_3_tor_1 | May 08 13:42:07.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
app_3_tor_1 | May 08 13:42:20.000 [notice] Bootstrapped 100% (done): Done
app_3_tor_1 | May 08 13:42:47.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60s after 18 timeouts and 248 buildtimes.
app_3_tor_1 | May 08 13:44:39.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60s after 18 timeouts and 139 buildtimes.
app_3_tor_1 | May 08 13:45:39.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 120s after 18 timeouts and 0 buildtimes.
app_3_tor_1 | May 08 13:45:48.000 [warn] Guard lazybear ($536E7674C8279803EDF1CC10B6F67B3959B025C2) is failing a very large amount of circuits. Most likely this means the Tor network is overloaded, but it could also mean an attack against you or potentially the guard itself. Success counts are 65/176. Use counts are 18/87. 134 circuits completed, 69 were unusable, 0 collapsed, and 217 timed out. For reference, your timeout cutoff is 120 seconds.
app_3_tor_1 | May 08 13:46:27.000 [notice] Catching signal TERM, exiting cleanly.
app_tor_1 | May 08 13:33:37.000 [notice] Bootstrapped 14% (handshake): Handshaking with a relay
app_tor_1 | May 08 13:33:38.000 [notice] Bootstrapped 15% (handshake_done): Handshake with a relay done
app_tor_1 | May 08 13:33:38.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
app_tor_1 | May 08 13:33:38.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
app_tor_1 | May 08 13:33:38.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
app_tor_1 | May 08 13:33:43.000 [notice] Bootstrapped 100% (done): Done
app_tor_1 | May 08 13:41:57.000 [notice] Your system clock just jumped 479 seconds forward; assuming established circuits no longer work.
app_tor_1 | May 08 13:42:04.000 [notice] Guard torrelaygermany ($614690D0111BB16428B816DB9E998C0E5EA3F495) is failing more circuits than usual. Most likely this means the Tor network is overloaded. Success counts are 138/198. Use counts are 59/59. 140 circuits completed, 0 were unusable, 2 collapsed, and 3 timed out. For reference, your timeout cutoff is 60 seconds.
app_tor_1 | May 08 13:42:18.000 [warn] Guard torrelaygermany ($614690D0111BB16428B816DB9E998C0E5EA3F495) is failing a very large amount of circuits. Most likely this means the Tor network is overloaded, but it could also mean an attack against you or potentially the guard itself. Success counts are 138/277. Use counts are 90/90. 139 circuits completed, 0 were unusable, 0 collapsed, and 1 timed out. For reference, your timeout cutoff is 60 seconds.
app_tor_1 | May 08 13:46:27.000 [notice] Catching signal TERM, exiting cleanly.

App logs

pi-hole

server_1 | Using IPv4 and IPv6
server_1 | ::: Preexisting ad list /etc/pihole/adlists.list detected (exiting setup_blocklists early)
server_1 | https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
server_1 | ::: Testing pihole-FTL DNS: FTL started!
server_1 | ::: Testing lighttpd config: Syntax OK
server_1 | ::: All config checks passed, cleared for startup …
server_1 | ::: Enabling Query Logging
server_1 | [i] Enabling logging…
[✓] Logging has been enabled!
server_1 | ::: Docker start setup complete
server_1 | Checking if custom gravity.db is set in /etc/pihole/pihole-FTL.conf
server_1 | Pi-hole version is v5.6 (Latest: v5.10)
server_1 | AdminLTE version is v5.8 (Latest: v5.12)
server_1 | FTL version is v5.11 (Latest: v5.15)
server_1 | Container tag is: 2021.10.1
server_1 | [cont-init.d] 20-start.sh: exited 0.
server_1 | [cont-init.d] done.
server_1 | [services.d] starting services
server_1 | Starting pihole-FTL (no-daemon) as root
server_1 | Starting crond
server_1 | Starting lighttpd
server_1 | [services.d] done.
server_1 | Stopping pihole-FTL
server_1 | Stopping lighttpd
server_1 | Stopping cron
server_1 | [cont-finish.d] executing container finish scripts…
server_1 | [cont-finish.d] done.
server_1 | [s6-finish] waiting for services.
server_1 | [s6-finish] sending all processes the TERM signal.
server_1 | [s6-finish] sending all processes the KILL signal and exiting.

==== Result ====

The debug script did not automatically detect any issues with your Umbrel.
umbrel@umbrel:~ $

Thanks for any assistance for this new user! I'm trying to learn as much as possible.

It seems Umbrel can't finish the update, maybe an SD card issue. Notice that the SD card updater log bails out with "An update is already in progress", which suggests the interrupted update left a stale status flag behind. You can try burning another SD card with Umbrel v0.4.17, then clear the flag and reset Umbrel with the command:

rm -f ~/umbrel/statuses/update-in-progress && sudo shutdown now

Wait around 3 minutes for Umbrel to stop completely, power it off, plug in the new SD card, and power it back on. Then post new logs here.
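To make the failure mode concrete: the updater refuses to run while a guard file exists, so once that file goes stale, every retry exits immediately. A minimal sketch of the pattern, using a throwaway directory rather than the real ~/umbrel/statuses path (the filename mirrors the one in the rm command above; the echoed message mirrors the updater log):

```shell
# Simulate the stale update lock in a scratch directory. On a real
# Umbrel node the flag lives at ~/umbrel/statuses/update-in-progress.
statuses=$(mktemp -d)
touch "$statuses/update-in-progress"

# Roughly the check the updater performs: if the flag exists, bail out.
if [ -f "$statuses/update-in-progress" ]; then
  echo "An update is already in progress. Exiting now."
fi

# The fix: delete the stale flag...
rm -f "$statuses/update-in-progress"

# ...after which the same check passes and an update can proceed.
[ ! -f "$statuses/update-in-progress" ] && echo "lock cleared"
rm -rf "$statuses"
```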


Hi! So I ended up wiping the SD card and SSD and starting from scratch, and all is good now!

Well, if you did not change any parts, you likely still have a hardware problem waiting for the worst moment to show up, as usual. Let's see. Don't put much money on this node until it has been well tested.
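For testing the drive before trusting it, the usual tools are smartctl (SMART health report) and badblocks. As a lighter, self-contained sanity pass, you can also write a scratch file to the mounted SSD and verify it reads back intact (illustrative only; it touches a file on the filesystem, not the raw device, and a cached read can mask marginal hardware, so treat a pass as necessary rather than sufficient):

```shell
# Write 64 MiB of zeros to a scratch file, flush it to disk, then
# compare its checksum against a freshly generated zero stream.
# Any mismatch or I/O error here is a red flag for the drive.
scratch=$(mktemp stress.XXXXXX)
dd if=/dev/zero of="$scratch" bs=1M count=64 conv=fsync 2>/dev/null
want=$(head -c 67108864 /dev/zero | sha256sum | cut -d' ' -f1)
got=$(sha256sum "$scratch" | cut -d' ' -f1)
if [ "$got" = "$want" ]; then
  echo "read-back OK"
else
  echo "read-back MISMATCH"
fi
rm -f "$scratch"
```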
