Fluxnode Setup on R720XD Unraid

Did any of these notes help?

Send me some Flux. :)

Address:                           t1X5WnEuz8EwLkq3s2CSD951XWwZKSVjh6a



24 July 2022

Node offline?

sudo reboot

Update benchmarks. Make sure the benchmark program is updated and that the benchmarks pass.
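For reference, the check/restart commands (the same ones listed further down in these notes), wrapped in a guard so it degrades gracefully on a machine without the Flux tools installed:

```shell
# Check benchmark status, and restart the bench if it looks failed.
# Guarded so it degrades gracefully where fluxbench-cli isn't installed.
if command -v fluxbench-cli >/dev/null 2>&1; then
    bench_status=$(fluxbench-cli getbenchmarks)   # inspect the current result
    echo "$bench_status"
    fluxbench-cli restartnodebenchmarks           # rerun if it failed
else
    bench_status="fluxbench-cli not installed on this machine"
    echo "$bench_status"
fi
```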

It took 25 blocks yesterday to go from 'Started' to 'Confirmed' on my Zelcore status page. Is this the new standard?

4 June 2022

I was not seeing my node assets and rewards ANYWHERE in Zelcore. Simple fix: somehow the 'Investments' wallet had NO assets listed in it. All I had to do was add FLUX. That's a dumb default, IMHO.


Update 23 May 2022

To all the naysayers who say the 2697v2 in a VM is 'on the bubble' for pass/fail: the benchmarks haven't failed once. It's not consumer gear, and nothing outside the VM is pushing the CPU, so the odds of it failing often enough to be a problem are slim.

Plenty of RAM/CPU/disk speed here.

First rewards about 5 FLUX for Cumulus.

Update May 10 2022:

Got my 2697v2 in and it performs just well enough to pass, with a lowest EPS of 61.xx to 63.xx.

It might be worth considering a lower-core-count CPU with higher clocks. This is at stock clocks, so I might look into a small overclock, around 5% or so, just for some buffer.

I had to reinitialize the node using the script and re-download the bootstrap file. 

For some reason, if the node is too 'stale' (mine hadn't been online for about 10 days), this might be necessary.

The setup guide mentions port 53 and one other port. You don't appear to need them, just the main ranges. Apparently U-Verse TV also uses 53 and the other port the setup guide on Medium mentioned.



The Flux Network tab in the GUI (http://192.168.1.xxx:16126/flux/fluxnetwork) won't populate until the node is up.

Flux Node setup notes, 29 Apr 2022.

The formatting here is very ugly; these are really just notes at this point.

If any of this helps, you can send me Flux at t1Qxm2xUsyeqShKemb5Q4tYUnduWKoQpzeG

Help me get that node level up. :)

Machine: Dell R720xd with single CPU.

Before you read any further: the E5-2630 0 Xeon will not pass the benchmark for even a Cumulus node.

Never ever.

It would have to be about 20% faster than what I am benching it at. It does around 51 EPS, and it needs 60 events per second per core for these node specs:

https://docs.google.com/spreadsheets/d/1qI6poLOieT3TgsAolFicYAyluby91F_BD4mcXblKP_8/edit#gid=1619256168
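Quick sanity check on the gap the spreadsheet implies (needs ~60 EPS per core; this chip benches around 51):

```shell
# How far short the E5-2630 falls: ~51 EPS measured vs. ~60 required.
deficit=$(awk 'BEGIN { printf "%.0f", (60 / 51 - 1) * 100 }')
echo "needs roughly ${deficit}% more single-thread speed"   # roughly 18%
```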

You're going to need a fair amount of capital to run anything higher than a Cumulus node anyway.

The E5-2690 v2 seems to be the choice for the R720XD: it's a little cheaper, uses a little less power, and its cores are clocked over 10% higher than the 12-core 2697. Many people also put these in the 2013 Mac Pros (got one) for the same reason: single-core perf is the most visible difference in most real-world usage.

The 2697 v2 might also be a good choice. It turbos almost as high on a single core and has a 20% larger cache.

$70 on eBay, and that's including a 1-yr SquareTrade warranty: https://ebay.us/h9JAOj

NVME drive holding the VM disk image. Nothing special.

The video and Medium post are pretty good, but the script now asks for a "FluxNode Identity Key".



Here's my sysbench output:


sysbench --test=cpu --threads=1 --cpu-max-prime=60000 --time=20 run

WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.

sysbench 1.0.18 (using system LuaJIT 2.1.0-beta3)


Running the test with following options:

Number of threads: 1

Initializing random number generator from current time



Prime numbers limit: 60000


Initializing worker threads...


Threads started!


CPU speed:

    events per second:    48.89


General statistics:

    total time:                          20.0022s

    total number of events:              978


Latency (ms):

         min:                                   18.47

         avg:                                   20.42

         max:                                   46.10

         95th percentile:                       24.83

         sum:                                19969.16


Threads fairness:

    events (avg/stddev):           978.0000/0.00

    execution time (avg/stddev):   19.9692/0.00
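Sanity-checking sysbench's EPS figure from the totals above:

```shell
# events per second = total events / total time, per the sysbench output
eps=$(awk 'BEGIN { printf "%.2f", 978 / 20.0022 }')
echo "$eps EPS"   # 48.89, matching the 'events per second' line above
```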


Restart benchmarks:

fluxbench-cli restartnodebenchmarks



UPDATE, 8 Nov 2022: Benchmark 3.5.0 is out and I got 67 EPS, about 12% more than required, which is comfortable-ish. Sometimes it has to rerun a bench.



The IP of your VM under Unraid will be different from the server IP. So, to access the Flux node dashboard it will be 192.168.1.xxx:16126, where xxx completes the IP of your VM. I forgot this. Duh.


Notes: 


--probably have to pass ports through to the VM, duh

--how to make sure vm gets allocated the same IP
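On that last note: the simplest route is a DHCP reservation for the VM's MAC address in your router. The other option is a static IP inside the guest via netplan. A sketch, assuming the Ubuntu-default config file and an interface named enp1s0 (yours may differ; check with `ip link`), with the addresses as placeholders for your LAN:

```yaml
# /etc/netplan/00-installer-config.yaml (hypothetical sketch; interface
# name and addresses are placeholders, adjust to your LAN)
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
      addresses: [192.168.1.xxx/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Then `sudo netplan apply`. The DHCP-reservation route is less invasive if your router supports it.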


DISK Benchmarking.

dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync; rm -rf sb-io-test


The above command will show you disk write speed. My NVMe, where the VM disk is located, gets 350-370 MB/s. Enough for almost the largest node level.
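For reference, the dd flags above (bs=64k, count=16k) work out to a 1 GiB test file:

```shell
# total bytes written = block size x block count
bytes=$((64 * 1024 * 16 * 1024))    # 64 KiB blocks x 16384 blocks
echo "$bytes bytes"                 # 1073741824 bytes
echo "$((bytes / 1024 / 1024)) MiB" # 1024 MiB = 1 GiB
```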


Disk size is only 100GB even though you allocated more? This is normal for Ubuntu's default LVM install; it leaves the rest of the volume group unallocated.


See:


$ df -Th 


Filesystem                        Type      Size  Used Avail Use% Mounted on

udev                              devtmpfs  3.9G     0  3.9G   0% /dev

tmpfs                             tmpfs     796M  1.2M  795M   1% /run

/dev/mapper/ubuntu--vg-ubuntu--lv ext4       98G   28G   66G  30% /

tmpfs                             tmpfs     3.9G     0  3.9G   0% /dev/shm

tmpfs                             tmpfs     5.0M     0  5.0M   0% /run/lock

tmpfs                             tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup

/dev/loop0                        squashfs   62M   62M     0 100% /snap/core20/1405

/dev/loop1                        squashfs   62M   62M     0 100% /snap/core20/1434

/dev/loop2                        squashfs   68M   68M     0 100% /snap/lxd/21835

/dev/loop3                        squashfs   68M   68M     0 100% /snap/lxd/22753

/dev/loop4                        squashfs   44M   44M     0 100% /snap/snapd/15177

/dev/loop5                        squashfs   45M   45M     0 100% /snap/snapd/15534

/dev/vda2                         ext4      1.5G  213M  1.2G  16% /boot

tmpfs                             tmpfs     796M     0  796M   0% /run/user/1001


Here's how to resize: 


:~$ sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

  Size of logical volume ubuntu-vg/ubuntu-lv changed from 100.00 GiB (25600 extents) to <254.50 GiB (65151 extents).

  Logical volume ubuntu-vg/ubuntu-lv successfully resized.

:~$ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

resize2fs 1.45.5 (07-Jan-2020)

Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required

old_desc_blocks = 13, new_desc_blocks = 32

The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 66714624 (4k) blocks long.
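Side note: lvextend can also grow the filesystem in the same step with -r (--resizefs), so the separate resize2fs call isn't needed. A sketch, guarded so it only fires where that LV actually exists:

```shell
# Grow the LV and the ext4 filesystem on it in one step with -r.
# Guarded so this no-ops on machines without that logical volume.
lv=/dev/mapper/ubuntu--vg-ubuntu--lv
if [ -e "$lv" ]; then
    sudo lvextend -r -l +100%FREE "$lv"
fi
root_fs=$(df -Th / | tail -n 1)   # confirm the size of / afterwards
echo "$root_fs"
```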


List of Commands for Flux nodes that might be useful to have.

▶  COMMANDS TO MANAGE FLUX DAEMON.

📌 Start Flux daemon: sudo systemctl start zelcash

📌 Stop Flux daemon: sudo systemctl stop zelcash

📌 Help list: flux-cli help


▶  COMMANDS TO MANAGE BENCHMARK.

📌 Get info: fluxbench-cli getinfo

📌 Check benchmark: fluxbench-cli getbenchmarks

📌 Restart benchmark: fluxbench-cli restartnodebenchmarks

📌 Stop benchmark: fluxbench-cli stop

📌 Start benchmark: sudo systemctl restart zelcash


▶  COMMANDS TO MANAGE FLUX.

📌 Summary info: pm2 info flux

📌 Logs in real time: pm2 monit

📌 Stop Flux: pm2 stop flux

📌 Start Flux: pm2 start flux


▶  COMMANDS TO MANAGE WATCHDOG.

📌 Stop watchdog: pm2 stop watchdog

📌 Start watchdog: pm2 start watchdog --watch

📌 Restart watchdog: pm2 reload watchdog --watch

📌 Error logs: ~/watchdog/watchdog_error.log

📌 Logs in real time: pm2 monit


📌 IMPORTANT: After installation, check 'pm2 list'. If it doesn't work, run 'source /home/xxxxx/.bashrc'


📌 To access your Flux frontend, enter this as your URL: xxx.xx.xx.x:16126


Use this script for configs/tests:

bash -i <(curl -s https://raw.githubusercontent.com/RunOnFlux/fluxnode-multitool/master/multitoolbox.sh)

---
To clear your logs

cat /dev/null > .flux/debug.log && \
cat /dev/null > zelflux/debug.log && \
cat /dev/null > zelflux/error.log && \
cat /dev/null > .fluxbenchmark/debug.log && \
cat /dev/null > .fluxbenchmark/benchmark_debug_error.log && \
cat /dev/null > .fluxbenchmark/flux_daemon_debug_error.log && \
cat /dev/null > watchdog/watchdog_error.log && \
cat /dev/null > benchmark_debug_error.log
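An equivalent with truncate(1), a little less repetitive. Shown here on throwaway files in /tmp; on the node you'd point it at the real log paths from the one-liner above:

```shell
# Empty log files in place with truncate instead of cat /dev/null.
# Demo uses throwaway files; substitute the real paths on the node
# (.flux/debug.log, zelflux/debug.log, etc.).
mkdir -p /tmp/fluxlogs
echo "old log data" > /tmp/fluxlogs/debug.log
echo "old log data" > /tmp/fluxlogs/error.log
truncate -s 0 /tmp/fluxlogs/debug.log /tmp/fluxlogs/error.log
wc -c /tmp/fluxlogs/debug.log /tmp/fluxlogs/error.log   # both now 0 bytes
```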
---

To see your node balances, go to https://paoverview.app.runonflux.io/ and enter your node address. Mine populated after a couple of weeks.



