Tutorial: How to manually download the latest pruned ChainDB [GOOGLE CLOUD]

This solves the problem of a node that never leaves the “pruning” status; the problem is related to the database.

With this tutorial, you can download the latest database manually.

1- Go to Google Cloud

2- Compute Engine > VM instances

3- Click on SSH to open a terminal window connected to your server

4- When the command line window opens, type:

sudo su

5- Enter the command below to go up one folder level, so that the nkn folder from the next step is reachable:

cd ..

6- Enter the command below to go directly to the folder where the change will be made:

cd nkn/nkn-commercial/services/nkn-node/

7- The command below will delete the current database:

rm -r ChainDB

8- The command below will download the new database (this may take a few minutes):

wget https://nkn.org/ChainDB_pruned_latest.tar.gz

9- After the download is finished, use the command below to extract the new database. It will be automatically extracted into a folder called ChainDB, just like the previous one, but containing the new database:

tar -vzxf ChainDB_pruned_latest.tar.gz

10- After extracting, you can delete the ChainDB_pruned_latest.tar.gz file so it does not take up space on the server:

rm ChainDB_pruned_latest.tar.gz

11- Okay, now just restart the server using the reboot command (a consolidated version of these steps appears after the list):

reboot
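For convenience, here are steps 4 to 11 as one command sequence you can paste into the SSH window. This is only a sketch that assumes the same relative paths used above; adjust the folder names if your install differs.

sudo su
cd ..   # go up to the folder that contains nkn, as in step 5
cd nkn/nkn-commercial/services/nkn-node/
rm -r ChainDB
wget https://nkn.org/ChainDB_pruned_latest.tar.gz
tar -vzxf ChainDB_pruned_latest.tar.gz
rm ChainDB_pruned_latest.tar.gz
reboot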


And if you are deploying multiple nodes and want to save time, you could download the pruned DB once and then copy it to the other nodes locally with scp. The snapshot on nkn.org is rate limited, so it might appear to be slow sometimes.
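For example, a sketch like the one below could copy the snapshot from the node that already downloaded it to a second node; the user name, IP address and destination path are placeholders, so replace them with your own values:

# run on the node that already has the snapshot; destination path assumed to match the tutorial above
scp ChainDB_pruned_latest.tar.gz user@10.0.0.2:/home/nkn/nkn-commercial/services/nkn-node/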

Yes, just create an image from the first node.
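If you go the image route, a sketch with gcloud could look like the line below; the image name, source disk and zone are placeholders, and the source instance should normally be stopped before the image is created:

# create a reusable disk image from the first node's boot disk (names and zone are placeholders)
gcloud compute images create nkn-node-image --source-disk=first-node-disk --source-disk-zone=us-central1-a

New VM instances created from that image will start with the ChainDB already in place.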

I did the installation by following the procedures on this page. Do I also need to download this ChainDB file? Can you help me?

You don’t have to download the chain DB, since the nkn node software will automatically sync up the chain DB with other nodes.

But to speed up the initial synchronization of chain DB, you can download the chain DB snapshot as mentioned in the latest post. Again, this is not mandatory but optional.

I got an 80% to 90% gain in speed for the start of mining.

It is not mandatory, but it helps speed up the mining initiation process.

Using other services like VULTR and DIGITAL OCEAN, it will not help you much … but with GOOGLE CLOUD it will certainly help you.


wget -O - "link to where ChainDB…tar.gz file is hosted" -q --show-progress | tar -xzf -

This command downloads and extracts at the same time. A smaller VPS can run out of disk space if you download first and then extract: that is about 12 GB for the archive plus about 13.5 GB extracted, which is already too much space.


Nice, thanks for the contribution. This is a totally viable solution for a VPS with a 25 GB disk. I noticed something very important: it is better to delete the ChainDB folder before extracting the new files. When I didn’t do that, somehow some useless files stayed inside the ChainDB folder, increasing the folder size by 2 GB compared to my other nodes.

Yes, you HAVE to delete the existing folder; if you don’t, there could be problems with the node.

Conflicting data
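Putting those two tips together, a minimal sketch, run from inside the nkn/nkn-commercial/services/nkn-node/ folder (same URL and commands as above):

rm -r ChainDB   # remove the old database first so no stale files are left behind
wget -O - "https://nkn.org/ChainDB_pruned_latest.tar.gz" -q --show-progress | tar -xzf -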


Does this mean that with this you don’t need to wait for the full chain database to be downloaded at the beginning? (It took me more than 48 hours.)


I only did this process using Google’s cloud service; with other cloud services you don’t have to.

I saved time in updating the database.

I’ve tried to archive the ChainDB folder from one of my machines (Google Cloud) and unarchive it on my Raspberry Pi, but after restarting it gets stuck on “Pruning”. I re-installed NKN and it’s doing the same. Any idea why?


I don’t know for sure, but one time I did the same thing between two VPS and it went wrong.
Download directly from the link, and delete the ChainDB folder before extracting the ChainDB_pruned_latest.tar.gz file.


Fix your transfer and unpack commands with this one, so it’s done in one go, without wasting space on a small-storage VPS.

wget -O - "https://nkn.org/ChainDB_pruned_latest.tar.gz" -q --show-progress | tar -xzf -

Also, there is no need to delete the ChainDB_pruned_latest.tar.gz file after the transfer, since the archive is never saved to disk.

Hi, I’m getting this message when I run that command:

gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now

I received the same message. Does anyone have an answer?

Maybe in this topic
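That “unexpected end of file” error usually means the download was cut off before the archive finished (the snapshot host is rate limited, as mentioned earlier in this thread). If you have enough disk space, one possible workaround, just a sketch and not an official fix, is to download to a file first, resume if it stops, and test the archive before extracting:

wget -c https://nkn.org/ChainDB_pruned_latest.tar.gz   # -c resumes a partial download if it was interrupted
gzip -t ChainDB_pruned_latest.tar.gz && tar -vzxf ChainDB_pruned_latest.tar.gz   # extract only if the archive tests OK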


Is this software available for Windows OS yet? I am facing the pruning problem and my node status shows as “Not started”. It was syncing well all day long but just broke down a few minutes ago.


Hi, can I do this while the VPS is mining?
If so, when I reboot my VPS, will the mining program automatically run again?
Thank you!
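If nkn-commercial was set up as a system service (as the standard install normally does), it should start again on its own after the reboot. The service name below is an assumption and may differ on your install, so treat this only as a sketch for checking it:

systemctl status nkn-commercial   # assumed service name; shows whether the miner came back up after the reboot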
