Interplanetary File System: IPFS Installation

Started by Optimitron, Aug 13, 2022, 12:37 AM




For the purity of the experiment, I suggest installing it right away on some external server, since we will look at a few pitfalls of working in local versus remote mode. If you later decide to remove it, that won't take long; there isn't much to it.

Install go

Official documentation
Check it for the latest version.

Note: it is better to install IPFS as the user who will use it most often. The reason is that below we will look at mounting via FUSE, and there are subtleties there.

cd ~
curl -O https://dl.google.com/go/go1.12.9.linux-amd64.tar.gz
tar xvf go1.12.9.linux-amd64.tar.gz
sudo chown -R root:root ./go
sudo mv go /usr/local
rm go1.12.9.linux-amd64.tar.gz

Then you need to update the environment:

echo 'export GOPATH=$HOME/work' >> ~/.bashrc
echo 'export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin' >> ~/.bashrc
source ~/.bashrc

Checking that go is installed

go version

Install IPFS

I liked the installation method through ipfs-update the most.

Install it with the command

go get -v -u

After that, you can run the following commands:

ipfs-update versions - list all versions available for download.
ipfs-update version - show the currently installed version (until IPFS is installed, it will be none).
ipfs-update install latest - install the latest version of IPFS. Instead of latest you can specify any version from the list of available ones.

Installing ipfs

ipfs-update install latest


ipfs --version

That is basically all there is to the installation itself.

Start IPFS


First you need to perform initialization.

ipfs init

In response, you will see something like this:

 ipfs init
initializing IPFS node at /home/USERNAME/.ipfs
generating 2048-bit RSA keypair...done
peer identity: QmeCWX1DD7Hnxxxxxxxxxxxxxxxxxxxxxxxxxxxx
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme

You can run the suggested command

ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme


Hello and Welcome to IPFS!

██╗██████╗ ███████╗███████╗
██║██╔══██╗██╔════╝██╔════╝
██║██████╔╝█████╗  ███████╗
██║██╔═══╝ ██╔══╝  ╚════██║
██║██║     ██║     ███████║
╚═╝╚═╝     ╚═╝     ╚══════╝

If you're seeing this, you have successfully installed
IPFS and are now interfacing with the ipfs merkledag!

 -------------------------------------------------------
| Warning:                                              |
|   This is alpha software. Use at your own discretion! |
|   Much is missing or lacking polish. There are bugs.  |
|   Not yet secure. Read the security notes for more.   |
 -------------------------------------------------------

Check out some of the other files in this directory:

  ./quick-start <-- usage examples
  ./readme <-- this file
  ./security notes

Here, in my opinion, the fun begins. Already at the installation stage, the team starts using its own technology. The suggested hash QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv is not generated specifically for you; it is baked into the release. That is, before the release they prepared a welcome text, uploaded it to IPFS, and added its address to the installer.
I think that is very cool. And this file (more precisely, the entire folder) can now be viewed not only locally but also on the official gateway. At the same time, you can be sure the contents of the folder have not changed in any way, because if they had changed, the hash would have changed too.

By the way, in this respect IPFS resembles a version control system. If you change the source files in the folder and upload the folder to IPFS again, it will get a new address. At the same time, the old folder will not disappear on its own and will remain accessible at its previous address.
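This principle can be illustrated without IPFS at all, with plain sha256 hashing (a rough sketch: real IPFS CIDs are multihashes over a DAG, not a raw sha256 of the file):

```shell
# Content determines the address: any change produces a new "address",
# while the old content keeps its old one.
echo 'hello v1' > note.txt
h1=$(sha256sum note.txt | cut -d' ' -f1)   # "address" of version 1

echo 'hello v2' > note.txt
h2=$(sha256sum note.txt | cut -d' ' -f1)   # new content, new "address"

echo 'hello v1' > note.txt                 # restore the old content
h3=$(sha256sum note.txt | cut -d' ' -f1)   # same "address" as version 1 again
```

Here h1 and h2 differ, while h1 and h3 are identical: the address follows the content, not the file name.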

Direct launch

ipfs daemon

You should receive a response like this:

ipfs daemon
Initializing daemon...
go-ipfs version: 0.4.22-
Repo version: 7
System version: amd64/linux
golang version: go1.12.7
Swarm listening on /ip4/x.x.x.x/tcp/4001
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip6/::1/tcp/4001
Swarm listening on /p2p-circuit
Swarm announcing /ip4/127.0.0.1/tcp/4001
Swarm announcing /ip6/::1/tcp/4001
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready

Opening the doors to the Internet

Pay attention to these two lines:

API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080

Now, if you installed IPFS locally, you will access the IPFS interfaces at local addresses and everything will be available to you (for instance, localhost:5001/webui/). But when it is installed on an external server, the gateways are closed to the Internet by default. There are two gateways:

    Webui admin panel (github) on port 5001.
    External API on port 8080 (readonly).

For now, both ports (5001 and 8080) can be left open for experiments, but on a production server port 5001 should of course be closed with a firewall. There is also port 4001, which other peers need in order to find you. It should be left open to outside requests.

Open ~/.ipfs/config for editing and find these lines in it:

"Addresses": {
  "Swarm": [
    "/ip4/0.0.0.0/tcp/4001",
    "/ip6/::/tcp/4001"
  ],
  "Announce": [],
  "NoAnnounce": [],
  "API": "/ip4/127.0.0.1/tcp/5001",
  "Gateway": "/ip4/127.0.0.1/tcp/8080"
}

Change 127.0.0.1 in the API and Gateway addresses to your server's IP and save the file, then restart ipfs (stop the running command with Ctrl+C and start it again).

You should get:

WebUI: http://ip_your_server:5001/webui
Gateway (readonly) server listening on /ip4/your_server_ip/tcp/8080

Now the external interfaces should be available.



The above readme file should open.


The web interface should open.

If webui works for you, you can change the IPFS settings right in it, and view statistics as well; below, however, I will cover configuration directly through the config file. This is not critical, but it is better to remember exactly where the config is and what to do with it, because if the web interface stops working, things get harder.

Setting up a web interface to work with your server

The fact is that webui, in my opinion, behaves rather ambiguously. First it tries to connect to the API of the server where the interface was opened (based on the address in the browser, of course), and if that fails, it tries to connect to the local gateway.
And if you have IPFS running locally, webui will work fine for you, except that you will be working with the local IPFS rather than the external one, even though you opened webui on an external server. Then you upload files and for some reason don't see them on the external server...

And if it is not running locally, we get a connection error. In our case the error is most likely caused by CORS, which webui itself points out, suggesting an addition to the config.

ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://yourserver_ip:5001", "https://webui.ipfs.io"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "GET", "POST"]'

I simply set a wildcard instead:

ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'

The added headers can be found in the same ~/.ipfs/config. In my case it is

"API": {
  "HTTPHeaders": {
    "Access-Control-Allow-Origin": [
      "*"
    ]
  }
}

We restart ipfs and see that webui connects successfully (at least it should, provided you opened the gateways to outside requests as described above).

Now you can upload folders and files directly through the web interface, as well as create your own folders.

Mounting the FUSE file system

Here is a rather interesting feature.

We can add files (as well as folders) not only through the web interface but also directly from the terminal, for instance:

ipfs add test -r
added QmfYuz2gegRZNkDUDVLNa5DXzKmxxxxxxxxxx test/test.txt
added QmbnzgRVAP4fL814h5mQttyqk1aURxxxxxxxxxxxx test

The last hash is the hash of the root folder.

Using this hash, we can open the folder on any ipfs node (one that can find our node and get the contents): in the web interface on port 5001 or 8080, or locally via ipfs.

ipfs ls QmbnzgRVAP4fL814h5mQttyqk1aUxxxxxxxxxxxxx
QmfYuz2gegRZNkDUDVLNa5DXzKmKVxxxxxxxxxxxxxx 10 test.txt

But you can also open it like a regular folder.

Let's create two folders at the root and grant rights to them to our user.

sudo mkdir /ipfs /ipns
sudo chown USERNAME /ipfs /ipns

and restart ipfs with --mount flag

ipfs daemon --mount

You can create folders in other places and specify the path to them through the parameters ipfs daemon --mount --mount-ipfs /ipfs_path --mount-ipns /ipns_path

Now reading from this folder is somewhat unusual.

ls -la /ipfs
ls: reading directory '/ipfs': Operation not permitted
total 0

That is, there is no direct access to the root of this folder. But you can get the content, knowing the hash.

ls -la /ipfs/QmbnzgRVAP4fL814h5mQttyqxxxxxxxxxxxxxxxxx
total 0
-r--r--r-- 1 root root 10 Aug 31 07:03 test.txt

cat /ipfs/QmbnzgRVAP4fL814h5mQttyqxxxxxxxxxxxxxxxxx/test.txt

At the same time, even auto-completion works inside the folder when the path is specified.

As I said above, there are subtleties with this kind of mounting: by default, mounted FUSE folders are accessible only to the current user (even root cannot read from such a folder, let alone other users on the system). If you want to make these folders accessible to other users, change "FuseAllowOther": false to "FuseAllowOther": true in the config. But that is not all. If you run IPFS as root, everything is fine. If you run it as a regular user (even with sudo), you will get an error:

mount helper error: fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf

In this case, you need to edit /etc/fuse.conf by uncommenting the #user_allow_other line.
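If you prefer to script that edit, here is a minimal sketch (the helper name is my own; point it at /etc/fuse.conf as root on a real system):

```shell
# Hypothetical helper: uncomment user_allow_other in a fuse.conf-style file.
enable_user_allow_other() {
    conf="$1"
    sed -i 's/^#user_allow_other/user_allow_other/' "$conf"
    grep -q '^user_allow_other' "$conf"   # confirm the option is now active
}
```

Call it as root: enable_user_allow_other /etc/fuse.conf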

After that, restart ipfs.

Known issues with FUSE

More than once the problem has been observed that after restarting ipfs with mounting (and possibly in other cases), the /ipfs and /ipns mount points become unavailable. There is no access to them, and ls -la /ipfs shows ???? in place of the permissions.

I found this solution:

fusermount -z -u /ipfs
fusermount -z -u /ipns

Then restart ipfs.

Adding a Service

Of course, running it in a terminal is only suitable for initial tests. In production, the daemon should start automatically at system startup.

Using sudo, create the file /etc/systemd/system/ipfs.service and write to it:

[Unit]
Description=IPFS Daemon
After=network.target

[Service]
User=USERNAME
ExecStart=/home/USERNAME/work/bin/ipfs daemon --mount
Restart=on-failure

[Install]
WantedBy=multi-user.target


USERNAME, of course, must be replaced with your user, and the full path to the ipfs binary may differ on your system (you must specify the full path).

We enable the service.

sudo systemctl enable ipfs.service

We start the service.

sudo service ipfs start

Checking the status of the service.

sudo service ipfs status

For the purity of the experiment, you can later reboot the server to check that ipfs starts automatically.

Adding peers known to us

Consider a situation where we have IPFS nodes installed both on an external server and locally. On the external server we add some file and try to fetch it locally via IPFS by its CID. What happens? The local server most likely knows nothing about our external server and will simply try to find the file by CID by "asking" all the IPFS peers it can reach (the ones it has already "met"). Those in turn ask others, and so on until the file is found.
Actually, the same thing happens when we try to get the file through the official gateway. If you are lucky, the file is found within a few seconds. If not, it is not found even after several minutes, which greatly affects the comfort of work. But we know where the file will appear first. So why not tell our local server right away, "look there first"? It turns out this is possible.

1. We go to the remote server and look in ~/.ipfs/config:

"Identity": {
    "PeerID": "QmeCWX1DD7HnPSuMHZSh6tFuxxxxxxxxxxxxxxx",

2. Run sudo service ipfs status and look for Swarm entries in it, for instance:

Swarm announcing /ip4/your_server_ip/tcp/4001

3. From these, we assemble the full address of the form "/ip4/ip_your_server/tcp/4001/ipfs/$PeerID".

4. For reliability, we will try to add this address to peers through our local webui.

5. If everything is OK, open the local config ~/.ipfs/config, find "Bootstrap": [... in it,
and add the received address first in the array.

Restart IPFS.
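Steps 1 and 3 can also be scripted. A sketch (the function name is my own, and the IP in the usage example is a placeholder):

```shell
# Hypothetical helper: extract the PeerID from a server's IPFS config
# and assemble the bootstrap multiaddr described in the steps above.
build_peer_addr() {
    config="$1"   # path to the server's ~/.ipfs/config
    ip="$2"       # the server's public IP
    peer_id=$(sed -n 's/.*"PeerID": *"\([^"]*\)".*/\1/p' "$config")
    echo "/ip4/${ip}/tcp/4001/ipfs/${peer_id}"
}
```

Usage: build_peer_addr ~/.ipfs/config ip_your_server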

Now let's add the file to the external server and try to request it on the local one. Should fly fast.

But this functionality is not yet stable. As far as I understand, even if we specify a peer's address in Bootstrap, ipfs changes the list of active peer connections during operation. In any case, a discussion of this and of the ability to specify permanent peers is under way, and it seems some functionality is supposed to be added in ipfs@5.0+.

The list of current peers can be viewed both in the webui and in the terminal.

ipfs swarm peers

In both places you can add your peer manually.

ipfs swarm connect "/ip4/ip_your_server/tcp/4001/ipfs/$PeerID"

Until this functionality is improved, you can write a tool that checks for a connection to the desired peer and adds the connection if it is missing.
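A minimal sketch of such a check, built on the ipfs swarm commands shown above (the address is a placeholder for your server's multiaddr):

```shell
# Hypothetical watchdog: reconnect to a known peer if the connection dropped.
# Run it from cron every few minutes, for instance.
PEER_ADDR="/ip4/ip_your_server/tcp/4001/ipfs/QmYourServerPeerID"

ensure_peer() {
    addr="$1"
    if ipfs swarm peers | grep -qF "$addr"; then
        echo "already connected"
    else
        ipfs swarm connect "$addr" && echo "reconnected"
    fi
}
```

Call ensure_peer "$PEER_ADDR" after the daemon starts or on a schedule.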


Among those already familiar with IPFS there are arguments both for and against it. In fact, the discussion of the day before yesterday is what prompted me to dig into IPFS again. And regarding that discussion: I cannot say I strongly object to any argument of those who spoke (I disagree only with the claim that "one and a half programmers" use IPFS). In general, both sides are right in their own way (especially the comment about checks, which makes you think).
But if we set aside moral and legal assessment, what technical judgment does this technology deserve? Personally, I have some inner feeling that "this is definitely needed and has prospects," but no clear formulation of why. If you look at the existing centralized tools, they are far ahead in many respects (stability, speed, manageability, and so on).
Nevertheless, I have one thought that seems to make sense and that can hardly be implemented without decentralized systems like this. I am swinging rather wide here, but I would formulate it this way: the principle of disseminating information on the Internet must change.

Let me explain. If you think about it, information is now distributed on the principle of "I hope the party I gave it to will protect it, and it will not be lost or obtained by those it was not intended for." Various mail services, cloud storage, and so on are easy examples. And what do we end up with?
In principle, all the highlights are listed in the <irony>wonderful</irony> article "Summer is almost over. There is almost no unleaked data left." That is, the main Internet giants keep getting bigger, they accumulate more and more information, and such leaks are a kind of informational atomic explosion. This has never happened before, and here it is again. At the same time, although many understand there are risks, they will keep entrusting their data to third-party companies: first, there is not much alternative, and second, the companies promise they have patched all the holes and it will never happen again.

What option do I see? It seems to me that data should be distributed openly from the start. But openness here does not mean that everything should be easy to read. I mean openness of storage and distribution, not total openness of reading. I assume information should be distributed together with public keys. The public/private key principle is, after all, almost as old as the Internet. If the information is not confidential and is meant for a wide audience, it is published right away with a public key (but still in encrypted form; anyone can simply decrypt it with the available key).
If it is confidential, it is published without a public key, and the key itself is handed to whoever should have access to the information. The person who should read it only needs the key and should not have to worry about where to get the data itself; they simply pull it from the network (this is the new principle of distribution by content rather than by address).

Thus, for a mass attack, attackers would need to obtain a huge number of private keys, and that can hardly be done in one place. This task, as I see it, is harder than hacking one particular system.

And this closes another problem: confirmation of authorship. Today you can find many quotes on the Internet attributed to our friends. But where is the guarantee that they actually wrote them? If every such record were accompanied by a digital signature, it would be much easier. And it does not matter where the information lives; what matters is the signature, which is of course hard to forge.

I am not a security specialist and cannot say exactly how to apply this properly, but it seems to me that keys of this kind are already used at the level of exchange between IPFS nodes. There are also js-ipfs and example projects like orbit-db.
That is, in theory every device (mobile or otherwise) could easily be equipped with its own encryption and decryption machinery. In that case, everyone only has to take care of saving their private keys, and each person is responsible for their own security instead of being hostage to another human factor at some super-popular Internet giant.


With great difficulty and many minutes of delay, published files can be downloaded either through a gateway on the Internet or locally (even when the file exists and is visible to hundreds of peers). For transferring and distributing files it is, so far, completely unsuitable. It works like torrents did when they first appeared, only worse. I hope it gets better.

By the way, there is a desktop version for Windows with one-click installation; it is much easier to install if you just want to play around.


A good thing. I actively use it in a messenger to store user data, with encryption of course.
The feedback is positive: you can restore data anywhere and in any way, as long as the keys are kept safe.