IPFS Storage Service with Search Capability

Mahuta

Mahuta (formerly known as IPFS-Store) is a library to aggregate and consolidate files or documents stored by your application on the IPFS network. It provides a solution to collect, store, index, cache and search IPFS data handled by your system in a convenient way.

Project status

Build (master/development), test coverage, Bintray, Docker, and SonarQube quality-gate status badges are maintained on the project's GitHub page.

Features

  • Indexation: Mahuta stores documents or files on IPFS and indexes the hash with optional metadata.
  • Discovery: Indexed documents and files can be searched using complex logical queries or fuzzy/full-text search.
  • Scalable: Optimised for large-scale applications using an asynchronous writing mechanism and caching.
  • Replication: A replica set can be configured to replicate (pin) content across multiple nodes (standard IPFS nodes or IPFS-cluster nodes).
  • Multi-platform: Mahuta can be used as a simple embedded Java library for your JVM-based application or run as a simple, scalable and configurable REST API.



Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

Prerequisites

Mahuta depends on two components:

  • an IPFS node (go or js implementation)
  • a search engine (currently only ElasticSearch is supported)

See how to run these two components first: run IPFS and ElasticSearch.

Java library

  1. Import the Maven dependencies (core module + indexer)
<repository>
    <id>consensys-kauri</id>
    <name>consensys-kauri</name>
    <url>https://consensys.bintray.com/kauri/</url>
</repository>
<dependency>
    <groupId>net.consensys.mahuta</groupId>
    <artifactId>mahuta-core</artifactId>
    <version>${MAHUTA_VERSION}</version>
</dependency>
<dependency>
    <groupId>net.consensys.mahuta</groupId>
    <artifactId>mahuta-indexing-elasticsearch</artifactId>
    <version>${MAHUTA_VERSION}</version>
</dependency>

2. Configure Mahuta to connect to an IPFS node and an indexer

Mahuta mahuta = new MahutaFactory()
    .configureStorage(IPFSService.connect("localhost", 5001))
    .configureIndexer(ElasticSearchService.connect("localhost", 9300, "cluster-name"))
    .defaultImplementation();

3. Execute high-level operations

IndexingResponse indexingResponse = mahuta.prepareStringIndexing("article", "## This is my first article")
    .contentType("text/markdown")
    .indexDocId("article-1")
    .indexFields(ImmutableMap.of("title", "First Article", "author", "greg"))
    .execute();
    
GetResponse getResponse = mahuta.prepareGet()
    .indexName("article")
    .indexDocId("article-1")
    .loadFile(true)
    .execute();
    
SearchResponse searchResponse = mahuta.prepareSearch()
    .indexName("article")
    .query(Query.newQuery().equals("author", "greg"))
    .pageRequest(PageRequest.of(0, 20))
    .execute();
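
Queries can also combine several conditions. The sketch below mirrors the CONTAINS/EQUALS/GT operations of the HTTP API shown later in this article; the exact Java builder method names (contains, greaterThan) are assumptions, so check the Mahuta Java API docs:

SearchResponse filtered = mahuta.prepareSearch()
    .indexName("article")
    .query(Query.newQuery()
        .equals("author", "greg")           // shown in the snippet above
        .contains("title", "Hello")         // assumed builder method, mirrors the CONTAINS operation
        .greaterThan("votes", 5))           // assumed builder method, mirrors the GT operation
    .pageRequest(PageRequest.of(0, 20))
    .execute();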

For more info, see the Mahuta Java API documentation.

Spring-Data

  1. Import the Maven dependencies
<dependency>
    <groupId>net.consensys.mahuta</groupId>
    <artifactId>mahuta-springdata</artifactId>
    <version>${MAHUTA_VERSION}</version>
</dependency>

2. Configure your Spring Data repository

@IPFSDocument(index = "article", indexConfiguration = "article_mapping.json", indexContent = true)
public class Article {
    
    @Id
    private String id;

    @Hash
    private String hash;

    @Fulltext
    private String title;

    @Fulltext
    private String content;

    @Indexfield
    private Date createdAt;

    @Indexfield
    private String createdBy;
}



public class ArticleRepository extends MahutaRepositoryImpl<Article, String> {

    public ArticleRepository(Mahuta mahuta) {
        super(mahuta);
    }
}
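
A minimal usage sketch, assuming standard Spring Data repository semantics and conventional getters/setters on the Article entity (neither is shown above):

ArticleRepository repository = new ArticleRepository(mahuta);

Article article = new Article();
article.setTitle("First Article");                 // assumed setter
article.setContent("## This is my first article"); // assumed setter
article.setCreatedAt(new Date());
article.setCreatedBy("greg");

article = repository.save(article);                // stores the content on IPFS and indexes it
Optional<Article> found = repository.findById(article.getId()); // java.util.Optional, fetches it back by id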

For more info, see the Mahuta Spring Data documentation.

HTTP API with Docker

Prerequisites

Docker

$ docker run -it --name mahuta \
    -p 8040:8040 \
    -e MAHUTA_IPFS_HOST=ipfs \
    -e MAHUTA_ELASTICSEARCH_HOST=elasticsearch \
    gjeanmart/mahuta

Docker Compose

Check out the documentation to configure Mahuta HTTP-API with Docker.
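
For reference, here is a minimal docker-compose sketch wiring the three services together. The service names ipfs and elasticsearch match the MAHUTA_* variables above; the image tags and port mappings are assumptions to adapt from the official documentation:

version: "3"
services:
  ipfs:
    image: ipfs/go-ipfs:latest              # assumption: any recent go-ipfs image
    ports:
      - "4001:4001"                         # swarm port
      - "5001:5001"                         # HTTP API used by Mahuta
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.23   # assumption: a 6.x single-node setup
    environment:
      - discovery.type=single-node
  mahuta:
    image: gjeanmart/mahuta
    ports:
      - "8040:8040"
    environment:
      - MAHUTA_IPFS_HOST=ipfs
      - MAHUTA_ELASTICSEARCH_HOST=elasticsearch
    depends_on:
      - ipfs
      - elasticsearch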

Examples

To access the API documentation, go to Mahuta HTTP API

Create the index article

  • Sample Request:
curl -X POST \
  http://localhost:8040/mahuta/config/index/article \
  -H 'Content-Type: application/json' 

Success Response:

  • Code: 200
    Content:
{
    "status": "SUCCESS"
}

Store and index an article and its metadata

  • Sample Request:
curl -X POST \
  'http://localhost:8040/mahuta/index' \
  -H 'content-type: application/json' \
  -d '{"content":"# Hello world,\n this is my first file stored on **IPFS**","indexName":"article","indexDocId":"hello_world","contentType":"text/markdown","index_fields":{"title":"Hello world","author":"Gregoire Jeanmart","votes":10,"date_created":1518700549,"tags":["general"]}}'

Success Response:

  • Code: 200
    Content:
{
  "indexName": "article",
  "indexDocId": "hello_world",
  "contentId": "QmWHR4e1JHMs2h7XtbDsS9r2oQkyuzVr5bHdkEMYiqfeNm",
  "contentType": "text/markdown",
  "content": null,
  "pinned": true,
  "indexFields": {
    "title": "Hello world",
    "author": "Gregoire Jeanmart",
    "votes": 10,
    "createAt": 1518700549,
    "tags": [
      "general"
    ]
  },
  "status": "SUCCESS"
}

Search by query

  • Sample Request:
curl -X POST \
 'http://localhost:8040/mahuta/query/search?index=article' \
 -H 'content-type: application/json' \
 -d '{"query":[{"name":"title","operation":"CONTAINS","value":"Hello"},{"name":"author.keyword","operation":"EQUALS","value":"Gregoire Jeanmart"},{"name":"votes","operation":"GT","value":"5"}]}'

Success Response:

  • Code: 200
    Content:
{
  "status": "SUCCESS",
  "page": {
    "pageRequest": {
      "page": 0,
      "size": 20,
      "sort": null,
      "direction": "ASC"
    },
    "elements": [
      {
        "metadata": {
          "indexName": "article",
          "indexDocId": "hello_world",
          "contentId": "Qmd6VkHiLbLPncVQiewQe3SBP8rrG96HTkYkLbMzMe6tP2",
          "contentType": "text/markdown",
          "content": null,
          "pinned": true,
          "indexFields": {
            "author": "Gregoire Jeanmart",
            "votes": 10,
            "title": "Hello world",
            "createAt": 1518700549,
            "tags": [
              "general"
            ]
          }
        },
        "payload": null
      }
    ],
    "totalElements": 1,
    "totalPages": 1
  }
}

Download Details:

Author: ConsenSys
Official Website: https://github.com/ConsenSys/Mahuta
License: Apache-2.0 license

#ipfs #blockchain #java 


Kubo: IPFS Implementation in Go

kubo

The oldest IPFS implementation, previously known as "go-ipfs".

What is Kubo?

Kubo (go-ipfs) is the earliest and most widely used implementation of IPFS.

It includes:

  • an IPFS daemon server
  • extensive command line tooling
  • an HTTP Gateway (/ipfs/, /ipns/) for serving content to HTTP browsers
  • an HTTP RPC API (/api/v0) for controlling the daemon node

Note: other implementations exist.

What is IPFS?

IPFS is a global, versioned, peer-to-peer filesystem. It combines good ideas from previous systems such as Git, BitTorrent, Kademlia, SFS, and the Web. It is like a single BitTorrent swarm, exchanging git objects. IPFS provides an interface as simple as the HTTP web, but with permanence built-in. You can also mount the world at /ipfs.

For more info see: https://docs.ipfs.tech/concepts/what-is-ipfs/

Before opening an issue, consider posting in one of the community discussion channels (such as the IPFS forums at https://discuss.ipfs.tech) to ensure you are opening your thread in the right place.

Next milestones

Milestones on GitHub

Security Issues

Please follow SECURITY.md.

Install

The canonical download instructions for IPFS are over at: https://docs.ipfs.tech/install/. It is highly recommended you follow those instructions if you are not interested in working on IPFS development.

System Requirements

IPFS can run on most Linux, macOS, and Windows systems. We recommend running it on a machine with at least 2 GB of RAM and 2 CPU cores (kubo is highly parallel). On systems with less memory, it may not be completely stable.

If your system is resource-constrained, we recommend:

  1. Installing OpenSSL and rebuilding kubo manually with make build GOTAGS=openssl. See the download and compile section for more information on compiling kubo.
  2. Initializing your daemon with ipfs init --profile=lowpower
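
Combined, the two steps look like this (run the rebuild from a kubo source checkout):

$ make build GOTAGS=openssl      # rebuild kubo against OpenSSL
$ ipfs init --profile=lowpower   # initialize the daemon with the low-power profile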

Docker


More info on how to run kubo (go-ipfs) inside Docker can be found here.

Native Linux package managers

Arch Linux

kubo via Community Repo

# pacman -S kubo

kubo-git via AUR

Nix

With the purely functional package manager Nix you can install kubo (go-ipfs) like this:

$ nix-env -i ipfs

You can also install the package by using its attribute name, which is also ipfs.

Solus

In Solus, kubo (go-ipfs) is available in the main repository as go-ipfs.

$ sudo eopkg install go-ipfs

You can also install it through the Solus software center.

openSUSE

Community Package for go-ipfs

Other package managers

Guix

GNU's functional package manager, Guix, also provides a go-ipfs package:

$ guix package -i go-ipfs

Snap

With snap, in any of the supported Linux distributions:

$ sudo snap install ipfs

The snap sets IPFS_PATH to SNAP_USER_COMMON, which is usually ~/snap/ipfs/common. If you want to use ~/.ipfs instead, you can bind-mount it to ~/snap/ipfs/common like this:

$ sudo mount --bind ~/.ipfs ~/snap/ipfs/common

If you want something more sophisticated to escape the snap confinement, we recommend using a different method to install kubo so that it is not subject to snap confinement.

macOS package managers

MacPorts

The package ipfs currently points to kubo (go-ipfs) and is being maintained.

$ sudo port install ipfs

Nix

In macOS you can use the purely functional package manager Nix:

$ nix-env -i ipfs

You can also install the package by using its attribute name, which is also ipfs.

Homebrew

A Homebrew formula ipfs is maintained too.

$ brew install --formula ipfs

Windows package managers

Chocolatey


PS> choco install ipfs

Scoop

Scoop provides kubo as go-ipfs in its 'extras' bucket.

PS> scoop bucket add extras
PS> scoop install go-ipfs

Install prebuilt binaries

Prebuilt binaries are published on the dist.ipfs.io downloads page.

From there:

  • Click the blue "Download kubo" on the right side of the page.
  • Open/extract the archive.
  • Move kubo (ipfs) to your path (install.sh can do it for you).

If you are unable to access dist.ipfs.io, you can also download kubo (go-ipfs) from the project's GitHub Releases page.

Build from Source


kubo's build system requires Go and some standard POSIX build tools:

  • GNU make
  • Git
  • GCC (or some other Go-compatible C compiler) (optional)

To build without GCC, build with CGO_ENABLED=0 (e.g., make build CGO_ENABLED=0).

Install Go


If you need to update, download the latest version of Go.

You'll need to add Go's bin directories to your $PATH environment variable, e.g. by adding these lines to your /etc/profile (for a system-wide installation) or $HOME/.profile:

export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin

(If you run into trouble, see the Go install instructions).

Download and Compile IPFS

$ git clone https://github.com/ipfs/kubo.git

$ cd kubo
$ make install

Alternatively, you can run make build to build the go-ipfs binary (storing it in cmd/ipfs/ipfs) without installing it.

NOTE: If you get an error along the lines of "fatal error: stdlib.h: No such file or directory", you're missing a C compiler. Either re-run make with CGO_ENABLED=0 or install GCC.

Cross Compiling

Compiling for a different platform is as simple as running:

make build GOOS=myTargetOS GOARCH=myTargetArchitecture
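
For example, to build a 64-bit ARM Linux binary:

make build GOOS=linux GOARCH=arm64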

OpenSSL

To build go-ipfs with OpenSSL support, append GOTAGS=openssl to your make invocation. Building with OpenSSL should significantly reduce the background CPU usage on nodes that frequently make or receive new connections.

Note: OpenSSL requires CGO support and, by default, CGO is disabled when cross-compiling. To cross-compile with OpenSSL support, you must:

  1. Install a compiler toolchain for the target platform.
  2. Set the CGO_ENABLED=1 environment variable.
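
Put together, a cross-compile with OpenSSL might look like the sketch below; the aarch64-linux-gnu-gcc toolchain name is an example, so substitute whichever cross-compiler you installed in step 1:

CGO_ENABLED=1 CC=aarch64-linux-gnu-gcc make build GOTAGS=openssl GOOS=linux GOARCH=arm64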

Troubleshooting

  • Separate instructions are available for building on Windows.
  • git is required in order for go get to fetch all dependencies.
  • Package managers often contain out-of-date golang packages. Ensure that go version reports at least 1.10. See above for how to install go.
  • If you are interested in development, please install the development dependencies as well.
  • Shell command completions can be generated with one of the ipfs commands completion subcommands. Read docs/command-completion.md to learn more.
  • See the misc folder for how to connect IPFS to systemd or whatever init system your distro uses.

Updating

Using ipfs-update

IPFS has an updating tool that can be accessed through ipfs update. The tool is not installed alongside IPFS in order to keep that logic independent of the main codebase. To install ipfs update, download it here.

Downloading builds using IPFS

List the available versions of the kubo (go-ipfs) implementation:

$ ipfs cat /ipns/dist.ipfs.io/go-ipfs/versions

Then, to view available builds for a version from the previous command ($VERSION):

$ ipfs ls /ipns/dist.ipfs.io/go-ipfs/$VERSION

To download a given build of a version:

$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_darwin-386.tar.gz # darwin 32-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_darwin-amd64.tar.gz # darwin 64-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_freebsd-amd64.tar.gz # freebsd 64-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_linux-386.tar.gz # linux 32-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_linux-amd64.tar.gz # linux 64-bit build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_linux-arm.tar.gz # linux arm build
$ ipfs get /ipns/dist.ipfs.io/go-ipfs/$VERSION/go-ipfs_${VERSION}_windows-amd64.zip # windows 64-bit build

Getting Started

Usage

Docs: Command-line quick start · Command-line reference

To start using IPFS, you must first initialize IPFS's config files on your system; this is done with ipfs init. See ipfs init --help for information on the optional arguments it takes. After initialization is complete, you can use ipfs mount, ipfs add, and any of the other commands to explore!

Some things to try

Basic proof of 'ipfs working' locally:

echo "hello world" > hello
ipfs add hello
# This should output a hash string that looks something like:
# QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o
ipfs cat <that hash>

Troubleshooting

If you have previously installed IPFS before and you are running into problems getting a newer version to work, try deleting (or backing up somewhere else) your IPFS config directory (~/.ipfs by default) and rerunning ipfs init. This will reinitialize the config file to its defaults and clear out the local datastore of any bad entries.
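
For example, to back up the old repository instead of deleting it:

$ mv ~/.ipfs ~/.ipfs.bak   # keep the old config and datastore around, just in case
$ ipfs init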

Please direct general questions and help requests to our forum or our IRC channel (freenode #ipfs).

If you believe you've found a bug, check the issues list and, if you don't see your problem there, either come talk to us on Matrix chat, or file an issue of your own!

Packages

See IPFS in GO documentation.

Development

Some places to get you started on the codebase:

Map of Implemented Subsystems

WIP: This is a high-level architecture diagram of the various sub-systems of this specific implementation. To be updated with how they interact. Anyone who has suggestions is welcome to comment here on how we can improve this!

CLI, HTTP-API, Architecture Diagram

Origin

Description: Dotted means "likely going away". The "Legacy" parts are thin wrappers around some commands to translate between the new system and the old system. The grayed-out parts on the "daemon" diagram are there to show that the code is all the same, it's just that we turn some pieces on and some pieces off depending on whether we're running on the client or the server.

Testing

make test

Development Dependencies

If you make changes to the protocol buffers, you will need to install the protoc compiler.

Developer Notes

More documentation for developers can be found in the docs folder.

Maintainer Info

Contributing

We ❤️ all our contributors; this project wouldn’t be what it is without you! If you want to help out, please see CONTRIBUTING.md.

This repository falls under the IPFS Code of Conduct.

Please reach out to us in one of our chat rooms.

Author: ipfs
Source Code: https://github.com/ipfs/kubo 
License: Unknown and 2 other licenses found

#go #golang #ipfs 


How to Build a DApp and Host It on IPFS Using Fleek

Deploying an application is a crucial step in the development process. During this stage, the application goes from being hosted locally to being available to its target audience anywhere in the world.

With the growing use of blockchains in application development, you may have wondered how DApps, which interact with smart contracts, are hosted.

In this tutorial, you will learn how to host DApps with Fleek by building a sample decentralized pet adoption application with React, Hardhat, and Alchemy.

What you need before starting this tutorial

This tutorial contains several hands-on steps. To follow along, I recommend that you do the following:

  • Install React, which we will use to build the user interface. I am using React v14 in this tutorial.
  • Install Hardhat, which we will use as our development environment.
  • Create a free account for the Alchemy blockchain development platform.
  • Create a free account for Fleek, which you will learn more about in the next section.
  • Download the MetaMask browser extension.

MetaMask is a cryptocurrency wallet that lets users access DApps through a browser or mobile app. You will also want a test MetaMask account on an Ethereum testnet for trying out smart contracts. I am using the Ropsten Test Network in this tutorial.

What is Fleek?

Fleek is a Web3 solution that aims to make the process of deploying your sites, DApps, and services seamless. Currently, Fleek provides a gateway for hosting your services on the InterPlanetary File System (IPFS) or on Dfinity's Internet Computer (IC).

Fleek describes itself as the Netlify equivalent for Web3 applications. As a result, you will find features similar to Netlify's, such as running builds with Docker images and generating deploy previews.

According to the IPFS blog, "Fleek's main goal for 2022 is to restructure its IPFS infrastructure to further decentralize and incentivize it. It will also include new Web3 infrastructure providers for different pieces of the web building stack."

Fleek offers a solution to an IPFS challenge: your website's hash changes every time you make an update, which makes it difficult to have a fixed address hash. After the initial deployment, Fleek will build, pin, and update your site.

Let's start building our sample DApp in the next section and deploy it using Fleek. We will host the DApp on IPFS.

Building a sample DApp to deploy to Fleek

In this section, we will build a decentralized adoption-tracking system for a pet shop.

If you are familiar with the Truffle Suite, you may recognize parts of this exercise. The inspiration for this DApp comes from the Truffle guide. We will take things a step further by using Alchemy, Hardhat, and React.

To let you focus on writing the smart contract and deploying the DApp, I have already built the UI components and state. The smart contract and the React code will live in a single project.

Simply clone the React application from my GitHub repository to start writing the smart contract:

git clone https://github.com/vickywane/react-web3

Next, change directory into the cloned folder and install the dependencies listed in the package.json file:

# change into the project directory
cd react-web3

# install the application dependencies
npm install

With the React application set up, let’s proceed to create the pet adoption smart contract.

Creating the pet adoption smart contract

Inside the react-web3 directory, create a contracts folder to store the Solidity code for our pet adoption smart contract.

Using your code editor, create a file named Adoption.sol and paste in the code below to create the variables and functions needed within the smart contract, which include:

  • A 16-element array to store the address of each pet adopter
  • A function to adopt a pet
  • A function to retrieve the addresses of all adopted pets
//SPDX-License-Identifier: Unlicense
// ./react-web3/contracts/Adoption.sol
pragma solidity ^0.8.0;

contract Adoption {
  address[16] public adopters;
  event PetAssigned(address indexed petOwner, uint32 petId);

  // adopting a pet
  function adopt(uint32 petId) public {
    require(petId >= 0 && petId <= 15, "Pet does not exist");

    adopters[petId] = msg.sender;
    emit PetAssigned(msg.sender, petId);
  }

  // Retrieving the adopters
  function getAdopters() public view returns (address[16] memory) {
    return adopters;
  }
}
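
Before deploying to Ropsten, you can sanity-check the contract against Hardhat's built-in in-memory network. Below is a minimal sketch; the file name check-adoption-script.js is hypothetical:

// react-web3/contracts/check-adoption-script.js (hypothetical file name)
const { ethers } = require("hardhat");

async function main() {
  const [account] = await ethers.getSigners();

  // Deploy a fresh instance to the in-memory Hardhat network
  const Adoption = await ethers.getContractFactory("Adoption");
  const adoption = await Adoption.deploy();
  await adoption.deployed();

  // Adopt pet 0 and read the adopters array back
  const tx = await adoption.adopt(0);
  await tx.wait();

  const adopters = await adoption.getAdopters();
  console.log("Pet 0 adopted by:", adopters[0]); // should equal account.address
}

main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });

Run it with npx hardhat run contracts/check-adoption-script.js (no --network flag, so Hardhat uses its local network).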

Next, create another file named deploy-contract-script.js inside the contracts folder and paste the JavaScript code below into it. This code acts as a script that uses Hardhat's asynchronous getContractFactory method to create a factory instance of the Adoption smart contract and then deploy it.

// react-web3/contract/deploy-contract-script.js
require('dotenv').config()
const { ethers } = require("hardhat");

async function main() {
  // We get the contract to deploy
  const Adoption = await ethers.getContractFactory("Adoption");
  const adoption = await Adoption.deploy();
  await adoption.deployed();

  console.log("Adoption Contract deployed to:", adoption.address);
}

// We recommend this pattern to be able to use async/await everywhere
// and properly handle errors.
main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error(error);
    process.exit(1);
});

Finally, create a file named hardhat.config.js. This file specifies the Hardhat configuration.

Add the JavaScript code below to the hardhat.config.js file to specify a Solidity version and the URL endpoint for your Ropsten network account.

require("@nomiclabs/hardhat-waffle");
require('dotenv').config();

/**
 * @type import('hardhat/config').HardhatUserConfig
 */
module.exports = {
  solidity: "0.8.4",
  networks: {
    ropsten: {
      url: process.env.ALCHEMY_API_URL,
      accounts: [`0x${process.env.METAMASK_PRIVATE_KEY}`]
    }
  }
};

I am using the ALCHEMY_API_URL and METAMASK_PRIVATE_KEY environment variables to store the URL and private account key values used for the Ropsten network configuration:

  • METAMASK_PRIVATE_KEY is associated with your MetaMask wallet
  • ALCHEMY_API_URL links to an Alchemy application

You can store and access these environment variables within this react-web3 project using a .env file and the dotenv package. We will go over how to do this in the next section.

If you have been following along, you have not created an Alchemy application yet at this point in the project. You will need its API URL endpoint, so let's proceed to create an application on Alchemy.

Creating an Alchemy application

Alchemy provides features that let you connect to an external remote procedure call (RPC) node for a network. RPC nodes make it possible for your DApp and the blockchain to communicate.

Using your web browser, navigate to the Alchemy web dashboard and create a new app.

Provide a name and a description for the app, then select the Ropsten network. Click the "Create App" button to proceed.

After you create the app, you will find it at the bottom of the page.

Click "View Key" to reveal the API keys for the Alchemy app. Take note of the HTTP URL; I have redacted this information in the image below.

Create a .env file inside your Hardhat project, as shown below. You will use this .env file to store your Alchemy app URL and your MetaMask private key.

// react-web3/.env

ALCHEMY_API_URL=<ALCHEMY_HTTP_URL>
METAMASK_PRIVATE_KEY=<METAMASK_PRIVATE_KEY>

Replace the ALCHEMY_HTTP_URL and METAMASK_PRIVATE_KEY placeholders above with the HTTP URL from Alchemy and your MetaMask private key. Follow the MetaMask "Export Private Key" guide to learn how to export this information for your wallet.

Lastly, run the command below to deploy your pet adoption smart contract to the specified Ropsten network:

npx hardhat run contracts/deploy-contract-script.js --network ropsten

As shown in the image below, take note of the address returned to your console after the contract is deployed. You will need this address in the next section.

At this point, the pet adoption smart contract has been deployed. Now let's shift focus to the DApp itself and create functions to interact with the pet adoption smart contract.

Building the DApp frontend

Similar to the pet shop tutorial in the Truffle guide, our DApp will display sixteen different dog breeds available for adoption. Detailed information for each dog is stored in the src/pets.json file. We are using TailwindCSS to style this DApp.

To begin, open the state/context.js file and replace the existing content with the following code:

// react-web3/state/context.js

import React, {useEffect, useReducer} from "react";
import Web3 from "web3";
import {ethers, providers} from "ethers";

const {abi} = require('../../artifacts/contracts/Adoption.sol/Adoption.json')

if (!abi) {
    throw new Error("Adoptiom.json ABI file missing. Run npx hardhat run contracts/deploy-contract-script.js")
}

export const initialState = {
    isModalOpen: false,
    dispatch: () => {
    },
    showToast: false,
    adoptPet: (id) => {
    },
    retrieveAdopters: (id) => {
    },
};

const {ethereum, web3} = window

const AppContext = React.createContext(initialState);
export default AppContext;

const reducer = (state, action) => {
    switch (action.type) {
        case 'INITIATE_WEB3':
            return {
                ...state,
                isModalOpen: action.payload,
            }
        case 'SENT_TOAST':
            return {
                ...state,
                showToast: action.payload.toastVisibility
            }
        default:
            return state;
    }
};

const createEthContractInstance = () => {
    try {
        const provider = new providers.Web3Provider(ethereum)
        const signer = provider.getSigner()
        const contractAddress = process.env.REACT_APP_ADOPTION_CONTRACT_ADDRESS

        return new ethers.Contract(contractAddress, abi, signer)
    } catch (e) {
        console.log('Unable to create ethereum contract. Error:', e)
    }
}

export const AppProvider = ({children}) => {
    const [state, dispatch] = useReducer(reducer, initialState);

    const instantiateWeb3 = async _ => {
        if (ethereum) {
            try {
                // Request account access
                return await ethereum.request({method: "eth_requestAccounts"})
            } catch (error) {
                // User denied account access...
                console.error("User denied account access")
            }
        } else if (web3) {
            return
        }
        return new Web3(Web3.givenProvider || "ws://localhost:8545")
    }

    const adoptPet = async id => {
        try {
            const instance = createEthContractInstance()
            const accountData = await instantiateWeb3()

            await instance.adopt(id, {from: accountData[0]})

            dispatch({
                type: 'SENT_TOAST', payload: {
                    toastVisibility: true
                }
            })

            // close success toast after 3s
            setTimeout(() => {
                dispatch({
                    type: 'SENT_TOAST', payload: {
                        toastVisibility: false
                    }
                })
            }, 3000)
        } catch (e) {
            console.log("ERROR:", e)
        }
    }

    const retrieveAdopters = async _ => {
        try {
            const instance = createEthContractInstance()
            return await instance.getAdopters()
        } catch (e) {
            console.log("RETRIEVING:", e)
        }
    }

    useEffect(() => {
        (async () => { await instantiateWeb3() })()
    })

    return (
        <AppContext.Provider
            value={{
                ...state,
                dispatch,
                adoptPet,
                retrieveAdopters
            }}
        >
            {children}
        </AppContext.Provider>
    );
};

Reading through the code block above, you will observe the following:

  • The Ethereum and Web3 objects are destructured from the browser window. The MetaMask extension injects the Ethereum object into the browser.
  • The createEthContractInstance helper function creates and returns an instance of the pet adoption contract, using the contract ABI and the address from Alchemy.
  • The instantiateWeb3 helper function retrieves and returns the user's account address in an array, using MetaMask to verify that the Ethereum window object is defined.
  • The instantiateWeb3 helper function is also executed in a useEffect hook to ensure users connect with MetaMask immediately after opening the application in the web browser.
  • The adoptPet function expects a numeric petId parameter, creates the Adoption contract instance, and retrieves the user's address using the createEthContractInstance and instantiateWeb3 helper functions.
  • The petId parameter and the user's account address are passed to the adopt method on the pet adoption contract instance to adopt a pet.
  • The retrieveAdopters function executes the getAdopters method on the Adoption instance to retrieve the addresses of all adopted pets.

Save these changes and start the React development server to view the pet shop DApp at http://localhost:4040/.

At this point, the functions within the Adoption contract have been implemented in the state/context.js file but not yet executed. Without authenticating through MetaMask, the user's account address will be undefined and all of the buttons for adopting a pet will be disabled, as shown below:

Let's proceed to add the pet adoption contract address as an environment variable and host the DApp on Fleek.

Deploying the React DApp to Fleek

A DApp can be hosted on Fleek through the Fleek dashboard, the Fleek CLI, or even programmatically using Fleek GitHub Actions. In this section, you will learn to use the Fleek CLI as we host the pet shop DApp on IPFS through Fleek.

Setting up the Fleek CLI

Run the command below to install the Fleek CLI globally on your computer:

npm install -g @fleek/cli

To use the installed Fleek CLI, you must have an API key for a Fleek account stored as an environment variable in your terminal. Let's proceed to generate an API key for your account using the Fleek web dashboard.

Using your web browser, navigate to your Fleek account dashboard and click your account avatar to reveal a popup menu. Within this menu, click "Settings" to go to your Fleek account settings.

Within your Fleek account settings, click the "Generate API" button to launch the "API Details" modal, which generates an API key for your Fleek account.

Take the generated API key and replace the FLEEK_API_KEY placeholder in the command below:

export FLEEK_API_KEY='FLEEK_API_KEY'

Run this command to export the Fleek API key as a temporary environment variable for your terminal. The Fleek CLI reads the value of the FLEEK_API_KEY variable whenever it runs a command against your Fleek account.

Initializing a site through the Fleek CLI

You must generate the static files for the React DApp locally before you can host the DApp and its files on IPFS using Fleek.

Generating the static files can be automated during the build process by specifying a Docker image and the commands used to build your static files. In this tutorial, however, you will generate the static files manually.

Run the npm command below to generate static files for the DApp in a build directory:

npm run build

Next, initialize a Fleek site workspace inside the react-web3 folder with the following command:

fleek site:init

The initialization process is a one-time step for each Fleek site. The Fleek CLI starts an interactive session that guides you through initializing the site.

During the initialization process, you will be prompted to enter a teamId, as seen in the image below.

You will find your teamId number in the URL of the Fleek dashboard. An example teamId is shown below:

At this point, the Fleek CLI has generated a .fleek.json file inside the react-web3 directory with Fleek's hosting configuration. One thing is missing, however: the environment variable containing the pet adoption smart contract address.

Let's look at how to add environment variables for sites deployed to Fleek locally.

Adding an environment variable

Fleek lets developers securely manage sensitive credentials for their sites, either through the Fleek dashboard or through the configuration file. Since you are hosting a site locally from your command line in this tutorial, you will specify your environment variable in the .fleek.json file.

In the code below, replace the ADOPTION_CONTRACT_ADDRESS placeholder with the pet adoption smart contract address. Remember, this address was returned after we created the Alchemy application and deployed the smart contract with the npx command earlier in this tutorial.

 {
  "site": {
    "id": "SITE_ID",
    "team": "TEAM_ID",
    "platform": "ipfs",
    "source": "ipfs",
    "name": "SITE_NAME"
  },
  "build": {
    "baseDir": "",
    "publicDir": "build",
    "rootDir": "",
    "environment": {
       "REACT_APP_ADOPTION_CONTRACT": "ADOPTION_CONTRACT_ADDRESS"
    }
  }
}

Note: The Fleek CLI automatically generates the SITE_ID, TEAM_ID, and SITE_NAME placeholders in the .fleek.json file when you initialize a Fleek site.

The code above also contains the following object, which you need to add to the build object inside your .fleek.json file:

    "environment": {
       "REACT_APP_ADOPTION_CONTRACT_ADDRESS": "ADOPTION_CONTRACT_ADDRESS"
    }

This makes the contract address available to the React build as an environment variable. (In a fully automated setup, the build object can also specify a Node Docker image for Fleek to build the DApp with, along with the npm commands to run in a command field.)

Run the command below to redeploy the DApp using the new build configuration in the .fleek.json file:

fleek site:deploy

Congratulations! The DApp is fully deployed, and you can now access the live site through your web browser. You can also get more detailed information about the hosted DApp through the Fleek dashboard by following these steps:

  • Navigate to your Fleek dashboard
  • Click the name of the DApp you deployed
  • View the URL of the deployed site on the left
  • View a deploy preview image on the right

Click the site URL to open the DApp in a new browser tab. You will be prompted to connect a MetaMask wallet immediately after the DApp launches. Once a wallet is connected, you will be able to adopt any of the sixteen dogs by clicking the "Adopt" buttons.

That's it! Your sample pet adoption DApp has been deployed to Fleek.

Conclusion

In this tutorial, we focused on building and hosting a sample DApp on IPFS through Fleek. The process started out similarly to the pet adoption smart contract from the Truffle guide. We then went a step further by building a DApp to interact with the pet adoption smart contract.

If you want to build on the steps in this tutorial to host a production-ready DApp, I recommend you consider the following:

First, make sure to connect Fleek to a code host provider, such as GitHub, and deploy the DApp from a production branch within your repository. This lets Fleek automatically redeploy the DApp whenever you push a new commit to the deployed branch.

Second, if you are using a .fleek.json file to store environment variables, add the .fleek.json file name to your .gitignore file. Doing so ensures the .fleek.json file is not pushed and your environment variables are not exposed.
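
For example:

# keep the Fleek config (and the env vars inside it) out of version control
echo ".fleek.json" >> .gitignore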

I hope you found this tutorial useful. If you have any questions, feel free to share a comment.

This story was originally published at https://blog.logrocket.com/how-build-dapp-host-ipfs-fleek/

#dapp #fleek  #ipfs 


Introduction to IPFS (Interplanetary File System)

This video is an introduction to IPFS (the InterPlanetary File System). IPFS is a decentralized, distributed file storage system that lets you store files or digital assets across nodes scattered around the world (think of the Tor network, but with consistent naming and hashing of content).

In this video, Chris explains the relevance of IPFS to Metaverse and Web3 technologies such as NFTs (non-fungible tokens) and how IPFS powers NFTs hosted on the most popular NFT marketplace, OpenSea. We take a deep dive into NFT metadata and view some NFTs hosted on IPFS, both directly via the Brave browser and via IPFS gateways such as Cloudflare, discussing why we should use the ipfs:// scheme rather than gateway URLs.
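
For example, the same content can be addressed both ways (CID left as a placeholder); the first form is gateway-independent:

ipfs://<CID>                             # native scheme, resolved by IPFS-aware browsers such as Brave
https://cloudflare-ipfs.com/ipfs/<CID>   # the same content served through the Cloudflare gateway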

Having built an understanding of NFTs, we then look at NFT architecture and how it compares to traditional centralized architectures, and we see how to get started creating our own IPFS nodes to host our content.

Finally, after realizing that hosting our own nodes is painful, we use a popular pinning platform called Pinata to host our assets.

By the end of this video, you should have a good overview of IPFS, how to get started with it, and how it fits into the Web3 ecosystem.

00:00 - Introduction
00:52 - What is IPFS
01:53 - IPFS and NFTs
03:18 - How OpenSea uses IPFS
04:10 - NFT metadata
05:15 - Viewing OpenSea NFTs in IPFS and Brave
05:30 - Brave Browser and IPFS
06:50 - IPFS Gateways such as Cloudflare
10:24 - Decentralized IPFS architecture vs centralized architecture
14:24 - Pros and cons of IPFS vs centralized storage
19:09 - Creating and installing an IPFS node in Azure Cloud
24:30 - Hosting and viewing content from your own IPFS node
27:45 - Unique Content Identifiers (CID) and hashing
31:35 - Nodes must be online to serve content
33:25 - Pinning content with Pinata
36:00 - Hosting images in IPFS using Pinata
38:00 - Hosting videos in IPFS using Pinata
39:00 - Hosting folders in IPFS
39:22 - Hosting a React application in IPFS
42:15 - Conclusion

#ipfs #web3 #nft 


Rust IPFS: The interPlanetary File System (IPFS), Implemented In Rust

Rust IPFS

The Interplanetary File System (IPFS), implemented in Rust

Description

This repository contains the crates for the IPFS core implementation which includes a blockstore, a libp2p integration which includes DHT content discovery and pubsub support, and HTTP API bindings. Our goal is to leverage both the unique properties of Rust to create powerful, performant software that works even in resource-constrained environments, while also maximizing interoperability with the other "flavors" of IPFS, namely JavaScript and Go.

Project Status - Alpha

You can see details about what's implemented, what's not, and also learn about other ecosystem projects, at Are We IPFS Yet?

For more information about IPFS see: https://docs.ipfs.io/introduction/overview/

Install

Rust IPFS depends on protoc and openssl.

Dependencies

First, install the dependencies.

With apt:

$ apt-get install protobuf-compiler libssl-dev zlib1g-dev

With yum:

$ yum install protobuf-compiler openssl-devel zlib-devel

Install rust-ipfs itself

The rust-ipfs binaries can be built from source. Our goal is to always be compatible with the stable release of Rust.

$ git clone https://github.com/rs-ipfs/rust-ipfs && cd rust-ipfs
$ cargo build --workspace

You will then find the binaries inside of the project root's /target/debug folder.

Note: binaries available via cargo install are coming soon.

Getting started

We recommend browsing the examples, the http crate tutorial and tests in order to see how to use Rust-IPFS in different scenarios.

Running the tests

The project currently features unit, integration, conformance and interoperability tests. Unit and integration tests can be run with:

$ cargo test --workspace

The --workspace flag ensures the tests from the http and unixfs crates are also run.

Explanations on how to run the conformance tests can be found here. The Go and JS interoperability tests are behind a feature flag and can be run with:

$ cargo test --features=test_go_interop
$ cargo test --features=test_js_interop

These are mutually exclusive, i.e. --all-features won't work as expected.

Note: you will need to set the GO_IPFS_PATH and the JS_IPFS_PATH environment variables to point to the relevant IPFS binary.
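
For example (both paths are placeholders; point them at your local go-ipfs and js-ipfs builds):

$ export GO_IPFS_PATH=/usr/local/bin/ipfs
$ export JS_IPFS_PATH=$HOME/node_modules/.bin/jsipfs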

Contributing

See the contributing docs for more info.

You can also back the project financially by reaching out or by becoming a backer on OpenCollective

If you have any questions on the use of the library or other inquiries, you are welcome to submit an issue.

Roadmap

Special thanks to the Web3 Foundation and Protocol Labs for their devgrant support.

Completed Work

  • Project Setup
  • Testing Setup
    • Conformance testing
  • HTTP API Scaffolding
  • UnixFS Support
  • /pubsub/{publish,subscribe,peers,ls}
  • /swarm/{connect,peers,addrs,addrs/local,disconnect}
  • /id
  • /version
  • /shutdown
  • /block/{get,put,rm,stat}
  • /dag/{put,resolve}
  • /refs and /refs/local
  • /bitswap/{stat,wantlist}
  • /cat
  • /get
  • /resolve

Work in Progress

  • /bootstrap
  • /dht
  • interop testing

Work still required

  • /name
  • /ping
  • /key
  • /config
  • /stats
  • /files (regular and mfs)
  • a few other miscellaneous endpoints not enumerated here

Maintainers

Rust IPFS was originally authored by @dvc94ch and is now actively maintained by @koivunej and @aphelionz. Special thanks go to Protocol Labs and Equilibrium.

Alternatives and other cool, related projects

It’s been noted that the Rust-IPFS name and popularity may give its organization a "first-mover" advantage. However, alternatives with different philosophies do exist, and we believe that supporting a diverse IPFS community is important and will ultimately help produce the best solution possible.

  • rust-ipfs-api - A Rust client for an existing IPFS HTTP API. Supports both hyper and actix.
  • ipfs-embed - An implementation based on sled
  • rust-ipld - Basic rust ipld library supporting dag-cbor, dag-json and dag-pb formats.
  • PolkaX's own rust-ipfs
  • Parity's rust-libp2p, which does a lot of the heavy lifting here

If you know of another implementation or another cool project adjacent to these efforts, let us know!

Download Details:
Author: rs-ipfs
Source Code: https://github.com/rs-ipfs/rust-ipfs
License: View license

#rust  #rustlang #ipfs #p2p

Rust IPFS: The interPlanetary File System (IPFS), Implemented In Rust
Zak Dyer

Building a Full Stack NFT Marketplace DApp on Flow

How to Build an NFT Marketplace on Flow Testnet using React.js, Blocto, and IPFS Integration

PREREQUISITE: You must have Node.js installed. You can do that here: https://nodejs.org/en/download/
Node is installed if you type `node -v` in a terminal and it shows you the version.

Hey there! This video is meant to show people how to go from NOTHING to having a Full Stack NFT Marketplace DApp on Flow. This video is not meant to produce a beautiful NFT Marketplace. Rather, it is to show you HOW you can make one. Every step of the way is included in the video.

1. We code out all the smart contracts.
2. We start from a blank React.js project and implement basic marketplace functionality.
3. We go over how to incorporate Blocto wallet into your DApp.
4. IPFS integration is used to store NFT metadata.

DApp Source Code: https://github.com/jacob-tucker/nftdapp-tutorial 
You can find the Smart Contracts here: https://github.com/jacob-tucker/nftdapp-tutorial/tree/main/src/cadence/contracts 

Timestamps:
00:00 - NFT Contract
22:00 - NFT Transactions
30:50 - NFT Marketplace
50:00 - NFT Marketplace Transactions
59:30 - DApp Setup
1:03:30 - Configuring Testnet
1:05:30 - Blocto Setup
1:12:30 - Deploying Contracts to Testnet
1:18:00 - Minting NFTs
1:25:43 - IPFS Integration
1:31:30 - Flowscan
1:32:00 - Setting up a User
1:38:23 - Viewing our Collection
1:53:37 - Listing for Sale on the Marketplace
1:58:34 - Viewing NFTs for Sale
2:10:25 - Unlisting from Sale
2:13:18 - View Any Account's NFTs
2:17:15 - Purchasing NFTs
2:33:20 - Conclusion
2:35:22 - Making the Script Better :) (OPTIONAL)

#nft #blockchain #react #flow #blocto #ipfs 

Building a Full Stack NFT Marketplace DApp on Flow
Code JS

Web3 Storage | The Easiest Way to Use IPFS

In today's video we will introduce Web3 Storage, the easiest way to use IPFS for beginners.

WEB3 STORAGE
👉 Create a FREE account on Web3 Storage: https://cutt.ly/aRFjIYE 
👉 Tutorial: https://cutt.ly/kRFjSw4 

Subscribe: https://www.youtube.com/c/EatTheBlocks/featured 

#ipfs  #javascript 

Web3 Storage | The Easiest Way to Use IPFS

Building an NFT Marketplace on Ethereum with Polygon and Next.js

How to start NFT Marketplace Development?

In this video, you'll learn how to build a full stack NFT marketplace on Ethereum with Solidity, Polygon, IPFS, Next.js, Ethers.js, and Hardhat.

We'll start from scratch, creating a new project and installing the dependencies. We'll then write and test out the smart contracts. Once the tests have passed, we'll write the front end code to connect the smart contracts.

After testing on a local network, we'll deploy to the Matic / Polygon network using a custom RPC provider (Infura).

0:00 - Introduction
3:00 - Project initialization and configuration
17:40 - Creating an Ethereum wallet
21:20 - Coding the NFT smart contract
28:19 - Coding the Market smart contract
58:50 - Testing the contracts
1:10:57 - Updating _app.js
1:14:35 - Updating the home page
1:35:24 - Deploying to a local node
1:39:47 - Coding the create-item page
1:58:00 - Coding the my-assets page
2:03:10 - Coding the creator-dashboard page
2:09:25 - Deploying to Matic Mumbai Testnet
2:20:40 - Conclusion

Subscribe to my channel: https://www.youtube.com/naderdabit 

Follow me on Twitter: https://twitter.com/dabit3 

Source code
https://github.com/dabit3/polygon-ethereum-nextjs-marketplace/ 

#blockchain #polygon #ethereum #nextjs #ipfs #solidity #Matic

Building an NFT Marketplace on Ethereum with Polygon and Next.js

Create a Decentralized Web on IPFS Protocol

The InterPlanetary File System (IPFS) is a protocol and peer-to-peer network for storing and sharing data in a distributed file system. IPFS uses content-addressing to uniquely identify each file in a global namespace connecting all computing devices.

Download Slides here https://payhip.com/b/R1Mu 

Intro 0:00
Why IPFS? 2:00
Explains the original web model and its limitations
* Content addressing instead of location addressing
* Decentralized content distributed among peers

Content 3:30
* Content is hashed into a CID
* Content is immutable; each update generates a new CID
* Content addressing

Routing 4:30
* A distributed hash table (DHT) maps CIDs to peer IP addresses
* A DHT server hosts both content and the DHT

Publishing Content 6:30
* You have new content that you want to share on IPFS
* Hash the content, creating a new CID
* Update your local DHT with the CID / your IP address mapping
* The DHT update is propagated to all peers (NOT the CONTENT itself)
* People searching for your CID will be connected to you and only you

Consuming Content 8:48
* An IPFS client (DHT client) wants to consume ipfs://cid/
* The client consults its local DHT table to see where this CID is located and gets back a collection of IP addresses
* The client connects to some or all of the peers found hosting that CID
* The client downloads chunks of the content from each peer, which speeds up the transfer
* Once the client has the content, it updates its local DHT table to advertise that it now also hosts that CID (if it supports being a DHT server)
* The updated DHT is propagated across peers (see the toy sketch below)
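
To make the publish/consume flow concrete, here is a toy Rust sketch of the CID-to-peers mapping at the heart of it. This is an illustration only: a real IPFS DHT is a distributed Kademlia table with XOR distance metrics and network RPCs, not a single in-memory map, and every name here (ToyDht, publish, lookup) is hypothetical.

use std::collections::{HashMap, HashSet};

/// Toy stand-in for a DHT: maps a CID to the set of peer addresses hosting it.
struct ToyDht {
    providers: HashMap<String, HashSet<String>>,
}

impl ToyDht {
    fn new() -> Self {
        ToyDht { providers: HashMap::new() }
    }

    /// Publishing: announce that `peer_addr` now hosts `cid`.
    /// Only this mapping is shared with the network, never the content itself.
    fn publish(&mut self, cid: &str, peer_addr: &str) {
        self.providers
            .entry(cid.to_string())
            .or_default()
            .insert(peer_addr.to_string());
    }

    /// Consuming: look up which peers host `cid`; a client then connects
    /// to some or all of them and downloads chunks in parallel.
    fn lookup(&self, cid: &str) -> Vec<&String> {
        self.providers
            .get(cid)
            .map(|peers| peers.iter().collect())
            .unwrap_or_default()
    }
}

fn main() {
    let mut dht = ToyDht::new();

    // A peer publishes new content: hash it into a CID, then announce itself.
    dht.publish("QmExampleCid", "203.0.113.7:4001");

    // A client looks up the CID and learns which peers to connect to.
    println!("providers: {:?}", dht.lookup("QmExampleCid"));

    // After downloading, the client can announce that it hosts the CID too
    // (if it acts as a DHT server), making the content more available.
    dht.publish("QmExampleCid", "198.51.100.23:4001");
    println!("providers now: {:?}", dht.lookup("QmExampleCid"));
}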

IPFS Overview (Diagrams) 11:30

Demo 13:45

More Information 18:30

Immutable Content
* If content gets updated, its URI changes; how do I inform the user?
* Hash the user's public key instead and share that
Brand-new Client/Server
* I know nothing about the network (bootstrapping)
* You will be bootstrapped with a collection of IP addresses to start you up
More
* IPFS gateways
* IPNS (name server)
* Resolving content
* Deleting content (once another node hosts it, there is no way to delete it from the network)
* NAT traversal

via Hussein

#blockchain #ipfs #decentralized 

Create a Decentralized Web on IPFS Protocol
Nandu Singh

Getting Started with IPFS

What is IPFS?

Let's just start with a one-line definition of IPFS:

IPFS is a distributed system for storing and accessing files, websites, applications, and data.

What does that mean, exactly? Let's say you're doing some research on aardvarks. (Just roll with it; aardvarks are cool! Did you know they can tunnel 3 feet in only 5 minutes?) You might start by visiting the Wikipedia page on aardvarks at:

https://en.wikipedia.org/wiki/Aardvark

When you put that URL in your browser's address bar, your computer asks one of Wikipedia's computers, which might be somewhere on the other side of the country (or even the planet), for the aardvark page.

However, that's not the only option for meeting your aardvark needs! There's a mirror of Wikipedia stored on IPFS, and you could use that instead. If you use IPFS, your computer asks to get the aardvark page like this:

/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Aardvark.html

The easiest way to view the above link is by opening it in your browser through an IPFS Gateway. Simply add https://ipfs.io to the start of the above link and you'll be able to view the page.
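
As a quick illustration, here is a minimal Rust sketch of fetching that page through the public ipfs.io gateway. It assumes the reqwest crate with its blocking feature enabled in Cargo.toml (reqwest = { version = "0.11", features = ["blocking"] }); any HTTP client would do the same job.

// Minimal sketch: fetch IPFS content through a public HTTP gateway.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Gateway URL = gateway host + the /ipfs/<CID>/... path from above.
    let url = "https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Aardvark.html";
    let body = reqwest::blocking::get(url)?.text()?;
    println!("fetched {} bytes via the gateway", body.len());
    Ok(())
}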

IPFS knows how to find that sweet, sweet aardvark information by its contents, not its location (more on that, which is called content addressing, below). The IPFS-ified version of the aardvark info is represented by that string of numbers in the middle of the URL (QmXo…), and instead of asking one of Wikipedia's computers for the page, your computer uses IPFS to ask lots of computers around the world to share the page with you. It can get your aardvark info from anyone who has it, not just Wikipedia.

And, when you use IPFS, you don't just download files from someone else — your computer also helps distribute them. When your friend a few blocks away needs the same Wikipedia page, they might be as likely to get it from you as they would from your neighbor or anyone else using IPFS.

IPFS makes this possible for not only web pages but also any kind of file a computer might store, whether it's a document, an email, or even a database record.

Decentralization

Making it possible to download a file from many locations that aren't managed by one organization:

  • Supports a resilient internet. If someone attacks Wikipedia's web servers or an engineer at Wikipedia makes a big mistake that causes their servers to catch fire, you can still get the same webpages from somewhere else.
  • Makes it harder to censor content. Because files on IPFS can come from many places, it's harder for anyone (whether they're states, corporations, or someone else) to block things. We hope IPFS can help provide ways to circumvent actions like these when they happen.
  • Can speed up the web when you're far away or disconnected. If you can retrieve a file from someone nearby instead of hundreds or thousands of miles away, you can often get it faster. This is especially valuable if your community is networked locally but doesn't have a good connection to the wider internet. (Well-funded organizations with technical expertise do this today by using multiple data centers or CDNs — content distribution networks. IPFS hopes to make this possible for everyone.)

That last point is actually where IPFS gets its full name: the InterPlanetary File System. We're striving to build a system that works across places as disconnected or as far apart as planets. While that's an idealistic goal, it keeps us working and thinking hard, and almost everything we create in pursuit of that goal is also useful here at home.

Content addressing

For a beginner-friendly primer on why cryptographic hashing and content addressing matter, take a look at ProtoSchool's tutorial, Content Addressing on the Decentralized Web.

What about that link to the aardvark page above? It looked a little unusual:

/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Aardvark.html

That jumble of letters after /ipfs/ is called a content identifier and it’s how IPFS can get content from multiple places.

Traditional URLs and file paths such as…

  • https://en.wikipedia.org/wiki/Aardvark
  • /Users/Alice/Documents/term_paper.doc
  • C:\Users\Joe\My Documents\project_sprint_presentation.ppt

…identify a file by where it's located — what computer it's on and where on that computer's hard drive it is. That doesn't work if the file is in many places, though, like your neighbor's computer and your friend's across town.

Instead of being location-based, IPFS addresses a file by what's in it, or by its content. The content identifier above is a cryptographic hash of the content at that address. The hash is unique to the content that it came from, even though it may look short compared to the original content. It also allows you to verify that you got what you asked for — bad actors can't just hand you content that doesn't match. (If hashes are new to you, check out the concept guide on hashes for an introduction.)
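
To see this property in action, here is a tiny Rust sketch using the sha2 crate (Cargo.toml: sha2 = "0.10"). It is an illustration only: real CIDs wrap a digest like this in multihash and multibase encoding rather than plain hex, but the core property is the same.

use sha2::{Digest, Sha256};

// Render a digest as lowercase hex for printing.
fn hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    let original = b"Aardvarks can tunnel 3 feet in only 5 minutes.";
    let edited = b"Aardvarks can tunnel 3 feet in only 6 minutes.";

    // Same content -> same digest, computed by anyone, on any machine.
    println!("original -> {}", hex(Sha256::digest(original).as_slice()));

    // A one-character edit -> a completely different digest, which is why
    // an updated page gets a new, different address.
    println!("edited   -> {}", hex(Sha256::digest(edited).as_slice()));
}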

NOTE

Why do we say "content" instead of "files" or "web pages" here? Because a content identifier can point to many different types of data, such as a single small file, a piece of a larger file, or metadata. (In case you don't know, metadata is "data about the data." You use metadata when you access the date, location, or file size of your digital pictures, for example.) So, an individual IPFS address can refer to the metadata of just a single piece of a file, a whole file, a directory, a whole website, or any other kind of content. For more on this, check out our guide to how IPFS works.

Because the address of a file in IPFS is created from the content itself, links in IPFS can't be changed. For example ...

  • If the text on a web page is changed, the new version gets a new, different address.
  • Content can't be moved to a different address. On today's internet, a company could reorganize content on their website and move a page at http://mycompany.com/what_we_do to http://mycompany.com/services. In IPFS, the old link you have would still point to the same old content.

Of course, people want to update and change content all the time and don't want to send new links every time they do it. This is entirely possible in an IPFS world, but explaining it requires a little more info than what's within the scope of this IPFS introduction. Check out the concept guides on IPNS, the Mutable File System (MFS), and DNSLink to learn more about how changing content can work in a content-addressed, distributed system.

It's important to remember in all of these situations, using IPFS is participatory and collaborative. If nobody using IPFS has the content identified by a given address available for others to access, you won't be able to get it. On the other hand, content can't be removed from IPFS as long as someone is interested enough to make it available, whether that person is the original author or not. Note that this is similar to the current web, where it is also impossible to remove content that's been copied across an unknowable number of websites; the difference with IPFS is that you are always able to find those copies.

Participation

While there's lots of complex technology in IPFS, the fundamental ideas are about changing how networks of people and computers communicate. Today's World Wide Web is structured on ownership and access, meaning that you get files from whoever owns them — if they choose to grant you access. IPFS is based on the ideas of possession and participation, where many people possess each others' files and participate in making them available.

That means IPFS only works well when people are actively participating. If you use your computer to share files using IPFS, but then you turn your computer off, other people won't be able to get those files from you anymore. But if you or others make sure that copies of those files are stored on more than one computer that's powered on and running IPFS, those files will be more reliably available to other IPFS users who want them. This happens to some extent automatically: by default, your computer shares a file with others for a limited time after you've downloaded it using IPFS. You can also make content available more permanently by pinning it, which saves it to your computer and makes it available on the IPFS network until you decide to unpin it. (You can learn more about this in our guide to persistence and pinning.)
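
As a rough mental model (and only a mental model; this is not how IPFS actually implements its block store or garbage collection), the Rust sketch below shows the difference between cached and pinned content: garbage collection drops unpinned blocks, while pinned blocks stay until you unpin them. All names in it are hypothetical.

use std::collections::{HashMap, HashSet};

/// Toy model of a node's local store: downloaded content is cached (and
/// shared) temporarily, but only pinned content survives garbage collection.
struct ToyNode {
    blocks: HashMap<String, Vec<u8>>, // cid -> content
    pinned: HashSet<String>,
}

impl ToyNode {
    fn new() -> Self {
        ToyNode { blocks: HashMap::new(), pinned: HashSet::new() }
    }

    /// Downloading a block caches it, making it temporarily available to peers.
    fn fetch(&mut self, cid: &str, content: Vec<u8>) {
        self.blocks.insert(cid.to_string(), content);
    }

    /// Pinning marks a block to be kept until it is explicitly unpinned.
    fn pin(&mut self, cid: &str) {
        self.pinned.insert(cid.to_string());
    }

    /// Garbage collection drops every cached block that is not pinned.
    fn gc(&mut self) {
        let pinned = &self.pinned;
        self.blocks.retain(|cid, _| pinned.contains(cid));
    }
}

fn main() {
    let mut node = ToyNode::new();
    node.fetch("QmCachedOnly", b"temporary".to_vec());
    node.fetch("QmPinnedDoc", b"keep me".to_vec());
    node.pin("QmPinnedDoc");

    node.gc();
    assert!(!node.blocks.contains_key("QmCachedOnly")); // evicted
    assert!(node.blocks.contains_key("QmPinnedDoc"));   // survived
    println!("after gc, node still hosts {} block(s)", node.blocks.len());
}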

If you want to make sure one of your own files is permanently shared on the internet today, you might use a for-pay file-sharing service like Dropbox. Some people have begun offering similar services based on IPFS called pinning services. But since IPFS makes this kind of sharing a built-in feature, you can also collaborate with friends or partner with institutions (for example, museums and libraries might work together) to share each others' files. We hope IPFS can be the low-level tool that allows a rich fabric of communities, business, and cooperative organizations to all form a distributed web that is much more reliable, robust, and equitable than the one we have today.

via https://docs.ipfs.io/concepts/what-is-ipfs/ 

In Depth Introduction to IPFS

This is the first video in a 3-part series on the Interplanetary File System, or IPFS. In this video we’ll explore the theoretical foundations of all the technologies that make IPFS what it is today: hash functions, distributed hash tables, BitTorrent, Merkle trees and more. The more practical introduction comes after this, but I recommend learning the theory as well.

Part 2: https://youtu.be/KIEq2FyMczs 

More about version control software:
https://www.atlassian.com/git/tutorials/what-is-version-control 

Great introductory paper on Bittorrent:
http://web.cs.ucla.edu/classes/cs217/05BitTorrent.pdf 

More about the Kademlia DHT:
https://en.wikipedia.org/wiki/Kademlia 

Additional reading:
Great introduction to IPFS
https://medium.com/@ConsenSys/an-introduction-to-ipfs-9bba4860abd0
Great video with easy to follow animations
https://www.youtube.com/watch?v=5Uj6uR3fp-U&t=391s
IPFS whitepaper breakdown (there’s a part 2 as well)
https://hackernoon.com/understanding-the-ipfs-white-paper-part-1-8ea5340b0a2e

Thanks for watching!

Music: https://www.bensound.com

#blockchain #ipfs 

Getting Started with IPFS
Autumn Blick

Swarm, IPFS and BigchainDB: Comparing Data Storage and Decentralization

Data and content management are two of the main capabilities in many real-world business applications, such as information portals, Wikipedia, and ecommerce and social media applications.

The decentralized world is no exception. During the EVM discussion, we briefly looked at the EVM's capability for storing data on Ethereum.

Although it is convenient, the EVM is not generally intended for data storage, and it is very expensive. There are a few options application developers can leverage to manage and access decentralized data and content for decentralized applications, including Swarm (the Ethereum blockchain solution), IPFS, and BigchainDB (a big data platform for blockchain). We will cover them in the rest of this section.

Swarm

Swarm provides a content distribution service for Ethereum and DApps. Here are some features of Swarm:

  • It is a decentralized storage platform, a native base-layer service of the Ethereum web 3 stack.
  • It intends to be a decentralized store of Ethereum's public record, as an alternative to an Ethereum on-chain storage solution.
  • It allows DApps to store and distribute code, data, and content without jamming all the information onto the blockchain.

Imagine you are developing a blockchain-based medical record system: you want to keep track of when medical records are added, where they are recorded, and who has accessed them and for what purpose. All of these are immutable transaction records you want to maintain on the blockchain. But the medical records themselves, including physician notes, diagnoses, imaging, and so on, may not be suitable for storage on the Ethereum blockchain. Swarm or IPFS is best suited for such use cases.

#blockchain #ethereum #ethereum-blockchain #ipfs #swarm #ethereum-scalability #ethereum-2.0 #blockchain-development

Swarm, IPFS and BigchainDB: Comparing Data Storage and Decentralization
Fannie Zemlak

Build a distributed website with Hugo

Hugo is an interesting static website generator: efficient, concise, and powerful. I’ll show you the pleasant journey of building a website from scratch with it.

Hugo is very easy to learn, so I believe anyone with a basic technical background can quickly create a beautiful static website with it.

#ipfs #hugo #web #programming

Build a distributed website with Hugo

WTH is IPFS? InterPlanetary File Systems To Rescue The Internet

When IPFS was introduced in February 2015, TechCrunch noted that it was “quickly spreading by word of mouth.”

It’s possibly a key component to solving deep-seated but largely unrecognized problems in how we use the internet today.

Some believe IPFS, a new tongue-twisting acronym, is a tool that’ll finally evolve the internet from central entities to a world wide web of shared information, as our online founders always envisioned.

The Simple Breakdown:

IPFS = Git + BitTorrent

To understand IPFS, or the Interplanetary File System, envision a file system that stores files and tracks versions over time, like the Git project…


Git is a distributed version control system, or VCS. Developers use it to track changes to their code. When text is added, edited, or deleted in any piece of code, Git tracks the changes line by line. It’s distributed because every user has all the source code on their computer and can act as a server. Git is also backed by a content-addressable database, meaning the content in the database is immutable.

But… IPFS also incorporates how files move across a network, making it a distributed file system, much like BitTorrent.


BitTorrent allows users to quickly download large files using minimal internet bandwidth. It is free to use and includes no spyware or pop-up advertising. BitTorrent’s protocol maximizes transfer speed by gathering small pieces of the files a user wants and downloading those pieces simultaneously from other users who already have them. BitTorrent is popular for freely sharing videos, programs, books, music, legal/medical records, and more. Not to mention, the downloads are much faster than with other protocols.


Git + BitTorrent = BFF

IPFS uses BitTorrent’s approach, but applies Git’s concept and creates a new type of file system that tracks the respective versions of files from all the users in the network.

By utilizing both characteristics of these two entities, IPFS birthed a new permanent web that challenges existing internet protocols, such as HTTP.

WTF Is Wrong With HTTP? It Seems Fine To Me, TYVM.

Well, first, I’m not sure if you realized you had options. So while the thought: “The World Wide Web is actually wider…” sinks into your head, here’s a brief summary of what you’ve likely been using for some time.

Sit back for a quick lesson on HTTP, ASAP. (lol, you love the free jokes in my articles).

The internet is a collection of protocols that describe how data moves through a network. Developers adopted these protocols over time as they built applications on top of existing infrastructure. The protocol that serves as the backbone of the web is the HyperText Transfer Protocol.

HTTP, or HyperText Transfer Protocol, is an application-layer protocol for distributed, collaborative hypermedia systems, created by Tim Berners-Lee at CERN in 1989. HTTP is the foundation for data communication using hypertext files, and it currently carries most of the data transfer on the internet.

HTTP is a request-response protocol.

The internet boasts a vast array of resources hosted on different servers. To access these resources, a browser needs to be able to send a request to the server and display the result. HTTP is the underlying format for structuring requests and responses for communication between a client and a host.

The message a client sends to a server is known as an HTTP request. Clients can use various methods when making these requests.

HTTP request methods indicate a specific action to be performed on a given resource. Each method implements distinct semantics, but there are some shared features.

Take the Google homepage as a common example. HTTP is a location-addressed protocol, which means that when google.com is entered into a browser, it gets translated into an IP address belonging to a Google server, initiating a request-response cycle between that server and the "client", your browser.
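
To see just how simple this request-response cycle is, here is a minimal Rust sketch that speaks raw HTTP to a server over a plain TCP socket, using only the standard library (example.com is just a stand-in host, and plaintext port 80 is used to keep the sketch short):

// Minimal sketch of HTTP's request-response cycle over a raw TCP socket.
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Connecting by name: the OS first resolves example.com to a server's
    // IP address. This is location addressing in action.
    let mut stream = TcpStream::connect("example.com:80")?;

    // An HTTP/1.1 GET request is just structured text on the wire.
    stream.write_all(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")?;

    // The server replies with a status line, headers, and the resource body.
    let mut response = String::new();
    stream.read_to_string(&mut response)?;
    println!("{}", response.lines().next().unwrap_or(""));
    Ok(())
}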

WTF, We Have a 404 On HTTP! SOS!

Internet savvy, or not, I believe we have all fallen victim to an HTTP meltdown at least once in our lives.


Do you recall a time when you and a large group of people went to the same website at the same time?

Each individual participating in this action types the request into their online device and sends a request to that website, which returns a response.

Each person is sent the same data, individually. If there are 10,000 people trying to access a site, the backend handles 10,000 requests and sends 10,000 responses. This sounds great, right? The problem is that it's pretty inefficient.

In a perfect world, participants should be able to leverage physical proximity to more effectively retrieve requested information.

HTTP presents another significant issue: if there is a problem in the network's line of communication, the client is left unable to connect with the server.

This occurs if:

  • A country is blocking some content
  • An ISP has an outage
  • Content was merely moved or deleted.

These types of broken links exist everywhere on the HTTP web.

A location-based addressing model, like HTTP, encourages the centralization of information.

It is convenient to trust a handful of applications with all our online data, but great power and responsibility come with entrusting centralized providers with our precious personal and public information.

#ipfs #blockchain #bittorrent #women-in-blockchain #women-in-tech #what-is-ipfs #how-does-ipfs-work #hackernoon-top-story

WTH is IPFS? InterPlanetary File Systems To Rescue The Internet