UniLM: Large-scale Self-supervised Pre-training Across Tasks

Hiring

We are hiring at all levels (including FTE researchers and interns)! If you are interested in working with us on NLP and large-scale pre-trained models, please send your resume to fuwei@microsoft.com.

AI Fundamentals

Foundation of Large Models

Transformers at Scale = DeepNet + X-MoE

Stability - DeepNet: scaling Transformers to 1,000 Layers and beyond

Efficiency & Transferability - X-MoE: scalable & finetunable sparse Mixture-of-Experts (MoE)

Foundation (aka Pre-trained) Models

General-purpose Foundation Model

MetaLM: Language Models are General-Purpose Interfaces

The Big Convergence - Large-scale self-supervised pre-training across tasks (predictive and generative), languages (100+ languages), and modalities (language, image, audio, layout/format + language, vision + language, audio + language, etc.)

Language & Multilingual

UniLM: unified pre-training for language understanding and generation

InfoXLM/XLM-E: multilingual/cross-lingual pre-trained models for 100+ languages

DeltaLM/mT6: encoder-decoder pre-training for language generation and translation for 100+ languages

MiniLM: small and fast pre-trained models for language understanding and generation

AdaLM: domain, language, and task adaptation of pre-trained models

EdgeLM (NEW): small pre-trained models on edge/client devices

SimLM (NEW): similarity matching with language model pre-training

Vision

BEiT(-2): generative self-supervised pre-training for vision / BERT Pre-Training of Image Transformers

DiT (NEW): self-supervised pre-training for Document Image Transformers

Speech

WavLM: speech pre-training for full stack tasks

Multimodal (X + Language)

LayoutLM/LayoutLMv2/LayoutLMv3: multimodal (text + layout/format + image) pre-training for Document AI (e.g. scanned documents, PDF, etc.)

LayoutXLM: multimodal (text + layout/format + image) pre-training for multilingual document understanding

MarkupLM: markup language model pre-training for visually-rich document understanding

UniSpeech: unified pre-training for self-supervised learning and supervised learning for ASR

UniSpeech-SAT: universal speech representation learning with speaker-aware pre-training

SpeechT5: encoder-decoder pre-training for spoken language processing

VLMo: Unified vision-language pre-training

VL-BEiT (NEW): Generative Vision-Language Pre-training - evolution of BEiT to multimodal

BEiT-3 (NEW): a general-purpose multimodal foundation model, and a major milestone of The Big Convergence of Large-scale Pre-training Across Tasks, Languages, and Modalities.

Toolkits

s2s-ft: sequence-to-sequence fine-tuning toolkit

Aggressive Decoding (NEW): lossless and efficient sequence-to-sequence decoding algorithm

Applications

TrOCR: transformer-based OCR w/ pre-trained models

LayoutReader: pre-training of text and layout for reading order detection

XLM-T: multilingual NMT w/ pretrained cross-lingual encoders

News

  • August, 2022: BEiT-3 - a general-purpose multimodal foundation model that achieves state-of-the-art transfer performance on both vision and vision-language tasks
  • July, 2022: SimLM - Large-scale self-supervised pre-training for similarity matching
  • June, 2022: DiT and LayoutLMv3 were accepted by ACM Multimedia 2022
  • June, 2022: MetaLM - Language models are general-purpose interfaces to foundation models (language/multilingual, vision, speech, and multimodal)
  • June, 2022: VL-BEiT - bidirectional multimodal Transformer learned from scratch with one unified pretraining task, one shared backbone, and one-stage training, supporting both vision and vision-language tasks.
  • [Model Release] June, 2022: LayoutLMv3 Chinese - Chinese version of LayoutLMv3
  • [Code Release] May, 2022: Aggressive Decoding - Lossless Speedup for Seq2seq Generation
  • April, 2022: Transformers at Scale = DeepNet + X-MoE
  • [Model Release] April, 2022: LayoutLMv3 - Pre-training for Document AI with Unified Text and Image Masking
  • [Model Release] March, 2022: EdgeFormer - Parameter-efficient Transformer for On-device Seq2seq Generation
  • [Model Release] March, 2022: DiT - Self-supervised Document Image Transformer. Demos: Document Layout Analysis, Document Image Classification
  • January, 2022: BEiT was accepted by ICLR 2022 as Oral presentation (54 out of 3391).
  • [Model Release] December 16th, 2021: TrOCR small models for handwritten and printed texts, with 3x inference speedup.
  • November 24th, 2021: VLMo as the new SOTA on the VQA Challenge
  • November, 2021: Multilingual translation at scale: 10000 language pairs and beyond
  • [Model Release] November, 2021: MarkupLM - Pre-training for text and markup language (e.g. HTML/XML)
  • [Model Release] November, 2021: VLMo - Unified vision-language pre-training w/ BEiT
  • October, 2021: WavLM Large achieves state-of-the-art performance on the SUPERB benchmark
  • [Model Release] October, 2021: WavLM - Large-scale self-supervised pre-trained models for speech.
  • [Model Release] October 2021: TrOCR is on HuggingFace
  • September 28th, 2021: T-ULRv5 (aka XLM-E/InfoXLM) as the SOTA on the XTREME leaderboard. // Blog
  • [Model Release] September, 2021: LayoutLM-cased are on HuggingFace
  • [Model Release] September, 2021: TrOCR - Transformer-based OCR w/ pre-trained BEiT and RoBERTa models.
  • August 2021: LayoutLMv2 and LayoutXLM are on HuggingFace
  • [Model Release] August, 2021: LayoutReader - Built with LayoutLM to improve general reading order detection.
  • [Model Release] August, 2021: DeltaLM - Encoder-decoder pre-training for language generation and translation.
  • August 2021: BEiT is on HuggingFace
  • [Model Release] July, 2021: BEiT - Towards BERT moment for CV
  • [Model Release] June, 2021: LayoutLMv2, LayoutXLM, MiniLMv2, and AdaLM.
  • May, 2021: LayoutLMv2, InfoXLMv2, MiniLMv2, UniLMv3, and AdaLM were accepted by ACL 2021.
  • April, 2021: LayoutXLM is coming by extending the LayoutLM into multilingual support! A multilingual form understanding benchmark XFUND is also introduced, which includes forms with human labeled key-value pairs in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese).
  • March, 2021: InfoXLM was accepted by NAACL 2021.
  • December 29th, 2020: LayoutLMv2 is coming with the new SOTA on a wide variety of document AI tasks, including the DocVQA and SROIE leaderboards.
  • October 8th, 2020: T-ULRv2 (aka InfoXLM) as the SOTA on the XTREME leaderboard. // Blog
  • September, 2020: MiniLM was accepted by NeurIPS 2020.
  • July 16, 2020: InfoXLM (Multilingual UniLM) arXiv
  • June, 2020: UniLMv2 was accepted by ICML 2020; LayoutLM was accepted by KDD 2020.
  • April 5, 2020: Multilingual MiniLM released!
  • September, 2019: UniLMv1 was accepted by NeurIPS 2019.

Release

***** New May, 2022: Aggressive Decoding release *****

  •  Aggressive Decoding (May 20, 2022): Aggressive Decoding, a novel decoding paradigm for lossless speedup of seq2seq generation. Unlike previous efforts (e.g., non-autoregressive decoding) that speed up seq2seq generation at the cost of quality, Aggressive Decoding aims to yield generation identical to (or better than) autoregressive decoding, with a significant speedup: for seq2seq tasks characterized by highly similar inputs and outputs (e.g., Grammatical Error Correction and Text Simplification), Input-guided Aggressive Decoding delivers a 7x-9x speedup for the popular 6-layer Transformer on GPU with results identical to greedy decoding; for other general seq2seq tasks (e.g., Machine Translation and Abstractive Summarization), Generalized Aggressive Decoding delivers a 3x-5x speedup with identical or even better quality. "Lossless Acceleration for Seq2seq Generation with Aggressive Decoding"
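
For intuition, here is a minimal sketch of the idea in Python (ours, not the authors' implementation): treat the near-copy input as a draft of the output, accept the longest prefix that greedy decoding would also produce, then continue greedily, so the final output is identical to greedy decoding. The real method re-enters parallel verification after each mismatch; greedy_next below is a stand-in for one argmax step of a seq2seq decoder.

def aggressive_decode(draft, greedy_next, eos="<eos>", max_len=64):
    # Accept draft tokens as long as they match what greedy decoding
    # would emit; in practice this check is a single parallel forward pass.
    out = []
    for tok in draft:
        pred = greedy_next(out)
        if pred != tok:
            break
        out.append(tok)
    # Continue with ordinary greedy decoding from the first mismatch on,
    # so the result is token-for-token identical to greedy decoding.
    while (not out or out[-1] != eos) and len(out) < max_len:
        out.append(greedy_next(out))
    return out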

***** New April, 2022: LayoutLMv3 release *****

  •  LayoutLM 3.0 (April 19, 2022): LayoutLMv3, a multimodal pre-trained Transformer for Document AI with unified text and image masking. It is additionally pre-trained with a word-patch alignment objective that learns cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only on text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also on image-centric tasks such as document image classification and document layout analysis. "LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking" ACM MM 2022
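
For reference, a minimal usage sketch via the Hugging Face transformers integration (assuming the microsoft/layoutlmv3-base checkpoint; the words and boxes below are made-up placeholders for pre-extracted OCR output):

from transformers import AutoModel, AutoProcessor
from PIL import Image

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModel.from_pretrained("microsoft/layoutlmv3-base")

image = Image.open("page.png").convert("RGB")  # a document page image
words = ["Invoice", "Total:", "$42.00"]        # hypothetical OCR words
boxes = [[10, 10, 90, 30], [10, 40, 60, 60], [70, 40, 140, 60]]  # 0-1000 normalized

inputs = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**inputs)  # joint text + layout + image encodings
print(outputs.last_hidden_state.shape)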

***** March, 2022: EdgeFormer release *****

  •  EdgeFormer (March 18, 2022): EdgeFormer, the first publicly available pretrained parameter-efficient Transformer for on-device seq2seq generation. EdgeFormer has only 11 million parameters, taking up less than 15MB of disk space after int8 quantization and compression, and can process a sentence of 20-30 tokens with acceptable latency on two mid-to-high-end CPU cores and a memory footprint of less than 50MB. The pretrained EdgeFormer can be fine-tuned on English seq2seq tasks and achieves promising results -- significantly better than the strong parameter-efficient Transformer baseline (pretrained Universal Transformer) and a fully-parameterized Transformer-base model without pretraining -- which we believe can largely facilitate on-device seq2seq generation in practice. "EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation"
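
The size arithmetic is easy to check: 11M parameters at 4 bytes each (fp32) is about 44MB, and at 1 byte each (int8) about 11MB, consistent with the sub-15MB figure. Below is a generic sketch of post-training dynamic int8 quantization in PyTorch on a stand-in model (not EdgeFormer itself, which is released in this repo):

import torch
import torch.nn as nn

# Stand-in encoder roughly in the EdgeFormer size class.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=4,
)
n = sum(p.numel() for p in model.parameters())
print(f"{n/1e6:.1f}M params ~ {n*4/1e6:.0f}MB fp32, {n/1e6:.0f}MB int8")

# Dynamic quantization stores Linear weights as int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)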

***** March, 2022: DiT release *****

  •  DiT (March 4, 2022): DiT, a self-supervised pre-trained Document Image Transformer model that uses large-scale unlabeled text images for Document AI tasks. This is essential since no supervised counterpart exists due to the lack of human-labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, table detection, and text detection for OCR. Experimental results show that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9), table detection (94.23 → 96.55), and text detection for OCR (93.07 → 94.29). "DiT: Self-supervised Pre-training for Document Image Transformer" ACM MM 2022
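
A minimal inference sketch via the transformers integration, assuming the microsoft/dit-base-finetuned-rvlcdip checkpoint (DiT fine-tuned on RVL-CDIP for document image classification):

from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

processor = AutoImageProcessor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")

image = Image.open("scan.png").convert("RGB")  # a scanned document page
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "invoice"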

***** October, 2021: WavLM release *****

  •  WavLM (October 27, 2021): WavLM, a new pre-trained speech model for full-stack downstream speech tasks. WavLM integrates a gated relative position embedding structure and an utterance mixing method to model both spoken content and speaker identity. WavLM is trained on 94k hours of public audio data, which is larger than other released checkpoints for English speech modeling. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark and brings significant improvements for various speech processing tasks on their representative benchmarks. "WavLM: Large-Scale Self-Supervised Pre-training for Full Stack Speech Processing"
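
A minimal feature-extraction sketch via the transformers integration, assuming the microsoft/wavlm-base-plus checkpoint and 16 kHz mono audio:

import torch
from transformers import AutoFeatureExtractor, WavLMModel

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")

waveform = torch.randn(16000)  # 1 second of dummy 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
hidden = model(**inputs).last_hidden_state  # frame-level speech representations
print(hidden.shape)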

***** October, 2021: MarkupLM release *****

  •  MarkupLM (October 19, 2021): MarkupLM, a simple yet effective pre-training approach for text and markup language. Built on the Transformer architecture, MarkupLM integrates different input embeddings, including text embeddings, position embeddings, and XPath embeddings. Furthermore, we also propose new pre-training objectives specially designed for understanding markup language. We evaluate the pre-trained MarkupLM model on the WebSRC and SWDE datasets. Experiments show that MarkupLM significantly outperforms several SOTA baselines on these tasks. "MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding" ACL 2022
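
A minimal usage sketch, assuming the MarkupLM classes shipped in recent transformers releases and the microsoft/markuplm-base checkpoint (the processor's HTML parsing requires beautifulsoup4):

from transformers import MarkupLMModel, MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
model = MarkupLMModel.from_pretrained("microsoft/markuplm-base")

html = "<html><body><h1>Hello</h1><p>A tiny page.</p></body></html>"
inputs = processor(html, return_tensors="pt")  # extracts nodes and their XPaths
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)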

***** September, 2021: TrOCR release *****

  •  TrOCR (September 22, 2021): Transformer-based OCR with pre-trained models, which leverages the Transformer architecture for both image understanding and BPE-level text generation. The TrOCR model is simple but effective (convolution-free), and can be pre-trained with large-scale synthetic data and fine-tuned on human-labeled datasets. "TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models"
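
A minimal inference sketch via the transformers integration, assuming the microsoft/trocr-base-handwritten checkpoint and a cropped single-line text image:

from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("line.png").convert("RGB")  # one cropped line of text
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])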

***** August, 2021: LayoutReader release *****

***** August, 2021: DeltaLM release *****

***** July, 2021: BEiT release *****

***** June, 2021: LayoutXLM | AdaLM | MiniLMv2 release *****

***** May, 2021: LayoutLMv2 | LayoutXLM release *****

  •  LayoutLM 2.0 (December 29, 2020): multimodal pre-training for visually-rich document understanding that leverages text, layout, and image information in a single framework. It sets new SOTA on a wide range of document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.834 → 0.852), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672). "LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding" ACL 2021

***** February, 2020: UniLM v2 | MiniLM v1 | LayoutLM v1 | s2s-ft v1 release *****

***** October 1st, 2019: UniLM v1 release *****

Download details:

Author: microsoft
Source code: https://github.com/microsoft/unilm 
License: MIT license

#python #ArtificialIntelligence #ai #machinelearning 

Jupyter Notebook Kernel for Running Ansible Tasks and Playbooks

Ansible Jupyter Kernel

Example Jupyter Usage

The Ansible Jupyter Kernel adds a kernel backend for Jupyter to interface directly with Ansible and construct plays and tasks and execute them on the fly.

Demo

Installation

ansible-kernel is available from PyPI, but you can also install it from a local checkout. In either case, the setup package registers the kernel with Jupyter automatically.

From PyPI

pip install ansible-kernel
python -m ansible_kernel.install

From a local checkout

pip install -e .
python -m ansible_kernel.install

For Anaconda/Miniconda

pip install ansible-kernel
python -m ansible_kernel.install --sys-prefix

Usage

Local install

    jupyter notebook
    # In the notebook interface, select Ansible from the 'New' menu

Container

docker run -p 8888:8888 benthomasson/ansible-jupyter-kernel

Then copy the URL from the output into your browser:
http://localhost:8888/?token=ABCD1234

Using the Cells

Normally, Ansible brings together various components from different files and locations to launch a playbook and perform automation tasks. For this Jupyter interface, you provide that information in cells by denoting what each cell contains and then writing the tasks that will make use of it. There are examples available to help you; in this section we'll go over the currently supported cell types.

To denote what a cell contains, prefix its first line with a pound/hash symbol (#) followed by one of the types listed below, as shown in the examples.

#inventory

The inventory that your tasks will use

#inventory
[all]
ahost ansible_connection=local
anotherhost examplevar=val

#play

This represents the opening block of a typical Ansible play

#play
name: Hello World
hosts: all
gather_facts: false

#task

This is the default cell type if no type is given for the first line

#task
debug:
#task
shell: cat /tmp/afile
register: output

#host_vars

This takes an argument that represents the hostname. Variables defined in this file will be available in the tasks for that host.

#host_vars Host1
hostname: host1

#group_vars

This takes an argument that represents the group name. Variables defined in this file will be available in the tasks for hosts in that group.

#group_vars BranchOfficeX
gateway: 192.168.1.254

#vars

This takes an argument that represents the filename for use in later cells

#vars example_vars
message: hello vars
#play
name: hello world
hosts: localhost
gather_facts: false
vars_files:
    - example_vars

#template

This takes an argument in order to create a templated file that can be used in later cells

#template hello.j2
{{ message }}
#task
template:
    src: hello.j2
    dest: /tmp/hello

#ansible.cfg

Provides overrides typically found in ansible.cfg

#ansible.cfg
[defaults]
host_key_checking=False

Examples

You can find various example notebooks in the repository

Using the development environment

It's possible to use whatever Python development process you feel comfortable with. The repository itself includes mechanisms for using pipenv:

pipenv install
...
pipenv shell

Author: ansible
Source Code:  https://github.com/ansible/ansible-jupyter-kernel
License: Apache-2.0 License

#jupyter #python 

200 hour Yoga Teacher Training Course in Ghaziabad, India | Divyaa Yoga Institute

Yoga gives peace to the body and mind, which helps in living a healthy and happy life. It comes with many benefits for both mental and physical health. Meditation and yoga can help with many ailments, and after seeing the results across the world, people are getting more into both. Many practitioners are trying to motivate others to shift towards yoga by setting aside a little time from their daily routine. It's not a bad idea to start a career in yoga as a trainer, teacher, consultant, or therapist if you are interested in it and plan to practice for the long term.

Divyaa Yoga Institute has launched a 200 hour yoga teacher training certification course in Ghaziabad and has emerged as a world-class professional yoga institute in Delhi NCR for people who love yoga and are ready to make a career of it. This leading yoga teacher training institute in India provides group yoga classes, yoga certification courses, yoga workshops, and many other courses that help in gaining professional knowledge.

The 200 hour yoga teacher training course in India follows a professional syllabus that starts from the basics and progresses to the advanced level. A personal female or male yoga trainer is provided to students according to their requirements.

The 200 hour course syllabus includes mantra chanting, which releases positive energy from your mind and helps decrease negative thoughts. The study of asanas is one of the most important parts of the syllabus. The trainers take care of proper posture and body alignment so that the risk of injury is reduced.

When teachers teach, they want to be sure that every student understands things properly. A personal yoga trainer is available on demand to keep students comfortable during practice. Investing in yoga will make your life smoother and happier. Taking up yoga as a career is an excellent option, because on this journey you will pass the knowledge of being healthy, happy, and calm on to others. You will feel great when you become the reason for the happiness of the thousands of people who come to you in search of stability and calm in their lives.

For beginners, it is very important to perform poses carefully to avoid injury. The trainers use props in the early stages so that beginners can improve within a few days. Steady improvement plays a crucial role, and at Divyaa Yoga Institute professional teachers take care of every little thing so that each person finds satisfaction in terms of peace, happiness, and whatever their goal is after adding meditation and yoga to their life schedule.

On the launch of the 200 hour yoga teacher training course, the owner of Divyaa Yoga Institute said: "Yoga is an ancient practice and meditation that is now on everyone's tongue. People are becoming familiar with yoga because of positive results all across the world. Yoga has emerged as a complementary treatment for heart and other health issues. We are trying to encourage people to give their mind and body relief from the stress and tension that have built up in their lives because of pressures and duties."

He adds about the course: "We are offering a 200 hour yoga teacher training course for people who have an interest or some experience in the yoga profession. Now you can convert your interest into a profession by opting for our professional yoga courses in India. We have experienced professional yoga trainers at our institute from across the world, sharing their experience with people who are willing to make yoga part of the rest of their lives."

About Divyaa Yoga Institute
Divyaa Yoga Institute is a leading international yoga school in Ghaziabad that provides several yoga programs, including yoga workshops, group yoga classes, corporate yoga classes, private yoga classes, and stress management & spiritual classes in Ghaziabad.

If you are interested in yoga, whether you have experience or not, and want to take yoga as your profession, then Divyaa Yoga Institute is the right place for your goal. They will give shape to your interest and develop your yoga skills in order to make you a professional yoga trainer. They offer courses such as a 21-day Yoga for Better Living certification course, a meditation certificate course, and the 200 hour yoga teacher training course discussed above. Join today if you have a spark in you, and they will show you the path to a better life through yoga.

#200 hour yoga teacher training #200 hour yoga teacher training in ghaziabad #200 hour yoga teacher training in india #yoga teacher training course #yoga teacher training courses #teacher training courses

Microsoft Power BI Course Online Training Institute Hyderabad, Ameerpet, USA, UK @7993762900

AB Trainings provides the best Power BI certification training in Hyderabad, Ameerpet. It will help you learn Power BI concepts such as Microsoft Power BI Desktop layouts, BI reports, dashboards, and Power BI DAX commands and functions. In this Power BI course, you will learn to experiment with, fix, prepare, and present data quickly and easily.

We offer Microsoft Power BI classroom and online training covering Power BI Desktop, data modeling, visualization, DAX, Power BI Service, and a live project, along with Power BI online classes, Power BI certification, Power BI job support, Power BI proxy, a Power BI coaching center, a Power BI training institute, Power BI Service/Server/Workspace, and corporate trainings in Hyderabad, Ameerpet, Bangalore, Pune, Chennai, Delhi, Noida, Kerala, India, USA, UK, Canada, Dubai, the Middle East, Japan, Germany, Switzerland, Austria, Spain, Australia, Malaysia, Italy, South Africa, Saudi Arabia, Singapore, China, Russia, and Ukraine.
Power BI training in Hyderabad will help you get the most out of online Power BI training, enabling you to solve business problems and improve operations. This Power BI online course helps you master developing dashboards from published reports, discover better insights from your data, and work through practical recipes for the various tasks you can do with Microsoft Power BI training in Hyderabad.

#power bi training hyderabad #microsoft power bi training in hyderabad #microsoft power bi classroom training in hyderabad #top power bi course training institute in hyderabad #power bi certification training in hyderabad #power bi online training

How to Perform CRUD Operations in JavaScript by Building a Todo App

Today we're going to learn how to perform CRUD operations in JavaScript by building a Todo app. Let's get started 🔥

This is the app we're building today:

The app we're building today

What is CRUD?

CRUD stands for:

  • C: Create
  • R: Read
  • U: Update
  • D: Delete

CRUD in full form

CRUD is a mechanism that lets you create data, read data, edit it, and delete it. In our case, we're going to build a Todo app, so we'll have 4 options: create tasks, read tasks, update tasks, and delete tasks.

Understanding the CRUD Principles

Before starting the tutorial, let's first understand the CRUD principles. For that, let's create a very, very simple social media app.

Social media app project

Setup

Project setup

For this project, we'll follow these steps:

  • Create 3 files named index.html, style.css, and main.js
  • Link the JavaScript and CSS files to index.html
  • Start your live server

HTML

Inside the body tag, create a div with the class name .container. There we'll have 2 sections, .left and .right 👇

<body>
  <h1>Social Media App</h1>
  <div class="container">

    <div class="left"></div>
    <div class="right"></div>

  </div>
</body>

On the left side, we'll create our posts. On the right side, we can view, update, and delete our posts. Now, create a form inside the .left div tag 👇

<div class="left">
  <form id="form">
    <label for="post"> Write your post here</label>
    <br><br>
    <textarea name="post" id="input" cols="30" rows="10"></textarea>
    <br> <br>
    <div id="msg"></div>
    <button type="submit">Post</button>
  </form>
</div>

Write this code inside the HTML so that we can display our posts on the right side 👇

<div class="right">
  <h3>Your posts here</h3>
  <div id="posts"></div>
</div>

Next, we'll insert the Font Awesome CDN inside the head tag to use its icons in our project 👇

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css" />

Now, let's make some sample posts with delete and edit icons. Write this code inside the div with the id #posts: 👇

<div id="posts">
  <div>
    <p>Hello world post 1</p>
    <span class="options">
      <i class="fas fa-edit"></i>
      <i class="fas fa-trash-alt"></i>
    </span>
  </div>

  <div >
    <p>Hello world post 2</p>
    <span class="options">
      <i class="fas fa-edit"></i>
      <i class="fas fa-trash-alt"></i>
    </span>
  </div>
</div>

The result so far looks like this:

HTML markup result

CSS

Adding CSS for project 1

Let's keep it simple. Write these styles inside your stylesheet: 👇

body {
  font-family: sans-serif;
  margin: 0 50px;
}

.container {
  display: flex;
  gap: 50px;
}

#posts {
  width: 400px;
}

i {
  cursor: pointer;
}

Now, write these styles for the post div and option icons: 👇

#posts div {
  display: flex;
  align-items: center;
  justify-content: space-between;
}

.options {
  display: flex;
  gap: 25px;
}

#msg {
  color: red;
}

The results so far look like this: 👇

The result after adding the CSS (project 1)

The JavaScript Part

Starting the JavaScript part

We'll move forward with the project according to this flowchart. Don't worry, I'll explain everything along the way. 👇

Flowchart

Form Validation

First, let's target all the ID selectors from the HTML in JavaScript. Like this: 👇

let form = document.getElementById("form");
let input = document.getElementById("input");
let msg = document.getElementById("msg");
let posts = document.getElementById("posts");

Then, create a submit event listener for the form so that it can prevent our app's default behavior. At the same time, we'll create a function named formValidation. 👇

form.addEventListener("submit", (e) => {
  e.preventDefault();
  console.log("button clicked");

  formValidation();
});

let formValidation = () => {};

Now, let's make an if-else statement inside our formValidation function. This will help us prevent users from submitting blank input fields. 👇

let formValidation = () => {
  if (input.value === "") {
    msg.innerHTML = "Post cannot be blank";
    console.log("failure");
  } else {
    console.log("successs");
    msg.innerHTML = "";
  }
};

Here's the result so far: 👇

As you can see, a message also appears if the user tries to submit the form blank.

How to Accept Data from Input Fields

Whatever data we get from the input fields, we'll store it in an object. Let's create an object named data, and create a function named acceptData: 👇

let data = {};

let acceptData = () => {};

The main idea is that, using the function, we collect data from the inputs and store it in our object named data. Now let's finish building our acceptData function.

let acceptData = () => {
  data["text"] = input.value;
  console.log(data);
};

We also need the acceptData function to run when the user clicks the submit button. For that, we'll trigger this function in the else statement of our formValidation function. 👇

let formValidation = () => {
  if (input.value === "") {
    // Other codes are here
  } else {
    // Other codes are here
    acceptData();
  }
};

When we enter data and submit the form, we can see an object in the console holding our user's input values. Like this: 👇

Result so far in the console

How to Create Posts Using JavaScript Template Literals

To post our input data on the right side, we need to create a div element and append it to the posts div. First, let's create a function and write these lines: 👇

let createPost = () => {
  posts.innerHTML += ``;
};

The backticks are template literals. They'll work as a template for us. Here, we need 3 things: a parent div, the input itself, and the options div that carries the edit and delete icons. Let's finish our function 👇

let createPost = () => {
  posts.innerHTML += `
  <div>
    <p>${data.text}</p>
    <span class="options">
      <i onClick="editPost(this)" class="fas fa-edit"></i>
      <i onClick="deletePost(this)" class="fas fa-trash-alt"></i>
    </span>
  </div>
  `;
  input.value = "";
};

In our acceptData function, we'll trigger our createPost function. Like this: 👇

let acceptData = () => {
  // Other codes are here

  createPost();
};

The result so far: 👇

Result so far

So far so good, folks; we're almost done with project 1.

So far, so good

How to Delete a Post

To delete a post, first of all, let's create a function inside our JavaScript file:

let deletePost = (e) => {};

Next, we trigger this deletePost function inside all our delete icons using an onClick attribute. You'll write these lines in the HTML and in the template literal. 👇

<i onClick="deletePost(this)" class="fas fa-trash-alt"></i>

The this keyword refers to the element that fired the event. In our case, this refers to the delete button.

Look carefully: the parent of the delete button is the span with the class name options, and the parent of that span is the div. So we write parentElement twice to jump from the delete icon up to the div and target it directly for removal.

Let's finish our function. 👇

let deletePost = (e) => {
  e.parentElement.parentElement.remove();
};

The result so far: 👇

Deleting a post

How to Edit a Post

To edit a post, first of all, let's create a function inside our JavaScript file:

let editPost = (e) => {};

Next, we trigger this editPost function inside all our edit icons using an onClick attribute. You'll write these lines in the HTML and in the template literal. 👇

<i onClick="editPost(this)" class="fas fa-edit"></i>

The this keyword refers to the element that fired the event. In our case, this refers to the edit button.

Look carefully: the parent of the edit button is the span with the class name options, and the parent of that span is the div. So we write parentElement twice to jump from the edit icon up to the div and target it directly for removal.

Then we bring whatever data is inside the post back into the input field for editing.

Let's finish our function. 👇

let editPost = (e) => {
  input.value = e.parentElement.previousElementSibling.innerHTML;
  e.parentElement.parentElement.remove();
};

The result so far: 👇

Editing a post

Take a Break!

Take a break

Congratulations, everyone, on completing project 1. Now, take a short break!

How to Build a To-Do App Using CRUD Operations

Let's build a todo app

Let's start building project 2, which is a To-Do app.

Project Setup

Project setup

For this project, we'll follow these steps:

  • Create 3 files named index.html, style.css, and main.js
  • Link the JavaScript and CSS files to index.html
  • Start our live server

HTML

Write this starter code inside the HTML file: 👇

<div class="app">
  <h4 class="mb-3">TODO App</h4>

  <div id="addNew" data-bs-toggle="modal" data-bs-target="#form">
    <span>Add New Task</span>
    <i class="fas fa-plus"></i>
  </div>
</div>

The div with the id addNew is the button that will open the modal. The span is displayed on the button. The i is the Font Awesome icon.

We're going to use Bootstrap to build our modal. We'll use the modal to add new tasks. For that, add the Bootstrap CDN links inside the head tag. 👇

<link
  href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/css/bootstrap.min.css"
  rel="stylesheet"
  integrity="sha384-1BmE4kWBq78iYhFldvKuhfTAU6auU8tT94WrHftjDbrCEXSU1oBoqyl2QvZ6jIW3"
  crossorigin="anonymous"
/>

<script
  src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js"
  integrity="sha384-ka7Sk0Gln4gmtz2MlQnikT1wXgYsOg+OMhuP+IlRH9sENBO0LRn5q+8nbTov4+1p"
  crossorigin="anonymous"
></script>

To view the created tasks, we'll use a div with the id tasks, inside the div with the class name app. 👇

<h5 class="text-center my-3">Tasks</h5>

<div id="tasks"></div>

Insert the Font Awesome CDN inside the head tag to use its icons in our project 👇

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css" />

Copy and paste the code below, which comes from the Bootstrap modal. It carries a form with 3 input fields and a submit button. If you'd like, you can find it on the Bootstrap website by typing 'modal' into the search bar.

<!-- Modal -->
<form
  class="modal fade"
  id="form"
  tabindex="-1"
  aria-labelledby="exampleModalLabel"
  aria-hidden="true"
>
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header">
        <h5 class="modal-title" id="exampleModalLabel">Add New Task</h5>
        <button
          type="button"
          class="btn-close"
          data-bs-dismiss="modal"
          aria-label="Close"
        ></button>
      </div>
      <div class="modal-body">
        <p>Task Title</p>
        <input type="text" class="form-control" name="" id="textInput" />
        <div id="msg"></div>
        <br />
        <p>Due Date</p>
        <input type="date" class="form-control" name="" id="dateInput" />
        <br />
        <p>Description</p>
        <textarea
          name=""
          class="form-control"
          id="textarea"
          cols="30"
          rows="5"
        ></textarea>
      </div>
      <div class="modal-footer">
        <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">
          Close
        </button>
        <button type="submit" id="add" class="btn btn-primary">Add</button>
      </div>
    </div>
  </div>
</form>

The result so far: 👇

HTML file setup

We're done with the HTML file setup. Let's start the CSS.

CSS

Adding the CSS part

Add these styles to the body so that we can keep our app in the exact center of the screen.

body {
  font-family: sans-serif;
  margin: 0 50px;
  background-color: #e5e5e5;
  overflow: hidden;
  display: flex;
  justify-content: center;
  align-items: center;
  height: 100vh;
}

Let's style the div with the class name app. 👇

.app {
  background-color: #fff;
  width: 300px;
  height: 500px;
  border: 5px solid #abcea1;
  border-radius: 8px;
  padding: 15px;
}

The result so far: 👇

App styles

Now, let's style the button with the id addNew. 👇

#addNew {
  display: flex;
  justify-content: space-between;
  align-items: center;
  background-color: rgba(171, 206, 161, 0.35);
  padding: 5px 10px;
  border-radius: 5px;
  cursor: pointer;
}
.fa-plus {
  background-color: #abcea1;
  padding: 3px;
  border-radius: 3px;
}

The result so far: 👇

Add New Task button

If you click the button, the modal pops up like this: 👇

Modal popup

Adding the JS

Adding the JavaScript

In the JavaScript file, first of all, select all the selectors from the HTML that we need to use. 👇

let form = document.getElementById("form");
let textInput = document.getElementById("textInput");
let dateInput = document.getElementById("dateInput");
let textarea = document.getElementById("textarea");
let msg = document.getElementById("msg");
let tasks = document.getElementById("tasks");
let add = document.getElementById("add");

Form Validations

We can't let a user submit blank input fields. So, we need to validate the input fields. 👇

form.addEventListener("submit", (e) => {
  e.preventDefault();
  formValidation();
});

let formValidation = () => {
  if (textInput.value === "") {
    console.log("failure");
    msg.innerHTML = "Task cannot be blank";
  } else {
    console.log("success");
    msg.innerHTML = "";
  }
};

Also, add this rule inside the CSS:

#msg {
  color: red;
}

The result so far: 👇

As you can see, the validation is working. The JavaScript code doesn't let the user submit blank input fields; otherwise, they'll see an error message.

How to Collect Data and Use Local Storage

Whatever inputs the user types, we need to collect them and store them in local storage.

First, we collect the data from the input fields using a function named acceptData and an array named data. Then we push it into local storage like this: 👇

let data = [];

let acceptData = () => {
  data.push({
    text: textInput.value,
    date: dateInput.value,
    description: textarea.value,
  });

  localStorage.setItem("data", JSON.stringify(data));

  console.log(data);
};

Also note that this will never work unless you invoke the acceptData function inside the else statement of the form validation. Follow along here: 👇

let formValidation = () => {
  if (textInput.value === "") {
    // Other code is here
  } else {
    // Other code is here
    acceptData();
  }
};

You may have noticed that the modal doesn't close automatically. To solve this, write this small function inside the else statement of the form validation: 👇

let formValidation = () => {
  if (textInput.value === "") {
    // Other code is here
  } else {
    // Other code is here
    acceptData();
    add.setAttribute("data-bs-dismiss", "modal");
    add.click();

    (() => {
      add.setAttribute("data-bs-dismiss", "");
    })();
  }
};

If you open Chrome dev tools, go to Application, and open Local Storage, you can see this result: 👇

Local storage result

How to Create New Tasks

To create a new task, we need to create a function, use template literals to create the HTML elements, and use map to insert the data collected from the user into the template. Follow along here: 👇

let createTasks = () => {
  tasks.innerHTML = "";
  data.map((x, y) => {
    return (tasks.innerHTML += `
    <div id=${y}>
          <span class="fw-bold">${x.text}</span>
          <span class="small text-secondary">${x.date}</span>
          <p>${x.description}</p>
  
          <span class="options">
            <i onClick= "editTask(this)" data-bs-toggle="modal" data-bs-target="#form" class="fas fa-edit"></i>
            <i onClick ="deleteTask(this);createTasks()" class="fas fa-trash-alt"></i>
          </span>
        </div>
    `);
  });

  resetForm();
};

Also note that the function will never run unless you invoke it inside the acceptData function, like this: 👇

let acceptData = () => {
  // Other codes are here

  createTasks();
};

Once we're done collecting and accepting data from the user, we need to clear the input fields. For that we create a function named resetForm. Follow along: 👇

let resetForm = () => {
  textInput.value = "";
  dateInput.value = "";
  textarea.value = "";
};

The result so far: 👇

Adding task cards

As you can see, the card has no styles yet. Let's add some: 👇

#tasks {
  display: grid;
  grid-template-columns: 1fr;
  gap: 14px;
}

#tasks div {
  border: 3px solid #abcea1;
  background-color: #e2eede;
  border-radius: 6px;
  padding: 5px;
  display: grid;
  gap: 4px;
}

Style the edit and delete buttons with this code: 👇

#tasks div .options {
  justify-self: center;
  display: flex;
  gap: 20px;
}

#tasks div .options i {
  cursor: pointer;
}

The result so far: 👇

Styled card templates

Function to Delete a Task

Look carefully here: I added 3 lines of code inside the function.

  • The first line removes the HTML element from the screen,
  • the second line removes the targeted task from the data array,
  • and the third line updates local storage with the new data.
let deleteTask = (e) => {
  e.parentElement.parentElement.remove();

  data.splice(e.parentElement.parentElement.id, 1);

  localStorage.setItem("data", JSON.stringify(data));

  console.log(data);
};

Now create a dummy task and try deleting it. The result so far looks like this: 👇

Function to Edit Tasks

Look carefully here: I added 5 lines of code inside the function.

  • Line 1 targets the task we selected to edit
  • Lines 2, 3, and 4 target the values [task, date, description] of the task we selected to edit
  • Line 5 runs the delete function to remove the selected data from local storage, the HTML element, and the data array.
let editTask = (e) => {
  let selectedTask = e.parentElement.parentElement;

  textInput.value = selectedTask.children[0].innerHTML;
  dateInput.value = selectedTask.children[1].innerHTML;
  textarea.value = selectedTask.children[2].innerHTML;

  deleteTask(e);
};

Now, try creating a dummy task and editing it. The result so far: 👇

Editing a task

How to Get Data from Local Storage

If you refresh the page, you'll notice all your data is gone. To solve that problem, we run an IIFE (Immediately Invoked Function Expression) to retrieve the data from local storage. Follow along: 👇

(() => {
  data = JSON.parse(localStorage.getItem("data")) || [];
  console.log(data);
  createTasks();
})();

Now the data will appear even if you refresh the page.

Conclusion

Congratulations

Congratulations on successfully completing this tutorial. You've learned how to build a todo list app using CRUD operations. Now you can build your own CRUD app using this tutorial.

Here's your medal for reading to the end. ❤️

Source: https://www.freecodecamp.org/news/learn-crud-operations-in-javascript-by-building-todo-app/

#javascript #crud #operator #todoapp