Hoang Lan

Using the Context Params of AsyncData() in Nuxt.js

Hey, hello everyone. This continues my series of knowledge-sharing videos about Vue.js, this time focused on Nuxt.js. Nuxt.js has firmly established its position and is used in many real-world production projects. So I will share with everyone the knowledge I have understood, accumulated, and picked up along the way. As is my old habit, I will go very slowly in this series so that everyone can get a more thorough understanding of the essence of each topic. (Updated - 31/05/2020)

This course draws on the following sources:

  • Nuxt.js Guidelines
  • Vueschool.io
  • Vuemastery
  • Udemy
  • Information contributed by the community on Facebook, RHP Team Discord, and Husmon Corp

THANK YOU FOR ALL OF YOUR HELP!
Thank You

#rhpteam #nuxtjs #hocnuxtjs

#nuxtjs

NBB: Ad-hoc CLJS Scripting on Node.js

Nbb

Not babashka. Node.js babashka!?

Ad-hoc CLJS scripting on Node.js.

Status

Experimental. Please report issues here.

Goals and features

Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.

Additional goals and features are:

  • Fast startup without relying on a custom version of Node.js.
  • Small artifact (current size is around 1.2MB).
  • First class macros.
  • Support building small TUI apps using Reagent.
  • Complement babashka with libraries from the Node.js ecosystem.

Requirements

Nbb requires Node.js v12 or newer.

How does this tool work?

CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).

Usage

Install nbb from NPM:

$ npm install nbb -g

Omit -g for a local install.

Try out an expression:

$ nbb -e '(+ 1 2 3)'
6

And then install some other NPM libraries to use in the script. E.g.:

$ npm install csv-parse shelljs zx

Create a script which uses the NPM libraries:

(ns script
  (:require ["csv-parse/lib/sync$default" :as csv-parse]
            ["fs" :as fs]
            ["path" :as path]
            ["shelljs$default" :as sh]
            ["term-size$default" :as term-size]
            ["zx$default" :as zx]
            ["zx$fs" :as zxfs]
            [nbb.core :refer [*file*]]))

(prn (path/resolve "."))

(prn (term-size))

(println (count (str (fs/readFileSync *file*))))

(prn (sh/ls "."))

(prn (csv-parse "foo,bar"))

(prn (zxfs/existsSync *file*))

(zx/$ #js ["ls"])

Call the script:

$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs

Macros

Nbb has first class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:

(defmacro plet
  [bindings & body]
  (let [binding-pairs (reverse (partition 2 bindings))
        body (cons 'do body)]
    (reduce (fn [body [sym expr]]
              (let [expr (list '.resolve 'js/Promise expr)]
                (list '.then expr (list 'clojure.core/fn (vector sym)
                                        body))))
            body
            binding-pairs)))
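
To see what this gives you, here is a minimal usage sketch (the binding values are illustrative):

(plet [x (js/Promise.resolve 1)
       y 2]
  (prn (+ x y)))
;; prints 3 once both bindings resolve

Each binding is wrapped in (.resolve js/Promise ...) and chained with .then, so plain values and promises can be mixed freely in the bindings.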

Using this macro, we can make async code look more like sync code. Consider this puppeteer example:

(-> (.launch puppeteer)
    (.then (fn [browser]
             (-> (.newPage browser)
                 (.then (fn [page]
                          (-> (.goto page "https://clojure.org")
                              (.then #(.screenshot page #js{:path "screenshot.png"}))
                              (.catch #(js/console.log %))
                              (.then #(.close browser)))))))))

Using plet this becomes:

(plet [browser (.launch puppeteer)
       page (.newPage browser)
       _ (.goto page "https://clojure.org")
       _ (-> (.screenshot page #js{:path "screenshot.png"})
             (.catch #(js/console.log %)))]
  (.close browser))

See the puppeteer example for the full code.

Since v0.0.36, nbb includes promesa, a library for dealing with promises. The plet macro above is similar to promesa.core/let.

Startup time

$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)'   0.17s  user 0.02s system 109% cpu 0.168 total

The baseline startup time for a script is about 170ms on my laptop. When invoked via npx this adds another 300ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.

Dependencies

NPM dependencies

Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.

Classpath

To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.
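
For instance, a hypothetical library file foo/my_lib.cljs (underscore on disk, dash in the namespace name) could look like this:

(ns foo.my-lib)

(defn greet [s]
  (str "Hello, " s "!"))

and a script run from the same directory can then require it:

(ns script
  (:require [foo.my-lib :as ml]))

(println (ml/greet "nbb")) ;; => Hello, nbb!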

To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:

$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"

and then feed it to the --classpath argument:

$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]

Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.

Current file

The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:

(ns foo
  (:require [nbb.core :refer [*file*]]))

(prn *file*) ;; "/private/tmp/foo.cljs"

(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"

Reagent

Nbb includes reagent.core which will be lazily loaded when required. You can use this together with ink to create a TUI application:

$ npm install ink

ink-demo.cljs:

(ns ink-demo
  (:require ["ink" :refer [render Text]]
            [reagent.core :as r]))

(defonce state (r/atom 0))

(doseq [n (range 1 11)]
  (js/setTimeout #(swap! state inc) (* n 500)))

(defn hello []
  [:> Text {:color "green"} "Hello, world! " @state])

(render (r/as-element [hello]))

Promesa

Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core namespace is included with the let and do! macros. An example:

(ns prom
  (:require [promesa.core :as p]))

(defn sleep [ms]
  (js/Promise.
   (fn [resolve _]
     (js/setTimeout resolve ms))))

(defn do-stuff
  []
  (p/do!
   (println "Doing stuff which takes a while")
   (sleep 1000)
   1))

(p/let [a (do-stuff)
        b (inc a)
        c (do-stuff)
        d (+ b c)]
  (prn d))

$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3

Also see API docs.

Js-interop

Since nbb v0.0.75 applied-science/js-interop is available:

(ns example
  (:require [applied-science.js-interop :as j]))

(def o (j/lit {:a 1 :b 2 :c {:d 1}}))

(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1

Most of this library is supported in nbb, except the following:

  • destructuring using :syms
  • property access using .-x notation. In nbb, you must use keywords.
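
As a quick sketch using the object o from the example above, keyword-based access works while the .-x form does not:

(prn (j/get o :a)) ;; => 1, keyword lookup is supported
;; (j/get o .-a)   ;; .-x notation works in regular CLJS js-interop, but not in nbb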

See the example of what is currently supported.

Examples

See the examples directory for small examples.

Also check out these projects built with nbb:

API

See API documentation.

Migrating to shadow-cljs

See this gist on how to convert an nbb script or project to shadow-cljs.

Build

Prerequisites:

  • babashka >= 0.4.0
  • Clojure CLI >= 1.10.3.933
  • Node.js 16.5.0 (a lower version may work, but this is the one I used to build)

To build:

  • Clone and cd into this repo
  • bb release

Run bb tasks for more project-related tasks.

Download Details:
Author: borkdude
Official Website: https://github.com/borkdude/nbb 
License: EPL-1.0

#node #javascript

Paris Kessler

ConvMAE: Masked Convolution Meets Masked Autoencoders on Python

[NeurIPS 2022] MCMAE: Masked Convolution Meets Masked Autoencoders

Updates

15/Sep/2022

Paper accepted at NeurIPS 2022.

9/Sep/2022

ConvMAE-v2 pretrained checkpoints are released.

21/Aug/2022

Official-ConvMAE-Det, which follows the official ViTDet codebase, is released.

08/Jun/2022

🚀FastConvMAE🚀: significantly accelerates pretraining (4000 single-GPU hours => 200 single-GPU hours). The code will be released at FastConvMAE.

27/May/2022

  1. Code for ImageNet-1K pretraining is provided.
  2. Code and models for semantic segmentation are provided.

20/May/2022

Updated results on video classification.

16/May/2022

Code and models for COCO object detection and instance segmentation are available.

11/May/2022

  1. Pretrained models on ImageNet-1K for ConvMAE are released.
  2. Code and models for ImageNet-1K finetuning and linear probing are provided.

08/May/2022

The preprint is available on arXiv.

Introduction

The ConvMAE framework demonstrates that a multi-scale hybrid convolution-transformer can learn more discriminative representations via the masked auto-encoding scheme.

  • We present ConvMAE, a strong and efficient self-supervised framework that is easy to implement but shows outstanding performance on downstream tasks.
  • ConvMAE naturally generates hierarchical representations and exhibits promising performance on object detection and segmentation.
  • ConvMAE-Base improves the ImageNet finetuning accuracy by 1.4% compared with MAE-Base. On object detection with Mask R-CNN, ConvMAE-Base achieves 53.2 box AP and 47.1 mask AP with a 25-epoch training schedule while MAE-Base attains 50.3 box AP and 44.9 mask AP with 100 training epochs. On ADE20K with UperNet, ConvMAE-Base surpasses MAE-Base by 3.6 mIoU (48.1 vs. 51.7).

Pretrain on ImageNet-1K

The following table provides pretrained checkpoints and logs used in the paper.

|  | ConvMAE-Base |
| :---: | :---: |
| pretrained checkpoints | download |
| logs | download |

The following results are for ConvMAE-v2 (pretrained for 200 epochs on ImageNet-1k).

| model | pretrained checkpoints | ft. acc. on ImageNet-1k |
| :---: | :---: | :---: |
| ConvMAE-v2-Small | download | 83.6 |
| ConvMAE-v2-Base | download | 85.7 |
| ConvMAE-v2-Large | download | 86.8 |
| ConvMAE-v2-Huge | download | 88.0 |

Main Results on ImageNet-1K

| Models | #Params(M) | Supervision | Encoder Ratio | Pretrain Epochs | FT acc@1(%) | LIN acc@1(%) | FT logs/weights | LIN logs/weights |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| BEiT | 88 | DALLE | 100% | 300 | 83.0 | 37.6 | - | - |
| MAE | 88 | RGB | 25% | 1600 | 83.6 | 67.8 | - | - |
| SimMIM | 88 | RGB | 100% | 800 | 84.0 | 56.7 | - | - |
| MaskFeat | 88 | HOG | 100% | 300 | 83.6 | N/A | - | - |
| data2vec | 88 | RGB | 100% | 800 | 84.2 | N/A | - | - |
| ConvMAE-B | 88 | RGB | 25% | 1600 | 85.0 | 70.9 | log/weight | - |

Main Results on COCO

Mask R-CNN

| Models | Pretrain | Pretrain Epochs | Finetune Epochs | #Params(M) | FLOPs(T) | box AP | mask AP | logs/weights |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-B | IN21K w/ labels | 90 | 36 | 109 | 0.7 | 51.4 | 45.4 | - |
| Swin-L | IN21K w/ labels | 90 | 36 | 218 | 1.1 | 52.4 | 46.2 | - |
| MViTv2-B | IN21K w/ labels | 90 | 36 | 73 | 0.6 | 53.1 | 47.4 | - |
| MViTv2-L | IN21K w/ labels | 90 | 36 | 239 | 1.3 | 53.6 | 47.5 | - |
| Benchmarking-ViT-B | IN1K w/o labels | 1600 | 100 | 118 | 0.9 | 50.4 | 44.9 | - |
| Benchmarking-ViT-L | IN1K w/o labels | 1600 | 100 | 340 | 1.9 | 53.3 | 47.2 | - |
| ViTDet | IN1K w/o labels | 1600 | 100 | 111 | 0.8 | 51.2 | 45.5 | - |
| MIMDet-ViT-B | IN1K w/o labels | 1600 | 36 | 127 | 1.1 | 51.5 | 46.0 | - |
| MIMDet-ViT-L | IN1K w/o labels | 1600 | 36 | 345 | 2.6 | 53.3 | 47.5 | - |
| ConvMAE-B | IN1K w/o labels | 1600 | 25 | 104 | 0.9 | 53.2 | 47.1 | log/weight |

Main Results on ADE20K

UperNet

| Models | Pretrain | Pretrain Epochs | Finetune Iters | #Params(M) | FLOPs(T) | mIoU | logs/weights |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| DeiT-B | IN1K w/ labels | 300 | 16K | 163 | 0.6 | 45.6 | - |
| Swin-B | IN1K w/ labels | 300 | 16K | 121 | 0.3 | 48.1 | - |
| MoCo V3 | IN1K | 300 | 16K | 163 | 0.6 | 47.3 | - |
| DINO | IN1K | 400 | 16K | 163 | 0.6 | 47.2 | - |
| BEiT | IN1K+DALLE | 1600 | 16K | 163 | 0.6 | 47.1 | - |
| PeCo | IN1K | 300 | 16K | 163 | 0.6 | 46.7 | - |
| CAE | IN1K+DALLE | 800 | 16K | 163 | 0.6 | 48.8 | - |
| MAE | IN1K | 1600 | 16K | 163 | 0.6 | 48.1 | - |
| ConvMAE-B | IN1K | 1600 | 16K | 153 | 0.6 | 51.7 | log/weight |

Main Results on Kinetics-400

| Models | Pretrain Epochs | Finetune Epochs | #Params(M) | Top1 | Top5 | logs/weights |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| VideoMAE-B | 200 | 100 | 87 | 77.8 | | |
| VideoMAE-B | 800 | 100 | 87 | 79.4 | | |
| VideoMAE-B | 1600 | 100 | 87 | 79.8 | | |
| VideoMAE-B | 1600 | 100 (w/ Repeated Aug) | 87 | 80.7 | 94.7 | |
| SpatioTemporalLearner-B | 800 | 150 (w/ Repeated Aug) | 87 | 81.3 | 94.9 | |
| VideoConvMAE-B | 200 | 100 | 86 | 80.1 | 94.3 | Soon |
| VideoConvMAE-B | 800 | 100 | 86 | 81.7 | 95.1 | Soon |
| VideoConvMAE-B-MSD | 800 | 100 | 86 | 82.7 | 95.5 | Soon |

Main Results on Something-Something V2

| Models | Pretrain Epochs | Finetune Epochs | #Params(M) | Top1 | Top5 | logs/weights |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| VideoMAE-B | 200 | 40 | 87 | 66.1 | | |
| VideoMAE-B | 800 | 40 | 87 | 69.3 | | |
| VideoMAE-B | 2400 | 40 | 87 | 70.3 | | |
| VideoConvMAE-B | 200 | 40 | 86 | 67.7 | 91.2 | Soon |
| VideoConvMAE-B | 800 | 40 | 86 | 69.9 | 92.4 | Soon |
| VideoConvMAE-B-MSD | 800 | 40 | 86 | 70.7 | 93.0 | Soon |

Getting Started

Prerequisites

  • Linux
  • Python 3.7+
  • CUDA 10.2+
  • GCC 5+

Training and evaluation

Visualization

Acknowledgement

The pretraining and finetuning of our project are based on DeiT and MAE. The object detection and semantic segmentation parts are based on MIMDet and MMSegmentation respectively. Thanks for their wonderful work.

License

ConvMAE is released under the MIT License.

Citation

@article{gao2022convmae,
  title={ConvMAE: Masked Convolution Meets Masked Autoencoders},
  author={Gao, Peng and Ma, Teli and Li, Hongsheng and Dai, Jifeng and Qiao, Yu},
  journal={arXiv preprint arXiv:2205.03892},
  year={2022}
}

Download Details:

Author: Alpha-VL
Source Code: https://github.com/Alpha-VL/ConvMAE

License: MIT license

#python 

Michio JP

Focal Transformer | Official Implementation of Focal Transformer

Focal Transformer

This is the official implementation of our Focal Transformer -- "Focal Self-attention for Local-Global Interactions in Vision Transformers", by Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan and Jianfeng Gao.

Introduction

Our Focal Transformer introduces a new self-attention mechanism, called focal self-attention, for vision transformers. In this new mechanism, each token attends to its closest surrounding tokens at fine granularity and to tokens far away at coarse granularity, and thus can capture both short- and long-range visual dependencies efficiently and effectively.

With our Focal Transformers, we achieved superior performance over the state-of-the-art vision Transformers on a range of public benchmarks. In particular, our Focal Transformer models with a moderate size of 51.1M and a larger size of 89.8M achieve 83.6 and 84.0 Top-1 accuracy, respectively, on ImageNet classification at 224x224 resolution. Using Focal Transformers as the backbones, we obtain consistent and substantial improvements over the current state-of-the-art across 6 different object detection methods trained with standard 1x and 3x schedules. Our largest Focal Transformer yields 58.7/58.9 box mAPs and 50.9/51.3 mask mAPs on COCO mini-val/test-dev, and 55.4 mIoU on ADE20K for semantic segmentation.

Benchmarking

Image Classification on ImageNet-1K

| Model | Pretrain | Use Conv | Resolution | acc@1 | acc@5 | #params | FLOPs | Checkpoint | Config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Focal-T | IN-1K | No | 224 | 82.2 | 95.9 | 28.9M | 4.9G | download | yaml |
| Focal-T | IN-1K | Yes | 224 | 82.7 | 96.1 | 30.8M | 4.9G | download | yaml |
| Focal-S | IN-1K | No | 224 | 83.6 | 96.2 | 51.1M | 9.4G | download | yaml |
| Focal-B | IN-1K | No | 224 | 84.0 | 96.5 | 89.8M | 16.4G | download | yaml |

Object Detection and Instance Segmentation on COCO

Mask R-CNN

| Backbone | Pretrain | Lr Schd | #params | FLOPs | box mAP | mask mAP |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Focal-T | ImageNet-1K | 1x | 49M | 291G | 44.8 | 41.0 |
| Focal-T | ImageNet-1K | 3x | 49M | 291G | 47.2 | 42.7 |
| Focal-S | ImageNet-1K | 1x | 71M | 401G | 47.4 | 42.8 |
| Focal-S | ImageNet-1K | 3x | 71M | 401G | 48.8 | 43.8 |
| Focal-B | ImageNet-1K | 1x | 110M | 533G | 47.8 | 43.2 |
| Focal-B | ImageNet-1K | 3x | 110M | 533G | 49.0 | 43.7 |

RetinaNet

| Backbone | Pretrain | Lr Schd | #params | FLOPs | box mAP |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Focal-T | ImageNet-1K | 1x | 39M | 265G | 43.7 |
| Focal-T | ImageNet-1K | 3x | 39M | 265G | 45.5 |
| Focal-S | ImageNet-1K | 1x | 62M | 367G | 45.6 |
| Focal-S | ImageNet-1K | 3x | 62M | 367G | 47.3 |
| Focal-B | ImageNet-1K | 1x | 101M | 514G | 46.3 |
| Focal-B | ImageNet-1K | 3x | 101M | 514G | 46.9 |

Other detection methods

| Backbone | Pretrain | Method | Lr Schd | #params | FLOPs | box mAP |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Focal-T | ImageNet-1K | Cascade Mask R-CNN | 3x | 87M | 770G | 51.5 |
| Focal-T | ImageNet-1K | ATSS | 3x | 37M | 239G | 49.5 |
| Focal-T | ImageNet-1K | RepPointsV2 | 3x | 45M | 491G | 51.2 |
| Focal-T | ImageNet-1K | Sparse R-CNN | 3x | 111M | 196G | 49.0 |

Semantic Segmentation on ADE20K

| Backbone | Pretrain | Method | Resolution | Iters | #params | FLOPs | mIoU | mIoU (MS) |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Focal-T | ImageNet-1K | UPerNet | 512x512 | 160k | 62M | 998G | 45.8 | 47.0 |
| Focal-S | ImageNet-1K | UPerNet | 512x512 | 160k | 85M | 1130G | 48.0 | 50.0 |
| Focal-B | ImageNet-1K | UPerNet | 512x512 | 160k | 126M | 1354G | 49.0 | 50.5 |
| Focal-L | ImageNet-22K | UPerNet | 640x640 | 160k | 240M | 3376G | 54.0 | 55.4 |

Getting Started

Citation

If you find this repo useful for your project, please consider citing it with the following bib entry:

@misc{yang2021focal,
    title={Focal Self-attention for Local-Global Interactions in Vision Transformers}, 
    author={Jianwei Yang and Chunyuan Li and Pengchuan Zhang and Xiyang Dai and Bin Xiao and Lu Yuan and Jianfeng Gao},
    year={2021},
    eprint={2107.00641},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgement

Our codebase is built on top of Swin-Transformer. We thank the authors for the nicely organized code!

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Download Details:

Author: microsoft

Source Code: https://github.com/microsoft/Focal-Transformer 

Lupe Connelly

Create Dating App (Vue Js Capacitor) Using Nuxt Js, Laravel, Socket IO - #3

Give me a design and coding challenge!

Day for #100DaysOfCode Challenge

Sources:
Trello: https://trello.com/invite/b/kGXI8zlV/d4a415ab005f801d82939d886232334e/100daysofcode
Figma: https://figma.com/@kewcoder
Github: https://github.com/kewcoder

#vue #vue js #nuxt js #nuxt #laravel #socket io