1627037100
Hey, hello everyone. This continues my series of knowledge-sharing videos about Vue.js, but this time on Nuxt.js. Nuxt.js has firmly established its position and is used in many real-world projects. So I will share with everyone the knowledge I have understood, accumulated, and picked up along the way. As is my old habit, I will go very slowly in this course so that everyone can gain a more thorough understanding of the underlying concepts. (Updated - 31/05/2020)
This course draws on the following sources:
THANK YOU FOR ALL YOUR HELP!
Thank You
#rhpteam #nuxtjs #hocnuxtjs
#nuxtjs
1632537859
Not babashka. Node.js babashka!?
Ad-hoc CLJS scripting on Node.js.
Experimental. Please report issues here.
Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.
Additional goals and features are:
Nbb requires Node.js v12 or newer.
CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).
Install nbb from NPM:
$ npm install nbb -g
Omit -g for a local install.
Try out an expression:
$ nbb -e '(+ 1 2 3)'
6
And then install some other NPM libraries to use in the script. E.g.:
$ npm install csv-parse shelljs zx
Create a script which uses the NPM libraries:
(ns script
  (:require ["csv-parse/lib/sync$default" :as csv-parse]
            ["fs" :as fs]
            ["path" :as path]
            ["shelljs$default" :as sh]
            ["term-size$default" :as term-size]
            ["zx$default" :as zx]
            ["zx$fs" :as zxfs]
            [nbb.core :refer [*file*]]))
(prn (path/resolve "."))
(prn (term-size))
(println (count (str (fs/readFileSync *file*))))
(prn (sh/ls "."))
(prn (csv-parse "foo,bar"))
(prn (zxfs/existsSync *file*))
(zx/$ #js ["ls"])
Call the script:
$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs
Nbb has first-class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:
(defmacro plet
  [bindings & body]
  (let [binding-pairs (reverse (partition 2 bindings))
        body (cons 'do body)]
    (reduce (fn [body [sym expr]]
              (let [expr (list '.resolve 'js/Promise expr)]
                (list '.then expr (list 'clojure.core/fn (vector sym)
                                        body))))
            body
            binding-pairs)))
Using this macro, we can make async code look more like sync code. Consider this puppeteer example:
(-> (.launch puppeteer)
    (.then (fn [browser]
             (-> (.newPage browser)
                 (.then (fn [page]
                          (-> (.goto page "https://clojure.org")
                              (.then #(.screenshot page #js{:path "screenshot.png"}))
                              (.catch #(js/console.log %))
                              (.then #(.close browser)))))))))
Using plet this becomes:
(plet [browser (.launch puppeteer)
       page (.newPage browser)
       _ (.goto page "https://clojure.org")
       _ (-> (.screenshot page #js{:path "screenshot.png"})
             (.catch #(js/console.log %)))]
  (.close browser))
See the puppeteer example for the full code.
Since v0.0.36, nbb includes promesa, a library for dealing with promises. The above plet macro is similar to promesa.core/let.
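For example, the plet macro defined above can be exercised with plain JS promises (a minimal sketch; the printed result assumes the promises resolve as written):

```clojure
;; Each binding is wrapped in (.then (js/Promise.resolve expr) ...),
;; so later bindings can use earlier resolved values.
(plet [x (js/Promise.resolve 1)
       y (js/Promise.resolve (inc x))]
  (prn (+ x y)))
;; prints 3
```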
$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)' 0.17s user 0.02s system 109% cpu 0.168 total
The baseline startup time for a script is about 170 ms on my laptop. When invoked via npx this adds another 300 ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.
Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.
To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.
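As a sketch of that convention (the file and namespace names here are hypothetical), a namespace acme.str-utils would live at acme/str_utils.cljs relative to a classpath entry:

```clojure
;; acme/str_utils.cljs (hypothetical file on the classpath;
;; note the underscore in the path for the dash in the namespace)
(ns acme.str-utils)

(defn shout [s]
  (.toUpperCase s))

;; script.cljs, in the current dir:
(ns script
  (:require [acme.str-utils :as su]))

(prn (su/shout "hello"))
;; "HELLO"
```

If acme/str_utils.cljs sits under the current dir, nbb script.cljs finds it automatically; otherwise pass the containing dir via --classpath.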
To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:
$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"
and then feed it to the --classpath argument:
$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]
Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.
The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:
(ns foo
  (:require [nbb.core :refer [*file*]]))
(prn *file*) ;; "/private/tmp/foo.cljs"
(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"
Nbb includes reagent.core, which will be lazily loaded when required. You can use this together with ink to create a TUI application:
$ npm install ink
ink-demo.cljs:
(ns ink-demo
  (:require ["ink" :refer [render Text]]
            [reagent.core :as r]))
(defonce state (r/atom 0))
(doseq [n (range 1 11)]
  (js/setTimeout #(swap! state inc) (* n 500)))
(defn hello []
  [:> Text {:color "green"} "Hello, world! " @state])
(render (r/as-element [hello]))
Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core namespace is included, with the let and do! macros. An example:
(ns prom
  (:require [promesa.core :as p]))

(defn sleep [ms]
  (js/Promise.
   (fn [resolve _]
     (js/setTimeout resolve ms))))
(defn do-stuff
  []
  (p/do!
   (println "Doing stuff which takes a while")
   (sleep 1000)
   1))
(p/let [a (do-stuff)
        b (inc a)
        c (do-stuff)
        d (+ b c)]
  (prn d))
$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3
Also see API docs.
Since nbb v0.0.75 applied-science/js-interop is available:
(ns example
  (:require [applied-science.js-interop :as j]))
(def o (j/lit {:a 1 :b 2 :c {:d 1}}))
(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1
Most of this library is supported in nbb, except the following:
:syms
.-x notation (in nbb, you must use keywords)
See the example of what is currently supported.
See the examples directory for small examples.
Also check out these projects built with nbb:
See API documentation.
See this gist on how to convert an nbb script or project to shadow-cljs.
Prerequisites:
To build:
bb release
Run bb tasks for more project-related tasks.
Download Details:
Author: borkdude
Download Link: Download The Source Code
Official Website: https://github.com/borkdude/nbb
License: EPL-1.0
#node #javascript
1664349137
15/Sep/2022
Paper accepted at NeurIPS 2022.
9/Sep/2022
ConvMAE-v2 pretrained checkpoints are released.
21/Aug/2022
Official-ConvMAE-Det which follows official ViTDet codebase is released.
08/Jun/2022
🚀FastConvMAE🚀: significantly accelerates the pretraining hours (4000 single GPU hours => 200 single GPU hours). The code is going to be released at FastConvMAE.
27/May/2022
20/May/2022
Update results on video classification.
16/May/2022
The supported codes and models for COCO object detection and instance segmentation are available.
11/May/2022
08/May/2022
The preprint version is public at arxiv.
ConvMAE framework demonstrates that multi-scale hybrid convolution-transformer can learn more discriminative representations via the mask auto-encoding scheme.
The following table provides pretrained checkpoints and logs used in the paper.
| | ConvMAE-Base |
| :---: | :---: |
| pretrained checkpoints | download |
| logs | download |
The following results are for ConvMAE-v2 (pretrained for 200 epochs on ImageNet-1k).
| model | pretrained checkpoints | ft. acc. on ImageNet-1k |
| :---: | :---: | :---: |
| ConvMAE-v2-Small | download | 83.6 |
| ConvMAE-v2-Base | download | 85.7 |
| ConvMAE-v2-Large | download | 86.8 |
| ConvMAE-v2-Huge | download | 88.0 |
Models | #Params(M) | Supervision | Encoder Ratio | Pretrain Epochs | FT acc@1(%) | LIN acc@1(%) | FT logs/weights | LIN logs/weights |
---|---|---|---|---|---|---|---|---|
BEiT | 88 | DALLE | 100% | 300 | 83.0 | 37.6 | - | - |
MAE | 88 | RGB | 25% | 1600 | 83.6 | 67.8 | - | - |
SimMIM | 88 | RGB | 100% | 800 | 84.0 | 56.7 | - | - |
MaskFeat | 88 | HOG | 100% | 300 | 83.6 | N/A | - | - |
data2vec | 88 | RGB | 100% | 800 | 84.2 | N/A | - | - |
ConvMAE-B | 88 | RGB | 25% | 1600 | 85.0 | 70.9 | log/weight |
Models | Pretrain | Pretrain Epochs | Finetune Epochs | #Params(M) | FLOPs(T) | box AP | mask AP | logs/weights |
---|---|---|---|---|---|---|---|---|
Swin-B | IN21K w/ labels | 90 | 36 | 109 | 0.7 | 51.4 | 45.4 | - |
Swin-L | IN21K w/ labels | 90 | 36 | 218 | 1.1 | 52.4 | 46.2 | - |
MViTv2-B | IN21K w/ labels | 90 | 36 | 73 | 0.6 | 53.1 | 47.4 | - |
MViTv2-L | IN21K w/ labels | 90 | 36 | 239 | 1.3 | 53.6 | 47.5 | - |
Benchmarking-ViT-B | IN1K w/o labels | 1600 | 100 | 118 | 0.9 | 50.4 | 44.9 | - |
Benchmarking-ViT-L | IN1K w/o labels | 1600 | 100 | 340 | 1.9 | 53.3 | 47.2 | - |
ViTDet | IN1K w/o labels | 1600 | 100 | 111 | 0.8 | 51.2 | 45.5 | - |
MIMDet-ViT-B | IN1K w/o labels | 1600 | 36 | 127 | 1.1 | 51.5 | 46.0 | - |
MIMDet-ViT-L | IN1K w/o labels | 1600 | 36 | 345 | 2.6 | 53.3 | 47.5 | - |
ConvMAE-B | IN1K w/o labels | 1600 | 25 | 104 | 0.9 | 53.2 | 47.1 | log/weight |
Models | Pretrain | Pretrain Epochs | Finetune Iters | #Params(M) | FLOPs(T) | mIoU | logs/weights |
---|---|---|---|---|---|---|---|
DeiT-B | IN1K w/ labels | 300 | 16K | 163 | 0.6 | 45.6 | - |
Swin-B | IN1K w/ labels | 300 | 16K | 121 | 0.3 | 48.1 | - |
MoCo V3 | IN1K | 300 | 16K | 163 | 0.6 | 47.3 | - |
DINO | IN1K | 400 | 16K | 163 | 0.6 | 47.2 | - |
BEiT | IN1K+DALLE | 1600 | 16K | 163 | 0.6 | 47.1 | - |
PeCo | IN1K | 300 | 16K | 163 | 0.6 | 46.7 | - |
CAE | IN1K+DALLE | 800 | 16K | 163 | 0.6 | 48.8 | - |
MAE | IN1K | 1600 | 16K | 163 | 0.6 | 48.1 | - |
ConvMAE-B | IN1K | 1600 | 16K | 153 | 0.6 | 51.7 | log/weight |
Models | Pretrain Epochs | Finetune Epochs | #Params(M) | Top1 | Top5 | logs/weights |
---|---|---|---|---|---|---|
VideoMAE-B | 200 | 100 | 87 | 77.8 | ||
VideoMAE-B | 800 | 100 | 87 | 79.4 | ||
VideoMAE-B | 1600 | 100 | 87 | 79.8 | ||
VideoMAE-B | 1600 | 100 (w/ Repeated Aug) | 87 | 80.7 | 94.7 | |
SpatioTemporalLearner-B | 800 | 150 (w/ Repeated Aug) | 87 | 81.3 | 94.9 | |
VideoConvMAE-B | 200 | 100 | 86 | 80.1 | 94.3 | Soon |
VideoConvMAE-B | 800 | 100 | 86 | 81.7 | 95.1 | Soon |
VideoConvMAE-B-MSD | 800 | 100 | 86 | 82.7 | 95.5 | Soon |
Models | Pretrain Epochs | Finetune Epochs | #Params(M) | Top1 | Top5 | logs/weights |
---|---|---|---|---|---|---|
VideoMAE-B | 200 | 40 | 87 | 66.1 | ||
VideoMAE-B | 800 | 40 | 87 | 69.3 | ||
VideoMAE-B | 2400 | 40 | 87 | 70.3 | ||
VideoConvMAE-B | 200 | 40 | 86 | 67.7 | 91.2 | Soon |
VideoConvMAE-B | 800 | 40 | 86 | 69.9 | 92.4 | Soon |
VideoConvMAE-B-MSD | 800 | 40 | 86 | 70.7 | 93.0 | Soon |
The pretraining and finetuning of our project are based on DeiT and MAE. The object detection and semantic segmentation parts are based on MIMDet and MMSegmentation respectively. Thanks for their wonderful work.
ConvMAE is released under the MIT License.
@article{gao2022convmae,
  title={ConvMAE: Masked Convolution Meets Masked Autoencoders},
  author={Gao, Peng and Ma, Teli and Li, Hongsheng and Dai, Jifeng and Qiao, Yu},
  journal={arXiv preprint arXiv:2205.03892},
  year={2022}
}
Author: Alpha-VL
Source Code: https://github.com/Alpha-VL/ConvMAE
License: MIT license
1629796171
This is the official implementation of our Focal Transformer -- "Focal Self-attention for Local-Global Interactions in Vision Transformers", by Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan and Jianfeng Gao.
Our Focal Transformer introduces a new self-attention mechanism, called focal self-attention, for vision transformers. In this new mechanism, each token attends to its closest surrounding tokens at fine granularity and to tokens far away at coarse granularity, and thus can capture both short- and long-range visual dependencies efficiently and effectively.
With our Focal Transformers, we achieved superior performance over the state-of-the-art vision Transformers on a range of public benchmarks. In particular, our Focal Transformer models with a moderate size of 51.1M and a larger size of 89.8M achieve 83.6 and 84.0 Top-1 accuracy, respectively, on ImageNet classification at 224x224 resolution. Using Focal Transformers as the backbones, we obtain consistent and substantial improvements over the current state-of-the-art methods for 6 different object detection methods trained with standard 1x and 3x schedules. Our largest Focal Transformer yields 58.7/58.9 box mAPs and 50.9/51.3 mask mAPs on COCO mini-val/test-dev, and 55.4 mIoU on ADE20K for semantic segmentation.
Model | Pretrain | Use Conv | Resolution | acc@1 | acc@5 | #params | FLOPs | Checkpoint | Config |
---|---|---|---|---|---|---|---|---|---|
Focal-T | IN-1K | No | 224 | 82.2 | 95.9 | 28.9M | 4.9G | download | yaml |
Focal-T | IN-1K | Yes | 224 | 82.7 | 96.1 | 30.8M | 4.9G | download | yaml |
Focal-S | IN-1K | No | 224 | 83.6 | 96.2 | 51.1M | 9.4G | download | yaml |
Focal-B | IN-1K | No | 224 | 84.0 | 96.5 | 89.8M | 16.4G | download | yaml |
Backbone | Pretrain | Lr Schd | #params | FLOPs | box mAP | mask mAP |
---|---|---|---|---|---|---|
Focal-T | ImageNet-1K | 1x | 49M | 291G | 44.8 | 41.0 |
Focal-T | ImageNet-1K | 3x | 49M | 291G | 47.2 | 42.7 |
Focal-S | ImageNet-1K | 1x | 71M | 401G | 47.4 | 42.8 |
Focal-S | ImageNet-1K | 3x | 71M | 401G | 48.8 | 43.8 |
Focal-B | ImageNet-1K | 1x | 110M | 533G | 47.8 | 43.2 |
Focal-B | ImageNet-1K | 3x | 110M | 533G | 49.0 | 43.7 |
Backbone | Pretrain | Lr Schd | #params | FLOPs | box mAP |
---|---|---|---|---|---|
Focal-T | ImageNet-1K | 1x | 39M | 265G | 43.7 |
Focal-T | ImageNet-1K | 3x | 39M | 265G | 45.5 |
Focal-S | ImageNet-1K | 1x | 62M | 367G | 45.6 |
Focal-S | ImageNet-1K | 3x | 62M | 367G | 47.3 |
Focal-B | ImageNet-1K | 1x | 101M | 514G | 46.3 |
Focal-B | ImageNet-1K | 3x | 101M | 514G | 46.9 |
Backbone | Pretrain | Method | Lr Schd | #params | FLOPs | box mAP |
---|---|---|---|---|---|---|
Focal-T | ImageNet-1K | Cascade Mask R-CNN | 3x | 87M | 770G | 51.5 |
Focal-T | ImageNet-1K | ATSS | 3x | 37M | 239G | 49.5 |
Focal-T | ImageNet-1K | RepPointsV2 | 3x | 45M | 491G | 51.2 |
Focal-T | ImageNet-1K | Sparse R-CNN | 3x | 111M | 196G | 49.0 |
Backbone | Pretrain | Method | Resolution | Iters | #params | FLOPs | mIoU | mIoU (MS) |
---|---|---|---|---|---|---|---|---|
Focal-T | ImageNet-1K | UPerNet | 512x512 | 160k | 62M | 998G | 45.8 | 47.0 |
Focal-S | ImageNet-1K | UPerNet | 512x512 | 160k | 85M | 1130G | 48.0 | 50.0 |
Focal-B | ImageNet-1K | UPerNet | 512x512 | 160k | 126M | 1354G | 49.0 | 50.5 |
Focal-L | ImageNet-22K | UPerNet | 640x640 | 160k | 240M | 3376G | 54.0 | 55.4 |
If you find this repo useful to your project, please consider citing it with the following bib:
@misc{yang2021focal,
title={Focal Self-attention for Local-Global Interactions in Vision Transformers},
author={Jianwei Yang and Chunyuan Li and Pengchuan Zhang and Xiyang Dai and Bin Xiao and Lu Yuan and Jianfeng Gao},
year={2021},
eprint={2107.00641},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Our codebase is built based on Swin-Transformer. We thank the authors for the nicely organized code!
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
Author: microsoft
Source Code: https://github.com/microsoft/Focal-Transformer
1626960900
Give me a design and coding challenge!
Day for #100DaysOfCode Challenge
Sources:
Trello : https://trello.com/invite/b/kGXI8zlV/d4a415ab005f801d82939d886232334e/100daysofcode
Figma https://figma.com/@kewcoder
Github https://github.com/kewcoder
#vue #vue js #nuxt js #nuxt #laravel #socket io