High-performance and extensible re-implementation of the Fortran 77 code RADEX (van der Tak et al. 2007, A&A 468, 627) in the Julia programming language. A Python wrapper is provided using PyJulia. Distinguishing features of this implementation include:
For cases where the same input parameters are used, results from Jadex are expected to match RADEX within five significant figures. These differences arise in part from the use of higher precision mathematical constants and from general numerical instability for levels with very small populations. Jadex has been validated against the RADEX wrapper SpectralRadex for a suite of species and physical conditions (see test/validation.jl).
To install Jadex, open an interactive Julia session, press the ] key to enter the package management mode, and execute the command add Jadex. To execute the test suite, run test Jadex from package mode.
To use the Python wrapper, first install Jadex per the above instructions and then follow the PyJulia installation instructions. Jadex can then be imported from Python by calling from julia import Jadex.
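A minimal sketch of that flow from the Python side (assuming PyJulia has been installed with pip; julia.install() is PyJulia's one-time setup step that links the Python interpreter with your Julia runtime):

import julia
julia.install()  # one-time setup: installs PyCall and links this Python with Julia

from julia import Jadex  # Jadex is now importable as a Python module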
For validation purposes, optional compilation instructions are included in src/wrap_slatec.jl for compiling and linking the slatec.f Fortran file from RADEX into a shared library. The resulting libslatec.so is then wrapped and can be called to factor the rate matrix and solve for the level populations.
Please refer to the online documentation at https://autocorr.github.io/Jadex.jl for the Quickstart guide, User Guide, and API reference. The documentation source files are also supplied in the docs/ folder distributed with Jadex.
If you use Jadex in an academic work, we ask that you cite the following references, including the original publication for RADEX (van der Tak et al. 2007):
@ARTICLE{2007A&A...468..627V,
author = {{van der Tak}, F.~F.~S. and {Black}, J.~H. and {Sch{\"o}ier}, F.~L. and {Jansen}, D.~J. and {van Dishoeck}, E.~F.},
title = "{A computer program for fast non-LTE analysis of interstellar line spectra. With diagnostic plots to interpret observed line intensity ratios}",
journal = {\aap},
keywords = {radiative transfer, methods: numerical, radio lines: ISM, infrared: ISM, submillimeter, Astrophysics},
year = 2007,
month = jun,
volume = {468},
number = {2},
pages = {627-635},
doi = {10.1051/0004-6361:20066820},
archivePrefix = {arXiv},
eprint = {0704.0155},
primaryClass = {astro-ph},
adsurl = {https://ui.adsabs.harvard.edu/abs/2007A&A...468..627V},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
If one uses the collision rate files from the LAMDA database, the following citation should be included in addition to the source references listed on the page for the species used.
@ARTICLE{2005A&A...432..369S,
author = {{Sch{\"o}ier}, F.~L. and {van der Tak}, F.~F.~S. and {van Dishoeck}, E.~F. and {Black}, J.~H.},
title = "{An atomic and molecular database for analysis of submillimetre line observations}",
journal = {\aap},
keywords = {astronomical data bases: miscellaneous, atomic data, molecular data, radiative transfer, ISM: atoms, ISM: molecules, Astrophysics},
year = 2005,
month = mar,
volume = {432},
number = {1},
pages = {369-379},
doi = {10.1051/0004-6361:20041729},
archivePrefix = {arXiv},
eprint = {astro-ph/0411110},
primaryClass = {astro-ph},
adsurl = {https://ui.adsabs.harvard.edu/abs/2005A&A...432..369S},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
Author: Autocorr
Source Code: https://github.com/autocorr/Jadex.jl
License: GPL-3.0 license
If we want to read data, then we need to use a call.
For example, if we want to check the balance of a contract, we use a call.
If we want to write data, then we need to use a transaction.
For example, if we want to transfer a token or send ETH from one contract to another, then we need to use a transaction.
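A minimal sketch of the difference using web3.py (assuming a local node at the default RPC port and an ERC-20-style contract; token_address, token_abi, and the account addresses are placeholders you must supply):

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
token = w3.eth.contract(address=token_address, abi=token_abi)  # placeholder address/ABI

# Reading data: a call runs locally against the node's state and costs no gas.
balance = token.functions.balanceOf(owner_address).call()

# Writing data: a transaction is signed, broadcast, and mined, and costs gas.
tx_hash = token.functions.transfer(recipient_address, 100).transact({"from": owner_address})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)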
Original article source at: https://www.c-sharpcorner.com/
Hello readers, in this article we will try to understand what the LDA algorithm is, how it works, and how it is implemented in Python. Latent Dirichlet Allocation is an algorithm that falls primarily within the domain of natural language processing (NLP).
It is used for topic modeling. Topic modeling is a machine learning technique performed on text data to analyze it and find abstract, similar topics among a collection of documents.
LDA is one of the topic modeling algorithms designed specifically for text data. This technique treats each document as a mixture of some of the topics that the algorithm produces as a final result. The topics are probability distributions over the words that occur in the set of all documents present in the dataset.
The preprocessing step yields an array of keywords or tokens; the LDA algorithm takes this preprocessed data as input and tries to find hidden/underlying topics based on the probability distribution of these keywords. Initially, the algorithm assigns each word in the document to a random topic among the 'n' topics.
For example, consider the following text data.
Theoretically, let us consider two topics, Sports and Covid, for the algorithm to work with. The algorithm may assign the first word, say "IPL", to topic 2 (Covid). We know this assignment is wrong, but the algorithm will try to correct it in future iterations based on two factors: how often the topic occurs in the document and how often the word occurs in the topic. Since there are not many Covid-related terms in text 1 and the word "IPL" will not appear many times in topic 2 (Covid), the algorithm may reassign the word "IPL" to topic 1 (Sports). Over many such iterations, the algorithm reaches stability in topic recognition and in the distribution of words across topics. Finally, each document can be represented as a mixture of the determined topics.
Also read: Bidirectional Search in Python
The following steps are performed in LDA to assign topics to each of the documents:
1) For each document, randomly initialize each word to one of the K topics, where K is the number of predefined topics.
2) For each document d:
For each word w in the document, compute: p(topic t | document d), the proportion of words in document d that are currently assigned to topic t, and p(word w | topic t), the proportion of assignments to topic t, over all documents, that come from the word w.
3) Reassign topic T' to word w with probability p(t'|d)*p(w|t'), considering all other words and their topic assignments.
The last step is repeated several times until we reach a steady state where the topic assignments no longer change. The topic proportions for each document are then determined from these topic assignments.
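For intuition, here is a toy sketch of this sampling loop in plain Python. It is illustrative only: it is not the gensim implementation used below, and it replaces the Dirichlet priors with simple add-one smoothing.

import random
from collections import Counter

# toy corpus and random initial topic assignments
docs = [["ipl", "cricket", "match"], ["covid", "vaccine", "ipl"]]
n_topics = 2
vocab = {w for doc in docs for w in doc}
assignments = [[random.randrange(n_topics) for _ in doc] for doc in docs]

# count matrices implied by the initial assignments
doc_topic = [Counter(a) for a in assignments]
topic_word = [Counter() for _ in range(n_topics)]
topic_totals = [0] * n_topics
for doc, assign in zip(docs, assignments):
    for w, t in zip(doc, assign):
        topic_word[t][w] += 1
        topic_totals[t] += 1

for sweep in range(50):  # repeat until the assignments stabilize
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t_old = assignments[d][i]
            # remove the current assignment from the counts
            doc_topic[d][t_old] -= 1
            topic_word[t_old][w] -= 1
            topic_totals[t_old] -= 1
            # weight each topic by p(t|d) * p(w|t)
            weights = [
                (doc_topic[d][t] + 1) * (topic_word[t][w] + 1) / (topic_totals[t] + len(vocab))
                for t in range(n_topics)
            ]
            t_new = random.choices(range(n_topics), weights=weights)[0]
            # record the new assignment and restore the counts
            assignments[d][i] = t_new
            doc_topic[d][t_new] += 1
            topic_word[t_new][w] += 1
            topic_totals[t_new] += 1

print(assignments)  # one topic id per word, per document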
An illustrative example of LDA:
Let's say we have the following 4 documents as our corpus and we want to perform topic modeling on them.
LDA modeling helps us discover topics in the above corpus and assign topic mixtures to each of the documents. For example, the model might output something like the following:
Topic 1: 40% videos, 60% YouTube
Topic 2: 95% blogs, 5% YouTube
Documents 1 and 2 would then belong 100% to Topic 1. Document 3 would belong 100% to Topic 2. Document 4 would belong 80% to Topic 2 and 20% to Topic 1.
Here are the steps to implement the LDA algorithm:
Here, we have input data collected from Twitter and converted into a CSV file, since data on social media is varied and allows us to build an effective model.
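A minimal loading step might look like this (the file name tweets.csv and the column name tweet are assumptions; adjust them to match your export):

import pandas as pd

df = pd.read_csv("tweets.csv")  # hypothetical CSV exported from Twitter
print(df["tweet"].head())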
import numpy as np
import pandas as pd
import re
import gensim
from gensim import corpora, models, similarities
from gensim.parsing.preprocessing import STOPWORDS
from nltk.corpus import stopwords
def normalize_whitespace(tweet):
    # collapse any run of whitespace into a single space
    tweet = re.sub(r'[\s]+', ' ', tweet)
    return tweet
text = " We are the students of Science. "
print("Text Before: ",text)
text = normalize_whitespace(text)
print("Text After: ",text)
OUTPUT:
Text Before: We are the students of Science.
Text After: We are the students of Science.
import nltk
nltk.download('stopwords')
import gensim
from gensim.parsing.preprocessing import STOPWORDS
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
def remove_stopwords(text):
    final_s = ""
    text_arr = text.split(" ")  # splits the sentence where spaces occur
    print(text_arr)
    for word in text_arr:
        if word not in stop_words:  # if the word is not a stopword, append it to the string
            final_s = final_s + word + " "
    return final_s
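For example, assuming the NLTK stopword list was downloaded as above:

text = remove_stopwords("We are the students of Science.")
print(text)  # -> 'We students Science. ' (capitalized "We" survives: the stopword list is lowercase)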
import nltk
# nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer, SnowballStemmer, PorterStemmer
stemmer = PorterStemmer()
def tokenize_stemming(text):
    # remove punctuation
    text = re.sub(r'[^\w\s]', '', text)
    # replace multiple spaces with one space
    text = re.sub(r'[\s]+', ' ', text)
    # transform text to lowercase
    text = text.lower()
    # tokenize text
    tokens = re.split(" ", text)
    # remove stop words and stem the remaining tokens
    result = []
    for token in tokens:
        if token not in stop_words and len(token) > 1:
            result.append(stemmer.stem(token))
    return result
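The cells below assume tokenized documents and a gensim dictionary built from them; a minimal bridge, assuming the tweets sit in a tweet column (a hypothetical name) of the DataFrame loaded earlier, might be:

tokens = df["tweet"].map(tokenize_stemming)  # one token list per tweet
dictionary = corpora.Dictionary(tokens)      # maps each token to an integer id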
TF-IDF is short for term frequency-inverse document frequency, a numerical statistic intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor.
corpus_doc2bow_vectors = [dictionary.doc2bow(tok_doc) for tok_doc in tokens]
print("# Term Frequency : ")
corpus_doc2bow_vectors[:5]
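For a quick sense of what these bag-of-words vectors contain, here is a toy example that is independent of the tweet data:

from gensim import corpora
toy_docs = [["ipl", "match", "ipl"], ["covid", "vaccine"]]
toy_dict = corpora.Dictionary(toy_docs)
print(toy_dict.doc2bow(["ipl", "ipl", "covid"]))
# -> [(token_id_of_ipl, 2), (token_id_of_covid, 1)]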
tfidf_model = models.TfidfModel(corpus_doc2bow_vectors, id2word=dictionary, normalize=False)
corpus_tfidf_vectors = tfidf_model[corpus_doc2bow_vectors]
print("\n# TF_IDF: ")
print(corpus_tfidf_vectors[5])
lda_model = gensim.models.LdaMulticore(corpus_doc2bow_vectors, num_topics=10, id2word=dictionary, passes=2, workers=2)
lda_model_tfidf = gensim.models.LdaMulticore(corpus_tfidf_vectors, num_topics=10, id2word=dictionary, passes=2, workers=4)
for idx, topic in lda_model_tfidf.print_topics(-1):
print('Topic: {} Word: {}'.format(idx, topic))
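To score a brand-new piece of text (the example string here is hypothetical), apply the same preprocessing and dictionary first:

bow = dictionary.doc2bow(tokenize_stemming("ipl cricket match today"))
print(lda_model.get_document_topics(bow))  # list of (topic_id, probability) pairs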
Performance evaluation by classifying sample documents using the LDA bag-of-words model. We will check where our test document would be classified.
for index, score in sorted(lda_model[corpus_doc2bow_vectors[1]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model.print_topic(index, 10)))
for index, score in sorted(lda_model_tfidf[corpus_doc2bow_vectors[1]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 10)))
In this article, we tried to understand the most commonly used algorithm in the natural language processing domain. LDA is the basis of topic modeling, a type of statistical modeling and data mining.
Link: https://www.askpython.com/python/examples/latent-dirichlet-allocation-lda
#python
croc is a tool that allows any two computers to simply and securely transfer files and folders. AFAIK, croc is the only CLI file-transfer tool that does all of the following:
For more information about croc, see my blog post or read a recent interview I did.
Download the latest release for your system, or install a release from the command-line:
curl https://getcroc.schollz.com | bash
On macOS you can install the latest release with Homebrew:
brew install croc
On macOS you can also install the latest release with MacPorts:
sudo port selfupdate
sudo port install croc
On Windows you can install the latest release with Scoop or Chocolatey:
scoop install croc
choco install croc
On Unix you can install the latest release with Nix:
nix-env -i croc
On Alpine Linux you have to install dependencies first:
apk add bash coreutils
wget -qO- https://getcroc.schollz.com | bash
On Arch Linux you can install the latest release with pacman:
pacman -S croc
On Fedora you can install with dnf:
dnf install croc
On Gentoo you can install with portage:
emerge net-misc/croc
On Termux you can install with pkg:
pkg install croc
On FreeBSD you can install with pkg:
pkg install croc
Or, you can install Go and build from source (requires Go 1.17+):
go install github.com/schollz/croc/v9@latest
On Android there is a 3rd party F-Droid app available to download.
To send a file, simply do:
$ croc send [file(s)-or-folder]
Sending 'file-or-folder' (X MB)
Code is: code-phrase
Then to receive the file (or folder) on another computer, you can just do:
croc code-phrase
The code phrase is used to establish password-authenticated key agreement (PAKE) which generates a secret key for the sender and recipient to use for end-to-end encryption.
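For intuition, here is the same idea in Python using the spake2 library (croc's PAKE is a separate Go implementation; this sketch only illustrates how both sides derive an identical strong key from a weak code phrase):

from spake2 import SPAKE2_A, SPAKE2_B

alice = SPAKE2_A(b"code-phrase")  # sender side
bob = SPAKE2_B(b"code-phrase")    # recipient side
msg_a, msg_b = alice.start(), bob.start()

# each side finishes with the other's public message; the phrase never crosses the wire
key_a = alice.finish(msg_b)
key_b = bob.finish(msg_a)
assert key_a == key_b  # shared secret key for end-to-end encryption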
There are a number of configurable options (see --help). A set of options (like custom relay, ports, and code phrase) can be set using --remember.
You can send with your own code phrase (must be more than 6 characters).
croc send --code [code-phrase] [file(s)-or-folder]
By default, croc will prompt whether to overwrite a file. You can automatically overwrite files by using the --overwrite flag (recipient only). For example, receive a file to automatically overwrite:
croc --yes --overwrite <code>
You can pipe to croc:
cat [filename] | croc send
In this case croc will automatically use the stdin data, sending it with an assigned filename like "croc-stdin-123456789". To receive to stdout, you can just use the --yes flag, which will automatically approve the transfer and pipe it out to stdout.
croc --yes [code-phrase] > out
All of the other text printed to the console goes to stderr, so it will not interfere with the message going to stdout.
Sometimes you want to send URLs or short text. In addition to piping, you can easily send text with croc:
croc send --text "hello world"
This will automatically tell the receiver to use stdout when they receive the text, so it will be displayed.
You can use a proxy as your connection to the relay by adding a proxy address with --socks5. For example, you can send via a tor relay:
croc --socks5 "127.0.0.1:9050" send SOMEFILE
You can choose from several different elliptic curves to use for encryption by using the --curve flag. Only the recipient can choose the curve. For example, receive a file using the P-521 curve:
croc --curve p521 <codephrase>
Available curves are P-256, P-348, P-521 and SIEC. P-256 is the default curve.
You can choose from several different hash algorithms. The default is the xxhash algorithm, which is fast and thorough. If you want to optimize for speed you can use the imohash algorithm, which is even faster, but since it samples files (versus reading the whole file) it can mistakenly determine that a file is the same on the two transferring computers; this is only a problem if you are syncing files rather than sending a new file.
croc send --hash imohash SOMEFILE
The relay is needed to staple the parallel incoming and outgoing connections. By default, croc uses a public relay, but you can also run your own relay:
croc relay
By default it uses TCP ports 9009-9013. Make sure to open those up. You can customize the ports (e.g. croc relay --ports 1111,1112), but you must have a minimum of 2 ports for the relay. The first port is for communication and the subsequent ports are used for the multiplexed data transfer.
If you want to host your own relay, you can send files through it by passing --relay to change the relay that you are using:
croc --relay "myrelay.example.com:9009" send [filename]
Note, when sending, you only need to include the first port (the communication port). The subsequent ports for data transfer will be transmitted back to the user from the relay.
If it's easier you can also run a relay with Docker:
docker run -d -p 9009-9013:9009-9013 -e CROC_PASS='YOURPASSWORD' schollz/croc
Be sure to include the password for the relay; otherwise any requests will be rejected.
croc --pass YOURPASSWORD --relay "myrelay.example.com:9009" send [filename]
Note: when including --pass YOURPASSWORD, you can instead pass a file with the password, e.g. --pass FILEWITHPASSWORD.
croc has gone through many iterations, and I am awed by all the great contributions! If you feel like contributing, in any way, by all means you can send an Issue, a PR, ask a question, or tweet me (@yakczar).
Thanks @warner for the idea, @tscholl2 for the encryption gists, @skorokithakis for code on proxying two connections. Finally thanks for making pull requests @maximbaz, @meyermarcel, @Girbons, @techtide, @heymatthew, @Lunsford94, @lummie, @jesuiscamille, @threefjord, @marcossegovia, @csleong98, @afotescu, @callmefever, @El-JojA, @anatolyyyyyy, @goggle, @smileboywtu, @nicolashardy, @fbartels, @rkuprov, @hreese, @xenrox and Ipar!
This project is supported by GitHub sponsors.
Author: schollz
Source Code: https://github.com/schollz/croc
License: MIT license
Learn how to transfer your money from an external bank to your Chime banking account.
In this video, I will show you step by step how you can transfer funds from another bank to your Chime bank account. You can do this by going to the Move Money tab and then clicking on Transfer from External Bank. Then you can link your bank account using Plaid and transfer funds within 7 days.
Ever wanted to know how to pass data across different pages on your Flutter app? Well, this video shows you exactly how.
In this video, I go over how to do it both with the regular Navigator.push method and with named routes (Navigator.pushNamed).
And I don’t just show how to pass over simple data like ints and Strings, but I also show how to pass Map type data.
These aren't all of the methods, and if you have other ways to achieve this, feel free to share!
GitHub Project Repository: https://github.com/FlutterMentor/passing_data/tree/master
00:00 - Intro
00:30 - UI Code Overview
01:30 - Passing Data With Navigator.push
02:15 - Passing Data Backwards with Navigator.pop
04:27 - Passing Data With Named Routes (Navigator.pushNamed)
06:50 - Passing Data Back But Sending Info With Named Route
08:30 - Like & Subscribe
08:46 - Flutter Mentor Out
#flutter #data #transfer
Credits:
OUTRO SONG:
Mitsubachi by Smith The Mister https://smiththemister.bandcamp.com
Smith The Mister https://bit.ly/Smith-The-Mister-YT
Free Download / Stream: http://bit.ly/mitsubachi
Music promoted by Audio Library https://youtu.be/0IgndUb1YQI
#flutter #transfer #data
In this video I will show you how to transfer crypto from Binance to Gate.io. It's really easy and it will take you less than a minute to do so!
📺 The video in this post was made by How To Explained
The origin of the article: https://www.youtube.com/watch?v=7ahYp5sShTM
🔺 DISCLAIMER: The article is for information sharing. The content of this video is solely the opinions of the speaker, who is not a licensed financial advisor or registered investment advisor. Not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you're a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
⭐ ⭐ ⭐ The project is of interest to the community. Join to get free 'GEEK coin' (GEEKCASH coin)! ☞ https://geekcash.org ⭐ ⭐ ⭐
Thanks for visiting and watching! Please don't forget to leave a like, comment and share!
You can see more here: What is Gate Exchange | How to Register, Buy and Sell on Gate Exchange
#transfer crypto #binance #gate.io #bitcoin #blockchain
In this video I will show you how to transfer crypto from Binance to the KuCoin exchange. It's really easy and it will take you less than a minute to do so!
📺 The video in this post was made by How To Explained
The origin of the article: https://www.youtube.com/watch?v=Z3vFK_eoULY
🔺 DISCLAIMER: The article is for information sharing. The content of this video is solely the opinions of the speaker, who is not a licensed financial advisor or registered investment advisor. Not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you're a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
⭐ ⭐ ⭐ The project is of interest to the community. Join to get free 'GEEK coin' (GEEKCASH coin)! ☞ https://geekcash.org ⭐ ⭐ ⭐
Thanks for visiting and watching! Please don't forget to leave a like, comment and share!
#binance #crypto #transfer #kucoin exchange #bitcoin #blockchain
HOW TO TRANSFER (SEND AND RECEIVE) COINS FROM ONE WALLET TO THE OTHER
📺 The video in this post was made by LEARN EVERYTHING FOR FREE EFF DONVIP
The origin of the article: https://www.youtube.com/watch?v=r1UagiSGFp8
🔺 DISCLAIMER: The article is for information sharing. The content of this video is solely the opinions of the speaker, who is not a licensed financial advisor or registered investment advisor. Not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you're a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
⭐ ⭐ ⭐ The project is of interest to the community. Join to get free 'GEEK coin' (GEEKCASH coin)! ☞ https://geekcash.org ⭐ ⭐ ⭐
Thanks for visiting and watching! Please don't forget to leave a like, comment and share!
#bitcoin #blockchain #transfer #coins
In this video I will show you how to transfer crypto from Binance to a MetaMask wallet. It's really easy and it will take you less than a minute to do so!
📺 The video in this post was made by How To Explained
The origin of the article: https://www.youtube.com/watch?v=TaqwQgKNS9o
🔺 DISCLAIMER: The article is for information sharing. The content of this video is solely the opinions of the speaker, who is not a licensed financial advisor or registered investment advisor. Not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you're a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
⭐ ⭐ ⭐ The project is of interest to the community. Join to get free 'GEEK coin' (GEEKCASH coin)! ☞ https://geekcash.org ⭐ ⭐ ⭐
Thanks for visiting and watching! Please don't forget to leave a like, comment and share!
#bitcoin #blockchain #transfer #crypto #binance #metamask wallet
Digital Banking Is Disrupting Traditional Banking
Although the two concepts are not the same, a digital bank account is frequently confused with a virtual bank account.
Most banks now offer a digital service that allows customers to do everything from check their balance to make interbank currency transactions without ever having to visit a physical location. Customers benefit from digital banking because it saves them time, which they can put to better use by focusing on their customers and growing their business.
Any banking activity that is conducted using a digital device, whether a desktop computer or a mobile banking app, is now referred to as digital banking. People use saving apps to save money online, pay for coffee by scanning a code, and businesses pay their employees through internet banking apps. All of these are examples of digital banking.
Taking Disruption a Step Further (Virtually)
The majority of international and cross-border sellers struggle to open a bank account in the buyer market, due to excessive fees that eat into profits and the time spent going to market.
This is where virtual accounts come into play, a newer and even more disruptive kind of FinTech-driven banking. Virtual accounts are distinct from other types of digital banking in that they exist solely online, with no physical presence in a community or country.
Virtual accounts allow businesses to receive foreign currency payments into their accounts while maintaining consistent foreign currency conversion prices for receivables, removing price padding and high conversion rates from merchant processors, online marketplaces, and customers.
Virtual accounts simplify the cash management process for organizations by reducing the number of real accounts that they operate around the world.
Virtual Accounts, such as the Wallex Global Collection account, are meant to allow businesses to receive payments from all over the world as if they were local payments. They can receive account information in countries such as the United States, the United Kingdom, and Europe, as well as accept foreign currency payments from consumers to an account in their name, all without having a physical presence in the area.
This would be a hassle-free alternative to acquiring local bank accounts in each nation of operation for enterprises and e-commerce suppliers moving to new markets.
Furthermore, accounts may be activated instantly, payments may be made to virtual accounts for free, logs and notifications on the status of each transfer are supplied, and funds can be converted back to your native currency at very competitive exchange rates. There is no need to visit a bank or fill out complex paperwork to open, operate, or maintain the account.
Global Business Account: A Boon for Cross-border Business Expansion
A global business account works in a similar fashion to a digital bank account and virtual account, but we like to think of them as the upgrade to both digital and virtual accounts to a more cost-effective, robust package of solutions for growing businesses.
Companies can manage all of their needs with a Wallex global business account, including paying suppliers and employees, moving money across borders, performing local currency conversions, maintaining liquidity in multiple currencies, and setting up virtual accounts in the countries and currencies that matter most to them.
Compared with standard bank-driven business accounts, a Wallex business account offers the following advantages:
It is completely free to open and maintain, with no minimum balance requirement.
Regulated, secure online platforms in Singapore, Indonesia, and Hong Kong.
Faster transaction processing times.
Cheaper, more transparent fees based on the services consumed.
Currency conversions at competitive exchange rates.
Virtual accounts in a variety of currencies.
A dedicated account manager.
Additional services may include credit extension, virtual cards, and other services that improve the business’ capacity to function solely through a global business account.
In the key Asian markets of Singapore, Indonesia, and Hong Kong, Wallex is licensed and regulated, making it easier and safer for Asian businesses to do business abroad. All customer monies are safeguarded and kept separate from all other internal operations accounts in a client segregated account. Wallex follows a tight set of guidelines for both compliance and the security of funds in its care.
Wallex is more than just a money transfer service for your company.
We’ve established a formidable alternative to traditional banking at Wallex. Our Global Business Accounts are designed for companies with a strong focus on worldwide expansion. Wallex does not charge any setup fees, unlike most other accounts. There are no monthly minimum fund requirements to meet, and there are never any monthly or annual fees.
In just a few minutes, you may set up a Global Business Account. You can transfer, receive, and retain funds in up to 47 different currencies once you’ve been approved. So get in touch with us to schedule a demo when you’re ready to streamline your foreign transaction demands.
#money #transfer #digital #cryptocurrency #international
Thanks to FinTech, the planet is now undergoing a global money revolution. FinTech adoption was doubling every two years before the pandemic, increasing from 16 percent in 2015 to 64 percent in 2019. FinTech-enabled services like digital payments and FX management have become important for business survival in the aftermath of the pandemic. Wallex, for example, is at the forefront of this movement.
Wallex, an Asian fintech company, has expanded much faster than industry analysts predicted in recent years. Wallex now has 20,000 customers and has handled transactions worth over US$1.7 billion, up from just a hundred in its first year. Hiro Kiga, co-founder and COO of Wallex, reveals the secret to the company’s success: a great team that follows a few best practices and development hacks. Here’s how Wallex grew from zero to twenty thousand customers.
1. Focus solely on B2B
With a “let’s strive to satisfy everyone” mentality, some FinTech companies prefer to operate in both the B2C and B2B markets. Wallex is unique in that it chooses to concentrate solely on the B2B market.
With the emergence of FinTech 3.0, the B2B and B2B2X sectors will see more development in the coming years. Other services related to lending, commerce technology, identity, fraud, risk management, and more will be added to the industry’s payments and banking-as-a-service offerings. Wallex recognized this early on and agreed to concentrate its resources solely on corporate FinTech. Wallex’s singular approach continues to pay off, as shown by its rising list of corporate clients.
2. Be inventive
To kick off their sales campaign and expand their brand in 2016, Hiro and the other Wallex founders relied on personal networking and word-of-mouth promotions. “It was just me and my co-founder doing a lot of sales in the early days,” he says. However, they soon realized that these tactics would not lead to exponential growth, let alone sustained growth. That’s why they came up with two different playbooks, both of which produced excellent results almost instantly.
For one thing, they paid more attention to press reports of new startup funding. Hiro reached out to investors regularly to congratulate them and request an introduction to the startup. This put him in front of startups, where he effectively conveyed Wallex’s value proposition: we will help you save money on foreign exchange. His proactive nature aided in the conversion of several startups into long-term Wallex customers. Wallex took the growth hack a step further by hiring seasoned workers and people from various backgrounds and leveraging their skills, learning, and networks. Wallex was able to convert many initial “anchor customers” by rapidly developing a professional sales team, setting the stage for the company’s long-term expansion and development.
3. First earn their confidence, then more business will come.
According to a recent Accenture study, non-traditional finance providers such as Wallex must work harder to earn consumer confidence than their traditional rivals. Hiro agrees wholeheartedly. He explains, “If we want to handle people’s assets, we need to have a certain degree of confidence.” Wallex can sustain its competitive place in the emerging FinTech space thanks to its emphasis on gaining customers’ confidence and doing everything possible to keep it.
Wallex devotes a significant amount of time, effort, and resources to ensuring that its financial infrastructure is protected by the appropriate licenses and technology. Wallex also collaborates with financial regulators such as the Monetary Authority of Singapore (MAS), Bank Indonesia, and the Hong Kong Customs and Excise Department to ensure that organizations and their assets are protected to the highest security and regulatory standards.
Hiro and Co. also received critical approval from BCA, one of Asia’s largest banking institutions. Following these decisions, Wallex was able to gain Indonesians’ confidence and establish a strong competitive position.
4. Brand-building hack at a low cost
Hiro and his team put more money into content and brand-building initiatives to increase their reputation. Visitors to the company's website will find case studies and testimonials from anchor customers, which build confidence and pique their interest. It enhances its digital presence through alliances and investment news, increasing the recall value of its brand. People on the receiving end (those being paid) become more aware of the brand through the Wallex logo in remittance notification emails. Wallex also employs public relations strategies to raise brand awareness in reputable regional publications such as Tech in Asia, Kompas, and e27.
Since they are already acquainted with Brand Wallex, Wallex attempts to turn power recipients, i.e. businesses that accept payments from several companies regularly, into power senders.
5. Dismantle organizational silos
Disruptive progress, according to Wallex, can only occur when companies implement new business models, foster outside-the-box thinking, and remove organizational silos.
Many FinTech companies struggle with the final component, but Wallex does not.
The business has established an organizational framework in which various functional groups and divisions interact freely and transparently with one another. Employees are encouraged to exchange knowledge and best practices, ask questions and make recommendations regardless of their specific positions or levels. This method of collaboration aids in the creation of a single brand voice that consumers recognize and respond to positively.
Final thoughts
Learning from the experiences of other startups and adopting best practices will make the transition from 100 to 20K much smoother. This is precisely what Wallex's Hiro Kiga believes and recommends; Hiro also encourages advertisers and company owners to try out new growth hacking techniques. He concludes, "Fear is the enemy of creativity." Wallex, which is licensed and regulated in Singapore, Indonesia, and Hong Kong, provides a variety of low-cost, convenient, safe, and fast cross-border solutions to help you manage your international business from a single location. For more details, go to Wallex.Asia or contact our FX expert directly.
Visit us: https://rb.gy/bzcs21
#money #transfer #international