Parses the Graphviz DOT language and provides a Go interface for easily creating new graphs and manipulating existing ones, which can then be written back to the DOT format.
graphAst, _ := gographviz.ParseString(`digraph G {}`)
graph := gographviz.NewGraph()
if err := gographviz.Analyse(graphAst, graph); err != nil {
	panic(err)
}
graph.AddNode("G", "a", nil)
graph.AddNode("G", "b", nil)
graph.AddEdge("a", "b", true, nil)
output := graph.String() // output now holds the graph serialized back to DOT format
go get github.com/awalterschulze/gographviz
This parser has been created using gocc.
Author: Awalterschulze
Source Code: https://github.com/awalterschulze/gographviz
License: View license
Package goraph implements graph data structure and algorithms.
go get -v gopkg.in/gyuho/goraph.v2;
I have tutorials and visualizations of graph and tree algorithms:
For fast query and retrieval, please check out Cayley.
Author: Gyuho
Source Code: https://github.com/gyuho/goraph
License: MIT License
Gonum
The core packages of the Gonum suite are written in pure Go with some assembly. Installation is done using go get:
go get -u gonum.org/v1/gonum/...
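As a quick, minimal sketch of the suite in use (this example assumes the gonum.org/v1/gonum/mat package; see its documentation for the full API):

```go
package main

import (
	"fmt"

	"gonum.org/v1/gonum/mat"
)

func main() {
	// Two 2x2 matrices, filled row-major.
	a := mat.NewDense(2, 2, []float64{1, 2, 3, 4})
	b := mat.NewDense(2, 2, []float64{0, 1, 1, 0})

	// c = a * b
	var c mat.Dense
	c.Mul(a, b)

	fmt.Println(mat.Formatted(&c))
}
```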
Gonum supports and tests using the gc compiler on the two most recent Go releases on Linux (386, amd64 and arm64), macOS and Windows (both on amd64).
The Gonum modules are released on a six-month release schedule, aligned with the Go releases: when Go-1.x is released, Gonum-v0.n.0 is released around the same time; six months later, Go-1.x+1 is released, followed by Gonum-v0.n+1.0.
The release schedule, based on the current Go release schedule, is thus:
Gonum-v0.n.0: February
Gonum-v0.n+1.0: August
The Gonum packages use a variety of build tags to set non-standard build conditions. Building Gonum applications will work without knowing how to use these tags, but they can be used during testing and to control the use of assembly and CGO code.
The current list of non-internal tags is as follows:
If you find any bugs, feel free to file an issue on the github issue tracker. Discussions on API changes, added features, code review, or similar requests are preferred on the gonum-dev Google Group.
https://groups.google.com/forum/#!forum/gonum-dev
Author: Gonum
Source Code: https://github.com/gonum/gonum
License: BSD-3-Clause License
As much as we love GraphQL for letting us ship great products quickly, no traditional CDN can cache GraphQL APIs. We had to build custom caching solutions from scratch, which distracted us for weeks, and the caches never worked as well as we needed them to.
That's why we built GraphCDN.
GraphCDN is the CDN for your GraphQL API. We want to give you the peace of mind for your GraphQL API that we wish we had had:
Today, we're excited to announce that GraphCDN is available to everyone! 🎉
GraphCDN caches your GraphQL query results in 60 data centers around the world and gives you fine-grained control over your cache's behavior. You can configure everything at the GraphQL type level, and even the field level, for example, "Cache any query result containing a Post for 900 seconds" or "Never cache any query result containing an APIToken."
What really makes GraphCDN's edge cache shine is automatic mutation invalidation. The gateway detects when a mutation changes an object, for example editUser(id: 5), and automatically invalidates any cached query result containing that object.
GraphCDN's edge cache can also cache sensitive data per authenticated user, fully handles POST requests, supports stale-while-revalidate, and even exposes a custom cache-invalidation GraphQL API for your service (e.g. mutation purgeUser(id: 5)).
GraphCDN also gives you detailed analytics for your GraphQL API. Imagine Google Analytics for GraphQL: it helps you understand how your GraphQL API is being used, how it behaves under your traffic levels, and what the experience is like for your users. It is also useful for debugging your cache hit rate and checking your origin server's performance down to the specific query.
On top of that, GraphCDN tracks every HTTP and GraphQL error your origin server responds with, so you can debug customer issues better and faster. Not only that: when the level of errors rises above the normal baseline, GraphCDN automatically sends you email alerts so you can stay on top of them.
Finally, GraphCDN also protects your GraphQL API. While GraphQL's flexibility is fantastic for developers, it also opens up new attack vectors for malicious actors that traditional CDNs are not equipped to handle.
One of the most common attacks is sending deeply nested queries to overload the server and/or database. For example, imagine a CMS with a Post that has Comments, which have an Author:
query maliciousQuery {
  allPosts {
    comments {
      author {
        posts {
          comments {
            # ...repeat times 10000...
          }
        }
      }
    }
  }
}
DataLoader can prevent some of these queries from consuming too many server resources, but not all of them. GraphCDN ships with a query depth limit out of the box: it analyzes incoming queries and blocks them if they are nested too deeply. Since GraphCDN sits at the edge, your origin server never has to deal with these malicious queries.
We have plans for more security features that could be useful for everyone, including rate limiting and complexity analysis, as well as many other ideas. Check out the feature requests and vote on what you would like to see next!
We're very excited to open GraphCDN up to everyone today, and we hope it brings some well-deserved peace of mind to your GraphQL API. If you have any questions, ping us any time at support@graphcdn.io. We're here to help!
GraphCDN was co-founded by Tim Suchanek and Max Stoiber. Tim is the creator of graphql-playground and graphql-request, and was the first employee of Prisma (née GraphCool). Max co-created react-boilerplate and styled-components, and previously co-founded Spectrum, which was acquired by GitHub in 2018.
GraphCDN is backed by industry-leading angel investors, including Guillermo Rauch (CEO, Vercel), Tom Preston-Werner (co-founder, GitHub), Andreas Klinger (CTO, On Deck), Matt Biilmann and Christian Bach (co-CEOs, Netlify), Jason Warner (CTO, GitHub), Nicolas Dessaigne (co-founder, Algolia), and many others.
Graph data structure library. Supports Rust 1.41 and later.
Please read the API documentation here
Crate feature flags:
graphmap (default): enable GraphMap.
stable_graph (default): enable StableGraph.
matrix_graph (default): enable MatrixGraph.
serde-1 (optional): enable serialization for Graph and StableGraph using serde 1.0. Requires the Rust version required by serde.
See RELEASES for a list of changes. The minimum supported Rust version will only change on major releases.
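As a brief sketch of the crate in use (this assumes petgraph's UnGraph type and the dijkstra function from petgraph::algo; consult the API documentation for details):

```rust
use petgraph::algo::dijkstra;
use petgraph::graph::UnGraph;

fn main() {
    // An undirected graph built from (source, target) index pairs.
    let g = UnGraph::<(), ()>::from_edges(&[(0, 1), (1, 2), (2, 3), (0, 3)]);

    // Shortest-path costs from node 0 to every reachable node,
    // counting each edge as cost 1.
    let costs = dijkstra(&g, 0.into(), None, |_| 1);
    println!("cost to node 2: {:?}", costs[&2.into()]);
}
```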
Download Details:
Author: petgraph
Source Code: https://github.com/petgraph/petgraph
License: View license
As a Microsoft Teams administrator, you can view and manage the apps available in the app catalog for Microsoft Teams from the Microsoft Teams Admin Center (TAC). It is a huge list to scroll through, which includes first-party apps available from Microsoft, third-party apps available from service partners, and apps developed by your organization's developers. However, it lacks reporting and download functionality.
In this article, we will explore how to extract information about Microsoft Teams apps efficiently.
We have a good Graph API available for listing apps from the Microsoft Teams app catalog, as follows:
GET /appCatalogs/teamsApps
With a minimal delegated or application permission of AppCatalog.Read.All, we can read the apps in the app catalog.
The API works well for getting the basic details of the apps available in an app catalog.
Let's take this a bit further and get more information about an individual app with a call to the following API:
https://graph.microsoft.com/v1.0/appCatalogs/teamsApps?$filter=id eq '05ab3377-bd38-41dc-b917-b05449c13c78'&$expand=appDefinitions&$expand=appDefinitions($expand=bot)
With two Graph API calls, we get more information about the app along with its definition.
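As a sketch, the two calls above can be exercised from a terminal with curl (the token value below is a placeholder; acquire a real bearer token through your usual Azure AD flow):

```shell
# Placeholder token; replace with a real bearer token.
TOKEN="<your-bearer-token>"

# List all apps in the Teams app catalog
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/appCatalogs/teamsApps"

# Fetch one app by id, expanding its definitions (and bot, if any)
curl -s -G -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/appCatalogs/teamsApps" \
  --data-urlencode "\$filter=id eq '05ab3377-bd38-41dc-b917-b05449c13c78'" \
  --data-urlencode "\$expand=appDefinitions(\$expand=bot)"
```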
There are a couple of limitations I observed while working with this Graph API.
The publisher name is important for understanding how many apps are available from an individual publisher and what their status is. Organizations have an approved list of vendors or approvers, and it is important to understand the availability of apps from approved vendors.
There is an undocumented internal API available for getting this information. Follow the steps below to get the app details:
For authentication, we need to pass a bearer token.
The bearer token available from the browser session can be used for this. If you generate one with PowerShell combining the Graph APIs, it may not work for getting the details.
The Graph APIs have limitations for getting the app information. The workaround is to use the internal API. However, avoid using the internal API in production-ready code.
I am still exploring options to automate this PowerShell script as an end-to-end solution. I would appreciate your thoughts on it.
Source: https://www.c-sharpcorner.com/article/retrieve-app-details-from-ms-teams-catalog/
Graph Nets is DeepMind's library for building graph networks in Tensorflow and Sonnet.
A graph network takes a graph as input and returns a graph as output. The input graph has edge- (E ), node- (V ), and global-level (u) attributes. The output graph has the same structure, but updated attributes. Graph networks are part of the broader family of "graph neural networks" (Scarselli et al., 2009).
To learn more about graph networks, see our arXiv paper: Relational inductive biases, deep learning, and graph networks.
The Graph Nets library can be installed from pip.
This installation is compatible with Linux/Mac OS X, and Python 2.7 and 3.4+.
The library will work with both the CPU and GPU version of TensorFlow, but to allow for that it does not list Tensorflow as a requirement, so you need to install Tensorflow separately if you haven't already done so.
To install the Graph Nets library and use it with TensorFlow 1 and Sonnet 1, run:
(CPU)
$ pip install graph_nets "tensorflow>=1.15,<2" "dm-sonnet<2" "tensorflow_probability<0.9"
(GPU)
$ pip install graph_nets "tensorflow_gpu>=1.15,<2" "dm-sonnet<2" "tensorflow_probability<0.9"
To install the Graph Nets library and use it with TensorFlow 2 and Sonnet 2, run:
(CPU)
$ pip install graph_nets "tensorflow>=2.1.0-rc1" "dm-sonnet>=2.0.0b0" tensorflow_probability
(GPU)
$ pip install graph_nets "tensorflow_gpu>=2.1.0-rc1" "dm-sonnet>=2.0.0b0" tensorflow_probability
The latest version of the library requires TensorFlow >=1.15. For compatibility with earlier versions of TensorFlow, please install v1.0.4 of the Graph Nets library.
The following code constructs a simple graph net module and connects it to data.
import graph_nets as gn
import sonnet as snt
# Provide your own functions to generate graph-structured data.
input_graphs = get_graphs()
# Create the graph network.
graph_net_module = gn.modules.GraphNetwork(
edge_model_fn=lambda: snt.nets.MLP([32, 32]),
node_model_fn=lambda: snt.nets.MLP([32, 32]),
global_model_fn=lambda: snt.nets.MLP([32, 32]))
# Pass the input graphs to the graph network, and return the output graphs.
output_graphs = graph_net_module(input_graphs)
The library includes demos which show how to create, manipulate, and train graph networks to reason about graph-structured data, on a shortest path-finding task, a sorting task, and a physical prediction task. Each demo uses the same graph network architecture, which highlights the flexibility of the approach.
To try out the demos without installing anything locally, you can run the demos in your browser (even on your phone) via a cloud Colaboratory backend. Click a demo link below, and follow the instructions in the notebook.
The "shortest path demo" creates random graphs, and trains a graph network to label the nodes and edges on the shortest path between any two nodes. Over a sequence of message-passing steps (as depicted by each step's plot), the model refines its prediction of the shortest path.
The "sort demo" creates lists of random numbers, and trains a graph network to sort the list. After a sequence of message-passing steps, the model makes an accurate prediction of which elements (columns in the figure) come next after each other (rows).
The "physics demo" creates random mass-spring physical systems, and trains a graph network to predict the state of the system on the next timestep. The model's next-step predictions can be fed back in as input to create a rollout of a future trajectory. Each subplot below shows the true and predicted mass-spring system states over 50 steps. This is similar to the model and experiments in Battaglia et al. (2016)'s "interaction networks".
The "graph nets basics demo" is a tutorial containing step-by-step examples of how to create and manipulate graphs, how to feed them into graph networks, and how to build custom graph network modules.
To install the necessary dependencies, run:
$ pip install jupyter matplotlib scipy
To try the demos, run:
$ cd <path-to-graph-nets-library>/demos
$ jupyter notebook
then open a demo through the Jupyter notebook interface.
Check out these high-quality open-source libraries for graph neural networks:
jraph: DeepMind's GNNs/GraphNets library for JAX.
pytorch_geometric: See MetaLayer for an analog of our Graph Nets interface.
Download Details:
Author: deepmind
Source Code: https://github.com/deepmind/graph_nets
License: Apache-2.0 License
Data is the new gold of the internet era. Most of the available data is unstructured, so as developers we need ways to make it easier to interpret. That is where Matplotlib comes in.
Introduction
From the documentation:
Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python. Matplotlib makes easy things easy and hard things possible.
To get started with Matplotlib, install matplotlib along with its dependencies:
pip install matplotlib
Once the installation is complete, you will see output like the following:
Installing collected packages: numpy, six, python-dateutil, kiwisolver, pyparsing, packaging, cycler, fonttools, pillow, matplotlib
Successfully installed cycler-0.11.0 fonttools-4.29.1 kiwisolver-1.3.2 matplotlib-3.5.1 numpy-1.22.1 packaging-21.3 pillow-9.0.0 pyparsing-3.0.7 python-dateutil-2.8.2 six-1.16.0
Next, import it as follows:
import matplotlib.pyplot as plt
With this import, all Matplotlib functions are accessible through the short alias plt.
Though they may sound simple, line graphs are useful for visualizing how something changes over time. For example, you can visualize:
Let's create a simple line graph representing how a business grew over five years. First, define two variables:
sales = [0, 1000, 5000, 15000, 50000, 100000]
year = [2010, 2011, 2012, 2013, 2014, 2015]
We will plot the years on the x-axis and the sales on the y-axis. The following code plots a line graph showing how the business grew:
import matplotlib.pyplot as plt

year = [2010, 2011, 2012, 2013, 2014, 2015]
sales = [0, 1000, 5000, 15000, 50000, 100000]
plt.plot(year, sales)
plt.show()
Make sure the two lists have the same length; otherwise a ValueError is raised. The graph looks like this:
The graph looks good, but without labels it communicates nothing, so let's label the x- and y-axes:
import matplotlib.pyplot as plt

year = [2010, 2011, 2012, 2013, 2014, 2015]
sales = [0, 1000, 5000, 15000, 50000, 100000]
plt.plot(year, sales)
plt.xlabel('year')
plt.ylabel('Amount in Dollars')
# title
plt.title("Linear graph Showing Growth of Lux store")
plt.show()
Multiple plots
Sometimes you want to compare two datasets, which means drawing two lines on the same graph. Matplotlib automatically assigns a different color to each line.
Suppose we want to compare sales and profit over the same period. The profit data looks like this:
profit = [0, 80, 200, 1500, 2800, 2600, 3000]
Let's update the code to plot both lines against the years.
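A runnable sketch of the comparison chart follows. Note two assumptions: the article's profit list has seven values while year and sales have six, so a 2016 data point for year and sales is assumed here to keep the lists the same length, and the Agg backend with savefig is used so the example runs headless.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no window is opened
import matplotlib.pyplot as plt

# The 2016 values are assumed, to match the length of the profit list.
year = [2010, 2011, 2012, 2013, 2014, 2015, 2016]
sales = [0, 1000, 5000, 15000, 50000, 100000, 120000]
profit = [0, 80, 200, 1500, 2800, 2600, 3000]

# Each plot() call adds one line; Matplotlib assigns each a new color.
plt.plot(year, sales)
plt.plot(year, profit)
plt.xlabel('year')
plt.ylabel('Amount in Dollars')
plt.savefig('growth.png')
```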
The graph looks like this:
As shown above, Matplotlib automatically assigns a different color to each line. You can also specify a different color per line, using a color name or a hex code:
plt.plot(year, sales, color='green')
plt.plot(year, profit, color='#7BCCB5')
Here we use green for the sales line and teal for the profit line.
Color graph
You can also specify the line style of a graph:
plt.plot(year, profit, color='#7BCCB5', linestyle='--')
Other styles available in Matplotlib include the following:
Screenshot from the Matplotlib documentation
Legends
Legends are used when a graph has multiple lines and you want to name each one. Use plt.legend() with a list of the labels to display. The first line represents the gross sales and the second represents the profits:
plt.legend(['Gross sum of sales', 'profits'])
legend also takes loc as a keyword argument, which specifies the label location as a code. For example, code 10 means center and 4 means lower right. Let's add a location to our graph:
plt.legend(['Gross sum of sales', 'profits'], loc=4)
The graph looks like this:
Legend
Bar charts
As the name suggests, a bar chart is a graphical representation of data using bars. To create a bar chart with Matplotlib, use plt.bar(), which takes two input arguments.
Let's create a simple bar chart showing the popularity of pizza types at a particular restaurant:
from matplotlib import pyplot as plt

pizzas = ['Cheese', 'Veggie', 'Pepperoni', 'Meat', 'Margherita',
          'BBQ Chicken', 'Hawaiian', 'Buffalo']
popularity = [98, 90, 86, 84, 82, 80, 80, 80]
plt.bar(range(len(popularity)), popularity)
plt.show()
The bar chart looks like this:
Pizza bar chart
The bar chart above looks good, but it doesn't tell us what kind of data it represents, so let's add some labels. Matplotlib provides the .set_xticklabels method, which sets the x-axis labels from a list of strings.
In this case, the list of strings will be the pizza types. To add the x-axis labels, first define an Axes object using the plt.subplot() command.
Pizza chart
Our labels look crammed together. Let's rotate them to make them look better by adding the rotation argument to the set_xticklabels method:
ax.set_xticklabels(pizzas, rotation=40)
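Putting the pieces together, here is a runnable sketch of the labeled bar chart (using plt.subplots() to get an Axes object, and the Agg backend so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

pizzas = ['Cheese', 'Veggie', 'Pepperoni', 'Meat', 'Margherita',
          'BBQ Chicken', 'Hawaiian', 'Buffalo']
popularity = [98, 90, 86, 84, 82, 80, 80, 80]

fig, ax = plt.subplots()
ax.bar(range(len(popularity)), popularity)
ax.set_xticks(range(len(pizzas)))        # one tick per bar
ax.set_xticklabels(pizzas, rotation=40)  # rotate so labels don't overlap
fig.savefig('pizzas.png')
```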
Stacked bars
A stacked bar chart compares two datasets and is typically represented with bars stacked on top of each other.
First, define a second dataset for plotting a stacked bar chart with Matplotlib. In this example, we compare the popularity of pizzas in two cities, A and B:
popularity_in_A = [98, 90, 86, 84, 82, 80, 80, 80]
popularity_in_B = [90, 85, 84, 83, 82, 80, 80, 80]
First plot the bottom bars as usual, then plot the second set of bars with the bottom keyword argument.
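A runnable sketch of the stacked chart described above (using the Agg backend so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

popularity_in_A = [98, 90, 86, 84, 82, 80, 80, 80]
popularity_in_B = [90, 85, 84, 83, 82, 80, 80, 80]
x = range(len(popularity_in_A))

plt.bar(x, popularity_in_A)                          # bottom bars
plt.bar(x, popularity_in_B, bottom=popularity_in_A)  # stacked on top
plt.savefig('stacked.png')
```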
Stacked bars
Pie charts
According to Wikipedia:
A pie chart (or circle chart) is a circular statistical graphic which is divided into slices to illustrate numerical proportion. In a pie chart, the arc length of each slice (and consequently its central angle and area) is proportional to the quantity it represents.
Such proportions can be represented with a pie chart.
To create a pie chart in Matplotlib, use the plt.pie() command, which takes the values we want to represent. For example, let's draw a pie chart showing the popularity of programming languages using the following data:
programming_languages = ["Python", "Java", "Javascript", "C++", "C#", 'C', 'Typescript']
Popularity = [230000, 170000, 150000, 100000, 70000, 70000, 50000]
Let's plot the pie chart:
from matplotlib import pyplot as plt

programming_languages = ["Python", "Java", "Javascript", "C++", "C#", 'C', 'Typescript']
Popularity = [230000, 170000, 150000, 100000, 70000, 70000, 50000]
plt.pie(Popularity, labels=programming_languages)
plt.axis('equal')
plt.show()
Pie chart
To add labels, use the labels keyword. You can also add a percentage label to each slice, as shown below.
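The percentage labels can be added with the autopct parameter of plt.pie(); a runnable sketch (using the Agg backend so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

programming_languages = ["Python", "Java", "Javascript", "C++", "C#", "C", "Typescript"]
popularity = [230000, 170000, 150000, 100000, 70000, 70000, 50000]

# autopct formats each slice's share of the total as a percentage label
wedges, texts, autotexts = plt.pie(popularity, labels=programming_languages,
                                   autopct='%0.1f%%')
plt.axis('equal')
plt.savefig('pie.png')
```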
Histograms
A histogram is a graphical representation of a frequency distribution. For example, you can use a histogram to determine how many people fall within a certain age range. To create a histogram in Matplotlib, use plt.hist() with your data as input.
Let's draw a histogram for the following dataset:
data = [100, 210, 0, 3, 20, 1000]
In addition to the data, we also need to specify the bin size. For example, the first bin could span 0 to 20:
from matplotlib import pyplot as plt

data = [100, 210, 0, 3, 20, 1000]
plt.hist(data, bins=20)
plt.show()
Histogram
Conclusion
In this tutorial, we covered how to present data in the most commonly used display formats. Matplotlib is a comprehensive library that lets you fully visualize data and identify patterns in it.
Fast path finding for arbitrary graphs.
If you want to learn how the demo was made, please refer to the demo's source code. I tried to describe it in great details.
Performance
I measured the performance of this library on the New York City roads graph (733,844 edges, 264,346 nodes), by solving 250 random path finding problems. Each algorithm was solving the same set of problems. The table below shows the time required to solve one problem.
| | Average | Median | Min | Max | p90 | p99 |
|---|---|---|---|---|---|---|
| A* greedy (suboptimal) | 32ms | 24ms | 0ms | 179ms | 73ms | 136ms |
| NBA* | 44ms | 34ms | 0ms | 222ms | 107ms | 172ms |
| A*, unidirectional | 55ms | 38ms | 0ms | 356ms | 123ms | 287ms |
| Dijkstra | 264ms | 258ms | 0ms | 782ms | 483ms | 631ms |
"A* greedy" converged the fastest; however, as the name implies, the found path is not necessarily globally optimal.
There are a few things that contribute to the performance of this library.
I'm using a heap-based priority queue, built specifically for path finding. I modified the heap's implementation so that changing the priority of any element takes O(lg n) time.
Each path finder opens many graph nodes during its exploration, which creates pressure on garbage collector. To avoid the pressure, I've created an object pool, which recycles nodes when possible.
In general, the A* algorithm converges to the optimal solution faster than Dijkstra, because it uses "hints" from the heuristic function. When the search is performed in both directions (source -> target and target -> source), convergence can be improved even more. The NBA* algorithm is a bi-directional path finder that guarantees the optimal shortest path, while removing the balanced-heuristic requirement. It also seems to be the fastest algorithm among those implemented here (NB: if you have suggestions on how to improve this even further, please let me know!)
I also tried to create my own version of bi-directional A* search, which turned out to be harder than I expected: the two searches met each other quickly, but the point where they met was not necessarily on the globally shortest path. It was close to optimal, but not optimal. I wanted to remove the code, but then changed my mind: it finds a path very quickly, so in cases where speed matters more than correctness it could be a good trade-off. I called this algorithm A* greedy, but maybe it should be A* lazy.
Usage
You can install this module by requiring it from npm:
npm i ngraph.path
Or download from CDN:
<script src='https://unpkg.com/ngraph.path@1.3.1/dist/ngraph.path.min.js'></script>
If you download from the CDN, the library will be available under the ngraphPath global name.
This is a basic example, which finds a path between two arbitrary nodes in an arbitrary graph:
let path = require('ngraph.path');
let pathFinder = path.aStar(graph); // graph is https://github.com/anvaka/ngraph.graph
// now we can find a path between two nodes:
let fromNodeId = 40;
let toNodeId = 42;
let foundPath = pathFinder.find(fromNodeId, toNodeId);
// foundPath is array of nodes in the graph
Example above works for any graph, and it's equivalent to unweighted Dijkstra's algorithm.
Let's say we have the following graph:
let createGraph = require('ngraph.graph');
let graph = createGraph();
graph.addLink('a', 'b', {weight: 10});
graph.addLink('a', 'c', {weight: 10});
graph.addLink('c', 'd', {weight: 5});
graph.addLink('b', 'd', {weight: 10});
We want to find a path with the smallest possible weight:
let pathFinder = path.aStar(graph, {
// We tell our pathfinder what should it use as a distance function:
distance(fromNode, toNode, link) {
// We don't really care about from/to nodes in this case,
// as link.data has all needed information:
return link.data.weight;
}
});
let path = pathFinder.find('a', 'd');
This code will correctly find the path: d <- c <- a (the returned array of nodes is ordered from target to source).
When pathfinder searches for a path between two nodes it considers all neighbors of a given node without any preference. In some cases we may want to guide the pathfinder and tell it our preferred exploration direction.
For example, when each node in a graph has coordinates, we can assume that nodes that are closer towards the path-finder's target should be explored before other nodes.
let createGraph = require('ngraph.graph');
let graph = createGraph();
// Our graph has cities:
graph.addNode('NYC', {x: 0, y: 0});
graph.addNode('Boston', {x: 1, y: 1});
graph.addNode('Philadelphia', {x: -1, y: -1});
graph.addNode('Washington', {x: -2, y: -2});
// and railroads:
graph.addLink('NYC', 'Boston');
graph.addLink('NYC', 'Philadelphia');
graph.addLink('Philadelphia', 'Washington');
When we build the shortest path from NYC to Washington, we want to tell the pathfinder that it should prefer Philadelphia over Boston.
let pathFinder = path.aStar(graph, {
distance(fromNode, toNode) {
// In this case we have coordinates. Lets use them as
// distance between two nodes:
let dx = fromNode.data.x - toNode.data.x;
let dy = fromNode.data.y - toNode.data.y;
return Math.sqrt(dx * dx + dy * dy);
},
heuristic(fromNode, toNode) {
// this is where we "guess" distance between two nodes.
// In this particular case our guess is the same as our distance
// function:
let dx = fromNode.data.x - toNode.data.x;
let dy = fromNode.data.y - toNode.data.y;
return Math.sqrt(dx * dx + dy * dy);
}
});
let path = pathFinder.find('NYC', 'Washington');
With this simple heuristic our algorithm becomes smarter and faster.
It is very important that our heuristic function does not overestimate actual distance between two nodes. If it does so, then algorithm cannot guarantee the shortest path.
If you want the pathfinder to treat your graph as oriented, pass the oriented: true setting:
let pathFinder = path.aStar(graph, {
oriented: true
});
The library implements a few A* based path finders:
let aStarPathFinder = path.aStar(graph, options);
let aGreedyStar = path.aGreedy(graph, options);
let nbaFinder = path.nba(graph, options);
Each finder has just one method, find(fromNodeId, toNodeId), which returns an array of the nodes that belong to the found path. If no path exists, an empty array is returned.
Which finder to choose?
With many options available, it may be confusing whether to pick Dijkstra or A*.
I would pick Dijkstra if there is no way to guess a distance between two arbitrary nodes in a graph. If we can guess distance between two nodes - pick A*.
Among the algorithms presented above, I'd recommend A* greedy if you care more about speed and less about accuracy. However, if accuracy is your top priority, choose NBA*. It is a bi-directional, optimal A* algorithm with very good exit criteria. You can read about it here: https://repub.eur.nl/pub/16100/ei2009-10.pdf
Play with a demo or watch it on YouTube.
Download Details:
Author: anvaka
Source Code: https://github.com/anvaka/ngraph.path
License: MIT
When I first encountered Open Graph (OG) images, I thought they were simply a decorative protocol that comes when we share links. It didn’t take long for me to realize that OG images have a lot of impact on generally any resource or website that’s shared on public platforms.
When the image is combined with title and description metadata, they provide quick information about the resource shared. For instance, when we share a link on Twitter, the metadata is parsed and a preview card generates.
At a quick glance, the preview card provides information about the shared resource even before you visit the link. If no metadata is available, no preview is generated and the link gets truncated, leaving no useful information about the resource.
However, creating OG images for many pages or blogs is time-consuming. A better approach would be to have a few templates designed for respective categories and dynamically create the images with a simple image generator service.
In this post, we will set up a simple server with an /ogimage endpoint that responds with images generated dynamically from the provided query parameters. The primary objective is to reduce the manual effort of creating OG images.
For the sake of this post, we will use Node.js and Express to set up the server and use a couple of npm packages to handle the dynamic image generation. Please feel free to use the tools that suit your preferences.
So, without further ado, let’s get started…
Let’s first understand what the OG protocol is. According to opg.me, “The Open Graph protocol enables any web page to become a rich object in a social graph. It provides enough information to richly represent any web page within the social graph.”
Individual pieces of information that are socially shareable are defined via meta tags. These tags are then grouped by the OG mechanism to provide a preview of the shared resource on social media.
In this post, we will focus mainly on og:image. To learn more about the other meta tags (such as og:title or og:description) and the Open Graph protocol itself, please refer to this insightful article.
Below are the steps required to build a Node.js-powered OG image generator, built around the ogimage endpoint.
endpointTo begin, let’s create a simple Node.js and Express app with a single GET
endpoint, /ogimage
. All the data that goes into ogimage
is from query parameters from the URL:
# Create a new directory and cd into it
mkdir og-imager
cd og-imager
# initialize npm
npm init
# or use "npm init -y" to initialize with default values
# add express
npm install express
Next, create an index.js
file and add the below snippet. This imports and initializes an Express app, sets up a GET /ogimage
endpoint, and listens for requests:
// Import and initialize the express app
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;
// setup GET endpoint
app.get('/ogimage', (req, res) => {
res.send('OG Imager!');
});
// Listen for requests
app.listen(port, () => {
console.log(`app listening at ${port}`)
});
We can now add the start script to package.json
to start the app. Use nodemon for local development purposes to autoreload the Node server when changes are made:
# add nodemon as dev-dependency
npm install nodemon -D

# add start scripts
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js"
},
Start the server (npm run start
/npm run dev
) and we should see the OG Imager!
in the browser when http://localhost:3000/ogimage
loads.
An image template is a simple HTML markup with a few placeholders and CSS to style. The placeholders are in Handlebars syntax, {{placeholder}}
, but we will discuss this more in the next section.
In simpler terms, we want to create a simple HTML page and capture the page as an image with respective dimensions. Below is the markup for an example template that we can use. Please feel free to modify the HTML and CSS as you see fit for your own blogs/apps:
const templateHTML = `
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<style>{{styles}}</style>
</head>
<body id="body">
<main>
<div class='logo'>
{{#if logoUrl}}
<img src="{{logoUrl}}" alt="logo" />
{{else}}
<span>Example Logo</span>
{{/if}}
</div>
<div class="title">{{title}}</div>
<div>
{{#if tags}}
<ul class="tags">
{{#each tags}}
<li class="tag-item">#{{this}}</li>
{{/each}}
</ul>
{{/if}}
{{#if path}}
<p class="path">{{path}}</p>
{{/if}}
</div>
</main>
</body>
</html>
`;
Now, let’s add the styles for the template. Similar to HTML, the CSS will have placeholders for dynamic content, such as a background image or title font size:
const templateStyles = `
@font-face {
font-family: Source Code Pro;
src: url(https://fonts.googleapis.com/css2?family=Source+Code+Pro:wght@500&display=swap);
}
* {
box-sizing: border-box;
}
:root {
font-size: 16px;
font-family: Source Code Pro, monospace;
}
body {
padding: 2.5rem;
height: 90vh;
background: #042f7d;
{{#if bgUrl}}
background-image: url({{bgUrl}});
background-position: center;
background-repeat: no-repeat;
background-size: cover;
{{else}}
background: linear-gradient(to right, #042f7d, #007eff);
color: #00ffae;
{{/if}}
}
main {
height: 100%;
width: 100%;
display: flex;
flex-direction: column;
justify-content: space-between;
}
.logo {
width: 15rem;
height: 3rem;
}
.logo img {
width: 100%;
height: 100%;
}
.logo span {
font-size: 2rem;
color: yellow;
font-style: italic;
text-decoration: wavy;
font-variant: unicase;
}
.title {
font-size: {{fontSize}};
text-transform: capitalize;
margin: 0.25rem 0;
font-weight: bold;
}
.tags {
display: flex;
list-style-type: none;
padding-left: 0;
color: #ff00d2;
font-size: 1.5rem;
}
.tag-item {
margin-right: 0.5rem;
}
.path {
color: #6dd6ff;
font-size: 1.25rem;
}
`;
Now that we have the template ready, the next step is to generate an image from it.
To generate an image from an HTML template on a server, we spin up a headless browser to load a page with the HTML and CSS from the template at the desired viewport dimensions. The loaded page is then captured and saved or served as an image.
We will use Puppeteer to spin up the headless browser and take a screenshot of a page loaded from the template we created above. We will also need Handlebars to compile the templated HTML and CSS and replace placeholders with dynamic values:
npm install puppeteer handlebars
Before launching the browser and capturing the page, let’s compile the template HTML that must be loaded into the page:
const Handlebars = require("handlebars");
// Get dynamic font size for title depending on its length
function getFontSize(title="") {
if (!title || typeof title !== 'string') return "";
const titleLength = title.length;
if (titleLength > 55) return "2.75rem";
if (titleLength > 35) return "3.25rem";
if (titleLength > 25) return "4.25rem";
return "4.75rem";
}
// compile templateStyles
const compiledStyles = Handlebars.compile(templateStyles)({
bgUrl: req.query.bgUrl,
fontSize: getFontSize(req.query.title),
});
// compile templateHTML
const compiledHTML = Handlebars.compile(templateHTML)({
logoUrl: req.query.logoUrl,
title: req.query.title,
tags: req.query.tags,
path: req.query.path,
styles: compiledStyles,
});
Note that Handlebars will escape unsafe HTML. So, passing the query string value directly is safe as long as our placeholders are with {{double-stash}}
. The resulting HTML and styles will have the dynamic values that a query string receives.
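To make the escaping behavior concrete, here is a minimal stand-in sketch, not the actual Handlebars implementation, showing the kind of HTML escaping that double-stash interpolation applies to unsafe input:

```javascript
// Illustrative only: mirrors (in simplified form) the HTML escaping
// Handlebars applies to {{double-stash}} expressions.
function escapeHTML(value) {
  return String(value)
    .replace(/&/g, "&amp;") // must run first so entities aren't double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// A malicious title from the query string is rendered inert:
const unsafeTitle = '<script>alert("xss")</script>';
console.log(escapeHTML(unsafeTitle));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

This is why query values can flow straight into the template without extra sanitization on our side.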
Next up is to spin up the browser and take a screenshot of the page with Puppeteer. Puppeteer sets the viewport to 800x600
by default (at the time of writing this article). However, this can be overridden by the defaultViewport
property passed to the launch method:
const puppeteer = require('puppeteer');
// ...
app.get('/ogimage', async (req, res) => { // Note the async
// ...
const browser = await puppeteer.launch({
headless: true,
args: ["--no-sandbox"],
defaultViewport: {
width: 1200,
height: 630,
}
});
const page = await browser.newPage();
// ...
});
1200x630
is the most common size for OG images. The viewport size can also be controlled dynamically by using page.setViewport
to set values from request parameters:
await page.setViewport({ width: Number(req.query.width), height: Number(req.query.height) });
Next, set the compiled HTML as the page content and wait until there are no network requests for at least 500ms by setting the waitUntil
property to networkidle0
. This wait ensures all images and content have loaded:
await page.setContent(compiledHTML, { waitUntil: 'networkidle0' });
Wait a minute, setting networkidle0
means it will wait 500ms every time. How do we fix this?
In pursuit of the answer, I landed on GitHub’s blog post on a framework for building Open Graph images.
In the article, Jason Etcovitch writes, “We changed waitUntil
to domcontentloaded
to ensure that the HTML had finished being parsed, then passed a custom function to page.evaluate
.
“This [runs] in the context of the page itself, but pipes the return value to the outer context. This meant that we could listen for image load events and pause execution until the Promises have been resolved.”
The below snippet is directly taken from this blog post to fix this issue:
// Set the content to our rendered HTML
await page.setContent(compiledHTML, { waitUntil: "domcontentloaded" });
// Wait until all images and fonts have loaded
await page.evaluate(async () => {
const selectors = Array.from(document.querySelectorAll("img"));
await Promise.all([
document.fonts.ready,
...selectors.map((img) => {
// Image has already finished loading, let’s see if it worked
if (img.complete) {
// Image loaded and has presence
if (img.naturalHeight !== 0) return;
// Image failed, so it has no height
throw new Error("Image failed to load");
}
// Image hasn't loaded yet, so add an event listener to know when it does
return new Promise((resolve, reject) => {
img.addEventListener("load", resolve);
img.addEventListener("error", reject);
});
}),
]);
});
So, we can take a screenshot of the body element (the visible content wrapper) on the loaded page with page.screenshot
and send the omitBackground: true
option to ignore the browser background, capturing only the loaded content.
However, if there is no background property set, the resulting screenshot will have a transparent background rather than the browser’s default white background:
const element = await page.$('#body');
const image = await element.screenshot({ omitBackground: true });
await browser.close();
And that’s it; we have an image generated and one last step is to serve the image.
To save/serve the image, we must first set the Content-Type
header to indicate that the ogimage
endpoint responds with an image, so no additional logic is required to handle the response.
We can directly use the endpoint as an image URL and set the Cache-Control
headers for caching purposes:
app.get('/ogimage', (req, res) => {
// Compile Template HTML & CSS with Handlebars
.....
// Load the template and take a screenshot with Puppeteer
.....
res.writeHead(200, {
'Content-Type': 'image/png',
'Cache-Control': `immutable, no-transform, s-maxage=2592000, max-age=2592000` // 30 days cache
});
res.end(image);
});
To load the image preview locally, open your browser and visit the ogimage
endpoint at localhost:3000/ogimage
with query parameters. This sends a GET
request to the service and displays the image response in the browser:
http://localhost:3000/ogimage?title=Open%20Graph%20Image%20Generator%20with%20NodeJS&tags[]=nodejs&tags[]=og-image&path=blog.yourdomain.com/open-graph-image-generator-with-nodejs
The image preview looks something like below:
And here is the final code:
// index.js
const express = require('express');
const puppeteer = require('puppeteer');
const Handlebars = require("handlebars");
const app = express();
const port = process.env.PORT || 3000;
const templateStyles = `...`;
const templateHTML = `...`;
// Get dynamic font size for title depending on its length
function getFontSize(title="") {
if (!title || typeof title !== 'string') return "";
const titleLength = title.length;
if (titleLength > 55) return "2.75rem";
if (titleLength > 35) return "3.25rem";
if (titleLength > 25) return "4.25rem";
return "4.75rem";
}
app.get('/ogimage', async (req, res) => {
// compiled styles
const compiledStyles = Handlebars.compile(templateStyles)({
bgUrl: req.query.bgUrl,
fontSize: getFontSize(req.query.title),
});
// compiled HTML
const compiledHTML = Handlebars.compile(templateHTML)({
logoUrl: req.query.logoUrl,
title: req.query.title,
tags: req.query.tags,
path: req.query.path,
styles: compiledStyles,
});
// Launch headless browser and capture screenshot
const browser = await puppeteer.launch({
headless: true,
args: ["--no-sandbox"],
defaultViewport: {
width: 1200,
height: 630,
}
});
const page = await browser.newPage();
// Set the content to our rendered HTML
await page.setContent(compiledHTML, { waitUntil: "domcontentloaded" });
// Wait until all images and fonts have loaded
await page.evaluate(async () => {
const selectors = Array.from(document.querySelectorAll("img"));
await Promise.all([
document.fonts.ready,
...selectors.map((img) => {
// Image has already finished loading, let’s see if it worked
if (img.complete) {
// Image loaded and has presence
if (img.naturalHeight !== 0) return;
// Image failed, so it has no height
throw new Error("Image failed to load");
}
// Image hasn't loaded yet, so add an event listener to know when it does
return new Promise((resolve, reject) => {
img.addEventListener("load", resolve);
img.addEventListener("error", reject);
});
}),
]);
});
const element = await page.$('#body');
const image = await element.screenshot({ omitBackground: true });
await browser.close();
res.writeHead(200, { 'Content-Type': 'image/png', 'Cache-Control': `immutable, no-transform, s-maxage=2592000, max-age=2592000` });
res.end(image);
})
app.listen(port, () => {
console.log(`app listening at ${port}`)
});
You can also find the complete code on GitHub. Feel free to fork it and extend beyond the template to fit your needs.
A good tip for development is to comment out the Puppeteer and Content-Type header code and send the compiledHTML
in response instead of the generated image, res.status(200).send(compiledHTML)
:
// compiled HTML
const compiledHTML = ...;
// Comment out puppeteer, browser, page stuff
// const browser = ...;
// ...
// await browser.close();
// instead of image as response, send compiledHTML itself
// res.writeHead(200, { 'Content-Type': 'image/png', 'Cache-Control': `immutable, no-transform, s-maxage=2592000, max-age=2592000` });
// res.end(image);
res.status(200).send(compiledHTML);
This bypasses image generation and renders the resulting HTML in your browser, speeding up development by letting you iterate quickly on the template UI.
To use the service, add the meta image tags with the dynamic URL as the content attribute. This URL resolves to an image in the preview when the page is shared.
og:image
is the primary meta tag for the OG image. You can also add Twitter, Instagram, or other platform-specific tags for your target platforms:
<meta property="og:image" content="https://{{your_domain.com}}/ogimage?title=Open%20Graph%20Image%20Generator%20with%20NodeJS&tags[]=nodejs&tags[]=og-image&path=blog.yourdomain.com/open-graph-image-generator-with-nodejs&logoUrl={{your_logo_url}}">
Note that you may need to URL escape the query string; you can use encodeURI
.
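As a sketch of how to avoid hand-escaping, a small hypothetical helper can build the ogimage URL using Node’s built-in URL and URLSearchParams (the helper name and domain are placeholders, not from this post):

```javascript
// Build an /ogimage URL with safely encoded query parameters.
// The parameter names match this post; the base URL is an assumption.
function buildOgImageUrl(base, { title, tags = [], path, logoUrl }) {
  const url = new URL("/ogimage", base);
  if (title) url.searchParams.set("title", title);
  for (const tag of tags) url.searchParams.append("tags[]", tag);
  if (path) url.searchParams.set("path", path);
  if (logoUrl) url.searchParams.set("logoUrl", logoUrl);
  return url.toString();
}

const ogUrl = buildOgImageUrl("https://example.com", {
  title: "Open Graph Image Generator with NodeJS",
  tags: ["nodejs", "og-image"],
  path: "blog.yourdomain.com/open-graph-image-generator-with-nodejs",
});
console.log(ogUrl);
```

URLSearchParams takes care of percent-encoding, so titles with spaces or special characters are always safe to pass.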
There we go, we have our own OG image generator service that dynamically creates images for each page/blog post.
You can also pick the pieces (templates, Handlebars compilation, Puppeteer screenshot) of this service to put together a serverless function or use it as a utility during the build process in any frontend app.
This post covers one of many approaches to achieve this. In general, the idea remains the same; only the syntax/language changes.
Furthermore, the generated image can be stored in AWS S3, GCS, or any service that suits your needs, and can be served from storage on subsequent requests to save generation time. You can also use an in-memory cache, invalidating entries after a set number of days.
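As a rough sketch of the in-memory option (all names hypothetical, not production-ready), a Map keyed by the request’s query string with a time-to-live could look like:

```javascript
// Minimal TTL cache sketch for generated images (illustrative only).
class ImageCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { image, expiresAt }
  }
  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired: invalidate lazily on read
      return undefined;
    }
    return entry.image;
  }
  set(key, image) {
    this.entries.set(key, { image, expiresAt: Date.now() + this.ttlMs });
  }
}

// Hypothetical usage inside the handler: key by the raw request URL.
const cache = new ImageCache(30 * 24 * 60 * 60 * 1000); // 30 days
// const cached = cache.get(req.url);
// if (cached) return res.end(cached);
// ...generate the image with Puppeteer...
// cache.set(req.url, image);
```

A real deployment would also bound the cache size (e.g., LRU eviction), since generated PNGs can add up quickly in memory.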
Thank you for reading. I hope you found this post helpful, and please share it with those who might benefit from it. Ciao!
Link: https://blog.logrocket.com/create-open-graph-image-generator-node-js/
1642865460
ngraph.graph
Graph data structure for javascript. This library belongs to a family of javascript graph packages called ngraph.
Install
With npm do:
npm install ngraph.graph
Or download from CDN:
<script src='https://unpkg.com/ngraph.graph@19.0.0/dist/ngraph.graph.min.js'></script>
If you download from the CDN, the library will be available under the createGraph
global name.
Create a graph with no edges and no nodes:
var createGraph = require('ngraph.graph');
var g = createGraph();
The graph g
can be grown in two ways. You can add one node at a time:
g.addNode('hello');
g.addNode('world');
Now graph g
contains two nodes: hello
and world
. You can also use addLink()
method to grow a graph. Calling this method with nodes which are not present in the graph creates them:
g.addLink('space', 'bar'); // now graph 'g' has two new nodes: 'space' and 'bar'
If the nodes are already present in the graph, addLink() connects them:
// Only a link between 'hello' and 'world' is created. No new nodes.
g.addLink('hello', 'world');
Node identifiers can be of almost any type; the most common and convenient choices are numbers and strings. You can associate arbitrary data with a node via the optional second argument of the addNode()
method:
// Node 'world' is associated with a string object 'custom data'
g.addNode('world', 'custom data');
// You can associate arbitrary objects with node:
g.addNode('server', {
status: 'on',
ip: '127.0.0.1'
});
// to get data back use `data` property of node:
var server = g.getNode('server');
console.log(server.data); // prints associated object
You can also associate an arbitrary object with a link using the third optional argument of the addLink()
method:
// A link between nodes '1' and '2' is now associated with object 'x'
var x = { weight: 42 }; // any object will do
g.addLink(1, 2, x);
After you have created a graph, one of the most common tasks is enumerating its nodes/links to perform an operation:
g.forEachNode(function(node){
console.log(node.id, node.data);
});
The method takes a callback that receives the current node. The node object may contain internal information. node.id
and node.data
represent parameters passed to the g.addNode(id, data)
method and they are guaranteed to be present in future versions of the library.
To enumerate all links in the graph use forEachLink()
method:
g.forEachLink(function(link) {
console.dir(link);
});
To enumerate all links for a specific node use forEachLinkedNode()
method:
g.forEachLinkedNode('hello', function(linkedNode, link){
console.log("Connected node: ", linkedNode.id, linkedNode.data);
console.dir(link); // link object itself
});
This method always enumerates both inbound and outbound links. If you want to get only outbound links, pass a third optional argument:
g.forEachLinkedNode('hello',
function(linkedNode, link) { /* ... */ },
true // enumerate only outbound links
);
To get a particular node object use getNode()
method. E.g.:
var world = g.getNode('world'); // returns 'world' node
console.log(world.id, world.data);
To get a particular link object use getLink()
method:
var helloWorldLink = g.getLink('hello', 'world'); // returns a link from 'hello' to 'world'
console.log(helloWorldLink);
To remove a node or a link from a graph use removeNode()
or removeLink()
correspondingly:
g.removeNode('space');
// Removing a link is a bit harder, since the method requires the actual link object:
g.forEachLinkedNode('hello', function(linkedNode, link){
g.removeLink(link);
});
You can also remove all nodes and links by calling
g.clear();
Whenever someone changes your graph, you can listen for notifications:
g.on('changed', function(changes) {
console.dir(changes); // prints array of change records
});
g.addNode(42); // this will trigger the 'changed' event
Each change record holds information:
ChangeRecord = {
changeType: add|remove|update - describes type of this change
node: - only present when this record reflects a node change, represents actual node
link: - only present when this record reflects a link change, represents actual link
}
Sometimes it is desirable to react only on bulk changes. ngraph.graph supports this via beginUpdate()
/endUpdate()
methods:
g.beginUpdate();
for(var i = 0; i < 100; ++i) {
g.addLink(i, i + 1); // no events are triggered here
}
g.endUpdate(); // this triggers all listeners of 'changed' event
If you want to stop listening to events, use the off()
method:
g.off('changed', yourHandler); // no longer interested in changes from graph
For more information about events, please refer to ngraph.events
Author: Anvaka
Source Code: https://github.com/anvaka/ngraph.graph
License: BSD-3-Clause License
1642826490
Storing data in tables has its limitations. Usually, joins and aggregations are required to represent more complicated datasets and extract the desired data. Storing data in a semantic graph may be the solution, and I will show you how to programmatically switch from pandas to a knowledge graph.
Remember how many times you have looked up “how to do this in pandas”? Though it is the most popular data-handling library in Python, it can be quite complicated due to the rigidity of tabular formats. This is most obvious when the stored data is imported from a JSON file and ends up having multiple layers of objects. At this point, you wish for a data structure that lets you store data with objects and subclasses, just like in object-oriented programs. The answer? Semantic knowledge graphs.
In this talk, Cheuk will first introduce what semantic knowledge graphs are, their building block (triples), and how all data can be described with them, using objects and properties. Cheuk will assume no prior knowledge and will explain via examples and visualization with the TerminusDB model builder, a graphical interface that allows you to build schemas for semantic knowledge graphs.
In the next part, Cheuk will show how to construct a schema based on a pandas DataFrame. With the Python client of TerminusDB, the schema can be built programmatically, followed by importing the data in the DataFrame. Basic Python knowledge is assumed here. Cheuk will show the internals of pandas, dissecting it and reconstructing a knowledge graph schema, and will also show the code that transforms the data and inserts it into the prepared graph.
Finally, Cheuk will visualize the graph with a customized interactive graph visualization in a Jupyter notebook.
This talk is for data scientists and engineers who work with data and use pandas a lot. They may need new tools and skills to expand their data-handling repertoire, and semantic knowledge graphs would be a high-value addition.
#pandas #graph #dataframes
1642822534
This is a short and basic course on graphs, in which we will discuss all the basic concepts related to graphs, the basic types of graphs, and the various operations performed on them. Knowledge of graphs is incomplete without the two important traversal mechanisms, breadth-first search and depth-first search, so the concepts as well as the implementation of DFS and BFS in Python will be discussed.
00:01:15 Agenda of the course
00:04:04 Introduction to Graphs
00:07:22 Types of Graphs
00:16:47 Common Graph Operations
00:21:07 Applications of Graph
00:29:35 Traversal Algorithms - BFS
00:33:05 BFS Implementation
00:43:23 Traversal Algorithms - DFS
00:46:56 DFS Implementation
00:58:27 Summary
#python #graph #algorithms #datastructures