In this tutorial, you will learn how to use Neo4j AuraDB and GraphQL to visualize query data and what problems you can solve using them.
As we continue to build and develop software applications, our requirements become more diverse and unique. We have a variety of data and many ways to work with it, but some approaches are more performant than others.
This article will discuss how we can use Neo4j and GraphQL and what problems we can solve using them. Before starting this article, you should have a strong knowledge of creating projects and backend development with Node.js and some familiarity with GraphQL.
This is an introductory article for Neo4j, so it is absolutely fine if you are unfamiliar with it. Even if you have used Neo4j before, this article can help introduce you to GraphQL and to working on projects that combine Neo4j and GraphQL. So, without further ado, let’s get right into it!
GraphQL is a query language for building APIs. According to the GraphQL website:
“GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful dev tools.”
In simpler terms, GraphQL is a query language that can be used as an API layer between the client and the server. We use queries to fetch data and mutations to perform operations that create or modify it.
We make GraphQL requests using types and fields instead of endpoints and retrieve the data as a JSON object. This helps us get only the data we requested from the server. For example, a typical GraphQL query will look like this:
{
  products {
    productId
    title
    variant {
      price
      size
    }
  }
}
The response data will look like this:
{
  "data": {
    "products": [
      {
        "productId": "1",
        "title": "Blue Jeans",
        "variant": {
          "price": 35,
          "size": "XL"
        }
      },
      {
        "productId": "2",
        "title": "Armani Suit",
        "variant": {
          "price": 59,
          "size": "XXL"
        }
      }
    ]
  }
}
As you can see, we are getting the JSON response in the same structure as we sent the request. Please head over to the GraphQL documentation to learn more about GraphQL.
Neo4j is a native graph database that differs from other data storage solutions: its storage layer is highly optimized for storing and maintaining connected data. Neo4j stores the data in the database as a graph. In this graph, each node represents an entity, and the relationships between entities are defined by directed edges between nodes. All the information needed to find the next node in a sequence is available in the node itself. According to their website:
“With a native graph database at the core, Neo4j stores and manages data in a natural, connected state. The graph database takes a property graph approach, which benefits traversal performance and operations runtime.”
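The phrase "all the information to find the next node is available in the node itself" describes index-free adjacency. A toy sketch in plain JavaScript (an illustration only, not how Neo4j actually lays out its records) makes the idea concrete:

```javascript
// Each node object holds direct references to its neighbors, so a
// traversal follows pointers instead of scanning a separate join table.
const alice = { name: "Alice", likes: [] };
const bob = { name: "Bob", likes: [] };

// The relationship is stored on the node itself.
alice.likes.push(bob);

// Finding who Alice likes is a pointer hop, not an index lookup.
console.log(alice.likes.map((node) => node.name)); // [ 'Bob' ]
```

This is why traversals stay fast even as the dataset grows: the cost of a hop depends on the node's local connections, not on the total size of the database.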
So, why should we use Neo4j? Because Neo4j stores data as a connected graph in its native storage layer, it is far more helpful than a relational database when the data is extensively connected. Its queries feel similar to SQL, but execution is much faster, especially for heavy query operations spanning multiple nodes.
Some of the most outstanding advantages of using Neo4j include the following:
- No expensive JOIN operations as in SQL, because the data is directly connected in the graph

Neo4j also gives us the facility to work directly with GraphQL. This allows us to implement our backend project with Neo4j and GraphQL using the Neo4j GraphQL Library. It is a JavaScript library that can be used with any JavaScript GraphQL implementation, such as Apollo Server.
The Neo4j GraphQL Library automatically generates CRUD operations when we provide the GraphQL type definitions to the library. That means we don’t need to write queries or mutations explicitly to perform CRUD operations. The Neo4j GraphQL Library automatically handles all of that for us. It also provides complex pagination, sorting, filtering, and more.
Enough theory discussion — let’s build an example project. In this article, we will create a to-do application using GraphQL as the API and Neo4j AuraDB as the database. We will only focus on the backend part and not cover the frontend, as this is not in the scope of the article. Let’s get started!
First, let’s initiate a new project. We will create our root project folder and create a package.json
file for our Node.js backend project by running the following command:
npm init -y
We will get the following result:
Now, let’s run the following command to install the required files for our project:
npm install @neo4j/graphql neo4j-driver graphql apollo-server dotenv
The code above installs all the required packages we need. Also, let’s install nodemon; it will make our lives easier since we won’t have to restart the server after every change. Install it with the following command:

npm install --save-dev nodemon
Next, we will create a new Neo4j AuraDB instance from the Neo4j Aura website. You will need to create an account for a new Neo4j AuraDB instance. First, let’s create an Empty instance from the website. You can also make other instances with existing data and play around:
After clicking the Create button, we will get the following modal containing the username and password:
Let’s download the env
file containing the credentials by clicking the Download button and selecting Continue.
Let’s create our first file and start writing some code. Let’s start by creating a new file named server.js
and paste the following code:
const { ApolloServer } = require("apollo-server");
const { Neo4jGraphQL } = require("@neo4j/graphql");
const neo4j = require("neo4j-driver");
const { typeDefs } = require("./typedefs");
const Config = require("./config");

const driver = neo4j.driver(
  Config.NEO4J_URI,
  neo4j.auth.basic(Config.NEO4J_USERNAME, Config.NEO4J_PASSWORD)
);

const neoSchema = new Neo4jGraphQL({ typeDefs, driver });

neoSchema.getSchema().then((schema) => {
  const server = new ApolloServer({
    schema: schema,
  });
  server.listen().then(({ url }) => {
    console.log(`GraphQL server ready on ${url}`);
  });
});
Here, we can see that we will also need to create two additional files: config.js and typedefs.js. So, let’s create the config.js file and paste the following code:
require("dotenv").config();

module.exports = class Config {
  static NEO4J_URI = process.env.NEO4J_URI;
  static NEO4J_USERNAME = process.env.NEO4J_USERNAME;
  static NEO4J_PASSWORD = process.env.NEO4J_PASSWORD;
  static NEO4J_AURA_INSTANCENAME = process.env.AURA_INSTANCENAME;
};
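For reference, the .env file read by config.js would contain entries along these lines. The values below are placeholders, not real credentials; use the values from the credentials file downloaded from your own Aura instance:

```shell
# .env (placeholder values)
NEO4J_URI=neo4j+s://xxxxxxxx.databases.neo4j.io
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=your-generated-password
AURA_INSTANCENAME=Instance01
```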
Now, let’s paste the following code into the typedefs.js
file:
const { gql } = require("apollo-server");

module.exports.typeDefs = gql`
  type Todo @node(label: "Todo") {
    title: String!
    status: String!
    category: Category! @relationship(type: "Category", direction: IN)
  }

  type Category @node(label: "Category") {
    title: String!
    todos: [Todo!]! @relationship(type: "Category", direction: OUT)
  }
`;
Here, the @relationship schema directive helps Neo4j understand the relationships between the types in our type definitions. Before running the server, we also must make the following changes to our package.json file:
{
  ...
  "scripts": {
    ...,
    "start": "node server.js",
    "start:dev": "nodemon server.js"
  },
  ...
}
Now, let’s run the server and see the changes by running npm run start:dev in the terminal. After running the command, we will see the following message:
If we click the link, we will see the following screen in our browser:
After clicking the Query your server button, we can see that the Neo4j GraphQL Library provides some basic CRUD operations (queries and mutations). Now, because we started from an empty Neo4j AuraDB instance, we need to create some todos by running the following mutation in the playground (and, of course, changing the input for each entry):
mutation CreateTodos {
  createTodos(input: {
    category: {
      create: {
        node: {
          title: "Assignment"
        }
      }
    },
    status: "NEW",
    title: "Assignment on Fourier Transform"
  }) {
    info {
      bookmark
      nodesCreated
      relationshipsCreated
    }
    todos {
      category {
        title
      }
      status
      title
    }
  }
}
You can find all the information about the created nodes in the Apollo Server playground. Here, note that we can create a category while also creating a todo. However, if we want to use an existing category while running the createTodos mutation, we can rewrite the previous mutation as shown below:
mutation CreateTodos {
  createTodos(input: {
    category: {
      connect: {
        where: {
          node: {
            title: "Assignment"
          }
        }
      }
    },
    status: "NEW",
    title: "Assignment on Fourier Transform"
  }) {
    info {
      bookmark
      nodesCreated
      relationshipsCreated
    }
    todos {
      category {
        title
      }
      status
      title
    }
  }
}
We can also query all the todo items in our database by running the following query in GraphQL:
query Todos {
  todos {
    title
    status
    category {
      title
    }
  }
}
After running the query, we will get the following JSON output:
{
  "data": {
    "todos": [
      {
        "title": "Write an Article on ChatGPT",
        "status": "NEW",
        "category": {
          "title": "Writing"
        }
      },
      {
        "title": "Assignment on Fourier Transform",
        "status": "NEW",
        "category": {
          "title": "Assignment"
        }
      },
      {
        "title": "Assignment on Neural Network",
        "status": "NEW",
        "category": {
          "title": "Assignment"
        }
      }
    ]
  }
}
We can also visualize the data from the Neo4j Workspace. Let’s go to the Neo4j AuraDB dashboard and click the Open button of our instance. Then enter your password, and you will log in to the Neo4j Workspace:
Now, in the Neo4j Workspace, select Show me a graph and hit enter
in the search bar. You will see all the nodes and relationships between them. In our example, we will see a graph like the image below:
In this article, we successfully performed CRUD operations using GraphQL and then visualized the data in Neo4j AuraDB. Using the powerful technology of Neo4j, we can run complex relational queries without any JOIN operations and get our results faster and more efficiently. We also saw that the Neo4j GraphQL Library can quickly generate CRUD operations for our GraphQL type definitions without us explicitly writing any queries or mutations.
Source: https://blog.logrocket.com
In this article, we will learn how to use Neo4j with GraphQL. GraphQL enables an API developer to model application data as a graph, and API clients that request that data can easily traverse this data graph to retrieve it. These are powerful, game-changing capabilities. But if your backend isn’t graph-ready, these capabilities can become liabilities by putting additional pressure on your database and consuming more time and resources.
In this article, I’ll shed some light on ways you can mitigate these issues when you use a graph database as the backend for your next GraphQL API by taking advantage of the capabilities offered by the open-source Neo4j GraphQL Library.
Fundamentally, a graph is a data structure composed of nodes (the entities in the data model) along with the relationships between nodes. Graphs are all about the connections in your data. For this reason, relationships are first-class citizens in the graph data model.
Graphs are so important that an entire category of databases was created to work with graphs: graph databases. Unlike relational or document databases that use tables or documents, respectively, as their data models, the data model of a graph database is (you guessed it!) a graph.
GraphQL is not and was never intended to be a database query language. It is indeed a query language, yet it lacks much of the semantics we would expect from a true database query language like SQL or Cypher. That’s on purpose: you don’t want to expose your entire database to all the client applications out there in the world.
Instead, GraphQL is an API query language, modeling application data as a graph and purpose-built for exposing and querying that data graph, just as SQL and Cypher were purpose-built for working with relational and graph databases, respectively. Since one of the primary functions of an API application is to interact with a database, it makes sense that GraphQL database integrations should help build GraphQL APIs that are backed by a database. That’s exactly what the Neo4j GraphQL Library does — makes it easier to build GraphQL APIs backed by Neo4j.
One of GraphQL’s most powerful capabilities enables the API designer to express the entire data domain as a graph using nodes and relationships. This way, API clients can traverse the data graph to find the relevant data. This makes better sense because most API interactions are done in the context of relationships. For example, if we want to fetch all orders placed by a specific customer or all the products in a given order, we’re traversing the pattern of relationships to find those connections in our data.
Soon after GraphQL was open-sourced by Facebook in 2015, a crop of GraphQL database integrations sprung up, evidently in an effort to address the n+1 conundrum and similar problems. Neo4j GraphQL Library was one of these integrations.
Building a GraphQL API service requires you to perform two steps:

1. Define the GraphQL type definitions that describe the schema of your API
2. Implement resolver functions that contain the logic for fetching your data
Combining these schema and resolver functions gives you an executable GraphQL schema object. You may then attach the schema object to a networking layer, such as a Node.js web server or lambda function, to expose the GraphQL API to clients. Often developers will use tools like Apollo Server or GraphQL Yoga to help with this process, but it’s still up to them to handle the first two steps.
If you’ve ever written resolver functions, you’ll recall they can be a bit tedious, as they’re typically filled with boilerplate data fetching code. But even worse than lost developer productivity is the dreaded n+1 query problem. Because of the nested way that GraphQL resolver functions are called, a single GraphQL request can result in multiple round-trip requests to the database. Addressing this typically involves a batching and caching strategy, adding additional complexity to your GraphQL application.
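The n+1 pattern is easy to reproduce with a hand-rolled sketch. The in-memory "database" below is hypothetical; it only counts round trips to show how naive nested resolvers turn one GraphQL request into one query for the list plus one query per item:

```javascript
// Hypothetical in-memory "database" that counts round trips.
let queryCount = 0;
const db = {
  getBusinesses() {
    queryCount++;
    return [{ id: 1 }, { id: 2 }, { id: 3 }];
  },
  getReviewsForBusiness(id) {
    queryCount++;
    return [{ stars: 5, businessId: id }];
  },
};

// Naive resolvers: one query for the list, then one per parent item.
const resolvers = {
  Query: { businesses: () => db.getBusinesses() },
  Business: { reviews: (parent) => db.getReviewsForBusiness(parent.id) },
};

// Resolving { businesses { reviews } } by hand: the nested resolver
// runs once for every business returned by the root resolver.
const businesses = resolvers.Query.businesses();
businesses.forEach((b) => resolvers.Business.reviews(b));

console.log(queryCount); // 4 round trips (1 + n) for 3 businesses
```

A batching layer such as a dataloader, or a generated single database query, collapses those n+1 round trips into one or two.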
Originally, the term GraphQL-First Development described a collaborative process. Frontend and backend teams would agree on a GraphQL schema, then go to work independently building their respective pieces of the codebase. Database integrations extend the idea of GraphQL-First development by applying this concept to the database as well. GraphQL-type definitions can now drive the database.
You can find the full code examples presented here on GitHub.
Let’s say you’re building a business reviews application where you want to keep track of businesses, users, and user reviews. GraphQL-type definitions to describe this API might look something like this:
type Business {
  businessId: ID!
  name: String!
  city: String!
  state: String!
  address: String!
  location: Point!
  reviews: [Review!]! @relationship(type: "REVIEWS", direction: IN)
  categories: [Category!]! @relationship(type: "IN_CATEGORY", direction: OUT)
}

type User {
  userID: ID!
  name: String!
  reviews: [Review!]! @relationship(type: "WROTE", direction: OUT)
}

type Review {
  reviewId: ID!
  stars: Float!
  date: Date!
  text: String
  user: User! @relationship(type: "WROTE", direction: IN)
  business: Business! @relationship(type: "REVIEWS", direction: OUT)
}

type Category {
  name: String!
  businesses: [Business!]! @relationship(type: "IN_CATEGORY", direction: IN)
}
Note the use of the GraphQL schema directive @relationship
in our type definitions. GraphQL schema directives are the language’s built-in extension mechanism and key components for extending and configuring GraphQL APIs — especially with database integrations like Neo4j GraphQL Library. In this case, the @relationship
directive encodes the relationship type and direction (in or out) for pairs of nodes in the database.
Type definitions are then used to define the property graph data model in Neo4j. Instead of maintaining two schemas (one for our database and another for our API), you can now use type definitions to define both the API and the database’s data model. Furthermore, since Neo4j is schema-optional, using GraphQL to drive the database adds a layer of type safety to your application.
In GraphQL, you use fields on special types (Query, Mutation, and Subscription) to define the entry points for the API. In addition, you may want to define field arguments that can be passed at query time, for example, for sorting or filtering. Neo4j GraphQL Library handles this by creating entry points in the GraphQL API for create, read, update, and delete operations for each type, as well as field arguments for sorting and filtering.
Let’s look at some examples. For our business reviews application, suppose you want to show a list of businesses sorted alphabetically by name. Neo4j GraphQL Library automatically adds field arguments to accomplish just this.
{
  businesses(options: { limit: 10, sort: { name: ASC } }) {
    name
  }
}
Perhaps you want to allow the users to filter this list of businesses by searching for companies by name or keyword. The where argument handles this kind of filtering:
{
  businesses(where: { name_CONTAINS: "Brew" }) {
    name
    address
  }
}
You can then combine these filter arguments to express very complex operations. Say you want to find businesses that are in either the Coffee or Breakfast category and filter for reviews containing the keyword “breakfast sandwich:”
{
  businesses(
    where: {
      OR: [
        { categories_SOME: { name: "Coffee" } }
        { categories_SOME: { name: "Breakfast" } }
      ]
    }
  ) {
    name
    address
    reviews(where: { text_CONTAINS: "breakfast sandwich" }) {
      stars
      text
    }
  }
}
Using location data, for example, you can even search for businesses within 1 km of your current location:
{
  businesses(
    where: {
      location_LT: {
        distance: 1000
        point: { latitude: 37.563675, longitude: -122.322243 }
      }
    }
  ) {
    name
    address
    city
    state
  }
}
As you can see, this functionality is extremely powerful, and the generated API can be configured through the use of GraphQL schema directives.
As we noted earlier, GraphQL server implementations require resolver functions where the logic for interacting with the data layer lives. With database integrations such as Neo4j GraphQL Library, resolvers are generated for you at query time for translating arbitrary GraphQL requests into singular, encapsulated database queries. This is a huge developer productivity win (we don’t have to write boilerplate data fetching code — yay!). It also addresses the n+1 query problem by making a single round-trip request to the database.
Moreover, graph databases like Neo4j are optimized for exactly the kind of nested graph data traversals commonly expressed in GraphQL. Let’s see this in action. Once you’ve defined your GraphQL type definitions, here’s all the code necessary to spin up your fully functional GraphQL API:
const { ApolloServer } = require("apollo-server");
const neo4j = require("neo4j-driver");
const { Neo4jGraphQL } = require("@neo4j/graphql");

// Connect to your Neo4j instance.
const driver = neo4j.driver(
  "neo4j+s://my-neo4j-db.com",
  neo4j.auth.basic("neo4j", "letmein")
);

// Pass our GraphQL type definitions and Neo4j driver instance.
const neoSchema = new Neo4jGraphQL({ typeDefs, driver });

// Generate an executable GraphQL schema object and start
// Apollo Server.
neoSchema.getSchema().then((schema) => {
  const server = new ApolloServer({
    schema,
  });
  server.listen().then(({ url }) => {
    console.log(`GraphQL server ready at ${url}`);
  });
});
That’s it! No resolvers.
So far, we’ve only been talking about basic create, read, update, and delete operations. How can you handle custom logic with Neo4j GraphQL Library?
Let’s say you want to show recommended businesses to your users based on their search or review history. One way would be to implement your own resolver function with the logic for generating those personalized recommendations built in. Yet there’s a better way to maintain the one-to-one, GraphQL-to-database operation performance guarantee: You can leverage the power of the Cypher query language using the @cypher
GraphQL schema directive to define your custom logic within your GraphQL type definitions.
Cypher is an extremely powerful language that enables you to express complex graph patterns using ASCII-art-like declarative syntax. I won’t go into detail about Cypher in this article, but let’s see how you could accomplish our personalized recommendation task by adding a new field to your GraphQL-type definitions:
extend type Business {
  recommended(first: Int = 1): [Business!]!
    @cypher(
      statement: """
      MATCH (this)<-[:REVIEWS]-(:Review)<-[:WROTE]-(u:User)
      MATCH (u)-[:WROTE]->(:Review)-[:REVIEWS]->(rec:Business)
      WITH rec, COUNT(*) AS score
      RETURN rec ORDER BY score DESC LIMIT $first
      """
    )
}
Here, the Business type has a recommended field, which uses the Cypher query defined above to show recommended businesses whenever requested in the GraphQL query. You didn’t need to write a custom resolver to accomplish this. Neo4j GraphQL Library is still able to generate a single database request even when using a custom recommended field.
GraphQL database integrations like Neo4j GraphQL Library are powered by the GraphQLResolveInfo object. This object is passed to all resolvers, including the ones generated for us by Neo4j GraphQL Library, and it contains information about both the GraphQL schema and the GraphQL operation being resolved. By closely inspecting this object, GraphQL database integrations can generate database queries at query time.
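As a simplified sketch of that inspection (the info object below is a hypothetical stand-in; a real GraphQLResolveInfo carries much more, including fragments and variable values), a resolver can read the selection set to learn exactly which fields were requested and project only those in a single generated database query:

```javascript
// Hypothetical, heavily simplified stand-in for GraphQLResolveInfo.
const info = {
  fieldNodes: [
    {
      selectionSet: {
        selections: [
          { name: { value: "name" } },
          { name: { value: "address" } },
        ],
      },
    },
  ],
};

// Walk the selection set of the field being resolved and collect
// the requested field names.
function selectedFields(info) {
  return info.fieldNodes[0].selectionSet.selections.map((s) => s.name.value);
}

// A generated Cypher query could then project only these properties.
console.log(selectedFields(info)); // [ 'name', 'address' ]
```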
If you’re interested, I recently gave a talk at GraphQL Summit that goes into much more detail.
An open-source library that works with any JavaScript GraphQL implementation can conceivably power an entire ecosystem of low-code GraphQL tools. Collectively, these tools leverage the functionality of Neo4j GraphQL Library to help make it easier for you to build, test, and deploy GraphQL APIs backed by a real graph database.
For example, GraphQL Mesh uses Neo4j GraphQL Library to enable Neo4j as a data source for data federation. Don’t want to write the code necessary to build a GraphQL API for testing and development? The Neo4j GraphQL Toolbox is an open-source, low-code web UI that wraps Neo4j GraphQL Library. This way, it can generate a GraphQL API from an existing Neo4j database with a single click.
If building a GraphQL API backed by a native graph database sounds interesting or at all helpful for the problems you’re trying to solve as a developer, I would encourage you to give the Neo4j GraphQL Library a try. Also, the Neo4j GraphQL Library landing page is a good starting point for documentation, further examples, and comprehensive workshops.
I’ve also written a book Full Stack GraphQL Applications, published by Manning, that covers this topic in much more depth. My book covers handling authorization, working with the frontend application, and using cloud services like Auth0, Netlify, AWS Lambda, and Neo4j Aura to deploy a full-stack GraphQL application. In fact, I’ve built out the very business reviews application from this article as an example in the book! Thanks to Neo4j, this book is now available as a free download.
Last but not least, I will be presenting a live session entitled “Making Sense of Geospatial Data with Knowledge Graphs” during the NODES 2022 virtual conference on Wednesday, November 16, produced by Neo4j. Registration is free to all attendees.
Original article sourced at: https://www.smashingmagazine.com
Popoto.js is a JavaScript library built with D3.js, designed to create an interactive and customizable visual query builder for Neo4j graph databases.
The graph queries are translated into Cypher and run on the database. Popoto also helps to display and customize the results.
An application is composed of various components, each of which can be included independently anywhere in a web application. A component just needs to be bound to a container ID in an HTML page, and its content will be generated automatically.
A common example application contains the following components:
- Graph component: an interactive interface designed to build queries for non-technical users; the graph is made of selectable nodes connected to each other by links.
- Toolbar: a list of actions available in the graph container.
- Taxonomy container: the list of searchable labels in the database.
- Query viewer container: shows different representations of the query defined in the Graph component.
- Result container: displays the results matching the graph query.
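As a sketch of that binding (the container IDs and the start call here follow Popoto's getting-started example; treat them as assumptions and check the documentation for your version), a minimal page might look like:

```html
<!-- Each component is generated inside the element it is bound to by ID. -->
<section id="popoto-taxonomy"></section>
<section id="popoto-graph"></section>
<section id="popoto-cypher"></section>
<section id="popoto-results"></section>

<script>
  // After configuring the driver (see the driver setup below),
  // launch the UI with a label to start searching from:
  popoto.start("Person");
</script>
```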
For npm: npm install popoto. For Yarn: yarn add popoto.
Otherwise, download the latest release.
You can also load directly from unpkg or jsDelivr
Example:
<!-- Add default CSS reference -->
<link rel="stylesheet" href="https://unpkg.com/popoto/dist/popoto.min.css">
<!-- Or -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/popoto/dist/popoto.min.css">
<!-- Add Popoto script reference, will default to popoto.min.js -->
<script src="https://unpkg.com/popoto"></script>
<!-- Or -->
<script src="https://cdn.jsdelivr.net/npm/popoto/dist/popoto.min.js"></script>
For source version:
<!-- Add Popoto script reference -->
<script src="https://unpkg.com/popoto/dist/popoto.js"></script>
<!-- Or -->
<script src="https://cdn.jsdelivr.net/npm/popoto/dist/popoto.js"></script>
const driver = neo4j.driver(
"neo4j://dff437fa.databases.neo4j.io", // Unencrypted
//"neo4j+s://dff437fa.databases.neo4j.io", //Encrypted with Full Certificate
neo4j.auth.basic("popoto", "popotopassword"),
//{disableLosslessIntegers: true} // Enabling native numbers
);
Then set popoto.runner.DRIVER to your running driver instance:

popoto.runner.DRIVER = driver

You can also customize how sessions are created:

popoto.runner.createSession = function () {
  return popoto.runner.DRIVER.session({defaultAccessMode: "READ"})
};
See an explained example page source in Getting started.
Author: Nhogs
Source Code: https://github.com/Nhogs/popoto
License: GPL-3.0 license
RNeo4j is Neo4j's R driver. It allows you to read and write data from / to Neo4j directly from your R environment.
First and foremost, download Neo4j!
If you're on Windows, download the .exe and follow the instructions. You'll get a GUI where you simply press "Start" to start Neo4j.

If you're on OS X, you can download either the .dmg or the .tar.gz. The .dmg will give you a GUI where you simply press "Start" to start Neo4j. Otherwise, download the .tar.gz, unzip, navigate to the directory, and execute ./bin/neo4j start.

If you're on Linux, you have to use the .tar.gz. Download the .tar.gz, unzip, navigate to the directory, and execute ./bin/neo4j start.
You may also find neo4j in your distribution's package manager.
These dependencies are only required if you want to use the Bolt interface. They must be present at build time, and libneo4j-client must also be present at runtime.
- Windows: make sure the Rust toolchain and MinGW binaries are on your PATH (for example, C:\RTools\MinGW\bin and C:\MinGW\bin)
- OS X: brew install rust (or https://rustup.rs, but see the "Rust Path" section), brew install llvm, and brew install cleishm/neo4j/libneo4j-client
- Linux: install Rust (sudo apt-get install cargo or sudo pacman -S rust) and clang (sudo apt-get install clang libclang-dev or sudo pacman -S clang; your distribution may not call the package clang, it may be called llvm), then install libneo4j-client (sudo apt-get install libneo4j-client-dev)
By default, on *nix systems (such as Linux and OS X), rustup only sets the PATH in your shell. That means that if you try to build RNeo4j in a GUI application like RStudio, it may fail. To work around this issue, simply build RNeo4j in a terminal.
Newer versions of GCC require removing the -Werror from GCC_CFLAGS in configure.ac.
Run these commands in your shell:
git clone https://github.com/cleishm/libneo4j-client
cd libneo4j-client
./autogen.sh
./configure --disable-tools
sudo make install
See https://github.com/cleishm/libneo4j-client for more details
install.packages("RNeo4j")
devtools::install_github("nicolewhite/RNeo4j")
Go to the latest release and download the source code. You can then install with install.packages:
install.packages("/path/to/file.tar.gz", repos=NULL, type="source")
library(RNeo4j)
graph = startGraph("http://localhost:7474/db/data/")
If you have authentication enabled, pass your username and password.
graph = startGraph("http://localhost:7474/db/data/", username="neo4j", password="password")
nicole = createNode(graph, "Person", name="Nicole", age=24)
greta = createNode(graph, "Person", name="Greta", age=24)
kenny = createNode(graph, "Person", name="Kenny", age=27)
shannon = createNode(graph, "Person", name="Shannon", age=23)
r1 = createRel(greta, "LIKES", nicole, weight=7)
r2 = createRel(nicole, "LIKES", kenny, weight=1)
r3 = createRel(kenny, "LIKES", shannon, weight=3)
r4 = createRel(nicole, "LIKES", shannon, weight=5)
If you're returning tabular results, use cypher, which will give you a data.frame.
query = "
MATCH (nicole:Person)-[r:LIKES]->(p:Person)
WHERE nicole.name = 'Nicole'
RETURN nicole.name, r.weight, p.name
"
cypher(graph, query)
## nicole.name r.weight p.name
## 1 Nicole 5 Shannon
## 2 Nicole 1 Kenny
For anything more complicated, use cypherToList, which will give you a list.
query = "
MATCH (nicole:Person)-[:LIKES]->(p:Person)
WHERE nicole.name = 'Nicole'
RETURN nicole, COLLECT(p.name) AS friends
"
cypherToList(graph, query)
## [[1]]
## [[1]]$nicole
## < Node >
## Person
##
## $name
## [1] "Nicole"
##
## $age
## [1] 24
##
##
## [[1]]$friends
## [[1]]$friends[[1]]
## [1] "Shannon"
##
## [[1]]$friends[[2]]
## [1] "Kenny"
Both cypher and cypherToList accept parameters. These parameters can be passed individually or as a list.
query = "
MATCH (p1:Person)-[r:LIKES]->(p2:Person)
WHERE p1.name = {name1} AND p2.name = {name2}
RETURN p1.name, r.weight, p2.name
"
cypher(graph, query, name1="Nicole", name2="Shannon")
## p1.name r.weight p2.name
## 1 Nicole 5 Shannon
cypher(graph, query, list(name1="Nicole", name2="Shannon"))
## p1.name r.weight p2.name
## 1 Nicole 5 Shannon
p = shortestPath(greta, "LIKES", shannon, max_depth=4)
n = nodes(p)
sapply(n, "[[", "name")
## [1] "Greta" "Nicole" "Shannon"
p = shortestPath(greta, "LIKES", shannon, max_depth=4, cost_property="weight")
n = nodes(p)
sapply(n, "[[", "name")
## [1] "Greta" "Nicole" "Kenny" "Shannon"
p$weight
## [1] 11
library(igraph)
query = "
MATCH (n)-->(m)
RETURN n.name, m.name
"
edgelist = cypher(graph, query)
ig = graph.data.frame(edgelist, directed=F)
betweenness(ig)
## Nicole Greta Kenny Shannon
## 2 0 0 0
closeness(ig)
## Nicole Greta Kenny Shannon
## 0.3333333 0.2000000 0.2500000 0.2500000
igraph
plot(ig)
ggnet
library(network)
library(GGally)
net = network(edgelist)
ggnet(net, label.nodes=TRUE)
visNetwork
Read this blog post and check out this slide deck.
library(hflights)
hflights = hflights[sample(nrow(hflights), 1000), ]
row.names(hflights) = NULL
head(hflights)
## Year Month DayofMonth DayOfWeek DepTime ArrTime UniqueCarrier FlightNum
## 1 2011 1 15 6 927 1038 XE 2885
## 2 2011 10 10 1 2001 2322 XE 4243
## 3 2011 6 15 3 1853 2108 CO 670
## 4 2011 4 10 7 2100 102 CO 410
## 5 2011 1 25 2 739 1016 XE 3083
## 6 2011 9 13 2 1745 1841 CO 1204
## TailNum ActualElapsedTime AirTime ArrDelay DepDelay Origin Dest Distance
## 1 N34110 131 113 -10 -3 IAH COS 809
## 2 N13970 141 127 2 19 IAH CMH 986
## 3 N36207 255 231 15 -2 IAH SFO 1635
## 4 N76517 182 162 -18 5 IAH EWR 1400
## 5 N12922 157 128 0 -6 IAH MKE 984
## 6 N35271 56 34 -7 -5 IAH SAT 191
## TaxiIn TaxiOut Cancelled CancellationCode Diverted
## 1 6 12 0 0
## 2 4 10 0 0
## 3 5 19 0 0
## 4 7 13 0 0
## 5 4 25 0 0
## 6 3 19 0 0
addConstraint(graph, "Carrier", "name")
addConstraint(graph, "Airport", "name")
query = "
CREATE (flight:Flight {number: {FlightNum} })
SET flight.year = TOINT({Year}),
    flight.month = TOINT({Month}),
    flight.day = TOINT({DayofMonth})
MERGE (carrier:Carrier {name: {UniqueCarrier} })
CREATE (flight)-[:OPERATED_BY]->(carrier)
MERGE (origin:Airport {name: {Origin} })
MERGE (dest:Airport {name: {Dest} })
CREATE (flight)-[o:ORIGIN]->(origin)
CREATE (flight)-[d:DESTINATION]->(dest)
SET o.delay = TOINT({DepDelay}),
    o.taxi_time = TOINT({TaxiOut})
SET d.delay = TOINT({ArrDelay}),
    d.taxi_time = TOINT({TaxiIn})
"

tx = newTransaction(graph)

for(i in 1:nrow(hflights)) {
  row = hflights[i, ]
  appendCypher(tx, query,
               FlightNum=row$FlightNum,
               Year=row$Year,
               Month=row$Month,
               DayofMonth=row$DayofMonth,
               UniqueCarrier=row$UniqueCarrier,
               Origin=row$Origin,
               Dest=row$Dest,
               DepDelay=row$DepDelay,
               TaxiOut=row$TaxiOut,
               ArrDelay=row$ArrDelay,
               TaxiIn=row$TaxiIn)
}

commit(tx)
summary(graph)
## This To That
## 1 Flight OPERATED_BY Carrier
## 2 Flight ORIGIN Airport
## 3 Flight DESTINATION Airport
Error in curl::curl_fetch_memory(url, handle = handle) :
Couldn't connect to server
Neo4j probably isn't running. Make sure Neo4j is running first. It's also possible you have localhost resolution issues; try connecting to http://127.0.0.1:7474/db/data/ instead.
Error: client error: (401) Unauthorized
Neo.ClientError.Security.AuthorizationFailed
No authorization header supplied.
You have auth enabled on Neo4j and either didn't provide your username and password or they were invalid. You can pass a username and password to startGraph:
graph = startGraph("http://localhost:7474/db/data/", username="neo4j", password="password")
You can also disable auth by editing the following line in conf/neo4j-server.properties:
# Require (or disable the requirement of) auth to access Neo4j
dbms.security.auth_enabled=false
Check out the contributing doc if you'd like to contribute!
Author: Nicolewhite
Source Code: https://github.com/nicolewhite/Rneo4j
License: View license
Graph visualizations powered by vis.js with data from Neo4j.
Neovis.js can be installed via npm:
npm install --save neovis.js
For ease of use, Neovis.js can also be obtained from the Neo4jLabs CDN:
Most recent release
<script src="https://unpkg.com/neovis.js@2.0.2"></script>
Version without neo4j-driver dependency
<script src="https://unpkg.com/neovis.js@2.0.2/dist/neovis-without-dependencies.js"></script>
Let's go through the steps to reproduce this visualization:
Start with a blank Neo4j instance, or spin up a blank Neo4j Sandbox. We'll load the Game of Thrones dataset, run:
LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/mathbeveridge/asoiaf/master/data/asoiaf-all-edges.csv'
AS row
MERGE (src:Character {name: row.Source})
MERGE (tgt:Character {name: row.Target})
MERGE (src)-[r:INTERACTS]->(tgt)
ON CREATE SET r.weight = toInteger(row.weight)
We've pre-calculated PageRank and ran a community detection algorithm to assign community ids for each Character. Let's load those next:
LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/johnymontana/neovis.js/master/examples/data/got-centralities.csv'
AS row
MATCH (c:Character {name: row.name})
SET c.community = toInteger(row.community),
c.pagerank = toFloat(row.pagerank)
Our graph now consists of Character nodes connected by INTERACTS relationships. We can visualize the whole graph in Neo4j Browser by running:
MATCH p = (:Character)-[:INTERACTS]->(:Character)
RETURN p
We can see characters that are connected, and with the help of the force-directed layout we can begin to see clusters in the graph. However, we also want to visualize the centralities (PageRank) and community detection results that we imported. Specifically, we would like:

- Node size to be proportional to each character's pagerank score. This will allow us to quickly identify important nodes in the network.
- Node color to be determined by the community property. This will allow us to visualize clusters.
- Relationship thickness to be proportional to the weight property on the INTERACTS relationship.

Neovis.js, by combining the JavaScript driver for Neo4j and the vis.js visualization library, will allow us to build this visualization.
Create a new html file:
<!doctype html>
<html>
<head>
<title>Neovis.js Simple Example</title>
<style type="text/css">
html, body {
font: 16pt arial;
}
#viz {
width: 900px;
height: 700px;
border: 1px solid lightgray;
font: 22pt arial;
}
</style>
</head>
<body onload="draw()">
<div id="viz"></div>
</body>
</html>
We define some basic CSS to specify the boundaries of a div and then create a single div in the body. We also specify onload="draw()" so that the draw() function is called as soon as the body is loaded.
We need to pull in neovis.js:
<script src="https://unpkg.com/neovis.js@2.0.2"></script>
And define our draw() function:
<script type="text/javascript">
let neoViz;
function draw() {
const config = {
containerId: "viz",
neo4j: {
serverUrl: "bolt://localhost:7687",
serverUser: "neo4j",
serverPassword: "sorts-swims-burglaries",
},
labels: {
Character: {
label: "name",
value: "pagerank",
group: "community",
[NeoVis.NEOVIS_ADVANCED_CONFIG]: {
function: {
title: (node) => neoViz.nodeToHtml(node, [
"name",
"pagerank"
])
}
}
}
},
relationships: {
INTERACTS: {
value: "weight"
}
},
initialCypher: "MATCH (n)-[r:INTERACTS]->(m) RETURN *"
};
neoViz = new NeoVis.default(config);
neoViz.render();
}
</script>
This function creates a config
object that specifies how to connect to Neo4j, what data to fetch, and how to configure the visualization.
See simple-example.html for the full code.
You can also use it as a module, but this requires a way to import CSS files:
import NeoVis from 'neovis.js';
Or you can import the version with bundled dependencies:
import NeoVis from 'neovis.js/dist/neovis.js';
This project uses git submodules to include the dependencies for neo4j-driver and vis.js, and webpack to build a bundle that includes all project dependencies. webpack.config.js contains the configuration for webpack. After cloning the repo:
npm install
npm run build
npm run typedoc
will build dist/neovis.js and dist/neovis-without-dependencies.js
Download Details:
Author: neo4j-contrib
Source Code: https://github.com/neo4j-contrib/neovis.js
License: Apache-2.0 license
Py2neo
Py2neo is a client library and toolkit for working with Neo4j from within Python applications. The library supports both Bolt and HTTP and provides a high level API, an OGM, admin tools, a Cypher lexer for Pygments, and many other bells and whistles.
Command line tooling has been removed from the library in py2neo 2021.2. This functionality now exists in the separate ipy2neo project.
As of version 2021.1, py2neo contains full support for routing, as exposed by a Neo4j cluster. This can be enabled using a neo4j://... URI or by passing routing=True to a Graph constructor.
To install the latest release of py2neo, simply use:
$ pip install py2neo
The following versions of Python and Neo4j (all editions) are supported:
Neo4j | Python 3.5+ | Python 2.7 |
---|---|---|
4.4 | ||
4.3 | ||
4.2 | ||
4.1 | ||
4.0 | ||
3.5 | ||
3.4 |
Note that py2neo is developed and tested under Linux using standard CPython distributions. While other operating systems and Python distributions may work, support for these is not available.
Running a query against a local database is straightforward:
>>> from py2neo import Graph
>>> graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))
>>> graph.run("UNWIND range(1, 3) AS n RETURN n, n * n as n_sq")
n | n_sq
-----|------
1 | 1
2 | 4
3 | 9
As of 2020, py2neo has switched to Calendar Versioning, using a scheme of YYYY.N.M. Here, N is an incrementing zero-based number for each year, and M is a revision within that version (also zero-based).
No compatibility guarantees are given between versions but, as a general rule, a change in M should require little-to-no work within client applications, whereas a change in N may require some work. A change to the year is likely to require a more significant amount of work to upgrade.
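These rules can be sketched in a few lines of Python; the helper names here are illustrative only and not part of py2neo.

```python
def parse_calver(version):
    """Split a py2neo-style CalVer string 'YYYY.N[.M]' into (year, n, m)."""
    parts = [int(p) for p in version.split(".")]
    year, n = parts[0], parts[1]
    m = parts[2] if len(parts) > 2 else 0
    return year, n, m

def upgrade_effort(installed, latest):
    """Rough upgrade effort implied by the compatibility rules above."""
    y1, n1, _ = parse_calver(installed)
    y2, n2, _ = parse_calver(latest)
    if y1 != y2:
        return "significant"   # year changed
    if n1 != n2:
        return "some work"     # N changed within the same year
    return "little to none"    # only the revision M changed

print(upgrade_effort("2021.1.5", "2021.2.0"))  # some work
print(upgrade_effort("2020.1.0", "2021.0.0"))  # significant
```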
Note that py2neo is developed on a rolling basis, so patches are not made to old versions. Users will instead need to install the latest release to adopt bug fixes.
For more information, read the handbook.
Author: py2neo-org
Source Code: https://github.com/py2neo-org/py2neo
License: Apache-2.0 License
Accessing Neo4j Data with REST
This guide walks you through the process of creating an application that accesses graph-based data through a hypermedia-based RESTful front end.
You will build a Spring application that lets you create and retrieve Person objects stored in a Neo4j NoSQL database by using Spring Data REST. Spring Data REST takes the features of Spring HATEOAS and Spring Data Neo4j and automatically combines them together.
Spring Data REST also supports Spring Data JPA, Spring Data Gemfire, and Spring Data MongoDB as backend data stores, but this guide deals with Neo4j.
Like most Spring Getting Started guides, you can start from scratch and complete each step or you can bypass basic setup steps that are already familiar to you. Either way, you end up with working code.
To start from scratch, move on to [scratch].
To skip the basics, do the following:
git clone https://github.com/spring-guides/gs-accessing-neo4j-data-rest.git
cd gs-accessing-neo4j-data-rest/initial
When you finish, you can check your results against the code in gs-accessing-neo4j-data-rest/complete.
Before you can build this application, you need to set up a Neo4j server.
Neo4j has an open source server that you can install for free.
On a Mac with Homebrew installed, you can type the following in a terminal window:
$ brew install neo4j
For other options, see https://neo4j.com/download/community-edition/
Once you have installed Neo4j, you can launch it with its default settings by running the following command:
$ neo4j start
You should see a message similar to the following:
Starting Neo4j.
Started neo4j (pid 96416). By default, it is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /usr/local/Cellar/neo4j/3.0.6/libexec/logs/neo4j.log for current status.
By default, Neo4j has a username and password of neo4j and neo4j. However, it requires that the default account password be changed. To do so, run the following command:
$ curl -v -u neo4j:neo4j -X POST localhost:7474/user/neo4j/password -H "Content-type:application/json" -d "{\"password\":\"secret\"}"
This changes the password from neo4j to secret (something NOT to do in production!). With that completed, you should be ready to run this guide.
You can use this pre-initialized project and click Generate to download a ZIP file. This project is configured to fit the examples in this tutorial.
To manually initialize the project:
If your IDE has the Spring Initializr integration, you can complete this process from your IDE.

You can also fork the project from Github and open it in your IDE or other editor.
Neo4j Community Edition requires credentials to access it. You can configure the credentials by setting properties in src/main/resources/application.properties, as follows:
spring.neo4j.uri=bolt://localhost:7687
spring.data.neo4j.username=neo4j
spring.data.neo4j.password=secret
This includes the default username (neo4j) and the newly set password (secret) that you set earlier.

Do NOT store real credentials in your source repository. Instead, configure them in your runtime by using Spring Boot’s property overrides.
You need to create a new domain object to represent a person, as the following example (in src/main/java/com/example/accessingneo4jdatarest/Person.java) shows:
package com.example.accessingneo4jdatarest;
import org.springframework.data.neo4j.core.schema.Id;
import org.springframework.data.neo4j.core.schema.Node;
import org.springframework.data.neo4j.core.schema.GeneratedValue;
@Node
public class Person {
@Id @GeneratedValue private Long id;
private String firstName;
private String lastName;
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public String getLastName() {
return lastName;
}
public void setLastName(String lastName) {
this.lastName = lastName;
}
}
The Person object has a first name and a last name. There is also an id field that is configured to be automatically generated, so you need not set it yourself.
Create a Person Repository

Next, you need to create a simple repository, as the following example (in src/main/java/com/example/accessingneo4jdatarest/PersonRepository.java) shows:
package com.example.accessingneo4jdatarest;
import java.util.List;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
@RepositoryRestResource(collectionResourceRel = "people", path = "people")
public interface PersonRepository extends PagingAndSortingRepository<Person, Long> {
List<Person> findByLastName(@Param("name") String name);
}
This repository is an interface and lets you perform various operations that involve Person objects. It gets these operations by extending the PagingAndSortingRepository interface defined in Spring Data Commons.
At runtime, Spring Data REST automatically creates an implementation of this interface. Then it uses the @RepositoryRestResource annotation to direct Spring MVC to create RESTful endpoints at /people.
@RepositoryRestResource is not required for a repository to be exported. It is used only to change the export details, such as using /people instead of the default value of /persons.
Here you have also defined a custom query to retrieve a list of Person objects based on the lastName value. You can see how to invoke it later in this guide.
The Spring Initializr creates an application class when you use it to create a project. You can find it in src/main/java/com/example/accessingneo4jdatarest/Application.java. Note that the Spring Initializr concatenates (and properly changes the case of) the package name and adds it to Application to create the application class name. In this case, we get AccessingNeo4jDataRestApplication, as the following listing shows:
package com.example.accessingneo4jdatarest;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.neo4j.repository.config.EnableNeo4jRepositories;
import org.springframework.transaction.annotation.EnableTransactionManagement;
@EnableTransactionManagement
@EnableNeo4jRepositories
@SpringBootApplication
public class AccessingNeo4jDataRestApplication {
public static void main(String[] args) {
SpringApplication.run(AccessingNeo4jDataRestApplication.class, args);
}
}
You need not make any changes to this application class for this example.

@SpringBootApplication is a convenience annotation that adds all of the following:

- @Configuration: Tags the class as a source of bean definitions for the application context.
- @EnableAutoConfiguration: Tells Spring Boot to start adding beans based on classpath settings, other beans, and various property settings. For example, if spring-webmvc is on the classpath, this annotation flags the application as a web application and activates key behaviors, such as setting up a DispatcherServlet.
- @ComponentScan: Tells Spring to look for other components, configurations, and services in the com/example package, letting it find the controllers.

The main() method uses Spring Boot’s SpringApplication.run() method to launch an application. Did you notice that there was not a single line of XML? There is no web.xml file, either. This web application is 100% pure Java and you did not have to deal with configuring any plumbing or infrastructure.
The @EnableNeo4jRepositories annotation activates Spring Data Neo4j. Spring Data Neo4j creates a concrete implementation of the PersonRepository and configures it to talk to an embedded Neo4j database by using the Cypher query language.
You can run the application from the command line with Gradle or Maven. You can also build a single executable JAR file that contains all the necessary dependencies, classes, and resources and run that. Building an executable jar makes it easy to ship, version, and deploy the service as an application throughout the development lifecycle, across different environments, and so forth.
If you use Gradle, you can run the application by using ./gradlew bootRun. Alternatively, you can build the JAR file by using ./gradlew build and then run the JAR file, as follows:
java -jar build/libs/gs-accessing-neo4j-data-rest-0.1.0.jar
If you use Maven, you can run the application by using ./mvnw spring-boot:run. Alternatively, you can build the JAR file with ./mvnw clean package and then run the JAR file, as follows:
java -jar target/gs-accessing-neo4j-data-rest-0.1.0.jar
The steps described here create a runnable JAR. You can also build a classic WAR file.
Logging output is displayed. The service should be up and running within a few seconds.
Now that the application is running, you can test it. You can use any REST client you wish. The following examples use the *nix tool called curl.
First, you want to see the top level service. The following example (with output) shows how to do so:
$ curl http://localhost:8080
{
"_links" : {
"people" : {
"href" : "http://localhost:8080/people{?page,size,sort}",
"templated" : true
}
}
}
Here you get a first glimpse of what this server has to offer. There is a people link located at http://localhost:8080/people. It has some options, such as ?page, ?size, and ?sort.
Spring Data REST uses the HAL format for JSON output. It is flexible and offers a convenient way to supply links adjacent to the data that is served.
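The {?page,size,sort} suffix in these links is an RFC 6570 URI template. A minimal sketch of form-style query expansion shows how a client turns such a template into a concrete URL; this helper is hypothetical, and real clients would normally use a URI-template library.

```python
import re

def expand(template, **params):
    """Expand a form-style query template like '.../people{?page,size,sort}'.

    Only the supplied parameters appear in the result; with no matching
    parameters, the template expression expands to nothing.
    """
    def repl(match):
        names = match.group(1).split(",")
        pairs = [(n, params[n]) for n in names if n in params]
        if not pairs:
            return ""
        return "?" + "&".join(f"{n}={v}" for n, v in pairs)
    return re.sub(r"\{\?([^}]*)\}", repl, template)

print(expand("http://localhost:8080/people{?page,size,sort}", page=0, size=5))
# http://localhost:8080/people?page=0&size=5
```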
$ curl http://localhost:8080/people
{
"_links" : {
"self" : {
"href" : "http://localhost:8080/people{?page,size,sort}",
"templated" : true
},
"search" : {
"href" : "http://localhost:8080/people/search"
}
},
"page" : {
"size" : 20,
"totalElements" : 0,
"totalPages" : 0,
"number" : 0
}
}
There are currently no elements and, consequently, no pages, so it is time to create a new Person! To do so, run the following command (shown with its output):
$ curl -i -X POST -H "Content-Type:application/json" -d '{ "firstName" : "Frodo", "lastName" : "Baggins" }' http://localhost:8080/people
HTTP/1.1 201 Created
Server: Apache-Coyote/1.1
Location: http://localhost:8080/people/0
Content-Length: 0
Date: Wed, 26 Feb 2014 20:26:55 GMT
- -i ensures you can see the response message, including the headers. The URI of the newly created Person is shown.
- -X POST signals that this is a POST request, used to create a new entry.
- -H "Content-Type:application/json" sets the content type so the application knows the payload contains a JSON object.
- -d '{ "firstName" : "Frodo", "lastName" : "Baggins" }' is the data being sent.

Notice how the previous POST operation includes a Location header. This contains the URI of the newly created resource. Spring Data REST also has two methods (RepositoryRestConfiguration.setReturnBodyOnCreate(…) and setReturnBodyOnUpdate(…)) that you can use to configure the framework to immediately return the representation of the resource that was just created.
From this you can query for all people by running the following command (shown with its output):
$ curl http://localhost:8080/people
{
"_links" : {
"self" : {
"href" : "http://localhost:8080/people{?page,size,sort}",
"templated" : true
},
"search" : {
"href" : "http://localhost:8080/people/search"
}
},
"_embedded" : {
"people" : [ {
"firstName" : "Frodo",
"lastName" : "Baggins",
"_links" : {
"self" : {
"href" : "http://localhost:8080/people/0"
}
}
} ]
},
"page" : {
"size" : 20,
"totalElements" : 1,
"totalPages" : 1,
"number" : 0
}
}
The people object contains a list with Frodo. Notice how it includes a self link. Spring Data REST also uses the Evo Inflector library to pluralize the name of the entity for groupings.
You can query directly for the individual record by running the following command (shown with its output):
$ curl http://localhost:8080/people/0
{
"firstName" : "Frodo",
"lastName" : "Baggins",
"_links" : {
"self" : {
"href" : "http://localhost:8080/people/0"
}
}
}
This might appear to be purely web based but, behind the scenes, there is an embedded Neo4j graph database. In production, you would probably connect to a standalone Neo4j server.
In this guide, there is only one domain object. With a more complex system, where domain objects are related to each other, Spring Data REST renders additional links to help navigate to connected records.
You can find all the custom queries by running the following command (shown with its output):
$ curl http://localhost:8080/people/search
{
"_links" : {
"findByLastName" : {
"href" : "http://localhost:8080/people/search/findByLastName{?name}",
"templated" : true
}
}
}
You can see the URL for the query, including the HTTP query parameter, name. Note that this matches the @Param("name") annotation embedded in the interface.
To use the findByLastName query, run the following command (shown with its output):
$ curl http://localhost:8080/people/search/findByLastName?name=Baggins
{
"_embedded" : {
"people" : [ {
"firstName" : "Frodo",
"lastName" : "Baggins",
"_links" : {
"self" : {
"href" : "http://localhost:8080/people/0"
},
"person" : {
"href" : "http://localhost:8080/people/0"
}
}
} ]
},
"_links" : {
"self" : {
"href" : "http://localhost:8080/people/search/findByLastName?name=Baggins"
}
}
}
Because you defined it to return List<Person> in the code, it returns all of the results. If you had defined it to return only Person, it would pick one of the Person objects to return. Since this can be unpredictable, you probably do not want to do that for queries that can return multiple entries.
You can also issue PUT, PATCH, and DELETE REST calls to replace, update, or delete existing records. The following example (shown with its output) shows a PUT call:
$ curl -X PUT -H "Content-Type:application/json" -d '{ "firstName": "Bilbo", "lastName": "Baggins" }' http://localhost:8080/people/0
$ curl http://localhost:8080/people/0
{
"firstName" : "Bilbo",
"lastName" : "Baggins",
"_links" : {
"self" : {
"href" : "http://localhost:8080/people/0"
}
}
}
The following example (shown with its output) shows a PATCH call:
$ curl -X PATCH -H "Content-Type:application/json" -d '{ "firstName": "Bilbo Jr." }' http://localhost:8080/people/0
$ curl http://localhost:8080/people/0
{
"firstName" : "Bilbo Jr.",
"lastName" : "Baggins",
"_links" : {
"self" : {
"href" : "http://localhost:8080/people/0"
}
}
}
PUT replaces an entire record. Fields that are not supplied are replaced with null. PATCH can be used to update a subset of items.
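The difference between the two verbs is easy to see in miniature. This is a sketch of the semantics only, not Spring code; the helper names are hypothetical.

```python
FIELDS = ("firstName", "lastName")

def apply_put(body):
    # PUT replaces the entire record: omitted fields become null (None)
    return {f: body.get(f) for f in FIELDS}

def apply_patch(record, body):
    # PATCH merges: only the supplied fields change
    return {**record, **body}

record = {"firstName": "Bilbo", "lastName": "Baggins"}
print(apply_put({"firstName": "Frodo"}))            # {'firstName': 'Frodo', 'lastName': None}
print(apply_patch(record, {"firstName": "Frodo"}))  # {'firstName': 'Frodo', 'lastName': 'Baggins'}
```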
You can also delete records, as the following example (shown with its output) shows:
$ curl -X DELETE http://localhost:8080/people/0
$ curl http://localhost:8080/people
{
"_links" : {
"self" : {
"href" : "http://localhost:8080/people{?page,size,sort}",
"templated" : true
},
"search" : {
"href" : "http://localhost:8080/people/search"
}
},
"page" : {
"size" : 20,
"totalElements" : 0,
"totalPages" : 0,
"number" : 0
}
}
A convenient aspect of this hypermedia-driven interface is how you can discover all the RESTful endpoints by using curl (or whatever REST client you like). You need not exchange a formal contract or interface document with your customers.
Congratulations! You have just developed an application with a hypermedia-based RESTful front end and a Neo4j-based back end.
Link: https://spring.io/guides/gs/accessing-neo4j-data-rest/
graphdatascience is a Python client for operating and working with the Neo4j Graph Data Science (GDS) library. It enables users to write pure Python code to project graphs, run algorithms, and define and use machine learning pipelines in GDS.
The API is designed to mimic the GDS Cypher procedure API in Python code. It abstracts the necessary operations of the Neo4j Python driver to offer a simpler surface.
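To see what "mimicking the GDS Cypher procedure API" means mechanically, here is a toy proxy that turns a chained attribute access such as gds.pageRank.stream into the corresponding CALL string. This is purely illustrative of the idea and is not how graphdatascience is actually implemented.

```python
class ProcProxy:
    """Toy namespace proxy: each attribute access extends the procedure name."""

    def __init__(self, path):
        self._path = path

    def __getattr__(self, name):
        # gds.pageRank.stream builds the path ["gds", "pageRank", "stream"]
        return ProcProxy(self._path + [name])

    def to_cypher(self, graph_name, **config):
        # Render the chained path as a Cypher procedure call
        conf = ", ".join(f"{k}: {v!r}" for k, v in config.items())
        return f"CALL {'.'.join(self._path)}({graph_name!r}, {{{conf}}})"

gds = ProcProxy(["gds"])
print(gds.pageRank.stream.to_cypher("graph", maxIterations=20))
# CALL gds.pageRank.stream('graph', {maxIterations: 20})
```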
Please leave any feedback as issues on the source repository. Happy coding!
This is a work in progress and some GDS features are known to be missing or not working properly (see Known limitations below). Further, this library targets GDS versions 2.0+ (not yet released) and as such may not work with older versions.
To install the latest deployed version of graphdatascience, simply run:
pip install graphdatascience
What follows is a high-level description of some of the operations supported by graphdatascience. For extensive documentation of all capabilities, please refer to the Python client chapter of the GDS Manual.

Extensive end-to-end examples in ready-to-run Jupyter notebooks can be found in the examples source directory.
The library wraps the Neo4j Python driver with a GraphDataScience object through which most calls to GDS will be made.
from graphdatascience import GraphDataScience
# Use Neo4j URI and credentials according to your setup
gds = GraphDataScience("bolt://localhost:7687", auth=None)
There's also a method GraphDataScience.from_neo4j_driver for instantiating the gds object directly from a Neo4j driver object.
If we don't want to use the default database of our DBMS, we can specify which one to use:
gds.set_database("my-db")
If you are connecting the client to an AuraDS instance, you can have recommended non-default Python driver configuration settings applied automatically. To achieve this, set the constructor argument aura_ds=True:
from graphdatascience import GraphDataScience
# Configures the driver with AuraDS-recommended settings
gds = GraphDataScience("neo4j+s://my-aura-ds.databases.neo4j.io:7687", auth=("neo4j", "my-password"), aura_ds=True)
Supposing that we have some graph data in our Neo4j database, we can project the graph into memory.
# Optionally we can estimate memory of the operation first
res = gds.graph.project.estimate("*", "*")
assert res["requiredMemory"] < 1e12
G = gds.graph.project("graph", "*", "*")
The G that is returned here is a Graph which on the client side represents the projection on the server side.

The analogous calls gds.graph.project.cypher{,.estimate} for Cypher-based projection are also supported.
We can take a projected graph, represented to us by a Graph object named G, and run algorithms on it.
# Optionally we can estimate memory of the operation first (if the algo supports it)
res = gds.pageRank.mutate.estimate(G, tolerance=0.5, mutateProperty="pagerank")
assert res["requiredMemory"] < 1e12
res = gds.pageRank.mutate(G, tolerance=0.5, mutateProperty="pagerank")
assert res["nodePropertiesWritten"] == G.node_count()
These calls take one positional argument and a number of keyword arguments, depending on the algorithm. The first (positional) argument is a Graph, and the keyword arguments map directly to the algorithm's configuration map.
The other algorithm execution modes - stats, stream and write - are also supported via analogous calls. The stream mode call returns a list of dictionaries (with contents depending on the algorithm of course) - which we can think of as a table - as is also the case when using the Neo4j Python driver directly. The mutate, stats and write mode calls however return a dictionary with metadata about the algorithm execution.
The methods for doing topological link prediction are a bit different. Just like in the GDS procedure API they do not take a graph as an argument, but rather two node references as positional arguments. And they simply return the similarity score of the prediction just made as a float - not a list of dictionaries.
Some of the methods for computing similarity are also different. These functions take two positional List[float] vectors as input and return a similarity score. The procedures that don't take a graph name as input (but only a configuration map) in the GDS API are represented by methods that take only keyword arguments, mapping to the keys of their GDS configuration map.
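For intuition, this is the kind of score such a similarity function returns. The sketch below is a pure-Python cosine similarity, not the GDS call itself.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```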
In this library, graphs projected onto server-side memory are represented by Graph objects. There are convenience methods on the Graph object that let us extract information about our projected graph. Some examples are (where G is a Graph):
# Get the graph's node count
n = G.node_count()
# Get a list of all relationship properties present on
# relationships of the type "myRelType"
rel_props = G.relationship_properties("myRelType")
# Drop the projection represented by G
G.drop()
In GDS, you can train machine learning models. When doing this with graphdatascience, you get a model object returned directly in the client. The model object allows for convenient access to details about the model via Python methods. It also offers the ability to directly compute predictions using the appropriate GDS procedure for that model. This includes support for models trained using pipelines (for Link Prediction and Node Classification) as well as GraphSAGE models.
There's native support for Link prediction pipelines and Node classification pipelines. Apart from the call to create a pipeline, the GDS native pipelines calls are represented by methods on pipeline Python objects. Additionally to the standard GDS calls, there are several methods to query the pipeline for information about it.
Below is a minimal example for node classification (supposing we have a graph G with a property "myClass"):
pipe = gds.alpha.ml.pipeline.nodeClassification.create("myPipe")
assert pipe.type() == "Node classification training pipeline"
pipe.addNodeProperty("degree", mutateProperty="rank")
pipe.selectFeatures("rank")
steps = pipe.feature_properties()
assert len(steps) == 1
assert steps[0]["feature"] == "rank"
trained_pipe = pipe.train(G, modelName="myModel", targetProperty="myClass", metrics=["ACCURACY"])
assert trained_pipe.metrics()["ACCURACY"]["test"] > 0
res = trained_pipe.predict_stream(G)
assert len(res) == G.node_count()
Link prediction works the same way, just with different method names for calls specific to that pipeline. Please see the GDS documentation for more on the pipelines' procedure APIs.
Assuming we have a graph G with node property x, we can do the following:
model = gds.beta.graphSage.train(G, modelName="myModel", featureProperties=["x"])
assert len(model.metrics()["epochLosses"]) == model.metrics()["ranEpochs"]
res = model.predict_stream(G)
assert len(res) == G.node_count()
Note that with GraphSAGE we call the train method directly and supply all training configuration.
All procedures from the GDS Graph catalog are supported with graphdatascience. Some examples are (where G is a Graph):
res = gds.graph.list()
assert len(res) == 1 # Exactly one graph is projected
res = gds.graph.streamNodeProperties(G, "rank")
assert len(res) == G.node_count()
Further, there's a new call named gds.graph.get (graphdatascience only) which takes a name as input and returns a Graph object if a graph projection of that name exists in the user's graph catalog. The idea is to have a way of creating Graph objects for already projected graphs, without having to do a new projection.
All procedures from the GDS Model catalog are supported with graphdatascience. Some examples are (where model is a machine learning model object):
res = gds.beta.model.list()
assert len(res) == 1 # Exactly one model is loaded
res = gds.beta.model.drop(model)
assert res["modelInfo"]["modelName"] == model.name()
Further, there's a new call named gds.model.get (graphdatascience only) which takes a model name as input and returns a model object if a model of that name exists in the user's model catalog. The idea is to have a way of creating model objects for already loaded models, without having to create them again.
When calling path finding or topological link prediction algorithms, one has to provide specific nodes as input arguments. When using the GDS procedure API directly to call such algorithms, Cypher MATCH statements are typically used to find valid representations of the input nodes of interest; see e.g. this example in the GDS docs. To simplify this, graphdatascience provides a utility function, gds.find_node_id, for finding nodes without using Cypher.
Below is an example of how this can be done (supposing G is a projected Graph with City nodes having name properties):
# gds.find_node_id takes a list of labels and a dictionary of
# property key-value pairs
source_id = gds.find_node_id(["City"], {"name": "New York"})
target_id = gds.find_node_id(["City"], {"name": "Philadelphia"})
res = gds.shortestPath.dijkstra.stream(G, sourceNode=source_id, targetNode=target_id)
assert res[0]["totalCost"] == 100
The nodes found by gds.find_node_id are those that have all the labels specified and fully match all the given property key-value pairs. Note that exactly one node must be matched per method call.
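Conceptually, gds.find_node_id is equivalent to running a Cypher MATCH on all the given labels and property values and returning the internal node id. The helper below is a purely illustrative sketch of that translation (it is hypothetical and not part of the library); it only builds the query string, so it runs without a database:

```python
def build_match_query(labels, properties):
    """Build the kind of Cypher MATCH query that gds.find_node_id
    conceptually issues: match on all labels and all property
    key-value pairs, returning the internal node id."""
    label_str = "".join(f":{label}" for label in labels)
    prop_str = ", ".join(f"{key}: ${key}" for key in properties)
    return f"MATCH (n{label_str} {{{prop_str}}}) RETURN id(n) AS id"

query = build_match_query(["City"], {"name": "New York"})
print(query)  # MATCH (n:City {name: $name}) RETURN id(n) AS id
```

The property values themselves would be passed as query parameters rather than interpolated into the string, which is also how the client avoids Cypher injection.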
For more advanced filtering, we recommend matching via Cypher's MATCH.
Operations known to not yet work with graphdatascience:
graphdatascience is licensed under the Apache Software License version 2.0. All content is copyright © Neo4j Sweden AB.
This work has been inspired by the great work done in the following libraries:
Author: neo4j
Source Code: https://github.com/neo4j/graph-data-science-client
License: Apache-2.0 License
In this talk, William Lyon discusses the Neo4j GraphQL Library and how to get started with it, as well as issues that arise when building GraphQL APIs. The video covers how to translate business requirements to a graph data model and how to design our API using GraphQL type definitions, creating a CRUD GraphQL API using the Neo4j GraphQL Library, adding custom logic using the Cypher schema directive, and securing our API using authorization rules.
📚📚📚
Full Stack GraphQL Applications | http://mng.bz/NxO2
To save 40% on this book, use discount code: watchtlyon40
📚📚📚
About the author:
William Lyon is a software developer at Neo4j, working on integrations with other technologies and helping users build applications with Neo4j. He is the creator and maintainer of neo4j-graphql.js, a JavaScript library for creating GraphQL APIs, and is a contributor to GRANDstack.io. He serves as Neo4j’s representative on the GraphQL Foundation.
About the book:
Full Stack GraphQL Applications teaches you to leverage the power of GraphQL to create modern APIs that streamline data loads by allowing client applications to selectively fetch only the data required. GRANDstack.io contributor and GraphQL Foundation member William Lyon teaches you everything you need to know to design, deploy, and maintain a GraphQL API from scratch and create graph-aware fullstack web applications. In this project-driven book, you’ll build a complete business reviews application using the cutting-edge GRANDstack, learning how the different parts fit together. Chapter-by-chapter, you’ll master creating a GraphQL service with Apollo Server, modelling a GraphQL API with GraphQL type definitions, installing the Neo4j Database on different platforms, and more. By the time you’re done, you’ll be able to deploy all of the components of a serverless fullstack application in a secure and cost-effective way that takes full advantage of GraphQL’s performance capabilities. Along the way, you’ll also get tips for applying these techniques to other stacks.
#neo4j #graphql #api
This repository contains the official Neo4j driver for Python. Each driver release (from 4.0 upwards) is built specifically to work with a corresponding Neo4j release, i.e., the one with the same major.minor version number. These drivers will also be compatible with the previous Neo4j release, although new server features will not be available.
Python 2.7 support has been dropped as of the Neo4j 4.0 release.
To install the latest stable version, use:
pip install neo4j
from neo4j import GraphDatabase
driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))
def add_friend(tx, name, friend_name):
tx.run("MERGE (a:Person {name: $name}) "
"MERGE (a)-[:KNOWS]->(friend:Person {name: $friend_name})",
name=name, friend_name=friend_name)
def print_friends(tx, name):
for record in tx.run("MATCH (a:Person)-[:KNOWS]->(friend) WHERE a.name = $name "
"RETURN friend.name ORDER BY friend.name", name=name):
print(record["friend.name"])
with driver.session() as session:
session.write_transaction(add_friend, "Arthur", "Guinevere")
session.write_transaction(add_friend, "Arthur", "Lancelot")
session.write_transaction(add_friend, "Arthur", "Merlin")
session.read_transaction(print_friends, "Arthur")
driver.close()
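Because the transaction functions above receive the transaction object as their first argument, they can be exercised without a running database by passing in a stub. The FakeTx class below is a hypothetical illustration (not part of the driver), with add_friend repeated from the example above for self-containment:

```python
class FakeTx:
    """Minimal stand-in for a Neo4j transaction: records each query
    and its parameters instead of sending them to a server."""
    def __init__(self):
        self.calls = []

    def run(self, query, **params):
        self.calls.append((query, params))
        return []

def add_friend(tx, name, friend_name):
    tx.run("MERGE (a:Person {name: $name}) "
           "MERGE (a)-[:KNOWS]->(friend:Person {name: $friend_name})",
           name=name, friend_name=friend_name)

tx = FakeTx()
add_friend(tx, "Arthur", "Merlin")
query, params = tx.calls[0]
assert "MERGE" in query
assert params == {"name": "Arthur", "friend_name": "Merlin"}
```

This kind of stub is only useful for checking that the right Cypher and parameters are produced; integration behavior still needs a real database.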
See https://neo4j.com/docs/migration-guide/4.0/upgrade-driver/#upgrade-driver-breakingchanges for breaking changes.
See https://neo4j.com/docs/driver-manual/current/client-applications/#driver-connection-uris for changes in default security settings between 3.x and 4.x.
Using the Python Driver 4.x and connecting to Neo4j 3.5 with default connection settings for Neo4j 3.5.
# the preferred form
driver = GraphDatabase.driver("neo4j+ssc://localhost:7687", auth=("neo4j", "password"))
# is equivalent to
driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"), encrypted=True, trust=False)
Using the Python Driver 1.7 and connecting to Neo4j 4.x with default connection settings for Neo4j 4.x.
driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"), encrypted=False)
Download Details:
Author: neo4j
Source Code: https://github.com/neo4j/neo4j-python-driver
License: Apache-2.0 License
In this article, we learn how to combine Neo4j, GraphQL, and React to develop a graph discovery engine.
npx create-grandstack-app myNewApp
This project is a starter for building a GRANDstack (GraphQL, React, Apollo, Neo4j Database) application. There are two components to the starter, the web frontend application (in React and Angular flavors) and the API app (GraphQL server).
The starter represents a business reviews dashboard. You will need to adjust the GraphQL schema, the seed data, database index creation, and the UI components for your use case.
Hands On With The GRANDstack Starter Video
The easiest way to get started with the GRANDstack Starter is to create a Neo4j Sandbox instance and use the create-grandstack-app
command line tool.
(If you have a running Neo4j database on localhost via Neo4j Desktop or a Neo4j server installation, change the password in api/.env
)
Neo4j Sandbox allows you to create a free hosted Neo4j instance private to you that can be used for development.
After signing in to Neo4j Sandbox, click the + New Project
button and select the "Blank Sandbox" option. In the next step we'll use the connection credentials from the "Connection details" tab to connect our GraphQL API to this Neo4j instance.
If you would instead like to use Neo4j Desktop, the process is almost the same, with a minor detour. Install Neo4j Desktop for your chosen OS and then make a new blank graph for your project. It will require you to put in a username and password; remember those.
Next, open the Manage screen from the three-dot menu and install the APOC plugin (the green button at the top of the list).
After that, you can return to setting up your app with the credentials from the prior steps.
create-grandstack-app CLI
npx create-grandstack-app myNewApp
or with Yarn
yarn create grandstack-app myNewApp
This will create a new directory myNewApp, download the latest release of the GRANDstack Starter, install dependencies, and prompt for your Neo4j connection credentials to connect to the GraphQL API.
Make sure your application is running locally with npm start or yarn start, open another terminal, and run
npm run seedDb
or with Yarn
yarn run seedDb
The GRANDstack Starter is a monorepo that includes a GraphQL API application and client web applications for React (default) and Angular for a business reviews dashboard.
/ - Project Root
The root directory contains some global configuration and scripts:
npm run start
and npm run build
/api
This directory contains the GraphQL API application using Apollo Server and the Neo4j GraphQL Library.
.env:
# Use this file to set environment variables with credentials and configuration options
# This file is provided as an example and should be replaced with your own values
# You probably don't want to check this into version control!
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=letmein
# Uncomment this line to enable encrypted driver connection for Neo4j
#NEO4J_ENCRYPTED=true
# Uncomment this line to specify a specific Neo4j database (v4.x+ only)
#NEO4J_DATABASE=neo4j
GRAPHQL_SERVER_HOST=0.0.0.0
GRAPHQL_SERVER_PORT=4001
GRAPHQL_SERVER_PATH=/graphql
/web-react
The frontend React web application is found in this directory.
It includes:
/web-angular
A UI built with Angular, Apollo and the Clarity Design System is also available.
Start the Angular UI server
cd ./web-angular && npm start
/mobile_client_flutter
A mobile client built with Flutter which supports Android, iOS, and web. See the README for detailed setup instructions.
cd ./mobile_client_flutter && flutter run
/web-react-ts
A UI built with CRA (Create React App)
Start the React dev server
cd ./web-react-ts && npm start
This monorepo can be deployed to Netlify. The frontend application will be served over Netlify's CDN and the GraphQL API will be provisioned as a serverless GraphQL API lambda function deployed to AWS (via Netlify). A netlify.toml file is included with the necessary build configurations. The following environment variables must be set in Netlify (either via the Netlify web UI or via the command line tool)
NEO4J_URI
NEO4J_USER
NEO4J_PASSWORD
See the "Hands On With The GRANDStack Starter" video linked at the beginning of this README for a walkthrough of deploying to Netlify.
Vercel can be used with monorepos such as grand-stack-starter; vercel.json defines the configuration for deploying with Vercel.
vercel secret add grand_stack_starter_neo4j_uri bolt://<YOUR_NEO4J_INSTANCE_HERE>
vercel secret add grand_stack_starter_neo4j_user <YOUR_DATABASE_USERNAME_HERE>
vercel secret add grand_stack_starter_neo4j_password <YOUR_DATABASE_USER_PASSWORD_HERE>
vercel
You can quickly start via:
docker-compose up -d
If you want to load the example DB after the services have been started:
docker-compose run api npm run seedDb
See the project releases for the changelog.
You can find instructions for other ways to use Neo4j (Neo4j Desktop, Neo4j Aura, and other cloud services) in the Neo4j directory README.
This project is licensed under the Apache License v2. Copyright (c) 2020 Neo4j, Inc.
Download Details:
Author: kumarss20
Source Code: https://github.com/kumarss20/neo4j-gql
License: Apache-2.0 License
Import, visualize, and analyze SpiderFoot OSINT data in Neo4j, a graph database
NOTE: This installs the sfgraph command-line utility.
$ pip install spiderfoot-neo4j
NOTE: Docker must first be installed
$ docker run --rm --name sfgraph -v "$(pwd)/neo4j_database:/data" -e 'NEO4J_AUTH=neo4j/CHANGETHISIFYOURENOTZUCK' -e 'NEO4JLABS_PLUGINS=["apoc", "graph-data-science"]' -e 'NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.*' -p "7474:7474" -p "7687:7687" neo4j
$ sfgraph path_to/spiderfoot.db -s <SCANID_1> <SCANID_2> ...
Visit http://127.0.0.1:7474 and log in with neo4j/CHANGETHISIFYOURENOTZUCK
The --suggest option will rank nodes based on their connectedness in the graph. This is perfect for finding closely related affiliates (child companies, etc.) to scan and add to the graph. By default, Harmonic Centrality is used, but other algorithms such as PageRank can be specified with --closeness-algorithm:
$ sfgraph --suggest DOMAIN_NAME
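To make the default ranking metric concrete: the harmonic centrality of a node is the sum of the reciprocals of its shortest-path distances to every other reachable node, so well-connected nodes score highest. The sketch below computes it with a plain BFS on a toy adjacency list; it is only a conceptual illustration of the metric, as the tool itself computes it with the Neo4j GDS library:

```python
from collections import deque

def harmonic_centrality(adj, node):
    """Sum of 1/d(node, v) over all other nodes v reachable from
    `node`, with shortest-path distances d found by BFS."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(1.0 / d for n, d in dist.items() if n != node)

# Star graph: the hub is directly connected to everything,
# so it dominates the ranking.
adj = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(harmonic_centrality(adj, "hub"))  # 3.0 (three neighbors at distance 1)
print(harmonic_centrality(adj, "a"))    # 2.0 (1/1 + 1/2 + 1/2)
```

The same intuition carries over to the OSINT graph: an affiliate domain linked to many scanned entities ranks high and is a good candidate for the next scan.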
# match all INTERNET_NAMEs
MATCH (n:INTERNET_NAME) RETURN n
# match multiple event types
MATCH (n) WHERE n:INTERNET_NAME OR n:DOMAIN_NAME OR n:EMAILADDR RETURN n
# match by attribute
MATCH (n {data: "evilcorp.com"}) RETURN n
# match by spiderfoot module (relationship)
MATCH p=()-[r:WHOIS]->() RETURN p
# shortest path to all INTERNET_NAMEs from seed domain
MATCH p=shortestPath((d:DOMAIN_NAME {data:"evilcorp.com"})-[*]-(n:INTERNET_NAME)) RETURN p
# match only primary targets (non-affiliates)
MATCH (n {scanned: true}) RETURN n
# match only affiliates
MATCH (n {affiliate: true}) RETURN n
sfgraph [-h] [-db SQLITEDB] [-s SCANS [SCANS ...]] [--uri URI] [-u USERNAME] [-p PASSWORD] [--clear] [--suggest SUGGEST]
[--closeness-algorithm {pageRank,articleRank,closenessCentrality,harmonicCentrality,betweennessCentrality,eigenvectorCentrality}] [-v]
optional arguments:
-h, --help show this help message and exit
-db SQLITEDB, --sqlitedb SQLITEDB
Spiderfoot sqlite database
-s SCANS [SCANS ...], --scans SCANS [SCANS ...]
scan IDs to import
--uri URI Neo4j database URI (default: bolt://127.0.0.1:7687)
-u USERNAME, --username USERNAME
Neo4j username (default: neo4j)
-p PASSWORD, --password PASSWORD
Neo4j password
--clear Wipe the Neo4j database
--suggest SUGGEST Suggest targets of this type (e.g. DOMAIN_NAME) based on their connectedness in the graph
--closeness-algorithm {pageRank,articleRank,closenessCentrality,harmonicCentrality,betweennessCentrality,eigenvectorCentrality}
Algorithm to use when suggesting targets
-v, -d, --debug Verbose / debug
Download Details:
Author: blacklanternsecurity
Source Code: https://github.com/blacklanternsecurity/spiderfoot-neo4j
License: GPL-3.0 License
Open Source Tool - Cybersecurity Graph Database in Neo4j
|G|r|a|p|h|K|e|r|
{ open source tool for a cybersecurity graph database in neo4j }
With GraphKer you can have the most recent updates of cybersecurity vulnerabilities, weaknesses, attack patterns, and platforms from MITRE and NIST, in a very useful and user-friendly way provided by Neo4j graph databases!
Prerequisites
3 + 1 Steps to run GraphKer Tool
Windows Users: https://neo4j.com/download/
Create an account to get the license (totally free), download and install Neo4j Desktop.
Useful Video: https://tinyurl.com/yjjbn8jx
Linux Users:
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://debian.neo4j.com/neotechnology.gpg.key | sudo apt-key add -
sudo add-apt-repository "deb https://debian.neo4j.com stable 4.1"
sudo apt install neo4j
sudo systemctl enable neo4j.service
sudo systemctl status neo4j.service
You should have output that is similar to the following:
● neo4j.service - Neo4j Graph Database
Loaded: loaded (/lib/systemd/system/neo4j.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-08-07 01:43:00 UTC; 6min ago
Main PID: 21915 (java)
Tasks: 45 (limit: 1137)
Memory: 259.3M
CGroup: /system.slice/neo4j.service
. . .
Useful Video: https://tinyurl.com/vvpjf3dr
Windows Users:
You can create databases in whatever version you want (the latest version is preferable) through the GUI or the Neo4j Terminal.
Linux Users: When you start Neo4j through systemctl, type cypher-shell, then create database NAME;. Now you have to set this database as the default, so that it starts automatically when Neo4j starts. Go to /etc/neo4j/neo4j.conf, uncomment dbms.default_database=neo4j, and change it to your new database name. Restart the neo4j service and you are ready.
Install APOC Plugin:
Download APOC jar File: https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases (--all.jar file)
Place it in Plugins Folder --> check every folder path in Neo4j: https://neo4j.com/docs/operations-manual/current/configuration/file-locations/
Modify the Database Configuration File to allow APOC procedures.
Uncomment: dbms.directories.plugins=plugins
Uncomment and Modify:
dbms.security.procedures.unrestricted=apoc.*
dbms.security.procedures.whitelist=apoc.*,apoc.coll.*,apoc.load.*
#loads unrestricted and white-listed procedures/plugins to the server
Restart Neo4j: systemctl restart neo4j
Configure Database Settings File:
Windows Users: In Neo4j Desktop Main Page --> Choose your Database --> ... (Three Dots) --> Settings --> Go to last line and set the commands below --> Apply and Restart the Database
apoc.export.file.enabled=true
apoc.import.file.enabled=true
apoc.import.file.use_neo4j_config=false
cypher.lenient_create_relationship = true
Linux Users: Same as above, in the neo4j.conf file --> check every folder path in Neo4j: https://neo4j.com/docs/operations-manual/current/configuration/file-locations/
Configure Memory Usage:
In the Neo4j configuration file (neo4j.conf): for 16GB of RAM you can use an 8G max heap + 4G initial heap; for 8GB of RAM, 4G + 2G, and so on.
dbms.memory.heap.initial_size=4G
dbms.memory.heap.max_size=8G
dbms.memory.pagecache.size=4G
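The rule of thumb above (max heap ≈ half of RAM, initial heap and page cache ≈ a quarter) can be expressed as a small helper. This is a hypothetical convenience function for illustration only, not part of GraphKer or Neo4j:

```python
def suggest_memory_settings(ram_gb):
    """Suggest neo4j.conf memory settings from total RAM, following
    the text's rule of thumb: for 16GB RAM use an 8G max heap and a
    4G initial heap; for 8GB RAM, 4G + 2G; and so on. The page cache
    mirrors the initial heap size."""
    max_heap = ram_gb // 2
    initial_heap = ram_gb // 4
    return {
        "dbms.memory.heap.initial_size": f"{initial_heap}G",
        "dbms.memory.heap.max_size": f"{max_heap}G",
        "dbms.memory.pagecache.size": f"{initial_heap}G",
    }

for key, value in suggest_memory_settings(16).items():
    print(f"{key}={value}")  # matches the 16GB example above
```

For production sizing, Neo4j ships its own neo4j-admin memrec tool, which should be preferred over a rule of thumb like this.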
pip install -r requirements.txt
Run GraphKer
// Default
python main.py -u BOLT_URL -n USERNAME -p PASSWORD -d IMPORT_PATH
// Run and Open Neo4j Browser
python main.py -u BOLT_URL -n USERNAME -p PASSWORD -d IMPORT_PATH -b y
// Run and Open Graphlytic App
python main.py -u BOLT_URL -n USERNAME -p PASSWORD -d IMPORT_PATH -g y
// Default Run Example in Ubuntu
sudo python3 main.py -u BOLT_URL -n USERNAME -p PASSWORD -d /var/lib/neo4j/import/
Default Bolt URL for Neo4j: bolt://localhost:7687
Default Username in Neo4j Databases: neo4j
For Neo4j Import Folder check the link above with File Locations.
Estimated runtime: 6-15 minutes, depending on hardware.
You will need at least 8GB of free space on your hard drive.
You can check out an existing example of the graph database that GraphKer creates. Just download the dump file from repo: https://github.com/amberzovitis/GraphKer-DBMS-Dump and import it to an existing or new graph database in Neo4j. This file consists of CVEs from 2021 (with related CPEs) and all CWEs and CAPECs.
You can access the CVE and CPE Datasets in National Vulnerability Database by NIST (https://nvd.nist.gov/vuln/data-feeds), CWE Dataset in MITRE (https://cwe.mitre.org/data/downloads.html) and CAPEC Dataset in MITRE (https://capec.mitre.org/data/downloads.html).
--Search, Export Data and Analytics, Enrich your Skills--
Created by Adamantios - Marios Berzovitis, Cybersecurity Expert MSc, BSc
Diploma Research - MSc @ Distributed Systems, Security and Emerging Information Technologies | University Of Piraeus --> https://www.cs.unipi.gr/distributed/
Co-Working with Cyber Security Research Lab | University Of Piraeus --> https://seclab.cs.unipi.gr/
Facebook: https://www.facebook.com/GraphKerTool/
LinkedIn: https://tinyurl.com/p57w4ntu
Github: https://github.com/amberzovitis
Enjoy! Provide Feedback!
Author: amberzovitis
Source Code: https://github.com/amberzovitis/GraphKer
#neo4j