Getting started with Amazon Neptune Database

If you already have your data in a graph model, it’s easy to get started with Amazon Neptune Database. You can load data in CSV or RDF formats and begin writing graph queries with Apache TinkerPop Gremlin, SPARQL, or openCypher. You can use the getting started documentation or view the AWS Online Tech Talk through the following links. We've also consolidated best practices for Neptune Database.
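Bulk loads into Neptune Database are typically started by POSTing a JSON request to the cluster's loader endpoint. The sketch below builds that request; the bucket, role ARN, and endpoint are placeholders, and the parameter names follow the Neptune loader API.

```python
def build_loader_request(source, data_format, iam_role_arn, region):
    """Payload for Neptune's bulk loader (POST https://<cluster-endpoint>:8182/loader).

    All identifiers here are placeholders; the IAM role must allow the
    Neptune cluster to read from the S3 bucket.
    """
    return {
        "source": source,            # S3 prefix containing the data files
        "format": data_format,       # "csv" for Gremlin; "ntriples", "rdfxml", etc. for RDF
        "iamRoleArn": iam_role_arn,
        "region": region,
        "failOnError": "TRUE",
    }

payload = build_loader_request(
    "s3://my-bucket/airports/", "csv",
    "arn:aws:iam::123456789012:role/NeptuneLoadFromS3", "us-east-1")

# To start the load from a host with network access to the cluster:
# import json, urllib.request
# req = urllib.request.Request(
#     "https://your-cluster-endpoint:8182/loader",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

The loader endpoint returns a load ID that you can poll for status, which is usually preferable to loading large datasets through individual queries.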

Getting started with Amazon Neptune Analytics

You can get started with Neptune Analytics in a few steps by creating a graph using the AWS Management Console or the CDK, SDK, or CLI. AWS CloudFormation support is coming soon. You can load a graph into Neptune Analytics from data in an Amazon S3 bucket or from a Neptune database. You can send requests using the openCypher query language to a graph in Neptune Analytics directly from your graph applications. You can also connect to the graph in Neptune Analytics from a Jupyter notebook to run queries and graph algorithms. Results of analytic queries can be written back into the Neptune Analytics graph to serve incoming queries or stored in Amazon S3 for further processing. Neptune Analytics supports integration with the open-source LangChain library to work with existing applications powered by large language models.
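Creating a graph from the SDK can be sketched as follows. The parameter names follow the Neptune Analytics (`neptune-graph`) CreateGraph API as an assumption worth verifying against the SDK documentation, and the graph name and capacity values are illustrative.

```python
def create_graph_params(name, memory_ncus=16):
    # Parameter names assume the neptune-graph CreateGraph API;
    # values are placeholders, not recommendations.
    return {
        "graphName": name,
        "provisionedMemory": memory_ncus,   # capacity in m-NCUs
        "publicConnectivity": False,
        "deletionProtection": True,
    }

params = create_graph_params("my-analytics-graph")

# With boto3 installed and AWS credentials configured:
# import boto3
# client = boto3.client("neptune-graph")
# graph = client.create_graph(**params)
```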

Getting started with Amazon Neptune ML

To get started with Neptune ML, see the blog post that walks through the getting started workflow, including the following steps:
  • Setting up the test environment
  • Launching the node classification notebook sample
  • Loading the sample data into the cluster
  • Exporting the graph
  • Performing ML training
  • Running Gremlin queries with Neptune ML
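Once training is complete, Gremlin queries can ask Neptune ML for inferred values using its `Neptune#ml.*` steps. The sketch below assembles such a query as a string; the endpoint name is a placeholder for your SageMaker inference endpoint, and the label and property are illustrative.

```python
def ml_classification_query(endpoint_name, label, prop):
    """Assemble a Gremlin query that asks Neptune ML to infer a property.

    The "Neptune#ml.*" steps follow Neptune ML's Gremlin inference syntax;
    the endpoint name is a placeholder.
    """
    return (
        f'g.with("Neptune#ml.endpoint","{endpoint_name}")'
        f'.V().hasLabel("{label}")'
        f'.properties("{prop}").with("Neptune#ml.classification").value()'
    )

query = ml_classification_query("my-node-classification-endpoint", "movie", "genre")
```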

Getting started with graph visualization

You can use either Neptune notebooks or Graph Explorer to visualize your graph data. If you are new to graph databases and query languages or want to explore graph data without writing queries, we recommend starting with Graph Explorer. You can get started with Graph Explorer in a few steps using the AWS Management Console. To use Graph Explorer, users must have read access to Neptune data through a new or existing IAM role. The Graph Explorer project is available on GitHub, and Graph Explorer is available in all AWS Regions where the Neptune workbench is available.
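A minimal policy sketch for such a role might look like the following, assuming the `neptune-db:ReadDataViaQuery` and `neptune-db:GetEngineStatus` actions and a placeholder account and cluster resource ID (verify the action names and resource ARN format against the Neptune IAM documentation):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "neptune-db:ReadDataViaQuery",
        "neptune-db:GetEngineStatus"
      ],
      "Resource": "arn:aws:neptune-db:us-east-1:123456789012:cluster-ABC123/*"
    }
  ]
}
```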

If you are familiar with graph query languages or running graph workloads in a notebook environment, you can start with Neptune notebooks. Neptune provides Jupyter and JupyterLab notebooks in the open-source Neptune graph notebook project on GitHub and in the Neptune workbench. These notebooks offer sample application tutorials and code snippets in an interactive coding environment where you can learn about graph technology and Neptune.

Neptune notebooks can visualize query results and provide an IDE-like interface for application development and testing. You can also use Neptune notebooks with other Neptune features such as Neptune Streams and Neptune ML. Additionally, each Neptune notebook hosts a Graph Explorer endpoint. You can find a link to open Graph Explorer on each notebook instance in the Amazon Neptune console.
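In the Neptune workbench, cell magics from the open-source graph-notebook project submit queries directly to your cluster, so a first session can be as simple as the following illustrative cells (one magic per cell):

```
%%gremlin
g.V().limit(5).valueMap(true)

%%oc
MATCH (n) RETURN n LIMIT 5
```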

Getting started with query languages

Gremlin: Customers using Gremlin with Neptune often refer to the online book, Practical Gremlin: An Apache TinkerPop Tutorial, as a helpful reference to augment the Apache TinkerPop documentation.
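A first Gremlin traversal against Neptune can be sketched as below, using the Apache TinkerPop gremlinpython driver over WebSockets (driver usage is commented out since it needs network access to a cluster; the endpoint is a placeholder, and the `airport` label is illustrative).

```python
# Illustrative traversal text; the label and property are placeholders.
query = "g.V().hasLabel('airport').limit(5).values('code')"

# With gremlinpython installed:
# from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
# from gremlin_python.process.anonymous_traversal import traversal
# conn = DriverRemoteConnection("wss://your-cluster-endpoint:8182/gremlin", "g")
# g = traversal().withRemote(conn)
# print(g.V().hasLabel("airport").limit(5).values("code").toList())
# conn.close()
```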
 
SPARQL: For customers using RDF and SPARQL with Neptune, the World Wide Web Consortium's SPARQL 1.1 Overview is a useful guide.
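Neptune exposes SPARQL over HTTPS, so a first query can be sketched as a form-encoded POST body (the cluster endpoint is a placeholder):

```python
from urllib.parse import urlencode

def sparql_request_body(query):
    """Form-encoded body for a POST to https://<cluster-endpoint>:8182/sparql."""
    return urlencode({"query": query})

query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5"
body = sparql_request_body(query)
```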
 
openCypher: openCypher is a declarative query language for property graphs. It was originally developed by Neo4j, open-sourced in 2015, and contributed to the openCypher project under an Apache 2 open-source license. Its syntax is documented in the Cypher Query Language Reference, Version 9.
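Neptune likewise accepts openCypher over HTTPS at the cluster's openCypher endpoint, so a first query can be sketched the same way (endpoint, label, and property are placeholders):

```python
from urllib.parse import urlencode

def opencypher_request_body(query):
    """Form-encoded body for a POST to https://<cluster-endpoint>:8182/openCypher."""
    return urlencode({"query": query})

body = opencypher_request_body("MATCH (a:airport) RETURN a.code LIMIT 5")
```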
 
GraphQL: If you're interested in enabling GraphQL for access to Neptune, there's an example application that shows how to use AWS AppSync GraphQL with Neptune.