Importing RDF data into Neo4j

The previous blog post might have been a bit too dense to start with, so I’ll try something a bit lighter this time like importing RDF data into Neo4j. It assumes, however, a certain degree of familiarity with both RDF and graph databases.

There are a number of RDF datasets out there that you may be aware of, and you may have asked yourself at some point: “if RDF is a graph, then it should be easy to load it into a graph database like Neo4j, right?”. Well, the RDF model and the property graph model (implemented by Neo4j) are both graph models, but with some important differences that I won’t go over in this post. What I’ll do, though, is describe one possible way of migrating data from an RDF graph into Neo4j’s property graph database.

I’ve also implemented this approach as a Neo4j stored procedure, so if you’re less interested in the concept and just want to see how to use the procedure you can go straight to the last section. Give it a try and share your experience, please.

The mapping

The first thing to do is plan a way to map both models. Here is my proposal.

An RDF graph is a set of triples, or statements, (subject, predicate, object), where both the subject and the predicate are resources and the object can be either another resource or a literal. The only particularity of literals is that they cannot be the subject of other statements; in a tree structure we would call them leaf nodes. Also keep in mind that resources are uniquely identified by URIs.
Rule 1: Subjects of triples are mapped to nodes in Neo4j. A node in Neo4j representing an RDF resource will be labeled :Resource and have a property uri with the resource’s URI.
(S,P,O) => (:Resource {uri:S})...
Rule 2a: Predicates of triples are mapped to node properties in Neo4j if the object of the triple is a literal.
(S,P,O) && isLiteral(O) => (:Resource {uri:S, P:O})
Rule 2b: Predicates of triples are mapped to relationships in Neo4j if the object of the triple is a resource.
(S,P,O) && !isLiteral(O) => (:Resource {uri:S})-[:P]->(:Resource {uri:O})
Let’s look at an example. Here is a short RDF fragment from the RDF Primer by the W3C that describes a web page and links it to its author. The triples are the following:
ex:index.html   dc:creator              exstaff:85740 .
ex:index.html   exterms:creation-date   "August 16, 1999" .
ex:index.html   dc:language             "en" .
The URIs of the resources are shortened by using the XML namespace mechanism. In this example, ex, exterms and exstaff stand for example namespaces defined in the Primer, and dc stands for the Dublin Core vocabulary. The full URIs are shown in the graphical representation of the triples in the W3C page.
If we iterate over this set of triples applying the three rules defined above, we get the following elements in a Neo4j property graph. I’ll use Cypher to describe them.
The application of rules 1 and 2b to the first triple would produce:
(:Resource { uri:"ex:index.html"})-[:`dc:creator`]->(:Resource { uri:"exstaff:85740"})
The second triple is transformed using rules 1 and 2a:
(:Resource { uri:"ex:index.html", `exterms:creation-date`: "August 16, 1999"})
And finally the third triple is transformed also with rules 1 and 2a producing:
(:Resource { uri:"ex:index.html", `dc:language`: "en"})
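Put together, the three rules can be sketched in a few lines of code. This is only an illustration of the mapping, not the actual procedure’s implementation, and the is_literal check is a crude heuristic that happens to work for the Primer triples (a real RDF parser tells you whether an object is a literal).

```python
# Illustrative sketch of rules 1, 2a and 2b. Triples are plain
# (subject, predicate, object) tuples; is_literal is a crude stand-in
# for the information an RDF parser would provide.

def is_literal(obj):
    # Heuristic for this sketch only: objects without a prefix are literals.
    return ":" not in obj

def map_triples(triples):
    nodes = {}   # uri -> property map (rule 1)
    rels = []    # (subject uri, relationship type, object uri) (rule 2b)
    for s, p, o in triples:
        props = nodes.setdefault(s, {"uri": s})   # rule 1: subject -> node
        if is_literal(o):
            props[p] = o                          # rule 2a: literal -> property
        else:
            nodes.setdefault(o, {"uri": o})       # the object is a resource too
            rels.append((s, p, o))                # rule 2b: resource -> relationship
    return nodes, rels

triples = [
    ("ex:index.html", "dc:creator", "exstaff:85740"),
    ("ex:index.html", "exterms:creation-date", "August 16, 1999"),
    ("ex:index.html", "dc:language", "en"),
]
nodes, rels = map_triples(triples)
# rels -> [('ex:index.html', 'dc:creator', 'exstaff:85740')]
```

Running it over the three Primer triples yields exactly the two nodes and one relationship written in Cypher above.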


The proposed set of basic mapping rules can be improved by adding one obvious exception for categories. RDF can represent both data and metadata as triples in the same graph, and one of the most common uses of this is to categorise resources by linking them to classes through an instance-of style relationship (called rdf:type). So let’s add a new rule to deal with this case.
Rule 3: rdf:type statements are mapped to labels in Neo4j.
(Something, rdf:type, Category) => (:Category {uri:Something})
The rule basically maps the way individual resources (data) are linked to classes (metadata) in RDF through the rdf:type predicate to the way you categorise nodes in Neo4j, i.e. by using labels.
This also has the advantage of removing dense nodes, which aren’t particularly nice to deal with for any database. Rather than having a few million nodes representing people in your graph, all of them connected to a single Person class node, we will have them all labeled as :Person, which makes a lot more sense, and there is no semantic loss.
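Rule 3 can be sketched in the same illustrative style as before: rdf:type triples are diverted into a set of labels per subject instead of producing relationships, with every node keeping the :Resource label from rule 1.

```python
# Rule 3 sketch (illustrative, not the procedure's actual code):
# rdf:type statements become labels on the subject node rather than
# relationships to a shared, dense class node.

def extract_labels(triples):
    labels = {}  # subject uri -> set of labels
    for s, p, o in triples:
        if p == "rdf:type":
            labels.setdefault(s, {"Resource"}).add(o)
    return labels

labels = extract_labels([
    ("exstaff:85740", "rdf:type", "exterms:Person"),
    ("exstaff:85740", "exterms:age", "27"),
])
# labels -> {'exstaff:85740': {'Resource', 'exterms:Person'}}
```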

The naming of things

Resources in RDF are identified by URIs, which makes them unique, and that’s great, but they are meant to be machine readable rather than nice to the human eye. So even though you’d like to read ‘Person’, RDF will use a full URI like http://xmlns.com/foaf/0.1/Person (to take the FOAF vocabulary as an example). While these kinds of names can be used in Neo4j with no problem, they will make your labels and property names horribly long and hard to read, and your Cypher queries will be polluted with http://… making the logic harder to grasp.
So what can we do? We have two options. Option 1) is to leave things named just as they are in the RDF model, with full URIs, and just deal with it in your queries. This would be the right thing to do if your data uses multiple schemas not necessarily under your control and/or more schemas can be added dynamically. Option 2) is to make the pragmatic decision of shortening names to make both the model and the queries more readable. This will require some governance to ensure there are no name clashes. It is probably a reasonable thing to do if you are migrating data into Neo4j from an RDF graph where you own the vocabularies being used, or at least have control over which schemas are used.
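Option 2) can be sketched as follows. I’m assuming here that prefixes are assigned in order of appearance (ns0, ns1, …), in the style of the ns3_ names that show up in the queries later in this post; splitting a URI at its last ‘#’ or ‘/’ is the usual heuristic but still an assumption of this sketch.

```python
# Sketch of pragmatic name shortening: split each URI into namespace and
# local name at the last '#' or '/', and assign ns0, ns1, ... prefixes to
# namespaces as they are first seen.

def make_shortener():
    namespaces = {}  # namespace uri -> assigned prefix

    def shorten(uri):
        cut = max(uri.rfind("#"), uri.rfind("/")) + 1
        ns, local = uri[:cut], uri[cut:]
        prefix = namespaces.setdefault(ns, "ns%d" % len(namespaces))
        return prefix + "_" + local

    return shorten, namespaces

shorten, namespaces = make_shortener()
shorten("http://purl.org/dc/elements/1.1/creator")   # -> 'ns0_creator'
shorten("http://purl.org/dc/elements/1.1/language")  # -> 'ns0_language'
shorten("http://xmlns.com/foaf/0.1/Person")          # -> 'ns1_Person'
```

The namespaces map collected along the way is what lets an importer print the prefix summary after the load, and it is the governance artefact you’d need to keep name clashes under control.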
The initial version of the importRDF stored procedure supports both approaches as we will see in the final sections.

Datatypes in RDF literals

Literals can have data types associated with them in RDF by pairing a string with a URI that identifies a particular XSD datatype:

exstaff:85740  exterms:age  "27"^^xsd:integer .

As part of the import process you may want to map the XSD datatype used in a triple to one of Neo4j’s datatypes. If datatypes are not explicitly declared in your RDF data you can always just load all literals as Strings and then cast them if needed at query time or through some batch post-import processing.
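Such a mapping can be sketched like this. The table of casts below is my own illustrative choice, not the procedure’s; untyped (or unknown-typed) literals fall through as strings, matching the load-as-String default described above.

```python
# Sketch: cast an XSD-typed literal to a Python value that Neo4j can
# store natively. Anything without a recognised datatype stays a string.

XSD = "http://www.w3.org/2001/XMLSchema#"

CASTS = {
    XSD + "integer": int,
    XSD + "long":    int,
    XSD + "double":  float,
    XSD + "float":   float,
    XSD + "boolean": lambda v: v == "true",
}

def cast_literal(lexical, datatype=None):
    return CASTS.get(datatype, str)(lexical)

cast_literal("27", XSD + "integer")   # -> 27
cast_literal("August 16, 1999")       # -> 'August 16, 1999'
```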

Blank nodes

The building block of the RDF model is the triple, and this implies an atomic decomposition of your data into individual statements. However (and I quote the W3C’s RDF Primer again) most real-world data involves structures that are more complicated than that, and the way to model structured information is by linking the different components to an aggregator resource. These aggregator resources may never need to be referred to directly, and hence may not require universal identifiers (URIs). Blank nodes are the artefacts in RDF that fulfil this requirement of representing anonymous resources. Triple stores will give them some sort of graph-store-local unique ID for the purposes of preserving uniqueness and avoiding clashes.

Our RDF importer will label blank nodes as BNode and resources identified with URIs as URI. However, it’s important to keep in mind that if you bring data into Neo4j from multiple RDF graphs, the identifiers of blank nodes are not guaranteed to be unique, so unexpected clashes may occur and extra controls may be required.
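One possible extra control, sketched below, is to qualify each blank node id with an identifier of the import it came from, so that ids from different graphs can never collide. The bnode:// scheme and the import_id parameter are made up for this sketch.

```python
# Sketch: scope blank node ids to the import they came from, so that b0
# from one RDF file and b0 from another never map to the same Neo4j node.

def qualify_bnode(bnode_id, import_id):
    return "bnode://%s/%s" % (import_id, bnode_id)

qualify_bnode("b0", "import-1")  # -> 'bnode://import-1/b0'
qualify_bnode("b0", "import-2")  # -> 'bnode://import-2/b0' (no clash)
```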

The importRDF stored procedure

UPDATE [Feb-2021] The stored procedure described here in its initial form, evolved into a toolkit called Neosemantics (n10s) that includes a number of features that make it possible to work with RDF in Neo4j. The syntax described below has changed since, and it would be impractical to try and keep this post up to date. For the latest on the implementation have a look at the manual or check out the github repository for the source.

As I mentioned at the beginning of the post, I’ve implemented these ideas in a plugin for Neo4j called Neosemantics (n10s), which includes a procedure called importRDF. The usage is pretty simple. It takes four arguments as input.

  • The url of the RDF data to import.
  • The type of serialization used. The most frequent serializations for RDF are JSON-LD, Turtle, RDF/XML, N-Triples and TriG. There are a couple more, but these are the ones accepted by the stored proc for now.
  • A boolean indicating whether we want the names of labels, properties and relationships shortened as described in the “naming of things” section.
  • The commit periodicity: the number of triples ingested after which a commit is run.
CALL semantics.importRDF("file:///Users/jbarrasa/Downloads/opentox-example.turtle","Turtle", false, 500)

Will produce the following output:

(Screenshot: import result summary)

The URL can point at a local RDF file, as in the previous example, or at one accessible via HTTP. The next example loads a public dataset with 3.5 million triples on food products, their ingredients, allergens, nutrition facts and much more from Open Food Facts.

CALL semantics.importRDF("","RDF/XML", true, 25000)

On my laptop the whole import took just over 4 minutes to produce this output.

(Screenshot: import result summary)

When shortening of names is selected, the list of prefixes being used is included in the import summary. If you want to give it a try, don’t forget to create the following index beforehand, otherwise the stored procedure will abort the import and remind you:

CREATE INDEX ON :Resource(uri) 

Once imported, I can find straight away the set of ingredients shared between your Kellogg’s Coco Pops cereals and a bag of pork pies that you can buy at your local Spar.

(Screenshot: query results)

Below is the Cypher query that produces these results. Notice how the URIs have been shortened, but uniqueness of names is preserved by prefixing them with a namespace prefix.

MATCH (prod1:Resource { uri: ''})
MATCH (prod2:ns3_FoodProduct { ns3_name : '2 Snack Pork Pies'})
MATCH (prod1)-[:ns3_containsIngredient]->(x1)-[:ns3_food]->(sharedIngredient)<-[:ns3_food]-(x2)<-[:ns3_containsIngredient]-(prod2)
RETURN prod1, prod2, x1, x2, sharedIngredient

I’ve intentionally written the two MATCH blocks for the two products in different ways, one identifying the product by its unique identifier (URI) and the other combining the category and the name.

A couple of open points

There are a couple of things that I have not explored in this post and that the current implementation of the RDF importer does not deal with.

Multivalued properties

The current implementation does not deal with multivalued properties, although an obvious implementation could be to use arrays of values for this.
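The array idea can be sketched like this. It is illustrative only: set_prop is a made-up helper, and a real implementation would also need to check that all values share a compatible type before promoting to an array.

```python
# Sketch of array-based handling of multivalued properties: the first
# value is stored as a plain property, the second promotes it to an
# array, and later values are appended.

def set_prop(props, key, value):
    if key not in props:
        props[key] = value                    # first value: plain property
    elif isinstance(props[key], list):
        props[key].append(value)              # third and later values
    else:
        props[key] = [props[key], value]      # second value: promote to array

props = {"uri": "ex:index.html"}
set_prop(props, "dc:creator", "exstaff:85740")
set_prop(props, "dc:creator", "exstaff:85741")
# props["dc:creator"] -> ['exstaff:85740', 'exstaff:85741']
```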

And the metadata?

This works great for instance data, but there is a little detail to take into account: an RDF graph can contain metadata statements. This means that you can find in the same graph (JB, rdf:type, Person) and (Person, rdf:type, owl:Class), and even (rdf:type, rdf:type, rdfs:Property). The post on Building a semantic graph in Neo4j gives some ideas on how to deal with RDF metadata, but this is a very interesting topic and I’ll be coming back to it in future posts.


Migrating data from an RDF graph into a property graph like the one implemented by Neo4j can be done in a generic and relatively straightforward way as we’ve seen. This is interesting because it gives an automated way of importing your existing RDF graphs (regardless of your serialization: JSON-LD, RDF/XML, Turtle, etc.) into Neo4j without loss of its graph nature and without having to go through any intermediate flattening step.

Because the import process is totally generic, the resulting graph in Neo4j of course inherits the modelling limitations of RDF, like the lack of support for attributes on relationships, so you will probably want to enrich or fix your raw graph once it’s been loaded in Neo4j. Both potential improvements to the import process and post-import graph processing will be discussed in future posts. Watch this space.

65 thoughts on “Importing RDF data into Neo4j”

  1. Hello Jesús,

    I’m new to the Neo4j world and interested in importing RDF N-Triples data into Neo4j. So I got to your blog post. At the moment I am a little bit stuck, because I don’t know how to use your procedure. These are the steps that I’ve done:

    1. Downloaded your source code
    2. Generated a jar file with ‘mvn package’
    3. Put the generated jar in the plugin folder of Neo4j
    4. Called the procedure over the web interface

    But calling the procedure generates the error:
    ‘There is no procedure with the name `semantics.importRDF` registered for this database instance. Please ensure you’ve spelled the procedure name correctly and that the procedure is properly deployed.’.

    Can you give me some suggestions how to solve it? Maybe APOC is a necessary dependency?

    Best regards


    1. Hi Timo, great to hear that you’re giving the loader a try! You seem to have followed the right steps but let me ask you a couple of questions.

      1. Did you restart the Neo4j server after copying the jar to the plugins folder?

      2. Did you copy across also the dependent jars? APOC is not a required jar in the current version but there are a number of required third party jars that you’ll find in the pom.xml that need to be copied to the plugins directory as well.

      3. Finally, if you’ve checked the previous two and it still does not work, I’d recommend opening an issue in github (look at this similar one). At this point maybe the version of Neo4j you’re working with and the server logs would be useful.

      Hope this helps. Let me know how it goes.



  2. Hello,

    I have some trouble with your procedure…
    In Neo4j, calling the procedure always returns a KO termination status with the same extra info:

    “At least one of the required indexes was not found [ :Resource(uri), :URI(uri), :BNode(uri), :Class(uri) ]”

    Even your example
    CALL semantics.importRDF(“”,”RDF/XML”,false,5000)

    returns the same issue.

    Any idea?

    Thank you for your help,


    1. Hi Thomas,
      I did add this stopper to avoid kicking off the RDF load if the indexes were not present, because without indexes it can be really slow on large RDF imports.

      All you have to do is create the indexes as described in the post. You can do this either from the browser or from the shell by running the following instructions:

      CREATE INDEX ON :Resource(uri)
      CREATE INDEX ON :BNode(uri)
      CREATE INDEX ON :Class(uri)

      You can check that the indexes have been created by running :SCHEMA on your Neo4j browser.
      Once the indexes are present the importRDF procedure should work nicely.
      Let me know if this solves the problem.




      1. Hello Jesús,

        Nice, it works great!
        Very good work, thank you very much for your help.



  3. Hello there,

    Unfortunately I still run into trouble getting it to work.
    Let me recap the steps I took in case I did anything wrong (neo4j version 3.0.6):

    1. downloaded the entire neosemantics folder into the maven/bin directory, ran mvn package shade:shade
    2. copied the target output into neo4j/plugins, save for original-neosemantics-1.0-SNAPSHOT.jar, which causes a crash at neo4j startup
    3. CALL semantics.importRDF(“”,”RDF/XML”, true, 25000) causes the following error: At least one of the required indexes was not found [ :Resource(uri), :URI(uri), :BNode(uri), :Class(uri) ]
    4. copied the JARs from “alternate location” directly into the neo4j/plugins folder, restarted the server, neo4j crashes altogether.

    I uploaded the error/warning portion from my log because it would explode this comment section. I don’t understand the error because all the mentioned JARs are in the plugins folder…

    Maybe you know what could be the matter?


    1. For goodness sake, why can’t I read instructions properly? It was the index creation issue, which was even pointed out in the original blog article AND mentioned in this very comment section. Sometimes you can’t see the forest for the trees, I guess.
      Seems to work now, sorry for posting the question.


  4. Hi Jesús,

    Thanks for the detailed blogs and code that you have made available to us, the “online community”. It has helped me a lot to get a running start on combining the two worlds of RDF and Neo4j {graphs} with each other.
    I have a conundrum with a real-life use-case and would like to know if you can get in touch with me. I’m interested in your view on a few points…

    Kind regards, Black


      1. Hi again Black, sorry just noticed the email address was not displayed as expected there is a neo4j missing between the at and the dot com.



  5. Hello Jesús,

    I really searched for something like this. Thank you for providing it! I tried your loader today and it is working fine. The only problem is scalability. I’m trying to import a 100-million-triple file and it takes very long. In fact I have the impression that the number of triples imported per second is going down. I’m trying on a 60GB RAM machine. Did you try it on larger datasets? Any ideas about what is happening?

    Thank you


    1. Hi Dennis,

      The degradation in write performance is expected. In Neo4j all connections (relationships) between nodes are materialised at write time as opposed to triple stores where they are computed via joins at query time. It’s a trade-off between write and read performance. What you’re getting with Neo4j is more expensive transactional data load but lightning speed traversals at read time.

      Also keep in mind that this approach is transactional, which may or may not be the best approach for super large datasets. If your dataset is really massive you may want to try the non-transactional import tool.

      The largest dataset I’ve imported with my stored procedure is 107 million triples, and it took 2h50min on my 16GB laptop.

      So I’d say it depends: if it’s a one-off import it can be acceptable, but if you want to import 100 million triples every hour then you’ll probably need to find alternatives.

      BTW, I’m currently working on the next post describing the experience of loading the larger dataset I mentioned before so watch this space.




      1. Hello Jesús,

        thank you for your answer, it is very helpful. I will see ahead for the next post!



  6. Hi Jesus,

    thank you for your work that allows the Semantic Web community to use Neo4j technology more easily. I wrote once before in the post about importing RDF datasets. My problem was that I had trouble importing a bigger RDF dataset into neo4j. Unfortunately I still have the problem. You pointed me to the neo4j-import utility. So I did the following: I parsed an RDF file and created a dictionary for all the nodes using a quite big HashMap. Then I created two files:





    I used neo4j-import for importing. I can import a part of DBpedia (and it is fine) but when I want to load also the yago classes (just a lot of classes) I get a strange error I reported here:

    You said that you wanted to do a post for importing big datasets. Is this still the case? Can you help?

    Dennis Diefenbach


    1. Hi Dennis, I see you uncovered a bug in the import tool. Thanks and nice catch.
      I thought I’d drop you a line to mention that one year later… yes, that’s how time flies and how other priorities take precedence 😦 I’m about to publish a worked example on a larger dataset.
      I’m using the OpenPermID from Thomson Reuters.
      Should be public in the next week or so.
      Thanks for the patience 🙂



  7. Thanks for your code. But when I run the code CALL semantics.importRDF(“”,”RDF/XML”, { languageFilter: ‘fr’, commitSize: 5000, nodeCacheSize: 250000}) the procedure always returns a KO termination status with the same extra info: ”unqualified property element not allowed [line 2, column 14]”


  8. Hello,

    Thank you for the great article.
    I have followed the procedure for loading the triples in Neo4j but I am getting the following error:

    “Neo.ClientError.Statement.SyntaxError: Procedure call does not provide the required number of arguments: got 4 expected 3.”

    This is with the example that you have provided:

    CALL semantics.importRDF(“”,”RDF/XML”, true, 25000)

    Any help will be appreciated.

    Thanks and regards,


  9. Hi, thanks for this wonderful work. But I have a small problem while getting the names of the nodes. It shows URIs in place of them; can you help me with this?
    Thanks in advance


      1. Thank you for your immediate reply. What I meant in the above question is, I am unable to get the labels of the nodes. Only the uri is seen for each node.


  10. Hi, Jesús!
    Now I want to import RDF N-Triples into neo4j, but I have some difficulties. The following steps are all under Windows.
    First, I downloaded the .jar file from github and put it into the plugins folder of neo4j. Then I added “dbms.unmanaged_extension_classes=semantics.extension=/rdf” to conf/neo4j.conf. In the end, I restarted the server and ran “call dbms.procedures()”, but “semantics.*” didn’t appear in the list.
    Now I have no idea what I can do. Can you give me some suggestions? Thank you!


    1. Now I have succeeded by building the JARs from the source. Although I ran into some errors along the way, I solved them. Thank you for your wonderful work, again!


  11. Hello Jesus,
    For a while I was trying to find a straightforward way to port RDF into neo4j, until I found your great post (and its sequels) – thank you for sharing all this! I went through all the suggested steps: the .jar file loaded well into plugins, and after catching a few errors of my own I went on to executing the stored procedure (the latest version from your GitHub, with syntax chosen accordingly). I got stuck when trying to load into neo4j the following .ttl (or, in fact, any from the BBC repository):
    CALL semantics.importRDF(“”,”Turtle”, { shortenUrls: false, typesToLabels: true, commitSize: 500 })
    … the loader then terminated with KO and extraInfo “IRI included an unencoded space: ’32’ [line 7]”.
    When I downloaded the file to desktop and tried to load the file into neo4j locally:
    CALL semantics.importRDF(“file:///C:/…/maths.ttl”,”Turtle”, { shortenUrls: false, typesToLabels: true, commitSize: 500 })
    … I get still KO, with a different extraInfo “Expected an RDF value here, found ‘=’ [line 58]”.
    Then I tried to replicate literally – just with the latest version as per GitHub – your post example, with:
    CALL semantics.importRDF(“”,”RDF/XML”, { shortenUrls: false, typesToLabels: true, commitSize: 25000 })
    … still KO, with extraInfo “unqualified property element not allowed [line 2, column 14]”.
    Being new to RDF and neo4j I am sure I am doing some rookie mistake somewhere, but neither searching nor experimenting took me out of this – your kind hint would be much appreciated. Thank you!


    1. Hi Vladislav, when loading the BBC data from GitHub you should use the raw version of the Turtle file, otherwise you’ll be trying to load an html page which is, of course, not valid RDF. You can see the page by clicking on the ‘Raw’ button on the top right of the source code in GitHub. This is the correct url to use in the stored proc:
      CALL semantics.importRDF(“”,”Turtle”, { shortenUrls: false, typesToLabels: true, commitSize: 500 })

      Regarding openfoodfacts, their datasets are broken. I suggested that interested people reach out to them to have it fixed. All the parser can do is detect a syntax error, but the file should be fixed at source. Here’s the issue in GitHub where this was tracked:

      By the way, this kind of technical question is better tracked in GitHub than here.

      Good luck and let me know how things go.



      1. Hi Jesus, thank you – it moved me a step! However, not yet to the very end: loading the file from the raw GitHub still ends with an error “Expected an RDF value here, found ‘=’ [line 58]” (which is the same loader error as when I downloaded the file to desktop and used the neo4j loader locally).

        Next such question I will move to the GitHub, as you suggested.
        Thank you!


  12. Hi Jesus,

    I was trying to address the multivalued property issue by editing some of your code in the setProp function in the DirectStatementLoader class where you commented:
    // we are overwriting multivalued properties.
    // An array should be created. Check that all data types are compatible.

    When I try to add an array to the map and run the RDFImportTest, I get an error:
    org.neo4j.driver.v1.exceptions.ClientException: Failed to invoke procedure `semantics.importRDF`: Caused by: java.lang.IllegalArgumentException: [[Ljava.lang.ArrayList;] is not a supported property value


  13. Hi Jesus, fantastic post and super helpful for someone getting into neo4j like myself. Do you have a followup on how to get RDF metadata, as mentioned in the post? I’m trying to get the rdf:type of an instance but can’t figure out how. I have a query like:

    MATCH (n:owl__NamedIndividual)-[*..3]->(m:owl__Class)
    WHERE n.rdfs__label = ‘2017 Volkswagen GTI Sport’
    RETURN n, m
    LIMIT 25

    And I’d like to get the ‘m’ node back where the label would be ‘GTI’ given the following class hierarchy:

    Automobile ➝ Volkswagen ➝ Compact ➝ GTI ➝ “2017 Volkswagen GTI Sport”


    1. Hi Sammy, thanks. Happy to hear you’re finding it useful.
      Answering your question: your query looks reasonable, but it really depends on how you’ve carried out the data import and what’s in the dataset you’ve imported.
      Statements of type `rdf:type` can be imported in two ways: (1) as LPG labels or (2) as separate nodes, depending on the value of the `typesToLabels` parameter. I’m afraid you’ll have to share the `importRDF` call you’ve used and at least a part of the dataset if you want me to try it at my end. Is that possible?
      RDF metadata is itself RDF, so in principle there should be no difference. But again, I should be able to respond more precisely with a dataset we can comment on.

      One final comment: could you bring this conversation over to github issues, please? I think it’s a more adequate platform for technical discussions.




  14. Hi Jesus,

    I also faced the same issue with the indexes.
    Can you please explain to me the purpose of the indexes?



    1. Hi Prashanti, if you are referring to the index on :Resource(uri), it’s needed to accelerate the many lookups needed to link resources to one another using relationships. Before you run any RDF import the index needs to be created.
      Hope this helps. Let me know if I misunderstood your question.


  15. Hello Jesus,

    I have my own rdf file which I am trying to load into Neo4j using Call semantics.importRDF(“file:///C:/Users/prash/OneDrive/Documents/Prashanti/RdfFile/myrdf.rdf”, “Turtle”, {})

    But I am facing an “Expected ‘.’, found ‘<’ [line 1]” error. Any idea what needs to be fixed?


  16. It looks like it’s not a Turtle file but RDF/XML. If that’s the case, you’ll have to use:
    semantics.importRDF(“file:///C:/…/myrdf.rdf”, “RDF/XML”, {})


  17. Trying to import from my local Windows machine with the command below but getting the following error.
    The file Sample_Payload_1.json is in the import folder on my local Windows machine.

    //import JSON-LD files into neo4j using the neosemantics plugin
    call semantics.importRDF(“file:///Sample_Payload_1.json”, “JSON-LD”,{ shortenUrls: true, typesToLabels: true, commitSize: 9000 });

    terminationStatus: KO
    extraInfo: “\Sample_Payload_1.json (The system cannot find the file specified)”

    thoughts? and ideas to resolve the issue?


  18. Are blank nodes still being labeled as BNode? I imported the STATO ontology using CALL semantics.importRDF(‘’, ‘RDF/XML’), but no node is labeled as BNode. There are many blank nodes because of owl:Restriction. What’s interesting is that there are also blank nodes of type owl:Class (labeled as owl__Class after import).

    I’m asking because I want to find a way to exclude the blank nodes in a search. Right now I’m using

    MATCH (n:owl__Class) where exists(n.rdfs__label) RETURN n

    But I think there must be a better way to do this.


  19. Hi
    Any idea how to get around the limitations of RDF, like the lack of support for attributes on relationships?
    I converted all my data to Turtle; how can I add attributes to the relations and insert them into neo4j?


  20. Hi Jesús,
    I am just getting started with exploring rdf data and neo4j in general using neosemantics. When I try to import the rdf dataset, I get an error with extraInfo stating “Unexpected character U+7C at index 52:..”. Is this related to the encoding of the data and how can I fix it?


  21. CALL semantics.importRDF(“file:///D:/All_Script/Protege/neo4j.turtle”,”Turtle”, true, 500) goes wrong:

    Procedure call provides too many arguments: got 4 expected no more than 3.

    Procedure semantics.importRDF has signature: semantics.importRDF(url :: STRING?, format :: STRING?, params = {} :: MAP?) :: terminationStatus :: STRING?, triplesLoaded :: INTEGER?, triplesParsed :: INTEGER?, namespaces :: MAP?, extraInfo :: STRING?, configSummary :: MAP?
    meaning that it expects at least 2 arguments of types STRING?, STRING?
    Description: Imports RDF from an url (file or http) and stores it in Neo4j as a property graph. Requires and index on :Resource(uri) (line 1, column 1 (offset: 0))
    “CALL semantics.importRDF(“file:///D:/All_Script/Protege/neo4j.turtle”,”Turtle”, true, 500)”



    1. There’s a comment highlighted in red indicating that the code in the post is no longer up to date and that you should go to the manual (also linked in the comment) for the most up-to-date reference on how to use it.
      Also for technical questions like this one probably github or the neo4j community site are certainly better channels than this blog.

      Looking forward to hearing from you over there. Enjoy!


