While writing the Avro schema code for the POC, I discovered that Asami has one feature that really saved us some work: it supports recursively fetching entities as maps, while recognizing structural loops and ensuring no infinite recursion occurs.
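To make that concrete, here is a small sketch of the nested entity fetch. This is from memory of the asami.core API (connection URL, lookup style, and the nested flag are assumptions), so details may differ:

```clojure
;; Sketch from memory of the asami.core API; exact details may differ.
(require '[asami.core :as d])

(def conn (d/connect "asami:mem://poc"))

;; A small tree: an entity whose :address is a nested sub-entity.
@(d/transact conn {:tx-data [{:db/ident "alice"
                              :name     "Alice"
                              :address  {:city "Utrecht"}}]})

;; `entity` pulls an entity back out as a map; the trailing `true`
;; asks for recursive expansion of sub-entities. Asami detects
;; structural loops, so this cannot recurse forever even on cyclic data.
(d/entity (d/db conn) "alice" true)  ; assuming lookup by :db/ident value
```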
Since it's also a triple store, I decided to take advantage of this and store our triples in Asami. I mean: triples are triples, right? Well...
The good news is: it works, and it did indeed save us that trouble. But I do think it came at the cost of some things I didn't realize up front:
Even though Asami's query language is very much like SPARQL, it is not SPARQL, and it isn't as powerful yet. For example, I cannot express (to my knowledge, at least) a recursive property path.
RDF semantics, such as how to parse lists, are unknown to it. I've now written a custom function to parse an RDF list into a Clojure sequence. This might not have been necessary had we used a triple store that understands RDF semantics.
There might be more, but these are the most pressing issues that come to mind now.
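For reference, the RDF-list parsing mentioned above boils down to walking the rdf:first/rdf:rest cons cells until rdf:nil. A hypothetical, self-contained sketch (the names and the map-backed lookup are illustrative, not our actual code):

```clojure
;; Hypothetical sketch: walk an RDF collection (rdf:first/rdf:rest cons
;; cells) into a Clojure vector. `lookup` stands in for a single-triple
;; lookup against whatever store is in use.
(def rdf-first :rdf/first)
(def rdf-rest  :rdf/rest)
(def rdf-nil   :rdf/nil)

(defn rdf-list->seq
  "Follow rdf:rest links from `head` until rdf:nil, collecting rdf:first values."
  [lookup head]
  (loop [node head, acc []]
    (if (= node rdf-nil)
      acc
      (recur (lookup node rdf-rest)
             (conj acc (lookup node rdf-first))))))

;; Usage with a simple map-backed lookup:
(def triples {:n1 {rdf-first 1 rdf-rest :n2}
              :n2 {rdf-first 2 rdf-rest rdf-nil}})

(rdf-list->seq (fn [s p] (get-in triples [s p])) :n1)
;; => [1 2]
```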
So, how do we proceed from here? Basically, I see three options.
Option 1: This is fine.
Sure, it's a trade-off, but it's a fine one.
Option 2: Restrict what we use Asami for.
Perhaps we can keep Asami for what it does so well: building trees from the triple store. We could consider using another, RDF-specific DB that supports SPARQL (RDF4j comes to mind), and use a CONSTRUCT query to produce the triples we load into Asami. From there we can let Asami do its magic.
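The handoff in this option could be a CONSTRUCT query that materializes exactly the subgraph we want Asami to turn into a tree. A hypothetical example (the `ex:` vocabulary is made up for illustration); note that it also uses the kind of recursive property path Asami can't express:

```sparql
# Hypothetical: select the subgraph reachable from a schema root,
# emit it as plain triples, and load those into Asami.
PREFIX ex: <http://example.org/schema#>

CONSTRUCT { ?s ?p ?o }
WHERE {
  ?root a ex:Schema .
  ?root (ex:field|ex:type)* ?s .   # recursive property path, supported by RDF4j
  ?s ?p ?o .
}
```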
Option 3: Ditch Asami altogether.
Maybe this extra dependency is not worth it. How hard is it really to build a tree structure from SPARQL query results? How much does Asami really give us, at the cost of the "complexity" of having an extra dependency woven into our logic?
For options 2 and 3, we can subdivide further:
- use an RDF framework written in Clojure, or a Clojure library that wraps a Java one
- use a Java RDF framework directly (like RDF4j or Jena)
Also keep in mind that Paula has expressed that she intends to extend Asami with RDF/SPARQL semantics in the future. But of course, this might take a very long time and comes with little guarantee.
My current suggestion
I think Asami solves something fundamental for us: turning graphs into trees. I would like to keep making use of that. Moreover, it's nice that Paula is a Semantic Web expert (most notably, she was Lead Editor for SPARQL 1.1) who loves to help and implement features for you.
That being said, having SPARQL and RDF semantics is definitely a nice-to-have. We could consider Grafter, a Clojure wrapper for RDF4j, in conjunction with flint as a SPARQL DSL. Generating query strings with flint and plugging them into RDF4j via Grafter seems like a beautiful match.
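A rough sketch of how that combination might look. The API names here (flint's `format-query`, Grafter's `sparql-repo`/`->connection`/`query`) and the endpoint URL are from my reading of the two libraries and are assumptions to verify, not a working integration:

```clojure
;; Hypothetical sketch: build a SPARQL string with flint, run it via
;; Grafter's RDF4j wrapper. Function names and namespaces may need
;; adjusting against the libraries' actual APIs.
(require '[com.yetanalytics.flint :as f]
         '[grafter-2.rdf4j.repository :as repo])

;; flint turns a Clojure data structure into a SPARQL query string.
(def q
  (f/format-query
    '{:prefixes {:ex "<http://example.org/>"}
      :select   [?s ?label]
      :where    [[?s :ex/label ?label]]}))

;; Grafter hands the string to an RDF4j repository. The endpoint URL
;; is a placeholder.
(with-open [conn (repo/->connection
                   (repo/sparql-repo "http://localhost:7200/repositories/poc"))]
  (doall (repo/query conn q)))
```

The appeal of this split is that queries stay as plain Clojure data (composable, testable) while all RDF semantics live in RDF4j.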
It definitely needs mulling over though.