The Smart Data Analytics group is happy to announce SANSA 0.3 - the third release of the Scalable Semantic Analytics Stack. SANSA employs distributed computing via Apache Spark and Flink to provide scalable machine learning, inference, and querying capabilities for large knowledge graphs.
You can find the FAQ and usage examples at http://sansa-stack.net/faq/.
The following features are currently supported by SANSA:
* Reading and writing RDF files in N-Triples, Turtle, RDF/XML, and N-Quads formats
* Reading OWL files in various standard formats
* Support for multiple data partitioning techniques
* SPARQL querying via Sparqlify (with some known limitations until the next Spark 2.3.* release)
* SPARQL querying via conversion to Gremlin path traversals (experimental)
* Forward chaining inference for RDFS, RDFS Simple, and OWL-Horst (all in beta status), as well as OWL EL (experimental)
* Automatic inference plan creation (experimental)
* RDF graph clustering with different algorithms
* Rule mining from RDF graphs based on AMIE+
* Terminological decision trees (experimental)
* Anomaly detection (beta)
* Distributed knowledge graph embedding approaches: TransE (beta), DistMult (beta), several further algorithms planned
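As a minimal sketch of how the RDF layer is typically used, the following Scala snippet loads an N-Triples file into a Spark RDD and counts the triples. The reader entry point (`NTripleReader` in `net.sansa_stack.rdf.spark.io`) and its signature are assumptions based on the SANSA RDF API; exact package and method names may differ between releases, so please check the example code and API docs.

```scala
import java.net.URI

import org.apache.spark.sql.SparkSession
// Assumed SANSA reader entry point; verify against the SANSA-RDF API docs
import net.sansa_stack.rdf.spark.io.NTripleReader

object TripleCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SANSA triple count")
      .master("local[*]")
      .getOrCreate()

    // Load an N-Triples file into an RDD[org.apache.jena.graph.Triple]
    val triples = NTripleReader.load(spark, URI.create("data/example.nt"))

    println(s"Number of triples: ${triples.count()}")
    spark.stop()
  }
}
```

The same RDD can then be handed to the partitioning, querying, or inference layers listed above.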
Deployment and getting started:
* Template projects for SBT and Maven are available for both Apache Spark and Apache Flink to help you get started.
* The SANSA jar files are available on Maven Central, i.e., in most IDEs you can simply search for “sansa” to include the dependencies in Maven projects.
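For illustration, a Maven dependency on the RDF layer for Spark might look like the following. The coordinates shown here are an assumption; verify the exact group ID, artifact ID (including the Scala version suffix), and release version on Maven Central.

```xml
<dependency>
  <!-- Illustrative coordinates; check Maven Central for the exact values -->
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-rdf-spark_2.11</artifactId>
  <version>0.3.0</version>
</dependency>
```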
* Example code for various tasks is available.
* We provide interactive notebooks for running and testing code via Docker.
We want to thank everyone who helped to create this release, in particular the projects Big Data Europe, HOBBIT, SAKE, Big Data Ocean, SLIPO, QROWD and BETTER.
View this announcement on Twitter and the SDA blog.
The SANSA Development Team
Thank you for the very interesting work!
My questions are:
1) Where do you store final or intermediate results? Parquet, JanusGraph, Cassandra?
2) Is there integration with Spark GraphFrames?
Sincerely yours, Timur
On Mon, Dec 18, 2017 at 9:21 AM, Hajira Jabeen <[hidden email]> wrote:
Hi Timur,
Thanks for your interest in SANSA.
The intermediate results are mostly stored in RDDs (sometimes in Parquet files).
The use of GraphFrames is planned, as they are not yet officially integrated with Spark.
Please feel free to contact us in case of any questions.
On 18 December 2017 at 10:54, Timur Shenkao <[hidden email]> wrote: