Matches in ESWC 2020 for { ?s ?p ?o. }
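For reference, here is a minimal sketch of a SPARQL query that would return every match of the pattern above; the endpoint and graph holding the ESWC 2020 data are not named in this listing, so no FROM clause or prefixes are assumed.

```sparql
# Minimal sketch: select every triple matching the bare pattern { ?s ?p ?o . }.
# The dataset/endpoint for the ESWC 2020 graph is not given here, so it is left unspecified.
SELECT ?s ?p ?o
WHERE {
  ?s ?p ?o .
}
```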
- Paper.186_Review.0 issuedAt easychair.org.
- Paper.186_Review.0 issuedFor Conference.
- Paper.186_Review.0 releasedBy Conference.
- Paper.186_Review.0 hasContent "I am happy with the clarifications made by the authors. The rebuttal shows their capability and willingness to amend the paper in order to address my comments, mostly focusing on various claims that I found to not be carefully written or fully supported in the previous version of the paper. I have updated my score.
  ###############
  This submission investigates how to best use ElasticSearch (ES) for retrieving RDF triples, in order to achieve the best accuracy on several entity-based tasks. Several aspects are tested: field separation, field weighting, index extensions with properties beyond the triple, and off-the-shelf ES similarity metrics. What I enjoyed about the paper is its pragmatic approach of exploring how ES's rich functionality set can be tuned to RDF data and tasks. Another strong aspect of the paper is the multifaceted evaluation in section 5. These aspects, together with the fact that performing keyword search over RDF is an open challenge, the fairly clear paper story, and the public release of the code, make this paper decent and worth considering for acceptance. I do, however, have a few comments about various claims made in this paper. This, IMHO, weakens its position and contribution. While I trust that these points are generally addressable, I have serious doubts about whether they can be addressed in time for the camera-ready version.
  1) The authors claim to be the first to use ElasticSearch for retrieval of RDF triples and to investigate various indexing, querying and retrieval approaches. I have to disagree with this point. LOTUS (Ilievski et al., 2015; 2016), which was built on top of LodLaundromat data, also uses ElasticSearch to index and retrieve RDF triples, and investigates 32 retrieval options. This is not to say that the two approaches are the same: the present paper focuses on a systematic investigation of existing functionality for accurate retrieval, whereas LOTUS focused on scalability and was built on the assumption that the 'best' retrieval is application-dependent. In any case, this should be integrated into the paper, and the relation to LOTUS (and potentially other ES RDF engines) should be made clear.
  2) I find the related work to be long and not very concise. There are two pages explaining approach after approach without a direct comparison to the approach in the present paper, and then the positioning is briefly outlined in two paragraphs (which should probably be revisited according to point 1). I would suggest that this section be rewritten in a concise and focused way, explaining the general ideas in the two directions covered and how this work directly relates to them.
  3) While this paper does a very nice job of exploring and measuring the accuracy of different configurations, I missed the general picture. Several sections point to this from various perspectives (requirements, challenges, approach), but they are not mappable to each other. It would really help the paper if the main hypotheses and aspects were summarized fairly early in the paper, potentially aided by a scheme/table, and ideally already pointing to the results tables. It would also help if these sections were better integrated with each other (making pointers, aligning points, etc.).
  4) Besides the matter discussed in point 1, I find other claims made in the paper to be insufficiently supported or obvious. Specifically, is it justified to say that the analysis in section 5 is 'extensive' (also considering the systematicity note in point 3)? The SDM system in table 6 is on average 0.03 points better than your system - which is comparable to the improvements that you observe in the previous tables - is it fair to say it is a 'slight' improvement? And overall, section 5 claims 'high' performance - again, I am unsure whether this is justified.
  Minor comments:
  * On several occasions, the authors claim that the result is 'as expected' - where do these expectations come from? As it is, they seem fairly ad hoc. Addressing point 3 above would help here, I think.
  * Does one really need to be aware of the schema to rely on rdfs:label and rdfs:comment? In practice, these RDFS constructs are commonly used and can almost be assumed.
  * Section 2.3 discusses five types of objects and explains how type i (URIs) is indexed - how are types ii-v indexed?
  * The approach indexes triples - how about indexing 'statements' in general (e.g., quads)?
  * The last paragraph of 4.4 is quite dense and hard to follow - please rewrite it.
  * The approach does not seem very scalable - the performance of the baseline model (which is comparable to LOTUS) is similar to that of LOTUS, but the size of the index is 10x smaller. Can the authors comment on this? A comment on efficiency would be nice in the summary in 5.4 anyway.
  * How exactly are lists evaluated? Please expand on this in 5.2.
  * Please say what DL and b-connected are.
  * It would be nice to have a demo where users can play with the system and get a feeling for its behavior.
  * Ideally, the paper should be black-and-white readable (I'd suggest adapting figure 1 to enable this)".
- Paper.186_Review.1_Reviewer type RoleDuringEvent.
- Paper.186_Review.1_Reviewer label "Anonymous Reviewer for Paper 186".
- Paper.186_Review.1_Reviewer withRole ReviewerRole.
- Paper.186_Review.1_Reviewer withRole AnonymousReviewerRole.
- Paper.186_Review.1 type ReviewVersion.
- Paper.186_Review.1 issued "2001-01-14T15:05:00.000Z".
- Paper.186_Review.1 creator Paper.186_Review.1_Reviewer.
- Paper.186_Review.1 hasRating ReviewRating.2.
- Paper.186_Review.1 hasReviewerConfidence ReviewerConfidence.4.
- Paper.186_Review.1 reviews Paper.186.
- Paper.186_Review.1 issuedAt easychair.org.
- Paper.186_Review.1 issuedFor Conference.
- Paper.186_Review.1 releasedBy Conference.
- Paper.186_Review.1 hasContent "In this paper, the authors propose a keyword search approach over RDF datasets. The paper describes a practical approach to a specific problem tackled in the last few years. It is well structured, in line with this kind of paper, and it is easy to read and follow. The proposed tool is publicly available on GitHub, a key aspect to guarantee reproducibility of the results.
  Recommendations:
  - Section 4 (the proposed approach) uses only four of sixteen pages. From my point of view, this section should receive more attention, but this isn't mandatory.
  - The cited bibliography should be updated: more than 50 percent of the references (16/27) are not from the last five years".
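The review triples above (reviews, hasRating, hasReviewerConfidence, hasContent) suggest a straightforward way to pull all reviews of a given paper. A minimal sketch, assuming the listing's unprefixed names live in a single vocabulary/resource namespace; the ":" prefix below is a placeholder, not taken from the data.

```sparql
# Sketch only: ":" is a hypothetical prefix standing in for the dataset's
# namespace, which this listing does not spell out.
PREFIX : <http://example.org/eswc2020#>

SELECT ?review ?rating ?confidence ?content
WHERE {
  ?review :reviews :Paper.186 ;            # reviews of Paper 186
          :hasRating ?rating ;
          :hasReviewerConfidence ?confidence ;
          :hasContent ?content .
}
```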
- Paper.188 type SubmissionsPaper.
- Paper.188 label "FaVLib: A Fact Validation Library based on Knowledge Graph Embeddings".
- Paper.188 title "FaVLib: A Fact Validation Library based on Knowledge Graph Embeddings".
- Paper.188 issued "2001-12-04T18:38:00.000Z".
- Paper.188 authorList b0_g505.
- Paper.188 submission Paper.188.
- Paper.188 track Track.Resources%20Track.
- b0_g505 first Author.188.1.
- b0_g505 rest b0_g506.
- Author.188.1 type RoleDuringEvent.
- Author.188.1 label "Ammar Ammar, 1st Author for Paper 188".
- Author.188.1 withRole PublishingRole.
- Author.188.1 isHeldBy Ammar_Ammar.
- b0_g506 first Author.188.2.
- b0_g506 rest b0_g507.
- Author.188.2 type RoleDuringEvent.
- Author.188.2 label "Remzi Celebi, 2nd Author for Paper 188".
- Author.188.2 withRole PublishingRole.
- Author.188.2 isHeldBy Remzi_Celebi.
- b0_g507 first Author.188.3.
- b0_g507 rest nil.
- Author.188.3 type RoleDuringEvent.
- Author.188.3 label "Michel Dumontier, 3rd Author for Paper 188".
- Author.188.3 withRole PublishingRole.
- Author.188.3 isHeldBy Michel_Dumontier.
- Ammar_Ammar type Person.
- Ammar_Ammar name "Ammar Ammar".
- Ammar_Ammar label "Ammar Ammar".
- Ammar_Ammar holdsRole Author.188.1.
- Remzi_Celebi type Person.
- Remzi_Celebi name "Remzi Celebi".
- Remzi_Celebi label "Remzi Celebi".
- Remzi_Celebi holdsRole Author.188.2.
- Michel_Dumontier type Person.
- Michel_Dumontier name "Michel Dumontier".
- Michel_Dumontier label "Michel Dumontier".
- Michel_Dumontier holdsRole Author.188.3.
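The authorList values above are RDF collections: each blank node (b0_g505, b0_g506, b0_g507) holds a first pointer to an author role and a rest pointer to the next node, terminated by nil. Assuming first/rest map to rdf:first/rdf:rest (an assumption, since the listing omits prefixes), the authors of a paper can be collected with a property path, as in this sketch; the ":" prefix is again a placeholder.

```sparql
# Sketch: collect the author roles and people linked from Paper 188's author list.
# ":" is a placeholder prefix; mapping first/rest to rdf:first/rdf:rest is assumed.
PREFIX :    <http://example.org/eswc2020#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?authorRole ?person ?name
WHERE {
  :Paper.188 :authorList/rdf:rest*/rdf:first ?authorRole .
  ?authorRole :isHeldBy ?person .
  ?person :name ?name .
}
```

Note that SPARQL does not guarantee result order for property-path matches, so if author order matters it still has to be recovered explicitly from the list structure (or from the ordinal in the role labels).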
- Paper.189 type SubmissionsPaper.
- Paper.189 label "A Use Case-Driven Metadata Model and Repository".
- Paper.189 title "A Use Case-Driven Metadata Model and Repository".
- Paper.189 issued "2001-12-04T18:44:00.000Z".
- Paper.189 authorList b0_g508.
- Paper.189 submission Paper.189.
- Paper.189 track Track.Resources%20Track.
- b0_g508 first Author.189.1.
- b0_g508 rest b0_g509.
- Author.189.1 type RoleDuringEvent.
- Author.189.1 label "Manuel Fiorelli, 1st Author for Paper 189".
- Author.189.1 withRole PublishingRole.
- Author.189.1 isHeldBy Manuel_Fiorelli.
- b0_g509 first Author.189.2.
- b0_g509 rest b0_g510.
- b0_g510 first Author.189.3.
- b0_g510 rest b0_g511.
- Author.189.3 type RoleDuringEvent.
- Author.189.3 label "Tiziano Lorenzetti, 3rd Author for Paper 189".
- Author.189.3 withRole PublishingRole.
- Author.189.3 isHeldBy Tiziano_Lorenzetti.
- b0_g511 first Author.189.4.
- b0_g511 rest b0_g512.
- Author.189.4 type RoleDuringEvent.
- Author.189.4 label "Andrea Turbati, 4th Author for Paper 189".
- Author.189.4 withRole PublishingRole.
- Author.189.4 isHeldBy Andrea_Turbati.
- b0_g512 first Author.189.5.
- b0_g512 rest b0_g513.
- Author.189.5 type RoleDuringEvent.
- Author.189.5 label "Peter Schmitz, 5th Author for Paper 189".
- Author.189.5 withRole PublishingRole.
- Author.189.5 isHeldBy Peter_Schmitz.
- b0_g513 first Author.189.6.
- b0_g513 rest b0_g514.
- Author.189.6 type RoleDuringEvent.
- Author.189.6 label "Willem van Gemert, 6th Author for Paper 189".
- Author.189.6 withRole PublishingRole.
- Author.189.6 isHeldBy Willem_van_Gemert.
- b0_g514 first Author.189.7.
- b0_g514 rest b0_g515.
- Author.189.7 type RoleDuringEvent.
- Author.189.7 label "Denis Dechandon, 7th Author for Paper 189".
- Author.189.7 withRole PublishingRole.
- Author.189.7 isHeldBy Denis_Dechandon.