Matches in ESWC 2020 for { <https://metadata.2020.eswc-conferences.org/rdf/submissions/Paper.88_Review.0> ?p ?o. }
Showing items 1 to 10 of 10, with 100 items per page.
- Paper.88_Review.0 type ReviewVersion.
- Paper.88_Review.0 issued "2001-01-29T12:14:00.000Z".
- Paper.88_Review.0 creator Paper.88_Review.0_Reviewer.
- Paper.88_Review.0 hasRating ReviewRating.2.
- Paper.88_Review.0 hasReviewerConfidence ReviewerConfidence.4.
- Paper.88_Review.0 reviews Paper.88.
- Paper.88_Review.0 issuedAt easychair.org.
- Paper.88_Review.0 issuedFor Conference.
- Paper.88_Review.0 releasedBy Conference.
- Paper.88_Review.0 hasContent:

  "Comment after Rebuttal: Thank you for the clarifying comments. My decision (accept) remains unchanged.

  ---

  The authors present a benchmark created for the evaluation of systems that match tabular data to knowledge graphs, and explain the underlying generation algorithm. Furthermore, they give insights into the first edition of the SemTab challenge, in which they invited participants to evaluate their systems against the generated benchmark.

  All in all, the paper is well structured and nice to read. The necessary concepts are introduced properly and an appropriate amount of background information is provided. The topic itself is very relevant: despite the existence of a couple of evaluation data sets, it is not guaranteed that all systems are evaluated against them in the same way. Additionally, the current benchmarks lack either appropriate size or annotation quality.

  My wish for the next edition of the SemTab challenge would (as the authors already mentioned) also be a more realistic data set. Some ideas from my side (which you have very likely heard already):

  - The manual or automatic identification of some general "kinds" of noise that occur in the existing benchmarks; these can then be introduced into the generated data set on a random basis for a more realistic setting.
  - Include tables and/or entities that cannot be mapped at all, as is often the case in real-world scenarios (of course, only if that is in the scope of the SemTab challenge). Realistic unmappable entities could easily be created by generating the data set from a complete knowledge graph but letting the participants match against only a part of it.

  Some minor remarks:

  - Page 8, Section 5.1: "CTA, CEA and CTA" -> "CTA, CEA and CPA"
  - Page 10, Section 5.1 (last sentence): You say that "AH is used as primary score to encourage perfect annotations". As far as I understand the metrics, the AP score rewards perfect mappings even more, doesn't it?
  - Page 14 (Evaluation platform): "One the one hand" -> "On the one hand"

  I suggest accepting the paper, as it is well written, relevant, and does not contain any major or minor flaws."
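The listing above is the result of a single triple-pattern lookup: every predicate-object pair attached to the review resource. As a minimal sketch of how to reproduce it (assuming the ESWC 2020 metadata is served through a SPARQL endpoint; the endpoint URL itself is not given on this page):

  # Retrieve all predicate-object pairs for the review resource.
  # LIMIT 100 mirrors the page size shown above.
  SELECT ?p ?o
  WHERE {
    <https://metadata.2020.eswc-conferences.org/rdf/submissions/Paper.88_Review.0> ?p ?o .
  }
  LIMIT 100

If the server supports content negotiation, dereferencing the resource URI with an RDF Accept header (e.g. text/turtle) may return the same triples directly, though that is an assumption about this particular deployment.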