Matches in ESWC 2020 for { ?s ?p ?o. }
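For reference, the matches listed below could be reproduced with a query along the following lines (a minimal sketch only; the FROM graph IRI and the LIMIT are illustrative assumptions, not part of this listing):

```sparql
# Dump every triple in the dataset, one row per (subject, predicate, object).
SELECT ?s ?p ?o
FROM <http://example.org/eswc2020>   # hypothetical graph IRI; point this at the loaded data
WHERE {
  ?s ?p ?o .
}
LIMIT 1000                           # optional cap to avoid returning the whole graph at once
```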
- Author.266.1 label "Endri Kacupaj, 1st Author for Paper 266".
- Author.266.1 withRole PublishingRole.
- Author.266.1 isHeldBy Endri_Kacupaj.
- Hamid_Zafar type Person.
- Hamid_Zafar name "Hamid Zafar".
- Hamid_Zafar label "Hamid Zafar".
- Hamid_Zafar holdsRole Author.69.2.
- Maria_Maleshkova type Person.
- Maria_Maleshkova name "Maria Maleshkova".
- Maria_Maleshkova label "Maria Maleshkova".
- Maria_Maleshkova holdsRole Author.69.4.
- Maria_Maleshkova holdsRole Author.169.5.
- Author.169.5 type RoleDuringEvent.
- Author.169.5 label "Maria Maleshkova, 5th Author for Paper 169".
- Author.169.5 withRole PublishingRole.
- Author.169.5 isHeldBy Maria_Maleshkova.
- Sebastien_Ferre type Person.
- Sebastien_Ferre name "Sebastien Ferre".
- Sebastien_Ferre label "Sebastien Ferre".
- Sebastien_Ferre holdsRole Paper.69_Review.0_Reviewer.
- Sebastien_Ferre mbox mailto:ferre@irisa.fr.
- Paper.69_Review.0_Reviewer type RoleDuringEvent.
- Paper.69_Review.0_Reviewer label "Sebastien Ferre, Reviewer for Paper 69".
- Paper.69_Review.0_Reviewer withRole ReviewerRole.
- Paper.69_Review.0_Reviewer withRole NonAnonymousReviewerRole.
- Paper.69_Review.0_Reviewer isHeldBy Sebastien_Ferre.
- Paper.69_Review.0 type ReviewVersion.
- Paper.69_Review.0 issued "2001-01-29T21:06:00.000Z".
- Paper.69_Review.0 creator Paper.69_Review.0_Reviewer.
- Paper.69_Review.0 hasRating ReviewRating.1.
- Paper.69_Review.0 hasReviewerConfidence ReviewerConfidence.3.
- Paper.69_Review.0 reviews Paper.69.
- Paper.69_Review.0 issuedAt easychair.org.
- Paper.69_Review.0 issuedFor Conference.
- Paper.69_Review.0 releasedBy Conference.
- Paper.69_Review.0 hasContent "Thank your for the clarifications in the rebuttal, in particular about the covered expressivity. The scope/limits have to be clearly stated in the paper because "complex queries" can have a very different meaning for different people (for me, they're really simple). # Strengths S1 - first QA dataset that comes with answer verbalizations, as an extension of LC-QuAD S2 - several machine learning models are given as baseline for future evaluation by the community S3 - the resources are available through a dedicated web page and a repository # Weaknesses W1 - the range of expressivity of the covered questions is not clearly defined W2 - the reusability of the dataset is limited by the fact that often many answer verbalizations are possible W3 - the dataset is large (5000 questions) but this may not be enough for machine learning The dataset should also include the raw answers, as a list, for easier reusability. # Summary The main proposed resource is an extension of the LC-QuAD dataset, which is a question-answer collection for evaluating Question-Answering (QA) approaches, with a new field that contains the verbalization of the answers. The verbalizations were first generated automatically based on templates, and second were manually curated by following some style rules (e.g., active voice). The secondary resource is made of machine learning models based on neural networks to generate the answer verbalization from the question or formal query. They serve as baselines for the production of templates for answer verbalization. Scores are given in the paper for each model, and it is shown that there is ample room for improvement. # Discussion QA systems generally have low accuracy in open domain questions, ranging from 20% to 80%. It is therefore important, when answers are returned by a QA system, to give insight about how the QA system came to those answers. The authors propose to generate a verbalization of the answers that reflects the intention of the formal query that was used to retrieved the answers. I agree with the authors that this is more natural than showing the formal query or even to verbalize the formal query before listing the answers, at least in a vocal dialogue. [W1] The authors claim that the dataset covers complex questions and not only factoid questions. However, from what I have seen, all questions use either a ASK query or a SELECT DISTINCT query with a single projection, either ?uri or COUNT(?uri). We need to know more precisely the range of questions: - how many projections in the SELECT clause? at most 1? (several projections would make the verbalization more useful and interesting) - which aggregators? only COUNT? - how many triple patterns at most? what's the distribution? - are there cycles in the graph patterns? - are there graph patterns with UNION, OPTIONAL or MINUS? - what about CONSTRUCT queries (more open questions) that would really make verbalization compulsory? I agree several features are clearly future work, but it seems fair to state clearly what is covered by the dataset, and what is future work. [W2] The main difficulty I see in the proposed approach is that many correct answer verbalizations are possible. On the contrary, there is in general a single correct answer set for a given question. Although I agree that verbalizing answers is a good idea, using your own verbalizations to evaluate other verbalizations seems a bit fragile. 
Therefore, your verbalizations can be useful as examples or as target for machine learning, but if I come up with my own verbalizations, it is not clear how I can compare to yours. # Minor comments - Fig.1: I can't see QALD datasets, this should be added - Table 1: 33k -> 33K, 11k -> 11K - p.5: the users is --> the user is - p.6: publicity --> publicly - p.7: I would switch 'Generate' and 'Create' in the paragraph headers because the verbalization templates are manually created, while the initial verbalizations are automatically generated. - p.9: suitability --> sustainability - p.10: straight forward --> (in one word) - p.11: evaluation metrics: please give the range of values for each measure, and whether it is better to have lower values or higher values. Some of this information is given later but it would be better here, close to their definitions."".
- Paper.69_Review.1_Reviewer type RoleDuringEvent.
- Paper.69_Review.1_Reviewer label "Anonymous Reviewer for Paper 69".
- Paper.69_Review.1_Reviewer withRole ReviewerRole.
- Paper.69_Review.1_Reviewer withRole AnonymousReviewerRole.
- Paper.69_Review.1 type ReviewVersion.
- Paper.69_Review.1 issued "2001-01-29T12:09:00.000Z".
- Paper.69_Review.1 creator Paper.69_Review.1_Reviewer.
- Paper.69_Review.1 hasRating ReviewRating.2.
- Paper.69_Review.1 hasReviewerConfidence ReviewerConfidence.3.
- Paper.69_Review.1 reviews Paper.69.
- Paper.69_Review.1 issuedAt easychair.org.
- Paper.69_Review.1 issuedFor Conference.
- Paper.69_Review.1 releasedBy Conference.
- Paper.69_Review.1 hasContent "Overall evaluation: 2 (accept)"".
- Paper.70 type SubmissionsPaper.
- Paper.70 label "RDF reasoning on large ontologies for cultural heritage: a study on Wikidata".
- Paper.70 title "RDF reasoning on large ontologies for cultural heritage: a study on Wikidata".
- Paper.70 issued "2001-12-03T12:54:00.000Z".
- Paper.70 authorList b0_g179.
- Paper.70 submission Paper.70.
- Paper.70 track Track.Ontologies%20and%20Reasoning.
- b0_g179 first Author.70.1.
- b0_g179 rest b0_g180.
- Author.70.1 type RoleDuringEvent.
- Author.70.1 label "Nuno Freire, 1st Author for Paper 70".
- Author.70.1 withRole PublishingRole.
- Author.70.1 isHeldBy Nuno_Freire.
- b0_g180 first Author.70.2.
- b0_g180 rest nil.
- Author.70.2 type RoleDuringEvent.
- Author.70.2 label "Diogo Proença, 2nd Author for Paper 70".
- Author.70.2 withRole PublishingRole.
- Author.70.2 isHeldBy Diogo_Proença.
- Nuno_Freire type Person.
- Nuno_Freire name "Nuno Freire".
- Nuno_Freire label "Nuno Freire".
- Nuno_Freire holdsRole Author.70.1.
- Diogo_Proença type Person.
- Diogo_Proença name "Diogo Proença".
- Diogo_Proença label "Diogo Proença".
- Diogo_Proença holdsRole Author.70.2.
- b0_g182 first Author.72.2.
- b0_g182 rest b0_g183.
- b0_g183 first Author.72.3.
- b0_g183 rest b0_g184.
- b0_g184 first Author.72.4.
- b0_g184 rest nil.
- Paper.72_Review.0_Reviewer type RoleDuringEvent.
- Paper.72_Review.0_Reviewer label "Anonymous Reviewer for Paper 72".
- Paper.72_Review.0_Reviewer withRole ReviewerRole.
- Paper.72_Review.0_Reviewer withRole AnonymousReviewerRole.
- Paper.72_Review.0 type ReviewVersion.
- Paper.72_Review.0 issued "2001-01-30T08:07:00.000Z".
- Paper.72_Review.0 creator Paper.72_Review.0_Reviewer.
- Paper.72_Review.0 hasRating ReviewRating.2.
- Paper.72_Review.0 hasReviewerConfidence ReviewerConfidence.3.
- Paper.72_Review.0 reviews Paper.72.
- Paper.72_Review.0 issuedAt easychair.org.
- Paper.72_Review.0 issuedFor Conference.
- Paper.72_Review.0 releasedBy Conference.
- Paper.72_Review.0 hasContent "The paper describes the creation of ESBM, a benchmark for testing entity summarization. This is definitely a relevant problem for the semantic web Community. ESBM is manually created and made publicly available with a permanent identifier on w3id.org. The main objective is to create a resource of general purpose, in comparison to state-of-the-art datasets. It is also intended that the resource shall be permanently available. The methodology followed by the authors was to sample prominent datasets (DBpedia and LinkedMDB). The result is a curated dataset from which ground truth summaries are produced manually. The paper described in sufficient detail the methodology for creating these ground truth summaries. One concern that arises, regarding this resource, is that the ground truth is intrinsically bias. Further, it seems rather difficult to main and verify the dataset. Nevertheless, given the scarcity of benchmarks available for evaluating entity summarization, ESBM shall provide reliable starting point for benchmarking, thus improving the state or the art."".
- Paper.72_Review.1 type ReviewVersion.
- Paper.72_Review.1 issued "2001-01-18T11:36:00.000Z".
- Paper.72_Review.1 creator Paper.72_Review.1_Reviewer.
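The first review of Paper 69 above (point W1) observes that every question in the dataset under review maps to either an ASK query or a SELECT DISTINCT query with a single projection, ?uri or COUNT(?uri). A minimal sketch of those shapes, with hypothetical DBpedia-style IRIs used purely for illustration (each query below is standalone; the prefixes apply to each):

```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>

# Shape 1: boolean question ("Did person X write book Y?")
ASK { dbr:Some_Book dbo:author dbr:Some_Person . }   # hypothetical resources

# Shape 2: single-projection factoid question ("Which books did person X write?")
SELECT DISTINCT ?uri
WHERE { ?uri dbo:author dbr:Some_Person . }

# Shape 3: count question ("How many books did person X write?")
SELECT (COUNT(DISTINCT ?uri) AS ?count)
WHERE { ?uri dbo:author dbr:Some_Person . }
```

The reviewer's point is that none of these shapes exercises multiple projections, aggregators other than COUNT, or graph patterns with UNION, OPTIONAL, or MINUS.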