Matches in ESWC 2020 for { <https://metadata.2020.eswc-conferences.org/rdf/submissions/Paper.237_Review.0> ?p ?o. }
Showing items 1 to 10 of 10, with 100 items per page.
- Paper.237_Review.0 type ReviewVersion.
- Paper.237_Review.0 issued "2001-02-06T12:17:00.000Z".
- Paper.237_Review.0 creator Paper.237_Review.0_Reviewer.
- Paper.237_Review.0 hasRating ReviewRating.1.
- Paper.237_Review.0 hasReviewerConfidence ReviewerConfidence.3.
- Paper.237_Review.0 reviews Paper.237.
- Paper.237_Review.0 issuedAt easychair.org.
- Paper.237_Review.0 issuedFor Conference.
- Paper.237_Review.0 releasedBy Conference.
- Paper.237_Review.0 hasContent "This paper presents an architecture to generate a portable Question Answering (QA) system over RDF data. The architecture extends a previous QA system, focusing on the portability problem. The system allows non-SPARQL-expert users to query RDF datasets using natural language. The architecture is based on machine learning algorithms and confidence scores. I believe this is an interesting problem. The article is well organized and motivated. However, I am not sure about the real contribution of the proposed extension. It seems that the main contribution of this paper is a new step in the previous workflow, where the user has to evaluate the query results (as shown in Figure 6). This evaluation is then used to retrain the system to obtain better solutions. If I have not misunderstood, does the non-expert user have to know the results, or at least part of them, when posing the query? I believe that in a real scenario an end user does not know the results of a query in advance. On the other hand, the experiments show that the proposed architecture improves F-measure significantly. Then, I am mainly concerned about which users carried out the experiments and what level of prior knowledge about the RDF dataset they had. I think the authors should clarify these questions. After reading the rebuttal response I have changed my decision to weak accept.".
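The listing above is the result of a simple triple-pattern lookup with the review resource as subject. A sketch of the corresponding SPARQL query (the `rev:` prefix is an assumption introduced here for readability; the full subject URI is taken from the match header):

```sparql
PREFIX rev: <https://metadata.2020.eswc-conferences.org/rdf/submissions/>

# List every predicate and object attached to the review resource,
# i.e. the ten triples shown above.
SELECT ?p ?o
WHERE {
  rev:Paper.237_Review.0 ?p ?o .
}
```

Run against the ESWC 2020 metadata graph, this pattern would return one row per triple in the listing, e.g. (`type`, `ReviewVersion`) and (`reviews`, `Paper.237`).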