
ESWC 2020


After rebuttal: The authors have addressed most of my comments and the vocabulary is now available, so I decided to raise my score to accept.

*************

This paper describes an approach to semantically represent study results so that users can aggregate and compare them more efficiently, and an application that eases the search, exploration, and comparison of studies in the domain of human cooperation. The paper is well written, easy to follow, and highly relevant to the conference in general and this track in particular. I also believe the topic is quite novel and has a lot of potential, as it could really help researchers become more efficient when comparing their work or exploring the state of the art. Therefore, I think the paper would be a nice addition to the conference, but I also felt that the approach was not completely mature. I discuss below some of the points that I think have room for improvement:

- The vocabulary is listed as a contribution, but I tried to resolve http://data.coda.org/coda/vocab and got a 404.

- The paper claims to help with hypothesis generation, but this is not explained or demonstrated in the evaluation. I suggest removing such claims.

- The authors state that reusing existing vocabularies was not within the scope of this early release. However, if the contribution is the semantic representation, this point needs to be addressed, specifically because some of the terms have a very simple mapping to schema.org or the Investigation, Study, Assay (ISA) model: https://isa-tools.org/

- Regarding the data model, I find it very confusing that DOI and Paper are concepts described with the same properties. A DOI represents a paper, so I would use it as the paper's identifier, or import the DOI metadata and associate it with the paper. As it stands, one could read the title or author of a DOI as the title or author of the identifier, not of the target paper. In addition, some properties seem to have been introduced just to define a hierarchy, i.e., to group subproperties under them (e.g., scholarly prop). If these properties do not have any semantics associated with them, it is often recommended to delete them from the ontology; instead, you can define a category and add it as an annotation property (see the Turtle sketch after this review).

- I think the paper will become really strong once the user-based evaluation is published. The formative evaluation presented is appropriate, but I would like to know more about how the community converged on those 86 independent variables. This is often challenging, and I would like to see whether there are guidelines from the community and how consensus on those variables was reached.

- Some example SPARQL queries showing how to retrieve contents would really help in understanding the data and the model behind it (a sketch of the kind of query I mean follows this review).

- Where are the APIs used by the application?

- Even though I understood the intent behind Table 4, I don't quite understand the table itself: how is the information shown in another paper relevant to the studies assessed by the experts in this experiment?

- How would one submit a custom visualization to the system?

Cosmetic issues:

- The paper gets repetitive about its contributions; I think I read them three times. Also, "more automatically" does not sound correct to me; I would say "facilitate automation" or performing meta-analysis "semi-automatically".

- There are a few typos, so I recommend a proofread. In particular, "softwares" is incorrect; use "software" or "software tools" instead.
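
To make the data-model suggestion above concrete, here is a minimal Turtle sketch of the modeling I have in mind. All class and property names here are hypothetical placeholders, not the authors' actual vocabulary terms:

    @prefix ex:     <http://example.org/coda/vocab#> .
    @prefix schema: <http://schema.org/> .
    @prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix owl:    <http://www.w3.org/2002/07/owl#> .

    # The DOI is a plain identifier of the paper, not a resource that
    # itself carries a title or authors.
    ex:paper123 a schema:ScholarlyArticle ;
        schema:identifier "10.1000/example.doi" ;
        schema:name "An Example Study on Human Cooperation" ;
        schema:author ex:someAuthor .

    # Instead of a semantics-free grouping property such as "scholarly prop",
    # attach a category to each property via an annotation property.
    ex:category a owl:AnnotationProperty ;
        rdfs:label "category" .

    ex:numberOfParticipants ex:category "study descriptor" .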

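As an illustration of the kind of SPARQL query I am asking the authors to provide, the following sketch retrieves papers, the studies they report, and each study's effect size. The predicate names (ex:reportsStudy, ex:effectSize) are my guesses for illustration only, not the actual vocabulary:

    PREFIX ex:     <http://example.org/coda/vocab#>
    PREFIX schema: <http://schema.org/>

    # Hypothetical query: list papers, the studies they report,
    # and each study's effect size, largest effects first.
    SELECT ?paperTitle ?study ?effectSize
    WHERE {
      ?paper  a schema:ScholarlyArticle ;
              schema:name     ?paperTitle ;
              ex:reportsStudy ?study .
      ?study  ex:effectSize   ?effectSize .
    }
    ORDER BY DESC(?effectSize)
    LIMIT 10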