Matches in ESWC 2020 for { <https://metadata.2020.eswc-conferences.org/rdf/submissions/Paper.89_Review.1> ?p ?o. }
Showing items 1 to 10 of 10, with 100 items per page.
- Paper.89_Review.1 type ReviewVersion.
- Paper.89_Review.1 issued "2001-01-15T14:40:00.000Z".
- Paper.89_Review.1 creator Paper.89_Review.1_Reviewer.
- Paper.89_Review.1 hasRating ReviewRating.2.
- Paper.89_Review.1 hasReviewerConfidence ReviewerConfidence.4.
- Paper.89_Review.1 reviews Paper.89.
- Paper.89_Review.1 issuedAt easychair.org.
- Paper.89_Review.1 issuedFor Conference.
- Paper.89_Review.1 releasedBy Conference.
- Paper.89_Review.1 hasContent "The paper introduces and evaluates the effectiveness of a novel loss function used to learn to predict links from a knowledge graph by relying on relation and entity embeddings. In particular, the authors evaluate the proposed link prediction approach by relying on a scholarly knowledge graph and focusing on the recommendation of potential co-authorship. After introducing and motivating the general context of their work, the authors present the scholarly knowledge graphs they built in order to learn to recommend co-authorship links among Author entities: the structure, the data sources and the content (in terms of number of entities and relations) of each knowledge graph are described. Then an overview of current approaches to embedding-based knowledge graph link prediction is provided: in particular, the weak aspects of the use of the Margin Ranking Loss function used to learn such embeddings are highlighted. A new loss function, the Soft Margin Loss, is introduced with the aim of mitigating some of the problems highlighted with respect to the Margin Ranking Loss function: in particular, the new loss function avoids the need to define a hard margin to separate prediction scores of positive and negative samples as is required if we use the Margin Ranking Loss function; indeed such a hard margin could penalize training performance especially when negative sampling techniques generate a non-negligible rate of false negative training samples. The link-prediction performance of the Soft Margin Loss function is compared with scenarios where embeddings are learnt by using the Margin Ranking Loss function: several relation scoring functions are used. Both the scholarly knowledge graphs created by the authors and other link prediction datasets (FB15k and WN18) are exploited for evaluation purposes.
The authors also present the results of a manual evaluation performed to validate the co-authorship recommendations proposed by their approach by training over the scholarly knowledge graphs they created. Also the sensitivity of the proposed loss function with respect to variations in hyperparameters is analyzed. --- The paper is fairly well written. It deals with a relevant topic in the area of semantic content recommendation: link prediction approaches over knowledge graphs based on entity and relation embeddings. In particular, a novel loss function to train these embeddings with improved performance is presented and evaluated quantitatively and qualitatively by considering scholarly knowledge graphs. Comments: - could you better explain the differences between the two scholarly knowledge graphs you consider (SKGOLS and SKGNEW) with respect to the evaluation of the considered approaches? Why would you expect that these two scholarly knowledge graphs provide distinct, perhaps complementary, evaluation frameworks / scenarios with respect to the link prediction approaches considered or proposed? - in Section 5.2 "Quality and Soundness analysis", the manual evaluation performed and its results should be explained with greater clarity. Instead of "50 recommendations filtered for a closer look", it could be better to say that "the top 50 recommendations for each author have been manually reviewed in order to distinguish correct from incorrect ones with respect to the following set of criteria: 1. close match in research...". It is not clear that Table 4 contains the results of this manual filtering of automated recommendations. - could you explicitly specify in Tables 2 and 3 the meaning of underlined and bold numbers? - does formula (6) of the loss function miss a sigma / summation over all negative samples? MINOR COMMENTS: - Abstract: sixth line: "knowldge graph embedding (KGE) models have..."
--> Knowledge Graph Embedding (uppercase first letters) - Figure 1: could you check if the direction of the "isPublished" relation / link (from Event to Paper) is correct? For completeness, should the direction of the "isAffiliatedIn" relation / link be specified (it is not)? - Section 3, "Preliminaries and Related Work": "A Kg is roughly represented" --> KG - Section 3, "Preliminaries and Related Work": "...defines an score function..." --> a score function - Section 3, "Preliminaries and Related Work": "...a loss function to adjust embedding." --> embeddings - Section 5, "Evaluation": "...evaluation methods have been performed in order to approve: 1) better performance and..." --> assess".
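The review's contrast between the Margin Ranking Loss and a soft-margin alternative can be sketched in generic form. The notation below is assumed for illustration, not taken from the paper under review (whose Eq. (6) the reviewer questions); f is a dissimilarity score, so lower values are better for positive triples:

```latex
% Generic margin ranking loss over positive triples S^+ and corrupted
% negatives S^-, with dissimilarity score f and hard margin \gamma
% (illustrative only; not necessarily the paper's Eq. (6)):
\mathcal{L}_{\mathrm{MRL}}
  = \sum_{(h,r,t)\in S^{+}} \sum_{(h',r,t')\in S^{-}}
    \max\bigl(0,\ \gamma + f(h,r,t) - f(h',r,t')\bigr)

% One common soft relaxation replaces the hinge with a softplus, so
% false-negative samples that fall inside the margin are penalized
% smoothly rather than with a hard cutoff:
\mathcal{L}_{\mathrm{soft}}
  = \sum_{(h,r,t)\in S^{+}} \sum_{(h',r,t')\in S^{-}}
    \log\bigl(1 + e^{\,\gamma + f(h,r,t) - f(h',r,t')}\bigr)
```

Under either form the loss is summed over all negative samples, which is the summation the reviewer suspects is missing from formula (6).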
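The pattern match shown at the top of this page, { &lt;Paper.89_Review.1&gt; ?p ?o. }, can be reproduced locally as a minimal sketch. The snippet below stores the listed triples as plain tuples with IRIs shortened to their local names (the full IRIs live under https://metadata.2020.eswc-conferences.org/rdf/; the long hasContent literal is omitted for brevity):

```python
# Minimal sketch of basic graph pattern matching over the triples listed
# above. Subjects, predicates and objects are abbreviated local names,
# not full IRIs; the hasContent triple (the review text) is left out.
SUBJ = "Paper.89_Review.1"

triples = [
    (SUBJ, "type", "ReviewVersion"),
    (SUBJ, "issued", "2001-01-15T14:40:00.000Z"),
    (SUBJ, "creator", "Paper.89_Review.1_Reviewer"),
    (SUBJ, "hasRating", "ReviewRating.2"),
    (SUBJ, "hasReviewerConfidence", "ReviewerConfidence.4"),
    (SUBJ, "reviews", "Paper.89"),
    (SUBJ, "issuedAt", "easychair.org"),
    (SUBJ, "issuedFor", "Conference"),
    (SUBJ, "releasedBy", "Conference"),
]

def match(subject):
    """Return all (?p, ?o) bindings for the pattern { subject ?p ?o }."""
    return [(p, o) for s, p, o in triples if s == subject]

bindings = match(SUBJ)
print(len(bindings))  # 9 of the 10 listed triples (hasContent omitted)
```

A real endpoint would answer the same pattern via a SPARQL query such as `SELECT ?p ?o WHERE { <…Paper.89_Review.1> ?p ?o }`, paginated with LIMIT/OFFSET as the "items per page" header suggests.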