
ESWC 2020


The paper describes a framework for KG embedding evaluation. It takes the generated vector representations, uses them for various tasks, and reports the corresponding results to show how these vectors perform across different tasks. Many different tasks are already implemented, and a new embedding technique can be tested directly with this framework. The related work section is well written and also shows the differences to the proposed approach. Section three gives an overview of the framework and its extension points.

In the following I describe my procedure when installing and using the proposed software:

I tried installing the software with pip but received a FileNotFoundError: line 10 of setup.py reads the file pip_readme.md, which is not contained in the package but only in the GitHub repository. The authors should fix this to allow others an easy installation. Moreover, many users currently have Anaconda installed; a conda package might also be helpful in the future.

With my patched version of the pip package, installation under Python 2 works fine, but in a fresh conda environment with Python 3.8.1 I got a UnicodeDecodeError, which I did not analyze further. Since Python 2 has already reached its end of life [1], new software should target Python 3 anyway.

Afterwards I wanted to test main_00.py from the example folder. Unfortunately, I could not find the file country_vectors.txt, so I proceeded with main_01.py. There the FrameworkManager could not be loaded. The reason is that all examples use the import statement ```from evaluation_manager.manager import FrameworkManager``` but it should be ```from evaluation_framework.manager import FrameworkManager```. After this change, my script runs.

The results directory is not generated in the correct place (at least when installing with pip) because in evaluationManager.py, line 228 [2], the results directory is derived from the file path of evaluationManager.py.
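A minimal sketch of what I have in mind (the helper name is hypothetical, not from the framework): derive the results path from the user's current working directory rather than from the installed module's location.

```python
import os

# Hypothetical helper illustrating the suggested fix: build the results
# path from the current working directory, not from __file__ (which points
# into site-packages when the framework is installed via pip).
def results_directory(subdir="results"):
    return os.path.join(os.getcwd(), subdir)
```

This way, results land next to wherever the user invokes the script, independently of how the package was installed.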
This results in a directory like "{{pythonpath}}\\lib\\site-packages\\evaluation_framework\\../results", where nobody would ever look. Thus I recommend changing it to "os.getcwd()" to use the current working directory.

When looking at the results folder, only a log file is generated, which states that the data files are missing. This is correct, because the pip package contains no data files (like /evaluation_framework/Classification/data/Cities.tsv); these files also need to be included in the pip package. After manually copying the files, I first got some result files that were mostly empty. In the log I saw the error message "Classification : Problems in merging vector with gold standard". The reason was that no URIs from the gold standard appeared in the objectFrequencyS.txt file. I would have expected an example to cover such cases.

After trying out the software, I asked myself where to get the KG for which I should generate the embedding. Based on the gold standard files, I assume it is DBpedia. Maybe I have overlooked it, but this should be clearly mentioned somewhere (and probably also in the GitHub readme). More importantly, not only the specific version of DBpedia is necessary but also which files [3] can be used, to actually allow a comparison between the embedding techniques. I know this does not have to be fixed by the evaluation framework, as long as each embedding uses the same files, but a general recommendation would be good.

The software is not yet ready to be easily used, but I think the authors can update it very quickly. Once this is done, the framework allows different KG embedding methods to be compared easily, and I think it fills an important gap.

Some minor points:
- Figure one could be converted to grayscale to allow a black-and-white print.
- It would also help to point out that this work is an extension
to [4].
- page 4: "do not state it further" -> "do not state if further"; "It takes in input a file" -> "It takes as input a file"
- page 10: dbo:SportsTeam exceeds the line width because of \texttt

[1] https://www.python.org/doc/sunset-python-2/
[2] https://github.com/mariaangelapellegrino/Evaluation-Framework/blob/master/evaluation_framework/evaluationManager.py#L228
[3] https://wiki.dbpedia.org/downloads-2016-10
[4] Pellegrino M.A., Cochez M., Garofalo M., Ristoski P. (2019) A Configurable Evaluation Framework for Node Embedding Techniques. In: Hitzler P. et al. (eds) The Semantic Web: ESWC 2019 Satellite Events. ESWC 2019. Lecture Notes in Computer Science, vol 11762. Springer, Cham.

After reading the rebuttal, I update my overall evaluation to accept. The technical details are solved.
