Data Portal @ linkeddatafragments.org

ESWC 2020


This paper describes a technique that optimizes the execution of stream reasoning programs in LARS by keeping track of formulas that will not hold and thus should not be considered during the reasoning process. This work is an incremental contribution to a previous paper by the authors, in which the same type of LARS program execution was optimized by considering those formulas that are guaranteed to hold for a given time interval. In the subset of LARS considered by the authors, it is possible to establish formulas that hold for all time instants of a given window, or that hold for at least some time instants of a window. With this information, the authors propose an optimization technique for the cases where these formulas are guaranteed not to hold and can therefore be excluded from the reasoning process.

The technique is well explained and is based on the LARS framework, which has the advantage of providing rich semantics and a number of expressive operators for stream reasoning, often offering more features than other approaches under the stream reasoning umbrella, such as RDF stream processors or CEP-based solutions. Nevertheless, given that this specific technique is essentially an incremental contribution with respect to the previous paper, the degree of novelty is not especially high. In contrast, the fact that the authors actually re-implemented Laser is a nice technical contribution, although more on the engineering side.

Another issue concerns the motivation of this work. Although the authors mention in the introduction some potential uses for this type of reasoning, the rest of the paper does not follow any of these motivating examples and goes directly into solving the proposed challenge. While the optimization is entirely reasonable, the lack of a real motivating use case raises the question of the concrete impact of these techniques on actual stream reasoning problems. The paper would benefit from a clearer motivation taken from more realistic use cases in which it is clear that handling these 'impossible derivations' has a substantial impact.

The same problem appears in the evaluation. While it is fair to show the best- and worst-case scenarios with the proposed microbenchmarks, the reader may get the impression of an experimental setup that is designed only to validate the paper's hypotheses but has no connection to real use cases and real problems. It is understandable, as the authors mention, that some of the existing benchmarks do not really exercise many of the rich features of LARS programs. However, the authors may need to find better ways of demonstrating the utility of this interesting work while getting closer to real-life datasets and reasoning problems.

The paper is well written, and the technique is well described, including a clear formalization, examples, and a detailed description of the main algorithms.

-----

Thanks to the authors for the response. I disagree that this is the best that could be done in terms of evaluation. The authors can argue that the microbenchmarks are fair enough or somehow sufficient to validate their hypothesis. I still think that the risk of bias is high, but I also understand the difficulty and the amount of work that building a comprehensive evaluation scenario would take. I keep my scores for this solid manuscript.
