Matches in ScholarlyData for { ?s <https://w3id.org/scholarlydata/ontology/conference-ontology.owl#abstract> ?o. }
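The pattern in the query above reads: bind ?s and ?o for every triple whose predicate is the abstract property. A minimal sketch of that matching over a hypothetical in-memory triple list (all subject URIs and titles below are made-up placeholders, not real ScholarlyData rows):

```python
# Minimal sketch (not the ScholarlyData service itself) of how the basic graph
# pattern { ?s <...#abstract> ?o } selects matches: every triple whose
# predicate equals the abstract property binds ?s and ?o.

ABSTRACT = "https://w3id.org/scholarlydata/ontology/conference-ontology.owl#abstract"

# Illustrative placeholder triples, not real ScholarlyData data.
triples = [
    ("paper/252", ABSTRACT, "Ontologies are used for sharing information ..."),
    ("paper/252", "http://purl.org/dc/terms/title", "Recognizing Ontology Refactorings"),
    ("paper/253", ABSTRACT, "There are ontology domain concepts ..."),
]

def match_pattern(triples, s=None, p=None, o=None):
    """Return ?s/?o bindings for a single triple pattern; None means a variable."""
    bindings = []
    for ts, tp, to in triples:
        if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o):
            bindings.append({"s": ts, "o": to})
    return bindings

bindings = match_pattern(triples, p=ABSTRACT)
print([b["s"] for b in bindings])  # ['paper/252', 'paper/253']
```

The title triple is skipped because its predicate does not match; only the two abstract triples produce bindings.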
- 252 abstract "Ontologies are used for sharing information and are often collaboratively developed. They are adapted for different applications and domains, resulting in multiple versions of an ontology caused by changes and refactorings. Quite often, ontology versions (or parts of them) are syntactically very different but semantically equivalent. While there is existing work on detecting syntactical and structural changes in ontologies, there is still a need for analyzing and recognizing ontology changes and refactorings by a semantic comparison of ontology versions. In our approach, we start with a classification of model refactorings found in software engineering and use DL reasoning to recognize such refactorings in OWL ontologies.".
- 253 abstract "There are ontology domain concepts that can be represented according to multiple alternative classification criteria. Current ontology modeling guidelines do not explicitly consider this aspect in the representation of such concepts. To assist with this issue, we examined a domain-specific simplified model for facet analysis used in Library Science. This model produces a Faceted Classification Scheme (FCS) which accounts for the multiple alternative classification criteria of the domain concept under scrutiny. A comparative analysis between an FCS and the Normalisation Ontology Design Pattern (ODP) indicates the existence of key similarities between the elements in the generic structure of both knowledge representation models. As a result, a mapping is identified that allows transforming an FCS into an OWL DL ontology by applying the Normalisation ODP. Our contribution is illustrated with an existing FCS example in the domain of "Dishwashing Detergent" that benefits from the outcome of this study.".
- 259 abstract "Policies are declarations of constraints on the behaviour of components within distributed systems, and are often used to capture norms within agent-based systems. A few machine-processable representations for policies have been proposed, but they tend to be either limited in the types of policies that can be expressed or limited by the complexity of associated reasoning mechanisms. In this paper, we argue for a language that sufficiently expresses the types of policies essential in practical systems, and which enables both policy-governed decision-making and policy analysis within the bounds of decidability. We then propose an OWL-based representation of policies that meets these criteria, and a reasoning mechanism that uses a novel combination of ontology consistency checking and query answering. In this way, agent-based systems can be developed that operate flexibly and effectively in policy-constrained environments.".
- 261 abstract "In Linked Data, the use of owl:sameAs is ubiquitous in interlinking data-sets. There is, however, ongoing discussion about its use, and potential misuse, particularly with regards to interactions with inference. In fact, owl:sameAs can be viewed as encoding only one point on a scale of similarity, one that is often too strong for many of its current uses. We describe how referentially opaque contexts that do not allow inference exist, and then outline some varieties of referentially-opaque alternatives to owl:sameAs. Finally, we report on an empirical experiment over randomly selected owl:sameAs statements from the Web of data. This theoretical apparatus and experiment shed light upon how owl:sameAs is being used (and misused) on the Web of data.".
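The inference interaction discussed in the abstract above can be illustrated with a small sketch: under owl:sameAs semantics, identity is symmetric and transitive, so sameAs statements partition resources into equivalence classes, and one overly strong link merges two classes. The URIs below are made-up examples, and the union-find encoding is an illustration, not the paper's method.

```python
# Sketch of owl:sameAs closure: sameAs is reflexive, symmetric and transitive,
# so linked resources collapse into equivalence classes (computed here with a
# union-find). All identifiers below are made-up examples.

def same_as_classes(pairs):
    """Group resources into equivalence classes given owl:sameAs pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in pairs:
        union(a, b)

    classes = {}
    for x in parent:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

pairs = [
    ("dbpedia:Berlin", "geonames:2950159"),
    ("geonames:2950159", "freebase:m.0156q"),
    ("dbpedia:Paris", "geonames:2988507"),
]
groups = same_as_classes(pairs)
print(sorted(len(g) for g in groups))  # [2, 3]
```

Adding a single mistaken pair such as ("dbpedia:Berlin", "dbpedia:Paris") would merge both classes into one, which is exactly the kind of too-strong-identity propagation the discussion concerns.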
- 274 abstract "We extend our recent work on evaluating incomplete reasoners by introducing strict testing bases. We show how they can be used in practice to identify ontologies and queries where applications can exploit highly scalable incomplete query answering systems while enjoying completeness guarantees normally available only when using computationally intensive reasoning systems.".
- 278 abstract "We develop query relaxation techniques for regular path queries and combine them with query approximation in order to support flexible querying of RDF data when the user lacks knowledge of its full structure or where the structure is irregular. In such circumstances, it is helpful if the querying system can perform both approximate matching and relaxation of the user's query and can rank the answers according to how closely they match the original query. Our framework incorporates both standard notions of approximation based on edit distance and RDFS-based inference rules. The query language we adopt comprises conjunctions of regular path queries, thus including extensions proposed for SPARQL to allow for querying paths using regular expressions. We provide an incremental query evaluation algorithm which runs in polynomial time and returns answers to the user in ranked order.".
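The abstract above ranks answers by how closely they match the original query. A minimal sketch of the general idea of edit-distance-based ranking (an illustration only, not the paper's algorithm; the property labels and answer names are hypothetical):

```python
# Sketch of edit-distance-based ranking for path answers: answers whose
# sequence of edge labels is closer to the query path rank higher.
# Labels and answer identifiers below are made up for illustration.

def edit_distance(p, q):
    """Levenshtein distance between two label sequences."""
    m, n = len(p), len(q)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == q[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

query_path = ["author", "affiliation"]
answers = {
    "ans1": ["author", "affiliation"],  # exact match
    "ans2": ["author", "memberOf"],     # one label substituted
    "ans3": ["editor", "memberOf"],     # two labels substituted
}
ranked = sorted(answers, key=lambda a: edit_distance(answers[a], query_path))
print(ranked)  # ['ans1', 'ans2', 'ans3']
```

The framework in the paper additionally weighs RDFS-based relaxations (e.g. replacing a property by its super-property), which would assign such substitutions a smaller cost than arbitrary ones.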
- 280 abstract "We extend the Semantic Web query language SPARQL by defining the semantics of SPARQL queries under the entailment regimes of RDF, RDFS, and OWL. The proposed extensions are part of the SPARQL 1.1 Entailment Regimes working draft which is currently being developed as part of the W3C standardization process of SPARQL 1.1. We review the conditions that SPARQL imposes on such extensions, discuss the practical difficulties of this task, and explicate the design choices underlying our proposals. In addition, we include an overview of current implementations and their underlying techniques.".
- 288 abstract "Ontology classification - the computation of subsumption hierarchies for classes and properties - is one of the most important tasks for OWL reasoners. Based on the algorithm by Shearer and Horrocks [9], we present a new classification procedure that addresses several open issues of the original algorithm, and that uses several novel optimisations in order to achieve superior performance. We also consider the classification of (object and data) properties. We show that algorithms commonly used to implement that task are incomplete even for relatively weak ontology languages. Furthermore, we show how to reduce the property classification problem into a standard (class) classification problem, which allows reasoners to classify properties using our optimised procedure. We have implemented our algorithms in the OWL HermiT reasoner, and we present the results of a performance evaluation.".
- 301 abstract "In order to effectively and quickly answer queries in environments with distributed RDF/OWL data, we present a query optimization algorithm to identify the potentially relevant Semantic Web data sources using structural query features and a term index. This algorithm is based on the observation that the join selectivity of a pair of query triple patterns is often higher than the overall selectivity of these two patterns treated independently. Given a rule goal tree that expresses the reformulation of a conjunctive query, our algorithm uses a bottom-up approach to estimate the selectivity of each node. It then prioritizes loading of selective nodes and uses the information from these sources to further constrain other nodes. Finally, we use an OWL reasoner to answer queries over the selected sources and their corresponding ontologies. We have evaluated our system using both a synthetic data set and a subset of the real-world Billion Triple Challenge data.".
- 304 abstract "Much of the AI-related work on Web Service Composition (WSC) relates it to an Artificial Intelligence (AI) planning problem, where the composition is primarily done offline prior to execution. Recent research on WSC has argued convincingly for the importance of optimizing quality of service and user preferences. While some of this optimization can be done offline, many interesting and useful optimizations are data-dependent, and must be done following execution of at least some information-providing services. In this paper, we examine this class of WSC problems, attempting to bridge the gap between offline composition and online information gathering with a view to producing high-quality compositions without excessive data gathering. Our investigation is performed in the context of an existing preference-based Hierarchical Task Network (HTN) WSC system. Our experiments show an improvement in both the quality and speed of finding a composition.".
- 305 abstract "In this paper, we discuss optimisations of rule-based materialisation approaches for reasoning over large static RDF datasets. We generalise and reformalise what we call the "partial-indexing" approach to scalable rule-based materialisation: the approach is based on a separation of terminological data, which has been shown in previous and related works to enable highly scalable and distributable reasoning for specific rulesets; in so doing, we provide some completeness propositions with respect to semi-naive evaluation. We then show how related work on template rules - T-Box-specific dynamic rulesets created by binding the terminological patterns in the static ruleset - can be incorporated and optimised for the partial-indexing approach. We evaluate our methods using LUBM(10) for RDFS, pD* (OWL Horst) and OWL 2 RL, and thereafter demonstrate pragmatic distributed reasoning over 1.12 billion Linked Data statements for a subset of OWL 2 RL/RDF rules we argue to be suitable for Web reasoning.".
- 307 abstract "The Rule Interchange Format Production Rule Dialect (RIF-PRD) is a W3C Recommendation to define production rules for the Semantic Web, whose semantics is defined operationally via labeled terminal transition systems. In this paper, we introduce a declarative logical characterization of the full default semantics of RIF-PRD based on Answer Set Programming (ASP), including matching, conflict resolution and acting. Our proposal to the semantics of RIF-PRD enjoys several features. Being based on ASP, it enables a straightforward integration with Logic Programming rule based technology, namely for reasoning and acting with ontologies. Then, its full declarative logical character facilitates the investigation of formal properties of RIF-PRD itself. Furthermore, it turns out that our characterization based on ASP is flexible enough so that new conflict resolution semantics for RIF-PRD can easily be defined and encoded. Finally, it immediately serves as the declarative specification of an implementation, whose prototype we developed.".
- 317 abstract "Effective communication in open environments relies on the ability of agents to reach a mutual understanding of the exchanged message by reconciling the vocabulary (ontology) used. Various approaches have considered how mutually acceptable mappings between corresponding concepts in the agents' own ontologies may be determined dynamically through argumentation-based negotiation (such as Meaning-based Argumentation, MbA). In this paper we present a novel approach to the dynamic determination of mutually acceptable mappings, that allows agents to express a private acceptability threshold over the types of mappings they prefer. We empirically compare this approach with the Meaning-based Argumentation and demonstrate that the proposed approach produces larger agreed alignments thus better enabling agent communication. Furthermore, we compare and evaluate the fitness for purpose of the generated alignments, and we empirically demonstrate that the proposed approach has comparable performance to the MbA approach.".
- 318 abstract "Millions of owl:sameAs statements have been published on the Web of Data. Due to its unique role and heavy usage in Linked Data integration, owl:sameAs has become a topic of increasing interest and debate. This paper provides a quantitative analysis of owl:sameAs deployment status and uses these statistics to focus discussion around its usage in Linked Data.".
- 319 abstract "The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: whilst the data is available, the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce - a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model, requiring programmers to shoehorn their problem into the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).".
- 334 abstract "The Web of Linked Data is characterized by linking structured data from different sources using equivalence statements, such as owl:sameAs, as well as other types of linked properties. The ontologies behind these sources, however, remain unlinked. This paper describes an extensional approach to generate alignments between these ontologies. Specifically, our algorithm produces equivalence and subsumption relationships between classes from ontologies of different Linked Data sources by exploring the space of hypotheses supported by the existing equivalence statements. We are also able to generate a complementary hierarchy of derived classes within an existing ontology or generate new classes for a second source where the ontology is not as refined as the first. We demonstrate empirically our approach using Linked Data sources from the geospatial, genetics, and zoology domains. Our algorithm discovered about 800 equivalences and 29,000 subset relationships in the alignment of five source pairs from these domains. Thus, we are able to model one Linked Data source in terms of another by aligning their ontologies and understand the semantic relationships between the two sources.".
- 352 abstract "Twitter enjoys enormous popularity as a microblogging service largely due to its simplicity. On the downside, there is little organization to the Twitterverse and making sense of the stream of messages passing through the system has become a significant challenge for everyone involved. As a solution, Twitter users have adopted the convention of adding a hash at the beginning of a word to turn it into a hashtag. Hashtags have become the means in Twitter to create threads of conversation and to build communities around particular interests. In this paper, we take a first look at whether hashtags behave as strong identifiers, and thus whether they could serve as identifiers for the Semantic Web. We introduce some metrics that can help identify hashtags that show the desirable characteristics of strong identifiers. We look at the various ways in which hashtags are used, and show through evaluation that our metrics can be applied to detect hashtags that represent real world entities.".
- 376 abstract "The Semantic Web is now gaining momentum due to its efforts to create a universal medium for the exchange of semantically tagged data. The representation and querying of semantic data have been made by means of directed labelled graphs using RDF and SPARQL, standards which have been widely accepted by the scientific community. Currently, most implementations of RDF/SPARQL are based on relational database technology. But executing complex queries in these systems is usually rather slow due to the number of joins that need to be performed. In this article, we describe an indexing method using materialized SPARQL queries as indexes on RDF data sets to reduce the query processing time. We provide a formal definition of materialized SPARQL queries, a cost model to evaluate their impact on query performance, a storage scheme for the materialization, and an algorithm to find the optimal set of indexes given a query. We also introduce different approaches to integrate materialized queries into an existing SPARQL query engine.".
- 383 abstract "With the development of the Semantic Web, an increasing amount of data with semantics has been published on the web according to the Linked Data principles and has become ubiquitous. The desire to exploit such ubiquitous semantic data sources has given rise to the requirement of utilizing the entire web of data to answer a query. However, limited research has addressed querying over the web of data, and there is no formal way to describe the query processing procedure. In this paper, we propose a general query processing method for the web of data, which contains three steps: data inference configuration, data discovery, and result generation and ranking. Finally, we briefly present the work already done and the future work.".
- 385 abstract "Recently, the use of semantic technologies has gained quite some traction. With increased use of these technologies, their maturation not only in terms of performance and robustness but also with regard to support of non-Latin-based languages and regional differences is of paramount importance. In this paper, we provide a comprehensive review of the current state of the internationalization (I18n) of Semantic Web technologies. Since resource identifiers play a crucial role for the Semantic Web, the internationalization of resource identifiers is of high importance. It turns out that the prevalent resource identification mechanism on the Semantic Web, i.e. URIs, is not sufficient for an efficient internationalization of knowledge bases. Fortunately, with IRIs a standard for international resource identifiers is available, but its support needs much more penetration and homogenization in various Semantic Web technology stacks. In addition, we review various RDF serializations with regard to their support for internationalized knowledge bases. The paper also contains an in-depth review of popular Semantic Web tools and APIs with regard to their support for internationalization.".
- 386 abstract "Although the Internet, as an ubiquitous medium for communication, publication and research, already significantly influenced the way historians work, the capabilities of the Web as a direct medium for collaboration in historic research are not much explored. We report on the application of an adaptive, semantics-based knowledge engineering approach for the development of a prosopographical knowledge base on the Web - the Catalogus Professorum Lipsiensis. In order to enable historians to collect, structure and publish prosopographical knowledge, an ontology was developed and knowledge engineering facilities based on the semantic data wiki OntoWiki were implemented. The resulting knowledge base contains information about more than 14,000 entities and is tightly interlinked with the emerging Web of Data. For access and exploration by other historians, a number of access interfaces were developed, such as a visual SPARQL query builder, a relationship finder and a Linked Data interface. The approach is transferable to other prosopographical research projects and historical research in general, thus improving the collaboration in historic research communities and facilitating the reusability of historic research results.".
- 387 abstract "This paper introduces the task of Technology-Structure Mining to support Management of Technology. We propose a linguistics-based approach for the identification of Technology Interdependence through the extraction of technology concepts and the relations between them. In addition, we introduce the Technology Structure Graph for formalizing the task. Since the major challenge in technology structure mining is the lack of a benchmark dataset for evaluation and development purposes, we describe the steps we have taken towards providing such a benchmark. The proposed approach is initially evaluated and applied in the domain of Human Language Technology, and preliminary results are demonstrated. We further explain our plans and the research challenges for the evaluation of the proposed task.".
- 388 abstract "Even though its adoption in the enterprise environment lags behind the public domain, semantic (web) technologies, and more recently the linked data initiative, have started to penetrate the business domain, with more and more people recognising the benefit of such technologies. An evident advantage of leveraging semantic technologies is the integration of distributed data sets, which can bring companies a great return of value. Enterprise data, however, presents significantly different characteristics from public data on the Internet. These differences are evident from both technical and managerial perspectives. This paper reports a pilot study, carried out in an international organisation, aiming to provide a collaborative workspace for fast and low-overhead data sharing and integration. We believe that the design considerations, study outcomes, and lessons learnt can help in deciding whether and how one should adopt semantic technologies in similar contexts.".
- 397 abstract "Wikis allow users to collaboratively create and maintain content. Semantic wikis, which provide the additional means to annotate the content semantically and thereby allow structuring it, are experiencing an enormous increase in popularity, because structured data is more usable and thus more valuable than unstructured data. As an illustration of leveraging the advantages of semantic wikis for semantic portals, we report on the experience with building the AIFB portal based on Semantic MediaWiki. We discuss the design, in particular how free, wiki-style semantic annotations and guided input along a predefined schema can be combined to create a flexible, extensible, and structured knowledge representation. How this structured data evolved over time and its flexibility regarding changes are subsequently discussed and illustrated by statistics based on actual operational data of the portal. Further, the features exploiting the structured data and the benefits they provide are presented. Since all benefits have their costs, we conducted a performance study of the Semantic MediaWiki and compare it to MediaWiki, the non-semantic base platform. Finally, we show how existing caching techniques can be applied to increase the performance.".
- 401 abstract "Enterprise clouds apply the paradigm of cloud computing to enterprise IT infrastructures, with the goal of providing easy, flexible, and scalable access to both computing resources and IT services. Realizing the vision of the fully automated enterprise cloud involves addressing a range of technological challenges. In this paper, we focus on the challenges related to intelligent information management in enterprise clouds and discuss how semantic technologies can help to fulfill them. In particular, we address the topics of data integration, collaborative documentation and annotation, and intelligent information access and analytics, and present solutions that are implemented in the newest addition to our eCloudManager product suite: the Intelligence Edition.".
- 402 abstract "Organizations today collect and store large amounts of data in various formats and locations. However, they are sometimes required to locate all instances of a certain type of data. Good data classification allows marking enterprise data in a way that enables quick and efficient retrieval of information when needed. We introduce a generic, automatic classification method that exploits Semantic Web technologies to assist in several phases of the classification process: defining the classification requirements, performing the classification and representing the results. Using Semantic Web technologies enables flexible and extensible configuration, centralized management and uniform results. This approach creates general and maintainable classifications, and enables applying semantic queries, rule languages and inference on the results.".
- 407 abstract "We present a scalable, SPARQL-based computational pipeline for testing the lattice-theoretic properties of partial orders represented as RDF triples. The use case for this work is quality assurance in biomedical ontologies, one desirable property of which is conformance to lattice structures. At the core of our pipeline is the algorithm called NuMi, for detecting the Number of Minimal upper bounds of any pair of elements in a given finite partial order. Our technical contribution is the coding of NuMi completely in SPARQL. To show its scalability, we applied NuMi to the entirety of SNOMED CT, the largest clinical ontology (over 300,000 concepts). Our experimental results have been groundbreaking: for the first time, all non-lattice pairs in SNOMED CT have been identified exhaustively from 34 million candidate pairs using over 2.5 billion queries issued to Virtuoso. The percentage of non-lattice pairs ranges from 0 to 1.66 among the 19 SNOMED CT hierarchies. These non-lattice pairs represent target areas for focused curation by domain experts. RDF, SPARQL and related tooling provide an efficient platform for implementing lattice algorithms on large data structures.".
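The lattice property tested by the abstract above can be sketched in a few lines of plain Python (this illustrates the mathematical check, not the paper's SPARQL encoding of NuMi; the toy poset is made up): a pair is a "non-lattice" pair when it has more than one minimal upper bound.

```python
# Sketch of the property NuMi checks: count the minimal upper bounds of a pair
# of elements in a finite partial order. The toy poset below is illustrative,
# not SNOMED CT.

def minimal_upper_bounds(leq, elements, a, b):
    """leq(x, y) == True iff x <= y in the partial order."""
    uppers = [u for u in elements if leq(a, u) and leq(b, u)]
    # u is minimal if no other upper bound lies strictly below it
    return [u for u in uppers
            if not any(v != u and leq(v, u) for v in uppers)]

# Toy non-lattice poset: a and b both lie below c and d, with c, d incomparable.
order = {("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")}
elements = {"a", "b", "c", "d"}

def leq(x, y):
    return x == y or (x, y) in order

mubs = minimal_upper_bounds(leq, elements, "a", "b")
print(sorted(mubs))  # ['c', 'd'] -> two minimal upper bounds: (a, b) is a non-lattice pair
```

In a lattice every pair has exactly one minimal upper bound (the least upper bound), so any pair for which this function returns more than one element marks a curation target.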
- 410 abstract "The popularity and spread of online social networking in recent years have given great momentum to the study of the dynamics and patterns of social interactions. However, these studies have often been confined to the online world, neglecting its interdependencies with the offline world. This is mainly due to the lack of real data that spans this divide. The Live Social Semantics application is a novel platform that dissolves this divide, by collecting and integrating data about people from (a) their online social networks and tagging activities from popular social networking sites, (b) their publications and co-authorship networks from semantic repositories, and (c) their real-world face-to-face contacts with other attendees collected via a network of wearable active sensors. This paper investigates the data collected by this application during its deployment at three major conferences, where it was used by more than 400 people. Our analyses show the robustness of the patterns of contacts at various conferences, and the influence of various personal properties (e.g. seniority, conference attendance) on social networking patterns.".
- 413 abstract "The ability to answer temporal-oriented questions based on clinical narratives is essential to clinical research. The temporal dimension in medical data analysis enables clinical research in many areas, such as disease progression, individualized treatment, and decision support. The Semantic Web provides a suitable environment to represent the temporal dimension of clinical data and reason about it. In this paper, we introduce a Semantic-Web based framework, which provides an API for querying temporal information from clinical narratives. The framework is centered on an OWL ontology called CNTRO (Clinical Narrative Temporal Relation Ontology), and contains three major components: a time normalizer, a SWRL based reasoner, and an OWL-DL based reasoner. We also discuss how we adopted these three components in the clinical domain, their limitations, as well as extensions that we found necessary or desirable to achieve the purpose of querying time-oriented data from real-world clinical narratives.".
- 414 abstract "We describe a mapping language for converting data contained in spreadsheets into the Web Ontology Language (OWL). The developed language, called M2, overcomes shortcomings of existing mapping techniques, including their restriction to well-formed spreadsheets reminiscent of a single relational database table and their verbose syntax for expressing mapping rules when transforming spreadsheet contents into OWL. The M2 language provides expressive, yet concise mechanisms to create both individual and class axioms when generating OWL ontologies. We additionally present an implementation of the mapping approach, Mapping Master, which is available as a plug-in for the Protege ontology editor.".
- 419 abstract "Clinical trials are fundamental for medical science: they provide the evaluation for new treatments and new diagnostic approaches. One of the most difficult parts of clinical trials is the recruitment of patients: many trials fail due to lack of participants. Recruitment is done by matching the eligibility criteria of trials to patient conditions. This is usually done manually, but both the large number of active trials and the lack of time available for matching keep the recruitment ratio low. In this paper we present a method, entirely based on standard semantic web technologies and tools, that allows the automatic recruitment of a patient to the available clinical trials. We use a domain specific ontology to represent data from patients' health records and we use SWRL to verify the eligibility of patients to clinical trials.".
- 421 abstract "Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich the modelling process. We propose to apply Semantic Web techniques to the context of conceptual modelling (more particularly to the domain of qualitative reasoning), to smoothly interconnect conceptual models created by different users, thus facilitating the global sharing of scientific data contained in such models and creating new learning opportunities for people who start modelling. This paper describes how semantic grounding techniques can be used during the creation of qualitative reasoning models, to bridge the gap between the imprecise user terminology and a well-defined external common vocabulary. We also explore the application of ontology matching techniques between models, which can provide valuable feedback during the model construction process.".
- 422 abstract "We present the first open and cross-disciplinary 3D Internet research platform, called ISReal, for intelligent 3D simulation of realities. Its core innovation is the comprehensively integrated application of semantic Web technologies, semantic services, intelligent agents, verification and 3D graphics for this purpose. In this paper, we focus on the interplay between its components for semantic XML3D scene query processing and semantic 3D animation service handling, as well as the semantic-based perception and action planning with coupled semantic service composition by agent-controlled avatars in a virtual world. We demonstrate the use of the implemented platform for semantic-based 3D simulations in a small virtual world example with an intelligent user avatar and discuss results of the platform performance evaluation.".
- 428 abstract "The World Health Organization is beginning to use Semantic Web technologies in the development of the 11th revision of the International Classification of Diseases (ICD-11). Health officials use ICD in all United Nations member countries to compile basic health statistics, to monitor health-related spending, and to inform policy makers. While previous revisions of ICD encoded minimal information about a disease, and were mainly published as books and tabulation lists, the creators of ICD-11 envision that it will become a multi-purpose and coherent classification ready for electronic health records. Most important, they plan to have ICD-11 applied for a much broader variety of uses than previous revisions. The new requirements entail significant changes in the way we represent disease information, as well as in the technologies and processes that we use to acquire the new content. In this paper, we describe the previous processes and technologies used for developing ICD. We then describe the requirements for the new development process and present the Semantic Web technologies that we use for ICD-11. We outline the experiences of the domain experts using the software system that we implemented using Semantic Web technologies. We then discuss the benefits and challenges in following this approach and conclude with lessons learned from this experience.".
- 431 abstract "This paper describes the theoretical background and the implementation of dbrec, a music recommendation system built on top of DBpedia, offering recommendations for more than 39,000 bands and solo artists. We discuss the various challenges and lessons learnt while building it, providing relevant insights for people developing applications consuming Linked Data. Furthermore, we provide a user-centric evaluation of the system, notably by comparing it to last.fm.".
- 432 abstract "Unlike Western medicine, diagnoses in Traditional Chinese Medicine (TCM) are based on inherent rules or patterns, which can be regarded as causal links. Existing approaches tend to apply computational methods to semantic ontologies for knowledge mining, but such methods cannot fully exploit the internal principles of TCM. For knowledge representation, we can transform this inherent knowledge into causal graphs. In this paper, we present an approach to building a TCM knowledge model with rule-reasoning capability using OWL 2. In particular, we focus on the causal relations among syndromes and symptoms, and on the changes between syndromes. We evaluated our approach on two typical use cases and implemented them using Jena, a Java framework supporting RDF and OWL that includes a rule-based inference engine. The evaluation results suggest that our approach clearly displays the causal relations in TCM and shows great potential for TCM knowledge mining.".
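The rule-based causal inference this abstract describes (implemented there with Jena's rule engine) can be sketched as a naive forward-chaining fixpoint over triples. All names below (tcm:causes, tcm:indicates, the example syndrome and symptom) are illustrative assumptions, not taken from the paper's model.

```python
def forward_chain(triples, rules):
    """Apply rules until no new triples are derived (naive fixpoint)."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(inferred):
                if new not in inferred:
                    inferred.add(new)
                    changed = True
    return inferred

def causal_indication(triples):
    """If A causes B and B indicates symptom S, then A also indicates S."""
    out = []
    for (a, p1, b) in triples:
        if p1 != "tcm:causes":
            continue
        for (b2, p2, s) in triples:
            if b2 == b and p2 == "tcm:indicates":
                out.append((a, "tcm:indicates", s))
    return out

facts = {
    ("tcm:SpleenQiDeficiency", "tcm:causes", "tcm:DampRetention"),
    ("tcm:DampRetention", "tcm:indicates", "tcm:Fatigue"),
}
closure = forward_chain(facts, [causal_indication])
```

In Jena the same rule would be written in its rule syntax and attached to a model via a generic rule reasoner; the fixpoint loop above is what such an engine does internally.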
- 435 abstract "We describe our experience of designing and implementing a knowledge-based pre-operative assessment decision support system. We developed the system using Semantic Web technology, including modular ontologies developed in the OWL Web Ontology Language, the OWL Java Application Programming Interface, and an automated logic reasoner. Using ontologies at the core of the system's architecture permits efficient management of a vast repository of pre-operative assessment domain knowledge, including classification of surgical procedures, classification of morbidities, and guidelines for routine pre-operative screening tests. Logical inference on the domain knowledge, according to an individual patient's medical context (medical history combined with the planned surgical procedure), enables the generation of personalised patient reports, consisting of a risk assessment and clinical recommendations, including relevant pre-operative screening tests.".
- 436 abstract "Building applications over Linked Data often requires a mapping between the application model and the ontology underlying the source dataset in the Linked Data cloud. This mapping can be defined in many ways. For instance, by describing the application model as a view over the source dataset, by giving mappings in the form of dependencies between the two datasets, or by inference rules that infer the application model from the source dataset. Explicitly formulating these mappings demands a comprehensive understanding of the underlying schemas (RDF ontologies) of the source and target datasets. This task can be supported by integrating the process of schema exploration into the mapping process, helping the application designer find the implicit relationships that she wants to map. This paper describes Fusion - a framework for closing the gap between the application model and the underlying ontologies in the Linked Data cloud. Fusion simplifies the definition of mappings by providing a visual user interface that integrates the exploratory process and the mapping process. Its architecture allows the creation of new applications through the extension of existing Linked Data with additional data.".
- 439 abstract "The novel practice of Open Innovation on the Web has posed new challenges for known expert search approaches. At the same time, many potential sources of evidence about users' interests and expertise (e.g., research papers, blogs, activities) are becoming ubiquitously present as Linked Data. In this paper we present a research effort aimed at suggesting the right way to search for potential Open Innovation problem solvers in Linked Data sources, by looking at the structure of the available data sources. In addition, we seek to develop ways of suggesting domains of expertise that are in some way relevant to the domain of the Open Innovation problem, in order to enable cross-domain solution transfer.".
- 440 abstract "When multiple ontologies are used within one application system, aligning the ontologies is a prerequisite for interoperability and unhampered semantic navigation and search. Various methods have been proposed to compute mappings between elements from different ontologies, the majority of which are based on various kinds of similarity measures. A major shortcoming of these methods is that it is difficult to decode the semantics of the results achieved. In addition, in many cases they miss important mappings due to poorly developed ontology structures or dissimilar ontology designs. I propose a complementary approach making massive use of relation extraction techniques applied to broad-coverage text corpora. This approach is able to detect different types of semantic relations, dependent on the extraction techniques used. Furthermore, exploiting external background knowledge, it can detect relations even without clear evidence in the input ontologies themselves.".
- 441 abstract "This paper presents our work on supporting evaluation of integrity constraint issues in semantic web instance data. We propose an alternative semantics for the ontology language, i.e., OWL, a decision procedure for constraint evaluation by query answering, and an approach to explaining and repairing integrity constraint violations by utilizing the justifications of conjunctive query answers.".
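The idea of evaluating an integrity constraint by query answering can be sketched as follows. The constraint ("every ex:Employee must have an ex:worksFor value") and the vocabulary are invented for illustration; the actual approach works over OWL with conjunctive-query justifications rather than this toy triple check.

```python
def constraint_violations(triples, cls, required_prop):
    """Return instances of `cls` lacking any `required_prop` triple.
    Each answer to this 'violation query' pinpoints a constraint breach."""
    instances = {s for (s, p, o) in triples if p == "rdf:type" and o == cls}
    satisfied = {s for (s, p, o) in triples if p == required_prop}
    return sorted(instances - satisfied)

data = [
    ("ex:alice", "rdf:type", "ex:Employee"),
    ("ex:alice", "ex:worksFor", "ex:acme"),
    ("ex:bob",   "rdf:type", "ex:Employee"),  # violates the constraint
]
violations = constraint_violations(data, "ex:Employee", "ex:worksFor")
```

The design point mirrors the abstract: instead of treating missing values as open-world unknowns, the constraint is compiled into a query whose non-empty answer set signals (and explains) a violation.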
- 442 abstract "SPARQL is the W3C Recommended query language for RDF. My current work aims at extending SPARQL in two distinct ways: (i) to allow a better integration of RDF and XML; and (ii) to define a query language for RDF extended with domain specific annotations. Transforming data between XML and RDF is a much required, but not so simple, task in the Semantic Web. The aim of (i) is to enable transparent transformations between these two representation formats, relying on a new query language called XSPARQL. In a different direction, representing and reasoning with meta-information about RDF triples has been addressed by several proposals for representing time, trust and provenance. A common extension of RDF (and RDFS inferencing rules), capable of encompassing all these proposals, with a clearly defined semantics is highly desirable. Building on top of Annotated RDF, we present such an extension and an extension of the SPARQL language capable of querying triples with annotations. An open research issue remains the possibility of unifying the two, currently independent, extensions of SPARQL.".
- 443 abstract "In this work we give the outline for a novel approach to semantic web service based applications. This approach centers around the use of so-called intelligent agents as the basis for the middleware layer. We give a broad outline of our prototype architecture and preliminary correctness results.".
- 445 abstract "Web service composition (WSC) - loosely, the composition of web-accessible software systems - requires a computer program to automatically select, integrate, and invoke multiple web services in order to achieve a user-defined objective. It is an example of the more general task of composing business processes or component-based software. Our doctoral research endeavours to make fundamental contributions to the knowledge representation and reasoning principles underlying the task of WSC, with a particular focus on the customization of compositions with respect to individual preferences. The setting for our work is the semantic web, where the properties and functioning of services and data are described in a computer-interpretable form. In this setting we conceive of WSC as an Artificial Intelligence planning task. This enables us to bring to bear many of the theoretical and computational advances in reasoning about action and planning to the task of WSC. However, WSC goes far beyond the reaches of classical planning, presenting a number of interesting challenges that are relevant not only to WSC but to a large body of problems related to the composition of actions, programs, business processes, and services. In what follows we identify a set of challenges facing our doctoral research and report on our progress to date in addressing these challenges.".
- 448 abstract "Incorporating domain knowledge is one of the most challenging problems in data mining. Semantic Web technologies promise to offer solutions for formally capturing and efficiently using domain knowledge. We call data mining technologies powered by the Semantic Web and capable of systematically incorporating domain knowledge semantic data mining. In this paper, we identify the importance of semantic annotation, a crucial step towards realizing semantic data mining by bringing meaning to data, and propose a learning-based semantic search algorithm for annotating (semi-)structured data.".
- 451 abstract "One of the problems of Knowledge Discovery in Databases (KDD) is the lack of user support for solving KDD problems. Current Data Mining (DM) systems enable the user to manually design workflows, but this becomes difficult when there are too many operators to choose from or the workflow's size is too large. Therefore, we propose to use auto-experimentation based on ontological planning to provide users with automatically generated workflows as well as rankings of workflows based on several criteria (execution time, accuracy, etc.). Moreover, auto-experimentation will help to validate the generated workflows and to prune and reduce their number. Furthermore, we will use mixed-initiative planning to allow users to set parameters and criteria that limit the planning search space, as well as to guide the planner towards better workflows.".
- 453 abstract "Performance is the most critical aspect of achieving high scalability in Semantic Web reasoning applications, and considerably limits their application areas. There is still a deep mismatch between the requirements for reasoning on a Web scale and the performance of existing reasoning engines. This performance limitation can be considerably reduced by utilizing large-scale e-Infrastructures such as LarKC - the Large Knowledge Collider - an experimental platform for massive distributed incomplete reasoning, which offers several innovative approaches for removing scalability barriers, in particular by enabling transparent access to HPC systems. Efficient utilization of such resources is facilitated by parallelization, the major element for accomplishing performance and scalability of semantic applications. Here we discuss the application of some emerging parallelization strategies and show the benefits obtained by using systems such as LarKC.".
- 455 abstract "We explore a diagrammatic logic suitable for specifying ontologies using a case study. Diagrammatic reasoning is used to establish consequences of the ontology.".
- 456 abstract "The success of wikis for collaborative knowledge construction is triggering the development of a number of tools for collaborative conceptual modeling based on them. In this paper we present a completely revised version of MoKi, a tool for modelling ontologies and business process models in an integrated way.".
- 457 abstract "We present BibBase, a system for publishing and managing bibliographic data available in BibTeX files on the Semantic Web. BibBase uses a powerful yet light-weight approach to transform BibTeX files into rich Linked Data as well as custom HTML and RSS code that can readily be integrated within a user's website. The data can instantly be queried online on the system's SPARQL endpoint. In this demo, we present a brief overview of the features of our system and outline a few challenges in the design and implementation of such a system.".
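The BibTeX-to-Linked-Data step that BibBase performs can be sketched minimally as below; the regex-based parsing (flat entries only, no nested braces) and the bibo-style predicate names are simplifying assumptions, not BibBase's actual implementation.

```python
import re

def bibtex_to_triples(entry):
    """Turn a single flat BibTeX entry into (subject, predicate, object) triples."""
    m = re.match(r"@(\w+)\{([^,]+),(.*)\}\s*$", entry, re.S)
    kind, key, body = m.group(1), m.group(2), m.group(3)
    subj = f"http://example.org/pub/{key}"  # minted URI for the entry
    triples = [(subj, "rdf:type", f"bibo:{kind.capitalize()}")]
    for field, value in re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", body):
        triples.append((subj, f"bibo:{field}", value))
    return triples

entry = "@article{doe2010, title={On Linked Data}, year={2010}}"
triples = bibtex_to_triples(entry)
```

A real converter would additionally link author names to shared resources and serve the triples at dereferenceable URIs with a SPARQL endpoint, as the abstract describes.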
- 460 abstract "Most tools for visualizing Semantic Web data structure the representation according to the concept definitions and interrelations that constitute the ontology's vocabulary. Instances are often treated as somewhat peripheral information, when considered at all. The visualization of instance-level data poses different but significant challenges as instances will often be orders of magnitude more numerous than the concept definitions that give them machine-processable meaning. We present a visualization technique designed to visualize large instance sets and the relations that connect them. This visualization uses both node-link and adjacency matrix representations of graphs to visualize different parts of the data depending on their semantic and local structural properties, exploiting ontological knowledge to drive the layout of, and navigation in, the visualization.".
- 463 abstract "Ambient assisted living (AAL) is a new research area focusing on services that support people in their daily life with a particular focus on elderly people. In the AAL domain sensor technologies are used to identify situations that pose a risk to the assisted person (AP) or that indicate the need of proactive assistance. These situations of interest are detected by analyzing sensor data coming from a whole variety of sensors. Considering the need for immediate assistance especially in the case of safety- and health-critical situations, the detection of situations must be achieved in real-time. In this paper we propose to use Complex Event Processing (CEP) based on semantic technologies to detect typical AAL-like situations. In particular, we present how the ETALIS CEP engine can be used to detect situations in real-time and how this can lead to immediate and proper assistance even in critical situations in conjunction with the semantic AAL service platform openAAL.".
- 465 abstract "Most graph visualisation tools for RDF data are desktop applications focused on loading complete ontologies and metadata from a file and allowing users to filter out information if needed. Recently both scientific and commercial frameworks have started to shift their focus to the web, however they still rely on plugins such as Java and rarely handle larger collections of RDF statements efficiently. In this abstract we present a framework which visualises RDF graphs in a native browser environment, leveraging both the SVG standard and JavaScript technology to provide a responsive user interface. Graphs can be directly expanded, modified and explored. Users select nodes and edges from a central data repository containing millions of statements. The resulting graph can be shared with other users retaining full interactivity for collaborative work or presentation purposes.".
- 467 abstract "As more RDF streaming applications are being developed, there is a growing need for an efficient mechanism for storing and performing inference over these streams. In this poster, we present a tool that stores these streams in a unified model by combining memory and disk based mechanisms. We explore various memory management algorithms and disk-persistence strategies to optimize query performance. Our unified model produces optimized query execution and inference performance for RDF streams, benefiting from the advantages of using both memory and disk.".
- 468 abstract "Evaluation of semantic web technologies at large scale, including ontology matching, is an important topic of semantic web research. This paper presents a web-based evaluation service for automatically executing the evaluation of ontology matching systems. This service is based on the use of a web service interface wrapping the functionality of a matching tool to be evaluated and allows developers to launch evaluations of their tool at any time on their own. Furthermore, the service can be used to visualise and manipulate the evaluation results. The approach allows the execution of the tool on the machine of the tool developer without the need for a runtime environment.".
- 469 abstract "SemWebVid is an online Ajax application that allows for the automatic generation of Resource Description Framework (RDF) video descriptions. These descriptions are based on two pillars: first, on a combination of user-generated metadata such as title, summary, and tags; and second, on closed captions which can be user-generated, or be auto-generated via speech recognition. The plaintext contents of both pillars are being analyzed using multiple Natural Language Processing (NLP) Web services in parallel whose results are then merged and where possible matched back to concepts in the sense of Linking Open Data (LOD). The final result is a deep-linkable RDF description of the video, and a "scroll-along" view of the video as an example of video visualization formats.".
- 470 abstract "Recognizing that two Semantic Web documents or graphs are similar and characterizing their differences is useful in many tasks, including retrieval, updating, version control and knowledge base editing. We describe several text-based similarity metrics that characterize the relation between Semantic Web graphs and evaluate these metrics for three specific cases of similarity: similarity in classes and properties, similarity disregarding differences in base-URIs, and versioning relationship. We apply these techniques for a specific use case: generating a delta between versions of a Semantic Web graph. We have evaluated our system on several tasks using a collection of graphs from the archive of the Swoogle Semantic Web search engine.".
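One of the simplest similarity metrics of the kind discussed above is Jaccard similarity over triple sets, and a delta is just the two set differences; the actual system uses several text-based metrics, so treat this as an illustrative assumption rather than the paper's method.

```python
def jaccard(g1, g2):
    """Jaccard similarity of two graphs viewed as triple sets."""
    g1, g2 = set(g1), set(g2)
    return len(g1 & g2) / len(g1 | g2) if (g1 or g2) else 1.0

def delta(old, new):
    """Triples to remove from / add to `old` to obtain `new`."""
    old, new = set(old), set(new)
    return sorted(old - new), sorted(new - old)

v1 = {("ex:a", "rdf:type", "ex:Doc"), ("ex:a", "ex:title", "Draft")}
v2 = {("ex:a", "rdf:type", "ex:Doc"), ("ex:a", "ex:title", "Final")}
removed, added = delta(v1, v2)
```

Set-based deltas like this are exact only when blank nodes are absent or consistently labelled, which is one reason the paper also considers base-URI and versioning-aware notions of similarity.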
- 471 abstract "This demo presents RExplorator, an environment that allows users who are not technically savvy, but who understand the problem domain, to explore the data until they understand its structure. They employ a combination of search, query, and faceted navigation in a direct-manipulation, query-by-example style interface. In this process, users can reuse solutions previously found by other users, which may accomplish sub-tasks of the problem at hand. It is also possible to create an end-user-friendly interface that gives them access to the information. Once a solution has been found, it can be generalized and, optionally, made available for reuse by other users. This enables the establishment of a social network of users who share solutions for problems in particular domains (repositories) of interest.".
- 473 abstract "Application testing is a critical component of application development. Testing of Semantic Web applications requires large RDF datasets, conforming to an expected form or schema, and preferably, to an expected data distribution. Finding such datasets often proves impossible, while generating input datasets is often cumbersome. The GRR (Generating Random RDF) system is a convenient, yet powerful, tool for generating random RDF, based on a SPARQL-like syntax. In this poster and demo, we show how large datasets can be easily generated using intuitive commands.".
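Schema-driven random RDF generation in the spirit of GRR can be sketched as below; the mini schema (foaf:Person resources with names and foaf:knows links) and the fixed seed are invented for illustration and do not reflect GRR's SPARQL-like command syntax.

```python
import random

def generate_random_rdf(n, seed=0):
    """Generate triples for `n` random foaf:Person resources."""
    rng = random.Random(seed)  # fixed seed makes the test data reproducible
    names = ["Alice", "Bob", "Carol", "Dan"]
    triples = []
    for i in range(n):
        s = f"ex:person{i}"
        triples.append((s, "rdf:type", "foaf:Person"))
        triples.append((s, "foaf:name", rng.choice(names)))
        if i > 0:  # link each person to a randomly chosen earlier one
            triples.append((s, "foaf:knows", f"ex:person{rng.randrange(i)}"))
    return triples

data = generate_random_rdf(100)  # 100 typed, named, loosely linked people
```

Controlling the value pools and link probabilities is how such a generator approximates an expected data distribution, which is the requirement the abstract highlights.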
- 474 abstract "Stable semantic ontology measurement is crucial to obtaining significant and comparable measurement results. In this paper, we present a summary of the definition of ontology measurement stability and the pre-processing required for stable semantic ontology measurement from [5]. In addition, we describe two existing ontology metrics and compare their stability from the perspectives of structural and semantic ontology measurement, respectively. The experiments show that some structural ontology measurements may be unusable when we want to compare the measurements of different models, unless pre-processing of the models is performed.".
- 476 abstract "Organizations today collect and store large amounts of data in various formats and locations; however, they are sometimes required to locate all instances of a certain type of data. Data classification enables efficient retrieval of information when needed. This work presents a reference implementation for enterprise data classification using Semantic Web technologies. We demonstrate automatic discovery and classification of Personally Identifiable Information (PII) in relational databases, using a classification model in RDF/OWL describing the elements to discover and classify. At the end of the process the results are also stored in RDF, enabling simple navigation between the input model and the findings in different databases. Recorded demo link: https://www.research.ibm.com/haifa/info/demos/piidiscovery_full.htm".
- 479 abstract "Mobile devices contain ever more personal data, such as GPS location, contacts, and music, with which users can create innovative and pragmatic mashup applications in areas such as social networking, e-commerce, and entertainment. We propose a semantic-based mobile mashup platform which enables users to create mashup applications by simply selecting service nodes, linking them together, and configuring some connection parameters. Our platform also offers a recommendation mechanism for linkable services by adding semantic annotations to service descriptions, so that users do not need to read web service specifications in order to find linkable ones. Therefore, users can focus more on the innovation and practicability of their mashup applications, which will surely result in the emergence of abundant mobile mashup applications.".
- 481 abstract "SILK is an expressive Semantic Web rule language and system equipped with scalable reactive higher-order defaults. We present one of its latest novel features: a graphical user interface (GUI) for knowledge entry, query answering, and justification browsing that supports user specification and understanding of advanced courteous prioritized defeasible reasoning. We illustrate the use of the GUI in an example from college-level biology of modeling and reasoning about hierarchically structured causal processes with interfering multiple causes.".
- 483 abstract "The Linking Open Data cloud contains several music related datasets that hold great potential for enhancing the process of research in the field of Music Information Retrieval (MIR) and which, in turn, can be enriched by MIR results. We demonstrate a system with several related aims: to enable MIR researchers to utilise these datasets through incorporation in their research systems and workflows; to publish MIR research output on the Semantic Web linked to existing datasets (thereby also increasing the size and applicability of the datasets for use in MIR); and to present MIR research output, with cross-referencing to other linked data sources, for manipulation and evaluation by researchers and re-use within the wider Semantic Web. By way of example we gather and publish RDF describing signal collections derived from the country of an artist. Genre analysis over these collections and integration of collection and result metadata enables us to ask: "how country is my country?".".
- 484 abstract "During the last decade, there has been intense research and development in creating methodologies and tools able to map Relational Databases with the Resource Description Framework. Although some systems have gained wider acceptance in the Semantic Web community, they either require users to learn a declarative language for encoding mappings, or have limited expressivity. To this end, we present RDOTE, a framework for easily transporting data residing in Relational Databases into the Semantic Web. RDOTE is available under GNU/GPL license and provides friendly graphical interfaces, as well as enough expressivity for creating custom RDF dumps.".
- 488 abstract "Enterprise clouds apply the paradigm of cloud computing to enterprise IT infrastructures, with the goal of providing easy, flexible, and scalable access to both computing resources and IT services. Realizing the vision of the fully automated enterprise cloud involves addressing a range of technological challenges. In this demonstration, we show how semantic technologies can help to address the challenges related to intelligent information management in enterprise clouds. In particular, we address the topics of data integration, collaborative documentation and annotation, and intelligent information access and analytics, and demonstrate solutions that are implemented in the newest addition to our eCloudManager product suite: The Intelligence Edition.".
- 489 abstract "We describe a framework and prototype system for interpreting tables and extracting entities and relations from them, and producing a linked data representation of the table's contents. This can be used to annotate the table or to add new facts to the linked data collection.".
- 491 abstract "The amount of data on the Internet is rapidly growing. Formal languages are used to annotate such data in order to make it machine-understandable; i.e., allow machines to reason about it, to check consistency, to answer queries, or to infer new facts. Essential for this are formalisms that allow for tractable and efficient reasoning algorithms. Particular care is demanded in efficiently responding to the trade-off between expressivity and usefulness. The updated Web Ontology Language (OWL 2) provides dialects that are restricted in their semantic expressivity for optimizing the reasoning behavior; e.g., the OWL 2 EL or OWL 2 RL profiles. Such dialects are very important to respond to the aforementioned trade-off. Profiles reflect particular requirements and yield purposeful balance between expressivity and computational complexity. The support for dialects is not only given in OWL 2, but also in the Rule Interchange Format (RIF) standards. RIF specifies formalisms for the knowledge exchange between different rule systems. The same applies for the WSML language that provides variants for Description Logics and rule-based reasoning. The goal remains the same, formalisms that are expressive enough to be useful, while exhibiting reasoning characteristics that can scale to the size of the Web. Leveraging this is exactly the objective of the WSML2Reasoner framework. In Section 2 we present WSML2Reasoner and our reasoners IRIS and Ell".
- 492 abstract "Smartphones are becoming the main personal information repositories. Unfortunately, this information is stored in independent silos managed by applications. We have seen this before: in the Palm operating system, applications worked by opening other applications' "databases", which were only accessible when the application schemas were known. Our goal is to provide support for applications to deliver their data in RDF. This would allow applications to exploit this information in a uniform way without knowing application schemas beforehand. It would also connect this information to the Semantic Web and the Web of Data through references from and to device information. We present a way to do this in a uniform manner within the Android platform. Moreover, we propose to do it along the linked data principles (provide RDF, describe in ontologies, use URIs, link other sources). We first consider how the integration of RDF could be further pushed within the context of the Android platform. We demonstrate its feasibility through a linked data browser that allows for browsing the phone information.".
- 495 abstract "The Web of Linked Data is growing and currently consists of several hundred interconnected data sources altogether serving over 25 billion RDF triples to the Web. What has hampered the exploitation of this global dataspace up till now is the lack of an open-source Linked Data crawler which can be employed by Linked Data applications to localize (parts of) the dataspace for further processing. With LDSpider, we are closing this gap in the landscape of publicly available Linked Data tools. LDSpider traverses the Web of Linked Data by following RDF links between data items; it supports different crawling strategies and allows crawled data to be stored either in files or in an RDF store.".
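LDSpider's follow-your-nose traversal can be illustrated with a toy breadth-first crawler; the in-memory `web` dictionary stands in for real HTTP dereferencing and RDF parsing, so everything below is an illustrative assumption rather than LDSpider's implementation.

```python
from collections import deque

def crawl(web, seeds, max_depth=2):
    """Dereference seed URIs, then follow RDF links breadth-first."""
    store, seen = [], set()
    frontier = deque((uri, 0) for uri in seeds)
    while frontier:
        uri, depth = frontier.popleft()
        if uri in seen or uri not in web:
            continue  # already fetched, or not dereferenceable
        seen.add(uri)
        for (s, p, o) in web[uri]:
            store.append((s, p, o))
            if depth < max_depth and o.startswith("http://"):
                frontier.append((o, depth + 1))  # follow the RDF link
    return store, seen

web = {  # toy stand-in for the Web: URI -> triples served at that URI
    "http://a.org/x": [("http://a.org/x", "owl:sameAs", "http://b.org/y")],
    "http://b.org/y": [("http://b.org/y", "rdfs:label", "y")],
}
triples, visited = crawl(web, ["http://a.org/x"])
```

Swapping the queue for a priority queue, or bounding by host rather than depth, gives the different crawling strategies the abstract alludes to.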
- 496 abstract "This demo paper describes an extension of the Enrycher text enhancement system, which annotates words in context, from a text fragment, with RDF/OWL encoded senses from WordNet and OpenCyc. The extension is based on a general purpose disambiguation algorithm which takes advantage of the structure and/or content of knowledge resources, reaching state-of-the-art performance when compared to other knowledge-lean word sense disambiguation algorithms.".
- 498 abstract "This poster paper presents the design and implementation of an RDFS reasoner based on a backward chaining approach and implemented on a clustered RDF triplestore. The system presented, called 4sr, uses 4store as base infrastructure. In order to achieve a highly scalable system we implemented the reasoning at the lowest level of the quad store, the bind operation. The bind operation in 4sr traverses the quad store indexes matching or expanding the query variables with awareness of the RDFS semantics.".
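The RDFS-aware bind operation described for 4sr can be sketched as expanding an rdf:type query pattern with the transitive subclasses of the queried class before matching, so no inferred triples need to be materialised in the store; the vocabulary and store layout below are illustrative assumptions, not 4store's internals.

```python
def subclasses(cls, subclass_of):
    """`cls` plus all of its transitive subclasses."""
    result, stack = {cls}, [cls]
    while stack:
        c = stack.pop()
        for sub, sup in subclass_of:
            if sup == c and sub not in result:
                result.add(sub)
                stack.append(sub)
    return result

def bind_type(store, subclass_of, cls):
    """Answer the pattern (?s rdf:type cls) with backward-chained reasoning."""
    classes = subclasses(cls, subclass_of)
    return sorted(s for (s, p, o) in store
                  if p == "rdf:type" and o in classes)

schema = [("ex:Dog", "ex:Animal"), ("ex:Puppy", "ex:Dog")]
store = [
    ("ex:rex",  "rdf:type", "ex:Dog"),
    ("ex:spot", "rdf:type", "ex:Puppy"),
    ("ex:fern", "rdf:type", "ex:Plant"),
]
animals = bind_type(store, schema, "ex:Animal")
```

Pushing the expansion into the lowest-level bind, as the abstract describes, means every query pattern benefits from RDFS semantics without any change to the stored quads.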
- 499 abstract "RightField is an open source application that provides a mechanism for embedding ontology annotation support for Life Science data in Microsoft Excel spreadsheets. Individual cells, columns, or rows can be restricted to particular ranges of allowed classes or instances from chosen ontologies. Informaticians, with experience in ontologies and data annotation prepare RightField-enabled spreadsheets with embedded ontology term selection for use by a wider community of laboratory scientists. The RightField-enabled spreadsheet presents selected ontology terms to the users as a simple drop-down list, enabling scientists to consistently annotate their data without the need to understand the numerous metadata standards and ontologies available to them. The spreadsheets are self-contained and remain "vanilla" Excel so that they can be readily exchanged, processed offline and are usable by regular Excel tooling. The result is semantic annotation by stealth, with an annotation process that is less error-prone, more efficient, and more consistent with community standards. RightField has been developed and deployed for a consortium of some 300 Systems Biologists. RightField is open source under a BSD license and freely available from http://www.sysmo-db.org/RightField.".
- 501 abstract "Semantic matchmaking, i.e., the task of finding matching (Web) services based on semantic information, has been a prominent field of research lately, and a wide range of supporting tools both for research and practice have been published. However, no suitable solution for the visualization of matchmaking results exists so far. In this paper, we present the Matchmaking Visualizer, an application for the visual representation and analysis of semantic matchmaking results. It allows for comparing matchmaking approaches for semantic Web services in a fine-grained manner and thus complements existing evaluation suites that are based on rather coarse-grained information retrieval metrics.".
- 502 abstract "WebProtege is a highly customizable Web interface for browsing and editing ontologies, which provides support for collaboration. We have created a customized version to support the World Health Organization with the collaborative development of the 11th revision of the International Classification of Diseases (ICD-11). Our demo will present this customized version and focus on how content creation and collaboration is being supported in WebProtege for the development of ICD-11.".
- 503 abstract "We present RDF On the Go, a full-fledged RDF storage and SPARQL query processor for mobile devices. Implemented by adapting the widely used Jena and ARQ Semantic Web Toolkit and query engine, it uses Berkeley DB for storing the RDF data, R-Trees for indexing spatial data, and a query processor that supports both standard and spatial queries. By storing and querying RDF data locally at the user's mobile device, RDF On the Go contributes to improving scalability, decreasing transmission costs, and controlling access to the user's personal information. It also enables the development of a next generation of mobile applications. RDF On the Go is available for the Android platform and can be downloaded at http://rdfonthego.googlecode.com/.".
- 505 abstract "The Living Document Project aims to harness the collective knowledge within communities in digital libraries, making it possible to enhance knowledge discovery and dissemination as well as to facilitate interdisciplinary collaborations amongst readers. Here we present a prototype that allows users to annotate content within digital libraries; the annotation schema is built upon the Annotation Ontology; data is available as RDF, making it possible to publish it as linked data and use SPARQL and SWRL for querying, reasoning, and processing. Our demo illustrates how a social tagging system could be used within the context of digital libraries in life sciences so that users are able to better organize, share, and discover knowledge embedded in research articles. Availability: http://www.biotea.ws/videos/ld_ao/ld_ao.html".
- 506 abstract "The Smart Grid aims at making the current energy grid more efficient and eco-friendly. The Smart Grid features an IT-layer, which allows communication between a multitude of stakeholders and will have to be integrated with other "smart" systems (e.g., smart factories or smart cities) to operate effectively. Thus, many participants will be involved and will exchange large volumes of data, leading to a heterogeneous system with ad-hoc data exchange in which centralised coordination and control will be very difficult to achieve. In this paper, we show parallels between requirements for the (Semantic) Web and the Smart Grid. We argue that the communication architecture for the Smart Grid can be built upon existing (Semantic) Web technologies. We point out differences between the existing Web and the Smart Grid, thereby identifying remaining challenges.".
- 508 abstract "Large amounts of data reside in sources which are not web-accessible. Wrappers - small software programs that provide uniform access to data - are often used to transform legacy data sources for use on the Semantic Web. Wrappers, as well as links between data from wrapped sources and data that already exists on the Web, are typically created in an ad-hoc fashion. We propose a principled approach to integrating data-providing services with Linked Data. Linked Data Services (LIDS) can be used in various application scenarios to provide uniform access to legacy data and enable automatic interlinkage with existing data sets.".
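The wrapper idea can be sketched in a few lines; the "legacy database", namespace, and record layout below are invented for illustration and are not from the LIDS paper. A wrapper serves each legacy record as RDF triples under a stable URI so that other data sets can link to it:

```python
# Sketch (assumed data and names): a wrapper exposing a legacy record
# as RDF triples, represented here simply as (subject, predicate, object).

LEGACY_DB = {"42": {"name": "Berlin", "country": "DE"}}  # hypothetical source

BASE = "http://example.org/lids/city/"          # assumed service namespace
FOAF_NAME = "http://xmlns.com/foaf/0.1/name"    # reused FOAF property

def wrap(record_id: str) -> list[tuple[str, str, str]]:
    """Serve the legacy record identified by `record_id` as triples."""
    rec = LEGACY_DB[record_id]
    subj = BASE + record_id                     # stable, linkable URI
    return [
        (subj, FOAF_NAME, rec["name"]),
        (subj, "http://example.org/vocab/country", rec["country"]),
    ]
```

Because the subject URI is stable, an external data set can assert links (e.g. owl:sameAs) against it, which is the interlinkage the abstract refers to.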
- 509 abstract "To the best of our knowledge, existing Semantic Web (SW) search systems fail to index RDF graph structures as graphs. They either do not index graph structures and retrieve them by run-time formal queries, or index all raw triples from the back-end repositories. This increases the overhead of indexing for very large RDF documents. Moreover, the graph explorations from raw triples can be complicated when blank nodes, RDF collections and containers are involved. This paper provides a means to index SW data in graph structures, which can potentially benefit graph exploration and ranking in SW querying.".
- 51 abstract "Starting from the general framework for annotated RDF(S) which we have presented in previous work, we address the development of a query language - AnQL - that is inspired by SPARQL, including several features of SPARQL 1.1. As a side effect we propose formal definitions of the semantics of these features (subqueries, aggregates, assignment, solution modifiers) which could serve as a basis for the ongoing work in SPARQL 1.1. We demonstrate the value of such a framework by comparing our approach to previously proposed extensions of SPARQL and show that AnQL generalises and extends them.".
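The annotated-RDF setting behind AnQL can be illustrated with a toy fuzzy domain; the data and function below are a hypothetical sketch, not the authors' formalism in full. Each triple carries an annotation from [0, 1], conjunctive matching combines annotations with the domain's meet (min), and alternative derivations are combined with its join (max):

```python
# Illustrative sketch (assumed data): annotated triples under fuzzy
# semantics, where matching a two-step pattern combines annotations.

TRIPLES = [
    ("alice", "knows", "bob", 0.9),
    ("bob", "knows", "carol", 0.6),
]

def match_path(x: str, z: str) -> float:
    """Annotation of the pattern 'x knows ?y . ?y knows z'."""
    best = 0.0
    for s1, p1, o1, a1 in TRIPLES:
        if s1 == x and p1 == "knows":
            for s2, p2, o2, a2 in TRIPLES:
                if s2 == o1 and p2 == "knows" and o2 == z:
                    # conjunction: min; alternative derivations: max
                    best = max(best, min(a1, a2))
    return best
```

Swapping the domain (e.g. temporal intervals with intersection and union) changes min/max but not the query structure, which is the generality the framework aims at.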
- 510 abstract "In general, ranking entities (resources) on the Semantic Web (SW) is subject to importance, relevance, and query length. Few existing SW search systems cover all of these aspects. Moreover, many existing efforts simply reuse the technologies from conventional Information Retrieval (IR), which are not designed for SW data. This paper proposes a ranking mechanism, which includes all three categories of ranking and is tailored to SW data.".
- 512 abstract "In this paper, we present a project which extends an SMW+ semantic wiki with a Linked Data Integration Framework that performs Web data access, vocabulary mapping, identity resolution, and quality evaluation of Linked Data. As a result, a large collection of neurogenomics-relevant data from the Web can be flexibly transformed into a unified ontology, allowing unified querying, navigation, and visualization; as well as support for wiki-style collaboration, crowdsourcing, and commentary on chosen data sets.".
- 514 abstract "FOAF is widely used on the Web to describe people, groups and organizations and their properties. Since FOAF does not require unique IDs, it is often unclear when two FOAF instances are co-referent, i.e., denote the same entity in the world. We describe a prototype system that identifies sets of co-referent FOAF instances using logical constraints (e.g., IFPs), strong heuristics (e.g., FOAF agents described in the same file are not co-referent), and a Support Vector Machine (SVM) generated classifier.".
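The IFP-based logical constraint mentioned above is easy to sketch; the blank-node identifiers and mailbox values below are hypothetical. Since foaf:mbox is an inverse functional property, two FOAF instances sharing a mailbox value must denote the same entity and can be grouped as co-referent:

```python
# Sketch (assumed data): grouping FOAF instances by shared values of an
# inverse functional property (IFP) such as foaf:mbox.

MBOX = "http://xmlns.com/foaf/0.1/mbox"   # an IFP in the FOAF vocabulary

TRIPLES = [
    ("_:a", MBOX, "mailto:jane@example.org"),
    ("_:b", MBOX, "mailto:jane@example.org"),
    ("_:c", MBOX, "mailto:joe@example.org"),
]

def coreferent_sets(triples):
    """Return groups of instances whose IFP values coincide."""
    by_value = {}
    for s, p, o in triples:
        if p == MBOX:
            by_value.setdefault(o, set()).add(s)
    return [grp for grp in by_value.values() if len(grp) > 1]
```

The heuristics and SVM classifier from the abstract would then handle the cases this hard constraint cannot decide.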
- 515 abstract "Building applications over Linked Data often requires a mapping between the application model and the ontology underlying the source dataset in the Linked Data cloud. Explicitly formulating these mappings demands a comprehensive understanding of the underlying schemas (RDF ontologies) of the source and target datasets. This task can be supported by integrating the process of schema exploration into the mapping process, helping the application designer find the implicit relationships that she wants to map. This demo describes Fusion - a framework for closing the gap between the application model and the underlying ontologies in the Linked Data cloud. Fusion simplifies the definition of mappings by providing a visual user interface that integrates the exploratory process and the mapping process. Its architecture allows the creation of new applications through the extension of existing Linked Data sources with additional data.".
- 518 abstract "Publishing Linked Data on the Web has become much easier with tools such as Drupal. However, consuming that data and presenting it in a meaningful way is still difficult for both Web developers and for Semantic Web practitioners. We demonstrate a module for Drupal which supports visual query building for SPARQL queries and enables meaningful displays of the query result.".
- 519 abstract "The central idea of the Web of Data is to interlink data items using RDF links. However, in practice most data sources are not sufficiently interlinked with related data sources. The Silk Link Discovery Framework addresses this problem by providing tools to generate links between data items based on user-provided link specifications. It can be used by data publishers to generate links between data sets as well as by Linked Data consumers to augment Web data with additional RDF links. In this poster we present the Silk Link Discovery Framework and report on two usage examples in which we employed Silk to generate links between two data sets about movies as well as to find duplicate persons in a stream of data items that is crawled from the Web.".
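A Silk link specification is declarative in practice; the miniature below only sketches its comparison-and-threshold core with invented movie data. Entities from two data sets are compared by a string-similarity measure, and an owl:sameAs link is emitted when the score passes a threshold:

```python
# Hypothetical miniature of a link specification: not Silk's actual
# declarative language, just the compare-and-threshold idea behind it.
from difflib import SequenceMatcher

SOURCE = {"m1": "Blade Runner"}                      # assumed data set A
TARGET = {"x9": "Blade Runner", "x7": "Casablanca"}  # assumed data set B

def generate_links(threshold: float = 0.9):
    """Emit owl:sameAs links for label pairs above the threshold."""
    links = []
    for sid, slabel in SOURCE.items():
        for tid, tlabel in TARGET.items():
            score = SequenceMatcher(None, slabel.lower(), tlabel.lower()).ratio()
            if score >= threshold:
                links.append((sid, "owl:sameAs", tid))
    return links
```

The same mechanism covers both usage examples from the abstract: linking two data sets, and flagging near-duplicate records inside one stream.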
- 520 abstract "Adopting keyword query interface to semantic search on RDF data can help users keep away from learning the SPARQL query syntax and understanding the complex and fast evolving data schema. The existing approaches are divided into two categories: instance-based approaches and schema-based approaches. The instance-based approaches relying on the original RDF graph can generate precise answers but take a long processing time. In contrast, the schema-based approaches relying on the reduced summary graph require much less processing time but cannot always generate correct answers. In this paper, we propose a novel approach based on a hybrid graph which can achieve significant improvements on processing time with a limited accuracy drop compared with instance-based approaches, and meanwhile, can achieve promising accuracy gains at an affordable time cost compared with schema-based approaches.".
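The trade-off between instance-based and schema-based search rests on the summary graph; the construction below is a simplified sketch with assumed data, not the paper's hybrid algorithm. Collapsing each instance into its class yields a much smaller graph to search, at the price of instance-level detail:

```python
# Sketch (assumed data): building a schema summary graph from an RDF
# instance graph by replacing each instance with its rdf:type class.

TYPE = "rdf:type"

INSTANCE_GRAPH = [
    ("p1", TYPE, "Person"),
    ("u1", TYPE, "University"),
    ("p1", "worksAt", "u1"),
]

def summary_graph(triples):
    """Collapse instances to their classes, keeping the edge labels."""
    cls = {s: o for s, p, o in triples if p == TYPE}
    edges = set()
    for s, p, o in triples:
        if p != TYPE and s in cls and o in cls:
            edges.add((cls[s], p, cls[o]))
    return edges
```

A hybrid graph, as proposed in the abstract, would keep parts of the instance graph alongside such a summary to recover some of the lost precision.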
- 521 abstract "This paper describes an application which aims at producing Polish descriptions for the data available as Linked Open Data, the MusicBrainz knowledge base contents in particular.".
- 522 abstract "Although research towards the reduction of the knowledge acquisition bottleneck in ontology engineering is advancing, a central issue remains unsolved: Light-weight processes for collaborative knowledge engineering by a massive user base. In this demo, we present HANNE, a holistic application that implements all necessary prerequisites for Navigational Knowledge Engineering and thus reduces the complexity of creating expressive knowledge by disguising it as navigation. HANNE enables users and domain experts to navigate over knowledge bases by selecting examples. From these examples, formal OWL class expressions are created and refined by a scalable Iterative Machine Learning approach. When saved by users, these class expressions form an expressive OWL ontology, which can be exploited in numerous ways: as navigation suggestions for users, as a hierarchy for browsing, as input for a team of ontology editors.".
- 524 abstract "We present STEREO, a system that offers an expressive formalism and implements techniques firmly grounded on logic to solve the Service Selection Problem (SSP). STEREO adopts the Local-As-View approach (LAV) to represent services' functionality as views on ontology concepts, while user requests are expressed as conjunctive queries on these concepts. Additionally, users can describe their preferences, which are used to rank the solutions. We discuss the LAV formulation of SSP; then, we illustrate the encoding of SSP as a logical theory whose models are in correspondence with the problem solutions, and in the presence of preferences, the best models are in correspondence with the best-ranked solutions. We demonstrate STEREO and the properties of modern SAT solvers that provide an efficient and scalable solution to SSP.".
- 525 abstract "Most of the data on the Web is stored in relational databases. In order to make the Semantic Web grow we need to provide easy-to-use tools to convert those databases into linked data, so that even people with little knowledge of the semantic web can use them. Some programs able to convert relational databases into RDF files have been developed, but the user still has to manually link the database attribute names to existing ontology properties, and the generated "linked data" is not actually linked with external relevant data. We propose here a method to automatically associate attribute names with existing ontology entities in order to complete the automation of the conversion of databases. We also present a way - rather basic, but with a low error rate - to automatically add links to relevant data from other data sets.".
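The kind of automatic attribute-to-property matching described above can be sketched with string similarity; the property list, threshold, and helper below are illustrative assumptions, and the paper's actual method may differ. Each relational column name is compared against the local names of candidate ontology properties:

```python
# Rough sketch (assumed vocabulary and threshold): match a relational
# attribute name to the most similar ontology property name.
from difflib import SequenceMatcher

ONTOLOGY_PROPERTIES = ["foaf:name", "foaf:birthday", "foaf:homepage"]

def best_property(attribute: str, threshold: float = 0.5):
    """Best-matching property for `attribute`, or None below the threshold."""
    def local(prop):                     # drop the prefix before comparing
        return prop.split(":", 1)[1]
    scored = [(SequenceMatcher(None, attribute.lower(), local(p)).ratio(), p)
              for p in ONTOLOGY_PROPERTIES]
    score, prop = max(scored)
    return prop if score >= threshold else None
```

Returning None for low scores is what keeps the error rate down: an uncertain column is better left unmapped than mapped wrongly.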
- 527 abstract "Traditionally, Semantic Web applications either included a web crawler or relied on external services to gain access to the Web of Data. Recent efforts have enabled applications to query the entire Semantic Web for up-to-date results. Such approaches are based on either centralized indexing of semantically annotated metadata or link traversal and URI dereferencing as in the case of Linked Open Data. They impose a number of limiting assumptions, thus breaking the openness principle of the Web. In this demo we present a novel technique called Avalanche, designed to allow a data surfer to query the Semantic Web transparently. The technique makes no prior assumptions about data distribution. Specifically, Avalanche can perform "live" queries over the Web of Data. First, it gets on-line statistical information about the data distribution, as well as bandwidth availability. Then, it plans and executes the query in a distributed manner, trying to quickly provide first answers.".
- 529 abstract "Ontology alignment is the task of matching concepts and terminology from multiple ontologies. Ontology alignment is especially relevant in the semantic web domain as RDF documents and OWL ontologies are quite heterogeneous, yet often describe related concepts. The end goal for ontology matching is to allow the knowledge sets to interoperate. To achieve this goal, it is necessary for queries to return results that include knowledge, and inferred knowledge, from multiple datasets and terminologies, using the alignment information. Furthermore, ontology alignment is not an exact science, and concept matchings often involve uncertainty. The goal of this paper is to provide a semantic web repository that supports applying alignments to the dataset and reasoning with alignments. Our goal is to provide high performance queries that return results that include inference across alignment matchings, and rank results using certainty information. Our semantic web repository uses distributed inference and probabilistic reasoning to allow datasets to be efficiently updated with ontology alignments. We materialize the inferred, aligned data and make it available in efficient queries.".
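The certainty-ranked querying described above can be illustrated with invented numbers; the alignment, data, and scoring rule below are hypothetical, not the repository's actual mechanism. An answer derived through an uncertain concept matching inherits that matching's confidence, and answers are returned best-certainty first:

```python
# Illustrative sketch (assumed data and confidences): query answering
# across an uncertain ontology alignment, ranked by certainty.

ALIGNMENTS = {("ont1:Author", "ont2:Writer"): 0.8}   # matching confidence

DATA = [("bob", "rdf:type", "ont2:Writer"),
        ("ann", "rdf:type", "ont1:Author")]

def query_instances(concept: str):
    """Instances of `concept`, including those inferred via alignments."""
    results = {s: 1.0 for s, p, o in DATA if o == concept}   # direct: certain
    for (c1, c2), conf in ALIGNMENTS.items():
        if c1 == concept:
            for s, p, o in DATA:
                if o == c2:
                    results[s] = max(results.get(s, 0.0), conf)
    return sorted(results.items(), key=lambda kv: -kv[1])
```

Materializing such inferred, aligned facts ahead of time, as the abstract describes, moves this work out of the query path so queries stay fast.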
- 532 abstract "The World Wide Web (WWW), as a ubiquitous medium for publication and exchange, has already significantly influenced the way historians work: the availability of public catalogs and bibliographies enables efficient research of relevant content for a certain investigation; the increasing digitization of works from historical archives and libraries, in addition, enables historians to directly access historical sources remotely. The capabilities of the WWW as a medium for collaboration, however, are only starting to be explored. Many historical questions are only answerable by combining information from different sources, from different researchers and organizations. Furthermore, after analyzing original sources, the derived information is often more comprehensive than can be captured by simple keyword indexing. In [3] we report about the application of an adaptive, semantics-based knowledge engineering approach for the development of a prosopographical knowledge base. In this demonstration we will showcase the comprehensive prosopographical knowledge base and its potential for applications. In prosopographical research, historians analyze common characteristics of historical groups by studying statistically relevant quantities of individual biographies. Untraceable periods in biographies can be determined on the basis of such analyses, in combination with statistical examinations as well as patterns of relationships between individuals and their activities. In our case, researchers from the Historical Seminar at the University of Leipzig aimed at creating a prosopographical knowledge base about the life and work of professors in the 600-year history of the University of Leipzig, ranging from the year 1409 till 2009 - the Catalogus Professorum Lipsiensis (CPL).".
- 533 abstract "Contextify is a tool for maximizing user productivity by showing email-related contextual information. The contextual information is determined based on the currently selected email and includes related emails, people, attachments and web links. This content is displayed in a sidebar in Microsoft Outlook and in a special dialog that can display an extended context.".
- 534 abstract "An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the networks. The output of one network in response to a stimulus to another network can be interpreted as an analogical mapping. In a similar fashion, the networks can be explicitly trained to map specific items in one domain to specific items in another domain. A representation layer helps the network learn relationship mappings with a direct training method. OMNN is applied to several OAEI benchmark test cases to test its performance on ontology mapping. Results show that the OMNN approach is competitive with the top-performing systems that participated in OAEI 2009.".
- 535 abstract "The combination of semantic technologies and social software has become more and more popular in the last few years, as can be seen by the emergence of Semantic Wikis or the popularity of vocabularies such as FOAF or SIOC. The KiWi project is based upon these principles and offers features required for Social Media applications such as versioning, (semantic) tagging, rich text editing, easy linking, rating and commenting, as well as advanced "smart" services such as recommendation, rule-based reasoning, information extraction, intelligent search and querying, a sophisticated social reputation system, vocabulary management, and rich visualization. KiWi can be used both as a platform for building custom Semantic Media applications and as a Semantic Social Index, integrating content and data from a variety of different sources, e.g. Wikis, blogs and content management systems in an enterprise intranet. Third-party applications can access the KiWi system using simple-to-use web services. The demo presents the whole functionality of the Open Source development platform KiWi in its final version within one integrated project management scenario. Furthermore, it shows different KiWi-based Social Media projects to illustrate its various fields of application.".
- 536 abstract "Due to their high time and space complexity, most existing ontology matching systems do not scale well to large ontology matching problems. Moreover, the popular divide-and-conquer matching solution faces two disadvantages: first, partitioning an ontology is a complicated process; second, it leads to a loss of semantic information during matching. To avoid these drawbacks, this paper presents an efficient large ontology matching system, Lily-LOM, which uses a non-partitioned method. Lily-LOM is based on two kinds of reduction anchors, i.e. positive and negative reduction anchors, to reduce the time complexity. Some empirical strategies for reducing the space complexity are also discussed. The experiments show that Lily-LOM is effective.".
- 539 abstract "Geo Semantic Technology is evaluated as the core technology for supporting the interoperability of geospatial data and building an urban computing environment. We semantically integrated LinkedGeoData and OpenStreetMap from the LOD cloud with a Korean POI data set, and have researched the development of an intelligent road sign management system based on the LarKC platform.".