Matches in ScholarlyData for { ?s <https://w3id.org/scholarlydata/ontology/conference-ontology.owl#abstract> ?o. }
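The matches below correspond to the graph pattern above. A minimal SPARQL SELECT query of the following form would retrieve them; note that the endpoint (assumed here to be a public ScholarlyData SPARQL endpoint) and the LIMIT are illustrative assumptions, not part of the original pattern.

    # Sketch: list each resource together with its abstract in ScholarlyData
    # (to be run against a ScholarlyData SPARQL endpoint; endpoint URL is an assumption)
    PREFIX conf: <https://w3id.org/scholarlydata/ontology/conference-ontology.owl#>
    SELECT ?s ?o
    WHERE {
      ?s conf:abstract ?o .
    }
    LIMIT 100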
- 617 abstract "Hierarchical classifications are used pervasively by humans as a means to organize their data and knowledge about the world. One of their main advantages is that natural language labels, used to describe their contents, are easily understood by human users. However, at the same time, this is also one of their main disadvantages, as these same labels are ambiguous and very hard for software agents to reason about. This fact creates an insuperable hindrance to embedding classifications in the Semantic Web infrastructure. This paper presents an approach to converting classifications into lightweight ontologies, and it makes the following contributions: (i) it identifies the main NLP problems related to the conversion process and shows how they are different from the classical problems of NLP; (ii) it proposes heuristic solutions to these problems, which are especially effective in this domain; and (iii) it evaluates the proposed solutions by testing them on DMoz data.".
- 631 abstract "The ability to compute the differences that exist between two RDF models is an important step to cope with the evolving nature of the Semantic Web (SW). In particular, RDF Deltas can be employed to reduce the amount of data that need to be exchanged and managed over the network and hence build advanced SW synchronization and versioning services. By considering Deltas as sets of change operations, in this paper we study various RDF comparison functions in conjunction with the semantics of the underlying change operations and formally analyze their possible combinations in terms of correctness, minimality, semantic identity and redundancy properties.".
- 645 abstract "As an extension to the current Web, the Semantic Web will not only contain structured data with machine understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index semantic web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.".
- 659 abstract "Several proposals have been put forward to support distributed agent cooperation in the Semantic Web, by allowing concepts and roles in one ontology to be reused in another ontology. In general, these proposals reduce the autonomy of each ontology by defining the semantics of the ontology to depend on the semantics of the other ontologies. We propose a new framework for managing autonomy in a set of cooperating ontologies (or ontology space). In this framework, each language entity (concept/role/individual) in an ontology may have its meaning assigned either locally with respect to the semantics of its own ontology, to preserve the autonomy of the ontology, or globally with respect to the semantics of any neighbouring ontology in which it is defined, thus enabling semantic cooperation between multiple ontologies. In this way, each ontology has a "subjective semantics" based on local interpretation and a "foreign semantics" based on semantic binding to neighbouring ontologies. We study the properties of these two semantics and describe the conditions under which entailment and satisfiability are preserved. We also introduce two reasoning mechanisms under this framework: "cautious reasoning" and "brave reasoning". Cautious reasoning is done with respect to a local ontology and its neighbours (those ontologies in which its entities are defined); brave reasoning is done with respect to the transitive closure of this relationship. This framework is independent of ontology languages. As a case study, for the Description Logic ALCN we present two tableau-based algorithms for performing each form of reasoning and prove their correctness.".
- 673 abstract "This paper deals with the problem of exploring hierarchical semantics from social annotations. Recently, social annotation services have become more and more popular on the Semantic Web. They allow users to arbitrarily annotate web resources, thus largely lowering the barrier to cooperation. Furthermore, by providing abundant meta-data resources, social annotation might become a key to the development of the Semantic Web. However, on the other hand, social annotation has its own apparent limitations, for instance, 1) ambiguity and synonym phenomena and 2) lack of hierarchical information. In this paper, we propose an unsupervised model to automatically derive hierarchical semantics from social annotations. Using the social bookmarking service Del.icio.us as an example, we demonstrate that the derived hierarchical semantics has the ability to compensate for those shortcomings. We further apply our model to another data set from Flickr to test our model's applicability in different environments. The experimental results demonstrate our model's efficiency.".
- 687 abstract "Semantic search promises to provide more accurate results than present-day keyword search. However, progress with semantic search has been delayed due to the complexity of its query languages. In this paper, we explore a novel approach of adapting keywords to querying the semantic web: the approach automatically translates keyword queries into formal logic queries so that end users can use familiar keywords to perform semantic search. A prototype system named ‘SPARK’ has been implemented in light of this approach. Given a keyword query, SPARK outputs a ranked list of SPARQL queries as the translation result. The translation in SPARK consists of three major steps: term mapping, query graph construction and query ranking. Specifically, a probabilistic query ranking model is proposed to select the most likely SPARQL query. In the experiment, SPARK achieved an encouraging translation result.".
- 71 abstract "Design patterns are widely-used software engineering abstractions which define guidelines for modeling common application scenarios. Ontology design patterns are the extension of software patterns for knowledge acquisition in the Semantic Web. In this work we present a design pattern for representing relevance depending on context in OWL ontologies, i.e. to assert which knowledge from the domain ought to be considered in a given scenario. Besides the formal semantics and the features of the pattern, we describe a reasoning procedure to extract relevant knowledge in the resulting ontology and a plug-in for Protégé which assists pattern use.".
- 85 abstract "An important open question in the semantic Web is the precise relationship between the RDF(S) semantics and the semantics of standard knowledge representation formalisms such as logic programming and description logics. In this paper we address this issue by considering embeddings of RDF and RDFS in logic. Using these embeddings, combined with existing results about various fragments of logic, we establish several novel complexity results. The embeddings we consider show how techniques from deductive databases and description logics can be used for reasoning with RDF(S). Finally, we consider querying RDF graphs and establish the data complexity of conjunctive querying for the various RDF entailment regimes.".
- 99 abstract "The approach of using ontology reasoning to cleanse the output of information extraction tools was first articulated in SemantiClean. A limiting factor in applying this approach has been that ontology reasoning to find inconsistencies does not scale to the size of data produced by information extraction tools. In this paper, we describe techniques to scale inconsistency detection, and illustrate the use of our techniques to produce a consistent subset of a knowledge base with several thousand inconsistencies.".
- 10 abstract "In business webs within the Internet of Services, arbitrary services shall be composed into new composite services, thereby creating tradeable goods. A composite service can be part of another composite service and so on. Since business partners may meet for just one transaction, without a regular business relationship that would justify frame contracts, ad-hoc automated contracting needs to be established. In addition, services have an intangible character and are therefore prone to illegal reproduction. Thus intellectual property rights have to be considered. Our research approach is to assess the applicability of copyright law for semantic web services and develop a concept for automated contracting. Methodologies to be used are in the field of ontology modeling and reasoning.".
- 18 abstract "In this research we investigate to what extent explicit semantics can be used to support end users with the exploration of a large heterogeneous collection. In particular we consider cultural heritage, a knowledge-rich domain in which collections are typically described by multiple thesauri. We focus on three types of end user functionality. First, searching for terms within multiple thesauri to support manual annotation. Second, keyword search, as it has become the de-facto standard to access data on the web. Third, faceted browsing, as it has become a popular method to interactively explore (image) collections. For these three tasks we question the role of explicit semantics in the search algorithm, the result organization and visualization, and how to evaluate the added value for end users. We investigate these questions by the implementation and evaluation of three prototype systems on top of large and real-world data collections.".
- 20 abstract "Static ontology learning from large text corpora is a well understood task, while evolving ontologies dynamically from user input has rarely been addressed so far. Evolution of ontologies has to deal with vague or incomplete information. Accordingly, the formalism used for knowledge representation must be able to deal with this kind of information. Classical logical approaches such as description logics are particularly poor at addressing uncertainty. Ontology evolution may benefit from exploring probabilistic or fuzzy approaches to knowledge representation. In this thesis an approach to evolve and update ontologies is developed which uses explicit and implicit user input and extends probabilistic approaches to ontology engineering.".
- 25 abstract "As yet there has been very little uptake of the semantic web within the mainstream internet environment. In contrast, 'Web 2.0' has seen an explosion in uptake due to (among others) rich user experience, user participation and collective intelligence. We intend to make use of user-driven methodologies that exist within ‘Web 2.0’ to enhance semantic mapping. This paper is an extended abstract on PhD work that will identify interactions that are acceptable, efficient and effective for casual web users, enabling them to achieve semantic mappings gradually over time between information models of interest to the user.".
- 26 abstract "This PhD proposal is about the development of new methods for information access. Two new approaches are proposed: Multi-Grained Query Answering that bridges the gap between Information Retrieval and Question Answering and Learning-Enhanced Query Answering that enables the improvement of retrieval performance based on the experience of previous queries and answers.".
- 31 abstract "The usability and the strong social dimension of Web2.0 applications have encouraged users to create, annotate and share their content, thus leading to a rich and content-intensive Web. Despite that, Web2.0 content lacks the explicit semantics that would allow it to be used in large-scale intelligent applications. At the same time the advances in Semantic Web technologies imply a promising potential for intelligent applications capable of integrating distributed content and knowledge from various heterogeneous resources. We present FLOR, a tool that performs semantic enrichment of folksonomy tagspaces by exploiting online ontologies, thesauri and other knowledge sources.".
- 33 abstract "As current reasoning techniques are not designed for massive parallelisation, the usage of parallel computation techniques in reasoning constitutes a major research problem. I will propose two possibilities for applying parallel computation techniques to ontology reasoning: parallel processing of independent ontological modules, and tailoring the reasoning algorithms to parallel architectures.".
- 12 abstract "In cultural heritage, large virtual collections are coming into existence. Such collections contain heterogeneous sets of metadata and vocabulary concepts, originating from multiple sources. In the context of the E-Culture demonstrator we have shown earlier that such virtual collections can be effectively explored with keyword search and semantic clustering. In this paper we describe the design rationale of ClioPatria, the E-Culture open-source software which provides APIs for scalable semantic graph search. The use of ClioPatria's search strategies is illustrated with a realistic use case: searching for "Picasso". We discuss details of scalable graph search, the required OWL reasoning functionalities and show why SPARQL queries are insufficient for solving the search problem.".
- 14 abstract "Market Blended Insight (MBI) is a project with a clear objective of making a significant performance improvement in UK business to business (B2B) marketing activities in the 5-7 year timeframe. The web has created a rapid expansion of content that can be harnessed by recent advances in Semantic Web technologies and applied to both Media industry provision and company utilization of exploitable business data and content. The project plans to aggregate a broad range of business information, providing unparalleled insight into UK business activity and develop rich semantic search and navigation tools to allow any business to 'place their sales proposition in front of a prospective buyer' confident of the fact that the recipient has a propensity to buy.".
- 16 abstract "The Inference Web infrastructure for web explanations together with its underlying Proof Markup Language (PML) for encoding justification and provenance information has been used in multiple projects varying from explaining the behavior of cognitive agents to explaining how knowledge is extracted from multiple sources of information in natural language. The PML specification has increased significantly since its inception in 2002 in order to accommodate a rich set of requirements derived from multiple projects, including the ones mentioned above. In this paper, we have a very different goal than the other PML documents: to demonstrate that PML may be effectively used by simple systems (as well as complex systems) and to describe lightweight use of language and its associated Inference Web tools. We show how an exemplar scientific application can use lightweight PML descriptions within the context of an NSF-funded cyberinfrastructure project. The scientific application is used throughout the paper as a use case for the lightweight use of PML and the Inference Web and is meant to be an operational prototype for a class of cyberinfrastructure applications.".
- 17 abstract "As AI developers increasingly look to workflow technologies to perform complex integrations of individual software components, there is a growing need for the workflow systems to have expressive descriptions of those components. They must know more than just the types of a component's inputs and outputs; instead, they need detailed characterizations that allow them to make fine-grained distinctions between candidate components and between candidate workflows. This paper describes ProCat, an implemented ontology-based catalog for components, conceptualized as processes, that captures and communicates this detailed information. ProCat is built on a layered representation that allows reasoning about processes at varying levels of abstraction, from qualitative constraints reflecting preconditions and effects, to quantitative predictions about output data and performance. ProCat employs Semantic Web technologies RDF, OWL, and SPARQL, and builds on Semantic Web services research. We describe ProCat's approach to representing and answering queries about processes, discuss some early experiments evaluating the quantitative predictions, and report on our experience using ProCat in a system producing workflows for intelligence analysis.".
- 18 abstract "We present a tool, called Requirements Critic that performs a wide range of best practice analyses on software requirements documents. The novelty of our approach is its ability to perform a broad range of syntactic and semantic analyses, while allowing the users to write requirements in natural language. The crux of our syntactic analyses approach is based on using controlled syntaxes and user-defined glossaries to extract structured content about a requirements document. Semantic Web technologies are then leveraged for deeper semantic analysis of the extracted structured content to find various kinds of problems in requirements documents.".
- 2 abstract "We present IYOUIT, a prototype service to pioneer a context-aware mobile digital lifestyle and its reflection on the Web. The application is based on a distributed infrastructure that incorporates Semantic Web technologies in several places to derive qualitative interpretations of a user's digital traces in the real world. Networked components map quantitative sensor data to qualitative abstractions represented in formal ontologies. Subsequent classification processes combine these with formalized domain knowledge to derive meaningful interpretations and to recognize exceptional events in context histories. The application is made available on Nokia Series-60 phones and designed to seamlessly run 24/7. We also intend to demonstrate IYOUIT at the ISWC'08 and to provide it to conference attendees, based on their demand.".
- 26 abstract "Modern businesses operate in a rapidly changing environment. Continuous learning is an essential ingredient in order to stay competitive in such environments. The APOSDLE system utilizes semantic web technologies to create a generic system for supporting knowledge workers in different domains to learn@work. Since APOSDLE relies on three interconnected semantic models to achieve this goal, the question on how to efficiently create high-quality semantic models has become one of the major research challenges. On the basis of two concrete examples – namely deployment of such a learning system at EADS, a large corporation, and deployment at ISN, a network of SMEs – we report in detail the issues a company has to face, when it wants to deploy a modern learning environment relying on semantic web technologies. Although we describe the experiences related to a specific system, we think that our reports are of interest to a wider audience, as APOSDLE relies on many “standard” semantic web technologies and consequently inherits both their advantages and disadvantages.".
- 31 abstract "Home automation has recently gained a new momentum thanks to the ever-increasing commercial availability of domotic components. In this context, researchers are working to provide interoperation mechanisms and to add intelligence on top of them. For supporting intelligent behaviors, house modeling is an essential requirement to understand current and future house states and to possibly drive more complex actions. In this paper we propose a new house modeling ontology designed to fit real world domotic system capabilities and to support interoperation between currently available and future solutions. Taking advantage of technologies developed in the context of the Semantic Web, the DogOnt ontology supports device/network independent description of houses, including both “controllable” and architectural elements. States and functionalities are automatically associated to the modeled elements through proper inheritance mechanisms and by means of properly defined SWRL auto-completion rules which ease the modeling process, while automatic device recognition is achieved through classification reasoning.".
- 32 abstract "The use of Semantic Grid architecture eases the development of complex, flexible applications, in which several organisations are involved and where resources of diverse nature (data and computing elements) are shared. This is the situation in the Space domain, with an extensive and heterogeneous network of facilities and institutions. There is a strong need to share both data and computational resources for complex processing tasks. One such is monitoring and data analysis for Satellite Missions and this paper presents the Satellite Mission Grid, built in the OntoGrid project as an alternative to the current systems used. Flexibility, scalability, interoperability, extensibility and efficient development were the main advantages found in using a common framework for data sharing and creating a Semantic Data Grid.".
- 33 abstract "Medical ontologies have become the standard means of recording and accessing conceptualized biological and medical knowledge. The expressivity of these ontologies goes from simple concept lists through taxonomies to formal logical theories. In the context of patient information, their application is primarily annotation of medical (instance) data. To exploit higher expressivity, we propose an architecture which allows for reasoning on patient data using OWL DL ontologies. The implementation is carried out as part of the Health-e-Child platform prototype. We discuss the use case where ontologies establish a hierarchical classification of patients which in turn is used to aid the visualization of patient data. We briefly discuss the treemap-based patient viewer which has been evaluated in the Health-e-Child project.".
- 5 abstract "A Multimedia Content Marketplace can support innovative business models in the telecommunication sector. This marketplace has a strong need for semantics, co-ordination and a service-oriented architecture. Triple Space Computing is an emerging semantic co-ordination paradigm for Web services, for which the marketplace is an ideal implementation scenario. This paper introduces the developed Triple Space platform and our planned evaluation of its value to our telecommunication scenario.".
- 7 abstract "The documentation of Enterprise Resource Planning (ERP) systems is usually (1) extremely large and (2) combines various views from the business and the technical implementation perspective. Also, a very specific vocabulary has evolved, in particular in the SAP domain (e.g. SAP Solution Maps or SAP software module names). This vocabulary is not clearly mapped to business management terminology and concepts. It is a well-known problem in practice that searching in SAP ERP documentation is difficult, because it requires in-depth knowledge of a large and proprietary terminology. We propose to use ontologies and automatic annotation of such large HTML software documentation in order to improve the usability and accessibility, namely of ERP help files. In order to achieve that, we have developed an ontology and prototype for SAP ERP 6.0. Our approach integrates concepts and lexical resources from (1) business management terminology, (2) SAP business terminology, (3) SAP system terminology, and (4) Wordnet synsets. We use standard GATE/KIM technology to annotate SAP help documentation with respective references to our ontology. Eventually, our approach consolidates the knowledge contained in the SAP help functionality at a conceptual level. This allows users to express their queries using a terminology they are familiar with, e.g. referring to general management terms. Despite a widely automated ontology construction process and a simplistic annotation strategy with minimal human intervention, we experienced convincing results. For an average query linked to an action and a topic, our technology returns more than 3 relevant resources, while a naïve term-based search returns on average only about 0.2 relevant resources.".
- 8 abstract "Modern knowledge management is based on the organization of dynamic communities that acquire and share knowledge according to dedicated ontological schemas. However, while independence of ontological views is favored, these communities must also be able to share their knowledge with the rest of the organization. In this paper we introduce K-Forms and K-Search, a suite of Semantic Web tools for supporting distributed and networked knowledge acquisition, capturing, retrieval and sharing. They enable communities of users within or across organizations to define their own views in an intuitive way (which are automatically turned into formal ontologies) and to capture and share knowledge according to them. The tools favor reuse of existing ontologies; reuse creates as a side effect a network of (partially) interconnected ontologies that form the basis for knowledge exchange among communities. The suite has been evaluated and is under release to support knowledge capture, retrieval and sharing in a large jet engine company.".
- 9 abstract "Metadata management is an important aspect of today's enterprise information systems. Metadata management systems are growing from tool-specific repositories to enterprise-wide metadata repositories. In this context, one challenge is the management of the evolving metadata whose schema or meta-model itself may evolve, e.g., dynamically-added properties, which are often hard to predict upfront at the initial meta-model design time; another challenge is to organize the metadata by semantically-rich classification schemes. In this paper, we present a practical system which provides support for users to dynamically manage semantically-rich properties and classifications in the IBM WebSphere Metadata Server (MDS) by integrating an OWL ontology repository. To enable the smooth acceptance of Semantic Web technologies for developers of commercial software which must run 24 hours/day, 7 days/week, the system is designed to consist of integrated modeling paradigms, with an integrated query language and runtime repository. Specifically, we propose the modeling of dynamic properties on structured metadata as OWL properties and the modeling of classification schemes as OWL ontologies for metadata classification. We present a natural extension to OQL (Object Query Language)-like query language to embrace dynamic properties and metadata classification. We also observe that hybrid storage, i.e., horizontal tables for structured metadata and vertical triple tables for dynamic properties and classification, is suitable for the storage and query processing of co-existing structured metadata and semantic metadata. We believe that our study and experience are not specific to MDS, but are valuable for the community trying to apply Semantic Web technologies to the structured data management area.".
- 1 abstract "To be successful, Personal Information Management (PIM) solutions must be built on top of a robust data management infrastructure. Such an infrastructure must efficiently and unobtrusively support the requirements of PIM. At present, the design of data management infrastructures for PIM is in its infancy. In particular, indexing, a fundamental data management technology, is still not well understood in this domain. Indexes are necessary for efficient querying and exploration of data. This poster will describe our ongoing efforts to design index structures specifically tailored to the social semantic data managed by PIM systems.".
- 10 abstract "Reidentification has been recognised as the most central job of cognition. In this paper, we motivate that concepts as abilities to reidentify, rather than classifications, should be the basis of an agent's conceptuology. Most concepts are not classes; class definitions are artificial, often context-dependent, and don't use inductive knowledge. We will present the basic concepts of CROC, a Representational Ontology for Concepts. Artificial agents can have concepts through language representations alone. Language-like representations, based on lexical concepts, plus reasoning, will be able to solve the interoperability problem to a large extent. By using these concepts, agents can interoperate without need for shared ontologies and with freedom for own conceptions.".
- 11 abstract "Query formulation is a key aspect of information retrieval, contributing to both the efficiency and usability of many semantic applications. A number of query languages, such as SPARQL, have been developed for the Semantic Web; however, there are, as yet, few tools to support end users with respect to the creation and editing of semantic queries. In this paper we present NITELIGHT, a graphical tool for semantic query design. NITELIGHT uses a Visual Query Language (VQL), called vSPARQL, which provides graphical formalisms for SPARQL query specification. NITELIGHT is a highly reusable Web-based component, and it can be easily embedded in a variety of different Web applications. This paper provides an overview of the NITELIGHT tool and the vSPARQL specification.".
- 12 abstract "We demonstrate AceWiki, a semantic wiki using the controlled natural language Attempto Controlled English (ACE). The goal is to enable easy creation and modification of ontologies through the web. Texts in ACE can automatically be translated into first-order logic and other languages, for example OWL. A previous evaluation showed that ordinary people are able to use AceWiki without being instructed.".
- 13 abstract "Trust and policies are going to play a crucial role in enabling the potential of many web applications. In this paper we illustrate Protune, a system for specifying and cooperatively enforcing security and privacy policies.".
- 14 abstract "The national library of France (BnF) is in the process of setting up a digital preservation repository, named SPAR (Système de Préservation et d’Archivage Réparti – Distributed Preservation and Archiving System). The infrastructure for the system, bought in 2005, is designed to support 1.5 petabytes of storage by 2014. The software components of the system are currently developed by Atos Origin. The design of the SPAR system is based on the major digital preservation standard, the OAIS model. The architecture is composed of several modules connected via web services and based on open source components. One of the main components of the system is the data management module: it will use RDF data stored in an RDF triple store. We explain here why RDF is relevant for digital preservation and how it will be implemented in SPAR.".
- 15 abstract "An increasing number of recent information retrieval systems make use of ontologies to help the users clarify their information needs and come up with semantic representations of documents. In this paper, we present an approach that utilizes ontologies to enhance the effectiveness of large-scale search systems for the Web. The ontology concepts are adapted to the domain terminology by computing a feature vector for each concept. We explain how these feature vectors are constructed and finally present some results.".
- 16 abstract "This poster proposes BitMat, a bit matrix structure for representing a large number of RDF triples in memory and processing conjunctive triple pattern (multi-join) queries using it. The compact in-memory storage and the use of bitwise operations can lead to faster processing of join queries when compared to conventional RDF triple stores. Unlike conventional RDF triple stores, where the size of the intermediate join results can grow very large, our BitMat based multi-join algorithm ensures that the intermediate result set remains small across any number of join operations (provided there are no Cartesian joins). We present the key concepts of the BitMat structure, its use in processing join queries, describe the preliminary experimental results with the UniProt and LUBM datasets, and discuss possible use case scenarios.".
- 17 abstract "Large amounts of scientific digital contents, potentially available for public sharing and reuse, are nowadays held by scientific and cultural institutions which institutionally collect, produce and store information valuable for dissemination, work, study and research. Semantic technology offers these stakeholders the possibility to integrate dispersed heterogeneous yet related resources and to build value-added sharing services (overcoming barriers such as knowledge domain complexity, different classifications, languages, data formats and localization) by exploiting knowledge formalisation and semantic annotation. Applications in real cases are nonetheless often hampered by difficulties related to the proper formalization of scientific knowledge domains (ontology engineering) and the description of contents (semantic annotation), difficulties that become even more relevant when dealing with complex knowledge domains (a vast domain plus a certain level of dynamicity) that formalize heterogeneous resources coming from scattered sources. This paper illustrates the lessons learnt in applying the Semantic Web specifications to support content management and sharing in scientific complex knowledge domains, using mixed - formal/informal - annotation and ontology learning approaches to overcome the difficulties posed by dealing with complex knowledge domains such as the aquatic domain.".
- 18 abstract "Word sense disambiguation (WSD) technology is very important to the Semantic Web/Web 2.0 and to ontologies as well. There is still no easy-to-use WSD tool available. This paper demonstrates a simple and very efficient tool that performs WSD in an ontology context and gets the right word senses from WordNet. This tool is very useful for natural language processing and ontology-related applications.".
- 19 abstract "Several Semantic Web specific tasks such as ontology learning/extension or ontology matching rely on identifying relations between two given concepts. Scarlet is a technique for discovering relations between two given concepts by exploring ontologies available on the Semantic Web as a source of background knowledge. By relying on Semantic Web search engines such as Watson, Scarlet automatically identifies and combines relevant information from multiple and heterogeneous online ontologies. Scarlet has already been used successfully to support a variety of tasks, but is also available as a stand-alone component that can be reused in various other applications. This poster will be accompanied by a demo of Scarlet's functionality available through its Web based user interface.".
- 2 abstract "In this paper, we introduce the notion of Online Presence, a concept related to a user’s presence on online services. We identify interoperability issues in the exchange of online presence data and propose a solution by building a common model for semantic representation of online presence data. We present the Online Presence Ontology (OPO) together with the benefits such an ontology could bring.".
- 20 abstract "The Semantic Web is becoming more and more a reality, as the required technologies have reached an appropriate level of maturity. However, at this stage, it is important to provide tools facilitating the use and deployment of these technologies by end-users. In this paper, we describe EdHibou, an automatically generated, ontology-based graphical user interface that integrates in a semantic portal. The particularity of EdHibou is that it makes use of OWL reasoning capabilities to provide intelligent features, such as decision support, upon the underlying ontology. We present an application of EdHibou to medical decision support based on a formalization of clinical guidelines in OWL and show how it can be customized thanks to an ontology of graphical components.".
- 21 abstract "We present the DESWAP system that relieves developers of component-based Semantic Web applications of the burden of manual component selection (CS). Our system implements a novel approach to automatic CS that utilizes semantic technologies. We enable users to specify dependencies between the required components, an issue not considered by existing approaches. To realize our approach in DESWAP we developed a knowledge base with comprehensive semantic descriptions of software and their functionalities.".
- 22 abstract "Enterprise search is vital for today's enterprises. The ongoing growth of information in enterprises demands new solutions for finding relevant information in the information space. The personalization of search results is one promising approach in solving this challenge. Additionally, ontologies stoke expectations of easing the information integration process for federated search results by their formal and declarative nature. In this poster we present our novel approach of an ontology-based personalized enterprise search. We introduce an ontology-based federation layer for bridging the heterogeneity of the different knowledge sources in an enterprise.".
- 23 abstract "Authorities cooperate in various ways. The Web portal www.verwaltungskooperation.at aims to share knowledge on collaboration projects. A semantic wiki approach was used to facilitate best practice documentation with Semantic Web and Web 2.0 technology. Intercommunal cooperation has a long tradition among Austrian towns, cities, and municipalities. Much like in its German-speaking neighbour countries, this issue has become a subject of increased interest in Austria and has been intensely discussed in the last years. Apart from basic analyses of intercommunal cooperation in various scientific journals or the arrangement of expert meetings, the number of practical examples of cross-municipal cooperation is growing. In 2006, the KDZ published a book on intercommunal cooperation including a description of some 50 best-practice examples. In late 2007, the decision was made to publish these examples on a Web platform in order to make them available to a broader public and to enable the static information contained in the book to become dynamic Web content, editable by interested users. The use of the latest semantic wiki technologies for the new platform www.verwaltungskooperation.at is an example of the emergence of Web 2.0 applications with semantic technologies, sometimes referred to as “Web 3.0” or “Social Semantic Web”.".
- 24 abstract "This poster session is about a new type of event database that makes it efficient to reason about things, people, companies, relationships between people and companies, and about places and events. This event database is built on top of a scalable distributed RDF triple store that can handle literally billions of events. Like objects, events have at least one actor, but usually more, a start-time and possibly an end-time, a place where the event happened, and the type of the event. An event can have many additional properties and annotations. For example, telephone call detail records, email records, financial transactions, purchases, hospital visits, insurance claims, library records, etc. can all be viewed as events. On top of this event database we implemented very efficient geospatial and temporal queries, an extensive social network analysis library and simplified description logic. This session describes the design and use of a unifying query framework for geospatial reasoning, temporal logic, social network analytics, RDFS and OWL in Event-Based systems.".
- 25 abstract "Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural-language processing, and decision support. The National Center for Biomedical Ontology is developing BioPortal, a Web-based system that serves as a repository for biomedical ontologies. BioPortal defines relationships among those ontologies and between the ontologies and online data resources such as PubMed, ClinicalTrials.gov, and the Gene Expression Omnibus (GEO). BioPortal supports not only the technical requirements for access to biomedical ontologies either via Web browsers or via Web services, but also community-based participation in the evaluation and evolution of ontology content. BioPortal enables ontology users to learn what biomedical ontologies exist, what a particular ontology might be good for, and how individual ontologies relate to one another. BioPortal is available online at http://alpha.bioontology.org.".
- 26 abstract "Portals allow easy access to information by integrating heterogeneous applications or data sources in a consistent way. They give users a personalized and restricted view of domain information. Standard portal features could be improved by employing semantic web technologies. Although portals are now experiencing serious growth, as is the number of available semantic web tools, the number of semantic web portals is negligible. In accordance with the observed acceptance problems, guidelines for developing semantic web portals are proposed.".
- 27 abstract "We describe the architecture of a novel ontology and rule editor ACE View. The goal of ACE View is to simplify viewing and editing expressive and syntactically complex OWL/SWRL knowledgebases by making most of the interaction with the knowledgebase happen via Attempto Controlled English (ACE). This makes ACE View radically different from current OWL/SWRL editors which are based on formal logic syntaxes and general purpose graphical user interface widgets.".
- 28 abstract "Metadata management systems are growing from tool-specific repositories to enterprise-wide metadata repositories. In this context, one challenge is the management of the evolving metadata whose schema or meta-model itself may evolve, e.g., dynamically-added properties, which are often hard to predict upfront at the initial meta-model design time; another challenge is to organize the metadata by semantically-rich classification schemes. In this paper, we demonstrate a practical system which provides support for users to dynamically manage semantically-rich properties and classifications in the IBM WebSphere Metadata Server (MDS) by integrating an OWL ontology repository. To enable the smooth acceptance of Semantic Web technologies, the system is designed to consist of integrated modeling paradigms, with an integrated query language and runtime repository.".
- 29 abstract "Metadata that describes the structure and semantics of data sources plays a significant role in enterprise information integration. Enterprise information integration always involves an increasing set of types of metadata that are dispersed in various repositories, modeled by various tools, and represented in various formats. There is a crucial requirement to break the “Tower of Babel” among different types of metadata to enable better understanding, more comprehensive governance and analysis. Towards the goal of a simplified framework for metadata representation, federation, search and analysis, in the UMRR (Unified Metadata Registry and Repository) project at IBM, we propose and implement an open Web architecture for universal metadata management, where the Resource Description Framework (RDF) is adopted to represent the underlying metadata that are in various formats and the “Linked Data” method is leveraged to build a web of models. Our demonstration illustrates the effectiveness of this architecture to enable metadata federation such that global query, search and analysis on the metadata are feasible. Additionally, we also demonstrate that the proposed architecture could easily leverage Web 2.0 technologies, such as social bookmarking, tagging and RSS feeds, etc. for collaborative metadata management.".
- 3 abstract "Knowledge workers are central to an organisation’s success – yet the tools they must use often stand in the way of maximising their productivity. ACTIVE (http://www.active-project.eu), an EU FP7 integrating project, addresses the need for greater knowledge worker productivity with three integrated research themes: easier sharing of information through combining the ease-of-use of folksonomies with the richness of formal ontologies; sharing and reusing informal knowledge processes, by automatically learning those processes from the user’s behaviour and describing the processes semantically; and using machine learning techniques to describe the user’s context semantically and thereby tailor the information presented to the user to fit the current task. The results of ACTIVE are relevant to all knowledge work; they are being validated in the domains of consultancy, telecommunications and engineering.".
- 30 abstract "Nowadays, more and more URIs reside on the Data Web, published as linked open data, and dereferencing URIs challenges the current Web to embrace the Semantic Web. Although quite a few practical recipes for publishing URIs have been provided to make URIs dereferenceable, we believe a fundamental investigation of publishing and dereferencing URIs would contribute forward compatibility with the RDF and OWL upper layers in the Semantic Web architecture. In this paper, we propose to make URIs published on the Data Web RDF-dereferenceable, and we formalize such a requirement in an RDF-compatible semantics. Also, the dereferencing operation is defined on an abstract URI syntax, such that URIs, interpreted as described resources, would be RDF-dereferenceable by default. Accompanied by a live demonstration, the poster/demo explanation will discuss and address issues concerning Data Web URIs which were or have been taken for granted. Additionally, as a case study, the Metadata Web, a Data Web of enterprise-wide models, is explored. The URIs on the Metadata Web are published as RDF-dereferenceable. Such an implementation of universal metadata management across the enterprise enables metadata federation such that global query, search and analysis can be conducted on top of the Metadata Web.".
- 31 abstract "Object Oriented (OO) programming is dominant in current software development. Starting from the design of OO models for applications, developers also expect to address issues concerning the data of models and the semantics of models. Objects, being the data of models, could be stored in relational databases, and ontologies appear as a good candidate for capturing the semantics of models. This poster presents a method and system which elegantly generates a relational schema, an OWL ontology, and a semantic mapping between them, for any given OO model. The resulting relational schema serves for storing objects that are defined in the input OO model, the resulting OWL ontology is assured by a semantically “close” model transformation, and the automatically generated mapping between them enables relational persistence and Semantic Web style access for objects simultaneously.".
- 32 abstract "IYOUIT is a prototype mobile service to pioneer a context-aware digital lifestyle and its reflection on the Web. The service is made freely available and leverages Semantic Web technology to implement smart application features. We intend to not only present and demonstrate IYOUIT at ISWC’08 but also to provide it to conference attendees, based on their demand.".
- 33 abstract "This demonstration presents ROO, a tool that facilitates domain experts' definition of ontologies in OWL by allowing them to author the ontology in a controlled natural language called Rabbit. ROO guides users through the ontology construction process by following a methodology geared towards domain experts’ involvement in ontology authoring, and exploiting intelligent user interfaces techniques. An experimental study with ROO was conducted to examine the usability and usefulness of the tool, and the quality of the resultant ontologies. The findings of the study will be presented in a full paper at the ISWC08 research track [2]. The proposed demonstration will provide hands-on experience with the tool and will illustrate its main functionality.".
- 34 abstract "The recent growth of the Semantic Web increases the amount of ontologies that can be reused, but also makes more difficult the tasks of finding, selecting and integrating reusable knowledge for ontology engineering. For this reason, we developed the Watson plugin, a tool which aims to facilitate large scale knowledge reuse by extending an ontology editor with the features of the Watson Semantic Web search engine. With this plugin, it is possible to discover, inspect and reuse ontology statements originating from various online ontologies directly in the ontology engineering environment.".
- 35 abstract "We present a graph-theoretic analysis of the topological structures underlying the collaborative knowledge bases Wikipedia and Wiktionary, which are promising uprising resources in Natural Language Processing. We contrastively compare them to a conventional linguistic knowledge base, and address the issue of how these Social Web knowledge repositories can be best exploited within the Social-Semantic Web.".
- 36 abstract "The recent flood of data and factual knowledge in biology and medicine requires some principled approaches to their proper analysis and management. A cornerstone of this effort is the precise and complete description of the fundamental entities within this domain. But although this fact is generally accepted, biomedical ontology developments often still do not adhere to some of the basic ontology design principles: For example, even very low-level domain terms lack precise and unambiguous (logical) definitions in many cases. Such issues impede the move towards the semantic standardization needed for their intended knowledge management task. Rather, they lead to inconsistencies, fragmentation and overlap both within and between different biomedical ontologies. In light of this we introduce BioTop and ChemTop, two top-domain ontologies containing definitions for the most important, foundational entities necessary to describe the various phenomena in biology and chemistry. These ontologies can subsequently serve as a top-level basis for creating new focused domain ontologies in biomedicine or as an aid for aligning or improving existing ones.".
- 37 abstract "Information sharing can be effective with structured data. However, there are several challenges for having structured data on the web. Creating structured concept definitions is difficult and multiple conceptualizations may exist due to different user requirements and preferences. We propose consolidating multiple concept definitions into a unified virtual concept. We have implemented a system called StYLiD to realize this. StYLiD is a social software for sharing a wide variety of structured data. Users can freely define their own structured concepts. Attributes of the multiple concept versions are aligned semi-automatically to provide a unified view. It provides a flexible interface for easy concept definition and data contribution. Popular concepts gradually emerge from the cloud of concepts while concepts evolve incrementally. StYLiD also supports linked data by interlinking data instances including external resources like Wikipedia.".
- 38 abstract "We present an efficient mechanism for finding equivalent ontologies motivated by the development of the Semantic Web search engine Watson. In principle, it computes a canonical form for the ontologies, which can then be compared syntactically to assess semantic equivalence. The advantage of using this method is that the canonical form can be indexed by the search engine, reducing the search for equivalent ontologies to a usual text search operation using the canonical form. This method is therefore more suitable for a search engine like Watson than the naive comparison of all possible candidate pairs of ontologies using a reasoner.".
- 39 abstract "The Semantic Web envisions a distributed environment with well-defined data that can be understood and used by machines in order to, among others, allow intelligent agents to automatically perform tasks on our behalf. In the past, different Semantic Web policy languages have been developed as a powerful means to describe a system's behavior by defining statements about how the system must behave under certain conditions. With the growing dynamics of the Semantic Web the need for a reactive control based on changing and evolving situations arises. This paper presents preliminary results towards a framework for the specification and enforcement of reactive Semantic Web policies, which can be used in order to allow agents to automatically perform advanced and powerful tasks, which can neither be addressed by existing Semantic Web policy languages nor by recent efforts towards reactivity on the Semantic Web.".
- 4 abstract "In February, the W3C initiated the RDB2RDF Incubator Group, chartered to set direction in the area of mapping Relational Databases to RDF. In these few months we have made significant progress. There is a substantial Wiki that catalogs existing tools and approaches, and we have the start of a recommendation to the W3C for further work. The poster session will discuss existing approaches to mapping Relational data to RDF and will entertain discussion on how to make progress in this area.".
- 40 abstract "With the advance of Semantic Web technology, an increasing amount of data will be annotated with computer understandable structures (i.e. RDF and OWL), which allow us to use more expressive queries to improve our ability in information seeking. However, constructing a structured query is a laborious process, as a user has to master the query language as well as the underlying schema of the queried data. In this demo, we introduce SUITS4RDF, a novel interface for constructing structured queries for the Semantic Web. It allows users to start with arbitrary keyword queries and to enrich them incrementally with an arbitrary but valid structure, using computer suggested queries or query components. This interface allows querying the Semantic Web conveniently and efficiently, while enabling users to express their intent precisely.".
- 41 abstract "Interoperation between knowledge-based systems or agents requires common ontologies to facilitate successful information exchange. However, the openness of the Semantic Web means that the notion of there being common domain ontologies sufficient to cater for the requirements of a diverse range of consumers and producers of services has become untenable. In these types of environments it is necessary to consider that no ontology can be expected to remain unchanged throughout its lifetime. However, the dynamism and the large scale of the environment prevent the use of traditional ontology evolution techniques, where changes are mediated by a knowledge engineer [3]. We argue that the ability to estimate the impact of change a priori, i.e. before performing the change itself, is crucial, since this estimate can be used to assess the usefulness of the change. We assume that agents are capable of rational behaviour, and that they decide whether to change the ontology they commit to if the cost of the change (in terms of reclassification of knowledge) is offset by the benefits derived from the ability of a system to acquire new capabilities and therefore to achieve new tasks (or answer new queries, in the case of knowledge based systems). However, the agent’s decision making process follows the principle of bounded rationality [5]: agents operate with limited computational resources, and with partial knowledge of the environment [4]. We present an approach that evaluates the impact of change on an ontology a priori, without using reasoning, by estimating which set of axioms in an ontology is impacted by the change.".
- 42 abstract "The Semantic Web is about to become a rich source of knowledge whose potential will be squandered if it is not accessible to everyone. Intuitive interfaces like conversational agents are needed that can disseminate this knowledge either on request or even proactively in a context-aware manner. This paper presents work on extending one existing conversational agent, Max, with abilities to access the Semantic Web and make the retrieved knowledge available in natural language communication.".
- 43 abstract "Skipforward is a distributed recommendation system using a lightweight ontology approach for formalizing opinions about item features. Items can be things such as songs or board games; example item features are the genre of a song or the degree of chance in a board game. Every user of the system is free to add new items and statements about existing items to the system. Naturally, opinions may differ between users---the system even encourages people to express dissent by supporting negation for item features. Skipforward allows discussions for any item feature as well as displaying these discussions in a way similar to web forums.".
- 44 abstract "An important requirement for enterprise IT is the ability to manage information with high flexibility. Semantic Web research and the resulting technologies are therefore becoming more and more vital within business processes. One question is how to get the research work - done at universities or within corporations - into the enterprise easily. One possible answer to this question is the availability of an open source information processing framework that meets the requirements of an enterprise. This framework should be mature and flexible enough to design any application. To move towards such a flexible architecture, which is able to process vast amounts of information in an enterprise, a joint development by BROX and Empolis has been started on Eclipse.".
- 46 abstract "In 2005, Sven Schwarz coined the term RDFHomepage. Such a homepage uses RDF to encode all the knowledge about a person and their associations. This homepage separates the content from the model, allowing users to customise the view of the homepage without editing its content. We decided to take this idea further and immerse it in the world of Web 2.0 technologies: through the union of Semantic Web and Web 2.0 technologies we provide ease of use to average users via powerful but simple Web 2.0 interfaces. We overhauled the architecture of the original RDFHomepage and focused on a modular structure which included many standard Web 2.0 applications such as a weblog, a calendar and search.".
- 47 abstract "This poster and demo presents new OWL ontology explanation tools and facilities that are available in Protege 4. These explanations take the form of justifications. A justification is a minimal set of axioms that is sufficient for a given entailment to hold. Justification-finding services for Protege 4 are presented, including what have become de-facto explanation services such as root/derived pinpointing, and justification presentation. In addition to this, an implementation of recent theoretical work that computes so-called precise justifications is presented. Finally, preliminary work and new ideas on how justifications might be made easier to understand are a topic for discussion. All feedback and discussion is welcomed. Protege 4 is an open source, freely available OWL ontology editor.".
- 49 abstract "This poster describes the scope and current work of the W3C Product Modelling Incubator Group.".
- 50 abstract "This paper discusses an ontology exploration tool which allows users to explore an ontology according to their own perspectives. It extracts the concepts in which the user is interested from an ontology and visualizes them in a user-friendly form, i.e. a conceptual map. It helps users to understand the knowledge extracted from the ontology, and contributes to an integrated understanding of ontologies and domain-dependent knowledge.".
- 51 abstract "In this paper we address the problem of learning Semantic Web rules within a decidable instantiation of the $\mathcal{DL}$+log framework, which integrates the DL $\mathcal{SHIQ}$ and positive \textsc{Datalog}. To solve the problem, we resort to the methodological apparatus of Inductive Logic Programming.".
- 52 abstract "The authors present a case for an ontology of finite algebras. This vocabulary is a direct response to the limitations of the formats employed by first-order model searchers, such as Mace4, and specialized software, such as UACalc. It will support a semantically rich format for algebra storage and interchange intended to improve the efficiency of computational discovery processes in universal algebra. The class of finite quandles is considered as a case study in order to understand some of the challenges of designing such a knowledge base.".
- 53 abstract "This short demo description presents an extension to the Konduit tool for visual programming for the semantic desktop. Previously, Konduit required users to define filter components by manually specifying a SPARQL CONSTRUCT query. With the current work presented in this demo, we are exploring ways of building such queries visually, and aiding the user in doing so. We hope that this will make it easier to work with Konduit, which will thus appeal to more users.". (An illustrative CONSTRUCT sketch for this entry appears after this list.)
- 54 abstract "Data integration is complex, often requiring much technical knowledge and expert understanding of the data’s meaning. In this paper we investigate the use of current semantic tools as an aid to data integration, and identify the need to modify these tools to meet the needs of spatial data. We create a demonstrator based on the real-world problem of predicting sources of diffuse pollution, illustrating the benefits of exposing the semantics of integration.".
- 55 abstract "SPARQL lowers the barrier to RDF-based mashup development. However, it does not support write operations, useful constructs like aggregates, the ability to combine and post-process query results, or human-oriented result formats. This paper describes a set of extensions that largely reuse SPARQL's intuitive syntax to provide query aggregates and update functionality (SPARQL+), result processing and chained queries across multiple endpoints (SPARQLScript), and result templating. The combination of these extensions enables the creation of mashups not only from distributed data sources, but also from portable application components.". (A sketch of comparable aggregate and update queries appears after this list.)
- 56 abstract "Discovering and ranking complex relationships in the Semantic Web is an important building block of semantic search applications. Although Semantic Web technologies define relations between objects, there are also complex (hidden) relationships that are valuable in different applications. Currently, users need to discover the relations between objects and find the level of semantic similarity between them (e.g. find two similar papers). This paper presents a new approach for ranking semantic similarity associations in Semantic Web documents, based on the concept of semantic association.".
- 57 abstract "In this paper, we propose an approach to expand folksonomy search with ontologies, which are completely transparent to users. Preliminary implementations and evaluations of this approach are promising.".
- 59 abstract "We present preliminary results of applying a novel method based on metric order theory to provide a measure of the structural quality of some of the test alignments between semantic hierarchies used in the 2007 Ontology Alignment Evaluation Initiative.".
- 6 abstract "The Semantic Web calls for a new generation of search query visualizers that can rely on document metadata. For this purpose, we present the design of WebViser, a visualizer and browser of search results organized along three dimensions: a class-based representation of documents on carousels, a stack structure of classes according to their ranking, and a meta-carousel for the localization of class stacks associated with different queries. In addition, links that connect documents through metadata comparison are displayed in such a way that link overlaps and visualization cluttering are minimized. A qualitative evaluation provides interesting insights on the users’ appropriation of the interface and demonstrates that this system is an effective complement to the traditional explorer for Semantic Web search engines.".
- 60 abstract "Ontologies are becoming so large in their coverage that no single person or small group of people can develop them effectively, and ontology development becomes a community-based enterprise. We present Collaborative Protege - an extension of the Protege ontology editor that we have designed specifically to support the collaboration process for a community of users. During the ontology-development process, Collaborative Protege allows users to hold discussions about the ontology components and changes using typed annotations; it tracks the change history of the ontology entities; and it provides chat and search functionality. Users simultaneously edit an ontology stored in a common repository. All changes made by a user are seen immediately by other users. Collaborative Protege is open source and distributed with the full installation of Protege.".
- 61 abstract "The Tetherless World (TW) Wine Agent extends the original Stanford Knowledge Systems Laboratory (KSL) Wine Agent to support collective recommendations on food-wine pairings. This is done to (1) demonstrate the advance of Semantic Web technologies, including OWL DL reasoning, SPARQL, provenance explanation, and semantic wikis, and (2) show how the Semantic Web can be integrated into Social Web applications. A live demo is available at http://onto.rpi.edu/wiki/wine, which is designed for use on mobile phones as well as standard browsers.".
- 62 abstract "In this paper we introduce IKen, a platform for image retrieval enhanced by semantic annotations. The paper discusses work in progress and reports the current state of IKen. This comprises the functionality for annotating photos with an underlying ontology, search features based on these annotations, and the development of the domain ontology used for annotation.".
- 63 abstract "In this paper, we present the ASK-IT ambient intelligent framework. ASK-IT is built on a service-oriented architecture that uses ontologies in order to semantically annotate Web services and facilitate service discovery and retrieval. Its main aim is to enable a wide range of use cases for elderly and mobility impaired users related to the domains of transport, tourism and leisure, e-working, remote home control and social relationships amongst others. Based on specific use cases, ASK-IT gathers the requested information from a set of interconnected registered Web services and provides it on mobile devices, such as mobile phones and PDAs. We describe the general architecture of ASK-IT framework and present a set of indicative supported demonstration scenarios.".
- 64 abstract "One of the major Semantic Web challenges is the knowledge acquisition bottleneck. New content on the web is produced much faster than the respective machine-readable annotations, while scalable knowledge extraction from legacy resources is still largely an open problem. This poster presents ongoing research on an empirical knowledge representation and reasoning framework, which is tailored to robust and meaningful processing of emergent, automatically learned ontologies. According to the preliminary results of our EUREEKA\footnote{A permuted acronym for {\it \textbf{E}asy to \textbf{U}se \textbf{E}mpirical \textbf{R}easoning about \textbf{A}utomatically \textbf{E}xtracted \textbf{K}nowledge}.} prototype, the proposed framework can substantially improve the applicability of the rather messy emergent knowledge and thus facilitate knowledge acquisition in an unprecedented way.".
- 65 abstract "This demonstration illustrates the benefits of probabilistic ontological modeling for uncertain domains in the Semantic Web. It is based on Pronto - a probabilistic OWL reasoner that allows modelers to complement classical OWL ontologies with probabilistic statements. In addition to Pronto's features and capabilities, a great deal of the demonstration will be devoted to presenting modeling patterns, typical pitfalls, and desirable as well as incidental consequences of probabilistic reasoning. The testbed will be the prototype of the Breast Cancer Risk Assessment ontology that we have developed to evaluate Pronto.".
- 66 abstract "Despite the increasing popularity of RDF as a data representation method, there is no accepted measure of the importance of nodes in an RDF graph. Such a measure could be used to sort the nodes returned by a SPARQL query or to find the important concepts in an RDF graph. In this paper we propose a graph-theoretic measure called noc-order for ranking nodes in RDF graphs based on the notion of centrality. We illustrate that this method is able to capture interesting global properties of the underlying RDF graph using case studies from different knowledge domains. We also show how well noc-order behaves even if the underlying data has some noise, i.e. superfluous and/or erroneous data. Finally, we discuss how information about the importance of different predicates, based either on their informativeness, on prior semantic information about them, or on user preferences, can be incorporated into this measure. We show the effects of such modifications to the ranking method by examples.".
- 67 abstract "In this poster, we present COSIFile and COSIMail, semantic desktop tools for enhanced file and email management that are based on the X-COSIM semantic desktop framework. They are implemented as extensions for an email client and a file manager, specifically designed to enhance support for the personal information management tasks of information organization and re-finding.".
- 68 abstract "The main objectives of this paper are to discuss the various aspects of similarity calculations between objects and sets of objects in ontology-based environments and to propose a framework for cluster analysis in such an environment. The framework is based on the ontology specification of two core components: descriptions of categories and descriptions of objects. Similarity between objects is defined as an amalgamation function of taxonomy, relationship and attribute similarity. The different measures for calculating similarity that can be used in framework implementations are presented. The ontology-based data representation and the framework for cluster analysis can be useful in the area of Business Intelligence, e.g. for clustering similar companies whose profiles are described by ontology-based data.". (A toy amalgamation-function sketch appears after this list.)
- 69 abstract "With the ever-increasing amount of data on the Web available at SPARQL endpoints, the need for an integrated and transparent way of accessing the data has arisen. It is highly desirable to have a way of asking SPARQL queries that make use of data residing in disparate data sources served by multiple SPARQL endpoints. We aim at providing such a capability and thus enabling an integrated way of querying the whole Semantic Web at once.". (A federated-query sketch appears after this list.)
- 7 abstract "In order to be more flexible in publishing information and improve accessibility of information for end-users, Oslo Municipality has funded the SUBLIMA project [25]. Within this project a stack of open source Semantic Web and Content Management System (CMS) components is used in order to deliver a flexible solution for publication of meta-data from libraries. Queries from a specific front-end are automatically dispatched to the available SPARQL end-points [23] in a pool of library and archive based installations. The returned results are presented to the user in an integrated manner. The final delivery consists of an open source software stack based on Semantic Web Technology and W3C standards. To finally arrive at a well-defined and approved stack of software, major effort has been spent on licensing issues, immaturity of component parts, ill-defined documentation of software and conversion of older bases into OWL [5]. Experiences and problems have been discussed with and reported to the respective communities.".
- 70 abstract "Scientific data services are increasing in usage and scope, and with these increases comes a growing need for access to provenance information. Our goal is to design and implement an extensible provenance solution that is deployed at science data ingest time. In this paper, we describe our work in the setting of a particular set of data services in the area of solar coronal physics. The paper focuses on one existing federated data service and one proposed observatory. Our claim is both that the design and implementation are useful for the particular scientific image data services we designed for, and further that the design provides an operational specification for other scientific data applications. We highlight the need for and usage of semantic technologies and tools in our design and implemented service.".
- 71 abstract "We present our work on semantically-enabled data and schema registration in the setting of a scientific data integration project: SESDI (Semantically-Enabled Scientific Data Integration), which aims initially to integrate heterogeneous volcanic and atmospheric chemical compound data in support of assessing the atmospheric effects of a volcanic eruption. We use semantic methods throughout the project; however, in this paper and demonstration, we will highlight issues related to our work on data and schema registration and integration. In this process, we will demonstrate how we are re-using previously developed ontologies and how those ontologies are being used to provide a “smart” data integration capability aimed at interdisciplinary scientific research.".
- 72 abstract "Ontology alignment involves determining the semantic heterogeneity between two or more domain specifications by considering their associated concepts. Our approach considers name, structural and content matching techniques for aligning ontologies. After comparing the ontologies using concept names, we examine the instance data of the compared concepts and perform content matching using value types based on N-grams and Entropy Based Distribution (EBD). Although these approaches are generally sufficient, additional methods may be required. Subsequently, we compare the structural characteristics between concepts using Expectation-Maximization (EM). To illustrate our approach, we conducted experiments using authentic geographic information systems (GIS) data and generated results which clearly demonstrate the utility of the algorithms while emphasizing the contribution of structural matching.".
- 73 abstract "We propose an algorithm that mines blog entries to realize Semantic Web searches. It divides a graph, whose vertices are users and whose edges represent shared interests, into subgraphs - communities of interest (COIs). The algorithm allows users to identify new interests quickly and accurately. Our algorithm differs from conventional modularity-based approaches because it attaches semantic tags to the relationships between COIs or between users, based on taxonomies of the content items of interest. We also introduce a mechanism to harmonize the number of users in each COI. Our two proposals improve the effectiveness of information searches.".
- 74 abstract "The vision of the Semantic Web is to create a web of data with well-defined meaning. Most data in the current web is managed by relational databases. Thus, it is imperative for the Semantic Web community to offer easily implemented solutions for bridging relational database content and RDF. Direct mapping means using the SQL schema to create an OWL ontology and using it to represent the data in RDF. Direct mapping methods have the advantage that they are intrinsically automated. If a SQL schema was created using contemporary model-driven software engineering tools, the resulting OWL ontology can be semantically rich. However, few SQL databases are developed this way, and the resulting ontologies are therefore semantically weak. We suggest that direct-mapping methods can be complemented by a refinement process. We propose a two-step bootstrapping architecture for integrating relational databases with the Semantic Web by first generating a “database-derived putative ontology” and second, refining the putative ontology with a domain ontology and database individuals.". (A minimal direct-mapping sketch appears after this list.)
- 75 abstract "We present OmniCat, an ontology-based text categorization method that classifies documents into a dynamically defined set of categories specified as contexts in the domain ontology. The method does not require a training set and is based on measuring the semantic similarity of the thematic graph created from a text document and the ontology fragments created by the projection of the defined contexts. The domain ontology together with the defined contexts effectively becomes the classifier, as it includes all of the necessary semantic and structural features of the classification categories. With the proposed approach, we can also dynamically change the classification categories without the need to retrain the classifier. In our experiments, we used an RDF ontology created from the full version of the English language Wikipedia to categorize a set of CNN documents and a subset of the Reuters RCV1 corpus. The high accuracy achieved in our tests demonstrates the effectiveness of the proposed method and applicability of Wikipedia for semantic text categorization.".
- 76 abstract "We present semSL, an approach to bring Semantic Web technologies into Second Life. Second Life is a virtual 3D world, in which users can communicate, build objects, and explore the land of other users. There are different kinds of entities in Second Life, which can be locations, objects, or events. Many of these entities are of potential interest to users. However, searching for entities is difficult in Second Life, since there is only a very limited way to describe entities. With semSL it becomes possible for every user to add arbitrary tags or key/value pair based descriptions to entities in Second Life, or to create typed links between entities. Such typed links can even be established between entities in Second Life and resources from the Semantic Web. The description data for all such entities is centrally stored at a server external to Second Life. The data is encoded in RDF, and is publicly accessible via a SPARQL endpoint. This should not only lead to significant improvements for searching operations, but will also allow for flexible data integration between data from semSL and data from other sources on the Semantic Web.".
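Several of the entries above describe mechanisms concretely enough that small code sketches may help; the sketches below are illustrative only, written in Python, and every namespace, endpoint URL, identifier and data value in them is a placeholder unless stated otherwise. First, a minimal sketch of the kind of SPARQL CONSTRUCT "filter component" that entry 53 says Konduit users previously wrote by hand: it matches input triples and reshapes them into new output triples, assuming the rdflib library is available.

```python
# Sketch of a SPARQL CONSTRUCT "filter component" in the spirit of entry 53.
# The ex: vocabulary and the sample data are placeholders, not Konduit's own.
from rdflib import Graph

data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .

ex:alice foaf:name "Alice" ; foaf:mbox <mailto:alice@example.org> .
ex:bob   foaf:name "Bob" .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Keep only people who have a mailbox and emit them in a simplified vocabulary.
filtered = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX ex:   <http://example.org/>
    CONSTRUCT { ?p ex:label ?name . }
    WHERE     { ?p foaf:name ?name ; foaf:mbox ?mbox . }
""").graph

print(filtered.serialize(format="turtle"))
```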
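Entry 55 describes SPARQL+ extensions for aggregates and updates. The sketch below shows the same kind of functionality in SPARQL 1.1 syntax (which later standardized aggregates and updates) rather than SPARQL+'s own syntax; the ex: vocabulary and citation data are invented, and the queries run over an in-memory rdflib graph.

```python
# Update plus aggregate over a local graph, illustrating the capabilities
# entry 55 attributes to SPARQL+ (shown here in SPARQL 1.1 syntax instead).
from rdflib import Graph

g = Graph()

# Update: insert a few citation triples.
g.update("""
    PREFIX ex: <http://example.org/>
    INSERT DATA {
        ex:paper1 ex:cites ex:paper2 , ex:paper3 .
        ex:paper2 ex:cites ex:paper3 .
    }
""")

# Aggregate: count outgoing citations per paper.
rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?paper (COUNT(?cited) AS ?n)
    WHERE  { ?paper ex:cites ?cited . }
    GROUP BY ?paper
""")

for paper, n in rows:
    print(paper, n)  # e.g. http://example.org/paper1 2
```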
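Entry 68 defines object similarity as an amalgamation function of taxonomy, relationship and attribute similarity. The toy sketch below shows one plausible reading of such a function as a weighted sum; the weights and the component scores are placeholders, not the authors' definitions.

```python
# Toy reading of an "amalgamation function" in the spirit of entry 68:
# overall similarity as a weighted combination of three component scores.
def amalgamated_similarity(tax_sim: float, rel_sim: float, attr_sim: float,
                           weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Weighted sum of three component similarities, each assumed to lie in [0, 1]."""
    w_tax, w_rel, w_attr = weights
    return w_tax * tax_sim + w_rel * rel_sim + w_attr * attr_sim

print(amalgamated_similarity(0.8, 0.6, 0.4))  # 0.66
```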
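Entry 69 aims at querying data served by multiple SPARQL endpoints in a single query. The sketch below uses the SERVICE keyword that SPARQL 1.1 later standardized for federation, which is not necessarily the mechanism the authors built; the endpoint URLs and the FOAF pattern are placeholders, and the SPARQLWrapper library is assumed to be installed.

```python
# Sketch of one query spanning two SPARQL endpoints, in the spirit of entry 69.
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name ?interest
WHERE {
  ?person foaf:name ?name .
  SERVICE <http://example.org/other/sparql> {   # evaluated at a second endpoint
    ?person foaf:topic_interest ?interest .
  }
}
LIMIT 10
"""

endpoint = SPARQLWrapper("http://example.org/main/sparql")  # first endpoint
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

for b in results["results"]["bindings"]:
    print(b["name"]["value"], "-", b["interest"]["value"])
```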
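Entry 74 discusses direct mapping from a SQL schema to RDF/OWL. The sketch below shows the basic idea on a single table: each row becomes a subject and each column a predicate. The URI scheme, namespace and sample table follow the general spirit of direct mapping but are illustrative only, not the authors' method or the later W3C Direct Mapping specification; sqlite3 and rdflib are assumed to be available.

```python
# Minimal direct-mapping sketch in the spirit of entry 74.
import sqlite3
from rdflib import Graph, Literal, Namespace, RDF

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'Alice', 'Oslo')")

EX = Namespace("http://example.org/db/")
g = Graph()

for row_id, name, city in conn.execute("SELECT id, name, city FROM person"):
    subject = EX[f"person/{row_id}"]                      # one subject URI per row
    g.add((subject, RDF.type, EX["Person"]))              # table name becomes a class
    g.add((subject, EX["person#name"], Literal(name)))    # each column becomes a predicate
    g.add((subject, EX["person#city"], Literal(city)))

print(g.serialize(format="turtle"))
```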