Matches in ScholarlyData for { ?s <https://w3id.org/scholarlydata/ontology/conference-ontology.owl#abstract> ?o. }
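For reference, the pattern above can be wrapped into a complete SPARQL query as sketched below. This is only an illustration: the PREFIX declaration, the SELECT form, and the LIMIT are assumptions added for readability and are not part of the recorded match listing; only the property IRI comes from the pattern above.

  PREFIX conf: <https://w3id.org/scholarlydata/ontology/conference-ontology.owl#>

  # Retrieve every resource that carries an abstract, together with the abstract text.
  SELECT ?s ?o
  WHERE {
    ?s conf:abstract ?o .
  }
  LIMIT 500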
- 327 abstract "We present a novel approach for the visualization and browsing of Web services. In contrast to most existing tools that categorize Web services with respect to specific description elements, our tool is based on goals that describe the objective that a client wants to achieve by using Web services while abstracting from the technical details. The data structure for our search space visualization is a graph that organizes goal templates - i.e. generic and reusable objective descriptions - with respect to their semantic similarity, and keeps the relevant knowledge on the available Web services for solving them; the graph is generated automatically by semantic matchmaking. The browsing tool is implemented as a new plug-in of the Web Service Modeling Toolkit WSMT, an integrated development environment for Semantic Web services.".
- 338 abstract "This poster describes three static SPARQL optimization approaches for in-memory RDF graphs: (1) a selectivity estimation index (SEI) for single query triple patterns; (2) a query pattern index (QPI) for joined triple patterns; and (3) a hybrid optimization approach that combines both indexes. Using the Lehigh University Benchmark (LUBM), we show that the hybrid approach outperforms other SPARQL query engines such as ARQ and Sesame for in-memory graphs.".
- 339 abstract "Ontology mediation is one of the key research topics for the accomplishment of the semantic web. Different tasks can be distinguished under this generic term: instance transformation, query rewriting, instance unification, ontology merging or alignment creation. The first four tasks all require the specification of an alignment between the ontologies to be mediated. Graphical tools and matching algorithms are developed to help specialists create such specifications. In order to improve both user tools and matching algorithms we developed a library of correspondence patterns to represent complex correspondences between ontologies. We express these patterns as an extension of the Ontology Alignment ontology, meant to serve as an exchange format for representing ontology alignments on the semantic web.".
- 340 abstract "In Protege, any newly created RDF/OWL knowledge base refers to local instances through a local URI, which is obtained through the concatenation of the ontology URI, the hash sign '#' and the local identifier. However, this practice makes data level integration quite hard, and definitely prevents the straightforward application of RDF graph merge for independently developed knowledge bases, even if they share the same OWL ontology. In this paper, we present a Protege plugin which supports the systematic reuse of global identifiers for instances in RDF/OWL knowledge bases. The plugin is an extension of the "individual" tab. The main difference is that, when an instance is created, the user has a chance of looking for a pre-existing URI for the corresponding individual in a publicly available service called OKKAM. The match between the newly created instance and the stored individuals is based on an algorithm which compares the features of the new instance with a simple profile stored in OKKAM for all individuals. The plugin is available and tested for Protege 3.2.1.".
- 342 abstract "In this paper we describe the first results of our efforts to build a solid framework for a Semantic Web browser, Power Magpie. By a Semantic Web browser we envision an extension to a standard Web browser that augments its features with the ability to act on resources described in Web pages and to find resources semantically related to a Web page. In our vision, the Power Magpie prototype will provide three main features: 1. The ability to automatically find and retrieve ontologies that are related to the generic Web page that the user is browsing. 2. A simple user interface to navigate these data (as well as the Semantic data available in the Web page itself, using technologies such as RDFa or Microformats). 3. The ability to provide services that exploit the discovered data. The current state of the prototype already addresses the first two points, while the last can be considered a further step to be developed upon the proposed framework.".
- 344 abstract "Ranking the importance of concepts and relations in an ontology and highlighting those having high rank scores is an intuitive and effective approach to supporting the understanding of an ontology for further development and reuse. In this paper, we propose CARRank to efficiently and simultaneously rank the importance of concepts and relations. Its uniqueness lies in that it mutually reinforces the importance of concepts and the weights of relations, and it performs in a reverse PageRank manner.".
- 345 abstract "We propose a new dynamic constraint optimization problem (COP) formalization for the Web services composition problem that permits extension of the original abstract workflow (in case of failure) based on the OWL-S service profile description of the underlying semantic Web services and using OWL-S control constructs. Moreover, we have developed a novel, human-centered, agent-based protocol able to find "satisfying" solutions for this problem in real time. This protocol allows restriction and/or relaxation within the original workflow through addition and/or removal of new sub-tasks, deals with a dynamic environment, and uses incomplete, uncertain information.".
- 349 abstract "A current major obstacle to the development of ontologies in support of the Semantic Web is the inability to handle very large ontologies. We propose a solution which consists in augmenting the functionality of a relational DBMS to include the reasoning mechanisms (implemented as stored procedures) that are the essence of ontology management. As a result, a single system is used to store, query, update, and reason about a very large ontology. The OntoMinD prototype we have developed shows very promising performance, according to our tests with the LUBM benchmark.".
- 356 abstract "This paper proposes WebKER, a system for extracting knowledge from Web documents based on domain ontologies and suffix-array-based wrapper learning. It also gives the performance evaluation of this system and the comparison between querying information in the Knowledge Base (KB) and querying information in the traditional database. Moreover, the outstanding-wrapper selection and a novel method for linking and merging the knowledge are also presented.".
- 357 abstract "We try to develop a methodology, as a system of discipline and a number of tools, to help create useful ontologies. Our methodology and system borrow concepts and practices from software development and adapt them to fit some special requirements of knowledge development.".
- 359 abstract "Restriction to a predefined set of ontologies, and consequently limitation to specific domain environments, is a pervasive drawback in Semantic Search technologies. In this work we present PowerAqua, a multi-ontology-based Question Answering (QA) platform that exploits multiple distributed ontologies and knowledge bases to answer queries in multi-domain environments. The system interprets the user's Natural Language (NL) query using the available semantic information, and translates the user terminology into the ontology terminology (triples), retrieving accurate semantic entity values as a response to the user's request.".
- 360 abstract "To help researchers to get information, we build a semantic portal using OntoFrame, KISTI's semantic web platform. Unlike other semantic web solutions, URI-server underlies OntoFrame. It stores ontology instances before populating them into a triple store, controlling their key/referential integrities based on URI. Our semantic portal is thus efficiently maintained by assigning search-oriented requests to URI-server-based IR engine and reasoning queries to the inference system.".
- 361 abstract "SWCLOS is an OWL Full processor built on top of the Common Lisp Object System. We enabled OWL metamodeling with SWCLOS. In this poster and demo, we introduce criteria for metamodeling that are derived from the principles of object-oriented metamodeling, and demonstrate examples of metamodeling with SWCLOS.".
- 362 abstract "We propose an approach that combines logic formalisms with collaboratively created web repositories. A logic-conceptualisation-based "signaturing" algorithm is used to discover, from concept definitions, the feature vectors that uniquely identify concepts, while web repositories are used to understand the implications of these features.".
- 363 abstract "In order to reduce the cost of building domain ontologies manually, in this paper we integrate a domain ontology development environment, DODDLE-OWL, with an ontology search engine, Swoogle. We propose a method and a tool for domain ontology construction that reuse texts and existing ontologies for a target domain extracted by Swoogle. In the evaluation, we applied the method to a particular field of law and evaluated the acquired ontologies.".
- 364 abstract "This paper presents a health advice derivation system. The system can derive adequate advice according to the user's goal and health condition because an ontology is introduced into the system for describing the knowledge about exercise, meals and the user's health condition. We introduce an inference mechanism into the system for deriving advice because the inference mechanism makes good use of the ontology. New relationships in the knowledge are found by the inference mechanism and are finally provided as advice to the user.".
- 365 abstract "Remarkable growth of the mobile internet service industry in Japan proved that present method of service menu system is insufficient to guide users efficiently to the services they need. This paper introduces our research activities toward realization of Task-oriented menu system, which enables users to search for mobile services by "what they want to do" instead of by "name of category". Although efficiency of the menu system has been proved on a prototype system, it is limited and capable of supporting limited activities; mobile users' activity in a theme park. Thus we need to analyze wider area of user activities to list up the situations and task models of users as many as possible to expand the prototype menu to cover larger scale of the services. We introduce OOPS (Ontology-based Obstacle, Prevention and Solution) modeling method which supports description of the necessary situations. We have applied the method to model "Tourism" domain which covers a broader spectrum of users' actions. We have evaluated the coverage of the OOPS model by verifying situations represented in the model developed and those situations assumed to be supported by i-mode official services. The model covered about 97% of the assumed situations of mobile internet services. Reorganizing "contexts" in the model, we aim at developing task oriented menu system as a next step.".
- 366 abstract "The amount of knowledge published on the Semantic Web - i.e., the number of ontologies and semantic documents available online - is rapidly increasing. Following this evolution, changes appear in the way semantic applications are designed: instead of relying on one self-engineered base of semantic data, these next generation Semantic Web applications assume the existence and availability of a large scale, distributed Web of data. In order for these applications to be able to exploit and combine the increasing amount of semantic data and ontologies available online, a gateway to the Semantic Web is required.".
- 369 abstract "Video recordings of lectures are most often monolithic entities that cannot be integrated into an active learning process offhand. We present the web-based E-Librarian service CHESt that matches a user's question given in natural language to a selection of semantically pertinent lecture recordings based on an adapted best cover algorithm. Our E-Librarian service is able to augment the educational value of lecture recordings and thus to overcome the current deadlock situation in e-learning.".
- 370 abstract "Human Resource Management (HRM) aims at strategic human resource administration in a company organization. In order to support HRM, various information systems have conventionally been developed. They are used for the management of personal information, ability evaluation, and employee education. By introducing HRM, a company can develop employees and allocate them to adequate positions. However, when an employee leaves his or her job due to a health problem, the effect of HRM may be reduced, and therefore the productivity of the whole organization may also be reduced. We assume that there is a strong relation between HRM and the health of the employee, and develop the Healthy Human Resource Management (H2RM) system. This system offers recommendations of the health education contents included in the H2RM system. The recommendations are derived by an inference system based on an ontology and inference rules about health and disease. Using the inference system, the health condition of an employee is determined, and the system then recommends suitable healthcare education contents.".
- 371 abstract "A keyword-based web search cannot capture a user's detailed intention for a search. In response to this problem, annotating context data (e.g. content creation time and location) for web search has been considered. However, this would require users to handle accurate context data in search queries. This paper proposes a method to associate keywords with context data by analyzing blog entries annotated with context data, including an efficient mechanism for indexing.".
- 372 abstract "Geographic place names are widely used but are semantically often highly ambiguous. For example, there are 491 places in Finland sharing the same name "Isosaari" (great island) that are instances of several geographical classes, such as Island, Forest, Peninsula, Inhabited area, etc. Referring unambiguously to a particular "Isosaari", either when annotating content or during information retrieval, can be quite problematic and requires the use of advanced search methods and maps for semantic disambiguation. This paper presents an ontology server, ONKI-Paikka, for solving the place finding and place name disambiguation problem. In ONKI-Paikka, places can be found by a faceted search engine, combined with semantic autocompletion and a map service for constraining search and for visualizing results. The service can be connected to legacy applications cost-effectively by using Ajax technology in the same spirit as Google Maps, which is used in ONKI-Paikka as a subservice.".
- 373 abstract "The Semantic Web aims to lift the current Web into semantic repositories where heterogeneous data can be queried and different services can be mashed up. Here we report some of our ongoing work in the EASAIER project to enable enhanced access to sound archives by integrating archives based on the Music Ontology and providing different search results from different mashups.".
- 374 abstract "For relevance ranking in Web search, heterogeneous semantic information such as thesauruses, semantic markups and social annotations has each been adopted separately. As the utilization of semantics is key to semantic search, integrating more semantics should logically generate better search results with respect to semantic relevance. However, such an integrated semantic search mechanism is still absent and remains to be developed. This paper proposes a statistical approach to integrate both keywords and heterogeneous semantics for semantic search.".
- 375 abstract "This paper addresses the issue of expertise object search. In the Semantic Web, information is described as concepts and relations between concepts. Previous document-level information search can unfortunately lead to highly inaccurate results in answering semantic-oriented queries. Previous work on semantic search is targeted at finding the most relevant objects. In this paper, we investigate several models for expertise object search, namely a local language model, a propagation-based model, and a unified model.".
- 377 abstract "Everyday individuals want to record, annotate, and manipulate digital media in their own ways. The Semantic Web supports users in representing annotations in forms that bear personal semantic meaning. An essential feature of personal media management systems is to give individual users significant control over the representation, annotation and querying of their media information. In our framework, naive users create their personalized ontology for describing their media information and construct semantically rich metadata collections using the personalized media ontology. Our system allows users to provide expressive queries in their own ontology. Queries are represented by OWL classes or instances. The system uses a uniform representation of personalized ontology, metadata, and queries. Such a uniform representation enables the system to exploit a description logic reasoner.".
- 378 abstract "This demonstration aims at presenting an overview of the ArnetMiner system. The system addresses several key issues in the extraction and mining of researcher social networks. The system has been in operation on the internet for about one year and receives accesses from about 1,500 users per month. Feedback from users and system logs indicates that users consider that the system can help people to find and share information in the web community.".
- 379 abstract "We demonstrate our ontology alignment tool, RiMOM. RiMOM integrates multiple strategies for ontology alignment. It utilizes a strategy selection module to dynamically determine which strategies should be used in the alignment for different tasks. RiMOM provides a friendly user interface for alignment tracking and customization. In addition, RiMOM currently supports alignment from a database schema to an ontology. RiMOM participated in OAEI2006 and obtained the best results on the benchmark datasets.".
- 380 abstract "Sharing of bibliographic information is very important in research communities. SocioBiblog is a semantic blogging system that provides a decentralized environment for this. The SWRC ontology has been used for adding metadata about publications in blogs. SocioBiblog aggregates publications from the social network neighborhood and co-authors of a researcher. RSS aggregation has been extended to handle embedded publication metadata in BuRST feeds. Interoperability with other systems has been maintained by adopting standard formats. The aggregated collections may be searched and filtered flexibly by metadata criteria. The results can be redistributed as new feeds. Thus, a decentralized ecosystem can be formed where each unit can publish, aggregate and redistribute information.".
- 382 abstract "Many studies have been conducted concerning ontology generation from the tags of social tagging services, and some tagging services target academic literature. We think that it is unsuitable to search for technical information using an ontology generated from these tags, because they are often taken directly from the contents. In order to obtain contextual information on the papers, we focused on the notes taken when users listen to a presentation. We implemented and applied a system where participants can input two kinds of memos on Web pages for presentations at conferences. As a result of this operation, we obtained words related to the content of the papers and to the background knowledge and research details.".
- 383 abstract "Semantically described business processes are one of the most challenging research issues currently addressed by the semantic community. This paper proposes a Business Process Modeling Ontology in SemBiz, which contributes to bridging the gap between the business design perspective and the technical implementation level in BPM or workflow through semantic descriptions of business processes.".
- 384 abstract "The poster describes the current status of the international effort to transform existing international standards so that they can work as part of the Semantic Web. There are different standards for mechanical products, building and construction and process plants, but these have similar requirements for the representation of product structure. The definition of a common core ontology for product structure is being attempted.".
- 385 abstract "In this paper we describe OPTIMA (Ontology Population Tool based on Information extraction and MAtching), a system for semi-automatically populating ontologies from unstructured and semi-structured information. Based on Information Extraction techniques, the system identifies Named Entities and their inter-relations, which serve as the basis of a Machine Learning algorithm for generating candidate instances, each associated with a probability score, according to the classes described in the ontology schema.".
- 386 abstract "Producing semantic metadata requires efficient methods, e.g., concept finding, for accessing and using ontologies. To add such functionalities to metadata applications such as cataloging systems in museums, we propose a mash-up approach where ready-to-use user interface components for using specific ontologies are made available to be integrated into applications. As a proof-of-concept, we present the Ontology Service ONKI which implements semantic autocompletion concept search and concept browsing for ontologies as shared mash-up components.".
- 387 abstract "We present an overview of the infrastructure required to support content and metadata creation for a health portal based on Semantic Web technologies. The system requires web content management systems to be enhanced to support ontological metadata creation; export of metadata in RDF or embedded into HTML pages; and a metadata harvester that aggregates metadata from all the content creators and provides feedback reports to the content creators. The system has been set up in a prototype of the national health portal HEALTHFINLAND.".
- 388 abstract "For building an ontology from text, we need to extract terms and conceptualize them as classes of the ontology. In this paper we present a term clustering method using paradigmatic relations and hierarchical clustering. To extract the paradigmatic relations, we use 1st- and 2nd-order collocations extracted from Wikipedia documents. By computing the semantic relatedness of clusters, we can extract clusters that consist of similar words.".
- 389 abstract "As classical reasoning from inconsistent ontologies cannot give meaningful answers to queries, it is necessary to either revise the ontology, discarding some axioms in order to restore consistency, or make use of a non-standard notion of logical entailment that allows meaningful answers to be given from inconsistent premises. We propose a complete procedure to reason with inconsistent ontologies and show how ontology revision can be obtained from inconsistency reasoning.".
- 390 abstract "Semantic autocompletion interfaces offer an efficient way for concept selection useful in both search and annotation applications. However, these interfaces usually do not expose the semantic context of the matched concepts, thereby making it hard to know if a matched concept is the right one, as well as hiding possibly more appropriate choices. To lessen these problems, we present an in-place ontological context navigation interface to be used with semantic autocompletion.".
- 391 abstract "This paper presents a system for searching semantic relations between web resources, in our case significant persons of art history. The system is based on the Union List of Artists Names (ULAN) metadata of some 120,000 persons and organizations.".
- 392 abstract "This paper briefly presents an automatic exhibition generation interface that turns the focus of semantic search from search items to the concepts they are annotated with.".
- 393 abstract "In current Semantic Web view-based search systems, views are formed by selecting properties and enumerating all their values as selections. This approach breaks down with multiple content types, such as in the cultural heritage domain, because the number of differing properties, and therefore of views, becomes unmanageable. We propose a novel solution termed Domain-Centric View-Based Search, in which views are created based on common property ranges and domain ontologies.".
- 394 abstract "In this poster the semantic web architecture for the 'open market' concept is described. 'Open market' is a commercial application of semantic technologies. The semantic web architecture elements have been categorized as: web service provider, web service requester and infrastructure elements. This architecture addresses the semantic web's requirements for standard ontology discovery, semantic web service discovery, availability of semantic web content with embedded normative metadata, and automation of semantic web service integration.".
- 395 abstract "We present a novel merging algorithm for light-weight ontologies using answer set programming and linguistic background knowledge. The semi-automatic method provides a number of solutions for the user to choose from, by straightforwardly applying intuitive merging rules in a declarative programming environment.".
- 396 abstract "Object recognition is a challenging task, especially the recognition of invisible objects in changing and unpredictable robot environments. We propose a novel approach employing context and ontology to improve the object recognition capability of mobile robots in real-world situations. By semantic contexts we mean characteristic information abstracted from robot sensors. We propose a method to construct semantic contexts using inference so that mobile robots can recognize objects in a more efficient way. In addition, an ontology is used to better recognize objects using the knowledge represented in it.".
- 397 abstract "The vision of the Semantic Web can be realized when there are masses of machine-processable semantic metadata. Since manual construction of metadata is not feasible, methods for automated semantic annotation have been developed. Semantic annotation is the process of assigning the most appropriate semantic tags to entities. The key challenge in automatic semantic annotation is resolving ambiguities when identifying semantic tags. We propose ontology-based semantic named entity disambiguation, a new named entity disambiguation algorithm based on Hidden Markov Models (HMM) and semantic contexts. Based on our system, ubiquitous applications can reason about named entities free of contradictions. Compared to the SemTag algorithm, our system improves performance by about 18%.".
- 398 abstract "Ontology evolution is the process of modifying an ontology to preserve consistency during changes. Current work on ontology evolution is based on the idea of bringing the AGM belief change theory to work within ontology evolution. However, AGM's principle of priority to incoming information cannot be accepted when the new information represents new evidence about the world, which is supposed to be a fixed static entity whose description is only partial and uncertain. In particular, it cannot be accepted in a distributed environment, where the information sources are potentially unreliable. We replace the priority to incoming information with the principle of recoverability: any previously held piece of knowledge should belong to the current knowledge space if it is consistent with it.".
- 399 abstract "We present HOMER, an analysis and visualization tool for ontology alignment. HOMER features a radial-graph display GUI, a complete execution trace that allows the user to navigate to and override any match decision at runtime, and a comparison mode that displays multiple alignments in parallel. HOMER contains a built-in plugin for the ILIADS ontology alignment algorithm, but other algorithms can be plugged in as well.".
- 400 abstract "Since Vannevar Bush's vision of the Memex, the idea of a knowledge working environment with interlinked data has been around. Recently, with a significant increase in computing power and the advent of the Semantic Web, these ideas have surfaced again, now under the term "Semantic Desktop". Many attempts to build such a Semantic Desktop try to build a complete system from scratch. In this demo, we want to show that it is also possible to get a long way with existing OS technology, which will simply be harnessed in the form of a number of small, lightweight tools. Instead of building an extensive and feature-rich system from scratch, we harness the power of modern, metadata-enabled file systems and search indices to create a prototypical system with minimal effort.".
- 401 abstract "This paper introduces a novel approach which exploits ontologies and ontology-based category utility for author name disambiguation. Our method utilizes author knowledge in the form of a populated ontology that uses various types of properties: titles, abstracts and co-authors of papers and authors' affiliations. Author name disambiguation determines the correct author from various candidate authors in the populated author ontology. Candidate authors are evaluated using the proposed ontology-based category utility to resolve the ambiguity. In experiments, using the ontology-based category utility increases the number of disambiguations by about 10% compared with plain category utility, and raises the overall accuracy to around 98%.".
- 402 abstract "In order to derive hidden information from an OWL ontology, a number of OWL reasoners have been introduced. In this paper, we propose extracting and storing the MEXS (Minimum Expression Axiom Set) for debugging unsatisfiable concepts in an ontology. A MEXS is a set of axioms that gives rise to unsatisfiable concepts. In order to extract a MEXS, we need to find the axioms that cause inconsistency in the ontology; therefore we propose an improved method for doing so.".
- 404 abstract "We present our DLDB knowledge base system and evaluate its capability in integrating a very large set of real-world Semantic Web data. Using DLDB, we have constructed the Hawkeye knowledge base, in which we have loaded more than 166 million facts from a diverse set of real-world data sources and integrated them using alignments expressed in OWL.".
- 405 abstract "As semantic search returns an increasingly overwhelming number of results to users, the need for appropriate personalized schemes is elevated. In this paper, we present the structure used in the Culture Finder to support a personalized search service. The Culture Finder helps semantic web agents obtain personalized culture information.".
- 406 abstract "Navigating ontologies can be a cumbersome task: existing approaches to ontology visualization have several shortcomings including that elements are neither sorted by relevance nor do they adapt to changing user requirements. We present a novel technique for user interfaces of semantic systems based on an adaptive extension of the tag cloud paradigm.".
- 407 abstract "Despite very active research on ontologies, only a small number of useful ontologies can be found on the Web. The reasons for this are manifold, but a major obstacle is that ontology engineering environments impose high entrance barriers on users, and that the community does not have direct control over the ontology evolution. In the myOntology project, we propose the use of a Wiki supported by sophisticated background functionality to enable community-driven ontology building by giving users with no or little expertise in ontology engineering the opportunity to contribute.".
- 409 abstract "This paper addresses an approach for building a relation hierarchy in order to integrate relations from different sources. A relation tree is generated by mapping relations to WordNet synsets, on which duplicated relations are eliminated. As criteria for judging relations not absolutely required for a domain, popularity and uniqueness are proposed. Our research shows that use of a relation hierarchy makes the process of judging necessary relations of a domain much simpler and more efficient.".
- 410 abstract "Despite significant advancement in ontology learning, building ontologies remains a task that highly depends on human intelligence, both as a source of domain expertise and for producing a consensual conceptualization. Now, we can observe a sharp contrast in user interest in two branches of Web activity: While the "Web 2.0" movement lives from an unprecedented amount of contributions from Web users, we witness a substantial lack of user involvement in ontology projects for the Semantic Web. We assume that one cause of the latter is a lack of proper incentive structures of ontology projects. As a novel solution, we (1) propose to masquerade collaborative ontology engineering behind on-line, multi-player game scenarios, in order to create proper incentives for humans to help building ontologies for the Semantic Web. Then, we (2) describe our OntoGame prototype, and (3) provide preliminary evidence that users are willing to invest a lot of time into those games, and, by doing so, unknowingly weave ontologies for the Semantic Web.".
- 411 abstract "This paper presents the construction of a QA test collection using ontology instance knowledge. We construct the test collection to verify a question-answering system based on ontologies. We define the ontology triples from an IT device ontology and a people ontology. We use semantic restrictions on relation names to classify query types and E-K translation to generate the Korean test collection. Relation names are expanded using WordNet synsets.".
- 412 abstract "In this paper, we construct an IT-domain question-answering system as an application of the IT-domain ontology. We defined 16 types of questions based on the question type definitions by DARPA. For each question type, an inference method is designed and implemented. We demonstrate the user interface which connects users with the QA system and the ontology.".
- 413 abstract "We present WikiFactory, a tool that enables an automatic, ontology-driven deployment of a semantic wiki, where the ontology describes a specific domain of interest. The resulting deployed semantic wiki can be used as a friendly interface for domain experts, for browsing and editing both the ontology and the content. WikiFactory also provides run-time synchronization between ontology and content: changes made to the wiki content are reflected in the underlying ontology, and vice versa.".
- 1 abstract "Semantic Web databases allow efficient storage of and access to RDF statements. Applications are able to use expressive query languages to retrieve relevant metadata for performing different tasks. However, access to metadata may not be open to just any application or service. Instead, powerful and flexible mechanisms for protecting sets of RDF statements are required for many Semantic Web applications. Unfortunately, current RDF stores do not provide fine-grained protection. This paper fills this gap and presents a mechanism by which complex and expressive policies can be specified in order to protect access to metadata in multi-service environments.".
- 113 abstract "Ontology mapping is the key to data interoperability in the semantic web. This problem has received a lot of research attention; however, the research emphasis has been mostly devoted to automating the mapping process, even though the creation of mappings often involves the user. As industry interest in semantic web technologies grows and the number of widely adopted semantic web applications increases, we must begin to support the user. In this paper, we combine data gathered from background literature, theories of cognitive support and decision making, and an observational case study to propose a theoretical framework for cognitive support in ontology mapping tools. We also describe a tool called CogZ that is based on this framework.".
- 127 abstract "Wikipedia, a killer application in Web 2.0, has embraced the power of collaborative editing to harness collective intelligence. It can also serve as an ideal Semantic Web data source due to its abundance, influence, high quality and good structure. However, the heavy burden of building up and maintaining such an enormous and ever-growing online encyclopedic knowledge base still rests on a very small group of people. Many casual users may still find it difficult to write high-quality Wikipedia articles. In this paper, we use RDF graphs to model the key elements in Wikipedia authoring, and propose an integrated solution to make Wikipedia authoring easier based on RDF graph matching, with the expectation of creating more Wikipedians. Our solution facilitates semantics reuse and provides users with: 1) a link suggestion module that suggests and auto-completes internal links between Wikipedia articles for the user; 2) a category suggestion module that helps the user place her articles in correct categories. A prototype system is implemented and experimental results show significant improvements over existing solutions to link and category suggestion tasks. The proposed enhancements can be applied to attract more contributors and relieve the burden of professional editors, thus enhancing the current Wikipedia to make it an even better Semantic Web data source.".
- 141 abstract "This paper presents a controlled language for ontology editing and a software implementation, based partly on standard NLP tools, for processing that language and manipulating an ontology. The input sentences are analysed deterministically and compositionally with respect to a given ontology, which the software consults in order to interpret the input’s semantics; this allows the user to learn fewer syntactic structures since some of them can be used to refer to either classes or instances, for example. A repeated-measures, task-based evaluation has been carried out in comparison with a well-known ontology editor; our software received favourable results for basic tasks. The paper also discusses work in progress and future plans for developing this language and tool.".
- 15 abstract "Many scientific problems can be represented as computational workflows of operations that access remote data, integrate heterogeneous data, and analyze and derive new data. Even when the data access and processing operations are implemented as web or grid services, workflows are often constructed manually in languages such as BPEL. Adding semantic descriptions of the services enables automatic or mixed-initiative composition. In most previous work, these descriptions consist of semantic types for inputs and outputs of services or a type for the service as a whole. While this is certainly useful, we argue that it is not enough to model and construct complex data workflows. We present a planning approach to automatically constructing data processing workflows where the inputs and outputs of services are relational descriptions in an expressive logic. Our workflow planner uses relational subsumption to connect the output of a service with the input of another. This modeling style has the advantage that adaptor services, so-called shims, can be automatically inserted into the workflow where necessary.".
- 155 abstract "We present a simple method to extract information from search engine snippets. Although the techniques presented are domain independent, this work focuses on extracting biographical information of historical persons from multiple unstructured sources on the Web. We first find a list of persons and their periods of life by querying the periods and scanning the retrieved snippets for person names. Subsequently, we find biographical information for the persons extracted. In order to gain insight into the mutual relations among the persons identified, we create a social network using co-occurrences on the Web. Although we use uncontrolled and unstructured Web sources, the information extracted is reliable. Moreover, we show that Web Information Extraction can be used to create both informative and enjoyable applications.".
- 169 abstract "OBO is an ontology language that has often been used for modeling ontologies in the life sciences. Its definition is relatively informal, so, in this paper, we provide a clear specification for OBO syntax and semantics via a mapping to OWL. This mapping also allows us to apply existing Semantic Web tools and techniques to OBO. We show that Semantic Web reasoners can be used to efficiently reason with OBO ontologies. Furthermore, we show that grounding the OBO language in formal semantics is useful for the ontology development process: using an OWL reasoner, we detected a likely modeling error in one OBO ontology.".
- 183 abstract "The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning - that is, reasoning that reuses information obtained from previous versions of an ontology - based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.".
- 197 abstract "In this paper we present a solution for "weaving the claim web", i.e. the creation of knowledge networks via so-called claims stated in scientific publications created with the SALT (Semantically Annotated LaTeX) framework. To attain this objective, we provide support for claim identification, evolve the appropriate ontologies and define a claim citation and reference mechanism. We also describe a prototypical claim search engine, which allows one to reference existing claims and hence to weave the web. Finally, we performed a small-scale evaluation of the authoring framework with a quite promising outcome.".
- 211 abstract "We present the architecture of an end-to-end semantic search engine that uses a graph data model to enable interactive query answering over structured and interlinked data collected from many disparate sources on the Web. In particular, we study distributed indexing methods for graph-structured data and parallel query evaluation methods on a cluster of computers. We evaluate the system on a dataset with 430 million statements collected from the Web, and provide scale-up experiments on 7 billion synthetically generated statements.".
- 225 abstract "Ontologies proliferate with the growth of the Semantic Web. However, most of data on the Web are still stored in relational databases. Therefore, it is important to establish interoperability between relational databases and ontologies for creating a Web of data. An effective way to achieve interoperability is finding mappings between relational database schemas and ontologies. In this paper, we propose a new approach to discovering simple mappings between a relational database schema and an ontology. It exploits simple mappings based on virtual documents, and eliminates incorrect mappings via validating mapping consistency. Additionally, it also constructs a special type of semantic mappings, called contextual mappings, which is useful for practical applications. Experimental results demonstrate that our approach performs well on several data sets from real world domains.".
- 239 abstract "As there is more and more reusable structured data on the Web, casual users will want to take into their own hands the task of mashing up data rather than wait for mash-up sites to be built that address exactly their individually unique needs. In this paper, we present Potluck, a Web user interface that lets casual users—those without programming skills and data modeling expertise—mash up data themselves. Potluck is novel in its use of drag and drop for merging fields, its integration and extension of the faceted browsing paradigm for focusing on subsets of data to align, and its application of simultaneous editing for cleaning up data syntactically. Potluck also lets the user construct rich visualizations of data in-place as the user aligns and cleans up the data. This iterative process of integrating the data while constructing useful visualizations is desirable when the user is unfamiliar with the data at the beginning—a common case—and wishes to get immediate value out of the data without having to spend the overhead of completely and perfectly integrating the data first. A user study on Potluck indicated that it was usable and learnable, and elicited excitement from programmers who, even with their programming skills, previously had great difficulties performing data integration.".
- 253 abstract "Instance-based ontology mapping is a promising family of solutions to a class of ontology alignment problems. Instance-based ontology mapping crucially depends on measuring the similarity between sets of annotated instances. In this paper we study how the choice of co-occurrence measures affects the performance of instance-based mapping. To this end, we have implemented a number of different statistical co-occurrence measures. We have prepared an extensive test case using vocabularies of thousands of terms, millions of instances, and hundreds of thousands of co-annotated items, and we have obtained a human Gold Standard judgement for part of the mapping space. We then study how the different co-occurrence measures and a number of algorithmic variations perform on our benchmark dataset, as compared against the Gold Standard. Our systematic study shows excellent results of instance-based matching in general, where the simpler measures often outperform more sophisticated statistical co-occurrence measures.".
- 267 abstract "Finding the justifications of an entailment (that is, all the minimal sets of axioms sufficient to produce the entailment) has emerged as a key inference service for the Web Ontology Language (OWL). Justifications are essential for debugging unsatisfiable classes and contradictions. The availability of justifications as explanations of entailments improves the understandability of large and complex ontologies. In this paper, we present several algorithms for computing all the justifications of an entailment in an OWL-DL ontology and show, by an empirical evaluation, that even a reasoner-independent approach works well on real ontologies.".
- 281 abstract "Natural language interfaces offer end-users a familiar and convenient option for querying ontology-based knowledge bases. Several studies have shown that they can achieve high retrieval performance as well as domain independence. This paper focuses on usability and investigates if NLIs are useful from an end-user's point of view. To that end, we introduce four interfaces each allowing a different query language and present a usability study benchmarking these interfaces. The results of the study reveal a clear preference for full sentences as query language and confirm that NLIs are useful for querying Semantic Web data.".
- 29 abstract "Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high quality multimedia ontology that is compatible with existing (semantic) web technologies. We explain the complexity of the problem using an annotation scenario. We then derive a number of requirements for specifying a formal multimedia ontology before we present the developed ontology, COMM, and evaluate it with respect to our requirements. We provide an API for generating multimedia annotations that conform to COMM.".
- 295 abstract "This research explores three SPARQL-based techniques to solve Semantic Web tasks that often require similarity measures, such as semantic data integration, ontology mapping, and Semantic Web service matchmaking. Our aim is to see how far it is possible to integrate customized similarity functions (CSF) into SPARQL to achieve good results for these tasks. Our first approach exploits virtual triples calling property functions to establish virtual relations among resources under comparison; the second approach uses extension functions to filter out resources that do not meet the requested similarity criteria; finally, our third technique applies new solution modifiers to post-process a SPARQL solution sequence. The semantics of the three approaches are formally elaborated and discussed. We close the paper with a demonstration of the usefulness of our iSPARQL framework in the context of a data integration and an ontology mapping experiment.".
- 309 abstract "Despite the success of the Web Ontology Language OWL, the development of expressive means for querying OWL knowledge bases is still an open issue. In this paper, we investigate how a very natural and desirable form of queries – namely conjunctive ones – can be used in conjunction with OWL such that one of the major design criteria of the latter – namely decidability – can be retained. More precisely, we show that querying the tractable fragment EL++ of OWL 1.1 is decidable. We also provide a complexity analysis and show that querying unrestricted EL++ is undecidable.".
- 323 abstract "We study the continuous evaluation of conjunctive triple pattern queries over RDF data stored in distributed hash tables. In a continuous query scenario network nodes subscribe with long-standing queries and receive answers whenever RDF triples satisfying their queries are published. We present two novel query processing algorithms for this scenario and analyze their properties formally. Our performance goal is to have algorithms that scale to large amounts of RDF data, distribute the storage and query processing load evenly and incur as little network traffic as possible. We discuss the various performance tradeoffs that occur through a detailed experimental evaluation of the proposed algorithms.".
- 337 abstract "Recently, the World Wide Web Consortium (W3C) produced a standard set of “Semantic Annotations for WSDL and XML Schema” (SAWSDL). SAWSDL provides a standard means by which WSDL documents can be related to semantic descriptions, such as those provided by OWL-S (OWL for Services) and other Semantic Web services frameworks. We argue that the value of SAWSDL cannot be realized until its use is specified, and its benefits explained, in connection with a particular framework. This paper is an important first step toward meeting that need, with respect to OWL-S. We explain what OWL-S constructs are appropriate for use with the various SAWSDL annotations, and provide a rationale and guidelines for their use. In addition, we discuss some weaknesses of SAWSDL, and identify some ways in which OWL-S could evolve so as to integrate more smoothly with SAWSDL.".
- 351 abstract "In recent years, CNL (Controlled Natural Language) has received much attention with regard to ontology-based knowledge acquisition systems. CNLs, as subsets of natural languages, can be useful for both humans and computers by eliminating ambiguity of natural languages. Our previous work, OntoPath, proposed to edit natural language-like narratives that are structured in RDF (Resource Description Framework) triples, using a domain-specific ontology as their language constituents. However, our previous work and other systems employing CFG for grammar definition have difficulties in enlarging the expression capacity. A newly developed editor, which we propose in this paper, permits grammar definitions through CFG-LD (Context-Free Grammar with Lexical Dependency) that includes sequential and semantic structures of the grammars. With CFG describing the sequential structure of grammar, lexical dependencies between sentence elements can be designated in the definition system. Through the defined grammars, the implemented editor guides users’ narratives in more familiar expressions with a domain-specific ontology and translates the content into RDF triples.".
- 365 abstract "In this paper, we present a new approach to web search personalization based on user collaboration and sharing of information about web documents. The proposed personalization technique separates data collection and user profiling from the information system whose contents and indexed documents are being searched for, i.e. the search engines, and uses social bookmarking and tagging to re-rank web search results. It is independent of the search engine being used, so users are free to choose the one they prefer, even if their favorite search engine does not natively support personalization. We show how to design and implement such a system in practice and investigate its feasibility and usefulness with large sets of real-word data and a user study.".
- 379 abstract "Ontologies play a core role for the success of the Semantic Web as they provide a shared vocabulary for different resources and applications. Developing an error-free ontology is a difficult task. A common kind of error for an ontology is logical contradiction or incoherence. In this paper, we propose some approaches to measuring incoherence in DL-based ontologies. These measures give an ontology engineer important information for maintaining and evaluating ontologies. We implement the proposed approaches using the KAON2 reasoner and provide some preliminary but encouraging empirical results.".
- 393 abstract "We present a semantic-based approach to multi-issue bilateral negotiation for e-commerce. We use Description Logics to model advertisements, and relations among issues as axioms in a TBox. We then introduce a logic-based alternating-offers protocol, able to handle conflicting information, that merges non-standard reasoning services in Description Logics with utility theory to find the most suitable agreements. We illustrate and motivate the theoretical framework, the logical language, and the negotiation protocol.".
- 407 abstract "This paper presents a method for making metadata conforming to heterogeneous schemas semantically interoperable. The idea is to make the knowledge embedded in the schema structures interoperable and explicit by transforming the schemas into a shared, event-based representation of knowledge about the real world. This enables and simplifies accurate reasoning services such as cross-domain semantic search, browsing, and recommending. A case study of transforming three different schemas and datasets is presented. An implemented knowledge-based recommender system utilizing the results in the semantic portal CULTURESAMPO was found useful in a preliminary user study.".
- 421 abstract "The increased availability of online knowledge has led to the design of several algorithms that solve a variety of tasks by harvesting the Semantic Web, i.e., by dynamically selecting and exploring a multitude of online ontologies. Our hypothesis is that the performance of such novel algorithms implicitly provides an insight into the quality of the used ontologies and thus opens the way to a task-based evaluation of the Semantic Web. We have investigated this hypothesis by studying the lessons learnt about online ontologies when used to solve three tasks: ontology matching, folksonomy enrichment, and word sense disambiguation. Our analysis leads to a suite of conclusions about the status of the Semantic Web, which highlight a number of strengths and weaknesses of the semantic information available online and complement the findings of other analyses of the Semantic Web landscape.".
- 43 abstract "In open and distributed environments ontology mapping provides interoperability between interacting actors. However, conventional mapping systems focus on acquiring static information, and on mapping whole ontologies, which is infeasible in open systems. This paper shows that the interactions themselves between the actors can be used to predict mappings, simplifying dynamic ontology mapping. The intuitive idea is that similar interactions follow similar conventions and patterns, which can be analysed. The computed model can be used to suggest possible mappings for the exchanged messages in new interactions. The suggestions can be evaluated by any standard ontology matcher: if they are accurate, the matchers avoid evaluating mappings unrelated to the interaction. The minimal requirement for using this system is that it is possible to describe and identify the interaction sequences: the OpenKnowledge project has produced an implementation that demonstrates this is possible in a fully peer-to-peer environment.".
- 435 abstract "This paper presents a tableau approach for deciding description logics outside the scope of OWL DL and current state-of-the-art tableau-based description logic systems. In particular, we define a sound and complete tableau calculus for the description logic ALBO and show that it provides a basis for decision procedures for this logic and numerous other description logics with full role negation. ALBO is the extension of ALC with the Boolean role operators, inverse of roles, domain and range restriction operators and it includes full support for objects (nominals). ALBO is a very expressive description logic which is NExpTime complete and subsumes Boolean modal logic and the two-variable fragment of first-order logic. An important novelty is the use of a versatile, unrestricted blocking rule as a replacement for standard loop checking mechanisms implemented in description logic systems. An implementation of our approach exists in the MetTeL system.".
- 449 abstract "In this paper we address the problem of migrating instances between heterogeneous overlapping ontologies. The instance migration problem arises when one wants to reclassify a set of instances of a source ontology into a semantically related target ontology. Our approach exploits mappings between ontologies, which are used to reconcile both conceptual and individual level heterogeneity, and further used to drive the migration process. We ground the approach on a distributed description logic (DDL), in which ontologies are formally encoded as DL knowledge bases and mappings as bridge rules and individual correspondences. From the theoretical side, we study the task of reasoning with instance data in DDL composed of SHIQ ontologies and define a correct and complete distributed tableaux inference procedure. From the practical side, we upgrade the DRAGO DDL reasoner for dealing with instances and further show how it can be used to drive the migration of instances between heterogeneous ontologies.".
- 463 abstract "The process of instantiating an ontology with high-quality and up-to-date instance information manually is both time consuming and prone to error. Automatic ontology instantiation from Web sources is one of the possible solutions to this problem and aims at the computer supported population of an ontology through the exploitation of (redundant) information available on the Web. In this paper we present AllRight, a comprehensive ontology instantiation system. In particular, the techniques implemented in AllRight are designed for application scenarios in which the desired instance information is given in the form of tables and for which existing Information Extraction approaches based on statistical or natural language processing methods are not directly applicable. Within AllRight, we have therefore developed new techniques for dealing with tabular instance data and combined these techniques with existing methods. The system supports all necessary steps for ontology instantiation, i.e. web crawling, name extraction, document clustering as well as fact extraction and validation. AllRight has been successfully evaluated in the popular domains of digital cameras and notebooks, leading to an accuracy of about eighty percent for the extracted facts given only a very limited amount of seed knowledge.".
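The abstract above mentions validation of extracted facts against redundant Web sources; as a minimal sketch of that final stage only, the snippet below keeps a fact when enough independent sources report the same value (the threshold and data shapes are assumptions, not AllRight's actual validation logic).

```python
# Hypothetical sketch: validate extracted (instance, attribute, value) facts by
# cross-source agreement, keeping a value reported by enough distinct sources.

from collections import defaultdict

def validate_facts(extractions, min_sources=2):
    """extractions: iterable of (source, instance, attribute, value) tuples;
    returns the facts reported by at least min_sources distinct sources."""
    sources = defaultdict(set)
    for source, instance, attribute, value in extractions:
        sources[(instance, attribute, value)].add(source)
    return [fact for fact, srcs in sources.items() if len(srcs) >= min_sources]

raw = [
    ("site-a", "Camera-X", "resolution", "12 MP"),
    ("site-b", "Camera-X", "resolution", "12 MP"),
    ("site-c", "Camera-X", "resolution", "10 MP"),   # outlier, one source only
]
print(validate_facts(raw))   # [('Camera-X', 'resolution', '12 MP')]
```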
- 477 abstract "The discovery of suitable Web services for a given task is one of the central operations in Service-oriented Architectures (SOA), and research on Semantic Web services (SWS) aims at automating this step. For the large number of available Web services that can be expected in real-world settings, the computational costs of automated discovery based on semantic matchmaking become important. To make a discovery engine a reliable software component, we must thus aim at minimizing both the mean and the variance of the duration of the discovery task. For this, we present an extension for discovery engines in SWS environments that exploits structural knowledge and previous discovery results for reducing the search space of consequent discovery operations. Our prototype implementation shows significant improvements when applied to the Stanford SWS Challenge scenario and dataset.".
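One possible reading of the reuse idea in the abstract above, sketched with an assumed matchmaker callable: cache earlier per-goal matchmaking results so that a later discovery run only matches services it has not yet seen for that goal.

```python
# Hypothetical sketch: reuse earlier matchmaking results so that a new discovery
# run only checks services not already classified for a goal. The matchmaker
# callable and the goal/service identifiers are assumptions.

class CachingDiscovery:
    def __init__(self, matchmaker):
        self.matchmaker = matchmaker          # matchmaker(goal, service) -> bool
        self.cache = {}                       # goal -> {service: matched?}

    def discover(self, goal, services):
        known = self.cache.setdefault(goal, {})
        for service in services:
            if service not in known:          # only unseen services are matched
                known[service] = self.matchmaker(goal, service)
        return [s for s, matched in known.items() if matched]
```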
- 491 abstract "In different areas ontologies have been developed and many of these ontologies contain overlapping information. Often we would therefore want to be able to use multiple ontologies. To obtain good results, we need to find the relationships between terms in the different ontologies, i.e. we need to align them. Currently, there already exist a number of different alignment strategies. However, it is usually difficult for a user who needs to align two ontologies to decide which of the different available strategies are the most suitable. In this paper we propose a method that provides recommendations on alignment strategies for a given alignment problem. The method is based on the evaluation of the different available alignment strategies on several small selected pieces from the ontologies, and uses the evaluation results to provide recommendations. In the paper we give the basic steps of the method, and then illustrate and discuss the method in the setting of an alignment problem with two well-known biomedical ontologies. We also experiment with different implementations of the steps in the method.".
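The recommendation method in the abstract above evaluates candidate strategies on small ontology segments; the sketch below assumes each strategy is a callable returning correspondence pairs and scores them by average f-measure against small reference alignments. All names and the scoring choice are illustrative assumptions.

```python
# Hypothetical sketch: run each candidate alignment strategy on small ontology
# segments with known reference alignments and recommend the best-scoring one.

def f_measure(found, reference):
    found, reference = set(found), set(reference)
    if not found or not reference:
        return 0.0
    precision = len(found & reference) / len(found)
    recall = len(found & reference) / len(reference)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def recommend_strategy(strategies, segments):
    """strategies: {name: align(seg_a, seg_b) -> correspondence pairs};
    segments: [(seg_a, seg_b, reference_pairs), ...]."""
    scores = {
        name: sum(f_measure(align(a, b), ref) for a, b, ref in segments) / len(segments)
        for name, align in strategies.items()
    }
    return max(scores, key=scores.get), scores
```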
- 505 abstract "Ontology-based applications play an increasingly important role in the public and corporate Semantic Web. While today there exist a range of tools and technologies to support specific ontology engineering and management activities, architectural design guidelines for building ontology-based applications are missing. In this paper, we present an architecture for ontology-based applications, covering the complete ontology lifecycle, that is intended to support software engineers in designing and developing ontology-based applications. We illustrate the use of the architecture in a concrete case study using the NeOn toolkit as one implementation of the architecture.".
- 519 abstract "Current information retrieval (IR) approaches do not formally capture the explicit meaning of a keyword query but provide a comfortable way for the user to specify information needs on the basis of keywords. Ontology-based approaches allow for sophisticated semantic search but impose a query syntax more difficult to handle. In this paper, we present an approach for translating keyword queries to DL conjunctive queries using background knowledge available in ontologies. We present an implementation which shows that this interpretation of keywords can then be used for both exploration of asserted knowledge and for a semantics-based declarative query answering process. We also present an evaluation of our system and a discussion of the limitations of the approach with respect to our underlying assumptions which directly points to issues for future work.".
- 533 abstract "In this paper we describe RDFSync, a methodology for efficient synchronization and merging of RDF models. RDFSync is based on decomposing a model into Minimum Self-Contained graphs (MSGs). After illustrating theory and deriving properties of MSGs, we show how an RDF model can be represented by a list of hashes of such information fragments. The synchronization procedure described here is based on the evaluation and remote comparison of these ordered lists. Experimental results show that the algorithm provides very significant savings on network traffic compared to the file-oriented synchronization of serialized RDF graphs. Finally, we provide the design and report the implementation of a protocol for executing the RDFSync algorithm over HTTP.".
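The hash-list comparison described in the abstract above can be illustrated as follows; the MSG decomposition itself is assumed to be given, and the fragments here are plain strings rather than real RDF subgraphs.

```python
# Hypothetical sketch of the hash-list idea: each side lists the hashes of its
# self-contained fragments (MSGs); comparing the lists tells us which fragments
# must travel over the network.

import hashlib

def fragment_hashes(fragments):
    """Return the sorted list of SHA-1 hashes of serialized fragments."""
    return sorted(hashlib.sha1(f.encode("utf-8")).hexdigest() for f in fragments)

def missing_fragments(local_fragments, remote_hashes):
    """Local fragments the remote side lacks, i.e. whose hash is not in its list."""
    remote = set(remote_hashes)
    return [f for f in local_fragments
            if hashlib.sha1(f.encode("utf-8")).hexdigest() not in remote]

local = ["<a> <p> <b> .", "<c> <q> <d> ."]
remote = fragment_hashes(["<a> <p> <b> ."])       # remote already has one MSG
print(missing_fragments(local, remote))            # only the second MSG is sent
```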
- 547 abstract "Developers of Semantic Web applications face a challenge with respect to the decentralised publication model: where to find statements about encountered resources. The “linked data” approach, which mandates that resource URIs should be de-referenced and yield metadata about the resource, helps, but it is only a partial solution and is not widely followed. We present a lookup index over resources crawled on the Semantic Web. Our index allows applications to automatically retrieve sources with information about a certain resource. In contrast to more feature-rich Semantic Web search engines, our index is purposely limited in scope and functionality to achieve high scalability and maintainability.".
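A toy version of the resource-to-source lookup described above, with an in-memory dictionary standing in for the crawled index; the class and URLs are illustrative only, but the lookup contract (URI in, list of sources out) matches the abstract.

```python
# Hypothetical sketch: a minimal resource-to-source lookup index built from
# crawled (source_url, resource_uris) pairs.

from collections import defaultdict

class ResourceIndex:
    def __init__(self):
        self._sources = defaultdict(set)      # resource URI -> source URLs

    def add(self, source_url, resource_uris):
        for uri in resource_uris:
            self._sources[uri].add(source_url)

    def lookup(self, resource_uri):
        return sorted(self._sources.get(resource_uri, ()))

index = ResourceIndex()
index.add("http://example.org/doc1.rdf", ["http://example.org/Alice"])
print(index.lookup("http://example.org/Alice"))   # ['http://example.org/doc1.rdf']
```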
- 561 abstract "Automatic knowledge reuse for Semantic Web applications imposes several challenges on ontology search. Existing ontology retrieval systems merely return a lengthy list of relevant single ontologies, which may not completely cover the specified user requirements. Therefore, there is an increasing demand for a tool or algorithm that can check the concept adequacy of existing ontologies with respect to a user query and then recommend a single ontology, or a combination of ontologies, that entirely fulfills the requirements. Thus, this paper develops an algorithm, namely combiSQORE, to determine whether the available collection of ontologies is able to completely satisfy a submitted query and return a single ontology or a combination of ontologies that guarantees query coverage. In addition, it ranks the returned answers based on their conceptual closeness and query coverage. The experimental results show that the proposed algorithm is simple, efficient and effective.".
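The coverage check described above can be viewed as a set-cover problem; the sketch below greedily combines ontologies, modelled simply as concept sets (an assumption), until the query concepts are covered or no ontology adds coverage. It illustrates the coverage idea only, not the combiSQORE ranking.

```python
# Hypothetical sketch: greedy set cover over ontology concept sets.

def combine_for_coverage(query_concepts, ontologies):
    """ontologies: {name: set_of_concepts}; returns (chosen_names, uncovered)."""
    uncovered = set(query_concepts)
    chosen = []
    while uncovered:
        name, gain = max(
            ((n, len(uncovered & c)) for n, c in ontologies.items() if n not in chosen),
            key=lambda pair: pair[1],
            default=(None, 0),
        )
        if gain == 0:
            break                              # no remaining ontology adds coverage
        chosen.append(name)
        uncovered -= ontologies[name]
    return chosen, uncovered
```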
- 57 abstract "The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data yield promising results and show the usefulness of our approach.".
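To make the kernel-composition idea above concrete, the sketch below combines two toy instance kernels (shared classes and shared property values) as a weighted sum; the instance representation and the weights are assumptions, not the paper's kernels.

```python
# Hypothetical sketch: a composite instance kernel assembled as a weighted sum
# of specialized kernels. Set-intersection counts are valid kernels, and any
# non-negative weighted sum of valid kernels is again a valid kernel.

def class_kernel(x, y):
    return len(x["classes"] & y["classes"])

def property_kernel(x, y):
    return len(x["properties"] & y["properties"])

def composite_kernel(x, y, weights=(1.0, 0.5)):
    return weights[0] * class_kernel(x, y) + weights[1] * property_kernel(x, y)

alice = {"classes": {"Person", "Author"}, "properties": {("worksAt", "KIT")}}
bob = {"classes": {"Person"}, "properties": {("worksAt", "KIT")}}
print(composite_kernel(alice, bob))   # 1*1 + 0.5*1 = 1.5
```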
- 575 abstract "Extracting semantic relations is of great importance for the creation of the Semantic Web content. It is of great benefit to semi-automatically extract relations from the free text of Wikipedia using the structured content readily available in it. Pattern matching methods that employ information redundancy cannot work well since there is not much redundant information in Wikipedia, compared to the Web. Multi-class classification methods are not reasonable since no classification of relation types is available in Wikipedia. In this paper, we propose PORE (Positive-Only Relation Extraction), for relation extraction from Wikipedia text. The core algorithm B-POL extends a state-of-the-art positive-only learning algorithm using bootstrapping, strong negative identification, and transductive inference to work with fewer positive training examples. We conducted experiments on several relations with different amounts of training data. The experimental results show that B-POL can work effectively given only a small number of positive training examples and it significantly outperforms the original positive-only learning approaches and a multi-class SVM. Furthermore, although PORE is applied in the context of Wikipedia, the core algorithm B-POL is a general approach for Ontology Population and can be adapted to other domains.".
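B-POL, as described above, is considerably richer, but the core positive-only bootstrapping loop can be caricatured as follows: seed "strong negatives" from the unlabeled pool and then iteratively add items that resemble the current negatives more than the positives. Feature vectors, similarity measure, and stopping rules are all assumptions here.

```python
# Hypothetical sketch of positive-only bootstrapping with a centroid similarity.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def bootstrap_negatives(positives, unlabeled, rounds=3, take=2):
    """Seed with the unlabeled items least similar to the positive centroid,
    then add items that resemble the current negatives more than the positives."""
    pos_center = centroid(positives)
    pool = sorted(unlabeled, key=lambda u: dot(u, pos_center))
    negatives, pool = pool[:take], pool[take:]     # strong negatives as the seed
    for _ in range(rounds - 1):
        if not pool:
            break
        neg_center = centroid(negatives)
        pool.sort(key=lambda u: dot(u, neg_center) - dot(u, pos_center), reverse=True)
        grabbed = [u for u in pool[:take] if dot(u, neg_center) > dot(u, pos_center)]
        if not grabbed:
            break
        negatives.extend(grabbed)
        pool = pool[len(grabbed):]
    return negatives
```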
- 589 abstract ""[Reasoner] performance can be scary, so much so, that we cannot deploy the technology in our products." -- Michael Shepard (http://lists.w3.org/Archives/Public/public-owl-dev/2007JanMar/0047.html). What are typical OWL users to do when their favorite reasoner never seems to return? In this paper, we present our first steps considering this problem. We describe the challenges and our approach, and present a prototype tool to help users identify reasoner performance bottlenecks with respect to their ontologies. We then describe 4 case studies on synthetic and real-world ontologies. While the anecdotal evidence suggests that the service can be useful for both ontology developers and reasoner implementors, much more is desired.".
- 603 abstract "For the development of Semantic Web technology, researchers and developers in the Semantic Web community need to focus on the areas in which human reasoning is particularly difficult. Two studies in this paper demonstrate that people are predisposed to use class-inclusion labels for inductive judgments. This tendency appears to stem from a general characteristic of human reasoning – using heuristics to solve problems. The inference engines and interface designs that incorporate human reasoning need to integrate this general characteristic underlying human induction.".