nigel.stanger / Discussion_Papers
Initial import of discussion papers web site.
master
1 parent 79e1ea1
commit 6510ec72060e8e437a7dc0a246051d81cb2b42b7
nstanger authored on 15 Jul 2002
Showing 2 changed files
Website/dp1999-abstracts.htm
Website/dp2001-abstracts.htm
Website/dp1999-abstracts.htm
0 → 100644
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<HTML>
<HEAD>
<title>Information Science Discussion Papers Series: 1999 abstracts</title>
<META NAME="generator" CONTENT="BBEdit 5.1.1">
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<link rel="Stylesheet" href="DPSstyles.css" type="text/css">
</HEAD>
<BODY>
<h2>Information Science Discussion Papers Series: 1999 Abstracts</h2>
<hr>
<h3><a name="dp9901">99/01: UML as an ontology modelling language</a></h3>
<h4>S. Cranefield and M.K. Purvis</h4>
<p>Current tools and techniques for ontology development are based on the traditions of AI knowledge representation research. This research has led to popular formalisms such as KIF and KL-ONE style languages. However, these representations are little known outside AI research laboratories. In contrast, commercial interest has resulted in ideas from the object-oriented programming community maturing into industry standards and powerful tools for object-oriented analysis, design and implementation. These standards and tools have a wide and rapidly growing user community. This paper examines the potential for object-oriented standards to be used for ontology modelling, and in particular presents an ontology representation language based on a subset of the Unified Modeling Language together with its associated Object Constraint Language.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9901sc.pdf.gz">Download</a> (gzipped PDF, 156KB)</p>
<hr>
<h3><a name="dp9902">99/02: Evolving connectionist systems for on-line, knowledge-based learning: Principles and applications</a></h3>
<h4>N. Kasabov</h4>
<p>The paper introduces evolving connectionist systems (ECOS) as an effective approach to building on-line, adaptive intelligent systems. ECOS evolve through incremental, hybrid (supervised/unsupervised), on-line learning. They can accommodate new input data, including new features, new classes, etc., through local element tuning. New connections and new neurons are created during the operation of the system. The ECOS framework is presented and illustrated on a particular type of evolving neural networks—evolving fuzzy neural networks (EFuNNs). EFuNNs can learn spatial-temporal sequences in an adaptive way, through one-pass learning. Rules can be inserted and extracted at any time during the system’s operation. The characteristics of ECOS and EFuNNs are illustrated on several case studies that include: adaptive pattern classification; adaptive, phoneme-based spoken language recognition; adaptive dynamic time-series prediction; and intelligent agents.</p>
<p><strong>Keywords: </strong>evolving connectionist systems, evolving fuzzy neural networks, on-line learning, spatial-temporal adaptation, adaptive speech recognition</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9902nk.pdf.gz">Download</a> (gzipped PDF, 800KB)</p>
<hr>
<h3><a name="dp9903">99/03: Spatial-temporal adaptation in evolving fuzzy neural networks for on-line adaptive phoneme recognition</a></h3>
<h4>N. Kasabov and M. Watts</h4>
<p>The paper is a study of a new class of spatial-temporal evolving fuzzy neural network systems (EFuNNs) for on-line adaptive learning, and their applications for adaptive phoneme recognition. The systems evolve through incremental, hybrid (supervised/unsupervised) learning. They accommodate new input data, including new features, new classes, etc., through local element tuning.
Both feature-based similarities and temporal dependencies that are present in the input data are learned and stored in the connections, and adjusted over time. This is an important requirement for the task of adaptive, speaker-independent spoken language recognition, where new pronunciations and new accents need to be learned in an on-line, adaptive mode. Experiments with EFuNNs, and also with multi-layer perceptrons and fuzzy neural networks (FuNNs), conducted on the whole set of New Zealand English phonemes, show the superiority and the potential of EFuNNs when used for the task. Spatial allocation of nodes and their aggregation in EFuNNs allow for similarity preserving and similarity observation within the data for one phoneme and across phonemes, while subtle temporal variations within the data for one phoneme can be learned and adjusted through temporal feedback connections. The experimental results support the claim that spatial-temporal organisation in EFuNNs can lead to a significant improvement in the recognition rate, especially for the diphthong and the vowel phonemes in English, which in many cases are problematic for a system to learn and adjust in an adaptive way.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9903nk.pdf.gz">Download</a> (gzipped PDF, 438KB)</p>
<hr>
<h3><a name="dp9904">99/04: Dynamic evolving fuzzy neural networks with ‘m-out-of-n’ activation nodes for on-line adaptive systems</a></h3>
<h4>N. Kasabov and Q. Song</h4>
<p>The paper introduces a new type of evolving fuzzy neural networks (EFuNNs), denoted as mEFuNNs, for on-line learning, and their applications for dynamic time series analysis and prediction. mEFuNNs evolve through incremental, hybrid (supervised/unsupervised), on-line learning, like the EFuNNs. They can accommodate new input data, including new features, new classes, etc., through local element tuning. New connections and new neurons are created during the operation of the system. At each time moment the output vector of an mEFuNN is calculated based on the m most activated rule nodes. Two approaches are proposed: (1) using weighted fuzzy rules of Zadeh-Mamdani type; (2) using Takagi-Sugeno fuzzy rules that utilise dynamically changing and adapting values for the inference parameters. It is proved that the mEFuNNs can effectively learn complex temporal sequences in an adaptive way and outperform EFuNNs, ANFIS and other neural network and hybrid models. Rules can be inserted, extracted and adjusted continuously during the operation of the system. The characteristics of the mEFuNNs are illustrated on two benchmark dynamic time series data sets, as well as on two real case studies for on-line adaptive control and decision making. Aggregation of rule nodes in evolved mEFuNNs can be achieved through the fuzzy C-means clustering algorithm, which is also illustrated on the benchmark data sets.
The mEFuNNs that are regularly trained and aggregated in an on-line, self-organised mode perform as well as, or better than, the mEFuNNs that use the fuzzy C-means clustering algorithm for off-line rule node generation on the same data set.</p>
<p><strong>Keywords: </strong>dynamic evolving fuzzy neural networks, on-line learning, adaptive control, dynamic time series prediction, fuzzy clustering</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9904nk.pdf.gz">Download</a> (gzipped PDF, 1.5MB)</p>
<hr>
<h3><a name="dp9905">99/05: Hybrid neuro-fuzzy inference systems and their application for on-line adaptive learning of nonlinear dynamical systems</a></h3>
<h4>J. Kim and N. Kasabov</h4>
<p>In this paper, an adaptive neuro-fuzzy system, called HyFIS, is proposed to build and optimise fuzzy models. The proposed model introduces the learning power of neural networks into the fuzzy logic systems and provides linguistic meaning to the connectionist architectures. Heuristic fuzzy logic rules and input-output fuzzy membership functions can be optimally tuned from training examples by a hybrid learning scheme composed of two phases: the phase of rule generation from data, and the phase of rule tuning by using the error backpropagation learning scheme for a neural fuzzy system. In order to illustrate the performance and applicability of the proposed neuro-fuzzy hybrid model, extensive simulation studies of nonlinear complex dynamics are carried out. The proposed method can be applied to on-line incremental adaptive learning for the purpose of prediction and control of non-linear dynamical systems.</p>
<p><strong>Keywords: </strong>neuro-fuzzy systems, neural networks, fuzzy logic, parameter and structure learning, knowledge acquisition, adaptation, time series</p>
<hr>
<h3><a name="dp9906">99/06: A distributed architecture for environmental information systems</a></h3>
<h4>M.K. Purvis, S. Cranefield and M. Nowostawski</h4>
<p>The increasing availability and variety of large environmental data sets is opening new opportunities for data mining and useful cross-referencing of disparate environmental data sets distributed over a network. In order to take advantage of these opportunities, environmental information systems will need to operate effectively in a distributed, open environment. In this paper, we describe the New Zealand Distributed Information System (NZDIS) software architecture for environmental information systems. In order to optimise extensibility, openness, and flexible query processing, the architecture is organised into collaborating software agents that communicate by means of a standard declarative agent communication language. The metadata of environmental data sources are stored as part of agent ontologies, which represent information models of the domain of the data repository. The agents and the associated ontological framework are designed as much as possible to take advantage of standard object-oriented technology, such as CORBA, UML, and OQL, in order to enhance the openness and accessibility of the system.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9906mp.pdf.gz">Download</a> (gzipped PDF, 156KB)</p>
<hr>
<h3><a name="dp9907">99/07: From hybrid adjustable neuro-fuzzy systems to adaptive connectionist-based systems for phoneme and word recognition</a></h3>
<h4>N. Kasabov, R. Kilgour and S. Sinclair</h4>
<p>This paper discusses the problem of adaptation in automatic speech recognition systems (ASRS) and suggests several strategies for adaptation in a modular architecture for speech recognition. The architecture allows for adaptation at different levels of the recognition process, where modules can be adapted individually based on their performance and the performance of the whole system. Two realisations of this architecture are presented along with experimental results from small-scale experiments. The first realisation is a hybrid system for speaker-independent phoneme-based spoken word recognition, consisting of neural networks for recognising English phonemes and fuzzy systems for modelling acoustic and linguistic knowledge. This system is adjustable by additional training of individual neural network modules and tuning the fuzzy systems. The increased accuracy of the recognition through appropriate adjustment is also discussed. The second realisation of the architecture is a connectionist system that uses fuzzy neural networks (FuNNs) to accommodate both a priori linguistic knowledge and data from a speech corpus. A method for on-line adaptation of FuNNs is also presented.</p>
<p><strong>Keywords: </strong>pattern recognition, artificial intelligence, neural networks, speech recognition</p>
<hr>
<h3><a name="dp9908">99/08: Adaptive, evolving, hybrid connectionist systems for image pattern recognition</a></h3>
<h4>N. Kasabov, S. Israel and B. Woodford</h4>
<p>The chapter presents a new methodology for building adaptive, incremental learning systems for image pattern classification. The systems are based on dynamically evolving fuzzy neural networks, which are neural architectures realising connectionist learning, fuzzy logic inference, and case-based reasoning. The methodology and the architecture are applied to two sets of real data—one of satellite image data, and the other of fruit image data. The proposed method and architecture encourage fast learning, life-long learning and on-line learning when the system operates in a changing environment of image data.</p>
<p><strong>Keywords: </strong>image classification, evolving fuzzy neural networks, case-based reasoning</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9908nk.pdf.gz">Download</a> (gzipped PDF, 1.2MB)</p>
<hr>
<h3><a name="dp9909">99/09: The concepts of hidden Markov model in speech recognition</a></h3>
<h4>W. Abdulla and N. Kasabov</h4>
<p>The speech recognition field is one of the most challenging fields to have faced scientists for a long time, and a complete solution is still far from reach. Efforts, backed by huge funds from companies, are concentrated on different related and supportive approaches to reaching the final goal, so that it can then be applied to the enormous range of applications that are still waiting for successful speech recognisers free from the constraints of speakers, vocabularies or environment. This task is not an easy one, due to the interdisciplinary nature of the problem and because it requires speech perception to be incorporated into the recogniser (Speech Understanding Systems), which in turn points strongly to the use of intelligence within the systems.</p>
<p>The bare techniques of recognisers (without intelligence) follow a wide variety of approaches, with different claims of success by each group of authors who put their faith in their favourite way.
However, the sole technique accepted by researchers as the state of the art is the Hidden Markov Model (HMM), which is agreed to be the most promising one. It might be used successfully with other techniques to improve performance, such as hybridising the HMM with Artificial Neural Network (ANN) algorithms. This does not mean that the HMM is free from approximations that are far from reality, such as the assumption that successive observations are independent, but the results and potential of the algorithm are reliable. Modifications to the HMM take on the burden of releasing it from these poorly representative approximations, in the hope of better results.</p>
<p>In this report we describe the backbone of the HMM technique, with the main outlines for successful implementation. The representation and implementation of HMMs vary in one way or another, but the main idea is the same, as are the results and computation costs; it is a matter of preference which one to choose. Our preference here is that adopted by Ferguson and Rabiner et al.</p>
<p>In this report we will describe the Markov chain, and then investigate a very popular model in the speech recognition field (the left-right HMM topology). The mathematical formulations that need to be implemented will be fully explained, as they are crucial in building the HMM. The prominent factors in the design will also be discussed. Finally, we conclude the report with some experimental results showing the practical outcomes of the implemented model.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9909wa.pdf.gz">Download</a> (gzipped PDF, 489KB)</p>
<hr>
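<!-- Editor's note: the dp9909 abstract above names the left-right HMM topology and its mathematical formulation as the core of the report. Below is a minimal sketch of the standard forward algorithm for a three-state left-right HMM, with hypothetical probabilities; it is illustrative only, and the report itself (following Ferguson and Rabiner et al.) should be consulted for the full treatment, including scaling and training.

# Three-state left-right HMM: transitions may only stay in a state or move forward.
A = [
    [0.6, 0.4, 0.0],   # state 0 -> {0, 1}
    [0.0, 0.7, 0.3],   # state 1 -> {1, 2}
    [0.0, 0.0, 1.0],   # state 2 is absorbing (final)
]
B = [
    {"a": 0.7, "b": 0.3},  # per-state emission probabilities
    {"a": 0.4, "b": 0.6},
    {"a": 0.1, "b": 0.9},
]
pi = [1.0, 0.0, 0.0]       # left-right models start in the first state

def forward(obs):
    """Total likelihood P(obs | model) via the forward algorithm."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]   # initialisation
    for o in obs[1:]:                                        # induction
        alpha = [
            B[s][o] * sum(alpha[r] * A[r][s] for r in range(len(alpha)))
            for s in range(len(alpha))
        ]
    return sum(alpha)                                        # termination

print(forward(["a", "a", "b", "b"]))

The zero entries below the diagonal of A are what make the topology "left-right": a phoneme model can only progress through its states in order.
-->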
<h3><a name="dp9910">99/10: Finding medical information on the Internet: Who should do it and what should they know</a></h3>
<h4>D. Parry</h4>
<p>More and more medical information is appearing on the Internet, but it is not easy to get at the nuggets amongst all the spoil. Bruce McKenzie’s editorial in the December 1997 edition of <EM>SIM Quarterly</EM> dealt very well with the problems of quality, but I would suggest that the problem of accessibility is as much of a challenge. As ever-greater quantities of high-quality medical information are published electronically, the need to be able to find it becomes imperative. There are a number of tools to find what you want on the Internet—search engines, agents, indexing and classification schemes and hyperlinks, but their use requires care, skill and experience.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9910dp.pdf.gz">Download</a> (gzipped PDF, 142KB)</p>
<hr>
<h3><a name="dp9911">99/11: Software metrics data analysis—Exploring the relative performance of some commonly used modeling techniques</a></h3>
<h4>A. Gray and S. MacDonell</h4>
<p>Whilst some software measurement research has been unquestionably successful, other research has struggled to enable expected advances in project and process management. Contributing to this lack of advancement has been the incidence of inappropriate or non-optimal application of various model-building procedures. This obviously raises questions over the validity and reliability of any results obtained, as well as the conclusions that may have been drawn regarding the appropriateness of the techniques in question. In this paper we investigate the influence of various data set characteristics and the purpose of analysis on the effectiveness of four model-building techniques—three statistical methods and one neural network method. In order to illustrate the impact of data set characteristics, three separate data sets, drawn from the literature, are used in this analysis. In terms of predictive accuracy, it is shown that no one modeling method is best in every case. Some consideration of the characteristics of data sets should therefore occur before analysis begins, so that the most appropriate modeling method is then used. Moreover, issues other than predictive accuracy may have a significant influence on the selection of model-building methods. These issues are also addressed here, and a series of guidelines for selecting among and implementing these and other modeling techniques is discussed.</p>
<p><strong>Keywords: </strong>software metrics, analysis, statistical methods, connectionist methods</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9911ag.pdf.gz">Download</a> (gzipped PDF, 219KB)</p>
<hr>
<h3><a name="dp9912">99/12: Software forensics for discriminating between program authors using case-based reasoning, feed-forward neural networks and multiple discriminant analysis</a></h3>
<h4>S. MacDonell, A. Gray, G. MacLennan and P. Sallis</h4>
<p>Software forensics is a research field that, by treating pieces of program source code as linguistically and stylistically analyzable entities, attempts to investigate aspects of computer program authorship. This can be performed with the goal of identification, discrimination, or characterization of authors. In this paper we extract a set of 26 standard authorship metrics from 351 programs by 7 different authors. The use of feed-forward neural networks, multiple discriminant analysis, and case-based reasoning is then investigated in terms of classification accuracy for the authors on both training and testing samples. The first two techniques produce remarkably similar results, with the best results coming from the case-based reasoning models. All techniques have high prediction accuracy rates, supporting the feasibility of the task of discriminating program authors based on source-code measurements.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9912sm.pdf.gz">Download</a> (gzipped PDF, 123KB)</p>
<hr>
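<!-- Editor's note: dp9912 above extracts 26 standard authorship metrics from program source. The three toy metrics below are hypothetical stand-ins meant only to show the flavour of such source-code measurements; they are not the paper's actual metric set.

def style_metrics(source: str) -> dict:
    """Compute a few simple stylistic measurements over source code text."""
    lines = [ln for ln in source.splitlines() if ln.strip()]  # non-blank lines
    comments = [ln for ln in lines if ln.lstrip().startswith("//")]
    braces = [ln for ln in lines if ln.strip() in ("{", "}")]
    return {
        "mean_line_length": sum(len(ln) for ln in lines) / len(lines),
        "comment_ratio": len(comments) / len(lines),
        "brace_own_line_ratio": len(braces) / len(lines),
    }

sample = """\
// compute the total
int total = 0;
for (int i = 0; i < n; i++)
{
    total += x[i];
}
"""
print(style_metrics(sample))

Feature vectors like this, one per program, are the kind of input on which the compared classifiers (feed-forward neural networks, multiple discriminant analysis, case-based reasoning) would be trained.
-->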
<h3><a name="dp9913">99/13: FULSOME: Fuzzy logic for software metric practitioners and researchers</a></h3>
<h4>S. MacDonell, A. Gray and J. Calvert</h4>
<p>There has been increasing interest in recent times in using fuzzy logic techniques to represent software metric models, especially those predicting development effort. The use of fuzzy logic for this application area offers several advantages when compared to other commonly used techniques. These include the use of a single model with different levels of precision for inputs and outputs used throughout the development life cycle, the possibility of model development with little or no data, and its effectiveness when used as a communication tool. The use of fuzzy logic in any applied field, however, requires that suitable tools are available for both practitioners and researchers—satisfying both interface- and functionality-related requirements. After outlining some of the specific needs of the software metrics community, including results from a survey of software developers on this topic, the paper describes the use of a set of tools called FULSOME (Fuzzy Logic for Software Metrics). The development of a simple fuzzy logic system by a software metrician and subsequent tuning are then discussed using a real-world set of software metric data. The automatically generated fuzzy model performs acceptably when compared to regression-based models.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9913sm.pdf.gz">Download</a> (gzipped PDF, 201KB)</p>
<hr>
<h3><a name="dp9914">99/14: Assessing prediction systems</a></h3>
<h4>B. Kitchenham, S. MacDonell, L. Pickard and M. Shepperd</h4>
<p>For some years software engineers have been attempting to develop useful prediction systems to estimate such attributes as the effort to develop a piece of software and the likely number of defects. Typically, prediction systems are proposed and then subjected to empirical evaluation. Claims are then made with regard to the quality of the prediction systems. A wide variety of prediction quality indicators have been suggested in the literature. Unfortunately, we believe that a somewhat confusing state of affairs prevails and that this impedes research progress. This paper aims to provide the research community with a better understanding of the meaning of, and relationship between, these indicators. We critically review twelve different approaches by considering them as descriptors of the residual variable. We demonstrate that the two most popular indicators, MMRE and pred(25), are in fact indicators of the spread and shape, respectively, of prediction accuracy, where prediction accuracy is the ratio of estimate to actual (or actual to estimate). Next we highlight the impact of the choice of indicator by comparing three prediction systems derived using four different simulated datasets. We demonstrate that the results of such a comparison depend upon the choice of indicator, the analysis technique, and the nature of the dataset used to derive the predictive model. We conclude that prediction systems cannot be characterised by a single summary statistic. We suggest that we need indicators of the central tendency and spread of accuracy, as well as indicators of shape and bias. For this reason, boxplots of relative error or residuals are useful alternatives to simple summary metrics.</p>
<p><strong>Keywords: </strong>prediction systems, estimation, empirical analysis, metrics, goodness-of-fit statistics</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9914bk.pdf.gz">Download</a> (gzipped PDF, 239KB)</p>
<hr>
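<!-- Editor's note: dp9914 above characterises MMRE and pred(25) as indicators of the spread and shape of prediction accuracy. Below is a minimal sketch of how these two indicators are conventionally computed, using hypothetical effort values; consult the paper for its exact conventions.

def mmre(actuals, estimates):
    """Mean magnitude of relative error: mean of |actual - estimate| / actual."""
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, level=0.25):
    """pred(25): proportion of estimates within 25% of the actual value."""
    within = sum(1 for a, e in zip(actuals, estimates) if abs(a - e) / a <= level)
    return within / len(actuals)

actuals = [120.0, 80.0, 200.0]    # e.g. actual effort in person-days (made up)
estimates = [100.0, 90.0, 210.0]  # the prediction system's estimates
print(mmre(actuals, estimates), pred(actuals, estimates))

Because relative error is unbounded for over-estimates but bounded for under-estimates, a single large miss can dominate MMRE, which is consistent with the paper's argument that no single summary statistic characterises a prediction system.
-->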
<h3><a name="dp9915">99/15: Industry practices in project management for multimedia information systems</a></h3>
<h4>S. MacDonell and T. Fletcher</h4>
<p>This paper describes ongoing research directed at formulating a set of appropriate measures for assessing and ultimately predicting effort requirements for multimedia systems development. Whilst significant advances have been made in the determination of measures for both transaction-based and process-intensive systems, very little work has been undertaken in relation to measures for multimedia systems. A small preliminary empirical study is reviewed as a precursor to a more exploratory investigation of the factors that are considered by industry to be influential in determining development effort. This work incorporates the development and use of a goal-based framework to assist the measure selection process from a literature basis, followed by an industry questionnaire. The results provide a number of preliminary but nevertheless useful insights into contemporary project management practices with respect to multimedia systems.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9915sm.pdf.gz">Download</a> (gzipped PDF, 167KB)</p>
<hr>
<h3><a name="dp9916">99/16: Factors systematically associated with errors in subjective estimates of software development effort: The stability of expert judgment</a></h3>
<h4>A. Gray, S. MacDonell and M. Shepperd</h4>
<p>Software metric-based estimation of project development effort is most often performed by expert judgment rather than by using an empirically derived model (although such a model may be used by the expert to assist their decision). One question that can be asked about these estimates is how stable they are with respect to characteristics of the development process and product. This stability can be assessed in relation to the degree to which the project has advanced over time, the type of module for which the estimate is being made, and the characteristics of that module. In this paper we examine a set of expert-derived estimates for the effort required to develop a collection of modules from a large health-care system. Statistical tests are used to identify relationships between the type (screen or report) and characteristics of modules and the likelihood of the associated development effort being under-estimated, approximately correct, or over-estimated. Distinct relationships are found that suggest that the estimation process being examined was not unbiased with respect to such characteristics.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9916ag.pdf.gz">Download</a> (gzipped PDF, 199KB)</p>
<hr>
<h3><a name="dp9917">99/17: The NZDIS project: An agent-based distributed information systems architecture</a></h3>
<h4>M.K. Purvis, S. Cranefield, G. Bush, D. Carter, B. McKinlay, M. Nowostawski and R. Ward</h4>
<p>This paper describes an architecture for building distributed information systems from existing information resources, based on distributed object and software agent technologies. This architecture is being developed as part of the New Zealand Distributed Information Systems (NZDIS) project.</p>
<p>An agent-based architecture is used: information sources are encapsulated as information agents that accept messages in an agent communication language (the FIPA ACL). A user agent assists users to browse ontologies appropriate to their domain of interest and to construct queries based on terms from one or more ontologies. One or more query processing agents are then responsible for discovering (from a resource broker agent) which data source agents are relevant to the query, decomposing the query into subqueries suitable for those agents (including the translation of the query into the specific ontologies implemented by those agents), executing the subqueries and translating and combining the subquery results into the desired result set.</p>
<p>Novel features of this system include the use of standards from the object-oriented community such as the Common Object Request Broker Architecture (CORBA) (as a communications infrastructure), the Unified Modeling Language (used as an ontology representation language), the Object Data Management Group’s Object Query Language (used for queries) and the Object Management Group’s Meta Object Facility (used as the basis for an ontology repository agent).
Query results need not be returned within an ACL message, but may instead be represented by a CORBA object reference, which may be used to obtain the result set.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9917mp.pdf.gz">Download</a> (gzipped PDF, 171KB)</p>
<hr>
<h3><a name="dp9918">99/18: HTN planning for information processing tasks</a></h3>
<h4>S. Cranefield</h4>
<p>This paper discusses the problem of integrated planning and execution for tasks that involve the consumption, production and alteration of relational information. Unlike information retrieval problems, the information processing domain requires explicit modelling of the changing information state of the domain and how the validity of resources changes as actions are performed. A solution to this problem is presented in the form of a specialised hierarchical task network planning model. A distinction is made between the information processing effects of an action (modelled in terms of constraints relating the domain information before and after the action) and the action’s preconditions and effects, which are expressed in terms of required, produced and invalidated resources. The information flow between tasks is explicitly represented in methods and plans, including any required information-combining operations such as selection and union.</p>
<p>The paper presents the semantics of this model and discusses implementation issues arising from the extension of an existing HTN planner (SHOP) to support this model of planning.</p>
<p><strong>Keywords: </strong>HTN planning, information processing, integrated planning and execution</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9918sc.pdf.gz">Download</a> (gzipped PDF, 147KB)</p>
<hr>
<h3><a name="dp9919">99/19: Automated scoring of practical tests in an introductory course in information technology</a></h3>
<h4>G. Kennedy</h4>
<p>In an introductory course in information technology at the University of Otago the acquisition of practical skills is considered to be a prime objective. An effective way of assessing the achievement of this objective is by means of a ‘practical test’, in which students are required to accomplish simple tasks in a controlled environment. The assessment of such work demands a high level of expertise, is very labour intensive and can suffer from marker inconsistency, particularly with large candidatures.</p>
<p>This paper describes the results of a trial in which the efforts of one thousand students in a practical test of word processing were scored by means of a program written in MediaTalk. Details of the procedure are given, including sampling strategies for the purpose of validation and examples of problems that were encountered.</p>
<p>It was concluded that the approach was useful and, once properly validated, gave rise to considerable savings in time and effort.</p>
<p><strong>Keywords: </strong>computer-aided learning, automated scoring, computer education, test validation</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9919gk.pdf.gz">Download</a> (gzipped PDF, 138KB)</p>
<hr>
<h3><a name="dp9920">99/20: Fuzzy logic for software metric models throughout the development life-cycle</a></h3>
<h4>A. Gray and S. MacDonell</h4>
<p>One problem faced by managers who are using project management models is the elicitation of numerical inputs. Obtaining these with any degree of confidence early in a project is not always feasible.
Related to this difficulty is the risk of precisely specified outputs from models leading to overcommitment. These problems can be seen as the collective failure of software measurements to represent the inherent uncertainties in managers’ knowledge of the development products, resources, and processes. It is proposed that fuzzy logic techniques can help to overcome some of these difficulties by representing the imprecision in inputs and outputs, as well as providing a more expert-knowledge-based approach to model building. The use of fuzzy logic for project management, however, should not be the same throughout the development life cycle. Different levels of available information and desired precision suggest that it can be used differently depending on the current phase, although a single model can be used for consistency.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9920ag.pdf.gz">Download</a> (gzipped PDF, 120KB)</p>
<hr>
<h3><a name="dp9921">99/21: Wayfinding/navigation within a QTVR virtual environment: Preliminary results</a></h3>
<h4>B. Norris, D. Rashid and W. Wong</h4>
<p>This paper reports on an investigation into wayfinding principles, and their effectiveness within a virtual environment. To investigate these principles, a virtual environment of an actual museum was created using QuickTime Virtual Reality. Wayfinding principles used in the real world were identified and used to design the interaction of the virtual environment. The initial findings suggest that real-world navigation principles, such as the use of map and landmark principles, can significantly help navigation within this virtual environment. However, navigation difficulties were discovered through an Activity Theory-based Cognitive Task Analysis.</p>
<p><strong>Keywords: </strong>wayfinding, navigation, QTVR, virtual environments, activity theory</p>
<hr>
<h3><a name="dp9922">99/22: Predictive modelling of plankton dynamics in freshwater lakes using genetic programming</a></h3>
<h4>P. Whigham and F. Recknagel</h4>
<p>Building predictive time series models for freshwater systems is important both for understanding the dynamics of these natural systems and for the development of decision support and management software. This work describes the application of a machine learning technique, namely genetic programming (GP), to the prediction of chlorophyll-a. The system endeavoured to evolve several mathematical time series equations, based on limnological and climate variables, which could predict the dynamics of chlorophyll-a on unseen data. The predictive accuracy of the genetic programming approach was compared with an artificial neural network and a deterministic algal growth model. The GP system evolved some solutions which were improvements over the neural network and showed that the transparent nature of the solutions may allow inferences about underlying processes to be made. This work demonstrates that non-linear processes in natural systems may be successfully modelled through the use of machine learning techniques. Further, it shows that genetic programming may be used as a tool for exploring the driving processes underlying freshwater system dynamics.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9922pw.pdf.gz">Download</a> (gzipped PDF, 234KB)</p>
<hr>
<h3><a name="dp9923">99/23: Modifications to Smith’s method for deriving normalised relations from a functional dependency diagram</a></h3>
<h4>N. Stanger</h4>
<p>Smith’s method (Smith, 1995) is a formal technique for deriving a set of normalised relations from a functional dependency diagram (FDD). Smith’s original rules for deriving these relations are incomplete, as they do not fully address the issue of determining the foreign key links between relations. In addition, one of the rules for deriving foreign keys can produce incorrect results, while the other rule is difficult to automate. This paper describes solutions to these issues.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9923ns.pdf.gz">Download</a> (gzipped PDF, 158KB)</p>
<hr>
<h3><a name="dp9924">99/24: The development of an electronic distance learning course in health informatics</a></h3>
<h4>D. Parry, S. Cockcroft, A. Breton, D. Abernethy and J. Gillies</h4>
<p>Since 1997 the authors have been involved in the development of a distance learning course in health informatics. The course is delivered via CD-ROM and the Internet. During this process we have learned valuable lessons about computer-assisted collaboration and cooperative work. In particular we have developed methods of using the software tools available for communication and education. We believe that electronic distance learning offers a realistic means of providing education in health informatics and other fields to students who, for reasons of geography or work commitments, would not be able to participate in a conventional course.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9924dp.pdf.gz">Download</a> (gzipped PDF, 484KB)</p>
<hr>
<h3><a name="dp9925">99/25: Infiltrating IT into primary care: A case study</a></h3>
<h4>S. Cockcroft, D. Parry, A. Breton, D. Abernethy and J. Gillies</h4>
<p>Web-based approaches to tracking students on placement are receiving much interest in the field of medical education. The work presented here describes a web-based solution to the problem of managing data collection from student encounters with patients whilst on placement. The solution has been developed by postgraduate students under the direction of staff of the health informatics diploma. Specifically, the system allows undergraduate students on placement or in the main hospital to access a web-based front end to a database designed to store the data that they are required to gather. The system also has the important effect of providing a rationale for the provision of electronic communication to the undergraduate students within the context of healthcare delivery. We believe that an additional effect will be to expose practicing healthcare providers to electronic information systems, along with the undergraduates who are trained to use them, and increase the skill base of the practitioners.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9925so.pdf.gz">Download</a> (gzipped PDF, 72KB)</p>
<hr>
<h3><a name="dp9926">99/26: Using rough sets to study expert behaviour in induction of labour</a></h3>
<h4>D. Parry, W.K. Yeap and N. Pattison</h4>
<p>The rate of induction of labour (IOL) is increasing, despite no obvious increase in the incidence of the major indications. However, the rate varies widely between different centres and practitioners, and this does not seem to be due to variations in patient populations. The IOL decision-making process of six clinicians was recorded and examined using hypothetical scenarios presented on a computer. Several rules were identified from a rough sets analysis of the data.
These rules were compared to the actual practice of these clinicians in 1994. Initial tests of these rules show that they may form a suitable set for developing an expert system for the induction of labour.</p>
<p><strong>Keywords: </strong>rough sets, obstetrics, knowledge acquisition</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9926dp.pdf.gz">Download</a> (gzipped PDF, 78KB)</p>
<hr>
<h3><a name="dp9927">99/27: Using the Internet to teach health informatics</a></h3>
<h4>D. Parry, A. Breton, D. Abernethy, S. Cockcroft and J. Gillies</h4>
<p>Since July 1998 we have been teaching an Internet-based distance learning course in health informatics (<a href="http://basil.otago.ac.nz:800">http://basil.otago.ac.nz:800</a>). The development of this course and the experiences we have had running it are described in this paper. The course was delivered using paper materials, a face-to-face workshop, a CD-ROM and Internet communication tools. We currently have about 30 students around New Zealand, a mixture of physicians, nurses and other health staff. Some teaching methods have worked, some haven’t, but in the process we have learned a number of valuable lessons.</p>
<p><strong>Keywords: </strong>distance learning, healthcare, Internet, CD-ROM</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp9927dp.pdf.gz">Download</a> (gzipped PDF, 69KB)</p>
<hr>
<!--#include file="/infosci/footer.htm" -->
<center><small><small>Last Modified <!--#ECHO VAR="LAST_MODIFIED" --></small></small></center>
</BODY>
</HTML>
Website/dp2001-abstracts.htm
0 → 100644
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<HTML>
<HEAD>
<title>Information Science Discussion Papers Series: 2001 abstracts</title>
<META NAME="generator" CONTENT="BBEdit 5.1.1">
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<link rel="Stylesheet" href="DPSstyles.css" type="text/css">
</HEAD>
<BODY>
<h2>Information Science Discussion Papers Series: 2001 Abstracts</h2>
<hr>
<h3><a name="dp2001-01">2001/01: Evolving fuzzy neural networks for on-line knowledge discovery</a></h3>
<h4>N. Kasabov</h4>
<p>Fuzzy neural networks are connectionist systems that facilitate learning from data, reasoning over fuzzy rules, rule insertion, rule extraction, and rule adaptation. The concept of evolving fuzzy neural networks (EFuNNs), with the respective algorithms for learning, aggregation, rule insertion and rule extraction, is further developed here and applied to on-line knowledge discovery for both prediction and classification tasks. EFuNNs operate in an on-line mode and learn incrementally through locally tuned elements. They grow as data arrive, and regularly shrink through pruning of nodes, or through node aggregation. The aggregation procedure is functionally equivalent to knowledge abstraction. The features of EFuNNs are illustrated on two real-world application problems—one from macroeconomics and another from bioinformatics. EFuNNs are suitable for fast learning of on-line incoming data (e.g., financial and economic time series, biological process control), adaptive learning of speech and video data, incremental learning and knowledge discovery from growing databases (e.g., in bioinformatics), on-line tracing of processes over time, and life-long learning. The paper also includes a short review of the most common types of rules used in knowledge-based neural networks for knowledge discovery and data mining.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-01.pdf.gz">Download</a> (gzipped PDF, 252KB)</p>
<hr>
<h3><a name="dp2001-02">2001/02: The Styx agent methodology</a></h3>
<h4>G. Bush, S. Cranefield and M.K. Purvis</h4>
<p>Agent-oriented software engineering is a promising new approach to software engineering that uses the notion of an agent as the primary entity of abstraction. The development of methodologies for agent-oriented software engineering is an area that is currently receiving much attention; several agent-oriented methodologies have been proposed recently, and survey papers are starting to appear. However, the authors feel that there is still much work necessary in this area; current methodologies can be improved upon. This paper presents a new methodology, the Styx Agent Methodology, which guides the development of collaborative agent systems from the analysis phase through to system implementation and maintenance. A distinguishing feature of Styx is that it covers a wider range of software development life-cycle activities than do other recently proposed agent-oriented methodologies. The key areas covered by this methodology are the specification of communication concepts, inter-agent communication and each agent's behaviour activation—but it does not address the development of application-specific parts of a system.
It will be supported by a software tool which is currently in development.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-02.pdf.gz">Download</a> (gzipped PDF, 81KB)</p>
<hr>
<h3><a name="dp2001-03">2001/03: Implementing agent communication languages directly from UML specifications</a></h3>
<h4>S. Cranefield, M.K. Purvis and M. Nowostawski</h4>
<p>This paper proposes the use of the Unified Modelling Language (UML) as a formalism for defining an abstract syntax for Agent Communication Languages (ACLs) and their associated content languages. It describes an approach supporting an automatic mapping from high-level abstract specifications of language structures to specific computer language bindings that can be directly used by an agent platform. Some advantages of this approach are that it provides a framework for specifying and experimenting with alternative agent communication languages and reduces the error-prone manual process of generating compatible bindings and grammars for different syntaxes. A prototype implementation supporting an automatic conversion from an abstract communication language expressed in UML to a native Java API and a Resource Description Framework (RDF) serialisation format is described.</p>
<p><strong>Keywords: </strong>agent communication languages, abstract syntax, UML, XMI, Java binding, marshalling, RDF</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-03.pdf.gz">Download</a> (gzipped PDF, 413KB)</p>
<hr>
<h3><a name="dp2001-04">2001/04: UML and the Semantic Web</a></h3>
<h4>S. Cranefield</h4>
<p>This paper discusses technology to support the use of UML for representing ontologies and domain knowledge in the Semantic Web. Two mappings have been defined and implemented using XSLT to produce Java classes and an RDF schema from an ontology represented as a UML class diagram and encoded using XMI. A Java application can encode domain knowledge as an object diagram realised as a network of instances of the generated classes. Support is provided for marshalling and unmarshalling this object-oriented knowledge to and from an RDF/XML serialisation.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-04.pdf.gz">Download</a> (gzipped PDF, 595KB)</p>
<hr>
<h3><a name="dp2001-05">2001/05: A layered approach for modelling agent conversations</a></h3>
<h4>M. Nowostawski, M.K. Purvis and S. Cranefield</h4>
<p>Although the notion of conversations has been discussed for some time as a way in which to provide an abstract representation of extended agent message exchange, there is still no consensus established concerning how to use these abstractions effectively. This paper describes a layered approach based on coloured Petri Nets that can be used for modelling complex, concurrent conversations among agents in a multi-agent system. The approach can be used both to define simple conversation protocols and to define more complex conversation protocols composed of a number of simpler conversations. With this method it is possible (a) to capture the concurrent characteristics of a conversation, (b) to capture the state of a conversation at runtime, and (c) to reuse conversation structures for the processing of multiple concurrent messages. A prototype implementation of such a system with some examples is described.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-05.pdf.gz">Download</a> (gzipped PDF, 97KB)</p>
<hr>
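<!-- Editor's note: dp2001-05 above models conversations with layered coloured Petri Nets. The sketch below is a plain (uncoloured) Petri net executing a toy request-reply protocol, intended only to illustrate the place/transition/token mechanics; the paper's coloured, layered nets add typed tokens and protocol composition not modelled here.

class PetriNet:
    def __init__(self, places, transitions):
        self.marking = dict(places)      # place name -> token count
        self.transitions = transitions   # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        """Consume one token from each input place, produce one in each output."""
        assert self.enabled(name), name + " is not enabled"
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

net = PetriNet(
    places={"ready": 1, "request_sent": 0, "reply_received": 0},
    transitions={
        "send_request": (["ready"], ["request_sent"]),
        "receive_reply": (["request_sent"], ["reply_received"]),
    },
)
net.fire("send_request")
net.fire("receive_reply")
print(net.marking)   # {'ready': 0, 'request_sent': 0, 'reply_received': 1}

The conversation's state at runtime is simply the current marking, which corresponds to property (b) highlighted in the abstract.
-->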
<h3><a name="dp2001-06">2001/06: A multi-level approach and infrastructure for agent-oriented software development</a></h3>
<h4>M. Nowostawski, G. Bush, M.K. Purvis and S. Cranefield</h4>
<p>An architecture, and the accompanying infrastructural support, for agent-based software development is described which supports the use of agent-oriented ideas at multiple levels of abstraction. At the lowest level are micro-agents, which are robust and efficient implementations of streamlined agents that can be used for many conventional programming tasks. Agents with more sophisticated functionality can be constructed by combining these micro-agents into more complicated agents. Consequently the system supports the consistent use of agent-based ideas throughout the software engineering process, since higher-level agents may be hierarchically refined into more detailed agent implementations. We outline how micro-agents are implemented in Java and how they have been used to construct the Opal framework for the construction of more complex agents that are based on the FIPA specifications.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-06.pdf.gz">Download</a> (gzipped PDF, 134KB)</p>
<hr>
<h3><a name="dp2001-07">2001/07: UML-based ontology modelling for software agents</a></h3>
<h4>S. Cranefield, S. Haustein and M.K. Purvis</h4>
<p>Ontologies play an important role in defining the terminology that agents use in the exchange of knowledge-level messages. As object-oriented modelling, and the Unified Modeling Language (UML) in particular, have built up a huge following in the field of software engineering and are widely supported by robust commercial tools, the use of UML for ontology representation in agent systems would help to hasten the uptake of agent-based systems concepts into industry. This paper examines the potential for UML to be used for ontology modelling, compares it to traditional description logic formalisms and discusses some further possibilities for applying UML-based technologies to agent communication systems.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-07.pdf.gz">Download</a> (gzipped PDF, 81KB)</p>
<hr>
<h3><a name="dp2001-08">2001/08: Generating ontology-specific content languages</a></h3>
<h4>S. Cranefield and M.K. Purvis</h4>
<p>This paper examines a recent trend amongst software agent application and platform developers to desire the ability to send domain-specific objects within inter-agent messages. If this feature is to be supported without departing from the notion that agents communicate in terms of knowledge, it is important that the meaning of such objects be well understood. Using an object-oriented metamodelling approach, the relationships between ontologies and agent communication and content languages in FIPA-style agent systems are examined. It is shown how object structures in messages can be considered as expressions in ontology-specific extensions of standard content languages. It is also argued that ontologies must distinguish between objects with and objects without identity. Traditionally, ontologies are used in agent systems “by reference”. An agent is not required to explicitly reason with the ontology, or even to have an online copy available.
The names of ontologies can simply be used as a contract between agents undertaking a dialogue: they each claim to be using an interpretation of the terms used in the conversation that conforms to the ontology. The content language uses a string-based syntax to represent sentences in the language, which are constructed using constants and function and predicate symbols from the ontology, as well as built-in language symbols such as “and” and “or”.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-08.pdf.gz">Download</a> (gzipped PDF, 111KB)</p>
<hr>
<h3><a name="dp2001-09">2001/09: View-based consistency and its implementation</a></h3>
<h4>Z. Huang, C. Sun, M.K. Purvis and S. Cranefield</h4>
<p>This paper proposes a novel View-based Consistency model for Distributed Shared Memory. A view is a set of ordinary data objects that a processor has the right to access in a data-race-free program. The View-based Consistency model only requires that the data objects of a view are updated before a processor accesses them. Compared with other memory consistency models, the View-based Consistency model can achieve data selection without user annotation and can greatly reduce the false-sharing effect. This model has been implemented based on TreadMarks. Performance results have shown that for all our applications the View-based Consistency model outperforms the Lazy Release Consistency model.</p>
<p><strong>Keywords: </strong>distributed shared memory, sequential consistency, false sharing</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-09.pdf.gz">Download</a> (gzipped PDF, 179KB)</p>
<hr>
<h3><a name="dp2001-10">2001/10: Distributed information access in New Zealand</a></h3>
<h4>H. Nicholls and R. Gibb</h4>
<p>The purpose of this document is to describe the key technology issues for distributed information access in New Zealand. It is written from an industrial and public sector perspective, representing the views and findings of a wide cross-section of institutions in public and private sectors. It is an output of Objective 2 of the Distributed Information Systems project funded under contract UO0621 by the New Zealand Foundation for Research, Science and Technology (FRST).</p>
<p>It complements other project material produced by the academic research team at the University of Otago and its collaborators.</p>
<p>It focuses on requirements and applications, and is intended to provide a real-world, New Zealand-oriented context for the research in distributed information technologies (DIST).</p>
<p>The report represents the culmination of a series of workshops, industrial consultations, a questionnaire, and the experiences of the authors’ institutions during the project, and therefore it supplements any previously produced material.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-10.pdf.gz">Download</a> (gzipped PDF, 1MB)</p>
<hr>
<h3><a name="dp2001-11">2001/11: Naturalistic decision making in emergency ambulance command and control</a></h3>
<h4>W. Wong and A. Blandford</h4>
<p>This paper reports on a field study into the nature of decision making in the command and control of emergency ambulances at the London Ambulance Service (LAS).
This paper will describe how real-time decisions are made by emergency medical dispatchers and the decision strategies they invoke as they assess the situation, plan and co-ordinate the dispatch of emergency ambulances.</p>
<p>A cognitive task analysis approach known as the Critical Decision Method (Hoffman et al., 1998; Klein et al., 1989) was used in the study. The study showed that decision making in emergency ambulance command and control involves four major processes—assessment of the situation, assessment of resources, planning, and co-ordination and control. These four processes function within an awareness of goings-on in and around the sectors that the dispatchers operate in. This awareness is referred to as situation awareness and is being reported elsewhere (Wong & Blandford, submitted). The decision-making process resembles the decision making described by naturalistic decision making models (see (Zsambok & Klein, 1997) for an extensive discussion on the topic) and is an extension of the Integrated Decision Model (Wong, 1999). The study also suggested that a lot of effort was directed at understanding and assessing the situation and at maintaining a constant awareness of the situation. These observations have significant implications for the design of information systems for command and control purposes. These implications will be discussed separately in another paper.</p>
<p>The paper will first introduce the domain of EMD at the LAS, then explain how the Critical Decision Method was used in the data collection and in the data analysis. It will then describe how decisions are made, particularly during major incidents, and then discuss the implications of those findings for the design of command and control systems.</p>
<p><a href="http://divcom.otago.ac.nz/infosci/publctns/complete/papers/dp2001-11.pdf.gz">Download</a> (gzipped PDF, 220KB)</p>
<hr>
<!--#include file="/infosci/footer.htm" -->
<center><small><small>Last Modified <!--#ECHO VAR="LAST_MODIFIED" --></small></small></center>
</BODY>
</HTML>