- Removed reference to obsolete CSS style sheet.
- Normalised tag case.
1 parent 6f5b994 commit 384b6e413531607e57b923c140381ab9dfdb3921
nstanger authored on 1 May 2009
Showing 18 changed files
Website/dp1993-abstracts-contents.htm
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<div class="sectionTitle">Information Science Discussion Papers Series: 1993 Abstracts</div>
 
<hr>
 
<h3><a name="dp9301">93/1: A data complexity formula for deriving time-to-build estimates from non-relational to relational databases</a></h3>
<a name="dp9301"></a><h3>93/1: A data complexity formula for deriving time-to-build estimates from non-relational to relational databases</h3>
<h4>P.J. Sallis</h4>
 
<p>Despite the many qualitative elements of software time-to-build estimating, some observable features can be quantified, even if the resulting set of variables observed is arbitrary. Such is the case when estimating the expected duration for database re-engineering. If we assume that for any extant database, an entity-relationship model (ERM) can be produced from which a new normalised schema is generated, then our estimating task needs to quantify both the complexity of the ensuing ERM and also the data modelling knowledge of the &#8216;re-engineer&#8217;. Whilst there may be additional variables to be considered, a set of primary elements required for estimating the duration of the task has been identified. The formula proposed in this paper is arbitrary, but it is intended as an instrument for measuring ER model complexity, such that time-to-build estimates can be made for the task of re-engineering extant non-relational databases into relational form.</p>
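<p>The paper&#8217;s own formula is not reproduced in this abstract. Purely as a rough illustration of the kind of instrument described, a complexity-weighted count over an ER model might be combined with an assumed productivity factor for the re-engineer; all weights and figures below are hypothetical.</p>

<pre>
# Illustrative sketch only -- not the formula proposed in the paper.
# Weight relationship constructs more heavily than plain entities, then
# scale by an assumed productivity factor for the 're-engineer'.
def er_complexity(entities, one_to_many, many_to_many, attributes):
    return entities + 2 * one_to_many + 4 * many_to_many + 0.1 * attributes

def hours_estimate(complexity, experience_factor=1.0):
    return 3.0 * complexity * experience_factor   # hypothetical hours per unit

c = er_complexity(entities=14, one_to_many=10, many_to_many=3, attributes=120)
print(round(hours_estimate(c, experience_factor=1.2)), "hours (illustrative)")
</pre>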
 
 
 
<hr>
 
<h3><a name="dp9302">93/2: Recomputing historical barometric heighting</a></h3>
<a name="dp9302"></a><h3>93/2: Recomputing historical barometric heighting</h3>
<h4>G.L. Benwell</h4>
 
<p>This paper discusses the method of determining heights of mountains during the original geodetic survey of Victoria. From 1840 to 1875, more particularly the 1860s, geodetic surveyors were charged with the responsibility of mapping the colony. The subject of this paper is their efforts to determine the elevations by barometric heighting. A brief introduction to other methods is given while particular attention is paid to the determination of the height of Mount Sabine in the Otway Ranges, Victoria, by Surveyor Irwin in 1865. Attempts are made to recompute his original observations.</p>
 
 
 
<hr>
 
<h3><a name="dp9303">93/3: The derivation of thematic map layers from entity-relationship data models</a></h3>
<a name="dp9303"></a><h3>93/3: The derivation of thematic map layers from entity-relationship data models</h3>
<h4>P.G. Firns</h4>
 
<p>Semantic data models comprise formally defined abstractions for representing real world relationships and aspects of the structure of real world phenomena so as to aid database design. While previous research in spatial database design has shown that semantic data models are amenable to explicitly representing some spatial concepts, this paper shows that semantic data models may usefully be applied to the design of spatial databases even without explicitly representing spatial concepts. Specifically, an entity-relationship model comprising only &#8220;is-associated-with&#8221; relationships is used as the basis from which to define thematic layers for a layer based spatial database.</p>
 
 
 
<hr>
 
<h3><a name="dp9304">93/4: Software development, CASE tools and 4GLs&#8212;A survey of New Zealand usage (part 1)</a></h3>
<a name="dp9304"></a><h3>93/4: Software development, CASE tools and 4GLs&#8212;A survey of New Zealand usage (part 1)</h3>
<h4>S.G. MacDonell</h4>
 
<p>This paper reports the results of a recent national survey which considered the use of CASE tools and 4GLs in commercial software development. Responses from just over 750 organisations show a high degree of product penetration, along with extensive use of package solutions. Use of 3GLs in general, and of COBOL in particular, is still relatively widespread, however. In terms of systems analysis and design techniques under a CASE/4GL environment, screen and report definition is the most preferred technique, although both dataflow analysis and data modelling also feature strongly.</p>
 
 
 
<hr>
 
<h3><a name="dp9305">93/5: Cadastral &#8220;reform&#8221;&#8212;At what cost to developing countries?</a></h3>
<a name="dp9305"></a><h3>93/5: Cadastral &#8220;reform&#8221;&#8212;At what cost to developing countries?</h3>
<h4>I.C. Ezigbalike and G.L. Benwell</h4>
 
<p>This paper argues that the introduction of western cadastral concepts into communities with different land tenure systems has involved &#8220;cultural costs.&#8221; The paper discusses these cultural costs and concludes that cadastral reformers need to re-design their product to fit the communities.</p>
 
Website/dp1994-abstracts-contents.htm
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<div class="sectionTitle">Information Science Discussion Papers Series: 1994 Abstracts</div>
 
<hr>
 
<h3><a name="dp9401">94/1: A comparison of authorship style in the document corpus of the Epistles of St. Ignatius of Antioch</a></h3>
<a name="dp9401"></a><h3>94/1: A comparison of authorship style in the document corpus of the Epistles of St. Ignatius of Antioch</h3>
<h4>P.J. Sallis</h4>
 
<p>This paper is the result of some research in computational stylistics; in particular, the analysis of a document corpus that has attracted the attention of scholars from several disciplines for hundreds of years. This corpus, the Epistles of Saint Ignatius of Antioch, was originally written in Greek but this analysis is of a single translation in English. The analysis has been undertaken using a conventional approach in computational stylistics but has employed a number of contemporary software packages, such as a grammar checker, normally used for text and document creation.</p>
 
 
 
<hr>
 
<h3><a name="dp9402">94/2: Management perceptions of IS research and development issues</a></h3>
<a name="dp9402"></a><h3>94/2: Management perceptions of IS research and development issues</h3>
<h4>P.J. Sallis</h4>
 
<p>Whilst change is an inherent characteristic of the IT industry, the difficulty of frequent and timely change in tertiary curricula is a constraint on the ability of universities to adequately meet the requirements of knowledge and expertise expected of new graduates. In this paper, some recently published research concerning the top ten issues for managers of information technology in the USA, Europe and Australia is evaluated in terms of its impact on IS teaching and research. The paper concludes that the top ten issues perceived by IS managers were probably in large part due to change resulting not only from advances in technology but also from past failures or inadequacies in the process of delivering high quality information system products to corporate consumers. The need for business and education to be aware of the motivations for change, and of the constraints attendant on it in both environments, is emphasised if harmonious progress is to prevail in the production and utilisation of new IS graduates.</p>
 
 
 
<hr>
 
<h3><a name="dp9403">94/3: Assessing the graphical and algorithmic structure of hierarchical coloured Petri net models</a></h3>
<a name="dp9403"></a><h3>94/3: Assessing the graphical and algorithmic structure of hierarchical coloured Petri net models</h3>
<h4>G.L. Benwell and S.G. MacDonell</h4>
 
<p>Petri nets, as a modelling formalism, are utilised for the analysis of processes, whether for explicit understanding, database design or business process re-engineering. The formalism, however, can be represented on a virtual continuum from highly graphical to largely algorithmic. The use and understanding of the formalism will, in part, therefore depend on the resultant complexity and power of the representation and, on the graphical or algorithmic preference of the user. This paper develops a metric which will indicate the graphical or algorithmic tendency of hierarchical coloured Petri nets.</p>
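<p>The metric itself is not given in this abstract. As a rough sketch of the idea (not the authors&#8217; definition), a graphical-versus-algorithmic tendency could be expressed as the share of a net&#8217;s content carried by graphical elements relative to its textual inscriptions.</p>

<pre>
# Rough sketch of a graphical-vs-algorithmic tendency score for a coloured
# Petri net; NOT the metric developed in the paper. A value near 1 means the
# model is expressed mostly graphically, near 0 mostly via inscriptions.
def tendency(places, transitions, arcs, inscription_tokens):
    graphical = places + transitions + arcs
    return graphical / (graphical + inscription_tokens)

print(round(tendency(places=12, transitions=9, arcs=30, inscription_tokens=85), 2))
</pre>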
 
 
 
<hr>
 
<h3><a name="dp9404">94/4: Phoneme recognition with hierarchical self organised neural networks and fuzzy systems</a></h3>
<a name="dp9404"></a><h3>94/4: Phoneme recognition with hierarchical self organised neural networks and fuzzy systems</h3>
<h4>N.K. Kasabov and E. Peev</h4>
 
<p>Neural networks (NN) have been intensively used for speech processing. This paper describes a series of experiments on using a single Kohonen Self Organizing Map (KSOM), hierarchically organised KSOMs, a backpropagation-type neural network with fuzzy inputs and outputs, and a fuzzy system, for continuous speech recognition. Experiments have also been performed with different non-linear transformations applied to the signal before using a KSOM. The results obtained by using the different techniques on a case study of phonemes in the Bulgarian language are compared.</p>
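<p>As a minimal sketch of the self-organising map component described above (the data and settings below are assumed, not the authors&#8217; configuration), a one-dimensional Kohonen map can be trained by pulling the best-matching unit and its neighbours towards each input vector.</p>

<pre>
# Minimal Kohonen self-organising map sketch (hypothetical data and settings,
# not the experiments reported in the paper).
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((20, 2))                 # 20 map units, 2 acoustic features

def train(data, weights, epochs=50, lr=0.3, radius=3.0):
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.abs(np.arange(len(weights)) - bmu)
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))   # neighbourhood function
            weights += lr * h[:, None] * (x - weights)
        lr *= 0.95
        radius *= 0.95                                     # shrink over time
    return weights

weights = train(rng.random((100, 2)), weights)
</pre>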
 
 
 
<hr>
 
<h3><a name="dp9405">94/5: A model for exploiting associative matching in AI production systems</a></h3>
<a name="dp9405"></a><h3>94/5: A model for exploiting associative matching in AI production systems</h3>
<h4>N.K. Kasabov, S.H. Lavington, S. Lin and C. Wang</h4>
 
<p>A Content-Addressable Model of Production Systems, &#8216;CAMPUS&#8217;, has been developed. The main idea is to achieve high execution performance in production systems by exploiting the potential fine-grain data parallelism. The facts and the rules of a production system are uniformly represented as CAM tables. CAMPUS differs from other CAM-inspired models in that it is based on a non-state-saving and &#8216;lazy&#8217; matching algorithm. The production system execution cycle is represented by a small number of associative search operations over the CAM tables, the number of which does not depend, or depends only slightly, on the number of rules and facts in the production system. The model makes efficient implementation of large production systems on fast CAM possible. An experimental CAMPUS realisation of the production language CLIPS is also reported. The production system execution time for a large number of processed facts is about 1,000 times less than the corresponding CLIPS execution time on a standard computer architecture.</p>
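<p>As an illustration of the uniform fact/rule representation described (the CAM hardware and the lazy matching algorithm are not reproduced here; set containment merely stands in for the associative search), a match-and-fire cycle might look like the following.</p>

<pre>
# Toy illustration of content-addressable matching; not the CAMPUS model itself.
# Facts and rule conditions are uniform tuples, so a rule fires when its
# condition set is contained in working memory (the associative lookup is
# emulated here with Python set containment).
facts = {("temperature", "high"), ("pressure", "low")}     # hypothetical facts

rules = [
    {"if": {("temperature", "high"), ("pressure", "low")}, "then": ("valve", "open")},
    {"if": {("temperature", "low")}, "then": ("heater", "on")},
]

def fire_once(facts, rules):
    for rule in rules:
        if rule["if"] <= facts:        # all condition tuples present in memory
            facts.add(rule["then"])
    return facts

print(fire_once(set(facts), rules))
</pre>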
 
 
 
<hr>
 
<h3><a name="dp9406">94/6: A system development methodology for geomatics as derived from informatics</a></h3>
<a name="dp9406"></a><h3>94/6: A system development methodology for geomatics as derived from informatics</h3>
<h4>G.L. Benwell</h4>
 
<p>This paper describes the creation of a system development methodology suitable for spatial information systems. The concept is substantiated by the fact that spatial systems are similar to information systems in general; the subtle difference is that spatial systems are not yet readily supported by large digital databases. This fact has diverted attention away from system development to data collection. A spatial system development methodology is derived, based on a historical review of information systems methodologies coupled with a data collection and integration methodology for spatially referenced digital data.</p>
 
 
 
<hr>
 
<h3><a name="dp9407">94/7: Software metrics in New Zealand: Recent trends</a></h3>
<a name="dp9407"></a><h3>94/7: Software metrics in New Zealand: Recent trends</h3>
<h4>M.K. Purvis, S.G. MacDonell and J. Westland</h4>
 
<p>Almost by definition, any engineering discipline has quantitative measurement at its foundation. In adopting an engineering approach to software development, the establishment and use of software metrics has therefore seen extensive discussion. The degree to which metrics are actually used, however, particularly in New Zealand, is unclear. Four surveys, conducted over the last eight years, are therefore reviewed in this paper, with a view to determining trends in the use of metrics. According to the findings presented, it would appear that no more than one third of organisations involved in software development utilise software metrics.</p>
 
 
<hr>
 
<h3><a name="dp9408">94/8: A comparative review of functional complexity assessment methods for effort estimation</a></h3>
<a name="dp9408"></a><h3>94/8: A comparative review of functional complexity assessment methods for effort estimation</h3>
<h4>S.G. MacDonell</h4>
 
<p>Budgetary constraints are placing increasing pressure on project managers to effectively estimate development effort requirements at the earliest opportunity. With the rising impact of automation on commercial software development the attention of researchers developing effort estimation models has recently been focused on functional representations of systems, in response to the assertion that development effort is a function of specification content. A number of such models exist&#8212;several, however, have received almost no research or industry attention. Project managers wishing to implement a functional assessment and estimation programme are therefore unlikely to be aware of the various methods or how they compare. This paper therefore attempts to provide this information, as well as forming a basis for the development and improvement of new methods.</p>
 
 
 
<hr>
 
<h3><a name="dp9409">94/9: Viruses: What can we really do?</a></h3>
<a name="dp9409"></a><h3>94/9: Viruses: What can we really do?</h3>
<h4>H.B. Wolfe</h4>
 
<p>It is virtually impossible to know everything about any facet of computing, as it changes on almost a daily basis. Having said that, I believe it is worth sharing some of the knowledge that I have gained as a result of 5 years of study and experimentation with viruses and virus defence strategies, as well as having personally tested nearly 50 anti-virus products.</p>
 
 
<hr>
 
<h3><a name="dp9410">94/10: Stochastic models of the behaviour of scrubweeds in Southland and Otago</a></h3>
<a name="dp9410"></a><h3>94/10: Stochastic models of the behaviour of scrubweeds in Southland and Otago</h3>
<h4>L. Gonzalez and G.L. Benwell</h4>
 
<p>This paper investigates statistical models for the understanding of the behaviour of scrubweeds in Southland and Otago. Data pertaining to eight scrubweed species have been collected along four transects together with the environmental factors, altitude, slope, aspect and land use classification. Each transect is approximately 80km by 2km, with data being held for every 1ha so that there are approximately 16,000 pixels for each transect. It is important to understand the relationship between the species so that interpolation and extrapolation can be performed. The initial survey, completed in 1992, will be repeated in 1995 and 1998. These surveys will then form the baseline for an understanding of the spread or contraction of the species in farmlands of the South Island. This in turn will assist policy makers in formulating management plans which relate eradication to farmland productivity. This paper deals in detail with one of the transects&#8212;Balclutha to Katiki Point.</p>
 
 
 
<hr>
 
<h3><a name="dp9411">94/11: Towards using hybrid connectionist fuzzy production systems for speech recognition</a></h3>
<a name="dp9411"></a><h3>94/11: Towards using hybrid connectionist fuzzy production systems for speech recognition</h3>
<h4>N.K. Kasabov</h4>
 
<p>The paper presents a novel approach towards solving different speech recognition tasks, i.e. phoneme recognition, ambiguous word recognition, continuous speech-to-text conversion, and learning fuzzy rules for language processing. The model uses a standard connectionist system for initial recognition and a connectionist rule-based system for higher-level recognition. The higher level is realised as a Connectionist Fuzzy Production System (CFPS), which makes it possible to attach different parameters to the higher-level production rules, such as degrees of importance, dynamic sensitivity factors, noise tolerance factors, certainty degrees and reactiveness factors. It provides different approximate chain reasoning techniques. The CFPS helps to solve many of the ambiguities in speech recognition tasks. Experiments on phoneme recognition in the English language are reported. This approach facilitates a connectionist implementation of the whole process of speech recognition (at a low level and at a higher logical level), which used to be performed in hybrid environments. It also facilitates the process of learning fuzzy rules for language processing. All the language processing tasks and subtasks are realised in a homogeneous connectionist environment. This brings all the benefits of connectionist systems to practical applications in the speech recognition area.</p>
 
 
<hr>
 
<h3><a name="dp9412">94/12: Connectionist fuzzy production systems</a></h3>
<a name="dp9412"></a><h3>94/12: Connectionist fuzzy production systems</h3>
<h4>N.K. Kasabov</h4>
 
<p>A new type of generalised fuzzy rule and generalised fuzzy production system and a corresponding reasoning method are developed. They are implemented in a connectionist architecture and are called connectionist fuzzy production systems. They combine all the features of symbolic AI production systems, fuzzy production systems and connectionist systems. A connectionist method for learning generalised fuzzy productions from raw data is also presented. The main conclusion reached is that connectionist fuzzy production systems are very powerful as fuzzy reasoning machines and they may well inspire new methods of plausible representation of inexact knowledge and new inference techniques for approximate reasoning.</p>
 
 
<hr>
 
<h3><a name="dp9413">94/13: Information display design: A survey of visual display formats</a></h3>
<a name="dp9413"></a><h3>94/13: Information display design: A survey of visual display formats</h3>
<h4>W.B.L. Wong</h4>
 
<p>This paper reviews the research and practice of how computer-based output information has been presented in nine different information display formats, and the suitability of their use in environments ranging from static, reference-type situations to complex, dynamic situations. The review, while not generating conclusive results, suggests that displays are more than a platform on which to place information. Instead, care should be taken to organise, lay out, and pre-process the information so that it enhances the communication between computer and human. The information on the screen should also be designed to augment human cognitive limitations. For instance, human weakness in integrating information across time and multiple sources could be assisted by display formats that integrate the information in the display rather than having the user attempt to integrate that information mentally. If this is the desired outcome, information designers must start to consider performing analyses that help them understand the demands on the human information processing system and hence how information can be presented to augment this weakness. This would have to be further investigated in subsequent research.</p>
 
 
 
<hr>
 
<h3><a name="dp9414">94/14: A conceptual data modelling framework incorporating the notion of a thematic layer</a></h3>
<a name="dp9414"></a><h3>94/14: A conceptual data modelling framework incorporating the notion of a thematic layer</h3>
<h4>P.G. Firns</h4>
 
<p>Semantic data models comprise abstractions used, in conceptual database design, to represent real world relationships and aspects of the structure of real world phenomena. Such abstractions have previously been applied to the modelling of spatial concepts, but in the process their semantics are implicitly extended. This paper explicitly extends the semantics of the entity relationship model, defining two specific types of entity set to enable the notion of a thematic layer to be incorporated in entity relationship schemas. It places this in the context of a conceptual modelling framework to be used in the design of spatially referenced databases.</p>
 
 
 
<hr>
 
<h3><a name="dp9415">94/15: Recording, placement and presentation of M<span style="text-decoration:overline">a</span>ori place names in a spatial information system</a></h3>
<a name="dp9415">94/15: Recording, placement and presentation of M<span style="text-decoration:overline"></a><h3>a</span>ori place names in a spatial information system</h3>
<h4>I.J. Cranwell and G.L. Benwell</h4>
 
<p>This paper deals with matters relating to toponymy. The concept of indigenous place names is discussed. A view is presented, based on empirical evidence, that current processes for the official recording of names are detrimental to a fair and reasonable representation of indigenous names. Historical events in Aotearoa are examined as well as the existing place name recording process. Research is outlined as to what can be done to examine and redress this situation. A proposition is tendered whereby names can be recorded via a process which is people based and not government based. Research matters surrounding this concept are discussed.</p>
 
 
 
<hr>
 
<h3><a name="dp9416">94/16: Towards the deductive synthesis of nonlinear plans</a></h3>
<a name="dp9416"></a><h3>94/16: Towards the deductive synthesis of nonlinear plans</h3>
<h4>S.J.S. Cranefield</h4>
 
<p>Recently there has been a resurgence of interest in the deductive approach to planning. There are many benefits of this approach but one shortcoming is the difficulty of performing nonlinear planning in this framework. This paper argues that these problems are caused by a flaw in the partial order approach&#8212;the lack of structure in such a representation&#8212;and proposes an alternative, dynamic programming style approach based on a more structured representation of plans.</p>
 
 
<hr>
 
<h3><a name="dp9417">94/17: Hybrid fuzzy connectionist rule-based systems and the role of fuzzy rules extraction</a></h3>
<a name="dp9417"></a><h3>94/17: Hybrid fuzzy connectionist rule-based systems and the role of fuzzy rules extraction</h3>
<h4>N.K. Kasabov</h4>
 
<p>The paper presents the major principles of building complex hybrid systems for knowledge engineering, where at the centre of the design process is the task of learning (extracting) fuzzy rules from data. An experimental environment, FuzzyCOPE, which facilitates this process, is described. It consists of a fuzzy rules extraction module, a neural networks module, a fuzzy inference module and a production rules module. Such an environment makes possible the use of the three paradigms, i.e. fuzzy rules, neural networks and symbolic production rules, in one system. Automatic extraction of rules from data and selection of the most appropriate reasoning mechanism are also provided. Using FuzzyCOPE for building hybrid systems for decision making and speech recognition is discussed and illustrated.</p>
 
 
<hr>
 
<h3><a name="dp9418">94/18: Integrating neural networks and fuzzy systems for speech recognition</a></h3>
<a name="dp9418"></a><h3>94/18: Integrating neural networks and fuzzy systems for speech recognition</h3>
<h4>N.K. Kasabov, C.I. Watson, S. Sinclair and R. Kilgour</h4>
 
<p>The paper presents a framework for an integrated environment for speech recognition and a methodology for using such an environment. The integrated environment includes a signal processing unit, neural networks and fuzzy rule-based systems. Neural networks are used for &#8220;blind&#8221; pattern recognition of the phonemic labels of the segments of the speech. Fuzzy rules are used for reducing the ambiguities of the correctly recognised phonemic labels, for final recognition of the phonemes, and for language understanding. The fuzzy system part is organised as a multi-level, hierarchical structure. As an illustration, a model for phoneme recognition of New Zealand English is developed which exploits the advantages of the integrated environment. The model is illustrated on a small set of phonemes.</p>
 
 
<hr>
 
<h3><a name="dp9419">94/19: The visual display test: A test to assess the usefulness of a visual speech aid</a></h3>
<a name="dp9419"></a><h3>94/19: The visual display test: A test to assess the usefulness of a visual speech aid</h3>
<h4>C.I. Watson</h4>
 
<p>The facility to be able to display features of speech in a visual speech aid does not by itself guarantee that the aid will be effective in speech therapy. An effective visual speech aid must provide a visual representation of an utterance from which a judgement on the &#8220;goodness&#8221; of the utterance can be made. Two things are required for an aid to be effective. Firstly, the clusters of acceptable utterances must be separate from the unacceptable utterances in display space. Secondly, the acoustic features which distinguish acceptable utterances from unacceptable utterances must be evident in the displays of the speech aid. A two part test, called the Visual Display Test (VDT), has been developed to assess a visual speech aid&#8217;s capacity to fulfil these requirements.</p>
 
Website/dp1995-abstracts-contents.htm
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<div class="sectionTitle">Information Science Discussion Papers Series: 1995 Abstracts</div>
 
<hr>
 
<h3><a name="dp9501">95/1: Semantic data modelling for hypermedia database applications</a></h3>
<a name="dp9501"></a><h3>95/1: Semantic data modelling for hypermedia database applications</h3>
<h4>R.J. Pegler and P.G. Firns</h4>
 
<p>This paper develops an approach to data modelling for the design of hypermedia databases. First, the use of data modelling for the design of hypermedia database systems is investigated. A specific example, that of a car parts database, is used as a means of illustrating a generic problem, namely the difficulty associated with interrogating a large database when the exact data element being sought is unknown. The use of hypermedia as a basis for data retrieval in such situations is then discussed. The data contained within hypermedia database systems is typically unstructured, which has led to systems being developed using ad hoc design approaches with little regard for formal data modelling techniques. Hence, the main contribution of the paper is the illustration of a hybrid data modelling approach of suitable semantic richness to capture the complexities of hypermedia databases.</p>
 
 
<hr>
 
<h3><a name="dp9502">95/2: Pursuing a national policy for information technology in school education: A New Zealand odyssey</a></h3>
<a name="dp9502"></a><h3>95/2: Pursuing a national policy for information technology in school education: A New Zealand odyssey</h3>
<h4>P.J. Sallis and T. McMahon</h4>
 
<p>In September 1994, the government of New Zealand published a document entitled <EM>Education for the 21st Century</EM>. The document sets out targets and challenges for the education system in New Zealand to meet by 2001. One of the targets, and the associated fiscal challenge, is to improve the access of New Zealand students to information technology, so that by 2001 there is at least one computer for every five students at all levels of school education.</p>
 
 
 
<hr>
 
<h3><a name="dp9503">95/3: Information portrayal for intentional processes: A framework for analysis</a></h3>
<a name="dp9503"></a><h3>95/3: Information portrayal for intentional processes: A framework for analysis</h3>
<h4>W.B.L. Wong, P.J. Sallis and D.P. O&#8217;Hare</h4>
 
<p>It is increasingly recognised that the manner in which information required by a decision maker is portrayed is as important as providing appropriate information. In dynamic intentional process environments such as emergency dispatch control, where the problems are non-trivial and time is tightly constrained, it is important to portray information that is used together, close to one another or appropriately integrated. This is important in speeding up the decision maker&#8217;s interpretation of the information and assessment of the state of the situation.</p>
 
 
 
<hr>
 
<h3><a name="dp9504">95/4: An object repository model for the storage of spatial metadata</a></h3>
<a name="dp9504"></a><h3>95/4: An object repository model for the storage of spatial metadata</h3>
<h4>S.K.S. Cockcroft</h4>
 
<p>The design of spatial information systems has traditionally been carried out independently of mainstream database developments. It is contended that the adoption of mainstream database design techniques is important to progress in the spatial information systems development field. An accepted approach to the development of information systems is through an integrated development environment with a design repository at its core. This paper proposes a skeleton model for the design of a repository to store spatial metadata. An object oriented modelling approach is adopted in preference to an entity relationship approach because of its ability to model functional and dynamic aspects of the repository.</p>
 
 
 
<hr>
 
<h3><a name="dp9505">95/5: Establishing relationships between specification size and software process effort in CASE environments</a></h3>
<a name="dp9505"></a><h3>95/5: Establishing relationships between specification size and software process effort in CASE environments</h3>
<h4>S.G. MacDonell</h4>
 
<p>Advances in software process technology have rendered many existing methods of size assessment and effort estimation inapplicable. The use of automation in the software process, however, provides an opportunity for the development of more appropriate software size-based effort estimation models. A specification-based size assessment method has therefore been developed and tested in relation to process effort on a preliminary set of systems. The results of the analysis confirm the assertion that, within the automated environment class, specification size indicators (that may be automatically and objectively derived) are strongly related to process effort requirements.</p>
 
 
 
<hr>
 
<h3><a name="dp9506">95/6: Information portrayal for decision support in dynamic intentional process environments</a></h3>
<a name="dp9506"></a><h3>95/6: Information portrayal for decision support in dynamic intentional process environments</h3>
<h4>W.B.L. Wong, P.J. Sallis and D.P. O&#8217;Hare</h4>
 
<p>This paper reports on preliminary findings of a cognitive task analysis conducted at an ambulance despatch control centre. The intense and dynamic nature of the decision making environment is first described, and the decision process modelled in an attempt to identify decision strategies used by the Communications Officers. Some information portrayal requirements stemming from one of the decision processes are then discussed, and these requirements are then translated into a proposed display solution.</p>
 
 
 
<hr>
 
<h3><a name="dp9507">95/7: Communicating agents: An emerging approach for distributed heterogeneous systems</a></h3>
<a name="dp9507"></a><h3>95/7: Communicating agents: An emerging approach for distributed heterogeneous systems</h3>
<h4>S.J.S. Cranefield, P. Gorman and M.K. Purvis</h4>
 
<p>The concept of an intelligent software agent has emerged from its origins in artificial intelligence laboratories to become an important basis for the development of distributed systems in the mainstream computer science community. This paper provides a review of some of the ideas behind the intelligent agent approach and addresses the question &#8220;what is an agent?&#8221; Some principal application areas for agent-based computing are outlined and related research programmes at the University of Otago are discussed.</p>
 
<p><a href="papers/dp9507sc.pdf.gz">Download</a> (gzipped PDF, 143KB)</p>
 
<hr>
 
<h3><a name="dp9508">95/8: Causal agent modelling: A unifying paradigm for systems and organisations</a></h3>
<a name="dp9508"></a><h3>95/8: Causal agent modelling: A unifying paradigm for systems and organisations</h3>
<h4>M.K. Purvis and S.J.S. Cranefield</h4>
 
<p>With the increasing size, complexity and interconnectedness of systems and organisations, there is a growing need for high level modelling approaches that span the range of application domains. Causal agent modelling offers an intuitive and powerful approach for the development of dynamic models for any application area. This paper outlines some of the basic ideas behind the nature of causal agent models, why they are fundamental to the modelling enterprise, and compares developments in this area to those in the related field of coordination theory. It also describes some research activities using causal agent models at the University of Otago.</p>
 
<p><a href="papers/dp9508mp.pdf.gz">Download</a> (gzipped PDF, 132KB)</p>
 
<hr>
 
<h3><a name="dp9509">95/9: Case-based reasoning and spatial analysis</a></h3>
<a name="dp9509"></a><h3>95/9: Case-based reasoning and spatial analysis</h3>
<h4>A. Holt and G.L. Benwell</h4>
 
<p>This paper brings emphasis to the plausible concept of integrating case-based reasoning with spatial information systems, and to the adaptation of artificial intelligence techniques to improve the analytical strength of spatial information systems. This adaptation of artificial intelligence techniques may include examples of expert systems, fuzzy logic, hybrid connectionist systems and neural networks, all integrated with spatial information systems. The unique process of case-based reasoning is described. The research into the possible integration of case-based reasoning and spatial information systems is outlined. The benefits of a case-based reasoning spatial information hybrid system are discussed.</p>
 
 
<hr>
 
<h3><a name="dp9510">95/10: Modelling and simulation of the New Zealand Resource Management Act</a></h3>
<a name="dp9510"></a><h3>95/10: Modelling and simulation of the New Zealand Resource Management Act</h3>
<h4>M.K. Purvis, M.A. Purvis and G.L. Benwell</h4>
 
<p>A single piece of legislation, the Resource Management Act, governs the management of environmental resources in New Zealand. It establishes procedural requirements and time constraints for all decision-making activities related to governmental environmental management. The present paper describes a model, based on coloured Petri nets, that is under development to facilitate understanding of the Act and to examine performance characteristics of legal processes defined in the Act.</p>
 
 
<hr>
 
<h3><a name="dp9511">95/11: Fuzzy concepts, land and cultural confidentiality</a></h3>
<a name="dp9511"></a><h3>95/11: Fuzzy concepts, land and cultural confidentiality</h3>
<h4>B.A. Ballantyne, G.L. Benwell and N.C. Sutherland</h4>
 
<p>Fuzzy concepts might have potential for protecting and preserving land which has special cultural or spiritual significance for indigenous peoples, because they might support any tangata whenua (indigenous peoples) desires for secrecy and confidentiality. These issues are examined in terms of New Zealand and from the technical perspective of Information Science. The various meanings of <EM>fuzzy</EM> are discussed. Some pertinent questions are: Is a fuzzy concept a useful tool to apply? Do the tangata whenua wish to make use of this tool?</p>
 
 
<hr>
 
<h3><a name="dp9512">95/12: A case study in environmental decision making</a></h3>
<a name="dp9512"></a><h3>95/12: A case study in environmental decision making</h3>
<h4>S.A. Mann</h4>
 
<p>Resource management in New Zealand is fraught with debate and controversy. Regional Councils often seem stuck in the middle of two opposing groups, the farmers and the environmentalists. There are areas, however, where the Regional Councils could be seen to be hindering progress towards resolution of problems. By avoiding policy formulation on certain issues, e.g. vegetation burning, Councils are creating difficulties for their own staff, landholders and environmental groups. This paper examines one debate that could be greatly simplified by a few policy direction decisions.</p>
 
 
<hr>
 
<h3><a name="dp9513">95/13: Local government GIS in New Zealand since 1989</a></h3>
<a name="dp9513"></a><h3>95/13: Local government GIS in New Zealand since 1989</h3>
<h4>A.J. Marr and G.L. Benwell</h4>
 
<p>This paper draws together existing data with recent survey results and compares the development of local government GIS with the evolution of Information Systems (IS). These comparisons are made using the philosophy that organisational GIS can be modelled. Using this model, various stages of GIS maturity are evaluated.</p>
 
 
 
<hr>
 
<h3><a name="dp9514">95/14: Integrating modelling and simulation into a problem solving paradigm for improved regional and environmental decision making</a></h3>
<a name="dp9514"></a><h3>95/14: Integrating modelling and simulation into a problem solving paradigm for improved regional and environmental decision making</h3>
<h4>G.L. Benwell, S.A. Mann and C.B. Smith</h4>
 
<p>In New Zealand the management of the environment is now largely embodied in the Resource Management Act. Within this context there is a clear need to support regionally significant decisions. Furthermore, it is important that such decisions are scale invariant, that is, they are appropriately implementable at the micro and macro levels. This demands that decision makers at these diametrically opposed levels are cognisant of the influence of their domain on other domains, which is a difficult concept. It also implies that there is consensus on what the significant regional decisions are, and also on how decisions and consequences interact across all scales and, possibly, even regions. As a region is a scale-dependent term, it is important that the different views can be perceived and conveyed to the different proponents and opponents. This paper develops the case that it is important to make appropriate use of technology when attempting to make decisions at the regional level. This is particularly so in the fragile environments of the high country of southern New Zealand. Furthermore, this paper embodies the <EM>Six Thinking Hats</EM> concept of E. de Bono in developing a simulation modelling tool which presents interactive management scenarios for agricultural areas of the high country. The modelling concept is presented along with the reasons for adopting the de Bono concept.</p>
 
 
<hr>
 
<h3><a name="dp9515">95/15: Agent-based integration of general-purpose tools</a></h3>
<a name="dp9515"></a><h3>95/15: Agent-based integration of general-purpose tools</h3>
<h4>S.J.S. Cranefield and M.K. Purvis</h4>
 
<p>Agent-Based Software Integration (ABSI) entails the development of intelligent software agents and knowledge-sharing protocols that enhance interoperability of multiple software packages. Although some past ABSI projects reported in the literature have been concerned with the integration of relatively large software frameworks from separate engineering disciplines, the discussion in this paper concerns the integration of general-purpose software utilities and hand-crafted tools. With such smaller-scale ABSI projects, it may be difficult to justify the expense of constructing an overall ontology for the application. There are cases, however, when the project involves general-purpose tools that manipulate the same general entity types (such as files) but at different levels of abstraction. In such cases it is appropriate to have ontologies appropriate for the general usage of each tool and constraint descriptions that enable the ontological specifications to be mapped across the various levels of abstraction. This paper discusses issues associated with this type of ABSI project and describes an example information management application associated with university course administration. For the information management application presented the key issues are the provision of standard agent wrappers for standard desktop information management tools and the design of standard ontologies describing information stored in relational databases as well as in structured text files. Examples of a conceptual model describing such a database ontology are presented in connection with the example application. It is also suggested that a general planning agent, distinct from the notion of a facilitator agent, be employed in this context to assist in the use of various agents to manipulate information and move items from one data format to another.</p>
 
 
<hr>
 
<h3><a name="dp9516">95/16: Applying case-based reasoning to spatial phenomena</a></h3>
<a name="dp9516"></a><h3>95/16: Applying case-based reasoning to spatial phenomena</h3>
<h4>A. Holt and G.L. Benwell</h4>
 
<p>This paper outlines a unique approach applying artificial intelligence techniques to the solving of environmental problems. The approach combines case-based reasoning with spatial information systems, enabling technologies and techniques from each domain to be applied to environmental problems. This paper defines a possible case-based reasoning/spatial information system hybrid that would allow spatial cases to be defined and analysed by both technologies. The example used in this paper involves soil series classification which, using case-based reasoning, is performed according to spatial criteria. Evaluations and spatial criteria are then used to predict properties of new cases based on similar previous spatial cases.</p>
Website/dp1996-abstracts-contents.htm
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<div class="sectionTitle">Information Science Discussion Papers Series: 1996 Abstracts</div>
 
<hr>
 
<h3><a name="dp9601">96/01: Using data models to estimate required effort in creating a spatial information system</a></h3>
<a name="dp9601"></a><h3>96/01: Using data models to estimate required effort in creating a spatial information system</h3>
<h4>G.L. Benwell and S.G. MacDonell</h4>
 
<p>The creation of spatial information systems can be viewed from many directions. One such view is to see the creation in terms of data collection, data modelling, codifying spatial processes, information management, analysis and presentation. The amount of effort to create such systems is frequently under-estimated; this is true for each aspect of the above view. The accuracy of the assessment of effort will vary for each aspect. This paper concentrates on the effort required to create the code for spatial processes and analysis. Recent experience has indicated that this is an area where considerable under-estimation is occurring. Function point analysis presented in this paper provides a reliable metric for spatial systems developers to assess required effort based on spatial data models.</p>
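<p>For readers unfamiliar with the technique, an unadjusted function point count of the kind such an estimate rests on can be formed from weighted counts of system components. The weights below are the standard average-complexity values; the component counts are hypothetical rather than taken from the paper.</p>

<pre>
# Sketch of an unadjusted function point count using the standard average
# complexity weights; the counts are invented and the paper's own model is
# not reproduced here.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
counts  = {"EI": 6, "EO": 4, "EQ": 3, "ILF": 5, "EIF": 2}   # e.g. entities -> ILFs

ufp = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)
print("Unadjusted function points:", ufp)
</pre>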
 
 
<hr>
 
<h3><a name="dp9602">96/02: The use of a metadata repository in spatial database development</a></h3>
<a name="dp9602"></a><h3>96/02: The use of a metadata repository in spatial database development</h3>
<h4>S.K.S. Cockcroft</h4>
 
<p>Database schemas currently used to define spatial databases are deficient in that they do not incorporate facilities to specify business rules/integrity constraints. This shortcoming has been noted by G&uuml;nther and Lamberts [G&uuml;nther &amp; Lamberts, 1994] who commented that geographical information systems (GIS) do not generally offer any functionality to preserve semantic integrity. It is desirable that this functionality be incorporated for reasons of consistency and so that an estimate of the accuracy of data entry can be made. Research into constraints upon spatial relationships at the conceptual level is well documented. A number of researchers have shown that the transition from conceptual to logical spatial data models is possible [Firns, 1994; Hadzilacos &amp; Tryfona, 1995]. The algorithmic accomplishment of this transition is a subject of current research. This paper presents one approach to incorporating spatial business rules in spatially referenced database schemas by means of a repository. It is demonstrated that the repository has an important role to play in spatial data management and in particular automatic schema generation for spatially referenced databases.</p>
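<p>A minimal sketch of the kind of spatial business rule discussed (entirely hypothetical, with true geometry tests replaced by bounding boxes) might attach a containment constraint to a relationship and check it before data are admitted to the database.</p>

<pre>
# Toy spatial integrity constraint: a sample site must fall inside its parcel.
# Bounding boxes stand in for real geometry; nothing here is from the paper.
parcels = {"P1": (0, 0, 100, 50)}                 # xmin, ymin, xmax, ymax

def inside(point, box):
    x, y = point
    xmin, ymin, xmax, ymax = box
    return xmin <= x <= xmax and ymin <= y <= ymax

def check_sample(parcel_id, point):
    if not inside(point, parcels[parcel_id]):
        raise ValueError(f"constraint violated: {point} lies outside {parcel_id}")

check_sample("P1", (20, 30))                      # passes; (200, 30) would raise
</pre>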
 
 
 
<hr>
 
<h3><a name="dp9603">96/03: Connectionist-based information systems: A proposed research theme</a></h3>
<a name="dp9603"></a><h3>96/03: Connectionist-based information systems: A proposed research theme</h3>
<h4>N.K. Kasabov, M.K. Purvis and P.J. Sallis</h4>
 
<p>General Characteristics of the Theme</p>
 
 
 
<hr>
 
<h3><a name="dp9604">96/04: Agent modelling with Petri nets</a></h3>
<a name="dp9604"></a><h3>96/04: Agent modelling with Petri nets</h3>
<h4>M.K. Purvis and S.J.S. Cranefield</h4>
 
<p>The use of intelligent software agents is a modelling paradigm that is gaining increasing attention in the applications of distributed systems. This paper identifies essential characteristics of agents and shows how they can be mapped into a coloured Petri net representation so that the coordination of activities both within agents and between interacting agents can be visualised and analysed. The detailed structure and behaviour of an individual agent in terms of coloured Petri nets is presented, as well as a description of how such agents interact. A key notion is that the essential functional components of an agent are explicitly represented by means of coloured Petri net constructs in this representation.</p>
 
 
<hr>
 
<h3><a name="dp9605">96/05: A comparison of alternatives to regression analysis as model building techniques to develop predictive equations for software metrics</a></h3>
<a name="dp9605"></a><h3>96/05: A comparison of alternatives to regression analysis as model building techniques to develop predictive equations for software metrics</h3>
<h4>A.R. Gray and S.G. MacDonell</h4>
 
<p>The almost exclusive use of regression analysis to derive predictive equations for software development metrics found in papers published before 1990 has recently been complemented by increasing numbers of studies using non-traditional methods, such as neural networks, fuzzy logic models, case-based reasoning systems, rule-based systems, and regression trees. There has also been an increasing level of sophistication in the regression-based techniques used, including robust regression methods, factor analysis, resampling methods, and more effective and efficient validation procedures. This paper examines the implications of using these alternative methods and provides some recommendations as to when they may be appropriate. A comparison between standard linear regression, robust regression, and the alternative techniques is also made in terms of their modelling capabilities with specific reference to software metrics.</p>
 
 
<hr>
 
<h3><a name="dp9606">96/06: Process management for geographical information system development</a></h3>
<a name="dp9606"></a><h3>96/06: Process management for geographical information system development</h3>
<h4>S.G. MacDonell and G.L. Benwell</h4>
 
<p>The controlled management of software processes, an area of ongoing research in the business systems domain, is equally important in the development of geographical information systems (GIS). Appropriate software processes must be defined, used and managed in order to ensure that, as much as possible, systems are developed to quality standards on time and within budget. However, specific characteristics of geographical information systems, in terms of their inherent need for graphical output, render some process management tools and techniques less appropriate. This paper examines process management activities that are applicable to GIS, and suggests that it may be possible to extend such developments into the visual programming domain. A case study concerned with development effort estimation is presented as a precursor to a discussion of the implications of system requirements for significant graphical output.</p>
 
 
 
<hr>
 
<h3><a name="dp9607">96/07: Experimental transformation of a cognitive schema into a display structure</a></h3>
<a name="dp9607"></a><h3>96/07: Experimental transformation of a cognitive schema into a display structure</h3>
<h4>W.B.L. Wong, D.P. O&#8217;Hare and P.J. Sallis</h4>
 
<p>The purpose of this paper is to report on an experiment conducted to evaluate the feasibility of an empirical approach for translating a cognitive schema into a display structure. This experiment is part of a series of investigations aimed at determining how information about dynamic environments should be portrayed to facilitate decision making. Studies to date have generally derived an information display organisation that is largely based on a designer&#8217;s experience, intuition and understanding of the processes. In this study we report on how we attempted to formalise this design process so that if the procedures were adopted, other less experienced designers would still be able to objectively formulate a display organisation that is just as effective. This study is based on the first stage of the emergency dispatch management process, the call-taking stage. The participants in the study were ambulance dispatch officers from the Dunedin-based Southern Regional Communications Centre of the St. John&#8217;s Ambulance Service in New Zealand.</p>
 
 
<hr>
 
<h3><a name="dp9608">96/08: Neuro-fuzzy engineering for spatial information processing</a></h3>
<a name="dp9608"></a><h3>96/08: Neuro-fuzzy engineering for spatial information processing</h3>
<h4>N.K. Kasabov, M.K. Purvis, F. Zhang and G.L. Benwell</h4>
 
<p>This paper proposes neuro-fuzzy engineering as a novel approach to spatial data analysis and to building decision making systems based on spatial information processing, and presents the authors&#8217; development of this approach. It has been implemented as a software environment and is illustrated on a case study problem.</p>
 
 
 
<hr>
 
<h3><a name="dp9609">96/09: Improved learning strategies for multimodular fuzzy neural network systems: A case study on image classification</a></h3>
<a name="dp9609"></a><h3>96/09: Improved learning strategies for multimodular fuzzy neural network systems: A case study on image classification</h3>
<h4>S.A. Israel and N.K. Kasabov</h4>
 
<p>This paper explores two different methods for improved learning in multimodular fuzzy neural network systems for classification. It demonstrates these methods on a case study of satellite image classification using 3 spectral inputs and 10 coastal vegetation covertype outputs. The classification system is a multimodular one; it has one fuzzy neural network per output. All the fuzzy neural networks are trained in parallel for a small number of iterations. Then, the system performance is tested on new data to determine the types of interclass confusion. Two strategies are developed to improve classification performance. First, the individual modules are additionally trained for a very small number of iterations on a subset of the data to decrease the false positive and the false negative errors. The second strategy is to create new units, &#8216;experts&#8217;, which are individually trained to discriminate only the ambiguous classes. So, if the main system classifies a new input into one of the ambiguous classes, then the new input is passed to the &#8216;experts&#8217; for final classification. Two learning techniques are presented and applied to both classification performance enhancement strategies; the first one reduces omission, or false negative, error; the second reduces commission, or false positive, error. Considerable improvement is achieved by using these learning techniques, thus making it feasible to incorporate them into a real adaptive system that improves during operation.</p>
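<p>The routing idea behind the second strategy can be sketched as follows; the class names and module interfaces below are assumptions for illustration, not the authors&#8217; implementation.</p>

<pre>
# Sketch of routing ambiguous classifications to an 'expert' module; the class
# pair and module interfaces are hypothetical.
AMBIGUOUS = {("saltmarsh", "mudflat")}            # a known-confusable class pair

def classify(x, main_modules, experts):
    # one fuzzy neural network per class: pick the strongest response
    scores = {label: module(x) for label, module in main_modules.items()}
    label = max(scores, key=scores.get)
    for pair in AMBIGUOUS:
        if label in pair:
            return experts[pair](x)               # expert decides between the pair
    return label
</pre>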
 
 
 
<hr>
 
<h3><a name="dp9610">96/10: Politics and techniques of data encryption</a></h3>
<a name="dp9610"></a><h3>96/10: Politics and techniques of data encryption</h3>
<h4>H.B. Wolfe</h4>
 
<p>Cryptography is the art or science, depending on how you look at it, of keeping messages secure. It has been around for a couple of thousand years in various forms. The Spartan Lysander and even Caesar made use of cryptography in some of their communications. Others in history include Roger Bacon, Edgar Allan Poe, Geoffrey Chaucer, and many more. By today&#8217;s standards, cryptographic techniques through the ages, right up to the end of World War I, were pretty primitive. With the development of electro-mechanical devices, cryptography came of age. The subsequent evolution of the computer has raised the level of security that cryptography can provide in communications and data storage.</p>
 
 
<hr>
 
<h3><a name="dp9611">96/11: Reasonable security safeguards for small to medium organisations</a></h3>
<a name="dp9611"></a><h3>96/11: Reasonable security safeguards for small to medium organisations</h3>
<h4>H.B. Wolfe</h4>
 
<p>In today&#8217;s world most businesses, large and small, depend on their computer(s) to provide vital functions consistently and without interruption. In many organizations the loss of the computer function could mean the difference between continued operation and shutdown. Reliability and continuity, therefore, become critical aspects of any computer system(s) currently in use. This paper attempts to describe some of the most important issues any organization should address in order to reduce their risk where it relates to computer-related failure.</p>
 
 
<hr>
 
<h3><a name="dp9612">96/12: Information warfare: Where are the threats?</a></h3>
<a name="dp9612"></a><h3>96/12: Information warfare: Where are the threats?</h3>
<h4>H.B. Wolfe</h4>
 
<p>(No abstract.)</p>
 
 
 
<hr>
 
<h3><a name="dp9613">96/13: Interactive visualisation tools for analysing NIR data</a></h3>
<a name="dp9613"></a><h3>96/13: Interactive visualisation tools for analysing NIR data</h3>
<h4>H. Munro, K. Novins, G.L. Benwell and A. Moffat</h4>
 
<p>This paper describes a tool being developed to allow users to visualise the ripening characteristics of fruit. These characteristics, such as sugar, acid and moisture content, can be measured using non-destructive Near Infrared Reflectance (NIR) analysis techniques. The four-dimensional nature of the NIR data introduces some interesting visualisation problems. The display device only provides two dimensions, making it necessary to design two-dimensional methods for representing the data. In order to help the user fully understand the dataset, a graphical display system is created with an interface that provides flexible visualisation tools.</p>
 
 
 
<hr>
 
<h3><a name="dp9614">96/14: A connectionist computational architecture based on an optical thin-film model</a></h3>
<a name="dp9614"></a><h3>96/14: A connectionist computational architecture based on an optical thin-film model</h3>
<h4>M.K. Purvis and X. Li</h4>
 
<p>A novel connectionist architecture that differs from conventional architectures based on the neuroanatomy of biological organisms is described. The proposed scheme is based on the model of multilayered optical thin-films, with the thicknesses of the individual thin-film layers serving as adjustable &#8216;weights&#8217; for the training. A discussion of training techniques for this model and some sample simulation calculations in the area of pattern recognition are presented. These results are shown to compare with results when the same training data are used in connection with a feed-forward neural network with back propagation training. A physical realization of this architecture could largely take advantage of existing optical thin-film deposition technology.</p>
 
 
<hr>
 
<h3><a name="dp9615">96/15: Measurement of database systems: An empirical study</a></h3>
<a name="dp9615"></a><h3>96/15: Measurement of database systems: An empirical study</h3>
<h4>S.G. MacDonell, M.J. Shepperd and P.J. Sallis</h4>
 
<p>There is comparatively little work, other than function points, that tackles the problem of building prediction systems for software that is dominated by data considerations, in particular systems developed using 4GLs. We describe an empirical investigation of 70 such systems. Various easily obtainable counts were extracted from data models (e.g. number of entities) and from specifications (e.g. number of screens). Using simple regression analysis, prediction systems of implementation size with accuracy of MMRE=21% were constructed. Our work shows that it is possible to develop simple and effective prediction systems based upon metrics easily derived from functional specifications and data models.</p>
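<p>MMRE here is the mean magnitude of relative error of the predictions. A minimal sketch of fitting such a size predictor from one data-model count and reporting MMRE is given below; the figures are invented, not the study&#8217;s data.</p>

<pre>
# Fit a simple linear size predictor from an entity count and report MMRE
# (mean magnitude of relative error). All numbers are hypothetical.
import numpy as np

entities = np.array([12, 25, 40, 8, 30])                 # counts from data models
size     = np.array([3100, 6400, 9800, 2300, 7500])      # implementation sizes

slope, intercept = np.polyfit(entities, size, 1)
predicted = slope * entities + intercept

mmre = np.mean(np.abs(size - predicted) / size)
print(f"MMRE = {mmre:.1%}")
</pre>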
 
 
 
<hr>
 
<h3><a name="dp9616">96/16: Teaching a diploma in medical informatics using the World Wide Web</a></h3>
<a name="dp9616"></a><h3>96/16: Teaching a diploma in medical informatics using the World Wide Web</h3>
<h4>R.T. Pascoe and D. Abernathy</h4>
 
<p>This paper discusses the preliminary development of a Diploma in Medical Informatics comprising courses offered entirely through the Internet, in the form of World Wide Web documents and electronic mail. The proposed use of such educational technology for delivering these courses within a distance learning environment is based upon a conversational framework developed by Laurillard (1993) and an associated classification of the technology according to the extent to which elements within the conversational framework are supported.</p>
 
 
 
<hr>
 
<h3><a name="dp9617">96/17: Alternatives to regression models for estimating software projects</a></h3>
<a name="dp9617"></a><h3>96/17: Alternatives to regression models for estimating software projects</h3>
<h4>S.G. MacDonell and A.R. Gray</h4>
 
<p>The use of &#8216;standard&#8217; regression analysis to derive predictive equations for software development has recently been complemented by increasing numbers of analyses using less common methods, such as neural networks, fuzzy logic models, and regression trees. This paper considers the implications of using these methods and provides some recommendations as to when they may be appropriate. A comparison of techniques is also made in terms of their modelling capabilities with specific reference to function point analysis.</p>
 
 
 
<hr>
 
<h3><a name="dp9618">96/18: A goal-oriented approach for designing decision support displays in dynamic environments</a></h3>
<a name="dp9618"></a><h3>96/18: A goal-oriented approach for designing decision support displays in dynamic environments</h3>
<h4>W.B.L. Wong, D.P. O&#8217;Hare and P.J. Sallis</h4>
 
<p>This paper reports on how the Critical Decision Method, a cognitive task analysis technique, was employed to identify the goal states of tasks performed by dispatchers in a dynamic environment, the Sydney Ambulance Co-ordination Centre. The analysis identified five goal states: Notification; Situation awareness; Planning resource to task compatibility; Speedy response; Maintain history of developments. These goals were then used to guide the development of display concepts that support decision strategies invoked by dispatchers in this task environment.</p>
 
 
 
<hr>
 
<h3><a name="dp9619">96/19: Software process engineering for measurement-driven software quality programs&#8212;Realism and idealism</a></h3>
<a name="dp9619"></a><h3>96/19: Software process engineering for measurement-driven software quality programs&#8212;Realism and idealism</h3>
<h4>S.G. MacDonell and A.R. Gray</h4>
 
<p>This paper brings together a set of commonsense recommendations relating to the delivery of software quality, with some emphasis on the adoption of realistic perspectives for software process/product stakeholders in the area of process improvement. The use of software measurement is regarded as an essential component for a quality development program, in terms of prediction, control, and adaptation as well as the communication necessary for stakeholders&#8217; realistic perspectives. Some recipes for failure are briefly considered so as to enable some degree of contrast between what is currently perceived to be good and bad practices. This is followed by an evaluation of the quality-at-all-costs model, including a brief pragmatic investigation of quality in other, more mature, disciplines. Several programs that claim to assist in the pursuit of quality are examined, with some suggestions made as to how they may best be used in practice.</p>
 
 
 
<hr>
 
<h3><a name="dp9620">96/20: Using genetic algorithms for an optical thin-film learning model</a></h3>
<a name="dp9620"></a><h3>96/20: Using genetic algorithms for an optical thin-film learning model</h3>
<h4>X. Li and M.K. Purvis</h4>
 
<p>A novel connectionist architecture based on an optical thin-film multilayer model (OTFM) is described. The architecture is explored as an alternative to the widely used neuron-inspired models, with the thin-film thicknesses serving as adjustable &#8216;weights&#8217; for the computation. The use of genetic algorithms for training the thin-film model, along with experimental results on the parity problem and the iris data classification are presented.</p>
 
 
 
<hr>
 
<h3><a name="dp9621">96/21: Early experiences in measuring multimedia systems development effort</a></h3>
<a name="dp9621"></a><h3>96/21: Early experiences in measuring multimedia systems development effort</h3>
<h4>T. Fletcher, W.B.L. Wong and S.G. MacDonell</h4>
 
<p>The development of multimedia information systems must be managed and controlled just as it is for other generic system types. This paper proposes an approach for assessing multimedia component and system characteristics with a view to ultimately using these features to estimate the associated development effort. Given the different nature of multimedia systems, existing metrics do not appear to be entirely useful in this domain; however, some general principles can still be applied in analysis. Some basic assertions concerning the influential characteristics of multimedia systems are made and a small preliminary set of data is evaluated.</p>
 
 
 
<hr>
 
<h3><a name="dp9622">96/22: Applying soft systems methodology to multimedia systems requirements analysis</a></h3>
<a name="dp9622"></a><h3>96/22: Applying soft systems methodology to multimedia systems requirements analysis</h3>
<h4>D.Z. Butt, T. Fletcher, S.G. MacDonell, B.E. Norris and W.B.L. Wong</h4>
 
<p>The Soft Systems Methodology (SSM) was used to identify requirements for the development of one or more information systems for a local company. The outcome of using this methodology was the development of three multimedia information systems. This paper discusses the use of the SSM when developing for multimedia environments: the problems with traditional methods of requirements analysis (which the SSM addresses), how the SSM can be used to elicit multimedia information system requirements, and our experience of the method, discussed in terms of the systems we developed using the SSM.</p>
 
 
 
<hr>
 
<h3><a name="dp9623">96/23: FuNN/2&#8212;A fuzzy neural network architecture for adaptive learning and knowledge acquisition</a></h3>
<a name="dp9623"></a><h3>96/23: FuNN/2&#8212;A fuzzy neural network architecture for adaptive learning and knowledge acquisition</h3>
<h4>N.K. Kasabov, J. Kim, M.J. Watts and A.R. Gray</h4>
 
<p>Fuzzy neural networks have several features that make them well suited to a wide range of knowledge engineering applications. These strengths include fast and accurate learning, good generalisation capabilities, excellent explanation facilities in the form of semantically meaningful fuzzy rules, and the ability to accommodate both data and existing expert knowledge about the problem under consideration. This paper investigates adaptive learning, rule extraction and insertion, and neural/fuzzy reasoning for a particular model of a fuzzy neural network called FuNN. As well as providing for representing a fuzzy system with an adaptable neural architecture, FuNN also incorporates a genetic algorithm in one of its adaptation modes. A version of FuNN&#8212;FuNN/2, which employs triangular membership functions and correspondingly modified learning and adaptation algorithms, is also presented in the paper.</p>
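<p>As a hedged illustration of the triangular membership functions mentioned above, the sketch below fuzzifies a single crisp input into three overlapping linguistic terms. The term names and breakpoints are assumptions made for illustration and are not taken from the FuNN/2 specification.</p>

<pre>
# Minimal sketch: triangular membership functions used to fuzzify a crisp input.
# The terms and breakpoints below are invented; FuNN/2's own parameters may differ.
def triangular(x, left, centre, right):
    """Degree of membership of x in the triangular fuzzy set (left, centre, right)."""
    rising = (x - left) / (centre - left)
    falling = (right - x) / (right - centre)
    return max(0.0, min(rising, falling))

# Three overlapping terms covering a normalised input range.
terms = {"low": (-0.5, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}

x = 0.3
memberships = {name: triangular(x, *abc) for name, abc in terms.items()}
print(memberships)   # approximately {'low': 0.4, 'medium': 0.6, 'high': 0.0}
</pre>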
 
 
<hr>
 
<h3><a name="dp9624">96/24: An agent-based architecture for software tool coordination</a></h3>
<a name="dp9624"></a><h3>96/24: An agent-based architecture for software tool coordination</h3>
<h4>S.J.S. Cranefield and M.K. Purvis</h4>
 
<p>This paper presents a practical multi-agent architecture for assisting users to coordinate the use of both special and general purpose software tools for performing tasks in a given problem domain. The architecture is open and extensible, being based on the techniques of agent-based software interoperability (ABSI), where each tool is encapsulated by a KQML-speaking agent. The work reported here adds additional facilities for the user to describe the problem domain, the tasks that are commonly performed in that domain and the ways in which various software tools are commonly used by the user. Together, these features provide the computer with a degree of autonomy in the user&#8217;s problem domain in order to help the user achieve tasks through the coordinated use of disparate software tools.</p>
 
 
 
<hr>
 
<h3><a name="dp9625">96/25: Special issue: GeoComputation &#8217;96</a></h3>
<a name="dp9625"></a><h3>96/25: Special issue: GeoComputation &#8217;96</h3>
 
<p>A collection of papers authored by members of the Information Science department and presented at the 1st International Conference on GeoComputation, Leeds, United Kingdom. It comprises the nine papers listed below.</p>
 
 
View
31
Website/dp1997-abstracts-contents.htm
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<hr>
 
<h3><a name="dp9701">97/01: Planning and matchmaking for the interoperation of information processing agents</a></h3>
<a name="dp9701"></a><h3>97/01: Planning and matchmaking for the interoperation of information processing agents</h3>
<h4>S.J.S. Cranefield, A. Diaz and M.K. Purvis</h4>
 
<p>In today&#8217;s open, distributed environments, there is an increasing need for systems to assist the interoperation of tools and information resources. This paper describes a multi-agent system, DALEKS, that supports such activities for the information processing domain. With this system, information processing tasks are accomplished by the use of an agent architecture incorporating task planning and information agent matchmaking components. We discuss the characteristics of planning in this domain and describe how information processing tools are specified for the planner. We also describe the manner in which planning, agent matchmaking, and information task execution are interleaved in the DALEKS system. An example application taken from the domain of university course administration is provided to illustrate some of the activities performed in this system.</p>
 
 
 
<hr>
 
<h3><a name="dp9702">97/02: Landscape structure and ecosystem conservation: An assessment using remote sensing</a></h3>
<a name="dp9702"></a><h3>97/02: Landscape structure and ecosystem conservation: An assessment using remote sensing</h3>
<h4>S. Mann, G.L. Benwell and W.G. Lee</h4>
 
<p>Analyses of landscape structure are used to test the hypothesis that remotely sensed images can be used as indicators of ecosystem conservation status. Vegetation types based on a classified SPOT satellite image were used in a comparison of paired areas: reserves (conservation areas) and adjacent, more human-modified areas (controls). Ten reserves (average size 965 ha) were selected from upland tussock grasslands in Otago, New Zealand. While there were equal numbers of vegetation types, and the size and shape distributions of patches within the overall landscapes were not significantly different, there was less &#8216;target&#8217; vegetation in the controls. This vegetation occurred in smaller patches, and fewer of these patches contained &#8216;core areas&#8217;. These control &#8216;target&#8217; patches were also less complex in shape than those in the adjacent reserves. These measures showed that remotely sensed images can be used to derive large-scale indicators of landscape conservation status. An index is proposed for assessing landscape change, and conservation management issues are raised.</p>
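<p>To make the patch-based measures concrete, the fragment below computes the number and sizes of patches of one vegetation class in a tiny classified raster. The array and the class code are invented, and the use of scipy.ndimage here is an assumption rather than the method actually used in the study.</p>

<pre>
# Hedged sketch: simple landscape-structure measures for one vegetation class.
import numpy as np
from scipy import ndimage

classified = np.array([[2, 2, 0, 1],      # invented classified image;
                       [2, 0, 0, 1],      # class 2 plays the role of the
                       [0, 0, 2, 2],      # 'target' vegetation type
                       [1, 1, 2, 2]])

target = (classified == 2)
labels, n_patches = ndimage.label(target)                        # connected patches
patch_sizes = ndimage.sum(target, labels, range(1, n_patches + 1))

print(n_patches, patch_sizes)   # patch count and per-patch sizes in cells
</pre>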
 
 
 
<hr>
 
<h3><a name="dp9703">97/03: Supporting task performance: Is text or video better?</a></h3>
<a name="dp9703"></a><h3>97/03: Supporting task performance: Is text or video better?</h3>
<h4>B.E. Norris and W.B.L. Wong</h4>
 
<p>Multimedia technology allows a variety of presentation formats to portray instructions for performing a task. These formats include text, graphics, video, sound and photographs, used singly or in combination (Kawin, 1992; Hills, 1984; Newton, 1990; Bailey, 1996). As part of research at the Multimedia Systems Research Laboratory to identify a syntax for the use of multimedia elements, an experiment was conducted to determine whether text or video representations were more effective at communicating task instructions (Norris, 1996). This paper reports on the outcome of that study.</p>
 
 
 
<hr>
 
<h3><a name="dp9704">97/04: Eliciting information portrayal requirements: Experiences with the critical decision method</a></h3>
<a name="dp9704"></a><h3>97/04: Eliciting information portrayal requirements: Experiences with the critical decision method</h3>
<h4>W.B.L. Wong, P.J. Sallis and D.P. O&#8217;Hare</h4>
 
<p>This study is part of research that is investigating the notion that human performance in dynamic and intentional decision making environments, such as ambulance dispatch management, can be improved if information is portrayed in a manner that supports the decision strategies invoked to achieve the goal states of the process being controlled. Hence, in designing interfaces to support real-time dispatch management decisions, it is suggested that it would be necessary to first discover the goal states and the decision strategies invoked during the process, and then portray the required information in a manner that supports such a user group&#8217;s decision making goals and strategies.</p>
 
 
 
<hr>
 
<h3><a name="dp9705">97/05: A taxonomy of spatial data integrity constraints</a></h3>
<a name="dp9705"></a><h3>97/05: A taxonomy of spatial data integrity constraints</h3>
<h4>S.K.S. Cockcroft</h4>
 
<p>Spatial data quality has become an issue of increasing concern to researchers and practitioners in the field of Spatial Information Systems (SIS). Clearly the results of any spatial analysis are only as good as the data on which it is based. There are a number of significant areas for data quality research in SIS. These include topological consistency; consistency between spatial and attribute data; and consistency between spatial objects&#8217; representation and their true representation on the ground. The last category may be subdivided into spatial accuracy and attribute accuracy. One approach to improving data quality is the imposition of constraints upon data entered into the database. This paper presents a taxonomy of integrity constraints as they apply to spatial database systems. Taking a cross-disciplinary approach, it aims to clarify some of the terms used in the database and SIS fields for data integrity management. An overview of spatial data quality concerns is given and each type of constraint is assessed regarding its approach to addressing these concerns. Some indication of an implementation method is also given for each.</p>
 
 
 
<hr>
 
<h3><a name="dp9706">97/06: Planning and matchmaking in a multi-agent system for software integration</a></h3>
<a name="dp9706"></a><h3>97/06: Planning and matchmaking in a multi-agent system for software integration</h3>
<h4>A. Diaz, S.J.S. Cranefield and M.K. Purvis</h4>
 
<p>Computer users employ a collection of software tools to support their day-to-day work. Often the software environment is dynamic with new tools being added as they become available and removed as they become obsolete or outdated. In today&#8217;s systems, the burden of coordinating the use of these disparate tools, remembering the correct sequence of commands, and incorporating new and modified programs into the daily work pattern lies with the user. This paper describes a multi-agent system, DALEKS, that assists users in utilizing diverse software tools for their everyday work. It manages work and information flow by providing a coordination layer that selects the appropriate tool(s) to use for each of the user&#8217;s tasks and automates the flow of information between them. This enables the user to be concerned more with what has to be done, rather than with the specifics of how to access tools and information. Here we describe the system architecture of DALEKS and illustrate it with an example in university course administration.</p>
 
 
 
<hr>
 
<h3><a name="dp9707">97/07: Environments for viewpoint representations</a></h3>
<a name="dp9707"></a><h3>97/07: Environments for viewpoint representations</h3>
<h4>N.J. Stanger and R.T. Pascoe</h4>
 
<p>Modelling the structure of data is an important part of any system analysis project. One problem that can arise is that there may be many differing viewpoints among the various groups that are involved in a project. Each of these viewpoints describes a perspective on the phenomenon being modelled. In this paper, we focus on the representation of developer viewpoints, and in particular on how multiple viewpoint representations may be used for database design. We examine the issues that arise when transforming between different viewpoint representations, and describe an architecture for implementing a database design environment based on these concepts.</p>
 
 
 
<hr>
 
<h3><a name="dp9708">97/08: Exploiting the advantages of object-oriented programming in the implementation of a database design environment</a></h3>
<a name="dp9708"></a><h3>97/08: Exploiting the advantages of object-oriented programming in the implementation of a database design environment</h3>
<h4>N.J. Stanger and R.T. Pascoe</h4>
 
<p>In this paper, we describe the implementation of a database design environment (<EM>Swift</EM>) that incorporates several novel features: Swift&#8217;s data modelling approach is derived from viewpoint-oriented methods; Swift is implemented in Java, which allows us to easily construct a client/server based environment; the repository is implemented using PostgreSQL, which allows us to store the actual application code in the database; and the combination of Java and PostgreSQL reduces the impedance mismatch between the application and the repository.</p>
 
 
 
<hr>
 
<h3><a name="dp9709">97/09: Privacy enhancing technology</a></h3>
<a name="dp9709"></a><h3>97/09: Privacy enhancing technology</h3>
<h4>H.B. Wolfe</h4>
 
<p>Privacy is one of the most fundamental of human rights. It is not a privilege granted by some authority or state. It is, in fact, necessary for each human being&#8217;s normal development and survival. Those nations who have, in the past, and currently follow the notion that they have the authority and/or moral high ground to grant or deny privacy to their citizens are notable for their other human rights violations. This paper is centered around the above premise and will offer the reader some good news and some bad news. But most important, it will put the reader on notice that our privacy is constantly under attack from one vested interest or another and that each and every one of us must be vigilant in the protection of our private matters.</p>
 
 
 
<hr>
 
<h3><a name="dp9710">97/10: Applications of fuzzy logic to software metric models for development effort estimation</a></h3>
<a name="dp9710"></a><h3>97/10: Applications of fuzzy logic to software metric models for development effort estimation</h3>
<h4>A.R. Gray and S.G. MacDonell</h4>
 
<p>Software metrics are measurements of the software development process and product that can be used as variables (both dependent and independent) in models for project management. The most common types of these models are those used for predicting the development effort for a software system based on size, complexity, developer characteristics, and other metrics. Despite the financial benefits from developing accurate and usable models, there are a number of problems that have not been overcome using the traditional techniques of formal and linear regression models. These include the non-linearities and interactions inherent in complex real-world development processes, the lack of stationarity in such processes, over-commitment to precisely specified values, the small quantities of data often available, and the inability to use whatever knowledge is available where exact numerical values are unknown. The use of alternative techniques, especially fuzzy logic, is investigated and some usage recommendations are made.</p>
 
 
 
<hr>
 
<h3><a name="dp9711">97/11: Usenet newsgroups&#8217; profile analysis utilising standard and non-standard statistical methods</a></h3>
<a name="dp9711"></a><h3>97/11: Usenet newsgroups&#8217; profile analysis utilising standard and non-standard statistical methods</h3>
<h4>P.J. Sallis and D.A. Kassabova</h4>
 
<p>The paper explores building profiles of Newsgroups from a corpus of Usenet E-mail messages, employing some standard statistical techniques as well as fuzzy clustering methods. A large set of data from a number of Newsgroups has been analysed to elicit some text attributes, such as number of words, length of sentences and other stylistic characteristics. Readability scores have also been obtained by using recognised assessment methods. These text attributes were used for building Newsgroups&#8217; profiles. Three newsgroups, each with a similar number of messages, were selected from the processed sample for the analysis of two types of one-dimensional profiles, one by length of texts and the second by readability scores. Those profiles are compared with corresponding profiles of the whole sample and also with those of a group of frequent participants in the newsgroups. Fuzzy clustering is used for creating two-dimensional profiles of the same groups. An attempt is made to identify the newsgroups by defining centres of data clusters.</p>
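<p>The readability scores referred to above can be derived from standard formulae. The sketch below computes a Flesch Reading Ease score with a deliberately crude syllable count; the syllable heuristic and the tokenisation are assumptions for illustration, not the method used in the paper.</p>

<pre>
# Rough sketch: a Flesch Reading Ease score for a message, using an approximate
# syllable count (groups of consecutive vowels). Illustration only.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(flesch_reading_ease("The paper explores building profiles of newsgroups."))
</pre>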
 
 
 
<hr>
 
<h3><a name="dp9712">97/12: The ecological approach to interface design in intentional domains</a></h3>
<a name="dp9712"></a><h3>97/12: The ecological approach to interface design in intentional domains</h3>
<h4>W.B.L. Wong</h4>
 
<p>(No abstract.)</p>
 
 
 
<hr>
 
<h3><a name="dp9713">97/13: Computer-mediated communication: Experiments with e-mail readability</a></h3>
<a name="dp9713"></a><h3>97/13: Computer-mediated communication: Experiments with e-mail readability</h3>
<h4>P.J. Sallis and D.A. Kassabova</h4>
 
<p>(No abstract.)</p>
 
 
<hr>
 
<h3><a name="dp9714">97/14: Software forensics: Extending authorship analysis techniques to computer programs</a></h3>
<a name="dp9714"></a><h3>97/14: Software forensics: Extending authorship analysis techniques to computer programs</h3>
<h4>A.R. Gray, P.J. Sallis and S.G. MacDonell</h4>
 
<p>The number of occurrences and severity of computer-based attacks such as viruses and worms, logic bombs, trojan horses, computer fraud, and plagiarism of code have become of increasing concern. In an attempt to better deal with these problems it is proposed that methods for examining the authorship of computer programs are necessary. This field is referred to here as <EM>software forensics</EM>. This involves the areas of author discrimination, identification, and characterisation, as well as intent analysis. Borrowing extensively from the existing fields of linguistics and software metrics, this can be seen as a new and exciting area for forensics to extend into.</p>
 
 
 
<hr>
 
<h3><a name="dp9715">97/15: A membership function selection method for fuzzy neural networks</a></h3>
<a name="dp9715"></a><h3>97/15: A membership function selection method for fuzzy neural networks</h3>
<h4>Q. Zhou, M.K. Purvis and N.K. Kasabov</h4>
 
<p>Fuzzy neural networks provide for the extraction of fuzzy rules from artificial neural network architectures. In this paper we describe a general method, based on statistical analysis of the training data, for the selection of fuzzy membership functions to be used in connection with fuzzy neural networks. The technique is first described and then illustrated by means of two experimental examinations.</p>
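<p>A hedged sketch of the general idea follows: membership-function positions are chosen from simple statistics (here, quartiles) of a training feature. The choice of quartiles and the triangular shape are assumptions for illustration; the statistics actually used in the method described above may differ.</p>

<pre>
# Illustration only: positioning three triangular membership functions from
# statistics of the training data for one input feature.
import numpy as np

feature = np.random.default_rng(0).normal(5.0, 2.0, 200)   # invented training data

q25, q50, q75 = np.percentile(feature, [25, 50, 75])
lo, hi = feature.min(), feature.max()

membership_functions = {
    "small":  (lo,  q25, q50),   # (left, centre, right) of a triangular set
    "medium": (q25, q50, q75),
    "large":  (q50, q75, hi),
}
print(membership_functions)
</pre>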
 
View
21
Website/dp1998-abstracts-contents.htm
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<div class="sectionTitle">Information Science Discussion Papers Series: 1998 Abstracts</div>
 
 
<hr>
 
<h3><a name="dp9801">98/01: User defined spatial business rules: Storage, management and implementation&#8212;A pipe network case study</a></h3>
<a name="dp9801"></a><h3>98/01: User defined spatial business rules: Storage, management and implementation&#8212;A pipe network case study</h3>
<h4>S. Cockcroft</h4>
 
<p>The application of business rules as a means of ensuring data quality is an accepted approach in information systems development. Rules, defined by the user, are stored and manipulated by a repository or data dictionary. The repository stores the system design, including rules which result from constraints in the user&#8217;s environment, and enforces these rules at runtime. The work presented here represents the application of this approach to spatial information system design using an integrated spatial software engineering tool (ISSET) with a repository at its core.</p>
 
 
 
<hr>
 
<h3><a name="dp9802">98/02: Electronic Security</a></h3>
<a name="dp9802"></a><h3>98/02: Electronic Security</h3>
<h4>H.B. Wolfe</h4>
 
<p>Electronic security in this day and age covers a wide variety of techniques. One of the most important areas that must be addressed is that of commerce on the Internet. The Internet is an insecure medium to say the least. Every message sent must pass through many computers that are most likely controlled by unrelated and untrusted organizations before it ultimately reaches the final destination. At any one of these relays the information within the message can be scrutinized, analyzed and/or copied for later reference. There are documented and suspected instances of surveillance of Internet traffic. It has been suggested that several of the major communication switches (through which 90% or more of Internet traffic must pass) have permanent surveillance in place.</p>
 
 
 
<hr>
 
<h3><a name="dp9803">98/03: Looking for a new AI paradigm: Evolving connectionist and fuzzy connectionist systems&#8212;Theory and applications for adaptive, on-line intelligent systems</a></h3>
<a name="dp9803"></a><h3>98/03: Looking for a new AI paradigm: Evolving connectionist and fuzzy connectionist systems&#8212;Theory and applications for adaptive, on-line intelligent systems</h3>
<h4>N. Kasabov</h4>
 
<p>The paper introduces one paradigm of neuro-fuzzy techniques and an approach to building on-line, adaptive intelligent systems. This approach is called evolving connectionist systems (ECOS). ECOS evolve through incremental, on-line learning, both supervised and unsupervised. They can accommodate new input data, including new features, new classes, etc. The ECOS framework is presented and illustrated on a particular type of evolving neural networks&#8212;evolving fuzzy neural networks. ECOS are three to six orders of magnitude faster than the multilayer perceptrons, or the fuzzy neural networks, trained with the backpropagation algorithm, or with a genetic programming technique. ECOS belong to the new generation of adaptive intelligent systems. This is illustrated on several real world problems for adaptive, on-line classification, prediction, decision making and control: phoneme-based speech recognition; moving person identification; wastewater flow time-series prediction and control; intelligent agents; financial time series prediction and control. The principles of recurrent ECOS and reinforcement learning are outlined.</p>
 
 
 
<hr>
 
<h3><a name="dp9804">98/04: Connectionist methods for classification of fruit populations based on visible-near infrared spectrophotometry data</a></h3>
<a name="dp9804"></a><h3>98/04: Connectionist methods for classification of fruit populations based on visible-near infrared spectrophotometry data</h3>
<h4>J. Kim, N. Kasabov, A. Mowat and P. Poole</h4>
 
<p>Variation in fruit maturation can influence harvest timing and duration, post-harvest fruit attributes and consumer acceptability. Present methods of managing and identifying lines of fruit with specific attributes both in commercial fruit production systems and breeding programs are limited by a lack of suitable tools to characterise fruit attributes at different stages of development in order to predict fruit behaviour at harvest, during storage or in relation to consumer acceptance. With visible-near infrared (VNIR) reflectance spectroscopy a vast array of analytical information is collected rapidly with a minimum of sample pre-treatment. VNIR spectra contain information about the amount and the composition of constituents within fruit. This information can be obtained from intact fruit at different stages of development. Spectroscopic data is processed using chemometrics techniques such as principal component analysis (PCA), discriminant analysis and/or connectionist approaches in order to extract qualitative and quantitative information for classification and predictive purposes. In this paper, we illustrate the effectiveness of connectionist and hybrid modelling approaches for fruit quality classification problems.</p>
 
 
<hr>
 
<h3><a name="dp9805">98/05: A fuzzy neural network model for the estimation of the feeding rate to an anaerobic waste water treatment process</a></h3>
<a name="dp9805"></a><h3>98/05: A fuzzy neural network model for the estimation of the feeding rate to an anaerobic waste water treatment process</h3>
<h4>J. Kim, R. Kozma, N. Kasabov, B. Gols, M. Geerink and T. Cohen</h4>
 
<p>Biological processes are among the most challenging to predict and control. It has been recognised that the development of an intelligent system for the recognition, prediction and control of process states in a complex, nonlinear biological process control is difficult. Such unpredictable system behaviour requires an advanced, intelligent control system which learns from observations of the process dynamics and takes appropriate control action to avoid collapse of the biological culture. In the present study, a hybrid system called fuzzy neural network is considered, where the role of the fuzzy neural network is to estimate the correct feed demand as a function of the process responses. The feed material is an organic and/or inorganic mixture of chemical compounds for the bacteria to grow on. Small amounts of the feed sources must be added and the response of the bacteria must be measured. This is no easy task because the process sensors used are non-specific and their response would vary during the developmental stages of the process. This hybrid control strategy retains the advantages of both neural networks and fuzzy control. These strengths include fast and accurate learning, good generalisation capabilities, excellent explanation facilities in the form of semantically meaningful fuzzy rules, and the ability to accommodate both numerical data and existing expert knowledge about the problem under consideration. The application to the estimation and prediction of the correct feed demand shows the power of this strategy as compared with conventional fuzzy control.</p>
 
 
 
<hr>
 
<h3><a name="dp9806">98/06: Induction of labour for post term pregnancy: An observational study</a></h3>
<a name="dp9806"></a><h3>98/06: Induction of labour for post term pregnancy: An observational study</h3>
<h4>E. Parry, D. Parry and N. Pattison</h4>
 
<p>The aim of the study was to compare two management protocols for postterm pregnancy: elective induction of labour at 42 weeks&#8217; gestation and continuing the pregnancy with fetal monitoring while awaiting spontaneous labour. A retrospective observational study compared a cohort of 360 pregnancies where labour was induced with 486 controls. All pregnancies were postterm (&gt;294 days) by an early ultrasound scan. Induction of labour was achieved with either prostaglandin vaginal pessaries or gel or forewater rupture and Syntocinon infusion. The control group consisted of women with postterm pregnancies who were not induced routinely and who usually had twice weekly fetal assessment with cardiotocography and/or ultrasound. Women who had their labour induced differed from those who awaited spontaneous labour. Nulliparas (OR 1.54; 95% CI 1.24-1.83) and married women (OR 1.76; 95% CI 1.45-2.06) were more likely to have their labour induced. There was no association between the type of caregiver and induction of labour. Induction of labour was associated with a reduction in the incidence of normal vaginal delivery (OR 0.63, 95% CI 0.43-0.92) and an increased incidence of operative vaginal delivery (OR 1.46; 95% CI 1.34-2.01). There was no difference in the overall rate of Caesarean section. There was no difference in fetal or neonatal outcomes. Parity had a major influence on delivery outcomes from a policy of induction of labour. Nulliparas in the induced group had worse outcomes, with only 43% achieving a normal vaginal delivery (OR 0.78, 95% CI 0.65-0.95). In contrast, for multiparas the induced group had better outcomes, with fewer Caesarean sections (OR 0.88, 95% CI 0.81-0.96). This retrospective observational study of current clinical practice shows that induction of labour for postterm pregnancy appears to be favoured by nulliparous married women. <STRONG>It suggests that induction of labour may improve delivery outcomes for multigravidas but has an adverse effect for nulliparas.</STRONG></p>
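<p>For readers unfamiliar with the odds ratios quoted above, the fragment below shows how an odds ratio and its 95% confidence interval are conventionally computed from a two-by-two table. The cell counts are invented and are not the study data.</p>

<pre>
# Illustration only: odds ratio and Wald 95% confidence interval from a 2x2 table.
import math

a, b = 120, 240   # induced group: outcome present / absent (invented counts)
c, d = 130, 356   # control group: outcome present / absent (invented counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
</pre>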
 
 
<hr>
 
<h3><a name="dp9807">98/07: Automating information processing tasks: An agent-based architecture</a></h3>
<a name="dp9807"></a><h3>98/07: Automating information processing tasks: An agent-based architecture</h3>
<h4>S. Cranefield, B. McKinlay, E. Moreale and M.K. Purvis</h4>
 
<p>This paper describes an agent-based architecture designed to provide automation support for users who perform information processing tasks using a collection of distributed and disparate software tools and on-line resources. The architecture extends previous work on agent-based software interoperability. The unique features of the information processing domain compared to distributed information retrieval are discussed and a novel extension of hierarchical task network (HTN) planning to support this domain is presented.</p>
 
 
 
<hr>
 
<h3><a name="dp9808">98/08: Spatial isomorphism</a></h3>
<a name="dp9808"></a><h3>98/08: Spatial isomorphism</h3>
<h4>A. Holt, S. MacDonell and G. Benwell</h4>
 
<p>This research continues with current innovative geocomputational research trends that aim to provide enhanced spatial analysis tools. The coupling of case-based reasoning (CBR) with GIS provides the focus of this paper. This coupling allows the retrieval, reuse, revision and retention of previous similar spatial cases. CBR is therefore used to develop more complex spatial data modelling methods (by using the CBR modules for improved spatial data manipulation) and provide enhanced exploratory geographical analysis tools (to find and assess certain patterns and relationships that may exist in spatial databases). This paper details the manner in which spatial similarity is assessed, for the purpose of re-using previous spatial cases. The authors consider similarity assessment a useful concept for retrieving and analysing spatial information as it may help researchers describe and explore a certain phenomenon, its immediate environment and its relationships to other phenomena. This paper will address the following questions: What makes phenomena similar? What is the definition of similarity? What principles govern similarity? And how can similarity be measured?</p>
<p><a href="papers/dp9808ah.pdf.gz">Download</a> (gzipped PDF, 358KB)</p>
 
<hr>
 
<h3><a name="dp9809">98/09: Development of a generic system for modelling spatial processes</a></h3>
<a name="dp9809"></a><h3>98/09: Development of a generic system for modelling spatial processes</h3>
<h4>A. Marr, R. Pascoe, G. Benwell and S. Mann</h4>
 
<p>This paper proposes a structure for the development of a generic graphical system for modelling spatial processes (SMSP). This system seeks to integrate the spatial data handling operations of a GIS with specialist numerical modelling functionality, by the description of the processes involved. A conceptual framework is described, the foundation of which is a set of six defined modules (or <EM>services</EM>) considered a minimum requirement for basic system operation. The services are identified following description of the three key components of systems integration, and the examination of the preferred integrating structure. The relationship of the integration components to sample commentary on the future requirements of integration is discussed, and the benefits and deficiencies of an implemented system for modelling spatial processes are noted.</p>
 
<p><a href="papers/dp9809am.pdf.gz">Download</a> (gzipped PDF, 511KB)</p>
 
<hr>
 
<h3><a name="dp9810">98/10: Neuro-fuzzy methods for environmental modelling</a></h3>
<a name="dp9810"></a><h3>98/10: Neuro-fuzzy methods for environmental modelling</h3>
<h4>M.K. Purvis, N. Kasabov, G. Benwell, Q. Zhou and F. Zhang</h4>
 
<p>This paper describes combined approaches of data preparation, neural network analysis, and fuzzy inferencing techniques (which we collectively call neuro-fuzzy engineering) to the problem of environmental modelling. The overall neuro-fuzzy architecture is presented, and specific issues associated with environmental modelling are discussed. A case study that shows how these techniques can be combined is presented for illustration. We also describe our current software implementation that incorporates neuro-fuzzy analytical tools into commercially available geographical information system software.</p>
 
View
53
Website/dp1999-abstracts-contents.htm
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<div class="sectionTitle">Information Science Discussion Papers Series: 1999 Abstracts</div>
 
<hr>
 
<h3><a name="dp9901">99/01: UML as an ontology modelling language</a></h3>
<a name="dp9901"></a><h3>99/01: UML as an ontology modelling language</h3>
<h4>S. Cranefield and M.K. Purvis</h4>
 
<p>Current tools and techniques for ontology development are based on the traditions of AI knowledge representation research. This research has led to popular formalisms such as KIF and KL-ONE style languages. However, these representations are little known outside AI research laboratories. In contrast, commercial interest has resulted in ideas from the object-oriented programming community maturing into industry standards and powerful tools for object-oriented analysis, design and implementation. These standards and tools have a wide and rapidly growing user community. This paper examines the potential for object-oriented standards to be used for ontology modelling, and in particular presents an ontology representation language based on a subset of the Unified Modeling Language together with its associated Object Constraint Language.</p>
 
 
 
<hr>
 
<h3><a name="dp9902">99/02: Evolving connectionist systems for on-line, knowledge-based learning: Principles and applications</a></h3>
<a name="dp9902"></a><h3>99/02: Evolving connectionist systems for on-line, knowledge-based learning: Principles and applications</h3>
<h4>N. Kasabov</h4>
 
<p>The paper introduces evolving connectionist systems (ECOS) as an effective approach to building on-line, adaptive intelligent systems. ECOS evolve through incremental, hybrid (supervised/unsupervised), on-line learning. They can accommodate new input data, including new features, new classes, etc. through local element tuning. New connections and new neurons are created during the operation of the system. The ECOS framework is presented and illustrated on a particular type of evolving neural networks&#8212;evolving fuzzy neural networks (EFuNNs). EFuNNs can learn spatial-temporal sequences in an adaptive way, through one pass learning. Rules can be inserted and extracted at any time of the system operation. The characteristics of ECOS and EFuNNs are illustrated on several case studies that include: adaptive pattern classification; adaptive, phoneme-based spoken language recognition; adaptive dynamic time-series prediction; intelligent agents.</p>
 
 
 
<hr>
 
<h3><a name="dp9903">99/03: Spatial-temporal adaptation in evolving fuzzy neural networks for on-line adaptive phoneme recognition</a></h3>
<a name="dp9903"></a><h3>99/03: Spatial-temporal adaptation in evolving fuzzy neural networks for on-line adaptive phoneme recognition</h3>
<h4>N. Kasabov and M. Watts</h4>
 
<p>The paper is a study on a new class of spatial-temporal evolving fuzzy neural network systems (EFuNNs) for on-line adaptive learning, and their applications for adaptive phoneme recognition. The systems evolve through incremental, hybrid (supervised / unsupervised) learning. They accommodate new input data, including new features, new classes, etc. through local element tuning. Both feature-based similarities and temporal dependencies, that are present in the input data, are learned and stored in the connections, and adjusted over time. This is an important requirement for the task of adaptive, speaker independent spoken language recognition, where new pronunciations and new accents need to be learned in an on-line, adaptive mode. Experiments with EFuNNs, and also with multi-layer perceptrons, and fuzzy neural networks (FuNNs), conducted on the whole set of New Zealand English phonemes, show the superiority and the potential of EFuNNs when used for the task. Spatial allocation of nodes and their aggregation in EFuNNs allow for similarity preserving and similarity observation within one phoneme data and across phonemes, while subtle temporal variations within one phoneme data can be learned and adjusted through temporal feedback connections. The experimental results support the claim that spatial-temporal organisation in EFuNNs can lead to a significant improvement in the recognition rate especially for the diphthong and the vowel phonemes in English, which in many cases are problematic for a system to learn and adjust in an adaptive way.</p>
 
 
 
<hr>
 
<h3><a name="dp9904">99/04: Dynamic evolving fuzzy neural networks with &#8216;m-out-of-n&#8217; activation nodes for on-line adaptive systems</a></h3>
<a name="dp9904"></a><h3>99/04: Dynamic evolving fuzzy neural networks with &#8216;m-out-of-n&#8217; activation nodes for on-line adaptive systems</h3>
<h4>N. Kasabov and Q. Song</h4>
 
<p>The paper introduces a new type of evolving fuzzy neural networks (EFuNNs), denoted as mEFuNNs, for on-line learning and their applications for dynamic time series analysis and prediction. mEFuNNs evolve through incremental, hybrid (supervised/unsupervised), on-line learning, like the EFuNNs. They can accommodate new input data, including new features, new classes, etc. through local element tuning. New connections and new neurons are created during the operation of the system. At each time moment the output vector of an mEFuNN is calculated based on the m-most activated rule nodes. Two approaches are proposed: (1) using weighted fuzzy rules of Zadeh-Mamdani type; (2) using Takagi-Sugeno fuzzy rules that utilise dynamically changing and adapting values for the inference parameters. It is proved that the mEFuNNs can effectively learn complex temporal sequences in an adaptive way and outperform EFuNNs, ANFIS and other neural network and hybrid models. Rules can be inserted, extracted and adjusted continuously during the operation of the system. The characteristics of the mEFuNNs are illustrated on two benchmark dynamic time series data sets, as well as on two real case studies for on-line adaptive control and decision making. Aggregation of rule nodes in evolved mEFuNNs can be achieved through the fuzzy C-means clustering algorithm, which is also illustrated on the benchmark data sets. mEFuNNs that are regularly trained and aggregated in an on-line, self-organised mode perform as well as, or better than, mEFuNNs that use the fuzzy C-means clustering algorithm for off-line rule node generation on the same data set.</p>
 
 
 
<hr>
 
<h3><a name="dp9905">99/05: Hybrid neuro-fuzzy inference systems and their application for on-line adaptive learning of nonlinear dynamical systems</a></h3>
<a name="dp9905"></a><h3>99/05: Hybrid neuro-fuzzy inference systems and their application for on-line adaptive learning of nonlinear dynamical systems</h3>
<h4>J. Kim and N. Kasabov</h4>
 
<p>In this paper, an adaptive neuro-fuzzy system, called HyFIS, is proposed to build and optimise fuzzy models. The proposed model introduces the learning power of neural networks into the fuzzy logic systems and provides linguistic meaning to the connectionist architectures. Heuristic fuzzy logic rules and input-output fuzzy membership functions can be optimally tuned from training examples by a hybrid learning scheme composed of two phases: the phase of rule generation from data, and the phase of rule tuning by using the error backpropagation learning scheme for a neural fuzzy system. In order to illustrate the performance and applicability of the proposed neuro-fuzzy hybrid model, extensive simulation studies of nonlinear complex dynamics are carried out. The proposed method can be applied to on-line incremental adaptive learning for the purpose of prediction and control of non-linear dynamical systems.</p>
 
 
 
<hr>
 
<h3><a name="dp9906">99/06: A distributed architecture for environmental information systems</a></h3>
<a name="dp9906"></a><h3>99/06: A distributed architecture for environmental information systems</h3>
<h4>M.K. Purvis, S. Cranefield and M. Nowostawski</h4>
 
<p>The increasing availability and variety of large environmental data sets is opening new opportunities for data mining and useful cross-referencing of disparate environmental data sets distributed over a network. In order to take advantage of these opportunities, environmental information systems will need to operate effectively in a distributed, open environment. In this paper, we describe the New Zealand Distributed Information System (NZDIS) software architecture for environmental information systems. In order to optimise extensibility, openness, and flexible query processing, the architecture is organised into collaborating software agents that communicate by means of a standard declarative agent communication language. The metadata of environmental data sources are stored as part of agent ontologies, which represent information models of the domain of the data repository. The agents and the associated ontological framework are designed as much as possible to take advantage of standard object-oriented technology, such as CORBA, UML, and OQL, in order to enhance the openness and accessibility of the system.</p>
 
 
 
<hr>
 
<h3><a name="dp9907">99/07: From hybrid adjustable neuro-fuzzy systems to adaptive connectionist-based systems for phoneme and word recognition</a></h3>
<a name="dp9907"></a><h3>99/07: From hybrid adjustable neuro-fuzzy systems to adaptive connectionist-based systems for phoneme and word recognition</h3>
<h4>N. Kasabov, R. Kilgour and S. Sinclair</h4>
 
<p>This paper discusses the problem of adaptation in automatic speech recognition systems (ASRS) and suggests several strategies for adaptation in a modular architecture for speech recognition. The architecture allows for adaptation at different levels of the recognition process, where modules can be adapted individually based on their performance and the performance of the whole system. Two realisations of this architecture are presented along with experimental results from small-scale experiments. The first realisation is a hybrid system for speaker-independent phoneme-based spoken word recognition, consisting of neural networks for recognising English phonemes and fuzzy systems for modelling acoustic and linguistic knowledge. This system is adjustable by additional training of individual neural network modules and tuning the fuzzy systems. The increased accuracy of the recognition through appropriate adjustment is also discussed. The second realisation of the architecture is a connectionist system that uses fuzzy neural networks (FuNNs) to accommodate both a priori linguistic knowledge and data from a speech corpus. A method for on-line adaptation of FuNNs is also presented.</p>
 
 
 
<hr>
 
<h3><a name="dp9908">99/08: Adaptive, evolving, hybrid connectionist systems for image pattern recognition</a></h3>
<a name="dp9908"></a><h3>99/08: Adaptive, evolving, hybrid connectionist systems for image pattern recognition</h3>
<h4>N. Kasabov, S. Israel and B. Woodford</h4>
 
<p>The chapter presents a new methodology for building adaptive, incremental learning systems for image pattern classification. The systems are based on dynamically evolving fuzzy neural networks that are neural architectures to realise connectionist learning, fuzzy logic inference, and case-based reasoning. The methodology and the architecture are applied on two sets of real data&#8212;one of satellite image data, and the other of fruit image data. The proposed method and architecture encourage fast learning, life-long learning and on-line learning when the system operates in a changing environment of image data.</p>
 
 
 
<hr>
 
<h3><a name="dp9909">99/09: The concepts of hidden Markov model in speech recognition</a></h3>
<a name="dp9909"></a><h3>99/09: The concepts of hidden Markov model in speech recognition</h3>
<h4>W. Abdulla and N. Kasabov</h4>
 
<p>The speech recognition field is one of the most challenging fields that has faced scientists for a long time, and a complete solution is still far from reach. Efforts, supported by substantial funding from companies, are concentrated on a range of related and supportive approaches to reaching the final goal, and then applying it to the enormous range of applications that are still waiting for successful speech recognisers free from the constraints of speakers, vocabularies or environment. This task is not an easy one, due to the interdisciplinary nature of the problem and because it requires speech perception to be incorporated in the recogniser (Speech Understanding Systems), which in turn points strongly to the use of intelligence within the systems.</p>
 
 
 
<hr>
 
<h3><a name="dp9910">99/10: Finding medical information on the Internet: Who should do it and what should they know</a></h3>
<a name="dp9910"></a><h3>99/10: Finding medical information on the Internet: Who should do it and what should they know</h3>
<h4>D. Parry</h4>
 
<p>More and more medical information is appearing on the Internet, but it is not easy to get at the nuggets amongst all the spoil. Bruce McKenzie&#8217;s editorial in the December 1997 edition of <EM>SIM Quarterly</EM> dealt very well with the problems of quality, but I would suggest that the problem of accessibility is as much of a challenge. As ever-greater quantities of high quality medical information are published electronically, the need to be able to find it becomes imperative. There are a number of tools to find what you want on the Internet&#8212;search engines, agents, indexing and classification schemes and hyperlinks, but their use requires care, skill and experience.</p>
 
 
 
<hr>
 
<h3><a name="dp9911">99/11: Software metrics data analysis&#8212;Exploring the relative performance of some commonly used modeling techniques</a></h3>
<a name="dp9911"></a><h3>99/11: Software metrics data analysis&#8212;Exploring the relative performance of some commonly used modeling techniques</h3>
<h4>A. Gray and S. MacDonell</h4>
 
<p>Whilst some software measurement research has been unquestionably successful, other research has struggled to enable expected advances in project and process management. Contributing to this lack of advancement has been the incidence of inappropriate or non-optimal application of various model-building procedures. This obviously raises questions over the validity and reliability of any results obtained as well as the conclusions that may have been drawn regarding the appropriateness of the techniques in question. In this paper we investigate the influence of various data set characteristics and the purpose of analysis on the effectiveness of four model-building techniques&#8212;three statistical methods and one neural network method. In order to illustrate the impact of data set characteristics, three separate data sets, drawn from the literature, are used in this analysis. In terms of predictive accuracy, it is shown that no one modeling method is best in every case. Some consideration of the characteristics of data sets should therefore occur before analysis begins, so that the most appropriate modeling method is then used. Moreover, issues other than predictive accuracy may have a significant influence on the selection of model-building methods. These issues are also addressed here and a series of guidelines for selecting among and implementing these and other modeling techniques is discussed.</p>
 
 
 
<hr>
 
<h3><a name="dp9912">99/12: Software forensics for discriminating between program authors using case-based reasoning, feed-forward neural networks and multiple discriminant analysis</a></h3>
<a name="dp9912"></a><h3>99/12: Software forensics for discriminating between program authors using case-based reasoning, feed-forward neural networks and multiple discriminant analysis</h3>
<h4>S. MacDonell, A. Gray, G. MacLennan and P. Sallis</h4>
 
<p>Software forensics is a research field that, by treating pieces of program source code as linguistically and stylistically analyzable entities, attempts to investigate aspects of computer program authorship. This can be performed with the goal of identification, discrimination, or characterization of authors. In this paper we extract a set of 26 standard authorship metrics from 351 programs by 7 different authors. The use of feed-forward neural networks, multiple discriminant analysis, and case-based reasoning is then investigated in terms of classification accuracy for the authors on both training and testing samples. The first two techniques produce remarkably similar results, with the best results coming from the case-based reasoning models. All techniques have high prediction accuracy rates, supporting the feasibility of the task of discriminating program authors based on source-code measurements.</p>
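<p>The sketch below illustrates the general shape of the discriminant-analysis approach described above: programs are represented as vectors of authorship metrics and a model is trained to predict the author. The metric values, author labels and the use of scikit-learn are all assumptions for illustration, not the paper&#8217;s data or implementation.</p>

<pre>
# Hedged sketch: discriminating program authors from source-code metrics with
# linear discriminant analysis. All values below are fabricated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Each row: e.g. [mean line length, comment ratio] for one program.
X = np.array([[32.1, 0.12], [30.5, 0.15], [45.0, 0.05],
              [44.2, 0.06], [38.7, 0.22], [39.9, 0.20]])
y = np.array(["A", "A", "B", "B", "C", "C"])   # program authors

model = LinearDiscriminantAnalysis().fit(X, y)
print(model.predict([[33.0, 0.13]]))           # predicted author for a new program
</pre>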
 
 
 
<hr>
 
<h3><a name="dp9913">99/13: FULSOME: Fuzzy logic for software metric practitioners and researchers</a></h3>
<a name="dp9913"></a><h3>99/13: FULSOME: Fuzzy logic for software metric practitioners and researchers</h3>
<h4>S. MacDonell, A. Gray and J. Calvert</h4>
 
<p>There has been increasing interest in recent times for using fuzzy logic techniques to represent software metric models, especially those predicting development effort. The use of fuzzy logic for this application area offers several advantages when compared to other commonly used techniques. These include the use of a single model with different levels of precision for inputs and outputs used throughout the development life cycle, the possibility of model development with little or no data, and its effectiveness when used as a communication tool. The use of fuzzy logic in any applied field however requires that suitable tools are available for both practitioners and researchers&#8212;satisfying both interface and functionality related requirements. After outlining some of the specific needs of the software metrics community, including results from a survey of software developers on this topic, the paper describes the use of a set of tools called FULSOME (Fuzzy Logic for Software Metrics). The development of a simple fuzzy logic system by a software metrician and subsequent tuning are then discussed using a real-world set of software metric data. The automatically generated fuzzy model performs acceptably when compared to regression-based models.</p>
 
 
 
<hr>
 
<h3><a name="dp9914">99/14: Assessing prediction systems</a></h3>
<a name="dp9914"></a><h3>99/14: Assessing prediction systems</h3>
<h4>B. Kitchenham, S. MacDonell, L. Pickard and M. Shepperd</h4>
 
<p>For some years software engineers have been attempting to develop useful prediction systems to estimate such attributes as the effort to develop a piece of software and the likely number of defects. Typically, prediction systems are proposed and then subjected to empirical evaluation. Claims are then made with regard to the quality of the prediction systems. A wide variety of prediction quality indicators have been suggested in the literature. Unfortunately, we believe that a somewhat confusing state of affairs prevails and that this impedes research progress. This paper aims to provide the research community with a better understanding of the meaning of, and relationship between, these indicators. We critically review twelve different approaches by considering them as descriptors of the residual variable. We demonstrate that the two most popular indicators MMRE and pred(25) are in fact indicators of the spread and shape respectively of prediction accuracy where prediction accuracy is the ratio of estimate to actual (or actual to estimate). Next we highlight the impact of the choice of indicator by comparing three prediction systems derived using four different simulated datasets. We demonstrate that the results of such a comparison depend upon the choice of indicator, the analysis technique, and the nature of the dataset used to derive the predictive model. We conclude that prediction systems cannot be characterised by a single summary statistic. We suggest that we need indicators of the central tendency and spread of accuracy as well as indicators of shape and bias. For this reason, boxplots of relative error or residuals are useful alternatives to simple summary metrics.</p>
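<p>A minimal sketch (with invented numbers) of the two indicators discussed above follows: MMRE summarises the spread of the magnitude of relative error, while pred(25) reports the proportion of estimates falling within 25% of the actual value.</p>

<pre>
# Illustration only: MMRE and pred(25) for a set of invented actual/estimated values.
import numpy as np

actual   = np.array([100.0, 250.0, 400.0,  80.0, 600.0])
estimate = np.array([120.0, 230.0, 520.0,  60.0, 590.0])

mre = np.abs(actual - estimate) / actual          # magnitude of relative error
mmre = mre.mean()
within_25 = np.less_equal(mre, 0.25)              # estimates within 25% of actual
pred_25 = within_25.mean()

print(f"MMRE = {mmre:.2f}, pred(25) = {pred_25:.0%}")
</pre>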
 
<p><a href="papers/dp9915sm.pdf.gz">Download</a> (gzipped PDF, 167KB)</p>
 
<hr>
 
<h3><a name="dp9916">99/16: Factors systematically associated with errors in subjective estimates of software development effort: The stability of expert judgment</a></h3>
<a name="dp9916"></a><h3>99/16: Factors systematically associated with errors in subjective estimates of software development effort: The stability of expert judgment</h3>
<h4>A. Gray, S. MacDonell and M. Shepperd</h4>
 
<p>Software metric-based estimation of project development effort is most often performed by expert judgment rather than by using an empirically derived model (although such a model may be used by the expert to assist their decision). One question that can be asked about these estimates is how stable they are with respect to characteristics of the development process and product. This stability can be assessed in relation to the degree to which the project has advanced over time, the type of module for which the estimate is being made, and the characteristics of that module. In this paper we examine a set of expert-derived estimates for the effort required to develop a collection of modules from a large health-care system. Statistical tests are used to identify relationships between the type (screen or report) and characteristics of modules and the likelihood of the associated development effort being under-estimated, approximately correct, or over-estimated. Distinct relationships are found that suggest that the estimation process being examined was not unbiased with respect to such characteristics.</p>
 
<p><a href="papers/dp9916ag.pdf.gz">Download</a> (gzipped PDF, 199KB)</p>
 
<hr>
 
<h3><a name="dp9917">99/17: The NZDIS project: An agent-based distributed information systems architecture</a></h3>
<a name="dp9917"></a><h3>99/17: The NZDIS project: An agent-based distributed information systems architecture</h3>
<h4>M.K. Purvis, S. Cranefield, G. Bush, D. Carter, B. McKinlay, M. Nowostawski and R. Ward</h4>
 
<p>This paper describes an architecture for building distributed information systems from existing information resources, based on distributed object and software agent technologies. This architecture is being developed as part of the New Zealand Distributed Information Systems (NZDIS) project.</p>
 
<p><a href="papers/dp9917mp.pdf.gz">Download</a> (gzipped PDF, 171KB)</p>
 
<hr>
 
<h3><a name="dp9918">99/18: HTN planning for information processing tasks</a></h3>
<a name="dp9918"></a><h3>99/18: HTN planning for information processing tasks</h3>
<h4>S. Cranefield</h4>
 
<p>This paper discusses the problem of integrated planning and execution for tasks that involve the consumption, production and alteration of relational information. Unlike information retrieval problems, the information processing domain requires explicit modelling of the changing information state of the domain and how the validity of resources changes as actions are performed. A solution to this problem is presented in the form of a specialised hierarchical task network planning model. A distinction is made between the information processing effects of an action (modelled in terms of constraints relating the domain information before and after the action) and the actions&#8217; preconditions and effects which are expressed in terms of required, produced and invalidated resources. The information flow between tasks is explicitly represented in methods and plans, including any required information-combining operations such as selection and union.</p>
 
<p><a href="papers/dp9918sc.pdf.gz">Download</a> (gzipped PDF, 147KB)</p>
 
<hr>
 
<h3><a name="dp9919">99/19: Automated scoring of practical tests in an introductory course in information technology</a></h3>
<a name="dp9919"></a><h3>99/19: Automated scoring of practical tests in an introductory course in information technology</h3>
<h4>G. Kennedy</h4>
 
<p>In an introductory course in information technology at the University of Otago the acquisition of practical skills is considered to be a prime objective. An effective way of assessing the achievement of this objective is by means of a &#8216;practical test&#8217;, in which students are required to accomplish simple tasks in a controlled environment. The assessment of such work demands a high level of expertise, is very labour intensive and can suffer from marker inconsistency, particularly with large candidatures.</p>
 
<p><a href="papers/dp9919gk.pdf.gz">Download</a> (gzipped PDF, 138KB)</p>
 
<hr>
 
<h3><a name="dp9920">99/20: Fuzzy logic for software metric models throughout the development life-cycle</a></h3>
<a name="dp9920"></a><h3>99/20: Fuzzy logic for software metric models throughout the development life-cycle</h3>
<h4>A. Gray and S. MacDonell</h4>
 
<p>One problem faced by managers who are using project management models is the elicitation of numerical inputs. Obtaining these with any degree of confidence early in a project is not always feasible. Related to this difficulty is the risk of precisely specified outputs from models leading to overcommitment. These problems can be seen as the collective failure of software measurements to represent the inherent uncertainties in managers&#8217; knowledge of the development products, resources, and processes. It is proposed that fuzzy logic techniques can help to overcome some of these difficulties by representing the imprecision in inputs and outputs, as well as providing a more expert-knowledge-based approach to model building. The use of fuzzy logic for project management, however, should not be the same throughout the development life cycle. Different levels of available information and desired precision suggest that it can be used differently depending on the current phase, although a single model can be used for consistency.</p>
 
 
 
<hr>
 
<h3><a name="dp9921">99/21: Wayfinding/navigation within a QTVR virtual environment: Preliminary results</a></h3>
<a name="dp9921"></a><h3>99/21: Wayfinding/navigation within a QTVR virtual environment: Preliminary results</h3>
<h4>B. Norris, D. Rashid and W. Wong</h4>
 
<p>This paper reports on an investigation into wayfinding principles, and their effectiveness within a virtual environment. To investigate these principles, a virtual environment of an actual museum was created using QuickTime Virtual Reality. Wayfinding principles used in the real world were identified and used to design the interaction of the virtual environment. The initial findings suggest that real-world navigation principles, such as the use of maps and landmarks, can significantly help navigation within this virtual environment. However, navigation difficulties were discovered through an Activity Theory-based Cognitive Task Analysis.</p>
 
 
 
<hr>
 
<h3><a name="dp9922">99/22: Predictive modelling of plankton dynamics in freshwater lakes using genetic programming</a></h3>
<a name="dp9922"></a><h3>99/22: Predictive modelling of plankton dynamics in freshwater lakes using genetic programming</h3>
<h4>P. Whigham and F. Recknagel</h4>
 
<p>Building predictive time series models for freshwater systems is important both for understanding the dynamics of these natural systems and in the development of decision support and management software. This work describes the application of a machine learning technique, namely genetic programming (GP), to the prediction of chlorophyll-a. The system endeavoured to evolve several mathematical time series equations, based on limnological and climate variables, which could predict the dynamics of chlorophyll-a on unseen data. The predictive accuracy of the genetic programming approach was compared with an artificial neural network and a deterministic algal growth model. The GP system evolved some solutions which were improvements over the neural network and showed that the transparent nature of the solutions may allow inferences about underlying processes to be made. This work demonstrates that non-linear processes in natural systems may be successfully modelled through the use of machine learning techniques. Further, it shows that genetic programming may be used as a tool for exploring the driving processes underlying freshwater system dynamics.</p>
 
<p><a href="papers/dp9922pw.pdf.gz">Download</a> (gzipped PDF, 234KB)</p>
 
<hr>
 
<h3><a name="dp9923">99/23: Modifications to Smith&#8217;s method for deriving normalised relations from a functional dependency diagram</a></h3>
<a name="dp9923"></a><h3>99/23: Modifications to Smith&#8217;s method for deriving normalised relations from a functional dependency diagram</h3>
<h4>N. Stanger</h4>
 
<p>Smith&#8217;s method (Smith, 1995) is a formal technique for deriving a set of normalised relations from a functional dependency diagram (FDD). Smith&#8217;s original rules for deriving these relations are incomplete, as they do not fully address the issue of determining the foreign key links between relations. In addition, one of the rules for deriving foreign keys can produce incorrect results, while the other rule is difficult to automate. In this paper are described solutions to these issues.</p>
 
<p><a href="papers/dp9923ns.pdf.gz">Download</a> (gzipped PDF, 158KB)</p>
 
<hr>
 
<h3><a name="dp9924">99/24: The development of an electronic distance learning course in health informatics</a></h3>
<a name="dp9924"></a><h3>99/24: The development of an electronic distance learning course in health informatics</h3>
<h4>D. Parry, S. Cockcroft, A. Breton, D. Abernethy and J. Gillies</h4>
 
<p>Since 1997 the authors have been involved in the development of a distance learning course in health informatics. The course is delivered via CD-ROM and the Internet. During this process we have learned valuable lessons about computer-assisted collaboration and cooperative work. In particular we have developed methods of using the software tools available for communication and education. We believe that electronic distance learning offers a realistic means of providing education in health informatics and other fields to students who, for reasons of geography or work commitments, would not be able to participate in a conventional course.</p>
 
<p><a href="papers/dp9924dp.pdf.gz">Download</a> (gzipped PDF, 484KB)</p>
 
<hr>
 
<h3><a name="dp9925">99/25: Infiltrating IT into primary care: A case study</a></h3>
<a name="dp9925"></a><h3>99/25: Infiltrating IT into primary care: A case study</h3>
<h4>S. Cockcroft, D. Parry, A. Breton, D. Abernethy and J. Gillies</h4>
 
<p>Web based approaches to tracking students on placement are receiving much interest in the field of medical education. The work presented here describes a web-based solution to the problem of managing data collection from student encounters with patients whilst on placement. The solution has been developed by postgraduate students under the direction of staff of the health informatics diploma. Specifically, the system allows undergraduate students on placement or in the main hospital to access a web-based front end to a database designed to store the data that they are required to gather. The system also has the important effect of providing a rationale for the provision of electronic communication to the undergraduate students within the context of healthcare delivery. We believe that an additional effect will be to expose practising healthcare providers to electronic information systems, along with the undergraduates who are trained to use them, and increase the skill base of the practitioners.</p>
 
<p><a href="papers/dp9925so.pdf.gz">Download</a> (gzipped PDF, 72KB)</p>
 
<hr>
 
<h3><a name="dp9926">99/26: Using rough sets to study expert behaviour in induction of labour</a></h3>
<a name="dp9926"></a><h3>99/26: Using rough sets to study expert behaviour in induction of labour</h3>
<h4>D. Parry, W.K. Yeap and N. Pattison</h4>
 
<p>The rate of induction of labour (IOL) is increasing, despite no obvious increase in the incidence of the major indications. However the rate varies widely between different centres and practitioners and this does not seem to be due to variations in patient populations. The IOL decision-making process of six clinicians was recorded and examined using hypothetical scenarios presented on a computer. Several rules were identified from a rough sets analysis of the data. These rules were compared to the actual practice of these clinicians in 1994. Initial tests of these rules show that they may form a suitable set for developing an expert system for the induction of labour.</p>
 
<p><a href="papers/dp9926dp.pdf.gz">Download</a> (gzipped PDF, 78KB)</p>
 
<hr>
 
<h3><a name="dp9927">99/27: Using the Internet to teach health informatics</a></h3>
<a name="dp9927"></a><h3>99/27: Using the Internet to teach health informatics</h3>
<h4>D. Parry, A. Breton, D. Abernethy, S. Cockcroft and J. Gillies</h4>
 
<p>Since July 1998 we have been teaching an Internet-based distance learning course in health informatics (<a href="http://basil.otago.ac.nz:800">http://basil.otago.ac.nz:800</a>). The development of this course and the experiences we have had running it are described in this paper. The course was delivered using paper materials, a face-to-face workshop, a CD-ROM and Internet communication tools. We currently have about 30 students around New Zealand, a mixture of physicians, nurses and other health staff. Some teaching methods have worked, some haven&#8217;t, but in the process we have learned a number of valuable lessons.</p>
 
View
41
Website/dp2000-abstracts-contents.htm
<link rel="Stylesheet" href="/infosci/styles.css" type="text/css">
<div class="sectionTitle">Information Science Discussion Papers Series: 2000 Abstracts</div>
 
 
<hr>
 
<h3><a name="dp2000-01">2000/01: Investigating complexities through computational techniques</a></h3>
<a name="dp2000-01"></a><h3>2000/01: Investigating complexities through computational techniques</h3>
<h4>A. Holt</h4>
 
<p>This article outlines similarity applied to the general environment and geographical information domains. The hypothesis is that if the physical and social sciences manifest similar amenities, then similarity would be a generative technique for analysing the cached information inherent in the data retrieved. Similarity is examined concerning the spatial grouping of natural kinds in a complex environment.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-02">2000/02: Integrating environmental information: Incorporating metadata in a distributed information systems architecture</a></h3>
<a name="dp2000-02"></a><h3>2000/02: Integrating environmental information: Incorporating metadata in a distributed information systems architecture</h3>
<h4>S. Cranefield and M.K. Purvis</h4>
 
<p>An approach is presented for incorporating metadata constraints into queries to be processed by a distributed environmental information system. The approach, based on a novel metamodel unifying concepts from the Unified Modelling Language (UML), the Object Query Language (OQL), and the Resource Description Framework (RDF), allows metadata information to be represented and processed in combination with regular data queries.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-03">2000/03: Modelling the emergence of speech sound categories in evolving connectionist systems</a></h3>
<a name="dp2000-03"></a><h3>2000/03: Modelling the emergence of speech sound categories in evolving connectionist systems</h3>
<h4>J. Taylor, N. Kasabov and R. Kilgour</h4>
 
<p>We report on the clustering of nodes in internally represented acoustic space. Learners of different languages partition perceptual space distinctly. Here, an Evolving Connectionist-Based System (ECOS) is used to model the perceptual space of New Zealand English. Currently, the system evolves in an unsupervised, self-organising manner. The perceptual space can be visualised, and the important features of the input patterns analysed. Additionally, the path of the internal representations can be seen. The results here will be used to develop a supervised system that can be used for speech recognition based on the evolved, internal sub-word units.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-04">2000/04: Analysis of the New Zealand and M<span style="text-decoration:overline">a</span>ori on-line translator</a></h3>
<a name="dp2000-04">2000/04: Analysis of the New Zealand and M<span style="text-decoration:overline"></a><h3>a</span>ori on-line translator</h3>
<h4>M. Laws, R. Kilgour and M. Watts</h4>
 
<p>The English and M<span style="text-decoration:overline">a</span>ori word translator ng<span style="text-decoration:overline">a</span> aho whakam<span style="text-decoration:overline">a</span>ori-<span style="text-decoration:overline">a</span>-tuhi was designed to provide single head-word translations to on-line web users. There are over 13,000 words, all based on traditional text sources and selected because of their high frequency of use within each of the respective languages. The translator has been operational for well over a year now, and it has had the highest web traffic usage in the Department of Information Science. Two log files were generated to record domain hits and language translations, both of which provided the up-to-date data for the analysis contained in this paper.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-05">2000/05: Development of a M<span style="text-decoration:overline">a</span>ori database for speech perception and generation</a></h3>
<a name="dp2000-05">2000/05: Development of a M<span style="text-decoration:overline"></a><h3>a</span>ori database for speech perception and generation</h3>
<h4>M. Laws</h4>
 
<p>M<span style="text-decoration:overline">a</span>ori speech data collection and analysis is an ongoing process, as new and existing data sets are continuously accessed for many different experimental speech perception and generation processing tasks. A data management system is an important tool to facilitate the systematic techniques applied to the speech and language data. The core components for M<span style="text-decoration:overline">a</span>ori speech and language databases, translation systems, speech recognition and speech synthesis have been identified as research themes. The latter component will be the main area of discussion here. To hasten the development of M<span style="text-decoration:overline">a</span>ori speech synthesis, joint collaborative research with established international projects has begun. This will allow the M<span style="text-decoration:overline">a</span>ori language to be presented to the wider scientific community well in advance of other similar languages, many times its own size and distribution. Propagation of the M<span style="text-decoration:overline">a</span>ori language via the information communication technology (ICT) medium is advantageous to its long-term survival.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-06">2000/06: Evolving self-organizing maps for on-line learning, data analysis and modelling</a></h3>
<a name="dp2000-06"></a><h3>2000/06: Evolving self-organizing maps for on-line learning, data analysis and modelling</h3>
<h4>D. Deng and N. Kasabov</h4>
 
<p>In real world information systems, data analysis and processing usually need to be done in an on-line, self-adaptive way. In this respect, neural algorithms for incremental learning and constructive network models are of increasing interest. In this paper we present a new algorithm, the evolving self-organizing map (ESOM), which features fast one-pass learning, a dynamic network structure, and good visualisation ability. Simulations have been carried out on some benchmark data sets for classification and prediction tasks, as well as on some macroeconomic data for data analysis. Compared with other methods, ESOM achieved better classification with much shorter learning time. Its performance for time series modelling is also comparable, requiring more hidden units but with only one-pass learning. Our results demonstrate that ESOM is an effective computational model for on-line learning, data analysis and modelling.</p>
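<p>A minimal sketch of the general idea behind such one-pass, growing map algorithms (not the published ESOM algorithm itself; the distance threshold and learning rate below are illustrative, and neighbourhood updates are omitted):</p>

<pre>
import math

def one_pass_growing_map(data, threshold=1.0, rate=0.2):
    """Each sample either updates its nearest node or spawns a new one."""
    nodes = [list(data[0])]
    for x in data[1:]:
        dists = [math.dist(x, w) for w in nodes]
        winner = dists.index(min(dists))
        if dists[winner] > threshold:
            nodes.append(list(x))                      # grow the network
        else:
            nodes[winner] = [w + rate * (xi - w)       # move winner towards x
                             for w, xi in zip(nodes[winner], x)]
    return nodes

print(one_pass_growing_map([(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9)]))
</pre>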
 
 
 
<hr>
 
<h3><a name="dp2000-07">2000/07: Extending agent messaging to enable OO information exchange</a></h3>
<a name="dp2000-07"></a><h3>2000/07: Extending agent messaging to enable OO information exchange</h3>
<h4>S. Cranefield and M.K. Purvis</h4>
 
<p>It is canonical practice in agent-based systems to use a declarative format for the exchange of information. The increasing usage and facility of object-oriented tools and techniques, however, suggests there may be benefits in combining the use of object-oriented modelling approaches with agent-based messaging. In this paper we outline our efforts in connection with the New Zealand Distributed Information Systems project to use object-oriented knowledge representation in an agent-based architecture. Issues and tradeoffs are discussed, as well as the possible extensions to current agent-based message protocols that may be necessary in order to support object-oriented information exchange.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-08">2000/08: Is it an ontology or an abstract syntax? Modelling objects, knowledge and agent messages</a></h3>
<a name="dp2000-08"></a><h3>2000/08: Is it an ontology or an abstract syntax? Modelling objects, knowledge and agent messages</h3>
<h4>S. Cranefield, M.K. Purvis and M. Nowostawski</h4>
 
<p>This paper describes a system of interlinked ontologies to describe the concepts underlying FIPA agent communication. A meta-modelling approach is used to relate object-oriented domain ontologies and abstract models of agent communication and content languages and to describe them in a single framework. The modelling language used is the Unified Modeling Language, which is extended by adding the concepts of resource and reference. The resulting framework provides an elegant basis for the development of agent systems that combine object-oriented information representation with agent messaging protocols.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-09">2000/09: A viewpoint-based framework for discussing the use of multiple modelling representations</a></h3>
<a name="dp2000-09"></a><h3>2000/09: A viewpoint-based framework for discussing the use of multiple modelling representations</h3>
<h4>N. Stanger</h4>
 
<p>When modelling a real-world phenomenon, it can often be useful to have multiple descriptions of the phenomenon, each expressed using a different modelling approach or representation. Different representations such as entity-relationship modelling, data flow modelling and use case modelling allow analysts to describe different aspects of real-world phenomena, thus providing a more thorough understanding than if a single representation were used. Researchers working with multiple representations have approached the problem from many different fields, resulting in a diverse and potentially confusing set of terminologies. In this paper is described a viewpoint-based framework for discussing the use of multiple modelling representations to describe real-world phenomena. This framework provides a consistent and integrated terminology for researchers working with multiple representations. An abstract notation is also defined for expressing concepts within the framework.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-10">2000/10: A shape metric for evolving time series models</a></h3>
<a name="dp2000-10"></a><h3>2000/10: A shape metric for evolving time series models</h3>
<h4>P. Whigham and C. Aldridge</h4>
 
<p>Most applications of Genetic Programming to time series modelling use a fitness measure for comparing potential solutions that treats each point in the time series independently. This non-temporal approach can lead to some potential solutions being given a relatively high fitness measure even though they do not correspond to the training data when the overall shape of the series is taken into account. This paper develops two fitness measures which emphasise the concept of shape when measuring the similarity between a training and an evolved time series. One approach extends the root mean square error to higher-order derivatives of the series. The second approach uses a simplified derivative concept that describes shape in terms of positive, negative and zero slope.</p>
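<p>A minimal sketch of the two ideas described above (the exact formulations and weightings in the paper may differ): an error measure computed on first differences penalises mismatched slopes, and a sign-based comparison scores agreement of slope direction only.</p>

<pre>
import math

def diffs(series):
    # first differences, a discrete analogue of the first derivative
    return [b - a for a, b in zip(series, series[1:])]

def rmse(xs, ys):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

def slope_sign(d):
    # -1, 0 or +1 according to whether the series falls, is flat, or rises
    return 0 if d == 0 else int(math.copysign(1.0, d))

def shape_agreement(target, model):
    signs_t = [slope_sign(d) for d in diffs(target)]
    signs_m = [slope_sign(d) for d in diffs(model)]
    return sum(1 for s, t in zip(signs_t, signs_m) if s == t) / len(signs_t)

target = [1.0, 2.0, 4.0, 3.0, 3.0]
model  = [1.0, 1.5, 3.5, 3.0, 2.5]
print(rmse(diffs(target), diffs(model)), shape_agreement(target, model))
</pre>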
 
 
 
<hr>
 
<h3><a name="dp2000-11">2000/11: Translating descriptions of a viewpoint among different representations</a></h3>
<a name="dp2000-11"></a><h3>2000/11: Translating descriptions of a viewpoint among different representations</h3>
<h4>N. Stanger</h4>
 
<p>An important part of the systems development process is building models of real-world phenomena. These phenomena are described by many different kinds of information, and this diversity has resulted in a wide variety of modelling representations. Some types of information are better expressed by some representations than others, so it is sensible to use multiple representations to describe a real-world phenomenon. The author has developed an approach to facilitating the use of multiple representations within a single viewpoint by translating descriptions of the viewpoint among different representations. An important issue with such translations is their quality, or how well they map constructs of one representation to constructs of another representation. Two possible methods for improving translation quality, heuristics and enrichment, are proposed in this paper, and a preliminary metric for measuring relative translation quality is described.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-12">2000/12: An adaptive distributed workflow system framework</a></h3>
<a name="dp2000-12"></a><h3>2000/12: An adaptive distributed workflow system framework</h3>
<h4>M.K. Purvis, M.A. Purvis and S. Lemalu</h4>
 
<p>Workflow management systems are increasingly used to assist the automation of business processes that involve the exchange of documents, information, or task execution results. Recent developments in distributed information system technology now make it possible to extend the workflow management system idea to much wider spheres of activity in the industrial and commercial world. This paper describes a framework under development that employs such technology so that software tools and processes may interoperate in a distributed and dynamic environment. Key technical elements of the framework include the use of coloured Petri nets and distributed object technology (CORBA).</p>
 
 
 
<hr>
 
<h3><a name="dp2000-13">2000/13: Platforms for agent-oriented software</a></h3>
<a name="dp2000-13"></a><h3>2000/13: Platforms for agent-oriented software</h3>
<h4>M. Nowostawski, G. Bush, M.K. Purvis and S. Cranefield</h4>
 
<p>The use of modelling abstractions to map from items in the real-world to objects in the computational domain is useful both for the effective implementation of abstract problem solutions and for the management of software complexity. This paper discusses the new approach of agent-oriented software engineering (AOSE), which uses the notion of an autonomous agent as its fundamental modelling abstraction. For the AOSE approach to be fully exploited, software engineers must be able to gain leverage from an agent software architecture and framework, and there are several such frameworks now publicly available. At the present time, however, there is little information concerning the options that are available and what needs to be considered when choosing or developing an agent framework. We consider three different agent software architectures that are (or will be) publicly available and evaluate some of the design and architectural differences and trade-offs that are associated with them and their impact on agent-oriented software development. Our discussion examines these frameworks in the context of an example in the area of distributed information systems.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-14">2000/14: Electronic medical consultation: A New Zealand perspective</a></h3>
<a name="dp2000-14"></a><h3>2000/14: Electronic medical consultation: A New Zealand perspective</h3>
<h4>C. Brebner, R. Jones, J. Krisjanous, W. Marshall, G. Parry and A. Holt</h4>
 
<p>Electronic medical consultation is available worldwide through access to the World Wide Web (WWW). This article outlines a research study on the adoption of electronic medical consultation as a means of health delivery. It focuses on the delivery of healthcare specifically for New Zealanders, by New Zealanders. It is acknowledged that the WWW is a global market place and it is therefore difficult to identify New Zealanders&#8217; use of such a global market, but we have attempted to provide a New Zealand perspective on electronic medical consultation.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-15">2000/15: Analysis of the macroeconomic development of European and Asia-Pacific countries with the use of connectionist models</a></h3>
<a name="dp2000-15"></a><h3>2000/15: Analysis of the macroeconomic development of European and Asia-Pacific countries with the use of connectionist models</h3>
<h4>N. Kasabov, H. Akpinar, L. Rizzi and D. Deng</h4>
 
<p>The paper applies novel techniques for on-line, adaptive learning of macroeconomic data and a consecutive analysis and prediction. The evolving connectionist system paradigm (ECOS) is used in its two versions&#8212;unsupervised (evolving self-organised maps), and supervised (evolving fuzzy neural networks&#8212;EFuNN). In addition to these techniques, self-organised maps (SOM) are also employed for finding clusters of countries based on their macroeconomic parameters. EFuNNs allow for modelling, clustering, prediction and rule extraction. The rules that describe future annual values for the consumer price index (CPI), interest rate, unemployment and GDP per capita are extracted from data and reported in the paper, both for the global EU-Asia block of countries and for smaller groups (the EU, EU-candidate countries, and Asia-Pacific countries). The analysis and prediction models prove to be useful tools for the analysis of trends in the macroeconomic development of clusters of countries and for their future prediction.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-16">2000/16: Evolving localised learning for on-line colour image quantisation</a></h3>
<a name="dp2000-16"></a><h3>2000/16: Evolving localised learning for on-line colour image quantisation</h3>
<h4>D. Deng and N. Kasabov</h4>
 
<p>Although widely studied for many years, colour image quantisation remains a challenging problem. We propose to use an evolving self-organising map model for the on-line image quantisation tasks. Encouraging results are obtained in experiments and we look forward to implementing the algorithm in real world applications with further improvement.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-17">2000/17: Using consensus ensembles to identify suspect data</a></h3>
<a name="dp2000-17"></a><h3>2000/17: Using consensus ensembles to identify suspect data</h3>
<h4>D. Clark</h4>
 
<p>In a consensus ensemble all members must agree before they classify a data point. But even when they all agree, some data is still misclassified. In this paper we look closely at consistently misclassified data to investigate whether some of it may be outliers or may have been mislabelled.</p>
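<p>A minimal sketch of the screening step described above (the ensemble construction itself is not shown; the members here are toy classifiers invented for illustration): only points on which every member agrees are considered, and those that are unanimously misclassified are flagged as suspect.</p>

<pre>
def suspect_points(members, data, labels):
    """Flag points that every ensemble member classifies identically,
    yet identically wrongly, as candidate outliers or mislabelled cases."""
    suspects = []
    for x, label in zip(data, labels):
        votes = [member(x) for member in members]
        unanimous = all(v == votes[0] for v in votes)
        if unanimous and votes[0] != label:
            suspects.append((x, label, votes[0]))
    return suspects

# Toy usage: three "classifiers" that threshold the same feature differently.
members = [lambda x: int(x[0] > 0.5),
           lambda x: int(x[0] > 0.4),
           lambda x: int(x[0] > 0.6)]
data    = [(0.9,), (0.1,), (0.95,)]
labels  = [1, 0, 0]          # the last label disagrees with every member
print(suspect_points(members, data, labels))
</pre>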
 
 
 
<hr>
 
<h3><a name="dp2000-18">2000/18: Elementary structures in entity-relationship diagrams as a diagnostic tool in data modelling and a basis for effort estimation</a></h3>
<a name="dp2000-18"></a><h3>2000/18: Elementary structures in entity-relationship diagrams as a diagnostic tool in data modelling and a basis for effort estimation</h3>
<h4>G. Kennedy</h4>
 
<p>Elsewhere Kennedy describes three elementary structures to be found in entity-relationship diagrams. Here, each of these structures is considered in the context of a transaction processing system and a specific set of components that can be associated with the structure is described. Next, an example is given illustrating the use of elementary structures as an analytical tool for data modelling and a diagnostic tool for the identification of errors in the resulting data model. It is conjectured that the amount of effort associated with each structure can be measured. A new approach for the estimation of the total effort required to develop a system, based on a count of the elementary structures present in the entity-relationship diagram, is then proposed. The approach is appealing because it can be automated and because it can be applied earlier in the development cycle than other estimation methods currently in use. The question of a suitable counting strategy remains open.</p>
 
 
 
<hr>
 
<h3><a name="dp2000-19">2000/19: Comparing Huber&#8217;s M-Estimator function with the mean square error in backpropagation networks when the training data is noisy</a></h3>
<a name="dp2000-19"></a><h3>2000/19: Comparing Huber&#8217;s M-Estimator function with the mean square error in backpropagation networks when the training data is noisy</h3>
<h4>D. Clark</h4>
 
<p>In any data set, some of the data will be bad or noisy. This study identifies two types of noise and investigates the effect of each in the training data of backpropagation neural networks. It also compares the mean square error function with a more robust alternative advocated by Huber.</p>
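<p>For reference, a minimal sketch of the two loss functions being compared (Huber&#8217;s function is shown with an illustrative threshold of 1.0; scaling conventions vary): the squared error grows quadratically for all residuals, while Huber&#8217;s function switches to linear growth beyond the threshold, so single noisy points pull the fit less strongly.</p>

<pre>
def squared_error(residual):
    return residual ** 2

def huber(residual, delta=1.0):
    # Quadratic for small residuals, linear beyond delta, written in the
    # branch-free form: 0.5*min(|r|, d)^2 + d*max(|r| - d, 0).
    a = abs(residual)
    return 0.5 * min(a, delta) ** 2 + delta * max(a - delta, 0.0)

for r in (0.5, 1.0, 5.0):
    print(r, squared_error(r), huber(r))   # Huber grows far more slowly at r = 5
</pre>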
 
 
 
<hr>
 
<h3><a name="dp2000-20">2000/20: A framework for distributed workflow systems</a></h3>
<a name="dp2000-20"></a><h3>2000/20: A framework for distributed workflow systems</h3>
<h4>M.K. Purvis, M.A. Purvis and S. Lemalu</h4>
 
<p>Workflow management systems (WFMS) are being adopted to assist the automation of business processes that involve the exchange of information. As a result of developments in distributed information system technology, it is now possible to extend the WFMS idea to wider spheres of activity in the industrial and commercial world and thereby to encompass the increasingly sprawling nature of modern organisations. This paper describes a framework under development that employs such technology so that software tools and processes may interoperate in a distributed and dynamic environment. The framework employs Petri nets to model the interaction between various sub-processes. CORBA technology is used to enable different participants who are physically disparate to monitor activity in and make resource-level adaptations to their particular subnet.</p>
 
View
Website/dp2001-abstracts-contents.htm
View
Website/dp2002-abstracts-contents.htm
View
Website/dp2003-abstracts-contents.htm
View
Website/dp2004-abstracts-contents.htm
View
Website/dp2005-abstracts-contents.htm
View
Website/dp2006-abstracts-contents.htm
View
Website/dp2007-abstracts-contents.htm
View
Website/dp2008-abstracts-contents.htm
View
Website/dp2009-abstracts-contents.htm
View
Website/working-contents.htm