Project Name: NOTA: Non-Traditional Approaches
Abbreviation: NOTA
Start date: January 1, 1998
End date: February 1, 2000

Project Description:
The project serves a twofold purpose. On the one hand, it studies
applications of neural networks in the field of natural language
processing. Here the main interest is performance simulation: what
could be the main contributions of neural networks to natural language
processing, especially to grammar inference and word disambiguation?
On the other hand, the goal is to develop design strategies for neural
networks. Globally, there are three important stages in the design
trajectory:
- An initial problem specification tailored towards neural networks.
- The transformation of the problem specification into a neural
prototype.
- An efficient neural implementation of the prototype.
In all these stages, compositionality and modularity play an important
role.
The objectives are to study the competence of neural networks for
certain natural language tasks, such as grammar inference and
disambiguation. A special research direction is the application of
Hopfield networks to language recognition.
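The project's own formalism is not given here; as a rough illustration of the mechanism involved, the sketch below stores two binary patterns in a Hopfield network via Hebbian learning and recalls one of them from a corrupted input. The pattern sizes and contents are invented for illustration only.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W is the sum of outer products of the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, steps=10):
    """Synchronous threshold updates until the state settles (or steps run out)."""
    for _ in range(steps):
        new_state = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Two invented bipolar patterns of length 6.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern with its last bit flipped
recovered = recall(W, noisy)            # converges back to the first pattern
```

Recognition here means that a corrupted input relaxes to the nearest stored attractor, which is the sense in which a Hopfield net can act as a (pattern) recogniser.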
The second important theme in the project is the development of design
strategies leading from a given problem specification to an efficient
neural network solution, within tolerable fault margins. First, a
specification formalism should be developed that is tailored towards
neural network implementations. Secondly, this specification needs to
be transformed into a neural prototype. Finally, this prototype must
be transformed into an efficient neural network that satisfies the
initial problem specification.
Recently there has been new or renewed interest in approaches to
information and language processing that are not based on the
traditional von Neumann architecture, or that do not fit into the
usual paradigms in computing (imperative, functional, logic, and
object-oriented programming). Typical examples of these approaches
include:
- neural networks (NN),
- genetic algorithms (GA),
- cellular automata (CA),
- computation based on fuzzy logic (FL).
Although these computing models have been inspired by quite different
ideas (respectively: how does the human brain function? what is the
behaviour of populations evolving in time? do self-reproducing
automata exist, and are they universal? can we compute in a more
subtle way than by using only the crude basic values "yes" (1) and
"no" (0)?), some central themes and common features are now emerging.
- All these models belong to the area of science usually referred to
as complex, dynamical, or nonlinear systems. These three names
correspond to different aspects of the subject under consideration.
Restricting ourselves to the subject "information processing", we
first remark that each computational process involving a basic
programming concept like "if then else fi" is in essence nonlinear.
Our subject is, of course, complex (nobody wants to be involved in
trivial matters only!): we usually study a large number of relatively
simple information processing elements, interconnected in some regular
fashion, in order to obtain an interesting global computational
phenomenon. Keywords usually mentioned in this context are
connectionism and massive parallelism. Thirdly, the adjective
"dynamical" emphasises that we are interested in the behaviour of the
computational process as a function of time, rather than in the mere
final result of such a computation.
- There are interesting connections to combinatorial optimisation:
from a certain point of view, NNs and GAs (as well as the Boltzmann
machine and simulated annealing) are stochastic optimisation
techniques.
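To make this connection concrete, here is a minimal sketch of simulated annealing on an invented one-dimensional objective; the cooling schedule, parameters, and objective function are illustrative choices, not taken from the project.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=10.0, cooling=0.95, steps=2000):
    """Minimise cost(): accept worse moves with probability exp(-delta / T)."""
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = neighbour(x)
        delta = cost(cand) - cost(x)
        # Always accept improvements; accept deteriorations stochastically.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling  # geometric cooling schedule
    return best

random.seed(0)
# Invented bumpy objective with several local minima.
f = lambda x: x * x + 3 * math.sin(5 * x)
result = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), x0=4.0)
```

The stochastic acceptance of uphill moves is what lets the process escape local minima; as the temperature cools, the search degenerates into a greedy local descent.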
- Apart from being defined locally (e.g. the definitions of the
neurons (NN), the chromosomes (GA), or the cells (CA)), the overall
computational process may be controlled at a global level ("training
the neural network", i.e. adjusting the weights of the
interconnections; manipulating the fitness function in a GA).
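This division between local definitions and global control can be seen in a toy genetic algorithm: chromosomes and genetic operators are defined locally, while the fitness function steers the whole population globally. The problem (maximising the number of ones in a bit string) and all parameters are invented for illustration.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      mutation_rate=0.02):
    """Tiny GA: tournament selection, one-point crossover, bit-flip mutation."""
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two: the fitter chromosome wins
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]               # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# The global control knob is the fitness function alone: here, "all ones".
best = genetic_algorithm(fitness=sum)
```

Swapping in a different fitness function redirects the entire search without touching any of the local genetic machinery.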
In this field we will focus our attention on a few fundamental
problems and on some application areas. These fundamental questions
include:
- Given the properties of the simple processing elements (the local
properties), what is the global behaviour of a large complex of these
regularly interconnected processing elements?
- Given a particular computational problem, i.e. the desired global
behaviour of our complex system, what is the best way to design the
local processing elements, and to control these elements at a global
level, in order to solve this computational problem? In other words:
given a specification of the computational problem, try to specify the
global behaviour of the system, and then derive the local behaviour
from this latter specification in such a way that the original problem
is solved efficiently. This research includes the development of
formalisms for specifying global behaviour (especially for NNs), as
well as transformations of specifications into equivalent, more
efficient ones.
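The first question, local rules versus global behaviour, can be made concrete with an elementary cellular automaton: each cell consults only itself and its two neighbours, yet the global pattern that emerges is far from obvious. The sketch below runs rule 110 from a single live cell; the grid size and rule choice are illustrative only.

```python
# Elementary cellular automaton: a purely local update rule (each cell
# looks only at itself and its two neighbours) producing non-trivial
# global behaviour.
def step(cells, rule=110):
    n = len(cells)
    # Encode each 3-cell neighbourhood as a number 0..7 and look up the
    # corresponding bit of the rule number.
    return [(rule >> ((cells[(i - 1) % n] << 2) |
                      (cells[i] << 1) |
                      cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1  # a single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Predicting the printed triangle of activity from the 8-entry rule table alone is exactly the local-to-global question posed above, in its simplest possible setting.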

The following HMI member is coordinator of this project:
Dirk Heylen
