Innovative types and abilities of neural networks based on associative mechanisms and a new associative model of neurons
The goal of this paper is to present a new concept for representing data and their relations in neural networks in such a way that they can be actively associated in order to reproduce and generalize them automatically. This paper also demonstrates an innovative way of developing knowledge in a new kind of neural network, whose structure and parameters are constructed automatically on the basis of plastic mechanisms implemented in an alternative model of neurons. This model makes it possible to quickly create associations and establish weighted connections between neural representations of data, their groups, and classes, inspired by the plastic mechanisms at work in the human brain. This paper provides a new associative model of neurons that can intelligently connect itself to other neurons representing similar or subsequent data and their contexts. The approach is based on current knowledge and discoveries in neurobiology and psychology. This contribution proposes new possibilities for using neural networks to represent and generalize complex relations between groups of data. Finally, this paper demonstrates how to use the formed knowledge and obtain associated feedback, or even answers, from it.
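The abstract's specific neuron model is not detailed here; as a rough illustration of the general idea, the following is a toy, Hebbian-style sketch (an assumption, not the paper's actual mechanism) in which neurons represent observed symbols and weighted connections between neurons strengthen whenever their symbols occur in sequence, after which the network can return the most strongly associated successor:

```python
from collections import defaultdict

class AssociativeNet:
    """Toy stand-in for a plastic associative mechanism: neurons represent
    symbols, and directed connection weights grow with co-occurrence."""

    def __init__(self):
        # weights[(a, b)] is the strength of the association a -> b
        self.weights = defaultdict(float)

    def observe(self, sequence, delta=1.0):
        """Strengthen the connection between each consecutive pair."""
        for a, b in zip(sequence, sequence[1:]):
            self.weights[(a, b)] += delta

    def associate(self, symbol):
        """Return the successor most strongly associated with `symbol`,
        or None if the symbol has no outgoing connections yet."""
        cands = {b: w for (a, b), w in self.weights.items() if a == symbol}
        return max(cands, key=cands.get) if cands else None
```

For example, after observing the sequence "ababac", the connection a→b (seen twice) outweighs a→c (seen once), so `associate('a')` returns `'b'`.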
Uncertainty in Machine Learning: From Weak Supervision to Reliable Prediction
This talk will broach different facets of uncertainty in (supervised) machine learning. The first part will motivate the distinction between different types of uncertainty in prediction. In fact, despite the existence of various probabilistic approaches in machine learning, there is arguably no method that is able to distinguish between two very different sources of uncertainty: aleatoric uncertainty, which is due to statistical variability and effects that are inherently random, and epistemic uncertainty, which is caused by a lack of knowledge. Here, a method for binary classification will be introduced that not only produces a prediction of the class of a query instance but also a quantification of the two aforementioned sources of uncertainty. The second part of the talk is devoted to learning from weak supervision and, more specifically, the problem of superset learning, in which outputs in the training data are only characterized in terms of sets of candidate values. To tackle this problem, a generalized risk minimization procedure is proposed. Using an extended loss function that compares precise predictions with set-valued observations, this approach is able to perform model identification and data disambiguation simultaneously.
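The abstract does not spell out the extended loss; one common way to extend a loss to set-valued labels (an assumption here, often called the optimistic or infimum loss in the superset-learning literature) is to charge a precise prediction the loss of its most favorable candidate in the set, and then minimize empirical risk as usual:

```python
def superset_loss(pred, candidate_set, base_loss):
    """Optimistic extension of a loss to a set-valued observation:
    the prediction is compared against the most favorable candidate."""
    return min(base_loss(pred, y) for y in candidate_set)

def empirical_risk(pred, data, base_loss):
    """Average extended loss of a single prediction over a dataset
    of candidate sets (a deliberately minimal illustration)."""
    return sum(superset_loss(pred, S, base_loss) for S in data) / len(data)

# Toy example: 0/1 base loss, three imprecisely labeled observations.
zero_one = lambda yhat, y: 0.0 if yhat == y else 1.0
data = [{0, 1}, {1}, {1, 2}]

# Risk minimization over candidate predictions implicitly disambiguates
# the data: label 1 is consistent with every observation.
best = min([0, 1, 2], key=lambda p: empirical_risk(p, data, zero_one))
```

Here `best` is 1, with empirical risk 0, which illustrates how minimizing the extended loss selects the model and the disambiguation jointly.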
Stable mutations for evolutionary algorithms
The talk focuses on a set of theoretical and experimental results describing the features of the broad family of α-stable distributions, various methods of applying them to the mutation operator of evolutionary algorithms based on a real-number representation of individuals, and, above all, ways of equipping these algorithms with features that improve their effectiveness in solving multimodal, multidimensional global optimization problems.
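A minimal sketch of such a mutation operator, assuming the symmetric case (β = 0) sampled with the standard Chambers–Mallows–Stuck method (the function names and parameters are illustrative, not taken from the talk): smaller α gives heavier tails, so the mutation occasionally makes long jumps that help escape local optima in multimodal landscapes, while α = 2 recovers Gaussian-like behavior.

```python
import math
import random

def sample_alpha_stable(alpha, rng):
    """Draw one sample from a symmetric alpha-stable distribution
    (beta = 0, unit scale) via the Chambers-Mallows-Stuck method."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if abs(alpha - 1.0) < 1e-12:
        return math.tan(u)  # alpha = 1 is the Cauchy special case
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def stable_mutation(individual, alpha=1.5, scale=0.1, rng=None):
    """Mutate a real-valued individual by adding scaled alpha-stable noise
    independently to each coordinate."""
    rng = rng or random.Random()
    return [x + scale * sample_alpha_stable(alpha, rng) for x in individual]
```

For instance, `stable_mutation([0.0, 0.0, 0.0], alpha=1.5)` perturbs each coordinate with heavy-tailed noise; tuning α trades off local search (α near 2) against rare exploratory jumps (smaller α).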