
Bachelor Artificial Intelligence symposium 28 June 2018

When: Thu 28-06-2018, 12:55 - 17:00
Where: Bernoulliborg, Nijenborgh 9, Groningen

On Thursday 28 June 2018 the Bachelor Artificial Intelligence Symposium will take place in the afternoon!

All Bachelor project students will present the results of their hard work. After the symposium there's a BORREL!

The programme and the abstracts are below.

The symposium will take place in three (!) parallel sessions:

  • In room 253: Cognition and Logic (start 12:55)
  • In room 267: Machine Learning 1 (Marco Wiering) (start 13:25)
  • In room 222: Machine Learning 2 (start 13:55)

The sessions start at different times, but the BORREL starts at 17:00 at the take away (first floor).

In the breaks, coffee and tea will be provided. Everyone is invited!


Programme Bachelor Artificial Intelligence symposium 28 June 2018

SESSION 1 — COGNITION AND LOGIC — Room 253

12:55 Opening — Fokie Cnossen

COGNITION
13:00 Flow in learning — Jesse Stoffels (Supervisor(s): Fokie Cnossen )
13:15 Implementation of Conservation Operationalisation — Emily Beuken (Supervisor(s): Niels Taatgen)
13:30 Anatomical connections of brain areas linked to ACT-R and self-generated thought — Marten de Vries (Supervisor(s): Marieke van Vugt & Oscar Portolès Marín)


13:45 BREAK

LANGUAGE
14:00 Training away over-exhaustive errors with distributive quantifiers in children with a single training session — Christian Roest (Supervisor(s): Jennifer Spenader)
14:15 Computational Modeling of Object-Word Learning Strategies: Cross-situational Learning versus Propose-but-verify — Tinke van Buijtene (Supervisor(s): Jennifer Spenader & Jacolien van Rij-Tange)
14:30 Computational modeling of observational word learning using the Rescorla-Wagner model and propose-but-verify — Rijk van Braak (Supervisor(s): Jennifer Spenader & Jacolien van Rij-Tange)


14:45 BREAK

LOGIC 1
15:00 An Analysis of Decompositional Rule Extraction for Explainable Deep Learning — Nicholas Dupuis (Supervisor(s): Bart Verheij)
15:15 An automatic generator of alternative representations of Bayesian Networks — Kai Liang (Supervisor(s): Bart Verheij)
15:30 An automated extension calculator for default logic — Marco Breemhaar (Supervisor(s): Stipe Pandzic)


15:45 BREAK

LOGIC 2
16:00 An analysis of the decentralization of Federated Byzantine Agreement Systems (FBASs) — Lucia Baldassini (Supervisor(s): Davide Grossi)
16:15 Experimental assessment of robustness in Federated Byzantine Agreement Systems — Anna Schmidt-Rohr (Supervisor(s): Davide Grossi)
16:30 Using a parsimonious deontic logic to analyze problems in machine ethics — Jan van Houten (Supervisor(s): Davide Grossi & Rineke Verbrugge)
16:45 Analytical assessment of failure resilience in Stellar Consensus Protocol — Wenxuan Huang (Supervisor(s): Davide Grossi)

17:00 BORREL!! (Take Away, first floor)


SESSION 2 — MACHINE LEARNING 1 — Room 267


13:25 Opening — Marco Wiering

IMAGE SEGMENTATION AND INFORMATION RETRIEVAL

13:30 Deep learning with data augmentation — Joppe Boekestijn (Supervisor(s): Pri Pornntiwa & Marco Wiering)
13:45 How to present the results of a query to the user in information retrieval — Klemen Voncina (Supervisor(s): Marco Wiering)
14:00 Using Intersection over Union loss to improve Binary Image Segmentation — Floris van Beers (Supervisor(s): Marco Wiering)
14:15 Improving the segmentation of faces in images using feature extractors fine tuned on gender recognition — Arvid Lindström (Supervisor(s): Marco Wiering)


14:30 BREAK

MACHINE LEARNING IN MEDICINE
14:45 Predicting sepsis-induced patient deterioration using machine learning — Menno Liefstingh (Supervisor(s): Marco Wiering)
15:00 Predicting Sepsis onset with biosignals using Machine Learning models — Francesco Dal Canton (Supervisor(s): Marco Wiering)

15:15 Tumor Segmentation using Deep Learning — Werner van der Veen (Supervisor(s): Marco Wiering & Elisabeth Pfaehler)
15:30 Tooth segmentation from a CT scan with Convolutional Neural Networks and the watershed algorithm — Ruben Cöp (Supervisor(s): Marco Wiering)

15:45 BREAK

MACHINE LEARNING IN GAMES
16:00 Connectionist Reinforcement Learning in the game of Asteroids — Sjors Mallon & Niels Meima (Supervisor(s): Marco Wiering)
16:20 Reinforcement Learning in the game Lines of Action using an MLP — Quintin van Lohuizen & Remo Sasso (Supervisor(s): Marco Wiering)
16:40 Applying RL techniques on agar.io — Anton Wiehe & Nil Stolt Ansó (Supervisor(s): Marco Wiering)
17:00 BORREL!! (Take Away, first floor)


SESSION 3 — MACHINE LEARNING 2 — Room 222


13:55 Opening — Sietse van Netten

DEVELOPMENT OF AN ARTIFICIAL LATERAL LINE 1
14:00 Extreme learning machine using filters for artificial lateral line source localisation — Jelle Egbers (Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih)
14:15 Comparing and reducing dark corners in lateral line sensor configurations — Christiaan Steenkist (Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih)
14:30 Underwater object localization and identification using an extreme learning machine and artificial lateral line sensors — Arjen Brussen (Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih)


14:45 BREAK

DEVELOPMENT OF AN ARTIFICIAL LATERAL LINE 2 & EXTREME LEARNING MACHINE
15:00 Source Detection Performance Comparison between Potential and Turbulent Flow: An Artificial Neural Network Approach — Jonathan Reid (Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih)
15:15 Extreme Learning Machine and Its Two Variations in Localization and Direction Determination in Artificial Lateral Lines — Ziyu Bao (Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih)
15:30 A comparison of a convolutional neural network and an extreme learning machine for obscured traffic sign recognition — Folke Drost (Supervisor(s): Sietse van Netten & Ben Wolf)


15:45 BREAK

AUTONOMOUS PERCEPTION
16:00 Writer identification for ancient historical manuscripts — Maaike Los (Supervisor(s): Maruf Dhali & Lambert Schomaker)
16:15 Image binarization of ancient historical manuscripts — Daan Lambert (Supervisor(s): Maruf Dhali & Lambert Schomaker)
16:30 Determining K in k-means clustering by exploiting attribute distributions — Oscar Bocking (Supervisor(s): Lambert Schomaker)
16:45 Performance of deep learned features with different loss functions — Xingye Li (Supervisor(s): Sheng He)

17:00 BORREL!! (Take Away, first floor)

________________________________________________________
BACHELOR PROJECTS ALREADY PRESENTED ELSEWHERE
  1. Egocentric and Altercentric Interference on Level-1 Perspective Taking — Changji Zhou (Supervisor(s): Rineke Verbrugge)
  2. Influences of visual and linguistic context on object pronoun processing: EEG and Eye-Tracking provide new information — Tineke Jelsma & Julia Mol & Wessel van der Rest & Rosa Verhoeven (Supervisor(s): Jelmer Borst & Jacolien van Rij-Tange)


Abstracts


________________________________________________________
SESSION 1 — COGNITION AND LOGIC — Room 253

13:00 Flow in learning
Jesse Stoffels
Supervisor(s): Fokie Cnossen
In psychology, flow is the state or feeling of being relaxed, engaged, and optimally stimulated. Instilling this sensation is desirable to make tasks more enjoyable and potentially more successful. The generation and measurement of flow are largely uncharted scientific territory. This study attempted to instil a state of flow in participants performing a learning task by varying the presentation order of the items. Heart rate and psychological measures were taken to determine the state of flow. These measures, and the state of flow they suggest, will be discussed.

13:15 Implementation of Conservation Operationalisation
Emily Beuken
Supervisor(s): Niels Taatgen
Conservation is the understanding that, although the dimensions of an object can change, its mass or quantity remains constant. This concept of conservation is very difficult for young children yet feels trivial to adults. Many studies have focused on when conservation starts, but little has been done to identify how the understanding of conservation develops or how children move through the various conservation strategies. This paper focuses on generating an ACT-R model that replicates previous studies on conservation by implementing a theorised operationalised model by Siegler (1981). Results show that an operationalised model has to be more complex than previously thought to fully replicate the development of conservation strategies.

13:30 Anatomical connections of brain areas linked to ACT-R and self-generated thought
Marten de Vries
Supervisor(s): Marieke van Vugt & Oscar Portolès Marín
Anatomical connections between brain areas can be represented as a network, in which areas become nodes and connections become (the weights of) edges. Using diffusion tensor imaging, a form of MRI, such a network can be generated for a human brain. When analysing the result using graph theory, the importance of specific brain areas can be determined. For example, we can say whether a brain area is central in the brain. If so, it is likely to be recruited in the process of integrating information throughout the brain. ACT-R modules have been linked to different brain areas, and it has also been shown that certain brain areas activate consistently during self-generated thought. A graph-theoretic comparison of these two categories of brain areas with the remainder of the brain was made. It was found that areas linked to ACT-R modules differ from other brain areas in a number of graph measures. When it comes to areas linked to self-generated thought, the results are not as clear. At best, there is some anecdotal evidence that these areas do not differ from the remainder of the brain in their graph measures.

14:00 Training away over-exhaustive errors with distributive quantifiers in children with a single training session
Christian Roest
Supervisor(s): Jennifer Spenader
Children between the ages of 5 and 9 often make language errors called "over-exhaustive errors". If we show them an image with 3 boys each playing with a cat, they reject a statement like "Every boy plays with a cat" if there is an extra cat in the image which no boy is playing with. There is a clear polarization in the responses that children at this age give: they will either always make the mistake, or never make it. This suggests that a full understanding of distributive quantifiers is not learned gradually, but rather that there is a trigger effect after which the correct understanding is learned quickly. We wanted to find out whether exposing children to informative examples in a single training session would trigger them to learn the correct understanding of distributive quantifiers. We designed a study to test this using the Dutch quantifier "elke". Results of picture verification tasks show that after 5 weeks, 10 out of 24 children improved to correctly use the quantifier most of the time. This suggests that a single session of informative examples is not enough for most children to trigger a correct understanding.

14:15 Computational Modeling of Object-Word Learning Strategies: Cross-situational Learning versus Propose-but-verify
Tinke van Buijtene
Supervisor(s): Jennifer Spenader & Jacolien van Rij-Tange
Young children learn their native language very rapidly, which is an impressive achievement given the complexity of the environments in which they learn. It is not yet known exactly how children manage to match the words they hear with their correct referents, the objects they perceive. This is known as the mapping problem. Two theories propose a possible solution. The first, cross-situational learning, assumes that children keep track of the environments they perceive upon the utterance of a word. They then compare these environments and determine the common factor between them. The second theory, propose-but-verify, suggests that children do not remember previously encountered environments. Instead, they take a guess at the referent of a word and, for each new encounter of the word, either confirm or reject that guess, depending on the presence of the guessed object. I built computational models of these two strategies to inspect more closely how they perform, how they differ from each other and, most importantly, which is more representative of human data. I found that neither necessarily resembles human data better than the other. It may be interesting to see whether children in reality use a combination of the two strategies.
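The propose-but-verify strategy described above can be sketched in a few lines. This is a minimal illustration, not the model built for this project; the event encoding and names (`"dax"`, `"ball"`) are assumptions made for the example.

```python
import random

def propose_but_verify(events, seed=0):
    """Minimal propose-but-verify learner.

    `events` is a sequence of (word, referents_present) pairs. For each
    word the learner keeps a single hypothesised referent: if the guess
    is among the referents present it is verified and kept; otherwise a
    new referent is proposed at random from those currently visible.
    """
    rng = random.Random(seed)
    guess = {}  # word -> currently hypothesised referent
    for word, present in events:
        if guess.get(word) not in present:
            # Hypothesis disconfirmed (or no hypothesis yet): propose anew.
            guess[word] = rng.choice(sorted(present))
        # Otherwise the guess is verified and retained.
    return guess

# "ball" is the only referent common to all events for "dax", so a wrong
# guess is eventually disconfirmed and replaced.
events = [("dax", {"ball", "cup"}), ("dax", {"ball", "pen"})] * 30
final = propose_but_verify(events)
```

Note that the learner stores only one hypothesis per word, which is exactly the low-memory property the abstract contrasts with cross-situational learning.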

14:30 Computational modeling of observational word learning using the Rescorla-Wagner model and propose-but-verify
Rijk van Braak
Supervisor(s): Jennifer Spenader & Jacolien van Rij-Tange
How humans learn new words is an ongoing research question in cognition and linguistics. Several theories have been proposed to describe how referents are linked to the correct words. Two of those theories are cross-situational learning and propose-but-verify. Cross-situational learning assumes that learners track co-occurrence statistics between words and referents across learning events, from which a match can be determined. An argument against this theory is that in any real-life learning event there is an immense number of referents to track. Propose-but-verify offers a less memory-intensive alternative: in a new learning event, one referent for a word is chosen at random, and in subsequent learning events it is either confirmed and further solidified in memory, or rejected and replaced by a new word-referent match. In our research we used computational models of cross-situational learning, in the form of the Rescorla-Wagner model, and of propose-but-verify to model previously conducted experiments on observational word learning in adults. Our results show that neither model on its own adequately matches the real-world data. A combination of both models might result in a more representative model.
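The Rescorla-Wagner model mentioned above is an error-driven update rule. The following tabular sketch shows its standard form applied to word learning (referents present in a scene as cues, the word as the outcome); the cue names and trial structure are illustrative assumptions, not the authors' implementation.

```python
def rw_update(V, cues, word_heard, alpha=0.1, beta=1.0, lam=1.0):
    """One Rescorla-Wagner step for a single word.

    V maps each cue (referent) to its association strength with the word.
    All cues present in the scene share the prediction error
    (target - total), where target is lam if the word was heard, else 0.
    """
    total = sum(V.get(c, 0.0) for c in cues)
    target = lam if word_heard else 0.0
    delta = alpha * beta * (target - total)
    for c in cues:
        V[c] = V.get(c, 0.0) + delta
    return V

# "ball" always co-occurs with the word; "cup" also appears when the
# word is absent, so its association is driven back down over trials.
V = {}
for _ in range(50):
    rw_update(V, {"ball", "cup"}, word_heard=True)
    rw_update(V, {"cup"}, word_heard=False)
```

After these trials the association for "ball" dominates, which is how co-occurrence statistics resolve the mapping without storing individual learning events.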

15:00 An Analysis of Decompositional Rule Extraction for Explainable Deep Learning
Nicholas Dupuis
Supervisor(s): Bart Verheij
As artificially intelligent systems take on a more important role in our society, it becomes important to be able to explain their decisions. Deep learning has recently been one of the most successful tools for producing intelligent systems, but the decisions of neural networks are inherently difficult to explain, as the internal state of a neural network is incomprehensible to humans. Rule extraction seeks to make that state accessible, and to bring explainability to deep learning. This paper analyses a decompositional approach to rule extraction, and finds that explainability may come at the cost of robustness to noise, scalability and reasonable time complexity, and the flexibility to learn different types of relationships.

15:15 An automatic generator of alternative representations of Bayesian Networks
Kai Liang
Supervisor(s): Bart Verheij
The edges in a Bayesian Network (BN) do not always represent causal relationships. Previous literature has shown that even a small BN can have many alternative representations. Since probabilistic reasoning plays an increasingly essential role in many domains, misinterpreting the edges in a BN as always showing causal relationships can lead to errors in probabilistic reasoning, which may in turn have serious consequences. To understand this problem in depth, a set of algorithms proposed in previous research, which were found to be defective, is modified to generate precisely all alternative representations of any input BN. A number of sample BNs are tested with both the modified and the original algorithms, and different distributions are found in the generated alternative representations for these BNs. To support the current results and, more importantly, to understand the generation of the alternative representations and their implications, both theoretical and practical techniques are discussed.

15:30 An automated extension calculator for default logic
Marco Breemhaar
Supervisor(s): Stipe Pandzic
Default Logic is a non-monotonic logic presented by R. Reiter in 1980. In many intelligent systems, a complete view of all required information is not available; Default Logic can handle such incomplete information because it is able to make assumptions based on what is known. Theories in this type of logic, so-called default theories, are based on given axioms and default rules. By applying these default rules, we can reach new conclusions, or extensions, of a default theory: we extend our knowledge using default rules. Strangely enough, software to analyze such theories is hard to find. Automating the process of working out these extensions is a great help for research done with Default Logic. I implemented a system in Java that can do this for any finite default theory.

16:00 An analysis of the decentralization of Federated Byzantine Agreement Systems (FBASs)
Lucia Baldassini
Supervisor(s): Davide Grossi
Federated Byzantine Agreement Systems (FBASs) are a novel approach to reaching consensus, pioneered by the Stellar blockchain. An FBAS allows participating nodes to freely choose which other nodes to trust, and its main claimed feature is to be highly decentralized. This claim is investigated by applying notions from cooperative game theory (the Shapley-Shubik index, the Banzhaf index, and the nucleolus) and by treating each FBAS as a simple game, in which a coalition of nodes is either winning, if it promotes consensus, or losing. These borrowed notions provide a way of assessing the power of a given node in a system, that is, how much the vote of a single node can influence the final outcome. In a truly decentralized system, no node should have a power index higher than the others. Algorithms computing these indexes are implemented in Python and the indexes of given FBASs are calculated. Results show that in a given FBAS some nodes have a power of 0, meaning no power at all. Those nodes, in fact, have no influence on whether the system reaches consensus. This, in turn, suggests that the claim that FBASs are highly decentralized structures should be revisited.
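For small systems, a power index of the kind used in this project can be computed by brute force. The sketch below computes the normalised Banzhaf index over a toy winning-coalition rule; that rule is an illustrative assumption, not an actual FBAS quorum structure, and this is not the project's own code.

```python
from itertools import combinations

def banzhaf(players, is_winning):
    """Normalised Banzhaf index: each player's share of the coalitions
    in which it is critical (winning with it, losing without it)."""
    swings = {p: 0 for p in players}
    for p in players:
        rest = [q for q in players if q != p]
        for r in range(len(rest) + 1):
            for coal in combinations(rest, r):
                s = frozenset(coal)
                if is_winning(s | {p}) and not is_winning(s):
                    swings[p] += 1
    total = sum(swings.values())
    return {p: swings[p] / total for p in players}

# Toy rule: consensus needs node A together with at least one of B, C.
# Node D plays no role, so (mirroring the abstract's finding) its
# power index comes out as exactly 0.
idx = banzhaf(["A", "B", "C", "D"],
              lambda s: "A" in s and bool({"B", "C"} & s))
```

A node with index 0 is a dummy player in game-theoretic terms: no coalition's outcome ever depends on its vote.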

16:15 Experimental assessment of robustness in Federated Byzantine Agreement Systems
Anna Schmidt-Rohr
Supervisor(s): Davide Grossi
Federated Byzantine agreement (FBA) is a model for consensus in which each node makes its own trust decisions. This experimental study investigates the effects of the network topology and the initial voting distribution on the robustness of an FBA system. The measured indicators of robustness are (1) the probability of the network reaching stability, (2) the probability of the network reaching consensus, and (3) the number of iterations needed until consensus is reached. The results can be used to assess for which types of networks, and in which fields of application, using an FBAS is appropriate.

16:30 Using a parsimonious deontic logic to analyze problems in machine ethics
Jan van Houten
Supervisor(s): Davide Grossi & Rineke Verbrugge
As the field of AI advances, it produces more agents performing activities that used to be uniquely human (such as driving, surgery, and - in the case of war drones - firing guns). With respect to many such activities, ethical questions arise. Machine ethics deals with these questions from the point of view of the machine itself. That is, it aims to answer questions such as “What should I do?” from a machine's viewpoint. My approach to machine ethics is based on normative (i.e. deontic) logic. I use default theories as a basis and translate these to formulas in S4, which are then used as the initial list of a semantic tableau. For this, I have programmed a default-to-S4 translator and an S4 tableau solver. I apply my approach to a number of ethical dilemmas in the light of various ethical theories. What I find during the process of applying, as well as the results of the application, will serve as the basis for critical reflection on the apparent viability of my approach for solving machine-ethical problems.

16:45 Analytical assessment of failure resilience in Stellar Consensus Protocol
Wenxuan Huang
Supervisor(s): Davide Grossi
Federated Byzantine Agreement is a consensus-oriented system that could potentially be the backbone of future financial institutions, including banking services. In the process of development, blockchain services and providers encountered several incidents of either node failure (Liveness problem) or inconsistent data while communicating with different nodes (Safety problem). This analytical assessment tries to investigate potential origins of these problems by simulating typical configurations and assessing the performance of different FBAS failure solutions. The assessment could be adapted to provide insights for future services that build upon FBAS and similar protocols.

________________________________________________________
SESSION 2 — MACHINE LEARNING 1 — Room 267

13:30 Deep learning with data augmentation
Joppe Boekestijn
Supervisor(s): Pri Pornntiwa & Marco Wiering
Convolutional neural networks (CNNs) learn better when more training data is presented to the network. Data augmentation techniques artificially increase the amount of image data that can be passed to a CNN. This allows the CNN to extract class-defining features more accurately, resulting in better classification accuracy. In this paper several data augmentation techniques are presented and their performances compared. Two deep learning architectures are used, GoogLeNet and ResNet, and both are trained on a plant image dataset (Tropic10). The data augmentation techniques used in this paper are rotation, flipping, shifting, cutout, and mix-up. Their performances are compared, as well as the performance of some combinations of techniques, resulting in 9 data augmentation methods. Both architectures are configured either with pre-trained weights, trained on the `ImageNet' dataset, or with randomly initialized weights. Both configurations benefit from data augmentation techniques, especially flipping and shifting the images.

13:45 How to present the results of a query to the user in information retrieval
Klemen Voncina
Supervisor(s): Marco Wiering
In information retrieval (IR), a common problem is how to present the results of a query to the user so that the most relevant result is positioned at the top. This project looks at two different approaches to doing this: a click model and a ranking algorithm. The purpose of the click model is to predict whether a returned document is likely to be clicked, given a feature vector of other relevance scores and metrics. In contrast, a ranking algorithm scores a series of documents on their relevance based on these same features to determine their order. Through this exploration it is shown that a click model can be used as a method of scoring documents, thus cementing it as a solid relevance score. It is also shown that, despite this, using the click model output as a feature yields no statistically significant improvement in ranking over the already extracted features of each document.

14:00 Using Intersection over Union loss to improve Binary Image Segmentation
Floris van Beers
Supervisor(s): Marco Wiering
In semantic segmentation tasks it is increasingly common to see the Jaccard Index, or Intersection over Union (IoU), used as a measure of success. While this measure is more reliable than per-pixel accuracy, many neural networks are still trained on accuracy. In this research an alternative is proposed in which a neural network is trained for a face segmentation task by optimizing directly on IoU. Several datasets and data splits are used to test this. It is found that training directly on IoU does increase performance compared to training with conventional loss functions.
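Optimising IoU directly requires a differentiable surrogate, since the hard intersection and union are not. A common choice is the "soft" IoU below, written here as a plain-Python sketch under my own naming; the abstract does not specify which formulation this project actually used.

```python
def soft_iou_loss(pred, target, eps=1e-7):
    """1 - soft IoU over flattened per-pixel values.

    `pred` holds predicted probabilities in [0, 1] and `target` holds
    binary labels. Products stand in for set intersection, which keeps
    the expression differentiable in `pred`; eps avoids division by
    zero when both masks are empty.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(pred) + sum(target) - inter
    return 1.0 - (inter + eps) / (union + eps)
```

A perfect prediction gives a loss near 0, and a completely missed mask a loss near 1, so minimising this loss pushes the network's output toward high IoU, the same quantity used for evaluation.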

14:15 Improving the segmentation of faces in images using feature extractors fine tuned on gender recognition
Arvid Lindström
Supervisor(s): Marco Wiering
This study investigates how semantic segmentation networks can be improved on the task of face segmentation in images by using convolutional layers previously fine-tuned on gender recognition. The study performs a large comparison between 4 models which have been trained to predict gender. The classifiers are then replaced by pixel-wise predictions used to segment faces, and two datasets are used to investigate the improvement of the "gender aware" networks. Initial results do not suggest higher Intersection over Union scores; however, the number of epochs required to train the models is reduced by using "gender aware" feature extractors.

14:45 Predicting sepsis-induced patient deterioration using machine learning
Menno Liefstingh
Supervisor(s): Marco Wiering
Sepsis is one of the leading causes of in-hospital mortality, and patients benefit greatly from early detection. Using data collected at the emergency room of the University Medical Center of Groningen, this research compares a number of different algorithms and imputation methods to predict multiple kinds of sepsis-induced patient deterioration to see what machine learning could be capable of for risk assessment and early detection. Challenges with this dataset are the relatively low amount of inclusions and high amount of missing values. Results show that a well-tuned Random Forests classifier outperforms other algorithms on the dataset and that MICE imputation has a clear edge over other imputation methods.

15:00 Predicting Sepsis onset with biosignals using Machine Learning models
Francesco Dal Canton
Supervisor(s): Marco Wiering
Sepsis is an excessive bodily reaction to an infection in the bloodstream, which causes one in five patients to deteriorate within two days after admission to the hospital. Until now, no clear tool for early detection of sepsis has been found. This research uses continuous electrocardiogram (ECG), respiratory rate, and blood oxygen saturation bio-signals collected from 123 patients at the Universitair Medisch Centrum Groningen during the first 48 hours after hospital admission. This data is examined with a range of feature extraction strategies and Machine Learning techniques in an exploratory framework, to find the most promising methods for early sepsis detection. The analysis includes Gradient Boosting Machines, Random Forests, Linear Support Vector Machines, Multilayer Perceptrons, Naive Bayes classifiers, and k-Nearest Neighbors classifiers. The most promising results were obtained using Linear Support Vector Machines trained on features extracted from single heartbeats using Wavelet Transforms and Autoregressive Modelling, where classification occurred as a majority vote of the heartbeats over long ECG segments.

15:15 Tumor Segmentation using Deep Learning
Werner van der Veen
Supervisor(s): Marco Wiering & Elisabeth Pfaehler
Newly developed deep learning techniques have recently been proven to be successful in the field of semantic segmentation. In this project, some of these techniques are applied to the non-trivial task of segmenting tumors from PET-scans, intending to facilitate cancer diagnosis. The performance of a deep neural network model using dilated convolution and residual connections is compared to a simple threshold algorithm. With the use of a modified Jaccard index metric that allows for a variable desired balance between false positives and false negatives, initial results suggest that this approach to tumor segmentation may prove to surpass the performance and desirability of simple thresholding algorithms.

15:30 Tooth segmentation from a CT scan with Convolutional Neural Networks and the watershed algorithm
Ruben Cöp
Supervisor(s): Marco Wiering
The manual segmentation of teeth from a CT scan is a labor-intensive and time-consuming job. This research proposes a technique for automatic threshold selection for tooth tissue using Convolutional Neural Networks (CNNs), as well as a technique that uses the watershed algorithm to automatically segment a tooth from axial slices. The conclusion of the research is that CNNs are suitable for automatic threshold finding, but require a lot of training data. It is also concluded that the watershed algorithm is suitable for slice-wise tooth segmentation, but needs some tweaking to reach the same results as a segmentation performed by an expert.

16:00 Connectionist Reinforcement Learning in the game of Asteroids
Sjors Mallon & Niels Meima
Supervisor(s): Marco Wiering
Reinforcement Learning has been widely applied to games, with varying degrees of success. In previous research conducted by Google DeepMind, the 1980s arcade game Asteroids proved to be one of the games on which agents reached sub-human performance. Asteroids is a continuous game, which poses extra difficulty. This research contributes a higher-order state extraction algorithm, combined with various connectionist reinforcement learning algorithms and world modelling, to achieve better agent performance than seen in previous research. The performance of the various learning algorithms, such as naive Q-learning, double Q-learning, Q-learning with experience replay, and QV-learning, is compared to each other and to a random agent.
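The naive Q-learning variant above rests on a single temporal-difference update, which the connectionist versions approximate with a network instead of a table. A minimal tabular sketch (the state and action names are illustrative, not from the project):

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a)).

    Double Q-learning and QV-learning replace the max-bootstrap term
    with less biased estimates, but keep this overall shape.
    """
    q_sa = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = q_sa + alpha * (r + gamma * best_next - q_sa)
    return Q

# Reward of 1 for, say, shooting an asteroid, from an empty table:
# the entry moves a fraction alpha of the way toward the TD target.
Q = q_learning_update({}, "s0", "fire", 1.0, "s1", ["fire", "thrust"])
```

In the connectionist setting the table lookup becomes a forward pass and the assignment becomes a gradient step toward the same TD target.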

16:20 Reinforcement Learning in the game Lines of Action using an MLP
Quintin van Lohuizen & Remo Sasso
Supervisor(s): Marco Wiering
This paper investigates whether TD-Leaf and changes to the game state representation improve performance when an artificial agent learns to play the game Lines of Action using reinforcement learning and a multi-layer perceptron (MLP). For this research the reinforcement learning algorithm TD-learning was used. To test the performance of TD-Leaf in a timely fashion, artificial agents learned to play the game with and without TD-Leaf, and were tested against a fixed opponent and through self-play. The performance of both was measured, and it was found that TD-Leaf improves performance, even for agents trained without it. The game can be represented to the MLP as a plain eight-by-eight board, but this can be done differently: by looking at expert strategies, more than the plain state representation can be captured. An expert might look at connectivity, the size of the biggest cluster, or walling. Feeding the MLP these extra state representations made the artificial agent perform better than when these input features were not provided. The highest performance of the artificial agent was achieved with TD-Leaf, the connectivity state, the biggest cluster, and walling combined.

16:40 Applying RL techniques on agar.io
Anton Wiehe & Nil Stolt Ansó
Supervisor(s): Marco Wiering
The online game agar.io has become massively popular on the internet due to its intuitive game design and its ability to instantly match people up with players around the world. From the point of view of artificial intelligence this game is also very intriguing, as it has a high-dimensional input and action space while still allowing diverse agents with complex strategies to compete against each other. We apply reinforcement learning techniques to this game to answer three research questions. The first is which reinforcement learning methods and techniques are required to learn to play this game at an advanced level. Furthermore, this research focuses on how a discretization of the game state representation through vision grids of hand-crafted features compares to raw pixel input in terms of post-training performance. Lastly, we study how the discrete-output algorithm Q-learning fares against continuous-output Actor-Critic methods such as CACLA and DPG. Results show that the reinforcement learning techniques we investigated are only sufficient to learn simplified versions of the game. So far, hand-crafted vision grids outperform raw pixel input. Q-learning is superior to the Actor-Critic methods in simple environments, whereas CACLA outperforms all other algorithms in complex environments.

________________________________________________________
SESSION 3 — MACHINE LEARNING 2 — Room 222

14:00 Extreme learning machine using filters for artificial lateral line source localisation
Jelle Egbers
Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih
Data from an artificial lateral line, an array of sensors able to sense differences in water flow, can be used for source localisation and angle prediction. In this research, possibilities to improve the extreme learning machine are explored: with filters, by changing the input representation, and by changing the activation function. Changing the input representation to a square matrix also improves the accuracy of the algorithm, with the mean square error approaching an asymptote as more and more filters are added. An algorithm combining the ReLU and tanh activation functions also turned out to work well, because one function was good at predicting the location and the other at predicting the angle.
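The extreme learning machine appearing in this and the following abstracts is simple enough to sketch: a single hidden layer with random, fixed weights, and output weights solved in closed form by least squares. The toy regression target and all sizes below are assumptions for illustration, not the lateral-line data:

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_fit(X, y, n_hidden=50, activation=np.tanh):
    """Fit only the output weights; hidden weights stay random and fixed."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = activation(X @ W + b)                    # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta, activation=np.tanh):
    return activation(X @ W + b) @ beta

# toy regression: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because only the output layer is trained, and in closed form, training is very fast, which is what makes ELMs attractive for the sensor-array experiments in this session.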

14:15 Comparing and reducing dark corners in lateral line sensor configurations
Christiaan Steenkist
Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih
Fish are able to sense movement in the water around them with an organ called the lateral line, which consists of neuromast detectors along the body that detect water flow. Plots of the parameter accuracy of these bi-axial sensors, calculated with Fisher information matrices, show that there are areas where parameter detection accuracy is low ("dark corners"). The goal of this paper is to reduce these dark corners for source position. The assumption is that regularly spaced sensors have the fewest dark corners. Sensor configurations generated by a continuous genetic algorithm (CGA), k-means and a Kohonen SOM, as well as an ideal configuration, were compared. Extreme learning machines (ELMs) were trained and tested on a dataset of moving sources and compared with the dark corners from Fisher matrix heatmaps. The parameter errors of the ELM are approximately four times larger than the Fisher matrices predict, and the dark corners shown by the Fisher matrix heatmaps are not visible in the ELM parameter errors. The CGA configurations had the highest ELM errors, except for an outlier, and the most dark corners. The ideal configuration had the fewest dark corners and shared the lowest ELM errors with the configurations generated by k-means and the Kohonen SOM.

14:30 Underwater object localization and identification using an extreme learning machine and artificial lateral line sensors
Arjen Brussen
Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih
Fish possess a lateral line organ, consisting of flow detectors along the body, with which they can detect water flow in their surroundings. Using the excitation patterns of these flow detectors, the location and direction of moving objects can be inferred. Building on these findings, this paper examines whether and how such objects can be identified in terms of shape, size and velocity using an artificial lateral line in combination with an extreme learning machine (ELM). Input normalization based on the maximum velocity outperforms the other tested techniques. For shape classification, three object shapes were used (single object, school of fish, snake) and a classification accuracy of 66.3% is reported. For size classification (0.025 m, 0.05 m, 0.1 m), the ELM achieves an overall accuracy of 68.3%. For velocity classification (0.065 m/s, 0.13 m/s, 0.26 m/s), an overall accuracy of 53.8% was measured.

15:00 Source Detection Performance Comparison between Potential and Turbulent Flow: An Artificial Neural Network Approach
Jonathan Reid
Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih
In this paper, we compare the performance of several neural networks on underwater source detection using an artificial lateral line on realistic simulated flow data. Previous work has shown how the lateral line helps fish determine a source's location in water. Based on this, a number of studies using potential flow data have shown that neural networks can perform source location and source angle estimation satisfactorily. In the present study, we aimed to recreate those results using more realistic turbulent flow data, generated with a three-dimensional Stam-type fluid solver simulating the flow produced by a sphere moving through a two-dimensional plane of fluid. The more realistic data did indeed prove more difficult for the neural networks. Even so, source location estimation lost accuracy by only a factor of two, while angle estimation lost accuracy by a factor of five.

15:15 Extreme Learning Machine and Its Two Variations in Localization and Direction Determination in Artificial Lateral Lines
Ziyu Bao
Supervisor(s): Sietse van Netten & Ben Wolf & Primoz Pirih
The lateral line is a fish organ that helps fish detect nearby moving objects (prey or predators) by sensing pressure gradients along the body. Based on a simulation of artificial lateral lines, the performance of three kinds of extreme learning machines (ELMs) is studied: the original ELM is compared with the Optimally-Pruned ELM (OPELM) and the Kernel ELM (KELM). The results show that the three ELMs reach a similar level of accuracy across noise levels with around 2300 samples; however, the KELM has the shortest training time. If a Particle Swarm Optimization (PSO) method is used for hyperparameter optimization rather than 9-fold cross-validation, the KELM trains even faster: PSO is faster than computing a single fold of the 9-fold cross-validation. Because the hyperparameters are fairly stable across data sets, the KELM needs no validation time at all when predefined hyperparameters are used. The KELM is also assumed to be suitable for smaller sample sizes (with accuracy higher than the ELM's), but not for larger ones (with accuracy lower than the ELM's). The OPELM requires much time for hyperparameter optimization without achieving better results, and turns out to be poorly suited to this simulation setting.

15:30 A comparison of a convolutional neural network and an extreme learning machine for obscured traffic sign recognition
Folke Drost
Supervisor(s): Sietse van Netten & Ben Wolf
The present paper compares two neural network architectures for traffic sign recognition (TSR): first, a convolutional neural network (CNN) pre-trained for object recognition and retrained for TSR; second, a single-hidden-layer feed-forward neural network (SLFN) trained by an extreme learning machine (ELM) algorithm on histogram of oriented gradients (HOG) features. The comparison focuses on recognition accuracy and computational cost for both normal and obscured traffic signs. The models are trained and tested on a combination of traffic signs from the German TSR benchmark dataset, the Belgium traffic sign classification dataset and the revised mapping and assessing the state of traffic infrastructure (revised MASTIF) datasets. Results show an advantage for the ELM in recognition accuracy, computational cost and robustness on obscured traffic signs.

16:00 Writer identification for ancient historical manuscripts
Maaike Los
Supervisor(s): Maruf Dhali & Lambert Schomaker
In the study of ancient manuscripts, it is often a challenge to determine who wrote a document. For palaeographers this knowledge is important, because knowing the writer can reveal much about the historical and geographical context of the document. Handwriting can help here: different features of handwritten text can serve as characteristics of individual writers. This study focuses on the Dead Sea Scrolls, a collection of Hebrew fragments found in the Qumran area near the Dead Sea. We studied the performance of the HOG-BOW feature for writer classification of fragments of the Dead Sea Scrolls, and tested different clustering and visualisation techniques. The t-SNE method appeared to be a good writer-clustering technique when using the Hinge feature, while the Kohonen self-organising map performed slightly worse.

16:15 Image binarization of ancient historical manuscripts
Daan Lambert
Supervisor(s): Maruf Dhali & Lambert Schomaker
Image binarization is applied to extract the functional information from a noisy image. This study looks at a neural network approach to binarization; the main focus is the performance of a conditional generative adversarial network on the binarization task. This technique trains a system that can generate binarized output from normal images. The images used for binarization come from the Dead Sea Scrolls, a collection of Hebrew fragments found in the Qumran area near the Dead Sea. These texts form a difficult case for binarization, as many of the documents are damaged.

16:30 Determining K in k-means clustering by exploiting attribute distributions
Oscar Bocking
Supervisor(s): Lambert Schomaker
Methods for estimating the natural number of clusters (k) in a data set traditionally rely on the distances between points. In this project, an alternative was investigated: using the likelihood of the distribution of some informative nominal variable over the clusters to determine which k partitions the data in the way that is least likely to be random. Artificial data sets are used to assess the strategy's performance and viability in comparison with a well-established distance-based method.
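The idea can be sketched as follows: for a candidate k, cluster the points, then score how non-random the distribution of a nominal attribute over the clusters is. The sketch below uses a chi-squared statistic as the non-randomness score and a small hand-built k-means; the data, the deterministic initialisation, and the scoring choice are all illustrative assumptions, not the thesis's exact method:

```python
import numpy as np

rng = np.random.default_rng(3)

def init_centers(X, k):
    # deterministic farthest-point initialisation, to keep the sketch stable
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

def kmeans(X, k, iters=50):
    centers = init_centers(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def chi2_stat(clusters, attribute):
    """Chi-squared statistic of the attribute-by-cluster contingency table."""
    table = np.zeros((clusters.max() + 1, attribute.max() + 1))
    for c, a in zip(clusters, attribute):
        table[c, a] += 1
    expected = table.sum(1, keepdims=True) * table.sum(0) / table.sum()
    mask = expected > 0
    return float(((table - expected) ** 2 / expected)[mask].sum())

# toy data: two well-separated blobs, each carrying its own attribute value
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
attr = np.array([0] * 50 + [1] * 50)
score_k2 = chi2_stat(kmeans(X, 2), attr)
```

Note that the raw statistic grows with the number of cells, so to compare candidate values of k fairly one would convert it to a likelihood or p-value, which matches the abstract's "least likely to be random" formulation.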

16:45 Performance of deep learned features with different loss functions
Xingye Li
Supervisor(s): Sheng He
Loss functions strongly affect the performance of deep features in deep learning. As more loss functions are developed, their influence on the deep features' discriminative power and efficiency has become unclear; identifying the common characteristics of high-performing loss functions is therefore significant for loss function development. In this study, experiments are conducted on handwritten datasets to compare the effects of different types of loss functions on feature performance. These influences are reflected by the test accuracy on the task of classifying unseen data whose image classes are disjoint from those in the training data. The results suggest that, when the training loss is computed from multiple samples at a time, the knowledge of the overall feature-representation space accumulated throughout training helps enhance the deep features' performance, in terms of both sampling and loss computation.


___________________________________________________

BACHELOR PROJECTS ALREADY PRESENTED ELSEWHERE

Egocentric and Altercentric Interference on Level-1 Perspective Taking
Changji Zhou
Supervisor(s): Rineke Verbrugge
In social cognition, humans use rapid, automatic low-level processing to take the perspective of other humans. Evidence for this automaticity comes from a large body of work demonstrating that participants experience both egocentric and altercentric interference when their perspective differs from that of their partner. However, it is unknown whether such perspective-taking also extends to non-human primates. To address this question, this study investigates whether a chimpanzee avatar can influence participants' understanding of seeing. In a Level-1 visual perspective-taking experiment, adult participants were asked to judge either their own perspective or the perspective of an avatar (human vs. chimpanzee) depicted in the scene. The perspective of the character was either the same as the participant's or different. We found that participants suffered from altercentric interference: they could not prevent themselves from processing the scene from the avatar's perspective. This effect was found for both the chimpanzee and the human avatar: participants calculated the avatar's perspective even when it was not required, leading to slower reaction times (RTs) and decreased accuracy when calculating their own perspective. Conversely, participants also suffered from egocentric interference: they still calculated their own perspective when required to take the perspective of the avatar. Further, the results show that when participants were asked to take the perspective of the chimpanzee rather than the human avatar, they responded more quickly and made fewer errors, suggesting that altercentric interference is weakened when taking the perspective of a chimpanzee. Similarly, egocentric interference was also weakened when participants were distracted by the chimpanzee avatar, although here only RT was affected. The results from this study suggest that adults make good use of rapid, efficient or even automatic processes to compute what other people, or even chimpanzees, see.
We argue that humans benefit from this perspective-taking, even though we may suffer from ego- and altercentric interference. This thesis also proposes that these findings could contribute toward the development of naturalistic interactive technologies, in particular multi-agent systems.


Influences of visual and linguistic context on object pronoun processing: EEG and Eye-Tracking provide new information
Tineke Jelsma & Julia Mol & Wessel van der Rest & Rosa Verhoeven
Supervisor(s): Jelmer Borst & Jacolien van Rij-Tange
This study aims to investigate whether and how visual and linguistic context affect the processing of object pronouns (“him”, “her”). Recent studies have shown, by measuring pupil dilation, that context information affects object-pronoun processing in an early stage and interacts with grammatical processing (e.g., van Rij et al, 2016; van Rij, 2012). In the current study, we investigated the exact timing of those contexts on object-pronoun processing by co-registering EEG and pupil dilation. With a 2x2x2 within-subject design, we investigated the effects of visual context (other-oriented vs self-oriented action; e.g., a picture with a hedgehog photographing a mouse, or a hedgehog photographing himself), discourse prominence (the introduction sentence introduces the actor first or second; "You just saw a hedgehog and a mouse." vs “You just saw a mouse and a hedgehog.”), and referring expression (the test sentence contains an object pronoun or a reflexive (fillers); "The hedgehog was photographing him / himself with a camera."). Surprisingly, we found that visual and linguistic context do not have an influence on object-pronoun processing. However, we did find an influence of visual context on the processing of reflexives in EEG and an interaction between visual and linguistic context in pupil dilation.