H.Y. Gan, MSc
- Project: Scalable algorithms to process massive datasets from radio astronomy
One of the holy grails of cosmology is the detection of neutral hydrogen in the infant Universe (its first billion years), which can tell astronomers and cosmologists how the first stars, galaxies and black holes formed, how dark matter evolved, and whether gravity indeed follows Einstein’s general relativity. The low-frequency part of the Square Kilometre Array (SKA) – the world’s largest radio telescope, currently under construction in Australia and South Africa – can not only detect the emission of this hydrogen but even image its distribution and how it evolves in time (redshift). The data volume of the SKA, however, is huge (1,000 TB per day) and will require novel scalable algorithms to process the total amount of data accumulated over such a project (10^6 TB, i.e. 1 exabyte).
The project will focus on how subtle effects in the processing of these data (i.e. error correction and imaging) can be uncovered using machine-learning techniques such as neural networks and pattern recognition. Such an approach is becoming necessary in order to explore this very high-dimensional dataset quickly, beyond any human ability, and to discover effects and relations in the data that will enable finding the extremely faint signals of neutral hydrogen from the infant Universe.
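The faint-signal problem above can be illustrated with a matched filter, a standard signal-processing technique for pulling a known waveform out of noise. This is a toy sketch, not part of the project itself: the signal shape, amplitude, noise model and injection position are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy time series: white noise plus a weak, known waveform injected
# at a known position (all parameters here are illustrative).
n = 4096
t = np.arange(64)
template = np.sin(2 * np.pi * t / 8) * np.hanning(64)  # tapered sinusoid
data = rng.normal(0.0, 1.0, n)
inject_at = 1500
data[inject_at:inject_at + 64] += 3.0 * template  # hard to see by eye

# Matched filter: correlate the data with the unit-norm template.
# The peak of the output marks the most likely signal location.
mf = np.correlate(data, template / np.linalg.norm(template), mode="valid")
peak = int(np.argmax(mf))
print(peak)  # close to the injection position, 1500
```

Real SKA signals are far fainter than this and buried in instrumental and astrophysical foregrounds, which is precisely why learned, data-driven detectors are explored instead of hand-designed templates.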
- Keywords: Cosmology, Radio Telescopes, Machine Learning, Infant Universe.
- Fields of expertise involved: Cosmic Dawn & Reionization, Radio astronomy, Machine Learning, Signal Processing.
- Link: https://www.rug.nl/research/fse/themes/dssc/cofund/training/#project-17-scalable-algorithms-to-process-massive-datasets-from-radio-astronomy
Last modified: 05 June 2018, 3:29 p.m.