{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Time Series Exercises for Ay 119\n", "\n", "Written by: Matthew J. Graham (Caltech), May 2020\n", "\n", "Dependencies:\n", "\n", " * numpy\n", " * pandas\n", " * astropy.timeseries\n", " * sklearn.datasets\n", " * GPy (pip install GPy)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Period finding\n", "\n", "Since period finding is one of the main time series analysis techniques in astronomy, we're going to explore some of the dependencies of the most popular period-finding algorithm -- the Lomb-Scargle periodogram (see Jake Vanderplas's excellent review article: https://arxiv.org/abs/1703.09824, with associated code at https://github.com/jakevdp/PracticalLombScargle/). In particular, we want to assess its performance as a function of signal-to-noise, time series sampling, and waveform shape. It is often worth investigating the performance of an algorithm on toy data sets to get an understanding of what its limitations may be. So:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1) Write a routine to generate a periodic time series of a function $perfunc$ at a period of $per$, containing $n$ data points, assuming homoscedastic Gaussian errors given by a variance $\\sigma^2$, with a mean sampling interval of $meandt$, and a flag to indicate whether the sampling is regular or irregular:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def getTimeSeries(perfunc, per, n, sigma2, meandt, regular_sample=True):" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Demonstrate it with (a) a sinusoidal waveform ($y(t) = \\sin(2 \\pi t / per)$) and (b) a square waveform, and apply LombScargle to recover the period (choose a range of test frequencies, or use autopower -- see the documentation for the astropy.timeseries.LombScargle method). Then plot the corresponding periodogram and the phase-folded time series ($\\phi = t / per - \\lfloor t / per \\rfloor$); a sketch of one possible implementation is given after question 2 below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "2) For a range of periods extending over two decades, e.g., $1 \\leq \\log_{10}(per) \\leq 3$, generate 100 time series and determine how accurate Lomb-Scargle is (i.e., what fraction of the periods it finds are within 1\\% of the known periods) as a function of:\n", "\n", "a) the number of data points in the time series, i.e., plot LS accuracy against $n$\n", "\n", "b) the error variance $\\sigma^2$\n", "\n", "c) the sampling, both regular and irregular ($dt = meandt + \\epsilon$ for some random $\\epsilon$)\n", "\n", "Does Lomb-Scargle do better for the sinusoidal waveform than for the square waveform?" ] },
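{ "cell_type": "markdown", "metadata": {}, "source": [ "As a starting point, here is a minimal sketch of one possible getTimeSeries, together with a Lomb-Scargle recovery check. The waveform, period, and noise level below are illustrative assumptions rather than prescribed values, and for irregular sampling it draws the intervals uniformly with mean $meandt$, which is just one reasonable choice:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from astropy.timeseries import LombScargle\n", "\n", "def getTimeSeries(perfunc, per, n, sigma2, meandt, regular_sample=True):\n", "    # Regular sampling: fixed step of meandt; irregular: uniform random intervals with mean meandt\n", "    if regular_sample:\n", "        t = np.arange(n) * meandt\n", "    else:\n", "        t = np.cumsum(np.random.uniform(0., 2. * meandt, n))\n", "    y = perfunc(t, per) + np.random.normal(0., np.sqrt(sigma2), n)  # homoscedastic Gaussian noise\n", "    dy = np.full(n, np.sqrt(sigma2))  # constant error bars\n", "    return t, y, dy\n", "\n", "# Illustrative check: recover an (arbitrarily chosen) period of 7.3 from a noisy sinusoid\n", "sinusoid = lambda t, per: np.sin(2. * np.pi * t / per)\n", "t, y, dy = getTimeSeries(sinusoid, 7.3, 200, 0.01, 1., regular_sample=False)\n", "freq, power = LombScargle(t, y, dy).autopower()\n", "print('Recovered period:', 1. / freq[np.argmax(power)])" ] },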
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Gaussian processes\n", "\n", "A Gaussian process is an optimal way (in a Bayesian sense) to fit a time series and can be used to predict (interpolate) values where needed, as well as to forecast (extrapolate). The standard data set we are going to look at consists of monthly average $CO_2$ concentrations collected at the Mauna Loa Observatory in Hawaii between 1958 and 2001. We want to model the $CO_2$ concentration as a function of time.\n", "\n", "The data is available as a standard data set:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import fetch_openml\n", "co2_data = fetch_openml(data_id=41187)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A bit of preprocessing is required to convert this to monthly average concentrations. Plot the data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are going to use the GPy library to fit our time series $(t, y)$, and the way to do this is:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import GPy\n", "kern = GPy.kern.RBF(1)  # define the kernel here\n", "yp = y - y.mean()  # subtract the mean from the observed values as we're assuming a zero-mean process\n", "m = GPy.models.GPRegression(t[:, None], yp[:, None], kern)  # define the GP regressor\n", "m.optimize()  # fit" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This performs a maximum likelihood estimation of the Gaussian process kernel hyperparameters, and we can then predict values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tpred =   # define the times at which to predict\n", "ypred, yvar = m.predict(tpred[:, None])  # returns the predicted values at tpred and the predicted variance" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So we want to model the time series as a combination of:\n", "\n", "* a long-term, smooth rising trend (using an RBF kernel)\n", "* a seasonal component with a fixed periodicity of 1 year (using a PeriodicExponential kernel)\n", "* smaller, medium-term irregularities (using a rational quadratic (RatQuad) kernel)\n", "* a noise term (consisting of an RBF kernel and a White kernel)\n", "\n", "Define the component kernels (one possible set of choices is sketched at the end of the notebook):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "k1 = GPy.kern...." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And then define the complete kernel:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "k = k1 + ..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now fit it to the data, and then plot the model against the measured data. Extend the plot to 2030 and see what the model suggests the trend is." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] },
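{ "cell_type": "markdown", "metadata": {}, "source": [ "For reference, here is a minimal sketch of one possible set of component kernels and the composite fit, assuming the $t$ and $yp$ arrays defined earlier (with $t$ in years, so the seasonal period is 1). The initial variance, lengthscale, and period values are illustrative guesses, not tuned results; the optimizer will refine them." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import GPy\n", "# One possible realization of the decomposition above; all initial hyperparameters are guesses\n", "k1 = GPy.kern.RBF(1, variance=50., lengthscale=50.)  # long-term smooth rising trend\n", "k2 = GPy.kern.PeriodicExponential(1, period=1.)  # seasonal component with a 1 yr period\n", "k3 = GPy.kern.RatQuad(1, lengthscale=1.)  # smaller, medium-term irregularities\n", "k4 = GPy.kern.RBF(1, lengthscale=0.1) + GPy.kern.White(1)  # noise terms\n", "k = k1 + k2 + k3 + k4\n", "m = GPy.models.GPRegression(t[:, None], yp[:, None], k)\n", "m.optimize()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 2 }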