Al-Mamun et al. (2021) Data Files

The files are stored in HDF5 format. The main table is stored in a group named "markov_chain_0". The table contains a group named "col_names" which stores all the column names as two arrays: an integer array giving the number of characters in each column name, and a character array containing the names concatenated with no separators. If you use "h5dump -r" to look at the column names, you will be able to read them.
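The two arrays can be unpacked by walking the character array in steps given by the length array. A minimal sketch (the helper name `split_col_names` and the synthetic arrays are illustrative, not part of the data files; in practice the two arrays would be read from the "col_names" group with a library such as h5py):

```python
import numpy as np

def split_col_names(lengths, chars):
    """Split a flat character array into column names using per-name lengths."""
    s = "".join(chr(c) for c in chars)
    names, pos = [], 0
    for n in lengths:
        names.append(s[pos:pos + n])
        pos += n
    return names

# Synthetic example mimicking the stored layout: two names, "mult" (4 chars)
# and "log_wgt" (7 chars), concatenated with no separator.
lengths = np.array([4, 7], dtype=np.int64)
chars = np.frombuffer(b"multlog_wgt", dtype=np.uint8)
print(split_col_names(lengths, chars))  # ['mult', 'log_wgt']
```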

Each MCMC iteration occupies a different row of the table. The MCMC algorithm rejects some (or sometimes many) proposed steps, and the proper way to construct the chain is to repeat the previous entry for each rejected step. In lieu of repeating rows, the data files store the row "multiplicity" (the number of rejected steps plus one) in the column "mult", and the next accepted step is stored as a new row. Rows with multiplicity 0 are to be ignored.
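Reconstructing the full chain from the compressed rows amounts to repeating each row by its multiplicity, which automatically drops rows with multiplicity 0. A short sketch (the function name `expand_chain` is illustrative):

```python
import numpy as np

def expand_chain(data, mult):
    """Repeat each accepted row by its multiplicity; mult == 0 rows drop out."""
    mult = np.asarray(mult, dtype=int)
    return np.repeat(np.asarray(data), mult, axis=0)

# Row 0 was followed by one rejection (mult=2), row 1 is ignored (mult=0),
# row 2 was accepted once (mult=1).
rows = np.array([[1.0], [2.0], [3.0]])
mult = np.array([2, 0, 1])
print(expand_chain(rows, mult))  # [[1.] [1.] [3.]]
```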

These output files have strong autocorrelations, which the user must account for, either by thinning the chain or by block averaging.
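Both remedies are simple array operations on the expanded chain. A sketch of each (helper names are illustrative; the choice of stride or block size should be guided by the measured autocorrelation time):

```python
import numpy as np

def thin(chain, stride):
    """Keep every stride-th sample to reduce autocorrelation."""
    return chain[::stride]

def block_average(chain, block):
    """Average consecutive non-overlapping blocks of the given size."""
    n = (len(chain) // block) * block   # drop the ragged tail
    return chain[:n].reshape(-1, block).mean(axis=1)

x = np.arange(8.0)
print(thin(x, 4))           # [0. 4.]
print(block_average(x, 4))  # [1.5 5.5]
```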

The column "log_wgt" contains the natural logarithm of the "weight", i.e., the likelihood function.

Columns with a numerical suffix, e.g., R_43, refer to vector quantities. An entire vector is generated at each point in the Markov chain, so the full vector is stored in one row of the table. The numerical suffix ("43" in the example) refers to a fixed grid. In the case of the radii, they are stored on a fixed gravitational mass grid (the mass-radius curve). The gravitational mass grid is stored in an object in the HDF5 file called "m_grid". (The column R_43 stores the radius of a 1.4 solar mass neutron star.)
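A mass-radius curve for one sample can then be assembled by pairing each R_i column with the corresponding entry of "m_grid". A sketch, assuming the suffix i indexes directly into the grid (the helper name `radius_curve` and the tiny three-point grid are illustrative):

```python
import numpy as np

def radius_curve(m_grid, row):
    """Pair each grid mass with the radius in the matching R_i column.

    `row` maps column names to values for one MCMC sample; suffix i is
    assumed to index into m_grid."""
    idx = sorted(int(k.split("_")[1]) for k in row if k.startswith("R_"))
    return np.array([(m_grid[i], row[f"R_{i}"]) for i in idx])

# Synthetic example: a 3-point mass grid and one sample's radius columns
m_grid = np.array([1.0, 1.4, 2.0])
row = {"R_0": 12.1, "R_1": 11.9, "R_2": 11.0, "log_wgt": -3.2}
print(radius_curve(m_grid, row))
```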

Notes: The Markov chains are spread across several files and must be concatenated together for the final output. We dropped 40,000 rows from the beginning of the 3P_GW_QLMXB_PRE and 3P_GW_all files and 60,000 rows from the beginning of the 4L_GW_QLMXB_PRE and 4L_GW_all files. Rows from the beginning of the 3P_GW_all_IS and 4L_GW_all_IS runs were already dropped and are not included here. In many cases, the last row in a file is duplicated as the first row of the subsequent file and thus must be dropped.
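The concatenation with duplicate-boundary handling can be sketched as follows (the helper name `concat_chains` is illustrative; it compares each segment's first row to the previous segment's last row and drops it when they match):

```python
import numpy as np

def concat_chains(chunks):
    """Concatenate chain segments, dropping a first row that duplicates
    the previous segment's last row."""
    out = [np.asarray(chunks[0])]
    for c in chunks[1:]:
        c = np.asarray(c)
        if len(out[-1]) and len(c) and np.array_equal(c[0], out[-1][-1]):
            c = c[1:]  # boundary row repeated from the previous file
        out.append(c)
    return np.concatenate(out)

a = np.array([[1.0], [2.0]])
b = np.array([[2.0], [3.0]])  # first row duplicates a's last row
print(concat_chains([a, b]))  # [[1.] [2.] [3.]]
```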

Raw data:

Partially processed files:


Back to Andrew W. Steiner at the University of Tennessee.