Queries the eQTL Catalogue datasets selected via qtl_search, using coordinates from stored summary-stats files (e.g. GWAS) to determine which regions to query. Each locus's results can be stored as a separate file, or merged together to form one large file with all query results.

eQTL_Catalogue.query(
  sumstats_paths = NULL,
  output_dir = "./catalogueR_queries",
  qtl_search = NULL,
  use_tabix = TRUE,
  nThread = 4,
  quant_method = "ge",
  infer_region = TRUE,
  split_files = TRUE,
  merge_with_gwas = TRUE,
  force_new_subset = FALSE,
  genome_build = "hg19",
  progress_bar = TRUE,
  verbose = TRUE
)

Arguments

sumstats_paths

A list of paths to any number of summary-stats files whose coordinates you want to use to make queries to eQTL Catalogue. If you wish to give the loci custom names, simply add these as the names of the path list (e.g. c(BST1="<path>/<to>/<BST1_file>", LRRK2="<path>/<to>/<LRRK2_file>")). Otherwise, loci will be named automatically based on their min/max genomic coordinates.

The minimum columns in these files required to make queries include:

SNP

RSID of each SNP.

CHR

Chromosome (can be in "chr12" or "12" format).

POS

Genomic position of each SNP.

...

Optional extra columns.
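As a minimal sketch, a compatible summary-stats file can be written like this (the RSIDs, coordinates, and locus name below are hypothetical placeholders, not real data):

```r
# Build a minimal summary-stats table with the required columns
# (SNP, CHR, POS); any extra columns are carried along.
sumstats <- data.frame(
  SNP = c("rs00001", "rs00002"),  # hypothetical RSIDs
  CHR = c("4", "4"),              # "chr4" format is also accepted
  POS = c(15737000, 15830000),    # genomic positions
  P   = c(5e-9, 1e-6)             # optional extra column
)
path <- file.path(tempdir(), "BST1_sumstats.tsv")
write.table(sumstats, path, sep = "\t", row.names = FALSE, quote = FALSE)

# Name the path to control the locus label in the query results:
sumstats_paths <- c(BST1 = path)
```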

output_dir

The folder you want the merged GWAS/QTL results to be saved to (set output_dir=FALSE if you don't want to save the results). If split_files=FALSE, all query results will be merged into one and saved as <output_dir>/eQTL_Catalogue.tsv.gz. If split_files=TRUE, query results will instead be split into smaller files and stored in <output_dir>/.

qtl_search

This function will automatically search for any datasets that match your search criteria. For example, if you search "Alasoo_2018", it will query the datasets:

  • Alasoo_2018.macrophage_naive

  • Alasoo_2018.macrophage_Salmonella

  • Alasoo_2018.macrophage_IFNg+Salmonella

You can be more specific about which datasets you want to include, for example by searching: "Alasoo_2018.macrophage_IFNg". You can even search by tissue or condition type (e.g. c("blood","brain")) and any QTL datasets containing those substrings (case-insensitive) in their name or metadata will be queried too.
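The substring matching described above can be sketched as follows. This is an illustration, not the package's internal code, and the dataset names are a hypothetical hard-coded list rather than a live query of the catalogue:

```r
# Hypothetical dataset names, for illustration only:
datasets <- c("Alasoo_2018.macrophage_naive",
              "Alasoo_2018.macrophage_Salmonella",
              "Alasoo_2018.macrophage_IFNg+Salmonella",
              "GTEx.brain_cortex",
              "GTEx.whole_blood")

# Case-insensitive substring matching, as qtl_search does:
qtl_search <- c("blood", "brain")
hits <- datasets[Reduce(`|`, lapply(qtl_search, function(term)
  grepl(term, datasets, ignore.case = TRUE)))]
hits
#> [1] "GTEx.brain_cortex" "GTEx.whole_blood"
```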

use_tabix

Whether to query via Tabix (use_tabix=TRUE, the default), which is roughly 17x faster than the REST API (use_tabix=FALSE).

nThread

The number of CPU cores you want to use to speed up your queries through parallelization.

quant_method

eQTL Catalogue actually contains more than just eQTL data. For each dataset, the following kinds of QTLs can be queried:

gene expression QTL

quant_method="ge" (default) or quant_method="microarray", depending on the dataset. catalogueR will automatically select whichever option is available.

exon expression QTL

*under construction* quant_method="ex"

transcript usage QTL

*under construction* quant_method="tx"

promoter, splice junction and 3' end usage QTL

*under construction* quant_method="txrev"

split_files

Save the results as one file per QTL dataset (with all loci within each file). If split_files=TRUE, this function returns the list of paths where these files were saved; the helper function gather_files() can import and merge them back together in R. If split_files=FALSE, this function instead returns one large merged data.table containing results from all QTL datasets and all loci. FALSE is not recommended when you have many large loci and/or many QTL datasets, because you can only fit so much data into memory.
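Conceptually, merging the split files back together amounts to reading each file and row-binding the results. Below is a simplified sketch of that step (the package's gather_files() helper does this for you, with parallelization); the file names and QTL column names here are hypothetical stand-ins:

```r
library(data.table)

merge_split_files <- function(file_paths) {
  # Read each per-dataset file and row-bind them,
  # filling any missing columns with NA.
  data.table::rbindlist(lapply(file_paths, data.table::fread), fill = TRUE)
}

# Demo with two small stand-in files:
p1 <- file.path(tempdir(), "Locus1.Alasoo_2018.macrophage_naive.tsv")
p2 <- file.path(tempdir(), "Locus2.Alasoo_2018.macrophage_naive.tsv")
data.table::fwrite(data.table(SNP = "rs00001", pvalue.QTL = 0.01), p1, sep = "\t")
data.table::fwrite(data.table(SNP = "rs00002", beta.QTL = 0.3), p2, sep = "\t")

GWAS.QTL <- merge_split_files(c(p1, p2))  # 2 rows, union of the columns
```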

merge_with_gwas

Whether you want to merge your QTL query results with your GWAS data (convenient, but takes up more storage).

force_new_subset

By default, catalogueR will use any pre-existing files that match your query. Set force_new_subset=TRUE to override this and force a new query.

genome_build

The genome build of your query coordinates (i.e. the files in sumstats_paths). If your coordinates are in hg19, catalogueR will automatically lift them over to hg38 (the build that eQTL Catalogue uses).

progress_bar

progress_bar=TRUE allows progress to be monitored even when multithreading is enabled. Requires the R package pbmcapply.

verbose

Show more (verbose=TRUE) or fewer (verbose=FALSE) messages.

Examples

sumstats_paths <- example_sumstats_paths()

# Merged results
# GWAS.QTL <- eQTL_Catalogue.query(sumstats_paths=sumstats_paths, qtl_search="Alasoo_2018",
#                                  nThread=1, force_new_subset=TRUE, merge_with_gwas=FALSE,
#                                  progress_bar=TRUE, split_files=FALSE)

# Merged results (parallel)
GWAS.QTL <- eQTL_Catalogue.query(sumstats_paths=sumstats_paths, qtl_search="Alasoo_2018",
                                 nThread=4, force_new_subset=TRUE, merge_with_gwas=FALSE,
                                 progress_bar=TRUE, split_files=FALSE)
#> [1] "+ Optimizing multi-threading..."
#> [1] "++ Multi-threading across QTL datasets."
#> [1] "eQTL_Catalogue:: Querying 4 QTL datasets x 3 GWAS loci (12 total)"
#> Warning: scheduled cores 1, 2, 4 encountered errors in user code, all values of the jobs will be affected
#> [1] "++ Post-processing merged results."
#> Error in data.table::rbindlist(GWAS.QTL_all, fill = T): Item 1 of input is not a data.frame, data.table or list
# Split results
# gwas.qtl_paths <- eQTL_Catalogue.query(sumstats_paths=sumstats_paths, qtl_search="Alasoo_2018",
#                                        nThread=1, force_new_subset=TRUE, merge_with_gwas=FALSE,
#                                        progress_bar=TRUE)

# Split results (parallel)
gwas.qtl_paths <- eQTL_Catalogue.query(sumstats_paths=sumstats_paths, qtl_search="Alasoo_2018",
                                       nThread=4, force_new_subset=TRUE, merge_with_gwas=FALSE,
                                       progress_bar=TRUE)
#> [1] "+ Optimizing multi-threading..."
#> [1] "++ Multi-threading across QTL datasets."
#> [1] "eQTL_Catalogue:: Querying 4 QTL datasets x 3 GWAS loci (12 total)"
#> Warning: scheduled cores 4, 3 encountered errors in user code, all values of the jobs will be affected
#> [1] "++ Returning list of split files paths."
#> [1] "Data dimensions: x "
#> Time difference of 10.9 secs
GWAS.QTL <- gather_files(file_paths = gwas.qtl_paths)
#> [1] "+ Merging 8 files."
#> [1] "+ Using 4 cores."
#> [1] "+ Merged data.table: 2 rows x 10 columns."
# Nalls et al example
if (FALSE) {
  sumstats_paths_Nalls <- list.files("Fine_Mapping/Data/GWAS/Nalls23andMe_2019",
                                     "Multi-finemap_results.txt",
                                     recursive = TRUE, full.names = TRUE)
  names(sumstats_paths_Nalls) <- basename(dirname(dirname(sumstats_paths_Nalls)))
  gwas.qtl_paths <- eQTL_Catalogue.query(sumstats_paths=sumstats_paths_Nalls,
                                         output_dir="catalogueR_queries/Nalls23andMe_2019",
                                         merge_with_gwas=TRUE, nThread=1,
                                         force_new_subset=TRUE)
}