06. August 2014 · 1000 Genomes and Beyond: a retrospective · Categories: Conferences, Science

The 1000 Genomes and Beyond conference, held on 24–26 June 2014 in Cambridge, UK, was the latest of the 1000 Genomes Project community meetings, marking the end of this ambitious endeavour launched in 2008. With the final phase 3 of the 1000 Genomes Project released on 24 June, this was an excellent opportunity to get an update on the release, but also to see what has been learned so far by leveraging the genetic variation catalogued in 1000 Genomes, and to glimpse future opportunities and directions.

The final phase 3 release is a catalog of genetic variants identified through low-coverage (8x) whole-genome sequencing, exome sequencing and genotyping arrays in 2,504 individuals from 26 populations (each population represented by 60–100 individuals). The catalog includes over 79 million variant sites, covering short variants (bi- and multiallelic SNPs and indels), tandem repeats and structural variants, and is expected to contain over 95% of common variants.

The 1000 Genomes Project has had a tremendous impact on our understanding of population genetics and human evolution. It has also enabled studies of population isolates, an easy and effective study design that has revealed numerous candidate loci for complex traits. One of the most well-rounded studies presented during the meeting concerned a common Greenlandic stop-gain variant in TBC1D4 conferring muscle insulin resistance and type 2 diabetes (T2D). In the discovery analysis, the variant was associated with higher plasma glucose and serum insulin levels in Greenlandic participants without previously known T2D, while the subsequent T2D case-control analysis showed a strong association with increased T2D risk. The variant has a minor allele frequency (MAF) of 17% in the Greenlandic cohort, and has been observed in only one Japanese individual among all individuals sequenced in the 1000 Genomes Project and several related large sequencing projects. The observed effect sizes are several times larger than any previous finding from large-scale GWAS of these traits, with ~60% of homozygous carriers developing T2D between 40 and 60 years of age, indicating a Mendelian-disease-like pattern of inheritance. The well-thought-out design of this study was commended, and it sparked some discussion about the frequent lack of population-based control groups in studies of extreme and rare phenotypes, where the initial ascertainment bias introduces an upward bias in reported effect sizes.

What I found particularly encouraging is that the focus of genetic studies is slowly moving toward large-scale functional and mechanistic work. This direction has been advocated for years, and finally the results are being produced. From the study of genetic variation in human DNA replication timing (rtQTLs), through high-quality, high-resolution transcriptome analysis and the investigation of the effects of loss-of-function variants on the transcriptome, to the development of new software tools for analysing sequenced variants, such as Ensembl's Variant Effect Predictor (VEP), all of these are gradually unveiling the functional consequences of genetic variation in humans. Still, some of this work is in its infancy and will require larger, better-powered studies to produce meaningful conclusions. This was nicely demonstrated during Andrew Wood's talk on epistatic effects of genetic variants on gene expression levels. His group sought to replicate findings from the first study reporting 30 epistatic interactions affecting traits (Hemani et al., Nature 2014), using genotypes imputed from the 1000 Genomes reference panel. Fourteen interaction effects were replicated; however, in each case a third variant, uncaptured in the initial study, could explain the apparent epistasis. A second study reporting epistatic effects was also published in 2014, and although data sharing between the groups helped, Wood concluded that a clear picture of epistatic interaction effects on gene expression will require larger sample sizes and whole-genome sequencing.
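This kind of third-variant explanation is easy to reproduce in a toy simulation. The sketch below is my own illustration, not Wood's analysis: the haplotype structure, sample size and effect size are all invented. An untyped causal variant C is placed only on haplotypes that carry the alleles of both tag SNPs A and B, so a scan that sees only A and B finds a strong A×B interaction on the simulated expression level, and the interaction shrinks once C is added to the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two haplotypes per individual; alleles at tag SNPs A and B are independent.
hapA = rng.integers(0, 2, size=(n, 2))
hapB = rng.integers(0, 2, size=(n, 2))
# Hypothetical untyped causal variant C sits only on haplotypes carrying
# both the A and B alleles (strong haplotype structure, assumed here).
hapC = hapA * hapB

gA, gB, gC = hapA.sum(1), hapB.sum(1), hapC.sum(1)
y = 1.0 * gC + rng.normal(size=n)  # expression driven by C alone

def t_stats(X, y):
    """OLS t-statistics for each column of the design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta / se

ones = np.ones(n)
# Model seen by a scan that only typed the tag SNPs: main effects + interaction.
X_tags = np.column_stack([ones, gA, gB, gA * gB])
t_no_C = t_stats(X_tags, y)[3]      # apparent epistasis: large |t|
# Adding the untyped causal variant C absorbs the interaction signal.
X_full = np.column_stack([ones, gA, gB, gA * gB, gC])
t_with_C = t_stats(X_full, y)[3]
```

The point of the toy model is that genotype-level products like gA·gB partially reconstruct the haplotype carrying C, so a purely additive hidden variant shows up as a statistical interaction between its tags.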

The overall conclusion of the meeting was that developments in computational biology methods will be of critical importance, and that data sharing and functional characterisation will be the biggest challenges in human genetics in the future. Richard Durbin also noted that we should be considering a 1 Million Genomes Project, but that a new initiative, new goals and new leadership would be needed.

This year’s Biology of Genomes (BoG) meeting maintained its high standard with another display of excellent and exciting science. One of my favourite presentations was given by Matthew Stephens from the University of Chicago on the topic of False Discovery Rates (FDRs).

The FDR is a basic concept in statistical testing that we all come across in our research. By controlling the FDR, we aim to limit the expected proportion of false positives among the significant loci identified by association studies. The idea is that under the null hypothesis (H0: the locus is not associated with the trait), the observed p-values are expected to be uniformly distributed (Fig. 1(a)), while under the alternative hypothesis (H1), more of the p-values should be close to zero (Fig. 1(b)). In other words, the observed distribution of p-values in a genome-wide scan should be a mixture of these two distributions. Existing FDR methods find a maximum cutoff value (Fig. 1(c)) such that results with smaller p-values are likely to be true positives from H1.
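The classic way to pick such a cutoff is the Benjamini–Hochberg step-up procedure. The simulation below is a minimal sketch of the mixture idea described above (the mixture proportions and the Beta-distributed alternative p-values are my own arbitrary choices, not from the talk): it generates mostly uniform null p-values plus a minority piled up near zero, applies Benjamini–Hochberg at the 5% level, and checks the realised false discovery proportion:

```python
import numpy as np

rng = np.random.default_rng(42)

# 9,000 null tests: p-values uniform on [0, 1], as under H0 (Fig. 1(a)).
# 1,000 true associations: p-values concentrated near zero, as under H1
# (Fig. 1(b)); a Beta(0.1, 1) draw is just a convenient stand-in.
n_null, n_alt = 9000, 1000
pvals = np.concatenate([rng.uniform(size=n_null),
                        rng.beta(0.1, 1.0, size=n_alt)])
is_null = np.arange(pvals.size) < n_null

def benjamini_hochberg(p, alpha=0.05):
    """Boolean mask of discoveries, controlling the FDR at level alpha."""
    p = np.asarray(p)
    m = p.size
    order = np.argsort(p)
    # Find the largest k with p_(k) <= alpha * k / m; call everything up to it.
    passes = p[order] <= alpha * np.arange(1, m + 1) / m
    keep = np.zeros(m, dtype=bool)
    if passes.any():
        k = np.nonzero(passes)[0].max()
        keep[order[:k + 1]] = True
    return keep

discoveries = benjamini_hochberg(pvals, alpha=0.05)
# Realised false discovery proportion: nulls among the calls.
fdp = (discoveries & is_null).sum() / max(discoveries.sum(), 1)
```

The data-dependent threshold adapts to the mixture: the more p-values pile up near zero, the further the cutoff moves to the right while the expected proportion of nulls among the calls stays below the nominal level.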


10. February 2014 · Looking back at looking forward: ASHG 2013 · Categories: Conferences, Science

Wendy offers a retrospective on the forward-looking atmosphere at last year's American Society of Human Genetics meeting in Boston.

Looking into the future is something I felt this year's annual ASHG conference focused on more than last year's. One man who certainly has form for looking into the future is the dermatologist Rudolf Happle. In the eighties, Happle predicted that a number of genetic conditions would be mosaic and, following the advent of whole-exome sequencing, his predictions have so far turned out to be correct. I was therefore rather pleased to catch him as the opening speaker in a stimulating session on mosaicism chaired by Leslie Biesecker and William Dobyns.

28. October 2013 · A neat idea to tell polygenic signal from stratification noise · Categories: Conferences, Science

One of my favorite presentations at ASHG this year was a poster given by Brendan Bulik-Sullivan from the Broad. Brendan and his colleagues attempted to answer a puzzling question which has come up quite often recently: “If we see an inflation of GWAS test statistics, is it because of polygenic risk (good) or population stratification (bad)?”

17. February 2012 · Are loss-of-function variants relevant to complex disease? · Categories: Papers

Three members of the group (James, Luke & Jeff) were involved in a substantial undertaking (led by our colleague Daniel MacArthur) to study loss-of-function (LoF) variants in otherwise “normal” human genomes. These are mutations which are predicted to obliterate the function of a gene: things like gained stop codons, coding frameshift insertions and the like. The paper has just come out in Science, and Daniel has a great write-up over at Genomes Unzipped. We made two contributions to the project (reflecting our principal interests):

  1. Because real LoF variants are (generally) selected against, but sequencing errors that look like LoF aren't, this class of variation is hugely enriched for all sorts of sequencing and annotation errors. We therefore spent a lot of effort hand-validating LoF calls (using Evoker) to try to separate the wheat from the chaff.
  2. One might expect LoF variants to be more likely than the average SNP to affect disease risk, but that didn't really seem to be the case. Only one very well-known LoF variant from the paper (the Crohn's disease NOD2 frameshift) showed any appreciable association with disease. Perhaps these variants are simply too strongly selected against to reach frequencies visible to GWAS?
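That closing speculation can be made concrete with a back-of-the-envelope power calculation. The sketch below is my own illustration, not part of the paper: it approximates the power of a standard one-degree-of-freedom additive association test for a quantitative trait, and the sample size, allele frequencies and per-allele effect sizes (in phenotypic standard deviations) are made-up round numbers. Even with a five-fold larger effect, an allele held at 0.1% frequency by selection is essentially invisible at genome-wide significance:

```python
from scipy.stats import chi2, ncx2

def gwas_power(n, maf, beta, alpha=5e-8):
    """Approximate power of a 1-df additive test for a quantitative trait.

    n     -- sample size
    maf   -- minor allele frequency
    beta  -- per-allele effect in phenotypic standard deviations
    alpha -- significance threshold (genome-wide by default)
    """
    # Non-centrality parameter of the chi-squared test statistic.
    ncp = 2 * n * maf * (1 - maf) * beta**2
    crit = chi2.isf(alpha, df=1)
    return ncx2.sf(crit, df=1, nc=ncp)

# Common variant, modest effect vs rare (selected-against) variant, big effect.
common = gwas_power(n=10_000, maf=0.20, beta=0.10)
rare = gwas_power(n=10_000, maf=0.001, beta=0.50)
```

The driver is the 2·maf·(1−maf) term: power scales with the genotypic variance the allele contributes, so selection keeping a LoF allele rare suppresses detectability far faster than a larger effect size can compensate.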