Reader Comments
From sequencing/resequencing to re-annotation/integration
Posted by ramy on 09 Jun 2009 at 21:12 GMT
This article lists very interesting challenges and questions that will be answered in the coming decade. With the revolution stirred by next-gen sequencing machines, sequencing/resequencing steps have become quick and cheap. Thus, data generation is the least of our worries. However, as the article appropriately discusses, the problem is what to sequence and then how to make sense of the piles of data.
We will very soon have 5,000 fully sequenced prokaryotic genomes, but, as quick annotation tools are being developed, we realize very well that more genomes annotated = more errors propagated.
In addition to high-speed and high-performance computation, we need, more than ever, high-quality annotation and re-annotation. I emphasize re-annotation because--for example--if you go to a genome in NCBI, you still find the same old annotation errors and inconsistencies, and, to get the right answer, you have to click through many, many links.
Re-annotation and integration of experimental data (as suggested in this article) will be the rate-limiting steps in the next decade as we attempt to cope with the billions of letters coming out of the sequencing machines.