Graphing protein databases

I’m giving a lecture next week to the Bioinformatics Masters students here about protein structure prediction. As part of the introduction to this topic, I have a traditional ‘data explosion’ slide, to illustrate the gap between the quantity of protein sequence data available and the number of solved protein structures in the PDB (hence the need for bioinformatics to help fill the gap, with good prediction algorithms). When I last gave this talk (scarily, 4 years ago), this slide was just text, a description of the then-current size of UniProt & the PDB.

Since 2006 my lecturing style has progressed somewhat; I don’t like slides with just words on them anymore, so I wanted to replace this slide rather than just update the numbers. Graphs of the growing sizes of the databases are easy to find online, but to my mind the real story here is the gap in size between the two databases (UniProt & PDB), and whether it is growing (or whether protein structure determination methods are catching up). This graph doesn’t (to my knowledge) exist, so, inspired by this question on BioStar, I set out to draw it myself.

The first task is to retrieve, from each database, its size at particular dates. For the PDB this is simple, because they distribute a CSV file of this information. You can get it too; it’s linked to here. For UniProt, it was non-obvious where to find this information. Every time there’s a new release, the webpage documenting that release gives the size of UniProt at the point of release (and its components, SwissProt and TrEMBL), but it is hard to find these pages for any release that is not current. So my approach was to download the history of UniProt from their FTP server, and use BioPython to calculate the size of each release:

[python]
import os
from Bio import SwissProt

def main():
    dirs = os.listdir("data")  # one sub-directory per UniProt release
    results = map(numbers, dirs)
    for result in results:
        print(result)

def numbers(dir):
    """Return (release date, SwissProt size, TrEMBL size) for one release."""
    directory = "data/" + dir
    # The release date is recorded in reldate.txt
    h = open(directory + "/reldate.txt")
    lines = h.readlines()
    h.close()
    date = lines[1].rstrip()  # more processing required to return just the date
    # Count the SwissProt entries
    sh = open(directory + "/uniprot_sprot.dat")
    descriptions = [record.accessions for record in SwissProt.parse(sh)]
    sprot_size = len(descriptions)
    sh.close()
    # ...and the same for TrEMBL
    th = open(directory + "/uniprot_trembl.dat")
    descriptions = [record.accessions for record in SwissProt.parse(th)]
    trembl_size = len(descriptions)
    th.close()
    return (date, sprot_size, trembl_size)

if __name__ == "__main__":
    main()
[/python]
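
For reference, the release directories themselves come from the UniProt FTP site. A minimal sketch of listing what is available with ftplib follows; the host and path here are assumptions based on the usual UniProt FTP layout, so check them against the current site before relying on this:

[python]
from ftplib import FTP

# Assumed host and path; previous_releases holds one directory per release.
ftp = FTP("ftp.uniprot.org")
ftp.login()  # anonymous login
ftp.cwd("/pub/databases/uniprot/previous_releases")
print(ftp.nlst())  # list the available release directories
ftp.quit()
[/python]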

It was only once I was coming to the end of this process (slow, because we’re dealing with 16 releases of UniProt: 150GB of data) that I found this page, which is fairly hidden away but gives the sizes of SwissProt over the last 25 years. Curses! So much effort seemingly gone to waste. However, there doesn’t appear to be a corresponding page for TrEMBL, which is much larger (being a conceptual translation of EMBL), and I wanted these numbers too, to illustrate the full scope of the problem. So my effort was not in vain.
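
To get the counts into a single file in the long DATE,DATABASE,SIZE layout that the plotting code below expects, a few extra lines are enough. This is a minimal sketch; the write_long_format helper and the output filename are illustrative, not from the original scripts:

[python]
import csv

def write_long_format(results, path):
    # results is a list of (date, sprot_size, trembl_size) tuples,
    # as returned by numbers() above; one output row per database per date.
    with open(path, "w") as out:
        writer = csv.writer(out)
        for date, sprot_size, trembl_size in results:
            writer.writerow([date, "SwissProt", sprot_size])
            writer.writerow([date, "TrEMBL", trembl_size])

# e.g. write_long_format(results, "data/sp_trembl.txt")
[/python]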

Now that we have all the numbers in an appropriate format (DATE,DATABASE,SIZE), we can draw some graphs. For this I use the ggplot2 library in R, which seems to be de rigueur for pretty visualisations these days. Here’s some code:

library(ggplot2)
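# Plot 1: the PDB on its own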
pdb <- read.table("/path/to/data/pdb.txt", sep=",")
colnames(pdb) = c("Year", "Database", "value")
pdb$Year <- as.Date(pdb$Year)
png("/path/to/graphs/uniprot_graphs/pdb.png", bg="transparent", width=800, height=600)
qplot(Year, value, data=pdb, geom="line", color=I("red")) + scale_x_date(format="%Y") + scale_y_continuous("Entries", formatter="comma")
dev.off()

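# Plot 2: SwissProt versus the PDB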
spdb <- read.table("/path/to/data/sp_pdb.txt", sep=",")
colnames(spdb) = c("Year", "Database", "value")
spdb$Year <- as.Date(spdb$Year)
png("/path/to/graphs/sp_pdb.png", bg="transparent", width=800, height=600)
qplot(Year, value, data=spdb, geom="line", group=Database, color=Database) + scale_x_date(format="%Y") + scale_y_continuous("Entries", formatter="comma")
dev.off()

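# Plot 3: TrEMBL, SwissProt and the PDB together, on a log10 scale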
all <- read.table("/path/to/data/all.txt", sep=",")
colnames(all) = c("Year", "Database", "value")
all$Year <- as.Date(all$Year)
png("/path/to/graphs/all.png", bg="transparent", width=800, height=600)
qplot(Year, value, data=all, geom="line", group=Database, color=Database) + scale_x_date(format="%Y") + scale_y_log10("Entries", breaks=c(10^4,10^5,10^6,10^7))
dev.off()

This very simple R code produces three plots, all of which are informative in different ways.

PDB

Plot 1 is a simple restatement of the PDB growth graph, which I produced just so all my graphs would look the same. It’s a pretty standard exponential curve (though admittedly the numbers are slightly smaller than those you may be used to seeing on such plots).

SwissProt vs PDB

Plot 2 compares the size of SwissProt with the size of the PDB. I’m extremely happy with this one, as it shows precisely what I wanted it to: SwissProt is much larger than the PDB, and is marching away at an increasing rate. For the record, the most recent sizes of the PDB and SwissProt in the graph are 68,998 and 522,019 respectively (compared with 40,132 and 241,365 when I last gave the protein structure lecture).

TrEMBL vs SwissProt vs PDB

The final plot is just to scare people. It includes TrEMBL, and had to be plotted on a log10 scale, because TrEMBL is another order of magnitude larger than SwissProt (12,347,303 sequences).

Addendum – further to all this, the problem of the gap between sequence and structure is actually more stark than presented here. Although the PDB today (11/11/10) contains 69,162 structures, they are highly redundant, and there are only 39,724 unique sequences of known structure.

7 comments

  1. Hi – nice plots 🙂

    While the PDB structures are fairly redundant, the protein sequence data is also redundant due to the high level of homology between many of the sequences. It would be interesting to add another line for the number of COGs identifiable within TrEMBL at each date, to see how close we are to turning this particular exponential curve into an s-shaped one as we get closer to sampling most of the space of biological proteins.

  2. Seems like you could have done this a lot quicker with a shell script instead of parsing the whole UniProt file. Just as proof of concept, I put your trembl parsing/counting code in testpython.py on my machine, and did a simple one-liner with ‘grep’ and ‘wc -l’ at the prompt. The one-liner also had the advantage of not using a massive amount of memory (actually, that’s what I was searching for today: a good way to use SwissProt.parse on the trembl file without exhausting my machine’s memory). Comparison:

    llc@lewis-lab$ time ./testpython.py
    14555721

    real 43m44.747s
    user 42m49.050s
    sys 0m38.720s

    llc@lewis-lab$ time grep ^ID uniprot_trembl.dat | wc -l
    14555721

    real 6m10.584s
    user 3m35.610s
    sys 0m31.130s

  3. Granted, the time-consuming part isn’t Python. It’s parsing the whole record when you aren’t using the data. Just going line by line with a “for line in open(‘uniprot_trembl.dat’)” and incrementing a counter if line[:2]==’ID’ (or you could use line[:2]==’//’) gives (see the sketch after these comments):


    llc@lewis-lab$ time ./newpy.py
    14555721

    real 6m6.583s
    user 4m29.270s
    sys 0m37.600s

  4. The funniest part is that looking at this solved my problem. It wasn’t the SwissProt parsing that was eating my memory up. It was having DEBUG set to True in the Django instance that I was inserting the data into.
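
For reference, a minimal version of the line-counting approach described in comment 3 might look like the following. This is an illustrative sketch, not the commenter’s actual newpy.py:

[python]
# Count TrEMBL entries by counting ID lines, without parsing each record.
count = 0
for line in open("uniprot_trembl.dat"):
    if line[:2] == "ID":  # each flat-file entry starts with an ID line
        count += 1
print(count)
[/python]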
