# Troubles with sapply

Say your function in sapply returns a matrix with a variable number of rows. For example

ff <- function( i ) matrix( i, ncol= 3, nrow= i )
pipa <- sapply( 1:3, ff, simplify= "array" )


sapply is too stupid to see the pattern here (or maybe I don’t know how to cast the return value into an appropriate shape…). The result of the above is a list:

[[1]]
     [,1] [,2] [,3]
[1,]    1    1    1

[[2]]
     [,1] [,2] [,3]
[1,]    2    2    2
[2,]    2    2    2

[[3]]
     [,1] [,2] [,3]
[1,]    3    3    3
[2,]    3    3    3
[3,]    3    3    3


However, we can turn this list of matrices into a simple matrix using Reduce:

pipa <- Reduce( rbind, pipa )
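The same collapse can also be done with do.call, which hands the whole list to rbind in a single call. A minimal, self-contained sketch (rebuilding the list with lapply so it runs on its own):

```r
# The list of matrices with 1, 2 and 3 rows, as returned by sapply above
pipa <- lapply( 1:3, function( i ) matrix( i, ncol= 3, nrow= i ) )

# do.call passes all list elements to rbind at once; the result is
# identical to the nested rbind calls that Reduce performs
m <- do.call( rbind, pipa )
dim( m )  # 6 rows (1 + 2 + 3), 3 columns
```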


# biomaRt

The biomaRt package allows you to query the gigantic Ensembl database directly from R. Since this happens only once in a while, I find myself re-reading the biomaRt vignette every time I use it. Here are the crucial points for the most common application.

library( biomaRt )
ensembl <- useMart( "ensembl", dataset= "hsapiens_gene_ensembl" )
f <- listFilters( ensembl )
a <- listAttributes( ensembl )


The data frames f and a hold the available filters and attributes, respectively. To retrieve information, we need to specify by which key we are searching (filters), which keys we want to retrieve from the database (attributes), what the values of our search keys are (values), and from which mart we want to retrieve the information.

g <- c( "CCL2", "DCN", "LIF", "PLAU", "IL6" )
d <- getBM(
  attributes= c( "entrezgene", "description", "uniprot_genename" ),
  filters= "hgnc_symbol",
  values= g,
  mart= ensembl )


This returns the following:

  entrezgene                                                      description uniprot_genename
1       6347    chemokine (C-C motif) ligand 2 [Source:HGNC Symbol;Acc:10618]             CCL2
2       1634                            decorin [Source:HGNC Symbol;Acc:2705]              DCN
3       3569 interleukin 6 (interferon, beta 2) [Source:HGNC Symbol;Acc:6018]              IL6
4       3976         leukemia inhibitory factor [Source:HGNC Symbol;Acc:6596]              LIF
5       5328   plasminogen activator, urokinase [Source:HGNC Symbol;Acc:9052]             PLAU


OK, one problem that presents itself is that if there are multiple matching values for a given key, you get multiple lines for that key. It may be useful to aggregate this information.

g <- c( "BTC" )
d <- getBM(
  attributes= c( "uniprot_genename", "description", "ensembl_gene_id" ),
  filters= "hgnc_symbol",
  values= g,
  mart= ensembl )


will return two lines:

  uniprot_genename                                description ensembl_gene_id
1              BTC betacellulin [Source:HGNC Symbol;Acc:1121] ENSG00000261530
2              BTC betacellulin [Source:HGNC Symbol;Acc:1121] ENSG00000174808


We can use aggregate to collapse the ensembl IDs:

ff <- function( x ) { x <- x[ !is.na( x ) & x != "" ] ; paste( unique( x ), collapse= ";" ) }
d <- aggregate( d, by= list( d$uniprot_genename ), FUN= ff )[,-1]


The ff function ignores empty strings and NAs. Also, we remove the first column, as it is added by the aggregate function. The result:

  uniprot_genename                                description                 ensembl_gene_id
1              BTC betacellulin [Source:HGNC Symbol;Acc:1121] ENSG00000261530;ENSG00000174808

# Pathway analysis in R and BioConductor

There are many options for doing pathway analysis with R and BioConductor. First, it is useful to get the KEGG pathways:

library( gage )
kg.hsa <- kegg.gsets( "hsa" )
kegg.gs2 <- kg.hsa$kg.sets[ kg.hsa$sigmet.idx ]


Of course, “hsa” stands for Homo sapiens, “mmu” would stand for Mus musculus etc. Incidentally, we can immediately run an analysis using gage. However, gage is tricky; note that by default, it makes a pairwise comparison between the samples in the reference and treatment groups. Also, you just have the two groups — no complex contrasts like in limma.

res <- gage( E, gsets= kegg.gs2,
  ref= which( group == "Control" ),
  samp= which( group == "Treatment" ),
  compare= "unpaired", same.dir= FALSE )


Now, some filthy details about the parameters of gage.

• E is the matrix with expression data: columns are arrays and rows are genes. If you use a limma EList object to store your data, this is just the E member of the object (rg$E, for example). However, and this is important, gage (and KEGG and others) are driven by Entrez gene identifiers, and this is not what you usually have when you start the analysis. To get the correct matrix, you need to

• select only the genes with Entrez IDs,
• make sure that there are no duplicates,
• change the row names of E to Entrez IDs.
• gsets is just a list of character vectors; the list names are the pathways / gene sets, and the character vectors must correspond to the row names of E.
• ref and samp are the indices of the “reference” and “sample” (treatment) groups. These cannot be logical vectors. Only two groups can be compared at a time (so, for example, you cannot test for an interaction).
• compare — by default, gage makes a paired comparison between the “reference” and “sample” sets, which of course requires exactly the same number of samples in both sets. Use “unpaired” for most of your needs.
• same.dir — if FALSE, then absolute fold changes are considered; if TRUE, then up- and down-regulated genes are considered separately.
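The three steps for converting the row names can be sketched as follows. This is my own minimal sketch with toy data, not code from gage; it assumes a hypothetical annotation data frame genes with one row per row of E and an EID column holding the Entrez IDs (adapt the names to your own annotation):

```r
# Toy expression matrix: 4 probes x 2 arrays (stand-ins for real data)
E <- matrix( rnorm( 8 ), nrow= 4,
             dimnames= list( paste0( "probe", 1:4 ), c( "a1", "a2" ) ) )

# Hypothetical annotation: one Entrez ID per row of E; NA means no ID,
# and probe4 duplicates the ID of probe1
genes <- data.frame( EID= c( "6347", "1634", NA, "6347" ) )

# Keep only rows with an Entrez ID, dropping duplicates, then rename
sel <- !is.na( genes$EID ) & !duplicated( genes$EID )
E2  <- E[ sel, , drop= FALSE ]
rownames( E2 ) <- genes$EID[ sel ]
```

After this, E2 can be handed to gage directly.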

To visualise the changes on the pathway diagram from KEGG, one can use the package pathview. However, there are a few quirks when working with this package. First, the package requires a vector or a matrix with, respectively, names or row names that are Entrez IDs. By the way, if I want to visualise, say, the logFC from topTable, I can create a named numeric vector in one go:

setNames( tt$logFC, tt$EID )
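As a toy illustration (made-up Entrez IDs and fold changes, not real results):

```r
# Hypothetical excerpt of a topTable result with an Entrez ID column
tt <- data.frame( EID= c( "6347", "1634", "3569" ),
                  logFC= c( 2.1, -0.8, 1.5 ) )

# Named numeric vector: names are Entrez IDs, values are log fold changes
fc <- setNames( tt$logFC, tt$EID )
fc[ "1634" ]  # look up a fold change by Entrez ID
```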


Another useful package is SPIA; SPIA only uses fold changes and predefined sets of differentially expressed genes, but it also takes the pathway topology into account.

# New editor in WordPress and how to circumvent it

I have problems like everyone else with the new editor, except that my main issue is editing existing posts.

1. Why I don’t want to use the new editor, in order of importance:

– it doesn’t take up all of the screen estate, only a narrow column in the middle of one of the two 24″ monitors I use to work with text.
– it meddles with my HTML code. I have many “tl;dr” (very long) posts (not here, on another blog) which I like to format with empty lines in order to facilitate editing.
– it is buggy (read: non-functional) in my browser of choice (experimental Opera running on Ubuntu). Now, I don’t ask WordPress to adapt to my needs; but if everything else works fine with that browser, why would I want to switch to something I don’t like just because one beta feature on one site doesn’t work?

The first two problems make the editor unusable; the third can be circumvented (I could work with WordPress and WordPress only in Firefox, for example).

2. How to circumvent the new editor

Of course, clicking on the little pencil icon on your blog post takes you to the new editor. To use the old editor, the only way is to go to “Dashboard” -> “Posts”, search for the post you would like to edit, and click on “Edit”.

3. A better solution

Another, better solution would be to add a link rewrite plugin. The link to the old style editor looks like this: https://yourblogname.wordpress.com/wp-admin/post.php?post=1234&action=edit

Since the link to the new editor looks like this:

https://wordpress.com/post/BlogIDNumber/1234

I think that a regular expression rewriting your links (for example, a simple bookmarklet) should do the trick. Essentially, you would catch the regular expression “https://wordpress.com/post/BlogIDNumber/([0-9]*)” and replace the link by “https://yourblogname.wordpress.com/wp-admin/post.php?post=$1&action=edit”. The code below is untested and you should use it at your own risk. There is actually quite an easy piece of JavaScript code that you can add as a bookmark. You click the bookmark, and all your new-editor links are converted into old-editor links. You may start with the following code:

javascript:(function(){
var m = /wordpress.com\/post\/YourBlogID\/([0-9]*)/ ;
var r = 'yourblogname.wordpress.com/wp-admin/post.php?post=$1&action=edit';
var links = window.document.getElementsByTagName('a');
for( var i = 0, l; l = links[i]; i++ ) {
  if( l.href )
    l.href = l.href.replace( m, r );
}
})();


Replace “yourblogname” by your blog name (e.g. “logfc” in case of my blog); replace “YourBlogID” by your blog ID. Add a bookmark that contains, as reference, the above code. Clicking on the bookmark should replace all “new style links” by “old style links”.

Here is an example screenshot from Google Chrome. You see that in the field where you would normally put your URL (http…), you enter the above code (except that you replace “YourBlogID” by the actual ID and “yourblogname” by the actual name):

The downside is that you will need a bookmark for each blog that you are writing.

# Sloppy Science

Last week, Science published a paper by Rodriguez and Laio on a density-based clustering algorithm. As a non-expert, I found the results actually quite good compared to the standard tools that I use in my everyday work. I even implemented the algorithm as an R package (soon to be published on CRAN, look out for “fsf”).

However, there are problems with the paper. More than one.

1. The authors claim that the density for each sample is determined with a simple formula: it is the number of other samples within a certain distance. This does not add up, since the density would then always be a whole number. It is obvious from the figures that this is not the case. When you look up the original Matlab code in the supplementary material, you see that the authors actually use a Gaussian kernel for the density calculation.

2. If you use the simple density count as described in the paper, the algorithm will not and cannot work. Imagine a relatively simple case with two distinct clusters. Imagine that in one cluster there is a sample A with density 25, and in the other cluster there are two samples, B and C, with identical densities of 24. This is actually quite likely to happen. The algorithm now determines, for each sample, $\delta$: the distance to the nearest sample with higher density. The whole idea of the algorithm is that for putative cluster centers, this distance will be very high, because it will point to the center of another cluster.

However, with ties we have the following problem. If we choose the strict inequality described by the authors, then both samples B and C (which have identical density 24) will be assigned a large $\delta$ value and will become cluster center candidates. If we instead choose a weak inequality, then B will point to C, and C to B, and both of them will have a small $\delta$.

Therefore, we either have both B and C as equivalent cluster center candidates, or neither of them. No wonder that the authors never used this approach!
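The tie problem is easy to demonstrate in a few lines of R. This is my own toy illustration, not the authors’ code: three samples on a line, with B and C tied at density 24, and $\delta$ computed with the strict inequality:

```r
# Toy 1D example: positions and densities; B and C tie at density 24
pos <- c( A= 0, B= 10, C= 11 )
rho <- c( A= 25, B= 24, C= 24 )

# delta with a strict inequality: distance to the nearest sample with
# strictly higher density (Inf if no such sample exists)
delta <- sapply( names( pos ), function( i ) {
  higher <- rho > rho[ i ]
  if( !any( higher ) ) return( Inf )
  min( abs( pos[ higher ] - pos[ i ] ) )
} )

# B and C ignore each other (neither is strictly denser) and both
# point all the way to A, so both get a large delta
delta
```

With a weak inequality instead, B and C would point to each other and both would get a small $\delta$; either way, they behave identically.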

3. The authors explicitly claim that their algorithm can “automatically find the correct number of clusters.” This does not seem to be true; at least, there is nothing in the original paper that warrants this statement. If you study their Matlab code, you will find that the selection of cluster centers is done manually, by the user selecting a rectangle on the screen. Frankly, I cannot even comment on that; this is outrageous.

I think that Science might have done a great disservice to the authors — everyone will hate them for having published in Science a sloppy, half-baked paper that others would get rejected in PLoS ONE. I know I do :-)

# Copy text from images

All the elements were there. Algorithms freely available; implementations ready to download, for free. It just took one clever person to FINALLY make it: a seamless way to copy text from images. At least in Chrome, for now, but since it is Open Source, I guess it is just a matter of time before we see it as a built-in feature in Firefox and other browsers. For now, use the Project Naptha Chrome browser extension. The main project page is also an interesting read.

Words on the web exist in two forms: there’s the text of articles, emails, tweets, chats and blogs— which can be copied, searched, translated, edited and selected— and then there’s the text which is shackled to images, found in comics, document scans, photographs, posters, charts, diagrams, screenshots and memes. Interaction with this second type of text has always been a second class experience, the only way to search or copy a sentence from an image would be to do as the ancient monks did, manually transcribing regions of interest.

(from the Naptha web site)

It works really nicely, assuming that you select the “English tesseract” option from the “Languages” sub-menu; the off-line JavaScript implementation of Ocrad is not very effective, in contrast to the cloud-based Tesseract service.

# Rotating movies with RGL

Learned another thing today: it is very simple to create animated GIFs with the rgl package using the built-in functions spin3d, play3d and movie3d.

library( pca3d )
library( rgl )
data( metabo )
pca <- prcomp( metabo[,-1], scale.= TRUE )
pca3d( pca, group= metabo[,1] )
rot <- spin3d( axis= c( 0, 1, 0 ) )
movie3d( rot, duration= 12 )


Here is the result: