Louvain Clustering basis


I’ve been following this notebook to run the pySCENIC workflow on a single-cell dataset from flies.

It uses Scanpy for the standard scRNA-seq analysis and pySCENIC for the SCENIC protocol, and then merges the two analyses into one loom file for visualisation in SCope.

Here’s my code:

import scanpy as sc
import numpy as np
import loompy as lp
from MulticoreTSNE import MulticoreTSNE as TSNE

# path to unfiltered loom file (this will be created in the optional steps below)
f_loom_path_unfilt = "dm_unfiltered.loom"

# path to loom file with basic filtering applied (this will be created in the "initial filtering" step below). Optional.
f_loom_path_scenic = "dm_filtered_scenic.loom"

# path to anndata object, which will be updated to store Scanpy results as they are generated below
f_anndata_path = "anndata.h5ad"

# path to pyscenic output
f_pyscenic_output = "pyscenic_output.loom"

# loom output, generated from a combination of Scanpy and pySCENIC results:
f_final_loom = 'dm_scenic_integrated-output.loom'

sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.set_figure_params(dpi=150, fontsize=10, dpi_save=600)

# Set maximum number of jobs for Scanpy.
sc.settings.njobs = 20

f_mtx_dir = 'filtered_feature_bc_matrix/'
adata = sc.read_10x_mtx(
    f_mtx_dir,                  # the directory with the `.mtx` file
    var_names='gene_symbols')   # use gene symbols for the variable names (variables-axis index)

row_attrs = {
    "Gene": np.array(adata.var.index),
}
col_attrs = {
    "CellID": np.array(adata.obs.index),
    "nGene": np.array(np.sum(adata.X.transpose() > 0, axis=0)).flatten(),
    "nUMI": np.array(np.sum(adata.X.transpose(), axis=0)).flatten(),
}

lp.create( f_loom_path_unfilt, adata.X.transpose(), row_attrs, col_attrs )

# read unfiltered data from a loom file
adata = sc.read_loom( f_loom_path_unfilt )

nCountsPerGene = np.sum(adata.X, axis=0)
nCellsPerGene = np.sum(adata.X>0, axis=0)

# Show info
print("Number of counts (in the dataset units) per gene:", nCountsPerGene.min(), " - " ,nCountsPerGene.max())
print("Number of cells in which each gene is detected:", nCellsPerGene.min(), " - " ,nCellsPerGene.max())


# pySCENIC thresholds
nCells = adata.X.shape[0]
minCountsPerGene = 3 * .01 * nCells  # 3 counts in 1% of cells
print("minCountsPerGene: ", minCountsPerGene)

minSamples=.01*nCells # 1% of cells
print("minSamples: ", minSamples)

# simply compute the number of genes per cell (computes the 'n_genes' column)
sc.pp.filter_cells(adata, min_genes=0)

# mito and genes/counts cuts
mito_genes = adata.var_names.str.startswith('mt:')

# for each cell compute fraction of counts in mito genes vs. all genes
adata.obs['percent_mito'] = np.sum(
    adata[:, mito_genes].X, axis=1).A1 / np.sum(adata.X, axis=1).A1

# add the total counts per cell as observations-annotation to adata
adata.obs['n_counts'] = adata.X.sum(axis=1).A1

# initial cuts
sc.pp.filter_cells(adata, min_genes=200 )
sc.pp.filter_genes(adata, min_cells=3 )

adata = adata[adata.obs['n_genes'] < 4000, :]
adata = adata[adata.obs['percent_mito'] < 0.15, :]

adata.write( f_anndata_path )

# create basic row and column attributes for the loom file:
row_attrs = {
    "Gene": np.array(adata.var_names),
}
col_attrs = {
    "CellID": np.array(adata.obs_names),
    "nGene": np.array(np.sum(adata.X.transpose() > 0, axis=0)).flatten(),
    "nUMI": np.array(np.sum(adata.X.transpose(), axis=0)).flatten(),
}
lp.create( f_loom_path_scenic, adata.X.transpose(), row_attrs, col_attrs)

## Further pre-processing of expression data

# save a copy of the raw data
adata.raw = adata

# Total-count normalize (library-size correct) to 10,000 reads/cell
sc.pp.normalize_per_cell(adata, counts_per_cell_after=1e4)

# log transform the data.
sc.pp.log1p(adata)
# identify highly variable genes.
sc.pp.highly_variable_genes(adata, min_mean=0.0125, max_mean=3, min_disp=0.5)
sc.pl.highly_variable_genes(adata, save=".pdf")

# keep only highly variable genes:
adata = adata[:, adata.var['highly_variable']]

# regress out total counts per cell and the percentage of mitochondrial genes expressed
sc.pp.regress_out(adata, ['n_counts', 'percent_mito'] ) #, n_jobs=args.threads)

# scale each gene to unit variance, clip values exceeding SD 10.
sc.pp.scale(adata, max_value=10)

# update the anndata file:
adata.write( f_anndata_path )

# principal component analysis
sc.tl.pca(adata, svd_solver='arpack')
sc.pl.pca_variance_ratio(adata, log=True, save=".pdf")
adata.write( f_anndata_path )

# neighborhood graph of cells (determine optimal number of PCs here)
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=30)
# compute UMAP
sc.tl.umap(adata)

# tSNE
tsne = TSNE(n_jobs=20)
adata.obsm['X_tsne'] = tsne.fit_transform(adata.X)
adata.write( f_anndata_path )

# cluster the neighbourhood graph
sc.tl.louvain(adata, resolution=0.4)

sc.pl.umap(adata, color=['louvain'], save="_clusters.pdf")

The last part of the code (clustering) results in the following figure:

[Screenshot: UMAP plot coloured by Louvain cluster]

My concern is that the pink cluster, labelled cluster 6 in this plot, is unexpectedly spread across two different locations in the plot. Could someone help me understand why this might happen when the two groups appear to be clearly different communities?

One reason I can think of is that the resolution=0.4 parameter affects the clustering, but I wanted to know whether there are any other factors that could lead to this that I might be missing.

Thank you

Hi @AIqbal94,

Firstly, it is difficult to trust the UMAP representation of a single-cell dataset, as it shows only 2 dimensions of what is actually a much higher-dimensional space. Thus, the two parts of the pink cluster can be more similar than you expect, based on variance that UMAP cannot represent in these two dimensions. Clustering is done on the single-cell kNN graph, which is a better representation than the 2D UMAP.
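One way to convince yourself of this is to check whether the cells of cluster 6 actually form a single connected community in the kNN graph, rather than judging by the 2D layout. A minimal sketch below uses a toy graph in place of adata.obsp['connectivities'] (where scanpy stores the neighbour graph after sc.pp.neighbors); in your data you would subset that matrix to the cluster-6 cells instead:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Toy stand-in for adata.obsp['connectivities'], the kNN graph that
# Louvain actually clusters. 6 cells: 0-2 and 3-5 are two tight groups
# joined by a single edge (2-3), so a 2D embedding may well draw them
# as two separate blobs even though they are one community.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
rows, cols = zip(*edges)
graph = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(6, 6))

# Subset the graph to the cells of the suspicious cluster (here: all 6)
# and count connected components within that subgraph.
cluster_cells = np.arange(6)
sub = graph[cluster_cells][:, cluster_cells]
n_comp, _ = connected_components(sub, directed=False)
print(n_comp)  # 1: a single graph community despite two apparent blobs
```

If this returns more than one component for your real cluster 6, the two blobs are genuinely disconnected in the graph and a higher resolution (or subclustering) should split them cleanly.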

On a more practical note, you can subcluster this cluster using the restrict_to parameter in sc.tl.louvain and sc.tl.leiden. In general, though, it is also likely that a higher-resolution global clustering will separate the two groups you see in the UMAP.

Hi @LuckyMD,

Thank you so much for your reply. That really helps.