uORF-Tools—Workflow for the determination of translation-regulatory upstream open reading frames

Ribosome profiling (ribo-seq) provides a means to analyze active translation by determining ribosome occupancy in a transcriptome-wide manner. The vast majority of ribosome protected fragments (RPFs) resides within the protein-coding sequence of mRNAs. However, reads are also commonly found within the transcript leader sequence (TLS, also known as the 5' untranslated region) preceding the main open reading frame (ORF), indicating the translation of regulatory upstream ORFs (uORFs). Here, we present a workflow for the identification of translation-regulatory uORFs. Specifically, uORF-Tools uses Ribo-TISH to identify uORFs within a given dataset and generates a uORF annotation file. In addition, a comprehensive human uORF annotation file, based on 35 ribo-seq files, is provided, which can serve as an alternative input file for the workflow. To assess the translation-regulatory activity of the uORFs, stimulus-induced changes in the ratio of the RPFs residing in the main ORFs relative to those found in the associated uORFs are determined. The resulting output file allows for the easy identification of candidate uORFs that have translation-inhibitory effects on their associated main ORFs. uORF-Tools is available as a free and open Snakemake workflow at https://github.com/Biochemistry1-FFM/uORF-Tools. It is easily installed and all necessary tools are provided in a version-controlled manner, which also ensures lasting usability. uORF-Tools is designed for intuitive use and requires only limited computing time and resources.

1 Introduction
uORF-Tools is a workflow and a collection of tools for the analysis of upstream open reading frames (uORFs). The workflow is based on the workflow management system snakemake [1] and handles the installation of all dependencies via bioconda [2], as well as all processing steps. The source code of uORF-Tools is open source and available under the GNU General Public License v3. Installation and basic usage are described below. This documentation was last updated 08.03.2019. An up-to-date web-based version of the documentation can be found on ReadTheDocs.

Program flowchart
The flowchart (see Figure 1) describes the processing steps of the workflow and how they are connected.

Directory table
The output is written to a directory structure (see Figure 2) that corresponds to the workflow steps.
• uORF-Tools: contains the workflow tools.
  - schemas: validation templates for input files
  - scripts: scripts used by the snakemake workflow
  - templates: templates for the config.yaml and the samples.tsv

Input/Output files
The following table contains explanations for each of the input/output files:
• annotation.gtf: user-provided annotation file with genomic features
• genome.fa: user-provided genome file containing the genome sequence
• genome.fa.fai: index file of the genome file
• sizes.genome: file containing the sizes of each genome sequence in the genome file
• <method>-<condition>-<replicate>.bam: user-provided alignment files (or created using the preprocessing workflow)
• <method-condition-replicate>.bam.bai: index file of the alignment files
• <condition-replicate>.bam.para.py: parameter file generated by Ribo-TISH
• <method>.log: files containing the process log for each method
• <condition-replicate>-newORFs.tsv: Ribo-TISH output file containing newly discovered ORFs (significant only)
• <condition-replicate>-newORFs.tsv_all.txt: Ribo-TISH output file containing newly discovered ORFs (all)
• <condition-replicate>-qual.txt: Ribo-TISH quality control report (text file)
• <condition-replicate>-qual.pdf: Ribo-TISH quality control report (with illustrations)
• annotation.bb: input annotation in BigBed format for genome browser visualization
• annotation.bed: input annotation in BED format for genome browser visualization
• annotation.bed6: input annotation in BED6 format for genome browser visualization
• annotation-woGenes.gtf: input annotation filtered exclusively for gene features
• <method-condition-replicate>.

uORFs_regulation.tsv
Description of the columns present in the final output file (Supplement Table 3):
• transcript_id: transcript id of the main open reading frame (mORF)
• uORF_id: id of the potential upstream open reading frame (uORF), derived from the mORF id
• Ratio: one column per sample, giving the ratio of read counts for the mORF and the uORF
• a column with the standard deviation of the changes of the ratio of the relative uORF activities of treatment vs. control
• a column with the binary logarithm (log2) fold change of the ratio of the relative uORF activities of treatment vs. control

Tool Parameters
Special parameters and versions used for the most important tools; standard input/output parameters were omitted.

Requirements
In the following, we describe all files and tools required to run our workflow.

Tools

miniconda3
This workflow is based on the workflow management system snakemake [1], which downloads and installs all necessary dependencies via conda. We therefore strongly recommend installing miniconda3 with Python 3.7. After downloading the miniconda3 version suiting your Linux system, execute the downloaded bash file and follow the instructions given.
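A minimal installation sketch, assuming the 64-bit Linux installer and the download URL currently used by the Miniconda project (check the Miniconda website if the link has moved):

$ # download the installer, run it, and follow the interactive prompts
$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ bash Miniconda3-latest-Linux-x86_64.sh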

snakemake
The uORF-Tools require snakemake (version == 5.4.5), which can be installed via conda using the following command:

$ conda create -c conda-forge -c bioconda \
  -n snakemake snakemake==5.4.5

This creates a new conda environment called snakemake and installs snakemake into the environment. The environment can be activated using:

$ conda activate snakemake

and deactivated using:

$ conda deactivate

uORF-Tools
Using the workflow requires the uORF-Tools. The latest version is available on our GitHub page, and we suggest that you download the uORF-Tools into your project directory. The following command creates an example directory and changes into it:

$ mkdir project; cd project;

Now, download and unpack the latest version of the uORF-Tools (a sketch is shown below). The uORF-Tools are then located in a subdirectory of your workflow directory.
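One way to fetch and unpack the tools, assuming GitHub's standard archive URL scheme (for reproducibility, pin a tagged release instead of master):

$ # download a snapshot of the repository, unpack it, and clean up
$ wget https://github.com/Biochemistry1-FFM/uORF-Tools/archive/master.tar.gz
$ tar -xzf master.tar.gz
$ mv uORF-Tools-master uORF-Tools; rm master.tar.gz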

Input Files
Several input files are required to run our workflow: a genome sequence (.fa), an annotation file (.gtf), and the alignment files (.bam).

genome.fa and annotation.gtf
We recommend retrieving both the genome and the annotation files for mouse and human from GENCODE [3] and for other species from Ensembl Genomes [4].

input .bam files
These are the input files provided by you (the user). "For best performance, reads should be trimmed (to ~29 nt RPF length) and aligned to genome using end-to-end mode (no soft-clip). Intron splicing is supported. Some attributes are needed such as NM, NH and MD. For STAR, '--outSAMattributes All' should be set. bam file should be sorted and indexed by samtools." (Ribo-TISH requirements, see https://github.com/zhpn1024/ribotish).
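A sketch of an alignment run satisfying these requirements, assuming STAR and samtools are installed; the index directory, input fastq, and sample name are placeholders:

$ # align trimmed RPF reads end-to-end, emit all SAM attributes,
$ # and produce a coordinate-sorted bam
$ STAR --runThreadN 8 --genomeDir star_index \
    --readFilesIn trimmed_reads.fastq \
    --alignEndsType EndToEnd --outSAMattributes All \
    --outSAMtype BAM SortedByCoordinate \
    --outFileNamePrefix RIBO-A-1.
$ # rename to the workflow's <method>-<condition>-<replicate>.bam scheme and index
$ mv RIBO-A-1.Aligned.sortedByCoord.out.bam RIBO-A-1.bam
$ samtools index RIBO-A-1.bam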
Please ensure that you move all input .bam files into a folder called bam:

$ mkdir bam
$ mv *.bam bam/

sample sheet and configuration file
In order to run the uORF-Tools, you have to provide a sample sheet and a configuration file. There are templates for both files available in the uORF-Tools folder.
Copy the templates of the sample sheet and the configuration file into the uORF-Tools folder:

$ cp uORF-Tools/templates/samples.tsv uORF-Tools/
$ cp uORF-Tools/templates/config.yaml uORF-Tools/

Customize the config.yaml using your preferred editor. It contains the following variables:
• taxonomy: Specify the taxonomic group of the used organism in order to ensure the correct removal of reads mapping to ribosomal genes (Eukarya, Bacteria, Archaea). (Option for the preprocessing workflow)
• adapter: Specify the adapter sequence to be used. If not set, Trim Galore will try to determine it automatically. (Option for the preprocessing workflow)
• samples: The location of the sample sheet created in the previous step.
• genomeindexpath: If the STAR genome index was already precomputed, you can specify the path to the files here in order to avoid recomputation. (Option for the extended workflow)
• uorfannotationpath: If a uORF annotation file was already precomputed, you can specify the path to the file here. Please make sure that the file has the same format as the uORF_annotation_hg38.csv file provided in the git repository (i.e. same number of columns, same column names).
• alternativestartcodons: Specify a comma-separated list of alternative start codons.
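For illustration, a filled-in config.yaml might look like the following (shown via cat; all values are placeholders consistent with the variable descriptions above, and the alternative start codons are only an example):

$ cat uORF-Tools/config.yaml
taxonomy: "Eukarya"
adapter: ""
samples: "uORF-Tools/samples.tsv"
genomeindexpath: ""
uorfannotationpath: ""
alternativestartcodons: "GTG,TTG"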
Edit the sample sheet corresponding to your project. It contains the following variables:
• method: Indicates the method used for this project, here RIBO for ribosome profiling.
• condition: Indicates the applied condition (A, B).
• replicate: Used to distinguish between the different replicates (e.g. 1, 2, ...).
• inputFile: Indicates the bam file for a given sample.

Please make sure that you have at least two replicates for each condition, and ensure that the treatment precedes the control alphabetically (e.g. A: treatment, B: control), as in the samples.tsv template.
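An illustrative sample sheet, assuming two conditions with two replicates each (columns are tab-separated; the bam file names are placeholders):

$ cat uORF-Tools/samples.tsv
method	condition	replicate	inputFile
RIBO	A	1	bam/RIBO-A-1.bam
RIBO	A	2	bam/RIBO-A-2.bam
RIBO	B	1	bam/RIBO-B-1.bam
RIBO	B	2	bam/RIBO-B-2.bam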

cluster.yaml
In the template folder, we provide two cluster.yaml files needed by snakemake in order to run on a cluster system:
• sge-cluster.yaml for grid-based queuing systems
• torque-cluster.yaml for torque-based queuing systems

7 Example workflow
The retrieval of input files and running the workflow, both locally and on a server cluster via a queuing system, is demonstrated using an example with data available from our FTP server. Ensure that you have miniconda3 installed and a conda environment set up. Please refer to the Tools Section (6.1) for details on the installation.

Setup
First of all, we start by creating the project directory and changing to it.
$ mkdir project; cd project;

We then download the latest version of the uORF-Tools into the newly created project folder and unpack it, using the download commands shown in the uORF-Tools section above.

Retrieve and prepare input files
Before starting the workflow, we have to acquire and prepare several input files. These files are the annotation file, the genome file, the bam files, the configuration file and the sample sheet.

Annotation and genome files
First, we want to retrieve the annotation file and the genome file. In this case, we can find both on the GENCODE [3] webpage for the human genome, where both files can be retrieved directly by clicking on the download links next to the file descriptions (as shown in Figure 3). Alternatively, you can download them from the command line, as sketched below.
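A download sketch using GENCODE's FTP mirror; the release-28 paths follow GENCODE's directory layout but should be verified, as releases move over time:

$ # fetch annotation and genome for GENCODE human release 28 and unpack them
$ wget ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_28/gencode.v28.annotation.gtf.gz
$ wget ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_28/GRCh38.p12.genome.fa.gz
$ gunzip gencode.v28.annotation.gtf.gz GRCh38.p12.genome.fa.gz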
Finally, we rename these files to annotation.gtf and genome.fa:

$ mv gencode.v28.annotation.gtf annotation.gtf
$ mv GRCh38.p12.genome.fa genome.fa

Another webpage that provides these files is Ensembl Genomes. This usually requires searching their file system in order to find the wanted files; for this tutorial, we recommend sticking to GENCODE instead.

.bam files
Next, we want to acquire the bam files. The bam files for the tutorial dataset can be downloaded from our FTP server. We provide both a .zip and a .tar.gz file; we recommend the .tar.gz file, as most Linux systems can decompress it on the command line by default.

$ wget ftp://biftp.informatik.uni-freiburg.de/pub/uORF-Tools/bam.tar.gz
$ tar -zxvf bam.tar.gz; rm bam.tar.gz;

This will create a bam folder containing all the files necessary to run the workflow. If you prefer using your own .bam files, we suggest creating a bam folder and copying the files into it. Make sure that your reads were trimmed (to ~29 nt length) and aligned to the genome using end-to-end alignment. The bam files need to include all SAM attributes and should be sorted and indexed using samtools.

Configuration file and sample sheet
Finally, we will prepare the configuration file (config.yaml) and the sample sheet (samples.tsv). We start by copying the template for the sample sheet from uORF-Tools/templates into the uORF-Tools/ folder:

$ cp uORF-Tools/templates/samples.tsv uORF-Tools/

When using your own data, use any editor (vi(m), gedit, nano, atom, ...) to customize the sample sheet. Please ensure not to replace any tabulator symbols with spaces while changing this file.
Next, we are going to set up the config.yaml:

$ cp uORF-Tools/templates/config.yaml uORF-Tools/
$ vi uORF-Tools/config.yaml

This file contains the variables described in the sample sheet and configuration file section above (taxonomy, adapter, samples, genomeindexpath, uorfannotationpath, alternativestartcodons). For this tutorial, we can keep the default values for the config.yaml, as shown in Figure 6: the organism analyzed in this tutorial is Homo sapiens, therefore we keep the taxonomy at Eukarya, and the path to samples.tsv is set correctly.

Running the workflow
Now that we have all the required files, we can start running the workflow, either locally or in a cluster environment.
Run the workflow locally
Use the following steps when you plan to execute the workflow on a single server or workstation. Please be aware that some steps of the workflow require a lot of memory, specifically for eukaryotic species.

$ snakemake --use-conda -s uORF-Tools/Snakefile \
  --configfile uORF-Tools/config.yaml \
  --directory ${PWD} -j 20 \
  --latency-wait 60

Run Snakemake in a cluster environment
Use the following steps if you are executing the workflow via a queuing system. Edit the configuration file cluster.yaml according to your queuing system setup and cluster hardware. The following system call shows the usage with Grid Engine:

$ snakemake --use-conda -s uORF-Tools/Snakefile \
  --configfile uORF-Tools/config.yaml \
  --directory ${PWD} -j 20 \
  --cluster-config uORF-Tools/templates/sge-cluster.yaml

Example: Run Snakemake in a cluster environment
We ran the tutorial workflow in a cluster environment, specifically a TORQUE cluster environment. Therefore, we created a bash script torque.sh in our project folder.

$ vi torque.sh
We proceeded by writing the queuing script as shown in Figure 7, then submitted the job to the cluster (see below).
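A sketch of what such a TORQUE script might contain; job name, resource requests, and environment setup are placeholders that depend on your cluster:

#!/bin/bash
#PBS -N uORF-Tools            # job name (placeholder)
#PBS -l nodes=1:ppn=20        # one node, 20 cores (placeholder)
#PBS -l walltime=24:00:00     # maximum run time (placeholder)
cd $PBS_O_WORKDIR             # start in the submission directory
source activate snakemake     # conda environment providing snakemake
snakemake --use-conda -s uORF-Tools/Snakefile \
  --configfile uORF-Tools/config.yaml \
  --directory ${PWD} -j 20 --latency-wait 60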

$ qsub torque.sh
Using any of the presented methods, this will run the workflow on our dataset and create the desired output files. Once the workflow has finished, we can generate a report.html file using the following command:

$ snakemake --latency-wait 600 --use-conda \
  -s uORF-Tools/Snakefile \
  --configfile uORF-Tools/config.yaml \
  --report report.html

Preprocessing
The retrieval of input files and running the workflow, both locally and on a server cluster via a queuing system, is demonstrated using an example with data available from SRA via NCBI. The dataset is available under the GEO accession number GSE103719, and its retrieval is described in this tutorial. Ensure that you have miniconda3 installed and a conda environment set up. Please refer to the Tools Section (6.1) for details on the installation.

Setup
First of all, we start by creating the project directory and changing to it.
$ mkdir preprocessing_project; cd preprocessing_project;

We then download the latest version of the uORF-Tools into the newly created project folder and unpack it, using the download commands shown in the uORF-Tools section above.

Retrieve and prepare input files
Before starting the workflow, we have to acquire and prepare several input files. These files are the annotation file, the genome file, the fastq files, the configuration file and the sample sheet.

Annotation and genome files
As in the previous example, we retrieve the annotation file and the genome file from the GENCODE [3] webpage for the human genome, either by clicking on the download links next to the file descriptions (as shown in Figure 3) or from the command line, as sketched in the annotation and genome files section above. After unpacking both files, we rename them:

$ mv gencode.v28.annotation.gtf annotation.gtf
$ mv GRCh38.p12.genome.fa genome.fa

Fastq files
In this example, we will use both RNA-seq and RIBO-seq data. To speed up the tutorial, we download only 2 of the 4 replicates available for each condition.
Please note that you should always use all available replicates when analyzing your own data.

European Nucleotide Archive (ENA)
For many datasets, the easiest way to retrieve the fastq files is the European Nucleotide Archive, as it provides direct download links when searching for a dataset. Use the interface on ENA or download the files from the command line, as sketched below.
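A download sketch following ENA's FTP path convention; the placeholders must be replaced with the actual run accessions listed for GSE103719 on ENA:

$ # <prefix> and <run> are placeholders taken from the ENA search results,
$ # e.g. vol1/fastq/SRRxxx/00x/SRRxxxxxxx/SRRxxxxxxx.fastq.gz
$ wget ftp://ftp.sra.ebi.ac.uk/vol1/fastq/<prefix>/<run>/<run>.fastq.gz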
As in the first example, we keep the default values for the config.yaml, as shown in Figure 6: the organism analyzed in this tutorial is Homo sapiens, therefore we keep the taxonomy at Eukarya, and the path to samples.tsv is set correctly. Now that we have all the required files, we can start running the workflow, either locally or in a cluster environment.

Run the workflow locally
Use the following steps when you plan to execute the workflow on a single server or workstation. Please be aware that some steps of the workflow require a lot of memory, specifically for eukaryotic species.
$ snakemake --use-conda -s uORF-Tools/Preprocessing_Snakefile \
  --configfile uORF-Tools/config.yaml \
  --directory ${PWD} -j 20 \
  --latency-wait 60

Run Snakemake in a cluster environment
Use the following steps if you are executing the workflow via a queuing system. Edit the configuration file cluster.yaml according to your queuing system setup and cluster hardware. The following system call shows the usage with Grid Engine:

$ snakemake --use-conda -s uORF-Tools/Preprocessing_Snakefile \
  --configfile uORF-Tools/config.yaml \
  --directory ${PWD} -j 20 \
  --cluster-config uORF-Tools/templates/sge-cluster.yaml

Example: Run Snakemake in a cluster environment
We ran the tutorial workflow in a cluster environment, specifically a TORQUE cluster environment. Therefore, we created a bash script torque.sh in our project folder.

$ vi torque.sh
We proceeded by writing the queuing script as shown in Figure 7 (sketched in the example workflow section above), then submitted the job to the cluster.

$ qsub torque.sh
Using any of the presented methods, this will run the workflow on our dataset and create the desired output files.