
Gene orthology software behind OrthoDB

OrthoLoger v2.7.1 is the current stable version!

Orthologs are genes in different species that evolved from a common ancestral gene by speciation.

LEMMI-style benchmarking demonstrates its state-of-the-art performance.

Cite us

Kuznetsov D, Tegenfeldt F, Manni M, Seppey M, Berkeley M, Kriventseva EV, Zdobnov EM. OrthoDB v11: annotation of orthologs in the widest sampling of organismal diversity. Nucleic Acids Research, Nov 2022. doi:10.1093/nar/gkac996. PMID: 36350662


Getting OrthoLoger software

OrthoLoger, the OrthoDB standalone pipeline for delineation of orthologs, is freely available

  • as a ready-to-run Docker image
docker pull ezlabgva/orthologer:v2.7.1
docker run -u $(id -u) -v ${where}:/odbwork ezlabgva/orthologer:v2.7.1 setup_odb.sh
docker run -u $(id -u) -v ${where}:/odbwork ezlabgva/orthologer:v2.7.1 ./orthologer.sh "command" "options"


  • or you can build the Docker image yourself
git clone https://gitlab.com/ezlab/orthologer_container.git
cd orthologer_container
docker build -t orthologer .
  • or build a local instance of OrthoLoger manually
curl https://www.orthodb.org/software/orthologer_2.7.1.tgz -O
curl https://www.orthodb.org/software/orthologer_2.7.1.md5sum -O
# check md5sum
md5sum -c orthologer_2.7.1.md5sum

# if previous md5sum checks out OK, then unpack the package
tar -xzf orthologer_2.7.1.tgz

and follow the instructions in orthologer_2.7.1/README.

Issues board

Setting up a project

In the following, DIR_PIPELINE denotes the installation directory of OrthoLoger.

If the orthologer package is installed locally, create the basic setup using the following procedure:

  1. Create a new empty directory
  2. From within this new directory, run $DIR_PIPELINE/bin/setup.sh and answer the questions; in general, the default responses suffice
  3. Run the generated setup script
  4. The script reports anything that is missing and gives instructions on how to proceed

If the orthologer_container repo is installed, use the script docker_run.sh to set up and run the pipeline.

In both cases, the configuration file common.sh is likely to need an edit.

Configuration: common.sh

The configuration is in the file common.sh where each variable is briefly described in comments. A few may need adjustment:

Variable Description
OP_NJOBMAX_LOCAL max number of jobs submitted in local mode
OP_NJOBMAX_BATCH max number of jobs submitted in batch mode
SCHEDULER_LABEL scheduler to be used, NONE or SLURM
ODB_RUN_MODE labels for different preset parameter settings
TREE_ENABLED run in tree mode - requires a newick tree
TREE_INPUT_FILE the newick tree
ALIGNMENT_MATRIX compute full (0) or half (1) matrix for homology
MAKEBRH_NALT if set to > 1 it will allow for fuzzy BRHs
MAKEBRH_FSEPMAX max separation relative to the best BRH [0..1] (fuzzy BRHs)
POSTPROC_LABELS labels for various postprocess tools
OP_STEP_NPARALLEL[S] step S : number of jobs launched in parallel
OP_STEP_NTHREADS[S] step S : number of threads per job
OP_STEP_NMERGE[S] step S : number of single jobs merged into one
OP_STEP_RUNMODE[S] step S : run locally (LOCAL) or using scheduler (BATCH)
OP_STEP_SCHEDOPTS step S : options for scheduler
OP_LABELA_START/END pairwise steps: this selects the range of keys A
OP_LABELB_START/END pairwise steps: this selects the range of keys B
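As an illustration, a hypothetical common.sh fragment combining a few of these variables; the step label ALIGN and all values here are assumptions for the sketch, not shipped defaults:

```shell
# hypothetical common.sh fragment -- step label 'ALIGN' and all values are
# illustrative assumptions, not defaults
SCHEDULER_LABEL=SLURM          # submit batch jobs through SLURM
OP_NJOBMAX_BATCH=100           # at most 100 jobs queued in batch mode
OP_STEP_NPARALLEL[ALIGN]=4     # 4 parallel jobs for the ALIGN step
OP_STEP_NTHREADS[ALIGN]=8      # 8 threads per job
OP_STEP_RUNMODE[ALIGN]=BATCH   # run this step via the scheduler
```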

Including fuzzy BRHs adds at most MAKEBRH_NALT homologies that are nearly BRHs. A candidate must not differ from the best hit by more than MAKEBRH_FSEPMAX.
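For example, a hypothetical fuzzy-BRH setting (the values are illustrative, not defaults):

```shell
# hypothetical common.sh fragment -- values are illustrative, not defaults
MAKEBRH_NALT=3        # keep up to 3 near-best hits in addition to the BRH
MAKEBRH_FSEPMAX=0.05  # accept candidates within 5% of the best BRH score
```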

More help can be obtained using

./orthologer.sh -h                 # orthologer commands
./orthologer.sh -H <step>          # description of a given step
./orthologer.sh -H .               # extra help
./orthologer.sh -H .variables      # help on variables
./orthologer.sh -H .examples       # a few examples

Import fasta files

Fasta files are imported using:

./orthologer.sh manage -f fastafiles.txt

The file fastafiles.txt contains two columns: the first a label and the second a file name:

+HAMA   data/myfasta.fs
+SIRI   data/urfasta.fs

The '+' sign before a label indicates that the sequence IDs should be relabeled using that label; otherwise the base of the filename is used for the internal sequence IDs. In general it is recommended to relabel to something simple. Only case-insensitive alphanumerical characters [a-z, 0-9] and '_' are allowed.
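As a sketch, the import list above can be generated and its labels sanity-checked with standard shell tools (the labels and file names are the hypothetical ones from the example):

```shell
# write the two-column import list: '+LABEL<TAB>path/to/fasta'
printf '+HAMA\tdata/myfasta.fs\n+SIRI\tdata/urfasta.fs\n' > fastafiles.txt

# check every label: strip the optional '+', then require only [A-Za-z0-9_]
# and reject the reserved label TPA
awk '{ lbl = $1; sub(/^\+/, "", lbl)
       if (lbl !~ /^[A-Za-z0-9_]+$/ || toupper(lbl) == "TPA") {
           print "bad label: " lbl; exit 1
       } }' fastafiles.txt && echo "labels OK"
```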

Note that the label TPA is not allowed, as it has a special meaning in segmasker, which is used for masking.

When importing, a corresponding todo file is also created at todo/fastafiles.todo. Ensure that all directories are created by running

./orthologer.sh -C -t todo/fastafiles.todo

Run a project

If everything is set up and PL_TODO is set in common.sh, the following will start a run:

./orthologer.sh -r ALL                        # run over all steps
./orthologer.sh -r MAKEBRH -t todo/my.todo    # run a single step using a given todo file

Adding the option -d triggers a dry run: each step is printed without actually being executed.

Tree mode

By setting TREE_ENABLED=1, the pipeline runs using a user-provided taxonomy tree. The tree can be defined in one of three ways:

  1. set TREE_INPUT_FILE to a newick file defining the tree
  2. set TREE_ROOT_CLADE to a clade NCBI taxid present in OrthoDB (e.g. 33208 for Metazoa)
  3. neither of the above: the given todo file is used to construct a tree file name (todo/<label>.nw)
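For instance, option 1 corresponds to a common.sh fragment like the following (the file path is an assumption):

```shell
# hypothetical common.sh fragment: enable tree mode with an explicit newick file
TREE_ENABLED=1
TREE_INPUT_FILE=trees/my_species.nw   # assumed path to a user-provided newick tree
```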

Mapping

OrthoLoger can also be used to map new fasta files to an existing project. Import the new fasta file as described above.

On an existing user project

To map against an existing project, create a todo file with the new label as well as the other taxids you want to map against.

Then run the following, assuming the taxid label of the imported fasta is mylabel and the source cluster is Cluster/source.og:

./orthologer.sh -r all -R mylabel -I Cluster/source.og

This ensures that all pre-cluster steps involve only mylabel. The cluster step then merges those BRHs with the source cluster.

The -R option takes a space-separated list of labels, so more than one label can be given. However, if two or more labels are given, BRHs are also computed within the group of extra labels; this is not equivalent to mapping each extra label in a separate run.

On OrthoDB data

To run on OrthoDB data, from a new directory run

<DIR_PIPELINE>/bin/setup_mapping.sh

Defaults are OK for small tests, but it is recommended to change the storage locations defined in mapping_conf.sh.

Variable Description
DBI_DOWNLOAD temporary storage for downloaded tar files, default is /tmp
MAP_USERLOC user location - where pipelines are run per user
MAP_ORTHODB_DATA data location - where the downloaded OrthoDB files are installed
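A hypothetical mapping_conf.sh fragment moving storage off the defaults (all paths here are assumptions):

```shell
# hypothetical mapping_conf.sh fragment -- all paths are assumptions
DBI_DOWNLOAD=/scratch/odb_downloads   # downloaded tar files land here instead of /tmp
MAP_USERLOC=/data/odb/users           # per-user pipeline runs
MAP_ORTHODB_DATA=/data/odb/data       # installed OrthoDB downloads
```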

Next, template_common.sh may require some editing, as described above. You can check the setup by running

./mapping.sh CHECK

The mapping is set up and run via mapping.sh. Some commands must end with 'GO'; if it is omitted, the command only does a dry run.

OrthoDB taxids are referred to below. They are identical to NCBI taxids but with a version suffix appended, e.g. 9606_0.
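The version suffix can be split off with plain shell parameter expansion, e.g.:

```shell
# split an OrthoDB taxid into its NCBI taxid and version suffix
odb_taxid="9606_0"
ncbi_id=${odb_taxid%_*}       # strip the suffix: 9606
odb_version=${odb_taxid##*_}  # keep only the suffix: 0
echo "NCBI taxid: $ncbi_id (version $odb_version)"
```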

Note that the commands are capitalized below. This is not required.

Step 1. Running requires creating 'users', which can be seen as arbitrary labels.

./mapping.sh CREATE <user> GO

Step 2. Download the node you want to map against.

./mapping.sh DOWNLOAD <node ncbi taxid>

If you do not know which node to use, you can get the full lineage using ete3 tool:

ete3 ncbiquery --search <ncbi taxid> --info

Note that not all nodes are available in OrthoDB.

See the ete3 documentation for instructions on how to install ete3.

A full list of all nodes available for download can be obtained by

./mapping.sh DOWNLOAD

Step 3. Import OrthoDB files to your project

./mapping.sh DBIMPORT <user> <node id> GO

A list of all imported DB files is obtained from

./mapping.sh DBINFO <user>

Step 4. Import your fasta file

./mapping.sh IMPORT <user> "<taxid>;<filename>"

The <taxid> is a label used internally only; use only regular alphanumerical characters. Note the quotes: without them, the ';' is interpreted as a command separator by the shell.
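The quoting rule is ordinary shell behavior; a self-contained illustration, where the show function is just a stand-in for mapping.sh:

```shell
# 'show' is a stand-in for mapping.sh, merely reporting its arguments
show() { printf 'got %d arg(s): %s\n' "$#" "$*"; }

show "fish;example.fs"    # quoted: a single argument containing ';'
# show fish;example.fs    # unquoted: runs 'show fish', then tries to run
#                         # 'example.fs' as a separate command
```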

Step 5. Run

When mapping against a given node, you may want to select a subset of the node to map against, as this reduces compute time. The subset is provided as a CSV list of OrthoDB taxids.

./mapping.sh RUN <user> <node id> <taxid> [using=<CSV list taxids>] GO

Further help can be obtained using

./mapping.sh HELP

Example

Below is an example where a sample fasta file is mapped against a subset of Cichliformes from OrthoDB.

# create user 'myusr'
./mapping.sh CREATE myusr GO

# download OrthoDB data for Cichliformes (NCBI taxid 1489911)
./mapping.sh DOWNLOAD 1489911

# import that node to 'myusr'
./mapping.sh DBIMPORT myusr 1489911 GO

# load a sample fasta file
curl https://data.orthodb.org/v11/download/mapping/example.fs.gz -O
gunzip example.fs.gz

# import file
./mapping.sh IMPORT myusr "fish;example.fs"

# start mapping example.fs to node 1489911 using OrthoDB taxids 303518_0, 43689_0 and 8128_0
./mapping.sh RUN myusr 1489911 fish using=303518_0,43689_0,8128_0 GO

The result should be in two files

  1. <base>.user - contains only the mapped genes from the input fasta together with OrthoDB cluster IDs
  2. <base>.odb - contains ALL clusters including the new mapped genes

where base is users/myusr/pipeline/Cluster/node_1489911_subnode_303518_0_43689_0_8128_0_taxid_fish.og
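The base name follows directly from the RUN parameters; a small POSIX-shell sketch reconstructing it, with the names taken from the example above:

```shell
# rebuild the cluster-file base name from the RUN parameters of the example
user=myusr
node=1489911
label=fish
using="303518_0,43689_0,8128_0"

subnode=$(printf '%s' "$using" | tr ',' '_')   # commas become underscores
base="users/${user}/pipeline/Cluster/node_${node}_subnode_${subnode}_taxid_${label}.og"
echo "$base"
```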