

Josh Forstedt – Linked independent master and worker VMs running Docker containers together, ran BLASTEasy's modified 1.0.12 SequenceServer with WorkQueue, and benchmarking.

Derek Caldwell – Keeping track of documentation, the final presentation, benchmarking, progress presentations in class, and attending class collaboration meetings.


Nasser Albalawi – Contributed to creating VMs for benchmarking and to creating the "Master Machine", created the Docker image for the master, worked hands-on with Makeflow and Work Queue, attended class meetings and workshops, and contributed to the write-up and presentation.


Asiedu Owusu-Kyereko – Contributed to making the Docker container, the write-up, benchmarking, and the presentation.

Special Thanks to John Xu, Sateesh Peri, and Team BLASTEasy.

Project Timeline 



For the following benchmarking portion, we had the entire team of 5 people run BLAST queries through our solution. Derek could not get an instance up and running in time, so he was considered the 'outlier', submitting jobs as a non-worker. This is why there is no core count next to his name in the graph. Nonetheless, Josh was able to start up two Ubuntu instances (4 cores each) and satisfy the fifth-worker requirement.

For the first nucleotide benchmark, we used the random generator to create sequences from 1,000 to 500,000 bases long. What we found at first was expected: the smaller the queries, the quicker the times. We were not able to break the solution or crash the browser; everything ran fine for the nucleotide sequences. This was not the case with the protein sequences.
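The random nucleotide input for this benchmark can be reproduced with a short script. Below is a minimal Python sketch (the function name and interface are our own, not the actual generator we used) that emits FASTA-formatted records with lengths in the 1,000–500,000 base range described above:

```python
import random

def random_nucleotide_fasta(n_records, min_len, max_len, seed=None):
    """Generate FASTA records of random nucleotide sequences.

    Illustrative stand-in for the random generator used in the benchmark;
    each record's length is drawn uniformly from [min_len, max_len].
    """
    rng = random.Random(seed)
    records = []
    for i in range(n_records):
        length = rng.randint(min_len, max_len)
        seq = "".join(rng.choice("ACGT") for _ in range(length))
        records.append(f">random_seq_{i + 1} length={length}\n{seq}")
    return "\n".join(records) + "\n"

# Example: three query sequences between 1,000 and 500,000 bases,
# matching the size range used in the nucleotide benchmark.
fasta = random_nucleotide_fasta(3, 1000, 500000, seed=42)
print(fasta[:60])
```

The resulting FASTA text can be pasted directly into the SequenceServer query box or saved to a file for submission.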


To benchmark the protein sequences, we used an online random protein sequence generator that only let us generate from 1 to 100 sequences. In this test, we saw increasingly divergent times depending on our devices. Those with more RAM in their machines had quicker times, so we concluded from the benchmarking process that something is running or being processed in local memory. The following table illustrates our results.
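The protein benchmark can be sketched the same way. Below is a minimal, illustrative Python version (the function and helper names are our own, not the online generator's) that mirrors its 1–100 sequence limit and adds the kind of wall-clock timing we recorded:

```python
import random
import time

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def random_protein_fasta(n_records, length=100, seed=None):
    """Generate random protein sequences in FASTA format.

    Illustrative stand-in for the online generator, which only
    supported generating between 1 and 100 sequences at a time.
    """
    if not 1 <= n_records <= 100:
        raise ValueError("generator limit: 1-100 sequences")
    rng = random.Random(seed)
    records = []
    for i in range(n_records):
        seq = "".join(rng.choice(AMINO_ACIDS) for _ in range(length))
        records.append(f">random_protein_{i + 1}\n{seq}")
    return "\n".join(records) + "\n"

def timed(func, *args):
    """Wall-clock timing helper, as used when recording benchmark times."""
    start = time.perf_counter()
    result = func(*args)
    return result, time.perf_counter() - start

# Example: time the generation of 10 fifty-residue query sequences.
fasta, elapsed = timed(random_protein_fasta, 10, 50, 7)
```

In the actual benchmark, the timed call would be the BLAST query submission rather than the sequence generation itself.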