Computational Core

MBI’s computational core facility was created to store, process, analyze and visualize experimental data and images, and to support large-scale computer modeling of cellular processes and data mining of complex biological processes. Because MBI moved into a new research space with no preexisting IT facilities, a major effort was required to design, acquire, install and configure the IT infrastructure.

MBI computational core

The Computational Core provides infrastructure for computation, storage, networking and scientific software services and support for the Mechanobiology Institute.

The Mechanobiology Institute employs imaging technology to acquire biological data sets as a core component of its research. A single image data set may be several terabytes in size and is often acquired as individual files, resulting in many thousands of files per experiment that must be stored, processed, analyzed and visualized. The Institute will house multiple microscopes, each capable of generating a large continuous stream of new data. A fast, efficient and reliable network, storage and computational infrastructure is therefore critical to supporting our research activities, and a local data centre to house this equipment is the only practical way to provide these services.


The Core currently houses a 16-node Linux blade compute cluster, with a total of 192 cores and 768 GB of memory.

MBI’s core IT infrastructure is currently composed of the following basic components:

1. A network and servers capable of supporting 1 and 10 Gbps (10⁹ bits per second) transfer rates today, and 10 and 100 Gbps rates in the future. This network capacity will allow us to transmit our large data sets internally in minutes rather than the hours required over the traditional shared university facilities.

2. A high-performance, large-capacity and highly reliable storage system. The initial storage system will have a capacity of over 100 TB and can expand to multiple petabytes as required.

3. A set of computational resources composed of a high-performance cluster, initially with 132 computational cores (1.4 teraFLOPS), and several large-memory servers, each with an initial 128 GB of memory expandable to 1 TB.

4. A diverse set of commercial and open-source software applications for image processing, database mining, computer modelling, bioinformatics and genomics, archival storage, and more.

5. A staff of IT professionals with expertise in systems design, implementation and management, and in application development.

6. An independent web presence and network for inter- and intranets.
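The "minutes as opposed to hours" claim in item 1 above can be checked with simple arithmetic. The sketch below is illustrative only: the 2 TB data set size and the 80% effective link utilisation are assumptions, not measured MBI figures.

```python
def transfer_hours(size_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours needed to move `size_tb` terabytes over a link
    running at `link_gbps` with the given effective utilisation."""
    bits = size_tb * 1e12 * 8                      # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9 * efficiency)  # bits / effective bits-per-second
    return seconds / 3600

# A 2 TB image data set at the three link speeds mentioned above:
for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps: {transfer_hours(2, gbps):.2f} h")
```

At the assumed utilisation, a 2 TB data set takes roughly 5.6 hours at 1 Gbps but about half an hour at 10 Gbps, which is the scale of difference the list item describes.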