Department: Dept. of Ophthalmology, Moran Eye Center
Project
While genomics aims to map the human genome, connectomics aims to map all neural connections in the brain and retina. Retinal connectomics reveals how retinas are wired through structural maps of connectivity, and how retinal structure and function are altered in specific diseases such as age-related macular degeneration (AMD). Creating such connectome maps requires high-speed automated imaging, automated computational map-building, and massive data storage. Dr. Jones' research group has built specialized infrastructure for data assembly, visualization, annotation, and analysis.
Nornir is a set of Python packages that facilitates assembly of large image data sets. These tools were originally designed to construct 3D connectomics volumes from terabytes of image data, including the use of slice-to-slice and slice-to-volume transforms. While the current implementation leverages multi-threaded CPU processing, we aim to speed up this process further by running directly on GPUs, using the open-source CuPy library.
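To illustrate the approach, the sketch below shows the common CuPy idiom of selecting an array backend (`xp`) once and writing the rest of the code against the NumPy API, here applied to a simple phase-correlation registration between two image slices. Phase correlation is used purely for illustration and is not necessarily Nornir's alignment algorithm; the `xp` backend pattern and function names are assumptions, not the library's actual API.

```python
import numpy as np

try:
    import cupy as cp  # drop-in, GPU-backed replacement for NumPy
    xp = cp
except ImportError:
    xp = np  # no GPU available: fall back to NumPy on the CPU

def phase_correlate(fixed, moving):
    """Estimate the (row, col) translation of `moving` relative to `fixed`."""
    F = xp.fft.fft2(xp.asarray(fixed, dtype=xp.float64))
    M = xp.fft.fft2(xp.asarray(moving, dtype=xp.float64))
    cross = M * xp.conj(F)
    cross /= xp.abs(cross) + 1e-12      # normalize to the cross-power spectrum
    corr = xp.abs(xp.fft.ifft2(cross))  # sharp peak at the translation offset
    peak = xp.unravel_index(xp.argmax(corr), corr.shape)
    shift = xp.asarray(peak, dtype=xp.float64)
    shape = xp.asarray(corr.shape, dtype=xp.float64)
    # wrap offsets past the halfway point around to negative shifts
    return xp.where(shift > shape / 2, shift - shape, shift)
```

Because CuPy mirrors the NumPy API, the same function runs unchanged on either backend; only the initial backend selection differs between CPU and GPU execution.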
Progress
We implemented GPU-accelerated processing for various transforms using the CuPy library. We added tests to ensure code quality and to enable comparison between single-CPU, multi-threaded, and GPU processing. We measured 3x to 10x speedups in computation time moving from single-CPU to GPU-accelerated processing, and 1.5x to 3x gains moving from multi-CPU to GPU processing. Additional improvements are expected as new features land in future CuPy releases.
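A minimal sketch of the kind of CPU-versus-GPU timing comparison used to obtain such figures is shown below, timing a 2D FFT (a core step in correlation-based alignment). The `time_fft` helper and its parameters are illustrative assumptions, not the project's actual benchmark; note that fair GPU timing requires a warm-up run and explicit synchronization, since CuPy kernels launch asynchronously.

```python
import time
import numpy as np

try:
    import cupy as cp
    backends = {"cpu": np, "gpu": cp}
except ImportError:
    backends = {"cpu": np}  # no GPU available: compare CPU only

def time_fft(xp, shape=(512, 512), repeats=3):
    """Average wall-clock time of a 2D FFT on the given array backend."""
    data = xp.asarray(np.random.default_rng(0).random(shape))
    xp.fft.fft2(data)  # warm-up (triggers kernel compilation on the GPU)
    if xp is not np:
        cp.cuda.Stream.null.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        xp.fft.fft2(data)
    if xp is not np:
        cp.cuda.Stream.null.synchronize()  # wait for async GPU work to finish
    return (time.perf_counter() - start) / repeats

timings = {name: time_fft(xp) for name, xp in backends.items()}
```

Actual speedups depend heavily on array size and transform type: small arrays are often faster on the CPU because host-to-device transfers dominate, while large arrays favor the GPU.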
We also worked on continuous integration and continuous delivery (CI/CD), aiming to modernize, streamline, and accelerate the software development lifecycle for these retinal connectomics open-source libraries. We focused on modern package configurations, as well as library inter-dependencies across multiple repositories. We also enabled automated cross-platform testing via GitHub Actions, running the test suite on Windows, macOS, and Linux virtual machines to help ensure new features do not introduce errors. Finally, we assessed Python packaging for easier deployment of these software solutions to end-users.
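Cross-platform testing of this kind is typically configured with a GitHub Actions build matrix. The workflow fragment below is a hypothetical sketch (job names, action versions, and the `pip install -e .[test]` extra are illustrative assumptions, not the repositories' actual configuration):

```yaml
name: tests
on: [push, pull_request]
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        python-version: ["3.10", "3.11"]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e .[test]
      - run: pytest
```

The matrix expands into one job per OS/Python combination, so a single push exercises every supported platform in parallel.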
These stages will help facilitate sharing of existing connectomes with the wider community, including research groups aiming to apply AI or machine-learning approaches to automated data annotation.