Gapr for large-scale collaborative single-neuron reconstruction

Abstract

Whole-brain analysis of single-neuron morphology is crucial for unraveling the complex structure of the brain. However, large-scale neuron reconstruction from the terabyte- and even petabyte-scale data of mammalian brains generated by state-of-the-art light microscopy is a daunting task. Here, we developed ‘Gapr’ (Gapr accelerates projectome reconstruction), which streamlines deep learning-based automatic reconstruction, ‘automatic proofreading’ that reduces human workloads at high-confidence sites, and high-throughput collaborative proofreading by crowd users through the Internet. Furthermore, Gapr offers a seamless user interface that ensures high proofreading speed per annotator, on-demand conversion for handling large datasets, flexible workflows tailored to diverse datasets and rigorous error tracking for quality control. Finally, we demonstrated Gapr’s efficacy by reconstructing over 4,000 neurons in mouse brains, revealing the morphological diversity of cortical interneurons and hypothalamic neurons. We present Gapr as a solution for large-scale single-neuron reconstruction projects.

Fig. 1: Software architecture of Gapr.
Fig. 2: Automatic reconstruction in Gapr.
Fig. 3: Collaborative reconstruction with Gapr.
Fig. 4: Neuron reconstruction with fMOST datasets.
Fig. 5: Morphological analysis of neurons reconstructed by Gapr.

Data availability

A minimal test dataset for our software, along with an example server configuration, is deposited at https://doi.org/10.5281/zenodo.10988281 (ref. 31). Full-resolution fMOST imaging datasets in this study are deposited at the Institute of Neuroscience, Chinese Academy of Sciences, and can be accessed using this server configuration (https://doi.org/10.5281/zenodo.10988281, ref. 31). All reconstructed neurons in the SWC file format, along with the complete reconstruction history as LMDB database files, are deposited at https://doi.org/10.5281/zenodo.10988416 (ref. 32). Parameter files of the trained U-Net and ResNet are deposited at https://doi.org/10.5281/zenodo.10988756 (ref. 33). Source data are provided with this paper.
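
The deposited morphologies use the standard SWC format (ref. 22), in which each non-comment line describes one node with seven whitespace-separated fields: node id, structure type, x, y, z, radius and parent id. The following is a minimal sketch, not part of Gapr itself, for loading such a file in Python; the file name is a hypothetical placeholder.

```python
import math

def read_swc(path):
    """Read a standard SWC file: one node per line with seven fields
    (id, type, x, y, z, radius, parent); lines starting with '#' are comments."""
    nodes = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            nid, ntype, x, y, z, radius, parent = line.split()[:7]
            nodes[int(nid)] = {
                'type': int(ntype),
                'xyz': (float(x), float(y), float(z)),
                'radius': float(radius),
                'parent': int(parent),   # -1 marks a root node
            }
    return nodes

# Example: total cable length of one reconstructed neuron
# ('neuron.swc' is a placeholder file name).
neuron = read_swc('neuron.swc')
total_length = sum(
    math.dist(n['xyz'], neuron[n['parent']]['xyz'])
    for n in neuron.values()
    if n['parent'] in neuron
)
print(f'total cable length: {total_length:.1f} (in SWC units)')
```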

Code availability

Gapr is licensed under the GNU General Public License v.3.0 or later. The source code and user guide are available at https://doi.org/10.5281/zenodo.10988621 (ref. 34). For binary packages and future versions of source code and user guide, please refer to http://yanlab.org.cn/gapr/. Custom script files used for data analysis are deposited at https://doi.org/10.5281/zenodo.11005126 (ref. 35).

References

  1. Zheng, T. et al. Visualization of brain circuits using two-photon fluorescence micro-optical sectioning tomography. Opt. Express 21, 9839 (2013).

  2. Economo, M. N. et al. A platform for brain-wide imaging and reconstruction of individual neurons. eLife 5, e10566 (2016).

  3. Wang, H. et al. Scalable volumetric imaging for ultrahigh-speed brain mapping at synaptic resolution. Natl Sci. Rev. 6, 982–992 (2019).

  4. Peng, H., Ruan, Z., Long, F., Simpson, J. H. & Myers, E. W. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nat. Biotechnol. 28, 348–353 (2010).

  5. Longair, M., Baker, D. A. & Armstrong, J. D. Simple Neurite Tracer: open source software for reconstruction, visualization and analysis of neuronal processes. Bioinformatics 27, 2453–2454 (2011).

  6. Feng, L., Zhao, T. & Kim, J. neuTube 1.0: a new design for efficient neuron reconstruction software based on the SWC format. eNeuro https://doi.org/10.1523/ENEURO.0049-14.2014 (2015).

  7. Bria, A., Iannello, G., Onofri, L. & Peng, H. TeraFly: real-time three-dimensional visualization and annotation of terabytes of multidimensional volumetric images. Nat. Methods 13, 192–194 (2016).

  8. Xu, F. et al. High-throughput mapping of a whole rhesus monkey brain at micrometer resolution. Nat. Biotechnol. 39, 1521–1528 (2021).

  9. Gao, L. et al. Single-neuron projectome of mouse prefrontal cortex. Nat. Neurosci. 25, 515–529 (2022).

  10. Yang, J., Gonzalez-Bellido, P. T. & Peng, H. A distance-field based automatic neuron tracing method. BMC Bioinformatics 14, 93 (2013).

  11. Sui, D., Wang, K., Chae, J., Zhang, Y. & Zhang, H. A pipeline for neuron reconstruction based on spatial sliding volume filter seeding. Comput. Math. Meth. Med. 2014, 386974 (2014).

  12. Xiao, H. & Peng, H. APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree. Bioinformatics 29, 1448–1454 (2013).

  13. Liu, S. et al. Rivulet: 3D neuron morphology tracing with iterative back-tracking. Neuroinformatics 14, 387–401 (2016).

  14. Quan, T. et al. NeuroGPS-Tree: automatic reconstruction of large-scale neuronal populations with dense neurites. Nat. Methods 13, 51–54 (2016).

  15. Zhou, Z., Kuo, H.-C., Peng, H. & Long, F. DeepNeuron: an open deep learning toolbox for neuron tracing. Brain Inform. 5, 3 (2018).

  16. Callara, A. L., Magliaro, C., Ahluwalia, A. & Vanello, N. A smart region-growing algorithm for single-neuron segmentation from confocal and 2-photon datasets. Front. Neuroinformatics https://doi.org/10.3389/fninf.2020.00009 (2020).

  17. Li, S. et al. Brain-wide shape reconstruction of a traced neuron using the convex image segmentation method. Neuroinformatics 18, 199–218 (2020).

  18. Friedmann, D. et al. Mapping mesoscale axonal projections in the mouse brain using a 3D convolutional network. Proc. Natl Acad. Sci. USA 117, 11068–11075 (2020).

  19. Peng, H., Long, F., Zhao, T. & Myers, E. Proof-editing is the bottleneck of 3D neuron reconstruction: the problem and solutions. Neuroinformatics 9, 103–105 (2011).

  20. Wang, Y. et al. TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain. Nat. Commun. 10, 3474 (2019).

  21. Winnubst, J. et al. Reconstruction of 1,000 projection neurons reveals new cell types and organization of long-range connectivity in the mouse brain. Cell 179, 268–281.e13 (2019).

  22. Cannon, R. C., Turner, D. A., Pyapali, G. K. & Wheal, H. V. An on-line archive of reconstructed hippocampal neurons. J. Neurosci. Meth. 84, 49–54 (1998).

  23. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In Proc. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015 (eds Navab, N. et al.) 234–241 (Springer International Publishing, 2015).

  24. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770–778 (IEEE, 2016).

  25. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Proc. 33rd International Conference on Neural Information Processing Systems (eds Wallach, H. et al.) (Curran Associates Inc., 2019).

  26. Wang, Q. et al. The Allen Mouse Brain Common Coordinate Framework: a 3D reference atlas. Cell 181, 936–953.e20 (2020).

  27. Rohlfing, T. & Maurer, C. R. Nonrigid image registration in shared-memory multiprocessor environments with application to brains, breasts, and bees. IEEE Trans. Inform. Technol. Biomed. 7, 16–25 (2003).

  28. Avants, B. B. et al. A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 54, 2033–2044 (2011).

  29. Bates, A. S. et al. The natverse, a versatile toolbox for combining and analysing neuroanatomical data. eLife 9, e53350 (2020).

  30. Gu, Z., Eils, R. & Schlesner, M. Complex heatmaps reveal patterns and correlations in multidimensional genomic data. Bioinformatics 32, 2847–2849 (2016).

  31. Gou, L., Wang, Y. & Yan, J. Gapr for large-scale collaborative single-neuron reconstruction: test data. Zenodo https://doi.org/10.5281/zenodo.10988280 (2024).

  32. Gou, L., Wang, Y. & Yan, J. Gapr for large-scale collaborative single-neuron reconstruction: reconstruction results. Zenodo https://doi.org/10.5281/zenodo.10988415 (2024).

  33. Gou, L., Wang, Y. & Yan, J. Gapr for large-scale collaborative single-neuron reconstruction: neural network parameter files. Zenodo https://doi.org/10.5281/zenodo.10988755 (2024).

  34. Gou, L., Wang, Y. & Yan, J. Gapr for large-scale collaborative single-neuron reconstruction: source code and user guide. Zenodo https://doi.org/10.5281/zenodo.10988621 (2024).

  35. Gou, L., Wang, Y. & Yan, J. Gapr for large-scale collaborative single-neuron reconstruction: custom scripts for analysis. Zenodo https://doi.org/10.5281/zenodo.11005126 (2024).

Acknowledgements

We thank Y. Sun (Institute of Neuroscience, Chinese Academy of Sciences) for helpful discussions, Q. Luo (Hainan University), H. Gong and A. Li (Huazhong University of Science and Technology) for help with fMOST imaging, X. Wang and his team (Institute of Neuroscience, Chinese Academy of Sciences) for managing the fMOST data, and Z. Zeng and his team (Chengdu Huizhong Tianzhi Technology Co. Ltd) for manual proofreading using Gapr in this study. This work was supported by the National Science and Technology Innovation 2030—Major Projects (grant nos. 2021ZD0204400 to H.W. and X.X., 2021ZD0200200 to J.Y. and 2021ZD0203203 to X.X.), the Lingang Laboratory grant (no. LG202104-01-06 to J.Y.), the Shanghai Municipal Science and Technology Major Project grant (no. 2018SHZDZX05 to J.Y. and X.X.), the National Natural Science Foundation of China (no. 32221003 to J.Y.) and the Strategic Priority Research Program of the Chinese Academy of Sciences grant (no. XDB32040104 to J.Y.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

The study was designed by L. Gou and J.Y. Gapr was developed by L. Gou. Data management and reconstruction curation were carried out by Y.W. Comparison with other software was carried out by Y.W. and L. Gao. Quality control was performed by Y.W. and L. Gao. fMOST image alignment was carried out by Y.Z. Virus injection and sparse labeling were carried out by L.X., X.Z., Y.S., H.X. and X.X. Data analysis, interpretation and generation of figures were performed by L. Gou, Y.W. and J.Y. Writing, reviewing and editing of the paper were carried out by L. Gou and J.Y. Scientific direction and funding were the responsibilities of J.Y. and H.W.

Corresponding author

Correspondence to Jun Yan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Methods thanks Hua Han and Daniel Tward for their contribution to the peer review of this work. Peer reviewer reports are available. Primary Handling Editor: Nina Vogt, in collaboration with the Nature Methods team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Comparison between Gapr’s automatic reconstruction (U-Net+neuTube) and the neuTube plugin from the Vaa3D software.

For each dataset, 10 sample cubes containing neurites are randomly selected. With U-Net, both FDR and FNR are significantly reduced (p = 2.0 × 10⁻¹¹ and 6.8 × 10⁻⁴, respectively, with a one-sided Wilcoxon rank-sum test). The horizontal box lines represent the 25th, 50th and 75th percentiles, respectively, and the whiskers extend to values within 1.5 times the interquartile range.
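
For readers who want to reproduce this kind of comparison from the source data tables, a one-sided Wilcoxon rank-sum test can be computed with SciPy as in the sketch below; the per-cube error rates are hypothetical placeholders, not values from this study.

```python
from scipy.stats import ranksums

# Hypothetical per-cube false discovery rates for the two pipelines;
# replace with the values from the source data tables.
fdr_unet_neutube = [0.02, 0.03, 0.01, 0.04, 0.02, 0.03, 0.02, 0.01, 0.03, 0.02]
fdr_neutube_only = [0.20, 0.25, 0.18, 0.30, 0.22, 0.27, 0.19, 0.24, 0.21, 0.26]

# One-sided test: is the U-Net+neuTube error distribution shifted lower?
stat, p = ranksums(fdr_unet_neutube, fdr_neutube_only, alternative='less')
print(f'rank-sum statistic = {stat:.3f}, one-sided p = {p:.2e}')
```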

Source data

Extended Data Fig. 2 Demonstration of automatic reconstruction with a large dataset.

Red squares denote active areas for one step of reconstruction. For each step, black lines denote existing edges, and green lines denote newly reconstructed edges. Red dots are active nodes that guide the selection of reconstruction areas. Note that the existing short segment in step 1 is a manually introduced seed.
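The figure depicts a cube-by-cube, frontier-driven procedure: active nodes select the next active area, tracing inside that cube adds new edges, and nodes on the cube boundary extend the frontier. The sketch below is an illustrative outline of such a scheme, not Gapr's actual implementation; load_cube and trace_cube are assumed helper functions standing in for the imaging backend and the per-cube tracer.

```python
from collections import deque

def reconstruct(seed_nodes, cube_size, load_cube, trace_cube):
    """Grow a reconstruction outward from seed nodes, one cube at a time.

    seed_nodes: initial node positions (e.g. a manually introduced seed segment)
    load_cube:  returns image data for the cube around a node (assumed helper)
    trace_cube: traces neurites inside one cube and returns (new_edges,
                boundary_nodes), where boundary nodes touch the cube faces
                and therefore need further tracing (assumed helper)
    """
    edges = []
    active = deque(seed_nodes)                      # red dots: active nodes
    visited = set()
    while active:
        node = active.popleft()
        key = tuple(int(c // cube_size) for c in node)
        if key in visited:                          # trace each active area once
            continue
        visited.add(key)
        image = load_cube(node, cube_size)          # red square: active area
        new_edges, boundary_nodes = trace_cube(image, node)
        edges.extend(new_edges)                     # green lines: new edges
        active.extend(boundary_nodes)               # nodes that extend the frontier
    return edges
```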

Extended Data Fig. 3 Confirmation of modification consistency by the gather module, including access control, validation and collision avoidance.

This mechanism allows multiple users to modify the same dataset concurrently and ensures data consistency without explicit locking.
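
As an illustration of this kind of optimistic, revision-based validation (a toy model, not the gather module's actual protocol or API), each submitted modification can carry the revision number its client last saw and be checked against the edits committed since then:

```python
import threading

class GatherState:
    """Toy model of revision-based edit validation: every edit is checked
    against the revision the client last saw, instead of locking nodes or
    datasets. A single lock serializes only the commit step itself."""

    def __init__(self):
        self.revision = 0
        self.log = []                          # committed modifications
        self._commit = threading.Lock()

    def try_commit(self, user, base_revision, edit):
        with self._commit:
            if not self._is_allowed(user, edit):          # access control
                return 'denied', self.revision
            newer = self.log[base_revision:]              # edits the client has not seen
            if newer and self._conflicts(edit, newer):    # collision avoidance
                return 'rejected', self.revision          # client must update and retry
            self.log.append(edit)                         # validation passed
            self.revision += 1
            return 'accepted', self.revision

    def edits_since(self, revision):
        return self.log[revision:]

    # _is_allowed and _conflicts are assumed policy hooks, not Gapr APIs.
    def _is_allowed(self, user, edit):
        return True

    def _conflicts(self, edit, newer_edits):
        return False
```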

Extended Data Fig. 4 Collaborative reconstruction of a branch structure by two annotators.

Black lines denote reconstructed edges, while red lines denote new edges to add to the reconstruction result. Annotators prepare the new edges based on their observation of imaging data (thick gray arrows), which typically takes several seconds per step. Thin black arrows show communication between the clients and the server, with a typical round-trip time of less than 100 ms. Note that the reconstruction result at the client side may be incomplete. When incompleteness leads to incorrect operations, the reconstruction result is automatically updated. In this example, client B initially has only the reconstruction result of the right-side branch. After rejection by the server, the result for the left-side branch contributed by client A is automatically loaded, such that client B can connect to the correct node at the branch point.
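
The client side of the same toy model (an illustration only, not Gapr's client code) might follow a submit-and-retry loop; client.apply and client.rebuild are assumed helpers that merge the newly loaded edits and reconnect the pending edge to the correct node, mirroring the behaviour of client B in this example.

```python
def submit_edit(server, client, user, edit):
    """Send one new edge to the server; on rejection, pull the modifications
    other annotators committed since our snapshot and retry."""
    while True:
        status, revision = server.try_commit(user, client.revision, edit)
        if status == 'accepted':
            client.revision = revision
            return True
        if status == 'denied':
            return False                          # blocked by access control
        # 'rejected': another client changed this region first (e.g. the
        # left-side branch from client A); update the local result, then
        # rebuild the edge so it connects at the correct branch node.
        client.apply(server.edits_since(client.revision))
        client.revision = revision
        edit = client.rebuild(edit)
```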

Extended Data Fig. 5 Comparison of visualization efficiency between Gapr, Janelia Workstation and Vaa3D.

We compared these three tools side-by-side with various numbers of nodes loaded.

Extended Data Fig. 6 Reconstruction results of all 15 datasets in sagittal and horizontal views.

Neurons that have been proofread are randomly colored. Gray neurites in dataset no. 192101 have not been manually proofread, as a selective reconstruction procedure was performed for this dataset.

Extended Data Fig. 7 Examples of loops and errors.

(a) An example of a loop structure. On the right side, a global view of the whole loop is shown. An annotator has reported an error at the correct location for resolution. At this site, the over-connection of the loop is balanced by the under-connection of the other neurite segment. (b) Examples of fixed errors and unresolvable errors. The two unresolvable cases are both tangled neurites. In the first fixed error case, the incorrect path found by the A* algorithm is revised. The latter two fixed error cases involve adding missing branches.

Extended Data Fig. 8 Morphology and projection analysis of reconstructed neurons in the mouse brain.

(a) Morphology of cortical Grpr+ interneurons. Five representative neurons demonstrate the morphology of the 5 distinct clusters. Axons are indicated with a brighter color. The distribution of dendrite and axon lengths along the cortical layers for all neurons in the corresponding cluster is shown, with shaded areas indicating the standard error of the mean (s.e.m.). Paired one-sided Wilcoxon signed-rank tests were applied and showed that axons are deeper than dendrites in most clusters. (b) Correspondence between cortical layers of somata and morphology clusters for cortical Grpr+ interneurons. One-sided Fisher’s exact tests (followed by Benjamini-Hochberg correction) were applied as in Fig. 5c and found a significant correspondence between cluster 1 and layer 1 (***, p = 1.72 × 10⁻⁴). (c) Differences in morphological features for cortical Grpr+ interneurons with somata located in different regions. Two-sided Wilcoxon rank-sum tests followed by Benjamini-Hochberg correction were applied. (d) Projection strength to targets of each morphology cluster of VMH neurons in Fig. 5j and Fig. 5k. (e) Distinct projection targets between Esr1+ and Nr5a1+ VMH neurons. One-sided Wilcoxon rank-sum tests were applied for each target region.
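
As a generic illustration of the tests in panel (b), and not the authors' analysis scripts, one-sided Fisher's exact tests with Benjamini-Hochberg correction can be computed with SciPy and statsmodels; the contingency tables below are hypothetical placeholders.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# Hypothetical 2x2 tables per (cluster, layer) pair:
# [[in layer & in cluster, in layer & not in cluster],
#  [not in layer & in cluster, not in layer & not in cluster]]
tables = [
    [[12, 3], [4, 30]],    # e.g. cluster 1 vs. layer 1
    [[5, 10], [11, 23]],
    [[7, 8], [9, 25]],
]

# One-sided ('greater') Fisher's exact test per table, then BH correction.
pvals = [fisher_exact(t, alternative='greater')[1] for t in tables]
reject, pvals_bh, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
for p_raw, p_adj, sig in zip(pvals, pvals_bh, reject):
    print(f'raw p = {p_raw:.3g}, BH-adjusted p = {p_adj:.3g}, significant: {sig}')
```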

Source data

Supplementary information

Supplementary Information

Supplementary Tables 1 and 3, Figs. 1–6 and Notes 1 and 2.

Reporting Summary

Peer Review File

Supplementary Table 2

Information for all 15 datasets and their reconstruction processes.

Supplementary Video 1

Screencast of the fix module running in GNU/Linux. The annotator starts at a soma and proofreads the automatically reconstructed neurite segments. The program guides the annotator to locations that need human attention, skipping nodes that have been automatically proofread. This video contains no audio stream.

Supplementary Video 2

Playback of the whole reconstruction process for dataset no. 18925. Summary text is displayed in the top left corner. The number of edits of the current snapshot is displayed in the bottom right corner. This video contains no audio stream.

Supplementary Video 3

Screencast of the proofread module running on an Android smartphone. The annotator proofreads neurite segments in this cube by fixing a few local errors. The proofread module provides only cube-level information and can therefore run smoothly on mobile devices. This video contains no audio stream.

Source data

Source Data Fig. 2

Source data tables.

Source Data Fig. 3

Source data tables.

Source Data Fig. 4

Source data tables.

Source Data Fig. 5

Source data tables.

Source Data Extended Data Fig. 1

Source data tables.

Source Data Extended Data Fig. 8

Source data tables.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Gou, L., Wang, Y., Gao, L. et al. Gapr for large-scale collaborative single-neuron reconstruction. Nat Methods (2024). https://doi.org/10.1038/s41592-024-02345-z
