SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.
Software quality assurance and checking tool for quantitative assessment of magnetic resonance imaging (MRI) and computed tomography (CT) data; used for quality control of MR imaging data.
Proper citation: MRQy (RRID:SCR_025779)
https://github.com/Washington-University/HCPpipelines
Software package providing a set of tools, primarily shell scripts, for processing multi-modal, high-quality MRI images for the Human Connectome Project. It comprises minimal preprocessing pipelines for structural, functional, and diffusion MRI, developed by the HCP to accomplish many low-level tasks, including spatial artifact/distortion removal, surface generation, cross-modal registration, and alignment to standard space.
Proper citation: HCP Pipelines (RRID:SCR_026575)
https://github.com/pyranges/ncls
Software library implementing the nested containment list data structure for interval overlap queries, similar to an interval tree. It is a static, interval-tree-like structure that is fast for both construction and lookups (a brief usage sketch follows this entry).
Proper citation: Nested containment list (RRID:SCR_027849)
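The ncls library above exposes a small Python API: the containment list is built once and then queried many times. A minimal sketch, assuming the NCLS(starts, ends, ids) constructor and the find_overlap(start, end) iterator described in the project README:

```python
# Minimal sketch of interval overlap queries with the ncls library
# (assumes NCLS(starts, ends, ids) and find_overlap(start, end) as in the README).
import numpy as np
from ncls import NCLS

# Intervals [start, end) with integer identifiers; int64 NumPy arrays are expected.
starts = np.array([0, 5, 10], dtype=np.int64)
ends = np.array([4, 9, 20], dtype=np.int64)
ids = np.array([0, 1, 2], dtype=np.int64)

ncls = NCLS(starts, ends, ids)

# Report every stored interval that overlaps the query interval [3, 11).
for start, end, interval_id in ncls.find_overlap(3, 11):
    print(interval_id, start, end)
```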
https://omics.pnl.gov/software/ms-gf
Software that performs peptide identification by scoring MS/MS spectra against peptides derived from a protein sequence database.
Proper citation: MS-GF+ (RRID:SCR_015646)
http://www.usc.edu/dept/biomed/UTRC/
Biomedical technology research center focusing on the development of very high frequency (above 20 MHz) ultrasonic transducers/arrays for applications in medicine and biology that include ophthalmology, dermatology, vascular surgery, and small animal imaging. The research is pursued simultaneously in three directions: novel piezoelectric materials, very high frequency single element transducers and linear arrays, and finite element modeling and material property measurements. The Center also serves the community through collaborative efforts with investigators having a research interest in high-frequency ultrasound imaging. In addition, it performs the function of training and information dissemination by offering conferences, seminars and specialized courses at the University of Southern California. The Center has set forth a number of goals which define its mission:
* Conduct novel research and development of very high frequency (>20 MHz) ultrasonic transducers, arrays and imaging applications
* Collaborate with other academic institutions, non-profit organizations, and small businesses supported by the NIH to further the development of these high-frequency applications and provide the expertise in transducers necessary for project success
* Serve as an educational center for training scientists and engineers interested in ultrasonic transducer technology
One of the primary goals of the Center is to provide service to outside investigators and small business. Often an investigator or company has a specific application in mind but is without the expertise to develop the necessary ultrasonic device. Investigators at academic institutions, research institutes, or small businesses supported by NIH grants who have a need for medical ultrasound transducers and are interested in a collaborative effort should contact Dr. Hyung Ham Kim or Dr. K. Kirk Shung. Ultrasound transducers and components can be fabricated either completely by center personnel or in a joint effort with other investigators. In addition, collaborators are encouraged to visit the facility for additional training in fabrication and assembly.
Proper citation: Resource Center for Medical Ultrasonic Transducer Technology (RRID:SCR_001404)
THIS RESOURCE IS NO LONGER IN SERVICE, documented on June 23, 2013. Homophila utilizes the sequence information of human disease genes from the NCBI OMIM (Online Mendelian Inheritance in Man) database in order to determine if sequence homologs of these genes exist in the current Drosophila sequence database (FlyBase). Sequences are compared using NCBI's BLAST program. The database is updated weekly and can be searched by human disease, gene name, OMIM number, title, subtitle and/or allelic variant descriptions.
Proper citation: Homophila (RRID:SCR_007717)
http://www.bioinformatics.ucla.edu/ASAP2
THIS RESOURCE IS NO LONGER IN SERVICE, documented on 8/12/13. An expanded version of the Alternative Splicing Annotation Project (ASAP) database with a new interface and integration of comparative features using UCSC BLASTZ multiple alignments. It supports 9 vertebrate species, 4 insects, and nematodes, and provides extensive alternative splicing analysis and the corresponding splicing variants. For human alternative splicing data, newly added EST libraries were classified and incorporated into the previous tissue and cancer classification, and lists of tissue-specific and cancer (versus normal) specific alternatively spliced genes were re-calculated and updated. The project created novel orthologous exon and intron databases, along with their splice variants, based on multiple alignments among several species. These orthologous exon and intron databases can give more comprehensive homologous gene information than protein-similarity-based methods. Furthermore, splice junction and exon identity among species can be valuable resources for elucidating species-specific genes. The ASAP II database can be easily integrated with pygr (unpublished, the Python Graph Database Framework for Bioinformatics) and its features such as graph queries and multi-genome alignment queries. ASAP II can be searched by several different criteria such as gene symbol, gene name and ID (UniGene, GenBank etc.). The web interface provides 7 different kinds of views: (I) user query, UniGene annotation, orthologous genes and genome browsers; (II) genome alignment; (III) exons and orthologous exons; (IV) introns and orthologous introns; (V) alternative splicing; (VI) isoform and protein sequences; (VII) tissue and cancer vs. normal specificity. ASAP II shows genome alignments of isoforms, exons, and introns in a UCSC-like genome browser. All alternative splicing relationships with supporting evidence information, types of alternative splicing patterns, and inclusion rates for skipped exons are listed in separate tables. Users can also search human data for tissue- and cancer-specific splice forms at the bottom of the gene summary page. P-values for tissue specificity are reported as log-odds (LOD) scores, and results with LOD >= 3 and at least 3 EST sequences are highlighted.
Proper citation: Alternative Splicing Annotation Project II Database (RRID:SCR_000322)
http://biositemaps.ncbcs.org/rds/search.html
Resource Discovery System (RDS) is a web-accessible and searchable inventory of biomedical research resources. It is powered by a standards-based informatics infrastructure that includes:
* Biositemaps Information Model
* Biomedical Resource Ontology Extensions
* Web Services distributed web-accessible inventory framework
* Biositemap Resource Editor
* Resource Discovery System
Source code and project documentation are to be made available on an open-source basis. Contributing institutions: University of Pittsburgh, University of Michigan, Stanford University, Oregon Health & Science University, University of Texas Houston, Duke University, Emory University, University of California Davis, University of California San Diego, National Institutes of Health, and Inventory Resources Working Group Members.
Proper citation: Resource Discovery System (RRID:SCR_005554)
http://www.informatics.jax.org/home/recombinase
Curated data about all recombinase-containing transgenes and knock-ins developed in mice, providing a comprehensive resource that delineates known activity patterns and allows users to find relevant mouse resources for their studies.
Proper citation: Recombinase (cre) Activity (RRID:SCR_006585)
Biomedical technology research center that conducts, catalyzes and enables multiscale biomedical research, focusing on four key activities: 1) integrating computational, data and visualization resources in a transparent, advanced grid environment to enable better access to distributed data, computational resources, instruments and people; 2) developing and deploying advanced computational tools for modeling and simulation, data analysis, query and integration, three-dimensional image processing and interactive visualization; 3) delivering and supporting advanced grid/cyberinfrastructure for biomedical researchers; and 4) training a cadre of new researchers to have an interdisciplinary, working knowledge of computational technology relevant to biomedical scientists. NBCR enables biomedical scientists to address the challenge of integrating detailed structural measurements from diverse scales of biological organization that range from molecules to organ systems in order to gain quantitative understanding of biological function and phenotypes. Predictive multi-scale models and their driving biological research problems together address issues in modeling of sub-cellular biophysics, building molecular modeling tools to accelerate discovery, and defining tools for patient-specific multi-scale modeling. NBCR furthers these driving problems by developing tools and models based on rapid advances in mathematics and information technology, incorporating them into NBCR pipelines or problem solving environments, and addressing the inevitable changes in the underlying cyber-infrastructure technologies and continually adapting codes over time. Their technology focus integrates both the biological applications and the underlying support software into reproducible science workflows that can function across a number of physical infrastructures.
Proper citation: National Biomedical Computation Resource (RRID:SCR_002656)
Biomedical technology research center that develops computer-aided, advanced microscopy for the acquisition of structural and functional data in the dimensional range of 1 nm to 100 um, a range encompassing macromolecules, subcellular structures and cells. Novel specimen-staining methods, imaging instruments, including intermediate high-voltage transmission electron microscopes (IVEMs) and high-speed, large-format laser-scanning light microscopes, and computational capabilities are available for addressing mesoscale biological microscopy of proteins and macromolecular complexes in their cellular and tissue environments. These technologies are developed to bridge understanding of biological systems between the gross anatomical and molecular scales and to make these technologies broadly available to biomedical researchers. NCMIR provides expertise, infrastructure, technological development, and an environment in which new information about the 3D ultrastructure of tissues, cells, and macromolecular complexes may be accurately and easily obtained and analyzed. NCMIR fulfills its mission through technology development, collaboration, service, training, and dissemination. It aims to develop preparative methods and analytical approaches to 3D microscopy applicable to neurobiology and cell biology, incorporating equipment and implementing software that expand the analysis of 3D structure. The core research activities in the areas of specimen development, instrument development, and software infrastructures maximize the advantages of higher voltage electron microscopy and correlated light microscopies to make ambitious imaging studies across scales routine, and to facilitate the use of resources by biomedical researchers. NCMIR actively recruits outside users who will not only make use of these resources, but who also will drive technology development and receive training.
Proper citation: National Center for Microscopy and Imaging Research (RRID:SCR_002655)
Biomedical technology research center and training resource for the study of the structure of partially ordered biological molecules, complexes of biomolecules and cellular structures under conditions similar to those present in living cells and tissues. The goal of research at BioCAT is to determine the detailed structure and mechanism of action of biological systems at the molecular level. The techniques used are X-ray fiber diffraction, X-ray solution scattering and X-ray micro-emission and micro-absorption spectroscopy, with an emphasis on time-resolved studies and the development of novel techniques.
Proper citation: BioCAT (RRID:SCR_001440)
http://www.mri-resource.kennedykrieger.org/
Biomedical technology research center that provides expertise for the design of quantitative magnetic resonance imaging (MRI) and spectroscopy (MRS) data acquisition and processing technologies that facilitate the biomedical research of a large community of clinicians and neuroscientists in Maryland and throughout the USA. These methods allow noninvasive assessment of changes in brain anatomy as well as in tissue metabolite levels, physiology, and brain functioning while the brain is changing size during early development and during neurodegeneration, i.e. the changing brain throughout the life span. The Kirby Center has state-of-the-art 3 Tesla and 7 Tesla scanners equipped with parallel imaging (8, 16, and 32-channel receive coils) and multi-transmit capabilities. CIS has an IBM supercomputer that is part of a national supercomputing infrastructure. Resources fall into the following categories:
* MRI facilities, image acquisition, and processing
* Computing facilities and image analysis
* Novel statistical methods for functional brain imaging
* Translating laboratory discoveries to patient treatment
Proper citation: National Resource for Quantitative Functional MRI (RRID:SCR_006716)
Provides high-performance tandem mass spectrometry and proteomics, including multiplexed quantitative comparative analysis of protein and post-translational modifications, and a suite of tools for the analysis of mass spectrometry proteomics data. It provides both scientific and technical expertise and state-of-the-art high-performance, tandem mass spectrometric instrumentation. The facility also provides a service for small molecule analysis. Significant instrumentation in the facility includes three QSTAR quadrupole orthogonal time of flight instruments, and both an LTQ-Orbitrap platform with electron transfer dissociation (ETD) and an LTQ-FT linear ion trap FT-ICR instrument equipped with the ability to perform electron capture dissociation (ECD). The Center also has a 4700 Proteomic Analyzer MALDI tandem time of flight instrument; as well as a QTRAP 5500 hybrid triple quadrupole linear ion trap instrument; and a Thermo Fisher LTQ Orbitrap Velos. Major research focuses within the Center are the analysis of post-translational modifications, including phosphorylation and O-GlcNAcylation and development of methods for quantitative comparative analysis of protein and post-translational modification levels. The program also continues to develop one of the leading suites of tools for analysis of mass spectrometry proteomics data, Protein Prospector. The current web-based release allows unrestricted searching of MS and MSMS data, as well as the ability to perform comparative quantitative analysis of samples using isotopic-labeling reagents. It is the only freely-available web-based resource that allows this type of analysis.
Proper citation: National Bio-Organic Biomedical Mass Spectrometry Resource Center (RRID:SCR_009004)
Web application to discover resources available at participating networked universities. This distributed platform for creating and sharing semantically rich data is built around semantic web technologies and follows linked open data principles.
Proper citation: Eagle I (RRID:SCR_013153)
Database for the bacterium Escherichia coli K-12 MG1655. The EcoCyc project performs literature-based curation of the entire genome, and of transcriptional regulation, transporters, and metabolic pathways. The long-term goal of the project is to describe the molecular catalog of the E. coli cell, as well as the functions of each of its molecular parts, to facilitate a system-level understanding of E. coli. EcoCyc is an electronic reference source for E. coli biologists, and for biologists who work with related microorganisms.
Proper citation: EcoCyc (RRID:SCR_002433)
https://open.med.harvard.edu/display/SHRINE/Community
Software providing a scalable query and aggregation mechanism that enables federated queries across many independently operated patient databases. This platform enables clinical researchers to solve the problem of identifying sufficient numbers of patients to include in their studies by querying across distributed hospital electronic medical record systems. Through the use of a federated network protocol, SHRINE allows investigators to see limited data about patients meeting their study criteria without compromising patient privacy. This software should greatly enable population-based research, assessment of potential clinical trial cohorts, and hypothesis formation for follow-up study by combining the EHR assets across the hospital system. In order to obtain the maximum number of cases representing the study population, it is useful to aggregate patient facts across as many sites as possible. Cutting across institutional boundaries necessitates that each hospital IRB remain in control, and that its local authority is recognized for each and every request for patient data. The independence, ownership, and legal responsibilities of hospitals predetermine a decentralized technical approach, such as a federated query over locally controlled databases (a conceptual sketch of such a federated count query follows this entry). The application comes with the SHRINE Core Ontology but it can be used with any ontology, even one that is disease specific. The Core Ontology is designed to enable the widest range of studies possible using facts gathered in the EMR during routine patient care. SHRINE allows multiple ontologies to be used for different research purposes on the same installed systems.
Proper citation: SHRINE (RRID:SCR_006293)
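SHRINE itself is a Java-based system built on i2b2, but the federated aggregate-count idea it implements can be illustrated conceptually. The sketch below is only that: the Site class, concept codes, and count floor are hypothetical stand-ins, not SHRINE's actual network protocol or API.

```python
# Conceptual sketch of a federated aggregate-count query across independently
# operated sites, in the spirit of SHRINE. Everything here is illustrative.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Site:
    """One locally controlled patient database (hypothetical stand-in)."""
    name: str
    counts: Dict[str, int]  # pre-aggregated counts per concept code

    def count_patients(self, concept_code: str, floor: int = 10) -> int:
        """Return an obfuscated aggregate count; never row-level patient data."""
        n = self.counts.get(concept_code, 0)
        # Small counts are suppressed so individual patients cannot be inferred.
        return n if n >= floor else 0


def federated_count(sites: List[Site], concept_code: str) -> Dict[str, int]:
    """Fan the query out to every site and collect per-site aggregate counts."""
    return {site.name: site.count_patients(concept_code) for site in sites}


sites = [
    Site("Hospital A", {"ICD10:E11": 420}),  # illustrative concept code
    Site("Hospital B", {"ICD10:E11": 7}),    # below the floor, reported as 0
]
print(federated_count(sites, "ICD10:E11"))   # {'Hospital A': 420, 'Hospital B': 0}
```

Each site stays in control of its own data and returns only aggregate, obfuscated counts, which is the decentralized approach the description argues for.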
http://www.loni.usc.edu/Software/Debabeler
Software to manage the conversion of imaging data from one file format and convention to another. It consists of a graphical user interface to visually program the translations, and a data translation engine to read, sort and translate the input files, and write the output files to disk. The data translation engine: (1) reads metadata from a set of image files on disk to identify the source that produced each file; (2) groups the image files into user-defined collections using image metadata values; (3) translates each image file collection by reading metadata and pixel data and mapping the data into the appropriate output file format through a programmable set of connected modules (a conceptual sketch of the grouping step follows this entry). The Debabeler uses the Java Image I/O Plugin Architecture to read and write a wide variety of common medical image file formats, including ANALYZE, MINC, and most variations of DICOM.
Proper citation: LONI Debabeler (RRID:SCR_001160)
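Debabeler's engine is written in Java with a visual programming interface; the grouping step it performs can be sketched conceptually as below. The read_header() function is a hypothetical placeholder for a real DICOM/ANALYZE/MINC metadata reader, not Debabeler's own code.

```python
# Conceptual sketch of the "group files into collections by metadata" step of a
# data translation engine such as LONI Debabeler. read_header() is hypothetical.
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Tuple


def read_header(path: Path) -> Dict[str, str]:
    """Hypothetical stand-in for a real medical-image metadata reader."""
    # For illustration only: fake the modality and series ID from the file name.
    modality, series, _ = path.name.split("_", 2)
    return {"Modality": modality, "SeriesInstanceUID": series}


def group_by_metadata(paths: List[Path], keys: Tuple[str, ...]) -> Dict[Tuple[str, ...], List[Path]]:
    """Bucket image files into user-defined collections keyed on metadata values."""
    collections: Dict[Tuple[str, ...], List[Path]] = defaultdict(list)
    for path in paths:
        header = read_header(path)
        collections[tuple(header.get(k, "") for k in keys)].append(path)
    return dict(collections)


# Each resulting collection would then be handed to a chain of translation
# modules that map metadata and pixel data into the chosen output format.
files = [Path("MR_s001_0001.dcm"), Path("MR_s001_0002.dcm"), Path("CT_s009_0001.dcm")]
for key, members in group_by_metadata(files, ("Modality", "SeriesInstanceUID")).items():
    print(key, [p.name for p in members])
```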
https://github.com/SciKnowEngine/kefed.io
Knowledge engineering software for reasoning with scientific observations and interpretations. The software has three parts: (a) the KEfED model editor - a design editor for creating KEfED models by drawing a flow diagram of an experimental protocol; (b) the KEfED data interface - a spreadsheet-like tool that permits users to enter experimental data pertaining to a specific model; (c) a "neural connection matrix" interface that presents neural connectivity as a table of ordinal connection strengths representing the interpretations of tract-tracing data. This tool also allows the user to view experimental evidence pertaining to a specific connection. The KEfED model is designed to provide a lightweight representation for scientific knowledge that is (a) generalizable, (b) a suitable target for text-mining approaches, (c) relatively semantically simple, and (d) based on the way that scientists plan experiments, and should therefore be intuitively understandable to non-computational bench scientists. The basic idea of the KEfED model is that scientific observations tend to have a common design: there is a significant difference between measurements of some dependent variable under conditions specified by two (or more) values of some independent variable (a hypothetical sketch of this pattern follows this entry).
Proper citation: Knowledge Engineering from Experimental Design (RRID:SCR_001238)
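The closing sentence of the description states the core KEfED pattern: a dependent measurement compared across values of an independent variable. A minimal sketch of that pattern follows; the class and field names are illustrative only and are not KEfED's actual schema.

```python
# Hypothetical sketch of the KEfED observation pattern: measurements of a
# dependent variable recorded under different values of an independent variable.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Observation:
    dependent_variable: str                       # what was measured
    independent_variable: str                     # the condition that was varied
    measurements: Dict[str, List[float]] = field(default_factory=dict)

    def record(self, condition_value: str, value: float) -> None:
        """Store one measurement taken under a specific condition value."""
        self.measurements.setdefault(condition_value, []).append(value)


# e.g. ordinal tract-tracing connection strength measured for two injection sites
obs = Observation("connection strength", "injection site")
obs.record("area A", 3.0)
obs.record("area B", 1.0)
print(obs.measurements)   # {'area A': [3.0], 'area B': [1.0]}
```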
The MiND (Metadata in NIfTI for DWI) framework enables data sharing and software interoperability for diffusion-weighted MRI. This site provides specification details, tools, and examples of the MiND mechanism for representing important metadata for DWI data sets at various stages of post-processing. The MiND framework provides a practical solution to the problem of interoperability between DWI analysis tools, and it effectively expands the analysis options available to end users. To assist both users and developers in working with MiND-formatted files, a number of software tools are provided for download (a short example of inspecting NIfTI header extensions follows this entry):
* MiNDHeader - a utility for inspecting MiND-extended files.
* I/O Libraries - programming libraries to simplify writing and parsing MiND-formatted data.
* Sample Files - example files for each MiND schema.
* DIRAC - LONI's Diffusion Imaging Reconstruction and Analysis Collection, a DWI processing suite which utilizes the MiND framework.
Proper citation: LONI MiND (RRID:SCR_004820)
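MiND stores its DWI metadata in NIfTI-1 header extensions. A minimal sketch of listing the extensions in a NIfTI file using the nibabel library; the file name is a placeholder, and no particular MiND extension code is assumed here.

```python
# Minimal sketch: list the NIfTI-1 header extensions in a DWI file with nibabel.
# MiND metadata lives in such extensions; "dwi.nii.gz" is a placeholder path.
import nibabel as nib

img = nib.load("dwi.nii.gz")
for ext in img.header.extensions:
    # get_code() is the numeric NIfTI ecode; get_content() is the extension payload.
    print(ext.get_code(), len(ext.get_content()))
```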