SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.
http://metpetdb.rpi.edu/metpetweb/
Database / data repository for metamorphic petrology that is being designed and built by a global community of metamorphic petrologists in collaboration with computer scientists at Rensselaer Polytechnic Institute as part of the National Cyberinfrastructure Initiative.
Proper citation: MetPetDB (RRID:SCR_002208)
National marine phytoplankton collection, maintaining over 2,700 strains from around the world. Most are marine phytoplankton, but the collection also includes benthic, macrophytic, freshwater, and heterotrophic organisms, and now incorporates bacteria and viruses. Strain records have (when available):
* collection and isolation information
* culturing medium recipes and growth conditions
* photographs
* GenBank accession link
* collection site map
* link to the taxonomic database Micro*scope
The deposition of new strains is welcome if the strains are a valuable addition to the collection. Examples include strains that are referred to in publications, have interesting molecular, biochemical or physiological properties, are the basis for taxonomic descriptions, are important for aquaculture, or are from an unusual geographical location or ecological habitat. The NCMA offers a course in phytoplankton culturing techniques, and facilities for visiting scientists are available at the new laboratories in East Boothbay, Maine. Services include: Mass Culturing, DNA and RNA Purification, Private Holdings, Culture Techniques Course, Visiting Scientists, Single Cell Genomics, Flow Cytometry, Corporate Alliances and Technology Transfer.
Proper citation: National Center for Marine Algae and Microbiota (RRID:SCR_002120)
http://csdms.colorado.edu/wiki/Main_Page
Repository of models and data related to earth-surface dynamics modeling. The CSDMS Modeling Tool (CMT) allows you to run and couple CSDMS model components on the CSDMS supercomputer in a user-friendly software environment. Components in the CMT are based on models originally submitted to the CSDMS model repository and now adapted to communicate with other models. The CMT is the environment in which you can link these components together to run new simulations. The CMT software runs on your own computer but communicates with the CSDMS HPCC to perform the simulations; thus, the CMT also offers a relatively easy way of using the CSDMS supercomputer for model experiments. CSDMS deals with the Earth's surface - the ever-changing, dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere. They are a diverse community of experts promoting the modeling of earth surface processes by developing, supporting, and disseminating integrated software modules that predict the movement of fluids and the flux (production, erosion, transport, and deposition) of sediment and solutes in landscapes and their sedimentary basins. CSDMS:
* Produces protocols for community-generated, continuously evolving, open software
* Distributes software tools and models
* Provides cyber-infrastructure to promote the quantitative modeling of earth surface processes
* Addresses the challenging problems of surface-dynamic systems: self-organization, localization, thresholds, strong linkages, scale invariance, and interwoven biology & geochemistry
* Enables the rapid development and application of linked dynamic models tailored to specific landscape basin evolution (LBE) problems at specific temporal and spatial scales
* Partners with related computational and scientific programs to eliminate duplication of effort and to provide an intellectually stimulating environment
* Supports a strong linkage between what is predicted by CSDMS codes and what is observed, both in nature and in physical experiments
* Supports the imperatives in Earth Science research
Proper citation: Community Surface Dynamics Modeling System (RRID:SCR_002196)
Paleoecology database for Pliocene-Pleistocene to Holocene fossil data with a centralized structure for interdisciplinary, multiproxy analyses and common tool development; discipline-specific data can also be easily accessed. Data currently include the North American Pollen Database (NAPD) and fossil mammals (FAUNMAP). Other proxies (plant macrofossils, beetles, ostracodes, diatoms, etc.) and geographic areas (Europe, Latin America, etc.) will be added in the near future. Data are derived from sites from the last 5 million years.
Proper citation: Neotoma Paleoecology Database (RRID:SCR_002190)
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on July 1, 2022. Organization whose mission is to build and promote a sustainable ecosystem of professional societies, funding agencies, foundations, companies, and citizens together with life science researchers and innovators in computing, infrastructure and analysis with the expressed goal of translating new discoveries into tools, resources and products.
Proper citation: DELSA (RRID:SCR_006231)
https://www.msu.edu/~brains/brains/human/index.html
A labeled three-dimensional atlas of the human brain created from MRI images. Presented in conjunction are anatomically labeled stained sections that correspond to the three-dimensional MRI images. The stained sections are from a different brain than the one scanned for the MRI images. Also available are the major anatomical features of the human hypothalamus; axial sections stained for cell bodies or for nerve fibers at six rostro-caudal levels of the human brain stem; and images and QuickTime movies. The MRI subject was a 22-year-old adult male. Differing techniques used to study the anatomy of the human brain all have their advantages and disadvantages. Magnetic resonance imaging (MRI) allows for three-dimensional viewing of the brain and its structures, precise spatial relationships, and some differentiation between types of tissue; however, the image resolution is somewhat limited. Stained sections, on the other hand, offer excellent resolution and the ability to see individual nuclei (cell stain) or fiber tracts (myelin stain); however, there are often spatial distortions inherent in the staining process. The nomenclature used is from Paxinos G and Watson C (1998), The Rat Brain in Stereotaxic Coordinates, 4th ed., Academic Press, San Diego, CA, 256 pp.
Proper citation: Human Brain Atlas (RRID:SCR_006131)
XSEDE is a single virtual system that scientists can use to interactively share computing resources, data, and expertise. People around the world use these resources and services, such as supercomputers, collections of data, and new tools, to improve our planet. XSEDE resources may be broadly categorized as follows: High Performance Computing, High Throughput Computing, Visualization, Storage, and Data Services. Many resources provide overlapping functionality across categories. Scientists, engineers, social scientists, and humanists around the world - many of them at colleges and universities - use advanced digital resources and services every day. Things like supercomputers, collections of data, and new tools are critical to the success of those researchers, who use them to make our lives healthier, safer, and better. XSEDE integrates these resources and services, makes them easier to use, and helps more people use them. XSEDE supports 16 supercomputers and high-end visualization and data analysis resources across the country. Digital services, meanwhile, provide users with seamless integration with NSF's high-performance computing and data resources. XSEDE's integrated, comprehensive suite of advanced digital services will federate with other high-end facilities and with campus-based resources, serving as the foundation for a national cyberinfrastructure ecosystem. Common authentication and trust mechanisms, a global namespace and filesystems, remote job submission and monitoring, and file transfer services are examples of XSEDE's advanced digital services. XSEDE's standards-based architecture allows open development of future digital services and enhancements. XSEDE also provides the expertise to ensure that researchers can make the most of the supercomputers and tools.
Proper citation: XSEDE - Extreme Science and Engineering Discovery Environment (RRID:SCR_006091)
A specialized version of autoPack designed to pack biological components together. The current version is optimized to pack molecules into cells with biologically relevant interactions to populate massive cell models with atomic or near-atomic details. Components of the algorithm pack transmembrane proteins and lipids into bilayers, globular molecules into compartments defined by the bilayers (or as exteriors), and fibrous components like microtubules, actin, and DNA.
Proper citation: Cellpack (RRID:SCR_006831)
Web portal that allows free access to supercomputing resources for large scale modeling and data processing. Portal facilitates access and use of National Science Foundation (NSF) High Performance Computing (HPC) resources by neuroscientists.
Proper citation: Neuroscience Gateway (RRID:SCR_008915)
http://tulane.edu/som/regenmed/services/index.cfm
The Stem Cell Research and Regenerative Medicine's Tissue Culture Core provides cells for research use within the department, as well as for distribution to other facilities. The core obtains hMSCs from bone marrow donor samples and expands these cells for research use. The hMSCs are also characterized for bone, fat, and cartilage differentiation, and are stored on site for use. The Tissue Culture Core also handles the expansion and characterization of mouse and rat MSCs. The animal cells are cultured in a separate area and never interact with human-derived cells. We also have a supply of hMSCs marked with GFP+, Mito Red, and Mito Blue available.
Proper citation: Tulane Stem Cell Research and Regenerative Medicine Tissue Culture Core (RRID:SCR_007342)
https://github.com/mandricigor/ScaffMatch
Software tool implementing a scaffolding algorithm based on maximum weight matching, able to produce high-quality scaffolds from next-generation sequencing data (reads and contigs). Able to handle reads with both short and long insert sizes.
Proper citation: ScaffMatch (RRID:SCR_017025)
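As a rough illustration of the matching idea named above (not ScaffMatch's own code), the sketch below treats contig ends as graph nodes and read-pair links as weighted edges, then uses networkx's maximum weight matching to keep the most strongly supported, non-conflicting adjacencies; the node names and link counts are invented for the example.

```python
# Illustrative sketch of scaffolding by maximum weight matching; the graph,
# node names, and link counts are hypothetical, not ScaffMatch internals.
import networkx as nx

# Read-pair support between contig ends (invented numbers).
links = {
    ("c1_right", "c2_left"): 42,
    ("c2_right", "c3_left"): 37,
    ("c1_right", "c3_left"): 5,   # weaker, conflicting link
}

G = nx.Graph()
for (a, b), n_pairs in links.items():
    G.add_edge(a, b, weight=n_pairs)

# A maximum weight matching selects a conflict-free set of adjacencies
# with the largest total read-pair support.
matching = nx.max_weight_matching(G)
print(sorted(tuple(sorted(e)) for e in matching))
# Keeps the c1-c2 and c2-c3 adjacencies; drops the weak c1-c3 link.
```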
https://github.com/taborlab/FlowCal
Open source software tool for automatically converting flow cytometry data from arbitrary to calibrated units. Can be run using an intuitive Microsoft Excel interface or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. It automatically gates data, calculates common statistics, and produces plots.
Proper citation: FlowCal (RRID:SCR_018140)
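A minimal sketch of the scripted workflow, based on FlowCal's documented Python interface as best recalled here; the file names, channel names, and bead MEF values are placeholders, and exact function signatures should be checked against the package documentation.

```python
# Hedged sketch of a FlowCal script; filenames, channels, and MEF values are
# placeholders, and signatures should be verified against the FlowCal docs.
import numpy as np
import FlowCal

# Load FCS files for a cell sample and the calibration beads.
sample = FlowCal.io.FCSData('sample.fcs')
beads = FlowCal.io.FCSData('calibration_beads.fcs')

# Convert raw channel numbers to relative fluorescence units.
sample = FlowCal.transform.to_rfi(sample)
beads = FlowCal.transform.to_rfi(beads)

# Automatic density gating on scatter channels (channel names depend on the
# instrument; these are assumptions).
sample_gated = FlowCal.gate.density2d(sample, channels=['FSC', 'SSC'],
                                      gate_fraction=0.5)

# Build a transformation to calibrated MEF units from the beads sample;
# the MEF values below are placeholders standing in for a bead datasheet.
to_mef = FlowCal.mef.get_transform_fxn(
    beads,
    mef_values=np.array([0, 646, 1704, 4827, 15991, 47609, 135896, 273006]),
    mef_channels='FL1')

sample_mef = to_mef(sample_gated, channels='FL1')
```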
A set of software tools created to rapidly build scientific data-management applications. These applications will enhance the process of data annotation, analysis, and web publication. The system provides a set of easy-to-use software tools for data sharing by the scientific community. It enables researchers to build their own custom-designed data management systems. The problem of scientific data management rests on several challenges. These include flexible data storage, a way to share the stored data, tools to curate the data, and history of the data to show provenance. The Yogo Framework gives you the ability to build scientific data management applications that address all of these challenges. The Yogo software is being developed as part of the NeuroSys project. All tools created as part of the Yogo Data Management Framework are open source and released under an OSI approved license.
Proper citation: Yogo Data Management System (RRID:SCR_004239)
http://openconnectomeproject.org/
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on January 9, 2023. Connectomes repository to facilitate the analysis of connectome data by providing a unified front for connectomics research. With a focus on Electron Microscopy (EM) data and various forms of Magnetic Resonance (MR) data, the project aims to make state-of-the-art neuroscience open to anybody with computer access, regardless of knowledge, training, background, etc. Open science means open to view, play, analyze, contribute, anything. Access to high resolution neuroanatomical images that can be used to explore connectomes and programmatic access to this data for human and machine annotation are provided, with a long-term goal of reconstructing the neural circuits comprising an entire brain. This project aims to bring the most state-of-the-art scientific data in the world to the hands of anybody with internet access, so collectively, we can begin to unravel connectomes. Services:
* Data Hosting - Their Bruster (brain-cluster) is large enough to store nearly any modern connectome data set. Contact them to make your data available to others for any purpose, including gaining access to state-of-the-art analysis and machine vision pipelines.
* Web Viewing - The Collaborative Annotation Toolkit for Massive Amounts of Image Data (CATMAID) is designed to navigate, share, and collaboratively annotate massive image data sets of biological specimens. The interface is inspired by Google Maps, enhanced to allow the exploration of 3D image data. View the fork of the code or go directly to view the data.
* Volume Cutout Service - RESTful API that enables you to select any arbitrary volume of the 3D database (3ddb) and receive a link to download an HDF5 file (for MATLAB, C, C++, or C#) or a NumPy pickle (for Python). Use some other programming language? Just let them know.
* Annotation Database - Spatially co-registered volumetric annotations are compactly stored for efficient queries such as: find all synapses, or which neurons synapse onto this one. Create your own annotations or browse others.
* Sample Downloads - In addition to being able to select arbitrary downloads from the datasets, they have also collected a few choice volumes of interest.
* Volume Viewer - A web and GPU enabled stand-alone app for viewing volumes at arbitrary cutting planes and zoom levels. The code and program can be downloaded.
* Machine Vision Pipeline - They are building a machine vision pipeline that pulls volumes from the 3ddb and outputs neural circuits - a work in progress. As soon as a stable version is available, it will be released.
* Mr. Cap - The Magnetic Resonance Connectome Automated Pipeline (Mr. Cap) is built on JIST/MIPAV for high-throughput estimation of connectomes from diffusion and structural imaging data.
* Graph Invariant Computation - Upload your graphs or streamlines, and download some invariants.
* iPad App - WholeSlide is an iPad app that uses our open data and API to serve images on the go.
Proper citation: Open Connectome Project (RRID:SCR_004232)
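The Volume Cutout Service listed above followed the usual RESTful pattern: encode the dataset, resolution, and x/y/z extents in a URL and fetch the returned volume. Since the project is retired and its real URL scheme is not reproduced here, the endpoint, path layout, and dataset name below are purely hypothetical placeholders that only illustrate the idea.

```python
# Purely illustrative RESTful cutout request; the endpoint, path layout, and
# dataset token are hypothetical, not the retired Open Connectome API.
import io
import h5py
import requests

BASE = "https://example.org/ocp/ca"      # hypothetical server
DATASET = "example_em_dataset"           # hypothetical dataset token
RESOLUTION = 1
x, y, z = (3000, 3500), (4000, 4500), (100, 116)

url = (f"{BASE}/{DATASET}/hdf5/{RESOLUTION}/"
       f"{x[0]},{x[1]}/{y[0]},{y[1]}/{z[0]},{z[1]}/")

resp = requests.get(url, timeout=60)
resp.raise_for_status()

# The service returned an HDF5 file; the dataset name inside it is also assumed.
with h5py.File(io.BytesIO(resp.content), "r") as f:
    volume = f["CUTOUT"][...]
print(volume.shape)  # e.g. (16, 500, 500) for the z, y, x ranges above
```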
A dynamic archive of information on digital morphology and high-resolution X-ray computed tomography of biological specimens, serving imagery for more than 750 specimens contributed by almost 150 collaborating researchers from the world's premiere natural history museums and universities. Browse through the site and see spectacular imagery and animations and details on the morphology of many representatives of the Earth's biota. Digital Morphology, part of the National Science Foundation Digital Libraries Initiative, develops and serves unique 2D and 3D visualizations of the internal and external structure of living and extinct vertebrates, and a growing number of 'invertebrates.' The Digital Morphology library contains nearly a terabyte of imagery of natural history specimens that are important to education and central to ongoing cutting-edge research efforts. Digital Morphology visualizations are now in use in classrooms and research labs around the world and can be seen in a growing number of museum exhibition halls. The Digital Morphology site currently presents:
* QuickTime animations of complete stacks of serial CT sections
* Animated 3D volumetric movies of complete specimens
* Stereolithography (STL) files of 3D objects that can be viewed interactively and rapidly prototyped into scalable physical 3D objects that can be handled and studied as if they were the original specimens
* Informative introductions to the scanned organisms, often written by world authorities
* Pertinent bibliographic information on each specimen
* Useful links
* A course resource for our 'Digital Methods for Paleontology' course, in which students learn how to generate all of the types of imagery displayed on the Digital Morphology site
Proper citation: DigiMorph (RRID:SCR_004416)
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on January 11, 2023. The Catalog of Fishes is the authoritative reference for taxonomic fish names, featuring a searchable on-line database. The Catalog of Fishes covers more than 53,000 species and subspecies, over 10,000 genera and subgenera, and includes in excess of 16,000 bibliographic references. The Catalog of Fishes consists of three hardbound volumes of 900-1000 pages each, along with a CD-ROM. The online database is updated about every 8 weeks and is now about twice the size of the published version. It is one of the oldest and most complete databases for any large animal group. References are over 30,000. Valid species are over 30,000. This work is an essential reference for taxonomists, scientific historians, and for any specialist dealing with fishes. Entries for species, for example, consist of species/subspecies name, genus, author, date, publication, pages, figures, type locality, location of type specimen(s), current status (with references), family/subfamily, and important publication, taxonomic, or nomenclatural notes. Nearly all original descriptions have been examined, and much effort has gone into determining the location of type specimens. The Genera are updated from Eschmeyer's 1990 Genera of Recent Fishes. Both genera and species are listed in a classification using recent taxonomic schemes. Also included are a lengthy list of museum acronyms, an interpretation of the International Code of Zoological Nomenclature, and Opinions of the International Commission involving fishes.
Proper citation: Catalog of Fishes (RRID:SCR_004408)
Resource for the storage, retrieval and annotation of plant ESTs, with a focus on comparative genomics. PGN comprises an analysis pipeline and a website, and presently contains mainly data from the Floral Genome Project; however, it accepts submissions from other sources. All data in PGN are directly derived from chromatograms, and all original and intermediate data are stored in the database. The current datasets in PGN come from the Floral Genome Project and include the following species: Acorus americanus, Amborella trichopoda, Asparagus officinalis, Cucumis sativus, Eschscholzia californica, Illicium parviflorum, Ipomopsis aggregata, Liriodendron tulipifera, Mesembryanthemum crystallinum, Mimulus guttatus, Nuphar advena, Papaver somniferum, Persea americana, Prymnesium parvum, Ribes americanum, Saruma henryi, Stenogyne rugosa, Vaccinium corymbosa, Welwitschia mirabilis, Yucca filamentosa, Zamia fischeri. For functional annotation, BLAST is used to find the best match of each unigene sequence in the GenBank NR database and in the complete coding sequences from Arabidopsis. These annotations are stored in the database and serve as the primary source of annotation. The annotation framework will be extended to Gene Ontology annotations in the future.
Proper citation: PGN (RRID:SCR_004559)
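As a generic sketch of the annotation step described above (best BLAST hit of each unigene), the commands below drive NCBI BLAST+ from Python; the file names and database names are placeholders, and this is not PGN's actual pipeline.

```python
# Generic sketch of a best-hit BLAST annotation step; file and database names
# are placeholders, and this is not PGN's own pipeline.
import subprocess

def best_hits(query_fasta, protein_db, out_tsv):
    """Run blastx and keep one best hit per unigene (tabular output)."""
    subprocess.run(
        [
            "blastx",
            "-query", query_fasta,      # unigene nucleotide sequences
            "-db", protein_db,          # e.g. local NR, or proteins derived
                                        # from Arabidopsis coding sequences
            "-evalue", "1e-5",
            "-max_target_seqs", "1",
            "-outfmt", "6",             # tab-separated hit table
            "-out", out_tsv,
        ],
        check=True,
    )

best_hits("unigenes.fasta", "nr", "unigenes_vs_nr.tsv")
best_hits("unigenes.fasta", "arabidopsis_proteins", "unigenes_vs_athaliana.tsv")
```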
Markup language that provides a representation of PDB data in XML format. The description of this format is provided in the XML schema of the PDB Exchange Data Dictionary. This schema is produced by direct translation of the mmCIF-format PDB Exchange Data Dictionary. Other data dictionaries used by the PDB have been electronically translated into XML/XSD schemas, and these are also available.
* PDBML data files are provided in three forms:
** fully marked-up files
** files without atom records
** files with a more space-efficient encoding of atom records
* Data files in PDBML format can be downloaded from the RCSB PDB website or by FTP.
* Software tools for manipulating PDB data in XML format are available.
Proper citation: Protein Data Bank Markup Language (RRID:SCR_005085)
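As a small worked example of consuming PDBML, the sketch below fetches one entry in XML form and counts its atom records with Python's standard parser; the files.rcsb.org URL pattern is an assumption about the current RCSB download service, and the XML namespace is matched generically rather than hard-coded to a schema version.

```python
# Hedged sketch: fetch a PDB entry as PDBML and count atom_site records.
# The download URL pattern is an assumption; check the RCSB PDB documentation.
import urllib.request
import xml.etree.ElementTree as ET

pdb_id = "4HHB"  # example entry (hemoglobin)
url = f"https://files.rcsb.org/download/{pdb_id}.xml"

with urllib.request.urlopen(url) as resp:
    root = ET.parse(resp).getroot()

# PDBML elements live in a versioned PDBx namespace; match on the local tag
# name so the sketch does not depend on a particular schema version.
atoms = [el for el in root.iter() if el.tag.endswith("}atom_site")]
print(f"{pdb_id}: {len(atoms)} atom_site records")
```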
Open platform for analyzing and sharing neuroimaging data from human brain imaging research studies. A Brain Imaging Data Structure (BIDS) compliant database, formerly known as OpenfMRI. Data archive for magnetic resonance imaging data and a platform for sharing MRI, MEG, EEG, iEEG, and ECoG data.
Proper citation: OpenNeuro (RRID:SCR_005031)
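Because OpenNeuro datasets follow BIDS, a downloaded copy can be queried programmatically; the sketch below uses the pybids package (an assumption of this example, not something OpenNeuro requires) with a placeholder local path and subject label.

```python
# Hedged sketch: query a locally downloaded BIDS dataset with pybids.
# The dataset path, subject label, and entity values are placeholders.
from bids import BIDSLayout

layout = BIDSLayout("/data/ds000001")  # hypothetical local copy of a dataset

# List BOLD runs for subject 01; entity names follow the BIDS specification.
bold_files = layout.get(subject="01", suffix="bold",
                        extension=".nii.gz", return_type="filename")
for path in bold_files:
    print(path)
```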
Kepler is a software application for analyzing and modeling scientific data. Using Kepler's graphical interface and components, scientists with little background in computer science can create executable models, called scientific workflows, for flexibly accessing scientific data (streaming sensor data, medical and satellite images, simulation output, observational data, etc.) and executing complex analyses on this data. Kepler is developed by a cross-project collaboration led by the Kepler/CORE team. The software builds upon the mature Ptolemy II framework, developed at the University of California, Berkeley. Ptolemy II is a software framework designed for modeling, design, and simulation of concurrent, real-time, embedded systems. The Kepler Project is dedicated to furthering and supporting the capabilities, use, and awareness of the free and open source scientific workflow application Kepler. Kepler is designed to help scientists, analysts, and computer programmers create, execute, and share models and analyses across a broad range of scientific and engineering disciplines. Kepler can operate on data stored in a variety of formats, locally and over the internet, and is an effective environment for integrating disparate software components, such as merging R scripts with compiled C code, or facilitating remote, distributed execution of models. Using Kepler's graphical user interface, users simply select and then connect pertinent analytical components and data sources to create a scientific workflow, an executable representation of the steps required to generate results. The Kepler software helps users share and reuse data, workflows, and components developed by the scientific community to address common needs. Kepler is a Java-based application that is maintained for the Windows, OSX, and Linux operating systems. The Kepler Project supports the official code-base for Kepler development, as well as provides materials and mechanisms for learning how to use Kepler, sharing experiences with other workflow developers, reporting bugs, suggesting enhancements, etc. The Kepler Project Leadership Team works to assure the long-term technical and financial viability of Kepler by making strategic decisions on behalf of the Kepler user community, as well as providing an official and durable point of contact to articulate and represent the interests of the Kepler Project and the Kepler software application. Details about how to get more involved with the Kepler Project can be found in the developer section of this website.
Proper citation: Kepler (RRID:SCR_005252)
Can't find your Tool?
We recommend that you first click next to the search bar to check some helpful tips on searching and refine your search. Alternatively, you can register your tool with the SciCrunch Registry by adding a little information to a web form; logging in will enable you to create a provisional RRID, but it is not required to submit.