Searching the RRID Resource Information Network


SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.


Page 13: showing results 241-260 of 469.

http://www.ldeo.columbia.edu/core-repository

Core repository housing one of the world's most unique and important collections of scientific samples from the deep sea. Sediment cores from every major ocean and sea are archived at the Core Repository. The collection contains approximately 72,000 meters of core composed of 9,700 piston cores; 7,000 trigger weight cores; and 2,000 other cores such as box, kasten, and large diameter gravity cores. They also hold 4,000 dredge and grab samples, including a large collection of manganese nodules, many of which were recovered by submersibles. Over 100,000 residues are stored and are available for sampling where core material is expended. In addition to physical samples, a database of the Lamont core collection has been maintained for nearly 50 years and contains information on the geographic location of each collection site, core length, mineralogy and paleontology, lithology, and structure, and more recently, the full text of megascopic descriptions. Samples from cores and dredges, as well as descriptions of cores and dredges (including digital images and other cruise information), are provided to scientific investigators upon request. Materials for educational purposes and museum displays may also be made available in limited quantities when requests are adequately justified. Various services and data analyses, including core archiving, carbonate analyses, grain size analyses, RGB line scan imaging, GRAPE, P-wave velocity, and magnetic susceptibility runs, can also be provided at cost. The Repository operates a number of labs and instruments dedicated to making fundamental measurements on material entering the repository, including several non-destructive methods. Instruments for conducting and/or assisting with analyses of deep-sea sediments include a GeoTek Multi-Sensor Core Logger, a UIC coulometer, a Micromeritics SediGraph, vane shear, X-radiograph, sonic sifter, and freeze dryer, as well as a variety of microscopes, sieves, and sampling tools. They also make these instruments available to the scientific community for conducting analyses of deep-sea sediments. If you are interested in borrowing any field equipment, please contact the Repository Curator.

Proper citation: Lamont-Doherty Core Repository (RRID:SCR_002216) Copy   


http://lrc.geo.umn.edu/laccore/

Archive of almost 20,000 meters of high-quality sediment cores from large and small expeditions to lakes all around the world. LacCore advocates for, coordinates, and facilitates core-based research on Earth's continents through collaborative support for logistics, fieldwork and laboratory work, and data and sample curation and dissemination. They provide a wide variety of fee-based analytical services, as well as training and instrument time for lab visitors. They also develop Standard Operating Procedures (SOPs) for local training and adoption by individuals at other labs.

Proper citation: National Lacustrine Core Facility (RRID:SCR_002215) Copy   


  • RRID:SCR_003169

    This resource has 10+ mentions.

http://www.broad.mit.edu/annotation/fungi/fgi/

Produces and analyzes sequence data from fungal organisms that are important to medicine, agriculture and industry. The FGI is a partnership between the Broad Institute and the wider fungal research community, with the selection of target genomes governed by a steering committee of fungal scientists. Organisms are selected for sequencing as part of a cohesive strategy that considers the value of data from each organism, given their role in basic research, health, agriculture and industry, as well as their value in comparative genomics.

Proper citation: Fungal Genome Initiative (RRID:SCR_003169) Copy   


http://www.sgn.cornell.edu/bulk/input.pl?modeunigene

Allows users to download Unigene or BAC information using a list of identifiers, or complete datasets via FTP. THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 16, 2025.

Proper citation: Sol Genomics Network - Bulk download (RRID:SCR_007161) Copy   


http://www.nber.org/papers/h0038

A dataset to advance the study of life-cycle interactions of biomedical and socioeconomic factors in the aging process. The EI project has assembled a variety of large datasets covering the life histories of approximately 39,616 white male volunteers (drawn from a random sample of 331 companies) who served in the Union Army (UA), and of about 6,000 African-American veterans from 51 randomly selected United States Colored Troops companies (USCT). Their military records were linked to pension and medical records that detailed the soldiers' health status and socioeconomic and family characteristics. Each soldier was searched for in the US decennial census for the years in which they were most likely to be found alive (1850, 1860, 1880, 1900, 1910). In addition, a sample consisting of 70,000 men examined for service in the Union Army between September 1864 and April 1865 has been assembled and linked only to census records. These records will be useful for life-cycle comparisons of those accepted and rejected for service. Military Data: The military service and wartime medical histories of the UA and USCT men were collected from the Union Army and United States Colored Troops military service records, carded medical records, and other wartime documents. Pension Data: Wherever possible, the UA and USCT samples have been linked to pension records, including surgeon's certificates. About 70% of men in the Union Army sample have a pension. These records provide the bulk of the socioeconomic and demographic information on these men from the late 1800s through the early 1900s, including family structure and employment information. In addition, the surgeon's certificates provide rich medical histories, with an average of 5 examinations per linked recruit for the UA, and about 2.5 exams per USCT recruit. Census Data: Both early and late-age familial and socioeconomic information is collected from the manuscript schedules of the federal censuses of 1850, 1860, 1870 (incomplete), 1880, 1900, and 1910. Data Availability: All of the datasets (Military Union Army; linked Census; Surgeon's Certificates; Examination Records; and supporting ecological and environmental variables) are publicly available from ICPSR. In addition, copies on CD-ROM may be obtained from the CPE, which also maintains an interactive Internet Data Archive and Documentation Library, which can be accessed on the Project Website.
* Dates of Study: 1850-1910
* Study Features: Longitudinal, Minority Oversamples
* Sample Size:
** Union Army: 35,747
** Colored Troops: 6,187
** Examination Sample: 70,800
ICPSR Link: http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/06836

Proper citation: Early Indicators of Later Work Levels Disease and Death (EI) - Union Army Samples Public Health and Ecological Datasets (RRID:SCR_008921) Copy   


https://dna.dbi.udel.edu/

Provides genomics and molecular biology services for University of Delaware research groups and outside users. Supports genomic research through established expertise with genomics technologies.

Proper citation: University of Delaware Sequencing and Genotyping Center Core Facility (RRID:SCR_012230) Copy   


http://www.scienceexchange.com/facilities/genomics-core-facility-brown

Provides genomics and proteomics equipment to researchers at Brown University and to the entire Rhode Island research community, as well as assistance with experimental design, troubleshooting, and data analysis. Offers Affymetrix microarray and Illumina next-generation sequencing services to the academic community and external customers.

Proper citation: Brown University Genomics Core Facility (RRID:SCR_012217) Copy   


http://www.scienceexchange.com/facilities/nnin-nano-research-facility-wustl

THIS RESOURCE IS NO LONGER IN SERVICE. Documented on May 15, 2024. The Nano Research Facility (NRF) at Washington University in St. Louis is a NNIN nodal facility supported by the National Science Foundation. It cultivates an open, shared research and education environment that brings researchers across disciplines together, particularly in the emerging area of nanomaterials with applications in the energy, environment, and biomedical fields. The mission is to be a resource to the scientific and technical community for the advancement of nanoscience and nanotechnology in a safe and environmentally benign manner. NRF includes a micro- and nano-fabrication lab (clean room), surface characterization lab, particle technology lab, and imaging lab with a focus on bio-imaging. NRF provides unique technical expertise in: knowledge-based synthesis of nanostructured materials; particle instrumentation tools for toxicity studies; non-invasive imaging modalities for biological applications; clean energy applications; energy and environmental nanotechnology; and environmental health and safety. As a member of the National Nanotechnology Infrastructure Network (NNIN), supported by the National Science Foundation, NRF is available to both academic and industrial users nationwide and across the globe.

Proper citation: WUSTL NNIN - Nano Research Facility (RRID:SCR_012674) Copy   


https://github.com/pyranges/ncls

Software library implementing the nested containment list data structure for interval overlap queries, similar to an interval tree. It is a static data structure that is fast for both construction and lookups. (A brief usage sketch follows the citation below.)

Proper citation: Nested containment list (RRID:SCR_027849) Copy   
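
Below is a minimal usage sketch in Python, assuming the NCLS constructor and find_overlap method described in the project README at https://github.com/pyranges/ncls; treat the exact names as assumptions and check the repository for the current API.

    # Build a static nested containment list from start/end/id arrays and query
    # it for overlapping intervals (hypothetical toy data).
    import numpy as np
    from ncls import NCLS

    starts = np.array([5, 100, 3000], dtype=np.int64)
    ends = np.array([10, 200, 4000], dtype=np.int64)
    ids = np.array([0, 1, 2], dtype=np.int64)

    ncls = NCLS(starts, ends, ids)  # construction is a one-time cost

    # find_overlap yields (start, end, id) tuples for intervals overlapping [0, 120)
    for start, end, interval_id in ncls.find_overlap(0, 120):
        print(start, end, interval_id)

Because the structure is static, it is typically built once (for example, per chromosome) and then queried many times.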


  • RRID:SCR_027742

https://github.com/McGranahanLab/TcellExTRECT

Software R package for calculating T cell fractions from whole-exome sequencing (WES) data aligned to the hg19 or hg38 genome.

Proper citation: T Cell ExTRECT (RRID:SCR_027742) Copy   


  • RRID:SCR_027745

    This resource has 1+ mentions.

https://github.com/vanallenlab/comut

Software Python library for creating comutation plots to visualize genomic and phenotypic information. (A brief usage sketch follows the citation below.)

Proper citation: CoMUT (RRID:SCR_027745) Copy   
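
A minimal plotting sketch follows, assuming the CoMut class, the add_categorical_data and plot_comut methods, and the 'sample'/'category'/'value' column convention from the project README at https://github.com/vanallenlab/comut; these names are assumptions, so consult the repository for the current API.

    # Build a small comutation plot from a tidy dataframe of per-sample
    # annotations (hypothetical toy data).
    import pandas as pd
    from comut import comut

    mutation_data = pd.DataFrame({
        'sample':   ['S1', 'S1', 'S2', 'S3'],
        'category': ['TP53', 'KRAS', 'TP53', 'KRAS'],
        'value':    ['Missense', 'Nonsense', 'Missense', 'Amplification'],
    })

    plot = comut.CoMut()
    plot.add_categorical_data(mutation_data, name='Mutation type')
    plot.plot_comut(figsize=(6, 2))
    plot.figure.savefig('comut_example.png', dpi=300, bbox_inches='tight')

Additional categorical tracks (for example, clinical phenotypes) can be layered on with further add_categorical_data calls before plotting.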


  • RRID:SCR_008665

    This resource has 10+ mentions.

http://wiki.c2b2.columbia.edu/honiglab_public/index.php/Software:Jackal

Jackal is a collection of programs designed for the modeling and analysis of protein structures. Its core program is a versatile homology modeling package. It contains twelve individual programs, each with its own function.

Proper citation: Jackal (RRID:SCR_008665) Copy   


http://www.poissonboltzmann.org/apbs/

APBS is a software package for modeling biomolecular solvation through solution of the Poisson-Boltzmann equation (PBE), one of the most popular continuum models for describing electrostatic interactions between molecular solutes in salty, aqueous media. APBS was designed to evaluate electrostatic properties efficiently for biomolecular simulations across a wide range of length scales, enabling the investigation of molecules with tens to millions of atoms. It also provides implicit solvent models of nonpolar solvation that accurately account for both repulsive and attractive solute-solvent interactions. APBS uses FEtk (the Finite Element ToolKit) to solve the Poisson-Boltzmann equation numerically. FEtk is a portable collection of finite element modeling class libraries written in an object-oriented version of C. It is designed to solve general coupled systems of nonlinear partial differential equations using adaptive finite element methods, inexact Newton methods, and algebraic multilevel methods. (A generic form of the PBE is sketched after the citation below.)

Proper citation: Adaptive Poisson-Boltzmann Solver (RRID:SCR_008387) Copy   
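
For orientation, one common dimensionless form of the nonlinear Poisson-Boltzmann equation that such solvers discretize can be written as (standard textbook notation, not copied from the APBS documentation):

    \nabla \cdot \left[ \epsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right] - \bar{\kappa}^2(\mathbf{r}) \, \sinh\!\left( \phi(\mathbf{r}) \right) = -4\pi \, \rho^{f}(\mathbf{r})

where \phi is the dimensionless electrostatic potential, \epsilon(\mathbf{r}) the position-dependent dielectric coefficient, \bar{\kappa}^2(\mathbf{r}) the ion-accessibility/screening coefficient, and \rho^{f}(\mathbf{r}) the fixed charge density of the solute. Approximating \sinh(\phi) \approx \phi gives the linearized equation commonly used for weakly charged systems.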


http://rankprop.gs.washington.edu/svm-fold/

This web server makes predictions of family, superfamily and fold level classifications of proteins based on the Structural Classification of Proteins (SCOP) hierarchy using the Support Vector Machine (SVM) learning algorithm. SVM-FOLD detects subtle protein sequence similarities by learning from all available annotated proteins, as well as utilizing potential hits as identified by PSI-BLAST. Predictions of classes of proteins that do not have any known example with a significant pairwise PSI-BLAST E-value can still be found using SVMs.

Proper citation: SVM-fold: Protein Fold Prediction (RRID:SCR_006834) Copy   


  • RRID:SCR_007009

    This resource has 1+ mentions.

http://www.softpedia.com/get/Science-CAD/DynGO.shtml

DynGO is a client-server application that provides several advanced functionalities in addition to the standard browsing capability. DynGO allows users to conduct batch retrieval of GO annotations for a list of genes and gene products, and semantic retrieval of genes and gene products sharing similar GO annotations (semantic retrieval requires more disk space and memory). The results are shown in an association tree organized according to GO hierarchies and supported with many dynamic display options, such as sorting tree nodes or changing the orientation of the tree. For GO curators and frequent GO users, DynGO provides fast and convenient access to GO annotation data. DynGO is generally applicable to any dataset where the records are annotated with GO terms, as illustrated by two examples. Requirements: Java. Platforms: Windows, Linux, and Unix compatible.

Proper citation: DynGO (RRID:SCR_007009) Copy   


  • RRID:SCR_005497

    This resource has 100+ mentions.

http://research.cs.wisc.edu/wham/

THIS RESOURCE IS NO LONGER IN SERVICE. Documented on February 28, 2023. High-throughput sequence alignment tool that aligns short DNA sequences (reads) to the whole human genome at a rate of over 1500 million 60 bp reads per hour, one to two orders of magnitude faster than the leading state-of-the-art techniques. Feature list for the current version (v 0.1.5) of WHAM:
* Supports paired-end reads
* Supports up to 5 errors
* Supports alignments with gaps
* Supports quality scores for filtering invalid alignments and sorting valid alignments
* Finds all valid alignments
* Supports multi-threading
* Supports rich reporting modes
* Supports SAM format output

Proper citation: WHAM (RRID:SCR_005497) Copy   


http://www.nescent.org/

The National Evolutionary Synthesis Center (NESCent) is a nonprofit science center dedicated to cross-disciplinary research in evolution. NESCent promotes the synthesis of information, concepts, and knowledge to address significant, emerging, or novel questions in evolutionary science and its applications. NESCent achieves this by supporting research and education across disciplinary, institutional, geographic, and demographic boundaries. Synthetic research in evolutionary science takes many forms but includes integrating novel data sets and models to address important problems within a discipline, developing new analytical approaches and tools, and combining methods and perspectives from multiple disciplines to answer and even create new fundamental scientific questions. NESCent facilitates such synthetic research by providing an environment for fertile interactions among scientists. Its Science and Synthesis program sponsors postdoctoral fellows and sabbatical scholars as resident scientists, and two kinds of meetings: working groups and catalysis meetings. Catalysis meetings provide a novel mechanism for bringing together diverse research communities and cultures to identify common interests, while working groups provide an opportunity for scientists to work together intensively on fundamental synthetic questions over a several-year period. These activities are community driven through an application process and evaluated by an external advisory board. The Informatics program provides state-of-the-art informatics tools to visiting and in-house scientists and aims to take the lead in assembling novel databases and developing new analytical tools for evolutionary biology. It also sponsors a major initiative to provide a digital data repository for work in evolutionary biology. NESCent's Education and Outreach group communicates the results of evolutionary biology research to the general public and scientific community, provides outreach to groups that are underrepresented in evolutionary biology, and works to improve evolution education.

Proper citation: NESCent - National Evolutionary Synthesis Center (RRID:SCR_005911) Copy   


  • RRID:SCR_001875

    This resource has 1+ mentions.

http://www.agcol.arizona.edu/software/tcw/

Software package for assembling, annotating, querying, and comparing transcript and expression level data. It consists of two parts:
* singleTCW (sTCW): single transcript sets or assemblies; annotation; differential expression (EdgeR, DEGSeq, DESeq, GoSeq)
* multiTCW (mTCW): comparison of multiple transcript sets; ortholog grouping (e.g., OrthoMCL)
It has been tested on Linux and uses Java, MySQL, and optionally R.

Proper citation: TCW (RRID:SCR_001875) Copy   


  • RRID:SCR_013719

    This resource has 1+ mentions.

http://www.internano.org/

Database and knowledge base of techniques for processing nanoscale materials, devices, and structures that includes step-by-step descriptions, images, notes on methodology and environmental variables, and associated references and patent information. The purpose of the Process Database is to facilitate the sharing of appropriate process knowledge across laboratories. The processes included here have been previously published or patented.

Proper citation: InterNano Process Database (RRID:SCR_013719) Copy   


  • RRID:SCR_004618

    This resource has 5000+ mentions.

http://www.arabidopsis.org

Database of genetic and molecular biology data for the model higher plant Arabidopsis thaliana. Data available includes the complete genome sequence along with gene structure, gene product information, metabolism, gene expression, DNA and seed stocks, genome maps, genetic and physical markers, publications, and information about the Arabidopsis research community. Gene product function data is updated every two weeks from the latest published research literature and community data submissions. Gene structures are updated 1-2 times per year using computational and manual methods as well as community submissions of new and updated genes. TAIR also provides extensive linkouts from data pages to other Arabidopsis resources. The data can be searched, viewed and analyzed. Datasets can also be downloaded. Pages on news, job postings, conference announcements, Arabidopsis lab protocols, and useful links are provided.

Proper citation: TAIR (RRID:SCR_004618) Copy   



Can't find your Tool?

We recommend that you first check the search tips next to the search bar and refine your search. Alternatively, you can register your tool with the SciCrunch Registry by adding a little information to a web form; logging in lets you create a provisional RRID, but it is not required to submit.

Can't find the RRID you're searching for?
  1. RRID Portal Resources

    Welcome to the RRID Resources search. From here you can search through a compilation of resources used by RRID and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that RRID has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on RRID then you can log in from here to get additional features in RRID such as Collections, Saved Searches, and managing Resources.

  4. Searching

    This is the search term being executed; you can type in anything you want to search for. Some tips to help with searching (a combined example follows this list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
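
    For example, a hypothetical combined query using the syntax above:

        "cerebellum" +mouse -CA1

    would match records containing the exact phrase in quotes, require the term mouse, and exclude any record containing CA1.
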
  5. Save Your Search

    You can save any searches you perform for quick access later from here.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside your term to help get the data you are looking for.

  7. Collections

    If you are logged into RRID you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Sources

    Here are the sources that were queried against in your search that you can investigate further.

  9. Categories

    Here are the categories present within RRID that you can filter your data on.

  10. Subcategories

    Here are the subcategories present within this category that you can filter your data on.

  11. Further Questions

    If you have any further questions, please check out our FAQs page to ask questions and see our tutorials. Click this button to view this tutorial again.
