Searching the RRID Resource Information Network

SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.


On page 11, showing results 201-220 of 469.

http://www.scienceexchange.com/facilities/nnin-nano-research-facility-wustl

THIS RESOURCE IS NO LONGER IN SERVICE. Documented on May 15, 2024. Nano Research Facility (NRF) at Washington University in St. Louis is a NNIN nodal facility supported by the National Science Foundation. It cultivates an open, shared research and education environment that brings researchers across disciplines together, particularly in the emerging area of nanomaterials with applications in the energy, environment, and biomedical fields. The mission is to be a resource to the scientific and technical community for the advancement of nanoscience and nanotechnology in a safe and environmentally benign manner. NRF includes a micro- and nano-fabrication lab (clean room), surface characterization lab, particle technology lab, and imaging lab with a focus on bio-imaging. NRF provides unique technical expertise in: knowledge-based synthesis of nanostructured materials; particle instrumentation tools for toxicity studies; non-invasive imaging modalities for biological applications; clean energy applications; energy and environmental nanotechnology; and environmental health and safety. As a member of the National Nanotechnology Infrastructure Network (NNIN), supported by the National Science Foundation, NRF is available to both academic and industrial users nationwide and across the globe.

Proper citation: WUSTL NNIN - Nano Research Facility (RRID:SCR_012674)


http://www.nitrc.org/projects/efficient_pt

A MATLAB implementation of efficient permutation testing using matrix completion.
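
As a rough illustration of what such a toolbox computes, here is a plain two-sample permutation test in Python (a hedged sketch only; the matrix-completion speedup that makes the NITRC implementation efficient is not reproduced, and all names and data below are illustrative):

    import numpy as np

    def permutation_test(group_a, group_b, n_permutations=10000, seed=0):
        """Permutation p-value for the difference in group means."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([group_a, group_b])
        n_a = len(group_a)
        observed = np.mean(group_a) - np.mean(group_b)
        exceed = 0
        for _ in range(n_permutations):
            rng.shuffle(pooled)                     # relabel samples at random
            stat = np.mean(pooled[:n_a]) - np.mean(pooled[n_a:])
            if abs(stat) >= abs(observed):
                exceed += 1
        return (exceed + 1) / (n_permutations + 1)  # add-one correction

    rng = np.random.default_rng(1)
    p = permutation_test(rng.normal(0.5, 1, 30), rng.normal(0.0, 1, 30))
    print(f"permutation p-value: {p:.4f}")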

Proper citation: Efficient Permutation Testing (RRID:SCR_014104)


  • RRID:SCR_010715

    This resource has 100+ mentions.

http://trex.biohpc.org/

T-REX is a free, platform-independent online tool that allows for an integrated, rapid, and more robust analysis of T-RFLP data. Despite increasing popularity and improvements in terminal restriction fragment length polymorphism (T-RFLP) and other microbial community fingerprinting techniques, there are still numerous obstacles that hamper the analysis of these datasets. Many steps are required to process raw data into a format ready for analysis and interpretation. These steps can be time-intensive, error-prone, and can introduce unwanted variability into the analysis. Accordingly, we developed T-REX, free, online software for the processing and analysis of T-RFLP data. Analysis of T-RFLP data generated from a multiple-factorial study was performed with T-REX. With this software, we were able to i) label raw data with attributes related to the experimental design of the samples, ii) determine a baseline threshold for identification of true peaks over noise, iii) align terminal restriction fragments (T-RFs) in all samples (i.e., bin T-RFs), iv) construct a two-way data matrix from labeled data and process the matrix in a variety of ways, v) produce several measures of data matrix complexity, including the distribution of variance between main and interaction effects and sample heterogeneity, and vi) analyze a data matrix with the additive main effects and multiplicative interaction (AMMI) model.
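
As a hedged illustration of two of the steps listed above (binning T-RFs and building the two-way matrix), here is a minimal Python/pandas sketch; it is not T-REX itself, and the toy data, column names, and 1-bp binning rule are assumptions made for the example:

    import pandas as pd

    # Toy peak table: sample label, fragment length (bp), and peak area
    peaks = pd.DataFrame({
        "sample": ["S1", "S1", "S2", "S2", "S3"],
        "length": [102.3, 150.1, 101.8, 149.7, 150.4],
        "area":   [1200, 800, 950, 700, 1100],
    })

    # Step iii (align/bin T-RFs): round fragment lengths into integer bins
    peaks["bin"] = peaks["length"].round().astype(int)

    # Step iv (two-way data matrix): samples x binned T-RFs, summing peak areas
    matrix = peaks.pivot_table(index="sample", columns="bin",
                               values="area", aggfunc="sum", fill_value=0)
    print(matrix)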

Proper citation: T-REX (RRID:SCR_010715)


  • RRID:SCR_012776

    This resource has 10+ mentions.

http://www.cravat.us/

A web-based application designed with an easy-to-use interface to facilitate the high-throughput assessment and prioritization of genes and missense alterations important for cancer tumorigenesis.

Proper citation: CRAVAT (RRID:SCR_012776)


  • RRID:SCR_017236

    This resource has 100+ mentions.

http://cisbp.ccbr.utoronto.ca

Software tool providing a catalog of inferred sequence binding preferences. Online library of transcription factors and their DNA binding motifs.

Proper citation: CIS-BP (RRID:SCR_017236)


  • RRID:SCR_005761

    This resource has 1+ mentions.

http://alchemy.sourceforge.net/

ALCHEMY is a genotype calling algorithm for Affymetrix and Illumina products which is not based on clustering methods. Features include explicit handling of reduced heterozygosity due to inbreeding and accurate results with small sample sizes. ALCHEMY is a method for automated calling of diploid genotypes from raw intensity data produced by various high-throughput multiplexed SNP genotyping methods. It has been developed for and tested on Affymetrix GeneChip Arrays, Illumina GoldenGate, and Illumina Infinium based assays. Primary motivations for ALCHEMY's development were the lack of available genotype calling methods which can perform well in the absence of heterozygous samples (due to panels of inbred lines being genotyped) or provide accurate calls with small sample batches. ALCHEMY differs from other genotype calling methods in that genotype inference is based on a parametric Bayesian model of the raw intensity data rather than a generalized clustering approach, and the model incorporates population genetic principles such as Hardy-Weinberg equilibrium adjusted for inbreeding levels. ALCHEMY can simultaneously estimate individual sample inbreeding coefficients from the data and use them to improve statistical inference of diploid genotypes at individual SNPs. The main documentation for ALCHEMY is maintained on the sourceforge-hosted MediaWiki system. Features:
* Population genetic model based SNP genotype calling
* Simultaneous estimation of per-sample inbreeding coefficients, allele frequencies, and genotypes
* Bayesian model provides posterior probabilities of genotype correctness as quality measures
* Growing number of scripts and supporting programs for validation of genotypes against control data and output reformatting needs
* Multithreaded program for parallel execution on multi-CPU/core systems
* Non-clustering based methods can handle small sample sets for empirical optimization of sample preparation techniques and accurate calling of SNPs missing genotype classes
ALCHEMY is written in C and developed on the GNU/Linux platform. It should compile on any current GNU/Linux distribution with the development packages for the GNU Scientific Library (gsl) and other development packages for standard system libraries. It may also compile and run on Mac OS X if gsl is installed.
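
The description above mentions genotype priors based on Hardy-Weinberg equilibrium adjusted for inbreeding. A standard population-genetic form of that adjustment, for a biallelic SNP with allele frequencies p and q = 1 - p and inbreeding coefficient F, is shown below (this is the textbook formula, not necessarily ALCHEMY's exact parameterization):

    % Hardy-Weinberg genotype frequencies adjusted for inbreeding coefficient F
    P(\mathrm{AA}) = p^{2} + Fpq, \qquad
    P(\mathrm{Aa}) = 2pq(1 - F), \qquad
    P(\mathrm{aa}) = q^{2} + Fpq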

Proper citation: ALCHEMY (RRID:SCR_005761)


http://www.ldeo.columbia.edu/core-repository

Core repository and one of the world's most unique and important collections of scientific samples from the deep sea. Sediment cores from every major ocean and sea are archived at the Core Repository. The collection contains approximately 72,000 meters of core composed of 9,700 piston cores; 7,000 trigger weight cores; and 2,000 other cores such as box, kasten, and large diameter gravity cores. They also hold 4,000 dredge and grab samples, including a large collection of manganese nodules, many of which were recovered by submersibles. Over 100,000 residues are stored and are available for sampling where core material is expended. In addition to physical samples, a database of the Lamont core collection has been maintained for nearly 50 years and contains information on the geographic location of each collection site, core length, mineralogy and paleontology, lithology, and structure, and more recently, the full text of megascopic descriptions. Samples from cores and dredges, as well as descriptions of cores and dredges (including digital images and other cruise information), are provided to scientific investigators upon request. Materials for educational purposes and museum displays may also be made available in limited quantities when requests are adequately justified. Various services and data analyses, including core archiving, carbonate analyses, grain size analyses, and RGB line scan imaging, GRAPE, P-wave velocity and magnetic susceptibility runs, can also be provided at cost. The Repository operates a number of labs and instruments dedicated to making fundamental measurements on material entering the repository including several non-destructive methods. Instruments for conducting and/or assisting with analyses of deep-sea sediments include a GeoTek Multi-Sensor Core Logger, a UIC coulometer, a Micromeritics sedigraph, Vane Shear, X-radiograph, Sonic Sifter, freeze dryer, as well as a variety of microscopes, sieves, and sampling tools. They also make these instruments available to the scientific community for conducting analyses of deep-sea sediments. If you are interested in borrowing any field equipment, please contact the Repository Curator.

Proper citation: Lamont-Doherty Core Repository (RRID:SCR_002216)


http://lrc.geo.umn.edu/laccore/

Archive of almost 20,000 meters of high quality sediment cores from large and small expeditions to lakes all around the world. LacCore advocates for, coordinates, and facilitates core-based research on Earth's continents through collaborative support for logistics, field and laboratory, and data and sample curation and dissemination. They provide a wide variety of fee-based analytical services, as well as offer training and instrument time to lab visitors. They also develop Standard Operating Procedures (SOPs) for local training and adoption by individuals at other labs.

Proper citation: National Lacustrine Core Facility (RRID:SCR_002215)


http://www.inbre.montana.edu/bioinformatics/functional_genomics/index.html

Core provides instrumentation and support for academic investigators throughout Montana and the Rocky Mountain West. For most instrumentation, the facility provides instruction and supervision followed by independent user access. For those doing Affymetrix microarrays, the facility can also accept RNA samples and provides full-service processing. Assists with experimental planning and grantsmanship phases.

Proper citation: Montana State University Functional Genomics Core Facility (RRID:SCR_009939)


  • RRID:SCR_023223

    This resource has 1+ mentions.

https://github.com/caraweisman/abSENSE

Software to interpret undetected homologs. Method that calculates the probability that a homolog of a given gene would fail to be detected by homology search in a given species, even if the homolog were present and evolving normally.
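
As a hedged sketch of the general idea (not abSENSE's actual code, model, or interface), the calculation can be illustrated in Python by fitting an exponential decay of homology-search bitscores with evolutionary distance and asking how likely the score in a distant lineage is to fall below the detection threshold; all numbers below are made up for illustration:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Bitscores of detected homologs at known evolutionary distances (toy data)
    distances = np.array([0.1, 0.3, 0.6, 1.0, 1.4])
    bitscores = np.array([480, 390, 300, 210, 150])

    def decay(d, a, b):
        return a * np.exp(-b * d)               # expected bitscore vs. distance

    (a, b), _ = curve_fit(decay, distances, bitscores, p0=(500.0, 1.0))

    # Probability of non-detection at distance d_target, assuming roughly
    # normal scatter around the fitted curve and a fixed bitscore threshold
    d_target, threshold = 3.0, 50.0
    sigma = np.std(bitscores - decay(distances, a, b), ddof=1)
    p_undetected = norm.cdf(threshold, loc=decay(d_target, a, b), scale=sigma)
    print(f"P(homolog present but undetected) ~ {p_undetected:.2f}")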

Proper citation: abSENSE (RRID:SCR_023223)


  • RRID:SCR_022975

https://github.com/compbiolabucf/PTNet

Graph-based learning model for protein expression estimation that considers miRNA-mRNA interactions. Estimates protein levels from the miRNA-mRNA interaction network, mRNA expression, and miRNA expression.

Proper citation: PTNet (RRID:SCR_022975)


  • RRID:SCR_023602

    This resource has 1+ mentions.

https://github.com/DeNardoLab/BehaviorDEPOT

Software tool for automated behavioral detection based on markerless pose tracking. Behavioral analysis tool to first compile and clean point-tracking output from DeepLabCut, and then classify behavioral epochs using custom behavior classifiers. Used to detect frame by frame behavior from video time series and can analyze results of common experimental assays, including fear conditioning, decision-making in T-maze, open field, elevated plus maze, and novel object exploration. Calculates kinematic and postural statistics from keypoint tracking data from pose estimation software outputs.
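
As a hedged illustration of this kind of heuristic classifier (a Python sketch, not BehaviorDEPOT's own code), frame-by-frame velocity can be computed from keypoint coordinates and low-velocity runs flagged as candidate freezing epochs; the frame rate, threshold, and synthetic data are assumptions for the example:

    import numpy as np

    def freezing_frames(x, y, fps=30, velocity_threshold=0.5, min_duration=15):
        """Flag frames whose smoothed keypoint velocity stays below the
        threshold for at least `min_duration` consecutive frames."""
        velocity = np.hypot(np.diff(x), np.diff(y)) * fps              # px/s
        velocity = np.convolve(velocity, np.ones(5) / 5, mode="same")  # smooth
        below = velocity < velocity_threshold
        flags = np.zeros_like(below)
        run_start = None
        for i, is_below in enumerate(below):
            if is_below and run_start is None:
                run_start = i
            elif not is_below and run_start is not None:
                if i - run_start >= min_duration:
                    flags[run_start:i] = True
                run_start = None
        if run_start is not None and len(below) - run_start >= min_duration:
            flags[run_start:] = True
        return flags

    # Synthetic track with a stationary bout between frames 100 and 200
    t = np.arange(300)
    x = np.where((t > 100) & (t < 200), 70.0, 50.0 + 0.2 * t)
    y = np.full_like(x, 20.0)
    print(f"candidate freezing frames: {int(freezing_frames(x, y).sum())}")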

Proper citation: BehaviorDEPOT (RRID:SCR_023602)


https://dna.dbi.udel.edu/

Provides genomics and molecular biology services for University of Delaware research groups and outside users. Supports genomic research through established expertise with genomics technologies.

Proper citation: University of Delaware Sequencing and Genotyping Center Core Facility (RRID:SCR_012230)


http://www.scienceexchange.com/facilities/genomics-core-facility-brown

Provides genomics and proteomics equipment to researchers at Brown University and to the entire Rhode Island research community, as well as assistance with experimental design, troubleshooting, and data analysis. Offers Affymetrix microarray and Illumina next-generation sequencing services to the academic community and external customers.

Proper citation: Brown University Genomics Core Facility (RRID:SCR_012217)


  • RRID:SCR_017452

    This resource has 1+ mentions.

https://pynwb.readthedocs.io/en/latest/

Software Python package for working with neurophysiology data stored in Neurodata Without Borders (NWB) files. Provides an API allowing users to read and create NWB-formatted HDF5 files. Developed in support of the NWB project with the aim of spreading a standardized data format for cellular-based neurophysiology information.
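
A minimal sketch of that read/write API, assuming PyNWB is installed; the file name and session metadata below are placeholders:

    from datetime import datetime, timezone

    from pynwb import NWBFile, NWBHDF5IO

    # Create an in-memory NWB file with placeholder metadata
    nwbfile = NWBFile(
        session_description="example session",
        identifier="EXAMPLE_0001",
        session_start_time=datetime.now(timezone.utc),
    )

    # Write it out as an NWB-formatted HDF5 file
    with NWBHDF5IO("example.nwb", mode="w") as io:
        io.write(nwbfile)

    # Read it back
    with NWBHDF5IO("example.nwb", mode="r") as io:
        nwbfile_in = io.read()
        print(nwbfile_in.session_description)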

Proper citation: PyNWB (RRID:SCR_017452)


  • RRID:SCR_021946

    This resource has 500+ mentions.

https://github.com/sqjin/CellChat

Software R toolkit for inference, visualization, and analysis of cell-cell communication from single-cell data. Quantitatively infers and analyzes intercellular communication networks from single-cell RNA-sequencing data. Predicts major signaling inputs and outputs for cells and how those cells and signals coordinate for functions using network analysis and pattern recognition approaches. Classifies signaling pathways and delineates conserved and context-specific pathways across different datasets.

Proper citation: CellChat (RRID:SCR_021946)


  • RRID:SCR_022523

    This resource has 1+ mentions.

https://CRAN.R-project.org/package=simplePHENOTYPES

Software R package that simulates pleiotropy, partial pleiotropy, and spurious pleiotropy in wide range of genetic architectures, including additive, dominance and epistatic models. Used to simulate multiple traits controlled by loci with varying degrees of pleiotropy.

Proper citation: simplePHENOTYPES (RRID:SCR_022523)


  • RRID:SCR_022976

    This resource has 1+ mentions.

https://github.com/compbiolabucf/omicsGAN

Software generative adversarial network to integrate two omics datasets and their interaction network, generating synthetic data corresponding to each omics profile that can result in better phenotype prediction. Used to capture information from the interaction network as well as the two omics datasets and fuse them to generate synthetic data with better predictive signal.

Proper citation: OmicsGAN (RRID:SCR_022976)


  • RRID:SCR_023080

    This resource has 1+ mentions.

https://github.com/plaisier-lab/sygnal

Software pipeline to integrate correlative, causal and mechanistic inference approaches into unified framework that systematically infers causal flow of information from mutations to TFs and miRNAs to perturbed gene expression patterns across patients. Used to decipher transcriptional regulatory networks from multi-omic and clinical patient data. Applicable for integrating genomic and transcriptomic measurements from human cohorts.

Proper citation: SYGNAL (RRID:SCR_023080)


https://yeatmanlab.github.io/pyAFQ/

Software package focused on automated delineation of major fiber tracts in individual human brains and quantification of tissue properties within the tracts. Software for automated processing and analysis of diffusion MRI data. Automates tractometry.

Proper citation: Automated Fiber Quantification in Python (RRID:SCR_023366)



Can't find your tool?

We recommend that you first check the helpful search tips next to the search bar and refine your search. Alternatively, you can register your tool with the SciCrunch Registry by adding a little information to a web form; logging in will let you create a provisional RRID, but it is not required to submit.

Can't find the RRID you're searching for?
  1. RRID Portal Resources

    Welcome to the RRID Resources search. From here you can search through a compilation of resources used by RRID and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that RRID has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on RRID then you can log in from here to get additional features in RRID such as Collections, Saved Searches, and managing Resources.

  4. Searching

    This is the search term being executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. You can manually add AND and OR between terms to change how we combine words in the search
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    You can save any searches you perform here for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside your term to help get the data you are looking for.

  7. Collections

    If you are logged into RRID you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Sources

    Here are the sources that were queried in your search; you can investigate them further.

  9. Categories

    Here are the categories present within RRID that you can filter your data on.

  10. Subcategories

    Here are the subcategories present within this category that you can filter your data on.

  11. Further Questions

    If you have any further questions, please check out our FAQ page to ask questions and see our tutorials. Click this button to view this tutorial again.
