Database on the sequence of the euchromatic genome of Drosophila melanogaster. In addition to genomic sequencing, the BDGP is 1) producing gene disruptions using P element-mediated mutagenesis on a scale unprecedented in metazoans; 2) characterizing the sequence and expression of cDNAs; and 3) developing informatics tools that support the experimental process, identify features of DNA sequence, and present up-to-date information about the annotated sequence to the research community.
Resources:
* Universal Proteomics Resource: search for clones for expression and tissue culture
* Materials: request genomic or cDNA clones, library filters, or fly stocks
* Download: sequence data sets and annotations in FASTA or XML format by HTTP or FTP
* Publications: browse or download BDGP papers
* Methods: BDGP laboratory protocols and vector maps
* Analysis Tools: search sequences for CRMs, promoters, splice sites, and gene predictions
* Apollo: genome annotation viewer and editor
September 15, 2009: Illumina RNA-Seq data from 30 developmental time points of D. melanogaster have been submitted to the Short Read Archive at NCBI as part of the modENCODE project. The data set currently contains 2.2 billion single-end and paired-end reads and over 201 billion base pairs.
Proper citation: Berkeley Drosophila Genome Project (RRID:SCR_013094) Copy
Open database of polygenic scores and the metadata required for their accurate application and evaluation, supporting reproducibility and systematic evaluation of published scores.
Proper citation: Polygenic Score Catalog (RRID:SCR_023558) Copy
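A polygenic score of the kind catalogued here is applied as a weighted sum: each variant's effect weight multiplied by the individual's allele dosage. The sketch below illustrates that arithmetic only; the variant IDs, weights, and missing-data policy are invented for illustration, not taken from any PGS Catalog scoring file.

```python
# Minimal sketch of applying a polygenic score. Real scoring files from the
# catalog list an effect allele and effect weight per variant; the values
# below are made up for illustration.

def polygenic_score(weights, dosages):
    """weights: {variant_id: effect_weight}; dosages: {variant_id: 0/1/2}.
    Variants with no dosage are skipped (one simple missing-data policy)."""
    return sum(w * dosages[v] for v, w in weights.items() if v in dosages)

weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(weights, dosages))  # 0.12*2 - 0.05*1 + 0.30*0 = ~0.19
```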
A custom genome browser which provides detailed answers to questions on the haplotype diversity and phylogenetic origin of the genetic variation underlying any genomic region of most laboratory strains of mice (both classical and wild-derived). Users can select a region of the genome and a set of laboratory strains and/or wild-caught mice. The region is selected by specifying the start (e.g., 31200000, 31200K, or 31.2M) and end of the interval and the chromosome (i.e., an autosome number or the X chromosome). Samples can be selected by name or by entire set. Data sets include information on subspecific origin, heterozygosity regions, and haplotype coloring, among others.
Proper citation: Mouse Phylogeny Viewer (RRID:SCR_014071) Copy
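The position shorthand the viewer accepts (31200000, 31200K, 31.2M) can be sketched as a small parser. This is not the viewer's own code, just an illustration of the convention described above.

```python
# Illustrative parser for genomic position shorthand such as
# "31200000", "31200K", or "31.2M" (K = thousand, M = million).

def parse_position(text):
    text = text.strip().upper()
    multipliers = {"K": 1_000, "M": 1_000_000}
    if text and text[-1] in multipliers:
        # round() avoids floating-point truncation (31.2 * 1e6 is inexact)
        return round(float(text[:-1]) * multipliers[text[-1]])
    return int(text)

for s in ("31200000", "31200K", "31.2M"):
    print(s, "->", parse_position(s))  # all three yield 31200000
```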
https://www.clinicalgenome.org
Genomics knowledgebase for clinical relevance of genes and variants for use in research. ClinGen's primary function is to store and share information for the benefit of the scientific community. Laboratory scientists, clinicians, and patients can share and access data.
Proper citation: ClinGen (RRID:SCR_014968) Copy
https://repository.niddk.nih.gov/study/21
Data and biological samples were collected by this consortium organizing international efforts to identify genes that determine an individual's risk of type 1 diabetes. It originally focused on recruiting families with at least two siblings (brothers and/or sisters) who have type 1 diabetes (affected sibling pair or ASP families). The T1DGC completed enrollment for these families in August 2009. It completed enrollment of trios (father, mother, and a child with type 1 diabetes), as well as cases (people with type 1 diabetes) and controls (people with no history of type 1 diabetes) from populations with a low prevalence of this disease in January 2010. T1DGC data and samples: phenotypic and genotypic data as well as biological samples (DNA, serum, and plasma) for T1DGC participants have been deposited in the NIDDK Central Repositories for future research.
Proper citation: Type 1 Diabetes Genetics Consortium (RRID:SCR_001557) Copy
Community standard for pathway data sharing. Standard language that aims to enable integration, exchange, visualization, and analysis of biological pathway data. Supports data exchange between pathway data groups and thus reduces the complexity of interchange between data formats by providing an accepted standard format for pathway data. Open and collaborative effort by a community of researchers, software developers, and institutions. BioPAX is defined in OWL DL (the W3C standard Web Ontology Language) and is represented in RDF/XML format.
Proper citation: Biological Pathways Exchange (RRID:SCR_001681) Copy
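As a rough illustration of what "defined in OWL and represented in RDF/XML" means in practice, here is a minimal, hand-written fragment in the style of BioPAX Level 3. The class and property names follow the BioPAX ontology, but the identifiers and display names are invented for this sketch.

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bp="http://www.biopax.org/release/biopax-level3.owl#">
  <!-- A protein participant; the ID and name are invented -->
  <bp:Protein rdf:ID="protein1">
    <bp:displayName>Example kinase</bp:displayName>
  </bp:Protein>
  <!-- A reaction consuming that protein on its left-hand side -->
  <bp:BiochemicalReaction rdf:ID="reaction1">
    <bp:left rdf:resource="#protein1"/>
  </bp:BiochemicalReaction>
</rdf:RDF>
```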
http://amigo.geneontology.org/
Web tool to search, sort, analyze, visualize, and download data of interest. Along with providing details of the ontologies, gene products, and annotations, it features a BLAST search, Term Enrichment and GO Slimmer tools, the GO Online SQL Environment, and a user help guide. Used at the Gene Ontology (GO) website to access the data provided by the GO Consortium. Developed and maintained by the GO Consortium.
Proper citation: AmiGO (RRID:SCR_002143) Copy
http://www.pathwaycommons.org/pc
Database of publicly available pathways from multiple organisms and multiple sources represented in a common language. Pathways include biochemical reactions, complex assembly, transport and catalysis events, and physical interactions involving proteins, DNA, RNA, small molecules, and complexes. Pathways were downloaded directly from source databases. Each source pathway database has been created differently, some by manual extraction of pathway information from the literature and some by computational prediction. Pathway Commons provides a filtering mechanism to allow the user to view only chosen subsets of information, such as only the manually curated subset. The quality of Pathway Commons pathways is dependent on the quality of the pathways from source databases. Pathway Commons aims to collect and integrate all public pathway data available in standard formats. It currently contains data from nine databases with over 1,668 pathways, 442,182 interactions, and 414 organisms, and will be continually expanded and updated. (April 2013)
Proper citation: Pathway Commons (RRID:SCR_002103) Copy
The original SAMTOOLS package has been split into three separate repositories: Samtools, BCFtools, and HTSlib. Samtools is used for reading, writing, editing, indexing, and viewing nucleotide alignments in SAM, BAM, and CRAM formats. BCFtools is used for reading and writing BCF2, VCF, and gVCF files and for calling, filtering, and summarising SNP and short indel sequence variants. HTSlib is a library for reading and writing high-throughput sequencing data.
Proper citation: SAMTOOLS (RRID:SCR_002105) Copy
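The SAM format that Samtools manipulates is tab-separated text with eleven mandatory fields per alignment line. As a sketch of that layout (not Samtools itself), a single record can be parsed as follows; the alignment line below is fabricated for illustration.

```python
# The 11 mandatory SAM fields, in order, per the SAM specification.
SAM_FIELDS = ["QNAME", "FLAG", "RNAME", "POS", "MAPQ",
              "CIGAR", "RNEXT", "PNEXT", "TLEN", "SEQ", "QUAL"]

def parse_sam_line(line):
    cols = line.rstrip("\n").split("\t")
    record = dict(zip(SAM_FIELDS, cols[:11]))
    # FLAG, POS, MAPQ, PNEXT, and TLEN are integer-valued in the spec
    for key in ("FLAG", "POS", "MAPQ", "PNEXT", "TLEN"):
        record[key] = int(record[key])
    record["TAGS"] = cols[11:]  # optional TAG:TYPE:VALUE fields, if any
    return record

line = "read1\t0\tchr1\t100\t60\t10M\t*\t0\t0\tACGTACGTAC\tFFFFFFFFFF"
rec = parse_sam_line(line)
print(rec["RNAME"], rec["POS"], rec["CIGAR"])  # chr1 100 10M
```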
http://purl.bioontology.org/ontology/DOID
Comprehensive hierarchical controlled vocabulary for human disease representation. Open-source ontology for integration of biomedical data associated with human disease. The Disease Ontology database represents a comprehensive knowledge base of inherited, developmental, and acquired human diseases.
Proper citation: Human Disease Ontology (RRID:SCR_000476) Copy
https://bitbucket.org/dkessner/forqs
Software for forward-in-time population genetics simulation that tracks individual haplotype chunks as they recombine each generation. It also models quantitative traits and selection on those traits.
Proper citation: forqs (RRID:SCR_000643) Copy
http://code.google.com/p/rna-star/
Software for aligning high-throughput RNA-seq reads to a reference genome using uncompressed suffix arrays.
Proper citation: STAR (RRID:SCR_004463) Copy
http://compbio.cs.brown.edu/projects/gasv/
Software tool combining paired-read and read-depth signals, two common signals of structural variation, into a single probabilistic model that can analyze multiple alignments of reads. Used to find structural variation in both normal and cancer genomes using data from a variety of next-generation sequencing platforms, predicting structural variants directly from aligned reads in SAM/BAM format. When multiple alignments of a read are given, GASVPro utilizes a Markov chain Monte Carlo procedure to sample over the space of possible alignments.
Proper citation: GASVPro (RRID:SCR_005259) Copy
http://bioportal.bioontology.org/annotator
A Web service that annotates textual metadata (e.g., a journal abstract) with relevant ontology concepts. NCBO uses this Web service to annotate resources in the NCBO Resource Index, and also provides it as a stand-alone service for users. It can be accessed through BioPortal or used directly in your software. Currently, the annotation workflow is based on syntactic concept recognition (using concept names and synonyms) and on a set of semantic expansion algorithms that leverage the semantics in ontologies (e.g., is_a relations). The service methodology leverages ontologies to create annotations of raw text and returns them using semantic web standards.
Proper citation: NCBO Annotator (RRID:SCR_005329) Copy
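Using the Annotator directly from software amounts to an HTTP request against the NCBO REST API. The sketch below only constructs the request URL (endpoint and parameter names as documented for the NCBO API); `YOUR_API_KEY` is a placeholder, since an account-specific API key is required to actually invoke the service.

```python
# Build (but do not send) an Annotator request URL. The apikey value is a
# placeholder; obtain a real key from a BioPortal account.
from urllib.parse import urlencode

def annotator_url(text, apikey,
                  base="https://data.bioontology.org/annotator"):
    return base + "?" + urlencode({"text": text, "apikey": apikey})

url = annotator_url("melanoma is a malignant tumor of melanocytes",
                    "YOUR_API_KEY")
print(url)
```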
http://bowtie-bio.sourceforge.net/index.shtml
Ultrafast, memory-efficient software tool for aligning short sequencing reads.
Proper citation: Bowtie (RRID:SCR_005476) Copy
http://great.stanford.edu/public/html/splash.php
Data analysis service that predicts functions of cis-regulatory regions identified by localized measurements of DNA binding events across an entire genome. Whereas previous methods took into account only binding proximal to genes, GREAT is able to properly incorporate distal binding sites and control for false positives using a binomial test over the input genomic regions. GREAT incorporates annotations from 20 ontologies and is available as a web application. The utility of GREAT extends to data generated for transcription-associated factors, open chromatin, localized epigenomic markers and similar functional data sets, and comparative genomics sets. Platform: Online tool
Proper citation: GREAT: Genomic Regions Enrichment of Annotations Tool (RRID:SCR_005807) Copy
Multi-organism, publicly accessible compendium of peptides identified in a large set of tandem mass spectrometry proteomics experiments. Mass spectrometer output files are collected for human, mouse, yeast, and several other organisms, and searched using the latest search engines and protein sequences. All results of sequence and spectral library searching are subsequently processed through the Trans-Proteomic Pipeline, using PeptideProphet to derive a probability of correct identification for all results in a uniform manner, ensuring a high-quality database along with false discovery rates at the whole-atlas level. The raw data, search results, and full builds can be downloaded for other uses. All peptides are mapped to Ensembl and can be viewed as custom tracks on the Ensembl genome browser. The long-term goal of the project is full annotation of eukaryotic genomes through a thorough validation of expressed proteins. PeptideAtlas provides a method and a framework to accommodate proteome information coming from high-throughput proteomics technologies. The online database administers experimental data in the public domain, and users are encouraged to contribute.
Proper citation: PeptideAtlas (RRID:SCR_006783) Copy
The BBOP, located at the Lawrence Berkeley National Laboratory, is a diverse group of scientific researchers and software engineers dedicated to developing tools and applying computational technologies to solve biological problems. Members of the group contribute to a number of projects, including the Gene Ontology, OBO Foundry, the Phenotypic Quality Ontology, modENCODE, and the Generic Model Organism Database Project. The group is focused on the development, use, and integration of ontologies into biological data analysis. Software written or maintained by BBOP is accessible through the site.
Proper citation: Berkeley Bioinformatics Open-Source Projects (RRID:SCR_006704) Copy
Next-generation sequencing and genotyping services provided to investigators working to discover genes that contribute to disease. On-site statistical geneticists provide insight into analysis issues as they relate to study design, data production, and quality control. In addition, CIDR has a consulting agreement with the University of Washington Genetics Coordinating Center (GCC) to provide statistical and analytical support, predominantly in the areas of GWAS data cleaning and methods development. Completed studies encompass over 175 phenotypes across 530 projects and 620,000 samples. The impact is evidenced by over 380 peer-reviewed papers published in 100 journals. Three pathways exist to access the CIDR genotyping facility:
* NIH CIDR Program: The CIDR contract is funded by 14 NIH Institutes and provides genotyping and statistical genetic services to investigators approved for access through competitive peer review. An application is required for projects supported by the NIH CIDR Program.
* The HTS Facility: The High Throughput Sequencing Facility, part of the Johns Hopkins Genetic Resources Core Facility, provides next-generation sequencing services to internal JHU investigators and external scientists on a fee-for-service basis.
* The JHU SNP Center: The SNP Center, part of the Johns Hopkins Genetic Resources Core Facility, provides genotyping to internal JHU investigators and external scientists on a fee-for-service basis.
Data computation service is included to cover the statistical genetics services provided for investigators seeking to identify genes that contribute to human disease. Human genotyping services include SNP genome-wide association studies, SNP linkage scans, custom SNP studies, a cancer panel, MHC panels, and methylation profiling. Mouse genotyping services include SNP scans and custom SNP studies.
Proper citation: Center for Inherited Disease Research (RRID:SCR_007339) Copy
https://www.mc.vanderbilt.edu/victr/dcc/projects/acc/index.php/Main_Page
A national consortium formed to develop, disseminate, and apply approaches to research that combine DNA biorepositories with electronic medical record (EMR) systems for large-scale, high-throughput genetic research. The consortium is composed of seven member sites exploring the ability and feasibility of using EMR systems to investigate gene-disease relationships. Themes of bioinformatics, genomic medicine, privacy and community engagement are of particular relevance to eMERGE. The consortium uses data from the EMR clinical systems that represent actual health care events and focuses on ethical issues such as privacy, confidentiality, and interactions with the broader community.
Proper citation: eMERGE Network: electronic Medical Records and Genomics (RRID:SCR_007428) Copy