SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.
http://www.cs.gsu.edu/~serghei/?q=drut
Software for Discovery and Reconstruction of Unannotated Transcripts in Partially Annotated Genomes from High-Throughput RNA-Seq Data.
Proper citation: DRUT (RRID:SCR_004351)
http://www.ncdc.noaa.gov/paleo/softlib/
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on April 12, 2023. A simple, efficient, process-based forward model of tree-ring growth that requires as inputs only latitude and monthly temperature and precipitation.
Proper citation: VS-Lite (RRID:SCR_002431)
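The core of the published VS-Lite model (Tolwinski-Ward et al., 2011) is a most-limiting-factor rule: monthly growth is the minimum of a temperature ramp and a moisture ramp, scaled by an insolation term that depends on latitude. The Python sketch below illustrates that rule only; the thresholds are placeholder values, and the full model also derives soil moisture from precipitation with a leaky-bucket submodel.

# Minimal sketch of the VS-Lite growth response; thresholds are illustrative
# placeholders, not calibrated parameters.
def ramp(x, lower, upper):
    """Piecewise-linear response: 0 below `lower`, 1 above `upper`."""
    if x <= lower:
        return 0.0
    if x >= upper:
        return 1.0
    return (x - lower) / (upper - lower)

def monthly_growth(temp_c, soil_moisture, day_length_frac,
                   t1=4.0, t2=18.0, m1=0.02, m2=0.05):
    g_t = ramp(temp_c, t1, t2)               # temperature response
    g_m = ramp(soil_moisture, m1, m2)        # moisture response
    return day_length_frac * min(g_t, g_m)   # most limiting factor wins

# A warm but dry month: growth comes out moisture-limited.
print(monthly_growth(temp_c=20.0, soil_moisture=0.03, day_length_frac=0.9))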
http://www.complex.iastate.edu/download/Picky/
A software tool for selecting optimal oligonucleotides (oligos) that allows the rapid and efficient determination of gene-specific oligos based on given gene sets, and can be used for large, complex genomes such as human, mouse, or maize.
Proper citation: Picky (RRID:SCR_010963)
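As a rough illustration of what gene-specific means here (this is not Picky's actual algorithm), the sketch below collects the k-mers that occur in a target gene but in no other gene of the set; a real oligo designer would additionally screen melting temperature, secondary structure, and near-matches.

# Toy gene-specific k-mer finder; sequences below are made up for illustration.
def unique_kmers(genes, target, k=8):
    """Return k-mers present in `target` and absent from every other gene."""
    others = set()
    for name, seq in genes.items():
        if name != target:
            others.update(seq[i:i + k] for i in range(len(seq) - k + 1))
    seq = genes[target]
    candidates = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    return candidates - others

genes = {
    "geneA": "ATGGCGTACGTTAGCTA",  # hypothetical sequences
    "geneB": "ATGGCGTTTGTTAGCTA",
}
print(sorted(unique_kmers(genes, "geneA")))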
https://github.com/ihmwg/IHM-dictionary
Software resource for a data representation for integrative/hybrid methods of modeling macromolecular structures.
Proper citation: IHM-dictionary (RRID:SCR_016186)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6301786/
Device to control spatial and temporal variations in oxygen tension to better replicate in vivo biology. Consists of three parallel connected tissue chambers and an oxygen scavenger channel placed adjacent to these tissue chambers. Provides consistent control of spatial and temporal oxygen gradients in the tissue microenvironment and can be used to investigate important oxygen-dependent biological processes present in cancer, ischemic heart disease, and wound healing.
Proper citation: Microfluidic device to attain high spatial and temporal control of oxygen (RRID:SCR_017131)
http://www.bioextract.org/GuestLogin
An open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet.
Proper citation: BioExtract (RRID:SCR_005397)
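BioExtract itself is driven through its web interface, but the kind of task a workflow records - querying a remote data source and handing the results to a later step - can be pictured with a direct call to NCBI's public E-utilities. The endpoint and parameters below belong to NCBI, not to BioExtract.

# Sketch of a "query a distributed data source" task using NCBI E-utilities.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "db": "nucleotide",
    "term": "Arabidopsis thaliana[Organism] AND chloroplast[Title]",
    "retmode": "json",
    "retmax": 5,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urlopen(url) as resp:
    result = json.load(resp)

# Record IDs that a downstream step (e.g., an EMBOSS tool) could consume.
print(result["esearchresult"]["idlist"])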
A Comprehensive Bioinformatics Scientific Workflow Module for Distributed Analysis of Large-Scale Biological Data, built on top of the core Kepler scientific workflow system.
Proper citation: bioKepler (RRID:SCR_005385)
http://carringtonlab.org/resources/cashx
Software pipeline to parse, map, quantify and manage large quantities of sequence data. CASHX is a set of tools that can be used together, or as independent modules on their own. The reference genome alignment tools can be used with any reference sequence in fasta format. The pipeline was designed and tested using Arabidopsis thaliana small RNA reads generated using an Illumina 1G.
Proper citation: CASHX (RRID:SCR_005477)
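CASHX takes its reference sequences in FASTA format; the minimal reader below (an illustration, not CASHX code) shows what that input looks like: a ">" header line naming each sequence, followed by one or more lines of sequence.

# Tiny FASTA reader; "reference.fa" is a hypothetical input file.
def read_fasta(path):
    """Yield (name, sequence) pairs from a FASTA file."""
    name, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    yield name, "".join(chunks)
                name, chunks = line[1:].split()[0], []
            elif line:
                chunks.append(line)
    if name is not None:
        yield name, "".join(chunks)

for name, seq in read_fasta("reference.fa"):
    print(name, len(seq))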
http://mesquiteproject.org/packages/chromaseq/
A software package for Mesquite that processes chromatograms, calls bases, builds contigs, and so on, using in part the programs Phred and Phrap.
Proper citation: Chromaseq (RRID:SCR_005587)
Founded in 1985, the San Diego Supercomputer Center (SDSC) enables international science and engineering discoveries through advances in computational science and data-intensive, high-performance computing. SDSC is considered a leader in data-intensive computing, providing resources, services and expertise to the national research community, including industry and academia. The mission of SDSC is to extend the reach of scientific accomplishments by providing tools such as high-performance hardware technologies, integrative software technologies, and deep interdisciplinary expertise to these communities. From 1997 to 2004, SDSC extended its leadership in computational science and engineering to form the National Partnership for Advanced Computational Infrastructure (NPACI), teaming with approximately 40 university partners around the country. Today, SDSC is an Organized Research Unit of the University of California, San Diego with a staff of talented scientists, software developers, and support personnel. A broad community of scientists, engineers, students, commercial partners, museums, and other facilities work with SDSC to develop cyberinfrastructure-enabled applications to help manage their extreme data needs. Projects run the gamut from creating astrophysics visualizations for the American Museum of Natural History, to supporting more than 20,000 users per day at the Protein Data Bank, to performing large-scale, award-winning simulations of the origin of the universe or of how a major earthquake would affect densely populated areas such as southern California. Along with these data cyberinfrastructure tools, SDSC also offers users full-time support including code optimization, training, 24-hour help desk services, portal development, and a variety of other services. As one of the NSF's first national supercomputer centers, SDSC served as the data-intensive site lead in the agency's TeraGrid program, a multiyear effort to build and deploy the world's first large-scale infrastructure for open scientific research. SDSC currently provides advanced user support and expertise for XSEDE (Extreme Science and Engineering Discovery Environment), the five-year NSF-funded program that succeeded TeraGrid in mid-2011.
Proper citation: San Diego Supercomputer Center (RRID:SCR_001856)
http://www.broadinstitute.org/genome_bio/siphy/
Software that implements rigorous statistical tests to detect bases under selection from multiple alignment data. It takes full advantage of deeply sequenced phylogenies to estimate both unlikely substitution patterns and slowdowns or accelerations in mutation rates. It can be applied as a Hidden Markov Model (HMM), in sliding windows, or to specific regions.
Proper citation: SiPhy (RRID:SCR_000564)
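SiPhy's statistics are phylogenetic likelihoods, but its sliding-window mode of application can be pictured with a much cruder per-column score. The toy sketch below scores each alignment column by disagreement with the consensus base and averages that score over fixed-width windows.

# Crude stand-in for a sliding-window scan over a multiple alignment.
from collections import Counter

alignment = [  # toy alignment, one string per species
    "ACGTACGTACGT",
    "ACGTACCTACGT",
    "ACGAACGTACGA",
]

def column_score(column):
    """Fraction of sequences disagreeing with the consensus base."""
    counts = Counter(column)
    return 1.0 - counts.most_common(1)[0][1] / len(column)

scores = [column_score(col) for col in zip(*alignment)]

window = 4
for start in range(len(scores) - window + 1):
    mean = sum(scores[start:start + window]) / window
    print(f"window {start}-{start + window - 1}: {mean:.3f}")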
http://www.brown.edu/Research/Istrail_Lab/hapcompass.php
Software that utilizes a fast cycle basis algorithm for accurate haplotype assembly from sequence data. It is able to create pairwise SNP phasings.
Proper citation: HapCompass (RRID:SCR_000942)
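To picture what a pairwise SNP phasing is (the sketch below is not HapCompass's cycle-basis algorithm), consider reads that span two heterozygous sites: each read votes for the pair of alleles it carries, and the best-supported pairing gives the phase.

# Hypothetical reads spanning two SNP sites; each tuple is the allele observed
# at (site 1, site 2) on one read.
from collections import Counter

reads = [("A", "G"), ("A", "G"), ("T", "C"), ("A", "G"), ("T", "C"), ("A", "C")]

votes = Counter(reads)
phase, support = votes.most_common(1)[0]
print(f"best-supported phasing: {phase[0]}-{phase[1]} "
      f"({support} of {len(reads)} reads agree)")
# HapCompass reconciles many such pairwise votes at once; conflicting evidence
# (here, the lone A-C read) is resolved globally rather than pair by pair.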
Software application for annotating character matrix files with ontology terms. Character states can be annotated using Entity-Quality syntax, where the entity, quality, and possibly related entities are drawn from the requisite ontologies. In addition, taxa (the rows of a character matrix) can be annotated with identifiers from a taxonomy ontology. Phenex saves ontology annotations alongside the original free-text character matrix data using the new NeXML format standard for evolutionary data.
Proper citation: Phenex (RRID:SCR_021748)
Free access to materials for students, educators, and researchers in cognitive psychology and cognitive neuroscience. Currently there are about a dozen demonstrations and more than 30 videos that were produced over the last two years. The basic philosophy of goCognitive rests on the assumption that easy and free access to high-quality content will improve the learning experience of students and will enable more students to enjoy the field of cognitive psychology and cognitive neuroscience. A few parts of goCognitive are only available to registered users who have provided their email address, but all of the online demonstrations and videos are accessible to everyone. Both new demonstrations and new video interviews will continually be added to the site. Manuals for each of the demonstrations are being created and are available as PDF files for download. Most of the demonstrations are straightforward, but in some cases, especially if you would like to collect data, it might be a good idea to look over the manual. There are different ways in which you can get involved and contribute to the site. Your involvement can range from sending feedback about the demonstrations and videos, suggestions for new materials, or corrections, to the creation or publication of demonstrations and videos that meet our criteria. Down the road the submission process will be made easier, but for now please contact swerner (at) uidaho dot edu for more information. NSF student grant: Undergraduate students can apply through goCognitive for a $1,100 grant to co-produce a new video interview with a leading researcher in the field of cognitive neuroscience. The funding has been provided by the National Science Foundation.
Proper citation: goCognitive (RRID:SCR_006154)
Protege is a free, open-source platform that provides a growing user community with a suite of tools to construct domain models and knowledge-based applications with ontologies. At its core, Protege implements a rich set of knowledge-modeling structures and actions that support the creation, visualization, and manipulation of ontologies in various representation formats. Protege can be customized to provide domain-friendly support for creating knowledge models and entering data. Further, Protege can be extended by way of a plug-in architecture and a Java-based Application Programming Interface (API) for building knowledge-based tools and applications. An ontology describes the concepts and relationships that are important in a particular domain, providing a vocabulary for that domain as well as a computerized specification of the meaning of terms used in the vocabulary. Ontologies range from taxonomies and classifications to database schemas to fully axiomatized theories. In recent years, ontologies have been adopted in many business and scientific communities as a way to share, reuse and process domain knowledge. Ontologies are now central to many applications such as scientific knowledge portals, information management and integration systems, electronic commerce, and semantic web services. The Protege platform supports two main ways of modeling ontologies:
* The Protege-Frames editor enables users to build and populate ontologies that are frame-based, in accordance with the Open Knowledge Base Connectivity protocol (OKBC). In this model, an ontology consists of a set of classes organized in a subsumption hierarchy to represent a domain's salient concepts, a set of slots associated with classes to describe their properties and relationships, and a set of instances of those classes - individual exemplars of the concepts that hold specific values for their properties.
* The Protege-OWL editor enables users to build ontologies for the Semantic Web, in particular in the W3C's Web Ontology Language (OWL). An OWL ontology may include descriptions of classes, properties, and their instances. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e. facts not literally present in the ontology but entailed by the semantics. These entailments may be based on a single document or multiple distributed documents that have been combined using defined OWL mechanisms (see the OWL Web Ontology Language Guide).
Protege is based on Java, is extensible, and provides a plug-and-play environment that makes it a flexible base for rapid prototyping and application development.
Proper citation: Protege (RRID:SCR_003299)
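The class/property/instance structure described above is independent of any particular editor. As a rough sketch - using the third-party rdflib Python library rather than Protege's own Java API - the snippet below builds a tiny OWL ontology in which one fact not literally asserted (that the individual is also an Enzyme) follows from the subclass axiom, the kind of entailment the OWL semantics licenses.

# Build a minimal OWL ontology as RDF triples with rdflib (illustrative names).
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Enzyme, RDF.type, OWL.Class))
g.add((EX.Kinase, RDF.type, OWL.Class))
g.add((EX.Kinase, RDFS.subClassOf, EX.Enzyme))     # class hierarchy
g.add((EX.Reaction, RDF.type, OWL.Class))
g.add((EX.catalyzes, RDF.type, OWL.ObjectProperty))
g.add((EX.catalyzes, RDFS.domain, EX.Enzyme))      # property constraints
g.add((EX.catalyzes, RDFS.range, EX.Reaction))

# An individual: an OWL reasoner would infer it is also an Enzyme.
g.add((EX.CDK1, RDF.type, EX.Kinase))

print(g.serialize(format="turtle"))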
Web application providing an online database and workspace for evolutionary research, specifically systematics (the science of determining the evolutionary relationships among species). It enables researchers to upload images and affiliate data with those images (labels, species names, etc.) and allows researchers to upload morphological data and affiliate it with phylogenetic matrices. MorphoBank is project-based, meaning a team of researchers can create a project and share the images and associated data exclusively with each other. When a paper associated with the project is published, the research team can make their data permanently available for viewing on MorphoBank, where it is then archived.
Proper citation: MorphoBank (RRID:SCR_003213)
Data repository for integrative/hybrid structural models of macromolecules and their assemblies. This includes atomistic models as well as multi-scale models consisting of different coarse-grained representations.
Proper citation: PDB-Dev (RRID:SCR_016185)
http://www.broad.mit.edu/annotation/fungi/fgi/
Produces and analyzes sequence data from fungal organisms that are important to medicine, agriculture and industry. The FGI is a partnership between the Broad Institute and the wider fungal research community, with the selection of target genomes governed by a steering committee of fungal scientists. Organisms are selected for sequencing as part of a cohesive strategy that considers the value of data from each organism, given their role in basic research, health, agriculture and industry, as well as their value in comparative genomics.
Proper citation: Fungal Genome Initiative (RRID:SCR_003169)
http://www.sgn.cornell.edu/bulk/input.pl?modeunigene
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 16, 2025. Allowed users to download Unigene or BAC information using a list of identifiers, or complete datasets via FTP.
Proper citation: Sol Genomics Network - Bulk download (RRID:SCR_007161)
http://www.nber.org/papers/h0038
A dataset to advance the study of life-cycle interactions of biomedical and socioeconomic factors in the aging process. The EI project has assembled a variety of large datasets covering the life histories of approximately 39,616 white male volunteers (drawn from a random sample of 331 companies) who served in the Union Army (UA), and of about 6,000 African-American veterans from 51 randomly selected United States Colored Troops companies (USCT). Their military records were linked to pension and medical records that detailed the soldiers' health status and socioeconomic and family characteristics. Each soldier was searched for in the US decennial census for the years in which they were most likely to be found alive (1850, 1860, 1880, 1900, 1910). In addition, a sample consisting of 70,000 men examined for service in the Union Army between September 1864 and April 1865 has been assembled and linked only to census records. These records will be useful for life-cycle comparisons of those accepted and rejected for service.
Military Data: The military service and wartime medical histories of the UA and USCT men were collected from the Union Army and United States Colored Troops military service records, carded medical records, and other wartime documents.
Pension Data: Wherever possible, the UA and USCT samples have been linked to pension records, including surgeon's certificates. About 70% of men in the Union Army sample have a pension. These records provide the bulk of the socioeconomic and demographic information on these men from the late 1800s through the early 1900s, including family structure and employment information. In addition, the surgeon's certificates provide rich medical histories, with an average of 5 examinations per linked recruit for the UA, and about 2.5 exams per USCT recruit.
Census Data: Both early and late-age familial and socioeconomic information is collected from the manuscript schedules of the federal censuses of 1850, 1860, 1870 (incomplete), 1880, 1900, and 1910.
Data Availability: All of the datasets (Military Union Army; linked Census; Surgeon's Certificates; Examination Records; and supporting ecological and environmental variables) are publicly available from ICPSR. In addition, copies on CD-ROM may be obtained from the CPE, which also maintains an interactive Internet Data Archive and Documentation Library, which can be accessed on the Project Website.
* Dates of Study: 1850-1910
* Study Features: Longitudinal, Minority Oversamples
* Sample Size:
** Union Army: 35,747
** Colored Troops: 6,187
** Examination Sample: 70,800
ICPSR Link: http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/06836
Proper citation: Early Indicators of Later Work Levels Disease and Death (EI) - Union Army Samples Public Health and Ecological Datasets (RRID:SCR_008921)
Can't find your Tool?
We recommend that you first click next to the search bar to review some helpful search tips and refine your search. Alternatively, register your tool with the SciCrunch Registry by adding a little information to a web form; logging in lets you create a provisional RRID, but it is not required to submit.
Welcome to the dkNET Resources search. From here you can search through a compilation of resources used by dkNET and see how data is organized within our community.
You are currently on the Community Resources tab, looking through categories and sources that dkNET has compiled. You can navigate through those categories from here, or change to a different tab to run your search against. Each tab gives a different perspective on the data.
If you have an account on dkNET then you can log in from here to get additional features in dkNET such as Collections, Saved Searches, and managing Resources.
Here is the search term being executed; you can type in anything you want to search for.
You can save any searches you perform from here for quick access later.
We recognized your search term and included synonyms and inferred terms alongside your term to help find the data you are looking for.
If you are logged into dkNET you can add data records to your collections to create custom spreadsheets across multiple sources of data.
Here are the sources that were queried against in your search that you can investigate further.
Here are the categories present within dkNET that you can filter your data on.
Here are the subcategories present within this category that you can filter your data on.
If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.