SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.
http://www.cs.gsu.edu/~serghei/?q=drut
Software for Discovery and Reconstruction of Unannotated Transcripts in Partially Annotated Genomes from High-Throughput RNA-Seq Data.
Proper citation: DRUT (RRID:SCR_004351)
http://www.ncdc.noaa.gov/paleo/softlib/
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on April 12, 2023. A simple, efficient, process-based forward model of tree-ring growth that requires as inputs only latitude and monthly temperature and precipitation.
Proper citation: VS-Lite (RRID:SCR_002431)
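The model's core principle (monthly growth limited by whichever of temperature or moisture is scarcer, with piecewise-linear growth responses) can be sketched as follows. The threshold values here are illustrative placeholders, not VS-Lite's calibrated parameters, and the latitude-dependent insolation term is omitted:

```python
def ramp(x, lo, hi):
    """Piecewise-linear growth response: 0 below lo, 1 above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def ring_width(temps, precips, t_lo=4.0, t_hi=17.0, p_lo=20.0, p_hi=80.0):
    """Annual ring-width index: mean of monthly growth rates, each limited
    by the scarcer of temperature and moisture (principle of limiting
    factors). Thresholds are illustrative, not VS-Lite's fitted values."""
    monthly = [min(ramp(t, t_lo, t_hi), ramp(p, p_lo, p_hi))
               for t, p in zip(temps, precips)]
    return sum(monthly) / len(monthly)
```

In this sketch a warm, wet year yields a wider ring index than a cold one, which is the qualitative behavior the forward model is designed to reproduce.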
http://www.complex.iastate.edu/download/Picky/
A software tool for selecting optimal oligonucleotides (oligos) that allows the rapid and efficient determination of gene-specific oligos based on given gene sets, and can be used for large, complex genomes such as human, mouse, or maize.
Proper citation: Picky (RRID:SCR_010963)
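At its core, gene-specific oligo selection means finding subsequences unique to a target gene against all other genes in the set. A toy illustration of that screening step (Picky's actual algorithm additionally accounts for melting temperature, near-matches, and secondary structure):

```python
def unique_oligos(target, background, k=8):
    """Return k-mers present in `target` but in none of the `background`
    sequences -- a toy stand-in for gene-specific oligo screening."""
    bg_kmers = set()
    for seq in background:
        for i in range(len(seq) - k + 1):
            bg_kmers.add(seq[i:i + k])
    return [target[i:i + k]
            for i in range(len(target) - k + 1)
            if target[i:i + k] not in bg_kmers]
```

For large genomes such as human or maize, real tools index the background with suffix arrays or hash tables rather than enumerating k-mers this naively.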
https://github.com/ihmwg/IHM-dictionary
Software resource for a data representation for integrative/hybrid methods of modeling macromolecular structures.
Proper citation: IHM-dictionary (RRID:SCR_016186)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6301786/
Device to control spatial and temporal variations in oxygen tensions to better replicate in vivo biology. Consists of three parallel connected tissue chambers and an oxygen scavenger channel placed adjacent to these tissue chambers. Provides consistent control of spatial and temporal oxygen gradients in the tissue microenvironment and can be used to investigate important oxygen-dependent biological processes present in cancer, ischemic heart disease, and wound healing.
Proper citation: Microfluidic device to attain high spatial and temporal control of oxygen (RRID:SCR_017131)
http://www.bioextract.org/GuestLogin
An open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet.
Proper citation: BioExtract (RRID:SCR_005397)
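The record-then-replay workflow idea described above can be sketched in a few lines. All names here are hypothetical; BioExtract itself is a web system, not a Python library:

```python
class Workflow:
    """Toy record-and-replay workflow: tasks are recorded as they are
    performed, then the saved sequence can be re-executed with the
    original or modified inputs (sketch, not BioExtract's API)."""
    def __init__(self):
        self.steps = []  # ordered list of (name, function) pairs

    def record(self, name, func):
        self.steps.append((name, func))
        return self  # allow chaining while "recording" a session

    def run(self, data):
        for name, func in self.steps:
            data = func(data)  # each task's output feeds the next task
        return data

# Record two tasks, then the workflow can be re-run on any input.
wf = (Workflow()
      .record("uppercase", str.upper)
      .record("reverse", lambda s: s[::-1]))
```

The key design point mirrored here is that the workflow is data (a stored task sequence), so it can be saved, shared, and replayed with modified parameters.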
A Comprehensive Bioinformatics Scientific Workflow Module for Distributed Analysis of Large-Scale Biological Data, built on top of the core Kepler scientific workflow system.
Proper citation: bioKepler (RRID:SCR_005385)
http://carringtonlab.org/resources/cashx
Software pipeline to parse, map, quantify and manage large quantities of sequence data. CASHX is a set of tools that can be used together, or as independent modules on their own. The reference genome alignment tools can be used with any reference sequence in fasta format. The pipeline was designed and tested using Arabidopsis thaliana small RNA reads generated using an Illumina 1G.
Proper citation: CASHX (RRID:SCR_005477)
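The parse/map/quantify pipeline can be illustrated with a toy exact-match aligner over a FASTA reference. Function names are illustrative, and CASHX itself uses an indexed, far faster mapping algorithm:

```python
def parse_fasta(text):
    """Parse FASTA-formatted text into {header: sequence}."""
    records, header = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            header = line[1:]
            records[header] = ""
        elif header is not None:
            records[header] += line
    return records

def map_and_count(reads, reference):
    """Count reads mapping exactly (by substring match) to each
    reference sequence -- the 'map and quantify' steps in miniature."""
    counts = {name: 0 for name in reference}
    for read in reads:
        for name, seq in reference.items():
            if read in seq:
                counts[name] += 1
    return counts

ref = parse_fasta(">chr1\nACGTACGT\n>chr2\nTTTTGGGG\n")
counts = map_and_count(["ACGT", "TTTT", "CCCC"], ref)
```

Here the unmapped read ("CCCC") is simply dropped; a real pipeline would also manage quality scores, multi-mapping reads, and output formats.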
http://mesquiteproject.org/packages/chromaseq/
A software package in Mesquite that processes chromatograms, makes contigs, base calls, etc., using in part the programs Phred and Phrap.
Proper citation: Chromaseq (RRID:SCR_005587)
Founded in 1985, the San Diego Supercomputer Center (SDSC) enables international science and engineering discoveries through advances in computational science and data-intensive, high-performance computing. SDSC is considered a leader in data-intensive computing, providing resources, services and expertise to the national research community, including industry and academia. The mission of SDSC is to extend the reach of scientific accomplishments by providing tools such as high-performance hardware technologies, integrative software technologies, and deep interdisciplinary expertise to these communities. From 1997 to 2004, SDSC extended its leadership in computational science and engineering to form the National Partnership for Advanced Computational Infrastructure (NPACI), teaming with approximately 40 university partners around the country. Today, SDSC is an Organized Research Unit of the University of California, San Diego with a staff of talented scientists, software developers, and support personnel. A broad community of scientists, engineers, students, commercial partners, museums, and other facilities work with SDSC to develop cyberinfrastructure-enabled applications to help manage their extreme data needs. Projects run the gamut from creating astrophysics visualizations for the American Museum of Natural History, to supporting more than 20,000 users per day of the Protein Data Bank, to performing large-scale, award-winning simulations of the origin of the universe or of how a major earthquake would affect densely populated areas such as southern California. Along with these data cyberinfrastructure tools, SDSC also offers users full-time support including code optimization, training, 24-hour help desk services, portal development and a variety of other services.
As one of the NSF's first national supercomputer centers, SDSC served as the data-intensive site lead in the agency's TeraGrid program, a multiyear effort to build and deploy the world's first large-scale infrastructure for open scientific research. SDSC currently provides advanced user support and expertise for XSEDE (Extreme Science and Engineering Discovery Environment), the five-year NSF-funded program that succeeded TeraGrid in mid-2011.
Proper citation: San Diego Supercomputer Center (RRID:SCR_001856)
http://www.broadinstitute.org/genome_bio/siphy/
Software that implements rigorous statistical tests to detect bases under selection from multiple alignment data. It takes full advantage of deeply sequenced phylogenies to estimate both unlikely substitution patterns as well as slowdowns or accelerations in mutation rates. It can be applied as a Hidden Markov Model (HMM), in sliding windows, or to specific regions.
Proper citation: SiPhy (RRID:SCR_000564)
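A toy version of the sliding-window idea: score each window of a multiple alignment by how many columns are perfectly conserved. SiPhy's real test is a phylogeny-aware likelihood, not this simple count, so treat this only as an illustration of the windowing mechanics:

```python
def conserved_fraction(alignment, start, size):
    """Fraction of columns in alignment[start:start+size] where all
    sequences agree (toy conservation score, not SiPhy's statistic)."""
    hits = 0
    for col in range(start, start + size):
        bases = {seq[col] for seq in alignment}
        if len(bases) == 1:
            hits += 1
    return hits / size

def sliding_windows(alignment, size):
    """Score every window of the given size across the alignment."""
    length = len(alignment[0])
    return [conserved_fraction(alignment, s, size)
            for s in range(length - size + 1)]

aln = ["ACGTAA",
       "ACGTCA",
       "ACGTGA"]
scores = sliding_windows(aln, 4)
```

Windows with unusually high scores flag candidate constrained elements; a phylogenetic method additionally weights each mismatch by the branch lengths separating the species.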
http://www.brown.edu/Research/Istrail_Lab/hapcompass.php
Software that utilizes a fast cycle basis algorithm for the accurate haplotype assembly of sequence data. It is able to create pairwise SNP phasings.
Proper citation: HapCompass (RRID:SCR_000942)
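Pairwise SNP phasing can be illustrated with a toy majority vote: for each pair of heterozygous sites covered by the same read, vote on whether the alleles are in cis or in trans. HapCompass's actual algorithm resolves conflicting votes via a cycle basis of its compass graph, which this sketch omits:

```python
from collections import Counter

def pairwise_phase(fragments):
    """Majority-vote phasing for adjacent SNP sites covered together by
    read fragments. Each fragment maps site index -> allele (0 or 1).
    Returns {(site_a, site_b): 'cis' | 'trans'} for decided pairs.
    Toy sketch, not HapCompass's cycle-basis algorithm."""
    votes = Counter()
    for frag in fragments:
        sites = sorted(frag)
        for a, b in zip(sites, sites[1:]):
            same = frag[a] == frag[b]
            votes[(a, b, "cis" if same else "trans")] += 1
    phasing = {}
    for (a, b, rel), n in votes.items():
        rival = votes[(a, b, "trans" if rel == "cis" else "cis")]
        if n > rival:  # keep only the majority relation for each pair
            phasing[(a, b)] = rel
    return phasing

# Three reads cover sites 0-1 (two say cis, one trans); one covers 1-2.
phasing = pairwise_phase([{0: 0, 1: 0}, {0: 0, 1: 0},
                          {0: 1, 1: 0}, {1: 0, 2: 1}])
```

Chaining the pairwise relations then yields full haplotypes, provided the pairs are consistent; detecting and repairing inconsistencies is exactly where the cycle-basis machinery comes in.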
Software application for annotating character matrix files with ontology terms. Character states can be annotated using Entity-Quality syntax, where the entity, quality, and possibly related entities are drawn from the requisite ontologies. In addition, taxa (the rows of a character matrix) can be annotated with identifiers from a taxonomy ontology. Phenex saves ontology annotations alongside the original free-text character matrix data using the new NeXML format standard for evolutionary data.
Proper citation: Phenex (RRID:SCR_021748)
https://pynwb.readthedocs.io/en/latest/
Python package for working with neurodata stored in Neurodata Without Borders (NWB) files. Provides an API that allows users to read and create NWB-formatted HDF5 files. Developed in support of the NWB project, with the aim of spreading a standardized data format for cellular-based neurophysiology data.
Proper citation: PyNWB (RRID:SCR_017452)
https://github.com/compbiolabucf/omicsGAN
Generative adversarial network software that integrates two omics datasets and their interaction network to generate synthetic data corresponding to each omics profile, which can result in better phenotype prediction. Used to capture information from the interaction network as well as the two omics datasets and fuse them into synthetic data with stronger predictive signals.
Proper citation: OmicsGAN (RRID:SCR_022976)
Free access to materials for students, educators, and researchers in cognitive psychology and cognitive neuroscience. Currently there are about a dozen demonstrations and more than 30 videos that were produced over the last two years. The basic philosophy of goCognitive rests on the assumption that easy and free access to high-quality content will improve the learning experience of students and will enable more students to enjoy the field of cognitive psychology and cognitive neuroscience. A few parts of goCognitive are only available to registered users who have provided their email address, but all of the online demonstrations and videos are accessible to everyone. Both new demonstrations and new video interviews will continually be added to the site. Manuals for each of the demonstrations are being created and made available as PDF files for download. Most of the demonstrations are pretty straightforward, but in some cases, especially if you would like to collect data, it might be a good idea to look over the manual. There are different ways in which you can get involved and contribute to the site. Your involvement can range from sending us feedback about the demonstrations and videos, suggestions for new materials, or the simple submission of corrections, to the creation or publication of demonstrations and videos that meet our criteria. Down the road we will make the submission process easier, but for now please contact swerner (at) uidaho dot edu for more information. NSF student grant: Undergraduate students can apply through goCognitive for a $1,100 grant to co-produce a new video interview with a leading researcher in the field of cognitive neuroscience. The funding has been provided by the National Science Foundation.
Proper citation: goCognitive (RRID:SCR_006154)
http://www.scrible.com/#desktop
We're bringing Web-based research into the Internet Era by empowering people to mark up web pages in the browser and manage and collaborate on them online. And that's just the start... We've got much more planned in a variety of areas to help people manage the mounds of info they're pulling off the Web every day. Simply drag the scrible bookmarklet to your browser's Bookmarks toolbar. Click it later to mark up, save or share web pages. Even though the world uses the Internet to research nearly everything for work, school and home (job postings, press releases, Wikipedia articles, medical info, etc.), most folks still use old-school ways of annotating, organizing and sharing online info (printing to mark by hand, copying/pasting into Word, etc.). It's archaic, laborious and a waste of time. We're changing that. A bookmarklet is a bookmarked link that, when clicked, adds functionality to your browser. When the scrible Bookmarklet is clicked, it loads the scrible Toolbar atop the current webpage you're viewing. Adding the scrible Bookmarklet to your browser is a breeze. Simply drag it to your browser's Bookmarks Toolbar.
Proper citation: scrible (RRID:SCR_008882)
http://titan.biotec.uiuc.edu/bee/honeybee_project.htm
A database integrating data from the bee brain EST sequencing project with data from sequencing and gene research projects from other organisms, primarily the fruit fly Drosophila melanogaster. The goal of Bee-ESTdb is to provide updated information on the genes of the honey bee, currently using annotation primarily from flies to suggest cellular roles, biological functions, and evolutionary relationships. The site allows searches by sequence ID, EST annotations, Gene Ontology terms, Contig ID, and using BLAST. A very useful resource for those interested in comparative genomics of the brain. A normalized unidirectional cDNA library was made in the laboratory of Prof. Bento Soares, University of Iowa. The library was subsequently subtracted. Over 20,000 cDNA clones were partially sequenced from the normalized and subtracted libraries at the Keck Center, resulting in 15,311 vector-trimmed, high-quality sequences with an average read length of 494 bp and an average base quality of 41. These sequences were assembled into 8966 putatively unique sequences, which were tested for similarity to sequences in the public databases with a variety of BLAST searches. The Clemson University Genomics Institute is the distributor of these public domain cDNA clones. For information on how to purchase an individual clone or the entire collection, please contact www.genome.clemson.edu/orders/ or generobi (at) life.uiuc.edu.
Proper citation: Honey Bee Brain EST Project (RRID:SCR_002389)
http://cgsc.biology.yale.edu/index.php
The CGSC Collection contains only non-pathogenic BSL-1 laboratory strains, primarily genetic derivatives of Escherichia coli K-12, the laboratory strain widely used in genetic and molecular studies, along with a few B strains. The CGSC Database of E. coli genetic information includes genotypes and reference information for the strains in the CGSC collection; the names, synonyms, properties, and map positions for genes; gene product information; and information on specific mutations and references to primary literature. The public version of the database includes this information and can be queried directly via this CGSC DB WebServer. The collection includes wild-type cultures contributed by a number of laboratories and a few thousand derivatives carrying from one up to 29 mutations from among 3500 mutations in (or included in deletions spanning) more than 1300 different loci. Some combinations were constructed particularly for mapping purposes and are still used for teaching and for rapid localization, some for manifestation of a particular phenotype, and some strains for transferring a particular region or for complementation analysis. Some plasmids, e.g., the Clarke and Carbon collection, F-primes, a number of toolkit plasmids, and a few classic plasmids are included, but it is not a comprehensive collection of plasmids. Additionally, we have recently acquired most of the strains from the Keio Collection of systematic individual gene knockout (deletion/kan insertion) strains.
Proper citation: CGSC (RRID:SCR_002303)
Kepler is a software application for analyzing and modeling scientific data. Using Kepler's graphical interface and components, scientists with little background in computer science can create executable models, called scientific workflows, for flexibly accessing scientific data (streaming sensor data, medical and satellite images, simulation output, observational data, etc.) and executing complex analyses on this data. Kepler is developed by a cross-project collaboration led by the Kepler/CORE team. The software builds upon the mature Ptolemy II framework, developed at the University of California, Berkeley. Ptolemy II is a software framework designed for modeling, design, and simulation of concurrent, real-time, embedded systems. The Kepler Project is dedicated to furthering and supporting the capabilities, use, and awareness of the free and open source scientific workflow application, Kepler. Kepler is designed to help scientists, analysts, and computer programmers create, execute, and share models and analyses across a broad range of scientific and engineering disciplines. Kepler can operate on data stored in a variety of formats, locally and over the internet, and is an effective environment for integrating disparate software components, such as merging R scripts with compiled C code, or facilitating remote, distributed execution of models. Using Kepler's graphical user interface, users simply select and then connect pertinent analytical components and data sources to create a scientific workflow: an executable representation of the steps required to generate results. The Kepler software helps users share and reuse data, workflows, and components developed by the scientific community to address common needs. Kepler is a Java-based application that is maintained for the Windows, OSX, and Linux operating systems.
The Kepler Project supports the official code-base for Kepler development, as well as provides materials and mechanisms for learning how to use Kepler, sharing experiences with other workflow developers, reporting bugs, suggesting enhancements, etc. The Kepler Project Leadership Team works to assure the long-term technical and financial viability of Kepler by making strategic decisions on behalf of the Kepler user community, as well as providing an official and durable point-of-contact to articulate and represent the interests of the Kepler Project and the Kepler software application. Details about how to get more involved with the Kepler Project can be found in the developer section of this website.
Proper citation: Kepler (RRID:SCR_005252)
Can't find your Tool?
We recommend that you first click next to the search bar to review some helpful search tips and refine your search. Alternatively, please register your tool with the SciCrunch Registry by adding a little information to a web form; logging in will enable you to create a provisional RRID, but it is not required to submit.