http://function.princeton.edu/GOLEM/index.html
THIS RESOURCE IS NO LONGER IN SERVICE, documented July 7, 2017. Welcome to the home of GOLEM: an interactive, graphical gene-ontology visualization, navigation, and analysis tool on the web. GOLEM allows the viewer to navigate and explore a local portion of the Gene Ontology (GO) hierarchy. Users can also load annotations for various organisms into the ontology in order to search for particular genes, to limit the display to show only GO terms relevant to a particular organism, or to quickly search for GO terms enriched in a set of query genes. GOLEM is implemented in Java and is available both for use on the web as an applet and for download as a JAR package. A brief tutorial on how to use GOLEM is available both online and in the instructions included in the program. We also have a list of links to the libraries used to build GOLEM, as well as the various organizations that curate organism annotations to the ontology. GOLEM is available as a .jar package and a Macintosh .app for use on- or offline as a stand-alone package. You will need Java (v1.5 or greater) installed on your system to run GOLEM. Source code (including Eclipse project files) is also available. GOLEM (Gene Ontology Local Exploration Map) is a visualization and analysis tool for focused exploration of the gene ontology graph. GOLEM allows the user to dynamically expand and focus the local graph structure of the gene ontology hierarchy in the neighborhood of any chosen term. It also supports rapid analysis of an input list of genes to find enriched gene ontology terms. The GOLEM application permits the user either to use local gene ontology and annotation files in the absence of an Internet connection, or to access the most recent ontology and annotation information from the gene ontology webpage. GOLEM supports global and organism-specific searches by gene ontology term name, gene ontology ID, and gene name.
CONCLUSION: GOLEM is a useful software tool for biologists interested in visualizing the local directed acyclic graph structure of the gene ontology hierarchy and searching for gene ontology terms enriched in genes of interest. It is freely available both as an application and as an applet.
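Tools like GOLEM typically test for term enrichment with a one-sided hypergeometric test: given how many genes in the genome carry a GO term, how surprising is the count observed in the query set? The description above does not specify GOLEM's exact statistics, so the following is a generic sketch of that standard test, not GOLEM's implementation:

```python
from math import comb

def hypergeom_enrichment_p(total_genes, annotated, query_size, annotated_in_query):
    """One-sided hypergeometric p-value: probability of seeing at least
    `annotated_in_query` term-carrying genes in a random query of size
    `query_size`, drawn from `total_genes` of which `annotated` carry the term."""
    p = 0.0
    for k in range(annotated_in_query, min(query_size, annotated) + 1):
        p += comb(annotated, k) * comb(total_genes - annotated, query_size - k) \
             / comb(total_genes, query_size)
    return p

# Toy numbers: 10,000 genes, 100 annotated with a term,
# a query of 50 genes of which 8 carry the term.
p = hypergeom_enrichment_p(10000, 100, 50, 8)
```

With these toy numbers the expected count is 0.5, so observing 8 yields a very small p-value, flagging the term as enriched.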
Proper citation: GOLEM An interactive, graphical gene-ontology visualization, navigation, and analysis tool (RRID:SCR_003191) Copy
http://rostlab.org/services/nlsdb/
A database of nuclear localization signals (NLSs) and of nuclear proteins targeted to the nucleus by NLS motifs. NLSs are short stretches of residues mediating transport of nuclear proteins into the nucleus. The database contains 114 experimentally determined NLSs that were obtained through an extensive literature search. Using "in silico mutagenesis", this set was extended to 308 experimental and potential NLSs. This final set matched over 43% of all known nuclear proteins and no currently known non-nuclear protein. NLSdb contains over 6000 predicted nuclear proteins and their targeting signals from the PDB and SWISS-PROT/TrEMBL databases. The database also contains over 12,500 predicted nuclear proteins from six entirely sequenced eukaryotic proteomes (Homo sapiens, Mus musculus, Drosophila melanogaster, Caenorhabditis elegans, Arabidopsis thaliana and Saccharomyces cerevisiae). NLS motifs often co-localize with DNA-binding regions. This observation was used to also annotate over 1500 DNA-binding proteins. From this site you can: * Query NLSdb * Find out how to use NLSdb * Browse the entries in NLSdb * Find out if your protein has an NLS using PredictNLS * Predict the subcellular localization of your protein using LOCtree
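Scanning a protein sequence for an NLS motif amounts to pattern matching over residues. The pattern below is a deliberately simplified, hypothetical monopartite motif (a run of basic lysine/arginine residues, as in the textbook SV40 large T antigen NLS); the motifs actually curated in NLSdb are experimentally determined and more varied:

```python
import re

# Hypothetical, simplified monopartite NLS pattern: a run of at least four
# basic residues (K/R). Real NLSdb motifs are curated and more complex.
MONOPARTITE_NLS = re.compile(r"[KR]{4,}")

def find_candidate_nls(seq):
    """Return (start_position, motif) pairs for K/R runs of length >= 4."""
    return [(m.start(), m.group()) for m in MONOPARTITE_NLS.finditer(seq)]

# The SV40 large T antigen NLS (PKKKRKV) contains such a basic run.
hits = find_candidate_nls("MAPKKKRKVEDP")
```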
Proper citation: NLSdb: a database of nuclear localization signals (RRID:SCR_003273) Copy
Protege is a free, open-source platform that provides a growing user community with a suite of tools to construct domain models and knowledge-based applications with ontologies. At its core, Protege implements a rich set of knowledge-modeling structures and actions that support the creation, visualization, and manipulation of ontologies in various representation formats. Protege can be customized to provide domain-friendly support for creating knowledge models and entering data. Further, Protege can be extended by way of a plug-in architecture and a Java-based Application Programming Interface (API) for building knowledge-based tools and applications. An ontology describes the concepts and relationships that are important in a particular domain, providing a vocabulary for that domain as well as a computerized specification of the meaning of terms used in the vocabulary. Ontologies range from taxonomies and classifications to database schemas to fully axiomatized theories. In recent years, ontologies have been adopted in many business and scientific communities as a way to share, reuse and process domain knowledge. Ontologies are now central to many applications such as scientific knowledge portals, information management and integration systems, electronic commerce, and semantic web services. The Protege platform supports two main ways of modeling ontologies: * The Protege-Frames editor enables users to build and populate ontologies that are frame-based, in accordance with the Open Knowledge Base Connectivity protocol (OKBC). In this model, an ontology consists of a set of classes organized in a subsumption hierarchy to represent a domain's salient concepts, a set of slots associated with classes to describe their properties and relationships, and a set of instances of those classes - individual exemplars of the concepts that hold specific values for their properties.
* The Protege-OWL editor enables users to build ontologies for the Semantic Web, in particular in the W3C's Web Ontology Language (OWL). An OWL ontology may include descriptions of classes, properties and their instances. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e. facts not literally present in the ontology, but entailed by the semantics. These entailments may be based on a single document or multiple distributed documents that have been combined using defined OWL mechanisms (see the OWL Web Ontology Language Guide). Protege is based on Java, is extensible, and provides a plug-and-play environment that makes it a flexible base for rapid prototyping and application development.
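The frame-based model described above (classes in a subsumption hierarchy, slots attached to classes, instances with slot values) can be sketched minimally in plain Python. All class and slot names here are hypothetical illustrations, not anything from Protege's API:

```python
# A toy frame-style ontology: a subsumption hierarchy plus slots per class.
# Names are hypothetical, chosen only to illustrate the structure.
parents = {"Dog": "Mammal", "Mammal": "Animal", "Animal": None}
slots = {"Animal": {"name"}, "Mammal": {"fur_color"}}

def is_a(cls, ancestor):
    """True if `cls` is subsumed by `ancestor` in the hierarchy."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = parents.get(cls)
    return False

def inherited_slots(cls):
    """Slots defined on a class or inherited from any ancestor."""
    acc = set()
    while cls is not None:
        acc |= slots.get(cls, set())
        cls = parents.get(cls)
    return acc
```

Subsumption is what an OWL reasoner generalizes: instead of following explicit parent links, it derives the hierarchy as a logical consequence of class descriptions.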
Proper citation: Protege (RRID:SCR_003299) Copy
http://pir.georgetown.edu/pirwww/dbinfo/pirsf.shtml
A SuperFamily classification system, with rules for functional site and protein name, to facilitate the sensible propagation and standardization of protein annotation and the systematic detection of annotation errors. The PIRSF concept is being used as a guiding principle to provide comprehensive and non-overlapping clustering of UniProtKB sequences into a hierarchical order to reflect their evolutionary relationships. The PIRSF classification system is based on whole proteins rather than on the component domains; therefore, it allows annotation of generic biochemical and specific biological functions, as well as classification of proteins without well-defined domains. There are different PIRSF classification levels. The primary level is the homeomorphic family, whose members are both homologous (evolved from a common ancestor) and homeomorphic (sharing full-length sequence similarity and a common domain architecture). At a lower level are the subfamilies which are clusters representing functional specialization and/or domain architecture variation within the family. Above the homeomorphic level there may be parent superfamilies that connect distantly related families and orphan proteins based on common domains. Because proteins can belong to more than one domain superfamily, the PIRSF structure is formally a network. The FTP site provides free download for PIRSF.
Proper citation: PIRSF (RRID:SCR_003352) Copy
A functional network for laboratory mouse based on integration of diverse genetic and genomic data. It allows users to accurately predict novel functional assignments and network components. MouseNET uses a probabilistic Bayesian algorithm to identify genes that are most likely to be in the same pathway/functional neighborhood as your genes of interest. It then displays the biological network for the resulting genes as a graph. The nodes in the graph are genes (clicking on each node will bring up the SGD page for that gene) and the edges are interactions (clicking on each edge will show the evidence used to predict that interaction). Most likely, the first results to load on the results page will be a list of significant Gene Ontology terms. This list is calculated for the genes in the biological network created by the MouseNET algorithm. If a Gene Ontology term appears on this list with a low p-value, it is statistically significantly overrepresented in this biological network. The graph may be explored further. As you move the mouse over genes in the network, interactions involving these genes are highlighted. If you click on any of the highlighted interactions in the graph, an evidence pop-up window will appear. The Evidence pop-up lists all evidence for the interaction, with links to the papers that produced it; clicking these links will bring up the relevant source citation(s) in PubMed.
Proper citation: MouseNET (RRID:SCR_003357) Copy
Computing resources structural biologists need to discover the shapes of the molecules of life. It provides access to web-enabled structural biology applications, data sharing facilities, biological data sets, and other resources valuable to the computational structural biology community. The consortium includes X-ray crystallography, NMR, and electron microscopy laboratories worldwide. The SBGrid Service Center is located at Harvard Medical School. SBGrid's NIH-compliant Service Center supports SBGrid operations and provides members with access to software maintenance, computing access, and training. Consortium benefits include: * remote management of your customized collection of structural biology applications on Linux and Mac workstations; * access to commercial applications exclusively licensed to members of the Consortium, such as NMRPipe, the Schrodinger Suite (limited tokens), and the Incentive version of PyMOL; * remote management of supporting scientific applications (e.g., bioinformatics, computational chemistry and utilities); * access to SBGrid seminars and events; and * advice about hardware configurations, operating system installations, and high performance computing. Membership is restricted to academic/non-profit research laboratories that use X-ray crystallography, 2D crystallography, NMR, EM, tomography, and other experimental structural biology technologies in their research. Most new members are fully integrated with SBGrid within 2 weeks of the initial application.
Proper citation: Structural Biology Grid (RRID:SCR_003511) Copy
A hierarchy of portable online interactive aids for motivating and modernizing probability and statistics applications. The tools and resources include a repository of interactive applets, computational and graphing tools, and instructional and course materials. The core SOCR educational and computational components include the following suite of web-based Java applets: * Distributions (interactive graphs and calculators) * Experiments (virtual computer-generated games and processes) * Analyses (collection of common web-accessible tools for statistical data analysis) * Games (interfaces and simulations of real-life processes) * Modeler (tools for distribution, polynomial and spectral model-fitting and simulation) * Graphs, Plots and Charts (comprehensive web-based tools for exploratory data analysis) * Additional Tools (other statistical tools and resources) * SOCR Java-based Statistical Computing Libraries * SOCR Wiki (collaborative Wiki resource) * Educational Materials and Hands-on Activities (varieties of SOCR educational materials) * SOCR Statistical Consulting In addition, SOCR provides a suite of tools for volume-based statistical mapping (http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_AnalysesCommandLine) via command-line execution and via the LONI Pipeline workflows (http://www.nitrc.org/projects/pipeline). Course instructors and teachers will find the SOCR class notes and interactive tools useful for student motivation, concept demonstrations and for enhancing their technology-based pedagogical approaches to any study of variation and uncertainty. Students and trainees may find the SOCR class notes, analyses, computational and graphing tools extremely useful in their learning/practicing pursuits. Model developers, software programmers and other engineering, biomedical and applied researchers may find the light-weight, plug-in-oriented SOCR computational libraries and infrastructure useful in their algorithm designs and research efforts.
The main types of SOCR resources are: * Interactive Java applets: these include a number of different applets, simulations, demonstrations, virtual experiments, tools for data visualization and analysis, etc. All applets require a Java-enabled browser (if you see a blank screen, see the SOCR Feedback to find out how to configure your browser). * Instructional Resources: these include data, electronic textbooks, tutorials, etc. * Learning Activities: these include various interactive hands-on activities. * SOCR Video Tutorials (including general and tool-specific screencasts).
Proper citation: Statistics Online Computational Resource (RRID:SCR_003378) Copy
http://mimi.ncibi.org/MimiWeb/main-page.jsp
MiMI Web gives you an easy-to-use interface to a rich NCIBI data repository for conducting your systems biology analyses. This repository includes the MiMI database, PubMed resources updated nightly, and text mined from biomedical research literature. The MiMI database comprehensively includes protein interaction information that has been integrated and merged from diverse protein interaction databases and augmented with information from many other biological sources, with deep integration into a single database and one point of entry for querying, exploring, and analyzing all these data. MiMI allows you to query all data, whether corroborative or contradictory, and to specify which sources to utilize. MiMI displays the results of your queries in easy-to-browse interfaces and provides workspaces to explore and analyze the results. Among these workspaces is an interactive network of protein-protein interactions displayed in Cytoscape and accessed through MiMI via a MiMI Cytoscape plug-in. MiMI gives you access to more information than you can get from any one protein interaction source, such as: * Vetted data on genes, attributes, interactions, literature citations, compounds, and annotated text extracts through natural language processing (NLP) * Linkouts to integrated NCIBI tools to: analyze overrepresented MeSH terms for genes of interest, read additional NLP-mined text passages, and explore interactive graphics of networks of interactions * Linkouts to PubMed and NCIBI's MiSearch interface to PubMed for better relevance rankings * Querying by keywords, genes, lists or interactions * Provenance tracking * Quick views of missing information across databases.
Data sources include: BIND, BioGRID, CCSB at Harvard, cPath, DIP, GO (Gene Ontology), HPRD, IntAct, InterPro, IPI, KEGG, Max Delbrück Center, MiBLAST, NCBI Gene, Organelle DB, OrthoMCL DB, Pfam, ProtoNet, PubMed, PubMed NLP Mining, Reactome, MINT, and Finley Lab. The data integration service is supplied under the conditions of the original data sources and the specific terms of use for MiMI. Access to this website is provided free of charge. The MiMI data is queryable through a web services API. The MiMI data is available in PSI-MITAB format. These files represent a subset of the data available in MiMI: only UniProt and RefSeq identifiers are included for each interactor, pathways and metabolomics data are not included, and provenance is not included for each interaction. If you need access to the full MiMI dataset, please send an email to mimi-help (at) umich.edu.
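PSI-MITAB is a tab-separated format whose first two columns hold the interactor A and B identifiers as `db:accession` pairs (e.g. `uniprotkb:P04637`), which matches the UniProt/RefSeq identifiers MiMI exports. A minimal sketch of pulling those identifier pairs out of a MITAB line (the example row is illustrative, not real MiMI output):

```python
def parse_mitab_ids(line):
    """Extract (database, accession) for interactors A and B from a
    PSI-MITAB line, whose first two tab-separated columns hold
    identifiers of the form 'db:accession'."""
    cols = line.rstrip("\n").split("\t")
    return tuple(tuple(field.split(":", 1)) for field in (cols[0], cols[1]))

# Illustrative MITAB row (remaining columns elided with a placeholder):
row = "uniprotkb:P04637\trefseq:NP_000537\t-"
pair = parse_mitab_ids(row)
```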
Proper citation: Michigan Molecular Interactions (RRID:SCR_003521) Copy
http://www.isi.edu/integration/karma/
An information integration software tool that enables users to integrate data from a variety of data sources including databases, spreadsheets, delimited text files, XML, JSON, KML and Web APIs. Users integrate information by modeling it according to an ontology of their choice using a graphical user interface that automates much of the process. Karma learns to recognize the mapping of data to ontology classes and then uses the ontology to propose a model that ties together these classes. Users then interact with the system to adjust the automatically generated model. During this process, users can transform the data as needed to normalize data expressed in different formats and to restructure it. Once the model is complete, users can publish the integrated data as RDF or store it in a database.
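The end product of the Karma workflow described above is RDF: each source row becomes a resource typed by an ontology class, with columns mapped to properties. The following is a hand-rolled sketch of that row-to-triples mapping in plain Python, with hypothetical placeholder URIs; it is not Karma's API, which does this through its graphical modeling interface:

```python
# Sketch of the kind of mapping Karma automates: each tabular row becomes
# an RDF resource typed by an ontology class, each column a property,
# serialized here as N-Triples. All URIs are hypothetical placeholders.
def row_to_ntriples(row_id, row, base="http://example.org/"):
    subject = f"<{base}person/{row_id}>"
    triples = [
        f"{subject} <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> "
        f"<{base}ontology#Person> ."
    ]
    for column, value in row.items():
        triples.append(f'{subject} <{base}ontology#{column}> "{value}" .')
    return triples

triples = row_to_ntriples(1, {"name": "Ada", "city": "London"})
```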
Proper citation: Karma (RRID:SCR_003732) Copy
http://wiki.c2b2.columbia.edu/honiglab_public/index.php/Software:DelPhi
DelPhi provides numerical solutions to the Poisson-Boltzmann equation (both linear and nonlinear form) for molecules of arbitrary shape and charge distribution. The current version is fast, accurate, and can handle extremely high lattice dimensions. It also includes flexible features for assigning different dielectric constants to different regions of space and treating systems containing mixed salt solutions. DelPhi takes as input a coordinate file format of a molecule or equivalent data for geometrical objects and/or charge distributions and calculates the electrostatic potential in and around the system, using a finite difference solution to the Poisson-Boltzmann equation. DelPhi is a versatile electrostatics simulation program that can be used to investigate electrostatic fields in a variety of molecular systems. Features of DelPhi include solutions to mixtures of salts of different valence; solutions to different dielectric constants to different regions of space; and estimation of the best relaxation parameter at run time.
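For reference, the equation DelPhi solves can be written in its standard textbook form; this is the generic Poisson-Boltzmann equation, not DelPhi's internal finite-difference discretization:

```latex
% Nonlinear Poisson-Boltzmann equation (potential \phi in units of kT/e,
% \epsilon the position-dependent dielectric, \bar{\kappa} the modified
% Debye-Hueckel screening parameter, \rho^{f} the fixed charge density):
\nabla \cdot \left[ \epsilon(\mathbf{r})\, \nabla \phi(\mathbf{r}) \right]
  - \bar{\kappa}^{2}(\mathbf{r}) \sinh\!\big(\phi(\mathbf{r})\big)
  = -4\pi \rho^{f}(\mathbf{r})
% The linear form replaces \sinh(\phi) with \phi.
```

The "mixed salt" and region-dependent dielectric features correspond to making the screening term and the dielectric coefficient vary over space.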
Proper citation: DelPhi (RRID:SCR_008669) Copy
http://plantgrn.noble.org/LegumeIP/
LegumeIP is an integrative database and bioinformatics platform for comparative genomics and transcriptomics to facilitate the study of gene function and genome evolution in legumes, and ultimately to generate molecular based breeding tools to improve quality of crop legumes. LegumeIP currently hosts large-scale genomics and transcriptomics data, including: * Genomic sequences of three model legumes, i.e. Medicago truncatula, Glycine max (soybean) and Lotus japonicus, including two reference plant species, Arabidopsis thaliana and Poplar trichocarpa, with the annotation based on UniProt TrEMBL, InterProScan, Gene Ontology and KEGG databases. LegumeIP covers a total 222,217 protein-coding gene sequences. * Large-scale gene expression data compiled from 104 array hybridizations from L. japonicas, 156 array hybridizations from M. truncatula gene atlas database, and 14 RNA-Seq-based gene expression profiles from G. max on different tissues including four common tissues: Nodule, Flower, Root and Leaf. * Systematic synteny analysis among M. truncatula, G. max, L. japonicus and A. thaliana. * Reconstruction of gene family and gene family-wide phylogenetic analysis across the five hosted species. LegumeIP features comprehensive search and visualization tools to enable the flexible query on gene annotation, gene family, synteny, relative abundance of gene expression.
Proper citation: LegumeIP (RRID:SCR_008906) Copy
Matlab toolbox that makes it easy to apply decoding analyses to neural data. The design of the toolbox revolves around four abstract object classes which enables users to interchange particular modules in order to try different analyses while keeping the rest of the processing stream intact. The toolbox is capable of analyzing data from many different types of recording modalities, and examples are given on how it can be used to decode basic visual information from neural spiking activity and how it can be used to examine how invariant the activity of a neural population is to stimulus transformations.
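At its simplest, population decoding of the kind the toolbox performs means training a classifier on trials of firing-rate vectors and predicting the stimulus label for held-out trials. The sketch below implements a nearest-centroid decoder, the most basic instance of that idea, in plain Python; it is a conceptual illustration, not the toolbox's (Matlab) API, and the data are toy numbers:

```python
# Minimal nearest-centroid population decoder. Each trial is a vector of
# firing rates, one entry per neuron; labels name the presented stimulus.
def train_centroids(trials, labels):
    """Compute the mean firing-rate vector for each stimulus label."""
    sums, counts = {}, {}
    for x, y in zip(trials, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def decode(centroids, x):
    """Return the label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

# Toy data: two neurons, two stimuli with distinct response patterns.
centroids = train_centroids(
    [[10.0, 1.0], [12.0, 2.0], [1.0, 9.0], [2.0, 11.0]],
    ["A", "A", "B", "B"],
)
label = decode(centroids, [11.0, 1.5])
```

Swapping in a different classifier while keeping the train/decode interface fixed mirrors the toolbox's design goal of interchangeable modules.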
Proper citation: Neural Decoding Toolbox (RRID:SCR_009012) Copy
Project aims to promote data sharing, archiving, and reuse among researchers who study human development. Focuses on creating tools for scientists to store, manage, preserve, analyze and share video and related data.
Proper citation: Databrary (RRID:SCR_010471) Copy
Markup language that provides a representation of PDB data in XML format. The description of this format is provided in the XML schema of the PDB Exchange Data Dictionary. This schema is produced by direct translation of the mmCIF-format PDB Exchange Data Dictionary. Other data dictionaries used by the PDB have been electronically translated into XML/XSD schemas, and these are also presented in the list below. * PDBML data files are provided in three forms: ** fully marked-up files, ** files without atom records, and ** files with a more space-efficient encoding of atom records. * Data files in PDBML format can be downloaded from the RCSB PDB website or by FTP. * Software tools for manipulating PDB data in XML format are available.
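Because PDBML is plain XML, atom records can be read with any XML parser. The sketch below uses Python's standard library to pull Cartesian coordinates out of PDBML-style markup; note the element names are simplified for readability, whereas real PDBML files namespace everything under the PDBx schema (e.g. `PDBx:atom_site` inside `PDBx:atom_siteCategory`):

```python
import xml.etree.ElementTree as ET

# Simplified, non-namespaced illustration of PDBML-style atom records.
doc = """<atom_siteCategory>
  <atom_site id="1"><Cartn_x>11.0</Cartn_x><Cartn_y>8.5</Cartn_y><Cartn_z>-2.0</Cartn_z></atom_site>
  <atom_site id="2"><Cartn_x>12.1</Cartn_x><Cartn_y>7.9</Cartn_y><Cartn_z>-1.4</Cartn_z></atom_site>
</atom_siteCategory>"""

root = ET.fromstring(doc)
coords = [
    tuple(float(site.find(tag).text) for tag in ("Cartn_x", "Cartn_y", "Cartn_z"))
    for site in root.iter("atom_site")
]
```

The "files without atom records" variant mentioned above exists precisely because atom_site records dominate file size; tools that only need header-level metadata can skip them entirely.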
Proper citation: Protein Data Bank Markup Language (RRID:SCR_005085) Copy
Open platform for analyzing and sharing neuroimaging data from human brain imaging research studies. Brain Imaging Data Structure (BIDS) compliant database. Formerly known as OpenfMRI. Data archive holding magnetic resonance imaging data. Platform for sharing MRI, MEG, EEG, iEEG, and ECoG data.
Proper citation: OpenNeuro (RRID:SCR_005031) Copy
http://science.kqed.org/quest/
An award-winning multimedia science and environment series created by KQED, San Francisco, the public media station serving Northern California. Launched in February 2007, by the end of its fourth season (in September 2010) QUEST had reached approximately 36 million viewers and listeners through its traditional TV and radio broadcasts and its growing Web audience. QUEST's ultimate aim is to raise science literacy in the San Francisco Bay Area and beyond, inspiring audiences to discover and explore science and environment issues for themselves. Every season, KQED's QUEST produces: * half-hour television episodes that air weekly, exploring the cutting-edge work of Northern California scientists and researchers (QUEST airs Wednesdays 7:30pm on KQED Public Television 9); * weekly radio reports covering urban environmental issues, which often include multimedia slide shows and interactive online maps (QUEST airs Mondays 6:30am and 8:30am on KQED Public Radio 88.5 FM); * educational resources for use by formal and informal educators; QUEST also provides professional development for science educators to support multimedia and technology integration in science classrooms and programs; * 20 six-minute stories for its web-only series, Science on the SPOT (launched in 2010), which takes a fresh, fast and curious look at science with stories about albino redwoods, the science of fog and banana slugs, to name a few; * a daily science blog written by Northern California scientists, QUEST producers and science enthusiasts; * exclusive web extras featuring extended interviews with scientists, Flickr photos, and science hikes. Formal and informal educators who would like to become involved with the educational outreach program should contact: ScienceEd (at) kqed.org.
Proper citation: QUEST (RRID:SCR_005210) Copy
Kepler is a software application for analyzing and modeling scientific data. Using Kepler's graphical interface and components, scientists with little background in computer science can create executable models, called scientific workflows, for flexibly accessing scientific data (streaming sensor data, medical and satellite images, simulation output, observational data, etc.) and executing complex analyses on these data. Kepler is developed by a cross-project collaboration led by the Kepler/CORE team. The software builds upon the mature Ptolemy II framework, developed at the University of California, Berkeley. Ptolemy II is a software framework designed for modeling, design, and simulation of concurrent, real-time, embedded systems. The Kepler Project is dedicated to furthering and supporting the capabilities, use, and awareness of the free and open source scientific workflow application, Kepler. Kepler is designed to help scientists, analysts, and computer programmers create, execute, and share models and analyses across a broad range of scientific and engineering disciplines. Kepler can operate on data stored in a variety of formats, locally and over the internet, and is an effective environment for integrating disparate software components, such as merging R scripts with compiled C code, or facilitating remote, distributed execution of models. Using Kepler's graphical user interface, users simply select and then connect pertinent analytical components and data sources to create a scientific workflow, an executable representation of the steps required to generate results. The Kepler software helps users share and reuse data, workflows, and components developed by the scientific community to address common needs. Kepler is a Java-based application that is maintained for the Windows, OSX, and Linux operating systems.
The Kepler Project supports the official code-base for Kepler development, as well as provides materials and mechanisms for learning how to use Kepler, sharing experiences with other workflow developers, reporting bugs, suggesting enhancements, etc. The Kepler Project Leadership Team works to assure the long-term technical and financial viability of Kepler by making strategic decisions on behalf of the Kepler user community, as well as providing an official and durable point-of-contact to articulate and represent the interests of the Kepler Project and the Kepler software application. Details about how to get more involved with the Kepler Project can be found in the developer section of this website.
Proper citation: Kepler (RRID:SCR_005252) Copy
http://cvcweb.ices.utexas.edu/cvcwp/?page_id=100
VolumeRover (a.k.a. VolRover) is an interactive multi-purpose image processing application that can visualize three-dimensional imaging data of any size (as big as a terabyte) on a commodity PC or workstation, and additionally supports the following image processing operations: image contrast enhancement, filtering/noise reduction, image segmentation, isocontouring, symmetry detection (for virus maps), and boundary-free image skeletonization. VolRover provides a user interface to a number of CVC software packages, including Segmentation, Contrast Enhancement, and Motif Elucidation.
Proper citation: VolumeRover (RRID:SCR_005457) Copy
http://bioimage.ucsb.edu/bisque
Open source database for exchange and exploration of biological images. Used to store, visualize, organize and analyze images in the cloud. Centered around a database of images and metadata.
Proper citation: Bisque database (RRID:SCR_005559) Copy
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on July 1, 2022. Organization whose mission is to build and promote a sustainable ecosystem of professional societies, funding agencies, foundations, companies, and citizens together with life science researchers and innovators in computing, infrastructure and analysis with the expressed goal of translating new discoveries into tools, resources and products.
Proper citation: DELSA (RRID:SCR_006231) Copy