Tutorial Abstracts

Morning Session
AM1 Ontology Development for the Semantic Web: Protégé's Web Ontology Language (OWL) interface
AM2 Integration of Genomic, Biomedical and Clinical Databases and Tools to Enable Genomic Medicine
AM3 Computational analyses across the BioCyc collection of Pathway/Genome Databases
AM4 Introduction to Statistics for Bioinformatics
AM5 Introduction to Molecular Visualization
Afternoon Session
PM6 Ontologies for Biomedicine – How to make and use them
PM7 Computational Analysis of Tiling Arrays for ChIP-chip on Mammalian Genomes
PM8 Installing, Configuring and Using GMOD Web-based Genome Visualization Tool (GBrowse)
PM9 Improvements in automated identification of protein sequences and post-translational modifications from tandem mass spectrometry data
PM10 Systems Biology of Host-Pathogen Interactions and Microbial Communities


AM1

 

Ontology Development for the Semantic Web: Protégé's Web Ontology Language (OWL) interface

Daniel Rubin, MD, MS, Kaustubh Supekar, Stanford Medical Informatics

The Semantic Web is a promising technology that makes ontologies accessible and connectable to data and computer processing in a decentralized manner. The Semantic Web may have particular applicability to the life sciences, a field that is rich in the diversity of existing bio-ontologies and the vast amounts of data becoming available in cyberspace. Protégé OWL is an open source tool created to support ontology development for the Semantic Web. It is a plug-in extension to the Protégé ontology development platform. Protégé OWL allows users to edit ontologies in the Web Ontology Language (OWL) and to use description logic classifiers to maintain consistency of their ontologies. Protégé OWL can also assist developers of intelligent applications, because many of the problem-solving tasks they seek to automate can be construed as classification tasks, and thus they can use Protégé OWL to enable these applications. Being integrated with Protégé, Protégé OWL allows users to exploit Protégé’s core features and services such as graphical user interfaces, a variety of storage formats, and data acquisition and visualization tools. In this introductory tutorial, we will demonstrate the fundamental features of Protégé OWL to help users develop and manage OWL ontologies. We will also show how to use automatic classification to help content authors create robust ontologies. We will motivate our tutorial by demonstrating the exciting possibilities of these technologies with a real-world example Semantic Web application in the biomedical domain that was engineered using OWL, automatic classification, and the Protégé OWL platform.
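The automatic classification mentioned above can be illustrated with a toy sketch. This is not Protégé OWL's API; it only mimics the kind of inference a description logic classifier performs, treating each class as a set of property restrictions and concluding that one class is subsumed by another when the latter's restrictions are a subset of the former's. All class and property names below are invented.

```python
# Toy description-logic-style classification (illustrative only, not
# Protégé OWL's API).  Each class is defined by a set of property
# restrictions; a class is inferred to be a subclass of every class
# whose restrictions are a subset of its own.

DEFINITIONS = {
    "Protein": {("is_a", "Macromolecule")},
    "Enzyme":  {("is_a", "Macromolecule"), ("has_function", "Catalysis")},
    "Kinase":  {("is_a", "Macromolecule"), ("has_function", "Catalysis"),
                ("has_substrate", "ATP")},
}

def inferred_superclasses(cls):
    """Classes whose restriction sets are contained in cls's set."""
    mine = DEFINITIONS[cls]
    return sorted(other for other, restrictions in DEFINITIONS.items()
                  if other != cls and restrictions <= mine)

print(inferred_superclasses("Kinase"))  # prints ['Enzyme', 'Protein']
```

A real classifier handles far more (existential and universal restrictions, equivalence, inconsistency detection), but the subsumption computed here is the same kind of inference that keeps an OWL ontology's class hierarchy consistent.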

Expected goals and objectives:
1. Introduce bio-ontologies and Semantic Web technologies, particularly focused on the biomedical domain.
2. Provide an overview of open source tools for creating ontologies and Semantic Web applications (Protégé and Protégé-OWL).
3. Demonstrate how to use Protégé-OWL to construct a simple Semantic Web application ontology step-by-step.
4. Familiarize participants with active efforts in the community to apply Semantic Web technologies to biomedical problems (such as the National Center for Biomedical Ontology and the W3C Semantic Web for Life Sciences group).

Intended audience: This is an introductory tutorial. Some conceptual understanding of biomedicine would be helpful.

Daniel Rubin: My research focus since completing the biomedical informatics program at Stanford has been in developing and exploiting knowledge representation approaches in biomedicine. I have previously been a participant in the PharmGKB project, during which time I developed ontologies for representing genetic and pharmacokinetic knowledge to enable browsing, searching, and analyzing pharmacogenetics data. Subsequently, I coordinated a project at Stanford to create a virtual human by linking ontologies to segmented images of the visible human and mathematical simulation models to predict the physiological consequences of penetrating injury. A key aspect of this work was using OWL as a representation formalism to support automated computer reasoning in this task. Currently, I am the Executive Director of the National Center for Biomedical Ontology (http://bioontology.org), a National Center for Biomedical Computing funded under the NIH Roadmap. We are collaborating with biomedical researchers to develop and use biomedical ontologies to streamline discovery in contemporary large-scale science projects.

Kaustubh Supekar has extensive background in creating Semantic Web applications.  He is currently working on a project with the Biomedical Informatics Research Network (BIRN) to use ontologies to support data integration and analysis in electron microscopic images of neural tissue.

Experience: We have individually taught courses and performed demonstrations at Protégé workshops that have been given over the past year as well as at scientific meetings such as the Semantic Web conference and ISMB.  We will be doing a tutorial on the Semantic Web at the Semantic Technology Conference in March 2007.  Daniel Rubin has given guest lectures in many Stanford courses and seminars, including BMI 210, 211, and 212, in Doug Brutlag’s bioinformatics class, in the SMI and bioinformatics short courses, and in the database seminar series.



AM2

 

Integration of Genomic, Biomedical and Clinical Databases and Tools to Enable Genomic Medicine

Atul Butte, MD, PhD, Stanford Medical Informatics

The next step for genetics and genomics is associating such data with human phenotypic data, and the largest source of phenotypes is in the clinical record. We will cover the various types of clinical data, the ontologies currently used in medicine, and how these can interface with genetics, genomics, and proteomics.

Expected Goals and Objectives: Over the past 10 years, high-dimensional investigations related to human disease have expanded considerably in breadth and depth. The breadth of such investigations spans at least 30 types of high-dimensional measurement and experimental modalities, including RNA expression microarrays, DNA sequencing, protein identification, mutagenesis, RNA interference, and many others. The depth of such investigations has grown to include measurements of entire sets of transcripts, proteins, and genomes. Most recently, these technologies have started to be applied to the study of many diseases. However, application of these measurement modalities and related algorithms to clinical data presents its own set of challenges, including the paucity of cases, the difficulty in representing and measuring the effects of the environment on people, the distinction between diseases and phenotypes, and even the legal restrictions against additional data acquisition. These are all actively studied challenges, and proposed solutions to these are commonly viewed as cutting-edge. In the US, the NIH Roadmap for Medical Research has led to multiple funding opportunities for bioinformaticians to collaborate with clinical researchers to promote and facilitate translational research. For example, the Clinical and Translational Science Awards, the replacement for the General Clinical Research Centers, require a strong biomedical informatics collaborative component. This tutorial is a timely one, as bioinformatics professionals are being increasingly asked to participate in, and even organize, these groups.


Topic Area:
• Medical Bioinformatics: 40%
• Transcriptomics: 10%
• Proteomics: 10%
• Sequence Analysis: 10%
• Database and Data Integration: 20%
• Ontologies: 10%


Tutorial Outline:
1. Review (45 minutes)
The first part of the tutorial will be the most didactic. It will include a review of:

  • The biology behind the measurement modalities: teaching just enough biology to understand the various measurement modalities: polymorphisms, haplotypes, proteomics, gene expression, metabolomics, protein-protein interactions, and RNAi.
  • Nature and format of expression, polymorphism, and proteomic data. Emphasis on the different characteristics of these measurement systems, including noise profiles, and how normalization of the data sets can be approached (and common mistakes).
  • Description of the most frequently used analysis techniques for each measurement type. Strengths and weaknesses will only be summarized. The questions for which each might be better suited will be addressed, as well as reasonable approaches to the interpretation of results generated by these techniques.
  • An overview of the most commonly used structured vocabularies, taxonomies, and ontologies used in clinical medicine and research.
2. Clinical reasons to interface genomic and clinical data (45 minutes)
The second part of the tutorial will focus on motivating why genomic and genetic data should be interfaced with the clinical record, for the direct benefit of patients. What kind of clinical tools would such an interface enable?
  • What is disease and genetic predisposition to disease?
  • How is clinical genetic and genomic data collected and used today? Examples given of specific institutions and their practices.
  • How is genetic information currently used in all medical specialties?
  • How genetic data is used to guide therapy, and how clinical genetic tests are found
  • Differences between research and clinical genomic and genetic data; CLIA approval in the United States.
3. Research goals in interfacing genomic and clinical data (45 minutes)
The third part of the tutorial will focus on specific hypotheses and questions that can be asked if genetic and genomic data were better integrated.
  • How do we interface genomic and clinical data to study patient disease-free status and survival? How do we interface genomic and clinical data to study a disease and potentially find clinically relevant subtypes of a disease?
  • Where do animals and cell lines fit in? How can the study of these directly enable clinical diagnosis?
  • How is genomics being used to identify potential drug targets?
  • What are the categories of biomarkers and why are they useful? What are the unique challenges in applying supervised machine learning techniques to clinical questions, in terms of prior probabilities of disease?
  • Can we relate genomic and clinical data through the diagnosed and studied diseases in both domains?
4. Discussion: Participants will be encouraged to explore how they might use these techniques in domains of interest to them, through questions and answers throughout the tutorial. We will leave 15 minutes for discussion at the end of the tutorial as well.
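Many of the questions above begin with the same mechanical step: joining molecular measurements to clinical records on a shared patient identifier. The sketch below shows that step only; all field names and values are hypothetical.

```python
# Hypothetical patient-keyed join of expression calls to clinical records.
clinical = [
    {"patient_id": "P1", "diagnosis": "type 2 diabetes", "age": 54},
    {"patient_id": "P2", "diagnosis": "healthy control", "age": 61},
]
expression = [
    {"patient_id": "P1", "gene": "INS", "log_ratio": 1.8},
    {"patient_id": "P2", "gene": "INS", "log_ratio": -0.2},
    {"patient_id": "P9", "gene": "INS", "log_ratio": 0.5},  # no clinical record
]

# Index clinical records by patient, then merge each expression row
# with its matching clinical record, dropping unmatched measurements.
by_patient = {record["patient_id"]: record for record in clinical}
joined = [{**expr, **by_patient[expr["patient_id"]]}
          for expr in expression if expr["patient_id"] in by_patient]

for row in joined:
    print(row["patient_id"], row["diagnosis"], row["gene"], row["log_ratio"])
```

In practice the hard problems are exactly the ones the tutorial discusses: inconsistent identifiers, coded diagnoses, missing data, and legal limits on linkage, not the join itself.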

Tutorial level:
Biomedicine: Advanced
Computer Science: Basic
Statistics: Basic
Programming: Basic
Prior knowledge required:
Basic statistics (such as t-tests), basic biology (such as DNA, RNA, synthesis and function of proteins), some awareness of high-dimensional measurement systems in molecular biology (such as genetics, microarrays, mass spectrometry, or sequencing), and an interest in medical or clinical problems.

Intended audience: The intended audience includes academic faculty or professionals setting up bioinformatics facilities and/or relating these to clinical data; health information professionals responsible for clinical databases or data warehouses and tying these to researchers; informaticians, clinicians, and scientists interested in genetics, functional genomics, and microarray analysis; and students.  

Atul Butte, MD, PhD is Assistant Professor in Medicine (Medical Informatics) and Pediatrics at the Stanford University School of Medicine, and a board-certified pediatric endocrinologist. He obtained his B.A. in Computer Science (Honors) from Brown University and worked several stints as a software engineer at Apple Computer (on the System 7 team) and Microsoft Corporation (on the Excel team). Dr. Butte obtained his M.D. from Brown University School of Medicine and worked as a research fellow at NIDDK through the Howard Hughes/NIH Research Scholars Program, studying insulin receptor signal transduction. His Ph.D. is in Health Sciences and Technology from the Medical Engineering / Medical Physics Program in the Division of Health Sciences and Technology, at Harvard Medical School and Massachusetts Institute of Technology.

Dr. Butte’s laboratory focuses on solving problems relevant to genomic medicine by developing new biomedical-informatics methodologies in integrative biology. He has authored more than 25 publications in bioinformatics, medical informatics, and molecular diabetes. He is co-author of one of the first books on microarray analysis titled Microarrays for an Integrative Genomics. His recent awards include the 2006 PhRMA Foundation Research Starter Grant, the 2001 American Association for Cancer Research Scholar-In-Training Award and the 2001 Lawson Wilkins Pediatric Endocrine Society Clinical Scholar Award.



AM3

Computational analyses across the BioCyc collection of Pathway/Genome Databases

Peter Karp, PhD, Bioinformatics Research Group, SRI International

BioCyc is a collection of 205 pathway/genome databases for most organisms whose genomes have been completely sequenced. It is a large and comprehensive resource for systems biology research. We expect that many bioinformatics and computational biology researchers will be interested in computing with BioCyc to address global biological questions, such as studying the phylogenetic distribution and evolution of metabolic pathways. The goal of this tutorial will be to provide researchers with the information they need to perform global analyses of BioCyc. The tutorial will cover the methodologies used to create BioCyc, a description of the complex database schema and ontologies that underlie BioCyc, and descriptions of the APIs that are available to query BioCyc. The tutorial will also present the Pathway Tools semantic inference layer, which is a library of commonly used queries that we have encoded to save researchers time. We will also consider common stumbling blocks and misconceptions that can lead to misinterpretations of the data.
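As a flavor of what computing across the collection involves, the flat files distributed with Pathway/Genome Databases use a simple attribute-value record format. The parser below is a minimal sketch, not an official BioCyc library, and the sample record is abridged and simplified.

```python
# Minimal parser for attribute-value flat-file records of the form
# "ATTRIBUTE - value", one per line, with records separated by "//".
def parse_records(lines):
    records, current = [], {}
    for line in lines:
        line = line.rstrip("\n")
        if line == "//":            # end of record
            if current:
                records.append(current)
            current = {}
        elif " - " in line and not line.startswith("#"):
            attribute, value = line.split(" - ", 1)
            current.setdefault(attribute, []).append(value)
    if current:
        records.append(current)
    return records

sample = """UNIQUE-ID - GLYCOLYSIS
COMMON-NAME - glycolysis
REACTION-LIST - PGLUCISOM-RXN
REACTION-LIST - PEPDEPHOS-RXN
//""".splitlines()

record = parse_records(sample)[0]
print(record["UNIQUE-ID"][0], len(record["REACTION-LIST"]))  # GLYCOLYSIS 2
```

Real files also use continuation lines and richer conventions, which is one reason the tutorial's coverage of the schema and the provided APIs matters.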

Expected outcomes and goals: Students will learn how to perform computational analyses across the large BioCyc collection of Pathway/Genome Databases.

Prerequisites: Basic familiarity with programming and databases, and basic familiarity with concepts in biology, including metabolic pathways, genetics, and structural biology algorithms.

Teaching experience and background: Dr. Karp has given several tutorials at past ISMB meetings, and many lectures at conferences and in classrooms.



AM4


Introduction to Statistics for Bioinformatics

Michael G. Walker, Ph.D., President, Walker Bioscience

This tutorial will introduce the most widely used statistical methods for bioinformatics, including descriptive statistics, probability, analysis of variance, discriminant analysis and cluster analysis. Examples will be drawn from biomedical cases, including gene expression microarray data, and illustrated using Excel and other analysis packages.
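As a taste of the level of the material, here is a worked example of one of the simplest methods covered, a two-sample comparison using Welch's t statistic, written with only the Python standard library. The expression values are hypothetical.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical log2 expression values for one gene in two groups.
control = [9.8, 10.1, 10.0, 9.9, 10.2]
treated = [11.0, 11.4, 10.9, 11.2, 11.5]

t, df = welch_t(control, treated)
print(round(t, 2), round(df, 1))  # prints -8.94 6.7
```

The t value would then be compared against a t distribution with df degrees of freedom to obtain a p-value; multiple-testing correction, which matters greatly for microarray data, is a separate topic.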

Background and Experience: [Added by CSB Tutorial Chair] Dr. Walker has extensive teaching experience; he has taught introductory statistics in both academia and industry. He has won world-wide praise for excellence in teaching and developing lecture content which focuses on essential topics in statistics for bioinformatics.

Prework 1 and 2 for AM4 Tutorial Attendees.

 



AM5

 

An Introduction to Molecular Visualization
Scooter Morris, Conrad Huang, Pharmaceutical Chemistry, University of California, San Francisco

Structural genomics projects are providing growing numbers of experimental protein and protein-complex structures, and growing numbers of theoretical models are being predicted from primary sequence. Biologists increasingly need to understand and communicate the structures, functions and relationships among these proteins and protein complexes. As a result, molecular visualization is becoming an important tool for presenting and communicating the results of biological experiments and research. This tutorial will provide a basic foundation for understanding molecular structures through the use of visualization tools. Attendees will learn the basics of molecular visualization and will be given an overview of available tools and techniques for visualization, analysis and modeling of protein structure. To make these concepts concrete, attendees will be shown the academic program UCSF Chimera in more detail and receive instruction in its features and use. The field of structural biology is still changing, and new techniques are continually being developed. Attendees will be shown how they can add new analysis techniques and their own data to the visualization.

Expected outcomes and goals: Attendees will learn tools and techniques for molecular visualization of macromolecular systems. Specific instructions will be given for UCSF Chimera to provide a basic working knowledge of how to load, manipulate, analyze and visualize macromolecules.

Prerequisites: Conceptual understanding of programming languages, structural biology

Outline:

  1. Introduction to Molecular Visualization
    • a. Data sources
    • b. Representations
    • c. Manipulations
    • d. Analysis
    • e. Modeling
  2. Available Tools
    • a. Visualization
    • b. Analytical tools
    • c. Modeling tools
  3. Using UCSF Chimera
    • a. Basic features
    • b. Comparison with other packages
    • c. Concepts
  4. Scenarios of use
    • a. Structure analysis
    • b. Sequence-structure relationships
    • c. Docking
    • d. Publication
    • e. Animation
  5. Extending Chimera
    • a. Incorporating user data
    • b. Scripting
    • c. Python extensions
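As a small illustration of the "Data sources" topic in the outline, molecular coordinates most often come from PDB files, which are fixed-column text and easy to read programmatically. The two ATOM records below are fabricated for the example; the column slices follow the PDB format specification.

```python
# Parse atom name, residue name, and coordinates from PDB-format lines.
# PDB columns (1-based): 13-16 atom name, 18-20 residue name, 31-54 x/y/z.
SAMPLE = [
    "ATOM      1  N   MET A   1      38.428  13.104   6.364  1.00 23.11           N",
    "ATOM      2  CA  MET A   1      38.573  12.441   7.680  1.00 22.92           C",
]

def parse_atoms(lines):
    atoms = []
    for line in lines:
        if line.startswith(("ATOM", "HETATM")):
            atoms.append({
                "name": line[12:16].strip(),
                "resname": line[17:20].strip(),
                "xyz": (float(line[30:38]), float(line[38:46]), float(line[46:54])),
            })
    return atoms

for atom in parse_atoms(SAMPLE):
    print(atom["name"], atom["resname"], atom["xyz"])
```

Visualization packages such as Chimera do this parsing for you; seeing the raw format clarifies what "loading a structure" actually reads.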


Teaching experience and background: Both instructors have taught in numerous workshops and presented in a variety of conferences, seminars and tutorials. Both instructors are also lecturers at UCSF.

 



PM6

 

Ontologies for Biomedicine – How to make and use them


Amar Das, MD, PhD, Nigam Shah, MBBS, PhD, Stanford Medical Informatics, Stanford School of Medicine

Ontologies are becoming essential as the amount and variety of data we handle in the biology domain rise. Simultaneously, the need to organize, coordinate and disseminate ontologies as well as ontology development methods is now accepted, as evidenced by the funding of the National Center for Biomedical Ontology (NCBO). Part of the mission of the NCBO is to conduct education and dissemination activities in the field of biomedical ontologies. Though the need for ontologies is widely appreciated, the right manner in which to use and develop them is not. Researchers still resort to ad hoc methods when using and/or developing ontologies. This tutorial will provide an overview of the various ways in which ontologies are used in bioinformatics and biomedicine, along with pointers to lesser known but potentially more rewarding applications of ontologies. This tutorial will educate participants on what ontologies are and how they are currently used, as well as outline best practices for their development.

Goals and Objectives: The tutorial will provide an overview of the current uses of ontologies in bioinformatics and instruction on ontology design and use. The instruction will be via an interactive session emphasizing the best practices for ontology design and use.

Intended Audience: This tutorial will be aimed at advanced graduate students and active researchers who will need to use ontologies as a part of their routine research work; either for interpreting their own data or developing applications that assist in data integration.

Prerequisites: Familiarity with concepts in molecular biology such as genes, proteins, promoters, introns and exons is expected. A basic understanding (about one semester) of discrete mathematics and programming concepts would be helpful. Attendance at the "Ontology Development for the Semantic Web" tutorial (AM1 session) is HIGHLY RECOMMENDED.

Tutorial Outline:

  1. Overview of current uses of ontologies in Bioinformatics [40 min]
    1. As a controlled vocabulary to describe genes and gene products
      1. The Gene Ontology
    2. As a data exchange format and for data integration
      1. MGED, SBML and BioPAX as examples
    3. To define a knowledgebase schema
      1. BioCyc and Reactome as examples
    4. For driving natural language processing
      1. Textpresso as example
    5. For semantically rich querying of federated databases
      1. TAMBIS as example
    6. Creating formal representations of biological processes

 

  2. Ontologies – What they are and What they are not [20 min]
    1. The various meanings of “ontology” from philosophy, computer science and information science will be discussed.
    2. This module will clarify the various interpretations of ontology such as terminologies, taxonomies, application ontologies, depicting ontologies, upper ontologies as well as explain how the computer/information science meanings are different - but related to - the philosophical meaning of the word ontology.
    3. What ontologies are not
      1. Ontologies and ontology representation languages are not adequate to perform “simulations”. We need to support these activities in biomedicine. How do we allow for that?
  3. Basics of Developing ontologies (learn the most common mistakes and the kind of design decisions to make) [75 min]
    1. Ontology design 101 (The computer science perspective – 30 min)
      1. When to make a class? When to subclass?
      2. Choice of the representation formalism.
      3. What is the level of domain expertise required?
    2. Ontology design 201 (The philosophy perspective – 30 min)
      1. Logic and model theory in ontologies
        1. How the reasoning used by philosophical ontologists can be helpful in recognizing and avoiding potential logical mistakes such as use-mention confusion and circular definitions.
      2. What are the advantages of being this rigorous?
    3. Ontology design in practice (The GO perspective – 15 min)
      1. What mistakes to avoid if starting today?
      2. Community awareness: Ontology development is a community effort, what are the essentials that everyone should know?

 

  4. Wrap-up questions and answers with discussion [15 min]

 



PM7

Computational Analysis of Tiling Arrays for ChIP-chip on Mammalian Genomes

W. Evan Johnson, Dept. of Biostatistics, Harvard School of Arts and Sciences


Chromatin immunoprecipitation coupled with DNA microarray analysis (ChIP-chip) has quickly evolved as a popular technique to study the in vivo targets of DNA-binding proteins at the genome level. Generally, DNA is crosslinked to proteins at sites of protein-DNA interaction, sheared into small fragments, and then precipitated by antibodies specific to the protein of interest. The precipitated protein-bound DNA fragments are purified, amplified, labeled, and hybridized to tiling microarrays.

Many tiling array platforms have now been developed for mammalian genomes, allowing for the unbiased mapping of transcription factor binding sites across these genomes. These platforms come in many varieties including short and long oligos, and one and two channel arrays. Because of the complex nature of mammalian genomes and the massive amounts of data produced by the arrays, there are many computational challenges to dealing with the data produced by tiling arrays. The data are often quite noisy, so low-level analysis methods must be applied for chip normalization and probe background adjustment. Additionally, the nature and amount of data produced by tiling arrays requires innovative methods to accurately detect binding sites across the genome.

Once the binding sites have been identified, one can conduct de novo motif finding using available computational analysis methods to find new binding motifs or use available tools to search for enrichment of previously known transcription factor binding motifs, locate areas of conservation across genomes, and find protein cofactors, target genes (and their functions), and other elements of the regulatory network of interest inferred by the binding regions.

Many well-known analysis procedures will be introduced in this tutorial. We will also present an example from our research to illustrate normalization, background adjustment, and identification of transcription factor binding sites. Finally, we will apply available web tools to find biologically relevant information from our binding sites.

Goals, objectives:
This tutorial will briefly introduce a Chromatin ImmunoPrecipitation (ChIP) on tiling arrays (chip) experiment on mammalian genomes, discuss the purpose of ChIP-chip experiments, and give a detailed description and demonstration of some of the tools and methods available to analyze ChIP-chip data.

Intended audience:
Biologists interested in utilizing ChIP-chip technology for biological discovery in their labs; Computational Biologists/Bioinformaticians with collaborators in conducting ChIP-chip experiments or with interest in analyzing ChIP-chip data.

Prerequisites: Basic understanding of Biology and microarrays; Basic statistics

 



PM8

Installing, Configuring and Using the GMOD Web-based Genome Visualization Tool (GBrowse)

Scott Cain, Ph. D. GMOD Project Coordinator, Cold Spring Harbor Laboratory

The Generic Genome Browser (GBrowse) is a web-based, graphical browser for genomic information that has been adopted for use by over 100 organizations. This tutorial will cover installing and configuring GBrowse, starting with the basics of how data needs to be formatted and simple configuration for viewing that data, and moving on to more complex topics like showing multi-segmented features, protein reading frames, and genome-wide graphs. This tutorial will also cover the use of small snippets of Perl in the configuration to demonstrate the considerable versatility of display that GBrowse gives its users.
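To give a flavor of the configuration format, here is a representative track stanza with a small Perl callback of the sort the tutorial covers. The track definition is illustrative; consult the GBrowse documentation for the exact options supported by your version.

```
[Genes]
feature      = gene
glyph        = generic
height       = 8
key          = Example genes
# a Perl snippet as an option value: color features by strand
bgcolor      = sub {
                 my $feature = shift;
                 return $feature->strand >= 0 ? 'blue' : 'red';
               }
```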

Intended Audience: This is an introductory tutorial; attendees should be comfortable performing simple system administration tasks like stopping and starting services. If attendees want to follow along "live", they should have a laptop with GBrowse and prerequisites installed. Please see http://www.gmod.org/ggb for instructions and downloads.

Scott Cain is a member of the professional research staff at Cold Spring Harbor Laboratory and is the GMOD (Generic Model Organism Database) project coordinator. As coordinator, he has participated in the development of several GMOD components, including the schema (known as 'Chado') and related tools, and the Generic Genome Browser (GBrowse). He has previously taught several GBrowse tutorials. Scott has taught computer science and programming courses and is currently on the faculty of the University of Phoenix. Scott was formerly the lead bioinformatics developer for the biotechnology company Athersys, Inc.



PM9

Computational mass-spectrometry advances in the identification of proteins and post-translational modifications

Nuno Bandeira, Computer Science and Engineering, University of California, San Diego

Tandem mass spectrometry (MS/MS) is nowadays a fundamental and far-reaching instrumentation technique that enables many different types of proteomic studies. One of its most important areas of application is that of peptide and protein identification, and several mainstream identification tools have been based on the concept of matching MS/MS spectra against databases of protein sequences (e.g., SEQUEST and Mascot). However, these tools face a severe bottleneck when attempting identification of unexpected post-translational modifications and provide no solutions when the putative protein sequences are not known in advance. This tutorial will focus on recent mass spectrometry-based identification approaches based on the combination of unidentified MS/MS spectra. These approaches have been shown to be widely applicable to everyday MS/MS samples and to substantially improve the quality of de novo sequence reconstructions and the number of identified post-translational modifications. The simplest example of this type of approach is clustering: combining different MS/MS spectra from the same peptide. This tutorial will additionally cover the combination of spectra from other types of related peptides like partially overlapping peptides or different variants of the same peptide (e.g. a peptide P and its modified variants P*, P#, P*#, etc.). The latter have also provided the foundations of a recently proposed database search approach that never compares a spectrum against a database (Bandeira et al., RECOMB 2006). Participants in this tutorial can expect to gain both a conceptual understanding of the algorithms and techniques developed in this field and a practical introduction to the usage of related novel tools.
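The clustering step described above can be caricatured in a few lines: bin each spectrum's peaks into an intensity vector, then group spectra whose vectors are nearly parallel. This sketch is illustrative only, not any published tool, and the peak lists are invented.

```python
import math

def binned(spectrum, bin_width=1.0, max_mz=100.0):
    """Sum peak intensities into fixed-width m/z bins."""
    vec = [0.0] * int(max_mz / bin_width)
    for mz, intensity in spectrum:
        vec[int(mz / bin_width)] += intensity
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy peak lists (m/z, intensity): the first two spectra come from the
# same hypothetical peptide, the third from an unrelated one.
spectra = [
    [(50.1, 100), (63.2, 40), (75.0, 80)],
    [(50.2, 90),  (63.1, 50), (75.1, 70)],
    [(20.0, 60),  (41.5, 90), (88.8, 30)],
]
vecs = [binned(s) for s in spectra]
print(round(cosine(vecs[0], vecs[1]), 2), round(cosine(vecs[0], vecs[2]), 2))
```

Merging the spectra within a cluster boosts signal-to-noise before interpretation; the approaches covered in the tutorial extend the same idea to overlapping peptides and modified variants of a peptide.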

Expected outcomes and goals:
This tutorial will focus on recent tandem mass spectrometry (MS/MS)-based approaches to the identification of proteins and post-translational modifications. We will cover several new promising techniques that have been proposed based on the combination of unidentified MS/MS spectra followed by additional interpretation steps and show how these can be used to overcome the difficulties faced by currently available mainstream tools. Participants in this tutorial can expect to gain both a conceptual understanding of the recent algorithms and techniques developed in computational mass spectrometry and a practical introduction to the usage of related novel tools.

Prerequisites: Conceptual understanding of programming languages, some experience with statistics and algorithms, conceptual understanding of molecular biology.

Nuno Bandeira is a fourth-year Ph.D. student in the department of Computer Science and Engineering (CSE) at the University of California, San Diego (UCSD). Over the last 3.5 years he has worked with Prof. Pavel Pevzner on the computational analysis of tandem mass spectrometry data, resulting in the development of published novel approaches to the identification of proteins and post-translational modifications. Before coming to UCSD, he had focused since 1999 on computer science techniques and their application to the biomedical sciences.



PM10

 

Systems Biology of Host-Pathogen Interactions and Microbial Communities

Christian V. Forst, Bioscience Division, Los Alamos National Laboratory

Unlike traditional biological research that focuses on a small set of components, systems biology studies the complex interactions among a large number of genes, proteins and other elements of biological networks. Host-Pathogen Systems Biology examines the interactions among the components of two distinct organisms, either a microbial or viral pathogen and its animal host or two different microbial species in a community. With the availability of complete genomic sequences of both host and pathogens, together with breakthroughs in proteomics, metabolomics and other experimental areas, the investigation of host-pathogen systems on a multitude of levels of detail comes within reach. Mathematical models of the immune system describing host-pathogen interactions have a long history in mathematical biology. Nevertheless, the continuing and accelerating emergence of new biological threats requires the development of new and innovative approaches to combat them.

Intended audience: The tutorial is aimed at an audience with bioinformatics/computational biology background and with interest in systems biology. A significant part of host-pathogen interactions involves some aspect of the immune system, thus a background in immunology is useful.

Tutorial Level: Intermediate; basic knowledge of immunology is useful; bioinformatics and computational/theoretical biology background required

Goals and Objectives: The primary goal of this tutorial is to provide the audience with a hands-on guide to network biology and its application in systems biology of one- and two-component systems. Systems biology is still a young, rapidly developing research area.

Sections of the tutorial hand-outs will serve as a reference to web sites and will provide a glossary.

Outline of the Tutorial
1. Introduction
(a) A brief overview of the immune system with emphasis on innate immunity
(b) Host-pathogen systems and “hijacking” of the host by pathogens
(c) Quorum sensing, quorum jamming and microbial communities
2. A word on models and scales
3. Bottom-up approaches
(a) Network biology and response networks
(b) A “true” metabolic host-pathogen network
(c) A two-microbe system
(d) The λ-phage
(e) Combinatorics of immune receptor signaling networks
(f) Immune system models
4. Top-down approaches
(a) Hierarchical models, The Physiome Project and PhysioLab
(b) Cardiac biosimulation and Asthma PhysioLab
5. Conclusion and Outlook
(a) Hybrid approaches
(b) Full scale in silico system models
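The "response network" idea in section 3(a) can be reduced to a toy example: given an interaction network and per-gene response scores, keep the strongly responding nodes and the edges among them. The network and scores below are invented for illustration.

```python
# Extract the subnetwork of genes responding above a threshold.
edges = [("geneA", "geneB"), ("geneB", "geneC"), ("geneD", "geneE")]
response = {"geneA": 2.1, "geneB": 3.4, "geneC": 1.9,
            "geneD": 0.2, "geneE": 0.3}

def response_network(edges, response, threshold=1.0):
    """Keep only edges whose endpoints both respond at or above threshold."""
    active = {g for g, score in response.items() if score >= threshold}
    return [(a, b) for a, b in edges if a in active and b in active]

print(response_network(edges, response))
# prints [('geneA', 'geneB'), ('geneB', 'geneC')]
```

Real response-network methods score connected subnetworks statistically rather than thresholding node by node, but the output has the same shape: a condition-specific slice of a larger interaction network.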

Christian Forst is a staff member in the Bioscience Division at Los Alamos National Lab involved in a research effort on Computational Biosystems. His group is committed to Network Biology and Network Genomics, the analysis of genomes in the context of biological networks, their construction, inference and regulation. In this context he is interested in the genomic foundation of networks within a single organism as well as between organisms, as, for example, in host-pathogen interactions. He is also dedicated to network analysis and the identification of Response Networks in large biological networks that represent responses to generic stress and specific drug treatment, differential network expression analysis during different drug treatments, as well as network analysis and proteomics studies. A review paper on Host-Pathogen Systems Biology is in press with the journal Drug Discovery Today. His previous research areas include comparative network genomics, theoretical/computational analysis of molecular evolution, the specific properties of genotype-phenotype maps necessary for the success of molecular evolution, and the evolutionary dynamics of entities within an evolving auto-catalytic reaction network of species with distinct genotype-phenotype relationships. Dr. Forst was trained as a chemist and has a background in dynamical systems, complex dynamics, optimization in combinatorial landscapes, graph theory, sequence/context analysis, whole genome annotation, network construction, gene-expression analysis and phylogeny. He teaches two summer courses, one on bioinformatics and one on systems biology, at the Los Alamos Summer School. He taught a tutorial on Network Genomics and Systems Biology at ISMB 2001. An early version of this tutorial was presented at PSB 2004, and an improved version was taught at ISMB 2004.




PLATINUM SPONSORS


HP


Microsoft Research



SPONSORS

Journal of Bioinformatics and Computational Biology

GoingToMeet.com
