The interconnections usually represent biochemical transformations or physical transport. One needs to incorporate as much known physiology and biochemistry as possible into this structure, recognizing that there may be a need to combine some of the processes. Two sets of equations are usually required. First, there are the equations that characterize the components and their interconnections; these are determined by the nature of the system being studied. They are based on known or hypothesized physical chemistry and are characterized by parameter values that must be supplied in order to obtain a solution.
It is possible to specify a set of equations for which there is no solution, as opposed to one that is merely difficult to solve; this can happen for a variety of reasons, most of them theoretical. In this case, simplifications must be made. However, it must be recognized that simplifications are in a sense hypotheses that must ultimately be tested. A second set of equations is required when experimental data are involved. These are the measurement equations. As the link between the model equations and the data, they ensure that the units of the two are consistent.
Parameters such as volumes or masses are normally required for these equations, in addition to the parameters characterizing the model equations. When the model is fully specified, that is, when the equations have been written and parameter values assigned, the equations can be solved. This is called a simulation.
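To make the notion of simulation concrete, here is a minimal sketch, not taken from the text, of solving a single-pool model dq/dt = U - k*q by Euler integration and checking it against the analytic solution; the pool size, rate constant, and step size are purely illustrative.

```python
import math

def simulate_pool(q0, k, U, dt, t_end):
    """Euler integration of dq/dt = U - k*q, a one-pool model."""
    q, t = q0, 0.0
    while t < t_end - 1e-12:
        q += dt * (U - k * q)
        t += dt
    return q

# Illustrative parameter values (not from the text)
q0, k, U = 100.0, 0.1, 5.0          # initial mass, fractional loss rate, input rate
q_sim = simulate_pool(q0, k, U, dt=0.001, t_end=50.0)

# Analytic solution: q(t) = U/k + (q0 - U/k) * exp(-k*t)
q_exact = U / k + (q0 - U / k) * math.exp(-k * 50.0)
print(q_sim, q_exact)   # both approach the steady state U/k = 50
```

Once a model is this explicit, "running a simulation" is nothing more than integrating the equations forward with a chosen set of parameter values.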
One can then examine the predictions of the model and compare them with experimental data. The best one can hope for is compatibility between the model predictions and the data within some statistical tolerance (Berman). Compatibility, however, does not mean the model is correct. It does mean that the model can be used to estimate parameters of interest such as production and catabolic rates. Incompatibility between the model predictions and the data means the model is not correct; more specifically, it means that one or more components or interconnections are incorrect.
Since postulating a model is equivalent to postulating a hypothesis about how the system works, this is valuable information: it forces a change in the model formulation. In fact, it is during this part of the model development process that the modeler learns the most about the physiology of the system under study.
In the model development process, one easily learns from the errors.
As noted, a model is specified by a set of parameters. The question arises as to whether the parameters can be estimated with a predefined degree of statistical precision from a particular set of experimental data. The problem can be posed in two ways. A priori identification addresses the issue of whether the parameters describing the model can be estimated from ideal data, where ideal data are continuous, noise-free measurements in time.
If they cannot, there is no way they can be estimated from a set of real data. A priori identifiability is a very difficult problem to solve in the general case (Carson and others). A posteriori identification addresses the question of whether the parameters can be estimated from a set of experimental data. Normally one uses a computer software package that contains at least one optimization routine (see SAAM). Since most biological models are nonlinear, that is, nonlinear in the parameters characterizing them, the optimization itself is nonlinear and thus is an approximation (see Bates and Watts).
The output of the optimization routine contains information on the estimated standard deviations of the parameters and the correlations among them. From this knowledge, one can assess the a posteriori identifiability of the model by using preset criteria for acceptance of the estimated standard deviations and correlations. It is important to point out that optimizers in different software packages can behave differently. In particular, if a model has more parameters than can be estimated from the data, some software packages will return no statistical information. Why this happens depends both upon the particular optimization algorithm chosen and upon how it is implemented numerically.
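The precision statistics described here can be sketched from first principles. The example below, a hedged illustration rather than any package's actual algorithm, fits a monoexponential y = A*exp(-k*t) to perturbed data by Gauss-Newton, then derives the parameter covariance from s²(JᵀJ)⁻¹; the data values, starting guesses, and perturbation are all invented for illustration.

```python
import math

def fit_monoexp(times, ys, A, k, iters=50):
    """Gauss-Newton fit of y = A*exp(-k*t); returns estimates and covariance."""
    n, p = len(times), 2
    for _ in range(iters):
        r = [y - A * math.exp(-k * t) for t, y in zip(times, ys)]
        J = [(math.exp(-k * t), -A * t * math.exp(-k * t)) for t in times]
        # Normal equations (J^T J) delta = J^T r, solved for the 2x2 case by hand
        a = sum(j[0] * j[0] for j in J)
        b = sum(j[0] * j[1] for j in J)
        d = sum(j[1] * j[1] for j in J)
        g0 = sum(j[0] * ri for j, ri in zip(J, r))
        g1 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a * d - b * b
        A += ( d * g0 - b * g1) / det
        k += (-b * g0 + a * g1) / det
    # Covariance of the estimates at convergence: s^2 * (J^T J)^{-1}
    J = [(math.exp(-k * t), -A * t * math.exp(-k * t)) for t in times]
    a = sum(j[0] * j[0] for j in J)
    b = sum(j[0] * j[1] for j in J)
    d = sum(j[1] * j[1] for j in J)
    det = a * d - b * b
    r = [y - A * math.exp(-k * t) for t, y in zip(times, ys)]
    s2 = sum(ri * ri for ri in r) / (n - p)
    cov = [[ d * s2 / det, -b * s2 / det],
           [-b * s2 / det,  a * s2 / det]]
    return A, k, cov

# Illustrative "data": truth A=10, k=0.5, with a small deterministic perturbation
times = [0.5 * i for i in range(1, 13)]          # 0.5 .. 6.0
ys = [10.0 * math.exp(-0.5 * t) + (0.05 if i % 2 == 0 else -0.05)
      for i, t in enumerate(times)]
A_hat, k_hat, cov = fit_monoexp(times, ys, A=8.0, k=0.3)

cv_A = 100 * math.sqrt(cov[0][0]) / A_hat        # coefficient of variation, %
cv_k = 100 * math.sqrt(cov[1][1]) / k_hat
corr = cov[0][1] / math.sqrt(cov[0][0] * cov[1][1])
print(A_hat, k_hat, cv_A, cv_k, corr)
```

Preset acceptance criteria, for example requiring each CV below some threshold and correlations bounded away from ±1, are then applied to these numbers to judge a posteriori identifiability.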
The crucial point is the following: when a model is overparameterized and statistical information is returned, the researcher can tell, by examining the coefficients of variation of the parameters and the correlation coefficients, which parameters cannot be estimated.
If this problem is discovered early enough, the experimental design can be modified so that sufficient data are collected; mathematical tools such as optimal sampling design (Carson and others) are available to help. If no such modification is possible, the model structure may have to be changed, or specific constraints will have to be imposed on some parameters.
Just because one has postulated a model structure that is compatible with the data does not mean this is the only such model. It is incumbent upon the investigator to search for other model structures that are also compatible. If more than one can be found, then an experiment must be designed to distinguish between the different structures.
An example of this situation can be found in Foster and others, where three different models describing low density lipoprotein metabolism were formulated, all of which had different physiological interpretations. Mathematical models are characterized by unknown parameters. By a minimal model, we mean a mathematical model with the fewest parameters needed to describe the data, that is, the simplest structure. Determining this number requires statistical knowledge for testing goodness-of-fit and model order (the number of parameters in a model). Clearly the notions of model uniqueness and minimal models go hand-in-hand.
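A standard way of testing model order, a textbook statistical result rather than something stated explicitly here, is the F-test comparing a simpler model (p1 parameters, residual sum of squares SSR1) against a richer one (p2 > p1 parameters, SSR2) fitted to n data points:

```latex
F \;=\; \frac{(\mathrm{SSR}_1 - \mathrm{SSR}_2)/(p_2 - p_1)}{\mathrm{SSR}_2/(n - p_2)}
```

If F exceeds the critical value of the F distribution with (p2 - p1, n - p2) degrees of freedom, the improvement in fit justifies the extra parameters; otherwise the simpler structure is retained as the candidate minimal model.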
It is very rare that one can postulate, on the first try, a model structure that accounts for all the characteristics of the data. Thus one starts with a simple structure and adds complexity, based upon known physiology and biochemistry, until such a structure is found. The parameters of the model either can be estimated with acceptable precision or they cannot. If they can, then as noted above one must explore other plausible model structures to see if they describe the data equally well.
If there are none, then one can accept the model as a minimal model of the system. If, on the other hand, there are other model structures of the same model order that describe the data equally well, experiments must be designed to distinguish among the acceptable structures. If the parameters cannot be estimated with precision and further experiments are not possible, then the model structure must be simplified again in accordance with known physiology and biochemistry.
This may mean some processes are lumped together because in the time frame of the experiment they cannot be separated. In the process of model reduction, when an acceptable model is found, the steps described above must be followed before concluding that it is a minimal model. When a model has been postulated that is compatible with a set of data, it can then be validated. Validation usually involves a perturbation to the system under study, that is, performing a different experiment on the system in a different physical state to see if the model can predict the outcome of this new experiment (see Carson and others).
Validation in this sense refers to a model's ability to predict the behavior of a system under conditions different from those from which it was derived. Examples of such perturbations include pathophysiological conditions (that is, comparing a normal group with a group having a specific disease) and intervention studies (such as administering a drug or dietary therapy to control a pathophysiological condition). Many different mathematical modeling methodologies have been used in biomedical research.
One that is commonly used and that has produced many useful results over the years is compartmental modeling.
Compartmental models are, technically, schematics representing systems of ordinary differential equations (Jacquez). This, however, is not the point of the present discussion. What we wish to do is give some background on compartmental models and what, from a modeling perspective, they mean. We will then give some actual examples.
The kinetics of a substance refers to its temporal and spatial distribution in a system such as the human body. However, it is impossible to follow every molecule of a substance all of the time. It is therefore necessary to discretize the system, that is, to lump various parts of the system together into a finite number of entities. Compartments and compartmental models accomplish this. A compartment is an amount of material that is kinetically homogeneous, well-mixed, and distinct from other material in the system. The easiest compartment to think about is plasma; other examples could be red blood cells or liver cells.
Glucose and lactate could be two compartments, both of which are in plasma. This lumping process will reduce the system to a finite number of compartments. Material can then flow into or out from a compartment, or be exchanged between compartments. We will see in a moment, however, that compartments are mathematical constructs, and equating a compartment with a physical anatomical or chemical space requires great care.
A compartmental model is a collection of interconnected compartments with a specified set of inputs and outputs. Mathematically, it represents a set of differential equations. More importantly, however, it provides a mechanistic framework for direct physiological interpretation of data. We reiterate that great care must be exercised in equating a compartment, which actually represents a differential equation, with a physiological volume. Figure 3 is an example of a 3-compartment model.
What comes after the equals sign is determined by the arrows in the model. The arrows labeled k_ij represent the fractional transfer of material from compartment j to compartment i. For example, k_12 is the fraction of the mass in compartment 2 transferred to compartment 1 per unit time. The arrows k_01 and k_02 are the fractional losses from compartments 1 and 2 respectively, and the bold arrow U_1 denotes exogenous mass input into the system per unit time.
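The mass-balance differential equations implied by these arrows can be written down and integrated directly. The sketch below is illustrative only: the rate constants are invented, and it assumes a transfer k_21 from compartment 1 into compartment 2, which a model like the one described would need for material to reach compartment 2.

```python
# Mass-balance ODEs for a two-compartment exchange model with input U1:
#   dq1/dt = -(k01 + k21)*q1 + k12*q2 + U1
#   dq2/dt =   k21*q1 - (k02 + k12)*q2
# Rate constants below are illustrative, not taken from the text.
k01, k02, k12, k21, U1 = 0.05, 0.02, 0.04, 0.07, 1.0

def simulate(q1, q2, dt, t_end):
    """Euler integration of the two mass balances from initial masses q1, q2."""
    steps = int(round(t_end / dt))
    for _ in range(steps):
        dq1 = -(k01 + k21) * q1 + k12 * q2 + U1
        dq2 = k21 * q1 - (k02 + k12) * q2
        q1 += dt * dq1
        q2 += dt * dq2
    return q1, q2

q1, q2 = simulate(0.0, 0.0, dt=0.01, t_end=400.0)

# At steady state, input balances total loss: U1 = k01*q1 + k02*q2
print(q1, q2, k01 * q1 + k02 * q2)
```

Note how each k_ij appears with a negative sign in the donor compartment's equation and a positive sign in the recipient's, so total mass is conserved apart from the explicit inputs and losses.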
A model such as this is frequently used to describe glucose kinetics in the whole body (see Ferrannini and Cobelli). It is assumed that compartment 1 is plasma, and that this compartment can be sampled; this is indicated by the dotted line with the bullet. The two loss pathways are hypothesized to be insulin-dependent and insulin-independent glucose disposal.
One can hypothesize that compartment 2 represents glucose in the brain and that k 02 is the insulin-independent glucose utilization in that organ. However, this is just a hypothesis, and if one wanted to test that hypothesis, one would have to design an experiment in which brain glucose was available for measurement; these data could then be compared with the model-predicted value for compartment 2. Figure 3 is an example of how a model can be used to predict the behavior of a system.
Again, however, we are operating at the boundary between experiments and mathematical models. These model predictions can only be tested through experimentation. The plasma lipoproteins are aggregates of fat and protein molecules that allow the transport of fats in the aqueous environment of plasma.
They are subdivided into a number of specific classes on the basis of density, the most commonly used being buoyant density. The proteins of lipoproteins are called apolipoproteins (apo). Much is known about the metabolism of the plasma lipoproteins (see, for example, Dammerman and Breslow).
In the following example, we will concentrate on the metabolism of the apoB-containing lipoproteins. LDL is primarily responsible for transporting cholesterol to cells in the periphery. LDL is positively correlated with cardiovascular disease, and hence studies dealing with its physiology and pathophysiology are very important (Kwiterovich and others). LDL particles were once considered to be a homogeneous collection of particles. However, it is now known that this is not the case (see Krauss and Burke). This makes the design and interpretation of metabolic studies much more difficult (see, for example, Foster and others).
The problem is trying to understand the balance between LDL production and catabolism in different subclasses of particles, since these are the two metabolic processes responsible for determining an individual's plasma LDL level. In addition, it has long been argued that there are two sources for LDL. Both are assumed to occur in the liver (Dammerman and Breslow). The African green monkey was chosen as an animal model in this particular study because it had previously been shown that VLDL isolated from a perfused African green monkey liver was rapidly converted to LDL (Marzetta and others). Such a study, which provided the first clue that such a small class of particles might exist, is not possible in humans.
The study described in Murthy and others, which built upon this information, was designed to assess the contribution of this small class of particles to plasma LDL levels in vivo. This can be regarded as a first step towards understanding how such a class could contribute similarly in humans.
The long-term goal, of course, is a better understanding of human lipoprotein metabolism and the discovery of interventions to correct pathophysiological conditions. The study design was a multiple input-multiple output tracer kinetic study that, again, would not have been possible in humans. At the same time, plasma LDL was isolated and radioiodinated. The two tracer-labeled particles were injected simultaneously, and serial plasma samples were taken. The samples were subfractionated into different lipoprotein subfractions (including two LDL subfractions) and quantitated for radioactivity.
The data set was rich in information and required a model to interpret. Basically, a compartmental model structure was derived to create a mathematical model of the conceptual model shown in Figure 4. The final structure, derived and tested using the principles described earlier, is shown in Figure 5. Compartment 13 is an extravascular compartment that exchanges with compartment 3. In this paper, additional compartmental modeling techniques were used to test a variety of hypotheses against the data, tests that would not have been possible using conventional methods.
This is a very important observation in the study of the atherogenicity of LDL in humans, since it can provide clues as to why certain subfractions of LDL are more atherogenic than others, what the origins of these subfractions may be, and what kind of therapeutic intervention might be designed to alter the pathophysiological condition. Many people think that animal research, and the mathematical models that have been developed based upon that research, deal only with human needs.
While we have given examples of this in the preceding section, there are many instances in which studies are undertaken because of a need to understand a metabolic system in a particular animal species. We give an example below of water metabolism, and go into more detail on the modeling so that the interested reader can appreciate by example some of the points we have been making.

By introducing a tamoxifen-responsive site into a tissue-specific promoter, gene targeting can be obtained selectively in a certain tissue when the mouse is treated with the drug.
Cre-lox technology can also be used to replace an existing gene with another one.
It has also been used to replace one allele with another, the latter for instance being an allele suspected of causing disease. Gene targeting has transformed scientific medicine by permitting experimental testing of hypotheses regarding the function of specific genes. Prior to gene targeting, our understanding of the role of genes in higher organisms was deduced from observations of spontaneous mutations in patients and experimental animals, linkage and association studies, administration of gene products to animals and, to some extent, from cell culture experiments. However, cell culture is not helpful for understanding functions and diseases involving multicellular, integrative responses.
Insights into organ systems such as the nervous system, the cardiovascular system, and the immune system were fragmentary at best, as was knowledge of mammalian development. The possibility of observing the effects on the intact organism of destroying a candidate gene transformed these areas of research. For instance, cardiovascular physiologists switched from rats to mice as models, downscaling their instruments and techniques in order to study the genetic regulation of hemodynamics.
A new era of genetic physiology was born. The genomes of man and mouse contain about 22,000 genes. Several thousand of them have already been investigated by gene targeting. Collectively, these studies have provided a wealth of information about gene function in development and disease. They have helped fuse mechanistic molecular biology with integrative life sciences such as embryology, physiology and immunology, and have prompted new technical developments in the physiological sciences. For medicine, the modeling of human diseases by gene targeting in mice has been particularly informative.
At this stage, it may be helpful to recapitulate the criteria first proposed by Claude Bernard for the scientific method in medicine: medical scientists use observations, hypotheses and deductions to propose explanations (theories) for natural phenomena. Predictions from these theories are tested by experiment. Any theory that is cogent enough to make predictions can be tested reproducibly in this way. The scientific method is therefore essentially a cautious means of building a supportable, evidence-based understanding of our natural world.
Experiments are crucial in this process. Prior to gene targeting, genetic medicine lacked the means for experimental testing. By mutating a gene to destroy its function (knock-out) or switching it to a disease-associated allele (knock-in), disease is induced if the hypothesis is correct. Alternative approaches based on genetic epidemiology are being developed, but currently available methods do not have the precision of hypothesis-based experiments. This digression into scientific theory may suffice to make the point that only by targeting candidate genes did it become possible to formally establish causality between gene and disease.
Let us now look at some specific examples of the impact of gene targeting in medicine. The first area to which experimental geneticists turned their attention after the birth of gene targeting in mammals was monogenic diseases. One of the reasons for choosing this particular medical condition was that selection conditions for isolating transduced cells were available for HPRT.
This prompted analysis of purine salvage pathways in mice and led to the findings that mice depend largely on adenine phosphoribosyltransferase APRT for purine salvage and are therefore not as sensitive to HPRT deficiency as humans. This is an illustration of the need for sophisticated analysis of integrative functions when characterising the phenotype of gene-targeted mice.
Cystic fibrosis is one of the most common monogenic diseases and was chosen for gene-targeting studies by Smithies and his co-workers [53, 54]. The defective gene had been identified by linkage studies in patient families followed by molecular cloning. By knocking out CFTR in mice, a condition was generated that reproduced many features of the human disease.
These studies were among the first to create a model of a human disease by gene targeting in mice. They have been followed by an avalanche of such knock-out models. The pathogenesis of inherited heart diseases has been explored successfully by gene-targeting approaches [55, 56].
For instance, targeting of genes encoding components of the contractile apparatus in cardiomyocytes leads to cardiomyopathy; targeted mutations in connexin proteins of gap junctions cause conduction defects; disrupted genes for transcription factors involved in heart development lead to congenital heart malformations; and targeting of genes controlling energy metabolism causes cardiomyopathy.
Complex diseases involving the action of more than one gene and, in addition, gene-environment interactions represent a particular challenge for medical research. Inheritance, penetrance and interactions are usually poorly understood; it has been difficult to dissect the contribution of an individual genetic factor, and the distinction between causation and correlation has been problematic.
In order to prove causation in such a complex system, experiments must permit detection of the effects of changing only a single variable at a time. Gene targeting made such experiments possible and has permitted proof of causation in complex diseases. Oliver Smithies has been the leader in this development. Together with Nobuyo Maeda, he focused on two important complex diseases, hypertension and atherosclerosis. However, at least 10 genes have been shown to alter blood pressure, and their gene products appear to interact in complex ways.
In spite of the discovery that angiotensinogen (AGT) gene polymorphism is associated with essential hypertension, the genetics of this disease has remained poorly understood. Little is known about the number of genes actually involved in human essential hypertension, their quantitative effect on blood pressure, their mode of transmission, or their interaction with other genes and environmental components. Smithies suspected that gene dose effects would have an impact on blood pressure levels and designed a new method for titrating gene dosage by producing mice with one, two or three functional copies of the AGT gene.
This resulted in proportionally higher levels of gene products. When Smithies et al. targeted another important gene for blood pressure regulation, the one coding for the angiotensin-converting enzyme (ACE), no such linear relationship was observed, in spite of the effectiveness of ACE inhibitors in reducing blood pressure. The investigators submitted their data to a computer simulation of complex interacting systems and could propose a model for blood pressure control through the renin-angiotensin system, which has proven useful for understanding essential hypertension.
The same gene was targeted independently by investigators at Rockefeller University. The following year, Michael Brown and Joseph Goldstein (Nobel Prize for discoveries concerning cholesterol metabolism) and their co-workers targeted the gene for the low density lipoprotein (LDL) receptor, Ldlr, and obtained a mouse that develops atherosclerosis when fed a cholesterol-rich diet. The introduction of the two mouse models with defective Apoe and Ldlr genes has completely changed atherosclerosis research. By crossbreeding them with other gene-targeted mice, it has been possible to deduce the importance of genes regulating inflammation, lipid metabolism, blood pressure and other factors proposed to be involved in atherosclerotic cardiovascular disease.
They are also used extensively in the pharmaceutical industry for the development and testing of new drugs against coronary artery disease. Gene targeting has been exceptionally useful in cancer research. A large number of protooncogenes, tumour suppressor genes, angiogenic factors and other genes have been targeted in different tissues in mice to shed light on the induction and spread of tumours.
Gene targeting of tumour suppressor genes has helped clarify their role in the formation of tumours. For instance, mice carrying a targeted p53 gene were predisposed to tumour development. Conditional targeting, using Cre-lox technology, of the adenomatous polyposis coli (APC) gene induces colorectal tumours in mice, and APC-targeted mice have become useful models for research on solid tumours.
Targeting of genes for endothelial growth factors and proteolytic enzymes has been essential for understanding the mechanisms of neoangiogenesis and metastasis of solid tumours, and is also used for developing therapeutic strategies to prevent spread. Gene-targeted mouse models have also become increasingly important in studies of host defense against pathogens.
Indeed, gene targeted mice have become indispensable in virtually all aspects of medical research. Gene targeting has transformed physiology and medicine. Among the basic biomedical sciences, it is difficult to imagine contemporary medical research without the use of gene targeted models. The ability to generate predictable designer mutations in mouse genes has led to penetrating new insights into development, immunology, neurobiology, physiology, and metabolism. It has also allowed disease models of human pathologies to be generated in a tractable mammalian system and consequently enabled experimental dissection of disease states, identification of new therapy targets and the development of test systems for pharmacology.
Finally, it is obvious that the future development of novel therapies to correct genetic defects in man will build on the experience of gene modification in mice that is based on the discoveries made by Mario Capecchi, Martin Evans and Oliver Smithies.

Figure 1. General strategy for gene targeting in mice. (A) Gene targeting of embryonic stem (ES) cells in culture is followed by cloning of an ES cell line containing the desired mutations.
Positive-negative selection is used to enrich for ES cells containing a targeted disruption of a gene. In both gene targeting (A) and random integration (B), the upper line shows the targeting vector, the middle one the chromosomal gene, and the lower one the modified gene. Genes targeted by homologous recombination contain the neoR element but not HSV-tk, since the latter resides outside the sequences in the targeting vector that are homologous to sequences in the chromosomal gene. In contrast, random integration of the vector results in introduction of HSV-tk as well as neoR.

Figure 3. Molecular genetic evidence for germline transmission of the repaired HPRT gene.
The Southern blot shows genomic DNA hybridized to a probe specific for a sequence in the targeting vector. The tumor was from a chimeric mouse that carried the targeted gene, as did the targeted ES cell line. In the F1 offspring, female agouti mice derived from targeted ES cells carried the targeted HPRT gene, while it was present neither in black mice derived from recipient blastocysts nor in male agouti mice.
Reprinted, with permission, from Proceedings of the National Academy of Sciences. Copyright National Academy of Sciences, U.S.A.

References (authors and years lost in extraction)
- Spontaneous testicular teratomas in an inbred strain of mice.
- Embryonic potency of embryoid bodies derived from a transplantable testicular teratoma of the mouse. Dev Biol.
- Multipotentiality of single embryonal carcinoma cells. Cancer Res.
- In vitro growth and differentiation of clonal populations of multipotential mouse cells derived from a transplantable testicular teratocarcinoma. J Natl Cancer Inst.
- Developmental potentialities of clonal in vitro cultures of mouse testicular teratoma.
- The isolation and properties of a clonal tissue culture strain of pluripotent mouse teratoma cells. J Embryol Exp Morphol.
- Garfinkel D. Computer modeling, complex biological systems, and their simplifications. Am J Physiol.
- Gunawardena J. BMC Biology.
- Phair RD. Development of kinetic models in the nonlinear world of molecular cell biology. Metabolism.
- Hsieh C-H (ed.)