

desertcart.com: The Model Thinker: What You Need to Know to Make Data Work for You (Audible Audio Edition): Scott E. Page, Jamie Renell, Basic Books: Books
A**N
Conceptual models are the key to understanding how a mathematically-described universe functions.
I regard Scott E. Page's The Model Thinker to be the singular 'must read' reference work and learning aid for anyone looking to master conceptual models of the world we live in. For the most part, these are dynamic models, most frequently mathematically described, which are essential to thinking about data; about interactions, including conflict; and about strategies for resolving conflicts between individuals and their respective social groups, institutions, and, generally speaking, civilized society itself. A single work of this type cannot include every permutation of conceptual model, but Professor Page accomplishes a great deal in the 420-odd pages comprising this masterful work. As Professor Page observes early on, "we live in a time awash in information and data". At the same time, comprehending the mental load that those data represent, and responding appropriately, requires us to acquire new cognitive skills in order to make any headway in dealing with the implications of those data.

Professor Page identifies seven uses of models: (1) to reason, meaning to identify conditions and deduce logical implications; (2) to explain, meaning to provide testable explanations for empirical phenomena; (3) to design, meaning to choose features of institutions, policies, and rules of behavior that will allow particular models to behave as intended; (4) to communicate, meaning to relate knowledge and understandings about the natural world and human-developed institutions as they interact with one another; (5) to act, meaning to guide policy choices and strategic actions to achieve predictable results; (6) to predict, meaning to make numerical and categorical statements about future and unknown phenomena, based upon historical data or extrapolation from current data, where the trend lines suggest where we will end up at some point in the future; and finally, (7) to explore, meaning to investigate possibilities and hypotheticals, including counterfactuals.

The models that Professor Page presents are not reality, but the functions they explore are found in reality if we look closely enough to find them and are capable of understanding what they are telling us and where we are heading. They are simple, and yet sophisticated. They employ depictions of physics, of psychology, and of a great deal more. In 29 chapters, Professor Page limns the universe of conceptual models and their mathematical and graphical representations, how they work, and the logical inferences that can be drawn from each of them in terms of the types of outcomes that can be expected when those models are employed to explain numerical, financial, organizational, and sociological phenomena. The mathematics describes an input-process-output scenario in which natural processes that behave stochastically are simulated as if they were governed by predictable mathematical rules. Professor Page repeatedly emphasizes that understanding phenomena, and the data that describe them, requires a knowledge of many models, each one pointing out a specific facet of the whole, and none of them completely describing what the whole actually looks like. In many respects, these models are counterintuitive, meaning that left to our own devices we might come to different decisions based upon criteria that fall well short of rational calculation.
Nowadays, the science of behavioral economics has, through extensive research, established that decision-makers are in fact subject to cognitive biases that distort our perception of reality, importance, and value. We perceive and think in much the same way that our ancient forebears did, before formal learning took hold. We as a species weigh prospective losses roughly twice as heavily as what we might gain from a prospective course of action. We act on the basis of unfounded assumptions and cognitive biases. And this is why dynamic modeling becomes so important: it strips away the cognitive tricks that our minds play on us when we encounter new and unfamiliar situations. Using these models, we apply rule-based behaviors that have been time-tested to optimize our chances of succeeding in whatever endeavor we are engaged in at the time. Using dynamic modeling techniques, we also learn to adapt our behavior, based upon the behaviors of others, by applying principles of probability that allow us to calculate the likelihood that certain events will or will not occur. We learn to adjust our behaviors based upon what happened in the past, while understanding that our revised calculations are still educated guesswork.

We also come to understand, as Professor Page teaches us, that different models lead to different outcomes: equilibrium, cycles, randomness, or complexity. If a rule-based model generates a random outcome, the rule itself serves only to alert the user that the outcome of the action taken is limited in its predictability; nevertheless, randomness itself can be studied, and the lessons applied, to rule out situations where precise calculations would be necessary to achieve the model user's objectives. Moreover, as Professor Page instructs us, some models generating what at first glance may appear to be randomness may actually be producing recurring patterns that derive from an extended number of cycles. Now that we can use computers to simulate actions taken over time, the regularity of those long-run outcomes can readily be seen. If a model produces an equilibrium, we know that certain actions will ultimately succeed and others will fail. Other models produce complexity, both internally and through their interaction with other actors above and below the level at which the model operates. We learn that those complex interactions generate what are referred to as 'emergent behaviors': sometimes higher levels of sophistication, but also phenomena that could not have been anticipated at the operational level at which those models work. This is how models, and those who use them, adapt to changing circumstances. We can also see where particular models favor selfish outcomes at the expense of the community at large, and where the community responds to its disadvantage in the economic sphere by changing the rules of engagement through public policy changes intended to restrict or deter overreaching by those considered to be bad actors. More on that later.
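To make that point about outcome classes concrete, here is a minimal sketch of my own (not an example taken from the book): a single deterministic rule, the logistic map, produces an equilibrium, a cycle, or apparently random behavior depending only on one parameter. The parameter values below are chosen purely for illustration.

```python
# A single deterministic rule, iterated: x -> r * x * (1 - x).
def logistic_trajectory(r, x0=0.2, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Illustrative parameter values only: same rule, three different outcome classes.
for r, expected in [(2.8, "settles to an equilibrium"),
                    (3.2, "settles into a two-cycle"),
                    (3.9, "looks random (chaotic)")]:
    tail = logistic_trajectory(r)[-6:]
    print(f"r = {r} ({expected}): {[round(x, 3) for x in tail]}")
```

The same simple rule can land in very different outcome classes, which is why knowing what class of outcome a model produces matters more than any single run of it.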
At the core of all models are the notions of statistical distributions and the way in which probabilities are to be calculated. They are part of the core knowledge base for any modeler, because they determine the ways in which data are perceived and handled. A working knowledge of distributions is necessary to measure inequalities in power, income and wealth, and to perform statistical tests.

Professor Page addresses the matter of distributions over two chapters, the first dealing with normal (Gaussian) distributions, often referred to as the bell curve. The next chapter deals with power law distributions, i.e., long-tailed events. Distributions mathematically capture variation within certain types, representing it as probabilities defined over numerical values or categories, with the bulk of the data ordinarily clustered around the statistical mean or average value, and differences represented as deviations from the mean in a standardized format. Power law distributions differ from normal distributions in that positive feedback loops augment and reinforce the action or trend that the distribution explains or describes. Two models frequently appear where power laws occur. The first is preferential attachment, which captures trending preferences about where people choose to live (e.g., cities), which things to buy (e.g., books that become wildly popular simply as a function of their subject matter or word-of-mouth advertising), which places to frequent on the Internet (e.g., websites such as Facebook), and so on. The second is self-organized criticality, in which default choices lead a system toward a critical state: under conditions of entropy, a power law distribution maximizes uncertainty around a fixed mean, yet leaves the system exposed to large events that, although they occur infrequently, can have devastating consequences.
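As a rough illustration of the first of those two mechanisms, here is a minimal preferential-attachment sketch of my own (the arrival count and new-item rate are assumptions for illustration, not figures from the book): each newcomer usually copies an earlier choice, so popular items become more popular still, and a heavy-tailed, power-law-like distribution of popularity emerges.

```python
import random

def preferential_attachment(n_arrivals=100_000, new_item_prob=0.01, seed=1):
    # Illustrative parameters, not from the book.
    rng = random.Random(seed)
    counts = [1]   # popularity of each item; start with a single item
    picks = [0]    # one entry per unit of popularity, for proportional sampling
    for _ in range(n_arrivals):
        if rng.random() < new_item_prob:
            counts.append(1)                 # occasionally a brand-new item appears
            picks.append(len(counts) - 1)
        else:
            chosen = rng.choice(picks)       # chosen with probability proportional to popularity
            counts[chosen] += 1
            picks.append(chosen)
    return sorted(counts, reverse=True)

counts = preferential_attachment()
print("items:", len(counts))
print("top 5 popularity counts:", counts[:5])
print("median popularity count:", counts[len(counts) // 2])
```

A handful of early leaders end up with enormous counts while the typical item stays tiny, which is exactly the long-tailed pattern the chapter describes.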
Along these lines, normality is correlated in the minds of many observers with frequency of occurrence, a conclusion justified by the number of smaller events clustered around the historical averages during the relevant time periods in the run-up to the present day. It is as if we are traveling on a highway, accelerating as we go, while viewing the road through the rearview mirror. It is but a small leap of faith to conclude that the future will not be markedly different from the past; and the historical record often validates that assumption. Using the normal bell curve to justify that conclusion is really asking the wrong question, because it inherently incorporates the so-called 'survivorship bias' of those who have averted or avoided catastrophe and lived to tell the tale. At best, it is a psychological pick-me-up that reassures us that whatever happens in the future is survivable, because we have done it before. Well, not entirely: by focusing on survivable events (survivable because we, or most of us, did survive), we avoid looking at the risks we are incurring now, with fewer resources and less resilience to withstand future shocks. We have come to realize that the statistical probabilities of those large, infrequently experienced events, as represented within a nominal 'normal distribution', tend to understate the likelihood that those events will ever occur. In recent decades, for example, turmoil within the financial markets was regarded by many as 'unforeseeable'; well, those events had been foreseen, and warned against. The argument against foreseeability was simply that it was impolitic, unfashionable, and unprofitable for many to acknowledge the obvious risk that major financial services institutions were running at the time. Accounting for that potentially large downside risk simply did not fit their model, based upon what had gone before.

From the standpoint of statistical analysis, there are many who view normally distributed data in hindsight from a frequentist perspective, comparing the most recent data against historical averages. The past historical record is therefore often seen as prologue to what will happen in the future. This is linear thinking, in which scaling remains within fixed ratios. Contemplating event probabilities that are known to be distinctly nonlinear using linear thinking is apt to produce erroneous conclusions, because the underlying forces scale differently from what we experienced in the past. Within my lifetime we have gone from mechanical calculators to cloud-based supercomputers, with gains in computational capacity and productivity that could not have been imagined 60 or 70 years ago. We are doing business in ways that were inconceivable and unattainable only a few decades ago, using instantaneous worldwide communications that accelerate the size and pace of commercial and financial transactions in real time and in ways that far exceed our collective ability to design and manage them. By virtue of our enhanced technologies, we have opened the door to effects of scale that could not have been contemplated when the institutions we created to manage the forerunners of those now-enhanced processes were established. We are figuratively 'the sorcerer's apprentice', playing with magic, using newfound powers that we can only guess at how to control, and which could do us all great harm if misused.

Philosopher, teacher, writer (and former stock trader) Nassim Nicholas Taleb coined the phrase 'Black Swan' to describe those extremely rare events that for most purposes are ignored. In a power law distribution, the probability that an event will occur is proportional to its size raised to a negative exponent. The size of the power law's exponent determines the likelihood and size of large events: when the exponent equals 2, the probability of an event is inversely proportional to the square of its size. For exponents of 2 or less, a power law distribution lacks a well-defined mean. The mean of data drawn from a power law distribution with an exponent of 1.5 never converges; it simply increases without limit. Thus, the larger the event, the less likely it is to occur; but if it does occur, its potential size can be catastrophic. Decreasing frequency is coupled with exponentially larger magnitude of effect, with the obvious corollary that building robustness into the enterprise is a practical necessity.
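That claim about the exponent of 1.5 is easy to check numerically. Below is a small simulation of my own (illustrative, not taken from the book): drawing from a power law density p(x) proportional to x^(-1.5) on [1, infinity) and tracking the running mean, which keeps lurching upward whenever a huge draw arrives instead of settling down.

```python
import random
import statistics

# For a power law density p(x) proportional to x**(-a) on [1, infinity),
# the mean is finite only when a > 2. With a = 1.5 the sample mean never settles.
def power_law_draw(a, rng):
    u = rng.random()
    return (1.0 - u) ** (-1.0 / (a - 1.0))   # inverse-CDF sampling

rng = random.Random(0)
a = 1.5
draws = []
for n in (1_000, 10_000, 100_000, 1_000_000):
    while len(draws) < n:
        draws.append(power_law_draw(a, rng))
    print(f"n = {n:>9,}: running mean = {statistics.fmean(draws):,.1f}")
```

A normal distribution run the same way would converge almost immediately; here the average is dominated by the rare, enormous draws, which is precisely the point about catastrophic tail events.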
And yet, in the years since the debacle of 2008, what we have seen is increasing pressure to allow those who were responsible for the near collapse of the world economy to behave much the same as they did before. Models such as those described above are supposed to be teaching tools; but apparently those who were in charge before have learned nothing from the events since then. Time and space do not allow for a broader exposition of Professor Page's excellent analysis; but if the crisp and well-reasoned presentation he makes in his discussion of the effect of power laws on the probability of real-world events is to be taken at face value, the balance of his book is equally well done.

Nevertheless, as with any work of substantial size and effort, the sheer weight of the output inevitably leaves patches here and there that would warrant reappraisal and updating in a future edition of this extraordinary book. Two points come to mind: in discussing Game Theory in chapter 21, Professor Page does not elaborate on what competition between an established business firm and a new competitor seeking entry into the market would look like. This competition depends upon the ability of the competitors to differentiate themselves in accordance with customer preferences, which can take any of the following forms:

• Price/price competition.
• Price/quality competition.
• Price/services.
• Quality versus service, with price a dealbreaker.

I cannot recommend Professor Page's book too highly. If I could, I would tuck it into the book bag of every high school and undergraduate STEM student. I would also recommend it highly for students whose plans do not include cultivating prowess in mathematics and science. In point of fact, I would use whatever strategy I could conjure up to induce non-science students to learn as much as they can from what Professor Page provides in his teaching syllabus. This material should be part of every high school curriculum, and educators owe it to their students and their parents to make this information, in some form, available, because without it those students will be treading water, getting nowhere, as the world they live in becomes overrun with data they are unable to use.
P**K
Nice high-level primer
Helped simplify some big ideas and what’s coming next in the area
A**N
Great introduction to modelling
I'm just an undergraduate computer science student (so take this review with a healthy dose of scepticism), but this is an excellent book on modelling both natural and social phenomena. Its philosophy is to use multiple models to describe a phenomenon that the reader is interested in. The author discusses the strengths and weaknesses of each model, and the strengths and weaknesses of modelling in general. The author discusses rational and psychological models, the bell curve and power law distributions, linear regression, concave and convex functions, network models, entropy, epidemiological models, Markov chains, path dependence, and many other useful mathematical tools. The author acknowledges the weaknesses of models and does a great job giving a nuanced discussion of employing models with a healthy dose of scepticism and with value judgement. The book ends with an investigation into income inequality and opioids using a "many-model" approach, and the author shows how the use of multiple models can be helpful when trying to understand something. I suppose that my only criticism is that the author quotes authorities to bolster the usefulness of the model he is about to discuss (personally, I think that it's better to quote authorities to disagree with them). Furthermore, I feel that the mathematical rigour is a bit lacking (though I do acknowledge that this book was written for people who don't necessarily have a strong maths background). Overall, I'd rate this book at about 4.3/5 stars. Amazing primer!
E**H
Good Survey of the Uses of Modeling
In our era of voluminous data, modeling is increasingly used to analyze and predict. In "The Model Thinker," author Scott Page takes the reader on a tour of some of the most common models used today, using examples that touch on areas such as world affairs and geopolitics, social networking, medicine, politics, sports, economics, and unusual occurrences. The author realizes and notes that no model is perfect, so he stresses the practice of using multiple models for any given problem in order to look at situations from different angles and obtain the best solution. The general reader will likely be familiar with many of the concepts used to illustrate the models, such as game theory and bell-shaped normal distributions. While the equations highlighted will likely be over the heads of those without undergraduate math degrees, general readers will be able to follow Page's arguments and train of thought well enough to make "The Model Thinker" a worthwhile and informative read about the modeling process used to make more and more of the predictions and decisions of our time.
F**N
Enjoyed the course, trying the book
Great teachers can explain complex concepts in a way that even dummies like me can understand. This is an important book for anyone who wishes to become a more lucid, independent thinker. All this talk of Fake News makes us feel like we have to take hold of one side or the other haphazardly. This book (and the accompanying course) is about taking time to think through problems from multiple angles. I really enjoyed Dr Page’s FREE COURSE on Coursera (Model Thinking). I decided to buy the book as a result. Despite taking the course, I found the book a bit challenging. The formulas are a bit intimidating. I am going back and watching the videos again as I read it.
S**G
Great book and equally great recording
The content is wonderful and the narration on the audiobook edition is superb. So many good books suffer from boring or distracting narration. This narrator strikes a good balance that keeps the information engaging without being distracting.
C**E
An Elementary Textbook Decades Behind the State of the Art
I am giving this book five stars for the effort involved in writing the book, subtracting two stars for the double dose of marketing hype in the title and on the back, for a net rating of three stars. The hype is explained below.

1. The book started as an introduction to modeling for social scientists. It achieves that goal. But an appropriate title would be "An Introductory Handbook on Modeling for Social Scientists". That could be a five star book. Instead the marketing types at Basic Books introduced the buzz words "Model Thinker" and "data". Both topics are lightly treated in the book. Model thinking is all the rage because of the Charles Munger quotation in chapter one. But Munger's mental models are subtle and are rarely discussed well. An excellent treatment, though, is Peter Bevelin's "Seeking Wisdom from Darwin to Munger". Using models to make data analysis possible is professionally described in Philipp Janert's "Data Analysis with Open Source Tools".

2. Turning now to the issues with the content of the book. The book contains some twenty-five models in brief chapters averaging roughly 12 pages each. A wine tasting for models. What do you do when you need a full case of vintage modeling for a real-world problem? You turn to the references, where you may find a dedicated tome with a few hundred pages covering the same topic. Two examples are Ch. 10 on network models, backed up by Mark Newman's treatise, and Ch. 18 on system dynamics models, backed up by John Sterman's "Business Dynamics". Now these are two worthy books, and each could justify a year's course work, representing a large number of hours of effort. This brings me to the question of how much effort is required for each of the twenty-five models to reach a reasonable competence level to support a truly multi-model effort. Page touches on this only lightly, and not well.

3. This introduces a major lack of the book: a methodology for integrating a set of models for a multi-modeling effort. A methodology is a theoretical system supporting the use of a set of methods and models in a field. A methodology can also guide the effort of additional learning in a field to support a large application. Fortunately the British have provided five decades of work on methodologies for wicked and messy problems in social systems. An entry into this literature is provided by "Rational Analysis for a Problematic World: Problem Structuring Methods ..." edited by Jonathan Rosenhead and John Mingers, 2001. Note that the British cybernetics and soft systems methodology schools treat social systems as human activity systems (HAS). Over decades this has proven to be a powerful and fruitful approach.
S**K
Interesting and revealing, a guide to help think on today's complex problems
What's a model? How do you use a model? Why use a model? Is there a qualitative approach to applying a model? Should you use one model or several? These are the questions this book looks to help answer. It is fascinating that, depending on your area of expertise, some models will be recognizable and others not. Maybe someone could say that many other models are not included, or that it is not clear what counts as a model: is the standard atomic model a "model"? It is not described here, yet it is one, and the same goes for many others. In any case it is very informative and interesting for understanding the why: the more diverse your approach to thinking about a problem, the better answers or guidelines you can get. Some are more difficult to apply without further study or practice, like bandit models or NK models in my case. What about machine learning? Are algorithms also models? It is a super interesting book to start this thinking approach, and I also recommend doing his Coursera course, Model Thinking. Ideally read it paired with Algorithms to Live By: different style, different algorithms (or models?), but it completes this vision of learning to apply formal models to analyze a problem and think through different potential solutions.