M**N
Can forecasters be better than the chimp randomly throwing darts?
Why do so many specialists make forecasts no better than the proverbial chimp throwing darts at a dartboard? Why are so many forecasts made by governments, businesses and the general public never subjected to subsequent measurement of their accuracy? Without measurement there can be no revision, and without revision there can be no improvement.

The most spectacular example of a mistaken prediction with catastrophic effects was the claim that Saddam Hussein had weapons of mass destruction which he was prepared to use. In the US, the CIA, the National Security Agency and the Defense Intelligence Agency, plus 13 other agencies, all agreed. After the invasion of Iraq in 2003, the "facts" were shown to be false.

In a brave acknowledgement of their mistakes, the US authorities questioned their own fundamental ability to forecast. In 2006 the Intelligence Advanced Research Projects Activity (IARPA) was created to fund research into making intelligence smarter, and it co-opted Philip Tetlock. IARPA would sponsor a massive tournament to see who could invent the best methods of making all sorts of forecasts. IARPA was breaking the mould by demanding the measurement of forecasts: how accurate are they at predicting outcomes? It is only by measurement that improvements can be made.

Tetlock's contribution to the IARPA research was his Good Judgement Project, run over the internet, in which he got 2,800 people to make specific predictions on a range of questions. For example: will an outbreak of H5N1 kill more than 10 people in China in the next 6 months? Will the euro fall below $1.20 in the next 12 months? Will the President of Tunisia flee into exile within 3 months? Then, after the 6 or 12 month period had elapsed, the accuracy of the forecasts and the forecasters was assessed.

His surprising result was that there are people who prove consistently good at forecasting across a broad range of subjects, and they are not necessarily the superbright specialists immersed in their subject. The superforecasters came from a cross-section of the general public, but they did share common characteristics and a common approach to making forecasts. Thanks to IARPA we now know that a few hundred ordinary people and some simple math can not only compete with professionals supported by a multibillion-dollar apparatus but beat them.

The book lays out in fascinating detail what these characteristics are, and the fundamentals of the approach to forecasting that increases the chances of success. If you want to test yourself against the superforecasters, a new Good Judgement Project is being launched on the net: go to www.goodjudgement.com if, like me, you are mesmerised by this fascinating book.
A**A
Worth a read
I found this a really interesting book. I was very sceptical at first, and the idea of superforecasters sounded like a case of survivorship bias, but the author addresses this satisfactorily. For the most part the book goes through various types of logical fallacy and how you can avoid them to make more accurate predictions about the future, so if you already know a bit about probability and logical fallacies you won't find much that is new. But the story of the Good Judgment Project is very interesting and certainly worth knowing about.

Certainly in this time of COVID-19, after reading this book you'll start noticing how many public figures fall into basic data-interpretation mistakes, make predictions that turn out to be totally wrong, and then carry on as normal anyway!
L**R
The art of writing a Pop-Science book
One simply can't put this book down. Philip Tetlock and Dan Gardner have clearly taken a leaf out of Gladwellian-type books in terms of entertaining content, as opposed to a more Kahneman-type approach which bombards one with detail. The book is not, of course, a detailed methodology of how to forecast; it simply looks at a simple question: what makes a good forecaster?

We are led through the authors' thoughts on this matter by means of some high-end anecdotes (Obama, General Petraeus etc.) and some more humble retired folk who happen to be good at forecasting. We know that they are good at forecasting through the online forecasting tool developed by Tetlock and others, which invited participants simply to make forecasts on a number of events. All for the grand allure of an Amazon gift voucher! (Well, more books, eh?) Once the cream rises to the top of this forecasting competition, the book delves into what makes a good forecaster, summarised in ten key points at the end. It's all about being logical, taking baby steps (joining the dots), keeping up to date with the latest events, making small incremental changes to your forecast, and of course remembering that we are human and prone to errors and bias.

Although the book was more an entertaining summary of the study, and of why old retirees were getting better Brier scores than highly paid statisticians, I was perhaps expecting a bit more science in this pop-science book. However, full marks. If I want the science, I am sure I can find some of his papers online.
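For readers unfamiliar with the Brier score mentioned above, here is a minimal Python sketch of how one is computed; this is an illustration of the scoring rule only, not code from the book or the Good Judgment Project, and the sample forecasts are made up. It uses Brier's original two-outcome form, which I believe is the one the book describes: 0.0 is a perfect score, 2.0 is maximally wrong, and an unvarying 50/50 guess earns 0.5 on every question.

    def brier_score(forecast, outcome):
        # forecast: probability assigned to the event happening (0.0 to 1.0)
        # outcome: 1 if the event happened, 0 if it did not
        p = [forecast, 1.0 - forecast]        # probabilities for [yes, no]
        o = [float(outcome), 1.0 - outcome]   # what actually occurred
        return sum((pi - oi) ** 2 for pi, oi in zip(p, o))

    # A forecaster's overall score is the average across all resolved questions.
    # (Hypothetical history: probability given, then what happened.)
    history = [(0.80, 1), (0.30, 0), (0.95, 1)]
    print(sum(brier_score(p, o) for p, o in history) / len(history))  # ~0.088

Lower is better, which is why retirees beating well-resourced professionals on this measure is such a striking result.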
A**R
An interesting and insightful read.
A brief explanation of how the 'Good Judgement Project' established a fairly accurate way of forecasting. It runs through the key techniques sequentially, while also explaining the theory and reasoning behind them. I would say it's worth reading 'Thinking, Fast and Slow' first, as some of the key terms and ideas in it are referenced in this.
O**0
I forecast waffle
This book was recommended to me by a friend because both of us occasionally need to forecast as part of our jobs. In many respects, reading it was like being taught how granny sucks eggs. A lot of the book covers the theory of what makes an accurate forecast, and there are some nuggets of insight on good forecasting, nicely summarised in the appendix as the 11 commandments of Superforecasting.

The issue with the book is not the material but the padding, of which there seems to be a lot. This is a 300+ page book that could be edited down to half the size without losing information; several of the same examples of superforecasting are repeated more than once.

It was funny to read that a lot of businesses are not actually that interested in whether a forecast is right or wrong, provided it tells them what they want to hear. Speaking from experience, I know this to be true. In addition, forecasters are reluctant to revisit old forecasts for fear of exposing their inaccuracies, which to me makes zero sense, and I am glad Tetlock agrees with this view.

Overall it is a good read, just nothing special if you do this sort of thing for a living.