Data, analytics, and the human condition: life lessons from baseball’s data-driven revolution

The history of professional baseball is, I think, the story of talented, skilled and experienced individuals relinquishing some of their decision-making autonomy to better coordinate their actions with others for the overall benefit of the group. In recent years, data, analytics and the people who effectively evaluate them have played a key role in this coordination effort. As a result, baseball’s history is, I think, a useful case study with which to better understand the value of the broader data-driven revolution that is well underway in many parts of our lives.

In the early days of professional baseball, individual players played as they pleased within the rules and conventions of the game. The manager was able to exercise some control over some on-field decisions because he decided who played each day. He used that authority to bend players to his will, whether or not his will led to success. In some remarkable instances, players were “too good not to play,” and they continued to play as they pleased, succeeding and failing according to their own set of rules. Their natural god-given talent was taken as proof that they could play by a different set of rules or none at all.

Today, because of data and modern analytics, managers and players are now relying on the advice and decisions of people who have often never played the game and who rarely step on the field. At first, these data-driven and analytical outsiders had to persuade the insiders to act on their insights and recommendations. Eventually, the people who control the purse strings recognized the value of data-driven analysis and decision-making. As a result, the data nerds are now themselves insiders and enjoin rather than entreat. It also seems likely that their influence on the game will continue to grow. For example, data-driven analysis is now influencing player development, which historically, as far as I can tell, has been an unfathomable mix of toxic masculinity, superstition, blind luck, and occasional strokes of genuine and well-intentioned pedagogy.

This turn towards player development is happening in large part because most teams have now embraced data, analytics, and the people who effectively evaluate them. As a result, the competitive edge associated with the analytics revolution has been blunted somewhat. For example, even if a clever analyst is able to identify an innovative way to evaluate players, whatever advantage is gained will be short-lived because player acquisition is a very public activity. Eventually, some other team’s analyst will crack the code underpinning the innovative and unexpected acquisition. In contrast, if a team can use data and analytics to improve their player development, which happens behind the mostly closed doors of their training facilities, to turn average players into good players and good players into stars, there is a huge opportunity to win more games at a much lower cost. They can sign players based on their current value and develop them into higher-value players while under the original contract. Crucially, because teaching and development must always be tailored to the student, even if we imagine that an ideal system for player development can be broadly identified and it becomes widely known and understood, there will be plenty of room, I think, for innovation and competitive specialization. Although a handful of very successful teams already have a history of identifying and nurturing talent in-house, the future of player development will probably look a lot like the recent history of data’s influence on player evaluation, tactics, and strategy. Data, analytics and the people who effectively evaluate them can be expected to identify more effective approaches for player development, discredit others, and more accurately explain why some traditional approaches have worked.

I suspect that the analytics revolution has had such a profound impact in baseball only because baseball’s culture was ruled for so long by superstition, custom, and arbitrary acts of authority. This culture likely emerged, I am prepared to speculate, because there were so many exceptionally talented players competing for so few spots. Because all of these players were willing to accept pretty much whatever signing bonus or salary they were offered, if these exceptionally talented guys failed for whatever reason, from a team’s perspective, it didn’t much matter because there were plenty of hungry, talented and cheap guys waiting to take their place. Some guys made it through and some guys didn’t; as far as the teams were concerned, it didn’t much matter who made it through or why they made it through — so long as those that did could help to win games. Of course, this model only works when players are cheap. It should come as no surprise that teams have become much more interested in accurately evaluating their players and investing in their development now that signing bonuses and player salaries are substantial and much more reflective of a player’s true contribution to the team’s bottom line. Thanks to collective bargaining and free agency, an economic motive was created that forced teams to treat players as valuable assets rather than disposable widgets.

For a fan of baseball — or a fan like me, anyway — one of the unexpected outcomes of a deep dive into baseball’s analytics revolution* is the realization that the action on the field is very much an effect rather than a cause of a team’s success. Evaluating and acquiring players, developing them, motivating them, and keeping them healthy is the key to winning pennants. Yes, there will always be room for individual on-field heroics that help turn the tide of a game or series, but a player is on the field and physically and mentally prepared to make the big play only if a tremendous amount of work has already been done to put him there. And while I will resist the temptation to make the intentionally provocative claim that the analytics revolution in baseball highlights that on-field play is the least important aspect of a team’s success in baseball, it is nevertheless clear that the data-driven evaluation of all aspects of the game highlights that the managers and players are only one piece of a very large system that makes on-field success possible. At this calibre of competition, with so many talented players on the field, an individual game is probably best understood as a competition between two teams of finely-tuned probabilities working through the contingencies of chance brought about by the interactions of those probabilities. This, I think, not only explains the strange cruelties of the game for both players and fans, but it is also a pretty plausible description of the human condition. Once again, even from the cold, dispassionate perspective of data, baseball looks like a pretty useful metaphor for life.

If my version of the history of professional baseball is (within the ballpark of being) correct, data, analytics and the people who effectively evaluate them have played a revolutionary role in baseball not because they revealed previously unseen truths. Instead, they are revolutionary because they broadened the scope of the kinds of people involved in baseball’s decision-making processes and, in doing so, changed how those decisions are made. By creating a more sophisticated, systematic and coherent methodology to measure and evaluate the contributions of players, the data nerds created a tool with which to challenge the tactical and strategic decisions of the old guard, which too often relied on appeals to custom, individual intuition, and authority. In this way, the data nerds earned themselves a place at the decision-making table. Crucially, baseball’s analytics revolution reminds us that people are the true catalyst and vehicle for change and innovation. It doesn’t matter if some new tool unearths previously unseen truths. If the people in charge aren’t willing to act on them, for all intents and purposes, the earth will remain at the centre of the universe.

The history of baseball also reminds us that a group of individuals working together to achieve some shared goal is much more likely to achieve their goal if they relinquish some of their decision-making autonomy in order to effectively coordinate their efforts. This is as true for hunters and gatherers working together to collect life-sustaining berries as it is for disciplined armies fighting undisciplined hordes. Communities, armies and sports teams that rely on an “invisible hand” to coordinate the actions of their individual members simply aren’t as effective as those that consciously and effectively coordinate their actions. We shouldn’t have to look to baseball’s history to be reminded of this simple truth. Unfortunately, western culture’s misplaced faith in the hope that individuals doing pretty much as they please will accidentally lead to the best outcome has created a culture in which we too often organize ourselves along feudal lines, ceding absolute authority to individuals over some piece of work or part of a larger project, creating silos of silos within more silos. Yes, some leaders have made terrible decisions on behalf of the group, but that is an indication that we need better approaches to leadership, not less coordination.

Baseball’s analytics revolution also reminds us that the coordination of individuals will be most effective when it takes into consideration the actual contributions made by each individual and that this assessment requires a systematic and coherent methodology to be effective. Quick judgements about a person’s contribution based on a small or irrelevant dataset are not an effective way to manage a team for success. An individual’s contribution to their team needs to be assessed based on a significant amount of meaningful, relevant and consistent data, which often needs to be collected over a significant period of time. Additionally, the tactical and strategic decisions based on those evaluations must also be subject to regular assessment and that assessment must be made in terms of the ultimate aim of the shared endeavour. Effective team management requires time, a historical frame of reference, and a long-term vision of success. In other words, there is much more to the successful coordination of a team than a rousing locker room speech or a clever presentation at an annual off-site meeting.

Baseball’s increased interest in data-driven player development also reminds us that the bedrock of long-term success for any team is an ability to recruit and nurture talent, where talent is largely understood to be a willingness to learn and evolve and a willingness to mentor and train. On the one hand, people who are set in their ways are unlikely to adapt to the culture of their new team; additionally, as the team and the work evolves, they won’t evolve with it. On the other hand, if they aren’t willing to mentor and train others, whatever knowledge and skills they have and develop won’t be shared. Yes, data and analytics, like any new tool, can create a competitive advantage in the short-term, but the bedrock of enduring success is people who are committed to learning and developing, and a culture and leadership team that supports and rewards their development.

The final insight from baseball’s analytics revolution might be more difficult to tease out because it challenges a habit that is so perennial that it is probably difficult to see it as anything but natural and given. I said earlier that a data-driven evaluation of all aspects of baseball’s operations is bringing into focus the idea that the action on the field is an outcome of a very complex process and that the success of that process is the fundamental cause of success on the field. If every aspect of a baseball team’s operations is designed and coordinated to ensure that the best players can play as effectively as possible during a game, that team is much more likely to succeed against the competition. An essential feature of this model is the important distinction between the activities undertaken to prepare and train for execution and the execution itself. Crucially, there is substantially more preparation than execution, and it is the quality and effectiveness of the preparation that determines the effectiveness of the execution. With that observation in mind, I’m willing to bet that in work, life and play, you and your team (however you conceive it) spend most of your time executing, very little time preparing, and a whole lot of time not living up to your full potential either as an individual or as a team. In theory, it is possible to approach execution in such a way that it becomes a kind of preparation and training opportunity, but, in practice, it will never be as effective as regularly setting aside time for dedicated and focused periods of training, planning and preparation. Essentially, whatever it is you do and whomever you do it with, if you aren’t taking time to train, practice, and prepare, you aren’t going to be as effective as you otherwise might be.

Ultimately, professional baseball is, I think, a useful case study with which to better understand the potential of the broader data-driven revolution taking place today because of its unique gameplay, specific history, and the financial incentives which rule its day-to-day operations. Because of these factors, the ecosystem of baseball has embraced data, analytics and the people who effectively evaluate them in a way that lets us more easily see the big picture. Because of baseball, it is easy to see that the data-driven revolution is very real but that its potential can only be fully realized if it is the catalyst for welcoming new people and new forms of decision-making into the fold. There are no silver bullets. There are, however, when we are lucky, new people testing new ideas that sometimes work out and insiders who recognize — by choice or by necessity — the value of the new people and their ideas. Unfortunately, this also means that the very people, communities, and organizations who are most likely to benefit from the data-driven revolution and other forms of innovation — those that are ruled by superstition, custom, and arbitrary acts of authority — are the least likely to embrace the people and ideas most likely to make the most of it. And that, I think, is one more important insight into the human condition brought neatly into focus thanks to baseball.

* If enough people express interest, I can put together a bibliography/reading list. However, any good local library should get you headed in the right direction.

The rise and fall of social media: a swift and familiar tale

The rise and fall of social media has been so swift and so familiar that the story probably says more about us than it does about the tools themselves.

In the early days, social media seemed revolutionary and, at times, it was. Unfortunately, like all revolutionaries who win, social media has come to mirror the status quo it had initially challenged.

Twitter, Facebook, YouTube (et al) now look, feel and act very much like the traditional media many of us were avoiding when we first joined these digital networks.

Advertising, of course, was a key player in the counterrevolution, but social media turned to advertising only because a profit had to be turned and it had to be turned quickly. Old and familiar habits die hard when they boost revenues and profits easily.

Meanwhile, as the masters of the social media universe learned to dance to the tune called by the advertisers, the users themselves (myself included) set about trying to monetize their activities on social media. Socializing for its own sake quickly (d)evolved into network marketing. Eventually, those of us who did not become viral millionaires parlayed our social media cred into paid positions. Others simply walked away from the tools. We all returned to our familiar folds, even as we shook our fists at the masters of the social media universe for doing the same.

The pure-of-heart revolutionary will likely sneer at the bourgeois sell-outs, but they can do so with a clear conscience only if they are not at all concerned about hypocrisy. The revolutionary is a network marketer with a different call-to-action.

Indeed, the revolutionary, the marketer, and the poet are of imagination all compact. Whenever they see a crowd, they imagine an army they can rally to their own cause — a cause that, not coincidentally, puts bread on their table too. It’s what we humans seem to do whenever we are given half a chance. We always seem to want to turn the lead of our relationships into the gold of wealth and power. Perhaps it is the natural inverse of the fact that our relationships have always been the surest path to survival, power and wealth.

Social media, it now seems to me, was one more stage upon which we could strut and fret our way through this familiar tale. From time to time in human history, the status quo is upset by some unexpected and novel circumstance like social media. In these times of uncertainty, some outsiders move in, some insiders are forced out, and, eventually, the new and novel is normalized, contained, and pacified. As the dust settles, a new status quo consolidates and the longing for the next revolution begins.

With the benefit of hindsight, I suppose, the only remarkable thing is that I (and, perhaps, others) am surprised by this inevitable outcome. Like Charlie Brown, lying flat on his back staring at the sky, we are dumbfounded that we are on our backs again and, at the same time, incredulous that we fooled ourselves once more into believing for one glorious moment that it might end differently this time.

And it is true, if we look only at the abstract narrative arc: we are trapped, like Charlie Brown, Sisyphus, and the pendulum, in a seemingly futile inevitability. The devil and salvation, however, are in the details. With each push of the rock, every missed football, and each swing of the pendulum, we change and, if we are lucky, we learn. Progress, like science, begins and ends with failure. We push, we race, and we swing not to win but to experience. The reward comes when we return to the rock, race once more towards the football, and swing again into the void hoping against hope — believing — that this time it will end differently, even when we know that it won’t. It is in that moment of hope that we seem to escape the inevitable physics of our humanity. Then, the weight of the rock turns against us, the football is missed, and the pendulum begins its inescapable return. Arc after arc, life after life, generation after generation until, if we are lucky, our descendants live and are different enough from us to look back on our efforts to tell a story of progress. We are trapped but it is not necessarily futile. The trap itself begets the idea of escape and with that hope anything is possible.

My last word on political philosophy (hopefully): chase no more

The fundamental question of politics concerns power: is power an end unto itself?

If it is, politics is fundamentally about managing power. It involves creating and managing social practices that determine who wields power and the extent to which they wield it. In principle, power could be exercised with an eye to true, good or best outcomes, but, so long as power is seen as an end unto itself, gaining, maintaining and exercising power will always trump the true, the good, or the best. Inevitably, this kind of politics is or will become authoritarian because any balance of power will always eventually be upset in favour of someone or some group.

If power is not an end unto itself, politics is fundamentally a form of inquiry. It involves creating social practices that have the best chance of identifying true, good, or best outcomes. It is unlikely that any set of social practices will always identify true, good, or best outcomes, but the shared commitment to social practices that aim for these kinds of outcomes can, nevertheless, justify abiding by outcomes even when we or others disagree with them. This kind of politics relies on both the expertise of the individual and the wisdom of the crowd.

In principle, we could empirically determine which of these two approaches to politics works best for human flourishing. In practice, however, people who think power is an end unto itself are little interested in empirical justification. For them, the experience of power is the most important consideration. It trumps all other considerations, including empirical evidence.

The human propensity to treat power as an end unto itself is, I think, the essential challenge of all politics. The authoritarian urge seems to be primordial, in an infantile sort of way, and can manifest itself in anyone and everyone, wherever they happen to fall on the conventional political spectrum. It also seems highly unlikely that there is any particular set of social practices that will exorcise the authoritarian urge from human existence. Instead, we must constantly work to correct, inhibit and contain it whenever and wherever it might emerge.

We must also accept that people who treat power as an end unto itself are not interested in facts, figures, argument or reason unless these are used to buttress their own power. Accordingly, it is appropriate, I think, to use power to contain or dispose of those who treat power as an end unto itself. However, if we are successful, we must be careful to remember that it does not prove that we are right and they are wrong. It only shows that we are sufficiently powerful to contain or dispose of those who would use power to contain or dispose of us, whatever the merits of our beliefs and values may be. A successful exercise of power proves nothing about the truth, value or merit of anyone’s beliefs. Might does not make right, even if it is our right that it serves.

*

At some point in their growth and development, all things being equal, most humans will be able to make effective judgments about most matters that relate to them. No person will always be right but no person will always be wrong either. Furthermore, between right and wrong, there will always be many different judgments a person can reach that, all things being equal, are reasonable even if they are not wholly correct or wholly wrong.

Similarly, when a majority of people who are effective judges independently reach the same conclusion about some state of affairs, all things being equal, the fact of that independently shared judgement is the best evidence we have that the conclusion is correct. We can’t say with absolute certainty that the conclusion is correct but, in most cases and as a general rule, we should tentatively accept that the conclusion is probably correct even if we or others disagree with it. At the same time, we should also accept that we may learn in the future that the conclusion is incorrect. That is simply the nature of inquiry, political or otherwise.

It is the interplay between the effective judgments of individuals and the wisdom of the crowd that drives and shapes any politics conceived as a form of inquiry. The ultimate aim is to develop social practices that make the most of both. Practically-speaking, this means we should expect our social practices to evolve and change over time. We must always be ready to propose and test new ideas, mechanisms, and institutions and we must give up on the idea that any one person or any one group of people can, could have or will ever identify the one and only true form of government for all time. To do otherwise is to simply give up on the hope that our understanding of the world and each other grows and evolves over time.

*

Politics does not only happen at the ballot box or when parliament is in session or between the commercials of the nightly news. It happens wherever we live, work and play. It happens whenever we decide together how we are going to live, work, and play. It happens wherever and whenever we answer in word and deed the question: is power an end unto itself?

Our answers shape our lives, our communities, our society.

*

It takes only a moment of reflection to realize that we live most of our lives in authoritarian communities, organizations and institutions.

We are born into families that are authoritarian. We are educated in institutions that are authoritarian. We work at jobs that are authoritarian. Our political system is run, administered and governed by authoritarian individuals, groups and institutions. Our economy too.

The habits and practices of politics are like any other. We learn from doing and, if authoritarianism is all we do, then, our politics are also authoritarian, whatever we might think of the ribbons and bows of periodic elections. Elections are also an instrument of authoritarianism.

*

I want to tell a noble lie. I want to claim that we need only conceive of politics as a form of inquiry to ensure everything will always work out well for everyone. Unfortunately, inquiry doesn’t work that way. We can make better or worse judgements based on the evidence, but there is nothing in and of itself that can definitively point the way to the best outcomes for all people for all time. There are no guarantees.

We also can’t avoid the use of power and there is always — always — a risk that we will abuse it, even when we use it judiciously and cautiously. Nothing can absolve us of the responsibility of the wrongs we may do even when we intend to do right. There are better and worse ways to avoid the abuse of power, but there is nothing in and of itself that will prevent all people for all time from abusing power. Again, there are no guarantees.

And, perhaps, after all these years, that is all the political philosophy I need.

I suspect now that I may have wanted much more than that only because I also wanted there to be some kind of secular magic that would guarantee the best outcomes for all people for all time and that would also absolve me of any responsibility to attend to the unintended consequences of my well-intentioned actions. I suspect I also wanted to avoid the messy and uncertain business of winning friends, influencing people, and fighting enemies. I hoped also, I think, that I might bequeath to the world some magical words that would help solve all problems everywhere. I would then be free to enjoy the beauty of the day safe in the comfort that I had done all that I could do to make the world a better place without ever breaking an egg, pulling a trigger or currying favour. I see now that I was chasing a chimera, a wild goose, and a dragon all in one.

*

I am suddenly reminded that my very first essay in political philosophy was written in grade eleven or, perhaps, grade twelve. It was a short paper that attempted to explain what Marx had meant by the notion that religion is “the opiate of the masses.” I don’t remember if I wrote anything noteworthy, but I do remember struggling to write the paper. I also remember enjoying very much the struggle to write it. I also received a good mark. It’s easy to imagine that the struggle and the reward made me feel important — perhaps, even special. It probably provided a heady rush of meaning, purpose, and distinction at a time of lonely adolescence. Like opiates everywhere, it soothed and it distracted and, like junkies everywhere, I remember that first fix with a mix of fondness, regret, and understanding.

It has been said before and it will be said again: “In my beginning is my end.”

Wonder upon wonder: the I in the absence of history

Histories are an afterthought. They are written after the experiences they describe. They are normally written by the victors.

I wonder:

Is it only with the benefit of hindsight that we understand that we lived through history, or is it possible to experience something as history — in the making, as it is so often said?

*

I recently finished reading Dr. Zhivago. While reading it, it felt like I was reading a story about people who were experiencing history. It also felt like I might have developed a better understanding of my own experience of the Russian Revolution had I lived through it and then read the book. Although the characters in the story do not — I am guessing — represent all the experiences of the revolution, it also felt like those who were omitted from the story would feel included precisely because they were absent from a story of which they knew they were an essential part. I can imagine an old peasant nodding to himself over vodka and muttering, “Ah, yes, that was Pasternak’s take on things, but he saw it that way only because, like so many of his generation, he didn’t see it as I saw it. Let me tell you about the truth of the revolution.”  

I wonder:

Is it Pasternak’s skill as an author that makes me feel like his story is inclusively exclusive or is it the all-encompassing nature of the revolution that he was trying to document that makes me feel that way?

*

I have lived through a number of events, which, by any standard or measure, should count as history in the making: the fall of the Berlin Wall and, eventually, the Soviet Union, the rise of the American corporate kleptocracy, globalization, the dawn of the digital age, and the uneven march of social justice. However, it does not feel to me like anyone could write a history of those events, individually or collectively, that would be encompassing and inclusive in the same way that Pasternak’s seems to be. I cannot imagine a history that would help me better understand my own experience of those events.    

I wonder:

Have we lost an ability to write and read all-encompassing histories like Pasternak’s or are the kinds of events that histories are normally written about no longer unavoidable as they once were? Today, can we opt out of the very stuff of history in a way that was previously impossible?

*

The capitalist kleptocrats, by any objective measure, are the victors in the western industrialized and colonial world. Their history, history has shown, is our history.

I wonder:

Is the seeming absence of an all-encompassing history of our times by design or is its absence an indication that the battle has been won but the war not lost? Is history the greatest spoil of war or its final battle? 

*

My initial thoughts:

The Russian Revolution probably was all-encompassing in a way that the Capitalist Kleptocrat Revolution is not, but the difference lies not in the magnitude or significance of the revolutions, but in the self-understanding of the people who lived through them. Society today is so fractured and atomistic that there seems to be little appetite for experiences or histories that speak to and for all of us. This, I think, is both a symptom of and a crucial tactic in the Capitalist Kleptocrat Revolution. We have all been affected by this revolution, and it has, in winning the moment, convinced all of us that we have not been individually affected by it. In the absence of a history, it is difficult to even see that a revolution has taken place.

I wonder:    

Grand all-encompassing histories have rightly, I think, undergone a sustained and withering critique in recent decades. These kinds of history have been instruments of oppression — excluding, silencing, and marginalizing — but must a history that aspires to be universal always be oppressive? Even stronger: do we need these kinds of histories to better understand our place in society, even if it is only to see that we are at the margins? And finally: when we give up on history, do we also concede the war?

The condition of my humanity: arrogant humility

It would be fair to say that I have spent most of my life thinking about the human condition.

The catalyst for this lifelong reflection was the profound realization, at the age of nineteen, that God does not exist. At the time, it seemed that the fact of God’s non-existence was a big deal. I also thought that a full and proper understanding of this fact would have profound consequences for the way I, you, all of us should live. I expected profound consequences because we live in ways that have been built on and around the idea that God exists. Remove the keystone of God’s existence, I thought, and the structure of everything would fall away, and we could rebuild everything anew. I read, I argued, I taught and, in the end, I realized that God’s existence or non-existence is pretty much irrelevant to deciding how we should live.

Then, it occurred to me that capital-T truth does not exist. It seemed to me that this was the fundamentally important fact, for more or less the same reasons that I thought God’s non-existence was so important. Again, I hoped that if I thought long and hard enough about it that I would identify some profound implications for the way I, you, all of us should live. I read, I argued, I taught and, in the end, I realized that the existence or non-existence of capital-T truth is as irrelevant to how we live as the existence or non-existence of God, for more or less the same reasons. Whatever you or I may believe about the nature of truth, it doesn’t really matter when it comes to deciding how we should live.

Then, it occurred to me that a fully naturalized and evolutionary understanding of consciousness was the key. Because culture and society begin and end with humans, it seemed reasonable to conclude that a better understanding of the human nervous system would lead to profound implications for the way I, you, all of us should live. Moreover, for the first time in human history we had tools that allowed us to exorcise the quasi-divine conception of self we had inherited from our ancestors. The moon may have already been conquered by others but we are the first humans to tread on the very stuff of the human condition. And while it remains theoretically possible that some unimaginable discovery yet to be made will falsify the conclusion that I am about to share with you (and that you should really be able to anticipate by now), after reading, arguing, and teaching, I have reached the conclusion that we will never be able to draw unassailable and universally compelling conclusions about how we should live based on a fully naturalized and evolutionary understanding of consciousness either.

The crucial words here are “unassailable” and “universally compelling”. With the benefit of hindsight, I see now that I was hoping to find a conclusion, a claim, an idea, something that would win in every argument and always compel all others to action. I was doing what prophets and priests and philosophers and warlords have been doing since time immemorial. I was trying to derive an “ought” from an “is” and hoping that the “ought” would be so magical and powerful that everyone would be swayed by it. The subtle and not terribly sophisticated difference is that I was trying to derive an unyielding “ought” from a “not is” instead of an “is.” Rather than saying, “x, therefore you must do y”, I was saying (or hoping for), “not x, therefore you must do y.” For example, instead of “God is love, therefore, we should do good,” I was hoping for “there is no God, therefore, we should do good.” And while it remains intuitively plausible to me even now that there is some special significance in the fact that things like God and capital-T truth don’t exist, I know that it is as nonsensical to draw unconditional moral claims based on what is not as it is to draw unconditional moral claims based on what is.

And, as important as that conclusion may be, the far more important insight, I think, is that the very idea of an unassailable and universally compelling argument is a coercive fantasy. It is essentially the hope that might and right are identical and that rightness can in and of itself compel others to believe and act. It is also an idea that leads, I think, either to passivity or to oppression because, if right and might are one and the same, either unpopular beliefs are not quite right or there is something not quite right with everyone who fails to accept and act on beliefs we think are right. If a belief, idea or way of life fails to compel acceptance and motivate action, we either think less of that which was not compelling or think less of the people who failed to be compelled. So, either we end up believing and doing nothing because the burden of proof is impossibly high or we do whatever we want because disagreement is proof that those who disagree with us are somehow broken or not fully human and, for this reason, don’t deserve our consideration and can be compelled to do anything we want.

It’s also crucial, I think, to realize that might comes in many forms, is expressed in many ways, and is never in itself a measure of rightness whatever its form or expression. Most people, for example, would probably now accept the notion that the strength of a person’s muscles has no bearing on the validity of their beliefs, and yet many today still believe that the strength of a country’s military or its economy is a measure of the rightness of its moral and political values. Vote-getting, profit-making, and fundraising are often thought to be legitimate measures of rightness but they really only indicate what can attract votes, profits, and charity at any given point in time. An argument, a speech or an essay may be persuasive, but this in itself is proof only of its persuasiveness. Charm may be non-violent, but there is no reason to think that a consensus built on it is any more true than a consensus built on fear. Might comes in many forms, and it never makes right — even when it is expressed in a way we admire or by people we like.

I should, nevertheless, be explicit on this point: coercion is an inescapable fact of social and political life. We must sometimes coerce people to do things they would rather not do (remember: forcing people not to interfere in the lives of others is a form of coercion too). However, we should always coerce cautiously and from a place of humility, respect and empathy, recognizing that there will be times when we will also be coerced to do something we would rather not do. Most importantly, we must never conclude that our ability to force a person to do something that they would rather not do proves anything about the merits of our beliefs, our way of life or our worldview. Coercion becomes oppression, I think, precisely when we start to believe that our might — whether it be physical, intellectual, emotional, financial, electoral, anything — is proof that we are right. It is one thing to force people to comply with, say, a political or legal decision with which they do not fully agree, while at the same time recognizing that the decision may be imperfect. It is something altogether different to force compliance and, at the same time, insist that coercion would be unnecessary if only those who were being coerced were more rational, compassionate, or open-minded — or whatever term we might use to signal that they are to blame for not seeing it our way. We must, I think, always remain mindful of the fact that any one of us — and not just those people who we think are the bad guys — can walk the path of good intentions from coercion to oppression.

With that important caveat in mind, we must, nevertheless, carry on living and, in my own case, I have come to embrace an attitude of what might be called arrogant humility. I’m arrogant enough to think I have a pretty good shot at making pretty good judgments about what is or is not the best course of action in most situations, when I do the work to gather and consider enough of the relevant evidence. I am also humble enough to accept that I often get it wrong, that I have blind spots, and that some of my most cherished and well-considered beliefs might be totally wrong. In short, I’ve come to trust my judgement, while at the same time accepting its limitations and failings. I am no longer looking for something — or a not-something — to validate my beliefs, decisions and failings.

I will not, however, claim that all people should necessarily adopt this attitude. I can’t ignore the fact that much good has come from people who have put their faith in God, who pursue the Truth, or stand their ground in the name of moral facts that they consider to be self-evident. I am also well aware that much evil has been done in the name of God, Truth, and indubitable moral facts written into the bones of nature. However, when I consider the evidence available, I am not convinced that these attitudes necessarily lead to good or evil. Whether a person has faith in God or in their own judgement, they must consider the evidence and make judgments based on it. They and I may sometimes disagree over what counts as admissible evidence, but a shared commitment to the fact that might does not make right and right does not make might seems to me to be much more important than a shared opinion about the nature of God.

And once I set aside worries about the existence or non-existence of God, Truth, and Human Nature, it was much easier for me to see that there is both too little and too much to say about the human condition. From one perspective, we are simple, fleeting and trivial creatures who, like all the other quirks and quarks in a cold, vast and indifferent universe, are, in principle, perfectly predictable. From another perspective, the human condition is an unimaginably rich and cacophonous kaleidoscope of boundless possibility and each human life is unique, beautiful, and precious. The human condition is a lot like the weather, I think. Seen from on high, it is simple and perfectly predictable, but, closer to the ground, it is complex, varied and difficult to predict, and, at the eye of the storm, no two storms are ever quite the same for those who experience them — no matter what the experts, instruments, and equations may say.

And that’s all I have to say about that (I think).

The All or Nothing Pursuit of Wealth Destroys the Ground of Wealth: Society

Henry Ford famously quipped, “any customer can have a car painted any color that he wants so long as it is black.” The logic of the assembly line demanded a quick-drying paint. At the time of the quip, it was available only in black. So, black paint became the only choice of colour for Ford’s famous Model Ts.

Today, Ford’s one-size-fits-all notion of choice is the dominant ideology of our lives. Live any way you want, we are told, so long as you live exactly the way we want you to live. Pursue wealth before all else. Get rich or die trying.

Death, in this old world order made new, isn’t necessarily literal. Indeed, it starts as a social death and too easily becomes a literal death. When wealth is the highest aim of society, there can only be Haves and Have-nots. The Haves matter and the Have-nots do not. Only the Haves, the Haves quickly decide, are entitled to a long, healthy and meaningful life.

Wealth, of course, has a role to play in any just society. We should accept and encourage some wealth accumulation. It is an important means to many other valuable aspects of life. For some, it is also a useful motivator. It shouldn’t, however, be the highest or only goal of society or of all lives.

For one, wealth can be hoarded. Yes, happiness, good health, and justice can be denied. They can’t, however, be hoarded in offshore accounts. Moreover, wealth, once hoarded, becomes one of the most effective means to deny happiness, good health, and justice to others.

There are also limitations to the instrumental value of wealth. Once a person amasses a certain amount of wealth, more wealth does not create any more lasting happiness. Excessive wealth buys only fleeting pleasures. It can even lead to frustration. Like addicts everywhere, the wealthy soon find themselves chasing the dragon of hedonistic pleasure, at the expense of other people’s well-being.

There are also many paths to wealth that are neither good, fair, virtuous, efficient, nor sustainable. Not every path to wealth is evil. It is, nevertheless, far too easy to embrace evil in wealth’s name when wealth becomes the only or most important goal.

And finally, a society focussed on maximizing wealth tends to suppress and even destroy other valuable ways of life that are not focussed exclusively on the accumulation of wealth. From the perspective of justice and from the perspective of security, this is a concern.

In a just society, all people should be able to pursue whatever way of life they enjoy and have enough wealth to have a decent life too. In a secure society, diversity is the source of its strength. A society in which a rich variety of ways of life flourish is likely to be both just and secure. In contrast, a society that is focussed too much on wealth can only be unjust, weak, and prone to collapse.

There are many reasons for humans to come together, to cooperate, and to work together. Wealth creation will always be one important goal for human society. We are always better off materially when we work together. Wealth, however, should never be the highest or only goal of society. It’s a false idol. Its worship leads only to its own downfall. Divided by wealth, human society – the very ground of wealth – is doomed to collapse.

*

SUPPORT MY THINKING AND WRITING ON PATREON

*

No Reason, No Cry: Determinism May Be Good For Your Health

People who insist that there is no such thing as free will often make an important gaffe as they dismiss – often trenchantly – the opinion of people who insist there must be something like free will. This gaffe points to an important – and often overlooked – implication of the fact that we likely don’t have free will.

Here is a convenient example of the gaffe:

“Given the dubious claim that rejecting free will damages society, and the undoubted benefits to our judicial system of embracing determinism, I’m still baffled by why compatibilists continue to argue that we NEED [sic] some notion of free will. […] Science tells us that our behavior is not under our conscious control ….”

Do you see it?

If there is no such thing as free will, there really is no reason to be baffled by the fact that compatibilists continue to hold their position and argue for it. It’s not like the compatibilists can freely and consciously choose to believe other than they believe, or argue other than they argue. They aren’t deciding to hold on to their view in the face of evidence to the contrary. No, they persist in their compatibilist belief and argue for it because of a complex, probably unknowable, and wholly determined process. Yes, their beliefs may change, but it won’t happen because they freely and consciously will that change. It will only happen if the necessary pieces in the deterministic puzzle fall into place. Otherwise, they will continue to be compatibilists and argue for the compatibilist position for as long as whatever wholly determined process makes it so.

And these observations, I think, point to a very important implication of the non-existence of free will that seems to be often overlooked. If there is no free will, there really is no such thing as “rationality,” “choice,” or “decision,” as we have typically understood them in modern times, because typically they are thought to involve an ability to freely choose between the true and the false, the right and the wrong, the this or the that. But, of course, if there is no such thing as free will, that can’t be correct. Instead, it must be the case that people reason, choose, and/or decide because of a wholly determined process – in all likelihood, the interactions of our brains and genes with the environment. To put it bluntly, without free will, we must discard any notion of human reason, which presumes we can freely will our way in and out of beliefs or anything else about which we might reason.   

Admittedly, for many people, that will be a difficult pill to swallow. Reason (or, if you prefer, rationality), like the enduring love of the one true God, is often thought to be the defining feature of our species. It is the secular magic wand that is often used to draw a line between us and the brutes. Without a totally free, capital-R reason, we humans don’t look very special when we compare ourselves to all the other wholly determined objects banging around the universe. For some, the prospect of having no special place in the universe might be as frightening as realizing that there is no God to answer our prayers.

To further complicate matters, on first impression, it will be very easy to think there are profound and scary consequences to this realization that human reason does not exist. While it is certainly true that we will need to rethink some of our theories about human behaviour, in practice, it won’t make a lot of difference in most people’s lives. Why? Because if it is true that there are no such things as free will and human reason, it has always been true. Our description of an underlying process can change, but it doesn’t necessarily change the underlying process. To be sure, some people can be expected to act differently once the neurons in their brain realign to reflect the probably new belief that free will and reason don’t really exist, but how they respond to these changes is anyone’s guess. There are certainly no grounds to assume they will act any differently.

At the level of systematic inquiry, the biggest challenge – and opportunity – will be in the realm of moral and political theory, where it is very often assumed we humans are capable of the very kind of reasoning that is impossible in a universe of deterministic laws. As a result, I’m inclined to think many conceptions of morality and politics will need to be discarded or dramatically rethought. On the plus side, we will, I think, be able to look at old phenomena from a fresh perspective. For example, the fact that voters often vote against their interests only seems perplexing when we think they can freely choose between the relevant candidates or policies. Instead, the fact that voters often vote against their own interests makes much more sense when we accept that those votes are determined by factors beyond the control of any one voter.

Strictly speaking, what I am proposing is not terribly radical, even if my characterization may be unsettling to some. For example, behavioural economists, primarily as a result of important and influential work in psychology, have already accepted the notion that we humans don’t reason anything like economists once thought we did. They are now adjusting their theories and research methods accordingly. Furthermore, it can be claimed, I think, that economists have always implicitly assumed that people don’t really reason freely because one of their fundamental claims has always been that the vortex of the market somehow magically makes all people freely choose to act in entirely predictable ways – which hardly seems free at all. Economists were, for a long time, perplexed by the fact that actual humans rarely act in accord with the predictions of their theories. Now, because of the historical evolution of the discipline, behavioural economists tend to talk as if we humans are poor at reasoning. However, it must be the case that we don’t reason at all, if by “reason” we mean anything that involves the exercise of one’s free will.

“If what you are saying is true,” the unsympathetic reader might ask, “why do you even bother sharing your ideas?” The answer, of course, is simple. I am one instance of a species that has reproduced successfully because a critical mass of us have always done something pretty much like what I am doing now – sharing ideas that cause people to take on those ideas as their own. Moreover, the part of me that thinks it is freely choosing to think and write in the way that I do can also point to research that suggests that mere exposure to an idea can cause people to judge it to be true, whether they realize it or not. So, if you’ve come this far, you’re already more susceptible to believing the claim that there is no such thing as free will and, for the part of me that thinks it is in control, that is as good a reason as any to share an idea. 

I also happen to think the idea that there is no such thing as free will can lead to positive and practical outcomes in one’s life. In my own case, as my neurons have rewired themselves in whatever way is required for my conscious mind to take seriously the notion we don’t have free will, I’ve discovered I am much less likely to get frustrated and angry with myself and others. From this new perspective, for example, people who disagree with me aren’t willfully ignoring the facts or failing to reason properly, they are simply following a wholly determined path over which they and I have no control. On the other hand, if I am the one who is wrong and not aware of it, there’s not a whole lot I can do about it, other than put myself into situations and environments that might stimulate correct belief and then wait for the cognitive miracle to come. Similarly, along those lines, if I make mistakes in my day-to-day life or fail to live up to some personal ideal, I am also much less likely to get angry with myself. Instead of punishing myself for a failure of will, I focus instead on the mental gymnastics that will keep me moving towards my goal or ideal, which – not surprisingly – is exactly what the best teachers do.

Coming full circle, the main claim I’m advancing is, I think, straightforward. If you accept the view that there is no free will, expressing bafflement, frustration or even anger about other people’s unwillingness to share your view on free will (or any view, really) doesn’t make much sense. Admittedly, even if you agree with me, accepting and acting on my observation is not likely to be automatic. It will take time for your neurons to rewire themselves in whatever way will produce in you a new habit or behaviour. Of course, there is also a good chance that you disagree with me (and, I’d enjoy hearing why in the comments section below), but, please remember, whether or not we agree — or come to agree — is ultimately beyond our control.

*