Data, analytics, and the human condition: life lessons from baseball’s data-driven revolution

The history of professional baseball is, I think, the story of talented, skilled and experienced individuals relinquishing some of their decision-making autonomy to better coordinate their actions with others for the overall benefit of the group. In recent years, data, analytics and the people who effectively evaluate them have played a key role in this coordination effort. As a result, baseball’s history is, I think, a useful case study with which to better understand the value of the broader data-driven revolution that is well underway in many parts of our lives.

In the early days of professional baseball, individual players played as they pleased within the rules and conventions of the game. The manager was able to exercise control over some on-field decisions because he decided who played each day. He used that authority to bend players to his will, whether or not his will led to success. In some remarkable instances, players were “too good not to play,” and they continued to play as they pleased, succeeding and failing according to their own set of rules. Their natural, God-given talent was taken as proof that they could play by a different set of rules or none at all.

Today, because of data and modern analytics, managers and players rely on the advice and decisions of people who have often never played the game and who rarely step on the field. At first, these data-driven and analytical outsiders had to persuade the insiders to act on their insights and recommendations. Eventually, the people who control the purse strings recognized the value of data-driven analysis and decision-making. As a result, the data nerds are now themselves insiders and enjoin rather than entreat. It also seems likely that their influence on the game will continue to grow. For example, data-driven analysis is now influencing player development, which historically, as far as I can tell, has been an unfathomable mix of toxic masculinity, superstition, blind luck, and occasional strokes of genuine and well-intentioned pedagogy.

This turn towards player development is happening in large part because most teams have now embraced data, analytics, and the people who effectively evaluate them. As a result, the competitive edge associated with the analytics revolution has been blunted somewhat. For example, even if a clever analyst is able to identify an innovative way to evaluate players, whatever advantage is gained will be short-lived because player acquisition is a very public activity. Eventually, some other team’s analyst will crack the code underpinning the innovative and unexpected acquisition. In contrast, if a team can use data and analytics to improve its player development, which happens behind the mostly closed doors of its training facilities, turning average players into good players and good players into stars, it has a huge opportunity to win more games at a much lower cost. It can sign players based on their current value and develop them into higher-value players while under the original contract. Crucially, because teaching and development must always be tailored to the student, even if we imagine that an ideal system for player development can be broadly identified and becomes widely known and understood, there will be plenty of room, I think, for innovation and competitive specialization. Although a handful of very successful teams already have a history of identifying and nurturing talent in-house, the future of player development will probably look a lot like the recent history of data’s influence on player evaluation, tactics, and strategy. Data, analytics and the people who effectively evaluate them can be expected to identify more effective approaches to player development, discredit others, and more accurately explain why some traditional approaches have worked.

I suspect that the analytics revolution has had such a profound impact in baseball only because baseball’s culture was ruled for so long by superstition, custom, and arbitrary acts of authority. This culture likely emerged, I am prepared to speculate, because there were so many exceptionally talented players competing for so few spots. Because all of these players were willing to accept pretty much whatever signing bonus or salary they were offered, it didn’t much matter to a team if any of them failed: there were plenty of hungry, talented and cheap guys waiting to take their place. Some guys made it through and some guys didn’t; as far as the teams were concerned, it didn’t much matter who made it through or why they made it through — so long as those that did could help to win games. Of course, this model only works when players are cheap. It should come as no surprise that teams have become much more interested in accurately evaluating their players and investing in their development now that signing bonuses and player salaries are substantial and much more reflective of a player’s true contribution to the team’s bottom line. Thanks to collective bargaining and free agency, an economic motive was created that forced teams to treat players as valuable assets rather than disposable widgets.

For a fan of baseball — or a fan like me, anyway — one of the unexpected outcomes of a deep dive into baseball’s analytics revolution* is the realization that the action on the field is very much an effect rather than a cause of a team’s success. Evaluating and acquiring players, developing them, motivating them, and keeping them healthy is the key to winning pennants. Yes, there will always be room for individual on-field heroics that help turn the tide of a game or series, but a player is on the field and physically and mentally prepared to make the big play only if a tremendous amount of work has already been done to put him there. And while I will resist the temptation to make the intentionally provocative claim that the analytics revolution in baseball highlights that on-field play is the least important aspect of a team’s success, it is nevertheless clear that the data-driven evaluation of all aspects of the game highlights that the managers and players are only one piece of a very large system that makes on-field success possible. At this calibre of competition, with so many talented players on the field, an individual game is probably best understood as a competition between two teams of finely-tuned probabilities working through the contingencies of chance brought about by the interactions of those probabilities. This, I think, not only explains the strange cruelties of the game for both players and fans, but it is also a pretty plausible description of the human condition. Once again, even from the cold, dispassionate perspective of data, baseball looks like a pretty useful metaphor for life.
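For the probabilistically inclined, here is a minimal sketch of that idea. It assumes a deliberately toy model, with every number invented for illustration, in which the better team wins any single game with probability 0.55, a huge edge by baseball standards:

```python
# A toy model of a game as interacting probabilities: the "better" team
# wins any single game with an assumed probability of 0.55. Even with
# that edge, it loses individual games and short series all the time.
import random

random.seed(42)

P_BETTER_TEAM_WINS = 0.55  # assumed single-game win probability
TRIALS = 20_000            # simulated series per format

def better_team_wins_series(games: int) -> bool:
    """Play a best-of-`games` series; True if the better team takes it."""
    wins_needed = games // 2 + 1
    wins = losses = 0
    while wins < wins_needed and losses < wins_needed:
        if random.random() < P_BETTER_TEAM_WINS:
            wins += 1
        else:
            losses += 1
    return wins == wins_needed

for series_length in (1, 7, 161):  # a game, a playoff series, a season-ish run
    won = sum(better_team_wins_series(series_length) for _ in range(TRIALS))
    print(f"best of {series_length:>3}: better team wins {won / TRIALS:.1%}")
```

Run it and the strange cruelties fall out of the arithmetic: the clearly better team still loses a best-of-seven series roughly four times out of ten, and its superiority only becomes reliably visible over something like a full season.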

If my version of the history of professional baseball is (within the ballpark of being) correct, data, analytics and the people who effectively evaluate them have played a revolutionary role in baseball not because they revealed previously unseen truths. Instead, they are revolutionary because they broadened the scope of the kinds of people involved in baseball’s decision-making processes and, in doing so, changed how those decisions are made. By creating a more sophisticated, systematic and coherent methodology to measure and evaluate the contributions of players, the data nerds created a tool with which to challenge the tactical and strategic decisions of the old guard, which too often relied on appeals to custom, individual intuition, and authority. In this way, the data nerds earned themselves a place at the decision-making table. Crucially, baseball’s analytics revolution reminds us that people are the true catalyst and vehicle for change and innovation. It doesn’t matter if some new tool unearths previously unseen truths: if the people in charge aren’t willing to act on them, for all intents and purposes, the earth will remain at the centre of the universe.

The history of baseball also reminds us that a group of individuals working together to achieve some shared goal is much more likely to achieve their goal if they relinquish some of their decision-making autonomy in order to effectively coordinate their efforts. This is as true for hunters and gatherers working together to collect life-sustaining berries as it is for disciplined armies fighting undisciplined hordes. Communities, armies and sports teams that rely on an “invisible hand” to coordinate the actions of their individual members simply aren’t as effective as those that consciously and effectively coordinate their actions. We shouldn’t have to look to baseball’s history to be reminded of this simple truth. Unfortunately, western culture’s misplaced faith that individuals doing pretty much as they please will accidentally lead to the best outcome means we too often organize ourselves along feudal lines, ceding absolute authority to individuals over some piece of work or part of a larger project, creating silos of silos within more silos. Yes, some leaders have made terrible decisions on behalf of the group, but that is an indication that we need better approaches to leadership, not less coordination.

Baseball’s analytics revolution also reminds us that the coordination of individuals will be most effective when it takes into consideration the actual contributions made by each individual and that this assessment requires a systematic and coherent methodology to be effective. Quick judgements about a person’s contribution based on a small or irrelevant dataset are not an effective way to manage a team for success. An individual’s contribution to their team needs to be assessed based on a significant amount of meaningful, relevant and consistent data, which often needs to be collected over a significant period of time. Additionally, the tactical and strategic decisions based on those evaluations must also be subject to regular assessment and that assessment must be made in terms of the ultimate aim of the shared endeavour. Effective team management requires time, a historical frame of reference, and a long-term vision of success. In other words, there is much more to the successful coordination of a team than a rousing locker room speech or a clever presentation at an annual off-site meeting.

Baseball’s increased interest in data-driven player development also reminds us that the bedrock of long-term success for any team is an ability to recruit and nurture talent, where talent is largely understood to be a willingness to learn and evolve and a willingness to mentor and train. On the one hand, people who are set in their ways are unlikely to adapt to the culture of their new team; additionally, as the team and the work evolve, they won’t evolve with them. On the other hand, if they aren’t willing to mentor and train others, whatever knowledge and skills they have and develop won’t be shared. Yes, data and analytics, like any new tool, can create a competitive advantage in the short term, but the bedrock of enduring success is people who are committed to learning and developing, and a culture and leadership team that supports and rewards their development.

The final insight from baseball’s analytics revolution might be more difficult to tease out because it challenges a habit that is so perennial that it is probably difficult to see it as anything but natural and given. I said earlier that a data-driven evaluation of all aspects of baseball’s operations is bringing into focus the idea that the action on the field is an outcome of a very complex process and that the success of that process is the fundamental cause of success on the field. If every aspect of a baseball team’s operations is designed and coordinated to ensure that the best players can play as effectively as possible during a game, that team is much more likely to succeed against the competition. An essential feature of this model is the important distinction between the activities undertaken to prepare and train for execution and the execution itself. Crucially, there is substantially more preparation than execution, and it is the quality and effectiveness of the preparation that determines the effectiveness of the execution. With that observation in mind, I’m willing to bet that in work, life and play, you and your team (however you conceive it) spend most of your time executing, very little time preparing, and a whole lot of time not living up to your full potential either as an individual or as a team. In theory, it is possible to approach execution in such a way that it becomes a kind of preparation and training opportunity, but, in practice, it will never be as effective as regularly setting aside time for dedicated and focused periods of training, planning and preparation. Essentially, whatever it is you do and whomever you do it with, if you aren’t taking time to train, practice, and prepare, you aren’t going to be as effective as you otherwise might be.

Ultimately, professional baseball is, I think, a useful case study with which to better understand the potential of the broader data-driven revolution taking place today because of its unique gameplay, specific history, and the financial incentives which rule its day-to-day operations. Because of these factors, the ecosystem of baseball has embraced data, analytics and the people who effectively evaluate them in a way that lets us more easily see the big picture. Because of baseball, it is easy to see that the data-driven revolution is very real but that its potential can only be fully realized if it is the catalyst for welcoming new people and new forms of decision-making into the fold. There are no silver bullets. There are, however, when we are lucky, new people testing new ideas that sometimes work out and insiders who recognize — by choice or by necessity — the value of the new people and their ideas. Unfortunately, this also means that the very people, communities, and organizations who are most likely to benefit from the data-driven revolution and other forms of innovation — those that are ruled by superstition, custom, and arbitrary acts of authority — are the least likely to embrace the people and ideas most likely to make the most of it. And that, I think, is one more important insight into the human condition brought neatly into focus thanks to baseball.

* If enough people express interest, I can put together a bibliography/reading list. However, any good local library should get you headed in the right direction.

Between the wake of living and the insensibility of death: the experience of now

It’s an old and familiar trope; as a young man, it would enrage me.

Picture it: an old person, who is tired of living, decides that they are ready to die. Then, they close their eyes and die, as if the matter was decided in that moment — probably after some important milestone had passed and some important wisdom had been imparted.

The decision itself to die is not, I think, the key issue. Death as the ultimate sacrifice, in the name of some higher principle or for the benefit of some other person, has always tickled my adolescent fancy. Likewise, for as long as I can remember, I have always thought suicide to be an appropriate response to a cruel and terminal illness, even if it isn’t the choice I would make for myself.

I think the trope enraged me because it eulogized a decision to acquiesce to death’s inevitable and final ushering for no other reason than the old person’s indifference to life. The old person could live longer; they simply choose not to because they don’t much see the point in living any longer. It seemed to me to be the ultimate betrayal of the very idea of life, in all of its stubborn glory. Death is not an undiscovered country; it is an insensibility to be resisted at all costs until the very moment of consumption and consummation.

However, now that I have made it to middle age, I have found that the trope no longer enrages me. The decision to acquiesce to death, however unpalatable such acquiescence may be to me, even seems to make sense, once the nature of lived experience is rightly understood.

When I was younger, lived experience seemed much more concrete and enduring, even after it had already been lost under the wake of living, because the amount of lived experience I could remember seemed to be much more than the experience I had forgotten. Sure, I couldn’t remember every detail of waking life but, on the whole, it felt like my experiences lived on with me in my memories.

At forty-five, however, the ledger of memories and lived experience is not at all balanced. I have undeniably forgotten much more of my life than I can now remember. I can no longer pretend otherwise: experience is gone forever once it is lived and our very fallible and fleeting memories can’t preserve or resurrect it. In terms of the experience of lived experience, the only difference between living and death is that the now of living is experienced and the now of death is not. The past is as unknowable as the future, whatever the fantasy of memory might otherwise try to tell us.

Now that this insight has taken root, it has become much easier for me to imagine a time when I will be able to look forward into death and look back onto life and not really see that much difference in terms of the experience of lived experience. As a young person, the experience of now was a supernova that illuminated all horizons; today, it is a star bright enough for me to look back with fondness and forward with anticipation, despite the shadows growing all around me; looking out towards 80 or 90 (and, hopefully, 100 or 120), it is very easy to imagine that the experience of now might feel like a pale dim light in a universe of nothing stretching in all directions. If that is the case, persistence for the sake of persistence might not seem to really add or subtract from the final ledger; and acquiescence to an insensible future might not seem so different from an attachment to the insensible past. Maybe, just maybe, I will also be ready to close my eyes and slip away quietly.

But, let me say this now! If some future Sterling starts nattering on about going gently into that good night, he is a rogue and a fraud! Hear me now and believe me later: attach every machine, do all the surgeries, and give me every drug; do whatever it takes to keep my faint ember of consciousness aglow, no matter the suffering I may endure. I expect future Sterling will feel the same; however, because younger Sterling would probably be enraged at my defence of the enraging trope, I shall err on the side of caution: let my will today bind his then. If future Sterling ever loses sight of the faint ember of his experience in the engulfing insensibility of past and future, give him a stiff rum or two and send him to bed. I’m sure he will be fine in the morning. He’s probably just had a bad day. Plus, if he has got to go, he will probably want to go quietly in his own bed, enveloped in a nice light glow.

Losing my religion: the unknowable self and the myth of a well-ordered society

I suspect that you and I don’t really know anything.

Today, thanks to a lot of trial and error, we humans have a pretty good understanding of what we need to do to distinguish between plausible and implausible beliefs. If we run controlled double-blind and repeatable experiments that generate a sufficient amount of data of sufficient quality, we can use statistical methods to confidently identify those beliefs that are false and those that are plausibly true but still in need of further testing. Considered from this perspective, it seems pretty obvious to me that you and I don’t really know anything. Most of our beliefs have not been tested in this way. 
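To make that gesture at methodology concrete, here is a minimal sketch of one such statistical method, a permutation test, run on measurements I have invented purely for illustration. It asks whether an observed difference between a “treatment” group and a “control” group is larger than chance alone would routinely produce:

```python
# A minimal, illustrative permutation test on invented measurements:
# is the treatment/control difference bigger than chance would produce?
import random

random.seed(0)

treatment = [7.1, 6.8, 7.4, 7.9, 6.5, 7.2]  # hypothetical measurements
control = [6.2, 6.0, 6.9, 6.4, 6.1, 6.6]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment) - mean(control)

pooled = treatment + control
RESHUFFLES = 10_000
extreme = 0
for _ in range(RESHUFFLES):
    random.shuffle(pooled)  # pretend the group labels mean nothing...
    diff = mean(pooled[:len(treatment)]) - mean(pooled[len(treatment):])
    if diff >= observed:    # ...and count how often chance matches the data
        extreme += 1

print(f"observed difference: {observed:.2f}")
print(f"one-sided p-value:   {extreme / RESHUFFLES:.4f}")
```

If the resulting p-value is small, the belief that the treatment does nothing looks implausible; if it is large, the belief survives to be tested again, which is about as close to “knowing” as the method ever gets.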

To start, almost all of our beliefs about the universe are taken on faith that the people doing the work of understanding the universe are doing it correctly. To be sure, this is probably a sensible approach for you and me to take. It certainly seems much more efficient to rely on a specialized community of inquirers to undertake this work, but it doesn’t change the fact that you and I don’t really know what the scientific community knows. Their well-tested beliefs are, for us, articles of faith, even if we can expect them to be much more reliable than the articles of faith generated by theologians interpreting ancient texts. And if this is true, it is true whenever we rely on others to formulate and test beliefs on our behalf. Beliefs that we don’t test ourselves are, for us, articles of faith.

With that conclusion in mind, take a few minutes to catalogue all the beliefs that you have and rely on each day that are formulated for you and/or tested by others. If you are honest with yourself, I am pretty sure the list will be quite long. And while it is tempting to believe that we have good reason to rely on others for all of these beliefs, I’m willing to bet that you have not tested that belief either. I, for one, can admit that I have not tested it — and most of my other beliefs. I also feel pretty comfortable guessing that you and I are in the same boat. 

And this, I think, is the crucial consideration. We might be able to shrug off the fact that particle physics is for us a matter of faith, but I suspect it will be much more unsettling to realize that you and I never properly test a whole range of beliefs that fundamentally shape our sense of self, our identity, and our daily experience of living.

Consider: Am I happy or unhappy today? Am I happier or less happy than I was yesterday? Last week? Last year? Am I better off now than I was three years ago? Am I consistently making choices that support my well-being? Did I go to the right university? Was I right not to go to university? Am I in the right career? Are my goals for the future the right goals? Am I with the right partner? Would I have been happier with no children or more children? Am I the person I wanted to become? Who was I? Which of my memories are accurate? How accurate? And so on. For all of these questions and many more, there are objective and measurable answers. I’m also willing to bet that your answers to these kinds of questions are a mix of educated guesses, received wisdom, and Magic 8-Ball proclamations. 

To further complicate matters, it is very likely that some of these questions can’t ever be properly answered. We could, for example, carefully track our self-reported experiences of happiness over a long enough period of time to come up with some plausible theories about what makes us happy and then test those theories with more data. However, we probably will never be able to adequately test whether any particular life choice was the right one to make. There are no do-overs in life. As a result, we can’t even generate the data that would put us in a position to make a valid assessment. Furthermore, in the face of this certain uncertainty, it seems likely that we can’t even reliably assess these choices in the here and now because we don’t have the well-tested beliefs upon which to assess the expected outcomes. So, even if we want to evaluate our life choices before we make them (overlooking the important consideration that many people don’t), we don’t even have the correct data for that evaluation. 

One plausible way to sidestep these concerns is to simply stipulate a lower burden of proof for these kinds of beliefs. Perhaps, it doesn’t really matter if we have properly tested beliefs about our happiness, our favourite foods, or our career path. One might be happy to claim that the good life requires only that we can tell ourselves a convincing story in the here and now that we are happy, well-off and that the events of our lives brought us here. All’s well that we can describe as ending well! And while I suspect that this tactic might actually be the best explanation for our species’ reproductive success up to this point (i.e. that we have a curious ability to reimagine suffering as a net benefit), I remain suspicious of the notion that we should lower the burden of proof for these kinds of beliefs. A delusion is a delusion is a delusion, even if we can convince ourselves that we are happy about it. 

In the face of this uncertainty, however, I suspect the only appropriate conclusion is to give up on the notion that we can ever definitively know ourselves. We are constantly evolving animals that are bound in the flow of time and, as a result, there are beliefs about ourselves that we can never properly test. We have to rely on hunches, received wisdom and wild guesses because we have no other option. It isn’t because we are inherently mystical or otherworldly. It is because we are constrained by our temporal existence. The much larger and crucial delusion, I think, is the belief that we could know with certainty who we are and what we value. Once we give up on that idea, the notion that we don’t know ourselves with God-like certainty seems much less unsettling and becomes just another mundane limitation of human existence.

And while this conclusion might be well and good on the personal level, it creates one teensy-weensy little issue when we turn our attention to society and its organization: the fundamental and essential assumption of a liberal democracy and a market economy is that you and I can know our own well-being and happiness, know it better than anyone else, and reason effectively about it. Thanks to research in neuroscience and behavioural psychology, we now know with some certainty that these assumptions are false. We are poor reasoners in general but especially about what we value. Additionally, many of our beliefs about our own well-being are demonstrably false (e.g. people remember happiness that they did not experience and forget pain that they did). So, if it is true that most of our beliefs are inadequately tested and that we can’t even make accurate judgments about what we value or think to be good, democracies and markets are, at best, arbitrarily organizing society and, at worst, guaranteed to do it poorly. Garbage in, garbage out, as the saying goes. And to be clear, this is also true for authoritarian strongmen, councils of nerds, and any other socio-political system that depends on anyone looking deep within themselves to figure out who they are, what they value, or what they want to become. The root problem is the practical constraints of inquiry. There is no social architecture that will solve that problem for us.

What then of politics, society, and its organization, if we can’t count on people knowing themselves with any certainty? 

First, I think we need to recognize and accept that our present-day social and political habits, institutions, and systems are largely the consequence of chance (akin to biological evolution), prone to constant change, and persist only as long as we allow them to persist. They are an expression of our need to organize ourselves, they reflect the environment in which they developed, and they emerge like any other natural phenomenon. They can become better or worse (relative to a host of benchmarks), none of them will magically persist over time, and there is no reason to think that solutions from hundreds and even thousands of years ago will work for today’s challenges. We need to accept that society’s organization is an ever-evolving and accidental by-product of the on-going effort to solve many different, discrete and often intertwined problems.

Second, I think we need to get out of the habit of appealing to any claims that rely on introspection alone, in the same way that we almost got out of the habit of appealing to claims about the one true God. There are a lot of well-tested and plausible beliefs that we can use to guide our efforts to organize ourselves and direct our problem-solving efforts. The challenge, of course, is that even well-tested beliefs don’t all necessarily point to the same conclusion and course of action. In those cases, we must resist the temptation to frame the debate in terms of largely unanswerable questions like “what’s best for me,” “whose vision of the good life is correct,” or “who worships the right God.” Instead, we need to look to well-tested beliefs, run good experiments, and always account for all the costs and benefits of whatever approach we settle on in the here and now, recognizing that with new evidence we may need to adapt and change.

Finally, for those of us who think that we should settle our disagreements based on well-tested beliefs rather than dubious claims grounded in introspection, we need to lead by example. I think this will primarily involve asking the right sort of questions when we disagree with others. For example, what well-tested evidence do we have for one conclusion or the other? What kind of evidence do we need to decide the matter? What experiments can we run to get the necessary evidence? We will also need to get in the habit of discounting our own beliefs, especially if they are based on nothing more than introspection or received wisdom. And this might actually be the toughest hurdle to overcome, both personally and practically. It is very natural to become attached to our own bright ideas before they are properly tested. Once attached, it becomes much easier to discount the evidence against them. To further complicate matters, humans also seem to be too easily motivated to action by strongly expressed convictions that align with preconceived notions, whether they are well-tested or not. Asking for evidence before action and expressing doubts about one’s own convictions might not resonate with the very people we need to sway. Unfortunately, but not surprisingly, there is no easy, general, all-purpose way to solve this problem. People who want to motivate others to action will always need to strike the tricky balance between rhetoric and honest communication. We don’t need to be puritans about honest communication but we also shouldn’t use the human condition as an excuse to spin outright lies — even in the service of thoroughly tested beliefs.

Descartes is often credited with kicking off modernity when he famously doubted the existence of everything but his own thinking mind. In the very many years since he reached his pithy and highly quotable conclusion, we have learned a lot more about the best methods of inquiry and have developed a well-tested and always evolving understanding of the world. More recently, thanks to those methods of inquiry and their application in neuroscience and behavioural psychology, it is becoming increasingly clear that we can’t know much of anything from introspection alone — including ourselves. There is nothing you, I, or Descartes can know with any certainty by looking inwards for answers. Unfortunately, we continue to rely on habits, institutions, and systems which presuppose that you or I have privileged and certain knowledge about our own well-being, values, and optimal outcomes. This may partly explain — in conjunction with other issues (hello, massive inequality) — why liberal democratic political systems that rely on free markets are in crisis these days.

It was fashionable in the late 20th century to talk as if we had escaped modernism, but postmodernism, I think, only takes Descartes’ modernism to its logical conclusion, while willfully overlooking the fact that we humans have become pretty good at understanding the world around us. To set ourselves on a new path, to really escape the gravity well of modernism, we need to set aside the Cartesian notion that the aim of inquiry is absolute certainty and that such certainty can be found through introspection. Instead, we need to accept that we really don’t know ourselves, whatever our heartfelt convictions might tell us, and look instead to well-tested beliefs to guide and organize our lives, both individually and collectively.

Who died and made content king? Survival bias, confirmation bias, and a farcical aquatic ceremony

When I first started using social media, thirty Helens agreed: “content is king!” 

And, at the time, it certainly felt that way. Perfectly crafted tweets seemed to be retweeted again and again; insightful blogs seemed to lead to comment after comment; great articles were always bookmarked. 

I suspect, however, that content looked kingly only because we content creators looked at tiny samples of high-performing content and jumped quickly to conclusions. Survival bias ran rampant; it was primarily the bias of content creators that was running; and content creators really, really wanted to believe that expertly crafted content could compel others to action.

Much later, in the early days of live streaming on Facebook, a video I shot and shared live went “viral”. It received something like half-a-million views in twelve hours or so. For a social media nerd like me, let me tell you, there is no greater thrill than hitting refresh every few seconds and seeing the number of views on your post jump by hundreds and, at times, thousands. As slot machine enthusiasts everywhere know, the bells and whistles are almost more important than the jackpot itself.

And, on the face of it, it seemed like the sort of video that should earn a lot of attention. My phone had captured a pretty special moment in a powerful story, even if the video quality was questionable and the audio mediocre. The story — we content enthusiasts had been telling ourselves for years — was much more important than the technical specifications of the media that shared it. And, this video was a perfect case in point! A live, raw and powerful moment was the stuff of social media glory! I had always known it, but now here was the proof! One more bias was joyfully confirmed.

Then, I watched that short video of a woman laughing in a Chewbacca mask. Do you even remember it? It was the video that blew up in those early days of live streaming on Facebook. Sure, it was vaguely amusing, but was it really that share-worthy? Was it really earning all those views and engagements? Was this really the kingly content that the social media prophecy had foretold?

Then, it occurred to me: Facebook had just launched its live stream functionality and they wanted it to make a splash. My phone had been rattling every two seconds to let me know whenever anyone streamed live for the first time. Moreover, because it was a new service, it had appeared on my phone using the default settings for notifications, which is something like “maximum racket.” In other words, Facebook was making every effort to put as many eyeballs as possible on any content that was shared live.  

Facebook’s effort to boost the visibility of its live stream service should come as no surprise. They wanted people to use the service right away and they wanted those people who used it right away to experience success right away. Easy success would hook users and those who were hooked would talk it up to others. The first hit is always free. 

I am reminded of all of this because of a recent article about TikTok and the author’s naive attempt to explain why some videos on this service have earned big numbers. To be blunt: I wouldn’t be at all surprised if the people running TikTok are specifically manipulating things behind the scenes to generate big media-story-worthy numbers. You are the product, after all; they need you to be active; and, what’s a few inflated numbers between friends?  

However, even if the people running TikTok aren’t intentionally manipulating the numbers, there is a much more plausible explanation why some content is getting more attention than other content. Dumb chance. When enough content gets in front of enough people, some of that content will earn more attention and, from there, it can snowball. That’s it; that’s all. There is nothing in the content itself that will definitively explain its success. In the same way that we can’t know in advance which genetic adaptations will lead to an organism’s reproductive success, we can’t know in advance which features of our content will lead to its reproductive success.
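Here is a minimal sketch of that snowball, under the deliberately stark assumption (with every number invented) that the posts are perfectly identical and each new unit of attention lands in proportion to the attention a post already has:

```python
# A minimal cumulative-advantage simulation: identical posts, attention
# allocated by chance but weighted toward whatever already has attention.
import random

random.seed(7)

NUM_POSTS = 200          # identical posts: nothing distinguishes them
IMPRESSIONS = 20_000     # units of attention to hand out
views = [1] * NUM_POSTS  # every post starts with one courtesy view

for _ in range(IMPRESSIONS):
    # each new view lands on a post with probability proportional to the
    # views it already has: attention attracts attention, and that is all
    lucky = random.choices(range(NUM_POSTS), weights=views)[0]
    views[lucky] += 1

views.sort(reverse=True)
print("top 5 posts:   ", views[:5])
print("bottom 5 posts:", views[-5:])
print(f"luckiest post beats the median post "
      f"{views[0] / views[NUM_POSTS // 2]:.0f}x over")
```

Every post in that simulation is interchangeable, yet a lucky few end up with several times the views of the median post and hundreds of times the views of the unluckiest. Nothing in the content explains the winners; the snowball does.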

Circling back to those early days of social media and the quest for the holy content grail, if there was any truth in our collective hope that content is king, I suspect it was this: the experience of kingly content is probably symptomatic of the fact that humans tend to socialize with people much like themselves and become more like the people with whom they socialize as they socialize with them. 

So, at the outset, specific social media channels were attractive to a particular community of users who were already pretty similar in terms of interests, values, and identity. There wasn’t a lot of content being created, so any content that was shared was bound to earn whatever attention was out there to earn. Because the people using the tools were already pretty similar, they came up with similar theories to explain the success of some content and those theories became self-reinforcing. As people shared content that fit their theories of success, the successful content was more likely to match the theories because there was more content out in the world that aligned with the theory. For example, if you claim that red aces are always drawn because they are special and you add more red aces to the deck every time one is drawn, your theory is bound to look true whether there is anything special about red aces or not. 
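The red-ace mechanism is easy to simulate. Here is a minimal sketch with an invented deck: every draw is uniformly random, but believers add another red ace whenever one is drawn, and the theory duly looks truer the longer it is “tested”:

```python
# A self-reinforcing theory in miniature: draws are purely random, but
# every drawn red ace adds another red ace to the deck.
import random

random.seed(1)

deck = ["red ace"] * 2 + ["other card"] * 50  # a perfectly ordinary deck
red_ace_draws = 0

for i in range(1, 10_001):
    card = random.choice(deck)      # every draw is uniform: no magic here
    if card == "red ace":
        red_ace_draws += 1
        deck.append("red ace")      # the theory quietly stacks the deck
    if i in (100, 1_000, 10_000):
        print(f"after {i:>6} draws: {red_ace_draws / i:.1%} were red aces")
```

The deck starts out at about 4% red aces; after ten thousand self-reinforcing draws, the overwhelming majority of draws are red aces and the theory looks unassailable, even though there was never anything special about red aces at all.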

Eventually, these theories about what made content shareable, engaging or whatever were internalized as norms, values and aesthetic sensibilities. In this context, content starts to look kingly and almost magical because its attractiveness is rooted in a sense of “we”. We are the kind of people who think a tweet will be more engaging if the hashtag is at the end of the copy instead of the beginning, so we see it as such and act accordingly. In other words, the apparent kingliness of content is an expression of a particular community’s sense of shared identity. If a particular community of we has power and influence, then they will influence the tastes of other communities. And so on.

But here, I think, is the nub of the matter: this isn’t some kind of social media gaffe or millennial voodoo. It has always been like this for all content everywhere. The success of content is best explained by the communities that behold it, their sense of “we”, and their power and status. Shakespeare’s plays, for example, seem kingly to us only because an influential group of people took a liking to them at a time when there wasn’t much competition for people’s attention. When you are the only show in town, it is very easy to make the taste.  

If I am right about this (and I’d bet that I am not the first to claim it), I suspect a lot of content lovers and creators will react to my conclusion with nihilistic rage. “If there is nothing in the artefact of creation itself that guarantees success or could guarantee success, what is the point of creating at all? Why create if what is produced is of secondary importance or, dear god, not important at all? Oh woe is us!” However, I want to make the case that this frown can be turned upside down.

On the one hand, if your aim is to create content and be recognized as a content creator, the path forward is pretty simple: do your best to ingratiate yourself to whatever community is the tastemaker community for the kind of content you want to create. Meet, greet and emulate. Play the game well enough and long enough, and you will probably get a shot at shifting the community’s taste. No magic or special natural gifts required. You don’t need to be the anointed one. Being pleasant and patient should do the trick.

Alternatively, if you enjoy creating content for its own sake and have no particular desire or need to be recognized as a content creator by the relevant tastemaker community, you are free to create in accord with whichever standard(s) you want. Who cares what the tastemakers think? They no longer control the means of creative production or distribution. Go forth and create! Celebrate the fact that you have enough time and the means to create, even if no one is looking. On the other hand, if it turns out that you don’t want to suck up to tastemakers to earn a living as a content creator and have better things to do with your time than create for the fun of it, so be it. The choice is yours and, to be frank (you be Jane), having that choice is pretty lucky too.

I can think of only two groups of people who will be in a jam: those who desperately want to be recognized as content creators but don’t want to suck up to the relevant tastemaker community, and those who are ignored by that community even when they do suck up. For them, only Nietzschean frustration awaits.

If you are among this lot, I can offer only this advice: storm the taste making gates until you are accepted, ingratiate yourself to a marginalized or underserved community and hope their day is yet to come, or ride the early adopting wave of some new technology like the printing press or social media. However, whichever path you take, please remember: if you end up holding something that feels like a sword of divine right, the underlying mechanism that provided it to you remains the same, whether you were finally picked by the cool kids or the uncool kids somehow suddenly turned cool. The sword doesn’t make you or your content king; nor does the farcical aquatic ceremony that put it in your hand. Instead, it is the community who thinks of “you” as “we”.

My answer to the ultimate question of life, the universe, and everything: four four through it all

If the mystery of the human condition can be characterized as a kind of puzzle or riddle, the answer and/or punchline can be aphorized, I think, through four banal facts and four mollifying delusions.

I can’t say that anyone will necessarily gain anything by knowing and understanding these facts; nor can I say that they will gain anything by ridding themselves of the delusions.

If anything, I am pretty sure the delusions persist precisely because they are useful to most people most of the time. Whether they become more or less useful will be settled by evolution eventually — and not by you or me.

Four banal facts:

  1. Almost all human behaviour is perfectly predictable. Some human behaviour may be random or the result of chance.
  2. Human behaviour and all the products of human behaviour are expressions of the human disposition to allocate resources according to status.
  3. Human society and its organization requires the exercise of power. The risk of abuse is omnipresent. Some will guard against it; some won’t.
  4. We die and will be forgotten.

Four mollifying delusions:

  1. Humans have free will and are masters of their own destiny.
  2. The truth will set you free.
  3. Democracy is the worst form of government, except for all the others.
  4. Immortality is possible.

That’s it; that’s all. If you like my solution or enjoy talking about the puzzle, let’s start a club. You bring the (alcoholic) punch. I will bring the (vegan) pie.

The spade has turned: from weltschmerz to mono no aware and back again

After many years of digging, I am pretty sure I have hit bedrock.

The spade has turned.

Human behaviour and all the products of human behaviour are best understood as expressions of the human tendency to allocate resources according to status.

Practically speaking, a person’s survival depends on their membership in a group and their status in it. Additionally, success — however it is defined over and beyond survival and reproduction — also depends on and is defined by an individual’s membership in a group and their status in it.

There is nothing — absolutely nothing — that trumps, circumvents or transcends the ceaseless churn of seeking, gaining, maintaining and losing the approval of others.

And so it goes. Fine weather, isn’t it?

Unreal city: a black hole of dazzling light

[A picture of the unreal city at night.]

I remember the moment, but I can’t place it in time.

We were returning to Waterloo from Toronto. It was night. The stream of lights heading east on the 401 was an endless Milky Way.

It struck me: behind each set of quivering headlights, there was at least one person. It struck me: on one side of this narrow strip of highway, heading east towards a moderately-sized metropolitan city, there was a galaxy of human experience, unique, distinct, breathing and, like me, living at the centre of its own universe. It struck me.

At high school in Ottawa, I remember it often felt like I was surrounded on all sides by unknown, colourless, cardboard people, who reappeared over and over again like the recycled backgrounds of a low-budget cartoon. Many years later and long after the moment of dread on the highway heading west from Toronto to Waterloo, it struck me: I was as colourless and unreal to the unknown others as they were to me.

I now live in Toronto, a city as unreal as Eliot’s. From my window, I see towers and towers of existences. When I walk to and from work, there is always a hornet’s nest of activity. When I shop within minutes of my home, I see faces that I know I will never see again. Like a shovel of dirt from a wild and healthy field, these few blocks of my existence are teeming with life.

If I reflect for too long on the scale of life in this city and on this planet, it obviates me. If I focus instead on the energy, colours and details of this urban microcosm, I am dazzled by it all, and happy to play the role of cardboard cutout to the unknown universes of life booming and buzzing around me.

Oh, unreal city, at the centre of a black hole where all light is trapped, could it be as dazzling as this?