Why aren’t more teams outside of sport playing Moneyball? Because they’re human, stupid.

After reading Michael Lewis’ Moneyball, I asked myself: why hasn’t this data-driven approach to the evaluation and recruitment of talent been embraced by more teams and organizations outside of professional sports? Why, after all of these years and with the very tangible success of professional sports to look to as an example, are we still evaluating and recruiting talent like we always have?

After all, when you cut through the sound and the fury of Lewis’ tale, the innovation described in Moneyball is pretty straightforward. Billy Beane and Paul DePodesta of the Oakland Athletics use data to identify players who are undervalued by other teams and then sign them to contracts at a bargain price. Essentially, they get more for less by exploiting information the other teams ignore. It’s smart, but it’s also a tactic that every bargain hunter, thrifter and value investor understands. Because the core idea described in Moneyball is so straightforward and has been so widely celebrated, you would think (or, at least, I would) the data-driven approach to the evaluation and recruitment of talent described in Moneyball (or something approximating it) would have swept through all other industries by now.
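If it helps to see just how simple the core idea is, here is a toy sketch of the kind of calculation involved. It is not Beane and DePodesta’s actual method; the player names, salaries, and the use of on-base percentage as the "undervalued" statistic are all invented for illustration.

```python
# A toy illustration of the Moneyball idea: rank players by production per dollar
# and sign the ones the market appears to undervalue. All figures are invented.
players = [
    {"name": "Player A", "obp": 0.360, "salary": 8_000_000},
    {"name": "Player B", "obp": 0.345, "salary": 1_200_000},
    {"name": "Player C", "obp": 0.330, "salary": 6_500_000},
]

# On-base percentage per million dollars of salary: a crude proxy for
# "information the other teams ignore".
for p in players:
    p["value"] = p["obp"] / (p["salary"] / 1_000_000)

# The bargains float to the top, even though they aren't the biggest stars.
for p in sorted(players, key=lambda p: p["value"], reverse=True):
    print(f'{p["name"]}: {p["value"]:.3f} OBP per $1M')
```

Every bargain hunter runs some version of this arithmetic in their head; the only novelty is applying it systematically to people.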

Instead, it seems that most teams and organizations rely on recruitment practices that are probably older than baseball. You know the drill: after a largely arbitrary sorting process based on self-reported data points (i.e. a resume is pulled out of a hat based on a crappy keyword search or because a friend-of-a-friend recommends that it be pulled), the evaluation of a potential hire boils down to a highly subjective gut-check, which may or may not be based on an assessment of the candidate’s skills in highly artificial circumstances. A few reference checks later — which everyone agrees are useless — and, blammo, a new hire is being onboarded. If a professional sports team recruited like this, it would be out of business in no time. How is it possible that so many teams and organizations continue to recruit in this essentially arbitrary fashion? 

Overlooking the rhetorical nature of my question, you might reply, “well, probably because most teams, organizations and industries don’t have access to the kind of dataset baseball has. Baseball has always been kind of nutty for numbers.” To which, I might reply, great point, Dave, but there is no necessary reason why a baseball-like dataset couldn’t be developed and maintained by, say, a professional association. Isn’t the market supposed to identify opportunities like this and fill them? Potential employers, it is fair to say, would probably pay oodles of money to access this kind of data, if it led to better and less costly hires. Moreover, I would quickly add, not giving you a chance to get a word in edgewise, because that’s how I roll, once someone is hired, a team or organization can create and maintain as much data about the new hire and their performance as they would like. So, if some hungry-for-success team or organization wants to evaluate a new hire based on their contribution to the success of the team or organization, generating the right kind of data should be a straightforward exercise once the person is onboarded.

Instead, much like the recruitment process itself, the evaluation of new hires seems to be largely a matter of feel. If a new hire “fits” into their new team and seems to contribute, the recruitment process is normally judged a success, whether or not the person measurably contributes to the success of the team or organization. To be fair, group harmony and team cohesion are always going to play a role in any team’s success. However, group harmony and team cohesion are very often a by-product of team work rather than a catalyst for it. Whether or not a person “fits” is probably irrelevant, so long as they make some effort to cooperate and work well with others. Proximity and time will take care of the rest.

Before you interrupt me with another objection that I already have a clever reply to, it was probably around this point in my thinking and writing that the penny dropped. Duh, Sterling, of course, most organizations and industries hire based on “feel”, where “feel” more or less translates into, “yep, gut sure says that they’re like me.” We humans are tribal. From the very outset of our lives, we tend to form relationships and social groups based on physical proximity and physical similarities. Why would it be dramatically different for the workforce? Well, Sterling, I guess I was assuming that competition and/or the desire to achieve our aims would have nudged us to adopt more rational, coherent and less arbitrary approaches to building teams and organizations. Whether an organization is for-profit or not-for-profit, it makes much more sense to recruit people who measurably contribute to the achievement of the organization’s aims rather than people who just happen to look and talk like the friends-of-friends we have in common.

Think about it, if the jocks — of all people — have figured this one out, why hasn’t anyone else? 

Then, it was around this point that another penny dropped for me. Most people agree that Michael Lewis’ version of the events in Moneyball “torques” the facts for the sake of a more compelling story. In particular, it seems likely that there was far less conflict and debate about the data-driven approach Beane and DePodesta championed than the book suggests. Strictly speaking, once a certain caricature of scouts and scouting is set aside, the difference between player evaluation and acquisition as it was traditionally done in baseball and the approach described in Moneyball is one of degree rather than kind. Moreover, by the time that Beane and DePodesta had turned to data to drive their player acquisitions, amateur data aficionados had already been using data to dissect and criticize professional baseball’s approach to player evaluation and acquisition for some time. The notion that data could lead to better recruitment practices was already well and truly in the air.

It’s also important to remember that Beane and DePodesta were evaluating and recruiting players who had already been through a very long and very difficult vetting process. To be among the players who are even on the radar of being considered for a spot on a professional baseball team, a lot of people in the baseball community would have already vouched for that player in some way or another. It’s not like the Athletics were using data to recruit hockey goalies to be catchers or signing Tim from the mailroom. If a team is trying to decide between signing this guy and that guy, and everyone already agrees that both of them are part of the very exclusive club known as professional baseball, why wouldn’t you roll the dice and pick the cheaper guy if the data also seems to predict he would do fine? Shorn of Lewis’ drama, the Athletics faced a pretty simple choice. On the one hand, they could continue evaluating and recruiting talent as they always had and expect the same middling results or, on the other, they could take a chance on a newish approach broadly recognized as having some merit, generate results no worse than they might otherwise expect, and save money while doing it. Really, when you think about it, it’s a no-brainer, but, “the not-so-remarkable tale of safely entrenched insiders making an even safer bet that works out better than expected” doesn’t make for compelling dust jacket copy.

With all of that throat-clearing now well in hand (uh, gross), the answer to the question I started with is this, I think: teams and organizations outside of professional sports haven’t yet broadly adopted a data-driven approach to the evaluation and recruitment of talent because, all things considered, the age-old approaches work well enough; as a result, no well-established insiders have felt compelled to try something new. On the one hand, successful organizations tend to attract a lot of people who have already been vetted in some fashion. Randomly picking, more or less, among those people who present themselves for selection is probably a safe bet and, if random selection is a safe bet, why not also pick people “like me,” if it will make you and everyone else on the team feel more comfortable with the new hire? On the other hand, struggling organizations tend to cut employees rather than make new hires and, you can be sure, any hires they do make are going to be on the safe and familiar side. In other words, even after very many years of working together in groups to achieve different aims, it seems that we humans haven’t confronted any situation that would compel us to change how we recruit people or how we evaluate their contributions to our efforts. And, if it hasn’t happened yet, don’t hold your breath! Businesses fail every day and entire industries have collapsed over the years and yet these very negative consequences have not driven business or industry insiders to fundamentally and systematically rethink how they evaluate and acquire talent. If it hasn’t happened yet, I doubt it will happen anytime soon.

Now, if you are like me, at this point, you might be somewhat disheartened to realize that organizations build their teams using methods that wouldn’t look out of place on the schoolyard (i.e. pick that kid, he dresses like us!). However, if you are a normal human being, you are probably actually thinking, “Are you serious? Did you really only just figure out that hiring decisions are primarily an exercise of ‘like’ hiring ‘like’?” Well, sort of. I have always understood that humans have a habit of grouping together based on superficial similarities and excluding those who are superficially different, but I have always thought of it as a bad habit, which would eventually be broken, both at the individual and group level, either consciously as people and societies matured or unconsciously through something like competition. What has dawned on me (thanks to Moneyball and baseball!?) is that the human tendency to socialize, build teams and act collectively by looking for and finding people “like us” is so fundamental that nothing will ever compel us to change, other than a true evolutionary shift in our DNA, which, strictly speaking, is just a fancy way of saying, “if people who embrace difference reproduce more than all those other people who prefer homogeneity.” It’s “we like us” and “different like them” all the way down.

Moreover, on a personal level, it is also dawning on me that whatever I have accomplished in my life is probably best understood as being a consequence of my similarities to others rather than my differences. I’m not a beautiful unique snowflake; I’m a me-too drug. And, yes, while I am one hundred per cent talking about social privilege, I am also driving at something that runs deeper. Returning again to evolution (which probably should be the subordinate clause that starts every discussion about human nature), in my experience, evolution is often characterized as a triumph of difference because it is a heritable difference in phenotype that leads to a reproductive advantage that, over many generations, leads to a new breeding population. Hurray for difference — so long as you overlook the fact that the difference is one tiny bit in a whole lot of sameness. Without the sameness, the little bit of difference wouldn’t ever take hold in a breeding population. To put it bluntly, if you are too different, your difference ain’t being passed along to anyone because you won’t get the opportunity to reproduce and, if you are really different, the breeding might not even work. In other words, what makes you and me human are the ways in which we are the same; insofar as we aspire to be unique, it is only possible because we are like others — and not in spite of it.

And that is the moral of a completely different after-school special than the ones I watched growing up.

Writing: what I’ve learned

In the beginning, writing was a fun school assignment. It was a way to compete with my friends. It helped to wean me off my toys, offering an age-appropriate medium for the expression of my imaginative impulses.  

Then, when I was sixteen, going on seventeen, while hiking across a glacier in the Rockies, I experienced something I couldn’t quite make sense of. In response, I tried to make sense of it by writing a poem. It was, I think, my first true poem. I also now suspect that I turned to the page only because I had no one else to talk with about the experience.

If writers, like superheroes, have secret origins, my experience on the glacier and my effort to make sense of it with words is my secret origin. Like every superhero’s secret origin, it has shaped everything else that has come after. I never finished that first true poem; I don’t think I’ve ever stopped trying to write it either.

Twenty-nine years after that first unfinished and forever-revised poem, I now know this about writing: John got it backwards. Flesh becomes word, and not the other way around. The marks on a page don’t affect us. We affect them. The influence we suppose we feel in words originates in us. We make marks work. We make marks words. The power of words is us imaginatively transubstantiated.

The power of writing, then, is always the power of a community. Like a currency, writing is only as influential as the people who call it their own. If you want to craft writing that wins friends, influences neighbours, or earns money and acclaim, the marks on the page are probably the least important consideration.

Don’t write each day; instead, ingratiate yourself each day to the right people. It’s gatekeepers all the way down.

I also now suspect that words have limited efficacy when it comes to making sense of the kind of experience I had on the glacier. The experience originates, I think, in a part of our brains that experiences, knows, and understands without using the marks, sounds and physicalizations we learn as children to use as language. If this suspicion is correct, it is probably impossible to express in words the experience I had on the glacier. My adolescent turn to words, poetry and writing, to make sense of my encounter with the infinitesimal nature of human experience, was probably futile from the outset.

Fortunately, writing has helped me to understand myself, others, and the world around me, even if it can’t magically motivate people to action or express the inexpressible. Despite its mundane limitations, writing can be very satisfying, especially when I catch in words some feeling, intuition or idea that had previously seemed ineffably out of reach. Rationally, I know writing — my writing — is little more than an elaborate game of solitaire; irrationally, I also know that it feels important. I’ve always been one of those kids who takes play very seriously.

In another twenty-nine years, I will be seventy-four, going on seventy-five. With so much life left to learn from, I wonder who I might yet become. Will the person I am today be as much of a stranger to me then as that sixteen-year-old is a stranger to me now? It seems likely. It also seems likely that the different texts I have created or will create will be insufficient to forge a persistent identity over time. My past selves, my present selves, and my future selves, like any other reader, make of texts whatever they bring to them at the time of the encounter. There is no indelible message that can be preserved in the bottle of my words, even for my future selves. Waves in the ocean of experience leave no trace.

If all of this is true, why write at all? It’s a fair question, and one that I often ask myself. If there are so many other enjoyable activities that are much more likely to win friends, influence neighbours, and earn money and acclaim, why bother writing? Why persist in a habit which serves no greater purpose than its own perpetuation? At the age of forty-five, going on forty-six, this is my answer: writing deeply is like breathing deeply; you understand its value, whenever you take the time to do it.

Within the mirror of COVID-19: a vision of a better society.

Many years ago, during my PhD research, I audited a lively political theory seminar. As you’d expect, at some point, we discussed the ethics of health care.

From the outset, the conversation was framed by the assumption that health care resources are necessarily scarce. Society, it was assumed, would be crippled by the costs of health care unless those resources were carefully rationed. The notion that we might organize society to make health care a top priority was characterized as absurd. Society, it was declared with table-thumping authority, must be organized around higher ideals than the good health of all. What are we, animals?

In the years since that seminar, it has seemed to me that most policy discussions about health care begin and end with very similar table-thumping assumptions about the nature of society, its highest aims, and the presumed scarcity of health care resources. Time and again, health care discussions begin with the assumption that we must do more with less, instead of discussing how we might do much more, if only we realigned our priorities. 

Fortunately, during a crisis like the COVID-19 pandemic, most everyone seems to understand that we have a duty to do everything we can to keep as many people as possible as healthy as possible. At a time like this, it is easy for most people to understand that society can’t function if everyone is ill, dying or dead. Moreover, people also seem much more willing to do much more to prevent all ill-health, suffering and death — perhaps, because the consequences are top-of-mind. Unfortunately, once the crisis passes, if past experience is anything to go by, it seems likely that people will very quickly forget their present concern for the good health of all.

It is important to remember, I think, that the costs of preventable ill-health, suffering and death are as real when they happen over time as they are when they happen in a wave. Admittedly, the high volume of health care needed during a pandemic is a unique challenge in its own right, but the actual suffering caused by preventable ill-health, suffering and death doesn’t disappear when it is spread over time. It may be less dramatic, more easily managed, and more easily hidden from view, but all of the costs remain: human, social, and economic.  

So, if it is true during a pandemic that we should do everything in our power to keep as many people as healthy as possible, I think, it should also be true when there is no pandemic to focus our attention. Our good health is the very foundation for everything else we value — whatever we might value.

I don’t care if you are a financier, a poet or a small business owner, your pursuit of the good life is not possible without your good health and the good health of everyone else.

This pandemic, I hope, has reminded us of that fundamental fact. 

The good news is that this pandemic will eventually end. It will end precisely because we are making its speedy resolution our top social, political and economic priority. The bad news is that, once the crisis is over, we will likely fall back into that old habit of thinking that the good health of all is a priority easily trumped by other considerations, like the marginal tax rate of our wealthiest citizens. It is my hope, however fleeting, that our response to the COVID-19 pandemic will remind us that we can accomplish much, when we align our social, political and economic priorities to serve the good health of all.  In recent history, we have organized society for the sake of the power and privilege of a few “royal” families, to wage total war, and to maximize shareholder wealth. Perhaps, now is the time to organize society for the sake of the good health of as many people as possible.

I have no symptoms. I have locked myself down. You should consider it too.

I have no symptoms. I have not travelled recently. As far as I know, I have not been in direct contact with anyone who has travelled recently. The risk that I have COVID-19 is very low — almost nil. I have, nevertheless, by my own choice, locked myself down. I have not left my apartment since Monday evening. 

Why is that? 

The short answer: I want to be certain that I am not spreading the disease now and that I won’t spread the disease in the future. 

The key statement in the opening paragraph is “as far as I know.” Yes, I am probably not a risk to others, but the evidence seems pretty clear that people without symptoms or people with only mild symptoms are spreading the disease. When it comes time for me to leave my apartment — when it is essential for me to do so — the only way to minimize the chance that I am a carrier is to self-isolate now.

While this may seem a bit over-the-top, I need to take this step precisely because so many people who are at risk aren’t taking the pandemic seriously. People who appear to be healthy right now are out and about spreading the disease to other people who will also appear healthy long enough to spread the disease to others who will also appear healthy and so on. One of those people could be me for all I know. I am pretty sure I’m not infected, but, because those who are genuinely a risk aren’t playing their part, I can’t be sure. 

Fortunately, if I can go ten or twelve days without leaving my apartment, I will be pretty confident that I don’t have the virus and I won’t spread it. If I make it to fourteen days of self-isolation (the incubation period probably isn’t longer than fourteen days), I will be almost certain that I won’t put others at risk. For me, at this point in time, when most of my normal activities are already on hold, a couple of weeks in my apartment seems like a pretty small cost to pay for the certainty that I will be part of the solution and not part of the problem — a problem, I don’t hesitate to remind you, that will kill people. 

Practically speaking, I also suspect we are currently in a period of relative calm when the risk of infection is very high precisely because so many people aren’t taking the pandemic seriously. Once lots of people are sick and the hospitals are filling up to the breaking point, I am guessing that the risk of catching the virus in the community will be much lower because the sick will have no choice but to stay at home in their misery; healthy people who are a risk will hopefully recognize the tangible consequences of their actions and adjust their behaviour. 

Looking further down the road, I also want to be healthy when the storm finally hits — and it seems pretty clear that things will get very rough sooner or later. The only way to ensure that I am healthy enough to help others and to continue working is to self-isolate now. And again, to hammer the point home, if it is essential for me to leave my apartment to help others or for work after this period of self-isolation, I probably won’t be putting others at risk. That’s peace of mind that I’m willing to eat leftovers for.

I should also acknowledge that I am both lucky and privileged enough to be able to make this decision for myself. I can easily work from home, my employer officially encouraged all of its employees who could work from home to do so, and my job is about as secure as a job can be in these difficult times. If I am laid off, I also have enough savings that I should be able to weather the economic storm if I am careful. For the time being, I don’t know anyone who needs help or assistance on a regular basis. I have no prescriptions to fill. I have plenty of natural light in my flat, plenty of friends I can reach electronically, and I can workout easily in my apartment. By blind luck, in recent weeks, I even accidentally stockpiled some tasty cooking in my freezer. On Monday, to prepare for my solitary confinement, I only had to buy a couple extra bags of coffee (i.e. no hoarding required). Barring the unexpected, I should be able to stay isolated for another ten days easily or, at least, until it is essential for me to leave my apartment.  

And this is the key consideration: if it is essential that you go out, you absolutely should. No argument here. There are many legitimate reasons to break isolation both now and in the future. However, before you go out, ask yourself, “Is it essential that I go out today? Can I accomplish this task by some other means? Can I put it off until later?” If you can, please do. Additionally, please take whatever steps you need to take right now to minimize your reasons to leave your home in the future. Because the storm is coming, the time to prepare is now — not when it arrives. Now is the time to identify and solve the challenges of isolation while you are in good health and better spirits. Because sooner or later, either the government or the disease will make the decision to isolate for you. 

If you’ve made it this far but you are not entirely convinced that the time to self-isolate is now, please take a few minutes to read an article André Picard published today. Here, I think, is the key message:

As the number of infections rise, we need to behave as if we could all be infected, as if everyone around us could be infected. […] As the risks grow, our actions must accelerate. 

Ultimately, this isn’t about you or me. It’s about some of the most vulnerable members of our community, the front line workers who are going to do everything in their power to contain this storm when it hits, and the very real hope that we can get this pandemic under control. Taking action — even drastic action — is driven by the hope that our actions and choices can make a difference. To carry on as if nothing has changed is the stuff of despair — not hope. If, like me, you are lucky enough to be able to decide for yourself to self-isolate now or, at least, to minimize the time you spend out of your home, please do it now. There is very good reason to believe that this simple choice will help reduce the spread of the disease and make the time ahead easier for all of us.

Frank and Angélique Maheux: Correcting My Record

My great-grandparents, and some of their children.

Some time ago, I discovered that Desmond Morton, a highly esteemed Canadian historian, wrote an article about my maternal great-grandfather, Frank (Francois-Xavier) Maheux. The article is based on the letters Frank wrote to my great-grandmother, Angélique, after he enlisted to serve in the First World War. The letters, along with some other materials, were donated by my great aunt to Library and Archives Canada in 1977.

Overall, the article is very good; however, in a quick aside, Morton describes Angélique as “the full-blooded Odawa [Frank] had married in 1905 when he had worked in a lumber camp near her reserve.” When I first read the article, Morton’s claim that Angélique was Odawa struck a dissonant chord. My maternal grandmother, as far as I can remember, identified as Algonquin. She even ran an organization called the “Congress of Algonquin Indians,” which I was able to confirm thanks to the magic of the internet. On the one hand, I had the official record of a well-respected Canadian historian and, on the other hand, I had my memory, the unofficial record of the internet, and an inference that, astute readers will note, I was not, strictly speaking, entitled to make.

The short version of what followed is this: at first, I believed Morton’s claim about Angélique’s identity. Then, after a while, thanks to the magic of the internet, I discovered information that implied my memory was correct. I found marriage and birth records that connected Angélique patrilineally to the Algonquin First Nation and to Kitigan Zibi, a reserve also connected with the Algonquin First Nation. I also discovered that she was an informant for an unfinished book on Algonquin culture that is now in the possession of the Canadian Museum of History. When I went to the museum to look at the materials for the book, on the same day that I went to look at Frank’s letters in the archives, I found a story about nosebleeds that I had also seen mentioned in Frank’s letters earlier that day. On top of all that, I found artwork signed by my grandmother and my great aunt. My inner detective was satisfied. Case closed.

Well, almost. My inner academic decided that it wouldn’t hurt to reach out to Morton to see if he would be willing to do a correction. I did a bit of digging on the internet and, sure enough, I found an email address for Morton at McGill. I guessed that it wasn’t monitored anymore because it looked like Morton had well and truly retired. I, nevertheless, sent the email address a polite note, not ever expecting to get a reply. A few months later, to my great surprise, an enthusiastic reply arrived from Morton. We had a brief exchange, in which he thanked me for the new information, expressed particular interest in Angélique’s role as a cultural informant, and said that he would look into the possibility of correcting the record. Although I didn’t necessarily expect a correction to ever materialize, given his other commitments, it was more than enough for me that my email had been acknowledged by Morton and that he would do what he could to correct the record if it was feasible. Finally, case closed.

Meanwhile, thanks to the posts I shared about my efforts to muddle through all of this history, two cousins who I hadn’t heard from since I was very young contacted me through Facebook. After swapping family stories for a while through Messenger, one of them created a private Facebook group for the descendants of Frank and Angélique to share stories and pictures. As more and more extended family were added, more and more stories and pictures were swapped. Then, the motherlode was shared. Another cousin had paid the archives for a digital version of Frank’s letters and she shared the files with us. For me, this was like manna from heaven. I had always wanted to read the letters in their entirety; however, there are far too many to easily get through all of them while sitting at the archives. Now the opportunity had come at last!

Around the time Frank’s letters were shared, I learned that Morton had passed away. This news reminded me of the notion I had to correct Morton’s article. I decided again that it couldn’t hurt to reach out to the journal that had published Morton’s article to see if they would consider a correction. Because I remembered my grandmother identifying as Algonquin, it never occurred to me that my extended family, and most importantly Angélique herself, might identify with a different First Nation. Instead of discussing Morton’s claim in the Facebook group, I sent a note directly to the editor of Canadian Military History, to see if they would be friendly to a correction.

To their immeasurable credit, the journal was friendly to the idea of adding a note to the digital version of the article; however, the editor gently (and wisely!) suggested I check with other family members to sort out what the correction might look like. That made sense to me. Plus, thanks to Facebook, I now had the easy means to consult an engaged cross section of my extended family. And so I did, and within a few minutes of asking for advice on how to write the correction, I learned from two cousins that they specifically remembered Angélique identifying as Odawa, in one way or another. It turns out Morton was not wrong to describe her as Odawa. I let the editor know that Angélique’s identity was more complex than I had realized and that a correction wasn’t required. For me, a puzzle, nevertheless, remained. On the one hand, I had the birth and marriage records. On the other hand, I had my cousins’ memories. Was it possible to reconcile them? After a bit more digging, I’ve come up with a plausible answer.

Many of the names used by settlers to distinguish between the different indigenous peoples and nations were invented or misapplied by the settlers themselves. Notably, the name “Algonquin”, I have learned, wouldn’t have been recognized by the people it has named for much of their history. It is also falling out of favour among those very same people today. Crucially, as one of my cousins mentioned on Facebook, in Angélique’s own language, she probably would have referred to herself as Anishinabe, whenever she had reason to describe herself in a way that didn’t reference kinship and place. And while there are very many reasons, for better and for worse, that indigenous people may have come to use and even cherish some of the names they found in settler history books, there is no reason to expect their attitude to those names to be uniform or even consistent over time. If it was expedient to use one name invented by settlers rather than some other name invented by them, it probably wouldn’t have made much difference because they had their own name for themselves in their own language. As a point of contrast, think of all the different names Europeans have for the people we call German in English. In some contexts, they are German; in others, they are “Allemand”; and people from Germany don’t insist that they always be called “Deutsche.” From this perspective, in the case of Angélique and my grandmother, it seems entirely plausible to me that they used whichever settler name was most useful given their aims at any particular time and the context in which they were using it.

With all that in mind, it’s probably worth returning to the original aside that kicked off my adventure in history, for one final fact check. In it, Morton describes Angélique as “the full-blooded Odawa [Frank] had married in 1905 when he had worked in a lumber camp near her reserve.” Although, as I have discussed, it’s not necessarily wrong to describe Angélique as Odawa, Morton’s very specific claim about Angélique’s blood quantum is strange. As far as I can tell, there is nothing in Frank’s letters that can be used to draw that inference. Genetically speaking, Angélique probably wasn’t “full blooded”, but, outside of settler history, that point is irrelevant to her identity. I can also say with some confidence that Frank and Angélique were married in 1906, because their ten-year anniversary is mentioned by Frank in one of his letters, which is dated January 1916. Finally, it is also probably worth emphasizing that the reserve closest to Baskatong Bridge, where Angélique and Frank were married and lived for a good part of their lives, is Kitigan Zibi. At the time of the article’s publication, it was known as River Desert, and probably would have been described by the community that lived there as an Algonquin reserve rather than as Odawa. Today, the community call themselves the Kitigan Zibi Anishinabeg.

Ultimately, I won’t ever know with much certainty the names my great-grandmother and grandmother most closely identified with when they described themselves. However, “Anishinabe” seems like a pretty plausible option, and is much more appropriate for today’s time and context. It also aligns with the expressed wishes of the community that they lived in close proximity to. So, from here on, I will say that my great-grandmother and grandmother were Anishinabe, and, in the course of their lives, they lived at Baskatong Bridge, Maniwaki and Ottawa. If pressed by someone to use one of the names found in settler history books, I will shrug my shoulders and use it as an opportunity to discuss the myopic nature of settler history.

And, as far as the “official” record goes, thanks to the internet, I have now probably entangled Morton’s article with my own muddled attempts to make sense of his claims about Angélique’s identity. As a result, anyone who is interested in the article, Frank or Angélique will also be able to easily find the additional context my account provides. More importantly, thanks to the hard work of indigenous scholars and the emergence of Indigenous Studies over the last few decades, I doubt any future historian who takes an interest in Angélique’s story will take Morton’s description of her identity or my account of my effort to make sense of it as definitive.

Data, analytics, and the human condition: life lessons from baseball’s data-driven revolution

The history of professional baseball is, I think, the story of talented, skilled and experienced individuals relinquishing some of their decision-making autonomy to better coordinate their actions with others for the overall benefit of the group. In recent years, data, analytics and the people who effectively evaluate them have played a key role in this coordination effort. As a result, baseball’s history is, I think, a useful case study with which to better understand the value of the broader data-driven revolution that is well underway in many parts of our lives.

In the early days of professional baseball, individual players played as they pleased within the rules and conventions of the game. The manager was able to exercise some control over some on-field decisions because he decided who played each day. He used that authority to bend players to his will, whether or not his will led to success. In some remarkable instances, players were “too good not to play,” and they continued to play as they pleased, succeeding and failing according to their own set of rules. Their natural god-given talent was taken as proof that they could play by a different set of rules or none at all.

Today, because of data and modern analytics, managers and players are now relying on the advice and decisions of people who have often never played the game and who rarely step on the field. At first, these data-driven and analytical outsiders had to persuade the insiders to act on their insights and recommendations. Eventually, the people who control the purse strings recognized the value of data-driven analysis and decision-making. As a result, the data nerds are now themselves insiders and enjoin rather than entreat. It also seems likely that their influence on the game will continue to grow. For example, data-driven analysis is now influencing player development, which historically, as far as I can tell, has been an unfathomable mix of toxic masculinity, superstition, blind luck, and occasional strokes of genuine and well-intentioned pedagogy.

This turn towards player development is happening in large part because most teams have now embraced data, analytics, and the people who effectively evaluate them. As a result, the competitive edge associated with the analytics revolution has been blunted somewhat. For example, even if a clever analyst is able to identify an innovative way to evaluate players, whatever advantage is gained will be short-lived because player acquisition is a very public activity. Eventually, some other team’s analyst will crack the code underpinning the innovative and unexpected acquisition. In contrast, if a team can use data and analytics to improve their player development, which happens behind the mostly closed doors of their training facilities, to turn average players into good players and good players into stars, there is a huge opportunity for teams to win more games at a much lower cost. They can sign players based on their current value and develop them into higher value players while under the original contract. Crucially, because teaching and development must always be tailored to the student, even if we imagine that an ideal system for player development can be broadly identified and it becomes widely known and understood, there will be plenty of room, I think, for innovation and competitive specialization. Although a handful of very successful teams already have a history of identifying and nurturing talent in-house, the future of player development will probably look a lot like the recent history of data’s influence on player evaluation, tactics, and strategy. Data, analytics and the people who effectively evaluate them can be expected to identify more effective approaches for player development, discredit others, and more accurately explain why some traditional approaches have worked.

I suspect that the analytics revolution has had such a profound impact in baseball only because baseball’s culture was ruled for so long by superstition, custom, and arbitrary acts of authority. This culture likely emerged, I am prepared to speculate, because there were so many exceptionally talented players competing for so few spots. Because all of these players were willing to accept pretty much whatever signing bonus or salary they were offered, if these exceptionally talented guys failed for whatever reason, from a team’s perspective, it didn’t much matter because there were plenty of hungry, talented and cheap guys waiting to take their place. Some guys made it through and some guys didn’t; as far as the teams were concerned, it didn’t much matter who made it through or why they made it through — so long as those that did could help to win games. Of course, this model only works when players are cheap. It should come as no surprise that teams have become much more interested in accurately evaluating their players and investing in their development now that signing bonuses and player salaries are substantial and much more reflective of a player’s true contribution to the team’s bottom line. Thanks to collective bargaining and free agency, an economic motive was created that forced teams to treat players as valuable assets rather than disposable widgets.

For a fan of baseball — or a fan like me, anyway — one of the unexpected outcomes of a deep dive into baseball’s analytics revolution* is the realization that the action on the field is very much an effect rather than a cause of a team’s success. Evaluating and acquiring players, developing them, motivating them, and keeping them healthy is the key to winning pennants. Yes, there will always be room for individual on-field heroics that help turn the tide of a game or series, but a player is on the field and physically and mentally prepared to make the big play only if a tremendous amount of work has already been done to put him there. And while I will resist the temptation to make the intentionally provocative claim that the analytics revolution in baseball highlights that on-field play is the least important aspect of a team’s success in baseball, it is nevertheless clear that the data-driven evaluation of all aspects of the game highlights that the managers and players are only one piece of a very large system that makes on-field success possible. At this calibre of competition, with so many talented players on the field, an individual game is probably best understood as a competition between two teams of finely-tuned probabilities working through the contingencies of chance brought about by the interactions of those probabilities. This, I think, not only explains the strange cruelties of the game for both players and fans, but it is also a pretty plausible description of the human condition. Once again, even from the cold dispassionate perspective of data, baseball looks like a pretty useful metaphor for life.

If my version of the history of professional baseball is (within the ballpark of being) correct, data, analytics and the people who effectively evaluate them have played a revolutionary role in baseball not because they revealed previously unseen truths. Instead, they are revolutionary because they broadened the scope of the kinds of people involved in baseball’s decision-making processes and, in doing so, changed how those decisions are made. By creating a more sophisticated, systematic and coherent methodology to measure and evaluate the contributions of players, the data nerds created a tool with which to challenge the tactical and strategic decisions of the old guard, which too often relied on appeals to custom, individual intuition, and authority. In this way, the data nerds earned themselves a place at the decision-making table. Crucially, baseball’s analytics revolution reminds us that people are the true catalyst and vehicle for change and innovation. It doesn’t matter if some new tool unearths previously unseen truths. If the people in charge aren’t willing to act on them, for all intents and purposes, the earth will remain at the centre of the universe.

The history of baseball also reminds us that a group of individuals working together to achieve some shared goal is much more likely to achieve their goal if they relinquish some of their decision-making autonomy in order to effectively coordinate their efforts. This is as true for hunters and gatherers working together to collect life-sustaining berries as it is for disciplined armies fighting undisciplined hordes. Communities, armies and sports teams that rely on an “invisible hand” to coordinate the actions of their individual members simply aren’t as effective as those that consciously and effectively coordinate their actions. We shouldn’t have to look to baseball’s history to be reminded of this simple truth. Unfortunately, western culture’s misplaced faith in the hope that individuals doing pretty much as they please will accidentally lead to the best outcome has created a culture in which we too often organize ourselves along feudal lines, ceding absolute authority to individuals over some piece of work or part of a larger project, creating silos of silos within more silos. Yes, some leaders have made terrible decisions on behalf of the group, but that is an indication that we need better approaches to leadership, not less coordination.

Baseball’s analytics revolution also reminds us that the coordination of individuals will be most effective when it takes into consideration the actual contributions made by each individual and that this assessment requires a systematic and coherent methodology to be effective. Quick judgements about a person’s contribution based on a small or irrelevant dataset are not an effective way to manage a team for success. An individual’s contribution to their team needs to be assessed based on a significant amount of meaningful, relevant and consistent data, which often needs to be collected over a significant period of time. Additionally, the tactical and strategic decisions based on those evaluations must also be subject to regular assessment and that assessment must be made in terms of the ultimate aim of the shared endeavour. Effective team management requires time, a historical frame of reference, and a long-term vision of success. In other words, there is much more to the successful coordination of a team than a rousing locker room speech or a clever presentation at an annual off-site meeting.

Baseball’s increased interest in data-driven player development also reminds us that the bedrock of long-term success for any team is an ability to recruit and nurture talent, where talent is largely understood to be a willingness to learn and evolve and a willingness to mentor and train. On the one hand, people who are set in their ways are unlikely to adapt to the culture of their new team; additionally, as the team and the work evolves, they won’t evolve with it. On the other hand, if they aren’t willing to mentor and train others, whatever knowledge and skills they have and develop won’t be shared. Yes, data and analytics, like any new tool, can create a competitive advantage in the short-term, but the bedrock of enduring success is people who are committed to learning and developing, and a culture and leadership team that supports and rewards their development.

The final insight from baseball’s analytics revolution might be more difficult to tease out because it challenges a habit that is so perennial that it is probably difficult to see it as anything but natural and given. I said earlier that a data-driven evaluation of all aspects of baseball’s operations is bringing into focus the idea that the action on the field is an outcome of a very complex process and that the success of that process is the fundamental cause of success on the field. If every aspect of a baseball team’s operations is designed and coordinated to ensure that the best players can play as effectively as possible during a game, that team is much more likely to succeed against the competition. An essential feature of this model is the important distinction between the activities undertaken to prepare and train for execution and the execution itself. Crucially, there is substantially more preparation than execution, and it is the quality and effectiveness of the preparation that determines the effectiveness of the execution. With that observation in mind, I’m willing to bet that in work, life and play, you and your team (however you conceive it) spend most of your time executing, very little time preparing, and a whole lot of time not living up to your full potential either as an individual or as a team. In theory, it is possible to approach execution in such a way that it becomes a kind of preparation and training opportunity, but, in practice, it will never be as effective as regularly setting aside time for dedicated and focused periods of training, planning and preparation. Essentially, whatever it is you do and whomever you do it with, if you aren’t taking time to train, practice, and prepare, you aren’t going to be as effective as you otherwise might be.

Ultimately, professional baseball is, I think, a useful case study with which to better understand the potential of the broader data-driven revolution taking place today because of its unique gameplay, specific history, and the financial incentives which rule its day-to-day operations. Because of these factors, the ecosystem of baseball has embraced data, analytics and the people who effectively evaluate them in a way that lets us more easily see the big picture. Because of baseball, it is easy to see that the data-driven revolution is very real but that its potential can only be fully realized if it is the catalyst for welcoming new people and new forms of decision-making into the fold. There are no silver bullets. There are, however, when we are lucky, new people testing new ideas that sometimes work out and insiders who recognize — by choice or by necessity — the value of the new people and their ideas. Unfortunately, this also means that the very people, communities, and organizations who are most likely to benefit from the data-driven revolution and other forms of innovation — those that are ruled by superstition, custom, and arbitrary acts of authority — are the least likely to embrace the people and ideas most likely to make the most of it. And that, I think, is one more important insight into the human condition brought neatly into focus thanks to baseball.

* If enough people express interest, I can put together a bibliography/reading list. However, any good local library should get you headed in the right direction.

Between the wake of living and the insensibility of death: the experience of now

It’s an old and familiar trope; as a young man, it would enrage me.

Picture it: an old person, who is tired of living, decides that they are ready to die. Then, they close their eyes and die, as if the matter was decided in that moment — probably after some important milestone had passed and some important wisdom had been imparted.

The decision itself to die is not, I think, the key issue. Death as the ultimate sacrifice, in the name of some higher principle or for the benefit of some other person, has always tickled my adolescent fancy. Likewise, for as long as I can remember, I have always thought suicide to be an appropriate response to a cruel and terminal illness, even if it isn’t the choice I would make for myself.

I think the trope enraged me because it eulogized a decision to acquiesce to death’s inevitable and final ushering for no other reason than the old person’s indifference to life. The old person could live longer; they simply choose not to because they don’t much see the point in living any longer. It seemed to me to be the ultimate betrayal of the very idea of life, in all of its stubborn glory. Death is not an undiscovered country; it is an insensibility to be resisted at all costs until the very moment of consumption and consummation.

However, now that I have made it to middle age, I have found that the trope no longer enrages me. The decision to acquiesce to death, however unpalatable such acquiescence  may be to me, even seems to make sense, once the nature of lived experience is rightly understood.

When I was younger, lived experience seemed much more concrete and enduring, even after it had already been lost under the wake of living, because the amount of lived experience I could remember seemed to be much more than the experience I had forgotten. Sure, I couldn’t remember every detail of waking life but, on the whole, it felt like my experiences lived on with me in my memories.

At forty-five, however, the ledger of memories and lived experience is not at all balanced. I have undeniably forgotten much more of my life than I can now remember. I can no longer pretend otherwise: experience is gone forever once it is lived and our very fallible and fleeting memories can’t preserve or resurrect it. In terms of the experience of lived experience, the only difference between living and death is that the now of living is experienced and the now of death is not. The past is as unknowable as the future, whatever the fantasy of memory might otherwise try to tell us.

Now that this insight has taken root, it has become much easier for me to imagine a time when I will be able to look forward into death and look back onto life and not really see that much difference in terms of the experience of lived experience. As a young person, the experience of now was a supernova that illuminated all horizons; today, it is a star bright enough for me to look back with fondness and forward with anticipation, despite the shadows growing all around me; looking out towards 80 or 90 (and, hopefully, 100 or 120), it is very easy to imagine that the experience of now might feel like a pale dim light in a universe of nothing stretching in all directions. If that is the case, persistence for the sake of persistence might not seem to really add or subtract from the final ledger; and acquiescence to an insensible future might not seem so different from an attachment to the insensible past. Maybe, just maybe, I will also be ready to close my eyes and slip away quietly.

But, let me say this now! If some future Sterling starts nattering on about going gently into that good night, he is a rogue and a fraud! Hear me now and believe me later: attach every machine, do all the surgeries, and give me every drug; do whatever it takes to keep my faint ember of consciousness aglow, no matter the suffering I may endure. I expect future Sterling will feel the same; however, because younger Sterling would probably be enraged at my defence of the enraging trope, I shall err on the side of caution: let my will today bind his then. If future Sterling ever loses sight of the faint ember of his experience in the engulfing insensibility of past and future, give him a stiff rum or two and send him to bed. I’m sure he will be fine in the morning. He’s probably just had a bad day. Plus, if he has got to go, he will probably want to go quietly in his own bed, enveloped in a nice light glow.

Losing my religion: the unknowable self and the myth of a well-ordered society

I suspect that you and I don’t really know anything.

Today, thanks to a lot of trial and error, we humans have a pretty good understanding of what we need to do to distinguish between plausible and implausible beliefs. If we run controlled double-blind and repeatable experiments that generate a sufficient amount of data of sufficient quality, we can use statistical methods to confidently identify those beliefs that are false and those that are plausibly true but still in need of further testing. Considered from this perspective, it seems pretty obvious to me that you and I don’t really know anything. Most of our beliefs have not been tested in this way. 

To start, almost all of our beliefs about the universe are taken on faith that the people doing the work of understanding the universe are doing it correctly. To be sure, this is probably a sensible approach for you and me to take. It certainly seems much more efficient to rely on a specialized community of inquirers to undertake this work, but it doesn’t change the fact that you and I don’t really know what the scientific community knows. Their well-tested beliefs are, for us, articles of faith, even if we can expect them to be much more reliable than the articles of faith generated by theologians interpreting ancient texts. And if this is true, it is true whenever we rely on others to formulate and test beliefs on our behalf. Beliefs that we don’t test ourselves are, for us, articles of faith.

With that conclusion in mind, take a few minutes to catalogue all the beliefs that you have and rely on each day that are formulated for you and/or tested by others. If you are honest with yourself, I am pretty sure the list will be quite long. And while it is tempting to believe that we have good reason to rely on others for all of these beliefs, I’m willing to bet that you have not tested that belief either. I, for one, can admit that I have not tested it — and most of my other beliefs. I also feel pretty comfortable guessing that you and I are in the same boat. 

And this, I think, is the crucial consideration. We might be able to shrug off the fact that particle physics is for us a matter of faith, but I suspect it will be much more unsettling to realize that you and I never properly test a whole range of beliefs that fundamentally shape our sense of self, our identity, and our daily experience of living.

Consider: Am I happy or unhappy today? Am I happier or less happy than I was yesterday? Last week? Last year? Am I better off now than I was three years ago? Am I consistently making choices that support my well-being? Did I go to the right university? Was I right not to go to university? Am I in the right career? Are my goals for the future the right goals? Am I with the right partner? Would I have been happier with no children or more children? Am I the person I wanted to become? Who was I? Which of my memories are accurate? How accurate? And so on. For all of these questions and many more, there are objective and measurable answers. I’m also willing to bet that your answers to these kinds of questions are a mix of educated guesses, received wisdom, and Magic 8-Ball proclamations. 

To further complicate matters, it is very likely that some of these questions can't ever be properly answered. We could, for example, carefully track our self-reported experiences of happiness over a long enough period of time to come up with some plausible theories about what makes us happy and then test those theories with more data. However, we probably will never be able to adequately test whether any particular life choice was the right one to make. There are no do-overs in life. As a result, we can't even generate the data that would put us in a position to make a valid assessment. Furthermore, in the face of this certain uncertainty, it seems likely that we can't even reliably assess these choices in the here and now because we don't have the well-tested beliefs upon which to base our expectations about outcomes. So, even if we want to evaluate our life choices before we make them (overlooking the important consideration that many people don't), we don't even have the right data for that evaluation.

One plausible way to sidestep these concerns is to simply stipulate a lower burden of proof for these kinds of beliefs. Perhaps it doesn't really matter if we have properly tested beliefs about our happiness, our favourite foods, or our career path. One might be happy to claim that the good life requires only that we can tell ourselves a convincing story in the here and now that we are happy and well-off, and that the events of our lives brought us here. All's well that we can describe as ending well! And while I suspect that this tactic might actually be the best explanation for our species' reproductive success up to this point (i.e. that we have a curious ability to reimagine suffering as a net benefit), I remain suspicious of the notion that we should lower the burden of proof for these kinds of beliefs. A delusion is a delusion is a delusion, even if we can convince ourselves that we are happy about it.

In the face of this uncertainty, however, I suspect the only appropriate conclusion is to give up on the notion that we can ever definitively know ourselves. We are constantly evolving animals that are bound in the flow of time and, as a result, there are beliefs about ourselves that we can never properly test. We have to rely on hunches, received wisdom and wild guesses because we have no other option. It isn't because we are inherently mystical or otherworldly. It is because we are constrained by our temporal existence. The much larger and more crucial delusion, I think, is the belief that we could know with certainty who we are and what we value. Once we give up on that idea, the notion that we don't know ourselves with God-like certainty seems much less unsettling and becomes just another mundane limitation of human existence.

And while this conclusion might be well and good on the personal level, it creates one teensy-weensy little issue when we turn our attention to society and its organization: the fundamental and essential assumption of a liberal democracy and a market economy is that you and I can know our own well-being and happiness, know it better than anyone else, and reason effectively about it. Thanks to research in neuroscience and behavioural psychology, we now know with some certainty that these assumptions are false. We are poor reasoners in general but especially about what we value. Additionally, many of our beliefs about our own well-being are demonstrably false (e.g. people remember happiness that they did not experience and forget pain that they did). So, if it is true that most of our beliefs are inadequately tested and that we can't even make accurate judgments about what we value or think to be good, democracies and markets are, at best, arbitrarily organizing society and, at worst, guaranteed to do it poorly. Garbage in, garbage out, as the saying goes. And to be clear, this is also true for authoritarian strongmen, councils of nerds, and any other social-political system that depends on anyone looking deep within themselves to figure out who they are, what they value, or what they want to become. The root problem is the practical constraints of inquiry. There is no social architecture that will solve that problem for us.

What then of politics, society, and its organization, if we can’t count on people knowing themselves with any certainty? 

First, I think we need to recognize and accept that our present-day social and political habits, institutions, and systems are largely the consequence of chance (akin to biological evolution), prone to constant change, and persist only as long as we allow them to persist. They are an expression of our need to organize ourselves, they reflect the environment in which they developed, and they emerge like any other natural phenomenon. They can become better or worse (relative to a host of benchmarks), none of them will magically persist over time, and there is no reason to think that solutions from hundreds and even thousands of years ago will work for today's challenges. We need to accept that society's organization is an ever-evolving and accidental by-product of the ongoing effort to solve many different, discrete and often intertwined problems.

Second, I think we need to get out of the habit of appealing to any claims that rely on introspection alone, in the same way that we almost got out of the habit of appealing to claims about the one true God. There are a lot of well-tested and plausible beliefs that we can use to guide our efforts to organize ourselves and direct our problem-solving efforts. The challenge, of course, is that even well-tested beliefs don't all necessarily point to the same conclusion and course of action. In those cases, we must resist the temptation to frame the debate in terms of largely unanswerable questions like "what's best for me," "whose vision of the good life is correct," or "who worships the right God." Instead, we need to look to well-tested beliefs, run good experiments, and always account for all the costs and benefits of whatever approach we settle on in the here and now, recognizing that with new evidence we may need to adapt and change.

Finally, for those of us who think that we should settle our disagreements based on well-tested beliefs rather than dubious claims grounded in introspection, we need to lead by example. I think this will primarily involve asking the right sort of questions when we disagree with others. For example, what well-tested evidence do we have for one conclusion or the other? What kind of evidence do we need to decide the matter? What experiments can we run to get the necessary evidence? We will also need to get in the habit of discounting our own beliefs, especially if they are based on nothing more than introspection or received wisdom. And this might actually be the toughest hurdle to overcome, both personally and practically. It is very natural to become attached to our own bright ideas before they are properly tested. Once attached, it becomes much easier to discount the evidence against them. To further complicate matters, humans also seem to be too easily motivated to action by strongly expressed convictions that align with preconceived notions, whether they are well-tested or not. Asking for evidence before action and expressing doubts about one's own convictions might not resonate with the very people we need to sway. Unfortunately, but not surprisingly, there is no easy, all-purpose way to solve this problem. People who want to motivate others to action will always need to strike a tricky balance between rhetoric and honest communication. We don't need to be puritans about honest communication, but we also shouldn't use the human condition as an excuse to spin outright lies — even in the service of thoroughly tested beliefs.

Descartes is often credited with kicking off modernity when he famously doubted the existence of everything but his own thinking mind. In the very many years since he reached his pithy and highly quotable conclusion, we have learned a lot more about the best methods of inquiry and have developed a well-tested and always evolving understanding of the world. More recently, thanks to those methods of inquiry and their application in neuroscience and behavioural psychology, it is becoming increasingly clear that we can’t know much of anything from introspection alone — including ourselves. There is nothing you, I, or Descartes can know with any certainty by looking inwards for answers. Unfortunately, we continue to rely on habits, institutions, and systems which presuppose that you or I have privileged and certain knowledge about our own well-being, values, and optimal outcomes. This may partly explain — in conjunction with other issues (hello, massive inequality) — why liberal democratic political systems that rely on free markets are in crisis these days.

It was fashionable in the late 20th century to talk as if we had escaped modernism, but postmodernism, I think, only takes Descartes' modernism to its logical conclusion, while willfully overlooking the fact that we humans have become pretty good at understanding the world around us. To set ourselves on a new path, to really escape the gravity well of modernism, we need to set aside the Cartesian notion that the aim of inquiry is absolute certainty and that such certainty can be found through introspection. Instead, we need to accept that we really don't know ourselves, whatever our heartfelt convictions might tell us, and look instead to well-tested beliefs to guide and organize our lives, both individually and collectively.

Who died and made content king? Survival bias, confirmation bias, and a farcical aquatic ceremony.

When I first started using social media, thirty Helens agreed: “content is king!” 

And, at the time, it certainly felt that way. Perfectly crafted tweets seemed to be retweeted again and again; insightful blogs seemed to lead to comment after comment; great articles were always bookmarked. 

I suspect, however, that content looked kingly only because we content creators looked at tiny samples of high-performing content and jumped quickly to conclusions. Survival bias ran rampant, it was primarily content creators' bias that was doing the running, and we really, really wanted to believe that expertly crafted content could compel others to action.

Much later, in the early days of live streaming on Facebook, a video I shot and shared live went "viral". It received something like half-a-million views in twelve hours or so. For a social media nerd like me, let me tell you, there is no greater thrill than hitting refresh every few seconds and seeing the number of views on your post jump by hundreds and, at times, thousands. As slot machine enthusiasts everywhere know, the bells and whistles are almost more important than the jackpot itself.

And, on the face of it, it seemed like the sort of video that should earn a lot of attention. My phone had captured a pretty special moment in a powerful story, even if the video quality was questionable and the audio mediocre. The story — we content enthusiasts had been telling ourselves for years — was much more important than the technical specifications of the medium that carried it. And, this video was a perfect case in point! A live, raw and powerful moment was the stuff of social media glory! I had always known it, but now here was the proof! One more bias was joyfully confirmed.

Then, I watched that short video of a woman laughing in a Chewbacca mask. Do you even remember it? It was the video that blew up in those early days of live streaming on Facebook. Sure, it was vaguely amusing, but was it really that share-worthy? Was it really earning all those views and engagements? Was this really the kingly content that the social media prophecy had foretold?

Then, it occurred to me: Facebook had just launched its live stream functionality and they wanted it to make a splash. My phone had been rattling every two seconds to let me know whenever anyone streamed live for the first time. Moreover, because it was a new service, it had appeared on my phone using the default settings for notifications, which is something like “maximum racket.” In other words, Facebook was making every effort to put as many eyeballs as possible on any content that was shared live.  

Facebook’s effort to boost the visibility of its live stream service should come as no surprise. They wanted people to use the service right away and they wanted those people who used it right away to experience success right away. Easy success would hook users and those who were hooked would talk it up to others. The first hit is always free. 

I am reminded of all of this because of a recent article about TikTok and the author’s naive attempt to explain why some videos on this service have earned big numbers. To be blunt: I wouldn’t be at all surprised if the people running TikTok are specifically manipulating things behind the scenes to generate big media-story-worthy numbers. You are the product, after all; they need you to be active; and, what’s a few inflated numbers between friends?  

However, even if the people running TikTok aren't intentionally manipulating the numbers, there is a much more plausible explanation why some content is getting more attention than other content: dumb chance. When enough content gets in front of enough people, some of that content will earn more attention and, from there, it can snowball. That's it; that's all. There is nothing in the content itself that will definitively explain its success. In the same way that we can't know in advance which genetic adaptations will lead to an organism's reproductive success, we can't know in advance which features of our content will lead to its reproductive success.
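If you want to see how much dumb chance plus a snowball can explain, here is a minimal sketch in Python. To be clear, it is a toy model of a rich-get-richer process, not a claim about how TikTok or any other platform actually works; the number of items, the explore rate, and the weighting rule are assumptions made up purely for illustration. Every piece of content in the simulation is identical, yet a handful of items end up hoarding most of the attention.

```python
import random

# Toy model: identical pieces of content compete for views. Most viewers pick
# content in proportion to the attention it has already received (the snowball);
# a small fraction pick at random (dumb chance). NUM_ITEMS, NUM_VIEWS and
# EXPLORE_RATE are made-up parameters, not measurements of any real platform.

random.seed(42)

NUM_ITEMS = 500        # identical pieces of content
NUM_VIEWS = 50_000     # total views to hand out
EXPLORE_RATE = 0.05    # share of views that land on content purely at random

views = [1] * NUM_ITEMS  # every item starts with one "seed" view

for _ in range(NUM_VIEWS):
    if random.random() < EXPLORE_RATE:
        winner = random.randrange(NUM_ITEMS)                         # dumb chance
    else:
        winner = random.choices(range(NUM_ITEMS), weights=views)[0]  # the snowball
    views[winner] += 1

views.sort(reverse=True)
top_ten_share = sum(views[:10]) / sum(views)
print(f"Top 10 of {NUM_ITEMS} identical items captured {top_ten_share:.0%} of all views")
print(f"Biggest 'hit': {views[0]} views; median item: {views[NUM_ITEMS // 2]} views")
```

Run it with a different seed and a different item "wins", which is roughly the point: nothing in the content itself explains the outcome.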

Circling back to those early days of social media and the quest for the holy content grail, if there was any truth in our collective hope that content is king, I suspect it was this: the experience of kingly content is probably symptomatic of the fact that humans tend to socialize with people much like themselves and become more like the people with whom they socialize as they socialize with them. 

So, at the outset, specific social media channels were attractive to a particular community of users who were already pretty similar in terms of interests, values, and identity. There wasn't a lot of content being created, so any content that was shared was bound to earn whatever attention was out there to earn. Because the people using the tools were already pretty similar, they came up with similar theories to explain the success of some content and those theories became self-reinforcing. As people shared content that fit their theories of success, successful content became more likely to match those theories, simply because there was more content out in the world that aligned with them. For example, if you claim that red aces are always drawn because they are special and you add more red aces to the deck every time one is drawn, your theory is bound to look true whether there is anything special about red aces or not.
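For what it's worth, the red-ace feedback loop is easy enough to simulate. The sketch below (Python again, with a deck size and number of rounds chosen arbitrarily) never makes red aces special; it simply adds another red ace to the deck every time the "theory" is confirmed, and the theory duly looks better and better the longer you hold it.

```python
import random

# The red-ace story as a feedback loop. Red aces are never made special;
# the only trick is that every confirming draw adds another red ace to the
# deck. The deck composition and the number of rounds are arbitrary choices.

random.seed(7)

deck = ["red ace"] * 2 + ["other card"] * 50   # start with 2 red aces and 50 other cards
red_ace_draws = 0

for i in range(1, 501):
    card = random.choice(deck)
    if card == "red ace":
        red_ace_draws += 1
        deck.append("red ace")                 # the self-reinforcing step
    if i % 100 == 0:
        share = deck.count("red ace") / len(deck)
        print(f"after {i} draws: red aces drawn {red_ace_draws} times; "
              f"the deck is now {share:.0%} red aces")
```

The longer the loop runs, the more "evidence" the theory appears to have, even though the deck, not the aces, did all the work.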

Eventually, these theories about what made content shareable, engaging or whatever were internalized as norms, values and aesthetic sensibilities. In this context, content starts to look kingly and almost magical because its attractiveness is rooted in a sense of "we". We are the kind of people who think a tweet will be more engaging if the hashtag is at the end of the copy instead of the beginning, so we see it as such and act accordingly. In other words, the apparent kingliness of content is an expression of a particular community's sense of shared identity. If a particular community of "we" has power and influence, it will shape the tastes of other communities. And so on.

But here, I think, is the nub of the matter: this isn't some kind of social media gaffe or millennial voodoo. It has always been like this for all content everywhere. The success of content is best explained by the communities that behold it, their sense of "we", and their power and status. Shakespeare's plays, for example, seem kingly to us only because an influential group of people took a liking to them at a time when there wasn't much competition for people's attention. When you are the only show in town, it is very easy to set the taste.

If I am right about this (and I'd bet that I am not the first to claim it), I suspect a lot of content lovers and creators will react to my conclusion with nihilistic rage. "If there is nothing in the artefact of creation itself that guarantees success or could guarantee success, what is the point of creating at all? Why create if what is produced is of secondary importance or, dear god, not important at all? Oh woe is us!" However, I want to make the case that this frown can be turned upside down.

On the one hand, if your aim is to create content and be recognized as a content creator, the path forward is pretty simple: do your best to ingratiate yourself to whatever community is the tastemaker community for the kind of content you want to create. Meet, greet and emulate. Play the game well enough and long enough, and you will probably get a shot at shifting the community’s taste. No magic or special natural gifts required. You don’t need to be the anointed one. Being pleasant and patient should do the trick.

Alternatively, if you enjoy creating content for its own sake and have no particular desire or need to be recognized as a content creator by the relevant tastemaker community, you are free to create in accord with whichever standard(s) you want. Who cares what the tastemakers think? They no longer control the means of creative production or distribution. Go forth and create! Celebrate the fact that you have enough time and the means to create, even if no one is looking. On the other hand, if it turns out that you don’t want to suck up to tastemakers to earn a living as a content creator and have better things to do with your time than create for the fun of it, so be it. The choice is yours and, to be frank (you be Jane), having that choice is pretty lucky too.

I can think of only two groups of people who will be in a jam: those who desperately want to be recognized as content creators but don't want to suck up to the relevant tastemaker community, and those who are ignored by that community even when they do suck up. For them, only Nietzschean frustration awaits.

If you are among this lot, I can offer only this advice: storm the tastemaking gates until you are accepted, ingratiate yourself to a marginalized or underserved community and hope their day is yet to come, or ride the early-adopter wave of some new technology like the printing press or social media. However, whichever path you take, please remember: if you end up holding something that feels like a sword of divine right, the underlying mechanism that provided it to you remains the same, whether you were finally picked by the cool kids or the uncool kids somehow suddenly turned cool. The sword doesn't make you or your content king; nor does the farcical aquatic ceremony that put it in your hand. Instead, it is the community who thinks of "you" as "we".

My answer to the ultimate question of life, the universe, and everything: four four through it all

If the mystery of the human condition can be characterized as a kind of puzzle or riddle, the answer and/or punchline can be aphorized, I think, through four banal facts and four mollifying delusions.

I can’t say that anyone will necessarily gain anything by knowing and understanding these facts; nor can I say that they will gain anything by ridding themselves of the delusions.

If anything, I am pretty sure the delusions persist precisely because they are useful to most people most of the time. Whether they become more or less useful will be settled by evolution eventually — and not by you or me.

Four banal facts:

  1. Almost all human behaviour is perfectly predictable. Some human behaviour may be random or the result of chance.
  2. Human behaviour and all the products of human behaviour are expressions of the human disposition to allocate resources according to status.
  3. Human society and its organization requires the exercise of power. The risk of abuse is omnipresent. Some will guard against it; some won’t.
  4. We die and will be forgotten.

Four mollifying delusions:

  1. Humans have free will and are masters of their own destiny.
  2. The truth will set you free.
  3. Democracy is the worst form of government, except for all the others.
  4. Immortality is possible.

That’s it; that’s all. If you like my solution or enjoy talking about the puzzle, let’s start a club. You bring the (alcoholic) punch. I will bring the (vegan) pie.